--- home/zuul/zuul-output/logs/controller/post_oc_get_builds.log ---
*** [INFO] Showing oc get 'builds'
Error from server (InternalError): an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/namespaces/service-telemetry/builds?limit=500\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
[INFO] oc get 'builds' -oyaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
Error from server (InternalError): an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/namespaces/service-telemetry/builds?limit=500\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)

--- home/zuul/zuul-output/logs/controller/post_oc_get_subscriptions.log ---
*** [INFO] Showing oc get 'subscriptions'
No resources found in service-telemetry namespace.
[INFO] oc get 'subscriptions' -oyaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""

--- home/zuul/zuul-output/logs/controller/post_oc_get_images.log ---
*** [INFO] Showing oc get 'images'
Error from server (InternalError): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/images?limit=500\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get images.image.openshift.io)
[INFO] oc get 'images' -oyaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
Error from server (InternalError): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/images?limit=500\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get images.image.openshift.io)

--- home/zuul/zuul-output/logs/controller/post_oc_get_imagestream.log ---
*** [INFO] Showing oc get 'imagestream'
No resources found in service-telemetry namespace.
[INFO] oc get 'imagestream' -oyaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""

--- home/zuul/zuul-output/logs/controller/post_oc_get_pods.log ---
*** [INFO] Showing oc get 'pods'
No resources found in service-telemetry namespace.
[INFO] oc get 'pods' -oyaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""

--- home/zuul/zuul-output/logs/controller/post_oc_describe_subscriptions_STO.log ---
Error from server (NotFound): subscriptions.operators.coreos.com "service-telemetry-operator" not found

--- home/zuul/zuul-output/logs/controller/describe_sto.log ---
No resources found in service-telemetry namespace.

--- home/zuul/zuul-output/logs/controller/post_question_deployment.log ---
What images were created in the internal registry?
What state is the STO csv in?
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""

--- home/zuul/zuul-output/logs/controller/post_pv.log ---
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                 STORAGECLASS                   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97   30Gi       RWX            Retain           Bound    openshift-image-registry/crc-image-registry-storage   crc-csi-hostpath-provisioner                                   533d

--- home/zuul/zuul-output/logs/controller/post_pvc.log ---
No resources found in service-telemetry namespace.

--- home/zuul/zuul-output/logs/controller/logs_sto.log ---
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'oc logs -h' for help and examples

--- home/zuul/zuul-output/logs/controller/logs_sgo.log ---
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'oc logs -h' for help and examples

--- home/zuul/zuul-output/logs/controller/logs_qdr.log ---
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'oc logs -h' for help and examples

--- home/zuul/zuul-output/logs/controller/ansible.log ---
2025-12-13 00:17:56,890 p=31003 u=zuul n=ansible | Starting galaxy collection install process
2025-12-13 00:17:56,891 p=31003 u=zuul n=ansible | Process install dependency map
2025-12-13 00:18:20,697 p=31003 u=zuul n=ansible | Starting collection install process
2025-12-13 00:18:20,698 p=31003 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+b9f05e2b' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general'
2025-12-13 00:18:21,183 p=31003 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+b9f05e2b at /home/zuul/.ansible/collections/ansible_collections/cifmw/general
2025-12-13 00:18:21,183 p=31003 u=zuul n=ansible | cifmw.general:1.0.0+b9f05e2b was installed successfully
2025-12-13 00:18:21,183 p=31003 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman'
2025-12-13 00:18:21,240 p=31003 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman
2025-12-13 00:18:21,240 p=31003 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully
2025-12-13 00:18:21,240 p=31003 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general'
2025-12-13 00:18:21,969 p=31003 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general
2025-12-13 00:18:21,969 p=31003 u=zuul n=ansible | community.general:10.0.1 was installed successfully
2025-12-13 00:18:21,969 p=31003 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix'
2025-12-13 00:18:22,020 p=31003 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix
2025-12-13 00:18:22,020 p=31003 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully
2025-12-13 00:18:22,020 p=31003 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils'
2025-12-13 00:18:22,116 p=31003 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils
2025-12-13 00:18:22,117 p=31003 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully
2025-12-13 00:18:22,117 p=31003 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt'
2025-12-13 00:18:22,141 p=31003 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt
2025-12-13 00:18:22,141 p=31003 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully
2025-12-13 00:18:22,141 p=31003 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto'
2025-12-13 00:18:22,284 p=31003 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto
2025-12-13 00:18:22,284 p=31003 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully
2025-12-13 00:18:22,284 p=31003 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core'
2025-12-13 00:18:22,401 p=31003 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core
2025-12-13 00:18:22,401 p=31003 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully
2025-12-13 00:18:22,401 p=31003 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon'
2025-12-13 00:18:22,472 p=31003 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon
2025-12-13 00:18:22,472 p=31003 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully
2025-12-13 00:18:22,472 p=31003 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template'
2025-12-13 00:18:22,492 p=31003 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template
2025-12-13 00:18:22,492 p=31003 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully
2025-12-13 00:18:22,492 p=31003 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos'
2025-12-13 00:18:22,717 p=31003 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos
2025-12-13 00:18:22,718 p=31003 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully
2025-12-13 00:18:22,718 p=31003 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios'
2025-12-13 00:18:22,959 p=31003 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios
2025-12-13 00:18:22,959 p=31003 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully
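The ansible.log records above share a fixed prefix (`timestamp p=PID u=USER n=ansible | message`). As an analysis sketch (not part of the job itself; the helper names are my own), a small Python parser that extracts the collections reported as installed from lines in that format:

```python
import re

# Matches an ansible.log record, e.g.:
#   2025-12-13 00:18:22,959 p=31003 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully
RECORD = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) "
    r"p=(?P<pid>\d+) u=(?P<user>\S+) n=ansible \| (?P<msg>.*)$"
)
# Matches the ansible-galaxy success message inside a record.
INSTALLED = re.compile(r"^(?P<collection>\S+):(?P<version>\S+) was installed successfully$")

def installed_collections(lines):
    """Return (collection, version) pairs reported as installed."""
    out = []
    for line in lines:
        rec = RECORD.match(line)
        if not rec:
            continue
        hit = INSTALLED.match(rec.group("msg"))
        if hit:
            out.append((hit.group("collection"), hit.group("version")))
    return out

sample = [
    "2025-12-13 00:18:22,959 p=31003 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully",
    "2025-12-13 00:18:22,959 p=31003 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/tmp/x'",
]
print(installed_collections(sample))  # [('cisco.ios', '9.0.3')]
```

Run over the full ansible.log, this gives a quick inventory of the collection versions the job actually used.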
2025-12-13 00:18:22,959 p=31003 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx'
2025-12-13 00:18:22,990 p=31003 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx
2025-12-13 00:18:22,990 p=31003 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully
2025-12-13 00:18:22,991 p=31003 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd'
2025-12-13 00:18:23,018 p=31003 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd
2025-12-13 00:18:23,018 p=31003 u=zuul n=ansible | community.okd:4.0.0 was installed successfully
2025-12-13 00:18:23,018 p=31003 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@'
2025-12-13 00:18:23,105 p=31003 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@
2025-12-13 00:18:23,105 p=31003 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully
2025-12-13 00:18:31,182 p=31605 u=zuul n=ansible | PLAY [Bootstrap playbook] ******************************************************
2025-12-13 00:18:31,200 p=31605 u=zuul n=ansible | TASK [Gathering Facts ] ********************************************************
2025-12-13 00:18:31,200 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:31 +0000 (0:00:00.036) 0:00:00.036 *****
2025-12-13 00:18:31,200 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:31 +0000 (0:00:00.034) 0:00:00.034 *****
2025-12-13 00:18:32,216 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:32,238 p=31605 u=zuul n=ansible | TASK [Set custom cifmw PATH reusable fact cifmw_path={{ ansible_user_dir }}/.crc/bin:{{ ansible_user_dir }}/.crc/bin/oc:{{ ansible_user_dir }}/bin:{{ ansible_env.PATH }}, cacheable=True] ***
2025-12-13 00:18:32,239 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:01.038) 0:00:01.074 *****
2025-12-13 00:18:32,239 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:01.038) 0:00:01.073 *****
2025-12-13 00:18:32,270 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:32,276 p=31605 u=zuul n=ansible | TASK [Get customized parameters ci_framework_params={{ hostvars[inventory_hostname] | dict2items | selectattr("key", "match", "^(cifmw|pre|post)_(?!install_yamls|openshift_token|openshift_login|openshift_kubeconfig).*") | list | items2dict }}] ***
2025-12-13 00:18:32,277 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.038) 0:00:01.112 *****
2025-12-13 00:18:32,277 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.038) 0:00:01.111 *****
2025-12-13 00:18:32,327 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:32,335 p=31605 u=zuul n=ansible | TASK [install_ca : Ensure target directory exists path={{ cifmw_install_ca_trust_dir }}, state=directory, mode=0755] ***
2025-12-13 00:18:32,335 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.058) 0:00:01.170 *****
2025-12-13 00:18:32,335 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.058) 0:00:01.169 *****
2025-12-13 00:18:32,680 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:32,687 p=31605 u=zuul n=ansible | TASK [install_ca : Install internal CA from url url={{ cifmw_install_ca_url }}, dest={{ cifmw_install_ca_trust_dir }}, validate_certs={{ cifmw_install_ca_url_validate_certs | default(omit) }}, mode=0644] ***
2025-12-13 00:18:32,687 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.351) 0:00:01.522 *****
2025-12-13 00:18:32,687 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.351) 0:00:01.521 *****
2025-12-13 00:18:32,711 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:32,721 p=31605 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from inline dest={{ cifmw_install_ca_trust_dir }}/cifmw_inline_ca_bundle.crt, content={{ cifmw_install_ca_bundle_inline }}, mode=0644] ***
2025-12-13 00:18:32,721 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.033) 0:00:01.556 *****
2025-12-13 00:18:32,721 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.034) 0:00:01.555 *****
2025-12-13 00:18:32,747 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:32,757 p=31605 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from file dest={{ cifmw_install_ca_trust_dir }}/{{ cifmw_install_ca_bundle_src | basename }}, src={{ cifmw_install_ca_bundle_src }}, mode=0644] ***
2025-12-13 00:18:32,757 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.036) 0:00:01.593 *****
2025-12-13 00:18:32,757 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.036) 0:00:01.591 *****
2025-12-13 00:18:32,782 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:32,791 p=31605 u=zuul n=ansible | TASK [install_ca : Update ca bundle _raw_params=update-ca-trust] ***************
2025-12-13 00:18:32,792 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.034) 0:00:01.627 *****
2025-12-13 00:18:32,792 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.034) 0:00:01.626 *****
2025-12-13 00:18:34,310 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:34,321 p=31605 u=zuul n=ansible | TASK [repo_setup : Ensure directories are present path={{ cifmw_repo_setup_basedir }}/{{ item }}, state=directory, mode=0755] ***
2025-12-13 00:18:34,321 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:34 +0000 (0:00:01.529) 0:00:03.157 *****
2025-12-13 00:18:34,322 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:34 +0000 (0:00:01.529) 0:00:03.155 *****
2025-12-13 00:18:34,511 p=31605 u=zuul n=ansible | changed: [localhost] => (item=tmp)
2025-12-13 00:18:34,690 p=31605 u=zuul n=ansible | changed: [localhost] => (item=artifacts/repositories)
2025-12-13 00:18:34,853 p=31605 u=zuul n=ansible | changed: [localhost] => (item=venv/repo_setup)
2025-12-13 00:18:34,864 p=31605 u=zuul n=ansible | TASK [repo_setup : Make sure git-core package is installed name=git-core, state=present] ***
2025-12-13 00:18:34,864 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:34 +0000 (0:00:00.542) 0:00:03.699 *****
2025-12-13 00:18:34,864 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:34 +0000 (0:00:00.542) 0:00:03.698 *****
2025-12-13 00:18:35,899 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:35,907 p=31605 u=zuul n=ansible | TASK [repo_setup : Get repo-setup repository accept_hostkey=True, dest={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, repo={{ cifmw_repo_setup_src }}] ***
2025-12-13 00:18:35,907 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:35 +0000 (0:00:01.043) 0:00:04.743 *****
2025-12-13 00:18:35,907 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:35 +0000 (0:00:01.043) 0:00:04.741 *****
2025-12-13 00:18:36,860 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:36,870 p=31605 u=zuul n=ansible | TASK [repo_setup : Initialize python venv and install requirements virtualenv={{ cifmw_repo_setup_venv }}, requirements={{ cifmw_repo_setup_basedir }}/tmp/repo-setup/requirements.txt, virtualenv_command=python3 -m venv --system-site-packages --upgrade-deps] ***
2025-12-13 00:18:36,870 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:36 +0000 (0:00:00.962) 0:00:05.705 *****
2025-12-13 00:18:36,870 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:36 +0000 (0:00:00.962) 0:00:05.704 *****
2025-12-13 00:18:46,000 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:46,006 p=31605 u=zuul n=ansible | TASK [repo_setup : Install repo-setup package chdir={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, creates={{ cifmw_repo_setup_venv }}/bin/repo-setup, _raw_params={{ cifmw_repo_setup_venv }}/bin/python setup.py install] ***
2025-12-13 00:18:46,006 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:09.136) 0:00:14.842 *****
2025-12-13 00:18:46,006 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:09.136) 0:00:14.840 *****
2025-12-13 00:18:46,824 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:46,832 p=31605 u=zuul n=ansible | TASK [repo_setup : Set cifmw_repo_setup_dlrn_hash_tag from content provider cifmw_repo_setup_dlrn_hash_tag={{ content_provider_dlrn_md5_hash }}] ***
2025-12-13 00:18:46,832 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:00.825) 0:00:15.667 *****
2025-12-13 00:18:46,832 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:00.825) 0:00:15.666 *****
2025-12-13 00:18:46,850 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:46,856 p=31605 u=zuul n=ansible | TASK [repo_setup : Run repo-setup _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup {{ cifmw_repo_setup_promotion }} {{ cifmw_repo_setup_additional_repos }} -d {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} -b {{ cifmw_repo_setup_branch }} --rdo-mirror {{ cifmw_repo_setup_rdo_mirror }} {% if cifmw_repo_setup_dlrn_hash_tag | length > 0 %} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif %} -o {{ cifmw_repo_setup_output }}] ***
2025-12-13 00:18:46,856 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:00.024) 0:00:15.691 *****
2025-12-13 00:18:46,856 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:00.023) 0:00:15.690 *****
2025-12-13 00:18:47,518 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:47,533 p=31605 u=zuul n=ansible | TASK [repo_setup : Get component repo url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/component/{{ cifmw_repo_setup_component_name }}/{{ cifmw_repo_setup_component_promotion_tag }}/delorean.repo, dest={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, mode=0644] ***
2025-12-13 00:18:47,534 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.677) 0:00:16.369 *****
2025-12-13 00:18:47,534 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.677) 0:00:16.368 *****
2025-12-13 00:18:47,571 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:47,583 p=31605 u=zuul n=ansible | TASK [repo_setup : Rename component repo path={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, regexp=delorean-component-{{ cifmw_repo_setup_component_name }}, replace={{ cifmw_repo_setup_component_name }}-{{ cifmw_repo_setup_component_promotion_tag }}] ***
2025-12-13 00:18:47,584 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.049) 0:00:16.419 *****
2025-12-13 00:18:47,584 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.049) 0:00:16.418 *****
2025-12-13 00:18:47,615 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:47,624 p=31605 u=zuul n=ansible | TASK [repo_setup : Disable component repo in current-podified dlrn repo path={{ cifmw_repo_setup_output }}/delorean.repo, section=delorean-component-{{ cifmw_repo_setup_component_name }}, option=enabled, value=0, mode=0644] ***
2025-12-13 00:18:47,624 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.040) 0:00:16.460 *****
2025-12-13 00:18:47,624 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.040) 0:00:16.458 *****
2025-12-13 00:18:47,662 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:47,671 p=31605 u=zuul n=ansible | TASK [repo_setup : Run repo-setup-get-hash _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup-get-hash --dlrn-url {{ cifmw_repo_setup_dlrn_uri[:-1] }} --os-version {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} --release {{ cifmw_repo_setup_branch }} {% if cifmw_repo_setup_component_name | length > 0 -%} --component {{ cifmw_repo_setup_component_name }} --tag {{ cifmw_repo_setup_component_promotion_tag }} {% else -%} --tag {{cifmw_repo_setup_promotion }} {% endif -%} {% if (cifmw_repo_setup_dlrn_hash_tag | length > 0) and (cifmw_repo_setup_component_name | length <= 0) -%} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif -%} --json] ***
2025-12-13 00:18:47,671 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.046) 0:00:16.506 *****
2025-12-13 00:18:47,671 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.046) 0:00:16.505 *****
2025-12-13 00:18:48,143 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:48,158 p=31605 u=zuul n=ansible | TASK [repo_setup : Dump full hash in delorean.repo.md5 file content={{ _repo_setup_json['full_hash'] }} , dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] ***
2025-12-13 00:18:48,158 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.487) 0:00:16.993 *****
2025-12-13 00:18:48,158 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.487) 0:00:16.992 *****
2025-12-13 00:18:48,917 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:48,924 p=31605 u=zuul n=ansible | TASK [repo_setup : Dump current-podified hash url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/current-podified/delorean.repo.md5, dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] ***
2025-12-13 00:18:48,924 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.766) 0:00:17.759 *****
2025-12-13 00:18:48,924 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.765) 0:00:17.758 *****
2025-12-13 00:18:48,939 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:48,946 p=31605 u=zuul n=ansible | TASK [repo_setup : Slurp current podified hash src={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5] ***
2025-12-13 00:18:48,946 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.021) 0:00:17.781 *****
2025-12-13 00:18:48,946 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.021) 0:00:17.780 *****
2025-12-13 00:18:48,962 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:48,970 p=31605 u=zuul n=ansible | TASK [repo_setup : Update the value of full_hash _repo_setup_json={{ _repo_setup_json | combine({'full_hash': _hash}, recursive=true) }}] ***
2025-12-13 00:18:48,970 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.024) 0:00:17.805 *****
2025-12-13 00:18:48,970 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.024) 0:00:17.804 *****
2025-12-13 00:18:48,990 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:48,999 p=31605 u=zuul n=ansible | TASK [repo_setup : Export hashes facts for further use cifmw_repo_setup_full_hash={{ _repo_setup_json['full_hash'] }}, cifmw_repo_setup_commit_hash={{ _repo_setup_json['commit_hash'] }}, cifmw_repo_setup_distro_hash={{ _repo_setup_json['distro_hash'] }}, cifmw_repo_setup_extended_hash={{ _repo_setup_json['extended_hash'] }}, cifmw_repo_setup_dlrn_api_url={{ _repo_setup_json['dlrn_api_url'] }}, cifmw_repo_setup_dlrn_url={{ _repo_setup_json['dlrn_url'] }}, cifmw_repo_setup_release={{ _repo_setup_json['release'] }}, cacheable=True] ***
2025-12-13 00:18:48,999 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.028) 0:00:17.834 *****
2025-12-13 00:18:48,999 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.028) 0:00:17.833 *****
2025-12-13 00:18:49,024 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:49,030 p=31605 u=zuul n=ansible | TASK [repo_setup : Create download directory path={{ cifmw_repo_setup_rhos_release_path }}, state=directory, mode=0755] ***
2025-12-13 00:18:49,030 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.030) 0:00:17.865 *****
2025-12-13 00:18:49,030 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.030) 0:00:17.864 *****
2025-12-13 00:18:49,053 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,061 p=31605 u=zuul n=ansible | TASK [repo_setup : Print the URL to request msg={{ cifmw_repo_setup_rhos_release_rpm }}] ***
2025-12-13 00:18:49,061 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.031) 0:00:17.897 *****
2025-12-13 00:18:49,062 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.031) 0:00:17.895 *****
2025-12-13 00:18:49,077 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,084 p=31605 u=zuul n=ansible | TASK [Download the RPM name=krb_request] ***************************************
2025-12-13 00:18:49,084 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.022) 0:00:17.919 *****
2025-12-13 00:18:49,084 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.022) 0:00:17.918 *****
2025-12-13 00:18:49,097 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,103 p=31605 u=zuul n=ansible | TASK [repo_setup : Install RHOS Release tool name={{ cifmw_repo_setup_rhos_release_rpm if cifmw_repo_setup_rhos_release_rpm is not url else cifmw_krb_request_out.path }}, state=present, disable_gpg_check={{ cifmw_repo_setup_rhos_release_gpg_check | bool }}] ***
2025-12-13 00:18:49,103 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.019) 0:00:17.939 *****
2025-12-13 00:18:49,103 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.019) 0:00:17.937 *****
2025-12-13 00:18:49,119 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,126 p=31605 u=zuul n=ansible | TASK [repo_setup : Get rhos-release tool version _raw_params=rhos-release --version] ***
2025-12-13 00:18:49,126 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.023) 0:00:17.962 *****
2025-12-13 00:18:49,126 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.023) 0:00:17.960 *****
2025-12-13 00:18:49,141 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,148 p=31605 u=zuul n=ansible | TASK [repo_setup : Print rhos-release tool version msg={{ rr_version.stdout }}] ***
2025-12-13 00:18:49,148 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.021) 0:00:17.983 *****
2025-12-13 00:18:49,148 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.021) 0:00:17.982 *****
2025-12-13 00:18:49,161 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,170 p=31605 u=zuul n=ansible | TASK [repo_setup : Generate repos using rhos-release {{ cifmw_repo_setup_rhos_release_args }} _raw_params=rhos-release {{ cifmw_repo_setup_rhos_release_args }} \ -t {{ cifmw_repo_setup_output }}] ***
2025-12-13 00:18:49,170 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.022) 0:00:18.006 *****
2025-12-13 00:18:49,170 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.022) 0:00:18.004 *****
2025-12-13 00:18:49,188 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,194 p=31605 u=zuul n=ansible | TASK [repo_setup : Check for /etc/ci/mirror_info.sh path=/etc/ci/mirror_info.sh] ***
2025-12-13 00:18:49,194 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.023) 0:00:18.029 *****
2025-12-13 00:18:49,194 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.023) 0:00:18.028 *****
2025-12-13 00:18:49,398 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:49,406 p=31605 u=zuul n=ansible | TASK [repo_setup : Use RDO proxy mirrors chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|https://trunk.rdoproject.org|$NODEPOOL_RDO_PROXY|g" *.repo ] ***
2025-12-13 00:18:49,407 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.212) 0:00:18.242 *****
2025-12-13 00:18:49,407 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.212) 0:00:18.241 *****
2025-12-13 00:18:49,618 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:49,627 p=31605 u=zuul n=ansible | TASK [repo_setup : Use RDO CentOS mirrors (remove CentOS 10 conditional when Nodepool mirrors exist) chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|http://mirror.stream.centos.org|$NODEPOOL_CENTOS_MIRROR|g" *.repo ] ***
2025-12-13 00:18:49,627 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.220) 0:00:18.462 *****
2025-12-13 00:18:49,627 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.220) 0:00:18.461 *****
2025-12-13 00:18:49,828 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:49,837 p=31605 u=zuul n=ansible | TASK [repo_setup : Check for gating.repo file on content provider url=http://{{ content_provider_registry_ip }}:8766/gating.repo] ***
2025-12-13 00:18:49,837 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.210) 0:00:18.672 *****
2025-12-13 00:18:49,837 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.210) 0:00:18.671 *****
2025-12-13 00:18:49,855 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,862 p=31605 u=zuul n=ansible | TASK [repo_setup : Populate gating repo from content provider ip content=[gating-repo] baseurl=http://{{ content_provider_registry_ip }}:8766/ enabled=1 gpgcheck=0 priority=1 , dest={{ cifmw_repo_setup_output }}/gating.repo, mode=0644] ***
2025-12-13 00:18:49,862 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.025) 0:00:18.697 *****
2025-12-13 00:18:49,862 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.025) 0:00:18.696 *****
2025-12-13 00:18:49,890 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,898 p=31605 u=zuul n=ansible | TASK [repo_setup : Check for DLRN repo at the destination path={{ cifmw_repo_setup_output }}/delorean.repo] ***
2025-12-13 00:18:49,898 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.035) 0:00:18.733 *****
2025-12-13 00:18:49,898 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.035) 0:00:18.732 *****
2025-12-13 00:18:49,921 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,927 p=31605 u=zuul n=ansible | TASK [repo_setup : Lower the priority of DLRN repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}/delorean.repo, regexp=priority=1, replace=priority=20] ***
2025-12-13 00:18:49,927 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.029) 0:00:18.763 *****
2025-12-13 00:18:49,927 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.029) 0:00:18.761 *****
2025-12-13 00:18:49,964 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,974 p=31605 u=zuul n=ansible | TASK [repo_setup : Check for DLRN component repo path={{ cifmw_repo_setup_output }}/{{ _comp_repo }}] ***
2025-12-13 00:18:49,974 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.046) 0:00:18.809 *****
2025-12-13 00:18:49,974 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.046) 0:00:18.808 *****
2025-12-13 00:18:49,999 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:50,008 p=31605 u=zuul n=ansible | TASK [repo_setup : Lower the priority of componennt repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}//{{ _comp_repo }}, regexp=priority=1, replace=priority=2] ***
2025-12-13 00:18:50,008 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.034) 0:00:18.844 *****
2025-12-13 00:18:50,009 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.034) 0:00:18.842 *****
2025-12-13 00:18:50,031 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:50,040 p=31605 u=zuul n=ansible | TASK [repo_setup : Find existing repos from /etc/yum.repos.d directory paths=/etc/yum.repos.d/, patterns=*.repo, recurse=False] ***
2025-12-13 00:18:50,040 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.031) 0:00:18.875 *****
2025-12-13 00:18:50,040 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.031) 0:00:18.874 *****
2025-12-13 00:18:50,338 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:50,345 p=31605 u=zuul n=ansible | TASK [repo_setup : Remove existing repos from /etc/yum.repos.d directory path={{ item }}, state=absent] ***
2025-12-13 00:18:50,345 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.305) 0:00:19.181 *****
2025-12-13 00:18:50,345 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.305) 0:00:19.179 *****
2025-12-13 00:18:50,592 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos-addons.repo)
2025-12-13 00:18:50,770 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos.repo)
2025-12-13 00:18:50,781
p=31605 u=zuul n=ansible | TASK [repo_setup : Cleanup existing metadata _raw_params=dnf clean metadata] *** 2025-12-13 00:18:50,781 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.436) 0:00:19.617 ***** 2025-12-13 00:18:50,782 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.436) 0:00:19.616 ***** 2025-12-13 00:18:51,238 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:18:51,246 p=31605 u=zuul n=ansible | TASK [repo_setup : Copy generated repos to /etc/yum.repos.d directory mode=0755, remote_src=True, src={{ cifmw_repo_setup_output }}/, dest=/etc/yum.repos.d] *** 2025-12-13 00:18:51,246 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.464) 0:00:20.081 ***** 2025-12-13 00:18:51,246 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.464) 0:00:20.080 ***** 2025-12-13 00:18:51,545 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:18:51,558 p=31605 u=zuul n=ansible | TASK [ci_setup : Gather variables for each operating system _raw_params={{ item }}] *** 2025-12-13 00:18:51,558 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.312) 0:00:20.394 ***** 2025-12-13 00:18:51,558 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.312) 0:00:20.392 ***** 2025-12-13 00:18:51,606 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/ci_setup/vars/redhat.yml) 2025-12-13 00:18:51,616 p=31605 u=zuul n=ansible | TASK [ci_setup : List packages to install var=cifmw_ci_setup_packages] ********* 2025-12-13 00:18:51,616 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.057) 0:00:20.451 ***** 2025-12-13 00:18:51,616 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.057) 0:00:20.450 ***** 2025-12-13 00:18:51,641 p=31605 u=zuul n=ansible | ok: [localhost] => cifmw_ci_setup_packages: - 
bash-completion - ca-certificates - git-core - make - tar - tmux - python3-pip 2025-12-13 00:18:51,650 p=31605 u=zuul n=ansible | TASK [ci_setup : Install needed packages name={{ cifmw_ci_setup_packages }}, state=latest] *** 2025-12-13 00:18:51,651 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.034) 0:00:20.486 ***** 2025-12-13 00:18:51,651 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.034) 0:00:20.485 ***** 2025-12-13 00:19:20,598 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:20,605 p=31605 u=zuul n=ansible | TASK [ci_setup : Gather version of openshift client _raw_params=oc version --client -o yaml] *** 2025-12-13 00:19:20,605 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:20 +0000 (0:00:28.954) 0:00:49.441 ***** 2025-12-13 00:19:20,605 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:20 +0000 (0:00:28.954) 0:00:49.439 ***** 2025-12-13 00:19:20,826 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:20,837 p=31605 u=zuul n=ansible | TASK [ci_setup : Ensure openshift client install path is present path={{ cifmw_ci_setup_oc_install_path }}, state=directory, mode=0755] *** 2025-12-13 00:19:20,837 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:20 +0000 (0:00:00.231) 0:00:49.672 ***** 2025-12-13 00:19:20,837 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:20 +0000 (0:00:00.231) 0:00:49.671 ***** 2025-12-13 00:19:21,067 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:21,076 p=31605 u=zuul n=ansible | TASK [ci_setup : Install openshift client src={{ cifmw_ci_setup_openshift_client_download_uri }}/{{ cifmw_ci_setup_openshift_client_version }}/openshift-client-linux.tar.gz, dest={{ cifmw_ci_setup_oc_install_path }}, remote_src=True, mode=0755, creates={{ cifmw_ci_setup_oc_install_path }}/oc] *** 2025-12-13 00:19:21,076 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:21 +0000 (0:00:00.239) 0:00:49.911 ***** 
2025-12-13 00:19:21,076 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:21 +0000 (0:00:00.239) 0:00:49.910 ***** 2025-12-13 00:19:26,302 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:26,308 p=31605 u=zuul n=ansible | TASK [ci_setup : Add the OC path to cifmw_path if needed cifmw_path={{ cifmw_ci_setup_oc_install_path }}:{{ ansible_env.PATH }}, cacheable=True] *** 2025-12-13 00:19:26,308 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:05.232) 0:00:55.143 ***** 2025-12-13 00:19:26,308 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:05.232) 0:00:55.142 ***** 2025-12-13 00:19:26,333 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:26,340 p=31605 u=zuul n=ansible | TASK [ci_setup : Create completion file] *************************************** 2025-12-13 00:19:26,340 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:00.031) 0:00:55.175 ***** 2025-12-13 00:19:26,340 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:00.031) 0:00:55.174 ***** 2025-12-13 00:19:26,686 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:26,695 p=31605 u=zuul n=ansible | TASK [ci_setup : Source completion from within .bashrc create=True, mode=0644, path={{ ansible_user_dir }}/.bashrc, block=if [ -f ~/.oc_completion ]; then source ~/.oc_completion fi] *** 2025-12-13 00:19:26,695 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:00.355) 0:00:55.531 ***** 2025-12-13 00:19:26,695 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:00.355) 0:00:55.529 ***** 2025-12-13 00:19:27,048 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:27,056 p=31605 u=zuul n=ansible | TASK [ci_setup : Check rhsm status _raw_params=subscription-manager status] **** 2025-12-13 00:19:27,056 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.360) 0:00:55.891 ***** 2025-12-13 
00:19:27,056 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.360) 0:00:55.890 ***** 2025-12-13 00:19:27,069 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:27,077 p=31605 u=zuul n=ansible | TASK [ci_setup : Gather the repos to be enabled _repos={{ cifmw_ci_setup_rhel_rhsm_default_repos + (cifmw_ci_setup_rhel_rhsm_extra_repos | default([])) }}] *** 2025-12-13 00:19:27,077 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.021) 0:00:55.912 ***** 2025-12-13 00:19:27,077 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.021) 0:00:55.911 ***** 2025-12-13 00:19:27,091 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:27,098 p=31605 u=zuul n=ansible | TASK [ci_setup : Enabling the required repositories. name={{ item }}, state={{ rhsm_repo_state | default('enabled') }}] *** 2025-12-13 00:19:27,098 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.020) 0:00:55.933 ***** 2025-12-13 00:19:27,098 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.020) 0:00:55.932 ***** 2025-12-13 00:19:27,111 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:27,117 p=31605 u=zuul n=ansible | TASK [ci_setup : Get current /etc/redhat-release _raw_params=cat /etc/redhat-release] *** 2025-12-13 00:19:27,118 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.019) 0:00:55.953 ***** 2025-12-13 00:19:27,118 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.019) 0:00:55.952 ***** 2025-12-13 00:19:27,131 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:27,137 p=31605 u=zuul n=ansible | TASK [ci_setup : Print current /etc/redhat-release msg={{ _current_rh_release.stdout }}] *** 2025-12-13 00:19:27,137 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.019) 0:00:55.973 ***** 2025-12-13 00:19:27,138 p=31605 u=zuul n=ansible | 
Saturday 13 December 2025 00:19:27 +0000 (0:00:00.019) 0:00:55.971 ***** 2025-12-13 00:19:27,151 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:27,158 p=31605 u=zuul n=ansible | TASK [ci_setup : Ensure the repos are enabled in the system using yum name={{ item.name }}, baseurl={{ item.baseurl }}, description={{ item.description | default(item.name) }}, gpgcheck={{ item.gpgcheck | default(false) }}, enabled=True, state={{ yum_repo_state | default('present') }}] *** 2025-12-13 00:19:27,158 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.020) 0:00:55.993 ***** 2025-12-13 00:19:27,158 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.020) 0:00:55.992 ***** 2025-12-13 00:19:27,180 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:27,186 p=31605 u=zuul n=ansible | TASK [ci_setup : Manage directories path={{ item }}, state={{ directory_state }}, mode=0755, owner={{ ansible_user_id }}, group={{ ansible_user_id }}] *** 2025-12-13 00:19:27,187 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.028) 0:00:56.022 ***** 2025-12-13 00:19:27,187 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.028) 0:00:56.021 ***** 2025-12-13 00:19:27,401 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/manifests/openstack/cr) 2025-12-13 00:19:27,596 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/logs) 2025-12-13 00:19:27,826 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/tmp) 2025-12-13 00:19:28,021 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/volumes) 2025-12-13 00:19:28,212 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2025-12-13 00:19:28,225 p=31605 u=zuul n=ansible | TASK [Prepare install_yamls make targets name=install_yamls, 
apply={'tags': ['bootstrap']}] *** 2025-12-13 00:19:28,225 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:01.038) 0:00:57.060 ***** 2025-12-13 00:19:28,225 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:01.038) 0:00:57.059 ***** 2025-12-13 00:19:28,349 p=31605 u=zuul n=ansible | TASK [install_yamls : Ensure directories exist path={{ item }}, state=directory, mode=0755] *** 2025-12-13 00:19:28,349 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.124) 0:00:57.184 ***** 2025-12-13 00:19:28,349 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.124) 0:00:57.183 ***** 2025-12-13 00:19:28,559 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts) 2025-12-13 00:19:28,723 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks) 2025-12-13 00:19:28,888 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2025-12-13 00:19:28,896 p=31605 u=zuul n=ansible | TASK [Create variables with local repos based on Zuul items name=install_yamls, tasks_from=zuul_set_operators_repo.yml] *** 2025-12-13 00:19:28,896 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.547) 0:00:57.731 ***** 2025-12-13 00:19:28,896 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.547) 0:00:57.730 ***** 2025-12-13 00:19:28,936 p=31605 u=zuul n=ansible | TASK [install_yamls : Set fact with local repos based on Zuul items cifmw_install_yamls_operators_repo={{ cifmw_install_yamls_operators_repo | default({}) | combine(_repo_operator_info | items2dict) }}] *** 2025-12-13 00:19:28,937 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.040) 0:00:57.772 ***** 2025-12-13 00:19:28,937 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.040) 0:00:57.771 
***** 2025-12-13 00:19:28,983 p=31605 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'master', 'change_url': 'https://github.com/infrawatch/service-telemetry-operator', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/infrawatch/service-telemetry-operator', 'name': 'infrawatch/service-telemetry-operator', 'short_name': 'service-telemetry-operator', 'src_dir': 'src/github.com/infrawatch/service-telemetry-operator'}}) 2025-12-13 00:19:28,984 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:28,991 p=31605 u=zuul n=ansible | TASK [install_yamls : Print helpful data for debugging msg=_repo_operator_name: {{ _repo_operator_name }} _repo_operator_info: {{ _repo_operator_info }} cifmw_install_yamls_operators_repo: {{ cifmw_install_yamls_operators_repo }} ] *** 2025-12-13 00:19:28,991 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.054) 0:00:57.826 ***** 2025-12-13 00:19:28,991 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.054) 0:00:57.825 ***** 2025-12-13 00:19:29,021 p=31605 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'master', 'change_url': 'https://github.com/infrawatch/service-telemetry-operator', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/infrawatch/service-telemetry-operator', 'name': 'infrawatch/service-telemetry-operator', 'short_name': 'service-telemetry-operator', 'src_dir': 'src/github.com/infrawatch/service-telemetry-operator'}}) 2025-12-13 00:19:29,022 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,037 p=31605 u=zuul n=ansible | TASK [Customize install_yamls devsetup vars if needed name=install_yamls, tasks_from=customize_devsetup_vars.yml] *** 2025-12-13 00:19:29,037 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.046) 0:00:57.872 ***** 2025-12-13 00:19:29,037 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.046) 
0:00:57.871 ***** 2025-12-13 00:19:29,090 p=31605 u=zuul n=ansible | TASK [install_yamls : Update opm_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^opm_version:, line=opm_version: {{ cifmw_install_yamls_opm_version }}, state=present] *** 2025-12-13 00:19:29,090 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.053) 0:00:57.926 ***** 2025-12-13 00:19:29,091 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.053) 0:00:57.924 ***** 2025-12-13 00:19:29,111 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,118 p=31605 u=zuul n=ansible | TASK [install_yamls : Update sdk_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^sdk_version:, line=sdk_version: {{ cifmw_install_yamls_sdk_version }}, state=present] *** 2025-12-13 00:19:29,118 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.027) 0:00:57.953 ***** 2025-12-13 00:19:29,118 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.027) 0:00:57.952 ***** 2025-12-13 00:19:29,141 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,152 p=31605 u=zuul n=ansible | TASK [install_yamls : Update go_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^go_version:, line=go_version: {{ cifmw_install_yamls_go_version }}, state=present] *** 2025-12-13 00:19:29,152 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.033) 0:00:57.987 ***** 2025-12-13 00:19:29,152 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.033) 0:00:57.986 ***** 2025-12-13 00:19:29,171 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,178 p=31605 u=zuul n=ansible | TASK [install_yamls : Update kustomize_version in install_yamls 
devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^kustomize_version:, line=kustomize_version: {{ cifmw_install_yamls_kustomize_version }}, state=present] *** 2025-12-13 00:19:29,178 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.025) 0:00:58.013 ***** 2025-12-13 00:19:29,178 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.025) 0:00:58.012 ***** 2025-12-13 00:19:29,196 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,208 p=31605 u=zuul n=ansible | TASK [install_yamls : Compute the cifmw_install_yamls_vars final value _install_yamls_override_vars={{ _install_yamls_override_vars | default({}) | combine(item, recursive=True) }}] *** 2025-12-13 00:19:29,208 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.030) 0:00:58.043 ***** 2025-12-13 00:19:29,208 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.030) 0:00:58.042 ***** 2025-12-13 00:19:29,296 p=31605 u=zuul n=ansible | ok: [localhost] => (item={}) 2025-12-13 00:19:29,304 p=31605 u=zuul n=ansible | TASK [install_yamls : Set environment override cifmw_install_yamls_environment fact cifmw_install_yamls_environment={{ _install_yamls_override_vars.keys() | map('upper') | zip(_install_yamls_override_vars.values()) | items2dict(key_name=0, value_name=1) | combine({ 'OUT': cifmw_install_yamls_manifests_dir, 'OUTPUT_DIR': cifmw_install_yamls_edpm_dir, 'CHECKOUT_FROM_OPENSTACK_REF': cifmw_install_yamls_checkout_openstack_ref, 'OPENSTACK_K8S_BRANCH': (zuul is defined and not zuul.branch |regex_search('master|antelope|rhos')) | ternary(zuul.branch, 'main') }) | combine(install_yamls_operators_repos) }}, cacheable=True] *** 2025-12-13 00:19:29,305 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.096) 0:00:58.140 ***** 2025-12-13 00:19:29,305 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.096) 
0:00:58.139 ***** 2025-12-13 00:19:29,341 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:29,352 p=31605 u=zuul n=ansible | TASK [install_yamls : Get environment structure base_path={{ cifmw_install_yamls_repo }}] *** 2025-12-13 00:19:29,352 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.047) 0:00:58.188 ***** 2025-12-13 00:19:29,353 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.047) 0:00:58.186 ***** 2025-12-13 00:19:29,949 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:29,959 p=31605 u=zuul n=ansible | TASK [install_yamls : Ensure Output directory exists path={{ cifmw_install_yamls_out_dir }}, state=directory, mode=0755] *** 2025-12-13 00:19:29,959 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.606) 0:00:58.794 ***** 2025-12-13 00:19:29,959 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.606) 0:00:58.793 ***** 2025-12-13 00:19:30,007 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:30,013 p=31605 u=zuul n=ansible | TASK [install_yamls : Ensure user cifmw_install_yamls_vars contains existing Makefile variables that=_cifmw_install_yamls_unmatched_vars | length == 0, msg=cifmw_install_yamls_vars contains a variable that is not defined in install_yamls Makefile nor cifmw_install_yamls_whitelisted_vars: {{ _cifmw_install_yamls_unmatched_vars | join(', ')}}, quiet=True] *** 2025-12-13 00:19:30,013 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.054) 0:00:58.848 ***** 2025-12-13 00:19:30,013 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.054) 0:00:58.847 ***** 2025-12-13 00:19:30,030 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:30,040 p=31605 u=zuul n=ansible | TASK [install_yamls : Generate /home/zuul/ci-framework-data/artifacts/install_yamls.sh dest={{ cifmw_install_yamls_out_dir }}/{{ cifmw_install_yamls_envfile }}, content={% 
for k,v in cifmw_install_yamls_environment.items() %} export {{ k }}={{ v }} {% endfor %}, mode=0644] *** 2025-12-13 00:19:30,040 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.027) 0:00:58.875 ***** 2025-12-13 00:19:30,040 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.027) 0:00:58.874 ***** 2025-12-13 00:19:30,058 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:30,064 p=31605 u=zuul n=ansible | TASK [install_yamls : Set install_yamls default values cifmw_install_yamls_defaults={{ get_makefiles_env_output.makefiles_values | combine(cifmw_install_yamls_environment) }}, cacheable=True] *** 2025-12-13 00:19:30,064 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.023) 0:00:58.899 ***** 2025-12-13 00:19:30,064 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.023) 0:00:58.898 ***** 2025-12-13 00:19:30,088 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:30,096 p=31605 u=zuul n=ansible | TASK [install_yamls : Show the env structure var=cifmw_install_yamls_environment] *** 2025-12-13 00:19:30,096 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.032) 0:00:58.932 ***** 2025-12-13 00:19:30,096 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.032) 0:00:58.930 ***** 2025-12-13 00:19:30,126 p=31605 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_environment: CHECKOUT_FROM_OPENSTACK_REF: 'true' OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm 2025-12-13 00:19:30,136 p=31605 u=zuul n=ansible | TASK [install_yamls : Show the env structure defaults var=cifmw_install_yamls_defaults] *** 2025-12-13 00:19:30,136 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.039) 0:00:58.971 ***** 2025-12-13 00:19:30,136 p=31605 u=zuul n=ansible | Saturday 13 December 2025 
00:19:30 +0000 (0:00:00.039) 0:00:58.970 ***** 2025-12-13 00:19:30,163 p=31605 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: 
quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_CLEANUP: 'true' BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: '' BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' 
CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 
DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all 
EDPM_ATTACH_EXTNET: 'true'
EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]'''
EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]'''
EDPM_COMPUTE_CELLS: '1'
EDPM_COMPUTE_CEPH_ENABLED: 'true'
EDPM_COMPUTE_CEPH_NOVA: 'true'
EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true'
EDPM_COMPUTE_SRIOV_ENABLED: 'true'
EDPM_COMPUTE_SUFFIX: '0'
EDPM_CONFIGURE_DEFAULT_ROUTE: 'true'
EDPM_CONFIGURE_HUGEPAGES: 'false'
EDPM_CONFIGURE_NETWORKING: 'true'
EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra
EDPM_NETWORKER_SUFFIX: '0'
EDPM_TOTAL_NETWORKERS: '1'
EDPM_TOTAL_NODES: '1'
GALERA_REPLICAS: ''
GENERATE_SSH_KEYS: 'true'
GIT_CLONE_OPTS: ''
GLANCE: config/samples/glance_v1beta1_glance.yaml
GLANCEAPI_DEPL_IMG: unused
GLANCE_BRANCH: main
GLANCE_COMMIT_HASH: ''
GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml
GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest
GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml
GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests
GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests
GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git
HEAT: config/samples/heat_v1beta1_heat.yaml
HEATAPI_DEPL_IMG: unused
HEATCFNAPI_DEPL_IMG: unused
HEATENGINE_DEPL_IMG: unused
HEAT_AUTH_ENCRYPTION_KEY: 767c3ed056cbaa3b9dfedb8c6f825bf0
HEAT_BRANCH: main
HEAT_COMMIT_HASH: ''
HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml
HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest
HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml
HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests
HEAT_KUTTL_NAMESPACE: heat-kuttl-tests
HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git
HEAT_SERVICE_ENABLED: 'true'
HORIZON: config/samples/horizon_v1beta1_horizon.yaml
HORIZON_BRANCH: main
HORIZON_COMMIT_HASH: ''
HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml
HORIZON_DEPL_IMG: unused
HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest
HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml
HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests
HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests
HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git
INFRA_BRANCH: main
INFRA_COMMIT_HASH: ''
INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest
INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml
INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests
INFRA_KUTTL_NAMESPACE: infra-kuttl-tests
INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git
INSTALL_CERT_MANAGER: 'true'
INSTALL_NMSTATE: true || false
INSTALL_NNCP: true || false
INTERNALAPI_HOST_ROUTES: ''
IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24
IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64
IPV6_LAB_LIBVIRT_STORAGE_POOL: default
IPV6_LAB_MANAGE_FIREWALLD: 'true'
IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24
IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64
IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router
IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64
IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24
IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1
IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3
IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96
IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false'
IPV6_LAB_NETWORK_NAME: nat64
IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48
IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11
IPV6_LAB_SNO_HOST_PREFIX: '64'
IPV6_LAB_SNO_INSTANCE_NAME: sno
IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64
IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp
IPV6_LAB_SNO_OCP_VERSION: latest-4.14
IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112
IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub
IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab
IRONIC: config/samples/ironic_v1beta1_ironic.yaml
IRONICAPI_DEPL_IMG: unused
IRONICCON_DEPL_IMG: unused
IRONICINS_DEPL_IMG: unused
IRONICNAG_DEPL_IMG: unused
IRONICPXE_DEPL_IMG: unused
IRONIC_BRANCH: main
IRONIC_COMMIT_HASH: ''
IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml
IRONIC_IMAGE: quay.io/metal3-io/ironic
IRONIC_IMAGE_TAG: release-24.1
IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest
IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml
IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests
IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests
IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git
KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml
KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml
KEYSTONEAPI_DEPL_IMG: unused
KEYSTONE_BRANCH: main
KEYSTONE_COMMIT_HASH: ''
KEYSTONE_FEDERATION_CLIENT_SECRET: COX8bmlKAWn56XCGMrKQJj7dgHNAOl6f
KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack
KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest
KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml
KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests
KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests
KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git
KUBEADMIN_PWD: '12345678'
LIBVIRT_SECRET: libvirt-secret
LOKI_DEPLOY_MODE: openshift-network
LOKI_DEPLOY_NAMESPACE: netobserv
LOKI_DEPLOY_SIZE: 1x.demo
LOKI_NAMESPACE: openshift-operators-redhat
LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki
LOKI_SUBSCRIPTION: loki-operator
LVMS_CR: '1'
MANILA: config/samples/manila_v1beta1_manila.yaml
MANILAAPI_DEPL_IMG: unused
MANILASCH_DEPL_IMG: unused
MANILASHARE_DEPL_IMG: unused
MANILA_BRANCH: main
MANILA_COMMIT_HASH: ''
MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml
MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest
MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml
MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests
MANILA_KUTTL_NAMESPACE: manila-kuttl-tests
MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git
MANILA_SERVICE_ENABLED: 'true'
MARIADB: config/samples/mariadb_v1beta1_galera.yaml
MARIADB_BRANCH: main
MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml
MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests
MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests
MARIADB_COMMIT_HASH: ''
MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml
MARIADB_DEPL_IMG: unused
MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest
MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml
MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests
MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests
MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git
MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml
MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml
MEMCACHED_DEPL_IMG: unused
METADATA_SHARED_SECRET: '1234567842'
METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90
METALLB_POOL: 192.168.122.80-192.168.122.90
MICROSHIFT: '0'
NAMESPACE: openstack
NETCONFIG: config/samples/network_v1beta1_netconfig.yaml
NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml
NETCONFIG_DEPL_IMG: unused
NETOBSERV_DEPLOY_NAMESPACE: netobserv
NETOBSERV_NAMESPACE: openshift-netobserv-operator
NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net
NETOBSERV_SUBSCRIPTION: netobserv-operator
NETWORK_BGP: 'false'
NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0
NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0
NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0
NETWORK_ISOLATION: 'true'
NETWORK_ISOLATION_INSTANCE_NAME: crc
NETWORK_ISOLATION_IPV4: 'true'
NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24
NETWORK_ISOLATION_IPV4_NAT: 'true'
NETWORK_ISOLATION_IPV6: 'false'
NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64
NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10
NETWORK_ISOLATION_MAC: '52:54:00:11:11:10'
NETWORK_ISOLATION_NETWORK_NAME: net-iso
NETWORK_ISOLATION_NET_NAME: default
NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true'
NETWORK_MTU: '1500'
NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0
NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0
NETWORK_STORAGE_MACVLAN: ''
NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0
NETWORK_VLAN_START: '20'
NETWORK_VLAN_STEP: '1'
NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml
NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml
NEUTRONAPI_DEPL_IMG: unused
NEUTRON_BRANCH: main
NEUTRON_COMMIT_HASH: ''
NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest
NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml
NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests
NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests
NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git
NFS_HOME: /home/nfs
NMSTATE_NAMESPACE: openshift-nmstate
NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8
NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator
NNCP_ADDITIONAL_HOST_ROUTES: ''
NNCP_BGP_1_INTERFACE: enp7s0
NNCP_BGP_1_IP_ADDRESS: 100.65.4.2
NNCP_BGP_2_INTERFACE: enp8s0
NNCP_BGP_2_IP_ADDRESS: 100.64.4.2
NNCP_BRIDGE: ospbr
NNCP_CLEANUP_TIMEOUT: 120s
NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::'
NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10'
NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122
NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10'
NNCP_DNS_SERVER: 192.168.122.1
NNCP_DNS_SERVER_IPV6: fd00:aaaa::1
NNCP_GATEWAY: 192.168.122.1
NNCP_GATEWAY_IPV6: fd00:aaaa::1
NNCP_INTERFACE: enp6s0
NNCP_NODES: ''
NNCP_TIMEOUT: 240s
NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml
NOVA_BRANCH: main
NOVA_COMMIT_HASH: ''
NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml
NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest
NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git
NUMBER_OF_INSTANCES: '1'
OCP_NETWORK_NAME: crc
OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml
OCTAVIA_BRANCH: main
OCTAVIA_COMMIT_HASH: ''
OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml
OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest
OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml
OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests
OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests
OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git
OKD: 'false'
OPENSTACK_BRANCH: main
OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest
OPENSTACK_COMMIT_HASH: ''
OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml
OPENSTACK_CRDS_DIR: openstack_crds
OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml
OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest
OPENSTACK_K8S_BRANCH: main
OPENSTACK_K8S_TAG: latest
OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml
OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests
OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests
OPENSTACK_NEUTRON_CUSTOM_CONF: ''
OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git
OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest
OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator
OPERATOR_CHANNEL: ''
OPERATOR_NAMESPACE: openstack-operators
OPERATOR_SOURCE: ''
OPERATOR_SOURCE_NAMESPACE: ''
OUT: /home/zuul/ci-framework-data/artifacts/manifests
OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm
OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml
OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml
OVNCONTROLLER_NMAP: 'true'
OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml
OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml
OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml
OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml
OVN_BRANCH: main
OVN_COMMIT_HASH: ''
OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest
OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml
OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests
OVN_KUTTL_NAMESPACE: ovn-kuttl-tests
OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git
PASSWORD: '12345678'
PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml
PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml
PLACEMENTAPI_DEPL_IMG: unused
PLACEMENT_BRANCH: main
PLACEMENT_COMMIT_HASH: ''
PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest
PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml
PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests
PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests
PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git
PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt
RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml
RABBITMQ_BRANCH: patches
RABBITMQ_COMMIT_HASH: ''
RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml
RABBITMQ_DEPL_IMG: unused
RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest
RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git
REDHAT_OPERATORS: 'false'
REDIS: config/samples/redis_v1beta1_redis.yaml
REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml
REDIS_DEPL_IMG: unused
RH_REGISTRY_PWD: ''
RH_REGISTRY_USER: ''
SECRET: osp-secret
SG_CORE_DEPL_IMG: unused
STANDALONE_COMPUTE_DRIVER: libvirt
STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0
STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0
STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0
STANDALONE_STORAGE_NET_PREFIX: 172.18.0
STANDALONE_TENANT_NET_PREFIX: 172.19.0
STORAGEMGMT_HOST_ROUTES: ''
STORAGE_CLASS: local-storage
STORAGE_HOST_ROUTES: ''
SWIFT: config/samples/swift_v1beta1_swift.yaml
SWIFT_BRANCH: main
SWIFT_COMMIT_HASH: ''
SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml
SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest
SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml
SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests
SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests
SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git
TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml
TELEMETRY_BRANCH: main
TELEMETRY_COMMIT_HASH: ''
TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml
TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest
TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator
TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml
TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites
TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests
TELEMETRY_KUTTL_RELPATH: test/kuttl/suites
TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git
TENANT_HOST_ROUTES: ''
TIMEOUT: 300s
TLS_ENABLED: 'false'
tripleo_deploy: 'export REGISTRY_USER:'
2025-12-13 00:19:30,171 p=31605 u=zuul n=ansible | TASK [install_yamls : Generate make targets install_yamls_path={{ cifmw_install_yamls_repo }}, output_directory={{ cifmw_install_yamls_tasks_out }}] ***
2025-12-13 00:19:30,171 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.035) 0:00:59.007 *****
2025-12-13 00:19:30,171 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.035) 0:00:59.005 *****
2025-12-13 00:19:30,456 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:30,473 p=31605 u=zuul n=ansible | TASK [install_yamls : Debug generate_make module var=cifmw_generate_makes] *****
2025-12-13 00:19:30,473 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.301) 0:00:59.309 *****
2025-12-13 00:19:30,473 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.302) 0:00:59.307 *****
2025-12-13 00:19:30,500 p=31605 u=zuul n=ansible | ok: [localhost] =>
  cifmw_generate_makes:
    changed: false
    debug:
      /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/Makefile:
      - all
      - help
      - cleanup
      - deploy_cleanup
      - wait
      - crc_storage
      - crc_storage_cleanup
      - crc_storage_release
      - crc_storage_with_retries
      - crc_storage_cleanup_with_retries
      - operator_namespace
      - namespace
      - namespace_cleanup
      - input
      - input_cleanup
      - crc_bmo_setup
      - crc_bmo_cleanup
      - openstack_prep
      - openstack
      - openstack_wait
      - openstack_init
      - openstack_cleanup
      - openstack_repo
      - openstack_deploy_prep
      - openstack_deploy
      - openstack_wait_deploy
      - openstack_deploy_cleanup
      - openstack_update_run
      - update_services
      - update_system
      - openstack_patch_version
      - edpm_deploy_generate_keys
      - edpm_patch_ansible_runner_image
      - edpm_deploy_prep
      - edpm_deploy_cleanup
      - edpm_deploy
      - edpm_deploy_baremetal_prep
      - edpm_deploy_baremetal
      - edpm_wait_deploy_baremetal
      - edpm_wait_deploy
      - edpm_register_dns
      - edpm_nova_discover_hosts
      - openstack_crds
      - openstack_crds_cleanup
      - edpm_deploy_networker_prep
      - edpm_deploy_networker_cleanup
      - edpm_deploy_networker
      - infra_prep
      - infra
      - infra_cleanup
      - dns_deploy_prep
      - dns_deploy
      - dns_deploy_cleanup
      - netconfig_deploy_prep
      - netconfig_deploy
      - netconfig_deploy_cleanup
      - memcached_deploy_prep
      - memcached_deploy
      - memcached_deploy_cleanup
      - keystone_prep
      - keystone
      - keystone_cleanup
      - keystone_deploy_prep
      - keystone_deploy
      - keystone_deploy_cleanup
      - barbican_prep
      - barbican
      - barbican_cleanup
      - barbican_deploy_prep
      - barbican_deploy
      - barbican_deploy_validate
      - barbican_deploy_cleanup
      - mariadb
      - mariadb_cleanup
      - mariadb_deploy_prep
      - mariadb_deploy
      - mariadb_deploy_cleanup
      - placement_prep
      - placement
      - placement_cleanup
      - placement_deploy_prep
      - placement_deploy
      - placement_deploy_cleanup
      - glance_prep
      - glance
      - glance_cleanup
      - glance_deploy_prep
      - glance_deploy
      - glance_deploy_cleanup
      - ovn_prep
      - ovn
      - ovn_cleanup
      - ovn_deploy_prep
      - ovn_deploy
      - ovn_deploy_cleanup
      - neutron_prep
      - neutron
      - neutron_cleanup
      - neutron_deploy_prep
      - neutron_deploy
      - neutron_deploy_cleanup
      - cinder_prep
      - cinder
      - cinder_cleanup
      - cinder_deploy_prep
      - cinder_deploy
      - cinder_deploy_cleanup
      - rabbitmq_prep
      - rabbitmq
      - rabbitmq_cleanup
      - rabbitmq_deploy_prep
      - rabbitmq_deploy
      - rabbitmq_deploy_cleanup
      - ironic_prep
      - ironic
      - ironic_cleanup
      - ironic_deploy_prep
      - ironic_deploy
      - ironic_deploy_cleanup
      - octavia_prep
      - octavia
      - octavia_cleanup
      - octavia_deploy_prep
      - octavia_deploy
      - octavia_deploy_cleanup
      - designate_prep
      - designate
      - designate_cleanup
      - designate_deploy_prep
      - designate_deploy
      - designate_deploy_cleanup
      - nova_prep
      - nova
      - nova_cleanup
      - nova_deploy_prep
      - nova_deploy
      - nova_deploy_cleanup
      - mariadb_kuttl_run
      - mariadb_kuttl
      - kuttl_db_prep
      - kuttl_db_cleanup
      - kuttl_common_prep
      - kuttl_common_cleanup
      - keystone_kuttl_run
      - keystone_kuttl
      - barbican_kuttl_run
      - barbican_kuttl
      - placement_kuttl_run
      - placement_kuttl
      - cinder_kuttl_run
      - cinder_kuttl
      - neutron_kuttl_run
      - neutron_kuttl
      - octavia_kuttl_run
      - octavia_kuttl
      - designate_kuttl
      - designate_kuttl_run
      - ovn_kuttl_run
      - ovn_kuttl
      - infra_kuttl_run
      - infra_kuttl
      - ironic_kuttl_run
      - ironic_kuttl
      - ironic_kuttl_crc
      - heat_kuttl_run
      - heat_kuttl
      - heat_kuttl_crc
      - ansibleee_kuttl_run
      - ansibleee_kuttl_cleanup
      - ansibleee_kuttl_prep
      - ansibleee_kuttl
      - glance_kuttl_run
      - glance_kuttl
      - manila_kuttl_run
      - manila_kuttl
      - swift_kuttl_run
      - swift_kuttl
      - horizon_kuttl_run
      - horizon_kuttl
      - openstack_kuttl_run
      - openstack_kuttl
      - mariadb_chainsaw_run
      - mariadb_chainsaw
      - horizon_prep
      - horizon
      - horizon_cleanup
      - horizon_deploy_prep
      - horizon_deploy
      - horizon_deploy_cleanup
      - heat_prep
      - heat
      - heat_cleanup
      - heat_deploy_prep
      - heat_deploy
      - heat_deploy_cleanup
      - ansibleee_prep
      - ansibleee
      - ansibleee_cleanup
      - baremetal_prep
      - baremetal
      - baremetal_cleanup
      - ceph_help
      - ceph
      - ceph_cleanup
      - rook_prep
      - rook
      - rook_deploy_prep
      - rook_deploy
      - rook_crc_disk
      - rook_cleanup
      - lvms
      - nmstate
      - nncp
      - nncp_cleanup
      - netattach
      - netattach_cleanup
      - metallb
      - metallb_config
      - metallb_config_cleanup
      - metallb_cleanup
      - loki
      - loki_cleanup
      - loki_deploy
      - loki_deploy_cleanup
      - netobserv
      - netobserv_cleanup
      - netobserv_deploy
      - netobserv_deploy_cleanup
      - manila_prep
      - manila
      - manila_cleanup
      - manila_deploy_prep
      - manila_deploy
      - manila_deploy_cleanup
      - telemetry_prep
      - telemetry
      - telemetry_cleanup
      - telemetry_deploy_prep
      - telemetry_deploy
      - telemetry_deploy_cleanup
      - telemetry_kuttl_run
      - telemetry_kuttl
      - swift_prep
      - swift
      - swift_cleanup
      - swift_deploy_prep
      - swift_deploy
      - swift_deploy_cleanup
      - certmanager
      - certmanager_cleanup
      - validate_marketplace
      - redis_deploy_prep
      - redis_deploy
      - redis_deploy_cleanup
      - set_slower_etcd_profile
      /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/Makefile:
      - help
      - download_tools
      - nfs
      - nfs_cleanup
      - crc
      - crc_cleanup
      - crc_scrub
      - crc_attach_default_interface
      - crc_attach_default_interface_cleanup
      - ipv6_lab_network
      - ipv6_lab_network_cleanup
      - ipv6_lab_nat64_router
      - ipv6_lab_nat64_router_cleanup
      - ipv6_lab_sno
      - ipv6_lab_sno_cleanup
      - ipv6_lab
      - ipv6_lab_cleanup
      - attach_default_interface
      - attach_default_interface_cleanup
      - network_isolation_bridge
      - network_isolation_bridge_cleanup
      - edpm_baremetal_compute
      - edpm_compute
      - edpm_compute_bootc
      - edpm_ansible_runner
      - edpm_computes_bgp
      - edpm_compute_repos
      - edpm_compute_cleanup
      - edpm_networker
      - edpm_networker_cleanup
      - edpm_deploy_instance
      - tripleo_deploy
      - standalone_deploy
      - standalone_sync
      - standalone
      - standalone_cleanup
      - standalone_snapshot
      - standalone_revert
      - cifmw_prepare
      - cifmw_cleanup
      - bmaas_network
      - bmaas_network_cleanup
      - bmaas_route_crc_and_crc_bmaas_networks
      - bmaas_route_crc_and_crc_bmaas_networks_cleanup
      - bmaas_crc_attach_network
      - bmaas_crc_attach_network_cleanup
      - bmaas_crc_baremetal_bridge
      - bmaas_crc_baremetal_bridge_cleanup
      - bmaas_baremetal_net_nad
      - bmaas_baremetal_net_nad_cleanup
      - bmaas_metallb
      - bmaas_metallb_cleanup
      - bmaas_virtual_bms
      - bmaas_virtual_bms_cleanup
      - bmaas_sushy_emulator
      - bmaas_sushy_emulator_cleanup
      - bmaas_sushy_emulator_wait
      - bmaas_generate_nodes_yaml
      - bmaas
      - bmaas_cleanup
    failed: false
    success: true
2025-12-13 00:19:30,511 p=31605 u=zuul n=ansible | TASK [install_yamls : Create the install_yamls parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, content={{ { 'cifmw_install_yamls_environment': cifmw_install_yamls_environment, 'cifmw_install_yamls_defaults': cifmw_install_yamls_defaults } | to_nice_yaml }}, mode=0644] ***
2025-12-13 00:19:30,511 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.037) 0:00:59.346 *****
2025-12-13 00:19:30,511 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.037) 0:00:59.345 *****
2025-12-13 00:19:30,943 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:30,950 p=31605 u=zuul n=ansible | TASK [install_yamls : Create empty cifmw_install_yamls_environment if needed cifmw_install_yamls_environment={}] ***
2025-12-13 00:19:30,950 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.438) 0:00:59.785 *****
2025-12-13 00:19:30,950 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.438) 0:00:59.784 *****
2025-12-13 00:19:30,974 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:30,988 p=31605 u=zuul n=ansible | TASK [discover_latest_image : Get latest image url={{ cifmw_discover_latest_image_base_url }}, image_prefix={{ cifmw_discover_latest_image_qcow_prefix }}, images_file={{ cifmw_discover_latest_image_images_file }}] ***
2025-12-13 00:19:30,988 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.038) 0:00:59.824 *****
2025-12-13 00:19:30,989 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.038) 0:00:59.822 *****
2025-12-13 00:19:31,569 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:31,577 p=31605 u=zuul n=ansible | TASK [discover_latest_image : Export facts accordingly cifmw_discovered_image_name={{ discovered_image['data']['image_name'] }}, cifmw_discovered_image_url={{ discovered_image['data']['image_url'] }}, cifmw_discovered_hash={{ discovered_image['data']['hash'] }}, cifmw_discovered_hash_algorithm={{ discovered_image['data']['hash_algorithm'] }}, cacheable=True] ***
2025-12-13 00:19:31,577 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:31 +0000 (0:00:00.588) 0:01:00.412 *****
2025-12-13 00:19:31,577 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:31 +0000 (0:00:00.588) 0:01:00.411 *****
2025-12-13 00:19:31,611 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:31,623 p=31605 u=zuul n=ansible | TASK [Create artifacts with custom params mode=0644, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/custom-params.yml, content={{ ci_framework_params | to_nice_yaml }}] ***
2025-12-13 00:19:31,623 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:31 +0000 (0:00:00.045) 0:01:00.458 *****
2025-12-13 00:19:31,623 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:31 +0000 (0:00:00.045) 0:01:00.457 *****
2025-12-13 00:19:32,084 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | PLAY RECAP *********************************************************************
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | localhost : ok=43 changed=23 unreachable=0 failed=0 skipped=40 rescued=0 ignored=0
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:32 +0000 (0:00:00.484) 0:01:00.943 *****
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | ===============================================================================
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | ci_setup : Install needed packages ------------------------------------- 28.95s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | repo_setup : Initialize python venv and install requirements ------------ 9.14s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | ci_setup : Install openshift client ------------------------------------- 5.23s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | install_ca : Update ca bundle ------------------------------------------- 1.53s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | repo_setup : Make sure git-core package is installed -------------------- 1.04s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | ci_setup : Manage directories ------------------------------------------- 1.04s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | Gathering Facts --------------------------------------------------------- 1.04s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | repo_setup : Get repo-setup repository ---------------------------------- 0.96s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | repo_setup : Install repo-setup package --------------------------------- 0.83s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Dump full hash in delorean.repo.md5 file ------------------- 0.77s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Run repo-setup --------------------------------------------- 0.68s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_yamls : Get environment structure ------------------------------- 0.61s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | discover_latest_image : Get latest image -------------------------------- 0.59s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_yamls : Ensure directories exist -------------------------------- 0.55s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Ensure directories are present ----------------------------- 0.54s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Run repo-setup-get-hash ------------------------------------ 0.49s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | Create artifacts with custom params ------------------------------------- 0.48s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Cleanup existing metadata ---------------------------------- 0.46s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_yamls : Create the install_yamls parameters file ---------------- 0.44s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Remove existing repos from /etc/yum.repos.d directory ------ 0.44s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:32 +0000 (0:00:00.485) 0:01:00.943 *****
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ===============================================================================
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ci_setup --------------------------------------------------------------- 36.67s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup ------------------------------------------------------------- 17.24s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_yamls ----------------------------------------------------------- 2.64s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_ca -------------------------------------------------------------- 1.99s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | gather_facts ------------------------------------------------------------ 1.04s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | discover_latest_image --------------------------------------------------- 0.63s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ansible.builtin.copy ---------------------------------------------------- 0.49s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ansible.builtin.include_role -------------------------------------------- 0.12s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ansible.builtin.set_fact ------------------------------------------------ 0.10s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | total ------------------------------------------------------------------ 60.91s
2025-12-13 00:19:33,160 p=32461 u=zuul n=ansible | PLAY [Run pre_infra hooks] *****************************************************
2025-12-13 00:19:33,191 p=32461 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] ***
2025-12-13 00:19:33,191 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.045) 0:00:00.045 *****
2025-12-13 00:19:33,191 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.045) 0:00:00.045 *****
2025-12-13 00:19:33,249 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:33,259 p=32461 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] ***
2025-12-13 00:19:33,259 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.068) 0:00:00.114 *****
2025-12-13 00:19:33,259 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.068) 0:00:00.113 *****
2025-12-13 00:19:33,313 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:33,323 p=32461 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_infra _raw_params={{ hook.type }}.yml] ***
2025-12-13 00:19:33,323 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.063) 0:00:00.177 *****
2025-12-13 00:19:33,323 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.063) 0:00:00.177 *****
2025-12-13 00:19:33,382 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:33,444 p=32461 u=zuul n=ansible | PLAY [Prepare host virtualization] *********************************************
2025-12-13 00:19:33,466 p=32461 u=zuul n=ansible | TASK [Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ******
2025-12-13 00:19:33,467 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.143) 0:00:00.321 *****
2025-12-13 00:19:33,467 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.143) 0:00:00.320 *****
2025-12-13 00:19:33,565 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:33,579 p=32461 u=zuul n=ansible | TASK [Ensure libvirt is present/configured name=libvirt_manager] ***************
2025-12-13 00:19:33,579 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.112) 0:00:00.434 *****
2025-12-13 00:19:33,579 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.112) 0:00:00.433 *****
2025-12-13 00:19:33,598 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:33,607 p=32461 u=zuul n=ansible | TASK [Prepare OpenShift provisioner node name=openshift_provisioner_node] ******
2025-12-13 00:19:33,607 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.027) 0:00:00.461 *****
2025-12-13 00:19:33,607 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.027) 0:00:00.460 *****
2025-12-13 00:19:33,626 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:33,661 p=32461 u=zuul n=ansible | PLAY [Prepare the platform] ****************************************************
2025-12-13 00:19:33,689 p=32461 u=zuul n=ansible | TASK [Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ******
2025-12-13 00:19:33,689 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.082) 0:00:00.544 *****
2025-12-13 00:19:33,689 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.082) 0:00:00.543 *****
2025-12-13 00:19:33,732 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:33,746 p=32461 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Environment Definition file existence path={{ cifmw_networking_mapper_networking_env_def_path }}] ***
2025-12-13 00:19:33,747 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.057) 0:00:00.601 *****
2025-12-13 00:19:33,747 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.057) 0:00:00.600 *****
2025-12-13 00:19:34,030 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:34,049 p=32461 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Definition file existence that=['_net_env_def_stat.stat.exists'], msg=Ensure that the Networking Environment Definition file exists in {{ cifmw_networking_mapper_networking_env_def_path }}, quiet=True] ***
2025-12-13 00:19:34,050 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.303) 0:00:00.904 *****
2025-12-13 00:19:34,050 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.303) 0:00:00.903 *****
2025-12-13 00:19:34,074 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,088 p=32461 u=zuul n=ansible | TASK [networking_mapper : Load the Networking Definition from file path={{ cifmw_networking_mapper_networking_env_def_path }}] ***
2025-12-13 00:19:34,088 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.038) 0:00:00.942 *****
2025-12-13 00:19:34,088 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.038) 0:00:00.942 *****
2025-12-13 00:19:34,114 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,124 p=32461 u=zuul n=ansible | TASK [networking_mapper : Set cifmw_networking_env_definition is present cifmw_networking_env_definition={{ _net_env_def_slurp['content'] | b64decode | from_yaml }}, cacheable=True] ***
2025-12-13 00:19:34,124 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.035) 0:00:00.978 *****
2025-12-13 00:19:34,124 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.035) 0:00:00.978 *****
2025-12-13 00:19:34,158 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,176 p=32461 u=zuul n=ansible | TASK [Deploy OCP using Hive name=hive] *****************************************
2025-12-13 00:19:34,177 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.052) 0:00:01.031 *****
2025-12-13 00:19:34,177 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.052) 0:00:01.030 *****
2025-12-13 00:19:34,199 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,209 p=32461 u=zuul n=ansible | TASK [Prepare CRC name=rhol_crc] ***********************************************
2025-12-13 00:19:34,210 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.032) 0:00:01.064 *****
2025-12-13 00:19:34,210 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.033) 0:00:01.063 *****
2025-12-13 00:19:34,235 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,249 p=32461 u=zuul n=ansible | TASK [Deploy OpenShift cluster using dev-scripts name=devscripts] **************
2025-12-13 00:19:34,249 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.039) 0:00:01.103 *****
2025-12-13 00:19:34,249 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.039) 0:00:01.103 *****
2025-12-13 00:19:34,277 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,292 p=32461 u=zuul n=ansible | TASK [openshift_login : Ensure output directory exists path={{ cifmw_openshift_login_basedir }}/artifacts, state=directory, mode=0755] ***
2025-12-13 00:19:34,292 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.042) 0:00:01.146 *****
2025-12-13 00:19:34,292 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.042) 0:00:01.146 *****
2025-12-13 00:19:34,648 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:34,657 p=32461 u=zuul n=ansible | TASK [openshift_login : OpenShift login _raw_params=login.yml] *****************
2025-12-13 00:19:34,658 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.365) 0:00:01.512 *****
2025-12-13 00:19:34,658 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.365) 0:00:01.511 *****
2025-12-13 00:19:34,690 p=32461 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/login.yml for localhost
2025-12-13 00:19:34,702 p=32461 u=zuul n=ansible | TASK [openshift_login : Check if the password file is present path={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] ***
2025-12-13 00:19:34,703 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.045) 0:00:01.557 *****
2025-12-13 00:19:34,703 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.045) 0:00:01.556 *****
2025-12-13 00:19:34,739 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,749 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch user password content src={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] ***
2025-12-13 00:19:34,749 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.046) 0:00:01.603 *****
2025-12-13 00:19:34,749 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.046) 0:00:01.603 *****
2025-12-13 00:19:34,784 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,799 p=32461 u=zuul n=ansible | TASK [openshift_login : Set user password as a fact cifmw_openshift_login_password={{ cifmw_openshift_login_password_file_slurp.content | b64decode }}, cacheable=True] ***
2025-12-13 00:19:34,799 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.049) 0:00:01.653 *****
2025-12-13 00:19:34,799 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.049) 0:00:01.653 *****
2025-12-13 00:19:34,819 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,832 p=32461 u=zuul n=ansible | TASK [openshift_login : Set role variables cifmw_openshift_login_kubeconfig={{ cifmw_openshift_login_kubeconfig | default(cifmw_openshift_kubeconfig) | default( ansible_env.KUBECONFIG if 'KUBECONFIG' in ansible_env else cifmw_openshift_login_kubeconfig_default_path ) | trim }}, cifmw_openshift_login_user={{ cifmw_openshift_login_user | default(cifmw_openshift_user) | default(omit) }}, cifmw_openshift_login_password={{ cifmw_openshift_login_password | default(cifmw_openshift_password) | default(omit) }}, cifmw_openshift_login_api={{ cifmw_openshift_login_api | default(cifmw_openshift_api) | default(omit) }}, cifmw_openshift_login_cert_login={{ cifmw_openshift_login_cert_login | default(false)}}, cifmw_openshift_login_provided_token={{ cifmw_openshift_provided_token | default(omit) }}, cacheable=True] ***
2025-12-13 00:19:34,833 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.033) 0:00:01.687 *****
2025-12-13 00:19:34,833 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.033) 0:00:01.686 *****
2025-12-13 00:19:34,861 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:34,873 p=32461 u=zuul n=ansible | TASK [openshift_login : Check if kubeconfig exists path={{ cifmw_openshift_login_kubeconfig }}] ***
2025-12-13 00:19:34,873 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.040) 0:00:01.728 *****
2025-12-13 00:19:34,873 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.040) 0:00:01.727 *****
2025-12-13 00:19:35,070 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:35,089 p=32461 u=zuul n=ansible | TASK [openshift_login : Assert that enough data is provided to log in to OpenShift that=cifmw_openshift_login_kubeconfig_stat.stat.exists or (cifmw_openshift_login_provided_token is defined and cifmw_openshift_login_provided_token != '') or ( (cifmw_openshift_login_user is defined) and (cifmw_openshift_login_password is defined) and (cifmw_openshift_login_api is defined) ), msg=If an existing kubeconfig is not provided user/pwd or provided/initial token and API URL must be given] ***
2025-12-13 00:19:35,089 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.215) 0:00:01.943 *****
2025-12-13 00:19:35,089 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.215) 0:00:01.943 *****
2025-12-13 00:19:35,133 p=32461 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed
2025-12-13 00:19:35,145 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch kubeconfig content src={{ cifmw_openshift_login_kubeconfig }}] ***
2025-12-13 00:19:35,145 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.056) 0:00:02.000 *****
2025-12-13 00:19:35,146 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.056) 0:00:01.999 *****
2025-12-13 00:19:35,182 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:35,207 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch x509 key based users cifmw_openshift_login_key_based_users={{ ( cifmw_openshift_login_kubeconfig_content_b64.content | b64decode | from_yaml ). users | default([]) | selectattr('user.client-certificate-data', 'defined') | map(attribute="name") | map("split", "/") | map("first") }}, cacheable=True] ***
2025-12-13 00:19:35,208 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.062) 0:00:02.062 *****
2025-12-13 00:19:35,208 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.062) 0:00:02.061 *****
2025-12-13 00:19:35,242 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:35,256 p=32461 u=zuul n=ansible | TASK [openshift_login : Assign key based user if not provided and available cifmw_openshift_login_user={{ (cifmw_openshift_login_assume_cert_system_user | ternary('system:', '')) + (cifmw_openshift_login_key_based_users | map('replace', 'system:', '') | unique | first) }}, cifmw_openshift_login_cert_login=True, cacheable=True] ***
2025-12-13 00:19:35,257 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.049) 0:00:02.111 *****
2025-12-13 00:19:35,257 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.049) 0:00:02.110 *****
2025-12-13 00:19:35,289 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:35,301 p=32461 u=zuul n=ansible | TASK [openshift_login : Set the retry count cifmw_openshift_login_retries_cnt={{ 0 if cifmw_openshift_login_retries_cnt is undefined else cifmw_openshift_login_retries_cnt|int + 1 }}] ***
2025-12-13 00:19:35,301 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.044) 0:00:02.155 *****
2025-12-13 00:19:35,301 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.044) 0:00:02.155 *****
2025-12-13 00:19:35,332 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:35,344 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch token _raw_params=try_login.yml] *****************
2025-12-13 00:19:35,344 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.043) 0:00:02.198 *****
2025-12-13 00:19:35,344 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.043) 0:00:02.198 *****
2025-12-13 00:19:35,374 p=32461 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/try_login.yml for localhost
2025-12-13 00:19:35,392 p=32461 u=zuul n=ansible | TASK [openshift_login : Try get OpenShift access token _raw_params=oc whoami -t] ***
2025-12-13 00:19:35,392 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.047) 0:00:02.246 *****
2025-12-13 00:19:35,392 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.047) 0:00:02.246 *****
2025-12-13 00:19:35,410 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:35,421 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift token output_dir={{ cifmw_openshift_login_basedir }}/artifacts, script=oc login {%- if cifmw_openshift_login_provided_token is not defined %} {%- if cifmw_openshift_login_user is defined %} -u {{ cifmw_openshift_login_user }} {%- endif %} {%- if cifmw_openshift_login_password is defined %} -p {{ cifmw_openshift_login_password }} {%- endif %} {% else %} --token={{ cifmw_openshift_login_provided_token }} {%- endif %} {%- if cifmw_openshift_login_skip_tls_verify|bool %} --insecure-skip-tls-verify=true {%- endif %} {%- if cifmw_openshift_login_api is defined %} {{ cifmw_openshift_login_api }} {%- endif %}] ***
2025-12-13 00:19:35,422 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.029) 0:00:02.276 *****
2025-12-13 00:19:35,422 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.029) 0:00:02.275 *****
2025-12-13 00:19:35,481 p=32461 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_000_fetch_openshift.log
2025-12-13 00:19:35,914 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:35,925 p=32461 u=zuul n=ansible | TASK [openshift_login : Ensure kubeconfig is provided that=cifmw_openshift_login_kubeconfig != ""] ***
2025-12-13 00:19:35,925 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.503) 0:00:02.779 *****
2025-12-13 00:19:35,925 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.503) 0:00:02.778 *****
2025-12-13 00:19:35,959 p=32461 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed
2025-12-13 00:19:35,970 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch new OpenShift access token _raw_params=oc whoami -t] ***
2025-12-13 00:19:35,970 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.045) 0:00:02.825 *****
2025-12-13 00:19:35,971 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.045) 0:00:02.824 *****
2025-12-13 00:19:36,446 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:36,460 p=32461 u=zuul n=ansible | TASK [openshift_login : Set new OpenShift token cifmw_openshift_login_token={{ (not cifmw_openshift_login_new_token_out.skipped | default(false)) | ternary(cifmw_openshift_login_new_token_out.stdout, cifmw_openshift_login_whoami_out.stdout) }}, cacheable=True] ***
2025-12-13 00:19:36,460 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.489) 0:00:03.314 *****
2025-12-13 00:19:36,460 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.489) 0:00:03.314 *****
2025-12-13 00:19:36,486 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:36,499 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift API URL _raw_params=oc whoami --show-server=true] ***
2025-12-13 00:19:36,499 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.038) 0:00:03.353 *****
2025-12-13 00:19:36,499 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.038) 0:00:03.352 *****
2025-12-13 00:19:36,832 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:36,855 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift kubeconfig context _raw_params=oc whoami -c] ***
2025-12-13 00:19:36,855 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.356) 0:00:03.709 *****
2025-12-13 00:19:36,855 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.356) 0:00:03.709 *****
2025-12-13 00:19:37,212 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:37,224 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift current user _raw_params=oc whoami] ****
2025-12-13 00:19:37,224 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.368) 0:00:04.078 *****
2025-12-13 00:19:37,224 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.368) 0:00:04.078 *****
2025-12-13 00:19:37,566 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:37,579 p=32461 u=zuul n=ansible | TASK [openshift_login : Set OpenShift user, context and API facts cifmw_openshift_login_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_login_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_login_user={{ _oauth_user }}, cifmw_openshift_kubeconfig={{ cifmw_openshift_login_kubeconfig }}, cifmw_openshift_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_user={{ _oauth_user }}, cifmw_openshift_token={{ cifmw_openshift_login_token | default(omit) }}, cifmw_install_yamls_environment={{ ( cifmw_install_yamls_environment | combine({'KUBECONFIG': cifmw_openshift_login_kubeconfig}) ) if cifmw_install_yamls_environment is defined else omit }}, cacheable=True] ***
2025-12-13 00:19:37,579 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.355) 0:00:04.434 *****
2025-12-13 00:19:37,579 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.355) 0:00:04.433 *****
2025-12-13 00:19:37,611 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:37,624 p=32461 u=zuul n=ansible | TASK [openshift_login : Create the openshift_login parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/openshift-login-params.yml, content={{ cifmw_openshift_login_params_content | from_yaml | to_nice_yaml }}, mode=0600] ***
2025-12-13 00:19:37,624 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.044) 0:00:04.478 *****
2025-12-13 00:19:37,624 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.044) 0:00:04.477 *****
2025-12-13 00:19:38,194 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:38,204 p=32461 u=zuul n=ansible | TASK [openshift_login : Read the install yamls parameters file path={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml] ***
2025-12-13 00:19:38,205 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:38 +0000 (0:00:00.580) 0:00:05.059 *****
2025-12-13 00:19:38,205 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:38 +0000 (0:00:00.580) 0:00:05.058 *****
2025-12-13 00:19:38,509 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:38,531 p=32461 u=zuul n=ansible | TASK [openshift_login : Append the KUBECONFIG to the install yamls parameters content={{ cifmw_openshift_login_install_yamls_artifacts_slurp['content'] | b64decode | from_yaml | combine( { 'cifmw_install_yamls_environment': { 'KUBECONFIG': cifmw_openshift_login_kubeconfig } }, recursive=true) | to_nice_yaml }}, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, mode=0600] ***
2025-12-13 00:19:38,531 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:38 +0000 (0:00:00.326) 0:00:05.385 *****
2025-12-13 00:19:38,531 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:38 +0000 (0:00:00.326) 0:00:05.385 *****
2025-12-13 00:19:39,051 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:39,073 p=32461 u=zuul n=ansible | TASK [openshift_setup : Ensure output directory exists path={{ cifmw_openshift_setup_basedir }}/artifacts, state=directory, mode=0755] ***
2025-12-13 00:19:39,073 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.542) 0:00:05.928 *****
2025-12-13 00:19:39,074 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.542) 0:00:05.927 *****
2025-12-13 00:19:39,294 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:39,316 p=32461 u=zuul n=ansible | TASK [openshift_setup : Fetch namespaces to create cifmw_openshift_setup_namespaces={{ (( ([cifmw_install_yamls_defaults['NAMESPACE']] + ([cifmw_install_yamls_defaults['OPERATOR_NAMESPACE']] if 'OPERATOR_NAMESPACE' is in cifmw_install_yamls_defaults else []) ) if cifmw_install_yamls_defaults is defined else [] ) + cifmw_openshift_setup_create_namespaces) | unique }}] ***
2025-12-13 00:19:39,317 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.243) 0:00:06.171 *****
2025-12-13 00:19:39,317 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.243) 0:00:06.171 *****
2025-12-13 00:19:39,344 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:39,369 p=32461 u=zuul n=ansible | TASK [openshift_setup : Create required namespaces kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ item }}, kind=Namespace, state=present] ***
2025-12-13 00:19:39,370 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.052) 0:00:06.224 *****
2025-12-13 00:19:39,370 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.052) 0:00:06.223 *****
2025-12-13 00:19:40,355 p=32461 u=zuul n=ansible | changed: [localhost] => (item=openstack)
2025-12-13 00:19:41,109 p=32461 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators)
2025-12-13 00:19:41,123 p=32461 u=zuul n=ansible | TASK [openshift_setup : Get internal OpenShift registry route kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=Route, name=default-route, namespace=openshift-image-registry] ***
2025-12-13 00:19:41,123 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:41 +0000 (0:00:01.753) 0:00:07.977 *****
2025-12-13 00:19:41,123 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:41 +0000 (0:00:01.753) 0:00:07.977 *****
2025-12-13 00:19:42,173 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:42,184 p=32461 u=zuul n=ansible | TASK [openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces state=present, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'kind': 'RoleBinding', 'apiVersion': 'rbac.authorization.k8s.io/v1', 'metadata': {'name': 'system:image-puller', 'namespace': '{{ item }}'}, 'subjects': [{'kind': 'User', 'name': 'system:anonymous'}, {'kind': 'User', 'name': 'system:unauthenticated'}], 'roleRef': {'kind': 'ClusterRole', 'name': 'system:image-puller'}}] ***
2025-12-13 00:19:42,185 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:42 +0000 (0:00:01.061) 0:00:09.039 *****
2025-12-13 00:19:42,185 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:42 +0000 (0:00:01.061) 0:00:09.038 *****
2025-12-13 00:19:42,946 p=32461 u=zuul n=ansible | changed: [localhost] => (item=openstack)
2025-12-13 00:19:43,635 p=32461 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators)
2025-12-13 00:19:43,661 p=32461 u=zuul n=ansible | TASK [openshift_setup : Wait for the image registry to be ready kind=Deployment, name=image-registry, namespace=openshift-image-registry, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Available', 'status': 'True'}] ***
2025-12-13 00:19:43,661 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:43 +0000 (0:00:01.476) 0:00:10.516 *****
2025-12-13 00:19:43,661 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:43 +0000 (0:00:01.476) 0:00:10.515 *****
2025-12-13 00:19:44,614 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:44,646 p=32461 u=zuul n=ansible | TASK [openshift_setup : Login into OpenShift internal registry output_dir={{ cifmw_openshift_setup_basedir }}/artifacts, script=podman login -u {{ cifmw_openshift_user }} -p {{ cifmw_openshift_token }} {%- if cifmw_openshift_setup_skip_internal_registry_tls_verify|bool %} --tls-verify=false {%- endif %} {{ cifmw_openshift_setup_registry_default_route.resources[0].spec.host }}] ***
2025-12-13 00:19:44,646 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:44 +0000 (0:00:00.985) 0:00:11.501 *****
2025-12-13 00:19:44,647 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:44 +0000 (0:00:00.985) 0:00:11.500 *****
2025-12-13 00:19:44,717 p=32461 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_001_login_into_openshift_internal.log
2025-12-13 00:19:44,986 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:45,005 p=32461 u=zuul n=ansible | TASK [Ensure we have custom CA installed on host role=install_ca] **************
2025-12-13 00:19:45,005 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.358) 0:00:11.859 *****
2025-12-13 00:19:45,005 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.358) 0:00:11.859 *****
2025-12-13 00:19:45,028 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,048 p=32461 u=zuul n=ansible | TASK [openshift_setup : Update ca bundle _raw_params=update-ca-trust extract] ***
2025-12-13 00:19:45,049 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.043) 0:00:11.903 *****
2025-12-13 00:19:45,049 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.043) 0:00:11.902 *****
2025-12-13 00:19:45,070 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,117 p=32461 u=zuul n=ansible | TASK [openshift_setup : Slurp CAs file src={{ cifmw_openshift_setup_ca_bundle_path }}] ***
2025-12-13 00:19:45,118 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.069) 0:00:11.972 *****
2025-12-13 00:19:45,118 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.068) 0:00:11.971 *****
2025-12-13 00:19:45,137 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,147 p=32461 u=zuul n=ansible | TASK [openshift_setup : Create config map with registry CAs kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'namespace': 'openshift-config', 'name': 'registry-cas'}, 'data': '{{ _config_map_data | items2dict }}'}] ***
2025-12-13 00:19:45,148 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.030) 0:00:12.002 *****
2025-12-13 00:19:45,148 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.030) 0:00:12.001 *****
2025-12-13 00:19:45,182 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,192 p=32461 u=zuul n=ansible | TASK [openshift_setup : Install Red Hat CA for pulling images from internal registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'additionalTrustedCA': {'name': 'registry-cas'}}}] ***
2025-12-13 00:19:45,192 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.044) 0:00:12.046 *****
2025-12-13 00:19:45,192 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.044) 0:00:12.046 *****
2025-12-13 00:19:45,227 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,239 p=32461 u=zuul n=ansible | TASK [openshift_setup : Add insecure registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'registrySources': {'insecureRegistries': ['{{ cifmw_update_containers_registry }}'], 'allowedRegistries': '{{ all_registries }}'}}}] ***
2025-12-13 00:19:45,239 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.047) 0:00:12.094 *****
2025-12-13 00:19:45,239 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.047) 0:00:12.093 *****
2025-12-13 00:19:45,263 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,273 p=32461 u=zuul n=ansible | TASK [openshift_setup : Create an ICSP with repository digest mirrors kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'operator.openshift.io/v1alpha1', 'kind': 'ImageContentSourcePolicy', 'metadata': {'name': 'registry-digest-mirrors'}, 'spec': {'repositoryDigestMirrors': '{{ cifmw_openshift_setup_digest_mirrors }}'}}] ***
2025-12-13 00:19:45,273 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.033) 0:00:12.127 *****
2025-12-13 00:19:45,273 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.033) 0:00:12.127 *****
2025-12-13 00:19:45,306 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,320 p=32461 u=zuul n=ansible | TASK [openshift_setup : Gather network.operator info kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=operator.openshift.io/v1, kind=Network, name=cluster] ***
2025-12-13 00:19:45,320 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.047) 0:00:12.174 *****
2025-12-13 00:19:45,320 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.047) 0:00:12.174 *****
2025-12-13 00:19:46,142 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:46,164 p=32461 u=zuul n=ansible | TASK [openshift_setup : Patch network operator api_version=operator.openshift.io/v1, kubeconfig={{ cifmw_openshift_kubeconfig }}, kind=Network, name=cluster, persist_config=True, patch=[{'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/routingViaHost', 'value': True, 'op': 'replace'}, {'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/ipForwarding', 'value': 'Global', 'op': 'replace'}]] ***
2025-12-13 00:19:46,164 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:46 +0000 (0:00:00.843) 0:00:13.018 *****
2025-12-13 00:19:46,164 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:46 +0000 (0:00:00.843) 0:00:13.018 *****
2025-12-13 00:19:47,150 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:47,161 p=32461 u=zuul n=ansible | TASK [openshift_setup : Patch samples registry configuration kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=samples.operator.openshift.io/v1, kind=Config, name=cluster, patch=[{'op': 'replace', 'path': '/spec/samplesRegistry', 'value': 'registry.redhat.io'}]] ***
2025-12-13 00:19:47,161 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.996) 0:00:14.015 *****
2025-12-13 00:19:47,161 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.996) 0:00:14.015 *****
2025-12-13 00:19:47,890 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:47,904 p=32461 u=zuul n=ansible | TASK [openshift_setup : Delete the pods from openshift-marketplace namespace kind=Pod, state=absent, delete_all=True, kubeconfig={{ cifmw_openshift_kubeconfig }}, namespace=openshift-marketplace] ***
2025-12-13 00:19:47,904 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.743) 0:00:14.758 *****
2025-12-13 00:19:47,904 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.743) 0:00:14.758 *****
2025-12-13 00:19:47,923 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:47,937 p=32461 u=zuul n=ansible | TASK [openshift_setup : Wait for openshift-marketplace pods to be running _raw_params=oc wait pod --all --for=condition=Ready -n openshift-marketplace --timeout=1m] ***
2025-12-13 00:19:47,937 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.032) 0:00:14.791 ***** 2025-12-13 00:19:47,937 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.032) 0:00:14.791 ***** 2025-12-13 00:19:47,964 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,012 p=32461 u=zuul n=ansible | TASK [Deploy Observability operator. name=openshift_obs] *********************** 2025-12-13 00:19:48,013 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.075) 0:00:14.867 ***** 2025-12-13 00:19:48,013 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.075) 0:00:14.866 ***** 2025-12-13 00:19:48,045 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,054 p=32461 u=zuul n=ansible | TASK [Deploy Metal3 BMHs name=deploy_bmh] ************************************** 2025-12-13 00:19:48,055 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.042) 0:00:14.909 ***** 2025-12-13 00:19:48,055 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.042) 0:00:14.908 ***** 2025-12-13 00:19:48,081 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,093 p=32461 u=zuul n=ansible | TASK [Install certmanager operator role name=cert_manager] ********************* 2025-12-13 00:19:48,093 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.038) 0:00:14.947 ***** 2025-12-13 00:19:48,093 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.038) 0:00:14.947 ***** 2025-12-13 00:19:48,115 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,126 p=32461 u=zuul n=ansible | TASK [Configure hosts networking using nmstate name=ci_nmstate] **************** 2025-12-13 00:19:48,126 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.033) 0:00:14.980 ***** 2025-12-13 00:19:48,126 p=32461 u=zuul n=ansible | Saturday 13 
December 2025 00:19:48 +0000 (0:00:00.033) 0:00:14.980 ***** 2025-12-13 00:19:48,148 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,159 p=32461 u=zuul n=ansible | TASK [Configure multus networks name=ci_multus] ******************************** 2025-12-13 00:19:48,159 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.032) 0:00:15.013 ***** 2025-12-13 00:19:48,159 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.033) 0:00:15.013 ***** 2025-12-13 00:19:48,179 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,192 p=32461 u=zuul n=ansible | TASK [Deploy Sushy Emulator service pod name=sushy_emulator] ******************* 2025-12-13 00:19:48,192 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.032) 0:00:15.046 ***** 2025-12-13 00:19:48,192 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.032) 0:00:15.046 ***** 2025-12-13 00:19:48,212 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,223 p=32461 u=zuul n=ansible | TASK [Setup Libvirt on controller name=libvirt_manager] ************************ 2025-12-13 00:19:48,223 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.030) 0:00:15.077 ***** 2025-12-13 00:19:48,223 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.030) 0:00:15.076 ***** 2025-12-13 00:19:48,241 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,251 p=32461 u=zuul n=ansible | TASK [Prepare container package builder name=pkg_build] ************************ 2025-12-13 00:19:48,251 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.028) 0:00:15.106 ***** 2025-12-13 00:19:48,252 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.028) 0:00:15.105 ***** 2025-12-13 00:19:48,286 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,299 p=32461 u=zuul n=ansible | 
TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 00:19:48,299 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.047) 0:00:15.154 ***** 2025-12-13 00:19:48,299 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.047) 0:00:15.153 ***** 2025-12-13 00:19:48,353 p=32461 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:48,363 p=32461 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2025-12-13 00:19:48,363 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.063) 0:00:15.217 ***** 2025-12-13 00:19:48,363 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.063) 0:00:15.217 ***** 2025-12-13 00:19:48,423 p=32461 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:48,434 p=32461 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_infra _raw_params={{ hook.type }}.yml] *** 2025-12-13 00:19:48,434 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.071) 0:00:15.289 ***** 2025-12-13 00:19:48,435 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.071) 0:00:15.288 ***** 2025-12-13 00:19:48,491 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | localhost : ok=35 changed=12 unreachable=0 failed=0 skipped=34 rescued=0 ignored=0 2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.096) 0:00:15.385 ***** 2025-12-13 00:19:48,531 
p=32461 u=zuul n=ansible | ===============================================================================
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Create required namespaces ---------------------------- 1.75s
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces --- 1.48s
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Get internal OpenShift registry route ----------------- 1.06s
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Patch network operator -------------------------------- 1.00s
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Wait for the image registry to be ready --------------- 0.99s
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Gather network.operator info -------------------------- 0.84s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_setup : Patch samples registry configuration ------------------ 0.74s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Create the openshift_login parameters file ------------ 0.58s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Append the KUBECONFIG to the install yamls parameters --- 0.54s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch OpenShift token --------------------------------- 0.50s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch new OpenShift access token ---------------------- 0.49s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch OpenShift kubeconfig context -------------------- 0.37s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Ensure output directory exists ------------------------ 0.37s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_setup : Login into OpenShift internal registry ---------------- 0.36s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch OpenShift API URL ------------------------------- 0.36s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch OpenShift current user -------------------------- 0.36s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Read the install yamls parameters file ---------------- 0.33s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | networking_mapper : Check for Networking Environment Definition file existence --- 0.30s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_setup : Ensure output directory exists ------------------------ 0.24s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Check if kubeconfig exists ---------------------------- 0.22s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.097)       0:00:15.385 *****
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | ===============================================================================
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_setup --------------------------------------------------------- 8.94s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login --------------------------------------------------------- 4.78s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | ansible.builtin.include_role -------------------------------------------- 0.51s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | run_hook ---------------------------------------------------------------- 0.51s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | networking_mapper ------------------------------------------------------- 0.43s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | ansible.builtin.include_vars -------------------------------------------- 0.17s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | total
------------------------------------------------------------------ 15.34s
2025-12-13 00:20:09,545 p=33055 u=zuul n=ansible | Starting galaxy collection install process
2025-12-13 00:20:09,565 p=33055 u=zuul n=ansible | Process install dependency map
2025-12-13 00:20:29,957 p=33055 u=zuul n=ansible | Starting collection install process
2025-12-13 00:20:29,957 p=33055 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+b9f05e2b' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general'
2025-12-13 00:20:30,598 p=33055 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+b9f05e2b at /home/zuul/.ansible/collections/ansible_collections/cifmw/general
2025-12-13 00:20:30,599 p=33055 u=zuul n=ansible | cifmw.general:1.0.0+b9f05e2b was installed successfully
2025-12-13 00:20:30,599 p=33055 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman'
2025-12-13 00:20:30,665 p=33055 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman
2025-12-13 00:20:30,665 p=33055 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully
2025-12-13 00:20:30,665 p=33055 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general'
2025-12-13 00:20:31,594 p=33055 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general
2025-12-13 00:20:31,595 p=33055 u=zuul n=ansible | community.general:10.0.1 was installed successfully
2025-12-13 00:20:31,595 p=33055 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix'
2025-12-13 00:20:31,654 p=33055 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix
2025-12-13 00:20:31,655 p=33055 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully
2025-12-13 00:20:31,655 p=33055 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils'
2025-12-13 00:20:31,775 p=33055 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils
2025-12-13 00:20:31,775 p=33055 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully
2025-12-13 00:20:31,775 p=33055 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt'
2025-12-13 00:20:31,818 p=33055 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt
2025-12-13 00:20:31,818 p=33055 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully
2025-12-13 00:20:31,818 p=33055 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto'
2025-12-13 00:20:32,018 p=33055 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto
2025-12-13 00:20:32,018 p=33055 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully
2025-12-13 00:20:32,018 p=33055 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core'
2025-12-13 00:20:32,191 p=33055 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core
2025-12-13 00:20:32,191 p=33055 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully
2025-12-13 00:20:32,191 p=33055 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon'
2025-12-13 00:20:32,301 p=33055 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon
2025-12-13 00:20:32,301 p=33055 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully
2025-12-13 00:20:32,301 p=33055 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template'
2025-12-13 00:20:32,325 p=33055 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template
2025-12-13 00:20:32,325 p=33055 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully
2025-12-13 00:20:32,325 p=33055 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos'
2025-12-13 00:20:32,624 p=33055 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos
2025-12-13 00:20:32,624 p=33055 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully
2025-12-13 00:20:32,624 p=33055 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios'
2025-12-13 00:20:32,953 p=33055 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios
2025-12-13 00:20:32,954 p=33055 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully
2025-12-13 00:20:32,954 p=33055 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx'
2025-12-13 00:20:33,007 p=33055 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx
2025-12-13 00:20:33,007 p=33055 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully
2025-12-13 00:20:33,007 p=33055
u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd' 2025-12-13 00:20:33,066 p=33055 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd 2025-12-13 00:20:33,067 p=33055 u=zuul n=ansible | community.okd:4.0.0 was installed successfully 2025-12-13 00:20:33,067 p=33055 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@' 2025-12-13 00:20:33,186 p=33055 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@ 2025-12-13 00:20:33,186 p=33055 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully home/zuul/zuul-output/logs/ci-framework-data/0000755000175000017500000000000015117130674020370 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/logs/0000755000175000017500000000000015117130662021331 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_000_fetch_openshift.log0000644000175000017500000000035215117130427027621 0ustar zuulzuulWARNING: Using insecure TLS client config. Setting this option is not supported! Login successful. You have access to 64 projects, the list has been suppressed. You can list all projects with 'oc projects' Using project "default". home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_001_login_into_openshift_internal.log0000644000175000017500000000002115117130440032552 0ustar zuulzuulLogin Succeeded! 
home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_000_check_for_oc.log0000644000175000017500000000002215117130635027050 0ustar zuulzuul/home/zuul/bin/oc home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_000_run_openstack_must_gather.log0000644000175000017500000001060515117130637031733 0ustar zuulzuul[must-gather ] OUT 2025-12-13T00:21:50.316163835Z Using must-gather plug-in image: quay.io/openstack-k8s-operators/openstack-must-gather:latest When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: a84dabf3-edcf-4828-b6a1-f9d3a6f02304 ClientVersion: 4.20.6 ClusterVersion: Stable at "4.16.0" ClusterOperators: clusteroperator/authentication is not available (WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)) because All is well clusteroperator/kube-apiserver is progressing: NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13 
clusteroperator/machine-config is degraded because Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused clusteroperator/cloud-credential is missing clusteroperator/cluster-autoscaler is missing clusteroperator/insights is missing clusteroperator/monitoring is missing clusteroperator/storage is missing [must-gather ] OUT 2025-12-13T00:21:50.360262922Z namespace/openshift-must-gather-zffxd created [must-gather ] OUT 2025-12-13T00:21:50.363875481Z clusterrolebinding.rbac.authorization.k8s.io/must-gather-k4ftt created [must-gather ] OUT 2025-12-13T00:21:51.446119591Z namespace/openshift-must-gather-zffxd deleted Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: a84dabf3-edcf-4828-b6a1-f9d3a6f02304 ClientVersion: 4.20.6 ClusterVersion: Stable at "4.16.0" ClusterOperators: clusteroperator/authentication is not available (WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, 
Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)) because All is well clusteroperator/kube-apiserver is progressing: NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13 clusteroperator/machine-config is degraded because Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused clusteroperator/cloud-credential is missing clusteroperator/cluster-autoscaler is missing clusteroperator/insights is missing clusteroperator/monitoring is missing clusteroperator/storage is missing Error from server (Forbidden): pods "must-gather-" is forbidden: error looking up service account openshift-must-gather-zffxd/default: serviceaccount "default" not found home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_000_prepare_root_ssh.log0000644000175000017500000045036115117130645030042 0ustar zuulzuulPseudo-terminal will not be allocated because stdin is not a terminal. Red Hat Enterprise Linux CoreOS 416.94.202406172220-0 Part of OpenShift 4.16, RHCOS is a Kubernetes-native operating system managed by the Machine Config Operator (`clusteroperator/machine-config`). 
WARNING: Direct SSH access to machines is not recommended; instead, make configuration changes via `machineconfig` objects: https://docs.openshift.com/container-platform/4.16/architecture/architecture-rhcos.html --- + test -d /etc/ssh/sshd_config.d/ + sudo sed -ri 's/PermitRootLogin no/PermitRootLogin prohibit-password/' '/etc/ssh/sshd_config.d/*' sed: can't read /etc/ssh/sshd_config.d/*: No such file or directory + true + sudo sed -i 's/PermitRootLogin no/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config + sudo systemctl restart sshd + sudo cp -r .ssh /root/ + sudo chown -R root: /root/.ssh + mkdir -p /tmp/crc-logs-artifacts + sudo cp -av /ostree/deploy/rhcos/var/log/pods /tmp/crc-logs-artifacts/ '/ostree/deploy/rhcos/var/log/pods' -> '/tmp/crc-logs-artifacts/pods' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/registry-server' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/registry-server' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/registry-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/registry-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-utilities' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-utilities' 
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-utilities/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-utilities/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-content' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-content' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-content/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-content/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29426400-gwxp5_51a3de13-562b-4bfd-aacb-e8832c894074' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29426400-gwxp5_51a3de13-562b-4bfd-aacb-e8832c894074' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29426400-gwxp5_51a3de13-562b-4bfd-aacb-e8832c894074/collect-profiles' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29426400-gwxp5_51a3de13-562b-4bfd-aacb-e8832c894074/collect-profiles' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29426400-gwxp5_51a3de13-562b-4bfd-aacb-e8832c894074/collect-profiles/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29426400-gwxp5_51a3de13-562b-4bfd-aacb-e8832c894074/collect-profiles/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589' -> 
'/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/6.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/6.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/5.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/3.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/5.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/0.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/0.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/1.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/1.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/0.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/0.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/1.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/1.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/0.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/0.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/1.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/1.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/1.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/1.log'
'/ostree/deploy/rhcos/var/log/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/0.log' -> '/tmp/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-pruner-29426400-tlv26_e98bd1f0-fe98-48b9-82cb-86e924ade65d' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29426400-tlv26_e98bd1f0-fe98-48b9-82cb-86e924ade65d'
'/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-pruner-29426400-tlv26_e98bd1f0-fe98-48b9-82cb-86e924ade65d/image-pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29426400-tlv26_e98bd1f0-fe98-48b9-82cb-86e924ade65d/image-pruner'
'/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-pruner-29426400-tlv26_e98bd1f0-fe98-48b9-82cb-86e924ade65d/image-pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29426400-tlv26_e98bd1f0-fe98-48b9-82cb-86e924ade65d/image-pruner/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8'
'/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-registry-75b7bb6564-rnjvj_d73c4e63-30ef-4915-925d-f44201c612ec' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-rnjvj_d73c4e63-30ef-4915-925d-f44201c612ec' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-registry-75b7bb6564-rnjvj_d73c4e63-30ef-4915-925d-f44201c612ec/registry' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-rnjvj_d73c4e63-30ef-4915-925d-f44201c612ec/registry' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_image-registry-75b7bb6564-rnjvj_d73c4e63-30ef-4915-925d-f44201c612ec/registry/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-rnjvj_d73c4e63-30ef-4915-925d-f44201c612ec/registry/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0' -> 
'/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy' 
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/1.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator' 
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/2.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e' 
'/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router' 
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/3.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/4.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/4.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-utilities' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-utilities' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-utilities/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-utilities/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-content' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-content' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-content/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-content/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/registry-server' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/registry-server' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/registry-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/registry-server/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79'
'/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container'
'/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/8.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/8.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/5.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/9.log' -> '/tmp/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/9.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883'
'/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api'
'/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/3.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c'
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c/marketplace-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c/marketplace-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c/marketplace-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c/marketplace-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c/marketplace-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c/marketplace-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e'
'/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/4.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/4.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/3.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/6.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/6.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1'
'/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console'
'/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846'
'/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver'
'/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook'
'/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17'
'/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/31c30f7e9c0640f9acb17a855004a8cb4f32333c72b230eb85e614bdf2c07efe.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/31c30f7e9c0640f9acb17a855004a8cb4f32333c72b230eb85e614bdf2c07efe.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/3.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/3.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/2.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/0.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/1.log'
'/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a'
'/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c' -> 
'/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/5.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/5.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/7.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/7.log' '/ostree/deploy/rhcos/var/log/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/6.log' -> '/tmp/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/6.log' '/ostree/deploy/rhcos/var/log/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342' -> '/tmp/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342' '/ostree/deploy/rhcos/var/log/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager' -> '/tmp/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager' 
'/ostree/deploy/rhcos/var/log/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/2.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/1.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/1.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver' '/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/extract-content' -> 
'/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/extract-content' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/extract-content/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/extract-content/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/registry-server' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/registry-server' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/registry-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/registry-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/extract-utilities' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/extract-utilities' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/extract-utilities/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/extract-utilities/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift' -> 
'/tmp/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29426415-vhdrh_7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29426415-vhdrh_7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29426415-vhdrh_7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2/collect-profiles' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29426415-vhdrh_7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2/collect-profiles' '/ostree/deploy/rhcos/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29426415-vhdrh_7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2/collect-profiles/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29426415-vhdrh_7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2/collect-profiles/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-13-crc_7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-13-crc_7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-13-crc_7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/2.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/1.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/2.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/2.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124' -> 
'/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/extract-utilities' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/extract-utilities' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/extract-utilities/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/extract-utilities/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/registry-server' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/registry-server' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/registry-server/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/registry-server/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/extract-content' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/extract-content' '/ostree/deploy/rhcos/var/log/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/extract-content/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/extract-content/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/1.log' -> '/tmp/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/1.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer' '/ostree/deploy/rhcos/var/log/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/nbdb' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/nbdb' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/nbdb/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/nbdb/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kubecfg-setup' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kubecfg-setup' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kubecfg-setup/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kubecfg-setup/0.log' 
'/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/sbdb' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/sbdb' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/sbdb/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/sbdb/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovn-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovn-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovn-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovn-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovnkube-controller' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovnkube-controller' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovnkube-controller/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovnkube-controller/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovn-acl-logging' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovn-acl-logging' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovn-acl-logging/0.log' -> 
'/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovn-acl-logging/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kube-rbac-proxy-node' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kube-rbac-proxy-node' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kube-rbac-proxy-node/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kube-rbac-proxy-node/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kube-rbac-proxy-ovn-metrics' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kube-rbac-proxy-ovn-metrics' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kube-rbac-proxy-ovn-metrics/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kube-rbac-proxy-ovn-metrics/0.log' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/northd' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/northd' '/ostree/deploy/rhcos/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/northd/0.log' -> '/tmp/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/northd/0.log' + sudo chown -R core:core /tmp/crc-logs-artifacts home/zuul/zuul-output/logs/ci-framework-data/logs/ci_script_000_copy_logs_from_crc.log0000644000175000017500000003475315117130647030341 0ustar zuulzuulExecuting: 
program /usr/bin/ssh host api.crc.testing, user core, command sftp OpenSSH_9.9p1, OpenSSL 3.5.1 1 Jul 2025 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf debug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config debug1: configuration requests final Match pass debug1: re-parsing configuration debug1: Reading configuration data /etc/ssh/ssh_config debug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf debug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config debug1: Connecting to api.crc.testing [38.102.83.51] port 22. debug1: Connection established. debug1: identity file /home/zuul/.ssh/id_cifw type 2 debug1: identity file /home/zuul/.ssh/id_cifw-cert type -1 debug1: Local version string SSH-2.0-OpenSSH_9.9 debug1: Remote protocol version 2.0, remote software version OpenSSH_8.7 debug1: compat_banner: match: OpenSSH_8.7 pat OpenSSH* compat 0x04000000 debug1: Authenticating to api.crc.testing:22 as 'core' debug1: load_hostkeys: fopen /home/zuul/.ssh/known_hosts2: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: algorithm: curve25519-sha256 debug1: kex: host key algorithm: ssh-ed25519 debug1: kex: server->client cipher: aes256-gcm@openssh.com MAC: compression: none debug1: kex: client->server cipher: aes256-gcm@openssh.com MAC: compression: none debug1: kex: curve25519-sha256 need=32 dh_need=32 debug1: kex: curve25519-sha256 need=32 dh_need=32 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: SSH2_MSG_KEX_ECDH_REPLY received debug1: Server host key: ssh-ed25519 SHA256:/ZfZ15bRL0d31T2CAq03Iw4h8DAqA2+9vySbGcnzmJo debug1: load_hostkeys: fopen /home/zuul/.ssh/known_hosts2: No such file or directory debug1: 
load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory debug1: Host 'api.crc.testing' is known and matches the ED25519 host key. debug1: Found key in /home/zuul/.ssh/known_hosts:22 debug1: ssh_packet_send2_wrapped: resetting send seqnr 3 debug1: rekey out after 4294967296 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: ssh_packet_read_poll2: resetting read seqnr 3 debug1: SSH2_MSG_NEWKEYS received debug1: rekey in after 4294967296 blocks debug1: SSH2_MSG_EXT_INFO received debug1: kex_ext_info_client_parse: server-sig-algs= debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug1: Next authentication method: gssapi-with-mic debug1: No credentials were supplied, or the credentials were unavailable or inaccessible No Kerberos credentials available (default cache: KCM:) debug1: No credentials were supplied, or the credentials were unavailable or inaccessible No Kerberos credentials available (default cache: KCM:) debug1: Next authentication method: publickey debug1: Will attempt key: /home/zuul/.ssh/id_cifw ECDSA SHA256:lIhYtzVYchHR6SkulS8mQVxADvItChYzaAVLst6CGlE explicit debug1: Offering public key: /home/zuul/.ssh/id_cifw ECDSA SHA256:lIhYtzVYchHR6SkulS8mQVxADvItChYzaAVLst6CGlE explicit debug1: Server accepts key: /home/zuul/.ssh/id_cifw ECDSA SHA256:lIhYtzVYchHR6SkulS8mQVxADvItChYzaAVLst6CGlE explicit Authenticated to api.crc.testing ([38.102.83.51]:22) using "publickey". debug1: pkcs11_del_provider: called, provider_id = (null) debug1: channel 0: new session [client-session] (inactive timeout: 0) debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. 
debug1: pledge: filesystem debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0 debug1: client_input_hostkeys: searching /home/zuul/.ssh/known_hosts for api.crc.testing / (none) debug1: client_input_hostkeys: searching /home/zuul/.ssh/known_hosts2 for api.crc.testing / (none) debug1: client_input_hostkeys: hostkeys file /home/zuul/.ssh/known_hosts2 does not exist debug1: client_input_hostkeys: no new or deprecated keys from server debug1: Remote: /var/home/core/.ssh/authorized_keys:28: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding debug1: Remote: /var/home/core/.ssh/authorized_keys:28: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding debug1: Sending subsystem: sftp debug1: pledge: fork scp: debug1: Fetching /tmp/crc-logs-artifacts/ to /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts scp: debug1: truncating at 4829 scp: debug1: truncating at 4640 scp: debug1: truncating at 4680 scp: debug1: truncating at 5158 scp: debug1: truncating at 1812039 scp: debug1: truncating at 8190 scp: debug1: truncating at 2347 scp: debug1: truncating at 0 scp: debug1: truncating at 2415 scp: debug1: truncating at 59795 scp: debug1: truncating at 36136 scp: debug1: truncating at 61484 scp: debug1: truncating at 1917 scp: debug1: truncating at 0 scp: debug1: truncating at 440 scp: debug1: truncating at 0 scp: debug1: truncating at 1976 scp: debug1: truncating at 1973 scp: debug1: truncating at 19721 scp: debug1: truncating at 59909 scp: debug1: truncating at 43448 scp: debug1: truncating at 9094 scp: debug1: truncating at 3376 scp: debug1: truncating at 5142 scp: debug1: truncating at 132206 scp: debug1: truncating at 50053 scp: debug1: truncating at 59236 scp: debug1: truncating at 85 scp: debug1: truncating at 85 scp: debug1: truncating at 85 scp: debug1: truncating at 5215 scp: debug1: truncating at 2494 scp: debug1: truncating at 6351 scp: debug1: truncating at 59637 scp: debug1: truncating at 736 
scp: debug1: truncating at 23257 scp: debug1: truncating at 30155 scp: debug1: truncating at 0 scp: debug1: truncating at 440 scp: debug1: truncating at 0 scp: debug1: truncating at 312254 scp: debug1: truncating at 265 scp: debug1: truncating at 22010 scp: debug1: truncating at 116 scp: debug1: truncating at 11371 scp: debug1: truncating at 1648 scp: debug1: truncating at 120 scp: debug1: truncating at 381 scp: debug1: truncating at 1875 scp: debug1: truncating at 2038 scp: debug1: truncating at 9955 scp: debug1: truncating at 4976 scp: debug1: truncating at 102215 scp: debug1: truncating at 93408 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 23827 scp: debug1: truncating at 27959 scp: debug1: truncating at 96 scp: debug1: truncating at 0 scp: debug1: truncating at 122585 scp: debug1: truncating at 78087 scp: debug1: truncating at 64331 scp: debug1: truncating at 574827 scp: debug1: truncating at 313250 scp: debug1: truncating at 411501 scp: debug1: truncating at 25403 scp: debug1: truncating at 27324 scp: debug1: truncating at 51603 scp: debug1: truncating at 11526 scp: debug1: truncating at 75 scp: debug1: truncating at 43870 scp: debug1: truncating at 127371 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 7599 scp: debug1: truncating at 7732 scp: debug1: truncating at 12999 scp: debug1: truncating at 12595 scp: debug1: truncating at 10649 scp: debug1: truncating at 571 scp: debug1: truncating at 909 scp: debug1: truncating at 9654 scp: debug1: truncating at 13139 scp: debug1: truncating at 22065 scp: debug1: truncating at 1187 scp: debug1: truncating at 1054 scp: debug1: truncating at 14404 scp: debug1: truncating at 4051 scp: debug1: truncating at 664 scp: debug1: truncating at 4037 scp: debug1: truncating at 112225 scp: debug1: truncating at 110570 scp: debug1: truncating at 30933 scp: debug1: truncating at 83 scp: debug1: truncating at 0 scp: debug1: truncating at 1212 scp: debug1: 
truncating at 1345 scp: debug1: truncating at 36951 scp: debug1: truncating at 16333 scp: debug1: truncating at 26246 scp: debug1: truncating at 39824 scp: debug1: truncating at 28909 scp: debug1: truncating at 248443 scp: debug1: truncating at 23768 scp: debug1: truncating at 26036 scp: debug1: truncating at 58729 scp: debug1: truncating at 3886 scp: debug1: truncating at 760212 scp: debug1: truncating at 14943 scp: debug1: truncating at 11449 scp: debug1: truncating at 12256 scp: debug1: truncating at 1042 scp: debug1: truncating at 0 scp: debug1: truncating at 1042 scp: debug1: truncating at 13872 scp: debug1: truncating at 11354 scp: debug1: truncating at 2590 scp: debug1: truncating at 6726 scp: debug1: truncating at 4968 scp: debug1: truncating at 10461 scp: debug1: truncating at 2076 scp: debug1: truncating at 132818 scp: debug1: truncating at 229071 scp: debug1: truncating at 737559 scp: debug1: truncating at 58535 scp: debug1: truncating at 60383 scp: debug1: truncating at 44793 scp: debug1: truncating at 602 scp: debug1: truncating at 2922 scp: debug1: truncating at 409724 scp: debug1: truncating at 1508074 scp: debug1: truncating at 491271 scp: debug1: truncating at 217484 scp: debug1: truncating at 241466 scp: debug1: truncating at 5135 scp: debug1: truncating at 10108 scp: debug1: truncating at 17548 scp: debug1: truncating at 33371 scp: debug1: truncating at 31684 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 2031 scp: debug1: truncating at 46063 scp: debug1: truncating at 17132 scp: debug1: truncating at 3764 scp: debug1: truncating at 391605 scp: debug1: truncating at 890 scp: debug1: truncating at 163920 scp: debug1: truncating at 61 scp: debug1: truncating at 61 scp: debug1: truncating at 440 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 31683 scp: debug1: truncating at 6688 scp: debug1: truncating at 53338 scp: debug1: truncating at 3574 scp: debug1: truncating at 52224 
scp: debug1: truncating at 37860 scp: debug1: truncating at 23357 scp: debug1: truncating at 46647 scp: debug1: truncating at 20580 scp: debug1: truncating at 296671 scp: debug1: truncating at 100273 scp: debug1: truncating at 162054 scp: debug1: truncating at 36391 scp: debug1: truncating at 34528 scp: debug1: truncating at 39921 scp: debug1: truncating at 91140 scp: debug1: truncating at 39993 scp: debug1: truncating at 1939205 scp: debug1: truncating at 11627525 scp: debug1: truncating at 7599 scp: debug1: truncating at 7732 scp: debug1: truncating at 5648 scp: debug1: truncating at 7598 scp: debug1: truncating at 10792 scp: debug1: truncating at 1212 scp: debug1: truncating at 1345 scp: debug1: truncating at 46476 scp: debug1: truncating at 136260 scp: debug1: truncating at 1173 scp: debug1: truncating at 1040 scp: debug1: truncating at 1276 scp: debug1: truncating at 1386 scp: debug1: truncating at 736 scp: debug1: truncating at 24056 scp: debug1: truncating at 27628 scp: debug1: truncating at 189141 scp: debug1: truncating at 475401 scp: debug1: truncating at 222883 scp: debug1: truncating at 408 scp: debug1: truncating at 408 scp: debug1: truncating at 411 scp: debug1: truncating at 411 scp: debug1: truncating at 392 scp: debug1: truncating at 392 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 404 scp: debug1: truncating at 404 scp: debug1: truncating at 414 scp: debug1: truncating at 414 scp: debug1: truncating at 80 scp: debug1: truncating at 80 scp: debug1: truncating at 157417 scp: debug1: truncating at 56640 scp: debug1: truncating at 1316 scp: debug1: truncating at 1183 scp: debug1: truncating at 164870 scp: debug1: truncating at 38969 scp: debug1: truncating at 172802 scp: debug1: truncating at 769174 scp: debug1: truncating at 227835 scp: debug1: truncating at 112444 scp: debug1: truncating at 59262 scp: debug1: truncating at 214174 scp: debug1: truncating at 84973 scp: debug1: truncating at 27335 scp: debug1: 
truncating at 40893 scp: debug1: truncating at 1040 scp: debug1: truncating at 1173 scp: debug1: truncating at 1437 scp: debug1: truncating at 65035 scp: debug1: truncating at 16179 scp: debug1: truncating at 1533 scp: debug1: truncating at 1533 scp: debug1: truncating at 45289 scp: debug1: truncating at 180933 scp: debug1: truncating at 396 scp: debug1: truncating at 396 scp: debug1: truncating at 11549 scp: debug1: truncating at 13994 scp: debug1: truncating at 1040 scp: debug1: truncating at 1173 scp: debug1: truncating at 43448 scp: debug1: truncating at 27628 scp: debug1: truncating at 23806 scp: debug1: truncating at 224902 scp: debug1: truncating at 8727966 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 74 scp: debug1: truncating at 74 scp: debug1: truncating at 74 scp: debug1: truncating at 17961 scp: debug1: truncating at 17961 scp: debug1: truncating at 20593 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 240 scp: debug1: truncating at 1184 scp: debug1: truncating at 518 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 1040 scp: debug1: truncating at 1173 scp: debug1: truncating at 26290 scp: debug1: truncating at 29309 scp: debug1: truncating at 43180 scp: debug1: truncating at 48554 scp: debug1: truncating at 3012 scp: debug1: truncating at 614882 scp: debug1: truncating at 1040 scp: debug1: truncating at 1173 scp: debug1: truncating at 77945 scp: debug1: truncating at 324520 scp: debug1: truncating at 193188 scp: debug1: truncating at 2192 scp: debug1: truncating at 1509 scp: debug1: truncating at 5045 scp: debug1: truncating at 101 scp: debug1: truncating at 101 scp: debug1: truncating at 101 scp: debug1: truncating at 21411 scp: debug1: truncating at 16511 scp: debug1: truncating at 79965 scp: debug1: truncating at 15339 scp: debug1: truncating at 1212 
scp: debug1: truncating at 1345 scp: debug1: truncating at 739 scp: debug1: truncating at 0 scp: debug1: truncating at 0 scp: debug1: truncating at 440 debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: channel 0: free: client-session, nchannels 1 Transferred: sent 169112, received 40753632 bytes, in 1.5 seconds Bytes per second: sent 109210.0, received 26318075.9 debug1: Exit status 0 home/zuul/zuul-output/logs/ci-framework-data/logs/ansible.log0000644000175000017500000046034115117130654023462 0ustar zuulzuul2025-12-13 00:17:56,890 p=31003 u=zuul n=ansible | Starting galaxy collection install process 2025-12-13 00:17:56,891 p=31003 u=zuul n=ansible | Process install dependency map 2025-12-13 00:18:20,697 p=31003 u=zuul n=ansible | Starting collection install process 2025-12-13 00:18:20,698 p=31003 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+b9f05e2b' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general' 2025-12-13 00:18:21,183 p=31003 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+b9f05e2b at /home/zuul/.ansible/collections/ansible_collections/cifmw/general 2025-12-13 00:18:21,183 p=31003 u=zuul n=ansible | cifmw.general:1.0.0+b9f05e2b was installed successfully 2025-12-13 00:18:21,183 p=31003 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman' 2025-12-13 00:18:21,240 p=31003 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman 2025-12-13 00:18:21,240 p=31003 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully 2025-12-13 00:18:21,240 p=31003 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general' 2025-12-13 00:18:21,969 p=31003 u=zuul n=ansible | Created collection for community.general:10.0.1 at 
/home/zuul/.ansible/collections/ansible_collections/community/general
2025-12-13 00:18:21,969 p=31003 u=zuul n=ansible | community.general:10.0.1 was installed successfully
2025-12-13 00:18:21,969 p=31003 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix'
2025-12-13 00:18:22,020 p=31003 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix
2025-12-13 00:18:22,020 p=31003 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully
2025-12-13 00:18:22,020 p=31003 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils'
2025-12-13 00:18:22,116 p=31003 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils
2025-12-13 00:18:22,117 p=31003 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully
2025-12-13 00:18:22,117 p=31003 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt'
2025-12-13 00:18:22,141 p=31003 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt
2025-12-13 00:18:22,141 p=31003 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully
2025-12-13 00:18:22,141 p=31003 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto'
2025-12-13 00:18:22,284 p=31003 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto
2025-12-13 00:18:22,284 p=31003 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully
2025-12-13 00:18:22,284 p=31003 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core'
2025-12-13 00:18:22,401 p=31003 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core
2025-12-13 00:18:22,401 p=31003 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully
2025-12-13 00:18:22,401 p=31003 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon'
2025-12-13 00:18:22,472 p=31003 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon
2025-12-13 00:18:22,472 p=31003 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully
2025-12-13 00:18:22,472 p=31003 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template'
2025-12-13 00:18:22,492 p=31003 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template
2025-12-13 00:18:22,492 p=31003 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully
2025-12-13 00:18:22,492 p=31003 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos'
2025-12-13 00:18:22,717 p=31003 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos
2025-12-13 00:18:22,718 p=31003 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully
2025-12-13 00:18:22,718 p=31003 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios'
2025-12-13 00:18:22,959 p=31003 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios
2025-12-13 00:18:22,959 p=31003 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully
2025-12-13 00:18:22,959 p=31003 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx'
2025-12-13 00:18:22,990 p=31003 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx
2025-12-13 00:18:22,990 p=31003 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully
2025-12-13 00:18:22,991 p=31003 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd'
2025-12-13 00:18:23,018 p=31003 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd
2025-12-13 00:18:23,018 p=31003 u=zuul n=ansible | community.okd:4.0.0 was installed successfully
2025-12-13 00:18:23,018 p=31003 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@'
2025-12-13 00:18:23,105 p=31003 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@
2025-12-13 00:18:23,105 p=31003 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully
2025-12-13 00:18:31,182 p=31605 u=zuul n=ansible | PLAY [Bootstrap playbook] ******************************************************
2025-12-13 00:18:31,200 p=31605 u=zuul n=ansible | TASK [Gathering Facts ] ********************************************************
2025-12-13 00:18:31,200 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:31 +0000 (0:00:00.036) 0:00:00.036 *****
2025-12-13 00:18:31,200 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:31 +0000 (0:00:00.034) 0:00:00.034 *****
2025-12-13 00:18:32,216 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:32,238 p=31605 u=zuul n=ansible | TASK [Set custom cifmw PATH reusable fact cifmw_path={{ ansible_user_dir }}/.crc/bin:{{ ansible_user_dir }}/.crc/bin/oc:{{ ansible_user_dir }}/bin:{{ ansible_env.PATH }}, cacheable=True] ***
2025-12-13 00:18:32,239 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:01.038) 0:00:01.074 *****
2025-12-13 00:18:32,239 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:01.038) 0:00:01.073 *****
2025-12-13 00:18:32,270 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:32,276 p=31605 u=zuul n=ansible | TASK [Get customized parameters ci_framework_params={{ hostvars[inventory_hostname] | dict2items | selectattr("key", "match", "^(cifmw|pre|post)_(?!install_yamls|openshift_token|openshift_login|openshift_kubeconfig).*") | list | items2dict }}] ***
2025-12-13 00:18:32,277 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.038) 0:00:01.112 *****
2025-12-13 00:18:32,277 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.038) 0:00:01.111 *****
2025-12-13 00:18:32,327 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:32,335 p=31605 u=zuul n=ansible | TASK [install_ca : Ensure target directory exists path={{ cifmw_install_ca_trust_dir }}, state=directory, mode=0755] ***
2025-12-13 00:18:32,335 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.058) 0:00:01.170 *****
2025-12-13 00:18:32,335 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.058) 0:00:01.169 *****
2025-12-13 00:18:32,680 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:32,687 p=31605 u=zuul n=ansible | TASK [install_ca : Install internal CA from url url={{ cifmw_install_ca_url }}, dest={{ cifmw_install_ca_trust_dir }}, validate_certs={{ cifmw_install_ca_url_validate_certs | default(omit) }}, mode=0644] ***
2025-12-13 00:18:32,687 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.351) 0:00:01.522 *****
2025-12-13 00:18:32,687 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.351) 0:00:01.521 *****
2025-12-13 00:18:32,711 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:32,721 p=31605 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from inline dest={{ cifmw_install_ca_trust_dir }}/cifmw_inline_ca_bundle.crt, content={{ cifmw_install_ca_bundle_inline }}, mode=0644] ***
2025-12-13 00:18:32,721 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.033) 0:00:01.556 *****
2025-12-13 00:18:32,721 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.034) 0:00:01.555 *****
2025-12-13 00:18:32,747 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:32,757 p=31605 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from file dest={{ cifmw_install_ca_trust_dir }}/{{ cifmw_install_ca_bundle_src | basename }}, src={{ cifmw_install_ca_bundle_src }}, mode=0644] ***
2025-12-13 00:18:32,757 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.036) 0:00:01.593 *****
2025-12-13 00:18:32,757 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.036) 0:00:01.591 *****
2025-12-13 00:18:32,782 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:32,791 p=31605 u=zuul n=ansible | TASK [install_ca : Update ca bundle _raw_params=update-ca-trust] ***************
2025-12-13 00:18:32,792 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.034) 0:00:01.627 *****
2025-12-13 00:18:32,792 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.034) 0:00:01.626 *****
2025-12-13 00:18:34,310 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:34,321 p=31605 u=zuul n=ansible | TASK [repo_setup : Ensure directories are present path={{ cifmw_repo_setup_basedir }}/{{ item }}, state=directory, mode=0755] ***
2025-12-13 00:18:34,321 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:34 +0000 (0:00:01.529) 0:00:03.157 *****
2025-12-13 00:18:34,322 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:34 +0000 (0:00:01.529) 0:00:03.155 *****
2025-12-13 00:18:34,511 p=31605 u=zuul n=ansible | changed: [localhost] => (item=tmp)
2025-12-13 00:18:34,690 p=31605 u=zuul n=ansible | changed: [localhost] => (item=artifacts/repositories)
2025-12-13 00:18:34,853 p=31605 u=zuul n=ansible | changed: [localhost] => (item=venv/repo_setup)
2025-12-13 00:18:34,864 p=31605 u=zuul n=ansible | TASK [repo_setup : Make sure git-core package is installed name=git-core, state=present] ***
2025-12-13 00:18:34,864 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:34 +0000 (0:00:00.542) 0:00:03.699 *****
2025-12-13 00:18:34,864 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:34 +0000 (0:00:00.542) 0:00:03.698 *****
2025-12-13 00:18:35,899 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:35,907 p=31605 u=zuul n=ansible | TASK [repo_setup : Get repo-setup repository accept_hostkey=True, dest={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, repo={{ cifmw_repo_setup_src }}] ***
2025-12-13 00:18:35,907 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:35 +0000 (0:00:01.043) 0:00:04.743 *****
2025-12-13 00:18:35,907 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:35 +0000 (0:00:01.043) 0:00:04.741 *****
2025-12-13 00:18:36,860 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:36,870 p=31605 u=zuul n=ansible | TASK [repo_setup : Initialize python venv and install requirements virtualenv={{ cifmw_repo_setup_venv }}, requirements={{ cifmw_repo_setup_basedir }}/tmp/repo-setup/requirements.txt, virtualenv_command=python3 -m venv --system-site-packages --upgrade-deps] ***
2025-12-13 00:18:36,870 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:36 +0000 (0:00:00.962) 0:00:05.705 *****
2025-12-13 00:18:36,870 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:36 +0000 (0:00:00.962) 0:00:05.704 *****
2025-12-13 00:18:46,000 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:46,006 p=31605 u=zuul n=ansible | TASK [repo_setup : Install repo-setup package chdir={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, creates={{ cifmw_repo_setup_venv }}/bin/repo-setup, _raw_params={{ cifmw_repo_setup_venv }}/bin/python setup.py install] ***
2025-12-13 00:18:46,006 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:09.136) 0:00:14.842 *****
2025-12-13 00:18:46,006 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:09.136) 0:00:14.840 *****
2025-12-13 00:18:46,824 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:46,832 p=31605 u=zuul n=ansible | TASK [repo_setup : Set cifmw_repo_setup_dlrn_hash_tag from content provider cifmw_repo_setup_dlrn_hash_tag={{ content_provider_dlrn_md5_hash }}] ***
2025-12-13 00:18:46,832 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:00.825) 0:00:15.667 *****
2025-12-13 00:18:46,832 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:00.825) 0:00:15.666 *****
2025-12-13 00:18:46,850 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:46,856 p=31605 u=zuul n=ansible | TASK [repo_setup : Run repo-setup _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup {{ cifmw_repo_setup_promotion }} {{ cifmw_repo_setup_additional_repos }} -d {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} -b {{ cifmw_repo_setup_branch }} --rdo-mirror {{ cifmw_repo_setup_rdo_mirror }} {% if cifmw_repo_setup_dlrn_hash_tag | length > 0 %} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif %} -o {{ cifmw_repo_setup_output }}] ***
2025-12-13 00:18:46,856 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:00.024) 0:00:15.691 *****
2025-12-13 00:18:46,856 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:00.023) 0:00:15.690 *****
2025-12-13 00:18:47,518 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:47,533 p=31605 u=zuul n=ansible | TASK [repo_setup : Get component repo url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/component/{{ cifmw_repo_setup_component_name }}/{{ cifmw_repo_setup_component_promotion_tag }}/delorean.repo, dest={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, mode=0644] ***
2025-12-13 00:18:47,534 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.677) 0:00:16.369 *****
2025-12-13 00:18:47,534 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.677) 0:00:16.368 *****
2025-12-13 00:18:47,571 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:47,583 p=31605 u=zuul n=ansible | TASK [repo_setup : Rename component repo path={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, regexp=delorean-component-{{ cifmw_repo_setup_component_name }}, replace={{ cifmw_repo_setup_component_name }}-{{ cifmw_repo_setup_component_promotion_tag }}] ***
2025-12-13 00:18:47,584 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.049) 0:00:16.419 *****
2025-12-13 00:18:47,584 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.049) 0:00:16.418 *****
2025-12-13 00:18:47,615 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:47,624 p=31605 u=zuul n=ansible | TASK [repo_setup : Disable component repo in current-podified dlrn repo path={{ cifmw_repo_setup_output }}/delorean.repo, section=delorean-component-{{ cifmw_repo_setup_component_name }}, option=enabled, value=0, mode=0644] ***
2025-12-13 00:18:47,624 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.040) 0:00:16.460 *****
2025-12-13 00:18:47,624 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.040) 0:00:16.458 *****
2025-12-13 00:18:47,662 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:47,671 p=31605 u=zuul n=ansible | TASK [repo_setup : Run repo-setup-get-hash _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup-get-hash --dlrn-url {{ cifmw_repo_setup_dlrn_uri[:-1] }} --os-version {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} --release {{ cifmw_repo_setup_branch }} {% if cifmw_repo_setup_component_name | length > 0 -%} --component {{ cifmw_repo_setup_component_name }} --tag {{ cifmw_repo_setup_component_promotion_tag }} {% else -%} --tag {{cifmw_repo_setup_promotion }} {% endif -%} {% if (cifmw_repo_setup_dlrn_hash_tag | length > 0) and (cifmw_repo_setup_component_name | length <= 0) -%} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif -%} --json] ***
2025-12-13 00:18:47,671 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.046) 0:00:16.506 *****
2025-12-13 00:18:47,671 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.046) 0:00:16.505 *****
2025-12-13 00:18:48,143 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:48,158 p=31605 u=zuul n=ansible | TASK [repo_setup : Dump full hash in delorean.repo.md5 file content={{ _repo_setup_json['full_hash'] }} , dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] ***
2025-12-13 00:18:48,158 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.487) 0:00:16.993 *****
2025-12-13 00:18:48,158 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.487) 0:00:16.992 *****
2025-12-13 00:18:48,917 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:48,924 p=31605 u=zuul n=ansible | TASK [repo_setup : Dump current-podified hash url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/current-podified/delorean.repo.md5, dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] ***
2025-12-13 00:18:48,924 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.766) 0:00:17.759 *****
2025-12-13 00:18:48,924 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.765) 0:00:17.758 *****
2025-12-13 00:18:48,939 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:48,946 p=31605 u=zuul n=ansible | TASK [repo_setup : Slurp current podified hash src={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5] ***
2025-12-13 00:18:48,946 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.021) 0:00:17.781 *****
2025-12-13 00:18:48,946 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.021) 0:00:17.780 *****
2025-12-13 00:18:48,962 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:48,970 p=31605 u=zuul n=ansible | TASK [repo_setup : Update the value of full_hash _repo_setup_json={{ _repo_setup_json | combine({'full_hash': _hash}, recursive=true) }}] ***
2025-12-13 00:18:48,970 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.024) 0:00:17.805 *****
2025-12-13 00:18:48,970 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.024) 0:00:17.804 *****
2025-12-13 00:18:48,990 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:48,999 p=31605 u=zuul n=ansible | TASK [repo_setup : Export hashes facts for further use cifmw_repo_setup_full_hash={{ _repo_setup_json['full_hash'] }}, cifmw_repo_setup_commit_hash={{ _repo_setup_json['commit_hash'] }}, cifmw_repo_setup_distro_hash={{ _repo_setup_json['distro_hash'] }}, cifmw_repo_setup_extended_hash={{ _repo_setup_json['extended_hash'] }}, cifmw_repo_setup_dlrn_api_url={{ _repo_setup_json['dlrn_api_url'] }}, cifmw_repo_setup_dlrn_url={{ _repo_setup_json['dlrn_url'] }}, cifmw_repo_setup_release={{ _repo_setup_json['release'] }}, cacheable=True] ***
2025-12-13 00:18:48,999 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.028) 0:00:17.834 *****
2025-12-13 00:18:48,999 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.028) 0:00:17.833 *****
2025-12-13 00:18:49,024 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:49,030 p=31605 u=zuul n=ansible | TASK [repo_setup : Create download directory path={{ cifmw_repo_setup_rhos_release_path }}, state=directory, mode=0755] ***
2025-12-13 00:18:49,030 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.030) 0:00:17.865 *****
2025-12-13 00:18:49,030 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.030) 0:00:17.864 *****
2025-12-13 00:18:49,053 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,061 p=31605 u=zuul n=ansible | TASK [repo_setup : Print the URL to request msg={{ cifmw_repo_setup_rhos_release_rpm }}] ***
2025-12-13 00:18:49,061 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.031) 0:00:17.897 *****
2025-12-13 00:18:49,062 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.031) 0:00:17.895 *****
2025-12-13 00:18:49,077 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,084 p=31605 u=zuul n=ansible | TASK [Download the RPM name=krb_request] ***************************************
2025-12-13 00:18:49,084 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.022) 0:00:17.919 *****
2025-12-13 00:18:49,084 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.022) 0:00:17.918 *****
2025-12-13 00:18:49,097 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,103 p=31605 u=zuul n=ansible | TASK [repo_setup : Install RHOS Release tool name={{ cifmw_repo_setup_rhos_release_rpm if cifmw_repo_setup_rhos_release_rpm is not url else cifmw_krb_request_out.path }}, state=present, disable_gpg_check={{ cifmw_repo_setup_rhos_release_gpg_check | bool }}] ***
2025-12-13 00:18:49,103 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.019) 0:00:17.939 *****
2025-12-13 00:18:49,103 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.019) 0:00:17.937 *****
2025-12-13 00:18:49,119 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,126 p=31605 u=zuul n=ansible | TASK [repo_setup : Get rhos-release tool version _raw_params=rhos-release --version] ***
2025-12-13 00:18:49,126 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.023) 0:00:17.962 *****
2025-12-13 00:18:49,126 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.023) 0:00:17.960 *****
2025-12-13 00:18:49,141 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,148 p=31605 u=zuul n=ansible | TASK [repo_setup : Print rhos-release tool version msg={{ rr_version.stdout }}] ***
2025-12-13 00:18:49,148 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.021) 0:00:17.983 *****
2025-12-13 00:18:49,148 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.021) 0:00:17.982 *****
2025-12-13 00:18:49,161 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,170 p=31605 u=zuul n=ansible | TASK [repo_setup : Generate repos using rhos-release {{ cifmw_repo_setup_rhos_release_args }} _raw_params=rhos-release {{ cifmw_repo_setup_rhos_release_args }} \ -t {{ cifmw_repo_setup_output }}] ***
2025-12-13 00:18:49,170 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.022) 0:00:18.006 *****
2025-12-13 00:18:49,170 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.022) 0:00:18.004 *****
2025-12-13 00:18:49,188 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,194 p=31605 u=zuul n=ansible | TASK [repo_setup : Check for /etc/ci/mirror_info.sh path=/etc/ci/mirror_info.sh] ***
2025-12-13 00:18:49,194 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.023) 0:00:18.029 *****
2025-12-13 00:18:49,194 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.023) 0:00:18.028 *****
2025-12-13 00:18:49,398 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:49,406 p=31605 u=zuul n=ansible | TASK [repo_setup : Use RDO proxy mirrors chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|https://trunk.rdoproject.org|$NODEPOOL_RDO_PROXY|g" *.repo ] ***
2025-12-13 00:18:49,407 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.212) 0:00:18.242 *****
2025-12-13 00:18:49,407 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.212) 0:00:18.241 *****
2025-12-13 00:18:49,618 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:49,627 p=31605 u=zuul n=ansible | TASK [repo_setup : Use RDO CentOS mirrors (remove CentOS 10 conditional when Nodepool mirrors exist) chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|http://mirror.stream.centos.org|$NODEPOOL_CENTOS_MIRROR|g" *.repo ] ***
2025-12-13 00:18:49,627 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.220) 0:00:18.462 *****
2025-12-13 00:18:49,627 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.220) 0:00:18.461 *****
2025-12-13 00:18:49,828 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:49,837 p=31605 u=zuul n=ansible | TASK [repo_setup : Check for gating.repo file on content provider url=http://{{ content_provider_registry_ip }}:8766/gating.repo] ***
2025-12-13 00:18:49,837 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.210) 0:00:18.672 *****
2025-12-13 00:18:49,837 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.210) 0:00:18.671 *****
2025-12-13 00:18:49,855 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,862 p=31605 u=zuul n=ansible | TASK [repo_setup : Populate gating repo from content provider ip content=[gating-repo] baseurl=http://{{ content_provider_registry_ip }}:8766/ enabled=1 gpgcheck=0 priority=1 , dest={{ cifmw_repo_setup_output }}/gating.repo, mode=0644] ***
2025-12-13 00:18:49,862 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.025) 0:00:18.697 *****
2025-12-13 00:18:49,862 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.025) 0:00:18.696 *****
2025-12-13 00:18:49,890 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,898 p=31605 u=zuul n=ansible | TASK [repo_setup : Check for DLRN repo at the destination path={{ cifmw_repo_setup_output }}/delorean.repo] ***
2025-12-13 00:18:49,898 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.035) 0:00:18.733 *****
2025-12-13 00:18:49,898 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.035) 0:00:18.732 *****
2025-12-13 00:18:49,921 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,927 p=31605 u=zuul n=ansible | TASK [repo_setup : Lower the priority of DLRN repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}/delorean.repo, regexp=priority=1, replace=priority=20] ***
2025-12-13 00:18:49,927 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.029) 0:00:18.763 *****
2025-12-13 00:18:49,927 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.029) 0:00:18.761 *****
2025-12-13 00:18:49,964 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,974 p=31605 u=zuul n=ansible | TASK [repo_setup : Check for DLRN component repo path={{ cifmw_repo_setup_output }}/{{ _comp_repo }}] ***
2025-12-13 00:18:49,974 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.046) 0:00:18.809 *****
2025-12-13 00:18:49,974 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.046) 0:00:18.808 *****
2025-12-13 00:18:49,999 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:50,008 p=31605 u=zuul n=ansible | TASK [repo_setup : Lower the priority of componennt repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}//{{ _comp_repo }}, regexp=priority=1, replace=priority=2] ***
2025-12-13 00:18:50,008 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.034) 0:00:18.844 *****
2025-12-13 00:18:50,009 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.034) 0:00:18.842 *****
2025-12-13 00:18:50,031 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:50,040 p=31605 u=zuul n=ansible | TASK [repo_setup : Find existing repos from /etc/yum.repos.d directory paths=/etc/yum.repos.d/, patterns=*.repo, recurse=False] ***
2025-12-13 00:18:50,040 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.031) 0:00:18.875 *****
2025-12-13 00:18:50,040 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.031) 0:00:18.874 *****
2025-12-13 00:18:50,338 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:50,345 p=31605 u=zuul n=ansible | TASK [repo_setup : Remove existing repos from /etc/yum.repos.d directory path={{ item }}, state=absent] ***
2025-12-13 00:18:50,345 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.305) 0:00:19.181 *****
2025-12-13 00:18:50,345 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.305) 0:00:19.179 *****
2025-12-13 00:18:50,592 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos-addons.repo)
2025-12-13 00:18:50,770 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos.repo)
2025-12-13 00:18:50,781 p=31605 u=zuul n=ansible | TASK [repo_setup : Cleanup existing metadata _raw_params=dnf clean metadata] ***
2025-12-13 00:18:50,781 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.436) 0:00:19.617 *****
2025-12-13 00:18:50,782 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.436) 0:00:19.616 *****
2025-12-13 00:18:51,238 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:51,246 p=31605 u=zuul n=ansible | TASK [repo_setup : Copy generated repos to /etc/yum.repos.d directory mode=0755, remote_src=True, src={{ cifmw_repo_setup_output }}/, dest=/etc/yum.repos.d] ***
2025-12-13 00:18:51,246 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.464) 0:00:20.081 *****
2025-12-13 00:18:51,246 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.464) 0:00:20.080 *****
2025-12-13 00:18:51,545 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:51,558 p=31605 u=zuul n=ansible | TASK [ci_setup : Gather variables for each operating system _raw_params={{ item }}] ***
2025-12-13 00:18:51,558 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.312) 0:00:20.394 *****
2025-12-13 00:18:51,558 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.312) 0:00:20.392 *****
2025-12-13 00:18:51,606 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/ci_setup/vars/redhat.yml)
2025-12-13 00:18:51,616 p=31605 u=zuul n=ansible | TASK [ci_setup : List packages to install var=cifmw_ci_setup_packages] *********
2025-12-13 00:18:51,616 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.057) 0:00:20.451 *****
2025-12-13 00:18:51,616 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.057) 0:00:20.450 *****
2025-12-13 00:18:51,641 p=31605 u=zuul n=ansible | ok: [localhost] =>
    cifmw_ci_setup_packages:
    - bash-completion
    - ca-certificates
    - git-core
    - make
    - tar
    - tmux
    - python3-pip
2025-12-13 00:18:51,650 p=31605 u=zuul n=ansible | TASK [ci_setup : Install needed packages name={{ cifmw_ci_setup_packages }}, state=latest] ***
2025-12-13 00:18:51,651 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.034) 0:00:20.486 *****
2025-12-13 00:18:51,651 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.034) 0:00:20.485 *****
2025-12-13 00:19:20,598 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:20,605 p=31605 u=zuul n=ansible | TASK [ci_setup : Gather version of openshift client _raw_params=oc version --client -o yaml] ***
2025-12-13 00:19:20,605 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:20 +0000 (0:00:28.954) 0:00:49.441 *****
2025-12-13 00:19:20,605 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:20 +0000 (0:00:28.954) 0:00:49.439 *****
2025-12-13 00:19:20,826 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:20,837 p=31605 u=zuul n=ansible | TASK [ci_setup : Ensure openshift client install path is present path={{ cifmw_ci_setup_oc_install_path }}, state=directory, mode=0755] ***
2025-12-13 00:19:20,837 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:20 +0000 (0:00:00.231) 0:00:49.672 *****
2025-12-13 00:19:20,837 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:20 +0000 (0:00:00.231) 0:00:49.671 *****
2025-12-13 00:19:21,067 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:21,076 p=31605 u=zuul n=ansible | TASK [ci_setup : Install openshift client src={{ cifmw_ci_setup_openshift_client_download_uri }}/{{ cifmw_ci_setup_openshift_client_version }}/openshift-client-linux.tar.gz, dest={{ cifmw_ci_setup_oc_install_path }}, remote_src=True, mode=0755, creates={{ cifmw_ci_setup_oc_install_path }}/oc] ***
2025-12-13 00:19:21,076 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:21 +0000 (0:00:00.239) 0:00:49.911 *****
2025-12-13 00:19:21,076 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:21 +0000 (0:00:00.239) 0:00:49.910 *****
2025-12-13 00:19:26,302 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:26,308 p=31605 u=zuul n=ansible | TASK [ci_setup : Add the OC path to cifmw_path if needed cifmw_path={{ cifmw_ci_setup_oc_install_path }}:{{ ansible_env.PATH }}, cacheable=True] ***
2025-12-13 00:19:26,308 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:05.232) 0:00:55.143 *****
2025-12-13 00:19:26,308 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:05.232) 0:00:55.142 *****
2025-12-13 00:19:26,333 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:26,340 p=31605 u=zuul n=ansible | TASK [ci_setup : Create completion file] ***************************************
2025-12-13 00:19:26,340 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:00.031) 0:00:55.175 *****
2025-12-13 00:19:26,340 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:00.031) 0:00:55.174 *****
2025-12-13 00:19:26,686 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:26,695 p=31605 u=zuul n=ansible | TASK [ci_setup : Source completion from within .bashrc create=True, mode=0644, path={{ ansible_user_dir }}/.bashrc, block=if [ -f ~/.oc_completion ]; then source ~/.oc_completion fi] ***
2025-12-13 00:19:26,695 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:00.355) 0:00:55.531 *****
2025-12-13 00:19:26,695 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:00.355) 0:00:55.529 *****
2025-12-13 00:19:27,048 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:27,056 p=31605 u=zuul n=ansible | TASK [ci_setup : Check rhsm status _raw_params=subscription-manager status] ****
2025-12-13 00:19:27,056 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.360) 0:00:55.891 *****
2025-12-13 00:19:27,056 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.360) 0:00:55.890 *****
2025-12-13 00:19:27,069 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:27,077 p=31605 u=zuul n=ansible | TASK [ci_setup : Gather the repos to be enabled _repos={{ cifmw_ci_setup_rhel_rhsm_default_repos + (cifmw_ci_setup_rhel_rhsm_extra_repos | default([])) }}] ***
2025-12-13 00:19:27,077 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.021) 0:00:55.912 *****
2025-12-13 00:19:27,077 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.021) 0:00:55.911 *****
2025-12-13 00:19:27,091 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:27,098 p=31605 u=zuul n=ansible | TASK [ci_setup : Enabling the required repositories. name={{ item }}, state={{ rhsm_repo_state | default('enabled') }}] ***
2025-12-13 00:19:27,098 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.020) 0:00:55.933 *****
2025-12-13 00:19:27,098 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.020) 0:00:55.932 *****
2025-12-13 00:19:27,111 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:27,117 p=31605 u=zuul n=ansible | TASK [ci_setup : Get current /etc/redhat-release _raw_params=cat /etc/redhat-release] ***
2025-12-13 00:19:27,118 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.019) 0:00:55.953 *****
2025-12-13 00:19:27,118 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.019) 0:00:55.952 *****
2025-12-13 00:19:27,131 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:27,137 p=31605 u=zuul n=ansible | TASK [ci_setup : Print current /etc/redhat-release msg={{ _current_rh_release.stdout }}] ***
2025-12-13 00:19:27,137 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.019) 0:00:55.973 *****
2025-12-13 00:19:27,138 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.019) 0:00:55.971 *****
2025-12-13 00:19:27,151 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:27,158 p=31605 u=zuul n=ansible | TASK [ci_setup : Ensure the repos are enabled in the system using yum name={{ item.name }}, baseurl={{ item.baseurl }}, description={{ item.description | default(item.name) }}, gpgcheck={{ item.gpgcheck | default(false) }}, enabled=True, state={{ yum_repo_state | default('present') }}] ***
2025-12-13 00:19:27,158 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.020) 0:00:55.993 *****
2025-12-13 00:19:27,158 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.020) 0:00:55.992 *****
2025-12-13 00:19:27,180 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:27,186 p=31605 u=zuul n=ansible | TASK [ci_setup : Manage directories path={{ item }}, state={{ directory_state }}, mode=0755, owner={{ ansible_user_id }}, group={{ ansible_user_id }}] ***
2025-12-13 00:19:27,187 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.028) 0:00:56.022 *****
2025-12-13 00:19:27,187 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.028) 0:00:56.021 *****
2025-12-13 00:19:27,401 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/manifests/openstack/cr)
2025-12-13 00:19:27,596 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/logs)
2025-12-13 00:19:27,826 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/tmp)
2025-12-13 00:19:28,021 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/volumes)
2025-12-13 00:19:28,212 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters)
2025-12-13 00:19:28,225 p=31605 u=zuul n=ansible | TASK [Prepare install_yamls make targets name=install_yamls, apply={'tags': ['bootstrap']}] ***
2025-12-13 00:19:28,225 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:01.038) 0:00:57.060 *****
2025-12-13 00:19:28,225 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:01.038) 0:00:57.059 *****
2025-12-13 00:19:28,349 p=31605 u=zuul n=ansible | TASK [install_yamls : Ensure directories exist path={{ item }}, state=directory, mode=0755] ***
2025-12-13 00:19:28,349 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.124) 0:00:57.184 *****
2025-12-13 00:19:28,349 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.124) 0:00:57.183 *****
2025-12-13 00:19:28,559 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts)
2025-12-13 00:19:28,723 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks)
2025-12-13 00:19:28,888 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters)
2025-12-13 00:19:28,896 p=31605 u=zuul n=ansible | TASK [Create variables with local repos based on Zuul items name=install_yamls, tasks_from=zuul_set_operators_repo.yml] ***
2025-12-13 00:19:28,896 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.547) 0:00:57.731 *****
2025-12-13 00:19:28,896 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.547) 0:00:57.730 *****
2025-12-13 00:19:28,936 p=31605 u=zuul n=ansible | TASK [install_yamls : Set fact with local repos based on Zuul items cifmw_install_yamls_operators_repo={{ cifmw_install_yamls_operators_repo | default({}) | combine(_repo_operator_info | items2dict) }}] ***
2025-12-13 00:19:28,937 p=31605 u=zuul n=ansible | Saturday 13 December
2025 00:19:28 +0000 (0:00:00.040) 0:00:57.772 ***** 2025-12-13 00:19:28,937 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.040) 0:00:57.771 ***** 2025-12-13 00:19:28,983 p=31605 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'master', 'change_url': 'https://github.com/infrawatch/service-telemetry-operator', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/infrawatch/service-telemetry-operator', 'name': 'infrawatch/service-telemetry-operator', 'short_name': 'service-telemetry-operator', 'src_dir': 'src/github.com/infrawatch/service-telemetry-operator'}}) 2025-12-13 00:19:28,984 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:28,991 p=31605 u=zuul n=ansible | TASK [install_yamls : Print helpful data for debugging msg=_repo_operator_name: {{ _repo_operator_name }} _repo_operator_info: {{ _repo_operator_info }} cifmw_install_yamls_operators_repo: {{ cifmw_install_yamls_operators_repo }} ] *** 2025-12-13 00:19:28,991 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.054) 0:00:57.826 ***** 2025-12-13 00:19:28,991 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.054) 0:00:57.825 ***** 2025-12-13 00:19:29,021 p=31605 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'master', 'change_url': 'https://github.com/infrawatch/service-telemetry-operator', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/infrawatch/service-telemetry-operator', 'name': 'infrawatch/service-telemetry-operator', 'short_name': 'service-telemetry-operator', 'src_dir': 'src/github.com/infrawatch/service-telemetry-operator'}}) 2025-12-13 00:19:29,022 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,037 p=31605 u=zuul n=ansible | TASK [Customize install_yamls devsetup vars if needed name=install_yamls, tasks_from=customize_devsetup_vars.yml] *** 2025-12-13 00:19:29,037 p=31605 u=zuul n=ansible | Saturday 
13 December 2025 00:19:29 +0000 (0:00:00.046) 0:00:57.872 ***** 2025-12-13 00:19:29,037 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.046) 0:00:57.871 ***** 2025-12-13 00:19:29,090 p=31605 u=zuul n=ansible | TASK [install_yamls : Update opm_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^opm_version:, line=opm_version: {{ cifmw_install_yamls_opm_version }}, state=present] *** 2025-12-13 00:19:29,090 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.053) 0:00:57.926 ***** 2025-12-13 00:19:29,091 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.053) 0:00:57.924 ***** 2025-12-13 00:19:29,111 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,118 p=31605 u=zuul n=ansible | TASK [install_yamls : Update sdk_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^sdk_version:, line=sdk_version: {{ cifmw_install_yamls_sdk_version }}, state=present] *** 2025-12-13 00:19:29,118 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.027) 0:00:57.953 ***** 2025-12-13 00:19:29,118 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.027) 0:00:57.952 ***** 2025-12-13 00:19:29,141 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,152 p=31605 u=zuul n=ansible | TASK [install_yamls : Update go_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^go_version:, line=go_version: {{ cifmw_install_yamls_go_version }}, state=present] *** 2025-12-13 00:19:29,152 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.033) 0:00:57.987 ***** 2025-12-13 00:19:29,152 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.033) 0:00:57.986 ***** 2025-12-13 00:19:29,171 p=31605 
u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,178 p=31605 u=zuul n=ansible | TASK [install_yamls : Update kustomize_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^kustomize_version:, line=kustomize_version: {{ cifmw_install_yamls_kustomize_version }}, state=present] *** 2025-12-13 00:19:29,178 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.025) 0:00:58.013 ***** 2025-12-13 00:19:29,178 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.025) 0:00:58.012 ***** 2025-12-13 00:19:29,196 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,208 p=31605 u=zuul n=ansible | TASK [install_yamls : Compute the cifmw_install_yamls_vars final value _install_yamls_override_vars={{ _install_yamls_override_vars | default({}) | combine(item, recursive=True) }}] *** 2025-12-13 00:19:29,208 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.030) 0:00:58.043 ***** 2025-12-13 00:19:29,208 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.030) 0:00:58.042 ***** 2025-12-13 00:19:29,296 p=31605 u=zuul n=ansible | ok: [localhost] => (item={}) 2025-12-13 00:19:29,304 p=31605 u=zuul n=ansible | TASK [install_yamls : Set environment override cifmw_install_yamls_environment fact cifmw_install_yamls_environment={{ _install_yamls_override_vars.keys() | map('upper') | zip(_install_yamls_override_vars.values()) | items2dict(key_name=0, value_name=1) | combine({ 'OUT': cifmw_install_yamls_manifests_dir, 'OUTPUT_DIR': cifmw_install_yamls_edpm_dir, 'CHECKOUT_FROM_OPENSTACK_REF': cifmw_install_yamls_checkout_openstack_ref, 'OPENSTACK_K8S_BRANCH': (zuul is defined and not zuul.branch |regex_search('master|antelope|rhos')) | ternary(zuul.branch, 'main') }) | combine(install_yamls_operators_repos) }}, cacheable=True] *** 2025-12-13 00:19:29,305 p=31605 u=zuul n=ansible | Saturday 13 December 
2025 00:19:29 +0000 (0:00:00.096) 0:00:58.140 ***** 2025-12-13 00:19:29,305 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.096) 0:00:58.139 ***** 2025-12-13 00:19:29,341 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:29,352 p=31605 u=zuul n=ansible | TASK [install_yamls : Get environment structure base_path={{ cifmw_install_yamls_repo }}] *** 2025-12-13 00:19:29,352 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.047) 0:00:58.188 ***** 2025-12-13 00:19:29,353 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.047) 0:00:58.186 ***** 2025-12-13 00:19:29,949 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:29,959 p=31605 u=zuul n=ansible | TASK [install_yamls : Ensure Output directory exists path={{ cifmw_install_yamls_out_dir }}, state=directory, mode=0755] *** 2025-12-13 00:19:29,959 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.606) 0:00:58.794 ***** 2025-12-13 00:19:29,959 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.606) 0:00:58.793 ***** 2025-12-13 00:19:30,007 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:30,013 p=31605 u=zuul n=ansible | TASK [install_yamls : Ensure user cifmw_install_yamls_vars contains existing Makefile variables that=_cifmw_install_yamls_unmatched_vars | length == 0, msg=cifmw_install_yamls_vars contains a variable that is not defined in install_yamls Makefile nor cifmw_install_yamls_whitelisted_vars: {{ _cifmw_install_yamls_unmatched_vars | join(', ')}}, quiet=True] *** 2025-12-13 00:19:30,013 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.054) 0:00:58.848 ***** 2025-12-13 00:19:30,013 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.054) 0:00:58.847 ***** 2025-12-13 00:19:30,030 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:30,040 p=31605 u=zuul n=ansible | TASK 
[install_yamls : Generate /home/zuul/ci-framework-data/artifacts/install_yamls.sh dest={{ cifmw_install_yamls_out_dir }}/{{ cifmw_install_yamls_envfile }}, content={% for k,v in cifmw_install_yamls_environment.items() %} export {{ k }}={{ v }} {% endfor %}, mode=0644] *** 2025-12-13 00:19:30,040 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.027) 0:00:58.875 ***** 2025-12-13 00:19:30,040 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.027) 0:00:58.874 ***** 2025-12-13 00:19:30,058 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:30,064 p=31605 u=zuul n=ansible | TASK [install_yamls : Set install_yamls default values cifmw_install_yamls_defaults={{ get_makefiles_env_output.makefiles_values | combine(cifmw_install_yamls_environment) }}, cacheable=True] *** 2025-12-13 00:19:30,064 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.023) 0:00:58.899 ***** 2025-12-13 00:19:30,064 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.023) 0:00:58.898 ***** 2025-12-13 00:19:30,088 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:30,096 p=31605 u=zuul n=ansible | TASK [install_yamls : Show the env structure var=cifmw_install_yamls_environment] *** 2025-12-13 00:19:30,096 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.032) 0:00:58.932 ***** 2025-12-13 00:19:30,096 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.032) 0:00:58.930 ***** 2025-12-13 00:19:30,126 p=31605 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_environment: CHECKOUT_FROM_OPENSTACK_REF: 'true' OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm 2025-12-13 00:19:30,136 p=31605 u=zuul n=ansible | TASK [install_yamls : Show the env structure defaults var=cifmw_install_yamls_defaults] *** 2025-12-13 00:19:30,136 p=31605 u=zuul 
n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.039) 0:00:58.971 ***** 2025-12-13 00:19:30,136 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.039) 0:00:58.970 ***** 2025-12-13 00:19:30,163 p=31605 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: 
sE**********U= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_CLEANUP: 'true' BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: '' BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: 
default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: 
dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 76**********f0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: 'true' INSTALL_NMSTATE: true || false INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 
IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE: quay.io/metal3-io/ironic IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: CO**********6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest 
MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '12**********42' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: 
config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml 
OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12**********78' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: os**********et SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: test/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' tripleo_deploy: 'export REGISTRY_USER:' 2025-12-13 00:19:30,171 p=31605 u=zuul n=ansible | TASK [install_yamls : Generate make targets install_yamls_path={{ cifmw_install_yamls_repo }}, output_directory={{ cifmw_install_yamls_tasks_out }}] *** 2025-12-13 00:19:30,171 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.035) 0:00:59.007 ***** 2025-12-13 00:19:30,171 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.035) 0:00:59.005 ***** 2025-12-13 00:19:30,456 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:30,473 p=31605 u=zuul n=ansible | TASK [install_yamls : Debug generate_make module var=cifmw_generate_makes] ***** 2025-12-13 00:19:30,473 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.301) 0:00:59.309 ***** 2025-12-13 00:19:30,473 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.302) 0:00:59.307 ***** 2025-12-13 00:19:30,500 p=31605 u=zuul n=ansible | ok: [localhost] => cifmw_generate_makes: changed: false debug: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/Makefile: - all - help - cleanup - deploy_cleanup - wait - crc_storage - crc_storage_cleanup - crc_storage_release - crc_storage_with_retries - crc_storage_cleanup_with_retries - operator_namespace - namespace - namespace_cleanup - input - input_cleanup - crc_bmo_setup - crc_bmo_cleanup - openstack_prep - openstack - openstack_wait - 
openstack_init - openstack_cleanup - openstack_repo - openstack_deploy_prep - openstack_deploy - openstack_wait_deploy - openstack_deploy_cleanup - openstack_update_run - update_services - update_system - openstack_patch_version - edpm_deploy_generate_keys - edpm_patch_ansible_runner_image - edpm_deploy_prep - edpm_deploy_cleanup - edpm_deploy - edpm_deploy_baremetal_prep - edpm_deploy_baremetal - edpm_wait_deploy_baremetal - edpm_wait_deploy - edpm_register_dns - edpm_nova_discover_hosts - openstack_crds - openstack_crds_cleanup - edpm_deploy_networker_prep - edpm_deploy_networker_cleanup - edpm_deploy_networker - infra_prep - infra - infra_cleanup - dns_deploy_prep - dns_deploy - dns_deploy_cleanup - netconfig_deploy_prep - netconfig_deploy - netconfig_deploy_cleanup - memcached_deploy_prep - memcached_deploy - memcached_deploy_cleanup - keystone_prep - keystone - keystone_cleanup - keystone_deploy_prep - keystone_deploy - keystone_deploy_cleanup - barbican_prep - barbican - barbican_cleanup - barbican_deploy_prep - barbican_deploy - barbican_deploy_validate - barbican_deploy_cleanup - mariadb - mariadb_cleanup - mariadb_deploy_prep - mariadb_deploy - mariadb_deploy_cleanup - placement_prep - placement - placement_cleanup - placement_deploy_prep - placement_deploy - placement_deploy_cleanup - glance_prep - glance - glance_cleanup - glance_deploy_prep - glance_deploy - glance_deploy_cleanup - ovn_prep - ovn - ovn_cleanup - ovn_deploy_prep - ovn_deploy - ovn_deploy_cleanup - neutron_prep - neutron - neutron_cleanup - neutron_deploy_prep - neutron_deploy - neutron_deploy_cleanup - cinder_prep - cinder - cinder_cleanup - cinder_deploy_prep - cinder_deploy - cinder_deploy_cleanup - rabbitmq_prep - rabbitmq - rabbitmq_cleanup - rabbitmq_deploy_prep - rabbitmq_deploy - rabbitmq_deploy_cleanup - ironic_prep - ironic - ironic_cleanup - ironic_deploy_prep - ironic_deploy - ironic_deploy_cleanup - octavia_prep - octavia - octavia_cleanup - octavia_deploy_prep - 
octavia_deploy - octavia_deploy_cleanup - designate_prep - designate - designate_cleanup - designate_deploy_prep - designate_deploy - designate_deploy_cleanup - nova_prep - nova - nova_cleanup - nova_deploy_prep - nova_deploy - nova_deploy_cleanup - mariadb_kuttl_run - mariadb_kuttl - kuttl_db_prep - kuttl_db_cleanup - kuttl_common_prep - kuttl_common_cleanup - keystone_kuttl_run - keystone_kuttl - barbican_kuttl_run - barbican_kuttl - placement_kuttl_run - placement_kuttl - cinder_kuttl_run - cinder_kuttl - neutron_kuttl_run - neutron_kuttl - octavia_kuttl_run - octavia_kuttl - designate_kuttl - designate_kuttl_run - ovn_kuttl_run - ovn_kuttl - infra_kuttl_run - infra_kuttl - ironic_kuttl_run - ironic_kuttl - ironic_kuttl_crc - heat_kuttl_run - heat_kuttl - heat_kuttl_crc - ansibleee_kuttl_run - ansibleee_kuttl_cleanup - ansibleee_kuttl_prep - ansibleee_kuttl - glance_kuttl_run - glance_kuttl - manila_kuttl_run - manila_kuttl - swift_kuttl_run - swift_kuttl - horizon_kuttl_run - horizon_kuttl - openstack_kuttl_run - openstack_kuttl - mariadb_chainsaw_run - mariadb_chainsaw - horizon_prep - horizon - horizon_cleanup - horizon_deploy_prep - horizon_deploy - horizon_deploy_cleanup - heat_prep - heat - heat_cleanup - heat_deploy_prep - heat_deploy - heat_deploy_cleanup - ansibleee_prep - ansibleee - ansibleee_cleanup - baremetal_prep - baremetal - baremetal_cleanup - ceph_help - ceph - ceph_cleanup - rook_prep - rook - rook_deploy_prep - rook_deploy - rook_crc_disk - rook_cleanup - lvms - nmstate - nncp - nncp_cleanup - netattach - netattach_cleanup - metallb - metallb_config - metallb_config_cleanup - metallb_cleanup - loki - loki_cleanup - loki_deploy - loki_deploy_cleanup - netobserv - netobserv_cleanup - netobserv_deploy - netobserv_deploy_cleanup - manila_prep - manila - manila_cleanup - manila_deploy_prep - manila_deploy - manila_deploy_cleanup - telemetry_prep - telemetry - telemetry_cleanup - telemetry_deploy_prep - telemetry_deploy - telemetry_deploy_cleanup 
- telemetry_kuttl_run - telemetry_kuttl - swift_prep - swift - swift_cleanup - swift_deploy_prep - swift_deploy - swift_deploy_cleanup - certmanager - certmanager_cleanup - validate_marketplace - redis_deploy_prep - redis_deploy - redis_deploy_cleanup - set_slower_etcd_profile /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/Makefile: - help - download_tools - nfs - nfs_cleanup - crc - crc_cleanup - crc_scrub - crc_attach_default_interface - crc_attach_default_interface_cleanup - ipv6_lab_network - ipv6_lab_network_cleanup - ipv6_lab_nat64_router - ipv6_lab_nat64_router_cleanup - ipv6_lab_sno - ipv6_lab_sno_cleanup - ipv6_lab - ipv6_lab_cleanup - attach_default_interface - attach_default_interface_cleanup - network_isolation_bridge - network_isolation_bridge_cleanup - edpm_baremetal_compute - edpm_compute - edpm_compute_bootc - edpm_ansible_runner - edpm_computes_bgp - edpm_compute_repos - edpm_compute_cleanup - edpm_networker - edpm_networker_cleanup - edpm_deploy_instance - tripleo_deploy - standalone_deploy - standalone_sync - standalone - standalone_cleanup - standalone_snapshot - standalone_revert - cifmw_prepare - cifmw_cleanup - bmaas_network - bmaas_network_cleanup - bmaas_route_crc_and_crc_bmaas_networks - bmaas_route_crc_and_crc_bmaas_networks_cleanup - bmaas_crc_attach_network - bmaas_crc_attach_network_cleanup - bmaas_crc_baremetal_bridge - bmaas_crc_baremetal_bridge_cleanup - bmaas_baremetal_net_nad - bmaas_baremetal_net_nad_cleanup - bmaas_metallb - bmaas_metallb_cleanup - bmaas_virtual_bms - bmaas_virtual_bms_cleanup - bmaas_sushy_emulator - bmaas_sushy_emulator_cleanup - bmaas_sushy_emulator_wait - bmaas_generate_nodes_yaml - bmaas - bmaas_cleanup failed: false success: true 2025-12-13 00:19:30,511 p=31605 u=zuul n=ansible | TASK [install_yamls : Create the install_yamls parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, content={{ { 
'cifmw_install_yamls_environment': cifmw_install_yamls_environment, 'cifmw_install_yamls_defaults': cifmw_install_yamls_defaults } | to_nice_yaml }}, mode=0644] *** 2025-12-13 00:19:30,511 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.037) 0:00:59.346 ***** 2025-12-13 00:19:30,511 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.037) 0:00:59.345 ***** 2025-12-13 00:19:30,943 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:30,950 p=31605 u=zuul n=ansible | TASK [install_yamls : Create empty cifmw_install_yamls_environment if needed cifmw_install_yamls_environment={}] *** 2025-12-13 00:19:30,950 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.438) 0:00:59.785 ***** 2025-12-13 00:19:30,950 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.438) 0:00:59.784 ***** 2025-12-13 00:19:30,974 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:30,988 p=31605 u=zuul n=ansible | TASK [discover_latest_image : Get latest image url={{ cifmw_discover_latest_image_base_url }}, image_prefix={{ cifmw_discover_latest_image_qcow_prefix }}, images_file={{ cifmw_discover_latest_image_images_file }}] *** 2025-12-13 00:19:30,988 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.038) 0:00:59.824 ***** 2025-12-13 00:19:30,989 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.038) 0:00:59.822 ***** 2025-12-13 00:19:31,569 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:31,577 p=31605 u=zuul n=ansible | TASK [discover_latest_image : Export facts accordingly cifmw_discovered_image_name={{ discovered_image['data']['image_name'] }}, cifmw_discovered_image_url={{ discovered_image['data']['image_url'] }}, cifmw_discovered_hash={{ discovered_image['data']['hash'] }}, cifmw_discovered_hash_algorithm={{ discovered_image['data']['hash_algorithm'] }}, cacheable=True] *** 2025-12-13 
00:19:31,577 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:31 +0000 (0:00:00.588) 0:01:00.412 ***** 2025-12-13 00:19:31,577 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:31 +0000 (0:00:00.588) 0:01:00.411 ***** 2025-12-13 00:19:31,611 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:31,623 p=31605 u=zuul n=ansible | TASK [Create artifacts with custom params mode=0644, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/custom-params.yml, content={{ ci_framework_params | to_nice_yaml }}] *** 2025-12-13 00:19:31,623 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:31 +0000 (0:00:00.045) 0:01:00.458 ***** 2025-12-13 00:19:31,623 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:31 +0000 (0:00:00.045) 0:01:00.457 ***** 2025-12-13 00:19:32,084 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | localhost : ok=43 changed=23 unreachable=0 failed=0 skipped=40 rescued=0 ignored=0 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:32 +0000 (0:00:00.484) 0:01:00.943 ***** 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | =============================================================================== 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | ci_setup : Install needed packages ------------------------------------- 28.95s 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | repo_setup : Initialize python venv and install requirements ------------ 9.14s 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | ci_setup : Install openshift client ------------------------------------- 5.23s 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | install_ca : Update ca bundle ------------------------------------------- 1.53s 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | 
repo_setup : Make sure git-core package is installed -------------------- 1.04s 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | ci_setup : Manage directories ------------------------------------------- 1.04s 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | Gathering Facts --------------------------------------------------------- 1.04s 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | repo_setup : Get repo-setup repository ---------------------------------- 0.96s 2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | repo_setup : Install repo-setup package --------------------------------- 0.83s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Dump full hash in delorean.repo.md5 file ------------------- 0.77s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Run repo-setup --------------------------------------------- 0.68s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_yamls : Get environment structure ------------------------------- 0.61s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | discover_latest_image : Get latest image -------------------------------- 0.59s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_yamls : Ensure directories exist -------------------------------- 0.55s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Ensure directories are present ----------------------------- 0.54s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Run repo-setup-get-hash ------------------------------------ 0.49s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | Create artifacts with custom params ------------------------------------- 0.48s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Cleanup existing metadata ---------------------------------- 0.46s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_yamls : Create the install_yamls parameters file ---------------- 0.44s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Remove existing repos 
from /etc/yum.repos.d directory ------ 0.44s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:32 +0000 (0:00:00.485) 0:01:00.943 ***** 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | =============================================================================== 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ci_setup --------------------------------------------------------------- 36.67s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup ------------------------------------------------------------- 17.24s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_yamls ----------------------------------------------------------- 2.64s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_ca -------------------------------------------------------------- 1.99s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | gather_facts ------------------------------------------------------------ 1.04s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | discover_latest_image --------------------------------------------------- 0.63s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ansible.builtin.copy ---------------------------------------------------- 0.49s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ansible.builtin.include_role -------------------------------------------- 0.12s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ansible.builtin.set_fact ------------------------------------------------ 0.10s 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | total ------------------------------------------------------------------ 60.91s 2025-12-13 00:19:33,160 p=32461 u=zuul n=ansible | PLAY [Run pre_infra hooks] ***************************************************** 2025-12-13 00:19:33,191 p=32461 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks 
is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 00:19:33,191 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.045) 0:00:00.045 ***** 2025-12-13 00:19:33,191 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.045) 0:00:00.045 ***** 2025-12-13 00:19:33,249 p=32461 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:33,259 p=32461 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2025-12-13 00:19:33,259 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.068) 0:00:00.114 ***** 2025-12-13 00:19:33,259 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.068) 0:00:00.113 ***** 2025-12-13 00:19:33,313 p=32461 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:33,323 p=32461 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_infra _raw_params={{ hook.type }}.yml] *** 2025-12-13 00:19:33,323 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.063) 0:00:00.177 ***** 2025-12-13 00:19:33,323 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.063) 0:00:00.177 ***** 2025-12-13 00:19:33,382 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:33,444 p=32461 u=zuul n=ansible | PLAY [Prepare host virtualization] ********************************************* 2025-12-13 00:19:33,466 p=32461 u=zuul n=ansible | TASK [Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ****** 2025-12-13 00:19:33,467 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.143) 0:00:00.321 ***** 2025-12-13 00:19:33,467 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.143) 
0:00:00.320 ***** 2025-12-13 00:19:33,565 p=32461 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:33,579 p=32461 u=zuul n=ansible | TASK [Ensure libvirt is present/configured name=libvirt_manager] *************** 2025-12-13 00:19:33,579 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.112) 0:00:00.434 ***** 2025-12-13 00:19:33,579 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.112) 0:00:00.433 ***** 2025-12-13 00:19:33,598 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:33,607 p=32461 u=zuul n=ansible | TASK [Prepare OpenShift provisioner node name=openshift_provisioner_node] ****** 2025-12-13 00:19:33,607 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.027) 0:00:00.461 ***** 2025-12-13 00:19:33,607 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.027) 0:00:00.460 ***** 2025-12-13 00:19:33,626 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:33,661 p=32461 u=zuul n=ansible | PLAY [Prepare the platform] **************************************************** 2025-12-13 00:19:33,689 p=32461 u=zuul n=ansible | TASK [Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ****** 2025-12-13 00:19:33,689 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.082) 0:00:00.544 ***** 2025-12-13 00:19:33,689 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.082) 0:00:00.543 ***** 2025-12-13 00:19:33,732 p=32461 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:33,746 p=32461 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Environment Definition file existence path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2025-12-13 00:19:33,747 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.057) 0:00:00.601 ***** 2025-12-13 00:19:33,747 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.057) 
0:00:00.600 ***** 2025-12-13 00:19:34,030 p=32461 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:34,049 p=32461 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Definition file existence that=['_net_env_def_stat.stat.exists'], msg=Ensure that the Networking Environment Definition file exists in {{ cifmw_networking_mapper_networking_env_def_path }}, quiet=True] *** 2025-12-13 00:19:34,050 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.303) 0:00:00.904 ***** 2025-12-13 00:19:34,050 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.303) 0:00:00.903 ***** 2025-12-13 00:19:34,074 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:34,088 p=32461 u=zuul n=ansible | TASK [networking_mapper : Load the Networking Definition from file path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2025-12-13 00:19:34,088 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.038) 0:00:00.942 ***** 2025-12-13 00:19:34,088 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.038) 0:00:00.942 ***** 2025-12-13 00:19:34,114 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:34,124 p=32461 u=zuul n=ansible | TASK [networking_mapper : Set cifmw_networking_env_definition is present cifmw_networking_env_definition={{ _net_env_def_slurp['content'] | b64decode | from_yaml }}, cacheable=True] *** 2025-12-13 00:19:34,124 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.035) 0:00:00.978 ***** 2025-12-13 00:19:34,124 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.035) 0:00:00.978 ***** 2025-12-13 00:19:34,158 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:34,176 p=32461 u=zuul n=ansible | TASK [Deploy OCP using Hive name=hive] ***************************************** 2025-12-13 00:19:34,177 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 
(0:00:00.052) 0:00:01.031 ***** 2025-12-13 00:19:34,177 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.052) 0:00:01.030 ***** 2025-12-13 00:19:34,199 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:34,209 p=32461 u=zuul n=ansible | TASK [Prepare CRC name=rhol_crc] *********************************************** 2025-12-13 00:19:34,210 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.032) 0:00:01.064 ***** 2025-12-13 00:19:34,210 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.033) 0:00:01.063 ***** 2025-12-13 00:19:34,235 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:34,249 p=32461 u=zuul n=ansible | TASK [Deploy OpenShift cluster using dev-scripts name=devscripts] ************** 2025-12-13 00:19:34,249 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.039) 0:00:01.103 ***** 2025-12-13 00:19:34,249 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.039) 0:00:01.103 ***** 2025-12-13 00:19:34,277 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:34,292 p=32461 u=zuul n=ansible | TASK [openshift_login : Ensure output directory exists path={{ cifmw_openshift_login_basedir }}/artifacts, state=directory, mode=0755] *** 2025-12-13 00:19:34,292 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.042) 0:00:01.146 ***** 2025-12-13 00:19:34,292 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.042) 0:00:01.146 ***** 2025-12-13 00:19:34,648 p=32461 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:34,657 p=32461 u=zuul n=ansible | TASK [openshift_login : OpenShift login _raw_params=login.yml] ***************** 2025-12-13 00:19:34,658 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.365) 0:00:01.512 ***** 2025-12-13 00:19:34,658 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 
(0:00:00.365) 0:00:01.511 *****
2025-12-13 00:19:34,690 p=32461 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/login.yml for localhost
2025-12-13 00:19:34,702 p=32461 u=zuul n=ansible | TASK [openshift_login : Check if the password file is present path={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] ***
2025-12-13 00:19:34,703 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.045) 0:00:01.557 *****
2025-12-13 00:19:34,703 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.045) 0:00:01.556 *****
2025-12-13 00:19:34,739 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,749 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch user password content src={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] ***
2025-12-13 00:19:34,749 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.046) 0:00:01.603 *****
2025-12-13 00:19:34,749 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.046) 0:00:01.603 *****
2025-12-13 00:19:34,784 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,799 p=32461 u=zuul n=ansible | TASK [openshift_login : Set user password as a fact cifmw_openshift_login_password={{ cifmw_openshift_login_password_file_slurp.content | b64decode }}, cacheable=True] ***
2025-12-13 00:19:34,799 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.049) 0:00:01.653 *****
2025-12-13 00:19:34,799 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.049) 0:00:01.653 *****
2025-12-13 00:19:34,819 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,832 p=32461 u=zuul n=ansible | TASK [openshift_login : Set role variables cifmw_openshift_login_kubeconfig={{ cifmw_openshift_login_kubeconfig | default(cifmw_openshift_kubeconfig) | default( ansible_env.KUBECONFIG if 'KUBECONFIG' in ansible_env else cifmw_openshift_login_kubeconfig_default_path ) | trim }}, cifmw_openshift_login_user={{ cifmw_openshift_login_user | default(cifmw_openshift_user) | default(omit) }}, cifmw_openshift_login_password={{********** cifmw_openshift_login_password | default(cifmw_openshift_password) | default(omit) }}, cifmw_openshift_login_api={{ cifmw_openshift_login_api | default(cifmw_openshift_api) | default(omit) }}, cifmw_openshift_login_cert_login={{ cifmw_openshift_login_cert_login | default(false)}}, cifmw_openshift_login_provided_token={{ cifmw_openshift_provided_token | default(omit) }}, cacheable=True] ***
2025-12-13 00:19:34,833 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.033) 0:00:01.687 *****
2025-12-13 00:19:34,833 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.033) 0:00:01.686 *****
2025-12-13 00:19:34,861 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:34,873 p=32461 u=zuul n=ansible | TASK [openshift_login : Check if kubeconfig exists path={{ cifmw_openshift_login_kubeconfig }}] ***
2025-12-13 00:19:34,873 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.040) 0:00:01.728 *****
2025-12-13 00:19:34,873 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.040) 0:00:01.727 *****
2025-12-13 00:19:35,070 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:35,089 p=32461 u=zuul n=ansible | TASK [openshift_login : Assert that enough data is provided to log in to OpenShift that=cifmw_openshift_login_kubeconfig_stat.stat.exists or (cifmw_openshift_login_provided_token is defined and cifmw_openshift_login_provided_token != '') or ( (cifmw_openshift_login_user is defined) and (cifmw_openshift_login_password is defined) and (cifmw_openshift_login_api is defined) ), msg=If an existing kubeconfig is not provided user/pwd or provided/initial token and API URL must be given] ***
2025-12-13 00:19:35,089 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.215) 0:00:01.943 *****
2025-12-13 00:19:35,089 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.215) 0:00:01.943 *****
2025-12-13 00:19:35,133 p=32461 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed
2025-12-13 00:19:35,145 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch kubeconfig content src={{ cifmw_openshift_login_kubeconfig }}] ***
2025-12-13 00:19:35,145 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.056) 0:00:02.000 *****
2025-12-13 00:19:35,146 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.056) 0:00:01.999 *****
2025-12-13 00:19:35,182 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:35,207 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch x509 key based users cifmw_openshift_login_key_based_users={{ ( cifmw_openshift_login_kubeconfig_content_b64.content | b64decode | from_yaml ). users | default([]) | selectattr('user.client-certificate-data', 'defined') | map(attribute="name") | map("split", "/") | map("first") }}, cacheable=True] ***
2025-12-13 00:19:35,208 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.062) 0:00:02.062 *****
2025-12-13 00:19:35,208 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.062) 0:00:02.061 *****
2025-12-13 00:19:35,242 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:35,256 p=32461 u=zuul n=ansible | TASK [openshift_login : Assign key based user if not provided and available cifmw_openshift_login_user={{ (cifmw_openshift_login_assume_cert_system_user | ternary('system:', '')) + (cifmw_openshift_login_key_based_users | map('replace', 'system:', '') | unique | first) }}, cifmw_openshift_login_cert_login=True, cacheable=True] ***
2025-12-13 00:19:35,257 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.049) 0:00:02.111 *****
2025-12-13 00:19:35,257 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.049) 0:00:02.110 *****
2025-12-13 00:19:35,289 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:35,301 p=32461 u=zuul n=ansible | TASK [openshift_login : Set the retry count cifmw_openshift_login_retries_cnt={{ 0 if cifmw_openshift_login_retries_cnt is undefined else cifmw_openshift_login_retries_cnt|int + 1 }}] ***
2025-12-13 00:19:35,301 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.044) 0:00:02.155 *****
2025-12-13 00:19:35,301 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.044) 0:00:02.155 *****
2025-12-13 00:19:35,332 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:35,344 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch token _raw_params=try_login.yml] *****************
2025-12-13 00:19:35,344 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.043) 0:00:02.198 *****
2025-12-13 00:19:35,344 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.043) 0:00:02.198 *****
2025-12-13 00:19:35,374 p=32461 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/try_login.yml for localhost
2025-12-13 00:19:35,392 p=32461 u=zuul n=ansible | TASK [openshift_login : Try get OpenShift access token _raw_params=oc whoami -t] ***
2025-12-13 00:19:35,392 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.047) 0:00:02.246 *****
2025-12-13 00:19:35,392 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.047) 0:00:02.246 *****
2025-12-13 00:19:35,410 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:35,421 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift token output_dir={{ cifmw_openshift_login_basedir }}/artifacts, script=oc login {%- if cifmw_openshift_login_provided_token is not defined %} {%- if cifmw_openshift_login_user is defined %} -u {{ cifmw_openshift_login_user }} {%- endif %} {%- if cifmw_openshift_login_password is defined %} -p {{ cifmw_openshift_login_password }} {%- endif %} {% else %} --token={{ cifmw_openshift_login_provided_token }} {%- endif %} {%- if cifmw_openshift_login_skip_tls_verify|bool %} --insecure-skip-tls-verify=true {%- endif %} {%- if cifmw_openshift_login_api is defined %} {{ cifmw_openshift_login_api }} {%- endif %}] ***
2025-12-13 00:19:35,422 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.029) 0:00:02.276 *****
2025-12-13 00:19:35,422 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.029) 0:00:02.275 *****
2025-12-13 00:19:35,481 p=32461 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_000_fetch_openshift.log
2025-12-13 00:19:35,914 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:35,925 p=32461 u=zuul n=ansible | TASK [openshift_login : Ensure kubeconfig is provided that=cifmw_openshift_login_kubeconfig != ""] ***
2025-12-13 00:19:35,925 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.503) 0:00:02.779 *****
2025-12-13 00:19:35,925 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.503) 0:00:02.778 *****
2025-12-13 00:19:35,959 p=32461 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed
2025-12-13 00:19:35,970 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch new OpenShift access token _raw_params=oc whoami -t] ***
2025-12-13 00:19:35,970 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.045) 0:00:02.825 *****
2025-12-13 00:19:35,971 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.045) 0:00:02.824 *****
2025-12-13 00:19:36,446 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:36,460 p=32461 u=zuul n=ansible | TASK [openshift_login : Set new OpenShift token cifmw_openshift_login_token={{ (not cifmw_openshift_login_new_token_out.skipped | default(false)) | ternary(cifmw_openshift_login_new_token_out.stdout, cifmw_openshift_login_whoami_out.stdout) }}, cacheable=True] ***
2025-12-13 00:19:36,460 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.489) 0:00:03.314 *****
2025-12-13 00:19:36,460 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.489) 0:00:03.314 *****
2025-12-13 00:19:36,486 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:36,499 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift API URL _raw_params=oc whoami --show-server=true] ***
2025-12-13 00:19:36,499 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.038) 0:00:03.353 *****
2025-12-13 00:19:36,499 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.038) 0:00:03.352 *****
2025-12-13 00:19:36,832 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:36,855 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift kubeconfig context _raw_params=oc whoami -c] ***
2025-12-13 00:19:36,855 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.356) 0:00:03.709 *****
2025-12-13 00:19:36,855 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.356) 0:00:03.709 *****
2025-12-13 00:19:37,212 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:37,224 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift current user _raw_params=oc whoami] ****
2025-12-13 00:19:37,224 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.368) 0:00:04.078 *****
2025-12-13 00:19:37,224 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.368) 0:00:04.078 *****
2025-12-13 00:19:37,566 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:37,579 p=32461 u=zuul n=ansible | TASK [openshift_login : Set OpenShift user, context and API facts cifmw_openshift_login_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_login_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_login_user={{ _oauth_user }}, cifmw_openshift_kubeconfig={{ cifmw_openshift_login_kubeconfig }}, cifmw_openshift_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_user={{ _oauth_user }}, cifmw_openshift_token={{ cifmw_openshift_login_token | default(omit) }}, cifmw_install_yamls_environment={{ ( cifmw_install_yamls_environment | combine({'KUBECONFIG': cifmw_openshift_login_kubeconfig}) ) if cifmw_install_yamls_environment is defined else omit }}, cacheable=True] ***
2025-12-13 00:19:37,579 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.355) 0:00:04.434 *****
2025-12-13 00:19:37,579 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.355) 0:00:04.433 *****
2025-12-13 00:19:37,611 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:37,624 p=32461 u=zuul n=ansible | TASK [openshift_login : Create the openshift_login parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/openshift-login-params.yml, content={{ cifmw_openshift_login_params_content | from_yaml | to_nice_yaml }}, mode=0600] ***
2025-12-13 00:19:37,624 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.044) 0:00:04.478 *****
2025-12-13 00:19:37,624 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.044) 0:00:04.477 *****
2025-12-13 00:19:38,194 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:38,204 p=32461 u=zuul n=ansible | TASK [openshift_login : Read the install yamls parameters file path={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml] ***
2025-12-13 00:19:38,205 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:38 +0000 (0:00:00.580) 0:00:05.059 *****
2025-12-13 00:19:38,205 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:38 +0000 (0:00:00.580) 0:00:05.058 *****
2025-12-13 00:19:38,509 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:38,531 p=32461 u=zuul n=ansible | TASK [openshift_login : Append the KUBECONFIG to the install yamls parameters content={{ cifmw_openshift_login_install_yamls_artifacts_slurp['content'] | b64decode | from_yaml | combine( { 'cifmw_install_yamls_environment': { 'KUBECONFIG': cifmw_openshift_login_kubeconfig } }, recursive=true) | to_nice_yaml }}, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, mode=0600] ***
2025-12-13 00:19:38,531 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:38 +0000 (0:00:00.326) 0:00:05.385 *****
2025-12-13 00:19:38,531 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:38 +0000 (0:00:00.326) 0:00:05.385 *****
2025-12-13 00:19:39,051 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:39,073 p=32461 u=zuul n=ansible | TASK [openshift_setup : Ensure output directory exists path={{ cifmw_openshift_setup_basedir }}/artifacts, state=directory, mode=0755] ***
2025-12-13 00:19:39,073 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.542) 0:00:05.928 *****
2025-12-13 00:19:39,074 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.542) 0:00:05.927 *****
2025-12-13 00:19:39,294 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:39,316 p=32461 u=zuul n=ansible | TASK [openshift_setup : Fetch namespaces to create cifmw_openshift_setup_namespaces={{ (( ([cifmw_install_yamls_defaults['NAMESPACE']] + ([cifmw_install_yamls_defaults['OPERATOR_NAMESPACE']] if 'OPERATOR_NAMESPACE' is in cifmw_install_yamls_defaults else []) ) if cifmw_install_yamls_defaults is defined else [] ) + cifmw_openshift_setup_create_namespaces) | unique }}] ***
2025-12-13 00:19:39,317 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.243) 0:00:06.171 *****
2025-12-13 00:19:39,317 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.243) 0:00:06.171 *****
2025-12-13 00:19:39,344 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:39,369 p=32461 u=zuul n=ansible | TASK [openshift_setup : Create required namespaces kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ item }}, kind=Namespace, state=present] ***
2025-12-13 00:19:39,370 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.052) 0:00:06.224 *****
2025-12-13 00:19:39,370 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.052) 0:00:06.223 *****
2025-12-13 00:19:40,355 p=32461 u=zuul n=ansible | changed: [localhost] => (item=openstack)
2025-12-13 00:19:41,109 p=32461 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators)
2025-12-13 00:19:41,123 p=32461 u=zuul n=ansible | TASK [openshift_setup : Get internal OpenShift registry route kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=Route, name=default-route, namespace=openshift-image-registry] ***
2025-12-13 00:19:41,123 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:41 +0000 (0:00:01.753) 0:00:07.977 *****
2025-12-13 00:19:41,123 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:41 +0000 (0:00:01.753) 0:00:07.977 *****
2025-12-13 00:19:42,173 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:42,184 p=32461 u=zuul n=ansible | TASK [openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces state=present, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'kind': 'RoleBinding', 'apiVersion': 'rbac.authorization.k8s.io/v1', 'metadata': {'name': 'system:image-puller', 'namespace': '{{ item }}'}, 'subjects': [{'kind': 'User', 'name': 'system:anonymous'}, {'kind': 'User', 'name': 'system:unauthenticated'}], 'roleRef': {'kind': 'ClusterRole', 'name': 'system:image-puller'}}] ***
2025-12-13 00:19:42,185 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:42 +0000 (0:00:01.061) 0:00:09.039 *****
2025-12-13 00:19:42,185 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:42 +0000 (0:00:01.061) 0:00:09.038 *****
2025-12-13 00:19:42,946 p=32461 u=zuul n=ansible | changed: [localhost] => (item=openstack)
2025-12-13 00:19:43,635 p=32461 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators)
2025-12-13 00:19:43,661 p=32461 u=zuul n=ansible | TASK [openshift_setup : Wait for the image registry to be ready kind=Deployment, name=image-registry, namespace=openshift-image-registry, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Available', 'status': 'True'}] ***
2025-12-13 00:19:43,661 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:43 +0000 (0:00:01.476) 0:00:10.516 *****
2025-12-13 00:19:43,661 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:43 +0000 (0:00:01.476) 0:00:10.515 *****
2025-12-13 00:19:44,614 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:44,646 p=32461 u=zuul n=ansible | TASK [openshift_setup : Login into OpenShift internal registry output_dir={{ cifmw_openshift_setup_basedir }}/artifacts, script=podman login -u {{ cifmw_openshift_user }} -p {{ cifmw_openshift_token }} {%- if cifmw_openshift_setup_skip_internal_registry_tls_verify|bool %} --tls-verify=false {%- endif %} {{ cifmw_openshift_setup_registry_default_route.resources[0].spec.host }}] ***
2025-12-13 00:19:44,646 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:44 +0000 (0:00:00.985) 0:00:11.501 *****
2025-12-13 00:19:44,647 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:44 +0000 (0:00:00.985) 0:00:11.500 *****
2025-12-13 00:19:44,717 p=32461 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_001_login_into_openshift_internal.log
2025-12-13 00:19:44,986 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:45,005 p=32461 u=zuul n=ansible | TASK [Ensure we have custom CA installed on host role=install_ca] **************
2025-12-13 00:19:45,005 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.358) 0:00:11.859 *****
2025-12-13 00:19:45,005 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.358) 0:00:11.859 *****
2025-12-13 00:19:45,028 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,048 p=32461 u=zuul n=ansible | TASK [openshift_setup : Update ca bundle _raw_params=update-ca-trust extract] ***
2025-12-13 00:19:45,049 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.043) 0:00:11.903 *****
2025-12-13 00:19:45,049 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.043) 0:00:11.902 *****
2025-12-13 00:19:45,070 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,117 p=32461 u=zuul n=ansible | TASK [openshift_setup : Slurp CAs file src={{ cifmw_openshift_setup_ca_bundle_path }}] ***
2025-12-13 00:19:45,118 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.069) 0:00:11.972 *****
2025-12-13 00:19:45,118 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.068) 0:00:11.971 *****
2025-12-13 00:19:45,137 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,147 p=32461 u=zuul n=ansible | TASK [openshift_setup : Create config map with registry CAs kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'namespace': 'openshift-config', 'name': 'registry-cas'}, 'data': '{{ _config_map_data | items2dict }}'}] ***
2025-12-13 00:19:45,148 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.030) 0:00:12.002 *****
2025-12-13 00:19:45,148 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.030) 0:00:12.001 *****
2025-12-13 00:19:45,182 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,192 p=32461 u=zuul n=ansible | TASK [openshift_setup : Install Red Hat CA for pulling images from internal registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'additionalTrustedCA': {'name': 'registry-cas'}}}] ***
2025-12-13 00:19:45,192 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.044) 0:00:12.046 *****
2025-12-13 00:19:45,192 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.044) 0:00:12.046 *****
2025-12-13 00:19:45,227 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,239 p=32461 u=zuul n=ansible | TASK [openshift_setup : Add insecure registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'registrySources': {'insecureRegistries': ['{{ cifmw_update_containers_registry }}'], 'allowedRegistries': '{{ all_registries }}'}}}] ***
2025-12-13 00:19:45,239 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.047) 0:00:12.094 *****
2025-12-13 00:19:45,239 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.047) 0:00:12.093 *****
2025-12-13 00:19:45,263 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,273 p=32461 u=zuul n=ansible | TASK [openshift_setup : Create a ICSP with repository digest mirrors kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'operator.openshift.io/v1alpha1', 'kind': 'ImageContentSourcePolicy', 'metadata': {'name': 'registry-digest-mirrors'}, 'spec': {'repositoryDigestMirrors': '{{ cifmw_openshift_setup_digest_mirrors }}'}}] ***
2025-12-13 00:19:45,273 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.033) 0:00:12.127 *****
2025-12-13 00:19:45,273 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.033) 0:00:12.127 *****
2025-12-13 00:19:45,306 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,320 p=32461 u=zuul n=ansible | TASK [openshift_setup : Gather network.operator info kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=operator.openshift.io/v1, kind=Network, name=cluster] ***
2025-12-13 00:19:45,320 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.047) 0:00:12.174 *****
2025-12-13 00:19:45,320 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.047) 0:00:12.174 *****
2025-12-13 00:19:46,142 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:46,164 p=32461 u=zuul n=ansible | TASK [openshift_setup : Patch network operator api_version=operator.openshift.io/v1, kubeconfig={{ cifmw_openshift_kubeconfig }}, kind=Network, name=cluster, persist_config=True, patch=[{'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/routingViaHost', 'value': True, 'op': 'replace'}, {'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/ipForwarding', 'value': 'Global', 'op': 'replace'}]] ***
2025-12-13 00:19:46,164 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:46 +0000 (0:00:00.843) 0:00:13.018 *****
2025-12-13 00:19:46,164 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:46 +0000 (0:00:00.843) 0:00:13.018 *****
2025-12-13 00:19:47,150 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:47,161 p=32461 u=zuul n=ansible | TASK [openshift_setup : Patch samples registry configuration kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=samples.operator.openshift.io/v1, kind=Config, name=cluster, patch=[{'op': 'replace', 'path': '/spec/samplesRegistry', 'value': 'registry.redhat.io'}]] ***
2025-12-13 00:19:47,161 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.996) 0:00:14.015 *****
2025-12-13 00:19:47,161 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.996) 0:00:14.015 *****
2025-12-13 00:19:47,890 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:47,904 p=32461 u=zuul n=ansible | TASK [openshift_setup : Delete the pods from openshift-marketplace namespace kind=Pod, state=absent, delete_all=True, kubeconfig={{ cifmw_openshift_kubeconfig }}, namespace=openshift-marketplace] ***
2025-12-13 00:19:47,904 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.743) 0:00:14.758 *****
2025-12-13 00:19:47,904 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.743) 0:00:14.758 *****
2025-12-13 00:19:47,923 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:47,937 p=32461 u=zuul n=ansible | TASK [openshift_setup : Wait for openshift-marketplace pods to be running _raw_params=oc wait pod --all --for=condition=Ready -n openshift-marketplace --timeout=1m] ***
2025-12-13 00:19:47,937 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.032) 0:00:14.791 *****
2025-12-13 00:19:47,937 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.032) 0:00:14.791 *****
2025-12-13 00:19:47,964 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:48,012 p=32461 u=zuul n=ansible | TASK [Deploy Observability operator. name=openshift_obs] ***********************
2025-12-13 00:19:48,013 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.075) 0:00:14.867 *****
2025-12-13 00:19:48,013 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.075) 0:00:14.866 *****
2025-12-13 00:19:48,045 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:48,054 p=32461 u=zuul n=ansible | TASK [Deploy Metal3 BMHs name=deploy_bmh] **************************************
2025-12-13 00:19:48,055 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.042) 0:00:14.909 *****
2025-12-13 00:19:48,055 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.042) 0:00:14.908 *****
2025-12-13 00:19:48,081 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:48,093 p=32461 u=zuul n=ansible | TASK [Install certmanager operator role name=cert_manager] *********************
2025-12-13 00:19:48,093 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.038) 0:00:14.947 *****
2025-12-13 00:19:48,093 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.038) 0:00:14.947 *****
2025-12-13 00:19:48,115 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:48,126 p=32461 u=zuul n=ansible | TASK [Configure hosts networking using nmstate name=ci_nmstate] ****************
2025-12-13 00:19:48,126 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.033) 0:00:14.980 *****
2025-12-13 00:19:48,126 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.033) 0:00:14.980 *****
2025-12-13 00:19:48,148 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:48,159 p=32461 u=zuul n=ansible | TASK [Configure multus networks name=ci_multus] ********************************
2025-12-13 00:19:48,159 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.032) 0:00:15.013 *****
2025-12-13 00:19:48,159 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.033) 0:00:15.013 *****
2025-12-13 00:19:48,179 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:48,192 p=32461 u=zuul n=ansible | TASK [Deploy Sushy Emulator service pod name=sushy_emulator] *******************
2025-12-13 00:19:48,192 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.032) 0:00:15.046 *****
2025-12-13 00:19:48,192 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.032) 0:00:15.046 *****
2025-12-13 00:19:48,212 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:48,223 p=32461 u=zuul n=ansible | TASK [Setup Libvirt on controller name=libvirt_manager] ************************
2025-12-13 00:19:48,223 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.030) 0:00:15.077 *****
2025-12-13 00:19:48,223 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.030) 0:00:15.076 *****
2025-12-13 00:19:48,241 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:48,251 p=32461 u=zuul n=ansible | TASK [Prepare container package builder name=pkg_build] ************************
2025-12-13 00:19:48,251 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.028) 0:00:15.106 *****
2025-12-13 00:19:48,252 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.028) 0:00:15.105 *****
2025-12-13 00:19:48,286 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:48,299 p=32461 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] ***
2025-12-13 00:19:48,299 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.047) 0:00:15.154 *****
2025-12-13 00:19:48,299 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.047) 0:00:15.153 *****
2025-12-13 00:19:48,353 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:48,363 p=32461 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] ***
2025-12-13 00:19:48,363 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.063) 0:00:15.217 *****
2025-12-13 00:19:48,363 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.063) 0:00:15.217 *****
2025-12-13 00:19:48,423 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:48,434 p=32461 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_infra _raw_params={{ hook.type }}.yml] ***
2025-12-13 00:19:48,434 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.071) 0:00:15.289 *****
2025-12-13 00:19:48,435 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.071) 0:00:15.288 *****
2025-12-13 00:19:48,491 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | PLAY RECAP *********************************************************************
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | localhost : ok=35 changed=12 unreachable=0 failed=0 skipped=34 rescued=0 ignored=0
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.096) 0:00:15.385 *****
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | ===============================================================================
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Create required namespaces ---------------------------- 1.75s
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces --- 1.48s
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Get internal OpenShift registry route ----------------- 1.06s
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Patch network operator -------------------------------- 1.00s
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Wait for the image registry to be ready --------------- 0.99s
2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Gather network.operator info -------------------------- 0.84s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_setup : Patch samples registry configuration ------------------ 0.74s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Create the openshift_login parameters file ------------ 0.58s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Append the KUBECONFIG to the install yamls parameters --- 0.54s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch OpenShift token --------------------------------- 0.50s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch new OpenShift access token ---------------------- 0.49s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch OpenShift kubeconfig context -------------------- 0.37s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Ensure output directory exists ------------------------ 0.37s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_setup : Login into OpenShift internal registry ---------------- 0.36s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch OpenShift API URL ------------------------------- 0.36s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch OpenShift current user -------------------------- 0.36s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Read the install yamls parameters file ---------------- 0.33s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | networking_mapper : Check for Networking Environment Definition file existence --- 0.30s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_setup : Ensure output directory exists ------------------------ 0.24s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Check if kubeconfig exists ---------------------------- 0.22s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.097) 0:00:15.385 *****
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | ===============================================================================
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_setup --------------------------------------------------------- 8.94s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login --------------------------------------------------------- 4.78s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | ansible.builtin.include_role -------------------------------------------- 0.51s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | run_hook ---------------------------------------------------------------- 0.51s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | networking_mapper ------------------------------------------------------- 0.43s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | ansible.builtin.include_vars -------------------------------------------- 0.17s
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | total ------------------------------------------------------------------ 15.34s
2025-12-13 00:20:09,545 p=33055 u=zuul n=ansible | Starting galaxy collection install process
2025-12-13 00:20:09,565 p=33055 u=zuul n=ansible | Process install dependency map
2025-12-13 00:20:29,957 p=33055 u=zuul n=ansible | Starting collection install process
2025-12-13 00:20:29,957 p=33055 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+b9f05e2b' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general'
2025-12-13 00:20:30,598 p=33055 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+b9f05e2b at /home/zuul/.ansible/collections/ansible_collections/cifmw/general
2025-12-13 00:20:30,599 p=33055 u=zuul n=ansible | cifmw.general:1.0.0+b9f05e2b was installed successfully
2025-12-13 00:20:30,599 p=33055 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman'
2025-12-13 00:20:30,665 p=33055 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman
2025-12-13 00:20:30,665 p=33055 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully
2025-12-13 00:20:30,665 p=33055 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general'
2025-12-13 00:20:31,594 p=33055 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general
2025-12-13 00:20:31,595 p=33055 u=zuul n=ansible | community.general:10.0.1 was installed successfully
2025-12-13 00:20:31,595 p=33055 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix'
2025-12-13 00:20:31,654 p=33055 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix
2025-12-13 00:20:31,655 p=33055 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully
2025-12-13 00:20:31,655 p=33055 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils'
2025-12-13 00:20:31,775 p=33055 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils
2025-12-13 00:20:31,775 p=33055 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully
2025-12-13 00:20:31,775 p=33055 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt'
2025-12-13 00:20:31,818 p=33055 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt
2025-12-13 00:20:31,818 p=33055 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully
2025-12-13 00:20:31,818 p=33055 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto'
2025-12-13 00:20:32,018 p=33055 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto
2025-12-13 00:20:32,018 p=33055 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully
2025-12-13 00:20:32,018 p=33055 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core'
2025-12-13 00:20:32,191 p=33055 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core
2025-12-13 00:20:32,191 p=33055 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully
2025-12-13 00:20:32,191 p=33055 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon'
2025-12-13 00:20:32,301 p=33055 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon
2025-12-13 00:20:32,301 p=33055 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully
2025-12-13 00:20:32,301 p=33055 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template'
2025-12-13 00:20:32,325 p=33055 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template
2025-12-13 00:20:32,325 p=33055 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully
2025-12-13 00:20:32,325 p=33055 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos'
2025-12-13 00:20:32,624 p=33055 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos
2025-12-13 00:20:32,624 p=33055 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully
2025-12-13 00:20:32,624 p=33055 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios'
2025-12-13 00:20:32,953 p=33055 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios
2025-12-13 00:20:32,954 p=33055 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully
2025-12-13 00:20:32,954 p=33055 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx'
2025-12-13 00:20:33,007 p=33055 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx
2025-12-13 00:20:33,007 p=33055 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully
2025-12-13 00:20:33,007 p=33055 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd'
2025-12-13 00:20:33,066 p=33055 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd
2025-12-13 00:20:33,067 p=33055 u=zuul n=ansible | community.okd:4.0.0 was installed successfully
2025-12-13 00:20:33,067 p=33055 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@'
2025-12-13 00:20:33,186 p=33055 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@
2025-12-13 00:20:33,186 p=33055 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully
home/zuul/zuul-output/logs/ci-framework-data/logs/openstack-must-gather/0000755000175000017500000000000015117130654025557 5ustar zuulzuul
home/zuul/zuul-output/logs/ci-framework-data/logs/openstack-must-gather/must-gather.logs0000644000175000017500000001033315117130637030706 0ustar zuulzuul
[must-gather ] OUT 2025-12-13T00:21:50.316471542Z Using must-gather plug-in image: quay.io/openstack-k8s-operators/openstack-must-gather:latest
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: a84dabf3-edcf-4828-b6a1-f9d3a6f02304
ClientVersion: 4.20.6
ClusterVersion: Stable at "4.16.0"
ClusterOperators:
clusteroperator/authentication is not available (WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)) because All is well
clusteroperator/kube-apiserver is progressing: NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13
clusteroperator/machine-config is degraded because Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused
clusteroperator/cloud-credential is missing
clusteroperator/cluster-autoscaler is missing
clusteroperator/insights is missing
clusteroperator/monitoring is missing
clusteroperator/storage is missing
[must-gather ] OUT 2025-12-13T00:21:50.360262922Z namespace/openshift-must-gather-zffxd created
[must-gather ] OUT 2025-12-13T00:21:50.363875481Z clusterrolebinding.rbac.authorization.k8s.io/must-gather-k4ftt created
[must-gather ] OUT 2025-12-13T00:21:51.446119591Z namespace/openshift-must-gather-zffxd deleted
Reprinting Cluster State:
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: a84dabf3-edcf-4828-b6a1-f9d3a6f02304
ClientVersion: 4.20.6
ClusterVersion: Stable at "4.16.0"
ClusterOperators:
clusteroperator/authentication is not available (WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)) because All is well
clusteroperator/kube-apiserver is progressing: NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13
clusteroperator/machine-config is degraded because Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused
clusteroperator/cloud-credential is missing
clusteroperator/cluster-autoscaler is missing
clusteroperator/insights is missing
clusteroperator/monitoring is missing
clusteroperator/storage is missing
home/zuul/zuul-output/logs/ci-framework-data/logs/openstack-must-gather/timestamp0000644000175000017500000000015615117130637027510 0ustar zuulzuul
2025-12-13 00:21:50.366177968 +0000 UTC m=+0.141738075
2025-12-13 00:21:51.441491577 +0000 UTC m=+1.217051724
home/zuul/zuul-output/logs/ci-framework-data/logs/openstack-must-gather/event-filter.html0000644000175000017500000000641015117130637031053 0ustar zuulzuul
Events
Time Namespace Component RelatedObject Reason Message
home/zuul/zuul-output/logs/ci-framework-data/logs/2025-12-13_00-21/0000775000175000017500000000000015117130663023104 5ustar zuulzuul
home/zuul/zuul-output/logs/ci-framework-data/logs/2025-12-13_00-21/ansible.log0000666000175000017500000046041315117130521025227 0ustar zuulzuul
2025-12-13 00:17:56,890 p=31003 u=zuul n=ansible | Starting galaxy collection install process
2025-12-13 00:17:56,891 p=31003 u=zuul n=ansible | Process install dependency map
2025-12-13 00:18:20,697 p=31003 u=zuul n=ansible | Starting collection install process
2025-12-13 00:18:20,698 p=31003 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+b9f05e2b' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general'
2025-12-13 00:18:21,183 p=31003 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+b9f05e2b at /home/zuul/.ansible/collections/ansible_collections/cifmw/general
2025-12-13 00:18:21,183 p=31003 u=zuul n=ansible | cifmw.general:1.0.0+b9f05e2b was installed successfully
2025-12-13 00:18:21,183 p=31003 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman'
2025-12-13 00:18:21,240 p=31003 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman
2025-12-13 00:18:21,240 p=31003 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully
2025-12-13 00:18:21,240 p=31003 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general'
2025-12-13 00:18:21,969 p=31003 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general
2025-12-13 00:18:21,969 p=31003 u=zuul n=ansible | community.general:10.0.1 was installed successfully
2025-12-13 00:18:21,969 p=31003 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix'
2025-12-13 00:18:22,020 p=31003 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix
2025-12-13 00:18:22,020 p=31003 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully
2025-12-13 00:18:22,020 p=31003 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils'
2025-12-13 00:18:22,116 p=31003 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils
2025-12-13 00:18:22,117 p=31003 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully
2025-12-13 00:18:22,117 p=31003 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt'
2025-12-13 00:18:22,141 p=31003 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt
2025-12-13 00:18:22,141 p=31003 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully
2025-12-13 00:18:22,141 p=31003 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto'
2025-12-13 00:18:22,284 p=31003 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto
2025-12-13 00:18:22,284 p=31003 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully
2025-12-13 00:18:22,284 p=31003 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core'
2025-12-13 00:18:22,401 p=31003 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core
2025-12-13 00:18:22,401 p=31003 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully
2025-12-13 00:18:22,401 p=31003 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon'
2025-12-13 00:18:22,472 p=31003 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon
2025-12-13 00:18:22,472 p=31003 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully
2025-12-13 00:18:22,472 p=31003 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template'
2025-12-13 00:18:22,492 p=31003 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template
2025-12-13 00:18:22,492 p=31003 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully
2025-12-13 00:18:22,492 p=31003 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos'
2025-12-13 00:18:22,717 p=31003 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos
2025-12-13 00:18:22,718 p=31003 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully
2025-12-13 00:18:22,718 p=31003 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios'
2025-12-13 00:18:22,959 p=31003 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios
2025-12-13 00:18:22,959 p=31003 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully
2025-12-13 00:18:22,959 p=31003 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx'
2025-12-13 00:18:22,990 p=31003 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx
2025-12-13 00:18:22,990 p=31003 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully
2025-12-13 00:18:22,991 p=31003 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd'
2025-12-13 00:18:23,018 p=31003 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd
2025-12-13 00:18:23,018 p=31003 u=zuul n=ansible | community.okd:4.0.0 was installed successfully
2025-12-13 00:18:23,018 p=31003 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@'
2025-12-13 00:18:23,105 p=31003 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@
2025-12-13 00:18:23,105 p=31003 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully
2025-12-13 00:18:31,182 p=31605 u=zuul n=ansible | PLAY [Bootstrap playbook] ******************************************************
2025-12-13 00:18:31,200 p=31605 u=zuul n=ansible | TASK [Gathering Facts ] ********************************************************
2025-12-13 00:18:31,200 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:31 +0000 (0:00:00.036) 0:00:00.036 *****
2025-12-13 00:18:31,200 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:31 +0000 (0:00:00.034) 0:00:00.034 *****
2025-12-13 00:18:32,216 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:32,238 p=31605 u=zuul n=ansible | TASK [Set custom cifmw PATH reusable fact cifmw_path={{ ansible_user_dir }}/.crc/bin:{{ ansible_user_dir }}/.crc/bin/oc:{{ ansible_user_dir }}/bin:{{ ansible_env.PATH }}, cacheable=True] ***
2025-12-13 00:18:32,239 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:01.038) 0:00:01.074 *****
2025-12-13 00:18:32,239 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:01.038) 0:00:01.073 *****
2025-12-13 00:18:32,270 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:32,276 p=31605 u=zuul n=ansible | TASK [Get customized parameters ci_framework_params={{ hostvars[inventory_hostname] | dict2items | selectattr("key", "match", "^(cifmw|pre|post)_(?!install_yamls|openshift_token|openshift_login|openshift_kubeconfig).*") | list | items2dict }}] ***
2025-12-13 00:18:32,277 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.038) 0:00:01.112 *****
2025-12-13 00:18:32,277 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.038) 0:00:01.111 *****
2025-12-13 00:18:32,327 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:32,335 p=31605 u=zuul n=ansible | TASK [install_ca : Ensure target directory exists path={{ cifmw_install_ca_trust_dir }}, state=directory, mode=0755] ***
2025-12-13 00:18:32,335 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.058) 0:00:01.170 *****
2025-12-13 00:18:32,335 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.058) 0:00:01.169 *****
2025-12-13 00:18:32,680 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:32,687 p=31605 u=zuul n=ansible | TASK [install_ca : Install internal CA from url url={{ cifmw_install_ca_url }}, dest={{ cifmw_install_ca_trust_dir }}, validate_certs={{ cifmw_install_ca_url_validate_certs | default(omit) }}, mode=0644] ***
2025-12-13 00:18:32,687 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.351) 0:00:01.522 *****
2025-12-13 00:18:32,687 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.351) 0:00:01.521 *****
2025-12-13 00:18:32,711 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:32,721 p=31605 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from inline dest={{ cifmw_install_ca_trust_dir }}/cifmw_inline_ca_bundle.crt, content={{ cifmw_install_ca_bundle_inline }}, mode=0644] ***
2025-12-13 00:18:32,721 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.033) 0:00:01.556 *****
2025-12-13 00:18:32,721 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.034) 0:00:01.555 *****
2025-12-13 00:18:32,747 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:32,757 p=31605 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from file dest={{ cifmw_install_ca_trust_dir }}/{{ cifmw_install_ca_bundle_src | basename }}, src={{ cifmw_install_ca_bundle_src }}, mode=0644] ***
2025-12-13 00:18:32,757 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.036) 0:00:01.593 *****
2025-12-13 00:18:32,757 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.036) 0:00:01.591 *****
2025-12-13 00:18:32,782 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:32,791 p=31605 u=zuul n=ansible | TASK [install_ca : Update ca bundle _raw_params=update-ca-trust] ***************
2025-12-13 00:18:32,792 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.034) 0:00:01.627 *****
2025-12-13 00:18:32,792 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:32 +0000 (0:00:00.034) 0:00:01.626 *****
2025-12-13 00:18:34,310 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:34,321 p=31605 u=zuul n=ansible | TASK [repo_setup : Ensure directories are present path={{ cifmw_repo_setup_basedir }}/{{ item }}, state=directory, mode=0755] ***
2025-12-13 00:18:34,321 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:34 +0000 (0:00:01.529) 0:00:03.157 *****
2025-12-13 00:18:34,322 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:34 +0000 (0:00:01.529) 0:00:03.155 *****
2025-12-13 00:18:34,511 p=31605 u=zuul n=ansible | changed: [localhost] => (item=tmp)
2025-12-13 00:18:34,690 p=31605 u=zuul n=ansible | changed: [localhost] => (item=artifacts/repositories)
2025-12-13 00:18:34,853 p=31605 u=zuul n=ansible | changed: [localhost] => (item=venv/repo_setup)
2025-12-13 00:18:34,864 p=31605 u=zuul n=ansible | TASK [repo_setup : Make sure git-core package is installed name=git-core, state=present] ***
2025-12-13 00:18:34,864 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:34 +0000 (0:00:00.542) 0:00:03.699 *****
2025-12-13 00:18:34,864 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:34 +0000 (0:00:00.542) 0:00:03.698 *****
2025-12-13 00:18:35,899 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:35,907 p=31605 u=zuul n=ansible | TASK [repo_setup : Get repo-setup repository accept_hostkey=True, dest={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, repo={{ cifmw_repo_setup_src }}] ***
2025-12-13 00:18:35,907 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:35 +0000 (0:00:01.043) 0:00:04.743 *****
2025-12-13 00:18:35,907 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:35 +0000 (0:00:01.043) 0:00:04.741 *****
2025-12-13 00:18:36,860 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:36,870 p=31605 u=zuul n=ansible | TASK [repo_setup : Initialize python venv and install requirements virtualenv={{ cifmw_repo_setup_venv }}, requirements={{ cifmw_repo_setup_basedir }}/tmp/repo-setup/requirements.txt, virtualenv_command=python3 -m venv --system-site-packages --upgrade-deps] ***
2025-12-13 00:18:36,870 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:36 +0000 (0:00:00.962) 0:00:05.705 *****
2025-12-13 00:18:36,870 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:36 +0000 (0:00:00.962) 0:00:05.704 *****
2025-12-13 00:18:46,000 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:46,006 p=31605 u=zuul n=ansible | TASK [repo_setup : Install repo-setup package chdir={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, creates={{ cifmw_repo_setup_venv }}/bin/repo-setup, _raw_params={{ cifmw_repo_setup_venv }}/bin/python setup.py install] ***
2025-12-13 00:18:46,006 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:09.136) 0:00:14.842 *****
2025-12-13 00:18:46,006 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:09.136) 0:00:14.840 *****
2025-12-13 00:18:46,824 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:46,832 p=31605 u=zuul n=ansible | TASK [repo_setup : Set cifmw_repo_setup_dlrn_hash_tag from content provider cifmw_repo_setup_dlrn_hash_tag={{ content_provider_dlrn_md5_hash }}] ***
2025-12-13 00:18:46,832 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:00.825) 0:00:15.667 *****
2025-12-13 00:18:46,832 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:00.825) 0:00:15.666 *****
2025-12-13 00:18:46,850 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:46,856 p=31605 u=zuul n=ansible | TASK [repo_setup : Run repo-setup _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup {{ cifmw_repo_setup_promotion }} {{ cifmw_repo_setup_additional_repos }} -d {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} -b {{ cifmw_repo_setup_branch }} --rdo-mirror {{ cifmw_repo_setup_rdo_mirror }} {% if cifmw_repo_setup_dlrn_hash_tag | length > 0 %} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif %} -o {{ cifmw_repo_setup_output }}] ***
2025-12-13 00:18:46,856 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:00.024) 0:00:15.691 *****
2025-12-13 00:18:46,856 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:46 +0000 (0:00:00.023) 0:00:15.690 *****
2025-12-13 00:18:47,518 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:47,533 p=31605 u=zuul n=ansible | TASK [repo_setup : Get component repo url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/component/{{ cifmw_repo_setup_component_name }}/{{ cifmw_repo_setup_component_promotion_tag }}/delorean.repo, dest={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, mode=0644] ***
2025-12-13 00:18:47,534 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.677) 0:00:16.369 *****
2025-12-13 00:18:47,534 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.677) 0:00:16.368 *****
2025-12-13 00:18:47,571 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:47,583 p=31605 u=zuul n=ansible | TASK [repo_setup : Rename component repo path={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, regexp=delorean-component-{{ cifmw_repo_setup_component_name }}, replace={{ cifmw_repo_setup_component_name }}-{{ cifmw_repo_setup_component_promotion_tag }}] ***
2025-12-13 00:18:47,584 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.049) 0:00:16.419 *****
2025-12-13 00:18:47,584 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.049) 0:00:16.418 *****
2025-12-13 00:18:47,615 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:47,624 p=31605 u=zuul n=ansible | TASK [repo_setup : Disable component repo in current-podified dlrn repo path={{ cifmw_repo_setup_output }}/delorean.repo, section=delorean-component-{{ cifmw_repo_setup_component_name }}, option=enabled, value=0, mode=0644] ***
2025-12-13 00:18:47,624 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.040) 0:00:16.460 *****
2025-12-13 00:18:47,624 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.040) 0:00:16.458 *****
2025-12-13 00:18:47,662 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:47,671 p=31605 u=zuul n=ansible | TASK [repo_setup : Run repo-setup-get-hash _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup-get-hash --dlrn-url {{ cifmw_repo_setup_dlrn_uri[:-1] }} --os-version {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} --release {{ cifmw_repo_setup_branch }} {% if cifmw_repo_setup_component_name | length > 0 -%} --component {{ cifmw_repo_setup_component_name }} --tag {{ cifmw_repo_setup_component_promotion_tag }} {% else -%} --tag {{cifmw_repo_setup_promotion }} {% endif -%} {% if (cifmw_repo_setup_dlrn_hash_tag | length > 0) and (cifmw_repo_setup_component_name | length <= 0) -%} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif -%} --json] ***
2025-12-13 00:18:47,671 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.046) 0:00:16.506 *****
2025-12-13 00:18:47,671 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:47 +0000 (0:00:00.046) 0:00:16.505 *****
2025-12-13 00:18:48,143 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:48,158 p=31605 u=zuul n=ansible | TASK [repo_setup : Dump full hash in delorean.repo.md5 file content={{ _repo_setup_json['full_hash'] }} , dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] ***
2025-12-13 00:18:48,158 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.487) 0:00:16.993 *****
2025-12-13 00:18:48,158 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.487) 0:00:16.992 *****
2025-12-13 00:18:48,917 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:18:48,924 p=31605 u=zuul n=ansible | TASK [repo_setup : Dump current-podified hash url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/current-podified/delorean.repo.md5, dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] ***
2025-12-13 00:18:48,924 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.766) 0:00:17.759 *****
2025-12-13 00:18:48,924 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.765) 0:00:17.758 *****
2025-12-13 00:18:48,939 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:48,946 p=31605 u=zuul n=ansible | TASK [repo_setup : Slurp current podified hash src={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5] ***
2025-12-13 00:18:48,946 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.021) 0:00:17.781 *****
2025-12-13 00:18:48,946 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.021) 0:00:17.780 *****
2025-12-13 00:18:48,962 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:48,970 p=31605 u=zuul n=ansible | TASK [repo_setup : Update the value of full_hash _repo_setup_json={{ _repo_setup_json | combine({'full_hash': _hash}, recursive=true) }}] ***
2025-12-13 00:18:48,970 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.024) 0:00:17.805 *****
2025-12-13 00:18:48,970 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.024) 0:00:17.804 *****
2025-12-13 00:18:48,990 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:48,999 p=31605 u=zuul n=ansible | TASK [repo_setup : Export hashes facts for further use cifmw_repo_setup_full_hash={{ _repo_setup_json['full_hash'] }}, cifmw_repo_setup_commit_hash={{ _repo_setup_json['commit_hash'] }}, cifmw_repo_setup_distro_hash={{ _repo_setup_json['distro_hash'] }}, cifmw_repo_setup_extended_hash={{ _repo_setup_json['extended_hash'] }}, cifmw_repo_setup_dlrn_api_url={{ _repo_setup_json['dlrn_api_url'] }}, cifmw_repo_setup_dlrn_url={{ _repo_setup_json['dlrn_url'] }}, cifmw_repo_setup_release={{ _repo_setup_json['release'] }}, cacheable=True] ***
2025-12-13 00:18:48,999 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.028) 0:00:17.834 *****
2025-12-13 00:18:48,999 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:48 +0000 (0:00:00.028) 0:00:17.833 *****
2025-12-13 00:18:49,024 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:18:49,030 p=31605 u=zuul n=ansible | TASK [repo_setup : Create download directory path={{ cifmw_repo_setup_rhos_release_path }}, state=directory, mode=0755] ***
2025-12-13 00:18:49,030 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.030) 0:00:17.865 *****
2025-12-13 00:18:49,030 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.030) 0:00:17.864 *****
2025-12-13 00:18:49,053 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,061 p=31605 u=zuul n=ansible | TASK [repo_setup : Print the URL to request msg={{ cifmw_repo_setup_rhos_release_rpm }}] ***
2025-12-13 00:18:49,061 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.031) 0:00:17.897 *****
2025-12-13 00:18:49,062 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.031) 0:00:17.895 *****
2025-12-13 00:18:49,077 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,084 p=31605 u=zuul n=ansible | TASK [Download the RPM name=krb_request] ***************************************
2025-12-13 00:18:49,084 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.022) 0:00:17.919 *****
2025-12-13 00:18:49,084 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.022) 0:00:17.918 *****
2025-12-13 00:18:49,097 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:18:49,103 p=31605 u=zuul n=ansible | TASK [repo_setup : Install RHOS Release tool name={{ cifmw_repo_setup_rhos_release_rpm if cifmw_repo_setup_rhos_release_rpm is not url else cifmw_krb_request_out.path }}, state=present, disable_gpg_check={{ cifmw_repo_setup_rhos_release_gpg_check | bool }}] ***
2025-12-13 00:18:49,103 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000
(0:00:00.019) 0:00:17.939 ***** 2025-12-13 00:18:49,103 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.019) 0:00:17.937 ***** 2025-12-13 00:18:49,119 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:18:49,126 p=31605 u=zuul n=ansible | TASK [repo_setup : Get rhos-release tool version _raw_params=rhos-release --version] *** 2025-12-13 00:18:49,126 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.023) 0:00:17.962 ***** 2025-12-13 00:18:49,126 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.023) 0:00:17.960 ***** 2025-12-13 00:18:49,141 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:18:49,148 p=31605 u=zuul n=ansible | TASK [repo_setup : Print rhos-release tool version msg={{ rr_version.stdout }}] *** 2025-12-13 00:18:49,148 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.021) 0:00:17.983 ***** 2025-12-13 00:18:49,148 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.021) 0:00:17.982 ***** 2025-12-13 00:18:49,161 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:18:49,170 p=31605 u=zuul n=ansible | TASK [repo_setup : Generate repos using rhos-release {{ cifmw_repo_setup_rhos_release_args }} _raw_params=rhos-release {{ cifmw_repo_setup_rhos_release_args }} \ -t {{ cifmw_repo_setup_output }}] *** 2025-12-13 00:18:49,170 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.022) 0:00:18.006 ***** 2025-12-13 00:18:49,170 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.022) 0:00:18.004 ***** 2025-12-13 00:18:49,188 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:18:49,194 p=31605 u=zuul n=ansible | TASK [repo_setup : Check for /etc/ci/mirror_info.sh path=/etc/ci/mirror_info.sh] *** 2025-12-13 00:18:49,194 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.023) 0:00:18.029 ***** 2025-12-13 
00:18:49,194 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.023) 0:00:18.028 ***** 2025-12-13 00:18:49,398 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:18:49,406 p=31605 u=zuul n=ansible | TASK [repo_setup : Use RDO proxy mirrors chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|https://trunk.rdoproject.org|$NODEPOOL_RDO_PROXY|g" *.repo ] *** 2025-12-13 00:18:49,407 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.212) 0:00:18.242 ***** 2025-12-13 00:18:49,407 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.212) 0:00:18.241 ***** 2025-12-13 00:18:49,618 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:18:49,627 p=31605 u=zuul n=ansible | TASK [repo_setup : Use RDO CentOS mirrors (remove CentOS 10 conditional when Nodepool mirrors exist) chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|http://mirror.stream.centos.org|$NODEPOOL_CENTOS_MIRROR|g" *.repo ] *** 2025-12-13 00:18:49,627 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.220) 0:00:18.462 ***** 2025-12-13 00:18:49,627 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.220) 0:00:18.461 ***** 2025-12-13 00:18:49,828 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:18:49,837 p=31605 u=zuul n=ansible | TASK [repo_setup : Check for gating.repo file on content provider url=http://{{ content_provider_registry_ip }}:8766/gating.repo] *** 2025-12-13 00:18:49,837 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.210) 0:00:18.672 ***** 2025-12-13 00:18:49,837 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.210) 0:00:18.671 ***** 2025-12-13 00:18:49,855 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:18:49,862 p=31605 u=zuul n=ansible | TASK [repo_setup : 
Populate gating repo from content provider ip content=[gating-repo] baseurl=http://{{ content_provider_registry_ip }}:8766/ enabled=1 gpgcheck=0 priority=1 , dest={{ cifmw_repo_setup_output }}/gating.repo, mode=0644] *** 2025-12-13 00:18:49,862 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.025) 0:00:18.697 ***** 2025-12-13 00:18:49,862 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.025) 0:00:18.696 ***** 2025-12-13 00:18:49,890 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:18:49,898 p=31605 u=zuul n=ansible | TASK [repo_setup : Check for DLRN repo at the destination path={{ cifmw_repo_setup_output }}/delorean.repo] *** 2025-12-13 00:18:49,898 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.035) 0:00:18.733 ***** 2025-12-13 00:18:49,898 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.035) 0:00:18.732 ***** 2025-12-13 00:18:49,921 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:18:49,927 p=31605 u=zuul n=ansible | TASK [repo_setup : Lower the priority of DLRN repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}/delorean.repo, regexp=priority=1, replace=priority=20] *** 2025-12-13 00:18:49,927 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.029) 0:00:18.763 ***** 2025-12-13 00:18:49,927 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.029) 0:00:18.761 ***** 2025-12-13 00:18:49,964 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:18:49,974 p=31605 u=zuul n=ansible | TASK [repo_setup : Check for DLRN component repo path={{ cifmw_repo_setup_output }}/{{ _comp_repo }}] *** 2025-12-13 00:18:49,974 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.046) 0:00:18.809 ***** 2025-12-13 00:18:49,974 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:49 +0000 (0:00:00.046) 0:00:18.808 ***** 2025-12-13 
00:18:49,999 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:18:50,008 p=31605 u=zuul n=ansible | TASK [repo_setup : Lower the priority of component repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}//{{ _comp_repo }}, regexp=priority=1, replace=priority=2] *** 2025-12-13 00:18:50,008 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.034) 0:00:18.844 ***** 2025-12-13 00:18:50,009 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.034) 0:00:18.842 ***** 2025-12-13 00:18:50,031 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:18:50,040 p=31605 u=zuul n=ansible | TASK [repo_setup : Find existing repos from /etc/yum.repos.d directory paths=/etc/yum.repos.d/, patterns=*.repo, recurse=False] *** 2025-12-13 00:18:50,040 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.031) 0:00:18.875 ***** 2025-12-13 00:18:50,040 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.031) 0:00:18.874 ***** 2025-12-13 00:18:50,338 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:18:50,345 p=31605 u=zuul n=ansible | TASK [repo_setup : Remove existing repos from /etc/yum.repos.d directory path={{ item }}, state=absent] *** 2025-12-13 00:18:50,345 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.305) 0:00:19.181 ***** 2025-12-13 00:18:50,345 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.305) 0:00:19.179 ***** 2025-12-13 00:18:50,592 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos-addons.repo) 2025-12-13 00:18:50,770 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos.repo) 2025-12-13 00:18:50,781 p=31605 u=zuul n=ansible | TASK [repo_setup : Cleanup existing metadata _raw_params=dnf clean metadata] *** 2025-12-13 00:18:50,781 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.436) 
0:00:19.617 ***** 2025-12-13 00:18:50,782 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:50 +0000 (0:00:00.436) 0:00:19.616 ***** 2025-12-13 00:18:51,238 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:18:51,246 p=31605 u=zuul n=ansible | TASK [repo_setup : Copy generated repos to /etc/yum.repos.d directory mode=0755, remote_src=True, src={{ cifmw_repo_setup_output }}/, dest=/etc/yum.repos.d] *** 2025-12-13 00:18:51,246 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.464) 0:00:20.081 ***** 2025-12-13 00:18:51,246 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.464) 0:00:20.080 ***** 2025-12-13 00:18:51,545 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:18:51,558 p=31605 u=zuul n=ansible | TASK [ci_setup : Gather variables for each operating system _raw_params={{ item }}] *** 2025-12-13 00:18:51,558 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.312) 0:00:20.394 ***** 2025-12-13 00:18:51,558 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.312) 0:00:20.392 ***** 2025-12-13 00:18:51,606 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/ci_setup/vars/redhat.yml) 2025-12-13 00:18:51,616 p=31605 u=zuul n=ansible | TASK [ci_setup : List packages to install var=cifmw_ci_setup_packages] ********* 2025-12-13 00:18:51,616 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.057) 0:00:20.451 ***** 2025-12-13 00:18:51,616 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.057) 0:00:20.450 ***** 2025-12-13 00:18:51,641 p=31605 u=zuul n=ansible | ok: [localhost] => cifmw_ci_setup_packages: - bash-completion - ca-certificates - git-core - make - tar - tmux - python3-pip 2025-12-13 00:18:51,650 p=31605 u=zuul n=ansible | TASK [ci_setup : Install needed packages name={{ cifmw_ci_setup_packages }}, 
state=latest] *** 2025-12-13 00:18:51,651 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.034) 0:00:20.486 ***** 2025-12-13 00:18:51,651 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:18:51 +0000 (0:00:00.034) 0:00:20.485 ***** 2025-12-13 00:19:20,598 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:20,605 p=31605 u=zuul n=ansible | TASK [ci_setup : Gather version of openshift client _raw_params=oc version --client -o yaml] *** 2025-12-13 00:19:20,605 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:20 +0000 (0:00:28.954) 0:00:49.441 ***** 2025-12-13 00:19:20,605 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:20 +0000 (0:00:28.954) 0:00:49.439 ***** 2025-12-13 00:19:20,826 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:20,837 p=31605 u=zuul n=ansible | TASK [ci_setup : Ensure openshift client install path is present path={{ cifmw_ci_setup_oc_install_path }}, state=directory, mode=0755] *** 2025-12-13 00:19:20,837 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:20 +0000 (0:00:00.231) 0:00:49.672 ***** 2025-12-13 00:19:20,837 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:20 +0000 (0:00:00.231) 0:00:49.671 ***** 2025-12-13 00:19:21,067 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:21,076 p=31605 u=zuul n=ansible | TASK [ci_setup : Install openshift client src={{ cifmw_ci_setup_openshift_client_download_uri }}/{{ cifmw_ci_setup_openshift_client_version }}/openshift-client-linux.tar.gz, dest={{ cifmw_ci_setup_oc_install_path }}, remote_src=True, mode=0755, creates={{ cifmw_ci_setup_oc_install_path }}/oc] *** 2025-12-13 00:19:21,076 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:21 +0000 (0:00:00.239) 0:00:49.911 ***** 2025-12-13 00:19:21,076 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:21 +0000 (0:00:00.239) 0:00:49.910 ***** 2025-12-13 00:19:26,302 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 
00:19:26,308 p=31605 u=zuul n=ansible | TASK [ci_setup : Add the OC path to cifmw_path if needed cifmw_path={{ cifmw_ci_setup_oc_install_path }}:{{ ansible_env.PATH }}, cacheable=True] *** 2025-12-13 00:19:26,308 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:05.232) 0:00:55.143 ***** 2025-12-13 00:19:26,308 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:05.232) 0:00:55.142 ***** 2025-12-13 00:19:26,333 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:26,340 p=31605 u=zuul n=ansible | TASK [ci_setup : Create completion file] *************************************** 2025-12-13 00:19:26,340 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:00.031) 0:00:55.175 ***** 2025-12-13 00:19:26,340 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:00.031) 0:00:55.174 ***** 2025-12-13 00:19:26,686 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:26,695 p=31605 u=zuul n=ansible | TASK [ci_setup : Source completion from within .bashrc create=True, mode=0644, path={{ ansible_user_dir }}/.bashrc, block=if [ -f ~/.oc_completion ]; then source ~/.oc_completion fi] *** 2025-12-13 00:19:26,695 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:00.355) 0:00:55.531 ***** 2025-12-13 00:19:26,695 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:26 +0000 (0:00:00.355) 0:00:55.529 ***** 2025-12-13 00:19:27,048 p=31605 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:27,056 p=31605 u=zuul n=ansible | TASK [ci_setup : Check rhsm status _raw_params=subscription-manager status] **** 2025-12-13 00:19:27,056 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.360) 0:00:55.891 ***** 2025-12-13 00:19:27,056 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.360) 0:00:55.890 ***** 2025-12-13 00:19:27,069 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:27,077 
p=31605 u=zuul n=ansible | TASK [ci_setup : Gather the repos to be enabled _repos={{ cifmw_ci_setup_rhel_rhsm_default_repos + (cifmw_ci_setup_rhel_rhsm_extra_repos | default([])) }}] *** 2025-12-13 00:19:27,077 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.021) 0:00:55.912 ***** 2025-12-13 00:19:27,077 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.021) 0:00:55.911 ***** 2025-12-13 00:19:27,091 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:27,098 p=31605 u=zuul n=ansible | TASK [ci_setup : Enabling the required repositories. name={{ item }}, state={{ rhsm_repo_state | default('enabled') }}] *** 2025-12-13 00:19:27,098 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.020) 0:00:55.933 ***** 2025-12-13 00:19:27,098 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.020) 0:00:55.932 ***** 2025-12-13 00:19:27,111 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:27,117 p=31605 u=zuul n=ansible | TASK [ci_setup : Get current /etc/redhat-release _raw_params=cat /etc/redhat-release] *** 2025-12-13 00:19:27,118 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.019) 0:00:55.953 ***** 2025-12-13 00:19:27,118 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.019) 0:00:55.952 ***** 2025-12-13 00:19:27,131 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:27,137 p=31605 u=zuul n=ansible | TASK [ci_setup : Print current /etc/redhat-release msg={{ _current_rh_release.stdout }}] *** 2025-12-13 00:19:27,137 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.019) 0:00:55.973 ***** 2025-12-13 00:19:27,138 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.019) 0:00:55.971 ***** 2025-12-13 00:19:27,151 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:27,158 p=31605 u=zuul n=ansible | TASK [ci_setup : 
Ensure the repos are enabled in the system using yum name={{ item.name }}, baseurl={{ item.baseurl }}, description={{ item.description | default(item.name) }}, gpgcheck={{ item.gpgcheck | default(false) }}, enabled=True, state={{ yum_repo_state | default('present') }}] *** 2025-12-13 00:19:27,158 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.020) 0:00:55.993 ***** 2025-12-13 00:19:27,158 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.020) 0:00:55.992 ***** 2025-12-13 00:19:27,180 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:27,186 p=31605 u=zuul n=ansible | TASK [ci_setup : Manage directories path={{ item }}, state={{ directory_state }}, mode=0755, owner={{ ansible_user_id }}, group={{ ansible_user_id }}] *** 2025-12-13 00:19:27,187 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.028) 0:00:56.022 ***** 2025-12-13 00:19:27,187 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:27 +0000 (0:00:00.028) 0:00:56.021 ***** 2025-12-13 00:19:27,401 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/manifests/openstack/cr) 2025-12-13 00:19:27,596 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/logs) 2025-12-13 00:19:27,826 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/tmp) 2025-12-13 00:19:28,021 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/volumes) 2025-12-13 00:19:28,212 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2025-12-13 00:19:28,225 p=31605 u=zuul n=ansible | TASK [Prepare install_yamls make targets name=install_yamls, apply={'tags': ['bootstrap']}] *** 2025-12-13 00:19:28,225 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:01.038) 0:00:57.060 ***** 2025-12-13 00:19:28,225 p=31605 u=zuul n=ansible | Saturday 13 
December 2025 00:19:28 +0000 (0:00:01.038) 0:00:57.059 ***** 2025-12-13 00:19:28,349 p=31605 u=zuul n=ansible | TASK [install_yamls : Ensure directories exist path={{ item }}, state=directory, mode=0755] *** 2025-12-13 00:19:28,349 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.124) 0:00:57.184 ***** 2025-12-13 00:19:28,349 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.124) 0:00:57.183 ***** 2025-12-13 00:19:28,559 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts) 2025-12-13 00:19:28,723 p=31605 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks) 2025-12-13 00:19:28,888 p=31605 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2025-12-13 00:19:28,896 p=31605 u=zuul n=ansible | TASK [Create variables with local repos based on Zuul items name=install_yamls, tasks_from=zuul_set_operators_repo.yml] *** 2025-12-13 00:19:28,896 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.547) 0:00:57.731 ***** 2025-12-13 00:19:28,896 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.547) 0:00:57.730 ***** 2025-12-13 00:19:28,936 p=31605 u=zuul n=ansible | TASK [install_yamls : Set fact with local repos based on Zuul items cifmw_install_yamls_operators_repo={{ cifmw_install_yamls_operators_repo | default({}) | combine(_repo_operator_info | items2dict) }}] *** 2025-12-13 00:19:28,937 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.040) 0:00:57.772 ***** 2025-12-13 00:19:28,937 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.040) 0:00:57.771 ***** 2025-12-13 00:19:28,983 p=31605 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'master', 'change_url': 'https://github.com/infrawatch/service-telemetry-operator', 'project': {'canonical_hostname': 
'github.com', 'canonical_name': 'github.com/infrawatch/service-telemetry-operator', 'name': 'infrawatch/service-telemetry-operator', 'short_name': 'service-telemetry-operator', 'src_dir': 'src/github.com/infrawatch/service-telemetry-operator'}}) 2025-12-13 00:19:28,984 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:28,991 p=31605 u=zuul n=ansible | TASK [install_yamls : Print helpful data for debugging msg=_repo_operator_name: {{ _repo_operator_name }} _repo_operator_info: {{ _repo_operator_info }} cifmw_install_yamls_operators_repo: {{ cifmw_install_yamls_operators_repo }} ] *** 2025-12-13 00:19:28,991 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.054) 0:00:57.826 ***** 2025-12-13 00:19:28,991 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:28 +0000 (0:00:00.054) 0:00:57.825 ***** 2025-12-13 00:19:29,021 p=31605 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'master', 'change_url': 'https://github.com/infrawatch/service-telemetry-operator', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/infrawatch/service-telemetry-operator', 'name': 'infrawatch/service-telemetry-operator', 'short_name': 'service-telemetry-operator', 'src_dir': 'src/github.com/infrawatch/service-telemetry-operator'}}) 2025-12-13 00:19:29,022 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,037 p=31605 u=zuul n=ansible | TASK [Customize install_yamls devsetup vars if needed name=install_yamls, tasks_from=customize_devsetup_vars.yml] *** 2025-12-13 00:19:29,037 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.046) 0:00:57.872 ***** 2025-12-13 00:19:29,037 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.046) 0:00:57.871 ***** 2025-12-13 00:19:29,090 p=31605 u=zuul n=ansible | TASK [install_yamls : Update opm_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo 
}}/devsetup/vars/default.yaml, regexp=^opm_version:, line=opm_version: {{ cifmw_install_yamls_opm_version }}, state=present] *** 2025-12-13 00:19:29,090 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.053) 0:00:57.926 ***** 2025-12-13 00:19:29,091 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.053) 0:00:57.924 ***** 2025-12-13 00:19:29,111 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,118 p=31605 u=zuul n=ansible | TASK [install_yamls : Update sdk_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^sdk_version:, line=sdk_version: {{ cifmw_install_yamls_sdk_version }}, state=present] *** 2025-12-13 00:19:29,118 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.027) 0:00:57.953 ***** 2025-12-13 00:19:29,118 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.027) 0:00:57.952 ***** 2025-12-13 00:19:29,141 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,152 p=31605 u=zuul n=ansible | TASK [install_yamls : Update go_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^go_version:, line=go_version: {{ cifmw_install_yamls_go_version }}, state=present] *** 2025-12-13 00:19:29,152 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.033) 0:00:57.987 ***** 2025-12-13 00:19:29,152 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.033) 0:00:57.986 ***** 2025-12-13 00:19:29,171 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,178 p=31605 u=zuul n=ansible | TASK [install_yamls : Update kustomize_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^kustomize_version:, line=kustomize_version: {{ cifmw_install_yamls_kustomize_version }}, 
state=present] *** 2025-12-13 00:19:29,178 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.025) 0:00:58.013 ***** 2025-12-13 00:19:29,178 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.025) 0:00:58.012 ***** 2025-12-13 00:19:29,196 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:29,208 p=31605 u=zuul n=ansible | TASK [install_yamls : Compute the cifmw_install_yamls_vars final value _install_yamls_override_vars={{ _install_yamls_override_vars | default({}) | combine(item, recursive=True) }}] *** 2025-12-13 00:19:29,208 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.030) 0:00:58.043 ***** 2025-12-13 00:19:29,208 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.030) 0:00:58.042 ***** 2025-12-13 00:19:29,296 p=31605 u=zuul n=ansible | ok: [localhost] => (item={}) 2025-12-13 00:19:29,304 p=31605 u=zuul n=ansible | TASK [install_yamls : Set environment override cifmw_install_yamls_environment fact cifmw_install_yamls_environment={{ _install_yamls_override_vars.keys() | map('upper') | zip(_install_yamls_override_vars.values()) | items2dict(key_name=0, value_name=1) | combine({ 'OUT': cifmw_install_yamls_manifests_dir, 'OUTPUT_DIR': cifmw_install_yamls_edpm_dir, 'CHECKOUT_FROM_OPENSTACK_REF': cifmw_install_yamls_checkout_openstack_ref, 'OPENSTACK_K8S_BRANCH': (zuul is defined and not zuul.branch |regex_search('master|antelope|rhos')) | ternary(zuul.branch, 'main') }) | combine(install_yamls_operators_repos) }}, cacheable=True] *** 2025-12-13 00:19:29,305 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.096) 0:00:58.140 ***** 2025-12-13 00:19:29,305 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.096) 0:00:58.139 ***** 2025-12-13 00:19:29,341 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:29,352 p=31605 u=zuul n=ansible | TASK [install_yamls : Get environment structure 
base_path={{ cifmw_install_yamls_repo }}] *** 2025-12-13 00:19:29,352 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.047) 0:00:58.188 ***** 2025-12-13 00:19:29,353 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.047) 0:00:58.186 ***** 2025-12-13 00:19:29,949 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:29,959 p=31605 u=zuul n=ansible | TASK [install_yamls : Ensure Output directory exists path={{ cifmw_install_yamls_out_dir }}, state=directory, mode=0755] *** 2025-12-13 00:19:29,959 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.606) 0:00:58.794 ***** 2025-12-13 00:19:29,959 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:29 +0000 (0:00:00.606) 0:00:58.793 ***** 2025-12-13 00:19:30,007 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:30,013 p=31605 u=zuul n=ansible | TASK [install_yamls : Ensure user cifmw_install_yamls_vars contains existing Makefile variables that=_cifmw_install_yamls_unmatched_vars | length == 0, msg=cifmw_install_yamls_vars contains a variable that is not defined in install_yamls Makefile nor cifmw_install_yamls_whitelisted_vars: {{ _cifmw_install_yamls_unmatched_vars | join(', ')}}, quiet=True] *** 2025-12-13 00:19:30,013 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.054) 0:00:58.848 ***** 2025-12-13 00:19:30,013 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.054) 0:00:58.847 ***** 2025-12-13 00:19:30,030 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:30,040 p=31605 u=zuul n=ansible | TASK [install_yamls : Generate /home/zuul/ci-framework-data/artifacts/install_yamls.sh dest={{ cifmw_install_yamls_out_dir }}/{{ cifmw_install_yamls_envfile }}, content={% for k,v in cifmw_install_yamls_environment.items() %} export {{ k }}={{ v }} {% endfor %}, mode=0644] *** 2025-12-13 00:19:30,040 p=31605 u=zuul n=ansible | Saturday 13 December 2025 
00:19:30 +0000 (0:00:00.027) 0:00:58.875 ***** 2025-12-13 00:19:30,040 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.027) 0:00:58.874 ***** 2025-12-13 00:19:30,058 p=31605 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:30,064 p=31605 u=zuul n=ansible | TASK [install_yamls : Set install_yamls default values cifmw_install_yamls_defaults={{ get_makefiles_env_output.makefiles_values | combine(cifmw_install_yamls_environment) }}, cacheable=True] *** 2025-12-13 00:19:30,064 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.023) 0:00:58.899 ***** 2025-12-13 00:19:30,064 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.023) 0:00:58.898 ***** 2025-12-13 00:19:30,088 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:30,096 p=31605 u=zuul n=ansible | TASK [install_yamls : Show the env structure var=cifmw_install_yamls_environment] *** 2025-12-13 00:19:30,096 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.032) 0:00:58.932 ***** 2025-12-13 00:19:30,096 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.032) 0:00:58.930 ***** 2025-12-13 00:19:30,126 p=31605 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_environment: CHECKOUT_FROM_OPENSTACK_REF: 'true' OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm 2025-12-13 00:19:30,136 p=31605 u=zuul n=ansible | TASK [install_yamls : Show the env structure defaults var=cifmw_install_yamls_defaults] *** 2025-12-13 00:19:30,136 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.039) 0:00:58.971 ***** 2025-12-13 00:19:30,136 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.039) 0:00:58.970 ***** 2025-12-13 00:19:30,163 p=31605 u=zuul n=ansible | ok: [localhost] => cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 
ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m 
BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_CLEANUP: 'true' BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: '' BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 
DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' 
EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 767c3ed056cbaa3b9dfedb8c6f825bf0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: 'true' INSTALL_NMSTATE: true || false INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: 
https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE: quay.io/metal3-io/ironic IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: COX8bmlKAWn56XCGMrKQJj7dgHNAOl6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network 
LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '1234567842' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests 
OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12345678' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: osp-secret SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: test/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git 
TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' tripleo_deploy: 'export REGISTRY_USER:' 2025-12-13 00:19:30,171 p=31605 u=zuul n=ansible | TASK [install_yamls : Generate make targets install_yamls_path={{ cifmw_install_yamls_repo }}, output_directory={{ cifmw_install_yamls_tasks_out }}] *** 2025-12-13 00:19:30,171 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.035) 0:00:59.007 ***** 2025-12-13 00:19:30,171 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.035) 0:00:59.005 ***** 2025-12-13 00:19:30,456 p=31605 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:30,473 p=31605 u=zuul n=ansible | TASK [install_yamls : Debug generate_make module var=cifmw_generate_makes] ***** 2025-12-13 00:19:30,473 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.301) 0:00:59.309 ***** 2025-12-13 00:19:30,473 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.302) 0:00:59.307 ***** 2025-12-13 00:19:30,500 p=31605 u=zuul n=ansible | ok: [localhost] => cifmw_generate_makes: changed: false debug: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/Makefile: - all - help - cleanup - deploy_cleanup - wait - crc_storage - crc_storage_cleanup - crc_storage_release - crc_storage_with_retries - crc_storage_cleanup_with_retries - operator_namespace - namespace - namespace_cleanup - input - input_cleanup - crc_bmo_setup - crc_bmo_cleanup - openstack_prep - openstack - openstack_wait - openstack_init - openstack_cleanup - openstack_repo - openstack_deploy_prep - openstack_deploy - openstack_wait_deploy - openstack_deploy_cleanup - openstack_update_run - update_services - update_system - openstack_patch_version - edpm_deploy_generate_keys - edpm_patch_ansible_runner_image - edpm_deploy_prep - edpm_deploy_cleanup - edpm_deploy - edpm_deploy_baremetal_prep - edpm_deploy_baremetal - edpm_wait_deploy_baremetal - edpm_wait_deploy - edpm_register_dns - 
edpm_nova_discover_hosts - openstack_crds - openstack_crds_cleanup - edpm_deploy_networker_prep - edpm_deploy_networker_cleanup - edpm_deploy_networker - infra_prep - infra - infra_cleanup - dns_deploy_prep - dns_deploy - dns_deploy_cleanup - netconfig_deploy_prep - netconfig_deploy - netconfig_deploy_cleanup - memcached_deploy_prep - memcached_deploy - memcached_deploy_cleanup - keystone_prep - keystone - keystone_cleanup - keystone_deploy_prep - keystone_deploy - keystone_deploy_cleanup - barbican_prep - barbican - barbican_cleanup - barbican_deploy_prep - barbican_deploy - barbican_deploy_validate - barbican_deploy_cleanup - mariadb - mariadb_cleanup - mariadb_deploy_prep - mariadb_deploy - mariadb_deploy_cleanup - placement_prep - placement - placement_cleanup - placement_deploy_prep - placement_deploy - placement_deploy_cleanup - glance_prep - glance - glance_cleanup - glance_deploy_prep - glance_deploy - glance_deploy_cleanup - ovn_prep - ovn - ovn_cleanup - ovn_deploy_prep - ovn_deploy - ovn_deploy_cleanup - neutron_prep - neutron - neutron_cleanup - neutron_deploy_prep - neutron_deploy - neutron_deploy_cleanup - cinder_prep - cinder - cinder_cleanup - cinder_deploy_prep - cinder_deploy - cinder_deploy_cleanup - rabbitmq_prep - rabbitmq - rabbitmq_cleanup - rabbitmq_deploy_prep - rabbitmq_deploy - rabbitmq_deploy_cleanup - ironic_prep - ironic - ironic_cleanup - ironic_deploy_prep - ironic_deploy - ironic_deploy_cleanup - octavia_prep - octavia - octavia_cleanup - octavia_deploy_prep - octavia_deploy - octavia_deploy_cleanup - designate_prep - designate - designate_cleanup - designate_deploy_prep - designate_deploy - designate_deploy_cleanup - nova_prep - nova - nova_cleanup - nova_deploy_prep - nova_deploy - nova_deploy_cleanup - mariadb_kuttl_run - mariadb_kuttl - kuttl_db_prep - kuttl_db_cleanup - kuttl_common_prep - kuttl_common_cleanup - keystone_kuttl_run - keystone_kuttl - barbican_kuttl_run - barbican_kuttl - placement_kuttl_run - placement_kuttl - 
- cinder_kuttl_run
- cinder_kuttl
- neutron_kuttl_run
- neutron_kuttl
- octavia_kuttl_run
- octavia_kuttl
- designate_kuttl
- designate_kuttl_run
- ovn_kuttl_run
- ovn_kuttl
- infra_kuttl_run
- infra_kuttl
- ironic_kuttl_run
- ironic_kuttl
- ironic_kuttl_crc
- heat_kuttl_run
- heat_kuttl
- heat_kuttl_crc
- ansibleee_kuttl_run
- ansibleee_kuttl_cleanup
- ansibleee_kuttl_prep
- ansibleee_kuttl
- glance_kuttl_run
- glance_kuttl
- manila_kuttl_run
- manila_kuttl
- swift_kuttl_run
- swift_kuttl
- horizon_kuttl_run
- horizon_kuttl
- openstack_kuttl_run
- openstack_kuttl
- mariadb_chainsaw_run
- mariadb_chainsaw
- horizon_prep
- horizon
- horizon_cleanup
- horizon_deploy_prep
- horizon_deploy
- horizon_deploy_cleanup
- heat_prep
- heat
- heat_cleanup
- heat_deploy_prep
- heat_deploy
- heat_deploy_cleanup
- ansibleee_prep
- ansibleee
- ansibleee_cleanup
- baremetal_prep
- baremetal
- baremetal_cleanup
- ceph_help
- ceph
- ceph_cleanup
- rook_prep
- rook
- rook_deploy_prep
- rook_deploy
- rook_crc_disk
- rook_cleanup
- lvms
- nmstate
- nncp
- nncp_cleanup
- netattach
- netattach_cleanup
- metallb
- metallb_config
- metallb_config_cleanup
- metallb_cleanup
- loki
- loki_cleanup
- loki_deploy
- loki_deploy_cleanup
- netobserv
- netobserv_cleanup
- netobserv_deploy
- netobserv_deploy_cleanup
- manila_prep
- manila
- manila_cleanup
- manila_deploy_prep
- manila_deploy
- manila_deploy_cleanup
- telemetry_prep
- telemetry
- telemetry_cleanup
- telemetry_deploy_prep
- telemetry_deploy
- telemetry_deploy_cleanup
- telemetry_kuttl_run
- telemetry_kuttl
- swift_prep
- swift
- swift_cleanup
- swift_deploy_prep
- swift_deploy
- swift_deploy_cleanup
- certmanager
- certmanager_cleanup
- validate_marketplace
- redis_deploy_prep
- redis_deploy
- redis_deploy_cleanup
- set_slower_etcd_profile
/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/Makefile:
- help
- download_tools
- nfs
- nfs_cleanup
- crc
- crc_cleanup
- crc_scrub
- crc_attach_default_interface
- crc_attach_default_interface_cleanup
- ipv6_lab_network
- ipv6_lab_network_cleanup
- ipv6_lab_nat64_router
- ipv6_lab_nat64_router_cleanup
- ipv6_lab_sno
- ipv6_lab_sno_cleanup
- ipv6_lab
- ipv6_lab_cleanup
- attach_default_interface
- attach_default_interface_cleanup
- network_isolation_bridge
- network_isolation_bridge_cleanup
- edpm_baremetal_compute
- edpm_compute
- edpm_compute_bootc
- edpm_ansible_runner
- edpm_computes_bgp
- edpm_compute_repos
- edpm_compute_cleanup
- edpm_networker
- edpm_networker_cleanup
- edpm_deploy_instance
- tripleo_deploy
- standalone_deploy
- standalone_sync
- standalone
- standalone_cleanup
- standalone_snapshot
- standalone_revert
- cifmw_prepare
- cifmw_cleanup
- bmaas_network
- bmaas_network_cleanup
- bmaas_route_crc_and_crc_bmaas_networks
- bmaas_route_crc_and_crc_bmaas_networks_cleanup
- bmaas_crc_attach_network
- bmaas_crc_attach_network_cleanup
- bmaas_crc_baremetal_bridge
- bmaas_crc_baremetal_bridge_cleanup
- bmaas_baremetal_net_nad
- bmaas_baremetal_net_nad_cleanup
- bmaas_metallb
- bmaas_metallb_cleanup
- bmaas_virtual_bms
- bmaas_virtual_bms_cleanup
- bmaas_sushy_emulator
- bmaas_sushy_emulator_cleanup
- bmaas_sushy_emulator_wait
- bmaas_generate_nodes_yaml
- bmaas
- bmaas_cleanup
failed: false
success: true
2025-12-13 00:19:30,511 p=31605 u=zuul n=ansible | TASK [install_yamls : Create the install_yamls parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, content={{ { 'cifmw_install_yamls_environment': cifmw_install_yamls_environment, 'cifmw_install_yamls_defaults': cifmw_install_yamls_defaults } | to_nice_yaml }}, mode=0644] ***
2025-12-13 00:19:30,511 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.037) 0:00:59.346 *****
2025-12-13 00:19:30,511 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.037) 0:00:59.345 *****
2025-12-13 00:19:30,943 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:30,950 p=31605 u=zuul n=ansible | TASK [install_yamls : Create empty cifmw_install_yamls_environment if needed cifmw_install_yamls_environment={}] ***
2025-12-13 00:19:30,950 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.438) 0:00:59.785 *****
2025-12-13 00:19:30,950 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.438) 0:00:59.784 *****
2025-12-13 00:19:30,974 p=31605 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:30,988 p=31605 u=zuul n=ansible | TASK [discover_latest_image : Get latest image url={{ cifmw_discover_latest_image_base_url }}, image_prefix={{ cifmw_discover_latest_image_qcow_prefix }}, images_file={{ cifmw_discover_latest_image_images_file }}] ***
2025-12-13 00:19:30,988 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.038) 0:00:59.824 *****
2025-12-13 00:19:30,989 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:30 +0000 (0:00:00.038) 0:00:59.822 *****
2025-12-13 00:19:31,569 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:31,577 p=31605 u=zuul n=ansible | TASK [discover_latest_image : Export facts accordingly cifmw_discovered_image_name={{ discovered_image['data']['image_name'] }}, cifmw_discovered_image_url={{ discovered_image['data']['image_url'] }}, cifmw_discovered_hash={{ discovered_image['data']['hash'] }}, cifmw_discovered_hash_algorithm={{ discovered_image['data']['hash_algorithm'] }}, cacheable=True] ***
2025-12-13 00:19:31,577 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:31 +0000 (0:00:00.588) 0:01:00.412 *****
2025-12-13 00:19:31,577 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:31 +0000 (0:00:00.588) 0:01:00.411 *****
2025-12-13 00:19:31,611 p=31605 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:31,623 p=31605 u=zuul n=ansible | TASK [Create artifacts with custom params mode=0644, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/custom-params.yml, content={{ ci_framework_params | to_nice_yaml }}] ***
2025-12-13 00:19:31,623 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:31 +0000 (0:00:00.045) 0:01:00.458 *****
2025-12-13 00:19:31,623 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:31 +0000 (0:00:00.045) 0:01:00.457 *****
2025-12-13 00:19:32,084 p=31605 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | PLAY RECAP *********************************************************************
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | localhost : ok=43 changed=23 unreachable=0 failed=0 skipped=40 rescued=0 ignored=0
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:32 +0000 (0:00:00.484) 0:01:00.943 *****
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | ===============================================================================
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | ci_setup : Install needed packages ------------------------------------- 28.95s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | repo_setup : Initialize python venv and install requirements ------------ 9.14s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | ci_setup : Install openshift client ------------------------------------- 5.23s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | install_ca : Update ca bundle ------------------------------------------- 1.53s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | repo_setup : Make sure git-core package is installed -------------------- 1.04s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | ci_setup : Manage directories ------------------------------------------- 1.04s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | Gathering Facts --------------------------------------------------------- 1.04s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | repo_setup : Get repo-setup repository ---------------------------------- 0.96s
2025-12-13 00:19:32,108 p=31605 u=zuul n=ansible | repo_setup : Install repo-setup package --------------------------------- 0.83s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Dump full hash in delorean.repo.md5 file ------------------- 0.77s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Run repo-setup --------------------------------------------- 0.68s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_yamls : Get environment structure ------------------------------- 0.61s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | discover_latest_image : Get latest image -------------------------------- 0.59s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_yamls : Ensure directories exist -------------------------------- 0.55s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Ensure directories are present ----------------------------- 0.54s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Run repo-setup-get-hash ------------------------------------ 0.49s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | Create artifacts with custom params ------------------------------------- 0.48s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Cleanup existing metadata ---------------------------------- 0.46s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_yamls : Create the install_yamls parameters file ---------------- 0.44s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup : Remove existing repos from /etc/yum.repos.d directory ------ 0.44s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | Saturday 13 December 2025 00:19:32 +0000 (0:00:00.485) 0:01:00.943 *****
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ===============================================================================
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ci_setup --------------------------------------------------------------- 36.67s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | repo_setup ------------------------------------------------------------- 17.24s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_yamls ----------------------------------------------------------- 2.64s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | install_ca -------------------------------------------------------------- 1.99s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | gather_facts ------------------------------------------------------------ 1.04s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | discover_latest_image --------------------------------------------------- 0.63s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ansible.builtin.copy ---------------------------------------------------- 0.49s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ansible.builtin.include_role -------------------------------------------- 0.12s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ansible.builtin.set_fact ------------------------------------------------ 0.10s
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2025-12-13 00:19:32,109 p=31605 u=zuul n=ansible | total ------------------------------------------------------------------ 60.91s
2025-12-13 00:19:33,160 p=32461 u=zuul n=ansible | PLAY [Run pre_infra hooks] *****************************************************
2025-12-13 00:19:33,191 p=32461 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] ***
2025-12-13 00:19:33,191 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.045) 0:00:00.045 *****
2025-12-13 00:19:33,191 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.045) 0:00:00.045 *****
2025-12-13 00:19:33,249 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:33,259 p=32461 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] ***
2025-12-13 00:19:33,259 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.068) 0:00:00.114 *****
2025-12-13 00:19:33,259 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.068) 0:00:00.113 *****
2025-12-13 00:19:33,313 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:33,323 p=32461 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_infra _raw_params={{ hook.type }}.yml] ***
2025-12-13 00:19:33,323 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.063) 0:00:00.177 *****
2025-12-13 00:19:33,323 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.063) 0:00:00.177 *****
2025-12-13 00:19:33,382 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:33,444 p=32461 u=zuul n=ansible | PLAY [Prepare host virtualization] *********************************************
2025-12-13 00:19:33,466 p=32461 u=zuul n=ansible | TASK [Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ******
2025-12-13 00:19:33,467 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.143) 0:00:00.321 *****
2025-12-13 00:19:33,467 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.143) 0:00:00.320 *****
2025-12-13 00:19:33,565 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:33,579 p=32461 u=zuul n=ansible | TASK [Ensure libvirt is present/configured name=libvirt_manager] ***************
2025-12-13 00:19:33,579 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.112) 0:00:00.434 *****
2025-12-13 00:19:33,579 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.112) 0:00:00.433 *****
2025-12-13 00:19:33,598 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:33,607 p=32461 u=zuul n=ansible | TASK [Perpare OpenShift provisioner node name=openshift_provisioner_node] ******
2025-12-13 00:19:33,607 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.027) 0:00:00.461 *****
2025-12-13 00:19:33,607 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.027) 0:00:00.460 *****
2025-12-13 00:19:33,626 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:33,661 p=32461 u=zuul n=ansible | PLAY [Prepare the platform] ****************************************************
2025-12-13 00:19:33,689 p=32461 u=zuul n=ansible | TASK [Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] ******
2025-12-13 00:19:33,689 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.082) 0:00:00.544 *****
2025-12-13 00:19:33,689 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.082) 0:00:00.543 *****
2025-12-13 00:19:33,732 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:33,746 p=32461 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Environment Definition file existence path={{ cifmw_networking_mapper_networking_env_def_path }}] ***
2025-12-13 00:19:33,747 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.057) 0:00:00.601 *****
2025-12-13 00:19:33,747 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:33 +0000 (0:00:00.057) 0:00:00.600 *****
2025-12-13 00:19:34,030 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:34,049 p=32461 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Definition file existance that=['_net_env_def_stat.stat.exists'], msg=Ensure that the Networking Environment Definition file exists in {{ cifmw_networking_mapper_networking_env_def_path }}, quiet=True] ***
2025-12-13 00:19:34,050 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.303) 0:00:00.904 *****
2025-12-13 00:19:34,050 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.303) 0:00:00.903 *****
2025-12-13 00:19:34,074 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,088 p=32461 u=zuul n=ansible | TASK [networking_mapper : Load the Networking Definition from file path={{ cifmw_networking_mapper_networking_env_def_path }}] ***
2025-12-13 00:19:34,088 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.038) 0:00:00.942 *****
2025-12-13 00:19:34,088 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.038) 0:00:00.942 *****
2025-12-13 00:19:34,114 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,124 p=32461 u=zuul n=ansible | TASK [networking_mapper : Set cifmw_networking_env_definition is present cifmw_networking_env_definition={{ _net_env_def_slurp['content'] | b64decode | from_yaml }}, cacheable=True] ***
2025-12-13 00:19:34,124 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.035) 0:00:00.978 *****
2025-12-13 00:19:34,124 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.035) 0:00:00.978 *****
2025-12-13 00:19:34,158 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,176 p=32461 u=zuul n=ansible | TASK [Deploy OCP using Hive name=hive] *****************************************
2025-12-13 00:19:34,177 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.052) 0:00:01.031 *****
2025-12-13 00:19:34,177 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.052) 0:00:01.030 *****
2025-12-13 00:19:34,199 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,209 p=32461 u=zuul n=ansible | TASK [Prepare CRC name=rhol_crc] ***********************************************
2025-12-13 00:19:34,210 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.032) 0:00:01.064 *****
2025-12-13 00:19:34,210 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.033) 0:00:01.063 *****
2025-12-13 00:19:34,235 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,249 p=32461 u=zuul n=ansible | TASK [Deploy OpenShift cluster using dev-scripts name=devscripts] **************
2025-12-13 00:19:34,249 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.039) 0:00:01.103 *****
2025-12-13 00:19:34,249 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.039) 0:00:01.103 *****
2025-12-13 00:19:34,277 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,292 p=32461 u=zuul n=ansible | TASK [openshift_login : Ensure output directory exists path={{ cifmw_openshift_login_basedir }}/artifacts, state=directory, mode=0755] ***
2025-12-13 00:19:34,292 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.042) 0:00:01.146 *****
2025-12-13 00:19:34,292 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.042) 0:00:01.146 *****
2025-12-13 00:19:34,648 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:34,657 p=32461 u=zuul n=ansible | TASK [openshift_login : OpenShift login _raw_params=login.yml] *****************
2025-12-13 00:19:34,658 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.365) 0:00:01.512 *****
2025-12-13 00:19:34,658 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.365) 0:00:01.511 *****
2025-12-13 00:19:34,690 p=32461 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/login.yml for localhost
2025-12-13 00:19:34,702 p=32461 u=zuul n=ansible | TASK [openshift_login : Check if the password file is present path={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] ***
2025-12-13 00:19:34,703 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.045) 0:00:01.557 *****
2025-12-13 00:19:34,703 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.045) 0:00:01.556 *****
2025-12-13 00:19:34,739 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,749 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch user password content src={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] ***
2025-12-13 00:19:34,749 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.046) 0:00:01.603 *****
2025-12-13 00:19:34,749 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.046) 0:00:01.603 *****
2025-12-13 00:19:34,784 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,799 p=32461 u=zuul n=ansible | TASK [openshift_login : Set user password as a fact cifmw_openshift_login_password={{ cifmw_openshift_login_password_file_slurp.content | b64decode }}, cacheable=True] ***
2025-12-13 00:19:34,799 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.049) 0:00:01.653 *****
2025-12-13 00:19:34,799 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.049) 0:00:01.653 *****
2025-12-13 00:19:34,819 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:34,832 p=32461 u=zuul n=ansible | TASK [openshift_login : Set role variables cifmw_openshift_login_kubeconfig={{ cifmw_openshift_login_kubeconfig | default(cifmw_openshift_kubeconfig) | default( ansible_env.KUBECONFIG if 'KUBECONFIG' in ansible_env else cifmw_openshift_login_kubeconfig_default_path ) | trim }}, cifmw_openshift_login_user={{ cifmw_openshift_login_user | default(cifmw_openshift_user) | default(omit) }}, cifmw_openshift_login_password={{ cifmw_openshift_login_password | default(cifmw_openshift_password) | default(omit) }}, cifmw_openshift_login_api={{ cifmw_openshift_login_api | default(cifmw_openshift_api) | default(omit) }}, cifmw_openshift_login_cert_login={{ cifmw_openshift_login_cert_login | default(false)}}, cifmw_openshift_login_provided_token={{ cifmw_openshift_provided_token | default(omit) }}, cacheable=True] ***
2025-12-13 00:19:34,833 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.033) 0:00:01.687 *****
2025-12-13 00:19:34,833 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.033) 0:00:01.686 *****
2025-12-13 00:19:34,861 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:34,873 p=32461 u=zuul n=ansible | TASK [openshift_login : Check if kubeconfig exists path={{ cifmw_openshift_login_kubeconfig }}] ***
2025-12-13 00:19:34,873 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.040) 0:00:01.728 *****
2025-12-13 00:19:34,873 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:34 +0000 (0:00:00.040) 0:00:01.727 *****
2025-12-13 00:19:35,070 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:35,089 p=32461 u=zuul n=ansible | TASK [openshift_login : Assert that enough data is provided to log in to OpenShift that=cifmw_openshift_login_kubeconfig_stat.stat.exists or (cifmw_openshift_login_provided_token is defined and cifmw_openshift_login_provided_token != '') or ( (cifmw_openshift_login_user is defined) and (cifmw_openshift_login_password is defined) and (cifmw_openshift_login_api is defined) ), msg=If an existing kubeconfig is not provided user/pwd or provided/initial token and API URL must be given] ***
2025-12-13 00:19:35,089 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.215) 0:00:01.943 *****
2025-12-13 00:19:35,089 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.215) 0:00:01.943 *****
2025-12-13 00:19:35,133 p=32461 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed
2025-12-13 00:19:35,145 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch kubeconfig content src={{ cifmw_openshift_login_kubeconfig }}] ***
2025-12-13 00:19:35,145 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.056) 0:00:02.000 *****
2025-12-13 00:19:35,146 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.056) 0:00:01.999 *****
2025-12-13 00:19:35,182 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:35,207 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch x509 key based users cifmw_openshift_login_key_based_users={{ ( cifmw_openshift_login_kubeconfig_content_b64.content | b64decode | from_yaml ). users | default([]) | selectattr('user.client-certificate-data', 'defined') | map(attribute="name") | map("split", "/") | map("first") }}, cacheable=True] ***
2025-12-13 00:19:35,208 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.062) 0:00:02.062 *****
2025-12-13 00:19:35,208 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.062) 0:00:02.061 *****
2025-12-13 00:19:35,242 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:35,256 p=32461 u=zuul n=ansible | TASK [openshift_login : Assign key based user if not provided and available cifmw_openshift_login_user={{ (cifmw_openshift_login_assume_cert_system_user | ternary('system:', '')) + (cifmw_openshift_login_key_based_users | map('replace', 'system:', '') | unique | first) }}, cifmw_openshift_login_cert_login=True, cacheable=True] ***
2025-12-13 00:19:35,257 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.049) 0:00:02.111 *****
2025-12-13 00:19:35,257 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.049) 0:00:02.110 *****
2025-12-13 00:19:35,289 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:35,301 p=32461 u=zuul n=ansible | TASK [openshift_login : Set the retry count cifmw_openshift_login_retries_cnt={{ 0 if cifmw_openshift_login_retries_cnt is undefined else cifmw_openshift_login_retries_cnt|int + 1 }}] ***
2025-12-13 00:19:35,301 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.044) 0:00:02.155 *****
2025-12-13 00:19:35,301 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.044) 0:00:02.155 *****
2025-12-13 00:19:35,332 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:35,344 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch token _raw_params=try_login.yml] *****************
2025-12-13 00:19:35,344 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.043) 0:00:02.198 *****
2025-12-13 00:19:35,344 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.043) 0:00:02.198 *****
2025-12-13 00:19:35,374 p=32461 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/try_login.yml for localhost
2025-12-13 00:19:35,392 p=32461 u=zuul n=ansible | TASK [openshift_login : Try get OpenShift access token _raw_params=oc whoami -t] ***
2025-12-13 00:19:35,392 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.047) 0:00:02.246 *****
2025-12-13 00:19:35,392 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.047) 0:00:02.246 *****
2025-12-13 00:19:35,410 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:35,421 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift token output_dir={{ cifmw_openshift_login_basedir }}/artifacts, script=oc login {%- if cifmw_openshift_login_provided_token is not defined %} {%- if cifmw_openshift_login_user is defined %} -u {{ cifmw_openshift_login_user }} {%- endif %} {%- if cifmw_openshift_login_password is defined %} -p {{ cifmw_openshift_login_password }} {%- endif %} {% else %} --token={{ cifmw_openshift_login_provided_token }} {%- endif %} {%- if cifmw_openshift_login_skip_tls_verify|bool %} --insecure-skip-tls-verify=true {%- endif %} {%- if cifmw_openshift_login_api is defined %} {{ cifmw_openshift_login_api }} {%- endif %}] ***
2025-12-13 00:19:35,422 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.029) 0:00:02.276 *****
2025-12-13 00:19:35,422 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.029) 0:00:02.275 *****
2025-12-13 00:19:35,481 p=32461 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_000_fetch_openshift.log
2025-12-13 00:19:35,914 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:35,925 p=32461 u=zuul n=ansible | TASK [openshift_login : Ensure kubeconfig is provided that=cifmw_openshift_login_kubeconfig != ""] ***
2025-12-13 00:19:35,925 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.503) 0:00:02.779 *****
2025-12-13 00:19:35,925 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.503) 0:00:02.778 *****
2025-12-13 00:19:35,959 p=32461 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed
2025-12-13 00:19:35,970 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch new OpenShift access token _raw_params=oc whoami -t] ***
2025-12-13 00:19:35,970 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.045) 0:00:02.825 *****
2025-12-13 00:19:35,971 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:35 +0000 (0:00:00.045) 0:00:02.824 *****
2025-12-13 00:19:36,446 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:36,460 p=32461 u=zuul n=ansible | TASK [openshift_login : Set new OpenShift token cifmw_openshift_login_token={{ (not cifmw_openshift_login_new_token_out.skipped | default(false)) | ternary(cifmw_openshift_login_new_token_out.stdout, cifmw_openshift_login_whoami_out.stdout) }}, cacheable=True] ***
2025-12-13 00:19:36,460 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.489) 0:00:03.314 *****
2025-12-13 00:19:36,460 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.489) 0:00:03.314 *****
2025-12-13 00:19:36,486 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:36,499 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift API URL _raw_params=oc whoami --show-server=true] ***
2025-12-13 00:19:36,499 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.038) 0:00:03.353 *****
2025-12-13 00:19:36,499 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.038) 0:00:03.352 *****
2025-12-13 00:19:36,832 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:36,855 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift kubeconfig context _raw_params=oc whoami -c] ***
2025-12-13 00:19:36,855 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.356) 0:00:03.709 *****
2025-12-13 00:19:36,855 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:36 +0000 (0:00:00.356) 0:00:03.709 *****
2025-12-13 00:19:37,212 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:37,224 p=32461 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift current user _raw_params=oc whoami] ****
2025-12-13 00:19:37,224 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.368) 0:00:04.078 *****
2025-12-13 00:19:37,224 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.368) 0:00:04.078 *****
2025-12-13 00:19:37,566 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:37,579 p=32461 u=zuul n=ansible | TASK [openshift_login : Set OpenShift user, context and API facts cifmw_openshift_login_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_login_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_login_user={{ _oauth_user }}, cifmw_openshift_kubeconfig={{ cifmw_openshift_login_kubeconfig }}, cifmw_openshift_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_user={{ _oauth_user }}, cifmw_openshift_token={{ cifmw_openshift_login_token | default(omit) }}, cifmw_install_yamls_environment={{ ( cifmw_install_yamls_environment | combine({'KUBECONFIG': cifmw_openshift_login_kubeconfig}) ) if cifmw_install_yamls_environment is defined else omit }}, cacheable=True] ***
2025-12-13 00:19:37,579 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.355) 0:00:04.434 *****
2025-12-13 00:19:37,579 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.355) 0:00:04.433 *****
2025-12-13 00:19:37,611 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:37,624 p=32461 u=zuul n=ansible | TASK [openshift_login : Create the openshift_login parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/openshift-login-params.yml, content={{ cifmw_openshift_login_params_content | from_yaml | to_nice_yaml }}, mode=0600] ***
2025-12-13 00:19:37,624 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.044) 0:00:04.478 *****
2025-12-13 00:19:37,624 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:37 +0000 (0:00:00.044) 0:00:04.477 *****
2025-12-13 00:19:38,194 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:38,204 p=32461 u=zuul n=ansible | TASK [openshift_login : Read the install yamls parameters file path={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml] ***
2025-12-13 00:19:38,205 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:38 +0000 (0:00:00.580) 0:00:05.059 *****
2025-12-13 00:19:38,205 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:38 +0000 (0:00:00.580) 0:00:05.058 *****
2025-12-13 00:19:38,509 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:38,531 p=32461 u=zuul n=ansible | TASK [openshift_login : Append the KUBECONFIG to the install yamls parameters content={{ cifmw_openshift_login_install_yamls_artifacts_slurp['content'] | b64decode | from_yaml | combine( { 'cifmw_install_yamls_environment': { 'KUBECONFIG': cifmw_openshift_login_kubeconfig } }, recursive=true) | to_nice_yaml }}, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, mode=0600] ***
2025-12-13 00:19:38,531 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:38 +0000 (0:00:00.326) 0:00:05.385 *****
2025-12-13 00:19:38,531 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:38 +0000 (0:00:00.326) 0:00:05.385 *****
2025-12-13 00:19:39,051 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:39,073 p=32461 u=zuul n=ansible | TASK [openshift_setup : Ensure output directory exists path={{ cifmw_openshift_setup_basedir }}/artifacts, state=directory, mode=0755] ***
2025-12-13 00:19:39,073 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.542) 0:00:05.928 *****
2025-12-13 00:19:39,074 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.542) 0:00:05.927 *****
2025-12-13 00:19:39,294 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:39,316 p=32461 u=zuul n=ansible | TASK [openshift_setup : Fetch namespaces to create cifmw_openshift_setup_namespaces={{ (( ([cifmw_install_yamls_defaults['NAMESPACE']] + ([cifmw_install_yamls_defaults['OPERATOR_NAMESPACE']] if 'OPERATOR_NAMESPACE' is in cifmw_install_yamls_defaults else []) ) if cifmw_install_yamls_defaults is defined else [] ) + cifmw_openshift_setup_create_namespaces) | unique }}] ***
2025-12-13 00:19:39,317 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.243) 0:00:06.171 *****
2025-12-13 00:19:39,317 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.243) 0:00:06.171 *****
2025-12-13 00:19:39,344 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:39,369 p=32461 u=zuul n=ansible | TASK [openshift_setup : Create required namespaces kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ item }}, kind=Namespace, state=present] ***
2025-12-13 00:19:39,370 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.052) 0:00:06.224 *****
2025-12-13 00:19:39,370 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:39 +0000 (0:00:00.052) 0:00:06.223 *****
2025-12-13 00:19:40,355 p=32461 u=zuul n=ansible | changed: [localhost] => (item=openstack)
2025-12-13 00:19:41,109 p=32461 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators)
2025-12-13 00:19:41,123 p=32461 u=zuul n=ansible | TASK [openshift_setup : Get internal OpenShift registry route kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=Route, name=default-route, namespace=openshift-image-registry] ***
2025-12-13 00:19:41,123 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:41 +0000 (0:00:01.753) 0:00:07.977 *****
2025-12-13 00:19:41,123 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:41 +0000 (0:00:01.753) 0:00:07.977 *****
2025-12-13 00:19:42,173 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:42,184 p=32461 u=zuul n=ansible | TASK [openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces state=present, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'kind': 'RoleBinding', 'apiVersion': 'rbac.authorization.k8s.io/v1', 'metadata': {'name': 'system:image-puller', 'namespace': '{{ item }}'}, 'subjects': [{'kind': 'User', 'name': 'system:anonymous'}, {'kind': 'User', 'name': 'system:unauthenticated'}], 'roleRef': {'kind': 'ClusterRole', 'name': 'system:image-puller'}}] ***
2025-12-13 00:19:42,185 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:42 +0000 (0:00:01.061) 0:00:09.039 *****
2025-12-13 00:19:42,185 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:42 +0000 (0:00:01.061) 0:00:09.038 *****
2025-12-13 00:19:42,946 p=32461 u=zuul n=ansible | changed: [localhost] => (item=openstack)
2025-12-13 00:19:43,635 p=32461 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators)
2025-12-13 00:19:43,661 p=32461 u=zuul n=ansible | TASK [openshift_setup : Wait for the image registry to be ready kind=Deployment, name=image-registry, namespace=openshift-image-registry, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Available', 'status': 'True'}] ***
2025-12-13 00:19:43,661 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:43 +0000 (0:00:01.476) 0:00:10.516 *****
2025-12-13 00:19:43,661 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:43 +0000 (0:00:01.476) 0:00:10.515 *****
2025-12-13 00:19:44,614 p=32461 u=zuul n=ansible | ok: [localhost]
2025-12-13 00:19:44,646 p=32461 u=zuul n=ansible | TASK [openshift_setup : Login into OpenShift internal registry output_dir={{ cifmw_openshift_setup_basedir }}/artifacts, script=podman login -u {{ cifmw_openshift_user }} -p {{ cifmw_openshift_token }} {%- if cifmw_openshift_setup_skip_internal_registry_tls_verify|bool %} --tls-verify=false {%- endif %} {{ cifmw_openshift_setup_registry_default_route.resources[0].spec.host }}] ***
2025-12-13 00:19:44,646 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:44 +0000 (0:00:00.985) 0:00:11.501 *****
2025-12-13 00:19:44,647 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:44 +0000 (0:00:00.985) 0:00:11.500 *****
2025-12-13 00:19:44,717 p=32461 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_001_login_into_openshift_internal.log
2025-12-13 00:19:44,986 p=32461 u=zuul n=ansible | changed: [localhost]
2025-12-13 00:19:45,005 p=32461 u=zuul n=ansible | TASK [Ensure we have custom CA installed on host role=install_ca] **************
2025-12-13 00:19:45,005 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.358) 0:00:11.859 *****
2025-12-13 00:19:45,005 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.358) 0:00:11.859 *****
2025-12-13 00:19:45,028 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,048 p=32461 u=zuul n=ansible | TASK [openshift_setup : Update ca bundle _raw_params=update-ca-trust extract] ***
2025-12-13 00:19:45,049 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.043) 0:00:11.903 *****
2025-12-13 00:19:45,049 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.043) 0:00:11.902 *****
2025-12-13 00:19:45,070 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,117 p=32461 u=zuul n=ansible | TASK [openshift_setup : Slurp CAs file src={{ cifmw_openshift_setup_ca_bundle_path }}] ***
2025-12-13 00:19:45,118 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.069) 0:00:11.972 *****
2025-12-13 00:19:45,118 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.068) 0:00:11.971 *****
2025-12-13 00:19:45,137 p=32461 u=zuul n=ansible | skipping: [localhost]
2025-12-13 00:19:45,147 p=32461 u=zuul n=ansible | TASK [openshift_setup : Create config map with registry CAs kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'namespace': 'openshift-config', 'name': 'registry-cas'}, 'data': '{{ _config_map_data | items2dict }}'}] ***
2025-12-13 00:19:45,148 p=32461
u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.030) 0:00:12.002 ***** 2025-12-13 00:19:45,148 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.030) 0:00:12.001 ***** 2025-12-13 00:19:45,182 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:45,192 p=32461 u=zuul n=ansible | TASK [openshift_setup : Install Red Hat CA for pulling images from internal registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'additionalTrustedCA': {'name': 'registry-cas'}}}] *** 2025-12-13 00:19:45,192 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.044) 0:00:12.046 ***** 2025-12-13 00:19:45,192 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.044) 0:00:12.046 ***** 2025-12-13 00:19:45,227 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:45,239 p=32461 u=zuul n=ansible | TASK [openshift_setup : Add insecure registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'registrySources': {'insecureRegistries': ['{{ cifmw_update_containers_registry }}'], 'allowedRegistries': '{{ all_registries }}'}}}] *** 2025-12-13 00:19:45,239 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.047) 0:00:12.094 ***** 2025-12-13 00:19:45,239 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.047) 0:00:12.093 ***** 2025-12-13 00:19:45,263 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:45,273 p=32461 u=zuul n=ansible | TASK [openshift_setup : Create 
a ICSP with repository digest mirrors kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'operator.openshift.io/v1alpha1', 'kind': 'ImageContentSourcePolicy', 'metadata': {'name': 'registry-digest-mirrors'}, 'spec': {'repositoryDigestMirrors': '{{ cifmw_openshift_setup_digest_mirrors }}'}}] *** 2025-12-13 00:19:45,273 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.033) 0:00:12.127 ***** 2025-12-13 00:19:45,273 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.033) 0:00:12.127 ***** 2025-12-13 00:19:45,306 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:45,320 p=32461 u=zuul n=ansible | TASK [openshift_setup : Gather network.operator info kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=operator.openshift.io/v1, kind=Network, name=cluster] *** 2025-12-13 00:19:45,320 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.047) 0:00:12.174 ***** 2025-12-13 00:19:45,320 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:45 +0000 (0:00:00.047) 0:00:12.174 ***** 2025-12-13 00:19:46,142 p=32461 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:46,164 p=32461 u=zuul n=ansible | TASK [openshift_setup : Patch network operator api_version=operator.openshift.io/v1, kubeconfig={{ cifmw_openshift_kubeconfig }}, kind=Network, name=cluster, persist_config=True, patch=[{'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/routingViaHost', 'value': True, 'op': 'replace'}, {'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/ipForwarding', 'value': 'Global', 'op': 'replace'}]] *** 2025-12-13 00:19:46,164 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:46 +0000 (0:00:00.843) 0:00:13.018 ***** 2025-12-13 
00:19:46,164 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:46 +0000 (0:00:00.843) 0:00:13.018 ***** 2025-12-13 00:19:47,150 p=32461 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:47,161 p=32461 u=zuul n=ansible | TASK [openshift_setup : Patch samples registry configuration kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=samples.operator.openshift.io/v1, kind=Config, name=cluster, patch=[{'op': 'replace', 'path': '/spec/samplesRegistry', 'value': 'registry.redhat.io'}]] *** 2025-12-13 00:19:47,161 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.996) 0:00:14.015 ***** 2025-12-13 00:19:47,161 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.996) 0:00:14.015 ***** 2025-12-13 00:19:47,890 p=32461 u=zuul n=ansible | changed: [localhost] 2025-12-13 00:19:47,904 p=32461 u=zuul n=ansible | TASK [openshift_setup : Delete the pods from openshift-marketplace namespace kind=Pod, state=absent, delete_all=True, kubeconfig={{ cifmw_openshift_kubeconfig }}, namespace=openshift-marketplace] *** 2025-12-13 00:19:47,904 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.743) 0:00:14.758 ***** 2025-12-13 00:19:47,904 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.743) 0:00:14.758 ***** 2025-12-13 00:19:47,923 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:47,937 p=32461 u=zuul n=ansible | TASK [openshift_setup : Wait for openshift-marketplace pods to be running _raw_params=oc wait pod --all --for=condition=Ready -n openshift-marketplace --timeout=1m] *** 2025-12-13 00:19:47,937 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.032) 0:00:14.791 ***** 2025-12-13 00:19:47,937 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:47 +0000 (0:00:00.032) 0:00:14.791 ***** 2025-12-13 00:19:47,964 
p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,012 p=32461 u=zuul n=ansible | TASK [Deploy Observability operator. name=openshift_obs] *********************** 2025-12-13 00:19:48,013 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.075) 0:00:14.867 ***** 2025-12-13 00:19:48,013 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.075) 0:00:14.866 ***** 2025-12-13 00:19:48,045 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,054 p=32461 u=zuul n=ansible | TASK [Deploy Metal3 BMHs name=deploy_bmh] ************************************** 2025-12-13 00:19:48,055 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.042) 0:00:14.909 ***** 2025-12-13 00:19:48,055 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.042) 0:00:14.908 ***** 2025-12-13 00:19:48,081 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,093 p=32461 u=zuul n=ansible | TASK [Install certmanager operator role name=cert_manager] ********************* 2025-12-13 00:19:48,093 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.038) 0:00:14.947 ***** 2025-12-13 00:19:48,093 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.038) 0:00:14.947 ***** 2025-12-13 00:19:48,115 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,126 p=32461 u=zuul n=ansible | TASK [Configure hosts networking using nmstate name=ci_nmstate] **************** 2025-12-13 00:19:48,126 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.033) 0:00:14.980 ***** 2025-12-13 00:19:48,126 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.033) 0:00:14.980 ***** 2025-12-13 00:19:48,148 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,159 p=32461 u=zuul n=ansible | TASK [Configure multus networks name=ci_multus] ******************************** 
2025-12-13 00:19:48,159 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.032) 0:00:15.013 ***** 2025-12-13 00:19:48,159 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.033) 0:00:15.013 ***** 2025-12-13 00:19:48,179 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,192 p=32461 u=zuul n=ansible | TASK [Deploy Sushy Emulator service pod name=sushy_emulator] ******************* 2025-12-13 00:19:48,192 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.032) 0:00:15.046 ***** 2025-12-13 00:19:48,192 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.032) 0:00:15.046 ***** 2025-12-13 00:19:48,212 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,223 p=32461 u=zuul n=ansible | TASK [Setup Libvirt on controller name=libvirt_manager] ************************ 2025-12-13 00:19:48,223 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.030) 0:00:15.077 ***** 2025-12-13 00:19:48,223 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.030) 0:00:15.076 ***** 2025-12-13 00:19:48,241 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,251 p=32461 u=zuul n=ansible | TASK [Prepare container package builder name=pkg_build] ************************ 2025-12-13 00:19:48,251 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.028) 0:00:15.106 ***** 2025-12-13 00:19:48,252 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.028) 0:00:15.105 ***** 2025-12-13 00:19:48,286 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,299 p=32461 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] 
*** 2025-12-13 00:19:48,299 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.047) 0:00:15.154 ***** 2025-12-13 00:19:48,299 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.047) 0:00:15.153 ***** 2025-12-13 00:19:48,353 p=32461 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:48,363 p=32461 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2025-12-13 00:19:48,363 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.063) 0:00:15.217 ***** 2025-12-13 00:19:48,363 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.063) 0:00:15.217 ***** 2025-12-13 00:19:48,423 p=32461 u=zuul n=ansible | ok: [localhost] 2025-12-13 00:19:48,434 p=32461 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_infra _raw_params={{ hook.type }}.yml] *** 2025-12-13 00:19:48,434 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.071) 0:00:15.289 ***** 2025-12-13 00:19:48,435 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.071) 0:00:15.288 ***** 2025-12-13 00:19:48,491 p=32461 u=zuul n=ansible | skipping: [localhost] 2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | localhost : ok=35 changed=12 unreachable=0 failed=0 skipped=34 rescued=0 ignored=0 2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.096) 0:00:15.385 ***** 2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | =============================================================================== 2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Create required namespaces ---------------------------- 1.75s 2025-12-13 00:19:48,531 p=32461 
u=zuul n=ansible | openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces --- 1.48s 2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Get internal OpenShift registry route ----------------- 1.06s 2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Patch network operator -------------------------------- 1.00s 2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Wait for the image registry to be ready --------------- 0.99s 2025-12-13 00:19:48,531 p=32461 u=zuul n=ansible | openshift_setup : Gather network.operator info -------------------------- 0.84s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_setup : Patch samples registry configuration ------------------ 0.74s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Create the openshift_login parameters file ------------ 0.58s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Append the KUBECONFIG to the install yamls parameters --- 0.54s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch OpenShift token --------------------------------- 0.50s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch new OpenShift access token ---------------------- 0.49s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch OpenShift kubeconfig context -------------------- 0.37s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Ensure output directory exists ------------------------ 0.37s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_setup : Login into OpenShift internal registry ---------------- 0.36s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch OpenShift API URL ------------------------------- 0.36s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Fetch OpenShift current user -------------------------- 0.36s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible 
| openshift_login : Read the install yamls parameters file ---------------- 0.33s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | networking_mapper : Check for Networking Environment Definition file existence --- 0.30s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_setup : Ensure output directory exists ------------------------ 0.24s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login : Check if kubeconfig exists ---------------------------- 0.22s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | Saturday 13 December 2025 00:19:48 +0000 (0:00:00.097) 0:00:15.385 ***** 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | =============================================================================== 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_setup --------------------------------------------------------- 8.94s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | openshift_login --------------------------------------------------------- 4.78s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | ansible.builtin.include_role -------------------------------------------- 0.51s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | run_hook ---------------------------------------------------------------- 0.51s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | networking_mapper ------------------------------------------------------- 0.43s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | ansible.builtin.include_vars -------------------------------------------- 0.17s 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-12-13 00:19:48,532 p=32461 u=zuul n=ansible | total ------------------------------------------------------------------ 15.34s 2025-12-13 00:20:09,545 p=33055 u=zuul n=ansible | Starting galaxy collection install process 2025-12-13 00:20:09,565 p=33055 u=zuul n=ansible | Process install dependency map 2025-12-13 00:20:29,957 p=33055 
u=zuul n=ansible | Starting collection install process 2025-12-13 00:20:29,957 p=33055 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+b9f05e2b' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general' 2025-12-13 00:20:30,598 p=33055 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+b9f05e2b at /home/zuul/.ansible/collections/ansible_collections/cifmw/general 2025-12-13 00:20:30,599 p=33055 u=zuul n=ansible | cifmw.general:1.0.0+b9f05e2b was installed successfully 2025-12-13 00:20:30,599 p=33055 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman' 2025-12-13 00:20:30,665 p=33055 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman 2025-12-13 00:20:30,665 p=33055 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully 2025-12-13 00:20:30,665 p=33055 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general' 2025-12-13 00:20:31,594 p=33055 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general 2025-12-13 00:20:31,595 p=33055 u=zuul n=ansible | community.general:10.0.1 was installed successfully 2025-12-13 00:20:31,595 p=33055 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix' 2025-12-13 00:20:31,654 p=33055 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix 2025-12-13 00:20:31,655 p=33055 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully 2025-12-13 00:20:31,655 p=33055 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils' 2025-12-13 00:20:31,775 p=33055 u=zuul n=ansible | 
Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils 2025-12-13 00:20:31,775 p=33055 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully 2025-12-13 00:20:31,775 p=33055 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt' 2025-12-13 00:20:31,818 p=33055 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt 2025-12-13 00:20:31,818 p=33055 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully 2025-12-13 00:20:31,818 p=33055 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto' 2025-12-13 00:20:32,018 p=33055 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto 2025-12-13 00:20:32,018 p=33055 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully 2025-12-13 00:20:32,018 p=33055 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core' 2025-12-13 00:20:32,191 p=33055 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core 2025-12-13 00:20:32,191 p=33055 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully 2025-12-13 00:20:32,191 p=33055 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon' 2025-12-13 00:20:32,301 p=33055 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon 2025-12-13 00:20:32,301 p=33055 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully 2025-12-13 00:20:32,301 p=33055 u=zuul n=ansible | 
Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template' 2025-12-13 00:20:32,325 p=33055 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template 2025-12-13 00:20:32,325 p=33055 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully 2025-12-13 00:20:32,325 p=33055 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos' 2025-12-13 00:20:32,624 p=33055 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos 2025-12-13 00:20:32,624 p=33055 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully 2025-12-13 00:20:32,624 p=33055 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios' 2025-12-13 00:20:32,953 p=33055 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios 2025-12-13 00:20:32,954 p=33055 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully 2025-12-13 00:20:32,954 p=33055 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx' 2025-12-13 00:20:33,007 p=33055 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx 2025-12-13 00:20:33,007 p=33055 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully 2025-12-13 00:20:33,007 p=33055 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd' 2025-12-13 00:20:33,066 p=33055 u=zuul n=ansible | Created collection for community.okd:4.0.0 at 
/home/zuul/.ansible/collections/ansible_collections/community/okd
2025-12-13 00:20:33,067 p=33055 u=zuul n=ansible | community.okd:4.0.0 was installed successfully
2025-12-13 00:20:33,067 p=33055 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@'
2025-12-13 00:20:33,186 p=33055 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@
2025-12-13 00:20:33,186 p=33055 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/9.log
2025-12-13T00:20:29.909408909+00:00 stdout F 2025-12-13T00:20:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0d2bb406-b6e5-4b8b-a2fc-f815d397d2b8
2025-12-13T00:20:29.951813588+00:00 stdout F 2025-12-13T00:20:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0d2bb406-b6e5-4b8b-a2fc-f815d397d2b8 to /host/opt/cni/bin/
2025-12-13T00:20:29.970768071+00:00 stderr F 2025-12-13T00:20:29Z [verbose] multus-daemon started
2025-12-13T00:20:29.970768071+00:00 stderr F 2025-12-13T00:20:29Z [verbose] Readiness Indicator file check
2025-12-13T00:20:29.970899685+00:00 stderr F 2025-12-13T00:20:29Z [verbose] Readiness Indicator file check done!
2025-12-13T00:20:29.972143390+00:00 stderr F I1213 00:20:29.972097 29496 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem".
2025-12-13T00:20:29.972339235+00:00 stderr F 2025-12-13T00:20:29Z [verbose] Waiting for certificate
2025-12-13T00:20:30.972741321+00:00 stderr F I1213 00:20:30.972649 29496 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem".
2025-12-13T00:20:30.972967527+00:00 stderr F 2025-12-13T00:20:30Z [verbose] Certificate found!
2025-12-13T00:20:30.973781039+00:00 stderr F 2025-12-13T00:20:30Z [verbose] server configured with chroot: /hostroot
2025-12-13T00:20:30.973781039+00:00 stderr F 2025-12-13T00:20:30Z [verbose] Filtering pod watch for node "crc"
2025-12-13T00:20:31.074598459+00:00 stderr F 2025-12-13T00:20:31Z [verbose] API readiness check
2025-12-13T00:20:31.075119493+00:00 stderr F 2025-12-13T00:20:31Z [verbose] API readiness check done!
2025-12-13T00:20:31.075256867+00:00 stderr F 2025-12-13T00:20:31Z [verbose] Generated MultusCNI config: {"binDir":"/var/lib/cni/bin","cniVersion":"0.3.1","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","namespaceIsolation":true,"globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","type":"multus-shim","daemonSocketDir":"/run/multus/socket"}
2025-12-13T00:20:31.075429191+00:00 stderr F 2025-12-13T00:20:31Z [verbose] started to watch file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf
2025-12-13T00:20:38.513471449+00:00 stderr F 2025-12-13T00:20:38Z [verbose] DEL starting CNI request ContainerID:"0cd59daaff2210a8feba8a95a04d6ee07b96c64856dce07f56b5c1fdad1ceb37" Netns:"/var/run/netns/11aaf5ff-61e9-4955-9c46-163b21c02c12" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-13-crc;K8S_POD_INFRA_CONTAINER_ID=0cd59daaff2210a8feba8a95a04d6ee07b96c64856dce07f56b5c1fdad1ceb37;K8S_POD_UID=7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9" Path:""
2025-12-13T00:20:38.514203749+00:00 stderr F 2025-12-13T00:20:38Z [verbose] Del: openshift-kube-apiserver:installer-13-crc:7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}}
2025-12-13T00:20:38.651103404+00:00 stderr F 2025-12-13T00:20:38Z [verbose] DEL finished CNI request ContainerID:"0cd59daaff2210a8feba8a95a04d6ee07b96c64856dce07f56b5c1fdad1ceb37" Netns:"/var/run/netns/11aaf5ff-61e9-4955-9c46-163b21c02c12" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-13-crc;K8S_POD_INFRA_CONTAINER_ID=0cd59daaff2210a8feba8a95a04d6ee07b96c64856dce07f56b5c1fdad1ceb37;K8S_POD_UID=7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9" Path:"", result: "", err: 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/5.log
2025-08-13T19:57:38.998949389+00:00 stdout F 2025-08-13T19:57:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d86c2164-4a1e-4b53-8f29-3287db575df7
2025-08-13T19:57:39.063740029+00:00 stdout F 2025-08-13T19:57:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d86c2164-4a1e-4b53-8f29-3287db575df7 to /host/opt/cni/bin/
2025-08-13T19:57:39.142894289+00:00 stderr F 2025-08-13T19:57:39Z [verbose] multus-daemon started
2025-08-13T19:57:39.142894289+00:00 stderr F 2025-08-13T19:57:39Z [verbose] Readiness Indicator file check
2025-08-13T19:57:39.143093375+00:00 stderr F 2025-08-13T19:57:39Z [verbose] Readiness Indicator file check done!
2025-08-13T19:57:39.150443065+00:00 stderr F I0813 19:57:39.150296 23104 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem".
2025-08-13T19:57:39.155552761+00:00 stderr F 2025-08-13T19:57:39Z [verbose] Waiting for certificate
2025-08-13T19:57:40.156328197+00:00 stderr F I0813 19:57:40.156164 23104 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem".
2025-08-13T19:57:40.156925264+00:00 stderr F 2025-08-13T19:57:40Z [verbose] Certificate found!
2025-08-13T19:57:40.158536691+00:00 stderr F 2025-08-13T19:57:40Z [verbose] server configured with chroot: /hostroot
2025-08-13T19:57:40.158536691+00:00 stderr F 2025-08-13T19:57:40Z [verbose] Filtering pod watch for node "crc"
2025-08-13T19:57:40.264016993+00:00 stderr F 2025-08-13T19:57:40Z [verbose] API readiness check
2025-08-13T19:57:40.269915831+00:00 stderr F 2025-08-13T19:57:40Z [verbose] API readiness check done!
2025-08-13T19:57:40.269915831+00:00 stderr F 2025-08-13T19:57:40Z [verbose] Generated MultusCNI config: {"binDir":"/var/lib/cni/bin","cniVersion":"0.3.1","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","namespaceIsolation":true,"globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","type":"multus-shim","daemonSocketDir":"/run/multus/socket"}
2025-08-13T19:57:40.270096556+00:00 stderr F 2025-08-13T19:57:40Z [verbose] started to watch file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf
2025-08-13T19:57:48.872958586+00:00 stderr F 2025-08-13T19:57:48Z [verbose] ADD starting CNI request ContainerID:"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350" Netns:"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-k9qqb;K8S_POD_INFRA_CONTAINER_ID=ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350;K8S_POD_UID=ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Path:""
2025-08-13T19:57:49.433950375+00:00 stderr F 2025-08-13T19:57:49Z [verbose] ADD starting CNI request ContainerID:"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Netns:"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251905-zmjv9;K8S_POD_INFRA_CONTAINER_ID=8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998;K8S_POD_UID=8500d7bd-50fb-4ca6-af41-b7a24cae43cd" Path:""
2025-08-13T19:57:49.630465507+00:00 stderr F 2025-08-13T19:57:49Z [verbose] Add: openshift-marketplace:community-operators-k9qqb:ccdf38cf-634a-41a2-9c8b-74bb86af80a7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ac543dfbb4577c1","mac":"9e:fb:45:69:5c:25"},{"name":"eth0","mac":"0a:58:0a:d9:00:1d","sandbox":"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.29/23","gateway":"10.217.0.1"}],"dns":{}}
2025-08-13T19:57:49.630527319+00:00 stderr F 2025-08-13T19:57:49Z [verbose] Add: openshift-operator-lifecycle-manager:collect-profiles-29251905-zmjv9:8500d7bd-50fb-4ca6-af41-b7a24cae43cd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"8eb40cf57cd4084","mac":"aa:3f:6d:4c:4e:9e"},{"name":"eth0","mac":"0a:58:0a:d9:00:23","sandbox":"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.35/23","gateway":"10.217.0.1"}],"dns":{}}
2025-08-13T19:57:49.638874547+00:00 stderr F I0813 19:57:49.638647 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29251905-zmjv9", UID:"8500d7bd-50fb-4ca6-af41-b7a24cae43cd", APIVersion:"v1", ResourceVersion:"27591", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.35/23] from ovn-kubernetes
2025-08-13T19:57:49.638874547+00:00 stderr F I0813 19:57:49.638760 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod",
Namespace:"openshift-marketplace", Name:"community-operators-k9qqb", UID:"ccdf38cf-634a-41a2-9c8b-74bb86af80a7", APIVersion:"v1", ResourceVersion:"27590", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.29/23] from ovn-kubernetes 2025-08-13T19:57:49.749688731+00:00 stderr F 2025-08-13T19:57:49Z [verbose] ADD starting CNI request ContainerID:"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45" Netns:"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-dcqzh;K8S_POD_INFRA_CONTAINER_ID=fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45;K8S_POD_UID=6db26b71-4e04-4688-a0c0-00e06e8c888d" Path:"" 2025-08-13T19:57:49.783585229+00:00 stderr F 2025-08-13T19:57:49Z [verbose] ADD starting CNI request ContainerID:"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761" Netns:"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-g4v97;K8S_POD_INFRA_CONTAINER_ID=2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761;K8S_POD_UID=bb917686-edfb-4158-86ad-6fce0abec64c" Path:"" 2025-08-13T19:57:50.109627859+00:00 stderr F 2025-08-13T19:57:50Z [verbose] ADD finished CNI request ContainerID:"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350" Netns:"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-k9qqb;K8S_POD_INFRA_CONTAINER_ID=ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350;K8S_POD_UID=ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:fb:45:69:5c:25\",\"name\":\"ac543dfbb4577c1\"},{\"mac\":\"0a:58:0a:d9:00:1d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343\"}],\"ips\":[{\"address\":\"10.217.0.29/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:57:50.109852686+00:00 stderr P 2025-08-13T19:57:50Z [verbose] 2025-08-13T19:57:50.109924838+00:00 stderr P ADD finished CNI request ContainerID:"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Netns:"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251905-zmjv9;K8S_POD_INFRA_CONTAINER_ID=8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998;K8S_POD_UID=8500d7bd-50fb-4ca6-af41-b7a24cae43cd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:3f:6d:4c:4e:9e\",\"name\":\"8eb40cf57cd4084\"},{\"mac\":\"0a:58:0a:d9:00:23\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44\"}],\"ips\":[{\"address\":\"10.217.0.35/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:57:50.109956979+00:00 stderr F 2025-08-13T19:57:50.216443459+00:00 stderr F 2025-08-13T19:57:50Z [verbose] Add: openshift-marketplace:certified-operators-g4v97:bb917686-edfb-4158-86ad-6fce0abec64c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2c30e71c46910d5","mac":"4a:8a:38:55:61:94"},{"name":"eth0","mac":"0a:58:0a:d9:00:21","sandbox":"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.33/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:57:50.216899512+00:00 stderr F I0813 19:57:50.216754 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-g4v97", 
UID:"bb917686-edfb-4158-86ad-6fce0abec64c", APIVersion:"v1", ResourceVersion:"27585", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.33/23] from ovn-kubernetes 2025-08-13T19:57:50.257050749+00:00 stderr F 2025-08-13T19:57:50Z [verbose] Add: openshift-marketplace:redhat-operators-dcqzh:6db26b71-4e04-4688-a0c0-00e06e8c888d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"fd8d1d12d982e02","mac":"62:c6:c2:53:ad:e3"},{"name":"eth0","mac":"0a:58:0a:d9:00:22","sandbox":"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.34/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:57:50.257381338+00:00 stderr F I0813 19:57:50.257318 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-dcqzh", UID:"6db26b71-4e04-4688-a0c0-00e06e8c888d", APIVersion:"v1", ResourceVersion:"27584", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.34/23] from ovn-kubernetes 2025-08-13T19:57:51.134684360+00:00 stderr F 2025-08-13T19:57:51Z [verbose] ADD finished CNI request ContainerID:"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45" Netns:"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-dcqzh;K8S_POD_INFRA_CONTAINER_ID=fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45;K8S_POD_UID=6db26b71-4e04-4688-a0c0-00e06e8c888d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"62:c6:c2:53:ad:e3\",\"name\":\"fd8d1d12d982e02\"},{\"mac\":\"0a:58:0a:d9:00:22\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43\"}],\"ips\":[{\"address\":\"10.217.0.34/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:57:51.134684360+00:00 stderr F 2025-08-13T19:57:51Z [verbose] ADD finished CNI 
request ContainerID:"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761" Netns:"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-g4v97;K8S_POD_INFRA_CONTAINER_ID=2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761;K8S_POD_UID=bb917686-edfb-4158-86ad-6fce0abec64c" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4a:8a:38:55:61:94\",\"name\":\"2c30e71c46910d5\"},{\"mac\":\"0a:58:0a:d9:00:21\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9\"}],\"ips\":[{\"address\":\"10.217.0.33/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:57:56.898496694+00:00 stderr P 2025-08-13T19:57:56Z [verbose] 2025-08-13T19:57:56.898613597+00:00 stderr P DEL starting CNI request ContainerID:"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Netns:"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251905-zmjv9;K8S_POD_INFRA_CONTAINER_ID=8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998;K8S_POD_UID=8500d7bd-50fb-4ca6-af41-b7a24cae43cd" Path:"" 2025-08-13T19:57:56.898643558+00:00 stderr F 2025-08-13T19:57:56.900093550+00:00 stderr P 2025-08-13T19:57:56Z [verbose] 2025-08-13T19:57:56.900135291+00:00 stderr P Del: openshift-operator-lifecycle-manager:collect-profiles-29251905-zmjv9:8500d7bd-50fb-4ca6-af41-b7a24cae43cd:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:57:56.900171212+00:00 stderr F 2025-08-13T19:57:57.037192858+00:00 stderr P 2025-08-13T19:57:57Z [verbose] 
2025-08-13T19:57:57.037266280+00:00 stderr P DEL finished CNI request ContainerID:"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998" Netns:"/var/run/netns/4f11a94b-a4a4-4eea-91bc-1db62481ef44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251905-zmjv9;K8S_POD_INFRA_CONTAINER_ID=8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998;K8S_POD_UID=8500d7bd-50fb-4ca6-af41-b7a24cae43cd" Path:"", result: "", err: 2025-08-13T19:57:57.037292021+00:00 stderr F 2025-08-13T19:58:54.366505502+00:00 stderr F 2025-08-13T19:58:54Z [verbose] ADD starting CNI request ContainerID:"a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff" Netns:"/var/run/netns/70f78e3f-759f-4789-a9bd-1530bc742a7e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"" 2025-08-13T19:58:54.618147395+00:00 stderr P 2025-08-13T19:58:54Z [verbose] ADD starting CNI request ContainerID:"cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb" Netns:"/var/run/netns/2ef05a7b-fa59-47f5-b04f-8a0f8c971486" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"" 2025-08-13T19:58:54.618344581+00:00 stderr F 2025-08-13T19:58:54.735013927+00:00 stderr F 2025-08-13T19:58:54Z [verbose] ADD starting CNI request ContainerID:"caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0" Netns:"/var/run/netns/1cabf433-3078-402a-a57a-5dc9c2a3b85e" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"" 2025-08-13T19:58:55.107383541+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-machine-config-operator:machine-config-controller-6df6df6b6b-58shh:297ab9b6-2186-4d5b-a952-2bfd59af63c4:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a3a061a59b867b6","mac":"12:21:e7:30:c2:18"},{"name":"eth0","mac":"0a:58:0a:d9:00:3f","sandbox":"/var/run/netns/70f78e3f-759f-4789-a9bd-1530bc742a7e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.63/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.110313245+00:00 stderr F I0813 19:58:55.108594 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-config-operator", Name:"machine-config-controller-6df6df6b6b-58shh", UID:"297ab9b6-2186-4d5b-a952-2bfd59af63c4", APIVersion:"v1", ResourceVersion:"27254", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.63/23] from ovn-kubernetes 2025-08-13T19:58:55.227223477+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff" Netns:"/var/run/netns/70f78e3f-759f-4789-a9bd-1530bc742a7e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"12:21:e7:30:c2:18\",\"name\":\"a3a061a59b867b6\"},{\"mac\":\"0a:58:0a:d9:00:3f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/70f78e3f-759f-4789-a9bd-1530bc742a7e\"}],\"ips\":[{\"address\":\"10.217.0.63/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.287557247+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368" Netns:"/var/run/netns/f945e43c-f2df-4189-96aa-497dc55219ad" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-649bd778b4-tt5tw;K8S_POD_INFRA_CONTAINER_ID=2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368;K8S_POD_UID=45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Path:"" 2025-08-13T19:58:55.309136562+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61" Netns:"/var/run/netns/cde88eb3-039d-4e0e-88a8-df51e2e90a47" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"" 2025-08-13T19:58:55.316436250+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-machine-config-operator:machine-config-operator-76788bff89-wkjgm:120b38dc-8236-4fa6-a452-642b8ad738ee:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"cb33d2fb758e44e","mac":"26:8b:b4:e3:af:9c"},{"name":"eth0","mac":"0a:58:0a:d9:00:15","sandbox":"/var/run/netns/2ef05a7b-fa59-47f5-b04f-8a0f8c971486"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.21/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.317640615+00:00 stderr F 2025-08-13T19:58:55Z 
[verbose] ADD starting CNI request ContainerID:"9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8" Netns:"/var/run/netns/a9771d53-0f7a-4d85-8611-be5ec13889be" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"" 2025-08-13T19:58:55.318273593+00:00 stderr F I0813 19:58:55.318144 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-config-operator", Name:"machine-config-operator-76788bff89-wkjgm", UID:"120b38dc-8236-4fa6-a452-642b8ad738ee", APIVersion:"v1", ResourceVersion:"27443", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.21/23] from ovn-kubernetes 2025-08-13T19:58:55.373365343+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb" Netns:"/var/run/netns/2ef05a7b-fa59-47f5-b04f-8a0f8c971486" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"26:8b:b4:e3:af:9c\",\"name\":\"cb33d2fb758e44e\"},{\"mac\":\"0a:58:0a:d9:00:15\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2ef05a7b-fa59-47f5-b04f-8a0f8c971486\"}],\"ips\":[{\"address\":\"10.217.0.21/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.393036774+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219" Netns:"/var/run/netns/9a40501a-04df-4b97-b0c1-7e22cd5bb9fe" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"" 2025-08-13T19:58:55.478690086+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884" Netns:"/var/run/netns/e517e8ca-14f5-4d84-8cbb-cb4146e07cba" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"" 2025-08-13T19:58:55.537093720+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-machine-api:machine-api-operator-788b7c6b6c-ctdmb:4f8aa612-9da0-4a2b-911e-6a1764a4e74e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"caf64d49987c99e","mac":"b2:ea:84:15:ac:21"},{"name":"eth0","mac":"0a:58:0a:d9:00:05","sandbox":"/var/run/netns/1cabf433-3078-402a-a57a-5dc9c2a3b85e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.5/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.537340517+00:00 stderr F I0813 19:58:55.537273 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-api", Name:"machine-api-operator-788b7c6b6c-ctdmb", UID:"4f8aa612-9da0-4a2b-911e-6a1764a4e74e", APIVersion:"v1", ResourceVersion:"27399", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.5/23] from ovn-kubernetes 2025-08-13T19:58:55.584707128+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.584768759+00:00 stderr P ADD starting CNI request ContainerID:"1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2" Netns:"/var/run/netns/2f56b827-692e-4c19-ad24-0ca3b949cfda" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"" 2025-08-13T19:58:55.584951845+00:00 stderr F 2025-08-13T19:58:55.625408338+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0" Netns:"/var/run/netns/1cabf433-3078-402a-a57a-5dc9c2a3b85e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b2:ea:84:15:ac:21\",\"name\":\"caf64d49987c99e\"},{\"mac\":\"0a:58:0a:d9:00:05\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1cabf433-3078-402a-a57a-5dc9c2a3b85e\"}],\"ips\":[{\"address\":\"10.217.0.5/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.632598563+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.632651844+00:00 stderr P Add: openshift-machine-api:control-plane-machine-set-operator-649bd778b4-tt5tw:45a8038e-e7f2-4d93-a6f5-7753aa54e63f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2e8f0bacebafcab","mac":"9a:5d:fa:e3:14:9e"},{"name":"eth0","mac":"0a:58:0a:d9:00:14","sandbox":"/var/run/netns/f945e43c-f2df-4189-96aa-497dc55219ad"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.20/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.632683775+00:00 stderr F 2025-08-13T19:58:55.635280089+00:00 stderr F I0813 19:58:55.633408 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-api", Name:"control-plane-machine-set-operator-649bd778b4-tt5tw", 
UID:"45a8038e-e7f2-4d93-a6f5-7753aa54e63f", APIVersion:"v1", ResourceVersion:"27292", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.20/23] from ovn-kubernetes 2025-08-13T19:58:55.723115223+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD starting CNI request ContainerID:"2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5" Netns:"/var/run/netns/fd584002-b48d-40c7-b6b9-fd1d9206c3f9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"" 2025-08-13T19:58:55.735739483+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368" Netns:"/var/run/netns/f945e43c-f2df-4189-96aa-497dc55219ad" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-649bd778b4-tt5tw;K8S_POD_INFRA_CONTAINER_ID=2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368;K8S_POD_UID=45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:5d:fa:e3:14:9e\",\"name\":\"2e8f0bacebafcab\"},{\"mac\":\"0a:58:0a:d9:00:14\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f945e43c-f2df-4189-96aa-497dc55219ad\"}],\"ips\":[{\"address\":\"10.217.0.20/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.814497258+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-marketplace:certified-operators-7287f:887d596e-c519-4bfa-af90-3edd9e1b2f0f:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"9ed66fef0dec7ca","mac":"d2:7f:2e:a1:42:5d"},{"name":"eth0","mac":"0a:58:0a:d9:00:2f","sandbox":"/var/run/netns/a9771d53-0f7a-4d85-8611-be5ec13889be"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.47/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.814882729+00:00 stderr F I0813 19:58:55.814750 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-7287f", UID:"887d596e-c519-4bfa-af90-3edd9e1b2f0f", APIVersion:"v1", ResourceVersion:"27417", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.47/23] from ovn-kubernetes 2025-08-13T19:58:55.862531407+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.862588959+00:00 stderr F ADD starting CNI request ContainerID:"042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933" Netns:"/var/run/netns/6eba4aaa-159d-430d-9946-5a35c80a373c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"" 2025-08-13T19:58:55.867309433+00:00 stderr F 2025-08-13T19:58:55Z [verbose] Add: openshift-operator-lifecycle-manager:catalog-operator-857456c46-7f5wf:8a5ae51d-d173-4531-8975-f164c975ce1f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"861ac63b0e0c6ab","mac":"5e:7f:a7:dd:ef:03"},{"name":"eth0","mac":"0a:58:0a:d9:00:0b","sandbox":"/var/run/netns/e517e8ca-14f5-4d84-8cbb-cb4146e07cba"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.11/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:55.867309433+00:00 stderr F I0813 19:58:55.867187 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"catalog-operator-857456c46-7f5wf", 
UID:"8a5ae51d-d173-4531-8975-f164c975ce1f", APIVersion:"v1", ResourceVersion:"27311", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.11/23] from ovn-kubernetes 2025-08-13T19:58:55.900977383+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.901038685+00:00 stderr P ADD finished CNI request ContainerID:"9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8" Netns:"/var/run/netns/a9771d53-0f7a-4d85-8611-be5ec13889be" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:7f:2e:a1:42:5d\",\"name\":\"9ed66fef0dec7ca\"},{\"mac\":\"0a:58:0a:d9:00:2f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a9771d53-0f7a-4d85-8611-be5ec13889be\"}],\"ips\":[{\"address\":\"10.217.0.47/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:55.901073766+00:00 stderr F 2025-08-13T19:58:55.932652876+00:00 stderr F 2025-08-13T19:58:55Z [verbose] ADD finished CNI request ContainerID:"861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884" Netns:"/var/run/netns/e517e8ca-14f5-4d84-8cbb-cb4146e07cba" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5e:7f:a7:dd:ef:03\",\"name\":\"861ac63b0e0c6ab\"},{\"mac\":\"0a:58:0a:d9:00:0b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e517e8ca-14f5-4d84-8cbb-cb4146e07cba\"}],\"ips\":[{\"address\":\"10.217.0.11/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 
2025-08-13T19:58:55.937498794+00:00 stderr P 2025-08-13T19:58:55Z [verbose] 2025-08-13T19:58:55.937598317+00:00 stderr P ADD starting CNI request ContainerID:"51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724" Netns:"/var/run/netns/c61a4ab0-6f50-41fe-8134-619a499fa7b3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"" 2025-08-13T19:58:55.937638998+00:00 stderr F 2025-08-13T19:58:56.012273095+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-operator-lifecycle-manager:packageserver-8464bcc55b-sjnqz:bd556935-a077-45df-ba3f-d42c39326ccd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3a1adfc54f586eb","mac":"a2:c6:80:90:78:65"},{"name":"eth0","mac":"0a:58:0a:d9:00:2b","sandbox":"/var/run/netns/9a40501a-04df-4b97-b0c1-7e22cd5bb9fe"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.43/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.012273095+00:00 stderr F I0813 19:58:56.012117 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-8464bcc55b-sjnqz", UID:"bd556935-a077-45df-ba3f-d42c39326ccd", APIVersion:"v1", ResourceVersion:"27446", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.43/23] from ovn-kubernetes 2025-08-13T19:58:56.142080505+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-kube-scheduler-operator:openshift-kube-scheduler-operator-5d9b995f6b-fcgd7:71af81a9-7d43-49b2-9287-c375900aa905:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"07c341dd7186a1b","mac":"06:3c:d7:23:95:35"},{"name":"eth0","mac":"0a:58:0a:d9:00:0c","sandbox":"/var/run/netns/cde88eb3-039d-4e0e-88a8-df51e2e90a47"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.12/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.142080505+00:00 stderr F I0813 19:58:56.141432 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7", UID:"71af81a9-7d43-49b2-9287-c375900aa905", APIVersion:"v1", ResourceVersion:"27289", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.12/23] from ovn-kubernetes 2025-08-13T19:58:56.157497584+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-multus:network-metrics-daemon-qdfr4:a702c6d2-4dde-4077-ab8c-0f8df804bf7a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2680ced3658686e","mac":"2a:b7:0a:d6:5a:09"},{"name":"eth0","mac":"0a:58:0a:d9:00:03","sandbox":"/var/run/netns/fd584002-b48d-40c7-b6b9-fd1d9206c3f9"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.3/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.157497584+00:00 stderr F I0813 19:58:56.157423 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"network-metrics-daemon-qdfr4", UID:"a702c6d2-4dde-4077-ab8c-0f8df804bf7a", APIVersion:"v1", ResourceVersion:"27375", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.3/23] from ovn-kubernetes 2025-08-13T19:58:56.186366667+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-marketplace:marketplace-operator-8b455464d-f9xdt:3482be94-0cdb-4e2a-889b-e5fac59fdbf5:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"1f2d8ae3277a5b2","mac":"b6:78:0d:36:15:42"},{"name":"eth0","mac":"0a:58:0a:d9:00:0d","sandbox":"/var/run/netns/2f56b827-692e-4c19-ad24-0ca3b949cfda"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.13/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.187137139+00:00 stderr F I0813 19:58:56.187067 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"marketplace-operator-8b455464d-f9xdt", UID:"3482be94-0cdb-4e2a-889b-e5fac59fdbf5", APIVersion:"v1", ResourceVersion:"27286", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.13/23] from ovn-kubernetes 2025-08-13T19:58:56.207969403+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219" Netns:"/var/run/netns/9a40501a-04df-4b97-b0c1-7e22cd5bb9fe" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a2:c6:80:90:78:65\",\"name\":\"3a1adfc54f586eb\"},{\"mac\":\"0a:58:0a:d9:00:2b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9a40501a-04df-4b97-b0c1-7e22cd5bb9fe\"}],\"ips\":[{\"address\":\"10.217.0.43/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.216271910+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61" Netns:"/var/run/netns/cde88eb3-039d-4e0e-88a8-df51e2e90a47" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:3c:d7:23:95:35\",\"name\":\"07c341dd7186a1b\"},{\"mac\":\"0a:58:0a:d9:00:0c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/cde88eb3-039d-4e0e-88a8-df51e2e90a47\"}],\"ips\":[{\"address\":\"10.217.0.12/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.216271910+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5" Netns:"/var/run/netns/fd584002-b48d-40c7-b6b9-fd1d9206c3f9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2a:b7:0a:d6:5a:09\",\"name\":\"2680ced3658686e\"},{\"mac\":\"0a:58:0a:d9:00:03\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/fd584002-b48d-40c7-b6b9-fd1d9206c3f9\"}],\"ips\":[{\"address\":\"10.217.0.3/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.259076390+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2" Netns:"/var/run/netns/2f56b827-692e-4c19-ad24-0ca3b949cfda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b6:78:0d:36:15:42\",\"name\":\"1f2d8ae3277a5b2\"},{\"mac\":\"0a:58:0a:d9:00:0d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2f56b827-692e-4c19-ad24-0ca3b949cfda\"}],\"ips\":[{\"address\":\"10.217.0.13/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.286691187+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request ContainerID:"76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e" Netns:"/var/run/netns/ae621442-01e0-4565-979c-905caed43df8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"" 2025-08-13T19:58:56.297972859+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-marketplace:community-operators-8jhz6:3f4dca86-e6ee-4ec9-8324-86aff960225e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"042b00f26918850","mac":"76:a6:6d:aa:9c:56"},{"name":"eth0","mac":"0a:58:0a:d9:00:30","sandbox":"/var/run/netns/6eba4aaa-159d-430d-9946-5a35c80a373c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.48/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.297972859+00:00 stderr F I0813 19:58:56.297589 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-8jhz6", UID:"3f4dca86-e6ee-4ec9-8324-86aff960225e", APIVersion:"v1", ResourceVersion:"27295", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.48/23] from ovn-kubernetes 2025-08-13T19:58:56.363031603+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-operator-lifecycle-manager:package-server-manager-84d578d794-jw7r2:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"51987a02e71ec40","mac":"f2:7b:ae:d5:12:4e"},{"name":"eth0","mac":"0a:58:0a:d9:00:18","sandbox":"/var/run/netns/c61a4ab0-6f50-41fe-8134-619a499fa7b3"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.24/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.364867235+00:00 stderr F I0813 19:58:56.363236 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"package-server-manager-84d578d794-jw7r2", UID:"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be", APIVersion:"v1", ResourceVersion:"27283", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.24/23] from ovn-kubernetes 2025-08-13T19:58:56.365723000+00:00 stderr P 2025-08-13T19:58:56Z [verbose] 2025-08-13T19:58:56.366205074+00:00 stderr P ADD starting CNI request ContainerID:"489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5" Netns:"/var/run/netns/f496d738-acb8-4c3d-9c78-8782b64f4ab7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"" 2025-08-13T19:58:56.366299326+00:00 stderr F 2025-08-13T19:58:56.397057653+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request ContainerID:"40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748" Netns:"/var/run/netns/57dbd258-9cae-497a-b5eb-fe7fbbfe66b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"" 2025-08-13T19:58:56.397057653+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request 
ContainerID:"88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98" Netns:"/var/run/netns/f961ec0b-d3ba-4ba9-a578-5548310fe298" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"" 2025-08-13T19:58:56.450894008+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933" Netns:"/var/run/netns/6eba4aaa-159d-430d-9946-5a35c80a373c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"76:a6:6d:aa:9c:56\",\"name\":\"042b00f26918850\"},{\"mac\":\"0a:58:0a:d9:00:30\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6eba4aaa-159d-430d-9946-5a35c80a373c\"}],\"ips\":[{\"address\":\"10.217.0.48/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.458953357+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724" Netns:"/var/run/netns/c61a4ab0-6f50-41fe-8134-619a499fa7b3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f2:7b:ae:d5:12:4e\",\"name\":\"51987a02e71ec40\"},{\"mac\":\"0a:58:0a:d9:00:18\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c61a4ab0-6f50-41fe-8134-619a499fa7b3\"}],\"ips\":[{\"address\":\"10.217.0.24/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.477898467+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request ContainerID:"44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2" Netns:"/var/run/netns/9cd13442-7a93-465f-b720-10418ec1fe51" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"" 2025-08-13T19:58:56.521298975+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-kube-apiserver-operator:kube-apiserver-operator-78d54458c4-sc8h7:ed024e5d-8fc2-4c22-803d-73f3c9795f19:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"76a23bcc5261ffe","mac":"1a:bb:ff:c9:e7:24"},{"name":"eth0","mac":"0a:58:0a:d9:00:07","sandbox":"/var/run/netns/ae621442-01e0-4565-979c-905caed43df8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.7/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.521637754+00:00 stderr F I0813 19:58:56.521579 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-78d54458c4-sc8h7", UID:"ed024e5d-8fc2-4c22-803d-73f3c9795f19", APIVersion:"v1", ResourceVersion:"27411", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.7/23] from ovn-kubernetes 2025-08-13T19:58:56.576623112+00:00 stderr P 2025-08-13T19:58:56Z [verbose] 2025-08-13T19:58:56.576744955+00:00 stderr P ADD starting CNI request 
ContainerID:"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Netns:"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"" 2025-08-13T19:58:56.576936031+00:00 stderr F 2025-08-13T19:58:56.586952556+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e" Netns:"/var/run/netns/ae621442-01e0-4565-979c-905caed43df8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1a:bb:ff:c9:e7:24\",\"name\":\"76a23bcc5261ffe\"},{\"mac\":\"0a:58:0a:d9:00:07\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/ae621442-01e0-4565-979c-905caed43df8\"}],\"ips\":[{\"address\":\"10.217.0.7/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.782631784+00:00 stderr P 2025-08-13T19:58:56Z [verbose] 2025-08-13T19:58:56.782699766+00:00 stderr P Add: openshift-controller-manager-operator:openshift-controller-manager-operator-7978d7d7f6-2nt8z:0f394926-bdb9-425c-b36e-264d7fd34550:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"489c96bd95d523f","mac":"a2:cb:e6:94:36:cf"},{"name":"eth0","mac":"0a:58:0a:d9:00:09","sandbox":"/var/run/netns/f496d738-acb8-4c3d-9c78-8782b64f4ab7"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.9/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.782731397+00:00 stderr F 2025-08-13T19:58:56.783256972+00:00 stderr F I0813 
19:58:56.783225 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-7978d7d7f6-2nt8z", UID:"0f394926-bdb9-425c-b36e-264d7fd34550", APIVersion:"v1", ResourceVersion:"27338", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.9/23] from ovn-kubernetes 2025-08-13T19:58:56.841606195+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-marketplace:redhat-operators-f4jkp:4092a9f8-5acc-4932-9e90-ef962eeb301a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"40aef0eb1bbaaf5","mac":"6a:17:c4:5e:dd:74"},{"name":"eth0","mac":"0a:58:0a:d9:00:32","sandbox":"/var/run/netns/57dbd258-9cae-497a-b5eb-fe7fbbfe66b6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.50/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.841606195+00:00 stderr F I0813 19:58:56.840907 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-f4jkp", UID:"4092a9f8-5acc-4932-9e90-ef962eeb301a", APIVersion:"v1", ResourceVersion:"27305", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.50/23] from ovn-kubernetes 2025-08-13T19:58:56.841606195+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5" Netns:"/var/run/netns/f496d738-acb8-4c3d-9c78-8782b64f4ab7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a2:cb:e6:94:36:cf\",\"name\":\"489c96bd95d523f\"},{\"mac\":\"0a:58:0a:d9:00:09\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f496d738-acb8-4c3d-9c78-8782b64f4ab7\"}],\"ips\":[{\"address\":\"10.217.0.9/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.886460464+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-multus:multus-admission-controller-6c7c885997-4hbbc:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"88c60b5e25b2ce0","mac":"d2:40:20:33:89:cb"},{"name":"eth0","mac":"0a:58:0a:d9:00:20","sandbox":"/var/run/netns/f961ec0b-d3ba-4ba9-a578-5548310fe298"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.32/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.886460464+00:00 stderr F I0813 19:58:56.886090 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"multus-admission-controller-6c7c885997-4hbbc", UID:"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0", APIVersion:"v1", ResourceVersion:"27266", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.32/23] from ovn-kubernetes 2025-08-13T19:58:56.890648493+00:00 stderr P 2025-08-13T19:58:56Z [verbose] 2025-08-13T19:58:56.893887085+00:00 stderr P ADD finished CNI request ContainerID:"40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748" Netns:"/var/run/netns/57dbd258-9cae-497a-b5eb-fe7fbbfe66b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6a:17:c4:5e:dd:74\",\"name\":\"40aef0eb1bbaaf5\"},{\"mac\":\"0a:58:0a:d9:00:32\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/57dbd258-9cae-497a-b5eb-fe7fbbfe66b6\"}],\"ips\":[{\"address\":\"10.217.0.50/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:56.893974558+00:00 stderr F 2025-08-13T19:58:56.916500400+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD starting CNI request ContainerID:"2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb" Netns:"/var/run/netns/335912d9-96b7-4d5c-981d-f6f4bc9d5a82" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"" 2025-08-13T19:58:56.917071956+00:00 stderr F 2025-08-13T19:58:56Z [verbose] Add: openshift-kube-storage-version-migrator:migrator-f7c6d88df-q2fnv:cf1a8966-f594-490a-9fbb-eec5bafd13d3:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"44f5ef3518ac6b9","mac":"2a:0a:70:95:cf:7d"},{"name":"eth0","mac":"0a:58:0a:d9:00:19","sandbox":"/var/run/netns/9cd13442-7a93-465f-b720-10418ec1fe51"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.25/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:56.917188900+00:00 stderr F I0813 19:58:56.917099 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-storage-version-migrator", Name:"migrator-f7c6d88df-q2fnv", UID:"cf1a8966-f594-490a-9fbb-eec5bafd13d3", APIVersion:"v1", ResourceVersion:"27336", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.25/23] from ovn-kubernetes 2025-08-13T19:58:57.000396391+00:00 stderr F 2025-08-13T19:58:56Z [verbose] ADD finished CNI request ContainerID:"88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98" 
Netns:"/var/run/netns/f961ec0b-d3ba-4ba9-a578-5548310fe298" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:40:20:33:89:cb\",\"name\":\"88c60b5e25b2ce0\"},{\"mac\":\"0a:58:0a:d9:00:20\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f961ec0b-d3ba-4ba9-a578-5548310fe298\"}],\"ips\":[{\"address\":\"10.217.0.32/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.004141538+00:00 stderr F 2025-08-13T19:58:57Z [verbose] Add: openshift-service-ca:service-ca-666f99b6f-vlbxv:378552fd-5e53-4882-87ff-95f3d9198861:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"fbf310c9137d286","mac":"b2:ca:c3:72:ee:8c"},{"name":"eth0","mac":"0a:58:0a:d9:00:1a","sandbox":"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.26/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:57.004686544+00:00 stderr F I0813 19:58:57.004388 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca", Name:"service-ca-666f99b6f-vlbxv", UID:"378552fd-5e53-4882-87ff-95f3d9198861", APIVersion:"v1", ResourceVersion:"27387", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.26/23] from ovn-kubernetes 2025-08-13T19:58:57.056690216+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD finished CNI request ContainerID:"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Netns:"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b2:ca:c3:72:ee:8c\",\"name\":\"fbf310c9137d286\"},{\"mac\":\"0a:58:0a:d9:00:1a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1\"}],\"ips\":[{\"address\":\"10.217.0.26/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.057946752+00:00 stderr P 2025-08-13T19:58:57Z [verbose] 2025-08-13T19:58:57.058000283+00:00 stderr P ADD finished CNI request ContainerID:"44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2" Netns:"/var/run/netns/9cd13442-7a93-465f-b720-10418ec1fe51" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2a:0a:70:95:cf:7d\",\"name\":\"44f5ef3518ac6b9\"},{\"mac\":\"0a:58:0a:d9:00:19\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9cd13442-7a93-465f-b720-10418ec1fe51\"}],\"ips\":[{\"address\":\"10.217.0.25/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.058026644+00:00 stderr F 2025-08-13T19:58:57.105462326+00:00 stderr P 2025-08-13T19:58:57Z [verbose] 2025-08-13T19:58:57.105524388+00:00 stderr P ADD starting CNI request ContainerID:"fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239" Netns:"/var/run/netns/507b0851-b062-4428-b5aa-59889c89d7f6" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"" 2025-08-13T19:58:57.105548729+00:00 stderr F 2025-08-13T19:58:57.244323695+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821" Netns:"/var/run/netns/74cd27b4-e9e0-40d9-8969-943a3c87dfa4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"" 2025-08-13T19:58:57.294940708+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f" Netns:"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"" 2025-08-13T19:58:57.333358203+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc" Netns:"/var/run/netns/596a56ad-fa96-4824-9885-0b954a96d079" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns;K8S_POD_NAME=dns-default-gbw49;K8S_POD_INFRA_CONTAINER_ID=63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc;K8S_POD_UID=13045510-8717-4a71-ade4-be95a76440a7" Path:"" 2025-08-13T19:58:57.427415884+00:00 stderr F 2025-08-13T19:58:57Z [verbose] Add: 
openshift-cluster-samples-operator:cluster-samples-operator-bc474d5d6-wshwg:f728c15e-d8de-4a9a-a3ea-fdcead95cb91:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2c45b735c45341a","mac":"ca:10:40:2a:6a:05"},{"name":"eth0","mac":"0a:58:0a:d9:00:2e","sandbox":"/var/run/netns/335912d9-96b7-4d5c-981d-f6f4bc9d5a82"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.46/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:57.428259008+00:00 stderr F I0813 19:58:57.428100 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-cluster-samples-operator", Name:"cluster-samples-operator-bc474d5d6-wshwg", UID:"f728c15e-d8de-4a9a-a3ea-fdcead95cb91", APIVersion:"v1", ResourceVersion:"27350", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.46/23] from ovn-kubernetes 2025-08-13T19:58:57.480594800+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892" Netns:"/var/run/netns/899f04cb-d5a9-4d0d-ab5f-db55f116a94c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"" 2025-08-13T19:58:57.561150286+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD finished CNI request ContainerID:"2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb" Netns:"/var/run/netns/335912d9-96b7-4d5c-981d-f6f4bc9d5a82" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ca:10:40:2a:6a:05\",\"name\":\"2c45b735c45341a\"},{\"mac\":\"0a:58:0a:d9:00:2e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/335912d9-96b7-4d5c-981d-f6f4bc9d5a82\"}],\"ips\":[{\"address\":\"10.217.0.46/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.616173514+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722" Netns:"/var/run/netns/655ccbbf-76e3-4129-bcd4-1cb267a341b0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"" 2025-08-13T19:58:57.690381570+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d" Netns:"/var/run/netns/ae557160-e1b1-4a90-8f0b-6b65225ee83e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"" 2025-08-13T19:58:57.869494565+00:00 stderr P 2025-08-13T19:58:57Z [verbose] 2025-08-13T19:58:57.869559487+00:00 stderr P Add: openshift-dns-operator:dns-operator-75f687757b-nz2xb:10603adc-d495-423c-9459-4caa405960bb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"20a42c53825c918","mac":"4e:5c:e6:eb:13:aa"},{"name":"eth0","mac":"0a:58:0a:d9:00:12","sandbox":"/var/run/netns/74cd27b4-e9e0-40d9-8969-943a3c87dfa4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.18/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:57.869596958+00:00 stderr F 2025-08-13T19:58:57.870607277+00:00 stderr F I0813 
19:58:57.870464 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns-operator", Name:"dns-operator-75f687757b-nz2xb", UID:"10603adc-d495-423c-9459-4caa405960bb", APIVersion:"v1", ResourceVersion:"27299", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.18/23] from ovn-kubernetes 2025-08-13T19:58:57.947280493+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD finished CNI request ContainerID:"20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821" Netns:"/var/run/netns/74cd27b4-e9e0-40d9-8969-943a3c87dfa4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4e:5c:e6:eb:13:aa\",\"name\":\"20a42c53825c918\"},{\"mac\":\"0a:58:0a:d9:00:12\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/74cd27b4-e9e0-40d9-8969-943a3c87dfa4\"}],\"ips\":[{\"address\":\"10.217.0.18/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:57.983576217+00:00 stderr F 2025-08-13T19:58:57Z [verbose] ADD starting CNI request ContainerID:"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141" Netns:"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"" 2025-08-13T19:58:58.085763880+00:00 stderr P 2025-08-13T19:58:58Z [verbose] 2025-08-13T19:58:58.085948135+00:00 stderr P Add: openshift-image-registry:cluster-image-registry-operator-7769bd8d7d-q5cvv:b54e8941-2fc4-432a-9e51-39684df9089e:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"fe503da15decef9","mac":"62:a9:f3:84:d8:42"},{"name":"eth0","mac":"0a:58:0a:d9:00:16","sandbox":"/var/run/netns/507b0851-b062-4428-b5aa-59889c89d7f6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.22/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:58.085979036+00:00 stderr F 2025-08-13T19:58:58.087337515+00:00 stderr F I0813 19:58:58.086439 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator-7769bd8d7d-q5cvv", UID:"b54e8941-2fc4-432a-9e51-39684df9089e", APIVersion:"v1", ResourceVersion:"27428", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.22/23] from ovn-kubernetes 2025-08-13T19:58:58.167669555+00:00 stderr P 2025-08-13T19:58:58Z [verbose] 2025-08-13T19:58:58.168224991+00:00 stderr P ADD finished CNI request ContainerID:"fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239" Netns:"/var/run/netns/507b0851-b062-4428-b5aa-59889c89d7f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"62:a9:f3:84:d8:42\",\"name\":\"fe503da15decef9\"},{\"mac\":\"0a:58:0a:d9:00:16\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/507b0851-b062-4428-b5aa-59889c89d7f6\"}],\"ips\":[{\"address\":\"10.217.0.22/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:58.168290953+00:00 stderr F 2025-08-13T19:58:58.213958704+00:00 stderr F 2025-08-13T19:58:58Z [verbose] Add: openshift-console:console-84fccc7b6-mkncc:b233d916-bfe3-4ae5-ae39-6b574d1aa05e:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"e6ed8c1e93f8bc4","mac":"aa:e8:ef:66:06:69"},{"name":"eth0","mac":"0a:58:0a:d9:00:1c","sandbox":"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.28/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:58.216982181+00:00 stderr F I0813 19:58:58.216294 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"console-84fccc7b6-mkncc", UID:"b233d916-bfe3-4ae5-ae39-6b574d1aa05e", APIVersion:"v1", ResourceVersion:"27421", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.28/23] from ovn-kubernetes 2025-08-13T19:58:58.295951372+00:00 stderr F 2025-08-13T19:58:58Z [verbose] ADD finished CNI request ContainerID:"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f" Netns:"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:e8:ef:66:06:69\",\"name\":\"e6ed8c1e93f8bc4\"},{\"mac\":\"0a:58:0a:d9:00:1c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d\"}],\"ips\":[{\"address\":\"10.217.0.28/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:58.338013061+00:00 stderr F 2025-08-13T19:58:58Z [verbose] ADD starting CNI request ContainerID:"2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d" Netns:"/var/run/netns/5461ae60-bf28-4153-9fe9-e79ce8ef8bfa" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"" 2025-08-13T19:58:58.473564435+00:00 stderr F 2025-08-13T19:58:58Z [verbose] ADD starting CNI request ContainerID:"7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3" Netns:"/var/run/netns/22af814e-c9c5-4a38-8e71-91e0ef541bf4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"" 2025-08-13T19:58:58.634314047+00:00 stderr F 2025-08-13T19:58:58Z [verbose] ADD starting CNI request ContainerID:"d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71" Netns:"/var/run/netns/b894c68f-ab5f-4e66-b40a-eb0462970fb7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"" 2025-08-13T19:58:59.005862598+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-service-ca-operator:service-ca-operator-546b4f8984-pwccz:6d67253e-2acd-4bc1-8185-793587da4f17:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"282af480c29eba8","mac":"5a:fa:7e:60:ae:8e"},{"name":"eth0","mac":"0a:58:0a:d9:00:0a","sandbox":"/var/run/netns/655ccbbf-76e3-4129-bcd4-1cb267a341b0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.10/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.007142414+00:00 stderr F I0813 19:58:59.007096 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-546b4f8984-pwccz", UID:"6d67253e-2acd-4bc1-8185-793587da4f17", APIVersion:"v1", ResourceVersion:"27329", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.10/23] from ovn-kubernetes 2025-08-13T19:58:59.143136761+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-authentication:oauth-openshift-765b47f944-n2lhl:13ad7555-5f28-4555-a563-892713a8433a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"8266ab3300c992b","mac":"16:7d:9f:58:7f:21"},{"name":"eth0","mac":"0a:58:0a:d9:00:1e","sandbox":"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.30/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.143136761+00:00 stderr F I0813 19:58:59.119382 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication", Name:"oauth-openshift-765b47f944-n2lhl", UID:"13ad7555-5f28-4555-a563-892713a8433a", APIVersion:"v1", ResourceVersion:"27326", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.30/23] from ovn-kubernetes 2025-08-13T19:58:59.184869251+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722" Netns:"/var/run/netns/655ccbbf-76e3-4129-bcd4-1cb267a341b0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:fa:7e:60:ae:8e\",\"name\":\"282af480c29eba8\"},{\"mac\":\"0a:58:0a:d9:00:0a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/655ccbbf-76e3-4129-bcd4-1cb267a341b0\"}],\"ips\":[{\"address\":\"10.217.0.10/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.197442879+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141" Netns:"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"16:7d:9f:58:7f:21\",\"name\":\"8266ab3300c992b\"},{\"mac\":\"0a:58:0a:d9:00:1e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d\"}],\"ips\":[{\"address\":\"10.217.0.30/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.216022159+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238" Netns:"/var/run/netns/4f246d4c-404d-4986-b5a6-5c2f06b87c6a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"" 2025-08-13T19:58:59.216075330+00:00 stderr P 2025-08-13T19:58:59Z [verbose] 2025-08-13T19:58:59.216086340+00:00 stderr F ADD starting CNI request ContainerID:"5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a" Netns:"/var/run/netns/6c622cad-6adf-4b6c-9819-f3b4b84dbf1c" 
IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"" 2025-08-13T19:58:59.226760295+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-etcd-operator:etcd-operator-768d5b5d86-722mg:0b5c38ff-1fa8-4219-994d-15776acd4a4d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2cacd5e0efb1ce8","mac":"ea:0d:4c:ae:13:d8"},{"name":"eth0","mac":"0a:58:0a:d9:00:08","sandbox":"/var/run/netns/899f04cb-d5a9-4d0d-ab5f-db55f116a94c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.8/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.226760295+00:00 stderr F I0813 19:58:59.219586 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-etcd-operator", Name:"etcd-operator-768d5b5d86-722mg", UID:"0b5c38ff-1fa8-4219-994d-15776acd4a4d", APIVersion:"v1", ResourceVersion:"27425", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.8/23] from ovn-kubernetes 2025-08-13T19:58:59.226760295+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-kube-controller-manager-operator:kube-controller-manager-operator-6f6cb54958-rbddb:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2aed5bade7f294b","mac":"4e:5a:b9:b2:bb:00"},{"name":"eth0","mac":"0a:58:0a:d9:00:0f","sandbox":"/var/run/netns/5461ae60-bf28-4153-9fe9-e79ce8ef8bfa"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.15/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.226951730+00:00 stderr F I0813 19:58:59.226928 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-6f6cb54958-rbddb", UID:"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf", 
APIVersion:"v1", ResourceVersion:"27367", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.15/23] from ovn-kubernetes 2025-08-13T19:58:59.238044236+00:00 stderr F 2025-08-13T19:58:59Z [verbose] Add: openshift-dns:dns-default-gbw49:13045510-8717-4a71-ade4-be95a76440a7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"63f14f64c728127","mac":"9a:23:de:02:96:5e"},{"name":"eth0","mac":"0a:58:0a:d9:00:1f","sandbox":"/var/run/netns/596a56ad-fa96-4824-9885-0b954a96d079"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.31/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:58:59.238044236+00:00 stderr F I0813 19:58:59.231537 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns", Name:"dns-default-gbw49", UID:"13045510-8717-4a71-ade4-be95a76440a7", APIVersion:"v1", ResourceVersion:"27378", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.31/23] from ovn-kubernetes 2025-08-13T19:58:59.405539971+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a" Netns:"/var/run/netns/07a3c410-01ff-48da-98d3-be907e6beb6c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"" 2025-08-13T19:58:59.419100297+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a" Netns:"/var/run/netns/3b25dfa7-0dbe-48ac-9bb0-4559de6430f6" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"" 2025-08-13T19:58:59.448707101+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc" Netns:"/var/run/netns/596a56ad-fa96-4824-9885-0b954a96d079" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns;K8S_POD_NAME=dns-default-gbw49;K8S_POD_INFRA_CONTAINER_ID=63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc;K8S_POD_UID=13045510-8717-4a71-ade4-be95a76440a7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:23:de:02:96:5e\",\"name\":\"63f14f64c728127\"},{\"mac\":\"0a:58:0a:d9:00:1f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/596a56ad-fa96-4824-9885-0b954a96d079\"}],\"ips\":[{\"address\":\"10.217.0.31/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.448707101+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d" Netns:"/var/run/netns/5461ae60-bf28-4153-9fe9-e79ce8ef8bfa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4e:5a:b9:b2:bb:00\",\"name\":\"2aed5bade7f294b\"},{\"mac\":\"0a:58:0a:d9:00:0f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5461ae60-bf28-4153-9fe9-e79ce8ef8bfa\"}],\"ips\":[{\"address\":\"10.217.0.15/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 
2025-08-13T19:58:59.448707101+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD finished CNI request ContainerID:"2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892" Netns:"/var/run/netns/899f04cb-d5a9-4d0d-ab5f-db55f116a94c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ea:0d:4c:ae:13:d8\",\"name\":\"2cacd5e0efb1ce8\"},{\"mac\":\"0a:58:0a:d9:00:08\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/899f04cb-d5a9-4d0d-ab5f-db55f116a94c\"}],\"ips\":[{\"address\":\"10.217.0.8/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:58:59.499960322+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f" Netns:"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"" 2025-08-13T19:58:59.600975011+00:00 stderr P 2025-08-13T19:58:59Z [verbose] 2025-08-13T19:58:59.601505556+00:00 stderr P ADD starting CNI request ContainerID:"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7" Netns:"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-rmwfn;K8S_POD_INFRA_CONTAINER_ID=9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7;K8S_POD_UID=9ad279b4-d9dc-42a8-a1c8-a002bd063482" Path:"" 2025-08-13T19:58:59.601565358+00:00 stderr F 
2025-08-13T19:58:59.631759808+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf" Netns:"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5c4dbb8899-tchz5;K8S_POD_INFRA_CONTAINER_ID=893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf;K8S_POD_UID=af6b67a3-a2bd-4051-9adc-c208a5a65d79" Path:"" 2025-08-13T19:58:59.647303371+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7" Netns:"/var/run/netns/816c9fbd-4e9d-4e58-9650-775be1fefeff" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"" 2025-08-13T19:58:59.805428699+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3" Netns:"/var/run/netns/3dc43bc9-2154-4ea7-a28e-e367c75aa76c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"" 2025-08-13T19:58:59.844184134+00:00 stderr F 2025-08-13T19:58:59Z [verbose] ADD starting CNI request ContainerID:"717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5" Netns:"/var/run/netns/559b0d49-ba17-4e69-90ab-366e59d1166b" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"" 2025-08-13T19:59:00.168377535+00:00 stderr P 2025-08-13T19:59:00Z [verbose] 2025-08-13T19:59:00.170681861+00:00 stderr P ADD starting CNI request ContainerID:"906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31" Netns:"/var/run/netns/40790f09-e8eb-4d30-a296-031a41bd8f5d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"" 2025-08-13T19:59:00.170862256+00:00 stderr F 2025-08-13T19:59:00.171981088+00:00 stderr F 2025-08-13T19:59:00Z [verbose] ADD starting CNI request ContainerID:"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Netns:"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-8-crc;K8S_POD_INFRA_CONTAINER_ID=d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877;K8S_POD_UID=72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Path:"" 2025-08-13T19:59:00.202354764+00:00 stderr F 2025-08-13T19:59:00Z [verbose] ADD starting CNI request ContainerID:"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc" Netns:"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"" 2025-08-13T19:59:00.324034272+00:00 stderr F 
2025-08-13T19:59:00Z [verbose] ADD starting CNI request ContainerID:"97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7" Netns:"/var/run/netns/98329f47-b240-4eb5-89bf-192e571daa4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"" 2025-08-13T19:59:00.808159642+00:00 stderr P 2025-08-13T19:59:00Z [verbose] 2025-08-13T19:59:00.808287966+00:00 stderr P ADD starting CNI request ContainerID:"22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed" Netns:"/var/run/netns/e6ba88c5-6730-4a5b-b353-7bbeb4049738" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"" 2025-08-13T19:59:00.808756179+00:00 stderr F 2025-08-13T19:59:01.123363757+00:00 stderr F 2025-08-13T19:59:01Z [verbose] ADD starting CNI request ContainerID:"ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621" Netns:"/var/run/netns/1c78c2d9-426c-4eca-8f52-b5c0eb14a232" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"" 2025-08-13T19:59:03.135317658+00:00 stderr F 2025-08-13T19:59:03Z [verbose] ADD starting CNI request ContainerID:"a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007" Netns:"/var/run/netns/c156238a-d5d4-4feb-883a-0fefaa4b801f" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"" 2025-08-13T19:59:03.136943044+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.137268004+00:00 stderr P Add: openshift-console:downloads-65476884b9-9wcvx:6268b7fe-8910-4505-b404-6f1df638105c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"aab926f26907ff6","mac":"ae:8f:58:c5:04:5d"},{"name":"eth0","mac":"0a:58:0a:d9:00:42","sandbox":"/var/run/netns/ae557160-e1b1-4a90-8f0b-6b65225ee83e"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.66/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.137298555+00:00 stderr F 2025-08-13T19:59:03.181010350+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-operator-lifecycle-manager:olm-operator-6d8474f75f-x54mh:c085412c-b875-46c9-ae3e-e6b0d8067091:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7c70e17033c6821","mac":"a6:4b:a8:66:58:77"},{"name":"eth0","mac":"0a:58:0a:d9:00:0e","sandbox":"/var/run/netns/22af814e-c9c5-4a38-8e71-91e0ef541bf4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.14/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.181010350+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-marketplace:redhat-marketplace-8s8pc:c782cf62-a827-4677-b3c2-6f82c5f09cbb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"10cfef5f94c814c","mac":"fa:06:81:ef:ea:29"},{"name":"eth0","mac":"0a:58:0a:d9:00:33","sandbox":"/var/run/netns/3b25dfa7-0dbe-48ac-9bb0-4559de6430f6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.51/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.181010350+00:00 stderr P 2025-08-13T19:59:03Z [verbose] Add: 
openshift-config-operator:openshift-config-operator-77658b5b66-dq5sc:530553aa-0a1d-423e-8a22-f5eb4bdbb883:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d3db60615905e44","mac":"d6:ab:ce:8e:34:62"},{"name":"eth0","mac":"0a:58:0a:d9:00:17","sandbox":"/var/run/netns/b894c68f-ab5f-4e66-b40a-eb0462970fb7"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.23/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.301128335+00:00 stderr F 2025-08-13T19:59:03.301128335+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-network-diagnostics:network-check-target-v54bt:34a48baf-1bee-4921-8bb2-9b7320e76f79:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"5aa1911bfbbdddf","mac":"ea:f0:97:ca:28:ed"},{"name":"eth0","mac":"0a:58:0a:d9:00:04","sandbox":"/var/run/netns/6c622cad-6adf-4b6c-9819-f3b4b84dbf1c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.4/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.301128335+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-5c4dbb8899-tchz5:af6b67a3-a2bd-4051-9adc-c208a5a65d79:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"893b4f9b5ed2707","mac":"d2:c7:9d:0a:38:17"},{"name":"eth0","mac":"0a:58:0a:d9:00:11","sandbox":"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.17/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.181666 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"downloads-65476884b9-9wcvx", UID:"6268b7fe-8910-4505-b404-6f1df638105c", APIVersion:"v1", ResourceVersion:"27396", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.66/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.280298 23104 event.go:364] 
Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"olm-operator-6d8474f75f-x54mh", UID:"c085412c-b875-46c9-ae3e-e6b0d8067091", APIVersion:"v1", ResourceVersion:"27257", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.14/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.280323 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-8s8pc", UID:"c782cf62-a827-4677-b3c2-6f82c5f09cbb", APIVersion:"v1", ResourceVersion:"27308", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.51/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.288692 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-config-operator", Name:"openshift-config-operator-77658b5b66-dq5sc", UID:"530553aa-0a1d-423e-8a22-f5eb4bdbb883", APIVersion:"v1", ResourceVersion:"27263", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.23/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.288731 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-target-v54bt", UID:"34a48baf-1bee-4921-8bb2-9b7320e76f79", APIVersion:"v1", ResourceVersion:"27275", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.4/23] from ovn-kubernetes 2025-08-13T19:59:03.301128335+00:00 stderr F I0813 19:59:03.288748 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-5c4dbb8899-tchz5", UID:"af6b67a3-a2bd-4051-9adc-c208a5a65d79", APIVersion:"v1", ResourceVersion:"27278", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.17/23] from ovn-kubernetes 2025-08-13T19:59:03.321165456+00:00 stderr F 2025-08-13T19:59:03Z [verbose] ADD finished CNI request 
ContainerID:"aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d" Netns:"/var/run/netns/ae557160-e1b1-4a90-8f0b-6b65225ee83e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ae:8f:58:c5:04:5d\",\"name\":\"aab926f26907ff6\"},{\"mac\":\"0a:58:0a:d9:00:42\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/ae557160-e1b1-4a90-8f0b-6b65225ee83e\"}],\"ips\":[{\"address\":\"10.217.0.66/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.321165456+00:00 stderr F 2025-08-13T19:59:03Z [verbose] ADD finished CNI request ContainerID:"7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3" Netns:"/var/run/netns/22af814e-c9c5-4a38-8e71-91e0ef541bf4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a6:4b:a8:66:58:77\",\"name\":\"7c70e17033c6821\"},{\"mac\":\"0a:58:0a:d9:00:0e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/22af814e-c9c5-4a38-8e71-91e0ef541bf4\"}],\"ips\":[{\"address\":\"10.217.0.14/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.672140931+00:00 stderr F 2025-08-13T19:59:03Z [verbose] ADD finished CNI request ContainerID:"10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a" Netns:"/var/run/netns/3b25dfa7-0dbe-48ac-9bb0-4559de6430f6" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"fa:06:81:ef:ea:29\",\"name\":\"10cfef5f94c814c\"},{\"mac\":\"0a:58:0a:d9:00:33\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3b25dfa7-0dbe-48ac-9bb0-4559de6430f6\"}],\"ips\":[{\"address\":\"10.217.0.51/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.684501513+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.685468030+00:00 stderr P Add: openshift-oauth-apiserver:apiserver-69c565c9b6-vbdpd:5bacb25d-97b6-4491-8fb4-99feae1d802a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b27ef0e5311849c","mac":"5a:be:36:6c:e5:ff"},{"name":"eth0","mac":"0a:58:0a:d9:00:27","sandbox":"/var/run/netns/816c9fbd-4e9d-4e58-9650-775be1fefeff"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.39/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.685510352+00:00 stderr F 2025-08-13T19:59:03.685942824+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.685981775+00:00 stderr P Add: openshift-controller-manager:controller-manager-6ff78978b4-q4vv8:87df87f4-ba66-4137-8e41-1fa632ad4207:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4916f2a17d27bbf","mac":"4e:22:5c:05:c0:19"},{"name":"eth0","mac":"0a:58:0a:d9:00:24","sandbox":"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.36/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.686051727+00:00 stderr F 2025-08-13T19:59:03.686445048+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.686494450+00:00 stderr P Add: 
openshift-apiserver-operator:openshift-apiserver-operator-7c88c4c865-kn67m:43ae1c37-047b-4ee2-9fee-41e337dd4ac8:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"526dc34c7f02246","mac":"1e:22:ca:d9:c0:4a"},{"name":"eth0","mac":"0a:58:0a:d9:00:06","sandbox":"/var/run/netns/07a3c410-01ff-48da-98d3-be907e6beb6c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.6/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.686523791+00:00 stderr F 2025-08-13T19:59:03.701115536+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.701925940+00:00 stderr F Add: openshift-authentication-operator:authentication-operator-7cc7ff75d5-g9qv8:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"906e45421a720cb","mac":"26:7a:19:ff:0e:c3"},{"name":"eth0","mac":"0a:58:0a:d9:00:13","sandbox":"/var/run/netns/40790f09-e8eb-4d30-a296-031a41bd8f5d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.19/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.702007492+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.702074924+00:00 stderr P ADD finished CNI request ContainerID:"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf" Netns:"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5c4dbb8899-tchz5;K8S_POD_INFRA_CONTAINER_ID=893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf;K8S_POD_UID=af6b67a3-a2bd-4051-9adc-c208a5a65d79" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:c7:9d:0a:38:17\",\"name\":\"893b4f9b5ed2707\"},{\"mac\":\"0a:58:0a:d9:00:11\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a\"}],\"ips\":[{\"address\":\"10.217.0.17/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 
2025-08-13T19:59:03.702175467+00:00 stderr F 2025-08-13T19:59:03.702379302+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.702452915+00:00 stderr P Add: openshift-marketplace:redhat-marketplace-rmwfn:9ad279b4-d9dc-42a8-a1c8-a002bd063482:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"9218677c9aa0f21","mac":"3e:06:24:c1:84:12"},{"name":"eth0","mac":"0a:58:0a:d9:00:36","sandbox":"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.54/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.702478165+00:00 stderr F 2025-08-13T19:59:03.702995640+00:00 stderr F I0813 19:59:03.702962 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-oauth-apiserver", Name:"apiserver-69c565c9b6-vbdpd", UID:"5bacb25d-97b6-4491-8fb4-99feae1d802a", APIVersion:"v1", ResourceVersion:"27346", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.39/23] from ovn-kubernetes 2025-08-13T19:59:03.703055082+00:00 stderr F I0813 19:59:03.703034 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-6ff78978b4-q4vv8", UID:"87df87f4-ba66-4137-8e41-1fa632ad4207", APIVersion:"v1", ResourceVersion:"27269", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.36/23] from ovn-kubernetes 2025-08-13T19:59:03.703092063+00:00 stderr F I0813 19:59:03.703076 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-7c88c4c865-kn67m", UID:"43ae1c37-047b-4ee2-9fee-41e337dd4ac8", APIVersion:"v1", ResourceVersion:"27332", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.6/23] from ovn-kubernetes 2025-08-13T19:59:03.703125654+00:00 stderr F I0813 19:59:03.703110 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication-operator", 
Name:"authentication-operator-7cc7ff75d5-g9qv8", UID:"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e", APIVersion:"v1", ResourceVersion:"27314", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.19/23] from ovn-kubernetes 2025-08-13T19:59:03.703158455+00:00 stderr F I0813 19:59:03.703143 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-rmwfn", UID:"9ad279b4-d9dc-42a8-a1c8-a002bd063482", APIVersion:"v1", ResourceVersion:"27389", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.54/23] from ovn-kubernetes 2025-08-13T19:59:03.732219983+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.732304806+00:00 stderr P Add: openshift-ingress-canary:ingress-canary-2vhcn:0b5d722a-1123-4935-9740-52a08d018bc9:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4146ac88f77df20","mac":"7a:30:67:39:88:7d"},{"name":"eth0","mac":"0a:58:0a:d9:00:47","sandbox":"/var/run/netns/3dc43bc9-2154-4ea7-a28e-e367c75aa76c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.71/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.732330756+00:00 stderr F 2025-08-13T19:59:03.733101848+00:00 stderr F I0813 19:59:03.733068 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-canary", Name:"ingress-canary-2vhcn", UID:"0b5d722a-1123-4935-9740-52a08d018bc9", APIVersion:"v1", ResourceVersion:"27357", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.71/23] from ovn-kubernetes 2025-08-13T19:59:03.880295054+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.881361964+00:00 stderr P ADD finished CNI request ContainerID:"5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a" Netns:"/var/run/netns/6c622cad-6adf-4b6c-9819-f3b4b84dbf1c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ea:f0:97:ca:28:ed\",\"name\":\"5aa1911bfbbdddf\"},{\"mac\":\"0a:58:0a:d9:00:04\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6c622cad-6adf-4b6c-9819-f3b4b84dbf1c\"}],\"ips\":[{\"address\":\"10.217.0.4/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.881411256+00:00 stderr F 2025-08-13T19:59:03.936494066+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.936603749+00:00 stderr P ADD finished CNI request ContainerID:"b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7" Netns:"/var/run/netns/816c9fbd-4e9d-4e58-9650-775be1fefeff" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:be:36:6c:e5:ff\",\"name\":\"b27ef0e5311849c\"},{\"mac\":\"0a:58:0a:d9:00:27\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/816c9fbd-4e9d-4e58-9650-775be1fefeff\"}],\"ips\":[{\"address\":\"10.217.0.39/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.936678871+00:00 stderr F 2025-08-13T19:59:03.937121224+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.937362851+00:00 stderr P ADD finished CNI request ContainerID:"906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31" Netns:"/var/run/netns/40790f09-e8eb-4d30-a296-031a41bd8f5d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"26:7a:19:ff:0e:c3\",\"name\":\"906e45421a720cb\"},{\"mac\":\"0a:58:0a:d9:00:13\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/40790f09-e8eb-4d30-a296-031a41bd8f5d\"}],\"ips\":[{\"address\":\"10.217.0.19/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.937438253+00:00 stderr F 2025-08-13T19:59:03.937627508+00:00 stderr P 2025-08-13T19:59:03Z [verbose] 2025-08-13T19:59:03.937896776+00:00 stderr P ADD finished CNI request ContainerID:"4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3" Netns:"/var/run/netns/3dc43bc9-2154-4ea7-a28e-e367c75aa76c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7a:30:67:39:88:7d\",\"name\":\"4146ac88f77df20\"},{\"mac\":\"0a:58:0a:d9:00:47\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3dc43bc9-2154-4ea7-a28e-e367c75aa76c\"}],\"ips\":[{\"address\":\"10.217.0.71/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:03.938123652+00:00 stderr F 2025-08-13T19:59:03.953421098+00:00 stderr F 2025-08-13T19:59:03Z [verbose] Add: openshift-console-operator:console-operator-5dbbc74dc9-cp5cd:e9127708-ccfd-4891-8a3a-f0cacb77e0f4:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"0e119602de1750a","mac":"5a:68:88:74:1a:1a"},{"name":"eth0","mac":"0a:58:0a:d9:00:3e","sandbox":"/var/run/netns/4f246d4c-404d-4986-b5a6-5c2f06b87c6a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.62/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:03.953911272+00:00 stderr F I0813 19:59:03.953591 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-5dbbc74dc9-cp5cd", UID:"e9127708-ccfd-4891-8a3a-f0cacb77e0f4", APIVersion:"v1", ResourceVersion:"27354", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.62/23] from ovn-kubernetes 2025-08-13T19:59:04.073471741+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-apiserver:apiserver-67cbf64bc9-mtx25:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"961449f5e5e8534","mac":"06:02:94:00:bf:62"},{"name":"eth0","mac":"0a:58:0a:d9:00:25","sandbox":"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.37/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.075157649+00:00 stderr F I0813 19:59:04.075076 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-67cbf64bc9-mtx25", UID:"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab", APIVersion:"v1", ResourceVersion:"27361", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.37/23] from ovn-kubernetes 2025-08-13T19:59:04.183987791+00:00 stderr F 2025-08-13T19:59:04Z [verbose] ADD finished CNI request ContainerID:"526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a" Netns:"/var/run/netns/07a3c410-01ff-48da-98d3-be907e6beb6c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1e:22:ca:d9:c0:4a\",\"name\":\"526dc34c7f02246\"},{\"mac\":\"0a:58:0a:d9:00:06\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/07a3c410-01ff-48da-98d3-be907e6beb6c\"}],\"ips\":[{\"address\":\"10.217.0.6/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:04.521248175+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-kube-controller-manager:revision-pruner-8-crc:72854c1e-5ae2-4ed6-9e50-ff3bccde2635:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d84dd6581e40bee","mac":"f2:91:66:93:67:34"},{"name":"eth0","mac":"0a:58:0a:d9:00:37","sandbox":"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.55/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.521441280+00:00 stderr F I0813 19:59:04.521386 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"revision-pruner-8-crc", UID:"72854c1e-5ae2-4ed6-9e50-ff3bccde2635", APIVersion:"v1", ResourceVersion:"27298", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.55/23] from ovn-kubernetes 2025-08-13T19:59:04.538269750+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator-686c6c748c-qbnnr:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"717e351e369b4a5","mac":"52:85:10:ff:fa:5e"},{"name":"eth0","mac":"0a:58:0a:d9:00:10","sandbox":"/var/run/netns/559b0d49-ba17-4e69-90ab-366e59d1166b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.16/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.538555898+00:00 stderr F I0813 19:59:04.538471 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator-686c6c748c-qbnnr", UID:"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7", APIVersion:"v1", ResourceVersion:"27371", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.16/23] from ovn-kubernetes 2025-08-13T19:59:04.658192458+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-network-diagnostics:network-check-source-5c5478f8c-vqvt7:d0f40333-c860-4c04-8058-a0bf572dcf12:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"97418fd7ce5644b","mac":"6e:43:34:af:3f:cd"},{"name":"eth0","mac":"0a:58:0a:d9:00:40","sandbox":"/var/run/netns/98329f47-b240-4eb5-89bf-192e571daa4a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.64/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.658192458+00:00 stderr F I0813 19:59:04.653636 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-source-5c5478f8c-vqvt7", UID:"d0f40333-c860-4c04-8058-a0bf572dcf12", APIVersion:"v1", ResourceVersion:"27272", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.64/23] from ovn-kubernetes 2025-08-13T19:59:04.667030550+00:00 stderr P 2025-08-13T19:59:04Z [verbose] 2025-08-13T19:59:04.667081782+00:00 stderr P Add: openshift-ingress-operator:ingress-operator-7d46d5bb6d-rrg6t:7d51f445-054a-4e4f-a67b-a828f5a32511:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"22d48c9fe60d97e","mac":"c6:21:89:7b:ef:07"},{"name":"eth0","mac":"0a:58:0a:d9:00:2d","sandbox":"/var/run/netns/e6ba88c5-6730-4a5b-b353-7bbeb4049738"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.45/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.667113393+00:00 stderr F 2025-08-13T19:59:04.667529494+00:00 stderr F I0813 19:59:04.667467 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-operator", Name:"ingress-operator-7d46d5bb6d-rrg6t", UID:"7d51f445-054a-4e4f-a67b-a828f5a32511", APIVersion:"v1", ResourceVersion:"27414", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.45/23] from ovn-kubernetes 2025-08-13T19:59:04.774100162+00:00 stderr F 2025-08-13T19:59:04Z [verbose] Add: openshift-console-operator:console-conversion-webhook-595f9969b-l6z49:59748b9b-c309-4712-aa85-bb38d71c4915:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a10fd87b4b9fef3","mac":"7a:f0:82:7d:d8:bb"},{"name":"eth0","mac":"0a:58:0a:d9:00:3d","sandbox":"/var/run/netns/c156238a-d5d4-4feb-883a-0fefaa4b801f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.61/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.774299488+00:00 stderr F I0813 19:59:04.774247 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-conversion-webhook-595f9969b-l6z49", UID:"59748b9b-c309-4712-aa85-bb38d71c4915", APIVersion:"v1", ResourceVersion:"27381", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.61/23] from ovn-kubernetes 2025-08-13T19:59:04.817896801+00:00 stderr F 2025-08-13T19:59:04Z [verbose] ADD finished CNI request ContainerID:"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7" Netns:"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-rmwfn;K8S_POD_INFRA_CONTAINER_ID=9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7;K8S_POD_UID=9ad279b4-d9dc-42a8-a1c8-a002bd063482" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"3e:06:24:c1:84:12\",\"name\":\"9218677c9aa0f21\"},{\"mac\":\"0a:58:0a:d9:00:36\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782\"}],\"ips\":[{\"address\":\"10.217.0.54/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:04.874933427+00:00 stderr P 2025-08-13T19:59:04Z [verbose] 2025-08-13T19:59:04.874985878+00:00 stderr P Add: hostpath-provisioner:csi-hostpathplugin-hvm8g:12e733dd-0939-4f1b-9cbb-13897e093787:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ce1a5d3596103f2","mac":"8e:1b:59:6f:cb:45"},{"name":"eth0","mac":"0a:58:0a:d9:00:31","sandbox":"/var/run/netns/1c78c2d9-426c-4eca-8f52-b5c0eb14a232"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.49/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:04.875017139+00:00 stderr F 2025-08-13T19:59:04.875932285+00:00 stderr F I0813 19:59:04.875899 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"hostpath-provisioner", Name:"csi-hostpathplugin-hvm8g", UID:"12e733dd-0939-4f1b-9cbb-13897e093787", APIVersion:"v1", ResourceVersion:"27304", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.49/23] from ovn-kubernetes 2025-08-13T19:59:05.358683366+00:00 stderr F 2025-08-13T19:59:05Z [verbose] ADD finished CNI request ContainerID:"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f" Netns:"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4e:22:5c:05:c0:19\",\"name\":\"4916f2a17d27bbf\"},{\"mac\":\"0a:58:0a:d9:00:24\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c\"}],\"ips\":[{\"address\":\"10.217.0.36/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:05.358683366+00:00 stderr F 2025-08-13T19:59:05Z [verbose] ADD finished CNI request ContainerID:"d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71" Netns:"/var/run/netns/b894c68f-ab5f-4e66-b40a-eb0462970fb7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d6:ab:ce:8e:34:62\",\"name\":\"d3db60615905e44\"},{\"mac\":\"0a:58:0a:d9:00:17\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b894c68f-ab5f-4e66-b40a-eb0462970fb7\"}],\"ips\":[{\"address\":\"10.217.0.23/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.132119023+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Netns:"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-8-crc;K8S_POD_INFRA_CONTAINER_ID=d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877;K8S_POD_UID=72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f2:91:66:93:67:34\",\"name\":\"d84dd6581e40bee\"},{\"mac\":\"0a:58:0a:d9:00:37\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976\"}],\"ips\":[{\"address\":\"10.217.0.55/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.132119023+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7" Netns:"/var/run/netns/98329f47-b240-4eb5-89bf-192e571daa4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6e:43:34:af:3f:cd\",\"name\":\"97418fd7ce5644b\"},{\"mac\":\"0a:58:0a:d9:00:40\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/98329f47-b240-4eb5-89bf-192e571daa4a\"}],\"ips\":[{\"address\":\"10.217.0.64/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.280437451+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed" Netns:"/var/run/netns/e6ba88c5-6730-4a5b-b353-7bbeb4049738" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"c6:21:89:7b:ef:07\",\"name\":\"22d48c9fe60d97e\"},{\"mac\":\"0a:58:0a:d9:00:2d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e6ba88c5-6730-4a5b-b353-7bbeb4049738\"}],\"ips\":[{\"address\":\"10.217.0.45/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.310012694+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5" Netns:"/var/run/netns/559b0d49-ba17-4e69-90ab-366e59d1166b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"52:85:10:ff:fa:5e\",\"name\":\"717e351e369b4a5\"},{\"mac\":\"0a:58:0a:d9:00:10\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/559b0d49-ba17-4e69-90ab-366e59d1166b\"}],\"ips\":[{\"address\":\"10.217.0.16/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.310012694+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007" Netns:"/var/run/netns/c156238a-d5d4-4feb-883a-0fefaa4b801f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7a:f0:82:7d:d8:bb\",\"name\":\"a10fd87b4b9fef3\"},{\"mac\":\"0a:58:0a:d9:00:3d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c156238a-d5d4-4feb-883a-0fefaa4b801f\"}],\"ips\":[{\"address\":\"10.217.0.61/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.313906515+00:00 stderr P 2025-08-13T19:59:06Z [verbose] 2025-08-13T19:59:06.313973167+00:00 stderr P ADD finished CNI request ContainerID:"0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238" Netns:"/var/run/netns/4f246d4c-404d-4986-b5a6-5c2f06b87c6a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:68:88:74:1a:1a\",\"name\":\"0e119602de1750a\"},{\"mac\":\"0a:58:0a:d9:00:3e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4f246d4c-404d-4986-b5a6-5c2f06b87c6a\"}],\"ips\":[{\"address\":\"10.217.0.62/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.314000308+00:00 stderr F 2025-08-13T19:59:06.314444290+00:00 stderr P 2025-08-13T19:59:06Z [verbose] 2025-08-13T19:59:06.314487182+00:00 stderr P ADD finished CNI request ContainerID:"ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621" Netns:"/var/run/netns/1c78c2d9-426c-4eca-8f52-b5c0eb14a232" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"8e:1b:59:6f:cb:45\",\"name\":\"ce1a5d3596103f2\"},{\"mac\":\"0a:58:0a:d9:00:31\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1c78c2d9-426c-4eca-8f52-b5c0eb14a232\"}],\"ips\":[{\"address\":\"10.217.0.49/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:06.314518213+00:00 stderr F 2025-08-13T19:59:06.320966546+00:00 stderr F 2025-08-13T19:59:06Z [verbose] ADD finished CNI request ContainerID:"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc" Netns:"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:02:94:00:bf:62\",\"name\":\"961449f5e5e8534\"},{\"mac\":\"0a:58:0a:d9:00:25\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454\"}],\"ips\":[{\"address\":\"10.217.0.37/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T19:59:27.144002369+00:00 stderr F 2025-08-13T19:59:27Z [verbose] DEL starting CNI request ContainerID:"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Netns:"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-8-crc;K8S_POD_INFRA_CONTAINER_ID=d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877;K8S_POD_UID=72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Path:"" 2025-08-13T19:59:27.179924703+00:00 stderr F 2025-08-13T19:59:27Z [verbose] Del: openshift-kube-controller-manager:revision-pruner-8-crc:72854c1e-5ae2-4ed6-9e50-ff3bccde2635:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:59:33.096082993+00:00 stderr F 2025-08-13T19:59:33Z [verbose] DEL finished CNI request ContainerID:"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Netns:"/var/run/netns/7bb6bb4a-a920-463e-a86a-6d40819d6976" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-8-crc;K8S_POD_INFRA_CONTAINER_ID=d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877;K8S_POD_UID=72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Path:"", result: "", err: 2025-08-13T19:59:41.933253729+00:00 stderr F 2025-08-13T19:59:41Z [verbose] ADD starting CNI request ContainerID:"c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee" Netns:"/var/run/netns/e65ece38-f1c9-4db0-b9d9-7d3aa029a566" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-kk8kg;K8S_POD_INFRA_CONTAINER_ID=c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee;K8S_POD_UID=e4a7de23-6134-4044-902a-0900dc04a501" Path:"" 2025-08-13T19:59:43.456335124+00:00 stderr F 2025-08-13T19:59:43Z [verbose] DEL starting CNI request ContainerID:"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Netns:"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"" 2025-08-13T19:59:43.456335124+00:00 stderr F 2025-08-13T19:59:43Z [verbose] Del: openshift-service-ca:service-ca-666f99b6f-vlbxv:378552fd-5e53-4882-87ff-95f3d9198861:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:59:47.068968522+00:00 stderr F 2025-08-13T19:59:47Z [verbose] Add: openshift-service-ca:service-ca-666f99b6f-kk8kg:e4a7de23-6134-4044-902a-0900dc04a501:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c5069234e6bbbde","mac":"46:16:5d:0b:45:7c"},{"name":"eth0","mac":"0a:58:0a:d9:00:28","sandbox":"/var/run/netns/e65ece38-f1c9-4db0-b9d9-7d3aa029a566"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.40/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T19:59:47.099974756+00:00 stderr F I0813 19:59:47.099179 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca", Name:"service-ca-666f99b6f-kk8kg", UID:"e4a7de23-6134-4044-902a-0900dc04a501", APIVersion:"v1", ResourceVersion:"28290", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.40/23] from ovn-kubernetes 2025-08-13T19:59:47.897507141+00:00 stderr F 2025-08-13T19:59:47Z [verbose] ADD finished CNI request ContainerID:"c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee" Netns:"/var/run/netns/e65ece38-f1c9-4db0-b9d9-7d3aa029a566" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-kk8kg;K8S_POD_INFRA_CONTAINER_ID=c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee;K8S_POD_UID=e4a7de23-6134-4044-902a-0900dc04a501" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"46:16:5d:0b:45:7c\",\"name\":\"c5069234e6bbbde\"},{\"mac\":\"0a:58:0a:d9:00:28\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e65ece38-f1c9-4db0-b9d9-7d3aa029a566\"}],\"ips\":[{\"address\":\"10.217.0.40/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 
2025-08-13T19:59:49.047495393+00:00 stderr F 2025-08-13T19:59:49Z [verbose] DEL finished CNI request ContainerID:"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039" Netns:"/var/run/netns/16067c77-713a-4961-add8-c216b53340b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"", result: "", err: 2025-08-13T19:59:52.638690193+00:00 stderr F 2025-08-13T19:59:52Z [verbose] DEL starting CNI request ContainerID:"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f" Netns:"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"" 2025-08-13T19:59:52.658090086+00:00 stderr F 2025-08-13T19:59:52Z [verbose] Del: openshift-controller-manager:controller-manager-6ff78978b4-q4vv8:87df87f4-ba66-4137-8e41-1fa632ad4207:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:59:52.849384859+00:00 stderr P 2025-08-13T19:59:52Z [verbose] 2025-08-13T19:59:52.849595085+00:00 stderr P DEL starting CNI request ContainerID:"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf" Netns:"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5c4dbb8899-tchz5;K8S_POD_INFRA_CONTAINER_ID=893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf;K8S_POD_UID=af6b67a3-a2bd-4051-9adc-c208a5a65d79" Path:"" 2025-08-13T19:59:52.850056428+00:00 stderr F 2025-08-13T19:59:52.850768769+00:00 stderr P 2025-08-13T19:59:52Z [verbose] 2025-08-13T19:59:52.850910913+00:00 stderr P Del: openshift-route-controller-manager:route-controller-manager-5c4dbb8899-tchz5:af6b67a3-a2bd-4051-9adc-c208a5a65d79:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T19:59:52.850952694+00:00 stderr F 2025-08-13T19:59:53.782082985+00:00 stderr F 2025-08-13T19:59:53Z [verbose] DEL finished CNI request ContainerID:"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f" Netns:"/var/run/netns/c964672c-1543-4ed9-8311-27242d3cbe4c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"", result: "", err: 2025-08-13T19:59:54.606403153+00:00 stderr F 2025-08-13T19:59:54Z [verbose] DEL finished CNI request ContainerID:"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf" Netns:"/var/run/netns/39f01de0-25a0-4e52-b360-7aa638922e4a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5c4dbb8899-tchz5;K8S_POD_INFRA_CONTAINER_ID=893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf;K8S_POD_UID=af6b67a3-a2bd-4051-9adc-c208a5a65d79" Path:"", result: "", err: 
2025-08-13T19:59:57.244142212+00:00 stderr F 2025-08-13T19:59:57Z [verbose] ADD starting CNI request ContainerID:"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Netns:"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-c4dd57946-mpxjt;K8S_POD_INFRA_CONTAINER_ID=4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54;K8S_POD_UID=16f68e98-a8f9-417a-b92b-37bfd7b11e01" Path:""
2025-08-13T19:59:57.594150390+00:00 stderr F 2025-08-13T19:59:57Z [verbose] ADD starting CNI request ContainerID:"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a" Netns:"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5b77f9fd48-hb8xt;K8S_POD_INFRA_CONTAINER_ID=13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a;K8S_POD_UID=83bf0764-e80c-490b-8d3c-3cf626fdb233" Path:""
2025-08-13T19:59:59.956411127+00:00 stderr F 2025-08-13T19:59:59Z [verbose] Add: openshift-controller-manager:controller-manager-c4dd57946-mpxjt:16f68e98-a8f9-417a-b92b-37bfd7b11e01:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4cfa6ec97b88dab","mac":"d6:04:2f:51:ff:10"},{"name":"eth0","mac":"0a:58:0a:d9:00:29","sandbox":"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.41/23","gateway":"10.217.0.1"}],"dns":{}}
2025-08-13T19:59:59.956411127+00:00 stderr F I0813 19:59:59.954708 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-c4dd57946-mpxjt", UID:"16f68e98-a8f9-417a-b92b-37bfd7b11e01", APIVersion:"v1", ResourceVersion:"28746", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.41/23] from ovn-kubernetes
2025-08-13T20:00:00.044587720+00:00
stderr F 2025-08-13T20:00:00Z [verbose] ADD finished CNI request ContainerID:"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Netns:"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-c4dd57946-mpxjt;K8S_POD_INFRA_CONTAINER_ID=4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54;K8S_POD_UID=16f68e98-a8f9-417a-b92b-37bfd7b11e01" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d6:04:2f:51:ff:10\",\"name\":\"4cfa6ec97b88dab\"},{\"mac\":\"0a:58:0a:d9:00:29\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6\"}],\"ips\":[{\"address\":\"10.217.0.41/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err:
2025-08-13T20:00:01.558864525+00:00 stderr F 2025-08-13T20:00:01Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-5b77f9fd48-hb8xt:83bf0764-e80c-490b-8d3c-3cf626fdb233:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"13b18d12f5f999b","mac":"f6:89:4d:62:4a:70"},{"name":"eth0","mac":"0a:58:0a:d9:00:2a","sandbox":"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.42/23","gateway":"10.217.0.1"}],"dns":{}}
2025-08-13T20:00:01.559106632+00:00 stderr F I0813 20:00:01.559027 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-5b77f9fd48-hb8xt", UID:"83bf0764-e80c-490b-8d3c-3cf626fdb233", APIVersion:"v1", ResourceVersion:"28749", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.42/23] from ovn-kubernetes
2025-08-13T20:00:01.691357391+00:00 stderr P 2025-08-13T20:00:01Z [verbose]
2025-08-13T20:00:01.731302750+00:00 stderr P ADD finished CNI request ContainerID:"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a"
Netns:"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5b77f9fd48-hb8xt;K8S_POD_INFRA_CONTAINER_ID=13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a;K8S_POD_UID=83bf0764-e80c-490b-8d3c-3cf626fdb233" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f6:89:4d:62:4a:70\",\"name\":\"13b18d12f5f999b\"},{\"mac\":\"0a:58:0a:d9:00:2a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39\"}],\"ips\":[{\"address\":\"10.217.0.42/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err:
2025-08-13T20:00:01.731524846+00:00 stderr F
2025-08-13T20:00:02.963959695+00:00 stderr F 2025-08-13T20:00:02Z [verbose] ADD starting CNI request ContainerID:"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Netns:"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251920-wcws2;K8S_POD_INFRA_CONTAINER_ID=eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348;K8S_POD_UID=deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" Path:""
2025-08-13T20:00:05.345329857+00:00 stderr P 2025-08-13T20:00:05Z [verbose]
2025-08-13T20:00:05.345387239+00:00 stderr P Add: openshift-operator-lifecycle-manager:collect-profiles-29251920-wcws2:deaee4f4-7b7a-442d-99b7-c8ac62ef5f27:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"eae823dac0e12a2","mac":"92:41:c9:a3:ea:a6"},{"name":"eth0","mac":"0a:58:0a:d9:00:2c","sandbox":"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.44/23","gateway":"10.217.0.1"}],"dns":{}}
2025-08-13T20:00:05.345418890+00:00 stderr F
2025-08-13T20:00:05.346879541+00:00 stderr F I0813 20:00:05.345996 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod",
Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29251920-wcws2", UID:"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27", APIVersion:"v1", ResourceVersion:"28823", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.44/23] from ovn-kubernetes
2025-08-13T20:00:05.381500918+00:00 stderr P 2025-08-13T20:00:05Z [verbose]
2025-08-13T20:00:05.381557790+00:00 stderr P ADD finished CNI request ContainerID:"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Netns:"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251920-wcws2;K8S_POD_INFRA_CONTAINER_ID=eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348;K8S_POD_UID=deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"92:41:c9:a3:ea:a6\",\"name\":\"eae823dac0e12a2\"},{\"mac\":\"0a:58:0a:d9:00:2c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984\"}],\"ips\":[{\"address\":\"10.217.0.44/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err:
2025-08-13T20:00:05.381588991+00:00 stderr F
2025-08-13T20:00:08.083714518+00:00 stderr F 2025-08-13T20:00:08Z [verbose] ADD starting CNI request ContainerID:"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Netns:"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-9-crc;K8S_POD_INFRA_CONTAINER_ID=beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319;K8S_POD_UID=a0453d24-e872-43af-9e7a-86227c26d200" Path:""
2025-08-13T20:00:11.240574632+00:00 stderr F 2025-08-13T20:00:11Z [verbose] ADD starting CNI request ContainerID:"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef" Netns:"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330" IfName:"eth0"
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef;K8S_POD_UID=227e3650-2a85-4229-8099-bb53972635b2" Path:""
2025-08-13T20:00:12.220030431+00:00 stderr F 2025-08-13T20:00:12Z [verbose] Add: openshift-kube-controller-manager:revision-pruner-9-crc:a0453d24-e872-43af-9e7a-86227c26d200:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"beb700893f285f1","mac":"06:fd:56:87:f4:cd"},{"name":"eth0","mac":"0a:58:0a:d9:00:34","sandbox":"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.52/23","gateway":"10.217.0.1"}],"dns":{}}
2025-08-13T20:00:12.220030431+00:00 stderr F I0813 20:00:12.216134 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"revision-pruner-9-crc", UID:"a0453d24-e872-43af-9e7a-86227c26d200", APIVersion:"v1", ResourceVersion:"28975", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.52/23] from ovn-kubernetes
2025-08-13T20:00:12.800695578+00:00 stderr F 2025-08-13T20:00:12Z [verbose] ADD finished CNI request ContainerID:"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Netns:"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-9-crc;K8S_POD_INFRA_CONTAINER_ID=beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319;K8S_POD_UID=a0453d24-e872-43af-9e7a-86227c26d200" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:fd:56:87:f4:cd\",\"name\":\"beb700893f285f1\"},{\"mac\":\"0a:58:0a:d9:00:34\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173\"}],\"ips\":[{\"address\":\"10.217.0.52/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err:
2025-08-13T20:00:13.096603935+00:00 stderr F 2025-08-13T20:00:13Z [verbose] Add: openshift-kube-controller-manager:installer-9-crc:227e3650-2a85-4229-8099-bb53972635b2:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ca267bd7a205181","mac":"06:5e:50:09:30:d5"},{"name":"eth0","mac":"0a:58:0a:d9:00:35","sandbox":"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.53/23","gateway":"10.217.0.1"}],"dns":{}}
2025-08-13T20:00:13.096603935+00:00 stderr F I0813 20:00:13.095603 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"installer-9-crc", UID:"227e3650-2a85-4229-8099-bb53972635b2", APIVersion:"v1", ResourceVersion:"29034", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.53/23] from ovn-kubernetes
2025-08-13T20:00:13.126610891+00:00 stderr P 2025-08-13T20:00:13Z [verbose]
2025-08-13T20:00:13.126685123+00:00 stderr P DEL starting CNI request ContainerID:"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a" Netns:"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5b77f9fd48-hb8xt;K8S_POD_INFRA_CONTAINER_ID=13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a;K8S_POD_UID=83bf0764-e80c-490b-8d3c-3cf626fdb233" Path:""
2025-08-13T20:00:13.126721704+00:00 stderr F
2025-08-13T20:00:13.127349202+00:00 stderr P 2025-08-13T20:00:13Z [verbose]
2025-08-13T20:00:13.127392773+00:00 stderr P Del: openshift-route-controller-manager:route-controller-manager-5b77f9fd48-hb8xt:83bf0764-e80c-490b-8d3c-3cf626fdb233:ovn-kubernetes:eth0
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}}
2025-08-13T20:00:13.127422254+00:00 stderr F
2025-08-13T20:00:13.427259784+00:00 stderr F 2025-08-13T20:00:13Z [verbose] ADD finished CNI request ContainerID:"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef" Netns:"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef;K8S_POD_UID=227e3650-2a85-4229-8099-bb53972635b2" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:5e:50:09:30:d5\",\"name\":\"ca267bd7a205181\"},{\"mac\":\"0a:58:0a:d9:00:35\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330\"}],\"ips\":[{\"address\":\"10.217.0.53/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err:
2025-08-13T20:00:13.463043484+00:00 stderr F 2025-08-13T20:00:13Z [verbose] DEL starting CNI request ContainerID:"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Netns:"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-c4dd57946-mpxjt;K8S_POD_INFRA_CONTAINER_ID=4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54;K8S_POD_UID=16f68e98-a8f9-417a-b92b-37bfd7b11e01" Path:""
2025-08-13T20:00:13.463043484+00:00 stderr F 2025-08-13T20:00:13Z [verbose] Del: openshift-controller-manager:controller-manager-c4dd57946-mpxjt:16f68e98-a8f9-417a-b92b-37bfd7b11e01:ovn-kubernetes:eth0
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}}
2025-08-13T20:00:14.913106910+00:00 stderr F 2025-08-13T20:00:14Z [verbose] DEL finished CNI request ContainerID:"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Netns:"/var/run/netns/931f6801-4284-48e2-ba9b-ebb96ec043e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-c4dd57946-mpxjt;K8S_POD_INFRA_CONTAINER_ID=4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54;K8S_POD_UID=16f68e98-a8f9-417a-b92b-37bfd7b11e01" Path:"", result: "", err:
2025-08-13T20:00:15.107138713+00:00 stderr F 2025-08-13T20:00:15Z [verbose] DEL starting CNI request ContainerID:"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Netns:"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251920-wcws2;K8S_POD_INFRA_CONTAINER_ID=eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348;K8S_POD_UID=deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" Path:""
2025-08-13T20:00:15.107138713+00:00 stderr F 2025-08-13T20:00:15Z [verbose] Del: openshift-operator-lifecycle-manager:collect-profiles-29251920-wcws2:deaee4f4-7b7a-442d-99b7-c8ac62ef5f27:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}}
2025-08-13T20:00:15.630039383+00:00 stderr P 2025-08-13T20:00:15Z [verbose]
2025-08-13T20:00:15.630093634+00:00 stderr P DEL finished CNI request
ContainerID:"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a" Netns:"/var/run/netns/58cf4644-01ae-42de-998d-840ca6b6dd39" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5b77f9fd48-hb8xt;K8S_POD_INFRA_CONTAINER_ID=13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a;K8S_POD_UID=83bf0764-e80c-490b-8d3c-3cf626fdb233" Path:"", result: "", err:
2025-08-13T20:00:15.630118135+00:00 stderr F
2025-08-13T20:00:16.272621295+00:00 stderr F 2025-08-13T20:00:16Z [verbose] DEL finished CNI request ContainerID:"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Netns:"/var/run/netns/d03b248a-9813-4629-85b8-29241dc1e984" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251920-wcws2;K8S_POD_INFRA_CONTAINER_ID=eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348;K8S_POD_UID=deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" Path:"", result: "", err:
2025-08-13T20:00:18.052931309+00:00 stderr F 2025-08-13T20:00:18Z [verbose] DEL starting CNI request ContainerID:"628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-28658250-dvzvw;K8S_POD_INFRA_CONTAINER_ID=628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292;K8S_POD_UID=05fb6e44-aaf9-4fbc-a235-7a3447ac3086" Path:""
2025-08-13T20:00:18.053876806+00:00 stderr F 2025-08-13T20:00:18Z [error] Multus: failed to get the cached delegates file: open /var/lib/cni/multus/628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292: no such file or directory, cannot properly delete
2025-08-13T20:00:18.053876806+00:00 stderr F 2025-08-13T20:00:18Z [verbose] DEL finished CNI request ContainerID:"628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292" Netns:"" IfName:"eth0"
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-28658250-dvzvw;K8S_POD_INFRA_CONTAINER_ID=628032c525b0ea76a1735a05272ee3f884451044c1e0736096ab478da4f7f292;K8S_POD_UID=05fb6e44-aaf9-4fbc-a235-7a3447ac3086" Path:"", result: "", err:
2025-08-13T20:00:18.953674012+00:00 stderr F 2025-08-13T20:00:18Z [verbose] ADD starting CNI request ContainerID:"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Netns:"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8;K8S_POD_UID=2ad657a4-8b02-4373-8d0d-b0e25345dc90" Path:""
2025-08-13T20:00:19.081176678+00:00 stderr F 2025-08-13T20:00:19Z [verbose] DEL starting CNI request ContainerID:"d6388111d42b1b52671ff33a0eeb73cc55dd6201183af5171802748304de2a17" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=d6388111d42b1b52671ff33a0eeb73cc55dd6201183af5171802748304de2a17;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:""
2025-08-13T20:00:19.153086928+00:00 stderr F 2025-08-13T20:00:19Z [verbose] Del: openshift-service-ca-operator:service-ca-operator-546b4f8984-pwccz:6d67253e-2acd-4bc1-8185-793587da4f17:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}}
2025-08-13T20:00:19.425343031+00:00 stderr F 2025-08-13T20:00:19Z [verbose] ADD starting CNI request ContainerID:"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715" Netns:"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588" IfName:"eth0"
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6cfd9fc8fc-7sbzw;K8S_POD_INFRA_CONTAINER_ID=1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715;K8S_POD_UID=1713e8bc-bab0-49a8-8618-9ded2e18906c" Path:""
2025-08-13T20:00:20.398503070+00:00 stderr F 2025-08-13T20:00:20Z [verbose] ADD starting CNI request ContainerID:"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7" Netns:"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d9678894c-wx62n;K8S_POD_INFRA_CONTAINER_ID=612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7;K8S_POD_UID=384ed0e8-86e4-42df-bd2c-604c1f536a15" Path:""
2025-08-13T20:00:21.775932115+00:00 stderr F 2025-08-13T20:00:21Z [verbose] ADD starting CNI request ContainerID:"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa" Netns:"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67685c4459-7p2h8;K8S_POD_INFRA_CONTAINER_ID=51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa;K8S_POD_UID=a560ec6a-586f-403c-a08e-e3a76fa1b7fd" Path:""
2025-08-13T20:00:21.808731090+00:00 stderr F 2025-08-13T20:00:21Z [verbose] DEL finished CNI request ContainerID:"d6388111d42b1b52671ff33a0eeb73cc55dd6201183af5171802748304de2a17" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=d6388111d42b1b52671ff33a0eeb73cc55dd6201183af5171802748304de2a17;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"", result: "", err:
2025-08-13T20:00:22.604078219+00:00 stderr F 2025-08-13T20:00:22Z [verbose] DEL starting CNI request ContainerID:"a3f675eda1f1f650af555ac6627d1fdb41abe888489e03a064815885157f5403" Netns:"" IfName:"eth0"
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=a3f675eda1f1f650af555ac6627d1fdb41abe888489e03a064815885157f5403;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:""
2025-08-13T20:00:22.624992706+00:00 stderr F 2025-08-13T20:00:22Z [verbose] Del: openshift-kube-controller-manager-operator:kube-controller-manager-operator-6f6cb54958-rbddb:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}}
2025-08-13T20:00:23.221625190+00:00 stderr P 2025-08-13T20:00:23Z [verbose]
2025-08-13T20:00:23.221874807+00:00 stderr P DEL starting CNI request ContainerID:"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Netns:"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-9-crc;K8S_POD_INFRA_CONTAINER_ID=beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319;K8S_POD_UID=a0453d24-e872-43af-9e7a-86227c26d200" Path:""
2025-08-13T20:00:23.221917299+00:00 stderr F
2025-08-13T20:00:23.222874806+00:00 stderr P 2025-08-13T20:00:23Z [verbose]
2025-08-13T20:00:23.222953108+00:00 stderr P Del: openshift-kube-controller-manager:revision-pruner-9-crc:a0453d24-e872-43af-9e7a-86227c26d200:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}}
2025-08-13T20:00:23.222991239+00:00 stderr F
2025-08-13T20:00:23.426680097+00:00 stderr F 2025-08-13T20:00:23Z [verbose]
Add: openshift-route-controller-manager:route-controller-manager-6cfd9fc8fc-7sbzw:1713e8bc-bab0-49a8-8618-9ded2e18906c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"1f55b781eeb63db","mac":"4a:a0:3c:7f:f6:89"},{"name":"eth0","mac":"0a:58:0a:d9:00:38","sandbox":"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.56/23","gateway":"10.217.0.1"}],"dns":{}}
2025-08-13T20:00:23.427156031+00:00 stderr F I0813 20:00:23.427101 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-6cfd9fc8fc-7sbzw", UID:"1713e8bc-bab0-49a8-8618-9ded2e18906c", APIVersion:"v1", ResourceVersion:"29279", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.56/23] from ovn-kubernetes
2025-08-13T20:00:23.524430564+00:00 stderr P 2025-08-13T20:00:23Z [verbose]
2025-08-13T20:00:23.524518267+00:00 stderr P Add: openshift-kube-apiserver:installer-9-crc:2ad657a4-8b02-4373-8d0d-b0e25345dc90:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"9b70547ed21fdd5","mac":"1e:44:eb:8d:8e:6a"},{"name":"eth0","mac":"0a:58:0a:d9:00:37","sandbox":"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.55/23","gateway":"10.217.0.1"}],"dns":{}}
2025-08-13T20:00:23.524584588+00:00 stderr F
2025-08-13T20:00:23.525128274+00:00 stderr F I0813 20:00:23.525048 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"installer-9-crc", UID:"2ad657a4-8b02-4373-8d0d-b0e25345dc90", APIVersion:"v1", ResourceVersion:"29261", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.55/23] from ovn-kubernetes
2025-08-13T20:00:23.693962418+00:00 stderr P 2025-08-13T20:00:23Z [verbose]
2025-08-13T20:00:23.694021390+00:00 stderr P Add:
openshift-console:console-5d9678894c-wx62n:384ed0e8-86e4-42df-bd2c-604c1f536a15:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"612e7824c92f4db","mac":"2a:0d:bb:e8:fc:b3"},{"name":"eth0","mac":"0a:58:0a:d9:00:39","sandbox":"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.57/23","gateway":"10.217.0.1"}],"dns":{}}
2025-08-13T20:00:23.694045971+00:00 stderr F
2025-08-13T20:00:23.694451672+00:00 stderr F I0813 20:00:23.694330 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"console-5d9678894c-wx62n", UID:"384ed0e8-86e4-42df-bd2c-604c1f536a15", APIVersion:"v1", ResourceVersion:"29333", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.57/23] from ovn-kubernetes
2025-08-13T20:00:23.700320369+00:00 stderr F 2025-08-13T20:00:23Z [verbose] DEL finished CNI request ContainerID:"a3f675eda1f1f650af555ac6627d1fdb41abe888489e03a064815885157f5403" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=a3f675eda1f1f650af555ac6627d1fdb41abe888489e03a064815885157f5403;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"", result: "", err:
2025-08-13T20:00:23.734697240+00:00 stderr F 2025-08-13T20:00:23Z [verbose] DEL starting CNI request ContainerID:"f9aabd5b02c81bbe0daa4b7f8edaa2072f7d958bff4d4003604076a03fb177e2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=f9aabd5b02c81bbe0daa4b7f8edaa2072f7d958bff4d4003604076a03fb177e2;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:""
2025-08-13T20:00:23.735698498+00:00 stderr F 2025-08-13T20:00:23Z [verbose] Del:
openshift-machine-api:machine-api-operator-788b7c6b6c-ctdmb:4f8aa612-9da0-4a2b-911e-6a1764a4e74e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}}
2025-08-13T20:00:23.764698625+00:00 stderr P 2025-08-13T20:00:23Z [verbose]
2025-08-13T20:00:23.764912391+00:00 stderr P Add: openshift-controller-manager:controller-manager-67685c4459-7p2h8:a560ec6a-586f-403c-a08e-e3a76fa1b7fd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"51aea926a857cd4","mac":"9e:95:94:ba:9d:71"},{"name":"eth0","mac":"0a:58:0a:d9:00:3a","sandbox":"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.58/23","gateway":"10.217.0.1"}],"dns":{}}
2025-08-13T20:00:23.765013234+00:00 stderr F
2025-08-13T20:00:23.765593531+00:00 stderr F I0813 20:00:23.765564 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-67685c4459-7p2h8", UID:"a560ec6a-586f-403c-a08e-e3a76fa1b7fd", APIVersion:"v1", ResourceVersion:"29411", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.58/23] from ovn-kubernetes
2025-08-13T20:00:23.818472889+00:00 stderr F 2025-08-13T20:00:23Z [verbose] ADD finished CNI request ContainerID:"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715" Netns:"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6cfd9fc8fc-7sbzw;K8S_POD_INFRA_CONTAINER_ID=1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715;K8S_POD_UID=1713e8bc-bab0-49a8-8618-9ded2e18906c" Path:"", result:
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4a:a0:3c:7f:f6:89\",\"name\":\"1f55b781eeb63db\"},{\"mac\":\"0a:58:0a:d9:00:38\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588\"}],\"ips\":[{\"address\":\"10.217.0.56/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err:
2025-08-13T20:00:23.895917767+00:00 stderr F 2025-08-13T20:00:23Z [verbose] DEL finished CNI request ContainerID:"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Netns:"/var/run/netns/9628325c-7662-4b67-b68d-e2df8d301173" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-9-crc;K8S_POD_INFRA_CONTAINER_ID=beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319;K8S_POD_UID=a0453d24-e872-43af-9e7a-86227c26d200" Path:"", result: "", err:
2025-08-13T20:00:23.993536050+00:00 stderr F 2025-08-13T20:00:23Z [verbose] DEL finished CNI request ContainerID:"f9aabd5b02c81bbe0daa4b7f8edaa2072f7d958bff4d4003604076a03fb177e2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=f9aabd5b02c81bbe0daa4b7f8edaa2072f7d958bff4d4003604076a03fb177e2;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"", result: "", err:
2025-08-13T20:00:24.155722865+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL starting CNI request ContainerID:"657f2ae8a88862495da3eaf4929639924b3c5ac944123fbfa2c45af54da8eeae" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=657f2ae8a88862495da3eaf4929639924b3c5ac944123fbfa2c45af54da8eeae;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:""
2025-08-13T20:00:24.158038961+00:00 stderr F 2025-08-13T20:00:24Z [verbose] Del: openshift-marketplace:redhat-marketplace-8s8pc:c782cf62-a827-4677-b3c2-6f82c5f09cbb:ovn-kubernetes:eth0
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}}
2025-08-13T20:00:24.250949330+00:00 stderr P 2025-08-13T20:00:24Z [verbose]
2025-08-13T20:00:24.282009636+00:00 stderr F ADD finished CNI request ContainerID:"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Netns:"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8;K8S_POD_UID=2ad657a4-8b02-4373-8d0d-b0e25345dc90" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1e:44:eb:8d:8e:6a\",\"name\":\"9b70547ed21fdd5\"},{\"mac\":\"0a:58:0a:d9:00:37\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d\"}],\"ips\":[{\"address\":\"10.217.0.55/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err:
2025-08-13T20:00:24.282009636+00:00 stderr F 2025-08-13T20:00:24Z [verbose] ADD finished CNI request ContainerID:"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7" Netns:"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d9678894c-wx62n;K8S_POD_INFRA_CONTAINER_ID=612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7;K8S_POD_UID=384ed0e8-86e4-42df-bd2c-604c1f536a15" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2a:0d:bb:e8:fc:b3\",\"name\":\"612e7824c92f4db\"},{\"mac\":\"0a:58:0a:d9:00:39\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0\"}],\"ips\":[{\"address\":\"10.217.0.57/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err:
2025-08-13T20:00:24.282009636+00:00 stderr F 2025-08-13T20:00:24Z [verbose] ADD finished CNI request ContainerID:"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa" Netns:"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67685c4459-7p2h8;K8S_POD_INFRA_CONTAINER_ID=51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa;K8S_POD_UID=a560ec6a-586f-403c-a08e-e3a76fa1b7fd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:95:94:ba:9d:71\",\"name\":\"51aea926a857cd4\"},{\"mac\":\"0a:58:0a:d9:00:3a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d\"}],\"ips\":[{\"address\":\"10.217.0.58/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:24.344766175+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL finished CNI request ContainerID:"657f2ae8a88862495da3eaf4929639924b3c5ac944123fbfa2c45af54da8eeae" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=657f2ae8a88862495da3eaf4929639924b3c5ac944123fbfa2c45af54da8eeae;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"", result: "", err: 2025-08-13T20:00:24.388626996+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL starting CNI request ContainerID:"4e60446641bb2794f361bac93ea3ad453233afcdf0dccdc18a1602491cab10f7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=4e60446641bb2794f361bac93ea3ad453233afcdf0dccdc18a1602491cab10f7;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"" 2025-08-13T20:00:24.400294769+00:00 stderr F 2025-08-13T20:00:24Z [verbose] Del: 
openshift-apiserver-operator:openshift-apiserver-operator-7c88c4c865-kn67m:43ae1c37-047b-4ee2-9fee-41e337dd4ac8:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:24.770294289+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL finished CNI request ContainerID:"4e60446641bb2794f361bac93ea3ad453233afcdf0dccdc18a1602491cab10f7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=4e60446641bb2794f361bac93ea3ad453233afcdf0dccdc18a1602491cab10f7;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"", result: "", err: 2025-08-13T20:00:24.831022011+00:00 stderr F 2025-08-13T20:00:24Z [verbose] DEL starting CNI request ContainerID:"defe45c320ad6203b1ec1370b005153bc67bfbf75778890aabec5f575fc3869e" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=defe45c320ad6203b1ec1370b005153bc67bfbf75778890aabec5f575fc3869e;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"" 2025-08-13T20:00:24.837189417+00:00 stderr F 2025-08-13T20:00:24Z [verbose] Del: openshift-operator-lifecycle-manager:packageserver-8464bcc55b-sjnqz:bd556935-a077-45df-ba3f-d42c39326ccd:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:26.002482674+00:00 stderr F 2025-08-13T20:00:26Z [verbose] DEL finished CNI request ContainerID:"defe45c320ad6203b1ec1370b005153bc67bfbf75778890aabec5f575fc3869e" 
Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=defe45c320ad6203b1ec1370b005153bc67bfbf75778890aabec5f575fc3869e;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"", result: "", err: 2025-08-13T20:00:26.422336186+00:00 stderr F 2025-08-13T20:00:26Z [verbose] DEL starting CNI request ContainerID:"6824abd319d801d93100e5a78029eeb4b36aff7431e6dd2a4cc86e65a59169df" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=6824abd319d801d93100e5a78029eeb4b36aff7431e6dd2a4cc86e65a59169df;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"" 2025-08-13T20:00:26.434518494+00:00 stderr F 2025-08-13T20:00:26Z [verbose] Del: openshift-authentication-operator:authentication-operator-7cc7ff75d5-g9qv8:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:27.168170163+00:00 stderr F 2025-08-13T20:00:27Z [verbose] DEL starting CNI request ContainerID:"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa" Netns:"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67685c4459-7p2h8;K8S_POD_INFRA_CONTAINER_ID=51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa;K8S_POD_UID=a560ec6a-586f-403c-a08e-e3a76fa1b7fd" Path:"" 2025-08-13T20:00:27.168170163+00:00 stderr F 2025-08-13T20:00:27Z [verbose] Del: openshift-controller-manager:controller-manager-67685c4459-7p2h8:a560ec6a-586f-403c-a08e-e3a76fa1b7fd:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:27.794460121+00:00 stderr F 2025-08-13T20:00:27Z [verbose] DEL starting CNI request ContainerID:"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715" Netns:"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6cfd9fc8fc-7sbzw;K8S_POD_INFRA_CONTAINER_ID=1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715;K8S_POD_UID=1713e8bc-bab0-49a8-8618-9ded2e18906c" Path:"" 2025-08-13T20:00:27.794704278+00:00 stderr F 2025-08-13T20:00:27Z [verbose] Del: openshift-route-controller-manager:route-controller-manager-6cfd9fc8fc-7sbzw:1713e8bc-bab0-49a8-8618-9ded2e18906c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:28.483037086+00:00 stderr P 2025-08-13T20:00:28Z [verbose] 2025-08-13T20:00:28.483116038+00:00 stderr P ADD starting CNI request ContainerID:"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4" Netns:"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-7cbd5666ff-bbfrf;K8S_POD_INFRA_CONTAINER_ID=958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4;K8S_POD_UID=42b6a393-6194-4620-bf8f-7e4b6cbe5679" Path:"" 2025-08-13T20:00:28.483149849+00:00 stderr F 2025-08-13T20:00:28.644932272+00:00 stderr F 2025-08-13T20:00:28Z [verbose] ADD starting CNI request 
ContainerID:"7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e" Netns:"/var/run/netns/c953bf54-7cc8-4003-9703-2c7a471aed7f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"" 2025-08-13T20:00:28.787764605+00:00 stderr F 2025-08-13T20:00:28Z [verbose] DEL finished CNI request ContainerID:"6824abd319d801d93100e5a78029eeb4b36aff7431e6dd2a4cc86e65a59169df" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=6824abd319d801d93100e5a78029eeb4b36aff7431e6dd2a4cc86e65a59169df;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"", result: "", err: 2025-08-13T20:00:28.855300300+00:00 stderr F 2025-08-13T20:00:28Z [verbose] DEL finished CNI request ContainerID:"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa" Netns:"/var/run/netns/55bf1bf4-f151-480d-940d-f0f12e62a28d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-67685c4459-7p2h8;K8S_POD_INFRA_CONTAINER_ID=51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa;K8S_POD_UID=a560ec6a-586f-403c-a08e-e3a76fa1b7fd" Path:"", result: "", err: 2025-08-13T20:00:28.919684786+00:00 stderr F 2025-08-13T20:00:28Z [verbose] DEL starting CNI request ContainerID:"9a9e3e1f426de43a655d2713cf2cf80d659d319c3d5af635a89d2d1998a12685" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=9a9e3e1f426de43a655d2713cf2cf80d659d319c3d5af635a89d2d1998a12685;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"" 2025-08-13T20:00:28.922399074+00:00 stderr F 2025-08-13T20:00:28Z [verbose] Del: 
openshift-marketplace:redhat-operators-f4jkp:4092a9f8-5acc-4932-9e90-ef962eeb301a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:29.981503632+00:00 stderr F 2025-08-13T20:00:29Z [verbose] ADD starting CNI request ContainerID:"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9" Netns:"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-78589965b8-vmcwt;K8S_POD_INFRA_CONTAINER_ID=97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9;K8S_POD_UID=00d32440-4cce-4609-96f3-51ac94480aab" Path:"" 2025-08-13T20:00:30.201077733+00:00 stderr F 2025-08-13T20:00:30Z [verbose] DEL finished CNI request ContainerID:"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715" Netns:"/var/run/netns/2d4bc6b6-5587-4e2e-9a5d-bd19e8d86588" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6cfd9fc8fc-7sbzw;K8S_POD_INFRA_CONTAINER_ID=1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715;K8S_POD_UID=1713e8bc-bab0-49a8-8618-9ded2e18906c" Path:"", result: "", err: 2025-08-13T20:00:30.542571871+00:00 stderr F 2025-08-13T20:00:30Z [verbose] DEL finished CNI request ContainerID:"9a9e3e1f426de43a655d2713cf2cf80d659d319c3d5af635a89d2d1998a12685" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=9a9e3e1f426de43a655d2713cf2cf80d659d319c3d5af635a89d2d1998a12685;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"", result: "", err: 2025-08-13T20:00:30.750113158+00:00 stderr P 2025-08-13T20:00:30Z [verbose] 
2025-08-13T20:00:30.750212571+00:00 stderr P Add: openshift-image-registry:image-registry-7cbd5666ff-bbfrf:42b6a393-6194-4620-bf8f-7e4b6cbe5679:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"958ba1ee8e9afa1","mac":"72:4f:d8:41:fe:b2"},{"name":"eth0","mac":"0a:58:0a:d9:00:26","sandbox":"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.38/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:30.750352425+00:00 stderr F 2025-08-13T20:00:30.757356695+00:00 stderr F I0813 20:00:30.750923 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-7cbd5666ff-bbfrf", UID:"42b6a393-6194-4620-bf8f-7e4b6cbe5679", APIVersion:"v1", ResourceVersion:"27607", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.38/23] from ovn-kubernetes 2025-08-13T20:00:30.851936702+00:00 stderr F 2025-08-13T20:00:30Z [verbose] DEL starting CNI request ContainerID:"e1b8b9555bd24d97227d4c4d3c9d61794f42a04b0ad9a23c933d58fe5e0eccb0" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e1b8b9555bd24d97227d4c4d3c9d61794f42a04b0ad9a23c933d58fe5e0eccb0;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"" 2025-08-13T20:00:30.851936702+00:00 stderr F 2025-08-13T20:00:30Z [verbose] Del: openshift-console:console-84fccc7b6-mkncc:b233d916-bfe3-4ae5-ae39-6b574d1aa05e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:30.863932404+00:00 stderr F 2025-08-13T20:00:30Z [verbose] ADD finished CNI request ContainerID:"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4" 
Netns:"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-7cbd5666ff-bbfrf;K8S_POD_INFRA_CONTAINER_ID=958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4;K8S_POD_UID=42b6a393-6194-4620-bf8f-7e4b6cbe5679" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"72:4f:d8:41:fe:b2\",\"name\":\"958ba1ee8e9afa1\"},{\"mac\":\"0a:58:0a:d9:00:26\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda\"}],\"ips\":[{\"address\":\"10.217.0.38/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:30.912946971+00:00 stderr F 2025-08-13T20:00:30Z [verbose] Add: openshift-image-registry:image-registry-75779c45fd-v2j2v:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7356b549b0982e9","mac":"1a:15:48:83:02:58"},{"name":"eth0","mac":"0a:58:0a:d9:00:3b","sandbox":"/var/run/netns/c953bf54-7cc8-4003-9703-2c7a471aed7f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.59/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:30.912946971+00:00 stderr F I0813 20:00:30.912354 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-75779c45fd-v2j2v", UID:"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319", APIVersion:"v1", ResourceVersion:"29604", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.59/23] from ovn-kubernetes 2025-08-13T20:00:30.967560289+00:00 stderr F 2025-08-13T20:00:30Z [verbose] ADD finished CNI request ContainerID:"7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e" Netns:"/var/run/netns/c953bf54-7cc8-4003-9703-2c7a471aed7f" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1a:15:48:83:02:58\",\"name\":\"7356b549b0982e9\"},{\"mac\":\"0a:58:0a:d9:00:3b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c953bf54-7cc8-4003-9703-2c7a471aed7f\"}],\"ips\":[{\"address\":\"10.217.0.59/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:31.002266648+00:00 stderr P 2025-08-13T20:00:30Z [verbose] 2025-08-13T20:00:31.002340550+00:00 stderr P Add: openshift-controller-manager:controller-manager-78589965b8-vmcwt:00d32440-4cce-4609-96f3-51ac94480aab:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"97945bb2ed21e57","mac":"de:38:c2:b4:d8:3a"},{"name":"eth0","mac":"0a:58:0a:d9:00:3c","sandbox":"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.60/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:31.002466734+00:00 stderr F 2025-08-13T20:00:31.003170114+00:00 stderr F I0813 20:00:31.003086 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-78589965b8-vmcwt", UID:"00d32440-4cce-4609-96f3-51ac94480aab", APIVersion:"v1", ResourceVersion:"29670", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.60/23] from ovn-kubernetes 2025-08-13T20:00:31.082701722+00:00 stderr F 2025-08-13T20:00:31Z [verbose] ADD finished CNI request ContainerID:"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9" Netns:"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-78589965b8-vmcwt;K8S_POD_INFRA_CONTAINER_ID=97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9;K8S_POD_UID=00d32440-4cce-4609-96f3-51ac94480aab" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"de:38:c2:b4:d8:3a\",\"name\":\"97945bb2ed21e57\"},{\"mac\":\"0a:58:0a:d9:00:3c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52\"}],\"ips\":[{\"address\":\"10.217.0.60/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:32.195444221+00:00 stderr F 2025-08-13T20:00:32Z [verbose] DEL finished CNI request ContainerID:"e1b8b9555bd24d97227d4c4d3c9d61794f42a04b0ad9a23c933d58fe5e0eccb0" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e1b8b9555bd24d97227d4c4d3c9d61794f42a04b0ad9a23c933d58fe5e0eccb0;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"", result: "", err: 2025-08-13T20:00:32.558349439+00:00 stderr F 2025-08-13T20:00:32Z [verbose] DEL starting CNI request ContainerID:"9f59b16bbb5c3dd1084a5709e4c6d5ef7d05038b8e180f5a91a326c80990cac2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=9f59b16bbb5c3dd1084a5709e4c6d5ef7d05038b8e180f5a91a326c80990cac2;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"" 2025-08-13T20:00:32.651056871+00:00 stderr F 2025-08-13T20:00:32Z [verbose] Del: openshift-marketplace:community-operators-8jhz6:3f4dca86-e6ee-4ec9-8324-86aff960225e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 
2025-08-13T20:00:34.140405589+00:00 stderr F 2025-08-13T20:00:34Z [verbose] ADD starting CNI request ContainerID:"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e" Netns:"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-846977c6bc-7gjhh;K8S_POD_INFRA_CONTAINER_ID=7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e;K8S_POD_UID=ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" Path:"" 2025-08-13T20:00:34.673187000+00:00 stderr F 2025-08-13T20:00:34Z [verbose] DEL finished CNI request ContainerID:"9f59b16bbb5c3dd1084a5709e4c6d5ef7d05038b8e180f5a91a326c80990cac2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=9f59b16bbb5c3dd1084a5709e4c6d5ef7d05038b8e180f5a91a326c80990cac2;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"", result: "", err: 2025-08-13T20:00:34.835387915+00:00 stderr F 2025-08-13T20:00:34Z [verbose] DEL starting CNI request ContainerID:"fdb3fe17bdce79dd11ccd5d25e01662bcafa8f777956cea880dd33473ddc0bd7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=fdb3fe17bdce79dd11ccd5d25e01662bcafa8f777956cea880dd33473ddc0bd7;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"" 2025-08-13T20:00:34.844419193+00:00 stderr F 2025-08-13T20:00:34Z [verbose] Del: openshift-multus:multus-admission-controller-6c7c885997-4hbbc:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:35.948650999+00:00 stderr P 2025-08-13T20:00:35Z 
[verbose] 2025-08-13T20:00:35.948741002+00:00 stderr P Add: openshift-route-controller-manager:route-controller-manager-846977c6bc-7gjhh:ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7b8bdc9f188dc33","mac":"76:3b:e8:1c:21:23"},{"name":"eth0","mac":"0a:58:0a:d9:00:41","sandbox":"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.65/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:35.948823624+00:00 stderr F 2025-08-13T20:00:35.967116546+00:00 stderr F I0813 20:00:35.953067 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-846977c6bc-7gjhh", UID:"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d", APIVersion:"v1", ResourceVersion:"29760", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.65/23] from ovn-kubernetes 2025-08-13T20:00:36.106877411+00:00 stderr F 2025-08-13T20:00:36Z [verbose] DEL finished CNI request ContainerID:"fdb3fe17bdce79dd11ccd5d25e01662bcafa8f777956cea880dd33473ddc0bd7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=fdb3fe17bdce79dd11ccd5d25e01662bcafa8f777956cea880dd33473ddc0bd7;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"", result: "", err: 2025-08-13T20:00:36.200960723+00:00 stderr P 2025-08-13T20:00:36Z [verbose] 2025-08-13T20:00:36.201043076+00:00 stderr P DEL starting CNI request ContainerID:"9ac47df691313e3ec13649d206884e8233eda77a32f30da60cc55f7d0cc657a6" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=9ac47df691313e3ec13649d206884e8233eda77a32f30da60cc55f7d0cc657a6;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"" 2025-08-13T20:00:36.201067356+00:00 stderr F 
2025-08-13T20:00:36.214635213+00:00 stderr P 2025-08-13T20:00:36Z [verbose] 2025-08-13T20:00:36.214699455+00:00 stderr P Del: openshift-console:downloads-65476884b9-9wcvx:6268b7fe-8910-4505-b404-6f1df638105c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:36.214723896+00:00 stderr F 2025-08-13T20:00:36.550105928+00:00 stderr F 2025-08-13T20:00:36Z [verbose] ADD finished CNI request ContainerID:"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e" Netns:"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-846977c6bc-7gjhh;K8S_POD_INFRA_CONTAINER_ID=7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e;K8S_POD_UID=ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"76:3b:e8:1c:21:23\",\"name\":\"7b8bdc9f188dc33\"},{\"mac\":\"0a:58:0a:d9:00:41\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28\"}],\"ips\":[{\"address\":\"10.217.0.65/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:37.125114874+00:00 stderr F 2025-08-13T20:00:37Z [verbose] DEL finished CNI request ContainerID:"9ac47df691313e3ec13649d206884e8233eda77a32f30da60cc55f7d0cc657a6" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=9ac47df691313e3ec13649d206884e8233eda77a32f30da60cc55f7d0cc657a6;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"", result: "", err: 2025-08-13T20:00:37.215378848+00:00 stderr F 2025-08-13T20:00:37Z [verbose] DEL starting CNI request 
ContainerID:"fccee2e1608ededc00abd9b8a844dcbb4ee8e3e2d8d12e5cbe4114be73cca5eb" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=fccee2e1608ededc00abd9b8a844dcbb4ee8e3e2d8d12e5cbe4114be73cca5eb;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"" 2025-08-13T20:00:37.262763049+00:00 stderr F 2025-08-13T20:00:37Z [verbose] Del: openshift-controller-manager:controller-manager-6ff78978b4-q4vv8:unknownUID:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:38.651627631+00:00 stderr P 2025-08-13T20:00:38Z [verbose] 2025-08-13T20:00:38.651700183+00:00 stderr P DEL finished CNI request ContainerID:"fccee2e1608ededc00abd9b8a844dcbb4ee8e3e2d8d12e5cbe4114be73cca5eb" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6ff78978b4-q4vv8;K8S_POD_INFRA_CONTAINER_ID=fccee2e1608ededc00abd9b8a844dcbb4ee8e3e2d8d12e5cbe4114be73cca5eb;K8S_POD_UID=87df87f4-ba66-4137-8e41-1fa632ad4207" Path:"", result: "", err: 2025-08-13T20:00:38.651743074+00:00 stderr F 2025-08-13T20:00:38.821651149+00:00 stderr P 2025-08-13T20:00:38Z [verbose] 2025-08-13T20:00:38.821763572+00:00 stderr P ADD starting CNI request ContainerID:"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Netns:"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-7-crc;K8S_POD_INFRA_CONTAINER_ID=639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab;K8S_POD_UID=b57cce81-8ea0-4c4d-aae1-ee024d201c15" Path:"" 2025-08-13T20:00:38.827408263+00:00 stderr F 
2025-08-13T20:00:40.655755536+00:00 stderr F 2025-08-13T20:00:40Z [verbose] DEL starting CNI request ContainerID:"16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61;K8S_POD_UID=7fc6841e-1b13-42dc-8470-506b09b9d82d" Path:"" 2025-08-13T20:00:40.655755536+00:00 stderr F 2025-08-13T20:00:40Z [error] Multus: failed to get the cached delegates file: open /var/lib/cni/multus/16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61: no such file or directory, cannot properly delete 2025-08-13T20:00:40.655755536+00:00 stderr F 2025-08-13T20:00:40Z [verbose] DEL finished CNI request ContainerID:"16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=16950a4b107083363af8e2aead82e0fa59f887a15781ee3d28ad0f1ab990af61;K8S_POD_UID=7fc6841e-1b13-42dc-8470-506b09b9d82d" Path:"", result: "", err: 2025-08-13T20:00:41.080083045+00:00 stderr F 2025-08-13T20:00:41Z [verbose] DEL starting CNI request ContainerID:"8be7a9b51f6314aeef996acf7469da8aaceeb411905168e5ca18ca82de940a1d" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=8be7a9b51f6314aeef996acf7469da8aaceeb411905168e5ca18ca82de940a1d;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"" 2025-08-13T20:00:41.080083045+00:00 stderr F 2025-08-13T20:00:41Z [verbose] Del: openshift-oauth-apiserver:apiserver-69c565c9b6-vbdpd:5bacb25d-97b6-4491-8fb4-99feae1d802a:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:42.117281860+00:00 stderr F 2025-08-13T20:00:42Z [verbose] ADD starting CNI request ContainerID:"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5" Netns:"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-10-crc;K8S_POD_INFRA_CONTAINER_ID=c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5;K8S_POD_UID=2f155735-a9be-4621-a5f2-5ab4b6957acd" Path:"" 2025-08-13T20:00:42.225946918+00:00 stderr F 2025-08-13T20:00:42Z [verbose] DEL starting CNI request ContainerID:"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141" Netns:"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"" 2025-08-13T20:00:42.228035068+00:00 stderr F 2025-08-13T20:00:42Z [verbose] Del: openshift-authentication:oauth-openshift-765b47f944-n2lhl:13ad7555-5f28-4555-a563-892713a8433a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:42.356362727+00:00 stderr F 2025-08-13T20:00:42Z [verbose] DEL finished CNI request ContainerID:"8be7a9b51f6314aeef996acf7469da8aaceeb411905168e5ca18ca82de940a1d" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=8be7a9b51f6314aeef996acf7469da8aaceeb411905168e5ca18ca82de940a1d;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"", result: "", err: 2025-08-13T20:00:42.478464869+00:00 stderr F 2025-08-13T20:00:42Z [verbose] Add: openshift-kube-scheduler:installer-7-crc:b57cce81-8ea0-4c4d-aae1-ee024d201c15:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"639e0e9093fe7c9","mac":"d2:b3:7b:29:cb:9c"},{"name":"eth0","mac":"0a:58:0a:d9:00:43","sandbox":"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.67/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:42.479200910+00:00 stderr F I0813 20:00:42.479162 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-scheduler", Name:"installer-7-crc", UID:"b57cce81-8ea0-4c4d-aae1-ee024d201c15", APIVersion:"v1", ResourceVersion:"29872", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.67/23] from ovn-kubernetes 2025-08-13T20:00:42.730766093+00:00 stderr P 2025-08-13T20:00:42Z [verbose] 2025-08-13T20:00:42.730891296+00:00 stderr P DEL starting CNI request ContainerID:"da2c22b4e9fecd84ca52faa8c6c1fc46d66b1ff333890e8fe52c11000a224c02" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-585546dd8b-v5m4t;K8S_POD_INFRA_CONTAINER_ID=da2c22b4e9fecd84ca52faa8c6c1fc46d66b1ff333890e8fe52c11000a224c02;K8S_POD_UID=c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Path:"" 2025-08-13T20:00:42.730916937+00:00 stderr F 2025-08-13T20:00:42.941020918+00:00 stderr F 2025-08-13T20:00:42Z [verbose] Add: openshift-kube-controller-manager:revision-pruner-10-crc:2f155735-a9be-4621-a5f2-5ab4b6957acd:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"c05ff35bd00034f","mac":"d2:80:43:7f:e4:85"},{"name":"eth0","mac":"0a:58:0a:d9:00:44","sandbox":"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.68/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:42.944025443+00:00 stderr F I0813 20:00:42.941635 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"revision-pruner-10-crc", UID:"2f155735-a9be-4621-a5f2-5ab4b6957acd", APIVersion:"v1", ResourceVersion:"29896", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.68/23] from ovn-kubernetes 2025-08-13T20:00:42.950006964+00:00 stderr F 2025-08-13T20:00:42Z [verbose] DEL finished CNI request ContainerID:"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141" Netns:"/var/run/netns/0e871511-fda3-41d1-ad6f-2bcfcfa2481d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"", result: "", err: 2025-08-13T20:00:42.992405573+00:00 stderr P 2025-08-13T20:00:42Z [verbose] 2025-08-13T20:00:42.992465665+00:00 stderr P ADD starting CNI request ContainerID:"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Netns:"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-crc;K8S_POD_INFRA_CONTAINER_ID=c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc;K8S_POD_UID=79050916-d488-4806-b556-1b0078b31e53" Path:"" 2025-08-13T20:00:42.992498166+00:00 stderr F 2025-08-13T20:00:43.013136444+00:00 stderr F 2025-08-13T20:00:43Z [verbose] Del: openshift-image-registry:image-registry-585546dd8b-v5m4t:unknownUID:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:43.393009776+00:00 stderr F 2025-08-13T20:00:43Z [verbose] DEL finished CNI request ContainerID:"da2c22b4e9fecd84ca52faa8c6c1fc46d66b1ff333890e8fe52c11000a224c02" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-585546dd8b-v5m4t;K8S_POD_INFRA_CONTAINER_ID=da2c22b4e9fecd84ca52faa8c6c1fc46d66b1ff333890e8fe52c11000a224c02;K8S_POD_UID=c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Path:"", result: "", err: 2025-08-13T20:00:43.421580150+00:00 stderr P 2025-08-13T20:00:43Z [verbose] 2025-08-13T20:00:43.421641321+00:00 stderr P Add: openshift-kube-controller-manager:installer-10-crc:79050916-d488-4806-b556-1b0078b31e53:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c5d98545d20b610","mac":"1e:0a:7c:9b:81:94"},{"name":"eth0","mac":"0a:58:0a:d9:00:45","sandbox":"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.69/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:43.421713993+00:00 stderr F 2025-08-13T20:00:43.422160696+00:00 stderr F I0813 20:00:43.422095 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"installer-10-crc", UID:"79050916-d488-4806-b556-1b0078b31e53", APIVersion:"v1", ResourceVersion:"29912", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.69/23] from ovn-kubernetes 2025-08-13T20:00:43.680240285+00:00 stderr F 2025-08-13T20:00:43Z [verbose] DEL starting CNI request ContainerID:"5e64da46150ebb40cdd4ecb78b0a375db61e971c6e24c5e8eaf23820a89589dc" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=5e64da46150ebb40cdd4ecb78b0a375db61e971c6e24c5e8eaf23820a89589dc;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"" 2025-08-13T20:00:43.758502797+00:00 stderr F 2025-08-13T20:00:43Z [verbose] Del: openshift-image-registry:cluster-image-registry-operator-7769bd8d7d-q5cvv:b54e8941-2fc4-432a-9e51-39684df9089e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:44.114893609+00:00 stderr F 2025-08-13T20:00:44Z [verbose] DEL finished CNI request ContainerID:"5e64da46150ebb40cdd4ecb78b0a375db61e971c6e24c5e8eaf23820a89589dc" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=5e64da46150ebb40cdd4ecb78b0a375db61e971c6e24c5e8eaf23820a89589dc;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"", result: "", err: 2025-08-13T20:00:44.184127013+00:00 stderr F 2025-08-13T20:00:44Z [verbose] DEL starting CNI request ContainerID:"a44b069f93eedf98b5b6f9e5b936bcc32d50e0c5c9c62cef520ab0ba979214bf" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=a44b069f93eedf98b5b6f9e5b936bcc32d50e0c5c9c62cef520ab0ba979214bf;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"" 2025-08-13T20:00:44.217335970+00:00 stderr F 2025-08-13T20:00:44Z [verbose] Del: openshift-operator-lifecycle-manager:olm-operator-6d8474f75f-x54mh:c085412c-b875-46c9-ae3e-e6b0d8067091:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:45.211890339+00:00 stderr F 2025-08-13T20:00:45Z [verbose] DEL finished CNI request ContainerID:"a44b069f93eedf98b5b6f9e5b936bcc32d50e0c5c9c62cef520ab0ba979214bf" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=a44b069f93eedf98b5b6f9e5b936bcc32d50e0c5c9c62cef520ab0ba979214bf;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"", result: "", err: 2025-08-13T20:00:45.672314437+00:00 stderr P 2025-08-13T20:00:45Z [verbose] 2025-08-13T20:00:45.672434441+00:00 stderr P DEL starting CNI request ContainerID:"5ee90cc93a8f50d5f16df1056e015fc09ea8ce98eba3651d2af537405414ffe7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=5ee90cc93a8f50d5f16df1056e015fc09ea8ce98eba3651d2af537405414ffe7;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"" 2025-08-13T20:00:45.672463122+00:00 stderr F 2025-08-13T20:00:45.697065183+00:00 stderr P 2025-08-13T20:00:45Z [verbose] 2025-08-13T20:00:45.697150805+00:00 stderr P Del: openshift-controller-manager-operator:openshift-controller-manager-operator-7978d7d7f6-2nt8z:0f394926-bdb9-425c-b36e-264d7fd34550:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:45.697175946+00:00 stderr F 2025-08-13T20:00:45.805306359+00:00 stderr F 2025-08-13T20:00:45Z [verbose] ADD finished CNI 
request ContainerID:"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Netns:"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-crc;K8S_POD_INFRA_CONTAINER_ID=c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc;K8S_POD_UID=79050916-d488-4806-b556-1b0078b31e53" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1e:0a:7c:9b:81:94\",\"name\":\"c5d98545d20b610\"},{\"mac\":\"0a:58:0a:d9:00:45\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f\"}],\"ips\":[{\"address\":\"10.217.0.69/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:45.911488567+00:00 stderr F 2025-08-13T20:00:45Z [verbose] ADD finished CNI request ContainerID:"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5" Netns:"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-10-crc;K8S_POD_INFRA_CONTAINER_ID=c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5;K8S_POD_UID=2f155735-a9be-4621-a5f2-5ab4b6957acd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:80:43:7f:e4:85\",\"name\":\"c05ff35bd00034f\"},{\"mac\":\"0a:58:0a:d9:00:44\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5\"}],\"ips\":[{\"address\":\"10.217.0.68/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:45.911685283+00:00 stderr F 2025-08-13T20:00:45Z [verbose] ADD finished CNI request ContainerID:"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Netns:"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-7-crc;K8S_POD_INFRA_CONTAINER_ID=639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab;K8S_POD_UID=b57cce81-8ea0-4c4d-aae1-ee024d201c15" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:b3:7b:29:cb:9c\",\"name\":\"639e0e9093fe7c9\"},{\"mac\":\"0a:58:0a:d9:00:43\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11\"}],\"ips\":[{\"address\":\"10.217.0.67/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:46.769819461+00:00 stderr F 2025-08-13T20:00:46Z [verbose] ADD starting CNI request ContainerID:"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Netns:"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-jjfds;K8S_POD_INFRA_CONTAINER_ID=411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58;K8S_POD_UID=b23d6435-6431-4905-b41b-a517327385e5" Path:"" 2025-08-13T20:00:47.097347730+00:00 stderr F 2025-08-13T20:00:47Z [verbose] DEL finished CNI request ContainerID:"5ee90cc93a8f50d5f16df1056e015fc09ea8ce98eba3651d2af537405414ffe7" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=5ee90cc93a8f50d5f16df1056e015fc09ea8ce98eba3651d2af537405414ffe7;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"", result: "", err: 2025-08-13T20:00:47.471940481+00:00 stderr F 2025-08-13T20:00:47Z [verbose] DEL starting CNI request ContainerID:"7cbee94a0019287163f034af6eaf026b22eda36296fee7ada8ec1f0757be2148" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=7cbee94a0019287163f034af6eaf026b22eda36296fee7ada8ec1f0757be2148;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"" 2025-08-13T20:00:47.557602813+00:00 stderr F 2025-08-13T20:00:47Z [verbose] Del: openshift-console-operator:console-conversion-webhook-595f9969b-l6z49:59748b9b-c309-4712-aa85-bb38d71c4915:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:50.754210741+00:00 stderr F 2025-08-13T20:00:50Z [verbose] Add: openshift-apiserver:apiserver-67cbf64bc9-jjfds:b23d6435-6431-4905-b41b-a517327385e5:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"411add17e78de78","mac":"86:a5:06:68:9f:58"},{"name":"eth0","mac":"0a:58:0a:d9:00:46","sandbox":"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.70/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:50.754265362+00:00 stderr F I0813 20:00:50.754227 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-67cbf64bc9-jjfds", UID:"b23d6435-6431-4905-b41b-a517327385e5", APIVersion:"v1", ResourceVersion:"29962", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.70/23] from ovn-kubernetes 2025-08-13T20:00:50.832530764+00:00 stderr F 2025-08-13T20:00:50Z [verbose] DEL finished CNI request ContainerID:"7cbee94a0019287163f034af6eaf026b22eda36296fee7ada8ec1f0757be2148" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=7cbee94a0019287163f034af6eaf026b22eda36296fee7ada8ec1f0757be2148;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"", result: "", err: 2025-08-13T20:00:51.690436508+00:00 stderr F 2025-08-13T20:00:51Z [verbose] DEL starting CNI request ContainerID:"c64580af653a5c82ccb1aa4cec3fd7112e5c51f08e77a2cb0dd795ea7b69f906" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=c64580af653a5c82ccb1aa4cec3fd7112e5c51f08e77a2cb0dd795ea7b69f906;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"" 2025-08-13T20:00:51.757410636+00:00 stderr F 2025-08-13T20:00:51Z [verbose] Del: openshift-apiserver:apiserver-67cbf64bc9-mtx25:unknownUID:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:52.963080514+00:00 stderr F 2025-08-13T20:00:52Z [verbose] ADD starting CNI request ContainerID:"ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404" Netns:"/var/run/netns/2ae85cbd-71a6-416a-8db5-ee7ccfb3e931" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-74fc7c67cc-xqf8b;K8S_POD_INFRA_CONTAINER_ID=ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404;K8S_POD_UID=01feb2e0-a0f4-4573-8335-34e364e0ef40" Path:"" 2025-08-13T20:00:53.012391360+00:00 stderr F 2025-08-13T20:00:53Z [verbose] DEL finished CNI request ContainerID:"c64580af653a5c82ccb1aa4cec3fd7112e5c51f08e77a2cb0dd795ea7b69f906" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=c64580af653a5c82ccb1aa4cec3fd7112e5c51f08e77a2cb0dd795ea7b69f906;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"", result: "", err: 2025-08-13T20:00:54.999966983+00:00 stderr F 2025-08-13T20:00:54Z [verbose] Add: openshift-authentication:oauth-openshift-74fc7c67cc-xqf8b:01feb2e0-a0f4-4573-8335-34e364e0ef40:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ca33bd29c9a026f","mac":"d2:28:de:94:9a:bd"},{"name":"eth0","mac":"0a:58:0a:d9:00:48","sandbox":"/var/run/netns/2ae85cbd-71a6-416a-8db5-ee7ccfb3e931"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.72/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:00:54.999966983+00:00 stderr F I0813 20:00:54.997914 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication", Name:"oauth-openshift-74fc7c67cc-xqf8b", UID:"01feb2e0-a0f4-4573-8335-34e364e0ef40", APIVersion:"v1", ResourceVersion:"30093", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.72/23] from ovn-kubernetes 2025-08-13T20:00:55.784357859+00:00 stderr P 2025-08-13T20:00:55Z [verbose] 2025-08-13T20:00:55.784484773+00:00 stderr P DEL starting CNI request ContainerID:"df01448416fec1aac04076c4b97a2ab84a208f9c7e531a6eea8357072084eaef" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=df01448416fec1aac04076c4b97a2ab84a208f9c7e531a6eea8357072084eaef;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"" 2025-08-13T20:00:55.784519304+00:00 stderr F 2025-08-13T20:00:55.801439626+00:00 stderr P 2025-08-13T20:00:55Z [verbose] 2025-08-13T20:00:55.801514399+00:00 stderr P Del: openshift-dns-operator:dns-operator-75f687757b-nz2xb:10603adc-d495-423c-9459-4caa405960bb:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:55.801574770+00:00 stderr F 2025-08-13T20:00:57.685908652+00:00 stderr P 2025-08-13T20:00:57Z [verbose] 2025-08-13T20:00:57.686257362+00:00 stderr P DEL finished CNI request ContainerID:"df01448416fec1aac04076c4b97a2ab84a208f9c7e531a6eea8357072084eaef" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=df01448416fec1aac04076c4b97a2ab84a208f9c7e531a6eea8357072084eaef;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"", result: "", err: 2025-08-13T20:00:57.686300333+00:00 stderr F 2025-08-13T20:00:59.625135578+00:00 stderr F 2025-08-13T20:00:59Z [verbose] DEL starting CNI request ContainerID:"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef" Netns:"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef;K8S_POD_UID=227e3650-2a85-4229-8099-bb53972635b2" Path:"" 2025-08-13T20:00:59.626535218+00:00 stderr F 2025-08-13T20:00:59Z [verbose] Del: openshift-kube-controller-manager:installer-9-crc:227e3650-2a85-4229-8099-bb53972635b2:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:00:59.949886749+00:00 stderr F 2025-08-13T20:00:59Z [verbose] ADD finished CNI request ContainerID:"ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404" 
Netns:"/var/run/netns/2ae85cbd-71a6-416a-8db5-ee7ccfb3e931" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-74fc7c67cc-xqf8b;K8S_POD_INFRA_CONTAINER_ID=ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404;K8S_POD_UID=01feb2e0-a0f4-4573-8335-34e364e0ef40" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:28:de:94:9a:bd\",\"name\":\"ca33bd29c9a026f\"},{\"mac\":\"0a:58:0a:d9:00:48\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2ae85cbd-71a6-416a-8db5-ee7ccfb3e931\"}],\"ips\":[{\"address\":\"10.217.0.72/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:00:59.955906181+00:00 stderr F 2025-08-13T20:00:59Z [verbose] ADD finished CNI request ContainerID:"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Netns:"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-jjfds;K8S_POD_INFRA_CONTAINER_ID=411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58;K8S_POD_UID=b23d6435-6431-4905-b41b-a517327385e5" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"86:a5:06:68:9f:58\",\"name\":\"411add17e78de78\"},{\"mac\":\"0a:58:0a:d9:00:46\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b\"}],\"ips\":[{\"address\":\"10.217.0.70/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:01:00.262048961+00:00 stderr P 2025-08-13T20:01:00Z [verbose] 2025-08-13T20:01:00.262101852+00:00 stderr P DEL starting CNI request ContainerID:"2ae90acd2995b1dcc37ed6119a29512e6665ccfe374f811d7650ee274489ed91" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=2ae90acd2995b1dcc37ed6119a29512e6665ccfe374f811d7650ee274489ed91;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"" 2025-08-13T20:01:00.262127363+00:00 stderr F 2025-08-13T20:01:00.306681903+00:00 stderr P 2025-08-13T20:01:00Z [verbose] 2025-08-13T20:01:00.306880019+00:00 stderr P Del: openshift-kube-apiserver-operator:kube-apiserver-operator-78d54458c4-sc8h7:ed024e5d-8fc2-4c22-803d-73f3c9795f19:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:00.306920850+00:00 stderr F 2025-08-13T20:01:01.496994785+00:00 stderr F 2025-08-13T20:01:01Z [verbose] DEL finished CNI request ContainerID:"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef" Netns:"/var/run/netns/f3d4ee0f-cb95-4404-998d-8234df0e9330" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef;K8S_POD_UID=227e3650-2a85-4229-8099-bb53972635b2" Path:"", result: "", err: 2025-08-13T20:01:01.818888194+00:00 stderr F 2025-08-13T20:01:01Z [verbose] DEL finished CNI request ContainerID:"2ae90acd2995b1dcc37ed6119a29512e6665ccfe374f811d7650ee274489ed91" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=2ae90acd2995b1dcc37ed6119a29512e6665ccfe374f811d7650ee274489ed91;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"", result: "", err: 2025-08-13T20:01:04.274439165+00:00 stderr P 2025-08-13T20:01:04Z [verbose] 
2025-08-13T20:01:04.274510537+00:00 stderr P DEL starting CNI request ContainerID:"432a44023c96c4a190a4d9e4aaade94c463767d2c03338ff6c3eec88593678b0" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=432a44023c96c4a190a4d9e4aaade94c463767d2c03338ff6c3eec88593678b0;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"" 2025-08-13T20:01:04.274656522+00:00 stderr F 2025-08-13T20:01:04.279231072+00:00 stderr P 2025-08-13T20:01:04Z [verbose] 2025-08-13T20:01:04.279270203+00:00 stderr P Del: openshift-multus:network-metrics-daemon-qdfr4:a702c6d2-4dde-4077-ab8c-0f8df804bf7a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:04.279294144+00:00 stderr F 2025-08-13T20:01:06.005889996+00:00 stderr P 2025-08-13T20:01:06Z [verbose] 2025-08-13T20:01:06.006018259+00:00 stderr P DEL finished CNI request ContainerID:"432a44023c96c4a190a4d9e4aaade94c463767d2c03338ff6c3eec88593678b0" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=432a44023c96c4a190a4d9e4aaade94c463767d2c03338ff6c3eec88593678b0;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"", result: "", err: 2025-08-13T20:01:06.006046850+00:00 stderr F 2025-08-13T20:01:07.641698699+00:00 stderr F 2025-08-13T20:01:07Z [verbose] DEL starting CNI request ContainerID:"33a7a183a635cac6a4eb98a37022992dab3d3a030061cdaba0e740f17db9067e" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=33a7a183a635cac6a4eb98a37022992dab3d3a030061cdaba0e740f17db9067e;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"" 2025-08-13T20:01:07.655044230+00:00 stderr F 2025-08-13T20:01:07Z [verbose] Del: openshift-etcd-operator:etcd-operator-768d5b5d86-722mg:0b5c38ff-1fa8-4219-994d-15776acd4a4d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:11.315883054+00:00 stderr F 2025-08-13T20:01:11Z [verbose] DEL finished CNI request ContainerID:"33a7a183a635cac6a4eb98a37022992dab3d3a030061cdaba0e740f17db9067e" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=33a7a183a635cac6a4eb98a37022992dab3d3a030061cdaba0e740f17db9067e;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"", result: "", err: 2025-08-13T20:01:11.611486543+00:00 stderr P 2025-08-13T20:01:11Z [verbose] 2025-08-13T20:01:11.611557375+00:00 stderr P DEL starting CNI request ContainerID:"dd9ec529de53736ffb1b837b53f817960d4759ad27cbf8ae8f5117fbdaa896c1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=dd9ec529de53736ffb1b837b53f817960d4759ad27cbf8ae8f5117fbdaa896c1;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"" 2025-08-13T20:01:11.611620457+00:00 stderr F 2025-08-13T20:01:11.640926123+00:00 stderr P 2025-08-13T20:01:11Z [verbose] 2025-08-13T20:01:11.641012095+00:00 stderr P Del: 
openshift-operator-lifecycle-manager:package-server-manager-84d578d794-jw7r2:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:11.641042516+00:00 stderr F 2025-08-13T20:01:12.677367365+00:00 stderr P 2025-08-13T20:01:12Z [verbose] 2025-08-13T20:01:12.677746566+00:00 stderr P DEL finished CNI request ContainerID:"dd9ec529de53736ffb1b837b53f817960d4759ad27cbf8ae8f5117fbdaa896c1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=dd9ec529de53736ffb1b837b53f817960d4759ad27cbf8ae8f5117fbdaa896c1;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"", result: "", err: 2025-08-13T20:01:12.677957122+00:00 stderr F 2025-08-13T20:01:15.978545495+00:00 stderr F 2025-08-13T20:01:15Z [verbose] DEL starting CNI request ContainerID:"0bd5c29e77a3037aae2fa96b544efc5ac7e1fb73a7a339a2f0f8a9e30d2f62a1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=0bd5c29e77a3037aae2fa96b544efc5ac7e1fb73a7a339a2f0f8a9e30d2f62a1;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"" 2025-08-13T20:01:16.068195151+00:00 stderr P 2025-08-13T20:01:16Z [verbose] 2025-08-13T20:01:16.068257523+00:00 stderr P DEL starting CNI request ContainerID:"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5" Netns:"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-10-crc;K8S_POD_INFRA_CONTAINER_ID=c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5;K8S_POD_UID=2f155735-a9be-4621-a5f2-5ab4b6957acd" Path:"" 2025-08-13T20:01:16.068284344+00:00 stderr F 2025-08-13T20:01:16.068958033+00:00 stderr P 2025-08-13T20:01:16Z [verbose] 2025-08-13T20:01:16.069011444+00:00 stderr P Del: openshift-kube-controller-manager:revision-pruner-10-crc:2f155735-a9be-4621-a5f2-5ab4b6957acd:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:16.069044455+00:00 stderr F 2025-08-13T20:01:16.129297173+00:00 stderr F 2025-08-13T20:01:16Z [verbose] Del: openshift-console-operator:console-operator-5dbbc74dc9-cp5cd:e9127708-ccfd-4891-8a3a-f0cacb77e0f4:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:17.040373572+00:00 stderr F 2025-08-13T20:01:17Z [verbose] DEL finished CNI request ContainerID:"0bd5c29e77a3037aae2fa96b544efc5ac7e1fb73a7a339a2f0f8a9e30d2f62a1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=0bd5c29e77a3037aae2fa96b544efc5ac7e1fb73a7a339a2f0f8a9e30d2f62a1;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"", result: "", err: 2025-08-13T20:01:17.168158526+00:00 stderr P 2025-08-13T20:01:17Z [verbose] 2025-08-13T20:01:17.168284669+00:00 stderr P DEL finished CNI request 
ContainerID:"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5" Netns:"/var/run/netns/fd0a99e4-8ce6-4c16-9b4e-aec3188878d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-10-crc;K8S_POD_INFRA_CONTAINER_ID=c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5;K8S_POD_UID=2f155735-a9be-4621-a5f2-5ab4b6957acd" Path:"", result: "", err: 2025-08-13T20:01:17.168357521+00:00 stderr F 2025-08-13T20:01:18.589913375+00:00 stderr F 2025-08-13T20:01:18Z [verbose] DEL starting CNI request ContainerID:"878c7e3ae7a33039e3715e057a430612aebe8e58d774871e2d8a818ea3d7c85f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=878c7e3ae7a33039e3715e057a430612aebe8e58d774871e2d8a818ea3d7c85f;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"" 2025-08-13T20:01:18.595353010+00:00 stderr F 2025-08-13T20:01:18Z [verbose] Del: openshift-network-diagnostics:network-check-target-v54bt:34a48baf-1bee-4921-8bb2-9b7320e76f79:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:19.197903550+00:00 stderr F 2025-08-13T20:01:19Z [verbose] DEL finished CNI request ContainerID:"878c7e3ae7a33039e3715e057a430612aebe8e58d774871e2d8a818ea3d7c85f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=878c7e3ae7a33039e3715e057a430612aebe8e58d774871e2d8a818ea3d7c85f;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"", result: "", err: 2025-08-13T20:01:19.430217025+00:00 stderr P 2025-08-13T20:01:19Z [verbose] 2025-08-13T20:01:19.430279626+00:00 stderr P DEL 
starting CNI request ContainerID:"21796c1f7342724e32e54ca11b541e1cda8e05849122530266a7b12a23cc0fe1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=21796c1f7342724e32e54ca11b541e1cda8e05849122530266a7b12a23cc0fe1;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"" 2025-08-13T20:01:19.430305647+00:00 stderr F 2025-08-13T20:01:19.469115584+00:00 stderr P 2025-08-13T20:01:19Z [verbose] 2025-08-13T20:01:19.469181956+00:00 stderr P Del: openshift-kube-storage-version-migrator:migrator-f7c6d88df-q2fnv:cf1a8966-f594-490a-9fbb-eec5bafd13d3:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:19.469207707+00:00 stderr F 2025-08-13T20:01:20.235586379+00:00 stderr F 2025-08-13T20:01:20Z [verbose] DEL finished CNI request ContainerID:"21796c1f7342724e32e54ca11b541e1cda8e05849122530266a7b12a23cc0fe1" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=21796c1f7342724e32e54ca11b541e1cda8e05849122530266a7b12a23cc0fe1;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"", result: "", err: 2025-08-13T20:01:20.796826942+00:00 stderr F 2025-08-13T20:01:20Z [verbose] DEL starting CNI request ContainerID:"2904aa4ee10011260ecc56661b111d456503de7780da1835b59281d4e8da37e6" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=2904aa4ee10011260ecc56661b111d456503de7780da1835b59281d4e8da37e6;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"" 2025-08-13T20:01:20.829113883+00:00 stderr F 
2025-08-13T20:01:20Z [verbose] Del: openshift-machine-config-operator:machine-config-operator-76788bff89-wkjgm:120b38dc-8236-4fa6-a452-642b8ad738ee:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:21.226157394+00:00 stderr F 2025-08-13T20:01:21Z [verbose] DEL finished CNI request ContainerID:"2904aa4ee10011260ecc56661b111d456503de7780da1835b59281d4e8da37e6" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=2904aa4ee10011260ecc56661b111d456503de7780da1835b59281d4e8da37e6;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"", result: "", err: 2025-08-13T20:01:22.043060597+00:00 stderr F 2025-08-13T20:01:22Z [verbose] DEL starting CNI request ContainerID:"3ec87f325b56dafac3c9f6517722f77891a8dcfe8d0e6aeff1e0fe4564dfa13c" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=3ec87f325b56dafac3c9f6517722f77891a8dcfe8d0e6aeff1e0fe4564dfa13c;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"" 2025-08-13T20:01:22.078591830+00:00 stderr F 2025-08-13T20:01:22Z [verbose] Del: openshift-network-diagnostics:network-check-source-5c5478f8c-vqvt7:d0f40333-c860-4c04-8058-a0bf572dcf12:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:22.231411208+00:00 stderr F 2025-08-13T20:01:22Z [verbose] ADD starting CNI request 
ContainerID:"48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107" Netns:"/var/run/netns/c01eb88d-c361-4ad4-aeb3-1fef2bcbda1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-644bb77b49-5x5xk;K8S_POD_INFRA_CONTAINER_ID=48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107;K8S_POD_UID=9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Path:"" 2025-08-13T20:01:22.447936762+00:00 stderr F 2025-08-13T20:01:22Z [verbose] DEL finished CNI request ContainerID:"3ec87f325b56dafac3c9f6517722f77891a8dcfe8d0e6aeff1e0fe4564dfa13c" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=3ec87f325b56dafac3c9f6517722f77891a8dcfe8d0e6aeff1e0fe4564dfa13c;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"", result: "", err: 2025-08-13T20:01:22.762623774+00:00 stderr F 2025-08-13T20:01:22Z [verbose] Add: openshift-console:console-644bb77b49-5x5xk:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"48ddb06f60b4f68","mac":"2e:65:98:07:c8:3f"},{"name":"eth0","mac":"0a:58:0a:d9:00:49","sandbox":"/var/run/netns/c01eb88d-c361-4ad4-aeb3-1fef2bcbda1d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.73/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:01:22.763233211+00:00 stderr F I0813 20:01:22.763087 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"console-644bb77b49-5x5xk", UID:"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1", APIVersion:"v1", ResourceVersion:"30362", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.73/23] from ovn-kubernetes 2025-08-13T20:01:22.999714184+00:00 stderr F 2025-08-13T20:01:22Z [verbose] DEL starting CNI request ContainerID:"2d2195d84ac2445cdc3e5a462627ae2f8ed35af7ac5aa5d370c6a2c32b9485b2" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=2d2195d84ac2445cdc3e5a462627ae2f8ed35af7ac5aa5d370c6a2c32b9485b2;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"" 2025-08-13T20:01:23.071366437+00:00 stderr F 2025-08-13T20:01:23Z [verbose] Del: openshift-machine-config-operator:machine-config-controller-6df6df6b6b-58shh:297ab9b6-2186-4d5b-a952-2bfd59af63c4:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:23.408762758+00:00 stderr F 2025-08-13T20:01:23Z [verbose] DEL finished CNI request ContainerID:"2d2195d84ac2445cdc3e5a462627ae2f8ed35af7ac5aa5d370c6a2c32b9485b2" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=2d2195d84ac2445cdc3e5a462627ae2f8ed35af7ac5aa5d370c6a2c32b9485b2;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"", result: "", err: 2025-08-13T20:01:23.633597139+00:00 stderr P 2025-08-13T20:01:23Z [verbose] 2025-08-13T20:01:23.634064242+00:00 stderr P ADD finished CNI request ContainerID:"48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107" Netns:"/var/run/netns/c01eb88d-c361-4ad4-aeb3-1fef2bcbda1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-644bb77b49-5x5xk;K8S_POD_INFRA_CONTAINER_ID=48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107;K8S_POD_UID=9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2e:65:98:07:c8:3f\",\"name\":\"48ddb06f60b4f68\"},{\"mac\":\"0a:58:0a:d9:00:49\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c01eb88d-c361-4ad4-aeb3-1fef2bcbda1d\"}],\"ips\":[{\"address\":\"10.217.0.73/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:01:23.634102513+00:00 stderr F 2025-08-13T20:01:23.794134056+00:00 stderr F 2025-08-13T20:01:23Z [verbose] DEL starting CNI request ContainerID:"2a222a6e16cc3bc99ae882453cf511604f6c8cf0f7727eca34e65c54c3de325f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=2a222a6e16cc3bc99ae882453cf511604f6c8cf0f7727eca34e65c54c3de325f;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"" 2025-08-13T20:01:23.798542252+00:00 stderr F 2025-08-13T20:01:23Z [verbose] Del: openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator-686c6c748c-qbnnr:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:23.997263328+00:00 stderr P 2025-08-13T20:01:23Z [verbose] 2025-08-13T20:01:23.997368721+00:00 stderr P DEL starting CNI request ContainerID:"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9" Netns:"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-78589965b8-vmcwt;K8S_POD_INFRA_CONTAINER_ID=97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9;K8S_POD_UID=00d32440-4cce-4609-96f3-51ac94480aab" Path:"" 2025-08-13T20:01:23.997394662+00:00 stderr F 
2025-08-13T20:01:24.002935580+00:00 stderr P 2025-08-13T20:01:23Z [verbose] 2025-08-13T20:01:24.003018803+00:00 stderr P Del: openshift-controller-manager:controller-manager-78589965b8-vmcwt:00d32440-4cce-4609-96f3-51ac94480aab:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:24.003107155+00:00 stderr F 2025-08-13T20:01:24.207200205+00:00 stderr F 2025-08-13T20:01:24Z [verbose] DEL finished CNI request ContainerID:"2a222a6e16cc3bc99ae882453cf511604f6c8cf0f7727eca34e65c54c3de325f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=2a222a6e16cc3bc99ae882453cf511604f6c8cf0f7727eca34e65c54c3de325f;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"", result: "", err: 2025-08-13T20:01:24.405154259+00:00 stderr P 2025-08-13T20:01:24Z [verbose] 2025-08-13T20:01:24.405207660+00:00 stderr P DEL finished CNI request ContainerID:"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9" Netns:"/var/run/netns/56a892cd-92ef-4d1e-8ced-347053efce52" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-78589965b8-vmcwt;K8S_POD_INFRA_CONTAINER_ID=97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9;K8S_POD_UID=00d32440-4cce-4609-96f3-51ac94480aab" Path:"", result: "", err: 2025-08-13T20:01:24.405232041+00:00 stderr F 2025-08-13T20:01:24.676881827+00:00 stderr F 2025-08-13T20:01:24Z [verbose] DEL starting CNI request ContainerID:"d60a8ff795b3aa244ba0a6d4beb85f92509f5b3b5e5325497c70969789030543" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=d60a8ff795b3aa244ba0a6d4beb85f92509f5b3b5e5325497c70969789030543;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"" 2025-08-13T20:01:24.977894500+00:00 stderr F 2025-08-13T20:01:24Z [verbose] Del: openshift-ingress-canary:ingress-canary-2vhcn:0b5d722a-1123-4935-9740-52a08d018bc9:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:25.162886715+00:00 stderr F 2025-08-13T20:01:25Z [verbose] DEL finished CNI request ContainerID:"d60a8ff795b3aa244ba0a6d4beb85f92509f5b3b5e5325497c70969789030543" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=d60a8ff795b3aa244ba0a6d4beb85f92509f5b3b5e5325497c70969789030543;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"", result: "", err: 2025-08-13T20:01:25.410320910+00:00 stderr F 2025-08-13T20:01:25Z [verbose] DEL starting CNI request ContainerID:"c577d94ee62a3a3ce3f153fd7296888a10d038e1220ab6f060850cebd3768fa4" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=c577d94ee62a3a3ce3f153fd7296888a10d038e1220ab6f060850cebd3768fa4;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"" 2025-08-13T20:01:25.473106201+00:00 stderr F 2025-08-13T20:01:25Z [verbose] Del: openshift-kube-scheduler-operator:openshift-kube-scheduler-operator-5d9b995f6b-fcgd7:71af81a9-7d43-49b2-9287-c375900aa905:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:26.572894659+00:00 stderr F 2025-08-13T20:01:26Z [verbose] DEL finished CNI request ContainerID:"c577d94ee62a3a3ce3f153fd7296888a10d038e1220ab6f060850cebd3768fa4" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=c577d94ee62a3a3ce3f153fd7296888a10d038e1220ab6f060850cebd3768fa4;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"", result: "", err: 2025-08-13T20:01:27.222996886+00:00 stderr F 2025-08-13T20:01:27Z [verbose] DEL starting CNI request ContainerID:"1cb83bf6435551fc60288f3f171dc9f91e5856d343faf671fbecc82fa4708d4f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=1cb83bf6435551fc60288f3f171dc9f91e5856d343faf671fbecc82fa4708d4f;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"" 2025-08-13T20:01:27.225452106+00:00 stderr F 2025-08-13T20:01:27Z [verbose] Del: openshift-marketplace:marketplace-operator-8b455464d-f9xdt:3482be94-0cdb-4e2a-889b-e5fac59fdbf5:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:27.256063179+00:00 stderr F 2025-08-13T20:01:27Z [verbose] DEL starting CNI request ContainerID:"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e" Netns:"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-846977c6bc-7gjhh;K8S_POD_INFRA_CONTAINER_ID=7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e;K8S_POD_UID=ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" Path:"" 2025-08-13T20:01:27.256524022+00:00 stderr F 2025-08-13T20:01:27Z [verbose] Del: openshift-route-controller-manager:route-controller-manager-846977c6bc-7gjhh:ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:27.767518242+00:00 stderr F 2025-08-13T20:01:27Z [verbose] DEL finished CNI request ContainerID:"1cb83bf6435551fc60288f3f171dc9f91e5856d343faf671fbecc82fa4708d4f" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=1cb83bf6435551fc60288f3f171dc9f91e5856d343faf671fbecc82fa4708d4f;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"", result: "", err: 2025-08-13T20:01:27.879041363+00:00 stderr F 2025-08-13T20:01:27Z [verbose] DEL finished CNI request ContainerID:"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e" Netns:"/var/run/netns/08ce2810-2ad2-40b0-834e-37fa5f126f28" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-846977c6bc-7gjhh;K8S_POD_INFRA_CONTAINER_ID=7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e;K8S_POD_UID=ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" Path:"", result: "", err: 2025-08-13T20:01:28.293605073+00:00 stderr F 2025-08-13T20:01:28Z [verbose] DEL starting CNI request ContainerID:"5d5ab928ee2118ee6832fa28966b7fe15cf32a3ce9753af82602917b135938a5" Netns:"" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=5d5ab928ee2118ee6832fa28966b7fe15cf32a3ce9753af82602917b135938a5;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"" 2025-08-13T20:01:28.599038402+00:00 stderr F 2025-08-13T20:01:28Z [verbose] Del: openshift-authentication:oauth-openshift-765b47f944-n2lhl:unknownUID:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:28.839098428+00:00 stderr F 2025-08-13T20:01:28Z [verbose] DEL finished CNI request ContainerID:"5d5ab928ee2118ee6832fa28966b7fe15cf32a3ce9753af82602917b135938a5" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-765b47f944-n2lhl;K8S_POD_INFRA_CONTAINER_ID=5d5ab928ee2118ee6832fa28966b7fe15cf32a3ce9753af82602917b135938a5;K8S_POD_UID=13ad7555-5f28-4555-a563-892713a8433a" Path:"", result: "", err: 2025-08-13T20:01:28.913564581+00:00 stderr F 2025-08-13T20:01:28Z [verbose] DEL starting CNI request ContainerID:"2cd2d310ebe8b3f7efe01442baea6a99f4bdf879192ffea1402034757e499bc3" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=2cd2d310ebe8b3f7efe01442baea6a99f4bdf879192ffea1402034757e499bc3;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"" 2025-08-13T20:01:29.073348387+00:00 stderr F 2025-08-13T20:01:29Z [verbose] Del: openshift-service-ca:service-ca-666f99b6f-vlbxv:unknownUID:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:29.443305406+00:00 stderr F 2025-08-13T20:01:29Z [verbose] DEL finished CNI request ContainerID:"2cd2d310ebe8b3f7efe01442baea6a99f4bdf879192ffea1402034757e499bc3" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-vlbxv;K8S_POD_INFRA_CONTAINER_ID=2cd2d310ebe8b3f7efe01442baea6a99f4bdf879192ffea1402034757e499bc3;K8S_POD_UID=378552fd-5e53-4882-87ff-95f3d9198861" Path:"", result: "", err: 2025-08-13T20:01:29.693927872+00:00 stderr P 2025-08-13T20:01:29Z [verbose] 2025-08-13T20:01:29.694009855+00:00 stderr P DEL starting CNI request ContainerID:"dbfc5ec7432a60129231728884794ababe640efdb630ee659406890b3d7d9c03" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=dbfc5ec7432a60129231728884794ababe640efdb630ee659406890b3d7d9c03;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"" 2025-08-13T20:01:29.694036865+00:00 stderr F 2025-08-13T20:01:29.695198629+00:00 stderr P 2025-08-13T20:01:29Z [verbose] 2025-08-13T20:01:29.695294501+00:00 stderr P Del: openshift-config-operator:openshift-config-operator-77658b5b66-dq5sc:530553aa-0a1d-423e-8a22-f5eb4bdbb883:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:29.695434705+00:00 stderr F 2025-08-13T20:01:30.418929144+00:00 stderr F 2025-08-13T20:01:30Z [verbose] DEL finished CNI request 
ContainerID:"dbfc5ec7432a60129231728884794ababe640efdb630ee659406890b3d7d9c03" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=dbfc5ec7432a60129231728884794ababe640efdb630ee659406890b3d7d9c03;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"", result: "", err: 2025-08-13T20:01:30.747711369+00:00 stderr F 2025-08-13T20:01:30Z [verbose] DEL starting CNI request ContainerID:"a990f905c27257a1e5d4f50465c77e865fd84acb2321a5e682d1c3f0256c57ac" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=a990f905c27257a1e5d4f50465c77e865fd84acb2321a5e682d1c3f0256c57ac;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"" 2025-08-13T20:01:30.750870159+00:00 stderr F 2025-08-13T20:01:30Z [verbose] Del: openshift-operator-lifecycle-manager:catalog-operator-857456c46-7f5wf:8a5ae51d-d173-4531-8975-f164c975ce1f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:30.992747486+00:00 stderr F 2025-08-13T20:01:30Z [verbose] DEL finished CNI request ContainerID:"a990f905c27257a1e5d4f50465c77e865fd84acb2321a5e682d1c3f0256c57ac" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=a990f905c27257a1e5d4f50465c77e865fd84acb2321a5e682d1c3f0256c57ac;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"", result: "", err: 2025-08-13T20:01:31.345338550+00:00 stderr F 2025-08-13T20:01:31Z [verbose] DEL starting CNI request 
ContainerID:"0dd09f73af5d2dea79cdb7c4a76803e01abcc55aceb455914ccd26eeee6686f3" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=0dd09f73af5d2dea79cdb7c4a76803e01abcc55aceb455914ccd26eeee6686f3;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"" 2025-08-13T20:01:31.353528964+00:00 stderr F 2025-08-13T20:01:31Z [verbose] Del: openshift-ingress-operator:ingress-operator-7d46d5bb6d-rrg6t:7d51f445-054a-4e4f-a67b-a828f5a32511:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:31.967512002+00:00 stderr F 2025-08-13T20:01:31Z [verbose] DEL finished CNI request ContainerID:"0dd09f73af5d2dea79cdb7c4a76803e01abcc55aceb455914ccd26eeee6686f3" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=0dd09f73af5d2dea79cdb7c4a76803e01abcc55aceb455914ccd26eeee6686f3;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"", result: "", err: 2025-08-13T20:01:32.461093585+00:00 stderr P 2025-08-13T20:01:32Z [verbose] 2025-08-13T20:01:32.461155507+00:00 stderr P DEL starting CNI request ContainerID:"b04bfa8a4df35640a1a2ca8c160ff99f05de826ac1be45f3e2ab1c114915d89d" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=b04bfa8a4df35640a1a2ca8c160ff99f05de826ac1be45f3e2ab1c114915d89d;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"" 2025-08-13T20:01:32.461180178+00:00 stderr F 2025-08-13T20:01:32.509104104+00:00 stderr P 2025-08-13T20:01:32Z [verbose] 2025-08-13T20:01:32.509161716+00:00 stderr P 
Del: openshift-cluster-samples-operator:cluster-samples-operator-bc474d5d6-wshwg:f728c15e-d8de-4a9a-a3ea-fdcead95cb91:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:32.509186517+00:00 stderr F 2025-08-13T20:01:32.840936847+00:00 stderr P 2025-08-13T20:01:32Z [verbose] 2025-08-13T20:01:32.840995988+00:00 stderr P DEL finished CNI request ContainerID:"b04bfa8a4df35640a1a2ca8c160ff99f05de826ac1be45f3e2ab1c114915d89d" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=b04bfa8a4df35640a1a2ca8c160ff99f05de826ac1be45f3e2ab1c114915d89d;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"", result: "", err: 2025-08-13T20:01:32.841020399+00:00 stderr F 2025-08-13T20:01:32.931127618+00:00 stderr F 2025-08-13T20:01:32Z [verbose] DEL starting CNI request ContainerID:"d1ee5558ce798968c7d7c7ccb69cfaabbafa7221af2265122bd6ceb0ec69e969" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=d1ee5558ce798968c7d7c7ccb69cfaabbafa7221af2265122bd6ceb0ec69e969;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"" 2025-08-13T20:01:32.957988004+00:00 stderr F 2025-08-13T20:01:32Z [verbose] Del: openshift-marketplace:certified-operators-7287f:887d596e-c519-4bfa-af90-3edd9e1b2f0f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:33.181215610+00:00 stderr F 2025-08-13T20:01:33Z [verbose] DEL 
finished CNI request ContainerID:"d1ee5558ce798968c7d7c7ccb69cfaabbafa7221af2265122bd6ceb0ec69e969" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=d1ee5558ce798968c7d7c7ccb69cfaabbafa7221af2265122bd6ceb0ec69e969;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"", result: "", err: 2025-08-13T20:01:33.285728640+00:00 stderr P 2025-08-13T20:01:33Z [verbose] 2025-08-13T20:01:33.285874134+00:00 stderr P DEL starting CNI request ContainerID:"af64d8772888c95f8b1c9b70e3f5bf0c57ab125b9bd97401e6a93d105abec31b" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=af64d8772888c95f8b1c9b70e3f5bf0c57ab125b9bd97401e6a93d105abec31b;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"" 2025-08-13T20:01:33.285905965+00:00 stderr F 2025-08-13T20:01:33.301964453+00:00 stderr P 2025-08-13T20:01:33Z [verbose] 2025-08-13T20:01:33.302017954+00:00 stderr P Del: hostpath-provisioner:csi-hostpathplugin-hvm8g:12e733dd-0939-4f1b-9cbb-13897e093787:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:01:33.302042765+00:00 stderr F 2025-08-13T20:01:33.561984896+00:00 stderr P 2025-08-13T20:01:33Z [verbose] 2025-08-13T20:01:33.562075539+00:00 stderr P DEL finished CNI request ContainerID:"af64d8772888c95f8b1c9b70e3f5bf0c57ab125b9bd97401e6a93d105abec31b" Netns:"" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=af64d8772888c95f8b1c9b70e3f5bf0c57ab125b9bd97401e6a93d105abec31b;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"", result: "", err: 
2025-08-13T20:01:33.562102600+00:00 stderr F 2025-08-13T20:02:24.117412736+00:00 stderr F 2025-08-13T20:02:24Z [verbose] DEL starting CNI request ContainerID:"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Netns:"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8;K8S_POD_UID=2ad657a4-8b02-4373-8d0d-b0e25345dc90" Path:"" 2025-08-13T20:02:24.118743814+00:00 stderr F 2025-08-13T20:02:24Z [verbose] Del: openshift-kube-apiserver:installer-9-crc:2ad657a4-8b02-4373-8d0d-b0e25345dc90:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:24.384304969+00:00 stderr F 2025-08-13T20:02:24Z [verbose] DEL finished CNI request ContainerID:"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Netns:"/var/run/netns/3ebbbba3-541f-422f-b4bd-ea82b778fb0d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-9-crc;K8S_POD_INFRA_CONTAINER_ID=9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8;K8S_POD_UID=2ad657a4-8b02-4373-8d0d-b0e25345dc90" Path:"", result: "", err: 2025-08-13T20:02:25.102333633+00:00 stderr F 2025-08-13T20:02:25Z [verbose] DEL starting CNI request ContainerID:"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Netns:"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-7-crc;K8S_POD_INFRA_CONTAINER_ID=639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab;K8S_POD_UID=b57cce81-8ea0-4c4d-aae1-ee024d201c15" Path:"" 
2025-08-13T20:02:25.102500167+00:00 stderr F 2025-08-13T20:02:25Z [verbose] Del: openshift-kube-scheduler:installer-7-crc:b57cce81-8ea0-4c4d-aae1-ee024d201c15:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:25.345992404+00:00 stderr F 2025-08-13T20:02:25Z [verbose] DEL finished CNI request ContainerID:"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Netns:"/var/run/netns/1321210d-04c3-46b8-9215-4081860acf11" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-7-crc;K8S_POD_INFRA_CONTAINER_ID=639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab;K8S_POD_UID=b57cce81-8ea0-4c4d-aae1-ee024d201c15" Path:"", result: "", err: 2025-08-13T20:02:33.574743915+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL starting CNI request ContainerID:"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f" Netns:"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"" 2025-08-13T20:02:33.575285181+00:00 stderr F 2025-08-13T20:02:33Z [verbose] Del: openshift-console:console-84fccc7b6-mkncc:b233d916-bfe3-4ae5-ae39-6b574d1aa05e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:33.592362268+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL starting CNI request 
ContainerID:"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4" Netns:"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-7cbd5666ff-bbfrf;K8S_POD_INFRA_CONTAINER_ID=958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4;K8S_POD_UID=42b6a393-6194-4620-bf8f-7e4b6cbe5679" Path:"" 2025-08-13T20:02:33.592662666+00:00 stderr F 2025-08-13T20:02:33Z [verbose] Del: openshift-image-registry:image-registry-7cbd5666ff-bbfrf:42b6a393-6194-4620-bf8f-7e4b6cbe5679:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:33.602590850+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL starting CNI request ContainerID:"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc" Netns:"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"" 2025-08-13T20:02:33.602590850+00:00 stderr F 2025-08-13T20:02:33Z [verbose] Del: openshift-apiserver:apiserver-67cbf64bc9-mtx25:unknownUID:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:33.797138049+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL finished CNI request ContainerID:"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc" 
Netns:"/var/run/netns/96629289-ef1e-4353-acb2-125a21227454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-mtx25;K8S_POD_INFRA_CONTAINER_ID=961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc;K8S_POD_UID=23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Path:"", result: "", err: 2025-08-13T20:02:33.958714969+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL finished CNI request ContainerID:"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f" Netns:"/var/run/netns/b4cbc8dd-918e-4828-a883-addb98426d7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-84fccc7b6-mkncc;K8S_POD_INFRA_CONTAINER_ID=e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f;K8S_POD_UID=b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Path:"", result: "", err: 2025-08-13T20:02:33.961291092+00:00 stderr F 2025-08-13T20:02:33Z [verbose] DEL finished CNI request ContainerID:"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4" Netns:"/var/run/netns/4cb67ea2-66ca-4477-911c-e402227f8dda" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-7cbd5666ff-bbfrf;K8S_POD_INFRA_CONTAINER_ID=958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4;K8S_POD_UID=42b6a393-6194-4620-bf8f-7e4b6cbe5679" Path:"", result: "", err: 2025-08-13T20:02:34.574369441+00:00 stderr F 2025-08-13T20:02:34Z [verbose] DEL starting CNI request ContainerID:"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Netns:"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-crc;K8S_POD_INFRA_CONTAINER_ID=c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc;K8S_POD_UID=79050916-d488-4806-b556-1b0078b31e53" Path:"" 2025-08-13T20:02:34.574644859+00:00 stderr F 2025-08-13T20:02:34Z [verbose] Del: 
openshift-kube-controller-manager:installer-10-crc:79050916-d488-4806-b556-1b0078b31e53:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:02:34.770582238+00:00 stderr F 2025-08-13T20:02:34Z [verbose] DEL finished CNI request ContainerID:"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Netns:"/var/run/netns/a78e9cb9-c5d1-4c13-9b41-34a1e877122f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-crc;K8S_POD_INFRA_CONTAINER_ID=c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc;K8S_POD_UID=79050916-d488-4806-b556-1b0078b31e53" Path:"", result: "", err: 2025-08-13T20:05:14.544423840+00:00 stderr P 2025-08-13T20:05:14Z [verbose] 2025-08-13T20:05:14.544546203+00:00 stderr P ADD starting CNI request ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" 2025-08-13T20:05:14.544572164+00:00 stderr F 2025-08-13T20:05:14.562162048+00:00 stderr F 2025-08-13T20:05:14Z [verbose] ADD starting CNI request ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" 2025-08-13T20:05:17.045696786+00:00 stderr P 2025-08-13T20:05:17Z [error] 2025-08-13T20:05:17.046235931+00:00 stderr P Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found 2025-08-13T20:05:17.046279673+00:00 stderr F 2025-08-13T20:05:17.046380996+00:00 stderr P 2025-08-13T20:05:17Z [verbose] 2025-08-13T20:05:17.046411026+00:00 stderr P ADD finished CNI request ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"", result: "", err: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found 2025-08-13T20:05:17.046435127+00:00 stderr F 2025-08-13T20:05:17.065878714+00:00 stderr F 2025-08-13T20:05:17Z [error] Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found 2025-08-13T20:05:17.065878714+00:00 stderr F 2025-08-13T20:05:17Z [verbose] ADD finished CNI request 
ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"", result: "", err: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found 2025-08-13T20:05:17.113344323+00:00 stderr F 2025-08-13T20:05:17Z [verbose] DEL starting CNI request ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" 2025-08-13T20:05:17.115397422+00:00 stderr F 2025-08-13T20:05:17Z [error] Multus: failed to get the cached delegates file: open /var/lib/cni/multus/ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626: no such file or directory, cannot properly delete 2025-08-13T20:05:17.115397422+00:00 stderr F 2025-08-13T20:05:17Z [verbose] DEL finished CNI request ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"", result: "", err: 2025-08-13T20:05:17.127817708+00:00 stderr F 2025-08-13T20:05:17Z [verbose] DEL starting CNI request ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" 2025-08-13T20:05:17.127817708+00:00 stderr F 2025-08-13T20:05:17Z [error] Multus: failed to get the cached delegates file: open /var/lib/cni/multus/d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23: no such file or directory, cannot properly delete 2025-08-13T20:05:17.127817708+00:00 stderr F 2025-08-13T20:05:17Z [verbose] DEL finished CNI request ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"", result: "", err: 2025-08-13T20:05:19.113406156+00:00 stderr F 2025-08-13T20:05:19Z [verbose] ADD starting CNI request ContainerID:"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb" Netns:"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" 2025-08-13T20:05:19.788248221+00:00 stderr F 2025-08-13T20:05:19Z [verbose] ADD starting CNI request ContainerID:"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7" Netns:"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" 2025-08-13T20:05:22.561108354+00:00 stderr F 2025-08-13T20:05:22Z [verbose] Add: openshift-controller-manager:controller-manager-598fc85fd4-8wlsm:8b8d1c48-5762-450f-bd4d-9134869f432b:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7814bf45dce77ed","mac":"de:08:bb:e3:37:f7"},{"name":"eth0","mac":"0a:58:0a:d9:00:4a","sandbox":"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.74/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:05:22.561108354+00:00 stderr F I0813 20:05:22.559936 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-598fc85fd4-8wlsm", UID:"8b8d1c48-5762-450f-bd4d-9134869f432b", APIVersion:"v1", ResourceVersion:"31131", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.74/23] from ovn-kubernetes 2025-08-13T20:05:22.604005233+00:00 stderr F 2025-08-13T20:05:22Z [verbose] ADD finished CNI request ContainerID:"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb" Netns:"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"de:08:bb:e3:37:f7\",\"name\":\"7814bf45dce77ed\"},{\"mac\":\"0a:58:0a:d9:00:4a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742\"}],\"ips\":[{\"address\":\"10.217.0.74/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:05:22.654375065+00:00 stderr F 2025-08-13T20:05:22Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-6884dcf749-n4qpx:becc7e17-2bc7-417d-832f-55127299d70f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"924f68f94ccf00f","mac":"32:fc:e5:39:3f:bc"},{"name":"eth0","mac":"0a:58:0a:d9:00:4b","sandbox":"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.75/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:05:22.654375065+00:00 stderr F I0813 20:05:22.654203 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-6884dcf749-n4qpx", UID:"becc7e17-2bc7-417d-832f-55127299d70f", APIVersion:"v1", ResourceVersion:"31132", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.75/23] from ovn-kubernetes 2025-08-13T20:05:22.708178386+00:00 stderr F 2025-08-13T20:05:22Z [verbose] ADD finished CNI request ContainerID:"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7" Netns:"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"32:fc:e5:39:3f:bc\",\"name\":\"924f68f94ccf00f\"},{\"mac\":\"0a:58:0a:d9:00:4b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340\"}],\"ips\":[{\"address\":\"10.217.0.75/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:05:57.593828054+00:00 stderr F 2025-08-13T20:05:57Z [verbose] ADD starting CNI request ContainerID:"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Netns:"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-retry-1-crc;K8S_POD_INFRA_CONTAINER_ID=0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec;K8S_POD_UID=dc02677d-deed-4cc9-bb8c-0dd300f83655" Path:"" 2025-08-13T20:05:57.913180129+00:00 stderr F 2025-08-13T20:05:57Z [verbose] Add: openshift-kube-controller-manager:installer-10-retry-1-crc:dc02677d-deed-4cc9-bb8c-0dd300f83655:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"0d375f365a8fdeb","mac":"1a:07:0f:a8:db:26"},{"name":"eth0","mac":"0a:58:0a:d9:00:4c","sandbox":"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.76/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:05:57.913180129+00:00 stderr F I0813 20:05:57.907516 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"installer-10-retry-1-crc", UID:"dc02677d-deed-4cc9-bb8c-0dd300f83655", APIVersion:"v1", ResourceVersion:"31897", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 
[10.217.0.76/23] from ovn-kubernetes 2025-08-13T20:05:57.935943931+00:00 stderr F 2025-08-13T20:05:57Z [verbose] ADD finished CNI request ContainerID:"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Netns:"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-retry-1-crc;K8S_POD_INFRA_CONTAINER_ID=0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec;K8S_POD_UID=dc02677d-deed-4cc9-bb8c-0dd300f83655" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1a:07:0f:a8:db:26\",\"name\":\"0d375f365a8fdeb\"},{\"mac\":\"0a:58:0a:d9:00:4c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63\"}],\"ips\":[{\"address\":\"10.217.0.76/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:06:07.174029432+00:00 stderr F 2025-08-13T20:06:07Z [verbose] DEL starting CNI request ContainerID:"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7" Netns:"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d9678894c-wx62n;K8S_POD_INFRA_CONTAINER_ID=612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7;K8S_POD_UID=384ed0e8-86e4-42df-bd2c-604c1f536a15" Path:"" 2025-08-13T20:06:07.175242426+00:00 stderr F 2025-08-13T20:06:07Z [verbose] Del: openshift-console:console-5d9678894c-wx62n:384ed0e8-86e4-42df-bd2c-604c1f536a15:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:07.412890992+00:00 stderr F 2025-08-13T20:06:07Z [verbose] DEL finished CNI request 
ContainerID:"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7" Netns:"/var/run/netns/911ccb4d-64e5-46bb-a0cd-d2ccbe43dbc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d9678894c-wx62n;K8S_POD_INFRA_CONTAINER_ID=612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7;K8S_POD_UID=384ed0e8-86e4-42df-bd2c-604c1f536a15" Path:"", result: "", err: 2025-08-13T20:06:30.919161705+00:00 stderr F 2025-08-13T20:06:30Z [verbose] DEL starting CNI request ContainerID:"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7" Netns:"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-rmwfn;K8S_POD_INFRA_CONTAINER_ID=9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7;K8S_POD_UID=9ad279b4-d9dc-42a8-a1c8-a002bd063482" Path:"" 2025-08-13T20:06:30.929945744+00:00 stderr F 2025-08-13T20:06:30Z [verbose] Del: openshift-marketplace:redhat-marketplace-rmwfn:9ad279b4-d9dc-42a8-a1c8-a002bd063482:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:31.214078963+00:00 stderr F 2025-08-13T20:06:31Z [verbose] DEL finished CNI request ContainerID:"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7" Netns:"/var/run/netns/1c780c69-25c3-4c0b-953d-e261642f6782" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-rmwfn;K8S_POD_INFRA_CONTAINER_ID=9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7;K8S_POD_UID=9ad279b4-d9dc-42a8-a1c8-a002bd063482" Path:"", result: "", err: 2025-08-13T20:06:32.106536910+00:00 stderr F 2025-08-13T20:06:32Z [verbose] DEL starting CNI request 
ContainerID:"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45" Netns:"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-dcqzh;K8S_POD_INFRA_CONTAINER_ID=fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45;K8S_POD_UID=6db26b71-4e04-4688-a0c0-00e06e8c888d" Path:"" 2025-08-13T20:06:32.107840658+00:00 stderr F 2025-08-13T20:06:32Z [verbose] Del: openshift-marketplace:redhat-operators-dcqzh:6db26b71-4e04-4688-a0c0-00e06e8c888d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:32.435188093+00:00 stderr F 2025-08-13T20:06:32Z [verbose] DEL finished CNI request ContainerID:"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45" Netns:"/var/run/netns/5b8d87b9-c5ef-415b-94c0-714b74bcbc43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-dcqzh;K8S_POD_INFRA_CONTAINER_ID=fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45;K8S_POD_UID=6db26b71-4e04-4688-a0c0-00e06e8c888d" Path:"", result: "", err: 2025-08-13T20:06:32.925951714+00:00 stderr F 2025-08-13T20:06:32Z [verbose] DEL starting CNI request ContainerID:"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350" Netns:"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-k9qqb;K8S_POD_INFRA_CONTAINER_ID=ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350;K8S_POD_UID=ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Path:"" 2025-08-13T20:06:32.928683942+00:00 stderr F 2025-08-13T20:06:32Z [verbose] Del: 
openshift-marketplace:community-operators-k9qqb:ccdf38cf-634a-41a2-9c8b-74bb86af80a7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:33.103497344+00:00 stderr F 2025-08-13T20:06:33Z [verbose] DEL starting CNI request ContainerID:"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761" Netns:"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-g4v97;K8S_POD_INFRA_CONTAINER_ID=2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761;K8S_POD_UID=bb917686-edfb-4158-86ad-6fce0abec64c" Path:"" 2025-08-13T20:06:33.106290584+00:00 stderr F 2025-08-13T20:06:33Z [verbose] Del: openshift-marketplace:certified-operators-g4v97:bb917686-edfb-4158-86ad-6fce0abec64c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:33.192759834+00:00 stderr F 2025-08-13T20:06:33Z [verbose] DEL finished CNI request ContainerID:"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350" Netns:"/var/run/netns/f7f9b752-d0b9-4b04-9ab7-5c6e4bb4a343" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-k9qqb;K8S_POD_INFRA_CONTAINER_ID=ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350;K8S_POD_UID=ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Path:"", result: "", err: 2025-08-13T20:06:33.476751036+00:00 stderr F 2025-08-13T20:06:33Z [verbose] DEL finished CNI request 
ContainerID:"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761" Netns:"/var/run/netns/023f1963-9a7c-4cad-80bd-fe72a8fdf2b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-g4v97;K8S_POD_INFRA_CONTAINER_ID=2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761;K8S_POD_UID=bb917686-edfb-4158-86ad-6fce0abec64c" Path:"", result: "", err: 2025-08-13T20:06:34.983841015+00:00 stderr F 2025-08-13T20:06:34Z [verbose] DEL starting CNI request ContainerID:"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Netns:"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-retry-1-crc;K8S_POD_INFRA_CONTAINER_ID=0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec;K8S_POD_UID=dc02677d-deed-4cc9-bb8c-0dd300f83655" Path:"" 2025-08-13T20:06:34.983841015+00:00 stderr F 2025-08-13T20:06:34Z [verbose] Del: openshift-kube-controller-manager:installer-10-retry-1-crc:dc02677d-deed-4cc9-bb8c-0dd300f83655:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:06:35.070086737+00:00 stderr F 2025-08-13T20:06:35Z [verbose] ADD starting CNI request ContainerID:"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d" Netns:"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-4txfd;K8S_POD_INFRA_CONTAINER_ID=0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d;K8S_POD_UID=af6c965e-9dc8-417a-aa1c-303a50ec9adc" Path:"" 2025-08-13T20:06:35.488599537+00:00 stderr P 2025-08-13T20:06:35Z [verbose] 
2025-08-13T20:06:35.493270091+00:00 stderr F DEL finished CNI request ContainerID:"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec" Netns:"/var/run/netns/a326abff-8906-403f-bb73-851a6aaa3e63" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-10-retry-1-crc;K8S_POD_INFRA_CONTAINER_ID=0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec;K8S_POD_UID=dc02677d-deed-4cc9-bb8c-0dd300f83655" Path:"", result: "", err: 2025-08-13T20:06:35.892583669+00:00 stderr F 2025-08-13T20:06:35Z [verbose] Add: openshift-marketplace:redhat-marketplace-4txfd:af6c965e-9dc8-417a-aa1c-303a50ec9adc:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"0ac24e234dbea3f","mac":"9a:93:0b:c4:ff:6b"},{"name":"eth0","mac":"0a:58:0a:d9:00:4d","sandbox":"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.77/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:06:35.893138495+00:00 stderr F I0813 20:06:35.893084 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-4txfd", UID:"af6c965e-9dc8-417a-aa1c-303a50ec9adc", APIVersion:"v1", ResourceVersion:"32097", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.77/23] from ovn-kubernetes 2025-08-13T20:06:35.925956266+00:00 stderr F 2025-08-13T20:06:35Z [verbose] ADD finished CNI request ContainerID:"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d" Netns:"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-4txfd;K8S_POD_INFRA_CONTAINER_ID=0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d;K8S_POD_UID=af6c965e-9dc8-417a-aa1c-303a50ec9adc" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:93:0b:c4:ff:6b\",\"name\":\"0ac24e234dbea3f\"},{\"mac\":\"0a:58:0a:d9:00:4d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c\"}],\"ips\":[{\"address\":\"10.217.0.77/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:06:36.181685578+00:00 stderr F 2025-08-13T20:06:36Z [verbose] ADD starting CNI request ContainerID:"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d" Netns:"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cfdk8;K8S_POD_INFRA_CONTAINER_ID=93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d;K8S_POD_UID=5391dc5d-0f00-4464-b617-b164e2f9b77a" Path:"" 2025-08-13T20:06:36.609644798+00:00 stderr F 2025-08-13T20:06:36Z [verbose] Add: openshift-marketplace:certified-operators-cfdk8:5391dc5d-0f00-4464-b617-b164e2f9b77a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"93c5c47bf133377","mac":"42:38:8c:ef:03:cd"},{"name":"eth0","mac":"0a:58:0a:d9:00:4e","sandbox":"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.78/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:06:36.609644798+00:00 stderr F I0813 20:06:36.608961 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-cfdk8", UID:"5391dc5d-0f00-4464-b617-b164e2f9b77a", APIVersion:"v1", ResourceVersion:"32114", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.78/23] from ovn-kubernetes 2025-08-13T20:06:36.644232890+00:00 stderr P 2025-08-13T20:06:36Z [verbose] 2025-08-13T20:06:36.644306972+00:00 stderr P ADD finished CNI request ContainerID:"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d" Netns:"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8" 
IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cfdk8;K8S_POD_INFRA_CONTAINER_ID=93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d;K8S_POD_UID=5391dc5d-0f00-4464-b617-b164e2f9b77a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"42:38:8c:ef:03:cd\",\"name\":\"93c5c47bf133377\"},{\"mac\":\"0a:58:0a:d9:00:4e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8\"}],\"ips\":[{\"address\":\"10.217.0.78/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:06:36.644332063+00:00 stderr F 2025-08-13T20:06:37.252751407+00:00 stderr F 2025-08-13T20:06:37Z [verbose] ADD starting CNI request ContainerID:"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8" Netns:"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-pmqwc;K8S_POD_INFRA_CONTAINER_ID=3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8;K8S_POD_UID=0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" Path:"" 2025-08-13T20:06:37.613569952+00:00 stderr F 2025-08-13T20:06:37Z [verbose] Add: openshift-marketplace:redhat-operators-pmqwc:0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3025039c6358002","mac":"c6:bc:20:ad:82:0d"},{"name":"eth0","mac":"0a:58:0a:d9:00:4f","sandbox":"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.79/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:06:37.615822376+00:00 stderr F I0813 20:06:37.615700 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-pmqwc", UID:"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed", APIVersion:"v1", ResourceVersion:"32132", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 
[10.217.0.79/23] from ovn-kubernetes 2025-08-13T20:06:37.662039492+00:00 stderr P 2025-08-13T20:06:37Z [verbose] 2025-08-13T20:06:37.662084033+00:00 stderr F ADD finished CNI request ContainerID:"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8" Netns:"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-pmqwc;K8S_POD_INFRA_CONTAINER_ID=3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8;K8S_POD_UID=0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"c6:bc:20:ad:82:0d\",\"name\":\"3025039c6358002\"},{\"mac\":\"0a:58:0a:d9:00:4f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0\"}],\"ips\":[{\"address\":\"10.217.0.79/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:06:38.808915803+00:00 stderr F 2025-08-13T20:06:38Z [verbose] ADD starting CNI request ContainerID:"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7" Netns:"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-p7svp;K8S_POD_INFRA_CONTAINER_ID=4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7;K8S_POD_UID=8518239d-8dab-48ac-a3c1-e775566b9bff" Path:"" 2025-08-13T20:06:39.261856519+00:00 stderr F 2025-08-13T20:06:39Z [verbose] Add: openshift-marketplace:community-operators-p7svp:8518239d-8dab-48ac-a3c1-e775566b9bff:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"4a52c9653485366","mac":"5e:de:da:57:2f:c1"},{"name":"eth0","mac":"0a:58:0a:d9:00:50","sandbox":"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.80/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:06:39.261856519+00:00 stderr F I0813 20:06:39.258584 23104 
event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-p7svp", UID:"8518239d-8dab-48ac-a3c1-e775566b9bff", APIVersion:"v1", ResourceVersion:"32155", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.80/23] from ovn-kubernetes 2025-08-13T20:06:39.384274459+00:00 stderr F 2025-08-13T20:06:39Z [verbose] ADD finished CNI request ContainerID:"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7" Netns:"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-p7svp;K8S_POD_INFRA_CONTAINER_ID=4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7;K8S_POD_UID=8518239d-8dab-48ac-a3c1-e775566b9bff" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5e:de:da:57:2f:c1\",\"name\":\"4a52c9653485366\"},{\"mac\":\"0a:58:0a:d9:00:50\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0\"}],\"ips\":[{\"address\":\"10.217.0.80/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:05.047576957+00:00 stderr F 2025-08-13T20:07:05Z [verbose] ADD starting CNI request ContainerID:"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763" Netns:"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763;K8S_POD_UID=47a054e4-19c2-4c12-a054-fc5edc98978a" Path:"" 2025-08-13T20:07:05.529643079+00:00 stderr F 2025-08-13T20:07:05Z [verbose] Add: openshift-kube-apiserver:installer-11-crc:47a054e4-19c2-4c12-a054-fc5edc98978a:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"82592d624297fdd","mac":"72:ad:68:88:87:91"},{"name":"eth0","mac":"0a:58:0a:d9:00:51","sandbox":"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.81/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:05.531470061+00:00 stderr F I0813 20:07:05.530332 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"installer-11-crc", UID:"47a054e4-19c2-4c12-a054-fc5edc98978a", APIVersion:"v1", ResourceVersion:"32354", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.81/23] from ovn-kubernetes 2025-08-13T20:07:05.592743088+00:00 stderr F 2025-08-13T20:07:05Z [verbose] ADD finished CNI request ContainerID:"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763" Netns:"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763;K8S_POD_UID=47a054e4-19c2-4c12-a054-fc5edc98978a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"72:ad:68:88:87:91\",\"name\":\"82592d624297fdd\"},{\"mac\":\"0a:58:0a:d9:00:51\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7\"}],\"ips\":[{\"address\":\"10.217.0.81/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:08.199712932+00:00 stderr F 2025-08-13T20:07:08Z [verbose] DEL starting CNI request ContainerID:"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d" Netns:"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-4txfd;K8S_POD_INFRA_CONTAINER_ID=0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d;K8S_POD_UID=af6c965e-9dc8-417a-aa1c-303a50ec9adc" Path:"" 2025-08-13T20:07:08.211746267+00:00 stderr F 2025-08-13T20:07:08Z [verbose] Del: openshift-marketplace:redhat-marketplace-4txfd:af6c965e-9dc8-417a-aa1c-303a50ec9adc:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:08.631969465+00:00 stderr F 2025-08-13T20:07:08Z [verbose] DEL finished CNI request ContainerID:"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d" Netns:"/var/run/netns/2c92bf17-1a89-43e9-8f9c-76623132015c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-4txfd;K8S_POD_INFRA_CONTAINER_ID=0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d;K8S_POD_UID=af6c965e-9dc8-417a-aa1c-303a50ec9adc" Path:"", result: "", err: 2025-08-13T20:07:15.563655952+00:00 stderr F 2025-08-13T20:07:15Z [verbose] DEL starting CNI request ContainerID:"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Netns:"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-jjfds;K8S_POD_INFRA_CONTAINER_ID=411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58;K8S_POD_UID=b23d6435-6431-4905-b41b-a517327385e5" Path:"" 2025-08-13T20:07:15.567894734+00:00 stderr F 2025-08-13T20:07:15Z [verbose] Del: openshift-apiserver:apiserver-67cbf64bc9-jjfds:b23d6435-6431-4905-b41b-a517327385e5:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:16.064026918+00:00 stderr F 2025-08-13T20:07:16Z [verbose] DEL finished CNI request ContainerID:"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Netns:"/var/run/netns/33580ad5-d33b-4301-b2b1-f494846cac8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-67cbf64bc9-jjfds;K8S_POD_INFRA_CONTAINER_ID=411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58;K8S_POD_UID=b23d6435-6431-4905-b41b-a517327385e5" Path:"", result: "", err: 2025-08-13T20:07:19.217331386+00:00 stderr F 2025-08-13T20:07:19Z [verbose] DEL starting CNI request ContainerID:"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d" Netns:"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cfdk8;K8S_POD_INFRA_CONTAINER_ID=93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d;K8S_POD_UID=5391dc5d-0f00-4464-b617-b164e2f9b77a" Path:"" 2025-08-13T20:07:19.299711168+00:00 stderr F 2025-08-13T20:07:19Z [verbose] Del: openshift-marketplace:certified-operators-cfdk8:5391dc5d-0f00-4464-b617-b164e2f9b77a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:20.041520296+00:00 stderr F 2025-08-13T20:07:20Z [verbose] DEL finished CNI request ContainerID:"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d" Netns:"/var/run/netns/3e67ef33-2561-418e-bd7c-606044cdd1e8" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-cfdk8;K8S_POD_INFRA_CONTAINER_ID=93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d;K8S_POD_UID=5391dc5d-0f00-4464-b617-b164e2f9b77a" Path:"", result: "", err: 2025-08-13T20:07:20.591508015+00:00 stderr P 2025-08-13T20:07:20Z [verbose] 2025-08-13T20:07:20.591577117+00:00 stderr P ADD starting CNI request ContainerID:"2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8" Netns:"/var/run/netns/d0f6574e-984f-461d-b8a3-1d458aee6fc0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-7fc54b8dd7-d2bhp;K8S_POD_INFRA_CONTAINER_ID=2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8;K8S_POD_UID=41e8708a-e40d-4d28-846b-c52eda4d1755" Path:"" 2025-08-13T20:07:20.591602628+00:00 stderr F 2025-08-13T20:07:21.160681172+00:00 stderr F 2025-08-13T20:07:21Z [verbose] Add: openshift-apiserver:apiserver-7fc54b8dd7-d2bhp:41e8708a-e40d-4d28-846b-c52eda4d1755:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2059a6e71652337","mac":"7a:aa:4b:73:44:25"},{"name":"eth0","mac":"0a:58:0a:d9:00:52","sandbox":"/var/run/netns/d0f6574e-984f-461d-b8a3-1d458aee6fc0"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.82/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:21.160681172+00:00 stderr F I0813 20:07:21.159554 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-7fc54b8dd7-d2bhp", UID:"41e8708a-e40d-4d28-846b-c52eda4d1755", APIVersion:"v1", ResourceVersion:"32500", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.82/23] from ovn-kubernetes 2025-08-13T20:07:21.342528146+00:00 stderr F 2025-08-13T20:07:21Z [verbose] ADD finished CNI request ContainerID:"2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8" Netns:"/var/run/netns/d0f6574e-984f-461d-b8a3-1d458aee6fc0" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-7fc54b8dd7-d2bhp;K8S_POD_INFRA_CONTAINER_ID=2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8;K8S_POD_UID=41e8708a-e40d-4d28-846b-c52eda4d1755" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7a:aa:4b:73:44:25\",\"name\":\"2059a6e71652337\"},{\"mac\":\"0a:58:0a:d9:00:52\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/d0f6574e-984f-461d-b8a3-1d458aee6fc0\"}],\"ips\":[{\"address\":\"10.217.0.82/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:22.608537384+00:00 stderr F 2025-08-13T20:07:22Z [verbose] ADD starting CNI request ContainerID:"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448" Netns:"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-11-crc;K8S_POD_INFRA_CONTAINER_ID=a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448;K8S_POD_UID=1784282a-268d-4e44-a766-43281414e2dc" Path:"" 2025-08-13T20:07:23.304977332+00:00 stderr F 2025-08-13T20:07:23Z [verbose] Add: openshift-kube-controller-manager:revision-pruner-11-crc:1784282a-268d-4e44-a766-43281414e2dc:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a480fccd2debaaf","mac":"92:1b:9d:51:b7:76"},{"name":"eth0","mac":"0a:58:0a:d9:00:53","sandbox":"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.83/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:23.304977332+00:00 stderr F I0813 20:07:23.303547 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"revision-pruner-11-crc", UID:"1784282a-268d-4e44-a766-43281414e2dc", APIVersion:"v1", ResourceVersion:"32536", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.83/23] from ovn-kubernetes 
2025-08-13T20:07:23.501200018+00:00 stderr F 2025-08-13T20:07:23Z [verbose] ADD finished CNI request ContainerID:"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448" Netns:"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-11-crc;K8S_POD_INFRA_CONTAINER_ID=a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448;K8S_POD_UID=1784282a-268d-4e44-a766-43281414e2dc" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"92:1b:9d:51:b7:76\",\"name\":\"a480fccd2debaaf\"},{\"mac\":\"0a:58:0a:d9:00:53\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3\"}],\"ips\":[{\"address\":\"10.217.0.83/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:23.524278799+00:00 stderr F 2025-08-13T20:07:23Z [verbose] ADD starting CNI request ContainerID:"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Netns:"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056;K8S_POD_UID=aca1f9ff-a685-4a78-b461-3931b757f754" Path:"" 2025-08-13T20:07:24.002870461+00:00 stderr F 2025-08-13T20:07:24Z [verbose] Add: openshift-kube-scheduler:installer-8-crc:aca1f9ff-a685-4a78-b461-3931b757f754:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d0ba8aa29fc697e","mac":"32:07:c5:1f:d6:82"},{"name":"eth0","mac":"0a:58:0a:d9:00:54","sandbox":"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.84/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:24.003631173+00:00 stderr F I0813 20:07:24.003197 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-kube-scheduler", Name:"installer-8-crc", UID:"aca1f9ff-a685-4a78-b461-3931b757f754", APIVersion:"v1", ResourceVersion:"32550", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.84/23] from ovn-kubernetes 2025-08-13T20:07:24.105089962+00:00 stderr F 2025-08-13T20:07:24Z [verbose] ADD finished CNI request ContainerID:"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Netns:"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056;K8S_POD_UID=aca1f9ff-a685-4a78-b461-3931b757f754" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"32:07:c5:1f:d6:82\",\"name\":\"d0ba8aa29fc697e\"},{\"mac\":\"0a:58:0a:d9:00:54\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac\"}],\"ips\":[{\"address\":\"10.217.0.84/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:25.184381715+00:00 stderr F 2025-08-13T20:07:25Z [verbose] ADD starting CNI request ContainerID:"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Netns:"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31;K8S_POD_UID=a45bfab9-f78b-4d72-b5b7-903e60401124" Path:"" 2025-08-13T20:07:25.783963976+00:00 stderr F 2025-08-13T20:07:25Z [verbose] Add: openshift-kube-controller-manager:installer-11-crc:a45bfab9-f78b-4d72-b5b7-903e60401124:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"8f0bbf4ce8e2b74","mac":"62:a3:62:e6:c6:48"},{"name":"eth0","mac":"0a:58:0a:d9:00:55","sandbox":"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.85/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:25.783963976+00:00 stderr F I0813 20:07:25.783385 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"installer-11-crc", UID:"a45bfab9-f78b-4d72-b5b7-903e60401124", APIVersion:"v1", ResourceVersion:"32572", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.85/23] from ovn-kubernetes 2025-08-13T20:07:26.201866818+00:00 stderr F 2025-08-13T20:07:26Z [verbose] ADD finished CNI request ContainerID:"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Netns:"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31;K8S_POD_UID=a45bfab9-f78b-4d72-b5b7-903e60401124" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"62:a3:62:e6:c6:48\",\"name\":\"8f0bbf4ce8e2b74\"},{\"mac\":\"0a:58:0a:d9:00:55\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187\"}],\"ips\":[{\"address\":\"10.217.0.85/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:27.651630154+00:00 stderr F 2025-08-13T20:07:27Z [verbose] DEL starting CNI request ContainerID:"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448" Netns:"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-11-crc;K8S_POD_INFRA_CONTAINER_ID=a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448;K8S_POD_UID=1784282a-268d-4e44-a766-43281414e2dc" Path:"" 2025-08-13T20:07:27.659907761+00:00 stderr P 2025-08-13T20:07:27Z [verbose] Del: openshift-kube-controller-manager:revision-pruner-11-crc:1784282a-268d-4e44-a766-43281414e2dc:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:27.659977833+00:00 stderr F 2025-08-13T20:07:28.002170003+00:00 stderr F 2025-08-13T20:07:28Z [verbose] DEL finished CNI request ContainerID:"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448" Netns:"/var/run/netns/a0d12f90-5ec5-489c-80e8-393e54e57ce3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=revision-pruner-11-crc;K8S_POD_INFRA_CONTAINER_ID=a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448;K8S_POD_UID=1784282a-268d-4e44-a766-43281414e2dc" Path:"", result: "", err: 2025-08-13T20:07:34.403942158+00:00 stderr F 2025-08-13T20:07:34Z [verbose] ADD starting CNI request ContainerID:"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Netns:"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-12-crc;K8S_POD_INFRA_CONTAINER_ID=afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309;K8S_POD_UID=3557248c-8f70-4165-aa66-8df983e7e01a" Path:"" 2025-08-13T20:07:34.844143269+00:00 stderr P 2025-08-13T20:07:34Z [verbose] 2025-08-13T20:07:34.844201801+00:00 stderr P Add: 
openshift-kube-apiserver:installer-12-crc:3557248c-8f70-4165-aa66-8df983e7e01a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"afb6a839e21ef78","mac":"06:11:98:91:d8:e0"},{"name":"eth0","mac":"0a:58:0a:d9:00:56","sandbox":"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.86/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:07:34.844227922+00:00 stderr F 2025-08-13T20:07:34.847824415+00:00 stderr F I0813 20:07:34.846601 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"installer-12-crc", UID:"3557248c-8f70-4165-aa66-8df983e7e01a", APIVersion:"v1", ResourceVersion:"32679", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.86/23] from ovn-kubernetes 2025-08-13T20:07:34.913360634+00:00 stderr P 2025-08-13T20:07:34Z [verbose] 2025-08-13T20:07:34.913470257+00:00 stderr P ADD finished CNI request ContainerID:"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Netns:"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-12-crc;K8S_POD_INFRA_CONTAINER_ID=afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309;K8S_POD_UID=3557248c-8f70-4165-aa66-8df983e7e01a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:11:98:91:d8:e0\",\"name\":\"afb6a839e21ef78\"},{\"mac\":\"0a:58:0a:d9:00:56\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c\"}],\"ips\":[{\"address\":\"10.217.0.86/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:07:34.913683113+00:00 stderr F 2025-08-13T20:07:39.789523187+00:00 stderr P 2025-08-13T20:07:39Z [verbose] 2025-08-13T20:07:39.789634540+00:00 stderr P DEL starting CNI request ContainerID:"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763" 
Netns:"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763;K8S_POD_UID=47a054e4-19c2-4c12-a054-fc5edc98978a" Path:"" 2025-08-13T20:07:39.789669071+00:00 stderr F 2025-08-13T20:07:39.811267440+00:00 stderr P 2025-08-13T20:07:39Z [verbose] 2025-08-13T20:07:39.811328592+00:00 stderr P Del: openshift-kube-apiserver:installer-11-crc:47a054e4-19c2-4c12-a054-fc5edc98978a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:39.811353123+00:00 stderr F 2025-08-13T20:07:40.270850337+00:00 stderr P 2025-08-13T20:07:40Z [verbose] 2025-08-13T20:07:40.270932819+00:00 stderr P DEL finished CNI request ContainerID:"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763" Netns:"/var/run/netns/c5cfd491-70bc-4ec7-991a-0855c95408c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763;K8S_POD_UID=47a054e4-19c2-4c12-a054-fc5edc98978a" Path:"", result: "", err: 2025-08-13T20:07:40.270960230+00:00 stderr F 2025-08-13T20:07:41.160427082+00:00 stderr P 2025-08-13T20:07:41Z [verbose] 2025-08-13T20:07:41.160537705+00:00 stderr P DEL starting CNI request ContainerID:"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7" Netns:"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-p7svp;K8S_POD_INFRA_CONTAINER_ID=4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7;K8S_POD_UID=8518239d-8dab-48ac-a3c1-e775566b9bff" Path:"" 2025-08-13T20:07:41.160629108+00:00 stderr F 2025-08-13T20:07:41.161322108+00:00 stderr P 2025-08-13T20:07:41Z [verbose] 2025-08-13T20:07:41.161356919+00:00 stderr P Del: openshift-marketplace:community-operators-p7svp:8518239d-8dab-48ac-a3c1-e775566b9bff:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:41.161380659+00:00 stderr F 2025-08-13T20:07:41.453042622+00:00 stderr P 2025-08-13T20:07:41Z [verbose] 2025-08-13T20:07:41.453105944+00:00 stderr P DEL finished CNI request ContainerID:"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7" Netns:"/var/run/netns/8b7eea37-2dbd-4df6-97f4-0127e310caf0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-p7svp;K8S_POD_INFRA_CONTAINER_ID=4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7;K8S_POD_UID=8518239d-8dab-48ac-a3c1-e775566b9bff" Path:"", result: "", err: 2025-08-13T20:07:41.453130364+00:00 stderr F 2025-08-13T20:07:59.167879871+00:00 stderr F 2025-08-13T20:07:59Z [verbose] DEL starting CNI request ContainerID:"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Netns:"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056;K8S_POD_UID=aca1f9ff-a685-4a78-b461-3931b757f754" Path:"" 2025-08-13T20:07:59.169982662+00:00 stderr F 
2025-08-13T20:07:59Z [verbose] Del: openshift-kube-scheduler:installer-8-crc:aca1f9ff-a685-4a78-b461-3931b757f754:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:59.413508534+00:00 stderr P 2025-08-13T20:07:59Z [verbose] 2025-08-13T20:07:59.413580176+00:00 stderr P DEL starting CNI request ContainerID:"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8" Netns:"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-pmqwc;K8S_POD_INFRA_CONTAINER_ID=3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8;K8S_POD_UID=0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" Path:"" 2025-08-13T20:07:59.413611157+00:00 stderr F 2025-08-13T20:07:59.416382616+00:00 stderr P 2025-08-13T20:07:59Z [verbose] 2025-08-13T20:07:59.416426968+00:00 stderr P Del: openshift-marketplace:redhat-operators-pmqwc:0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:07:59.416457268+00:00 stderr F 2025-08-13T20:07:59.503252977+00:00 stderr F 2025-08-13T20:07:59Z [verbose] DEL finished CNI request ContainerID:"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Netns:"/var/run/netns/f141d018-90a9-410f-8899-edd638325eac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-8-crc;K8S_POD_INFRA_CONTAINER_ID=d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056;K8S_POD_UID=aca1f9ff-a685-4a78-b461-3931b757f754" 
Path:"", result: "", err: 2025-08-13T20:07:59.646335739+00:00 stderr F 2025-08-13T20:07:59Z [verbose] DEL finished CNI request ContainerID:"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8" Netns:"/var/run/netns/bfd1eef8-c810-457e-84f5-906053b64ad0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-pmqwc;K8S_POD_INFRA_CONTAINER_ID=3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8;K8S_POD_UID=0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" Path:"", result: "", err: 2025-08-13T20:08:02.315139306+00:00 stderr P 2025-08-13T20:08:02Z [verbose] 2025-08-13T20:08:02.315255940+00:00 stderr P DEL starting CNI request ContainerID:"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Netns:"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31;K8S_POD_UID=a45bfab9-f78b-4d72-b5b7-903e60401124" Path:"" 2025-08-13T20:08:02.315282180+00:00 stderr F 2025-08-13T20:08:02.347661799+00:00 stderr P 2025-08-13T20:08:02Z [verbose] 2025-08-13T20:08:02.347721001+00:00 stderr P Del: openshift-kube-controller-manager:installer-11-crc:a45bfab9-f78b-4d72-b5b7-903e60401124:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:08:02.347754912+00:00 stderr F 2025-08-13T20:08:02.634587615+00:00 stderr P 2025-08-13T20:08:02Z [verbose] 2025-08-13T20:08:02.634645187+00:00 stderr P DEL finished CNI request ContainerID:"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Netns:"/var/run/netns/b1f9f948-5182-4a9d-864b-bda6943b9187" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-crc;K8S_POD_INFRA_CONTAINER_ID=8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31;K8S_POD_UID=a45bfab9-f78b-4d72-b5b7-903e60401124" Path:"", result: "", err: 2025-08-13T20:08:02.634669568+00:00 stderr F 2025-08-13T20:08:25.500144370+00:00 stderr P 2025-08-13T20:08:25Z [verbose] 2025-08-13T20:08:25.500255923+00:00 stderr P DEL starting CNI request ContainerID:"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Netns:"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-12-crc;K8S_POD_INFRA_CONTAINER_ID=afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309;K8S_POD_UID=3557248c-8f70-4165-aa66-8df983e7e01a" Path:"" 2025-08-13T20:08:25.500286974+00:00 stderr F 2025-08-13T20:08:25.501293113+00:00 stderr P 2025-08-13T20:08:25Z [verbose] 2025-08-13T20:08:25.501330984+00:00 stderr P Del: openshift-kube-apiserver:installer-12-crc:3557248c-8f70-4165-aa66-8df983e7e01a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:08:25.501355614+00:00 stderr F 2025-08-13T20:08:25.826326272+00:00 stderr P 2025-08-13T20:08:25Z [verbose] 2025-08-13T20:08:25.826536238+00:00 stderr P DEL finished CNI request ContainerID:"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309" Netns:"/var/run/netns/b78865c0-1c97-410c-9fce-84455b78845c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-12-crc;K8S_POD_INFRA_CONTAINER_ID=afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309;K8S_POD_UID=3557248c-8f70-4165-aa66-8df983e7e01a" Path:"", result: "", err: 
2025-08-13T20:08:25.826562909+00:00 stderr F 2025-08-13T20:11:00.076861257+00:00 stderr F 2025-08-13T20:11:00Z [verbose] DEL starting CNI request ContainerID:"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7" Netns:"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" 2025-08-13T20:11:00.078535555+00:00 stderr F 2025-08-13T20:11:00Z [verbose] Del: openshift-route-controller-manager:route-controller-manager-6884dcf749-n4qpx:becc7e17-2bc7-417d-832f-55127299d70f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:11:00.078535555+00:00 stderr F 2025-08-13T20:11:00Z [verbose] DEL starting CNI request ContainerID:"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb" Netns:"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" 2025-08-13T20:11:00.082829858+00:00 stderr F 2025-08-13T20:11:00Z [verbose] Del: openshift-controller-manager:controller-manager-598fc85fd4-8wlsm:8b8d1c48-5762-450f-bd4d-9134869f432b:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:11:00.299981454+00:00 stderr F 2025-08-13T20:11:00Z [verbose] DEL finished CNI request ContainerID:"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb" Netns:"/var/run/netns/90838034-5fbf-4da8-8c62-f815c2b7b742" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"", result: "", err: 2025-08-13T20:11:00.443386906+00:00 stderr F 2025-08-13T20:11:00Z [verbose] DEL finished CNI request ContainerID:"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7" Netns:"/var/run/netns/3b9627d6-1453-443c-ba95-69c7de74e340" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"", result: "", err: 2025-08-13T20:11:01.950907168+00:00 stderr F 2025-08-13T20:11:01Z [verbose] ADD starting CNI request ContainerID:"67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c" Netns:"/var/run/netns/7b782b33-396c-4f2e-9790-a15358281ab8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-778975cc4f-x5vcf;K8S_POD_INFRA_CONTAINER_ID=67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c;K8S_POD_UID=1a3e81c3-c292-4130-9436-f94062c91efd" Path:"" 2025-08-13T20:11:01.963896741+00:00 stderr F 2025-08-13T20:11:01Z [verbose] ADD starting CNI request 
ContainerID:"c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88" Netns:"/var/run/netns/3c1d86f6-868f-48ac-ae16-82b923dda727" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-776b8b7477-sfpvs;K8S_POD_INFRA_CONTAINER_ID=c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88;K8S_POD_UID=21d29937-debd-4407-b2b1-d1053cb0f342" Path:"" 2025-08-13T20:11:02.160903619+00:00 stderr F 2025-08-13T20:11:02Z [verbose] Add: openshift-controller-manager:controller-manager-778975cc4f-x5vcf:1a3e81c3-c292-4130-9436-f94062c91efd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"67a3c779a8c87e7","mac":"aa:6d:9a:25:77:61"},{"name":"eth0","mac":"0a:58:0a:d9:00:57","sandbox":"/var/run/netns/7b782b33-396c-4f2e-9790-a15358281ab8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.87/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:11:02.160903619+00:00 stderr F I0813 20:11:02.158865 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-778975cc4f-x5vcf", UID:"1a3e81c3-c292-4130-9436-f94062c91efd", APIVersion:"v1", ResourceVersion:"33335", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.87/23] from ovn-kubernetes 2025-08-13T20:11:02.189402916+00:00 stderr F 2025-08-13T20:11:02Z [verbose] ADD finished CNI request ContainerID:"67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c" Netns:"/var/run/netns/7b782b33-396c-4f2e-9790-a15358281ab8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-778975cc4f-x5vcf;K8S_POD_INFRA_CONTAINER_ID=67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c;K8S_POD_UID=1a3e81c3-c292-4130-9436-f94062c91efd" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"aa:6d:9a:25:77:61\",\"name\":\"67a3c779a8c87e7\"},{\"mac\":\"0a:58:0a:d9:00:57\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7b782b33-396c-4f2e-9790-a15358281ab8\"}],\"ips\":[{\"address\":\"10.217.0.87/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:11:02.247865802+00:00 stderr F 2025-08-13T20:11:02Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-776b8b7477-sfpvs:21d29937-debd-4407-b2b1-d1053cb0f342:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"c5bff19800c2cb5","mac":"ae:fe:5d:38:ef:2e"},{"name":"eth0","mac":"0a:58:0a:d9:00:58","sandbox":"/var/run/netns/3c1d86f6-868f-48ac-ae16-82b923dda727"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.88/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:11:02.248631374+00:00 stderr F I0813 20:11:02.248566 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-776b8b7477-sfpvs", UID:"21d29937-debd-4407-b2b1-d1053cb0f342", APIVersion:"v1", ResourceVersion:"33336", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.88/23] from ovn-kubernetes 2025-08-13T20:11:02.285459280+00:00 stderr P 2025-08-13T20:11:02Z [verbose] ADD finished CNI request ContainerID:"c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88" Netns:"/var/run/netns/3c1d86f6-868f-48ac-ae16-82b923dda727" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-776b8b7477-sfpvs;K8S_POD_INFRA_CONTAINER_ID=c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88;K8S_POD_UID=21d29937-debd-4407-b2b1-d1053cb0f342" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ae:fe:5d:38:ef:2e\",\"name\":\"c5bff19800c2cb5\"},{\"mac\":\"0a:58:0a:d9:00:58\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3c1d86f6-868f-48ac-ae16-82b923dda727\"}],\"ips\":[{\"address\":\"10.217.0.88/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:11:02.286108679+00:00 stderr F 2025-08-13T20:15:00.781729927+00:00 stderr F 2025-08-13T20:15:00Z [verbose] ADD starting CNI request ContainerID:"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Netns:"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251935-d7x6j;K8S_POD_INFRA_CONTAINER_ID=21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855;K8S_POD_UID=51936587-a4af-470d-ad92-8ab9062cbc72" Path:"" 2025-08-13T20:15:00.963120417+00:00 stderr F 2025-08-13T20:15:00Z [verbose] Add: openshift-operator-lifecycle-manager:collect-profiles-29251935-d7x6j:51936587-a4af-470d-ad92-8ab9062cbc72:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"21feea149913711","mac":"be:26:57:2f:28:77"},{"name":"eth0","mac":"0a:58:0a:d9:00:59","sandbox":"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.89/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:15:00.963591081+00:00 stderr F I0813 20:15:00.963481 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29251935-d7x6j", UID:"51936587-a4af-470d-ad92-8ab9062cbc72", APIVersion:"v1", ResourceVersion:"33816", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.89/23] from ovn-kubernetes 2025-08-13T20:15:01.019053031+00:00 stderr F 2025-08-13T20:15:01Z [verbose] ADD finished CNI request ContainerID:"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" 
Netns:"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251935-d7x6j;K8S_POD_INFRA_CONTAINER_ID=21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855;K8S_POD_UID=51936587-a4af-470d-ad92-8ab9062cbc72" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"be:26:57:2f:28:77\",\"name\":\"21feea149913711\"},{\"mac\":\"0a:58:0a:d9:00:59\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647\"}],\"ips\":[{\"address\":\"10.217.0.89/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:15:04.402616130+00:00 stderr F 2025-08-13T20:15:04Z [verbose] DEL starting CNI request ContainerID:"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Netns:"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251935-d7x6j;K8S_POD_INFRA_CONTAINER_ID=21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855;K8S_POD_UID=51936587-a4af-470d-ad92-8ab9062cbc72" Path:"" 2025-08-13T20:15:04.403546217+00:00 stderr F 2025-08-13T20:15:04Z [verbose] Del: openshift-operator-lifecycle-manager:collect-profiles-29251935-d7x6j:51936587-a4af-470d-ad92-8ab9062cbc72:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:15:04.593539334+00:00 stderr F 2025-08-13T20:15:04Z [verbose] DEL finished CNI request ContainerID:"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Netns:"/var/run/netns/f993d2a8-6c5a-40e6-91ea-0cbc25b63647" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251935-d7x6j;K8S_POD_INFRA_CONTAINER_ID=21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855;K8S_POD_UID=51936587-a4af-470d-ad92-8ab9062cbc72" Path:"", result: "", err: 2025-08-13T20:16:58.608948335+00:00 stderr F 2025-08-13T20:16:58Z [verbose] ADD starting CNI request ContainerID:"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f" Netns:"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8bbjz;K8S_POD_INFRA_CONTAINER_ID=18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f;K8S_POD_UID=8e241cc6-c71d-4fa0-9a1a-18098bcf6594" Path:"" 2025-08-13T20:16:58.796684776+00:00 stderr F 2025-08-13T20:16:58Z [verbose] Add: openshift-marketplace:certified-operators-8bbjz:8e241cc6-c71d-4fa0-9a1a-18098bcf6594:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"18af4daca70b211","mac":"9a:cd:12:c0:68:18"},{"name":"eth0","mac":"0a:58:0a:d9:00:5a","sandbox":"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.90/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:16:58.797372606+00:00 stderr F I0813 20:16:58.797261 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-8bbjz", UID:"8e241cc6-c71d-4fa0-9a1a-18098bcf6594", APIVersion:"v1", ResourceVersion:"34100", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.90/23] from ovn-kubernetes 2025-08-13T20:16:58.860039395+00:00 stderr F 2025-08-13T20:16:58Z [verbose] ADD finished CNI request ContainerID:"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f" Netns:"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8bbjz;K8S_POD_INFRA_CONTAINER_ID=18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f;K8S_POD_UID=8e241cc6-c71d-4fa0-9a1a-18098bcf6594" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:cd:12:c0:68:18\",\"name\":\"18af4daca70b211\"},{\"mac\":\"0a:58:0a:d9:00:5a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474\"}],\"ips\":[{\"address\":\"10.217.0.90/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:17:00.557066007+00:00 stderr P 2025-08-13T20:17:00Z [verbose] 2025-08-13T20:17:00.557183100+00:00 stderr P ADD starting CNI request ContainerID:"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439" Netns:"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nsk78;K8S_POD_INFRA_CONTAINER_ID=95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439;K8S_POD_UID=a084eaff-10e9-439e-96f3-f3450fb14db7" Path:"" 2025-08-13T20:17:00.557291693+00:00 stderr F 2025-08-13T20:17:00.797736670+00:00 stderr F 2025-08-13T20:17:00Z [verbose] Add: openshift-marketplace:redhat-marketplace-nsk78:a084eaff-10e9-439e-96f3-f3450fb14db7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"95f40ae6abffb8f","mac":"9e:b4:d7:fb:98:d3"},{"name":"eth0","mac":"0a:58:0a:d9:00:5b","sandbox":"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.91/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:17:00.797736670+00:00 stderr F I0813 20:17:00.796302 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-nsk78", UID:"a084eaff-10e9-439e-96f3-f3450fb14db7", APIVersion:"v1", ResourceVersion:"34117", FieldPath:""}): type: 'Normal' reason: 
'AddedInterface' Add eth0 [10.217.0.91/23] from ovn-kubernetes 2025-08-13T20:17:01.051256190+00:00 stderr P 2025-08-13T20:17:01Z [verbose] 2025-08-13T20:17:01.051408334+00:00 stderr P ADD finished CNI request ContainerID:"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439" Netns:"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nsk78;K8S_POD_INFRA_CONTAINER_ID=95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439;K8S_POD_UID=a084eaff-10e9-439e-96f3-f3450fb14db7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9e:b4:d7:fb:98:d3\",\"name\":\"95f40ae6abffb8f\"},{\"mac\":\"0a:58:0a:d9:00:5b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063\"}],\"ips\":[{\"address\":\"10.217.0.91/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:17:01.051495627+00:00 stderr F 2025-08-13T20:17:20.945952804+00:00 stderr F 2025-08-13T20:17:20Z [verbose] ADD starting CNI request ContainerID:"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5" Netns:"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-swl5s;K8S_POD_INFRA_CONTAINER_ID=011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5;K8S_POD_UID=407a8505-ab64-42f9-aa53-a63f8e97c189" Path:"" 2025-08-13T20:17:21.291695047+00:00 stderr P 2025-08-13T20:17:21Z [verbose] 2025-08-13T20:17:21.291757969+00:00 stderr P Add: openshift-marketplace:redhat-operators-swl5s:407a8505-ab64-42f9-aa53-a63f8e97c189:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"011ddcc3b1f8c14","mac":"9a:a7:0d:46:5e:25"},{"name":"eth0","mac":"0a:58:0a:d9:00:5c","sandbox":"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.92/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:17:21.291910763+00:00 stderr F 2025-08-13T20:17:21.293331844+00:00 stderr F I0813 20:17:21.292950 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-swl5s", UID:"407a8505-ab64-42f9-aa53-a63f8e97c189", APIVersion:"v1", ResourceVersion:"34179", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.92/23] from ovn-kubernetes 2025-08-13T20:17:21.607094354+00:00 stderr P 2025-08-13T20:17:21Z [verbose] 2025-08-13T20:17:21.607148426+00:00 stderr P ADD finished CNI request ContainerID:"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5" Netns:"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-swl5s;K8S_POD_INFRA_CONTAINER_ID=011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5;K8S_POD_UID=407a8505-ab64-42f9-aa53-a63f8e97c189" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"9a:a7:0d:46:5e:25\",\"name\":\"011ddcc3b1f8c14\"},{\"mac\":\"0a:58:0a:d9:00:5c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38\"}],\"ips\":[{\"address\":\"10.217.0.92/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:17:21.607191417+00:00 stderr F 2025-08-13T20:17:30.765245514+00:00 stderr P 2025-08-13T20:17:30Z [verbose] 2025-08-13T20:17:30.765323897+00:00 stderr P ADD starting CNI request ContainerID:"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790" Netns:"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-tfv59;K8S_POD_INFRA_CONTAINER_ID=b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790;K8S_POD_UID=718f06fe-dcad-4053-8de2-e2c38fb7503d" Path:"" 2025-08-13T20:17:30.765348217+00:00 stderr F 2025-08-13T20:17:31.059174718+00:00 stderr F 2025-08-13T20:17:31Z [verbose] Add: openshift-marketplace:community-operators-tfv59:718f06fe-dcad-4053-8de2-e2c38fb7503d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b983de43e586634","mac":"8e:ce:68:75:7d:52"},{"name":"eth0","mac":"0a:58:0a:d9:00:5d","sandbox":"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.93/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:17:31.059174718+00:00 stderr F I0813 20:17:31.056753 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-tfv59", UID:"718f06fe-dcad-4053-8de2-e2c38fb7503d", APIVersion:"v1", ResourceVersion:"34230", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.93/23] from ovn-kubernetes 2025-08-13T20:17:31.140038977+00:00 stderr F 2025-08-13T20:17:31Z [verbose] ADD finished CNI request ContainerID:"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790" Netns:"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-tfv59;K8S_POD_INFRA_CONTAINER_ID=b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790;K8S_POD_UID=718f06fe-dcad-4053-8de2-e2c38fb7503d" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"8e:ce:68:75:7d:52\",\"name\":\"b983de43e586634\"},{\"mac\":\"0a:58:0a:d9:00:5d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103\"}],\"ips\":[{\"address\":\"10.217.0.93/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:17:35.355986043+00:00 stderr F 2025-08-13T20:17:35Z [verbose] DEL starting CNI request ContainerID:"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439" Netns:"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nsk78;K8S_POD_INFRA_CONTAINER_ID=95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439;K8S_POD_UID=a084eaff-10e9-439e-96f3-f3450fb14db7" Path:"" 2025-08-13T20:17:35.356862828+00:00 stderr F 2025-08-13T20:17:35Z [verbose] Del: openshift-marketplace:redhat-marketplace-nsk78:a084eaff-10e9-439e-96f3-f3450fb14db7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:17:35.677012541+00:00 stderr F 2025-08-13T20:17:35Z [verbose] DEL finished CNI request ContainerID:"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439" Netns:"/var/run/netns/bfc0bac2-17b6-45ef-a113-6b252dd17063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nsk78;K8S_POD_INFRA_CONTAINER_ID=95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439;K8S_POD_UID=a084eaff-10e9-439e-96f3-f3450fb14db7" Path:"", result: "", err: 2025-08-13T20:17:42.940476414+00:00 stderr P 2025-08-13T20:17:42Z [verbose] 2025-08-13T20:17:42.940551276+00:00 stderr P DEL starting CNI request 
ContainerID:"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f" Netns:"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8bbjz;K8S_POD_INFRA_CONTAINER_ID=18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f;K8S_POD_UID=8e241cc6-c71d-4fa0-9a1a-18098bcf6594" Path:"" 2025-08-13T20:17:42.940576557+00:00 stderr F 2025-08-13T20:17:42.941672528+00:00 stderr P 2025-08-13T20:17:42Z [verbose] 2025-08-13T20:17:42.941711539+00:00 stderr P Del: openshift-marketplace:certified-operators-8bbjz:8e241cc6-c71d-4fa0-9a1a-18098bcf6594:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:17:42.941735740+00:00 stderr F 2025-08-13T20:17:43.130319066+00:00 stderr P 2025-08-13T20:17:43Z [verbose] 2025-08-13T20:17:43.130398278+00:00 stderr P DEL finished CNI request ContainerID:"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f" Netns:"/var/run/netns/266038b1-4f8d-4244-9273-e3cef9a4f474" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8bbjz;K8S_POD_INFRA_CONTAINER_ID=18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f;K8S_POD_UID=8e241cc6-c71d-4fa0-9a1a-18098bcf6594" Path:"", result: "", err: 2025-08-13T20:17:43.130429249+00:00 stderr F 2025-08-13T20:18:52.308718151+00:00 stderr P 2025-08-13T20:18:52Z [verbose] 2025-08-13T20:18:52.308860535+00:00 stderr P DEL starting CNI request ContainerID:"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790" Netns:"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-tfv59;K8S_POD_INFRA_CONTAINER_ID=b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790;K8S_POD_UID=718f06fe-dcad-4053-8de2-e2c38fb7503d" Path:"" 2025-08-13T20:18:52.308910326+00:00 stderr F 2025-08-13T20:18:52.322174545+00:00 stderr P 2025-08-13T20:18:52Z [verbose] 2025-08-13T20:18:52.322215476+00:00 stderr P Del: openshift-marketplace:community-operators-tfv59:718f06fe-dcad-4053-8de2-e2c38fb7503d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:18:52.322271608+00:00 stderr F 2025-08-13T20:18:54.275913969+00:00 stderr P 2025-08-13T20:18:54Z [verbose] 2025-08-13T20:18:54.276014152+00:00 stderr P DEL finished CNI request ContainerID:"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790" Netns:"/var/run/netns/281fd615-72a1-4ed5-bac0-895020cb2103" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-tfv59;K8S_POD_INFRA_CONTAINER_ID=b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790;K8S_POD_UID=718f06fe-dcad-4053-8de2-e2c38fb7503d" Path:"", result: "", err: 2025-08-13T20:18:54.276041162+00:00 stderr F 2025-08-13T20:19:34.296937659+00:00 stderr F 2025-08-13T20:19:34Z [verbose] DEL starting CNI request ContainerID:"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5" Netns:"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-swl5s;K8S_POD_INFRA_CONTAINER_ID=011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5;K8S_POD_UID=407a8505-ab64-42f9-aa53-a63f8e97c189" Path:"" 2025-08-13T20:19:34.300885202+00:00 stderr F 
2025-08-13T20:19:34Z [verbose] Del: openshift-marketplace:redhat-operators-swl5s:407a8505-ab64-42f9-aa53-a63f8e97c189:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:19:34.534297221+00:00 stderr F 2025-08-13T20:19:34Z [verbose] DEL finished CNI request ContainerID:"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5" Netns:"/var/run/netns/532cbd77-cc65-4c3b-a882-d79634b87a38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-swl5s;K8S_POD_INFRA_CONTAINER_ID=011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5;K8S_POD_UID=407a8505-ab64-42f9-aa53-a63f8e97c189" Path:"", result: "", err: 2025-08-13T20:27:06.122840134+00:00 stderr F 2025-08-13T20:27:06Z [verbose] ADD starting CNI request ContainerID:"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0" Netns:"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jbzn9;K8S_POD_INFRA_CONTAINER_ID=65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0;K8S_POD_UID=b152b92f-8fab-4b74-8e68-00278380759d" Path:"" 2025-08-13T20:27:06.262561679+00:00 stderr F 2025-08-13T20:27:06Z [verbose] ADD starting CNI request ContainerID:"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e" Netns:"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-xldzg;K8S_POD_INFRA_CONTAINER_ID=d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e;K8S_POD_UID=926ac7a4-e156-4e71-9681-7a48897402eb" Path:"" 2025-08-13T20:27:06.458150272+00:00 stderr F 2025-08-13T20:27:06Z [verbose] 
Add: openshift-marketplace:redhat-marketplace-jbzn9:b152b92f-8fab-4b74-8e68-00278380759d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"65efa81c3e0e120","mac":"e6:4c:64:f5:01:ca"},{"name":"eth0","mac":"0a:58:0a:d9:00:5e","sandbox":"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.94/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:27:06.479224494+00:00 stderr F I0813 20:27:06.473549 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-jbzn9", UID:"b152b92f-8fab-4b74-8e68-00278380759d", APIVersion:"v1", ResourceVersion:"35401", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.94/23] from ovn-kubernetes 2025-08-13T20:27:06.522091700+00:00 stderr F 2025-08-13T20:27:06Z [verbose] ADD finished CNI request ContainerID:"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0" Netns:"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jbzn9;K8S_POD_INFRA_CONTAINER_ID=65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0;K8S_POD_UID=b152b92f-8fab-4b74-8e68-00278380759d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"e6:4c:64:f5:01:ca\",\"name\":\"65efa81c3e0e120\"},{\"mac\":\"0a:58:0a:d9:00:5e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b\"}],\"ips\":[{\"address\":\"10.217.0.94/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:27:06.569207297+00:00 stderr F 2025-08-13T20:27:06Z [verbose] Add: openshift-marketplace:certified-operators-xldzg:926ac7a4-e156-4e71-9681-7a48897402eb:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"d26f242e575b9e4","mac":"12:4b:e0:9e:55:86"},{"name":"eth0","mac":"0a:58:0a:d9:00:5f","sandbox":"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.95/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:27:06.569458594+00:00 stderr F I0813 20:27:06.569413 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-xldzg", UID:"926ac7a4-e156-4e71-9681-7a48897402eb", APIVersion:"v1", ResourceVersion:"35408", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.95/23] from ovn-kubernetes 2025-08-13T20:27:06.626562697+00:00 stderr F 2025-08-13T20:27:06Z [verbose] ADD finished CNI request ContainerID:"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e" Netns:"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-xldzg;K8S_POD_INFRA_CONTAINER_ID=d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e;K8S_POD_UID=926ac7a4-e156-4e71-9681-7a48897402eb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"12:4b:e0:9e:55:86\",\"name\":\"d26f242e575b9e4\"},{\"mac\":\"0a:58:0a:d9:00:5f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79\"}],\"ips\":[{\"address\":\"10.217.0.95/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:27:29.231263536+00:00 stderr F 2025-08-13T20:27:29Z [verbose] DEL starting CNI request ContainerID:"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e" Netns:"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-xldzg;K8S_POD_INFRA_CONTAINER_ID=d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e;K8S_POD_UID=926ac7a4-e156-4e71-9681-7a48897402eb" Path:"" 2025-08-13T20:27:29.236216528+00:00 stderr F 2025-08-13T20:27:29Z [verbose] Del: openshift-marketplace:certified-operators-xldzg:926ac7a4-e156-4e71-9681-7a48897402eb:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:27:29.284492108+00:00 stderr F 2025-08-13T20:27:29Z [verbose] DEL starting CNI request ContainerID:"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0" Netns:"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jbzn9;K8S_POD_INFRA_CONTAINER_ID=65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0;K8S_POD_UID=b152b92f-8fab-4b74-8e68-00278380759d" Path:"" 2025-08-13T20:27:29.284642242+00:00 stderr F 2025-08-13T20:27:29Z [verbose] Del: openshift-marketplace:redhat-marketplace-jbzn9:b152b92f-8fab-4b74-8e68-00278380759d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:27:29.434254651+00:00 stderr F 2025-08-13T20:27:29Z [verbose] DEL finished CNI request ContainerID:"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e" Netns:"/var/run/netns/53c62a5d-26ea-42f7-9577-906284d26f79" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-xldzg;K8S_POD_INFRA_CONTAINER_ID=d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e;K8S_POD_UID=926ac7a4-e156-4e71-9681-7a48897402eb" Path:"", result: "", err: 2025-08-13T20:27:29.494300477+00:00 stderr F 2025-08-13T20:27:29Z [verbose] DEL finished CNI request ContainerID:"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0" Netns:"/var/run/netns/f8d0473d-1fd4-4d95-bae4-0a4398be087b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-jbzn9;K8S_POD_INFRA_CONTAINER_ID=65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0;K8S_POD_UID=b152b92f-8fab-4b74-8e68-00278380759d" Path:"", result: "", err: 2025-08-13T20:28:43.767683367+00:00 stderr F 2025-08-13T20:28:43Z [verbose] ADD starting CNI request ContainerID:"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af" Netns:"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-hvwvm;K8S_POD_INFRA_CONTAINER_ID=786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af;K8S_POD_UID=bfb8fd54-a923-43fe-a0f5-bc4066352d71" Path:"" 2025-08-13T20:28:44.030064569+00:00 stderr F 2025-08-13T20:28:44Z [verbose] Add: openshift-marketplace:community-operators-hvwvm:bfb8fd54-a923-43fe-a0f5-bc4066352d71:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"786926dc94686ef","mac":"f2:1e:d5:6b:2f:19"},{"name":"eth0","mac":"0a:58:0a:d9:00:60","sandbox":"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.96/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:28:44.030692537+00:00 stderr F I0813 20:28:44.030422 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-hvwvm", 
UID:"bfb8fd54-a923-43fe-a0f5-bc4066352d71", APIVersion:"v1", ResourceVersion:"35629", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.96/23] from ovn-kubernetes 2025-08-13T20:28:44.068653379+00:00 stderr F 2025-08-13T20:28:44Z [verbose] ADD finished CNI request ContainerID:"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af" Netns:"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-hvwvm;K8S_POD_INFRA_CONTAINER_ID=786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af;K8S_POD_UID=bfb8fd54-a923-43fe-a0f5-bc4066352d71" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"f2:1e:d5:6b:2f:19\",\"name\":\"786926dc94686ef\"},{\"mac\":\"0a:58:0a:d9:00:60\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610\"}],\"ips\":[{\"address\":\"10.217.0.96/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:29:06.009323725+00:00 stderr P 2025-08-13T20:29:06Z [verbose] 2025-08-13T20:29:06.009411328+00:00 stderr P DEL starting CNI request ContainerID:"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af" Netns:"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-hvwvm;K8S_POD_INFRA_CONTAINER_ID=786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af;K8S_POD_UID=bfb8fd54-a923-43fe-a0f5-bc4066352d71" Path:"" 2025-08-13T20:29:06.009436298+00:00 stderr F 2025-08-13T20:29:06.010357595+00:00 stderr P 2025-08-13T20:29:06Z [verbose] 2025-08-13T20:29:06.010394066+00:00 stderr P Del: openshift-marketplace:community-operators-hvwvm:bfb8fd54-a923-43fe-a0f5-bc4066352d71:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:29:06.010418947+00:00 stderr F 2025-08-13T20:29:06.211321622+00:00 stderr P 2025-08-13T20:29:06Z [verbose] 2025-08-13T20:29:06.211421805+00:00 stderr P DEL finished CNI request ContainerID:"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af" Netns:"/var/run/netns/1e0f12a7-9b7a-46a2-b454-6eae53051610" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-hvwvm;K8S_POD_INFRA_CONTAINER_ID=786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af;K8S_POD_UID=bfb8fd54-a923-43fe-a0f5-bc4066352d71" Path:"", result: "", err: 2025-08-13T20:29:06.211461296+00:00 stderr F 2025-08-13T20:29:30.524869297+00:00 stderr F 2025-08-13T20:29:30Z [verbose] ADD starting CNI request ContainerID:"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664" Netns:"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zdwjn;K8S_POD_INFRA_CONTAINER_ID=3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664;K8S_POD_UID=6d579e1a-3b27-4c1f-9175-42ac58490d42" Path:"" 2025-08-13T20:29:30.754850807+00:00 stderr P 2025-08-13T20:29:30Z [verbose] 2025-08-13T20:29:30.754914439+00:00 stderr P Add: openshift-marketplace:redhat-operators-zdwjn:6d579e1a-3b27-4c1f-9175-42ac58490d42:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3fdb2c96a67c002","mac":"06:8b:ac:a2:0e:cc"},{"name":"eth0","mac":"0a:58:0a:d9:00:61","sandbox":"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.97/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:29:30.754940140+00:00 stderr F 
2025-08-13T20:29:30.762887008+00:00 stderr F I0813 20:29:30.762765 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-zdwjn", UID:"6d579e1a-3b27-4c1f-9175-42ac58490d42", APIVersion:"v1", ResourceVersion:"35763", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.97/23] from ovn-kubernetes 2025-08-13T20:29:30.797250476+00:00 stderr F 2025-08-13T20:29:30Z [verbose] ADD finished CNI request ContainerID:"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664" Netns:"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zdwjn;K8S_POD_INFRA_CONTAINER_ID=3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664;K8S_POD_UID=6d579e1a-3b27-4c1f-9175-42ac58490d42" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"06:8b:ac:a2:0e:cc\",\"name\":\"3fdb2c96a67c002\"},{\"mac\":\"0a:58:0a:d9:00:61\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d\"}],\"ips\":[{\"address\":\"10.217.0.97/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:30:02.419847222+00:00 stderr P 2025-08-13T20:30:02Z [verbose] 2025-08-13T20:30:02.420056688+00:00 stderr P ADD starting CNI request ContainerID:"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Netns:"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251950-x8jjd;K8S_POD_INFRA_CONTAINER_ID=61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9;K8S_POD_UID=ad171c4b-8408-4370-8e86-502999788ddb" Path:"" 2025-08-13T20:30:02.420093569+00:00 stderr F 2025-08-13T20:30:02.778196933+00:00 stderr P 2025-08-13T20:30:02Z [verbose] 2025-08-13T20:30:02.778271045+00:00 stderr P Add: 
openshift-operator-lifecycle-manager:collect-profiles-29251950-x8jjd:ad171c4b-8408-4370-8e86-502999788ddb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"61f39a784f23d0e","mac":"da:99:8d:d0:4e:9c"},{"name":"eth0","mac":"0a:58:0a:d9:00:62","sandbox":"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.98/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:30:02.778296465+00:00 stderr F 2025-08-13T20:30:02.779130789+00:00 stderr F I0813 20:30:02.779084 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29251950-x8jjd", UID:"ad171c4b-8408-4370-8e86-502999788ddb", APIVersion:"v1", ResourceVersion:"35844", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.98/23] from ovn-kubernetes 2025-08-13T20:30:02.811261253+00:00 stderr P 2025-08-13T20:30:02Z [verbose] 2025-08-13T20:30:02.811543541+00:00 stderr P ADD finished CNI request ContainerID:"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Netns:"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251950-x8jjd;K8S_POD_INFRA_CONTAINER_ID=61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9;K8S_POD_UID=ad171c4b-8408-4370-8e86-502999788ddb" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"da:99:8d:d0:4e:9c\",\"name\":\"61f39a784f23d0e\"},{\"mac\":\"0a:58:0a:d9:00:62\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c\"}],\"ips\":[{\"address\":\"10.217.0.98/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:30:02.811577742+00:00 stderr F 2025-08-13T20:30:06.544162270+00:00 stderr P 2025-08-13T20:30:06Z [verbose] 2025-08-13T20:30:06.544238552+00:00 stderr P DEL starting CNI request 
ContainerID:"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Netns:"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251950-x8jjd;K8S_POD_INFRA_CONTAINER_ID=61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9;K8S_POD_UID=ad171c4b-8408-4370-8e86-502999788ddb" Path:"" 2025-08-13T20:30:06.544263533+00:00 stderr F 2025-08-13T20:30:06.545051676+00:00 stderr P 2025-08-13T20:30:06Z [verbose] 2025-08-13T20:30:06.545090917+00:00 stderr P Del: openshift-operator-lifecycle-manager:collect-profiles-29251950-x8jjd:ad171c4b-8408-4370-8e86-502999788ddb:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:30:06.545115287+00:00 stderr F 2025-08-13T20:30:06.810139966+00:00 stderr P 2025-08-13T20:30:06Z [verbose] 2025-08-13T20:30:06.810241489+00:00 stderr P DEL finished CNI request ContainerID:"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9" Netns:"/var/run/netns/7b94eab6-f613-47c5-b591-462836b19b9c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29251950-x8jjd;K8S_POD_INFRA_CONTAINER_ID=61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9;K8S_POD_UID=ad171c4b-8408-4370-8e86-502999788ddb" Path:"", result: "", err: 2025-08-13T20:30:06.810291930+00:00 stderr F 2025-08-13T20:30:32.652453365+00:00 stderr P 2025-08-13T20:30:32Z [verbose] 2025-08-13T20:30:32.652542547+00:00 stderr P DEL starting CNI request ContainerID:"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664" Netns:"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zdwjn;K8S_POD_INFRA_CONTAINER_ID=3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664;K8S_POD_UID=6d579e1a-3b27-4c1f-9175-42ac58490d42" Path:"" 2025-08-13T20:30:32.652567668+00:00 stderr F 2025-08-13T20:30:32.654540215+00:00 stderr P 2025-08-13T20:30:32Z [verbose] 2025-08-13T20:30:32.654575346+00:00 stderr P Del: openshift-marketplace:redhat-operators-zdwjn:6d579e1a-3b27-4c1f-9175-42ac58490d42:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:30:32.654599396+00:00 stderr F 2025-08-13T20:30:32.870968506+00:00 stderr F 2025-08-13T20:30:32Z [verbose] DEL finished CNI request ContainerID:"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664" Netns:"/var/run/netns/8b542f5e-b37c-4ba6-a2cd-003f00a0be1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zdwjn;K8S_POD_INFRA_CONTAINER_ID=3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664;K8S_POD_UID=6d579e1a-3b27-4c1f-9175-42ac58490d42" Path:"", result: "", err: 2025-08-13T20:37:48.650104847+00:00 stderr F 2025-08-13T20:37:48Z [verbose] ADD starting CNI request ContainerID:"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973" Netns:"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nkzlk;K8S_POD_INFRA_CONTAINER_ID=316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973;K8S_POD_UID=afc02c17-9714-426d-aafa-ee58c673ab0c" Path:"" 2025-08-13T20:37:48.861010088+00:00 stderr F 2025-08-13T20:37:48Z [verbose] Add: 
openshift-marketplace:redhat-marketplace-nkzlk:afc02c17-9714-426d-aafa-ee58c673ab0c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"316cb50fa85ce61","mac":"26:aa:67:d6:f6:bb"},{"name":"eth0","mac":"0a:58:0a:d9:00:63","sandbox":"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.99/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:37:48.862828720+00:00 stderr F I0813 20:37:48.861954 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-nkzlk", UID:"afc02c17-9714-426d-aafa-ee58c673ab0c", APIVersion:"v1", ResourceVersion:"36823", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.99/23] from ovn-kubernetes 2025-08-13T20:37:48.903530504+00:00 stderr F 2025-08-13T20:37:48Z [verbose] ADD finished CNI request ContainerID:"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973" Netns:"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nkzlk;K8S_POD_INFRA_CONTAINER_ID=316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973;K8S_POD_UID=afc02c17-9714-426d-aafa-ee58c673ab0c" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"26:aa:67:d6:f6:bb\",\"name\":\"316cb50fa85ce61\"},{\"mac\":\"0a:58:0a:d9:00:63\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44\"}],\"ips\":[{\"address\":\"10.217.0.99/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:38:08.940938186+00:00 stderr F 2025-08-13T20:38:08Z [verbose] DEL starting CNI request ContainerID:"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973" Netns:"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nkzlk;K8S_POD_INFRA_CONTAINER_ID=316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973;K8S_POD_UID=afc02c17-9714-426d-aafa-ee58c673ab0c" Path:"" 2025-08-13T20:38:08.941644556+00:00 stderr F 2025-08-13T20:38:08Z [verbose] Del: openshift-marketplace:redhat-marketplace-nkzlk:afc02c17-9714-426d-aafa-ee58c673ab0c:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:38:09.132340454+00:00 stderr F 2025-08-13T20:38:09Z [verbose] DEL finished CNI request ContainerID:"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973" Netns:"/var/run/netns/526cbd5c-b3c2-4dba-819c-ba400a719e44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nkzlk;K8S_POD_INFRA_CONTAINER_ID=316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973;K8S_POD_UID=afc02c17-9714-426d-aafa-ee58c673ab0c" Path:"", result: "", err: 2025-08-13T20:38:36.497725067+00:00 stderr F 2025-08-13T20:38:36Z [verbose] ADD starting CNI request ContainerID:"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0" Netns:"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-4kmbv;K8S_POD_INFRA_CONTAINER_ID=48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0;K8S_POD_UID=847e60dc-7a0a-4115-a7e1-356476e319e7" Path:"" 2025-08-13T20:38:36.719000537+00:00 stderr F 2025-08-13T20:38:36Z [verbose] Add: openshift-marketplace:certified-operators-4kmbv:847e60dc-7a0a-4115-a7e1-356476e319e7:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"48a72e1ed96b8c0","mac":"ba:71:ca:85:15:a1"},{"name":"eth0","mac":"0a:58:0a:d9:00:64","sandbox":"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.100/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:38:36.720045557+00:00 stderr F I0813 20:38:36.719870 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-4kmbv", UID:"847e60dc-7a0a-4115-a7e1-356476e319e7", APIVersion:"v1", ResourceVersion:"36922", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.100/23] from ovn-kubernetes 2025-08-13T20:38:36.806263303+00:00 stderr P 2025-08-13T20:38:36Z [verbose] 2025-08-13T20:38:36.806353615+00:00 stderr P ADD finished CNI request ContainerID:"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0" Netns:"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-4kmbv;K8S_POD_INFRA_CONTAINER_ID=48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0;K8S_POD_UID=847e60dc-7a0a-4115-a7e1-356476e319e7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ba:71:ca:85:15:a1\",\"name\":\"48a72e1ed96b8c0\"},{\"mac\":\"0a:58:0a:d9:00:64\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18\"}],\"ips\":[{\"address\":\"10.217.0.100/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:38:36.806379876+00:00 stderr F 2025-08-13T20:38:57.274709039+00:00 stderr F 2025-08-13T20:38:57Z [verbose] DEL starting CNI request ContainerID:"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0" Netns:"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-4kmbv;K8S_POD_INFRA_CONTAINER_ID=48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0;K8S_POD_UID=847e60dc-7a0a-4115-a7e1-356476e319e7" Path:"" 2025-08-13T20:38:57.275727299+00:00 stderr F 2025-08-13T20:38:57Z [verbose] Del: openshift-marketplace:certified-operators-4kmbv:847e60dc-7a0a-4115-a7e1-356476e319e7:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:38:57.457966003+00:00 stderr F 2025-08-13T20:38:57Z [verbose] DEL finished CNI request ContainerID:"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0" Netns:"/var/run/netns/79131e3e-a4d7-46c4-b812-b1c65ea76c18" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-4kmbv;K8S_POD_INFRA_CONTAINER_ID=48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0;K8S_POD_UID=847e60dc-7a0a-4115-a7e1-356476e319e7" Path:"", result: "", err: 2025-08-13T20:41:21.899190984+00:00 stderr P 2025-08-13T20:41:21Z [verbose] 2025-08-13T20:41:21.899407450+00:00 stderr P ADD starting CNI request ContainerID:"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c" Netns:"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-k2tgr;K8S_POD_INFRA_CONTAINER_ID=b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c;K8S_POD_UID=58e4f786-ee2a-45c4-83a4-523611d1eccd" Path:"" 2025-08-13T20:41:21.899451962+00:00 stderr F 2025-08-13T20:41:22.168813287+00:00 stderr F 2025-08-13T20:41:22Z [verbose] Add: 
openshift-marketplace:redhat-operators-k2tgr:58e4f786-ee2a-45c4-83a4-523611d1eccd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b07b3fcd02d69d1","mac":"6e:99:82:6e:a6:ca"},{"name":"eth0","mac":"0a:58:0a:d9:00:65","sandbox":"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.101/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:41:22.170555268+00:00 stderr F I0813 20:41:22.170374 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-k2tgr", UID:"58e4f786-ee2a-45c4-83a4-523611d1eccd", APIVersion:"v1", ResourceVersion:"37281", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.101/23] from ovn-kubernetes 2025-08-13T20:41:22.217286985+00:00 stderr F 2025-08-13T20:41:22Z [verbose] ADD finished CNI request ContainerID:"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c" Netns:"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-k2tgr;K8S_POD_INFRA_CONTAINER_ID=b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c;K8S_POD_UID=58e4f786-ee2a-45c4-83a4-523611d1eccd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6e:99:82:6e:a6:ca\",\"name\":\"b07b3fcd02d69d1\"},{\"mac\":\"0a:58:0a:d9:00:65\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167\"}],\"ips\":[{\"address\":\"10.217.0.101/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:42:13.827600468+00:00 stderr F 2025-08-13T20:42:13Z [verbose] DEL starting CNI request ContainerID:"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c" Netns:"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-k2tgr;K8S_POD_INFRA_CONTAINER_ID=b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c;K8S_POD_UID=58e4f786-ee2a-45c4-83a4-523611d1eccd" Path:"" 2025-08-13T20:42:13.830067749+00:00 stderr F 2025-08-13T20:42:13Z [verbose] Del: openshift-marketplace:redhat-operators-k2tgr:58e4f786-ee2a-45c4-83a4-523611d1eccd:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-08-13T20:42:14.179240226+00:00 stderr F 2025-08-13T20:42:14Z [verbose] DEL finished CNI request ContainerID:"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c" Netns:"/var/run/netns/03d3e0f3-a040-4e17-a014-8ff1d38e5167" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-k2tgr;K8S_POD_INFRA_CONTAINER_ID=b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c;K8S_POD_UID=58e4f786-ee2a-45c4-83a4-523611d1eccd" Path:"", result: "", err: 2025-08-13T20:42:27.029073457+00:00 stderr P 2025-08-13T20:42:27Z [verbose] 2025-08-13T20:42:27.029266502+00:00 stderr P ADD starting CNI request ContainerID:"b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a" Netns:"/var/run/netns/80c609b5-77e3-47c8-8037-c2c5aeb88310" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"" 2025-08-13T20:42:27.029306213+00:00 stderr F 2025-08-13T20:42:27.856884263+00:00 stderr F 2025-08-13T20:42:27Z [verbose] Add: 
openshift-marketplace:community-operators-sdddl:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b4ce7c1e13297d1","mac":"be:61:47:79:5b:8f"},{"name":"eth0","mac":"0a:58:0a:d9:00:66","sandbox":"/var/run/netns/80c609b5-77e3-47c8-8037-c2c5aeb88310"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.102/23","gateway":"10.217.0.1"}],"dns":{}} 2025-08-13T20:42:27.863480073+00:00 stderr F I0813 20:42:27.859848 23104 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-sdddl", UID:"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760", APIVersion:"v1", ResourceVersion:"37479", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.102/23] from ovn-kubernetes 2025-08-13T20:42:27.896369491+00:00 stderr F 2025-08-13T20:42:27Z [verbose] ADD finished CNI request ContainerID:"b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a" Netns:"/var/run/netns/80c609b5-77e3-47c8-8037-c2c5aeb88310" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"be:61:47:79:5b:8f\",\"name\":\"b4ce7c1e13297d1\"},{\"mac\":\"0a:58:0a:d9:00:66\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/80c609b5-77e3-47c8-8037-c2c5aeb88310\"}],\"ips\":[{\"address\":\"10.217.0.102/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-08-13T20:42:44.710914328+00:00 stderr P 2025-08-13T20:42:44Z [verbose] 2025-08-13T20:42:44.712022420+00:00 stderr F caught terminated, stopping... 2025-08-13T20:42:44.713590275+00:00 stderr F 2025-08-13T20:42:44Z [verbose] Stopped monitoring, closing channel ... 
2025-08-13T20:42:44.718194688+00:00 stderr F 2025-08-13T20:42:44Z [verbose] ConfigWatcher done 2025-08-13T20:42:44.718194688+00:00 stderr F 2025-08-13T20:42:44Z [verbose] Delete old config @ /host/etc/cni/net.d/00-multus.conf 2025-08-13T20:42:44.718472186+00:00 stderr F 2025-08-13T20:42:44Z [verbose] multus daemon is exited ././@LongLink0000644000000000000000000000024300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000157215117130646033242 0ustar zuulzuul2025-08-13T19:55:29.808124912+00:00 stdout F 2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 2025-08-13T19:55:29.859028584+00:00 stdout F 2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/ 2025-08-13T19:55:29.894323461+00:00 stderr F 2025-08-13T19:55:29Z [verbose] multus-daemon started 2025-08-13T19:55:29.894323461+00:00 stderr F 2025-08-13T19:55:29Z [verbose] Readiness Indicator file check 2025-08-13T19:56:14.896151827+00:00 stderr F 2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition ././@LongLink0000644000000000000000000000024300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/8.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000050012015117130646033233 0ustar zuulzuul2025-12-13T00:12:50.778476980+00:00 stdout F 2025-12-13T00:12:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2c23ed54-3b26-4976-a0e3-acd0ed300f54 2025-12-13T00:12:50.859862314+00:00 stdout F 2025-12-13T00:12:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2c23ed54-3b26-4976-a0e3-acd0ed300f54 to /host/opt/cni/bin/ 2025-12-13T00:12:50.885354715+00:00 stderr F 2025-12-13T00:12:50Z [verbose] multus-daemon started 2025-12-13T00:12:50.885354715+00:00 stderr F 2025-12-13T00:12:50Z [verbose] Readiness Indicator file check 2025-12-13T00:12:50.885468878+00:00 stderr F 2025-12-13T00:12:50Z [verbose] Readiness Indicator file check done! 2025-12-13T00:12:50.887657445+00:00 stderr F I1213 00:12:50.887612 8420 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2025-12-13T00:12:50.888158908+00:00 stderr F 2025-12-13T00:12:50Z [verbose] Waiting for certificate 2025-12-13T00:12:51.888473734+00:00 stderr F I1213 00:12:51.888341 8420 certificate_store.go:130] Loading cert/key pair from "/etc/cni/multus/certs/multus-client-current.pem". 2025-12-13T00:12:51.888710441+00:00 stderr F 2025-12-13T00:12:51Z [verbose] Certificate found! 
2025-12-13T00:12:51.889474131+00:00 stderr F 2025-12-13T00:12:51Z [verbose] server configured with chroot: /hostroot 2025-12-13T00:12:51.889474131+00:00 stderr F 2025-12-13T00:12:51Z [verbose] Filtering pod watch for node "crc" 2025-12-13T00:12:51.991446538+00:00 stderr P 2025-12-13T00:12:51Z [verbose] 2025-12-13T00:12:51.991540671+00:00 stderr P API readiness check 2025-12-13T00:12:51.991572931+00:00 stderr F 2025-12-13T00:12:51.992330212+00:00 stderr F 2025-12-13T00:12:51Z [verbose] API readiness check done! 2025-12-13T00:12:51.992704561+00:00 stderr F 2025-12-13T00:12:51Z [verbose] Generated MultusCNI config: {"binDir":"/var/lib/cni/bin","cniVersion":"0.3.1","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","namespaceIsolation":true,"globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","type":"multus-shim","daemonSocketDir":"/run/multus/socket"} 2025-12-13T00:12:51.993073331+00:00 stderr F 2025-12-13T00:12:51Z [verbose] started to watch file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf 2025-12-13T00:13:10.678697620+00:00 stderr F 2025-12-13T00:13:10Z [verbose] ADD starting CNI request ContainerID:"07ba5aad8c5b88d1090aba13b5c78fa7cde288277aa275b0f47ef599ede5a2d5" Netns:"/var/run/netns/1505d5b3-f21b-4e23-8333-cde8c9ab67a4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=07ba5aad8c5b88d1090aba13b5c78fa7cde288277aa275b0f47ef599ede5a2d5;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"" 2025-12-13T00:13:10.681927468+00:00 stderr F 2025-12-13T00:13:10Z [verbose] ADD starting CNI request ContainerID:"3951bf389c34415eb41238b05f9c3f8833a20e37e08e7856b7e53cda0cdf7cc0" Netns:"/var/run/netns/fc9b608a-947f-4790-8b21-83f7f8f8da77" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=3951bf389c34415eb41238b05f9c3f8833a20e37e08e7856b7e53cda0cdf7cc0;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"" 2025-12-13T00:13:10.700571815+00:00 stderr F 2025-12-13T00:13:10Z [verbose] ADD starting CNI request ContainerID:"f2f014941774dd8ebcb5653d144f7f8ac118b640529f0329d3583008b31a8e5b" Netns:"/var/run/netns/bc09b23e-1b48-46bf-9f17-dfff908746d1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=f2f014941774dd8ebcb5653d144f7f8ac118b640529f0329d3583008b31a8e5b;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"" 2025-12-13T00:13:10.711255344+00:00 stderr F 2025-12-13T00:13:10Z [verbose] ADD starting CNI request ContainerID:"023d056511fb885f2bc62752e8fc1a0d27ef6babe1cdccd27d12b854fbade3fc" Netns:"/var/run/netns/cb3d4aa0-4b89-4afc-bac1-11461c057267" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=023d056511fb885f2bc62752e8fc1a0d27ef6babe1cdccd27d12b854fbade3fc;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"" 2025-12-13T00:13:10.800404229+00:00 stderr F 2025-12-13T00:13:10Z [verbose] ADD starting CNI request ContainerID:"3d6e166fed28a3e6b6cc681f8b6c8a59043b3add383c503e2334c8f30583f90e" Netns:"/var/run/netns/15880610-41e2-45d7-9664-b6c00be31c17" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=3d6e166fed28a3e6b6cc681f8b6c8a59043b3add383c503e2334c8f30583f90e;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"" 2025-12-13T00:13:10.802201330+00:00 stderr F 2025-12-13T00:13:10Z [verbose] ADD starting CNI request ContainerID:"a9511a5d94287af21a200f5271a99bc54ec6bc1e0f89ea57246a82b2bff3229c" 
Netns:"/var/run/netns/29ea4c92-6607-49d2-a2b7-02d64d99ef4b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=a9511a5d94287af21a200f5271a99bc54ec6bc1e0f89ea57246a82b2bff3229c;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"" 2025-12-13T00:13:10.804107874+00:00 stderr F 2025-12-13T00:13:10Z [verbose] ADD starting CNI request ContainerID:"a8c83f18faa15f6df6226f364ac00f3a09c95f79a750230c2ca3078f0b06d172" Netns:"/var/run/netns/17c687f6-f643-49b0-b317-65ae96f88952" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=a8c83f18faa15f6df6226f364ac00f3a09c95f79a750230c2ca3078f0b06d172;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"" 2025-12-13T00:13:10.815579099+00:00 stderr F 2025-12-13T00:13:10Z [verbose] ADD starting CNI request ContainerID:"85c1f092a9e274b186f46a7e431d4932181643f36fe1a39db744cc085e04295c" Netns:"/var/run/netns/c9ec0f9d-51ae-4bf4-ab08-67e7a7020473" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=85c1f092a9e274b186f46a7e431d4932181643f36fe1a39db744cc085e04295c;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"" 2025-12-13T00:13:10.816158059+00:00 stderr F 2025-12-13T00:13:10Z [verbose] ADD starting CNI request ContainerID:"31c30f7e9c0640f9acb17a855004a8cb4f32333c72b230eb85e614bdf2c07efe" Netns:"/var/run/netns/3cbe75de-90d3-4ce6-b460-fce7d26d4ac3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=31c30f7e9c0640f9acb17a855004a8cb4f32333c72b230eb85e614bdf2c07efe;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"" 2025-12-13T00:13:10.816645365+00:00 stderr F 2025-12-13T00:13:10Z [verbose] ADD 
starting CNI request ContainerID:"a3f3493cadb8dfa346e42c025f7631cf8c99d5d4eaaa56bd5916a9908a726e6b" Netns:"/var/run/netns/bdef2263-4fb0-4173-bb14-4523b2d9055a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=a3f3493cadb8dfa346e42c025f7631cf8c99d5d4eaaa56bd5916a9908a726e6b;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"" 2025-12-13T00:13:10.839509723+00:00 stderr F 2025-12-13T00:13:10Z [verbose] ADD starting CNI request ContainerID:"a5a31f215fdd49f7b2461ecf5ce6b4ddaf43cd32c78e4f947b217bdd4e33672a" Netns:"/var/run/netns/27becbc0-a9be-4499-92a3-58c618d0c7c8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-7fc54b8dd7-d2bhp;K8S_POD_INFRA_CONTAINER_ID=a5a31f215fdd49f7b2461ecf5ce6b4ddaf43cd32c78e4f947b217bdd4e33672a;K8S_POD_UID=41e8708a-e40d-4d28-846b-c52eda4d1755" Path:"" 2025-12-13T00:13:11.017736262+00:00 stderr F 2025-12-13T00:13:11Z [verbose] Add: openshift-marketplace:redhat-operators-f4jkp:4092a9f8-5acc-4932-9e90-ef962eeb301a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"07ba5aad8c5b88d","mac":"0e:f3:22:58:4e:74"},{"name":"eth0","mac":"0a:58:0a:d9:00:32","sandbox":"/var/run/netns/1505d5b3-f21b-4e23-8333-cde8c9ab67a4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.50/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:11.018386794+00:00 stderr F I1213 00:13:11.018314 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-f4jkp", UID:"4092a9f8-5acc-4932-9e90-ef962eeb301a", APIVersion:"v1", ResourceVersion:"38261", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.50/23] from ovn-kubernetes 2025-12-13T00:13:11.040464096+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD finished CNI request 
ContainerID:"07ba5aad8c5b88d1090aba13b5c78fa7cde288277aa275b0f47ef599ede5a2d5" Netns:"/var/run/netns/1505d5b3-f21b-4e23-8333-cde8c9ab67a4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=07ba5aad8c5b88d1090aba13b5c78fa7cde288277aa275b0f47ef599ede5a2d5;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"0e:f3:22:58:4e:74\",\"name\":\"07ba5aad8c5b88d\"},{\"mac\":\"0a:58:0a:d9:00:32\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1505d5b3-f21b-4e23-8333-cde8c9ab67a4\"}],\"ips\":[{\"address\":\"10.217.0.50/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:11.045774704+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"5335a2b03a6a955c95fdca0c87846a42ca9e06339d5ead88438fb7bc59e20d1b" Netns:"/var/run/netns/c50efcd5-7a6e-4d92-8745-5cf99c51a785" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=5335a2b03a6a955c95fdca0c87846a42ca9e06339d5ead88438fb7bc59e20d1b;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"" 2025-12-13T00:13:11.082056743+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"ef1840579032710f0d311138cfc0b01859c59d3b95eac15be24e4426778cc0df" Netns:"/var/run/netns/7b19ab23-b8c2-4f1b-83a2-02014376f063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-74fc7c67cc-xqf8b;K8S_POD_INFRA_CONTAINER_ID=ef1840579032710f0d311138cfc0b01859c59d3b95eac15be24e4426778cc0df;K8S_POD_UID=01feb2e0-a0f4-4573-8335-34e364e0ef40" Path:"" 2025-12-13T00:13:11.082056743+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"3d1bad1ef025567a6b67c8ccdebbeea569f71832372ac6f9b2384c6c493f4581" 
Netns:"/var/run/netns/eed8b816-68c1-4331-a632-4bde18260a66" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=3d1bad1ef025567a6b67c8ccdebbeea569f71832372ac6f9b2384c6c493f4581;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"" 2025-12-13T00:13:11.102750488+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"df0fb794ad36ed4df8b707753fce89a53b88ad2ccae743627342ec41b1894369" Netns:"/var/run/netns/a4c91542-b918-4e77-af5b-6cf401d19d9f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=df0fb794ad36ed4df8b707753fce89a53b88ad2ccae743627342ec41b1894369;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"" 2025-12-13T00:13:11.105715468+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"047d698c98b0fe010a6fc7b8815b46f231b5e1fda96125068535ef2b36f3bad7" Netns:"/var/run/netns/18fdaf1d-b2ce-41e3-a4a6-45ee6cc5cabc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=047d698c98b0fe010a6fc7b8815b46f231b5e1fda96125068535ef2b36f3bad7;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"" 2025-12-13T00:13:11.107312172+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"51926de5dcbf369143baee16ac43644dbdf8d8959c02f49abc9f694e15e6d07a" Netns:"/var/run/netns/565f1eb9-81bd-418e-af11-993bfc34a0c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=51926de5dcbf369143baee16ac43644dbdf8d8959c02f49abc9f694e15e6d07a;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"" 2025-12-13T00:13:11.111033107+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request 
ContainerID:"fd4386b3a903d3747a1bf4a96d760e784b3510ae7cb99e1967d3fb3273476b3d" Netns:"/var/run/netns/a7e49f84-7f8f-4bef-8884-7cb9dd511408" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-776b8b7477-sfpvs;K8S_POD_INFRA_CONTAINER_ID=fd4386b3a903d3747a1bf4a96d760e784b3510ae7cb99e1967d3fb3273476b3d;K8S_POD_UID=21d29937-debd-4407-b2b1-d1053cb0f342" Path:"" 2025-12-13T00:13:11.116792040+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"474af639b558e293b6814f01415f5d30ec3b3ad52238cc91885cfe58bb7a7200" Netns:"/var/run/netns/6ed9a71c-21fe-4c39-b4e5-bd941fb04d1f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=474af639b558e293b6814f01415f5d30ec3b3ad52238cc91885cfe58bb7a7200;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"" 2025-12-13T00:13:11.123408253+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"92eca3b1bb762f792f1933df8409251e2d1c489f38e8bd3e0296af884fa35031" Netns:"/var/run/netns/5431302a-e992-46c1-8378-8b6b803cdc38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=92eca3b1bb762f792f1933df8409251e2d1c489f38e8bd3e0296af884fa35031;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"" 2025-12-13T00:13:11.123408253+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"b17235b865d4102aab00cb9ef06e82e2a51d90c2a91e735c747e122c93a40bdb" Netns:"/var/run/netns/ae5804da-db80-40a4-841c-4d621e4de5f8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=b17235b865d4102aab00cb9ef06e82e2a51d90c2a91e735c747e122c93a40bdb;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"" 
2025-12-13T00:13:11.128298517+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"aaa0e2898f563e73a3663336abcac71a45ef85b6a6472708e70b2532b614ce1e" Netns:"/var/run/netns/de4075ca-22cf-4eb4-8e73-1cf02c5a4c87" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=aaa0e2898f563e73a3663336abcac71a45ef85b6a6472708e70b2532b614ce1e;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"" 2025-12-13T00:13:11.132053644+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"b481640912713d98935b0d99ebecc9202386848c0e62d7519ae403d53edc5992" Netns:"/var/run/netns/814b066e-764b-4d43-9418-ad112da01776" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=b481640912713d98935b0d99ebecc9202386848c0e62d7519ae403d53edc5992;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"" 2025-12-13T00:13:11.139832795+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"35d8c23c6f902a963250d41a6be0caf4f79b81e7ca8e12e4aaf3ec502f7e5608" Netns:"/var/run/netns/6224b8d1-5e5d-4ad0-8703-da10ca35d003" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=35d8c23c6f902a963250d41a6be0caf4f79b81e7ca8e12e4aaf3ec502f7e5608;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"" 2025-12-13T00:13:11.144068037+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"260ff140b8d1e0b83e98abb5bf77763cb6fdb69de286a269afe3641eb6a8ec71" Netns:"/var/run/netns/c3d55b6f-b362-4414-8d89-7a3df2eb781c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns;K8S_POD_NAME=dns-default-gbw49;K8S_POD_INFRA_CONTAINER_ID=260ff140b8d1e0b83e98abb5bf77763cb6fdb69de286a269afe3641eb6a8ec71;K8S_POD_UID=13045510-8717-4a71-ade4-be95a76440a7" Path:"" 2025-12-13T00:13:11.178341148+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"0c2018b001b98fb63cd6b5cc2f776c2cfb94614d7c0e8df6e3397b362d62a762" Netns:"/var/run/netns/6c055f04-abb4-45e3-8d26-0fd8f51cd3bc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=0c2018b001b98fb63cd6b5cc2f776c2cfb94614d7c0e8df6e3397b362d62a762;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"" 2025-12-13T00:13:11.185556311+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"2c244248bacdb72aa7ec190608f054e315808514dccb4b7f29eacb37d7951908" Netns:"/var/run/netns/e3bd3846-047d-4858-9567-cb0c32821487" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-649bd778b4-tt5tw;K8S_POD_INFRA_CONTAINER_ID=2c244248bacdb72aa7ec190608f054e315808514dccb4b7f29eacb37d7951908;K8S_POD_UID=45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Path:"" 2025-12-13T00:13:11.203985070+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"adcfabc53e3efc84fc8e56a40029d63c5f6d6a501ce9342fff608679249609dc" Netns:"/var/run/netns/de17a7d3-3b59-4829-871c-7da17f3a095c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=adcfabc53e3efc84fc8e56a40029d63c5f6d6a501ce9342fff608679249609dc;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"" 2025-12-13T00:13:11.207074425+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request 
ContainerID:"098e9064cecbfaae5a02c851bbd8b3fca09efe5260365ea54365082f2199250a" Netns:"/var/run/netns/c2a72491-7f7f-4efd-8752-f9b311ae22b4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=098e9064cecbfaae5a02c851bbd8b3fca09efe5260365ea54365082f2199250a;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"" 2025-12-13T00:13:11.226306180+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"cc9e8f2c0960b84a38d933380db24ec3c3106becbbb6ad79ebc338f0ace5a027" Netns:"/var/run/netns/66cffacc-b5bf-4596-8842-be05f121bc7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=cc9e8f2c0960b84a38d933380db24ec3c3106becbbb6ad79ebc338f0ace5a027;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"" 2025-12-13T00:13:11.228230295+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"264afc75138ea0f1ff0875894274002479c18051828c4420db2d1e44587145bb" Netns:"/var/run/netns/329bb950-a3c1-494b-b43e-3500c8acc755" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-778975cc4f-x5vcf;K8S_POD_INFRA_CONTAINER_ID=264afc75138ea0f1ff0875894274002479c18051828c4420db2d1e44587145bb;K8S_POD_UID=1a3e81c3-c292-4130-9436-f94062c91efd" Path:"" 2025-12-13T00:13:11.242968800+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"f82a63c55b8ff998e15cd92d6afb921fdc8e982de3c7ea32c04589ed06296135" Netns:"/var/run/netns/b61551f8-1f61-43bf-a255-72ee160e6642" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=f82a63c55b8ff998e15cd92d6afb921fdc8e982de3c7ea32c04589ed06296135;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"" 2025-12-13T00:13:11.245150554+00:00 stderr F 
2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"d1e9ea5ea78fec2692d814c6876e544dc5f6d3caae5d5e5ab4bdd883bfbbb152" Netns:"/var/run/netns/1b9b6f65-6f66-4b83-bbe0-7bd63cea6396" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=d1e9ea5ea78fec2692d814c6876e544dc5f6d3caae5d5e5ab4bdd883bfbbb152;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"" 2025-12-13T00:13:11.261228154+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"716e032c4650ea6e62cbb0bcc5e4edde86f8930f6db4a2f527c3a7851b7e42aa" Netns:"/var/run/netns/b913c369-debf-4c36-86d2-56d56790c29d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=716e032c4650ea6e62cbb0bcc5e4edde86f8930f6db4a2f527c3a7851b7e42aa;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"" 2025-12-13T00:13:11.267032319+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"16e048b797f70b26cb7c601d27d0aac1b7c64c7664585071356f64a2f4ae8193" Netns:"/var/run/netns/211450fb-8bf8-4c4f-96fe-8d01b6190ef3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=16e048b797f70b26cb7c601d27d0aac1b7c64c7664585071356f64a2f4ae8193;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"" 2025-12-13T00:13:11.274251101+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"696c8068ac2d061dbdbb339c4d53a876587abd758ac633d215f43cbbee6b8287" Netns:"/var/run/netns/54705ef4-561e-4b7c-9b0f-577a3bccd7a3" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=696c8068ac2d061dbdbb339c4d53a876587abd758ac633d215f43cbbee6b8287;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"" 2025-12-13T00:13:11.280665677+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"d3ee238f663eeb39a93895b846a0d0d95f6ddc25b5f16ecd3f862f8292c81d51" Netns:"/var/run/netns/5eaf8350-9a7c-442d-8c9f-ceb176378fbe" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-644bb77b49-5x5xk;K8S_POD_INFRA_CONTAINER_ID=d3ee238f663eeb39a93895b846a0d0d95f6ddc25b5f16ecd3f862f8292c81d51;K8S_POD_UID=9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Path:"" 2025-12-13T00:13:11.295298579+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"2399c00ca096e2ecf32aba01d101d040facf34d907b01e43a81e5791b8b1b46f" Netns:"/var/run/netns/6bc5b7a0-520f-4333-a5c9-4531585f4803" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=2399c00ca096e2ecf32aba01d101d040facf34d907b01e43a81e5791b8b1b46f;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"" 2025-12-13T00:13:11.295612639+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"272898bbbe67ff8beb87ec03ded40b79cb3587276d609dca5e2c8a14e795335f" Netns:"/var/run/netns/87394dcd-a97c-40df-9fc1-9697ef0f2622" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=272898bbbe67ff8beb87ec03ded40b79cb3587276d609dca5e2c8a14e795335f;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"" 2025-12-13T00:13:11.334429854+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request 
ContainerID:"bbcc465ef8a290fa4c2f15ff3569ef12dc750975dad4f6f6f584ed2d30bab0b8" Netns:"/var/run/netns/f306751f-591c-49c9-8a39-0e4cfe596afe" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=bbcc465ef8a290fa4c2f15ff3569ef12dc750975dad4f6f6f584ed2d30bab0b8;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"" 2025-12-13T00:13:11.335459218+00:00 stderr F 2025-12-13T00:13:11Z [verbose] Add: openshift-ingress-operator:ingress-operator-7d46d5bb6d-rrg6t:7d51f445-054a-4e4f-a67b-a828f5a32511:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"023d056511fb885","mac":"b2:7b:76:c9:a4:7e"},{"name":"eth0","mac":"0a:58:0a:d9:00:2d","sandbox":"/var/run/netns/cb3d4aa0-4b89-4afc-bac1-11461c057267"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.45/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:11.335459218+00:00 stderr F 2025-12-13T00:13:11Z [verbose] Add: openshift-marketplace:certified-operators-7287f:887d596e-c519-4bfa-af90-3edd9e1b2f0f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"f2f014941774dd8","mac":"2e:ac:fc:e2:e6:ca"},{"name":"eth0","mac":"0a:58:0a:d9:00:2f","sandbox":"/var/run/netns/bc09b23e-1b48-46bf-9f17-dfff908746d1"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.47/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:11.335835330+00:00 stderr F I1213 00:13:11.335761 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-operator", Name:"ingress-operator-7d46d5bb6d-rrg6t", UID:"7d51f445-054a-4e4f-a67b-a828f5a32511", APIVersion:"v1", ResourceVersion:"38436", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.45/23] from ovn-kubernetes 2025-12-13T00:13:11.335835330+00:00 stderr F I1213 00:13:11.335825 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-7287f", 
UID:"887d596e-c519-4bfa-af90-3edd9e1b2f0f", APIVersion:"v1", ResourceVersion:"38303", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.47/23] from ovn-kubernetes 2025-12-13T00:13:11.337901900+00:00 stderr F 2025-12-13T00:13:11Z [verbose] Add: openshift-console-operator:console-operator-5dbbc74dc9-cp5cd:e9127708-ccfd-4891-8a3a-f0cacb77e0f4:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3951bf389c34415","mac":"16:8c:73:b1:cb:9a"},{"name":"eth0","mac":"0a:58:0a:d9:00:3e","sandbox":"/var/run/netns/fc9b608a-947f-4790-8b21-83f7f8f8da77"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.62/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:11.337901900+00:00 stderr F I1213 00:13:11.336595 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-5dbbc74dc9-cp5cd", UID:"e9127708-ccfd-4891-8a3a-f0cacb77e0f4", APIVersion:"v1", ResourceVersion:"38240", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.62/23] from ovn-kubernetes 2025-12-13T00:13:11.348131894+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD finished CNI request ContainerID:"3951bf389c34415eb41238b05f9c3f8833a20e37e08e7856b7e53cda0cdf7cc0" Netns:"/var/run/netns/fc9b608a-947f-4790-8b21-83f7f8f8da77" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-operator-5dbbc74dc9-cp5cd;K8S_POD_INFRA_CONTAINER_ID=3951bf389c34415eb41238b05f9c3f8833a20e37e08e7856b7e53cda0cdf7cc0;K8S_POD_UID=e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"16:8c:73:b1:cb:9a\",\"name\":\"3951bf389c34415\"},{\"mac\":\"0a:58:0a:d9:00:3e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/fc9b608a-947f-4790-8b21-83f7f8f8da77\"}],\"ips\":[{\"address\":\"10.217.0.62/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:11.353194184+00:00 stderr F 
2025-12-13T00:13:11Z [verbose] ADD finished CNI request ContainerID:"023d056511fb885f2bc62752e8fc1a0d27ef6babe1cdccd27d12b854fbade3fc" Netns:"/var/run/netns/cb3d4aa0-4b89-4afc-bac1-11461c057267" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-7d46d5bb6d-rrg6t;K8S_POD_INFRA_CONTAINER_ID=023d056511fb885f2bc62752e8fc1a0d27ef6babe1cdccd27d12b854fbade3fc;K8S_POD_UID=7d51f445-054a-4e4f-a67b-a828f5a32511" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b2:7b:76:c9:a4:7e\",\"name\":\"023d056511fb885\"},{\"mac\":\"0a:58:0a:d9:00:2d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/cb3d4aa0-4b89-4afc-bac1-11461c057267\"}],\"ips\":[{\"address\":\"10.217.0.45/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:11.353496094+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD finished CNI request ContainerID:"f2f014941774dd8ebcb5653d144f7f8ac118b640529f0329d3583008b31a8e5b" Netns:"/var/run/netns/bc09b23e-1b48-46bf-9f17-dfff908746d1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=f2f014941774dd8ebcb5653d144f7f8ac118b640529f0329d3583008b31a8e5b;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2e:ac:fc:e2:e6:ca\",\"name\":\"f2f014941774dd8\"},{\"mac\":\"0a:58:0a:d9:00:2f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/bc09b23e-1b48-46bf-9f17-dfff908746d1\"}],\"ips\":[{\"address\":\"10.217.0.47/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:11.441762120+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"052172120e2281221920352db7242b5fa1aa1540e56c8f618946503aea767e7e" Netns:"/var/run/netns/c53cf2de-e44a-4315-84b6-140cef1913f1" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=052172120e2281221920352db7242b5fa1aa1540e56c8f618946503aea767e7e;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"" 2025-12-13T00:13:11.444911146+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"6741ceca359a128669b6237aead37e498431793f6ed70109c463040e739fa00d" Netns:"/var/run/netns/c78971cd-ebaa-47f7-9721-1e5b0b584c96" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=6741ceca359a128669b6237aead37e498431793f6ed70109c463040e739fa00d;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"" 2025-12-13T00:13:11.522065959+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"a65a45a3a7617d60c515aabda8825495c6ae6130b9ccc1b248fd9b1eb284ac95" Netns:"/var/run/netns/724e3bfb-a7e5-463b-aa1c-7d5bd3a05736" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=a65a45a3a7617d60c515aabda8825495c6ae6130b9ccc1b248fd9b1eb284ac95;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"" 2025-12-13T00:13:11.546046175+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"7bac34826efa2bf9294d1b22c8da822644d189e6e8bcbc3bc9c9470ac2b2f6b7" Netns:"/var/run/netns/e344f778-081f-4018-8c1c-1bf643e9ff64" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=7bac34826efa2bf9294d1b22c8da822644d189e6e8bcbc3bc9c9470ac2b2f6b7;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"" 2025-12-13T00:13:11.555741200+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request 
ContainerID:"84ec990a66b414ef649197accb94eb1fbd0fb9076266ba811d3117e7688657ca" Netns:"/var/run/netns/0fec9c49-96cb-4c0f-bb62-560d302bd81b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=84ec990a66b414ef649197accb94eb1fbd0fb9076266ba811d3117e7688657ca;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"" 2025-12-13T00:13:11.600238195+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD starting CNI request ContainerID:"f205e6fa0cc76cdae9e75007422dfc19a3df6d186d37a079113cdbb791b57eb6" Netns:"/var/run/netns/62f195f8-daa2-4a1e-be20-f0784aee32d6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-kk8kg;K8S_POD_INFRA_CONTAINER_ID=f205e6fa0cc76cdae9e75007422dfc19a3df6d186d37a079113cdbb791b57eb6;K8S_POD_UID=e4a7de23-6134-4044-902a-0900dc04a501" Path:"" 2025-12-13T00:13:11.937792227+00:00 stderr F 2025-12-13T00:13:11Z [verbose] Add: openshift-console-operator:console-conversion-webhook-595f9969b-l6z49:59748b9b-c309-4712-aa85-bb38d71c4915:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a9511a5d94287af","mac":"fe:f7:d9:b9:e9:47"},{"name":"eth0","mac":"0a:58:0a:d9:00:3d","sandbox":"/var/run/netns/29ea4c92-6607-49d2-a2b7-02d64d99ef4b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.61/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:11.937792227+00:00 stderr F I1213 00:13:11.936860 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-conversion-webhook-595f9969b-l6z49", UID:"59748b9b-c309-4712-aa85-bb38d71c4915", APIVersion:"v1", ResourceVersion:"38191", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.61/23] from ovn-kubernetes 2025-12-13T00:13:11.954979585+00:00 stderr F 2025-12-13T00:13:11Z [verbose] ADD finished CNI request 
ContainerID:"a9511a5d94287af21a200f5271a99bc54ec6bc1e0f89ea57246a82b2bff3229c" Netns:"/var/run/netns/29ea4c92-6607-49d2-a2b7-02d64d99ef4b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console-operator;K8S_POD_NAME=console-conversion-webhook-595f9969b-l6z49;K8S_POD_INFRA_CONTAINER_ID=a9511a5d94287af21a200f5271a99bc54ec6bc1e0f89ea57246a82b2bff3229c;K8S_POD_UID=59748b9b-c309-4712-aa85-bb38d71c4915" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"fe:f7:d9:b9:e9:47\",\"name\":\"a9511a5d94287af\"},{\"mac\":\"0a:58:0a:d9:00:3d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/29ea4c92-6607-49d2-a2b7-02d64d99ef4b\"}],\"ips\":[{\"address\":\"10.217.0.61/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.381027662+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-image-registry:cluster-image-registry-operator-7769bd8d7d-q5cvv:b54e8941-2fc4-432a-9e51-39684df9089e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a8c83f18faa15f6","mac":"22:07:bb:14:02:03"},{"name":"eth0","mac":"0a:58:0a:d9:00:16","sandbox":"/var/run/netns/17c687f6-f643-49b0-b317-65ae96f88952"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.22/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.381201818+00:00 stderr F I1213 00:13:12.381149 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator-7769bd8d7d-q5cvv", UID:"b54e8941-2fc4-432a-9e51-39684df9089e", APIVersion:"v1", ResourceVersion:"38200", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.22/23] from ovn-kubernetes 2025-12-13T00:13:12.399190052+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"a8c83f18faa15f6df6226f364ac00f3a09c95f79a750230c2ca3078f0b06d172" Netns:"/var/run/netns/17c687f6-f643-49b0-b317-65ae96f88952" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-7769bd8d7d-q5cvv;K8S_POD_INFRA_CONTAINER_ID=a8c83f18faa15f6df6226f364ac00f3a09c95f79a750230c2ca3078f0b06d172;K8S_POD_UID=b54e8941-2fc4-432a-9e51-39684df9089e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"22:07:bb:14:02:03\",\"name\":\"a8c83f18faa15f6\"},{\"mac\":\"0a:58:0a:d9:00:16\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/17c687f6-f643-49b0-b317-65ae96f88952\"}],\"ips\":[{\"address\":\"10.217.0.22/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.408091151+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-dns:dns-default-gbw49:13045510-8717-4a71-ade4-be95a76440a7:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"260ff140b8d1e0b","mac":"7a:0a:1d:c5:37:46"},{"name":"eth0","mac":"0a:58:0a:d9:00:1f","sandbox":"/var/run/netns/c3d55b6f-b362-4414-8d89-7a3df2eb781c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.31/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.408861687+00:00 stderr F I1213 00:13:12.408383 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-dns", Name:"dns-default-gbw49", UID:"13045510-8717-4a71-ade4-be95a76440a7", APIVersion:"v1", ResourceVersion:"38328", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.31/23] from ovn-kubernetes 2025-12-13T00:13:12.421642636+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-marketplace:marketplace-operator-8b455464d-f9xdt:3482be94-0cdb-4e2a-889b-e5fac59fdbf5:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"272898bbbe67ff8","mac":"92:65:b1:5c:c8:6e"},{"name":"eth0","mac":"0a:58:0a:d9:00:0d","sandbox":"/var/run/netns/87394dcd-a97c-40df-9fc1-9697ef0f2622"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.13/23","gateway":"10.217.0.1"}],"dns":{}} 
2025-12-13T00:13:12.421642636+00:00 stderr F I1213 00:13:12.420791 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"marketplace-operator-8b455464d-f9xdt", UID:"3482be94-0cdb-4e2a-889b-e5fac59fdbf5", APIVersion:"v1", ResourceVersion:"38185", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.13/23] from ovn-kubernetes 2025-12-13T00:13:12.421642636+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-apiserver-operator:openshift-apiserver-operator-7c88c4c865-kn67m:43ae1c37-047b-4ee2-9fee-41e337dd4ac8:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"92eca3b1bb762f7","mac":"72:70:7e:61:88:e5"},{"name":"eth0","mac":"0a:58:0a:d9:00:06","sandbox":"/var/run/netns/5431302a-e992-46c1-8378-8b6b803cdc38"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.6/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.423159238+00:00 stderr F I1213 00:13:12.422032 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-7c88c4c865-kn67m", UID:"43ae1c37-047b-4ee2-9fee-41e337dd4ac8", APIVersion:"v1", ResourceVersion:"38234", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.6/23] from ovn-kubernetes 2025-12-13T00:13:12.425159524+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: hostpath-provisioner:csi-hostpathplugin-hvm8g:12e733dd-0939-4f1b-9cbb-13897e093787:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"098e9064cecbfaa","mac":"32:ec:68:d3:4c:d7"},{"name":"eth0","mac":"0a:58:0a:d9:00:31","sandbox":"/var/run/netns/c2a72491-7f7f-4efd-8752-f9b311ae22b4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.49/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.425159524+00:00 stderr F I1213 00:13:12.425089 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"hostpath-provisioner", 
Name:"csi-hostpathplugin-hvm8g", UID:"12e733dd-0939-4f1b-9cbb-13897e093787", APIVersion:"v1", ResourceVersion:"38439", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.49/23] from ovn-kubernetes 2025-12-13T00:13:12.427411560+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-cluster-samples-operator:cluster-samples-operator-bc474d5d6-wshwg:f728c15e-d8de-4a9a-a3ea-fdcead95cb91:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"31c30f7e9c0640f","mac":"6e:cc:9f:46:8b:02"},{"name":"eth0","mac":"0a:58:0a:d9:00:2e","sandbox":"/var/run/netns/3cbe75de-90d3-4ce6-b460-fce7d26d4ac3"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.46/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.427603756+00:00 stderr F I1213 00:13:12.427518 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-cluster-samples-operator", Name:"cluster-samples-operator-bc474d5d6-wshwg", UID:"f728c15e-d8de-4a9a-a3ea-fdcead95cb91", APIVersion:"v1", ResourceVersion:"38433", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.46/23] from ovn-kubernetes 2025-12-13T00:13:12.429329485+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-authentication:oauth-openshift-74fc7c67cc-xqf8b:01feb2e0-a0f4-4573-8335-34e364e0ef40:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ef1840579032710","mac":"76:39:20:3e:25:56"},{"name":"eth0","mac":"0a:58:0a:d9:00:48","sandbox":"/var/run/netns/7b19ab23-b8c2-4f1b-83a2-02014376f063"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.72/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.429507091+00:00 stderr F I1213 00:13:12.429466 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication", Name:"oauth-openshift-74fc7c67cc-xqf8b", UID:"01feb2e0-a0f4-4573-8335-34e364e0ef40", APIVersion:"v1", ResourceVersion:"38300", FieldPath:""}): type: 'Normal' reason: 
'AddedInterface' Add eth0 [10.217.0.72/23] from ovn-kubernetes 2025-12-13T00:13:12.429604934+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-network-diagnostics:network-check-source-5c5478f8c-vqvt7:d0f40333-c860-4c04-8058-a0bf572dcf12:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"0c2018b001b98fb","mac":"fa:3e:e4:43:0e:61"},{"name":"eth0","mac":"0a:58:0a:d9:00:40","sandbox":"/var/run/netns/6c055f04-abb4-45e3-8d26-0fd8f51cd3bc"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.64/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.431464747+00:00 stderr F I1213 00:13:12.429828 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-source-5c5478f8c-vqvt7", UID:"d0f40333-c860-4c04-8058-a0bf572dcf12", APIVersion:"v1", ResourceVersion:"38420", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.64/23] from ovn-kubernetes 2025-12-13T00:13:12.431464747+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-console:downloads-65476884b9-9wcvx:6268b7fe-8910-4505-b404-6f1df638105c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"85c1f092a9e274b","mac":"42:77:27:d5:b5:10"},{"name":"eth0","mac":"0a:58:0a:d9:00:42","sandbox":"/var/run/netns/c9ec0f9d-51ae-4bf4-ab08-67e7a7020473"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.66/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.431464747+00:00 stderr F I1213 00:13:12.430398 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"downloads-65476884b9-9wcvx", UID:"6268b7fe-8910-4505-b404-6f1df638105c", APIVersion:"v1", ResourceVersion:"38212", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.66/23] from ovn-kubernetes 2025-12-13T00:13:12.431464747+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: 
openshift-machine-config-operator:machine-config-operator-76788bff89-wkjgm:120b38dc-8236-4fa6-a452-642b8ad738ee:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b481640912713d9","mac":"be:4f:2b:ab:d6:78"},{"name":"eth0","mac":"0a:58:0a:d9:00:15","sandbox":"/var/run/netns/814b066e-764b-4d43-9418-ad112da01776"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.21/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.431464747+00:00 stderr F I1213 00:13:12.431313 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-config-operator", Name:"machine-config-operator-76788bff89-wkjgm", UID:"120b38dc-8236-4fa6-a452-642b8ad738ee", APIVersion:"v1", ResourceVersion:"38182", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.21/23] from ovn-kubernetes 2025-12-13T00:13:12.434859481+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"260ff140b8d1e0b83e98abb5bf77763cb6fdb69de286a269afe3641eb6a8ec71" Netns:"/var/run/netns/c3d55b6f-b362-4414-8d89-7a3df2eb781c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns;K8S_POD_NAME=dns-default-gbw49;K8S_POD_INFRA_CONTAINER_ID=260ff140b8d1e0b83e98abb5bf77763cb6fdb69de286a269afe3641eb6a8ec71;K8S_POD_UID=13045510-8717-4a71-ade4-be95a76440a7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"7a:0a:1d:c5:37:46\",\"name\":\"260ff140b8d1e0b\"},{\"mac\":\"0a:58:0a:d9:00:1f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c3d55b6f-b362-4414-8d89-7a3df2eb781c\"}],\"ips\":[{\"address\":\"10.217.0.31/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.452281306+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-authentication-operator:authentication-operator-7cc7ff75d5-g9qv8:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"a3f3493cadb8dfa","mac":"ee:17:3d:f7:51:e9"},{"name":"eth0","mac":"0a:58:0a:d9:00:13","sandbox":"/var/run/netns/bdef2263-4fb0-4173-bb14-4523b2d9055a"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.19/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.452281306+00:00 stderr F I1213 00:13:12.448748 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-authentication-operator", Name:"authentication-operator-7cc7ff75d5-g9qv8", UID:"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e", APIVersion:"v1", ResourceVersion:"38294", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.19/23] from ovn-kubernetes 2025-12-13T00:13:12.452281306+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"92eca3b1bb762f792f1933df8409251e2d1c489f38e8bd3e0296af884fa35031" Netns:"/var/run/netns/5431302a-e992-46c1-8378-8b6b803cdc38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-7c88c4c865-kn67m;K8S_POD_INFRA_CONTAINER_ID=92eca3b1bb762f792f1933df8409251e2d1c489f38e8bd3e0296af884fa35031;K8S_POD_UID=43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"72:70:7e:61:88:e5\",\"name\":\"92eca3b1bb762f7\"},{\"mac\":\"0a:58:0a:d9:00:06\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5431302a-e992-46c1-8378-8b6b803cdc38\"}],\"ips\":[{\"address\":\"10.217.0.6/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.452281306+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"272898bbbe67ff8beb87ec03ded40b79cb3587276d609dca5e2c8a14e795335f" Netns:"/var/run/netns/87394dcd-a97c-40df-9fc1-9697ef0f2622" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=272898bbbe67ff8beb87ec03ded40b79cb3587276d609dca5e2c8a14e795335f;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"92:65:b1:5c:c8:6e\",\"name\":\"272898bbbe67ff8\"},{\"mac\":\"0a:58:0a:d9:00:0d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/87394dcd-a97c-40df-9fc1-9697ef0f2622\"}],\"ips\":[{\"address\":\"10.217.0.13/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.452281306+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-operator-lifecycle-manager:catalog-operator-857456c46-7f5wf:8a5ae51d-d173-4531-8975-f164c975ce1f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"5335a2b03a6a955","mac":"ca:c6:4a:5e:76:14"},{"name":"eth0","mac":"0a:58:0a:d9:00:0b","sandbox":"/var/run/netns/c50efcd5-7a6e-4d92-8745-5cf99c51a785"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.11/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.467380023+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-controller-manager-operator:openshift-controller-manager-operator-7978d7d7f6-2nt8z:0f394926-bdb9-425c-b36e-264d7fd34550:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2399c00ca096e2e","mac":"ce:6b:c0:1a:29:13"},{"name":"eth0","mac":"0a:58:0a:d9:00:09","sandbox":"/var/run/netns/6bc5b7a0-520f-4333-a5c9-4531585f4803"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.9/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.467759436+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator-686c6c748c-qbnnr:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"a65a45a3a7617d6","mac":"36:a2:75:de:a7:88"},{"name":"eth0","mac":"0a:58:0a:d9:00:10","sandbox":"/var/run/netns/724e3bfb-a7e5-463b-aa1c-7d5bd3a05736"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.16/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.468198871+00:00 stderr F I1213 00:13:12.468151 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"catalog-operator-857456c46-7f5wf", UID:"8a5ae51d-d173-4531-8975-f164c975ce1f", APIVersion:"v1", ResourceVersion:"38445", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.11/23] from ovn-kubernetes 2025-12-13T00:13:12.468263563+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-controller-manager:controller-manager-778975cc4f-x5vcf:1a3e81c3-c292-4130-9436-f94062c91efd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"264afc75138ea0f","mac":"b2:88:3c:49:b3:2b"},{"name":"eth0","mac":"0a:58:0a:d9:00:57","sandbox":"/var/run/netns/329bb950-a3c1-494b-b43e-3500c8acc755"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.87/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.468289034+00:00 stderr P 2025-12-13T00:13:12Z [verbose] 2025-12-13T00:13:12.468296954+00:00 stderr F ADD finished CNI request ContainerID:"098e9064cecbfaae5a02c851bbd8b3fca09efe5260365ea54365082f2199250a" Netns:"/var/run/netns/c2a72491-7f7f-4efd-8752-f9b311ae22b4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=hostpath-provisioner;K8S_POD_NAME=csi-hostpathplugin-hvm8g;K8S_POD_INFRA_CONTAINER_ID=098e9064cecbfaae5a02c851bbd8b3fca09efe5260365ea54365082f2199250a;K8S_POD_UID=12e733dd-0939-4f1b-9cbb-13897e093787" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"32:ec:68:d3:4c:d7\",\"name\":\"098e9064cecbfaa\"},{\"mac\":\"0a:58:0a:d9:00:31\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c2a72491-7f7f-4efd-8752-f9b311ae22b4\"}],\"ips\":[{\"address\":\"10.217.0.49/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.468304665+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-multus:network-metrics-daemon-qdfr4:a702c6d2-4dde-4077-ab8c-0f8df804bf7a:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b17235b865d4102","mac":"6a:a7:6e:a9:b5:fc"},{"name":"eth0","mac":"0a:58:0a:d9:00:03","sandbox":"/var/run/netns/ae5804da-db80-40a4-841c-4d621e4de5f8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.3/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.468409948+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-kube-storage-version-migrator:migrator-f7c6d88df-q2fnv:cf1a8966-f594-490a-9fbb-eec5bafd13d3:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3d1bad1ef025567","mac":"8e:8f:5d:b4:08:19"},{"name":"eth0","mac":"0a:58:0a:d9:00:19","sandbox":"/var/run/netns/eed8b816-68c1-4331-a632-4bde18260a66"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.25/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.468468290+00:00 stderr F I1213 00:13:12.468433 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager", Name:"controller-manager-778975cc4f-x5vcf", UID:"1a3e81c3-c292-4130-9436-f94062c91efd", APIVersion:"v1", ResourceVersion:"38176", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.87/23] from ovn-kubernetes 2025-12-13T00:13:12.468481900+00:00 stderr F I1213 00:13:12.468458 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"network-metrics-daemon-qdfr4", UID:"a702c6d2-4dde-4077-ab8c-0f8df804bf7a", APIVersion:"v1", 
ResourceVersion:"38188", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.3/23] from ovn-kubernetes 2025-12-13T00:13:12.468528652+00:00 stderr F I1213 00:13:12.468479 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-7978d7d7f6-2nt8z", UID:"0f394926-bdb9-425c-b36e-264d7fd34550", APIVersion:"v1", ResourceVersion:"38231", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.9/23] from ovn-kubernetes 2025-12-13T00:13:12.468551263+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-apiserver:apiserver-7fc54b8dd7-d2bhp:41e8708a-e40d-4d28-846b-c52eda4d1755:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"a5a31f215fdd49f","mac":"66:78:88:be:08:03"},{"name":"eth0","mac":"0a:58:0a:d9:00:52","sandbox":"/var/run/netns/27becbc0-a9be-4499-92a3-58c618d0c7c8"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.82/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.468641696+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-ingress-canary:ingress-canary-2vhcn:0b5d722a-1123-4935-9740-52a08d018bc9:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"cc9e8f2c0960b84","mac":"b2:fc:8e:f5:2c:97"},{"name":"eth0","mac":"0a:58:0a:d9:00:47","sandbox":"/var/run/netns/66cffacc-b5bf-4596-8842-be05f121bc7d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.71/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.469759053+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"b481640912713d98935b0d99ebecc9202386848c0e62d7519ae403d53edc5992" Netns:"/var/run/netns/814b066e-764b-4d43-9418-ad112da01776" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-76788bff89-wkjgm;K8S_POD_INFRA_CONTAINER_ID=b481640912713d98935b0d99ebecc9202386848c0e62d7519ae403d53edc5992;K8S_POD_UID=120b38dc-8236-4fa6-a452-642b8ad738ee" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"be:4f:2b:ab:d6:78\",\"name\":\"b481640912713d9\"},{\"mac\":\"0a:58:0a:d9:00:15\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/814b066e-764b-4d43-9418-ad112da01776\"}],\"ips\":[{\"address\":\"10.217.0.21/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.469846186+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"0c2018b001b98fb63cd6b5cc2f776c2cfb94614d7c0e8df6e3397b362d62a762" Netns:"/var/run/netns/6c055f04-abb4-45e3-8d26-0fd8f51cd3bc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-5c5478f8c-vqvt7;K8S_POD_INFRA_CONTAINER_ID=0c2018b001b98fb63cd6b5cc2f776c2cfb94614d7c0e8df6e3397b362d62a762;K8S_POD_UID=d0f40333-c860-4c04-8058-a0bf572dcf12" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"fa:3e:e4:43:0e:61\",\"name\":\"0c2018b001b98fb\"},{\"mac\":\"0a:58:0a:d9:00:40\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6c055f04-abb4-45e3-8d26-0fd8f51cd3bc\"}],\"ips\":[{\"address\":\"10.217.0.64/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.470071013+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"85c1f092a9e274b186f46a7e431d4932181643f36fe1a39db744cc085e04295c" Netns:"/var/run/netns/c9ec0f9d-51ae-4bf4-ab08-67e7a7020473" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-65476884b9-9wcvx;K8S_POD_INFRA_CONTAINER_ID=85c1f092a9e274b186f46a7e431d4932181643f36fe1a39db744cc085e04295c;K8S_POD_UID=6268b7fe-8910-4505-b404-6f1df638105c" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"42:77:27:d5:b5:10\",\"name\":\"85c1f092a9e274b\"},{\"mac\":\"0a:58:0a:d9:00:42\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c9ec0f9d-51ae-4bf4-ab08-67e7a7020473\"}],\"ips\":[{\"address\":\"10.217.0.66/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.470474978+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"ef1840579032710f0d311138cfc0b01859c59d3b95eac15be24e4426778cc0df" Netns:"/var/run/netns/7b19ab23-b8c2-4f1b-83a2-02014376f063" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-74fc7c67cc-xqf8b;K8S_POD_INFRA_CONTAINER_ID=ef1840579032710f0d311138cfc0b01859c59d3b95eac15be24e4426778cc0df;K8S_POD_UID=01feb2e0-a0f4-4573-8335-34e364e0ef40" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"76:39:20:3e:25:56\",\"name\":\"ef1840579032710\"},{\"mac\":\"0a:58:0a:d9:00:48\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/7b19ab23-b8c2-4f1b-83a2-02014376f063\"}],\"ips\":[{\"address\":\"10.217.0.72/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.473357285+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"31c30f7e9c0640f9acb17a855004a8cb4f32333c72b230eb85e614bdf2c07efe" Netns:"/var/run/netns/3cbe75de-90d3-4ce6-b460-fce7d26d4ac3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-bc474d5d6-wshwg;K8S_POD_INFRA_CONTAINER_ID=31c30f7e9c0640f9acb17a855004a8cb4f32333c72b230eb85e614bdf2c07efe;K8S_POD_UID=f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6e:cc:9f:46:8b:02\",\"name\":\"31c30f7e9c0640f\"},{\"mac\":\"0a:58:0a:d9:00:2e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3cbe75de-90d3-4ce6-b460-fce7d26d4ac3\"}],\"ips\":[{\"address\":\"10.217.0.46/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.473433527+00:00 stderr F I1213 00:13:12.468523 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator-686c6c748c-qbnnr", UID:"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7", APIVersion:"v1", ResourceVersion:"38322", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.16/23] from ovn-kubernetes 2025-12-13T00:13:12.473466878+00:00 stderr F I1213 00:13:12.473438 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-storage-version-migrator", Name:"migrator-f7c6d88df-q2fnv", UID:"cf1a8966-f594-490a-9fbb-eec5bafd13d3", APIVersion:"v1", ResourceVersion:"38264", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.25/23] from ovn-kubernetes 2025-12-13T00:13:12.473474118+00:00 stderr F I1213 00:13:12.473461 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-7fc54b8dd7-d2bhp", UID:"41e8708a-e40d-4d28-846b-c52eda4d1755", APIVersion:"v1", ResourceVersion:"38414", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.82/23] from ovn-kubernetes 2025-12-13T00:13:12.473481319+00:00 stderr F I1213 00:13:12.473470 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-ingress-canary", Name:"ingress-canary-2vhcn", UID:"0b5d722a-1123-4935-9740-52a08d018bc9", APIVersion:"v1", ResourceVersion:"38258", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.71/23] from ovn-kubernetes 2025-12-13T00:13:12.482756060+00:00 stderr F 2025-12-13T00:13:12Z [verbose] 
ADD finished CNI request ContainerID:"a3f3493cadb8dfa346e42c025f7631cf8c99d5d4eaaa56bd5916a9908a726e6b" Netns:"/var/run/netns/bdef2263-4fb0-4173-bb14-4523b2d9055a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-7cc7ff75d5-g9qv8;K8S_POD_INFRA_CONTAINER_ID=a3f3493cadb8dfa346e42c025f7631cf8c99d5d4eaaa56bd5916a9908a726e6b;K8S_POD_UID=ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ee:17:3d:f7:51:e9\",\"name\":\"a3f3493cadb8dfa\"},{\"mac\":\"0a:58:0a:d9:00:13\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/bdef2263-4fb0-4173-bb14-4523b2d9055a\"}],\"ips\":[{\"address\":\"10.217.0.19/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.482946976+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"3d1bad1ef025567a6b67c8ccdebbeea569f71832372ac6f9b2384c6c493f4581" Netns:"/var/run/netns/eed8b816-68c1-4331-a632-4bde18260a66" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator;K8S_POD_NAME=migrator-f7c6d88df-q2fnv;K8S_POD_INFRA_CONTAINER_ID=3d1bad1ef025567a6b67c8ccdebbeea569f71832372ac6f9b2384c6c493f4581;K8S_POD_UID=cf1a8966-f594-490a-9fbb-eec5bafd13d3" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"8e:8f:5d:b4:08:19\",\"name\":\"3d1bad1ef025567\"},{\"mac\":\"0a:58:0a:d9:00:19\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/eed8b816-68c1-4331-a632-4bde18260a66\"}],\"ips\":[{\"address\":\"10.217.0.25/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.492464866+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"cc9e8f2c0960b84a38d933380db24ec3c3106becbbb6ad79ebc338f0ace5a027" Netns:"/var/run/netns/66cffacc-b5bf-4596-8842-be05f121bc7d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-canary;K8S_POD_NAME=ingress-canary-2vhcn;K8S_POD_INFRA_CONTAINER_ID=cc9e8f2c0960b84a38d933380db24ec3c3106becbbb6ad79ebc338f0ace5a027;K8S_POD_UID=0b5d722a-1123-4935-9740-52a08d018bc9" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b2:fc:8e:f5:2c:97\",\"name\":\"cc9e8f2c0960b84\"},{\"mac\":\"0a:58:0a:d9:00:47\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/66cffacc-b5bf-4596-8842-be05f121bc7d\"}],\"ips\":[{\"address\":\"10.217.0.71/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.492757635+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"264afc75138ea0f1ff0875894274002479c18051828c4420db2d1e44587145bb" Netns:"/var/run/netns/329bb950-a3c1-494b-b43e-3500c8acc755" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-778975cc4f-x5vcf;K8S_POD_INFRA_CONTAINER_ID=264afc75138ea0f1ff0875894274002479c18051828c4420db2d1e44587145bb;K8S_POD_UID=1a3e81c3-c292-4130-9436-f94062c91efd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b2:88:3c:49:b3:2b\",\"name\":\"264afc75138ea0f\"},{\"mac\":\"0a:58:0a:d9:00:57\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/329bb950-a3c1-494b-b43e-3500c8acc755\"}],\"ips\":[{\"address\":\"10.217.0.87/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.493147879+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"2399c00ca096e2ecf32aba01d101d040facf34d907b01e43a81e5791b8b1b46f" Netns:"/var/run/netns/6bc5b7a0-520f-4333-a5c9-4531585f4803" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-7978d7d7f6-2nt8z;K8S_POD_INFRA_CONTAINER_ID=2399c00ca096e2ecf32aba01d101d040facf34d907b01e43a81e5791b8b1b46f;K8S_POD_UID=0f394926-bdb9-425c-b36e-264d7fd34550" Path:"", 
result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ce:6b:c0:1a:29:13\",\"name\":\"2399c00ca096e2e\"},{\"mac\":\"0a:58:0a:d9:00:09\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6bc5b7a0-520f-4333-a5c9-4531585f4803\"}],\"ips\":[{\"address\":\"10.217.0.9/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.495003421+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"a65a45a3a7617d60c515aabda8825495c6ae6130b9ccc1b248fd9b1eb284ac95" Netns:"/var/run/netns/724e3bfb-a7e5-463b-aa1c-7d5bd3a05736" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-686c6c748c-qbnnr;K8S_POD_INFRA_CONTAINER_ID=a65a45a3a7617d60c515aabda8825495c6ae6130b9ccc1b248fd9b1eb284ac95;K8S_POD_UID=9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"36:a2:75:de:a7:88\",\"name\":\"a65a45a3a7617d6\"},{\"mac\":\"0a:58:0a:d9:00:10\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/724e3bfb-a7e5-463b-aa1c-7d5bd3a05736\"}],\"ips\":[{\"address\":\"10.217.0.16/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.495204388+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"a5a31f215fdd49f7b2461ecf5ce6b4ddaf43cd32c78e4f947b217bdd4e33672a" Netns:"/var/run/netns/27becbc0-a9be-4499-92a3-58c618d0c7c8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver;K8S_POD_NAME=apiserver-7fc54b8dd7-d2bhp;K8S_POD_INFRA_CONTAINER_ID=a5a31f215fdd49f7b2461ecf5ce6b4ddaf43cd32c78e4f947b217bdd4e33672a;K8S_POD_UID=41e8708a-e40d-4d28-846b-c52eda4d1755" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"66:78:88:be:08:03\",\"name\":\"a5a31f215fdd49f\"},{\"mac\":\"0a:58:0a:d9:00:52\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/27becbc0-a9be-4499-92a3-58c618d0c7c8\"}],\"ips\":[{\"address\":\"10.217.0.82/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.496471291+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"5335a2b03a6a955c95fdca0c87846a42ca9e06339d5ead88438fb7bc59e20d1b" Netns:"/var/run/netns/c50efcd5-7a6e-4d92-8745-5cf99c51a785" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-857456c46-7f5wf;K8S_POD_INFRA_CONTAINER_ID=5335a2b03a6a955c95fdca0c87846a42ca9e06339d5ead88438fb7bc59e20d1b;K8S_POD_UID=8a5ae51d-d173-4531-8975-f164c975ce1f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ca:c6:4a:5e:76:14\",\"name\":\"5335a2b03a6a955\"},{\"mac\":\"0a:58:0a:d9:00:0b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c50efcd5-7a6e-4d92-8745-5cf99c51a785\"}],\"ips\":[{\"address\":\"10.217.0.11/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.503970742+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"b17235b865d4102aab00cb9ef06e82e2a51d90c2a91e735c747e122c93a40bdb" Netns:"/var/run/netns/ae5804da-db80-40a4-841c-4d621e4de5f8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-qdfr4;K8S_POD_INFRA_CONTAINER_ID=b17235b865d4102aab00cb9ef06e82e2a51d90c2a91e735c747e122c93a40bdb;K8S_POD_UID=a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6a:a7:6e:a9:b5:fc\",\"name\":\"b17235b865d4102\"},{\"mac\":\"0a:58:0a:d9:00:03\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/ae5804da-db80-40a4-841c-4d621e4de5f8\"}],\"ips\":[{\"address\":\"10.217.0.3/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.553044792+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-multus:multus-admission-controller-6c7c885997-4hbbc:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"474af639b558e29","mac":"8a:be:c8:29:f0:e9"},{"name":"eth0","mac":"0a:58:0a:d9:00:20","sandbox":"/var/run/netns/6ed9a71c-21fe-4c39-b4e5-bd941fb04d1f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.32/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.553398223+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-machine-config-operator:machine-config-controller-6df6df6b6b-58shh:297ab9b6-2186-4d5b-a952-2bfd59af63c4:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"adcfabc53e3efc8","mac":"ea:a9:ce:64:1e:be"},{"name":"eth0","mac":"0a:58:0a:d9:00:3f","sandbox":"/var/run/netns/de17a7d3-3b59-4829-871c-7da17f3a095c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.63/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.565637814+00:00 stderr F I1213 00:13:12.565103 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"multus-admission-controller-6c7c885997-4hbbc", UID:"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0", APIVersion:"v1", ResourceVersion:"38270", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.32/23] from ovn-kubernetes 2025-12-13T00:13:12.565637814+00:00 stderr F I1213 00:13:12.565135 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-config-operator", Name:"machine-config-controller-6df6df6b6b-58shh", 
UID:"297ab9b6-2186-4d5b-a952-2bfd59af63c4", APIVersion:"v1", ResourceVersion:"38285", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.63/23] from ovn-kubernetes 2025-12-13T00:13:12.565637814+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-machine-api:control-plane-machine-set-operator-649bd778b4-tt5tw:45a8038e-e7f2-4d93-a6f5-7753aa54e63f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"2c244248bacdb72","mac":"b6:6b:9c:3d:95:7a"},{"name":"eth0","mac":"0a:58:0a:d9:00:14","sandbox":"/var/run/netns/e3bd3846-047d-4858-9567-cb0c32821487"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.20/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.565637814+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-service-ca-operator:service-ca-operator-546b4f8984-pwccz:6d67253e-2acd-4bc1-8185-793587da4f17:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"16e048b797f70b2","mac":"ea:19:8a:a9:b2:bd"},{"name":"eth0","mac":"0a:58:0a:d9:00:0a","sandbox":"/var/run/netns/211450fb-8bf8-4c4f-96fe-8d01b6190ef3"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.10/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.565637814+00:00 stderr F I1213 00:13:12.565410 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-546b4f8984-pwccz", UID:"6d67253e-2acd-4bc1-8185-793587da4f17", APIVersion:"v1", ResourceVersion:"38319", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.10/23] from ovn-kubernetes 2025-12-13T00:13:12.565637814+00:00 stderr F I1213 00:13:12.565419 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-api", Name:"control-plane-machine-set-operator-649bd778b4-tt5tw", UID:"45a8038e-e7f2-4d93-a6f5-7753aa54e63f", APIVersion:"v1", ResourceVersion:"38408", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 
[10.217.0.20/23] from ovn-kubernetes 2025-12-13T00:13:12.567213958+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-route-controller-manager:route-controller-manager-776b8b7477-sfpvs:21d29937-debd-4407-b2b1-d1053cb0f342:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"fd4386b3a903d37","mac":"4a:f4:1b:2d:92:c7"},{"name":"eth0","mac":"0a:58:0a:d9:00:58","sandbox":"/var/run/netns/a7e49f84-7f8f-4bef-8884-7cb9dd511408"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.88/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.567420215+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-marketplace:community-operators-8jhz6:3f4dca86-e6ee-4ec9-8324-86aff960225e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"bbcc465ef8a290f","mac":"1e:a3:94:c8:f2:76"},{"name":"eth0","mac":"0a:58:0a:d9:00:30","sandbox":"/var/run/netns/f306751f-591c-49c9-8a39-0e4cfe596afe"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.48/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.567522268+00:00 stderr F 2025-12-13T00:13:12Z [verbose] I1213 00:13:12.567482 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-route-controller-manager", Name:"route-controller-manager-776b8b7477-sfpvs", UID:"21d29937-debd-4407-b2b1-d1053cb0f342", APIVersion:"v1", ResourceVersion:"38398", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.88/23] from ovn-kubernetes 2025-12-13T00:13:12.567522268+00:00 stderr F I1213 00:13:12.567502 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-8jhz6", UID:"3f4dca86-e6ee-4ec9-8324-86aff960225e", APIVersion:"v1", ResourceVersion:"38173", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.48/23] from ovn-kubernetes 2025-12-13T00:13:12.567522268+00:00 stderr F Add: 
openshift-kube-scheduler-operator:openshift-kube-scheduler-operator-5d9b995f6b-fcgd7:71af81a9-7d43-49b2-9287-c375900aa905:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"aaa0e2898f563e7","mac":"6e:d5:45:2a:82:e8"},{"name":"eth0","mac":"0a:58:0a:d9:00:0c","sandbox":"/var/run/netns/de4075ca-22cf-4eb4-8e73-1cf02c5a4c87"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.12/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.567522268+00:00 stderr P 2025-12-13T00:13:12Z [verbose] 2025-12-13T00:13:12.567557469+00:00 stderr F Add: openshift-machine-api:machine-api-operator-788b7c6b6c-ctdmb:4f8aa612-9da0-4a2b-911e-6a1764a4e74e:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"047d698c98b0fe0","mac":"d2:6c:89:77:51:e8"},{"name":"eth0","mac":"0a:58:0a:d9:00:05","sandbox":"/var/run/netns/18fdaf1d-b2ce-41e3-a4a6-45ee6cc5cabc"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.5/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.567597141+00:00 stderr F I1213 00:13:12.567564 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7", UID:"71af81a9-7d43-49b2-9287-c375900aa905", APIVersion:"v1", ResourceVersion:"38279", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.12/23] from ovn-kubernetes 2025-12-13T00:13:12.567597141+00:00 stderr F I1213 00:13:12.567582 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-machine-api", Name:"machine-api-operator-788b7c6b6c-ctdmb", UID:"4f8aa612-9da0-4a2b-911e-6a1764a4e74e", APIVersion:"v1", ResourceVersion:"38316", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.5/23] from ovn-kubernetes 2025-12-13T00:13:12.576648494+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: 
openshift-marketplace:redhat-marketplace-8s8pc:c782cf62-a827-4677-b3c2-6f82c5f09cbb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"df0fb794ad36ed4","mac":"ea:76:a2:25:2f:7d"},{"name":"eth0","mac":"0a:58:0a:d9:00:33","sandbox":"/var/run/netns/a4c91542-b918-4e77-af5b-6cf401d19d9f"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.51/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.576648494+00:00 stderr F I1213 00:13:12.569468 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-8s8pc", UID:"c782cf62-a827-4677-b3c2-6f82c5f09cbb", APIVersion:"v1", ResourceVersion:"38307", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.51/23] from ovn-kubernetes 2025-12-13T00:13:12.576648494+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-operator-lifecycle-manager:packageserver-8464bcc55b-sjnqz:bd556935-a077-45df-ba3f-d42c39326ccd:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d1e9ea5ea78fec2","mac":"62:e3:98:8e:75:d9"},{"name":"eth0","mac":"0a:58:0a:d9:00:2b","sandbox":"/var/run/netns/1b9b6f65-6f66-4b83-bbe0-7bd63cea6396"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.43/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.576648494+00:00 stderr F I1213 00:13:12.572445 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-8464bcc55b-sjnqz", UID:"bd556935-a077-45df-ba3f-d42c39326ccd", APIVersion:"v1", ResourceVersion:"38267", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.43/23] from ovn-kubernetes 2025-12-13T00:13:12.576648494+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-oauth-apiserver:apiserver-69c565c9b6-vbdpd:5bacb25d-97b6-4491-8fb4-99feae1d802a:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"3d6e166fed28a3e","mac":"82:d3:9c:92:3e:8b"},{"name":"eth0","mac":"0a:58:0a:d9:00:27","sandbox":"/var/run/netns/15880610-41e2-45d7-9664-b6c00be31c17"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.39/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.576648494+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-kube-controller-manager-operator:kube-controller-manager-operator-6f6cb54958-rbddb:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"696c8068ac2d061","mac":"96:81:0c:ac:e6:d1"},{"name":"eth0","mac":"0a:58:0a:d9:00:0f","sandbox":"/var/run/netns/54705ef4-561e-4b7c-9b0f-577a3bccd7a3"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.15/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.576648494+00:00 stderr F I1213 00:13:12.576561 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-oauth-apiserver", Name:"apiserver-69c565c9b6-vbdpd", UID:"5bacb25d-97b6-4491-8fb4-99feae1d802a", APIVersion:"v1", ResourceVersion:"38424", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.39/23] from ovn-kubernetes 2025-12-13T00:13:12.576648494+00:00 stderr F I1213 00:13:12.576582 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-6f6cb54958-rbddb", UID:"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf", APIVersion:"v1", ResourceVersion:"38167", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.15/23] from ovn-kubernetes 2025-12-13T00:13:12.577206144+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-network-diagnostics:network-check-target-v54bt:34a48baf-1bee-4921-8bb2-9b7320e76f79:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"f82a63c55b8ff99","mac":"c2:9a:8b:34:0b:47"},{"name":"eth0","mac":"0a:58:0a:d9:00:04","sandbox":"/var/run/netns/b61551f8-1f61-43bf-a255-72ee160e6642"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.4/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.577554606+00:00 stderr F I1213 00:13:12.577499 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-target-v54bt", UID:"34a48baf-1bee-4921-8bb2-9b7320e76f79", APIVersion:"v1", ResourceVersion:"38430", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.4/23] from ovn-kubernetes 2025-12-13T00:13:12.577661539+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-console:console-644bb77b49-5x5xk:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"d3ee238f663eeb3","mac":"12:23:a8:0d:01:c0"},{"name":"eth0","mac":"0a:58:0a:d9:00:49","sandbox":"/var/run/netns/5eaf8350-9a7c-442d-8c9f-ceb176378fbe"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.73/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.577746472+00:00 stderr F I1213 00:13:12.577699 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console", Name:"console-644bb77b49-5x5xk", UID:"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1", APIVersion:"v1", ResourceVersion:"38310", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.73/23] from ovn-kubernetes 2025-12-13T00:13:12.578059462+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-config-operator:openshift-config-operator-77658b5b66-dq5sc:530553aa-0a1d-423e-8a22-f5eb4bdbb883:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"052172120e22812","mac":"22:57:5e:5b:0c:2f"},{"name":"eth0","mac":"0a:58:0a:d9:00:17","sandbox":"/var/run/netns/c53cf2de-e44a-4315-84b6-140cef1913f1"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.23/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.578248919+00:00 stderr F I1213 00:13:12.578193 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-config-operator", Name:"openshift-config-operator-77658b5b66-dq5sc", UID:"530553aa-0a1d-423e-8a22-f5eb4bdbb883", APIVersion:"v1", ResourceVersion:"38197", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.23/23] from ovn-kubernetes 2025-12-13T00:13:12.579419758+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-kube-apiserver-operator:kube-apiserver-operator-78d54458c4-sc8h7:ed024e5d-8fc2-4c22-803d-73f3c9795f19:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"51926de5dcbf369","mac":"82:02:e7:25:f9:05"},{"name":"eth0","mac":"0a:58:0a:d9:00:07","sandbox":"/var/run/netns/565f1eb9-81bd-418e-af11-993bfc34a0c7"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.7/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.579515001+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-operator-lifecycle-manager:package-server-manager-84d578d794-jw7r2:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"84ec990a66b414e","mac":"26:7d:e2:22:d8:46"},{"name":"eth0","mac":"0a:58:0a:d9:00:18","sandbox":"/var/run/netns/0fec9c49-96cb-4c0f-bb62-560d302bd81b"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.24/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.579568202+00:00 stderr F I1213 00:13:12.579513 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-78d54458c4-sc8h7", 
UID:"ed024e5d-8fc2-4c22-803d-73f3c9795f19", APIVersion:"v1", ResourceVersion:"38417", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.7/23] from ovn-kubernetes 2025-12-13T00:13:12.579623034+00:00 stderr F I1213 00:13:12.579580 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"package-server-manager-84d578d794-jw7r2", UID:"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be", APIVersion:"v1", ResourceVersion:"38249", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.24/23] from ovn-kubernetes 2025-12-13T00:13:12.579708267+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-etcd-operator:etcd-operator-768d5b5d86-722mg:0b5c38ff-1fa8-4219-994d-15776acd4a4d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"716e032c4650ea6","mac":"ee:9b:e0:e0:7e:f1"},{"name":"eth0","mac":"0a:58:0a:d9:00:08","sandbox":"/var/run/netns/b913c369-debf-4c36-86d2-56d56790c29d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.8/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.579764640+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-operator-lifecycle-manager:olm-operator-6d8474f75f-x54mh:c085412c-b875-46c9-ae3e-e6b0d8067091:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7bac34826efa2bf","mac":"de:64:00:3a:44:54"},{"name":"eth0","mac":"0a:58:0a:d9:00:0e","sandbox":"/var/run/netns/e344f778-081f-4018-8c1c-1bf643e9ff64"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.14/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.579845273+00:00 stderr F I1213 00:13:12.579808 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-etcd-operator", Name:"etcd-operator-768d5b5d86-722mg", UID:"0b5c38ff-1fa8-4219-994d-15776acd4a4d", APIVersion:"v1", ResourceVersion:"38392", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.8/23] from ovn-kubernetes 
2025-12-13T00:13:12.579860463+00:00 stderr F I1213 00:13:12.579835 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"olm-operator-6d8474f75f-x54mh", UID:"c085412c-b875-46c9-ae3e-e6b0d8067091", APIVersion:"v1", ResourceVersion:"38276", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.14/23] from ovn-kubernetes 2025-12-13T00:13:12.579860463+00:00 stderr P 2025-12-13T00:13:12Z [verbose] 2025-12-13T00:13:12.579873544+00:00 stderr F Add: openshift-marketplace:community-operators-sdddl:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"35d8c23c6f902a9","mac":"2a:ff:8f:da:cc:9e"},{"name":"eth0","mac":"0a:58:0a:d9:00:66","sandbox":"/var/run/netns/6224b8d1-5e5d-4ad0-8703-da10ca35d003"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.102/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.579971307+00:00 stderr F I1213 00:13:12.579900 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-sdddl", UID:"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760", APIVersion:"v1", ResourceVersion:"38179", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.102/23] from ovn-kubernetes 2025-12-13T00:13:12.579988187+00:00 stderr P 2025-12-13T00:13:12Z [verbose] Add: openshift-dns-operator:dns-operator-75f687757b-nz2xb:10603adc-d495-423c-9459-4caa405960bb:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"6741ceca359a128","mac":"82:1f:a7:11:25:0c"},{"name":"eth0","mac":"0a:58:0a:d9:00:12","sandbox":"/var/run/netns/c78971cd-ebaa-47f7-9721-1e5b0b584c96"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.18/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.580003918+00:00 stderr F 2025-12-13T00:13:12.580103841+00:00 stderr F I1213 00:13:12.580057 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", 
Namespace:"openshift-dns-operator", Name:"dns-operator-75f687757b-nz2xb", UID:"10603adc-d495-423c-9459-4caa405960bb", APIVersion:"v1", ResourceVersion:"38252", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.18/23] from ovn-kubernetes 2025-12-13T00:13:12.653540819+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"adcfabc53e3efc84fc8e56a40029d63c5f6d6a501ce9342fff608679249609dc" Netns:"/var/run/netns/de17a7d3-3b59-4829-871c-7da17f3a095c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-controller-6df6df6b6b-58shh;K8S_POD_INFRA_CONTAINER_ID=adcfabc53e3efc84fc8e56a40029d63c5f6d6a501ce9342fff608679249609dc;K8S_POD_UID=297ab9b6-2186-4d5b-a952-2bfd59af63c4" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ea:a9:ce:64:1e:be\",\"name\":\"adcfabc53e3efc8\"},{\"mac\":\"0a:58:0a:d9:00:3f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/de17a7d3-3b59-4829-871c-7da17f3a095c\"}],\"ips\":[{\"address\":\"10.217.0.63/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:12.682747510+00:00 stderr F 2025-12-13T00:13:12Z [verbose] Add: openshift-service-ca:service-ca-666f99b6f-kk8kg:e4a7de23-6134-4044-902a-0900dc04a501:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"f205e6fa0cc76cd","mac":"5a:d2:5a:5b:49:11"},{"name":"eth0","mac":"0a:58:0a:d9:00:28","sandbox":"/var/run/netns/62f195f8-daa2-4a1e-be20-f0784aee32d6"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.40/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:13:12.682835213+00:00 stderr F I1213 00:13:12.682797 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-service-ca", Name:"service-ca-666f99b6f-kk8kg", UID:"e4a7de23-6134-4044-902a-0900dc04a501", APIVersion:"v1", ResourceVersion:"38206", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.40/23] from 
ovn-kubernetes 2025-12-13T00:13:12.835796423+00:00 stderr F 2025-12-13T00:13:12Z [verbose] ADD finished CNI request ContainerID:"474af639b558e293b6814f01415f5d30ec3b3ad52238cc91885cfe58bb7a7200" Netns:"/var/run/netns/6ed9a71c-21fe-4c39-b4e5-bd941fb04d1f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6c7c885997-4hbbc;K8S_POD_INFRA_CONTAINER_ID=474af639b558e293b6814f01415f5d30ec3b3ad52238cc91885cfe58bb7a7200;K8S_POD_UID=d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"8a:be:c8:29:f0:e9\",\"name\":\"474af639b558e29\"},{\"mac\":\"0a:58:0a:d9:00:20\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6ed9a71c-21fe-4c39-b4e5-bd941fb04d1f\"}],\"ips\":[{\"address\":\"10.217.0.32/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.051240852+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"16e048b797f70b26cb7c601d27d0aac1b7c64c7664585071356f64a2f4ae8193" Netns:"/var/run/netns/211450fb-8bf8-4c4f-96fe-8d01b6190ef3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-546b4f8984-pwccz;K8S_POD_INFRA_CONTAINER_ID=16e048b797f70b26cb7c601d27d0aac1b7c64c7664585071356f64a2f4ae8193;K8S_POD_UID=6d67253e-2acd-4bc1-8185-793587da4f17" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ea:19:8a:a9:b2:bd\",\"name\":\"16e048b797f70b2\"},{\"mac\":\"0a:58:0a:d9:00:0a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/211450fb-8bf8-4c4f-96fe-8d01b6190ef3\"}],\"ips\":[{\"address\":\"10.217.0.10/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.074086070+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"2c244248bacdb72aa7ec190608f054e315808514dccb4b7f29eacb37d7951908" Netns:"/var/run/netns/e3bd3846-047d-4858-9567-cb0c32821487" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-649bd778b4-tt5tw;K8S_POD_INFRA_CONTAINER_ID=2c244248bacdb72aa7ec190608f054e315808514dccb4b7f29eacb37d7951908;K8S_POD_UID=45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"b6:6b:9c:3d:95:7a\",\"name\":\"2c244248bacdb72\"},{\"mac\":\"0a:58:0a:d9:00:14\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e3bd3846-047d-4858-9567-cb0c32821487\"}],\"ips\":[{\"address\":\"10.217.0.20/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.094007999+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"bbcc465ef8a290fa4c2f15ff3569ef12dc750975dad4f6f6f584ed2d30bab0b8" Netns:"/var/run/netns/f306751f-591c-49c9-8a39-0e4cfe596afe" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=bbcc465ef8a290fa4c2f15ff3569ef12dc750975dad4f6f6f584ed2d30bab0b8;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"1e:a3:94:c8:f2:76\",\"name\":\"bbcc465ef8a290f\"},{\"mac\":\"0a:58:0a:d9:00:30\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/f306751f-591c-49c9-8a39-0e4cfe596afe\"}],\"ips\":[{\"address\":\"10.217.0.48/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.135151562+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"047d698c98b0fe010a6fc7b8815b46f231b5e1fda96125068535ef2b36f3bad7" Netns:"/var/run/netns/18fdaf1d-b2ce-41e3-a4a6-45ee6cc5cabc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-788b7c6b6c-ctdmb;K8S_POD_INFRA_CONTAINER_ID=047d698c98b0fe010a6fc7b8815b46f231b5e1fda96125068535ef2b36f3bad7;K8S_POD_UID=4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:6c:89:77:51:e8\",\"name\":\"047d698c98b0fe0\"},{\"mac\":\"0a:58:0a:d9:00:05\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/18fdaf1d-b2ce-41e3-a4a6-45ee6cc5cabc\"}],\"ips\":[{\"address\":\"10.217.0.5/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.159711188+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"aaa0e2898f563e73a3663336abcac71a45ef85b6a6472708e70b2532b614ce1e" Netns:"/var/run/netns/de4075ca-22cf-4eb4-8e73-1cf02c5a4c87" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-5d9b995f6b-fcgd7;K8S_POD_INFRA_CONTAINER_ID=aaa0e2898f563e73a3663336abcac71a45ef85b6a6472708e70b2532b614ce1e;K8S_POD_UID=71af81a9-7d43-49b2-9287-c375900aa905" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"6e:d5:45:2a:82:e8\",\"name\":\"aaa0e2898f563e7\"},{\"mac\":\"0a:58:0a:d9:00:0c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/de4075ca-22cf-4eb4-8e73-1cf02c5a4c87\"}],\"ips\":[{\"address\":\"10.217.0.12/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.175329682+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"df0fb794ad36ed4df8b707753fce89a53b88ad2ccae743627342ec41b1894369" Netns:"/var/run/netns/a4c91542-b918-4e77-af5b-6cf401d19d9f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=df0fb794ad36ed4df8b707753fce89a53b88ad2ccae743627342ec41b1894369;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ea:76:a2:25:2f:7d\",\"name\":\"df0fb794ad36ed4\"},{\"mac\":\"0a:58:0a:d9:00:33\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a4c91542-b918-4e77-af5b-6cf401d19d9f\"}],\"ips\":[{\"address\":\"10.217.0.51/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.197538748+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"d1e9ea5ea78fec2692d814c6876e544dc5f6d3caae5d5e5ab4bdd883bfbbb152" Netns:"/var/run/netns/1b9b6f65-6f66-4b83-bbe0-7bd63cea6396" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-8464bcc55b-sjnqz;K8S_POD_INFRA_CONTAINER_ID=d1e9ea5ea78fec2692d814c6876e544dc5f6d3caae5d5e5ab4bdd883bfbbb152;K8S_POD_UID=bd556935-a077-45df-ba3f-d42c39326ccd" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"62:e3:98:8e:75:d9\",\"name\":\"d1e9ea5ea78fec2\"},{\"mac\":\"0a:58:0a:d9:00:2b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1b9b6f65-6f66-4b83-bbe0-7bd63cea6396\"}],\"ips\":[{\"address\":\"10.217.0.43/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.219524277+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"3d6e166fed28a3e6b6cc681f8b6c8a59043b3add383c503e2334c8f30583f90e" Netns:"/var/run/netns/15880610-41e2-45d7-9664-b6c00be31c17" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-69c565c9b6-vbdpd;K8S_POD_INFRA_CONTAINER_ID=3d6e166fed28a3e6b6cc681f8b6c8a59043b3add383c503e2334c8f30583f90e;K8S_POD_UID=5bacb25d-97b6-4491-8fb4-99feae1d802a" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"82:d3:9c:92:3e:8b\",\"name\":\"3d6e166fed28a3e\"},{\"mac\":\"0a:58:0a:d9:00:27\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/15880610-41e2-45d7-9664-b6c00be31c17\"}],\"ips\":[{\"address\":\"10.217.0.39/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.235088089+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"696c8068ac2d061dbdbb339c4d53a876587abd758ac633d215f43cbbee6b8287" Netns:"/var/run/netns/54705ef4-561e-4b7c-9b0f-577a3bccd7a3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-6f6cb54958-rbddb;K8S_POD_INFRA_CONTAINER_ID=696c8068ac2d061dbdbb339c4d53a876587abd758ac633d215f43cbbee6b8287;K8S_POD_UID=c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"96:81:0c:ac:e6:d1\",\"name\":\"696c8068ac2d061\"},{\"mac\":\"0a:58:0a:d9:00:0f\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/54705ef4-561e-4b7c-9b0f-577a3bccd7a3\"}],\"ips\":[{\"address\":\"10.217.0.15/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.259011844+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"fd4386b3a903d3747a1bf4a96d760e784b3510ae7cb99e1967d3fb3273476b3d" Netns:"/var/run/netns/a7e49f84-7f8f-4bef-8884-7cb9dd511408" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-776b8b7477-sfpvs;K8S_POD_INFRA_CONTAINER_ID=fd4386b3a903d3747a1bf4a96d760e784b3510ae7cb99e1967d3fb3273476b3d;K8S_POD_UID=21d29937-debd-4407-b2b1-d1053cb0f342" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"4a:f4:1b:2d:92:c7\",\"name\":\"fd4386b3a903d37\"},{\"mac\":\"0a:58:0a:d9:00:58\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/a7e49f84-7f8f-4bef-8884-7cb9dd511408\"}],\"ips\":[{\"address\":\"10.217.0.88/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.271534044+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"f82a63c55b8ff998e15cd92d6afb921fdc8e982de3c7ea32c04589ed06296135" Netns:"/var/run/netns/b61551f8-1f61-43bf-a255-72ee160e6642" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-v54bt;K8S_POD_INFRA_CONTAINER_ID=f82a63c55b8ff998e15cd92d6afb921fdc8e982de3c7ea32c04589ed06296135;K8S_POD_UID=34a48baf-1bee-4921-8bb2-9b7320e76f79" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"c2:9a:8b:34:0b:47\",\"name\":\"f82a63c55b8ff99\"},{\"mac\":\"0a:58:0a:d9:00:04\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b61551f8-1f61-43bf-a255-72ee160e6642\"}],\"ips\":[{\"address\":\"10.217.0.4/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.296472592+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"d3ee238f663eeb39a93895b846a0d0d95f6ddc25b5f16ecd3f862f8292c81d51" Netns:"/var/run/netns/5eaf8350-9a7c-442d-8c9f-ceb176378fbe" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-644bb77b49-5x5xk;K8S_POD_INFRA_CONTAINER_ID=d3ee238f663eeb39a93895b846a0d0d95f6ddc25b5f16ecd3f862f8292c81d51;K8S_POD_UID=9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"12:23:a8:0d:01:c0\",\"name\":\"d3ee238f663eeb3\"},{\"mac\":\"0a:58:0a:d9:00:49\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5eaf8350-9a7c-442d-8c9f-ceb176378fbe\"}],\"ips\":[{\"address\":\"10.217.0.73/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.310665689+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"052172120e2281221920352db7242b5fa1aa1540e56c8f618946503aea767e7e" Netns:"/var/run/netns/c53cf2de-e44a-4315-84b6-140cef1913f1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-77658b5b66-dq5sc;K8S_POD_INFRA_CONTAINER_ID=052172120e2281221920352db7242b5fa1aa1540e56c8f618946503aea767e7e;K8S_POD_UID=530553aa-0a1d-423e-8a22-f5eb4bdbb883" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"22:57:5e:5b:0c:2f\",\"name\":\"052172120e22812\"},{\"mac\":\"0a:58:0a:d9:00:17\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c53cf2de-e44a-4315-84b6-140cef1913f1\"}],\"ips\":[{\"address\":\"10.217.0.23/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.330994143+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"51926de5dcbf369143baee16ac43644dbdf8d8959c02f49abc9f694e15e6d07a" Netns:"/var/run/netns/565f1eb9-81bd-418e-af11-993bfc34a0c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-78d54458c4-sc8h7;K8S_POD_INFRA_CONTAINER_ID=51926de5dcbf369143baee16ac43644dbdf8d8959c02f49abc9f694e15e6d07a;K8S_POD_UID=ed024e5d-8fc2-4c22-803d-73f3c9795f19" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"82:02:e7:25:f9:05\",\"name\":\"51926de5dcbf369\"},{\"mac\":\"0a:58:0a:d9:00:07\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/565f1eb9-81bd-418e-af11-993bfc34a0c7\"}],\"ips\":[{\"address\":\"10.217.0.7/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.351861134+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"84ec990a66b414ef649197accb94eb1fbd0fb9076266ba811d3117e7688657ca" Netns:"/var/run/netns/0fec9c49-96cb-4c0f-bb62-560d302bd81b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-84d578d794-jw7r2;K8S_POD_INFRA_CONTAINER_ID=84ec990a66b414ef649197accb94eb1fbd0fb9076266ba811d3117e7688657ca;K8S_POD_UID=63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"26:7d:e2:22:d8:46\",\"name\":\"84ec990a66b414e\"},{\"mac\":\"0a:58:0a:d9:00:18\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/0fec9c49-96cb-4c0f-bb62-560d302bd81b\"}],\"ips\":[{\"address\":\"10.217.0.24/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.370559932+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"7bac34826efa2bf9294d1b22c8da822644d189e6e8bcbc3bc9c9470ac2b2f6b7" Netns:"/var/run/netns/e344f778-081f-4018-8c1c-1bf643e9ff64" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-6d8474f75f-x54mh;K8S_POD_INFRA_CONTAINER_ID=7bac34826efa2bf9294d1b22c8da822644d189e6e8bcbc3bc9c9470ac2b2f6b7;K8S_POD_UID=c085412c-b875-46c9-ae3e-e6b0d8067091" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"de:64:00:3a:44:54\",\"name\":\"7bac34826efa2bf\"},{\"mac\":\"0a:58:0a:d9:00:0e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e344f778-081f-4018-8c1c-1bf643e9ff64\"}],\"ips\":[{\"address\":\"10.217.0.14/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.391513366+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"716e032c4650ea6e62cbb0bcc5e4edde86f8930f6db4a2f527c3a7851b7e42aa" Netns:"/var/run/netns/b913c369-debf-4c36-86d2-56d56790c29d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-768d5b5d86-722mg;K8S_POD_INFRA_CONTAINER_ID=716e032c4650ea6e62cbb0bcc5e4edde86f8930f6db4a2f527c3a7851b7e42aa;K8S_POD_UID=0b5c38ff-1fa8-4219-994d-15776acd4a4d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ee:9b:e0:e0:7e:f1\",\"name\":\"716e032c4650ea6\"},{\"mac\":\"0a:58:0a:d9:00:08\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/b913c369-debf-4c36-86d2-56d56790c29d\"}],\"ips\":[{\"address\":\"10.217.0.8/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.434050935+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"6741ceca359a128669b6237aead37e498431793f6ed70109c463040e739fa00d" Netns:"/var/run/netns/c78971cd-ebaa-47f7-9721-1e5b0b584c96" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-75f687757b-nz2xb;K8S_POD_INFRA_CONTAINER_ID=6741ceca359a128669b6237aead37e498431793f6ed70109c463040e739fa00d;K8S_POD_UID=10603adc-d495-423c-9459-4caa405960bb" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"82:1f:a7:11:25:0c\",\"name\":\"6741ceca359a128\"},{\"mac\":\"0a:58:0a:d9:00:12\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c78971cd-ebaa-47f7-9721-1e5b0b584c96\"}],\"ips\":[{\"address\":\"10.217.0.18/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.435746462+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"35d8c23c6f902a963250d41a6be0caf4f79b81e7ca8e12e4aaf3ec502f7e5608" Netns:"/var/run/netns/6224b8d1-5e5d-4ad0-8703-da10ca35d003" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=35d8c23c6f902a963250d41a6be0caf4f79b81e7ca8e12e4aaf3ec502f7e5608;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"2a:ff:8f:da:cc:9e\",\"name\":\"35d8c23c6f902a9\"},{\"mac\":\"0a:58:0a:d9:00:66\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6224b8d1-5e5d-4ad0-8703-da10ca35d003\"}],\"ips\":[{\"address\":\"10.217.0.102/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:13:13.469216037+00:00 stderr F 2025-12-13T00:13:13Z [verbose] ADD finished CNI request ContainerID:"f205e6fa0cc76cdae9e75007422dfc19a3df6d186d37a079113cdbb791b57eb6" Netns:"/var/run/netns/62f195f8-daa2-4a1e-be20-f0784aee32d6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca;K8S_POD_NAME=service-ca-666f99b6f-kk8kg;K8S_POD_INFRA_CONTAINER_ID=f205e6fa0cc76cdae9e75007422dfc19a3df6d186d37a079113cdbb791b57eb6;K8S_POD_UID=e4a7de23-6134-4044-902a-0900dc04a501" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"5a:d2:5a:5b:49:11\",\"name\":\"f205e6fa0cc76cd\"},{\"mac\":\"0a:58:0a:d9:00:28\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/62f195f8-daa2-4a1e-be20-f0784aee32d6\"}],\"ips\":[{\"address\":\"10.217.0.40/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:14:21.560212908+00:00 stderr F 2025-12-13T00:14:21Z [verbose] ADD starting CNI request ContainerID:"25cb24980666f0cb30f68ec626d7953df71ace051d83cb9f6182387c5160283f" Netns:"/var/run/netns/c1b56ee8-a5ca-447b-90e8-ea15ee027e68" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29426400-gwxp5;K8S_POD_INFRA_CONTAINER_ID=25cb24980666f0cb30f68ec626d7953df71ace051d83cb9f6182387c5160283f;K8S_POD_UID=51a3de13-562b-4bfd-aacb-e8832c894074" Path:"" 2025-12-13T00:14:21.685534238+00:00 stderr P 2025-12-13T00:14:21Z [verbose] 2025-12-13T00:14:21.685612971+00:00 stderr P ADD starting CNI request ContainerID:"ebe3d64883064112fd6f8647f4a0269c7aac9fd68909f48cc03fd72ff547e9dc" Netns:"/var/run/netns/8b498cb7-19f5-4258-987e-cb2590abade4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-pruner-29426400-tlv26;K8S_POD_INFRA_CONTAINER_ID=ebe3d64883064112fd6f8647f4a0269c7aac9fd68909f48cc03fd72ff547e9dc;K8S_POD_UID=e98bd1f0-fe98-48b9-82cb-86e924ade65d" Path:"" 2025-12-13T00:14:21.685639072+00:00 stderr F 2025-12-13T00:14:21.711352139+00:00 stderr F 2025-12-13T00:14:21Z [verbose] ADD starting CNI request ContainerID:"1da7246f28761fbb0987cf52decf013256527ee17caeb395bd8c87bb6bdec6ff" Netns:"/var/run/netns/3eb8773f-4c48-467c-b95c-0bc790b0a4e1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-999b2;K8S_POD_INFRA_CONTAINER_ID=1da7246f28761fbb0987cf52decf013256527ee17caeb395bd8c87bb6bdec6ff;K8S_POD_UID=989c3d90-b7ac-45d9-936e-fae237909d65" Path:"" 2025-12-13T00:14:21.758500205+00:00 
stderr P 2025-12-13T00:14:21Z [verbose] 2025-12-13T00:14:21.758576128+00:00 stderr P ADD starting CNI request ContainerID:"bbef716af563208fd982b280687470c56c323a3cefc1d6308dcf2aa9f1ebd258" Netns:"/var/run/netns/d6cf48cd-9de4-4abc-a6f6-411cf5773fb1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-scn2m;K8S_POD_INFRA_CONTAINER_ID=bbef716af563208fd982b280687470c56c323a3cefc1d6308dcf2aa9f1ebd258;K8S_POD_UID=8106bb8c-f9d7-4688-b722-24144962b129" Path:"" 2025-12-13T00:14:21.758599458+00:00 stderr F 2025-12-13T00:14:21.777256499+00:00 stderr P 2025-12-13T00:14:21Z [verbose] 2025-12-13T00:14:21.777298010+00:00 stderr P ADD starting CNI request ContainerID:"3526b616d8d55151b09666b2259532e41ddac7f95d1f3b5634470c4b12034336" Netns:"/var/run/netns/bc0a46b4-fddb-4579-bb20-abdd69723d08" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-5fv2w;K8S_POD_INFRA_CONTAINER_ID=3526b616d8d55151b09666b2259532e41ddac7f95d1f3b5634470c4b12034336;K8S_POD_UID=c2aedfd2-5601-4fa7-9963-519307c571e9" Path:"" 2025-12-13T00:14:21.777324211+00:00 stderr F 2025-12-13T00:14:21.811085047+00:00 stderr P 2025-12-13T00:14:21Z [verbose] 2025-12-13T00:14:21.811132518+00:00 stderr P Add: openshift-image-registry:image-pruner-29426400-tlv26:e98bd1f0-fe98-48b9-82cb-86e924ade65d:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"ebe3d6488306411","mac":"3e:73:37:c1:a9:b2"},{"name":"eth0","mac":"0a:58:0a:d9:00:1b","sandbox":"/var/run/netns/8b498cb7-19f5-4258-987e-cb2590abade4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.27/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:14:21.811151919+00:00 stderr F 2025-12-13T00:14:21.811418357+00:00 stderr F I1213 00:14:21.811395 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-pruner-29426400-tlv26", UID:"e98bd1f0-fe98-48b9-82cb-86e924ade65d", APIVersion:"v1", 
ResourceVersion:"41000", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.27/23] from ovn-kubernetes 2025-12-13T00:14:21.823646990+00:00 stderr F 2025-12-13T00:14:21Z [verbose] ADD finished CNI request ContainerID:"ebe3d64883064112fd6f8647f4a0269c7aac9fd68909f48cc03fd72ff547e9dc" Netns:"/var/run/netns/8b498cb7-19f5-4258-987e-cb2590abade4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-pruner-29426400-tlv26;K8S_POD_INFRA_CONTAINER_ID=ebe3d64883064112fd6f8647f4a0269c7aac9fd68909f48cc03fd72ff547e9dc;K8S_POD_UID=e98bd1f0-fe98-48b9-82cb-86e924ade65d" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"3e:73:37:c1:a9:b2\",\"name\":\"ebe3d6488306411\"},{\"mac\":\"0a:58:0a:d9:00:1b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/8b498cb7-19f5-4258-987e-cb2590abade4\"}],\"ips\":[{\"address\":\"10.217.0.27/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:14:21.883091962+00:00 stderr F 2025-12-13T00:14:21Z [verbose] Add: openshift-operator-lifecycle-manager:collect-profiles-29426400-gwxp5:51a3de13-562b-4bfd-aacb-e8832c894074:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"25cb24980666f0c","mac":"22:53:06:e6:a9:c2"},{"name":"eth0","mac":"0a:58:0a:d9:00:11","sandbox":"/var/run/netns/c1b56ee8-a5ca-447b-90e8-ea15ee027e68"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.17/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:14:21.883283368+00:00 stderr F I1213 00:14:21.883249 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29426400-gwxp5", UID:"51a3de13-562b-4bfd-aacb-e8832c894074", APIVersion:"v1", ResourceVersion:"40999", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.17/23] from ovn-kubernetes 2025-12-13T00:14:21.894404056+00:00 stderr F 2025-12-13T00:14:21Z [verbose] ADD finished CNI request 
ContainerID:"25cb24980666f0cb30f68ec626d7953df71ace051d83cb9f6182387c5160283f" Netns:"/var/run/netns/c1b56ee8-a5ca-447b-90e8-ea15ee027e68" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29426400-gwxp5;K8S_POD_INFRA_CONTAINER_ID=25cb24980666f0cb30f68ec626d7953df71ace051d83cb9f6182387c5160283f;K8S_POD_UID=51a3de13-562b-4bfd-aacb-e8832c894074" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"22:53:06:e6:a9:c2\",\"name\":\"25cb24980666f0c\"},{\"mac\":\"0a:58:0a:d9:00:11\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/c1b56ee8-a5ca-447b-90e8-ea15ee027e68\"}],\"ips\":[{\"address\":\"10.217.0.17/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:14:22.086123223+00:00 stderr F 2025-12-13T00:14:22Z [verbose] Add: openshift-marketplace:certified-operators-999b2:989c3d90-b7ac-45d9-936e-fae237909d65:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"1da7246f28761fb","mac":"76:08:71:fa:e1:e7"},{"name":"eth0","mac":"0a:58:0a:d9:00:1c","sandbox":"/var/run/netns/3eb8773f-4c48-467c-b95c-0bc790b0a4e1"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.28/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:14:22.086123223+00:00 stderr F I1213 00:14:22.085421 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-999b2", UID:"989c3d90-b7ac-45d9-936e-fae237909d65", APIVersion:"v1", ResourceVersion:"41002", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.28/23] from ovn-kubernetes 2025-12-13T00:14:22.100122303+00:00 stderr P 2025-12-13T00:14:22Z [verbose] 2025-12-13T00:14:22.100189115+00:00 stderr P ADD finished CNI request ContainerID:"1da7246f28761fbb0987cf52decf013256527ee17caeb395bd8c87bb6bdec6ff" Netns:"/var/run/netns/3eb8773f-4c48-467c-b95c-0bc790b0a4e1" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-999b2;K8S_POD_INFRA_CONTAINER_ID=1da7246f28761fbb0987cf52decf013256527ee17caeb395bd8c87bb6bdec6ff;K8S_POD_UID=989c3d90-b7ac-45d9-936e-fae237909d65" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"76:08:71:fa:e1:e7\",\"name\":\"1da7246f28761fb\"},{\"mac\":\"0a:58:0a:d9:00:1c\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/3eb8773f-4c48-467c-b95c-0bc790b0a4e1\"}],\"ips\":[{\"address\":\"10.217.0.28/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:14:22.100214016+00:00 stderr F 2025-12-13T00:14:22.163608814+00:00 stderr F 2025-12-13T00:14:22Z [verbose] Add: openshift-marketplace:redhat-operators-scn2m:8106bb8c-f9d7-4688-b722-24144962b129:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"bbef716af563208","mac":"ce:99:0d:20:7f:e8"},{"name":"eth0","mac":"0a:58:0a:d9:00:1d","sandbox":"/var/run/netns/d6cf48cd-9de4-4abc-a6f6-411cf5773fb1"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.29/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:14:22.163807902+00:00 stderr F I1213 00:14:22.163778 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-scn2m", UID:"8106bb8c-f9d7-4688-b722-24144962b129", APIVersion:"v1", ResourceVersion:"41003", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.29/23] from ovn-kubernetes 2025-12-13T00:14:22.176681125+00:00 stderr F 2025-12-13T00:14:22Z [verbose] Add: openshift-marketplace:redhat-marketplace-5fv2w:c2aedfd2-5601-4fa7-9963-519307c571e9:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"3526b616d8d5515","mac":"a2:c3:18:3e:38:41"},{"name":"eth0","mac":"0a:58:0a:d9:00:1a","sandbox":"/var/run/netns/bc0a46b4-fddb-4579-bb20-abdd69723d08"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.26/23","gateway":"10.217.0.1"}],"dns":{}} 
2025-12-13T00:14:22.176842250+00:00 stderr F I1213 00:14:22.176814 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-5fv2w", UID:"c2aedfd2-5601-4fa7-9963-519307c571e9", APIVersion:"v1", ResourceVersion:"41001", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.26/23] from ovn-kubernetes 2025-12-13T00:14:22.178065900+00:00 stderr P 2025-12-13T00:14:22Z [verbose] 2025-12-13T00:14:22.178116132+00:00 stderr P ADD finished CNI request ContainerID:"bbef716af563208fd982b280687470c56c323a3cefc1d6308dcf2aa9f1ebd258" Netns:"/var/run/netns/d6cf48cd-9de4-4abc-a6f6-411cf5773fb1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-scn2m;K8S_POD_INFRA_CONTAINER_ID=bbef716af563208fd982b280687470c56c323a3cefc1d6308dcf2aa9f1ebd258;K8S_POD_UID=8106bb8c-f9d7-4688-b722-24144962b129" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ce:99:0d:20:7f:e8\",\"name\":\"bbef716af563208\"},{\"mac\":\"0a:58:0a:d9:00:1d\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/d6cf48cd-9de4-4abc-a6f6-411cf5773fb1\"}],\"ips\":[{\"address\":\"10.217.0.29/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:14:22.178141432+00:00 stderr F 2025-12-13T00:14:22.197831016+00:00 stderr F 2025-12-13T00:14:22Z [verbose] ADD finished CNI request ContainerID:"3526b616d8d55151b09666b2259532e41ddac7f95d1f3b5634470c4b12034336" Netns:"/var/run/netns/bc0a46b4-fddb-4579-bb20-abdd69723d08" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-5fv2w;K8S_POD_INFRA_CONTAINER_ID=3526b616d8d55151b09666b2259532e41ddac7f95d1f3b5634470c4b12034336;K8S_POD_UID=c2aedfd2-5601-4fa7-9963-519307c571e9" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a2:c3:18:3e:38:41\",\"name\":\"3526b616d8d5515\"},{\"mac\":\"0a:58:0a:d9:00:1a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/bc0a46b4-fddb-4579-bb20-abdd69723d08\"}],\"ips\":[{\"address\":\"10.217.0.26/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:14:28.661139806+00:00 stderr F 2025-12-13T00:14:28Z [verbose] DEL starting CNI request ContainerID:"25cb24980666f0cb30f68ec626d7953df71ace051d83cb9f6182387c5160283f" Netns:"/var/run/netns/c1b56ee8-a5ca-447b-90e8-ea15ee027e68" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29426400-gwxp5;K8S_POD_INFRA_CONTAINER_ID=25cb24980666f0cb30f68ec626d7953df71ace051d83cb9f6182387c5160283f;K8S_POD_UID=51a3de13-562b-4bfd-aacb-e8832c894074" Path:"" 2025-12-13T00:14:28.661639572+00:00 stderr F 2025-12-13T00:14:28Z [verbose] Del: openshift-operator-lifecycle-manager:collect-profiles-29426400-gwxp5:51a3de13-562b-4bfd-aacb-e8832c894074:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:14:28.813992732+00:00 stderr F 2025-12-13T00:14:28Z [verbose] DEL finished CNI request ContainerID:"25cb24980666f0cb30f68ec626d7953df71ace051d83cb9f6182387c5160283f" Netns:"/var/run/netns/c1b56ee8-a5ca-447b-90e8-ea15ee027e68" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29426400-gwxp5;K8S_POD_INFRA_CONTAINER_ID=25cb24980666f0cb30f68ec626d7953df71ace051d83cb9f6182387c5160283f;K8S_POD_UID=51a3de13-562b-4bfd-aacb-e8832c894074" Path:"", result: "", err: 2025-12-13T00:14:37.839358446+00:00 stderr F 2025-12-13T00:14:37Z [verbose] DEL starting CNI request 
ContainerID:"3526b616d8d55151b09666b2259532e41ddac7f95d1f3b5634470c4b12034336" Netns:"/var/run/netns/bc0a46b4-fddb-4579-bb20-abdd69723d08" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-5fv2w;K8S_POD_INFRA_CONTAINER_ID=3526b616d8d55151b09666b2259532e41ddac7f95d1f3b5634470c4b12034336;K8S_POD_UID=c2aedfd2-5601-4fa7-9963-519307c571e9" Path:"" 2025-12-13T00:14:37.840176192+00:00 stderr F 2025-12-13T00:14:37Z [verbose] Del: openshift-marketplace:redhat-marketplace-5fv2w:c2aedfd2-5601-4fa7-9963-519307c571e9:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:14:37.981489267+00:00 stderr F 2025-12-13T00:14:37Z [verbose] DEL finished CNI request ContainerID:"3526b616d8d55151b09666b2259532e41ddac7f95d1f3b5634470c4b12034336" Netns:"/var/run/netns/bc0a46b4-fddb-4579-bb20-abdd69723d08" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-5fv2w;K8S_POD_INFRA_CONTAINER_ID=3526b616d8d55151b09666b2259532e41ddac7f95d1f3b5634470c4b12034336;K8S_POD_UID=c2aedfd2-5601-4fa7-9963-519307c571e9" Path:"", result: "", err: 2025-12-13T00:14:38.020827492+00:00 stderr P 2025-12-13T00:14:38Z [verbose] 2025-12-13T00:14:38.020886004+00:00 stderr P DEL starting CNI request ContainerID:"f2f014941774dd8ebcb5653d144f7f8ac118b640529f0329d3583008b31a8e5b" Netns:"/var/run/netns/bc09b23e-1b48-46bf-9f17-dfff908746d1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=f2f014941774dd8ebcb5653d144f7f8ac118b640529f0329d3583008b31a8e5b;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"" 2025-12-13T00:14:38.020909365+00:00 stderr F 2025-12-13T00:14:38.021138872+00:00 
stderr P 2025-12-13T00:14:38Z [verbose] 2025-12-13T00:14:38.021170743+00:00 stderr P Del: openshift-marketplace:certified-operators-7287f:887d596e-c519-4bfa-af90-3edd9e1b2f0f:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:14:38.021192684+00:00 stderr F 2025-12-13T00:14:38.097375544+00:00 stderr F 2025-12-13T00:14:38Z [verbose] DEL starting CNI request ContainerID:"1da7246f28761fbb0987cf52decf013256527ee17caeb395bd8c87bb6bdec6ff" Netns:"/var/run/netns/3eb8773f-4c48-467c-b95c-0bc790b0a4e1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-999b2;K8S_POD_INFRA_CONTAINER_ID=1da7246f28761fbb0987cf52decf013256527ee17caeb395bd8c87bb6bdec6ff;K8S_POD_UID=989c3d90-b7ac-45d9-936e-fae237909d65" Path:"" 2025-12-13T00:14:38.097375544+00:00 stderr F 2025-12-13T00:14:38Z [verbose] Del: openshift-marketplace:certified-operators-999b2:989c3d90-b7ac-45d9-936e-fae237909d65:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:14:38.151320669+00:00 stderr P 2025-12-13T00:14:38Z [verbose] 2025-12-13T00:14:38.151363441+00:00 stderr P DEL finished CNI request ContainerID:"f2f014941774dd8ebcb5653d144f7f8ac118b640529f0329d3583008b31a8e5b" Netns:"/var/run/netns/bc09b23e-1b48-46bf-9f17-dfff908746d1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-7287f;K8S_POD_INFRA_CONTAINER_ID=f2f014941774dd8ebcb5653d144f7f8ac118b640529f0329d3583008b31a8e5b;K8S_POD_UID=887d596e-c519-4bfa-af90-3edd9e1b2f0f" Path:"", result: "", 
err: 2025-12-13T00:14:38.151382922+00:00 stderr F 2025-12-13T00:14:38.164506413+00:00 stderr F 2025-12-13T00:14:38Z [verbose] DEL starting CNI request ContainerID:"35d8c23c6f902a963250d41a6be0caf4f79b81e7ca8e12e4aaf3ec502f7e5608" Netns:"/var/run/netns/6224b8d1-5e5d-4ad0-8703-da10ca35d003" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=35d8c23c6f902a963250d41a6be0caf4f79b81e7ca8e12e4aaf3ec502f7e5608;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"" 2025-12-13T00:14:38.164637817+00:00 stderr F 2025-12-13T00:14:38Z [verbose] Del: openshift-marketplace:community-operators-sdddl:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:14:38.209262563+00:00 stderr F 2025-12-13T00:14:38Z [verbose] DEL starting CNI request ContainerID:"bbef716af563208fd982b280687470c56c323a3cefc1d6308dcf2aa9f1ebd258" Netns:"/var/run/netns/d6cf48cd-9de4-4abc-a6f6-411cf5773fb1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-scn2m;K8S_POD_INFRA_CONTAINER_ID=bbef716af563208fd982b280687470c56c323a3cefc1d6308dcf2aa9f1ebd258;K8S_POD_UID=8106bb8c-f9d7-4688-b722-24144962b129" Path:"" 2025-12-13T00:14:38.209473799+00:00 stderr F 2025-12-13T00:14:38Z [verbose] Del: openshift-marketplace:redhat-operators-scn2m:8106bb8c-f9d7-4688-b722-24144962b129:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:14:38.224970008+00:00 stderr F 2025-12-13T00:14:38Z 
[verbose] DEL starting CNI request ContainerID:"07ba5aad8c5b88d1090aba13b5c78fa7cde288277aa275b0f47ef599ede5a2d5" Netns:"/var/run/netns/1505d5b3-f21b-4e23-8333-cde8c9ab67a4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=07ba5aad8c5b88d1090aba13b5c78fa7cde288277aa275b0f47ef599ede5a2d5;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"" 2025-12-13T00:14:38.224970008+00:00 stderr F 2025-12-13T00:14:38Z [verbose] Del: openshift-marketplace:redhat-operators-f4jkp:4092a9f8-5acc-4932-9e90-ef962eeb301a:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:14:38.225299469+00:00 stderr P 2025-12-13T00:14:38Z [verbose] 2025-12-13T00:14:38.225325470+00:00 stderr P DEL starting CNI request ContainerID:"bbcc465ef8a290fa4c2f15ff3569ef12dc750975dad4f6f6f584ed2d30bab0b8" Netns:"/var/run/netns/f306751f-591c-49c9-8a39-0e4cfe596afe" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=bbcc465ef8a290fa4c2f15ff3569ef12dc750975dad4f6f6f584ed2d30bab0b8;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"" 2025-12-13T00:14:38.225344290+00:00 stderr F 2025-12-13T00:14:38.225521646+00:00 stderr P 2025-12-13T00:14:38Z [verbose] 2025-12-13T00:14:38.225543926+00:00 stderr P Del: openshift-marketplace:community-operators-8jhz6:3f4dca86-e6ee-4ec9-8324-86aff960225e:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:14:38.225562537+00:00 stderr F 
2025-12-13T00:14:38.241161119+00:00 stderr F 2025-12-13T00:14:38Z [verbose] DEL finished CNI request ContainerID:"1da7246f28761fbb0987cf52decf013256527ee17caeb395bd8c87bb6bdec6ff" Netns:"/var/run/netns/3eb8773f-4c48-467c-b95c-0bc790b0a4e1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-999b2;K8S_POD_INFRA_CONTAINER_ID=1da7246f28761fbb0987cf52decf013256527ee17caeb395bd8c87bb6bdec6ff;K8S_POD_UID=989c3d90-b7ac-45d9-936e-fae237909d65" Path:"", result: "", err: 2025-12-13T00:14:38.257881827+00:00 stderr F 2025-12-13T00:14:38Z [verbose] ADD starting CNI request ContainerID:"b39c75d998d69051dc9638b39314475bd093faad6001f12a8f43b177ad818f9e" Netns:"/var/run/netns/8c8176aa-bbb3-42bf-8ce4-5778f36b3611" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-kghgr;K8S_POD_INFRA_CONTAINER_ID=b39c75d998d69051dc9638b39314475bd093faad6001f12a8f43b177ad818f9e;K8S_POD_UID=9025b167-d0fc-419f-92c1-add28909ab7c" Path:"" 2025-12-13T00:14:38.381843284+00:00 stderr F 2025-12-13T00:14:38Z [verbose] DEL finished CNI request ContainerID:"35d8c23c6f902a963250d41a6be0caf4f79b81e7ca8e12e4aaf3ec502f7e5608" Netns:"/var/run/netns/6224b8d1-5e5d-4ad0-8703-da10ca35d003" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-sdddl;K8S_POD_INFRA_CONTAINER_ID=35d8c23c6f902a963250d41a6be0caf4f79b81e7ca8e12e4aaf3ec502f7e5608;K8S_POD_UID=fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Path:"", result: "", err: 2025-12-13T00:14:38.489231427+00:00 stderr F 2025-12-13T00:14:38Z [verbose] DEL finished CNI request ContainerID:"bbcc465ef8a290fa4c2f15ff3569ef12dc750975dad4f6f6f584ed2d30bab0b8" Netns:"/var/run/netns/f306751f-591c-49c9-8a39-0e4cfe596afe" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-8jhz6;K8S_POD_INFRA_CONTAINER_ID=bbcc465ef8a290fa4c2f15ff3569ef12dc750975dad4f6f6f584ed2d30bab0b8;K8S_POD_UID=3f4dca86-e6ee-4ec9-8324-86aff960225e" Path:"", result: "", err: 2025-12-13T00:14:38.499024963+00:00 stderr F 2025-12-13T00:14:38Z [verbose] Add: openshift-marketplace:marketplace-operator-8b455464d-kghgr:9025b167-d0fc-419f-92c1-add28909ab7c:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"b39c75d998d6905","mac":"d2:ee:f3:55:72:9d"},{"name":"eth0","mac":"0a:58:0a:d9:00:1e","sandbox":"/var/run/netns/8c8176aa-bbb3-42bf-8ce4-5778f36b3611"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.30/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:14:38.499024963+00:00 stderr F I1213 00:14:38.494513 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"marketplace-operator-8b455464d-kghgr", UID:"9025b167-d0fc-419f-92c1-add28909ab7c", APIVersion:"v1", ResourceVersion:"41220", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.30/23] from ovn-kubernetes 2025-12-13T00:14:38.506249375+00:00 stderr F 2025-12-13T00:14:38Z [verbose] DEL finished CNI request ContainerID:"bbef716af563208fd982b280687470c56c323a3cefc1d6308dcf2aa9f1ebd258" Netns:"/var/run/netns/d6cf48cd-9de4-4abc-a6f6-411cf5773fb1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-scn2m;K8S_POD_INFRA_CONTAINER_ID=bbef716af563208fd982b280687470c56c323a3cefc1d6308dcf2aa9f1ebd258;K8S_POD_UID=8106bb8c-f9d7-4688-b722-24144962b129" Path:"", result: "", err: 2025-12-13T00:14:38.508319411+00:00 stderr F 2025-12-13T00:14:38Z [verbose] DEL finished CNI request ContainerID:"07ba5aad8c5b88d1090aba13b5c78fa7cde288277aa275b0f47ef599ede5a2d5" Netns:"/var/run/netns/1505d5b3-f21b-4e23-8333-cde8c9ab67a4" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-f4jkp;K8S_POD_INFRA_CONTAINER_ID=07ba5aad8c5b88d1090aba13b5c78fa7cde288277aa275b0f47ef599ede5a2d5;K8S_POD_UID=4092a9f8-5acc-4932-9e90-ef962eeb301a" Path:"", result: "", err: 2025-12-13T00:14:38.512318100+00:00 stderr F 2025-12-13T00:14:38Z [verbose] ADD finished CNI request ContainerID:"b39c75d998d69051dc9638b39314475bd093faad6001f12a8f43b177ad818f9e" Netns:"/var/run/netns/8c8176aa-bbb3-42bf-8ce4-5778f36b3611" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-kghgr;K8S_POD_INFRA_CONTAINER_ID=b39c75d998d69051dc9638b39314475bd093faad6001f12a8f43b177ad818f9e;K8S_POD_UID=9025b167-d0fc-419f-92c1-add28909ab7c" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"d2:ee:f3:55:72:9d\",\"name\":\"b39c75d998d6905\"},{\"mac\":\"0a:58:0a:d9:00:1e\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/8c8176aa-bbb3-42bf-8ce4-5778f36b3611\"}],\"ips\":[{\"address\":\"10.217.0.30/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:14:39.192140095+00:00 stderr F 2025-12-13T00:14:39Z [verbose] DEL starting CNI request ContainerID:"272898bbbe67ff8beb87ec03ded40b79cb3587276d609dca5e2c8a14e795335f" Netns:"/var/run/netns/87394dcd-a97c-40df-9fc1-9697ef0f2622" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=272898bbbe67ff8beb87ec03ded40b79cb3587276d609dca5e2c8a14e795335f;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"" 2025-12-13T00:14:39.192140095+00:00 stderr F 2025-12-13T00:14:39Z [verbose] Del: openshift-marketplace:marketplace-operator-8b455464d-f9xdt:3482be94-0cdb-4e2a-889b-e5fac59fdbf5:ovn-kubernetes:eth0 
{"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:14:39.343753881+00:00 stderr F 2025-12-13T00:14:39Z [verbose] DEL finished CNI request ContainerID:"272898bbbe67ff8beb87ec03ded40b79cb3587276d609dca5e2c8a14e795335f" Netns:"/var/run/netns/87394dcd-a97c-40df-9fc1-9697ef0f2622" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-8b455464d-f9xdt;K8S_POD_INFRA_CONTAINER_ID=272898bbbe67ff8beb87ec03ded40b79cb3587276d609dca5e2c8a14e795335f;K8S_POD_UID=3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Path:"", result: "", err: 2025-12-13T00:14:40.841369019+00:00 stderr F 2025-12-13T00:14:40Z [verbose] ADD starting CNI request ContainerID:"7af9d38479081247d56825be4a5b4e9d5b1501a1d255aef236bb953eb8d96941" Netns:"/var/run/netns/dca6fbea-ad45-43cc-8b64-7bb8df7ebdbd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-lcrg8;K8S_POD_INFRA_CONTAINER_ID=7af9d38479081247d56825be4a5b4e9d5b1501a1d255aef236bb953eb8d96941;K8S_POD_UID=7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20" Path:"" 2025-12-13T00:14:40.960976966+00:00 stderr F 2025-12-13T00:14:40Z [verbose] Add: openshift-marketplace:certified-operators-lcrg8:7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"7af9d3847908124","mac":"be:db:03:b5:07:7f"},{"name":"eth0","mac":"0a:58:0a:d9:00:21","sandbox":"/var/run/netns/dca6fbea-ad45-43cc-8b64-7bb8df7ebdbd"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.33/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:14:40.961293587+00:00 stderr F I1213 00:14:40.961216 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"certified-operators-lcrg8", 
UID:"7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20", APIVersion:"v1", ResourceVersion:"41265", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.33/23] from ovn-kubernetes 2025-12-13T00:14:40.974649856+00:00 stderr F 2025-12-13T00:14:40Z [verbose] ADD finished CNI request ContainerID:"7af9d38479081247d56825be4a5b4e9d5b1501a1d255aef236bb953eb8d96941" Netns:"/var/run/netns/dca6fbea-ad45-43cc-8b64-7bb8df7ebdbd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-lcrg8;K8S_POD_INFRA_CONTAINER_ID=7af9d38479081247d56825be4a5b4e9d5b1501a1d255aef236bb953eb8d96941;K8S_POD_UID=7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"be:db:03:b5:07:7f\",\"name\":\"7af9d3847908124\"},{\"mac\":\"0a:58:0a:d9:00:21\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/dca6fbea-ad45-43cc-8b64-7bb8df7ebdbd\"}],\"ips\":[{\"address\":\"10.217.0.33/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:14:42.862449703+00:00 stderr F 2025-12-13T00:14:42Z [verbose] DEL starting CNI request ContainerID:"df0fb794ad36ed4df8b707753fce89a53b88ad2ccae743627342ec41b1894369" Netns:"/var/run/netns/a4c91542-b918-4e77-af5b-6cf401d19d9f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=df0fb794ad36ed4df8b707753fce89a53b88ad2ccae743627342ec41b1894369;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"" 2025-12-13T00:14:42.862966260+00:00 stderr F 2025-12-13T00:14:42Z [verbose] Del: openshift-marketplace:redhat-marketplace-8s8pc:c782cf62-a827-4677-b3c2-6f82c5f09cbb:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 
2025-12-13T00:14:43.030130877+00:00 stderr F 2025-12-13T00:14:43Z [verbose] DEL finished CNI request ContainerID:"df0fb794ad36ed4df8b707753fce89a53b88ad2ccae743627342ec41b1894369" Netns:"/var/run/netns/a4c91542-b918-4e77-af5b-6cf401d19d9f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-8s8pc;K8S_POD_INFRA_CONTAINER_ID=df0fb794ad36ed4df8b707753fce89a53b88ad2ccae743627342ec41b1894369;K8S_POD_UID=c782cf62-a827-4677-b3c2-6f82c5f09cbb" Path:"", result: "", err: 2025-12-13T00:14:43.824083782+00:00 stderr F 2025-12-13T00:14:43Z [verbose] ADD starting CNI request ContainerID:"823c0b64ef91bafd383fd306ccb5578c0076c31c1170af42711e95c9862d4418" Netns:"/var/run/netns/4c608cf6-facd-4939-8e6f-16a0ce6faf38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-fs22p;K8S_POD_INFRA_CONTAINER_ID=823c0b64ef91bafd383fd306ccb5578c0076c31c1170af42711e95c9862d4418;K8S_POD_UID=99ebe506-22a8-42f2-b051-2f2f5dcb0882" Path:"" 2025-12-13T00:14:44.019522668+00:00 stderr F 2025-12-13T00:14:44Z [verbose] Add: openshift-marketplace:community-operators-fs22p:99ebe506-22a8-42f2-b051-2f2f5dcb0882:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"823c0b64ef91baf","mac":"3a:9e:ed:87:e2:fb"},{"name":"eth0","mac":"0a:58:0a:d9:00:22","sandbox":"/var/run/netns/4c608cf6-facd-4939-8e6f-16a0ce6faf38"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.34/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:14:44.019759516+00:00 stderr F I1213 00:14:44.019705 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-fs22p", UID:"99ebe506-22a8-42f2-b051-2f2f5dcb0882", APIVersion:"v1", ResourceVersion:"41305", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.34/23] from ovn-kubernetes 2025-12-13T00:14:44.037528948+00:00 stderr F 2025-12-13T00:14:44Z [verbose] ADD finished CNI 
request ContainerID:"823c0b64ef91bafd383fd306ccb5578c0076c31c1170af42711e95c9862d4418" Netns:"/var/run/netns/4c608cf6-facd-4939-8e6f-16a0ce6faf38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-fs22p;K8S_POD_INFRA_CONTAINER_ID=823c0b64ef91bafd383fd306ccb5578c0076c31c1170af42711e95c9862d4418;K8S_POD_UID=99ebe506-22a8-42f2-b051-2f2f5dcb0882" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"3a:9e:ed:87:e2:fb\",\"name\":\"823c0b64ef91baf\"},{\"mac\":\"0a:58:0a:d9:00:22\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/4c608cf6-facd-4939-8e6f-16a0ce6faf38\"}],\"ips\":[{\"address\":\"10.217.0.34/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:14:44.849573466+00:00 stderr F 2025-12-13T00:14:44Z [verbose] ADD starting CNI request ContainerID:"cb43fbace872486d8c039f3b1528b55a31c2aa74f81d0b4ea2be6be5f78b8a06" Netns:"/var/run/netns/cf794177-c67c-443d-baf2-5e0ccea2d05c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zg7cl;K8S_POD_INFRA_CONTAINER_ID=cb43fbace872486d8c039f3b1528b55a31c2aa74f81d0b4ea2be6be5f78b8a06;K8S_POD_UID=b9a848b0-1438-4ada-b7da-2fe53dbf235f" Path:"" 2025-12-13T00:14:45.343059758+00:00 stderr F 2025-12-13T00:14:45Z [verbose] Add: openshift-marketplace:redhat-operators-zg7cl:b9a848b0-1438-4ada-b7da-2fe53dbf235f:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"cb43fbace872486","mac":"e2:0a:46:4e:aa:ee"},{"name":"eth0","mac":"0a:58:0a:d9:00:23","sandbox":"/var/run/netns/cf794177-c67c-443d-baf2-5e0ccea2d05c"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.35/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:14:45.343059758+00:00 stderr F I1213 00:14:45.341679 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-operators-zg7cl", UID:"b9a848b0-1438-4ada-b7da-2fe53dbf235f", APIVersion:"v1", 
ResourceVersion:"41323", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.35/23] from ovn-kubernetes 2025-12-13T00:14:45.357310446+00:00 stderr F 2025-12-13T00:14:45Z [verbose] ADD finished CNI request ContainerID:"cb43fbace872486d8c039f3b1528b55a31c2aa74f81d0b4ea2be6be5f78b8a06" Netns:"/var/run/netns/cf794177-c67c-443d-baf2-5e0ccea2d05c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zg7cl;K8S_POD_INFRA_CONTAINER_ID=cb43fbace872486d8c039f3b1528b55a31c2aa74f81d0b4ea2be6be5f78b8a06;K8S_POD_UID=b9a848b0-1438-4ada-b7da-2fe53dbf235f" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"e2:0a:46:4e:aa:ee\",\"name\":\"cb43fbace872486\"},{\"mac\":\"0a:58:0a:d9:00:23\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/cf794177-c67c-443d-baf2-5e0ccea2d05c\"}],\"ips\":[{\"address\":\"10.217.0.35/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:14:46.412647049+00:00 stderr F 2025-12-13T00:14:46Z [verbose] ADD starting CNI request ContainerID:"8f2f2e79e395ed4d4e3d5069544410f363c310de5e3d2cbd233d070a733fe6e1" Netns:"/var/run/netns/6b82d933-da16-48c9-9b4f-ac8301f13ac2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nv4pl;K8S_POD_INFRA_CONTAINER_ID=8f2f2e79e395ed4d4e3d5069544410f363c310de5e3d2cbd233d070a733fe6e1;K8S_POD_UID=85ca9974-9a5f-4cbf-a126-71bf61c49940" Path:"" 2025-12-13T00:14:46.647579965+00:00 stderr F 2025-12-13T00:14:46Z [verbose] ADD starting CNI request ContainerID:"4a4fffc789383907ce6216a597a01de97f50d64495f651b3ac04cf29b07e59fa" Netns:"/var/run/netns/e5922594-baba-48dd-a4a6-6cd350f2f7ef" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-s2hxn;K8S_POD_INFRA_CONTAINER_ID=4a4fffc789383907ce6216a597a01de97f50d64495f651b3ac04cf29b07e59fa;K8S_POD_UID=220875e2-503f-46b5-aaa6-bb8fc45743cc" Path:"" 
2025-12-13T00:14:46.755022411+00:00 stderr F 2025-12-13T00:14:46Z [verbose] Add: openshift-marketplace:redhat-marketplace-nv4pl:85ca9974-9a5f-4cbf-a126-71bf61c49940:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"8f2f2e79e395ed4","mac":"0a:68:89:a2:74:e9"},{"name":"eth0","mac":"0a:58:0a:d9:00:24","sandbox":"/var/run/netns/6b82d933-da16-48c9-9b4f-ac8301f13ac2"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.36/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:14:46.755239748+00:00 stderr F I1213 00:14:46.755188 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"redhat-marketplace-nv4pl", UID:"85ca9974-9a5f-4cbf-a126-71bf61c49940", APIVersion:"v1", ResourceVersion:"41340", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.36/23] from ovn-kubernetes 2025-12-13T00:14:46.768461114+00:00 stderr F 2025-12-13T00:14:46Z [verbose] ADD finished CNI request ContainerID:"8f2f2e79e395ed4d4e3d5069544410f363c310de5e3d2cbd233d070a733fe6e1" Netns:"/var/run/netns/6b82d933-da16-48c9-9b4f-ac8301f13ac2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-nv4pl;K8S_POD_INFRA_CONTAINER_ID=8f2f2e79e395ed4d4e3d5069544410f363c310de5e3d2cbd233d070a733fe6e1;K8S_POD_UID=85ca9974-9a5f-4cbf-a126-71bf61c49940" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"0a:68:89:a2:74:e9\",\"name\":\"8f2f2e79e395ed4\"},{\"mac\":\"0a:58:0a:d9:00:24\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/6b82d933-da16-48c9-9b4f-ac8301f13ac2\"}],\"ips\":[{\"address\":\"10.217.0.36/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:14:46.965207931+00:00 stderr F 2025-12-13T00:14:46Z [verbose] Add: openshift-marketplace:community-operators-s2hxn:220875e2-503f-46b5-aaa6-bb8fc45743cc:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"4a4fffc78938390","mac":"ae:90:26:47:9a:a3"},{"name":"eth0","mac":"0a:58:0a:d9:00:25","sandbox":"/var/run/netns/e5922594-baba-48dd-a4a6-6cd350f2f7ef"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.37/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:14:46.965207931+00:00 stderr F I1213 00:14:46.964049 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-marketplace", Name:"community-operators-s2hxn", UID:"220875e2-503f-46b5-aaa6-bb8fc45743cc", APIVersion:"v1", ResourceVersion:"41345", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.37/23] from ovn-kubernetes 2025-12-13T00:14:46.983317863+00:00 stderr F 2025-12-13T00:14:46Z [verbose] ADD finished CNI request ContainerID:"4a4fffc789383907ce6216a597a01de97f50d64495f651b3ac04cf29b07e59fa" Netns:"/var/run/netns/e5922594-baba-48dd-a4a6-6cd350f2f7ef" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-s2hxn;K8S_POD_INFRA_CONTAINER_ID=4a4fffc789383907ce6216a597a01de97f50d64495f651b3ac04cf29b07e59fa;K8S_POD_UID=220875e2-503f-46b5-aaa6-bb8fc45743cc" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"ae:90:26:47:9a:a3\",\"name\":\"4a4fffc78938390\"},{\"mac\":\"0a:58:0a:d9:00:25\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/e5922594-baba-48dd-a4a6-6cd350f2f7ef\"}],\"ips\":[{\"address\":\"10.217.0.37/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:15:00.560509449+00:00 stderr F 2025-12-13T00:15:00Z [verbose] ADD starting CNI request ContainerID:"06923158b764a9f08f2a48d1436e003eb7053015ef1edaef9990bdffd56f6cab" Netns:"/var/run/netns/227aec9b-99e4-4e67-89ce-fe8822f88603" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29426415-vhdrh;K8S_POD_INFRA_CONTAINER_ID=06923158b764a9f08f2a48d1436e003eb7053015ef1edaef9990bdffd56f6cab;K8S_POD_UID=7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2" Path:"" 2025-12-13T00:15:00.952592489+00:00 stderr F 2025-12-13T00:15:00Z [verbose] Add: openshift-operator-lifecycle-manager:collect-profiles-29426415-vhdrh:7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"06923158b764a9f","mac":"92:3c:72:53:0e:a3"},{"name":"eth0","mac":"0a:58:0a:d9:00:26","sandbox":"/var/run/netns/227aec9b-99e4-4e67-89ce-fe8822f88603"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.38/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:15:00.954032755+00:00 stderr F I1213 00:15:00.952971 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-operator-lifecycle-manager", Name:"collect-profiles-29426415-vhdrh", UID:"7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2", APIVersion:"v1", ResourceVersion:"41440", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.38/23] from ovn-kubernetes 2025-12-13T00:15:00.964536123+00:00 stderr P 2025-12-13T00:15:00Z [verbose] 2025-12-13T00:15:00.964594765+00:00 stderr P ADD finished CNI request ContainerID:"06923158b764a9f08f2a48d1436e003eb7053015ef1edaef9990bdffd56f6cab" Netns:"/var/run/netns/227aec9b-99e4-4e67-89ce-fe8822f88603" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29426415-vhdrh;K8S_POD_INFRA_CONTAINER_ID=06923158b764a9f08f2a48d1436e003eb7053015ef1edaef9990bdffd56f6cab;K8S_POD_UID=7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2" Path:"", result: 
"{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"92:3c:72:53:0e:a3\",\"name\":\"06923158b764a9f\"},{\"mac\":\"0a:58:0a:d9:00:26\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/227aec9b-99e4-4e67-89ce-fe8822f88603\"}],\"ips\":[{\"address\":\"10.217.0.38/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:15:00.964617626+00:00 stderr F 2025-12-13T00:15:13.038980128+00:00 stderr F 2025-12-13T00:15:13Z [verbose] ADD starting CNI request ContainerID:"f3745ee021356f2a087478a33277cbbb141272af013af64d28283b4eedc33c16" Netns:"/var/run/netns/1d610287-f81b-455b-a8c6-2bc9fc06507d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=f3745ee021356f2a087478a33277cbbb141272af013af64d28283b4eedc33c16;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"" 2025-12-13T00:15:13.365848183+00:00 stderr F 2025-12-13T00:15:13Z [verbose] Add: openshift-image-registry:image-registry-75779c45fd-v2j2v:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319:ovn-kubernetes(ovn-kubernetes):eth0 {"cniVersion":"0.4.0","interfaces":[{"name":"f3745ee021356f2","mac":"a2:a1:0d:1e:74:a2"},{"name":"eth0","mac":"0a:58:0a:d9:00:3b","sandbox":"/var/run/netns/1d610287-f81b-455b-a8c6-2bc9fc06507d"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.59/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:15:13.366207034+00:00 stderr F I1213 00:15:13.366164 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-75779c45fd-v2j2v", UID:"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319", APIVersion:"v1", ResourceVersion:"38389", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.59/23] from ovn-kubernetes 2025-12-13T00:15:13.383585268+00:00 stderr F 2025-12-13T00:15:13Z [verbose] ADD finished CNI request ContainerID:"f3745ee021356f2a087478a33277cbbb141272af013af64d28283b4eedc33c16" 
Netns:"/var/run/netns/1d610287-f81b-455b-a8c6-2bc9fc06507d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=f3745ee021356f2a087478a33277cbbb141272af013af64d28283b4eedc33c16;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"a2:a1:0d:1e:74:a2\",\"name\":\"f3745ee021356f2\"},{\"mac\":\"0a:58:0a:d9:00:3b\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/1d610287-f81b-455b-a8c6-2bc9fc06507d\"}],\"ips\":[{\"address\":\"10.217.0.59/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:15:40.631654565+00:00 stderr F 2025-12-13T00:15:40Z [verbose] DEL starting CNI request ContainerID:"06923158b764a9f08f2a48d1436e003eb7053015ef1edaef9990bdffd56f6cab" Netns:"/var/run/netns/227aec9b-99e4-4e67-89ce-fe8822f88603" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29426415-vhdrh;K8S_POD_INFRA_CONTAINER_ID=06923158b764a9f08f2a48d1436e003eb7053015ef1edaef9990bdffd56f6cab;K8S_POD_UID=7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2" Path:"" 2025-12-13T00:15:40.632541281+00:00 stderr F 2025-12-13T00:15:40Z [verbose] Del: openshift-operator-lifecycle-manager:collect-profiles-29426415-vhdrh:7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:15:40.791004932+00:00 stderr F 2025-12-13T00:15:40Z [verbose] DEL finished CNI request ContainerID:"06923158b764a9f08f2a48d1436e003eb7053015ef1edaef9990bdffd56f6cab" Netns:"/var/run/netns/227aec9b-99e4-4e67-89ce-fe8822f88603" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29426415-vhdrh;K8S_POD_INFRA_CONTAINER_ID=06923158b764a9f08f2a48d1436e003eb7053015ef1edaef9990bdffd56f6cab;K8S_POD_UID=7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2" Path:"", result: "", err: 2025-12-13T00:15:46.819818272+00:00 stderr F 2025-12-13T00:15:46Z [verbose] DEL starting CNI request ContainerID:"823c0b64ef91bafd383fd306ccb5578c0076c31c1170af42711e95c9862d4418" Netns:"/var/run/netns/4c608cf6-facd-4939-8e6f-16a0ce6faf38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-fs22p;K8S_POD_INFRA_CONTAINER_ID=823c0b64ef91bafd383fd306ccb5578c0076c31c1170af42711e95c9862d4418;K8S_POD_UID=99ebe506-22a8-42f2-b051-2f2f5dcb0882" Path:"" 2025-12-13T00:15:46.820467591+00:00 stderr F 2025-12-13T00:15:46Z [verbose] Del: openshift-marketplace:community-operators-fs22p:99ebe506-22a8-42f2-b051-2f2f5dcb0882:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:15:46.965792042+00:00 stderr F 2025-12-13T00:15:46Z [verbose] DEL finished CNI request ContainerID:"823c0b64ef91bafd383fd306ccb5578c0076c31c1170af42711e95c9862d4418" Netns:"/var/run/netns/4c608cf6-facd-4939-8e6f-16a0ce6faf38" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-fs22p;K8S_POD_INFRA_CONTAINER_ID=823c0b64ef91bafd383fd306ccb5578c0076c31c1170af42711e95c9862d4418;K8S_POD_UID=99ebe506-22a8-42f2-b051-2f2f5dcb0882" Path:"", result: "", err: 2025-12-13T00:15:57.749722231+00:00 stderr F 2025-12-13T00:15:57Z [verbose] DEL starting CNI request ContainerID:"ebe3d64883064112fd6f8647f4a0269c7aac9fd68909f48cc03fd72ff547e9dc" Netns:"/var/run/netns/8b498cb7-19f5-4258-987e-cb2590abade4" 
IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-pruner-29426400-tlv26;K8S_POD_INFRA_CONTAINER_ID=ebe3d64883064112fd6f8647f4a0269c7aac9fd68909f48cc03fd72ff547e9dc;K8S_POD_UID=e98bd1f0-fe98-48b9-82cb-86e924ade65d" Path:"" 2025-12-13T00:15:57.750492264+00:00 stderr F 2025-12-13T00:15:57Z [verbose] Del: openshift-image-registry:image-pruner-29426400-tlv26:e98bd1f0-fe98-48b9-82cb-86e924ade65d:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:15:57.906825941+00:00 stderr F 2025-12-13T00:15:57Z [verbose] DEL finished CNI request ContainerID:"ebe3d64883064112fd6f8647f4a0269c7aac9fd68909f48cc03fd72ff547e9dc" Netns:"/var/run/netns/8b498cb7-19f5-4258-987e-cb2590abade4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-pruner-29426400-tlv26;K8S_POD_INFRA_CONTAINER_ID=ebe3d64883064112fd6f8647f4a0269c7aac9fd68909f48cc03fd72ff547e9dc;K8S_POD_UID=e98bd1f0-fe98-48b9-82cb-86e924ade65d" Path:"", result: "", err: 2025-12-13T00:18:32.755275571+00:00 stderr F 2025-12-13T00:18:32Z [verbose] ADD starting CNI request ContainerID:"f3483754a03354916db68cc42eaee34f16ae410ba8cd3b6b27b2f3a10014afcd" Netns:"/var/run/netns/5225573a-a8e5-4530-ac78-027474ec5bc4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75b7bb6564-rnjvj;K8S_POD_INFRA_CONTAINER_ID=f3483754a03354916db68cc42eaee34f16ae410ba8cd3b6b27b2f3a10014afcd;K8S_POD_UID=d73c4e63-30ef-4915-925d-f44201c612ec" Path:"" 2025-12-13T00:18:33.076245142+00:00 stderr F 2025-12-13T00:18:33Z [verbose] Add: openshift-image-registry:image-registry-75b7bb6564-rnjvj:d73c4e63-30ef-4915-925d-f44201c612ec:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"f3483754a033549","mac":"72:a9:1f:8c:64:71"},{"name":"eth0","mac":"0a:58:0a:d9:00:29","sandbox":"/var/run/netns/5225573a-a8e5-4530-ac78-027474ec5bc4"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.41/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:18:33.076452188+00:00 stderr F I1213 00:18:33.076382 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-image-registry", Name:"image-registry-75b7bb6564-rnjvj", UID:"d73c4e63-30ef-4915-925d-f44201c612ec", APIVersion:"v1", ResourceVersion:"42030", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.41/23] from ovn-kubernetes 2025-12-13T00:18:33.098037712+00:00 stderr F 2025-12-13T00:18:33Z [verbose] ADD finished CNI request ContainerID:"f3483754a03354916db68cc42eaee34f16ae410ba8cd3b6b27b2f3a10014afcd" Netns:"/var/run/netns/5225573a-a8e5-4530-ac78-027474ec5bc4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75b7bb6564-rnjvj;K8S_POD_INFRA_CONTAINER_ID=f3483754a03354916db68cc42eaee34f16ae410ba8cd3b6b27b2f3a10014afcd;K8S_POD_UID=d73c4e63-30ef-4915-925d-f44201c612ec" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"72:a9:1f:8c:64:71\",\"name\":\"f3483754a033549\"},{\"mac\":\"0a:58:0a:d9:00:29\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/5225573a-a8e5-4530-ac78-027474ec5bc4\"}],\"ips\":[{\"address\":\"10.217.0.41/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:19:18.098615895+00:00 stderr F 2025-12-13T00:19:18Z [verbose] DEL starting CNI request ContainerID:"f3745ee021356f2a087478a33277cbbb141272af013af64d28283b4eedc33c16" Netns:"/var/run/netns/1d610287-f81b-455b-a8c6-2bc9fc06507d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=f3745ee021356f2a087478a33277cbbb141272af013af64d28283b4eedc33c16;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"" 2025-12-13T00:19:18.098684677+00:00 stderr F 2025-12-13T00:19:18Z [verbose] Del: openshift-image-registry:image-registry-75779c45fd-v2j2v:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319:ovn-kubernetes:eth0 {"cniVersion":"0.4.0","name":"ovn-kubernetes","type":"ovn-k8s-cni-overlay","ipam":{},"dns":{},"logFile":"/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log","logLevel":"4","logfile-maxsize":100,"logfile-maxbackups":5,"logfile-maxage":0,"runtimeConfig":{}} 2025-12-13T00:19:18.222950535+00:00 stderr F 2025-12-13T00:19:18Z [verbose] DEL finished CNI request ContainerID:"f3745ee021356f2a087478a33277cbbb141272af013af64d28283b4eedc33c16" Netns:"/var/run/netns/1d610287-f81b-455b-a8c6-2bc9fc06507d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=image-registry-75779c45fd-v2j2v;K8S_POD_INFRA_CONTAINER_ID=f3745ee021356f2a087478a33277cbbb141272af013af64d28283b4eedc33c16;K8S_POD_UID=f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Path:"", result: "", err: 2025-12-13T00:19:57.348420630+00:00 stderr F 2025-12-13T00:19:57Z [verbose] ADD starting CNI request ContainerID:"0cd59daaff2210a8feba8a95a04d6ee07b96c64856dce07f56b5c1fdad1ceb37" Netns:"/var/run/netns/11aaf5ff-61e9-4955-9c46-163b21c02c12" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-13-crc;K8S_POD_INFRA_CONTAINER_ID=0cd59daaff2210a8feba8a95a04d6ee07b96c64856dce07f56b5c1fdad1ceb37;K8S_POD_UID=7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9" Path:"" 2025-12-13T00:19:57.528079015+00:00 stderr F 2025-12-13T00:19:57Z [verbose] Add: openshift-kube-apiserver:installer-13-crc:7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9:ovn-kubernetes(ovn-kubernetes):eth0 
{"cniVersion":"0.4.0","interfaces":[{"name":"0cd59daaff2210a","mac":"02:7f:22:64:40:9c"},{"name":"eth0","mac":"0a:58:0a:d9:00:2a","sandbox":"/var/run/netns/11aaf5ff-61e9-4955-9c46-163b21c02c12"}],"ips":[{"version":"4","interface":1,"address":"10.217.0.42/23","gateway":"10.217.0.1"}],"dns":{}} 2025-12-13T00:19:57.528316071+00:00 stderr F I1213 00:19:57.528254 8420 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"installer-13-crc", UID:"7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9", APIVersion:"v1", ResourceVersion:"42685", FieldPath:""}): type: 'Normal' reason: 'AddedInterface' Add eth0 [10.217.0.42/23] from ovn-kubernetes 2025-12-13T00:19:57.540916159+00:00 stderr F 2025-12-13T00:19:57Z [verbose] ADD finished CNI request ContainerID:"0cd59daaff2210a8feba8a95a04d6ee07b96c64856dce07f56b5c1fdad1ceb37" Netns:"/var/run/netns/11aaf5ff-61e9-4955-9c46-163b21c02c12" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-13-crc;K8S_POD_INFRA_CONTAINER_ID=0cd59daaff2210a8feba8a95a04d6ee07b96c64856dce07f56b5c1fdad1ceb37;K8S_POD_UID=7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9" Path:"", result: "{\"Result\":{\"cniVersion\":\"1.1.0\",\"interfaces\":[{\"mac\":\"02:7f:22:64:40:9c\",\"name\":\"0cd59daaff2210a\"},{\"mac\":\"0a:58:0a:d9:00:2a\",\"name\":\"eth0\",\"sandbox\":\"/var/run/netns/11aaf5ff-61e9-4955-9c46-163b21c02c12\"}],\"ips\":[{\"address\":\"10.217.0.42/23\",\"gateway\":\"10.217.0.1\",\"interface\":1}]}}", err: 2025-12-13T00:20:01.589805477+00:00 stderr F 2025-12-13T00:20:01Z [verbose] readiness indicator file is gone. 
restart multus-daemon

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/northd/0.log <==

2025-12-13T00:20:04.103876862+00:00 stderr F + [[ -f /env/_master ]] 2025-12-13T00:20:04.103876862+00:00 stderr F + .
/ovnkube-lib/ovnkube-lib.sh 2025-12-13T00:20:04.104005545+00:00 stderr F ++ set -x 2025-12-13T00:20:04.104005545+00:00 stderr F ++ K8S_NODE= 2025-12-13T00:20:04.104005545+00:00 stderr F ++ [[ -n '' ]] 2025-12-13T00:20:04.104005545+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-12-13T00:20:04.104005545+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-12-13T00:20:04.104005545+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-12-13T00:20:04.104019205+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-12-13T00:20:04.104019205+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-12-13T00:20:04.104019205+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-12-13T00:20:04.104027316+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-12-13T00:20:04.104035786+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-12-13T00:20:04.104035786+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-12-13T00:20:04.104043216+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-12-13T00:20:04.105129817+00:00 stderr F + trap quit-ovn-northd TERM INT 2025-12-13T00:20:04.105129817+00:00 stderr F + start-ovn-northd info 2025-12-13T00:20:04.105149407+00:00 stderr F + local log_level=info 2025-12-13T00:20:04.105149407+00:00 stderr F + [[ 1 -ne 1 ]] 2025-12-13T00:20:04.105604850+00:00 stderr F ++ date -Iseconds 2025-12-13T00:20:04.107781360+00:00 stderr F + echo '2025-12-13T00:20:04+00:00 - starting ovn-northd' 2025-12-13T00:20:04.108348665+00:00 stdout F 2025-12-13T00:20:04+00:00 - starting ovn-northd 2025-12-13T00:20:04.108362916+00:00 stderr F + wait 28386 2025-12-13T00:20:04.108362916+00:00 stderr F + exec ovn-northd --no-chdir -vconsole:info -vfile:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' --pidfile /var/run/ovn/ovn-northd.pid --n-threads=1 2025-12-13T00:20:04.115889834+00:00 stderr F 2025-12-13T00:20:04.115Z|00001|ovn_northd|INFO|OVN 
internal version is : [24.03.3-20.33.0-72.6] 2025-12-13T00:20:04.116617843+00:00 stderr F 2025-12-13T00:20:04.116Z|00002|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connecting... 2025-12-13T00:20:04.116617843+00:00 stderr F 2025-12-13T00:20:04.116Z|00003|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connection attempt failed (No such file or directory) 2025-12-13T00:20:04.116635124+00:00 stderr F 2025-12-13T00:20:04.116Z|00004|ovn_northd|INFO|OVN NB IDL reconnected, force recompute. 2025-12-13T00:20:04.116677745+00:00 stderr F 2025-12-13T00:20:04.116Z|00005|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2025-12-13T00:20:04.116686765+00:00 stderr F 2025-12-13T00:20:04.116Z|00006|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2025-12-13T00:20:04.116702506+00:00 stderr F 2025-12-13T00:20:04.116Z|00007|ovn_northd|INFO|OVN SB IDL reconnected, force recompute. 2025-12-13T00:20:05.117344218+00:00 stderr F 2025-12-13T00:20:05.117Z|00008|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connecting... 2025-12-13T00:20:05.117386479+00:00 stderr F 2025-12-13T00:20:05.117Z|00009|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connected 2025-12-13T00:20:05.117402779+00:00 stderr F 2025-12-13T00:20:05.117Z|00010|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2025-12-13T00:20:05.117455401+00:00 stderr F 2025-12-13T00:20:05.117Z|00011|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2025-12-13T00:20:05.117455401+00:00 stderr F 2025-12-13T00:20:05.117Z|00012|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: waiting 2 seconds before reconnect 2025-12-13T00:20:07.118263832+00:00 stderr F 2025-12-13T00:20:07.118Z|00013|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 
2025-12-13T00:20:07.118819118+00:00 stderr F 2025-12-13T00:20:07.118Z|00014|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connected 2025-12-13T00:20:07.119356373+00:00 stderr F 2025-12-13T00:20:07.119Z|00015|ovn_northd|INFO|ovn-northd lock acquired. This ovn-northd instance is now active. 2025-12-13T00:20:09.435351466+00:00 stderr F 2025-12-13T00:20:09.435Z|00016|ipam|WARN|d6057acb-0f02-4ebe-8cea-b3228e61764c: Duplicate IP set: 10.217.0.2 2025-12-13T00:20:20.615288208+00:00 stderr F 2025-12-13T00:20:20.615Z|00017|memory|INFO|16392 kB peak resident set size after 16.5 seconds 2025-12-13T00:20:20.615551396+00:00 stderr F 2025-12-13T00:20:20.615Z|00018|memory|INFO|idl-cells-OVN_Northbound:2694 idl-cells-OVN_Southbound:12454 2025-12-13T00:21:09.328535446+00:00 stderr F 2025-12-13T00:21:09.328Z|00019|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kube-rbac-proxy-ovn-metrics/0.log <==

2025-12-13T00:20:03.928274070+00:00 stderr F ++ K8S_NODE= 2025-12-13T00:20:03.928496996+00:00 stderr F ++ [[ -n '' ]] 2025-12-13T00:20:03.928553208+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-12-13T00:20:03.928600719+00:00
stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-12-13T00:20:03.928644830+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-12-13T00:20:03.928688631+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-12-13T00:20:03.928732553+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-12-13T00:20:03.928775764+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-12-13T00:20:03.928818235+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-12-13T00:20:03.928860466+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-12-13T00:20:03.928902687+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-12-13T00:20:03.929008900+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-12-13T00:20:03.930862342+00:00 stderr F + start-rbac-proxy-node ovn-metrics 9105 29105 /etc/pki/tls/metrics-cert/tls.key /etc/pki/tls/metrics-cert/tls.crt 2025-12-13T00:20:03.930969234+00:00 stderr F + local detail=ovn-metrics 2025-12-13T00:20:03.931027246+00:00 stderr F + local listen_port=9105 2025-12-13T00:20:03.931068007+00:00 stderr F + local upstream_port=29105 2025-12-13T00:20:03.931107908+00:00 stderr F + local privkey=/etc/pki/tls/metrics-cert/tls.key 2025-12-13T00:20:03.931151069+00:00 stderr F + local clientcert=/etc/pki/tls/metrics-cert/tls.crt 2025-12-13T00:20:03.931192560+00:00 stderr F + [[ 5 -ne 5 ]] 2025-12-13T00:20:03.932020523+00:00 stderr F ++ date -Iseconds 2025-12-13T00:20:03.934447870+00:00 stderr F + echo '2025-12-13T00:20:03+00:00 INFO: waiting for ovn-metrics certs to be mounted' 2025-12-13T00:20:03.934535982+00:00 stdout F 2025-12-13T00:20:03+00:00 INFO: waiting for ovn-metrics certs to be mounted 2025-12-13T00:20:03.934595454+00:00 stderr F + wait-for-certs ovn-metrics /etc/pki/tls/metrics-cert/tls.key /etc/pki/tls/metrics-cert/tls.crt 2025-12-13T00:20:03.934663826+00:00 stderr F + local detail=ovn-metrics 2025-12-13T00:20:03.934723208+00:00 stderr F + local 
privkey=/etc/pki/tls/metrics-cert/tls.key 2025-12-13T00:20:03.934768899+00:00 stderr F + local clientcert=/etc/pki/tls/metrics-cert/tls.crt 2025-12-13T00:20:03.934805640+00:00 stderr F + [[ 3 -ne 3 ]] 2025-12-13T00:20:03.934838531+00:00 stderr F + retries=0 2025-12-13T00:20:03.935328104+00:00 stderr F ++ date +%s 2025-12-13T00:20:03.937513195+00:00 stderr F + TS=1765585203 2025-12-13T00:20:03.937594567+00:00 stderr F + WARN_TS=1765586403 2025-12-13T00:20:03.937637988+00:00 stderr F + HAS_LOGGED_INFO=0 2025-12-13T00:20:03.937678849+00:00 stderr F + [[ ! -f /etc/pki/tls/metrics-cert/tls.key ]] 2025-12-13T00:20:03.937732700+00:00 stderr F + [[ ! -f /etc/pki/tls/metrics-cert/tls.crt ]] 2025-12-13T00:20:03.938222114+00:00 stderr F ++ date -Iseconds 2025-12-13T00:20:03.940212499+00:00 stderr F + echo '2025-12-13T00:20:03+00:00 INFO: ovn-metrics certs mounted, starting kube-rbac-proxy' 2025-12-13T00:20:03.940256690+00:00 stdout F 2025-12-13T00:20:03+00:00 INFO: ovn-metrics certs mounted, starting kube-rbac-proxy 2025-12-13T00:20:03.940332412+00:00 stderr F + exec /usr/bin/kube-rbac-proxy --logtostderr --secure-listen-address=:9105 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 --upstream=http://127.0.0.1:29105/ --tls-private-key-file=/etc/pki/tls/metrics-cert/tls.key --tls-cert-file=/etc/pki/tls/metrics-cert/tls.crt 2025-12-13T00:20:03.978169306+00:00 stderr F W1213 00:20:03.978081 28285 deprecated.go:66] 2025-12-13T00:20:03.978169306+00:00 stderr F ==== Removed Flag Warning ====================== 2025-12-13T00:20:03.978169306+00:00 stderr F 2025-12-13T00:20:03.978169306+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-12-13T00:20:03.978169306+00:00 stderr F 2025-12-13T00:20:03.978169306+00:00 stderr F =============================================== 2025-12-13T00:20:03.978169306+00:00 stderr F 2025-12-13T00:20:03.978712601+00:00 stderr F I1213 00:20:03.978695 28285 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:20:03.978779083+00:00 stderr F I1213 00:20:03.978767 28285 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:20:03.979281746+00:00 stderr F I1213 00:20:03.979242 28285 kube-rbac-proxy.go:395] Starting TCP socket on :9105 2025-12-13T00:20:03.979587975+00:00 stderr F I1213 00:20:03.979561 28285 kube-rbac-proxy.go:402] Listening securely on :9105

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kube-rbac-proxy-node/0.log <==

2025-12-13T00:20:03.775575509+00:00 stderr F ++ K8S_NODE= 2025-12-13T00:20:03.775575509+00:00 stderr F ++ [[ -n '' ]] 2025-12-13T00:20:03.775575509+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-12-13T00:20:03.775575509+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-12-13T00:20:03.775575509+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-12-13T00:20:03.775575509+00:00 stderr F ++
vswitch_dbsock=/var/run/openvswitch/db.sock 2025-12-13T00:20:03.775575509+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-12-13T00:20:03.775680612+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-12-13T00:20:03.775680612+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-12-13T00:20:03.775680612+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-12-13T00:20:03.775680612+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-12-13T00:20:03.775680612+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-12-13T00:20:03.776527466+00:00 stderr F + start-rbac-proxy-node ovn-node-metrics 9103 29103 /etc/pki/tls/metrics-cert/tls.key /etc/pki/tls/metrics-cert/tls.crt 2025-12-13T00:20:03.776527466+00:00 stderr F + local detail=ovn-node-metrics 2025-12-13T00:20:03.776527466+00:00 stderr F + local listen_port=9103 2025-12-13T00:20:03.776544146+00:00 stderr F + local upstream_port=29103 2025-12-13T00:20:03.776544146+00:00 stderr F + local privkey=/etc/pki/tls/metrics-cert/tls.key 2025-12-13T00:20:03.776544146+00:00 stderr F + local clientcert=/etc/pki/tls/metrics-cert/tls.crt 2025-12-13T00:20:03.776544146+00:00 stderr F + [[ 5 -ne 5 ]] 2025-12-13T00:20:03.776853285+00:00 stderr F ++ date -Iseconds 2025-12-13T00:20:03.778720446+00:00 stderr F + echo '2025-12-13T00:20:03+00:00 INFO: waiting for ovn-node-metrics certs to be mounted' 2025-12-13T00:20:03.778736476+00:00 stdout F 2025-12-13T00:20:03+00:00 INFO: waiting for ovn-node-metrics certs to be mounted 2025-12-13T00:20:03.778741536+00:00 stderr F + wait-for-certs ovn-node-metrics /etc/pki/tls/metrics-cert/tls.key /etc/pki/tls/metrics-cert/tls.crt 2025-12-13T00:20:03.778764207+00:00 stderr F + local detail=ovn-node-metrics 2025-12-13T00:20:03.778764207+00:00 stderr F + local privkey=/etc/pki/tls/metrics-cert/tls.key 2025-12-13T00:20:03.778764207+00:00 stderr F + local clientcert=/etc/pki/tls/metrics-cert/tls.crt 2025-12-13T00:20:03.778772367+00:00 stderr F + [[ 3 -ne 3 ]] 
2025-12-13T00:20:03.778772367+00:00 stderr F + retries=0 2025-12-13T00:20:03.779106957+00:00 stderr F ++ date +%s 2025-12-13T00:20:03.780534436+00:00 stderr F + TS=1765585203 2025-12-13T00:20:03.780549966+00:00 stderr F + WARN_TS=1765586403 2025-12-13T00:20:03.780549966+00:00 stderr F + HAS_LOGGED_INFO=0 2025-12-13T00:20:03.780549966+00:00 stderr F + [[ ! -f /etc/pki/tls/metrics-cert/tls.key ]] 2025-12-13T00:20:03.780579437+00:00 stderr F + [[ ! -f /etc/pki/tls/metrics-cert/tls.crt ]] 2025-12-13T00:20:03.780896406+00:00 stderr F ++ date -Iseconds 2025-12-13T00:20:03.782404018+00:00 stdout F 2025-12-13T00:20:03+00:00 INFO: ovn-node-metrics certs mounted, starting kube-rbac-proxy 2025-12-13T00:20:03.782418208+00:00 stderr F + echo '2025-12-13T00:20:03+00:00 INFO: ovn-node-metrics certs mounted, starting kube-rbac-proxy' 2025-12-13T00:20:03.782418208+00:00 stderr F + exec /usr/bin/kube-rbac-proxy --logtostderr --secure-listen-address=:9103 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 --upstream=http://127.0.0.1:29103/ --tls-private-key-file=/etc/pki/tls/metrics-cert/tls.key --tls-cert-file=/etc/pki/tls/metrics-cert/tls.crt 2025-12-13T00:20:03.812279291+00:00 stderr F W1213 00:20:03.812180 28230 deprecated.go:66] 2025-12-13T00:20:03.812279291+00:00 stderr F ==== Removed Flag Warning ====================== 2025-12-13T00:20:03.812279291+00:00 stderr F 2025-12-13T00:20:03.812279291+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-12-13T00:20:03.812279291+00:00 stderr F 2025-12-13T00:20:03.812279291+00:00 stderr F =============================================== 2025-12-13T00:20:03.812279291+00:00 stderr F 2025-12-13T00:20:03.812669322+00:00 stderr F I1213 00:20:03.812635 28230 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:20:03.812683612+00:00 stderr F I1213 00:20:03.812677 28230 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:20:03.813113095+00:00 stderr F I1213 00:20:03.813074 28230 kube-rbac-proxy.go:395] Starting TCP socket on :9103 2025-12-13T00:20:03.813477565+00:00 stderr F I1213 00:20:03.813443 28230 kube-rbac-proxy.go:402] Listening securely on :9103

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovn-acl-logging/0.log <==

2025-12-13T00:20:03.622310814+00:00 stderr F ++ K8S_NODE= 2025-12-13T00:20:03.622310814+00:00 stderr F ++ [[ -n '' ]] 2025-12-13T00:20:03.622310814+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-12-13T00:20:03.622310814+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-12-13T00:20:03.622310814+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-12-13T00:20:03.622310814+00:00 stderr F ++
vswitch_dbsock=/var/run/openvswitch/db.sock 2025-12-13T00:20:03.622310814+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-12-13T00:20:03.622462388+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-12-13T00:20:03.622462388+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-12-13T00:20:03.622462388+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-12-13T00:20:03.622462388+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-12-13T00:20:03.622462388+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-12-13T00:20:03.623792634+00:00 stderr F + start-audit-log-rotation 2025-12-13T00:20:03.623818314+00:00 stderr F + MAXFILESIZE=50000000 2025-12-13T00:20:03.623818314+00:00 stderr F + MAXLOGFILES=5 2025-12-13T00:20:03.624617466+00:00 stderr F ++ dirname /var/log/ovn/acl-audit-log.log 2025-12-13T00:20:03.627493876+00:00 stderr F + LOGDIR=/var/log/ovn 2025-12-13T00:20:03.627493876+00:00 stderr F + local retries=0 2025-12-13T00:20:03.627517386+00:00 stderr F + [[ 30 -gt 0 ]] 2025-12-13T00:20:03.627517386+00:00 stderr F + (( retries += 1 )) 2025-12-13T00:20:03.628132554+00:00 stderr F ++ cat /var/run/ovn/ovn-controller.pid 2025-12-13T00:20:03.630133648+00:00 stderr F + CONTROLLERPID=28140 2025-12-13T00:20:03.630133648+00:00 stderr F + [[ -n 28140 ]] 2025-12-13T00:20:03.630158589+00:00 stderr F + break 2025-12-13T00:20:03.630158589+00:00 stderr F + [[ -z 28140 ]] 2025-12-13T00:20:03.630416686+00:00 stderr F + true 2025-12-13T00:20:03.630429006+00:00 stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 2025-12-13T00:20:03.630548990+00:00 stderr F + tail -F /var/log/ovn/acl-audit-log.log 2025-12-13T00:20:03.631625040+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2025-12-13T00:20:03.631723833+00:00 stderr F ++ tr -s '\t' ' ' 2025-12-13T00:20:03.631802145+00:00 stderr F ++ cut '-d ' -f1 2025-12-13T00:20:03.634544160+00:00 stderr F + file_size=0 2025-12-13T00:20:03.634544160+00:00 stderr F + '[' 0 -gt 50000000 ']' 
2025-12-13T00:20:03.635904817+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2025-12-13T00:20:03.636069212+00:00 stderr F ++ wc -l 2025-12-13T00:20:03.638744506+00:00 stderr F + num_files=1 2025-12-13T00:20:03.638744506+00:00 stderr F + '[' 1 -gt 5 ']' 2025-12-13T00:20:03.638769266+00:00 stderr F + sleep 30 2025-12-13T00:20:33.640832151+00:00 stderr F + true 2025-12-13T00:20:33.640832151+00:00 stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 2025-12-13T00:20:33.642186598+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2025-12-13T00:20:33.649170356+00:00 stderr F ++ tr -s '\t' ' ' 2025-12-13T00:20:33.649376821+00:00 stderr F ++ cut '-d ' -f1 2025-12-13T00:20:33.652071944+00:00 stderr F + file_size=0 2025-12-13T00:20:33.652104255+00:00 stderr F + '[' 0 -gt 50000000 ']' 2025-12-13T00:20:33.653169954+00:00 stderr F ++ wc -l 2025-12-13T00:20:33.653288677+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2025-12-13T00:20:33.656338119+00:00 stderr F + num_files=1 2025-12-13T00:20:33.656354650+00:00 stderr F + '[' 1 -gt 5 ']' 2025-12-13T00:20:33.656354650+00:00 stderr F + sleep 30 2025-12-13T00:21:03.658422419+00:00 stderr F + true 2025-12-13T00:21:03.658422419+00:00 stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 2025-12-13T00:21:03.659705224+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2025-12-13T00:21:03.659798307+00:00 stderr F ++ tr -s '\t' ' ' 2025-12-13T00:21:03.659957371+00:00 stderr F ++ cut '-d ' -f1 2025-12-13T00:21:03.663104086+00:00 stderr F + file_size=0 2025-12-13T00:21:03.663104086+00:00 stderr F + '[' 0 -gt 50000000 ']' 2025-12-13T00:21:03.663952958+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2025-12-13T00:21:03.664127923+00:00 stderr F ++ wc -l 2025-12-13T00:21:03.666918518+00:00 stderr F + num_files=1 2025-12-13T00:21:03.666964029+00:00 stderr F + '[' 1 -gt 5 ']' 2025-12-13T00:21:03.666984630+00:00 stderr F + sleep 30 2025-12-13T00:21:33.669163791+00:00 stderr F + true 2025-12-13T00:21:33.669163791+00:00 
stderr F + '[' -f /var/log/ovn/acl-audit-log.log ']' 2025-12-13T00:21:33.670555919+00:00 stderr F ++ du -b /var/log/ovn/acl-audit-log.log 2025-12-13T00:21:33.670577970+00:00 stderr F ++ tr -s '\t' ' ' 2025-12-13T00:21:33.670958420+00:00 stderr F ++ cut '-d ' -f1 2025-12-13T00:21:33.674203587+00:00 stderr F + file_size=0 2025-12-13T00:21:33.674203587+00:00 stderr F + '[' 0 -gt 50000000 ']' 2025-12-13T00:21:33.675221705+00:00 stderr F ++ ls -1 /var/log/ovn/acl-audit-log.log 2025-12-13T00:21:33.675380499+00:00 stderr F ++ wc -l 2025-12-13T00:21:33.678324498+00:00 stderr F + num_files=1 2025-12-13T00:21:33.678324498+00:00 stderr F + '[' 1 -gt 5 ']' 2025-12-13T00:21:33.678324498+00:00 stderr F + sleep 30

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovnkube-controller/0.log <==

2025-12-13T00:20:08.820259995+00:00 stderr F + .
/ovnkube-lib/ovnkube-lib.sh 2025-12-13T00:20:08.820499881+00:00 stderr F ++ set -x 2025-12-13T00:20:08.820499881+00:00 stderr F ++ K8S_NODE=crc 2025-12-13T00:20:08.820499881+00:00 stderr F ++ [[ -n crc ]] 2025-12-13T00:20:08.820499881+00:00 stderr F ++ [[ -f /env/crc ]] 2025-12-13T00:20:08.820499881+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-12-13T00:20:08.820499881+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-12-13T00:20:08.820499881+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-12-13T00:20:08.820499881+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-12-13T00:20:08.820499881+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-12-13T00:20:08.820499881+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-12-13T00:20:08.820499881+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-12-13T00:20:08.820499881+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-12-13T00:20:08.820499881+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-12-13T00:20:08.820499881+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-12-13T00:20:08.821261902+00:00 stderr F + start-ovnkube-node 4 29103 29105 2025-12-13T00:20:08.821286843+00:00 stderr F + local log_level=4 2025-12-13T00:20:08.821286843+00:00 stderr F + local metrics_port=29103 2025-12-13T00:20:08.821286843+00:00 stderr F + local ovn_metrics_port=29105 2025-12-13T00:20:08.821286843+00:00 stderr F + [[ 3 -ne 3 ]] 2025-12-13T00:20:08.821286843+00:00 stderr F + cni-bin-copy 2025-12-13T00:20:08.821306213+00:00 stderr F + . 
/host/etc/os-release 2025-12-13T00:20:08.821661084+00:00 stderr F ++ NAME='Red Hat Enterprise Linux CoreOS' 2025-12-13T00:20:08.821661084+00:00 stderr F ++ ID=rhcos 2025-12-13T00:20:08.821661084+00:00 stderr F ++ ID_LIKE='rhel fedora' 2025-12-13T00:20:08.821661084+00:00 stderr F ++ VERSION=416.94.202406172220-0 2025-12-13T00:20:08.821661084+00:00 stderr F ++ VERSION_ID=4.16 2025-12-13T00:20:08.821661084+00:00 stderr F ++ VARIANT=CoreOS 2025-12-13T00:20:08.821661084+00:00 stderr F ++ VARIANT_ID=coreos 2025-12-13T00:20:08.821661084+00:00 stderr F ++ PLATFORM_ID=platform:el9 2025-12-13T00:20:08.821661084+00:00 stderr F ++ PRETTY_NAME='Red Hat Enterprise Linux CoreOS 416.94.202406172220-0' 2025-12-13T00:20:08.821661084+00:00 stderr F ++ ANSI_COLOR='0;31' 2025-12-13T00:20:08.821661084+00:00 stderr F ++ CPE_NAME=cpe:/o:redhat:enterprise_linux:9::baseos::coreos 2025-12-13T00:20:08.821661084+00:00 stderr F ++ HOME_URL=https://www.redhat.com/ 2025-12-13T00:20:08.821661084+00:00 stderr F ++ DOCUMENTATION_URL=https://docs.okd.io/latest/welcome/index.html 2025-12-13T00:20:08.821661084+00:00 stderr F ++ BUG_REPORT_URL=https://access.redhat.com/labs/rhir/ 2025-12-13T00:20:08.821661084+00:00 stderr F ++ REDHAT_BUGZILLA_PRODUCT='OpenShift Container Platform' 2025-12-13T00:20:08.821661084+00:00 stderr F ++ REDHAT_BUGZILLA_PRODUCT_VERSION=4.16 2025-12-13T00:20:08.821661084+00:00 stderr F ++ REDHAT_SUPPORT_PRODUCT='OpenShift Container Platform' 2025-12-13T00:20:08.821661084+00:00 stderr F ++ REDHAT_SUPPORT_PRODUCT_VERSION=4.16 2025-12-13T00:20:08.821661084+00:00 stderr F ++ OPENSHIFT_VERSION=4.16 2025-12-13T00:20:08.821661084+00:00 stderr F ++ RHEL_VERSION=9.4 2025-12-13T00:20:08.821661084+00:00 stderr F ++ OSTREE_VERSION=416.94.202406172220-0 2025-12-13T00:20:08.821661084+00:00 stderr F + rhelmajor= 2025-12-13T00:20:08.821661084+00:00 stderr F + case "${ID}" in 2025-12-13T00:20:08.822582089+00:00 stderr F ++ cut -f 5 -d : 2025-12-13T00:20:08.822582089+00:00 stderr F ++ echo 
cpe:/o:redhat:enterprise_linux:9::baseos::coreos 2025-12-13T00:20:08.826277811+00:00 stderr F + RHEL_VERSION=9 2025-12-13T00:20:08.826277811+00:00 stderr F ++ echo 9 2025-12-13T00:20:08.826277811+00:00 stderr F ++ sed -E 's/([0-9]+)\.{1}[0-9]+(\.[0-9]+)?/\1/' 2025-12-13T00:20:08.828216264+00:00 stderr F + rhelmajor=9 2025-12-13T00:20:08.828216264+00:00 stderr F + sourcedir=/usr/libexec/cni/ 2025-12-13T00:20:08.828216264+00:00 stderr F + case "${rhelmajor}" in 2025-12-13T00:20:08.828216264+00:00 stderr F + sourcedir=/usr/libexec/cni/rhel9 2025-12-13T00:20:08.828216264+00:00 stderr F + cp -f /usr/libexec/cni/rhel9/ovn-k8s-cni-overlay /cni-bin-dir/ 2025-12-13T00:20:08.903307955+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-12-13T00:20:08.905326160+00:00 stderr F + echo 'I1213 00:20:08.904825506 - disable conntrack on geneve port' 2025-12-13T00:20:08.905348311+00:00 stdout F I1213 00:20:08.904825506 - disable conntrack on geneve port 2025-12-13T00:20:08.905381672+00:00 stderr F + iptables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK 2025-12-13T00:20:08.909781903+00:00 stderr F + iptables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK 2025-12-13T00:20:08.912418776+00:00 stderr F + ip6tables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK 2025-12-13T00:20:08.916972292+00:00 stderr F + ip6tables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK 2025-12-13T00:20:08.923402499+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-12-13T00:20:08.924616562+00:00 stdout F I1213 00:20:08.924273423 - starting ovnkube-node 2025-12-13T00:20:08.924631203+00:00 stderr F + echo 'I1213 00:20:08.924273423 - starting ovnkube-node' 2025-12-13T00:20:08.924631203+00:00 stderr F + '[' local == shared ']' 2025-12-13T00:20:08.924631203+00:00 stderr F + '[' local == local ']' 2025-12-13T00:20:08.924642213+00:00 stderr F + gateway_mode_flags='--gateway-mode local --gateway-interface br-ex' 2025-12-13T00:20:08.924642213+00:00 stderr F + export_network_flows_flags= 
2025-12-13T00:20:08.924651563+00:00 stderr F + [[ -n '' ]] 2025-12-13T00:20:08.924658683+00:00 stderr F + [[ -n '' ]] 2025-12-13T00:20:08.924665793+00:00 stderr F + [[ -n '' ]] 2025-12-13T00:20:08.924665793+00:00 stderr F + [[ -n '' ]] 2025-12-13T00:20:08.924673494+00:00 stderr F + [[ -n '' ]] 2025-12-13T00:20:08.924680604+00:00 stderr F + [[ -n '' ]] 2025-12-13T00:20:08.924680604+00:00 stderr F + gw_interface_flag= 2025-12-13T00:20:08.924687994+00:00 stderr F + '[' -d /sys/class/net/br-ex1 ']' 2025-12-13T00:20:08.924721135+00:00 stderr F + node_mgmt_port_netdev_flags= 2025-12-13T00:20:08.924721135+00:00 stderr F + [[ -n '' ]] 2025-12-13T00:20:08.924721135+00:00 stderr F + [[ -n '' ]] 2025-12-13T00:20:08.924730595+00:00 stderr F + multi_network_enabled_flag= 2025-12-13T00:20:08.924730595+00:00 stderr F + [[ true == \t\r\u\e ]] 2025-12-13T00:20:08.924737945+00:00 stderr F + multi_network_enabled_flag=--enable-multi-network 2025-12-13T00:20:08.924745036+00:00 stderr F + multi_network_policy_enabled_flag= 2025-12-13T00:20:08.924745036+00:00 stderr F + [[ false == \t\r\u\e ]] 2025-12-13T00:20:08.924752516+00:00 stderr F + admin_network_policy_enabled_flag= 2025-12-13T00:20:08.924759576+00:00 stderr F + [[ true == \t\r\u\e ]] 2025-12-13T00:20:08.924759576+00:00 stderr F + admin_network_policy_enabled_flag=--enable-admin-network-policy 2025-12-13T00:20:08.924767066+00:00 stderr F + dns_name_resolver_enabled_flag= 2025-12-13T00:20:08.924767066+00:00 stderr F + [[ false == \t\r\u\e ]] 2025-12-13T00:20:08.924774336+00:00 stderr F + ip_forwarding_flag= 2025-12-13T00:20:08.924781437+00:00 stderr F + '[' Global == Global ']' 2025-12-13T00:20:08.924781437+00:00 stderr F + sysctl -w net.ipv4.ip_forward=1 2025-12-13T00:20:08.926547596+00:00 stdout F net.ipv4.ip_forward = 1 2025-12-13T00:20:08.926561406+00:00 stderr F + sysctl -w net.ipv6.conf.all.forwarding=1 2025-12-13T00:20:08.928887580+00:00 stdout F net.ipv6.conf.all.forwarding = 1 2025-12-13T00:20:08.929156858+00:00 stderr F 
+ NETWORK_NODE_IDENTITY_ENABLE= 2025-12-13T00:20:08.929156858+00:00 stderr F + [[ true == \t\r\u\e ]] 2025-12-13T00:20:08.929156858+00:00 stderr F + NETWORK_NODE_IDENTITY_ENABLE=' 2025-12-13T00:20:08.929156858+00:00 stderr F --bootstrap-kubeconfig=/var/lib/kubelet/kubeconfig 2025-12-13T00:20:08.929156858+00:00 stderr F --cert-dir=/etc/ovn/ovnkube-node-certs 2025-12-13T00:20:08.929156858+00:00 stderr F --cert-duration=24h 2025-12-13T00:20:08.929156858+00:00 stderr F ' 2025-12-13T00:20:08.929174068+00:00 stderr F + ovn_v4_join_subnet_opt= 2025-12-13T00:20:08.929174068+00:00 stderr F + [[ '' != '' ]] 2025-12-13T00:20:08.929174068+00:00 stderr F + ovn_v6_join_subnet_opt= 2025-12-13T00:20:08.929196479+00:00 stderr F + [[ '' != '' ]] 2025-12-13T00:20:08.929282901+00:00 stderr F + exec /usr/bin/ovnkube --init-ovnkube-controller crc --init-node crc --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 --inactivity-probe=180000 --gateway-mode local --gateway-interface br-ex --metrics-bind-address 127.0.0.1:29103 --ovn-metrics-bind-address 127.0.0.1:29105 --metrics-enable-pprof --metrics-enable-config-duration --export-ovs-metrics --disable-snat-multiple-gws --enable-multi-network --enable-admin-network-policy --enable-multicast --zone crc --enable-interconnect --acl-logging-rate-limit 20 --bootstrap-kubeconfig=/var/lib/kubelet/kubeconfig --cert-dir=/etc/ovn/ovnkube-node-certs --cert-duration=24h 2025-12-13T00:20:08.953079597+00:00 stderr F I1213 00:20:08.952966 28750 config.go:2178] Parsed config file /run/ovnkube-config/ovnkube.conf 2025-12-13T00:20:08.953127918+00:00 stderr F I1213 00:20:08.953028 28750 config.go:2179] Parsed config: {Default:{MTU:1400 RoutableMTU:0 ConntrackZone:64000 HostMasqConntrackZone:0 OVNMasqConntrackZone:0 HostNodePortConntrackZone:0 ReassemblyConntrackZone:0 EncapType:geneve EncapIP: EncapPort:6081 InactivityProbe:100000 OpenFlowProbe:180 OfctrlWaitBeforeClear:0 MonitorAll:true LFlowCacheEnable:true LFlowCacheLimit:0 
LFlowCacheLimitKb:1048576 RawClusterSubnets:10.217.0.0/22/23 ClusterSubnets:[] EnableUDPAggregation:true Zone:global} Logging:{File: CNIFile: LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:4 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} Monitoring:{RawNetFlowTargets: RawSFlowTargets: RawIPFIXTargets: NetFlowTargets:[] SFlowTargets:[] IPFIXTargets:[]} IPFIX:{Sampling:400 CacheActiveTimeout:60 CacheMaxFlows:0} CNI:{ConfDir:/etc/cni/net.d Plugin:ovn-k8s-cni-overlay} OVNKubernetesFeature:{EnableAdminNetworkPolicy:true EnableEgressIP:true EgressIPReachabiltyTotalTimeout:1 EnableEgressFirewall:true EnableEgressQoS:true EnableEgressService:true EgressIPNodeHealthCheckPort:9107 EnableMultiNetwork:true EnableMultiNetworkPolicy:false EnableStatelessNetPol:false EnableInterconnect:false EnableMultiExternalGateway:true EnablePersistentIPs:false EnableDNSNameResolver:false EnableServiceTemplateSupport:false} Kubernetes:{BootstrapKubeconfig: CertDir: CertDuration:10m0s Kubeconfig: CACert: CAData:[] APIServer:https://api-int.crc.testing:6443 Token: TokenFile: CompatServiceCIDR: RawServiceCIDRs:10.217.4.0/23 ServiceCIDRs:[] OVNConfigNamespace:openshift-ovn-kubernetes OVNEmptyLbEvents:false PodIP: RawNoHostSubnetNodes: NoHostSubnetNodes: HostNetworkNamespace:openshift-host-network PlatformType:None HealthzBindAddress:0.0.0.0:10256 CompatMetricsBindAddress: CompatOVNMetricsBindAddress: CompatMetricsEnablePprof:false DNSServiceNamespace:openshift-dns DNSServiceName:dns-default} Metrics:{BindAddress: OVNMetricsBindAddress: ExportOVSMetrics:false EnablePprof:false NodeServerPrivKey: NodeServerCert: EnableConfigDuration:false EnableScaleMetrics:false} OvnNorth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} OvnSouth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} Gateway:{Mode:local Interface: EgressGWInterface: NextHop: VLANID:0 NodeportEnable:true 
DisableSNATMultipleGWs:false V4JoinSubnet:100.64.0.0/16 V6JoinSubnet:fd98::/64 V4MasqueradeSubnet:169.254.169.0/29 V6MasqueradeSubnet:fd69::/125 MasqueradeIPs:{V4OVNMasqueradeIP:169.254.169.1 V6OVNMasqueradeIP:fd69::1 V4HostMasqueradeIP:169.254.169.2 V6HostMasqueradeIP:fd69::2 V4HostETPLocalMasqueradeIP:169.254.169.3 V6HostETPLocalMasqueradeIP:fd69::3 V4DummyNextHopMasqueradeIP:169.254.169.4 V6DummyNextHopMasqueradeIP:fd69::4 V4OVNServiceHairpinMasqueradeIP:169.254.169.5 V6OVNServiceHairpinMasqueradeIP:fd69::5} DisablePacketMTUCheck:false RouterSubnet: SingleNode:false DisableForwarding:false AllowNoUplink:false} MasterHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} ClusterMgrHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} HybridOverlay:{Enabled:false RawClusterSubnets: ClusterSubnets:[] VXLANPort:4789} OvnKubeNode:{Mode:full DPResourceDeviceIdsMap:map[] MgmtPortNetdev: MgmtPortDPResourceName:} ClusterManager:{V4TransitSwitchSubnet:100.88.0.0/16 V6TransitSwitchSubnet:fd97::/64}} 2025-12-13T00:20:08.954977789+00:00 stderr F I1213 00:20:08.954925 28750 certificate_store.go:130] Loading cert/key pair from "/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem". 2025-12-13T00:20:08.955171945+00:00 stderr F I1213 00:20:08.955131 28750 certificate_store.go:130] Loading cert/key pair from "/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem". 
2025-12-13T00:20:08.955262777+00:00 stderr F I1213 00:20:08.955234 28750 certificate_manager.go:356] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled 2025-12-13T00:20:08.955262777+00:00 stderr F I1213 00:20:08.955259 28750 kube.go:358] Waiting for certificate 2025-12-13T00:20:08.955301758+00:00 stderr F I1213 00:20:08.955284 28750 kube.go:365] Certificate found 2025-12-13T00:20:08.955473103+00:00 stderr F I1213 00:20:08.955362 28750 certificate_manager.go:356] kubernetes.io/kube-apiserver-client: Certificate expiration is 2025-12-14 00:07:38 +0000 UTC, rotation deadline is 2025-12-13 18:47:04.474709881 +0000 UTC 2025-12-13T00:20:08.955473103+00:00 stderr F I1213 00:20:08.955464 28750 certificate_manager.go:356] kubernetes.io/kube-apiserver-client: Waiting 18h26m55.519251658s for next certificate rotation 2025-12-13T00:20:08.955876484+00:00 stderr F I1213 00:20:08.955841 28750 cert_rotation.go:137] Starting client certificate rotation controller 2025-12-13T00:20:08.956792140+00:00 stderr F I1213 00:20:08.956741 28750 metrics.go:532] Starting metrics server at address "127.0.0.1:29103" 2025-12-13T00:20:08.957499579+00:00 stderr F I1213 00:20:08.957354 28750 libovsdb.go:62] Client for OVN_Northbound using log verbosity 4 with lumberjack &lumberjack.Logger{Filename:"/var/log/ovnkube/libovsdb.log", MaxSize:100, MaxAge:0, MaxBackups:5, LocalTime:false, Compress:true, size:0, file:(*os.File)(nil), mu:sync.Mutex{state:0, sema:0x0}, millCh:(chan bool)(nil), startMill:sync.Once{done:0x0, m:sync.Mutex{state:0, sema:0x0}}} 2025-12-13T00:20:08.957549530+00:00 stderr F I1213 00:20:08.957413 28750 node_network_controller_manager.go:98] Starting the node network controller manager, Mode: full 2025-12-13T00:20:08.957585471+00:00 stderr F I1213 00:20:08.957565 28750 factory.go:405] Starting watch factory 2025-12-13T00:20:08.957681144+00:00 stderr F I1213 00:20:08.957648 28750 reflector.go:289] Starting reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.957681144+00:00 stderr F I1213 00:20:08.957658 28750 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.957693544+00:00 stderr F I1213 00:20:08.957679 28750 reflector.go:289] Starting reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.957713475+00:00 stderr F I1213 00:20:08.957692 28750 reflector.go:325] Listing and watching *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.957713475+00:00 stderr F I1213 00:20:08.957693 28750 reflector.go:289] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.957713475+00:00 stderr F I1213 00:20:08.957702 28750 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.958012643+00:00 stderr F I1213 00:20:08.957884 28750 reflector.go:289] Starting reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.958012643+00:00 stderr F I1213 00:20:08.957919 28750 reflector.go:325] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.958139737+00:00 stderr F I1213 00:20:08.958106 28750 reflector.go:289] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.958139737+00:00 stderr F I1213 00:20:08.958125 28750 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.959168696+00:00 stderr F I1213 00:20:08.959070 28750 reflector.go:289] Starting reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.959168696+00:00 stderr F I1213 00:20:08.959092 28750 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.969864490+00:00 stderr 
F I1213 00:20:08.969804 28750 metrics.go:532] Starting metrics server at address "127.0.0.1:29105" 2025-12-13T00:20:08.972265886+00:00 stderr F I1213 00:20:08.972205 28750 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.972265886+00:00 stderr F I1213 00:20:08.972219 28750 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.973517381+00:00 stderr F I1213 00:20:08.973010 28750 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.973517381+00:00 stderr F I1213 00:20:08.973391 28750 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.978059796+00:00 stderr F I1213 00:20:08.978011 28750 ovn_db.go:374] Found OVN DB Pod running on this node. Registering OVN DB Metrics 2025-12-13T00:20:08.979753293+00:00 stderr F I1213 00:20:08.979719 28750 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.983201598+00:00 stderr F I1213 00:20:08.983125 28750 ovn_db.go:329] /var/run/openvswitch/ovnnb_db.sock getting info failed: stat /var/run/openvswitch/ovnnb_db.sock: no such file or directory 2025-12-13T00:20:08.983201598+00:00 stderr F I1213 00:20:08.983190 28750 ovn_db.go:326] ovnnb_db.sock found at /var/run/ovn/ 2025-12-13T00:20:08.986557790+00:00 stderr F I1213 00:20:08.986525 28750 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:20:08.997355018+00:00 stderr F I1213 00:20:08.997275 28750 libovsdb.go:62] Client for OVN_Southbound using log verbosity 4 with lumberjack &lumberjack.Logger{Filename:"/var/log/ovnkube/libovsdb.log", MaxSize:100, MaxAge:0, MaxBackups:5, LocalTime:false, Compress:true, size:0, file:(*os.File)(nil), mu:sync.Mutex{state:0, sema:0x0}, millCh:(chan bool)(nil), 
startMill:sync.Once{done:0x0, m:sync.Mutex{state:0, sema:0x0}}} 2025-12-13T00:20:09.003497077+00:00 stderr F I1213 00:20:09.003471 28750 ovn_db.go:421] Found db is standalone, don't register db_cluster metrics 2025-12-13T00:20:09.010476580+00:00 stderr F I1213 00:20:09.010442 28750 network_controller_manager.go:300] Starting the ovnkube controller 2025-12-13T00:20:09.010476580+00:00 stderr F I1213 00:20:09.010456 28750 network_controller_manager.go:305] Waiting up to 5m0s for NBDB zone to match: crc 2025-12-13T00:20:09.010516811+00:00 stderr F I1213 00:20:09.010502 28750 network_controller_manager.go:325] NBDB zone sync took: 37.761µs 2025-12-13T00:20:09.010516811+00:00 stderr F I1213 00:20:09.010511 28750 factory.go:405] Starting watch factory 2025-12-13T00:20:09.010611944+00:00 stderr F I1213 00:20:09.010589 28750 reflector.go:289] Starting reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-12-13T00:20:09.010611944+00:00 stderr F I1213 00:20:09.010600 28750 reflector.go:325] Listing and watching *v1alpha1.AdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-12-13T00:20:09.010693546+00:00 stderr F I1213 00:20:09.010657 28750 reflector.go:289] Starting reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-12-13T00:20:09.010693546+00:00 stderr F I1213 00:20:09.010676 28750 reflector.go:325] Listing and watching *v1alpha1.BaselineAdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-12-13T00:20:09.012414484+00:00 stderr F I1213 00:20:09.012379 28750 reflector.go:351] Caches populated for *v1alpha1.AdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-12-13T00:20:09.012414484+00:00 stderr F I1213 00:20:09.012384 28750 
reflector.go:351] Caches populated for *v1alpha1.BaselineAdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-12-13T00:20:09.057912328+00:00 stderr F I1213 00:20:09.057819 28750 reflector.go:289] Starting reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.057912328+00:00 stderr F I1213 00:20:09.057846 28750 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.061243560+00:00 stderr F I1213 00:20:09.060380 28750 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.111433494+00:00 stderr F I1213 00:20:09.111354 28750 reflector.go:289] Starting reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.111433494+00:00 stderr F I1213 00:20:09.111380 28750 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.123812545+00:00 stderr F I1213 00:20:09.123756 28750 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.158127502+00:00 stderr F I1213 00:20:09.158060 28750 reflector.go:289] Starting reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.158127502+00:00 stderr F I1213 
00:20:09.158088 28750 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.160147697+00:00 stderr F I1213 00:20:09.160101 28750 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.212617164+00:00 stderr F I1213 00:20:09.212568 28750 reflector.go:289] Starting reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.212617164+00:00 stderr F I1213 00:20:09.212590 28750 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.214603989+00:00 stderr F I1213 00:20:09.214547 28750 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.259061535+00:00 stderr F I1213 00:20:09.258982 28750 reflector.go:289] Starting reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.259061535+00:00 stderr F I1213 00:20:09.259013 28750 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.260643708+00:00 stderr F I1213 00:20:09.260615 28750 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:20:09.323167893+00:00 stderr F I1213 00:20:09.323102 28750 network_controller_manager.go:336] Waiting up to 5m0s for a node to have "crc" zone 2025-12-13T00:20:09.323167893+00:00 stderr F I1213 00:20:09.323148 28750 network_controller_manager.go:359] Waiting for node in zone sync took: 16.37µs 2025-12-13T00:20:09.357576161+00:00 stderr F I1213 00:20:09.357519 28750 network_controller_manager.go:220] SCTP support detected in OVN 2025-12-13T00:20:09.362733034+00:00 stderr F I1213 00:20:09.361979 28750 default_node_network_controller.go:133] Enable node proxy healthz server on 0.0.0.0:10256 2025-12-13T00:20:09.362991211+00:00 stderr F I1213 00:20:09.362970 28750 default_node_network_controller.go:684] Starting the default node network controller 2025-12-13T00:20:09.363041812+00:00 stderr F I1213 00:20:09.363031 28750 ovs.go:159] Exec(10): /usr/bin/ovs-vsctl --timeout=15 --no-heading --data=bare --format=csv --columns name list interface 2025-12-13T00:20:09.363139155+00:00 stderr F I1213 00:20:09.363111 28750 services_controller.go:60] Creating event broadcaster 2025-12-13T00:20:09.364112981+00:00 stderr F I1213 00:20:09.363302 28750 default_network_controller.go:328] Starting the default network controller 2025-12-13T00:20:09.364112981+00:00 stderr F I1213 00:20:09.363418 28750 address_set_sync.go:394] SyncAddressSets found 0 stale address sets, 0 of them were ignored 2025-12-13T00:20:09.365222842+00:00 stderr F I1213 00:20:09.365177 28750 acl_sync.go:167] Updating Tier of existing ACLs... 
2025-12-13T00:20:09.365243343+00:00 stderr F I1213 00:20:09.365227 28750 acl_sync.go:192] Updating tier's of all ACLs in cluster took 14.21µs 2025-12-13T00:20:09.365678815+00:00 stderr F I1213 00:20:09.365635 28750 port_group_sync.go:309] SyncPortGroups found 0 stale port groups 2025-12-13T00:20:09.368138373+00:00 stderr F I1213 00:20:09.368000 28750 default_network_controller.go:364] Existing number of nodes: 1 2025-12-13T00:20:09.368138373+00:00 stderr F I1213 00:20:09.368033 28750 ovs.go:159] Exec(11): /usr/bin/ovn-nbctl --timeout=15 --columns=_uuid list Load_Balancer_Group 2025-12-13T00:20:09.370379114+00:00 stderr F I1213 00:20:09.370297 28750 ovs.go:162] Exec(9): stdout: "" 2025-12-13T00:20:09.370379114+00:00 stderr F I1213 00:20:09.370361 28750 ovs.go:163] Exec(9): stderr: "" 2025-12-13T00:20:09.370402714+00:00 stderr F I1213 00:20:09.370375 28750 node_network_controller_manager.go:251] CheckForStaleOVSInternalPorts took 8.34091ms 2025-12-13T00:20:09.370413095+00:00 stderr F I1213 00:20:09.370402 28750 ovs.go:159] Exec(12): /usr/bin/ovs-vsctl --timeout=15 --columns=name,external_ids --data=bare --no-headings --format=csv find Interface external_ids:sandbox!="" external_ids:vf-netdev-name!="" 2025-12-13T00:20:09.370623580+00:00 stderr F I1213 00:20:09.370559 28750 ovs.go:162] Exec(10): stdout: 
"a5a31f215fdd49f\naaa0e2898f563e7\nb17235b865d4102\n098e9064cecbfaa\nadcfabc53e3efc8\n85c1f092a9e274b\novn-k8s-mp0\n4a4fffc78938390\n6741ceca359a128\nens3\n92eca3b1bb762f7\n3d1bad1ef025567\n51926de5dcbf369\nf205e6fa0cc76cd\nbr-ex\n84ec990a66b414e\nfd4386b3a903d37\nb39c75d998d6905\n16e048b797f70b2\na9511a5d94287af\nb481640912713d9\npatch-br-ex_crc-to-br-int\n3d6e166fed28a3e\n0c2018b001b98fb\npatch-br-int-to-br-ex_crc\n5335a2b03a6a955\nf3483754a033549\n023d056511fb885\ncb43fbace872486\nbr-int\ncc9e8f2c0960b84\n047d698c98b0fe0\n2c244248bacdb72\n716e032c4650ea6\n264afc75138ea0f\nd1e9ea5ea78fec2\na3f3493cadb8dfa\na65a45a3a7617d6\n260ff140b8d1e0b\n7bac34826efa2bf\nef1840579032710\n2399c00ca096e2e\n696c8068ac2d061\n052172120e22812\n7af9d3847908124\n474af639b558e29\na8c83f18faa15f6\n3951bf389c34415\n31c30f7e9c0640f\nd3ee238f663eeb3\n0cd59daaff2210a\nf82a63c55b8ff99\n8f2f2e79e395ed4\n" 2025-12-13T00:20:09.370623580+00:00 stderr F I1213 00:20:09.370594 28750 ovs.go:163] Exec(10): stderr: "" 2025-12-13T00:20:09.370659092+00:00 stderr F I1213 00:20:09.370646 28750 ovs.go:159] Exec(13): /usr/bin/ovs-ofctl dump-flows br-ex 2025-12-13T00:20:09.373244133+00:00 stderr F I1213 00:20:09.373205 28750 ovs.go:162] Exec(11): stdout: "_uuid : 59dab9f1-0389-4181-919a-0167ad60c25f\n\n_uuid : 02fd96d5-5e32-4855-b83b-d257ecda4e0c\n\n_uuid : dc1aa63b-0376-4ccb-99e2-10794dfff422\n" 2025-12-13T00:20:09.373292064+00:00 stderr F I1213 00:20:09.373277 28750 ovs.go:163] Exec(11): stderr: "" 2025-12-13T00:20:09.373414257+00:00 stderr F I1213 00:20:09.373375 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer_Group Row:map[name:clusterLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {59dab9f1-0389-4181-919a-0167ad60c25f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.373462780+00:00 stderr F I1213 00:20:09.373437 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer_Group 
Row:map[name:clusterLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {59dab9f1-0389-4181-919a-0167ad60c25f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.374161889+00:00 stderr F I1213 00:20:09.374104 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer_Group Row:map[name:clusterSwitchLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {dc1aa63b-0376-4ccb-99e2-10794dfff422}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.374161889+00:00 stderr F I1213 00:20:09.374145 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer_Group Row:map[name:clusterSwitchLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {dc1aa63b-0376-4ccb-99e2-10794dfff422}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.374532519+00:00 stderr F I1213 00:20:09.374477 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer_Group Row:map[name:clusterRouterLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {02fd96d5-5e32-4855-b83b-d257ecda4e0c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.374573260+00:00 stderr F I1213 00:20:09.374528 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer_Group Row:map[name:clusterRouterLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {02fd96d5-5e32-4855-b83b-d257ecda4e0c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.375091184+00:00 stderr F I1213 00:20:09.375048 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {406a913f-23d5-4a8b-a150-6a090ee355a5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.375131945+00:00 stderr F I1213 00:20:09.375098 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5d8bcfad-8c42-42c9-84cc-e00150241d43}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.375163086+00:00 stderr F I1213 00:20:09.375139 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e55fbb04-3e0a-43a9-a5cb-7bdd67b39a16}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.375203737+00:00 stderr F I1213 00:20:09.375174 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {efb4b620-b538-4498-8e1f-b2aa252ca816}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.375234518+00:00 stderr F I1213 00:20:09.375209 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d755c81-e306-481c-a62f-0d10e2f5d48f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.375268209+00:00 stderr F I1213 00:20:09.375242 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{437a223f-c2a4-49be-a4fb-d226930b652c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.375307560+00:00 stderr F I1213 00:20:09.375279 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {75e12f21-bf66-4829-a27e-d4ed58399124}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.375348591+00:00 stderr F I1213 00:20:09.375320 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6bfa24b-ac8f-435a-8587-008597dbe58d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.375388912+00:00 stderr F I1213 00:20:09.375363 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ed88a175-3a45-4a73-9645-9da77151d550}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.375436953+00:00 stderr F I1213 00:20:09.375401 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Copp Row:map[meters:{GoMap:map[arp:arp-rate-limiter arp-resolve:arp-resolve-rate-limiter bfd:bfd-rate-limiter event-elb:event-elb-rate-limiter icmp4-error:icmp4-error-rate-limiter icmp6-error:icmp6-error-rate-limiter reject:reject-rate-limiter svc-monitor:svc-monitor-rate-limiter tcp-reset:tcp-reset-rate-limiter]} name:ovnkube-default] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8af88be7-33a2-4869-a73a-80bcee31d385}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.375534446+00:00 
stderr F I1213 00:20:09.375428 28750 transact.go:42] Configuring OVN: [{Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {406a913f-23d5-4a8b-a150-6a090ee355a5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5d8bcfad-8c42-42c9-84cc-e00150241d43}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e55fbb04-3e0a-43a9-a5cb-7bdd67b39a16}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {efb4b620-b538-4498-8e1f-b2aa252ca816}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d755c81-e306-481c-a62f-0d10e2f5d48f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {437a223f-c2a4-49be-a4fb-d226930b652c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{75e12f21-bf66-4829-a27e-d4ed58399124}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6bfa24b-ac8f-435a-8587-008597dbe58d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Meter Row:map[bands:{GoSet:[{GoUUID:883f99d9-019f-4e2b-8c95-544f07246111}]} fair:{GoSet:[true]} unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ed88a175-3a45-4a73-9645-9da77151d550}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Copp Row:map[meters:{GoMap:map[arp:arp-rate-limiter arp-resolve:arp-resolve-rate-limiter bfd:bfd-rate-limiter event-elb:event-elb-rate-limiter icmp4-error:icmp4-error-rate-limiter icmp6-error:icmp6-error-rate-limiter reject:reject-rate-limiter svc-monitor:svc-monitor-rate-limiter tcp-reset:tcp-reset-rate-limiter]} name:ovnkube-default] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8af88be7-33a2-4869-a73a-80bcee31d385}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.377833530+00:00 stderr F I1213 00:20:09.377772 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[k8s-cluster-router:yes]} options:{GoMap:map[mcast_relay:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.377855980+00:00 stderr F I1213 00:20:09.377825 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[k8s-cluster-router:yes]} options:{GoMap:map[mcast_relay:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column 
_uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.381207022+00:00 stderr F I1213 00:20:09.381111 28750 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:MulticastCluster:DefaultDeny:Egress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:DefaultDeny]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1011 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f22c876a-c982-47fe-aa27-179517e8761d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.381406438+00:00 stderr F I1213 00:20:09.381347 28750 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:MulticastCluster:DefaultDeny:Ingress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:DefaultDeny]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1011 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {684df200-e39c-4451-bf48-c08eac5c3524}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.381483430+00:00 stderr F I1213 00:20:09.381437 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:f22c876a-c982-47fe-aa27-179517e8761d} 
{GoUUID:684df200-e39c-4451-bf48-c08eac5c3524}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.381642294+00:00 stderr F I1213 00:20:09.381487 28750 transact.go:42] Configuring OVN: [{Op:update Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:MulticastCluster:DefaultDeny:Egress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:DefaultDeny]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1011 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f22c876a-c982-47fe-aa27-179517e8761d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:MulticastCluster:DefaultDeny:Ingress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:DefaultDeny]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1011 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {684df200-e39c-4451-bf48-c08eac5c3524}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:f22c876a-c982-47fe-aa27-179517e8761d} {GoUUID:684df200-e39c-4451-bf48-c08eac5c3524}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.385457800+00:00 stderr 
F I1213 00:20:09.385420 28750 ovs.go:162] Exec(12): stdout: "" 2025-12-13T00:20:09.385517492+00:00 stderr F I1213 00:20:09.385502 28750 ovs.go:163] Exec(12): stderr: "" 2025-12-13T00:20:09.390080117+00:00 stderr F I1213 00:20:09.386246 28750 ovs.go:162] Exec(13): stdout: "NXST_FLOW reply (xid=0x4):\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=0, n_bytes=0, idle_age=448, priority=500,ip,in_port=2,nw_src=38.102.83.51,nw_dst=169.254.169.2 actions=ct(commit,table=4,zone=64001,nat(dst=38.102.83.51))\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=0, n_bytes=0, idle_age=448, priority=500,ip,in_port=2,nw_src=38.102.83.51,nw_dst=172.17.0.5 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=0, n_bytes=0, idle_age=448, priority=500,ip,in_port=2,nw_src=38.102.83.51,nw_dst=172.18.0.5 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=0, n_bytes=0, idle_age=448, priority=500,ip,in_port=2,nw_src=38.102.83.51,nw_dst=172.19.0.5 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=0, n_bytes=0, idle_age=448, priority=500,ip,in_port=2,nw_src=38.102.83.51,nw_dst=192.168.122.10 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=1777, n_bytes=179170, idle_age=5, priority=500,ip,in_port=2,nw_src=38.102.83.51,nw_dst=192.168.126.11 actions=ct(commit,table=4,zone=64001)\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=1393, n_bytes=4008243, idle_age=5, priority=500,ip,in_port=LOCAL,nw_dst=169.254.169.1 actions=ct(table=5,zone=64002,nat)\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=1854, n_bytes=190882, idle_age=3, priority=500,ip,in_port=LOCAL,nw_dst=10.217.4.0/23 actions=ct(commit,table=2,zone=64001,nat(src=169.254.169.2))\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=0, n_bytes=0, idle_age=448, priority=105,ip,in_port=2,nw_dst=10.217.4.0/23 actions=drop\n 
cookie=0xdeff105, duration=448.660s, table=0, n_packets=1441, n_bytes=4029075, idle_age=3, priority=500,ip,in_port=2,nw_src=10.217.4.0/23,nw_dst=169.254.169.2 actions=ct(table=3,zone=64001,nat)\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=0, n_bytes=0, idle_age=448, priority=205,udp,in_port=1,dl_dst=fa:16:3e:f0:63:3e,tp_dst=6081 actions=LOCAL\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=0, n_bytes=0, idle_age=448, priority=200,udp,in_port=1,tp_dst=6081 actions=NORMAL\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=0, n_bytes=0, idle_age=448, priority=200,udp,in_port=LOCAL,tp_dst=6081 actions=output:1\n cookie=0xdeff105, duration=16.512s, table=0, n_packets=0, n_bytes=0, idle_age=16, priority=110,ip,in_port=1,nw_frag=yes actions=ct(table=0,zone=64004)\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=0, n_bytes=0, idle_age=448, priority=109,ip,in_port=2,dl_src=fa:16:3e:f0:63:3e,nw_src=10.217.0.0/23 actions=ct(commit,zone=64000,exec(load:0x1->NXM_NX_CT_MARK[])),output:1\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=0, n_bytes=0, idle_age=448, priority=105,pkt_mark=0x3f0,ip,in_port=2,dl_src=fa:16:3e:f0:63:3e actions=ct(commit,zone=64000,nat(src=38.102.83.51),exec(load:0x1->NXM_NX_CT_MARK[])),output:1\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=0, n_bytes=0, idle_age=448, priority=104,ip,in_port=2,nw_src=10.217.0.0/22 actions=drop\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=7327, n_bytes=1102984, idle_age=13, priority=100,ip,in_port=2,dl_src=fa:16:3e:f0:63:3e actions=ct(commit,zone=64000,exec(load:0x1->NXM_NX_CT_MARK[])),output:1\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=61919, n_bytes=5301686, idle_age=5, priority=100,ip,in_port=LOCAL actions=ct(commit,zone=64000,exec(load:0x2->NXM_NX_CT_MARK[])),output:1\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=172203, n_bytes=2314475565, idle_age=5, priority=50,ip,in_port=1 actions=ct(table=1,zone=64000,nat)\n 
cookie=0xdeff105, duration=449.257s, table=0, n_packets=25, n_bytes=1316, idle_age=7, priority=10,in_port=2,dl_src=fa:16:3e:f0:63:3e actions=NORMAL\n cookie=0xdeff105, duration=448.660s, table=0, n_packets=30, n_bytes=1666, idle_age=7, priority=10,in_port=1,dl_dst=fa:16:3e:f0:63:3e actions=output:2,LOCAL\n cookie=0xdeff105, duration=449.257s, table=0, n_packets=0, n_bytes=0, idle_age=449, priority=9,in_port=2 actions=drop\n cookie=0x0, duration=567.901s, table=0, n_packets=72606, n_bytes=7593427, idle_age=0, priority=0 actions=NORMAL\n cookie=0xdeff105, duration=448.660s, table=1, n_packets=6217, n_bytes=4979725, idle_age=13, priority=100,ct_state=+est+trk,ct_mark=0x1,ip actions=output:2\n cookie=0xdeff105, duration=448.660s, table=1, n_packets=165804, n_bytes=2309466681, idle_age=7, priority=100,ct_state=+est+trk,ct_mark=0x2,ip actions=LOCAL\n cookie=0xdeff105, duration=448.660s, table=1, n_packets=0, n_bytes=0, idle_age=448, priority=100,ct_state=+rel+trk,ct_mark=0x1,ip actions=output:2\n cookie=0xdeff105, duration=448.660s, table=1, n_packets=0, n_bytes=0, idle_age=448, priority=100,ct_state=+rel+trk,ct_mark=0x2,ip actions=LOCAL\n cookie=0xdeff105, duration=448.660s, table=1, n_packets=0, n_bytes=0, idle_age=448, priority=15,ip,nw_dst=10.217.0.0/22 actions=output:2\n cookie=0xdeff105, duration=448.660s, table=1, n_packets=0, n_bytes=0, idle_age=448, priority=13,udp,in_port=1,tp_dst=3784 actions=output:2,LOCAL\n cookie=0xdeff105, duration=448.660s, table=1, n_packets=182, n_bytes=29159, idle_age=5, priority=10,dl_dst=fa:16:3e:f0:63:3e actions=LOCAL\n cookie=0xdeff105, duration=448.660s, table=1, n_packets=0, n_bytes=0, idle_age=448, priority=0 actions=NORMAL\n cookie=0xdeff105, duration=448.660s, table=2, n_packets=3247, n_bytes=4199125, idle_age=3, actions=mod_dl_dst:fa:16:3e:f0:63:3e,output:2\n cookie=0xdeff105, duration=448.660s, table=3, n_packets=3218, n_bytes=4208245, idle_age=3, 
actions=move:NXM_OF_ETH_DST[]->NXM_OF_ETH_SRC[],mod_dl_dst:fa:16:3e:f0:63:3e,LOCAL\n cookie=0xdeff105, duration=448.660s, table=4, n_packets=1777, n_bytes=179170, idle_age=5, ip actions=ct(commit,table=3,zone=64002,nat(src=169.254.169.1))\n cookie=0xdeff105, duration=448.660s, table=5, n_packets=1393, n_bytes=4008243, idle_age=5, ip actions=ct(commit,table=2,zone=64001,nat)\n" 2025-12-13T00:20:09.390080117+00:00 stderr F I1213 00:20:09.386396 28750 ovs.go:163] Exec(13): stderr: "" 2025-12-13T00:20:09.392719620+00:00 stderr F I1213 00:20:09.392654 28750 ovs.go:159] Exec(14): /usr/bin/ovn-sbctl --timeout=15 --no-leader-only get SB_Global . options:name 2025-12-13T00:20:09.393861172+00:00 stderr F I1213 00:20:09.393659 28750 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:MulticastCluster:AllowInterNode:Egress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:AllowInterNode]} log:false match:inport == @a4743249366342378346 && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1012 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8e398b3-e322-4290-baa5-01fafd781e3f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.394374336+00:00 stderr F I1213 00:20:09.394235 28750 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:MulticastCluster:AllowInterNode:Ingress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:AllowInterNode]} log:false match:outport == @a4743249366342378346 && 
(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1012 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1df6f93-2c80-4ef0-a5d0-488e4a7fd80a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.394853859+00:00 stderr F I1213 00:20:09.394550 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:b8e398b3-e322-4290-baa5-01fafd781e3f} {GoUUID:f1df6f93-2c80-4ef0-a5d0-488e4a7fd80a}]}}] Timeout: Where:[where column _uuid == {e9f73f57-05b3-4176-8265-1b664879a040}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.395301901+00:00 stderr F I1213 00:20:09.394925 28750 transact.go:42] Configuring OVN: [{Op:update Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:MulticastCluster:AllowInterNode:Egress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:AllowInterNode]} log:false match:inport == @a4743249366342378346 && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1012 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8e398b3-e322-4290-baa5-01fafd781e3f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:MulticastCluster:AllowInterNode:Ingress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:MulticastCluster type:AllowInterNode]} log:false match:outport == @a4743249366342378346 && 
(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1012 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1df6f93-2c80-4ef0-a5d0-488e4a7fd80a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:b8e398b3-e322-4290-baa5-01fafd781e3f} {GoUUID:f1df6f93-2c80-4ef0-a5d0-488e4a7fd80a}]}}] Timeout: Where:[where column _uuid == {e9f73f57-05b3-4176-8265-1b664879a040}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.396314120+00:00 stderr F I1213 00:20:09.396267 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch Row:map[name:join] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.396314120+00:00 stderr F I1213 00:20:09.396290 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch Row:map[name:join] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.397340958+00:00 stderr F I1213 00:20:09.397258 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:01 networks:{GoSet:[100.64.0.1/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e6a93f86-9b7d-430d-b5fd-372b961737ff}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.397426050+00:00 stderr F I1213 00:20:09.397363 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:e6a93f86-9b7d-430d-b5fd-372b961737ff}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.397466731+00:00 stderr F I1213 00:20:09.397408 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:01 networks:{GoSet:[100.64.0.1/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e6a93f86-9b7d-430d-b5fd-372b961737ff}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e6a93f86-9b7d-430d-b5fd-372b961737ff}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.398044557+00:00 stderr F I1213 00:20:09.397999 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-ovn_cluster_router]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {45b73aea-7414-4894-b242-a14caec8c198}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.398068087+00:00 stderr F I1213 00:20:09.398040 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:45b73aea-7414-4894-b242-a14caec8c198}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.398081328+00:00 stderr F I1213 00:20:09.398056 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-ovn_cluster_router]} 
port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {45b73aea-7414-4894-b242-a14caec8c198}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:45b73aea-7414-4894-b242-a14caec8c198}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.398358935+00:00 stderr F I1213 00:20:09.398329 28750 default_network_controller.go:419] Cleaning External Gateway ECMP routes 2025-12-13T00:20:09.398581841+00:00 stderr F I1213 00:20:09.398545 28750 repair.go:33] Syncing exgw routes took 197.635µs 2025-12-13T00:20:09.398597392+00:00 stderr F I1213 00:20:09.398576 28750 default_network_controller.go:430] Starting all the Watchers... 2025-12-13T00:20:09.399272741+00:00 stderr F I1213 00:20:09.399235 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift 2025-12-13T00:20:09.399272741+00:00 stderr F I1213 00:20:09.399234 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-controller-manager 2025-12-13T00:20:09.399296032+00:00 stderr F I1213 00:20:09.399270 28750 namespace.go:100] [openshift] adding namespace 2025-12-13T00:20:09.399343353+00:00 stderr F I1213 00:20:09.399236 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-ingress 2025-12-13T00:20:09.399391524+00:00 stderr F I1213 00:20:09.399358 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-host-network 2025-12-13T00:20:09.399391524+00:00 stderr F I1213 00:20:09.399365 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-ingress-canary 2025-12-13T00:20:09.399391524+00:00 stderr F I1213 00:20:09.399372 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-multus 2025-12-13T00:20:09.399391524+00:00 stderr F I1213 00:20:09.399374 28750 
obj_retry.go:502] Add event received for *v1.Namespace openshift-console 2025-12-13T00:20:09.399410085+00:00 stderr F I1213 00:20:09.399382 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-apiserver 2025-12-13T00:20:09.399476827+00:00 stderr F I1213 00:20:09.399324 28750 namespace.go:100] [openshift-controller-manager] adding namespace 2025-12-13T00:20:09.399476827+00:00 stderr F I1213 00:20:09.399329 28750 obj_retry.go:502] Add event received for *v1.Namespace default 2025-12-13T00:20:09.399501867+00:00 stderr F I1213 00:20:09.399474 28750 namespace.go:100] [default] adding namespace 2025-12-13T00:20:09.399501867+00:00 stderr F I1213 00:20:09.399309 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-config-operator 2025-12-13T00:20:09.399501867+00:00 stderr F I1213 00:20:09.399488 28750 namespace.go:100] [openshift-config-operator] adding namespace 2025-12-13T00:20:09.399501867+00:00 stderr F I1213 00:20:09.399344 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-controller-manager 2025-12-13T00:20:09.399516578+00:00 stderr F I1213 00:20:09.399502 28750 namespace.go:100] [openshift-kube-controller-manager] adding namespace 2025-12-13T00:20:09.399516578+00:00 stderr F I1213 00:20:09.399345 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-service-ca-operator 2025-12-13T00:20:09.399516578+00:00 stderr F I1213 00:20:09.399513 28750 namespace.go:100] [openshift-service-ca-operator] adding namespace 2025-12-13T00:20:09.399532838+00:00 stderr F I1213 00:20:09.399356 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-image-registry 2025-12-13T00:20:09.399532838+00:00 stderr F I1213 00:20:09.399526 28750 namespace.go:100] [openshift-image-registry] adding namespace 2025-12-13T00:20:09.399532838+00:00 stderr F I1213 00:20:09.399358 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-cloud-network-config-controller 
2025-12-13T00:20:09.399547859+00:00 stderr F I1213 00:20:09.399538 28750 namespace.go:100] [openshift-cloud-network-config-controller] adding namespace 2025-12-13T00:20:09.399547859+00:00 stderr F I1213 00:20:09.399343 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift:v4 k8s.ovn.org/name:openshift k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f040846e-f3f2-4016-b78c-e138f8ebd3e3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.399547859+00:00 stderr F I1213 00:20:09.399542 28750 namespace.go:100] [openshift-host-network] adding namespace 2025-12-13T00:20:09.399562199+00:00 stderr F I1213 00:20:09.399554 28750 namespace.go:100] [openshift-ingress-canary] adding namespace 2025-12-13T00:20:09.399577179+00:00 stderr F I1213 00:20:09.399559 28750 address_set.go:304] New(f040846e-f3f2-4016-b78c-e138f8ebd3e3/default-network-controller:Namespace:openshift:v4/a8611152139381270309) with [] 2025-12-13T00:20:09.399577179+00:00 stderr F I1213 00:20:09.399566 28750 namespace.go:100] [openshift-multus] adding namespace 2025-12-13T00:20:09.399590900+00:00 stderr F I1213 00:20:09.399577 28750 namespace.go:100] [openshift-console] adding namespace 2025-12-13T00:20:09.399603750+00:00 stderr F I1213 00:20:09.399588 28750 namespace.go:100] [openshift-kube-apiserver] adding namespace 2025-12-13T00:20:09.399647791+00:00 stderr F I1213 00:20:09.399565 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift:v4 k8s.ovn.org/name:openshift k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {f040846e-f3f2-4016-b78c-e138f8ebd3e3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.399647791+00:00 stderr F I1213 00:20:09.399346 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-authentication 2025-12-13T00:20:09.399664112+00:00 stderr F I1213 00:20:09.399649 28750 namespace.go:100] [openshift-authentication] adding namespace 2025-12-13T00:20:09.399712263+00:00 stderr F I1213 00:20:09.399692 28750 namespace.go:100] [openshift-ingress] adding namespace 2025-12-13T00:20:09.399894268+00:00 stderr F I1213 00:20:09.399869 28750 namespace.go:104] [openshift] adding namespace took 593.016µs 2025-12-13T00:20:09.399894268+00:00 stderr F I1213 00:20:09.399880 28750 obj_retry.go:541] Creating *v1.Namespace openshift took: 624.297µs 2025-12-13T00:20:09.399894268+00:00 stderr F I1213 00:20:09.399886 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-service-ca 2025-12-13T00:20:09.399894268+00:00 stderr F I1213 00:20:09.399891 28750 namespace.go:100] [openshift-service-ca] adding namespace 2025-12-13T00:20:09.399975560+00:00 stderr F I1213 00:20:09.399926 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.40]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-service-ca:v4 k8s.ovn.org/name:openshift-service-ca k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7c559651-6ac2-45b6-acf9-3f03f7401a2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.399975560+00:00 stderr F I1213 00:20:09.399964 28750 address_set.go:304] New(7c559651-6ac2-45b6-acf9-3f03f7401a2d/default-network-controller:Namespace:openshift-service-ca:v4/a15543462790031426324) with [10.217.0.40] 2025-12-13T00:20:09.399992691+00:00 stderr F I1213 
00:20:09.399969 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.40]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-service-ca:v4 k8s.ovn.org/name:openshift-service-ca k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7c559651-6ac2-45b6-acf9-3f03f7401a2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.400296509+00:00 stderr F I1213 00:20:09.400218 28750 namespace.go:104] [openshift-service-ca] adding namespace took 321.089µs 2025-12-13T00:20:09.400296509+00:00 stderr F I1213 00:20:09.400235 28750 obj_retry.go:541] Creating *v1.Namespace openshift-service-ca took: 341.569µs 2025-12-13T00:20:09.400296509+00:00 stderr F I1213 00:20:09.400243 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-cluster-samples-operator 2025-12-13T00:20:09.400296509+00:00 stderr F I1213 00:20:09.400249 28750 namespace.go:100] [openshift-cluster-samples-operator] adding namespace 2025-12-13T00:20:09.400315119+00:00 stderr F I1213 00:20:09.400288 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.46]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-samples-operator:v4 k8s.ovn.org/name:openshift-cluster-samples-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {de4bd34a-a221-4ac1-8c2d-5ac98cadd20e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.400315119+00:00 stderr F I1213 00:20:09.400309 28750 address_set.go:304] New(de4bd34a-a221-4ac1-8c2d-5ac98cadd20e/default-network-controller:Namespace:openshift-cluster-samples-operator:v4/a3083655245828550199) with 
[10.217.0.46] 2025-12-13T00:20:09.400330950+00:00 stderr F I1213 00:20:09.400314 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.46]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-samples-operator:v4 k8s.ovn.org/name:openshift-cluster-samples-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {de4bd34a-a221-4ac1-8c2d-5ac98cadd20e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.400590297+00:00 stderr F I1213 00:20:09.400560 28750 namespace.go:104] [openshift-cluster-samples-operator] adding namespace took 305.538µs 2025-12-13T00:20:09.400590297+00:00 stderr F I1213 00:20:09.400572 28750 obj_retry.go:541] Creating *v1.Namespace openshift-cluster-samples-operator took: 321.968µs 2025-12-13T00:20:09.400590297+00:00 stderr F I1213 00:20:09.400578 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-cluster-version 2025-12-13T00:20:09.400590297+00:00 stderr F I1213 00:20:09.400583 28750 namespace.go:100] [openshift-cluster-version] adding namespace 2025-12-13T00:20:09.400678379+00:00 stderr F I1213 00:20:09.400636 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.87]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-controller-manager:v4 k8s.ovn.org/name:openshift-controller-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1d06d0e2-5650-4996-848c-4f667cd28a66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.400678379+00:00 stderr F I1213 00:20:09.400665 28750 address_set.go:304] 
New(1d06d0e2-5650-4996-848c-4f667cd28a66/default-network-controller:Namespace:openshift-controller-manager:v4/a10467312518402121836) with [10.217.0.87] 2025-12-13T00:20:09.400694440+00:00 stderr F I1213 00:20:09.400671 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.87]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-controller-manager:v4 k8s.ovn.org/name:openshift-controller-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1d06d0e2-5650-4996-848c-4f667cd28a66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.400997958+00:00 stderr F I1213 00:20:09.400910 28750 namespace.go:104] [openshift-controller-manager] adding namespace took 1.45012ms 2025-12-13T00:20:09.400997958+00:00 stderr F I1213 00:20:09.400920 28750 obj_retry.go:541] Creating *v1.Namespace openshift-controller-manager took: 1.647555ms 2025-12-13T00:20:09.400997958+00:00 stderr F I1213 00:20:09.400926 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-openstack-infra 2025-12-13T00:20:09.400997958+00:00 stderr F I1213 00:20:09.400953 28750 namespace.go:100] [openshift-openstack-infra] adding namespace 2025-12-13T00:20:09.401024389+00:00 stderr F I1213 00:20:09.400995 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-openstack-infra:v4 k8s.ovn.org/name:openshift-openstack-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {df03648f-a2a6-48c0-8e83-58b35476b3fb}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.401024389+00:00 stderr F I1213 
00:20:09.401019 28750 address_set.go:304] New(df03648f-a2a6-48c0-8e83-58b35476b3fb/default-network-controller:Namespace:openshift-openstack-infra:v4/a16831300222096547883) with [] 2025-12-13T00:20:09.401040219+00:00 stderr F I1213 00:20:09.401024 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-openstack-infra:v4 k8s.ovn.org/name:openshift-openstack-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {df03648f-a2a6-48c0-8e83-58b35476b3fb}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.401058250+00:00 stderr F I1213 00:20:09.401042 28750 ovs.go:162] Exec(14): stdout: "crc\n" 2025-12-13T00:20:09.401058250+00:00 stderr F I1213 00:20:09.401053 28750 ovs.go:163] Exec(14): stderr: "" 2025-12-13T00:20:09.401166253+00:00 stderr F I1213 00:20:09.401125 28750 config.go:1590] Exec: /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . 
external_ids:ovn-remote="unix:/var/run/ovn/ovnsb_db.sock" 2025-12-13T00:20:09.401506183+00:00 stderr F I1213 00:20:09.401469 28750 namespace.go:104] [openshift-openstack-infra] adding namespace took 501.623µs 2025-12-13T00:20:09.401532574+00:00 stderr F I1213 00:20:09.401519 28750 obj_retry.go:541] Creating *v1.Namespace openshift-openstack-infra took: 557.926µs 2025-12-13T00:20:09.401568855+00:00 stderr F I1213 00:20:09.401540 28750 obj_retry.go:502] Add event received for *v1.Namespace kube-node-lease 2025-12-13T00:20:09.401568855+00:00 stderr F I1213 00:20:09.401562 28750 namespace.go:100] [kube-node-lease] adding namespace 2025-12-13T00:20:09.401580025+00:00 stderr F I1213 00:20:09.401468 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:default:v4 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be1084dd-a9ab-4193-9794-c4853eea1791}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.401652427+00:00 stderr F I1213 00:20:09.401622 28750 address_set.go:304] New(be1084dd-a9ab-4193-9794-c4853eea1791/default-network-controller:Namespace:default:v4/a4322231855293774466) with [] 2025-12-13T00:20:09.401724179+00:00 stderr F I1213 00:20:09.401652 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:default:v4 k8s.ovn.org/name:default k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be1084dd-a9ab-4193-9794-c4853eea1791}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.402441088+00:00 stderr F I1213 
00:20:09.402402 28750 namespace.go:104] [default] adding namespace took 2.91229ms 2025-12-13T00:20:09.402464169+00:00 stderr F I1213 00:20:09.402437 28750 obj_retry.go:541] Creating *v1.Namespace default took: 2.963652ms 2025-12-13T00:20:09.402464169+00:00 stderr F I1213 00:20:09.402458 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-apiserver 2025-12-13T00:20:09.402490430+00:00 stderr F I1213 00:20:09.402473 28750 namespace.go:100] [openshift-apiserver] adding namespace 2025-12-13T00:20:09.402675805+00:00 stderr F I1213 00:20:09.402618 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.23]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config-operator:v4 k8s.ovn.org/name:openshift-config-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9f40c027-345f-41f2-9721-c96b848490c2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.402729856+00:00 stderr F I1213 00:20:09.402713 28750 address_set.go:304] New(9f40c027-345f-41f2-9721-c96b848490c2/default-network-controller:Namespace:openshift-config-operator:v4/a15513656991472936797) with [10.217.0.23] 2025-12-13T00:20:09.402804498+00:00 stderr F I1213 00:20:09.402750 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.23]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config-operator:v4 k8s.ovn.org/name:openshift-config-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9f40c027-345f-41f2-9721-c96b848490c2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.403289991+00:00 stderr F I1213 
00:20:09.403269 28750 namespace.go:104] [openshift-config-operator] adding namespace took 3.773194ms 2025-12-13T00:20:09.403366034+00:00 stderr F I1213 00:20:09.403351 28750 obj_retry.go:541] Creating *v1.Namespace openshift-config-operator took: 3.864326ms 2025-12-13T00:20:09.403434885+00:00 stderr F I1213 00:20:09.403421 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-network-diagnostics 2025-12-13T00:20:09.403468866+00:00 stderr F I1213 00:20:09.403456 28750 namespace.go:100] [openshift-network-diagnostics] adding namespace 2025-12-13T00:20:09.403525858+00:00 stderr F I1213 00:20:09.403285 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-controller-manager:v4 k8s.ovn.org/name:openshift-kube-controller-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.403576569+00:00 stderr F I1213 00:20:09.403551 28750 address_set.go:304] New(7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) with [] 2025-12-13T00:20:09.403642961+00:00 stderr F I1213 00:20:09.403607 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-controller-manager:v4 k8s.ovn.org/name:openshift-kube-controller-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.404176726+00:00 stderr F I1213 00:20:09.404156 28750 namespace.go:104] [openshift-kube-controller-manager] adding namespace took 4.646948ms 2025-12-13T00:20:09.404246987+00:00 stderr F I1213 00:20:09.404231 28750 obj_retry.go:541] Creating *v1.Namespace openshift-kube-controller-manager took: 4.73361ms 2025-12-13T00:20:09.404295590+00:00 stderr F I1213 00:20:09.404271 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-ovn-kubernetes 2025-12-13T00:20:09.404342421+00:00 stderr F I1213 00:20:09.404328 28750 namespace.go:100] [openshift-ovn-kubernetes] adding namespace 2025-12-13T00:20:09.404405723+00:00 stderr F I1213 00:20:09.404308 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.10]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-service-ca-operator:v4 k8s.ovn.org/name:openshift-service-ca-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9ac8de8-d679-44f7-bc42-650c48e57dec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.404405723+00:00 stderr F I1213 00:20:09.404390 28750 address_set.go:304] New(d9ac8de8-d679-44f7-bc42-650c48e57dec/default-network-controller:Namespace:openshift-service-ca-operator:v4/a9531058592630863333) with [10.217.0.10] 2025-12-13T00:20:09.404453444+00:00 stderr F I1213 00:20:09.404403 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.10]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-service-ca-operator:v4 k8s.ovn.org/name:openshift-service-ca-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{d9ac8de8-d679-44f7-bc42-650c48e57dec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.404956478+00:00 stderr F I1213 00:20:09.404900 28750 namespace.go:104] [openshift-service-ca-operator] adding namespace took 5.378608ms 2025-12-13T00:20:09.405020569+00:00 stderr F I1213 00:20:09.405005 28750 obj_retry.go:541] Creating *v1.Namespace openshift-service-ca-operator took: 5.492712ms 2025-12-13T00:20:09.405086591+00:00 stderr F I1213 00:20:09.405073 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-network-node-identity 2025-12-13T00:20:09.405120782+00:00 stderr F I1213 00:20:09.405108 28750 namespace.go:100] [openshift-network-node-identity] adding namespace 2025-12-13T00:20:09.405178914+00:00 stderr F I1213 00:20:09.404911 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.22 10.217.0.41]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-image-registry:v4 k8s.ovn.org/name:openshift-image-registry k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3fbe9c32-c447-4e1c-9b34-6fac7dd25149}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.405259076+00:00 stderr F I1213 00:20:09.405211 28750 address_set.go:304] New(3fbe9c32-c447-4e1c-9b34-6fac7dd25149/default-network-controller:Namespace:openshift-image-registry:v4/a65811733811199347) with [10.217.0.22 10.217.0.41] 2025-12-13T00:20:09.405353609+00:00 stderr F I1213 00:20:09.405283 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.22 10.217.0.41]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-image-registry:v4 k8s.ovn.org/name:openshift-image-registry k8s.ovn.org/owner-controller:default-network-controller 
k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3fbe9c32-c447-4e1c-9b34-6fac7dd25149}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.405899363+00:00 stderr F I1213 00:20:09.405873 28750 namespace.go:104] [openshift-image-registry] adding namespace took 6.336405ms 2025-12-13T00:20:09.406003976+00:00 stderr F I1213 00:20:09.405983 28750 obj_retry.go:541] Creating *v1.Namespace openshift-image-registry took: 6.406836ms 2025-12-13T00:20:09.406069368+00:00 stderr F I1213 00:20:09.406053 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-infra 2025-12-13T00:20:09.406119259+00:00 stderr F I1213 00:20:09.405969 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cloud-network-config-controller:v4 k8s.ovn.org/name:openshift-cloud-network-config-controller k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {37961f7a-b7f0-46c5-ad41-987d6afce443}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.406130890+00:00 stderr F I1213 00:20:09.406116 28750 address_set.go:304] New(37961f7a-b7f0-46c5-ad41-987d6afce443/default-network-controller:Namespace:openshift-cloud-network-config-controller:v4/a6429873968864053860) with [] 2025-12-13T00:20:09.406163471+00:00 stderr F I1213 00:20:09.406095 28750 namespace.go:100] [openshift-infra] adding namespace 2025-12-13T00:20:09.406199802+00:00 stderr F I1213 00:20:09.406130 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cloud-network-config-controller:v4 
k8s.ovn.org/name:openshift-cloud-network-config-controller k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {37961f7a-b7f0-46c5-ad41-987d6afce443}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.406621413+00:00 stderr F I1213 00:20:09.406601 28750 namespace.go:104] [openshift-cloud-network-config-controller] adding namespace took 7.054834ms 2025-12-13T00:20:09.406675155+00:00 stderr F I1213 00:20:09.406631 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.2 100.64.0.2]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-host-network:v4 k8s.ovn.org/name:openshift-host-network k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {bf04507a-34b0-4813-98e8-8d9d1837a096}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.406675155+00:00 stderr F I1213 00:20:09.406667 28750 address_set.go:304] New(bf04507a-34b0-4813-98e8-8d9d1837a096/default-network-controller:Namespace:openshift-host-network:v4/a6910206611978007605) with [10.217.0.2 100.64.0.2] 2025-12-13T00:20:09.406706895+00:00 stderr F I1213 00:20:09.406673 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.2 100.64.0.2]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-host-network:v4 k8s.ovn.org/name:openshift-host-network k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {bf04507a-34b0-4813-98e8-8d9d1837a096}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.406787088+00:00 stderr F I1213 00:20:09.406648 
28750 obj_retry.go:541] Creating *v1.Namespace openshift-cloud-network-config-controller took: 7.111906ms 2025-12-13T00:20:09.406787088+00:00 stderr F I1213 00:20:09.406779 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-controller-manager-operator 2025-12-13T00:20:09.406799348+00:00 stderr F I1213 00:20:09.406788 28750 namespace.go:100] [openshift-controller-manager-operator] adding namespace 2025-12-13T00:20:09.407077397+00:00 stderr F I1213 00:20:09.407018 28750 namespace.go:104] [openshift-host-network] adding namespace took 7.460005ms 2025-12-13T00:20:09.407077397+00:00 stderr F I1213 00:20:09.407048 28750 obj_retry.go:541] Creating *v1.Namespace openshift-host-network took: 7.509787ms 2025-12-13T00:20:09.407077397+00:00 stderr F I1213 00:20:09.407069 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-machine-api 2025-12-13T00:20:09.407100277+00:00 stderr F I1213 00:20:09.407082 28750 namespace.go:100] [openshift-machine-api] adding namespace 2025-12-13T00:20:09.407196330+00:00 stderr F I1213 00:20:09.407159 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.71]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ingress-canary:v4 k8s.ovn.org/name:openshift-ingress-canary k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7ddc544b-99f6-40be-9f0d-23611845c014}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.407244101+00:00 stderr F I1213 00:20:09.407227 28750 address_set.go:304] New(7ddc544b-99f6-40be-9f0d-23611845c014/default-network-controller:Namespace:openshift-ingress-canary:v4/a17074529903361539284) with [10.217.0.71] 2025-12-13T00:20:09.407296472+00:00 stderr F I1213 00:20:09.407264 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set 
Row:map[addresses:{GoSet:[10.217.0.71]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ingress-canary:v4 k8s.ovn.org/name:openshift-ingress-canary k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7ddc544b-99f6-40be-9f0d-23611845c014}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.407692363+00:00 stderr F I1213 00:20:09.407651 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.32 10.217.0.3]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-multus:v4 k8s.ovn.org/name:openshift-multus k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a3dfe91c-3443-4cbb-84c2-f35c94b3c72b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.407692363+00:00 stderr F I1213 00:20:09.407682 28750 address_set.go:304] New(a3dfe91c-3443-4cbb-84c2-f35c94b3c72b/default-network-controller:Namespace:openshift-multus:v4/a13687770890520536676) with [10.217.0.32 10.217.0.3] 2025-12-13T00:20:09.407708164+00:00 stderr F I1213 00:20:09.407688 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.32 10.217.0.3]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-multus:v4 k8s.ovn.org/name:openshift-multus k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a3dfe91c-3443-4cbb-84c2-f35c94b3c72b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.407741045+00:00 stderr F I1213 00:20:09.407725 28750 namespace.go:104] [openshift-ingress-canary] 
adding namespace took 8.112424ms 2025-12-13T00:20:09.407775815+00:00 stderr F I1213 00:20:09.407761 28750 obj_retry.go:541] Creating *v1.Namespace openshift-ingress-canary took: 8.209446ms 2025-12-13T00:20:09.407812086+00:00 stderr F I1213 00:20:09.407799 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-vsphere-infra 2025-12-13T00:20:09.407844287+00:00 stderr F I1213 00:20:09.407832 28750 namespace.go:100] [openshift-vsphere-infra] adding namespace 2025-12-13T00:20:09.408080974+00:00 stderr F I1213 00:20:09.408051 28750 namespace.go:104] [openshift-multus] adding namespace took 8.478784ms 2025-12-13T00:20:09.408080974+00:00 stderr F I1213 00:20:09.408065 28750 obj_retry.go:541] Creating *v1.Namespace openshift-multus took: 8.504474ms 2025-12-13T00:20:09.408080974+00:00 stderr F I1213 00:20:09.408072 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-cluster-storage-operator 2025-12-13T00:20:09.408080974+00:00 stderr F I1213 00:20:09.408078 28750 namespace.go:100] [openshift-cluster-storage-operator] adding namespace 2025-12-13T00:20:09.408137495+00:00 stderr F I1213 00:20:09.408053 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.73 10.217.0.66]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console:v4 k8s.ovn.org/name:openshift-console k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9af07d61-d216-4338-9242-d93a3d664841}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.408182517+00:00 stderr F I1213 00:20:09.408166 28750 address_set.go:304] New(9af07d61-d216-4338-9242-d93a3d664841/default-network-controller:Namespace:openshift-console:v4/a11622011068173273797) with [10.217.0.73 10.217.0.66] 2025-12-13T00:20:09.408232098+00:00 stderr F I1213 00:20:09.408202 28750 
transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.73 10.217.0.66]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console:v4 k8s.ovn.org/name:openshift-console k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9af07d61-d216-4338-9242-d93a3d664841}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.408648979+00:00 stderr F I1213 00:20:09.408616 28750 namespace.go:104] [openshift-console] adding namespace took 9.032659ms 2025-12-13T00:20:09.408648979+00:00 stderr F I1213 00:20:09.408630 28750 obj_retry.go:541] Creating *v1.Namespace openshift-console took: 9.05693ms 2025-12-13T00:20:09.408648979+00:00 stderr F I1213 00:20:09.408637 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-cloud-platform-infra 2025-12-13T00:20:09.408648979+00:00 stderr F I1213 00:20:09.408641 28750 namespace.go:100] [openshift-cloud-platform-infra] adding namespace 2025-12-13T00:20:09.408775803+00:00 stderr F I1213 00:20:09.408697 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.42]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-apiserver:v4 k8s.ovn.org/name:openshift-kube-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.408786293+00:00 stderr F I1213 00:20:09.408772 28750 address_set.go:304] New(6276d4aa-26f2-4006-a763-72ea13795238/default-network-controller:Namespace:openshift-kube-apiserver:v4/a4531626005796422843) with [10.217.0.42] 2025-12-13T00:20:09.408830944+00:00 stderr F I1213 
00:20:09.408784 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.42]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-apiserver:v4 k8s.ovn.org/name:openshift-kube-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.409316297+00:00 stderr F I1213 00:20:09.409280 28750 namespace.go:104] [openshift-kube-apiserver] adding namespace took 9.679946ms 2025-12-13T00:20:09.409316297+00:00 stderr F I1213 00:20:09.409305 28750 obj_retry.go:541] Creating *v1.Namespace openshift-kube-apiserver took: 9.718538ms 2025-12-13T00:20:09.409337658+00:00 stderr F I1213 00:20:09.409323 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-config-managed 2025-12-13T00:20:09.409347578+00:00 stderr F I1213 00:20:09.409335 28750 namespace.go:100] [openshift-config-managed] adding namespace 2025-12-13T00:20:09.409427491+00:00 stderr F I1213 00:20:09.409390 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.72]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-authentication:v4 k8s.ovn.org/name:openshift-authentication k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {131500e4-dd5f-4136-9e7a-0724c2d30917}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.409477032+00:00 stderr F I1213 00:20:09.409463 28750 address_set.go:304] New(131500e4-dd5f-4136-9e7a-0724c2d30917/default-network-controller:Namespace:openshift-authentication:v4/a5821095395710037482) with [10.217.0.72] 
2025-12-13T00:20:09.409526233+00:00 stderr F I1213 00:20:09.409498 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.72]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-authentication:v4 k8s.ovn.org/name:openshift-authentication k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {131500e4-dd5f-4136-9e7a-0724c2d30917}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.409984897+00:00 stderr F I1213 00:20:09.409962 28750 namespace.go:104] [openshift-authentication] adding namespace took 10.302015ms 2025-12-13T00:20:09.410040798+00:00 stderr F I1213 00:20:09.410024 28750 obj_retry.go:541] Creating *v1.Namespace openshift-authentication took: 10.385877ms 2025-12-13T00:20:09.410089840+00:00 stderr F I1213 00:20:09.409953 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ingress:v4 k8s.ovn.org/name:openshift-ingress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {25d45f75-d6c3-4910-9c2c-f40890ced90d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.410106450+00:00 stderr F I1213 00:20:09.410092 28750 address_set.go:304] New(25d45f75-d6c3-4910-9c2c-f40890ced90d/default-network-controller:Namespace:openshift-ingress:v4/a9185810757115582127) with [] 2025-12-13T00:20:09.410138801+00:00 stderr F I1213 00:20:09.410069 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-controller-manager-operator 2025-12-13T00:20:09.410185522+00:00 stderr F I1213 00:20:09.410170 28750 namespace.go:100] 
[openshift-kube-controller-manager-operator] adding namespace 2025-12-13T00:20:09.410220783+00:00 stderr F I1213 00:20:09.410108 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ingress:v4 k8s.ovn.org/name:openshift-ingress k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {25d45f75-d6c3-4910-9c2c-f40890ced90d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.410657905+00:00 stderr F I1213 00:20:09.410605 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-version:v4 k8s.ovn.org/name:openshift-cluster-version k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2355a6e5-ae30-44ea-bc37-ea95e8937a37}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.410657905+00:00 stderr F I1213 00:20:09.410633 28750 address_set.go:304] New(2355a6e5-ae30-44ea-bc37-ea95e8937a37/default-network-controller:Namespace:openshift-cluster-version:v4/a8029920972938375443) with [] 2025-12-13T00:20:09.410657905+00:00 stderr F I1213 00:20:09.410640 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-version:v4 k8s.ovn.org/name:openshift-cluster-version k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2355a6e5-ae30-44ea-bc37-ea95e8937a37}] Until: Durable: 
Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.410741817+00:00 stderr F I1213 00:20:09.410704 28750 namespace.go:104] [openshift-ingress] adding namespace took 10.952902ms 2025-12-13T00:20:09.410753948+00:00 stderr F I1213 00:20:09.410736 28750 obj_retry.go:541] Creating *v1.Namespace openshift-ingress took: 11.297392ms 2025-12-13T00:20:09.410766138+00:00 stderr F I1213 00:20:09.410757 28750 obj_retry.go:502] Add event received for *v1.Namespace kube-system 2025-12-13T00:20:09.410806539+00:00 stderr F I1213 00:20:09.410772 28750 namespace.go:100] [kube-system] adding namespace 2025-12-13T00:20:09.410992484+00:00 stderr F I1213 00:20:09.410961 28750 namespace.go:104] [openshift-cluster-version] adding namespace took 10.370376ms 2025-12-13T00:20:09.410992484+00:00 stderr F I1213 00:20:09.410978 28750 obj_retry.go:541] Creating *v1.Namespace openshift-cluster-version took: 10.392527ms 2025-12-13T00:20:09.410992484+00:00 stderr F I1213 00:20:09.410988 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-ingress-operator 2025-12-13T00:20:09.411013545+00:00 stderr F I1213 00:20:09.410995 28750 namespace.go:100] [openshift-ingress-operator] adding namespace 2025-12-13T00:20:09.411048936+00:00 stderr F I1213 00:20:09.410955 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-node-lease:v4 k8s.ovn.org/name:kube-node-lease k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {16ef921a-9758-4ef9-9c0a-3ab866f42178}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.411101877+00:00 stderr F I1213 00:20:09.411071 28750 address_set.go:304] New(16ef921a-9758-4ef9-9c0a-3ab866f42178/default-network-controller:Namespace:kube-node-lease:v4/a8945957557890443212) with [] 
2025-12-13T00:20:09.411158259+00:00 stderr F I1213 00:20:09.411129 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-node-lease:v4 k8s.ovn.org/name:kube-node-lease k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {16ef921a-9758-4ef9-9c0a-3ab866f42178}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.411519028+00:00 stderr F I1213 00:20:09.411479 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.82]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-apiserver:v4 k8s.ovn.org/name:openshift-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e1b35e7-ab39-4122-9471-98e00d6d2a4f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.411519028+00:00 stderr F I1213 00:20:09.411506 28750 address_set.go:304] New(3e1b35e7-ab39-4122-9471-98e00d6d2a4f/default-network-controller:Namespace:openshift-apiserver:v4/a12374569603079029239) with [10.217.0.82] 2025-12-13T00:20:09.411533829+00:00 stderr F I1213 00:20:09.411512 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.82]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-apiserver:v4 k8s.ovn.org/name:openshift-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e1b35e7-ab39-4122-9471-98e00d6d2a4f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.411549719+00:00 stderr F I1213 00:20:09.411538 28750 namespace.go:104] [kube-node-lease] adding namespace took 9.966984ms 2025-12-13T00:20:09.411559580+00:00 stderr F I1213 00:20:09.411548 28750 obj_retry.go:541] Creating *v1.Namespace kube-node-lease took: 9.990495ms 2025-12-13T00:20:09.411559580+00:00 stderr F I1213 00:20:09.411555 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-dns-operator 2025-12-13T00:20:09.411569640+00:00 stderr F I1213 00:20:09.411560 28750 namespace.go:100] [openshift-dns-operator] adding namespace 2025-12-13T00:20:09.411857308+00:00 stderr F I1213 00:20:09.411797 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.4 10.217.0.64]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-diagnostics:v4 k8s.ovn.org/name:openshift-network-diagnostics k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8218c39d-9f44-4ee1-a2d7-cc0399885a25}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.411857308+00:00 stderr F I1213 00:20:09.411825 28750 address_set.go:304] New(8218c39d-9f44-4ee1-a2d7-cc0399885a25/default-network-controller:Namespace:openshift-network-diagnostics:v4/a1966919964212966539) with [10.217.0.4 10.217.0.64] 2025-12-13T00:20:09.411857308+00:00 stderr F I1213 00:20:09.411827 28750 namespace.go:104] [openshift-apiserver] adding namespace took 9.345498ms 2025-12-13T00:20:09.411857308+00:00 stderr F I1213 00:20:09.411834 28750 obj_retry.go:541] Creating *v1.Namespace openshift-apiserver took: 9.364048ms 2025-12-13T00:20:09.411857308+00:00 stderr F I1213 00:20:09.411832 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.4 10.217.0.64]} external_ids:{GoMap:map[ip-family:v4 
k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-diagnostics:v4 k8s.ovn.org/name:openshift-network-diagnostics k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8218c39d-9f44-4ee1-a2d7-cc0399885a25}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.411857308+00:00 stderr F I1213 00:20:09.411841 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-scheduler-operator 2025-12-13T00:20:09.411857308+00:00 stderr F I1213 00:20:09.411848 28750 namespace.go:100] [openshift-kube-scheduler-operator] adding namespace 2025-12-13T00:20:09.412123485+00:00 stderr F I1213 00:20:09.412091 28750 namespace.go:104] [openshift-network-diagnostics] adding namespace took 8.573837ms 2025-12-13T00:20:09.412123485+00:00 stderr F I1213 00:20:09.412108 28750 obj_retry.go:541] Creating *v1.Namespace openshift-network-diagnostics took: 8.651968ms 2025-12-13T00:20:09.412123485+00:00 stderr F I1213 00:20:09.412097 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ovn-kubernetes:v4 k8s.ovn.org/name:openshift-ovn-kubernetes k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {26eb6b8a-5781-42f2-86d8-29346cd69f7c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.412123485+00:00 stderr F I1213 00:20:09.412118 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:09.412147866+00:00 stderr F I1213 00:20:09.412118 28750 address_set.go:304] New(26eb6b8a-5781-42f2-86d8-29346cd69f7c/default-network-controller:Namespace:openshift-ovn-kubernetes:v4/a1398255725986493602) with [] 
2025-12-13T00:20:09.412147866+00:00 stderr F I1213 00:20:09.412124 28750 namespace.go:100] [openshift-operator-lifecycle-manager] adding namespace
2025-12-13T00:20:09.412147866+00:00 stderr F I1213 00:20:09.412126 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ovn-kubernetes:v4 k8s.ovn.org/name:openshift-ovn-kubernetes k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {26eb6b8a-5781-42f2-86d8-29346cd69f7c}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.412450524+00:00 stderr F I1213 00:20:09.412400 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-node-identity:v4 k8s.ovn.org/name:openshift-network-node-identity k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {abc91a62-9c79-443c-8e3b-15e8d82538db}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.412450524+00:00 stderr F I1213 00:20:09.412425 28750 namespace.go:104] [openshift-ovn-kubernetes] adding namespace took 8.064412ms
2025-12-13T00:20:09.412450524+00:00 stderr F I1213 00:20:09.412428 28750 address_set.go:304] New(abc91a62-9c79-443c-8e3b-15e8d82538db/default-network-controller:Namespace:openshift-network-node-identity:v4/a6647208685787594228) with []
2025-12-13T00:20:09.412450524+00:00 stderr F I1213 00:20:09.412433 28750 obj_retry.go:541] Creating *v1.Namespace openshift-ovn-kubernetes took: 8.106062ms
2025-12-13T00:20:09.412450524+00:00 stderr F I1213 00:20:09.412440 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-storage-version-migrator-operator
2025-12-13T00:20:09.412450524+00:00 stderr F I1213 00:20:09.412434 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-node-identity:v4 k8s.ovn.org/name:openshift-network-node-identity k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {abc91a62-9c79-443c-8e3b-15e8d82538db}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.412450524+00:00 stderr F I1213 00:20:09.412445 28750 namespace.go:100] [openshift-kube-storage-version-migrator-operator] adding namespace
2025-12-13T00:20:09.412701602+00:00 stderr F I1213 00:20:09.412676 28750 namespace.go:104] [openshift-network-node-identity] adding namespace took 7.508147ms
2025-12-13T00:20:09.412701602+00:00 stderr F I1213 00:20:09.412691 28750 obj_retry.go:541] Creating *v1.Namespace openshift-network-node-identity took: 7.583009ms
2025-12-13T00:20:09.412701602+00:00 stderr F I1213 00:20:09.412680 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-infra:v4 k8s.ovn.org/name:openshift-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {79e4a569-af81-44a1-811a-aa1f3875d414}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.412719682+00:00 stderr F I1213 00:20:09.412700 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-marketplace
2025-12-13T00:20:09.412719682+00:00 stderr F I1213 00:20:09.412700 28750 address_set.go:304] New(79e4a569-af81-44a1-811a-aa1f3875d414/default-network-controller:Namespace:openshift-infra:v4/a4190772658089390776) with []
2025-12-13T00:20:09.412719682+00:00 stderr F I1213 00:20:09.412707 28750 namespace.go:100] [openshift-marketplace] adding namespace
2025-12-13T00:20:09.412730622+00:00 stderr F I1213 00:20:09.412708 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-infra:v4 k8s.ovn.org/name:openshift-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {79e4a569-af81-44a1-811a-aa1f3875d414}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.413013130+00:00 stderr F I1213 00:20:09.412992 28750 namespace.go:104] [openshift-infra] adding namespace took 6.799168ms
2025-12-13T00:20:09.413064011+00:00 stderr F I1213 00:20:09.413049 28750 obj_retry.go:541] Creating *v1.Namespace openshift-infra took: 6.953902ms
2025-12-13T00:20:09.413107993+00:00 stderr F I1213 00:20:09.412999 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.9]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-controller-manager-operator:v4 k8s.ovn.org/name:openshift-controller-manager-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d0dffec9-15a9-4a4a-82c9-04e99f7b8a5e}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.413134463+00:00 stderr F I1213 00:20:09.413088 28750 obj_retry.go:502] Add event received for *v1.Namespace kube-public
2025-12-13T00:20:09.413172204+00:00 stderr F I1213 00:20:09.413158 28750 namespace.go:100] [kube-public] adding namespace
2025-12-13T00:20:09.413216886+00:00 stderr F I1213 00:20:09.413121 28750 address_set.go:304] New(d0dffec9-15a9-4a4a-82c9-04e99f7b8a5e/default-network-controller:Namespace:openshift-controller-manager-operator:v4/a14938231737766799037) with [10.217.0.9]
2025-12-13T00:20:09.413294958+00:00 stderr F I1213 00:20:09.413213 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.9]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-controller-manager-operator:v4 k8s.ovn.org/name:openshift-controller-manager-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d0dffec9-15a9-4a4a-82c9-04e99f7b8a5e}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.413900134+00:00 stderr F I1213 00:20:09.413846 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.20 10.217.0.5]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-machine-api:v4 k8s.ovn.org/name:openshift-machine-api k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9b532877-1f57-4cef-a907-ee481e0632b3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.413900134+00:00 stderr F I1213 00:20:09.413874 28750 address_set.go:304] New(9b532877-1f57-4cef-a907-ee481e0632b3/default-network-controller:Namespace:openshift-machine-api:v4/a8146979490545162082) with [10.217.0.20 10.217.0.5]
2025-12-13T00:20:09.413900134+00:00 stderr F I1213 00:20:09.413872 28750 namespace.go:104] [openshift-controller-manager-operator] adding namespace took 7.072745ms
2025-12-13T00:20:09.413900134+00:00 stderr F I1213 00:20:09.413880 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.20 10.217.0.5]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-machine-api:v4 k8s.ovn.org/name:openshift-machine-api k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9b532877-1f57-4cef-a907-ee481e0632b3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.413920865+00:00 stderr F I1213 00:20:09.413896 28750 obj_retry.go:541] Creating *v1.Namespace openshift-controller-manager-operator took: 7.105106ms
2025-12-13T00:20:09.413920865+00:00 stderr F I1213 00:20:09.413915 28750 obj_retry.go:502] Add event received for *v1.Namespace openstack
2025-12-13T00:20:09.413953526+00:00 stderr F I1213 00:20:09.413927 28750 namespace.go:100] [openstack] adding namespace
2025-12-13T00:20:09.414248064+00:00 stderr F I1213 00:20:09.414197 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-vsphere-infra:v4 k8s.ovn.org/name:openshift-vsphere-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11282001-8d66-4c1f-84de-c3454bcd789a}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.414248064+00:00 stderr F I1213 00:20:09.414227 28750 address_set.go:304] New(11282001-8d66-4c1f-84de-c3454bcd789a/default-network-controller:Namespace:openshift-vsphere-infra:v4/a15940016096332404286) with []
2025-12-13T00:20:09.414248064+00:00 stderr F I1213 00:20:09.414226 28750 namespace.go:104] [openshift-machine-api] adding namespace took 7.135326ms
2025-12-13T00:20:09.414248064+00:00 stderr F I1213 00:20:09.414237 28750 obj_retry.go:541] Creating *v1.Namespace openshift-machine-api took: 7.156626ms
2025-12-13T00:20:09.414248064+00:00 stderr F I1213 00:20:09.414233 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-vsphere-infra:v4 k8s.ovn.org/name:openshift-vsphere-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11282001-8d66-4c1f-84de-c3454bcd789a}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.414261554+00:00 stderr F I1213 00:20:09.414245 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-oauth-apiserver
2025-12-13T00:20:09.414261554+00:00 stderr F I1213 00:20:09.414252 28750 namespace.go:100] [openshift-oauth-apiserver] adding namespace
2025-12-13T00:20:09.414552202+00:00 stderr F I1213 00:20:09.414517 28750 namespace.go:104] [openshift-vsphere-infra] adding namespace took 6.653693ms
2025-12-13T00:20:09.414552202+00:00 stderr F I1213 00:20:09.414531 28750 obj_retry.go:541] Creating *v1.Namespace openshift-vsphere-infra took: 6.699024ms
2025-12-13T00:20:09.414552202+00:00 stderr F I1213 00:20:09.414539 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-apiserver-operator
2025-12-13T00:20:09.414552202+00:00 stderr F I1213 00:20:09.414544 28750 namespace.go:100] [openshift-kube-apiserver-operator] adding namespace
2025-12-13T00:20:09.414584033+00:00 stderr F I1213 00:20:09.414548 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-storage-operator:v4 k8s.ovn.org/name:openshift-cluster-storage-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e456889c-a1a5-4ca0-b08b-79801583741a}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.414584033+00:00 stderr F I1213 00:20:09.414576 28750 address_set.go:304] New(e456889c-a1a5-4ca0-b08b-79801583741a/default-network-controller:Namespace:openshift-cluster-storage-operator:v4/a13337366700695359377) with []
2025-12-13T00:20:09.414598533+00:00 stderr F I1213 00:20:09.414581 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-storage-operator:v4 k8s.ovn.org/name:openshift-cluster-storage-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e456889c-a1a5-4ca0-b08b-79801583741a}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.414945673+00:00 stderr F I1213 00:20:09.414910 28750 namespace.go:104] [openshift-cluster-storage-operator] adding namespace took 6.826828ms
2025-12-13T00:20:09.414945673+00:00 stderr F I1213 00:20:09.414922 28750 obj_retry.go:541] Creating *v1.Namespace openshift-cluster-storage-operator took: 6.843428ms
2025-12-13T00:20:09.414960153+00:00 stderr F I1213 00:20:09.414930 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-apiserver-operator
2025-12-13T00:20:09.414960153+00:00 stderr F I1213 00:20:09.414953 28750 namespace.go:100] [openshift-apiserver-operator] adding namespace
2025-12-13T00:20:09.415076076+00:00 stderr F I1213 00:20:09.415006 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cloud-platform-infra:v4 k8s.ovn.org/name:openshift-cloud-platform-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {bbc24026-ee3c-4098-b3f2-e27016b964c4}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.415076076+00:00 stderr F I1213 00:20:09.415061 28750 address_set.go:304] New(bbc24026-ee3c-4098-b3f2-e27016b964c4/default-network-controller:Namespace:openshift-cloud-platform-infra:v4/a755693067844839690) with []
2025-12-13T00:20:09.415141128+00:00 stderr F I1213 00:20:09.415073 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cloud-platform-infra:v4 k8s.ovn.org/name:openshift-cloud-platform-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {bbc24026-ee3c-4098-b3f2-e27016b964c4}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.415619292+00:00 stderr F I1213 00:20:09.415570 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config-managed:v4 k8s.ovn.org/name:openshift-config-managed k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d90c324-b001-4abe-81c1-4b4c84f38fda}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.415619292+00:00 stderr F I1213 00:20:09.415600 28750 address_set.go:304] New(3d90c324-b001-4abe-81c1-4b4c84f38fda/default-network-controller:Namespace:openshift-config-managed:v4/a6117206921658593480) with []
2025-12-13T00:20:09.415619292+00:00 stderr F I1213 00:20:09.415605 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config-managed:v4 k8s.ovn.org/name:openshift-config-managed k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d90c324-b001-4abe-81c1-4b4c84f38fda}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.415635902+00:00 stderr F I1213 00:20:09.415617 28750 namespace.go:104] [openshift-cloud-platform-infra] adding namespace took 6.961693ms
2025-12-13T00:20:09.416063684+00:00 stderr F I1213 00:20:09.415642 28750 obj_retry.go:541] Creating *v1.Namespace openshift-cloud-platform-infra took: 6.992863ms
2025-12-13T00:20:09.416063684+00:00 stderr F I1213 00:20:09.415663 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-route-controller-manager
2025-12-13T00:20:09.416063684+00:00 stderr F I1213 00:20:09.415678 28750 namespace.go:100] [openshift-route-controller-manager] adding namespace
2025-12-13T00:20:09.416063684+00:00 stderr F I1213 00:20:09.415876 28750 namespace.go:104] [openshift-config-managed] adding namespace took 6.527821ms
2025-12-13T00:20:09.416063684+00:00 stderr F I1213 00:20:09.415881 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.15]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-controller-manager-operator:v4 k8s.ovn.org/name:openshift-kube-controller-manager-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {adba65cb-d07c-402b-8874-1746b8bdeaaf}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.416063684+00:00 stderr F I1213 00:20:09.415900 28750 obj_retry.go:541] Creating *v1.Namespace openshift-config-managed took: 6.559681ms
2025-12-13T00:20:09.416063684+00:00 stderr F I1213 00:20:09.415902 28750 address_set.go:304] New(adba65cb-d07c-402b-8874-1746b8bdeaaf/default-network-controller:Namespace:openshift-kube-controller-manager-operator:v4/a13990978431870169537) with [10.217.0.15]
2025-12-13T00:20:09.416063684+00:00 stderr F I1213 00:20:09.415912 28750 ovs.go:159] Exec(15): /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . external_ids:ovn-encap-type=geneve external_ids:ovn-encap-ip=192.168.126.11 external_ids:ovn-remote-probe-interval=180000 external_ids:ovn-openflow-probe-interval=180 other_config:bundle-idle-timeout=180 external_ids:hostname="crc" external_ids:ovn-is-interconn=true external_ids:ovn-monitor-all=true external_ids:ovn-ofctrl-wait-before-clear=0 external_ids:ovn-enable-lflow-cache=true external_ids:ovn-set-local-ip="true" external_ids:ovn-memlimit-lflow-cache-kb=1048576
2025-12-13T00:20:09.416063684+00:00 stderr F I1213 00:20:09.415918 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-node
2025-12-13T00:20:09.416063684+00:00 stderr F I1213 00:20:09.415912 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.15]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-controller-manager-operator:v4 k8s.ovn.org/name:openshift-kube-controller-manager-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {adba65cb-d07c-402b-8874-1746b8bdeaaf}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.416063684+00:00 stderr F I1213 00:20:09.415966 28750 namespace.go:100] [openshift-node] adding namespace
2025-12-13T00:20:09.416319761+00:00 stderr F I1213 00:20:09.416268 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-system:v4 k8s.ovn.org/name:kube-system k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {03aae892-0ac3-4b9f-a966-da2d596a1ccc}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.416319761+00:00 stderr F I1213 00:20:09.416300 28750 address_set.go:304] New(03aae892-0ac3-4b9f-a966-da2d596a1ccc/default-network-controller:Namespace:kube-system:v4/a8746611765617041202) with []
2025-12-13T00:20:09.416335361+00:00 stderr F I1213 00:20:09.416307 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-system:v4 k8s.ovn.org/name:kube-system k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {03aae892-0ac3-4b9f-a966-da2d596a1ccc}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.416409153+00:00 stderr F I1213 00:20:09.416384 28750 namespace.go:104] [openshift-kube-controller-manager-operator] adding namespace took 6.17764ms
2025-12-13T00:20:09.416409153+00:00 stderr F I1213 00:20:09.416397 28750 obj_retry.go:541] Creating *v1.Namespace openshift-kube-controller-manager-operator took: 6.233922ms
2025-12-13T00:20:09.416409153+00:00 stderr F I1213 00:20:09.416405 28750 obj_retry.go:502] Add event received for *v1.Namespace hostpath-provisioner
2025-12-13T00:20:09.416424954+00:00 stderr F I1213 00:20:09.416410 28750 namespace.go:100] [hostpath-provisioner] adding namespace
2025-12-13T00:20:09.416800674+00:00 stderr F I1213 00:20:09.416761 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.45]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ingress-operator:v4 k8s.ovn.org/name:openshift-ingress-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3047e9b3-92bb-427f-ab3e-a5f9bc8ce704}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.416883086+00:00 stderr F I1213 00:20:09.416868 28750 address_set.go:304] New(3047e9b3-92bb-427f-ab3e-a5f9bc8ce704/default-network-controller:Namespace:openshift-ingress-operator:v4/a12824364980436020060) with [10.217.0.45]
2025-12-13T00:20:09.416972739+00:00 stderr F I1213 00:20:09.416904 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.45]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ingress-operator:v4 k8s.ovn.org/name:openshift-ingress-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3047e9b3-92bb-427f-ab3e-a5f9bc8ce704}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.417118443+00:00 stderr F I1213 00:20:09.416815 28750 namespace.go:104] [kube-system] adding namespace took 6.021056ms
2025-12-13T00:20:09.417118443+00:00 stderr F I1213 00:20:09.417089 28750 obj_retry.go:541] Creating *v1.Namespace kube-system took: 6.316124ms
2025-12-13T00:20:09.417118443+00:00 stderr F I1213 00:20:09.417100 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-scheduler
2025-12-13T00:20:09.417118443+00:00 stderr F I1213 00:20:09.417107 28750 namespace.go:100] [openshift-kube-scheduler] adding namespace
2025-12-13T00:20:09.417577535+00:00 stderr F I1213 00:20:09.417548 28750 namespace.go:104] [openshift-ingress-operator] adding namespace took 6.54536ms
2025-12-13T00:20:09.417577535+00:00 stderr F I1213 00:20:09.417565 28750 obj_retry.go:541] Creating *v1.Namespace openshift-ingress-operator took: 6.569231ms
2025-12-13T00:20:09.417723679+00:00 stderr F I1213 00:20:09.417608 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.18]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-dns-operator:v4 k8s.ovn.org/name:openshift-dns-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {84d4b59b-775b-4d4e-a869-e623863cffd4}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.417723679+00:00 stderr F I1213 00:20:09.417700 28750 address_set.go:304] New(84d4b59b-775b-4d4e-a869-e623863cffd4/default-network-controller:Namespace:openshift-dns-operator:v4/a12081638711291249560) with [10.217.0.18]
2025-12-13T00:20:09.417759180+00:00 stderr F I1213 00:20:09.417713 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.18]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-dns-operator:v4 k8s.ovn.org/name:openshift-dns-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {84d4b59b-775b-4d4e-a869-e623863cffd4}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.418259984+00:00 stderr F I1213 00:20:09.418226 28750 namespace.go:104] [openshift-dns-operator] adding namespace took 6.656293ms
2025-12-13T00:20:09.418259984+00:00 stderr F I1213 00:20:09.418251 28750 obj_retry.go:541] Creating *v1.Namespace openshift-dns-operator took: 6.686084ms
2025-12-13T00:20:09.418282126+00:00 stderr F I1213 00:20:09.418268 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-machine-config-operator
2025-12-13T00:20:09.418304586+00:00 stderr F I1213 00:20:09.418279 28750 namespace.go:100] [openshift-machine-config-operator] adding namespace
2025-12-13T00:20:09.418354037+00:00 stderr F I1213 00:20:09.418227 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.12]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-scheduler-operator:v4 k8s.ovn.org/name:openshift-kube-scheduler-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a2bbab4a-fc2a-420d-89e8-71905aa18031}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.418403389+00:00 stderr F I1213 00:20:09.418377 28750 address_set.go:304] New(a2bbab4a-fc2a-420d-89e8-71905aa18031/default-network-controller:Namespace:openshift-kube-scheduler-operator:v4/a8446891589965341694) with [10.217.0.12]
2025-12-13T00:20:09.418463240+00:00 stderr F I1213 00:20:09.418433 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.12]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-scheduler-operator:v4 k8s.ovn.org/name:openshift-kube-scheduler-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a2bbab4a-fc2a-420d-89e8-71905aa18031}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.418889332+00:00 stderr F I1213 00:20:09.418847 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.14 10.217.0.24 10.217.0.43 10.217.0.11]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4 k8s.ovn.org/name:openshift-operator-lifecycle-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.418889332+00:00 stderr F I1213 00:20:09.418877 28750 address_set.go:304] New(369b3a88-e647-4277-a647-25ffea296a4a/default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4/a1482332553631220387) with [10.217.0.14 10.217.0.24 10.217.0.43 10.217.0.11]
2025-12-13T00:20:09.418925583+00:00 stderr F I1213 00:20:09.418884 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.14 10.217.0.24 10.217.0.43 10.217.0.11]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4 k8s.ovn.org/name:openshift-operator-lifecycle-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.418991745+00:00 stderr F I1213 00:20:09.418975 28750 namespace.go:104] [openshift-kube-scheduler-operator] adding namespace took 7.017604ms
2025-12-13T00:20:09.419044186+00:00 stderr F I1213 00:20:09.419018 28750 obj_retry.go:541] Creating *v1.Namespace openshift-kube-scheduler-operator took: 7.166688ms
2025-12-13T00:20:09.419090998+00:00 stderr F I1213 00:20:09.419077 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-user-workload-monitoring
2025-12-13T00:20:09.419124658+00:00 stderr F I1213 00:20:09.419112 28750 namespace.go:100] [openshift-user-workload-monitoring] adding namespace
2025-12-13T00:20:09.419264642+00:00 stderr F I1213 00:20:09.419249 28750 namespace.go:104] [openshift-operator-lifecycle-manager] adding namespace took 7.119067ms
2025-12-13T00:20:09.419311914+00:00 stderr F I1213 00:20:09.419287 28750 obj_retry.go:541] Creating *v1.Namespace openshift-operator-lifecycle-manager took: 7.159578ms
2025-12-13T00:20:09.419356105+00:00 stderr F I1213 00:20:09.419343 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-operators
2025-12-13T00:20:09.419412156+00:00 stderr F I1213 00:20:09.419316 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.16]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-storage-version-migrator-operator:v4 k8s.ovn.org/name:openshift-kube-storage-version-migrator-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9bd2029-4bc5-40b5-aded-872bff118867}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.419412156+00:00 stderr F I1213 00:20:09.419401 28750 address_set.go:304] New(d9bd2029-4bc5-40b5-aded-872bff118867/default-network-controller:Namespace:openshift-kube-storage-version-migrator-operator:v4/a11291866915865594395) with [10.217.0.16]
2025-12-13T00:20:09.419448407+00:00 stderr F I1213 00:20:09.419376 28750 namespace.go:100] [openshift-operators] adding namespace
2025-12-13T00:20:09.419477968+00:00 stderr F I1213 00:20:09.419418 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.16]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-storage-version-migrator-operator:v4 k8s.ovn.org/name:openshift-kube-storage-version-migrator-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9bd2029-4bc5-40b5-aded-872bff118867}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.419948481+00:00 stderr F I1213 00:20:09.419880 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.36 10.217.0.35 10.217.0.33 10.217.0.37 10.217.0.30]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-marketplace:v4 k8s.ovn.org/name:openshift-marketplace k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0e02ce6b-c33a-409a-bb6c-ed39be6a658f}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.419948481+00:00 stderr F I1213 00:20:09.419911 28750 address_set.go:304] New(0e02ce6b-c33a-409a-bb6c-ed39be6a658f/default-network-controller:Namespace:openshift-marketplace:v4/a13245376580307887587) with [10.217.0.33 10.217.0.37 10.217.0.30 10.217.0.36 10.217.0.35]
2025-12-13T00:20:09.419969551+00:00 stderr F I1213 00:20:09.419918 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.36 10.217.0.35 10.217.0.33 10.217.0.37 10.217.0.30]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-marketplace:v4 k8s.ovn.org/name:openshift-marketplace k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0e02ce6b-c33a-409a-bb6c-ed39be6a658f}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.420010633+00:00 stderr F I1213 00:20:09.419973 28750 namespace.go:104] [openshift-kube-storage-version-migrator-operator] adding namespace took 7.517407ms
2025-12-13T00:20:09.420010633+00:00 stderr F I1213 00:20:09.419997 28750 obj_retry.go:541] Creating *v1.Namespace openshift-kube-storage-version-migrator-operator took: 7.547258ms
2025-12-13T00:20:09.420050964+00:00 stderr F I1213 00:20:09.420017 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-ovirt-infra
2025-12-13T00:20:09.420073424+00:00 stderr F I1213 00:20:09.420054 28750 namespace.go:100] [openshift-ovirt-infra] adding namespace
2025-12-13T00:20:09.420303070+00:00 stderr F I1213 00:20:09.420281 28750 namespace.go:104] [openshift-marketplace] adding namespace took 7.565078ms
2025-12-13T00:20:09.420367422+00:00 stderr F I1213 00:20:09.420353 28750 obj_retry.go:541] Creating *v1.Namespace openshift-marketplace took: 7.64266ms
2025-12-13T00:20:09.420420604+00:00 stderr F I1213 00:20:09.420227 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-public:v4 k8s.ovn.org/name:kube-public k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3ed2cddb-7639-4bce-a4c1-426cb86feb33}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.420457975+00:00 stderr F I1213 00:20:09.420391 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-etcd-operator
2025-12-13T00:20:09.420491616+00:00 stderr F I1213 00:20:09.420479 28750 namespace.go:100] [openshift-etcd-operator] adding namespace
2025-12-13T00:20:09.420565388+00:00 stderr F I1213 00:20:09.420540 28750 address_set.go:304] New(3ed2cddb-7639-4bce-a4c1-426cb86feb33/default-network-controller:Namespace:kube-public:v4/a8590749387396730558) with []
2025-12-13T00:20:09.420613519+00:00 stderr F I1213 00:20:09.420554 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:kube-public:v4 k8s.ovn.org/name:kube-public k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3ed2cddb-7639-4bce-a4c1-426cb86feb33}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.420949378+00:00 stderr F I1213 00:20:09.420893 28750 namespace.go:104] [kube-public] adding namespace took 7.702532ms
2025-12-13T00:20:09.420949378+00:00 stderr F I1213 00:20:09.420906 28750 obj_retry.go:541] Creating *v1.Namespace kube-public took: 7.748053ms
2025-12-13T00:20:09.420949378+00:00 stderr F I1213 00:20:09.420914 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-dns
2025-12-13T00:20:09.420949378+00:00 stderr F I1213 00:20:09.420919 28750 namespace.go:100] [openshift-dns] adding namespace
2025-12-13T00:20:09.420999229+00:00 stderr F I1213 00:20:09.420906 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openstack:v4 k8s.ovn.org/name:openstack k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4ee790e3-5571-485c-8e9c-a3ba08afe5ad}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.420999229+00:00 stderr F I1213 00:20:09.420990 28750 address_set.go:304] New(4ee790e3-5571-485c-8e9c-a3ba08afe5ad/default-network-controller:Namespace:openstack:v4/a15556675108942259965) with []
2025-12-13T00:20:09.421066811+00:00 stderr F I1213 00:20:09.421001 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openstack:v4 k8s.ovn.org/name:openstack k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}]
Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4ee790e3-5571-485c-8e9c-a3ba08afe5ad}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.421563586+00:00 stderr F I1213 00:20:09.421501 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.39]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-oauth-apiserver:v4 k8s.ovn.org/name:openshift-oauth-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4dac2b20-e105-4916-96b0-5690b4c79e49}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.421631138+00:00 stderr F I1213 00:20:09.421615 28750 address_set.go:304] New(4dac2b20-e105-4916-96b0-5690b4c79e49/default-network-controller:Namespace:openshift-oauth-apiserver:v4/a18232515746603522929) with [10.217.0.39] 2025-12-13T00:20:09.421732320+00:00 stderr F I1213 00:20:09.421699 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.39]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-oauth-apiserver:v4 k8s.ovn.org/name:openshift-oauth-apiserver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4dac2b20-e105-4916-96b0-5690b4c79e49}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.421876894+00:00 stderr F I1213 00:20:09.421554 28750 namespace.go:104] [openstack] adding namespace took 7.581599ms 2025-12-13T00:20:09.421876894+00:00 stderr F I1213 00:20:09.421866 28750 obj_retry.go:541] Creating *v1.Namespace openstack took: 7.937759ms 2025-12-13T00:20:09.421896375+00:00 stderr F I1213 00:20:09.421878 28750 obj_retry.go:502] Add event received for 
*v1.Namespace openshift-network-operator 2025-12-13T00:20:09.421896375+00:00 stderr F I1213 00:20:09.421885 28750 namespace.go:100] [openshift-network-operator] adding namespace 2025-12-13T00:20:09.422276465+00:00 stderr F I1213 00:20:09.422225 28750 ovs.go:162] Exec(15): stdout: "" 2025-12-13T00:20:09.422276465+00:00 stderr F I1213 00:20:09.422250 28750 ovs.go:163] Exec(15): stderr: "" 2025-12-13T00:20:09.422297086+00:00 stderr F I1213 00:20:09.422274 28750 ovs.go:159] Exec(16): /usr/bin/ovs-vsctl --timeout=15 -- clear bridge br-int netflow -- clear bridge br-int sflow -- clear bridge br-int ipfix 2025-12-13T00:20:09.422414039+00:00 stderr F I1213 00:20:09.422369 28750 namespace.go:104] [openshift-oauth-apiserver] adding namespace took 8.109734ms 2025-12-13T00:20:09.422414039+00:00 stderr F I1213 00:20:09.422387 28750 obj_retry.go:541] Creating *v1.Namespace openshift-oauth-apiserver took: 8.133544ms 2025-12-13T00:20:09.422414039+00:00 stderr F I1213 00:20:09.422397 28750 obj_retry.go:502] Add event received for *v1.Namespace openstack-operators 2025-12-13T00:20:09.422414039+00:00 stderr F I1213 00:20:09.422403 28750 namespace.go:100] [openstack-operators] adding namespace 2025-12-13T00:20:09.422480051+00:00 stderr F I1213 00:20:09.422444 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.7]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-apiserver-operator:v4 k8s.ovn.org/name:openshift-kube-apiserver-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c61fceab-5551-4b36-b095-0df4dd86c2d2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.422543482+00:00 stderr F I1213 00:20:09.422527 28750 address_set.go:304] 
New(c61fceab-5551-4b36-b095-0df4dd86c2d2/default-network-controller:Namespace:openshift-kube-apiserver-operator:v4/a11465645704438275080) with [10.217.0.7] 2025-12-13T00:20:09.422605064+00:00 stderr F I1213 00:20:09.422564 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.7]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-apiserver-operator:v4 k8s.ovn.org/name:openshift-kube-apiserver-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c61fceab-5551-4b36-b095-0df4dd86c2d2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.423006525+00:00 stderr F I1213 00:20:09.422987 28750 namespace.go:104] [openshift-kube-apiserver-operator] adding namespace took 8.435102ms 2025-12-13T00:20:09.423092967+00:00 stderr F I1213 00:20:09.423072 28750 obj_retry.go:541] Creating *v1.Namespace openshift-kube-apiserver-operator took: 8.522185ms 2025-12-13T00:20:09.423160929+00:00 stderr F I1213 00:20:09.423147 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-console-operator 2025-12-13T00:20:09.423195460+00:00 stderr F I1213 00:20:09.423183 28750 namespace.go:100] [openshift-console-operator] adding namespace 2025-12-13T00:20:09.423245612+00:00 stderr F I1213 00:20:09.422989 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.6]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-apiserver-operator:v4 k8s.ovn.org/name:openshift-apiserver-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {09051ac2-1c77-41ab-bd2c-be4e2e86af73}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.423307033+00:00 stderr F I1213 00:20:09.423277 28750 address_set.go:304] New(09051ac2-1c77-41ab-bd2c-be4e2e86af73/default-network-controller:Namespace:openshift-apiserver-operator:v4/a17733727332347776420) with [10.217.0.6] 2025-12-13T00:20:09.423371845+00:00 stderr F I1213 00:20:09.423337 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.6]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-apiserver-operator:v4 k8s.ovn.org/name:openshift-apiserver-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {09051ac2-1c77-41ab-bd2c-be4e2e86af73}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.423793907+00:00 stderr F I1213 00:20:09.423771 28750 namespace.go:104] [openshift-apiserver-operator] adding namespace took 8.812233ms 2025-12-13T00:20:09.423793907+00:00 stderr F I1213 00:20:09.423783 28750 obj_retry.go:541] Creating *v1.Namespace openshift-apiserver-operator took: 8.830013ms 2025-12-13T00:20:09.423793907+00:00 stderr F I1213 00:20:09.423774 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.88]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-route-controller-manager:v4 k8s.ovn.org/name:openshift-route-controller-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ff17fce-ddcb-48ee-8395-5fd88de6ce7a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.423822427+00:00 stderr F I1213 00:20:09.423792 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-etcd 2025-12-13T00:20:09.423822427+00:00 stderr F I1213 00:20:09.423796 28750 
address_set.go:304] New(1ff17fce-ddcb-48ee-8395-5fd88de6ce7a/default-network-controller:Namespace:openshift-route-controller-manager:v4/a5513313330862551964) with [10.217.0.88] 2025-12-13T00:20:09.423822427+00:00 stderr F I1213 00:20:09.423799 28750 namespace.go:100] [openshift-etcd] adding namespace 2025-12-13T00:20:09.423822427+00:00 stderr F I1213 00:20:09.423802 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.88]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-route-controller-manager:v4 k8s.ovn.org/name:openshift-route-controller-manager k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ff17fce-ddcb-48ee-8395-5fd88de6ce7a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.424195788+00:00 stderr F I1213 00:20:09.424160 28750 namespace.go:104] [openshift-route-controller-manager] adding namespace took 8.472843ms 2025-12-13T00:20:09.424195788+00:00 stderr F I1213 00:20:09.424175 28750 obj_retry.go:541] Creating *v1.Namespace openshift-route-controller-manager took: 8.498165ms 2025-12-13T00:20:09.424195788+00:00 stderr F I1213 00:20:09.424182 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-console-user-settings 2025-12-13T00:20:09.424195788+00:00 stderr F I1213 00:20:09.424188 28750 namespace.go:100] [openshift-console-user-settings] adding namespace 2025-12-13T00:20:09.424276021+00:00 stderr F I1213 00:20:09.424240 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-node:v4 k8s.ovn.org/name:openshift-node k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: 
Where:[where column _uuid == {11c9190d-7255-419a-b417-2c71ee754070}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.424349983+00:00 stderr F I1213 00:20:09.424333 28750 address_set.go:304] New(11c9190d-7255-419a-b417-2c71ee754070/default-network-controller:Namespace:openshift-node:v4/a10320713570038180226) with [] 2025-12-13T00:20:09.424423065+00:00 stderr F I1213 00:20:09.424370 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-node:v4 k8s.ovn.org/name:openshift-node k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11c9190d-7255-419a-b417-2c71ee754070}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.424882707+00:00 stderr F I1213 00:20:09.424839 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.49]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:hostpath-provisioner:v4 k8s.ovn.org/name:hostpath-provisioner k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {62f9e339-06be-4a23-84c9-22e50c244827}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.424882707+00:00 stderr F I1213 00:20:09.424869 28750 address_set.go:304] New(62f9e339-06be-4a23-84c9-22e50c244827/default-network-controller:Namespace:hostpath-provisioner:v4/a9227386411571076591) with [10.217.0.49] 2025-12-13T00:20:09.424912878+00:00 stderr F I1213 00:20:09.424876 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.49]} external_ids:{GoMap:map[ip-family:v4 
k8s.ovn.org/id:default-network-controller:Namespace:hostpath-provisioner:v4 k8s.ovn.org/name:hostpath-provisioner k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {62f9e339-06be-4a23-84c9-22e50c244827}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.424964389+00:00 stderr F I1213 00:20:09.424858 28750 namespace.go:104] [openshift-node] adding namespace took 8.879884ms 2025-12-13T00:20:09.425009121+00:00 stderr F I1213 00:20:09.424991 28750 obj_retry.go:541] Creating *v1.Namespace openshift-node took: 9.05764ms 2025-12-13T00:20:09.425083943+00:00 stderr F I1213 00:20:09.425066 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-kni-infra 2025-12-13T00:20:09.425140224+00:00 stderr F I1213 00:20:09.425108 28750 namespace.go:100] [openshift-kni-infra] adding namespace 2025-12-13T00:20:09.425542895+00:00 stderr F I1213 00:20:09.425509 28750 namespace.go:104] [hostpath-provisioner] adding namespace took 9.09203ms 2025-12-13T00:20:09.425542895+00:00 stderr F I1213 00:20:09.425522 28750 obj_retry.go:541] Creating *v1.Namespace hostpath-provisioner took: 9.110791ms 2025-12-13T00:20:09.425542895+00:00 stderr F I1213 00:20:09.425530 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-kube-storage-version-migrator 2025-12-13T00:20:09.425542895+00:00 stderr F I1213 00:20:09.425536 28750 namespace.go:100] [openshift-kube-storage-version-migrator] adding namespace 2025-12-13T00:20:09.425562026+00:00 stderr F I1213 00:20:09.425528 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-scheduler:v4 k8s.ovn.org/name:openshift-kube-scheduler k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.425562026+00:00 stderr F I1213 00:20:09.425557 28750 address_set.go:304] New(87985ed0-37ee-4fd2-af4b-1c4bf264af83/default-network-controller:Namespace:openshift-kube-scheduler:v4/a15634036902741400949) with [] 2025-12-13T00:20:09.425602427+00:00 stderr F I1213 00:20:09.425563 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-scheduler:v4 k8s.ovn.org/name:openshift-kube-scheduler k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.425988667+00:00 stderr F I1213 00:20:09.425929 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.21 10.217.0.63]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-machine-config-operator:v4 k8s.ovn.org/name:openshift-machine-config-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9fea9088-07c8-413d-8aa0-608e3c74298e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.426015188+00:00 stderr F I1213 00:20:09.425979 28750 address_set.go:304] New(9fea9088-07c8-413d-8aa0-608e3c74298e/default-network-controller:Namespace:openshift-machine-config-operator:v4/a1512537150246498877) with [10.217.0.21 10.217.0.63] 2025-12-13T00:20:09.426015188+00:00 stderr F I1213 00:20:09.426003 28750 namespace.go:104] [openshift-kube-scheduler] adding namespace took 8.889885ms 
2025-12-13T00:20:09.426015188+00:00 stderr F I1213 00:20:09.425996 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.21 10.217.0.63]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-machine-config-operator:v4 k8s.ovn.org/name:openshift-machine-config-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9fea9088-07c8-413d-8aa0-608e3c74298e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.426046189+00:00 stderr F I1213 00:20:09.426013 28750 obj_retry.go:541] Creating *v1.Namespace openshift-kube-scheduler took: 8.905296ms 2025-12-13T00:20:09.426046189+00:00 stderr F I1213 00:20:09.426022 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-config 2025-12-13T00:20:09.426046189+00:00 stderr F I1213 00:20:09.426028 28750 namespace.go:100] [openshift-config] adding namespace 2025-12-13T00:20:09.426275375+00:00 stderr F I1213 00:20:09.426248 28750 namespace.go:104] [openshift-machine-config-operator] adding namespace took 7.960828ms 2025-12-13T00:20:09.426275375+00:00 stderr F I1213 00:20:09.426260 28750 obj_retry.go:541] Creating *v1.Namespace openshift-machine-config-operator took: 7.982421ms 2025-12-13T00:20:09.426275375+00:00 stderr F I1213 00:20:09.426254 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-user-workload-monitoring:v4 k8s.ovn.org/name:openshift-user-workload-monitoring k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c7b7eb85-7c38-4a2f-b21b-66f16447149e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.426288885+00:00 stderr F I1213 00:20:09.426275 28750 address_set.go:304] New(c7b7eb85-7c38-4a2f-b21b-66f16447149e/default-network-controller:Namespace:openshift-user-workload-monitoring:v4/a17884403498503024866) with [] 2025-12-13T00:20:09.426319246+00:00 stderr F I1213 00:20:09.426280 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-user-workload-monitoring:v4 k8s.ovn.org/name:openshift-user-workload-monitoring k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c7b7eb85-7c38-4a2f-b21b-66f16447149e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.426631485+00:00 stderr F I1213 00:20:09.426604 28750 namespace.go:104] [openshift-user-workload-monitoring] adding namespace took 7.438885ms 2025-12-13T00:20:09.426631485+00:00 stderr F I1213 00:20:09.426620 28750 obj_retry.go:541] Creating *v1.Namespace openshift-user-workload-monitoring took: 7.507326ms 2025-12-13T00:20:09.426739839+00:00 stderr F I1213 00:20:09.426704 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-operators:v4 k8s.ovn.org/name:openshift-operators k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0d92907b-b2f1-4209-bfd7-581d0f2dbd4d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.426802800+00:00 stderr F I1213 00:20:09.426785 28750 address_set.go:304] New(0d92907b-b2f1-4209-bfd7-581d0f2dbd4d/default-network-controller:Namespace:openshift-operators:v4/a17780485792851514981) with [] 
2025-12-13T00:20:09.426871252+00:00 stderr F I1213 00:20:09.426823 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-operators:v4 k8s.ovn.org/name:openshift-operators k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0d92907b-b2f1-4209-bfd7-581d0f2dbd4d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.427250103+00:00 stderr F I1213 00:20:09.427223 28750 namespace.go:104] [openshift-operators] adding namespace took 7.754114ms 2025-12-13T00:20:09.427250103+00:00 stderr F I1213 00:20:09.427238 28750 obj_retry.go:541] Creating *v1.Namespace openshift-operators took: 7.861187ms 2025-12-13T00:20:09.427270983+00:00 stderr F I1213 00:20:09.427248 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-authentication-operator 2025-12-13T00:20:09.427270983+00:00 stderr F I1213 00:20:09.427254 28750 namespace.go:100] [openshift-authentication-operator] adding namespace 2025-12-13T00:20:09.427270983+00:00 stderr F I1213 00:20:09.427247 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ovirt-infra:v4 k8s.ovn.org/name:openshift-ovirt-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b874879e-dad8-4f9f-bcbb-02dc1ff04309}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.427284484+00:00 stderr F I1213 00:20:09.427269 28750 address_set.go:304] New(b874879e-dad8-4f9f-bcbb-02dc1ff04309/default-network-controller:Namespace:openshift-ovirt-infra:v4/a5172224344187132617) with [] 
2025-12-13T00:20:09.427296384+00:00 stderr F I1213 00:20:09.427275 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-ovirt-infra:v4 k8s.ovn.org/name:openshift-ovirt-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b874879e-dad8-4f9f-bcbb-02dc1ff04309}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.427627523+00:00 stderr F I1213 00:20:09.427592 28750 namespace.go:104] [openshift-ovirt-infra] adding namespace took 7.528768ms 2025-12-13T00:20:09.427627523+00:00 stderr F I1213 00:20:09.427606 28750 obj_retry.go:541] Creating *v1.Namespace openshift-ovirt-infra took: 7.552098ms 2025-12-13T00:20:09.427687225+00:00 stderr F I1213 00:20:09.427590 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.8]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-etcd-operator:v4 k8s.ovn.org/name:openshift-etcd-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1ee15a1-bd6d-495f-8ae4-fb5816533020}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.427752606+00:00 stderr F I1213 00:20:09.427736 28750 address_set.go:304] New(f1ee15a1-bd6d-495f-8ae4-fb5816533020/default-network-controller:Namespace:openshift-etcd-operator:v4/a14710618839743769589) with [10.217.0.8] 2025-12-13T00:20:09.427802148+00:00 stderr F I1213 00:20:09.427773 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.8]} external_ids:{GoMap:map[ip-family:v4 
k8s.ovn.org/id:default-network-controller:Namespace:openshift-etcd-operator:v4 k8s.ovn.org/name:openshift-etcd-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1ee15a1-bd6d-495f-8ae4-fb5816533020}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.428226099+00:00 stderr F I1213 00:20:09.428183 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.31]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-dns:v4 k8s.ovn.org/name:openshift-dns k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9adb0821-8866-4e7b-91bc-3de62bd5d8d5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.428226099+00:00 stderr F I1213 00:20:09.428212 28750 address_set.go:304] New(9adb0821-8866-4e7b-91bc-3de62bd5d8d5/default-network-controller:Namespace:openshift-dns:v4/a11732331429224425771) with [10.217.0.31] 2025-12-13T00:20:09.428247060+00:00 stderr F I1213 00:20:09.428217 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.31]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-dns:v4 k8s.ovn.org/name:openshift-dns k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9adb0821-8866-4e7b-91bc-3de62bd5d8d5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.428247060+00:00 stderr F I1213 00:20:09.428234 28750 namespace.go:104] [openshift-etcd-operator] adding namespace took 7.700842ms 2025-12-13T00:20:09.428247060+00:00 stderr F I1213 00:20:09.428242 28750 obj_retry.go:541] Creating 
*v1.Namespace openshift-etcd-operator took: 7.763814ms 2025-12-13T00:20:09.428507727+00:00 stderr F I1213 00:20:09.428482 28750 namespace.go:104] [openshift-dns] adding namespace took 7.557589ms 2025-12-13T00:20:09.428507727+00:00 stderr F I1213 00:20:09.428481 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-operator:v4 k8s.ovn.org/name:openshift-network-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {04111b86-db80-412d-9e2f-97094cf524e1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.428507727+00:00 stderr F I1213 00:20:09.428496 28750 obj_retry.go:541] Creating *v1.Namespace openshift-dns took: 7.574789ms 2025-12-13T00:20:09.428507727+00:00 stderr F I1213 00:20:09.428503 28750 address_set.go:304] New(04111b86-db80-412d-9e2f-97094cf524e1/default-network-controller:Namespace:openshift-network-operator:v4/a17843891307737330665) with [] 2025-12-13T00:20:09.428527287+00:00 stderr F I1213 00:20:09.428508 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-network-operator:v4 k8s.ovn.org/name:openshift-network-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {04111b86-db80-412d-9e2f-97094cf524e1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.428831156+00:00 stderr F I1213 00:20:09.428792 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 
k8s.ovn.org/id:default-network-controller:Namespace:openstack-operators:v4 k8s.ovn.org/name:openstack-operators k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a0757453-55bd-4a81-b530-e7a71ace7a2f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.428884317+00:00 stderr F I1213 00:20:09.428870 28750 address_set.go:304] New(a0757453-55bd-4a81-b530-e7a71ace7a2f/default-network-controller:Namespace:openstack-operators:v4/a14495394860088409165) with [] 2025-12-13T00:20:09.428957319+00:00 stderr F I1213 00:20:09.428905 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openstack-operators:v4 k8s.ovn.org/name:openstack-operators k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a0757453-55bd-4a81-b530-e7a71ace7a2f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.429092123+00:00 stderr F I1213 00:20:09.428815 28750 namespace.go:104] [openshift-network-operator] adding namespace took 6.92464ms 2025-12-13T00:20:09.429092123+00:00 stderr F I1213 00:20:09.429080 28750 obj_retry.go:541] Creating *v1.Namespace openshift-network-operator took: 7.190567ms 2025-12-13T00:20:09.429351570+00:00 stderr F I1213 00:20:09.429314 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.61 10.217.0.62]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console-operator:v4 k8s.ovn.org/name:openshift-console-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{b98e5acb-dfcd-45a0-8119-e0592c97b5c3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.429351570+00:00 stderr F I1213 00:20:09.429339 28750 address_set.go:304] New(b98e5acb-dfcd-45a0-8119-e0592c97b5c3/default-network-controller:Namespace:openshift-console-operator:v4/a16211398687523592942) with [10.217.0.61 10.217.0.62] 2025-12-13T00:20:09.429374860+00:00 stderr F I1213 00:20:09.429345 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.61 10.217.0.62]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console-operator:v4 k8s.ovn.org/name:openshift-console-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b98e5acb-dfcd-45a0-8119-e0592c97b5c3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.429409431+00:00 stderr F I1213 00:20:09.429332 28750 namespace.go:104] [openstack-operators] adding namespace took 6.9222ms 2025-12-13T00:20:09.429447862+00:00 stderr F I1213 00:20:09.429433 28750 obj_retry.go:541] Creating *v1.Namespace openstack-operators took: 7.024933ms 2025-12-13T00:20:09.429711131+00:00 stderr F I1213 00:20:09.429645 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-etcd:v4 k8s.ovn.org/name:openshift-etcd k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ac960e5b-7bfc-4c54-ade3-c1a9dd66917b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.429711131+00:00 stderr F I1213 00:20:09.429674 28750 address_set.go:304] 
New(ac960e5b-7bfc-4c54-ade3-c1a9dd66917b/default-network-controller:Namespace:openshift-etcd:v4/a1263951348256964356) with [] 2025-12-13T00:20:09.429724941+00:00 stderr F I1213 00:20:09.429706 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-etcd:v4 k8s.ovn.org/name:openshift-etcd k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ac960e5b-7bfc-4c54-ade3-c1a9dd66917b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.429724941+00:00 stderr F I1213 00:20:09.429715 28750 namespace.go:104] [openshift-console-operator] adding namespace took 6.48054ms 2025-12-13T00:20:09.429735321+00:00 stderr F I1213 00:20:09.429728 28750 obj_retry.go:541] Creating *v1.Namespace openshift-console-operator took: 6.545421ms 2025-12-13T00:20:09.430071130+00:00 stderr F I1213 00:20:09.430024 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console-user-settings:v4 k8s.ovn.org/name:openshift-console-user-settings k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1993aac-30de-48d4-bdd3-a3299b9538ff}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.430071130+00:00 stderr F I1213 00:20:09.430053 28750 namespace.go:104] [openshift-etcd] adding namespace took 6.246493ms 2025-12-13T00:20:09.430071130+00:00 stderr F I1213 00:20:09.430055 28750 address_set.go:304] New(f1993aac-30de-48d4-bdd3-a3299b9538ff/default-network-controller:Namespace:openshift-console-user-settings:v4/a17174782576849527835) with [] 
2025-12-13T00:20:09.430071130+00:00 stderr F I1213 00:20:09.430063 28750 obj_retry.go:541] Creating *v1.Namespace openshift-etcd took: 6.262423ms 2025-12-13T00:20:09.430089831+00:00 stderr F I1213 00:20:09.430073 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-monitoring 2025-12-13T00:20:09.430089831+00:00 stderr F I1213 00:20:09.430065 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-console-user-settings:v4 k8s.ovn.org/name:openshift-console-user-settings k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f1993aac-30de-48d4-bdd3-a3299b9538ff}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.430089831+00:00 stderr F I1213 00:20:09.430080 28750 namespace.go:100] [openshift-monitoring] adding namespace 2025-12-13T00:20:09.430466241+00:00 stderr F I1213 00:20:09.430416 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kni-infra:v4 k8s.ovn.org/name:openshift-kni-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {982e3095-ca6c-4b53-837f-82c794c14593}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.430466241+00:00 stderr F I1213 00:20:09.430445 28750 address_set.go:304] New(982e3095-ca6c-4b53-837f-82c794c14593/default-network-controller:Namespace:openshift-kni-infra:v4/a12641306622762432907) with [] 2025-12-13T00:20:09.430466241+00:00 stderr F I1213 00:20:09.430447 28750 namespace.go:104] [openshift-console-user-settings] adding namespace took 6.253072ms 
2025-12-13T00:20:09.430466241+00:00 stderr F I1213 00:20:09.430457 28750 obj_retry.go:541] Creating *v1.Namespace openshift-console-user-settings took: 6.267663ms 2025-12-13T00:20:09.430479541+00:00 stderr F I1213 00:20:09.430452 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kni-infra:v4 k8s.ovn.org/name:openshift-kni-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {982e3095-ca6c-4b53-837f-82c794c14593}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.430479541+00:00 stderr F I1213 00:20:09.430466 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-nutanix-infra 2025-12-13T00:20:09.430479541+00:00 stderr F I1213 00:20:09.430474 28750 namespace.go:100] [openshift-nutanix-infra] adding namespace 2025-12-13T00:20:09.430776660+00:00 stderr F I1213 00:20:09.430758 28750 namespace.go:104] [openshift-kni-infra] adding namespace took 5.581414ms 2025-12-13T00:20:09.430776660+00:00 stderr F I1213 00:20:09.430771 28750 obj_retry.go:541] Creating *v1.Namespace openshift-kni-infra took: 5.662916ms 2025-12-13T00:20:09.430788450+00:00 stderr F I1213 00:20:09.430754 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.25]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-storage-version-migrator:v4 k8s.ovn.org/name:openshift-kube-storage-version-migrator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c015a739-e4ae-4995-96b0-bc406fefecac}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.430788450+00:00 stderr F I1213 
00:20:09.430784 28750 address_set.go:304] New(c015a739-e4ae-4995-96b0-bc406fefecac/default-network-controller:Namespace:openshift-kube-storage-version-migrator:v4/a16978912863426934758) with [10.217.0.25] 2025-12-13T00:20:09.430835561+00:00 stderr F I1213 00:20:09.430791 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.25]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-kube-storage-version-migrator:v4 k8s.ovn.org/name:openshift-kube-storage-version-migrator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c015a739-e4ae-4995-96b0-bc406fefecac}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.431214011+00:00 stderr F I1213 00:20:09.431156 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config:v4 k8s.ovn.org/name:openshift-config k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {44ff8a33-26ed-440b-98f7-fcd58e35aa55}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.431214011+00:00 stderr F I1213 00:20:09.431183 28750 address_set.go:304] New(44ff8a33-26ed-440b-98f7-fcd58e35aa55/default-network-controller:Namespace:openshift-config:v4/a14322580666718461836) with [] 2025-12-13T00:20:09.431214011+00:00 stderr F I1213 00:20:09.431192 28750 namespace.go:104] [openshift-kube-storage-version-migrator] adding namespace took 5.649786ms 2025-12-13T00:20:09.431214011+00:00 stderr F I1213 00:20:09.431201 28750 obj_retry.go:541] Creating *v1.Namespace openshift-kube-storage-version-migrator took: 5.663686ms 2025-12-13T00:20:09.431214011+00:00 
stderr F I1213 00:20:09.431189 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-config:v4 k8s.ovn.org/name:openshift-config k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {44ff8a33-26ed-440b-98f7-fcd58e35aa55}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.431540310+00:00 stderr F I1213 00:20:09.431506 28750 namespace.go:104] [openshift-config] adding namespace took 5.469611ms 2025-12-13T00:20:09.431540310+00:00 stderr F I1213 00:20:09.431499 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.19]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-authentication-operator:v4 k8s.ovn.org/name:openshift-authentication-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a8db0187-0f5e-41f3-8b4b-4d2c4d0d1e2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.431540310+00:00 stderr F I1213 00:20:09.431522 28750 obj_retry.go:541] Creating *v1.Namespace openshift-config took: 5.492292ms 2025-12-13T00:20:09.431540310+00:00 stderr F I1213 00:20:09.431527 28750 address_set.go:304] New(a8db0187-0f5e-41f3-8b4b-4d2c4d0d1e2d/default-network-controller:Namespace:openshift-authentication-operator:v4/a11592754075545683359) with [10.217.0.19] 2025-12-13T00:20:09.431540310+00:00 stderr F I1213 00:20:09.431530 28750 obj_retry.go:502] Add event received for *v1.Namespace openshift-cluster-machine-approver 2025-12-13T00:20:09.431557771+00:00 stderr F I1213 00:20:09.431538 28750 namespace.go:100] [openshift-cluster-machine-approver] adding namespace 
2025-12-13T00:20:09.431557771+00:00 stderr F I1213 00:20:09.431535 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[10.217.0.19]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-authentication-operator:v4 k8s.ovn.org/name:openshift-authentication-operator k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a8db0187-0f5e-41f3-8b4b-4d2c4d0d1e2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.431950121+00:00 stderr F I1213 00:20:09.431881 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-monitoring:v4 k8s.ovn.org/name:openshift-monitoring k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {019ff9cb-74b2-458c-bba7-3cd8f5e2b524}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.431950121+00:00 stderr F I1213 00:20:09.431910 28750 address_set.go:304] New(019ff9cb-74b2-458c-bba7-3cd8f5e2b524/default-network-controller:Namespace:openshift-monitoring:v4/a5151710470485437164) with [] 2025-12-13T00:20:09.431950121+00:00 stderr F I1213 00:20:09.431919 28750 namespace.go:104] [openshift-authentication-operator] adding namespace took 4.659038ms 2025-12-13T00:20:09.431967532+00:00 stderr F I1213 00:20:09.431917 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-monitoring:v4 k8s.ovn.org/name:openshift-monitoring k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {019ff9cb-74b2-458c-bba7-3cd8f5e2b524}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.431977162+00:00 stderr F I1213 00:20:09.431964 28750 obj_retry.go:541] Creating *v1.Namespace openshift-authentication-operator took: 4.673268ms 2025-12-13T00:20:09.432292332+00:00 stderr F I1213 00:20:09.432236 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-nutanix-infra:v4 k8s.ovn.org/name:openshift-nutanix-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9be864df-f271-4d7b-a8a0-e53c648721e4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.432292332+00:00 stderr F I1213 00:20:09.432282 28750 address_set.go:304] New(9be864df-f271-4d7b-a8a0-e53c648721e4/default-network-controller:Namespace:openshift-nutanix-infra:v4/a10781256116209244644) with [] 2025-12-13T00:20:09.432308182+00:00 stderr F I1213 00:20:09.432289 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-nutanix-infra:v4 k8s.ovn.org/name:openshift-nutanix-infra k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9be864df-f271-4d7b-a8a0-e53c648721e4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.432344013+00:00 stderr F I1213 00:20:09.432326 28750 namespace.go:104] [openshift-monitoring] adding namespace took 2.186959ms 2025-12-13T00:20:09.432377964+00:00 stderr F I1213 00:20:09.432364 28750 obj_retry.go:541] Creating *v1.Namespace openshift-monitoring took: 
2.282504ms 2025-12-13T00:20:09.432696973+00:00 stderr F I1213 00:20:09.432645 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-machine-approver:v4 k8s.ovn.org/name:openshift-cluster-machine-approver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9f13b0bb-074d-4fa3-ac40-66aa5570fde6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.432696973+00:00 stderr F I1213 00:20:09.432672 28750 address_set.go:304] New(9f13b0bb-074d-4fa3-ac40-66aa5570fde6/default-network-controller:Namespace:openshift-cluster-machine-approver:v4/a8065968527448962190) with [] 2025-12-13T00:20:09.432696973+00:00 stderr F I1213 00:20:09.432677 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]} external_ids:{GoMap:map[ip-family:v4 k8s.ovn.org/id:default-network-controller:Namespace:openshift-cluster-machine-approver:v4 k8s.ovn.org/name:openshift-cluster-machine-approver k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:Namespace]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9f13b0bb-074d-4fa3-ac40-66aa5570fde6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.432767225+00:00 stderr F I1213 00:20:09.432749 28750 namespace.go:104] [openshift-nutanix-infra] adding namespace took 2.266843ms 2025-12-13T00:20:09.432802416+00:00 stderr F I1213 00:20:09.432789 28750 obj_retry.go:541] Creating *v1.Namespace openshift-nutanix-infra took: 2.313074ms 2025-12-13T00:20:09.432998601+00:00 stderr F I1213 00:20:09.432975 28750 namespace.go:104] [openshift-cluster-machine-approver] adding namespace took 1.42998ms 2025-12-13T00:20:09.432998601+00:00 stderr F I1213 00:20:09.432987 28750 
obj_retry.go:541] Creating *v1.Namespace openshift-cluster-machine-approver took: 1.449861ms 2025-12-13T00:20:09.433029582+00:00 stderr F I1213 00:20:09.433000 28750 factory.go:988] Added *v1.Namespace event handler 1 2025-12-13T00:20:09.433215737+00:00 stderr F I1213 00:20:09.433178 28750 obj_retry.go:502] Add event received for *v1.Node crc 2025-12-13T00:20:09.433215737+00:00 stderr F I1213 00:20:09.433209 28750 master.go:627] Adding or Updating Node "crc" 2025-12-13T00:20:09.433379011+00:00 stderr F I1213 00:20:09.433325 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch Row:map[load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:dc1aa63b-0376-4ccb-99e2-10794dfff422}]} other_config:{GoMap:map[exclude_ips:10.217.0.2 mcast_eth_src:0a:58:0a:d9:00:01 mcast_ip4_src:10.217.0.1 mcast_querier:true mcast_snoop:true subnet:10.217.0.0/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.433379011+00:00 stderr F I1213 00:20:09.433358 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch Row:map[load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:dc1aa63b-0376-4ccb-99e2-10794dfff422}]} other_config:{GoMap:map[exclude_ips:10.217.0.2 mcast_eth_src:0a:58:0a:d9:00:01 mcast_ip4_src:10.217.0.1 mcast_querier:true mcast_snoop:true subnet:10.217.0.0/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.434041159+00:00 stderr F I1213 00:20:09.433997 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[arp_proxy:0a:58:a9:fe:01:01 169.254.1.1 fe80::1 10.217.0.0/22 router-port:rtos-crc]} port_security:{GoSet:[]} 
tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4867092-ff2b-4983-87ca-ac317e39f546}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.434133272+00:00 stderr F I1213 00:20:09.434104 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c4867092-ff2b-4983-87ca-ac317e39f546}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.434191684+00:00 stderr F I1213 00:20:09.434156 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[arp_proxy:0a:58:a9:fe:01:01 169.254.1.1 fe80::1 10.217.0.0/22 router-port:rtos-crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4867092-ff2b-4983-87ca-ac317e39f546}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c4867092-ff2b-4983-87ca-ac317e39f546}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.434782600+00:00 stderr F I1213 00:20:09.434695 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c4867092-ff2b-4983-87ca-ac317e39f546}]}}] Timeout: Where:[where column _uuid == {e9f73f57-05b3-4176-8265-1b664879a040}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.434782600+00:00 stderr F I1213 00:20:09.434761 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c4867092-ff2b-4983-87ca-ac317e39f546}]}}] Timeout: Where:[where column _uuid == {e9f73f57-05b3-4176-8265-1b664879a040}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.435307045+00:00 stderr F I1213 00:20:09.435270 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Gateway_Chassis Row:map[chassis_name:017e52b0-97d3-4d7d-aae4-9b216aa025aa name:rtos-crc-017e52b0-97d3-4d7d-aae4-9b216aa025aa priority:1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {964f1415-aaef-4b48-af02-711e20a49a83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.435441068+00:00 stderr F I1213 00:20:09.435406 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[gateway_chassis:{GoSet:[{GoUUID:964f1415-aaef-4b48-af02-711e20a49a83}]} mac:0a:58:0a:d9:00:01 networks:{GoSet:[10.217.0.1/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {21ca7e68-1853-4731-afc6-84ae6ce74fe0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.435527781+00:00 stderr F I1213 00:20:09.435501 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:21ca7e68-1853-4731-afc6-84ae6ce74fe0}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.435607263+00:00 stderr F I1213 00:20:09.435550 28750 transact.go:42] Configuring OVN: [{Op:update Table:Gateway_Chassis Row:map[chassis_name:017e52b0-97d3-4d7d-aae4-9b216aa025aa name:rtos-crc-017e52b0-97d3-4d7d-aae4-9b216aa025aa priority:1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {964f1415-aaef-4b48-af02-711e20a49a83}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update 
Table:Logical_Router_Port Row:map[gateway_chassis:{GoSet:[{GoUUID:964f1415-aaef-4b48-af02-711e20a49a83}]} mac:0a:58:0a:d9:00:01 networks:{GoSet:[10.217.0.1/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {21ca7e68-1853-4731-afc6-84ae6ce74fe0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:21ca7e68-1853-4731-afc6-84ae6ce74fe0}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.436249290+00:00 stderr F I1213 00:20:09.436170 28750 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.217.0.2 k8s.ovn.org/id:default-network-controller:NetpolNode:crc:10.217.0.2 k8s.ovn.org/name:crc k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.217.0.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cb395e67-c62e-4261-95eb-0e35bc8ae4a2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.436272981+00:00 stderr F I1213 00:20:09.436242 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:cb395e67-c62e-4261-95eb-0e35bc8ae4a2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.436311922+00:00 stderr F I1213 00:20:09.436260 28750 transact.go:42] Configuring OVN: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[ip:10.217.0.2 
k8s.ovn.org/id:default-network-controller:NetpolNode:crc:10.217.0.2 k8s.ovn.org/name:crc k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolNode]} log:false match:ip4.src==10.217.0.2 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cb395e67-c62e-4261-95eb-0e35bc8ae4a2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:cb395e67-c62e-4261-95eb-0e35bc8ae4a2}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.436745074+00:00 stderr F I1213 00:20:09.436710 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:10.217.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f776c4fa-65e6-40b2-8863-a1363ed09111}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.436821576+00:00 stderr F I1213 00:20:09.436794 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:f776c4fa-65e6-40b2-8863-a1363ed09111}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.436874318+00:00 stderr F I1213 00:20:09.436844 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:10.217.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f776c4fa-65e6-40b2-8863-a1363ed09111}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes 
Mutator:insert Value:{GoSet:[{GoUUID:f776c4fa-65e6-40b2-8863-a1363ed09111}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.437498175+00:00 stderr F I1213 00:20:09.437447 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[b6:dc:d9:26:03:d4 10.217.0.2]} options:{GoMap:map[]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {df0c3d3a-8474-4c1f-ac97-c1820ebf2712}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.437523445+00:00 stderr F I1213 00:20:09.437499 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:df0c3d3a-8474-4c1f-ac97-c1820ebf2712}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.437559756+00:00 stderr F I1213 00:20:09.437516 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[b6:dc:d9:26:03:d4 10.217.0.2]} options:{GoMap:map[]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {df0c3d3a-8474-4c1f-ac97-c1820ebf2712}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:df0c3d3a-8474-4c1f-ac97-c1820ebf2712}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.438097312+00:00 stderr F I1213 00:20:09.438017 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:df0c3d3a-8474-4c1f-ac97-c1820ebf2712}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.438238896+00:00 stderr F I1213 00:20:09.438173 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:df0c3d3a-8474-4c1f-ac97-c1820ebf2712}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.438616926+00:00 stderr F I1213 00:20:09.438575 28750 switch.go:50] Hybridoverlay port does not exist for node crc 2025-12-13T00:20:09.438616926+00:00 stderr F I1213 00:20:09.438594 28750 switch.go:59] haveMP true haveHO false ManagementPortAddress 10.217.0.2/23 HybridOverlayAddressOA 10.217.0.3/23 2025-12-13T00:20:09.438713539+00:00 stderr F I1213 00:20:09.438669 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch Row:map[other_config:{GoMap:map[mcast_eth_src:0a:58:0a:d9:00:01 mcast_ip4_src:10.217.0.1 mcast_querier:true mcast_snoop:true subnet:10.217.0.0/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.438713539+00:00 stderr F I1213 00:20:09.438697 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch Row:map[other_config:{GoMap:map[mcast_eth_src:0a:58:0a:d9:00:01 mcast_ip4_src:10.217.0.1 mcast_querier:true mcast_snoop:true subnet:10.217.0.0/23]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.439225953+00:00 stderr F I1213 00:20:09.439183 28750 hybrid.go:140] Removing node crc hybrid overlay port 
2025-12-13T00:20:09.439464319+00:00 stderr F I1213 00:20:09.439418 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[physical_ip:38.102.83.51 physical_ips:38.102.83.51]} load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:02fd96d5-5e32-4855-b83b-d257ecda4e0c}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:017e52b0-97d3-4d7d-aae4-9b216aa025aa dynamic_neigh_routers:true lb_force_snat_ip:router_ip mac_binding_age_threshold:300 snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.439490430+00:00 stderr F I1213 00:20:09.439456 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[physical_ip:38.102.83.51 physical_ips:38.102.83.51]} load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:02fd96d5-5e32-4855-b83b-d257ecda4e0c}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:017e52b0-97d3-4d7d-aae4-9b216aa025aa dynamic_neigh_routers:true lb_force_snat_ip:router_ip mac_binding_age_threshold:300 snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.440069216+00:00 stderr F I1213 00:20:09.440019 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.440130697+00:00 stderr F I1213 00:20:09.440084 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.440143808+00:00 stderr F I1213 00:20:09.440111 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.440589710+00:00 stderr F I1213 00:20:09.440551 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]} options:{GoMap:map[gateway_mtu:1400]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c521aef7-6fed-4a5a-b1d9-8e7a81736008}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.440624801+00:00 stderr F I1213 00:20:09.440604 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c521aef7-6fed-4a5a-b1d9-8e7a81736008}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.440667772+00:00 stderr F I1213 00:20:09.440623 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]} options:{GoMap:map[gateway_mtu:1400]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c521aef7-6fed-4a5a-b1d9-8e7a81736008}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c521aef7-6fed-4a5a-b1d9-8e7a81736008}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.441041213+00:00 stderr F I1213 00:20:09.440998 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3455ab3-baec-4c77-bf58-c8ee661db1b0}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.441104855+00:00 stderr F I1213 00:20:09.441058 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:b3455ab3-baec-4c77-bf58-c8ee661db1b0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.441134215+00:00 stderr F I1213 00:20:09.441095 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3455ab3-baec-4c77-bf58-c8ee661db1b0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:b3455ab3-baec-4c77-bf58-c8ee661db1b0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.441571097+00:00 stderr F I1213 00:20:09.441505 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:fa:16:3e:f0:63:3e networks:{GoSet:[38.102.83.51/24]} options:{GoMap:map[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d2b378f-284f-421b-b46b-c468b7e6cd8d}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.441571097+00:00 stderr F I1213 00:20:09.441557 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3d2b378f-284f-421b-b46b-c468b7e6cd8d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.441609888+00:00 stderr F I1213 00:20:09.441570 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:fa:16:3e:f0:63:3e networks:{GoSet:[38.102.83.51/24]} options:{GoMap:map[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d2b378f-284f-421b-b46b-c468b7e6cd8d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3d2b378f-284f-421b-b46b-c468b7e6cd8d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.442157953+00:00 stderr F I1213 00:20:09.442104 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516233f5-61de-4ea0-ae93-97e9413cd4fa}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.442383469+00:00 stderr F I1213 00:20:09.442334 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[fa:16:3e:f0:63:3e]} options:{GoMap:map[exclude-lb-vips-from-garp:true nat-addresses:router router-port:rtoe-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {32735011-f3ec-43b2-aac3-e6ed4dd32c30}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.442437941+00:00 stderr F I1213 00:20:09.442402 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:516233f5-61de-4ea0-ae93-97e9413cd4fa} {GoUUID:32735011-f3ec-43b2-aac3-e6ed4dd32c30}]}}] Timeout: Where:[where column _uuid == {18f52cdd-cee0-4b9c-87dc-4d090679e6ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.442512653+00:00 stderr F I1213 00:20:09.442429 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516233f5-61de-4ea0-ae93-97e9413cd4fa}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[fa:16:3e:f0:63:3e]} options:{GoMap:map[exclude-lb-vips-from-garp:true nat-addresses:router router-port:rtoe-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {32735011-f3ec-43b2-aac3-e6ed4dd32c30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:516233f5-61de-4ea0-ae93-97e9413cd4fa} {GoUUID:32735011-f3ec-43b2-aac3-e6ed4dd32c30}]}}] Timeout: Where:[where column _uuid == {18f52cdd-cee0-4b9c-87dc-4d090679e6ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.443102929+00:00 stderr F I1213 00:20:09.443068 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Static_MAC_Binding Row:map[ip:169.254.169.4 logical_port:rtoe-GR_crc mac:0a:58:a9:fe:a9:04 override_dynamic_mac:true] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46f7156c-697b-4394-a40f-1e7225954094}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.443121200+00:00 stderr F I1213 00:20:09.443093 28750 transact.go:42] Configuring OVN: [{Op:update Table:Static_MAC_Binding Row:map[ip:169.254.169.4 logical_port:rtoe-GR_crc mac:0a:58:a9:fe:a9:04 override_dynamic_mac:true] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46f7156c-697b-4394-a40f-1e7225954094}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.443460429+00:00 stderr F I1213 00:20:09.443416 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:169.254.169.4] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2846b89d-97b9-453b-9343-7231c57eea62}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.443511611+00:00 stderr F I1213 00:20:09.443474 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2846b89d-97b9-453b-9343-7231c57eea62}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.443527752+00:00 stderr F I1213 00:20:09.443494 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:169.254.169.4] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2846b89d-97b9-453b-9343-7231c57eea62}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2846b89d-97b9-453b-9343-7231c57eea62}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.443882361+00:00 stderr F I1213 00:20:09.443851 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:38.102.83.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {734af81b-a9a2-45f9-a8bb-d8bce515d37d}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.443904462+00:00 stderr F I1213 00:20:09.443890 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:734af81b-a9a2-45f9-a8bb-d8bce515d37d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.443964093+00:00 stderr F I1213 00:20:09.443903 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:38.102.83.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {734af81b-a9a2-45f9-a8bb-d8bce515d37d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:734af81b-a9a2-45f9-a8bb-d8bce515d37d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.444259752+00:00 stderr F I1213 00:20:09.444222 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1e0c1c89-e407-443a-b61e-39d1499f5b36}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.444283472+00:00 stderr F I1213 00:20:09.444265 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:1e0c1c89-e407-443a-b61e-39d1499f5b36}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.444309313+00:00 stderr F I1213 00:20:09.444280 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1e0c1c89-e407-443a-b61e-39d1499f5b36}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:1e0c1c89-e407-443a-b61e-39d1499f5b36}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.445067453+00:00 stderr F I1213 00:20:09.445028 28750 ovs.go:162] Exec(16): stdout: ""
2025-12-13T00:20:09.445145126+00:00 stderr F I1213 00:20:09.445126 28750 ovs.go:163] Exec(16): stderr: ""
2025-12-13T00:20:09.445218988+00:00 stderr F I1213 00:20:09.445077 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:100.64.0.2 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92357530-52ac-4fd6-8a9d-74712c56c6bf}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.445275119+00:00 stderr F I1213 00:20:09.445223 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:92357530-52ac-4fd6-8a9d-74712c56c6bf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.445275119+00:00 stderr F I1213 00:20:09.445246 28750 transact.go:42] Configuring OVN: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:100.64.0.2 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92357530-52ac-4fd6-8a9d-74712c56c6bf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:92357530-52ac-4fd6-8a9d-74712c56c6bf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.448675093+00:00 stderr F I1213 00:20:09.448637 28750 default_node_network_controller.go:778] Node crc ready for ovn initialization with subnet 10.217.0.0/23
2025-12-13T00:20:09.448805736+00:00 stderr F I1213 00:20:09.448760 28750 ovs.go:159] Exec(17): /usr/bin/ovs-vsctl --timeout=15 --no-headings --data bare --format csv --columns type,name find Interface name=ovn-k8s-mp0
2025-12-13T00:20:09.456381705+00:00 stderr F I1213 00:20:09.456325 28750 ovs.go:162] Exec(17): stdout: "internal,ovn-k8s-mp0\n"
2025-12-13T00:20:09.456381705+00:00 stderr F I1213 00:20:09.456367 28750 ovs.go:163] Exec(17): stderr: ""
2025-12-13T00:20:09.456407306+00:00 stderr F I1213 00:20:09.456390 28750 ovs.go:159] Exec(18): /usr/bin/ovs-vsctl --timeout=15 --no-headings --data bare --format csv --columns type,name find Interface name=ovn-k8s-mp0_0
2025-12-13T00:20:09.462896455+00:00 stderr F I1213 00:20:09.462847 28750 ovs.go:162] Exec(18): stdout: ""
2025-12-13T00:20:09.462896455+00:00 stderr F I1213 00:20:09.462868 28750 ovs.go:163] Exec(18): stderr: ""
2025-12-13T00:20:09.463141162+00:00 stderr F I1213 00:20:09.463111 28750 ovs.go:159] Exec(19): /usr/bin/ovs-vsctl --timeout=15 -- --if-exists del-port br-int k8s-crc -- --may-exist add-port br-int ovn-k8s-mp0 -- set interface ovn-k8s-mp0 type=internal mtu_request=1400 external-ids:iface-id=k8s-crc
2025-12-13T00:20:09.464091208+00:00 stderr F I1213 00:20:09.464032 28750 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0739e536-1258-4533-951f-bde9a4f368fa}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.464150530+00:00 stderr F I1213 00:20:09.464104 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:0739e536-1258-4533-951f-bde9a4f368fa}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.464168260+00:00 stderr F I1213 00:20:09.464142 28750 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0739e536-1258-4533-951f-bde9a4f368fa}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:0739e536-1258-4533-951f-bde9a4f368fa}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.464872030+00:00 stderr F I1213 00:20:09.464832 28750 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {aaf9c965-eb0d-4559-b691-4ded50472dfe}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.464922661+00:00 stderr F I1213 00:20:09.464883 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:aaf9c965-eb0d-4559-b691-4ded50472dfe}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.464944802+00:00 stderr F I1213 00:20:09.464909 28750 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {aaf9c965-eb0d-4559-b691-4ded50472dfe}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:aaf9c965-eb0d-4559-b691-4ded50472dfe}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.465435465+00:00 stderr F I1213 00:20:09.465405 28750 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a21fe4c-ea6d-4734-a67e-d70ab4d56148}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.465502957+00:00 stderr F I1213 00:20:09.465481 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:7a21fe4c-ea6d-4734-a67e-d70ab4d56148}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.465539738+00:00 stderr F I1213 00:20:09.465519 28750 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a21fe4c-ea6d-4734-a67e-d70ab4d56148}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:7a21fe4c-ea6d-4734-a67e-d70ab4d56148}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.466008501+00:00 stderr F I1213 00:20:09.465980 28750 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6aa775b-5c58-4294-9cce-da72df51bfb2}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.466033602+00:00 stderr F I1213 00:20:09.466014 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:a6aa775b-5c58-4294-9cce-da72df51bfb2}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.466056593+00:00 stderr F I1213 00:20:09.466028 28750 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6aa775b-5c58-4294-9cce-da72df51bfb2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:a6aa775b-5c58-4294-9cce-da72df51bfb2}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.466449503+00:00 stderr F I1213 00:20:09.466426 28750 model_client.go:406] Delete operations generated as: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1def9c16-859c-4bea-bb72-5a1fb7cd5755}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.466502615+00:00 stderr F I1213 00:20:09.466484 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:1def9c16-859c-4bea-bb72-5a1fb7cd5755}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.466539516+00:00 stderr F I1213 00:20:09.466518 28750 transact.go:42] Configuring OVN: [{Op:delete Table:Logical_Router_Policy Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1def9c16-859c-4bea-bb72-5a1fb7cd5755}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:delete Value:{GoSet:[{GoUUID:1def9c16-859c-4bea-bb72-5a1fb7cd5755}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.471460631+00:00 stderr F I1213 00:20:09.471431 28750 ovs.go:162] Exec(19): stdout: ""
2025-12-13T00:20:09.471460631+00:00 stderr F I1213 00:20:09.471449 28750 ovs.go:163] Exec(19): stderr: ""
2025-12-13T00:20:09.471478501+00:00 stderr F I1213 00:20:09.471461 28750 ovs.go:159] Exec(20): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface ovn-k8s-mp0 mac_in_use
2025-12-13T00:20:09.483547065+00:00 stderr F I1213 00:20:09.483494 28750 ovs.go:162] Exec(20): stdout: "\"b6:dc:d9:26:03:d4\"\n"
2025-12-13T00:20:09.483547065+00:00 stderr F I1213 00:20:09.483525 28750 ovs.go:163] Exec(20): stderr: ""
2025-12-13T00:20:09.483573686+00:00 stderr F I1213 00:20:09.483543 28750 ovs.go:159] Exec(21): /usr/bin/ovs-vsctl --timeout=15 set interface ovn-k8s-mp0 mac=b6\:dc\:d9\:26\:03\:d4
2025-12-13T00:20:09.486767844+00:00 stderr F I1213 00:20:09.486724 28750 base_network_controller.go:458] When adding node crc for network default, found 81 pods to add to retryPods
2025-12-13T00:20:09.486767844+00:00 stderr F I1213 00:20:09.486752 28750 base_network_controller.go:464] Adding pod hostpath-provisioner/csi-hostpathplugin-hvm8g to retryPods for network default
2025-12-13T00:20:09.486786454+00:00 stderr F I1213 00:20:09.486770 28750 base_network_controller.go:464] Adding pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m to retryPods for network default
2025-12-13T00:20:09.486786454+00:00 stderr F I1213 00:20:09.486781 28750 base_network_controller.go:464] Adding pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp to retryPods for network default
2025-12-13T00:20:09.486799464+00:00 stderr F I1213 00:20:09.486793 28750 base_network_controller.go:464] Adding pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 to retryPods for network default
2025-12-13T00:20:09.486809505+00:00 stderr F I1213 00:20:09.486802 28750 base_network_controller.go:464] Adding pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b to retryPods for network default
2025-12-13T00:20:09.486827035+00:00 stderr F I1213 00:20:09.486812 28750 base_network_controller.go:464] Adding pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 to retryPods for network default
2025-12-13T00:20:09.486827035+00:00 stderr F I1213 00:20:09.486822 28750 base_network_controller.go:464] Adding pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg to retryPods for network default
2025-12-13T00:20:09.486839545+00:00 stderr F I1213 00:20:09.486832 28750 base_network_controller.go:464] Adding pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 to retryPods for network default
2025-12-13T00:20:09.486849436+00:00 stderr F I1213 00:20:09.486841 28750 base_network_controller.go:464] Adding pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc to retryPods for network default
2025-12-13T00:20:09.486861326+00:00 stderr F I1213 00:20:09.486855 28750 base_network_controller.go:464] Adding pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 to retryPods for network default
2025-12-13T00:20:09.486873236+00:00 stderr F I1213 00:20:09.486866 28750 base_network_controller.go:464] Adding pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd to retryPods for network default
2025-12-13T00:20:09.486882987+00:00 stderr F I1213 00:20:09.486876 28750 base_network_controller.go:464] Adding pod openshift-console/console-644bb77b49-5x5xk to retryPods for network default
2025-12-13T00:20:09.486892607+00:00 stderr F I1213 00:20:09.486886 28750 base_network_controller.go:464] Adding pod openshift-console/downloads-65476884b9-9wcvx to retryPods for network default
2025-12-13T00:20:09.486904467+00:00 stderr F I1213 00:20:09.486898 28750 base_network_controller.go:464] Adding pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z to retryPods for network default
2025-12-13T00:20:09.486916328+00:00 stderr F I1213 00:20:09.486908 28750 base_network_controller.go:464] Adding pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf to retryPods for network default
2025-12-13T00:20:09.486925998+00:00 stderr F I1213 00:20:09.486918 28750 base_network_controller.go:464] Adding pod openshift-dns-operator/dns-operator-75f687757b-nz2xb to retryPods for network default
2025-12-13T00:20:09.486962859+00:00 stderr F I1213 00:20:09.486954 28750 base_network_controller.go:464] Adding pod openshift-dns/dns-default-gbw49 to retryPods for network default
2025-12-13T00:20:09.486975469+00:00 stderr F I1213 00:20:09.486966 28750 base_network_controller.go:464] Adding pod openshift-dns/node-resolver-dn27q to retryPods for network default
2025-12-13T00:20:09.486985129+00:00 stderr F I1213 00:20:09.486978 28750 base_network_controller.go:464] Adding pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg to retryPods for network default
2025-12-13T00:20:09.486994750+00:00 stderr F I1213 00:20:09.486988 28750 base_network_controller.go:464] Adding pod openshift-etcd/etcd-crc to retryPods for network default
2025-12-13T00:20:09.487006770+00:00 stderr F I1213 00:20:09.486998 28750 base_network_controller.go:464] Adding pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv to retryPods for network default
2025-12-13T00:20:09.487016400+00:00 stderr F I1213 00:20:09.487009 28750 base_network_controller.go:464] Adding pod openshift-image-registry/image-registry-75b7bb6564-rnjvj to retryPods for network default
2025-12-13T00:20:09.487086302+00:00 stderr F I1213 00:20:09.487054 28750 base_network_controller.go:464] Adding pod openshift-image-registry/node-ca-l92hr to retryPods for network default
2025-12-13T00:20:09.487086302+00:00 stderr F I1213 00:20:09.487073 28750 base_network_controller.go:464] Adding pod openshift-ingress-canary/ingress-canary-2vhcn to retryPods for network default
2025-12-13T00:20:09.487102043+00:00 stderr F I1213 00:20:09.487084 28750 base_network_controller.go:464] Adding pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t to retryPods for network default
2025-12-13T00:20:09.487102043+00:00 stderr F I1213 00:20:09.487097 28750 base_network_controller.go:464] Adding pod openshift-ingress/router-default-5c9bf7bc58-6jctv to retryPods for network default
2025-12-13T00:20:09.487114523+00:00 stderr F I1213 00:20:09.487108 28750 base_network_controller.go:464] Adding pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 to retryPods for network default
2025-12-13T00:20:09.487126423+00:00 stderr F I1213 00:20:09.487119 28750 base_network_controller.go:464] Adding pod openshift-kube-apiserver/installer-13-crc to retryPods for network default
2025-12-13T00:20:09.487135964+00:00 stderr F I1213 00:20:09.487129 28750 base_network_controller.go:464] Adding pod openshift-kube-apiserver/kube-apiserver-crc to retryPods for network default
2025-12-13T00:20:09.487147884+00:00 stderr F I1213 00:20:09.487140 28750 base_network_controller.go:464] Adding pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb to retryPods for network default
2025-12-13T00:20:09.487157424+00:00 stderr F I1213 00:20:09.487151 28750 base_network_controller.go:464] Adding pod openshift-kube-controller-manager/kube-controller-manager-crc to retryPods for network default
2025-12-13T00:20:09.487169384+00:00 stderr F I1213 00:20:09.487163 28750 base_network_controller.go:464] Adding pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 to retryPods for network default
2025-12-13T00:20:09.487181255+00:00 stderr F I1213 00:20:09.487173 28750 base_network_controller.go:464] Adding pod openshift-kube-scheduler/openshift-kube-scheduler-crc to retryPods for network default
2025-12-13T00:20:09.487190785+00:00 stderr F I1213 00:20:09.487183 28750 base_network_controller.go:464] Adding pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr to retryPods for network default
2025-12-13T00:20:09.487202635+00:00 stderr F I1213 00:20:09.487194 28750 base_network_controller.go:464] Adding pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv to retryPods for network default
2025-12-13T00:20:09.487212186+00:00 stderr F I1213 00:20:09.487204 28750 base_network_controller.go:464] Adding pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw to retryPods for network default
2025-12-13T00:20:09.487221686+00:00 stderr F I1213 00:20:09.487214 28750 base_network_controller.go:464] Adding pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb to retryPods for network default
2025-12-13T00:20:09.487231286+00:00 stderr F I1213 00:20:09.487224 28750 base_network_controller.go:464] Adding pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc to retryPods for network default
2025-12-13T00:20:09.487240916+00:00 stderr F I1213 00:20:09.487234 28750 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh to retryPods for network default
2025-12-13T00:20:09.487252787+00:00 stderr F I1213 00:20:09.487244 28750 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-daemon-zpnhg to retryPods for network default
2025-12-13T00:20:09.487262327+00:00 stderr F I1213 00:20:09.487254 28750 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm to retryPods for network default
2025-12-13T00:20:09.487271847+00:00 stderr F I1213 00:20:09.487265 28750 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-server-v65wr to retryPods for network default
2025-12-13T00:20:09.487281467+00:00 stderr F I1213 00:20:09.487274 28750 base_network_controller.go:464] Adding pod openshift-marketplace/certified-operators-lcrg8 to retryPods for network default
2025-12-13T00:20:09.487294848+00:00 stderr F I1213 00:20:09.487284 28750 base_network_controller.go:464] Adding pod openshift-marketplace/community-operators-s2hxn to retryPods for network default
2025-12-13T00:20:09.487304368+00:00 stderr F I1213 00:20:09.487295 28750 base_network_controller.go:464] Adding pod openshift-marketplace/marketplace-operator-8b455464d-kghgr to retryPods for network default
2025-12-13T00:20:09.487313848+00:00 stderr F I1213 00:20:09.487305 28750 base_network_controller.go:464] Adding pod openshift-marketplace/redhat-marketplace-nv4pl to retryPods for network default
2025-12-13T00:20:09.487323319+00:00 stderr F I1213 00:20:09.487315 28750 base_network_controller.go:464] Adding pod openshift-marketplace/redhat-operators-zg7cl to retryPods for network default
2025-12-13T00:20:09.487332809+00:00 stderr F I1213 00:20:09.487325 28750 base_network_controller.go:464] Adding pod openshift-multus/multus-additional-cni-plugins-bzj2p to retryPods for network default
2025-12-13T00:20:09.487342459+00:00 stderr F I1213 00:20:09.487336 28750 base_network_controller.go:464] Adding pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc to retryPods for network default
2025-12-13T00:20:09.487355169+00:00 stderr F I1213 00:20:09.487348 28750 base_network_controller.go:464] Adding pod openshift-multus/multus-q88th to retryPods for network default
2025-12-13T00:20:09.487367110+00:00 stderr F I1213 00:20:09.487360 28750 base_network_controller.go:464] Adding pod openshift-multus/network-metrics-daemon-qdfr4 to retryPods for network default
2025-12-13T00:20:09.487376640+00:00 stderr F I1213 00:20:09.487370 28750 base_network_controller.go:464] Adding pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 to retryPods for network default
2025-12-13T00:20:09.487388600+00:00 stderr F I1213 00:20:09.487380 28750 base_network_controller.go:464] Adding pod openshift-network-diagnostics/network-check-target-v54bt to retryPods for network default
2025-12-13T00:20:09.487398151+00:00 stderr F I1213 00:20:09.487390 28750 base_network_controller.go:464] Adding pod openshift-network-node-identity/network-node-identity-7xghp to retryPods for network default
2025-12-13T00:20:09.487407621+00:00 stderr F I1213 00:20:09.487399 28750 base_network_controller.go:464] Adding pod openshift-network-operator/iptables-alerter-wwpnd to retryPods for network default
2025-12-13T00:20:09.487417141+00:00 stderr F I1213 00:20:09.487410 28750 base_network_controller.go:464] Adding pod openshift-network-operator/network-operator-767c585db5-zd56b to retryPods for network default
2025-12-13T00:20:09.487426671+00:00 stderr F I1213 00:20:09.487419 28750 base_network_controller.go:464] Adding pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd to retryPods for network default
2025-12-13T00:20:09.487436152+00:00 stderr F I1213 00:20:09.487429 28750 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf to retryPods for network default
2025-12-13T00:20:09.487447822+00:00 stderr F I1213 00:20:09.487441 28750 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh to retryPods for network default
2025-12-13T00:20:09.487459612+00:00 stderr F I1213 00:20:09.487453 28750 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 to retryPods for network default
2025-12-13T00:20:09.487469283+00:00 stderr F I1213 00:20:09.487463 28750 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz to retryPods for network default
2025-12-13T00:20:09.487480993+00:00 stderr F I1213 00:20:09.487474 28750 base_network_controller.go:464] Adding pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b to retryPods for network default
2025-12-13T00:20:09.487490513+00:00 stderr F I1213 00:20:09.487484 28750 base_network_controller.go:464] Adding pod openshift-ovn-kubernetes/ovnkube-node-brz8k to retryPods for network default
2025-12-13T00:20:09.487503484+00:00 stderr F I1213 00:20:09.487494 28750 base_network_controller.go:464] Adding pod
openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs to retryPods for network default 2025-12-13T00:20:09.487513024+00:00 stderr F I1213 00:20:09.487503 28750 base_network_controller.go:464] Adding pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz to retryPods for network default 2025-12-13T00:20:09.487522514+00:00 stderr F I1213 00:20:09.487514 28750 base_network_controller.go:464] Adding pod openshift-service-ca/service-ca-666f99b6f-kk8kg to retryPods for network default 2025-12-13T00:20:09.487534344+00:00 stderr F I1213 00:20:09.487524 28750 obj_retry.go:233] Iterate retry objects requested (resource *v1.Pod) 2025-12-13T00:20:09.487687879+00:00 stderr F I1213 00:20:09.487620 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Encap Row:map[chassis_name:017e52b0-97d3-4d7d-aae4-9b216aa025aa ip:192.168.126.11 options:{GoMap:map[csum:true]} type:geneve] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4ffeb60a-fdfe-4f88-8208-1eba752e78d6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.487792411+00:00 stderr F I1213 00:20:09.487740 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Chassis Row:map[encaps:{GoSet:[{GoUUID:4ffeb60a-fdfe-4f88-8208-1eba752e78d6}]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b93f58b2-b1ad-47d9-8850-0082ad97586e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.487843403+00:00 stderr F I1213 00:20:09.487802 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Chassis Row:map[] Rows:[] Columns:[] Mutations:[{Column:other_config Mutator:delete Value:{GoSet:[is-remote]}} {Column:other_config Mutator:insert Value:{GoMap:map[is-remote:false]}}] Timeout: Where:[where column _uuid == {b93f58b2-b1ad-47d9-8850-0082ad97586e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.487900914+00:00 stderr F I1213 00:20:09.487840 
28750 transact.go:42] Configuring OVN: [{Op:update Table:Encap Row:map[chassis_name:017e52b0-97d3-4d7d-aae4-9b216aa025aa ip:192.168.126.11 options:{GoMap:map[csum:true]} type:geneve] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4ffeb60a-fdfe-4f88-8208-1eba752e78d6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Chassis Row:map[encaps:{GoSet:[{GoUUID:4ffeb60a-fdfe-4f88-8208-1eba752e78d6}]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b93f58b2-b1ad-47d9-8850-0082ad97586e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Chassis Row:map[] Rows:[] Columns:[] Mutations:[{Column:other_config Mutator:delete Value:{GoSet:[is-remote]}} {Column:other_config Mutator:insert Value:{GoMap:map[is-remote:false]}}] Timeout: Where:[where column _uuid == {b93f58b2-b1ad-47d9-8850-0082ad97586e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.489552500+00:00 stderr F I1213 00:20:09.489528 28750 zone_ic_handler.go:220] Creating interconnect resources for local zone node crc for the network default 2025-12-13T00:20:09.489697284+00:00 stderr F I1213 00:20:09.489649 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:58:00:02 name:rtots-crc networks:{GoSet:[100.88.0.2/16]} options:{GoMap:map[mcast_flood:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e084d304-085a-4130-a762-c61b2dfff5af}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.489771596+00:00 stderr F I1213 00:20:09.489746 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e084d304-085a-4130-a762-c61b2dfff5af}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.489837718+00:00 
stderr F I1213 00:20:09.489799 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:58:00:02 name:rtots-crc networks:{GoSet:[100.88.0.2/16]} options:{GoMap:map[mcast_flood:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e084d304-085a-4130-a762-c61b2dfff5af}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e084d304-085a-4130-a762-c61b2dfff5af}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.489927501+00:00 stderr F I1213 00:20:09.489893 28750 ovs.go:162] Exec(21): stdout: "" 2025-12-13T00:20:09.489927501+00:00 stderr F I1213 00:20:09.489914 28750 ovs.go:163] Exec(21): stderr: "" 2025-12-13T00:20:09.490739483+00:00 stderr F I1213 00:20:09.490647 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[requested-tnl-key:2 router-port:rtots-crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff13d99b-65b5-4b3b-bc27-534de830144b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.490813395+00:00 stderr F I1213 00:20:09.490765 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ff13d99b-65b5-4b3b-bc27-534de830144b}]}}] Timeout: Where:[where column _uuid == {d7c11e7c-4a6b-41fe-87a6-5c70659238bf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.490872076+00:00 stderr F I1213 00:20:09.490803 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} 
options:{GoMap:map[requested-tnl-key:2 router-port:rtots-crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff13d99b-65b5-4b3b-bc27-534de830144b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ff13d99b-65b5-4b3b-bc27-534de830144b}]}}] Timeout: Where:[where column _uuid == {d7c11e7c-4a6b-41fe-87a6-5c70659238bf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.493437027+00:00 stderr F I1213 00:20:09.493394 28750 route_manager.go:155] Route Manager: netlink route deletion event: "{Ifindex: 5 Dst: 10.217.0.0/23 Src: 10.217.0.2 Gw: Flags: [] Table: 254 Realm: 0}" 2025-12-13T00:20:09.493492859+00:00 stderr F I1213 00:20:09.493478 28750 route_manager.go:155] Route Manager: netlink route deletion event: "{Ifindex: 5 Dst: 10.217.1.255/32 Src: 10.217.0.2 Gw: Flags: [] Table: 255 Realm: 0}" 2025-12-13T00:20:09.493531910+00:00 stderr F I1213 00:20:09.493518 28750 route_manager.go:155] Route Manager: netlink route deletion event: "{Ifindex: 5 Dst: 10.217.0.2/32 Src: 10.217.0.2 Gw: Flags: [] Table: 255 Realm: 0}" 2025-12-13T00:20:09.493770176+00:00 stderr F I1213 00:20:09.493746 28750 obj_retry.go:541] Creating *v1.Node crc took: 60.544079ms 2025-12-13T00:20:09.493827748+00:00 stderr F I1213 00:20:09.493812 28750 factory.go:988] Added *v1.Node event handler 2 2025-12-13T00:20:09.493907780+00:00 stderr F I1213 00:20:09.493861 28750 ovn.go:449] Starting OVN Service Controller: Using Endpoint Slices 2025-12-13T00:20:09.494137007+00:00 stderr F I1213 00:20:09.494079 28750 services_controller.go:168] Starting controller ovn-lb-controller 2025-12-13T00:20:09.494198309+00:00 stderr F I1213 00:20:09.494171 28750 services_controller.go:176] Waiting for node tracker handler to sync 2025-12-13T00:20:09.494210369+00:00 stderr F I1213 00:20:09.494195 28750 
shared_informer.go:311] Waiting for caches to sync for node-tracker-controller 2025-12-13T00:20:09.494269441+00:00 stderr F I1213 00:20:09.494235 28750 node_tracker.go:204] Processing possible switch / router updates for node crc 2025-12-13T00:20:09.494416985+00:00 stderr F I1213 00:20:09.494371 28750 node_tracker.go:165] Node crc switch + router changed, syncing services 2025-12-13T00:20:09.494416985+00:00 stderr F I1213 00:20:09.494392 28750 services_controller.go:519] Full service sync requested 2025-12-13T00:20:09.495876834+00:00 stderr F W1213 00:20:09.495833 28750 base_network_controller_pods.go:88] Already allocated IPs: 10.217.0.55 for pod: openshift-kube-controller-manager_revision-pruner-8-crc in phase: 0xc0007d9528 on switch: crc 2025-12-13T00:20:09.496751179+00:00 stderr F I1213 00:20:09.496694 28750 default_network_controller.go:655] Recording add event on pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-12-13T00:20:09.496751179+00:00 stderr F I1213 00:20:09.496722 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/installer-10-crc 2025-12-13T00:20:09.496751179+00:00 stderr F I1213 00:20:09.496734 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-12-13T00:20:09.496776240+00:00 stderr F I1213 00:20:09.496745 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/installer-10-crc 2025-12-13T00:20:09.496776240+00:00 stderr F I1213 00:20:09.496742 28750 default_network_controller.go:655] Recording add event on pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-12-13T00:20:09.496776240+00:00 stderr F I1213 00:20:09.496761 28750 ovn.go:134] Ensuring zone local for Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t in node crc 2025-12-13T00:20:09.496816831+00:00 stderr F I1213 00:20:09.496778 28750 obj_retry.go:502] Add event received 
for *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-12-13T00:20:09.496816831+00:00 stderr F I1213 00:20:09.496803 28750 default_network_controller.go:655] Recording add event on pod openshift-dns/node-resolver-dn27q 2025-12-13T00:20:09.496816831+00:00 stderr F I1213 00:20:09.496762 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-10-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.496832521+00:00 stderr F I1213 00:20:09.496824 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-dns/node-resolver-dn27q 2025-12-13T00:20:09.496844272+00:00 stderr F I1213 00:20:09.496836 28750 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2025-12-13T00:20:09.496844272+00:00 stderr F I1213 00:20:09.496839 28750 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dn27q in node crc 2025-12-13T00:20:09.496854852+00:00 stderr F I1213 00:20:09.496844 28750 base_network_controller_pods.go:476] [default/openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t] creating logical port openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t for pod on switch crc 2025-12-13T00:20:09.496864882+00:00 stderr F I1213 00:20:09.496853 28750 default_network_controller.go:655] Recording add event on pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-12-13T00:20:09.496864882+00:00 stderr F I1213 00:20:09.496859 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-12-13T00:20:09.496874863+00:00 stderr F I1213 00:20:09.496861 28750 obj_retry.go:541] Creating *v1.Pod openshift-dns/node-resolver-dn27q took: 23.941µs 2025-12-13T00:20:09.496891733+00:00 stderr F I1213 00:20:09.496873 28750 default_network_controller.go:655] Recording add event on 
pod openshift-console/console-644bb77b49-5x5xk 2025-12-13T00:20:09.496891733+00:00 stderr F I1213 00:20:09.496874 28750 default_network_controller.go:699] Recording success event on pod openshift-dns/node-resolver-dn27q 2025-12-13T00:20:09.496891733+00:00 stderr F I1213 00:20:09.496880 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-console/console-644bb77b49-5x5xk 2025-12-13T00:20:09.496902793+00:00 stderr F I1213 00:20:09.496889 28750 ovn.go:134] Ensuring zone local for Pod openshift-console/console-644bb77b49-5x5xk in node crc 2025-12-13T00:20:09.496971075+00:00 stderr F I1213 00:20:09.496916 28750 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-dns/node-resolver-dn27q. OVN-Kubernetes controller took 6.1512e-05 seconds. No OVN measurement. 2025-12-13T00:20:09.496971075+00:00 stderr F I1213 00:20:09.496827 28750 port_cache.go:122] port-cache(openshift-kube-controller-manager_installer-10-crc): port not found in cache or already marked for removal 2025-12-13T00:20:09.496971075+00:00 stderr F I1213 00:20:09.496927 28750 base_network_controller_pods.go:476] [default/openshift-console/console-644bb77b49-5x5xk] creating logical port openshift-console_console-644bb77b49-5x5xk for pod on switch crc 2025-12-13T00:20:09.496971075+00:00 stderr F I1213 00:20:09.496963 28750 pods.go:151] Deleting pod: openshift-kube-controller-manager/installer-10-crc 2025-12-13T00:20:09.497114989+00:00 stderr F W1213 00:20:09.497086 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/installer-10-crc. 
Using logical switch crc port uuid and addrs [10.217.0.69/23] 2025-12-13T00:20:09.497189311+00:00 stderr F I1213 00:20:09.497164 28750 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.69] from address set 2025-12-13T00:20:09.497249303+00:00 stderr F I1213 00:20:09.497204 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.69]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.497306434+00:00 stderr F I1213 00:20:09.497289 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/revision-pruner-9-crc 2025-12-13T00:20:09.497353256+00:00 stderr F I1213 00:20:09.497308 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.69]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.497353256+00:00 stderr F I1213 00:20:09.497302 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} options:{GoMap:map[iface-id-ver:7d51f445-054a-4e4f-a67b-a828f5a32511 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{710ea152-1844-44ad-b1a6-805ec9a3700e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.497365016+00:00 stderr F I1213 00:20:09.497352 28750 default_network_controller.go:655] Recording add event on pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-12-13T00:20:09.497406277+00:00 stderr F I1213 00:20:09.497380 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-12-13T00:20:09.497417427+00:00 stderr F I1213 00:20:09.497382 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.497427228+00:00 stderr F I1213 00:20:09.497417 28750 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-12-13T00:20:09.497436938+00:00 stderr F I1213 00:20:09.497430 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-12-13T00:20:09.497448948+00:00 stderr F I1213 00:20:09.497442 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh in node crc 2025-12-13T00:20:09.497475289+00:00 stderr F I1213 00:20:09.497387 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/revision-pruner-9-crc 2025-12-13T00:20:09.497517040+00:00 stderr F I1213 00:20:09.497494 28750 default_network_controller.go:655] Recording add event on pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-12-13T00:20:09.497517040+00:00 stderr F I1213 00:20:09.497497 28750 
default_network_controller.go:655] Recording add event on pod openshift-image-registry/node-ca-l92hr 2025-12-13T00:20:09.497517040+00:00 stderr F I1213 00:20:09.497510 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-12-13T00:20:09.497530600+00:00 stderr F I1213 00:20:09.497515 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-image-registry/node-ca-l92hr 2025-12-13T00:20:09.497530600+00:00 stderr F I1213 00:20:09.497404 28750 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 in node crc 2025-12-13T00:20:09.497541131+00:00 stderr F I1213 00:20:09.497534 28750 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-l92hr in node crc 2025-12-13T00:20:09.497550931+00:00 stderr F I1213 00:20:09.497541 28750 obj_retry.go:541] Creating *v1.Pod openshift-image-registry/node-ca-l92hr took: 7.511µs 2025-12-13T00:20:09.497561521+00:00 stderr F I1213 00:20:09.496846 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2025-12-13T00:20:09.497561521+00:00 stderr F I1213 00:20:09.497549 28750 default_network_controller.go:699] Recording success event on pod openshift-image-registry/node-ca-l92hr 2025-12-13T00:20:09.497571912+00:00 stderr F I1213 00:20:09.497557 28750 base_network_controller_pods.go:476] [default/openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7] creating logical port openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7 for pod on switch crc 2025-12-13T00:20:09.497571912+00:00 stderr F I1213 00:20:09.497562 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/installer-10-retry-1-crc 2025-12-13T00:20:09.497571912+00:00 stderr F I1213 00:20:09.497562 28750 obj_retry.go:459] Detected object 
openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.497582912+00:00 stderr F I1213 00:20:09.497286 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} options:{GoMap:map[iface-id-ver:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79516e-7a72-4d42-b0ab-87a99aa064f3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.497597182+00:00 stderr F I1213 00:20:09.496867 28750 ovn.go:134] Ensuring zone local for Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 in node crc 2025-12-13T00:20:09.497597182+00:00 stderr F I1213 00:20:09.497589 28750 port_cache.go:122] port-cache(openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd): port not found in cache or already marked for removal 2025-12-13T00:20:09.497607432+00:00 stderr F I1213 00:20:09.497599 28750 pods.go:151] Deleting pod: openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2025-12-13T00:20:09.497617203+00:00 stderr F I1213 00:20:09.497604 28750 base_network_controller_pods.go:476] [default/openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8] creating logical port openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8 for pod on switch crc 2025-12-13T00:20:09.497648964+00:00 stderr F I1213 00:20:09.497632 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-9-crc of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.497695325+00:00 stderr F I1213 00:20:09.497667 28750 default_network_controller.go:655] Recording add event on pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-12-13T00:20:09.497695325+00:00 stderr F I1213 00:20:09.497688 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-12-13T00:20:09.497706605+00:00 stderr F I1213 00:20:09.497700 28750 ovn.go:134] Ensuring zone local for Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b in node crc 2025-12-13T00:20:09.497718586+00:00 stderr F W1213 00:20:09.497705 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd. Using logical switch crc port uuid and addrs [10.217.0.98/23] 2025-12-13T00:20:09.497751016+00:00 stderr F I1213 00:20:09.497728 28750 base_network_controller_pods.go:476] [default/openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b] creating logical port openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b for pod on switch crc 2025-12-13T00:20:09.497788917+00:00 stderr F I1213 00:20:09.497775 28750 port_cache.go:122] port-cache(openshift-kube-controller-manager_revision-pruner-9-crc): port not found in cache or already marked for removal 2025-12-13T00:20:09.497821638+00:00 stderr F I1213 00:20:09.497809 28750 pods.go:151] Deleting pod: openshift-kube-controller-manager/revision-pruner-9-crc 2025-12-13T00:20:09.497877160+00:00 stderr F I1213 00:20:09.497827 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} options:{GoMap:map[iface-id-ver:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column 
_uuid == {99ef3a4b-7858-4c9b-90db-217867afe36a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.497972082+00:00 stderr F I1213 00:20:09.497864 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} options:{GoMap:map[iface-id-ver:d0f40333-c860-4c04-8058-a0bf572dcf12 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3644fddd-ceae-4a64-8b00-dadf73515945}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.497997923+00:00 stderr F I1213 00:20:09.497958 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.497997923+00:00 stderr F I1213 00:20:09.496706 28750 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-12-13T00:20:09.498011383+00:00 stderr F I1213 00:20:09.498003 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-12-13T00:20:09.498023634+00:00 stderr F I1213 00:20:09.497991 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498023634+00:00 stderr F I1213 00:20:09.497995 28750 model_client.go:381] Update operations generated as: 
[{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} options:{GoMap:map[iface-id-ver:01feb2e0-a0f4-4573-8335-34e364e0ef40 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e86699a-fa52-4a81-9386-60d37f3fa10c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498039434+00:00 stderr F I1213 00:20:09.498031 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc 2025-12-13T00:20:09.498050525+00:00 stderr F I1213 00:20:09.498038 28750 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc took: 26.341µs 2025-12-13T00:20:09.498050525+00:00 stderr F I1213 00:20:09.498045 28750 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-12-13T00:20:09.498060605+00:00 stderr F I1213 00:20:09.498054 28750 default_network_controller.go:655] Recording add event on pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-12-13T00:20:09.498070505+00:00 stderr F I1213 00:20:09.498050 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498070505+00:00 stderr F I1213 00:20:09.497528 28750 ovn.go:134] Ensuring zone local for Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 in node crc 2025-12-13T00:20:09.498080825+00:00 stderr F I1213 00:20:09.498072 28750 obj_retry.go:541] Creating *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 took: 
544.765µs 2025-12-13T00:20:09.498091656+00:00 stderr F I1213 00:20:09.498078 28750 default_network_controller.go:699] Recording success event on pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-12-13T00:20:09.498091656+00:00 stderr F I1213 00:20:09.498070 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498105916+00:00 stderr F I1213 00:20:09.497480 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {4a46821f-f601-44e2-aacf-0ffb901e376e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498155787+00:00 stderr F I1213 00:20:09.498115 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {65d2d7fe-9437-4599-bc37-da4da4d5905c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498249740+00:00 stderr F W1213 00:20:09.498220 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/revision-pruner-9-crc. 
Using logical switch crc port uuid and addrs [10.217.0.52/23] 2025-12-13T00:20:09.498337102+00:00 stderr F I1213 00:20:09.498322 28750 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.52] from address set 2025-12-13T00:20:09.498413384+00:00 stderr F I1213 00:20:09.498385 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.52]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498484756+00:00 stderr F I1213 00:20:09.498460 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.52]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498554248+00:00 stderr F I1213 00:20:09.498510 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.72 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff201db7-fe7f-4311-91e5-346c10cbf942}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498599730+00:00 stderr F I1213 00:20:09.498563 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:ff201db7-fe7f-4311-91e5-346c10cbf942}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498599730+00:00 stderr F I1213 00:20:09.498585 28750 
pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/installer-10-crc, ips: 10.217.0.69 2025-12-13T00:20:09.498611140+00:00 stderr F I1213 00:20:09.496808 28750 ovn.go:134] Ensuring zone local for Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs in node crc 2025-12-13T00:20:09.498621090+00:00 stderr F I1213 00:20:09.498611 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-12-13T00:20:09.498634860+00:00 stderr F I1213 00:20:09.498626 28750 base_network_controller_pods.go:476] [default/openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs] creating logical port openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs for pod on switch crc 2025-12-13T00:20:09.498634860+00:00 stderr F I1213 00:20:09.498629 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-12-13T00:20:09.498651031+00:00 stderr F I1213 00:20:09.498643 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr in node crc 2025-12-13T00:20:09.498704422+00:00 stderr F I1213 00:20:09.498673 28750 base_network_controller_pods.go:476] [default/openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr] creating logical port openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr for pod on switch crc 2025-12-13T00:20:09.498704422+00:00 stderr F I1213 00:20:09.498509 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.45 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] 
Timeout: Where:[where column _uuid == {e326c177-dff3-4bbe-a1eb-fe518325ee36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498760244+00:00 stderr F I1213 00:20:09.498724 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e326c177-dff3-4bbe-a1eb-fe518325ee36}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498760244+00:00 stderr F I1213 00:20:09.498725 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.64 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2e91916d-9350-416e-9311-ee7b2033c3ec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498810375+00:00 stderr F I1213 00:20:09.497810 28750 address_set.go:613] (369b3a88-e647-4277-a647-25ffea296a4a/default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4/a1482332553631220387) deleting addresses [10.217.0.98] from address set 2025-12-13T00:20:09.498869537+00:00 stderr F I1213 00:20:09.498824 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} options:{GoMap:map[iface-id-ver:21d29937-debd-4407-b2b1-d1053cb0f342 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2174bce-e1da-468b-aa60-b9409f80c104}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498881157+00:00 stderr F I1213 00:20:09.498853 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {57f0696f-dc79-4d6c-b6a1-8c0c5c1afaae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498893378+00:00 stderr F I1213 00:20:09.498876 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.498924238+00:00 stderr F I1213 00:20:09.498590 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} options:{GoMap:map[iface-id-ver:01feb2e0-a0f4-4573-8335-34e364e0ef40 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e86699a-fa52-4a81-9386-60d37f3fa10c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {65d2d7fe-9437-4599-bc37-da4da4d5905c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.72 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff201db7-fe7f-4311-91e5-346c10cbf942}] 
Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:ff201db7-fe7f-4311-91e5-346c10cbf942}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499005491+00:00 stderr F I1213 00:20:09.498966 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {7f89eae9-cb3f-438c-8fc8-824d28075b04}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499026061+00:00 stderr F I1213 00:20:09.498980 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} options:{GoMap:map[iface-id-ver:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d98188d-6d49-48e7-8956-57a5c46efe26}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499070002+00:00 stderr F I1213 00:20:09.498982 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.98]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499213366+00:00 stderr F I1213 00:20:09.499174 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.98]}}] Timeout: Where:[where column _uuid == 
{369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499257557+00:00 stderr F I1213 00:20:09.497626 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499327489+00:00 stderr F I1213 00:20:09.499286 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499382401+00:00 stderr F I1213 00:20:09.499339 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.88 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61452dfb-4328-4d5d-af1c-a7de40dfbc7a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499436352+00:00 stderr F I1213 00:20:09.499402 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:61452dfb-4328-4d5d-af1c-a7de40dfbc7a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.499498054+00:00 stderr F I1213 00:20:09.499442 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.19 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {562978c2-805e-4646-ada6-3dd7d281b620}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499542606+00:00 stderr F I1213 00:20:09.499452 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} options:{GoMap:map[iface-id-ver:21d29937-debd-4407-b2b1-d1053cb0f342 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2174bce-e1da-468b-aa60-b9409f80c104}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {7f89eae9-cb3f-438c-8fc8-824d28075b04}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.88 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61452dfb-4328-4d5d-af1c-a7de40dfbc7a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:61452dfb-4328-4d5d-af1c-a7de40dfbc7a}]}}] Timeout: Where:[where 
column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499542606+00:00 stderr F I1213 00:20:09.499516 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:562978c2-805e-4646-ada6-3dd7d281b620}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499610758+00:00 stderr F I1213 00:20:09.497472 28750 base_network_controller_pods.go:476] [default/openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh] creating logical port openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh for pod on switch crc 2025-12-13T00:20:09.499663379+00:00 stderr F I1213 00:20:09.499550 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} options:{GoMap:map[iface-id-ver:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {99ef3a4b-7858-4c9b-90db-217867afe36a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {57f0696f-dc79-4d6c-b6a1-8c0c5c1afaae}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.19 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {562978c2-805e-4646-ada6-3dd7d281b620}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:562978c2-805e-4646-ada6-3dd7d281b620}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499674870+00:00 stderr F I1213 00:20:09.499635 28750 port_cache.go:96] port-cache(openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b): added port &{name:openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b uuid:3e86699a-fa52-4a81-9386-60d37f3fa10c logicalSwitch:crc ips:[0xc000e2a990] mac:[10 88 10 217 0 72] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.72/23] and MAC: 0a:58:0a:d9:00:48 2025-12-13T00:20:09.499707821+00:00 stderr F I1213 00:20:09.499678 28750 pods.go:220] [openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b] addLogicalPort took 1.957895ms, libovsdb time 1.034619ms 2025-12-13T00:20:09.499707821+00:00 stderr F I1213 00:20:09.499699 28750 obj_retry.go:541] Creating *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b took: 1.998975ms 2025-12-13T00:20:09.499721911+00:00 stderr F I1213 00:20:09.499709 28750 default_network_controller.go:699] Recording success event on pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-12-13T00:20:09.499721911+00:00 stderr F I1213 00:20:09.499699 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.73 
options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe11781d-2eb7-4789-931b-76e487eefd5f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499734591+00:00 stderr F I1213 00:20:09.499723 28750 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-12-13T00:20:09.499746862+00:00 stderr F I1213 00:20:09.499736 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-12-13T00:20:09.499766432+00:00 stderr F I1213 00:20:09.499749 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-zpnhg in node crc 2025-12-13T00:20:09.499766432+00:00 stderr F I1213 00:20:09.498064 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-12-13T00:20:09.499766432+00:00 stderr F I1213 00:20:09.499760 28750 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg took: 12.15µs 2025-12-13T00:20:09.499779463+00:00 stderr F I1213 00:20:09.499766 28750 ovn.go:134] Ensuring zone local for Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz in node crc 2025-12-13T00:20:09.499779463+00:00 stderr F I1213 00:20:09.499741 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fe11781d-2eb7-4789-931b-76e487eefd5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.499825394+00:00 stderr F I1213 00:20:09.499791 28750 base_network_controller_pods.go:476] [default/openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz] creating logical port 
openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz for pod on switch crc 2025-12-13T00:20:09.499825394+00:00 stderr F I1213 00:20:09.498086 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver/installer-9-crc 2025-12-13T00:20:09.499825394+00:00 stderr F I1213 00:20:09.499815 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver/installer-9-crc 2025-12-13T00:20:09.499838634+00:00 stderr F I1213 00:20:09.499825 28750 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-9-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.499848595+00:00 stderr F I1213 00:20:09.499839 28750 port_cache.go:122] port-cache(openshift-kube-apiserver_installer-9-crc): port not found in cache or already marked for removal 2025-12-13T00:20:09.499848595+00:00 stderr F I1213 00:20:09.499843 28750 pods.go:151] Deleting pod: openshift-kube-apiserver/installer-9-crc 2025-12-13T00:20:09.499913066+00:00 stderr F I1213 00:20:09.499785 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} options:{GoMap:map[iface-id-ver:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79516e-7a72-4d42-b0ab-87a99aa064f3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: 
Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.73 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe11781d-2eb7-4789-931b-76e487eefd5f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fe11781d-2eb7-4789-931b-76e487eefd5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.500053280+00:00 stderr F I1213 00:20:09.500003 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} options:{GoMap:map[iface-id-ver:6d67253e-2acd-4bc1-8185-793587da4f17 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.500073261+00:00 stderr F I1213 00:20:09.500053 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.500083331+00:00 stderr F I1213 00:20:09.497485 28750 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-12-13T00:20:09.500115482+00:00 stderr F I1213 00:20:09.500090 28750 obj_retry.go:502] Add event received for *v1.Pod 
openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-12-13T00:20:09.500128992+00:00 stderr F I1213 00:20:09.500116 28750 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz in node crc 2025-12-13T00:20:09.500141222+00:00 stderr F I1213 00:20:09.500121 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {518f8c39-eb57-4799-bfd1-37b6918f5c5b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.500176773+00:00 stderr F I1213 00:20:09.500151 28750 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz] creating logical port openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz for pod on switch crc 2025-12-13T00:20:09.500217485+00:00 stderr F I1213 00:20:09.499768 28750 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-12-13T00:20:09.500228825+00:00 stderr F I1213 00:20:09.500214 28750 default_network_controller.go:655] Recording add event on pod openshift-ingress-canary/ingress-canary-2vhcn 2025-12-13T00:20:09.500239285+00:00 stderr F I1213 00:20:09.500224 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn 2025-12-13T00:20:09.500239285+00:00 stderr F I1213 00:20:09.497476 28750 default_network_controller.go:655] Recording add event on pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-12-13T00:20:09.500249475+00:00 stderr F I1213 00:20:09.500236 28750 ovn.go:134] Ensuring zone local for Pod openshift-ingress-canary/ingress-canary-2vhcn in node crc 2025-12-13T00:20:09.500261486+00:00 stderr F I1213 00:20:09.500250 28750 base_network_controller_pods.go:476] 
[default/openshift-ingress-canary/ingress-canary-2vhcn] creating logical port openshift-ingress-canary_ingress-canary-2vhcn for pod on switch crc 2025-12-13T00:20:09.500292397+00:00 stderr F I1213 00:20:09.500268 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-12-13T00:20:09.500292397+00:00 stderr F I1213 00:20:09.498788 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2e91916d-9350-416e-9311-ee7b2033c3ec}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.500292397+00:00 stderr F I1213 00:20:09.500287 28750 ovn.go:134] Ensuring zone local for Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg in node crc 2025-12-13T00:20:09.500309737+00:00 stderr F I1213 00:20:09.497575 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/installer-10-retry-1-crc 2025-12-13T00:20:09.500322077+00:00 stderr F I1213 00:20:09.500314 28750 base_network_controller_pods.go:476] [default/openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] creating logical port openshift-etcd-operator_etcd-operator-768d5b5d86-722mg for pod on switch crc 2025-12-13T00:20:09.500322077+00:00 stderr F I1213 00:20:09.500312 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-10-retry-1-crc of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.500356108+00:00 stderr F I1213 00:20:09.500332 28750 port_cache.go:122] port-cache(openshift-kube-controller-manager_installer-10-retry-1-crc): port not found in cache or already marked for removal 2025-12-13T00:20:09.500356108+00:00 stderr F I1213 00:20:09.500345 28750 pods.go:151] Deleting pod: openshift-kube-controller-manager/installer-10-retry-1-crc 2025-12-13T00:20:09.500395809+00:00 stderr F I1213 00:20:09.500354 28750 port_cache.go:96] port-cache(openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs): added port &{name:openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs uuid:c2174bce-e1da-468b-aa60-b9409f80c104 logicalSwitch:crc ips:[0xc000ee1470] mac:[10 88 10 217 0 88] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.88/23] and MAC: 0a:58:0a:d9:00:58 2025-12-13T00:20:09.500406970+00:00 stderr F I1213 00:20:09.500389 28750 pods.go:220] [openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs] addLogicalPort took 1.769019ms, libovsdb time 889.805µs 2025-12-13T00:20:09.500417440+00:00 stderr F I1213 00:20:09.500404 28750 obj_retry.go:541] Creating *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs took: 3.599788ms 2025-12-13T00:20:09.500427560+00:00 stderr F I1213 00:20:09.500414 28750 default_network_controller.go:699] Recording success event on pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-12-13T00:20:09.500427560+00:00 stderr F W1213 00:20:09.500417 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/installer-10-retry-1-crc. 
Using logical switch crc port uuid and addrs [10.217.0.76/23] 2025-12-13T00:20:09.500437841+00:00 stderr F I1213 00:20:09.500429 28750 default_network_controller.go:655] Recording add event on pod openshift-marketplace/community-operators-s2hxn 2025-12-13T00:20:09.500447631+00:00 stderr F I1213 00:20:09.500440 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/community-operators-s2hxn 2025-12-13T00:20:09.500461661+00:00 stderr F I1213 00:20:09.500453 28750 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/community-operators-s2hxn in node crc 2025-12-13T00:20:09.500493382+00:00 stderr F I1213 00:20:09.500469 28750 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.76] from address set 2025-12-13T00:20:09.500537113+00:00 stderr F I1213 00:20:09.500440 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} options:{GoMap:map[iface-id-ver:d0f40333-c860-4c04-8058-a0bf572dcf12 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3644fddd-ceae-4a64-8b00-dadf73515945}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update 
Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.64 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2e91916d-9350-416e-9311-ee7b2033c3ec}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2e91916d-9350-416e-9311-ee7b2033c3ec}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.500537113+00:00 stderr F I1213 00:20:09.500502 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} options:{GoMap:map[iface-id-ver:0b5c38ff-1fa8-4219-994d-15776acd4a4d requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e834ded8-9d5b-46e7-b962-1ee96928bab4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.500537113+00:00 stderr F I1213 00:20:09.500473 28750 base_network_controller_pods.go:476] [default/openshift-marketplace/community-operators-s2hxn] creating logical port openshift-marketplace_community-operators-s2hxn for pod on switch crc 2025-12-13T00:20:09.500587585+00:00 stderr F I1213 00:20:09.500541 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: 
Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.500626766+00:00 stderr F I1213 00:20:09.500596 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {f8a00f5d-1728-4139-8582-f2fb90581499}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.500996136+00:00 stderr F I1213 00:20:09.500506 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.76]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501020726+00:00 stderr F W1213 00:20:09.501006 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-apiserver/installer-9-crc. 
Using logical switch crc port uuid and addrs [10.217.0.55/23] 2025-12-13T00:20:09.501020726+00:00 stderr F I1213 00:20:09.501004 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.76]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501075398+00:00 stderr F I1213 00:20:09.501049 28750 address_set.go:613] (6276d4aa-26f2-4006-a763-72ea13795238/default-network-controller:Namespace:openshift-kube-apiserver:v4/a4531626005796422843) deleting addresses [10.217.0.55] from address set 2025-12-13T00:20:09.501109789+00:00 stderr F I1213 00:20:09.501073 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.8 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6dfd664-3ae6-42b8-adeb-4c9f305aa327}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501109789+00:00 stderr F I1213 00:20:09.501085 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.55]}}] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501123439+00:00 stderr F I1213 00:20:09.501110 28750 pods.go:185] Attempting to release IPs for pod: openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd, ips: 10.217.0.98 2025-12-13T00:20:09.501133449+00:00 stderr F I1213 00:20:09.501113 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6dfd664-3ae6-42b8-adeb-4c9f305aa327}]}}] Timeout: 
Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501143170+00:00 stderr F I1213 00:20:09.501123 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.55]}}] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501182611+00:00 stderr F I1213 00:20:09.500970 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} options:{GoMap:map[iface-id-ver:297ab9b6-2186-4d5b-a952-2bfd59af63c4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9aafbb57-c78d-409c-9ff4-1561d4387b2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501224852+00:00 stderr F I1213 00:20:09.501133 28750 default_network_controller.go:655] Recording add event on pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-12-13T00:20:09.501224852+00:00 stderr F I1213 00:20:09.501219 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-12-13T00:20:09.501237122+00:00 stderr F I1213 00:20:09.501210 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} options:{GoMap:map[iface-id-ver:bd556935-a077-45df-ba3f-d42c39326ccd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69155615-9d93-4b72-bddd-739a6e731251}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501246963+00:00 stderr F 
I1213 00:20:09.499604 28750 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/revision-pruner-9-crc, ips: 10.217.0.52 2025-12-13T00:20:09.501260913+00:00 stderr F I1213 00:20:09.501248 28750 default_network_controller.go:655] Recording add event on pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-12-13T00:20:09.501292674+00:00 stderr F I1213 00:20:09.501258 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501292674+00:00 stderr F I1213 00:20:09.501250 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.10 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2b58dd61-e63c-407c-8db8-c7c7b5c23d24}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501292674+00:00 stderr F I1213 00:20:09.498751 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} options:{GoMap:map[iface-id-ver:7d51f445-054a-4e4f-a67b-a828f5a32511 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {710ea152-1844-44ad-b1a6-805ec9a3700e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: 
UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {4a46821f-f601-44e2-aacf-0ffb901e376e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.45 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e326c177-dff3-4bbe-a1eb-fe518325ee36}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e326c177-dff3-4bbe-a1eb-fe518325ee36}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501359006+00:00 stderr F I1213 00:20:09.501321 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501359006+00:00 stderr F I1213 00:20:09.501321 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2b58dd61-e63c-407c-8db8-c7c7b5c23d24}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501401947+00:00 stderr F I1213 00:20:09.501336 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] 
Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501467249+00:00 stderr F I1213 00:20:09.501362 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} options:{GoMap:map[iface-id-ver:6d67253e-2acd-4bc1-8185-793587da4f17 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {518f8c39-eb57-4799-bfd1-37b6918f5c5b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.10 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2b58dd61-e63c-407c-8db8-c7c7b5c23d24}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2b58dd61-e63c-407c-8db8-c7c7b5c23d24}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501497769+00:00 stderr F I1213 00:20:09.501229 28750 ovn.go:134] Ensuring zone local for Pod openshift-service-ca/service-ca-666f99b6f-kk8kg in node crc 
2025-12-13T00:20:09.501497769+00:00 stderr F I1213 00:20:09.501131 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} options:{GoMap:map[iface-id-ver:0b5c38ff-1fa8-4219-994d-15776acd4a4d requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e834ded8-9d5b-46e7-b962-1ee96928bab4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {f8a00f5d-1728-4139-8582-f2fb90581499}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.8 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6dfd664-3ae6-42b8-adeb-4c9f305aa327}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6dfd664-3ae6-42b8-adeb-4c9f305aa327}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501509090+00:00 stderr F I1213 00:20:09.501499 28750 base_network_controller_pods.go:476] [default/openshift-service-ca/service-ca-666f99b6f-kk8kg] creating logical port openshift-service-ca_service-ca-666f99b6f-kk8kg for pod on switch crc 2025-12-13T00:20:09.501591442+00:00 stderr F I1213 
00:20:09.501550 28750 port_cache.go:96] port-cache(openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8): added port &{name:openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8 uuid:99ef3a4b-7858-4c9b-90db-217867afe36a logicalSwitch:crc ips:[0xc0008afd10] mac:[10 88 10 217 0 19] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.19/23] and MAC: 0a:58:0a:d9:00:13 2025-12-13T00:20:09.501591442+00:00 stderr F I1213 00:20:09.501580 28750 pods.go:220] [openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8] addLogicalPort took 3.98123ms, libovsdb time 1.992584ms 2025-12-13T00:20:09.501604692+00:00 stderr F I1213 00:20:09.501277 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-12-13T00:20:09.501604692+00:00 stderr F I1213 00:20:09.501593 28750 obj_retry.go:541] Creating *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 took: 4.72429ms 2025-12-13T00:20:09.501614903+00:00 stderr F I1213 00:20:09.501601 28750 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc in node crc 2025-12-13T00:20:09.501614903+00:00 stderr F I1213 00:20:09.501604 28750 default_network_controller.go:699] Recording success event on pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-12-13T00:20:09.501624963+00:00 stderr F I1213 00:20:09.501617 28750 base_network_controller_pods.go:476] [default/openshift-multus/multus-admission-controller-6c7c885997-4hbbc] creating logical port openshift-multus_multus-admission-controller-6c7c885997-4hbbc for pod on switch crc 2025-12-13T00:20:09.501739226+00:00 stderr F I1213 00:20:09.501696 28750 port_cache.go:96] port-cache(openshift-console_console-644bb77b49-5x5xk): added port &{name:openshift-console_console-644bb77b49-5x5xk uuid:9a79516e-7a72-4d42-b0ab-87a99aa064f3 logicalSwitch:crc ips:[0xc00084a240] mac:[10 88 10 
217 0 73] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.73/23] and MAC: 0a:58:0a:d9:00:49 2025-12-13T00:20:09.501785157+00:00 stderr F I1213 00:20:09.501737 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} options:{GoMap:map[iface-id-ver:220875e2-503f-46b5-aaa6-bb8fc45743cc requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9d09e10-aa79-4a77-ba42-e77ca54f8045}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501830799+00:00 stderr F I1213 00:20:09.501794 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d9d09e10-aa79-4a77-ba42-e77ca54f8045}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501830799+00:00 stderr F I1213 00:20:09.501795 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} options:{GoMap:map[iface-id-ver:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.501863929+00:00 stderr F I1213 00:20:09.501764 28750 pods.go:220] [openshift-console/console-644bb77b49-5x5xk] addLogicalPort took 4.854064ms, libovsdb time 1.902101ms 2025-12-13T00:20:09.501925321+00:00 stderr F I1213 00:20:09.501891 28750 obj_retry.go:541] Creating *v1.Pod openshift-console/console-644bb77b49-5x5xk took: 5.000217ms 
2025-12-13T00:20:09.501985803+00:00 stderr F I1213 00:20:09.501552 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502006073+00:00 stderr F I1213 00:20:09.501966 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.43 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {836a6a8f-ef84-4ecd-996a-6579e908ce2a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502041104+00:00 stderr F I1213 00:20:09.501966 28750 default_network_controller.go:699] Recording success event on pod openshift-console/console-644bb77b49-5x5xk 2025-12-13T00:20:09.502087665+00:00 stderr F I1213 00:20:09.502031 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:836a6a8f-ef84-4ecd-996a-6579e908ce2a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502126897+00:00 stderr F I1213 00:20:09.502090 28750 port_cache.go:96] port-cache(openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz): added port &{name:openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz uuid:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56 logicalSwitch:crc ips:[0xc00117f8f0] mac:[10 88 10 217 0 10] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.10/23] and MAC: 0a:58:0a:d9:00:0a 2025-12-13T00:20:09.502126897+00:00 stderr F I1213 00:20:09.502120 28750 pods.go:220] 
[openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz] addLogicalPort took 2.334663ms, libovsdb time 719.569µs 2025-12-13T00:20:09.502138247+00:00 stderr F I1213 00:20:09.502129 28750 obj_retry.go:541] Creating *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz took: 2.364485ms 2025-12-13T00:20:09.502149097+00:00 stderr F I1213 00:20:09.502136 28750 default_network_controller.go:699] Recording success event on pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-12-13T00:20:09.502149097+00:00 stderr F I1213 00:20:09.502145 28750 default_network_controller.go:655] Recording add event on pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-12-13T00:20:09.502159677+00:00 stderr F I1213 00:20:09.502080 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} options:{GoMap:map[iface-id-ver:bd556935-a077-45df-ba3f-d42c39326ccd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69155615-9d93-4b72-bddd-739a6e731251}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.43 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: 
Where:[where column _uuid == {836a6a8f-ef84-4ecd-996a-6579e908ce2a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:836a6a8f-ef84-4ecd-996a-6579e908ce2a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502159677+00:00 stderr F I1213 00:20:09.502154 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-12-13T00:20:09.502175368+00:00 stderr F I1213 00:20:09.502163 28750 ovn.go:134] Ensuring zone local for Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd in node crc 2025-12-13T00:20:09.502185168+00:00 stderr F I1213 00:20:09.502176 28750 base_network_controller_pods.go:476] [default/openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd] creating logical port openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd for pod on switch crc 2025-12-13T00:20:09.502217039+00:00 stderr F I1213 00:20:09.501367 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} options:{GoMap:map[iface-id-ver:0b5d722a-1123-4935-9740-52a08d018bc9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a350d82-7987-4ce6-ae41-dd930411ca29}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502265920+00:00 stderr F I1213 00:20:09.502231 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: 
UUIDName:}] 2025-12-13T00:20:09.502320272+00:00 stderr F I1213 00:20:09.502287 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {480d825d-a6a3-4408-89f7-433b108e677b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502320272+00:00 stderr F I1213 00:20:09.502292 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.63 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c92f035f-d338-40f2-abeb-ccb48ca242b7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502369094+00:00 stderr F I1213 00:20:09.502336 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c92f035f-d338-40f2-abeb-ccb48ca242b7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502369094+00:00 stderr F I1213 00:20:09.499052 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502403455+00:00 stderr F I1213 00:20:09.502360 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} options:{GoMap:map[iface-id-ver:5bacb25d-97b6-4491-8fb4-99feae1d802a requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b4158c3-d859-42e6-8259-b16ce1cbd284}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502449186+00:00 stderr F I1213 00:20:09.502417 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502449186+00:00 stderr F I1213 00:20:09.501920 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} options:{GoMap:map[iface-id-ver:e4a7de23-6134-4044-902a-0900dc04a501 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9409cb25-8c46-46db-98ab-5eafe9669ef8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502460787+00:00 stderr F I1213 00:20:09.502360 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} options:{GoMap:map[iface-id-ver:297ab9b6-2186-4d5b-a952-2bfd59af63c4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9aafbb57-c78d-409c-9ff4-1561d4387b2d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] 
Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.63 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c92f035f-d338-40f2-abeb-ccb48ca242b7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c92f035f-d338-40f2-abeb-ccb48ca242b7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502470707+00:00 stderr F I1213 00:20:09.502443 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {2bf79d67-1324-4efc-8ead-1a59cc805d56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502530479+00:00 stderr F I1213 00:20:09.502485 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502623491+00:00 stderr F I1213 00:20:09.502571 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {ef64a2e5-df47-4a70-b021-d05d7c16d1d1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502623491+00:00 stderr F I1213 00:20:09.502600 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.71 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5bdc3c8-db64-45ca-8596-53e5e38584c4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502662232+00:00 stderr F I1213 00:20:09.502632 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f5bdc3c8-db64-45ca-8596-53e5e38584c4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502703953+00:00 stderr F I1213 00:20:09.502473 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {038121f6-33a2-46c3-a820-7e67ff387e75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502703953+00:00 stderr F I1213 00:20:09.502652 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} options:{GoMap:map[iface-id-ver:0b5d722a-1123-4935-9740-52a08d018bc9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a350d82-7987-4ce6-ae41-dd930411ca29}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate 
Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {480d825d-a6a3-4408-89f7-433b108e677b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.71 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5bdc3c8-db64-45ca-8596-53e5e38584c4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f5bdc3c8-db64-45ca-8596-53e5e38584c4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.502914269+00:00 stderr F I1213 00:20:09.502106 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-12-13T00:20:09.502914269+00:00 stderr F I1213 00:20:09.502904 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-12-13T00:20:09.502926929+00:00 stderr F I1213 00:20:09.502914 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc 2025-12-13T00:20:09.502926929+00:00 stderr F I1213 00:20:09.502920 28750 obj_retry.go:541] Creating *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc took: 8.14µs 2025-12-13T00:20:09.502963050+00:00 stderr F I1213 00:20:09.502925 28750 default_network_controller.go:699] Recording success event on pod 
openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-12-13T00:20:09.502963050+00:00 stderr F I1213 00:20:09.502949 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-12-13T00:20:09.502963050+00:00 stderr F I1213 00:20:09.502955 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-12-13T00:20:09.502975661+00:00 stderr F I1213 00:20:09.502961 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv in node crc 2025-12-13T00:20:09.502992771+00:00 stderr F I1213 00:20:09.502973 28750 base_network_controller_pods.go:476] [default/openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv] creating logical port openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv for pod on switch crc 2025-12-13T00:20:09.503034262+00:00 stderr F I1213 00:20:09.502999 28750 pods.go:185] Attempting to release IPs for pod: openshift-kube-apiserver/installer-9-crc, ips: 10.217.0.55 2025-12-13T00:20:09.503034262+00:00 stderr F I1213 00:20:09.501854 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d9d09e10-aa79-4a77-ba42-e77ca54f8045}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.503034262+00:00 stderr F I1213 00:20:09.503022 28750 default_network_controller.go:655] Recording add event on pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-12-13T00:20:09.503047723+00:00 stderr F I1213 00:20:09.503034 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-12-13T00:20:09.503057723+00:00 stderr F I1213 00:20:09.503019 28750 model_client.go:381] Update operations 
generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.16 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a5f15db-00b3-4563-9f1c-1aca2a061230}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.503057723+00:00 stderr F I1213 00:20:09.503044 28750 ovn.go:134] Ensuring zone local for Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb in node crc 2025-12-13T00:20:09.503070793+00:00 stderr F I1213 00:20:09.503064 28750 base_network_controller_pods.go:476] [default/openshift-dns-operator/dns-operator-75f687757b-nz2xb] creating logical port openshift-dns-operator_dns-operator-75f687757b-nz2xb for pod on switch crc 2025-12-13T00:20:09.503124025+00:00 stderr F I1213 00:20:09.503071 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.40 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {13384dbf-6468-4401-b6b5-1fc817c99dcd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.503204437+00:00 stderr F I1213 00:20:09.503144 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:13384dbf-6468-4401-b6b5-1fc817c99dcd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.503204437+00:00 stderr F I1213 00:20:09.503137 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.39 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e2370e4-c8a2-48a7-a016-426be9bb2419}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.503241828+00:00 stderr F I1213 00:20:09.503210 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6e2370e4-c8a2-48a7-a016-426be9bb2419}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.503273139+00:00 stderr F I1213 00:20:09.503180 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} options:{GoMap:map[iface-id-ver:e4a7de23-6134-4044-902a-0900dc04a501 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9409cb25-8c46-46db-98ab-5eafe9669ef8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {ef64a2e5-df47-4a70-b021-d05d7c16d1d1}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.40 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {13384dbf-6468-4401-b6b5-1fc817c99dcd}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:13384dbf-6468-4401-b6b5-1fc817c99dcd}]}}] Timeout: Where:[where 
column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.503273139+00:00 stderr F I1213 00:20:09.503234 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} options:{GoMap:map[iface-id-ver:10603adc-d495-423c-9459-4caa405960bb requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b212e2c2-3d4e-4898-aede-c926b74813f0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.503287039+00:00 stderr F I1213 00:20:09.503233 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} options:{GoMap:map[iface-id-ver:5bacb25d-97b6-4491-8fb4-99feae1d802a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b4158c3-d859-42e6-8259-b16ce1cbd284}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {038121f6-33a2-46c3-a820-7e67ff387e75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.39 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{6e2370e4-c8a2-48a7-a016-426be9bb2419}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6e2370e4-c8a2-48a7-a016-426be9bb2419}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.503328870+00:00 stderr F I1213 00:20:09.503292 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.503328870+00:00 stderr F I1213 00:20:09.503304 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.37 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3eaca145-e78f-4caa-a5e0-078f141ee3c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.503369951+00:00 stderr F I1213 00:20:09.503340 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:3eaca145-e78f-4caa-a5e0-078f141ee3c5}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.503369951+00:00 stderr F I1213 00:20:09.503348 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {46538203-fb12-48b9-9840-a39e58a289ec}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.505331286+00:00 stderr F I1213 00:20:09.501617 28750 default_network_controller.go:655] Recording add event on pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-12-13T00:20:09.505331286+00:00 stderr F I1213 00:20:09.505322 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-12-13T00:20:09.505354046+00:00 stderr F I1213 00:20:09.505337 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb in node crc 2025-12-13T00:20:09.505364397+00:00 stderr F I1213 00:20:09.505291 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} options:{GoMap:map[iface-id-ver:cf1a8966-f594-490a-9fbb-eec5bafd13d3 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2a5717ea-0a50-4ebb-b087-90e637274a33}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.505364397+00:00 stderr F I1213 00:20:09.505359 28750 base_network_controller_pods.go:476] [default/openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb] creating logical port openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb for pod on switch crc 2025-12-13T00:20:09.505427098+00:00 stderr F I1213 00:20:09.505399 28750 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/installer-10-retry-1-crc, ips: 10.217.0.76 2025-12-13T00:20:09.505427098+00:00 stderr F I1213 00:20:09.505422 28750 default_network_controller.go:655] Recording add event on pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-12-13T00:20:09.505440989+00:00 stderr F I1213 00:20:09.505434 28750 obj_retry.go:502] Add event received for *v1.Pod 
openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-12-13T00:20:09.505450779+00:00 stderr F I1213 00:20:09.505443 28750 ovn.go:134] Ensuring zone local for Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m in node crc 2025-12-13T00:20:09.505472580+00:00 stderr F I1213 00:20:09.505461 28750 base_network_controller_pods.go:476] [default/openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m] creating logical port openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m for pod on switch crc 2025-12-13T00:20:09.505472580+00:00 stderr F I1213 00:20:09.505431 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.505619464+00:00 stderr F I1213 00:20:09.505561 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {4f91dae1-6838-4840-9491-e068cbcf1f65}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.505671975+00:00 stderr F I1213 00:20:09.505614 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} options:{GoMap:map[iface-id-ver:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.505722146+00:00 stderr F I1213 00:20:09.505668 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.505787128+00:00 stderr F I1213 00:20:09.505704 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} options:{GoMap:map[iface-id-ver:4f8aa612-9da0-4a2b-911e-6a1764a4e74e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {805e2f41-6cb8-4ccf-9939-37cfb4fa5509}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.505787128+00:00 stderr F I1213 00:20:09.505751 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {34c062e9-0e41-479f-b36f-464b48cc97e0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.505867090+00:00 stderr F I1213 00:20:09.501843 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.506015314+00:00 stderr F I1213 00:20:09.505796 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.506099677+00:00 stderr F I1213 00:20:09.506062 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.506156248+00:00 stderr F I1213 00:20:09.503084 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1a5f15db-00b3-4563-9f1c-1aca2a061230}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.506347523+00:00 stderr F I1213 00:20:09.506294 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.506740854+00:00 stderr F I1213 00:20:09.506441 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} options:{GoMap:map[iface-id-ver:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d98188d-6d49-48e7-8956-57a5c46efe26}] Until: Durable: Comment: Lock: 
UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {2bf79d67-1324-4efc-8ead-1a59cc805d56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.16 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a5f15db-00b3-4563-9f1c-1aca2a061230}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1a5f15db-00b3-4563-9f1c-1aca2a061230}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.506987931+00:00 stderr F I1213 00:20:09.506924 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5caac608-1693-4730-b705-794c4dca0d50}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.507016692+00:00 stderr F I1213 00:20:09.506999 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5caac608-1693-4730-b705-794c4dca0d50}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.507334090+00:00 stderr F I1213 00:20:09.507048 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} options:{GoMap:map[iface-id-ver:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {34c062e9-0e41-479f-b36f-464b48cc97e0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5caac608-1693-4730-b705-794c4dca0d50}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5caac608-1693-4730-b705-794c4dca0d50}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.507351851+00:00 stderr F I1213 00:20:09.507233 28750 
model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.25 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0ec236b8-244d-4e2f-bfb5-0733dbce5d66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.507492325+00:00 stderr F I1213 00:20:09.507397 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0ec236b8-244d-4e2f-bfb5-0733dbce5d66}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.507780362+00:00 stderr F I1213 00:20:09.507495 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} options:{GoMap:map[iface-id-ver:cf1a8966-f594-490a-9fbb-eec5bafd13d3 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2a5717ea-0a50-4ebb-b087-90e637274a33}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {4f91dae1-6838-4840-9491-e068cbcf1f65}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.25 options:{GoMap:map[stateless:false]} type:snat] 
Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0ec236b8-244d-4e2f-bfb5-0733dbce5d66}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0ec236b8-244d-4e2f-bfb5-0733dbce5d66}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.510739555+00:00 stderr F I1213 00:20:09.502672 28750 port_cache.go:96] port-cache(openshift-etcd-operator_etcd-operator-768d5b5d86-722mg): added port &{name:openshift-etcd-operator_etcd-operator-768d5b5d86-722mg uuid:e834ded8-9d5b-46e7-b962-1ee96928bab4 logicalSwitch:crc ips:[0xc00140a300] mac:[10 88 10 217 0 8] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.8/23] and MAC: 0a:58:0a:d9:00:08 2025-12-13T00:20:09.511411563+00:00 stderr F I1213 00:20:09.511357 28750 pods.go:220] [openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] addLogicalPort took 10.944242ms, libovsdb time 1.534013ms 2025-12-13T00:20:09.511437214+00:00 stderr F I1213 00:20:09.511405 28750 obj_retry.go:541] Creating *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg took: 11.112607ms 2025-12-13T00:20:09.511437214+00:00 stderr F I1213 00:20:09.511427 28750 default_network_controller.go:699] Recording success event on pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-12-13T00:20:09.511514356+00:00 stderr F I1213 00:20:09.511362 28750 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.0.2/32 Src: 10.217.0.2 Gw: Flags: [] Table: 255 Realm: 0}" 2025-12-13T00:20:09.511592548+00:00 stderr F I1213 00:20:09.511323 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.511786243+00:00 stderr F I1213 00:20:09.511740 28750 default_network_controller.go:655] Recording add event on pod openshift-multus/multus-q88th 2025-12-13T00:20:09.511786243+00:00 stderr F I1213 00:20:09.511704 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.511786243+00:00 stderr F I1213 00:20:09.511715 28750 port_cache.go:96] port-cache(openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t): added port &{name:openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t uuid:710ea152-1844-44ad-b1a6-805ec9a3700e logicalSwitch:crc ips:[0xc0007157d0] mac:[10 88 10 217 0 45] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.45/23] and MAC: 0a:58:0a:d9:00:2d 2025-12-13T00:20:09.511803814+00:00 stderr F I1213 00:20:09.511782 28750 pods.go:220] [openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t] addLogicalPort took 14.961362ms, libovsdb time 3.936389ms 2025-12-13T00:20:09.511803814+00:00 stderr F I1213 00:20:09.511794 28750 obj_retry.go:541] Creating *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t took: 15.041165ms 2025-12-13T00:20:09.511824534+00:00 stderr F I1213 00:20:09.511802 28750 default_network_controller.go:699] Recording success event on pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-12-13T00:20:09.511824534+00:00 stderr F I1213 00:20:09.511812 28750 default_network_controller.go:655] Recording add event on pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-12-13T00:20:09.511837875+00:00 stderr F I1213 00:20:09.511772 28750 transact.go:42] 
Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} options:{GoMap:map[iface-id-ver:4f8aa612-9da0-4a2b-911e-6a1764a4e74e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {805e2f41-6cb8-4ccf-9939-37cfb4fa5509}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.511850035+00:00 stderr F I1213 00:20:09.511839 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-12-13T00:20:09.511862246+00:00 stderr F I1213 00:20:09.511851 28750 ovn.go:134] Ensuring zone local for Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg in node crc 
2025-12-13T00:20:09.511908627+00:00 stderr F I1213 00:20:09.511869 28750 base_network_controller_pods.go:476] [default/openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg] creating logical port openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg for pod on switch crc 2025-12-13T00:20:09.511983349+00:00 stderr F I1213 00:20:09.511918 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-multus/multus-q88th 2025-12-13T00:20:09.512009120+00:00 stderr F I1213 00:20:09.511988 28750 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-q88th in node crc 2025-12-13T00:20:09.512009120+00:00 stderr F I1213 00:20:09.511979 28750 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7): added port &{name:openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7 uuid:3644fddd-ceae-4a64-8b00-dadf73515945 logicalSwitch:crc ips:[0xc00084af00] mac:[10 88 10 217 0 64] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.64/23] and MAC: 0a:58:0a:d9:00:40 2025-12-13T00:20:09.512021660+00:00 stderr F I1213 00:20:09.512004 28750 obj_retry.go:541] Creating *v1.Pod openshift-multus/multus-q88th took: 25.3µs 2025-12-13T00:20:09.512079701+00:00 stderr F I1213 00:20:09.512046 28750 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-q88th 2025-12-13T00:20:09.512079701+00:00 stderr F I1213 00:20:09.512047 28750 pods.go:220] [openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7] addLogicalPort took 14.493619ms, libovsdb time 2.441527ms 2025-12-13T00:20:09.512079701+00:00 stderr F I1213 00:20:09.512071 28750 default_network_controller.go:655] Recording add event on pod openshift-network-node-identity/network-node-identity-7xghp 2025-12-13T00:20:09.512094842+00:00 stderr F I1213 00:20:09.512075 28750 obj_retry.go:541] Creating *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 took: 14.671554ms 
2025-12-13T00:20:09.512154683+00:00 stderr F I1213 00:20:09.512117 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-network-node-identity/network-node-identity-7xghp 2025-12-13T00:20:09.512154683+00:00 stderr F I1213 00:20:09.512146 28750 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-7xghp in node crc 2025-12-13T00:20:09.512166094+00:00 stderr F I1213 00:20:09.512152 28750 ovs.go:159] Exec(22): /usr/sbin/sysctl -w net.ipv4.conf.ovn-k8s-mp0.forwarding=1 2025-12-13T00:20:09.512175944+00:00 stderr F I1213 00:20:09.512160 28750 obj_retry.go:541] Creating *v1.Pod openshift-network-node-identity/network-node-identity-7xghp took: 17.55µs 2025-12-13T00:20:09.512185654+00:00 stderr F I1213 00:20:09.512175 28750 default_network_controller.go:699] Recording success event on pod openshift-network-node-identity/network-node-identity-7xghp 2025-12-13T00:20:09.512228855+00:00 stderr F I1213 00:20:09.512190 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/revision-pruner-8-crc 2025-12-13T00:20:09.512242196+00:00 stderr F I1213 00:20:09.512232 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/revision-pruner-8-crc 2025-12-13T00:20:09.512294857+00:00 stderr F I1213 00:20:09.512249 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-8-crc of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.512332028+00:00 stderr F I1213 00:20:09.512303 28750 port_cache.go:122] port-cache(openshift-kube-controller-manager_revision-pruner-8-crc): port not found in cache or already marked for removal 2025-12-13T00:20:09.512332028+00:00 stderr F I1213 00:20:09.512324 28750 pods.go:151] Deleting pod: openshift-kube-controller-manager/revision-pruner-8-crc 2025-12-13T00:20:09.512375299+00:00 stderr F I1213 00:20:09.512359 28750 route_manager.go:93] Route Manager: attempting to add route: {Ifindex: 5 Dst: 10.217.0.0/22 Src: Gw: 10.217.0.1 Flags: [] Table: 0 Realm: 0} 2025-12-13T00:20:09.512508083+00:00 stderr F W1213 00:20:09.512461 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/revision-pruner-8-crc. Using logical switch crc port uuid and addrs [10.217.0.55/23] 2025-12-13T00:20:09.512508083+00:00 stderr F I1213 00:20:09.512492 28750 base_network_controller_pods.go:999] Completed pod openshift-kube-controller-manager/revision-pruner-8-crc was already released for nad default before startup 2025-12-13T00:20:09.513052108+00:00 stderr F I1213 00:20:09.512902 28750 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz): added port &{name:openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz uuid:69155615-9d93-4b72-bddd-739a6e731251 logicalSwitch:crc ips:[0xc0011bb5c0] mac:[10 88 10 217 0 43] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.43/23] and MAC: 0a:58:0a:d9:00:2b 2025-12-13T00:20:09.513052108+00:00 stderr F I1213 00:20:09.512925 28750 pods.go:220] [openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz] addLogicalPort took 12.786452ms, libovsdb time 10.267064ms 2025-12-13T00:20:09.513052108+00:00 stderr F I1213 00:20:09.512319 28750 default_network_controller.go:699] Recording success event on pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 
2025-12-13T00:20:09.513052108+00:00 stderr F I1213 00:20:09.512974 28750 obj_retry.go:541] Creating *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz took: 12.844083ms 2025-12-13T00:20:09.513052108+00:00 stderr F I1213 00:20:09.512984 28750 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-12-13T00:20:09.513052108+00:00 stderr F I1213 00:20:09.513000 28750 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-12-13T00:20:09.513052108+00:00 stderr F I1213 00:20:09.513010 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-12-13T00:20:09.513052108+00:00 stderr F I1213 00:20:09.513021 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm in node crc 2025-12-13T00:20:09.513052108+00:00 stderr F I1213 00:20:09.513010 28750 default_network_controller.go:655] Recording add event on pod openshift-network-diagnostics/network-check-target-v54bt 2025-12-13T00:20:09.513084269+00:00 stderr F I1213 00:20:09.513049 28750 base_network_controller_pods.go:476] [default/openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm] creating logical port openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm for pod on switch crc 2025-12-13T00:20:09.513298125+00:00 stderr F I1213 00:20:09.513250 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.32 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7d2c6a4d-e1a7-4457-8d2b-2a8098268121}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.513363316+00:00 stderr F I1213 00:20:09.513314 28750 
model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7d2c6a4d-e1a7-4457-8d2b-2a8098268121}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.513434098+00:00 stderr F I1213 00:20:09.513384 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} options:{GoMap:map[iface-id-ver:120b38dc-8236-4fa6-a452-642b8ad738ee requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad3d5728-34ed-421c-a749-1d7a957800a8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.513479670+00:00 stderr F I1213 00:20:09.513445 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.513533392+00:00 stderr F I1213 00:20:09.513502 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.513626495+00:00 stderr F I1213 00:20:09.513584 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} 
options:{GoMap:map[iface-id-ver:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.514051786+00:00 stderr F I1213 00:20:09.514004 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-network-diagnostics/network-check-target-v54bt 2025-12-13T00:20:09.514094777+00:00 stderr F I1213 00:20:09.514029 28750 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-v54bt in node crc 2025-12-13T00:20:09.514107158+00:00 stderr F I1213 00:20:09.514096 28750 base_network_controller_pods.go:476] [default/openshift-network-diagnostics/network-check-target-v54bt] creating logical port openshift-network-diagnostics_network-check-target-v54bt for pod on switch crc 2025-12-13T00:20:09.514233921+00:00 stderr F I1213 00:20:09.512060 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} options:{GoMap:map[iface-id-ver:220875e2-503f-46b5-aaa6-bb8fc45743cc requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9d09e10-aa79-4a77-ba42-e77ca54f8045}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d9d09e10-aa79-4a77-ba42-e77ca54f8045}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d9d09e10-aa79-4a77-ba42-e77ca54f8045}]}}] Timeout: Where:[where column _uuid 
== {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.37 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3eaca145-e78f-4caa-a5e0-078f141ee3c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:3eaca145-e78f-4caa-a5e0-078f141ee3c5}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.515019532+00:00 stderr F I1213 00:20:09.513342 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} options:{GoMap:map[iface-id-ver:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.32 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{7d2c6a4d-e1a7-4457-8d2b-2a8098268121}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7d2c6a4d-e1a7-4457-8d2b-2a8098268121}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.515800164+00:00 stderr F I1213 00:20:09.515746 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.18 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.515800164+00:00 stderr F I1213 00:20:09.515753 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:34a48baf-1bee-4921-8bb2-9b7320e76f79 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c0f95133-023f-4bbd-8719-e29d2cfbb32d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.515822534+00:00 stderr F I1213 00:20:09.515798 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.515822534+00:00 stderr F I1213 00:20:09.515804 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.515885236+00:00 stderr F I1213 00:20:09.515817 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} options:{GoMap:map[iface-id-ver:10603adc-d495-423c-9459-4caa405960bb requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b212e2c2-3d4e-4898-aede-c926b74813f0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {46538203-fb12-48b9-9840-a39e58a289ec}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.18 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.515885236+00:00 stderr F I1213 00:20:09.515866 28750 model_client.go:397] Mutate operations 
generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.515885236+00:00 stderr F I1213 00:20:09.515310 28750 port_cache.go:96] port-cache(openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh): added port &{name:openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh uuid:9aafbb57-c78d-409c-9ff4-1561d4387b2d logicalSwitch:crc ips:[0xc000d89710] mac:[10 88 10 217 0 63] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.63/23] and MAC: 0a:58:0a:d9:00:3f 2025-12-13T00:20:09.515911657+00:00 stderr F I1213 00:20:09.515888 28750 pods.go:220] [openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh] addLogicalPort took 18.426098ms, libovsdb time 11.933709ms 2025-12-13T00:20:09.515911657+00:00 stderr F I1213 00:20:09.515850 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.21 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.515925207+00:00 stderr F I1213 00:20:09.515903 28750 port_cache.go:96] port-cache(openshift-ingress-canary_ingress-canary-2vhcn): added port &{name:openshift-ingress-canary_ingress-canary-2vhcn uuid:7a350d82-7987-4ce6-ae41-dd930411ca29 logicalSwitch:crc ips:[0xc0011e76b0] mac:[10 88 10 217 0 71] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.71/23] and MAC: 0a:58:0a:d9:00:47 2025-12-13T00:20:09.515925207+00:00 stderr F I1213 00:20:09.515914 28750 pods.go:220] [openshift-ingress-canary/ingress-canary-2vhcn] addLogicalPort took 15.671052ms, libovsdb time 12.686919ms 
2025-12-13T00:20:09.515925207+00:00 stderr F I1213 00:20:09.515920 28750 obj_retry.go:541] Creating *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn took: 15.687172ms 2025-12-13T00:20:09.515964178+00:00 stderr F I1213 00:20:09.515925 28750 default_network_controller.go:699] Recording success event on pod openshift-ingress-canary/ingress-canary-2vhcn 2025-12-13T00:20:09.515964178+00:00 stderr F I1213 00:20:09.515952 28750 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-12-13T00:20:09.515984079+00:00 stderr F I1213 00:20:09.515964 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-12-13T00:20:09.515984079+00:00 stderr F I1213 00:20:09.515974 28750 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 in node crc 2025-12-13T00:20:09.515996849+00:00 stderr F I1213 00:20:09.515990 28750 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2] creating logical port openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2 for pod on switch crc 2025-12-13T00:20:09.516053291+00:00 stderr F I1213 00:20:09.515450 28750 route_manager.go:110] Route Manager: completed adding route: {Ifindex: 5 Dst: 10.217.0.0/22 Src: Gw: 10.217.0.1 Flags: [] Table: 254 Realm: 0} 2025-12-13T00:20:09.516158283+00:00 stderr F I1213 00:20:09.516136 28750 route_manager.go:93] Route Manager: attempting to add route: {Ifindex: 5 Dst: 169.254.169.3/32 Src: Gw: 10.217.0.1 Flags: [] Table: 0 Realm: 0} 2025-12-13T00:20:09.516274037+00:00 stderr F I1213 00:20:09.516225 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516274037+00:00 stderr F I1213 00:20:09.515897 28750 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh took: 18.458008ms 2025-12-13T00:20:09.516304037+00:00 stderr F I1213 00:20:09.516266 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516304037+00:00 stderr F I1213 00:20:09.516288 28750 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-12-13T00:20:09.516314718+00:00 stderr F I1213 00:20:09.516307 28750 default_network_controller.go:655] Recording add event on pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-12-13T00:20:09.516324648+00:00 stderr F I1213 00:20:09.516283 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:34a48baf-1bee-4921-8bb2-9b7320e76f79 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c0f95133-023f-4bbd-8719-e29d2cfbb32d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: 
UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516336979+00:00 stderr F I1213 00:20:09.516326 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-12-13T00:20:09.516375550+00:00 stderr F I1213 00:20:09.516345 28750 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv in node crc 2025-12-13T00:20:09.516386581+00:00 stderr F I1213 00:20:09.516376 28750 base_network_controller_pods.go:476] [default/openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv] creating logical port openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv for pod on switch crc 2025-12-13T00:20:09.516481763+00:00 stderr F I1213 00:20:09.516133 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} options:{GoMap:map[iface-id-ver:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {f24db1f4-18a4-418a-9c99-1d94ebfba0da}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516538915+00:00 stderr F I1213 00:20:09.516490 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516550335+00:00 stderr F I1213 00:20:09.516533 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516655618+00:00 stderr F I1213 00:20:09.515444 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516722460+00:00 stderr F I1213 00:20:09.516024 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516734390+00:00 stderr F I1213 00:20:09.516677 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} options:{GoMap:map[iface-id-ver:b54e8941-2fc4-432a-9e51-39684df9089e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {82630d91-1647-4c0c-aa84-8f820bcf919e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516777921+00:00 stderr F I1213 00:20:09.516713 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} options:{GoMap:map[iface-id-ver:120b38dc-8236-4fa6-a452-642b8ad738ee requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad3d5728-34ed-421c-a749-1d7a957800a8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.21 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}]}}] Timeout: Where:[where column _uuid == 
{c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516800742+00:00 stderr F I1213 00:20:09.516049 28750 port_cache.go:96] port-cache(openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr): added port &{name:openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr uuid:2d98188d-6d49-48e7-8956-57a5c46efe26 logicalSwitch:crc ips:[0xc00119c6f0] mac:[10 88 10 217 0 16] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.16/23] and MAC: 0a:58:0a:d9:00:10 2025-12-13T00:20:09.516800742+00:00 stderr F I1213 00:20:09.516741 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {5b2bd23c-417c-4ef4-b90a-359ab75287c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516800742+00:00 stderr F I1213 00:20:09.516786 28750 pods.go:220] [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr] addLogicalPort took 18.13151ms, libovsdb time 9.019299ms 2025-12-13T00:20:09.516800742+00:00 stderr F I1213 00:20:09.516795 28750 obj_retry.go:541] Creating *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr took: 18.155231ms 2025-12-13T00:20:09.516812862+00:00 stderr F I1213 00:20:09.516800 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-12-13T00:20:09.516812862+00:00 stderr F I1213 00:20:09.516773 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516812862+00:00 stderr F I1213 00:20:09.516808 28750 default_network_controller.go:655] Recording add event on pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-12-13T00:20:09.516823663+00:00 stderr F I1213 00:20:09.516816 28750 obj_retry.go:502] Add event received for *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-12-13T00:20:09.516833413+00:00 stderr F I1213 00:20:09.516823 28750 ovn.go:134] Ensuring zone local for Pod hostpath-provisioner/csi-hostpathplugin-hvm8g in node crc 2025-12-13T00:20:09.516842903+00:00 stderr F I1213 00:20:09.516833 28750 base_network_controller_pods.go:476] [default/hostpath-provisioner/csi-hostpathplugin-hvm8g] creating logical port hostpath-provisioner_csi-hostpathplugin-hvm8g for pod on switch crc 2025-12-13T00:20:09.516900025+00:00 stderr F I1213 00:20:09.516871 28750 port_cache.go:96] port-cache(openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m): added port &{name:openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m uuid:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26 logicalSwitch:crc ips:[0xc0014a98f0] mac:[10 88 10 217 0 6] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.6/23] and MAC: 0a:58:0a:d9:00:06 2025-12-13T00:20:09.516911325+00:00 stderr F I1213 00:20:09.516892 28750 pods.go:220] [openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m] addLogicalPort took 11.443805ms, libovsdb time 8.479544ms 2025-12-13T00:20:09.516911325+00:00 stderr F I1213 00:20:09.516905 28750 obj_retry.go:541] Creating *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m took: 11.463456ms 2025-12-13T00:20:09.516926065+00:00 stderr F I1213 00:20:09.516910 28750 default_network_controller.go:699] Recording success event on pod 
openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-12-13T00:20:09.516926065+00:00 stderr F I1213 00:20:09.516883 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.516926065+00:00 stderr F I1213 00:20:09.516919 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-scheduler/installer-8-crc 2025-12-13T00:20:09.516960006+00:00 stderr F I1213 00:20:09.516927 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-scheduler/installer-8-crc 2025-12-13T00:20:09.516960006+00:00 stderr F I1213 00:20:09.516953 28750 obj_retry.go:459] Detected object openshift-kube-scheduler/installer-8-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.516970307+00:00 stderr F I1213 00:20:09.516964 28750 port_cache.go:122] port-cache(openshift-kube-scheduler_installer-8-crc): port not found in cache or already marked for removal 2025-12-13T00:20:09.516980487+00:00 stderr F I1213 00:20:09.516969 28750 pods.go:151] Deleting pod: openshift-kube-scheduler/installer-8-crc 2025-12-13T00:20:09.517022458+00:00 stderr F I1213 00:20:09.516150 28750 ovs.go:162] Exec(22): stdout: "net.ipv4.conf.ovn-k8s-mp0.forwarding = 1\n" 2025-12-13T00:20:09.517022458+00:00 stderr F W1213 00:20:09.517005 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-scheduler/installer-8-crc. 
Using logical switch crc port uuid and addrs [10.217.0.84/23] 2025-12-13T00:20:09.517022458+00:00 stderr F I1213 00:20:09.517010 28750 ovs.go:163] Exec(22): stderr: "" 2025-12-13T00:20:09.517022458+00:00 stderr F I1213 00:20:09.516987 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} options:{GoMap:map[iface-id-ver:12e733dd-0939-4f1b-9cbb-13897e093787 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52259988-af2b-4ee5-bbfe-801c4ebeb0ae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517085240+00:00 stderr F I1213 00:20:09.517038 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517085240+00:00 stderr F I1213 00:20:09.517060 28750 address_set.go:613] (87985ed0-37ee-4fd2-af4b-1c4bf264af83/default-network-controller:Namespace:openshift-kube-scheduler:v4/a15634036902741400949) deleting addresses [10.217.0.84] from address set 2025-12-13T00:20:09.517097430+00:00 stderr F I1213 00:20:09.517081 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {7fc92973-4bc3-465a-b279-3f843add06f8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517097430+00:00 stderr F I1213 00:20:09.517081 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] 
Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.84]}}] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517138311+00:00 stderr F I1213 00:20:09.517108 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.84]}}] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517224563+00:00 stderr F I1213 00:20:09.517178 28750 port_cache.go:96] port-cache(openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv): added port &{name:openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv uuid:2a5717ea-0a50-4ebb-b087-90e637274a33 logicalSwitch:crc ips:[0xc001588a50] mac:[10 88 10 217 0 25] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.25/23] and MAC: 0a:58:0a:d9:00:19 2025-12-13T00:20:09.517224563+00:00 stderr F I1213 00:20:09.517193 28750 pods.go:220] [openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv] addLogicalPort took 14.225953ms, libovsdb time 8.065652ms 2025-12-13T00:20:09.517224563+00:00 stderr F I1213 00:20:09.517199 28750 obj_retry.go:541] Creating *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv took: 14.238303ms 2025-12-13T00:20:09.517224563+00:00 stderr F I1213 00:20:09.517204 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-12-13T00:20:09.517224563+00:00 stderr F I1213 00:20:09.517210 28750 default_network_controller.go:655] Recording add event on pod openshift-dns/dns-default-gbw49 2025-12-13T00:20:09.517224563+00:00 stderr F I1213 00:20:09.517216 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-dns/dns-default-gbw49 2025-12-13T00:20:09.517243334+00:00 stderr F 
I1213 00:20:09.517224 28750 ovn.go:134] Ensuring zone local for Pod openshift-dns/dns-default-gbw49 in node crc 2025-12-13T00:20:09.517243334+00:00 stderr F I1213 00:20:09.517234 28750 base_network_controller_pods.go:476] [default/openshift-dns/dns-default-gbw49] creating logical port openshift-dns_dns-default-gbw49 for pod on switch crc 2025-12-13T00:20:09.517315046+00:00 stderr F I1213 00:20:09.517266 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.46 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {885a79ad-c3d5-4b78-8ba8-1d83439b736b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517363447+00:00 stderr F I1213 00:20:09.517324 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:885a79ad-c3d5-4b78-8ba8-1d83439b736b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517374418+00:00 stderr F I1213 00:20:09.517355 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} options:{GoMap:map[iface-id-ver:13045510-8717-4a71-ade4-be95a76440a7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517391108+00:00 stderr F I1213 00:20:09.517352 28750 port_cache.go:96] port-cache(openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb): added port &{name:openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb 
uuid:805e2f41-6cb8-4ccf-9939-37cfb4fa5509 logicalSwitch:crc ips:[0xc0015890e0] mac:[10 88 10 217 0 5] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.5/23] and MAC: 0a:58:0a:d9:00:05 2025-12-13T00:20:09.517400888+00:00 stderr F I1213 00:20:09.517386 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517431839+00:00 stderr F I1213 00:20:09.517400 28750 pods.go:220] [openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb] addLogicalPort took 12.049392ms, libovsdb time 3.791244ms 2025-12-13T00:20:09.517443399+00:00 stderr F I1213 00:20:09.517421 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {5c003dc7-268c-4cb5-b66b-9d2f810661e6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517443399+00:00 stderr F I1213 00:20:09.517426 28750 port_cache.go:96] port-cache(openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd): added port &{name:openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd uuid:8b4158c3-d859-42e6-8259-b16ce1cbd284 logicalSwitch:crc ips:[0xc0012e6570] mac:[10 88 10 217 0 39] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.39/23] and MAC: 0a:58:0a:d9:00:27 2025-12-13T00:20:09.517443399+00:00 stderr F I1213 00:20:09.517355 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} options:{GoMap:map[iface-id-ver:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} 
tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {5b2bd23c-417c-4ef4-b90a-359ab75287c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.46 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {885a79ad-c3d5-4b78-8ba8-1d83439b736b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:885a79ad-c3d5-4b78-8ba8-1d83439b736b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517455140+00:00 stderr F I1213 00:20:09.517442 28750 pods.go:220] [openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd] addLogicalPort took 15.272471ms, libovsdb time 12.36551ms 2025-12-13T00:20:09.517464910+00:00 stderr F I1213 00:20:09.517452 28750 obj_retry.go:541] Creating *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd took: 15.289523ms 2025-12-13T00:20:09.517464910+00:00 stderr F I1213 00:20:09.517458 28750 default_network_controller.go:699] Recording success event on pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-12-13T00:20:09.517478640+00:00 stderr F I1213 00:20:09.517465 28750 
default_network_controller.go:655] Recording add event on pod openshift-multus/network-metrics-daemon-qdfr4 2025-12-13T00:20:09.517478640+00:00 stderr F I1213 00:20:09.517472 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 2025-12-13T00:20:09.517488721+00:00 stderr F I1213 00:20:09.517479 28750 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-qdfr4 in node crc 2025-12-13T00:20:09.517498661+00:00 stderr F I1213 00:20:09.517465 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.22 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9a5ec36-71c6-4938-8247-5a459bf24e1f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517498661+00:00 stderr F I1213 00:20:09.517432 28750 obj_retry.go:541] Creating *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb took: 12.093533ms 2025-12-13T00:20:09.517508851+00:00 stderr F I1213 00:20:09.517499 28750 default_network_controller.go:699] Recording success event on pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-12-13T00:20:09.517508851+00:00 stderr F I1213 00:20:09.517504 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/installer-11-crc 2025-12-13T00:20:09.517519382+00:00 stderr F I1213 00:20:09.517510 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/installer-11-crc 2025-12-13T00:20:09.517529032+00:00 stderr F I1213 00:20:09.517516 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-11-crc of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.517538852+00:00 stderr F I1213 00:20:09.517528 28750 port_cache.go:122] port-cache(openshift-kube-controller-manager_installer-11-crc): port not found in cache or already marked for removal 2025-12-13T00:20:09.517538852+00:00 stderr F I1213 00:20:09.517533 28750 pods.go:151] Deleting pod: openshift-kube-controller-manager/installer-11-crc 2025-12-13T00:20:09.517552372+00:00 stderr F I1213 00:20:09.517530 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d9a5ec36-71c6-4938-8247-5a459bf24e1f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517589073+00:00 stderr F W1213 00:20:09.517563 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/installer-11-crc. 
Using logical switch crc port uuid and addrs [10.217.0.85/23] 2025-12-13T00:20:09.517602174+00:00 stderr F I1213 00:20:09.517399 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.49 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65079815-d96f-43ce-8e23-6a61dc263b7b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517634025+00:00 stderr F I1213 00:20:09.517613 28750 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.85] from address set 2025-12-13T00:20:09.517657465+00:00 stderr F I1213 00:20:09.517636 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.85]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517671206+00:00 stderr F I1213 00:20:09.517641 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:65079815-d96f-43ce-8e23-6a61dc263b7b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517681116+00:00 stderr F I1213 00:20:09.517663 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.85]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517691016+00:00 stderr F I1213 00:20:09.517596 28750 
transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} options:{GoMap:map[iface-id-ver:b54e8941-2fc4-432a-9e51-39684df9089e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {82630d91-1647-4c0c-aa84-8f820bcf919e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.22 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9a5ec36-71c6-4938-8247-5a459bf24e1f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d9a5ec36-71c6-4938-8247-5a459bf24e1f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517700836+00:00 stderr F I1213 00:20:09.517681 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT 
Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.31 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517738277+00:00 stderr F I1213 00:20:09.517710 28750 port_cache.go:96] port-cache(openshift-service-ca_service-ca-666f99b6f-kk8kg): added port &{name:openshift-service-ca_service-ca-666f99b6f-kk8kg uuid:9409cb25-8c46-46db-98ab-5eafe9669ef8 logicalSwitch:crc ips:[0xc00107ff50] mac:[10 88 10 217 0 40] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.40/23] and MAC: 0a:58:0a:d9:00:28 2025-12-13T00:20:09.517738277+00:00 stderr F I1213 00:20:09.517712 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517738277+00:00 stderr F I1213 00:20:09.517666 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} options:{GoMap:map[iface-id-ver:12e733dd-0939-4f1b-9cbb-13897e093787 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52259988-af2b-4ee5-bbfe-801c4ebeb0ae}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports 
Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {7fc92973-4bc3-465a-b279-3f843add06f8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.49 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65079815-d96f-43ce-8e23-6a61dc263b7b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:65079815-d96f-43ce-8e23-6a61dc263b7b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517755308+00:00 stderr F I1213 00:20:09.517732 28750 port_cache.go:96] port-cache(openshift-marketplace_community-operators-s2hxn): added port &{name:openshift-marketplace_community-operators-s2hxn uuid:d9d09e10-aa79-4a77-ba42-e77ca54f8045 logicalSwitch:crc ips:[0xc001388c00] mac:[10 88 10 217 0 37] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.37/23] and MAC: 0a:58:0a:d9:00:25 2025-12-13T00:20:09.517755308+00:00 stderr F I1213 00:20:09.517741 28750 pods.go:220] [openshift-marketplace/community-operators-s2hxn] addLogicalPort took 17.279217ms, libovsdb time 4.514074ms 2025-12-13T00:20:09.517755308+00:00 stderr F I1213 00:20:09.517748 28750 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/community-operators-s2hxn took: 17.297687ms 2025-12-13T00:20:09.517755308+00:00 stderr F I1213 00:20:09.517752 28750 default_network_controller.go:699] Recording success event on pod openshift-marketplace/community-operators-s2hxn 2025-12-13T00:20:09.517766618+00:00 stderr F I1213 00:20:09.517758 28750 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-12-13T00:20:09.517776669+00:00 stderr F I1213 
00:20:09.517765 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-12-13T00:20:09.517776669+00:00 stderr F I1213 00:20:09.517730 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} options:{GoMap:map[iface-id-ver:13045510-8717-4a71-ade4-be95a76440a7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {5c003dc7-268c-4cb5-b66b-9d2f810661e6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.31 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.517776669+00:00 stderr F I1213 00:20:09.517772 28750 ovn.go:134] Ensuring zone local for Pod 
openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh in node crc 2025-12-13T00:20:09.517791029+00:00 stderr F I1213 00:20:09.517781 28750 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh] creating logical port openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh for pod on switch crc 2025-12-13T00:20:09.517859061+00:00 stderr F I1213 00:20:09.517726 28750 pods.go:220] [openshift-service-ca/service-ca-666f99b6f-kk8kg] addLogicalPort took 16.235628ms, libovsdb time 12.442703ms 2025-12-13T00:20:09.517859061+00:00 stderr F I1213 00:20:09.517837 28750 port_cache.go:96] port-cache(openshift-multus_multus-admission-controller-6c7c885997-4hbbc): added port &{name:openshift-multus_multus-admission-controller-6c7c885997-4hbbc uuid:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6 logicalSwitch:crc ips:[0xc0012d3740] mac:[10 88 10 217 0 32] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.32/23] and MAC: 0a:58:0a:d9:00:20 2025-12-13T00:20:09.517859061+00:00 stderr F I1213 00:20:09.517844 28750 obj_retry.go:541] Creating *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg took: 16.614138ms 2025-12-13T00:20:09.517872101+00:00 stderr F I1213 00:20:09.517853 28750 port_cache.go:96] port-cache(openshift-dns-operator_dns-operator-75f687757b-nz2xb): added port &{name:openshift-dns-operator_dns-operator-75f687757b-nz2xb uuid:b212e2c2-3d4e-4898-aede-c926b74813f0 logicalSwitch:crc ips:[0xc001358f90] mac:[10 88 10 217 0 18] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.18/23] and MAC: 0a:58:0a:d9:00:12 2025-12-13T00:20:09.517872101+00:00 stderr F I1213 00:20:09.517858 28750 default_network_controller.go:699] Recording success event on pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-12-13T00:20:09.517872101+00:00 stderr F I1213 00:20:09.517862 28750 pods.go:220] [openshift-dns-operator/dns-operator-75f687757b-nz2xb] addLogicalPort took 14.805688ms, libovsdb time 663.779µs 
2025-12-13T00:20:09.517895052+00:00 stderr F I1213 00:20:09.517489 28750 base_network_controller_pods.go:476] [default/openshift-multus/network-metrics-daemon-qdfr4] creating logical port openshift-multus_network-metrics-daemon-qdfr4 for pod on switch crc 2025-12-13T00:20:09.517895052+00:00 stderr F I1213 00:20:09.517876 28750 obj_retry.go:541] Creating *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb took: 14.827699ms 2025-12-13T00:20:09.517895052+00:00 stderr F I1213 00:20:09.517872 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-12-13T00:20:09.517895052+00:00 stderr F I1213 00:20:09.517886 28750 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-v54bt): added port &{name:openshift-network-diagnostics_network-check-target-v54bt uuid:c0f95133-023f-4bbd-8719-e29d2cfbb32d logicalSwitch:crc ips:[0xc001240210] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04 2025-12-13T00:20:09.517910352+00:00 stderr F I1213 00:20:09.517893 28750 pods.go:220] [openshift-network-diagnostics/network-check-target-v54bt] addLogicalPort took 3.815285ms, libovsdb time 442.463µs 2025-12-13T00:20:09.517910352+00:00 stderr F I1213 00:20:09.517897 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-12-13T00:20:09.517910352+00:00 stderr F I1213 00:20:09.517899 28750 obj_retry.go:541] Creating *v1.Pod openshift-network-diagnostics/network-check-target-v54bt took: 3.872957ms 2025-12-13T00:20:09.517910352+00:00 stderr F I1213 00:20:09.517904 28750 default_network_controller.go:699] Recording success event on pod openshift-network-diagnostics/network-check-target-v54bt 2025-12-13T00:20:09.517921712+00:00 stderr F I1213 00:20:09.517911 28750 default_network_controller.go:655] Recording add event on pod 
openshift-network-operator/iptables-alerter-wwpnd 2025-12-13T00:20:09.517921712+00:00 stderr F I1213 00:20:09.517882 28750 default_network_controller.go:699] Recording success event on pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-12-13T00:20:09.517921712+00:00 stderr F I1213 00:20:09.517918 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-network-operator/iptables-alerter-wwpnd 2025-12-13T00:20:09.517991394+00:00 stderr F I1213 00:20:09.517923 28750 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh 2025-12-13T00:20:09.517991394+00:00 stderr F I1213 00:20:09.517925 28750 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-wwpnd in node crc 2025-12-13T00:20:09.517991394+00:00 stderr F I1213 00:20:09.517975 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh 2025-12-13T00:20:09.517991394+00:00 stderr F I1213 00:20:09.517980 28750 obj_retry.go:541] Creating *v1.Pod openshift-network-operator/iptables-alerter-wwpnd took: 9.971µs 2025-12-13T00:20:09.517991394+00:00 stderr F I1213 00:20:09.517985 28750 default_network_controller.go:699] Recording success event on pod openshift-network-operator/iptables-alerter-wwpnd 2025-12-13T00:20:09.518010315+00:00 stderr F I1213 00:20:09.517989 28750 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.518010315+00:00 stderr F I1213 00:20:09.517848 28750 pods.go:220] [openshift-multus/multus-admission-controller-6c7c885997-4hbbc] addLogicalPort took 16.238778ms, libovsdb time 3.029434ms 2025-12-13T00:20:09.518010315+00:00 stderr F I1213 00:20:09.517998 28750 obj_retry.go:541] Creating *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc took: 16.401612ms 2025-12-13T00:20:09.518010315+00:00 stderr F I1213 00:20:09.518003 28750 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-12-13T00:20:09.518021805+00:00 stderr F I1213 00:20:09.518007 28750 port_cache.go:122] port-cache(openshift-operator-lifecycle-manager_collect-profiles-29426415-vhdrh): port not found in cache or already marked for removal 2025-12-13T00:20:09.518021805+00:00 stderr F I1213 00:20:09.517912 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 in node crc 2025-12-13T00:20:09.518021805+00:00 stderr F I1213 00:20:09.518017 28750 pods.go:151] Deleting pod: openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh 2025-12-13T00:20:09.518032495+00:00 stderr F I1213 00:20:09.518026 28750 base_network_controller_pods.go:476] [default/openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7] creating logical port openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 for pod on switch crc 2025-12-13T00:20:09.518094557+00:00 stderr F W1213 00:20:09.518067 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh. 
Using logical switch crc port uuid and addrs [10.217.0.38/23] 2025-12-13T00:20:09.518191420+00:00 stderr F I1213 00:20:09.518109 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.24 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {86257c8a-6754-4af2-9aae-87ed521f1c5f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518191420+00:00 stderr F I1213 00:20:09.518147 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} options:{GoMap:map[iface-id-ver:71af81a9-7d43-49b2-9287-c375900aa905 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e77fb5d-c04f-467c-9883-8cb59d819d86}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518191420+00:00 stderr F I1213 00:20:09.518162 28750 address_set.go:613] (369b3a88-e647-4277-a647-25ffea296a4a/default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4/a1482332553631220387) deleting addresses [10.217.0.38] from address set 2025-12-13T00:20:09.518191420+00:00 stderr F I1213 00:20:09.518158 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:86257c8a-6754-4af2-9aae-87ed521f1c5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518191420+00:00 stderr F I1213 00:20:09.518178 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518218101+00:00 stderr F I1213 00:20:09.518194 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.38]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518228091+00:00 stderr F I1213 00:20:09.518213 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {c8d9d75a-827d-4b5a-8293-96d3de66db7c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518238851+00:00 stderr F I1213 00:20:09.518192 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} options:{GoMap:map[iface-id-ver:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f24db1f4-18a4-418a-9c99-1d94ebfba0da}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: 
Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.24 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {86257c8a-6754-4af2-9aae-87ed521f1c5f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:86257c8a-6754-4af2-9aae-87ed521f1c5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518262632+00:00 stderr F I1213 00:20:09.518204 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} options:{GoMap:map[iface-id-ver:a702c6d2-4dde-4077-ab8c-0f8df804bf7a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3564ddfd-a311-4df3-b5d0-1e76294b4ab0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518262632+00:00 stderr F I1213 00:20:09.518237 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.38]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518324673+00:00 stderr F I1213 00:20:09.518272 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch 
Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518324673+00:00 stderr F I1213 00:20:09.517990 28750 default_network_controller.go:655] Recording add event on pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-12-13T00:20:09.518324673+00:00 stderr F I1213 00:20:09.518317 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-12-13T00:20:09.518338504+00:00 stderr F I1213 00:20:09.518324 28750 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b in node crc 2025-12-13T00:20:09.518338504+00:00 stderr F I1213 00:20:09.517996 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} options:{GoMap:map[iface-id-ver:c085412c-b875-46c9-ae3e-e6b0d8067091 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518398385+00:00 stderr F I1213 00:20:09.518008 28750 default_network_controller.go:655] Recording add event on pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-12-13T00:20:09.518398385+00:00 stderr F I1213 00:20:09.518351 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: 
UUIDName:}] 2025-12-13T00:20:09.518398385+00:00 stderr F I1213 00:20:09.518360 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518398385+00:00 stderr F I1213 00:20:09.518373 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-12-13T00:20:09.518398385+00:00 stderr F I1213 00:20:09.518382 28750 ovn.go:134] Ensuring zone local for Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf in node crc 2025-12-13T00:20:09.518411906+00:00 stderr F I1213 00:20:09.518403 28750 base_network_controller_pods.go:476] [default/openshift-controller-manager/controller-manager-778975cc4f-x5vcf] creating logical port openshift-controller-manager_controller-manager-778975cc4f-x5vcf for pod on switch crc 2025-12-13T00:20:09.518421826+00:00 stderr F I1213 00:20:09.518408 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518480808+00:00 stderr F I1213 00:20:09.518447 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.12 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2453849f-f886-4491-b6c8-a4d7af784119}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518495258+00:00 stderr F I1213 00:20:09.518482 
28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2453849f-f886-4491-b6c8-a4d7af784119}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518568650+00:00 stderr F I1213 00:20:09.518495 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} options:{GoMap:map[iface-id-ver:71af81a9-7d43-49b2-9287-c375900aa905 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e77fb5d-c04f-467c-9883-8cb59d819d86}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {c8d9d75a-827d-4b5a-8293-96d3de66db7c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.12 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2453849f-f886-4491-b6c8-a4d7af784119}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2453849f-f886-4491-b6c8-a4d7af784119}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518568650+00:00 stderr F I1213 00:20:09.518529 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} options:{GoMap:map[iface-id-ver:1a3e81c3-c292-4130-9436-f94062c91efd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {eda38bc9-7da5-4a6b-818c-4e1e8f85426d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518584930+00:00 stderr F I1213 00:20:09.518558 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518613891+00:00 stderr F I1213 00:20:09.518596 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d9150f6f-5be3-40d7-848b-deb1197b35b9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518821287+00:00 stderr F I1213 00:20:09.518779 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.14 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8946487-1f89-4d39-965a-9596d567c892}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518865838+00:00 stderr F I1213 00:20:09.518828 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT 
Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.87 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01c8cadf-7d39-4ad0-9627-60cf1df4b48e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518865838+00:00 stderr F I1213 00:20:09.518329 28750 obj_retry.go:541] Creating *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b took: 6.86µs 2025-12-13T00:20:09.518865838+00:00 stderr F I1213 00:20:09.518855 28750 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-12-13T00:20:09.518902129+00:00 stderr F I1213 00:20:09.518865 28750 default_network_controller.go:655] Recording add event on pod openshift-console/downloads-65476884b9-9wcvx 2025-12-13T00:20:09.518902129+00:00 stderr F I1213 00:20:09.518862 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:01c8cadf-7d39-4ad0-9627-60cf1df4b48e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518902129+00:00 stderr F I1213 00:20:09.518845 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7e5e1c6-53f2-4022-a945-14d792e680f9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518902129+00:00 stderr F I1213 00:20:09.518875 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-console/downloads-65476884b9-9wcvx 2025-12-13T00:20:09.518902129+00:00 stderr F I1213 00:20:09.518886 28750 ovn.go:134] Ensuring zone local for Pod openshift-console/downloads-65476884b9-9wcvx in node crc 
2025-12-13T00:20:09.518918760+00:00 stderr F I1213 00:20:09.518901 28750 base_network_controller_pods.go:476] [default/openshift-console/downloads-65476884b9-9wcvx] creating logical port openshift-console_downloads-65476884b9-9wcvx for pod on switch crc 2025-12-13T00:20:09.518949560+00:00 stderr F I1213 00:20:09.518909 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7e5e1c6-53f2-4022-a945-14d792e680f9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.518949560+00:00 stderr F I1213 00:20:09.518891 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} options:{GoMap:map[iface-id-ver:1a3e81c3-c292-4130-9436-f94062c91efd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {eda38bc9-7da5-4a6b-818c-4e1e8f85426d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d9150f6f-5be3-40d7-848b-deb1197b35b9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.87 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{01c8cadf-7d39-4ad0-9627-60cf1df4b48e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:01c8cadf-7d39-4ad0-9627-60cf1df4b48e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.519086824+00:00 stderr F I1213 00:20:09.518830 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d8946487-1f89-4d39-965a-9596d567c892}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.519086824+00:00 stderr F I1213 00:20:09.518966 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} options:{GoMap:map[iface-id-ver:a702c6d2-4dde-4077-ab8c-0f8df804bf7a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3564ddfd-a311-4df3-b5d0-1e76294b4ab0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7e5e1c6-53f2-4022-a945-14d792e680f9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7e5e1c6-53f2-4022-a945-14d792e680f9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.519105115+00:00 stderr F I1213 00:20:09.519061 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} options:{GoMap:map[iface-id-ver:c085412c-b875-46c9-ae3e-e6b0d8067091 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.14 options:{GoMap:map[stateless:false]} 
type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8946487-1f89-4d39-965a-9596d567c892}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d8946487-1f89-4d39-965a-9596d567c892}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.519147047+00:00 stderr F I1213 00:20:09.519100 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} options:{GoMap:map[iface-id-ver:6268b7fe-8910-4505-b404-6f1df638105c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {745a40f7-2acc-4e2b-a087-861e0ea97ffe}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.519165727+00:00 stderr F I1213 00:20:09.519150 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.519230859+00:00 stderr F I1213 00:20:09.519198 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == 
{82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.519329242+00:00 stderr F I1213 00:20:09.519312 28750 route_manager.go:110] Route Manager: completed adding route: {Ifindex: 5 Dst: 169.254.169.3/32 Src: Gw: 10.217.0.1 Flags: [] Table: 254 Realm: 0} 2025-12-13T00:20:09.519400244+00:00 stderr F I1213 00:20:09.519385 28750 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.1.255/32 Src: 10.217.0.2 Gw: Flags: [] Table: 255 Realm: 0}" 2025-12-13T00:20:09.519454385+00:00 stderr F I1213 00:20:09.519428 28750 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.0.0/23 Src: 10.217.0.2 Gw: Flags: [] Table: 254 Realm: 0}" 2025-12-13T00:20:09.519503256+00:00 stderr F I1213 00:20:09.519489 28750 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.0.0/22 Src: Gw: 10.217.0.1 Flags: [] Table: 254 Realm: 0}" 2025-12-13T00:20:09.519553178+00:00 stderr F I1213 00:20:09.519527 28750 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 169.254.169.3/32 Src: Gw: 10.217.0.1 Flags: [] Table: 254 Realm: 0}" 2025-12-13T00:20:09.519611589+00:00 stderr F I1213 00:20:09.519575 28750 pods.go:185] Attempting to release IPs for pod: openshift-kube-scheduler/installer-8-crc, ips: 10.217.0.84 2025-12-13T00:20:09.519623200+00:00 stderr F I1213 00:20:09.519395 28750 port_cache.go:96] port-cache(openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv): added port &{name:openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv uuid:82630d91-1647-4c0c-aa84-8f820bcf919e logicalSwitch:crc ips:[0xc001359080] mac:[10 88 10 217 0 22] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.22/23] and MAC: 0a:58:0a:d9:00:16 2025-12-13T00:20:09.519633110+00:00 stderr F I1213 00:20:09.519618 28750 pods.go:220] 
[openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv] addLogicalPort took 3.25482ms, libovsdb time 1.7959ms 2025-12-13T00:20:09.519633110+00:00 stderr F I1213 00:20:09.519628 28750 obj_retry.go:541] Creating *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv took: 3.289921ms 2025-12-13T00:20:09.519658121+00:00 stderr F I1213 00:20:09.519635 28750 default_network_controller.go:699] Recording success event on pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-12-13T00:20:09.519658121+00:00 stderr F I1213 00:20:09.519644 28750 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5 2025-12-13T00:20:09.519658121+00:00 stderr F I1213 00:20:09.519652 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5 2025-12-13T00:20:09.519669351+00:00 stderr F I1213 00:20:09.519659 28750 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5 of type *v1.Pod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.519685081+00:00 stderr F I1213 00:20:09.519664 28750 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/installer-11-crc, ips: 10.217.0.85 2025-12-13T00:20:09.519685081+00:00 stderr F I1213 00:20:09.519672 28750 port_cache.go:122] port-cache(openshift-operator-lifecycle-manager_collect-profiles-29426400-gwxp5): port not found in cache or already marked for removal 2025-12-13T00:20:09.519685081+00:00 stderr F I1213 00:20:09.519677 28750 pods.go:151] Deleting pod: openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5 2025-12-13T00:20:09.519716942+00:00 stderr F I1213 00:20:09.519688 28750 port_cache.go:96] port-cache(hostpath-provisioner_csi-hostpathplugin-hvm8g): added port &{name:hostpath-provisioner_csi-hostpathplugin-hvm8g uuid:52259988-af2b-4ee5-bbfe-801c4ebeb0ae logicalSwitch:crc ips:[0xc000c96210] mac:[10 88 10 217 0 49] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.49/23] and MAC: 0a:58:0a:d9:00:31 2025-12-13T00:20:09.519716942+00:00 stderr F I1213 00:20:09.519704 28750 pods.go:220] [hostpath-provisioner/csi-hostpathplugin-hvm8g] addLogicalPort took 2.875949ms, libovsdb time 2.019366ms 2025-12-13T00:20:09.519716942+00:00 stderr F I1213 00:20:09.519689 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.66 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7790ce03-cefa-4620-876d-a377ca4c3dbf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.519728783+00:00 stderr F I1213 00:20:09.519711 28750 obj_retry.go:541] Creating *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g took: 2.88951ms 2025-12-13T00:20:09.519728783+00:00 stderr F W1213 00:20:09.519716 28750 base_network_controller_pods.go:221] No cached port info for deleting pod 
default/openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5. Using logical switch crc port uuid and addrs [10.217.0.17/23] 2025-12-13T00:20:09.519728783+00:00 stderr F I1213 00:20:09.519720 28750 default_network_controller.go:699] Recording success event on pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-12-13T00:20:09.519739693+00:00 stderr F I1213 00:20:09.519728 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/revision-pruner-10-crc 2025-12-13T00:20:09.519739693+00:00 stderr F I1213 00:20:09.519734 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/revision-pruner-10-crc 2025-12-13T00:20:09.519749833+00:00 stderr F I1213 00:20:09.519740 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-10-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.519759683+00:00 stderr F I1213 00:20:09.519750 28750 port_cache.go:122] port-cache(openshift-kube-controller-manager_revision-pruner-10-crc): port not found in cache or already marked for removal 2025-12-13T00:20:09.519759683+00:00 stderr F I1213 00:20:09.519754 28750 pods.go:151] Deleting pod: openshift-kube-controller-manager/revision-pruner-10-crc 2025-12-13T00:20:09.519759683+00:00 stderr F I1213 00:20:09.519754 28750 address_set.go:613] (369b3a88-e647-4277-a647-25ffea296a4a/default-network-controller:Namespace:openshift-operator-lifecycle-manager:v4/a1482332553631220387) deleting addresses [10.217.0.17] from address set 2025-12-13T00:20:09.519770534+00:00 stderr F I1213 00:20:09.519747 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7790ce03-cefa-4620-876d-a377ca4c3dbf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: 
UUID: UUIDName:}] 2025-12-13T00:20:09.519811655+00:00 stderr F W1213 00:20:09.519785 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/revision-pruner-10-crc. Using logical switch crc port uuid and addrs [10.217.0.68/23] 2025-12-13T00:20:09.519811655+00:00 stderr F I1213 00:20:09.519782 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.17]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.519826635+00:00 stderr F I1213 00:20:09.519819 28750 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.68] from address set 2025-12-13T00:20:09.519840136+00:00 stderr F I1213 00:20:09.519824 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.17]}}] Timeout: Where:[where column _uuid == {369b3a88-e647-4277-a647-25ffea296a4a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.519850226+00:00 stderr F I1213 00:20:09.519771 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} options:{GoMap:map[iface-id-ver:6268b7fe-8910-4505-b404-6f1df638105c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {745a40f7-2acc-4e2b-a087-861e0ea97ffe}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.66 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7790ce03-cefa-4620-876d-a377ca4c3dbf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7790ce03-cefa-4620-876d-a377ca4c3dbf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.519850226+00:00 stderr F I1213 00:20:09.519837 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.68]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.519895577+00:00 stderr F I1213 00:20:09.519678 28750 default_network_controller.go:655] Recording add event on pod openshift-network-operator/network-operator-767c585db5-zd56b 2025-12-13T00:20:09.519895577+00:00 stderr F I1213 00:20:09.519865 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.68]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.519895577+00:00 stderr F I1213 00:20:09.519886 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b 2025-12-13T00:20:09.519911398+00:00 stderr F I1213 00:20:09.519896 28750 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-767c585db5-zd56b in node crc 2025-12-13T00:20:09.519911398+00:00 stderr F I1213 00:20:09.519903 28750 obj_retry.go:541] Creating *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b took: 8.82µs 2025-12-13T00:20:09.519911398+00:00 stderr F I1213 00:20:09.519908 28750 default_network_controller.go:699] Recording success event on pod openshift-network-operator/network-operator-767c585db5-zd56b 2025-12-13T00:20:09.519922208+00:00 stderr F I1213 00:20:09.519916 28750 default_network_controller.go:655] Recording add event on pod openshift-marketplace/marketplace-operator-8b455464d-kghgr 2025-12-13T00:20:09.519946209+00:00 stderr F I1213 00:20:09.519924 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-kghgr 2025-12-13T00:20:09.519961499+00:00 stderr F I1213 00:20:09.519916 28750 port_cache.go:96] port-cache(openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg): added port &{name:openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg uuid:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4 logicalSwitch:crc ips:[0xc0015c4930] mac:[10 88 10 217 0 46] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.46/23] and MAC: 0a:58:0a:d9:00:2e 2025-12-13T00:20:09.519971259+00:00 stderr F I1213 00:20:09.519960 28750 pods.go:220] [openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg] addLogicalPort took 8.099624ms, libovsdb time 2.55279ms 2025-12-13T00:20:09.519980900+00:00 stderr F I1213 00:20:09.519972 28750 obj_retry.go:541] Creating *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg took: 
8.122254ms
2025-12-13T00:20:09.519990640+00:00 stderr F I1213 00:20:09.519980 28750 default_network_controller.go:699] Recording success event on pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg
2025-12-13T00:20:09.519990640+00:00 stderr F I1213 00:20:09.519975 28750 port_cache.go:96] port-cache(openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7): added port &{name:openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 uuid:6e77fb5d-c04f-467c-9883-8cb59d819d86 logicalSwitch:crc ips:[0xc000cc75f0] mac:[10 88 10 217 0 12] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.12/23] and MAC: 0a:58:0a:d9:00:0c
2025-12-13T00:20:09.520000900+00:00 stderr F I1213 00:20:09.519989 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver/installer-12-crc
2025-12-13T00:20:09.520010540+00:00 stderr F I1213 00:20:09.519999 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver/installer-12-crc
2025-12-13T00:20:09.520010540+00:00 stderr F I1213 00:20:09.519992 28750 pods.go:220] [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7] addLogicalPort took 1.971325ms, libovsdb time 1.472691ms
2025-12-13T00:20:09.520020571+00:00 stderr F I1213 00:20:09.520008 28750 obj_retry.go:541] Creating *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 took: 2.101248ms
2025-12-13T00:20:09.520020571+00:00 stderr F I1213 00:20:09.519952 28750 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/marketplace-operator-8b455464d-kghgr in node crc
2025-12-13T00:20:09.520020571+00:00 stderr F I1213 00:20:09.520015 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7
2025-12-13T00:20:09.520031481+00:00 stderr F I1213 00:20:09.520021 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver/kube-apiserver-crc
2025-12-13T00:20:09.520031481+00:00 stderr F I1213 00:20:09.520028 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc
2025-12-13T00:20:09.520044891+00:00 stderr F I1213 00:20:09.520028 28750 base_network_controller_pods.go:476] [default/openshift-marketplace/marketplace-operator-8b455464d-kghgr] creating logical port openshift-marketplace_marketplace-operator-8b455464d-kghgr for pod on switch crc
2025-12-13T00:20:09.520044891+00:00 stderr F I1213 00:20:09.520009 28750 port_cache.go:96] port-cache(openshift-dns_dns-default-gbw49): added port &{name:openshift-dns_dns-default-gbw49 uuid:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213 logicalSwitch:crc ips:[0xc0018db4a0] mac:[10 88 10 217 0 31] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.31/23] and MAC: 0a:58:0a:d9:00:1f
2025-12-13T00:20:09.520092253+00:00 stderr F I1213 00:20:09.520058 28750 pods.go:220] [openshift-dns/dns-default-gbw49] addLogicalPort took 2.823047ms, libovsdb time 2.243332ms
2025-12-13T00:20:09.520092253+00:00 stderr F I1213 00:20:09.520057 28750 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2): added port &{name:openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2 uuid:f24db1f4-18a4-418a-9c99-1d94ebfba0da logicalSwitch:crc ips:[0xc0016baf60] mac:[10 88 10 217 0 24] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.24/23] and MAC: 0a:58:0a:d9:00:18
2025-12-13T00:20:09.520105993+00:00 stderr F I1213 00:20:09.520090 28750 pods.go:220] [openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2] addLogicalPort took 4.109003ms, libovsdb time 1.853441ms
2025-12-13T00:20:09.520105993+00:00 stderr F I1213 00:20:09.520090 28750 obj_retry.go:541] Creating *v1.Pod openshift-dns/dns-default-gbw49 took: 2.862039ms
2025-12-13T00:20:09.520105993+00:00 stderr F I1213 00:20:09.520099 28750 obj_retry.go:541] Creating *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 took: 4.125685ms
2025-12-13T00:20:09.520119783+00:00 stderr F I1213 00:20:09.520106 28750 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2
2025-12-13T00:20:09.520119783+00:00 stderr F I1213 00:20:09.520107 28750 default_network_controller.go:699] Recording success event on pod openshift-dns/dns-default-gbw49
2025-12-13T00:20:09.520119783+00:00 stderr F I1213 00:20:09.520115 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7
2025-12-13T00:20:09.520146574+00:00 stderr F I1213 00:20:09.520125 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7
2025-12-13T00:20:09.520146574+00:00 stderr F I1213 00:20:09.520126 28750 default_network_controller.go:655] Recording add event on pod openshift-marketplace/redhat-operators-zg7cl
2025-12-13T00:20:09.520146574+00:00 stderr F I1213 00:20:09.520134 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 in node crc
2025-12-13T00:20:09.520158444+00:00 stderr F I1213 00:20:09.520147 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/redhat-operators-zg7cl
2025-12-13T00:20:09.520158444+00:00 stderr F I1213 00:20:09.520009 28750 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-12-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it
2025-12-13T00:20:09.520168505+00:00 stderr F I1213 00:20:09.520035 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc
2025-12-13T00:20:09.520178285+00:00 stderr F I1213 00:20:09.520163 28750 pods.go:185] Attempting to release IPs for pod: openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh, ips: 10.217.0.38
2025-12-13T00:20:09.520178285+00:00 stderr F I1213 00:20:09.520169 28750 obj_retry.go:541] Creating *v1.Pod openshift-kube-apiserver/kube-apiserver-crc took: 133.234µs
2025-12-13T00:20:09.520193655+00:00 stderr F I1213 00:20:09.520167 28750 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/redhat-operators-zg7cl in node crc
2025-12-13T00:20:09.520193655+00:00 stderr F I1213 00:20:09.520180 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc
2025-12-13T00:20:09.520193655+00:00 stderr F I1213 00:20:09.520152 28750 base_network_controller_pods.go:476] [default/openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7] creating logical port openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7 for pod on switch crc
2025-12-13T00:20:09.520193655+00:00 stderr F I1213 00:20:09.520188 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb
2025-12-13T00:20:09.520204826+00:00 stderr F I1213 00:20:09.520196 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb
2025-12-13T00:20:09.520214546+00:00 stderr F I1213 00:20:09.520205 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb in node crc
2025-12-13T00:20:09.520224286+00:00 stderr F I1213 00:20:09.520208 28750 base_network_controller_pods.go:476] [default/openshift-marketplace/redhat-operators-zg7cl] creating logical port openshift-marketplace_redhat-operators-zg7cl for pod on switch crc
2025-12-13T00:20:09.520224286+00:00 stderr F I1213 00:20:09.520218 28750 base_network_controller_pods.go:476] [default/openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb] creating logical port openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb for pod on switch crc
2025-12-13T00:20:09.520235186+00:00 stderr F I1213 00:20:09.520204 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} options:{GoMap:map[iface-id-ver:9025b167-d0fc-419f-92c1-add28909ab7c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {116f6820-9a92-4bbc-bb34-265c58c5b649}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520286938+00:00 stderr F I1213 00:20:09.520177 28750 port_cache.go:122] port-cache(openshift-kube-apiserver_installer-12-crc): port not found in cache or already marked for removal
2025-12-13T00:20:09.520286938+00:00 stderr F I1213 00:20:09.520271 28750 pods.go:151] Deleting pod: openshift-kube-apiserver/installer-12-crc
2025-12-13T00:20:09.520286938+00:00 stderr F I1213 00:20:09.520261 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:116f6820-9a92-4bbc-bb34-265c58c5b649}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520354170+00:00 stderr F W1213 00:20:09.520312 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-apiserver/installer-12-crc. Using logical switch crc port uuid and addrs [10.217.0.86/23]
2025-12-13T00:20:09.520354170+00:00 stderr F I1213 00:20:09.520315 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:116f6820-9a92-4bbc-bb34-265c58c5b649}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520371060+00:00 stderr F I1213 00:20:09.520356 28750 address_set.go:613] (6276d4aa-26f2-4006-a763-72ea13795238/default-network-controller:Namespace:openshift-kube-apiserver:v4/a4531626005796422843) deleting addresses [10.217.0.86] from address set
2025-12-13T00:20:09.520421552+00:00 stderr F I1213 00:20:09.520382 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.86]}}] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520421552+00:00 stderr F I1213 00:20:09.520384 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} options:{GoMap:map[iface-id-ver:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be2fa59f-4cec-4742-a4bd-dcd0913d1422}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520472033+00:00 stderr F I1213 00:20:09.520425 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.86]}}] Timeout: Where:[where column _uuid == {6276d4aa-26f2-4006-a763-72ea13795238}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520472033+00:00 stderr F I1213 00:20:09.520430 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520472033+00:00 stderr F I1213 00:20:09.520430 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} options:{GoMap:map[iface-id-ver:ed024e5d-8fc2-4c22-803d-73f3c9795f19 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5ecfd58-e886-4b2c-9939-022e7f14b7a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520515544+00:00 stderr F I1213 00:20:09.520481 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {4001cc39-c23c-46dd-87d7-2a0f579404da}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520515544+00:00 stderr F I1213 00:20:09.520486 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520572436+00:00 stderr F I1213 00:20:09.520441 28750 port_cache.go:96] port-cache(openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm): added port &{name:openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm uuid:ad3d5728-34ed-421c-a749-1d7a957800a8 logicalSwitch:crc ips:[0xc00170d4d0] mac:[10 88 10 217 0 21] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.21/23] and MAC: 0a:58:0a:d9:00:15
2025-12-13T00:20:09.520628847+00:00 stderr F I1213 00:20:09.520608 28750 pods.go:220] [openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm] addLogicalPort took 7.577159ms, libovsdb time 3.595389ms
2025-12-13T00:20:09.520683079+00:00 stderr F I1213 00:20:09.520646 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.30 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3c1b071a-c76b-475e-8109-e930ef901298}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520729470+00:00 stderr F I1213 00:20:09.520702 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:3c1b071a-c76b-475e-8109-e930ef901298}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520760811+00:00 stderr F I1213 00:20:09.520745 28750 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm took: 7.636761ms
2025-12-13T00:20:09.520818352+00:00 stderr F I1213 00:20:09.520805 28750 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm
2025-12-13T00:20:09.520871434+00:00 stderr F I1213 00:20:09.520815 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.15 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {71937aad-6e75-4d89-9a20-9586e5b9d460}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520884804+00:00 stderr F I1213 00:20:09.520868 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:71937aad-6e75-4d89-9a20-9586e5b9d460}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.520922875+00:00 stderr F I1213 00:20:09.520909 28750 default_network_controller.go:655] Recording add event on pod openshift-etcd/etcd-crc
2025-12-13T00:20:09.521008027+00:00 stderr F I1213 00:20:09.520992 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-etcd/etcd-crc
2025-12-13T00:20:09.521079259+00:00 stderr F I1213 00:20:09.521065 28750 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc
2025-12-13T00:20:09.521127141+00:00 stderr F I1213 00:20:09.521101 28750 obj_retry.go:541] Creating *v1.Pod openshift-etcd/etcd-crc took: 40.821µs
2025-12-13T00:20:09.521179272+00:00 stderr F I1213 00:20:09.520555 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {cdc4ecc4-c623-407e-ad92-33cb6c2b7b75}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.521268385+00:00 stderr F I1213 00:20:09.520885 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} options:{GoMap:map[iface-id-ver:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be2fa59f-4cec-4742-a4bd-dcd0913d1422}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {4001cc39-c23c-46dd-87d7-2a0f579404da}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.15 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {71937aad-6e75-4d89-9a20-9586e5b9d460}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:71937aad-6e75-4d89-9a20-9586e5b9d460}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.521342177+00:00 stderr F I1213 00:20:09.520720 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} options:{GoMap:map[iface-id-ver:9025b167-d0fc-419f-92c1-add28909ab7c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {116f6820-9a92-4bbc-bb34-265c58c5b649}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:116f6820-9a92-4bbc-bb34-265c58c5b649}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:116f6820-9a92-4bbc-bb34-265c58c5b649}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.30 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3c1b071a-c76b-475e-8109-e930ef901298}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:3c1b071a-c76b-475e-8109-e930ef901298}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.521466810+00:00 stderr F I1213 00:20:09.520588 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} options:{GoMap:map[iface-id-ver:b9a848b0-1438-4ada-b7da-2fe53dbf235f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1bf49fc6-86bc-42b9-9630-d7184a92b0eb}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.521529142+00:00 stderr F I1213 00:20:09.521490 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1bf49fc6-86bc-42b9-9630-d7184a92b0eb}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.521577133+00:00 stderr F I1213 00:20:09.521156 28750 default_network_controller.go:699] Recording success event on pod openshift-etcd/etcd-crc
2025-12-13T00:20:09.521577133+00:00 stderr F I1213 00:20:09.521562 28750 default_network_controller.go:655] Recording add event on pod openshift-image-registry/image-registry-75b7bb6564-rnjvj
2025-12-13T00:20:09.521577133+00:00 stderr F I1213 00:20:09.521572 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-image-registry/image-registry-75b7bb6564-rnjvj
2025-12-13T00:20:09.521594003+00:00 stderr F I1213 00:20:09.521581 28750 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/image-registry-75b7bb6564-rnjvj in node crc
2025-12-13T00:20:09.521603754+00:00 stderr F I1213 00:20:09.521595 28750 base_network_controller_pods.go:476] [default/openshift-image-registry/image-registry-75b7bb6564-rnjvj] creating logical port openshift-image-registry_image-registry-75b7bb6564-rnjvj for pod on switch crc
2025-12-13T00:20:09.521650585+00:00 stderr F I1213 00:20:09.521526 28750 port_cache.go:96] port-cache(openshift-multus_network-metrics-daemon-qdfr4): added port &{name:openshift-multus_network-metrics-daemon-qdfr4 uuid:3564ddfd-a311-4df3-b5d0-1e76294b4ab0 logicalSwitch:crc ips:[0xc000c6f200] mac:[10 88 10 217 0 3] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.3/23] and MAC: 0a:58:0a:d9:00:03
2025-12-13T00:20:09.521650585+00:00 stderr F I1213 00:20:09.521639 28750 pods.go:220] [openshift-multus/network-metrics-daemon-qdfr4] addLogicalPort took 4.151973ms, libovsdb time 2.54101ms
2025-12-13T00:20:09.521662265+00:00 stderr F I1213 00:20:09.521650 28750 obj_retry.go:541] Creating *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 took: 4.170325ms
2025-12-13T00:20:09.521662265+00:00 stderr F I1213 00:20:09.521658 28750 default_network_controller.go:699] Recording success event on pod openshift-multus/network-metrics-daemon-qdfr4
2025-12-13T00:20:09.521674866+00:00 stderr F I1213 00:20:09.521667 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-scheduler/installer-7-crc
2025-12-13T00:20:09.521698676+00:00 stderr F I1213 00:20:09.521676 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-scheduler/installer-7-crc
2025-12-13T00:20:09.521698676+00:00 stderr F I1213 00:20:09.521690 28750 obj_retry.go:459] Detected object openshift-kube-scheduler/installer-7-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it
2025-12-13T00:20:09.521698676+00:00 stderr F I1213 00:20:09.521663 28750 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh): added port &{name:openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh uuid:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749 logicalSwitch:crc ips:[0xc000caed50] mac:[10 88 10 217 0 14] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.14/23] and MAC: 0a:58:0a:d9:00:0e
2025-12-13T00:20:09.521709747+00:00 stderr F I1213 00:20:09.521702 28750 pods.go:220] [openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh] addLogicalPort took 3.923538ms, libovsdb time 2.586232ms
2025-12-13T00:20:09.521719957+00:00 stderr F I1213 00:20:09.521710 28750 obj_retry.go:541] Creating *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh took: 3.937868ms
2025-12-13T00:20:09.521719957+00:00 stderr F I1213 00:20:09.521716 28750 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh
2025-12-13T00:20:09.521732377+00:00 stderr F I1213 00:20:09.521725 28750 default_network_controller.go:655] Recording add event on pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9
2025-12-13T00:20:09.521742117+00:00 stderr F I1213 00:20:09.521732 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9
2025-12-13T00:20:09.521751778+00:00 stderr F I1213 00:20:09.521725 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} options:{GoMap:map[iface-id-ver:d73c4e63-30ef-4915-925d-f44201c612ec requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4981169d-8e04-4bc4-9fc4-c3fa53ed5de1}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.521765168+00:00 stderr F I1213 00:20:09.521751 28750 port_cache.go:122] port-cache(openshift-kube-scheduler_installer-7-crc): port not found in cache or already marked for removal
2025-12-13T00:20:09.521765168+00:00 stderr F I1213 00:20:09.521759 28750 pods.go:151] Deleting pod: openshift-kube-scheduler/installer-7-crc
2025-12-13T00:20:09.521777698+00:00 stderr F I1213 00:20:09.521763 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:4981169d-8e04-4bc4-9fc4-c3fa53ed5de1}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.521791049+00:00 stderr F I1213 00:20:09.521769 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1bf49fc6-86bc-42b9-9630-d7184a92b0eb}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.521803359+00:00 stderr F I1213 00:20:09.521779 28750 port_cache.go:96] port-cache(openshift-controller-manager_controller-manager-778975cc4f-x5vcf): added port &{name:openshift-controller-manager_controller-manager-778975cc4f-x5vcf uuid:eda38bc9-7da5-4a6b-818c-4e1e8f85426d logicalSwitch:crc ips:[0xc000cafcb0] mac:[10 88 10 217 0 87] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.87/23] and MAC: 0a:58:0a:d9:00:57
2025-12-13T00:20:09.521843230+00:00 stderr F I1213 00:20:09.521811 28750 pods.go:220] [openshift-controller-manager/controller-manager-778975cc4f-x5vcf] addLogicalPort took 3.420004ms, libovsdb time 2.703304ms
2025-12-13T00:20:09.521856291+00:00 stderr F I1213 00:20:09.521846 28750 obj_retry.go:541] Creating *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf took: 3.458805ms
2025-12-13T00:20:09.521865921+00:00 stderr F I1213 00:20:09.521742 28750 ovn.go:134] Ensuring zone local for Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 in node crc
2025-12-13T00:20:09.521875761+00:00 stderr F I1213 00:20:09.521862 28750 default_network_controller.go:699] Recording success event on pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf
2025-12-13T00:20:09.521875761+00:00 stderr F I1213 00:20:09.521867 28750 obj_retry.go:541] Creating *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 took: 126.504µs
2025-12-13T00:20:09.521886041+00:00 stderr F I1213 00:20:09.521874 28750 default_network_controller.go:699] Recording success event on pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9
2025-12-13T00:20:09.521886041+00:00 stderr F I1213 00:20:09.521881 28750 default_network_controller.go:655] Recording add event on pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp
2025-12-13T00:20:09.521908702+00:00 stderr F I1213 00:20:09.521882 28750 default_network_controller.go:655] Recording add event on pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc
2025-12-13T00:20:09.521908702+00:00 stderr F I1213 00:20:09.521889 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp
2025-12-13T00:20:09.521908702+00:00 stderr F I1213 00:20:09.521898 28750 ovn.go:134] Ensuring zone local for Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp in node crc
2025-12-13T00:20:09.521908702+00:00 stderr F I1213 00:20:09.521902 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc
2025-12-13T00:20:09.521923912+00:00 stderr F I1213 00:20:09.521909 28750 base_network_controller_pods.go:476] [default/openshift-apiserver/apiserver-7fc54b8dd7-d2bhp] creating logical port openshift-apiserver_apiserver-7fc54b8dd7-d2bhp for pod on switch crc
2025-12-13T00:20:09.521976835+00:00 stderr F I1213 00:20:09.521922 28750 ovn.go:134] Ensuring zone local for Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc in node crc
2025-12-13T00:20:09.522036156+00:00 stderr F I1213 00:20:09.521922 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:4981169d-8e04-4bc4-9fc4-c3fa53ed5de1}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.522093138+00:00 stderr F I1213 00:20:09.522044 28750 base_network_controller_pods.go:476] [default/openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc] creating logical port openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc for pod on switch crc
2025-12-13T00:20:09.522133479+00:00 stderr F I1213 00:20:09.521844 28750 port_cache.go:96] port-cache(openshift-marketplace_marketplace-operator-8b455464d-kghgr): added port &{name:openshift-marketplace_marketplace-operator-8b455464d-kghgr uuid:116f6820-9a92-4bbc-bb34-265c58c5b649 logicalSwitch:crc ips:[0xc0016bb710] mac:[10 88 10 217 0 30] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.30/23] and MAC: 0a:58:0a:d9:00:1e
2025-12-13T00:20:09.522190901+00:00 stderr F I1213 00:20:09.522131 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} options:{GoMap:map[iface-id-ver:41e8708a-e40d-4d28-846b-c52eda4d1755 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {005abe2f-f66d-42f8-945c-fbc80f820ed4}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.522190901+00:00 stderr F W1213 00:20:09.522152 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-scheduler/installer-7-crc. Using logical switch crc port uuid and addrs [10.217.0.67/23]
2025-12-13T00:20:09.522203681+00:00 stderr F I1213 00:20:09.522193 28750 address_set.go:613] (87985ed0-37ee-4fd2-af4b-1c4bf264af83/default-network-controller:Namespace:openshift-kube-scheduler:v4/a15634036902741400949) deleting addresses [10.217.0.67] from address set
2025-12-13T00:20:09.522214391+00:00 stderr F I1213 00:20:09.522192 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.522249132+00:00 stderr F I1213 00:20:09.522215 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.67]}}] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.522265893+00:00 stderr F I1213 00:20:09.522251 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.67]}}] Timeout: Where:[where column _uuid == {87985ed0-37ee-4fd2-af4b-1c4bf264af83}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.522327414+00:00 stderr F I1213 00:20:09.522243 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {336e83b0-c9fa-498c-aea8-c8cc7385ea1e}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.522348625+00:00 stderr F I1213 00:20:09.522328 28750 pods.go:185] Attempting to release IPs for pod: openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5, ips: 10.217.0.17
2025-12-13T00:20:09.522372176+00:00 stderr F I1213 00:20:09.522350 28750 default_network_controller.go:655] Recording add event on pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf
2025-12-13T00:20:09.522372176+00:00 stderr F I1213 00:20:09.522357 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf
2025-12-13T00:20:09.522372176+00:00 stderr F I1213 00:20:09.522368 28750 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf in node crc
2025-12-13T00:20:09.522404736+00:00 stderr F I1213 00:20:09.522381 28750 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf] creating logical port openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf for pod on switch crc
2025-12-13T00:20:09.522441347+00:00 stderr F I1213 00:20:09.522401 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {31d380f3-a977-4726-8270-75a72c5efb5e}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.522441347+00:00 stderr F I1213 00:20:09.522406 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {00bb4740-117c-432b-86ba-a37c8a0d8dd2}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.522479449+00:00 stderr F I1213 00:20:09.522448 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:31d380f3-a977-4726-8270-75a72c5efb5e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.522479449+00:00 stderr F I1213 00:20:09.522456 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:00bb4740-117c-432b-86ba-a37c8a0d8dd2}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.522505489+00:00 stderr F I1213 00:20:09.522445 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} options:{GoMap:map[iface-id-ver:530553aa-0a1d-423e-8a22-f5eb4bdbb883 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d2f291e9-b4fe-47a7-a644-298254d226c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.522544370+00:00 stderr F I1213 00:20:09.522472 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} options:{GoMap:map[iface-id-ver:d73c4e63-30ef-4915-925d-f44201c612ec requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4981169d-8e04-4bc4-9fc4-c3fa53ed5de1}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:4981169d-8e04-4bc4-9fc4-c3fa53ed5de1}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:4981169d-8e04-4bc4-9fc4-c3fa53ed5de1}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {31d380f3-a977-4726-8270-75a72c5efb5e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:31d380f3-a977-4726-8270-75a72c5efb5e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.522544370+00:00 stderr F I1213 00:20:09.522516 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} options:{GoMap:map[iface-id-ver:8a5ae51d-d173-4531-8975-f164c975ce1f
requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.522544370+00:00 stderr F I1213 00:20:09.522478 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} options:{GoMap:map[iface-id-ver:ed024e5d-8fc2-4c22-803d-73f3c9795f19 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5ecfd58-e886-4b2c-9939-022e7f14b7a7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {cdc4ecc4-c623-407e-ad92-33cb6c2b7b75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {00bb4740-117c-432b-86ba-a37c8a0d8dd2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:00bb4740-117c-432b-86ba-a37c8a0d8dd2}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.522563761+00:00 stderr F 
I1213 00:20:09.522548 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.522563761+00:00 stderr F I1213 00:20:09.522555 28750 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/revision-pruner-10-crc, ips: 10.217.0.68 2025-12-13T00:20:09.522577231+00:00 stderr F I1213 00:20:09.522542 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.522577231+00:00 stderr F I1213 00:20:09.522573 28750 default_network_controller.go:655] Recording add event on pod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-12-13T00:20:09.522603222+00:00 stderr F I1213 00:20:09.522582 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-12-13T00:20:09.522603222+00:00 stderr F I1213 00:20:09.522583 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.522603222+00:00 stderr F I1213 00:20:09.522591 28750 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-bzj2p in node crc 2025-12-13T00:20:09.522603222+00:00 stderr F I1213 
00:20:09.522599 28750 obj_retry.go:541] Creating *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p took: 8.691µs 2025-12-13T00:20:09.522615122+00:00 stderr F I1213 00:20:09.522604 28750 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-12-13T00:20:09.522615122+00:00 stderr F I1213 00:20:09.522611 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/revision-pruner-11-crc 2025-12-13T00:20:09.522625242+00:00 stderr F I1213 00:20:09.522617 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/revision-pruner-11-crc 2025-12-13T00:20:09.522634853+00:00 stderr F I1213 00:20:09.522624 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-11-crc of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.522647403+00:00 stderr F I1213 00:20:09.522118 28750 pods.go:185] Attempting to release IPs for pod: openshift-kube-apiserver/installer-12-crc, ips: 10.217.0.86 2025-12-13T00:20:09.522659573+00:00 stderr F I1213 00:20:09.522651 28750 default_network_controller.go:655] Recording add event on pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-12-13T00:20:09.522669234+00:00 stderr F I1213 00:20:09.522659 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-12-13T00:20:09.522679064+00:00 stderr F I1213 00:20:09.522667 28750 ovn.go:134] Ensuring zone local for Pod openshift-ingress/router-default-5c9bf7bc58-6jctv in node crc 2025-12-13T00:20:09.522679064+00:00 stderr F I1213 00:20:09.522673 28750 obj_retry.go:541] Creating *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv took: 7.24µs 2025-12-13T00:20:09.522689154+00:00 stderr F I1213 00:20:09.522652 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {8f19c25c-23f2-4be6-ae5b-f3e31e0c5430}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.522735365+00:00 stderr F I1213 00:20:09.522690 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.82 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c43bc3ed-4021-47db-b48f-03cce83a4268}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.522774987+00:00 stderr F I1213 00:20:09.522462 28750 port_cache.go:96] port-cache(openshift-console_downloads-65476884b9-9wcvx): added port &{name:openshift-console_downloads-65476884b9-9wcvx uuid:745a40f7-2acc-4e2b-a087-861e0ea97ffe logicalSwitch:crc ips:[0xc0008afe00] mac:[10 88 10 217 0 66] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.66/23] and MAC: 0a:58:0a:d9:00:42 2025-12-13T00:20:09.522856869+00:00 stderr F I1213 00:20:09.522296 28750 pods.go:220] [openshift-marketplace/marketplace-operator-8b455464d-kghgr] addLogicalPort took 2.137429ms, libovsdb time 1.11904ms 2025-12-13T00:20:09.522856869+00:00 stderr F I1213 00:20:09.522849 28750 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-kghgr took: 2.91371ms 2025-12-13T00:20:09.522869379+00:00 stderr F I1213 00:20:09.522678 28750 default_network_controller.go:699] Recording success event on pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-12-13T00:20:09.522869379+00:00 stderr F I1213 00:20:09.522858 28750 default_network_controller.go:699] Recording success event on pod openshift-marketplace/marketplace-operator-8b455464d-kghgr 2025-12-13T00:20:09.522892160+00:00 stderr F I1213 00:20:09.522870 28750 default_network_controller.go:655] Recording add event on pod 
openshift-image-registry/image-pruner-29426400-tlv26 2025-12-13T00:20:09.522892160+00:00 stderr F I1213 00:20:09.522752 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c43bc3ed-4021-47db-b48f-03cce83a4268}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.522892160+00:00 stderr F I1213 00:20:09.522864 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.11 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.522892160+00:00 stderr F I1213 00:20:09.522878 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-image-registry/image-pruner-29426400-tlv26 2025-12-13T00:20:09.522892160+00:00 stderr F I1213 00:20:09.522884 28750 obj_retry.go:459] Detected object openshift-image-registry/image-pruner-29426400-tlv26 of type *v1.Pod in terminal state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.522904460+00:00 stderr F I1213 00:20:09.522892 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.522914340+00:00 stderr F I1213 00:20:09.522902 28750 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-ingress/router-default-5c9bf7bc58-6jctv. OVN-Kubernetes controller took 0.000206846 seconds. No OVN measurement. 
2025-12-13T00:20:09.522914340+00:00 stderr F I1213 00:20:09.522871 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-controller-manager/kube-controller-manager-crc 2025-12-13T00:20:09.522924631+00:00 stderr F I1213 00:20:09.522913 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc 2025-12-13T00:20:09.522924631+00:00 stderr F I1213 00:20:09.522920 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc 2025-12-13T00:20:09.522953381+00:00 stderr F I1213 00:20:09.522925 28750 obj_retry.go:541] Creating *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc took: 6.621µs 2025-12-13T00:20:09.522953381+00:00 stderr F I1213 00:20:09.522944 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc 2025-12-13T00:20:09.522968982+00:00 stderr F I1213 00:20:09.522881 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} options:{GoMap:map[iface-id-ver:41e8708a-e40d-4d28-846b-c52eda4d1755 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {005abe2f-f66d-42f8-945c-fbc80f820ed4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == 
{336e83b0-c9fa-498c-aea8-c8cc7385ea1e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.82 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c43bc3ed-4021-47db-b48f-03cce83a4268}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c43bc3ed-4021-47db-b48f-03cce83a4268}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.522968982+00:00 stderr F I1213 00:20:09.522907 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} options:{GoMap:map[iface-id-ver:8a5ae51d-d173-4531-8975-f164c975ce1f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.11 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.522968982+00:00 stderr F I1213 00:20:09.522920 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.35 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {910a58ce-fb7f-4a21-8db9-12b6edfc6eef}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.523028613+00:00 stderr F I1213 00:20:09.522993 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:910a58ce-fb7f-4a21-8db9-12b6edfc6eef}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.523028613+00:00 stderr F I1213 00:20:09.522832 28750 pods.go:220] [openshift-console/downloads-65476884b9-9wcvx] addLogicalPort took 3.936869ms, libovsdb time 2.009884ms 2025-12-13T00:20:09.523087235+00:00 stderr F I1213 00:20:09.523049 28750 obj_retry.go:541] Creating *v1.Pod openshift-console/downloads-65476884b9-9wcvx took: 4.138884ms 2025-12-13T00:20:09.523087235+00:00 stderr F I1213 00:20:09.523064 28750 default_network_controller.go:699] Recording success event on pod openshift-console/downloads-65476884b9-9wcvx 2025-12-13T00:20:09.523140607+00:00 stderr F I1213 00:20:09.523054 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} 
options:{GoMap:map[iface-id-ver:b9a848b0-1438-4ada-b7da-2fe53dbf235f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1bf49fc6-86bc-42b9-9630-d7184a92b0eb}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1bf49fc6-86bc-42b9-9630-d7184a92b0eb}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1bf49fc6-86bc-42b9-9630-d7184a92b0eb}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.35 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {910a58ce-fb7f-4a21-8db9-12b6edfc6eef}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:910a58ce-fb7f-4a21-8db9-12b6edfc6eef}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.523168427+00:00 stderr F I1213 00:20:09.523050 28750 port_cache.go:96] port-cache(openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb): added port 
&{name:openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb uuid:be2fa59f-4cec-4742-a4bd-dcd0913d1422 logicalSwitch:crc ips:[0xc00115c9c0] mac:[10 88 10 217 0 15] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.15/23] and MAC: 0a:58:0a:d9:00:0f 2025-12-13T00:20:09.523242309+00:00 stderr F I1213 00:20:09.523224 28750 pods.go:220] [openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb] addLogicalPort took 3.010383ms, libovsdb time 1.248395ms 2025-12-13T00:20:09.523337252+00:00 stderr F I1213 00:20:09.523309 28750 obj_retry.go:541] Creating *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb took: 3.062764ms 2025-12-13T00:20:09.523415704+00:00 stderr F I1213 00:20:09.523401 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-12-13T00:20:09.523467945+00:00 stderr F I1213 00:20:09.523443 28750 default_network_controller.go:655] Recording add event on pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-12-13T00:20:09.523533217+00:00 stderr F I1213 00:20:09.523456 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.23 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52d4cb14-db01-4356-992b-9aeb85743758}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.523596209+00:00 stderr F I1213 00:20:09.523509 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-12-13T00:20:09.523637240+00:00 stderr F I1213 00:20:09.523623 28750 ovn.go:134] Ensuring zone local for Pod 
openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z in node crc 2025-12-13T00:20:09.523724662+00:00 stderr F I1213 00:20:09.523708 28750 base_network_controller_pods.go:476] [default/openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z] creating logical port openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z for pod on switch crc 2025-12-13T00:20:09.523786544+00:00 stderr F I1213 00:20:09.523568 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:52d4cb14-db01-4356-992b-9aeb85743758}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.523921418+00:00 stderr F I1213 00:20:09.523820 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} options:{GoMap:map[iface-id-ver:530553aa-0a1d-423e-8a22-f5eb4bdbb883 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d2f291e9-b4fe-47a7-a644-298254d226c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {8f19c25c-23f2-4be6-ae5b-f3e31e0c5430}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update 
Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.23 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52d4cb14-db01-4356-992b-9aeb85743758}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:52d4cb14-db01-4356-992b-9aeb85743758}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.524160254+00:00 stderr F I1213 00:20:09.523477 28750 port_cache.go:122] port-cache(openshift-image-registry_image-pruner-29426400-tlv26): port not found in cache or already marked for removal 2025-12-13T00:20:09.524160254+00:00 stderr F I1213 00:20:09.524133 28750 pods.go:151] Deleting pod: openshift-image-registry/image-pruner-29426400-tlv26 2025-12-13T00:20:09.524201275+00:00 stderr F W1213 00:20:09.524175 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-image-registry/image-pruner-29426400-tlv26. 
Using logical switch crc port uuid and addrs [10.217.0.27/23] 2025-12-13T00:20:09.524282058+00:00 stderr F I1213 00:20:09.524245 28750 address_set.go:613] (3fbe9c32-c447-4e1c-9b34-6fac7dd25149/default-network-controller:Namespace:openshift-image-registry:v4/a65811733811199347) deleting addresses [10.217.0.27] from address set 2025-12-13T00:20:09.524300278+00:00 stderr F I1213 00:20:09.524275 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.27]}}] Timeout: Where:[where column _uuid == {3fbe9c32-c447-4e1c-9b34-6fac7dd25149}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.524361980+00:00 stderr F I1213 00:20:09.524311 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.27]}}] Timeout: Where:[where column _uuid == {3fbe9c32-c447-4e1c-9b34-6fac7dd25149}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.524414131+00:00 stderr F I1213 00:20:09.524124 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} options:{GoMap:map[iface-id-ver:0f394926-bdb9-425c-b36e-264d7fd34550 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5604df7-c1b9-4360-a570-e22fbf62c520}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.524526394+00:00 stderr F I1213 00:20:09.524486 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == 
{d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.524669038+00:00 stderr F I1213 00:20:09.524639 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {327158bc-926b-43b6-928d-23c33a7f6443}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.524817933+00:00 stderr F I1213 00:20:09.524388 28750 port_cache.go:122] port-cache(openshift-kube-controller-manager_revision-pruner-11-crc): port not found in cache or already marked for removal 2025-12-13T00:20:09.524862594+00:00 stderr F I1213 00:20:09.524847 28750 pods.go:151] Deleting pod: openshift-kube-controller-manager/revision-pruner-11-crc 2025-12-13T00:20:09.524990118+00:00 stderr F I1213 00:20:09.524910 28750 pods.go:185] Attempting to release IPs for pod: openshift-kube-scheduler/installer-7-crc, ips: 10.217.0.67 2025-12-13T00:20:09.524990118+00:00 stderr F I1213 00:20:09.524927 28750 default_network_controller.go:655] Recording add event on pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-12-13T00:20:09.524990118+00:00 stderr F I1213 00:20:09.524978 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-12-13T00:20:09.524990118+00:00 stderr F I1213 00:20:09.524985 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw in node crc 2025-12-13T00:20:09.525015038+00:00 stderr F I1213 00:20:09.524996 28750 base_network_controller_pods.go:476] [default/openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw] creating logical port openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw for pod on switch crc 
2025-12-13T00:20:09.525094461+00:00 stderr F W1213 00:20:09.525072 28750 base_network_controller_pods.go:221] No cached port info for deleting pod default/openshift-kube-controller-manager/revision-pruner-11-crc. Using logical switch crc port uuid and addrs [10.217.0.83/23] 2025-12-13T00:20:09.525153422+00:00 stderr F I1213 00:20:09.525111 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} options:{GoMap:map[iface-id-ver:45a8038e-e7f2-4d93-a6f5-7753aa54e63f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f8e99409-b28a-4d27-a8e5-267ea6a801cf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.525166903+00:00 stderr F I1213 00:20:09.525148 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.525215574+00:00 stderr F I1213 00:20:09.525184 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.525229564+00:00 stderr F I1213 00:20:09.525210 28750 pods.go:185] Attempting to release IPs for pod: openshift-image-registry/image-pruner-29426400-tlv26, ips: 10.217.0.27 2025-12-13T00:20:09.525241955+00:00 stderr F I1213 00:20:09.524851 28750 port_cache.go:96] 
port-cache(openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc): added port &{name:openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc uuid:d2f291e9-b4fe-47a7-a644-298254d226c5 logicalSwitch:crc ips:[0xc00140bc20] mac:[10 88 10 217 0 23] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.23/23] and MAC: 0a:58:0a:d9:00:17 2025-12-13T00:20:09.525241955+00:00 stderr F I1213 00:20:09.525229 28750 default_network_controller.go:655] Recording add event on pod openshift-ovn-kubernetes/ovnkube-node-brz8k 2025-12-13T00:20:09.525254845+00:00 stderr F I1213 00:20:09.525244 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-brz8k 2025-12-13T00:20:09.525267425+00:00 stderr F I1213 00:20:09.525256 28750 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-brz8k in node crc 2025-12-13T00:20:09.525267425+00:00 stderr F I1213 00:20:09.525252 28750 port_cache.go:96] port-cache(openshift-marketplace_redhat-operators-zg7cl): added port &{name:openshift-marketplace_redhat-operators-zg7cl uuid:1bf49fc6-86bc-42b9-9630-d7184a92b0eb logicalSwitch:crc ips:[0xc000d1d950] mac:[10 88 10 217 0 35] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.35/23] and MAC: 0a:58:0a:d9:00:23 2025-12-13T00:20:09.525288456+00:00 stderr F I1213 00:20:09.525267 28750 obj_retry.go:541] Creating *v1.Pod openshift-ovn-kubernetes/ovnkube-node-brz8k took: 10.74µs 2025-12-13T00:20:09.525288456+00:00 stderr F I1213 00:20:09.525277 28750 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-brz8k 2025-12-13T00:20:09.525288456+00:00 stderr F I1213 00:20:09.525275 28750 port_cache.go:96] port-cache(openshift-apiserver_apiserver-7fc54b8dd7-d2bhp): added port &{name:openshift-apiserver_apiserver-7fc54b8dd7-d2bhp uuid:005abe2f-f66d-42f8-945c-fbc80f820ed4 logicalSwitch:crc ips:[0xc000cf7800] mac:[10 88 10 217 0 82] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.82/23] and MAC: 
0a:58:0a:d9:00:52 2025-12-13T00:20:09.525302676+00:00 stderr F I1213 00:20:09.525285 28750 pods.go:220] [openshift-apiserver/apiserver-7fc54b8dd7-d2bhp] addLogicalPort took 3.381824ms, libovsdb time 1.686486ms 2025-12-13T00:20:09.525962184+00:00 stderr F I1213 00:20:09.525240 28750 pods.go:220] [openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc] addLogicalPort took 3.222358ms, libovsdb time 1.024859ms 2025-12-13T00:20:09.527355092+00:00 stderr F I1213 00:20:09.527330 28750 obj_retry.go:541] Creating *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc took: 5.413519ms 2025-12-13T00:20:09.527355092+00:00 stderr F I1213 00:20:09.527346 28750 default_network_controller.go:699] Recording success event on pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-12-13T00:20:09.527371093+00:00 stderr F I1213 00:20:09.527359 28750 default_network_controller.go:655] Recording add event on pod openshift-marketplace/certified-operators-lcrg8 2025-12-13T00:20:09.527378573+00:00 stderr F I1213 00:20:09.527371 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/certified-operators-lcrg8 2025-12-13T00:20:09.527416374+00:00 stderr F I1213 00:20:09.527383 28750 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/certified-operators-lcrg8 in node crc 2025-12-13T00:20:09.527456665+00:00 stderr F I1213 00:20:09.527421 28750 base_network_controller_pods.go:476] [default/openshift-marketplace/certified-operators-lcrg8] creating logical port openshift-marketplace_certified-operators-lcrg8 for pod on switch crc 2025-12-13T00:20:09.527524097+00:00 stderr F I1213 00:20:09.527506 28750 address_set.go:613] (7a4da73b-99a6-4d7f-9775-40343fe32ef9/default-network-controller:Namespace:openshift-kube-controller-manager:v4/a4663622633901538608) deleting addresses [10.217.0.83] from address set 2025-12-13T00:20:09.527585389+00:00 stderr F I1213 00:20:09.527290 28750 obj_retry.go:541] Creating *v1.Pod 
openshift-apiserver/apiserver-7fc54b8dd7-d2bhp took: 3.398394ms 2025-12-13T00:20:09.527594190+00:00 stderr F I1213 00:20:09.527543 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.9 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b428b80-dee0-4a48-a105-d91e72f01b56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.527594190+00:00 stderr F I1213 00:20:09.527580 28750 default_network_controller.go:699] Recording success event on pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-12-13T00:20:09.527603040+00:00 stderr F I1213 00:20:09.527580 28750 port_cache.go:96] port-cache(openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7): added port &{name:openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7 uuid:f5ecfd58-e886-4b2c-9939-022e7f14b7a7 logicalSwitch:crc ips:[0xc000ee17d0] mac:[10 88 10 217 0 7] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.7/23] and MAC: 0a:58:0a:d9:00:07 2025-12-13T00:20:09.527603040+00:00 stderr F I1213 00:20:09.527599 28750 default_network_controller.go:655] Recording add event on pod openshift-marketplace/redhat-marketplace-nv4pl 2025-12-13T00:20:09.527617350+00:00 stderr F I1213 00:20:09.527607 28750 pods.go:220] [openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7] addLogicalPort took 7.464196ms, libovsdb time 2.324755ms 2025-12-13T00:20:09.527634181+00:00 stderr F I1213 00:20:09.527608 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-marketplace/redhat-marketplace-nv4pl 2025-12-13T00:20:09.527641701+00:00 stderr F I1213 00:20:09.527625 28750 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/redhat-marketplace-nv4pl in node crc 2025-12-13T00:20:09.527641701+00:00 stderr F I1213 00:20:09.527619 28750 obj_retry.go:541] Creating *v1.Pod 
openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 took: 7.485026ms 2025-12-13T00:20:09.527641701+00:00 stderr F I1213 00:20:09.527634 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-12-13T00:20:09.527650311+00:00 stderr F I1213 00:20:09.527641 28750 base_network_controller_pods.go:476] [default/openshift-marketplace/redhat-marketplace-nv4pl] creating logical port openshift-marketplace_redhat-marketplace-nv4pl for pod on switch crc 2025-12-13T00:20:09.527650311+00:00 stderr F I1213 00:20:09.527646 28750 default_network_controller.go:655] Recording add event on pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-12-13T00:20:09.527681082+00:00 stderr F I1213 00:20:09.527656 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-12-13T00:20:09.527681082+00:00 stderr F I1213 00:20:09.525267 28750 pods.go:220] [openshift-marketplace/redhat-operators-zg7cl] addLogicalPort took 5.07983ms, libovsdb time 1.491551ms 2025-12-13T00:20:09.527681082+00:00 stderr F I1213 00:20:09.527673 28750 ovn.go:134] Ensuring zone local for Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 in node crc 2025-12-13T00:20:09.527690122+00:00 stderr F I1213 00:20:09.527660 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:8b428b80-dee0-4a48-a105-d91e72f01b56}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.527690122+00:00 stderr F I1213 00:20:09.527679 28750 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/redhat-operators-zg7cl took: 7.514708ms 2025-12-13T00:20:09.527697542+00:00 stderr F I1213 00:20:09.527691 
28750 default_network_controller.go:699] Recording success event on pod openshift-marketplace/redhat-operators-zg7cl 2025-12-13T00:20:09.527720593+00:00 stderr F I1213 00:20:09.527696 28750 base_network_controller_pods.go:476] [default/openshift-console-operator/console-conversion-webhook-595f9969b-l6z49] creating logical port openshift-console-operator_console-conversion-webhook-595f9969b-l6z49 for pod on switch crc 2025-12-13T00:20:09.527728423+00:00 stderr F I1213 00:20:09.527689 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.20 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7e2d0691-2edc-4313-a98c-25b101cdf576}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.527744494+00:00 stderr F I1213 00:20:09.527703 28750 default_network_controller.go:655] Recording add event on pod openshift-kube-apiserver/installer-13-crc 2025-12-13T00:20:09.527744494+00:00 stderr F I1213 00:20:09.527718 28750 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf): added port &{name:openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf uuid:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c logicalSwitch:crc ips:[0xc001508f30] mac:[10 88 10 217 0 11] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.11/23] and MAC: 0a:58:0a:d9:00:0b 2025-12-13T00:20:09.527744494+00:00 stderr F I1213 00:20:09.527739 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-kube-apiserver/installer-13-crc 2025-12-13T00:20:09.527755454+00:00 stderr F I1213 00:20:09.527749 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/installer-13-crc in node crc 2025-12-13T00:20:09.527755454+00:00 stderr F I1213 00:20:09.527747 28750 pods.go:220] [openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf] addLogicalPort took 5.363118ms, libovsdb time 
1.901613ms 2025-12-13T00:20:09.527764564+00:00 stderr F I1213 00:20:09.527758 28750 obj_retry.go:541] Creating *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf took: 5.393219ms 2025-12-13T00:20:09.527771725+00:00 stderr F I1213 00:20:09.527765 28750 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-12-13T00:20:09.527778955+00:00 stderr F I1213 00:20:09.527768 28750 base_network_controller_pods.go:476] [default/openshift-kube-apiserver/installer-13-crc] creating logical port openshift-kube-apiserver_installer-13-crc for pod on switch crc 2025-12-13T00:20:09.527802865+00:00 stderr F I1213 00:20:09.527777 28750 port_cache.go:96] port-cache(openshift-image-registry_image-registry-75b7bb6564-rnjvj): added port &{name:openshift-image-registry_image-registry-75b7bb6564-rnjvj uuid:4981169d-8e04-4bc4-9fc4-c3fa53ed5de1 logicalSwitch:crc ips:[0xc000cf68a0] mac:[10 88 10 217 0 41] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.41/23] and MAC: 0a:58:0a:d9:00:29 2025-12-13T00:20:09.527824396+00:00 stderr F I1213 00:20:09.527798 28750 pods.go:220] [openshift-image-registry/image-registry-75b7bb6564-rnjvj] addLogicalPort took 6.210282ms, libovsdb time 2.356955ms 2025-12-13T00:20:09.527846767+00:00 stderr F I1213 00:20:09.527829 28750 obj_retry.go:541] Creating *v1.Pod openshift-image-registry/image-registry-75b7bb6564-rnjvj took: 6.248153ms 2025-12-13T00:20:09.527846767+00:00 stderr F I1213 00:20:09.527815 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.83]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.527846767+00:00 stderr F I1213 00:20:09.527841 28750 default_network_controller.go:699] Recording success event on pod 
openshift-image-registry/image-registry-75b7bb6564-rnjvj 2025-12-13T00:20:09.527855297+00:00 stderr F I1213 00:20:09.527850 28750 default_network_controller.go:655] Recording add event on pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-12-13T00:20:09.527880437+00:00 stderr F I1213 00:20:09.527863 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-12-13T00:20:09.527888278+00:00 stderr F I1213 00:20:09.527878 28750 ovn.go:134] Ensuring zone local for Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd in node crc 2025-12-13T00:20:09.527888278+00:00 stderr F I1213 00:20:09.527855 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7e2d0691-2edc-4313-a98c-25b101cdf576}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.527914888+00:00 stderr F I1213 00:20:09.527876 28750 transact.go:42] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:delete Value:{GoSet:[10.217.0.83]}}] Timeout: Where:[where column _uuid == {7a4da73b-99a6-4d7f-9775-40343fe32ef9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.527914888+00:00 stderr F I1213 00:20:09.527702 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} options:{GoMap:map[iface-id-ver:0f394926-bdb9-425c-b36e-264d7fd34550 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5604df7-c1b9-4360-a570-e22fbf62c520}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {327158bc-926b-43b6-928d-23c33a7f6443}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.9 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b428b80-dee0-4a48-a105-d91e72f01b56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:8b428b80-dee0-4a48-a105-d91e72f01b56}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528057272+00:00 stderr F I1213 00:20:09.527998 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} options:{GoMap:map[iface-id-ver:7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {145016fd-3cef-4f28-abce-7d4c2c73477c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528057272+00:00 stderr F I1213 00:20:09.527896 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} options:{GoMap:map[iface-id-ver:45a8038e-e7f2-4d93-a6f5-7753aa54e63f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:14 
10.217.0.20]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f8e99409-b28a-4d27-a8e5-267ea6a801cf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.20 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7e2d0691-2edc-4313-a98c-25b101cdf576}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7e2d0691-2edc-4313-a98c-25b101cdf576}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528119294+00:00 stderr F I1213 00:20:09.527901 28750 base_network_controller_pods.go:476] [default/openshift-console-operator/console-operator-5dbbc74dc9-cp5cd] creating logical port openshift-console-operator_console-operator-5dbbc74dc9-cp5cd for pod on switch crc 2025-12-13T00:20:09.528119294+00:00 stderr F I1213 00:20:09.528062 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} options:{GoMap:map[iface-id-ver:7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:21 
10.217.0.33]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8393529-1df9-45df-a9d2-f761b6ef68c3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528224277+00:00 stderr F I1213 00:20:09.528164 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8393529-1df9-45df-a9d2-f761b6ef68c3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528296939+00:00 stderr F I1213 00:20:09.528216 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} options:{GoMap:map[iface-id-ver:59748b9b-c309-4712-aa85-bb38d71c4915 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6056bee0-572a-4de7-bb24-40ca6a66be30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528296939+00:00 stderr F I1213 00:20:09.528253 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8393529-1df9-45df-a9d2-f761b6ef68c3}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528296939+00:00 stderr F I1213 00:20:09.528079 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:145016fd-3cef-4f28-abce-7d4c2c73477c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: 
UUID: UUIDName:}] 2025-12-13T00:20:09.528379851+00:00 stderr F I1213 00:20:09.528329 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528400032+00:00 stderr F I1213 00:20:09.528362 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:145016fd-3cef-4f28-abce-7d4c2c73477c}]}}] Timeout: Where:[where column _uuid == {ca455606-530d-4273-b8ee-8ed4760b1f66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528400032+00:00 stderr F I1213 00:20:09.528253 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} options:{GoMap:map[iface-id-ver:85ca9974-9a5f-4cbf-a126-71bf61c49940 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d33f70f7-551b-4345-ba06-af50c250b376}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528411602+00:00 stderr F I1213 00:20:09.528371 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} options:{GoMap:map[iface-id-ver:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6af06372-81fc-4451-8678-6253ce70f317}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.528468234+00:00 stderr F I1213 00:20:09.528435 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528476414+00:00 stderr F I1213 00:20:09.528448 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528476414+00:00 stderr F I1213 00:20:09.528449 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d33f70f7-551b-4345-ba06-af50c250b376}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528564786+00:00 stderr F I1213 00:20:09.528525 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d33f70f7-551b-4345-ba06-af50c250b376}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528680199+00:00 stderr F I1213 00:20:09.528527 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == 
{d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528900565+00:00 stderr F I1213 00:20:09.528859 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.42 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b935134-b25d-48e2-b3d5-059fcc367ac0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.528986828+00:00 stderr F I1213 00:20:09.528922 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1b935134-b25d-48e2-b3d5-059fcc367ac0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.529112541+00:00 stderr F I1213 00:20:09.529018 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} options:{GoMap:map[iface-id-ver:7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {145016fd-3cef-4f28-abce-7d4c2c73477c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:145016fd-3cef-4f28-abce-7d4c2c73477c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:145016fd-3cef-4f28-abce-7d4c2c73477c}]}}] Timeout: Where:[where column _uuid == {ca455606-530d-4273-b8ee-8ed4760b1f66}] Until: Durable: 
Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.42 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b935134-b25d-48e2-b3d5-059fcc367ac0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1b935134-b25d-48e2-b3d5-059fcc367ac0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.529148512+00:00 stderr F I1213 00:20:09.529102 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.61 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {95146e00-82d8-4bf3-8c5b-dac75d43239c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.529215194+00:00 stderr F I1213 00:20:09.529180 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:95146e00-82d8-4bf3-8c5b-dac75d43239c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.529276456+00:00 stderr F I1213 00:20:09.529214 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.36 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{0c6b16ad-44ad-44c4-bdda-872783f6ab00}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.529316527+00:00 stderr F I1213 00:20:09.529213 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} options:{GoMap:map[iface-id-ver:59748b9b-c309-4712-aa85-bb38d71c4915 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6056bee0-572a-4de7-bb24-40ca6a66be30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.61 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {95146e00-82d8-4bf3-8c5b-dac75d43239c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:95146e00-82d8-4bf3-8c5b-dac75d43239c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.529348898+00:00 stderr F I1213 00:20:09.529310 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat 
Mutator:insert Value:{GoSet:[{GoUUID:0c6b16ad-44ad-44c4-bdda-872783f6ab00}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.529456840+00:00 stderr F I1213 00:20:09.529345 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} options:{GoMap:map[iface-id-ver:85ca9974-9a5f-4cbf-a126-71bf61c49940 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d33f70f7-551b-4345-ba06-af50c250b376}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d33f70f7-551b-4345-ba06-af50c250b376}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d33f70f7-551b-4345-ba06-af50c250b376}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.36 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0c6b16ad-44ad-44c4-bdda-872783f6ab00}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0c6b16ad-44ad-44c4-bdda-872783f6ab00}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.529456840+00:00 stderr F I1213 00:20:09.529434 28750 model_client.go:381] Update 
operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.62 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {513dccfe-dd54-4217-b9ce-2d865937835c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.529527672+00:00 stderr F I1213 00:20:09.529494 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:513dccfe-dd54-4217-b9ce-2d865937835c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.529664226+00:00 stderr F I1213 00:20:09.529607 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.33 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {87fe1602-51a3-4d2b-be29-ca6a53c0b451}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.529664226+00:00 stderr F I1213 00:20:09.529523 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} options:{GoMap:map[iface-id-ver:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6af06372-81fc-4451-8678-6253ce70f317}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group 
Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.62 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {513dccfe-dd54-4217-b9ce-2d865937835c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:513dccfe-dd54-4217-b9ce-2d865937835c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.529722938+00:00 stderr F I1213 00:20:09.529686 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:87fe1602-51a3-4d2b-be29-ca6a53c0b451}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.530128849+00:00 stderr F I1213 00:20:09.529720 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} options:{GoMap:map[iface-id-ver:7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8393529-1df9-45df-a9d2-f761b6ef68c3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8393529-1df9-45df-a9d2-f761b6ef68c3}]}}] Timeout: Where:[where column _uuid == 
{d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8393529-1df9-45df-a9d2-f761b6ef68c3}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.33 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {87fe1602-51a3-4d2b-be29-ca6a53c0b451}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:87fe1602-51a3-4d2b-be29-ca6a53c0b451}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.530563001+00:00 stderr F I1213 00:20:09.530385 28750 port_cache.go:96] port-cache(openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw): added port &{name:openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw uuid:f8e99409-b28a-4d27-a8e5-267ea6a801cf logicalSwitch:crc ips:[0xc0013e17d0] mac:[10 88 10 217 0 20] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.20/23] and MAC: 0a:58:0a:d9:00:14 2025-12-13T00:20:09.530563001+00:00 stderr F I1213 00:20:09.530555 28750 pods.go:220] [openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw] addLogicalPort took 5.562663ms, libovsdb time 2.480468ms 2025-12-13T00:20:09.530574662+00:00 stderr F I1213 00:20:09.530565 28750 obj_retry.go:541] Creating *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw took: 5.578723ms 2025-12-13T00:20:09.530582022+00:00 stderr F I1213 00:20:09.530573 28750 default_network_controller.go:699] Recording success event on pod 
openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-12-13T00:20:09.530629623+00:00 stderr F I1213 00:20:09.530589 28750 port_cache.go:96] port-cache(openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z): added port &{name:openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z uuid:f5604df7-c1b9-4360-a570-e22fbf62c520 logicalSwitch:crc ips:[0xc0012414d0] mac:[10 88 10 217 0 9] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.9/23] and MAC: 0a:58:0a:d9:00:09 2025-12-13T00:20:09.530629623+00:00 stderr F I1213 00:20:09.530620 28750 pods.go:220] [openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z] addLogicalPort took 6.941182ms, libovsdb time 2.718665ms 2025-12-13T00:20:09.530638214+00:00 stderr F I1213 00:20:09.530627 28750 obj_retry.go:541] Creating *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z took: 7.008204ms 2025-12-13T00:20:09.530638214+00:00 stderr F I1213 00:20:09.530633 28750 default_network_controller.go:699] Recording success event on pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-12-13T00:20:09.530673945+00:00 stderr F I1213 00:20:09.530635 28750 port_cache.go:96] port-cache(openshift-console-operator_console-conversion-webhook-595f9969b-l6z49): added port &{name:openshift-console-operator_console-conversion-webhook-595f9969b-l6z49 uuid:6056bee0-572a-4de7-bb24-40ca6a66be30 logicalSwitch:crc ips:[0xc000d6bd40] mac:[10 88 10 217 0 61] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.61/23] and MAC: 0a:58:0a:d9:00:3d 2025-12-13T00:20:09.530673945+00:00 stderr F I1213 00:20:09.530668 28750 pods.go:220] [openshift-console-operator/console-conversion-webhook-595f9969b-l6z49] addLogicalPort took 2.985092ms, libovsdb time 1.408029ms 2025-12-13T00:20:09.530697715+00:00 stderr F I1213 00:20:09.530679 28750 
obj_retry.go:541] Creating *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 took: 3.006883ms 2025-12-13T00:20:09.530697715+00:00 stderr F I1213 00:20:09.530691 28750 default_network_controller.go:699] Recording success event on pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-12-13T00:20:09.530754697+00:00 stderr F I1213 00:20:09.530727 28750 port_cache.go:96] port-cache(openshift-kube-apiserver_installer-13-crc): added port &{name:openshift-kube-apiserver_installer-13-crc uuid:145016fd-3cef-4f28-abce-7d4c2c73477c logicalSwitch:crc ips:[0xc00133f830] mac:[10 88 10 217 0 42] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.42/23] and MAC: 0a:58:0a:d9:00:2a 2025-12-13T00:20:09.530754697+00:00 stderr F I1213 00:20:09.530741 28750 pods.go:220] [openshift-kube-apiserver/installer-13-crc] addLogicalPort took 2.986492ms, libovsdb time 1.703898ms 2025-12-13T00:20:09.530754697+00:00 stderr F I1213 00:20:09.530748 28750 obj_retry.go:541] Creating *v1.Pod openshift-kube-apiserver/installer-13-crc took: 3.001262ms 2025-12-13T00:20:09.530768517+00:00 stderr F I1213 00:20:09.530759 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver/installer-13-crc 2025-12-13T00:20:09.530827049+00:00 stderr F I1213 00:20:09.530804 28750 port_cache.go:96] port-cache(openshift-marketplace_redhat-marketplace-nv4pl): added port &{name:openshift-marketplace_redhat-marketplace-nv4pl uuid:d33f70f7-551b-4345-ba06-af50c250b376 logicalSwitch:crc ips:[0xc00133f440] mac:[10 88 10 217 0 36] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.36/23] and MAC: 0a:58:0a:d9:00:24 2025-12-13T00:20:09.530827049+00:00 stderr F I1213 00:20:09.530819 28750 pods.go:220] [openshift-marketplace/redhat-marketplace-nv4pl] addLogicalPort took 3.188067ms, libovsdb time 1.453151ms 2025-12-13T00:20:09.530836149+00:00 stderr F I1213 00:20:09.530828 28750 obj_retry.go:541] Creating *v1.Pod 
openshift-marketplace/redhat-marketplace-nv4pl took: 3.204069ms 2025-12-13T00:20:09.530836149+00:00 stderr F I1213 00:20:09.530833 28750 default_network_controller.go:699] Recording success event on pod openshift-marketplace/redhat-marketplace-nv4pl 2025-12-13T00:20:09.530920571+00:00 stderr F I1213 00:20:09.530887 28750 pods.go:185] Attempting to release IPs for pod: openshift-kube-controller-manager/revision-pruner-11-crc, ips: 10.217.0.83 2025-12-13T00:20:09.530920571+00:00 stderr F I1213 00:20:09.530892 28750 port_cache.go:96] port-cache(openshift-console-operator_console-operator-5dbbc74dc9-cp5cd): added port &{name:openshift-console-operator_console-operator-5dbbc74dc9-cp5cd uuid:6af06372-81fc-4451-8678-6253ce70f317 logicalSwitch:crc ips:[0xc001565320] mac:[10 88 10 217 0 62] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.62/23] and MAC: 0a:58:0a:d9:00:3e 2025-12-13T00:20:09.530920571+00:00 stderr F I1213 00:20:09.530902 28750 pods.go:220] [openshift-console-operator/console-operator-5dbbc74dc9-cp5cd] addLogicalPort took 3.016963ms, libovsdb time 1.360458ms 2025-12-13T00:20:09.530920571+00:00 stderr F I1213 00:20:09.530908 28750 obj_retry.go:541] Creating *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd took: 3.032874ms 2025-12-13T00:20:09.530920571+00:00 stderr F I1213 00:20:09.530913 28750 default_network_controller.go:699] Recording success event on pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-12-13T00:20:09.531008144+00:00 stderr F I1213 00:20:09.530985 28750 port_cache.go:96] port-cache(openshift-marketplace_certified-operators-lcrg8): added port &{name:openshift-marketplace_certified-operators-lcrg8 uuid:d8393529-1df9-45df-a9d2-f761b6ef68c3 logicalSwitch:crc ips:[0xc00133ef30] mac:[10 88 10 217 0 33] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.33/23] and MAC: 0a:58:0a:d9:00:21 2025-12-13T00:20:09.531008144+00:00 stderr F I1213 00:20:09.530998 28750 pods.go:220] 
[openshift-marketplace/certified-operators-lcrg8] addLogicalPort took 3.589639ms, libovsdb time 1.258595ms 2025-12-13T00:20:09.531027514+00:00 stderr F I1213 00:20:09.531005 28750 obj_retry.go:541] Creating *v1.Pod openshift-marketplace/certified-operators-lcrg8 took: 3.6255ms 2025-12-13T00:20:09.531027514+00:00 stderr F I1213 00:20:09.531012 28750 default_network_controller.go:699] Recording success event on pod openshift-marketplace/certified-operators-lcrg8 2025-12-13T00:20:09.531027514+00:00 stderr F I1213 00:20:09.531024 28750 default_network_controller.go:655] Recording add event on pod openshift-machine-config-operator/machine-config-server-v65wr 2025-12-13T00:20:09.531039964+00:00 stderr F I1213 00:20:09.531034 28750 obj_retry.go:502] Add event received for *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr 2025-12-13T00:20:09.531064435+00:00 stderr F I1213 00:20:09.531047 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-server-v65wr in node crc 2025-12-13T00:20:09.531623450+00:00 stderr F I1213 00:20:09.531056 28750 obj_retry.go:541] Creating *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr took: 16.051µs 2025-12-13T00:20:09.531623450+00:00 stderr F I1213 00:20:09.531617 28750 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-server-v65wr 2025-12-13T00:20:09.531646771+00:00 stderr F I1213 00:20:09.531639 28750 factory.go:988] Added *v1.Pod event handler 3 2025-12-13T00:20:09.531751664+00:00 stderr F I1213 00:20:09.531726 28750 obj_retry.go:427] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod 2025-12-13T00:20:09.532176995+00:00 stderr F I1213 00:20:09.532147 28750 admin_network_policy_controller.go:124] Setting up event handlers for Admin Network Policy 2025-12-13T00:20:09.532242907+00:00 stderr F I1213 00:20:09.532226 28750 
admin_network_policy_controller.go:142] Setting up event handlers for Baseline Admin Network Policy 2025-12-13T00:20:09.532307689+00:00 stderr F I1213 00:20:09.532290 28750 admin_network_policy_controller.go:159] Setting up event handlers for Namespaces in Admin Network Policy controller 2025-12-13T00:20:09.532396341+00:00 stderr F I1213 00:20:09.532379 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-authentication-operator 2025-12-13T00:20:09.532396341+00:00 stderr F I1213 00:20:09.532392 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cluster-machine-approver 2025-12-13T00:20:09.532405122+00:00 stderr F I1213 00:20:09.532398 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-storage-version-migrator 2025-12-13T00:20:09.532405122+00:00 stderr F I1213 00:20:09.532402 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-monitoring 2025-12-13T00:20:09.532412482+00:00 stderr F I1213 00:20:09.532407 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-nutanix-infra 2025-12-13T00:20:09.532419622+00:00 stderr F I1213 00:20:09.532382 28750 admin_network_policy_controller.go:175] Setting up event handlers for Pods in Admin Network Policy controller 2025-12-13T00:20:09.532477934+00:00 stderr F I1213 00:20:09.532411 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ingress-canary 2025-12-13T00:20:09.532503634+00:00 stderr F I1213 00:20:09.532485 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-multus 2025-12-13T00:20:09.532503634+00:00 stderr F I1213 00:20:09.532499 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller 
openshift-authentication 2025-12-13T00:20:09.532511685+00:00 stderr F I1213 00:20:09.532504 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-config-operator 2025-12-13T00:20:09.532534115+00:00 stderr F I1213 00:20:09.532517 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-image-registry 2025-12-13T00:20:09.532534115+00:00 stderr F I1213 00:20:09.532527 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-service-ca-operator 2025-12-13T00:20:09.532534115+00:00 stderr F I1213 00:20:09.532528 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-console/console-644bb77b49-5x5xk 2025-12-13T00:20:09.532547706+00:00 stderr F I1213 00:20:09.532535 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller default 2025-12-13T00:20:09.532547706+00:00 stderr F I1213 00:20:09.532538 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-dns/node-resolver-dn27q 2025-12-13T00:20:09.532547706+00:00 stderr F I1213 00:20:09.532541 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift 2025-12-13T00:20:09.532547706+00:00 stderr F I1213 00:20:09.532544 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-12-13T00:20:09.532556036+00:00 stderr F I1213 00:20:09.532549 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-12-13T00:20:09.532556036+00:00 stderr F I1213 00:20:09.532547 28750 admin_network_policy_controller.go:191] Setting up event handlers for Nodes in Admin Network Policy controller 
2025-12-13T00:20:09.532563696+00:00 stderr F I1213 00:20:09.532555 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-12-13T00:20:09.532563696+00:00 stderr F I1213 00:20:09.532561 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-12-13T00:20:09.532571076+00:00 stderr F I1213 00:20:09.532566 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-12-13T00:20:09.532578116+00:00 stderr F I1213 00:20:09.532571 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/installer-10-crc 2025-12-13T00:20:09.532585137+00:00 stderr F I1213 00:20:09.532561 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cloud-network-config-controller 2025-12-13T00:20:09.532593797+00:00 stderr F I1213 00:20:09.532588 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-controller-manager 2025-12-13T00:20:09.532600857+00:00 stderr F I1213 00:20:09.532594 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-host-network 2025-12-13T00:20:09.532600857+00:00 stderr F I1213 00:20:09.532598 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ingress 2025-12-13T00:20:09.532615367+00:00 stderr F I1213 00:20:09.532604 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-console 2025-12-13T00:20:09.532615367+00:00 stderr F I1213 00:20:09.532609 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller 
openshift-kube-apiserver 2025-12-13T00:20:09.532623158+00:00 stderr F I1213 00:20:09.532614 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-controller-manager 2025-12-13T00:20:09.532623158+00:00 stderr F I1213 00:20:09.532578 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-12-13T00:20:09.532623158+00:00 stderr F I1213 00:20:09.532620 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-network-diagnostics 2025-12-13T00:20:09.532632708+00:00 stderr F I1213 00:20:09.532627 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-service-ca 2025-12-13T00:20:09.532642158+00:00 stderr F I1213 00:20:09.532633 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller kube-node-lease 2025-12-13T00:20:09.532642158+00:00 stderr F I1213 00:20:09.532638 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller kube-system 2025-12-13T00:20:09.532649368+00:00 stderr F I1213 00:20:09.532642 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-apiserver 2025-12-13T00:20:09.532658429+00:00 stderr F I1213 00:20:09.532653 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cluster-storage-operator 2025-12-13T00:20:09.532665679+00:00 stderr F I1213 00:20:09.532648 28750 admin_network_policy_controller.go:549] Adding Node in Admin Network Policy controller crc 2025-12-13T00:20:09.532665679+00:00 stderr F I1213 00:20:09.532662 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-controller-manager-operator 2025-12-13T00:20:09.532673649+00:00 stderr F I1213 
00:20:09.532667 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-network-node-identity 2025-12-13T00:20:09.532673649+00:00 stderr F I1213 00:20:09.532627 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-12-13T00:20:09.532712960+00:00 stderr F I1213 00:20:09.532671 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-openstack-infra 2025-12-13T00:20:09.532712960+00:00 stderr F I1213 00:20:09.532694 28750 admin_network_policy_controller.go:218] Starting controller default-network-controller 2025-12-13T00:20:09.532712960+00:00 stderr F I1213 00:20:09.532705 28750 admin_network_policy_controller.go:221] Waiting for informer caches to sync 2025-12-13T00:20:09.532712960+00:00 stderr F I1213 00:20:09.532708 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-vsphere-infra 2025-12-13T00:20:09.532722120+00:00 stderr F I1213 00:20:09.532714 28750 shared_informer.go:311] Waiting for caches to sync for default-network-controller 2025-12-13T00:20:09.532722120+00:00 stderr F I1213 00:20:09.532717 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cloud-platform-infra 2025-12-13T00:20:09.532730901+00:00 stderr F I1213 00:20:09.532721 28750 shared_informer.go:318] Caches are synced for default-network-controller 2025-12-13T00:20:09.532730901+00:00 stderr F I1213 00:20:09.532723 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-config-managed 2025-12-13T00:20:09.532730901+00:00 stderr F I1213 00:20:09.532725 28750 admin_network_policy_controller.go:228] Repairing Admin Network Policies 2025-12-13T00:20:09.532741231+00:00 stderr F I1213 00:20:09.532730 28750 
admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-controller-manager-operator 2025-12-13T00:20:09.532741231+00:00 stderr F I1213 00:20:09.532737 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ovn-kubernetes 2025-12-13T00:20:09.532750471+00:00 stderr F I1213 00:20:09.532744 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-infra 2025-12-13T00:20:09.532759211+00:00 stderr F I1213 00:20:09.532750 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-scheduler 2025-12-13T00:20:09.532759211+00:00 stderr F I1213 00:20:09.532756 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-machine-api 2025-12-13T00:20:09.532768262+00:00 stderr F I1213 00:20:09.532761 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-oauth-apiserver 2025-12-13T00:20:09.532780302+00:00 stderr F I1213 00:20:09.532767 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller hostpath-provisioner 2025-12-13T00:20:09.532780302+00:00 stderr F I1213 00:20:09.532772 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cluster-samples-operator 2025-12-13T00:20:09.532789142+00:00 stderr F I1213 00:20:09.532778 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-config 2025-12-13T00:20:09.532789142+00:00 stderr F I1213 00:20:09.532784 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-dns-operator 2025-12-13T00:20:09.532797992+00:00 stderr F I1213 00:20:09.532789 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller 
kube-public 2025-12-13T00:20:09.532797992+00:00 stderr F I1213 00:20:09.532794 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-apiserver-operator 2025-12-13T00:20:09.532806943+00:00 stderr F I1213 00:20:09.532800 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-storage-version-migrator-operator 2025-12-13T00:20:09.532816113+00:00 stderr F I1213 00:20:09.532805 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-node 2025-12-13T00:20:09.532816113+00:00 stderr F I1213 00:20:09.532811 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-operator-lifecycle-manager 2025-12-13T00:20:09.532825043+00:00 stderr F I1213 00:20:09.532815 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-route-controller-manager 2025-12-13T00:20:09.532825043+00:00 stderr F I1213 00:20:09.532821 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openstack 2025-12-13T00:20:09.532833553+00:00 stderr F I1213 00:20:09.532697 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-12-13T00:20:09.532843884+00:00 stderr F I1213 00:20:09.532837 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-apiserver-operator 2025-12-13T00:20:09.532854214+00:00 stderr F I1213 00:20:09.532847 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kube-scheduler-operator 2025-12-13T00:20:09.532862764+00:00 stderr F I1213 00:20:09.532855 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-marketplace 
2025-12-13T00:20:09.532870534+00:00 stderr F I1213 00:20:09.532862 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-cluster-version 2025-12-13T00:20:09.532878815+00:00 stderr F I1213 00:20:09.532868 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-console-user-settings 2025-12-13T00:20:09.532878815+00:00 stderr F I1213 00:20:09.532874 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-etcd-operator 2025-12-13T00:20:09.532888155+00:00 stderr F I1213 00:20:09.532880 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-kni-infra 2025-12-13T00:20:09.532888155+00:00 stderr F I1213 00:20:09.532885 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-network-operator 2025-12-13T00:20:09.532899615+00:00 stderr F I1213 00:20:09.532892 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-dns 2025-12-13T00:20:09.532912515+00:00 stderr F I1213 00:20:09.532898 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-machine-config-operator 2025-12-13T00:20:09.532912515+00:00 stderr F I1213 00:20:09.532904 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-user-workload-monitoring 2025-12-13T00:20:09.532912515+00:00 stderr F I1213 00:20:09.532905 28750 repair.go:29] Repairing admin network policies took 174.645µs 2025-12-13T00:20:09.532923276+00:00 stderr F I1213 00:20:09.532837 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-12-13T00:20:09.532945966+00:00 stderr F I1213 00:20:09.532920 28750 admin_network_policy_controller.go:489] 
Adding Pod in Admin Network Policy controller openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-12-13T00:20:09.532957117+00:00 stderr F I1213 00:20:09.532910 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openstack-operators 2025-12-13T00:20:09.532964457+00:00 stderr F I1213 00:20:09.532904 28750 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b099dc38-13d2-468d-a119-69434d19ed27}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.532964457+00:00 stderr F I1213 00:20:09.532958 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-console-operator 2025-12-13T00:20:09.532971827+00:00 stderr F I1213 00:20:09.532966 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-etcd 2025-12-13T00:20:09.532979077+00:00 stderr F I1213 00:20:09.532972 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ingress-operator 2025-12-13T00:20:09.532979077+00:00 stderr F I1213 00:20:09.532972 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-image-registry/node-ca-l92hr 2025-12-13T00:20:09.532986917+00:00 stderr F I1213 00:20:09.532978 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller 
openshift-operators 2025-12-13T00:20:09.532986917+00:00 stderr F I1213 00:20:09.532982 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2025-12-13T00:20:09.532994548+00:00 stderr F I1213 00:20:09.532986 28750 admin_network_policy_controller.go:443] Adding Namespace in Admin Network Policy controller openshift-ovirt-infra 2025-12-13T00:20:09.532994548+00:00 stderr F I1213 00:20:09.532989 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-12-13T00:20:09.533003638+00:00 stderr F I1213 00:20:09.532997 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-12-13T00:20:09.533003638+00:00 stderr F I1213 00:20:09.532997 28750 repair.go:104] Repairing baseline admin network policies took 85.913µs 2025-12-13T00:20:09.533016048+00:00 stderr F I1213 00:20:09.533006 28750 admin_network_policy_controller.go:241] Starting Admin Network Policy workers 2025-12-13T00:20:09.533016048+00:00 stderr F I1213 00:20:09.533013 28750 admin_network_policy_controller.go:252] Starting Baseline Admin Network Policy workers 2025-12-13T00:20:09.533024669+00:00 stderr F I1213 00:20:09.533019 28750 admin_network_policy_controller.go:263] Starting Namespace Admin Network Policy workers 2025-12-13T00:20:09.533034529+00:00 stderr F I1213 00:20:09.533025 28750 admin_network_policy_controller.go:274] Starting Pod Admin Network Policy workers 2025-12-13T00:20:09.533034529+00:00 stderr F I1213 00:20:09.533030 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-12-13T00:20:09.533043629+00:00 stderr F I1213 00:20:09.533031 28750 
admin_network_policy_controller.go:285] Starting Node Admin Network Policy workers 2025-12-13T00:20:09.533054529+00:00 stderr F I1213 00:20:09.533048 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-console/console-644bb77b49-5x5xk in Admin Network Policy controller 2025-12-13T00:20:09.533062550+00:00 stderr F I1213 00:20:09.533056 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-console/console-644bb77b49-5x5xk Admin Network Policy controller: took 8.73µs 2025-12-13T00:20:09.533094460+00:00 stderr F I1213 00:20:09.533075 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/community-operators-s2hxn 2025-12-13T00:20:09.533094460+00:00 stderr F I1213 00:20:09.533086 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-12-13T00:20:09.533094460+00:00 stderr F I1213 00:20:09.533090 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-12-13T00:20:09.533104141+00:00 stderr F I1213 00:20:09.533067 28750 model_client.go:381] Update operations generated as: [{Op:update Table:ACL Row:map[action:allow-related direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab13cfbf-8c22-4847-bf8f-29854d97f49b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.533104141+00:00 stderr F I1213 
00:20:09.533096 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver/installer-9-crc 2025-12-13T00:20:09.533111731+00:00 stderr F I1213 00:20:09.533102 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-12-13T00:20:09.533111731+00:00 stderr F I1213 00:20:09.533108 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-12-13T00:20:09.533118951+00:00 stderr F I1213 00:20:09.533108 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-dns/node-resolver-dn27q in Admin Network Policy controller 2025-12-13T00:20:09.533129601+00:00 stderr F I1213 00:20:09.533121 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-dns/node-resolver-dn27q Admin Network Policy controller: took 16.11µs 2025-12-13T00:20:09.533152213+00:00 stderr F I1213 00:20:09.533112 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/revision-pruner-9-crc 2025-12-13T00:20:09.533152213+00:00 stderr F I1213 00:20:09.533128 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:b099dc38-13d2-468d-a119-69434d19ed27} {GoUUID:ab13cfbf-8c22-4847-bf8f-29854d97f49b}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.533152213+00:00 stderr F I1213 00:20:09.533148 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-12-13T00:20:09.533163643+00:00 stderr F I1213 00:20:09.533158 28750 
admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-multus/multus-q88th 2025-12-13T00:20:09.533170713+00:00 stderr F I1213 00:20:09.533165 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-12-13T00:20:09.533195424+00:00 stderr F I1213 00:20:09.533178 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-diagnostics/network-check-target-v54bt 2025-12-13T00:20:09.533195424+00:00 stderr F I1213 00:20:09.533191 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-12-13T00:20:09.533203324+00:00 stderr F I1213 00:20:09.533198 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver/installer-12-crc 2025-12-13T00:20:09.533210515+00:00 stderr F I1213 00:20:09.533150 28750 transact.go:42] Configuring OVN: [{Op:update Table:ACL Row:map[action:allow-related direction:to-lport external_ids:{GoMap:map[direction:Ingress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Ingress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b099dc38-13d2-468d-a119-69434d19ed27}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:ACL Row:map[action:allow-related direction:from-lport external_ids:{GoMap:map[direction:Egress k8s.ovn.org/id:default-network-controller:NetpolDefault:allow-hairpinning:Egress k8s.ovn.org/name:allow-hairpinning k8s.ovn.org/owner-controller:default-network-controller 
k8s.ovn.org/owner-type:NetpolDefault]} log:false match:ip4.src == 169.254.169.5 meter:{GoSet:[acl-logging]} name:{GoSet:[]} options:{GoMap:map[apply-after-lb:true]} priority:1001 severity:{GoSet:[]} tier:2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab13cfbf-8c22-4847-bf8f-29854d97f49b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:b099dc38-13d2-468d-a119-69434d19ed27} {GoUUID:ab13cfbf-8c22-4847-bf8f-29854d97f49b}]}}] Timeout: Where:[where column _uuid == {209ca8a5-55e7-4f87-adee-1ae7952f089e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.533238735+00:00 stderr F I1213 00:20:09.533218 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs in Admin Network Policy controller 2025-12-13T00:20:09.533238735+00:00 stderr F I1213 00:20:09.533233 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs Admin Network Policy controller: took 14.36µs 2025-12-13T00:20:09.533249606+00:00 stderr F I1213 00:20:09.533241 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 in Admin Network Policy controller 2025-12-13T00:20:09.533257106+00:00 stderr F I1213 00:20:09.533247 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 Admin Network Policy controller: took 5.851µs 2025-12-13T00:20:09.533264556+00:00 stderr F I1213 00:20:09.533254 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg in Admin Network Policy controller 2025-12-13T00:20:09.533264556+00:00 stderr F I1213 00:20:09.533261 28750 admin_network_policy_pod.go:59] Finished 
syncing Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg Admin Network Policy controller: took 6.74µs 2025-12-13T00:20:09.533273576+00:00 stderr F I1213 00:20:09.533267 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz in Admin Network Policy controller 2025-12-13T00:20:09.533281877+00:00 stderr F I1213 00:20:09.533273 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz Admin Network Policy controller: took 6.03µs 2025-12-13T00:20:09.533290467+00:00 stderr F I1213 00:20:09.533279 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t in Admin Network Policy controller 2025-12-13T00:20:09.533290467+00:00 stderr F I1213 00:20:09.533285 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t Admin Network Policy controller: took 6.101µs 2025-12-13T00:20:09.533298087+00:00 stderr F I1213 00:20:09.533291 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/installer-10-crc in Admin Network Policy controller 2025-12-13T00:20:09.533306807+00:00 stderr F I1213 00:20:09.533203 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-12-13T00:20:09.533313977+00:00 stderr F I1213 00:20:09.533297 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/installer-10-crc Admin Network Policy controller: took 5.25µs 2025-12-13T00:20:09.533313977+00:00 stderr F I1213 00:20:09.533306 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-12-13T00:20:09.533321128+00:00 stderr F I1213 
00:20:09.533313 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in Admin Network Policy controller 2025-12-13T00:20:09.533327958+00:00 stderr F I1213 00:20:09.533319 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc Admin Network Policy controller: took 6.21µs 2025-12-13T00:20:09.533335458+00:00 stderr F I1213 00:20:09.533326 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh in Admin Network Policy controller 2025-12-13T00:20:09.533335458+00:00 stderr F I1213 00:20:09.533332 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh Admin Network Policy controller: took 5.55µs 2025-12-13T00:20:09.533344208+00:00 stderr F I1213 00:20:09.533338 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 in Admin Network Policy controller 2025-12-13T00:20:09.533351038+00:00 stderr F I1213 00:20:09.533344 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 Admin Network Policy controller: took 5.5µs 2025-12-13T00:20:09.533360619+00:00 stderr F I1213 00:20:09.533350 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b in Admin Network Policy controller 2025-12-13T00:20:09.533360619+00:00 stderr F I1213 00:20:09.533355 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b Admin Network Policy controller: took 5.15µs 2025-12-13T00:20:09.533367739+00:00 stderr F I1213 00:20:09.533361 28750 admin_network_policy_pod.go:56] Processing sync for Pod 
openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg in Admin Network Policy controller 2025-12-13T00:20:09.533374549+00:00 stderr F I1213 00:20:09.533367 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg Admin Network Policy controller: took 5.61µs 2025-12-13T00:20:09.533381679+00:00 stderr F I1213 00:20:09.533373 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-image-registry/node-ca-l92hr in Admin Network Policy controller 2025-12-13T00:20:09.533381679+00:00 stderr F I1213 00:20:09.533378 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/node-ca-l92hr Admin Network Policy controller: took 5.32µs 2025-12-13T00:20:09.533389099+00:00 stderr F I1213 00:20:09.533381 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/installer-10-retry-1-crc 2025-12-13T00:20:09.533389099+00:00 stderr F I1213 00:20:09.533385 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd in Admin Network Policy controller 2025-12-13T00:20:09.533396560+00:00 stderr F I1213 00:20:09.533388 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/installer-11-crc 2025-12-13T00:20:09.533396560+00:00 stderr F I1213 00:20:09.533391 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd Admin Network Policy controller: took 5.83µs 2025-12-13T00:20:09.533405240+00:00 stderr F I1213 00:20:09.533400 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 in Admin Network Policy controller 2025-12-13T00:20:09.533412230+00:00 stderr F I1213 00:20:09.533407 28750 
admin_network_policy_pod.go:59] Finished syncing Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 Admin Network Policy controller: took 6.19µs 2025-12-13T00:20:09.533420670+00:00 stderr F I1213 00:20:09.533414 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr in Admin Network Policy controller 2025-12-13T00:20:09.533427540+00:00 stderr F I1213 00:20:09.533421 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr Admin Network Policy controller: took 6.81µs 2025-12-13T00:20:09.533434371+00:00 stderr F I1213 00:20:09.533427 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/machine-config-daemon-zpnhg in Admin Network Policy controller 2025-12-13T00:20:09.533441481+00:00 stderr F I1213 00:20:09.533433 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/machine-config-daemon-zpnhg Admin Network Policy controller: took 5.741µs 2025-12-13T00:20:09.533448651+00:00 stderr F I1213 00:20:09.533439 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/community-operators-s2hxn in Admin Network Policy controller 2025-12-13T00:20:09.533448651+00:00 stderr F I1213 00:20:09.533445 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/community-operators-s2hxn Admin Network Policy controller: took 6.15µs 2025-12-13T00:20:09.533458071+00:00 stderr F I1213 00:20:09.533451 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz in Admin Network Policy controller 2025-12-13T00:20:09.533465221+00:00 stderr F I1213 00:20:09.533456 28750 admin_network_policy_pod.go:59] Finished syncing Pod 
openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz Admin Network Policy controller: took 5.58µs 2025-12-13T00:20:09.533472022+00:00 stderr F I1213 00:20:09.533393 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-12-13T00:20:09.533479032+00:00 stderr F I1213 00:20:09.533463 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv in Admin Network Policy controller 2025-12-13T00:20:09.533487842+00:00 stderr F I1213 00:20:09.533482 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv Admin Network Policy controller: took 18.201µs 2025-12-13T00:20:09.533495202+00:00 stderr F I1213 00:20:09.533481 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-12-13T00:20:09.533495202+00:00 stderr F I1213 00:20:09.533492 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver/installer-9-crc in Admin Network Policy controller 2025-12-13T00:20:09.533502413+00:00 stderr F I1213 00:20:09.533496 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-12-13T00:20:09.533510053+00:00 stderr F I1213 00:20:09.533498 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver/installer-9-crc Admin Network Policy controller: took 6.16µs 2025-12-13T00:20:09.533518583+00:00 stderr F I1213 00:20:09.533509 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-12-13T00:20:09.533527733+00:00 stderr F I1213 00:20:09.533517 28750 
admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ingress-canary/ingress-canary-2vhcn 2025-12-13T00:20:09.533527733+00:00 stderr F I1213 00:20:09.533520 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm in Admin Network Policy controller 2025-12-13T00:20:09.533527733+00:00 stderr F I1213 00:20:09.533522 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-12-13T00:20:09.533560964+00:00 stderr F I1213 00:20:09.533533 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm Admin Network Policy controller: took 15.47µs 2025-12-13T00:20:09.533560964+00:00 stderr F I1213 00:20:09.533537 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver/kube-apiserver-crc 2025-12-13T00:20:09.533560964+00:00 stderr F I1213 00:20:09.533548 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-service-ca/service-ca-666f99b6f-kk8kg in Admin Network Policy controller 2025-12-13T00:20:09.533570674+00:00 stderr F I1213 00:20:09.533562 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-service-ca/service-ca-666f99b6f-kk8kg Admin Network Policy controller: took 13.96µs 2025-12-13T00:20:09.533570674+00:00 stderr F I1213 00:20:09.533562 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-node-identity/network-node-identity-7xghp 2025-12-13T00:20:09.533581695+00:00 stderr F I1213 00:20:09.533568 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/revision-pruner-9-crc in Admin Network Policy controller 2025-12-13T00:20:09.533581695+00:00 stderr F I1213 00:20:09.533573 28750 
admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/revision-pruner-9-crc Admin Network Policy controller: took 5.64µs 2025-12-13T00:20:09.533581695+00:00 stderr F I1213 00:20:09.533574 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-operator/iptables-alerter-wwpnd 2025-12-13T00:20:09.533589635+00:00 stderr F I1213 00:20:09.533579 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb in Admin Network Policy controller 2025-12-13T00:20:09.533589635+00:00 stderr F I1213 00:20:09.533583 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-network-operator/network-operator-767c585db5-zd56b 2025-12-13T00:20:09.533589635+00:00 stderr F I1213 00:20:09.533585 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb Admin Network Policy controller: took 5.8µs 2025-12-13T00:20:09.533597405+00:00 stderr F I1213 00:20:09.533590 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-dns/dns-default-gbw49 2025-12-13T00:20:09.533604615+00:00 stderr F I1213 00:20:09.533597 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-12-13T00:20:09.533604615+00:00 stderr F I1213 00:20:09.533591 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-q88th in Admin Network Policy controller 2025-12-13T00:20:09.533612065+00:00 stderr F I1213 00:20:09.533605 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-multus/network-metrics-daemon-qdfr4 2025-12-13T00:20:09.533612065+00:00 stderr F I1213 00:20:09.533606 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-q88th Admin 
Network Policy controller: took 14.9µs 2025-12-13T00:20:09.533619426+00:00 stderr F I1213 00:20:09.533611 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-etcd/etcd-crc 2025-12-13T00:20:09.533619426+00:00 stderr F I1213 00:20:09.533614 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc in Admin Network Policy controller 2025-12-13T00:20:09.533626976+00:00 stderr F I1213 00:20:09.533618 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc Admin Network Policy controller: took 4.691µs 2025-12-13T00:20:09.533626976+00:00 stderr F I1213 00:20:09.533619 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5 2025-12-13T00:20:09.533626976+00:00 stderr F I1213 00:20:09.533624 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-diagnostics/network-check-target-v54bt in Admin Network Policy controller 2025-12-13T00:20:09.533634746+00:00 stderr F I1213 00:20:09.533628 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-12-13T00:20:09.533641856+00:00 stderr F I1213 00:20:09.533628 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-diagnostics/network-check-target-v54bt Admin Network Policy controller: took 4.5µs 2025-12-13T00:20:09.533651537+00:00 stderr F I1213 00:20:09.533641 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-12-13T00:20:09.533651537+00:00 stderr F I1213 00:20:09.533642 28750 factory.go:988] Added *v1.NetworkPolicy event handler 4 2025-12-13T00:20:09.533658797+00:00 stderr 
F I1213 00:20:09.533649 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd in Admin Network Policy controller 2025-12-13T00:20:09.533665887+00:00 stderr F I1213 00:20:09.533658 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd Admin Network Policy controller: took 9.961µs 2025-12-13T00:20:09.533672927+00:00 stderr F I1213 00:20:09.533667 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver/installer-12-crc in Admin Network Policy controller 2025-12-13T00:20:09.533679967+00:00 stderr F I1213 00:20:09.533673 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver/installer-12-crc Admin Network Policy controller: took 5.56µs 2025-12-13T00:20:09.533700608+00:00 stderr F I1213 00:20:09.533679 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 in Admin Network Policy controller 2025-12-13T00:20:09.533700608+00:00 stderr F I1213 00:20:09.533685 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 Admin Network Policy controller: took 6.03µs 2025-12-13T00:20:09.533708398+00:00 stderr F I1213 00:20:09.533691 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in Admin Network Policy controller 2025-12-13T00:20:09.533708398+00:00 stderr F I1213 00:20:09.533704 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-scheduler/openshift-kube-scheduler-crc Admin Network Policy controller: took 12.35µs 2025-12-13T00:20:09.533716018+00:00 stderr F I1213 00:20:09.533705 28750 egressip.go:1052] syncStaleEgressReroutePolicy will remove stale nexthops: [] 2025-12-13T00:20:09.533716018+00:00 stderr F I1213 00:20:09.533711 28750 
admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/installer-10-retry-1-crc in Admin Network Policy controller 2025-12-13T00:20:09.533723309+00:00 stderr F I1213 00:20:09.533718 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/installer-10-retry-1-crc Admin Network Policy controller: took 5.81µs 2025-12-13T00:20:09.533730399+00:00 stderr F I1213 00:20:09.533724 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/installer-11-crc in Admin Network Policy controller 2025-12-13T00:20:09.533737979+00:00 stderr F I1213 00:20:09.533729 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/installer-11-crc Admin Network Policy controller: took 5.061µs 2025-12-13T00:20:09.533737979+00:00 stderr F I1213 00:20:09.533650 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-12-13T00:20:09.533745439+00:00 stderr F I1213 00:20:09.533735 28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv in Admin Network Policy controller 2025-12-13T00:20:09.533745439+00:00 stderr F I1213 00:20:09.533738 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-12-13T00:20:09.533745439+00:00 stderr F I1213 00:20:09.533742 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv Admin Network Policy controller: took 6.69µs 2025-12-13T00:20:09.533759659+00:00 stderr F I1213 00:20:09.533744 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/redhat-operators-zg7cl 2025-12-13T00:20:09.533759659+00:00 stderr F I1213 00:20:09.533750 28750 
admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh in Admin Network Policy controller 2025-12-13T00:20:09.533759659+00:00 stderr F I1213 00:20:09.533752 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh 2025-12-13T00:20:09.533759659+00:00 stderr F I1213 00:20:09.533756 28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh Admin Network Policy controller: took 6.11µs 2025-12-13T00:20:09.533767900+00:00 stderr F I1213 00:20:09.533758 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-12-13T00:20:09.533767900+00:00 stderr F I1213 00:20:09.533762 28750 address_set.go:304] New(b27c19cc-d9d0-4d57-a5a8-06fcff438e8a/default-network-controller:EgressIP:egressip-served-pods:v4/a4548040316634674295) with [] 2025-12-13T00:20:09.533775240+00:00 stderr F I1213 00:20:09.533764 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-image-registry/image-pruner-29426400-tlv26 2025-12-13T00:20:09.533775240+00:00 stderr F I1213 00:20:09.533771 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-12-13T00:20:09.533782310+00:00 stderr F I1213 00:20:09.533776 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/revision-pruner-10-crc 2025-12-13T00:20:09.533789400+00:00 stderr F I1213 00:20:09.533781 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 
2025-12-13T00:20:09.533789400+00:00 stderr F I1213 00:20:09.533785 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/marketplace-operator-8b455464d-kghgr 2025-12-13T00:20:09.533813741+00:00 stderr F I1213 00:20:09.533790 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-multus/multus-additional-cni-plugins-bzj2p 2025-12-13T00:20:09.533813741+00:00 stderr F I1213 00:20:09.533789 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b27c19cc-d9d0-4d57-a5a8-06fcff438e8a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.533813741+00:00 stderr F I1213 00:20:09.533806 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/certified-operators-lcrg8 2025-12-13T00:20:09.533822391+00:00 stderr F I1213 00:20:09.533813 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-12-13T00:20:09.533822391+00:00 stderr F I1213 00:20:09.533818 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-12-13T00:20:09.533829521+00:00 stderr F I1213 00:20:09.533812 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b27c19cc-d9d0-4d57-a5a8-06fcff438e8a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.533838782+00:00 stderr F I1213 00:20:09.533827 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-image-registry/image-registry-75b7bb6564-rnjvj 2025-12-13T00:20:09.533838782+00:00 
stderr F I1213 00:20:09.533834 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-scheduler/installer-7-crc 2025-12-13T00:20:09.533847072+00:00 stderr F I1213 00:20:09.533839 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-scheduler/installer-8-crc 2025-12-13T00:20:09.533847072+00:00 stderr F I1213 00:20:09.533844 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-config-operator/machine-config-server-v65wr 2025-12-13T00:20:09.533854362+00:00 stderr F I1213 00:20:09.533849 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-12-13T00:20:09.533862282+00:00 stderr F I1213 00:20:09.533854 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-12-13T00:20:09.533862282+00:00 stderr F I1213 00:20:09.533859 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/revision-pruner-8-crc 2025-12-13T00:20:09.533869682+00:00 stderr F I1213 00:20:09.533864 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/kube-controller-manager-crc 2025-12-13T00:20:09.533876933+00:00 stderr F I1213 00:20:09.533870 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-controller-manager/revision-pruner-11-crc 2025-12-13T00:20:09.533876933+00:00 stderr F I1213 00:20:09.533874 28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-12-13T00:20:09.533884093+00:00 stderr F 
I1213 00:20:09.533879   28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-marketplace/redhat-marketplace-nv4pl
2025-12-13T00:20:09.533891203+00:00 stderr F I1213 00:20:09.533884   28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-ovn-kubernetes/ovnkube-node-brz8k
2025-12-13T00:20:09.533898223+00:00 stderr F I1213 00:20:09.533889   28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-console-operator/console-operator-5dbbc74dc9-cp5cd
2025-12-13T00:20:09.533898223+00:00 stderr F I1213 00:20:09.533894   28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-console/downloads-65476884b9-9wcvx
2025-12-13T00:20:09.533905653+00:00 stderr F I1213 00:20:09.533764   28750 admin_network_policy_pod.go:56] Processing sync for Pod hostpath-provisioner/csi-hostpathplugin-hvm8g in Admin Network Policy controller
2025-12-13T00:20:09.533905653+00:00 stderr F I1213 00:20:09.533899   28750 admin_network_policy_controller.go:489] Adding Pod in Admin Network Policy controller openshift-kube-apiserver/installer-13-crc
2025-12-13T00:20:09.533912894+00:00 stderr F I1213 00:20:09.533904   28750 admin_network_policy_pod.go:59] Finished syncing Pod hostpath-provisioner/csi-hostpathplugin-hvm8g Admin Network Policy controller: took 140.243µs
2025-12-13T00:20:09.533919884+00:00 stderr F I1213 00:20:09.533913   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf in Admin Network Policy controller
2025-12-13T00:20:09.533926954+00:00 stderr F I1213 00:20:09.533919   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf Admin Network Policy controller: took 5.27µs
2025-12-13T00:20:09.533926954+00:00 stderr F I1213 00:20:09.533924   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ingress-canary/ingress-canary-2vhcn in Admin Network Policy controller
2025-12-13T00:20:09.533960025+00:00 stderr F I1213 00:20:09.533928   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ingress-canary/ingress-canary-2vhcn Admin Network Policy controller: took 4.35µs
2025-12-13T00:20:09.533969115+00:00 stderr F I1213 00:20:09.533961   28750 admin_network_policy_node.go:55] Processing sync for Node crc in Admin Network Policy controller
2025-12-13T00:20:09.533976345+00:00 stderr F I1213 00:20:09.533970   28750 admin_network_policy_node.go:58] Finished syncing Node crc Admin Network Policy controller: took 11.3µs
2025-12-13T00:20:09.534019657+00:00 stderr F I1213 00:20:09.533988   28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-authentication-operator in Admin Network Policy controller
2025-12-13T00:20:09.534048497+00:00 stderr F I1213 00:20:09.534030   28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-authentication-operator Admin Network Policy controller: took 10.4µs
2025-12-13T00:20:09.534048497+00:00 stderr F I1213 00:20:09.534044   28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-machine-approver in Admin Network Policy controller
2025-12-13T00:20:09.534056448+00:00 stderr F I1213 00:20:09.534049   28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-machine-approver Admin Network Policy controller: took 4.42µs
2025-12-13T00:20:09.534063708+00:00 stderr F I1213 00:20:09.534055   28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-storage-version-migrator in Admin Network Policy controller
2025-12-13T00:20:09.534063708+00:00 stderr F I1213 00:20:09.534059   28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-storage-version-migrator Admin Network Policy controller: took 4.781µs
2025-12-13T00:20:09.534070978+00:00 stderr F I1213 00:20:09.534065   28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-monitoring in Admin Network Policy controller
2025-12-13T00:20:09.534078938+00:00 stderr F I1213 00:20:09.534069   28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-monitoring Admin Network Policy controller: took 4.13µs
2025-12-13T00:20:09.534078938+00:00 stderr F I1213 00:20:09.534075   28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-nutanix-infra in Admin Network Policy controller
2025-12-13T00:20:09.534088148+00:00 stderr F I1213 00:20:09.534080   28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-nutanix-infra Admin Network Policy controller: took 4.23µs
2025-12-13T00:20:09.534097429+00:00 stderr F I1213 00:20:09.534085   28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ingress-canary in Admin Network Policy controller
2025-12-13T00:20:09.534097429+00:00 stderr F I1213 00:20:09.534090   28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ingress-canary Admin Network Policy controller: took 4.77µs
2025-12-13T00:20:09.534106769+00:00 stderr F I1213 00:20:09.534096   28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-multus in Admin Network Policy controller
2025-12-13T00:20:09.534106769+00:00 stderr F I1213 00:20:09.534100   28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-multus Admin Network Policy controller: took 4.31µs
2025-12-13T00:20:09.534116699+00:00 stderr F I1213 00:20:09.534105   28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-authentication in Admin Network Policy controller
2025-12-13T00:20:09.534116699+00:00 stderr F I1213 00:20:09.534109   28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-authentication Admin Network Policy controller: took 3.39µs
2025-12-13T00:20:09.534116699+00:00 stderr F I1213 00:20:09.534113   28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-operator in Admin Network Policy controller
2025-12-13T00:20:09.534130630+00:00 stderr F I1213 00:20:09.534117   28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-operator Admin Network Policy controller: took 3.43µs
2025-12-13T00:20:09.534130630+00:00 stderr F I1213 00:20:09.534121   28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-image-registry in Admin Network Policy controller
2025-12-13T00:20:09.534130630+00:00 stderr F I1213 00:20:09.534125   28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-image-registry Admin Network Policy controller: took 3.09µs
2025-12-13T00:20:09.534140050+00:00 stderr F I1213 00:20:09.534132   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc in Admin Network Policy controller
2025-12-13T00:20:09.534147300+00:00 stderr F I1213 00:20:09.534139   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc Admin Network Policy controller: took 9.65µs
2025-12-13T00:20:09.534154590+00:00 stderr F I1213 00:20:09.534147   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver/kube-apiserver-crc in Admin Network Policy controller
2025-12-13T00:20:09.534161700+00:00 stderr F I1213 00:20:09.534152   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver/kube-apiserver-crc Admin Network Policy controller: took 4.4µs
2025-12-13T00:20:09.534161700+00:00 stderr F I1213 00:20:09.534158   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-node-identity/network-node-identity-7xghp in Admin Network Policy controller
2025-12-13T00:20:09.534169221+00:00 stderr F I1213 00:20:09.534162   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-node-identity/network-node-identity-7xghp Admin Network Policy controller: took 4.49µs
2025-12-13T00:20:09.534176261+00:00 stderr F I1213 00:20:09.534167   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-operator/iptables-alerter-wwpnd in Admin Network Policy controller
2025-12-13T00:20:09.534176261+00:00 stderr F I1213 00:20:09.534172   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-operator/iptables-alerter-wwpnd Admin Network Policy controller: took 4.75µs
2025-12-13T00:20:09.534183641+00:00 stderr F I1213 00:20:09.534176   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-network-operator/network-operator-767c585db5-zd56b in Admin Network Policy controller
2025-12-13T00:20:09.534183641+00:00 stderr F I1213 00:20:09.534180   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-network-operator/network-operator-767c585db5-zd56b Admin Network Policy controller: took 3.96µs
2025-12-13T00:20:09.534191011+00:00 stderr F I1213 00:20:09.534185   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-dns/dns-default-gbw49 in Admin Network Policy controller
2025-12-13T00:20:09.534198081+00:00 stderr F I1213 00:20:09.534189   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-dns/dns-default-gbw49 Admin Network Policy controller: took 4.08µs
2025-12-13T00:20:09.534198081+00:00 stderr F I1213 00:20:09.534193   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ingress/router-default-5c9bf7bc58-6jctv in Admin Network Policy controller
2025-12-13T00:20:09.534205432+00:00 stderr F I1213 00:20:09.534198   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ingress/router-default-5c9bf7bc58-6jctv Admin Network Policy controller: took 4.24µs
2025-12-13T00:20:09.534205432+00:00 stderr F I1213 00:20:09.534202   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/network-metrics-daemon-qdfr4 in Admin Network Policy controller
2025-12-13T00:20:09.534235032+00:00 stderr F I1213 00:20:09.534215   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/network-metrics-daemon-qdfr4 Admin Network Policy controller: took 3.86µs
2025-12-13T00:20:09.534235032+00:00 stderr F I1213 00:20:09.534224   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-etcd/etcd-crc in Admin Network Policy controller
2025-12-13T00:20:09.534235032+00:00 stderr F I1213 00:20:09.534229   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-etcd/etcd-crc Admin Network Policy controller: took 4.62µs
2025-12-13T00:20:09.534245293+00:00 stderr F I1213 00:20:09.534240   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5 in Admin Network Policy controller
2025-12-13T00:20:09.534252373+00:00 stderr F I1213 00:20:09.534244   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5 Admin Network Policy controller: took 4.7µs
2025-12-13T00:20:09.534252373+00:00 stderr F I1213 00:20:09.534249   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 in Admin Network Policy controller
2025-12-13T00:20:09.534259623+00:00 stderr F I1213 00:20:09.534253   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 Admin Network Policy controller: took 4.24µs
2025-12-13T00:20:09.534266643+00:00 stderr F I1213 00:20:09.534260   28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-service-ca-operator in Admin Network Policy controller
2025-12-13T00:20:09.534273713+00:00 stderr F I1213 00:20:09.534267   28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-service-ca-operator Admin Network Policy controller: took 8.23µs
2025-12-13T00:20:09.534295524+00:00 stderr F I1213 00:20:09.534278   28750 admin_network_policy_namespace.go:53] Processing sync for Namespace default in Admin Network Policy controller
2025-12-13T00:20:09.534295524+00:00 stderr F I1213 00:20:09.534288   28750 admin_network_policy_namespace.go:56] Finished syncing Namespace default Admin Network Policy controller: took 10.21µs
2025-12-13T00:20:09.534319205+00:00 stderr F I1213 00:20:09.534298   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m in Admin Network Policy controller
2025-12-13T00:20:09.534319205+00:00 stderr F I1213 00:20:09.534315   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m Admin Network Policy controller: took 20.671µs
2025-12-13T00:20:09.534343815+00:00 stderr F I1213 00:20:09.534326   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 in Admin Network Policy controller
2025-12-13T00:20:09.534343815+00:00 stderr F I1213 00:20:09.534336   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 Admin Network Policy controller: took 9.79µs
2025-12-13T00:20:09.534352426+00:00 stderr F I1213 00:20:09.534343   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb in Admin Network Policy controller
2025-12-13T00:20:09.534352426+00:00 stderr F I1213 00:20:09.534349   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb Admin Network Policy controller: took 5.7µs
2025-12-13T00:20:09.534362376+00:00 stderr F I1213 00:20:09.534356   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/redhat-operators-zg7cl in Admin Network Policy controller
2025-12-13T00:20:09.534369696+00:00 stderr F I1213 00:20:09.534361   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/redhat-operators-zg7cl Admin Network Policy controller: took 5.21µs
2025-12-13T00:20:09.534376846+00:00 stderr F I1213 00:20:09.534367   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh in Admin Network Policy controller
2025-12-13T00:20:09.534376846+00:00 stderr F I1213 00:20:09.534373   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh Admin Network Policy controller: took 6.13µs
2025-12-13T00:20:09.534384156+00:00 stderr F I1213 00:20:09.534379   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b in Admin Network Policy controller
2025-12-13T00:20:09.534391297+00:00 stderr F I1213 00:20:09.534384   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b Admin Network Policy controller: took 5.45µs
2025-12-13T00:20:09.534398427+00:00 stderr F I1213 00:20:09.534390   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-image-registry/image-pruner-29426400-tlv26 in Admin Network Policy controller
2025-12-13T00:20:09.534398427+00:00 stderr F I1213 00:20:09.534395   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/image-pruner-29426400-tlv26 Admin Network Policy controller: took 4.96µs
2025-12-13T00:20:09.534408507+00:00 stderr F I1213 00:20:09.534403   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 in Admin Network Policy controller
2025-12-13T00:20:09.534415637+00:00 stderr F I1213 00:20:09.534409   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 Admin Network Policy controller: took 7.2µs
2025-12-13T00:20:09.534422648+00:00 stderr F I1213 00:20:09.534417   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/revision-pruner-10-crc in Admin Network Policy controller
2025-12-13T00:20:09.534429488+00:00 stderr F I1213 00:20:09.534422   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/revision-pruner-10-crc Admin Network Policy controller: took 5.45µs
2025-12-13T00:20:09.534436308+00:00 stderr F I1213 00:20:09.534428   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb in Admin Network Policy controller
2025-12-13T00:20:09.534443488+00:00 stderr F I1213 00:20:09.534434   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb Admin Network Policy controller: took 5.92µs
2025-12-13T00:20:09.534443488+00:00 stderr F I1213 00:20:09.534440   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/marketplace-operator-8b455464d-kghgr in Admin Network Policy controller
2025-12-13T00:20:09.534450588+00:00 stderr F I1213 00:20:09.534445   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/marketplace-operator-8b455464d-kghgr Admin Network Policy controller: took 4.62µs
2025-12-13T00:20:09.534457598+00:00 stderr F I1213 00:20:09.534451   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-multus/multus-additional-cni-plugins-bzj2p in Admin Network Policy controller
2025-12-13T00:20:09.534464449+00:00 stderr F I1213 00:20:09.534456   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-multus/multus-additional-cni-plugins-bzj2p Admin Network Policy controller: took 5.27µs
2025-12-13T00:20:09.534473949+00:00 stderr F I1213 00:20:09.534463   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/certified-operators-lcrg8 in Admin Network Policy controller
2025-12-13T00:20:09.534473949+00:00 stderr F I1213 00:20:09.534468   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/certified-operators-lcrg8 Admin Network Policy controller: took 5.3µs
2025-12-13T00:20:09.534482719+00:00 stderr F I1213 00:20:09.534477   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf in Admin Network Policy controller
2025-12-13T00:20:09.534489669+00:00 stderr F I1213 00:20:09.534482   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf Admin Network Policy controller: took 8µs
2025-12-13T00:20:09.534496490+00:00 stderr F I1213 00:20:09.534489   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp in Admin Network Policy controller
2025-12-13T00:20:09.534503590+00:00 stderr F I1213 00:20:09.534495   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp Admin Network Policy controller: took 5.59µs
2025-12-13T00:20:09.534510770+00:00 stderr F I1213 00:20:09.534501   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-image-registry/image-registry-75b7bb6564-rnjvj in Admin Network Policy controller
2025-12-13T00:20:09.534510770+00:00 stderr F I1213 00:20:09.534507   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-image-registry/image-registry-75b7bb6564-rnjvj Admin Network Policy controller: took 5.71µs
2025-12-13T00:20:09.534518170+00:00 stderr F I1213 00:20:09.534513   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-scheduler/installer-7-crc in Admin Network Policy controller
2025-12-13T00:20:09.534525020+00:00 stderr F I1213 00:20:09.534519   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-scheduler/installer-7-crc Admin Network Policy controller: took 5.38µs
2025-12-13T00:20:09.534531840+00:00 stderr F I1213 00:20:09.534525   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-scheduler/installer-8-crc in Admin Network Policy controller
2025-12-13T00:20:09.534538941+00:00 stderr F I1213 00:20:09.534530   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-scheduler/installer-8-crc Admin Network Policy controller: took 5.3µs
2025-12-13T00:20:09.534546121+00:00 stderr F I1213 00:20:09.534536   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-config-operator/machine-config-server-v65wr in Admin Network Policy controller
2025-12-13T00:20:09.534546121+00:00 stderr F I1213 00:20:09.534543   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-config-operator/machine-config-server-v65wr Admin Network Policy controller: took 6.3µs
2025-12-13T00:20:09.534555131+00:00 stderr F I1213 00:20:09.534549   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 in Admin Network Policy controller
2025-12-13T00:20:09.534561971+00:00 stderr F I1213 00:20:09.534555   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 Admin Network Policy controller: took 5.94µs
2025-12-13T00:20:09.534568821+00:00 stderr F I1213 00:20:09.534562   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z in Admin Network Policy controller
2025-12-13T00:20:09.534575672+00:00 stderr F I1213 00:20:09.534568   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z Admin Network Policy controller: took 5.72µs
2025-12-13T00:20:09.534585222+00:00 stderr F I1213 00:20:09.534575   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/revision-pruner-8-crc in Admin Network Policy controller
2025-12-13T00:20:09.534585222+00:00 stderr F I1213 00:20:09.534581   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/revision-pruner-8-crc Admin Network Policy controller: took 5.86µs
2025-12-13T00:20:09.534592492+00:00 stderr F I1213 00:20:09.534588   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/kube-controller-manager-crc in Admin Network Policy controller
2025-12-13T00:20:09.534600962+00:00 stderr F I1213 00:20:09.534588   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-marketplace
2025-12-13T00:20:09.534600962+00:00 stderr F I1213 00:20:09.534593   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/kube-controller-manager-crc Admin Network Policy controller: took 5.6µs
2025-12-13T00:20:09.534610123+00:00 stderr F I1213 00:20:09.534601   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-controller-manager/revision-pruner-11-crc in Admin Network Policy controller
2025-12-13T00:20:09.534610123+00:00 stderr F I1213 00:20:09.534606   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-controller-manager/revision-pruner-11-crc Admin Network Policy controller: took 5.44µs
2025-12-13T00:20:09.534619553+00:00 stderr F I1213 00:20:09.534613   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw in Admin Network Policy controller
2025-12-13T00:20:09.534619553+00:00 stderr F I1213 00:20:09.534615   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-marketplace took: 15.851µs
2025-12-13T00:20:09.534628803+00:00 stderr F I1213 00:20:09.534619   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw Admin Network Policy controller: took 5.97µs
2025-12-13T00:20:09.534628803+00:00 stderr F I1213 00:20:09.534622   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-operators
2025-12-13T00:20:09.534638233+00:00 stderr F I1213 00:20:09.534626   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-marketplace/redhat-marketplace-nv4pl in Admin Network Policy controller
2025-12-13T00:20:09.534638233+00:00 stderr F I1213 00:20:09.534629   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-operators took: 1.51µs
2025-12-13T00:20:09.534638233+00:00 stderr F I1213 00:20:09.534633   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-marketplace/redhat-marketplace-nv4pl Admin Network Policy controller: took 6.92µs
2025-12-13T00:20:09.534647474+00:00 stderr F I1213 00:20:09.534641   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-ovn-kubernetes/ovnkube-node-brz8k in Admin Network Policy controller
2025-12-13T00:20:09.534655624+00:00 stderr F I1213 00:20:09.534648   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-ovn-kubernetes/ovnkube-node-brz8k Admin Network Policy controller: took 7.171µs
2025-12-13T00:20:09.534664164+00:00 stderr F I1213 00:20:09.534656   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd in Admin Network Policy controller
2025-12-13T00:20:09.534672244+00:00 stderr F I1213 00:20:09.534663   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd Admin Network Policy controller: took 6.97µs
2025-12-13T00:20:09.534680525+00:00 stderr F I1213 00:20:09.534670   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-user-workload-monitoring
2025-12-13T00:20:09.534680525+00:00 stderr F I1213 00:20:09.534635   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace default
2025-12-13T00:20:09.534693345+00:00 stderr F I1213 00:20:09.534684   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-user-workload-monitoring took: 7.871µs
2025-12-13T00:20:09.534693345+00:00 stderr F I1213 00:20:09.534690   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cloud-network-config-controller
2025-12-13T00:20:09.534701745+00:00 stderr F I1213 00:20:09.534691   28750 obj_retry.go:541] Creating *factory.egressIPNamespace default took: 6.88µs
2025-12-13T00:20:09.534701745+00:00 stderr F I1213 00:20:09.534697   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cloud-network-config-controller took: 2.65µs
2025-12-13T00:20:09.534710215+00:00 stderr F I1213 00:20:09.534702   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace kube-system
2025-12-13T00:20:09.534710215+00:00 stderr F I1213 00:20:09.534705   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift
2025-12-13T00:20:09.534718736+00:00 stderr F I1213 00:20:09.534709   28750 obj_retry.go:541] Creating *factory.egressIPNamespace kube-system took: 1.52µs
2025-12-13T00:20:09.534718736+00:00 stderr F I1213 00:20:09.534714   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-infra
2025-12-13T00:20:09.534727716+00:00 stderr F I1213 00:20:09.534649   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-dns
2025-12-13T00:20:09.534727716+00:00 stderr F I1213 00:20:09.534721   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-infra took: 1.47µs
2025-12-13T00:20:09.534736686+00:00 stderr F I1213 00:20:09.534726   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-config
2025-12-13T00:20:09.534736686+00:00 stderr F I1213 00:20:09.534732   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-config took: 1.67µs
2025-12-13T00:20:09.534745416+00:00 stderr F I1213 00:20:09.534735   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-console-operator
2025-12-13T00:20:09.534753767+00:00 stderr F I1213 00:20:09.534744   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-dns took: 19.33µs
2025-12-13T00:20:09.534762577+00:00 stderr F I1213 00:20:09.534754   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-host-network
2025-12-13T00:20:09.534771507+00:00 stderr F I1213 00:20:09.534763   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-host-network took: 2.09µs
2025-12-13T00:20:09.534771507+00:00 stderr F I1213 00:20:09.534656   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-etcd-operator
2025-12-13T00:20:09.534780247+00:00 stderr F I1213 00:20:09.534769   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace kube-node-lease
2025-12-13T00:20:09.534780247+00:00 stderr F I1213 00:20:09.534772   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ingress-operator
2025-12-13T00:20:09.534789027+00:00 stderr F I1213 00:20:09.534776   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-console-user-settings
2025-12-13T00:20:09.534789027+00:00 stderr F I1213 00:20:09.534780   28750 obj_retry.go:541] Creating *factory.egressIPNamespace kube-node-lease took: 1.66µs
2025-12-13T00:20:09.534797798+00:00 stderr F I1213 00:20:09.534787   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-network-node-identity
2025-12-13T00:20:09.534797798+00:00 stderr F I1213 00:20:09.534790   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ingress-operator took: 11.39µs
2025-12-13T00:20:09.534806738+00:00 stderr F I1213 00:20:09.534793   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-etcd-operator took: 14.77µs
2025-12-13T00:20:09.534806738+00:00 stderr F I1213 00:20:09.534798   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-controller-manager
2025-12-13T00:20:09.534806738+00:00 stderr F I1213 00:20:09.534671   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-console/downloads-65476884b9-9wcvx in Admin Network Policy controller
2025-12-13T00:20:09.534819688+00:00 stderr F I1213 00:20:09.534804   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-config-operator
2025-12-13T00:20:09.534819688+00:00 stderr F I1213 00:20:09.534805   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openstack-operators
2025-12-13T00:20:09.534819688+00:00 stderr F I1213 00:20:09.534808   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-console/downloads-65476884b9-9wcvx Admin Network Policy controller: took 137.834µs
2025-12-13T00:20:09.534829249+00:00 stderr F I1213 00:20:09.534816   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-network-operator
2025-12-13T00:20:09.534829249+00:00 stderr F I1213 00:20:09.534818   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-config-operator took: 2.49µs
2025-12-13T00:20:09.534829249+00:00 stderr F I1213 00:20:09.534641   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kni-infra
2025-12-13T00:20:09.534838849+00:00 stderr F I1213 00:20:09.534827   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-scheduler-operator
2025-12-13T00:20:09.534838849+00:00 stderr F I1213 00:20:09.534663   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-apiserver-operator
2025-12-13T00:20:09.534850569+00:00 stderr F I1213 00:20:09.534843   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-network-operator took: 7.991µs
2025-12-13T00:20:09.534850569+00:00 stderr F I1213 00:20:09.534846   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-scheduler-operator took: 8.48µs
2025-12-13T00:20:09.534859739+00:00 stderr F I1213 00:20:09.534850   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-apiserver-operator took: 11.49µs
2025-12-13T00:20:09.534859739+00:00 stderr F I1213 00:20:09.534737   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-storage-version-migrator-operator
2025-12-13T00:20:09.534869100+00:00 stderr F I1213 00:20:09.534857   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cluster-version
2025-12-13T00:20:09.534869100+00:00 stderr F I1213 00:20:09.534862   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-storage-version-migrator-operator took: 1.65µs
2025-12-13T00:20:09.534869100+00:00 stderr F I1213 00:20:09.534827   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-openstack-infra
2025-12-13T00:20:09.534879500+00:00 stderr F I1213 00:20:09.534864   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-machine-config-operator
2025-12-13T00:20:09.534879500+00:00 stderr F I1213 00:20:09.534791   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-console-user-settings took: 6.97µs
2025-12-13T00:20:09.534879500+00:00 stderr F I1213 00:20:09.534875   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cluster-version took: 11.261µs
2025-12-13T00:20:09.534888580+00:00 stderr F I1213 00:20:09.534882   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-nutanix-infra
2025-12-13T00:20:09.534897090+00:00 stderr F I1213 00:20:09.534886   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-machine-config-operator took: 11.43µs
2025-12-13T00:20:09.534897090+00:00 stderr F I1213 00:20:09.534889   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-nutanix-infra took: 1.98µs
2025-12-13T00:20:09.534909021+00:00 stderr F I1213 00:20:09.534895   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-image-registry
2025-12-13T00:20:09.534909021+00:00 stderr F I1213 00:20:09.534897   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-controller-manager
2025-12-13T00:20:09.534909021+00:00 stderr F I1213 00:20:09.534904   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-image-registry took: 2.511µs
2025-12-13T00:20:09.534919241+00:00 stderr F I1213 00:20:09.534819   28750 admin_network_policy_pod.go:56] Processing sync for Pod openshift-kube-apiserver/installer-13-crc in Admin Network Policy controller
2025-12-13T00:20:09.534919241+00:00 stderr F I1213 00:20:09.534912   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ovn-kubernetes
2025-12-13T00:20:09.534949502+00:00 stderr F I1213 00:20:09.534916   28750 admin_network_policy_pod.go:59] Finished syncing Pod openshift-kube-apiserver/installer-13-crc Admin Network Policy controller: took 97.573µs
2025-12-13T00:20:09.534949502+00:00 stderr F I1213 00:20:09.534921   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ovn-kubernetes took: 2.47µs
2025-12-13T00:20:09.534963622+00:00 stderr F I1213 00:20:09.534714   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift took: 1.74µs
2025-12-13T00:20:09.534972672+00:00 stderr F I1213 00:20:09.534961   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ingress
2025-12-13T00:20:09.534981503+00:00 stderr F I1213 00:20:09.534972   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ingress took: 3.94µs
2025-12-13T00:20:09.534981503+00:00 stderr F I1213 00:20:09.534978   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-network-diagnostics
2025-12-13T00:20:09.534992583+00:00 stderr F I1213 00:20:09.534987   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-network-diagnostics took: 3.47µs
2025-12-13T00:20:09.535001163+00:00 stderr F I1213 00:20:09.534993   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-apiserver-operator
2025-12-13T00:20:09.535010173+00:00 stderr F I1213 00:20:09.535001   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-apiserver-operator took: 2.44µs
2025-12-13T00:20:09.535010173+00:00 stderr F I1213 00:20:09.534905   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-controller-manager took: 2.39µs
2025-12-13T00:20:09.535019474+00:00 stderr F I1213 00:20:09.534820   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openstack-operators took: 6.33µs
2025-12-13T00:20:09.535019474+00:00 stderr F I1213 00:20:09.535016   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-multus
2025-12-13T00:20:09.535031594+00:00 stderr F I1213 00:20:09.535025   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-multus took: 3.27µs
2025-12-13T00:20:09.535040334+00:00 stderr F I1213 00:20:09.535031   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-machine-api
2025-12-13T00:20:09.535049185+00:00 stderr F I1213 00:20:09.535040   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-machine-api took: 2.61µs
2025-12-13T00:20:09.535049185+00:00 stderr F I1213 00:20:09.534875   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-openstack-infra took: 3.41µs
2025-12-13T00:20:09.535058385+00:00 stderr F I1213 00:20:09.535048   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cloud-platform-infra
2025-12-13T00:20:09.535067005+00:00 stderr F I1213 00:20:09.535056   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cloud-platform-infra took: 2.46µs
2025-12-13T00:20:09.535067005+00:00 stderr F I1213 00:20:09.535062   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openstack
2025-12-13T00:20:09.535080465+00:00 stderr F I1213 00:20:09.535071   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openstack took: 2.56µs
2025-12-13T00:20:09.535080465+00:00 stderr F I1213 00:20:09.534868   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-route-controller-manager
2025-12-13T00:20:09.535089876+00:00 stderr F I1213 00:20:09.535082   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-route-controller-manager took: 2.11µs
2025-12-13T00:20:09.535089876+00:00 stderr F I1213 00:20:09.534862   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cluster-machine-approver
2025-12-13T00:20:09.535127877+00:00 stderr F I1213 00:20:09.535103   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cluster-machine-approver took: 3.27µs
2025-12-13T00:20:09.535127877+00:00 stderr F I1213 00:20:09.535113   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-service-ca
2025-12-13T00:20:09.535127877+00:00 stderr F I1213 00:20:09.535121   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-service-ca took: 1.15µs
2025-12-13T00:20:09.535139897+00:00 stderr F I1213 00:20:09.535126   28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-controller-manager-operator
2025-12-13T00:20:09.535139897+00:00 stderr F I1213 00:20:09.535135   28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-controller-manager-operator took: 3.49µs
2025-12-13T00:20:09.535149397+00:00 stderr F I1213 00:20:09.534850   28750
obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-authentication-operator 2025-12-13T00:20:09.535149397+00:00 stderr F I1213 00:20:09.535145 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-authentication-operator took: 2.55µs 2025-12-13T00:20:09.535158898+00:00 stderr F I1213 00:20:09.535150 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-config-managed 2025-12-13T00:20:09.535158898+00:00 stderr F I1213 00:20:09.535155 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-config-managed took: 1.24µs 2025-12-13T00:20:09.535167918+00:00 stderr F I1213 00:20:09.535160 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-operator-lifecycle-manager 2025-12-13T00:20:09.535176498+00:00 stderr F I1213 00:20:09.535167 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-operator-lifecycle-manager took: 2.76µs 2025-12-13T00:20:09.535176498+00:00 stderr F I1213 00:20:09.534798 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ovirt-infra 2025-12-13T00:20:09.535207219+00:00 stderr F I1213 00:20:09.535185 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ovirt-infra took: 8.75µs 2025-12-13T00:20:09.535207219+00:00 stderr F I1213 00:20:09.535196 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-ingress-canary 2025-12-13T00:20:09.535207219+00:00 stderr F I1213 00:20:09.535203 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-ingress-canary took: 1.31µs 2025-12-13T00:20:09.535218679+00:00 stderr F I1213 00:20:09.535208 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-scheduler 2025-12-13T00:20:09.535227619+00:00 stderr F I1213 00:20:09.535216 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-scheduler took: 2.49µs 2025-12-13T00:20:09.535227619+00:00 stderr F I1213 
00:20:09.535222 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace kube-public 2025-12-13T00:20:09.535236870+00:00 stderr F I1213 00:20:09.535228 28750 obj_retry.go:541] Creating *factory.egressIPNamespace kube-public took: 1.68µs 2025-12-13T00:20:09.535236870+00:00 stderr F I1213 00:20:09.534798 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-network-node-identity took: 2.96µs 2025-12-13T00:20:09.535250530+00:00 stderr F I1213 00:20:09.535236 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-node 2025-12-13T00:20:09.535250530+00:00 stderr F I1213 00:20:09.535242 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-node took: 1.06µs 2025-12-13T00:20:09.535250530+00:00 stderr F I1213 00:20:09.534807 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-controller-manager took: 1.71µs 2025-12-13T00:20:09.535260110+00:00 stderr F I1213 00:20:09.535250 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-controller-manager-operator 2025-12-13T00:20:09.535268711+00:00 stderr F I1213 00:20:09.535258 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-controller-manager-operator took: 2.63µs 2025-12-13T00:20:09.535268711+00:00 stderr F I1213 00:20:09.535264 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace hostpath-provisioner 2025-12-13T00:20:09.535280111+00:00 stderr F I1213 00:20:09.535272 28750 obj_retry.go:541] Creating *factory.egressIPNamespace hostpath-provisioner took: 2.18µs 2025-12-13T00:20:09.535280111+00:00 stderr F I1213 00:20:09.534764 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-console-operator took: 17.671µs 2025-12-13T00:20:09.535290891+00:00 stderr F I1213 00:20:09.535283 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-etcd 2025-12-13T00:20:09.535300621+00:00 stderr F I1213 00:20:09.535292 28750 
obj_retry.go:541] Creating *factory.egressIPNamespace openshift-etcd took: 2.81µs 2025-12-13T00:20:09.535300621+00:00 stderr F I1213 00:20:09.535297 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-service-ca-operator 2025-12-13T00:20:09.535311682+00:00 stderr F I1213 00:20:09.535303 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-service-ca-operator took: 1.5µs 2025-12-13T00:20:09.535311682+00:00 stderr F I1213 00:20:09.535307 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-apiserver 2025-12-13T00:20:09.535321062+00:00 stderr F I1213 00:20:09.534850 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kni-infra took: 19.05µs 2025-12-13T00:20:09.535321062+00:00 stderr F I1213 00:20:09.535315 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-apiserver took: 3.4µs 2025-12-13T00:20:09.535329672+00:00 stderr F I1213 00:20:09.535319 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-storage-version-migrator 2025-12-13T00:20:09.535338672+00:00 stderr F I1213 00:20:09.535329 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-storage-version-migrator took: 2.88µs 2025-12-13T00:20:09.535338672+00:00 stderr F I1213 00:20:09.535334 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-kube-apiserver 2025-12-13T00:20:09.535348113+00:00 stderr F I1213 00:20:09.535340 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-kube-apiserver took: 1.84µs 2025-12-13T00:20:09.535348113+00:00 stderr F I1213 00:20:09.535345 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cluster-storage-operator 2025-12-13T00:20:09.535357323+00:00 stderr F I1213 00:20:09.535350 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cluster-storage-operator took: 1.21µs 2025-12-13T00:20:09.535357323+00:00 stderr F I1213 
00:20:09.535354 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-oauth-apiserver 2025-12-13T00:20:09.535368533+00:00 stderr F I1213 00:20:09.535361 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-oauth-apiserver took: 2.48µs 2025-12-13T00:20:09.535368533+00:00 stderr F I1213 00:20:09.534852 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-monitoring 2025-12-13T00:20:09.535381994+00:00 stderr F I1213 00:20:09.535372 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-monitoring took: 2.8µs 2025-12-13T00:20:09.535381994+00:00 stderr F I1213 00:20:09.535376 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-vsphere-infra 2025-12-13T00:20:09.535391244+00:00 stderr F I1213 00:20:09.535382 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-vsphere-infra took: 1.78µs 2025-12-13T00:20:09.535391244+00:00 stderr F I1213 00:20:09.535386 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-cluster-samples-operator 2025-12-13T00:20:09.535400384+00:00 stderr F I1213 00:20:09.534876 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-authentication 2025-12-13T00:20:09.535400384+00:00 stderr F I1213 00:20:09.535392 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-cluster-samples-operator took: 1.33µs 2025-12-13T00:20:09.535409004+00:00 stderr F I1213 00:20:09.535401 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-authentication took: 3.44µs 2025-12-13T00:20:09.535417195+00:00 stderr F I1213 00:20:09.535409 28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-console 2025-12-13T00:20:09.535425395+00:00 stderr F I1213 00:20:09.535419 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-console took: 3.201µs 2025-12-13T00:20:09.535433975+00:00 stderr F I1213 00:20:09.535424 
28750 obj_retry.go:502] Add event received for *factory.egressIPNamespace openshift-dns-operator 2025-12-13T00:20:09.535442575+00:00 stderr F I1213 00:20:09.535432 28750 obj_retry.go:541] Creating *factory.egressIPNamespace openshift-dns-operator took: 1.79µs 2025-12-13T00:20:09.535451616+00:00 stderr F I1213 00:20:09.535444 28750 factory.go:988] Added *v1.Namespace event handler 5 2025-12-13T00:20:09.535486947+00:00 stderr F I1213 00:20:09.534926 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift in Admin Network Policy controller 2025-12-13T00:20:09.535486947+00:00 stderr F I1213 00:20:09.535469 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift Admin Network Policy controller: took 541.925µs 2025-12-13T00:20:09.535486947+00:00 stderr F I1213 00:20:09.535478 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cloud-network-config-controller in Admin Network Policy controller 2025-12-13T00:20:09.535486947+00:00 stderr F I1213 00:20:09.535482 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cloud-network-config-controller Admin Network Policy controller: took 4.07µs 2025-12-13T00:20:09.535498727+00:00 stderr F I1213 00:20:09.535487 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-controller-manager in Admin Network Policy controller 2025-12-13T00:20:09.535498727+00:00 stderr F I1213 00:20:09.535490 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-controller-manager Admin Network Policy controller: took 3.691µs 2025-12-13T00:20:09.535498727+00:00 stderr F I1213 00:20:09.535495 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-host-network in Admin Network Policy controller 2025-12-13T00:20:09.535510707+00:00 stderr F I1213 00:20:09.535505 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-host-network Admin 
Network Policy controller: took 3.34µs 2025-12-13T00:20:09.535520587+00:00 stderr F I1213 00:20:09.535509 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ingress in Admin Network Policy controller 2025-12-13T00:20:09.535520587+00:00 stderr F I1213 00:20:09.535513 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ingress Admin Network Policy controller: took 3.58µs 2025-12-13T00:20:09.535520587+00:00 stderr F I1213 00:20:09.535511 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-12-13T00:20:09.535534948+00:00 stderr F I1213 00:20:09.535518 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-console in Admin Network Policy controller 2025-12-13T00:20:09.535534948+00:00 stderr F I1213 00:20:09.535524 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-console Admin Network Policy controller: took 6.02µs 2025-12-13T00:20:09.535534948+00:00 stderr F I1213 00:20:09.535528 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-apiserver in Admin Network Policy controller 2025-12-13T00:20:09.535534948+00:00 stderr F I1213 00:20:09.535531 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-apiserver Admin Network Policy controller: took 3.33µs 2025-12-13T00:20:09.535545498+00:00 stderr F I1213 00:20:09.535536 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-controller-manager in Admin Network Policy controller 2025-12-13T00:20:09.535545498+00:00 stderr F I1213 00:20:09.535539 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-controller-manager Admin Network Policy controller: took 3.42µs 2025-12-13T00:20:09.535554788+00:00 stderr F I1213 00:20:09.535541 28750 obj_retry.go:541] Creating *factory.egressIPPod 
openshift-ingress/router-default-5c9bf7bc58-6jctv took: 14.92µs 2025-12-13T00:20:09.535554788+00:00 stderr F I1213 00:20:09.535544 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-network-diagnostics in Admin Network Policy controller 2025-12-13T00:20:09.535554788+00:00 stderr F I1213 00:20:09.535550 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/redhat-operators-zg7cl 2025-12-13T00:20:09.535564939+00:00 stderr F I1213 00:20:09.535550 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-network-diagnostics Admin Network Policy controller: took 5.98µs 2025-12-13T00:20:09.535564939+00:00 stderr F I1213 00:20:09.535559 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-service-ca in Admin Network Policy controller 2025-12-13T00:20:09.535564939+00:00 stderr F I1213 00:20:09.535560 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/redhat-operators-zg7cl took: 3.58µs 2025-12-13T00:20:09.535574999+00:00 stderr F I1213 00:20:09.535564 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-service-ca Admin Network Policy controller: took 4.831µs 2025-12-13T00:20:09.535574999+00:00 stderr F I1213 00:20:09.535566 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-scheduler/installer-7-crc 2025-12-13T00:20:09.535574999+00:00 stderr F I1213 00:20:09.535569 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace kube-node-lease in Admin Network Policy controller 2025-12-13T00:20:09.535584829+00:00 stderr F I1213 00:20:09.535573 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace kube-node-lease Admin Network Policy controller: took 4.11µs 2025-12-13T00:20:09.535584829+00:00 stderr F I1213 00:20:09.535575 28750 obj_retry.go:459] Detected object openshift-kube-scheduler/installer-7-crc of type *factory.egressIPPod in terminal 
state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.535584829+00:00 stderr F I1213 00:20:09.535578 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace kube-system in Admin Network Policy controller 2025-12-13T00:20:09.535594059+00:00 stderr F I1213 00:20:09.535583 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace kube-system Admin Network Policy controller: took 5.07µs 2025-12-13T00:20:09.535594059+00:00 stderr F I1213 00:20:09.535588 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-apiserver in Admin Network Policy controller 2025-12-13T00:20:09.535610330+00:00 stderr F I1213 00:20:09.535592 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-apiserver Admin Network Policy controller: took 3.94µs 2025-12-13T00:20:09.535610330+00:00 stderr F I1213 00:20:09.535598 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-storage-operator in Admin Network Policy controller 2025-12-13T00:20:09.535610330+00:00 stderr F I1213 00:20:09.535602 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-storage-operator Admin Network Policy controller: took 4.15µs 2025-12-13T00:20:09.535610330+00:00 stderr F I1213 00:20:09.535603 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-multus/network-metrics-daemon-qdfr4 2025-12-13T00:20:09.535610330+00:00 stderr F I1213 00:20:09.535607 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-controller-manager-operator in Admin Network Policy controller 2025-12-13T00:20:09.535620660+00:00 stderr F I1213 00:20:09.535610 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-controller-manager-operator Admin Network Policy controller: took 3.87µs 2025-12-13T00:20:09.535620660+00:00 stderr F I1213 00:20:09.535615 28750 
admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-network-node-identity in Admin Network Policy controller 2025-12-13T00:20:09.535629960+00:00 stderr F I1213 00:20:09.535618 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-network-node-identity Admin Network Policy controller: took 3.27µs 2025-12-13T00:20:09.535629960+00:00 stderr F I1213 00:20:09.535620 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-multus/network-metrics-daemon-qdfr4 took: 9.35µs 2025-12-13T00:20:09.535629960+00:00 stderr F I1213 00:20:09.535624 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-openstack-infra in Admin Network Policy controller 2025-12-13T00:20:09.535639931+00:00 stderr F I1213 00:20:09.535627 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-12-13T00:20:09.535639931+00:00 stderr F I1213 00:20:09.535635 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-12-13T00:20:09.535649021+00:00 stderr F I1213 00:20:09.535638 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp took: 2.911µs 2025-12-13T00:20:09.535649021+00:00 stderr F I1213 00:20:09.535645 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-12-13T00:20:09.535658001+00:00 stderr F I1213 00:20:09.535652 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 took: 9.68µs 2025-12-13T00:20:09.535658001+00:00 stderr F I1213 00:20:09.535654 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg took: 2.51µs 2025-12-13T00:20:09.535666681+00:00 stderr F I1213 00:20:09.535658 28750 obj_retry.go:502] Add event 
received for *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-node-brz8k 2025-12-13T00:20:09.535666681+00:00 stderr F I1213 00:20:09.535661 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-image-registry/node-ca-l92hr 2025-12-13T00:20:09.535675572+00:00 stderr F I1213 00:20:09.535666 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-node-brz8k took: 2.05µs 2025-12-13T00:20:09.535675572+00:00 stderr F I1213 00:20:09.535670 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-image-registry/node-ca-l92hr took: 1.91µs 2025-12-13T00:20:09.535685002+00:00 stderr F I1213 00:20:09.535671 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-12-13T00:20:09.535685002+00:00 stderr F I1213 00:20:09.535678 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-12-13T00:20:09.535697802+00:00 stderr F I1213 00:20:09.535681 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd took: 1.62µs 2025-12-13T00:20:09.535697802+00:00 stderr F I1213 00:20:09.535689 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 took: 3.56µs 2025-12-13T00:20:09.535697802+00:00 stderr F I1213 00:20:09.535690 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-12-13T00:20:09.535705882+00:00 stderr F I1213 00:20:09.535696 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-12-13T00:20:09.535705882+00:00 stderr F I1213 00:20:09.535699 28750 obj_retry.go:541] Creating *factory.egressIPPod 
openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr took: 1.06µs 2025-12-13T00:20:09.535713413+00:00 stderr F I1213 00:20:09.535705 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-12-13T00:20:09.535713413+00:00 stderr F I1213 00:20:09.535592 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-node-identity/network-node-identity-7xghp 2025-12-13T00:20:09.535720993+00:00 stderr F I1213 00:20:09.535712 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5 2025-12-13T00:20:09.535728183+00:00 stderr F I1213 00:20:09.535718 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-12-13T00:20:09.535728183+00:00 stderr F I1213 00:20:09.535722 28750 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5 of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.535735473+00:00 stderr F I1213 00:20:09.535729 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-operator/network-operator-767c585db5-zd56b 2025-12-13T00:20:09.535742813+00:00 stderr F I1213 00:20:09.535735 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-12-13T00:20:09.535742813+00:00 stderr F I1213 00:20:09.535702 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-12-13T00:20:09.535772654+00:00 stderr F I1213 00:20:09.535749 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b took: 6.261µs 2025-12-13T00:20:09.535772654+00:00 stderr F I1213 00:20:09.535755 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-image-registry/image-pruner-29426400-tlv26 2025-12-13T00:20:09.535772654+00:00 stderr F I1213 00:20:09.535760 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-12-13T00:20:09.535772654+00:00 stderr F I1213 00:20:09.535762 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-dns/dns-default-gbw49 2025-12-13T00:20:09.535772654+00:00 stderr F I1213 00:20:09.535764 28750 obj_retry.go:459] Detected object openshift-image-registry/image-pruner-29426400-tlv26 of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.535785535+00:00 stderr F I1213 00:20:09.535769 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf took: 2.1µs 2025-12-13T00:20:09.535785535+00:00 stderr F I1213 00:20:09.535776 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver/installer-13-crc 2025-12-13T00:20:09.535785535+00:00 stderr F I1213 00:20:09.535781 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-dns/dns-default-gbw49 took: 9.29µs 2025-12-13T00:20:09.535793595+00:00 stderr F I1213 00:20:09.535784 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-apiserver/installer-13-crc took: 4.181µs 2025-12-13T00:20:09.535793595+00:00 stderr F I1213 00:20:09.535788 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-12-13T00:20:09.535801215+00:00 stderr F I1213 00:20:09.535731 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 took: 6.27µs 2025-12-13T00:20:09.535801215+00:00 stderr F I1213 00:20:09.535793 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-12-13T00:20:09.535801215+00:00 stderr F I1213 00:20:09.535798 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh 2025-12-13T00:20:09.535811165+00:00 stderr F I1213 00:20:09.535805 28750 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.535818486+00:00 stderr F I1213 00:20:09.535723 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-node-identity/network-node-identity-7xghp took: 8.78µs 2025-12-13T00:20:09.535818486+00:00 stderr F I1213 00:20:09.535814 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/revision-pruner-10-crc 2025-12-13T00:20:09.535825976+00:00 stderr F I1213 00:20:09.535597 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-operator/iptables-alerter-wwpnd 2025-12-13T00:20:09.535833056+00:00 stderr F I1213 00:20:09.535822 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-12-13T00:20:09.535833056+00:00 stderr F I1213 00:20:09.535789 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-image-registry/image-registry-75b7bb6564-rnjvj 2025-12-13T00:20:09.535840416+00:00 stderr F I1213 00:20:09.535832 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw took: 1.82µs 2025-12-13T00:20:09.535840416+00:00 stderr F I1213 00:20:09.535836 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-image-registry/image-registry-75b7bb6564-rnjvj took: 2.19µs 2025-12-13T00:20:09.535848656+00:00 stderr F I1213 00:20:09.535823 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-10-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.535848656+00:00 stderr F I1213 00:20:09.535845 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-12-13T00:20:09.535873497+00:00 stderr F I1213 00:20:09.535854 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh took: 2.08µs 2025-12-13T00:20:09.535873497+00:00 stderr F I1213 00:20:09.535866 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-12-13T00:20:09.535883997+00:00 stderr F I1213 00:20:09.535875 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd took: 2.6µs 2025-12-13T00:20:09.535883997+00:00 stderr F I1213 00:20:09.535782 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-12-13T00:20:09.535893028+00:00 stderr F I1213 00:20:09.535887 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb took: 1.74µs 2025-12-13T00:20:09.535900168+00:00 stderr F I1213 00:20:09.535895 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-12-13T00:20:09.535907338+00:00 stderr F I1213 00:20:09.535902 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs took: 1.41µs 2025-12-13T00:20:09.535914598+00:00 stderr F I1213 00:20:09.535806 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 took: 8.82µs 2025-12-13T00:20:09.535914598+00:00 stderr F I1213 00:20:09.535908 28750 
obj_retry.go:502] Add event received for *factory.egressIPPod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-12-13T00:20:09.535921888+00:00 stderr F I1213 00:20:09.535915 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-scheduler/installer-8-crc 2025-12-13T00:20:09.535921888+00:00 stderr F I1213 00:20:09.535917 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-multus/multus-admission-controller-6c7c885997-4hbbc took: 1.95µs 2025-12-13T00:20:09.535945740+00:00 stderr F I1213 00:20:09.535922 28750 obj_retry.go:459] Detected object openshift-kube-scheduler/installer-8-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.535957230+00:00 stderr F I1213 00:20:09.535947 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-12-13T00:20:09.535957230+00:00 stderr F I1213 00:20:09.535954 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz took: 1.59µs 2025-12-13T00:20:09.535964511+00:00 stderr F I1213 00:20:09.535959 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver/installer-9-crc 2025-12-13T00:20:09.535971751+00:00 stderr F I1213 00:20:09.535963 28750 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-9-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.535971751+00:00 stderr F I1213 00:20:09.535857 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-12-13T00:20:09.536001832+00:00 stderr F I1213 00:20:09.535979 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z took: 2.35µs 2025-12-13T00:20:09.536001832+00:00 stderr F I1213 00:20:09.535991 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-12-13T00:20:09.536001832+00:00 stderr F I1213 00:20:09.535998 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg took: 1.58µs 2025-12-13T00:20:09.536012652+00:00 stderr F I1213 00:20:09.536004 28750 obj_retry.go:502] Add event received for *factory.egressIPPod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-12-13T00:20:09.536023132+00:00 stderr F I1213 00:20:09.536011 28750 obj_retry.go:541] Creating *factory.egressIPPod hostpath-provisioner/csi-hostpathplugin-hvm8g took: 1.68µs 2025-12-13T00:20:09.536023132+00:00 stderr F I1213 00:20:09.535836 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-operator/iptables-alerter-wwpnd took: 8.84µs 2025-12-13T00:20:09.536023132+00:00 stderr F I1213 00:20:09.536020 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/redhat-marketplace-nv4pl 2025-12-13T00:20:09.536033602+00:00 stderr F I1213 00:20:09.536028 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/redhat-marketplace-nv4pl took: 2.15µs 2025-12-13T00:20:09.536040803+00:00 stderr F I1213 00:20:09.536034 28750 obj_retry.go:502] Add event received for *factory.egressIPPod 
openshift-kube-controller-manager/revision-pruner-11-crc 2025-12-13T00:20:09.536047863+00:00 stderr F I1213 00:20:09.536040 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-11-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.536054703+00:00 stderr F I1213 00:20:09.536048 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/installer-10-crc 2025-12-13T00:20:09.536061553+00:00 stderr F I1213 00:20:09.536054 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-10-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.536068673+00:00 stderr F I1213 00:20:09.536060 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/community-operators-s2hxn 2025-12-13T00:20:09.536075484+00:00 stderr F I1213 00:20:09.536066 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/community-operators-s2hxn took: 1.96µs 2025-12-13T00:20:09.536099634+00:00 stderr F I1213 00:20:09.535815 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-12-13T00:20:09.536099634+00:00 stderr F I1213 00:20:09.536092 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 took: 1.47µs 2025-12-13T00:20:09.536107564+00:00 stderr F I1213 00:20:09.536099 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-12-13T00:20:09.536115055+00:00 stderr F I1213 00:20:09.536106 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/machine-config-daemon-zpnhg took: 2.09µs 2025-12-13T00:20:09.536115055+00:00 stderr F I1213 00:20:09.536112 28750 
obj_retry.go:502] Add event received for *factory.egressIPPod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-12-13T00:20:09.536124145+00:00 stderr F I1213 00:20:09.536119 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc took: 1.24µs 2025-12-13T00:20:09.536131415+00:00 stderr F I1213 00:20:09.535756 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m took: 8.48µs 2025-12-13T00:20:09.536138235+00:00 stderr F I1213 00:20:09.536129 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-etcd/etcd-crc 2025-12-13T00:20:09.536145515+00:00 stderr F I1213 00:20:09.536137 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-etcd/etcd-crc took: 1.68µs 2025-12-13T00:20:09.536145515+00:00 stderr F I1213 00:20:09.536142 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/revision-pruner-8-crc 2025-12-13T00:20:09.536154336+00:00 stderr F I1213 00:20:09.536148 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-8-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.536163916+00:00 stderr F I1213 00:20:09.536156 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-console/downloads-65476884b9-9wcvx 2025-12-13T00:20:09.536171076+00:00 stderr F I1213 00:20:09.536163 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-console/downloads-65476884b9-9wcvx took: 1.4µs 2025-12-13T00:20:09.536177906+00:00 stderr F I1213 00:20:09.536168 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-12-13T00:20:09.536185027+00:00 stderr F I1213 00:20:09.536176 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm took: 1.77µs 2025-12-13T00:20:09.536185027+00:00 stderr F I1213 00:20:09.535748 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-operator/network-operator-767c585db5-zd56b took: 10.441µs 2025-12-13T00:20:09.536192137+00:00 stderr F I1213 00:20:09.536185 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/marketplace-operator-8b455464d-kghgr 2025-12-13T00:20:09.536198937+00:00 stderr F I1213 00:20:09.536193 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/marketplace-operator-8b455464d-kghgr took: 2.02µs 2025-12-13T00:20:09.536205737+00:00 stderr F I1213 00:20:09.536199 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-console/console-644bb77b49-5x5xk 2025-12-13T00:20:09.536212537+00:00 stderr F I1213 00:20:09.536206 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-console/console-644bb77b49-5x5xk took: 1.41µs 2025-12-13T00:20:09.536219357+00:00 stderr F I1213 00:20:09.536212 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-12-13T00:20:09.536226698+00:00 stderr F I1213 
00:20:09.536218 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-service-ca/service-ca-666f99b6f-kk8kg took: 1.06µs 2025-12-13T00:20:09.536226698+00:00 stderr F I1213 00:20:09.536224 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/installer-11-crc 2025-12-13T00:20:09.536234688+00:00 stderr F I1213 00:20:09.535713 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb took: 2.37µs 2025-12-13T00:20:09.536234688+00:00 stderr F I1213 00:20:09.536230 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/installer-11-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.536242188+00:00 stderr F I1213 00:20:09.536235 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-multus/multus-q88th 2025-12-13T00:20:09.536242188+00:00 stderr F I1213 00:20:09.535628 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-openstack-infra Admin Network Policy controller: took 4.15µs 2025-12-13T00:20:09.536249308+00:00 stderr F I1213 00:20:09.536243 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-multus/multus-q88th took: 2.54µs 2025-12-13T00:20:09.536256128+00:00 stderr F I1213 00:20:09.536249 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-network-diagnostics/network-check-target-v54bt 2025-12-13T00:20:09.536263279+00:00 stderr F I1213 00:20:09.536255 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-network-diagnostics/network-check-target-v54bt took: 1.26µs 2025-12-13T00:20:09.536263279+00:00 stderr F I1213 00:20:09.536260 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/installer-10-retry-1-crc 2025-12-13T00:20:09.536270419+00:00 stderr F I1213 00:20:09.536265 28750 obj_retry.go:459] Detected object 
openshift-kube-controller-manager/installer-10-retry-1-crc of type *factory.egressIPPod in terminal state (e.g. completed) during add event: will remove it 2025-12-13T00:20:09.536279919+00:00 stderr F I1213 00:20:09.536249 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-vsphere-infra in Admin Network Policy controller 2025-12-13T00:20:09.536279919+00:00 stderr F I1213 00:20:09.536276 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-vsphere-infra Admin Network Policy controller: took 27.681µs 2025-12-13T00:20:09.536288939+00:00 stderr F I1213 00:20:09.536284 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cloud-platform-infra in Admin Network Policy controller 2025-12-13T00:20:09.536296140+00:00 stderr F I1213 00:20:09.536287 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cloud-platform-infra Admin Network Policy controller: took 3.72µs 2025-12-13T00:20:09.536296140+00:00 stderr F I1213 00:20:09.536293 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-managed in Admin Network Policy controller 2025-12-13T00:20:09.536303240+00:00 stderr F I1213 00:20:09.536297 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-managed Admin Network Policy controller: took 3.991µs 2025-12-13T00:20:09.536310700+00:00 stderr F I1213 00:20:09.536302 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-controller-manager-operator in Admin Network Policy controller 2025-12-13T00:20:09.536310700+00:00 stderr F I1213 00:20:09.536306 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-controller-manager-operator Admin Network Policy controller: took 3.47µs 2025-12-13T00:20:09.536318970+00:00 stderr F I1213 00:20:09.536310 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ovn-kubernetes in 
Admin Network Policy controller 2025-12-13T00:20:09.536318970+00:00 stderr F I1213 00:20:09.536314 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ovn-kubernetes Admin Network Policy controller: took 3.61µs 2025-12-13T00:20:09.536327560+00:00 stderr F I1213 00:20:09.536318 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-infra in Admin Network Policy controller 2025-12-13T00:20:09.536327560+00:00 stderr F I1213 00:20:09.536322 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-infra Admin Network Policy controller: took 3.46µs 2025-12-13T00:20:09.536335231+00:00 stderr F I1213 00:20:09.536326 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-scheduler in Admin Network Policy controller 2025-12-13T00:20:09.536335231+00:00 stderr F I1213 00:20:09.536330 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-scheduler Admin Network Policy controller: took 3.59µs 2025-12-13T00:20:09.536344461+00:00 stderr F I1213 00:20:09.536334 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-machine-api in Admin Network Policy controller 2025-12-13T00:20:09.536344461+00:00 stderr F I1213 00:20:09.536337 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-machine-api Admin Network Policy controller: took 3.38µs 2025-12-13T00:20:09.536344461+00:00 stderr F I1213 00:20:09.536342 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-oauth-apiserver in Admin Network Policy controller 2025-12-13T00:20:09.536352491+00:00 stderr F I1213 00:20:09.536345 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-oauth-apiserver Admin Network Policy controller: took 3.45µs 2025-12-13T00:20:09.536352491+00:00 stderr F I1213 00:20:09.536350 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace 
hostpath-provisioner in Admin Network Policy controller 2025-12-13T00:20:09.536369032+00:00 stderr F I1213 00:20:09.536353 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace hostpath-provisioner Admin Network Policy controller: took 3.21µs 2025-12-13T00:20:09.536369032+00:00 stderr F I1213 00:20:09.536357 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-samples-operator in Admin Network Policy controller 2025-12-13T00:20:09.536369032+00:00 stderr F I1213 00:20:09.536361 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-samples-operator Admin Network Policy controller: took 3.15µs 2025-12-13T00:20:09.536369032+00:00 stderr F I1213 00:20:09.536365 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config in Admin Network Policy controller 2025-12-13T00:20:09.536377142+00:00 stderr F I1213 00:20:09.536368 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config Admin Network Policy controller: took 3.32µs 2025-12-13T00:20:09.536377142+00:00 stderr F I1213 00:20:09.536373 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-dns-operator in Admin Network Policy controller 2025-12-13T00:20:09.536384412+00:00 stderr F I1213 00:20:09.536376 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-dns-operator Admin Network Policy controller: took 3.24µs 2025-12-13T00:20:09.536384412+00:00 stderr F I1213 00:20:09.536382 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace kube-public in Admin Network Policy controller 2025-12-13T00:20:09.536391692+00:00 stderr F I1213 00:20:09.536385 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace kube-public Admin Network Policy controller: took 4.37µs 2025-12-13T00:20:09.536391692+00:00 stderr F I1213 00:20:09.535705 28750 obj_retry.go:541] Creating *factory.egressIPPod 
openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv took: 2.2µs 2025-12-13T00:20:09.536399072+00:00 stderr F I1213 00:20:09.536390 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-apiserver-operator in Admin Network Policy controller 2025-12-13T00:20:09.536399072+00:00 stderr F I1213 00:20:09.536395 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-apiserver-operator Admin Network Policy controller: took 5.75µs 2025-12-13T00:20:09.536406543+00:00 stderr F I1213 00:20:09.536396 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/revision-pruner-9-crc 2025-12-13T00:20:09.536406543+00:00 stderr F I1213 00:20:09.536400 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-storage-version-migrator-operator in Admin Network Policy controller 2025-12-13T00:20:09.536406543+00:00 stderr F I1213 00:20:09.536404 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-storage-version-migrator-operator Admin Network Policy controller: took 3.66µs 2025-12-13T00:20:09.536414343+00:00 stderr F I1213 00:20:09.536406 28750 obj_retry.go:459] Detected object openshift-kube-controller-manager/revision-pruner-9-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.536414343+00:00 stderr F I1213 00:20:09.535790 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-12-13T00:20:09.536421623+00:00 stderr F I1213 00:20:09.536416 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-12-13T00:20:09.536430673+00:00 stderr F I1213 00:20:09.536424 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-scheduler/openshift-kube-scheduler-crc took: 1.7µs 2025-12-13T00:20:09.536437693+00:00 stderr F I1213 00:20:09.536408 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-node in Admin Network Policy controller 2025-12-13T00:20:09.536437693+00:00 stderr F I1213 00:20:09.536434 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-node Admin Network Policy controller: took 25.3µs 2025-12-13T00:20:09.536447124+00:00 stderr F I1213 00:20:09.536439 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-operator-lifecycle-manager in Admin Network Policy controller 2025-12-13T00:20:09.536447124+00:00 stderr F I1213 00:20:09.536442 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-operator-lifecycle-manager Admin Network Policy controller: took 3.451µs 2025-12-13T00:20:09.536454504+00:00 stderr F I1213 00:20:09.536446 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-route-controller-manager in Admin Network Policy controller 2025-12-13T00:20:09.536454504+00:00 stderr F I1213 00:20:09.536450 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-route-controller-manager Admin Network Policy controller: took 3.41µs 2025-12-13T00:20:09.536461754+00:00 stderr F I1213 00:20:09.536454 28750 
admin_network_policy_namespace.go:53] Processing sync for Namespace openstack in Admin Network Policy controller 2025-12-13T00:20:09.536461754+00:00 stderr F I1213 00:20:09.536458 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openstack Admin Network Policy controller: took 3.41µs 2025-12-13T00:20:09.536469134+00:00 stderr F I1213 00:20:09.536463 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-apiserver-operator in Admin Network Policy controller 2025-12-13T00:20:09.536469134+00:00 stderr F I1213 00:20:09.536466 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-apiserver-operator Admin Network Policy controller: took 3.43µs 2025-12-13T00:20:09.536499095+00:00 stderr F I1213 00:20:09.536480 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kube-scheduler-operator in Admin Network Policy controller 2025-12-13T00:20:09.536499095+00:00 stderr F I1213 00:20:09.536488 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kube-scheduler-operator Admin Network Policy controller: took 17.181µs 2025-12-13T00:20:09.536499095+00:00 stderr F I1213 00:20:09.536493 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-marketplace in Admin Network Policy controller 2025-12-13T00:20:09.536499095+00:00 stderr F I1213 00:20:09.536416 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 took: 1.74µs 2025-12-13T00:20:09.536508285+00:00 stderr F I1213 00:20:09.536496 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-marketplace Admin Network Policy controller: took 3.58µs 2025-12-13T00:20:09.536508285+00:00 stderr F I1213 00:20:09.536502 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 
2025-12-13T00:20:09.536508285+00:00 stderr F I1213 00:20:09.536504 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-cluster-version in Admin Network Policy controller 2025-12-13T00:20:09.536516266+00:00 stderr F I1213 00:20:09.536508 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-cluster-version Admin Network Policy controller: took 3.79µs 2025-12-13T00:20:09.536516266+00:00 stderr F I1213 00:20:09.536510 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-controller-manager/controller-manager-778975cc4f-x5vcf took: 2.28µs 2025-12-13T00:20:09.536516266+00:00 stderr F I1213 00:20:09.536513 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-console-user-settings in Admin Network Policy controller 2025-12-13T00:20:09.536526676+00:00 stderr F I1213 00:20:09.535816 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-12-13T00:20:09.536526676+00:00 stderr F I1213 00:20:09.536517 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-console-user-settings Admin Network Policy controller: took 4.371µs 2025-12-13T00:20:09.536526676+00:00 stderr F I1213 00:20:09.536523 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-etcd-operator in Admin Network Policy controller 2025-12-13T00:20:09.536534526+00:00 stderr F I1213 00:20:09.536528 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-etcd-operator Admin Network Policy controller: took 4.2µs 2025-12-13T00:20:09.536541376+00:00 stderr F I1213 00:20:09.536532 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-kni-infra in Admin Network Policy controller 2025-12-13T00:20:09.536541376+00:00 stderr F I1213 00:20:09.536536 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-kni-infra Admin Network 
Policy controller: took 4.04µs 2025-12-13T00:20:09.536548756+00:00 stderr F I1213 00:20:09.536541 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-network-operator in Admin Network Policy controller 2025-12-13T00:20:09.536548756+00:00 stderr F I1213 00:20:09.536544 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-network-operator Admin Network Policy controller: took 3.53µs 2025-12-13T00:20:09.536556027+00:00 stderr F I1213 00:20:09.536548 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-dns in Admin Network Policy controller 2025-12-13T00:20:09.536556027+00:00 stderr F I1213 00:20:09.536552 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-dns Admin Network Policy controller: took 3.4µs 2025-12-13T00:20:09.536563397+00:00 stderr F I1213 00:20:09.536556 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-machine-config-operator in Admin Network Policy controller 2025-12-13T00:20:09.536563397+00:00 stderr F I1213 00:20:09.536560 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-machine-config-operator Admin Network Policy controller: took 3.44µs 2025-12-13T00:20:09.536570857+00:00 stderr F I1213 00:20:09.536564 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-user-workload-monitoring in Admin Network Policy controller 2025-12-13T00:20:09.536570857+00:00 stderr F I1213 00:20:09.536568 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-user-workload-monitoring Admin Network Policy controller: took 3.43µs 2025-12-13T00:20:09.536578227+00:00 stderr F I1213 00:20:09.536572 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openstack-operators in Admin Network Policy controller 2025-12-13T00:20:09.536585387+00:00 stderr F I1213 00:20:09.536576 28750 admin_network_policy_namespace.go:56] 
Finished syncing Namespace openstack-operators Admin Network Policy controller: took 3.32µs 2025-12-13T00:20:09.536585387+00:00 stderr F I1213 00:20:09.536581 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-console-operator in Admin Network Policy controller 2025-12-13T00:20:09.536593258+00:00 stderr F I1213 00:20:09.536584 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-console-operator Admin Network Policy controller: took 3.36µs 2025-12-13T00:20:09.536593258+00:00 stderr F I1213 00:20:09.536588 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-etcd in Admin Network Policy controller 2025-12-13T00:20:09.536593258+00:00 stderr F I1213 00:20:09.536523 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-multus/multus-additional-cni-plugins-bzj2p took: 2.14µs 2025-12-13T00:20:09.536616458+00:00 stderr F I1213 00:20:09.536592 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-etcd Admin Network Policy controller: took 3.331µs 2025-12-13T00:20:09.536616458+00:00 stderr F I1213 00:20:09.536595 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-dns/node-resolver-dn27q 2025-12-13T00:20:09.536616458+00:00 stderr F I1213 00:20:09.536597 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ingress-operator in Admin Network Policy controller 2025-12-13T00:20:09.536616458+00:00 stderr F I1213 00:20:09.536602 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ingress-operator Admin Network Policy controller: took 4.96µs 2025-12-13T00:20:09.536616458+00:00 stderr F I1213 00:20:09.536607 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-operators in Admin Network Policy controller 2025-12-13T00:20:09.536616458+00:00 stderr F I1213 00:20:09.536610 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace 
openshift-operators Admin Network Policy controller: took 3.29µs 2025-12-13T00:20:09.536625369+00:00 stderr F I1213 00:20:09.536614 28750 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-ovirt-infra in Admin Network Policy controller 2025-12-13T00:20:09.536625369+00:00 stderr F I1213 00:20:09.536618 28750 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-ovirt-infra Admin Network Policy controller: took 3.91µs 2025-12-13T00:20:09.536625369+00:00 stderr F I1213 00:20:09.536602 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-dns/node-resolver-dn27q took: 1.9µs 2025-12-13T00:20:09.536633069+00:00 stderr F I1213 00:20:09.536628 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-12-13T00:20:09.536641579+00:00 stderr F I1213 00:20:09.536636 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t took: 3.1µs 2025-12-13T00:20:09.536649149+00:00 stderr F I1213 00:20:09.536641 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2025-12-13T00:20:09.536649149+00:00 stderr F I1213 00:20:09.536646 28750 obj_retry.go:459] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.536656259+00:00 stderr F I1213 00:20:09.535809 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-dns-operator/dns-operator-75f687757b-nz2xb took: 8.29µs 2025-12-13T00:20:09.536663070+00:00 stderr F I1213 00:20:09.536657 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-12-13T00:20:09.536669900+00:00 stderr F I1213 00:20:09.536664 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 took: 1.891µs 2025-12-13T00:20:09.536676720+00:00 stderr F I1213 00:20:09.536668 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-12-13T00:20:09.536683810+00:00 stderr F I1213 00:20:09.536675 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b took: 1.8µs 2025-12-13T00:20:09.536683810+00:00 stderr F I1213 00:20:09.536679 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-12-13T00:20:09.536691330+00:00 stderr F I1213 00:20:09.536685 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv took: 1.03µs 2025-12-13T00:20:09.536698791+00:00 stderr F I1213 00:20:09.536689 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-ingress-canary/ingress-canary-2vhcn 2025-12-13T00:20:09.536698791+00:00 stderr F I1213 00:20:09.536696 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-ingress-canary/ingress-canary-2vhcn took: 1.82µs 2025-12-13T00:20:09.536708721+00:00 stderr F I1213 00:20:09.536429 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 
2025-12-13T00:20:09.536715741+00:00 stderr F I1213 00:20:09.536706 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh took: 2.35µs 2025-12-13T00:20:09.536715741+00:00 stderr F I1213 00:20:09.536711 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver/kube-apiserver-crc 2025-12-13T00:20:09.536723131+00:00 stderr F I1213 00:20:09.536717 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-apiserver/kube-apiserver-crc took: 1.22µs 2025-12-13T00:20:09.536723131+00:00 stderr F I1213 00:20:09.535837 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-12-13T00:20:09.536732141+00:00 stderr F I1213 00:20:09.536727 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 took: 2.66µs 2025-12-13T00:20:09.536739262+00:00 stderr F I1213 00:20:09.536732 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-apiserver/installer-12-crc 2025-12-13T00:20:09.536739262+00:00 stderr F I1213 00:20:09.536736 28750 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-12-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) during add event: will remove it 2025-12-13T00:20:09.536746492+00:00 stderr F I1213 00:20:09.535588 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-marketplace/certified-operators-lcrg8 2025-12-13T00:20:09.536754892+00:00 stderr F I1213 00:20:09.536749 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-marketplace/certified-operators-lcrg8 took: 3.08µs 2025-12-13T00:20:09.536761702+00:00 stderr F I1213 00:20:09.536754 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/machine-config-server-v65wr 2025-12-13T00:20:09.536768812+00:00 stderr F I1213 00:20:09.536760 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/machine-config-server-v65wr took: 1.9µs 2025-12-13T00:20:09.536768812+00:00 stderr F I1213 00:20:09.536765 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-kube-controller-manager/kube-controller-manager-crc 2025-12-13T00:20:09.536775873+00:00 stderr F I1213 00:20:09.536770 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-kube-controller-manager/kube-controller-manager-crc took: 1.52µs 2025-12-13T00:20:09.536782963+00:00 stderr F I1213 00:20:09.536775 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-12-13T00:20:09.536790063+00:00 stderr F I1213 00:20:09.536780 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-machine-config-operator/kube-rbac-proxy-crio-crc took: 1.43µs 2025-12-13T00:20:09.536790063+00:00 stderr F I1213 00:20:09.536785 28750 obj_retry.go:502] Add event received for *factory.egressIPPod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-12-13T00:20:09.536797103+00:00 stderr F I1213 00:20:09.536791 28750 obj_retry.go:541] Creating *factory.egressIPPod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz took: 1.37µs 
2025-12-13T00:20:09.536805483+00:00 stderr F I1213 00:20:09.536800 28750 factory.go:988] Added *v1.Pod event handler 6 2025-12-13T00:20:09.536894056+00:00 stderr F I1213 00:20:09.536859 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.217.0.0/22 && ip4.dst == 10.217.0.0/22 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0aa7f47a-d2ec-4d10-8558-5ef3bcc81b0d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.536927557+00:00 stderr F I1213 00:20:09.536901 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:0aa7f47a-d2ec-4d10-8558-5ef3bcc81b0d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.536956398+00:00 stderr F I1213 00:20:09.536919 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.217.0.0/22 && ip4.dst == 10.217.0.0/22 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0aa7f47a-d2ec-4d10-8558-5ef3bcc81b0d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:0aa7f47a-d2ec-4d10-8558-5ef3bcc81b0d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.537360498+00:00 stderr F I1213 00:20:09.537322 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.217.0.0/22 && ip4.dst == 100.64.0.0/16 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{56cc3414-2f3a-494a-bb5b-fac89ab63a76}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.537409240+00:00 stderr F I1213 00:20:09.537374 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:56cc3414-2f3a-494a-bb5b-fac89ab63a76}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.537453911+00:00 stderr F I1213 00:20:09.537407 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.217.0.0/22 && ip4.dst == 100.64.0.0/16 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {56cc3414-2f3a-494a-bb5b-fac89ab63a76}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:56cc3414-2f3a-494a-bb5b-fac89ab63a76}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.537529323+00:00 stderr F I1213 00:20:09.537393 28750 gateway_init.go:324] Initializing Gateway Functionality 2025-12-13T00:20:09.538661994+00:00 stderr F I1213 00:20:09.538624 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Policy Row:map[action:allow external_ids:{GoMap:map[ip-family:ip k8s.ovn.org/id:default-network-controller:EgressIP:102:EIP-No-Reroute-reply-traffic:ip k8s.ovn.org/name:EIP-No-Reroute-reply-traffic k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressIP priority:102]} match:pkt.mark == 42 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {725b51e4-bb6e-431f-9362-bdc8357c6b0d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.538731416+00:00 stderr F I1213 00:20:09.538711 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:725b51e4-bb6e-431f-9362-bdc8357c6b0d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.538785348+00:00 stderr F I1213 00:20:09.538755 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Policy Row:map[action:allow external_ids:{GoMap:map[ip-family:ip k8s.ovn.org/id:default-network-controller:EgressIP:102:EIP-No-Reroute-reply-traffic:ip k8s.ovn.org/name:EIP-No-Reroute-reply-traffic k8s.ovn.org/owner-controller:default-network-controller k8s.ovn.org/owner-type:EgressIP priority:102]} match:pkt.mark == 42 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {725b51e4-bb6e-431f-9362-bdc8357c6b0d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:725b51e4-bb6e-431f-9362-bdc8357c6b0d}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.539244301+00:00 stderr F I1213 00:20:09.539229 28750 address_set.go:304] New(89261db8-e0fa-461c-a902-652c89cbed51/default-network-controller:EgressIP:node-ips:v4/a14918748166599097711) with [] 2025-12-13T00:20:09.539292252+00:00 stderr F I1213 00:20:09.539282 28750 address_set.go:304] New(b27c19cc-d9d0-4d57-a5a8-06fcff438e8a/default-network-controller:EgressIP:egressip-served-pods:v4/a4548040316634674295) with [] 2025-12-13T00:20:09.539329333+00:00 stderr F I1213 00:20:09.539320 28750 address_set.go:304] 
New(6ad154d5-3275-4190-8d21-ff202885643c/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with [] 2025-12-13T00:20:09.539416415+00:00 stderr F I1213 00:20:09.539404 28750 obj_retry.go:502] Add event received for *factory.egressNode crc 2025-12-13T00:20:09.539490698+00:00 stderr F I1213 00:20:09.539368 28750 gateway_localnet.go:181] Node local addresses initialized to: map[10.217.0.2:{10.217.0.0 fffffe00} 127.0.0.1:{127.0.0.0 ff000000} 169.254.169.2:{169.254.169.0 fffffff8} 172.17.0.5:{172.17.0.0 ffffff00} 172.18.0.5:{172.18.0.0 ffffff00} 172.19.0.5:{172.19.0.0 ffffff00} 192.168.122.10:{192.168.122.0 ffffff00} 192.168.126.11:{192.168.126.0 ffffff00} 38.102.83.51:{38.102.83.0 ffffff00} ::1:{::1 ffffffffffffffffffffffffffffffff} fe80::1023:a8ff:fe0d:1c0:{fe80:: ffffffffffffffff0000000000000000} fe80::148c:73ff:feb1:cb9a:{fe80:: ffffffffffffffff0000000000000000} fe80::2007:bbff:fe14:203:{fe80:: ffffffffffffffff0000000000000000} fe80::2057:5eff:fe5b:c2f:{fe80:: ffffffffffffffff0000000000000000} fe80::247d:e2ff:fe22:d846:{fe80:: ffffffffffffffff0000000000000000} fe80::30ec:68ff:fed3:4cd7:{fe80:: ffffffffffffffff0000000000000000} fe80::34a2:75ff:fede:a788:{fe80:: ffffffffffffffff0000000000000000} fe80::4077:27ff:fed5:b510:{fe80:: ffffffffffffffff0000000000000000} fe80::48f4:1bff:fe2d:92c7:{fe80:: ffffffffffffffff0000000000000000} fe80::58d2:5aff:fe5b:4911:{fe80:: ffffffffffffffff0000000000000000} fe80::60e3:98ff:fe8e:75d9:{fe80:: ffffffffffffffff0000000000000000} fe80::6478:88ff:febe:803:{fe80:: ffffffffffffffff0000000000000000} fe80::68a7:6eff:fea9:b5fc:{fe80:: ffffffffffffffff0000000000000000} fe80::6ccc:9fff:fe46:8b02:{fe80:: ffffffffffffffff0000000000000000} fe80::6cd5:45ff:fe2a:82e8:{fe80:: ffffffffffffffff0000000000000000} fe80::7070:7eff:fe61:88e5:{fe80:: ffffffffffffffff0000000000000000} fe80::70a9:1fff:fe8c:6471:{fe80:: ffffffffffffffff0000000000000000} fe80::7439:20ff:fe3e:2556:{fe80:: ffffffffffffffff0000000000000000} 
fe80::780a:1dff:fec5:3746:{fe80:: ffffffffffffffff0000000000000000} fe80::7f:22ff:fe64:409c:{fe80:: ffffffffffffffff0000000000000000} fe80::8002:e7ff:fe25:f905:{fe80:: ffffffffffffffff0000000000000000} fe80::801f:a7ff:fe11:250c:{fe80:: ffffffffffffffff0000000000000000} fe80::80d3:9cff:fe92:3e8b:{fe80:: ffffffffffffffff0000000000000000} fe80::868:89ff:fea2:74e9:{fe80:: ffffffffffffffff0000000000000000} fe80::88be:c8ff:fe29:f0e9:{fe80:: ffffffffffffffff0000000000000000} fe80::8c8f:5dff:feb4:819:{fe80:: ffffffffffffffff0000000000000000} fe80::9481:cff:feac:e6d1:{fe80:: ffffffffffffffff0000000000000000} fe80::ac90:26ff:fe47:9aa3:{fe80:: ffffffffffffffff0000000000000000} fe80::b07b:76ff:fec9:a47e:{fe80:: ffffffffffffffff0000000000000000} fe80::b088:3cff:fe49:b32b:{fe80:: ffffffffffffffff0000000000000000} fe80::b0fc:8eff:fef5:2c97:{fe80:: ffffffffffffffff0000000000000000} fe80::b46b:9cff:fe3d:957a:{fe80:: ffffffffffffffff0000000000000000} fe80::b4dc:d9ff:fe26:3d4:{fe80:: ffffffffffffffff0000000000000000} fe80::bc31:1c1e:2091:e239:{fe80:: ffffffffffffffff0000000000000000} fe80::bc4f:2bff:feab:d678:{fe80:: ffffffffffffffff0000000000000000} fe80::bcdb:3ff:feb5:77f:{fe80:: ffffffffffffffff0000000000000000} fe80::c09a:8bff:fe34:b47:{fe80:: ffffffffffffffff0000000000000000} fe80::c8c6:4aff:fe5e:7614:{fe80:: ffffffffffffffff0000000000000000} fe80::cc6b:c0ff:fe1a:2913:{fe80:: ffffffffffffffff0000000000000000} fe80::d06c:89ff:fe77:51e8:{fe80:: ffffffffffffffff0000000000000000} fe80::d0ee:f3ff:fe55:729d:{fe80:: ffffffffffffffff0000000000000000} fe80::dc64:ff:fe3a:4454:{fe80:: ffffffffffffffff0000000000000000} fe80::e00a:46ff:fe4e:aaee:{fe80:: ffffffffffffffff0000000000000000} fe80::e819:8aff:fea9:b2bd:{fe80:: ffffffffffffffff0000000000000000} fe80::e8a9:ceff:fe64:1ebe:{fe80:: ffffffffffffffff0000000000000000} fe80::ec17:3dff:fef7:51e9:{fe80:: ffffffffffffffff0000000000000000} fe80::ec9b:e0ff:fee0:7ef1:{fe80:: ffffffffffffffff0000000000000000} fe80::f83e:e4ff:fe43:e61:{fe80:: 
ffffffffffffffff0000000000000000} fe80::fcf7:d9ff:feb9:e947:{fe80:: ffffffffffffffff0000000000000000}] 2025-12-13T00:20:09.539505718+00:00 stderr F I1213 00:20:09.539488 28750 ovs.go:159] Exec(23): /usr/bin/ovs-vsctl --timeout=15 port-to-br br-ex 2025-12-13T00:20:09.539779865+00:00 stderr F I1213 00:20:09.539743 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[172.17.0.5 172.18.0.5 172.19.0.5 192.168.122.10 192.168.126.11 38.102.83.51]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {89261db8-e0fa-461c-a902-652c89cbed51}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.539830737+00:00 stderr F I1213 00:20:09.539805 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[172.17.0.5 172.18.0.5 172.19.0.5 192.168.122.10 192.168.126.11 38.102.83.51]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {89261db8-e0fa-461c-a902-652c89cbed51}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.540459944+00:00 stderr F I1213 00:20:09.540414 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:(ip4.src == $a4548040316634674295 || ip4.src == $a13607449821398607916) && ip4.dst == $a14918748166599097711 options:{GoMap:map[pkt_mark:1008]} priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6eea91c7-d03c-41d8-906d-a701fa21697a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.540491095+00:00 stderr F I1213 00:20:09.540467 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:6eea91c7-d03c-41d8-906d-a701fa21697a}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.540514345+00:00 stderr F I1213 00:20:09.540486 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Policy Row:map[action:allow match:(ip4.src == $a4548040316634674295 || ip4.src == $a13607449821398607916) && ip4.dst == $a14918748166599097711 options:{GoMap:map[pkt_mark:1008]} priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6eea91c7-d03c-41d8-906d-a701fa21697a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:6eea91c7-d03c-41d8-906d-a701fa21697a}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.540860085+00:00 stderr F I1213 00:20:09.540845 28750 egressip.go:1280] Egress node: crc about to be initialized 2025-12-13T00:20:09.540982498+00:00 stderr F I1213 00:20:09.540959 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {80147caa-a146-46b4-bd15-0b6299f65979}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.541041340+00:00 stderr F I1213 00:20:09.541022 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:80147caa-a146-46b4-bd15-0b6299f65979}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.541080081+00:00 stderr F I1213 00:20:09.541057 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {80147caa-a146-46b4-bd15-0b6299f65979}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:80147caa-a146-46b4-bd15-0b6299f65979}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.541383429+00:00 stderr F I1213 00:20:09.541356 28750 obj_retry.go:541] Creating *factory.egressNode crc took: 1.914492ms 2025-12-13T00:20:09.541395449+00:00 stderr F I1213 00:20:09.541381 28750 factory.go:988] Added *v1.Node event handler 7 2025-12-13T00:20:09.541435680+00:00 stderr F I1213 00:20:09.541416 28750 factory.go:988] Added *v1.EgressIP event handler 8 2025-12-13T00:20:09.541692088+00:00 stderr F I1213 00:20:09.541659 28750 factory.go:988] Added *v1.EgressFirewall event handler 9 2025-12-13T00:20:09.541764810+00:00 stderr F I1213 00:20:09.541740 28750 obj_retry.go:502] Add event received for *factory.egressFwNode crc 2025-12-13T00:20:09.541812332+00:00 stderr F I1213 00:20:09.541793 28750 obj_retry.go:541] Creating *factory.egressFwNode crc took: 33.061µs 2025-12-13T00:20:09.541819852+00:00 stderr F I1213 00:20:09.541809 28750 factory.go:988] Added *v1.Node event handler 10 2025-12-13T00:20:09.541826872+00:00 stderr F I1213 00:20:09.541821 28750 egressqos.go:193] Setting up event handlers for EgressQoS 2025-12-13T00:20:09.542301475+00:00 stderr F I1213 00:20:09.542271 28750 egressqos.go:245] Starting EgressQoS Controller 2025-12-13T00:20:09.542320496+00:00 stderr F I1213 00:20:09.542299 28750 shared_informer.go:311] Waiting for caches to sync for egressqosnodes 2025-12-13T00:20:09.542320496+00:00 stderr F I1213 00:20:09.542307 28750 shared_informer.go:318] Caches are synced for egressqosnodes 2025-12-13T00:20:09.542320496+00:00 stderr F I1213 00:20:09.542313 28750 shared_informer.go:311] Waiting for 
caches to sync for egressqospods 2025-12-13T00:20:09.542320496+00:00 stderr F I1213 00:20:09.542317 28750 shared_informer.go:318] Caches are synced for egressqospods 2025-12-13T00:20:09.542331216+00:00 stderr F I1213 00:20:09.542321 28750 shared_informer.go:311] Waiting for caches to sync for egressqos 2025-12-13T00:20:09.542331216+00:00 stderr F I1213 00:20:09.542326 28750 shared_informer.go:318] Caches are synced for egressqos 2025-12-13T00:20:09.542355526+00:00 stderr F I1213 00:20:09.542331 28750 egressqos.go:259] Repairing EgressQoSes 2025-12-13T00:20:09.542355526+00:00 stderr F I1213 00:20:09.542335 28750 egressqos.go:400] Starting repairing loop for egressqos 2025-12-13T00:20:09.542487640+00:00 stderr F I1213 00:20:09.542462 28750 egressqos.go:402] Finished repairing loop for egressqos: 121.393µs 2025-12-13T00:20:09.542518221+00:00 stderr F I1213 00:20:09.542499 28750 egressservice_zone.go:129] Setting up event handlers for Egress Services 2025-12-13T00:20:09.542553542+00:00 stderr F I1213 00:20:09.542530 28750 egressqos.go:1008] Processing sync for EgressQoS node crc 2025-12-13T00:20:09.542742367+00:00 stderr F I1213 00:20:09.542720 28750 egressservice_zone.go:205] Starting Egress Services Controller 2025-12-13T00:20:09.542742367+00:00 stderr F I1213 00:20:09.542734 28750 shared_informer.go:311] Waiting for caches to sync for egressservices 2025-12-13T00:20:09.542742367+00:00 stderr F I1213 00:20:09.542740 28750 shared_informer.go:318] Caches are synced for egressservices 2025-12-13T00:20:09.542751247+00:00 stderr F I1213 00:20:09.542743 28750 shared_informer.go:311] Waiting for caches to sync for egressservices_services 2025-12-13T00:20:09.542751247+00:00 stderr F I1213 00:20:09.542747 28750 shared_informer.go:318] Caches are synced for egressservices_services 2025-12-13T00:20:09.542758217+00:00 stderr F I1213 00:20:09.542752 28750 shared_informer.go:311] Waiting for caches to sync for egressservices_endpointslices 2025-12-13T00:20:09.542758217+00:00 
stderr F I1213 00:20:09.542756 28750 shared_informer.go:318] Caches are synced for egressservices_endpointslices 2025-12-13T00:20:09.542765098+00:00 stderr F I1213 00:20:09.542761 28750 shared_informer.go:311] Waiting for caches to sync for egressservices_nodes 2025-12-13T00:20:09.542771738+00:00 stderr F I1213 00:20:09.542765 28750 shared_informer.go:318] Caches are synced for egressservices_nodes 2025-12-13T00:20:09.542771738+00:00 stderr F I1213 00:20:09.542769 28750 egressservice_zone.go:223] Repairing Egress Services 2025-12-13T00:20:09.542797789+00:00 stderr F I1213 00:20:09.542762 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/marketplace-operator-metrics for endpointslice openshift-marketplace/marketplace-operator-metrics-fcwkk as it is not a known egress service 2025-12-13T00:20:09.542805809+00:00 stderr F I1213 00:20:09.542796 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-cluster-version/cluster-version-operator for endpointslice openshift-cluster-version/cluster-version-operator-qt7zf as it is not a known egress service 2025-12-13T00:20:09.542814549+00:00 stderr F I1213 00:20:09.542809 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-config-operator/metrics for endpointslice openshift-config-operator/metrics-tw775 as it is not a known egress service 2025-12-13T00:20:09.542821819+00:00 stderr F I1213 00:20:09.542815 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-ingress/router-internal-default for endpointslice openshift-ingress/router-internal-default-29hv8 as it is not a known egress service 2025-12-13T00:20:09.543394985+00:00 stderr F I1213 00:20:09.542547 28750 egressqos.go:1023] EgressQoS crc node retrieved from lister: &Node{ObjectMeta:{crc c83c88d3-f34d-4083-a59d-1c50f90f89b8 42788 0 2024-06-26 12:44:56 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:crc 
kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node-role.kubernetes.io/worker: node.openshift.io/os_id:rhcos topology.hostpath.csi/node:crc] map[csi.volume.kubernetes.io/nodeid:{"kubevirt.io.hostpath-provisioner":"crc"} k8s.ovn.org/host-cidrs:["172.17.0.5/24","172.18.0.5/24","172.19.0.5/24","192.168.122.10/24","192.168.126.11/24","38.102.83.51/24"] k8s.ovn.org/l3-gateway-config:{"default":{"mode":"shared","interface-id":"br-ex_crc","mac-address":"fa:16:3e:f0:63:3e","ip-addresses":["38.102.83.51/24"],"ip-address":"38.102.83.51/24","next-hops":["38.102.83.1"],"next-hop":"38.102.83.1","node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/network-ids:{"default":"0"} k8s.ovn.org/node-chassis-id:017e52b0-97d3-4d7d-aae4-9b216aa025aa k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-mgmt-port-mac-address:b6:dc:d9:26:03:d4 k8s.ovn.org/node-primary-ifaddr:{"ipv4":"38.102.83.51/24"} k8s.ovn.org/node-subnets:{"default":["10.217.0.0/23"]} k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"} k8s.ovn.org/remote-zone-migrated:crc k8s.ovn.org/zone-name:crc machineconfiguration.openshift.io/controlPlaneTopology:SingleReplica machineconfiguration.openshift.io/currentConfig:rendered-master-11405dc064e9fc83a779a06d1cd665b3 machineconfiguration.openshift.io/desiredConfig:rendered-master-ef556ead28ddfad01c34ac56c7adfb5a machineconfiguration.openshift.io/desiredDrain:uncordon-rendered-master-11405dc064e9fc83a779a06d1cd665b3 machineconfiguration.openshift.io/lastAppliedDrain:uncordon-rendered-master-11405dc064e9fc83a779a06d1cd665b3 machineconfiguration.openshift.io/lastObservedServerCAAnnotation:false machineconfiguration.openshift.io/lastSyncedControllerConfigResourceVersion:42173 machineconfiguration.openshift.io/post-config-action: machineconfiguration.openshift.io/reason:missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 
2025-12-13T00:20:09.543394985+00:00 stderr P machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found machineconfiguration.openshift.io/state:Degraded volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{12 0} {} 12 DecimalSI},ephemeral-storage: {{85294297088 0} {} 83295212Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{33654132736 0} {} 32865364Ki BinarySI},pods: {{250 0} {} 250 DecimalSI},},Allocatable:ResourceList{cpu: {{11800 -3} {} 11800m DecimalSI},ephemeral-storage: {{76397865653 0} {} 76397865653 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{33182273536 0} {} 32404564Ki BinarySI},pods: {{250 0} {} 250 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-12-13 00:20:06 +0000 UTC,LastTransitionTime:2025-08-13 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-12-13 00:20:06 +0000 UTC,LastTransitionTime:2025-08-13 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-12-13 00:20:06 +0000 UTC,LastTransitionTime:2025-08-13 19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-12-13 00:20:06 +0000 UTC,LastTransitionTime:2025-12-13 00:12:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.126.11,},NodeAddress{Type:Hostname,Address:crc,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1bd596843fb445da20eca66471ddf66,SystemUUID:02c5855c-ecb0-48ea-8e20-d08d70e9697e,BootID:6b04f785-78a9-47bd-b1b0-d69223fec89b,KernelVersion:5.14.0-427.22.1.el9_4.x86_64,OSImage:Red Hat Enterprise Linux CoreOS 416.94.202406172220-0,ContainerRuntimeVersion:cri-o://1.29.5-5.rhaos4.16.git7032128.el9,KubeletVersion:v1.29.5+29c95f3,KubeProxyVersion:v1.29.5+29c95f3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.redhat.io/redhat/redhat-operator-index@sha256:03469650d23dff51e7a1d05cc8854775dbd5b7854daa08b5f2f2e8f376acca3e registry.redhat.io/redhat/redhat-operator-index@sha256:f51142639e4917509a82190a100e3a7046e62600f7123821ff8a7485dc10f5a9 registry.redhat.io/redhat/redhat-operator-index:v4.16],SizeBytes:3890909497,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-operator-index@sha256:98affcb112cdd069bfed8c7b5f300a1252e5b67d78a89515108331d589de6390],SizeBytes:3571551444,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f],SizeBytes:2572133253,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6],SizeBytes:2121001615,},ContainerImage{Names:[registry.redhat.io/redhat/community-operator-index@sha256:5c1c84b5681e3c3e40a57b63ba5a6656cdb58907f595c7e8b90184d39b848d1d registry.redhat.io/redhat/community-operator-index@sha256:ae3355f9ade3bca8b03d806cd0100d7215b77ac2bcab081c8eb71af51f334adc 
registry.redhat.io/redhat/community-operator-index:v4.16],SizeBytes:1938250566,},ContainerImage{Names:[registry.redhat.io/redhat/community-operator-index@sha256:6baf614a7bf44076cbfcf4d04b52541e8eaaf096cfaeb88b1ccccfb8d8b9d50a],SizeBytes:1858356929,},ContainerImage{Names:[registry.redhat.io/redhat/certified-operator-index@sha256:5e0f5b3b5a7b975f26275f0e3d565872ab130fd81b1b879938eb5b1a500bbeea registry.redhat.io/redhat/certified-operator-index@sha256:f8bde4296d4ec76a61c8d61c56f6ee32cd1c9489e5d6f3e30b1b96427409578a registry.redhat.io/redhat/certified-operator-index:v4.16],SizeBytes:1721905473,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-marketplace-index@sha256:548b0f6afa60b733fb6504e94ef9ced28d4d51382ba708b8897f546966661967 registry.redhat.io/redhat/redhat-marketplace-index@sha256:8de9b589084ccdc5ffce244985a039bbb2da52aea676c4bf424ec38958f03b1c registry.redhat.io/redhat/redhat-marketplace-index:v4.16],SizeBytes:1522824949,},ContainerImage{Names:[registry.redhat.io/redhat/certified-operator-index@sha256:c465a1b8318dfd33120b2692dd1f6aae7db4b6e69f50720cb1391d55fadec562],SizeBytes:1487797447,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-marketplace-index@sha256:55a23334bbbe439e5a7e305ed720325ac4192d078b06fe8be277fe1eff62c533],SizeBytes:1458020021,},ContainerImage{Names:[registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b],SizeBytes:1374511543,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009],SizeBytes:1346691049,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca],SizeBytes:1222078702,},ContainerImage{Names:[registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f],SizeBytes:1116811194,},ContainerImage{Names:[quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842],SizeBytes:1067242914,},ContainerImage{Names:[registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0cc 2025-12-13T00:20:09.543423826+00:00 stderr F f6512d],SizeBytes:993487271,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a],SizeBytes:874809222,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2],SizeBytes:829474731,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251],SizeBytes:826261505,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8],SizeBytes:823328808,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2],SizeBytes:775169417,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648],SizeBytes:685289316,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73],SizeBytes:677900529,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae],SizeBytes:654603911,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc],SizeBytes:596693555,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78],SizeBytes:568208801,},ContainerImage{Names:[quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce],SizeBytes:562097717,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403],SizeBytes:541135334,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d],SizeBytes:539461335,},ContainerImage{Names:[registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8 registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9 registry.redhat.io/openshift4/ose-csi-external-provisioner:latest],SizeBytes:520763795,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3],SizeBytes:507363664,},ContainerImage{Names:[quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69],SizeBytes:503433479,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3],SizeBytes:503286020,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce],SizeBytes:502054492,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b],SizeBytes:501535327,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75],SizeBytes:501474997,},ContainerImage{Names:[quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f]
,SizeBytes:499981426,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72],SizeBytes:498615097,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791],SizeBytes:498403671,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611],SizeBytes:497554071,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f],SizeBytes:497168817,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa],SizeBytes:497128745,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc],SizeBytes:496236158,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d],SizeBytes:495929820,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15],SizeBytes:494198000,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730],SizeBytes:493495521,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208],SizeBytes:492229908,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8],SizeBytes:488729683,},ContainerImage{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99],SizeBytes:487322445,},Container
Image{Names:[quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0],SizeBytes:484252300,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} 2025-12-13T00:20:09.543423826+00:00 stderr F I1213 00:20:09.543015 28750 egressqos.go:1011] Finished syncing EgressQoS node crc : 485.993µs 2025-12-13T00:20:09.543423826+00:00 stderr F I1213 00:20:09.543178 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6ad154d5-3275-4190-8d21-ff202885643c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.543423826+00:00 stderr F I1213 00:20:09.543220 28750 transact.go:42] Configuring OVN: [{Op:update Table:Address_Set Row:map[addresses:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6ad154d5-3275-4190-8d21-ff202885643c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.543994151+00:00 stderr F I1213 00:20:09.543954 28750 master_controller.go:86] Starting Admin Policy Based Route Controller 2025-12-13T00:20:09.543994151+00:00 stderr F I1213 00:20:09.543973 28750 external_controller.go:276] Starting Admin Policy Based Route Controller 2025-12-13T00:20:09.544075043+00:00 stderr F I1213 00:20:09.543959 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/cluster-autoscaler-operator for endpointslice openshift-machine-api/cluster-autoscaler-operator-r4g5l as it is not a known egress service 2025-12-13T00:20:09.544075043+00:00 stderr F I1213 00:20:09.544070 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-operator for endpointslice openshift-machine-config-operator/machine-config-operator-p8xmw as it is not a known egress service 2025-12-13T00:20:09.544099054+00:00 stderr F I1213 00:20:09.544080 28750 
egressservice_zone_endpointslice.go:80] Ignoring updating openshift-operator-lifecycle-manager/packageserver-service for endpointslice openshift-operator-lifecycle-manager/packageserver-service-tlm8t as it is not a known egress service 2025-12-13T00:20:09.544099054+00:00 stderr F I1213 00:20:09.544095 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-apiserver/check-endpoints for endpointslice openshift-apiserver/check-endpoints-sbfp5 as it is not a known egress service 2025-12-13T00:20:09.544109544+00:00 stderr F I1213 00:20:09.544104 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-controller-manager/controller-manager for endpointslice openshift-controller-manager/controller-manager-kxmft as it is not a known egress service 2025-12-13T00:20:09.544117154+00:00 stderr F I1213 00:20:09.544105 28750 default_network_controller.go:572] Completing all the Watchers took 145.519743ms 2025-12-13T00:20:09.544117154+00:00 stderr F I1213 00:20:09.544111 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-dns-operator/metrics for endpointslice openshift-dns-operator/metrics-cxk8j as it is not a known egress service 2025-12-13T00:20:09.544127685+00:00 stderr F I1213 00:20:09.544118 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-operator-lifecycle-manager/catalog-operator-metrics for endpointslice openshift-operator-lifecycle-manager/catalog-operator-metrics-fqfm8 as it is not a known egress service 2025-12-13T00:20:09.544127685+00:00 stderr F I1213 00:20:09.544123 28750 default_network_controller.go:576] Starting unidling controllers 2025-12-13T00:20:09.544135125+00:00 stderr F I1213 00:20:09.544126 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-operator-lifecycle-manager/package-server-manager-metrics for endpointslice openshift-operator-lifecycle-manager/package-server-manager-metrics-mq66p as it is not a known egress service 
2025-12-13T00:20:09.544142215+00:00 stderr F I1213 00:20:09.544135 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/control-plane-machine-set-operator for endpointslice openshift-machine-api/control-plane-machine-set-operator-nmjkn as it is not a known egress service 2025-12-13T00:20:09.544149255+00:00 stderr F I1213 00:20:09.544142 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/community-operators for endpointslice openshift-marketplace/community-operators-h826k as it is not a known egress service 2025-12-13T00:20:09.544156325+00:00 stderr F I1213 00:20:09.544149 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-scheduler-operator/metrics for endpointslice openshift-kube-scheduler-operator/metrics-zk8d6 as it is not a known egress service 2025-12-13T00:20:09.544163146+00:00 stderr F I1213 00:20:09.544157 28750 unidle.go:45] Registering OVN SB ControllerEvent handler 2025-12-13T00:20:09.544171676+00:00 stderr F I1213 00:20:09.544166 28750 unidle.go:62] Populating Initial ContollerEvent events 2025-12-13T00:20:09.544195577+00:00 stderr F I1213 00:20:09.544180 28750 unidle.go:78] Setting up event handlers for services 2025-12-13T00:20:09.544279569+00:00 stderr F I1213 00:20:09.544258 28750 network_attach_def_controller.go:134] Starting network-controller-manager NAD controller 2025-12-13T00:20:09.544305760+00:00 stderr F I1213 00:20:09.544157 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-daemon for endpointslice openshift-machine-config-operator/machine-config-daemon-2nvnz as it is not a known egress service 2025-12-13T00:20:09.544305760+00:00 stderr F I1213 00:20:09.544302 28750 shared_informer.go:311] Waiting for caches to sync for network-controller-manager 2025-12-13T00:20:09.544314000+00:00 stderr F I1213 00:20:09.544308 28750 egressservice_zone_endpointslice.go:80] Ignoring updating 
openshift-console-operator/metrics for endpointslice openshift-console-operator/metrics-rdlxc as it is not a known egress service 2025-12-13T00:20:09.544322770+00:00 stderr F I1213 00:20:09.544317 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-controller-manager-operator/metrics for endpointslice openshift-controller-manager-operator/metrics-psf8p as it is not a known egress service 2025-12-13T00:20:09.544330080+00:00 stderr F I1213 00:20:09.544322 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-ingress-canary/ingress-canary for endpointslice openshift-ingress-canary/ingress-canary-rhnd4 as it is not a known egress service 2025-12-13T00:20:09.544337170+00:00 stderr F I1213 00:20:09.544330 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-apiserver-operator/metrics for endpointslice openshift-kube-apiserver-operator/metrics-kbv55 as it is not a known egress service 2025-12-13T00:20:09.544344172+00:00 stderr F I1213 00:20:09.544337 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/redhat-operators for endpointslice openshift-marketplace/redhat-operators-47g6l as it is not a known egress service 2025-12-13T00:20:09.544354402+00:00 stderr F I1213 00:20:09.544344 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-multus/multus-admission-controller for endpointslice openshift-multus/multus-admission-controller-s6h4d as it is not a known egress service 2025-12-13T00:20:09.544354402+00:00 stderr F I1213 00:20:09.544351 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-authentication-operator/metrics for endpointslice openshift-authentication-operator/metrics-dp499 as it is not a known egress service 2025-12-13T00:20:09.544388143+00:00 stderr F I1213 00:20:09.544363 28750 egressservice_zone_node.go:110] Processing sync for Egress Service node crc 2025-12-13T00:20:09.544396363+00:00 stderr F I1213 00:20:09.544385 
28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd 2025-12-13T00:20:09.544396363+00:00 stderr F I1213 00:20:09.544389 28750 egressservice_zone_node.go:113] Finished syncing Egress Service node crc: 28.671µs 2025-12-13T00:20:09.544403753+00:00 stderr F I1213 00:20:09.544397 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-12-13T00:20:09.544410903+00:00 stderr F I1213 00:20:09.544402 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-12-13T00:20:09.544410903+00:00 stderr F I1213 00:20:09.544407 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-12-13T00:20:09.544418204+00:00 stderr F I1213 00:20:09.544411 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console/console for endpointslice openshift-console/console-wg4kr as it is not a known egress service 2025-12-13T00:20:09.544425214+00:00 stderr F I1213 00:20:09.544419 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/machine-api-controllers for endpointslice openshift-machine-api/machine-api-controllers-j9jjt as it is not a known egress service 2025-12-13T00:20:09.544432234+00:00 stderr F I1213 00:20:09.544427 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-oauth-apiserver/api for endpointslice openshift-oauth-apiserver/api-2pj4d as it is not a known egress service 2025-12-13T00:20:09.544439394+00:00 stderr F I1213 00:20:09.544412 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/community-operators-s2hxn 2025-12-13T00:20:09.544439394+00:00 stderr F I1213 00:20:09.544435 
28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd-operator/metrics for endpointslice openshift-etcd-operator/metrics-z62zm as it is not a known egress service 2025-12-13T00:20:09.544446914+00:00 stderr F I1213 00:20:09.544437 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-12-13T00:20:09.544446914+00:00 stderr F I1213 00:20:09.544442 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-dns/dns-default for endpointslice openshift-dns/dns-default-lctlx as it is not a known egress service 2025-12-13T00:20:09.544454505+00:00 stderr F I1213 00:20:09.544444 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-12-13T00:20:09.544454505+00:00 stderr F I1213 00:20:09.544450 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-image-registry/image-registry for endpointslice openshift-image-registry/image-registry-cfsrx as it is not a known egress service 2025-12-13T00:20:09.544465395+00:00 stderr F I1213 00:20:09.544452 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver/installer-9-crc 2025-12-13T00:20:09.544465395+00:00 stderr F I1213 00:20:09.544459 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-service-ca-operator/metrics for endpointslice openshift-service-ca-operator/metrics-wrkrj as it is not a known egress service 2025-12-13T00:20:09.544465395+00:00 stderr F I1213 00:20:09.544460 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-12-13T00:20:09.544474345+00:00 stderr F I1213 00:20:09.544460 28750 reflector.go:289] Starting reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-12-13T00:20:09.544474345+00:00 stderr F I1213 00:20:09.544470 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-12-13T00:20:09.544482315+00:00 stderr F I1213 00:20:09.544471 28750 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-12-13T00:20:09.544482315+00:00 stderr F I1213 00:20:09.544475 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/revision-pruner-9-crc 2025-12-13T00:20:09.544489696+00:00 stderr F I1213 00:20:09.544480 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-12-13T00:20:09.544489696+00:00 stderr F I1213 00:20:09.544486 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-multus/multus-q88th 2025-12-13T00:20:09.544496966+00:00 stderr F I1213 00:20:09.544491 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-12-13T00:20:09.544504106+00:00 stderr F I1213 00:20:09.544496 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-diagnostics/network-check-target-v54bt 2025-12-13T00:20:09.544504106+00:00 stderr F I1213 00:20:09.544500 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-12-13T00:20:09.544511386+00:00 stderr F I1213 00:20:09.544505 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver/installer-12-crc 2025-12-13T00:20:09.544519086+00:00 stderr F I1213 00:20:09.544511 
28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-12-13T00:20:09.544519086+00:00 stderr F I1213 00:20:09.544467 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-apiserver-operator/metrics for endpointslice openshift-apiserver-operator/metrics-sgtfh as it is not a known egress service 2025-12-13T00:20:09.544526387+00:00 stderr F I1213 00:20:09.544517 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-12-13T00:20:09.544533637+00:00 stderr F I1213 00:20:09.544526 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/installer-10-retry-1-crc 2025-12-13T00:20:09.544533637+00:00 stderr F I1213 00:20:09.544529 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-etcd/etcd for endpointslice openshift-etcd/etcd-8wmzv as it is not a known egress service 2025-12-13T00:20:09.544540877+00:00 stderr F I1213 00:20:09.544532 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/installer-11-crc 2025-12-13T00:20:09.544550537+00:00 stderr F I1213 00:20:09.544539 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-12-13T00:20:09.544550537+00:00 stderr F I1213 00:20:09.544539 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/machine-api-operator-webhook for endpointslice openshift-machine-api/machine-api-operator-webhook-x4gjx as it is not a known egress service 2025-12-13T00:20:09.544550537+00:00 stderr F I1213 00:20:09.544546 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-12-13T00:20:09.544558537+00:00 stderr F I1213 
00:20:09.544549 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-controller for endpointslice openshift-machine-config-operator/machine-config-controller-7t8hc as it is not a known egress service 2025-12-13T00:20:09.544558537+00:00 stderr F I1213 00:20:09.544552 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-12-13T00:20:09.544566388+00:00 stderr F I1213 00:20:09.544558 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/redhat-marketplace for endpointslice openshift-marketplace/redhat-marketplace-8k279 as it is not a known egress service 2025-12-13T00:20:09.544566388+00:00 stderr F I1213 00:20:09.544559 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-12-13T00:20:09.544573948+00:00 stderr F I1213 00:20:09.544568 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/machine-api-operator-machine-webhook for endpointslice openshift-machine-api/machine-api-operator-machine-webhook-xj4tp as it is not a known egress service 2025-12-13T00:20:09.544581198+00:00 stderr F I1213 00:20:09.544570 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ingress-canary/ingress-canary-2vhcn 2025-12-13T00:20:09.544581198+00:00 stderr F I1213 00:20:09.544576 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-marketplace/certified-operators for endpointslice openshift-marketplace/certified-operators-bw9bv as it is not a known egress service 2025-12-13T00:20:09.544588698+00:00 stderr F I1213 00:20:09.544578 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-12-13T00:20:09.544588698+00:00 stderr F I1213 00:20:09.544583 28750 
egressservice_zone_endpointslice.go:80] Ignoring updating openshift-apiserver/api for endpointslice openshift-apiserver/api-7hq6z as it is not a known egress service 2025-12-13T00:20:09.544596198+00:00 stderr F I1213 00:20:09.544585 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver/kube-apiserver-crc 2025-12-13T00:20:09.544596198+00:00 stderr F I1213 00:20:09.544592 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-storage-version-migrator-operator/metrics for endpointslice openshift-kube-storage-version-migrator-operator/metrics-zbxs7 as it is not a known egress service 2025-12-13T00:20:09.544603449+00:00 stderr F I1213 00:20:09.544593 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-node-identity/network-node-identity-7xghp 2025-12-13T00:20:09.544610659+00:00 stderr F I1213 00:20:09.544601 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-operator/iptables-alerter-wwpnd 2025-12-13T00:20:09.544610659+00:00 stderr F I1213 00:20:09.544602 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-operator-lifecycle-manager/olm-operator-metrics for endpointslice openshift-operator-lifecycle-manager/olm-operator-metrics-vql58 as it is not a known egress service 2025-12-13T00:20:09.544620259+00:00 stderr F I1213 00:20:09.544608 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-operator/network-operator-767c585db5-zd56b 2025-12-13T00:20:09.544620259+00:00 stderr F I1213 00:20:09.544611 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-scheduler/scheduler for endpointslice openshift-kube-scheduler/scheduler-4wbzh as it is not a known egress service 2025-12-13T00:20:09.544627579+00:00 stderr F I1213 00:20:09.544617 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-dns/dns-default-gbw49 
2025-12-13T00:20:09.544634670+00:00 stderr F I1213 00:20:09.544626 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-12-13T00:20:09.544641930+00:00 stderr F I1213 00:20:09.544632 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-multus/network-metrics-daemon-qdfr4 2025-12-13T00:20:09.544641930+00:00 stderr F I1213 00:20:09.544639 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-etcd/etcd-crc 2025-12-13T00:20:09.544650910+00:00 stderr F I1213 00:20:09.544645 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5 2025-12-13T00:20:09.544658010+00:00 stderr F I1213 00:20:09.544651 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-12-13T00:20:09.544665120+00:00 stderr F I1213 00:20:09.544657 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-12-13T00:20:09.544671981+00:00 stderr F I1213 00:20:09.544664 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-12-13T00:20:09.544678841+00:00 stderr F I1213 00:20:09.544671 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-12-13T00:20:09.544685941+00:00 stderr F I1213 00:20:09.544677 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/redhat-operators-zg7cl 2025-12-13T00:20:09.544693251+00:00 stderr F I1213 00:20:09.544683 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh 
2025-12-13T00:20:09.544693251+00:00 stderr F I1213 00:20:09.544619 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console/downloads for endpointslice openshift-console/downloads-zsr67 as it is not a known egress service 2025-12-13T00:20:09.544700691+00:00 stderr F I1213 00:20:09.544690 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-12-13T00:20:09.544700691+00:00 stderr F I1213 00:20:09.544696 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-network-diagnostics/network-check-target for endpointslice openshift-network-diagnostics/network-check-target-kqkjk as it is not a known egress service 2025-12-13T00:20:09.544708122+00:00 stderr F I1213 00:20:09.544698 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-image-registry/image-pruner-29426400-tlv26 2025-12-13T00:20:09.544708122+00:00 stderr F I1213 00:20:09.544704 28750 egressservice_zone_endpointslice.go:80] Ignoring updating default/kubernetes for endpointslice default/kubernetes as it is not a known egress service 2025-12-13T00:20:09.544715542+00:00 stderr F I1213 00:20:09.544705 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-12-13T00:20:09.544715542+00:00 stderr F I1213 00:20:09.544711 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console-operator/webhook for endpointslice openshift-console-operator/webhook-b7j7h as it is not a known egress service 2025-12-13T00:20:09.544725402+00:00 stderr F I1213 00:20:09.544713 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/revision-pruner-10-crc 2025-12-13T00:20:09.544725402+00:00 stderr F I1213 00:20:09.544719 28750 egressservice_zone_endpointslice.go:80] Ignoring updating 
openshift-ingress-operator/metrics for endpointslice openshift-ingress-operator/metrics-cd48g as it is not a known egress service 2025-12-13T00:20:09.544733642+00:00 stderr F I1213 00:20:09.544720 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-12-13T00:20:09.544733642+00:00 stderr F I1213 00:20:09.544726 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-controller-manager/kube-controller-manager for endpointslice openshift-kube-controller-manager/kube-controller-manager-fcp2k as it is not a known egress service 2025-12-13T00:20:09.544741632+00:00 stderr F I1213 00:20:09.544730 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/marketplace-operator-8b455464d-kghgr 2025-12-13T00:20:09.544741632+00:00 stderr F I1213 00:20:09.544735 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-authentication/oauth-openshift for endpointslice openshift-authentication/oauth-openshift-6gdxk as it is not a known egress service 2025-12-13T00:20:09.544749293+00:00 stderr F I1213 00:20:09.544738 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-multus/multus-additional-cni-plugins-bzj2p 2025-12-13T00:20:09.544749293+00:00 stderr F I1213 00:20:09.544743 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-route-controller-manager/route-controller-manager for endpointslice openshift-route-controller-manager/route-controller-manager-64jvm as it is not a known egress service 2025-12-13T00:20:09.544756983+00:00 stderr F I1213 00:20:09.544746 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/certified-operators-lcrg8 2025-12-13T00:20:09.544756983+00:00 stderr F I1213 00:20:09.544751 28750 egressservice_zone_endpointslice.go:80] Ignoring updating 
openshift-kube-apiserver/apiserver for endpointslice openshift-kube-apiserver/apiserver-5mvtf as it is not a known egress service 2025-12-13T00:20:09.544764283+00:00 stderr F I1213 00:20:09.544753 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-12-13T00:20:09.544764283+00:00 stderr F I1213 00:20:09.544761 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-12-13T00:20:09.544771823+00:00 stderr F I1213 00:20:09.544766 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-image-registry/image-registry-75b7bb6564-rnjvj 2025-12-13T00:20:09.544778993+00:00 stderr F I1213 00:20:09.544769 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-kube-controller-manager-operator/metrics for endpointslice openshift-kube-controller-manager-operator/metrics-cz5rv as it is not a known egress service 2025-12-13T00:20:09.544778993+00:00 stderr F I1213 00:20:09.544773 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-scheduler/installer-7-crc 2025-12-13T00:20:09.544786324+00:00 stderr F I1213 00:20:09.544778 28750 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-api/machine-api-operator for endpointslice openshift-machine-api/machine-api-operator-2js9r as it is not a known egress service 2025-12-13T00:20:09.544786324+00:00 stderr F I1213 00:20:09.544779 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-scheduler/installer-8-crc 2025-12-13T00:20:09.544796284+00:00 stderr F I1213 00:20:09.544787 28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/machine-config-server-v65wr 2025-12-13T00:20:09.544796284+00:00 stderr F I1213 00:20:09.544792 28750 external_controller_pod.go:21] APB queuing policies: map[] for 
pod: openshift-console-operator/console-conversion-webhook-595f9969b-l6z49
2025-12-13T00:20:09.544803564+00:00 stderr F I1213 00:20:09.544797   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z
2025-12-13T00:20:09.544810674+00:00 stderr F I1213 00:20:09.544802   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/revision-pruner-8-crc
2025-12-13T00:20:09.544817525+00:00 stderr F I1213 00:20:09.544808   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/kube-controller-manager-crc
2025-12-13T00:20:09.544824795+00:00 stderr F I1213 00:20:09.544815   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/revision-pruner-11-crc
2025-12-13T00:20:09.544831835+00:00 stderr F I1213 00:20:09.544822   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw
2025-12-13T00:20:09.544831835+00:00 stderr F I1213 00:20:09.544828   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-marketplace/redhat-marketplace-nv4pl
2025-12-13T00:20:09.544840885+00:00 stderr F I1213 00:20:09.544834   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ovn-kubernetes/ovnkube-node-brz8k
2025-12-13T00:20:09.544847735+00:00 stderr F I1213 00:20:09.544841   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-console-operator/console-operator-5dbbc74dc9-cp5cd
2025-12-13T00:20:09.544855146+00:00 stderr F I1213 00:20:09.544846   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-console/downloads-65476884b9-9wcvx
2025-12-13T00:20:09.544855146+00:00 stderr F I1213 00:20:09.544852   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-apiserver/installer-13-crc
2025-12-13T00:20:09.544879326+00:00 stderr F I1213 00:20:09.544860   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-console/console-644bb77b49-5x5xk
2025-12-13T00:20:09.544879326+00:00 stderr F I1213 00:20:09.544872   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-dns/node-resolver-dn27q
2025-12-13T00:20:09.544887436+00:00 stderr F I1213 00:20:09.544878   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs
2025-12-13T00:20:09.544887436+00:00 stderr F I1213 00:20:09.544884   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46
2025-12-13T00:20:09.544894737+00:00 stderr F I1213 00:20:09.544890   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-etcd-operator/etcd-operator-768d5b5d86-722mg
2025-12-13T00:20:09.544901587+00:00 stderr F I1213 00:20:09.544896   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz
2025-12-13T00:20:09.544908407+00:00 stderr F I1213 00:20:09.544902   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t
2025-12-13T00:20:09.544962178+00:00 stderr F I1213 00:20:09.544918   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-kube-controller-manager/installer-10-crc
2025-12-13T00:20:09.544962178+00:00 stderr F I1213 00:20:09.544932   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/kube-rbac-proxy-crio-crc
2025-12-13T00:20:09.544962178+00:00 stderr F I1213 00:20:09.544953   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh
2025-12-13T00:20:09.544977409+00:00 stderr F I1213 00:20:09.544960   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7
2025-12-13T00:20:09.544977409+00:00 stderr F I1213 00:20:09.544966   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b
2025-12-13T00:20:09.544977409+00:00 stderr F I1213 00:20:09.544972   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg
2025-12-13T00:20:09.544985289+00:00 stderr F I1213 00:20:09.544978   28750 external_controller_pod.go:21] APB queuing policies: map[] for pod: openshift-image-registry/node-ca-l92hr
2025-12-13T00:20:09.545008110+00:00 stderr F I1213 00:20:09.544374   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-infra
2025-12-13T00:20:09.545044951+00:00 stderr F I1213 00:20:09.545031   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-scheduler
2025-12-13T00:20:09.545079202+00:00 stderr F I1213 00:20:09.545067   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-machine-api
2025-12-13T00:20:09.545107202+00:00 stderr F I1213 00:20:09.545097   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-oauth-apiserver
2025-12-13T00:20:09.545131183+00:00 stderr F I1213 00:20:09.545122   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: hostpath-provisioner
2025-12-13T00:20:09.545154724+00:00 stderr F I1213 00:20:09.545145   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-samples-operator
2025-12-13T00:20:09.545178354+00:00 stderr F I1213 00:20:09.545169   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-config
2025-12-13T00:20:09.545202105+00:00 stderr F I1213 00:20:09.545192   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-dns-operator
2025-12-13T00:20:09.545225636+00:00 stderr F I1213 00:20:09.545216   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openstack
2025-12-13T00:20:09.545249236+00:00 stderr F I1213 00:20:09.545240   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: kube-public
2025-12-13T00:20:09.545272877+00:00 stderr F I1213 00:20:09.545263   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-apiserver-operator
2025-12-13T00:20:09.545296308+00:00 stderr F I1213 00:20:09.545287   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-storage-version-migrator-operator
2025-12-13T00:20:09.545319918+00:00 stderr F I1213 00:20:09.545311   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-node
2025-12-13T00:20:09.545343339+00:00 stderr F I1213 00:20:09.545334   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-operator-lifecycle-manager
2025-12-13T00:20:09.545366679+00:00 stderr F I1213 00:20:09.545357   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-route-controller-manager
2025-12-13T00:20:09.545390380+00:00 stderr F I1213 00:20:09.545381   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-apiserver-operator
2025-12-13T00:20:09.545417821+00:00 stderr F I1213 00:20:09.545408   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-scheduler-operator
2025-12-13T00:20:09.545443832+00:00 stderr F I1213 00:20:09.545434   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-marketplace
2025-12-13T00:20:09.545467522+00:00 stderr F I1213 00:20:09.545458   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-version
2025-12-13T00:20:09.545492733+00:00 stderr F I1213 00:20:09.545483   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-console-user-settings
2025-12-13T00:20:09.545516254+00:00 stderr F I1213 00:20:09.545507   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-etcd-operator
2025-12-13T00:20:09.545539744+00:00 stderr F I1213 00:20:09.545530   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kni-infra
2025-12-13T00:20:09.545563105+00:00 stderr F I1213 00:20:09.545554   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-network-operator
2025-12-13T00:20:09.545586555+00:00 stderr F I1213 00:20:09.545577   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-dns
2025-12-13T00:20:09.545609856+00:00 stderr F I1213 00:20:09.545600   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-machine-config-operator
2025-12-13T00:20:09.545633717+00:00 stderr F I1213 00:20:09.545624   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-user-workload-monitoring
2025-12-13T00:20:09.545657227+00:00 stderr F I1213 00:20:09.545648   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openstack-operators
2025-12-13T00:20:09.545680478+00:00 stderr F I1213 00:20:09.545671   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-console-operator
2025-12-13T00:20:09.545704019+00:00 stderr F I1213 00:20:09.545695   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-etcd
2025-12-13T00:20:09.545727319+00:00 stderr F I1213 00:20:09.545718   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ingress-operator
2025-12-13T00:20:09.545750430+00:00 stderr F I1213 00:20:09.545741   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-operators
2025-12-13T00:20:09.545773821+00:00 stderr F I1213 00:20:09.545764   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ovirt-infra
2025-12-13T00:20:09.545797411+00:00 stderr F I1213 00:20:09.545788   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-authentication-operator
2025-12-13T00:20:09.545820742+00:00 stderr F I1213 00:20:09.545811   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-machine-approver
2025-12-13T00:20:09.545844112+00:00 stderr F I1213 00:20:09.545835   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-storage-version-migrator
2025-12-13T00:20:09.545867413+00:00 stderr F I1213 00:20:09.545858   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-monitoring
2025-12-13T00:20:09.545890784+00:00 stderr F I1213 00:20:09.545881   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-nutanix-infra
2025-12-13T00:20:09.545914434+00:00 stderr F I1213 00:20:09.545905   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ingress-canary
2025-12-13T00:20:09.545977356+00:00 stderr F I1213 00:20:09.545964   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-multus
2025-12-13T00:20:09.546098909+00:00 stderr F I1213 00:20:09.546081   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-authentication
2025-12-13T00:20:09.546129770+00:00 stderr F I1213 00:20:09.546120   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-config-operator
2025-12-13T00:20:09.546154181+00:00 stderr F I1213 00:20:09.546145   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-image-registry
2025-12-13T00:20:09.546181552+00:00 stderr F I1213 00:20:09.546170   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-service-ca-operator
2025-12-13T00:20:09.546214192+00:00 stderr F I1213 00:20:09.546202   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: default
2025-12-13T00:20:09.546239623+00:00 stderr F I1213 00:20:09.546230   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift
2025-12-13T00:20:09.546263724+00:00 stderr F I1213 00:20:09.546254   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cloud-network-config-controller
2025-12-13T00:20:09.546287544+00:00 stderr F I1213 00:20:09.546278   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-controller-manager
2025-12-13T00:20:09.546320935+00:00 stderr F I1213 00:20:09.546311   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-host-network
2025-12-13T00:20:09.546345326+00:00 stderr F I1213 00:20:09.546336   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ingress
2025-12-13T00:20:09.546369247+00:00 stderr F I1213 00:20:09.546360   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-console
2025-12-13T00:20:09.546394127+00:00 stderr F I1213 00:20:09.546384   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-apiserver
2025-12-13T00:20:09.546417768+00:00 stderr F I1213 00:20:09.546408   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-controller-manager
2025-12-13T00:20:09.546442169+00:00 stderr F I1213 00:20:09.546432   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-network-diagnostics
2025-12-13T00:20:09.546469259+00:00 stderr F I1213 00:20:09.546460   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-openstack-infra
2025-12-13T00:20:09.546500000+00:00 stderr F I1213 00:20:09.546490   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-service-ca
2025-12-13T00:20:09.546530751+00:00 stderr F I1213 00:20:09.546521   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: kube-node-lease
2025-12-13T00:20:09.546557982+00:00 stderr F I1213 00:20:09.546548   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: kube-system
2025-12-13T00:20:09.546588103+00:00 stderr F I1213 00:20:09.546578   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-apiserver
2025-12-13T00:20:09.546616883+00:00 stderr F I1213 00:20:09.546607   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cluster-storage-operator
2025-12-13T00:20:09.546646884+00:00 stderr F I1213 00:20:09.546637   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-kube-controller-manager-operator
2025-12-13T00:20:09.546676435+00:00 stderr F I1213 00:20:09.546667   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-network-node-identity
2025-12-13T00:20:09.546713976+00:00 stderr F I1213 00:20:09.546704   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-vsphere-infra
2025-12-13T00:20:09.546744757+00:00 stderr F I1213 00:20:09.546735   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-cloud-platform-infra
2025-12-13T00:20:09.546774488+00:00 stderr F I1213 00:20:09.546765   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-config-managed
2025-12-13T00:20:09.546803638+00:00 stderr F I1213 00:20:09.546794   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-controller-manager-operator
2025-12-13T00:20:09.546833099+00:00 stderr F I1213 00:20:09.546823   28750 external_controller_namespace.go:16] APB queuing policies: map[] for namespace: openshift-ovn-kubernetes
2025-12-13T00:20:09.546855910+00:00 stderr F I1213 00:20:09.546287   28750 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117
2025-12-13T00:20:09.548468745+00:00 stderr F I1213 00:20:09.548428   28750 ovs.go:162] Exec(23): stdout: ""
2025-12-13T00:20:09.548468745+00:00 stderr F I1213 00:20:09.548447   28750 ovs.go:163] Exec(23): stderr: "ovs-vsctl: no port named br-ex\n"
2025-12-13T00:20:09.548468745+00:00 stderr F I1213 00:20:09.548452   28750 ovs.go:165] Exec(23): err: exit status 1
2025-12-13T00:20:09.548591678+00:00 stderr F I1213 00:20:09.548565   28750 helper_linux.go:92] Provided gateway interface "br-ex", found as index: 11
2025-12-13T00:20:09.548720312+00:00 stderr F I1213 00:20:09.548697   28750 helper_linux.go:117] Found default gateway interface br-ex 38.102.83.1
2025-12-13T00:20:09.548893806+00:00 stderr F I1213 00:20:09.548869   28750 gateway_init.go:370] Preparing Local Gateway
2025-12-13T00:20:09.548893806+00:00 stderr F I1213 00:20:09.548881   28750 gateway_localnet.go:26] Creating new local gateway
2025-12-13T00:20:09.548901767+00:00 stderr F I1213 00:20:09.548897   28750 iptables.go:108] Creating table: filter chain: FORWARD
2025-12-13T00:20:09.551233591+00:00 stderr F I1213 00:20:09.551194   28750 iptables.go:110] Chain: "FORWARD" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N FORWARD --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:09.554404998+00:00 stderr F I1213 00:20:09.554365   28750 iptables.go:121] Adding rule in table: filter, chain: FORWARD with args: "-o ovn-k8s-mp0 -j ACCEPT" for protocol: 0
2025-12-13T00:20:09.559474208+00:00 stderr F I1213 00:20:09.559420   28750 iptables.go:121] Adding rule in table: filter, chain: FORWARD with args: "-i ovn-k8s-mp0 -j ACCEPT" for protocol: 0
2025-12-13T00:20:09.561720041+00:00 stderr F I1213 00:20:09.561676   28750 iptables.go:108] Creating table: filter chain: INPUT
2025-12-13T00:20:09.563708545+00:00 stderr F I1213 00:20:09.563670   28750 iptables.go:110] Chain: "INPUT" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N INPUT --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:09.566042229+00:00 stderr F I1213 00:20:09.566012   28750 iptables.go:121] Adding rule in table: filter, chain: INPUT with args: "-i ovn-k8s-mp0 -m comment --comment from OVN to localhost -j ACCEPT" for protocol: 0
2025-12-13T00:20:09.568650251+00:00 stderr F I1213 00:20:09.568589   28750 iptables.go:108] Creating table: nat chain: POSTROUTING
2025-12-13T00:20:09.570627446+00:00 stderr F I1213 00:20:09.570584   28750 iptables.go:110] Chain: "POSTROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N POSTROUTING --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:09.580946621+00:00 stderr F I1213 00:20:09.580889   28750 iptables.go:121] Adding rule in table: nat, chain: POSTROUTING with args: "-s 169.254.169.1 -j MASQUERADE" for protocol: 0
2025-12-13T00:20:09.586040720+00:00 stderr F I1213 00:20:09.586007   28750 iptables.go:121] Adding rule in table: nat, chain: POSTROUTING with args: "-s 10.217.0.0/23 -j MASQUERADE" for protocol: 0
2025-12-13T00:20:09.588923340+00:00 stderr F I1213 00:20:09.588882   28750 ovs.go:159] Exec(24): /usr/bin/ovs-vsctl --timeout=15 port-to-br br-ex
2025-12-13T00:20:09.594630117+00:00 stderr F I1213 00:20:09.594588   28750 shared_informer.go:318] Caches are synced for node-tracker-controller
2025-12-13T00:20:09.594630117+00:00 stderr F I1213 00:20:09.594605   28750 services_controller.go:184] Setting up event handlers for services
2025-12-13T00:20:09.594683949+00:00 stderr F I1213 00:20:09.594664   28750 services_controller.go:194] Setting up event handlers for endpoint slices
2025-12-13T00:20:09.594744430+00:00 stderr F I1213 00:20:09.594718   28750 services_controller.go:204] Waiting for service and endpoint handlers to sync
2025-12-13T00:20:09.594744430+00:00 stderr F I1213 00:20:09.594731   28750 shared_informer.go:311] Waiting for caches to sync for ovn-lb-controller
2025-12-13T00:20:09.594755341+00:00 stderr F I1213 00:20:09.594740   28750 services_controller.go:551] Adding service openshift-image-registry/image-registry-operator
2025-12-13T00:20:09.594825544+00:00 stderr F I1213 00:20:09.594806   28750 services_controller.go:551] Adding service openshift-marketplace/redhat-marketplace
2025-12-13T00:20:09.594825544+00:00 stderr F I1213 00:20:09.594818   28750 services_controller.go:551] Adding service openshift-apiserver-operator/metrics
2025-12-13T00:20:09.594834244+00:00 stderr F I1213 00:20:09.594825   28750 services_controller.go:551] Adding service openshift-console-operator/webhook
2025-12-13T00:20:09.594841174+00:00 stderr F I1213 00:20:09.594832   28750 services_controller.go:551] Adding service openshift-etcd/etcd
2025-12-13T00:20:09.594849074+00:00 stderr F I1213 00:20:09.594839   28750 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-daemon
2025-12-13T00:20:09.594849074+00:00 stderr F I1213 00:20:09.594846   28750 services_controller.go:551] Adding service openshift-service-ca-operator/metrics
2025-12-13T00:20:09.594871735+00:00 stderr F I1213 00:20:09.594855   28750 services_controller.go:551] Adding service openshift-console/downloads
2025-12-13T00:20:09.594871735+00:00 stderr F I1213 00:20:09.594865   28750 services_controller.go:551] Adding service openshift-dns-operator/metrics
2025-12-13T00:20:09.594879745+00:00 stderr F I1213 00:20:09.594873   28750 services_controller.go:551] Adding service openshift-ingress-canary/ingress-canary
2025-12-13T00:20:09.594886855+00:00 stderr F I1213 00:20:09.594879   28750 services_controller.go:551] Adding service openshift-machine-api/cluster-autoscaler-operator
2025-12-13T00:20:09.594886855+00:00 stderr F I1213 00:20:09.594884   28750 services_controller.go:551] Adding service openshift-marketplace/certified-operators
2025-12-13T00:20:09.594895765+00:00 stderr F I1213 00:20:09.594890   28750 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/catalog-operator-metrics
2025-12-13T00:20:09.594902746+00:00 stderr F I1213 00:20:09.594896   28750 services_controller.go:551] Adding service openshift-kube-controller-manager-operator/metrics
2025-12-13T00:20:09.594909726+00:00 stderr F I1213 00:20:09.594904   28750 services_controller.go:551] Adding service openshift-kube-scheduler-operator/metrics
2025-12-13T00:20:09.594916586+00:00 stderr F I1213 00:20:09.594909   28750 services_controller.go:551] Adding service openshift-network-operator/metrics
2025-12-13T00:20:09.594923556+00:00 stderr F I1213 00:20:09.594916   28750 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/olm-operator-metrics
2025-12-13T00:20:09.594948347+00:00 stderr F I1213 00:20:09.594923   28750 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/packageserver-service
2025-12-13T00:20:09.595019249+00:00 stderr F I1213 00:20:09.594931   28750 services_controller.go:551] Adding service openshift-oauth-apiserver/api
2025-12-13T00:20:09.595019249+00:00 stderr F I1213 00:20:09.595012   28750 services_controller.go:551] Adding service openshift-ovn-kubernetes/ovn-kubernetes-node
2025-12-13T00:20:09.595033459+00:00 stderr F I1213 00:20:09.595019   28750 services_controller.go:551] Adding service openshift-config-operator/metrics
2025-12-13T00:20:09.595033459+00:00 stderr F I1213 00:20:09.595024   28750 services_controller.go:551] Adding service openshift-ingress/router-internal-default
2025-12-13T00:20:09.595033459+00:00 stderr F I1213 00:20:09.595029   28750 services_controller.go:551] Adding service openshift-kube-apiserver/apiserver
2025-12-13T00:20:09.595042509+00:00 stderr F I1213 00:20:09.595035   28750 services_controller.go:551] Adding service openshift-kube-storage-version-migrator-operator/metrics
2025-12-13T00:20:09.595049700+00:00 stderr F I1213 00:20:09.595041   28750 services_controller.go:551] Adding service openshift-machine-api/machine-api-controllers
2025-12-13T00:20:09.595049700+00:00 stderr F I1213 00:20:09.595046   28750 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-controller
2025-12-13T00:20:09.595057360+00:00 stderr F I1213 00:20:09.595051   28750 services_controller.go:551] Adding service default/kubernetes
2025-12-13T00:20:09.595065610+00:00 stderr F I1213 00:20:09.595055   28750 services_controller.go:551] Adding service openshift-authentication-operator/metrics
2025-12-13T00:20:09.595065610+00:00 stderr F I1213 00:20:09.595061   28750 services_controller.go:551] Adding service openshift-machine-api/control-plane-machine-set-operator
2025-12-13T00:20:09.595074650+00:00 stderr F I1213 00:20:09.595066   28750 services_controller.go:551] Adding service openshift-etcd-operator/metrics
2025-12-13T00:20:09.595074650+00:00 stderr F I1213 00:20:09.595071   28750 services_controller.go:551] Adding service openshift-cluster-machine-approver/machine-approver
2025-12-13T00:20:09.595083981+00:00 stderr F I1213 00:20:09.595076   28750 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator-machine-webhook
2025-12-13T00:20:09.595092911+00:00 stderr F I1213 00:20:09.595082   28750 services_controller.go:551] Adding service openshift-operator-lifecycle-manager/package-server-manager-metrics
2025-12-13T00:20:09.595101711+00:00 stderr F I1213 00:20:09.595095   28750 services_controller.go:551] Adding service openshift-authentication/oauth-openshift
2025-12-13T00:20:09.595110341+00:00 stderr F I1213 00:20:09.595100   28750 services_controller.go:551] Adding service openshift-console-operator/metrics
2025-12-13T00:20:09.595110341+00:00 stderr F I1213 00:20:09.595104   28750 services_controller.go:551] Adding service openshift-controller-manager/controller-manager
2025-12-13T00:20:09.595119752+00:00 stderr F I1213 00:20:09.595111   28750 services_controller.go:551] Adding service openshift-marketplace/marketplace-operator-metrics
2025-12-13T00:20:09.595119752+00:00 stderr F I1213 00:20:09.595117   28750 services_controller.go:551] Adding service openshift-multus/multus-admission-controller
2025-12-13T00:20:09.595128472+00:00 stderr F I1213 00:20:09.595122   28750 services_controller.go:551] Adding service openshift-apiserver/api
2025-12-13T00:20:09.595136982+00:00 stderr F I1213 00:20:09.595127   28750 services_controller.go:551] Adding service openshift-cluster-samples-operator/metrics
2025-12-13T00:20:09.595145732+00:00 stderr F I1213 00:20:09.595136   28750 services_controller.go:551] Adding service openshift-monitoring/cluster-monitoring-operator
2025-12-13T00:20:09.595154603+00:00 stderr F I1213 00:20:09.595143   28750 services_controller.go:551] Adding service openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-12-13T00:20:09.595154603+00:00 stderr F I1213 00:20:09.595151   28750 services_controller.go:551] Adding service openshift-dns/dns-default
2025-12-13T00:20:09.595163363+00:00 stderr F I1213 00:20:09.595157   28750 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator
2025-12-13T00:20:09.595176023+00:00 stderr F I1213 00:20:09.595163   28750 services_controller.go:551] Adding service openshift-network-diagnostics/network-check-source
2025-12-13T00:20:09.595176023+00:00 stderr F I1213 00:20:09.595168   28750 services_controller.go:551] Adding service openshift-route-controller-manager/route-controller-manager
2025-12-13T00:20:09.595185013+00:00 stderr F I1213 00:20:09.595175   28750 services_controller.go:551] Adding service openshift-console/console
2025-12-13T00:20:09.595185013+00:00 stderr F I1213 00:20:09.595179   28750 services_controller.go:551] Adding service openshift-controller-manager-operator/metrics
2025-12-13T00:20:09.595194204+00:00 stderr F I1213 00:20:09.595184   28750 services_controller.go:551] Adding service openshift-machine-api/machine-api-operator-webhook
2025-12-13T00:20:09.595194204+00:00 stderr F I1213 00:20:09.595188   28750 services_controller.go:551] Adding service openshift-apiserver/check-endpoints
2025-12-13T00:20:09.595203234+00:00 stderr F I1213 00:20:09.595193   28750 services_controller.go:551] Adding service openshift-marketplace/community-operators
2025-12-13T00:20:09.595212014+00:00 stderr F I1213 00:20:09.595205   28750 services_controller.go:551] Adding service openshift-marketplace/redhat-operators
2025-12-13T00:20:09.595220594+00:00 stderr F I1213 00:20:09.595211   28750 services_controller.go:551] Adding service openshift-multus/network-metrics-service
2025-12-13T00:20:09.595229565+00:00 stderr F I1213 00:20:09.595220   28750 services_controller.go:551] Adding service openshift-network-diagnostics/network-check-target
2025-12-13T00:20:09.595229565+00:00 stderr F I1213 00:20:09.595225   28750 services_controller.go:551] Adding service openshift-cluster-version/cluster-version-operator
2025-12-13T00:20:09.595240245+00:00 stderr F I1213 00:20:09.595234   28750 services_controller.go:551] Adding service default/openshift
2025-12-13T00:20:09.595248435+00:00 stderr F I1213 00:20:09.595239   28750 services_controller.go:551] Adding service openshift-image-registry/image-registry
2025-12-13T00:20:09.595257175+00:00 stderr F I1213 00:20:09.595248   28750 services_controller.go:551] Adding service openshift-ingress-operator/metrics
2025-12-13T00:20:09.595257175+00:00 stderr F I1213 00:20:09.595253   28750 services_controller.go:551] Adding service openshift-kube-apiserver-operator/metrics
2025-12-13T00:20:09.595268356+00:00 stderr F I1213 00:20:09.595263   28750 services_controller.go:551] Adding service openshift-kube-scheduler/scheduler
2025-12-13T00:20:09.595277056+00:00 stderr F I1213 00:20:09.595269   28750 services_controller.go:551] Adding service openshift-machine-config-operator/machine-config-operator
2025-12-13T00:20:09.595285966+00:00 stderr F I1213 00:20:09.595280   28750 services_controller.go:551] Adding service openshift-kube-controller-manager/kube-controller-manager
2025-12-13T00:20:09.596425867+00:00 stderr F I1213 00:20:09.596388   28750 ovs.go:162] Exec(24): stdout: ""
2025-12-13T00:20:09.596425867+00:00 stderr F I1213 00:20:09.596416   28750 ovs.go:163] Exec(24): stderr: "ovs-vsctl: no port named br-ex\n"
2025-12-13T00:20:09.596453198+00:00 stderr F I1213 00:20:09.596427   28750 ovs.go:165] Exec(24): err: exit status 1
2025-12-13T00:20:09.596453198+00:00 stderr F I1213 00:20:09.596446   28750 ovs.go:159] Exec(25): /usr/bin/ovs-vsctl --timeout=15 br-exists br-ex
2025-12-13T00:20:09.603863183+00:00 stderr F I1213 00:20:09.603798   28750 ovs.go:162] Exec(25): stdout: ""
2025-12-13T00:20:09.603863183+00:00 stderr F I1213 00:20:09.603840   28750 ovs.go:163] Exec(25): stderr: ""
2025-12-13T00:20:09.603863183+00:00 stderr F I1213 00:20:09.603859   28750 ovs.go:159] Exec(26): /usr/bin/ovs-vsctl --timeout=15 list-ports br-ex
2025-12-13T00:20:09.618392313+00:00 stderr F I1213 00:20:09.618245   28750 ovs.go:162] Exec(26): stdout: "ens3\npatch-br-ex_crc-to-br-int\n"
2025-12-13T00:20:09.618392313+00:00 stderr F I1213 00:20:09.618279   28750 ovs.go:163] Exec(26): stderr: ""
2025-12-13T00:20:09.618392313+00:00 stderr F I1213 00:20:09.618294   28750 ovs.go:159] Exec(27): /usr/bin/ovs-vsctl --timeout=15 get Port ens3 Interfaces
2025-12-13T00:20:09.625763177+00:00 stderr F I1213 00:20:09.625713   28750 ovs.go:162] Exec(27): stdout: "[682a6bc3-ba67-47fe-94a2-2d199dc6c1fc]\n"
2025-12-13T00:20:09.625763177+00:00 stderr F I1213 00:20:09.625749   28750 ovs.go:163] Exec(27): stderr: ""
2025-12-13T00:20:09.625811048+00:00 stderr F I1213 00:20:09.625783   28750 ovs.go:159] Exec(28): /usr/bin/ovs-vsctl --timeout=15 get Port patch-br-ex_crc-to-br-int Interfaces
2025-12-13T00:20:09.632332787+00:00 stderr F I1213 00:20:09.632272   28750 ovs.go:162] Exec(28): stdout: "[258af48a-6bce-475a-a6a0-d7cd0a4caa3c]\n"
2025-12-13T00:20:09.632332787+00:00 stderr F I1213 00:20:09.632302   28750 ovs.go:163] Exec(28): stderr: ""
2025-12-13T00:20:09.632332787+00:00 stderr F I1213 00:20:09.632320   28750 ovs.go:159] Exec(29): /usr/bin/ovs-vsctl --timeout=15 get Interface 682a6bc3-ba67-47fe-94a2-2d199dc6c1fc Type
2025-12-13T00:20:09.640015209+00:00 stderr F I1213 00:20:09.639963   28750 ovs.go:162] Exec(29): stdout: "system\n"
2025-12-13T00:20:09.640015209+00:00 stderr F I1213 00:20:09.639986   28750 ovs.go:163] Exec(29): stderr: ""
2025-12-13T00:20:09.640015209+00:00 stderr F I1213 00:20:09.640005   28750 ovs.go:159] Exec(30): /usr/bin/ovs-vsctl --timeout=15 get Interface 258af48a-6bce-475a-a6a0-d7cd0a4caa3c Type
2025-12-13T00:20:09.645301325+00:00 stderr F I1213 00:20:09.645246   28750 shared_informer.go:318] Caches are synced for network-controller-manager
2025-12-13T00:20:09.645301325+00:00 stderr F I1213 00:20:09.645271   28750 network_attach_def_controller.go:182] Starting repairing loop for network-controller-manager
2025-12-13T00:20:09.645376597+00:00 stderr F I1213 00:20:09.645340   28750 network_attach_def_controller.go:184] Finished repairing loop for network-controller-manager: 72.362µs err:
2025-12-13T00:20:09.645376597+00:00 stderr F I1213 00:20:09.645351   28750 network_attach_def_controller.go:153] Starting workers for network-controller-manager NAD controller
2025-12-13T00:20:09.646099807+00:00 stderr F I1213 00:20:09.646060   28750 ovs.go:162] Exec(30): stdout: "patch\n"
2025-12-13T00:20:09.646099807+00:00 stderr F I1213 00:20:09.646075   28750 ovs.go:163] Exec(30): stderr: ""
2025-12-13T00:20:09.646099807+00:00 stderr F I1213 00:20:09.646084   28750 ovs.go:159] Exec(31): /usr/bin/ovs-vsctl --timeout=15 get interface ens3 ofport
2025-12-13T00:20:09.652298228+00:00 stderr F I1213 00:20:09.652230   28750 ovs.go:162] Exec(31): stdout: "1\n"
2025-12-13T00:20:09.652298228+00:00 stderr F I1213 00:20:09.652245   28750 ovs.go:163] Exec(31): stderr: ""
2025-12-13T00:20:09.652298228+00:00 stderr F I1213 00:20:09.652254   28750 ovs.go:159] Exec(32): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface br-ex mac_in_use
2025-12-13T00:20:09.658344944+00:00 stderr F I1213 00:20:09.658311   28750 ovs.go:162] Exec(32): stdout: "\"fa:16:3e:f0:63:3e\"\n"
2025-12-13T00:20:09.658419406+00:00 stderr F I1213 00:20:09.658379   28750 ovs.go:163] Exec(32): stderr: ""
2025-12-13T00:20:09.658492208+00:00 stderr F I1213 00:20:09.658477   28750 ovs.go:159] Exec(33): /usr/sbin/sysctl -w net.ipv4.conf.br-ex.forwarding=1
2025-12-13T00:20:09.659809775+00:00 stderr F I1213 00:20:09.659770   28750 ovs.go:162] Exec(33): stdout: "net.ipv4.conf.br-ex.forwarding = 1\n"
2025-12-13T00:20:09.659865907+00:00 stderr F I1213 00:20:09.659848   28750 ovs.go:163] Exec(33): stderr: ""
2025-12-13T00:20:09.659924088+00:00 stderr F I1213 00:20:09.659907   28750 ovs.go:159] Exec(34): /usr/bin/ovs-vsctl --timeout=15 --if-exists get Open_vSwitch . external_ids:ovn-bridge-mappings
2025-12-13T00:20:09.672224297+00:00 stderr F I1213 00:20:09.671780   28750 ovs.go:162] Exec(34): stdout: "\"physnet:br-ex\"\n"
2025-12-13T00:20:09.672276529+00:00 stderr F I1213 00:20:09.672260   28750 ovs.go:163] Exec(34): stderr: ""
2025-12-13T00:20:09.672321450+00:00 stderr F I1213 00:20:09.672306   28750 ovs.go:159] Exec(35): /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . external_ids:ovn-bridge-mappings=physnet:br-ex
2025-12-13T00:20:09.678408508+00:00 stderr F I1213 00:20:09.678373   28750 ovs.go:162] Exec(35): stdout: ""
2025-12-13T00:20:09.678408508+00:00 stderr F I1213 00:20:09.678393   28750 ovs.go:163] Exec(35): stderr: ""
2025-12-13T00:20:09.678433158+00:00 stderr F I1213 00:20:09.678408   28750 ovs.go:159] Exec(36): /usr/bin/ovs-vsctl --timeout=15 --if-exists get Open_vSwitch . external_ids:system-id
2025-12-13T00:20:09.690013327+00:00 stderr F I1213 00:20:09.689970   28750 ovs.go:162] Exec(36): stdout: "\"017e52b0-97d3-4d7d-aae4-9b216aa025aa\"\n"
2025-12-13T00:20:09.690013327+00:00 stderr F I1213 00:20:09.689989   28750 ovs.go:163] Exec(36): stderr: ""
2025-12-13T00:20:09.690043639+00:00 stderr F I1213 00:20:09.690032   28750 ovs.go:159] Exec(37): /usr/bin/ovs-appctl --timeout=15 dpif/show-dp-features br-ex
2025-12-13T00:20:09.695295723+00:00 stderr F I1213 00:20:09.695269   28750 shared_informer.go:318] Caches are synced for ovn-lb-controller
2025-12-13T00:20:09.695349275+00:00 stderr F I1213 00:20:09.695335   28750 repair.go:57] Starting repairing loop for services
2025-12-13T00:20:09.695811688+00:00 stderr F I1213 00:20:09.695791   28750 repair.go:128] Deleted 0 stale service LBs
2025-12-13T00:20:09.695865020+00:00 stderr F I1213 00:20:09.695852   28750 repair.go:134] Deleted 0 stale Chassis Template Vars
2025-12-13T00:20:09.695914911+00:00 stderr F I1213 00:20:09.695900   28750 repair.go:59] Finished repairing loop for services: 568.537µs
2025-12-13T00:20:09.696174348+00:00 stderr F I1213 00:20:09.696152   28750 services_controller.go:314] Controller cache of 53 load balancers initialized for 52 services
2025-12-13T00:20:09.696229180+00:00 stderr F I1213 00:20:09.696212   28750 services_controller.go:225] Starting workers
2025-12-13T00:20:09.696296141+00:00 stderr F I1213 00:20:09.696281   28750 services_controller.go:332] Processing sync for service openshift-kube-apiserver-operator/metrics
2025-12-13T00:20:09.696372623+00:00 stderr F I1213 00:20:09.696337   28750 services_controller.go:332] Processing sync for service openshift-kube-scheduler-operator/metrics
2025-12-13T00:20:09.696372623+00:00 stderr F I1213 00:20:09.696358   28750 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-daemon
2025-12-13T00:20:09.696469566+00:00 stderr F I1213 00:20:09.696437   28750 services_controller.go:332] Processing sync for service openshift-controller-manager-operator/metrics
2025-12-13T00:20:09.696469566+00:00 stderr F I1213 00:20:09.696438   28750 services_controller.go:332] Processing sync for service openshift-console-operator/metrics
2025-12-13T00:20:09.696538208+00:00 stderr F I1213 00:20:09.696380   28750 services_controller.go:397] Service machine-config-daemon retrieved from lister: &Service{ObjectMeta:{machine-config-daemon  openshift-machine-config-operator  bddcb8c2-0f2d-4efa-a0ec-3e0648c24386 4880 0 2024-06-26 12:39:15 +0000 UTC map[k8s-app:machine-config-daemon] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000892907 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},ServicePort{Name:health,Protocol:TCP,Port:8798,TargetPort:{0 8798 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-daemon,},ClusterIP:10.217.5.82,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.82],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.696605420+00:00 stderr F I1213 00:20:09.696460   28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics  openshift-controller-manager-operator  2f6bb711-85a4-408c-913a-54f006dcf2e9 4322 0 2024-06-26 12:39:07 +0000 UTC map[app:openshift-controller-manager-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:openshift-controller-manager-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062f17b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app:
openshift-controller-manager-operator,},ClusterIP:10.217.5.152,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.152],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.696617420+00:00 stderr F I1213 00:20:09.696367 28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-scheduler-operator 080e1aaf-7269-495b-ab74-593efe4192ec 4661 0 2024-06-26 12:39:09 +0000 UTC map[app:openshift-kube-scheduler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:kube-scheduler-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062fcab }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
openshift-kube-scheduler-operator,},ClusterIP:10.217.4.108,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.108],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.696644811+00:00 stderr F I1213 00:20:09.696320 28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-apiserver-operator ed79a864-3d59-456e-8a6c-724ec68e6d1b 4515 0 2024-06-26 12:39:27 +0000 UTC map[app:kube-apiserver-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062fa2b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
kube-apiserver-operator,},ClusterIP:10.217.5.31,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.31],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.696690092+00:00 stderr F I1213 00:20:09.696644 28750 lb_config.go:1016] Cluster endpoints for openshift-kube-scheduler-operator/metrics are: map[TCP/https:{8443 [10.217.0.12] []}] 2025-12-13T00:20:09.696690092+00:00 stderr F I1213 00:20:09.696599 28750 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-daemon are: map[TCP/health:{8798 [192.168.126.11] []} TCP/metrics:{9001 [192.168.126.11] []}] 2025-12-13T00:20:09.696690092+00:00 stderr F I1213 00:20:09.696621 28750 lb_config.go:1016] Cluster endpoints for openshift-controller-manager-operator/metrics are: map[TCP/https:{8443 [10.217.0.9] []}] 2025-12-13T00:20:09.696704092+00:00 stderr F I1213 00:20:09.696683 28750 services_controller.go:413] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.108"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.12"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.696704092+00:00 stderr F I1213 00:20:09.696696 28750 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-daemon LB cluster-wide configs []services.lbConfig(nil) 2025-12-13T00:20:09.696714833+00:00 stderr F I1213 00:20:09.696692 28750 
services_controller.go:413] Built service openshift-controller-manager-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.152"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.9"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.696714833+00:00 stderr F I1213 00:20:09.696700 28750 services_controller.go:414] Built service openshift-kube-scheduler-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.696714833+00:00 stderr F I1213 00:20:09.696708 28750 services_controller.go:414] Built service openshift-controller-manager-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.696725643+00:00 stderr F I1213 00:20:09.696710 28750 services_controller.go:415] Built service openshift-kube-scheduler-operator/metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.696725643+00:00 stderr F I1213 00:20:09.696715 28750 services_controller.go:415] Built service openshift-controller-manager-operator/metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.696735493+00:00 stderr F I1213 00:20:09.696709 28750 services_controller.go:414] Built service openshift-machine-config-operator/machine-config-daemon LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.82"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.82"}, protocol:"TCP", inport:8798, clusterEndpoints:services.lbEndpoints{Port:8798, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.696748404+00:00 stderr F I1213 00:20:09.696738 28750 services_controller.go:415] Built service openshift-machine-config-operator/machine-config-daemon LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.696782985+00:00 stderr F I1213 00:20:09.696735 28750 services_controller.go:421] Built service openshift-kube-scheduler-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-scheduler-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.108", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.12", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.696782985+00:00 stderr F I1213 00:20:09.696772 28750 services_controller.go:422] Built service openshift-kube-scheduler-operator/metrics per-node LB []services.LB{} 2025-12-13T00:20:09.696794625+00:00 stderr F I1213 00:20:09.696479 28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-console-operator 793d323e-de30-470a-af76-520af7b2dad8 9604 0 2024-06-26 12:53:34 +0000 UTC map[name:console-operator] map[capability.openshift.io/name:Console include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062eec7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: console-operator,},ClusterIP:10.217.5.211,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.211],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.696794625+00:00 stderr F I1213 00:20:09.696768 28750 services_controller.go:421] Built service openshift-controller-manager-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.152", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.9", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.696794625+00:00 
stderr F I1213 00:20:09.696781 28750 services_controller.go:423] Built service openshift-kube-scheduler-operator/metrics template LB []services.LB{} 2025-12-13T00:20:09.696808895+00:00 stderr F I1213 00:20:09.696792 28750 services_controller.go:422] Built service openshift-controller-manager-operator/metrics per-node LB []services.LB{} 2025-12-13T00:20:09.696808895+00:00 stderr F I1213 00:20:09.696795 28750 services_controller.go:424] Service openshift-kube-scheduler-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.696808895+00:00 stderr F I1213 00:20:09.696799 28750 services_controller.go:423] Built service openshift-controller-manager-operator/metrics template LB []services.LB{} 2025-12-13T00:20:09.696808895+00:00 stderr F I1213 00:20:09.696800 28750 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-daemon cluster-wide LB []services.LB{} 2025-12-13T00:20:09.696839246+00:00 stderr F I1213 00:20:09.696807 28750 services_controller.go:424] Service openshift-controller-manager-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.696839246+00:00 stderr F I1213 00:20:09.696813 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-scheduler-operator/metrics_TCP_cluster", UUID:"9037868a-bf59-4e20-8fc8-16e697f234f6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: 
[]services.LB{services.LB{Name:"Service_openshift-kube-scheduler-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.108", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.12", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.696850006+00:00 stderr F I1213 00:20:09.696821 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-controller-manager-operator/metrics_TCP_cluster", UUID:"cd325bf7-5a1f-48df-b966-4cb50de55e08", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.152", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.9", Port:8443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.696865147+00:00 stderr F I1213 00:20:09.696817 28750 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-daemon per-node LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9001, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:8798, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:8798, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.696896848+00:00 stderr F I1213 00:20:09.696872 28750 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-daemon template LB []services.LB{} 2025-12-13T00:20:09.696896848+00:00 stderr F I1213 00:20:09.696831 28750 lb_config.go:1016] Cluster endpoints for openshift-console-operator/metrics are: map[TCP/https:{8443 [10.217.0.62] []}] 2025-12-13T00:20:09.696907718+00:00 stderr F I1213 00:20:09.696896 28750 services_controller.go:424] Service openshift-machine-config-operator/machine-config-daemon has 0 cluster-wide, 2 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 
2025-12-13T00:20:09.696986680+00:00 stderr F I1213 00:20:09.696910 28750 services_controller.go:413] Built service openshift-console-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.211"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.62"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.696986680+00:00 stderr F I1213 00:20:09.696950 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.108:443:10.217.0.12:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.697005121+00:00 stderr F I1213 00:20:09.696953 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.152:443:10.217.0.9:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cd325bf7-5a1f-48df-b966-4cb50de55e08}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.697005121+00:00 stderr F I1213 00:20:09.696983 28750 services_controller.go:414] Built service openshift-console-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.697020301+00:00 stderr F I1213 00:20:09.697003 28750 services_controller.go:415] Built service openshift-console-operator/metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.697020301+00:00 stderr F I1213 00:20:09.696922 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc", UUID:"03279af5-9ea3-4256-ad64-2f0188f56e36", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9001, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.82", Port:8798, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:8798, Template:(*services.Template)(nil)}}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.697020301+00:00 stderr F I1213 00:20:09.696993 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.108:443:10.217.0.12:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.697020301+00:00 stderr F I1213 00:20:09.696996 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.152:443:10.217.0.9:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {cd325bf7-5a1f-48df-b966-4cb50de55e08}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.697066512+00:00 stderr F I1213 00:20:09.696957 28750 lb_config.go:1016] Cluster endpoints for openshift-kube-apiserver-operator/metrics are: map[TCP/https:{8443 [10.217.0.7] []}] 2025-12-13T00:20:09.697112154+00:00 stderr F I1213 00:20:09.697036 28750 services_controller.go:421] Built service openshift-console-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console-operator/metrics_TCP_cluster", 
UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.211", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.62", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.697127264+00:00 stderr F I1213 00:20:09.697111 28750 services_controller.go:422] Built service openshift-console-operator/metrics per-node LB []services.LB{} 2025-12-13T00:20:09.697139474+00:00 stderr F I1213 00:20:09.697131 28750 services_controller.go:423] Built service openshift-console-operator/metrics template LB []services.LB{} 2025-12-13T00:20:09.697182655+00:00 stderr F I1213 00:20:09.697152 28750 services_controller.go:424] Service openshift-console-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.697211226+00:00 stderr F I1213 00:20:09.697154 28750 services_controller.go:413] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.31"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.7"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.697249257+00:00 stderr F I1213 00:20:09.697162 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.82:8798:192.168.126.11:8798 10.217.5.82:9001:192.168.126.11:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {03279af5-9ea3-4256-ad64-2f0188f56e36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.697288248+00:00 stderr F I1213 00:20:09.697268 28750 services_controller.go:414] Built service openshift-kube-apiserver-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.697343390+00:00 stderr F I1213 00:20:09.697325 28750 services_controller.go:415] Built service openshift-kube-apiserver-operator/metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.697417932+00:00 stderr F I1213 00:20:09.697382 28750 services_controller.go:421] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.31", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.7", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.697455033+00:00 stderr F I1213 00:20:09.697441 28750 services_controller.go:422] Built 
service openshift-kube-apiserver-operator/metrics per-node LB []services.LB{} 2025-12-13T00:20:09.697503984+00:00 stderr F I1213 00:20:09.697197 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-console-operator/metrics_TCP_cluster", UUID:"0acef795-a8b4-42f0-a0f3-a0fbcd6acd2c", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-console-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.211", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.62", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.697530455+00:00 stderr F I1213 00:20:09.697482 28750 services_controller.go:423] Built service openshift-kube-apiserver-operator/metrics template LB []services.LB{} 2025-12-13T00:20:09.697574056+00:00 stderr F I1213 00:20:09.697560 28750 services_controller.go:424] Service openshift-kube-apiserver-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.697610867+00:00 stderr F I1213 00:20:09.697250 28750 
transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.82:8798:192.168.126.11:8798 10.217.5.82:9001:192.168.126.11:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {03279af5-9ea3-4256-ad64-2f0188f56e36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.697673559+00:00 stderr F I1213 00:20:09.697632 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster", UUID:"11ea2791-06de-4f67-9dea-91c73a312b37", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.31", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.7", Port:8443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.697776802+00:00 stderr F I1213 00:20:09.697686 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.211:443:10.217.0.62:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0acef795-a8b4-42f0-a0f3-a0fbcd6acd2c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.697863174+00:00 stderr F I1213 00:20:09.697820 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.31:443:10.217.0.7:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11ea2791-06de-4f67-9dea-91c73a312b37}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.697962527+00:00 stderr F I1213 00:20:09.697899 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver-operator/metrics]} name:Service_openshift-kube-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 
neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.31:443:10.217.0.7:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {11ea2791-06de-4f67-9dea-91c73a312b37}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.698127621+00:00 stderr F I1213 00:20:09.697810 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.211:443:10.217.0.62:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0acef795-a8b4-42f0-a0f3-a0fbcd6acd2c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.699676434+00:00 stderr F I1213 00:20:09.699630 28750 ovs.go:162] Exec(37): stdout: "Masked set action: Yes\nTunnel push pop: No\nUfid: Yes\nTruncate action: Yes\nClone action: Yes\nSample nesting: 10\nConntrack eventmask: Yes\nConntrack clear: Yes\nMax dp_hash algorithm: 1\nCheck pkt length action: Yes\nConntrack timeout policy: Yes\nExplicit Drop action: No\nOptimized Balance TCP mode: No\nConntrack all-zero IP SNAT: Yes\nMPLS Label add: Yes\nMax VLAN headers: 2\nMax MPLS depth: 3\nRecirc: Yes\nCT state: Yes\nCT zone: Yes\nCT mark: Yes\nCT label: Yes\nCT state NAT: Yes\nCT orig tuple: Yes\nCT orig tuple for IPv6: Yes\nIPv6 ND Extension: No\n" 2025-12-13T00:20:09.699676434+00:00 stderr F I1213 00:20:09.699654 28750 ovs.go:163] Exec(37): stderr: "" 2025-12-13T00:20:09.699702185+00:00 stderr F I1213 00:20:09.699693 28750 ovs.go:159] Exec(38): /usr/bin/ovs-vsctl --timeout=15 --if-exists get Open_vSwitch . 
other_config:hw-offload 2025-12-13T00:20:09.700805275+00:00 stderr F I1213 00:20:09.700738 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager-operator/metrics"} 2025-12-13T00:20:09.700805275+00:00 stderr F I1213 00:20:09.700779 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-controller-manager-operator : 4.348829ms 2025-12-13T00:20:09.700831316+00:00 stderr F I1213 00:20:09.700803 28750 services_controller.go:332] Processing sync for service openshift-ingress-canary/ingress-canary 2025-12-13T00:20:09.700900848+00:00 stderr F I1213 00:20:09.700860 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-daemon"} 2025-12-13T00:20:09.700900848+00:00 stderr F I1213 00:20:09.700891 28750 services_controller.go:336] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator : 4.540244ms 2025-12-13T00:20:09.700912908+00:00 stderr F I1213 00:20:09.700817 28750 services_controller.go:397] Service ingress-canary retrieved from lister: &Service{ObjectMeta:{ingress-canary openshift-ingress-canary cd641ce4-6a02-4a0c-9222-6ab30b234450 10172 0 2024-06-26 12:54:01 +0000 UTC map[ingress.openshift.io/canary:canary_controller] map[] [{apps/v1 daemonset ingress-canary b5512a08-cd29-46f9-9661-4c860338b2ca 0xc00062f7a7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:8080-tcp,Protocol:TCP,Port:8080,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},ServicePort{Name:8888-tcp,Protocol:TCP,Port:8888,TargetPort:{0 8888 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscanary.operator.openshift.io/daemonset-ingresscanary: 
canary_controller,},ClusterIP:10.217.4.204,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.204],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.700912908+00:00 stderr F I1213 00:20:09.700882 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver-operator/metrics"} 2025-12-13T00:20:09.700923408+00:00 stderr F I1213 00:20:09.700915 28750 services_controller.go:332] Processing sync for service openshift-marketplace/redhat-operators 2025-12-13T00:20:09.700923408+00:00 stderr F I1213 00:20:09.700909 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/metrics"} 2025-12-13T00:20:09.700948619+00:00 stderr F I1213 00:20:09.700920 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-apiserver-operator : 4.637897ms 2025-12-13T00:20:09.700994880+00:00 stderr F I1213 00:20:09.700968 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-console-operator : 4.507963ms 2025-12-13T00:20:09.700994880+00:00 stderr F I1213 00:20:09.700973 28750 services_controller.go:332] Processing sync for service openshift-multus/multus-admission-controller 2025-12-13T00:20:09.700994880+00:00 stderr F I1213 00:20:09.700969 28750 lb_config.go:1016] Cluster endpoints for openshift-ingress-canary/ingress-canary are: map[TCP/8080-tcp:{8080 [10.217.0.71] []} TCP/8888-tcp:{8888 [10.217.0.71] []}] 2025-12-13T00:20:09.701011221+00:00 stderr F I1213 
00:20:09.700993 28750 services_controller.go:332] Processing sync for service openshift-authentication-operator/metrics 2025-12-13T00:20:09.701011221+00:00 stderr F I1213 00:20:09.700924 28750 services_controller.go:397] Service redhat-operators retrieved from lister: &Service{ObjectMeta:{redhat-operators openshift-marketplace adccbaa4-8d5b-4985-9a89-66271ea4bf4e 6530 0 2024-06-26 12:47:51 +0000 UTC map[olm.managed:true olm.service-spec-hash:97lhyg0LJh9cnJG1O4Cl7ghtE8qwBzbCJInGtY] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-operators 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7 0xc000892d2d 0xc000892d2e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: redhat-operators,olm.managed: true,},ClusterIP:10.217.5.52,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.52],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.701029151+00:00 stderr F I1213 00:20:09.701005 28750 services_controller.go:413] Built service openshift-ingress-canary/ingress-canary LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.204"}, protocol:"TCP", inport:8080, clusterEndpoints:services.lbEndpoints{Port:8080, V4IPs:[]string{"10.217.0.71"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.204"}, protocol:"TCP", inport:8888, clusterEndpoints:services.lbEndpoints{Port:8888, 
V4IPs:[]string{"10.217.0.71"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.701041171+00:00 stderr F I1213 00:20:09.701032 28750 services_controller.go:414] Built service openshift-ingress-canary/ingress-canary LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.701051132+00:00 stderr F I1213 00:20:09.701038 28750 lb_config.go:1016] Cluster endpoints for openshift-marketplace/redhat-operators are: map[TCP/grpc:{50051 [10.217.0.35] []}] 2025-12-13T00:20:09.701051132+00:00 stderr F I1213 00:20:09.701043 28750 services_controller.go:415] Built service openshift-ingress-canary/ingress-canary LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.701084633+00:00 stderr F I1213 00:20:09.701053 28750 services_controller.go:413] Built service openshift-marketplace/redhat-operators LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.52"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.35"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.701084633+00:00 stderr F I1213 00:20:09.701072 28750 services_controller.go:414] Built service openshift-marketplace/redhat-operators LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.701084633+00:00 stderr F I1213 00:20:09.701079 28750 services_controller.go:415] Built service openshift-marketplace/redhat-operators LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.701096443+00:00 stderr F I1213 00:20:09.701004 28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-authentication-operator 20ebd9ba-71d4-4753-8707-d87939791a19 4335 0 2024-06-26 12:39:09 +0000 UTC map[app:authentication-operator] 
map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062e917 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operator,},ClusterIP:10.217.5.51,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.51],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.701096443+00:00 stderr F I1213 00:20:09.700994 28750 services_controller.go:397] Service multus-admission-controller retrieved from lister: &Service{ObjectMeta:{multus-admission-controller openshift-multus 35568373-18ec-4ba2-8d18-12de10aa5a3f 5005 0 2024-06-26 12:45:47 +0000 UTC map[app:multus-admission-controller] map[service.alpha.openshift.io/serving-cert-secret-name:multus-admission-controller-secret service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{operator.openshift.io/v1 Network cluster 5ca11404-f665-4aa0-85cf-da2f3e9c86ad 0xc000892e87 0xc000892e88}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: multus-admission-controller,},ClusterIP:10.217.4.247,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.247],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.701110393+00:00 stderr F I1213 00:20:09.701070 28750 services_controller.go:421] Built service openshift-ingress-canary/ingress-canary cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-ingress-canary/ingress-canary_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8080, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8080, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8888, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8888, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.701120254+00:00 stderr F I1213 00:20:09.701107 28750 
lb_config.go:1016] Cluster endpoints for openshift-authentication-operator/metrics are: map[TCP/https:{8443 [10.217.0.19] []}] 2025-12-13T00:20:09.701120254+00:00 stderr F I1213 00:20:09.701113 28750 services_controller.go:422] Built service openshift-ingress-canary/ingress-canary per-node LB []services.LB{} 2025-12-13T00:20:09.701131344+00:00 stderr F I1213 00:20:09.701101 28750 services_controller.go:421] Built service openshift-marketplace/redhat-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.52", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.35", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.701131344+00:00 stderr F I1213 00:20:09.701126 28750 services_controller.go:423] Built service openshift-ingress-canary/ingress-canary template LB []services.LB{} 2025-12-13T00:20:09.701152875+00:00 stderr F I1213 00:20:09.701124 28750 services_controller.go:413] Built service openshift-authentication-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.51"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.19"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.701152875+00:00 stderr F I1213 00:20:09.701128 28750 services_controller.go:422] Built service 
openshift-marketplace/redhat-operators per-node LB []services.LB{} 2025-12-13T00:20:09.701152875+00:00 stderr F I1213 00:20:09.701139 28750 services_controller.go:414] Built service openshift-authentication-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.701152875+00:00 stderr F I1213 00:20:09.701140 28750 services_controller.go:424] Service openshift-ingress-canary/ingress-canary has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.701152875+00:00 stderr F I1213 00:20:09.701142 28750 services_controller.go:423] Built service openshift-marketplace/redhat-operators template LB []services.LB{} 2025-12-13T00:20:09.701168035+00:00 stderr F I1213 00:20:09.701149 28750 services_controller.go:415] Built service openshift-authentication-operator/metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.701168035+00:00 stderr F I1213 00:20:09.701155 28750 services_controller.go:424] Service openshift-marketplace/redhat-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.701168035+00:00 stderr F I1213 00:20:09.701139 28750 lb_config.go:1016] Cluster endpoints for openshift-multus/multus-admission-controller are: map[TCP/metrics:{8443 [10.217.0.32] []} TCP/webhook:{6443 [10.217.0.32] []}] 2025-12-13T00:20:09.701204766+00:00 stderr F I1213 00:20:09.700879 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler-operator/metrics"} 2025-12-13T00:20:09.701260428+00:00 stderr F I1213 00:20:09.701176 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-ingress-canary/ingress-canary_TCP_cluster", UUID:"6fe909bf-0efe-41d7-93bd-ab2cc0acd4db", Protocol:"tcp", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-ingress-canary/ingress-canary_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8080, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8080, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.204", Port:8888, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.71", Port:8888, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.701309210+00:00 stderr F I1213 00:20:09.701288 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-scheduler-operator : 4.912046ms 2025-12-13T00:20:09.701355691+00:00 stderr F I1213 00:20:09.701173 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-operators_TCP_cluster", UUID:"dbd78453-346b-4a67-b084-f31f95f15b67", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.52", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.35", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.701398322+00:00 stderr F I1213 00:20:09.701383 28750 services_controller.go:332] Processing sync for service openshift-console/console 2025-12-13T00:20:09.701468344+00:00 stderr F I1213 00:20:09.701392 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.204:8080:10.217.0.71:8080 10.217.4.204:8888:10.217.0.71:8888]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6fe909bf-0efe-41d7-93bd-ab2cc0acd4db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.701519825+00:00 stderr F I1213 00:20:09.701460 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.204:8080:10.217.0.71:8080 10.217.4.204:8888:10.217.0.71:8888]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6fe909bf-0efe-41d7-93bd-ab2cc0acd4db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.701578647+00:00 stderr F I1213 00:20:09.701425 28750 services_controller.go:397] Service console retrieved from lister: &Service{ObjectMeta:{console openshift-console 5b0bdd1d-b81c-479c-9a03-f3ff2b5db014 9795 0 2024-06-26 12:53:44 +0000 UTC map[app:console] map[operator.openshift.io/spec-hash:5a95972a23c40ab49ce88af0712f389072cea6a9798f6e5350b856d92bc3bd6d service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:console-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: ui,},ClusterIP:10.217.4.140,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.140],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.701642149+00:00 stderr F 
I1213 00:20:09.701623 28750 lb_config.go:1016] Cluster endpoints for openshift-console/console are: map[TCP/https:{8443 [10.217.0.73] []}] 2025-12-13T00:20:09.701689770+00:00 stderr F I1213 00:20:09.701669 28750 services_controller.go:413] Built service openshift-console/console LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.140"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.73"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.701727661+00:00 stderr F I1213 00:20:09.701711 28750 services_controller.go:414] Built service openshift-console/console LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.701772542+00:00 stderr F I1213 00:20:09.701756 28750 services_controller.go:415] Built service openshift-console/console LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.701848804+00:00 stderr F I1213 00:20:09.701815 28750 services_controller.go:421] Built service openshift-console/console cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console/console_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/console"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.140", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.73", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.701886705+00:00 stderr F I1213 00:20:09.701873 28750 services_controller.go:422] Built service openshift-console/console per-node LB 
[]services.LB{} 2025-12-13T00:20:09.701919516+00:00 stderr F I1213 00:20:09.701907 28750 services_controller.go:423] Built service openshift-console/console template LB []services.LB{} 2025-12-13T00:20:09.701979258+00:00 stderr F I1213 00:20:09.701510 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-operators]} name:Service_openshift-marketplace/redhat-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.52:50051:10.217.0.35:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {dbd78453-346b-4a67-b084-f31f95f15b67}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.702021939+00:00 stderr F I1213 00:20:09.702007 28750 services_controller.go:424] Service openshift-console/console has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.702095181+00:00 stderr F I1213 00:20:09.702052 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-console/console_TCP_cluster", UUID:"1b2be37e-8144-48c9-927b-da6e21dae8a9", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/console"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-console/console_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-console/console"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.140", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.73", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.702157303+00:00 stderr F I1213 00:20:09.702126 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-canary/ingress-canary"} 2025-12-13T00:20:09.702157303+00:00 stderr F I1213 00:20:09.702144 28750 services_controller.go:336] Finished syncing service ingress-canary on namespace openshift-ingress-canary : 1.342647ms 2025-12-13T00:20:09.702169693+00:00 stderr F I1213 00:20:09.702155 28750 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-controllers 2025-12-13T00:20:09.702243075+00:00 stderr F I1213 00:20:09.702164 28750 services_controller.go:397] Service machine-api-controllers retrieved from lister: &Service{ObjectMeta:{machine-api-controllers openshift-machine-api 6a75af62-23dd-4080-8ef6-00c8bb47e103 4782 0 2024-06-26 12:39:26 +0000 UTC map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version 
a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00089206b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:mhc-mtrc,Protocol:TCP,Port:8444,TargetPort:{1 0 mhc-mtrc},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: controller,},ClusterIP:10.217.5.185,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.185],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.702259166+00:00 stderr F I1213 00:20:09.702243 28750 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-controllers are: map[] 2025-12-13T00:20:09.702271116+00:00 stderr F I1213 00:20:09.702256 28750 services_controller.go:413] Built service openshift-machine-api/machine-api-controllers LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.185"}, protocol:"TCP", inport:8441, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.185"}, protocol:"TCP", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, 
services.lbConfig{vips:[]string{"10.217.5.185"}, protocol:"TCP", inport:8444, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.702281066+00:00 stderr F I1213 00:20:09.702270 28750 services_controller.go:414] Built service openshift-machine-api/machine-api-controllers LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.702281066+00:00 stderr F I1213 00:20:09.702276 28750 services_controller.go:415] Built service openshift-machine-api/machine-api-controllers LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.702325627+00:00 stderr F I1213 00:20:09.702290 28750 services_controller.go:421] Built service openshift-machine-api/machine-api-controllers cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-controllers_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8441, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8442, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8444, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.702325627+00:00 stderr F I1213 00:20:09.702314 28750 services_controller.go:422] Built service openshift-machine-api/machine-api-controllers per-node LB []services.LB{} 
2025-12-13T00:20:09.702325627+00:00 stderr F I1213 00:20:09.702321 28750 services_controller.go:423] Built service openshift-machine-api/machine-api-controllers template LB []services.LB{} 2025-12-13T00:20:09.702338728+00:00 stderr F I1213 00:20:09.702328 28750 services_controller.go:424] Service openshift-machine-api/machine-api-controllers has 3 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.702385559+00:00 stderr F I1213 00:20:09.702342 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-controllers_TCP_cluster", UUID:"9345b326-0288-485f-8374-532f033762a6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-controllers_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8441, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8442, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.185", Port:8444, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.702484322+00:00 stderr F I1213 00:20:09.702421 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:10.217.0.73:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b2be37e-8144-48c9-927b-da6e21dae8a9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.702601315+00:00 stderr F I1213 00:20:09.702559 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:10.217.0.73:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b2be37e-8144-48c9-927b-da6e21dae8a9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.702782310+00:00 stderr F I1213 00:20:09.701978 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-operators]} name:Service_openshift-marketplace/redhat-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.52:50051:10.217.0.35:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {dbd78453-346b-4a67-b084-f31f95f15b67}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.702990576+00:00 stderr F I1213 00:20:09.702466 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.185:8441: 10.217.5.185:8442: 10.217.5.185:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9345b326-0288-485f-8374-532f033762a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.703097218+00:00 stderr F I1213 00:20:09.703038 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.185:8441: 10.217.5.185:8442: 10.217.5.185:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9345b326-0288-485f-8374-532f033762a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.703251623+00:00 stderr F I1213 00:20:09.703117 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/console"} 2025-12-13T00:20:09.703251623+00:00 stderr F I1213 00:20:09.703230 
28750 services_controller.go:336] Finished syncing service console on namespace openshift-console : 1.84981ms 2025-12-13T00:20:09.703251623+00:00 stderr F I1213 00:20:09.703241 28750 services_controller.go:332] Processing sync for service openshift-oauth-apiserver/api 2025-12-13T00:20:09.703310324+00:00 stderr F I1213 00:20:09.703247 28750 services_controller.go:397] Service api retrieved from lister: &Service{ObjectMeta:{api openshift-oauth-apiserver 8ccd218c-b483-42f1-81ef-8a1e9a05f574 5246 0 2024-06-26 12:47:12 +0000 UTC map[app:openshift-oauth-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.114,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.114],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.703322235+00:00 stderr F I1213 00:20:09.703307 28750 lb_config.go:1016] Cluster endpoints for openshift-oauth-apiserver/api are: map[TCP/https:{8443 [10.217.0.39] []}] 2025-12-13T00:20:09.703331935+00:00 stderr F I1213 00:20:09.703320 28750 services_controller.go:413] Built service 
openshift-oauth-apiserver/api LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.114"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.39"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.703342065+00:00 stderr F I1213 00:20:09.703330 28750 services_controller.go:414] Built service openshift-oauth-apiserver/api LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.703342065+00:00 stderr F I1213 00:20:09.703336 28750 services_controller.go:415] Built service openshift-oauth-apiserver/api LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.703381616+00:00 stderr F I1213 00:20:09.703348 28750 services_controller.go:421] Built service openshift-oauth-apiserver/api cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-oauth-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.114", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.39", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.703381616+00:00 stderr F I1213 00:20:09.703369 28750 services_controller.go:422] Built service openshift-oauth-apiserver/api per-node LB []services.LB{} 2025-12-13T00:20:09.703381616+00:00 stderr F I1213 00:20:09.703375 28750 services_controller.go:423] Built service openshift-oauth-apiserver/api template LB []services.LB{} 2025-12-13T00:20:09.703398687+00:00 
stderr F I1213 00:20:09.703382 28750 services_controller.go:424] Service openshift-oauth-apiserver/api has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.703436048+00:00 stderr F I1213 00:20:09.703396 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-oauth-apiserver/api_TCP_cluster", UUID:"7595c030-5437-4d83-9952-897bbf081592", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-oauth-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.114", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.39", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.703525950+00:00 stderr F I1213 00:20:09.703485 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 
neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.114:443:10.217.0.39:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7595c030-5437-4d83-9952-897bbf081592}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.703538680+00:00 stderr F I1213 00:20:09.703514 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.114:443:10.217.0.39:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7595c030-5437-4d83-9952-897bbf081592}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.703608202+00:00 stderr F I1213 00:20:09.703586 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-operators"} 2025-12-13T00:20:09.703645343+00:00 stderr F I1213 00:20:09.703631 28750 services_controller.go:336] Finished syncing service redhat-operators on namespace openshift-marketplace : 2.715455ms 2025-12-13T00:20:09.703680194+00:00 stderr F I1213 00:20:09.701191 28750 services_controller.go:413] Built service openshift-multus/multus-admission-controller LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.247"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"10.217.0.32"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.247"}, protocol:"TCP", 
inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.32"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.703707025+00:00 stderr F I1213 00:20:09.701192 28750 services_controller.go:421] Built service openshift-authentication-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-authentication-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.51", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.19", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.703742806+00:00 stderr F I1213 00:20:09.703729 28750 services_controller.go:422] Built service openshift-authentication-operator/metrics per-node LB []services.LB{} 2025-12-13T00:20:09.703788387+00:00 stderr F I1213 00:20:09.703757 28750 services_controller.go:332] Processing sync for service openshift-image-registry/image-registry-operator 2025-12-13T00:20:09.703799218+00:00 stderr F I1213 00:20:09.703785 28750 services_controller.go:336] Finished syncing service image-registry-operator on namespace openshift-image-registry : 28.191µs 2025-12-13T00:20:09.703808928+00:00 stderr F I1213 00:20:09.703787 28750 services_controller.go:414] Built service openshift-multus/multus-admission-controller LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.703808928+00:00 stderr F I1213 00:20:09.703802 28750 services_controller.go:332] 
Processing sync for service openshift-marketplace/redhat-marketplace 2025-12-13T00:20:09.703819258+00:00 stderr F I1213 00:20:09.703803 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-controllers"} 2025-12-13T00:20:09.703819258+00:00 stderr F I1213 00:20:09.703809 28750 services_controller.go:415] Built service openshift-multus/multus-admission-controller LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.703829368+00:00 stderr F I1213 00:20:09.703817 28750 services_controller.go:336] Finished syncing service machine-api-controllers on namespace openshift-machine-api : 1.660985ms 2025-12-13T00:20:09.703839069+00:00 stderr F I1213 00:20:09.703831 28750 services_controller.go:332] Processing sync for service openshift-etcd-operator/metrics 2025-12-13T00:20:09.703864849+00:00 stderr F I1213 00:20:09.703764 28750 services_controller.go:423] Built service openshift-authentication-operator/metrics template LB []services.LB{} 2025-12-13T00:20:09.703906750+00:00 stderr F I1213 00:20:09.703837 28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-etcd-operator 470dd1a6-5645-4282-97e4-ebd3fef4caae 4531 0 2024-06-26 12:39:06 +0000 UTC map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062f4b7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: etcd-operator,},ClusterIP:10.217.4.182,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.182],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.703906750+00:00 stderr F I1213 00:20:09.703849 28750 services_controller.go:421] Built service openshift-multus/multus-admission-controller cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-multus/multus-admission-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:6443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.703923731+00:00 stderr F I1213 00:20:09.703908 28750 lb_config.go:1016] Cluster endpoints for openshift-etcd-operator/metrics are: map[TCP/https:{8443 [10.217.0.8] []}] 2025-12-13T00:20:09.703923731+00:00 stderr F I1213 00:20:09.703810 28750 services_controller.go:397] 
Service redhat-marketplace retrieved from lister: &Service{ObjectMeta:{redhat-marketplace openshift-marketplace 73712edb-385d-4bf0-9c03-b6c570b1a22f 6434 0 2024-06-26 12:47:50 +0000 UTC map[olm.managed:true olm.service-spec-hash:aUeLNNcZzVZO2rcaZ5Kc8V3jffO0Ss4T6qX6V5] map[] [{operators.coreos.com/v1alpha1 CatalogSource redhat-marketplace 6f259421-4edb-49d8-a6ce-aa41dfc64264 0xc000892c7d 0xc000892c7e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: redhat-marketplace,olm.managed: true,},ClusterIP:10.217.4.65,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.65],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.703960022+00:00 stderr F I1213 00:20:09.703912 28750 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-image-registry/image-registry-operator. OVN-Kubernetes controller took 0.109036296 seconds. No OVN measurement. 
2025-12-13T00:20:09.703960022+00:00 stderr F I1213 00:20:09.703922 28750 services_controller.go:413] Built service openshift-etcd-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.182"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.8"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.703960022+00:00 stderr F I1213 00:20:09.703955 28750 services_controller.go:414] Built service openshift-etcd-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.703974742+00:00 stderr F I1213 00:20:09.703961 28750 services_controller.go:415] Built service openshift-etcd-operator/metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.703992793+00:00 stderr F I1213 00:20:09.703978 28750 lb_config.go:1016] Cluster endpoints for openshift-marketplace/redhat-marketplace are: map[TCP/grpc:{50051 [10.217.0.36] []}] 2025-12-13T00:20:09.704023255+00:00 stderr F I1213 00:20:09.703887 28750 services_controller.go:424] Service openshift-authentication-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.704072866+00:00 stderr F I1213 00:20:09.704013 28750 services_controller.go:413] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.65"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.36"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.704072866+00:00 stderr F I1213 00:20:09.704065 28750 services_controller.go:414] Built service 
openshift-marketplace/redhat-marketplace LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.704088256+00:00 stderr F I1213 00:20:09.704066 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-oauth-apiserver/api"} 2025-12-13T00:20:09.704088256+00:00 stderr F I1213 00:20:09.704078 28750 services_controller.go:415] Built service openshift-marketplace/redhat-marketplace LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.704088256+00:00 stderr F I1213 00:20:09.704080 28750 services_controller.go:336] Finished syncing service api on namespace openshift-oauth-apiserver : 838.134µs 2025-12-13T00:20:09.704107297+00:00 stderr F I1213 00:20:09.704093 28750 services_controller.go:332] Processing sync for service openshift-dns/dns-default 2025-12-13T00:20:09.704163468+00:00 stderr F I1213 00:20:09.704103 28750 services_controller.go:421] Built service openshift-marketplace/redhat-marketplace cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-marketplace_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.65", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.36", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.704163468+00:00 stderr F I1213 00:20:09.704100 28750 services_controller.go:397] Service dns-default retrieved from lister: &Service{ObjectMeta:{dns-default openshift-dns 9c0247d8-5697-41ea-812e-582bb93c9b4d 5259 0 2024-06-26 12:47:19 +0000 UTC 
map[dns.operator.openshift.io/owning-dns:default] map[service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:dns-default-metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{operator.openshift.io/v1 DNS default 8e7b8280-016f-4ceb-a792-fc5be2494468 0xc00062f3c7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.704163468+00:00 stderr F I1213 00:20:09.704151 28750 services_controller.go:422] Built service openshift-marketplace/redhat-marketplace per-node LB []services.LB{} 2025-12-13T00:20:09.704183729+00:00 stderr F I1213 00:20:09.704166 28750 services_controller.go:423] Built service openshift-marketplace/redhat-marketplace template LB []services.LB{} 2025-12-13T00:20:09.704183729+00:00 stderr F I1213 00:20:09.704170 28750 lb_config.go:1016] Cluster endpoints for openshift-dns/dns-default are: map[TCP/dns-tcp:{5353 [10.217.0.31] []} TCP/metrics:{9154 [10.217.0.31] []} UDP/dns:{5353 [10.217.0.31] []}] 2025-12-13T00:20:09.704194119+00:00 
stderr F I1213 00:20:09.704182 28750 services_controller.go:424] Service openshift-marketplace/redhat-marketplace has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.704194119+00:00 stderr F I1213 00:20:09.704189 28750 services_controller.go:413] Built service openshift-dns/dns-default LB cluster-wide configs []services.lbConfig(nil) 2025-12-13T00:20:09.704229220+00:00 stderr F I1213 00:20:09.704195 28750 services_controller.go:414] Built service openshift-dns/dns-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.10"}, protocol:"UDP", inport:53, clusterEndpoints:services.lbEndpoints{Port:5353, V4IPs:[]string{"10.217.0.31"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.10"}, protocol:"TCP", inport:53, clusterEndpoints:services.lbEndpoints{Port:5353, V4IPs:[]string{"10.217.0.31"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.10"}, protocol:"TCP", inport:9154, clusterEndpoints:services.lbEndpoints{Port:9154, V4IPs:[]string{"10.217.0.31"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.704229220+00:00 stderr F I1213 00:20:09.704216 28750 services_controller.go:415] Built service openshift-dns/dns-default LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.704229220+00:00 stderr F I1213 00:20:09.703909 28750 services_controller.go:422] Built service openshift-multus/multus-admission-controller per-node LB []services.LB{} 2025-12-13T00:20:09.704274821+00:00 stderr F I1213 00:20:09.704209 28750 services_controller.go:443] 
Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-marketplace_TCP_cluster", UUID:"026ccb94-9135-4ac8-9f1b-4f7198943ac3", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/redhat-marketplace_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.65", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.36", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.704338893+00:00 stderr F I1213 00:20:09.704253 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-authentication-operator/metrics_TCP_cluster", UUID:"43d1a806-6d56-4f19-9c53-1ce78b0d24a1", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, 
built lbs: []services.LB{services.LB{Name:"Service_openshift-authentication-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.51", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.19", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.704387175+00:00 stderr F I1213 00:20:09.704009 28750 services_controller.go:421] Built service openshift-etcd-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-etcd-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.182", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.8", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.704400365+00:00 stderr F I1213 00:20:09.704387 28750 services_controller.go:422] Built service openshift-etcd-operator/metrics per-node LB []services.LB{} 2025-12-13T00:20:09.704426036+00:00 stderr F I1213 00:20:09.704406 28750 services_controller.go:423] Built service openshift-etcd-operator/metrics template LB []services.LB{} 2025-12-13T00:20:09.704436166+00:00 stderr F I1213 
00:20:09.704427 28750 services_controller.go:424] Service openshift-etcd-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.704478657+00:00 stderr F I1213 00:20:09.704411 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.65:50051:10.217.0.36:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {026ccb94-9135-4ac8-9f1b-4f7198943ac3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.704536979+00:00 stderr F I1213 00:20:09.704477 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.65:50051:10.217.0.36:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {026ccb94-9135-4ac8-9f1b-4f7198943ac3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.704536979+00:00 stderr F I1213 00:20:09.704459 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-etcd-operator/metrics_TCP_cluster", UUID:"85c70b85-2b80-443a-b268-ffc8695f018e", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-etcd-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-etcd-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.182", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.8", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.704636481+00:00 stderr F I1213 00:20:09.704596 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.51:443:10.217.0.19:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {43d1a806-6d56-4f19-9c53-1ce78b0d24a1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.704697683+00:00 stderr F I1213 00:20:09.704662 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.51:443:10.217.0.19:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {43d1a806-6d56-4f19-9c53-1ce78b0d24a1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.704747364+00:00 stderr F I1213 00:20:09.704665 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:443:10.217.0.8:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {85c70b85-2b80-443a-b268-ffc8695f018e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.704815726+00:00 stderr F I1213 00:20:09.704748 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd-operator/metrics]} name:Service_openshift-etcd-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:443:10.217.0.8:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {85c70b85-2b80-443a-b268-ffc8695f018e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.704913699+00:00 stderr F I1213 00:20:09.704249 28750 
services_controller.go:423] Built service openshift-multus/multus-admission-controller template LB []services.LB{} 2025-12-13T00:20:09.704980141+00:00 stderr F I1213 00:20:09.704964 28750 services_controller.go:424] Service openshift-multus/multus-admission-controller has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.705062353+00:00 stderr F I1213 00:20:09.705016 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-multus/multus-admission-controller_TCP_cluster", UUID:"8f8bf377-ba57-45b8-b726-f3b9236cc1ab", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-multus/multus-admission-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:6443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.247", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.32", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.705213337+00:00 stderr F I1213 00:20:09.705165 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.247:443:10.217.0.32:6443 10.217.4.247:8443:10.217.0.32:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f8bf377-ba57-45b8-b726-f3b9236cc1ab}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.705354831+00:00 stderr F I1213 00:20:09.705302 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-multus/multus-admission-controller]} name:Service_openshift-multus/multus-admission-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.247:443:10.217.0.32:6443 10.217.4.247:8443:10.217.0.32:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f8bf377-ba57-45b8-b726-f3b9236cc1ab}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.705574757+00:00 stderr F I1213 00:20:09.704252 28750 services_controller.go:421] Built service openshift-dns/dns-default cluster-wide LB []services.LB{} 2025-12-13T00:20:09.705633959+00:00 
stderr F I1213 00:20:09.705604 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/redhat-marketplace"} 2025-12-13T00:20:09.705633959+00:00 stderr F I1213 00:20:09.705620 28750 services_controller.go:336] Finished syncing service redhat-marketplace on namespace openshift-marketplace : 1.81809ms 2025-12-13T00:20:09.705646039+00:00 stderr F I1213 00:20:09.705632 28750 services_controller.go:332] Processing sync for service openshift-apiserver-operator/metrics 2025-12-13T00:20:09.705705890+00:00 stderr F I1213 00:20:09.705638 28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-apiserver-operator 4c2fba48-c67e-4420-9529-0bb456da4341 4348 0 2024-06-26 12:39:11 +0000 UTC map[app:openshift-apiserver-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:openshift-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062e617 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
openshift-apiserver-operator,},ClusterIP:10.217.5.125,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.125],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.705716771+00:00 stderr F I1213 00:20:09.705703 28750 lb_config.go:1016] Cluster endpoints for openshift-apiserver-operator/metrics are: map[TCP/https:{8443 [10.217.0.6] []}] 2025-12-13T00:20:09.705726651+00:00 stderr F I1213 00:20:09.705713 28750 services_controller.go:413] Built service openshift-apiserver-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.125"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.6"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.705726651+00:00 stderr F I1213 00:20:09.705723 28750 services_controller.go:414] Built service openshift-apiserver-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.705746212+00:00 stderr F I1213 00:20:09.705728 28750 services_controller.go:415] Built service openshift-apiserver-operator/metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.705758402+00:00 stderr F I1213 00:20:09.705740 28750 services_controller.go:421] Built service openshift-apiserver-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.125", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.6", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.705768232+00:00 stderr F I1213 00:20:09.705756 28750 services_controller.go:422] Built service openshift-apiserver-operator/metrics per-node LB []services.LB{} 2025-12-13T00:20:09.705768232+00:00 stderr F I1213 00:20:09.705762 28750 services_controller.go:423] Built service openshift-apiserver-operator/metrics template LB []services.LB{} 2025-12-13T00:20:09.705778312+00:00 stderr F I1213 00:20:09.705768 28750 services_controller.go:424] Service openshift-apiserver-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.705821074+00:00 stderr F I1213 00:20:09.705780 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver-operator/metrics_TCP_cluster", UUID:"6c6bde70-7352-441c-a61d-299eaf2273c0", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.125", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.6", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.705925356+00:00 stderr F I1213 00:20:09.705862 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.125:443:10.217.0.6:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6c6bde70-7352-441c-a61d-299eaf2273c0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.705925356+00:00 stderr F I1213 00:20:09.705889 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.125:443:10.217.0.6:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6c6bde70-7352-441c-a61d-299eaf2273c0}] Until: Durable: Comment: Lock: UUID: 
UUIDName:}] 2025-12-13T00:20:09.706046380+00:00 stderr F I1213 00:20:09.705608 28750 services_controller.go:422] Built service openshift-dns/dns-default per-node LB []services.LB{services.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:9154, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}, services.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_crc", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.706046380+00:00 stderr F I1213 00:20:09.706036 28750 services_controller.go:423] Built service openshift-dns/dns-default template LB []services.LB{} 2025-12-13T00:20:09.706058820+00:00 stderr F I1213 00:20:09.706045 
28750 services_controller.go:424] Service openshift-dns/dns-default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.706127232+00:00 stderr F I1213 00:20:09.706059 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_crc", UUID:"19e6099f-93ac-4c53-b159-d6e39047458d", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}, services.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_crc", UUID:"0638bd05-8826-488d-bed7-e92d2c002903", Protocol:"udp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}, 
services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:9154, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}, services.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_crc", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.10", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.31", Port:5353, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.706210514+00:00 stderr F I1213 00:20:09.706171 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:10.217.0.31:5353 10.217.4.10:9154:10.217.0.31:9154]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {19e6099f-93ac-4c53-b159-d6e39047458d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.706259766+00:00 stderr F I1213 00:20:09.706238 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-multus/multus-admission-controller"} 
2025-12-13T00:20:09.706297647+00:00 stderr F I1213 00:20:09.706281 28750 services_controller.go:336] Finished syncing service multus-admission-controller on namespace openshift-multus : 5.309016ms 2025-12-13T00:20:09.706369169+00:00 stderr F I1213 00:20:09.706347 28750 services_controller.go:332] Processing sync for service openshift-image-registry/image-registry 2025-12-13T00:20:09.706485862+00:00 stderr F I1213 00:20:09.706404 28750 services_controller.go:397] Service image-registry retrieved from lister: &Service{ObjectMeta:{image-registry openshift-image-registry 7b12735e-9db4-4c6e-99f6-b2626c4e9f08 17962 0 2024-06-27 13:18:52 +0000 UTC map[docker-registry:default] map[imageregistry.operator.openshift.io/checksum:sha256:1c19715a76014ae1d56140d6390a08f14f453c1a59dc36c15718f40c638ef63d service.alpha.openshift.io/serving-cert-secret-name:image-registry-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5000-tcp,Protocol:TCP,Port:5000,TargetPort:{0 5000 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{docker-registry: default,},ClusterIP:10.217.4.41,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.41],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.706583824+00:00 stderr F I1213 00:20:09.706559 28750 lb_config.go:1016] Cluster endpoints for openshift-image-registry/image-registry are: map[TCP/5000-tcp:{5000 [10.217.0.41] []}] 
2025-12-13T00:20:09.706639216+00:00 stderr F I1213 00:20:09.706619 28750 services_controller.go:413] Built service openshift-image-registry/image-registry LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.41"}, protocol:"TCP", inport:5000, clusterEndpoints:services.lbEndpoints{Port:5000, V4IPs:[]string{"10.217.0.41"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.706673567+00:00 stderr F I1213 00:20:09.706660 28750 services_controller.go:414] Built service openshift-image-registry/image-registry LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.706706648+00:00 stderr F I1213 00:20:09.706694 28750 services_controller.go:415] Built service openshift-image-registry/image-registry LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.706770759+00:00 stderr F I1213 00:20:09.706739 28750 services_controller.go:421] Built service openshift-image-registry/image-registry cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-image-registry/image-registry_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.41", Port:5000, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.41", Port:5000, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.706836932+00:00 stderr F I1213 00:20:09.706812 28750 services_controller.go:422] Built service openshift-image-registry/image-registry per-node LB []services.LB{} 
2025-12-13T00:20:09.706877373+00:00 stderr F I1213 00:20:09.706864 28750 services_controller.go:423] Built service openshift-image-registry/image-registry template LB []services.LB{}
2025-12-13T00:20:09.706914694+00:00 stderr F I1213 00:20:09.706899 28750 services_controller.go:424] Service openshift-image-registry/image-registry has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.707008567+00:00 stderr F I1213 00:20:09.706965 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-image-registry/image-registry_TCP_cluster", UUID:"1636f317-82cf-4afb-adb2-1388c1aee17c", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-image-registry/image-registry_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.41", Port:5000, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.41", Port:5000, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.707167261+00:00 stderr F I1213 00:20:09.707127 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-image-registry/image-registry]} name:Service_openshift-image-registry/image-registry_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:5000:10.217.0.41:5000]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1636f317-82cf-4afb-adb2-1388c1aee17c}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.707229453+00:00 stderr F I1213 00:20:09.707194 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-image-registry/image-registry]} name:Service_openshift-image-registry/image-registry_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:5000:10.217.0.41:5000]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1636f317-82cf-4afb-adb2-1388c1aee17c}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.707374027+00:00 stderr F I1213 00:20:09.706181 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd-operator/metrics"}
2025-12-13T00:20:09.707413438+00:00 stderr F I1213 00:20:09.707399 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-etcd-operator : 3.566449ms
2025-12-13T00:20:09.707452719+00:00 stderr F I1213 00:20:09.707440 28750 services_controller.go:332] Processing sync for service openshift-console-operator/webhook
2025-12-13T00:20:09.707552672+00:00 stderr F I1213 00:20:09.707473 28750 services_controller.go:397] Service webhook retrieved from lister: &Service{ObjectMeta:{webhook openshift-console-operator 0bec6a60-3529-4fdb-81de-718ea6c4dae4 9610 0 2024-06-26 12:53:34 +0000 UTC map[name:console-conversion-webhook] map[capability.openshift.io/name:Console include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:webhook-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062ef87 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:9443,TargetPort:{0 9443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: console-conversion-webhook,},ClusterIP:10.217.5.84,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.84],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.707609113+00:00 stderr F I1213 00:20:09.706280 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver-operator/metrics"}
2025-12-13T00:20:09.707638664+00:00 stderr F I1213 00:20:09.706235 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:10.217.0.31:5353]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0638bd05-8826-488d-bed7-e92d2c002903}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.707665935+00:00 stderr F I1213 00:20:09.705582 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication-operator/metrics"}
2025-12-13T00:20:09.707692685+00:00 stderr F I1213 00:20:09.707609 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-image-registry/image-registry"}
2025-12-13T00:20:09.707732407+00:00 stderr F I1213 00:20:09.707716 28750 services_controller.go:336] Finished syncing service image-registry on namespace openshift-image-registry : 1.368618ms
2025-12-13T00:20:09.707771458+00:00 stderr F I1213 00:20:09.707759 28750 services_controller.go:332] Processing sync for service openshift-service-ca-operator/metrics
2025-12-13T00:20:09.707859330+00:00 stderr F I1213 00:20:09.707792 28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-service-ca-operator 030283b3-acfe-40ed-811c-d9f7f79607f6 5225 0 2024-06-26 12:39:07 +0000 UTC map[app:service-ca-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0008936ef }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: service-ca-operator,},ClusterIP:10.217.5.165,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.165],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.707911531+00:00 stderr F I1213 00:20:09.707895 28750 lb_config.go:1016] Cluster endpoints for openshift-service-ca-operator/metrics are: map[TCP/https:{8443 [10.217.0.10] []}]
2025-12-13T00:20:09.707980243+00:00 stderr F I1213 00:20:09.707958 28750 services_controller.go:413] Built service openshift-service-ca-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.165"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.10"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.708051655+00:00 stderr F I1213 00:20:09.708021 28750 services_controller.go:414] Built service openshift-service-ca-operator/metrics LB per-node configs []services.lbConfig(nil)
2025-12-13T00:20:09.708111047+00:00 stderr F I1213 00:20:09.708020 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:10.217.0.31:5353 10.217.4.10:9154:10.217.0.31:9154]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {19e6099f-93ac-4c53-b159-d6e39047458d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53:10.217.0.31:5353]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0638bd05-8826-488d-bed7-e92d2c002903}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.708147008+00:00 stderr F I1213 00:20:09.708131 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-authentication-operator : 7.100866ms
2025-12-13T00:20:09.708187699+00:00 stderr F I1213 00:20:09.708175 28750 services_controller.go:332] Processing sync for service openshift-etcd/etcd
2025-12-13T00:20:09.708276541+00:00 stderr F I1213 00:20:09.708085 28750 services_controller.go:415] Built service openshift-service-ca-operator/metrics LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.708307562+00:00 stderr F I1213 00:20:09.708209 28750 services_controller.go:397] Service etcd retrieved from lister: &Service{ObjectMeta:{etcd openshift-etcd 09198b54-ff7d-4bc0-9111-00e2f443a981 4485 0 2024-06-26 12:38:46 +0000 UTC map[k8s-app:etcd] map[operator.openshift.io/spec-hash:0685cfaa0976bfb7ba58513629369c20bf05f4fba36949e982bdb43af328f0e1 prometheus.io/scheme:https prometheus.io/scrape:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:etcd,Protocol:TCP,Port:2379,TargetPort:{0 2379 },NodePort:0,AppProtocol:nil,},ServicePort{Name:etcd-metrics,Protocol:TCP,Port:9979,TargetPort:{0 9979 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{etcd: true,},ClusterIP:10.217.5.137,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.137],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.708362314+00:00 stderr F I1213 00:20:09.708344 28750 lb_config.go:1016] Cluster endpoints for openshift-etcd/etcd are: map[TCP/etcd:{2379 [192.168.126.11] []} TCP/etcd-metrics:{9979 [192.168.126.11] []}]
2025-12-13T00:20:09.708409545+00:00 stderr F I1213 00:20:09.708396 28750 services_controller.go:413] Built service openshift-etcd/etcd LB cluster-wide configs []services.lbConfig(nil)
2025-12-13T00:20:09.708475907+00:00 stderr F I1213 00:20:09.708442 28750 services_controller.go:414] Built service openshift-etcd/etcd LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.137"}, protocol:"TCP", inport:2379, clusterEndpoints:services.lbEndpoints{Port:2379, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.137"}, protocol:"TCP", inport:9979, clusterEndpoints:services.lbEndpoints{Port:9979, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.708510848+00:00 stderr F I1213 00:20:09.708497 28750 services_controller.go:415] Built service openshift-etcd/etcd LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.708564309+00:00 stderr F I1213 00:20:09.708551 28750 services_controller.go:421] Built service openshift-etcd/etcd cluster-wide LB []services.LB{}
2025-12-13T00:20:09.708625551+00:00 stderr F I1213 00:20:09.708303 28750 services_controller.go:421] Built service openshift-service-ca-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-service-ca-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.165", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.10", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.708862147+00:00 stderr F I1213 00:20:09.708702 28750 services_controller.go:422] Built service openshift-service-ca-operator/metrics per-node LB []services.LB{}
2025-12-13T00:20:09.708919319+00:00 stderr F I1213 00:20:09.707634 28750 lb_config.go:1016] Cluster endpoints for openshift-console-operator/webhook are: map[TCP/webhook:{9443 [10.217.0.61] []}]
2025-12-13T00:20:09.708919319+00:00 stderr F I1213 00:20:09.707645 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-apiserver-operator : 2.014046ms
2025-12-13T00:20:09.708919319+00:00 stderr F I1213 00:20:09.708586 28750 services_controller.go:422] Built service openshift-etcd/etcd per-node LB []services.LB{services.LB{Name:"Service_openshift-etcd/etcd_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:2379, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9979, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}}
2025-12-13T00:20:09.708919319+00:00 stderr F I1213 00:20:09.708912 28750 services_controller.go:423] Built service openshift-etcd/etcd template LB []services.LB{}
2025-12-13T00:20:09.708966020+00:00 stderr F I1213 00:20:09.708922 28750 services_controller.go:413] Built service openshift-console-operator/webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.84"}, protocol:"TCP", inport:9443, clusterEndpoints:services.lbEndpoints{Port:9443, V4IPs:[]string{"10.217.0.61"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.708966020+00:00 stderr F I1213 00:20:09.708677 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}
2025-12-13T00:20:09.708988341+00:00 stderr F I1213 00:20:09.708962 28750 services_controller.go:414] Built service openshift-console-operator/webhook LB per-node configs []services.lbConfig(nil)
2025-12-13T00:20:09.708988341+00:00 stderr F I1213 00:20:09.708970 28750 services_controller.go:415] Built service openshift-console-operator/webhook LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.708998701+00:00 stderr F I1213 00:20:09.708988 28750 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator-webhook
2025-12-13T00:20:09.709034972+00:00 stderr F I1213 00:20:09.708991 28750 services_controller.go:421] Built service openshift-console-operator/webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console-operator/webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.84", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.61", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.709034972+00:00 stderr F I1213 00:20:09.709025 28750 services_controller.go:422] Built service openshift-console-operator/webhook per-node LB []services.LB{}
2025-12-13T00:20:09.709045962+00:00 stderr F I1213 00:20:09.709036 28750 services_controller.go:423] Built service openshift-console-operator/webhook template LB []services.LB{}
2025-12-13T00:20:09.709055713+00:00 stderr F I1213 00:20:09.709045 28750 services_controller.go:424] Service openshift-console-operator/webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.709092474+00:00 stderr F I1213 00:20:09.708927 28750 services_controller.go:424] Service openshift-etcd/etcd has 0 cluster-wide, 2 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.709092474+00:00 stderr F I1213 00:20:09.709079 28750 services_controller.go:423] Built service openshift-service-ca-operator/metrics template LB []services.LB{}
2025-12-13T00:20:09.709154135+00:00 stderr F I1213 00:20:09.709091 28750 ovs.go:162] Exec(38): stdout: "\n"
2025-12-13T00:20:09.709154135+00:00 stderr F I1213 00:20:09.709111 28750 services_controller.go:424] Service openshift-service-ca-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.709154135+00:00 stderr F I1213 00:20:09.709062 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-console-operator/webhook_TCP_cluster", UUID:"a25f22fe-ae1f-47da-afc5-e1fde93bc930", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-console-operator/webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.84", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.61", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.709154135+00:00 stderr F I1213 00:20:09.709135 28750 ovs.go:163] Exec(38): stderr: ""
2025-12-13T00:20:09.709170426+00:00 stderr F I1213 00:20:09.708969 28750 services_controller.go:336] Finished syncing service dns-default on namespace openshift-dns : 4.872153ms
2025-12-13T00:20:09.709201717+00:00 stderr F I1213 00:20:09.709178 28750 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-controller
2025-12-13T00:20:09.709201717+00:00 stderr F I1213 00:20:09.709133 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-etcd/etcd_TCP_node_router+switch_crc", UUID:"7b275cee-1db5-46de-8569-dce67abda430", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-etcd/etcd_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:2379, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.137", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9979, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}}
2025-12-13T00:20:09.709213147+00:00 stderr F I1213 00:20:09.709155 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-service-ca-operator/metrics_TCP_cluster", UUID:"aa7bf248-62bd-463a-bce9-eaabc90e6138", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-service-ca-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.165", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.10", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.709314950+00:00 stderr F I1213 00:20:09.709003 28750 services_controller.go:397] Service machine-api-operator-webhook retrieved from lister: &Service{ObjectMeta:{machine-api-operator-webhook openshift-machine-api 128263d4-d278-44f6-9ae4-9e9ecc572513 4862 0 2024-06-26 12:39:14 +0000 UTC map[k8s-app:machine-api-operator-webhook] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00089274b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 webhook-server},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.5.44,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.44],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.710605396+00:00 stderr F I1213 00:20:09.710529 28750 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator-webhook are: map[]
2025-12-13T00:20:09.710605396+00:00 stderr F I1213 00:20:09.710572 28750 services_controller.go:413] Built service openshift-machine-api/machine-api-operator-webhook LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.44"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.710605396+00:00 stderr F I1213 00:20:09.710587 28750 services_controller.go:414] Built service openshift-machine-api/machine-api-operator-webhook LB per-node configs []services.lbConfig(nil)
2025-12-13T00:20:09.710631816+00:00 stderr F I1213 00:20:09.710604 28750 services_controller.go:415] Built service openshift-machine-api/machine-api-operator-webhook LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.710652087+00:00 stderr F I1213 00:20:09.710619 28750 services_controller.go:421] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.44", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.710669688+00:00 stderr F I1213 00:20:09.710531 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.165:443:10.217.0.10:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {aa7bf248-62bd-463a-bce9-eaabc90e6138}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.710694438+00:00 stderr F I1213 00:20:09.709192 28750 services_controller.go:397] Service machine-config-controller retrieved from lister: &Service{ObjectMeta:{machine-config-controller openshift-machine-config-operator 3ff83f1a-4058-4b9e-a4fd-83f51836c82e 4847 0 2024-06-26 12:39:14 +0000 UTC map[k8s-app:machine-config-controller] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00089282b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.710758010+00:00 stderr F I1213 00:20:09.710642 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.137:2379:192.168.126.11:2379 10.217.5.137:9979:192.168.126.11:9979]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7b275cee-1db5-46de-8569-dce67abda430}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.710770030+00:00 stderr F I1213 00:20:09.710674 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-service-ca-operator/metrics]} name:Service_openshift-service-ca-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.165:443:10.217.0.10:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {aa7bf248-62bd-463a-bce9-eaabc90e6138}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.710811261+00:00 stderr F I1213 00:20:09.710736 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/webhook]} name:Service_openshift-console-operator/webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.84:9443:10.217.0.61:9443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a25f22fe-ae1f-47da-afc5-e1fde93bc930}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.710826232+00:00 stderr F I1213 00:20:09.710706 28750 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-controller are: map[TCP/metrics:{9001 [10.217.0.63] []}]
2025-12-13T00:20:09.710835652+00:00 stderr F I1213 00:20:09.710767 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.137:2379:192.168.126.11:2379 10.217.5.137:9979:192.168.126.11:9979]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7b275cee-1db5-46de-8569-dce67abda430}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.710868013+00:00 stderr F I1213 00:20:09.710808 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/webhook]} name:Service_openshift-console-operator/webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.84:9443:10.217.0.61:9443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a25f22fe-ae1f-47da-afc5-e1fde93bc930}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.710868013+00:00 stderr F I1213 00:20:09.710834 28750 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-controller LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.214"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"10.217.0.63"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.710868013+00:00 stderr F I1213 00:20:09.710862 28750 services_controller.go:414] Built service openshift-machine-config-operator/machine-config-controller LB per-node configs []services.lbConfig(nil)
2025-12-13T00:20:09.710901804+00:00 stderr F I1213 00:20:09.710880 28750 services_controller.go:415] Built service openshift-machine-config-operator/machine-config-controller LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.710981966+00:00 stderr F I1213 00:20:09.710914 28750 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-controller cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.214", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.63", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.710981966+00:00 stderr F I1213 00:20:09.710973 28750 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-controller per-node LB []services.LB{}
2025-12-13T00:20:09.710999086+00:00 stderr F I1213 00:20:09.710983 28750 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-controller template LB []services.LB{}
2025-12-13T00:20:09.710999086+00:00 stderr F I1213 00:20:09.710983 28750 iptables.go:146] Deleting rule in table: filter, chain: FORWARD with args: "-p tcp -m tcp --dport 22623 -j REJECT" for protocol: 0 2025-12-13T00:20:09.710999086+00:00 stderr F I1213 00:20:09.710993 28750 services_controller.go:424] Service openshift-machine-config-operator/machine-config-controller has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.711046388+00:00 stderr F I1213 00:20:09.710647 28750 services_controller.go:422] Built service openshift-machine-api/machine-api-operator-webhook per-node LB []services.LB{} 2025-12-13T00:20:09.711059128+00:00 stderr F I1213 00:20:09.711051 28750 services_controller.go:423] Built service openshift-machine-api/machine-api-operator-webhook template LB []services.LB{} 2025-12-13T00:20:09.711069148+00:00 stderr F I1213 00:20:09.711019 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster", UUID:"14fa73a9-9675-4207-8711-28031fe0d8db", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, 
Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.214", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.63", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.711078689+00:00 stderr F I1213 00:20:09.711068 28750 services_controller.go:424] Service openshift-machine-api/machine-api-operator-webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.711250193+00:00 stderr F I1213 00:20:09.711184 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.214:9001:10.217.0.63:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {14fa73a9-9675-4207-8711-28031fe0d8db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.711311105+00:00 stderr F I1213 00:20:09.711248 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-controller]} name:Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.214:9001:10.217.0.63:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {14fa73a9-9675-4207-8711-28031fe0d8db}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.711482800+00:00 stderr F I1213 00:20:09.711132 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster", UUID:"61827598-f37d-413c-8229-0ed852809fb6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.44", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.711659965+00:00 stderr F I1213 00:20:09.711590 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.44:443:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61827598-f37d-413c-8229-0ed852809fb6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.711773358+00:00 stderr F I1213 00:20:09.711701 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.44:443:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61827598-f37d-413c-8229-0ed852809fb6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.711901051+00:00 stderr F I1213 00:20:09.711861 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-service-ca-operator/metrics"} 2025-12-13T00:20:09.711901051+00:00 stderr F I1213 00:20:09.711893 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-service-ca-operator : 4.134114ms 2025-12-13T00:20:09.711964473+00:00 stderr F I1213 00:20:09.711921 28750 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator-machine-webhook 2025-12-13T00:20:09.712066116+00:00 stderr F I1213 00:20:09.711951 28750 services_controller.go:397] Service 
machine-api-operator-machine-webhook retrieved from lister: &Service{ObjectMeta:{machine-api-operator-machine-webhook openshift-machine-api 7dd2300f-f67e-4eb3-a3fa-1f22c230305a 4821 0 2024-06-26 12:39:13 +0000 UTC map[k8s-app:machine-api-operator-machine-webhook] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:machine-api-operator-machine-webhook-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00089265b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 machine-webhook},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{api: clusterapi,k8s-app: controller,},ClusterIP:10.217.4.242,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.242],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.712082296+00:00 stderr F I1213 00:20:09.712072 28750 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator-machine-webhook are: map[] 2025-12-13T00:20:09.712115737+00:00 stderr F I1213 00:20:09.712085 28750 services_controller.go:413] Built service openshift-machine-api/machine-api-operator-machine-webhook LB cluster-wide configs 
[]services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.242"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.712115737+00:00 stderr F I1213 00:20:09.712105 28750 services_controller.go:414] Built service openshift-machine-api/machine-api-operator-machine-webhook LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.712126947+00:00 stderr F I1213 00:20:09.712114 28750 services_controller.go:415] Built service openshift-machine-api/machine-api-operator-machine-webhook LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.712181449+00:00 stderr F I1213 00:20:09.712151 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-etcd/etcd"} 2025-12-13T00:20:09.712181449+00:00 stderr F I1213 00:20:09.712141 28750 services_controller.go:421] Built service openshift-machine-api/machine-api-operator-machine-webhook cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.242", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.712181449+00:00 stderr F I1213 00:20:09.712174 28750 services_controller.go:336] Finished syncing service etcd on namespace openshift-etcd : 3.999989ms 
2025-12-13T00:20:09.712193669+00:00 stderr F I1213 00:20:09.712177 28750 services_controller.go:422] Built service openshift-machine-api/machine-api-operator-machine-webhook per-node LB []services.LB{} 2025-12-13T00:20:09.712193669+00:00 stderr F I1213 00:20:09.712187 28750 services_controller.go:423] Built service openshift-machine-api/machine-api-operator-machine-webhook template LB []services.LB{} 2025-12-13T00:20:09.712210660+00:00 stderr F I1213 00:20:09.712192 28750 services_controller.go:332] Processing sync for service openshift-marketplace/certified-operators 2025-12-13T00:20:09.712210660+00:00 stderr F I1213 00:20:09.712196 28750 services_controller.go:424] Service openshift-machine-api/machine-api-operator-machine-webhook has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.712284802+00:00 stderr F I1213 00:20:09.712214 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster", UUID:"733bd827-e0e7-4901-9b0d-4f2fcb21be04", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.242", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.712296272+00:00 stderr F I1213 00:20:09.712202 28750 services_controller.go:397] Service certified-operators retrieved from lister: &Service{ObjectMeta:{certified-operators openshift-marketplace 97052848-7332-4254-8854-60d45bb91123 6358 0 2024-06-26 12:47:48 +0000 UTC map[olm.managed:true olm.service-spec-hash:7FOCZ3GVMQ1pwQKJahWmE09uJDRx6ab8xxcEYE] map[] [{operators.coreos.com/v1alpha1 CatalogSource certified-operators 16d5fe82-aef0-4700-8b13-e78e71d2a10d 0xc000892a6d 0xc000892a6e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.249,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.249],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.712350413+00:00 stderr F I1213 00:20:09.712323 28750 lb_config.go:1016] Cluster endpoints for openshift-marketplace/certified-operators are: map[TCP/grpc:{50051 [10.217.0.33] []}] 2025-12-13T00:20:09.712361244+00:00 stderr F I1213 00:20:09.712346 28750 services_controller.go:413] Built service openshift-marketplace/certified-operators LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.249"}, 
protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.33"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.712374314+00:00 stderr F I1213 00:20:09.712365 28750 services_controller.go:414] Built service openshift-marketplace/certified-operators LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.712383924+00:00 stderr F I1213 00:20:09.712373 28750 services_controller.go:415] Built service openshift-marketplace/certified-operators LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.712452287+00:00 stderr F I1213 00:20:09.712389 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-controller"} 2025-12-13T00:20:09.712466507+00:00 stderr F I1213 00:20:09.712455 28750 services_controller.go:336] Finished syncing service machine-config-controller on namespace openshift-machine-config-operator : 3.273061ms 2025-12-13T00:20:09.712507249+00:00 stderr F I1213 00:20:09.712441 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.242:443:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {733bd827-e0e7-4901-9b0d-4f2fcb21be04}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.712507249+00:00 stderr F I1213 00:20:09.712402 28750 services_controller.go:421] Built service 
openshift-marketplace/certified-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/certified-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.249", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.33", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.712522099+00:00 stderr F I1213 00:20:09.712510 28750 services_controller.go:422] Built service openshift-marketplace/certified-operators per-node LB []services.LB{} 2025-12-13T00:20:09.712555110+00:00 stderr F I1213 00:20:09.712530 28750 services_controller.go:423] Built service openshift-marketplace/certified-operators template LB []services.LB{} 2025-12-13T00:20:09.712568750+00:00 stderr F I1213 00:20:09.712558 28750 services_controller.go:424] Service openshift-marketplace/certified-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.712621762+00:00 stderr F I1213 00:20:09.712544 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/machine-api-operator-machine-webhook_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.242:443:]}] 
Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {733bd827-e0e7-4901-9b0d-4f2fcb21be04}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.712636762+00:00 stderr F I1213 00:20:09.712612 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console-operator/webhook"} 2025-12-13T00:20:09.712636762+00:00 stderr F I1213 00:20:09.712627 28750 services_controller.go:336] Finished syncing service webhook on namespace openshift-console-operator : 5.187873ms 2025-12-13T00:20:09.712671753+00:00 stderr F I1213 00:20:09.712650 28750 services_controller.go:332] Processing sync for service openshift-kube-storage-version-migrator-operator/metrics 2025-12-13T00:20:09.712720434+00:00 stderr F I1213 00:20:09.712609 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/certified-operators_TCP_cluster", UUID:"46ded4bb-21d9-4d3a-a886-ac7e004b5ce4", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/certified-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.249", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.33", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.712761095+00:00 stderr F I1213 00:20:09.712709 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-webhook"} 2025-12-13T00:20:09.712776316+00:00 stderr F I1213 00:20:09.712764 28750 services_controller.go:336] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api : 3.770714ms 2025-12-13T00:20:09.712791206+00:00 stderr F I1213 00:20:09.712701 28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-storage-version-migrator-operator 3e887cd0-b481-460c-b943-d944dc64df2f 4706 0 2024-06-26 12:39:17 +0000 UTC map[app:kube-storage-version-migrator-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062fe37 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
kube-storage-version-migrator-operator,},ClusterIP:10.217.4.151,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.151],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.712830047+00:00 stderr F I1213 00:20:09.712486 28750 services_controller.go:332] Processing sync for service openshift-apiserver/api 2025-12-13T00:20:09.712830047+00:00 stderr F I1213 00:20:09.712804 28750 lb_config.go:1016] Cluster endpoints for openshift-kube-storage-version-migrator-operator/metrics are: map[TCP/https:{8443 [10.217.0.16] []}] 2025-12-13T00:20:09.712861218+00:00 stderr F I1213 00:20:09.712820 28750 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/olm-operator-metrics 2025-12-13T00:20:09.712861218+00:00 stderr F I1213 00:20:09.712832 28750 services_controller.go:413] Built service openshift-kube-storage-version-migrator-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.151"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.16"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.712861218+00:00 stderr F I1213 00:20:09.712848 28750 services_controller.go:414] Built service openshift-kube-storage-version-migrator-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.712861218+00:00 stderr F I1213 00:20:09.712856 28750 services_controller.go:415] Built service 
openshift-kube-storage-version-migrator-operator/metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.712902099+00:00 stderr F I1213 00:20:09.712818 28750 services_controller.go:397] Service api retrieved from lister: &Service{ObjectMeta:{api openshift-apiserver fb5bd66d-5e82-4bcc-8126-39324a92dccc 5229 0 2024-06-26 12:47:09 +0000 UTC map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.69,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.69],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.712951941+00:00 stderr F I1213 00:20:09.712884 28750 services_controller.go:421] Built service openshift-kube-storage-version-migrator-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, 
Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.151", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.16", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.712951941+00:00 stderr F I1213 00:20:09.712910 28750 lb_config.go:1016] Cluster endpoints for openshift-apiserver/api are: map[TCP/https:{8443 [10.217.0.82] []}] 2025-12-13T00:20:09.712971721+00:00 stderr F I1213 00:20:09.712924 28750 services_controller.go:422] Built service openshift-kube-storage-version-migrator-operator/metrics per-node LB []services.LB{} 2025-12-13T00:20:09.712981961+00:00 stderr F I1213 00:20:09.712956 28750 services_controller.go:413] Built service openshift-apiserver/api LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.69"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.82"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.712981961+00:00 stderr F I1213 00:20:09.712970 28750 services_controller.go:423] Built service openshift-kube-storage-version-migrator-operator/metrics template LB []services.LB{} 2025-12-13T00:20:09.712992272+00:00 stderr F I1213 00:20:09.712977 28750 services_controller.go:414] Built service openshift-apiserver/api LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.712992272+00:00 stderr F I1213 00:20:09.712982 28750 services_controller.go:424] Service openshift-kube-storage-version-migrator-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.712992272+00:00 stderr F I1213 00:20:09.712987 28750 
services_controller.go:415] Built service openshift-apiserver/api LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.713026253+00:00 stderr F I1213 00:20:09.712852 28750 services_controller.go:397] Service olm-operator-metrics retrieved from lister: &Service{ObjectMeta:{olm-operator-metrics openshift-operator-lifecycle-manager f54a9b6f-c334-4276-9ca3-b290325fd276 5100 0 2024-06-26 12:39:23 +0000 UTC map[app:olm-operator] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:olm-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00089330f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: olm-operator,},ClusterIP:10.217.5.220,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.220],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.713057574+00:00 stderr F I1213 00:20:09.713010 28750 services_controller.go:421] Built service openshift-apiserver/api cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.69", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.713057574+00:00 stderr F I1213 00:20:09.713001 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster", UUID:"0f1d2ac6-1aa6-4b79-88e2-c16f28812b5b", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.151", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.16", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.713057574+00:00 stderr F I1213 00:20:09.713049 28750 services_controller.go:422] Built service openshift-apiserver/api per-node LB []services.LB{} 2025-12-13T00:20:09.713072664+00:00 stderr F I1213 00:20:09.713060 28750 services_controller.go:423] Built service openshift-apiserver/api template LB []services.LB{} 2025-12-13T00:20:09.713072664+00:00 stderr F I1213 00:20:09.713054 28750 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics are: map[TCP/https-metrics:{8443 [10.217.0.14] []}] 2025-12-13T00:20:09.713082854+00:00 stderr F I1213 00:20:09.713070 28750 services_controller.go:424] Service openshift-apiserver/api has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.713131156+00:00 stderr F I1213 00:20:09.713001 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/certified-operators]} name:Service_openshift-marketplace/certified-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.249:50051:10.217.0.33:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46ded4bb-21d9-4d3a-a886-ac7e004b5ce4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.713131156+00:00 stderr F I1213 00:20:09.713094 28750 services_controller.go:413] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.220"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.14"}, 
V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.713131156+00:00 stderr F I1213 00:20:09.713089 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver/api_TCP_cluster", UUID:"1b4cbcfd-9a85-4def-b60e-4acd6a3a0b14", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/api"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver/api_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/api"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.69", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.713165056+00:00 stderr F I1213 00:20:09.713135 28750 services_controller.go:414] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.713175197+00:00 stderr F I1213 00:20:09.713161 28750 services_controller.go:415] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.713238538+00:00 stderr F I1213 00:20:09.713136 28750 
transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/certified-operators]} name:Service_openshift-marketplace/certified-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.249:50051:10.217.0.33:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46ded4bb-21d9-4d3a-a886-ac7e004b5ce4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.713238538+00:00 stderr F I1213 00:20:09.713176 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.151:443:10.217.0.16:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0f1d2ac6-1aa6-4b79-88e2-c16f28812b5b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.713265119+00:00 stderr F I1213 00:20:09.713202 28750 services_controller.go:421] Built service openshift-operator-lifecycle-manager/olm-operator-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.220", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.14", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.713275639+00:00 stderr F I1213 00:20:09.713235 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.69:443:10.217.0.82:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b4cbcfd-9a85-4def-b60e-4acd6a3a0b14}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.713275639+00:00 stderr F I1213 00:20:09.713267 28750 services_controller.go:422] Built service openshift-operator-lifecycle-manager/olm-operator-metrics per-node LB []services.LB{} 2025-12-13T00:20:09.713291810+00:00 stderr F I1213 00:20:09.713239 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-storage-version-migrator-operator/metrics]} name:Service_openshift-kube-storage-version-migrator-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.151:443:10.217.0.16:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0f1d2ac6-1aa6-4b79-88e2-c16f28812b5b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.713301160+00:00 stderr F I1213 00:20:09.713284 28750 services_controller.go:423] Built service openshift-operator-lifecycle-manager/olm-operator-metrics template LB []services.LB{} 2025-12-13T00:20:09.713313210+00:00 stderr F I1213 00:20:09.713276 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.69:443:10.217.0.82:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b4cbcfd-9a85-4def-b60e-4acd6a3a0b14}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.713322551+00:00 stderr F I1213 00:20:09.713312 28750 services_controller.go:424] Service openshift-operator-lifecycle-manager/olm-operator-metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.713443624+00:00 stderr F I1213 00:20:09.713341 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster", UUID:"ed8cbb68-e50d-4e49-8495-eff895089054", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: 
[]services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.220", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.14", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.713752742+00:00 stderr F I1213 00:20:09.713647 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.220:8443:10.217.0.14:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ed8cbb68-e50d-4e49-8495-eff895089054}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.713825684+00:00 stderr F I1213 00:20:09.713741 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.220:8443:10.217.0.14:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ed8cbb68-e50d-4e49-8495-eff895089054}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.715245024+00:00 stderr F I1213 00:20:09.715184 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator-machine-webhook"} 2025-12-13T00:20:09.715245024+00:00 stderr F I1213 00:20:09.715212 28750 services_controller.go:336] Finished syncing service machine-api-operator-machine-webhook on namespace openshift-machine-api : 3.300041ms 2025-12-13T00:20:09.715245024+00:00 stderr F I1213 00:20:09.715229 28750 services_controller.go:332] Processing sync for service openshift-kube-scheduler/scheduler 2025-12-13T00:20:09.715408539+00:00 stderr F I1213 00:20:09.715238 28750 services_controller.go:397] Service scheduler retrieved from lister: &Service{ObjectMeta:{scheduler openshift-kube-scheduler a839a554-406d-4df8-b3ae-b533cb3e24bc 4695 0 2024-06-26 12:47:07 +0000 UTC map[prometheus:kube-scheduler] map[operator.openshift.io/spec-hash:f185087b7610499b49263c17685abe7f251a50c890808284a072687bf6d73275 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 10259 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{scheduler: 
true,},ClusterIP:10.217.5.218,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.218],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.715408539+00:00 stderr F I1213 00:20:09.715373 28750 lb_config.go:1016] Cluster endpoints for openshift-kube-scheduler/scheduler are: map[TCP/https:{10259 [192.168.126.11] []}] 2025-12-13T00:20:09.715408539+00:00 stderr F I1213 00:20:09.715388 28750 services_controller.go:413] Built service openshift-kube-scheduler/scheduler LB cluster-wide configs []services.lbConfig(nil) 2025-12-13T00:20:09.715426509+00:00 stderr F I1213 00:20:09.715401 28750 services_controller.go:414] Built service openshift-kube-scheduler/scheduler LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.218"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:10259, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.715447410+00:00 stderr F I1213 00:20:09.715429 28750 services_controller.go:415] Built service openshift-kube-scheduler/scheduler LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.715487271+00:00 stderr F I1213 00:20:09.715454 28750 services_controller.go:421] Built service openshift-kube-scheduler/scheduler cluster-wide LB []services.LB{} 2025-12-13T00:20:09.716109728+00:00 stderr F I1213 00:20:09.715495 28750 services_controller.go:422] Built service openshift-kube-scheduler/scheduler per-node LB 
[]services.LB{services.LB{Name:"Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.218", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10259, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.716170539+00:00 stderr F I1213 00:20:09.716151 28750 services_controller.go:423] Built service openshift-kube-scheduler/scheduler template LB []services.LB{} 2025-12-13T00:20:09.716210940+00:00 stderr F I1213 00:20:09.716195 28750 services_controller.go:424] Service openshift-kube-scheduler/scheduler has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.716291133+00:00 stderr F I1213 00:20:09.716245 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc", UUID:"915e622c-d89a-4906-831f-8daeda55c910", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.218", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10259, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.716348074+00:00 stderr F I1213 00:20:09.716196 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/api"} 2025-12-13T00:20:09.716378425+00:00 stderr F I1213 00:20:09.716352 28750 services_controller.go:336] Finished syncing service api on namespace openshift-apiserver : 3.862176ms 2025-12-13T00:20:09.716391355+00:00 stderr F I1213 00:20:09.716385 28750 services_controller.go:332] Processing sync for service openshift-console/downloads 2025-12-13T00:20:09.716466727+00:00 stderr F I1213 00:20:09.716181 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/certified-operators"} 2025-12-13T00:20:09.716466727+00:00 stderr F I1213 00:20:09.716457 28750 services_controller.go:336] Finished syncing service certified-operators on namespace openshift-marketplace : 4.262078ms 2025-12-13T00:20:09.716511469+00:00 stderr F I1213 00:20:09.716474 28750 services_controller.go:332] Processing sync for service openshift-network-diagnostics/network-check-target 2025-12-13T00:20:09.716511469+00:00 stderr F I1213 00:20:09.716395 28750 services_controller.go:397] Service downloads retrieved from lister: &Service{ObjectMeta:{downloads openshift-console d6818508-d113-4821-84c8-94f59cfa13cb 9742 0 2024-06-26 
12:53:44 +0000 UTC map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.196,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.196],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.716563750+00:00 stderr F I1213 00:20:09.716482 28750 services_controller.go:397] Service network-check-target retrieved from lister: &Service{ObjectMeta:{network-check-target openshift-network-diagnostics 151fdab6-cca2-4880-a96c-48e605cc8d3d 2803 0 2024-06-26 12:45:59 +0000 UTC map[] map[] [{operator.openshift.io/v1 Network cluster 5ca11404-f665-4aa0-85cf-da2f3e9c86ad 0xc000893047 0xc000893048}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: network-check-target,},ClusterIP:10.217.5.248,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.248],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.716563750+00:00 
stderr F I1213 00:20:09.716536 28750 lb_config.go:1016] Cluster endpoints for openshift-console/downloads are: map[TCP/http:{8080 [10.217.0.66] []}] 2025-12-13T00:20:09.716576340+00:00 stderr F I1213 00:20:09.716563 28750 lb_config.go:1016] Cluster endpoints for openshift-network-diagnostics/network-check-target are: map[TCP/:{8080 [10.217.0.4] []}] 2025-12-13T00:20:09.716576340+00:00 stderr F I1213 00:20:09.716561 28750 services_controller.go:413] Built service openshift-console/downloads LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.196"}, protocol:"TCP", inport:80, clusterEndpoints:services.lbEndpoints{Port:8080, V4IPs:[]string{"10.217.0.66"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.716589191+00:00 stderr F I1213 00:20:09.716579 28750 services_controller.go:414] Built service openshift-console/downloads LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.716599941+00:00 stderr F I1213 00:20:09.716579 28750 services_controller.go:413] Built service openshift-network-diagnostics/network-check-target LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.248"}, protocol:"TCP", inport:80, clusterEndpoints:services.lbEndpoints{Port:8080, V4IPs:[]string{"10.217.0.4"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.716599941+00:00 stderr F I1213 00:20:09.716589 28750 services_controller.go:415] Built service openshift-console/downloads LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.716599941+00:00 stderr F I1213 00:20:09.716594 28750 services_controller.go:414] Built service openshift-network-diagnostics/network-check-target LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.716615491+00:00 stderr F I1213 
00:20:09.716603 28750 services_controller.go:415] Built service openshift-network-diagnostics/network-check-target LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.716625262+00:00 stderr F I1213 00:20:09.716151 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-storage-version-migrator-operator/metrics"} 2025-12-13T00:20:09.716638162+00:00 stderr F I1213 00:20:09.716612 28750 services_controller.go:421] Built service openshift-console/downloads cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-console/downloads_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/downloads"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.196", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.66", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.716648162+00:00 stderr F I1213 00:20:09.716619 28750 services_controller.go:421] Built service openshift-network-diagnostics/network-check-target cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-network-diagnostics/network-check-target_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.248", Port:80, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{services.Addr{IP:"10.217.0.4", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.716648162+00:00 stderr F I1213 00:20:09.716640 28750 services_controller.go:422] Built service openshift-console/downloads per-node LB []services.LB{} 2025-12-13T00:20:09.716658443+00:00 stderr F I1213 00:20:09.716645 28750 services_controller.go:422] Built service openshift-network-diagnostics/network-check-target per-node LB []services.LB{} 2025-12-13T00:20:09.716658443+00:00 stderr F I1213 00:20:09.716642 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-storage-version-migrator-operator : 3.9822ms 2025-12-13T00:20:09.716658443+00:00 stderr F I1213 00:20:09.716650 28750 services_controller.go:423] Built service openshift-console/downloads template LB []services.LB{} 2025-12-13T00:20:09.716669943+00:00 stderr F I1213 00:20:09.716654 28750 services_controller.go:423] Built service openshift-network-diagnostics/network-check-target template LB []services.LB{} 2025-12-13T00:20:09.716669943+00:00 stderr F I1213 00:20:09.716663 28750 services_controller.go:424] Service openshift-console/downloads has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.716683333+00:00 stderr F I1213 00:20:09.716666 28750 services_controller.go:424] Service openshift-network-diagnostics/network-check-target has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.716683333+00:00 stderr F I1213 00:20:09.716674 28750 services_controller.go:332] Processing sync for service openshift-kube-controller-manager/kube-controller-manager 2025-12-13T00:20:09.716733235+00:00 stderr F I1213 00:20:09.716683 28750 services_controller.go:443] Services 
do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-console/downloads_TCP_cluster", UUID:"66ce05b8-462b-4fd9-b81f-bcc75a439997", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/downloads"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-console/downloads_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/downloads"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.196", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.66", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.716733235+00:00 stderr F I1213 00:20:09.716683 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-network-diagnostics/network-check-target_TCP_cluster", UUID:"64185ba6-b0f4-4c6d-b401-fb03d791f35d", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-network-diagnostics/network-check-target_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.248", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.4", Port:8080, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.716769186+00:00 stderr F I1213 00:20:09.716213 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/olm-operator-metrics"}
2025-12-13T00:20:09.716805137+00:00 stderr F I1213 00:20:09.716775 28750 services_controller.go:336] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager : 3.967389ms
2025-12-13T00:20:09.716805137+00:00 stderr F I1213 00:20:09.716691 28750 services_controller.go:397] Service kube-controller-manager retrieved from lister: &Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 419fdf14-5d8d-4271-b9e7-729de80d8cd2 5235 0 2024-06-26 12:47:11 +0000 UTC map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 10257 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{kube-controller-manager: true,},ClusterIP:10.217.4.112,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.112],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.716824217+00:00 stderr F I1213 00:20:09.716809 28750 services_controller.go:332] Processing sync for service openshift-dns-operator/metrics
2025-12-13T00:20:09.716872108+00:00 stderr F I1213 00:20:09.716809 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.248:80:10.217.0.4:8080]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {64185ba6-b0f4-4c6d-b401-fb03d791f35d}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.716872108+00:00 stderr F I1213 00:20:09.716835 28750 lb_config.go:1016] Cluster endpoints for openshift-kube-controller-manager/kube-controller-manager are: map[TCP/https:{10257 [192.168.126.11] []}]
2025-12-13T00:20:09.716872108+00:00 stderr F I1213 00:20:09.716821 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.196:80:10.217.0.66:8080]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {66ce05b8-462b-4fd9-b81f-bcc75a439997}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.716887429+00:00 stderr F I1213 00:20:09.716868 28750 services_controller.go:413] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs []services.lbConfig(nil)
2025-12-13T00:20:09.716887429+00:00 stderr F I1213 00:20:09.716851 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-diagnostics/network-check-target]} name:Service_openshift-network-diagnostics/network-check-target_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.248:80:10.217.0.4:8080]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {64185ba6-b0f4-4c6d-b401-fb03d791f35d}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.716900989+00:00 stderr F I1213 00:20:09.716867 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.196:80:10.217.0.66:8080]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {66ce05b8-462b-4fd9-b81f-bcc75a439997}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.716900989+00:00 stderr F I1213 00:20:09.716883 28750 services_controller.go:414] Built service openshift-kube-controller-manager/kube-controller-manager LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.112"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:10257, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.716920130+00:00 stderr F I1213 00:20:09.716909 28750 services_controller.go:415] Built service openshift-kube-controller-manager/kube-controller-manager LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.716955131+00:00 stderr F I1213 00:20:09.716822 28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-dns-operator c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2 4375 0 2024-06-26 12:39:11 +0000 UTC map[name:dns-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062f2f7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: dns-operator,},ClusterIP:10.217.4.52,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.52],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.717013142+00:00 stderr F I1213 00:20:09.716978 28750 lb_config.go:1016] Cluster endpoints for openshift-dns-operator/metrics are: map[TCP/metrics:{9393 [10.217.0.18] []}]
2025-12-13T00:20:09.717013142+00:00 stderr F I1213 00:20:09.716994 28750 services_controller.go:421] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB []services.LB{}
2025-12-13T00:20:09.717028743+00:00 stderr F I1213 00:20:09.717009 28750 services_controller.go:413] Built service openshift-dns-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.52"}, protocol:"TCP", inport:9393, clusterEndpoints:services.lbEndpoints{Port:9393, V4IPs:[]string{"10.217.0.18"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.717042543+00:00 stderr F I1213 00:20:09.717031 28750 services_controller.go:414] Built service openshift-dns-operator/metrics LB per-node configs []services.lbConfig(nil)
2025-12-13T00:20:09.717053643+00:00 stderr F I1213 00:20:09.717043 28750 services_controller.go:415] Built service openshift-dns-operator/metrics LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.717069084+00:00 stderr F I1213 00:20:09.717015 28750 services_controller.go:422] Built service openshift-kube-controller-manager/kube-controller-manager per-node LB []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.112", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10257, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}}
2025-12-13T00:20:09.717080464+00:00 stderr F I1213 00:20:09.717068 28750 services_controller.go:423] Built service openshift-kube-controller-manager/kube-controller-manager template LB []services.LB{}
2025-12-13T00:20:09.717121495+00:00 stderr F I1213 00:20:09.717089 28750 services_controller.go:424] Service openshift-kube-controller-manager/kube-controller-manager has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.717121495+00:00 stderr F I1213 00:20:09.717068 28750 services_controller.go:421] Built service openshift-dns-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-dns-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.52", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.18", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.717134515+00:00 stderr F I1213 00:20:09.717122 28750 services_controller.go:422] Built service openshift-dns-operator/metrics per-node LB []services.LB{}
2025-12-13T00:20:09.717148316+00:00 stderr F I1213 00:20:09.717138 28750 services_controller.go:423] Built service openshift-dns-operator/metrics template LB []services.LB{}
2025-12-13T00:20:09.717186457+00:00 stderr F I1213 00:20:09.717154 28750 services_controller.go:424] Service openshift-dns-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.717186457+00:00 stderr F I1213 00:20:09.717163 28750 iptables.go:146] Deleting rule in table: filter, chain: FORWARD with args: "-p tcp -m tcp --dport 22624 -j REJECT" for protocol: 0
2025-12-13T00:20:09.717201847+00:00 stderr F I1213 00:20:09.717131 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc", UUID:"b8952058-0880-4949-ae72-093072a0d7c5", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.112", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:10257, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}}
2025-12-13T00:20:09.717255539+00:00 stderr F I1213 00:20:09.717190 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-dns-operator/metrics_TCP_cluster", UUID:"404c8c52-bd98-412a-93df-61640fc7575c", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-dns-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.52", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.18", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.717329051+00:00 stderr F I1213 00:20:09.717165 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.218:443:192.168.126.11:10259]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {915e622c-d89a-4906-831f-8daeda55c910}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.717399143+00:00 stderr F I1213 00:20:09.717329 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.112:443:192.168.126.11:10257]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8952058-0880-4949-ae72-093072a0d7c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.717447554+00:00 stderr F I1213 00:20:09.717381 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.52:9393:10.217.0.18:9393]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {404c8c52-bd98-412a-93df-61640fc7575c}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.717447554+00:00 stderr F I1213 00:20:09.717391 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.112:443:192.168.126.11:10257]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b8952058-0880-4949-ae72-093072a0d7c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.717447554+00:00 stderr F I1213 00:20:09.717429 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-network-diagnostics/network-check-target"}
2025-12-13T00:20:09.717468975+00:00 stderr F I1213 00:20:09.717448 28750 services_controller.go:336] Finished syncing service network-check-target on namespace openshift-network-diagnostics : 973.817µs
2025-12-13T00:20:09.717468975+00:00 stderr F I1213 00:20:09.717462 28750 services_controller.go:332] Processing sync for service openshift-machine-api/cluster-autoscaler-operator
2025-12-13T00:20:09.717481485+00:00 stderr F I1213 00:20:09.717441 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.52:9393:10.217.0.18:9393]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {404c8c52-bd98-412a-93df-61640fc7575c}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.717534476+00:00 stderr F I1213 00:20:09.717368 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.218:443:192.168.126.11:10259]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {915e622c-d89a-4906-831f-8daeda55c910}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.717607528+00:00 stderr F I1213 00:20:09.717573 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-console/downloads"}
2025-12-13T00:20:09.717607528+00:00 stderr F I1213 00:20:09.717594 28750 services_controller.go:336] Finished syncing service downloads on namespace openshift-console : 1.210003ms
2025-12-13T00:20:09.717621729+00:00 stderr F I1213 00:20:09.717607 28750 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/catalog-operator-metrics
2025-12-13T00:20:09.717688731+00:00 stderr F I1213 00:20:09.717613 28750 services_controller.go:397] Service catalog-operator-metrics retrieved from lister: &Service{ObjectMeta:{catalog-operator-metrics openshift-operator-lifecycle-manager 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72 5067 0 2024-06-26 12:39:23 +0000 UTC map[app:catalog-operator] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:catalog-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000893257 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https-metrics,Protocol:TCP,Port:8443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: catalog-operator,},ClusterIP:10.217.5.17,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.17],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.717708611+00:00 stderr F I1213 00:20:09.717687 28750 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/catalog-operator-metrics are: map[TCP/https-metrics:{8443 [10.217.0.11] []}]
2025-12-13T00:20:09.717708611+00:00 stderr F I1213 00:20:09.717698 28750 services_controller.go:413] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.17"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.717721662+00:00 stderr F I1213 00:20:09.717708 28750 services_controller.go:414] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs []services.lbConfig(nil)
2025-12-13T00:20:09.717721662+00:00 stderr F I1213 00:20:09.717469 28750 services_controller.go:397] Service cluster-autoscaler-operator retrieved from lister: &Service{ObjectMeta:{cluster-autoscaler-operator openshift-machine-api c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062 4713 0 2024-06-26 12:39:18 +0000 UTC map[k8s-app:cluster-autoscaler-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:cluster-autoscaler-operator-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062fecb }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9192,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: cluster-autoscaler-operator,},ClusterIP:10.217.5.83,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.83],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.717721662+00:00 stderr F I1213 00:20:09.717714 28750 services_controller.go:415] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.717769373+00:00 stderr F I1213 00:20:09.717730 28750 services_controller.go:421] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.17", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.11", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.717769373+00:00 stderr F I1213 00:20:09.717740 28750 lb_config.go:1016] Cluster endpoints for openshift-machine-api/cluster-autoscaler-operator are: map[]
2025-12-13T00:20:09.717769373+00:00 stderr F I1213 00:20:09.717753 28750 services_controller.go:422] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics per-node LB []services.LB{}
2025-12-13T00:20:09.717769373+00:00 stderr F I1213 00:20:09.717760 28750 services_controller.go:423] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics template LB []services.LB{}
2025-12-13T00:20:09.717785363+00:00 stderr F I1213 00:20:09.717767 28750 services_controller.go:424] Service openshift-operator-lifecycle-manager/catalog-operator-metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.717785363+00:00 stderr F I1213 00:20:09.717762 28750 services_controller.go:413] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.83"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.83"}, protocol:"TCP", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.717785363+00:00 stderr F I1213 00:20:09.717779 28750 services_controller.go:414] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs []services.lbConfig(nil)
2025-12-13T00:20:09.717798734+00:00 stderr F I1213 00:20:09.717787 28750 services_controller.go:415] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.717810754+00:00 stderr F I1213 00:20:09.717779 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster", UUID:"1a7d5584-355c-4545-8ea4-6c97fee9c8d6", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.17", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.11", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.717829774+00:00 stderr F I1213 00:20:09.717804 28750 services_controller.go:421] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.717840955+00:00 stderr F I1213 00:20:09.717830 28750 services_controller.go:422] Built service openshift-machine-api/cluster-autoscaler-operator per-node LB []services.LB{}
2025-12-13T00:20:09.717852255+00:00 stderr F I1213 00:20:09.717841 28750 services_controller.go:423] Built service openshift-machine-api/cluster-autoscaler-operator template LB []services.LB{}
2025-12-13T00:20:09.717862855+00:00 stderr F I1213 00:20:09.717850 28750 services_controller.go:424] Service openshift-machine-api/cluster-autoscaler-operator has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.717911817+00:00 stderr F I1213 00:20:09.717868 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.17:8443:10.217.0.11:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a7d5584-355c-4545-8ea4-6c97fee9c8d6}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.717911817+00:00 stderr F I1213 00:20:09.717867 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster", UUID:"6fdcf5bf-9a8f-4dc7-b401-5b01d42512e2", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:"10.217.5.83", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.717927307+00:00 stderr F I1213 00:20:09.717899 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/catalog-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.17:8443:10.217.0.11:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a7d5584-355c-4545-8ea4-6c97fee9c8d6}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.718072552+00:00 stderr F I1213 00:20:09.718013 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager/kube-controller-manager"}
2025-12-13T00:20:09.718072552+00:00 stderr F I1213 00:20:09.718035 28750 services_controller.go:336] Finished syncing service kube-controller-manager on namespace openshift-kube-controller-manager : 1.363178ms
2025-12-13T00:20:09.718072552+00:00 stderr F I1213 00:20:09.718010 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.83:443: 10.217.5.83:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6fdcf5bf-9a8f-4dc7-b401-5b01d42512e2}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.718072552+00:00 stderr F I1213 00:20:09.718049 28750 services_controller.go:332] Processing sync for service openshift-kube-controller-manager-operator/metrics
2025-12-13T00:20:09.718092923+00:00 stderr F I1213 00:20:09.718051 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.83:443: 10.217.5.83:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6fdcf5bf-9a8f-4dc7-b401-5b01d42512e2}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.718188595+00:00 stderr F I1213 00:20:09.718057 28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-kube-controller-manager-operator 136038f9-f376-4b0b-8c75-a42240d176cc 4549 0 2024-06-26 12:39:14 +0000 UTC map[app:kube-controller-manager-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:kube-controller-manager-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062fb5f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-controller-manager-operator,},ClusterIP:10.217.4.79,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.79],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.718213006+00:00 stderr F I1213 00:20:09.718190 28750 lb_config.go:1016] Cluster endpoints for openshift-kube-controller-manager-operator/metrics are: map[TCP/https:{8443 [10.217.0.15] []}]
2025-12-13T00:20:09.718223046+00:00 stderr F I1213 00:20:09.718204 28750 services_controller.go:413] Built service openshift-kube-controller-manager-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.79"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.15"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.718223046+00:00 stderr F I1213
00:20:09.718218 28750 services_controller.go:414] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.718233136+00:00 stderr F I1213 00:20:09.718225 28750 services_controller.go:415] Built service openshift-kube-controller-manager-operator/metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.718245207+00:00 stderr F I1213 00:20:09.718210 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns-operator/metrics"} 2025-12-13T00:20:09.718277828+00:00 stderr F I1213 00:20:09.718253 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-dns-operator : 1.442121ms 2025-12-13T00:20:09.718277828+00:00 stderr F I1213 00:20:09.718248 28750 services_controller.go:421] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.79", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.15", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.718277828+00:00 stderr F I1213 00:20:09.718273 28750 services_controller.go:422] Built service openshift-kube-controller-manager-operator/metrics per-node LB []services.LB{} 2025-12-13T00:20:09.718289578+00:00 stderr F I1213 00:20:09.718275 28750 loadbalancer.go:304] Deleted 0 stale 
LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/catalog-operator-metrics"} 2025-12-13T00:20:09.718289578+00:00 stderr F I1213 00:20:09.718284 28750 services_controller.go:332] Processing sync for service openshift-network-operator/metrics 2025-12-13T00:20:09.718299668+00:00 stderr F I1213 00:20:09.718282 28750 services_controller.go:423] Built service openshift-kube-controller-manager-operator/metrics template LB []services.LB{} 2025-12-13T00:20:09.718312729+00:00 stderr F I1213 00:20:09.718301 28750 services_controller.go:424] Service openshift-kube-controller-manager-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.718312729+00:00 stderr F I1213 00:20:09.718300 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-network-operator : 16.45µs 2025-12-13T00:20:09.718343079+00:00 stderr F I1213 00:20:09.718322 28750 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service 2025-12-13T00:20:09.718355600+00:00 stderr F I1213 00:20:09.718318 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster", UUID:"a30efd8e-3079-4ca5-a3fd-b660bc1089ea", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", 
ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.79", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.15", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.718365580+00:00 stderr F I1213 00:20:09.718289 28750 services_controller.go:336] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager : 682.22µs 2025-12-13T00:20:09.718375250+00:00 stderr F I1213 00:20:09.718369 28750 services_controller.go:332] Processing sync for service default/kubernetes 2025-12-13T00:20:09.718452752+00:00 stderr F I1213 00:20:09.718375 28750 services_controller.go:397] Service kubernetes retrieved from lister: &Service{ObjectMeta:{kubernetes default 057366b7-69c1-49f0-88ca-660c8863cae8 249 0 2024-06-26 12:38:03 +0000 UTC map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.718452752+00:00 stderr F 
I1213 00:20:09.718335 28750 services_controller.go:397] Service packageserver-service retrieved from lister: &Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager 8099635d-a821-489e-8b18-cae3e83f00b2 6451 0 2024-06-26 12:47:50 +0000 UTC map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver 0beab272-7637-4d44-b3aa-502dcafbc929 0xc00089347d 0xc00089347e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:5443,Protocol:TCP,Port:5443,TargetPort:{0 5443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: packageserver,},ClusterIP:10.217.4.230,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.230],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.718452752+00:00 stderr F I1213 00:20:09.718439 28750 lb_config.go:1016] Cluster endpoints for default/kubernetes are: map[TCP/https:{6443 [192.168.126.11] []}] 2025-12-13T00:20:09.718470383+00:00 stderr F I1213 00:20:09.718450 28750 services_controller.go:413] Built service default/kubernetes LB cluster-wide configs []services.lbConfig(nil) 2025-12-13T00:20:09.718470383+00:00 stderr F I1213 00:20:09.718456 28750 services_controller.go:414] Built service default/kubernetes LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.1"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 
2025-12-13T00:20:09.718470383+00:00 stderr F I1213 00:20:09.718434 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.79:443:10.217.0.15:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a30efd8e-3079-4ca5-a3fd-b660bc1089ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.718481293+00:00 stderr F I1213 00:20:09.718466 28750 services_controller.go:415] Built service default/kubernetes LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.718481293+00:00 stderr F I1213 00:20:09.718459 28750 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/packageserver-service are: map[TCP/5443:{5443 [10.217.0.43] []}] 2025-12-13T00:20:09.718491283+00:00 stderr F I1213 00:20:09.718485 28750 services_controller.go:421] Built service default/kubernetes cluster-wide LB []services.LB{} 2025-12-13T00:20:09.718502004+00:00 stderr F I1213 00:20:09.718474 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.79:443:10.217.0.15:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a30efd8e-3079-4ca5-a3fd-b660bc1089ea}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.718502004+00:00 stderr F I1213 00:20:09.718486 28750 services_controller.go:413] Built service openshift-operator-lifecycle-manager/packageserver-service LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.230"}, protocol:"TCP", inport:5443, clusterEndpoints:services.lbEndpoints{Port:5443, V4IPs:[]string{"10.217.0.43"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.718512414+00:00 stderr F I1213 00:20:09.718491 28750 services_controller.go:422] Built service default/kubernetes per-node LB []services.LB{services.LB{Name:"Service_default/kubernetes_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.1", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.718512414+00:00 stderr F I1213 00:20:09.718508 28750 services_controller.go:423] Built service default/kubernetes template LB []services.LB{} 2025-12-13T00:20:09.718525664+00:00 stderr F I1213 00:20:09.718507 28750 services_controller.go:414] Built service openshift-operator-lifecycle-manager/packageserver-service LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.718525664+00:00 stderr F I1213 00:20:09.718515 28750 services_controller.go:424] Service default/kubernetes has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per 
node) and 0 (template) load balancers 2025-12-13T00:20:09.718535765+00:00 stderr F I1213 00:20:09.718526 28750 services_controller.go:415] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.718594746+00:00 stderr F I1213 00:20:09.718529 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_default/kubernetes_TCP_node_router+switch_crc", UUID:"64f8efea-8a64-40da-b2c5-6e8d4a7a1c68", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_default/kubernetes_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.1", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.718653528+00:00 stderr F I1213 00:20:09.718524 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-scheduler/scheduler"} 2025-12-13T00:20:09.718665018+00:00 stderr F I1213 00:20:09.718606 28750 services_controller.go:421] Built service 
openshift-operator-lifecycle-manager/packageserver-service cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.230", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.43", Port:5443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.718665018+00:00 stderr F I1213 00:20:09.718652 28750 services_controller.go:336] Finished syncing service scheduler on namespace openshift-kube-scheduler : 3.416004ms 2025-12-13T00:20:09.718681639+00:00 stderr F I1213 00:20:09.718669 28750 services_controller.go:422] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB []services.LB{} 2025-12-13T00:20:09.718681639+00:00 stderr F I1213 00:20:09.718677 28750 services_controller.go:332] Processing sync for service openshift-ingress-operator/metrics 2025-12-13T00:20:09.718714070+00:00 stderr F I1213 00:20:09.718689 28750 services_controller.go:423] Built service openshift-operator-lifecycle-manager/packageserver-service template LB []services.LB{} 2025-12-13T00:20:09.718724320+00:00 stderr F I1213 00:20:09.718712 28750 services_controller.go:424] Service openshift-operator-lifecycle-manager/packageserver-service has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.718812342+00:00 stderr F I1213 00:20:09.718658 28750 model_client.go:381] Update operations generated 
as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {64f8efea-8a64-40da-b2c5-6e8d4a7a1c68}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.718812342+00:00 stderr F I1213 00:20:09.718701 28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-ingress-operator 1e390522-c38e-4189-86b8-ad75c61e3844 6514 0 2024-06-26 12:47:50 +0000 UTC map[name:ingress-operator] map[capability.openshift.io/name:Ingress include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062f887 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: 
ingress-operator,},ClusterIP:10.217.4.255,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.255],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.718812342+00:00 stderr F I1213 00:20:09.718778 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/cluster-autoscaler-operator"} 2025-12-13T00:20:09.718812342+00:00 stderr F I1213 00:20:09.718738 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster", UUID:"68eacba8-7e00-494f-a0e9-a0d7f4ab5c77", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.230", Port:5443, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.43", Port:5443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.718812342+00:00 stderr F I1213 00:20:09.718797 28750 services_controller.go:336] Finished syncing service cluster-autoscaler-operator on namespace openshift-machine-api : 1.334858ms 2025-12-13T00:20:09.718831173+00:00 stderr F I1213 00:20:09.718788 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {64f8efea-8a64-40da-b2c5-6e8d4a7a1c68}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.718831173+00:00 stderr F I1213 00:20:09.718814 28750 services_controller.go:332] Processing sync for service openshift-authentication/oauth-openshift 2025-12-13T00:20:09.718831173+00:00 stderr F I1213 00:20:09.718811 28750 lb_config.go:1016] Cluster endpoints for openshift-ingress-operator/metrics are: map[TCP/metrics:{9393 [10.217.0.45] []}] 2025-12-13T00:20:09.718863134+00:00 stderr F I1213 00:20:09.718832 28750 services_controller.go:413] Built service openshift-ingress-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.255"}, protocol:"TCP", inport:9393, clusterEndpoints:services.lbEndpoints{Port:9393, V4IPs:[]string{"10.217.0.45"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, 
hasNodePort:false}} 2025-12-13T00:20:09.718863134+00:00 stderr F I1213 00:20:09.718854 28750 services_controller.go:414] Built service openshift-ingress-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.718873664+00:00 stderr F I1213 00:20:09.718865 28750 services_controller.go:415] Built service openshift-ingress-operator/metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.718902945+00:00 stderr F I1213 00:20:09.718823 28750 services_controller.go:397] Service oauth-openshift retrieved from lister: &Service{ObjectMeta:{oauth-openshift openshift-authentication 64190ecd-229c-482a-966a-b5649b5042ed 5248 0 2024-06-26 12:47:15 +0000 UTC map[app:oauth-openshift] map[operator.openshift.io/spec-hash:d9e6d53076d47ab2d123d8b1ba8ec6543488d973dcc4e02349493cd1c33bce83 service.alpha.openshift.io/serving-cert-secret-name:v4-0-config-system-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: oauth-openshift,},ClusterIP:10.217.4.40,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.40],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.718916595+00:00 stderr F I1213 00:20:09.718905 28750 lb_config.go:1016] Cluster endpoints for openshift-authentication/oauth-openshift are: map[TCP/https:{6443 [10.217.0.72] []}] 
2025-12-13T00:20:09.718926475+00:00 stderr F I1213 00:20:09.718892 28750 services_controller.go:421] Built service openshift-ingress-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-ingress-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.255", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.45", Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.718954866+00:00 stderr F I1213 00:20:09.718919 28750 services_controller.go:413] Built service openshift-authentication/oauth-openshift LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.40"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"10.217.0.72"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.718969216+00:00 stderr F I1213 00:20:09.718955 28750 services_controller.go:414] Built service openshift-authentication/oauth-openshift LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.718969216+00:00 stderr F I1213 00:20:09.718955 28750 services_controller.go:422] Built service openshift-ingress-operator/metrics per-node LB []services.LB{} 2025-12-13T00:20:09.718969216+00:00 stderr F I1213 00:20:09.718963 28750 services_controller.go:415] Built service openshift-authentication/oauth-openshift LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.718980247+00:00 stderr 
F I1213 00:20:09.718973 28750 services_controller.go:423] Built service openshift-ingress-operator/metrics template LB []services.LB{} 2025-12-13T00:20:09.718990067+00:00 stderr F I1213 00:20:09.718969 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-controller-manager-operator/metrics"} 2025-12-13T00:20:09.718999797+00:00 stderr F I1213 00:20:09.718987 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-kube-controller-manager-operator : 936.586µs 2025-12-13T00:20:09.718999797+00:00 stderr F I1213 00:20:09.718986 28750 services_controller.go:424] Service openshift-ingress-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.719010158+00:00 stderr F I1213 00:20:09.718981 28750 services_controller.go:421] Built service openshift-authentication/oauth-openshift cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-authentication/oauth-openshift_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.40", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.72", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.719010158+00:00 stderr F I1213 00:20:09.718999 28750 services_controller.go:332] Processing sync for service openshift-route-controller-manager/route-controller-manager 2025-12-13T00:20:09.719023648+00:00 stderr F I1213 00:20:09.719008 28750 
services_controller.go:422] Built service openshift-authentication/oauth-openshift per-node LB []services.LB{} 2025-12-13T00:20:09.719023648+00:00 stderr F I1213 00:20:09.719017 28750 services_controller.go:423] Built service openshift-authentication/oauth-openshift template LB []services.LB{} 2025-12-13T00:20:09.719036038+00:00 stderr F I1213 00:20:09.719027 28750 services_controller.go:424] Service openshift-authentication/oauth-openshift has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.719080809+00:00 stderr F I1213 00:20:09.719009 28750 services_controller.go:397] Service route-controller-manager retrieved from lister: &Service{ObjectMeta:{route-controller-manager openshift-route-controller-manager 105b901a-a54d-4dae-b7b6-99e83d48166f 5156 0 2024-06-26 12:47:06 +0000 UTC map[prometheus:route-controller-manager] map[operator.openshift.io/spec-hash:a480352ea60c2dcd2b3870bf0c3650528ef9b51aaa3fe6baa1e3711da18fffa3 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{route-controller-manager: true,},ClusterIP:10.217.5.173,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.173],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 
2025-12-13T00:20:09.719080809+00:00 stderr F I1213 00:20:09.719002 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.230:5443:10.217.0.43:5443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {68eacba8-7e00-494f-a0e9-a0d7f4ab5c77}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.719080809+00:00 stderr F I1213 00:20:09.719045 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-authentication/oauth-openshift_TCP_cluster", UUID:"01ea2a7a-b4a0-46ba-a9ec-611e94b854e1", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-authentication/oauth-openshift_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.40", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{services.Addr{IP:"10.217.0.72", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.719095900+00:00 stderr F I1213 00:20:09.719080 28750 lb_config.go:1016] Cluster endpoints for openshift-route-controller-manager/route-controller-manager are: map[TCP/https:{8443 [10.217.0.88] []}] 2025-12-13T00:20:09.719105730+00:00 stderr F I1213 00:20:09.719094 28750 services_controller.go:413] Built service openshift-route-controller-manager/route-controller-manager LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.173"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.88"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.719115340+00:00 stderr F I1213 00:20:09.719106 28750 services_controller.go:414] Built service openshift-route-controller-manager/route-controller-manager LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.719125151+00:00 stderr F I1213 00:20:09.719114 28750 services_controller.go:415] Built service openshift-route-controller-manager/route-controller-manager LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.719125151+00:00 stderr F I1213 00:20:09.719084 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.230:5443:10.217.0.43:5443]}] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {68eacba8-7e00-494f-a0e9-a0d7f4ab5c77}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.719167062+00:00 stderr F I1213 00:20:09.719128 28750 services_controller.go:421] Built service openshift-route-controller-manager/route-controller-manager cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.173", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.88", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.719167062+00:00 stderr F I1213 00:20:09.719156 28750 services_controller.go:422] Built service openshift-route-controller-manager/route-controller-manager per-node LB []services.LB{} 2025-12-13T00:20:09.719177752+00:00 stderr F I1213 00:20:09.719165 28750 services_controller.go:423] Built service openshift-route-controller-manager/route-controller-manager template LB []services.LB{} 2025-12-13T00:20:09.719190812+00:00 stderr F I1213 00:20:09.719174 28750 services_controller.go:424] Service openshift-route-controller-manager/route-controller-manager has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.719190812+00:00 stderr F I1213 00:20:09.719177 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", 
"k8s.ovn.org/owner":"default/kubernetes"} 2025-12-13T00:20:09.719203353+00:00 stderr F I1213 00:20:09.719163 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:10.217.0.72:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01ea2a7a-b4a0-46ba-a9ec-611e94b854e1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.719213263+00:00 stderr F I1213 00:20:09.719014 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-ingress-operator/metrics_TCP_cluster", UUID:"3ce3d754-52d1-4b39-a0be-cc15ffb5a10c", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-ingress-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.255", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.45", 
Port:9393, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.719225403+00:00 stderr F I1213 00:20:09.719190 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster", UUID:"5536fef9-19d0-4375-b45b-e06dafe12061", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.173", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.88", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.719268035+00:00 stderr F I1213 00:20:09.719209 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.40:443:10.217.0.72:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01ea2a7a-b4a0-46ba-a9ec-611e94b854e1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.719365137+00:00 stderr F I1213 00:20:09.719299 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.173:443:10.217.0.88:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5536fef9-19d0-4375-b45b-e06dafe12061}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.719365137+00:00 stderr F I1213 00:20:09.719341 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.173:443:10.217.0.88:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5536fef9-19d0-4375-b45b-e06dafe12061}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.719379638+00:00 stderr F I1213 00:20:09.719195 28750 services_controller.go:336] 
Finished syncing service kubernetes on namespace default : 824.152µs 2025-12-13T00:20:09.719379638+00:00 stderr F I1213 00:20:09.719333 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.255:9393:10.217.0.45:9393]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3ce3d754-52d1-4b39-a0be-cc15ffb5a10c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.719392178+00:00 stderr F I1213 00:20:09.719383 28750 services_controller.go:332] Processing sync for service openshift-kube-apiserver/apiserver 2025-12-13T00:20:09.719444639+00:00 stderr F I1213 00:20:09.719388 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.255:9393:10.217.0.45:9393]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3ce3d754-52d1-4b39-a0be-cc15ffb5a10c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.719444639+00:00 stderr F I1213 00:20:09.719392 28750 services_controller.go:397] Service apiserver retrieved from lister: &Service{ObjectMeta:{apiserver openshift-kube-apiserver 44a33f79-7e24-4f1b-bc46-f52dfcec13b8 3793 0 2024-06-26 12:47:04 +0000 UTC map[] 
map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.86,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.86],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.719479360+00:00 stderr F I1213 00:20:09.719456 28750 lb_config.go:1016] Cluster endpoints for openshift-kube-apiserver/apiserver are: map[TCP/https:{6443 [192.168.126.11] []}] 2025-12-13T00:20:09.719479360+00:00 stderr F I1213 00:20:09.719471 28750 services_controller.go:413] Built service openshift-kube-apiserver/apiserver LB cluster-wide configs []services.lbConfig(nil) 2025-12-13T00:20:09.719490161+00:00 stderr F I1213 00:20:09.719478 28750 services_controller.go:414] Built service openshift-kube-apiserver/apiserver LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.86"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.719499891+00:00 stderr F I1213 00:20:09.719491 28750 services_controller.go:415] Built service openshift-kube-apiserver/apiserver LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.719528452+00:00 stderr F I1213 00:20:09.719511 28750 services_controller.go:421] Built 
service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{} 2025-12-13T00:20:09.719567073+00:00 stderr F I1213 00:20:09.719520 28750 services_controller.go:422] Built service openshift-kube-apiserver/apiserver per-node LB []services.LB{services.LB{Name:"Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.86", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.719567073+00:00 stderr F I1213 00:20:09.719552 28750 services_controller.go:423] Built service openshift-kube-apiserver/apiserver template LB []services.LB{} 2025-12-13T00:20:09.719567073+00:00 stderr F I1213 00:20:09.719562 28750 services_controller.go:424] Service openshift-kube-apiserver/apiserver has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.719626714+00:00 stderr F I1213 00:20:09.719577 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc", UUID:"274b346b-9aa9-401d-b752-d4168e631034", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.86", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.719626714+00:00 stderr F I1213 00:20:09.719600 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/packageserver-service"} 2025-12-13T00:20:09.719640765+00:00 stderr F I1213 00:20:09.719625 28750 services_controller.go:336] Finished syncing service packageserver-service on namespace openshift-operator-lifecycle-manager : 1.303495ms 2025-12-13T00:20:09.719653055+00:00 stderr F I1213 00:20:09.719646 28750 services_controller.go:332] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-node 2025-12-13T00:20:09.719692636+00:00 stderr F I1213 00:20:09.719658 28750 services_controller.go:336] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes : 12.06µs 2025-12-13T00:20:09.719692636+00:00 stderr F I1213 00:20:09.719679 28750 services_controller.go:332] Processing sync for service openshift-config-operator/metrics 2025-12-13T00:20:09.719746138+00:00 stderr F I1213 00:20:09.719697 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.86:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {274b346b-9aa9-401d-b752-d4168e631034}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.719775528+00:00 stderr F I1213 00:20:09.719737 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-apiserver/apiserver]} name:Service_openshift-kube-apiserver/apiserver_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.86:443:192.168.126.11:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {274b346b-9aa9-401d-b752-d4168e631034}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.719775528+00:00 stderr F I1213 00:20:09.719744 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-authentication/oauth-openshift"} 2025-12-13T00:20:09.719786379+00:00 stderr F I1213 00:20:09.719689 28750 services_controller.go:397] Service metrics retrieved from lister: &Service{ObjectMeta:{metrics openshift-config-operator f04ada1b-55ad-45a3-9231-6d1ff7242fa0 4291 0 2024-06-26 12:39:24 +0000 UTC map[app:openshift-config-operator] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true 
include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:config-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062ec6f }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: openshift-config-operator,},ClusterIP:10.217.5.120,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.120],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.719786379+00:00 stderr F I1213 00:20:09.719776 28750 services_controller.go:336] Finished syncing service oauth-openshift on namespace openshift-authentication : 959.516µs 2025-12-13T00:20:09.719828230+00:00 stderr F I1213 00:20:09.719797 28750 services_controller.go:332] Processing sync for service openshift-ingress/router-internal-default 2025-12-13T00:20:09.719828230+00:00 stderr F I1213 00:20:09.719805 28750 lb_config.go:1016] Cluster endpoints for openshift-config-operator/metrics are: map[TCP/https:{8443 [10.217.0.23] []}] 2025-12-13T00:20:09.719841800+00:00 stderr F I1213 00:20:09.719826 28750 services_controller.go:413] Built service openshift-config-operator/metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.120"}, protocol:"TCP", inport:443, 
clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.23"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.719853981+00:00 stderr F I1213 00:20:09.719845 28750 services_controller.go:414] Built service openshift-config-operator/metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.719863691+00:00 stderr F I1213 00:20:09.719856 28750 services_controller.go:415] Built service openshift-config-operator/metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.719921592+00:00 stderr F I1213 00:20:09.719816 28750 services_controller.go:397] Service router-internal-default retrieved from lister: &Service{ObjectMeta:{router-internal-default openshift-ingress 3ded9605-ced3-4583-97b6-f93264b463a7 7398 0 2024-06-26 12:48:38 +0000 UTC map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{apps/v1 Deployment router-default 9ae4d312-7fc4-4344-ab7a-669da95f56bf 0xc00062f92e }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: 
default,},ClusterIP:10.217.4.220,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.220],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.719921592+00:00 stderr F I1213 00:20:09.719881 28750 services_controller.go:421] Built service openshift-config-operator/metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-config-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.120", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.23", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.719951653+00:00 stderr F I1213 00:20:09.719920 28750 services_controller.go:422] Built service openshift-config-operator/metrics per-node LB []services.LB{} 2025-12-13T00:20:09.719997344+00:00 stderr F I1213 00:20:09.719969 28750 services_controller.go:423] Built service openshift-config-operator/metrics template LB []services.LB{} 2025-12-13T00:20:09.719997344+00:00 stderr F I1213 00:20:09.719928 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-route-controller-manager/route-controller-manager"} 
2025-12-13T00:20:09.720010605+00:00 stderr F I1213 00:20:09.719999 28750 services_controller.go:424] Service openshift-config-operator/metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.720010605+00:00 stderr F I1213 00:20:09.720001 28750 services_controller.go:336] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager : 1.000457ms 2025-12-13T00:20:09.720020755+00:00 stderr F I1213 00:20:09.720005 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress-operator/metrics"} 2025-12-13T00:20:09.720030335+00:00 stderr F I1213 00:20:09.720019 28750 services_controller.go:332] Processing sync for service openshift-machine-api/control-plane-machine-set-operator 2025-12-13T00:20:09.720030335+00:00 stderr F I1213 00:20:09.720021 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-ingress-operator : 1.344867ms 2025-12-13T00:20:09.720068426+00:00 stderr F I1213 00:20:09.720038 28750 services_controller.go:332] Processing sync for service openshift-cluster-machine-approver/machine-approver 2025-12-13T00:20:09.720068426+00:00 stderr F I1213 00:20:09.720056 28750 services_controller.go:336] Finished syncing service machine-approver on namespace openshift-cluster-machine-approver : 17.611µs 2025-12-13T00:20:09.720079067+00:00 stderr F I1213 00:20:09.720027 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-config-operator/metrics_TCP_cluster", UUID:"8687c189-4f68-4a92-accd-596f2b18fbfd", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-config-operator/metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.120", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.23", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.720079067+00:00 stderr F I1213 00:20:09.720071 28750 services_controller.go:332] Processing sync for service openshift-operator-lifecycle-manager/package-server-manager-metrics 2025-12-13T00:20:09.720097307+00:00 stderr F I1213 00:20:09.720028 28750 services_controller.go:397] Service control-plane-machine-set-operator retrieved from lister: &Service{ObjectMeta:{control-plane-machine-set-operator openshift-machine-api 7c42fd7c-0955-49c7-819c-4685e0681272 4749 0 2024-06-26 12:39:09 +0000 UTC map[k8s-app:control-plane-machine-set-operator] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:control-plane-machine-set-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062ffa7 }] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:9443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: control-plane-machine-set-operator,},ClusterIP:10.217.5.136,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.136],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.720137448+00:00 stderr F I1213 00:20:09.720102 28750 lb_config.go:1016] Cluster endpoints for openshift-machine-api/control-plane-machine-set-operator are: map[TCP/https:{9443 [10.217.0.20] []}] 2025-12-13T00:20:09.720137448+00:00 stderr F I1213 00:20:09.720124 28750 services_controller.go:413] Built service openshift-machine-api/control-plane-machine-set-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.136"}, protocol:"TCP", inport:9443, clusterEndpoints:services.lbEndpoints{Port:9443, V4IPs:[]string{"10.217.0.20"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.720148789+00:00 stderr F I1213 00:20:09.720137 28750 services_controller.go:414] Built service openshift-machine-api/control-plane-machine-set-operator LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.720148789+00:00 stderr F I1213 00:20:09.720082 28750 services_controller.go:397] Service package-server-manager-metrics retrieved from lister: &Service{ObjectMeta:{package-server-manager-metrics openshift-operator-lifecycle-manager ae547e8e-2a0a-43b3-8358-80f1e40dfde9 5119 0 2024-06-26 12:39:24 +0000 
UTC map[] map[capability.openshift.io/name:OperatorLifecycleManager include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true service.alpha.openshift.io/serving-cert-secret-name:package-server-manager-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0008933bf }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8443,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: package-server-manager,},ClusterIP:10.217.4.147,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.147],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.720162659+00:00 stderr F I1213 00:20:09.720145 28750 services_controller.go:415] Built service openshift-machine-api/control-plane-machine-set-operator LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.720192130+00:00 stderr F I1213 00:20:09.720165 28750 lb_config.go:1016] Cluster endpoints for openshift-operator-lifecycle-manager/package-server-manager-metrics are: map[TCP/metrics:{8443 [10.217.0.24] []}] 2025-12-13T00:20:09.720192130+00:00 stderr F I1213 00:20:09.720165 28750 services_controller.go:421] Built service openshift-machine-api/control-plane-machine-set-operator cluster-wide LB 
[]services.LB{services.LB{Name:"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.136", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.20", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.720203460+00:00 stderr F I1213 00:20:09.720190 28750 services_controller.go:422] Built service openshift-machine-api/control-plane-machine-set-operator per-node LB []services.LB{} 2025-12-13T00:20:09.720203460+00:00 stderr F I1213 00:20:09.720190 28750 services_controller.go:413] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.147"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.24"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.720213780+00:00 stderr F I1213 00:20:09.720202 28750 services_controller.go:423] Built service openshift-machine-api/control-plane-machine-set-operator template LB []services.LB{} 2025-12-13T00:20:09.720223621+00:00 stderr F I1213 00:20:09.720209 28750 services_controller.go:414] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.720223621+00:00 stderr F I1213 00:20:09.720215 28750 
services_controller.go:424] Service openshift-machine-api/control-plane-machine-set-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.720233631+00:00 stderr F I1213 00:20:09.720221 28750 services_controller.go:415] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.720279972+00:00 stderr F I1213 00:20:09.720206 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.120:443:10.217.0.23:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8687c189-4f68-4a92-accd-596f2b18fbfd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.720279972+00:00 stderr F I1213 00:20:09.720238 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster", UUID:"0549e0a0-7df6-4da0-b4ff-6834505eba14", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster", 
UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.136", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.20", Port:9443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.720312443+00:00 stderr F I1213 00:20:09.720287 28750 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-cluster-machine-approver/machine-approver. OVN-Kubernetes controller took 0.124989336 seconds. No OVN measurement. 2025-12-13T00:20:09.720344004+00:00 stderr F I1213 00:20:09.720279 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.120:443:10.217.0.23:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8687c189-4f68-4a92-accd-596f2b18fbfd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.720387065+00:00 stderr F I1213 00:20:09.720354 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-kube-apiserver/apiserver"} 2025-12-13T00:20:09.720397255+00:00 stderr F I1213 00:20:09.720383 28750 services_controller.go:336] Finished syncing service apiserver on namespace 
openshift-kube-apiserver : 991.877µs 2025-12-13T00:20:09.720407086+00:00 stderr F I1213 00:20:09.720399 28750 services_controller.go:332] Processing sync for service openshift-controller-manager/controller-manager 2025-12-13T00:20:09.720497398+00:00 stderr F I1213 00:20:09.720409 28750 services_controller.go:397] Service controller-manager retrieved from lister: &Service{ObjectMeta:{controller-manager openshift-controller-manager 2222c363-21dc-4d99-b2be-80dc3cdf8209 4361 0 2024-06-26 12:47:07 +0000 UTC map[prometheus:openshift-controller-manager] map[operator.openshift.io/spec-hash:b3b96749ab82e4de02ef6aa9f0e168108d09315e18d73931c12251d267378e74 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{controller-manager: true,},ClusterIP:10.217.4.104,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.104],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.720516659+00:00 stderr F I1213 00:20:09.720495 28750 lb_config.go:1016] Cluster endpoints for openshift-controller-manager/controller-manager are: map[TCP/https:{8443 [10.217.0.87] []}] 2025-12-13T00:20:09.720529129+00:00 stderr F I1213 00:20:09.720513 28750 services_controller.go:413] Built service openshift-controller-manager/controller-manager LB cluster-wide 
configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.104"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.87"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.720540969+00:00 stderr F I1213 00:20:09.720530 28750 services_controller.go:414] Built service openshift-controller-manager/controller-manager LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.720553130+00:00 stderr F I1213 00:20:09.720540 28750 services_controller.go:415] Built service openshift-controller-manager/controller-manager LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.720553130+00:00 stderr F I1213 00:20:09.719962 28750 lb_config.go:1016] Cluster endpoints for openshift-ingress/router-internal-default are: map[TCP/http:{80 [192.168.126.11] []} TCP/https:{443 [192.168.126.11] []} TCP/metrics:{1936 [192.168.126.11] []}] 2025-12-13T00:20:09.720587281+00:00 stderr F I1213 00:20:09.720244 28750 services_controller.go:421] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.147", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.24", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 
2025-12-13T00:20:09.720587281+00:00 stderr F I1213 00:20:09.720573 28750 services_controller.go:413] Built service openshift-ingress/router-internal-default LB cluster-wide configs []services.lbConfig(nil) 2025-12-13T00:20:09.720587281+00:00 stderr F I1213 00:20:09.720577 28750 services_controller.go:422] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics per-node LB []services.LB{} 2025-12-13T00:20:09.720599091+00:00 stderr F I1213 00:20:09.720561 28750 services_controller.go:421] Built service openshift-controller-manager/controller-manager cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-controller-manager/controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.104", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.87", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.720599091+00:00 stderr F I1213 00:20:09.720588 28750 services_controller.go:423] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics template LB []services.LB{} 2025-12-13T00:20:09.720613191+00:00 stderr F I1213 00:20:09.720597 28750 services_controller.go:422] Built service openshift-controller-manager/controller-manager per-node LB []services.LB{} 2025-12-13T00:20:09.720613191+00:00 stderr F I1213 00:20:09.720585 28750 services_controller.go:414] Built service openshift-ingress/router-internal-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.220"}, protocol:"TCP", 
inport:80, clusterEndpoints:services.lbEndpoints{Port:80, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.220"}, protocol:"TCP", inport:443, clusterEndpoints:services.lbEndpoints{Port:443, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.4.220"}, protocol:"TCP", inport:1936, clusterEndpoints:services.lbEndpoints{Port:1936, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.720623501+00:00 stderr F I1213 00:20:09.720612 28750 services_controller.go:423] Built service openshift-controller-manager/controller-manager template LB []services.LB{} 2025-12-13T00:20:09.720623501+00:00 stderr F I1213 00:20:09.720616 28750 services_controller.go:415] Built service openshift-ingress/router-internal-default LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.720635822+00:00 stderr F I1213 00:20:09.720625 28750 services_controller.go:424] Service openshift-controller-manager/controller-manager has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.720682243+00:00 stderr F I1213 00:20:09.720656 28750 services_controller.go:421] Built service openshift-ingress/router-internal-default cluster-wide LB []services.LB{} 2025-12-13T00:20:09.720694863+00:00 stderr F I1213 00:20:09.720648 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-controller-manager/controller-manager_TCP_cluster", 
UUID:"b3787e47-e57b-411b-a351-f5dcf926f4a7", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-controller-manager/controller-manager_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.104", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.87", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.720727834+00:00 stderr F I1213 00:20:09.720675 28750 services_controller.go:422] Built service openshift-ingress/router-internal-default per-node LB []services.LB{services.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:80, Template:(*services.Template)(nil)}}}, 
services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:1936, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.720737915+00:00 stderr F I1213 00:20:09.720725 28750 services_controller.go:423] Built service openshift-ingress/router-internal-default template LB []services.LB{} 2025-12-13T00:20:09.720747745+00:00 stderr F I1213 00:20:09.720739 28750 services_controller.go:424] Service openshift-ingress/router-internal-default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.720757355+00:00 stderr F I1213 00:20:09.720742 28750 iptables.go:146] Deleting rule in table: filter, chain: OUTPUT with args: "-p tcp -m tcp --dport 22623 -j REJECT" for protocol: 0 2025-12-13T00:20:09.720844829+00:00 stderr F I1213 00:20:09.720761 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc", UUID:"28dce8d5-6d44-483e-9370-4cf8abff0832", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: 
[]services.LB{services.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:80, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:443, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.4.220", Port:1936, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:1936, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.720858399+00:00 stderr F I1213 00:20:09.720812 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.104:443:10.217.0.87:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3787e47-e57b-411b-a351-f5dcf926f4a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.720918541+00:00 stderr F I1213 00:20:09.720861 28750 transact.go:42] 
Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager/controller-manager]} name:Service_openshift-controller-manager/controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.104:443:10.217.0.87:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3787e47-e57b-411b-a351-f5dcf926f4a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.721120936+00:00 stderr F I1213 00:20:09.721024 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.220:1936:192.168.126.11:1936 10.217.4.220:443:192.168.126.11:443 10.217.4.220:80:192.168.126.11:80]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {28dce8d5-6d44-483e-9370-4cf8abff0832}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.721184818+00:00 stderr F I1213 00:20:09.721109 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.220:1936:192.168.126.11:1936 10.217.4.220:443:192.168.126.11:443 10.217.4.220:80:192.168.126.11:80]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {28dce8d5-6d44-483e-9370-4cf8abff0832}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.721184818+00:00 stderr F I1213 00:20:09.721162 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-config-operator/metrics"} 2025-12-13T00:20:09.721195438+00:00 stderr F I1213 00:20:09.721184 28750 services_controller.go:336] Finished syncing service metrics on namespace openshift-config-operator : 1.504132ms 2025-12-13T00:20:09.721209488+00:00 stderr F I1213 00:20:09.721202 28750 services_controller.go:332] Processing sync for service openshift-marketplace/marketplace-operator-metrics 2025-12-13T00:20:09.721326262+00:00 stderr F I1213 00:20:09.721212 28750 services_controller.go:397] Service marketplace-operator-metrics retrieved from lister: &Service{ObjectMeta:{marketplace-operator-metrics openshift-marketplace 1bfd7637-f88e-403e-8d75-c71b380fc127 4909 0 2024-06-26 12:39:13 +0000 UTC map[name:marketplace-operator] map[capability.openshift.io/name:marketplace include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:marketplace-operator-metrics service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc000892bc7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 
},NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.19,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.19],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} 2025-12-13T00:20:09.721345312+00:00 stderr F I1213 00:20:09.721331 28750 lb_config.go:1016] Cluster endpoints for openshift-marketplace/marketplace-operator-metrics are: map[TCP/https-metrics:{8081 [10.217.0.30] []} TCP/metrics:{8383 [10.217.0.30] []}] 2025-12-13T00:20:09.721393683+00:00 stderr F I1213 00:20:09.721356 28750 services_controller.go:413] Built service openshift-marketplace/marketplace-operator-metrics LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.19"}, protocol:"TCP", inport:8383, clusterEndpoints:services.lbEndpoints{Port:8383, V4IPs:[]string{"10.217.0.30"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.217.5.19"}, protocol:"TCP", inport:8081, clusterEndpoints:services.lbEndpoints{Port:8081, V4IPs:[]string{"10.217.0.30"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.721393683+00:00 stderr F I1213 00:20:09.721385 28750 services_controller.go:414] Built service openshift-marketplace/marketplace-operator-metrics LB per-node configs []services.lbConfig(nil) 
2025-12-13T00:20:09.721406304+00:00 stderr F I1213 00:20:09.721397 28750 services_controller.go:415] Built service openshift-marketplace/marketplace-operator-metrics LB template configs []services.lbConfig(nil) 2025-12-13T00:20:09.721479726+00:00 stderr F I1213 00:20:09.720445 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.136:9443:10.217.0.20:9443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0549e0a0-7df6-4da0-b4ff-6834505eba14}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.721537037+00:00 stderr F I1213 00:20:09.721503 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-controller-manager/controller-manager"} 2025-12-13T00:20:09.721537037+00:00 stderr F I1213 00:20:09.721511 28750 services_controller.go:424] Service openshift-operator-lifecycle-manager/package-server-manager-metrics has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.721537037+00:00 stderr F I1213 00:20:09.721524 28750 services_controller.go:336] Finished syncing service controller-manager on namespace openshift-controller-manager : 1.125562ms 2025-12-13T00:20:09.721566448+00:00 stderr F I1213 00:20:09.721540 28750 services_controller.go:332] Processing sync for service openshift-cluster-samples-operator/metrics 2025-12-13T00:20:09.721566448+00:00 stderr F I1213 00:20:09.721550 28750 services_controller.go:336] Finished syncing 
service metrics on namespace openshift-cluster-samples-operator : 9.601µs 2025-12-13T00:20:09.721578788+00:00 stderr F I1213 00:20:09.721564 28750 services_controller.go:332] Processing sync for service openshift-monitoring/cluster-monitoring-operator 2025-12-13T00:20:09.721578788+00:00 stderr F I1213 00:20:09.721572 28750 services_controller.go:336] Finished syncing service cluster-monitoring-operator on namespace openshift-monitoring : 8.21µs 2025-12-13T00:20:09.721591429+00:00 stderr F I1213 00:20:09.721545 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster", UUID:"393c32dd-2ea5-4e3d-b98a-4cb2c2e8cbef", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.147", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.24", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 
2025-12-13T00:20:09.721591429+00:00 stderr F I1213 00:20:09.721585 28750 services_controller.go:332] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-12-13T00:20:09.721603879+00:00 stderr F I1213 00:20:09.721594 28750 services_controller.go:336] Finished syncing service ovn-kubernetes-control-plane on namespace openshift-ovn-kubernetes : 9.58µs
2025-12-13T00:20:09.721615820+00:00 stderr F I1213 00:20:09.721607 28750 services_controller.go:332] Processing sync for service openshift-machine-api/machine-api-operator
2025-12-13T00:20:09.721627470+00:00 stderr F I1213 00:20:09.721474 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.136:9443:10.217.0.20:9443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0549e0a0-7df6-4da0-b4ff-6834505eba14}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.721711082+00:00 stderr F I1213 00:20:09.721616 28750 services_controller.go:397] Service machine-api-operator retrieved from lister: &Service{ObjectMeta:{machine-api-operator openshift-machine-api ef047d6e-c72f-4309-a95e-08fb0ed08662 4792 0 2024-06-26 12:39:26 +0000 UTC map[k8s-app:machine-api-operator] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00089259b }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.127,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.127],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.721711082+00:00 stderr F I1213 00:20:09.721673 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.147:8443:10.217.0.24:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {393c32dd-2ea5-4e3d-b98a-4cb2c2e8cbef}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.721726353+00:00 stderr F I1213 00:20:09.721715 28750 lb_config.go:1016] Cluster endpoints for openshift-machine-api/machine-api-operator are: map[TCP/https:{8443 [10.217.0.5] []}]
2025-12-13T00:20:09.721766354+00:00 stderr F I1213 00:20:09.721733 28750 services_controller.go:413] Built service openshift-machine-api/machine-api-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.127"}, protocol:"TCP", inport:8443, clusterEndpoints:services.lbEndpoints{Port:8443, V4IPs:[]string{"10.217.0.5"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.721766354+00:00 stderr F I1213 00:20:09.721756 28750 services_controller.go:414] Built service openshift-machine-api/machine-api-operator LB per-node configs []services.lbConfig(nil)
2025-12-13T00:20:09.721785624+00:00 stderr F I1213 00:20:09.721767 28750 services_controller.go:415] Built service openshift-machine-api/machine-api-operator LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.721785624+00:00 stderr F I1213 00:20:09.721737 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.147:8443:10.217.0.24:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {393c32dd-2ea5-4e3d-b98a-4cb2c2e8cbef}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_cfg Mutator:+= Value:1}] Timeout: Where:[where column _uuid == {71fe090d-459a-4fc6-bb5b-0e86db250be3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.721808135+00:00 stderr F I1213 00:20:09.721420 28750 services_controller.go:421] Built service openshift-marketplace/marketplace-operator-metrics cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:8383, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:8081, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.721820255+00:00 stderr F I1213 00:20:09.721788 28750 services_controller.go:421] Built service openshift-machine-api/machine-api-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.127", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.5", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.721820255+00:00 stderr F I1213 00:20:09.721812 28750 services_controller.go:422] Built service openshift-marketplace/marketplace-operator-metrics per-node LB []services.LB{}
2025-12-13T00:20:09.721832775+00:00 stderr F I1213 00:20:09.721823 28750 services_controller.go:422] Built service openshift-machine-api/machine-api-operator per-node LB []services.LB{}
2025-12-13T00:20:09.721832775+00:00 stderr F I1213 00:20:09.721827 28750 services_controller.go:423] Built service openshift-marketplace/marketplace-operator-metrics template LB []services.LB{}
2025-12-13T00:20:09.721844876+00:00 stderr F I1213 00:20:09.721837 28750 services_controller.go:423] Built service openshift-machine-api/machine-api-operator template LB []services.LB{}
2025-12-13T00:20:09.721857296+00:00 stderr F I1213 00:20:09.721842 28750 services_controller.go:424] Service openshift-marketplace/marketplace-operator-metrics has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.721857296+00:00 stderr F I1213 00:20:09.721850 28750 services_controller.go:424] Service openshift-machine-api/machine-api-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.721959289+00:00 stderr F I1213 00:20:09.721873 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator_TCP_cluster", UUID:"5e50823a-046a-4005-92b9-c7d2d651ffe9", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-api/machine-api-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.127", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.5", Port:8443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.721959289+00:00 stderr F I1213 00:20:09.721866 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster", UUID:"fd4c9213-5ce0-448d-a75e-3598556830dc", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8383, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:8383, Template:(*services.Template)(nil)}}}, services.LBRule{Source:services.Addr{IP:"10.217.5.19", Port:8081, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.30", Port:8081, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.722002570+00:00 stderr F I1213 00:20:09.721973 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}
2025-12-13T00:20:09.722002570+00:00 stderr F I1213 00:20:09.721992 28750 services_controller.go:336] Finished syncing service router-internal-default on namespace openshift-ingress : 2.195951ms
2025-12-13T00:20:09.722015790+00:00 stderr F I1213 00:20:09.722008 28750 services_controller.go:332] Processing sync for service openshift-network-diagnostics/network-check-source
2025-12-13T00:20:09.722025671+00:00 stderr F I1213 00:20:09.722016 28750 services_controller.go:336] Finished syncing service network-check-source on namespace openshift-network-diagnostics : 8.19µs
2025-12-13T00:20:09.722035351+00:00 stderr F I1213 00:20:09.722024 28750 services_controller.go:332] Processing sync for service openshift-apiserver/check-endpoints
2025-12-13T00:20:09.722117913+00:00 stderr F I1213 00:20:09.722032 28750 services_controller.go:397] Service check-endpoints retrieved from lister: &Service{ObjectMeta:{check-endpoints openshift-apiserver 435aa879-8965-459a-9b2a-dfd8f8924b3a 5567 0 2024-06-26 12:47:30 +0000 UTC map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062e857 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:check-endpoints,Protocol:TCP,Port:17698,TargetPort:{0 17698 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.23,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.23],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.722117913+00:00 stderr F I1213 00:20:09.722069 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.127:8443:10.217.0.5:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e50823a-046a-4005-92b9-c7d2d651ffe9}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.722137484+00:00 stderr F I1213 00:20:09.722116 28750 lb_config.go:1016] Cluster endpoints for openshift-apiserver/check-endpoints are: map[TCP/check-endpoints:{17698 [10.217.0.82] []}]
2025-12-13T00:20:09.722149264+00:00 stderr F I1213 00:20:09.722130 28750 services_controller.go:413] Built service openshift-apiserver/check-endpoints LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.23"}, protocol:"TCP", inport:17698, clusterEndpoints:services.lbEndpoints{Port:17698, V4IPs:[]string{"10.217.0.82"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.722149264+00:00 stderr F I1213 00:20:09.722145 28750 services_controller.go:414] Built service openshift-apiserver/check-endpoints LB per-node configs []services.lbConfig(nil)
2025-12-13T00:20:09.722161174+00:00 stderr F I1213 00:20:09.722152 28750 services_controller.go:415] Built service openshift-apiserver/check-endpoints LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.722212296+00:00 stderr F I1213 00:20:09.722166 28750 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name service/openshift-ovn-kubernetes/ovn-kubernetes-control-plane. OVN-Kubernetes controller took 0.126452827 seconds. No OVN measurement.
2025-12-13T00:20:09.722212296+00:00 stderr F I1213 00:20:09.722118 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.19:8081:10.217.0.30:8081 10.217.5.19:8383:10.217.0.30:8383]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fd4c9213-5ce0-448d-a75e-3598556830dc}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.722255447+00:00 stderr F I1213 00:20:09.722200 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.19:8081:10.217.0.30:8081 10.217.5.19:8383:10.217.0.30:8383]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fd4c9213-5ce0-448d-a75e-3598556830dc}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.722255447+00:00 stderr F I1213 00:20:09.722236 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/control-plane-machine-set-operator"}
2025-12-13T00:20:09.722269907+00:00 stderr F I1213 00:20:09.722254 28750 services_controller.go:336] Finished syncing service control-plane-machine-set-operator on namespace openshift-machine-api : 2.233072ms
2025-12-13T00:20:09.722269907+00:00 stderr F I1213 00:20:09.722169 28750 services_controller.go:421] Built service openshift-apiserver/check-endpoints cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-apiserver/check-endpoints_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.23", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:17698, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.722282508+00:00 stderr F I1213 00:20:09.722271 28750 services_controller.go:332] Processing sync for service openshift-marketplace/community-operators
2025-12-13T00:20:09.722282508+00:00 stderr F I1213 00:20:09.722275 28750 services_controller.go:422] Built service openshift-apiserver/check-endpoints per-node LB []services.LB{}
2025-12-13T00:20:09.722294848+00:00 stderr F I1213 00:20:09.722285 28750 services_controller.go:423] Built service openshift-apiserver/check-endpoints template LB []services.LB{}
2025-12-13T00:20:09.722306558+00:00 stderr F I1213 00:20:09.722295 28750 services_controller.go:424] Service openshift-apiserver/check-endpoints has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.722370780+00:00 stderr F I1213 00:20:09.722314 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver/check-endpoints_TCP_cluster", UUID:"9644365b-5102-49b0-be5c-9e41d7162eca", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-apiserver/check-endpoints_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.23", Port:17698, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.82", Port:17698, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.722436822+00:00 stderr F I1213 00:20:09.722121 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.127:8443:10.217.0.5:8443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e50823a-046a-4005-92b9-c7d2d651ffe9}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.722512964+00:00 stderr F I1213 00:20:09.722473 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-operator-lifecycle-manager/package-server-manager-metrics"}
2025-12-13T00:20:09.722512964+00:00 stderr F I1213 00:20:09.722461 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.23:17698:10.217.0.82:17698]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9644365b-5102-49b0-be5c-9e41d7162eca}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.722512964+00:00 stderr F I1213 00:20:09.722500 28750 services_controller.go:336] Finished syncing service package-server-manager-metrics on namespace openshift-operator-lifecycle-manager : 2.427867ms
2025-12-13T00:20:09.722529104+00:00 stderr F I1213 00:20:09.722518 28750 services_controller.go:332] Processing sync for service openshift-multus/network-metrics-service
2025-12-13T00:20:09.722541865+00:00 stderr F I1213 00:20:09.722528 28750 services_controller.go:336] Finished syncing service network-metrics-service on namespace openshift-multus : 10µs
2025-12-13T00:20:09.722554185+00:00 stderr F I1213 00:20:09.722511 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/check-endpoints]} name:Service_openshift-apiserver/check-endpoints_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.23:17698:10.217.0.82:17698]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9644365b-5102-49b0-be5c-9e41d7162eca}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.722554185+00:00 stderr F I1213 00:20:09.722281 28750 services_controller.go:397] Service community-operators retrieved from lister: &Service{ObjectMeta:{community-operators openshift-marketplace daa5c70d-2f05-4c99-b062-49370cf4b7bd 6377 0 2024-06-26 12:47:48 +0000 UTC map[olm.managed:true olm.service-spec-hash:9y90X0LnOAvWXlE7PZKqH0sBNEP83PNwaUfqVB] map[] [{operators.coreos.com/v1alpha1 CatalogSource community-operators e583c58d-4569-4cab-9192-62c813516208 0xc000892b0d 0xc000892b0e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: community-operators,olm.managed: true,},ClusterIP:10.217.4.229,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.229],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.722640467+00:00 stderr F I1213 00:20:09.722578 28750 lb_config.go:1016] Cluster endpoints for openshift-marketplace/community-operators are: map[TCP/grpc:{50051 [10.217.0.37] []}]
2025-12-13T00:20:09.722656108+00:00 stderr F I1213 00:20:09.722640 28750 services_controller.go:413] Built service openshift-marketplace/community-operators LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.4.229"}, protocol:"TCP", inport:50051, clusterEndpoints:services.lbEndpoints{Port:50051, V4IPs:[]string{"10.217.0.37"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.722668298+00:00 stderr F I1213 00:20:09.722658 28750 services_controller.go:414] Built service openshift-marketplace/community-operators LB per-node configs []services.lbConfig(nil)
2025-12-13T00:20:09.722680368+00:00 stderr F I1213 00:20:09.722669 28750 services_controller.go:415] Built service openshift-marketplace/community-operators LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.722716429+00:00 stderr F I1213 00:20:09.722678 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/marketplace-operator-metrics"}
2025-12-13T00:20:09.722716429+00:00 stderr F I1213 00:20:09.722540 28750 services_controller.go:332] Processing sync for service openshift-cluster-version/cluster-version-operator
2025-12-13T00:20:09.722716429+00:00 stderr F I1213 00:20:09.722707 28750 services_controller.go:336] Finished syncing service marketplace-operator-metrics on namespace openshift-marketplace : 1.503581ms
2025-12-13T00:20:09.722733180+00:00 stderr F I1213 00:20:09.722690 28750 services_controller.go:421] Built service openshift-marketplace/community-operators cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-marketplace/community-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.229", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.37", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.722733180+00:00 stderr F I1213 00:20:09.722727 28750 services_controller.go:332] Processing sync for service default/openshift
2025-12-13T00:20:09.722746050+00:00 stderr F I1213 00:20:09.722727 28750 services_controller.go:422] Built service openshift-marketplace/community-operators per-node LB []services.LB{}
2025-12-13T00:20:09.722746050+00:00 stderr F I1213 00:20:09.722740 28750 services_controller.go:336] Finished syncing service openshift on namespace default : 12.52µs
2025-12-13T00:20:09.722756601+00:00 stderr F I1213 00:20:09.722745 28750 services_controller.go:423] Built service openshift-marketplace/community-operators template LB []services.LB{}
2025-12-13T00:20:09.722766261+00:00 stderr F I1213 00:20:09.722753 28750 services_controller.go:332] Processing sync for service openshift-machine-config-operator/machine-config-operator
2025-12-13T00:20:09.722766261+00:00 stderr F I1213 00:20:09.722759 28750 services_controller.go:424] Service openshift-marketplace/community-operators has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.722807512+00:00 stderr F I1213 00:20:09.722715 28750 services_controller.go:397] Service cluster-version-operator retrieved from lister: &Service{ObjectMeta:{cluster-version-operator openshift-cluster-version b85c5397-4189-4029-b181-4e339da207b7 6237 0 2024-06-26 12:38:51 +0000 UTC map[k8s-app:cluster-version-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true kubernetes.io/description:Expose cluster-version operator metrics to other in-cluster consumers. Access requires a prometheus-k8s RoleBinding in this namespace. service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:cluster-version-operator-serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc00062ebb7 }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9099,TargetPort:{0 9099 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: cluster-version-operator,},ClusterIP:10.217.5.47,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.47],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.722820362+00:00 stderr F I1213 00:20:09.722809 28750 lb_config.go:1016] Cluster endpoints for openshift-cluster-version/cluster-version-operator are: map[TCP/metrics:{9099 [192.168.126.11] []}]
2025-12-13T00:20:09.722830683+00:00 stderr F I1213 00:20:09.722824 28750 services_controller.go:413] Built service openshift-cluster-version/cluster-version-operator LB cluster-wide configs []services.lbConfig(nil)
2025-12-13T00:20:09.722840533+00:00 stderr F I1213 00:20:09.722781 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/community-operators_TCP_cluster", UUID:"acd71d69-4870-4695-b8eb-935626516f5d", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-marketplace/community-operators_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.4.229", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.37", Port:50051, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
2025-12-13T00:20:09.722840533+00:00 stderr F I1213 00:20:09.722831 28750 services_controller.go:414] Built service openshift-cluster-version/cluster-version-operator LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.47"}, protocol:"TCP", inport:9099, clusterEndpoints:services.lbEndpoints{Port:9099, V4IPs:[]string{"192.168.126.11"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
2025-12-13T00:20:09.722854153+00:00 stderr F I1213 00:20:09.722764 28750 services_controller.go:397] Service machine-config-operator retrieved from lister: &Service{ObjectMeta:{machine-config-operator openshift-machine-config-operator 355a1056-7d77-4a52-a1f5-8eb39c13574e 4891 0 2024-06-26 12:39:13 +0000 UTC map[k8s-app:machine-config-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026 service.beta.openshift.io/serving-cert-secret-name:mco-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1719406026] [{config.openshift.io/v1 ClusterVersion version a73cbaa6-40d3-4694-9b98-c0a6eed45825 0xc0008929ab }] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-operator,},ClusterIP:10.217.5.4,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.4],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2025-12-13T00:20:09.722854153+00:00 stderr F I1213 00:20:09.722843 28750 services_controller.go:415] Built service openshift-cluster-version/cluster-version-operator LB template configs []services.lbConfig(nil)
2025-12-13T00:20:09.722885104+00:00 stderr F I1213 00:20:09.722868 28750 services_controller.go:421] Built service openshift-cluster-version/cluster-version-operator cluster-wide LB []services.LB{}
2025-12-13T00:20:09.722885104+00:00 stderr F I1213 00:20:09.722871 28750 lb_config.go:1016] Cluster endpoints for openshift-machine-config-operator/machine-config-operator are: map[TCP/metrics:{9001 [10.217.0.21] []}]
2025-12-13T00:20:09.722925205+00:00 stderr F I1213 00:20:09.722878 28750 services_controller.go:422] Built service openshift-cluster-version/cluster-version-operator per-node LB []services.LB{services.LB{Name:"Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.47", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9099, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}}
2025-12-13T00:20:09.722925205+00:00 stderr F I1213 00:20:09.722908 28750 services_controller.go:423] Built service openshift-cluster-version/cluster-version-operator template LB []services.LB{}
2025-12-13T00:20:09.722925205+00:00 stderr F I1213 00:20:09.722918 28750 services_controller.go:424] Service openshift-cluster-version/cluster-version-operator has 0 cluster-wide, 1 per-node configs, 0 template configs, making 0 (cluster) 1 (per node) and 0 (template) load balancers
2025-12-13T00:20:09.723018978+00:00 stderr F I1213 00:20:09.722966 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc", UUID:"516e9abb-2a1b-4eba-859b-c827d44fb86e", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.47", Port:9099, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"192.168.126.11", Port:9099, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil),
Switches:[]string{"crc"}, Routers:[]string{"GR_crc"}, Groups:[]string(nil)}} 2025-12-13T00:20:09.723018978+00:00 stderr F I1213 00:20:09.722974 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.229:50051:10.217.0.37:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {acd71d69-4870-4695-b8eb-935626516f5d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.723078609+00:00 stderr F I1213 00:20:09.723024 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/community-operators]} name:Service_openshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.229:50051:10.217.0.37:50051]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {acd71d69-4870-4695-b8eb-935626516f5d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.723136251+00:00 stderr F I1213 00:20:09.723089 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none 
reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.47:9099:192.168.126.11:9099]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516e9abb-2a1b-4eba-859b-c827d44fb86e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.723173812+00:00 stderr F I1213 00:20:09.723128 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.47:9099:192.168.126.11:9099]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516e9abb-2a1b-4eba-859b-c827d44fb86e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.723231473+00:00 stderr F I1213 00:20:09.722891 28750 services_controller.go:413] Built service openshift-machine-config-operator/machine-config-operator LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.217.5.4"}, protocol:"TCP", inport:9001, clusterEndpoints:services.lbEndpoints{Port:9001, V4IPs:[]string{"10.217.0.21"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} 2025-12-13T00:20:09.723231473+00:00 stderr F I1213 00:20:09.723220 28750 services_controller.go:414] Built service openshift-machine-config-operator/machine-config-operator LB per-node configs []services.lbConfig(nil) 2025-12-13T00:20:09.723231473+00:00 stderr F I1213 00:20:09.723228 28750 services_controller.go:415] Built service openshift-machine-config-operator/machine-config-operator LB template configs []services.lbConfig(nil) 
2025-12-13T00:20:09.723284545+00:00 stderr F I1213 00:20:09.723245 28750 services_controller.go:421] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.4", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.21", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.723284545+00:00 stderr F I1213 00:20:09.723277 28750 services_controller.go:422] Built service openshift-machine-config-operator/machine-config-operator per-node LB []services.LB{} 2025-12-13T00:20:09.723295105+00:00 stderr F I1213 00:20:09.723288 28750 services_controller.go:423] Built service openshift-machine-config-operator/machine-config-operator template LB []services.LB{} 2025-12-13T00:20:09.723342976+00:00 stderr F I1213 00:20:09.723301 28750 services_controller.go:424] Service openshift-machine-config-operator/machine-config-operator has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers 2025-12-13T00:20:09.723342976+00:00 stderr F I1213 00:20:09.723318 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-api/machine-api-operator"} 2025-12-13T00:20:09.723342976+00:00 stderr F I1213 00:20:09.723337 28750 services_controller.go:336] Finished syncing service 
machine-api-operator on namespace openshift-machine-api : 1.728937ms 2025-12-13T00:20:09.723393748+00:00 stderr F I1213 00:20:09.723328 28750 services_controller.go:443] Services do not match, existing lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster", UUID:"f2bd9885-097c-4b3e-916f-b4598e49252e", Protocol:"tcp", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}, built lbs: []services.LB{services.LB{Name:"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:""}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:"10.217.5.4", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:"10.217.0.21", Port:9001, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}} 2025-12-13T00:20:09.723518231+00:00 stderr F I1213 00:20:09.723473 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-apiserver/check-endpoints"} 2025-12-13T00:20:09.723518231+00:00 stderr F I1213 00:20:09.723472 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.4:9001:10.217.0.21:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f2bd9885-097c-4b3e-916f-b4598e49252e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.723518231+00:00 stderr F I1213 00:20:09.723510 28750 services_controller.go:336] Finished syncing service check-endpoints on namespace openshift-apiserver : 1.48294ms 2025-12-13T00:20:09.723558672+00:00 stderr F I1213 00:20:09.723512 28750 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.169.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.4:9001:10.217.0.21:9001]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f2bd9885-097c-4b3e-916f-b4598e49252e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.723659876+00:00 stderr F I1213 00:20:09.723625 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-marketplace/community-operators"} 2025-12-13T00:20:09.723659876+00:00 stderr F I1213 00:20:09.723651 28750 services_controller.go:336] Finished syncing service community-operators on namespace openshift-marketplace : 1.379669ms 2025-12-13T00:20:09.723805360+00:00 stderr F I1213 00:20:09.723771 28750 loadbalancer.go:304] 
Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-cluster-version/cluster-version-operator"} 2025-12-13T00:20:09.723805360+00:00 stderr F I1213 00:20:09.723795 28750 services_controller.go:336] Finished syncing service cluster-version-operator on namespace openshift-cluster-version : 1.254695ms 2025-12-13T00:20:09.723958414+00:00 stderr F I1213 00:20:09.723895 28750 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-machine-config-operator/machine-config-operator"} 2025-12-13T00:20:09.723958414+00:00 stderr F I1213 00:20:09.723919 28750 services_controller.go:336] Finished syncing service machine-config-operator on namespace openshift-machine-config-operator : 1.165043ms 2025-12-13T00:20:09.723958414+00:00 stderr F I1213 00:20:09.723917 28750 iptables.go:146] Deleting rule in table: filter, chain: OUTPUT with args: "-p tcp -m tcp --dport 22624 -j REJECT" for protocol: 0 2025-12-13T00:20:09.726959077+00:00 stderr F I1213 00:20:09.726905 28750 iptables.go:108] Creating table: filter chain: FORWARD 2025-12-13T00:20:09.729964960+00:00 stderr F I1213 00:20:09.729902 28750 iptables.go:110] Chain: "FORWARD" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N FORWARD --wait]: exit status 1: iptables: Chain already exists. 2025-12-13T00:20:09.744473360+00:00 stderr F I1213 00:20:09.744418 28750 iptables.go:108] Creating table: filter chain: OUTPUT 2025-12-13T00:20:09.748674315+00:00 stderr F I1213 00:20:09.748609 28750 iptables.go:110] Chain: "OUTPUT" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N OUTPUT --wait]: exit status 1: iptables: Chain already exists. 
2025-12-13T00:20:09.756245584+00:00 stderr F I1213 00:20:09.756204 28750 gateway_localnet.go:152] Local Gateway Creation Complete 2025-12-13T00:20:09.756478450+00:00 stderr F I1213 00:20:09.756438 28750 default_node_network_controller.go:1301] MTU (1500) of network interface eth10 is big enough to deal with Geneve header overhead (sum 1458). 2025-12-13T00:20:09.756478450+00:00 stderr F I1213 00:20:09.756461 28750 kube.go:128] Setting annotations map[k8s.ovn.org/gateway-mtu-support: k8s.ovn.org/l3-gateway-config:{"default":{"mode":"local","interface-id":"br-ex_crc","mac-address":"fa:16:3e:f0:63:3e","ip-addresses":["38.102.83.51/24"],"ip-address":"38.102.83.51/24","next-hops":["38.102.83.1"],"next-hop":"38.102.83.1","node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/node-chassis-id:017e52b0-97d3-4d7d-aae4-9b216aa025aa k8s.ovn.org/node-mgmt-port-mac-address:b6:dc:d9:26:03:d4 k8s.ovn.org/node-primary-ifaddr:{"ipv4":"38.102.83.51/24"} k8s.ovn.org/zone-name:crc] on node crc 2025-12-13T00:20:09.766859637+00:00 stderr F I1213 00:20:09.766565 28750 obj_retry.go:555] Update event received for resource *v1.Node, old object is equal to new: false 2025-12-13T00:20:09.766859637+00:00 stderr F I1213 00:20:09.766595 28750 obj_retry.go:607] Update event received for *v1.Node crc 2025-12-13T00:20:09.766859637+00:00 stderr F I1213 00:20:09.766744 28750 master.go:627] Adding or Updating Node "crc" 2025-12-13T00:20:09.766859637+00:00 stderr F I1213 00:20:09.766776 28750 hybrid.go:140] Removing node crc hybrid overlay port 2025-12-13T00:20:09.766891238+00:00 stderr F I1213 00:20:09.766851 28750 node_tracker.go:204] Processing possible switch / router updates for node crc 2025-12-13T00:20:09.767025741+00:00 stderr F I1213 00:20:09.766996 28750 node_tracker.go:165] Node crc switch + router changed, syncing services 2025-12-13T00:20:09.767062112+00:00 stderr F I1213 00:20:09.767006 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router 
Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[physical_ip:38.102.83.51 physical_ips:38.102.83.51]} load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:02fd96d5-5e32-4855-b83b-d257ecda4e0c}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:017e52b0-97d3-4d7d-aae4-9b216aa025aa dynamic_neigh_routers:true lb_force_snat_ip:router_ip mac_binding_age_threshold:300 snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.767075373+00:00 stderr F I1213 00:20:09.767052 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:8af88be7-33a2-4869-a73a-80bcee31d385}]} external_ids:{GoMap:map[physical_ip:38.102.83.51 physical_ips:38.102.83.51]} load_balancer_group:{GoSet:[{GoUUID:59dab9f1-0389-4181-919a-0167ad60c25f} {GoUUID:02fd96d5-5e32-4855-b83b-d257ecda4e0c}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:017e52b0-97d3-4d7d-aae4-9b216aa025aa dynamic_neigh_routers:true lb_force_snat_ip:router_ip mac_binding_age_threshold:300 snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.768861572+00:00 stderr F I1213 00:20:09.767315 28750 default_node_network_controller.go:974] Waiting for gateway and management port readiness... 
2025-12-13T00:20:09.768861572+00:00 stderr F I1213 00:20:09.767438 28750 ovs.go:159] Exec(39): /usr/bin/ovs-appctl --timeout=15 -t /var/run/ovn/ovn-controller.28140.ctl connection-status 2025-12-13T00:20:09.768861572+00:00 stderr F I1213 00:20:09.767531 28750 ovs.go:159] Exec(40): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface ovn-k8s-mp0 ofport 2025-12-13T00:20:09.773712415+00:00 stderr F I1213 00:20:09.773506 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.773712415+00:00 stderr F I1213 00:20:09.773628 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.773712415+00:00 stderr F I1213 00:20:09.773653 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f581ba5-bc38-4c6c-a2fe-7fb38a5de95a}]}}] Timeout: Where:[where column _uuid == {e80fcef2-fac9-4844-ba83-db36f8e2cc28}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.775090904+00:00 stderr F I1213 00:20:09.775003 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]} options:{GoMap:map[gateway_mtu:1400]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c521aef7-6fed-4a5a-b1d9-8e7a81736008}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.775109584+00:00 stderr F I1213 00:20:09.775080 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c521aef7-6fed-4a5a-b1d9-8e7a81736008}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.775146955+00:00 stderr F I1213 00:20:09.775100 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]} options:{GoMap:map[gateway_mtu:1400]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c521aef7-6fed-4a5a-b1d9-8e7a81736008}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c521aef7-6fed-4a5a-b1d9-8e7a81736008}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.778913549+00:00 stderr F I1213 00:20:09.778042 28750 ovs.go:162] Exec(40): stdout: "1\n" 2025-12-13T00:20:09.778913549+00:00 stderr F I1213 00:20:09.778074 28750 ovs.go:163] Exec(40): stderr: "" 2025-12-13T00:20:09.778913549+00:00 stderr F I1213 00:20:09.778090 28750 ovs.go:159] Exec(41): /usr/bin/ovs-ofctl --no-stats --no-names dump-flows br-int table=65,out_port=1 2025-12-13T00:20:09.778913549+00:00 stderr F I1213 00:20:09.778573 
28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3455ab3-baec-4c77-bf58-c8ee661db1b0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.778913549+00:00 stderr F I1213 00:20:09.778628 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:b3455ab3-baec-4c77-bf58-c8ee661db1b0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.778913549+00:00 stderr F I1213 00:20:09.778648 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b3455ab3-baec-4c77-bf58-c8ee661db1b0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:b3455ab3-baec-4c77-bf58-c8ee661db1b0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.779349041+00:00 stderr F I1213 00:20:09.779284 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:fa:16:3e:f0:63:3e networks:{GoSet:[38.102.83.51/24]} options:{GoMap:map[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d2b378f-284f-421b-b46b-c468b7e6cd8d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.779407582+00:00 stderr F I1213 00:20:09.779365 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3d2b378f-284f-421b-b46b-c468b7e6cd8d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.779424273+00:00 stderr F I1213 00:20:09.779394 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:fa:16:3e:f0:63:3e networks:{GoSet:[38.102.83.51/24]} options:{GoMap:map[]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3d2b378f-284f-421b-b46b-c468b7e6cd8d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3d2b378f-284f-421b-b46b-c468b7e6cd8d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.780066191+00:00 stderr F I1213 00:20:09.779997 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516233f5-61de-4ea0-ae93-97e9413cd4fa}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.780804481+00:00 stderr F I1213 00:20:09.780725 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[fa:16:3e:f0:63:3e]} options:{GoMap:map[exclude-lb-vips-from-garp:true nat-addresses:router router-port:rtoe-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {32735011-f3ec-43b2-aac3-e6ed4dd32c30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.780804481+00:00 stderr F I1213 00:20:09.780782 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:516233f5-61de-4ea0-ae93-97e9413cd4fa} {GoUUID:32735011-f3ec-43b2-aac3-e6ed4dd32c30}]}}] Timeout: Where:[where column _uuid == {18f52cdd-cee0-4b9c-87dc-4d090679e6ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.780886464+00:00 stderr F I1213 00:20:09.780799 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {516233f5-61de-4ea0-ae93-97e9413cd4fa}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[fa:16:3e:f0:63:3e]} options:{GoMap:map[exclude-lb-vips-from-garp:true nat-addresses:router router-port:rtoe-GR_crc]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {32735011-f3ec-43b2-aac3-e6ed4dd32c30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:516233f5-61de-4ea0-ae93-97e9413cd4fa} {GoUUID:32735011-f3ec-43b2-aac3-e6ed4dd32c30}]}}] Timeout: Where:[where column _uuid == {18f52cdd-cee0-4b9c-87dc-4d090679e6ea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.782848668+00:00 stderr F I1213 00:20:09.782502 28750 ovs.go:162] Exec(39): stdout: "not connected\n" 2025-12-13T00:20:09.782848668+00:00 stderr F I1213 00:20:09.782529 28750 ovs.go:163] Exec(39): stderr: "" 2025-12-13T00:20:09.782848668+00:00 stderr F I1213 00:20:09.782540 28750 default_node_network_controller.go:385] Node 
connection status = not connected 2025-12-13T00:20:09.788973407+00:00 stderr F I1213 00:20:09.788893 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Static_MAC_Binding Row:map[ip:169.254.169.4 logical_port:rtoe-GR_crc mac:0a:58:a9:fe:a9:04 override_dynamic_mac:true] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46f7156c-697b-4394-a40f-1e7225954094}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.788973407+00:00 stderr F I1213 00:20:09.788957 28750 transact.go:42] Configuring OVN: [{Op:update Table:Static_MAC_Binding Row:map[ip:169.254.169.4 logical_port:rtoe-GR_crc mac:0a:58:a9:fe:a9:04 override_dynamic_mac:true] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {46f7156c-697b-4394-a40f-1e7225954094}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.793129471+00:00 stderr F I1213 00:20:09.792616 28750 ovs.go:162] Exec(41): stdout: " cookie=0xaad46e3b, table=65, priority=100,reg15=0x2,metadata=0x3 actions=output:1\n" 2025-12-13T00:20:09.793129471+00:00 stderr F I1213 00:20:09.792655 28750 ovs.go:163] Exec(41): stderr: "" 2025-12-13T00:20:09.793129471+00:00 stderr F I1213 00:20:09.792678 28750 management-port.go:161] Management port ovn-k8s-mp0 is ready 2025-12-13T00:20:09.794554631+00:00 stderr F I1213 00:20:09.793898 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:169.254.169.4] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2846b89d-97b9-453b-9343-7231c57eea62}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.794554631+00:00 stderr F I1213 00:20:09.793974 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2846b89d-97b9-453b-9343-7231c57eea62}]}}] Timeout: Where:[where column _uuid == 
{c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.794554631+00:00 stderr F I1213 00:20:09.793995 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:169.254.169.4] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2846b89d-97b9-453b-9343-7231c57eea62}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:2846b89d-97b9-453b-9343-7231c57eea62}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.794724275+00:00 stderr F I1213 00:20:09.794649 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:38.102.83.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {734af81b-a9a2-45f9-a8bb-d8bce515d37d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.794767107+00:00 stderr F I1213 00:20:09.794733 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:734af81b-a9a2-45f9-a8bb-d8bce515d37d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.794796017+00:00 stderr F I1213 00:20:09.794762 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:38.102.83.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {734af81b-a9a2-45f9-a8bb-d8bce515d37d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert 
Value:{GoSet:[{GoUUID:734af81b-a9a2-45f9-a8bb-d8bce515d37d}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.795392324+00:00 stderr F I1213 00:20:09.795351 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1e0c1c89-e407-443a-b61e-39d1499f5b36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.795392324+00:00 stderr F I1213 00:20:09.795382 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:1e0c1c89-e407-443a-b61e-39d1499f5b36}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.795415654+00:00 stderr F I1213 00:20:09.795394 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1e0c1c89-e407-443a-b61e-39d1499f5b36}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:1e0c1c89-e407-443a-b61e-39d1499f5b36}]}}] Timeout: Where:[where column _uuid == {4c794511-13fc-4ffb-bba5-3b14f127de07}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.796961657+00:00 stderr F I1213 00:20:09.796325 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:100.64.0.2 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92357530-52ac-4fd6-8a9d-74712c56c6bf}] Until: Durable: 
Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.796961657+00:00 stderr F I1213 00:20:09.796376 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:92357530-52ac-4fd6-8a9d-74712c56c6bf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.796961657+00:00 stderr F I1213 00:20:09.796392 28750 transact.go:42] Configuring OVN: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:100.64.0.2 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {92357530-52ac-4fd6-8a9d-74712c56c6bf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:92357530-52ac-4fd6-8a9d-74712c56c6bf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.836157838+00:00 stderr F I1213 00:20:09.836092 28750 base_network_controller.go:458] When adding node crc for network default, found 81 pods to add to retryPods 2025-12-13T00:20:09.836157838+00:00 stderr F I1213 00:20:09.836117 28750 base_network_controller.go:464] Adding pod hostpath-provisioner/csi-hostpathplugin-hvm8g to retryPods for network default 2025-12-13T00:20:09.836157838+00:00 stderr F I1213 00:20:09.836128 28750 base_network_controller.go:464] Adding pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m to retryPods for network default 2025-12-13T00:20:09.836157838+00:00 stderr F I1213 00:20:09.836134 28750 base_network_controller.go:464] Adding pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp to retryPods for network default 2025-12-13T00:20:09.836157838+00:00 stderr F I1213 00:20:09.836140 28750 
base_network_controller.go:464] Adding pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 to retryPods for network default 2025-12-13T00:20:09.836157838+00:00 stderr F I1213 00:20:09.836147 28750 base_network_controller.go:464] Adding pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b to retryPods for network default 2025-12-13T00:20:09.836157838+00:00 stderr F I1213 00:20:09.836153 28750 base_network_controller.go:464] Adding pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 to retryPods for network default 2025-12-13T00:20:09.836213250+00:00 stderr F I1213 00:20:09.836158 28750 base_network_controller.go:464] Adding pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg to retryPods for network default 2025-12-13T00:20:09.836213250+00:00 stderr F I1213 00:20:09.836165 28750 base_network_controller.go:464] Adding pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 to retryPods for network default 2025-12-13T00:20:09.836213250+00:00 stderr F I1213 00:20:09.836171 28750 base_network_controller.go:464] Adding pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc to retryPods for network default 2025-12-13T00:20:09.836213250+00:00 stderr F I1213 00:20:09.836176 28750 base_network_controller.go:464] Adding pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 to retryPods for network default 2025-12-13T00:20:09.836213250+00:00 stderr F I1213 00:20:09.836181 28750 base_network_controller.go:464] Adding pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd to retryPods for network default 2025-12-13T00:20:09.836213250+00:00 stderr F I1213 00:20:09.836188 28750 base_network_controller.go:464] Adding pod openshift-console/console-644bb77b49-5x5xk to retryPods for network default 2025-12-13T00:20:09.836213250+00:00 stderr F I1213 00:20:09.836195 28750 base_network_controller.go:464] Adding pod 
openshift-console/downloads-65476884b9-9wcvx to retryPods for network default 2025-12-13T00:20:09.836213250+00:00 stderr F I1213 00:20:09.836200 28750 base_network_controller.go:464] Adding pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z to retryPods for network default 2025-12-13T00:20:09.836213250+00:00 stderr F I1213 00:20:09.836206 28750 base_network_controller.go:464] Adding pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf to retryPods for network default 2025-12-13T00:20:09.836244120+00:00 stderr F I1213 00:20:09.836213 28750 base_network_controller.go:464] Adding pod openshift-dns-operator/dns-operator-75f687757b-nz2xb to retryPods for network default 2025-12-13T00:20:09.836244120+00:00 stderr F I1213 00:20:09.836219 28750 base_network_controller.go:464] Adding pod openshift-dns/dns-default-gbw49 to retryPods for network default 2025-12-13T00:20:09.836244120+00:00 stderr F I1213 00:20:09.836226 28750 base_network_controller.go:464] Adding pod openshift-dns/node-resolver-dn27q to retryPods for network default 2025-12-13T00:20:09.836244120+00:00 stderr F I1213 00:20:09.836231 28750 base_network_controller.go:464] Adding pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg to retryPods for network default 2025-12-13T00:20:09.836278381+00:00 stderr F I1213 00:20:09.836255 28750 base_network_controller.go:464] Adding pod openshift-etcd/etcd-crc to retryPods for network default 2025-12-13T00:20:09.836278381+00:00 stderr F I1213 00:20:09.836265 28750 base_network_controller.go:464] Adding pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv to retryPods for network default 2025-12-13T00:20:09.836278381+00:00 stderr F I1213 00:20:09.836271 28750 base_network_controller.go:464] Adding pod openshift-image-registry/image-registry-75b7bb6564-rnjvj to retryPods for network default 2025-12-13T00:20:09.836289752+00:00 stderr F I1213 00:20:09.836277 28750 
base_network_controller.go:464] Adding pod openshift-image-registry/node-ca-l92hr to retryPods for network default 2025-12-13T00:20:09.836289752+00:00 stderr F I1213 00:20:09.836283 28750 base_network_controller.go:464] Adding pod openshift-ingress-canary/ingress-canary-2vhcn to retryPods for network default 2025-12-13T00:20:09.836299952+00:00 stderr F I1213 00:20:09.836289 28750 base_network_controller.go:464] Adding pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t to retryPods for network default 2025-12-13T00:20:09.836299952+00:00 stderr F I1213 00:20:09.836295 28750 base_network_controller.go:464] Adding pod openshift-ingress/router-default-5c9bf7bc58-6jctv to retryPods for network default 2025-12-13T00:20:09.836310162+00:00 stderr F I1213 00:20:09.836300 28750 base_network_controller.go:464] Adding pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 to retryPods for network default 2025-12-13T00:20:09.836310162+00:00 stderr F I1213 00:20:09.836306 28750 base_network_controller.go:464] Adding pod openshift-kube-apiserver/installer-13-crc to retryPods for network default 2025-12-13T00:20:09.836320032+00:00 stderr F I1213 00:20:09.836314 28750 base_network_controller.go:464] Adding pod openshift-kube-apiserver/kube-apiserver-crc to retryPods for network default 2025-12-13T00:20:09.836333113+00:00 stderr F I1213 00:20:09.836319 28750 base_network_controller.go:464] Adding pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb to retryPods for network default 2025-12-13T00:20:09.836333113+00:00 stderr F I1213 00:20:09.836326 28750 base_network_controller.go:464] Adding pod openshift-kube-controller-manager/kube-controller-manager-crc to retryPods for network default 2025-12-13T00:20:09.836343243+00:00 stderr F I1213 00:20:09.836334 28750 base_network_controller.go:464] Adding pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 to retryPods for 
network default 2025-12-13T00:20:09.836343243+00:00 stderr F I1213 00:20:09.836340 28750 base_network_controller.go:464] Adding pod openshift-kube-scheduler/openshift-kube-scheduler-crc to retryPods for network default 2025-12-13T00:20:09.836353383+00:00 stderr F I1213 00:20:09.836346 28750 base_network_controller.go:464] Adding pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr to retryPods for network default 2025-12-13T00:20:09.836362874+00:00 stderr F I1213 00:20:09.836351 28750 base_network_controller.go:464] Adding pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv to retryPods for network default 2025-12-13T00:20:09.836362874+00:00 stderr F I1213 00:20:09.836357 28750 base_network_controller.go:464] Adding pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw to retryPods for network default 2025-12-13T00:20:09.836373504+00:00 stderr F I1213 00:20:09.836363 28750 base_network_controller.go:464] Adding pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb to retryPods for network default 2025-12-13T00:20:09.836373504+00:00 stderr F I1213 00:20:09.836370 28750 base_network_controller.go:464] Adding pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc to retryPods for network default 2025-12-13T00:20:09.836383394+00:00 stderr F I1213 00:20:09.836377 28750 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh to retryPods for network default 2025-12-13T00:20:09.836392794+00:00 stderr F I1213 00:20:09.836384 28750 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-daemon-zpnhg to retryPods for network default 2025-12-13T00:20:09.836402425+00:00 stderr F I1213 00:20:09.836392 28750 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm to retryPods for network default 
2025-12-13T00:20:09.836402425+00:00 stderr F I1213 00:20:09.836399 28750 base_network_controller.go:464] Adding pod openshift-machine-config-operator/machine-config-server-v65wr to retryPods for network default 2025-12-13T00:20:09.836412205+00:00 stderr F I1213 00:20:09.836406 28750 base_network_controller.go:464] Adding pod openshift-marketplace/certified-operators-lcrg8 to retryPods for network default 2025-12-13T00:20:09.836421555+00:00 stderr F I1213 00:20:09.836413 28750 base_network_controller.go:464] Adding pod openshift-marketplace/community-operators-s2hxn to retryPods for network default 2025-12-13T00:20:09.836431065+00:00 stderr F I1213 00:20:09.836420 28750 base_network_controller.go:464] Adding pod openshift-marketplace/marketplace-operator-8b455464d-kghgr to retryPods for network default 2025-12-13T00:20:09.836431065+00:00 stderr F I1213 00:20:09.836427 28750 base_network_controller.go:464] Adding pod openshift-marketplace/redhat-marketplace-nv4pl to retryPods for network default 2025-12-13T00:20:09.836440796+00:00 stderr F I1213 00:20:09.836434 28750 base_network_controller.go:464] Adding pod openshift-marketplace/redhat-operators-zg7cl to retryPods for network default 2025-12-13T00:20:09.836450156+00:00 stderr F I1213 00:20:09.836442 28750 base_network_controller.go:464] Adding pod openshift-multus/multus-additional-cni-plugins-bzj2p to retryPods for network default 2025-12-13T00:20:09.836462746+00:00 stderr F I1213 00:20:09.836449 28750 base_network_controller.go:464] Adding pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc to retryPods for network default 2025-12-13T00:20:09.836462746+00:00 stderr F I1213 00:20:09.836456 28750 base_network_controller.go:464] Adding pod openshift-multus/multus-q88th to retryPods for network default 2025-12-13T00:20:09.836472657+00:00 stderr F I1213 00:20:09.836462 28750 base_network_controller.go:464] Adding pod openshift-multus/network-metrics-daemon-qdfr4 to retryPods for network default 
2025-12-13T00:20:09.836472657+00:00 stderr F I1213 00:20:09.836467 28750 base_network_controller.go:464] Adding pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 to retryPods for network default 2025-12-13T00:20:09.836482717+00:00 stderr F I1213 00:20:09.836472 28750 base_network_controller.go:464] Adding pod openshift-network-diagnostics/network-check-target-v54bt to retryPods for network default 2025-12-13T00:20:09.836482717+00:00 stderr F I1213 00:20:09.836477 28750 base_network_controller.go:464] Adding pod openshift-network-node-identity/network-node-identity-7xghp to retryPods for network default 2025-12-13T00:20:09.836492717+00:00 stderr F I1213 00:20:09.836483 28750 base_network_controller.go:464] Adding pod openshift-network-operator/iptables-alerter-wwpnd to retryPods for network default 2025-12-13T00:20:09.836492717+00:00 stderr F I1213 00:20:09.836488 28750 base_network_controller.go:464] Adding pod openshift-network-operator/network-operator-767c585db5-zd56b to retryPods for network default 2025-12-13T00:20:09.836502707+00:00 stderr F I1213 00:20:09.836493 28750 base_network_controller.go:464] Adding pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd to retryPods for network default 2025-12-13T00:20:09.836502707+00:00 stderr F I1213 00:20:09.836499 28750 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf to retryPods for network default 2025-12-13T00:20:09.836512438+00:00 stderr F I1213 00:20:09.836506 28750 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh to retryPods for network default 2025-12-13T00:20:09.836522118+00:00 stderr F I1213 00:20:09.836512 28750 base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 to retryPods for network default 2025-12-13T00:20:09.836522118+00:00 stderr F I1213 00:20:09.836518 28750 
base_network_controller.go:464] Adding pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz to retryPods for network default 2025-12-13T00:20:09.836532108+00:00 stderr F I1213 00:20:09.836523 28750 base_network_controller.go:464] Adding pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b to retryPods for network default 2025-12-13T00:20:09.836532108+00:00 stderr F I1213 00:20:09.836529 28750 base_network_controller.go:464] Adding pod openshift-ovn-kubernetes/ovnkube-node-brz8k to retryPods for network default 2025-12-13T00:20:09.836542769+00:00 stderr F I1213 00:20:09.836534 28750 base_network_controller.go:464] Adding pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs to retryPods for network default 2025-12-13T00:20:09.836542769+00:00 stderr F I1213 00:20:09.836539 28750 base_network_controller.go:464] Adding pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz to retryPods for network default 2025-12-13T00:20:09.836552559+00:00 stderr F I1213 00:20:09.836546 28750 base_network_controller.go:464] Adding pod openshift-service-ca/service-ca-666f99b6f-kk8kg to retryPods for network default 2025-12-13T00:20:09.836581450+00:00 stderr F I1213 00:20:09.836557 28750 obj_retry.go:233] Iterate retry objects requested (resource *v1.Pod) 2025-12-13T00:20:09.836591560+00:00 stderr F I1213 00:20:09.836583 28750 obj_retry.go:555] Update event received for resource *factory.egressNode, old object is equal to new: false 2025-12-13T00:20:09.836604270+00:00 stderr F I1213 00:20:09.836596 28750 obj_retry.go:607] Update event received for *factory.egressNode crc 2025-12-13T00:20:09.836663872+00:00 stderr F I1213 00:20:09.836639 28750 obj_retry.go:427] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod 2025-12-13T00:20:09.836698493+00:00 stderr F I1213 00:20:09.836675 28750 obj_retry.go:555] Update event received for resource *factory.egressFwNode, old 
object is equal to new: true 2025-12-13T00:20:09.836698493+00:00 stderr F I1213 00:20:09.836660 28750 obj_retry.go:402] Going to retry *v1.Pod resource setup for 66 objects: [openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b openshift-console-operator/console-operator-5dbbc74dc9-cp5cd openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb openshift-marketplace/redhat-marketplace-nv4pl openshift-marketplace/redhat-operators-zg7cl openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr openshift-network-operator/network-operator-767c585db5-zd56b openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 openshift-dns-operator/dns-operator-75f687757b-nz2xb openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw openshift-network-operator/iptables-alerter-wwpnd openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg openshift-dns/node-resolver-dn27q openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 openshift-network-diagnostics/network-check-target-v54bt openshift-machine-config-operator/machine-config-daemon-zpnhg openshift-multus/multus-additional-cni-plugins-bzj2p openshift-multus/multus-q88th openshift-apiserver/apiserver-7fc54b8dd7-d2bhp openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 openshift-network-node-identity/network-node-identity-7xghp 
openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv openshift-multus/multus-admission-controller-6c7c885997-4hbbc openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 openshift-console/console-644bb77b49-5x5xk openshift-etcd/etcd-crc openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b openshift-ovn-kubernetes/ovnkube-node-brz8k openshift-ingress-canary/ingress-canary-2vhcn openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh openshift-machine-config-operator/machine-config-server-v65wr openshift-marketplace/certified-operators-lcrg8 openshift-marketplace/community-operators-s2hxn openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc openshift-kube-apiserver/installer-13-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc openshift-console/downloads-65476884b9-9wcvx openshift-dns/dns-default-gbw49 openshift-image-registry/image-registry-75b7bb6564-rnjvj openshift-image-registry/node-ca-l92hr openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t openshift-ingress/router-default-5c9bf7bc58-6jctv hostpath-provisioner/csi-hostpathplugin-hvm8g openshift-controller-manager/controller-manager-778975cc4f-x5vcf openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 openshift-service-ca/service-ca-666f99b6f-kk8kg openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 
openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm openshift-marketplace/marketplace-operator-8b455464d-kghgr openshift-multus/network-metrics-daemon-qdfr4] 2025-12-13T00:20:09.836785795+00:00 stderr F I1213 00:20:09.836757 28750 obj_retry.go:411] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources 2025-12-13T00:20:09.836785795+00:00 stderr F I1213 00:20:09.836771 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 2025-12-13T00:20:09.836785795+00:00 stderr F I1213 00:20:09.836778 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 2025-12-13T00:20:09.836798225+00:00 stderr F I1213 00:20:09.836784 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-12-13T00:20:09.836798225+00:00 stderr F I1213 00:20:09.836788 28750 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-qdfr4 in node crc 2025-12-13T00:20:09.836798225+00:00 stderr F I1213 00:20:09.836793 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-12-13T00:20:09.836808876+00:00 stderr F I1213 00:20:09.836800 28750 ovn.go:134] Ensuring zone local for Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd in node crc 2025-12-13T00:20:09.836845387+00:00 stderr F I1213 00:20:09.836818 28750 base_network_controller_pods.go:476] [default/openshift-multus/network-metrics-daemon-qdfr4] creating logical port openshift-multus_network-metrics-daemon-qdfr4 for pod on switch crc 2025-12-13T00:20:09.836845387+00:00 stderr F I1213 00:20:09.836827 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-12-13T00:20:09.836845387+00:00 stderr F I1213 00:20:09.836832 28750 obj_retry.go:358] Adding new object: *v1.Pod 
openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-12-13T00:20:09.836845387+00:00 stderr F I1213 00:20:09.836837 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv in node crc 2025-12-13T00:20:09.836860887+00:00 stderr F I1213 00:20:09.836851 28750 base_network_controller_pods.go:476] [default/openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv] creating logical port openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv for pod on switch crc 2025-12-13T00:20:09.837000911+00:00 stderr F I1213 00:20:09.836928 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-12-13T00:20:09.837049302+00:00 stderr F I1213 00:20:09.837018 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr 2025-12-13T00:20:09.837049302+00:00 stderr F I1213 00:20:09.837036 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/community-operators-s2hxn 2025-12-13T00:20:09.837049302+00:00 stderr F I1213 00:20:09.837040 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc 2025-12-13T00:20:09.837062193+00:00 stderr F I1213 00:20:09.837040 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr 2025-12-13T00:20:09.837062193+00:00 stderr F I1213 00:20:09.837051 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-12-13T00:20:09.837072213+00:00 stderr F I1213 00:20:09.837059 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 2025-12-13T00:20:09.837072213+00:00 stderr F I1213 00:20:09.837022 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 
2025-12-13T00:20:09.837088163+00:00 stderr F I1213 00:20:09.837070 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb 2025-12-13T00:20:09.837088163+00:00 stderr F I1213 00:20:09.837062 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-12-13T00:20:09.837098484+00:00 stderr F I1213 00:20:09.837059 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-server-v65wr in node crc 2025-12-13T00:20:09.837098484+00:00 stderr F I1213 00:20:09.837091 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-apiserver/installer-13-crc 2025-12-13T00:20:09.837108694+00:00 stderr F I1213 00:20:09.837097 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-12-13T00:20:09.837108694+00:00 stderr F I1213 00:20:09.837009 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-12-13T00:20:09.837108694+00:00 stderr F I1213 00:20:09.837098 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-server-v65wr after 0 failed attempt(s) 2025-12-13T00:20:09.837119504+00:00 stderr F I1213 00:20:09.837109 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-12-13T00:20:09.837119504+00:00 stderr F I1213 00:20:09.837112 28750 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-server-v65wr 2025-12-13T00:20:09.837129694+00:00 stderr F I1213 00:20:09.837119 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 in node crc 2025-12-13T00:20:09.837129694+00:00 stderr F I1213 
00:20:09.837119 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-dns/dns-default-gbw49 2025-12-13T00:20:09.837129694+00:00 stderr F I1213 00:20:09.837112 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-12-13T00:20:09.837140755+00:00 stderr F I1213 00:20:09.837120 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-12-13T00:20:09.837140755+00:00 stderr F I1213 00:20:09.837121 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-etcd/etcd-crc 2025-12-13T00:20:09.837140755+00:00 stderr F I1213 00:20:09.837133 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-12-13T00:20:09.837151795+00:00 stderr F I1213 00:20:09.837138 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-12-13T00:20:09.837151795+00:00 stderr F I1213 00:20:09.837143 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-12-13T00:20:09.837151795+00:00 stderr F I1213 00:20:09.837142 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-etcd/etcd-crc 2025-12-13T00:20:09.837162615+00:00 stderr F I1213 00:20:09.837148 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-image-registry/node-ca-l92hr 2025-12-13T00:20:09.837162615+00:00 stderr F I1213 00:20:09.837150 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb in node crc 2025-12-13T00:20:09.837162615+00:00 stderr F I1213 00:20:09.837079 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-console/console-644bb77b49-5x5xk 2025-12-13T00:20:09.837162615+00:00 stderr F I1213 00:20:09.837100 28750 obj_retry.go:358] Adding new object: 
*v1.Pod openshift-kube-apiserver/installer-13-crc 2025-12-13T00:20:09.837177636+00:00 stderr F I1213 00:20:09.837155 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-12-13T00:20:09.837177636+00:00 stderr F I1213 00:20:09.837165 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-dns/node-resolver-dn27q 2025-12-13T00:20:09.837177636+00:00 stderr F I1213 00:20:09.837167 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-v54bt 2025-12-13T00:20:09.837177636+00:00 stderr F I1213 00:20:09.837171 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-dns/node-resolver-dn27q 2025-12-13T00:20:09.837191246+00:00 stderr F I1213 00:20:09.837175 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-v54bt 2025-12-13T00:20:09.837191246+00:00 stderr F I1213 00:20:09.837176 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-12-13T00:20:09.837191246+00:00 stderr F I1213 00:20:09.837173 28750 base_network_controller_pods.go:476] [default/openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb] creating logical port openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb for pod on switch crc 2025-12-13T00:20:09.837191246+00:00 stderr F I1213 00:20:09.837175 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-12-13T00:20:09.837205527+00:00 stderr F I1213 00:20:09.837187 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-12-13T00:20:09.837205527+00:00 stderr F I1213 00:20:09.837186 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 
2025-12-13T00:20:09.837205527+00:00 stderr F I1213 00:20:09.837171 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-7xghp 2025-12-13T00:20:09.837205527+00:00 stderr F I1213 00:20:09.837197 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-12-13T00:20:09.837219787+00:00 stderr F I1213 00:20:09.837204 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-12-13T00:20:09.837219787+00:00 stderr F I1213 00:20:09.837207 28750 ovn.go:134] Ensuring zone local for Pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg in node crc 2025-12-13T00:20:09.837219787+00:00 stderr F I1213 00:20:09.837209 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/redhat-operators-zg7cl 2025-12-13T00:20:09.837219787+00:00 stderr F I1213 00:20:09.837209 28750 obj_retry.go:296] Retry object setup: *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-12-13T00:20:09.837233557+00:00 stderr F I1213 00:20:09.837216 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/redhat-operators-zg7cl 2025-12-13T00:20:09.837233557+00:00 stderr F I1213 00:20:09.837213 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-7xghp 2025-12-13T00:20:09.837233557+00:00 stderr F I1213 00:20:09.837224 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-12-13T00:20:09.837246988+00:00 stderr F I1213 00:20:09.837231 28750 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/redhat-operators-zg7cl in node crc 2025-12-13T00:20:09.837246988+00:00 stderr F I1213 00:20:09.837233 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-12-13T00:20:09.837246988+00:00 stderr F I1213 
00:20:09.837237 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-wwpnd 2025-12-13T00:20:09.837246988+00:00 stderr F I1213 00:20:09.837235 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-12-13T00:20:09.837264578+00:00 stderr F I1213 00:20:09.837242 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-12-13T00:20:09.837264578+00:00 stderr F I1213 00:20:09.837248 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-12-13T00:20:09.837264578+00:00 stderr F I1213 00:20:09.837252 28750 ovn.go:134] Ensuring zone local for Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb in node crc 2025-12-13T00:20:09.837264578+00:00 stderr F I1213 00:20:09.837255 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-12-13T00:20:09.837278508+00:00 stderr F I1213 00:20:09.837260 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-12-13T00:20:09.837278508+00:00 stderr F I1213 00:20:09.837257 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-12-13T00:20:09.837278508+00:00 stderr F I1213 00:20:09.837268 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-12-13T00:20:09.837278508+00:00 stderr F I1213 00:20:09.837267 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-12-13T00:20:09.837278508+00:00 stderr F I1213 00:20:09.837269 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-12-13T00:20:09.837292139+00:00 
stderr F I1213 00:20:09.837277 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z 2025-12-13T00:20:09.837292139+00:00 stderr F I1213 00:20:09.837281 28750 ovn.go:134] Ensuring zone local for Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 in node crc 2025-12-13T00:20:09.837292139+00:00 stderr F I1213 00:20:09.837283 28750 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh in node crc 2025-12-13T00:20:09.837292139+00:00 stderr F I1213 00:20:09.837285 28750 base_network_controller_pods.go:476] [default/openshift-dns-operator/dns-operator-75f687757b-nz2xb] creating logical port openshift-dns-operator_dns-operator-75f687757b-nz2xb for pod on switch crc 2025-12-13T00:20:09.837304319+00:00 stderr F I1213 00:20:09.837284 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b 2025-12-13T00:20:09.837304319+00:00 stderr F I1213 00:20:09.836778 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-12-13T00:20:09.837304319+00:00 stderr F I1213 00:20:09.837295 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-12-13T00:20:09.837315239+00:00 stderr F I1213 00:20:09.837302 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-12-13T00:20:09.837315239+00:00 stderr F I1213 00:20:09.837302 28750 base_network_controller_pods.go:476] [default/openshift-console-operator/console-conversion-webhook-595f9969b-l6z49] creating logical port openshift-console-operator_console-conversion-webhook-595f9969b-l6z49 for pod on switch crc 2025-12-13T00:20:09.837315239+00:00 stderr F I1213 00:20:09.837302 28750 obj_retry.go:296] Retry object setup: *v1.Pod 
openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-12-13T00:20:09.837329310+00:00 stderr F I1213 00:20:09.837315 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-12-13T00:20:09.837329310+00:00 stderr F I1213 00:20:09.837321 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm in node crc 2025-12-13T00:20:09.837329310+00:00 stderr F I1213 00:20:09.836944 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-12-13T00:20:09.837340260+00:00 stderr F I1213 00:20:09.837328 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-12-13T00:20:09.837340260+00:00 stderr F I1213 00:20:09.837335 28750 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-bzj2p in node crc 2025-12-13T00:20:09.837350320+00:00 stderr F I1213 00:20:09.837336 28750 base_network_controller_pods.go:476] [default/openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm] creating logical port openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm for pod on switch crc 2025-12-13T00:20:09.837350320+00:00 stderr F I1213 00:20:09.837339 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-bzj2p after 0 failed attempt(s) 2025-12-13T00:20:09.837360301+00:00 stderr F I1213 00:20:09.837352 28750 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-additional-cni-plugins-bzj2p 2025-12-13T00:20:09.837396712+00:00 stderr F I1213 00:20:09.837238 28750 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-7xghp in node crc 2025-12-13T00:20:09.837396712+00:00 stderr F I1213 00:20:09.836984 28750 obj_retry.go:296] Retry object setup: *v1.Pod 
openshift-console/downloads-65476884b9-9wcvx 2025-12-13T00:20:09.837396712+00:00 stderr F I1213 00:20:09.837380 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-7xghp after 0 failed attempt(s) 2025-12-13T00:20:09.837396712+00:00 stderr F I1213 00:20:09.837385 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-console/downloads-65476884b9-9wcvx 2025-12-13T00:20:09.837396712+00:00 stderr F I1213 00:20:09.837388 28750 default_network_controller.go:699] Recording success event on pod openshift-network-node-identity/network-node-identity-7xghp 2025-12-13T00:20:09.837396712+00:00 stderr F I1213 00:20:09.836821 28750 base_network_controller_pods.go:476] [default/openshift-console-operator/console-operator-5dbbc74dc9-cp5cd] creating logical port openshift-console-operator_console-operator-5dbbc74dc9-cp5cd for pod on switch crc 2025-12-13T00:20:09.837396712+00:00 stderr F I1213 00:20:09.837232 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-12-13T00:20:09.837410502+00:00 stderr F I1213 00:20:09.837400 28750 ovn.go:134] Ensuring zone local for Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg in node crc 2025-12-13T00:20:09.837410502+00:00 stderr F I1213 00:20:09.837101 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-brz8k 2025-12-13T00:20:09.837420392+00:00 stderr F I1213 00:20:09.837409 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-brz8k 2025-12-13T00:20:09.837420392+00:00 stderr F I1213 00:20:09.837416 28750 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-brz8k in node crc 2025-12-13T00:20:09.837430403+00:00 stderr F I1213 00:20:09.837420 28750 base_network_controller_pods.go:476] [default/openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] creating logical port openshift-etcd-operator_etcd-operator-768d5b5d86-722mg 
for pod on switch crc 2025-12-13T00:20:09.837430403+00:00 stderr F I1213 00:20:09.837183 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-12-13T00:20:09.837443543+00:00 stderr F I1213 00:20:09.837429 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 in node crc 2025-12-13T00:20:09.837453143+00:00 stderr F I1213 00:20:09.837444 28750 base_network_controller_pods.go:476] [default/openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7] creating logical port openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7 for pod on switch crc 2025-12-13T00:20:09.837482044+00:00 stderr F I1213 00:20:09.837113 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} options:{GoMap:map[iface-id-ver:cf1a8966-f594-490a-9fbb-eec5bafd13d3 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2a5717ea-0a50-4ebb-b087-90e637274a33}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837493514+00:00 stderr F I1213 00:20:09.837303 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-12-13T00:20:09.837505405+00:00 stderr F I1213 00:20:09.837495 28750 ovn.go:134] Ensuring zone local for Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b in node crc 2025-12-13T00:20:09.837553226+00:00 stderr F I1213 00:20:09.837504 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == 
{d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837566056+00:00 stderr F I1213 00:20:09.837536 28750 base_network_controller_pods.go:476] [default/openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b] creating logical port openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b for pod on switch crc 2025-12-13T00:20:09.837566056+00:00 stderr F I1213 00:20:09.837543 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} options:{GoMap:map[iface-id-ver:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6af06372-81fc-4451-8678-6253ce70f317}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837598107+00:00 stderr F I1213 00:20:09.837562 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} options:{GoMap:map[iface-id-ver:0b5c38ff-1fa8-4219-994d-15776acd4a4d requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e834ded8-9d5b-46e7-b962-1ee96928bab4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837598107+00:00 stderr F I1213 00:20:09.837564 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} options:{GoMap:map[iface-id-ver:59748b9b-c309-4712-aa85-bb38d71c4915 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6056bee0-572a-4de7-bb24-40ca6a66be30}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837598107+00:00 stderr F I1213 00:20:09.837560 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} options:{GoMap:map[iface-id-ver:120b38dc-8236-4fa6-a452-642b8ad738ee requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad3d5728-34ed-421c-a749-1d7a957800a8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837615018+00:00 stderr F I1213 00:20:09.837583 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837615018+00:00 stderr F I1213 00:20:09.837587 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {4f91dae1-6838-4840-9491-e068cbcf1f65}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837615018+00:00 stderr F I1213 00:20:09.837599 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837651229+00:00 stderr F I1213 00:20:09.837613 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate 
Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837662259+00:00 stderr F I1213 00:20:09.837641 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {f8a00f5d-1728-4139-8582-f2fb90581499}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837662259+00:00 stderr F I1213 00:20:09.837645 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837662259+00:00 stderr F I1213 00:20:09.837296 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b 2025-12-13T00:20:09.837672929+00:00 stderr F I1213 00:20:09.837666 28750 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-767c585db5-zd56b in node crc 2025-12-13T00:20:09.837682629+00:00 stderr F I1213 00:20:09.837671 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-operator/network-operator-767c585db5-zd56b after 0 failed attempt(s) 2025-12-13T00:20:09.837682629+00:00 stderr F I1213 00:20:09.837676 28750 default_network_controller.go:699] Recording success event on pod openshift-network-operator/network-operator-767c585db5-zd56b 2025-12-13T00:20:09.837692830+00:00 stderr F I1213 00:20:09.837615 28750 model_client.go:397] Mutate operations 
generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837692830+00:00 stderr F I1213 00:20:09.837675 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837708790+00:00 stderr F I1213 00:20:09.837283 28750 base_network_controller_pods.go:476] [default/openshift-marketplace/redhat-operators-zg7cl] creating logical port openshift-marketplace_redhat-operators-zg7cl for pod on switch crc 2025-12-13T00:20:09.837746491+00:00 stderr F I1213 00:20:09.837717 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837774102+00:00 stderr F I1213 00:20:09.837724 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-12-13T00:20:09.837822223+00:00 stderr F I1213 00:20:09.837807 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw in node crc 2025-12-13T00:20:09.837864624+00:00 stderr F I1213 00:20:09.837420 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-brz8k after 0 failed attempt(s) 
2025-12-13T00:20:09.837864624+00:00 stderr F I1213 00:20:09.837857 28750 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-brz8k 2025-12-13T00:20:09.837875975+00:00 stderr F I1213 00:20:09.837849 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} options:{GoMap:map[iface-id-ver:b9a848b0-1438-4ada-b7da-2fe53dbf235f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1bf49fc6-86bc-42b9-9630-d7184a92b0eb}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837875975+00:00 stderr F I1213 00:20:09.837248 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-12-13T00:20:09.837906006+00:00 stderr F I1213 00:20:09.837843 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} options:{GoMap:map[iface-id-ver:01feb2e0-a0f4-4573-8335-34e364e0ef40 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e86699a-fa52-4a81-9386-60d37f3fa10c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837906006+00:00 stderr F I1213 00:20:09.837887 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1bf49fc6-86bc-42b9-9630-d7184a92b0eb}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837987868+00:00 stderr F I1213 00:20:09.837276 28750 obj_retry.go:358] Adding new object: 
*v1.Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-12-13T00:20:09.837987868+00:00 stderr F I1213 00:20:09.837181 28750 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-v54bt in node crc 2025-12-13T00:20:09.837987868+00:00 stderr F I1213 00:20:09.837964 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.8 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6dfd664-3ae6-42b8-adeb-4c9f305aa327}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.837987868+00:00 stderr F I1213 00:20:09.837963 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.62 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {513dccfe-dd54-4217-b9ce-2d865937835c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838016509+00:00 stderr F I1213 00:20:09.837995 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6dfd664-3ae6-42b8-adeb-4c9f305aa327}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838016509+00:00 stderr F I1213 00:20:09.838003 28750 base_network_controller_pods.go:476] [default/openshift-network-diagnostics/network-check-target-v54bt] creating logical port openshift-network-diagnostics_network-check-target-v54bt for pod on switch crc 2025-12-13T00:20:09.838016509+00:00 stderr F I1213 00:20:09.838001 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:513dccfe-dd54-4217-b9ce-2d865937835c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838029939+00:00 stderr F I1213 00:20:09.838004 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.21 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838029939+00:00 stderr F I1213 00:20:09.838010 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.25 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0ec236b8-244d-4e2f-bfb5-0733dbce5d66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838045599+00:00 stderr F I1213 00:20:09.837384 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} options:{GoMap:map[iface-id-ver:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be2fa59f-4cec-4742-a4bd-dcd0913d1422}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838058250+00:00 stderr F I1213 00:20:09.838010 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} options:{GoMap:map[iface-id-ver:0b5c38ff-1fa8-4219-994d-15776acd4a4d requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:08 10.217.0.8]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: 
Where:[where column _uuid == {e834ded8-9d5b-46e7-b962-1ee96928bab4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:e834ded8-9d5b-46e7-b962-1ee96928bab4}]}}] Timeout: Where:[where column _uuid == {f8a00f5d-1728-4139-8582-f2fb90581499}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.8 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a6dfd664-3ae6-42b8-adeb-4c9f305aa327}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a6dfd664-3ae6-42b8-adeb-4c9f305aa327}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838058250+00:00 stderr F I1213 00:20:09.838042 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0ec236b8-244d-4e2f-bfb5-0733dbce5d66}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838074510+00:00 stderr F I1213 00:20:09.837877 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-12-13T00:20:09.838074510+00:00 stderr F I1213 00:20:09.838064 28750 ovn.go:134] Ensuring zone local for Pod 
openshift-service-ca/service-ca-666f99b6f-kk8kg in node crc 2025-12-13T00:20:09.838088201+00:00 stderr F I1213 00:20:09.838064 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838088201+00:00 stderr F I1213 00:20:09.838083 28750 base_network_controller_pods.go:476] [default/openshift-service-ca/service-ca-666f99b6f-kk8kg] creating logical port openshift-service-ca_service-ca-666f99b6f-kk8kg for pod on switch crc 2025-12-13T00:20:09.838131942+00:00 stderr F I1213 00:20:09.838062 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} options:{GoMap:map[iface-id-ver:cf1a8966-f594-490a-9fbb-eec5bafd13d3 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:19 10.217.0.25]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2a5717ea-0a50-4ebb-b087-90e637274a33}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2a5717ea-0a50-4ebb-b087-90e637274a33}]}}] Timeout: Where:[where column _uuid == {4f91dae1-6838-4840-9491-e068cbcf1f65}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.25 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[where column _uuid == {0ec236b8-244d-4e2f-bfb5-0733dbce5d66}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0ec236b8-244d-4e2f-bfb5-0733dbce5d66}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838131942+00:00 stderr F I1213 00:20:09.838100 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838156752+00:00 stderr F I1213 00:20:09.838130 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.61 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {95146e00-82d8-4bf3-8c5b-dac75d43239c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838172883+00:00 stderr F I1213 00:20:09.838120 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} options:{GoMap:map[iface-id-ver:120b38dc-8236-4fa6-a452-642b8ad738ee requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:15 10.217.0.21]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad3d5728-34ed-421c-a749-1d7a957800a8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == 
{d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad3d5728-34ed-421c-a749-1d7a957800a8}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.21 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7b5ef19-a4ce-4da7-8bcf-021f71fd6ee3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838213134+00:00 stderr F I1213 00:20:09.837577 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} options:{GoMap:map[iface-id-ver:ed024e5d-8fc2-4c22-803d-73f3c9795f19 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5ecfd58-e886-4b2c-9939-022e7f14b7a7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838213134+00:00 stderr F I1213 00:20:09.837147 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/redhat-marketplace-nv4pl 2025-12-13T00:20:09.838225614+00:00 stderr F I1213 00:20:09.838214 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/redhat-marketplace-nv4pl 2025-12-13T00:20:09.838225614+00:00 stderr F I1213 00:20:09.838016 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} options:{GoMap:map[iface-id-ver:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3e 10.217.0.62]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6af06372-81fc-4451-8678-6253ce70f317}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6af06372-81fc-4451-8678-6253ce70f317}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.62 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {513dccfe-dd54-4217-b9ce-2d865937835c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:513dccfe-dd54-4217-b9ce-2d865937835c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838243895+00:00 stderr F I1213 00:20:09.838224 28750 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/redhat-marketplace-nv4pl in node crc 2025-12-13T00:20:09.838256315+00:00 stderr F I1213 00:20:09.838230 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838266255+00:00 stderr F I1213 00:20:09.838237 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} options:{GoMap:map[iface-id-ver:e4a7de23-6134-4044-902a-0900dc04a501 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9409cb25-8c46-46db-98ab-5eafe9669ef8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838303816+00:00 stderr F I1213 00:20:09.838257 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:34a48baf-1bee-4921-8bb2-9b7320e76f79 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c0f95133-023f-4bbd-8719-e29d2cfbb32d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838303816+00:00 stderr F I1213 00:20:09.838282 28750 base_network_controller_pods.go:476] [default/openshift-marketplace/redhat-marketplace-nv4pl] creating logical port openshift-marketplace_redhat-marketplace-nv4pl for pod on switch crc 2025-12-13T00:20:09.838315717+00:00 stderr F I1213 00:20:09.837976 28750 ovn.go:134] Ensuring zone local for Pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz in node crc 2025-12-13T00:20:09.838315717+00:00 stderr F I1213 00:20:09.838295 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {cdc4ecc4-c623-407e-ad92-33cb6c2b7b75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838348018+00:00 stderr F I1213 00:20:09.838315 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838359018+00:00 stderr F I1213 00:20:09.838285 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} options:{GoMap:map[iface-id-ver:a702c6d2-4dde-4077-ab8c-0f8df804bf7a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3564ddfd-a311-4df3-b5d0-1e76294b4ab0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838371118+00:00 stderr F I1213 00:20:09.837259 28750 base_network_controller_pods.go:476] [default/openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg] creating logical port openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg for pod on switch crc 2025-12-13T00:20:09.838383259+00:00 stderr F I1213 00:20:09.838368 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838414129+00:00 stderr F I1213 00:20:09.837222 28750 obj_retry.go:358] Adding 
new object: *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-12-13T00:20:09.838414129+00:00 stderr F I1213 00:20:09.838386 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838414129+00:00 stderr F I1213 00:20:09.837214 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-12-13T00:20:09.838426500+00:00 stderr F I1213 00:20:09.838412 28750 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc in node crc 2025-12-13T00:20:09.838436160+00:00 stderr F I1213 00:20:09.838104 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {4001cc39-c23c-46dd-87d7-2a0f579404da}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838436160+00:00 stderr F I1213 00:20:09.838424 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838449180+00:00 stderr F I1213 00:20:09.838432 28750 base_network_controller_pods.go:476] [default/openshift-multus/multus-admission-controller-6c7c885997-4hbbc] creating logical port openshift-multus_multus-admission-controller-6c7c885997-4hbbc for pod on switch crc 
2025-12-13T00:20:09.838483311+00:00 stderr F I1213 00:20:09.838329 28750 base_network_controller_pods.go:476] [default/openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz] creating logical port openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz for pod on switch crc 2025-12-13T00:20:09.838533814+00:00 stderr F I1213 00:20:09.838498 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} options:{GoMap:map[iface-id-ver:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838533814+00:00 stderr F I1213 00:20:09.838404 28750 ovn.go:134] Ensuring zone local for Pod hostpath-provisioner/csi-hostpathplugin-hvm8g in node crc 2025-12-13T00:20:09.838547824+00:00 stderr F I1213 00:20:09.838533 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838557484+00:00 stderr F I1213 00:20:09.838546 28750 base_network_controller_pods.go:476] [default/hostpath-provisioner/csi-hostpathplugin-hvm8g] creating logical port hostpath-provisioner_csi-hostpathplugin-hvm8g for pod on switch crc 2025-12-13T00:20:09.838585255+00:00 stderr F I1213 00:20:09.838512 28750 base_network_controller_pods.go:476] [default/openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw] creating logical port openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw for pod 
on switch crc 2025-12-13T00:20:09.838666657+00:00 stderr F I1213 00:20:09.837169 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-12-13T00:20:09.838666657+00:00 stderr F I1213 00:20:09.838639 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} options:{GoMap:map[iface-id-ver:6d67253e-2acd-4bc1-8185-793587da4f17 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838666657+00:00 stderr F I1213 00:20:09.837160 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-console/console-644bb77b49-5x5xk 2025-12-13T00:20:09.838679888+00:00 stderr F I1213 00:20:09.837166 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/installer-13-crc in node crc 2025-12-13T00:20:09.838679888+00:00 stderr F I1213 00:20:09.838668 28750 ovn.go:134] Ensuring zone local for Pod openshift-console/console-644bb77b49-5x5xk in node crc 2025-12-13T00:20:09.838692328+00:00 stderr F I1213 00:20:09.838686 28750 base_network_controller_pods.go:476] [default/openshift-console/console-644bb77b49-5x5xk] creating logical port openshift-console_console-644bb77b49-5x5xk for pod on switch crc 2025-12-13T00:20:09.838702218+00:00 stderr F I1213 00:20:09.837176 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-12-13T00:20:09.838702218+00:00 stderr F I1213 00:20:09.838675 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{00bb4740-117c-432b-86ba-a37c8a0d8dd2}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838702218+00:00 stderr F I1213 00:20:09.838684 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.15 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {71937aad-6e75-4d89-9a20-9586e5b9d460}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838718109+00:00 stderr F I1213 00:20:09.838697 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-12-13T00:20:09.838718109+00:00 stderr F I1213 00:20:09.838707 28750 ovn.go:134] Ensuring zone local for Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf in node crc 2025-12-13T00:20:09.838728519+00:00 stderr F I1213 00:20:09.838275 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838728519+00:00 stderr F I1213 00:20:09.838717 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:71937aad-6e75-4d89-9a20-9586e5b9d460}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838738919+00:00 stderr F I1213 00:20:09.838725 28750 base_network_controller_pods.go:476] [default/openshift-controller-manager/controller-manager-778975cc4f-x5vcf] creating logical port openshift-controller-manager_controller-manager-778975cc4f-x5vcf for pod on switch 
crc 2025-12-13T00:20:09.838738919+00:00 stderr F I1213 00:20:09.838723 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:00bb4740-117c-432b-86ba-a37c8a0d8dd2}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838783040+00:00 stderr F I1213 00:20:09.838751 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {ef64a2e5-df47-4a70-b021-d05d7c16d1d1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838783040+00:00 stderr F I1213 00:20:09.838730 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} options:{GoMap:map[iface-id-ver:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0f 10.217.0.15]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {be2fa59f-4cec-4742-a4bd-dcd0913d1422}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:be2fa59f-4cec-4742-a4bd-dcd0913d1422}]}}] Timeout: Where:[where column _uuid == {4001cc39-c23c-46dd-87d7-2a0f579404da}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT 
Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.15 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {71937aad-6e75-4d89-9a20-9586e5b9d460}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:71937aad-6e75-4d89-9a20-9586e5b9d460}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838783040+00:00 stderr F I1213 00:20:09.837069 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-12-13T00:20:09.838783040+00:00 stderr F I1213 00:20:09.838754 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} options:{GoMap:map[iface-id-ver:12e733dd-0939-4f1b-9cbb-13897e093787 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52259988-af2b-4ee5-bbfe-801c4ebeb0ae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838799981+00:00 stderr F I1213 00:20:09.838764 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838809901+00:00 stderr F I1213 00:20:09.838744 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} options:{GoMap:map[iface-id-ver:ed024e5d-8fc2-4c22-803d-73f3c9795f19 
requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:07 10.217.0.7]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5ecfd58-e886-4b2c-9939-022e7f14b7a7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5ecfd58-e886-4b2c-9939-022e7f14b7a7}]}}] Timeout: Where:[where column _uuid == {cdc4ecc4-c623-407e-ad92-33cb6c2b7b75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {00bb4740-117c-432b-86ba-a37c8a0d8dd2}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:00bb4740-117c-432b-86ba-a37c8a0d8dd2}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838844132+00:00 stderr F I1213 00:20:09.838807 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838844132+00:00 stderr F I1213 00:20:09.838819 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} options:{GoMap:map[iface-id-ver:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79516e-7a72-4d42-b0ab-87a99aa064f3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838844132+00:00 stderr F I1213 00:20:09.838680 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838862473+00:00 stderr F I1213 00:20:09.838567 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {5b2bd23c-417c-4ef4-b90a-359ab75287c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838862473+00:00 stderr F I1213 00:20:09.838849 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838874893+00:00 stderr F I1213 00:20:09.838856 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} options:{GoMap:map[iface-id-ver:1a3e81c3-c292-4130-9436-f94062c91efd requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {eda38bc9-7da5-4a6b-818c-4e1e8f85426d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.838911714+00:00 stderr F I1213 00:20:09.838878 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {518f8c39-eb57-4799-bfd1-37b6918f5c5b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839004527+00:00 stderr F I1213 00:20:09.838655 28750 ovn.go:134] Ensuring zone local for Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd in node crc 2025-12-13T00:20:09.839022547+00:00 stderr F I1213 00:20:09.839003 28750 base_network_controller_pods.go:476] [default/openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd] creating logical port openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd for pod on switch crc 2025-12-13T00:20:09.839069398+00:00 stderr F I1213 00:20:09.837105 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-12-13T00:20:09.839069398+00:00 stderr F I1213 00:20:09.839036 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.40 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {13384dbf-6468-4401-b6b5-1fc817c99dcd}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839069398+00:00 stderr F I1213 00:20:09.839061 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh in node crc 2025-12-13T00:20:09.839090819+00:00 stderr F I1213 00:20:09.839077 28750 
model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:13384dbf-6468-4401-b6b5-1fc817c99dcd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839102929+00:00 stderr F I1213 00:20:09.839091 28750 base_network_controller_pods.go:476] [default/openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh] creating logical port openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh for pod on switch crc 2025-12-13T00:20:09.839147460+00:00 stderr F I1213 00:20:09.839110 28750 port_cache.go:96] port-cache(openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv): added port &{name:openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv uuid:2a5717ea-0a50-4ebb-b087-90e637274a33 logicalSwitch:crc ips:[0xc00168eb40] mac:[10 88 10 217 0 25] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.25/23] and MAC: 0a:58:0a:d9:00:19 2025-12-13T00:20:09.839147460+00:00 stderr F I1213 00:20:09.838780 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-12-13T00:20:09.839147460+00:00 stderr F I1213 00:20:09.839126 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} options:{GoMap:map[iface-id-ver:5bacb25d-97b6-4491-8fb4-99feae1d802a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b4158c3-d859-42e6-8259-b16ce1cbd284}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839147460+00:00 stderr F I1213 00:20:09.839143 28750 ovn.go:134] Ensuring zone local for Pod 
openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m in node crc 2025-12-13T00:20:09.839185451+00:00 stderr F I1213 00:20:09.839155 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839198942+00:00 stderr F I1213 00:20:09.839182 28750 port_cache.go:96] port-cache(openshift-etcd-operator_etcd-operator-768d5b5d86-722mg): added port &{name:openshift-etcd-operator_etcd-operator-768d5b5d86-722mg uuid:e834ded8-9d5b-46e7-b962-1ee96928bab4 logicalSwitch:crc ips:[0xc00168f4a0] mac:[10 88 10 217 0 8] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.8/23] and MAC: 0a:58:0a:d9:00:08 2025-12-13T00:20:09.839198942+00:00 stderr F I1213 00:20:09.839191 28750 pods.go:220] [openshift-etcd-operator/etcd-operator-768d5b5d86-722mg] addLogicalPort took 1.7766ms, libovsdb time 1.169763ms 2025-12-13T00:20:09.839241163+00:00 stderr F I1213 00:20:09.839196 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg after 0 failed attempt(s) 2025-12-13T00:20:09.839241163+00:00 stderr F I1213 00:20:09.839226 28750 default_network_controller.go:699] Recording success event on pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg 2025-12-13T00:20:09.839241163+00:00 stderr F I1213 00:20:09.838837 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:34a48baf-1bee-4921-8bb2-9b7320e76f79 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{c0f95133-023f-4bbd-8719-e29d2cfbb32d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c0f95133-023f-4bbd-8719-e29d2cfbb32d}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:933179f7-08dd-4d64-bfd4-2b6fc5e9c22c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839261384+00:00 stderr F I1213 00:20:09.839240 28750 port_cache.go:96] port-cache(openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm): added port &{name:openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm uuid:ad3d5728-34ed-421c-a749-1d7a957800a8 logicalSwitch:crc ips:[0xc000127fb0] mac:[10 88 10 217 0 21] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.21/23] and MAC: 0a:58:0a:d9:00:15 2025-12-13T00:20:09.839261384+00:00 stderr F I1213 00:20:09.839251 28750 pods.go:220] [openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm] addLogicalPort took 1.919963ms, libovsdb time 1.114662ms 2025-12-13T00:20:09.839274164+00:00 stderr F I1213 00:20:09.839258 28750 
obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm after 0 failed attempt(s) 2025-12-13T00:20:09.839274164+00:00 stderr F I1213 00:20:09.839263 28750 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm 2025-12-13T00:20:09.839274164+00:00 stderr F I1213 00:20:09.839254 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.46 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {885a79ad-c3d5-4b78-8ba8-1d83439b736b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839327145+00:00 stderr F I1213 00:20:09.839281 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.10 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2b58dd61-e63c-407c-8db8-c7c7b5c23d24}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839327145+00:00 stderr F I1213 00:20:09.839297 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} options:{GoMap:map[iface-id-ver:297ab9b6-2186-4d5b-a952-2bfd59af63c4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9aafbb57-c78d-409c-9ff4-1561d4387b2d}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839343546+00:00 stderr F I1213 00:20:09.838804 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839353456+00:00 stderr F I1213 00:20:09.839334 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2b58dd61-e63c-407c-8db8-c7c7b5c23d24}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839363256+00:00 stderr F I1213 00:20:09.839346 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839400107+00:00 stderr F I1213 00:20:09.839369 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {7fc92973-4bc3-465a-b279-3f843add06f8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839400107+00:00 stderr F I1213 00:20:09.839291 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:885a79ad-c3d5-4b78-8ba8-1d83439b736b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839429958+00:00 stderr F I1213 00:20:09.839398 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate 
Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839429958+00:00 stderr F I1213 00:20:09.839359 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} options:{GoMap:map[iface-id-ver:6d67253e-2acd-4bc1-8185-793587da4f17 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0a 10.217.0.10]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56}]}}] Timeout: Where:[where column _uuid == {518f8c39-eb57-4799-bfd1-37b6918f5c5b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.10 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2b58dd61-e63c-407c-8db8-c7c7b5c23d24}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2b58dd61-e63c-407c-8db8-c7c7b5c23d24}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.839444768+00:00 stderr F I1213 00:20:09.839422 28750 port_cache.go:96] port-cache(openshift-console-operator_console-operator-5dbbc74dc9-cp5cd): added port &{name:openshift-console-operator_console-operator-5dbbc74dc9-cp5cd uuid:6af06372-81fc-4451-8678-6253ce70f317 logicalSwitch:crc ips:[0xc001371920] mac:[10 88 10 217 0 62] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.62/23] and MAC: 0a:58:0a:d9:00:3e 2025-12-13T00:20:09.839444768+00:00 stderr F I1213 00:20:09.839398 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} options:{GoMap:map[iface-id-ver:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2e 10.217.0.46]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4}]}}] Timeout: Where:[where column _uuid == {5b2bd23c-417c-4ef4-b90a-359ab75287c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.46 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {885a79ad-c3d5-4b78-8ba8-1d83439b736b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:885a79ad-c3d5-4b78-8ba8-1d83439b736b}]}}] Timeout: 
Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839483720+00:00 stderr F I1213 00:20:09.839167 28750 base_network_controller_pods.go:476] [default/openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m] creating logical port openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m for pod on switch crc 2025-12-13T00:20:09.839494650+00:00 stderr F I1213 00:20:09.837048 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/community-operators-s2hxn 2025-12-13T00:20:09.839504470+00:00 stderr F I1213 00:20:09.839496 28750 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/community-operators-s2hxn in node crc 2025-12-13T00:20:09.839540131+00:00 stderr F I1213 00:20:09.839438 28750 pods.go:220] [openshift-console-operator/console-operator-5dbbc74dc9-cp5cd] addLogicalPort took 2.622392ms, libovsdb time 1.3983ms 2025-12-13T00:20:09.839540131+00:00 stderr F I1213 00:20:09.839527 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd after 0 failed attempt(s) 2025-12-13T00:20:09.839540131+00:00 stderr F I1213 00:20:09.839534 28750 default_network_controller.go:699] Recording success event on pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd 2025-12-13T00:20:09.839562762+00:00 stderr F I1213 00:20:09.837070 28750 ovn.go:134] Ensuring zone local for Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 in node crc 2025-12-13T00:20:09.839562762+00:00 stderr F I1213 00:20:09.839546 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 after 0 failed attempt(s) 2025-12-13T00:20:09.839562762+00:00 stderr F I1213 00:20:09.839549 28750 default_network_controller.go:699] Recording success event on pod openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9 
2025-12-13T00:20:09.839562762+00:00 stderr F I1213 00:20:09.837078 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-12-13T00:20:09.839584382+00:00 stderr F I1213 00:20:09.839560 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc 2025-12-13T00:20:09.839584382+00:00 stderr F I1213 00:20:09.839566 28750 ovn.go:134] Ensuring zone local for Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc in node crc 2025-12-13T00:20:09.839584382+00:00 stderr F I1213 00:20:09.836979 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-12-13T00:20:09.839597463+00:00 stderr F I1213 00:20:09.839584 28750 base_network_controller_pods.go:476] [default/openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc] creating logical port openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc for pod on switch crc 2025-12-13T00:20:09.839597463+00:00 stderr F I1213 00:20:09.839586 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr 2025-12-13T00:20:09.839597463+00:00 stderr F I1213 00:20:09.839593 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr in node crc 2025-12-13T00:20:09.839641884+00:00 stderr F I1213 00:20:09.839608 28750 base_network_controller_pods.go:476] [default/openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr] creating logical port openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr for pod on switch crc 2025-12-13T00:20:09.839641884+00:00 stderr F I1213 00:20:09.839628 28750 
port_cache.go:96] port-cache(openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb): added port &{name:openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb uuid:be2fa59f-4cec-4742-a4bd-dcd0913d1422 logicalSwitch:crc ips:[0xc001637d40] mac:[10 88 10 217 0 15] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.15/23] and MAC: 0a:58:0a:d9:00:0f 2025-12-13T00:20:09.839653634+00:00 stderr F I1213 00:20:09.839640 28750 pods.go:220] [openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb] addLogicalPort took 2.474889ms, libovsdb time 892.724µs 2025-12-13T00:20:09.839653634+00:00 stderr F I1213 00:20:09.839645 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb after 0 failed attempt(s) 2025-12-13T00:20:09.839653634+00:00 stderr F I1213 00:20:09.839650 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb 2025-12-13T00:20:09.839664494+00:00 stderr F I1213 00:20:09.839645 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.49 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65079815-d96f-43ce-8e23-6a61dc263b7b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839664494+00:00 stderr F I1213 00:20:09.839092 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} options:{GoMap:map[iface-id-ver:e4a7de23-6134-4044-902a-0900dc04a501 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:28 10.217.0.40]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{9409cb25-8c46-46db-98ab-5eafe9669ef8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9409cb25-8c46-46db-98ab-5eafe9669ef8}]}}] Timeout: Where:[where column _uuid == {ef64a2e5-df47-4a70-b021-d05d7c16d1d1}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.40 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {13384dbf-6468-4401-b6b5-1fc817c99dcd}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:13384dbf-6468-4401-b6b5-1fc817c99dcd}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839702195+00:00 stderr F I1213 00:20:09.839675 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:65079815-d96f-43ce-8e23-6a61dc263b7b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839748267+00:00 stderr F I1213 00:20:09.839692 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:31 10.217.0.49]} options:{GoMap:map[iface-id-ver:12e733dd-0939-4f1b-9cbb-13897e093787 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:31 
10.217.0.49]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52259988-af2b-4ee5-bbfe-801c4ebeb0ae}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:52259988-af2b-4ee5-bbfe-801c4ebeb0ae}]}}] Timeout: Where:[where column _uuid == {7fc92973-4bc3-465a-b279-3f843add06f8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.49 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65079815-d96f-43ce-8e23-6a61dc263b7b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:65079815-d96f-43ce-8e23-6a61dc263b7b}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839765587+00:00 stderr F I1213 00:20:09.837101 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf 2025-12-13T00:20:09.839765587+00:00 stderr F I1213 00:20:09.839762 28750 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf in node crc 2025-12-13T00:20:09.839780678+00:00 stderr F I1213 00:20:09.839748 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.63 options:{GoMap:map[stateless:false]} type:snat] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c92f035f-d338-40f2-abeb-ccb48ca242b7}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839780678+00:00 stderr F I1213 00:20:09.839762 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} options:{GoMap:map[iface-id-ver:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d98188d-6d49-48e7-8956-57a5c46efe26}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839790898+00:00 stderr F I1213 00:20:09.839780 28750 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf] creating logical port openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf for pod on switch crc 2025-12-13T00:20:09.839821989+00:00 stderr F I1213 00:20:09.839794 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839821989+00:00 stderr F I1213 00:20:09.839714 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} options:{GoMap:map[iface-id-ver:530553aa-0a1d-423e-8a22-f5eb4bdbb883 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d2f291e9-b4fe-47a7-a644-298254d226c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.839821989+00:00 stderr F I1213 00:20:09.839806 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c92f035f-d338-40f2-abeb-ccb48ca242b7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839855360+00:00 stderr F I1213 00:20:09.837120 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc 2025-12-13T00:20:09.839855360+00:00 stderr F I1213 00:20:09.837128 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-dns/dns-default-gbw49 2025-12-13T00:20:09.839855360+00:00 stderr F I1213 00:20:09.839849 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc 2025-12-13T00:20:09.839867220+00:00 stderr F I1213 00:20:09.837136 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-image-registry/image-registry-75b7bb6564-rnjvj 2025-12-13T00:20:09.839867220+00:00 stderr F I1213 00:20:09.839858 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc 2025-12-13T00:20:09.839867220+00:00 stderr F I1213 00:20:09.839861 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-image-registry/image-registry-75b7bb6564-rnjvj 2025-12-13T00:20:09.839881610+00:00 stderr F I1213 00:20:09.839864 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s) 2025-12-13T00:20:09.839881610+00:00 stderr F I1213 00:20:09.839867 28750 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/image-registry-75b7bb6564-rnjvj in node crc 2025-12-13T00:20:09.839881610+00:00 stderr F I1213 00:20:09.839864 28750 port_cache.go:96] 
port-cache(openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7): added port &{name:openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7 uuid:f5ecfd58-e886-4b2c-9939-022e7f14b7a7 logicalSwitch:crc ips:[0xc001358630] mac:[10 88 10 217 0 7] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.7/23] and MAC: 0a:58:0a:d9:00:07 2025-12-13T00:20:09.839881610+00:00 stderr F I1213 00:20:09.839850 28750 ovn.go:134] Ensuring zone local for Pod openshift-dns/dns-default-gbw49 in node crc 2025-12-13T00:20:09.839881610+00:00 stderr F I1213 00:20:09.839826 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} options:{GoMap:map[iface-id-ver:297ab9b6-2186-4d5b-a952-2bfd59af63c4 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3f 10.217.0.63]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9aafbb57-c78d-409c-9ff4-1561d4387b2d}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9aafbb57-c78d-409c-9ff4-1561d4387b2d}]}}] Timeout: Where:[where column _uuid == {172587e5-b2f9-4278-8c11-8f4c23f280a6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.63 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c92f035f-d338-40f2-abeb-ccb48ca242b7}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c92f035f-d338-40f2-abeb-ccb48ca242b7}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839881610+00:00 stderr F I1213 00:20:09.839875 28750 pods.go:220] [openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7] addLogicalPort took 2.434297ms, libovsdb time 1.116991ms 2025-12-13T00:20:09.839893921+00:00 stderr F I1213 00:20:09.839881 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 after 0 failed attempt(s) 2025-12-13T00:20:09.839893921+00:00 stderr F I1213 00:20:09.839883 28750 base_network_controller_pods.go:476] [default/openshift-image-registry/image-registry-75b7bb6564-rnjvj] creating logical port openshift-image-registry_image-registry-75b7bb6564-rnjvj for pod on switch crc 2025-12-13T00:20:09.839893921+00:00 stderr F I1213 00:20:09.839886 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7 2025-12-13T00:20:09.839893921+00:00 stderr F I1213 00:20:09.839870 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc 2025-12-13T00:20:09.839905301+00:00 stderr F I1213 00:20:09.837000 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-12-13T00:20:09.839905301+00:00 stderr F I1213 00:20:09.837156 28750 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc 2025-12-13T00:20:09.839905301+00:00 stderr F I1213 00:20:09.839902 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s) 2025-12-13T00:20:09.839918971+00:00 stderr F I1213 00:20:09.839903 28750 obj_retry.go:358] Adding new object: *v1.Pod 
openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs 2025-12-13T00:20:09.839918971+00:00 stderr F I1213 00:20:09.839906 28750 default_network_controller.go:699] Recording success event on pod openshift-etcd/etcd-crc 2025-12-13T00:20:09.839918971+00:00 stderr F I1213 00:20:09.837181 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-12-13T00:20:09.839954192+00:00 stderr F I1213 00:20:09.839917 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-12-13T00:20:09.839954192+00:00 stderr F I1213 00:20:09.839910 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7e5e1c6-53f2-4022-a945-14d792e680f9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839954192+00:00 stderr F I1213 00:20:09.839924 28750 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv in node crc 2025-12-13T00:20:09.839954192+00:00 stderr F I1213 00:20:09.839913 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} options:{GoMap:map[iface-id-ver:8a5ae51d-d173-4531-8975-f164c975ce1f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.839954192+00:00 stderr F I1213 00:20:09.837155 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-image-registry/node-ca-l92hr 2025-12-13T00:20:09.839972073+00:00 stderr F I1213 
00:20:09.839954 28750 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-l92hr in node crc 2025-12-13T00:20:09.839972073+00:00 stderr F I1213 00:20:09.839956 28750 base_network_controller_pods.go:476] [default/openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv] creating logical port openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv for pod on switch crc 2025-12-13T00:20:09.839972073+00:00 stderr F I1213 00:20:09.839959 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-image-registry/node-ca-l92hr after 0 failed attempt(s) 2025-12-13T00:20:09.839972073+00:00 stderr F I1213 00:20:09.839964 28750 default_network_controller.go:699] Recording success event on pod openshift-image-registry/node-ca-l92hr 2025-12-13T00:20:09.839972073+00:00 stderr F I1213 00:20:09.837196 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-12-13T00:20:09.840007484+00:00 stderr F I1213 00:20:09.839982 28750 ovn.go:134] Ensuring zone local for Pod openshift-ingress/router-default-5c9bf7bc58-6jctv in node crc 2025-12-13T00:20:09.840007484+00:00 stderr F I1213 00:20:09.837197 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 2025-12-13T00:20:09.840007484+00:00 stderr F I1213 00:20:09.839993 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-ingress/router-default-5c9bf7bc58-6jctv after 0 failed attempt(s) 2025-12-13T00:20:09.840007484+00:00 stderr F I1213 00:20:09.839999 28750 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 in node crc 2025-12-13T00:20:09.840007484+00:00 stderr F I1213 00:20:09.837217 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-12-13T00:20:09.840027694+00:00 stderr F I1213 00:20:09.840010 28750 obj_retry.go:358] Adding new 
object: *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 2025-12-13T00:20:09.840027694+00:00 stderr F I1213 00:20:09.840017 28750 ovn.go:134] Ensuring zone local for Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 in node crc 2025-12-13T00:20:09.840027694+00:00 stderr F I1213 00:20:09.840018 28750 base_network_controller_pods.go:476] [default/openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7] creating logical port openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7 for pod on switch crc 2025-12-13T00:20:09.840038615+00:00 stderr F I1213 00:20:09.840024 28750 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-target-v54bt): added port &{name:openshift-network-diagnostics_network-check-target-v54bt uuid:c0f95133-023f-4bbd-8719-e29d2cfbb32d logicalSwitch:crc ips:[0xc000c97770] mac:[10 88 10 217 0 4] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.4/23] and MAC: 0a:58:0a:d9:00:04 2025-12-13T00:20:09.840048445+00:00 stderr F I1213 00:20:09.840024 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} options:{GoMap:map[iface-id-ver:d73c4e63-30ef-4915-925d-f44201c612ec requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4981169d-8e04-4bc4-9fc4-c3fa53ed5de1}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840048445+00:00 stderr F I1213 00:20:09.837150 28750 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b in node crc 2025-12-13T00:20:09.840058705+00:00 stderr F I1213 00:20:09.840047 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b after 0 failed attempt(s) 
2025-12-13T00:20:09.840058705+00:00 stderr F I1213 00:20:09.840052 28750 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b 2025-12-13T00:20:09.840068685+00:00 stderr F I1213 00:20:09.837240 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-12-13T00:20:09.840068685+00:00 stderr F I1213 00:20:09.840053 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:4981169d-8e04-4bc4-9fc4-c3fa53ed5de1}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840068685+00:00 stderr F I1213 00:20:09.840063 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 2025-12-13T00:20:09.840080016+00:00 stderr F I1213 00:20:09.840069 28750 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 in node crc 2025-12-13T00:20:09.840080016+00:00 stderr F I1213 00:20:09.839999 28750 default_network_controller.go:699] Recording success event on pod openshift-ingress/router-default-5c9bf7bc58-6jctv 2025-12-13T00:20:09.840090136+00:00 stderr F I1213 00:20:09.840070 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7e5e1c6-53f2-4022-a945-14d792e680f9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840090136+00:00 stderr F I1213 00:20:09.837274 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc 
2025-12-13T00:20:09.840103376+00:00 stderr F I1213 00:20:09.840089 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:4981169d-8e04-4bc4-9fc4-c3fa53ed5de1}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840114107+00:00 stderr F I1213 00:20:09.840035 28750 pods.go:220] [openshift-network-diagnostics/network-check-target-v54bt] addLogicalPort took 2.043356ms, libovsdb time 1.182882ms 2025-12-13T00:20:09.840114107+00:00 stderr F I1213 00:20:09.840092 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} options:{GoMap:map[iface-id-ver:b54e8941-2fc4-432a-9e51-39684df9089e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {82630d91-1647-4c0c-aa84-8f820bcf919e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840114107+00:00 stderr F I1213 00:20:09.840109 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-diagnostics/network-check-target-v54bt after 0 failed attempt(s) 2025-12-13T00:20:09.840125027+00:00 stderr F I1213 00:20:09.840116 28750 default_network_controller.go:699] Recording success event on pod openshift-network-diagnostics/network-check-target-v54bt 2025-12-13T00:20:09.840125027+00:00 stderr F I1213 00:20:09.837283 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-12-13T00:20:09.840138057+00:00 stderr F I1213 00:20:09.840129 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-12-13T00:20:09.840138057+00:00 stderr F I1213 
00:20:09.840125 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840148658+00:00 stderr F I1213 00:20:09.840137 28750 ovn.go:134] Ensuring zone local for Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 in node crc 2025-12-13T00:20:09.840148658+00:00 stderr F I1213 00:20:09.840087 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} options:{GoMap:map[iface-id-ver:a702c6d2-4dde-4077-ab8c-0f8df804bf7a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:03 10.217.0.3]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3564ddfd-a311-4df3-b5d0-1e76294b4ab0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3564ddfd-a311-4df3-b5d0-1e76294b4ab0}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f7e5e1c6-53f2-4022-a945-14d792e680f9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] 
Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f7e5e1c6-53f2-4022-a945-14d792e680f9}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840148658+00:00 stderr F I1213 00:20:09.840143 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 after 0 failed attempt(s) 2025-12-13T00:20:09.840162368+00:00 stderr F I1213 00:20:09.840149 28750 default_network_controller.go:699] Recording success event on pod openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46 2025-12-13T00:20:09.840162368+00:00 stderr F I1213 00:20:09.836771 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-12-13T00:20:09.840172968+00:00 stderr F I1213 00:20:09.840160 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-12-13T00:20:09.840172968+00:00 stderr F I1213 00:20:09.840159 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840172968+00:00 stderr F I1213 00:20:09.840169 28750 ovn.go:134] Ensuring zone local for Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz in node crc 2025-12-13T00:20:09.840183829+00:00 stderr F I1213 00:20:09.840095 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s) 2025-12-13T00:20:09.840183829+00:00 stderr F I1213 00:20:09.840178 28750 default_network_controller.go:699] Recording success event on pod 
openshift-kube-scheduler/openshift-kube-scheduler-crc 2025-12-13T00:20:09.840193959+00:00 stderr F I1213 00:20:09.840168 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} options:{GoMap:map[iface-id-ver:d0f40333-c860-4c04-8058-a0bf572dcf12 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3644fddd-ceae-4a64-8b00-dadf73515945}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840193959+00:00 stderr F I1213 00:20:09.837303 28750 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh] creating logical port openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh for pod on switch crc 2025-12-13T00:20:09.840193959+00:00 stderr F I1213 00:20:09.840175 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} options:{GoMap:map[iface-id-ver:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840207899+00:00 stderr F I1213 00:20:09.840193 28750 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz] creating logical port openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz for pod on switch crc 2025-12-13T00:20:09.840207899+00:00 stderr F I1213 00:20:09.837294 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-kghgr 2025-12-13T00:20:09.840218090+00:00 stderr F I1213 
00:20:09.840199 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840218090+00:00 stderr F I1213 00:20:09.840209 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-kghgr 2025-12-13T00:20:09.840228030+00:00 stderr F I1213 00:20:09.840218 28750 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/marketplace-operator-8b455464d-kghgr in node crc 2025-12-13T00:20:09.840260491+00:00 stderr F I1213 00:20:09.840231 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840260491+00:00 stderr F I1213 00:20:09.840240 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840303622+00:00 stderr F I1213 00:20:09.837308 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-zpnhg in node crc 2025-12-13T00:20:09.840303622+00:00 stderr F I1213 00:20:09.840287 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-zpnhg after 0 failed attempt(s) 2025-12-13T00:20:09.840303622+00:00 
stderr F I1213 00:20:09.840293 28750 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-zpnhg 2025-12-13T00:20:09.840303622+00:00 stderr F I1213 00:20:09.837560 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} options:{GoMap:map[iface-id-ver:10603adc-d495-423c-9459-4caa405960bb requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b212e2c2-3d4e-4898-aede-c926b74813f0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840316452+00:00 stderr F I1213 00:20:09.840282 28750 port_cache.go:96] port-cache(openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz): added port &{name:openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz uuid:ab5d4cae-0fa2-40ab-86de-48a43e5f8d56 logicalSwitch:crc ips:[0xc00170d9b0] mac:[10 88 10 217 0 10] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.10/23] and MAC: 0a:58:0a:d9:00:0a 2025-12-13T00:20:09.840346143+00:00 stderr F I1213 00:20:09.840316 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840346143+00:00 stderr F I1213 00:20:09.837955 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1bf49fc6-86bc-42b9-9630-d7184a92b0eb}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.840360733+00:00 stderr F I1213 00:20:09.840330 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} options:{GoMap:map[iface-id-ver:c085412c-b875-46c9-ae3e-e6b0d8067091 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840370694+00:00 stderr F I1213 00:20:09.840356 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {46538203-fb12-48b9-9840-a39e58a289ec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840402225+00:00 stderr F I1213 00:20:09.840242 28750 base_network_controller_pods.go:476] [default/openshift-marketplace/marketplace-operator-8b455464d-kghgr] creating logical port openshift-marketplace_marketplace-operator-8b455464d-kghgr for pod on switch crc 2025-12-13T00:20:09.840402225+00:00 stderr F I1213 00:20:09.840376 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840402225+00:00 stderr F I1213 00:20:09.840387 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{31d380f3-a977-4726-8270-75a72c5efb5e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840436545+00:00 stderr F I1213 00:20:09.840392 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} options:{GoMap:map[iface-id-ver:bd556935-a077-45df-ba3f-d42c39326ccd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69155615-9d93-4b72-bddd-739a6e731251}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840436545+00:00 stderr F I1213 00:20:09.840416 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:31d380f3-a977-4726-8270-75a72c5efb5e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840447516+00:00 stderr F I1213 00:20:09.840431 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840481097+00:00 stderr F I1213 00:20:09.840448 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840481097+00:00 stderr F I1213 00:20:09.840432 28750 transact.go:42] Configuring 
OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} options:{GoMap:map[iface-id-ver:d73c4e63-30ef-4915-925d-f44201c612ec requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:29 10.217.0.41]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {4981169d-8e04-4bc4-9fc4-c3fa53ed5de1}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:4981169d-8e04-4bc4-9fc4-c3fa53ed5de1}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:4981169d-8e04-4bc4-9fc4-c3fa53ed5de1}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {31d380f3-a977-4726-8270-75a72c5efb5e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:31d380f3-a977-4726-8270-75a72c5efb5e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840538958+00:00 stderr F I1213 00:20:09.840505 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: 
UUIDName:}] 2025-12-13T00:20:09.840587270+00:00 stderr F I1213 00:20:09.840544 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.22 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9a5ec36-71c6-4938-8247-5a459bf24e1f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840624901+00:00 stderr F I1213 00:20:09.840581 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} options:{GoMap:map[iface-id-ver:9025b167-d0fc-419f-92c1-add28909ab7c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {116f6820-9a92-4bbc-bb34-265c58c5b649}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840624901+00:00 stderr F I1213 00:20:09.840605 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d9a5ec36-71c6-4938-8247-5a459bf24e1f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840670562+00:00 stderr F I1213 00:20:09.840637 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:116f6820-9a92-4bbc-bb34-265c58c5b649}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840670562+00:00 stderr F I1213 00:20:09.840320 28750 pods.go:220] [openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz] addLogicalPort took 
1.994265ms, libovsdb time 909.015µs 2025-12-13T00:20:09.840670562+00:00 stderr F I1213 00:20:09.840628 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} options:{GoMap:map[iface-id-ver:b54e8941-2fc4-432a-9e51-39684df9089e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:16 10.217.0.22]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {82630d91-1647-4c0c-aa84-8f820bcf919e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:82630d91-1647-4c0c-aa84-8f820bcf919e}]}}] Timeout: Where:[where column _uuid == {905e7bf7-de29-462a-af48-5a2746956eea}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.22 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9a5ec36-71c6-4938-8247-5a459bf24e1f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d9a5ec36-71c6-4938-8247-5a459bf24e1f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840683032+00:00 stderr F I1213 00:20:09.837927 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840718473+00:00 stderr F I1213 00:20:09.840690 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {34c062e9-0e41-479f-b36f-464b48cc97e0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840718473+00:00 stderr F I1213 00:20:09.840702 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {65d2d7fe-9437-4599-bc37-da4da4d5905c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840718473+00:00 stderr F I1213 00:20:09.840698 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:116f6820-9a92-4bbc-bb34-265c58c5b649}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840736354+00:00 stderr F I1213 00:20:09.838175 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:95146e00-82d8-4bf3-8c5b-dac75d43239c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840809346+00:00 stderr F I1213 00:20:09.840740 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} options:{GoMap:map[iface-id-ver:59748b9b-c309-4712-aa85-bb38d71c4915 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3d 10.217.0.61]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6056bee0-572a-4de7-bb24-40ca6a66be30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6056bee0-572a-4de7-bb24-40ca6a66be30}]}}] Timeout: Where:[where column _uuid == {d2123a33-f6a4-4e11-a589-35282f79b593}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.61 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {95146e00-82d8-4bf3-8c5b-dac75d43239c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:95146e00-82d8-4bf3-8c5b-dac75d43239c}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840809346+00:00 stderr F I1213 00:20:09.840553 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.64 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2e91916d-9350-416e-9311-ee7b2033c3ec}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840845647+00:00 stderr F 
I1213 00:20:09.840807 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.14 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8946487-1f89-4d39-965a-9596d567c892}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840845647+00:00 stderr F I1213 00:20:09.840820 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2e91916d-9350-416e-9311-ee7b2033c3ec}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840889558+00:00 stderr F I1213 00:20:09.840840 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.35 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {910a58ce-fb7f-4a21-8db9-12b6edfc6eef}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840889558+00:00 stderr F I1213 00:20:09.840844 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} options:{GoMap:map[iface-id-ver:d0f40333-c860-4c04-8058-a0bf572dcf12 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:40 10.217.0.64]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3644fddd-ceae-4a64-8b00-dadf73515945}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: 
Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3644fddd-ceae-4a64-8b00-dadf73515945}]}}] Timeout: Where:[where column _uuid == {f5d13ab4-cf69-4129-b553-a33fa44b8f30}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.64 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2e91916d-9350-416e-9311-ee7b2033c3ec}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2e91916d-9350-416e-9311-ee7b2033c3ec}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840966310+00:00 stderr F I1213 00:20:09.840896 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.43 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {836a6a8f-ef84-4ecd-996a-6579e908ce2a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840966310+00:00 stderr F I1213 00:20:09.840639 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.18 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840966310+00:00 stderr F I1213 00:20:09.840914 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:910a58ce-fb7f-4a21-8db9-12b6edfc6eef}]}}] 
Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840986030+00:00 stderr F I1213 00:20:09.840965 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.72 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff201db7-fe7f-4311-91e5-346c10cbf942}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.840995931+00:00 stderr F I1213 00:20:09.840971 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:836a6a8f-ef84-4ecd-996a-6579e908ce2a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841012091+00:00 stderr F I1213 00:20:09.838687 28750 base_network_controller_pods.go:476] [default/openshift-kube-apiserver/installer-13-crc] creating logical port openshift-kube-apiserver_installer-13-crc for pod on switch crc 2025-12-13T00:20:09.841022661+00:00 stderr F I1213 00:20:09.840994 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841032252+00:00 stderr F I1213 00:20:09.841002 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:ff201db7-fe7f-4311-91e5-346c10cbf942}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: 
Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841075623+00:00 stderr F I1213 00:20:09.840993 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} options:{GoMap:map[iface-id-ver:bd556935-a077-45df-ba3f-d42c39326ccd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2b 10.217.0.43]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69155615-9d93-4b72-bddd-739a6e731251}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:69155615-9d93-4b72-bddd-739a6e731251}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.43 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {836a6a8f-ef84-4ecd-996a-6579e908ce2a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:836a6a8f-ef84-4ecd-996a-6579e908ce2a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841075623+00:00 stderr F I1213 00:20:09.841027 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} 
options:{GoMap:map[iface-id-ver:01feb2e0-a0f4-4573-8335-34e364e0ef40 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:48 10.217.0.72]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3e86699a-fa52-4a81-9386-60d37f3fa10c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:3e86699a-fa52-4a81-9386-60d37f3fa10c}]}}] Timeout: Where:[where column _uuid == {65d2d7fe-9437-4599-bc37-da4da4d5905c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.72 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ff201db7-fe7f-4311-91e5-346c10cbf942}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:ff201db7-fe7f-4311-91e5-346c10cbf942}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841112024+00:00 stderr F I1213 00:20:09.841071 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.30 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3c1b071a-c76b-475e-8109-e930ef901298}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841122884+00:00 stderr F I1213 00:20:09.841044 28750 transact.go:42] Configuring 
OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} options:{GoMap:map[iface-id-ver:10603adc-d495-423c-9459-4caa405960bb requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:12 10.217.0.18]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b212e2c2-3d4e-4898-aede-c926b74813f0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:b212e2c2-3d4e-4898-aede-c926b74813f0}]}}] Timeout: Where:[where column _uuid == {46538203-fb12-48b9-9840-a39e58a289ec}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.18 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2391a5ae-b1bd-4dcc-90eb-e3a1195f5bd3}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841132924+00:00 stderr F I1213 00:20:09.838886 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: 
UUIDName:}] 2025-12-13T00:20:09.841146455+00:00 stderr F I1213 00:20:09.841124 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:3c1b071a-c76b-475e-8109-e930ef901298}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841158495+00:00 stderr F I1213 00:20:09.840855 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d8946487-1f89-4d39-965a-9596d567c892}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841227527+00:00 stderr F I1213 00:20:09.841150 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} options:{GoMap:map[iface-id-ver:9025b167-d0fc-419f-92c1-add28909ab7c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1e 10.217.0.30]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {116f6820-9a92-4bbc-bb34-265c58c5b649}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:116f6820-9a92-4bbc-bb34-265c58c5b649}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:116f6820-9a92-4bbc-bb34-265c58c5b649}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT 
Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.30 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3c1b071a-c76b-475e-8109-e930ef901298}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:3c1b071a-c76b-475e-8109-e930ef901298}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841227527+00:00 stderr F I1213 00:20:09.838887 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841227527+00:00 stderr F I1213 00:20:09.841163 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} options:{GoMap:map[iface-id-ver:c085412c-b875-46c9-ae3e-e6b0d8067091 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0e 10.217.0.14]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749}]}}] Timeout: Where:[where column _uuid == 
{05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.14 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8946487-1f89-4d39-965a-9596d567c892}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d8946487-1f89-4d39-965a-9596d567c892}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841286369+00:00 stderr F I1213 00:20:09.837012 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-marketplace/certified-operators-lcrg8 2025-12-13T00:20:09.841286369+00:00 stderr F I1213 00:20:09.841277 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-marketplace/certified-operators-lcrg8 2025-12-13T00:20:09.841298020+00:00 stderr F I1213 00:20:09.841285 28750 ovn.go:134] Ensuring zone local for Pod openshift-marketplace/certified-operators-lcrg8 in node crc 2025-12-13T00:20:09.841328431+00:00 stderr F I1213 00:20:09.841305 28750 base_network_controller_pods.go:476] [default/openshift-marketplace/certified-operators-lcrg8] creating logical port openshift-marketplace_certified-operators-lcrg8 for pod on switch crc 2025-12-13T00:20:09.841328431+00:00 stderr F I1213 00:20:09.841295 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d9150f6f-5be3-40d7-848b-deb1197b35b9}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841339601+00:00 stderr F I1213 00:20:09.840664 28750 obj_retry.go:379] Retry successful for *v1.Pod 
openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz after 0 failed attempt(s) 2025-12-13T00:20:09.841339601+00:00 stderr F I1213 00:20:09.841334 28750 default_network_controller.go:699] Recording success event on pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz 2025-12-13T00:20:09.841349781+00:00 stderr F I1213 00:20:09.839204 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {038121f6-33a2-46c3-a820-7e67ff387e75}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841389402+00:00 stderr F I1213 00:20:09.841198 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} options:{GoMap:map[iface-id-ver:45a8038e-e7f2-4d93-a6f5-7753aa54e63f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f8e99409-b28a-4d27-a8e5-267ea6a801cf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841419893+00:00 stderr F I1213 00:20:09.839132 28750 pods.go:220] [openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv] addLogicalPort took 2.283823ms, libovsdb time 1.038729ms 2025-12-13T00:20:09.841447284+00:00 stderr F I1213 00:20:09.837060 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc 2025-12-13T00:20:09.841475025+00:00 stderr F I1213 00:20:09.837086 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb in node crc 2025-12-13T00:20:09.841502785+00:00 stderr F I1213 00:20:09.839828 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] 
Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841542387+00:00 stderr F I1213 00:20:09.841516 28750 base_network_controller_pods.go:476] [default/openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb] creating logical port openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb for pod on switch crc 2025-12-13T00:20:09.841542387+00:00 stderr F I1213 00:20:09.839513 28750 base_network_controller_pods.go:476] [default/openshift-marketplace/community-operators-s2hxn] creating logical port openshift-marketplace_community-operators-s2hxn for pod on switch crc 2025-12-13T00:20:09.841613799+00:00 stderr F I1213 00:20:09.841578 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.39 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e2370e4-c8a2-48a7-a016-426be9bb2419}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841625519+00:00 stderr F I1213 00:20:09.837142 28750 base_network_controller_pods.go:476] [default/openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7] creating logical port openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 for pod on switch crc 2025-12-13T00:20:09.841625519+00:00 stderr F I1213 00:20:09.841614 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6e2370e4-c8a2-48a7-a016-426be9bb2419}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841672240+00:00 stderr 
F I1213 00:20:09.837134 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-12-13T00:20:09.841672240+00:00 stderr F I1213 00:20:09.841651 28750 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc 2025-12-13T00:20:09.841672240+00:00 stderr F I1213 00:20:09.841659 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s) 2025-12-13T00:20:09.841694661+00:00 stderr F I1213 00:20:09.841627 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} options:{GoMap:map[iface-id-ver:5bacb25d-97b6-4491-8fb4-99feae1d802a requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:27 10.217.0.39]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b4158c3-d859-42e6-8259-b16ce1cbd284}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8b4158c3-d859-42e6-8259-b16ce1cbd284}]}}] Timeout: Where:[where column _uuid == {038121f6-33a2-46c3-a820-7e67ff387e75}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.39 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e2370e4-c8a2-48a7-a016-426be9bb2419}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:6e2370e4-c8a2-48a7-a016-426be9bb2419}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841694661+00:00 stderr F I1213 00:20:09.841673 28750 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc 2025-12-13T00:20:09.841694661+00:00 stderr F I1213 00:20:09.839894 28750 base_network_controller_pods.go:476] [default/openshift-dns/dns-default-gbw49] creating logical port openshift-dns_dns-default-gbw49 for pod on switch crc 2025-12-13T00:20:09.841694661+00:00 stderr F I1213 00:20:09.841673 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} options:{GoMap:map[iface-id-ver:4f8aa612-9da0-4a2b-911e-6a1764a4e74e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {805e2f41-6cb8-4ccf-9939-37cfb4fa5509}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841716361+00:00 stderr F I1213 00:20:09.837088 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn 2025-12-13T00:20:09.841716361+00:00 stderr F I1213 00:20:09.841706 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841729562+00:00 stderr F I1213 00:20:09.841716 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn 2025-12-13T00:20:09.841729562+00:00 
stderr F I1213 00:20:09.841725 28750 ovn.go:134] Ensuring zone local for Pod openshift-ingress-canary/ingress-canary-2vhcn in node crc 2025-12-13T00:20:09.841771523+00:00 stderr F I1213 00:20:09.841743 28750 base_network_controller_pods.go:476] [default/openshift-ingress-canary/ingress-canary-2vhcn] creating logical port openshift-ingress-canary_ingress-canary-2vhcn for pod on switch crc 2025-12-13T00:20:09.841771523+00:00 stderr F I1213 00:20:09.841740 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841771523+00:00 stderr F I1213 00:20:09.841731 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} options:{GoMap:map[iface-id-ver:220875e2-503f-46b5-aaa6-bb8fc45743cc requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9d09e10-aa79-4a77-ba42-e77ca54f8045}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841771523+00:00 stderr F I1213 00:20:09.841733 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.87 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01c8cadf-7d39-4ad0-9627-60cf1df4b48e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841815254+00:00 stderr F I1213 00:20:09.837168 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-12-13T00:20:09.841815254+00:00 stderr F 
I1213 00:20:09.841808 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t 2025-12-13T00:20:09.841827524+00:00 stderr F I1213 00:20:09.841799 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} options:{GoMap:map[iface-id-ver:71af81a9-7d43-49b2-9287-c375900aa905 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e77fb5d-c04f-467c-9883-8cb59d819d86}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841827524+00:00 stderr F I1213 00:20:09.841802 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:01c8cadf-7d39-4ad0-9627-60cf1df4b48e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841827524+00:00 stderr F I1213 00:20:09.841819 28750 ovn.go:134] Ensuring zone local for Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t in node crc 2025-12-13T00:20:09.841827524+00:00 stderr F I1213 00:20:09.841793 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d9d09e10-aa79-4a77-ba42-e77ca54f8045}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841847125+00:00 stderr F I1213 00:20:09.841833 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841856645+00:00 stderr F I1213 00:20:09.841848 28750 base_network_controller_pods.go:476] [default/openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t] creating logical port openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t for pod on switch crc 2025-12-13T00:20:09.841897996+00:00 stderr F I1213 00:20:09.841864 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d9d09e10-aa79-4a77-ba42-e77ca54f8045}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.841897996+00:00 stderr F I1213 00:20:09.841868 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {c8d9d75a-827d-4b5a-8293-96d3de66db7c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842066521+00:00 stderr F I1213 00:20:09.841877 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} options:{GoMap:map[iface-id-ver:13045510-8717-4a71-ade4-be95a76440a7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842235715+00:00 stderr F I1213 00:20:09.839911 28750 ovn.go:134] Ensuring zone local for Pod 
openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs in node crc 2025-12-13T00:20:09.842235715+00:00 stderr F I1213 00:20:09.840041 28750 base_network_controller_pods.go:476] [default/openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8] creating logical port openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8 for pod on switch crc 2025-12-13T00:20:09.842235715+00:00 stderr F I1213 00:20:09.837223 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-multus/multus-q88th 2025-12-13T00:20:09.842235715+00:00 stderr F I1213 00:20:09.840086 28750 base_network_controller_pods.go:476] [default/openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2] creating logical port openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2 for pod on switch crc 2025-12-13T00:20:09.842235715+00:00 stderr F I1213 00:20:09.837245 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-wwpnd 2025-12-13T00:20:09.842235715+00:00 stderr F I1213 00:20:09.837287 28750 ovn.go:134] Ensuring zone local for Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z in node crc 2025-12-13T00:20:09.842235715+00:00 stderr F I1213 00:20:09.837392 28750 ovn.go:134] Ensuring zone local for Pod openshift-console/downloads-65476884b9-9wcvx in node crc 2025-12-13T00:20:09.842235715+00:00 stderr F I1213 00:20:09.840642 28750 port_cache.go:96] port-cache(openshift-service-ca_service-ca-666f99b6f-kk8kg): added port &{name:openshift-service-ca_service-ca-666f99b6f-kk8kg uuid:9409cb25-8c46-46db-98ab-5eafe9669ef8 logicalSwitch:crc ips:[0xc000e8b710] mac:[10 88 10 217 0 40] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.40/23] and MAC: 0a:58:0a:d9:00:28 2025-12-13T00:20:09.842235715+00:00 stderr F I1213 00:20:09.838576 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} options:{GoMap:map[iface-id-ver:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842259856+00:00 stderr F I1213 00:20:09.840974 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} options:{GoMap:map[iface-id-ver:b9a848b0-1438-4ada-b7da-2fe53dbf235f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:23 10.217.0.35]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1bf49fc6-86bc-42b9-9630-d7184a92b0eb}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1bf49fc6-86bc-42b9-9630-d7184a92b0eb}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:1bf49fc6-86bc-42b9-9630-d7184a92b0eb}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.35 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {910a58ce-fb7f-4a21-8db9-12b6edfc6eef}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:910a58ce-fb7f-4a21-8db9-12b6edfc6eef}]}}] Timeout: Where:[where column _uuid == 
{c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842259856+00:00 stderr F I1213 00:20:09.841193 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} options:{GoMap:map[iface-id-ver:7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {145016fd-3cef-4f28-abce-7d4c2c73477c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842259856+00:00 stderr F I1213 00:20:09.841371 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} options:{GoMap:map[iface-id-ver:85ca9974-9a5f-4cbf-a126-71bf61c49940 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d33f70f7-551b-4345-ba06-af50c250b376}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842313988+00:00 stderr F I1213 00:20:09.842262 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5caac608-1693-4730-b705-794c4dca0d50}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842313988+00:00 stderr F I1213 00:20:09.842271 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {8f19c25c-23f2-4be6-ae5b-f3e31e0c5430}] Until: Durable: 
Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842325728+00:00 stderr F I1213 00:20:09.837178 28750 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dn27q in node crc 2025-12-13T00:20:09.842337968+00:00 stderr F I1213 00:20:09.842327 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-dns/node-resolver-dn27q after 0 failed attempt(s) 2025-12-13T00:20:09.842347839+00:00 stderr F I1213 00:20:09.842335 28750 default_network_controller.go:699] Recording success event on pod openshift-dns/node-resolver-dn27q 2025-12-13T00:20:09.842347839+00:00 stderr F I1213 00:20:09.841454 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} options:{GoMap:map[iface-id-ver:7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8393529-1df9-45df-a9d2-f761b6ef68c3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842358209+00:00 stderr F I1213 00:20:09.842331 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5caac608-1693-4730-b705-794c4dca0d50}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842404080+00:00 stderr F I1213 00:20:09.842360 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842415410+00:00 stderr F I1213 
00:20:09.842390 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8393529-1df9-45df-a9d2-f761b6ef68c3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842427451+00:00 stderr F I1213 00:20:09.842357 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} options:{GoMap:map[iface-id-ver:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:06 10.217.0.6]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26}]}}] Timeout: Where:[where column _uuid == {34c062e9-0e41-479f-b36f-464b48cc97e0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5caac608-1693-4730-b705-794c4dca0d50}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:5caac608-1693-4730-b705-794c4dca0d50}]}}] Timeout: Where:[where column _uuid == 
{c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842482232+00:00 stderr F I1213 00:20:09.842423 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} options:{GoMap:map[iface-id-ver:7d51f445-054a-4e4f-a67b-a828f5a32511 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {710ea152-1844-44ad-b1a6-805ec9a3700e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842482232+00:00 stderr F I1213 00:20:09.842451 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8393529-1df9-45df-a9d2-f761b6ef68c3}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842496303+00:00 stderr F I1213 00:20:09.842488 28750 base_network_controller_pods.go:476] [default/openshift-console/downloads-65476884b9-9wcvx] creating logical port openshift-console_downloads-65476884b9-9wcvx for pod on switch crc 2025-12-13T00:20:09.842545874+00:00 stderr F I1213 00:20:09.842505 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842603275+00:00 stderr F I1213 00:20:09.842584 28750 base_network_controller_pods.go:476] [default/openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z] creating logical port 
openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z for pod on switch crc 2025-12-13T00:20:09.842642637+00:00 stderr F I1213 00:20:09.842600 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {4a46821f-f601-44e2-aacf-0ffb901e376e}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842751919+00:00 stderr F I1213 00:20:09.842696 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.12 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2453849f-f886-4491-b6c8-a4d7af784119}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842801981+00:00 stderr F I1213 00:20:09.842765 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2453849f-f886-4491-b6c8-a4d7af784119}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.842925874+00:00 stderr F I1213 00:20:09.842897 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-multus/multus-q88th 2025-12-13T00:20:09.842925874+00:00 stderr F I1213 00:20:09.842912 28750 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-q88th in node crc 2025-12-13T00:20:09.842925874+00:00 stderr F I1213 00:20:09.842795 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0c 10.217.0.12]} options:{GoMap:map[iface-id-ver:71af81a9-7d43-49b2-9287-c375900aa905 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0c 
10.217.0.12]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {6e77fb5d-c04f-467c-9883-8cb59d819d86}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:6e77fb5d-c04f-467c-9883-8cb59d819d86}]}}] Timeout: Where:[where column _uuid == {c8d9d75a-827d-4b5a-8293-96d3de66db7c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.12 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2453849f-f886-4491-b6c8-a4d7af784119}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:2453849f-f886-4491-b6c8-a4d7af784119}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843032367+00:00 stderr F I1213 00:20:09.842889 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} options:{GoMap:map[iface-id-ver:6268b7fe-8910-4505-b404-6f1df638105c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {745a40f7-2acc-4e2b-a087-861e0ea97ffe}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843052248+00:00 stderr F I1213 00:20:09.843024 28750 
base_network_controller_pods.go:476] [default/openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs] creating logical port openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs for pod on switch crc 2025-12-13T00:20:09.843097909+00:00 stderr F I1213 00:20:09.843054 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843136980+00:00 stderr F I1213 00:20:09.843067 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.33 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {87fe1602-51a3-4d2b-be29-ca6a53c0b451}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843177811+00:00 stderr F I1213 00:20:09.843139 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843203492+00:00 stderr F I1213 00:20:09.843167 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:87fe1602-51a3-4d2b-be29-ca6a53c0b451}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843271914+00:00 stderr F I1213 00:20:09.843194 28750 transact.go:42] 
Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} options:{GoMap:map[iface-id-ver:7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:21 10.217.0.33]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d8393529-1df9-45df-a9d2-f761b6ef68c3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8393529-1df9-45df-a9d2-f761b6ef68c3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d8393529-1df9-45df-a9d2-f761b6ef68c3}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.33 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {87fe1602-51a3-4d2b-be29-ca6a53c0b451}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:87fe1602-51a3-4d2b-be29-ca6a53c0b451}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843311155+00:00 stderr F I1213 00:20:09.842917 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-multus/multus-q88th after 0 failed attempt(s) 2025-12-13T00:20:09.843321795+00:00 stderr F I1213 00:20:09.843307 28750 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-q88th 2025-12-13T00:20:09.843321795+00:00 stderr F I1213 
00:20:09.841474 28750 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc 2025-12-13T00:20:09.843413517+00:00 stderr F I1213 00:20:09.843391 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s) 2025-12-13T00:20:09.843413517+00:00 stderr F I1213 00:20:09.843401 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc 2025-12-13T00:20:09.843413517+00:00 stderr F I1213 00:20:09.841480 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.73 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe11781d-2eb7-4789-931b-76e487eefd5f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843464369+00:00 stderr F I1213 00:20:09.842443 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {5c003dc7-268c-4cb5-b66b-9d2f810661e6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843475499+00:00 stderr F I1213 00:20:09.843460 28750 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-wwpnd in node crc 2025-12-13T00:20:09.843475499+00:00 stderr F I1213 00:20:09.841827 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} options:{GoMap:map[iface-id-ver:1a3e81c3-c292-4130-9436-f94062c91efd requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:57 10.217.0.87]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {eda38bc9-7da5-4a6b-818c-4e1e8f85426d}] Until: Durable: 
Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:eda38bc9-7da5-4a6b-818c-4e1e8f85426d}]}}] Timeout: Where:[where column _uuid == {d9150f6f-5be3-40d7-848b-deb1197b35b9}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.87 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {01c8cadf-7d39-4ad0-9627-60cf1df4b48e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:01c8cadf-7d39-4ad0-9627-60cf1df4b48e}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843485709+00:00 stderr F I1213 00:20:09.843476 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-wwpnd after 0 failed attempt(s) 2025-12-13T00:20:09.843496360+00:00 stderr F I1213 00:20:09.843484 28750 default_network_controller.go:699] Recording success event on pod openshift-network-operator/iptables-alerter-wwpnd 2025-12-13T00:20:09.843496360+00:00 stderr F I1213 00:20:09.843450 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fe11781d-2eb7-4789-931b-76e487eefd5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.843506440+00:00 stderr F I1213 00:20:09.843494 28750 pods.go:220] [openshift-service-ca/service-ca-666f99b6f-kk8kg] addLogicalPort took 5.347758ms, libovsdb time 1.541482ms 2025-12-13T00:20:09.843516030+00:00 stderr F I1213 00:20:09.843507 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-service-ca/service-ca-666f99b6f-kk8kg after 0 failed attempt(s) 2025-12-13T00:20:09.843529091+00:00 stderr F I1213 00:20:09.843514 28750 default_network_controller.go:699] Recording success event on pod openshift-service-ca/service-ca-666f99b6f-kk8kg 2025-12-13T00:20:09.843529091+00:00 stderr F I1213 00:20:09.841465 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv after 0 failed attempt(s) 2025-12-13T00:20:09.843529091+00:00 stderr F I1213 00:20:09.843523 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv 2025-12-13T00:20:09.843539941+00:00 stderr F I1213 00:20:09.839163 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "cf1a8966-f594-490a-9fbb-eec5bafd13d3" 2025-12-13T00:20:09.843552041+00:00 stderr F I1213 00:20:09.843527 28750 port_cache.go:96] port-cache(openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg): added port &{name:openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg uuid:1ab7106a-09d5-4567-9282-0a7a6aa6a6b4 logicalSwitch:crc ips:[0xc001ad22a0] mac:[10 88 10 217 0 46] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.46/23] and MAC: 0a:58:0a:d9:00:2e 2025-12-13T00:20:09.843561852+00:00 stderr F I1213 00:20:09.843552 28750 pods.go:220] [openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg] addLogicalPort took 6.322974ms, libovsdb time 1.264675ms 2025-12-13T00:20:09.843571662+00:00 stderr F I1213 00:20:09.843562 28750 obj_retry.go:379] Retry successful for *v1.Pod 
openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg after 0 failed attempt(s) 2025-12-13T00:20:09.843571662+00:00 stderr F I1213 00:20:09.843546 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} options:{GoMap:map[iface-id-ver:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {99ef3a4b-7858-4c9b-90db-217867afe36a}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843583872+00:00 stderr F I1213 00:20:09.843574 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "0b5c38ff-1fa8-4219-994d-15776acd4a4d" 2025-12-13T00:20:09.843593632+00:00 stderr F I1213 00:20:09.843587 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "120b38dc-8236-4fa6-a452-642b8ad738ee" 2025-12-13T00:20:09.843603593+00:00 stderr F I1213 00:20:09.843592 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "e9127708-ccfd-4891-8a3a-f0cacb77e0f4" 2025-12-13T00:20:09.843603593+00:00 stderr F I1213 00:20:09.843598 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" 2025-12-13T00:20:09.843614533+00:00 stderr F I1213 00:20:09.842298 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d33f70f7-551b-4345-ba06-af50c250b376}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843614533+00:00 stderr F I1213 00:20:09.843603 28750 ovnkube_controller.go:808] Unexpected last event type (1) in 
cache for pod with UID "ed024e5d-8fc2-4c22-803d-73f3c9795f19" 2025-12-13T00:20:09.843614533+00:00 stderr F I1213 00:20:09.843608 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "34a48baf-1bee-4921-8bb2-9b7320e76f79" 2025-12-13T00:20:09.843614533+00:00 stderr F I1213 00:20:09.843599 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843629023+00:00 stderr F I1213 00:20:09.843613 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "6d67253e-2acd-4bc1-8185-793587da4f17" 2025-12-13T00:20:09.843629023+00:00 stderr F I1213 00:20:09.843618 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "e4a7de23-6134-4044-902a-0900dc04a501" 2025-12-13T00:20:09.843629023+00:00 stderr F I1213 00:20:09.843623 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "f728c15e-d8de-4a9a-a3ea-fdcead95cb91" 2025-12-13T00:20:09.843639704+00:00 stderr F I1213 00:20:09.843633 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "12e733dd-0939-4f1b-9cbb-13897e093787" 2025-12-13T00:20:09.843649494+00:00 stderr F I1213 00:20:09.843638 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "297ab9b6-2186-4d5b-a952-2bfd59af63c4" 2025-12-13T00:20:09.843649494+00:00 stderr F I1213 00:20:09.843643 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "d73c4e63-30ef-4915-925d-f44201c612ec" 2025-12-13T00:20:09.843659764+00:00 stderr F I1213 00:20:09.843648 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for 
pod with UID "b54e8941-2fc4-432a-9e51-39684df9089e" 2025-12-13T00:20:09.843659764+00:00 stderr F I1213 00:20:09.843512 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} options:{GoMap:map[iface-id-ver:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:49 10.217.0.73]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {9a79516e-7a72-4d42-b0ab-87a99aa064f3}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:9a79516e-7a72-4d42-b0ab-87a99aa064f3}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.73 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fe11781d-2eb7-4789-931b-76e487eefd5f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:fe11781d-2eb7-4789-931b-76e487eefd5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843670034+00:00 stderr F I1213 00:20:09.843652 28750 port_cache.go:96] port-cache(hostpath-provisioner_csi-hostpathplugin-hvm8g): added port &{name:hostpath-provisioner_csi-hostpathplugin-hvm8g uuid:52259988-af2b-4ee5-bbfe-801c4ebeb0ae 
logicalSwitch:crc ips:[0xc000d40150] mac:[10 88 10 217 0 49] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.49/23] and MAC: 0a:58:0a:d9:00:31 2025-12-13T00:20:09.843682345+00:00 stderr F I1213 00:20:09.843659 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {57f0696f-dc79-4d6c-b6a1-8c0c5c1afaae}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843695165+00:00 stderr F I1213 00:20:09.843677 28750 pods.go:220] [hostpath-provisioner/csi-hostpathplugin-hvm8g] addLogicalPort took 5.12907ms, libovsdb time 1.168832ms 2025-12-13T00:20:09.843695165+00:00 stderr F I1213 00:20:09.843663 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d33f70f7-551b-4345-ba06-af50c250b376}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843695165+00:00 stderr F I1213 00:20:09.843687 28750 obj_retry.go:379] Retry successful for *v1.Pod hostpath-provisioner/csi-hostpathplugin-hvm8g after 0 failed attempt(s) 2025-12-13T00:20:09.843705885+00:00 stderr F I1213 00:20:09.843693 28750 default_network_controller.go:699] Recording success event on pod hostpath-provisioner/csi-hostpathplugin-hvm8g 2025-12-13T00:20:09.843717906+00:00 stderr F I1213 00:20:09.843705 28750 port_cache.go:96] port-cache(openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh): added port &{name:openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh uuid:9aafbb57-c78d-409c-9ff4-1561d4387b2d logicalSwitch:crc ips:[0xc000273f80] mac:[10 88 10 217 0 63] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.63/23] and MAC: 
0a:58:0a:d9:00:3f 2025-12-13T00:20:09.843748737+00:00 stderr F I1213 00:20:09.843725 28750 pods.go:220] [openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh] addLogicalPort took 4.637667ms, libovsdb time 1.413888ms 2025-12-13T00:20:09.843748737+00:00 stderr F I1213 00:20:09.843739 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh after 0 failed attempt(s) 2025-12-13T00:20:09.843759807+00:00 stderr F I1213 00:20:09.843745 28750 default_network_controller.go:699] Recording success event on pod openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh 2025-12-13T00:20:09.843769537+00:00 stderr F I1213 00:20:09.843757 28750 port_cache.go:96] port-cache(openshift-image-registry_image-registry-75b7bb6564-rnjvj): added port &{name:openshift-image-registry_image-registry-75b7bb6564-rnjvj uuid:4981169d-8e04-4bc4-9fc4-c3fa53ed5de1 logicalSwitch:crc ips:[0xc000eb0420] mac:[10 88 10 217 0 41] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.41/23] and MAC: 0a:58:0a:d9:00:29 2025-12-13T00:20:09.843799818+00:00 stderr F I1213 00:20:09.843770 28750 pods.go:220] [openshift-image-registry/image-registry-75b7bb6564-rnjvj] addLogicalPort took 3.889507ms, libovsdb time 1.05078ms 2025-12-13T00:20:09.843799818+00:00 stderr F I1213 00:20:09.843790 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-image-registry/image-registry-75b7bb6564-rnjvj after 0 failed attempt(s) 2025-12-13T00:20:09.843799818+00:00 stderr F I1213 00:20:09.843767 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} options:{GoMap:map[iface-id-ver:0b5d722a-1123-4935-9740-52a08d018bc9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{7a350d82-7987-4ce6-ae41-dd930411ca29}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843854119+00:00 stderr F I1213 00:20:09.841484 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843868590+00:00 stderr F I1213 00:20:09.843839 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.843878700+00:00 stderr F I1213 00:20:09.843857 28750 port_cache.go:96] port-cache(openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv): added port &{name:openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv uuid:82630d91-1647-4c0c-aa84-8f820bcf919e logicalSwitch:crc ips:[0xc0018db230] mac:[10 88 10 217 0 22] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.22/23] and MAC: 0a:58:0a:d9:00:16 2025-12-13T00:20:09.843890310+00:00 stderr F I1213 00:20:09.843873 28750 pods.go:220] [openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv] addLogicalPort took 3.920498ms, libovsdb time 1.249985ms 2025-12-13T00:20:09.843902101+00:00 stderr F I1213 00:20:09.843883 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv after 0 failed attempt(s) 2025-12-13T00:20:09.843902101+00:00 stderr F I1213 00:20:09.843896 28750 default_network_controller.go:699] Recording success event on pod 
openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv 2025-12-13T00:20:09.843914181+00:00 stderr F I1213 00:20:09.843897 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {480d825d-a6a3-4408-89f7-433b108e677b}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844007284+00:00 stderr F I1213 00:20:09.843960 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844007284+00:00 stderr F I1213 00:20:09.843973 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844087266+00:00 stderr F I1213 00:20:09.844042 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844214990+00:00 stderr F I1213 00:20:09.844162 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.31 options:{GoMap:map[stateless:false]} type:snat] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844284172+00:00 stderr F I1213 00:20:09.844244 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844301513+00:00 stderr F I1213 00:20:09.844273 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.19 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {562978c2-805e-4646-ada6-3dd7d281b620}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844355694+00:00 stderr F I1213 00:20:09.844321 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:562978c2-805e-4646-ada6-3dd7d281b620}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844369055+00:00 stderr F I1213 00:20:09.844278 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} options:{GoMap:map[iface-id-ver:13045510-8717-4a71-ade4-be95a76440a7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:1f 10.217.0.31]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213}]}}] Timeout: Where:[where column _uuid == {5c003dc7-268c-4cb5-b66b-9d2f810661e6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.31 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:a92c16ca-c9e1-47b0-80e1-9c7bb8bcef92}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844420446+00:00 stderr F I1213 00:20:09.844371 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.36 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0c6b16ad-44ad-44c4-bdda-872783f6ab00}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844488228+00:00 stderr F I1213 00:20:09.844450 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0c6b16ad-44ad-44c4-bdda-872783f6ab00}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844503258+00:00 stderr F I1213 00:20:09.844476 28750 
model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.71 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5bdc3c8-db64-45ca-8596-53e5e38584c4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844555030+00:00 stderr F I1213 00:20:09.844510 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.20 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7e2d0691-2edc-4313-a98c-25b101cdf576}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844555030+00:00 stderr F I1213 00:20:09.844529 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f5bdc3c8-db64-45ca-8596-53e5e38584c4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844570170+00:00 stderr F I1213 00:20:09.844481 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} options:{GoMap:map[iface-id-ver:85ca9974-9a5f-4cbf-a126-71bf61c49940 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:24 10.217.0.36]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d33f70f7-551b-4345-ba06-af50c250b376}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d33f70f7-551b-4345-ba06-af50c250b376}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d33f70f7-551b-4345-ba06-af50c250b376}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.36 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0c6b16ad-44ad-44c4-bdda-872783f6ab00}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:0c6b16ad-44ad-44c4-bdda-872783f6ab00}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844580240+00:00 stderr F I1213 00:20:09.844561 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7e2d0691-2edc-4313-a98c-25b101cdf576}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844632432+00:00 stderr F I1213 00:20:09.844550 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} options:{GoMap:map[iface-id-ver:0b5d722a-1123-4935-9740-52a08d018bc9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:47 10.217.0.71]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7a350d82-7987-4ce6-ae41-dd930411ca29}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: 
Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:7a350d82-7987-4ce6-ae41-dd930411ca29}]}}] Timeout: Where:[where column _uuid == {480d825d-a6a3-4408-89f7-433b108e677b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.71 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5bdc3c8-db64-45ca-8596-53e5e38584c4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f5bdc3c8-db64-45ca-8596-53e5e38584c4}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844652442+00:00 stderr F I1213 00:20:09.844580 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} options:{GoMap:map[iface-id-ver:45a8038e-e7f2-4d93-a6f5-7753aa54e63f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:14 10.217.0.20]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f8e99409-b28a-4d27-a8e5-267ea6a801cf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f8e99409-b28a-4d27-a8e5-267ea6a801cf}]}}] Timeout: Where:[where column 
_uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.20 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7e2d0691-2edc-4313-a98c-25b101cdf576}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7e2d0691-2edc-4313-a98c-25b101cdf576}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844652442+00:00 stderr F I1213 00:20:09.839835 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {2bf79d67-1324-4efc-8ead-1a59cc805d56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844706804+00:00 stderr F I1213 00:20:09.844657 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.32 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7d2c6a4d-e1a7-4457-8d2b-2a8098268121}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844757035+00:00 stderr F I1213 00:20:09.843796 28750 default_network_controller.go:699] Recording success event on pod openshift-image-registry/image-registry-75b7bb6564-rnjvj 2025-12-13T00:20:09.844757035+00:00 stderr F I1213 00:20:09.844730 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7d2c6a4d-e1a7-4457-8d2b-2a8098268121}]}}] Timeout: 
Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844777406+00:00 stderr F I1213 00:20:09.843568 28750 default_network_controller.go:699] Recording success event on pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg 2025-12-13T00:20:09.844777406+00:00 stderr F I1213 00:20:09.844752 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} options:{GoMap:map[iface-id-ver:21d29937-debd-4407-b2b1-d1053cb0f342 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2174bce-e1da-468b-aa60-b9409f80c104}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844845137+00:00 stderr F I1213 00:20:09.844805 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844845137+00:00 stderr F I1213 00:20:09.844802 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:145016fd-3cef-4f28-abce-7d4c2c73477c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844845137+00:00 stderr F I1213 00:20:09.844757 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} options:{GoMap:map[iface-id-ver:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 
requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:20 10.217.0.32]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6}]}}] Timeout: Where:[where column _uuid == {3e14f06c-8cb1-4266-82da-dd9d246bcca8}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.32 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7d2c6a4d-e1a7-4457-8d2b-2a8098268121}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7d2c6a4d-e1a7-4457-8d2b-2a8098268121}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.844904099+00:00 stderr F I1213 00:20:09.844679 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} options:{GoMap:map[iface-id-ver:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f24db1f4-18a4-418a-9c99-1d94ebfba0da}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.845051583+00:00 stderr F I1213 00:20:09.845003 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845099594+00:00 stderr F I1213 00:20:09.845058 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845163786+00:00 stderr F I1213 00:20:09.845084 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} options:{GoMap:map[iface-id-ver:4f8aa612-9da0-4a2b-911e-6a1764a4e74e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:05 10.217.0.5]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {805e2f41-6cb8-4ccf-9939-37cfb4fa5509}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:805e2f41-6cb8-4ccf-9939-37cfb4fa5509}]}}] Timeout: Where:[where column _uuid == {b085c101-9f1c-4419-be2b-9c8df8cad59f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 
logical_ip:10.217.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:f9d7ca8b-5c8a-44a0-b49e-f783d7c54787}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845178567+00:00 stderr F I1213 00:20:09.845159 28750 port_cache.go:96] port-cache(openshift-dns_dns-default-gbw49): added port &{name:openshift-dns_dns-default-gbw49 uuid:5e26cedd-18bd-46bc-a3fb-1ef5c6ab5213 logicalSwitch:crc ips:[0xc001388ff0] mac:[10 88 10 217 0 31] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.31/23] and MAC: 0a:58:0a:d9:00:1f 2025-12-13T00:20:09.845190757+00:00 stderr F I1213 00:20:09.845171 28750 pods.go:220] [openshift-dns/dns-default-gbw49] addLogicalPort took 5.283046ms, libovsdb time 875.824µs 2025-12-13T00:20:09.845190757+00:00 stderr F I1213 00:20:09.845183 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-dns/dns-default-gbw49 after 0 failed attempt(s) 2025-12-13T00:20:09.845190757+00:00 stderr F I1213 00:20:09.845151 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} options:{GoMap:map[iface-id-ver:0f394926-bdb9-425c-b36e-264d7fd34550 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5604df7-c1b9-4360-a570-e22fbf62c520}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845210367+00:00 stderr F I1213 00:20:09.845192 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "13045510-8717-4a71-ade4-be95a76440a7" 
2025-12-13T00:20:09.845252129+00:00 stderr F I1213 00:20:09.845195 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.16 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a5f15db-00b3-4563-9f1c-1aca2a061230}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845252129+00:00 stderr F I1213 00:20:09.845226 28750 port_cache.go:96] port-cache(openshift-console_console-644bb77b49-5x5xk): added port &{name:openshift-console_console-644bb77b49-5x5xk uuid:9a79516e-7a72-4d42-b0ab-87a99aa064f3 logicalSwitch:crc ips:[0xc001ad3080] mac:[10 88 10 217 0 73] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.73/23] and MAC: 0a:58:0a:d9:00:49 2025-12-13T00:20:09.845252129+00:00 stderr F I1213 00:20:09.845239 28750 pods.go:220] [openshift-console/console-644bb77b49-5x5xk] addLogicalPort took 6.55765ms, libovsdb time 1.719868ms 2025-12-13T00:20:09.845252129+00:00 stderr F I1213 00:20:09.845245 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-console/console-644bb77b49-5x5xk after 0 failed attempt(s) 2025-12-13T00:20:09.845252129+00:00 stderr F I1213 00:20:09.845187 28750 default_network_controller.go:699] Recording success event on pod openshift-dns/dns-default-gbw49 2025-12-13T00:20:09.845267759+00:00 stderr F I1213 00:20:09.845250 28750 default_network_controller.go:699] Recording success event on pod openshift-console/console-644bb77b49-5x5xk 2025-12-13T00:20:09.845267759+00:00 stderr F I1213 00:20:09.845232 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.66 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7790ce03-cefa-4620-876d-a377ca4c3dbf}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845267759+00:00 stderr F I1213 
00:20:09.845229 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.45 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e326c177-dff3-4bbe-a1eb-fe518325ee36}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845281649+00:00 stderr F I1213 00:20:09.845268 28750 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh): added port &{name:openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh uuid:65255fc0-0c1f-4ba9-9d8f-e6c95bb72749 logicalSwitch:crc ips:[0xc00119c420] mac:[10 88 10 217 0 14] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.14/23] and MAC: 0a:58:0a:d9:00:0e 2025-12-13T00:20:09.845281649+00:00 stderr F I1213 00:20:09.845277 28750 pods.go:220] [openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh] addLogicalPort took 7.98106ms, libovsdb time 4.097434ms 2025-12-13T00:20:09.845295970+00:00 stderr F I1213 00:20:09.845283 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh after 0 failed attempt(s) 2025-12-13T00:20:09.845295970+00:00 stderr F I1213 00:20:09.845288 28750 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh 2025-12-13T00:20:09.845295970+00:00 stderr F I1213 00:20:09.845272 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1a5f15db-00b3-4563-9f1c-1aca2a061230}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845316900+00:00 stderr F I1213 00:20:09.845286 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate 
Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7790ce03-cefa-4620-876d-a377ca4c3dbf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845316900+00:00 stderr F I1213 00:20:09.845304 28750 port_cache.go:96] port-cache(openshift-console-operator_console-conversion-webhook-595f9969b-l6z49): added port &{name:openshift-console-operator_console-conversion-webhook-595f9969b-l6z49 uuid:6056bee0-572a-4de7-bb24-40ca6a66be30 logicalSwitch:crc ips:[0xc001564330] mac:[10 88 10 217 0 61] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.61/23] and MAC: 0a:58:0a:d9:00:3d 2025-12-13T00:20:09.845331181+00:00 stderr F I1213 00:20:09.845313 28750 pods.go:220] [openshift-console-operator/console-conversion-webhook-595f9969b-l6z49] addLogicalPort took 8.018431ms, libovsdb time 4.558336ms 2025-12-13T00:20:09.845331181+00:00 stderr F I1213 00:20:09.845302 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e326c177-dff3-4bbe-a1eb-fe518325ee36}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845331181+00:00 stderr F I1213 00:20:09.845320 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 after 0 failed attempt(s) 2025-12-13T00:20:09.845331181+00:00 stderr F I1213 00:20:09.845325 28750 default_network_controller.go:699] Recording success event on pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49 2025-12-13T00:20:09.845345051+00:00 stderr F I1213 00:20:09.845319 28750 port_cache.go:96] port-cache(openshift-multus_network-metrics-daemon-qdfr4): added port &{name:openshift-multus_network-metrics-daemon-qdfr4 
uuid:3564ddfd-a311-4df3-b5d0-1e76294b4ab0 logicalSwitch:crc ips:[0xc0017416b0] mac:[10 88 10 217 0 3] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.3/23] and MAC: 0a:58:0a:d9:00:03 2025-12-13T00:20:09.845345051+00:00 stderr F I1213 00:20:09.845340 28750 pods.go:220] [openshift-multus/network-metrics-daemon-qdfr4] addLogicalPort took 8.531795ms, libovsdb time 5.224244ms 2025-12-13T00:20:09.845357671+00:00 stderr F I1213 00:20:09.845345 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-multus/network-metrics-daemon-qdfr4 after 0 failed attempt(s) 2025-12-13T00:20:09.845357671+00:00 stderr F I1213 00:20:09.845349 28750 default_network_controller.go:699] Recording success event on pod openshift-multus/network-metrics-daemon-qdfr4 2025-12-13T00:20:09.845357671+00:00 stderr F I1213 00:20:09.845344 28750 port_cache.go:96] port-cache(openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7): added port &{name:openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7 uuid:3644fddd-ceae-4a64-8b00-dadf73515945 logicalSwitch:crc ips:[0xc001994e40] mac:[10 88 10 217 0 64] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.64/23] and MAC: 0a:58:0a:d9:00:40 2025-12-13T00:20:09.845357671+00:00 stderr F I1213 00:20:09.839974 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845372162+00:00 stderr F I1213 00:20:09.845358 28750 pods.go:220] [openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7] addLogicalPort took 5.343767ms, libovsdb time 4.488615ms 2025-12-13T00:20:09.845372162+00:00 stderr F I1213 00:20:09.845367 28750 obj_retry.go:379] Retry successful for *v1.Pod 
openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 after 0 failed attempt(s) 2025-12-13T00:20:09.845388852+00:00 stderr F I1213 00:20:09.845309 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} options:{GoMap:map[iface-id-ver:6268b7fe-8910-4505-b404-6f1df638105c requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:42 10.217.0.66]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {745a40f7-2acc-4e2b-a087-861e0ea97ffe}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:745a40f7-2acc-4e2b-a087-861e0ea97ffe}]}}] Timeout: Where:[where column _uuid == {82e93a32-6948-40ee-b2ac-6218a7078ae0}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.66 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {7790ce03-cefa-4620-876d-a377ca4c3dbf}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:7790ce03-cefa-4620-876d-a377ca4c3dbf}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845388852+00:00 stderr F I1213 00:20:09.845375 28750 default_network_controller.go:699] Recording success event on pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7 
2025-12-13T00:20:09.845388852+00:00 stderr F I1213 00:20:09.845302 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} options:{GoMap:map[iface-id-ver:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:10 10.217.0.16]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2d98188d-6d49-48e7-8956-57a5c46efe26}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:2d98188d-6d49-48e7-8956-57a5c46efe26}]}}] Timeout: Where:[where column _uuid == {2bf79d67-1324-4efc-8ead-1a59cc805d56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.16 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1a5f15db-00b3-4563-9f1c-1aca2a061230}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1a5f15db-00b3-4563-9f1c-1aca2a061230}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845402573+00:00 stderr F I1213 00:20:09.845326 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} options:{GoMap:map[iface-id-ver:7d51f445-054a-4e4f-a67b-a828f5a32511 requested-chassis:crc]} 
port_security:{GoSet:[0a:58:0a:d9:00:2d 10.217.0.45]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {710ea152-1844-44ad-b1a6-805ec9a3700e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:710ea152-1844-44ad-b1a6-805ec9a3700e}]}}] Timeout: Where:[where column _uuid == {4a46821f-f601-44e2-aacf-0ffb901e376e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.45 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e326c177-dff3-4bbe-a1eb-fe518325ee36}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:e326c177-dff3-4bbe-a1eb-fe518325ee36}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845421603+00:00 stderr F I1213 00:20:09.845401 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845480965+00:00 stderr F I1213 00:20:09.845433 28750 port_cache.go:96] port-cache(openshift-marketplace_redhat-marketplace-nv4pl): added port 
&{name:openshift-marketplace_redhat-marketplace-nv4pl uuid:d33f70f7-551b-4345-ba06-af50c250b376 logicalSwitch:crc ips:[0xc00160e660] mac:[10 88 10 217 0 36] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.36/23] and MAC: 0a:58:0a:d9:00:24 2025-12-13T00:20:09.845480965+00:00 stderr F I1213 00:20:09.845467 28750 pods.go:220] [openshift-marketplace/redhat-marketplace-nv4pl] addLogicalPort took 7.222319ms, libovsdb time 941.265µs 2025-12-13T00:20:09.845480965+00:00 stderr F I1213 00:20:09.845469 28750 port_cache.go:96] port-cache(openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7): added port &{name:openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 uuid:6e77fb5d-c04f-467c-9883-8cb59d819d86 logicalSwitch:crc ips:[0xc001588930] mac:[10 88 10 217 0 12] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.12/23] and MAC: 0a:58:0a:d9:00:0c 2025-12-13T00:20:09.845495985+00:00 stderr F I1213 00:20:09.845477 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/redhat-marketplace-nv4pl after 0 failed attempt(s) 2025-12-13T00:20:09.845495985+00:00 stderr F I1213 00:20:09.845483 28750 pods.go:220] [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7] addLogicalPort took 8.34755ms, libovsdb time 2.668493ms 2025-12-13T00:20:09.845495985+00:00 stderr F I1213 00:20:09.845491 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 after 0 failed attempt(s) 2025-12-13T00:20:09.845509366+00:00 stderr F I1213 00:20:09.845496 28750 default_network_controller.go:699] Recording success event on pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7 2025-12-13T00:20:09.845509366+00:00 stderr F I1213 00:20:09.844351 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} 
options:{GoMap:map[iface-id-ver:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:13 10.217.0.19]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {99ef3a4b-7858-4c9b-90db-217867afe36a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:99ef3a4b-7858-4c9b-90db-217867afe36a}]}}] Timeout: Where:[where column _uuid == {57f0696f-dc79-4d6c-b6a1-8c0c5c1afaae}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.19 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {562978c2-805e-4646-ada6-3dd7d281b620}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:562978c2-805e-4646-ada6-3dd7d281b620}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845509366+00:00 stderr F I1213 00:20:09.845230 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845526076+00:00 stderr F I1213 00:20:09.845485 28750 
default_network_controller.go:699] Recording success event on pod openshift-marketplace/redhat-marketplace-nv4pl 2025-12-13T00:20:09.845526076+00:00 stderr F I1213 00:20:09.845498 28750 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz): added port &{name:openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz uuid:69155615-9d93-4b72-bddd-739a6e731251 logicalSwitch:crc ips:[0xc0016ba2d0] mac:[10 88 10 217 0 43] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.43/23] and MAC: 0a:58:0a:d9:00:2b 2025-12-13T00:20:09.845526076+00:00 stderr F I1213 00:20:09.845521 28750 pods.go:220] [openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz] addLogicalPort took 5.334167ms, libovsdb time 4.430972ms 2025-12-13T00:20:09.845537366+00:00 stderr F I1213 00:20:09.845529 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz after 0 failed attempt(s) 2025-12-13T00:20:09.845537366+00:00 stderr F I1213 00:20:09.845524 28750 port_cache.go:96] port-cache(openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b): added port &{name:openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b uuid:3e86699a-fa52-4a81-9386-60d37f3fa10c logicalSwitch:crc ips:[0xc000c96540] mac:[10 88 10 217 0 72] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.72/23] and MAC: 0a:58:0a:d9:00:48 2025-12-13T00:20:09.845549307+00:00 stderr F I1213 00:20:09.845539 28750 pods.go:220] [openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b] addLogicalPort took 8.020991ms, libovsdb time 4.493075ms 2025-12-13T00:20:09.845561737+00:00 stderr F I1213 00:20:09.845546 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b after 0 failed attempt(s) 2025-12-13T00:20:09.845561737+00:00 stderr F I1213 00:20:09.845551 28750 default_network_controller.go:699] Recording success event on pod 
openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b 2025-12-13T00:20:09.845561737+00:00 stderr F I1213 00:20:09.845543 28750 port_cache.go:96] port-cache(openshift-marketplace_certified-operators-lcrg8): added port &{name:openshift-marketplace_certified-operators-lcrg8 uuid:d8393529-1df9-45df-a9d2-f761b6ef68c3 logicalSwitch:crc ips:[0xc001588270] mac:[10 88 10 217 0 33] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.33/23] and MAC: 0a:58:0a:d9:00:21 2025-12-13T00:20:09.845580507+00:00 stderr F I1213 00:20:09.845535 28750 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz 2025-12-13T00:20:09.845580507+00:00 stderr F I1213 00:20:09.845562 28750 pods.go:220] [openshift-marketplace/certified-operators-lcrg8] addLogicalPort took 4.260677ms, libovsdb time 2.249493ms 2025-12-13T00:20:09.845580507+00:00 stderr F I1213 00:20:09.845571 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/certified-operators-lcrg8 after 0 failed attempt(s) 2025-12-13T00:20:09.845580507+00:00 stderr F I1213 00:20:09.845571 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" 2025-12-13T00:20:09.845580507+00:00 stderr F I1213 00:20:09.845576 28750 default_network_controller.go:699] Recording success event on pod openshift-marketplace/certified-operators-lcrg8 2025-12-13T00:20:09.845594968+00:00 stderr F I1213 00:20:09.845564 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {327158bc-926b-43b6-928d-23c33a7f6443}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845609918+00:00 stderr F I1213 00:20:09.845589 28750 port_cache.go:96] 
port-cache(openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd): added port &{name:openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd uuid:8b4158c3-d859-42e6-8259-b16ce1cbd284 logicalSwitch:crc ips:[0xc001914ff0] mac:[10 88 10 217 0 39] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.39/23] and MAC: 0a:58:0a:d9:00:27 2025-12-13T00:20:09.845621939+00:00 stderr F I1213 00:20:09.845609 28750 pods.go:220] [openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd] addLogicalPort took 6.609952ms, libovsdb time 3.821085ms 2025-12-13T00:20:09.845621939+00:00 stderr F I1213 00:20:09.845618 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd after 0 failed attempt(s) 2025-12-13T00:20:09.845634149+00:00 stderr F I1213 00:20:09.845623 28750 default_network_controller.go:699] Recording success event on pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd 2025-12-13T00:20:09.845645839+00:00 stderr F I1213 00:20:09.845628 28750 port_cache.go:96] port-cache(openshift-ingress-canary_ingress-canary-2vhcn): added port &{name:openshift-ingress-canary_ingress-canary-2vhcn uuid:7a350d82-7987-4ce6-ae41-dd930411ca29 logicalSwitch:crc ips:[0xc0014a88a0] mac:[10 88 10 217 0 71] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.71/23] and MAC: 0a:58:0a:d9:00:47 2025-12-13T00:20:09.845657750+00:00 stderr F I1213 00:20:09.845644 28750 pods.go:220] [openshift-ingress-canary/ingress-canary-2vhcn] addLogicalPort took 3.906097ms, libovsdb time 1.07516ms 2025-12-13T00:20:09.845657750+00:00 stderr F I1213 00:20:09.845650 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-ingress-canary/ingress-canary-2vhcn after 0 failed attempt(s) 2025-12-13T00:20:09.845657750+00:00 stderr F I1213 00:20:09.845654 28750 default_network_controller.go:699] Recording success event on pod openshift-ingress-canary/ingress-canary-2vhcn 2025-12-13T00:20:09.845674790+00:00 stderr F I1213 00:20:09.845649 28750 port_cache.go:96] 
port-cache(openshift-dns-operator_dns-operator-75f687757b-nz2xb): added port &{name:openshift-dns-operator_dns-operator-75f687757b-nz2xb uuid:b212e2c2-3d4e-4898-aede-c926b74813f0 logicalSwitch:crc ips:[0xc0015ec3c0] mac:[10 88 10 217 0 18] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.18/23] and MAC: 0a:58:0a:d9:00:12 2025-12-13T00:20:09.845674790+00:00 stderr F I1213 00:20:09.845666 28750 pods.go:220] [openshift-dns-operator/dns-operator-75f687757b-nz2xb] addLogicalPort took 8.392602ms, libovsdb time 4.599727ms 2025-12-13T00:20:09.845687130+00:00 stderr F I1213 00:20:09.845674 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-dns-operator/dns-operator-75f687757b-nz2xb after 0 failed attempt(s) 2025-12-13T00:20:09.845687130+00:00 stderr F I1213 00:20:09.845679 28750 default_network_controller.go:699] Recording success event on pod openshift-dns-operator/dns-operator-75f687757b-nz2xb 2025-12-13T00:20:09.845729632+00:00 stderr F I1213 00:20:09.845690 28750 port_cache.go:96] port-cache(openshift-marketplace_marketplace-operator-8b455464d-kghgr): added port &{name:openshift-marketplace_marketplace-operator-8b455464d-kghgr uuid:116f6820-9a92-4bbc-bb34-265c58c5b649 logicalSwitch:crc ips:[0xc000e4f530] mac:[10 88 10 217 0 30] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.30/23] and MAC: 0a:58:0a:d9:00:1e 2025-12-13T00:20:09.845729632+00:00 stderr F I1213 00:20:09.845713 28750 pods.go:220] [openshift-marketplace/marketplace-operator-8b455464d-kghgr] addLogicalPort took 5.476031ms, libovsdb time 4.494994ms 2025-12-13T00:20:09.845729632+00:00 stderr F I1213 00:20:09.845720 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/marketplace-operator-8b455464d-kghgr after 0 failed attempt(s) 2025-12-13T00:20:09.845729632+00:00 stderr F I1213 00:20:09.845726 28750 default_network_controller.go:699] Recording success event on pod openshift-marketplace/marketplace-operator-8b455464d-kghgr 2025-12-13T00:20:09.845744492+00:00 stderr F I1213 
00:20:09.844885 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:145016fd-3cef-4f28-abce-7d4c2c73477c}]}}] Timeout: Where:[where column _uuid == {ca455606-530d-4273-b8ee-8ed4760b1f66}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845744492+00:00 stderr F I1213 00:20:09.845579 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "c085412c-b875-46c9-ae3e-e6b0d8067091" 2025-12-13T00:20:09.845756812+00:00 stderr F I1213 00:20:09.845736 28750 port_cache.go:96] port-cache(openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m): added port &{name:openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m uuid:8f5c86fb-1b4c-42f5-945d-5fbe3f7bfd26 logicalSwitch:crc ips:[0xc0012b8930] mac:[10 88 10 217 0 6] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.6/23] and MAC: 0a:58:0a:d9:00:06 2025-12-13T00:20:09.845756812+00:00 stderr F I1213 00:20:09.845749 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "b9a848b0-1438-4ada-b7da-2fe53dbf235f" 2025-12-13T00:20:09.845767203+00:00 stderr F I1213 00:20:09.845754 28750 pods.go:220] [openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m] addLogicalPort took 6.596151ms, libovsdb time 3.095885ms 2025-12-13T00:20:09.845767203+00:00 stderr F I1213 00:20:09.845759 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "59748b9b-c309-4712-aa85-bb38d71c4915" 2025-12-13T00:20:09.845767203+00:00 stderr F I1213 00:20:09.845763 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m after 0 failed attempt(s) 2025-12-13T00:20:09.845777953+00:00 stderr F I1213 00:20:09.845768 28750 default_network_controller.go:699] Recording success event on pod 
openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m 2025-12-13T00:20:09.845794263+00:00 stderr F I1213 00:20:09.845775 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "a702c6d2-4dde-4077-ab8c-0f8df804bf7a" 2025-12-13T00:20:09.845794263+00:00 stderr F I1213 00:20:09.845782 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "d0f40333-c860-4c04-8058-a0bf572dcf12" 2025-12-13T00:20:09.845794263+00:00 stderr F I1213 00:20:09.845789 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "85ca9974-9a5f-4cbf-a126-71bf61c49940" 2025-12-13T00:20:09.845808284+00:00 stderr F I1213 00:20:09.845780 28750 port_cache.go:96] port-cache(openshift-controller-manager_controller-manager-778975cc4f-x5vcf): added port &{name:openshift-controller-manager_controller-manager-778975cc4f-x5vcf uuid:eda38bc9-7da5-4a6b-818c-4e1e8f85426d logicalSwitch:crc ips:[0xc0019143f0] mac:[10 88 10 217 0 87] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.87/23] and MAC: 0a:58:0a:d9:00:57 2025-12-13T00:20:09.845808284+00:00 stderr F I1213 00:20:09.845796 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "bd556935-a077-45df-ba3f-d42c39326ccd" 2025-12-13T00:20:09.845808284+00:00 stderr F I1213 00:20:09.845799 28750 pods.go:220] [openshift-controller-manager/controller-manager-778975cc4f-x5vcf] addLogicalPort took 7.078174ms, libovsdb time 3.758984ms 2025-12-13T00:20:09.845808284+00:00 stderr F I1213 00:20:09.845803 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20" 2025-12-13T00:20:09.845822494+00:00 stderr F I1213 00:20:09.845810 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf after 0 failed attempt(s) 2025-12-13T00:20:09.845822494+00:00 stderr F I1213 00:20:09.845811 28750 
ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "5bacb25d-97b6-4491-8fb4-99feae1d802a" 2025-12-13T00:20:09.845822494+00:00 stderr F I1213 00:20:09.845816 28750 default_network_controller.go:699] Recording success event on pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf 2025-12-13T00:20:09.845834524+00:00 stderr F I1213 00:20:09.845820 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "43ae1c37-047b-4ee2-9fee-41e337dd4ac8" 2025-12-13T00:20:09.845834524+00:00 stderr F I1213 00:20:09.845805 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.11 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845834524+00:00 stderr F I1213 00:20:09.845827 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "71af81a9-7d43-49b2-9287-c375900aa905" 2025-12-13T00:20:09.845845225+00:00 stderr F I1213 00:20:09.845838 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "01feb2e0-a0f4-4573-8335-34e364e0ef40" 2025-12-13T00:20:09.845854835+00:00 stderr F I1213 00:20:09.845846 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "1a3e81c3-c292-4130-9436-f94062c91efd" 2025-12-13T00:20:09.845892956+00:00 stderr F I1213 00:20:09.845847 28750 port_cache.go:96] port-cache(openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw): added port &{name:openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw uuid:f8e99409-b28a-4d27-a8e5-267ea6a801cf logicalSwitch:crc ips:[0xc000d60420] mac:[10 88 10 217 0 20] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.20/23] and MAC: 0a:58:0a:d9:00:14 
2025-12-13T00:20:09.845892956+00:00 stderr F I1213 00:20:09.837206 28750 obj_retry.go:296] Retry object setup: *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-12-13T00:20:09.845892956+00:00 stderr F I1213 00:20:09.845863 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845892956+00:00 stderr F I1213 00:20:09.844912 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.23 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52d4cb14-db01-4356-992b-9aeb85743758}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.845929977+00:00 stderr F I1213 00:20:09.845127 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846066591+00:00 stderr F I1213 00:20:09.846022 28750 port_cache.go:96] port-cache(openshift-marketplace_redhat-operators-zg7cl): added port &{name:openshift-marketplace_redhat-operators-zg7cl uuid:1bf49fc6-86bc-42b9-9630-d7184a92b0eb logicalSwitch:crc ips:[0xc000d2abd0] mac:[10 88 10 217 0 35] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.35/23] and MAC: 0a:58:0a:d9:00:23 2025-12-13T00:20:09.846078431+00:00 stderr F I1213 00:20:09.846060 28750 pods.go:220] [openshift-marketplace/redhat-operators-zg7cl] addLogicalPort took 8.803782ms, libovsdb time 4.32018ms 
2025-12-13T00:20:09.846078431+00:00 stderr F I1213 00:20:09.846072 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/redhat-operators-zg7cl after 0 failed attempt(s) 2025-12-13T00:20:09.846088881+00:00 stderr F I1213 00:20:09.846078 28750 default_network_controller.go:699] Recording success event on pod openshift-marketplace/redhat-operators-zg7cl 2025-12-13T00:20:09.846088881+00:00 stderr F I1213 00:20:09.845891 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} options:{GoMap:map[iface-id-ver:8a5ae51d-d173-4531-8975-f164c975ce1f requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:0b 10.217.0.11]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.11 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:d5c46dfc-43b7-4f29-8e0b-bf168c892ee6}]}}] Timeout: Where:[where column _uuid == 
{c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846105992+00:00 stderr F I1213 00:20:09.846083 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846105992+00:00 stderr F I1213 00:20:09.846094 28750 obj_retry.go:358] Adding new object: *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp 2025-12-13T00:20:09.846118492+00:00 stderr F I1213 00:20:09.846111 28750 ovn.go:134] Ensuring zone local for Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp in node crc 2025-12-13T00:20:09.846164713+00:00 stderr F I1213 00:20:09.846140 28750 base_network_controller_pods.go:476] [default/openshift-apiserver/apiserver-7fc54b8dd7-d2bhp] creating logical port openshift-apiserver_apiserver-7fc54b8dd7-d2bhp for pod on switch crc 2025-12-13T00:20:09.846164713+00:00 stderr F I1213 00:20:09.844873 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {7f89eae9-cb3f-438c-8fc8-824d28075b04}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846164713+00:00 stderr F I1213 00:20:09.845853 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "0b5d722a-1123-4935-9740-52a08d018bc9" 2025-12-13T00:20:09.846181094+00:00 stderr F I1213 00:20:09.846172 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "10603adc-d495-423c-9459-4caa405960bb" 2025-12-13T00:20:09.846193564+00:00 stderr F I1213 00:20:09.846181 28750 
ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "9025b167-d0fc-419f-92c1-add28909ab7c" 2025-12-13T00:20:09.846193564+00:00 stderr F I1213 00:20:09.846189 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "45a8038e-e7f2-4d93-a6f5-7753aa54e63f" 2025-12-13T00:20:09.846206335+00:00 stderr F I1213 00:20:09.846195 28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" 2025-12-13T00:20:09.846206335+00:00 stderr F I1213 00:20:09.846123 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.9 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b428b80-dee0-4a48-a105-d91e72f01b56}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846254006+00:00 stderr F I1213 00:20:09.846204 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.42 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b935134-b25d-48e2-b3d5-059fcc367ac0}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846254006+00:00 stderr F I1213 00:20:09.845921 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:52d4cb14-db01-4356-992b-9aeb85743758}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846295397+00:00 stderr F I1213 00:20:09.846258 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:1b935134-b25d-48e2-b3d5-059fcc367ac0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846295397+00:00 stderr F I1213 00:20:09.846272 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:8b428b80-dee0-4a48-a105-d91e72f01b56}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846357109+00:00 stderr F I1213 00:20:09.846284 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} options:{GoMap:map[iface-id-ver:7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:2a 10.217.0.42]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {145016fd-3cef-4f28-abce-7d4c2c73477c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:145016fd-3cef-4f28-abce-7d4c2c73477c}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:145016fd-3cef-4f28-abce-7d4c2c73477c}]}}] Timeout: Where:[where column _uuid == {ca455606-530d-4273-b8ee-8ed4760b1f66}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.42 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {1b935134-b25d-48e2-b3d5-059fcc367ac0}] Until: Durable: Comment: Lock: UUID: 
UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:1b935134-b25d-48e2-b3d5-059fcc367ac0}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846357109+00:00 stderr F I1213 00:20:09.846249 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} options:{GoMap:map[iface-id-ver:530553aa-0a1d-423e-8a22-f5eb4bdbb883 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:17 10.217.0.23]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d2f291e9-b4fe-47a7-a644-298254d226c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d2f291e9-b4fe-47a7-a644-298254d226c5}]}}] Timeout: Where:[where column _uuid == {8f19c25c-23f2-4be6-ae5b-f3e31e0c5430}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.23 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {52d4cb14-db01-4356-992b-9aeb85743758}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:52d4cb14-db01-4356-992b-9aeb85743758}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 
2025-12-13T00:20:09.846404100+00:00 stderr F I1213 00:20:09.846301 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} options:{GoMap:map[iface-id-ver:0f394926-bdb9-425c-b36e-264d7fd34550 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:09 10.217.0.9]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f5604df7-c1b9-4360-a570-e22fbf62c520}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f5604df7-c1b9-4360-a570-e22fbf62c520}]}}] Timeout: Where:[where column _uuid == {327158bc-926b-43b6-928d-23c33a7f6443}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.9 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8b428b80-dee0-4a48-a105-d91e72f01b56}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:8b428b80-dee0-4a48-a105-d91e72f01b56}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846437851+00:00 stderr F I1213 00:20:09.846394 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.37 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column 
_uuid == {3eaca145-e78f-4caa-a5e0-078f141ee3c5}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846437851+00:00 stderr F I1213 00:20:09.846399 28750 model_client.go:381] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} options:{GoMap:map[iface-id-ver:41e8708a-e40d-4d28-846b-c52eda4d1755 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {005abe2f-f66d-42f8-945c-fbc80f820ed4}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846493842+00:00 stderr F I1213 00:20:09.846457 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:3eaca145-e78f-4caa-a5e0-078f141ee3c5}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846493842+00:00 stderr F I1213 00:20:09.846010 28750 pods.go:220] [openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw] addLogicalPort took 8.047622ms, libovsdb time 1.255324ms 2025-12-13T00:20:09.846504763+00:00 stderr F I1213 00:20:09.846130 28750 port_cache.go:96] port-cache(openshift-multus_multus-admission-controller-6c7c885997-4hbbc): added port &{name:openshift-multus_multus-admission-controller-6c7c885997-4hbbc uuid:db41b4eb-d15d-44fd-9ed7-0eab6f23f4c6 logicalSwitch:crc ips:[0xc0018980c0] mac:[10 88 10 217 0 32] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.32/23] and MAC: 0a:58:0a:d9:00:20 2025-12-13T00:20:09.846514523+00:00 stderr F I1213 00:20:09.846501 28750 pods.go:220] [openshift-multus/multus-admission-controller-6c7c885997-4hbbc] addLogicalPort took 8.072312ms, libovsdb time 1.365437ms 2025-12-13T00:20:09.846527973+00:00 stderr F I1213 00:20:09.846511 
28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc after 0 failed attempt(s) 2025-12-13T00:20:09.846527973+00:00 stderr F I1213 00:20:09.846497 28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw after 0 failed attempt(s) 2025-12-13T00:20:09.846527973+00:00 stderr F I1213 00:20:09.846500 28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.24 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {86257c8a-6754-4af2-9aae-87ed521f1c5f}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846527973+00:00 stderr F I1213 00:20:09.846518 28750 default_network_controller.go:699] Recording success event on pod openshift-multus/multus-admission-controller-6c7c885997-4hbbc 2025-12-13T00:20:09.846539334+00:00 stderr F I1213 00:20:09.846524 28750 default_network_controller.go:699] Recording success event on pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw 2025-12-13T00:20:09.846548804+00:00 stderr F I1213 00:20:09.846470 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846585045+00:00 stderr F I1213 00:20:09.846550 28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:86257c8a-6754-4af2-9aae-87ed521f1c5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: 
Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846585045+00:00 stderr F I1213 00:20:09.846494 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} options:{GoMap:map[iface-id-ver:220875e2-503f-46b5-aaa6-bb8fc45743cc requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:25 10.217.0.37]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d9d09e10-aa79-4a77-ba42-e77ca54f8045}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d9d09e10-aa79-4a77-ba42-e77ca54f8045}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d9d09e10-aa79-4a77-ba42-e77ca54f8045}]}}] Timeout: Where:[where column _uuid == {841efc2c-f27a-4621-aebc-4d7d48c0798b}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.37 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {3eaca145-e78f-4caa-a5e0-078f141ee3c5}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:3eaca145-e78f-4caa-a5e0-078f141ee3c5}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846647206+00:00 stderr F I1213 00:20:09.846605 28750 port_cache.go:96] port-cache(openshift-console_downloads-65476884b9-9wcvx): added port &{name:openshift-console_downloads-65476884b9-9wcvx uuid:745a40f7-2acc-4e2b-a087-861e0ea97ffe logicalSwitch:crc 
ips:[0xc00160f740] mac:[10 88 10 217 0 66] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.66/23] and MAC: 0a:58:0a:d9:00:42 2025-12-13T00:20:09.846647206+00:00 stderr F I1213 00:20:09.846578 28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} options:{GoMap:map[iface-id-ver:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:18 10.217.0.24]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {f24db1f4-18a4-418a-9c99-1d94ebfba0da}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:f24db1f4-18a4-418a-9c99-1d94ebfba0da}]}}] Timeout: Where:[where column _uuid == {05880ae4-e549-45bb-8449-f9573bf10469}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.24 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {86257c8a-6754-4af2-9aae-87ed521f1c5f}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:86257c8a-6754-4af2-9aae-87ed521f1c5f}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}] 2025-12-13T00:20:09.846658507+00:00 stderr F I1213 00:20:09.846639 28750 pods.go:220] [openshift-console/downloads-65476884b9-9wcvx] addLogicalPort took 4.163804ms, libovsdb time 
1.269065ms
2025-12-13T00:20:09.846658507+00:00 stderr F I1213 00:20:09.846652   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-console/downloads-65476884b9-9wcvx after 0 failed attempt(s)
2025-12-13T00:20:09.846694248+00:00 stderr F I1213 00:20:09.846654   28750 port_cache.go:96] port-cache(openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb): added port &{name:openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb uuid:805e2f41-6cb8-4ccf-9939-37cfb4fa5509 logicalSwitch:crc ips:[0xc0013e10e0] mac:[10 88 10 217 0 5] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.5/23] and MAC: 0a:58:0a:d9:00:05
2025-12-13T00:20:09.846694248+00:00 stderr F I1213 00:20:09.846686   28750 pods.go:220] [openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb] addLogicalPort took 5.173631ms, libovsdb time 1.562542ms
2025-12-13T00:20:09.846705538+00:00 stderr F I1213 00:20:09.846687   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "6268b7fe-8910-4505-b404-6f1df638105c"
2025-12-13T00:20:09.846705538+00:00 stderr F I1213 00:20:09.846696   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb after 0 failed attempt(s)
2025-12-13T00:20:09.846705538+00:00 stderr F I1213 00:20:09.846698   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
2025-12-13T00:20:09.846716308+00:00 stderr F I1213 00:20:09.846702   28750 default_network_controller.go:699] Recording success event on pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb
2025-12-13T00:20:09.846716308+00:00 stderr F I1213 00:20:09.846678   28750 default_network_controller.go:699] Recording success event on pod openshift-console/downloads-65476884b9-9wcvx
2025-12-13T00:20:09.846733429+00:00 stderr F I1213 00:20:09.846607   28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {336e83b0-c9fa-498c-aea8-c8cc7385ea1e}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.846763420+00:00 stderr F I1213 00:20:09.846731   28750 port_cache.go:96] port-cache(openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t): added port &{name:openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t uuid:710ea152-1844-44ad-b1a6-805ec9a3700e logicalSwitch:crc ips:[0xc00107ff80] mac:[10 88 10 217 0 45] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.45/23] and MAC: 0a:58:0a:d9:00:2d
2025-12-13T00:20:09.846774230+00:00 stderr F I1213 00:20:09.846757   28750 pods.go:220] [openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t] addLogicalPort took 4.914644ms, libovsdb time 1.393357ms
2025-12-13T00:20:09.846806581+00:00 stderr F I1213 00:20:09.846772   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t after 0 failed attempt(s)
2025-12-13T00:20:09.846806581+00:00 stderr F I1213 00:20:09.846797   28750 default_network_controller.go:699] Recording success event on pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t
2025-12-13T00:20:09.846820151+00:00 stderr F I1213 00:20:09.846812   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "7d51f445-054a-4e4f-a67b-a828f5a32511"
2025-12-13T00:20:09.846913285+00:00 stderr F I1213 00:20:09.846878   28750 port_cache.go:96] port-cache(openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr): added port &{name:openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr uuid:2d98188d-6d49-48e7-8956-57a5c46efe26 logicalSwitch:crc ips:[0xc0016055f0] mac:[10 88 10 217 0 16] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.16/23] and MAC: 0a:58:0a:d9:00:10
2025-12-13T00:20:09.846913285+00:00 stderr F I1213 00:20:09.846905   28750 pods.go:220] [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr] addLogicalPort took 7.297751ms, libovsdb time 1.567712ms
2025-12-13T00:20:09.846924695+00:00 stderr F I1213 00:20:09.846916   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr after 0 failed attempt(s)
2025-12-13T00:20:09.846954356+00:00 stderr F I1213 00:20:09.846924   28750 default_network_controller.go:699] Recording success event on pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr
2025-12-13T00:20:09.847021708+00:00 stderr F I1213 00:20:09.846993   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
2025-12-13T00:20:09.847037278+00:00 stderr F I1213 00:20:09.846997   28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.88 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61452dfb-4328-4d5d-af1c-a7de40dfbc7a}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.847110700+00:00 stderr F I1213 00:20:09.847068   28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:61452dfb-4328-4d5d-af1c-a7de40dfbc7a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.847221423+00:00 stderr F I1213 00:20:09.847180   28750 port_cache.go:96] port-cache(openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8): added port &{name:openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8 uuid:99ef3a4b-7858-4c9b-90db-217867afe36a logicalSwitch:crc ips:[0xc001b88480] mac:[10 88 10 217 0 19] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.19/23] and MAC: 0a:58:0a:d9:00:13
2025-12-13T00:20:09.847221423+00:00 stderr F I1213 00:20:09.847206   28750 pods.go:220] [openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8] addLogicalPort took 7.172519ms, libovsdb time 2.824978ms
2025-12-13T00:20:09.847221423+00:00 stderr F I1213 00:20:09.847214   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8 after 0 failed attempt(s)
2025-12-13T00:20:09.847236434+00:00 stderr F I1213 00:20:09.847220   28750 default_network_controller.go:699] Recording success event on pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8
2025-12-13T00:20:09.847236434+00:00 stderr F I1213 00:20:09.847112   28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} options:{GoMap:map[iface-id-ver:21d29937-debd-4407-b2b1-d1053cb0f342 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:58 10.217.0.88]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c2174bce-e1da-468b-aa60-b9409f80c104}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c2174bce-e1da-468b-aa60-b9409f80c104}]}}] Timeout: Where:[where column _uuid == {7f89eae9-cb3f-438c-8fc8-824d28075b04}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.88 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {61452dfb-4328-4d5d-af1c-a7de40dfbc7a}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:61452dfb-4328-4d5d-af1c-a7de40dfbc7a}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.847246634+00:00 stderr F I1213 00:20:09.847236   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
2025-12-13T00:20:09.847890911+00:00 stderr F I1213 00:20:09.847856   28750 port_cache.go:96] port-cache(openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z): added port &{name:openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z uuid:f5604df7-c1b9-4360-a570-e22fbf62c520 logicalSwitch:crc ips:[0xc001389710] mac:[10 88 10 217 0 9] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.9/23] and MAC: 0a:58:0a:d9:00:09
2025-12-13T00:20:09.847890911+00:00 stderr F I1213 00:20:09.847876   28750 pods.go:220] [openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z] addLogicalPort took 5.312797ms, libovsdb time 1.547263ms
2025-12-13T00:20:09.847890911+00:00 stderr F I1213 00:20:09.847883   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z after 0 failed attempt(s)
2025-12-13T00:20:09.847914492+00:00 stderr F I1213 00:20:09.847889   28750 default_network_controller.go:699] Recording success event on pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z
2025-12-13T00:20:09.847914492+00:00 stderr F I1213 00:20:09.847897   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "0f394926-bdb9-425c-b36e-264d7fd34550"
2025-12-13T00:20:09.847988204+00:00 stderr F I1213 00:20:09.847927   28750 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf): added port &{name:openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf uuid:ce38c47f-ab41-4ec2-8e0f-f92c2e23354c logicalSwitch:crc ips:[0xc0015382d0] mac:[10 88 10 217 0 11] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.11/23] and MAC: 0a:58:0a:d9:00:0b
2025-12-13T00:20:09.847988204+00:00 stderr F I1213 00:20:09.847917   28750 model_client.go:381] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.82 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c43bc3ed-4021-47db-b48f-03cce83a4268}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.847988204+00:00 stderr F I1213 00:20:09.847977   28750 pods.go:220] [openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf] addLogicalPort took 8.199927ms, libovsdb time 2.023196ms
2025-12-13T00:20:09.848006435+00:00 stderr F I1213 00:20:09.847991   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf after 0 failed attempt(s)
2025-12-13T00:20:09.848006435+00:00 stderr F I1213 00:20:09.847999   28750 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf
2025-12-13T00:20:09.848022545+00:00 stderr F I1213 00:20:09.848011   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "8a5ae51d-d173-4531-8975-f164c975ce1f"
2025-12-13T00:20:09.848035385+00:00 stderr F I1213 00:20:09.848008   28750 model_client.go:397] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c43bc3ed-4021-47db-b48f-03cce83a4268}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.848103657+00:00 stderr F I1213 00:20:09.848030   28750 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} options:{GoMap:map[iface-id-ver:41e8708a-e40d-4d28-846b-c52eda4d1755 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:52 10.217.0.82]} tag_request:{GoSet:[]} type:] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {005abe2f-f66d-42f8-945c-fbc80f820ed4}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {d6057acb-0f02-4ebe-8cea-b3228e61764c}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:005abe2f-f66d-42f8-945c-fbc80f820ed4}]}}] Timeout: Where:[where column _uuid == {336e83b0-c9fa-498c-aea8-c8cc7385ea1e}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:38.102.83.51 logical_ip:10.217.0.82 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c43bc3ed-4021-47db-b48f-03cce83a4268}] Until: Durable: Comment: Lock: UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:c43bc3ed-4021-47db-b48f-03cce83a4268}]}}] Timeout: Where:[where column _uuid == {c67e03de-a22d-4bb9-8c3d-72aee14a0fb3}] Until: Durable: Comment: Lock: UUID: UUIDName:}]
2025-12-13T00:20:09.848225471+00:00 stderr F I1213 00:20:09.848193   28750 port_cache.go:96] port-cache(openshift-marketplace_community-operators-s2hxn): added port &{name:openshift-marketplace_community-operators-s2hxn uuid:d9d09e10-aa79-4a77-ba42-e77ca54f8045 logicalSwitch:crc ips:[0xc000d6a180] mac:[10 88 10 217 0 37] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.37/23] and MAC: 0a:58:0a:d9:00:25
2025-12-13T00:20:09.848225471+00:00 stderr F I1213 00:20:09.848210   28750 pods.go:220] [openshift-marketplace/community-operators-s2hxn] addLogicalPort took 8.70142ms, libovsdb time 1.692537ms
2025-12-13T00:20:09.848225471+00:00 stderr F I1213 00:20:09.848218   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-marketplace/community-operators-s2hxn after 0 failed attempt(s)
2025-12-13T00:20:09.848241631+00:00 stderr F I1213 00:20:09.848223   28750 default_network_controller.go:699] Recording success event on pod openshift-marketplace/community-operators-s2hxn
2025-12-13T00:20:09.848241631+00:00 stderr F I1213 00:20:09.848231   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "220875e2-503f-46b5-aaa6-bb8fc45743cc"
2025-12-13T00:20:09.848559020+00:00 stderr F I1213 00:20:09.848519   28750 port_cache.go:96] port-cache(openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2): added port &{name:openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2 uuid:f24db1f4-18a4-418a-9c99-1d94ebfba0da logicalSwitch:crc ips:[0xc001389c20] mac:[10 88 10 217 0 24] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.24/23] and MAC: 0a:58:0a:d9:00:18
2025-12-13T00:20:09.848559020+00:00 stderr F I1213 00:20:09.848538   28750 pods.go:220] [openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2] addLogicalPort took 8.456893ms, libovsdb time 1.930273ms
2025-12-13T00:20:09.848559020+00:00 stderr F I1213 00:20:09.848548   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2 after 0 failed attempt(s)
2025-12-13T00:20:09.848559020+00:00 stderr F I1213 00:20:09.848553   28750 default_network_controller.go:699] Recording success event on pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2
2025-12-13T00:20:09.848577960+00:00 stderr F I1213 00:20:09.848563   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
2025-12-13T00:20:09.848768285+00:00 stderr F I1213 00:20:09.848733   28750 port_cache.go:96] port-cache(openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc): added port &{name:openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc uuid:d2f291e9-b4fe-47a7-a644-298254d226c5 logicalSwitch:crc ips:[0xc0009afd70] mac:[10 88 10 217 0 23] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.23/23] and MAC: 0a:58:0a:d9:00:17
2025-12-13T00:20:09.848768285+00:00 stderr F I1213 00:20:09.848754   28750 pods.go:220] [openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc] addLogicalPort took 9.174103ms, libovsdb time 2.478288ms
2025-12-13T00:20:09.848783296+00:00 stderr F I1213 00:20:09.848764   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc after 0 failed attempt(s)
2025-12-13T00:20:09.848783296+00:00 stderr F I1213 00:20:09.848772   28750 default_network_controller.go:699] Recording success event on pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc
2025-12-13T00:20:09.848795976+00:00 stderr F I1213 00:20:09.848783   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "530553aa-0a1d-423e-8a22-f5eb4bdbb883"
2025-12-13T00:20:09.849270759+00:00 stderr F I1213 00:20:09.849227   28750 port_cache.go:96] port-cache(openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs): added port &{name:openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs uuid:c2174bce-e1da-468b-aa60-b9409f80c104 logicalSwitch:crc ips:[0xc0014a9ef0] mac:[10 88 10 217 0 88] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.88/23] and MAC: 0a:58:0a:d9:00:58
2025-12-13T00:20:09.849270759+00:00 stderr F I1213 00:20:09.849249   28750 pods.go:220] [openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs] addLogicalPort took 6.231451ms, libovsdb time 2.107357ms
2025-12-13T00:20:09.849270759+00:00 stderr F I1213 00:20:09.849258   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs after 0 failed attempt(s)
2025-12-13T00:20:09.849270759+00:00 stderr F I1213 00:20:09.849265   28750 default_network_controller.go:699] Recording success event on pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs
2025-12-13T00:20:09.849293470+00:00 stderr F I1213 00:20:09.849276   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "21d29937-debd-4407-b2b1-d1053cb0f342"
2025-12-13T00:20:09.849293470+00:00 stderr F I1213 00:20:09.849284   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9"
2025-12-13T00:20:09.849304110+00:00 stderr F I1213 00:20:09.849278   28750 port_cache.go:96] port-cache(openshift-kube-apiserver_installer-13-crc): added port &{name:openshift-kube-apiserver_installer-13-crc uuid:145016fd-3cef-4f28-abce-7d4c2c73477c logicalSwitch:crc ips:[0xc000d89380] mac:[10 88 10 217 0 42] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.42/23] and MAC: 0a:58:0a:d9:00:2a
2025-12-13T00:20:09.849313930+00:00 stderr F I1213 00:20:09.849298   28750 pods.go:220] [openshift-kube-apiserver/installer-13-crc] addLogicalPort took 10.614982ms, libovsdb time 2.980362ms
2025-12-13T00:20:09.849352481+00:00 stderr F I1213 00:20:09.849330   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-kube-apiserver/installer-13-crc after 0 failed attempt(s)
2025-12-13T00:20:09.849352481+00:00 stderr F I1213 00:20:09.849340   28750 default_network_controller.go:699] Recording success event on pod openshift-kube-apiserver/installer-13-crc
2025-12-13T00:20:09.849478615+00:00 stderr F I1213 00:20:09.849448   28750 port_cache.go:96] port-cache(openshift-apiserver_apiserver-7fc54b8dd7-d2bhp): added port &{name:openshift-apiserver_apiserver-7fc54b8dd7-d2bhp uuid:005abe2f-f66d-42f8-945c-fbc80f820ed4 logicalSwitch:crc ips:[0xc00121dce0] mac:[10 88 10 217 0 82] expires:{wall:0 ext:0 loc:}} with IP: [10.217.0.82/23] and MAC: 0a:58:0a:d9:00:52
2025-12-13T00:20:09.849522456+00:00 stderr F I1213 00:20:09.849505   28750 pods.go:220] [openshift-apiserver/apiserver-7fc54b8dd7-d2bhp] addLogicalPort took 3.373113ms, libovsdb time 1.400038ms
2025-12-13T00:20:09.849558807+00:00 stderr F I1213 00:20:09.849545   28750 obj_retry.go:379] Retry successful for *v1.Pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp after 0 failed attempt(s)
2025-12-13T00:20:09.849603558+00:00 stderr F I1213 00:20:09.849522   28750 ovnkube_controller.go:808] Unexpected last event type (1) in cache for pod with UID "41e8708a-e40d-4d28-846b-c52eda4d1755"
2025-12-13T00:20:09.849630099+00:00 stderr F I1213 00:20:09.849579   28750 default_network_controller.go:699] Recording success event on pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp
2025-12-13T00:20:09.849682330+00:00 stderr F I1213 00:20:09.849668   28750 obj_retry.go:413] Function iterateRetryResources for *v1.Pod ended (in 13.006518ms)
2025-12-13T00:20:10.268355216+00:00 stderr F I1213 00:20:10.268322   28750 ovs.go:159] Exec(42): /usr/bin/ovs-appctl --timeout=15 -t /var/run/ovn/ovn-controller.28140.ctl connection-status
2025-12-13T00:20:10.272747377+00:00 stderr F I1213 00:20:10.272723   28750 ovs.go:162] Exec(42): stdout: "not connected\n"
2025-12-13T00:20:10.272787278+00:00 stderr F I1213 00:20:10.272776   28750 ovs.go:163] Exec(42): stderr: ""
2025-12-13T00:20:10.272815169+00:00 stderr F I1213 00:20:10.272806   28750 default_node_network_controller.go:385] Node connection status = not connected
2025-12-13T00:20:10.367949432+00:00 stderr F I1213 00:20:10.367883   28750 obj_retry.go:555] Update event received for resource *v1.Pod, old object is equal to new: false
2025-12-13T00:20:10.367949432+00:00 stderr F I1213 00:20:10.367903   28750 default_network_controller.go:670] Recording update event on pod openshift-ovn-kubernetes/ovnkube-node-brz8k
2025-12-13T00:20:10.367949432+00:00 stderr F I1213 00:20:10.367918   28750 obj_retry.go:607] Update event received for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-brz8k
2025-12-13T00:20:10.368002094+00:00 stderr F I1213 00:20:10.367981   28750 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-brz8k in node crc
2025-12-13T00:20:10.368002094+00:00 stderr F I1213 00:20:10.367990   28750 default_network_controller.go:699] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-brz8k
2025-12-13T00:20:10.368012004+00:00 stderr F I1213 00:20:10.367998   28750 obj_retry.go:555] Update event received for resource *factory.egressIPPod, old object is equal to new: false
2025-12-13T00:20:10.368012004+00:00 stderr F I1213 00:20:10.368008   28750 obj_retry.go:607] Update event received for *factory.egressIPPod openshift-ovn-kubernetes/ovnkube-node-brz8k
2025-12-13T00:20:10.768144867+00:00 stderr F I1213 00:20:10.768101   28750 ovs.go:159] Exec(43): /usr/bin/ovs-appctl --timeout=15 -t /var/run/ovn/ovn-controller.28140.ctl connection-status
2025-12-13T00:20:10.773375781+00:00 stderr F I1213 00:20:10.773332   28750 ovs.go:162] Exec(43): stdout: "connected\n"
2025-12-13T00:20:10.773375781+00:00 stderr F I1213 00:20:10.773354   28750 ovs.go:163] Exec(43): stderr: ""
2025-12-13T00:20:10.773375781+00:00 stderr F I1213 00:20:10.773365   28750 default_node_network_controller.go:385] Node connection status = connected
2025-12-13T00:20:10.773405452+00:00 stderr F I1213 00:20:10.773379   28750 ovs.go:159] Exec(44): /usr/bin/ovs-vsctl --timeout=15 -- br-exists br-int
2025-12-13T00:20:10.781040883+00:00 stderr F I1213 00:20:10.780998   28750 ovs.go:162] Exec(44): stdout: ""
2025-12-13T00:20:10.781040883+00:00 stderr F I1213 00:20:10.781016   28750 ovs.go:163] Exec(44): stderr: ""
2025-12-13T00:20:10.781040883+00:00 stderr F I1213 00:20:10.781029   28750 ovs.go:159] Exec(45): /usr/bin/ovs-ofctl dump-aggregate br-int
2025-12-13T00:20:10.788835358+00:00 stderr F I1213 00:20:10.788793   28750 ovs.go:162] Exec(45): stdout: "NXST_AGGREGATE reply (xid=0x4): packet_count=24026 byte_count=11189561 flow_count=2939\n"
2025-12-13T00:20:10.788835358+00:00 stderr F I1213 00:20:10.788821   28750 ovs.go:163] Exec(45): stderr: ""
2025-12-13T00:20:10.788875139+00:00 stderr F I1213 00:20:10.788838   28750 ovs.go:159] Exec(46): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface patch-br-ex_crc-to-br-int ofport
2025-12-13T00:20:10.797440975+00:00 stderr F I1213 00:20:10.797393   28750 ovs.go:162] Exec(46): stdout: "2\n"
2025-12-13T00:20:10.797440975+00:00 stderr F I1213 00:20:10.797428   28750 ovs.go:163] Exec(46): stderr: ""
2025-12-13T00:20:10.797463986+00:00 stderr F I1213 00:20:10.797439   28750 gateway.go:365] Gateway is ready
2025-12-13T00:20:10.797463986+00:00 stderr F I1213 00:20:10.797455   28750 gateway_localnet.go:78] Creating Local Gateway Openflow Manager
2025-12-13T00:20:10.797487776+00:00 stderr F I1213 00:20:10.797471   28750 ovs.go:159] Exec(47): /usr/bin/ovs-vsctl --timeout=15 get Interface patch-br-ex_crc-to-br-int ofport
2025-12-13T00:20:10.806445883+00:00 stderr F I1213 00:20:10.806401   28750 ovs.go:162] Exec(47): stdout: "2\n"
2025-12-13T00:20:10.806445883+00:00 stderr F I1213 00:20:10.806430   28750 ovs.go:163] Exec(47): stderr: ""
2025-12-13T00:20:10.806466334+00:00 stderr F I1213 00:20:10.806444   28750 ovs.go:159] Exec(48): /usr/bin/ovs-vsctl --timeout=15 get interface ens3 ofport
2025-12-13T00:20:10.815418940+00:00 stderr F I1213 00:20:10.815370   28750 ovs.go:162] Exec(48): stdout: "1\n"
2025-12-13T00:20:10.815418940+00:00 stderr F I1213 00:20:10.815401   28750 ovs.go:163] Exec(48): stderr: ""
2025-12-13T00:20:10.819309958+00:00 stderr F I1213 00:20:10.819247   28750 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 127.0.0.1/8 lo
2025-12-13T00:20:10.819309958+00:00 stderr F I1213 00:20:10.819276   28750 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 10.217.0.2/23 ovn-k8s-mp0
2025-12-13T00:20:10.819309958+00:00 stderr F I1213 00:20:10.819289   28750 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 169.254.169.2/29 br-ex
2025-12-13T00:20:10.819309958+00:00 stderr F I1213 00:20:10.819298   28750 node_ip_handler_linux.go:228] Node primary address changed to 192.168.126.11. Updating OVN encap IP.
2025-12-13T00:20:10.819328389+00:00 stderr F I1213 00:20:10.819310   28750 ovs.go:159] Exec(49): /usr/bin/ovs-vsctl --timeout=15 get Open_vSwitch . external_ids:ovn-encap-ip
2025-12-13T00:20:10.826812874+00:00 stderr F I1213 00:20:10.826716   28750 ovs.go:162] Exec(49): stdout: "\"192.168.126.11\"\n"
2025-12-13T00:20:10.826812874+00:00 stderr F I1213 00:20:10.826745   28750 ovs.go:163] Exec(49): stderr: ""
2025-12-13T00:20:10.826812874+00:00 stderr F I1213 00:20:10.826757   28750 node_ip_handler_linux.go:482] Will not update encap IP 192.168.126.11 - it is already configured
2025-12-13T00:20:10.826812874+00:00 stderr F I1213 00:20:10.826765   28750 node_ip_handler_linux.go:441] Node address changed to map[172.17.0.5/24:{} 172.18.0.5/24:{} 172.19.0.5/24:{} 192.168.122.10/24:{} 192.168.126.11/24:{} 38.102.83.51/24:{}]. Updating annotations.
2025-12-13T00:20:10.827166194+00:00 stderr F I1213 00:20:10.827134   28750 kube.go:128] Setting annotations map[k8s.ovn.org/host-cidrs:["172.17.0.5/24","172.18.0.5/24","172.19.0.5/24","192.168.122.10/24","192.168.126.11/24","38.102.83.51/24"] k8s.ovn.org/l3-gateway-config:{"default":{"mode":"local","interface-id":"br-ex_crc","mac-address":"fa:16:3e:f0:63:3e","ip-addresses":["38.102.83.51/24"],"ip-address":"38.102.83.51/24","next-hops":["38.102.83.1"],"next-hop":"38.102.83.1","node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/node-chassis-id:017e52b0-97d3-4d7d-aae4-9b216aa025aa k8s.ovn.org/node-primary-ifaddr:{"ipv4":"38.102.83.51/24"}] on node crc
2025-12-13T00:20:10.837574701+00:00 stderr F I1213 00:20:10.837523   28750 gateway_shared_intf.go:2029] Setting OVN Masquerade route with source: 192.168.126.11
2025-12-13T00:20:10.837634713+00:00 stderr F I1213 00:20:10.837610   28750 ovs.go:159] Exec(50): /usr/sbin/ip route replace table 7 10.217.4.0/23 via 10.217.0.1 dev ovn-k8s-mp0
2025-12-13T00:20:10.837852849+00:00 stderr F I1213 00:20:10.837786   28750 route_manager.go:93] Route Manager: attempting to add route: {Ifindex: 11 Dst: 169.254.169.1/32 Src: 192.168.126.11 Gw: Flags: [] Table: 0 Realm: 0}
2025-12-13T00:20:10.838106946+00:00 stderr F I1213 00:20:10.838072   28750 route_manager.go:110] Route Manager: completed adding route: {Ifindex: 11 Dst: 169.254.169.1/32 Src: 192.168.126.11 Gw: Flags: [] Table: 254 Realm: 0}
2025-12-13T00:20:10.839792563+00:00 stderr F I1213 00:20:10.839739   28750 route_manager.go:149] Route Manager: netlink route addition event: "{Ifindex: 5 Dst: 10.217.4.0/23 Src: Gw: 10.217.0.1 Flags: [] Table: 7 Realm: 0}"
2025-12-13T00:20:10.840103181+00:00 stderr F I1213 00:20:10.840042   28750 ovs.go:162] Exec(50): stdout: ""
2025-12-13T00:20:10.840103181+00:00 stderr F I1213 00:20:10.840057   28750 ovs.go:163] Exec(50): stderr: ""
2025-12-13T00:20:10.840103181+00:00 stderr F I1213 00:20:10.840065   28750 gateway_shared_intf.go:1674] Successfully added route into custom routing table: 7
2025-12-13T00:20:10.840103181+00:00 stderr F I1213 00:20:10.840074   28750 ovs.go:159] Exec(51): /usr/sbin/ip -4 rule
2025-12-13T00:20:10.842472767+00:00 stderr F I1213 00:20:10.842434   28750 ovs.go:162] Exec(51): stdout: "0:\tfrom all lookup local\n30:\tfrom all fwmark 0x1745ec lookup 7\n32766:\tfrom all lookup main\n32767:\tfrom all lookup default\n"
2025-12-13T00:20:10.842472767+00:00 stderr F I1213 00:20:10.842457   28750 ovs.go:163] Exec(51): stderr: ""
2025-12-13T00:20:10.842472767+00:00 stderr F I1213 00:20:10.842469   28750 ovs.go:159] Exec(52): /usr/sbin/sysctl -w net.ipv4.conf.ovn-k8s-mp0.rp_filter=2
2025-12-13T00:20:10.843517775+00:00 stderr F I1213 00:20:10.843484   28750 ovs.go:162] Exec(52): stdout: "net.ipv4.conf.ovn-k8s-mp0.rp_filter = 2\n"
2025-12-13T00:20:10.843517775+00:00 stderr F I1213 00:20:10.843501   28750 ovs.go:163] Exec(52): stderr: ""
2025-12-13T00:20:10.843517775+00:00 stderr F I1213 00:20:10.843512   28750 ovs.go:159] Exec(53): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface patch-br-ex_crc-to-br-int ofport
2025-12-13T00:20:10.852439271+00:00 stderr F I1213 00:20:10.852384   28750 ovs.go:162] Exec(53): stdout: "2\n"
2025-12-13T00:20:10.852439271+00:00 stderr F I1213 00:20:10.852421   28750 ovs.go:163] Exec(53): stderr: ""
2025-12-13T00:20:10.852456841+00:00 stderr F I1213 00:20:10.852444   28750 ovs.go:159] Exec(54): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface ens3 ofport
2025-12-13T00:20:10.862649023+00:00 stderr F I1213 00:20:10.862601   28750 ovs.go:162] Exec(54): stdout: "1\n"
2025-12-13T00:20:10.862649023+00:00 stderr F I1213 00:20:10.862632   28750 ovs.go:163] Exec(54): stderr: ""
2025-12-13T00:20:10.866105488+00:00 stderr F I1213 00:20:10.866029   28750 gateway_iptables.go:487] Chain: "OVN-KUBE-ITP" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-ITP --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:10.868570186+00:00 stderr F I1213 00:20:10.868463   28750 gateway_iptables.go:487] Chain: "OVN-KUBE-ITP" in table: "mangle" already exists, skipping creation: running [/usr/sbin/iptables -t mangle -N OVN-KUBE-ITP --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:10.871265890+00:00 stderr F I1213 00:20:10.871202   28750 gateway_iptables.go:487] Chain: "OVN-KUBE-EGRESS-SVC" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-EGRESS-SVC --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:10.873417830+00:00 stderr F I1213 00:20:10.873363   28750 gateway_iptables.go:487] Chain: "OVN-KUBE-NODEPORT" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-NODEPORT --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:10.876883395+00:00 stderr F I1213 00:20:10.876836   28750 gateway_iptables.go:487] Chain: "OVN-KUBE-EXTERNALIP" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-EXTERNALIP --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:10.880313560+00:00 stderr F I1213 00:20:10.880260   28750 gateway_iptables.go:487] Chain: "OVN-KUBE-ETP" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-ETP --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:10.880336250+00:00 stderr F I1213 00:20:10.880307   28750 iptables.go:108] Creating table: mangle chain: OUTPUT
2025-12-13T00:20:10.882867080+00:00 stderr F I1213 00:20:10.882826   28750 iptables.go:110] Chain: "OUTPUT" in table: "mangle" already exists, skipping creation: running [/usr/sbin/iptables -t mangle -N OUTPUT --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:10.885913184+00:00 stderr F I1213 00:20:10.885816   28750 iptables.go:108] Creating table: nat chain: OUTPUT
2025-12-13T00:20:10.889478103+00:00 stderr F I1213 00:20:10.889437   28750 iptables.go:110] Chain: "OUTPUT" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OUTPUT --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:10.891658252+00:00 stderr F I1213 00:20:10.891613   28750 iptables.go:108] Creating table: nat chain: POSTROUTING
2025-12-13T00:20:10.896061344+00:00 stderr F I1213 00:20:10.896000   28750 iptables.go:110] Chain: "POSTROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N POSTROUTING --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:10.898796390+00:00 stderr F I1213 00:20:10.898757   28750 iptables.go:108] Creating table: nat chain: PREROUTING
2025-12-13T00:20:10.901117624+00:00 stderr F I1213 00:20:10.901073   28750 iptables.go:110] Chain: "PREROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N PREROUTING --wait]: exit status 1: iptables: Chain already exists.
2025-12-13T00:20:10.916440226+00:00 stderr F I1213 00:20:10.916337   28750 gateway_shared_intf.go:2124] Ensuring IP Neighbor entry for: 169.254.169.1
2025-12-13T00:20:10.916498607+00:00 stderr F I1213 00:20:10.916488   28750 gateway_shared_intf.go:2124] Ensuring IP Neighbor entry for: 169.254.169.4
2025-12-13T00:20:10.916665582+00:00 stderr F I1213 00:20:10.916573   28750 obj_retry_gateway.go:28] [newRetryFrameworkNodeWithParameters] g.watchFactory=&{10 0xc0002559d0 0xc000255a40 0xc000255ab0 0xc000255b20 0xc000255b90 0xc000255c00 0xc0000b6c30 0xc000255d50 0xc000255dc0 map[0x23d45a0:0xc0000ffab0 0x23d4ae0:0xc0000ff9d0 0x23d4d80:0xc0000ffa40 0x23d5020:0xc0000ffb20 0x23d52c0:0xc0000ffb90 0x23d5aa0:0xc0000ffc00 0x23f31a0:0xc0000ff6c0 0x23f4020:0xc0000ff730 0x23f4760:0xc0000ff8f0 0x23f5980:0xc0000ff810 0x23f7680:0xc0000ff960 0x23f8160:0xc0000ff7a0] 0xc0000b90e0}
2025-12-13T00:20:10.916679972+00:00 stderr F I1213 00:20:10.916661   28750 gateway.go:143] Starting gateway service sync
2025-12-13T00:20:10.917488605+00:00 stderr F I1213 00:20:10.917423   28750 openflow_manager.go:85] Gateway OpenFlow sync requested
2025-12-13T00:20:10.917488605+00:00 stderr F I1213 00:20:10.917451   28750 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-ITP
2025-12-13T00:20:10.921475135+00:00 stderr F I1213 00:20:10.921418   28750 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-EGRESS-SVC
2025-12-13T00:20:10.924525739+00:00 stderr F I1213 00:20:10.924459   28750 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-NODEPORT
2025-12-13T00:20:10.927324186+00:00 stderr F I1213 00:20:10.927271   28750 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-EXTERNALIP
2025-12-13T00:20:10.929777454+00:00 stderr F I1213 00:20:10.929701   28750 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-ETP
2025-12-13T00:20:10.932226042+00:00 stderr F I1213 00:20:10.931879   28750 gateway_iptables.go:544] Recreating iptables rules for table: nat, chain: OVN-KUBE-SNAT-MGMTPORT
2025-12-13T00:20:10.933995471+00:00 stderr F I1213 00:20:10.933969   28750 gateway_iptables.go:544] Recreating iptables rules for table: mangle, chain: OVN-KUBE-ITP
2025-12-13T00:20:10.935957804+00:00 stderr F I1213 00:20:10.935904   28750 gateway.go:160] Gateway service sync done. Time taken: 19.231561ms
2025-12-13T00:20:10.935957804+00:00 stderr F I1213 00:20:10.935947   28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-apiserver/check-endpoints
2025-12-13T00:20:10.935974915+00:00 stderr F I1213 00:20:10.935967   28750 gateway_shared_intf.go:609] Adding service check-endpoints in namespace openshift-apiserver
2025-12-13T00:20:10.936013756+00:00 stderr F I1213 00:20:10.935997   28750 gateway_shared_intf.go:635] Updating already programmed rules for check-endpoints in namespace openshift-apiserver
2025-12-13T00:20:10.936021606+00:00 stderr F I1213 00:20:10.936012   28750 openflow_manager.go:87] Gateway OpenFlow sync already requested
2025-12-13T00:20:10.936028796+00:00 stderr F I1213 00:20:10.936018   28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-apiserver/check-endpoints took: 54.572µs
2025-12-13T00:20:10.936028796+00:00 stderr F I1213 00:20:10.936025   28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/community-operators
2025-12-13T00:20:10.936044687+00:00 stderr F I1213 00:20:10.936030   28750 gateway_shared_intf.go:609] Adding service community-operators in namespace openshift-marketplace
2025-12-13T00:20:10.936052347+00:00 stderr F I1213 00:20:10.936044   28750 gateway_shared_intf.go:635] Updating already programmed rules for community-operators in namespace openshift-marketplace
2025-12-13T00:20:10.936052347+00:00 stderr F I1213 00:20:10.936048   28750 openflow_manager.go:87] Gateway OpenFlow sync already requested
2025-12-13T00:20:10.936060467+00:00 stderr F I1213 00:20:10.936052   28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/community-operators took: 21.891µs
2025-12-13T00:20:10.936060467+00:00 stderr F I1213 00:20:10.936056   28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/redhat-operators
2025-12-13T00:20:10.936067757+00:00 stderr F I1213 00:20:10.936060   28750 gateway_shared_intf.go:609] Adding service redhat-operators in namespace openshift-marketplace
2025-12-13T00:20:10.936090418+00:00 stderr F I1213 00:20:10.936073   28750 gateway_shared_intf.go:635] Updating already programmed rules for redhat-operators in namespace openshift-marketplace
2025-12-13T00:20:10.936090418+00:00 stderr F I1213 00:20:10.936081   28750 openflow_manager.go:87] Gateway OpenFlow sync already requested
2025-12-13T00:20:10.936090418+00:00 stderr F I1213 00:20:10.936085   28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/redhat-operators took: 24.411µs
2025-12-13T00:20:10.936098988+00:00 stderr F I1213 00:20:10.936089   28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-multus/network-metrics-service
2025-12-13T00:20:10.936098988+00:00 stderr F I1213 00:20:10.936094   28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-multus/network-metrics-service took: 240ns
2025-12-13T00:20:10.936106398+00:00 stderr F I1213 00:20:10.936099   28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-network-diagnostics/network-check-target
2025-12-13T00:20:10.936106398+00:00 stderr F I1213 00:20:10.936103   28750 gateway_shared_intf.go:609] Adding service network-check-target in namespace openshift-network-diagnostics
2025-12-13T00:20:10.936130309+00:00 stderr F I1213 00:20:10.936115   28750 gateway_shared_intf.go:635] Updating already programmed rules for network-check-target in namespace openshift-network-diagnostics
2025-12-13T00:20:10.936130309+00:00 stderr F I1213 00:20:10.936123   28750 
openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936130309+00:00 stderr F I1213 00:20:10.936127 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-network-diagnostics/network-check-target took: 23.771µs 2025-12-13T00:20:10.936138929+00:00 stderr F I1213 00:20:10.936132 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-cluster-version/cluster-version-operator 2025-12-13T00:20:10.936138929+00:00 stderr F I1213 00:20:10.936136 28750 gateway_shared_intf.go:609] Adding service cluster-version-operator in namespace openshift-cluster-version 2025-12-13T00:20:10.936162020+00:00 stderr F I1213 00:20:10.936147 28750 gateway_shared_intf.go:635] Updating already programmed rules for cluster-version-operator in namespace openshift-cluster-version 2025-12-13T00:20:10.936162020+00:00 stderr F I1213 00:20:10.936155 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936162020+00:00 stderr F I1213 00:20:10.936159 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-cluster-version/cluster-version-operator took: 22.321µs 2025-12-13T00:20:10.936169920+00:00 stderr F I1213 00:20:10.936163 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway default/openshift 2025-12-13T00:20:10.936177530+00:00 stderr F I1213 00:20:10.936168 28750 obj_retry.go:541] Creating *factory.serviceForGateway default/openshift took: 520ns 2025-12-13T00:20:10.936177530+00:00 stderr F I1213 00:20:10.936172 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-image-registry/image-registry 2025-12-13T00:20:10.936187890+00:00 stderr F I1213 00:20:10.936176 28750 gateway_shared_intf.go:609] Adding service image-registry in namespace openshift-image-registry 2025-12-13T00:20:10.936195241+00:00 stderr F I1213 00:20:10.936187 28750 gateway_shared_intf.go:635] Updating already programmed rules for image-registry in namespace 
openshift-image-registry 2025-12-13T00:20:10.936195241+00:00 stderr F I1213 00:20:10.936191 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936203081+00:00 stderr F I1213 00:20:10.936195 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-image-registry/image-registry took: 18.431µs 2025-12-13T00:20:10.936210131+00:00 stderr F I1213 00:20:10.936201 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ingress-operator/metrics 2025-12-13T00:20:10.936210131+00:00 stderr F I1213 00:20:10.936205 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-ingress-operator 2025-12-13T00:20:10.936231922+00:00 stderr F I1213 00:20:10.936217 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-ingress-operator 2025-12-13T00:20:10.936231922+00:00 stderr F I1213 00:20:10.936225 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936231922+00:00 stderr F I1213 00:20:10.936229 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ingress-operator/metrics took: 23.331µs 2025-12-13T00:20:10.936240032+00:00 stderr F I1213 00:20:10.936233 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-apiserver-operator/metrics 2025-12-13T00:20:10.936240032+00:00 stderr F I1213 00:20:10.936237 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-kube-apiserver-operator 2025-12-13T00:20:10.936263242+00:00 stderr F I1213 00:20:10.936249 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-kube-apiserver-operator 2025-12-13T00:20:10.936263242+00:00 stderr F I1213 00:20:10.936256 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936263242+00:00 stderr F I1213 00:20:10.936260 28750 obj_retry.go:541] Creating *factory.serviceForGateway 
openshift-kube-apiserver-operator/metrics took: 22.27µs 2025-12-13T00:20:10.936271393+00:00 stderr F I1213 00:20:10.936264 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-scheduler/scheduler 2025-12-13T00:20:10.936271393+00:00 stderr F I1213 00:20:10.936269 28750 gateway_shared_intf.go:609] Adding service scheduler in namespace openshift-kube-scheduler 2025-12-13T00:20:10.936294023+00:00 stderr F I1213 00:20:10.936280 28750 gateway_shared_intf.go:635] Updating already programmed rules for scheduler in namespace openshift-kube-scheduler 2025-12-13T00:20:10.936294023+00:00 stderr F I1213 00:20:10.936288 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936301554+00:00 stderr F I1213 00:20:10.936291 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-scheduler/scheduler took: 23.12µs 2025-12-13T00:20:10.936301554+00:00 stderr F I1213 00:20:10.936296 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-config-operator/machine-config-operator 2025-12-13T00:20:10.936309234+00:00 stderr F I1213 00:20:10.936300 28750 gateway_shared_intf.go:609] Adding service machine-config-operator in namespace openshift-machine-config-operator 2025-12-13T00:20:10.936317984+00:00 stderr F I1213 00:20:10.936312 28750 gateway_shared_intf.go:635] Updating already programmed rules for machine-config-operator in namespace openshift-machine-config-operator 2025-12-13T00:20:10.936329654+00:00 stderr F I1213 00:20:10.936317 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936329654+00:00 stderr F I1213 00:20:10.936321 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-config-operator/machine-config-operator took: 20.401µs 2025-12-13T00:20:10.936329654+00:00 stderr F I1213 00:20:10.936326 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway 
openshift-kube-controller-manager/kube-controller-manager 2025-12-13T00:20:10.936338255+00:00 stderr F I1213 00:20:10.936331 28750 gateway_shared_intf.go:609] Adding service kube-controller-manager in namespace openshift-kube-controller-manager 2025-12-13T00:20:10.936348885+00:00 stderr F I1213 00:20:10.936343 28750 gateway_shared_intf.go:635] Updating already programmed rules for kube-controller-manager in namespace openshift-kube-controller-manager 2025-12-13T00:20:10.936356435+00:00 stderr F I1213 00:20:10.936348 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936356435+00:00 stderr F I1213 00:20:10.936352 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-controller-manager/kube-controller-manager took: 20.561µs 2025-12-13T00:20:10.936363835+00:00 stderr F I1213 00:20:10.936355 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-image-registry/image-registry-operator 2025-12-13T00:20:10.936363835+00:00 stderr F I1213 00:20:10.936360 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-image-registry/image-registry-operator took: 250ns 2025-12-13T00:20:10.936371235+00:00 stderr F I1213 00:20:10.936363 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/redhat-marketplace 2025-12-13T00:20:10.936371235+00:00 stderr F I1213 00:20:10.936367 28750 gateway_shared_intf.go:609] Adding service redhat-marketplace in namespace openshift-marketplace 2025-12-13T00:20:10.936393206+00:00 stderr F I1213 00:20:10.936378 28750 gateway_shared_intf.go:635] Updating already programmed rules for redhat-marketplace in namespace openshift-marketplace 2025-12-13T00:20:10.936393206+00:00 stderr F I1213 00:20:10.936385 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936393206+00:00 stderr F I1213 00:20:10.936389 28750 obj_retry.go:541] Creating *factory.serviceForGateway 
openshift-marketplace/redhat-marketplace took: 21.411µs 2025-12-13T00:20:10.936401146+00:00 stderr F I1213 00:20:10.936393 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-apiserver-operator/metrics 2025-12-13T00:20:10.936401146+00:00 stderr F I1213 00:20:10.936397 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-apiserver-operator 2025-12-13T00:20:10.936422917+00:00 stderr F I1213 00:20:10.936409 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-apiserver-operator 2025-12-13T00:20:10.936422917+00:00 stderr F I1213 00:20:10.936416 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936422917+00:00 stderr F I1213 00:20:10.936420 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-apiserver-operator/metrics took: 22.151µs 2025-12-13T00:20:10.936431147+00:00 stderr F I1213 00:20:10.936424 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-console-operator/webhook 2025-12-13T00:20:10.936438107+00:00 stderr F I1213 00:20:10.936429 28750 gateway_shared_intf.go:609] Adding service webhook in namespace openshift-console-operator 2025-12-13T00:20:10.936446997+00:00 stderr F I1213 00:20:10.936442 28750 gateway_shared_intf.go:635] Updating already programmed rules for webhook in namespace openshift-console-operator 2025-12-13T00:20:10.936454018+00:00 stderr F I1213 00:20:10.936446 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936454018+00:00 stderr F I1213 00:20:10.936450 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-console-operator/webhook took: 21.361µs 2025-12-13T00:20:10.936463868+00:00 stderr F I1213 00:20:10.936454 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-etcd/etcd 2025-12-13T00:20:10.936463868+00:00 stderr F I1213 00:20:10.936459 28750 
gateway_shared_intf.go:609] Adding service etcd in namespace openshift-etcd 2025-12-13T00:20:10.936473858+00:00 stderr F I1213 00:20:10.936468 28750 gateway_shared_intf.go:635] Updating already programmed rules for etcd in namespace openshift-etcd 2025-12-13T00:20:10.936480878+00:00 stderr F I1213 00:20:10.936473 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936480878+00:00 stderr F I1213 00:20:10.936477 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-etcd/etcd took: 18.34µs 2025-12-13T00:20:10.936488379+00:00 stderr F I1213 00:20:10.936481 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-config-operator/machine-config-daemon 2025-12-13T00:20:10.936495409+00:00 stderr F I1213 00:20:10.936486 28750 gateway_shared_intf.go:609] Adding service machine-config-daemon in namespace openshift-machine-config-operator 2025-12-13T00:20:10.936522140+00:00 stderr F I1213 00:20:10.936506 28750 gateway_shared_intf.go:635] Updating already programmed rules for machine-config-daemon in namespace openshift-machine-config-operator 2025-12-13T00:20:10.936522140+00:00 stderr F I1213 00:20:10.936515 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936522140+00:00 stderr F I1213 00:20:10.936519 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-config-operator/machine-config-daemon took: 32.98µs 2025-12-13T00:20:10.936530460+00:00 stderr F I1213 00:20:10.936524 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-service-ca-operator/metrics 2025-12-13T00:20:10.936537540+00:00 stderr F I1213 00:20:10.936528 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-service-ca-operator 2025-12-13T00:20:10.936546500+00:00 stderr F I1213 00:20:10.936541 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace 
openshift-service-ca-operator 2025-12-13T00:20:10.936553540+00:00 stderr F I1213 00:20:10.936544 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936553540+00:00 stderr F I1213 00:20:10.936549 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-service-ca-operator/metrics took: 20.54µs 2025-12-13T00:20:10.936560861+00:00 stderr F I1213 00:20:10.936553 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-console/downloads 2025-12-13T00:20:10.936560861+00:00 stderr F I1213 00:20:10.936557 28750 gateway_shared_intf.go:609] Adding service downloads in namespace openshift-console 2025-12-13T00:20:10.936582721+00:00 stderr F I1213 00:20:10.936567 28750 gateway_shared_intf.go:635] Updating already programmed rules for downloads in namespace openshift-console 2025-12-13T00:20:10.936582721+00:00 stderr F I1213 00:20:10.936576 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936582721+00:00 stderr F I1213 00:20:10.936579 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-console/downloads took: 22.041µs 2025-12-13T00:20:10.936590751+00:00 stderr F I1213 00:20:10.936584 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-dns-operator/metrics 2025-12-13T00:20:10.936590751+00:00 stderr F I1213 00:20:10.936588 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-dns-operator 2025-12-13T00:20:10.936612802+00:00 stderr F I1213 00:20:10.936599 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-dns-operator 2025-12-13T00:20:10.936612802+00:00 stderr F I1213 00:20:10.936606 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936612802+00:00 stderr F I1213 00:20:10.936610 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-dns-operator/metrics took: 22.011µs 
2025-12-13T00:20:10.936623312+00:00 stderr F I1213 00:20:10.936614 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ingress-canary/ingress-canary 2025-12-13T00:20:10.936623312+00:00 stderr F I1213 00:20:10.936618 28750 gateway_shared_intf.go:609] Adding service ingress-canary in namespace openshift-ingress-canary 2025-12-13T00:20:10.936645464+00:00 stderr F I1213 00:20:10.936628 28750 gateway_shared_intf.go:635] Updating already programmed rules for ingress-canary in namespace openshift-ingress-canary 2025-12-13T00:20:10.936645464+00:00 stderr F I1213 00:20:10.936636 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936645464+00:00 stderr F I1213 00:20:10.936640 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ingress-canary/ingress-canary took: 21.292µs 2025-12-13T00:20:10.936653524+00:00 stderr F I1213 00:20:10.936644 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/cluster-autoscaler-operator 2025-12-13T00:20:10.936653524+00:00 stderr F I1213 00:20:10.936648 28750 gateway_shared_intf.go:609] Adding service cluster-autoscaler-operator in namespace openshift-machine-api 2025-12-13T00:20:10.936674945+00:00 stderr F I1213 00:20:10.936660 28750 gateway_shared_intf.go:635] Updating already programmed rules for cluster-autoscaler-operator in namespace openshift-machine-api 2025-12-13T00:20:10.936674945+00:00 stderr F I1213 00:20:10.936668 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936674945+00:00 stderr F I1213 00:20:10.936672 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/cluster-autoscaler-operator took: 23.431µs 2025-12-13T00:20:10.936684755+00:00 stderr F I1213 00:20:10.936677 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/certified-operators 2025-12-13T00:20:10.936684755+00:00 stderr F I1213 
00:20:10.936681 28750 gateway_shared_intf.go:609] Adding service certified-operators in namespace openshift-marketplace 2025-12-13T00:20:10.936700575+00:00 stderr F I1213 00:20:10.936692 28750 gateway_shared_intf.go:635] Updating already programmed rules for certified-operators in namespace openshift-marketplace 2025-12-13T00:20:10.936700575+00:00 stderr F I1213 00:20:10.936695 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936710616+00:00 stderr F I1213 00:20:10.936699 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/certified-operators took: 17.94µs 2025-12-13T00:20:10.936710616+00:00 stderr F I1213 00:20:10.936703 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-operator-lifecycle-manager/catalog-operator-metrics 2025-12-13T00:20:10.936710616+00:00 stderr F I1213 00:20:10.936707 28750 gateway_shared_intf.go:609] Adding service catalog-operator-metrics in namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:10.936743377+00:00 stderr F I1213 00:20:10.936719 28750 gateway_shared_intf.go:635] Updating already programmed rules for catalog-operator-metrics in namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:10.936743377+00:00 stderr F I1213 00:20:10.936727 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936743377+00:00 stderr F I1213 00:20:10.936731 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-operator-lifecycle-manager/catalog-operator-metrics took: 23.5µs 2025-12-13T00:20:10.936743377+00:00 stderr F I1213 00:20:10.936735 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-controller-manager-operator/metrics 2025-12-13T00:20:10.936743377+00:00 stderr F I1213 00:20:10.936739 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-kube-controller-manager-operator 2025-12-13T00:20:10.936760027+00:00 stderr F 
I1213 00:20:10.936750 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-kube-controller-manager-operator 2025-12-13T00:20:10.936760027+00:00 stderr F I1213 00:20:10.936754 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936769607+00:00 stderr F I1213 00:20:10.936758 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-controller-manager-operator/metrics took: 18.001µs 2025-12-13T00:20:10.936769607+00:00 stderr F I1213 00:20:10.936762 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-scheduler-operator/metrics 2025-12-13T00:20:10.936769607+00:00 stderr F I1213 00:20:10.936766 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-kube-scheduler-operator 2025-12-13T00:20:10.936781648+00:00 stderr F I1213 00:20:10.936776 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-kube-scheduler-operator 2025-12-13T00:20:10.936790308+00:00 stderr F I1213 00:20:10.936779 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936790308+00:00 stderr F I1213 00:20:10.936784 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-scheduler-operator/metrics took: 17.811µs 2025-12-13T00:20:10.936799258+00:00 stderr F I1213 00:20:10.936788 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-network-operator/metrics 2025-12-13T00:20:10.936799258+00:00 stderr F I1213 00:20:10.936793 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-network-operator/metrics took: 290ns 2025-12-13T00:20:10.936799258+00:00 stderr F I1213 00:20:10.936796 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-operator-lifecycle-manager/olm-operator-metrics 2025-12-13T00:20:10.936808358+00:00 stderr F I1213 00:20:10.936800 28750 
gateway_shared_intf.go:609] Adding service olm-operator-metrics in namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:10.936816789+00:00 stderr F I1213 00:20:10.936810 28750 gateway_shared_intf.go:635] Updating already programmed rules for olm-operator-metrics in namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:10.936816789+00:00 stderr F I1213 00:20:10.936814 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936825269+00:00 stderr F I1213 00:20:10.936817 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-operator-lifecycle-manager/olm-operator-metrics took: 17.001µs 2025-12-13T00:20:10.936825269+00:00 stderr F I1213 00:20:10.936822 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-operator-lifecycle-manager/packageserver-service 2025-12-13T00:20:10.936833649+00:00 stderr F I1213 00:20:10.936826 28750 gateway_shared_intf.go:609] Adding service packageserver-service in namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:10.936842039+00:00 stderr F I1213 00:20:10.936836 28750 gateway_shared_intf.go:635] Updating already programmed rules for packageserver-service in namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:10.936850429+00:00 stderr F I1213 00:20:10.936840 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936850429+00:00 stderr F I1213 00:20:10.936844 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-operator-lifecycle-manager/packageserver-service took: 18.51µs 2025-12-13T00:20:10.936858720+00:00 stderr F I1213 00:20:10.936848 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-oauth-apiserver/api 2025-12-13T00:20:10.936858720+00:00 stderr F I1213 00:20:10.936853 28750 gateway_shared_intf.go:609] Adding service api in namespace openshift-oauth-apiserver 2025-12-13T00:20:10.936871400+00:00 stderr F I1213 00:20:10.936863 28750 
gateway_shared_intf.go:635] Updating already programmed rules for api in namespace openshift-oauth-apiserver 2025-12-13T00:20:10.936871400+00:00 stderr F I1213 00:20:10.936866 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936880270+00:00 stderr F I1213 00:20:10.936870 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-oauth-apiserver/api took: 17.491µs 2025-12-13T00:20:10.936880270+00:00 stderr F I1213 00:20:10.936874 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ovn-kubernetes/ovn-kubernetes-node 2025-12-13T00:20:10.936889671+00:00 stderr F I1213 00:20:10.936879 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ovn-kubernetes/ovn-kubernetes-node took: 320ns 2025-12-13T00:20:10.936889671+00:00 stderr F I1213 00:20:10.936883 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-config-operator/metrics 2025-12-13T00:20:10.936889671+00:00 stderr F I1213 00:20:10.936886 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-config-operator 2025-12-13T00:20:10.936920921+00:00 stderr F I1213 00:20:10.936897 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-config-operator 2025-12-13T00:20:10.936920921+00:00 stderr F I1213 00:20:10.936910 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936920921+00:00 stderr F I1213 00:20:10.936914 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-config-operator/metrics took: 27.031µs 2025-12-13T00:20:10.936920921+00:00 stderr F I1213 00:20:10.936918 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ingress/router-internal-default 2025-12-13T00:20:10.936949232+00:00 stderr F I1213 00:20:10.936922 28750 gateway_shared_intf.go:609] Adding service router-internal-default in namespace openshift-ingress 
2025-12-13T00:20:10.936963463+00:00 stderr F I1213 00:20:10.936953 28750 gateway_shared_intf.go:635] Updating already programmed rules for router-internal-default in namespace openshift-ingress 2025-12-13T00:20:10.936963463+00:00 stderr F I1213 00:20:10.936960 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.936971913+00:00 stderr F I1213 00:20:10.936966 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ingress/router-internal-default took: 42.752µs 2025-12-13T00:20:10.936980173+00:00 stderr F I1213 00:20:10.936972 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-apiserver/apiserver 2025-12-13T00:20:10.936988313+00:00 stderr F I1213 00:20:10.936979 28750 gateway_shared_intf.go:609] Adding service apiserver in namespace openshift-kube-apiserver 2025-12-13T00:20:10.937038185+00:00 stderr F I1213 00:20:10.936995 28750 gateway_shared_intf.go:635] Updating already programmed rules for apiserver in namespace openshift-kube-apiserver 2025-12-13T00:20:10.937038185+00:00 stderr F I1213 00:20:10.937005 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937038185+00:00 stderr F I1213 00:20:10.937009 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-apiserver/apiserver took: 30.151µs 2025-12-13T00:20:10.937038185+00:00 stderr F I1213 00:20:10.937015 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-kube-storage-version-migrator-operator/metrics 2025-12-13T00:20:10.937048435+00:00 stderr F I1213 00:20:10.937040 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-kube-storage-version-migrator-operator 2025-12-13T00:20:10.937074706+00:00 stderr F I1213 00:20:10.937056 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-kube-storage-version-migrator-operator 2025-12-13T00:20:10.937074706+00:00 stderr F I1213 
00:20:10.937067 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937074706+00:00 stderr F I1213 00:20:10.937071 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-kube-storage-version-migrator-operator/metrics took: 50.881µs 2025-12-13T00:20:10.937088626+00:00 stderr F I1213 00:20:10.937076 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/machine-api-controllers 2025-12-13T00:20:10.937088626+00:00 stderr F I1213 00:20:10.937081 28750 gateway_shared_intf.go:609] Adding service machine-api-controllers in namespace openshift-machine-api 2025-12-13T00:20:10.937097676+00:00 stderr F I1213 00:20:10.937090 28750 gateway_shared_intf.go:635] Updating already programmed rules for machine-api-controllers in namespace openshift-machine-api 2025-12-13T00:20:10.937097676+00:00 stderr F I1213 00:20:10.937094 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937106406+00:00 stderr F I1213 00:20:10.937098 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/machine-api-controllers took: 16.89µs 2025-12-13T00:20:10.937106406+00:00 stderr F I1213 00:20:10.937102 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-config-operator/machine-config-controller 2025-12-13T00:20:10.937115407+00:00 stderr F I1213 00:20:10.937106 28750 gateway_shared_intf.go:609] Adding service machine-config-controller in namespace openshift-machine-config-operator 2025-12-13T00:20:10.937123967+00:00 stderr F I1213 00:20:10.937117 28750 gateway_shared_intf.go:635] Updating already programmed rules for machine-config-controller in namespace openshift-machine-config-operator 2025-12-13T00:20:10.937123967+00:00 stderr F I1213 00:20:10.937121 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937132727+00:00 stderr F I1213 00:20:10.937124 28750 obj_retry.go:541] 
Creating *factory.serviceForGateway openshift-machine-config-operator/machine-config-controller took: 18.431µs 2025-12-13T00:20:10.937132727+00:00 stderr F I1213 00:20:10.937128 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway default/kubernetes 2025-12-13T00:20:10.937141127+00:00 stderr F I1213 00:20:10.937132 28750 gateway_shared_intf.go:609] Adding service kubernetes in namespace default 2025-12-13T00:20:10.937149658+00:00 stderr F I1213 00:20:10.937141 28750 gateway_shared_intf.go:635] Updating already programmed rules for kubernetes in namespace default 2025-12-13T00:20:10.937149658+00:00 stderr F I1213 00:20:10.937145 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937158438+00:00 stderr F I1213 00:20:10.937149 28750 obj_retry.go:541] Creating *factory.serviceForGateway default/kubernetes took: 16.231µs 2025-12-13T00:20:10.937158438+00:00 stderr F I1213 00:20:10.937154 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-authentication-operator/metrics 2025-12-13T00:20:10.937167388+00:00 stderr F I1213 00:20:10.937158 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-authentication-operator 2025-12-13T00:20:10.937175948+00:00 stderr F I1213 00:20:10.937169 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-authentication-operator 2025-12-13T00:20:10.937175948+00:00 stderr F I1213 00:20:10.937173 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937184849+00:00 stderr F I1213 00:20:10.937177 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-authentication-operator/metrics took: 18.89µs 2025-12-13T00:20:10.937184849+00:00 stderr F I1213 00:20:10.937181 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/control-plane-machine-set-operator 2025-12-13T00:20:10.937196739+00:00 stderr F 
I1213 00:20:10.937185 28750 gateway_shared_intf.go:609] Adding service control-plane-machine-set-operator in namespace openshift-machine-api 2025-12-13T00:20:10.937205529+00:00 stderr F I1213 00:20:10.937196 28750 gateway_shared_intf.go:635] Updating already programmed rules for control-plane-machine-set-operator in namespace openshift-machine-api 2025-12-13T00:20:10.937205529+00:00 stderr F I1213 00:20:10.937200 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937214789+00:00 stderr F I1213 00:20:10.937203 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/control-plane-machine-set-operator took: 17.47µs 2025-12-13T00:20:10.937214789+00:00 stderr F I1213 00:20:10.937208 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-etcd-operator/metrics 2025-12-13T00:20:10.937214789+00:00 stderr F I1213 00:20:10.937212 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-etcd-operator 2025-12-13T00:20:10.937243050+00:00 stderr F I1213 00:20:10.937222 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-etcd-operator 2025-12-13T00:20:10.937243050+00:00 stderr F I1213 00:20:10.937230 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937243050+00:00 stderr F I1213 00:20:10.937234 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-etcd-operator/metrics took: 22.461µs 2025-12-13T00:20:10.937243050+00:00 stderr F I1213 00:20:10.937238 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-cluster-machine-approver/machine-approver 2025-12-13T00:20:10.937253760+00:00 stderr F I1213 00:20:10.937243 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-cluster-machine-approver/machine-approver took: 320ns 2025-12-13T00:20:10.937253760+00:00 stderr F I1213 00:20:10.937247 28750 obj_retry.go:502] Add event 
received for *factory.serviceForGateway openshift-machine-api/machine-api-operator-machine-webhook 2025-12-13T00:20:10.937253760+00:00 stderr F I1213 00:20:10.937250 28750 gateway_shared_intf.go:609] Adding service machine-api-operator-machine-webhook in namespace openshift-machine-api 2025-12-13T00:20:10.937265441+00:00 stderr F I1213 00:20:10.937259 28750 gateway_shared_intf.go:635] Updating already programmed rules for machine-api-operator-machine-webhook in namespace openshift-machine-api 2025-12-13T00:20:10.937274641+00:00 stderr F I1213 00:20:10.937263 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937274641+00:00 stderr F I1213 00:20:10.937268 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/machine-api-operator-machine-webhook took: 16.921µs 2025-12-13T00:20:10.937283511+00:00 stderr F I1213 00:20:10.937272 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-operator-lifecycle-manager/package-server-manager-metrics 2025-12-13T00:20:10.937283511+00:00 stderr F I1213 00:20:10.937277 28750 gateway_shared_intf.go:609] Adding service package-server-manager-metrics in namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:10.937294282+00:00 stderr F I1213 00:20:10.937287 28750 gateway_shared_intf.go:635] Updating already programmed rules for package-server-manager-metrics in namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:10.937294282+00:00 stderr F I1213 00:20:10.937291 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937303082+00:00 stderr F I1213 00:20:10.937295 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-operator-lifecycle-manager/package-server-manager-metrics took: 17.75µs 2025-12-13T00:20:10.937303082+00:00 stderr F I1213 00:20:10.937299 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-authentication/oauth-openshift 
2025-12-13T00:20:10.937326712+00:00 stderr F I1213 00:20:10.937303 28750 gateway_shared_intf.go:609] Adding service oauth-openshift in namespace openshift-authentication 2025-12-13T00:20:10.937326712+00:00 stderr F I1213 00:20:10.937313 28750 gateway_shared_intf.go:635] Updating already programmed rules for oauth-openshift in namespace openshift-authentication 2025-12-13T00:20:10.937326712+00:00 stderr F I1213 00:20:10.937317 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937326712+00:00 stderr F I1213 00:20:10.937321 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-authentication/oauth-openshift took: 17.15µs 2025-12-13T00:20:10.937336793+00:00 stderr F I1213 00:20:10.937324 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-console-operator/metrics 2025-12-13T00:20:10.937336793+00:00 stderr F I1213 00:20:10.937329 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-console-operator 2025-12-13T00:20:10.937346043+00:00 stderr F I1213 00:20:10.937338 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-console-operator 2025-12-13T00:20:10.937346043+00:00 stderr F I1213 00:20:10.937342 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937355373+00:00 stderr F I1213 00:20:10.937346 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-console-operator/metrics took: 16.911µs 2025-12-13T00:20:10.937355373+00:00 stderr F I1213 00:20:10.937350 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-controller-manager/controller-manager 2025-12-13T00:20:10.937363963+00:00 stderr F I1213 00:20:10.937354 28750 gateway_shared_intf.go:609] Adding service controller-manager in namespace openshift-controller-manager 2025-12-13T00:20:10.937372764+00:00 stderr F I1213 00:20:10.937364 28750 gateway_shared_intf.go:635] Updating 
already programmed rules for controller-manager in namespace openshift-controller-manager 2025-12-13T00:20:10.937372764+00:00 stderr F I1213 00:20:10.937368 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937381994+00:00 stderr F I1213 00:20:10.937371 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-controller-manager/controller-manager took: 16.801µs 2025-12-13T00:20:10.937381994+00:00 stderr F I1213 00:20:10.937376 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-marketplace/marketplace-operator-metrics 2025-12-13T00:20:10.937391084+00:00 stderr F I1213 00:20:10.937380 28750 gateway_shared_intf.go:609] Adding service marketplace-operator-metrics in namespace openshift-marketplace 2025-12-13T00:20:10.937400224+00:00 stderr F I1213 00:20:10.937390 28750 gateway_shared_intf.go:635] Updating already programmed rules for marketplace-operator-metrics in namespace openshift-marketplace 2025-12-13T00:20:10.937400224+00:00 stderr F I1213 00:20:10.937395 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937409295+00:00 stderr F I1213 00:20:10.937398 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-marketplace/marketplace-operator-metrics took: 18.27µs 2025-12-13T00:20:10.937409295+00:00 stderr F I1213 00:20:10.937402 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-multus/multus-admission-controller 2025-12-13T00:20:10.937409295+00:00 stderr F I1213 00:20:10.937406 28750 gateway_shared_intf.go:609] Adding service multus-admission-controller in namespace openshift-multus 2025-12-13T00:20:10.937421255+00:00 stderr F I1213 00:20:10.937416 28750 gateway_shared_intf.go:635] Updating already programmed rules for multus-admission-controller in namespace openshift-multus 2025-12-13T00:20:10.937430275+00:00 stderr F I1213 00:20:10.937420 28750 openflow_manager.go:87] Gateway OpenFlow sync 
already requested 2025-12-13T00:20:10.937430275+00:00 stderr F I1213 00:20:10.937424 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-multus/multus-admission-controller took: 17.59µs 2025-12-13T00:20:10.937443236+00:00 stderr F I1213 00:20:10.937428 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-apiserver/api 2025-12-13T00:20:10.937443236+00:00 stderr F I1213 00:20:10.937432 28750 gateway_shared_intf.go:609] Adding service api in namespace openshift-apiserver 2025-12-13T00:20:10.937452386+00:00 stderr F I1213 00:20:10.937442 28750 gateway_shared_intf.go:635] Updating already programmed rules for api in namespace openshift-apiserver 2025-12-13T00:20:10.937452386+00:00 stderr F I1213 00:20:10.937446 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937452386+00:00 stderr F I1213 00:20:10.937449 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-apiserver/api took: 16.411µs 2025-12-13T00:20:10.937462236+00:00 stderr F I1213 00:20:10.937453 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-cluster-samples-operator/metrics 2025-12-13T00:20:10.937462236+00:00 stderr F I1213 00:20:10.937457 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-cluster-samples-operator/metrics took: 340ns 2025-12-13T00:20:10.937471736+00:00 stderr F I1213 00:20:10.937461 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-monitoring/cluster-monitoring-operator 2025-12-13T00:20:10.937471736+00:00 stderr F I1213 00:20:10.937465 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-monitoring/cluster-monitoring-operator took: 170ns 2025-12-13T00:20:10.937480997+00:00 stderr F I1213 00:20:10.937469 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-12-13T00:20:10.937480997+00:00 stderr F I1213 
00:20:10.937474 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-ovn-kubernetes/ovn-kubernetes-control-plane took: 310ns 2025-12-13T00:20:10.937480997+00:00 stderr F I1213 00:20:10.937478 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-dns/dns-default 2025-12-13T00:20:10.937490667+00:00 stderr F I1213 00:20:10.937482 28750 gateway_shared_intf.go:609] Adding service dns-default in namespace openshift-dns 2025-12-13T00:20:10.937499537+00:00 stderr F I1213 00:20:10.937492 28750 gateway_shared_intf.go:635] Updating already programmed rules for dns-default in namespace openshift-dns 2025-12-13T00:20:10.937529268+00:00 stderr F I1213 00:20:10.937510 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937529268+00:00 stderr F I1213 00:20:10.937518 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-dns/dns-default took: 36.021µs 2025-12-13T00:20:10.937529268+00:00 stderr F I1213 00:20:10.937522 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/machine-api-operator 2025-12-13T00:20:10.937529268+00:00 stderr F I1213 00:20:10.937526 28750 gateway_shared_intf.go:609] Adding service machine-api-operator in namespace openshift-machine-api 2025-12-13T00:20:10.937542808+00:00 stderr F I1213 00:20:10.937536 28750 gateway_shared_intf.go:635] Updating already programmed rules for machine-api-operator in namespace openshift-machine-api 2025-12-13T00:20:10.937542808+00:00 stderr F I1213 00:20:10.937540 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937552249+00:00 stderr F I1213 00:20:10.937544 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/machine-api-operator took: 17.47µs 2025-12-13T00:20:10.937552249+00:00 stderr F I1213 00:20:10.937548 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway 
openshift-network-diagnostics/network-check-source 2025-12-13T00:20:10.937561489+00:00 stderr F I1213 00:20:10.937552 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-network-diagnostics/network-check-source took: 180ns 2025-12-13T00:20:10.937561489+00:00 stderr F I1213 00:20:10.937556 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-route-controller-manager/route-controller-manager 2025-12-13T00:20:10.937574329+00:00 stderr F I1213 00:20:10.937560 28750 gateway_shared_intf.go:609] Adding service route-controller-manager in namespace openshift-route-controller-manager 2025-12-13T00:20:10.937574329+00:00 stderr F I1213 00:20:10.937569 28750 gateway_shared_intf.go:635] Updating already programmed rules for route-controller-manager in namespace openshift-route-controller-manager 2025-12-13T00:20:10.937583979+00:00 stderr F I1213 00:20:10.937573 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937583979+00:00 stderr F I1213 00:20:10.937577 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-route-controller-manager/route-controller-manager took: 16.8µs 2025-12-13T00:20:10.937583979+00:00 stderr F I1213 00:20:10.937581 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-console/console 2025-12-13T00:20:10.937593800+00:00 stderr F I1213 00:20:10.937585 28750 gateway_shared_intf.go:609] Adding service console in namespace openshift-console 2025-12-13T00:20:10.937602110+00:00 stderr F I1213 00:20:10.937594 28750 gateway_shared_intf.go:635] Updating already programmed rules for console in namespace openshift-console 2025-12-13T00:20:10.937602110+00:00 stderr F I1213 00:20:10.937598 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937610680+00:00 stderr F I1213 00:20:10.937602 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-console/console took: 16.761µs 
2025-12-13T00:20:10.937610680+00:00 stderr F I1213 00:20:10.937606 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-controller-manager-operator/metrics 2025-12-13T00:20:10.937619450+00:00 stderr F I1213 00:20:10.937610 28750 gateway_shared_intf.go:609] Adding service metrics in namespace openshift-controller-manager-operator 2025-12-13T00:20:10.937627931+00:00 stderr F I1213 00:20:10.937620 28750 gateway_shared_intf.go:635] Updating already programmed rules for metrics in namespace openshift-controller-manager-operator 2025-12-13T00:20:10.937627931+00:00 stderr F I1213 00:20:10.937624 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937637031+00:00 stderr F I1213 00:20:10.937628 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-controller-manager-operator/metrics took: 17.801µs 2025-12-13T00:20:10.937637031+00:00 stderr F I1213 00:20:10.937632 28750 obj_retry.go:502] Add event received for *factory.serviceForGateway openshift-machine-api/machine-api-operator-webhook 2025-12-13T00:20:10.937646021+00:00 stderr F I1213 00:20:10.937636 28750 gateway_shared_intf.go:609] Adding service machine-api-operator-webhook in namespace openshift-machine-api 2025-12-13T00:20:10.937655181+00:00 stderr F I1213 00:20:10.937645 28750 gateway_shared_intf.go:635] Updating already programmed rules for machine-api-operator-webhook in namespace openshift-machine-api 2025-12-13T00:20:10.937655181+00:00 stderr F I1213 00:20:10.937649 28750 openflow_manager.go:87] Gateway OpenFlow sync already requested 2025-12-13T00:20:10.937655181+00:00 stderr F I1213 00:20:10.937652 28750 obj_retry.go:541] Creating *factory.serviceForGateway openshift-machine-api/machine-api-operator-webhook took: 15.64µs 2025-12-13T00:20:10.937667332+00:00 stderr F I1213 00:20:10.937659 28750 factory.go:988] Added *v1.Service event handler 11 2025-12-13T00:20:10.937710863+00:00 stderr F I1213 00:20:10.937672 28750 
obj_retry_gateway.go:28] [newRetryFrameworkNodeWithParameters] g.watchFactory=&{11 0xc0002559d0 0xc000255a40 0xc000255ab0 0xc000255b20 0xc000255b90 0xc000255c00 0xc0000b6c30 0xc000255d50 0xc000255dc0 map[0x23d45a0:0xc0000ffab0 0x23d4ae0:0xc0000ff9d0 0x23d4d80:0xc0000ffa40 0x23d5020:0xc0000ffb20 0x23d52c0:0xc0000ffb90 0x23d5aa0:0xc0000ffc00 0x23f31a0:0xc0000ff6c0 0x23f4020:0xc0000ff730 0x23f4760:0xc0000ff8f0 0x23f5980:0xc0000ff810 0x23f7680:0xc0000ff960 0x23f8160:0xc0000ff7a0] 0xc0000b90e0} 2025-12-13T00:20:10.937742414+00:00 stderr F I1213 00:20:10.937719 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/control-plane-machine-set-operator-nmjkn 2025-12-13T00:20:10.937742414+00:00 stderr F I1213 00:20:10.937737 28750 gateway_shared_intf.go:842] Adding endpointslice control-plane-machine-set-operator-nmjkn in namespace openshift-machine-api 2025-12-13T00:20:10.937771684+00:00 stderr F I1213 00:20:10.937750 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/control-plane-machine-set-operator-nmjkn took: 17.701µs 2025-12-13T00:20:10.937780815+00:00 stderr F I1213 00:20:10.937775 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/community-operators-h826k 2025-12-13T00:20:10.937789355+00:00 stderr F I1213 00:20:10.937780 28750 gateway_shared_intf.go:842] Adding endpointslice community-operators-h826k in namespace openshift-marketplace 2025-12-13T00:20:10.937797665+00:00 stderr F I1213 00:20:10.937791 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/community-operators-h826k took: 11.68µs 2025-12-13T00:20:10.937805865+00:00 stderr F I1213 00:20:10.937796 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-ingress-canary/ingress-canary-rhnd4 2025-12-13T00:20:10.937805865+00:00 stderr F I1213 00:20:10.937801 28750 gateway_shared_intf.go:842] Adding 
endpointslice ingress-canary-rhnd4 in namespace openshift-ingress-canary 2025-12-13T00:20:10.937816176+00:00 stderr F I1213 00:20:10.937811 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-ingress-canary/ingress-canary-rhnd4 took: 10.481µs 2025-12-13T00:20:10.937824366+00:00 stderr F I1213 00:20:10.937815 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-apiserver-operator/metrics-kbv55 2025-12-13T00:20:10.937824366+00:00 stderr F I1213 00:20:10.937820 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-kbv55 in namespace openshift-kube-apiserver-operator 2025-12-13T00:20:10.937851747+00:00 stderr F I1213 00:20:10.937830 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-apiserver-operator/metrics-kbv55 took: 11.13µs 2025-12-13T00:20:10.937851747+00:00 stderr F I1213 00:20:10.937843 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-scheduler-operator/metrics-zk8d6 2025-12-13T00:20:10.937851747+00:00 stderr F I1213 00:20:10.937848 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-zk8d6 in namespace openshift-kube-scheduler-operator 2025-12-13T00:20:10.937864317+00:00 stderr F I1213 00:20:10.937857 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-scheduler-operator/metrics-zk8d6 took: 10.381µs 2025-12-13T00:20:10.937872897+00:00 stderr F I1213 00:20:10.937862 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-daemon-2nvnz 2025-12-13T00:20:10.937872897+00:00 stderr F I1213 00:20:10.937867 28750 gateway_shared_intf.go:842] Adding endpointslice machine-config-daemon-2nvnz in namespace openshift-machine-config-operator 2025-12-13T00:20:10.937883698+00:00 stderr F I1213 00:20:10.937878 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway 
openshift-machine-config-operator/machine-config-daemon-2nvnz took: 11.43µs 2025-12-13T00:20:10.937892308+00:00 stderr F I1213 00:20:10.937882 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-console-operator/metrics-rdlxc 2025-12-13T00:20:10.937892308+00:00 stderr F I1213 00:20:10.937887 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-rdlxc in namespace openshift-console-operator 2025-12-13T00:20:10.937905498+00:00 stderr F I1213 00:20:10.937897 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-console-operator/metrics-rdlxc took: 9.95µs 2025-12-13T00:20:10.937905498+00:00 stderr F I1213 00:20:10.937901 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-controller-manager-operator/metrics-psf8p 2025-12-13T00:20:10.937914308+00:00 stderr F I1213 00:20:10.937906 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-psf8p in namespace openshift-controller-manager-operator 2025-12-13T00:20:10.937923159+00:00 stderr F I1213 00:20:10.937917 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-controller-manager-operator/metrics-psf8p took: 12.57µs 2025-12-13T00:20:10.937953769+00:00 stderr F I1213 00:20:10.937922 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/redhat-operators-47g6l 2025-12-13T00:20:10.937953769+00:00 stderr F I1213 00:20:10.937926 28750 gateway_shared_intf.go:842] Adding endpointslice redhat-operators-47g6l in namespace openshift-marketplace 2025-12-13T00:20:10.937967590+00:00 stderr F I1213 00:20:10.937958 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/redhat-operators-47g6l took: 31.39µs 2025-12-13T00:20:10.937975920+00:00 stderr F I1213 00:20:10.937965 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-multus/multus-admission-controller-s6h4d 
2025-12-13T00:20:10.937975920+00:00 stderr F I1213 00:20:10.937971 28750 gateway_shared_intf.go:842] Adding endpointslice multus-admission-controller-s6h4d in namespace openshift-multus 2025-12-13T00:20:10.937986860+00:00 stderr F I1213 00:20:10.937981 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-multus/multus-admission-controller-s6h4d took: 10.87µs 2025-12-13T00:20:10.937995301+00:00 stderr F I1213 00:20:10.937986 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-authentication-operator/metrics-dp499 2025-12-13T00:20:10.937995301+00:00 stderr F I1213 00:20:10.937991 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-dp499 in namespace openshift-authentication-operator 2025-12-13T00:20:10.938006351+00:00 stderr F I1213 00:20:10.938000 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-authentication-operator/metrics-dp499 took: 9.531µs 2025-12-13T00:20:10.938014191+00:00 stderr F I1213 00:20:10.938004 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-console/console-wg4kr 2025-12-13T00:20:10.938014191+00:00 stderr F I1213 00:20:10.938009 28750 gateway_shared_intf.go:842] Adding endpointslice console-wg4kr in namespace openshift-console 2025-12-13T00:20:10.938024471+00:00 stderr F I1213 00:20:10.938018 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-console/console-wg4kr took: 9.83µs 2025-12-13T00:20:10.938032672+00:00 stderr F I1213 00:20:10.938023 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/machine-api-controllers-j9jjt 2025-12-13T00:20:10.938032672+00:00 stderr F I1213 00:20:10.938027 28750 gateway_shared_intf.go:842] Adding endpointslice machine-api-controllers-j9jjt in namespace openshift-machine-api 2025-12-13T00:20:10.938043082+00:00 stderr F I1213 00:20:10.938036 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway 
openshift-machine-api/machine-api-controllers-j9jjt took: 9.241µs 2025-12-13T00:20:10.938043082+00:00 stderr F I1213 00:20:10.938040 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-oauth-apiserver/api-2pj4d 2025-12-13T00:20:10.938051512+00:00 stderr F I1213 00:20:10.938045 28750 gateway_shared_intf.go:842] Adding endpointslice api-2pj4d in namespace openshift-oauth-apiserver 2025-12-13T00:20:10.938063552+00:00 stderr F I1213 00:20:10.938054 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-oauth-apiserver/api-2pj4d took: 10.1µs 2025-12-13T00:20:10.938063552+00:00 stderr F I1213 00:20:10.938058 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-etcd-operator/metrics-z62zm 2025-12-13T00:20:10.938071883+00:00 stderr F I1213 00:20:10.938063 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-z62zm in namespace openshift-etcd-operator 2025-12-13T00:20:10.938080023+00:00 stderr F I1213 00:20:10.938072 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-etcd-operator/metrics-z62zm took: 9.931µs 2025-12-13T00:20:10.938080023+00:00 stderr F I1213 00:20:10.938076 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-dns/dns-default-lctlx 2025-12-13T00:20:10.938088273+00:00 stderr F I1213 00:20:10.938081 28750 gateway_shared_intf.go:842] Adding endpointslice dns-default-lctlx in namespace openshift-dns 2025-12-13T00:20:10.938098433+00:00 stderr F I1213 00:20:10.938092 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-dns/dns-default-lctlx took: 10.8µs 2025-12-13T00:20:10.938134574+00:00 stderr F I1213 00:20:10.938096 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-image-registry/image-registry-cfsrx 2025-12-13T00:20:10.938134574+00:00 stderr F I1213 00:20:10.938101 28750 gateway_shared_intf.go:842] Adding endpointslice 
image-registry-cfsrx in namespace openshift-image-registry 2025-12-13T00:20:10.938144125+00:00 stderr F I1213 00:20:10.938134 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-image-registry/image-registry-cfsrx took: 33.071µs 2025-12-13T00:20:10.938144125+00:00 stderr F I1213 00:20:10.938139 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-controller-7t8hc 2025-12-13T00:20:10.938152995+00:00 stderr F I1213 00:20:10.938144 28750 gateway_shared_intf.go:842] Adding endpointslice machine-config-controller-7t8hc in namespace openshift-machine-config-operator 2025-12-13T00:20:10.938161935+00:00 stderr F I1213 00:20:10.938154 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-controller-7t8hc took: 10.97µs 2025-12-13T00:20:10.938161935+00:00 stderr F I1213 00:20:10.938159 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/redhat-marketplace-8k279 2025-12-13T00:20:10.938170865+00:00 stderr F I1213 00:20:10.938164 28750 gateway_shared_intf.go:842] Adding endpointslice redhat-marketplace-8k279 in namespace openshift-marketplace 2025-12-13T00:20:10.938181936+00:00 stderr F I1213 00:20:10.938175 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/redhat-marketplace-8k279 took: 12.19µs 2025-12-13T00:20:10.938190646+00:00 stderr F I1213 00:20:10.938179 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-service-ca-operator/metrics-wrkrj 2025-12-13T00:20:10.938190646+00:00 stderr F I1213 00:20:10.938185 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-wrkrj in namespace openshift-service-ca-operator 2025-12-13T00:20:10.938201606+00:00 stderr F I1213 00:20:10.938196 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-service-ca-operator/metrics-wrkrj 
took: 11.54µs 2025-12-13T00:20:10.938210146+00:00 stderr F I1213 00:20:10.938200 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-apiserver-operator/metrics-sgtfh 2025-12-13T00:20:10.938210146+00:00 stderr F I1213 00:20:10.938205 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-sgtfh in namespace openshift-apiserver-operator 2025-12-13T00:20:10.938222387+00:00 stderr F I1213 00:20:10.938217 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-apiserver-operator/metrics-sgtfh took: 12.021µs 2025-12-13T00:20:10.938229417+00:00 stderr F I1213 00:20:10.938221 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-etcd/etcd-8wmzv 2025-12-13T00:20:10.938229417+00:00 stderr F I1213 00:20:10.938226 28750 gateway_shared_intf.go:842] Adding endpointslice etcd-8wmzv in namespace openshift-etcd 2025-12-13T00:20:10.938259218+00:00 stderr F I1213 00:20:10.938236 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-etcd/etcd-8wmzv took: 10.47µs 2025-12-13T00:20:10.938259218+00:00 stderr F I1213 00:20:10.938245 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-webhook-x4gjx 2025-12-13T00:20:10.938259218+00:00 stderr F I1213 00:20:10.938250 28750 gateway_shared_intf.go:842] Adding endpointslice machine-api-operator-webhook-x4gjx in namespace openshift-machine-api 2025-12-13T00:20:10.938267738+00:00 stderr F I1213 00:20:10.938259 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-webhook-x4gjx took: 10.091µs 2025-12-13T00:20:10.938267738+00:00 stderr F I1213 00:20:10.938263 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-machine-webhook-xj4tp 2025-12-13T00:20:10.938275018+00:00 stderr F I1213 00:20:10.938268 28750 gateway_shared_intf.go:842] 
Adding endpointslice machine-api-operator-machine-webhook-xj4tp in namespace openshift-machine-api 2025-12-13T00:20:10.938283628+00:00 stderr F I1213 00:20:10.938278 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-machine-webhook-xj4tp took: 10.28µs 2025-12-13T00:20:10.938290759+00:00 stderr F I1213 00:20:10.938282 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/certified-operators-bw9bv 2025-12-13T00:20:10.938290759+00:00 stderr F I1213 00:20:10.938287 28750 gateway_shared_intf.go:842] Adding endpointslice certified-operators-bw9bv in namespace openshift-marketplace 2025-12-13T00:20:10.938313779+00:00 stderr F I1213 00:20:10.938298 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/certified-operators-bw9bv took: 12.111µs 2025-12-13T00:20:10.938313779+00:00 stderr F I1213 00:20:10.938307 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-apiserver/api-7hq6z 2025-12-13T00:20:10.938313779+00:00 stderr F I1213 00:20:10.938311 28750 gateway_shared_intf.go:842] Adding endpointslice api-7hq6z in namespace openshift-apiserver 2025-12-13T00:20:10.938335060+00:00 stderr F I1213 00:20:10.938321 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-apiserver/api-7hq6z took: 10.33µs 2025-12-13T00:20:10.938335060+00:00 stderr F I1213 00:20:10.938329 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-storage-version-migrator-operator/metrics-zbxs7 2025-12-13T00:20:10.938342390+00:00 stderr F I1213 00:20:10.938334 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-zbxs7 in namespace openshift-kube-storage-version-migrator-operator 2025-12-13T00:20:10.938352210+00:00 stderr F I1213 00:20:10.938345 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway 
openshift-kube-storage-version-migrator-operator/metrics-zbxs7 took: 12.31µs 2025-12-13T00:20:10.938360681+00:00 stderr F I1213 00:20:10.938350 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/olm-operator-metrics-vql58 2025-12-13T00:20:10.938360681+00:00 stderr F I1213 00:20:10.938355 28750 gateway_shared_intf.go:842] Adding endpointslice olm-operator-metrics-vql58 in namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:10.938372611+00:00 stderr F I1213 00:20:10.938366 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/olm-operator-metrics-vql58 took: 11.911µs 2025-12-13T00:20:10.938381291+00:00 stderr F I1213 00:20:10.938371 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-scheduler/scheduler-4wbzh 2025-12-13T00:20:10.938381291+00:00 stderr F I1213 00:20:10.938376 28750 gateway_shared_intf.go:842] Adding endpointslice scheduler-4wbzh in namespace openshift-kube-scheduler 2025-12-13T00:20:10.938392231+00:00 stderr F I1213 00:20:10.938386 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-scheduler/scheduler-4wbzh took: 11.18µs 2025-12-13T00:20:10.938400742+00:00 stderr F I1213 00:20:10.938391 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-console/downloads-zsr67 2025-12-13T00:20:10.938400742+00:00 stderr F I1213 00:20:10.938395 28750 gateway_shared_intf.go:842] Adding endpointslice downloads-zsr67 in namespace openshift-console 2025-12-13T00:20:10.938409602+00:00 stderr F I1213 00:20:10.938405 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-console/downloads-zsr67 took: 10.181µs 2025-12-13T00:20:10.938416692+00:00 stderr F I1213 00:20:10.938409 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-network-diagnostics/network-check-target-kqkjk 
2025-12-13T00:20:10.938416692+00:00 stderr F I1213 00:20:10.938414 28750 gateway_shared_intf.go:842] Adding endpointslice network-check-target-kqkjk in namespace openshift-network-diagnostics 2025-12-13T00:20:10.938434633+00:00 stderr F I1213 00:20:10.938424 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-network-diagnostics/network-check-target-kqkjk took: 10.64µs 2025-12-13T00:20:10.938434633+00:00 stderr F I1213 00:20:10.938429 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-ingress-operator/metrics-cd48g 2025-12-13T00:20:10.938441853+00:00 stderr F I1213 00:20:10.938434 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-cd48g in namespace openshift-ingress-operator 2025-12-13T00:20:10.938448803+00:00 stderr F I1213 00:20:10.938443 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-ingress-operator/metrics-cd48g took: 10.191µs 2025-12-13T00:20:10.938455893+00:00 stderr F I1213 00:20:10.938447 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-controller-manager/kube-controller-manager-fcp2k 2025-12-13T00:20:10.938455893+00:00 stderr F I1213 00:20:10.938452 28750 gateway_shared_intf.go:842] Adding endpointslice kube-controller-manager-fcp2k in namespace openshift-kube-controller-manager 2025-12-13T00:20:10.938478664+00:00 stderr F I1213 00:20:10.938462 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-controller-manager/kube-controller-manager-fcp2k took: 10.54µs 2025-12-13T00:20:10.938478664+00:00 stderr F I1213 00:20:10.938470 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway default/kubernetes 2025-12-13T00:20:10.938478664+00:00 stderr F I1213 00:20:10.938476 28750 gateway_shared_intf.go:842] Adding endpointslice kubernetes in namespace default 2025-12-13T00:20:10.938501314+00:00 stderr F I1213 00:20:10.938484 28750 obj_retry.go:541] Creating 
*factory.endpointSliceForGateway default/kubernetes took: 9.89µs 2025-12-13T00:20:10.938501314+00:00 stderr F I1213 00:20:10.938492 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-console-operator/webhook-b7j7h 2025-12-13T00:20:10.938501314+00:00 stderr F I1213 00:20:10.938496 28750 gateway_shared_intf.go:842] Adding endpointslice webhook-b7j7h in namespace openshift-console-operator 2025-12-13T00:20:10.938522485+00:00 stderr F I1213 00:20:10.938507 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-console-operator/webhook-b7j7h took: 10.29µs 2025-12-13T00:20:10.938522485+00:00 stderr F I1213 00:20:10.938515 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-authentication/oauth-openshift-6gdxk 2025-12-13T00:20:10.938522485+00:00 stderr F I1213 00:20:10.938519 28750 gateway_shared_intf.go:842] Adding endpointslice oauth-openshift-6gdxk in namespace openshift-authentication 2025-12-13T00:20:10.938534785+00:00 stderr F I1213 00:20:10.938529 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-authentication/oauth-openshift-6gdxk took: 10.13µs 2025-12-13T00:20:10.938542025+00:00 stderr F I1213 00:20:10.938534 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-route-controller-manager/route-controller-manager-64jvm 2025-12-13T00:20:10.938542025+00:00 stderr F I1213 00:20:10.938538 28750 gateway_shared_intf.go:842] Adding endpointslice route-controller-manager-64jvm in namespace openshift-route-controller-manager 2025-12-13T00:20:10.938568586+00:00 stderr F I1213 00:20:10.938548 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-route-controller-manager/route-controller-manager-64jvm took: 9.631µs 2025-12-13T00:20:10.938568586+00:00 stderr F I1213 00:20:10.938559 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway 
openshift-kube-apiserver/apiserver-5mvtf 2025-12-13T00:20:10.938568586+00:00 stderr F I1213 00:20:10.938564 28750 gateway_shared_intf.go:842] Adding endpointslice apiserver-5mvtf in namespace openshift-kube-apiserver 2025-12-13T00:20:10.938579997+00:00 stderr F I1213 00:20:10.938574 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-apiserver/apiserver-5mvtf took: 10.3µs 2025-12-13T00:20:10.938588477+00:00 stderr F I1213 00:20:10.938578 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-kube-controller-manager-operator/metrics-cz5rv 2025-12-13T00:20:10.938588477+00:00 stderr F I1213 00:20:10.938583 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-cz5rv in namespace openshift-kube-controller-manager-operator 2025-12-13T00:20:10.938599637+00:00 stderr F I1213 00:20:10.938593 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-kube-controller-manager-operator/metrics-cz5rv took: 9.85µs 2025-12-13T00:20:10.938599637+00:00 stderr F I1213 00:20:10.938597 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-2js9r 2025-12-13T00:20:10.938608547+00:00 stderr F I1213 00:20:10.938601 28750 gateway_shared_intf.go:842] Adding endpointslice machine-api-operator-2js9r in namespace openshift-machine-api 2025-12-13T00:20:10.938617048+00:00 stderr F I1213 00:20:10.938611 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/machine-api-operator-2js9r took: 10.7µs 2025-12-13T00:20:10.938625588+00:00 stderr F I1213 00:20:10.938616 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-api/cluster-autoscaler-operator-r4g5l 2025-12-13T00:20:10.938625588+00:00 stderr F I1213 00:20:10.938620 28750 gateway_shared_intf.go:842] Adding endpointslice cluster-autoscaler-operator-r4g5l in namespace openshift-machine-api 
2025-12-13T00:20:10.938636218+00:00 stderr F I1213 00:20:10.938629 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-api/cluster-autoscaler-operator-r4g5l took: 8.8µs 2025-12-13T00:20:10.938636218+00:00 stderr F I1213 00:20:10.938633 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-operator-p8xmw 2025-12-13T00:20:10.938644338+00:00 stderr F I1213 00:20:10.938638 28750 gateway_shared_intf.go:842] Adding endpointslice machine-config-operator-p8xmw in namespace openshift-machine-config-operator 2025-12-13T00:20:10.938671279+00:00 stderr F I1213 00:20:10.938650 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-machine-config-operator/machine-config-operator-p8xmw took: 12.64µs 2025-12-13T00:20:10.938671279+00:00 stderr F I1213 00:20:10.938662 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-marketplace/marketplace-operator-metrics-fcwkk 2025-12-13T00:20:10.938671279+00:00 stderr F I1213 00:20:10.938666 28750 gateway_shared_intf.go:842] Adding endpointslice marketplace-operator-metrics-fcwkk in namespace openshift-marketplace 2025-12-13T00:20:10.938685189+00:00 stderr F I1213 00:20:10.938676 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-marketplace/marketplace-operator-metrics-fcwkk took: 10.49µs 2025-12-13T00:20:10.938685189+00:00 stderr F I1213 00:20:10.938681 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-cluster-version/cluster-version-operator-qt7zf 2025-12-13T00:20:10.938694110+00:00 stderr F I1213 00:20:10.938686 28750 gateway_shared_intf.go:842] Adding endpointslice cluster-version-operator-qt7zf in namespace openshift-cluster-version 2025-12-13T00:20:10.938702770+00:00 stderr F I1213 00:20:10.938695 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway 
openshift-cluster-version/cluster-version-operator-qt7zf took: 10.501µs 2025-12-13T00:20:10.938702770+00:00 stderr F I1213 00:20:10.938700 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-config-operator/metrics-tw775 2025-12-13T00:20:10.938711700+00:00 stderr F I1213 00:20:10.938705 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-tw775 in namespace openshift-config-operator 2025-12-13T00:20:10.938720170+00:00 stderr F I1213 00:20:10.938714 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-config-operator/metrics-tw775 took: 10.48µs 2025-12-13T00:20:10.938728731+00:00 stderr F I1213 00:20:10.938719 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-ingress/router-internal-default-29hv8 2025-12-13T00:20:10.938728731+00:00 stderr F I1213 00:20:10.938723 28750 gateway_shared_intf.go:842] Adding endpointslice router-internal-default-29hv8 in namespace openshift-ingress 2025-12-13T00:20:10.938739701+00:00 stderr F I1213 00:20:10.938733 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-ingress/router-internal-default-29hv8 took: 9.741µs 2025-12-13T00:20:10.938739701+00:00 stderr F I1213 00:20:10.938737 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/catalog-operator-metrics-fqfm8 2025-12-13T00:20:10.938748671+00:00 stderr F I1213 00:20:10.938741 28750 gateway_shared_intf.go:842] Adding endpointslice catalog-operator-metrics-fqfm8 in namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:10.938757391+00:00 stderr F I1213 00:20:10.938752 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/catalog-operator-metrics-fqfm8 took: 10.88µs 2025-12-13T00:20:10.938766202+00:00 stderr F I1213 00:20:10.938756 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway 
openshift-operator-lifecycle-manager/package-server-manager-metrics-mq66p 2025-12-13T00:20:10.938766202+00:00 stderr F I1213 00:20:10.938762 28750 gateway_shared_intf.go:842] Adding endpointslice package-server-manager-metrics-mq66p in namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:10.938795752+00:00 stderr F I1213 00:20:10.938772 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/package-server-manager-metrics-mq66p took: 11.221µs 2025-12-13T00:20:10.938795752+00:00 stderr F I1213 00:20:10.938785 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/packageserver-service-tlm8t 2025-12-13T00:20:10.938795752+00:00 stderr F I1213 00:20:10.938790 28750 gateway_shared_intf.go:842] Adding endpointslice packageserver-service-tlm8t in namespace openshift-operator-lifecycle-manager 2025-12-13T00:20:10.938810033+00:00 stderr F I1213 00:20:10.938801 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-operator-lifecycle-manager/packageserver-service-tlm8t took: 11.501µs 2025-12-13T00:20:10.938810033+00:00 stderr F I1213 00:20:10.938805 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-apiserver/check-endpoints-sbfp5 2025-12-13T00:20:10.938818833+00:00 stderr F I1213 00:20:10.938809 28750 gateway_shared_intf.go:842] Adding endpointslice check-endpoints-sbfp5 in namespace openshift-apiserver 2025-12-13T00:20:10.938827633+00:00 stderr F I1213 00:20:10.938819 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-apiserver/check-endpoints-sbfp5 took: 9.99µs 2025-12-13T00:20:10.938827633+00:00 stderr F I1213 00:20:10.938823 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-controller-manager/controller-manager-kxmft 2025-12-13T00:20:10.938836504+00:00 stderr F I1213 00:20:10.938828 28750 gateway_shared_intf.go:842] Adding 
endpointslice controller-manager-kxmft in namespace openshift-controller-manager 2025-12-13T00:20:10.938845134+00:00 stderr F I1213 00:20:10.938839 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-controller-manager/controller-manager-kxmft took: 11.701µs 2025-12-13T00:20:10.938853644+00:00 stderr F I1213 00:20:10.938843 28750 obj_retry.go:502] Add event received for *factory.endpointSliceForGateway openshift-dns-operator/metrics-cxk8j 2025-12-13T00:20:10.938853644+00:00 stderr F I1213 00:20:10.938848 28750 gateway_shared_intf.go:842] Adding endpointslice metrics-cxk8j in namespace openshift-dns-operator 2025-12-13T00:20:10.938864564+00:00 stderr F I1213 00:20:10.938857 28750 obj_retry.go:541] Creating *factory.endpointSliceForGateway openshift-dns-operator/metrics-cxk8j took: 9.99µs 2025-12-13T00:20:10.938872915+00:00 stderr F I1213 00:20:10.938863 28750 factory.go:988] Added *v1.EndpointSlice event handler 12 2025-12-13T00:20:10.938928836+00:00 stderr F I1213 00:20:10.938911 28750 gateway.go:244] Spawning Conntrack Rule Check Thread 2025-12-13T00:20:10.939133972+00:00 stderr F I1213 00:20:10.939114 28750 ovs.go:159] Exec(55): /usr/bin/ovs-ofctl -O OpenFlow13 --bundle replace-flows br-ex - 2025-12-13T00:20:10.942297320+00:00 stderr F I1213 00:20:10.942261 28750 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 127.0.0.1/8 lo 2025-12-13T00:20:10.942297320+00:00 stderr F I1213 00:20:10.942285 28750 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 10.217.0.2/23 ovn-k8s-mp0 2025-12-13T00:20:10.942317640+00:00 stderr F I1213 00:20:10.942299 28750 node_ip_handler_linux.go:431] Skipping non-useable IP address for host: 169.254.169.2/29 br-ex 2025-12-13T00:20:10.942325150+00:00 stderr F I1213 00:20:10.942317 28750 node_ip_handler_linux.go:141] Node IP manager is running 2025-12-13T00:20:10.942335251+00:00 stderr F I1213 00:20:10.942329 28750 default_node_network_controller.go:980] Gateway and 
management port readiness took 1.171596857s 2025-12-13T00:20:10.960166892+00:00 stderr F I1213 00:20:10.960084 28750 ovs.go:162] Exec(55): stdout: "" 2025-12-13T00:20:10.960166892+00:00 stderr F I1213 00:20:10.960104 28750 ovs.go:163] Exec(55): stderr: "" 2025-12-13T00:20:10.960279625+00:00 stderr F I1213 00:20:10.960249 28750 ovs.go:159] Exec(56): /usr/bin/ovs-vsctl --timeout=15 --if-exists del-br br-ext 2025-12-13T00:20:10.960355107+00:00 stderr F I1213 00:20:10.960317 28750 route_manager.go:93] Route Manager: attempting to add route: {Ifindex: 11 Dst: 10.217.4.0/23 Src: 169.254.169.2 Gw: 169.254.169.4 Flags: [] Table: 0 Realm: 0} 2025-12-13T00:20:10.960669866+00:00 stderr F I1213 00:20:10.960634 28750 route_manager.go:110] Route Manager: completed adding route: {Ifindex: 11 Dst: 10.217.4.0/23 Src: 169.254.169.2 Gw: 169.254.169.4 Flags: [] Table: 254 Realm: 0} 2025-12-13T00:20:10.968271936+00:00 stderr F I1213 00:20:10.968222 28750 ovs.go:162] Exec(56): stdout: "" 2025-12-13T00:20:10.968271936+00:00 stderr F I1213 00:20:10.968246 28750 ovs.go:163] Exec(56): stderr: "" 2025-12-13T00:20:10.968271936+00:00 stderr F I1213 00:20:10.968264 28750 ovs.go:159] Exec(57): /usr/bin/ovs-vsctl --timeout=15 --if-exists del-port br-int int 2025-12-13T00:20:10.976101772+00:00 stderr F I1213 00:20:10.976043 28750 ovs.go:162] Exec(57): stdout: "" 2025-12-13T00:20:10.976101772+00:00 stderr F I1213 00:20:10.976064 28750 ovs.go:163] Exec(57): stderr: "" 2025-12-13T00:20:10.976326878+00:00 stderr F I1213 00:20:10.976229 28750 healthcheck_node.go:124] "Starting node proxy healthz server" address="0.0.0.0:10256" 2025-12-13T00:20:10.976326878+00:00 stderr F W1213 00:20:10.976278 28750 egressip_healthcheck.go:74] Health checking using insecure connection 2025-12-13T00:20:10.976360889+00:00 stderr F I1213 00:20:10.976337 28750 egressip_healthcheck.go:107] Starting Egress IP Health Server on 10.217.0.2:9107 2025-12-13T00:20:10.977119449+00:00 stderr F I1213 00:20:10.977091 28750 
egressservice_node.go:84] Setting up event handlers for Egress Services 2025-12-13T00:20:10.977334265+00:00 stderr F I1213 00:20:10.977293 28750 egressservice_node.go:172] Starting Egress Services Controller 2025-12-13T00:20:10.977348566+00:00 stderr F I1213 00:20:10.977330 28750 shared_informer.go:311] Waiting for caches to sync for egressservices 2025-12-13T00:20:10.977348566+00:00 stderr F I1213 00:20:10.977340 28750 shared_informer.go:318] Caches are synced for egressservices 2025-12-13T00:20:10.977358626+00:00 stderr F I1213 00:20:10.977345 28750 shared_informer.go:311] Waiting for caches to sync for egressservices_services 2025-12-13T00:20:10.977358626+00:00 stderr F I1213 00:20:10.977351 28750 shared_informer.go:318] Caches are synced for egressservices_services 2025-12-13T00:20:10.977368186+00:00 stderr F I1213 00:20:10.977359 28750 shared_informer.go:311] Waiting for caches to sync for egressservices_endpointslices 2025-12-13T00:20:10.977368186+00:00 stderr F I1213 00:20:10.977364 28750 shared_informer.go:318] Caches are synced for egressservices_endpointslices 2025-12-13T00:20:10.977377716+00:00 stderr F I1213 00:20:10.977369 28750 egressservice_node.go:186] Repairing Egress Services 2025-12-13T00:20:10.983747932+00:00 stderr F I1213 00:20:10.983681 28750 iptables.go:108] Creating table: nat chain: OVN-KUBE-EGRESS-SVC 2025-12-13T00:20:10.989664445+00:00 stderr F W1213 00:20:10.989586 28750 management-port_linux.go:495] missing management port nat rule in chain OVN-KUBE-SNAT-MGMTPORT, adding it 2025-12-13T00:20:10.991078354+00:00 stderr F I1213 00:20:10.990810 28750 node_controller.go:43] Starting Admin Policy Based Route Node Controller 2025-12-13T00:20:10.991078354+00:00 stderr F I1213 00:20:10.990829 28750 external_controller.go:276] Starting Admin Policy Based Route Controller 2025-12-13T00:20:10.995827626+00:00 stderr F I1213 00:20:10.995781 28750 egressip.go:183] Starting Egress IP Controller 2025-12-13T00:20:10.996007501+00:00 stderr F I1213 
00:20:10.995979 28750 shared_informer.go:311] Waiting for caches to sync for eippod 2025-12-13T00:20:10.996007501+00:00 stderr F I1213 00:20:10.995995 28750 shared_informer.go:318] Caches are synced for eippod 2025-12-13T00:20:10.996069602+00:00 stderr F I1213 00:20:10.996044 28750 shared_informer.go:311] Waiting for caches to sync for eipeip 2025-12-13T00:20:10.996069602+00:00 stderr F I1213 00:20:10.996061 28750 shared_informer.go:318] Caches are synced for eipeip 2025-12-13T00:20:10.996078642+00:00 stderr F I1213 00:20:10.996070 28750 shared_informer.go:311] Waiting for caches to sync for eipnamespace 2025-12-13T00:20:10.996078642+00:00 stderr F I1213 00:20:10.996075 28750 shared_informer.go:318] Caches are synced for eipnamespace 2025-12-13T00:20:10.996194936+00:00 stderr F I1213 00:20:10.996155 28750 iptables_manager.go:96] IPTables manager: own chain: table nat, chain OVN-KUBE-EGRESS-IP-MULTI-NIC, protocol IPv4 2025-12-13T00:20:10.999131077+00:00 stderr F I1213 00:20:10.999041 28750 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-12-13T00:20:11.002636403+00:00 stderr F I1213 00:20:11.002494 28750 iptables_manager.go:164] IPTables manager: ensure rule - table nat, chain POSTROUTING, protocol IPv4, rule: {[-j OVN-KUBE-EGRESS-IP-MULTI-NIC]} 2025-12-13T00:20:11.005210154+00:00 stderr F I1213 00:20:11.005143 28750 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-12-13T00:20:11.010633194+00:00 stderr F I1213 00:20:11.010557 28750 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-12-13T00:20:11.014063068+00:00 stderr F I1213 00:20:11.014012 28750 iptables_manager.go:164] IPTables manager: ensure rule - table mangle, chain PREROUTING, protocol IPv4, rule: {[-m mark --mark 0 -j CONNMARK --restore-mark]} 2025-12-13T00:20:11.017754929+00:00 stderr F I1213 00:20:11.017712 28750 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 
2025-12-13T00:20:11.024400983+00:00 stderr F I1213 00:20:11.024299 28750 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-12-13T00:20:11.029792582+00:00 stderr F I1213 00:20:11.029721 28750 iptables.go:358] "Running" command="iptables-save" arguments=["-t","mangle"] 2025-12-13T00:20:11.050471182+00:00 stderr F I1213 00:20:11.050357 28750 iptables_manager.go:164] IPTables manager: ensure rule - table mangle, chain PREROUTING, protocol IPv4, rule: {[-m mark --mark 1008 -j CONNMARK --save-mark]} 2025-12-13T00:20:11.054130223+00:00 stderr F I1213 00:20:11.054096 28750 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-12-13T00:20:11.061799624+00:00 stderr F I1213 00:20:11.061742 28750 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-12-13T00:20:11.068056016+00:00 stderr F I1213 00:20:11.068012 28750 iptables.go:358] "Running" command="iptables-save" arguments=["-t","mangle"] 2025-12-13T00:20:11.113755577+00:00 stderr F I1213 00:20:11.113683 28750 iptables.go:358] "Running" command="iptables-save" arguments=["-t","nat"] 2025-12-13T00:20:11.117803168+00:00 stderr F I1213 00:20:11.117725 28750 iptables.go:358] "Running" command="ip6tables-save" arguments=["-t","nat"] 2025-12-13T00:20:11.120168914+00:00 stderr F I1213 00:20:11.120121 28750 link_network_manager.go:116] Link manager is running 2025-12-13T00:20:11.120168914+00:00 stderr F I1213 00:20:11.120151 28750 default_node_network_controller.go:1128] Default node network controller initialized and ready. 
2025-12-13T00:20:11.121210212+00:00 stderr F I1213 00:20:11.121172 28750 ovspinning_linux.go:39] OVS CPU affinity pinning disabled 2025-12-13T00:20:38.433005748+00:00 stderr F I1213 00:20:38.428975 28750 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 0 items received 2025-12-13T00:20:38.433005748+00:00 stderr F I1213 00:20:38.429351 28750 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 1 items received 2025-12-13T00:20:38.433005748+00:00 stderr F I1213 00:20:38.429618 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42804&timeout=5m8s&timeoutSeconds=308&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.433005748+00:00 stderr F I1213 00:20:38.429646 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42796&timeout=6m40s&timeoutSeconds=400&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.448409344+00:00 stderr F I1213 00:20:38.447567 28750 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 1 items received 2025-12-13T00:20:38.448409344+00:00 stderr F I1213 00:20:38.447998 28750 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.NetworkPolicy total 0 items received 2025-12-13T00:20:38.448409344+00:00 stderr F I1213 00:20:38.448276 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy returned Get 
"https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=42785&timeout=8m21s&timeoutSeconds=501&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.448409344+00:00 stderr F I1213 00:20:38.448344 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42930&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.488329861+00:00 stderr F I1213 00:20:38.488234 28750 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 7 items received 2025-12-13T00:20:38.497107558+00:00 stderr F I1213 00:20:38.492174 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42927&timeout=8m19s&timeoutSeconds=499&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.497107558+00:00 stderr F I1213 00:20:38.497039 28750 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Namespace total 0 items received 2025-12-13T00:20:38.498080234+00:00 stderr F I1213 00:20:38.497309 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace returned Get "https://api-int.crc.testing:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=42781&timeout=5m9s&timeoutSeconds=309&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.498080234+00:00 stderr F I1213 00:20:38.497842 28750 reflector.go:800] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 0 items received 2025-12-13T00:20:38.498080234+00:00 stderr F I1213 00:20:38.498025 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42796&timeout=8m35s&timeoutSeconds=515&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.514126027+00:00 stderr F I1213 00:20:38.514071 28750 reflector.go:800] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: Watch close - *v1alpha1.AdminNetworkPolicy total 0 items received 2025-12-13T00:20:38.514598690+00:00 stderr F I1213 00:20:38.514555 28750 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42798&timeout=6m37s&timeoutSeconds=397&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.526764828+00:00 stderr F I1213 00:20:38.526705 28750 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 0 items received 2025-12-13T00:20:38.526874631+00:00 stderr F I1213 00:20:38.526840 28750 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 0 items received 2025-12-13T00:20:38.527052295+00:00 stderr F I1213 00:20:38.526972 28750 
reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42791&timeout=7m13s&timeoutSeconds=433&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.527416155+00:00 stderr F I1213 00:20:38.527356 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42797&timeout=5m8s&timeoutSeconds=308&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.532886023+00:00 stderr F I1213 00:20:38.532832 28750 reflector.go:800] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: Watch close - *v1alpha1.BaselineAdminNetworkPolicy total 0 items received 2025-12-13T00:20:38.534527658+00:00 stderr F I1213 00:20:38.534469 28750 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/baselineadminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42790&timeout=7m51s&timeoutSeconds=471&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.542615536+00:00 stderr F I1213 00:20:38.542557 28750 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 0 items received 2025-12-13T00:20:38.543130750+00:00 stderr F I1213 00:20:38.542908 28750 
reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42801&timeout=7m55s&timeoutSeconds=475&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.543569651+00:00 stderr F I1213 00:20:38.543498 28750 cni.go:258] [openshift-kube-apiserver/installer-13-crc 0cd59daaff2210a8feba8a95a04d6ee07b96c64856dce07f56b5c1fdad1ceb37 network default NAD default] DEL starting CNI request [openshift-kube-apiserver/installer-13-crc 0cd59daaff2210a8feba8a95a04d6ee07b96c64856dce07f56b5c1fdad1ceb37 network default NAD default] 2025-12-13T00:20:38.544555489+00:00 stderr F I1213 00:20:38.544523 28750 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 0 items received 2025-12-13T00:20:38.544781535+00:00 stderr F I1213 00:20:38.544737 28750 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42791&timeout=7m57s&timeoutSeconds=477&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.547254722+00:00 stderr F I1213 00:20:38.546876 28750 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 0 items received 2025-12-13T00:20:38.548152595+00:00 stderr F I1213 00:20:38.548104 28750 reflector.go:425] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42798&timeout=6m9s&timeoutSeconds=369&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.646607162+00:00 stderr F I1213 00:20:38.646528 28750 cni.go:279] [openshift-kube-apiserver/installer-13-crc 0cd59daaff2210a8feba8a95a04d6ee07b96c64856dce07f56b5c1fdad1ceb37 network default NAD default] DEL finished CNI request [openshift-kube-apiserver/installer-13-crc 0cd59daaff2210a8feba8a95a04d6ee07b96c64856dce07f56b5c1fdad1ceb37 network default NAD default], result "{\"dns\":{}}", err 2025-12-13T00:20:39.368546404+00:00 stderr F I1213 00:20:39.368377 28750 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/baselineadminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42790&timeout=5m31s&timeoutSeconds=331&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.427978107+00:00 stderr F I1213 00:20:39.427870 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42798&timeout=5m53s&timeoutSeconds=353&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.500743991+00:00 stderr F I1213 00:20:39.500682 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get 
"https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42796&timeout=7m45s&timeoutSeconds=465&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.550186046+00:00 stderr F I1213 00:20:39.550108 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=42785&timeout=5m18s&timeoutSeconds=318&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.673694488+00:00 stderr F I1213 00:20:39.673620 28750 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42798&timeout=6m52s&timeoutSeconds=412&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.721406776+00:00 stderr F I1213 00:20:39.721336 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42791&timeout=5m48s&timeoutSeconds=348&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.795021692+00:00 stderr F I1213 00:20:39.794922 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42797&timeout=8m36s&timeoutSeconds=516&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.824739744+00:00 stderr F I1213 00:20:39.824668 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace returned Get "https://api-int.crc.testing:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=42781&timeout=6m59s&timeoutSeconds=419&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.880669384+00:00 stderr F I1213 00:20:39.880588 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42927&timeout=8m21s&timeoutSeconds=501&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.984457564+00:00 stderr F I1213 00:20:39.984386 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42804&timeout=7m12s&timeoutSeconds=432&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.987343402+00:00 stderr F I1213 00:20:39.986927 28750 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42791&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:40.039674263+00:00 stderr F I1213 00:20:40.039555 28750 reflector.go:425] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42796&timeout=6m35s&timeoutSeconds=395&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:40.046316033+00:00 stderr F I1213 00:20:40.046242 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42801&timeout=9m17s&timeoutSeconds=557&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:40.047727241+00:00 stderr F I1213 00:20:40.047669 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42930&timeout=5m7s&timeoutSeconds=307&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:41.157690213+00:00 stderr F I1213 00:20:41.157601 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42798&timeout=9m1s&timeoutSeconds=541&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:41.220272772+00:00 stderr F I1213 00:20:41.220195 28750 
reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=42785&timeout=8m47s&timeoutSeconds=527&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:41.265054181+00:00 stderr F I1213 00:20:41.265000 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42796&timeout=9m41s&timeoutSeconds=581&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:41.423072134+00:00 stderr F I1213 00:20:41.422764 28750 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/baselineadminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42790&timeout=8m21s&timeoutSeconds=501&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:41.477730840+00:00 stderr F I1213 00:20:41.477641 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42797&timeout=9m57s&timeoutSeconds=597&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:41.786610914+00:00 stderr F I1213 00:20:41.786504 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get 
"https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42927&timeout=9m21s&timeoutSeconds=561&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.096220489+00:00 stderr F I1213 00:20:42.096123 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42801&timeout=7m59s&timeoutSeconds=479&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.356088551+00:00 stderr F I1213 00:20:42.356004 28750 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42798&timeout=5m51s&timeoutSeconds=351&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.391028795+00:00 stderr F I1213 00:20:42.390921 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42796&timeout=6m54s&timeoutSeconds=414&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.550816376+00:00 stderr F I1213 00:20:42.549983 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get 
"https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42930&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.667612087+00:00 stderr F I1213 00:20:42.667543 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace returned Get "https://api-int.crc.testing:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=42781&timeout=5m5s&timeoutSeconds=305&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.860672147+00:00 stderr F I1213 00:20:42.860476 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42791&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.955318251+00:00 stderr F I1213 00:20:42.955230 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42804&timeout=7m29s&timeoutSeconds=449&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:43.038084925+00:00 stderr F I1213 00:20:43.037984 28750 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get 
"https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42791&timeout=7m40s&timeoutSeconds=460&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:44.897574654+00:00 stderr F I1213 00:20:44.897498 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=42785&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:45.344304409+00:00 stderr F I1213 00:20:45.344147 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42796&timeout=9m58s&timeoutSeconds=598&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:46.218160909+00:00 stderr F I1213 00:20:46.217795 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace returned Get "https://api-int.crc.testing:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=42781&timeout=9m53s&timeoutSeconds=593&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:46.304340575+00:00 stderr F I1213 00:20:46.304265 28750 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/baselineadminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42790&timeout=8m10s&timeoutSeconds=490&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 
2025-12-13T00:20:46.391463616+00:00 stderr F I1213 00:20:46.391346 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42791&timeout=8m32s&timeoutSeconds=512&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:46.495378220+00:00 stderr F I1213 00:20:46.495303 28750 reflector.go:425] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy returned Get "https://api-int.crc.testing:6443/apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies?allowWatchBookmarks=true&resourceVersion=42798&timeout=9m53s&timeoutSeconds=593&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:46.620796755+00:00 stderr F I1213 00:20:46.620706 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42804&timeout=5m15s&timeoutSeconds=315&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:46.702447358+00:00 stderr F I1213 00:20:46.702375 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42796&timeout=9m19s&timeoutSeconds=559&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:46.864142111+00:00 stderr F I1213 00:20:46.864066 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get 
"https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42930&timeout=8m53s&timeoutSeconds=533&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:46.959063272+00:00 stderr F I1213 00:20:46.958912 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42801&timeout=6m36s&timeoutSeconds=396&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:47.110594131+00:00 stderr F I1213 00:20:47.109978 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42797&timeout=5m14s&timeoutSeconds=314&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:47.122917164+00:00 stderr F I1213 00:20:47.122824 28750 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42798&timeout=6m36s&timeoutSeconds=396&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:47.603656386+00:00 stderr F I1213 00:20:47.603561 28750 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod 
returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42927&timeout=7m1s&timeoutSeconds=421&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:49.409338853+00:00 stderr F I1213 00:20:49.409164 28750 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42791&timeout=7m56s&timeoutSeconds=476&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:54.892122524+00:00 stderr F I1213 00:20:54.892056 28750 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.NetworkPolicy closed with: too old resource version: 42785 (42935) 2025-12-13T00:20:54.946561654+00:00 stderr F I1213 00:20:54.946499 28750 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP closed with: too old resource version: 42791 (42940) 2025-12-13T00:20:55.403878534+00:00 stderr F I1213 00:20:55.403764 28750 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS closed with: too old resource version: 42798 (42940) 2025-12-13T00:20:55.443586575+00:00 stderr F I1213 00:20:55.443477 28750 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node closed with: too old resource version: 42804 (42935) 2025-12-13T00:20:56.088001095+00:00 stderr F I1213 00:20:56.086664 28750 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall closed with: too old 
resource version: 42796 (42967) 2025-12-13T00:20:56.151775996+00:00 stderr F I1213 00:20:56.150697 28750 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Namespace closed with: too old resource version: 42781 (42935) 2025-12-13T00:20:56.538366838+00:00 stderr F I1213 00:20:56.538288 28750 reflector.go:449] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.AdminNetworkPolicy closed with: too old resource version: 42798 (42967) 2025-12-13T00:20:57.433990375+00:00 stderr F I1213 00:20:57.433869 28750 reflector.go:449] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition closed with: too old resource version: 42791 (42940) 2025-12-13T00:20:57.984196863+00:00 stderr F I1213 00:20:57.983909 28750 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService closed with: too old resource version: 42797 (42967) 2025-12-13T00:20:58.166633546+00:00 stderr F I1213 00:20:58.166391 28750 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute closed with: too old resource version: 42801 (42940) 2025-12-13T00:20:58.268599327+00:00 stderr F I1213 00:20:58.268536 28750 reflector.go:449] sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141: watch of *v1alpha1.BaselineAdminNetworkPolicy closed with: too old resource version: 42790 (42967) 2025-12-13T00:20:58.358856844+00:00 stderr F I1213 00:20:58.358785 28750 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods?labelSelector=app%3Dovnkube-master&resourceVersion=0 
2025-12-13T00:20:58.359014398+00:00 stderr F I1213 00:20:58.358971 28750 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42796&timeout=7m4s&timeoutSeconds=424&watch=true 2025-12-13T00:20:58.359982493+00:00 stderr F I1213 00:20:58.359925 28750 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service closed with: too old resource version: 42796 (42935) 2025-12-13T00:20:59.177166155+00:00 stderr F I1213 00:20:59.176471 28750 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod closed with: too old resource version: 42927 (42935) 2025-12-13T00:20:59.655064821+00:00 stderr F I1213 00:20:59.654994 28750 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42930&timeout=5m2s&timeoutSeconds=302&watch=true 2025-12-13T00:20:59.657342482+00:00 stderr F I1213 00:20:59.657300 28750 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice closed with: too old resource version: 42930 (42935) 2025-12-13T00:21:08.457706847+00:00 stderr F I1213 00:21:08.457631 28750 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:08.459660449+00:00 stderr F I1213 00:21:08.459617 28750 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:09.434158567+00:00 stderr F I1213 00:21:09.434085 28750 reflector.go:325] Listing and watching 
*v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:09.437333782+00:00 stderr F I1213 00:21:09.437296 28750 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:09.437506097+00:00 stderr F I1213 00:21:09.437444 28750 namespace.go:144] [default] updating namespace 2025-12-13T00:21:09.437506097+00:00 stderr F I1213 00:21:09.437495 28750 namespace.go:144] [kube-system] updating namespace 2025-12-13T00:21:09.437520977+00:00 stderr F I1213 00:21:09.437502 28750 namespace.go:144] [hostpath-provisioner] updating namespace 2025-12-13T00:21:09.437520977+00:00 stderr F I1213 00:21:09.437513 28750 namespace.go:144] [kube-node-lease] updating namespace 2025-12-13T00:21:09.437530787+00:00 stderr F I1213 00:21:09.437523 28750 namespace.go:144] [openshift-apiserver-operator] updating namespace 2025-12-13T00:21:09.437539118+00:00 stderr F I1213 00:21:09.437529 28750 namespace.go:144] [openshift] updating namespace 2025-12-13T00:21:09.437547478+00:00 stderr F I1213 00:21:09.437539 28750 namespace.go:144] [kube-public] updating namespace 2025-12-13T00:21:09.437590739+00:00 stderr F I1213 00:21:09.437563 28750 namespace.go:144] [openshift-apiserver] updating namespace 2025-12-13T00:21:09.437590739+00:00 stderr F I1213 00:21:09.437570 28750 namespace.go:144] [openshift-authentication] updating namespace 2025-12-13T00:21:09.437590739+00:00 stderr F I1213 00:21:09.437583 28750 namespace.go:144] [openshift-authentication-operator] updating namespace 2025-12-13T00:21:09.437633350+00:00 stderr F I1213 00:21:09.437612 28750 namespace.go:144] [openshift-cloud-network-config-controller] updating namespace 2025-12-13T00:21:09.437633350+00:00 stderr F I1213 00:21:09.437612 28750 namespace.go:144] [openshift-cloud-platform-infra] updating namespace 2025-12-13T00:21:09.437644830+00:00 stderr F I1213 00:21:09.437639 28750 namespace.go:144] [openshift-cluster-machine-approver] updating namespace 
2025-12-13T00:21:09.437662861+00:00 stderr F I1213 00:21:09.437652 28750 namespace.go:144] [openshift-cluster-samples-operator] updating namespace 2025-12-13T00:21:09.437699042+00:00 stderr F I1213 00:21:09.437677 28750 namespace.go:144] [openshift-cluster-version] updating namespace 2025-12-13T00:21:09.437699042+00:00 stderr F I1213 00:21:09.437686 28750 namespace.go:144] [openshift-cluster-storage-operator] updating namespace 2025-12-13T00:21:09.437709942+00:00 stderr F I1213 00:21:09.437697 28750 namespace.go:144] [openshift-config-managed] updating namespace 2025-12-13T00:21:09.437709942+00:00 stderr F I1213 00:21:09.437706 28750 namespace.go:144] [openshift-config] updating namespace 2025-12-13T00:21:09.437748523+00:00 stderr F I1213 00:21:09.437726 28750 namespace.go:144] [openshift-config-operator] updating namespace 2025-12-13T00:21:09.437748523+00:00 stderr F I1213 00:21:09.437740 28750 namespace.go:144] [openshift-console-operator] updating namespace 2025-12-13T00:21:09.437793434+00:00 stderr F I1213 00:21:09.437772 28750 namespace.go:144] [openshift-console] updating namespace 2025-12-13T00:21:09.437861236+00:00 stderr F I1213 00:21:09.437835 28750 namespace.go:144] [openshift-controller-manager] updating namespace 2025-12-13T00:21:09.437861236+00:00 stderr F I1213 00:21:09.437837 28750 namespace.go:144] [openshift-console-user-settings] updating namespace 2025-12-13T00:21:09.437916008+00:00 stderr F I1213 00:21:09.437896 28750 namespace.go:144] [openshift-controller-manager-operator] updating namespace 2025-12-13T00:21:09.437980580+00:00 stderr F I1213 00:21:09.437957 28750 namespace.go:144] [openshift-etcd-operator] updating namespace 2025-12-13T00:21:09.437980580+00:00 stderr F I1213 00:21:09.437965 28750 namespace.go:144] [openshift-host-network] updating namespace 2025-12-13T00:21:09.437980580+00:00 stderr F I1213 00:21:09.437962 28750 namespace.go:144] [openshift-image-registry] updating namespace 2025-12-13T00:21:09.437998471+00:00 stderr F I1213 
00:21:09.437906 28750 namespace.go:144] [openshift-dns] updating namespace 2025-12-13T00:21:09.438010251+00:00 stderr F I1213 00:21:09.437994 28750 namespace.go:144] [openshift-ingress] updating namespace 2025-12-13T00:21:09.438018831+00:00 stderr F I1213 00:21:09.438006 28750 namespace.go:144] [openshift-infra] updating namespace 2025-12-13T00:21:09.438027812+00:00 stderr F I1213 00:21:09.438016 28750 namespace.go:144] [openshift-etcd] updating namespace 2025-12-13T00:21:09.438073683+00:00 stderr F I1213 00:21:09.438049 28750 namespace.go:144] [openshift-kni-infra] updating namespace 2025-12-13T00:21:09.438073683+00:00 stderr F I1213 00:21:09.437902 28750 namespace.go:144] [openshift-dns-operator] updating namespace 2025-12-13T00:21:09.438073683+00:00 stderr F I1213 00:21:09.438060 28750 namespace.go:144] [openshift-ingress-operator] updating namespace 2025-12-13T00:21:09.438101913+00:00 stderr F I1213 00:21:09.438054 28750 namespace.go:144] [openshift-ingress-canary] updating namespace 2025-12-13T00:21:09.438144885+00:00 stderr F I1213 00:21:09.438123 28750 namespace.go:144] [openshift-kube-scheduler] updating namespace 2025-12-13T00:21:09.438165815+00:00 stderr F I1213 00:21:09.438052 28750 namespace.go:144] [openshift-kube-apiserver] updating namespace 2025-12-13T00:21:09.438211686+00:00 stderr F I1213 00:21:09.438192 28750 namespace.go:144] [openshift-kube-scheduler-operator] updating namespace 2025-12-13T00:21:09.438220567+00:00 stderr F I1213 00:21:09.438196 28750 namespace.go:144] [openshift-kube-apiserver-operator] updating namespace 2025-12-13T00:21:09.438227827+00:00 stderr F I1213 00:21:09.438218 28750 namespace.go:144] [openshift-kube-storage-version-migrator-operator] updating namespace 2025-12-13T00:21:09.438234517+00:00 stderr F I1213 00:21:09.438105 28750 namespace.go:144] [openshift-kube-controller-manager-operator] updating namespace 2025-12-13T00:21:09.438242967+00:00 stderr F I1213 00:21:09.438103 28750 namespace.go:144] 
[openshift-kube-controller-manager] updating namespace 2025-12-13T00:21:09.438314399+00:00 stderr F I1213 00:21:09.438260 28750 namespace.go:144] [openshift-kube-storage-version-migrator] updating namespace 2025-12-13T00:21:09.438314399+00:00 stderr F I1213 00:21:09.438308 28750 namespace.go:144] [openshift-machine-api] updating namespace 2025-12-13T00:21:09.438353810+00:00 stderr F I1213 00:21:09.438329 28750 namespace.go:144] [openshift-marketplace] updating namespace 2025-12-13T00:21:09.438353810+00:00 stderr F I1213 00:21:09.438346 28750 namespace.go:144] [openshift-multus] updating namespace 2025-12-13T00:21:09.438382411+00:00 stderr F I1213 00:21:09.438372 28750 namespace.go:144] [openshift-monitoring] updating namespace 2025-12-13T00:21:09.438424962+00:00 stderr F I1213 00:21:09.438400 28750 namespace.go:144] [openshift-network-node-identity] updating namespace 2025-12-13T00:21:09.438459693+00:00 stderr F I1213 00:21:09.438435 28750 namespace.go:144] [openshift-network-operator] updating namespace 2025-12-13T00:21:09.438472593+00:00 stderr F I1213 00:21:09.438462 28750 namespace.go:144] [openshift-nutanix-infra] updating namespace 2025-12-13T00:21:09.438479934+00:00 stderr F I1213 00:21:09.438375 28750 namespace.go:144] [openshift-network-diagnostics] updating namespace 2025-12-13T00:21:09.438507334+00:00 stderr F I1213 00:21:09.438335 28750 namespace.go:144] [openshift-machine-config-operator] updating namespace 2025-12-13T00:21:09.438533035+00:00 stderr F I1213 00:21:09.438512 28750 namespace.go:144] [openshift-node] updating namespace 2025-12-13T00:21:09.438580236+00:00 stderr F I1213 00:21:09.438569 28750 namespace.go:144] [openshift-oauth-apiserver] updating namespace 2025-12-13T00:21:09.438627918+00:00 stderr F I1213 00:21:09.438617 28750 namespace.go:144] [openshift-operator-lifecycle-manager] updating namespace 2025-12-13T00:21:09.438658458+00:00 stderr F I1213 00:21:09.438640 28750 namespace.go:144] [openshift-operators] updating namespace 
2025-12-13T00:21:09.438666779+00:00 stderr F I1213 00:21:09.438658 28750 namespace.go:144] [openshift-ovirt-infra] updating namespace 2025-12-13T00:21:09.438692009+00:00 stderr F I1213 00:21:09.438617 28750 namespace.go:144] [openshift-openstack-infra] updating namespace 2025-12-13T00:21:09.438718280+00:00 stderr F I1213 00:21:09.438698 28750 namespace.go:144] [openshift-route-controller-manager] updating namespace 2025-12-13T00:21:09.438728600+00:00 stderr F I1213 00:21:09.438721 28750 namespace.go:144] [openshift-service-ca-operator] updating namespace 2025-12-13T00:21:09.438735390+00:00 stderr F I1213 00:21:09.438686 28750 namespace.go:144] [openshift-ovn-kubernetes] updating namespace 2025-12-13T00:21:09.438742691+00:00 stderr F I1213 00:21:09.438726 28750 namespace.go:144] [openshift-service-ca] updating namespace 2025-12-13T00:21:09.438771871+00:00 stderr F I1213 00:21:09.438747 28750 namespace.go:144] [openshift-user-workload-monitoring] updating namespace 2025-12-13T00:21:09.438771871+00:00 stderr F I1213 00:21:09.438756 28750 namespace.go:144] [openstack] updating namespace 2025-12-13T00:21:09.438771871+00:00 stderr F I1213 00:21:09.438758 28750 namespace.go:144] [openstack-operators] updating namespace 2025-12-13T00:21:09.438783942+00:00 stderr F I1213 00:21:09.438777 28750 namespace.go:144] [openshift-vsphere-infra] updating namespace 2025-12-13T00:21:09.438820753+00:00 stderr F I1213 00:21:09.438800 28750 namespace.go:100] [service-telemetry] adding namespace 2025-12-13T00:21:09.440423935+00:00 stderr F I1213 00:21:09.440373 28750 namespace.go:104] [service-telemetry] adding namespace took 1.556782ms 2025-12-13T00:21:12.934680758+00:00 stderr F I1213 00:21:12.934611 28750 reflector.go:325] Listing and watching *v1alpha1.BaselineAdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-12-13T00:21:12.936346752+00:00 stderr F I1213 00:21:12.936317 28750 reflector.go:351] Caches populated for 
*v1alpha1.BaselineAdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-12-13T00:21:14.638622597+00:00 stderr F I1213 00:21:14.638540 28750 reflector.go:325] Listing and watching *v1alpha1.AdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-12-13T00:21:14.638728120+00:00 stderr F I1213 00:21:14.638674 28750 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:14.642114731+00:00 stderr F I1213 00:21:14.640796 28750 reflector.go:351] Caches populated for *v1alpha1.AdminNetworkPolicy from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141 2025-12-13T00:21:14.642836490+00:00 stderr F I1213 00:21:14.642796 28750 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:14.955167349+00:00 stderr F I1213 00:21:14.955116 28750 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:14.956832924+00:00 stderr F I1213 00:21:14.956793 28750 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:16.094277478+00:00 stderr F I1213 00:21:16.094209 28750 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-12-13T00:21:16.095872141+00:00 stderr F I1213 00:21:16.095828 28750 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 
2025-12-13T00:21:16.661810362+00:00 stderr F I1213 00:21:16.661734 28750 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:16.664366781+00:00 stderr F I1213 00:21:16.663676 28750 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:17.720401148+00:00 stderr F I1213 00:21:17.720319 28750 reflector.go:325] Listing and watching *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:17.721752015+00:00 stderr F I1213 00:21:17.721705 28750 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:19.208157664+00:00 stderr F I1213 00:21:19.208092 28750 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:19.208194645+00:00 stderr F I1213 00:21:19.208158 28750 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:19.209204243+00:00 stderr F I1213 00:21:19.209163 28750 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:19.210406505+00:00 stderr F I1213 00:21:19.210360 28750 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:20.166972107+00:00 stderr F I1213 00:21:20.166863 28750 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:20.169052824+00:00 stderr F I1213 00:21:20.168964 28750 reflector.go:351] Caches populated for 
*v1.Node from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:20.169364712+00:00 stderr F I1213 00:21:20.169292 28750 master.go:627] Adding or Updating Node "crc" 2025-12-13T00:21:20.169364712+00:00 stderr F I1213 00:21:20.169333 28750 hybrid.go:140] Removing node crc hybrid overlay port 2025-12-13T00:21:20.664237517+00:00 stderr F I1213 00:21:20.664095 28750 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:20.666142509+00:00 stderr F I1213 00:21:20.666120 28750 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:21.371585635+00:00 stderr F I1213 00:21:21.371488 28750 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:21.386126398+00:00 stderr F I1213 00:21:21.386036 28750 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:21.386824306+00:00 stderr F I1213 00:21:21.386775 28750 obj_retry.go:453] Detected object openshift-image-registry/image-pruner-29426400-tlv26 of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.386824306+00:00 stderr F I1213 00:21:21.386809 28750 obj_retry.go:453] Detected object openshift-image-registry/image-pruner-29426400-tlv26 of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387016161+00:00 stderr F I1213 00:21:21.386981 28750 obj_retry.go:453] Detected object openshift-kube-apiserver/installer-12-crc of type *v1.Pod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387016161+00:00 stderr F I1213 00:21:21.387000 28750 obj_retry.go:453] Detected object openshift-kube-apiserver/installer-12-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387061313+00:00 stderr F I1213 00:21:21.387035 28750 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-13-crc of type *v1.Pod in terminal state (e.g. completed) during update event: will remove it 2025-12-13T00:21:21.387061313+00:00 stderr F I1213 00:21:21.387048 28750 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-12-13T00:21:21.387070983+00:00 stderr F I1213 00:21:21.387065 28750 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-12-13T00:21:21.387145435+00:00 stderr F I1213 00:21:21.387108 28750 obj_retry.go:453] Detected object openshift-kube-apiserver/installer-9-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387145435+00:00 stderr F I1213 00:21:21.387136 28750 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-12-13T00:21:21.387145435+00:00 stderr F I1213 00:21:21.387140 28750 obj_retry.go:453] Detected object openshift-kube-apiserver/installer-9-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387156065+00:00 stderr F I1213 00:21:21.387129 28750 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-12-13T00:21:21.387163685+00:00 stderr F I1213 00:21:21.387156 28750 factory.go:1465] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-12-13T00:21:21.387172056+00:00 stderr F I1213 00:21:21.387065 28750 pods.go:151] Deleting pod: openshift-kube-apiserver/installer-13-crc 2025-12-13T00:21:21.387204166+00:00 stderr F I1213 00:21:21.387186 28750 handler.go:411] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-12-13T00:21:21.387231487+00:00 stderr F I1213 00:21:21.387217 28750 handler.go:411] Object openshift-kube-apiserver/kube-apiserver-crc is replaced, invoking delete followed by add handler 2025-12-13T00:21:21.387404672+00:00 stderr F I1213 00:21:21.387377 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-11-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387404672+00:00 stderr F I1213 00:21:21.387393 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-11-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387414762+00:00 stderr F I1213 00:21:21.387403 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-10-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387414762+00:00 stderr F I1213 00:21:21.387409 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-10-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387504504+00:00 stderr F I1213 00:21:21.387483 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-10-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387513225+00:00 stderr F I1213 00:21:21.387500 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-10-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387612567+00:00 stderr F I1213 00:21:21.387589 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-8-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387612567+00:00 stderr F I1213 00:21:21.387599 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-8-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387627018+00:00 stderr F I1213 00:21:21.387614 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-11-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387627018+00:00 stderr F I1213 00:21:21.387619 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-11-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387638188+00:00 stderr F I1213 00:21:21.387632 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-9-crc of type *v1.Pod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387646968+00:00 stderr F I1213 00:21:21.387638 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/revision-pruner-9-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387769911+00:00 stderr F I1213 00:21:21.387740 28750 obj_retry.go:453] Detected object openshift-kube-scheduler/installer-7-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387769911+00:00 stderr F I1213 00:21:21.387764 28750 obj_retry.go:453] Detected object openshift-kube-scheduler/installer-7-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387780262+00:00 stderr F I1213 00:21:21.387748 28750 obj_retry.go:453] Detected object openshift-kube-scheduler/installer-8-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.387791002+00:00 stderr F I1213 00:21:21.387777 28750 obj_retry.go:453] Detected object openshift-kube-scheduler/installer-8-crc of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.388450370+00:00 stderr F I1213 00:21:21.388419 28750 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.388450370+00:00 stderr F I1213 00:21:21.388433 28750 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5 of type *v1.Pod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2025-12-13T00:21:21.388450370+00:00 stderr F I1213 00:21:21.388436 28750 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.388450370+00:00 stderr F I1213 00:21:21.388445 28750 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5 of type *factory.egressIPPod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.388551522+00:00 stderr F I1213 00:21:21.388501 28750 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.388551522+00:00 stderr F I1213 00:21:21.388515 28750 obj_retry.go:453] Detected object openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh of type *factory.egressIPPod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2025-12-13T00:21:21.388956124+00:00 stderr F E1213 00:21:21.388899 28750 admin_network_policy_controller.go:535] couldn't get key for object {Key:openshift-kube-apiserver/kube-apiserver-startup-monitor-crc Obj:&Pod{ObjectMeta:{kube-apiserver-startup-monitor-crc openshift-kube-apiserver b415c03d-7cbe-497e-b3a2-0d31a51a7149 42927 0 2025-12-13 00:20:36 +0000 UTC map[revision:13] map[kubernetes.io/config.hash:7dae59545f22b3fb679a7fbf878a6379 kubernetes.io/config.mirror:7dae59545f22b3fb679a7fbf878a6379 kubernetes.io/config.seen:2025-12-13T00:20:36.119570121Z kubernetes.io/config.source:file] [{v1 Node crc c83c88d3-f34d-4083-a59d-1c50f90f89b8 0xc000a2ae6a }] [] []},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{Container{Name:startup-monitor,Image:quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*5,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:crc,HostNetwork:true,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:,Operator:Exists,Value:,Effect:,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-node-critical,Priority:*2000001000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerS
tatus{},QOSClass:Burstable,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{},},}}: object has no meta: object does not implement the Object interfaces 2025-12-13T00:21:21.388956124+00:00 stderr F E1213 00:21:21.388907 28750 egressqos.go:876] couldn't get key for object {Key:openshift-kube-apiserver/kube-apiserver-startup-monitor-crc Obj:&Pod{ObjectMeta:{kube-apiserver-startup-monitor-crc openshift-kube-apiserver b415c03d-7cbe-497e-b3a2-0d31a51a7149 42927 0 2025-12-13 00:20:36 +0000 UTC map[revision:13] map[kubernetes.io/config.hash:7dae59545f22b3fb679a7fbf878a6379 kubernetes.io/config.mirror:7dae59545f22b3fb679a7fbf878a6379 kubernetes.io/config.seen:2025-12-13T00:20:36.119570121Z kubernetes.io/config.source:file] [{v1 Node crc c83c88d3-f34d-4083-a59d-1c50f90f89b8 0xc000a2ae6a }] [] []},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{Container{Name:startup-monitor,Image:quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*5,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:crc,HostNetwork:true,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:,Operator:Exists,Value:,Effect:,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-node-critical,Priority:*2000001000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerS
tatus{},QOSClass:Burstable,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{},},}}: object has no meta: object does not implement the Object interfaces 2025-12-13T00:21:21.454974055+00:00 stderr F I1213 00:21:21.454874 28750 pods.go:185] Attempting to release IPs for pod: openshift-kube-apiserver/installer-13-crc, ips: 10.217.0.42 2025-12-13T00:21:21.455045267+00:00 stderr F I1213 00:21:21.455009 28750 obj_retry.go:459] Detected object openshift-kube-apiserver/installer-13-crc of type *factory.egressIPPod in terminal state (e.g. completed) during update event: will remove it 2025-12-13T00:21:21.455056477+00:00 stderr F I1213 00:21:21.455048 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-10-retry-1-crc of type *v1.Pod in terminal state (e.g. completed) will be ignored as it has already been processed 2025-12-13T00:21:21.455067007+00:00 stderr F I1213 00:21:21.455056 28750 obj_retry.go:453] Detected object openshift-kube-controller-manager/installer-10-retry-1-crc of type *factory.egressIPPod in terminal state (e.g. 
completed) will be ignored as it has already been processed 2025-12-13T00:21:21.750694635+00:00 stderr F I1213 00:21:21.749160 28750 namespace.go:144] [service-telemetry] updating namespace 2025-12-13T00:21:50.362021918+00:00 stderr F I1213 00:21:50.361797 28750 namespace.go:100] [openshift-must-gather-zffxd] adding namespace 2025-12-13T00:21:50.465763887+00:00 stderr F I1213 00:21:50.465718 28750 namespace.go:104] [openshift-must-gather-zffxd] adding namespace took 103.897962ms 2025-12-13T00:21:50.466740740+00:00 stderr F I1213 00:21:50.466725 28750 namespace.go:144] [openshift-must-gather-zffxd] updating namespace 2025-12-13T00:21:51.451954871+00:00 stderr F I1213 00:21:51.451854 28750 namespace.go:144] [openshift-must-gather-zffxd] updating namespace

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovn-controller/0.log

2025-12-13T00:20:03.451648858+00:00 stderr F ++ K8S_NODE=crc 2025-12-13T00:20:03.451648858+00:00 stderr F ++ [[ -n crc ]] 2025-12-13T00:20:03.451648858+00:00 stderr F ++ [[ -f /env/crc ]] 2025-12-13T00:20:03.451648858+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-12-13T00:20:03.451901584+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid
2025-12-13T00:20:03.451901584+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-12-13T00:20:03.451901584+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-12-13T00:20:03.451901584+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-12-13T00:20:03.451901584+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-12-13T00:20:03.451901584+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-12-13T00:20:03.451901584+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-12-13T00:20:03.451901584+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-12-13T00:20:03.451901584+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-12-13T00:20:03.452927352+00:00 stderr F + start-ovn-controller info 2025-12-13T00:20:03.452927352+00:00 stderr F + local log_level=info 2025-12-13T00:20:03.452927352+00:00 stderr F + [[ 1 -ne 1 ]] 2025-12-13T00:20:03.453395765+00:00 stderr F ++ date -Iseconds 2025-12-13T00:20:03.456597443+00:00 stdout F 2025-12-13T00:20:03+00:00 - starting ovn-controller 2025-12-13T00:20:03.456616644+00:00 stderr F + echo '2025-12-13T00:20:03+00:00 - starting ovn-controller' 2025-12-13T00:20:03.456616644+00:00 stderr F + exec ovn-controller unix:/var/run/openvswitch/db.sock -vfile:off --no-chdir --pidfile=/var/run/ovn/ovn-controller.pid --syslog-method=null --log-file=/var/log/ovn/acl-audit-log.log -vFACILITY:local0 -vconsole:info -vconsole:acl_log:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' -vsyslog:acl_log:info -vfile:acl_log:info 2025-12-13T00:20:03.464094110+00:00 stderr F 2025-12-13T00:20:03Z|00001|vlog|INFO|opened log file /var/log/ovn/acl-audit-log.log 2025-12-13T00:20:03.464094110+00:00 stderr F 2025-12-13T00:20:03.463Z|00002|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting... 
2025-12-13T00:20:03.464094110+00:00 stderr F 2025-12-13T00:20:03.463Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected 2025-12-13T00:20:03.469235372+00:00 stderr F 2025-12-13T00:20:03.469Z|00004|main|INFO|OVN internal version is : [24.03.3-20.33.0-72.6] 2025-12-13T00:20:03.469235372+00:00 stderr F 2025-12-13T00:20:03.469Z|00005|main|INFO|OVS IDL reconnected, force recompute. 2025-12-13T00:20:03.469284184+00:00 stderr F 2025-12-13T00:20:03.469Z|00006|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2025-12-13T00:20:03.469284184+00:00 stderr F 2025-12-13T00:20:03.469Z|00007|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2025-12-13T00:20:03.469292034+00:00 stderr F 2025-12-13T00:20:03.469Z|00008|main|INFO|OVNSB IDL reconnected, force recompute. 2025-12-13T00:20:04.470664785+00:00 stderr F 2025-12-13T00:20:04.470Z|00009|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2025-12-13T00:20:04.470664785+00:00 stderr F 2025-12-13T00:20:04.470Z|00010|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2025-12-13T00:20:04.470664785+00:00 stderr F 2025-12-13T00:20:04.470Z|00011|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: waiting 2 seconds before reconnect 2025-12-13T00:20:06.472922638+00:00 stderr F 2025-12-13T00:20:06.472Z|00012|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2025-12-13T00:20:06.472922638+00:00 stderr F 2025-12-13T00:20:06.472Z|00013|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2025-12-13T00:20:06.472922638+00:00 stderr F 2025-12-13T00:20:06.472Z|00014|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: waiting 4 seconds before reconnect 2025-12-13T00:20:10.473249025+00:00 stderr F 2025-12-13T00:20:10.473Z|00015|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 
2025-12-13T00:20:10.473249025+00:00 stderr F 2025-12-13T00:20:10.473Z|00016|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connected 2025-12-13T00:20:10.545959530+00:00 stderr F 2025-12-13T00:20:10.545Z|00017|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2025-12-13T00:20:10.545959530+00:00 stderr F 2025-12-13T00:20:10.545Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2025-12-13T00:20:10.546712281+00:00 stderr F 2025-12-13T00:20:10.546Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2025-12-13T00:20:10.546712281+00:00 stderr F 2025-12-13T00:20:10.546Z|00020|features|INFO|OVS Feature: ct_zero_snat, state: supported 2025-12-13T00:20:10.546712281+00:00 stderr F 2025-12-13T00:20:10.546Z|00021|features|INFO|OVS Feature: ct_flush, state: supported 2025-12-13T00:20:10.546712281+00:00 stderr F 2025-12-13T00:20:10.546Z|00022|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported 2025-12-13T00:20:10.546712281+00:00 stderr F 2025-12-13T00:20:10.546Z|00023|main|INFO|OVS feature set changed, force recompute. 2025-12-13T00:20:10.546712281+00:00 stderr F 2025-12-13T00:20:10.546Z|00024|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2025-12-13T00:20:10.546712281+00:00 stderr F 2025-12-13T00:20:10.546Z|00025|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2025-12-13T00:20:10.546930127+00:00 stderr F 2025-12-13T00:20:10.546Z|00026|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2025-12-13T00:20:10.546965258+00:00 stderr F 2025-12-13T00:20:10.546Z|00027|main|INFO|OVS OpenFlow connection reconnected,force recompute. 2025-12-13T00:20:10.547333589+00:00 stderr F 2025-12-13T00:20:10.547Z|00028|main|INFO|OVS feature set changed, force recompute. 
2025-12-13T00:20:10.572156772+00:00 stderr F 2025-12-13T00:20:10.572Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2025-12-13T00:20:10.572156772+00:00 stderr F 2025-12-13T00:20:10.572Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2025-12-13T00:20:10.572199424+00:00 stderr F 2025-12-13T00:20:10.572Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2025-12-13T00:20:10.572199424+00:00 stderr F 2025-12-13T00:20:10.572Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2025-12-13T00:20:10.572779050+00:00 stderr F 2025-12-13T00:20:10.572Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2025-12-13T00:20:10.573015617+00:00 stderr F 2025-12-13T00:20:10.572Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2025-12-13T00:20:13.989423302+00:00 stderr F 2025-12-13T00:20:13.989Z|00029|memory|INFO|19072 kB peak resident set size after 10.5 seconds 2025-12-13T00:20:13.989423302+00:00 stderr F 2025-12-13T00:20:13.989Z|00030|memory|INFO|idl-cells-OVN_Southbound:12454 idl-cells-Open_vSwitch:3087 lflow-cache-entries-cache-expr:270 lflow-cache-entries-cache-matches:602 lflow-cache-size-KB:697 local_datapath_usage-KB:1 ofctrl_desired_flow_usage-KB:692 ofctrl_installed_flow_usage-KB:509 ofctrl_sb_flow_ref_usage-KB:273 2025-12-13T00:20:38.555241546+00:00 stderr F 2025-12-13T00:20:38.555Z|00031|binding|INFO|Releasing lport openshift-kube-apiserver_installer-13-crc from this chassis (sb_readonly=0) 2025-12-13T00:20:38.555241546+00:00 stderr F 2025-12-13T00:20:38.555Z|00032|if_status|WARN|Trying to release unknown interface openshift-kube-apiserver_installer-13-crc 2025-12-13T00:20:38.555241546+00:00 stderr F 2025-12-13T00:20:38.555Z|00033|binding|INFO|Setting lport openshift-kube-apiserver_installer-13-crc down in Southbound 2025-12-13T00:21:06.431699255+00:00 
stderr F 2025-12-13T00:21:06.431Z|00034|memory_trim|INFO|Detected inactivity (last active 30019 ms ago): trimming memory 2025-12-13T00:21:51.466533081+00:00 stderr F 2025-12-13T00:21:51.466Z|00035|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/sbdb/0.log

2025-12-13T00:20:06.521852557+00:00 stderr F + [[ -f /env/_master ]] 2025-12-13T00:20:06.521852557+00:00 stderr F + .
/ovnkube-lib/ovnkube-lib.sh 2025-12-13T00:20:06.521852557+00:00 stderr F ++ set -x 2025-12-13T00:20:06.521985541+00:00 stderr F ++ K8S_NODE= 2025-12-13T00:20:06.521985541+00:00 stderr F ++ [[ -n '' ]] 2025-12-13T00:20:06.521985541+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid 2025-12-13T00:20:06.521985541+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid 2025-12-13T00:20:06.521985541+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log 2025-12-13T00:20:06.521985541+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock 2025-12-13T00:20:06.521985541+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid 2025-12-13T00:20:06.521985541+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock 2025-12-13T00:20:06.521985541+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl 2025-12-13T00:20:06.521985541+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid 2025-12-13T00:20:06.521985541+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock 2025-12-13T00:20:06.521985541+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl 2025-12-13T00:20:06.522920796+00:00 stderr F + trap quit-sbdb TERM INT 2025-12-13T00:20:06.522954607+00:00 stderr F + start-sbdb info 2025-12-13T00:20:06.522966998+00:00 stderr F + local log_level=info 2025-12-13T00:20:06.522974868+00:00 stderr F + [[ 1 -ne 1 ]] 2025-12-13T00:20:06.523229585+00:00 stderr F + wait 28624 2025-12-13T00:20:06.523478052+00:00 stderr F + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor --db-sb-sock=/var/run/ovn/ovnsb_db.sock '--ovn-sb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_sb_ovsdb 2025-12-13T00:20:06.619382756+00:00 stderr F 2025-12-13T00:20:06.619Z|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-sb.log 2025-12-13T00:20:06.711486686+00:00 stderr F 2025-12-13T00:20:06.711Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 3.3.1 2025-12-13T00:20:16.717525769+00:00 stderr F 
2025-12-13T00:20:16.717Z|00003|memory|INFO|16640 kB peak resident set size after 10.1 seconds 2025-12-13T00:20:16.717977701+00:00 stderr F 2025-12-13T00:20:16.717Z|00004|memory|INFO|atoms:16113 cells:14306 json-caches:2 monitors:5 n-weak-refs:244 sessions:3 ././@LongLink0000644000000000000000000000025600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kubecfg-setup/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015117130654033072 5ustar zuulzuul././@LongLink0000644000000000000000000000026300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kubecfg-setup/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000000000015117130646033063 0ustar zuulzuul././@LongLink0000644000000000000000000000024500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/nbdb/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000755000175000017500000000000015117130654033072 5ustar zuulzuul././@LongLink0000644000000000000000000000025200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/nbdb/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernet0000644000175000017500000000455715117130646033110 0ustar zuulzuul2025-12-13T00:20:04.257316123+00:00 stderr F + [[ -f /env/_master ]] 
2025-12-13T00:20:04.257316123+00:00 stderr F + . /ovnkube-lib/ovnkube-lib.sh
2025-12-13T00:20:04.257316123+00:00 stderr F ++ set -x
2025-12-13T00:20:04.257473127+00:00 stderr F ++ K8S_NODE=crc
2025-12-13T00:20:04.257473127+00:00 stderr F ++ [[ -n crc ]]
2025-12-13T00:20:04.257473127+00:00 stderr F ++ [[ -f /env/crc ]]
2025-12-13T00:20:04.257473127+00:00 stderr F ++ northd_pidfile=/var/run/ovn/ovn-northd.pid
2025-12-13T00:20:04.257473127+00:00 stderr F ++ controller_pidfile=/var/run/ovn/ovn-controller.pid
2025-12-13T00:20:04.257473127+00:00 stderr F ++ controller_logfile=/var/log/ovn/acl-audit-log.log
2025-12-13T00:20:04.257473127+00:00 stderr F ++ vswitch_dbsock=/var/run/openvswitch/db.sock
2025-12-13T00:20:04.257473127+00:00 stderr F ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid
2025-12-13T00:20:04.257473127+00:00 stderr F ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock
2025-12-13T00:20:04.257473127+00:00 stderr F ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl
2025-12-13T00:20:04.257473127+00:00 stderr F ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid
2025-12-13T00:20:04.257473127+00:00 stderr F ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock
2025-12-13T00:20:04.257473127+00:00 stderr F ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl
2025-12-13T00:20:04.258399753+00:00 stderr F + trap quit-nbdb TERM INT
2025-12-13T00:20:04.258399753+00:00 stderr F + start-nbdb info
2025-12-13T00:20:04.258415394+00:00 stderr F + local log_level=info
2025-12-13T00:20:04.258415394+00:00 stderr F + [[ 1 -ne 1 ]]
2025-12-13T00:20:04.258692701+00:00 stderr F + wait 28427
2025-12-13T00:20:04.259150174+00:00 stderr F + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor --db-nb-sock=/var/run/ovn/ovnnb_db.sock '--ovn-nb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_nb_ovsdb
2025-12-13T00:20:04.367222633+00:00 stderr F 2025-12-13T00:20:04.367Z|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-nb.log
2025-12-13T00:20:04.405813307+00:00 stderr F 2025-12-13T00:20:04.405Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 3.3.1
2025-12-13T00:20:14.408254621+00:00 stderr F 2025-12-13T00:20:14.408Z|00003|memory|INFO|12928 kB peak resident set size after 10.0 seconds
2025-12-13T00:20:14.408370125+00:00 stderr F 2025-12-13T00:20:14.408Z|00004|memory|INFO|atoms:4675 cells:3398 json-caches:2 monitors:3 n-weak-refs:114 sessions:2

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/extract-utilities/0.log <==
(empty)

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/registry-server/0.log <==
2025-12-13T00:14:47.549362699+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="starting pprof endpoint" address="localhost:6060"
2025-12-13T00:14:49.133188490+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="serving registry" configs=/extracted-catalog/catalog port=50051
2025-12-13T00:14:49.133252793+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="stopped caching cpu profile data" address="localhost:6060"

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/extract-content/0.log <==
(empty)

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/0.log <==
2025-08-13T19:59:04.519182976+00:00 stderr F W0813 19:59:04.505970 1 deprecated.go:66]
2025-08-13T19:59:04.519182976+00:00 stderr F ==== Removed Flag Warning ======================
2025-08-13T19:59:04.519182976+00:00 stderr F
2025-08-13T19:59:04.519182976+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:59:04.519182976+00:00 stderr F
2025-08-13T19:59:04.519182976+00:00 stderr F ===============================================
2025-08-13T19:59:04.519182976+00:00 stderr F
2025-08-13T19:59:04.521923144+00:00 stderr F I0813 19:59:04.521868 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-08-13T19:59:04.522683976+00:00 stderr F I0813 19:59:04.522582 1 kube-rbac-proxy.go:347] Reading certificate files
2025-08-13T19:59:04.531614440+00:00 stderr F I0813 19:59:04.530405 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:8443
2025-08-13T19:59:04.536668264+00:00 stderr F I0813 19:59:04.535283 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:8443
2025-08-13T20:42:43.954344275+00:00 stderr F I0813 20:42:43.954103 1 kube-rbac-proxy.go:493] received interrupt, shutting down

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy/1.log <==
2025-12-13T00:13:15.363751798+00:00 stderr F W1213 00:13:15.363602 1 deprecated.go:66]
2025-12-13T00:13:15.363751798+00:00 stderr F ==== Removed Flag Warning ======================
2025-12-13T00:13:15.363751798+00:00 stderr F
2025-12-13T00:13:15.363751798+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-12-13T00:13:15.363751798+00:00 stderr F
2025-12-13T00:13:15.363751798+00:00 stderr F ===============================================
2025-12-13T00:13:15.363751798+00:00 stderr F
2025-12-13T00:13:15.364184302+00:00 stderr F I1213 00:13:15.364147 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-12-13T00:13:15.364202313+00:00 stderr F I1213 00:13:15.364189 1 kube-rbac-proxy.go:347] Reading certificate files
2025-12-13T00:13:15.364674049+00:00 stderr F I1213 00:13:15.364631 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:8443
2025-12-13T00:13:15.365095263+00:00 stderr F I1213 00:13:15.365068 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:8443

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/0.log <==
2025-08-13T19:59:47.427869073+00:00 stderr F 2025-08-13T19:59:47Z INFO setup starting manager
2025-08-13T19:59:47.581323448+00:00 stderr F 2025-08-13T19:59:47Z INFO controller-runtime.metrics Starting metrics server
2025-08-13T19:59:47.581953346+00:00 stderr F 2025-08-13T19:59:47Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": ":9090", "secure": false}
2025-08-13T19:59:47.596895631+00:00 stderr F 2025-08-13T19:59:47Z INFO starting server {"kind": "pprof", "addr": "[::]:6060"}
2025-08-13T19:59:47.596895631+00:00 stderr F 2025-08-13T19:59:47Z INFO starting server {"kind": "health probe", "addr": "[::]:8080"}
2025-08-13T19:59:47.895817023+00:00 stderr F I0813 19:59:47.761307 1 leaderelection.go:250] attempting to acquire leader lease openshift-operator-lifecycle-manager/packageserver-controller-lock...
2025-08-13T19:59:48.598371620+00:00 stderr F I0813 19:59:48.595508 1 leaderelection.go:260] successfully acquired lease openshift-operator-lifecycle-manager/packageserver-controller-lock
2025-08-13T19:59:48.599217465+00:00 stderr F 2025-08-13T19:59:48Z DEBUG events package-server-manager-84d578d794-jw7r2_e40d9bf0-eba2-484b-a0df-0a92c0213730 became leader {"type": "Normal", "object": {"kind":"Lease","namespace":"openshift-operator-lifecycle-manager","name":"packageserver-controller-lock","uid":"0beb9bb7-cfd9-4760-98f3-f0c893f5cf42","apiVersion":"coordination.k8s.io/v1","resourceVersion":"28430"}, "reason": "LeaderElection"}
2025-08-13T19:59:48.600629435+00:00 stderr F 2025-08-13T19:59:48Z INFO Starting EventSource {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "source": "kind source: *v1alpha1.ClusterServiceVersion"}
2025-08-13T19:59:48.600653725+00:00 stderr F 2025-08-13T19:59:48Z INFO Starting EventSource {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "source": "kind source: *v1.Infrastructure"}
2025-08-13T19:59:48.600653725+00:00 stderr F 2025-08-13T19:59:48Z INFO Starting Controller {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion"}
2025-08-13T19:59:50.305095201+00:00 stderr F 2025-08-13T19:59:50Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"}
2025-08-13T19:59:51.209164813+00:00 stderr F 2025-08-13T19:59:51Z INFO Starting workers {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "worker count": 1}
2025-08-13T19:59:51.543424412+00:00 stderr F 2025-08-13T19:59:51Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"}
2025-08-13T19:59:51.543424412+00:00 stderr F 2025-08-13T19:59:51Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T19:59:52.151692491+00:00 stderr F 2025-08-13T19:59:52Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T19:59:52.151692491+00:00 stderr F 2025-08-13T19:59:52Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false}
2025-08-13T19:59:53.088333270+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"}
2025-08-13T19:59:53.088333270+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T19:59:53.390313498+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"}
2025-08-13T19:59:53.390313498+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T19:59:53.390534724+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T19:59:53.390534724+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false}
2025-08-13T19:59:53.534931020+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"}
2025-08-13T19:59:53.534931020+00:00 stderr F 2025-08-13T19:59:53Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:03:01.319955058+00:00 stderr F E0813 20:03:01.318734 1 leaderelection.go:332] error retrieving resource lock openshift-operator-lifecycle-manager/packageserver-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-operator-lifecycle-manager/leases/packageserver-controller-lock": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:01.394983163+00:00 stderr F E0813 20:04:01.393330 1 leaderelection.go:332] error retrieving resource lock openshift-operator-lifecycle-manager/packageserver-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-operator-lifecycle-manager/leases/packageserver-controller-lock": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:01.342585258+00:00 stderr F E0813 20:05:01.341321 1 leaderelection.go:332] error retrieving resource lock openshift-operator-lifecycle-manager/packageserver-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-operator-lifecycle-manager/leases/packageserver-controller-lock": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:06:29.332357546+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"}
2025-08-13T20:06:29.332357546+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:06:29.340078067+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:06:29.360067510+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false}
2025-08-13T20:06:29.387128505+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"}
2025-08-13T20:06:29.387128505+00:00 stderr F 2025-08-13T20:06:29Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"}
2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"}
2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"}
2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:06:41.133024418+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false}
2025-08-13T20:06:41.194134500+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"}
2025-08-13T20:06:41.194134500+00:00 stderr F 2025-08-13T20:06:41Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:09:45.228025353+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"}
2025-08-13T20:09:45.228025353+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"}
2025-08-13T20:09:45.284368859+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"}
2025-08-13T20:09:45.284368859+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:09:45.284368859+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:09:45.284368859+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false}
2025-08-13T20:09:45.367941865+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"}
2025-08-13T20:09:45.368022437+00:00 stderr F 2025-08-13T20:09:45Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:10:16.711473839+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"}
2025-08-13T20:10:16.711690345+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:10:16.712255871+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:10:16.723341429+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false}
2025-08-13T20:10:16.745037671+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"}
2025-08-13T20:10:16.745037671+00:00 stderr F 2025-08-13T20:10:16Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-08-13T20:42:42.894471130+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for non leader election runnables
2025-08-13T20:42:42.894471130+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for leader election runnables
2025-08-13T20:42:42.901418100+00:00 stderr F 2025-08-13T20:42:42Z INFO Shutdown signal received, waiting for all workers to finish {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion"}
2025-08-13T20:42:42.901418100+00:00 stderr F 2025-08-13T20:42:42Z INFO All workers finished {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion"}
2025-08-13T20:42:42.902414669+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for caches
2025-08-13T20:42:42.910454040+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for webhooks
2025-08-13T20:42:42.910542823+00:00 stderr F 2025-08-13T20:42:42Z INFO Stopping and waiting for HTTP servers
2025-08-13T20:42:42.913641892+00:00 stderr F 2025-08-13T20:42:42Z INFO shutting down server {"kind": "health probe", "addr": "[::]:8080"}
2025-08-13T20:42:42.913984912+00:00 stderr F 2025-08-13T20:42:42Z INFO shutting down server {"kind": "pprof", "addr": "[::]:6060"}
2025-08-13T20:42:42.915348912+00:00 stderr F 2025-08-13T20:42:42Z INFO controller-runtime.metrics Shutting down metrics server with timeout of 1 minute

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager/1.log <==
2025-12-13T00:13:18.655749266+00:00 stderr F 2025-12-13T00:13:18Z INFO setup starting manager
2025-12-13T00:13:18.669176168+00:00 stderr F 2025-12-13T00:13:18Z INFO starting server {"kind": "health probe", "addr": "[::]:8080"}
2025-12-13T00:13:18.669263710+00:00 stderr F 2025-12-13T00:13:18Z INFO controller-runtime.metrics Starting metrics server
2025-12-13T00:13:18.669405715+00:00 stderr F 2025-12-13T00:13:18Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": ":9090", "secure": false}
2025-12-13T00:13:18.669461927+00:00 stderr F 2025-12-13T00:13:18Z INFO starting server {"kind": "pprof", "addr": "[::]:6060"}
2025-12-13T00:13:18.669624342+00:00 stderr F I1213 00:13:18.669584 1 leaderelection.go:250] attempting to acquire leader lease openshift-operator-lifecycle-manager/packageserver-controller-lock...
2025-12-13T00:18:23.244340324+00:00 stderr F I1213 00:18:23.243652 1 leaderelection.go:260] successfully acquired lease openshift-operator-lifecycle-manager/packageserver-controller-lock
2025-12-13T00:18:23.244597320+00:00 stderr F 2025-12-13T00:18:23Z DEBUG events package-server-manager-84d578d794-jw7r2_0d395c6a-1489-4784-b939-0a57b8d2296a became leader {"type": "Normal", "object": {"kind":"Lease","namespace":"openshift-operator-lifecycle-manager","name":"packageserver-controller-lock","uid":"0beb9bb7-cfd9-4760-98f3-f0c893f5cf42","apiVersion":"coordination.k8s.io/v1","resourceVersion":"41990"}, "reason": "LeaderElection"}
2025-12-13T00:18:23.252526091+00:00 stderr F 2025-12-13T00:18:23Z INFO Starting EventSource {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "source": "kind source: *v1alpha1.ClusterServiceVersion"}
2025-12-13T00:18:23.252779218+00:00 stderr F 2025-12-13T00:18:23Z INFO Starting EventSource {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "source": "kind source: *v1.Infrastructure"}
2025-12-13T00:18:23.252891981+00:00 stderr F 2025-12-13T00:18:23Z INFO Starting Controller {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion"}
2025-12-13T00:18:23.381585673+00:00 stderr F 2025-12-13T00:18:23Z INFO controllers.packageserver requeueing the packageserver deployment after encountering infrastructure event {"infrastructure": "cluster"}
2025-12-13T00:18:23.381703996+00:00 stderr F 2025-12-13T00:18:23Z INFO Starting workers {"controller": "clusterserviceversion", "controllerGroup": "operators.coreos.com", "controllerKind": "ClusterServiceVersion", "worker count": 1}
2025-12-13T00:18:23.383478644+00:00 stderr F 2025-12-13T00:18:23Z INFO controllers.packageserver handling current request {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "request": "openshift-operator-lifecycle-manager/packageserver"}
2025-12-13T00:18:23.383543955+00:00 stderr F 2025-12-13T00:18:23Z INFO controllers.packageserver checking to see if required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-12-13T00:18:23.690347732+00:00 stderr F 2025-12-13T00:18:23Z INFO controllers.packageserver confimed required RBAC exists {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}
2025-12-13T00:18:23.691373970+00:00 stderr F 2025-12-13T00:18:23Z INFO controllers.packageserver currently topology mode {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "highly available": false}
2025-12-13T00:18:23.736406587+00:00 stderr F 2025-12-13T00:18:23Z INFO controllers.packageserver reconciliation result {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}, "res": "unchanged"}
2025-12-13T00:18:23.736406587+00:00 stderr F 2025-12-13T00:18:23Z INFO controllers.packageserver finished request reconciliation {"csv": {"name":"packageserver","namespace":"openshift-operator-lifecycle-manager"}}

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/1.log

2025-12-13T00:13:15.266611664+00:00 stdout F serving on 8080

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container/0.log

2025-08-13T19:59:14.989395384+00:00 stdout F serving on 8080

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/1.log

2025-12-13T00:13:15.402730478+00:00 stdout F .:5353
2025-12-13T00:13:15.402730478+00:00 stdout F hostname.bind.:5353
2025-12-13T00:13:15.402879173+00:00 stdout F [INFO] plugin/reload: Running configuration SHA512 = c40f1fac74a6633c6b1943fe251ad80adf3d5bd9b35c9e7d9b72bc260c5e2455f03e403e3b79d32f0936ff27e81ff6d07c68a95724b1c2c23510644372976718
2025-12-13T00:13:15.402879173+00:00 stdout F CoreDNS-1.11.1
2025-12-13T00:13:15.402879173+00:00 stdout F linux/amd64, go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime,
2025-12-13T00:18:32.087345943+00:00 stdout F [INFO] 10.217.0.8:55239 - 626 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000462463s
2025-12-13T00:18:32.087345943+00:00 stdout F [INFO] 10.217.0.8:57989 - 44554 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000595297s
2025-12-13T00:18:51.055032856+00:00 stdout F [INFO] 10.217.0.8:60300 - 17951 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000708439s
2025-12-13T00:18:51.055129760+00:00 stdout F [INFO] 10.217.0.8:33116 - 26452 "A IN thanos-querier.openshift-monitoring.svc.
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000609816s
2025-12-13T00:19:13.055158194+00:00 stdout F [INFO] 10.217.0.8:34610 - 8558 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001664036s
2025-12-13T00:19:13.055158194+00:00 stdout F [INFO] 10.217.0.8:59699 - 16509 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001106531s
2025-12-13T00:19:51.057873851+00:00 stdout F [INFO] 10.217.0.8:33812 - 6776 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000458622s
2025-12-13T00:19:51.057873851+00:00 stdout F [INFO] 10.217.0.8:50373 - 21370 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000599517s
2025-12-13T00:20:51.057018335+00:00 stdout F [INFO] 10.217.0.8:53775 - 33351 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000800522s
2025-12-13T00:20:51.057092877+00:00 stdout F [INFO] 10.217.0.8:40005 - 54675 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000835972s
2025-12-13T00:21:51.060025554+00:00 stdout F [INFO] 10.217.0.8:38886 - 43566 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00081684s
2025-12-13T00:21:51.060025554+00:00 stdout F [INFO] 10.217.0.8:57077 - 48741 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000860681s
2025-12-13T00:21:56.901954535+00:00 stdout F [INFO] 10.217.0.8:40879 - 33125 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000869941s
2025-12-13T00:21:56.901954535+00:00 stdout F [INFO] 10.217.0.8:50804 - 23051 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000922493s

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns/0.log

2025-08-13T19:59:13.144252487+00:00 stdout F [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
2025-08-13T19:59:13.171934436+00:00 stdout F .:5353
2025-08-13T19:59:13.171934436+00:00 stdout F hostname.bind.:5353
2025-08-13T19:59:13.185586915+00:00 stdout F [INFO] plugin/reload: Running configuration SHA512 = c40f1fac74a6633c6b1943fe251ad80adf3d5bd9b35c9e7d9b72bc260c5e2455f03e403e3b79d32f0936ff27e81ff6d07c68a95724b1c2c23510644372976718
2025-08-13T19:59:13.187380976+00:00 stdout F CoreDNS-1.11.1
2025-08-13T19:59:13.187380976+00:00 stdout F linux/amd64, go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime,
2025-08-13T19:59:36.359190859+00:00 stdout F [INFO] 10.217.0.28:60726 - 4384 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.009569913s
2025-08-13T19:59:36.359190859+00:00 stdout F [INFO] 10.217.0.28:45746 - 61404 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.016871841s
2025-08-13T19:59:38.555978409+00:00 stdout F [INFO] 10.217.0.8:37135 - 10343 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001011259s
2025-08-13T19:59:38.555978409+00:00 stdout F [INFO] 10.217.0.8:58657 - 31103 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001289036s
2025-08-13T19:59:39.582450718+00:00 stdout F [INFO] 10.217.0.8:36699 - 25225 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing.
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003627074s
2025-08-13T19:59:39.583116537+00:00 stdout F [INFO] 10.217.0.8:46453 - 52750 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000979658s
2025-08-13T19:59:41.285089343+00:00 stdout F [INFO] 10.217.0.8:42982 - 35440 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.005392324s
2025-08-13T19:59:41.372498954+00:00 stdout F [INFO] 10.217.0.28:53074 - 61598 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.044448346s
2025-08-13T19:59:41.372498954+00:00 stdout F [INFO] 10.217.0.28:47243 - 33124 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.045254849s
2025-08-13T19:59:41.380854483+00:00 stdout F [INFO] 10.217.0.8:59732 - 7106 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.093183886s
2025-08-13T19:59:42.056944115+00:00 stdout F [INFO] 10.217.0.8:57861 - 34485 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001998527s
2025-08-13T19:59:42.057040607+00:00 stdout F [INFO] 10.217.0.8:51920 - 49588 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00312535s
2025-08-13T19:59:42.254946729+00:00 stdout F [INFO] 10.217.0.8:42744 - 36863 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002500741s
2025-08-13T19:59:42.368712312+00:00 stdout F [INFO] 10.217.0.8:41487 - 58644 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00389273s
2025-08-13T19:59:43.477540588+00:00 stdout F [INFO] 10.217.0.8:55842 - 16014 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002701517s
2025-08-13T19:59:43.477540588+00:00 stdout F [INFO] 10.217.0.8:59959 - 45350 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004039455s
2025-08-13T19:59:44.068239376+00:00 stdout F [INFO] 10.217.0.8:54207 - 19718 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003326715s
2025-08-13T19:59:44.068239376+00:00 stdout F [INFO] 10.217.0.8:55710 - 12381 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000859364s
2025-08-13T19:59:44.473616752+00:00 stdout F [INFO] 10.217.0.8:57433 - 12555 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001231785s
2025-08-13T19:59:44.490141913+00:00 stdout F [INFO] 10.217.0.8:56361 - 611 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001532184s
2025-08-13T19:59:45.207580854+00:00 stdout F [INFO] 10.217.0.8:44517 - 45189 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.007351389s
2025-08-13T19:59:45.331979930+00:00 stdout F [INFO] 10.217.0.8:58571 - 60387 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.010958382s
2025-08-13T19:59:46.275200306+00:00 stdout F [INFO] 10.217.0.28:56315 - 52153 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004062336s
2025-08-13T19:59:46.275701470+00:00 stdout F [INFO] 10.217.0.28:53644 - 10701 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004936511s
2025-08-13T19:59:46.778551484+00:00 stdout F [INFO] 10.217.0.8:35729 - 16750 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003115839s
2025-08-13T19:59:46.779616544+00:00 stdout F [INFO] 10.217.0.8:47577 - 4218 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004051835s
2025-08-13T19:59:49.490117401+00:00 stdout F [INFO] 10.217.0.8:60652 - 34962 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000776823s
2025-08-13T19:59:49.492174519+00:00 stdout F [INFO] 10.217.0.8:42073 - 18763 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000911126s
2025-08-13T19:59:51.325499579+00:00 stdout F [INFO] 10.217.0.28:35410 - 28476 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002997815s
2025-08-13T19:59:51.325499579+00:00 stdout F [INFO] 10.217.0.28:38192 - 43866 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002657256s
2025-08-13T19:59:54.722954265+00:00 stdout F [INFO] 10.217.0.8:57411 - 62430 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003547181s
2025-08-13T19:59:54.722954265+00:00 stdout F [INFO] 10.217.0.8:44346 - 25135 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004123868s
2025-08-13T19:59:56.279401503+00:00 stdout F [INFO] 10.217.0.28:55304 - 30922 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002699447s
2025-08-13T19:59:56.402210944+00:00 stdout F [INFO] 10.217.0.28:60105 - 977 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.129498722s
2025-08-13T20:00:01.312229374+00:00 stdout F [INFO] 10.217.0.28:36085 - 33371 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003943962s
2025-08-13T20:00:01.312229374+00:00 stdout F [INFO] 10.217.0.28:33968 - 25713 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004474897s
2025-08-13T20:00:05.154259269+00:00 stdout F [INFO] 10.217.0.8:38795 - 21470 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003238472s
2025-08-13T20:00:05.154259269+00:00 stdout F [INFO] 10.217.0.8:39200 - 34446 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.023923592s
2025-08-13T20:00:06.214910512+00:00 stdout F [INFO] 10.217.0.28:43430 - 45449 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000923157s
2025-08-13T20:00:06.221060848+00:00 stdout F [INFO] 10.217.0.28:53417 - 39326 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.009344247s
2025-08-13T20:00:10.325143240+00:00 stdout F [INFO] 10.217.0.62:51993 - 11598 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004463237s
2025-08-13T20:00:10.333127238+00:00 stdout F [INFO] 10.217.0.62:44300 - 42845 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001539534s
2025-08-13T20:00:11.276530577+00:00 stdout F [INFO] 10.217.0.28:43084 - 32054 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006750782s
2025-08-13T20:00:11.276530577+00:00 stdout F [INFO] 10.217.0.28:53563 - 42854 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006401943s
2025-08-13T20:00:11.908425165+00:00 stdout F [INFO] 10.217.0.62:54342 - 11297 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing.
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.013080363s
2025-08-13T20:00:11.909993260+00:00 stdout F [INFO] 10.217.0.62:39933 - 33204 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.014236276s
2025-08-13T20:00:12.249274154+00:00 stdout F [INFO] 10.217.0.19:58421 - 64290 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001664357s
2025-08-13T20:00:12.250137289+00:00 stdout F [INFO] 10.217.0.19:52477 - 46487 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004760996s
2025-08-13T20:00:12.416577515+00:00 stdout F [INFO] 10.217.0.19:53799 - 52499 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000827873s
2025-08-13T20:00:12.416577515+00:00 stdout F [INFO] 10.217.0.19:60061 - 51150 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000839763s
2025-08-13T20:00:12.441540027+00:00 stdout F [INFO] 10.217.0.62:34840 - 21342 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000948127s
2025-08-13T20:00:12.453288212+00:00 stdout F [INFO] 10.217.0.62:42451 - 35945 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.01227864s
2025-08-13T20:00:12.493549240+00:00 stdout F [INFO] 10.217.0.62:50935 - 27932 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002008878s
2025-08-13T20:00:12.493702974+00:00 stdout F [INFO] 10.217.0.62:43620 - 46295 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001971266s
2025-08-13T20:00:12.530296808+00:00 stdout F [INFO] 10.217.0.62:36702 - 14398 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000981408s
2025-08-13T20:00:12.530296808+00:00 stdout F [INFO] 10.217.0.62:48646 - 64315 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000479213s
2025-08-13T20:00:13.138332395+00:00 stdout F [INFO] 10.217.0.62:59404 - 49497 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00140676s
2025-08-13T20:00:13.138332395+00:00 stdout F [INFO] 10.217.0.62:49332 - 15686 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002175362s
2025-08-13T20:00:13.283059932+00:00 stdout F [INFO] 10.217.0.62:44541 - 25632 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002145041s
2025-08-13T20:00:13.283059932+00:00 stdout F [INFO] 10.217.0.62:40546 - 5445 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002069449s
2025-08-13T20:00:13.353282534+00:00 stdout F [INFO] 10.217.0.62:54016 - 46275 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00071551s
2025-08-13T20:00:13.354444558+00:00 stdout F [INFO] 10.217.0.62:33622 - 22522 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000671669s
2025-08-13T20:00:13.439033769+00:00 stdout F [INFO] 10.217.0.62:39950 - 27175 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007004119s
2025-08-13T20:00:13.443953430+00:00 stdout F [INFO] 10.217.0.62:48086 - 49881 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004105107s
2025-08-13T20:00:13.541132851+00:00 stdout F [INFO] 10.217.0.62:58381 - 51106 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001316337s
2025-08-13T20:00:13.541132851+00:00 stdout F [INFO] 10.217.0.62:54860 - 61170 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002034738s
2025-08-13T20:00:13.653005921+00:00 stdout F [INFO] 10.217.0.62:48034 - 54387 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001380509s
2025-08-13T20:00:13.653005921+00:00 stdout F [INFO] 10.217.0.62:48917 - 55266 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001103781s
2025-08-13T20:00:13.833552609+00:00 stdout F [INFO] 10.217.0.19:38017 - 48268 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004880009s
2025-08-13T20:00:13.833552609+00:00 stdout F [INFO] 10.217.0.62:49113 - 33930 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003019526s
2025-08-13T20:00:13.833552609+00:00 stdout F [INFO] 10.217.0.62:47685 - 37637 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00420269s
2025-08-13T20:00:13.833552609+00:00 stdout F [INFO] 10.217.0.19:36992 - 13239 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00596202s
2025-08-13T20:00:13.954743274+00:00 stdout F [INFO] 10.217.0.62:60184 - 33262 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001215854s
2025-08-13T20:00:13.971888523+00:00 stdout F [INFO] 10.217.0.62:40495 - 54729 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.018309722s
2025-08-13T20:00:14.090604728+00:00 stdout F [INFO] 10.217.0.62:44602 - 62643 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00139623s
2025-08-13T20:00:14.090604728+00:00 stdout F [INFO] 10.217.0.62:38107 - 49721 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002221754s
2025-08-13T20:00:14.159122312+00:00 stdout F [INFO] 10.217.0.62:47344 - 18930 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009264744s
2025-08-13T20:00:14.159179524+00:00 stdout F [INFO] 10.217.0.62:51560 - 51651 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010123818s
2025-08-13T20:00:14.262518960+00:00 stdout F [INFO] 10.217.0.62:39670 - 1640 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000935326s
2025-08-13T20:00:14.264760374+00:00 stdout F [INFO] 10.217.0.62:55417 - 30464 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001238945s
2025-08-13T20:00:14.346152915+00:00 stdout F [INFO] 10.217.0.62:56143 - 12731 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003476399s
2025-08-13T20:00:14.346643759+00:00 stdout F [INFO] 10.217.0.62:48892 - 34607 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004013275s
2025-08-13T20:00:14.509059210+00:00 stdout F [INFO] 10.217.0.62:56488 - 15404 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001027549s
2025-08-13T20:00:14.514045602+00:00 stdout F [INFO] 10.217.0.62:47329 - 1471 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000592067s
2025-08-13T20:00:14.622904326+00:00 stdout F [INFO] 10.217.0.62:56101 - 63034 "A IN console-openshift-console.apps-crc.testing.crc.testing.
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001539474s
2025-08-13T20:00:14.622904326+00:00 stdout F [INFO] 10.217.0.62:42829 - 34852 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001581765s
2025-08-13T20:00:14.723699989+00:00 stdout F [INFO] 10.217.0.62:38772 - 52371 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000794353s
2025-08-13T20:00:14.723699989+00:00 stdout F [INFO] 10.217.0.62:50000 - 58314 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000788752s
2025-08-13T20:00:14.852071560+00:00 stdout F [INFO] 10.217.0.19:57963 - 54633 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001571125s
2025-08-13T20:00:14.852071560+00:00 stdout F [INFO] 10.217.0.19:49761 - 48106 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00210259s
2025-08-13T20:00:14.891880995+00:00 stdout F [INFO] 10.217.0.62:45637 - 43595 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002088s
2025-08-13T20:00:14.891880995+00:00 stdout F [INFO] 10.217.0.62:46509 - 9221 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002678606s
2025-08-13T20:00:14.902471267+00:00 stdout F [INFO] 10.217.0.19:44812 - 34538 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001137942s
2025-08-13T20:00:14.902471267+00:00 stdout F [INFO] 10.217.0.19:39668 - 49216 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000528135s
2025-08-13T20:00:15.004110975+00:00 stdout F [INFO] 10.217.0.19:60068 - 15629 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.024030195s
2025-08-13T20:00:15.004110975+00:00 stdout F [INFO] 10.217.0.19:41174 - 41426 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.024001304s
2025-08-13T20:00:15.004110975+00:00 stdout F [INFO] 10.217.0.62:42114 - 47386 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.016223713s
2025-08-13T20:00:15.017236019+00:00 stdout F [INFO] 10.217.0.62:51210 - 5963 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.037691795s
2025-08-13T20:00:15.061301206+00:00 stdout F [INFO] 10.217.0.19:35651 - 11770 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003591183s
2025-08-13T20:00:15.087382469+00:00 stdout F [INFO] 10.217.0.19:37849 - 7839 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003403527s
2025-08-13T20:00:15.093767751+00:00 stdout F [INFO] 10.217.0.62:46650 - 34229 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001093931s
2025-08-13T20:00:15.093767751+00:00 stdout F [INFO] 10.217.0.62:39635 - 29161 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00105456s
2025-08-13T20:00:15.226151576+00:00 stdout F [INFO] 10.217.0.19:46945 - 33977 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000949407s
2025-08-13T20:00:15.226151576+00:00 stdout F [INFO] 10.217.0.19:57137 - 54899 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001574874s
2025-08-13T20:00:15.240936918+00:00 stdout F [INFO] 10.217.0.62:42099 - 59560 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002851612s
2025-08-13T20:00:15.240936918+00:00 stdout F [INFO] 10.217.0.62:41175 - 29037 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002982265s
2025-08-13T20:00:15.364911513+00:00 stdout F [INFO] 10.217.0.62:49482 - 56674 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007002209s
2025-08-13T20:00:15.368139115+00:00 stdout F [INFO] 10.217.0.62:53468 - 9573 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002738768s
2025-08-13T20:00:15.382737091+00:00 stdout F [INFO] 10.217.0.19:57361 - 17104 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00106734s
2025-08-13T20:00:15.382737091+00:00 stdout F [INFO] 10.217.0.19:55283 - 16862 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.011326993s
2025-08-13T20:00:15.433983542+00:00 stdout F [INFO] 10.217.0.62:55667 - 33923 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004534109s
2025-08-13T20:00:15.433983542+00:00 stdout F [INFO] 10.217.0.62:34787 - 37314 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004371375s
2025-08-13T20:00:15.444336067+00:00 stdout F [INFO] 10.217.0.19:51870 - 64590 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001194404s
2025-08-13T20:00:15.446909311+00:00 stdout F [INFO] 10.217.0.19:43903 - 19405 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001676148s
2025-08-13T20:00:15.517874164+00:00 stdout F [INFO] 10.217.0.62:38759 - 35666 "A IN console-openshift-console.apps-crc.testing.crc.testing.
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003969233s
2025-08-13T20:00:15.531892114+00:00 stdout F [INFO] 10.217.0.62:56587 - 13504 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.01158042s
2025-08-13T20:00:15.573548082+00:00 stdout F [INFO] 10.217.0.62:52776 - 23634 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00211457s
2025-08-13T20:00:15.573548082+00:00 stdout F [INFO] 10.217.0.62:42107 - 27037 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002034778s
2025-08-13T20:00:15.654391257+00:00 stdout F [INFO] 10.217.0.62:32812 - 42091 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00282523s
2025-08-13T20:00:15.656534878+00:00 stdout F [INFO] 10.217.0.62:38907 - 27002 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002048938s
2025-08-13T20:00:15.734330996+00:00 stdout F [INFO] 10.217.0.62:33135 - 32921 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003792468s
2025-08-13T20:00:15.734330996+00:00 stdout F [INFO] 10.217.0.62:33955 - 62297 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004643453s
2025-08-13T20:00:15.802234793+00:00 stdout F [INFO] 10.217.0.62:49884 - 1877 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00387168s
2025-08-13T20:00:15.802278394+00:00 stdout F [INFO] 10.217.0.62:49483 - 64427 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00490845s
2025-08-13T20:00:15.873532576+00:00 stdout F [INFO] 10.217.0.62:43741 - 36144 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000938287s
2025-08-13T20:00:15.873532576+00:00 stdout F [INFO] 10.217.0.62:41760 - 12545 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000846474s
2025-08-13T20:00:15.908057300+00:00 stdout F [INFO] 10.217.0.62:33669 - 26634 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001498842s
2025-08-13T20:00:15.908057300+00:00 stdout F [INFO] 10.217.0.62:49123 - 62305 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001537134s
2025-08-13T20:00:15.958915760+00:00 stdout F [INFO] 10.217.0.62:36466 - 6221 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000944747s
2025-08-13T20:00:15.958915760+00:00 stdout F [INFO] 10.217.0.62:45514 - 61891 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001165713s
2025-08-13T20:00:16.003345277+00:00 stdout F [INFO] 10.217.0.62:35535 - 24579 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000768992s
2025-08-13T20:00:16.003458100+00:00 stdout F [INFO] 10.217.0.62:54227 - 43386 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000751341s
2025-08-13T20:00:16.025173939+00:00 stdout F [INFO] 10.217.0.62:47898 - 47331 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009788199s
2025-08-13T20:00:16.028196506+00:00 stdout F [INFO] 10.217.0.62:53790 - 34665 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011007464s
2025-08-13T20:00:16.063730659+00:00 stdout F [INFO] 10.217.0.19:38695 - 5246 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002941634s
2025-08-13T20:00:16.070752309+00:00 stdout F [INFO] 10.217.0.19:34631 - 33937 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007993128s
2025-08-13T20:00:16.071029697+00:00 stdout F [INFO] 10.217.0.62:45478 - 2016 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000954457s
2025-08-13T20:00:16.073662712+00:00 stdout F [INFO] 10.217.0.62:60256 - 52513 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001386379s
2025-08-13T20:00:16.115580247+00:00 stdout F [INFO] 10.217.0.62:60048 - 4345 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000984398s
2025-08-13T20:00:16.115580247+00:00 stdout F [INFO] 10.217.0.62:60259 - 45250 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002088889s
2025-08-13T20:00:16.150981417+00:00 stdout F [INFO] 10.217.0.62:33976 - 29295 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010768027s
2025-08-13T20:00:16.159019016+00:00 stdout F [INFO] 10.217.0.62:53332 - 40899 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.012587289s
2025-08-13T20:00:16.197911625+00:00 stdout F [INFO] 10.217.0.62:46654 - 45279 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000622438s
2025-08-13T20:00:16.200930001+00:00 stdout F [INFO] 10.217.0.62:32870 - 21450 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001038189s
2025-08-13T20:00:16.205693417+00:00 stdout F [INFO] 10.217.0.28:51187 - 30233 "A IN oauth-openshift.apps-crc.testing.crc.testing.
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001215535s 2025-08-13T20:00:16.205693417+00:00 stdout F [INFO] 10.217.0.28:51035 - 51486 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000806912s 2025-08-13T20:00:16.336719233+00:00 stdout F [INFO] 10.217.0.62:60998 - 63904 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001224225s 2025-08-13T20:00:16.336825176+00:00 stdout F [INFO] 10.217.0.62:33232 - 15158 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001088981s 2025-08-13T20:00:16.356991191+00:00 stdout F [INFO] 10.217.0.62:41518 - 40868 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000610358s 2025-08-13T20:00:16.357465835+00:00 stdout F [INFO] 10.217.0.62:54709 - 24528 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001147052s 2025-08-13T20:00:16.379975566+00:00 stdout F [INFO] 10.217.0.62:57672 - 38980 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000834984s 2025-08-13T20:00:16.380039328+00:00 stdout F [INFO] 10.217.0.62:49914 - 18773 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000918706s 2025-08-13T20:00:16.407303966+00:00 stdout F [INFO] 10.217.0.62:52427 - 11793 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001208755s 2025-08-13T20:00:16.408590282+00:00 stdout F [INFO] 10.217.0.62:43965 - 9519 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002598674s 2025-08-13T20:00:16.452031651+00:00 stdout F [INFO] 10.217.0.62:55006 - 16870 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000919176s 2025-08-13T20:00:16.452031651+00:00 stdout F [INFO] 10.217.0.62:49785 - 28542 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001365109s 2025-08-13T20:00:17.145072472+00:00 stdout F [INFO] 10.217.0.19:41177 - 20131 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001652497s 2025-08-13T20:00:17.148561112+00:00 stdout F [INFO] 10.217.0.19:37598 - 42797 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006819774s 2025-08-13T20:00:17.197991761+00:00 stdout F [INFO] 10.217.0.19:33492 - 52647 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001516074s 2025-08-13T20:00:17.198367222+00:00 stdout F [INFO] 10.217.0.19:45055 - 1509 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001945665s 2025-08-13T20:00:17.757549517+00:00 stdout F [INFO] 10.217.0.62:43319 - 34607 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001677958s 2025-08-13T20:00:17.763061234+00:00 stdout F [INFO] 10.217.0.19:55481 - 21056 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000947157s 2025-08-13T20:00:17.763061234+00:00 stdout F [INFO] 10.217.0.19:58786 - 15965 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000674169s 2025-08-13T20:00:17.763061234+00:00 stdout F [INFO] 10.217.0.62:33169 - 37945 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001128812s 2025-08-13T20:00:17.816943650+00:00 stdout F [INFO] 10.217.0.62:42205 - 36373 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002212463s 2025-08-13T20:00:17.816943650+00:00 stdout F [INFO] 10.217.0.62:37387 - 16790 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001350018s 2025-08-13T20:00:17.868672735+00:00 stdout F [INFO] 10.217.0.62:52021 - 9648 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001696059s 2025-08-13T20:00:17.874387088+00:00 stdout F [INFO] 10.217.0.62:40260 - 51115 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000826174s 2025-08-13T20:00:17.998417495+00:00 stdout F [INFO] 10.217.0.62:47087 - 30590 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000785533s 2025-08-13T20:00:18.001043570+00:00 stdout F [INFO] 10.217.0.62:47798 - 1955 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003256683s 2025-08-13T20:00:18.093719402+00:00 stdout F [INFO] 10.217.0.62:58441 - 33701 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.021269306s 2025-08-13T20:00:18.093719402+00:00 stdout F [INFO] 10.217.0.62:41589 - 40585 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.030229852s 2025-08-13T20:00:18.296816152+00:00 stdout F [INFO] 10.217.0.19:51832 - 36823 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003318795s 2025-08-13T20:00:18.301997970+00:00 stdout F [INFO] 10.217.0.19:39903 - 35 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001972347s 2025-08-13T20:00:18.350136493+00:00 stdout F [INFO] 10.217.0.19:58831 - 64918 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002482641s 2025-08-13T20:00:18.350136493+00:00 stdout F [INFO] 10.217.0.19:42371 - 39239 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006809914s 2025-08-13T20:00:18.407327564+00:00 stdout F [INFO] 10.217.0.62:52472 - 11023 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011434426s 2025-08-13T20:00:18.423120254+00:00 stdout F [INFO] 10.217.0.62:47420 - 64726 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.020940467s 2025-08-13T20:00:18.629197680+00:00 stdout F [INFO] 10.217.0.62:37245 - 2390 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002279585s 2025-08-13T20:00:18.631098444+00:00 stdout F [INFO] 10.217.0.62:50831 - 11251 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003223551s 2025-08-13T20:00:18.761415450+00:00 stdout F [INFO] 10.217.0.62:46429 - 45837 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001951445s 2025-08-13T20:00:18.815524273+00:00 stdout F [INFO] 10.217.0.62:36125 - 11874 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.033757773s 2025-08-13T20:00:18.929433271+00:00 stdout F [INFO] 10.217.0.19:57431 - 54919 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001774561s 2025-08-13T20:00:18.940378423+00:00 stdout F [INFO] 10.217.0.19:56896 - 39088 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002388608s 2025-08-13T20:00:19.114070866+00:00 stdout F [INFO] 10.217.0.62:33352 - 47244 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.021087921s 2025-08-13T20:00:19.114070866+00:00 stdout F [INFO] 10.217.0.62:60188 - 11847 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.021673368s 2025-08-13T20:00:19.237767773+00:00 stdout F [INFO] 10.217.0.62:56582 - 19683 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001947166s 2025-08-13T20:00:19.260334966+00:00 stdout F [INFO] 10.217.0.62:33301 - 31898 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011323893s 2025-08-13T20:00:19.527445143+00:00 stdout F [INFO] 10.217.0.19:60066 - 34703 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001614406s 2025-08-13T20:00:19.528323238+00:00 stdout F [INFO] 10.217.0.19:60305 - 37834 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002391868s 2025-08-13T20:00:20.196990224+00:00 stdout F [INFO] 10.217.0.19:52258 - 25009 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002468161s 2025-08-13T20:00:20.198550649+00:00 stdout F [INFO] 10.217.0.19:38223 - 9118 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007779282s 2025-08-13T20:00:20.476452013+00:00 stdout F [INFO] 10.217.0.62:54708 - 21880 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002848321s 2025-08-13T20:00:20.476452013+00:00 stdout F [INFO] 10.217.0.62:59683 - 4772 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005088505s 2025-08-13T20:00:20.675957121+00:00 stdout F [INFO] 10.217.0.62:50588 - 46196 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004520369s 2025-08-13T20:00:20.714120399+00:00 stdout F [INFO] 10.217.0.62:41010 - 48406 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009880922s 2025-08-13T20:00:20.839418082+00:00 stdout F [INFO] 10.217.0.62:47437 - 37198 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006881437s 2025-08-13T20:00:20.839561556+00:00 stdout F [INFO] 10.217.0.62:36545 - 56669 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007588906s 2025-08-13T20:00:20.923534181+00:00 stdout F [INFO] 10.217.0.62:41433 - 54004 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000731301s 2025-08-13T20:00:20.923853600+00:00 stdout F [INFO] 10.217.0.62:58985 - 8745 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001453541s 2025-08-13T20:00:21.140617421+00:00 stdout F [INFO] 10.217.0.62:50318 - 16622 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.012183297s 2025-08-13T20:00:21.140617421+00:00 stdout F [INFO] 10.217.0.62:43541 - 15801 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.012824826s 2025-08-13T20:00:21.197936515+00:00 stdout F [INFO] 10.217.0.62:49442 - 22994 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004789596s 2025-08-13T20:00:21.198102180+00:00 stdout F [INFO] 10.217.0.28:32829 - 14166 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002330176s 2025-08-13T20:00:21.202505155+00:00 stdout F [INFO] 10.217.0.62:33830 - 17870 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00597609s 2025-08-13T20:00:21.202505155+00:00 stdout F [INFO] 10.217.0.28:53244 - 10983 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002501252s 2025-08-13T20:00:21.450165117+00:00 stdout F [INFO] 10.217.0.62:50542 - 30667 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002195952s 2025-08-13T20:00:21.450165117+00:00 stdout F [INFO] 10.217.0.62:37026 - 47114 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002694096s 2025-08-13T20:00:21.548909932+00:00 stdout F [INFO] 10.217.0.62:58992 - 14098 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001445921s 2025-08-13T20:00:21.548909932+00:00 stdout F [INFO] 10.217.0.62:59478 - 12237 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007775702s 2025-08-13T20:00:21.665943849+00:00 stdout F [INFO] 10.217.0.62:53629 - 30765 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002404769s 2025-08-13T20:00:21.670443357+00:00 stdout F [INFO] 10.217.0.62:59632 - 3278 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.008741309s 2025-08-13T20:00:21.703125519+00:00 stdout F [INFO] 10.217.0.62:41766 - 63549 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001135602s 2025-08-13T20:00:21.703719906+00:00 stdout F [INFO] 10.217.0.62:46054 - 10024 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000784832s 2025-08-13T20:00:22.579485308+00:00 stdout F [INFO] 10.217.0.19:50242 - 14711 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003977904s 2025-08-13T20:00:22.579485308+00:00 stdout F [INFO] 10.217.0.19:48700 - 53632 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004684324s 2025-08-13T20:00:25.731086805+00:00 stdout F [INFO] 10.217.0.8:59710 - 37958 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003472519s 2025-08-13T20:00:25.731086805+00:00 stdout F [INFO] 10.217.0.8:38997 - 4199 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000922526s 2025-08-13T20:00:25.739917427+00:00 stdout F [INFO] 10.217.0.8:37909 - 64583 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.005826467s 2025-08-13T20:00:25.743884900+00:00 stdout F [INFO] 10.217.0.8:51606 - 4042 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001596626s 2025-08-13T20:00:26.211451573+00:00 stdout F [INFO] 10.217.0.28:41789 - 33440 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004783477s 2025-08-13T20:00:26.215747035+00:00 stdout F [INFO] 10.217.0.28:54682 - 18635 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005841127s 2025-08-13T20:00:27.306241280+00:00 stdout F [INFO] 10.217.0.37:49903 - 59099 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002873802s 2025-08-13T20:00:27.306241280+00:00 stdout F [INFO] 10.217.0.37:49337 - 16367 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00315367s 2025-08-13T20:00:27.610176267+00:00 stdout F [INFO] 10.217.0.57:48884 - 35058 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001445402s 2025-08-13T20:00:27.611670049+00:00 stdout F [INFO] 10.217.0.57:47676 - 42222 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001946096s 2025-08-13T20:00:27.716045735+00:00 stdout F [INFO] 10.217.0.57:44140 - 9399 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.013504715s 2025-08-13T20:00:27.717506627+00:00 stdout F [INFO] 10.217.0.57:45630 - 4081 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.013814204s 2025-08-13T20:00:31.268346355+00:00 stdout F [INFO] 10.217.0.28:57797 - 30523 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001610226s 2025-08-13T20:00:31.268346355+00:00 stdout F [INFO] 10.217.0.28:51665 - 51102 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002051629s 2025-08-13T20:00:31.425624620+00:00 stdout F [INFO] 10.217.0.62:40352 - 63069 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003710086s 2025-08-13T20:00:31.437602572+00:00 stdout F [INFO] 10.217.0.62:41024 - 28867 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004605691s 2025-08-13T20:00:31.661447155+00:00 stdout F [INFO] 10.217.0.62:44788 - 55371 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001476762s 2025-08-13T20:00:31.689217696+00:00 stdout F [INFO] 10.217.0.62:44360 - 51842 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.023932763s 2025-08-13T20:00:31.811253746+00:00 stdout F [INFO] 10.217.0.62:57778 - 8850 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001100381s 2025-08-13T20:00:31.811253746+00:00 stdout F [INFO] 10.217.0.62:60857 - 53999 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000835464s 2025-08-13T20:00:31.961965074+00:00 stdout F [INFO] 10.217.0.62:34727 - 35445 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002562353s 2025-08-13T20:00:31.964171786+00:00 stdout F [INFO] 10.217.0.62:41993 - 40439 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006554117s 2025-08-13T20:00:32.113323289+00:00 stdout F [INFO] 10.217.0.62:58232 - 45749 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001293527s 2025-08-13T20:00:32.114298577+00:00 stdout F [INFO] 10.217.0.62:35112 - 22784 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000848074s 2025-08-13T20:00:32.737430424+00:00 stdout F [INFO] 10.217.0.57:38966 - 30327 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000583237s 2025-08-13T20:00:32.737498686+00:00 stdout F [INFO] 10.217.0.57:50692 - 52016 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001420221s 2025-08-13T20:00:33.045863489+00:00 stdout F [INFO] 10.217.0.62:58824 - 1702 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001806381s 2025-08-13T20:00:33.058230141+00:00 stdout F [INFO] 10.217.0.62:36864 - 16355 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.008756479s 2025-08-13T20:00:33.577261071+00:00 stdout F [INFO] 10.217.0.62:43777 - 9593 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001006539s 2025-08-13T20:00:33.577261071+00:00 stdout F [INFO] 10.217.0.62:53467 - 42328 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000907196s 2025-08-13T20:00:33.863106352+00:00 stdout F [INFO] 10.217.0.62:38630 - 31176 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001723559s 2025-08-13T20:00:33.872896041+00:00 stdout F [INFO] 10.217.0.62:60120 - 36301 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002984125s 2025-08-13T20:00:33.968965360+00:00 stdout F [INFO] 10.217.0.62:34743 - 13834 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00177437s 2025-08-13T20:00:33.968965360+00:00 stdout F [INFO] 10.217.0.62:44966 - 17140 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002161702s 2025-08-13T20:00:34.058497063+00:00 stdout F [INFO] 10.217.0.62:43193 - 43833 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001836962s 2025-08-13T20:00:34.058497063+00:00 stdout F [INFO] 10.217.0.62:34426 - 45919 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002009107s 2025-08-13T20:00:35.433094038+00:00 stdout F [INFO] 10.217.0.19:49445 - 33612 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000984848s 2025-08-13T20:00:35.433094038+00:00 stdout F [INFO] 10.217.0.19:41366 - 18651 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000927016s 2025-08-13T20:00:35.453677195+00:00 stdout F [INFO] 10.217.0.8:57298 - 52946 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.007615537s 2025-08-13T20:00:35.453677195+00:00 stdout F [INFO] 10.217.0.8:38349 - 16220 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.006572848s 2025-08-13T20:00:35.468442306+00:00 stdout F [INFO] 10.217.0.8:33452 - 50656 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.012375263s 2025-08-13T20:00:35.468442306+00:00 stdout F [INFO] 10.217.0.8:35773 - 41908 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.012355652s 2025-08-13T20:00:35.690673373+00:00 stdout F [INFO] 10.217.0.62:55989 - 53229 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.0014117s 2025-08-13T20:00:35.690673373+00:00 stdout F [INFO] 10.217.0.62:33065 - 53085 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001248106s 2025-08-13T20:00:35.730503909+00:00 stdout F [INFO] 10.217.0.19:36292 - 10939 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005190688s 2025-08-13T20:00:35.770042926+00:00 stdout F [INFO] 10.217.0.19:54938 - 27133 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.048229325s 2025-08-13T20:00:35.897148751+00:00 stdout F [INFO] 10.217.0.62:51579 - 52497 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010913982s 2025-08-13T20:00:35.897148751+00:00 stdout F [INFO] 10.217.0.62:32787 - 24254 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011512098s 2025-08-13T20:00:35.952043906+00:00 stdout F [INFO] 10.217.0.19:33608 - 30398 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000857724s 2025-08-13T20:00:35.952043906+00:00 stdout F [INFO] 10.217.0.19:39154 - 13296 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001591015s 2025-08-13T20:00:36.052158070+00:00 stdout F [INFO] 10.217.0.62:57742 - 50873 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007803622s 2025-08-13T20:00:36.052158070+00:00 stdout F [INFO] 10.217.0.62:50217 - 39537 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.008163043s 2025-08-13T20:00:36.085994515+00:00 stdout F [INFO] 10.217.0.62:46973 - 4617 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000783212s 2025-08-13T20:00:36.085994515+00:00 stdout F [INFO] 10.217.0.62:48190 - 9675 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000738442s 2025-08-13T20:00:36.123936097+00:00 stdout F [INFO] 10.217.0.62:57435 - 21834 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000900436s 2025-08-13T20:00:36.123936097+00:00 stdout F [INFO] 10.217.0.62:50783 - 58379 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000881065s 2025-08-13T20:00:36.192447920+00:00 stdout F [INFO] 10.217.0.28:35258 - 57233 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001739479s 2025-08-13T20:00:36.201027864+00:00 stdout F [INFO] 10.217.0.28:45699 - 57411 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010188299s 2025-08-13T20:00:36.562031178+00:00 stdout F [INFO] 10.217.0.19:51585 - 9027 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000966748s 2025-08-13T20:00:36.575971016+00:00 stdout F [INFO] 10.217.0.19:34532 - 16670 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.014703769s 2025-08-13T20:00:36.983461205+00:00 stdout F [INFO] 10.217.0.19:41101 - 30346 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001602416s 2025-08-13T20:00:36.983886077+00:00 stdout F [INFO] 10.217.0.19:37160 - 8672 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001703779s 2025-08-13T20:00:37.283931612+00:00 stdout F [INFO] 10.217.0.19:51863 - 63110 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003548381s 2025-08-13T20:00:37.285398654+00:00 stdout F [INFO] 10.217.0.19:46973 - 8780 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001810482s 2025-08-13T20:00:37.544949185+00:00 stdout F [INFO] 10.217.0.19:60998 - 12802 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002060749s 2025-08-13T20:00:37.553034096+00:00 stdout F [INFO] 10.217.0.19:57940 - 9552 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010101818s 2025-08-13T20:00:37.790369403+00:00 stdout F [INFO] 10.217.0.57:43550 - 43696 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.009638015s 2025-08-13T20:00:37.796695383+00:00 stdout F [INFO] 10.217.0.57:39451 - 44600 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006570608s 2025-08-13T20:00:40.173948128+00:00 stdout F [INFO] 10.217.0.60:48850 - 60117 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008025469s 2025-08-13T20:00:40.173948128+00:00 stdout F [INFO] 10.217.0.60:48884 - 7163 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011604921s 2025-08-13T20:00:40.183890401+00:00 stdout F [INFO] 10.217.0.60:53961 - 51717 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009816699s 2025-08-13T20:00:40.184241341+00:00 stdout F [INFO] 10.217.0.60:46675 - 43378 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009938023s 2025-08-13T20:00:40.236573113+00:00 stdout F [INFO] 10.217.0.62:40825 - 24788 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004861068s 2025-08-13T20:00:40.236573113+00:00 stdout F [INFO] 10.217.0.62:46421 - 61090 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005456386s 2025-08-13T20:00:40.714710607+00:00 stdout F [INFO] 10.217.0.62:59322 - 41606 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006050943s 2025-08-13T20:00:40.714710607+00:00 stdout F [INFO] 10.217.0.62:39975 - 59107 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.007196325s 2025-08-13T20:00:40.803110348+00:00 stdout F [INFO] 10.217.0.60:45725 - 34122 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.013637669s 2025-08-13T20:00:40.803110348+00:00 stdout F [INFO] 10.217.0.60:44229 - 60604 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.014164904s 2025-08-13T20:00:40.803110348+00:00 stdout F [INFO] 10.217.0.60:43328 - 58288 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.015254415s 2025-08-13T20:00:40.803110348+00:00 stdout F [INFO] 10.217.0.60:38617 - 57227 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.014962096s 2025-08-13T20:00:40.910345735+00:00 stdout F [INFO] 10.217.0.60:45732 - 4038 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004323434s 2025-08-13T20:00:40.930483439+00:00 stdout F [INFO] 10.217.0.60:49251 - 15133 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.021362629s 2025-08-13T20:00:40.990210252+00:00 stdout F [INFO] 10.217.0.60:40178 - 34377 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004410105s 2025-08-13T20:00:41.060271440+00:00 stdout F [INFO] 10.217.0.60:50125 - 63735 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.018152517s 2025-08-13T20:00:41.075524395+00:00 stdout F [INFO] 10.217.0.62:57370 - 34368 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.017149759s 2025-08-13T20:00:41.075931257+00:00 stdout F [INFO] 10.217.0.62:41324 - 39724 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.018058485s 2025-08-13T20:00:41.190088592+00:00 stdout F [INFO] 10.217.0.60:36300 - 17768 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007318269s 2025-08-13T20:00:41.190088592+00:00 stdout F [INFO] 10.217.0.60:60017 - 36037 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016632274s 2025-08-13T20:00:41.224605096+00:00 stdout F [INFO] 10.217.0.28:51635 - 40403 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006965329s 2025-08-13T20:00:41.264087632+00:00 stdout F [INFO] 10.217.0.28:52062 - 1904 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.009673976s 2025-08-13T20:00:41.373989136+00:00 stdout F [INFO] 10.217.0.62:56057 - 9966 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001195254s 2025-08-13T20:00:41.387945613+00:00 stdout F [INFO] 10.217.0.62:46435 - 5126 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.016528461s 2025-08-13T20:00:41.715311878+00:00 stdout F [INFO] 10.217.0.62:50311 - 25129 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.0063246s 2025-08-13T20:00:41.717438859+00:00 stdout F [INFO] 10.217.0.62:33386 - 40928 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006839125s 2025-08-13T20:00:41.738119748+00:00 stdout F [INFO] 10.217.0.60:52373 - 23632 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007761871s 2025-08-13T20:00:41.801297700+00:00 stdout F [INFO] 10.217.0.60:35320 - 1700 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.058023705s 2025-08-13T20:00:41.801297700+00:00 stdout F [INFO] 10.217.0.60:55328 - 10605 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.058711244s 2025-08-13T20:00:41.829259987+00:00 stdout F [INFO] 10.217.0.60:53501 - 19974 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.05925692s 2025-08-13T20:00:41.976174496+00:00 stdout F [INFO] 10.217.0.60:52578 - 9173 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008371928s 2025-08-13T20:00:41.976497765+00:00 stdout F [INFO] 10.217.0.60:40532 - 45632 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00071053s 2025-08-13T20:00:41.976709652+00:00 stdout F [INFO] 10.217.0.60:34514 - 826 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001254386s 2025-08-13T20:00:41.977127923+00:00 stdout F [INFO] 10.217.0.60:57847 - 46655 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009665195s 2025-08-13T20:00:42.007513300+00:00 stdout F [INFO] 10.217.0.60:54739 - 10924 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003385036s 2025-08-13T20:00:42.007513300+00:00 stdout F [INFO] 10.217.0.60:59114 - 30470 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005906868s 2025-08-13T20:00:42.084584237+00:00 stdout F [INFO] 10.217.0.60:56862 - 58713 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003955053s 2025-08-13T20:00:42.087243233+00:00 stdout F [INFO] 10.217.0.60:36311 - 27915 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002747588s 2025-08-13T20:00:42.132939816+00:00 stdout F [INFO] 10.217.0.60:52763 - 16178 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009929473s 2025-08-13T20:00:42.133577344+00:00 stdout F [INFO] 10.217.0.60:49355 - 29298 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012692052s 2025-08-13T20:00:42.153883023+00:00 stdout F [INFO] 10.217.0.19:47374 - 58670 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00105662s 2025-08-13T20:00:42.153944575+00:00 stdout F [INFO] 10.217.0.19:53419 - 55595 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000777443s 2025-08-13T20:00:42.164854346+00:00 stdout F [INFO] 10.217.0.60:32773 - 53922 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.006955098s 2025-08-13T20:00:42.164854346+00:00 stdout F [INFO] 10.217.0.60:33864 - 8621 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.005889688s 2025-08-13T20:00:42.206111563+00:00 stdout F [INFO] 10.217.0.60:57809 - 30579 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005044634s 2025-08-13T20:00:42.206578306+00:00 stdout F [INFO] 10.217.0.60:58796 - 61858 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006533616s 2025-08-13T20:00:42.290129518+00:00 stdout F [INFO] 10.217.0.60:47662 - 60054 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008448151s 2025-08-13T20:00:42.290129518+00:00 stdout F [INFO] 10.217.0.60:44908 - 34608 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009088019s 2025-08-13T20:00:42.372369263+00:00 stdout F [INFO] 10.217.0.60:50074 - 31458 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003794648s 2025-08-13T20:00:42.374170995+00:00 stdout F [INFO] 10.217.0.60:59507 - 41881 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004698834s 2025-08-13T20:00:42.381148524+00:00 stdout F [INFO] 10.217.0.60:53163 - 6696 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00353352s 2025-08-13T20:00:42.381148524+00:00 stdout F [INFO] 10.217.0.60:47961 - 23913 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003714746s 2025-08-13T20:00:42.395730410+00:00 stdout F [INFO] 10.217.0.60:51772 - 27470 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004109837s 2025-08-13T20:00:42.396155742+00:00 stdout F [INFO] 10.217.0.60:56559 - 41180 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003775757s 2025-08-13T20:00:42.465712745+00:00 stdout F [INFO] 10.217.0.60:52007 - 64110 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002959514s 2025-08-13T20:00:42.466142467+00:00 stdout F [INFO] 10.217.0.60:46205 - 30385 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000848104s 2025-08-13T20:00:42.466550819+00:00 stdout F [INFO] 10.217.0.60:39664 - 25521 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003301974s 2025-08-13T20:00:42.466616831+00:00 stdout F [INFO] 10.217.0.60:58488 - 44759 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001436741s 2025-08-13T20:00:42.501886346+00:00 stdout F [INFO] 10.217.0.60:53857 - 2484 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005065864s 2025-08-13T20:00:42.502274967+00:00 stdout F [INFO] 10.217.0.60:37557 - 36840 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006229778s 2025-08-13T20:00:42.541595989+00:00 stdout F [INFO] 10.217.0.60:49941 - 58907 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001070981s 2025-08-13T20:00:42.542671779+00:00 stdout F [INFO] 10.217.0.60:33250 - 47268 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001080841s 2025-08-13T20:00:42.549360000+00:00 stdout F [INFO] 10.217.0.60:36173 - 36326 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00417675s 2025-08-13T20:00:42.549577626+00:00 stdout F [INFO] 10.217.0.60:44656 - 34287 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00457103s 2025-08-13T20:00:42.612258244+00:00 stdout F [INFO] 10.217.0.60:36746 - 61964 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.007478603s 2025-08-13T20:00:42.612546452+00:00 stdout F [INFO] 10.217.0.60:48936 - 60887 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.008014958s 2025-08-13T20:00:42.612857751+00:00 stdout F [INFO] 10.217.0.60:41597 - 9052 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.010254542s 2025-08-13T20:00:42.613108138+00:00 stdout F [INFO] 10.217.0.60:60388 - 16520 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.010896681s 2025-08-13T20:00:42.626540471+00:00 stdout F [INFO] 10.217.0.19:47160 - 61990 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001217815s 2025-08-13T20:00:42.626767237+00:00 stdout F [INFO] 10.217.0.19:35835 - 10492 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001272586s 2025-08-13T20:00:42.635307571+00:00 stdout F [INFO] 10.217.0.60:47210 - 51012 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00493136s 2025-08-13T20:00:42.635508907+00:00 stdout F [INFO] 10.217.0.60:48041 - 17561 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004893059s 2025-08-13T20:00:42.662218808+00:00 stdout F [INFO] 10.217.0.60:52906 - 27055 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009696496s 2025-08-13T20:00:42.662218808+00:00 stdout F [INFO] 10.217.0.60:36666 - 6155 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009805049s 2025-08-13T20:00:42.716915738+00:00 stdout F [INFO] 10.217.0.60:60334 - 9510 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005900578s 2025-08-13T20:00:42.716915738+00:00 stdout F [INFO] 10.217.0.60:56325 - 59807 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006511966s 2025-08-13T20:00:42.722290411+00:00 stdout F [INFO] 10.217.0.60:56599 - 47234 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.007804482s 2025-08-13T20:00:42.722639381+00:00 stdout F [INFO] 10.217.0.60:40308 - 51070 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.008097021s 2025-08-13T20:00:42.751493194+00:00 stdout F [INFO] 10.217.0.60:42287 - 5955 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068765s 2025-08-13T20:00:42.753440729+00:00 stdout F [INFO] 10.217.0.60:43479 - 31758 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000434712s 2025-08-13T20:00:42.758242896+00:00 stdout F [INFO] 10.217.0.57:34553 - 58336 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003449808s 2025-08-13T20:00:42.758242896+00:00 stdout F [INFO] 10.217.0.57:55698 - 58679 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003164171s 2025-08-13T20:00:42.799761920+00:00 stdout F [INFO] 10.217.0.60:57471 - 53951 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001028359s 2025-08-13T20:00:42.799761920+00:00 stdout F [INFO] 10.217.0.60:36137 - 30042 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000925436s 2025-08-13T20:00:42.821276633+00:00 stdout F [INFO] 10.217.0.60:51024 - 25845 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002521761s 2025-08-13T20:00:42.852377870+00:00 stdout F [INFO] 10.217.0.60:45732 - 49655 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.015162122s 2025-08-13T20:00:42.877721733+00:00 stdout F [INFO] 10.217.0.60:36587 - 34390 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008998267s 2025-08-13T20:00:42.877761484+00:00 stdout F [INFO] 10.217.0.60:41455 - 12455 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009635164s 2025-08-13T20:00:42.934947915+00:00 stdout F [INFO] 10.217.0.60:48487 - 16479 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001001568s 2025-08-13T20:00:42.934947915+00:00 stdout F [INFO] 10.217.0.60:39248 - 18897 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001283497s 2025-08-13T20:00:42.992199317+00:00 stdout F [INFO] 10.217.0.60:38306 - 15345 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001791081s 2025-08-13T20:00:42.997569910+00:00 stdout F [INFO] 10.217.0.60:43691 - 39626 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006527846s 2025-08-13T20:00:43.062744239+00:00 stdout F [INFO] 10.217.0.60:43124 - 10054 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002526562s 2025-08-13T20:00:43.062819551+00:00 stdout F [INFO] 10.217.0.60:43696 - 21466 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002597884s 2025-08-13T20:00:43.124412067+00:00 stdout F [INFO] 10.217.0.60:40037 - 48306 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001489412s 2025-08-13T20:00:43.124412067+00:00 stdout F [INFO] 10.217.0.60:56776 - 5951 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001802502s 2025-08-13T20:00:43.124448128+00:00 stdout F [INFO] 10.217.0.60:35495 - 36278 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001221295s 2025-08-13T20:00:43.125673583+00:00 stdout F [INFO] 10.217.0.60:33856 - 56194 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002322437s 2025-08-13T20:00:43.147262809+00:00 stdout F [INFO] 10.217.0.60:36329 - 25227 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003368526s 2025-08-13T20:00:43.148305638+00:00 stdout F [INFO] 10.217.0.60:48432 - 30770 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003434678s 2025-08-13T20:00:43.187327211+00:00 stdout F [INFO] 10.217.0.60:42719 - 44038 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002487311s 2025-08-13T20:00:43.187622319+00:00 stdout F [INFO] 10.217.0.60:33733 - 28042 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002977814s 2025-08-13T20:00:43.193190838+00:00 stdout F [INFO] 10.217.0.60:40321 - 64415 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001383639s 2025-08-13T20:00:43.193190838+00:00 stdout F [INFO] 10.217.0.60:58984 - 37137 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001215215s 2025-08-13T20:00:43.281338372+00:00 stdout F [INFO] 10.217.0.60:40457 - 23589 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001961056s 2025-08-13T20:00:43.281886087+00:00 stdout F [INFO] 10.217.0.60:52727 - 44082 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002660136s 2025-08-13T20:00:43.285920862+00:00 stdout F [INFO] 10.217.0.60:55415 - 57509 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069516s 2025-08-13T20:00:43.285920862+00:00 stdout F [INFO] 10.217.0.60:58720 - 48518 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000799473s 2025-08-13T20:00:43.352745757+00:00 stdout F [INFO] 10.217.0.60:47603 - 41130 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007655488s 2025-08-13T20:00:43.352745757+00:00 stdout F [INFO] 10.217.0.60:58734 - 17536 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007685769s 2025-08-13T20:00:43.423710250+00:00 stdout F [INFO] 10.217.0.60:51827 - 61021 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003569272s 2025-08-13T20:00:43.424613746+00:00 stdout F [INFO] 10.217.0.60:49551 - 22902 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004468317s 2025-08-13T20:00:43.425194043+00:00 stdout F [INFO] 10.217.0.60:43655 - 46137 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004508809s 2025-08-13T20:00:43.425534112+00:00 stdout F [INFO] 10.217.0.60:57763 - 5602 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005004903s 2025-08-13T20:00:43.470829214+00:00 stdout F [INFO] 10.217.0.60:60522 - 63475 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001313588s 2025-08-13T20:00:43.472331027+00:00 stdout F [INFO] 10.217.0.60:47979 - 23540 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002337996s 2025-08-13T20:00:43.497125284+00:00 stdout F [INFO] 10.217.0.60:39595 - 21645 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001085451s 2025-08-13T20:00:43.497502724+00:00 stdout F [INFO] 10.217.0.60:44441 - 51694 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001836472s 2025-08-13T20:00:43.543204928+00:00 stdout F [INFO] 10.217.0.60:44521 - 34550 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001911934s 2025-08-13T20:00:43.546595764+00:00 stdout F [INFO] 10.217.0.60:52466 - 14265 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000969237s 2025-08-13T20:00:43.557742862+00:00 stdout F [INFO] 10.217.0.60:59650 - 5341 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001627867s 2025-08-13T20:00:43.557742862+00:00 stdout F [INFO] 10.217.0.60:51584 - 49883 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001501333s 2025-08-13T20:00:43.600072219+00:00 stdout F [INFO] 10.217.0.60:49431 - 52323 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000845184s 2025-08-13T20:00:43.600110150+00:00 stdout F [INFO] 10.217.0.60:48025 - 60773 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000760792s 2025-08-13T20:00:43.611390982+00:00 stdout F [INFO] 10.217.0.60:40820 - 1549 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070815s 2025-08-13T20:00:43.611544296+00:00 stdout F [INFO] 10.217.0.60:47757 - 10448 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000754742s 2025-08-13T20:00:43.618045112+00:00 stdout F [INFO] 10.217.0.60:53912 - 30298 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001381489s 2025-08-13T20:00:43.619334828+00:00 stdout F [INFO] 10.217.0.60:44704 - 44979 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000822413s 2025-08-13T20:00:43.662573931+00:00 stdout F [INFO] 10.217.0.60:55017 - 30285 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001868524s 2025-08-13T20:00:43.662573931+00:00 stdout F [INFO] 10.217.0.60:59906 - 48194 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001780561s 2025-08-13T20:00:43.678353481+00:00 stdout F [INFO] 10.217.0.60:55405 - 6328 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000933286s 2025-08-13T20:00:43.678353481+00:00 stdout F [INFO] 10.217.0.60:47560 - 58287 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001986826s 2025-08-13T20:00:43.679677589+00:00 stdout F [INFO] 10.217.0.60:41177 - 47820 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001985617s 2025-08-13T20:00:43.686079122+00:00 stdout F [INFO] 10.217.0.60:35239 - 45681 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008523043s 2025-08-13T20:00:43.729225602+00:00 stdout F [INFO] 10.217.0.60:36044 - 4180 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001164383s 2025-08-13T20:00:43.729225602+00:00 stdout F [INFO] 10.217.0.60:41479 - 54020 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001488832s 2025-08-13T20:00:43.749673445+00:00 stdout F [INFO] 10.217.0.60:51760 - 2234 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001074331s 2025-08-13T20:00:43.750405596+00:00 stdout F [INFO] 10.217.0.60:56738 - 50301 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001634247s 2025-08-13T20:00:43.788935744+00:00 stdout F [INFO] 10.217.0.60:58467 - 36904 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003258043s 2025-08-13T20:00:43.788935744+00:00 stdout F [INFO] 10.217.0.60:51533 - 39214 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003825169s 2025-08-13T20:00:43.806927437+00:00 stdout F [INFO] 10.217.0.60:34408 - 40072 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070067s 2025-08-13T20:00:43.807457072+00:00 stdout F [INFO] 10.217.0.60:49017 - 43687 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001157863s 2025-08-13T20:00:43.848398340+00:00 stdout F [INFO] 10.217.0.60:53118 - 52132 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000759662s 2025-08-13T20:00:43.848634737+00:00 stdout F [INFO] 10.217.0.60:35567 - 5264 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000990089s 2025-08-13T20:00:43.872894978+00:00 stdout F [INFO] 10.217.0.60:45098 - 58996 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001257796s 2025-08-13T20:00:43.876055308+00:00 stdout F [INFO] 10.217.0.60:41662 - 8062 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003664225s 2025-08-13T20:00:43.922988587+00:00 stdout F [INFO] 10.217.0.60:60175 - 57474 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000804053s 2025-08-13T20:00:43.923065349+00:00 stdout F [INFO] 10.217.0.60:58165 - 10664 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001026449s 2025-08-13T20:00:43.935124723+00:00 stdout F [INFO] 10.217.0.60:37231 - 28474 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000776202s 2025-08-13T20:00:43.935243866+00:00 stdout F [INFO] 10.217.0.60:34968 - 12153 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001174304s 2025-08-13T20:00:44.003189984+00:00 stdout F [INFO] 10.217.0.60:37021 - 40929 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000838734s 2025-08-13T20:00:44.003408840+00:00 stdout F [INFO] 10.217.0.60:38598 - 52744 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001257846s 2025-08-13T20:00:44.042574607+00:00 stdout F [INFO] 10.217.0.60:34231 - 46460 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000661409s 2025-08-13T20:00:44.043194384+00:00 stdout F [INFO] 10.217.0.60:58181 - 15108 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001057461s 2025-08-13T20:00:44.097995907+00:00 stdout F [INFO] 10.217.0.60:39022 - 58329 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000672049s 2025-08-13T20:00:44.098756259+00:00 stdout F [INFO] 10.217.0.60:57403 - 10634 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001475332s 2025-08-13T20:00:44.111347118+00:00 stdout F [INFO] 10.217.0.60:37653 - 37404 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000602627s 2025-08-13T20:00:44.111596875+00:00 stdout F [INFO] 10.217.0.60:47515 - 2054 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000467333s 2025-08-13T20:00:44.238898615+00:00 stdout F [INFO] 10.217.0.60:42808 - 16861 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000642628s 2025-08-13T20:00:44.238898615+00:00 stdout F [INFO] 10.217.0.60:48067 - 30534 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001094392s 2025-08-13T20:00:44.239324577+00:00 stdout F [INFO] 10.217.0.60:35307 - 23336 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001235565s 2025-08-13T20:00:44.239324577+00:00 stdout F [INFO] 10.217.0.60:39319 - 35875 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001458431s 2025-08-13T20:00:44.306349678+00:00 stdout F [INFO] 10.217.0.60:38046 - 11013 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001185764s 2025-08-13T20:00:44.306349678+00:00 stdout F [INFO] 10.217.0.60:43413 - 39794 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105296s 2025-08-13T20:00:44.308755196+00:00 stdout F [INFO] 10.217.0.60:54687 - 38217 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004081087s 2025-08-13T20:00:44.309022484+00:00 stdout F [INFO] 10.217.0.60:43534 - 29464 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004097267s 2025-08-13T20:00:44.371112775+00:00 stdout F [INFO] 10.217.0.60:53728 - 6570 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002602834s 2025-08-13T20:00:44.371112775+00:00 stdout F [INFO] 10.217.0.60:41092 - 5177 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005096716s 2025-08-13T20:00:44.371112775+00:00 stdout F [INFO] 10.217.0.60:35936 - 46005 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005082925s 2025-08-13T20:00:44.373211944+00:00 stdout F [INFO] 10.217.0.60:39366 - 34168 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006175236s 2025-08-13T20:00:44.467058530+00:00 stdout F [INFO] 10.217.0.60:50003 - 16 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005760904s 2025-08-13T20:00:44.470581051+00:00 stdout F [INFO] 10.217.0.60:59145 - 51973 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00663318s 2025-08-13T20:00:44.561180964+00:00 stdout F [INFO] 10.217.0.60:38896 - 9837 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007873105s 2025-08-13T20:00:44.562378618+00:00 stdout F [INFO] 10.217.0.60:57914 - 49009 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008189754s 2025-08-13T20:00:44.565446676+00:00 stdout F [INFO] 10.217.0.60:44070 - 11996 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003979883s 2025-08-13T20:00:44.570577672+00:00 stdout F [INFO] 10.217.0.60:58860 - 63938 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008014928s 2025-08-13T20:00:44.634733022+00:00 stdout F [INFO] 10.217.0.60:57237 - 12292 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00106751s 2025-08-13T20:00:44.635281027+00:00 stdout F [INFO] 10.217.0.60:50472 - 10469 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001678718s 2025-08-13T20:00:44.636369068+00:00 stdout F [INFO] 10.217.0.60:59839 - 51565 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002900223s 2025-08-13T20:00:44.636465431+00:00 stdout F [INFO] 10.217.0.60:34556 - 21555 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002778619s 2025-08-13T20:00:44.718334255+00:00 stdout F [INFO] 10.217.0.60:47654 - 43073 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009952494s 2025-08-13T20:00:44.725470629+00:00 stdout F [INFO] 10.217.0.60:37658 - 61873 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007332789s 2025-08-13T20:00:44.725470629+00:00 stdout F [INFO] 10.217.0.60:47601 - 29425 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007493024s 2025-08-13T20:00:44.725535371+00:00 stdout F [INFO] 10.217.0.60:36558 - 10190 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007303668s 2025-08-13T20:00:44.804289786+00:00 stdout F [INFO] 10.217.0.60:59348 - 55222 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001225224s 2025-08-13T20:00:44.807936300+00:00 stdout F [INFO] 10.217.0.60:40745 - 55897 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004331593s 2025-08-13T20:00:44.872898843+00:00 stdout F [INFO] 10.217.0.60:35807 - 51060 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000934206s 2025-08-13T20:00:44.872898843+00:00 stdout F [INFO] 10.217.0.60:45328 - 15524 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001419791s 2025-08-13T20:00:44.946685987+00:00 stdout F [INFO] 10.217.0.60:52632 - 32111 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002485871s 2025-08-13T20:00:44.948327163+00:00 stdout F [INFO] 10.217.0.60:59941 - 31677 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000775922s 2025-08-13T20:00:45.044009342+00:00 stdout F [INFO] 10.217.0.60:36355 - 57491 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003067507s 2025-08-13T20:00:45.044009342+00:00 stdout F [INFO] 10.217.0.60:49443 - 46875 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004174929s 2025-08-13T20:00:45.067966155+00:00 stdout F [INFO] 10.217.0.60:45455 - 970 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007233066s 2025-08-13T20:00:45.069930651+00:00 stdout F [INFO] 10.217.0.60:47235 - 43002 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007697979s 2025-08-13T20:00:45.128341376+00:00 stdout F [INFO] 10.217.0.60:48452 - 47585 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.004124938s 2025-08-13T20:00:45.128606654+00:00 stdout F [INFO] 10.217.0.60:43867 - 19263 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00526546s 2025-08-13T20:00:45.135204322+00:00 stdout F [INFO] 10.217.0.60:37334 - 26237 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00631382s 2025-08-13T20:00:45.139650159+00:00 stdout F [INFO] 10.217.0.60:36882 - 30502 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006561027s 2025-08-13T20:00:45.152938708+00:00 stdout F [INFO] 10.217.0.60:49344 - 27017 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001373699s 2025-08-13T20:00:45.153824703+00:00 stdout F [INFO] 10.217.0.60:33307 - 39959 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002010507s 2025-08-13T20:00:45.207455652+00:00 stdout F [INFO] 10.217.0.60:60805 - 45507 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001168484s 2025-08-13T20:00:45.209456329+00:00 stdout F [INFO] 10.217.0.60:36405 - 47965 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00210645s 2025-08-13T20:00:45.212187877+00:00 stdout F [INFO] 10.217.0.60:40040 - 8629 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002089839s 2025-08-13T20:00:45.212291200+00:00 stdout F [INFO] 10.217.0.60:50600 - 32146 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002621445s 2025-08-13T20:00:45.286609639+00:00 stdout F [INFO] 10.217.0.60:39091 - 50762 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002044838s 2025-08-13T20:00:45.286928948+00:00 stdout F [INFO] 10.217.0.60:39286 - 6970 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0024663s 2025-08-13T20:00:45.297360786+00:00 stdout F [INFO] 10.217.0.60:45099 - 30506 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001081961s 2025-08-13T20:00:45.297486899+00:00 stdout F [INFO] 10.217.0.60:40857 - 4648 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000881935s 2025-08-13T20:00:45.358146569+00:00 stdout F [INFO] 10.217.0.60:54841 - 9214 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001787301s 2025-08-13T20:00:45.358523760+00:00 stdout F [INFO] 10.217.0.60:56436 - 23377 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002150051s 2025-08-13T20:00:45.389388330+00:00 stdout F [INFO] 10.217.0.60:54857 - 31695 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006964878s 2025-08-13T20:00:45.390320876+00:00 stdout F [INFO] 10.217.0.60:43585 - 49599 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008879413s 2025-08-13T20:00:45.413340543+00:00 stdout F [INFO] 10.217.0.60:59884 - 19312 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00456255s 2025-08-13T20:00:45.416187324+00:00 stdout F [INFO] 10.217.0.60:42278 - 12738 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005400374s 2025-08-13T20:00:45.539007396+00:00 stdout F [INFO] 10.217.0.60:59731 - 36791 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.046739423s 2025-08-13T20:00:45.539007396+00:00 stdout F [INFO] 10.217.0.60:58492 - 4853 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.047440983s 2025-08-13T20:00:45.543086592+00:00 stdout F [INFO] 10.217.0.60:44716 - 64555 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001795521s 2025-08-13T20:00:45.568481187+00:00 stdout F [INFO] 10.217.0.60:55385 - 23971 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002639415s 2025-08-13T20:00:45.661917681+00:00 stdout F [INFO] 10.217.0.60:48047 - 11680 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.019125105s 2025-08-13T20:00:45.662415625+00:00 stdout F [INFO] 10.217.0.60:39642 - 1573 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.019904978s 2025-08-13T20:00:45.664545706+00:00 stdout F [INFO] 10.217.0.60:36013 - 44109 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002240734s 2025-08-13T20:00:45.664806083+00:00 stdout F [INFO] 10.217.0.60:44076 - 19830 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002743738s 2025-08-13T20:00:45.711686910+00:00 stdout F [INFO] 10.217.0.60:58769 - 4854 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001804002s 2025-08-13T20:00:45.711913676+00:00 stdout F [INFO] 10.217.0.60:43779 - 32061 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002489711s 2025-08-13T20:00:45.763128247+00:00 stdout F [INFO] 10.217.0.60:38757 - 37067 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001899084s 2025-08-13T20:00:45.763128247+00:00 stdout F [INFO] 10.217.0.60:47541 - 44972 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002489831s 2025-08-13T20:00:45.818947768+00:00 stdout F [INFO] 10.217.0.60:45022 - 59800 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004771886s 2025-08-13T20:00:45.849971173+00:00 stdout F [INFO] 10.217.0.60:41359 - 41174 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.033949028s 2025-08-13T20:00:45.946985419+00:00 stdout F [INFO] 10.217.0.60:55262 - 56115 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.025679812s 2025-08-13T20:00:45.946985419+00:00 stdout F [INFO] 10.217.0.60:57747 - 17936 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.026174786s 2025-08-13T20:00:46.032202749+00:00 stdout F [INFO] 10.217.0.60:41161 - 13567 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.013157245s 2025-08-13T20:00:46.032202749+00:00 stdout F [INFO] 10.217.0.60:56298 - 13419 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.013566517s 2025-08-13T20:00:46.065677784+00:00 stdout F [INFO] 10.217.0.60:44922 - 19924 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008302017s 2025-08-13T20:00:46.066911359+00:00 stdout F [INFO] 10.217.0.60:56113 - 53771 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0091097s 2025-08-13T20:00:46.151485830+00:00 stdout F [INFO] 10.217.0.60:52318 - 47301 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000995248s 2025-08-13T20:00:46.173898789+00:00 stdout F [INFO] 10.217.0.60:43487 - 39893 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016371676s 2025-08-13T20:00:46.174119226+00:00 stdout F [INFO] 10.217.0.60:40927 - 1924 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.023281174s 2025-08-13T20:00:46.174286081+00:00 stdout F [INFO] 10.217.0.60:58486 - 48914 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016530531s 2025-08-13T20:00:46.191491961+00:00 stdout F [INFO] 10.217.0.60:40447 - 49758 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.01370845s 2025-08-13T20:00:46.192810969+00:00 stdout F [INFO] 10.217.0.60:54234 - 36503 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000944556s 2025-08-13T20:00:46.334162769+00:00 stdout F [INFO] 10.217.0.28:54681 - 4446 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001799722s 2025-08-13T20:00:46.350929327+00:00 stdout F [INFO] 10.217.0.28:50313 - 58891 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.018742515s 2025-08-13T20:00:46.411343540+00:00 stdout F [INFO] 10.217.0.60:52088 - 60139 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.033143935s 2025-08-13T20:00:46.426122801+00:00 stdout F [INFO] 10.217.0.60:33487 - 5955 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.015007558s 2025-08-13T20:00:46.430542497+00:00 stdout F [INFO] 10.217.0.60:35828 - 34714 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.011280352s 2025-08-13T20:00:46.430755333+00:00 stdout F [INFO] 10.217.0.60:40533 - 36830 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.012218549s 2025-08-13T20:00:46.433404079+00:00 stdout F [INFO] 10.217.0.60:44919 - 20101 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.014276107s 2025-08-13T20:00:46.453394739+00:00 stdout F [INFO] 10.217.0.60:39833 - 42467 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00278477s 2025-08-13T20:00:46.535225702+00:00 stdout F [INFO] 10.217.0.60:43467 - 41891 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.013027052s 2025-08-13T20:00:46.538984440+00:00 stdout F [INFO] 10.217.0.60:47507 - 26638 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.017371926s 2025-08-13T20:00:46.540622626+00:00 stdout F [INFO] 10.217.0.60:44240 - 8524 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004996362s 2025-08-13T20:00:46.540914895+00:00 stdout F [INFO] 10.217.0.60:48175 - 63130 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005273221s 2025-08-13T20:00:46.634162373+00:00 stdout F [INFO] 10.217.0.60:37456 - 31623 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.012673661s 2025-08-13T20:00:46.644251291+00:00 stdout F [INFO] 10.217.0.60:53598 - 25926 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.021997027s 2025-08-13T20:00:46.754285509+00:00 stdout F [INFO] 10.217.0.60:44935 - 57055 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003675785s 2025-08-13T20:00:46.756616595+00:00 stdout F [INFO] 10.217.0.60:51680 - 59997 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006043852s 2025-08-13T20:00:46.768956457+00:00 stdout F [INFO] 10.217.0.60:60644 - 39110 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011803606s 2025-08-13T20:00:46.769154143+00:00 stdout F [INFO] 10.217.0.60:35175 - 29320 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012395984s 2025-08-13T20:00:46.769327967+00:00 stdout F [INFO] 10.217.0.60:60554 - 44922 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.013082673s 2025-08-13T20:00:46.769385279+00:00 stdout F [INFO] 10.217.0.60:54831 - 58016 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.013392642s 2025-08-13T20:00:46.839294423+00:00 stdout F [INFO] 10.217.0.60:56661 - 11234 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008934975s 2025-08-13T20:00:46.854649420+00:00 stdout F [INFO] 10.217.0.60:54181 - 15570 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.023440799s 2025-08-13T20:00:46.943090421+00:00 stdout F [INFO] 10.217.0.60:56336 - 41120 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.034635307s 2025-08-13T20:00:46.943090421+00:00 stdout F [INFO] 10.217.0.60:44456 - 12995 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.035224234s 2025-08-13T20:00:47.116218208+00:00 stdout F [INFO] 10.217.0.60:37550 - 43782 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.020021221s 2025-08-13T20:00:47.116218208+00:00 stdout F [INFO] 10.217.0.60:40522 - 2678 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.02034309s 2025-08-13T20:00:47.126230043+00:00 stdout F [INFO] 10.217.0.60:36157 - 52261 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011332573s 2025-08-13T20:00:47.126230043+00:00 stdout F [INFO] 10.217.0.60:47728 - 17977 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011963282s 2025-08-13T20:00:47.241771228+00:00 stdout F [INFO] 10.217.0.60:41268 - 12463 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008230535s 2025-08-13T20:00:47.242500149+00:00 stdout F [INFO] 10.217.0.60:48998 - 45260 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009925393s 2025-08-13T20:00:47.272471063+00:00 stdout F [INFO] 10.217.0.60:48206 - 29639 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010554841s 2025-08-13T20:00:47.272471063+00:00 stdout F [INFO] 10.217.0.60:46995 - 41796 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011676323s 2025-08-13T20:00:47.353176194+00:00 stdout F [INFO] 10.217.0.60:52090 - 63475 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001874713s 2025-08-13T20:00:47.353440092+00:00 stdout F [INFO] 10.217.0.60:45588 - 13464 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000767912s 2025-08-13T20:00:47.376652584+00:00 stdout F [INFO] 10.217.0.60:56400 - 20881 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003841279s 2025-08-13T20:00:47.379424483+00:00 stdout F [INFO] 10.217.0.60:49524 - 56709 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004009245s 2025-08-13T20:00:47.593026593+00:00 stdout F [INFO] 10.217.0.60:44764 - 44056 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005789535s 2025-08-13T20:00:47.595032430+00:00 stdout F [INFO] 10.217.0.60:53477 - 46362 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006140115s 2025-08-13T20:00:47.622130023+00:00 stdout F [INFO] 10.217.0.60:33700 - 42114 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.014865764s 2025-08-13T20:00:47.622935996+00:00 stdout F [INFO] 10.217.0.60:48559 - 14693 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004471528s 2025-08-13T20:00:47.721827146+00:00 stdout F [INFO] 10.217.0.60:44528 - 15152 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007532715s 2025-08-13T20:00:47.722296829+00:00 stdout F [INFO] 10.217.0.60:50872 - 49613 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007892625s 2025-08-13T20:00:47.748162137+00:00 stdout F [INFO] 10.217.0.60:38608 - 25117 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012239068s 2025-08-13T20:00:47.748162137+00:00 stdout F [INFO] 10.217.0.60:42298 - 23915 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009985865s 2025-08-13T20:00:47.752151451+00:00 stdout F [INFO] 10.217.0.60:52605 - 33456 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002111341s 2025-08-13T20:00:47.752151451+00:00 stdout F [INFO] 10.217.0.60:54884 - 59917 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002028918s 2025-08-13T20:00:47.804053931+00:00 stdout F [INFO] 10.217.0.57:34181 - 56876 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.032832606s 2025-08-13T20:00:47.809603409+00:00 stdout F [INFO] 10.217.0.57:53332 - 16628 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007190365s 2025-08-13T20:00:47.853002786+00:00 stdout F [INFO] 10.217.0.60:60728 - 54856 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.011329993s 2025-08-13T20:00:47.853002786+00:00 stdout F [INFO] 10.217.0.60:47761 - 10494 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.012030293s 2025-08-13T20:00:47.864434812+00:00 stdout F [INFO] 10.217.0.60:51567 - 53307 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012441734s 2025-08-13T20:00:47.881486958+00:00 stdout F [INFO] 10.217.0.60:37162 - 20491 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.026721132s 2025-08-13T20:00:47.918687149+00:00 stdout F [INFO] 10.217.0.60:33731 - 24998 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009373047s 2025-08-13T20:00:47.941229022+00:00 stdout F [INFO] 10.217.0.60:51821 - 8384 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.031317193s 2025-08-13T20:00:47.941717446+00:00 stdout F [INFO] 10.217.0.60:47409 - 36059 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006863795s 2025-08-13T20:00:47.942178689+00:00 stdout F [INFO] 10.217.0.60:50438 - 6738 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007836173s 2025-08-13T20:00:48.101958315+00:00 stdout F [INFO] 10.217.0.60:37129 - 8864 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004393355s 2025-08-13T20:00:48.113905356+00:00 stdout F [INFO] 10.217.0.60:46038 - 64197 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012032263s 2025-08-13T20:00:48.113905356+00:00 stdout F [INFO] 10.217.0.60:52966 - 53758 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012346242s 2025-08-13T20:00:48.140736431+00:00 stdout F [INFO] 10.217.0.60:57444 - 30845 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.047522346s 2025-08-13T20:00:48.221997768+00:00 stdout F [INFO] 10.217.0.60:52176 - 15093 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000942287s 2025-08-13T20:00:48.221997768+00:00 stdout F [INFO] 10.217.0.60:43544 - 8170 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001601525s 2025-08-13T20:00:48.288368820+00:00 stdout F [INFO] 10.217.0.60:47883 - 29163 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005282701s 2025-08-13T20:00:48.288368820+00:00 stdout F [INFO] 10.217.0.60:40600 - 44845 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005914099s 2025-08-13T20:00:48.300356172+00:00 stdout F [INFO] 10.217.0.60:60257 - 34117 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001176293s 2025-08-13T20:00:48.300356172+00:00 stdout F [INFO] 10.217.0.60:54557 - 16227 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005456785s 2025-08-13T20:00:48.333949810+00:00 stdout F [INFO] 10.217.0.60:36088 - 663 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001502822s 2025-08-13T20:00:48.333949810+00:00 stdout F [INFO] 10.217.0.60:54236 - 5321 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001722179s 2025-08-13T20:00:48.378015887+00:00 stdout F [INFO] 10.217.0.60:57981 - 48896 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00138745s 2025-08-13T20:00:48.378015887+00:00 stdout F [INFO] 10.217.0.60:57451 - 16785 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00140207s 2025-08-13T20:00:48.450697889+00:00 stdout F [INFO] 10.217.0.60:48824 - 688 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010017566s 2025-08-13T20:00:48.464770230+00:00 stdout F [INFO] 10.217.0.60:48120 - 24857 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.029571773s 2025-08-13T20:00:48.465644615+00:00 stdout F [INFO] 10.217.0.60:41695 - 60353 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007614157s 2025-08-13T20:00:48.518190334+00:00 stdout F [INFO] 10.217.0.60:58007 - 53703 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.033442244s 2025-08-13T20:00:48.538927885+00:00 stdout F [INFO] 10.217.0.60:48976 - 17944 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008436611s 2025-08-13T20:00:48.538927885+00:00 stdout F [INFO] 10.217.0.60:56226 - 25509 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008007849s 2025-08-13T20:00:48.604984298+00:00 stdout F [INFO] 10.217.0.60:39769 - 21331 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002969005s 2025-08-13T20:00:48.604984298+00:00 stdout F [INFO] 10.217.0.60:58238 - 34786 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003174291s 2025-08-13T20:00:48.626890253+00:00 stdout F [INFO] 10.217.0.60:57081 - 27014 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001323508s 2025-08-13T20:00:48.633335527+00:00 stdout F [INFO] 10.217.0.60:55145 - 13147 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011070895s 2025-08-13T20:00:48.677906238+00:00 stdout F [INFO] 10.217.0.60:48452 - 48371 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007958427s 2025-08-13T20:00:48.677906238+00:00 stdout F [INFO] 10.217.0.60:59529 - 56136 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00839607s 2025-08-13T20:00:48.817630152+00:00 stdout F [INFO] 10.217.0.60:35230 - 49847 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.019805685s 2025-08-13T20:00:48.843040606+00:00 stdout F [INFO] 10.217.0.60:34693 - 19528 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.037023836s 2025-08-13T20:00:48.879189817+00:00 stdout F [INFO] 10.217.0.60:34184 - 25098 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.015109961s 2025-08-13T20:00:48.894400171+00:00 stdout F [INFO] 10.217.0.60:34465 - 55609 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.042471811s 2025-08-13T20:00:48.957136080+00:00 stdout F [INFO] 10.217.0.60:38476 - 51226 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008033729s 2025-08-13T20:00:48.991085998+00:00 stdout F [INFO] 10.217.0.60:34121 - 45872 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.018326873s 2025-08-13T20:00:49.007615269+00:00 stdout F [INFO] 10.217.0.60:52068 - 8984 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006058543s 2025-08-13T20:00:49.007615269+00:00 stdout F [INFO] 10.217.0.60:41706 - 21787 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005875008s 2025-08-13T20:00:49.091360167+00:00 stdout F [INFO] 10.217.0.60:45325 - 10432 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.01261853s 2025-08-13T20:00:49.091360167+00:00 stdout F [INFO] 10.217.0.60:43459 - 12755 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.01299332s 2025-08-13T20:00:49.120658922+00:00 stdout F [INFO] 10.217.0.60:60336 - 33140 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003302204s 2025-08-13T20:00:49.120658922+00:00 stdout F [INFO] 10.217.0.60:33883 - 18191 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004276132s 2025-08-13T20:00:49.186982213+00:00 stdout F [INFO] 10.217.0.60:45906 - 53699 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.011465726s 2025-08-13T20:00:49.189545497+00:00 stdout F [INFO] 10.217.0.60:39046 - 2349 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.013666889s 2025-08-13T20:00:49.335934651+00:00 stdout F [INFO] 10.217.0.60:45971 - 35326 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003226722s 2025-08-13T20:00:49.339332508+00:00 stdout F [INFO] 10.217.0.60:55148 - 46935 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006413193s 2025-08-13T20:00:49.401101669+00:00 stdout F [INFO] 10.217.0.60:43142 - 57420 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.029564173s 2025-08-13T20:00:49.401101669+00:00 stdout F [INFO] 10.217.0.60:53476 - 47620 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.030470539s 2025-08-13T20:00:49.475906792+00:00 stdout F [INFO] 10.217.0.60:44808 - 38767 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00664648s 2025-08-13T20:00:49.476184320+00:00 stdout F [INFO] 10.217.0.60:59050 - 11320 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007127623s 2025-08-13T20:00:49.499986538+00:00 stdout F [INFO] 10.217.0.60:56114 - 55312 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001972917s 2025-08-13T20:00:49.500952016+00:00 stdout F [INFO] 10.217.0.60:58021 - 19747 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003528551s 2025-08-13T20:00:49.585704093+00:00 stdout F [INFO] 10.217.0.60:53492 - 45836 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004307853s 2025-08-13T20:00:49.587022820+00:00 stdout F [INFO] 10.217.0.60:59742 - 9936 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006042932s 2025-08-13T20:00:49.619867467+00:00 stdout F [INFO] 10.217.0.60:40608 - 36044 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002080209s 2025-08-13T20:00:49.621386450+00:00 stdout F [INFO] 10.217.0.60:54497 - 59787 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003969453s 2025-08-13T20:00:49.738205081+00:00 stdout F [INFO] 10.217.0.60:59059 - 46154 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0013962s 2025-08-13T20:00:49.738205081+00:00 stdout F [INFO] 10.217.0.60:37185 - 27738 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002493211s 2025-08-13T20:00:49.742536965+00:00 stdout F [INFO] 10.217.0.60:49568 - 23639 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006554967s 2025-08-13T20:00:49.742898255+00:00 stdout F [INFO] 10.217.0.60:60560 - 13861 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007278567s 2025-08-13T20:00:49.772066467+00:00 stdout F [INFO] 10.217.0.60:37772 - 50665 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.009398078s 2025-08-13T20:00:49.798546172+00:00 stdout F [INFO] 10.217.0.60:41603 - 50543 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.035662117s 2025-08-13T20:00:49.876354960+00:00 stdout F [INFO] 10.217.0.60:39133 - 51596 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003322455s 2025-08-13T20:00:49.879992484+00:00 stdout F [INFO] 10.217.0.60:48144 - 11011 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004976322s 2025-08-13T20:00:49.899650055+00:00 stdout F [INFO] 10.217.0.60:34077 - 56114 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001278427s 2025-08-13T20:00:49.900298013+00:00 stdout F [INFO] 10.217.0.60:47986 - 62439 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001832252s 2025-08-13T20:00:50.012628316+00:00 stdout F [INFO] 10.217.0.60:39637 - 26080 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00632463s 2025-08-13T20:00:50.012688988+00:00 stdout F [INFO] 10.217.0.60:53716 - 534 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.010245322s 2025-08-13T20:00:50.051583307+00:00 stdout F [INFO] 10.217.0.60:55657 - 19501 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004846708s 2025-08-13T20:00:50.058106233+00:00 stdout F [INFO] 10.217.0.60:40058 - 18678 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.011136948s 2025-08-13T20:00:50.141707167+00:00 stdout F [INFO] 10.217.0.60:60817 - 13198 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008362138s 2025-08-13T20:00:50.142358925+00:00 stdout F [INFO] 10.217.0.60:46182 - 57950 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009989265s 2025-08-13T20:00:50.152990589+00:00 stdout F [INFO] 10.217.0.60:60651 - 47522 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001214825s 2025-08-13T20:00:50.153652057+00:00 stdout F [INFO] 10.217.0.60:51945 - 46133 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005919309s 2025-08-13T20:00:50.216588312+00:00 stdout F [INFO] 10.217.0.60:36086 - 48239 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009558012s 2025-08-13T20:00:50.230207140+00:00 stdout F [INFO] 10.217.0.60:41438 - 20130 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001547914s 2025-08-13T20:00:50.246760042+00:00 stdout F [INFO] 10.217.0.60:37703 - 41222 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000947307s 2025-08-13T20:00:50.286219267+00:00 stdout F [INFO] 10.217.0.60:44233 - 28746 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.038549609s 2025-08-13T20:00:50.426368064+00:00 stdout F [INFO] 10.217.0.60:59761 - 18861 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00103962s 2025-08-13T20:00:50.426610421+00:00 stdout F [INFO] 10.217.0.60:50474 - 18337 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001572425s 2025-08-13T20:00:50.476992067+00:00 stdout F [INFO] 10.217.0.60:45294 - 4861 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.011467187s 2025-08-13T20:00:50.479194270+00:00 stdout F [INFO] 10.217.0.60:60484 - 20544 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.014007289s 2025-08-13T20:00:50.487049573+00:00 stdout F [INFO] 10.217.0.60:50323 - 53459 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00952296s 2025-08-13T20:00:50.487462955+00:00 stdout F [INFO] 10.217.0.60:53709 - 16232 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010004194s 2025-08-13T20:00:50.631761129+00:00 stdout F [INFO] 10.217.0.60:43049 - 58271 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004848749s 2025-08-13T20:00:50.631761129+00:00 stdout F [INFO] 10.217.0.60:51596 - 4810 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005409235s 2025-08-13T20:00:50.632212972+00:00 stdout F [INFO] 10.217.0.60:36500 - 56939 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005445896s 2025-08-13T20:00:50.653875190+00:00 stdout F [INFO] 10.217.0.60:37162 - 20095 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002929244s 2025-08-13T20:00:50.710595157+00:00 stdout F [INFO] 10.217.0.60:53343 - 65404 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001281806s 2025-08-13T20:00:50.710917926+00:00 stdout F [INFO] 10.217.0.60:46189 - 9933 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002245714s 2025-08-13T20:00:50.741703954+00:00 stdout F [INFO] 10.217.0.60:43271 - 32831 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003597993s 2025-08-13T20:00:50.741906950+00:00 stdout F [INFO] 10.217.0.60:49322 - 45791 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004130768s 2025-08-13T20:00:50.814239543+00:00 stdout F [INFO] 10.217.0.60:43252 - 38980 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008340968s 2025-08-13T20:00:50.821518250+00:00 stdout F [INFO] 10.217.0.60:33887 - 23010 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008209924s 2025-08-13T20:00:50.822469257+00:00 stdout F [INFO] 10.217.0.60:52123 - 64764 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000922446s 2025-08-13T20:00:50.822531939+00:00 stdout F [INFO] 10.217.0.60:47079 - 60051 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000604947s 2025-08-13T20:00:50.902917641+00:00 stdout F [INFO] 10.217.0.60:37685 - 16757 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001947336s 2025-08-13T20:00:50.903207449+00:00 stdout F [INFO] 10.217.0.60:57688 - 53872 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00208412s 2025-08-13T20:00:50.903287672+00:00 stdout F [INFO] 10.217.0.60:60105 - 24057 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002338506s 2025-08-13T20:00:50.903567440+00:00 stdout F [INFO] 10.217.0.60:38381 - 3564 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00279449s 2025-08-13T20:00:51.002414068+00:00 stdout F [INFO] 10.217.0.60:41363 - 55003 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001155413s 2025-08-13T20:00:51.004027554+00:00 stdout F [INFO] 10.217.0.60:41385 - 36490 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104889s 2025-08-13T20:00:51.017230481+00:00 stdout F [INFO] 10.217.0.60:51407 - 52430 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003244232s 2025-08-13T20:00:51.018354993+00:00 stdout F [INFO] 10.217.0.60:48463 - 37591 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004471547s 2025-08-13T20:00:51.101497723+00:00 stdout F [INFO] 10.217.0.60:46161 - 3185 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00561465s 2025-08-13T20:00:51.101497723+00:00 stdout F [INFO] 10.217.0.60:51482 - 46090 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005485686s 2025-08-13T20:00:51.142181403+00:00 stdout F [INFO] 10.217.0.60:56445 - 47860 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004654473s 2025-08-13T20:00:51.146638801+00:00 stdout F [INFO] 10.217.0.60:45442 - 19436 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005363643s 2025-08-13T20:00:51.204234113+00:00 stdout F [INFO] 10.217.0.60:35174 - 59788 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007879465s 2025-08-13T20:00:51.204373347+00:00 stdout F [INFO] 10.217.0.60:38913 - 63826 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009311036s 2025-08-13T20:00:51.219247781+00:00 stdout F [INFO] 10.217.0.28:60652 - 32272 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005673381s 2025-08-13T20:00:51.219938511+00:00 stdout F [INFO] 10.217.0.28:33440 - 14907 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007741481s 2025-08-13T20:00:51.220274210+00:00 stdout F [INFO] 10.217.0.60:33067 - 24093 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003064477s 2025-08-13T20:00:51.220749424+00:00 stdout F [INFO] 10.217.0.60:60012 - 42704 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002688427s 2025-08-13T20:00:51.288859666+00:00 stdout F [INFO] 10.217.0.60:49681 - 14797 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000803023s 2025-08-13T20:00:51.288859666+00:00 stdout F [INFO] 10.217.0.60:44507 - 3751 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000508474s 2025-08-13T20:00:51.306944231+00:00 stdout F [INFO] 10.217.0.60:40419 - 58620 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007643068s 2025-08-13T20:00:51.306944231+00:00 stdout F [INFO] 10.217.0.60:51791 - 27961 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008525664s 2025-08-13T20:00:51.375923848+00:00 stdout F [INFO] 10.217.0.60:34231 - 35820 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000928496s 2025-08-13T20:00:51.376458394+00:00 stdout F [INFO] 10.217.0.60:43604 - 31528 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001008199s 2025-08-13T20:00:51.386691535+00:00 stdout F [INFO] 10.217.0.60:60210 - 17663 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001227575s 2025-08-13T20:00:51.386691535+00:00 stdout F [INFO] 10.217.0.60:44075 - 52352 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001294827s 2025-08-13T20:00:51.611735065+00:00 stdout F [INFO] 10.217.0.60:38690 - 52947 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.077360954s 2025-08-13T20:00:51.612714183+00:00 stdout F [INFO] 10.217.0.60:41948 - 6937 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.078289441s 2025-08-13T20:00:51.676514931+00:00 stdout F [INFO] 10.217.0.60:42928 - 19370 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001911985s 2025-08-13T20:00:51.676514931+00:00 stdout F [INFO] 10.217.0.60:32987 - 60196 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002667866s 2025-08-13T20:00:51.676514931+00:00 stdout F [INFO] 10.217.0.60:49495 - 63508 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002623465s 2025-08-13T20:00:51.676514931+00:00 stdout F [INFO] 10.217.0.60:52718 - 33970 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002470911s 2025-08-13T20:00:51.742160331+00:00 stdout F [INFO] 10.217.0.60:48686 - 41468 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001831722s 2025-08-13T20:00:51.742427329+00:00 stdout F [INFO] 10.217.0.60:35098 - 63774 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002177992s 2025-08-13T20:00:51.743220981+00:00 stdout F [INFO] 10.217.0.60:43032 - 9717 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001273976s 2025-08-13T20:00:51.744057455+00:00 stdout F [INFO] 10.217.0.60:34108 - 623 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002023838s 2025-08-13T20:00:51.748934934+00:00 stdout F [INFO] 10.217.0.60:40602 - 21272 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000581176s 2025-08-13T20:00:51.749930403+00:00 stdout F [INFO] 10.217.0.60:37496 - 12601 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001247316s 2025-08-13T20:00:51.822689737+00:00 stdout F [INFO] 10.217.0.60:38232 - 17431 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00665717s 2025-08-13T20:00:51.823074108+00:00 stdout F [INFO] 10.217.0.60:58055 - 13489 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007323248s 2025-08-13T20:00:51.823325085+00:00 stdout F [INFO] 10.217.0.60:47561 - 18500 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007794712s 2025-08-13T20:00:51.823613724+00:00 stdout F [INFO] 10.217.0.60:47869 - 39243 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007890675s 2025-08-13T20:00:51.855978707+00:00 stdout F [INFO] 10.217.0.60:34863 - 45394 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004177349s 2025-08-13T20:00:51.867184376+00:00 stdout F [INFO] 10.217.0.60:39514 - 30757 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.012415594s 2025-08-13T20:00:51.910303916+00:00 stdout F [INFO] 10.217.0.60:49136 - 577 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003391667s 2025-08-13T20:00:51.910573373+00:00 stdout F [INFO] 10.217.0.60:42532 - 1690 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004396395s 2025-08-13T20:00:51.942863424+00:00 stdout F [INFO] 10.217.0.60:33964 - 26144 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002259924s 2025-08-13T20:00:51.945255812+00:00 stdout F [INFO] 10.217.0.60:58693 - 49707 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004602351s 2025-08-13T20:00:51.984627495+00:00 stdout F [INFO] 10.217.0.60:38197 - 6440 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012006032s 2025-08-13T20:00:51.984761369+00:00 stdout F [INFO] 10.217.0.60:55819 - 1373 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012547668s 2025-08-13T20:00:51.999348845+00:00 stdout F [INFO] 10.217.0.60:44642 - 63594 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004095017s 2025-08-13T20:00:51.999725875+00:00 stdout F [INFO] 10.217.0.60:53602 - 61788 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004738385s 2025-08-13T20:00:52.015235118+00:00 stdout F [INFO] 10.217.0.60:52813 - 36985 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001107691s 2025-08-13T20:00:52.015343841+00:00 stdout F [INFO] 10.217.0.60:55494 - 9494 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001735579s 2025-08-13T20:00:52.067732165+00:00 stdout F [INFO] 10.217.0.60:56886 - 32543 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002148031s 2025-08-13T20:00:52.068143936+00:00 stdout F [INFO] 10.217.0.60:52460 - 57024 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002538542s 2025-08-13T20:00:52.108533718+00:00 stdout F [INFO] 10.217.0.60:51310 - 53415 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016064728s 2025-08-13T20:00:52.109018332+00:00 stdout F [INFO] 10.217.0.60:59350 - 62708 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.016686755s 2025-08-13T20:00:52.174940892+00:00 stdout F [INFO] 10.217.0.60:39079 - 60775 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005555518s 2025-08-13T20:00:52.183123615+00:00 stdout F [INFO] 10.217.0.60:57759 - 31828 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.013467164s 2025-08-13T20:00:52.184567376+00:00 stdout F [INFO] 10.217.0.60:58451 - 2791 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001231495s 2025-08-13T20:00:52.186513841+00:00 stdout F [INFO] 10.217.0.60:32892 - 3317 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002866262s 2025-08-13T20:00:52.252289647+00:00 stdout F [INFO] 10.217.0.60:49551 - 64846 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001023919s 2025-08-13T20:00:52.252707309+00:00 stdout F [INFO] 10.217.0.60:49202 - 12073 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00142371s 2025-08-13T20:00:52.342704175+00:00 stdout F [INFO] 10.217.0.60:43192 - 32993 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.016870411s 2025-08-13T20:00:52.342704175+00:00 stdout F [INFO] 10.217.0.60:38616 - 39260 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.017765217s 2025-08-13T20:00:52.343319953+00:00 stdout F [INFO] 10.217.0.60:32855 - 42141 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.034283607s 2025-08-13T20:00:52.343319953+00:00 stdout F [INFO] 10.217.0.60:58760 - 51997 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.034872634s 2025-08-13T20:00:52.363666863+00:00 stdout F [INFO] 10.217.0.60:34342 - 39912 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002230364s 2025-08-13T20:00:52.364055264+00:00 stdout F [INFO] 10.217.0.60:40112 - 47926 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003496609s 2025-08-13T20:00:52.368147640+00:00 stdout F [INFO] 10.217.0.60:41418 - 27502 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000772232s 2025-08-13T20:00:52.368147640+00:00 stdout F [INFO] 10.217.0.60:54568 - 8178 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000862675s 2025-08-13T20:00:52.417474187+00:00 stdout F [INFO] 10.217.0.60:34401 - 41823 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001162503s 2025-08-13T20:00:52.417474187+00:00 stdout F [INFO] 10.217.0.60:54281 - 1628 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000844794s 2025-08-13T20:00:52.425915018+00:00 stdout F [INFO] 10.217.0.60:44410 - 34400 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001689318s 2025-08-13T20:00:52.425915018+00:00 stdout F [INFO] 10.217.0.60:41198 - 33445 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002418249s 2025-08-13T20:00:52.478947420+00:00 stdout F [INFO] 10.217.0.60:41674 - 36850 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.004340144s 2025-08-13T20:00:52.478947420+00:00 stdout F [INFO] 10.217.0.60:47806 - 36647 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.004133997s 2025-08-13T20:00:52.503407487+00:00 stdout F [INFO] 10.217.0.60:49901 - 36499 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00488833s 2025-08-13T20:00:52.503434608+00:00 stdout F [INFO] 10.217.0.60:56089 - 38963 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003719806s 2025-08-13T20:00:52.612027434+00:00 stdout F [INFO] 10.217.0.60:49947 - 17990 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002598664s 2025-08-13T20:00:52.612027434+00:00 stdout F [INFO] 10.217.0.60:42773 - 36346 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003213992s 2025-08-13T20:00:52.631294194+00:00 stdout F [INFO] 10.217.0.60:35440 - 16529 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007477253s 2025-08-13T20:00:52.631294194+00:00 stdout F [INFO] 10.217.0.60:38024 - 19973 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00633636s 2025-08-13T20:00:52.669176664+00:00 stdout F [INFO] 10.217.0.60:60705 - 31478 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001868913s 2025-08-13T20:00:52.686242461+00:00 stdout F [INFO] 10.217.0.60:40476 - 8782 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.017952332s 2025-08-13T20:00:52.700432845+00:00 stdout F [INFO] 10.217.0.60:38184 - 24580 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006019771s 2025-08-13T20:00:52.700533838+00:00 stdout F [INFO] 10.217.0.60:60768 - 46385 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006904217s 2025-08-13T20:00:52.708316640+00:00 stdout F [INFO] 10.217.0.60:41556 - 44564 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003196391s 2025-08-13T20:00:52.708471474+00:00 stdout F [INFO] 10.217.0.60:46736 - 43501 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003790588s 2025-08-13T20:00:52.753748055+00:00 stdout F [INFO] 10.217.0.60:40008 - 18248 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003482239s 2025-08-13T20:00:52.757647567+00:00 stdout F [INFO] 10.217.0.60:39281 - 40776 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007330079s 2025-08-13T20:00:52.793241162+00:00 stdout F [INFO] 10.217.0.57:45300 - 46926 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.014451182s 2025-08-13T20:00:52.793326134+00:00 stdout F [INFO] 10.217.0.57:35437 - 60936 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.008610236s 2025-08-13T20:00:52.804324888+00:00 stdout F [INFO] 10.217.0.60:50313 - 39461 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.025408345s 2025-08-13T20:00:52.804324888+00:00 stdout F [INFO] 10.217.0.60:34678 - 44426 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.025856147s 2025-08-13T20:00:52.860209981+00:00 stdout F [INFO] 10.217.0.60:51260 - 353 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003001965s 2025-08-13T20:00:52.861057635+00:00 stdout F [INFO] 10.217.0.60:47222 - 2537 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006474134s 2025-08-13T20:00:52.881133378+00:00 stdout F [INFO] 10.217.0.60:52091 - 13165 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000937946s 2025-08-13T20:00:52.881607451+00:00 stdout F [INFO] 10.217.0.60:54382 - 48231 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001847163s 2025-08-13T20:00:52.885077150+00:00 stdout F [INFO] 10.217.0.60:47517 - 41143 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003975963s 2025-08-13T20:00:52.901372165+00:00 stdout F [INFO] 10.217.0.60:49983 - 15446 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.020226507s 2025-08-13T20:00:52.945679118+00:00 stdout F [INFO] 10.217.0.60:57504 - 17108 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002316026s 2025-08-13T20:00:52.946193043+00:00 stdout F [INFO] 10.217.0.60:55459 - 34942 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003532381s 2025-08-13T20:00:52.965326068+00:00 stdout F [INFO] 10.217.0.60:52607 - 15292 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001042949s 2025-08-13T20:00:52.966944765+00:00 stdout F [INFO] 10.217.0.60:48379 - 16992 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002693877s 2025-08-13T20:00:52.968991643+00:00 stdout F [INFO] 10.217.0.60:34347 - 19971 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001370789s 2025-08-13T20:00:52.968991643+00:00 stdout F [INFO] 10.217.0.60:58227 - 27878 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002165991s 2025-08-13T20:00:53.008765737+00:00 stdout F [INFO] 10.217.0.60:50325 - 4808 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004454127s 2025-08-13T20:00:53.008765737+00:00 stdout F [INFO] 10.217.0.60:53513 - 15907 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004383155s 2025-08-13T20:00:53.042605572+00:00 stdout F [INFO] 10.217.0.60:50040 - 6391 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001593495s 2025-08-13T20:00:53.042711485+00:00 stdout F [INFO] 10.217.0.60:56947 - 35540 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001457651s 2025-08-13T20:00:53.098025152+00:00 stdout F [INFO] 10.217.0.60:47610 - 52365 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002047098s 2025-08-13T20:00:53.098424924+00:00 stdout F [INFO] 10.217.0.60:46903 - 19146 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.011319583s 2025-08-13T20:00:53.099042741+00:00 stdout F [INFO] 10.217.0.60:40196 - 10562 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0035066s 2025-08-13T20:00:53.099131124+00:00 stdout F [INFO] 10.217.0.60:40279 - 44420 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.012856286s 2025-08-13T20:00:53.201288987+00:00 stdout F [INFO] 10.217.0.60:44271 - 34314 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004942361s 2025-08-13T20:00:53.201394970+00:00 stdout F [INFO] 10.217.0.60:40528 - 65373 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003594543s 2025-08-13T20:00:53.201640297+00:00 stdout F [INFO] 10.217.0.60:45273 - 26892 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005355922s 2025-08-13T20:00:53.201729409+00:00 stdout F [INFO] 10.217.0.60:57186 - 19083 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00348992s 2025-08-13T20:00:53.252955060+00:00 stdout F [INFO] 10.217.0.60:54736 - 18202 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006892836s 2025-08-13T20:00:53.255122962+00:00 stdout F [INFO] 10.217.0.60:56460 - 34298 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006607799s 2025-08-13T20:00:53.286328582+00:00 stdout F [INFO] 10.217.0.60:46439 - 33364 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006952718s 2025-08-13T20:00:53.286657981+00:00 stdout F [INFO] 10.217.0.60:50478 - 32524 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006934677s 2025-08-13T20:00:53.289987276+00:00 stdout F [INFO] 10.217.0.60:47197 - 42349 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002953375s 2025-08-13T20:00:53.289987276+00:00 stdout F [INFO] 10.217.0.60:55550 - 14391 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003505219s 2025-08-13T20:00:53.340759034+00:00 stdout F [INFO] 10.217.0.60:36130 - 51725 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005857947s 2025-08-13T20:00:53.340759034+00:00 stdout F [INFO] 10.217.0.60:47471 - 34726 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005486817s 2025-08-13T20:00:53.411344546+00:00 stdout F [INFO] 10.217.0.60:54552 - 11612 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.023914062s 2025-08-13T20:00:53.411344546+00:00 stdout F [INFO] 10.217.0.60:43437 - 29520 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.025134666s 2025-08-13T20:00:53.432280653+00:00 stdout F [INFO] 10.217.0.60:38044 - 13694 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008570454s 2025-08-13T20:00:53.433227400+00:00 stdout F [INFO] 10.217.0.60:54568 - 64079 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007869224s 2025-08-13T20:00:53.471951515+00:00 stdout F [INFO] 10.217.0.60:53072 - 41075 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002703447s 2025-08-13T20:00:53.471951515+00:00 stdout F [INFO] 10.217.0.60:49580 - 61579 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002508161s 2025-08-13T20:00:53.516481754+00:00 stdout F [INFO] 10.217.0.60:50918 - 35890 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003679485s 2025-08-13T20:00:53.516481754+00:00 stdout F [INFO] 10.217.0.60:43105 - 22887 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003932652s 2025-08-13T20:00:53.516594207+00:00 stdout F [INFO] 10.217.0.60:33168 - 53941 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.023277554s 2025-08-13T20:00:53.535903078+00:00 stdout F [INFO] 10.217.0.60:42750 - 34289 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.038139318s 2025-08-13T20:00:53.570983318+00:00 stdout F [INFO] 10.217.0.60:41931 - 41111 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003255393s 2025-08-13T20:00:53.573029197+00:00 stdout F [INFO] 10.217.0.60:40847 - 48976 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006612799s 2025-08-13T20:00:53.597868635+00:00 stdout F [INFO] 10.217.0.60:59360 - 2136 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005416224s 2025-08-13T20:00:53.597868635+00:00 stdout F [INFO] 10.217.0.60:52461 - 27104 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001841083s 2025-08-13T20:00:53.601450557+00:00 stdout F [INFO] 10.217.0.60:54454 - 54017 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006414802s 2025-08-13T20:00:53.601450557+00:00 stdout F [INFO] 10.217.0.60:37130 - 13113 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001838232s 2025-08-13T20:00:53.665990417+00:00 stdout F [INFO] 10.217.0.60:46913 - 29149 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006355062s 2025-08-13T20:00:53.678985018+00:00 stdout F [INFO] 10.217.0.60:60979 - 7034 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002641435s 2025-08-13T20:00:53.679034359+00:00 stdout F [INFO] 10.217.0.60:42536 - 55728 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002853981s 2025-08-13T20:00:53.692268767+00:00 stdout F [INFO] 10.217.0.60:59147 - 6840 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.020945667s 2025-08-13T20:00:53.759479803+00:00 stdout F [INFO] 10.217.0.60:45721 - 40846 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001359888s 2025-08-13T20:00:53.759479803+00:00 stdout F [INFO] 10.217.0.60:59305 - 1822 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002186972s 2025-08-13T20:00:53.766363369+00:00 stdout F [INFO] 10.217.0.60:35259 - 58487 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003414577s 2025-08-13T20:00:53.766543834+00:00 stdout F [INFO] 10.217.0.60:37103 - 25785 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004143848s 2025-08-13T20:00:53.848258384+00:00 stdout F [INFO] 10.217.0.60:47545 - 58113 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002532112s 2025-08-13T20:00:53.848258384+00:00 stdout F [INFO] 10.217.0.60:42233 - 41616 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004171259s 2025-08-13T20:00:53.864414455+00:00 stdout F [INFO] 10.217.0.60:51098 - 41843 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004119977s 2025-08-13T20:00:53.864721734+00:00 stdout F [INFO] 10.217.0.60:35925 - 20271 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004664763s 2025-08-13T20:00:53.981440612+00:00 stdout F [INFO] 10.217.0.60:55209 - 35425 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003106589s 2025-08-13T20:00:53.984768667+00:00 stdout F [INFO] 10.217.0.60:49145 - 61100 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003199231s 2025-08-13T20:00:54.006162107+00:00 stdout F [INFO] 10.217.0.60:36995 - 2348 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001911485s 2025-08-13T20:00:54.006666721+00:00 stdout F [INFO] 10.217.0.60:44304 - 50706 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002065359s 2025-08-13T20:00:54.088313588+00:00 stdout F [INFO] 10.217.0.60:58116 - 63292 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.017787097s 2025-08-13T20:00:54.088313588+00:00 stdout F [INFO] 10.217.0.60:56494 - 34065 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.018799796s 2025-08-13T20:00:54.088313588+00:00 stdout F [INFO] 10.217.0.60:40690 - 33753 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.018499167s 2025-08-13T20:00:54.091014255+00:00 stdout F [INFO] 10.217.0.60:52303 - 57777 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.021306917s 2025-08-13T20:00:54.177955294+00:00 stdout F [INFO] 10.217.0.60:45987 - 37479 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002613084s 2025-08-13T20:00:54.177955294+00:00 stdout F [INFO] 10.217.0.60:58306 - 8356 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003806909s 2025-08-13T20:00:54.190158042+00:00 stdout F [INFO] 10.217.0.60:49703 - 48052 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005040684s 2025-08-13T20:00:54.190158042+00:00 stdout F [INFO] 10.217.0.60:47515 - 57080 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005875278s 2025-08-13T20:00:54.210693428+00:00 stdout F [INFO] 10.217.0.60:53317 - 61694 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001216725s 2025-08-13T20:00:54.211213623+00:00 stdout F [INFO] 10.217.0.60:36938 - 24375 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001097501s 2025-08-13T20:00:54.261073224+00:00 stdout F [INFO] 10.217.0.60:48205 - 50646 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001469772s 2025-08-13T20:00:54.261451375+00:00 stdout F [INFO] 10.217.0.60:48539 - 55836 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005469026s 2025-08-13T20:00:54.346313265+00:00 stdout F [INFO] 10.217.0.60:54730 - 32651 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.045075945s 2025-08-13T20:00:54.346313265+00:00 stdout F [INFO] 10.217.0.60:43637 - 9908 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.046208437s 2025-08-13T20:00:54.358372139+00:00 stdout F [INFO] 10.217.0.60:56901 - 19987 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006063023s 2025-08-13T20:00:54.358429720+00:00 stdout F [INFO] 10.217.0.60:39734 - 36586 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008345908s 2025-08-13T20:00:54.413487130+00:00 stdout F [INFO] 10.217.0.60:35374 - 37770 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003413067s 2025-08-13T20:00:54.414041696+00:00 stdout F [INFO] 10.217.0.60:32861 - 23169 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003239342s 2025-08-13T20:00:54.444068402+00:00 stdout F [INFO] 10.217.0.60:49969 - 9790 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003211791s 2025-08-13T20:00:54.444068402+00:00 stdout F [INFO] 10.217.0.60:48352 - 43128 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00142177s 2025-08-13T20:00:54.496125126+00:00 stdout F [INFO] 10.217.0.60:50646 - 6041 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001598405s 2025-08-13T20:00:54.500601564+00:00 stdout F [INFO] 10.217.0.60:54137 - 5363 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005470266s 2025-08-13T20:00:54.562235942+00:00 stdout F [INFO] 10.217.0.60:34907 - 2227 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009674926s 2025-08-13T20:00:54.562235942+00:00 stdout F [INFO] 10.217.0.60:56654 - 46794 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.009203472s 2025-08-13T20:00:54.605821094+00:00 stdout F [INFO] 10.217.0.60:38008 - 56395 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006390213s 2025-08-13T20:00:54.605821094+00:00 stdout F [INFO] 10.217.0.60:60224 - 57243 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006918808s 2025-08-13T20:00:54.641390439+00:00 stdout F [INFO] 10.217.0.60:35101 - 21936 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00350588s 2025-08-13T20:00:54.642122009+00:00 stdout F [INFO] 10.217.0.60:36606 - 13769 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00490198s 2025-08-13T20:00:54.701656737+00:00 stdout F [INFO] 10.217.0.60:60974 - 25606 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00630711s 2025-08-13T20:00:54.717990603+00:00 stdout F [INFO] 10.217.0.60:56711 - 1491 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.021716869s 2025-08-13T20:00:54.769018978+00:00 stdout F [INFO] 10.217.0.60:56705 - 50944 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002386768s 2025-08-13T20:00:54.800321270+00:00 stdout F [INFO] 10.217.0.60:33894 - 34414 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.020661909s 2025-08-13T20:00:54.821698250+00:00 stdout F [INFO] 10.217.0.60:46811 - 16255 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001141063s 2025-08-13T20:00:54.821698250+00:00 stdout F [INFO] 10.217.0.60:41651 - 35052 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002192633s 2025-08-13T20:00:56.237173901+00:00 stdout F [INFO] 10.217.0.28:57650 - 26271 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006250578s 2025-08-13T20:00:56.237173901+00:00 stdout F [INFO] 10.217.0.28:36911 - 23347 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006025012s 2025-08-13T20:00:56.426192821+00:00 stdout F [INFO] 10.217.0.64:60386 - 16371 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002096029s 2025-08-13T20:00:56.426755407+00:00 stdout F [INFO] 10.217.0.64:37132 - 17691 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002402099s 2025-08-13T20:00:56.429936718+00:00 stdout F [INFO] 10.217.0.64:56566 - 24357 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001352609s 2025-08-13T20:00:56.429936718+00:00 stdout F [INFO] 10.217.0.64:56185 - 8747 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001461742s 2025-08-13T20:00:57.764385539+00:00 stdout F [INFO] 10.217.0.57:55995 - 61941 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002156022s 2025-08-13T20:00:57.764487992+00:00 stdout F [INFO] 10.217.0.57:32930 - 18533 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003038847s 2025-08-13T20:01:01.217889967+00:00 stdout F [INFO] 10.217.0.28:60933 - 64320 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00173895s 2025-08-13T20:01:01.229498738+00:00 stdout F [INFO] 10.217.0.28:45794 - 37899 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004856648s 2025-08-13T20:01:02.786206288+00:00 stdout F [INFO] 10.217.0.57:40045 - 52840 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004254891s 2025-08-13T20:01:02.786714652+00:00 stdout F [INFO] 10.217.0.57:37315 - 53363 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004645843s 2025-08-13T20:01:06.248307858+00:00 stdout F [INFO] 10.217.0.28:32940 - 64022 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010549551s 2025-08-13T20:01:06.248307858+00:00 stdout F [INFO] 10.217.0.28:39158 - 51585 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010443988s 2025-08-13T20:01:07.803260386+00:00 stdout F [INFO] 10.217.0.57:55745 - 47377 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.008387099s 2025-08-13T20:01:07.803260386+00:00 stdout F [INFO] 10.217.0.57:52414 - 56772 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001229835s 2025-08-13T20:01:07.876341910+00:00 stdout F [INFO] 10.217.0.62:36493 - 37658 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006798604s 2025-08-13T20:01:07.928674602+00:00 stdout F [INFO] 10.217.0.62:47351 - 5566 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.054731061s 2025-08-13T20:01:08.958312120+00:00 stdout F [INFO] 10.217.0.62:45058 - 48452 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009701407s 2025-08-13T20:01:08.958708771+00:00 stdout F [INFO] 10.217.0.62:55693 - 35669 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010232312s 2025-08-13T20:01:09.463479824+00:00 stdout F [INFO] 10.217.0.62:42400 - 55577 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003704086s 2025-08-13T20:01:09.463555486+00:00 stdout F [INFO] 10.217.0.62:37144 - 52861 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003480729s 2025-08-13T20:01:09.791534678+00:00 stdout F [INFO] 10.217.0.62:35264 - 59363 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.008885523s 2025-08-13T20:01:09.792368432+00:00 stdout F [INFO] 10.217.0.62:51739 - 64573 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.019929048s 2025-08-13T20:01:09.883410818+00:00 stdout F [INFO] 10.217.0.62:33611 - 14300 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003421418s 2025-08-13T20:01:09.883410818+00:00 stdout F [INFO] 10.217.0.62:43024 - 10224 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004042806s 2025-08-13T20:01:10.148001973+00:00 stdout F [INFO] 10.217.0.62:46260 - 39301 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005178448s 2025-08-13T20:01:10.153873390+00:00 stdout F [INFO] 10.217.0.62:57629 - 50837 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.011846428s 2025-08-13T20:01:10.366079851+00:00 stdout F [INFO] 10.217.0.62:53398 - 6757 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.010000395s 2025-08-13T20:01:10.368951013+00:00 stdout F [INFO] 10.217.0.62:34726 - 49492 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.009213843s 2025-08-13T20:01:10.599053724+00:00 stdout F [INFO] 10.217.0.62:38447 - 31498 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006358471s 2025-08-13T20:01:10.602083821+00:00 stdout F [INFO] 10.217.0.62:35021 - 53431 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.028566614s 2025-08-13T20:01:11.189276624+00:00 stdout F [INFO] 10.217.0.28:57278 - 49657 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000955917s 2025-08-13T20:01:11.190236392+00:00 stdout F [INFO] 10.217.0.28:51161 - 41838 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001605305s 2025-08-13T20:01:12.737583602+00:00 stdout F [INFO] 10.217.0.57:38545 - 42595 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000573556s 2025-08-13T20:01:12.738199670+00:00 stdout F [INFO] 10.217.0.57:37428 - 59560 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000958997s 2025-08-13T20:01:16.220937836+00:00 stdout F [INFO] 10.217.0.28:53038 - 50311 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004677073s 2025-08-13T20:01:16.272384153+00:00 stdout F [INFO] 10.217.0.28:41185 - 25277 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.059117746s 2025-08-13T20:01:17.742430731+00:00 stdout F [INFO] 10.217.0.57:55347 - 17398 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001211184s 2025-08-13T20:01:17.743296655+00:00 stdout F [INFO] 10.217.0.57:50425 - 39022 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001655518s 2025-08-13T20:01:21.215652285+00:00 stdout F [INFO] 10.217.0.28:39681 - 25914 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001337798s 2025-08-13T20:01:21.218189227+00:00 stdout F [INFO] 10.217.0.28:50437 - 47565 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00278562s 2025-08-13T20:01:22.758813675+00:00 stdout F [INFO] 10.217.0.57:51254 - 6258 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00737564s 2025-08-13T20:01:22.759524666+00:00 stdout F [INFO] 10.217.0.57:39283 - 43233 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00418449s 2025-08-13T20:01:22.875226975+00:00 stdout F [INFO] 10.217.0.8:35450 - 460 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002742558s 2025-08-13T20:01:22.899969020+00:00 stdout F [INFO] 10.217.0.8:50023 - 58252 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.012143567s 2025-08-13T20:01:22.903084419+00:00 stdout F [INFO] 10.217.0.8:37803 - 13193 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001617146s 2025-08-13T20:01:22.903387998+00:00 stdout F [INFO] 10.217.0.8:33487 - 42396 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001981667s 2025-08-13T20:01:22.948433202+00:00 stdout F [INFO] 10.217.0.8:60399 - 35132 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000913156s 2025-08-13T20:01:22.948694070+00:00 stdout F [INFO] 10.217.0.8:58767 - 6469 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00105186s 2025-08-13T20:01:22.955416611+00:00 stdout F [INFO] 10.217.0.8:35574 - 36598 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.0017425s 2025-08-13T20:01:22.956320387+00:00 stdout F [INFO] 10.217.0.8:52173 - 4616 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00209741s 2025-08-13T20:01:22.983271905+00:00 stdout F [INFO] 10.217.0.8:42709 - 34679 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000729141s 2025-08-13T20:01:22.983556464+00:00 stdout F [INFO] 10.217.0.8:37180 - 59501 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000854214s 2025-08-13T20:01:22.985588542+00:00 stdout F [INFO] 10.217.0.8:46898 - 18051 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00067822s 2025-08-13T20:01:22.986040514+00:00 stdout F [INFO] 10.217.0.8:55598 - 646 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000887455s 2025-08-13T20:01:23.019215770+00:00 stdout F [INFO] 10.217.0.8:43776 - 16490 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000885795s 2025-08-13T20:01:23.019563030+00:00 stdout F [INFO] 10.217.0.8:55377 - 13705 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000998358s 2025-08-13T20:01:23.021052543+00:00 stdout F [INFO] 10.217.0.8:32974 - 5627 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000607378s 2025-08-13T20:01:23.021367562+00:00 stdout F [INFO] 10.217.0.8:46715 - 588 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000855444s 2025-08-13T20:01:23.072998254+00:00 stdout F [INFO] 10.217.0.8:46363 - 14241 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000838344s 2025-08-13T20:01:23.073300263+00:00 stdout F [INFO] 10.217.0.8:39978 - 64267 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000959147s 2025-08-13T20:01:23.075549817+00:00 stdout F [INFO] 10.217.0.8:50302 - 46004 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00036754s 2025-08-13T20:01:23.080598711+00:00 stdout F [INFO] 10.217.0.8:36433 - 63422 "A IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001915785s 2025-08-13T20:01:23.174177349+00:00 stdout F [INFO] 10.217.0.8:44290 - 35137 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001082941s 2025-08-13T20:01:23.174453997+00:00 stdout F [INFO] 10.217.0.8:58110 - 59594 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001618486s 2025-08-13T20:01:23.182685632+00:00 stdout F [INFO] 10.217.0.8:41613 - 40136 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.0024516s 2025-08-13T20:01:23.183076613+00:00 stdout F [INFO] 10.217.0.8:49877 - 49171 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.003004526s 2025-08-13T20:01:23.356737254+00:00 stdout F [INFO] 10.217.0.8:44016 - 20804 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000900336s 2025-08-13T20:01:23.356737254+00:00 stdout F [INFO] 10.217.0.8:50779 - 40285 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000814863s 2025-08-13T20:01:23.362991283+00:00 stdout F [INFO] 10.217.0.8:58191 - 8238 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.003906692s 2025-08-13T20:01:23.363173828+00:00 stdout F [INFO] 10.217.0.8:34112 - 1811 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.004521639s 2025-08-13T20:01:23.697180232+00:00 stdout F [INFO] 10.217.0.8:45455 - 40717 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.005744714s 2025-08-13T20:01:23.697623495+00:00 stdout F [INFO] 10.217.0.8:36955 - 33064 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.006594509s 2025-08-13T20:01:23.710523722+00:00 stdout F [INFO] 10.217.0.8:53645 - 3233 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.011739265s 2025-08-13T20:01:23.711900992+00:00 stdout F [INFO] 10.217.0.8:59710 - 44829 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.012081614s 2025-08-13T20:01:24.356571364+00:00 stdout F [INFO] 10.217.0.8:60593 - 48150 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000811133s 2025-08-13T20:01:24.356620205+00:00 stdout F [INFO] 10.217.0.8:55236 - 41544 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000640498s 2025-08-13T20:01:24.357769188+00:00 stdout F [INFO] 10.217.0.8:41541 - 33055 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000577307s 2025-08-13T20:01:24.358411346+00:00 stdout F [INFO] 10.217.0.8:47550 - 49968 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000632798s 2025-08-13T20:01:25.676466559+00:00 stdout F [INFO] 10.217.0.8:59216 - 3385 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.006457944s 2025-08-13T20:01:25.676466559+00:00 stdout F [INFO] 10.217.0.8:41508 - 20230 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.007326559s 2025-08-13T20:01:25.677325214+00:00 stdout F [INFO] 10.217.0.8:35700 - 28311 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00069899s 2025-08-13T20:01:25.678744934+00:00 stdout F [INFO] 10.217.0.8:57821 - 32512 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000834894s 2025-08-13T20:01:26.195177770+00:00 stdout F [INFO] 10.217.0.28:45247 - 60056 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001457361s 2025-08-13T20:01:26.197937238+00:00 stdout F [INFO] 10.217.0.28:36922 - 18578 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00243877s 2025-08-13T20:01:27.747230374+00:00 stdout F [INFO] 10.217.0.57:42648 - 41427 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003019136s 2025-08-13T20:01:27.747230374+00:00 stdout F [INFO] 10.217.0.57:45322 - 51505 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003414618s 2025-08-13T20:01:28.252961704+00:00 stdout F [INFO] 10.217.0.8:38267 - 2620 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001890944s 2025-08-13T20:01:28.253402637+00:00 stdout F [INFO] 10.217.0.8:33474 - 20943 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002586484s 2025-08-13T20:01:28.257625927+00:00 stdout F [INFO] 10.217.0.8:46050 - 9905 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002819021s 2025-08-13T20:01:28.258326047+00:00 stdout F [INFO] 10.217.0.8:52537 - 2133 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000777933s 2025-08-13T20:01:31.226187453+00:00 stdout F [INFO] 10.217.0.28:54125 - 15824 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010559841s 2025-08-13T20:01:31.226275255+00:00 stdout F [INFO] 10.217.0.28:34218 - 4235 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.010554371s 2025-08-13T20:01:32.746987848+00:00 stdout F [INFO] 10.217.0.57:48872 - 60983 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001016209s 2025-08-13T20:01:32.747463922+00:00 stdout F [INFO] 10.217.0.57:44677 - 14267 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001244225s 2025-08-13T20:01:33.390184889+00:00 stdout F [INFO] 10.217.0.8:51622 - 5984 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001513223s 2025-08-13T20:01:33.390184889+00:00 stdout F [INFO] 10.217.0.8:55023 - 43646 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001353799s 2025-08-13T20:01:33.391758223+00:00 stdout F [INFO] 10.217.0.8:36157 - 4425 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000748261s 2025-08-13T20:01:33.427944867+00:00 stdout F [INFO] 10.217.0.8:50839 - 60785 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.036577225s 2025-08-13T20:01:36.185028201+00:00 stdout F [INFO] 10.217.0.28:53977 - 19614 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000873425s 2025-08-13T20:01:36.185510115+00:00 stdout F [INFO] 10.217.0.28:48739 - 23520 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000631698s 2025-08-13T20:01:37.742700716+00:00 stdout F [INFO] 10.217.0.57:37766 - 57355 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001464932s 2025-08-13T20:01:37.742909242+00:00 stdout F [INFO] 10.217.0.57:45806 - 61763 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001585065s 2025-08-13T20:01:42.737094876+00:00 stdout F [INFO] 10.217.0.57:51744 - 57853 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000895476s 2025-08-13T20:01:42.738118555+00:00 stdout F [INFO] 10.217.0.57:50810 - 5972 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001739799s 2025-08-13T20:01:43.701713921+00:00 stdout F [INFO] 10.217.0.8:36382 - 38398 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001006558s 2025-08-13T20:01:43.701753482+00:00 stdout F [INFO] 10.217.0.8:47387 - 17641 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001006888s 2025-08-13T20:01:43.704013746+00:00 stdout F [INFO] 10.217.0.8:54898 - 64653 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000649298s 2025-08-13T20:01:43.704736497+00:00 stdout F [INFO] 10.217.0.8:32768 - 24650 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001379599s 2025-08-13T20:01:47.743368384+00:00 stdout F [INFO] 10.217.0.57:58791 - 31751 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001133693s 2025-08-13T20:01:47.744028232+00:00 stdout F [INFO] 10.217.0.57:60427 - 36503 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001844383s 2025-08-13T20:01:52.740083758+00:00 stdout F [INFO] 10.217.0.57:55965 - 27277 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001546664s 2025-08-13T20:01:52.740083758+00:00 stdout F [INFO] 10.217.0.57:43501 - 22555 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001841002s 2025-08-13T20:01:56.381289243+00:00 stdout F [INFO] 10.217.0.64:33868 - 27378 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001576005s 2025-08-13T20:01:56.381949141+00:00 stdout F [INFO] 10.217.0.64:43107 - 15688 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000985269s 2025-08-13T20:01:56.385583775+00:00 stdout F [INFO] 10.217.0.64:38774 - 34772 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000796203s 2025-08-13T20:01:56.386594514+00:00 stdout F [INFO] 10.217.0.64:51409 - 59863 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001705519s 2025-08-13T20:01:57.740516440+00:00 stdout F [INFO] 10.217.0.57:40457 - 8246 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000826874s 2025-08-13T20:01:57.740968553+00:00 stdout F [INFO] 10.217.0.57:41951 - 15134 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000784732s 2025-08-13T20:02:02.752936391+00:00 stdout F [INFO] 10.217.0.57:45491 - 28069 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00174288s 2025-08-13T20:02:02.752936391+00:00 stdout F [INFO] 10.217.0.57:51623 - 20798 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001608546s 2025-08-13T20:02:04.196547714+00:00 stdout F [INFO] 10.217.0.8:53185 - 3699 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001116902s 2025-08-13T20:02:04.196872024+00:00 stdout F [INFO] 10.217.0.8:41818 - 23258 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001150073s 2025-08-13T20:02:04.198577372+00:00 stdout F [INFO] 10.217.0.8:59757 - 57451 "A IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00072344s 2025-08-13T20:02:04.198696666+00:00 stdout F [INFO] 10.217.0.8:40174 - 24364 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00069757s 2025-08-13T20:02:07.740393995+00:00 stdout F [INFO] 10.217.0.57:44423 - 57160 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001285037s 2025-08-13T20:02:07.740930810+00:00 stdout F [INFO] 10.217.0.57:37229 - 23125 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001233406s 2025-08-13T20:02:12.737589534+00:00 stdout F [INFO] 10.217.0.57:60917 - 6095 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001498463s 2025-08-13T20:02:12.740934829+00:00 stdout F [INFO] 10.217.0.57:46568 - 39709 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005422165s 2025-08-13T20:02:17.741689966+00:00 stdout F [INFO] 10.217.0.57:39448 - 45084 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001421561s 2025-08-13T20:02:17.742317324+00:00 stdout F [INFO] 10.217.0.57:58611 - 38262 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00175589s 2025-08-13T20:02:22.742260017+00:00 stdout F [INFO] 10.217.0.57:37594 - 46202 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001674168s 2025-08-13T20:02:22.743136312+00:00 stdout F [INFO] 10.217.0.57:46178 - 3725 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002001987s 2025-08-13T20:02:22.850376041+00:00 stdout F [INFO] 10.217.0.8:37535 - 5907 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000958718s 2025-08-13T20:02:22.850376041+00:00 stdout F [INFO] 10.217.0.8:57939 - 54092 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00138282s 2025-08-13T20:02:22.851987297+00:00 stdout F [INFO] 10.217.0.8:47235 - 23930 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000638688s 2025-08-13T20:02:22.852372948+00:00 stdout F [INFO] 10.217.0.8:35321 - 3816 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000816614s 2025-08-13T20:02:45.175333378+00:00 stdout F [INFO] 10.217.0.8:44330 - 47496 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002702667s 2025-08-13T20:02:45.175333378+00:00 stdout F [INFO] 10.217.0.8:53638 - 28468 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003530061s 2025-08-13T20:02:45.178408116+00:00 stdout F [INFO] 10.217.0.8:34486 - 6540 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001221594s 2025-08-13T20:02:45.181027171+00:00 stdout F [INFO] 10.217.0.8:50562 - 34559 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.003549701s 2025-08-13T20:02:56.361633502+00:00 stdout F [INFO] 10.217.0.64:54018 - 28556 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002588663s 2025-08-13T20:02:56.361633502+00:00 stdout F [INFO] 10.217.0.64:52974 - 20654 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002253644s 2025-08-13T20:02:56.361737164+00:00 stdout F [INFO] 10.217.0.64:55753 - 41252 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002745678s 2025-08-13T20:02:56.361757175+00:00 stdout F [INFO] 10.217.0.64:57017 - 50212 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002530172s 2025-08-13T20:03:22.854718995+00:00 stdout F [INFO] 10.217.0.8:48056 - 26529 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.005041034s 2025-08-13T20:03:22.855475337+00:00 stdout F [INFO] 10.217.0.8:36036 - 55050 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00594783s 2025-08-13T20:03:22.858104102+00:00 stdout F [INFO] 10.217.0.8:35111 - 28439 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001604606s 2025-08-13T20:03:22.858457422+00:00 stdout F [INFO] 10.217.0.8:38433 - 51125 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00106096s 2025-08-13T20:03:59.364398927+00:00 stdout F [INFO] 10.217.0.64:44595 - 12598 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 2.98031181s 2025-08-13T20:03:59.364398927+00:00 stdout F [INFO] 10.217.0.64:51254 - 44553 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 2.9808070239999997s 2025-08-13T20:03:59.364398927+00:00 stdout F [INFO] 10.217.0.64:47570 - 1814 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 2.9712284s 2025-08-13T20:03:59.364398927+00:00 stdout F [INFO] 10.217.0.64:38933 - 34536 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 2.971904639s 2025-08-13T20:04:22.855177511+00:00 stdout F [INFO] 10.217.0.8:36152 - 1487 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003761058s 2025-08-13T20:04:22.855177511+00:00 stdout F [INFO] 10.217.0.8:57798 - 5862 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00453144s 2025-08-13T20:04:22.859450584+00:00 stdout F [INFO] 10.217.0.8:51299 - 43533 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001581495s 2025-08-13T20:04:22.859450584+00:00 stdout F [INFO] 10.217.0.8:33424 - 53620 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001211915s 2025-08-13T20:04:56.363900410+00:00 stdout F [INFO] 10.217.0.64:53398 - 7438 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002246825s 2025-08-13T20:04:56.363900410+00:00 stdout F [INFO] 10.217.0.64:53707 - 34441 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001054011s 2025-08-13T20:04:56.363900410+00:00 stdout F [INFO] 10.217.0.64:35175 - 59215 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000854044s 2025-08-13T20:04:56.363900410+00:00 stdout F [INFO] 10.217.0.64:52207 - 4802 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002560213s 2025-08-13T20:05:22.834833023+00:00 stdout F [INFO] 10.217.0.57:56324 - 25639 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002765099s 2025-08-13T20:05:22.834833023+00:00 stdout F [INFO] 10.217.0.57:56473 - 6138 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003923752s 2025-08-13T20:05:22.874352144+00:00 stdout F [INFO] 10.217.0.8:57994 - 3404 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002601494s 2025-08-13T20:05:22.874352144+00:00 stdout F [INFO] 10.217.0.8:56186 - 51110 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003099199s 2025-08-13T20:05:22.877030471+00:00 stdout F [INFO] 10.217.0.8:39214 - 1864 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000801883s 2025-08-13T20:05:22.877568387+00:00 stdout F [INFO] 10.217.0.8:52797 - 23506 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000858964s 2025-08-13T20:05:29.056323092+00:00 stdout F [INFO] 10.217.0.8:42329 - 34614 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001238436s 2025-08-13T20:05:29.056323092+00:00 stdout F [INFO] 10.217.0.8:34828 - 37668 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002666427s 2025-08-13T20:05:29.056323092+00:00 stdout F [INFO] 10.217.0.8:50135 - 64401 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002043728s 2025-08-13T20:05:29.056323092+00:00 stdout F [INFO] 10.217.0.8:47646 - 27514 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001369899s 2025-08-13T20:05:30.484629752+00:00 stdout F [INFO] 10.217.0.73:35591 - 37435 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001129143s 2025-08-13T20:05:30.485074535+00:00 stdout F [INFO] 10.217.0.73:46188 - 1945 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001558785s 2025-08-13T20:05:31.039418839+00:00 stdout F [INFO] 10.217.0.57:55199 - 28877 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002956185s 2025-08-13T20:05:31.067272687+00:00 stdout F [INFO] 10.217.0.57:34733 - 49370 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.032735907s 2025-08-13T20:05:56.380618832+00:00 stdout F [INFO] 10.217.0.64:50848 - 53238 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003411278s 2025-08-13T20:05:56.380618832+00:00 stdout F [INFO] 10.217.0.64:39821 - 33040 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003394617s 2025-08-13T20:05:56.380618832+00:00 stdout F [INFO] 10.217.0.64:37737 - 1404 "A IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003336776s 2025-08-13T20:05:56.380818118+00:00 stdout F [INFO] 10.217.0.64:35989 - 676 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003093488s 2025-08-13T20:05:57.260526589+00:00 stdout F [INFO] 10.217.0.19:55716 - 30971 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001340569s 2025-08-13T20:05:57.260526589+00:00 stdout F [INFO] 10.217.0.19:48212 - 28087 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001226735s 2025-08-13T20:06:02.817339144+00:00 stdout F [INFO] 10.217.0.19:42395 - 20503 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001299787s 2025-08-13T20:06:02.817339144+00:00 stdout F [INFO] 10.217.0.19:58590 - 35761 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001914884s 2025-08-13T20:06:10.928750040+00:00 stdout F [INFO] 10.217.0.19:40611 - 63299 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001083611s 2025-08-13T20:06:10.928750040+00:00 stdout F [INFO] 10.217.0.19:45243 - 63194 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001169673s 2025-08-13T20:06:13.331971681+00:00 stdout F [INFO] 10.217.0.19:60442 - 24966 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00211397s 2025-08-13T20:06:13.331971681+00:00 stdout F [INFO] 10.217.0.19:49512 - 53168 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002144402s 2025-08-13T20:06:19.649446948+00:00 stdout F [INFO] 10.217.0.19:33262 - 15714 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002213083s 2025-08-13T20:06:19.649446948+00:00 stdout F [INFO] 10.217.0.19:51577 - 30006 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002758709s 2025-08-13T20:06:22.858350418+00:00 stdout F [INFO] 10.217.0.8:54831 - 57999 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.0014117s 2025-08-13T20:06:22.858350418+00:00 stdout F [INFO] 10.217.0.8:51901 - 11298 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001923375s 2025-08-13T20:06:22.860114628+00:00 stdout F [INFO] 10.217.0.8:41737 - 24832 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000652439s 2025-08-13T20:06:22.860657794+00:00 stdout F [INFO] 10.217.0.8:51273 - 60742 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001147012s 2025-08-13T20:06:24.856002612+00:00 stdout F [INFO] 10.217.0.19:46264 - 24052 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000716081s 2025-08-13T20:06:24.856002612+00:00 stdout F [INFO] 10.217.0.19:38175 - 50312 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000926017s 2025-08-13T20:06:24.982110083+00:00 stdout F [INFO] 10.217.0.19:41025 - 12544 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00067708s 2025-08-13T20:06:24.983396940+00:00 stdout F [INFO] 10.217.0.19:48621 - 6324 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000912816s 2025-08-13T20:06:42.398374316+00:00 stdout F [INFO] 10.217.0.19:40544 - 31548 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003992194s 2025-08-13T20:06:42.398551511+00:00 stdout F [INFO] 10.217.0.19:47961 - 38383 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004599002s 2025-08-13T20:06:44.831326502+00:00 stdout F [INFO] 10.217.0.19:46233 - 36481 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000929797s 2025-08-13T20:06:44.831326502+00:00 stdout F [INFO] 10.217.0.19:52474 - 16656 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001438521s 2025-08-13T20:06:45.977531073+00:00 stdout F [INFO] 10.217.0.19:42773 - 41690 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000721611s 2025-08-13T20:06:45.977531073+00:00 stdout F [INFO] 10.217.0.19:48183 - 6599 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000650689s 2025-08-13T20:06:49.310538523+00:00 stdout F [INFO] 10.217.0.19:40594 - 10031 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001391659s 2025-08-13T20:06:49.310569354+00:00 stdout F [INFO] 10.217.0.19:50841 - 61241 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000930497s 2025-08-13T20:06:56.401041584+00:00 stdout F [INFO] 10.217.0.64:35784 - 58433 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005351913s 2025-08-13T20:06:56.401041584+00:00 stdout F [INFO] 10.217.0.64:56005 - 56756 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005147368s 2025-08-13T20:06:56.408528299+00:00 stdout F [INFO] 10.217.0.64:53375 - 63899 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001581075s 2025-08-13T20:06:56.409009593+00:00 stdout F [INFO] 10.217.0.64:60446 - 44331 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002788589s 2025-08-13T20:07:12.391819733+00:00 stdout F [INFO] 10.217.0.19:51663 - 10457 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003456329s 2025-08-13T20:07:12.392347668+00:00 stdout F [INFO] 10.217.0.19:36490 - 30092 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002132671s 2025-08-13T20:07:22.916941526+00:00 stdout F [INFO] 10.217.0.8:49926 - 26483 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004742656s 2025-08-13T20:07:22.919477009+00:00 stdout F [INFO] 10.217.0.8:44496 - 22655 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00626125s 2025-08-13T20:07:22.925974105+00:00 stdout F [INFO] 10.217.0.8:43816 - 62129 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001857553s 2025-08-13T20:07:22.926229433+00:00 stdout F [INFO] 10.217.0.8:38711 - 50307 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00279s 2025-08-13T20:07:42.446369421+00:00 stdout F [INFO] 10.217.0.19:35262 - 46720 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004152379s 2025-08-13T20:07:42.446369421+00:00 stdout F [INFO] 10.217.0.19:58765 - 30779 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004750666s 2025-08-13T20:07:55.895470739+00:00 stdout F [INFO] 10.217.0.19:55147 - 31451 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002516592s 2025-08-13T20:07:55.895470739+00:00 stdout F [INFO] 10.217.0.19:57149 - 51963 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001709219s 2025-08-13T20:07:56.365057553+00:00 stdout F [INFO] 10.217.0.64:57024 - 62161 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001544464s 2025-08-13T20:07:56.365057553+00:00 stdout F [INFO] 10.217.0.64:40706 - 49086 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001788221s 2025-08-13T20:07:56.365057553+00:00 stdout F [INFO] 10.217.0.64:33994 - 7565 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001635137s 2025-08-13T20:07:56.368462020+00:00 stdout F [INFO] 10.217.0.64:38183 - 47201 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00557274s 2025-08-13T20:07:58.927633963+00:00 stdout F [INFO] 10.217.0.19:35433 - 24636 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002575593s 2025-08-13T20:07:58.927706205+00:00 stdout F [INFO] 10.217.0.19:38853 - 41776 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003402368s 2025-08-13T20:07:59.051685730+00:00 stdout F [INFO] 10.217.0.19:39692 - 59438 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001105412s 2025-08-13T20:07:59.053430520+00:00 stdout F [INFO] 10.217.0.19:37686 - 44994 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001551274s 2025-08-13T20:08:02.186596361+00:00 stdout F [INFO] 10.217.0.45:41379 - 46721 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000921637s 2025-08-13T20:08:02.189161414+00:00 stdout F [INFO] 10.217.0.45:59167 - 54274 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001012559s 2025-08-13T20:08:03.090366973+00:00 stdout F [INFO] 10.217.0.82:44963 - 1211 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001140693s 2025-08-13T20:08:03.090366973+00:00 stdout F [INFO] 10.217.0.82:46007 - 19049 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000906806s 2025-08-13T20:08:03.829207705+00:00 stdout F [INFO] 10.217.0.82:50887 - 47041 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.00105427s 2025-08-13T20:08:03.829537805+00:00 stdout F [INFO] 10.217.0.82:49104 - 36628 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.001064701s 2025-08-13T20:08:11.673531949+00:00 stdout F [INFO] 10.217.0.62:53197 - 19130 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000967497s 2025-08-13T20:08:11.673531949+00:00 stdout F [INFO] 10.217.0.62:55102 - 58758 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001366819s 2025-08-13T20:08:11.863221898+00:00 stdout F [INFO] 10.217.0.62:46299 - 6790 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002732218s 2025-08-13T20:08:11.863307000+00:00 stdout F [INFO] 10.217.0.62:52890 - 25150 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002978285s 2025-08-13T20:08:12.460165583+00:00 stdout F [INFO] 10.217.0.19:55674 - 5069 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001345239s 2025-08-13T20:08:12.460165583+00:00 stdout F [INFO] 10.217.0.19:53687 - 56231 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001230745s 2025-08-13T20:08:16.460247929+00:00 stdout F [INFO] 10.217.0.19:33883 - 22269 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001597046s 2025-08-13T20:08:16.460247929+00:00 stdout F [INFO] 10.217.0.19:55451 - 9936 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001798961s 2025-08-13T20:08:17.093307339+00:00 stdout F [INFO] 10.217.0.74:47638 - 51886 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0014134s 2025-08-13T20:08:17.093307339+00:00 stdout F [INFO] 10.217.0.74:43722 - 53547 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002339637s 2025-08-13T20:08:17.093651509+00:00 stdout F [INFO] 10.217.0.74:47930 - 29842 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001831943s 2025-08-13T20:08:17.093928847+00:00 stdout F [INFO] 10.217.0.74:60430 - 2357 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002843112s 2025-08-13T20:08:17.242727213+00:00 stdout F [INFO] 10.217.0.74:37794 - 44338 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002469581s 2025-08-13T20:08:17.242727213+00:00 stdout F [INFO] 10.217.0.74:36380 - 24703 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002938054s 2025-08-13T20:08:17.242727213+00:00 stdout F [INFO] 10.217.0.74:54413 - 4769 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006599279s 2025-08-13T20:08:17.242727213+00:00 stdout F [INFO] 10.217.0.74:58887 - 6627 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007189946s 2025-08-13T20:08:17.313472882+00:00 stdout F [INFO] 10.217.0.74:34420 - 20953 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000654899s 2025-08-13T20:08:17.316035505+00:00 stdout F [INFO] 10.217.0.74:40265 - 13881 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000726571s 2025-08-13T20:08:17.316035505+00:00 stdout F [INFO] 10.217.0.74:60934 - 58201 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001991577s 2025-08-13T20:08:17.316035505+00:00 stdout F [INFO] 10.217.0.74:56818 - 41121 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001181694s 2025-08-13T20:08:17.353452958+00:00 stdout F [INFO] 10.217.0.74:59414 - 1737 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001528744s 2025-08-13T20:08:17.353452958+00:00 stdout F [INFO] 10.217.0.74:35775 - 32087 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001648827s 2025-08-13T20:08:17.374565733+00:00 stdout F [INFO] 10.217.0.74:51839 - 10472 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000966457s 2025-08-13T20:08:17.374623335+00:00 stdout F [INFO] 10.217.0.74:59412 - 56029 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000880815s 2025-08-13T20:08:17.379713471+00:00 stdout F [INFO] 10.217.0.74:59850 - 37284 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000467644s 2025-08-13T20:08:17.379713471+00:00 stdout F [INFO] 10.217.0.74:48117 - 9957 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000437173s 2025-08-13T20:08:17.414002834+00:00 stdout F [INFO] 10.217.0.74:53202 - 11313 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0007195s 2025-08-13T20:08:17.415289781+00:00 stdout F [INFO] 10.217.0.74:34969 - 34716 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001980387s 2025-08-13T20:08:17.438063694+00:00 stdout F [INFO] 10.217.0.74:44550 - 1766 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001649777s 2025-08-13T20:08:17.438179977+00:00 stdout F [INFO] 10.217.0.74:33996 - 10441 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002167302s 2025-08-13T20:08:17.506074713+00:00 stdout F [INFO] 10.217.0.74:36687 - 64047 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00101641s 2025-08-13T20:08:17.506356141+00:00 stdout F [INFO] 10.217.0.74:60766 - 9260 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001620316s 2025-08-13T20:08:17.511222911+00:00 stdout F [INFO] 10.217.0.74:58673 - 45520 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001016829s 2025-08-13T20:08:17.512197309+00:00 stdout F [INFO] 10.217.0.74:48516 - 11841 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001193584s 2025-08-13T20:08:17.512382915+00:00 stdout F [INFO] 10.217.0.74:43668 - 62296 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001465352s 2025-08-13T20:08:17.513393153+00:00 stdout F [INFO] 10.217.0.74:53257 - 38114 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001675628s 2025-08-13T20:08:17.531563284+00:00 stdout F [INFO] 10.217.0.74:32914 - 43822 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000721731s 2025-08-13T20:08:17.531563284+00:00 stdout F [INFO] 10.217.0.74:57557 - 60863 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000788833s 2025-08-13T20:08:17.571268303+00:00 stdout F [INFO] 10.217.0.74:34472 - 31021 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001095471s 2025-08-13T20:08:17.571371496+00:00 stdout F [INFO] 10.217.0.74:45301 - 50358 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000771102s 2025-08-13T20:08:17.576149263+00:00 stdout F [INFO] 10.217.0.74:38042 - 53403 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00139997s 2025-08-13T20:08:17.577165722+00:00 stdout F [INFO] 10.217.0.74:50457 - 1804 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000845834s 2025-08-13T20:08:17.587648382+00:00 stdout F [INFO] 10.217.0.74:33807 - 22539 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00101709s 2025-08-13T20:08:17.587648382+00:00 stdout F [INFO] 10.217.0.74:50078 - 57611 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000952327s 2025-08-13T20:08:17.596519817+00:00 stdout F [INFO] 10.217.0.74:51619 - 1271 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000502865s 2025-08-13T20:08:17.596723603+00:00 stdout F [INFO] 10.217.0.74:33030 - 31537 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000527365s 2025-08-13T20:08:17.633355123+00:00 stdout F [INFO] 10.217.0.74:49719 - 52641 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000757552s 2025-08-13T20:08:17.633409224+00:00 stdout F [INFO] 10.217.0.74:44615 - 29659 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000824994s 2025-08-13T20:08:17.659664257+00:00 stdout F [INFO] 10.217.0.74:60079 - 38710 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002021758s 2025-08-13T20:08:17.659664257+00:00 stdout F [INFO] 10.217.0.74:46372 - 29784 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002361298s 2025-08-13T20:08:17.697094000+00:00 stdout F [INFO] 10.217.0.74:37367 - 17356 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00175155s 2025-08-13T20:08:17.697094000+00:00 stdout F [INFO] 10.217.0.74:33453 - 15295 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001937126s 2025-08-13T20:08:17.720648796+00:00 stdout F [INFO] 10.217.0.74:56978 - 59221 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001009399s 2025-08-13T20:08:17.720648796+00:00 stdout F [INFO] 10.217.0.74:37924 - 45832 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001598885s 2025-08-13T20:08:17.757582705+00:00 stdout F [INFO] 10.217.0.74:50255 - 7778 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0010567s 2025-08-13T20:08:17.757663517+00:00 stdout F [INFO] 10.217.0.74:49792 - 21282 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001282237s 2025-08-13T20:08:17.780338727+00:00 stdout F [INFO] 10.217.0.74:56255 - 42807 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001229795s 2025-08-13T20:08:17.780338727+00:00 stdout F [INFO] 10.217.0.74:59643 - 60857 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001216784s 2025-08-13T20:08:17.796204772+00:00 stdout F [INFO] 10.217.0.74:39042 - 60622 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002146812s 2025-08-13T20:08:17.796295525+00:00 stdout F [INFO] 10.217.0.74:51432 - 60269 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002134271s 2025-08-13T20:08:17.819143930+00:00 stdout F [INFO] 10.217.0.74:33576 - 27950 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103722s 2025-08-13T20:08:17.819928862+00:00 stdout F [INFO] 10.217.0.74:46599 - 38481 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000400511s 2025-08-13T20:08:17.858210460+00:00 stdout F [INFO] 10.217.0.74:40244 - 6782 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000604517s 2025-08-13T20:08:17.858261901+00:00 stdout F [INFO] 10.217.0.74:34036 - 38337 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001010159s 2025-08-13T20:08:17.858490348+00:00 stdout F [INFO] 10.217.0.74:59286 - 12399 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000644209s 2025-08-13T20:08:17.858646702+00:00 stdout F [INFO] 10.217.0.74:59246 - 35961 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001014419s 2025-08-13T20:08:17.880153489+00:00 stdout F [INFO] 10.217.0.74:36022 - 45218 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000942657s 2025-08-13T20:08:17.880526860+00:00 stdout F [INFO] 10.217.0.74:34981 - 33902 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001340278s 2025-08-13T20:08:17.881396075+00:00 stdout F [INFO] 10.217.0.74:49413 - 27840 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000502115s 2025-08-13T20:08:17.883545775+00:00 stdout F [INFO] 10.217.0.74:57490 - 176 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002198163s 2025-08-13T20:08:17.923052838+00:00 stdout F [INFO] 10.217.0.74:42831 - 31660 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003493061s 2025-08-13T20:08:17.924053797+00:00 stdout F [INFO] 10.217.0.74:56272 - 13437 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003915012s 2025-08-13T20:08:17.930211313+00:00 stdout F [INFO] 10.217.0.74:51914 - 40356 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001364369s 2025-08-13T20:08:17.930663796+00:00 stdout F [INFO] 10.217.0.74:35796 - 40289 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001480022s 2025-08-13T20:08:17.939123259+00:00 stdout F [INFO] 10.217.0.74:48300 - 58931 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000874155s 2025-08-13T20:08:17.939484169+00:00 stdout F [INFO] 10.217.0.74:44967 - 4651 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001081602s 2025-08-13T20:08:17.957997040+00:00 stdout F [INFO] 10.217.0.74:45670 - 58180 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001100601s 2025-08-13T20:08:17.958301839+00:00 stdout F [INFO] 10.217.0.74:39047 - 29042 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000748512s 2025-08-13T20:08:18.001523288+00:00 stdout F [INFO] 10.217.0.74:60147 - 42545 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001336129s 2025-08-13T20:08:18.002163866+00:00 stdout F [INFO] 10.217.0.74:52066 - 45542 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001274117s 2025-08-13T20:08:18.003171135+00:00 stdout F [INFO] 10.217.0.74:44306 - 52127 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001024219s 2025-08-13T20:08:18.003308809+00:00 stdout F [INFO] 10.217.0.74:54199 - 52772 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001133893s 2025-08-13T20:08:18.042742490+00:00 stdout F [INFO] 10.217.0.74:49386 - 1226 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768522s 2025-08-13T20:08:18.042914435+00:00 stdout F [INFO] 10.217.0.74:47491 - 30793 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000759702s 2025-08-13T20:08:18.057213194+00:00 stdout F [INFO] 10.217.0.74:35466 - 46773 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001504533s 2025-08-13T20:08:18.058068579+00:00 stdout F [INFO] 10.217.0.74:47145 - 28223 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001951675s 2025-08-13T20:08:18.060862139+00:00 stdout F [INFO] 10.217.0.74:43862 - 43274 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000581036s 2025-08-13T20:08:18.060862139+00:00 stdout F [INFO] 10.217.0.74:53185 - 40376 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000613208s 2025-08-13T20:08:18.101125083+00:00 stdout F [INFO] 10.217.0.74:41513 - 8463 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105484s 2025-08-13T20:08:18.101125083+00:00 stdout F [INFO] 10.217.0.74:57968 - 54244 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001155973s 2025-08-13T20:08:18.119922342+00:00 stdout F [INFO] 10.217.0.74:40669 - 25614 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000916636s 2025-08-13T20:08:18.120042356+00:00 stdout F [INFO] 10.217.0.74:52622 - 18205 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000878835s 2025-08-13T20:08:18.155849493+00:00 stdout F [INFO] 10.217.0.74:59445 - 26368 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000769882s 2025-08-13T20:08:18.156263654+00:00 stdout F [INFO] 10.217.0.74:60747 - 31872 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104462s 2025-08-13T20:08:18.180989663+00:00 stdout F [INFO] 10.217.0.74:60997 - 17041 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000908776s 2025-08-13T20:08:18.180989663+00:00 stdout F [INFO] 10.217.0.74:46551 - 10659 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001160833s 2025-08-13T20:08:18.213105324+00:00 stdout F [INFO] 10.217.0.74:39244 - 55181 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070225s 2025-08-13T20:08:18.213105324+00:00 stdout F [INFO] 10.217.0.74:51836 - 55722 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000638858s 2025-08-13T20:08:18.239968064+00:00 stdout F [INFO] 10.217.0.74:52292 - 12385 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000729421s 2025-08-13T20:08:18.240025206+00:00 stdout F [INFO] 10.217.0.74:39250 - 28993 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000791312s 2025-08-13T20:08:18.271506388+00:00 stdout F [INFO] 10.217.0.74:58756 - 712 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000933617s 2025-08-13T20:08:18.271506388+00:00 stdout F [INFO] 10.217.0.74:46085 - 64615 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001323918s 2025-08-13T20:08:18.296870226+00:00 stdout F [INFO] 10.217.0.74:51242 - 53671 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001269976s 2025-08-13T20:08:18.296870226+00:00 stdout F [INFO] 10.217.0.74:38515 - 27078 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001712259s 2025-08-13T20:08:18.331032635+00:00 stdout F [INFO] 10.217.0.74:50758 - 41176 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000789273s 2025-08-13T20:08:18.331032635+00:00 stdout F [INFO] 10.217.0.74:35435 - 27337 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001215435s 2025-08-13T20:08:18.340077824+00:00 stdout F [INFO] 10.217.0.74:52631 - 10801 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000752912s 2025-08-13T20:08:18.340128446+00:00 stdout F [INFO] 10.217.0.74:49085 - 51573 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104725s 2025-08-13T20:08:18.357214316+00:00 stdout F [INFO] 10.217.0.74:36458 - 4215 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00101837s 2025-08-13T20:08:18.357267757+00:00 stdout F [INFO] 10.217.0.74:37756 - 51038 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001114382s 2025-08-13T20:08:18.388217535+00:00 stdout F [INFO] 10.217.0.74:55365 - 27500 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001113472s 2025-08-13T20:08:18.388380129+00:00 stdout F [INFO] 10.217.0.74:50138 - 31839 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001151443s 2025-08-13T20:08:18.395161234+00:00 stdout F [INFO] 10.217.0.74:34389 - 8169 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000879295s 2025-08-13T20:08:18.395161234+00:00 stdout F [INFO] 10.217.0.74:39526 - 1664 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000770822s 2025-08-13T20:08:18.421573191+00:00 stdout F [INFO] 10.217.0.74:42922 - 4581 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002536073s 2025-08-13T20:08:18.421699015+00:00 stdout F [INFO] 10.217.0.74:36744 - 20784 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003105509s 2025-08-13T20:08:18.445528128+00:00 stdout F [INFO] 10.217.0.74:40515 - 63429 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000746172s 2025-08-13T20:08:18.446329821+00:00 stdout F [INFO] 10.217.0.74:40343 - 7258 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000667309s 2025-08-13T20:08:18.454415793+00:00 stdout F [INFO] 10.217.0.74:43286 - 24798 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000833254s 2025-08-13T20:08:18.456846412+00:00 stdout F [INFO] 10.217.0.74:45836 - 44265 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002275765s 2025-08-13T20:08:18.477692230+00:00 stdout F [INFO] 10.217.0.74:40548 - 35806 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000763682s 2025-08-13T20:08:18.478217505+00:00 stdout F [INFO] 10.217.0.74:51750 - 50242 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104433s 2025-08-13T20:08:18.501767740+00:00 stdout F [INFO] 10.217.0.74:45811 - 21551 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001103142s 2025-08-13T20:08:18.501767740+00:00 stdout F [INFO] 10.217.0.74:35630 - 41838 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0010681s 2025-08-13T20:08:18.513550628+00:00 stdout F [INFO] 10.217.0.74:37388 - 59626 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000973008s 2025-08-13T20:08:18.513633050+00:00 stdout F [INFO] 10.217.0.74:41749 - 40718 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001201095s 2025-08-13T20:08:18.533399707+00:00 stdout F [INFO] 10.217.0.74:37546 - 7034 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001270477s 2025-08-13T20:08:18.533399707+00:00 stdout F [INFO] 10.217.0.74:39060 - 18619 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001359849s 2025-08-13T20:08:18.561114502+00:00 stdout F [INFO] 10.217.0.74:50716 - 20934 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001269167s 2025-08-13T20:08:18.561666278+00:00 stdout F [INFO] 10.217.0.74:41229 - 6523 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001590155s 2025-08-13T20:08:18.568039400+00:00 stdout F [INFO] 10.217.0.74:46810 - 19974 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00174046s 2025-08-13T20:08:18.568272437+00:00 stdout F [INFO] 10.217.0.74:52908 - 17620 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001644087s 2025-08-13T20:08:18.590841034+00:00 stdout F [INFO] 10.217.0.74:39127 - 36265 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000630128s 2025-08-13T20:08:18.591324348+00:00 stdout F [INFO] 10.217.0.74:40898 - 29496 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103056s 2025-08-13T20:08:18.616736236+00:00 stdout F [INFO] 10.217.0.74:36035 - 28386 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000574817s 2025-08-13T20:08:18.617176059+00:00 stdout F [INFO] 10.217.0.74:39294 - 50408 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000754191s 2025-08-13T20:08:18.653306395+00:00 stdout F [INFO] 10.217.0.74:49643 - 20012 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000793343s 2025-08-13T20:08:18.653829190+00:00 stdout F [INFO] 10.217.0.74:49400 - 43367 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001490833s 2025-08-13T20:08:18.681602586+00:00 stdout F [INFO] 10.217.0.74:34632 - 19788 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000816623s 2025-08-13T20:08:18.681980997+00:00 stdout F [INFO] 10.217.0.74:60503 - 51226 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001154653s 2025-08-13T20:08:18.714427047+00:00 stdout F [INFO] 10.217.0.74:57813 - 21578 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103567s 2025-08-13T20:08:18.714427047+00:00 stdout F [INFO] 10.217.0.74:44347 - 19831 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001489173s 2025-08-13T20:08:18.728724417+00:00 stdout F [INFO] 10.217.0.74:56433 - 9101 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000562716s 2025-08-13T20:08:18.730372845+00:00 stdout F [INFO] 10.217.0.74:43298 - 41510 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000668719s 2025-08-13T20:08:18.739250759+00:00 stdout F [INFO] 10.217.0.74:58344 - 16334 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000675279s 2025-08-13T20:08:18.739410844+00:00 stdout F [INFO] 10.217.0.74:55776 - 48013 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000848244s 2025-08-13T20:08:18.772723499+00:00 stdout F [INFO] 10.217.0.74:52116 - 40428 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000591977s 2025-08-13T20:08:18.772940755+00:00 stdout F [INFO] 10.217.0.74:56184 - 26165 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000535085s 2025-08-13T20:08:18.784148006+00:00 stdout F [INFO] 10.217.0.74:59633 - 62916 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00071752s 2025-08-13T20:08:18.784242399+00:00 stdout F [INFO] 10.217.0.74:36176 - 20344 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000646268s 2025-08-13T20:08:18.829873817+00:00 stdout F [INFO] 10.217.0.74:53963 - 24242 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000631288s 2025-08-13T20:08:18.829873817+00:00 stdout F [INFO] 10.217.0.74:53356 - 40328 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000655019s 2025-08-13T20:08:18.842616683+00:00 stdout F [INFO] 10.217.0.74:44219 - 60953 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001134443s 2025-08-13T20:08:18.842616683+00:00 stdout F [INFO] 10.217.0.74:53785 - 15265 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001234925s 2025-08-13T20:08:18.888835948+00:00 stdout F [INFO] 10.217.0.74:45759 - 30313 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000572467s 2025-08-13T20:08:18.889302571+00:00 stdout F [INFO] 10.217.0.74:42183 - 64045 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001291417s 2025-08-13T20:08:18.889944740+00:00 stdout F [INFO] 10.217.0.74:53201 - 28698 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00066511s 2025-08-13T20:08:18.891499084+00:00 stdout F [INFO] 10.217.0.74:34657 - 40843 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001221505s 2025-08-13T20:08:18.901686146+00:00 stdout F [INFO] 10.217.0.74:59660 - 46487 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000527275s 2025-08-13T20:08:18.902438668+00:00 stdout F [INFO] 10.217.0.74:39170 - 27000 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000802423s 2025-08-13T20:08:18.943844235+00:00 stdout F [INFO] 10.217.0.74:47238 - 51574 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000864945s 2025-08-13T20:08:18.943958148+00:00 stdout F [INFO] 10.217.0.74:40533 - 50479 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000665359s 2025-08-13T20:08:18.945034189+00:00 stdout F [INFO] 10.217.0.74:59443 - 32212 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070142s 2025-08-13T20:08:18.945284256+00:00 stdout F [INFO] 10.217.0.74:44930 - 37490 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000979978s 2025-08-13T20:08:18.999972634+00:00 stdout F [INFO] 10.217.0.74:49226 - 54351 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069281s 2025-08-13T20:08:19.000010075+00:00 stdout F [INFO] 10.217.0.74:33374 - 675 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000548685s 2025-08-13T20:08:19.059905113+00:00 stdout F [INFO] 10.217.0.74:34487 - 13154 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000845254s 2025-08-13T20:08:19.059905113+00:00 stdout F [INFO] 10.217.0.74:47712 - 4340 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000636399s 2025-08-13T20:08:19.059905113+00:00 stdout F [INFO] 10.217.0.74:35471 - 58428 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001321158s 2025-08-13T20:08:19.059905113+00:00 stdout F [INFO] 10.217.0.74:60689 - 274 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000734062s 2025-08-13T20:08:19.111097770+00:00 stdout F [INFO] 10.217.0.74:46092 - 31714 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001013969s 2025-08-13T20:08:19.111729979+00:00 stdout F [INFO] 10.217.0.74:35110 - 49539 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000569996s 2025-08-13T20:08:19.123997620+00:00 stdout F [INFO] 10.217.0.74:50974 - 12345 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000832623s 2025-08-13T20:08:19.124532406+00:00 stdout F [INFO] 10.217.0.74:32788 - 22690 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001368809s 2025-08-13T20:08:19.173362416+00:00 stdout F [INFO] 10.217.0.74:57072 - 18749 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002017758s 2025-08-13T20:08:19.173960953+00:00 stdout F [INFO] 10.217.0.74:58011 - 64217 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003071098s 2025-08-13T20:08:19.176240778+00:00 stdout F [INFO] 10.217.0.74:49301 - 14570 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001588786s 2025-08-13T20:08:19.176240778+00:00 stdout F [INFO] 10.217.0.74:51980 - 56486 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001845263s 2025-08-13T20:08:19.231853013+00:00 stdout F [INFO] 10.217.0.74:42889 - 57106 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001345139s 2025-08-13T20:08:19.232280125+00:00 stdout F [INFO] 10.217.0.74:38556 - 62901 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001417371s 2025-08-13T20:08:19.234921181+00:00 stdout F [INFO] 10.217.0.74:36720 - 15603 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00138679s 2025-08-13T20:08:19.237827514+00:00 stdout F [INFO] 10.217.0.74:59480 - 37520 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104755s 2025-08-13T20:08:19.254981216+00:00 stdout F [INFO] 10.217.0.74:36818 - 57438 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000951858s 2025-08-13T20:08:19.254981216+00:00 stdout F [INFO] 10.217.0.74:55824 - 7717 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001288587s 2025-08-13T20:08:19.287223030+00:00 stdout F [INFO] 10.217.0.74:45490 - 37893 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000742451s 2025-08-13T20:08:19.288857277+00:00 stdout F [INFO] 10.217.0.74:54329 - 23127 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000895675s 2025-08-13T20:08:19.309303843+00:00 stdout F [INFO] 10.217.0.74:46021 - 745 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000624868s 2025-08-13T20:08:19.309303843+00:00 stdout F [INFO] 10.217.0.74:60525 - 34491 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000919096s 2025-08-13T20:08:19.344323977+00:00 stdout F [INFO] 10.217.0.74:35478 - 40915 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001874983s 2025-08-13T20:08:19.344323977+00:00 stdout F [INFO] 10.217.0.74:34458 - 1330 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002321437s 2025-08-13T20:08:19.404959636+00:00 stdout F [INFO] 10.217.0.74:49339 - 57172 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000725511s 2025-08-13T20:08:19.404959636+00:00 stdout F [INFO] 10.217.0.74:42428 - 29004 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000939657s 2025-08-13T20:08:19.440233127+00:00 stdout F [INFO] 10.217.0.74:34899 - 32956 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001207635s 2025-08-13T20:08:19.440233127+00:00 stdout F [INFO] 10.217.0.74:48635 - 22276 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000980108s 2025-08-13T20:08:19.458175032+00:00 stdout F [INFO] 10.217.0.74:57975 - 38076 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000981929s 2025-08-13T20:08:19.458175032+00:00 stdout F [INFO] 10.217.0.74:46325 - 41507 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000946747s 2025-08-13T20:08:19.491980691+00:00 stdout F [INFO] 10.217.0.74:43809 - 28638 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070041s 2025-08-13T20:08:19.495368818+00:00 stdout F [INFO] 10.217.0.74:54058 - 23406 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003173411s 2025-08-13T20:08:19.499145456+00:00 stdout F [INFO] 10.217.0.74:32823 - 17562 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000504084s 2025-08-13T20:08:19.499145456+00:00 stdout F [INFO] 10.217.0.74:56726 - 343 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000561356s 2025-08-13T20:08:19.514751294+00:00 stdout F [INFO] 10.217.0.74:41593 - 63316 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069s 2025-08-13T20:08:19.514751294+00:00 stdout F [INFO] 10.217.0.74:42069 - 26740 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000919106s 2025-08-13T20:08:19.557246502+00:00 stdout F [INFO] 10.217.0.74:36148 - 26416 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000834204s 2025-08-13T20:08:19.557246502+00:00 stdout F [INFO] 10.217.0.74:53623 - 53928 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000612508s 2025-08-13T20:08:19.560498005+00:00 stdout F [INFO] 10.217.0.74:57862 - 55531 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001326568s 2025-08-13T20:08:19.561442072+00:00 stdout F [INFO] 10.217.0.74:46271 - 32034 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001072861s 2025-08-13T20:08:19.574042364+00:00 stdout F [INFO] 10.217.0.74:60622 - 623 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000890776s 2025-08-13T20:08:19.575124835+00:00 stdout F [INFO] 10.217.0.74:44083 - 31197 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001470792s 2025-08-13T20:08:19.575124835+00:00 stdout F [INFO] 10.217.0.74:33941 - 39911 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001874684s 2025-08-13T20:08:19.575124835+00:00 stdout F [INFO] 10.217.0.74:52521 - 63095 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001776991s 2025-08-13T20:08:19.619598280+00:00 stdout F [INFO] 10.217.0.74:57139 - 31701 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001597296s 2025-08-13T20:08:19.619631901+00:00 stdout F [INFO] 10.217.0.74:51704 - 25051 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001798212s 2025-08-13T20:08:19.634578559+00:00 stdout F [INFO] 10.217.0.74:48185 - 21897 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004593802s 2025-08-13T20:08:19.634578559+00:00 stdout F [INFO] 10.217.0.74:42680 - 51080 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002749438s 2025-08-13T20:08:19.634578559+00:00 stdout F [INFO] 10.217.0.74:53337 - 43838 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002883242s 2025-08-13T20:08:19.634578559+00:00 stdout F [INFO] 10.217.0.74:49462 - 22842 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004737485s 2025-08-13T20:08:19.692766767+00:00 stdout F [INFO] 10.217.0.74:47487 - 10576 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000635059s 2025-08-13T20:08:19.692766767+00:00 stdout F [INFO] 10.217.0.74:55514 - 64163 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001015499s 2025-08-13T20:08:19.692766767+00:00 stdout F [INFO] 10.217.0.74:59295 - 60158 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000657648s 2025-08-13T20:08:19.692766767+00:00 stdout F [INFO] 10.217.0.74:56438 - 61941 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000965287s 2025-08-13T20:08:19.751462260+00:00 stdout F [INFO] 10.217.0.74:51878 - 49945 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000657199s 2025-08-13T20:08:19.751462260+00:00 stdout F [INFO] 10.217.0.74:36094 - 50966 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001020449s 2025-08-13T20:08:19.751462260+00:00 stdout F [INFO] 10.217.0.74:55603 - 60901 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069649s 2025-08-13T20:08:19.752212632+00:00 stdout F [INFO] 10.217.0.74:56998 - 15524 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000435012s 2025-08-13T20:08:19.809153344+00:00 stdout F [INFO] 10.217.0.74:53401 - 39051 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001184504s 2025-08-13T20:08:19.809153344+00:00 stdout F [INFO] 10.217.0.74:51215 - 21865 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000982628s 2025-08-13T20:08:19.812235183+00:00 stdout F [INFO] 10.217.0.74:40889 - 52437 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000586647s 2025-08-13T20:08:19.813729486+00:00 stdout F [INFO] 10.217.0.74:48421 - 25312 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001891555s 2025-08-13T20:08:19.863663957+00:00 stdout F [INFO] 10.217.0.74:60962 - 58177 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000864505s 2025-08-13T20:08:19.863663957+00:00 stdout F [INFO] 10.217.0.74:53141 - 18297 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000925797s 2025-08-13T20:08:19.920427445+00:00 stdout F [INFO] 10.217.0.74:46406 - 6689 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001152653s 2025-08-13T20:08:19.921107744+00:00 stdout F [INFO] 10.217.0.74:52820 - 53416 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000558376s 2025-08-13T20:08:19.922021890+00:00 stdout F [INFO] 10.217.0.74:47651 - 17881 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001226225s 2025-08-13T20:08:19.922523145+00:00 stdout F [INFO] 10.217.0.74:52083 - 21131 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001890835s 2025-08-13T20:08:19.964350344+00:00 stdout F [INFO] 10.217.0.74:46219 - 37278 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069833s 2025-08-13T20:08:19.964350344+00:00 stdout F [INFO] 10.217.0.74:56445 - 19602 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000983288s 2025-08-13T20:08:19.980119026+00:00 stdout F [INFO] 10.217.0.74:47896 - 63739 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002735378s 2025-08-13T20:08:19.980119026+00:00 stdout F [INFO] 10.217.0.74:53528 - 63966 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001429141s 2025-08-13T20:08:19.982137164+00:00 stdout F [INFO] 10.217.0.74:36392 - 821 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000785813s 2025-08-13T20:08:19.983587326+00:00 stdout F [INFO] 10.217.0.74:37348 - 50385 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001475423s 2025-08-13T20:08:20.026518367+00:00 stdout F [INFO] 10.217.0.74:53987 - 20951 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000582706s 2025-08-13T20:08:20.026565948+00:00 stdout F [INFO] 10.217.0.74:59175 - 57290 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000483584s 2025-08-13T20:08:20.037504422+00:00 stdout F [INFO] 10.217.0.74:59422 - 54552 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000653699s 2025-08-13T20:08:20.037552413+00:00 stdout F [INFO] 10.217.0.74:42804 - 8086 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000626458s 2025-08-13T20:08:20.045226223+00:00 stdout F [INFO] 10.217.0.74:38048 - 34636 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000587927s 2025-08-13T20:08:20.045523631+00:00 stdout F [INFO] 10.217.0.74:46147 - 23800 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000991229s 2025-08-13T20:08:20.092669803+00:00 stdout F [INFO] 10.217.0.74:42603 - 6548 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000541455s 2025-08-13T20:08:20.092669803+00:00 stdout F [INFO] 10.217.0.74:59176 - 45031 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000920527s 2025-08-13T20:08:20.132848905+00:00 stdout F [INFO] 10.217.0.74:39842 - 39036 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000801383s 2025-08-13T20:08:20.132848905+00:00 stdout F [INFO] 10.217.0.74:55461 - 20359 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000863444s 2025-08-13T20:08:20.148882125+00:00 stdout F [INFO] 10.217.0.74:48741 - 47367 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000800953s 2025-08-13T20:08:20.148882125+00:00 stdout F [INFO] 10.217.0.74:49669 - 13644 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000790093s 2025-08-13T20:08:20.188878572+00:00 stdout F [INFO] 10.217.0.74:45619 - 23079 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000639578s 2025-08-13T20:08:20.189544251+00:00 stdout F [INFO] 10.217.0.74:35659 - 16828 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000816493s 2025-08-13T20:08:20.203331556+00:00 stdout F [INFO] 10.217.0.74:54643 - 54596 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001219905s 2025-08-13T20:08:20.203816790+00:00 stdout F [INFO] 10.217.0.74:44079 - 24799 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001638507s 2025-08-13T20:08:20.222126875+00:00 stdout F [INFO] 10.217.0.74:35728 - 57603 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000833413s 2025-08-13T20:08:20.223450793+00:00 stdout F [INFO] 10.217.0.74:51625 - 51357 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001006178s 2025-08-13T20:08:20.224555275+00:00 stdout F [INFO] 10.217.0.74:55727 - 11533 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000638348s 2025-08-13T20:08:20.225231624+00:00 stdout F [INFO] 10.217.0.74:47336 - 45122 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000887886s 2025-08-13T20:08:20.234617923+00:00 stdout F [INFO] 10.217.0.74:48099 - 10617 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001363009s 2025-08-13T20:08:20.234617923+00:00 stdout F [INFO] 10.217.0.74:40174 - 24904 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001339848s 2025-08-13T20:08:20.253737411+00:00 stdout F [INFO] 10.217.0.74:42674 - 40263 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000679899s 2025-08-13T20:08:20.254765271+00:00 stdout F [INFO] 10.217.0.74:34954 - 36129 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000766252s 2025-08-13T20:08:20.266526848+00:00 stdout F [INFO] 10.217.0.74:47334 - 28205 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001164104s 2025-08-13T20:08:20.266742764+00:00 stdout F [INFO] 10.217.0.74:53173 - 33111 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001522993s 2025-08-13T20:08:20.289119096+00:00 stdout F [INFO] 10.217.0.74:58640 - 24864 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070667s 2025-08-13T20:08:20.289391903+00:00 stdout F [INFO] 10.217.0.74:50835 - 5956 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000740561s 2025-08-13T20:08:20.301440449+00:00 stdout F [INFO] 10.217.0.74:49383 - 62464 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104861s 2025-08-13T20:08:20.301440449+00:00 stdout F [INFO] 10.217.0.74:36017 - 30424 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001671858s 2025-08-13T20:08:20.316057878+00:00 stdout F [INFO] 10.217.0.74:36367 - 62148 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000437042s 2025-08-13T20:08:20.316353217+00:00 stdout F [INFO] 10.217.0.74:56467 - 60074 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000994989s 2025-08-13T20:08:20.347647934+00:00 stdout F [INFO] 10.217.0.74:43603 - 36237 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001694369s 2025-08-13T20:08:20.347647934+00:00 stdout F [INFO] 10.217.0.74:58264 - 25764 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002189132s 2025-08-13T20:08:20.374389510+00:00 stdout F [INFO] 10.217.0.74:40516 - 61985 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000824754s 2025-08-13T20:08:20.374389510+00:00 stdout F [INFO] 10.217.0.74:41851 - 23707 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000762832s 2025-08-13T20:08:20.404057201+00:00 stdout F [INFO] 10.217.0.74:45825 - 36179 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000787853s 2025-08-13T20:08:20.404187815+00:00 stdout F [INFO] 10.217.0.74:60727 - 34978 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001304217s 2025-08-13T20:08:20.433333370+00:00 stdout F [INFO] 10.217.0.74:47313 - 41258 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001661527s 2025-08-13T20:08:20.433333370+00:00 stdout F [INFO] 10.217.0.74:37101 - 36275 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001336558s 2025-08-13T20:08:20.434687389+00:00 stdout F [INFO] 10.217.0.74:50737 - 46844 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000860355s 2025-08-13T20:08:20.434687389+00:00 stdout F [INFO] 10.217.0.74:34462 - 31226 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001051541s 2025-08-13T20:08:20.468028635+00:00 stdout F [INFO] 10.217.0.74:57365 - 25742 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000990839s 2025-08-13T20:08:20.469755305+00:00 stdout F [INFO] 10.217.0.74:39107 - 47850 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001581315s 2025-08-13T20:08:20.477362013+00:00 stdout F [INFO] 10.217.0.74:47989 - 52317 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000649848s 2025-08-13T20:08:20.477449415+00:00 stdout F [INFO] 10.217.0.74:40591 - 60541 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000771862s 2025-08-13T20:08:20.510911725+00:00 stdout F [INFO] 10.217.0.74:48028 - 3073 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000765232s 2025-08-13T20:08:20.510911725+00:00 stdout F [INFO] 10.217.0.74:46573 - 64702 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001097651s 2025-08-13T20:08:20.530874907+00:00 stdout F [INFO] 10.217.0.74:41963 - 44907 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002360568s 2025-08-13T20:08:20.531193616+00:00 stdout F [INFO] 10.217.0.74:53470 - 27807 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002826101s 2025-08-13T20:08:20.544350043+00:00 stdout F [INFO] 10.217.0.74:53647 - 29671 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009632836s 2025-08-13T20:08:20.544350043+00:00 stdout F [INFO] 10.217.0.74:57620 - 55875 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.009781851s 2025-08-13T20:08:20.570202815+00:00 stdout F [INFO] 10.217.0.74:44231 - 53021 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000651088s 2025-08-13T20:08:20.571462151+00:00 stdout F [INFO] 10.217.0.74:42901 - 59801 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768212s 2025-08-13T20:08:20.576648890+00:00 stdout F [INFO] 10.217.0.74:57027 - 64889 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000545476s 2025-08-13T20:08:20.576802884+00:00 stdout F [INFO] 10.217.0.74:33614 - 61489 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000609417s 2025-08-13T20:08:20.588171280+00:00 stdout F [INFO] 10.217.0.74:49097 - 19093 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000917837s 2025-08-13T20:08:20.588599452+00:00 stdout F [INFO] 10.217.0.74:55195 - 10285 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001500893s 2025-08-13T20:08:20.629764072+00:00 stdout F [INFO] 10.217.0.74:49525 - 63178 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000586707s 2025-08-13T20:08:20.630031790+00:00 stdout F [INFO] 10.217.0.74:43936 - 49048 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000670439s 2025-08-13T20:08:20.633615463+00:00 stdout F [INFO] 10.217.0.74:56815 - 9782 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000677729s 2025-08-13T20:08:20.633615463+00:00 stdout F [INFO] 10.217.0.74:50379 - 52189 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001158163s 2025-08-13T20:08:20.648194051+00:00 stdout F [INFO] 10.217.0.74:40370 - 31715 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001159454s 2025-08-13T20:08:20.648194051+00:00 stdout F [INFO] 10.217.0.74:46407 - 49932 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001255207s 2025-08-13T20:08:20.651815225+00:00 stdout F [INFO] 10.217.0.74:39433 - 20178 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001055991s 2025-08-13T20:08:20.656983383+00:00 stdout F [INFO] 10.217.0.74:44582 - 39095 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00140599s 2025-08-13T20:08:20.665125626+00:00 stdout F [INFO] 10.217.0.74:44221 - 55255 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000734341s 2025-08-13T20:08:20.665251720+00:00 stdout F [INFO] 10.217.0.74:39018 - 60238 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000659679s 2025-08-13T20:08:20.689938068+00:00 stdout F [INFO] 10.217.0.74:41391 - 51096 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002542943s 2025-08-13T20:08:20.690173254+00:00 stdout F [INFO] 10.217.0.74:47992 - 23554 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000934757s 2025-08-13T20:08:20.712100273+00:00 stdout F [INFO] 10.217.0.74:59180 - 1034 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000846694s 2025-08-13T20:08:20.712746112+00:00 stdout F [INFO] 10.217.0.74:50309 - 3207 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000605618s 2025-08-13T20:08:20.722083729+00:00 stdout F [INFO] 10.217.0.74:33841 - 24481 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069825s 2025-08-13T20:08:20.722843781+00:00 stdout F [INFO] 10.217.0.74:41004 - 7845 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000935487s 2025-08-13T20:08:20.749768843+00:00 stdout F [INFO] 10.217.0.74:35507 - 16179 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000877175s 2025-08-13T20:08:20.753088058+00:00 stdout F [INFO] 10.217.0.74:59183 - 20585 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000753702s 2025-08-13T20:08:20.767614805+00:00 stdout F [INFO] 10.217.0.74:58570 - 41317 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000853814s 2025-08-13T20:08:20.767841851+00:00 stdout F [INFO] 10.217.0.74:55816 - 52170 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000998769s 2025-08-13T20:08:20.814605662+00:00 stdout F [INFO] 10.217.0.74:43412 - 35534 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070546s 2025-08-13T20:08:20.815200969+00:00 stdout F [INFO] 10.217.0.74:36960 - 17945 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001435021s 2025-08-13T20:08:20.829635843+00:00 stdout F [INFO] 10.217.0.74:36040 - 9387 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000743341s 2025-08-13T20:08:20.829635843+00:00 stdout F [INFO] 10.217.0.74:42363 - 34278 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000615428s 2025-08-13T20:08:20.873258044+00:00 stdout F [INFO] 10.217.0.74:57034 - 44179 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000813913s 2025-08-13T20:08:20.873319845+00:00 stdout F [INFO] 10.217.0.74:46023 - 9625 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001219815s 2025-08-13T20:08:20.899992170+00:00 stdout F [INFO] 10.217.0.74:44255 - 47347 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000738771s 2025-08-13T20:08:20.900095283+00:00 stdout F [INFO] 10.217.0.74:45113 - 757 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000606878s 2025-08-13T20:08:20.922948738+00:00 stdout F [INFO] 10.217.0.74:43853 - 47584 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000456963s 2025-08-13T20:08:20.923684229+00:00 stdout F [INFO] 10.217.0.74:59554 - 41272 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001007579s 2025-08-13T20:08:20.932553414+00:00 stdout F [INFO] 10.217.0.74:51472 - 20194 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000796723s 2025-08-13T20:08:20.932639126+00:00 stdout F [INFO] 10.217.0.74:42302 - 22466 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105013s 2025-08-13T20:08:20.958865018+00:00 stdout F [INFO] 10.217.0.74:34581 - 22745 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00071544s 2025-08-13T20:08:20.959120615+00:00 stdout F [INFO] 10.217.0.74:51406 - 23089 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000901286s 2025-08-13T20:08:20.967416073+00:00 stdout F [INFO] 10.217.0.74:45014 - 37578 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000870665s 2025-08-13T20:08:20.967648760+00:00 stdout F [INFO] 10.217.0.74:47564 - 64455 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001078651s 2025-08-13T20:08:20.984656478+00:00 stdout F [INFO] 10.217.0.74:34397 - 48184 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003339386s 2025-08-13T20:08:20.984656478+00:00 stdout F [INFO] 10.217.0.74:57204 - 2685 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003514661s 2025-08-13T20:08:20.995082076+00:00 stdout F [INFO] 10.217.0.74:55637 - 63608 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068953s 2025-08-13T20:08:20.996061235+00:00 stdout F [INFO] 10.217.0.74:60552 - 45167 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000512405s 2025-08-13T20:08:21.027181477+00:00 stdout F [INFO] 10.217.0.74:45214 - 44061 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001082451s 2025-08-13T20:08:21.028565126+00:00 stdout F [INFO] 10.217.0.74:34987 - 36881 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000870085s 2025-08-13T20:08:21.029176524+00:00 stdout F [INFO] 10.217.0.74:57687 - 39493 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000792063s 2025-08-13T20:08:21.029436942+00:00 stdout F [INFO] 10.217.0.74:48392 - 28980 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000893525s 2025-08-13T20:08:21.049611960+00:00 stdout F [INFO] 10.217.0.74:57286 - 65085 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00105317s 2025-08-13T20:08:21.049670302+00:00 stdout F [INFO] 10.217.0.74:43053 - 12216 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000657459s 2025-08-13T20:08:21.095700921+00:00 stdout F [INFO] 10.217.0.74:43606 - 1746 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001463912s 2025-08-13T20:08:21.095700921+00:00 stdout F [INFO] 10.217.0.74:59907 - 36665 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000466304s 2025-08-13T20:08:21.097937406+00:00 stdout F [INFO] 10.217.0.74:54768 - 998 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000672219s 2025-08-13T20:08:21.097963896+00:00 stdout F [INFO] 10.217.0.74:45132 - 28321 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000950917s 2025-08-13T20:08:21.156295639+00:00 stdout F [INFO] 10.217.0.74:53052 - 4292 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000813174s 2025-08-13T20:08:21.157350069+00:00 stdout F [INFO] 10.217.0.74:43527 - 27937 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000610728s 2025-08-13T20:08:21.159719867+00:00 stdout F [INFO] 10.217.0.74:35243 - 38330 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001566785s 2025-08-13T20:08:21.159719867+00:00 stdout F [INFO] 10.217.0.74:44258 - 26336 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001432661s 2025-08-13T20:08:21.214957301+00:00 stdout F [INFO] 10.217.0.74:33663 - 52704 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000819633s 2025-08-13T20:08:21.215178627+00:00 stdout F [INFO] 10.217.0.74:54171 - 1 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000600528s 2025-08-13T20:08:21.251848858+00:00 stdout F [INFO] 10.217.0.74:41906 - 5676 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001237486s 2025-08-13T20:08:21.253050703+00:00 stdout F [INFO] 10.217.0.74:47230 - 11157 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001222845s 2025-08-13T20:08:21.276477684+00:00 stdout F [INFO] 10.217.0.74:36698 - 36123 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000664929s 2025-08-13T20:08:21.277333799+00:00 stdout F [INFO] 10.217.0.74:38715 - 787 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000642098s 2025-08-13T20:08:21.277816763+00:00 stdout F [INFO] 10.217.0.74:41323 - 41761 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000915877s 2025-08-13T20:08:21.278021229+00:00 stdout F [INFO] 10.217.0.74:44740 - 7711 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001326388s 2025-08-13T20:08:21.318615902+00:00 stdout F [INFO] 10.217.0.74:47736 - 41622 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001616496s 2025-08-13T20:08:21.318761397+00:00 stdout F [INFO] 10.217.0.74:60351 - 34419 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002020678s 2025-08-13T20:08:21.348972883+00:00 stdout F [INFO] 10.217.0.74:51905 - 53438 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003447209s 2025-08-13T20:08:21.350017763+00:00 stdout F [INFO] 10.217.0.74:53640 - 36814 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004793708s 2025-08-13T20:08:21.350343552+00:00 stdout F [INFO] 10.217.0.74:53486 - 4265 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001804502s 2025-08-13T20:08:21.352277748+00:00 stdout F [INFO] 10.217.0.74:42630 - 31454 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002065249s 2025-08-13T20:08:21.381030062+00:00 stdout F [INFO] 10.217.0.74:40627 - 8986 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000994388s 2025-08-13T20:08:21.381292199+00:00 stdout F [INFO] 10.217.0.74:56655 - 62455 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000911647s 2025-08-13T20:08:21.391505122+00:00 stdout F [INFO] 10.217.0.74:38469 - 59137 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000469634s 2025-08-13T20:08:21.391505122+00:00 stdout F [INFO] 10.217.0.74:33175 - 27614 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000449763s 2025-08-13T20:08:21.406906914+00:00 stdout F [INFO] 10.217.0.74:59333 - 37348 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001283207s 2025-08-13T20:08:21.407047058+00:00 stdout F [INFO] 10.217.0.74:47742 - 60191 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001129573s 2025-08-13T20:08:21.443857173+00:00 stdout F [INFO] 10.217.0.74:40528 - 41650 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000792613s 2025-08-13T20:08:21.444235444+00:00 stdout F [INFO] 10.217.0.74:47840 - 21143 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003349207s 2025-08-13T20:08:21.451058140+00:00 stdout F [INFO] 10.217.0.74:41261 - 13281 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000775202s 2025-08-13T20:08:21.451058140+00:00 stdout F [INFO] 10.217.0.74:45851 - 14398 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000826563s 2025-08-13T20:08:21.467184651+00:00 stdout F [INFO] 10.217.0.74:51266 - 7856 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000544876s 2025-08-13T20:08:21.467329675+00:00 stdout F [INFO] 10.217.0.74:42910 - 23102 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001009659s 2025-08-13T20:08:21.507686802+00:00 stdout F [INFO] 10.217.0.74:57006 - 16266 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000783853s 2025-08-13T20:08:21.507686802+00:00 stdout F [INFO] 10.217.0.74:59004 - 2798 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070864s 2025-08-13T20:08:21.508597748+00:00 stdout F [INFO] 10.217.0.74:36516 - 35450 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000865265s 2025-08-13T20:08:21.509967498+00:00 stdout F [INFO] 10.217.0.74:58294 - 64367 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001360009s 2025-08-13T20:08:21.524058432+00:00 stdout F [INFO] 10.217.0.74:57622 - 23682 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000840354s 2025-08-13T20:08:21.524096323+00:00 stdout F [INFO] 10.217.0.74:36061 - 43951 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000754442s 2025-08-13T20:08:21.567555409+00:00 stdout F [INFO] 10.217.0.74:46832 - 53516 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000781792s 2025-08-13T20:08:21.567607830+00:00 stdout F [INFO] 10.217.0.74:59571 - 25182 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000803143s 2025-08-13T20:08:21.578841352+00:00 stdout F [INFO] 10.217.0.74:35014 - 55842 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000644738s 2025-08-13T20:08:21.581847819+00:00 stdout F [INFO] 10.217.0.74:55775 - 55026 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102618s 2025-08-13T20:08:21.627158318+00:00 stdout F [INFO] 10.217.0.74:52303 - 42393 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000750662s 2025-08-13T20:08:21.627158318+00:00 stdout F [INFO] 10.217.0.74:59691 - 22958 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000549025s 2025-08-13T20:08:21.641962452+00:00 stdout F [INFO] 10.217.0.74:54771 - 8887 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000581156s 2025-08-13T20:08:21.641962452+00:00 stdout F [INFO] 10.217.0.74:49009 - 37894 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104908s 2025-08-13T20:08:21.681094214+00:00 stdout F [INFO] 10.217.0.74:37748 - 51751 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001596456s 2025-08-13T20:08:21.681628369+00:00 stdout F [INFO] 10.217.0.74:36028 - 57991 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002143281s 2025-08-13T20:08:21.699708158+00:00 stdout F [INFO] 10.217.0.74:35438 - 20392 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000583846s 2025-08-13T20:08:21.699708158+00:00 stdout F [INFO] 10.217.0.74:45645 - 53652 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000513624s 2025-08-13T20:08:21.740565719+00:00 stdout F [INFO] 10.217.0.74:43636 - 21088 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001104292s 2025-08-13T20:08:21.740881948+00:00 stdout F [INFO] 10.217.0.74:60974 - 8141 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001300058s 2025-08-13T20:08:21.753051197+00:00 stdout F [INFO] 10.217.0.74:58618 - 16057 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001003989s 2025-08-13T20:08:21.756076784+00:00 stdout F [INFO] 10.217.0.74:43844 - 23921 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004007895s 2025-08-13T20:08:21.758245566+00:00 stdout F [INFO] 10.217.0.74:47124 - 59391 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000936547s 2025-08-13T20:08:21.758441132+00:00 stdout F [INFO] 10.217.0.74:49102 - 43624 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001713939s 2025-08-13T20:08:21.772176476+00:00 stdout F [INFO] 10.217.0.74:46657 - 4769 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000960378s 2025-08-13T20:08:21.772284439+00:00 stdout F [INFO] 10.217.0.74:35549 - 18077 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001296937s 2025-08-13T20:08:21.781389130+00:00 stdout F [INFO] 10.217.0.74:35033 - 499 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000914066s 2025-08-13T20:08:21.781597126+00:00 stdout F [INFO] 10.217.0.74:52691 - 12784 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000996999s 2025-08-13T20:08:21.795328699+00:00 stdout F [INFO] 10.217.0.74:39043 - 52094 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000585827s 2025-08-13T20:08:21.795527995+00:00 stdout F [INFO] 10.217.0.74:38492 - 24303 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000870904s 2025-08-13T20:08:21.812769619+00:00 stdout F [INFO] 10.217.0.74:44059 - 37291 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002576694s 2025-08-13T20:08:21.812980465+00:00 stdout F [INFO] 10.217.0.74:35021 - 5072 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002731668s 2025-08-13T20:08:21.815246030+00:00 stdout F [INFO] 10.217.0.74:51931 - 8604 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001972337s 2025-08-13T20:08:21.815246030+00:00 stdout F [INFO] 10.217.0.74:46159 - 49185 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002262854s 2025-08-13T20:08:21.836761357+00:00 stdout F [INFO] 10.217.0.74:41242 - 33438 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001621987s 2025-08-13T20:08:21.837088427+00:00 stdout F [INFO] 10.217.0.74:56449 - 54193 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0020712s 2025-08-13T20:08:21.852242801+00:00 stdout F [INFO] 10.217.0.74:40528 - 28590 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000773742s 2025-08-13T20:08:21.853209469+00:00 stdout F [INFO] 10.217.0.74:40518 - 52546 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001800842s 2025-08-13T20:08:21.874935372+00:00 stdout F [INFO] 10.217.0.74:48787 - 4883 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000742461s 2025-08-13T20:08:21.875084876+00:00 stdout F [INFO] 10.217.0.74:53900 - 17729 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001129183s 2025-08-13T20:08:21.909328318+00:00 stdout F [INFO] 10.217.0.74:58437 - 7710 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000685829s 2025-08-13T20:08:21.909587265+00:00 stdout F [INFO] 10.217.0.74:41177 - 8284 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000807493s 2025-08-13T20:08:21.931037270+00:00 stdout F [INFO] 10.217.0.74:50656 - 30424 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070117s 2025-08-13T20:08:21.931133773+00:00 stdout F [INFO] 10.217.0.74:54700 - 47502 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001229895s 2025-08-13T20:08:21.968340560+00:00 stdout F [INFO] 10.217.0.74:51881 - 37498 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001640137s 2025-08-13T20:08:21.968429062+00:00 stdout F [INFO] 10.217.0.74:40751 - 48109 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001837213s 2025-08-13T20:08:21.973223730+00:00 stdout F [INFO] 10.217.0.74:47528 - 5737 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000527265s 2025-08-13T20:08:21.973223730+00:00 stdout F [INFO] 10.217.0.74:36322 - 65342 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000478184s 2025-08-13T20:08:21.979201161+00:00 stdout F [INFO] 10.217.0.74:40156 - 14196 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000723381s 2025-08-13T20:08:21.980020715+00:00 stdout F [INFO] 10.217.0.74:46359 - 39016 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001127842s 2025-08-13T20:08:21.982708972+00:00 stdout F [INFO] 10.217.0.74:52956 - 1784 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001235185s 2025-08-13T20:08:21.983336250+00:00 stdout F [INFO] 10.217.0.74:49008 - 65323 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001339569s 2025-08-13T20:08:22.031831640+00:00 stdout F [INFO] 10.217.0.74:59415 - 64727 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000711211s 2025-08-13T20:08:22.031917433+00:00 stdout F [INFO] 10.217.0.74:54528 - 22049 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001002928s 2025-08-13T20:08:22.042281320+00:00 stdout F [INFO] 10.217.0.74:35159 - 49207 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068618s 2025-08-13T20:08:22.042378513+00:00 stdout F [INFO] 10.217.0.74:54247 - 9704 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000502274s 2025-08-13T20:08:22.044223495+00:00 stdout F [INFO] 10.217.0.74:56155 - 28029 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001385s 2025-08-13T20:08:22.044223495+00:00 stdout F [INFO] 10.217.0.74:49841 - 61991 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102096s 2025-08-13T20:08:22.083609545+00:00 stdout F [INFO] 10.217.0.74:46141 - 31380 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000932726s 2025-08-13T20:08:22.084075808+00:00 stdout F [INFO] 10.217.0.74:47850 - 15680 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001004049s 2025-08-13T20:08:22.098749489+00:00 stdout F [INFO] 10.217.0.74:36090 - 7351 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000636378s 2025-08-13T20:08:22.099014486+00:00 stdout F [INFO] 10.217.0.74:38597 - 21700 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000812803s 2025-08-13T20:08:22.139299391+00:00 stdout F [INFO] 10.217.0.74:39378 - 44582 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000653949s 2025-08-13T20:08:22.139647061+00:00 stdout F [INFO] 10.217.0.74:36225 - 22223 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001251976s 2025-08-13T20:08:22.155679041+00:00 stdout F [INFO] 10.217.0.74:37035 - 15705 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000885045s 2025-08-13T20:08:22.158742589+00:00 stdout F [INFO] 10.217.0.74:35138 - 51780 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001082841s 2025-08-13T20:08:22.179351120+00:00 stdout F [INFO] 10.217.0.74:45734 - 36900 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000978308s 2025-08-13T20:08:22.179593517+00:00 stdout F [INFO] 10.217.0.74:40799 - 54997 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001279057s 2025-08-13T20:08:22.201436993+00:00 stdout F [INFO] 10.217.0.74:44396 - 55140 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001330658s 2025-08-13T20:08:22.201560556+00:00 stdout F [INFO] 10.217.0.74:33447 - 1163 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00139551s 2025-08-13T20:08:22.214420215+00:00 stdout F [INFO] 10.217.0.74:52628 - 34934 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000763552s 2025-08-13T20:08:22.214820137+00:00 stdout F [INFO] 10.217.0.74:52648 - 24657 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001467352s 2025-08-13T20:08:22.239860595+00:00 stdout F [INFO] 10.217.0.74:51385 - 16973 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00105344s 2025-08-13T20:08:22.239860595+00:00 stdout F [INFO] 10.217.0.74:49919 - 34860 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001381359s 2025-08-13T20:08:22.261126194+00:00 stdout F [INFO] 10.217.0.74:43526 - 6054 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000821284s 2025-08-13T20:08:22.261374731+00:00 stdout F [INFO] 10.217.0.74:40719 - 25417 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001145663s 2025-08-13T20:08:22.267248970+00:00 stdout F [INFO] 10.217.0.74:44331 - 53040 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002694397s 2025-08-13T20:08:22.267341953+00:00 stdout F [INFO] 10.217.0.74:56326 - 34656 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002656876s 2025-08-13T20:08:22.276757652+00:00 stdout F [INFO] 10.217.0.74:52375 - 43508 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001356439s 2025-08-13T20:08:22.276943238+00:00 stdout F [INFO] 10.217.0.74:58513 - 13901 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001270606s 2025-08-13T20:08:22.332239413+00:00 stdout F [INFO] 10.217.0.74:41229 - 42767 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001075081s 2025-08-13T20:08:22.332239413+00:00 stdout F [INFO] 10.217.0.74:52021 - 30200 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000567466s 2025-08-13T20:08:22.398007929+00:00 stdout F [INFO] 10.217.0.74:47267 - 62452 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001238796s 2025-08-13T20:08:22.398975957+00:00 stdout F [INFO] 10.217.0.74:57833 - 38362 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000962667s 2025-08-13T20:08:22.399740409+00:00 stdout F [INFO] 10.217.0.74:38237 - 53887 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069585s 2025-08-13T20:08:22.401736936+00:00 stdout F [INFO] 10.217.0.74:44127 - 22403 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105724s 2025-08-13T20:08:22.433097905+00:00 stdout F [INFO] 10.217.0.74:52965 - 42770 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000876085s 2025-08-13T20:08:22.434533936+00:00 stdout F [INFO] 10.217.0.74:58613 - 49472 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002353548s 2025-08-13T20:08:22.455585540+00:00 stdout F [INFO] 10.217.0.74:60674 - 20048 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000941467s 2025-08-13T20:08:22.463268320+00:00 stdout F [INFO] 10.217.0.74:41587 - 40893 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000694699s 2025-08-13T20:08:22.466577205+00:00 stdout F [INFO] 10.217.0.74:46766 - 32712 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000801933s 2025-08-13T20:08:22.466577205+00:00 stdout F [INFO] 10.217.0.74:45096 - 7155 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000899256s 2025-08-13T20:08:22.510876515+00:00 stdout F [INFO] 10.217.0.74:54552 - 47724 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005303202s 2025-08-13T20:08:22.511946716+00:00 stdout F [INFO] 10.217.0.74:45434 - 9918 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006879638s 2025-08-13T20:08:22.519684217+00:00 stdout F [INFO] 10.217.0.74:37786 - 11215 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000952157s 2025-08-13T20:08:22.520166181+00:00 stdout F [INFO] 10.217.0.74:36296 - 46459 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000757792s 2025-08-13T20:08:22.522077546+00:00 stdout F [INFO] 10.217.0.74:51580 - 63207 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000449953s 2025-08-13T20:08:22.522077546+00:00 stdout F [INFO] 10.217.0.74:40251 - 25049 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000519685s 2025-08-13T20:08:22.576696652+00:00 stdout F [INFO] 10.217.0.74:57622 - 63627 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069116s 2025-08-13T20:08:22.577608658+00:00 stdout F [INFO] 10.217.0.74:45888 - 2144 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001443201s 2025-08-13T20:08:22.636625250+00:00 stdout F [INFO] 10.217.0.74:52830 - 1253 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000813273s 2025-08-13T20:08:22.636625250+00:00 stdout F [INFO] 10.217.0.74:37953 - 41778 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001006499s 2025-08-13T20:08:22.638191015+00:00 stdout F [INFO] 10.217.0.74:41117 - 48794 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000588647s 2025-08-13T20:08:22.638569146+00:00 stdout F [INFO] 10.217.0.74:41318 - 41580 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000748311s 2025-08-13T20:08:22.648858241+00:00 stdout F [INFO] 10.217.0.74:45123 - 27094 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000730341s 2025-08-13T20:08:22.649838699+00:00 stdout F [INFO] 10.217.0.74:38611 - 55493 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001436781s 2025-08-13T20:08:22.664156650+00:00 stdout F [INFO] 10.217.0.74:43102 - 36070 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070239s 2025-08-13T20:08:22.664418027+00:00 stdout F [INFO] 10.217.0.74:32935 - 60085 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000615287s 2025-08-13T20:08:22.691543585+00:00 stdout F [INFO] 10.217.0.74:57635 - 58953 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001525044s 2025-08-13T20:08:22.691858614+00:00 stdout F [INFO] 10.217.0.74:50777 - 5102 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00139997s 2025-08-13T20:08:22.693921343+00:00 stdout F [INFO] 10.217.0.74:34461 - 46395 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000959297s 2025-08-13T20:08:22.694219062+00:00 stdout F [INFO] 10.217.0.74:34317 - 59106 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001340569s 2025-08-13T20:08:22.702807788+00:00 stdout F [INFO] 10.217.0.74:53439 - 28469 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000780463s 2025-08-13T20:08:22.703057565+00:00 stdout F [INFO] 10.217.0.74:45448 - 47572 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00071143s 2025-08-13T20:08:22.721880755+00:00 stdout F [INFO] 10.217.0.74:51174 - 61205 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000579236s 2025-08-13T20:08:22.721984198+00:00 stdout F [INFO] 10.217.0.74:40056 - 50199 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000744641s 2025-08-13T20:08:22.744304728+00:00 stdout F [INFO] 10.217.0.74:42321 - 7798 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000613208s 2025-08-13T20:08:22.745109991+00:00 stdout F [INFO] 10.217.0.74:42452 - 58325 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000604637s 2025-08-13T20:08:22.802369352+00:00 stdout F [INFO] 10.217.0.74:43377 - 56553 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000826424s 2025-08-13T20:08:22.802422454+00:00 stdout F [INFO] 10.217.0.74:36584 - 51319 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000934307s 2025-08-13T20:08:22.826702370+00:00 stdout F [INFO] 10.217.0.74:50113 - 4511 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001435291s 2025-08-13T20:08:22.827707989+00:00 stdout F [INFO] 10.217.0.74:60437 - 29422 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002011328s 2025-08-13T20:08:22.841089162+00:00 stdout F [INFO] 10.217.0.74:40733 - 6592 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000608397s 2025-08-13T20:08:22.841724611+00:00 stdout F [INFO] 10.217.0.74:48412 - 63283 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000536045s 2025-08-13T20:08:22.853629642+00:00 stdout F [INFO] 10.217.0.74:57847 - 38739 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000649439s 2025-08-13T20:08:22.853831288+00:00 stdout F [INFO] 10.217.0.74:44784 - 44679 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068887s 2025-08-13T20:08:22.862625980+00:00 stdout F [INFO] 10.217.0.8:39285 - 842 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000741802s 2025-08-13T20:08:22.862710062+00:00 stdout F [INFO] 10.217.0.8:49026 - 54348 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000958068s 2025-08-13T20:08:22.865088361+00:00 stdout F [INFO] 10.217.0.8:45372 - 17228 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000958098s 2025-08-13T20:08:22.865369679+00:00 stdout F [INFO] 10.217.0.8:59953 - 11278 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001006529s 2025-08-13T20:08:22.866947584+00:00 stdout F [INFO] 10.217.0.74:50270 - 28207 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000865685s 2025-08-13T20:08:22.867147420+00:00 stdout F [INFO] 10.217.0.74:35084 - 42316 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103552s 2025-08-13T20:08:22.883126038+00:00 stdout F [INFO] 10.217.0.74:43979 - 42279 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000583907s 2025-08-13T20:08:22.884513628+00:00 stdout F [INFO] 10.217.0.74:36172 - 11931 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001900804s 2025-08-13T20:08:22.901498455+00:00 stdout F [INFO] 10.217.0.74:53107 - 13887 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001286337s 2025-08-13T20:08:22.901736311+00:00 stdout F [INFO] 10.217.0.74:58698 - 43570 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001462982s 2025-08-13T20:08:22.906571780+00:00 stdout F [INFO] 10.217.0.74:38047 - 19286 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000501705s 2025-08-13T20:08:22.906856988+00:00 stdout F [INFO] 10.217.0.74:60554 - 11266 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069933s 2025-08-13T20:08:22.921088926+00:00 stdout F [INFO] 10.217.0.74:54298 - 2714 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000801863s 2025-08-13T20:08:22.921255611+00:00 stdout F [INFO] 10.217.0.74:51577 - 64057 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070477s 2025-08-13T20:08:22.943009155+00:00 stdout F [INFO] 10.217.0.74:56610 - 49819 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000551956s 2025-08-13T20:08:22.943108448+00:00 stdout F [INFO] 10.217.0.74:60473 - 63037 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001326858s 2025-08-13T20:08:22.964922453+00:00 stdout F [INFO] 10.217.0.74:40775 - 7463 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003672445s 2025-08-13T20:08:22.967642081+00:00 stdout F [INFO] 10.217.0.74:38048 - 46678 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004013355s 2025-08-13T20:08:22.967642081+00:00 stdout F [INFO] 10.217.0.74:41056 - 56626 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004316424s 2025-08-13T20:08:22.967642081+00:00 stdout F [INFO] 10.217.0.74:46554 - 45253 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004679264s 2025-08-13T20:08:22.983985159+00:00 stdout F [INFO] 10.217.0.74:35219 - 42924 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00314427s 2025-08-13T20:08:22.984067622+00:00 stdout F [INFO] 10.217.0.74:33846 - 27423 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003050767s 2025-08-13T20:08:22.998949769+00:00 stdout F [INFO] 10.217.0.74:34661 - 48996 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001009099s 2025-08-13T20:08:22.998949769+00:00 stdout F [INFO] 10.217.0.74:49940 - 35189 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001271607s 2025-08-13T20:08:23.028307460+00:00 stdout F [INFO] 10.217.0.74:50107 - 58863 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000579406s 2025-08-13T20:08:23.028442514+00:00 stdout F [INFO] 10.217.0.74:47941 - 33212 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000843434s 2025-08-13T20:08:23.028560918+00:00 stdout F [INFO] 10.217.0.74:53687 - 62909 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000572706s 2025-08-13T20:08:23.029147614+00:00 stdout F [INFO] 10.217.0.74:33055 - 46652 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587257s 2025-08-13T20:08:23.041401596+00:00 stdout F [INFO] 10.217.0.74:50392 - 19123 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000922237s 2025-08-13T20:08:23.041401596+00:00 stdout F [INFO] 10.217.0.74:59302 - 36026 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001150433s 2025-08-13T20:08:23.067258497+00:00 stdout F [INFO] 10.217.0.74:48739 - 15689 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001277987s 2025-08-13T20:08:23.067258497+00:00 stdout F [INFO] 10.217.0.74:45224 - 16916 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001430021s 2025-08-13T20:08:23.086723925+00:00 stdout F [INFO] 10.217.0.74:46805 - 11307 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001466452s 2025-08-13T20:08:23.086723925+00:00 stdout F [INFO] 10.217.0.74:44357 - 57641 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001210575s 2025-08-13T20:08:23.102167298+00:00 stdout F [INFO] 10.217.0.74:33526 - 21050 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001001959s 2025-08-13T20:08:23.102167298+00:00 stdout F [INFO] 10.217.0.74:44924 - 63144 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001224765s 2025-08-13T20:08:23.130224712+00:00 stdout F [INFO] 10.217.0.74:51112 - 22062 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001516313s 2025-08-13T20:08:23.130457029+00:00 stdout F [INFO] 10.217.0.74:45256 - 58772 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002293196s 2025-08-13T20:08:23.142601387+00:00 stdout F [INFO] 10.217.0.74:39991 - 54096 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001159244s 2025-08-13T20:08:23.142601387+00:00 stdout F [INFO] 10.217.0.74:53534 - 40160 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000824164s 2025-08-13T20:08:23.158352619+00:00 stdout F [INFO] 10.217.0.74:43614 - 27969 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000830473s 2025-08-13T20:08:23.158352619+00:00 stdout F [INFO] 10.217.0.74:51746 - 46118 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00071584s 2025-08-13T20:08:23.173865214+00:00 stdout F [INFO] 10.217.0.74:56815 - 10800 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001192754s 2025-08-13T20:08:23.173865214+00:00 stdout F [INFO] 10.217.0.74:37470 - 26160 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001359149s 2025-08-13T20:08:23.186530057+00:00 stdout F [INFO] 10.217.0.74:35698 - 17396 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001814522s 2025-08-13T20:08:23.186576378+00:00 stdout F [INFO] 10.217.0.74:47297 - 35966 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002178742s 2025-08-13T20:08:23.201879497+00:00 stdout F [INFO] 10.217.0.74:44242 - 61307 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000986358s 2025-08-13T20:08:23.204248675+00:00 stdout F [INFO] 10.217.0.74:49688 - 14474 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001558625s 2025-08-13T20:08:23.207265181+00:00 stdout F [INFO] 10.217.0.74:60793 - 2671 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000507625s 2025-08-13T20:08:23.207265181+00:00 stdout F [INFO] 10.217.0.74:47305 - 44007 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00280909s 2025-08-13T20:08:23.224348531+00:00 stdout F [INFO] 10.217.0.74:43667 - 49108 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00139733s 2025-08-13T20:08:23.224348531+00:00 stdout F [INFO] 10.217.0.74:59356 - 15274 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001646567s 2025-08-13T20:08:23.240945167+00:00 stdout F [INFO] 10.217.0.74:37437 - 18158 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001358979s 2025-08-13T20:08:23.245963521+00:00 stdout F [INFO] 10.217.0.74:36155 - 8426 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002000028s 2025-08-13T20:08:23.247389952+00:00 stdout F [INFO] 10.217.0.74:36748 - 23802 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000823644s 2025-08-13T20:08:23.247389952+00:00 stdout F [INFO] 10.217.0.74:34141 - 30409 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001084881s 2025-08-13T20:08:23.265588203+00:00 stdout F [INFO] 10.217.0.74:33930 - 425 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002363888s 2025-08-13T20:08:23.265588203+00:00 stdout F [INFO] 10.217.0.74:48650 - 34517 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002275536s 2025-08-13T20:08:23.265645915+00:00 stdout F [INFO] 10.217.0.74:45625 - 16991 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002644116s 2025-08-13T20:08:23.265862481+00:00 stdout F [INFO] 10.217.0.74:44165 - 37866 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002362178s 2025-08-13T20:08:23.286846603+00:00 stdout F [INFO] 10.217.0.74:43829 - 8853 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001176033s 2025-08-13T20:08:23.287205573+00:00 stdout F [INFO] 10.217.0.74:59220 - 37306 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001708199s 2025-08-13T20:08:23.302813801+00:00 stdout F [INFO] 10.217.0.74:35066 - 6993 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069728s 2025-08-13T20:08:23.303000866+00:00 stdout F [INFO] 10.217.0.74:58090 - 34860 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070014s 2025-08-13T20:08:23.323088672+00:00 stdout F [INFO] 10.217.0.74:58163 - 25525 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000577657s 2025-08-13T20:08:23.323339729+00:00 stdout F [INFO] 10.217.0.74:47008 - 15212 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000624078s 2025-08-13T20:08:23.346958476+00:00 stdout F [INFO] 10.217.0.74:36598 - 11240 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000590827s 2025-08-13T20:08:23.347015058+00:00 stdout F [INFO] 10.217.0.74:55618 - 51013 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0006817s 2025-08-13T20:08:23.365053625+00:00 stdout F [INFO] 10.217.0.74:56789 - 10666 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001782331s 2025-08-13T20:08:23.365137098+00:00 stdout F [INFO] 10.217.0.74:50602 - 20928 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001779181s 2025-08-13T20:08:23.385098650+00:00 stdout F [INFO] 10.217.0.74:38012 - 36657 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002411469s 2025-08-13T20:08:23.385748948+00:00 stdout F [INFO] 10.217.0.74:39530 - 9020 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002152452s 2025-08-13T20:08:23.403006163+00:00 stdout F [INFO] 10.217.0.74:46876 - 29006 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001095072s 2025-08-13T20:08:23.403196749+00:00 stdout F [INFO] 10.217.0.74:55259 - 46879 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001254786s 2025-08-13T20:08:23.423972174+00:00 stdout F [INFO] 10.217.0.74:45266 - 62572 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001078551s 2025-08-13T20:08:23.424981563+00:00 stdout F [INFO] 10.217.0.74:56831 - 15440 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002025318s 2025-08-13T20:08:23.448177158+00:00 stdout F [INFO] 10.217.0.74:35227 - 32690 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000593787s 2025-08-13T20:08:23.448323243+00:00 stdout F [INFO] 10.217.0.74:35367 - 16258 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001110942s 2025-08-13T20:08:23.462711955+00:00 stdout F [INFO] 10.217.0.74:46751 - 5046 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000588527s 2025-08-13T20:08:23.463100416+00:00 stdout F [INFO] 10.217.0.74:40424 - 57957 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000795833s 2025-08-13T20:08:23.483741708+00:00 stdout F [INFO] 10.217.0.74:57391 - 28731 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004024075s 2025-08-13T20:08:23.483866722+00:00 stdout F [INFO] 10.217.0.74:50791 - 22189 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000505134s 2025-08-13T20:08:23.504960606+00:00 stdout F [INFO] 10.217.0.74:56036 - 54608 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000469544s 2025-08-13T20:08:23.505524843+00:00 stdout F [INFO] 10.217.0.74:34403 - 11947 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001620877s 2025-08-13T20:08:23.521936363+00:00 stdout F [INFO] 10.217.0.74:55722 - 8768 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000545856s 2025-08-13T20:08:23.521936363+00:00 stdout F [INFO] 10.217.0.74:35719 - 8130 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000526285s 2025-08-13T20:08:23.547486986+00:00 stdout F [INFO] 10.217.0.74:40315 - 7041 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001810692s 2025-08-13T20:08:23.548531626+00:00 stdout F [INFO] 10.217.0.74:38044 - 53013 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002714748s 2025-08-13T20:08:23.566259754+00:00 stdout F [INFO] 10.217.0.74:43842 - 53315 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00175518s 2025-08-13T20:08:23.566840831+00:00 stdout F [INFO] 10.217.0.74:39656 - 15795 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001910445s 2025-08-13T20:08:23.604865841+00:00 stdout F [INFO] 10.217.0.74:45080 - 14184 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000976128s 2025-08-13T20:08:23.604865841+00:00 stdout F [INFO] 10.217.0.74:52623 - 24062 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001460812s 2025-08-13T20:08:23.606214639+00:00 stdout F [INFO] 10.217.0.74:34918 - 62766 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070184s 2025-08-13T20:08:23.607222188+00:00 stdout F [INFO] 10.217.0.74:55300 - 45356 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002002047s 2025-08-13T20:08:23.617876724+00:00 stdout F [INFO] 10.217.0.74:33261 - 28747 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000575036s 2025-08-13T20:08:23.618124261+00:00 stdout F [INFO] 10.217.0.74:47962 - 437 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000735731s 2025-08-13T20:08:23.649416848+00:00 stdout F [INFO] 10.217.0.74:46369 - 10987 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00175553s 2025-08-13T20:08:23.649961034+00:00 stdout F [INFO] 10.217.0.74:42929 - 63396 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002341427s 2025-08-13T20:08:23.664390887+00:00 stdout F [INFO] 10.217.0.74:59747 - 35086 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000679749s 2025-08-13T20:08:23.665021106+00:00 stdout F [INFO] 10.217.0.74:47958 - 54146 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000965598s 2025-08-13T20:08:23.665225681+00:00 stdout F [INFO] 10.217.0.74:60738 - 28876 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001199405s 2025-08-13T20:08:23.665372356+00:00 stdout F [INFO] 10.217.0.74:43542 - 36132 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000755901s 2025-08-13T20:08:23.673935971+00:00 stdout F [INFO] 10.217.0.74:49465 - 19319 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069495s 2025-08-13T20:08:23.673935971+00:00 stdout F [INFO] 10.217.0.74:38700 - 37429 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587637s 2025-08-13T20:08:23.711639802+00:00 stdout F [INFO] 10.217.0.74:45354 - 1150 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000710531s 2025-08-13T20:08:23.712135206+00:00 stdout F [INFO] 10.217.0.74:55757 - 61526 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001368139s 2025-08-13T20:08:23.724351257+00:00 stdout F [INFO] 10.217.0.74:43712 - 32007 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000786572s 2025-08-13T20:08:23.725416347+00:00 stdout F [INFO] 10.217.0.74:60368 - 40387 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001848233s 2025-08-13T20:08:23.790917425+00:00 stdout F [INFO] 10.217.0.74:35075 - 58159 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00105166s 2025-08-13T20:08:23.790917425+00:00 stdout F [INFO] 10.217.0.74:41707 - 58369 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000997008s 2025-08-13T20:08:23.790977527+00:00 stdout F [INFO] 10.217.0.74:42168 - 54228 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005657392s 2025-08-13T20:08:23.790993397+00:00 stdout F [INFO] 10.217.0.74:57634 - 13866 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005406595s 2025-08-13T20:08:23.861389176+00:00 stdout F [INFO] 10.217.0.74:53505 - 55365 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004804728s 2025-08-13T20:08:23.862470117+00:00 stdout F [INFO] 10.217.0.74:49058 - 22335 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005747865s 2025-08-13T20:08:23.865482123+00:00 stdout F [INFO] 10.217.0.74:44041 - 4083 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003981994s 2025-08-13T20:08:23.865695549+00:00 stdout F [INFO] 10.217.0.74:38165 - 44456 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004146199s 2025-08-13T20:08:23.918755860+00:00 stdout F [INFO] 10.217.0.74:37762 - 27467 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006508776s 2025-08-13T20:08:23.919418649+00:00 stdout F [INFO] 10.217.0.74:38545 - 48889 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006821736s 2025-08-13T20:08:23.930539608+00:00 stdout F [INFO] 10.217.0.74:37549 - 38725 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000463294s 2025-08-13T20:08:23.941953185+00:00 stdout F [INFO] 10.217.0.74:40745 - 59090 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.012452197s 2025-08-13T20:08:23.993117352+00:00 stdout F [INFO] 10.217.0.74:45014 - 24003 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002614755s 2025-08-13T20:08:23.993117352+00:00 stdout F [INFO] 10.217.0.74:59484 - 61983 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002975335s 2025-08-13T20:08:24.024252145+00:00 stdout F [INFO] 10.217.0.74:48887 - 8971 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000939367s 2025-08-13T20:08:24.024723789+00:00 stdout F [INFO] 10.217.0.74:42036 - 20136 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001184584s 2025-08-13T20:08:24.062560143+00:00 stdout F [INFO] 10.217.0.74:52891 - 55952 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00069057s 2025-08-13T20:08:24.062560143+00:00 stdout F [INFO] 10.217.0.74:55499 - 34115 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000744161s 2025-08-13T20:08:24.079944652+00:00 stdout F [INFO] 10.217.0.74:49890 - 38847 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000767392s 2025-08-13T20:08:24.079944652+00:00 stdout F [INFO] 10.217.0.74:60726 - 27498 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001103431s 2025-08-13T20:08:24.110453807+00:00 stdout F [INFO] 10.217.0.74:39015 - 8181 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000881105s 2025-08-13T20:08:24.110453807+00:00 stdout F [INFO] 10.217.0.74:46491 - 49334 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000803433s 2025-08-13T20:08:24.131459409+00:00 stdout F [INFO] 10.217.0.74:60382 - 27381 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000645609s 2025-08-13T20:08:24.134677111+00:00 stdout F [INFO] 10.217.0.74:39985 - 63434 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002143151s 2025-08-13T20:08:24.141847427+00:00 stdout F [INFO] 10.217.0.74:39208 - 986 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000837914s 2025-08-13T20:08:24.141847427+00:00 stdout F [INFO] 10.217.0.74:51325 - 7814 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001170333s 2025-08-13T20:08:24.170697164+00:00 stdout F [INFO] 10.217.0.74:33143 - 6862 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000834134s 2025-08-13T20:08:24.170986162+00:00 stdout F [INFO] 10.217.0.74:42045 - 3662 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103698s 2025-08-13T20:08:24.213495631+00:00 stdout F [INFO] 10.217.0.74:48666 - 48400 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000563326s 2025-08-13T20:08:24.213495631+00:00 stdout F [INFO] 10.217.0.74:46676 - 6547 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000519965s 2025-08-13T20:08:24.213495631+00:00 stdout F [INFO] 10.217.0.74:35811 - 22073 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000716421s 2025-08-13T20:08:24.213495631+00:00 stdout F [INFO] 10.217.0.74:36893 - 38752 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000668669s 2025-08-13T20:08:24.234172204+00:00 stdout F [INFO] 10.217.0.74:59916 - 9133 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000994929s 2025-08-13T20:08:24.234172204+00:00 stdout F [INFO] 10.217.0.74:53675 - 62828 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001190384s 2025-08-13T20:08:24.244733557+00:00 stdout F [INFO] 10.217.0.74:45956 - 55947 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104376s 2025-08-13T20:08:24.249329228+00:00 stdout F [INFO] 10.217.0.74:32801 - 25347 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00071749s 2025-08-13T20:08:24.264423121+00:00 stdout F [INFO] 10.217.0.74:38307 - 32343 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000897656s 2025-08-13T20:08:24.264423121+00:00 stdout F [INFO] 10.217.0.74:33335 - 51186 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000887916s 2025-08-13T20:08:24.271386791+00:00 stdout F [INFO] 10.217.0.74:43747 - 31683 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.002332827s 2025-08-13T20:08:24.272200764+00:00 stdout F [INFO] 10.217.0.74:50597 - 61774 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000667099s 2025-08-13T20:08:24.291488567+00:00 stdout F [INFO] 10.217.0.74:38457 - 23497 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001091461s 2025-08-13T20:08:24.292299590+00:00 stdout F [INFO] 10.217.0.74:60003 - 46978 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000740811s 2025-08-13T20:08:24.300860186+00:00 stdout F [INFO] 10.217.0.74:33575 - 3747 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000834954s 2025-08-13T20:08:24.301911626+00:00 stdout F [INFO] 10.217.0.74:49103 - 44491 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587857s 2025-08-13T20:08:24.327883021+00:00 stdout F [INFO] 10.217.0.74:43019 - 60503 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000859995s 2025-08-13T20:08:24.328021395+00:00 stdout F [INFO] 10.217.0.74:44241 - 44160 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001261347s 2025-08-13T20:08:24.345248108+00:00 stdout F [INFO] 10.217.0.74:47062 - 48859 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000619378s 2025-08-13T20:08:24.345599408+00:00 stdout F [INFO] 10.217.0.74:47666 - 58634 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000612027s 2025-08-13T20:08:24.366537759+00:00 stdout F [INFO] 10.217.0.74:47016 - 59253 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001480562s 2025-08-13T20:08:24.366744295+00:00 stdout F [INFO] 10.217.0.74:46246 - 30705 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001438961s 2025-08-13T20:08:24.392236476+00:00 stdout F [INFO] 10.217.0.74:60212 - 51027 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008598517s 2025-08-13T20:08:24.392592016+00:00 stdout F [INFO] 10.217.0.74:58066 - 61218 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.008901875s 2025-08-13T20:08:24.410356635+00:00 stdout F [INFO] 10.217.0.74:40348 - 18397 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000849115s 2025-08-13T20:08:24.410696465+00:00 stdout F [INFO] 10.217.0.74:55095 - 42434 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001425661s 2025-08-13T20:08:24.426346164+00:00 stdout F [INFO] 10.217.0.74:58429 - 37747 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002262725s 2025-08-13T20:08:24.426429916+00:00 stdout F [INFO] 10.217.0.74:47306 - 45293 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002616975s 2025-08-13T20:08:24.450868817+00:00 stdout F [INFO] 10.217.0.74:55881 - 27796 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000377991s 2025-08-13T20:08:24.451188236+00:00 stdout F [INFO] 10.217.0.74:45482 - 49898 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000913146s 2025-08-13T20:08:24.452458552+00:00 stdout F [INFO] 10.217.0.74:33213 - 47965 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000542426s 2025-08-13T20:08:24.453206924+00:00 stdout F [INFO] 10.217.0.74:46715 - 48650 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000621578s 2025-08-13T20:08:24.508503639+00:00 stdout F [INFO] 10.217.0.74:35011 - 31718 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001217464s 2025-08-13T20:08:24.508854299+00:00 stdout F [INFO] 10.217.0.74:52715 - 20222 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768762s 2025-08-13T20:08:24.520193884+00:00 stdout F [INFO] 10.217.0.74:51498 - 7713 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000939187s 2025-08-13T20:08:24.520439531+00:00 stdout F [INFO] 10.217.0.74:38612 - 10532 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001119952s 2025-08-13T20:08:24.553056217+00:00 stdout F [INFO] 10.217.0.74:39589 - 34577 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00073006s 2025-08-13T20:08:24.553266833+00:00 stdout F [INFO] 10.217.0.74:50887 - 44553 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000965228s 2025-08-13T20:08:24.578471775+00:00 stdout F [INFO] 10.217.0.74:49744 - 44397 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000530205s 2025-08-13T20:08:24.578997010+00:00 stdout F [INFO] 10.217.0.74:60908 - 2189 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000731991s 2025-08-13T20:08:24.617388391+00:00 stdout F [INFO] 10.217.0.74:48084 - 23967 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000972908s 2025-08-13T20:08:24.617695760+00:00 stdout F [INFO] 10.217.0.74:36137 - 28536 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000564016s 2025-08-13T20:08:24.618158183+00:00 stdout F [INFO] 10.217.0.74:43087 - 46692 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001262616s 2025-08-13T20:08:24.618211615+00:00 stdout F [INFO] 10.217.0.74:52408 - 20765 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00138048s 2025-08-13T20:08:24.645443035+00:00 stdout F [INFO] 10.217.0.74:38609 - 47142 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001852633s 2025-08-13T20:08:24.646116835+00:00 stdout F [INFO] 10.217.0.74:41031 - 52842 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002800981s 2025-08-13T20:08:24.646466725+00:00 stdout F [INFO] 10.217.0.74:59849 - 60627 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000685259s 2025-08-13T20:08:24.646518566+00:00 stdout F [INFO] 10.217.0.74:35121 - 7797 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000626798s 2025-08-13T20:08:24.682149238+00:00 stdout F [INFO] 10.217.0.74:34031 - 10475 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001158263s 2025-08-13T20:08:24.682391585+00:00 stdout F [INFO] 10.217.0.74:35405 - 13007 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001918105s 2025-08-13T20:08:24.688375276+00:00 stdout F [INFO] 10.217.0.74:42975 - 3941 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000588656s 2025-08-13T20:08:24.688375276+00:00 stdout F [INFO] 10.217.0.74:40270 - 27750 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000626067s 2025-08-13T20:08:24.719747866+00:00 stdout F [INFO] 10.217.0.74:54688 - 30674 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002935224s 2025-08-13T20:08:24.719747866+00:00 stdout F [INFO] 10.217.0.74:47077 - 35095 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003633474s 2025-08-13T20:08:24.722207566+00:00 stdout F [INFO] 10.217.0.74:39440 - 61086 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002880883s 2025-08-13T20:08:24.722207566+00:00 stdout F [INFO] 10.217.0.74:47023 - 61827 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003258813s 2025-08-13T20:08:24.755713397+00:00 stdout F [INFO] 10.217.0.74:35434 - 20749 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000781653s 2025-08-13T20:08:24.756109458+00:00 stdout F [INFO] 10.217.0.74:53059 - 9205 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001007379s 2025-08-13T20:08:24.784571244+00:00 stdout F [INFO] 10.217.0.74:36555 - 53245 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000628068s 2025-08-13T20:08:24.785053848+00:00 stdout F [INFO] 10.217.0.74:56648 - 41834 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001459212s 2025-08-13T20:08:24.805676099+00:00 stdout F [INFO] 10.217.0.74:54450 - 39176 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001632277s 2025-08-13T20:08:24.805729691+00:00 stdout F [INFO] 10.217.0.74:53352 - 1136 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001506933s 2025-08-13T20:08:24.851120342+00:00 stdout F [INFO] 10.217.0.74:38902 - 48894 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002668936s 2025-08-13T20:08:24.852221004+00:00 stdout F [INFO] 10.217.0.74:43404 - 30730 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003059038s 2025-08-13T20:08:24.873870125+00:00 stdout F [INFO] 10.217.0.74:42313 - 6315 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000823914s 2025-08-13T20:08:24.873870125+00:00 stdout F [INFO] 10.217.0.74:44164 - 12605 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000872185s 2025-08-13T20:08:24.909741623+00:00 stdout F [INFO] 10.217.0.74:42936 - 10687 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00103921s 2025-08-13T20:08:24.909741623+00:00 stdout F [INFO] 10.217.0.74:36100 - 37711 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000985678s 2025-08-13T20:08:24.909741623+00:00 stdout F [INFO] 10.217.0.74:56708 - 58003 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768852s 2025-08-13T20:08:24.909741623+00:00 stdout F [INFO] 10.217.0.74:55787 - 15515 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001686759s 2025-08-13T20:08:24.929399557+00:00 stdout F [INFO] 10.217.0.74:36348 - 27003 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000728081s 2025-08-13T20:08:24.929612103+00:00 stdout F [INFO] 10.217.0.74:58709 - 40669 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001330818s 2025-08-13T20:08:24.930241351+00:00 stdout F [INFO] 10.217.0.74:43717 - 42072 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001431632s 2025-08-13T20:08:24.930980552+00:00 stdout F [INFO] 10.217.0.74:50271 - 30858 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001860083s 2025-08-13T20:08:24.972041189+00:00 stdout F [INFO] 10.217.0.74:36524 - 45953 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004283983s 2025-08-13T20:08:24.972041189+00:00 stdout F [INFO] 10.217.0.74:46433 - 44305 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005543429s 2025-08-13T20:08:24.978928047+00:00 stdout F [INFO] 10.217.0.74:39950 - 50856 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000684569s 2025-08-13T20:08:24.978928047+00:00 stdout F [INFO] 10.217.0.74:51766 - 25395 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000776972s 2025-08-13T20:08:25.004847210+00:00 stdout F [INFO] 10.217.0.74:34690 - 7086 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003429738s 2025-08-13T20:08:25.004847210+00:00 stdout F [INFO] 10.217.0.74:43571 - 61951 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003356256s 2025-08-13T20:08:25.004847210+00:00 stdout F [INFO] 10.217.0.74:56828 - 58461 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007452054s 2025-08-13T20:08:25.004847210+00:00 stdout F [INFO] 10.217.0.74:46054 - 64977 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007036922s 2025-08-13T20:08:25.036035054+00:00 stdout F [INFO] 10.217.0.74:53095 - 17901 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001446761s 2025-08-13T20:08:25.036035054+00:00 stdout F [INFO] 10.217.0.74:57138 - 43105 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001692948s 2025-08-13T20:08:25.052948038+00:00 stdout F [INFO] 10.217.0.74:43926 - 25694 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002316425s 2025-08-13T20:08:25.052997559+00:00 stdout F [INFO] 10.217.0.74:53532 - 45713 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001506383s 2025-08-13T20:08:25.096404534+00:00 stdout F [INFO] 10.217.0.74:45404 - 13830 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002596194s 2025-08-13T20:08:25.096613950+00:00 stdout F [INFO] 10.217.0.74:51308 - 24763 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000542626s 2025-08-13T20:08:25.111347213+00:00 stdout F [INFO] 10.217.0.74:40585 - 22852 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000808484s 2025-08-13T20:08:25.111431875+00:00 stdout F [INFO] 10.217.0.74:52176 - 59269 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000953717s 2025-08-13T20:08:25.142186147+00:00 stdout F [INFO] 10.217.0.74:49925 - 55227 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000873406s 2025-08-13T20:08:25.142406793+00:00 stdout F [INFO] 10.217.0.74:58393 - 32978 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001325018s 2025-08-13T20:08:25.154245632+00:00 stdout F [INFO] 10.217.0.74:33747 - 17097 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000816724s 2025-08-13T20:08:25.154245632+00:00 stdout F [INFO] 10.217.0.74:57980 - 13317 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104587s 2025-08-13T20:08:25.157223058+00:00 stdout F [INFO] 10.217.0.74:52234 - 27248 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000541706s 2025-08-13T20:08:25.157223058+00:00 stdout F [INFO] 10.217.0.74:35217 - 42622 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000668129s 2025-08-13T20:08:25.174913265+00:00 stdout F [INFO] 10.217.0.74:54396 - 14624 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000631218s 2025-08-13T20:08:25.174913265+00:00 stdout F [INFO] 10.217.0.74:43944 - 47763 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001561625s 2025-08-13T20:08:25.199658625+00:00 stdout F [INFO] 10.217.0.74:59881 - 6776 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068003s 2025-08-13T20:08:25.199985764+00:00 stdout F [INFO] 10.217.0.74:51326 - 7569 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001062151s 2025-08-13T20:08:25.227633737+00:00 stdout F [INFO] 10.217.0.74:46695 - 35811 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000990998s 2025-08-13T20:08:25.228805970+00:00 stdout F [INFO] 10.217.0.74:36601 - 55298 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000490834s 2025-08-13T20:08:25.239563699+00:00 stdout F [INFO] 10.217.0.74:57169 - 22926 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000628898s 2025-08-13T20:08:25.239563699+00:00 stdout F [INFO] 10.217.0.74:45489 - 37868 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000734892s 2025-08-13T20:08:25.295034379+00:00 stdout F [INFO] 10.217.0.74:46388 - 48751 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001222805s 2025-08-13T20:08:25.295085910+00:00 stdout F [INFO] 10.217.0.74:51846 - 12374 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001515264s 2025-08-13T20:08:41.635823402+00:00 stdout F [INFO] 10.217.0.62:34379 - 7559 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003923512s 2025-08-13T20:08:41.636428109+00:00 stdout F [INFO] 10.217.0.62:45312 - 22703 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004136568s 2025-08-13T20:08:42.402994408+00:00 stdout F [INFO] 10.217.0.19:40701 - 2139 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003220132s 2025-08-13T20:08:42.403236254+00:00 stdout F [INFO] 10.217.0.19:39684 - 2175 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003694346s 2025-08-13T20:08:56.374659197+00:00 stdout F [INFO] 10.217.0.64:34472 - 48135 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003421968s 2025-08-13T20:08:56.374745359+00:00 stdout F [INFO] 10.217.0.64:54476 - 13919 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002192603s 2025-08-13T20:08:56.374840782+00:00 stdout F [INFO] 10.217.0.64:40326 - 5569 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004057756s 2025-08-13T20:08:56.375154361+00:00 stdout F [INFO] 10.217.0.64:49383 - 27450 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004855109s 2025-08-13T20:09:02.378239024+00:00 stdout F [INFO] 10.217.0.45:43397 - 49512 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. 
udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002536683s 2025-08-13T20:09:02.378239024+00:00 stdout F [INFO] 10.217.0.45:56823 - 15158 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002466611s 2025-08-13T20:09:11.725896608+00:00 stdout F [INFO] 10.217.0.62:34550 - 12666 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.006674761s 2025-08-13T20:09:11.725896608+00:00 stdout F [INFO] 10.217.0.62:39907 - 38685 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00801045s 2025-08-13T20:09:22.862762824+00:00 stdout F [INFO] 10.217.0.8:39156 - 25361 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002846862s 2025-08-13T20:09:22.862953529+00:00 stdout F [INFO] 10.217.0.8:53927 - 1632 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002753809s 2025-08-13T20:09:22.865524273+00:00 stdout F [INFO] 10.217.0.8:54729 - 65332 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001623787s 2025-08-13T20:09:22.865728939+00:00 stdout F [INFO] 10.217.0.8:49636 - 52910 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00175812s 2025-08-13T20:09:26.078490881+00:00 stdout F [INFO] 10.217.0.19:48977 - 50456 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001002199s 2025-08-13T20:09:26.078490881+00:00 stdout F [INFO] 10.217.0.19:59578 - 61462 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00106775s 2025-08-13T20:09:33.484850278+00:00 stdout F [INFO] 10.217.0.62:42372 - 190 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00139979s 2025-08-13T20:09:33.485393903+00:00 stdout F [INFO] 10.217.0.62:34167 - 57309 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00244067s 2025-08-13T20:09:33.523096954+00:00 stdout F [INFO] 10.217.0.62:49737 - 32251 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000720071s 2025-08-13T20:09:33.523096954+00:00 stdout F [INFO] 10.217.0.62:40671 - 52652 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000871775s 2025-08-13T20:09:35.187928017+00:00 stdout F [INFO] 10.217.0.19:33301 - 21434 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001188724s 2025-08-13T20:09:35.188043610+00:00 stdout F [INFO] 10.217.0.19:52934 - 11233 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001575785s 2025-08-13T20:09:38.306477737+00:00 stdout F [INFO] 10.217.0.62:40952 - 6986 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001327978s 2025-08-13T20:09:38.306477737+00:00 stdout F [INFO] 10.217.0.62:45564 - 26341 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002726588s 2025-08-13T20:09:41.638121368+00:00 stdout F [INFO] 10.217.0.62:56173 - 24407 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002718738s 2025-08-13T20:09:41.638416066+00:00 stdout F [INFO] 10.217.0.62:47549 - 44536 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003351996s 2025-08-13T20:09:42.030655352+00:00 stdout F [INFO] 10.217.0.19:56636 - 61643 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001563925s 2025-08-13T20:09:42.030851607+00:00 stdout F [INFO] 10.217.0.19:51975 - 50744 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002233574s 2025-08-13T20:09:42.183194045+00:00 stdout F [INFO] 10.217.0.19:33628 - 49975 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003283644s 2025-08-13T20:09:42.183194045+00:00 stdout F [INFO] 10.217.0.19:45670 - 3055 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003702877s 2025-08-13T20:09:42.529990608+00:00 stdout F [INFO] 10.217.0.62:48401 - 656 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00071058s 2025-08-13T20:09:42.529990608+00:00 stdout F [INFO] 10.217.0.62:58670 - 49481 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000742051s 2025-08-13T20:09:45.431538738+00:00 stdout F [INFO] 10.217.0.62:43098 - 58085 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00107588s 2025-08-13T20:09:45.431538738+00:00 stdout F [INFO] 10.217.0.62:59851 - 55795 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000526385s 2025-08-13T20:09:54.586441237+00:00 stdout F [INFO] 10.217.0.19:47711 - 21321 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001843932s 2025-08-13T20:09:54.586441237+00:00 stdout F [INFO] 10.217.0.19:44898 - 45604 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002623275s 2025-08-13T20:09:56.371181378+00:00 stdout F [INFO] 10.217.0.64:41542 - 41108 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001686828s 2025-08-13T20:09:56.371181378+00:00 stdout F [INFO] 10.217.0.64:48576 - 20336 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00139462s 2025-08-13T20:09:56.372300050+00:00 stdout F [INFO] 10.217.0.64:38485 - 9093 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000542365s 2025-08-13T20:09:56.374142783+00:00 stdout F [INFO] 10.217.0.64:46788 - 13302 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001550825s 2025-08-13T20:09:57.900691400+00:00 stdout F [INFO] 10.217.0.19:53614 - 57161 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001425781s 2025-08-13T20:09:57.901094752+00:00 stdout F [INFO] 10.217.0.19:53845 - 1062 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000942357s 2025-08-13T20:09:57.983843465+00:00 stdout F [INFO] 10.217.0.19:39361 - 11239 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002185313s 2025-08-13T20:09:57.983843465+00:00 stdout F [INFO] 10.217.0.19:41001 - 62298 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002361678s 2025-08-13T20:10:01.056405577+00:00 stdout F [INFO] 10.217.0.19:39388 - 44237 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002664086s 2025-08-13T20:10:01.056405577+00:00 stdout F [INFO] 10.217.0.19:33615 - 53295 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002689067s 2025-08-13T20:10:01.079333115+00:00 stdout F [INFO] 10.217.0.19:50682 - 20587 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000795232s 2025-08-13T20:10:01.081832366+00:00 stdout F [INFO] 10.217.0.19:44383 - 40450 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000861545s 2025-08-13T20:10:02.444873055+00:00 stdout F [INFO] 10.217.0.45:44013 - 11251 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000983468s 2025-08-13T20:10:02.453875573+00:00 stdout F [INFO] 10.217.0.45:37306 - 39606 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001917425s 2025-08-13T20:10:05.186876261+00:00 stdout F [INFO] 10.217.0.19:58118 - 11638 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002586544s 2025-08-13T20:10:05.187021395+00:00 stdout F [INFO] 10.217.0.19:38219 - 21719 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002535682s 2025-08-13T20:10:08.345010648+00:00 stdout F [INFO] 10.217.0.19:51265 - 1330 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002312606s 2025-08-13T20:10:08.345010648+00:00 stdout F [INFO] 10.217.0.19:59348 - 19091 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002631336s 2025-08-13T20:10:08.355087687+00:00 stdout F [INFO] 10.217.0.19:42792 - 61879 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002574304s 2025-08-13T20:10:08.355139488+00:00 stdout F [INFO] 10.217.0.19:39369 - 9267 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002482411s 2025-08-13T20:10:08.411554935+00:00 stdout F [INFO] 10.217.0.19:35601 - 58707 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001258786s 2025-08-13T20:10:08.413167012+00:00 stdout F [INFO] 10.217.0.19:56524 - 65241 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002158461s 2025-08-13T20:10:08.548968705+00:00 stdout F [INFO] 10.217.0.19:45966 - 19689 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001272617s 2025-08-13T20:10:08.548968705+00:00 stdout F [INFO] 10.217.0.19:48885 - 11127 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001335808s 2025-08-13T20:10:11.628730835+00:00 stdout F [INFO] 10.217.0.62:36584 - 46375 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001654868s 2025-08-13T20:10:11.633039398+00:00 stdout F [INFO] 10.217.0.62:41877 - 29105 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000915227s 2025-08-13T20:10:22.867019153+00:00 stdout F [INFO] 10.217.0.8:49904 - 64741 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003766368s 2025-08-13T20:10:22.867019153+00:00 stdout F [INFO] 10.217.0.8:41201 - 45742 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004181499s 2025-08-13T20:10:22.867959440+00:00 stdout F [INFO] 10.217.0.8:35597 - 48884 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001018489s 2025-08-13T20:10:22.868719412+00:00 stdout F [INFO] 10.217.0.8:47410 - 20918 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000770882s 2025-08-13T20:10:30.808858992+00:00 stdout F [INFO] 10.217.0.73:39897 - 44453 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001530163s 2025-08-13T20:10:30.808951815+00:00 stdout F [INFO] 10.217.0.73:50070 - 51054 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00174087s 2025-08-13T20:10:33.362905250+00:00 stdout F [INFO] 10.217.0.19:32790 - 5866 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004795368s 2025-08-13T20:10:33.362905250+00:00 stdout F [INFO] 10.217.0.19:49534 - 57961 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004693464s 2025-08-13T20:10:33.373874594+00:00 stdout F [INFO] 10.217.0.19:37942 - 8212 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000595517s 2025-08-13T20:10:33.374417290+00:00 stdout F [INFO] 10.217.0.19:56916 - 12238 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000661719s 2025-08-13T20:10:35.199277209+00:00 stdout F [INFO] 10.217.0.19:41870 - 23972 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001476932s 2025-08-13T20:10:35.199277209+00:00 stdout F [INFO] 10.217.0.19:58212 - 53423 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001832952s 2025-08-13T20:10:35.697241647+00:00 stdout F [INFO] 10.217.0.19:33071 - 20323 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00069523s 2025-08-13T20:10:35.697303468+00:00 stdout F [INFO] 10.217.0.19:42008 - 33508 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000793093s 2025-08-13T20:10:41.622326063+00:00 stdout F [INFO] 10.217.0.62:34619 - 105 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001195944s 2025-08-13T20:10:41.622417725+00:00 stdout F [INFO] 10.217.0.62:43729 - 23355 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001137973s 2025-08-13T20:10:56.367748695+00:00 stdout F [INFO] 10.217.0.64:35428 - 7758 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002594974s 2025-08-13T20:10:56.367748695+00:00 stdout F [INFO] 10.217.0.64:53534 - 35151 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003622834s 2025-08-13T20:10:56.367748695+00:00 stdout F [INFO] 10.217.0.64:60247 - 46067 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003174031s 2025-08-13T20:10:56.367748695+00:00 stdout F [INFO] 10.217.0.64:49932 - 39403 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003596073s 2025-08-13T20:11:02.520033866+00:00 stdout F [INFO] 10.217.0.45:39664 - 38219 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001183744s 2025-08-13T20:11:02.520033866+00:00 stdout F [INFO] 10.217.0.45:60311 - 9110 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001162163s 2025-08-13T20:11:03.399531671+00:00 stdout F [INFO] 10.217.0.87:37356 - 34781 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001512653s 2025-08-13T20:11:03.399531671+00:00 stdout F [INFO] 10.217.0.87:54183 - 38414 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002224574s 2025-08-13T20:11:03.399949703+00:00 stdout F [INFO] 10.217.0.87:35523 - 20655 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000555516s 2025-08-13T20:11:03.399949703+00:00 stdout F [INFO] 10.217.0.87:49559 - 28694 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001214345s 2025-08-13T20:11:03.529312752+00:00 stdout F [INFO] 10.217.0.87:35061 - 433 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000936687s 2025-08-13T20:11:03.529312752+00:00 stdout F [INFO] 10.217.0.87:40965 - 48513 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000667779s 2025-08-13T20:11:03.529312752+00:00 stdout F [INFO] 10.217.0.87:54542 - 34394 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000680099s 2025-08-13T20:11:03.529312752+00:00 stdout F [INFO] 10.217.0.87:38160 - 26586 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000648358s 2025-08-13T20:11:03.647199392+00:00 stdout F [INFO] 10.217.0.87:41763 - 22360 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001318338s 2025-08-13T20:11:03.647199392+00:00 stdout F [INFO] 10.217.0.87:47208 - 63365 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000723751s 2025-08-13T20:11:03.647199392+00:00 stdout F [INFO] 10.217.0.87:53434 - 45248 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070464s 2025-08-13T20:11:03.649033824+00:00 stdout F [INFO] 10.217.0.87:53713 - 18434 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002473161s 2025-08-13T20:11:03.718848076+00:00 stdout F [INFO] 10.217.0.87:41691 - 15051 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002671346s 2025-08-13T20:11:03.718848076+00:00 stdout F [INFO] 10.217.0.87:46056 - 9861 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00103799s 2025-08-13T20:11:03.722611354+00:00 stdout F [INFO] 10.217.0.87:39086 - 34308 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000556906s 2025-08-13T20:11:03.722611354+00:00 stdout F [INFO] 10.217.0.87:42891 - 6546 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000504144s 2025-08-13T20:11:03.767221563+00:00 stdout F [INFO] 10.217.0.87:58197 - 30208 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00385624s 2025-08-13T20:11:03.767221563+00:00 stdout F [INFO] 10.217.0.87:39728 - 49827 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003767278s 2025-08-13T20:11:03.817666909+00:00 stdout F [INFO] 10.217.0.87:35172 - 55283 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001101361s 2025-08-13T20:11:03.817995749+00:00 stdout F [INFO] 10.217.0.87:39261 - 6732 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000712641s 2025-08-13T20:11:03.829900000+00:00 stdout F [INFO] 10.217.0.87:53119 - 44401 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001572155s 2025-08-13T20:11:03.829900000+00:00 stdout F [INFO] 10.217.0.87:56071 - 13263 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001206034s 2025-08-13T20:11:03.829900000+00:00 stdout F [INFO] 10.217.0.87:53447 - 33479 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001892754s 2025-08-13T20:11:03.829900000+00:00 stdout F [INFO] 10.217.0.87:53190 - 29850 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00175118s 2025-08-13T20:11:03.847942587+00:00 stdout F [INFO] 10.217.0.87:56508 - 38072 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004455958s 2025-08-13T20:11:03.848072911+00:00 stdout F [INFO] 10.217.0.87:33109 - 11968 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004842229s 2025-08-13T20:11:03.897437386+00:00 stdout F [INFO] 10.217.0.87:53430 - 55335 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000998288s 2025-08-13T20:11:03.897437386+00:00 stdout F [INFO] 10.217.0.87:32838 - 14045 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001481643s 2025-08-13T20:11:03.900510014+00:00 stdout F [INFO] 10.217.0.87:43972 - 23286 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000949927s 2025-08-13T20:11:03.900756701+00:00 stdout F [INFO] 10.217.0.87:49838 - 38792 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001019979s 2025-08-13T20:11:03.902765709+00:00 stdout F [INFO] 10.217.0.87:47774 - 42440 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000679239s 2025-08-13T20:11:03.904858479+00:00 stdout F [INFO] 10.217.0.87:57291 - 56285 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001835753s 2025-08-13T20:11:03.913620140+00:00 stdout F [INFO] 10.217.0.87:46856 - 11308 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000987408s 2025-08-13T20:11:03.914091244+00:00 stdout F [INFO] 10.217.0.87:55527 - 2267 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001479192s 2025-08-13T20:11:03.954577195+00:00 stdout F [INFO] 10.217.0.87:51392 - 21341 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000807283s 2025-08-13T20:11:03.954937355+00:00 stdout F [INFO] 10.217.0.87:41890 - 15684 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001436721s 2025-08-13T20:11:03.977215704+00:00 stdout F [INFO] 10.217.0.87:47581 - 1434 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104783s 2025-08-13T20:11:03.977509832+00:00 stdout F [INFO] 10.217.0.87:54607 - 53936 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001454602s 2025-08-13T20:11:04.023222913+00:00 stdout F [INFO] 10.217.0.87:54496 - 16868 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003156581s 2025-08-13T20:11:04.023686356+00:00 stdout F [INFO] 10.217.0.87:42267 - 51763 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003255144s 2025-08-13T20:11:04.039212591+00:00 stdout F [INFO] 10.217.0.87:53493 - 33994 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001088152s 2025-08-13T20:11:04.041175287+00:00 stdout F [INFO] 10.217.0.87:40575 - 33957 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003272184s 2025-08-13T20:11:04.041577319+00:00 stdout F [INFO] 10.217.0.87:50647 - 26691 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0010365s 2025-08-13T20:11:04.045511112+00:00 stdout F [INFO] 10.217.0.87:54103 - 16592 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000857164s 2025-08-13T20:11:04.102489046+00:00 stdout F [INFO] 10.217.0.87:55724 - 48220 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006211418s 2025-08-13T20:11:04.133961708+00:00 stdout F [INFO] 10.217.0.87:43240 - 60487 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001790991s 2025-08-13T20:11:04.142111361+00:00 stdout F [INFO] 10.217.0.87:41388 - 46250 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003411068s 2025-08-13T20:11:04.142305827+00:00 stdout F [INFO] 10.217.0.87:36215 - 4218 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003739277s 2025-08-13T20:11:04.202104702+00:00 stdout F [INFO] 10.217.0.87:55842 - 26479 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001143423s 2025-08-13T20:11:04.202351719+00:00 stdout F [INFO] 10.217.0.87:34872 - 6296 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000963698s 2025-08-13T20:11:04.209565155+00:00 stdout F [INFO] 10.217.0.87:36576 - 31736 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000777092s 2025-08-13T20:11:04.209661408+00:00 stdout F [INFO] 10.217.0.87:52223 - 46340 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000739101s 2025-08-13T20:11:04.267950309+00:00 stdout F [INFO] 10.217.0.87:47717 - 18265 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001877574s 2025-08-13T20:11:04.269479953+00:00 stdout F [INFO] 10.217.0.87:49071 - 3191 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001551425s 2025-08-13T20:11:04.271193682+00:00 stdout F [INFO] 10.217.0.87:39213 - 49156 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000561976s 2025-08-13T20:11:04.271193682+00:00 stdout F [INFO] 10.217.0.87:45159 - 61063 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000781293s 2025-08-13T20:11:04.334709023+00:00 stdout F [INFO] 10.217.0.87:42608 - 45702 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000631568s 2025-08-13T20:11:04.336961018+00:00 stdout F [INFO] 10.217.0.87:40086 - 23695 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000568236s 2025-08-13T20:11:04.337177644+00:00 stdout F [INFO] 10.217.0.87:44964 - 114 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000677759s 2025-08-13T20:11:04.337395390+00:00 stdout F [INFO] 10.217.0.87:46041 - 32750 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003591873s 2025-08-13T20:11:04.408524350+00:00 stdout F [INFO] 10.217.0.87:42635 - 50007 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002949285s 2025-08-13T20:11:04.408524350+00:00 stdout F [INFO] 10.217.0.87:41785 - 28793 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003545352s 2025-08-13T20:11:04.408524350+00:00 stdout F [INFO] 10.217.0.87:47944 - 7692 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000543566s 2025-08-13T20:11:04.408524350+00:00 stdout F [INFO] 10.217.0.87:37949 - 25447 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000756501s 2025-08-13T20:11:04.463665041+00:00 stdout F [INFO] 10.217.0.87:36079 - 39964 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001139033s 2025-08-13T20:11:04.463665041+00:00 stdout F [INFO] 10.217.0.87:43169 - 50861 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001230846s 2025-08-13T20:11:04.532024350+00:00 stdout F [INFO] 10.217.0.87:48399 - 60449 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002498091s 2025-08-13T20:11:04.532024350+00:00 stdout F [INFO] 10.217.0.87:37499 - 27113 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001824773s 2025-08-13T20:11:04.532024350+00:00 stdout F [INFO] 10.217.0.87:58069 - 60634 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004967292s 2025-08-13T20:11:04.538092615+00:00 stdout F [INFO] 10.217.0.87:49277 - 25371 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007577988s 2025-08-13T20:11:04.569854585+00:00 stdout F [INFO] 10.217.0.87:43743 - 36368 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001295347s 2025-08-13T20:11:04.569854585+00:00 stdout F [INFO] 10.217.0.87:39846 - 32417 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105676s 2025-08-13T20:11:04.618958633+00:00 stdout F [INFO] 10.217.0.87:50590 - 58487 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005965221s 2025-08-13T20:11:04.621902648+00:00 stdout F [INFO] 10.217.0.87:33159 - 18725 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006726893s 2025-08-13T20:11:04.653088392+00:00 stdout F [INFO] 10.217.0.87:40765 - 34294 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001484582s 2025-08-13T20:11:04.653088392+00:00 stdout F [INFO] 10.217.0.87:44177 - 23695 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001702179s 2025-08-13T20:11:04.715885942+00:00 stdout F [INFO] 10.217.0.87:33043 - 12052 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002513822s 2025-08-13T20:11:04.723084329+00:00 stdout F [INFO] 10.217.0.87:35338 - 1652 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006784525s 2025-08-13T20:11:04.805091720+00:00 stdout F [INFO] 10.217.0.87:52674 - 55206 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006223669s 2025-08-13T20:11:04.805091720+00:00 stdout F [INFO] 10.217.0.87:52033 - 60656 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006853436s 2025-08-13T20:11:04.816179148+00:00 stdout F [INFO] 10.217.0.87:50848 - 46737 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001064251s 2025-08-13T20:11:04.816179148+00:00 stdout F [INFO] 10.217.0.87:42103 - 9497 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001151093s 2025-08-13T20:11:04.890117708+00:00 stdout F [INFO] 10.217.0.87:47340 - 39540 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003902982s 2025-08-13T20:11:04.890117708+00:00 stdout F [INFO] 10.217.0.87:39624 - 35803 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004711905s 2025-08-13T20:11:04.890117708+00:00 stdout F [INFO] 10.217.0.87:52299 - 14108 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006719073s 2025-08-13T20:11:04.890117708+00:00 stdout F [INFO] 10.217.0.87:51626 - 3473 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006789615s 2025-08-13T20:11:04.954439932+00:00 stdout F [INFO] 10.217.0.87:52193 - 7609 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000888555s 2025-08-13T20:11:04.954660798+00:00 stdout F [INFO] 10.217.0.87:38807 - 33179 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000561866s 2025-08-13T20:11:04.954746981+00:00 stdout F [INFO] 10.217.0.87:51130 - 30582 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000945637s 2025-08-13T20:11:04.954990948+00:00 stdout F [INFO] 10.217.0.87:35982 - 7711 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001363909s 2025-08-13T20:11:05.014950167+00:00 stdout F [INFO] 10.217.0.87:42495 - 23659 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000950157s 2025-08-13T20:11:05.014950167+00:00 stdout F [INFO] 10.217.0.87:56385 - 22005 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001003799s 2025-08-13T20:11:05.067956076+00:00 stdout F [INFO] 10.217.0.87:47869 - 28411 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001158573s 2025-08-13T20:11:05.067956076+00:00 stdout F [INFO] 10.217.0.87:53901 - 44918 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000643019s 2025-08-13T20:11:05.072057984+00:00 stdout F [INFO] 10.217.0.87:49967 - 54232 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001286327s 2025-08-13T20:11:05.072114736+00:00 stdout F [INFO] 10.217.0.87:37700 - 58806 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102417s 2025-08-13T20:11:05.102279941+00:00 stdout F [INFO] 10.217.0.87:46754 - 38555 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001617416s 2025-08-13T20:11:05.116212460+00:00 stdout F [INFO] 10.217.0.87:51552 - 34798 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.022058323s 2025-08-13T20:11:05.145679675+00:00 stdout F [INFO] 10.217.0.87:36269 - 19193 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000509534s 2025-08-13T20:11:05.151134801+00:00 stdout F [INFO] 10.217.0.87:45946 - 16784 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.007139375s 2025-08-13T20:11:05.155084265+00:00 stdout F [INFO] 10.217.0.87:54068 - 64440 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002909163s 2025-08-13T20:11:05.160362146+00:00 stdout F [INFO] 10.217.0.87:52370 - 7710 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006949079s 2025-08-13T20:11:05.204469531+00:00 stdout F [INFO] 10.217.0.87:46241 - 8297 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069313s 2025-08-13T20:11:05.208651580+00:00 stdout F [INFO] 10.217.0.87:40189 - 37269 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000619368s 2025-08-13T20:11:05.241635746+00:00 stdout F [INFO] 10.217.0.87:44567 - 60474 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006494957s 2025-08-13T20:11:05.241752309+00:00 stdout F [INFO] 10.217.0.87:38105 - 27874 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.006842646s 2025-08-13T20:11:05.249504242+00:00 stdout F [INFO] 10.217.0.87:57155 - 58400 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005020243s 2025-08-13T20:11:05.250721467+00:00 stdout F [INFO] 10.217.0.87:52971 - 8666 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005474167s 2025-08-13T20:11:05.258077158+00:00 stdout F [INFO] 10.217.0.19:40122 - 47771 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005301442s 2025-08-13T20:11:05.259867269+00:00 stdout F [INFO] 10.217.0.19:52923 - 1992 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001806382s 2025-08-13T20:11:05.290980531+00:00 stdout F [INFO] 10.217.0.87:51112 - 26133 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001604326s 2025-08-13T20:11:05.290980531+00:00 stdout F [INFO] 10.217.0.87:57064 - 21762 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.004293173s 2025-08-13T20:11:05.302120030+00:00 stdout F [INFO] 10.217.0.87:40821 - 41355 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000800522s 2025-08-13T20:11:05.302379718+00:00 stdout F [INFO] 10.217.0.87:56713 - 19604 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001137903s 2025-08-13T20:11:05.313414544+00:00 stdout F [INFO] 10.217.0.87:36400 - 26073 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000772012s 2025-08-13T20:11:05.317306166+00:00 stdout F [INFO] 10.217.0.87:53031 - 42574 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003964054s 2025-08-13T20:11:05.321047143+00:00 stdout F [INFO] 10.217.0.87:53956 - 17610 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068611s 2025-08-13T20:11:05.321278060+00:00 stdout F [INFO] 10.217.0.87:46140 - 44894 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000664749s 2025-08-13T20:11:05.342043325+00:00 stdout F [INFO] 10.217.0.87:35847 - 16647 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000609347s 2025-08-13T20:11:05.342043325+00:00 stdout F [INFO] 10.217.0.87:54364 - 5238 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000893065s 2025-08-13T20:11:05.362751859+00:00 stdout F [INFO] 10.217.0.87:37115 - 46250 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000650209s 2025-08-13T20:11:05.362751859+00:00 stdout F [INFO] 10.217.0.87:49240 - 12686 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000519375s 2025-08-13T20:11:05.367891566+00:00 stdout F [INFO] 10.217.0.87:34732 - 2819 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000436073s 2025-08-13T20:11:05.367969348+00:00 stdout F [INFO] 10.217.0.87:33759 - 26935 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000470533s 2025-08-13T20:11:05.397690750+00:00 stdout F [INFO] 10.217.0.87:35998 - 34083 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003644675s 2025-08-13T20:11:05.397754462+00:00 stdout F [INFO] 10.217.0.87:52131 - 3546 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003547612s 2025-08-13T20:11:05.420550816+00:00 stdout F [INFO] 10.217.0.87:39735 - 12055 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000621878s 2025-08-13T20:11:05.420843084+00:00 stdout F [INFO] 10.217.0.87:42570 - 32800 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000528725s 2025-08-13T20:11:05.457708621+00:00 stdout F [INFO] 10.217.0.87:48256 - 42070 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001333268s 2025-08-13T20:11:05.457865856+00:00 stdout F [INFO] 10.217.0.87:59243 - 325 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001696519s 2025-08-13T20:11:05.459695468+00:00 stdout F [INFO] 10.217.0.87:36953 - 24753 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000388781s 2025-08-13T20:11:05.459695468+00:00 stdout F [INFO] 10.217.0.87:50391 - 5271 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000510425s 2025-08-13T20:11:05.476240483+00:00 stdout F [INFO] 10.217.0.87:53411 - 28017 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000549455s 2025-08-13T20:11:05.477273462+00:00 stdout F [INFO] 10.217.0.87:54565 - 53660 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001512104s 2025-08-13T20:11:05.517877896+00:00 stdout F [INFO] 10.217.0.87:36552 - 9276 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000594707s 2025-08-13T20:11:05.517877896+00:00 stdout F [INFO] 10.217.0.87:57974 - 5702 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000671679s 2025-08-13T20:11:05.518295358+00:00 stdout F [INFO] 10.217.0.87:35507 - 62258 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000621747s 2025-08-13T20:11:05.518966027+00:00 stdout F [INFO] 10.217.0.87:48489 - 51566 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001242945s 2025-08-13T20:11:05.532273779+00:00 stdout F [INFO] 10.217.0.87:54360 - 8769 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000570296s 2025-08-13T20:11:05.532620469+00:00 stdout F [INFO] 10.217.0.87:45071 - 2076 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000563326s 2025-08-13T20:11:05.572070690+00:00 stdout F [INFO] 10.217.0.87:35327 - 48220 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001152263s 2025-08-13T20:11:05.573180142+00:00 stdout F [INFO] 10.217.0.87:40319 - 52828 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000888025s 2025-08-13T20:11:05.573360557+00:00 stdout F [INFO] 10.217.0.87:57234 - 30752 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001159563s 2025-08-13T20:11:05.574248242+00:00 stdout F [INFO] 10.217.0.87:50779 - 25703 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000996218s 2025-08-13T20:11:05.636950840+00:00 stdout F [INFO] 10.217.0.87:38613 - 10822 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001529404s 2025-08-13T20:11:05.636950840+00:00 stdout F [INFO] 10.217.0.87:34827 - 31320 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001443362s 2025-08-13T20:11:05.695036296+00:00 stdout F [INFO] 10.217.0.87:50823 - 21243 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000660819s 2025-08-13T20:11:05.695036296+00:00 stdout F [INFO] 10.217.0.87:34187 - 38762 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000649678s 2025-08-13T20:11:05.695768016+00:00 stdout F [INFO] 10.217.0.87:48186 - 4485 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000949247s 2025-08-13T20:11:05.695871989+00:00 stdout F [INFO] 10.217.0.87:55448 - 36001 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001257476s 2025-08-13T20:11:05.727963740+00:00 stdout F [INFO] 10.217.0.87:45140 - 62363 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001007239s 2025-08-13T20:11:05.727963740+00:00 stdout F [INFO] 10.217.0.87:41301 - 4913 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000911526s 2025-08-13T20:11:05.737425181+00:00 stdout F [INFO] 10.217.0.87:54940 - 54838 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068333s 2025-08-13T20:11:05.737695999+00:00 stdout F [INFO] 10.217.0.87:47179 - 4957 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000484444s 2025-08-13T20:11:05.750248559+00:00 stdout F [INFO] 10.217.0.87:38521 - 7178 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070606s 2025-08-13T20:11:05.750464235+00:00 stdout F [INFO] 10.217.0.87:40720 - 32764 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000742241s 2025-08-13T20:11:05.758315660+00:00 stdout F [INFO] 10.217.0.87:53784 - 14976 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000741112s 2025-08-13T20:11:05.759021180+00:00 stdout F [INFO] 10.217.0.87:47766 - 50912 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000766572s 2025-08-13T20:11:05.787698922+00:00 stdout F [INFO] 10.217.0.87:46050 - 38144 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000734861s 2025-08-13T20:11:05.787846096+00:00 stdout F [INFO] 10.217.0.87:49776 - 41453 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069027s 2025-08-13T20:11:05.808445967+00:00 stdout F [INFO] 10.217.0.87:34882 - 53698 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000775223s 2025-08-13T20:11:05.810042633+00:00 stdout F [INFO] 10.217.0.87:57934 - 36471 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000613088s 2025-08-13T20:11:05.815279783+00:00 stdout F [INFO] 10.217.0.87:59022 - 44298 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000583917s 2025-08-13T20:11:05.815389526+00:00 stdout F [INFO] 10.217.0.87:50553 - 13065 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000446953s 2025-08-13T20:11:05.847885058+00:00 stdout F [INFO] 10.217.0.87:50104 - 3863 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000556416s 2025-08-13T20:11:05.849309909+00:00 stdout F [INFO] 10.217.0.87:42086 - 17959 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001748971s 2025-08-13T20:11:05.850025639+00:00 stdout F [INFO] 10.217.0.87:46475 - 670 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000601117s 2025-08-13T20:11:05.850271846+00:00 stdout F [INFO] 10.217.0.87:54185 - 52777 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000961198s 2025-08-13T20:11:05.864866695+00:00 stdout F [INFO] 10.217.0.87:35066 - 21339 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000673089s 2025-08-13T20:11:05.864866695+00:00 stdout F [INFO] 10.217.0.87:52115 - 23043 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000745862s 2025-08-13T20:11:05.905121909+00:00 stdout F [INFO] 10.217.0.87:41027 - 45325 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000649229s 2025-08-13T20:11:05.905199141+00:00 stdout F [INFO] 10.217.0.87:35153 - 34789 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000867205s 2025-08-13T20:11:05.905703236+00:00 stdout F [INFO] 10.217.0.87:48675 - 18007 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000562456s 2025-08-13T20:11:05.905742957+00:00 stdout F [INFO] 10.217.0.87:43674 - 13853 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000559216s 2025-08-13T20:11:05.921436857+00:00 stdout F [INFO] 10.217.0.87:48274 - 1386 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000690469s 2025-08-13T20:11:05.921436857+00:00 stdout F [INFO] 10.217.0.87:42202 - 38824 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000865555s 2025-08-13T20:11:05.946203447+00:00 stdout F [INFO] 10.217.0.87:56106 - 42543 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072583s 2025-08-13T20:11:05.946437223+00:00 stdout F [INFO] 10.217.0.87:56970 - 21694 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001012729s 2025-08-13T20:11:05.961452604+00:00 stdout F [INFO] 10.217.0.87:55002 - 42104 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000834214s 2025-08-13T20:11:05.962054481+00:00 stdout F [INFO] 10.217.0.87:57408 - 45063 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001483442s 2025-08-13T20:11:05.978225745+00:00 stdout F [INFO] 10.217.0.87:54164 - 2186 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000564496s 2025-08-13T20:11:05.978225745+00:00 stdout F [INFO] 10.217.0.87:49778 - 4297 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001432481s 2025-08-13T20:11:06.001327877+00:00 stdout F [INFO] 10.217.0.87:53683 - 16082 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000602247s 2025-08-13T20:11:06.001327877+00:00 stdout F [INFO] 10.217.0.87:50274 - 56137 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070625s 2025-08-13T20:11:06.024428690+00:00 stdout F [INFO] 10.217.0.87:59104 - 10557 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000574946s 2025-08-13T20:11:06.024482571+00:00 stdout F [INFO] 10.217.0.87:57333 - 36257 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000501454s 2025-08-13T20:11:06.042144708+00:00 stdout F [INFO] 10.217.0.87:39448 - 51107 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000729891s 2025-08-13T20:11:06.042350963+00:00 stdout F [INFO] 10.217.0.87:53358 - 62191 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000761701s 2025-08-13T20:11:06.064882039+00:00 stdout F [INFO] 10.217.0.87:46028 - 30827 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000779163s 2025-08-13T20:11:06.064965342+00:00 stdout F [INFO] 10.217.0.87:34523 - 33195 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000976068s 2025-08-13T20:11:06.079122668+00:00 stdout F [INFO] 10.217.0.87:47662 - 3356 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000976048s 2025-08-13T20:11:06.079490318+00:00 stdout F [INFO] 10.217.0.87:39599 - 47911 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001112542s 2025-08-13T20:11:06.081466575+00:00 stdout F [INFO] 10.217.0.87:56701 - 58527 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001175663s 2025-08-13T20:11:06.081526277+00:00 stdout F [INFO] 10.217.0.87:46845 - 61569 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001124562s 2025-08-13T20:11:06.094482548+00:00 stdout F [INFO] 10.217.0.87:44174 - 39781 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000914246s 2025-08-13T20:11:06.094591531+00:00 stdout F [INFO] 10.217.0.87:57763 - 55061 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00136419s 2025-08-13T20:11:06.117101526+00:00 stdout F [INFO] 10.217.0.87:35335 - 509 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000558666s 2025-08-13T20:11:06.117376293+00:00 stdout F [INFO] 10.217.0.87:45399 - 59288 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000826444s 2025-08-13T20:11:06.134357430+00:00 stdout F [INFO] 10.217.0.87:57107 - 7410 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000519265s 2025-08-13T20:11:06.134609068+00:00 stdout F [INFO] 10.217.0.87:38962 - 36757 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000547605s 2025-08-13T20:11:06.136617685+00:00 stdout F [INFO] 10.217.0.87:40779 - 41501 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000559076s 2025-08-13T20:11:06.136842652+00:00 stdout F [INFO] 10.217.0.87:53109 - 41446 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000586077s 2025-08-13T20:11:06.153357345+00:00 stdout F [INFO] 10.217.0.87:53791 - 54818 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000739891s 2025-08-13T20:11:06.153455868+00:00 stdout F [INFO] 10.217.0.87:40529 - 22388 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000568216s 2025-08-13T20:11:06.174253844+00:00 stdout F [INFO] 10.217.0.87:33434 - 53255 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000811123s 2025-08-13T20:11:06.174311886+00:00 stdout F [INFO] 10.217.0.87:41504 - 17167 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000981359s 2025-08-13T20:11:06.191676854+00:00 stdout F [INFO] 10.217.0.87:44850 - 25924 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000839094s 2025-08-13T20:11:06.191676854+00:00 stdout F [INFO] 10.217.0.87:35096 - 8796 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000593937s 2025-08-13T20:11:06.194354941+00:00 stdout F [INFO] 10.217.0.87:42299 - 56354 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000612367s 2025-08-13T20:11:06.195107692+00:00 stdout F [INFO] 10.217.0.87:60823 - 24639 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000432882s 2025-08-13T20:11:06.222853908+00:00 stdout F [INFO] 10.217.0.87:58939 - 34814 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000455973s 2025-08-13T20:11:06.223381183+00:00 stdout F [INFO] 10.217.0.87:41083 - 28150 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000423563s 2025-08-13T20:11:06.229876849+00:00 stdout F [INFO] 10.217.0.87:50140 - 37233 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00036622s 2025-08-13T20:11:06.229876849+00:00 stdout F [INFO] 10.217.0.87:44139 - 19006 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000462453s 2025-08-13T20:11:06.235723447+00:00 stdout F [INFO] 10.217.0.87:37000 - 48562 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000615558s 2025-08-13T20:11:06.236182840+00:00 stdout F [INFO] 10.217.0.87:37401 - 26048 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000878205s 2025-08-13T20:11:06.248820622+00:00 stdout F [INFO] 10.217.0.87:53517 - 14644 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000478294s 2025-08-13T20:11:06.249197003+00:00 stdout F [INFO] 10.217.0.87:46966 - 49070 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00102005s 2025-08-13T20:11:06.252328663+00:00 stdout F [INFO] 10.217.0.87:49959 - 54517 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000936917s 2025-08-13T20:11:06.252872748+00:00 stdout F [INFO] 10.217.0.87:34122 - 64368 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001336928s 2025-08-13T20:11:06.289952311+00:00 stdout F [INFO] 10.217.0.87:54879 - 31349 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000757111s 2025-08-13T20:11:06.289952311+00:00 stdout F [INFO] 10.217.0.87:36976 - 4227 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072202s 2025-08-13T20:11:06.289952311+00:00 stdout F [INFO] 10.217.0.87:50486 - 30028 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000467073s 2025-08-13T20:11:06.290157227+00:00 stdout F [INFO] 10.217.0.87:44581 - 20684 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000646279s 2025-08-13T20:11:06.306865616+00:00 stdout F [INFO] 10.217.0.87:52436 - 41201 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000389121s 2025-08-13T20:11:06.307338690+00:00 stdout F [INFO] 10.217.0.87:46131 - 60825 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000489754s 2025-08-13T20:11:06.347124001+00:00 stdout F [INFO] 10.217.0.87:50795 - 29243 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000588857s 2025-08-13T20:11:06.347124001+00:00 stdout F [INFO] 10.217.0.87:40418 - 64190 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070063s 2025-08-13T20:11:06.354559374+00:00 stdout F [INFO] 10.217.0.87:38571 - 1815 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000470153s 2025-08-13T20:11:06.355035317+00:00 stdout F [INFO] 10.217.0.87:33273 - 16168 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000579847s 2025-08-13T20:11:06.400201202+00:00 stdout F [INFO] 10.217.0.87:60599 - 15192 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002863203s 2025-08-13T20:11:06.400386088+00:00 stdout F [INFO] 10.217.0.87:39072 - 30678 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002858582s 2025-08-13T20:11:06.412956878+00:00 stdout F [INFO] 10.217.0.87:55153 - 53749 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003419738s 2025-08-13T20:11:06.412956878+00:00 stdout F [INFO] 10.217.0.87:39601 - 28661 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003660195s 2025-08-13T20:11:06.454535910+00:00 stdout F [INFO] 10.217.0.87:35573 - 1487 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000883155s 2025-08-13T20:11:06.454573761+00:00 stdout F [INFO] 10.217.0.87:53850 - 41953 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000804833s 2025-08-13T20:11:06.463589630+00:00 stdout F [INFO] 10.217.0.87:34383 - 55906 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000546776s 2025-08-13T20:11:06.463877388+00:00 stdout F [INFO] 10.217.0.87:50571 - 33343 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000808043s 2025-08-13T20:11:06.491525171+00:00 stdout F [INFO] 10.217.0.87:41911 - 60063 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001192454s 2025-08-13T20:11:06.492985652+00:00 stdout F [INFO] 10.217.0.87:45782 - 22682 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002186713s 2025-08-13T20:11:06.509816265+00:00 stdout F [INFO] 10.217.0.87:52306 - 22289 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000733441s 2025-08-13T20:11:06.509855916+00:00 stdout F [INFO] 10.217.0.87:35555 - 28182 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000762071s 2025-08-13T20:11:06.533077352+00:00 stdout F [INFO] 10.217.0.87:43457 - 64202 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000824794s 2025-08-13T20:11:06.534133062+00:00 stdout F [INFO] 10.217.0.87:50391 - 2706 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000806523s 2025-08-13T20:11:06.552254362+00:00 stdout F [INFO] 10.217.0.87:41758 - 54760 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000894436s 2025-08-13T20:11:06.552254362+00:00 stdout F [INFO] 10.217.0.87:41418 - 37950 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102568s 2025-08-13T20:11:06.565714768+00:00 stdout F [INFO] 10.217.0.87:53002 - 34946 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000669639s 2025-08-13T20:11:06.565991156+00:00 stdout F [INFO] 10.217.0.87:52134 - 13366 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000942697s 2025-08-13T20:11:06.592532397+00:00 stdout F [INFO] 10.217.0.87:41702 - 16932 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000916666s 2025-08-13T20:11:06.592709062+00:00 stdout F [INFO] 10.217.0.87:33361 - 18852 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001009309s 2025-08-13T20:11:06.608624028+00:00 stdout F [INFO] 10.217.0.87:37399 - 31636 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000787072s 2025-08-13T20:11:06.609027200+00:00 stdout F [INFO] 10.217.0.87:38988 - 32905 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001131022s 2025-08-13T20:11:06.650274692+00:00 stdout F [INFO] 10.217.0.87:56662 - 29308 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001180094s 2025-08-13T20:11:06.651669022+00:00 stdout F [INFO] 10.217.0.87:41415 - 62225 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001204265s 2025-08-13T20:11:06.665180180+00:00 stdout F [INFO] 10.217.0.87:35263 - 58718 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000958997s 2025-08-13T20:11:06.665440387+00:00 stdout F [INFO] 10.217.0.87:34409 - 3458 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000758072s 2025-08-13T20:11:06.665900150+00:00 stdout F [INFO] 10.217.0.87:38774 - 44257 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069317s 2025-08-13T20:11:06.666261191+00:00 stdout F [INFO] 10.217.0.87:51417 - 37318 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000584717s 2025-08-13T20:11:06.710675574+00:00 stdout F [INFO] 10.217.0.87:58364 - 51070 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001012859s 2025-08-13T20:11:06.710869830+00:00 stdout F [INFO] 10.217.0.87:33657 - 35571 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001009739s 2025-08-13T20:11:06.719730754+00:00 stdout F [INFO] 10.217.0.87:56472 - 42237 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000597037s 2025-08-13T20:11:06.720049443+00:00 stdout F [INFO] 10.217.0.87:34285 - 51553 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070137s 2025-08-13T20:11:06.725270743+00:00 stdout F [INFO] 10.217.0.87:56793 - 19375 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000649548s 2025-08-13T20:11:06.725506199+00:00 stdout F [INFO] 10.217.0.87:37335 - 60642 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00069964s 2025-08-13T20:11:06.743459024+00:00 stdout F [INFO] 10.217.0.87:49928 - 38274 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000782013s 2025-08-13T20:11:06.743459024+00:00 stdout F [INFO] 10.217.0.87:38836 - 29306 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000717341s 2025-08-13T20:11:06.768242695+00:00 stdout F [INFO] 10.217.0.87:60924 - 22492 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001305128s 2025-08-13T20:11:06.768311347+00:00 stdout F [INFO] 10.217.0.87:33026 - 58340 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001566935s 2025-08-13T20:11:06.774824223+00:00 stdout F [INFO] 10.217.0.87:35023 - 36636 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000527365s 2025-08-13T20:11:06.775069740+00:00 stdout F [INFO] 10.217.0.87:36959 - 13740 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000538765s 2025-08-13T20:11:06.804987058+00:00 stdout F [INFO] 10.217.0.87:52927 - 15321 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000732581s 2025-08-13T20:11:06.805176204+00:00 stdout F [INFO] 10.217.0.87:41921 - 3811 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001665357s 2025-08-13T20:11:06.824849498+00:00 stdout F [INFO] 10.217.0.87:56546 - 11258 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000651899s 2025-08-13T20:11:06.825061844+00:00 stdout F [INFO] 10.217.0.87:53868 - 34815 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001021429s 2025-08-13T20:11:06.843694408+00:00 stdout F [INFO] 10.217.0.87:39912 - 28227 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000592707s 2025-08-13T20:11:06.844727207+00:00 stdout F [INFO] 10.217.0.87:43094 - 49549 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000830364s 2025-08-13T20:11:06.880210985+00:00 stdout F [INFO] 10.217.0.87:39540 - 28116 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000741151s 2025-08-13T20:11:06.880420891+00:00 stdout F [INFO] 10.217.0.87:37296 - 5530 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000985599s 2025-08-13T20:11:06.898124619+00:00 stdout F [INFO] 10.217.0.87:34894 - 32594 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001575145s 2025-08-13T20:11:06.898483399+00:00 stdout F [INFO] 10.217.0.87:52980 - 13091 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001652387s 2025-08-13T20:11:06.929318283+00:00 stdout F [INFO] 10.217.0.87:59919 - 19170 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000769912s 2025-08-13T20:11:06.929360744+00:00 stdout F [INFO] 10.217.0.87:47630 - 55246 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000912626s 2025-08-13T20:11:06.937565909+00:00 stdout F [INFO] 10.217.0.87:40289 - 56438 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001221625s 2025-08-13T20:11:06.937699333+00:00 stdout F [INFO] 10.217.0.87:52474 - 24238 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001223485s 2025-08-13T20:11:06.939627758+00:00 stdout F [INFO] 10.217.0.87:36682 - 2516 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001295677s 2025-08-13T20:11:06.939627758+00:00 stdout F [INFO] 10.217.0.87:56748 - 10632 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001002569s 2025-08-13T20:11:06.986652897+00:00 stdout F [INFO] 10.217.0.87:49396 - 42637 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002637565s 2025-08-13T20:11:06.986720629+00:00 stdout F [INFO] 10.217.0.87:41630 - 25830 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002414649s 2025-08-13T20:11:06.997326363+00:00 stdout F [INFO] 10.217.0.87:51569 - 44524 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00104834s 2025-08-13T20:11:06.997622911+00:00 stdout F [INFO] 10.217.0.87:36794 - 18852 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001446492s 2025-08-13T20:11:07.044982969+00:00 stdout F [INFO] 10.217.0.87:51837 - 36888 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000936357s 2025-08-13T20:11:07.045427022+00:00 stdout F [INFO] 10.217.0.87:58976 - 21162 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001307418s 2025-08-13T20:11:07.045548015+00:00 stdout F [INFO] 10.217.0.87:36974 - 32460 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000771272s 2025-08-13T20:11:07.047007987+00:00 stdout F [INFO] 10.217.0.87:58415 - 60280 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002137671s 2025-08-13T20:11:07.105296868+00:00 stdout F [INFO] 10.217.0.87:60538 - 54742 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001012649s 2025-08-13T20:11:07.105296868+00:00 stdout F [INFO] 10.217.0.87:58811 - 17080 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0017433s 2025-08-13T20:11:07.105495824+00:00 stdout F [INFO] 10.217.0.87:51688 - 12168 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002047849s 2025-08-13T20:11:07.105664929+00:00 stdout F [INFO] 10.217.0.87:50539 - 42293 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002032268s 2025-08-13T20:11:07.163007373+00:00 stdout F [INFO] 10.217.0.87:58079 - 39541 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000982978s 2025-08-13T20:11:07.163007373+00:00 stdout F [INFO] 10.217.0.87:37302 - 55370 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001032149s 2025-08-13T20:11:07.170320102+00:00 stdout F [INFO] 10.217.0.87:49364 - 65517 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000833624s 2025-08-13T20:11:07.171272730+00:00 stdout F [INFO] 10.217.0.87:36435 - 40868 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000906396s 2025-08-13T20:11:07.171989020+00:00 stdout F [INFO] 10.217.0.87:59848 - 32310 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000610828s 2025-08-13T20:11:07.172141005+00:00 stdout F [INFO] 10.217.0.87:55904 - 37116 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000517145s 2025-08-13T20:11:07.207731465+00:00 stdout F [INFO] 10.217.0.87:43437 - 61119 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000728381s 2025-08-13T20:11:07.207861279+00:00 stdout F [INFO] 10.217.0.87:57484 - 35925 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000743692s 2025-08-13T20:11:07.221266323+00:00 stdout F [INFO] 10.217.0.87:43164 - 16768 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000541336s 2025-08-13T20:11:07.221266323+00:00 stdout F [INFO] 10.217.0.87:60482 - 24325 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000885805s 2025-08-13T20:11:07.225198466+00:00 stdout F [INFO] 10.217.0.87:49620 - 5559 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587827s 2025-08-13T20:11:07.225319079+00:00 stdout F [INFO] 10.217.0.87:48813 - 53088 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000534395s 2025-08-13T20:11:07.257577124+00:00 stdout F [INFO] 10.217.0.87:40760 - 5643 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000937127s 2025-08-13T20:11:07.257826671+00:00 stdout F [INFO] 10.217.0.87:41535 - 4835 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001141493s 2025-08-13T20:11:07.262181946+00:00 stdout F [INFO] 10.217.0.87:49409 - 29932 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072507s 2025-08-13T20:11:07.262240898+00:00 stdout F [INFO] 10.217.0.87:33200 - 42103 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000580437s 2025-08-13T20:11:07.276715173+00:00 stdout F [INFO] 10.217.0.87:49404 - 50720 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000742431s 2025-08-13T20:11:07.276846737+00:00 stdout F [INFO] 10.217.0.87:54369 - 4775 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000498485s 2025-08-13T20:11:07.280807520+00:00 stdout F [INFO] 10.217.0.87:53914 - 14210 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000432693s 2025-08-13T20:11:07.281426688+00:00 stdout F [INFO] 10.217.0.87:51940 - 43696 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000888956s 2025-08-13T20:11:07.317145732+00:00 stdout F [INFO] 10.217.0.87:59791 - 56988 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000923027s 2025-08-13T20:11:07.317197224+00:00 stdout F [INFO] 10.217.0.87:43385 - 12255 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00072117s 2025-08-13T20:11:07.334514660+00:00 stdout F [INFO] 10.217.0.87:34980 - 10777 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00070021s 2025-08-13T20:11:07.334691555+00:00 stdout F [INFO] 10.217.0.87:55757 - 15662 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000538395s 2025-08-13T20:11:07.376549415+00:00 stdout F [INFO] 10.217.0.87:53403 - 11578 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070054s 2025-08-13T20:11:07.376549415+00:00 stdout F [INFO] 10.217.0.87:35946 - 17798 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001085221s 2025-08-13T20:11:07.389303621+00:00 stdout F [INFO] 10.217.0.87:44091 - 57874 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000605428s 2025-08-13T20:11:07.389703323+00:00 stdout F [INFO] 10.217.0.87:53756 - 50686 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000758902s 2025-08-13T20:11:07.417015286+00:00 stdout F [INFO] 10.217.0.87:40347 - 17567 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001583275s 2025-08-13T20:11:07.417194241+00:00 stdout F [INFO] 10.217.0.87:33768 - 59369 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001353908s 2025-08-13T20:11:07.443644379+00:00 stdout F [INFO] 10.217.0.87:50810 - 56645 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000776422s 2025-08-13T20:11:07.443644379+00:00 stdout F [INFO] 10.217.0.87:46506 - 36903 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000660439s 2025-08-13T20:11:07.472205178+00:00 stdout F [INFO] 10.217.0.87:47203 - 54622 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000751011s 2025-08-13T20:11:07.472205178+00:00 stdout F [INFO] 10.217.0.87:57837 - 6935 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000608628s 2025-08-13T20:11:07.481557286+00:00 stdout F [INFO] 10.217.0.87:34574 - 39418 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000713751s 2025-08-13T20:11:07.482115062+00:00 stdout F [INFO] 10.217.0.87:60348 - 34743 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001230536s 2025-08-13T20:11:07.507484500+00:00 stdout F [INFO] 10.217.0.87:34603 - 8271 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000671269s 2025-08-13T20:11:07.507538051+00:00 stdout F [INFO] 10.217.0.87:46661 - 53710 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000798843s 2025-08-13T20:11:07.527233716+00:00 stdout F [INFO] 10.217.0.87:50539 - 40214 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000919156s 2025-08-13T20:11:07.527354589+00:00 stdout F [INFO] 10.217.0.87:58467 - 3852 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000786952s 2025-08-13T20:11:07.535472752+00:00 stdout F [INFO] 10.217.0.87:34042 - 43843 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000737821s 2025-08-13T20:11:07.535559044+00:00 stdout F [INFO] 10.217.0.87:35568 - 3264 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000796383s 2025-08-13T20:11:07.557271757+00:00 stdout F [INFO] 10.217.0.87:38703 - 62460 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070702s 2025-08-13T20:11:07.557654668+00:00 stdout F [INFO] 10.217.0.87:54204 - 9243 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000859315s 2025-08-13T20:11:07.571607918+00:00 stdout F [INFO] 10.217.0.87:51349 - 43568 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000394932s 2025-08-13T20:11:07.571909177+00:00 stdout F [INFO] 10.217.0.87:55665 - 10478 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000890045s 2025-08-13T20:11:07.579506694+00:00 stdout F [INFO] 10.217.0.87:41713 - 44982 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000639949s 2025-08-13T20:11:07.579822964+00:00 stdout F [INFO] 10.217.0.87:35599 - 30094 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000641579s 2025-08-13T20:11:07.611660656+00:00 stdout F [INFO] 10.217.0.87:52989 - 57443 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000811524s 2025-08-13T20:11:07.612091619+00:00 stdout F [INFO] 10.217.0.87:56925 - 44043 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000982728s 2025-08-13T20:11:07.630198318+00:00 stdout F [INFO] 10.217.0.87:59685 - 54799 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001186674s 2025-08-13T20:11:07.630585489+00:00 stdout F [INFO] 10.217.0.87:41588 - 19165 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001908224s 2025-08-13T20:11:07.638215148+00:00 stdout F [INFO] 10.217.0.87:60419 - 15905 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000870815s 2025-08-13T20:11:07.638310660+00:00 stdout F [INFO] 10.217.0.87:39051 - 25853 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001563095s 2025-08-13T20:11:07.668658981+00:00 stdout F [INFO] 10.217.0.87:36510 - 56531 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000753982s 2025-08-13T20:11:07.668910118+00:00 stdout F [INFO] 10.217.0.87:45382 - 41023 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000604917s 2025-08-13T20:11:07.688061667+00:00 stdout F [INFO] 10.217.0.87:43511 - 46046 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000736041s 2025-08-13T20:11:07.688430487+00:00 stdout F [INFO] 10.217.0.87:55325 - 56240 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001117712s 2025-08-13T20:11:07.693539064+00:00 stdout F [INFO] 10.217.0.87:57667 - 32897 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000927677s 2025-08-13T20:11:07.693603626+00:00 stdout F [INFO] 10.217.0.87:53696 - 30464 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00103904s 2025-08-13T20:11:07.716821652+00:00 stdout F [INFO] 10.217.0.87:47346 - 60672 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000607528s 2025-08-13T20:11:07.717144831+00:00 stdout F [INFO] 10.217.0.87:57924 - 33305 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000790483s 2025-08-13T20:11:07.743893558+00:00 stdout F [INFO] 10.217.0.87:56868 - 30677 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000767852s 2025-08-13T20:11:07.744194106+00:00 stdout F [INFO] 10.217.0.87:49842 - 57859 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000859045s 2025-08-13T20:11:07.772413855+00:00 stdout F [INFO] 10.217.0.87:33450 - 52319 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000833204s 2025-08-13T20:11:07.772864258+00:00 stdout F [INFO] 10.217.0.87:49356 - 2562 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001155553s 2025-08-13T20:11:07.799166922+00:00 stdout F [INFO] 10.217.0.87:53352 - 29022 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00142271s 2025-08-13T20:11:07.799415080+00:00 stdout F [INFO] 10.217.0.87:49746 - 13386 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001722979s 2025-08-13T20:11:07.826411524+00:00 stdout F [INFO] 10.217.0.87:35786 - 37507 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000635989s 2025-08-13T20:11:07.826551608+00:00 stdout F [INFO] 10.217.0.87:40944 - 25651 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000536725s 2025-08-13T20:11:07.866043630+00:00 stdout F [INFO] 10.217.0.87:46701 - 20031 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000803634s 2025-08-13T20:11:07.866305297+00:00 stdout F [INFO] 10.217.0.87:49754 - 37423 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000920356s 2025-08-13T20:11:07.881445101+00:00 stdout F [INFO] 10.217.0.87:41833 - 64957 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000903816s 2025-08-13T20:11:07.881964906+00:00 stdout F [INFO] 10.217.0.87:43415 - 47315 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001354449s 2025-08-13T20:11:07.882467471+00:00 stdout F [INFO] 10.217.0.87:48205 - 62842 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000731091s 2025-08-13T20:11:07.882601685+00:00 stdout F [INFO] 10.217.0.87:48027 - 3372 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105356s 2025-08-13T20:11:07.905826531+00:00 stdout F [INFO] 10.217.0.87:58159 - 46154 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000625888s 2025-08-13T20:11:07.906098518+00:00 stdout F [INFO] 10.217.0.87:36665 - 53602 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000843184s 2025-08-13T20:11:07.921844300+00:00 stdout F [INFO] 10.217.0.87:52715 - 35017 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000484183s 2025-08-13T20:11:07.922081147+00:00 stdout F [INFO] 10.217.0.87:52898 - 49650 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000426353s 2025-08-13T20:11:07.938112126+00:00 stdout F [INFO] 10.217.0.87:49711 - 41337 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000938537s 2025-08-13T20:11:07.938154007+00:00 stdout F [INFO] 10.217.0.87:44817 - 58022 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001096921s 2025-08-13T20:11:07.962470985+00:00 stdout F [INFO] 10.217.0.87:41285 - 13116 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000762342s 2025-08-13T20:11:07.962525396+00:00 stdout F [INFO] 10.217.0.87:38417 - 4434 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000501594s 2025-08-13T20:11:07.981162011+00:00 stdout F [INFO] 10.217.0.87:37682 - 11071 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072595s 2025-08-13T20:11:07.981193221+00:00 stdout F [INFO] 10.217.0.87:45937 - 140 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000706161s 2025-08-13T20:11:07.999602619+00:00 stdout F [INFO] 10.217.0.87:48786 - 24055 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000580597s 2025-08-13T20:11:07.999602619+00:00 stdout F [INFO] 10.217.0.87:39752 - 12452 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000929997s 2025-08-13T20:11:08.064061297+00:00 stdout F [INFO] 10.217.0.87:49777 - 44088 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000721461s 2025-08-13T20:11:08.064688655+00:00 stdout F [INFO] 10.217.0.87:58214 - 49133 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001578405s 2025-08-13T20:11:08.081518438+00:00 stdout F [INFO] 10.217.0.87:36948 - 4802 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000605337s 2025-08-13T20:11:08.081581300+00:00 stdout F [INFO] 10.217.0.87:41574 - 52127 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000793012s 2025-08-13T20:11:08.119315391+00:00 stdout F [INFO] 10.217.0.87:34539 - 59046 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001407601s 2025-08-13T20:11:08.119385703+00:00 stdout F [INFO] 10.217.0.87:53056 - 45122 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001035959s 2025-08-13T20:11:08.140376275+00:00 stdout F [INFO] 10.217.0.87:32830 - 23502 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000724451s 2025-08-13T20:11:08.141998682+00:00 stdout F [INFO] 10.217.0.87:41187 - 7567 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001931556s 2025-08-13T20:11:08.146236623+00:00 stdout F [INFO] 10.217.0.87:45883 - 45200 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000991418s 2025-08-13T20:11:08.146587473+00:00 stdout F [INFO] 10.217.0.87:59245 - 9995 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001093662s 2025-08-13T20:11:08.176333046+00:00 stdout F [INFO] 10.217.0.87:38631 - 5845 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000641959s 2025-08-13T20:11:08.176854491+00:00 stdout F [INFO] 10.217.0.87:45038 - 42993 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000985579s 2025-08-13T20:11:08.195841936+00:00 stdout F [INFO] 10.217.0.87:35908 - 62593 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00107632s 2025-08-13T20:11:08.195879627+00:00 stdout F [INFO] 10.217.0.87:57842 - 25290 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001344349s 2025-08-13T20:11:08.201090476+00:00 stdout F [INFO] 10.217.0.87:40521 - 43457 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070197s 2025-08-13T20:11:08.201833717+00:00 stdout F [INFO] 10.217.0.87:53603 - 40731 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001195764s 2025-08-13T20:11:08.236267675+00:00 stdout F [INFO] 10.217.0.87:54377 - 33154 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000904966s 2025-08-13T20:11:08.236475721+00:00 stdout F [INFO] 10.217.0.87:49212 - 45423 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000958377s 2025-08-13T20:11:08.251370648+00:00 stdout F [INFO] 10.217.0.87:55160 - 29098 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000675209s 2025-08-13T20:11:08.251516222+00:00 stdout F [INFO] 10.217.0.87:42220 - 5777 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000705991s 2025-08-13T20:11:08.259369497+00:00 stdout F [INFO] 10.217.0.87:53420 - 37723 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000562176s 2025-08-13T20:11:08.259485550+00:00 stdout F [INFO] 10.217.0.87:37634 - 18652 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000766012s 2025-08-13T20:11:08.306407546+00:00 stdout F [INFO] 10.217.0.87:58080 - 12192 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000620858s 2025-08-13T20:11:08.307266460+00:00 stdout F [INFO] 10.217.0.87:41950 - 42971 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001173934s 2025-08-13T20:11:08.317512554+00:00 stdout F [INFO] 10.217.0.87:41163 - 25281 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000689609s 2025-08-13T20:11:08.317614577+00:00 stdout F [INFO] 10.217.0.87:43491 - 18647 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000532295s 2025-08-13T20:11:08.322396894+00:00 stdout F [INFO] 10.217.0.87:35754 - 20409 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000502435s 2025-08-13T20:11:08.322396894+00:00 stdout F [INFO] 10.217.0.87:52198 - 36446 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000585887s 2025-08-13T20:11:08.339933417+00:00 stdout F [INFO] 10.217.0.87:50506 - 34942 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000850635s 2025-08-13T20:11:08.340078971+00:00 stdout F [INFO] 10.217.0.87:41082 - 58001 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001259356s 2025-08-13T20:11:08.347099922+00:00 stdout F [INFO] 10.217.0.87:48202 - 24673 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000706341s 2025-08-13T20:11:08.347345749+00:00 stdout F [INFO] 10.217.0.87:55112 - 2851 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000876806s 2025-08-13T20:11:08.363877823+00:00 stdout F [INFO] 10.217.0.87:45181 - 18776 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00068396s 2025-08-13T20:11:08.364125350+00:00 stdout F [INFO] 10.217.0.87:33510 - 43360 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000972978s 2025-08-13T20:11:08.376663840+00:00 stdout F [INFO] 10.217.0.87:54478 - 11527 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000708951s 2025-08-13T20:11:08.376987139+00:00 stdout F [INFO] 10.217.0.87:48980 - 35262 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000651708s 2025-08-13T20:11:08.384834434+00:00 stdout F [INFO] 10.217.0.87:35665 - 2681 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000518975s 2025-08-13T20:11:08.384966958+00:00 stdout F [INFO] 10.217.0.87:33876 - 4161 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000749152s 2025-08-13T20:11:08.395677425+00:00 stdout F [INFO] 10.217.0.87:57212 - 64901 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000905326s 2025-08-13T20:11:08.395890481+00:00 stdout F [INFO] 10.217.0.87:45257 - 8753 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069839s 2025-08-13T20:11:08.404554680+00:00 stdout F [INFO] 10.217.0.87:59219 - 40644 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001885915s 2025-08-13T20:11:08.404554680+00:00 stdout F [INFO] 10.217.0.87:50503 - 20139 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001935436s 2025-08-13T20:11:08.423018619+00:00 stdout F [INFO] 10.217.0.87:60190 - 12866 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001440531s 2025-08-13T20:11:08.423496763+00:00 stdout F [INFO] 10.217.0.87:55511 - 27204 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00209878s 2025-08-13T20:11:08.432903562+00:00 stdout F [INFO] 10.217.0.87:35410 - 46456 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000507545s 2025-08-13T20:11:08.432903562+00:00 stdout F [INFO] 10.217.0.87:58066 - 60152 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000623148s 2025-08-13T20:11:08.459096243+00:00 stdout F [INFO] 10.217.0.87:55275 - 54993 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000979079s 2025-08-13T20:11:08.459194356+00:00 stdout F [INFO] 10.217.0.87:33489 - 1806 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000961338s 2025-08-13T20:11:08.487465687+00:00 stdout F [INFO] 10.217.0.87:56155 - 40098 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000728021s 2025-08-13T20:11:08.487522958+00:00 stdout F [INFO] 10.217.0.87:34779 - 64998 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000902886s 2025-08-13T20:11:08.499420449+00:00 stdout F [INFO] 10.217.0.87:53081 - 28211 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000496094s 2025-08-13T20:11:08.499476821+00:00 stdout F [INFO] 10.217.0.87:44623 - 61713 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000791513s 2025-08-13T20:11:08.517257761+00:00 stdout F [INFO] 10.217.0.87:60530 - 2512 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005661673s 2025-08-13T20:11:08.517432846+00:00 stdout F [INFO] 10.217.0.87:46936 - 26510 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005884159s 2025-08-13T20:11:08.544881423+00:00 stdout F [INFO] 10.217.0.87:46216 - 38326 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000781543s 2025-08-13T20:11:08.544881423+00:00 stdout F [INFO] 10.217.0.87:55829 - 44853 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001203765s 2025-08-13T20:11:08.544881423+00:00 stdout F [INFO] 10.217.0.87:36518 - 59457 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001556755s 2025-08-13T20:11:08.544881423+00:00 stdout F [INFO] 10.217.0.87:38180 - 64319 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001057991s 2025-08-13T20:11:08.555623421+00:00 stdout F [INFO] 10.217.0.87:55842 - 205 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000616398s 2025-08-13T20:11:08.555623421+00:00 stdout F [INFO] 10.217.0.87:45745 - 7394 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000842004s 2025-08-13T20:11:08.573364259+00:00 stdout F [INFO] 10.217.0.87:57068 - 35192 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001069521s 2025-08-13T20:11:08.573364259+00:00 stdout F [INFO] 10.217.0.87:49760 - 49044 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000945517s 2025-08-13T20:11:08.579015571+00:00 stdout F [INFO] 10.217.0.87:43119 - 21963 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000549946s 2025-08-13T20:11:08.579708371+00:00 stdout F [INFO] 10.217.0.87:60286 - 28624 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000616738s 2025-08-13T20:11:08.599127888+00:00 stdout F [INFO] 10.217.0.87:42287 - 8556 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001494493s 2025-08-13T20:11:08.599256052+00:00 stdout F [INFO] 10.217.0.87:38544 - 61819 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001566324s 2025-08-13T20:11:08.599857979+00:00 stdout F [INFO] 10.217.0.87:54830 - 40587 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000758882s 2025-08-13T20:11:08.600256170+00:00 stdout F [INFO] 10.217.0.87:45174 - 390 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001139543s 2025-08-13T20:11:08.627299166+00:00 stdout F [INFO] 10.217.0.87:52597 - 28415 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000736661s 2025-08-13T20:11:08.627299166+00:00 stdout F [INFO] 10.217.0.87:57052 - 4137 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000971697s 2025-08-13T20:11:08.632964848+00:00 stdout F [INFO] 10.217.0.87:39800 - 50300 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000512485s 2025-08-13T20:11:08.633108582+00:00 stdout F [INFO] 10.217.0.87:58174 - 39741 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000584247s 2025-08-13T20:11:08.655018860+00:00 stdout F [INFO] 10.217.0.87:47248 - 53195 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000666499s 2025-08-13T20:11:08.655018860+00:00 stdout F [INFO] 10.217.0.87:60705 - 40238 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000729491s 2025-08-13T20:11:08.661694522+00:00 stdout F [INFO] 10.217.0.87:53642 - 49494 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000573287s 2025-08-13T20:11:08.661694522+00:00 stdout F [INFO] 10.217.0.87:36968 - 25355 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000502334s 2025-08-13T20:11:08.687659976+00:00 stdout F [INFO] 10.217.0.87:48084 - 29807 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000730811s 2025-08-13T20:11:08.687659976+00:00 stdout F [INFO] 10.217.0.87:53674 - 5248 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000805873s 2025-08-13T20:11:08.711454949+00:00 stdout F [INFO] 10.217.0.87:34393 - 12701 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000739862s 2025-08-13T20:11:08.712082747+00:00 stdout F [INFO] 10.217.0.87:42170 - 56756 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000760271s 2025-08-13T20:11:08.741562942+00:00 stdout F [INFO] 10.217.0.87:47188 - 22454 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000750161s 2025-08-13T20:11:08.741692766+00:00 stdout F [INFO] 10.217.0.87:60309 - 59151 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001182084s 2025-08-13T20:11:08.745069502+00:00 stdout F [INFO] 10.217.0.87:45344 - 61474 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000668139s 2025-08-13T20:11:08.745069502+00:00 stdout F [INFO] 10.217.0.87:58293 - 48213 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00105222s 2025-08-13T20:11:08.766544188+00:00 stdout F [INFO] 10.217.0.87:40412 - 29040 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000454903s 2025-08-13T20:11:08.766693232+00:00 stdout F [INFO] 10.217.0.87:59933 - 26530 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000411332s 2025-08-13T20:11:08.797008712+00:00 stdout F [INFO] 10.217.0.87:45446 - 13925 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000613957s 2025-08-13T20:11:08.797008712+00:00 stdout F [INFO] 10.217.0.87:57036 - 43938 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000506345s 2025-08-13T20:11:08.820598308+00:00 stdout F [INFO] 10.217.0.87:45608 - 64359 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000469703s 2025-08-13T20:11:08.820818864+00:00 stdout F [INFO] 10.217.0.87:39399 - 5121 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000514944s 2025-08-13T20:11:08.849384853+00:00 stdout F [INFO] 10.217.0.87:40247 - 24315 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068251s 2025-08-13T20:11:08.850164516+00:00 stdout F [INFO] 10.217.0.87:59475 - 26937 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001337518s 2025-08-13T20:11:08.874556005+00:00 stdout F [INFO] 10.217.0.87:57108 - 6458 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000702661s 2025-08-13T20:11:08.874619757+00:00 stdout F [INFO] 10.217.0.87:44884 - 13914 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001088391s 2025-08-13T20:11:08.888978959+00:00 stdout F [INFO] 10.217.0.87:53148 - 29687 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000759401s 2025-08-13T20:11:08.889120743+00:00 stdout F [INFO] 10.217.0.87:38152 - 61846 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000890716s 2025-08-13T20:11:08.905832352+00:00 stdout F [INFO] 10.217.0.87:33541 - 15399 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000869564s 2025-08-13T20:11:08.906256264+00:00 stdout F [INFO] 10.217.0.87:36658 - 33176 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001183774s 2025-08-13T20:11:08.942624797+00:00 stdout F [INFO] 10.217.0.87:60473 - 46967 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000935207s 2025-08-13T20:11:08.942624797+00:00 stdout F [INFO] 10.217.0.87:42459 - 50629 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000915966s 2025-08-13T20:11:08.956306379+00:00 stdout F [INFO] 10.217.0.87:39704 - 64412 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069933s 2025-08-13T20:11:08.956390161+00:00 stdout F [INFO] 10.217.0.87:35286 - 54664 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000657089s 2025-08-13T20:11:09.009847054+00:00 stdout F [INFO] 10.217.0.87:54051 - 28377 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000598137s 2025-08-13T20:11:09.010077731+00:00 stdout F [INFO] 10.217.0.87:37021 - 5506 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000633918s 2025-08-13T20:11:09.064643675+00:00 stdout F [INFO] 10.217.0.87:48804 - 14694 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000769092s 2025-08-13T20:11:09.064643675+00:00 stdout F [INFO] 10.217.0.87:40276 - 41515 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000978868s 2025-08-13T20:11:09.075868827+00:00 stdout F [INFO] 10.217.0.87:52252 - 12463 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000681859s 2025-08-13T20:11:09.076070703+00:00 stdout F [INFO] 10.217.0.87:40170 - 19513 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001002759s 2025-08-13T20:11:09.115422351+00:00 stdout F [INFO] 10.217.0.87:37118 - 64783 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000903006s 2025-08-13T20:11:09.115540514+00:00 stdout F [INFO] 10.217.0.87:55116 - 52020 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000845964s 2025-08-13T20:11:09.123530044+00:00 stdout F [INFO] 10.217.0.87:46018 - 56449 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000654459s 2025-08-13T20:11:09.124037828+00:00 stdout F [INFO] 10.217.0.87:34184 - 43139 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000538175s 2025-08-13T20:11:09.130141053+00:00 stdout F [INFO] 10.217.0.87:56344 - 19 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000817974s 2025-08-13T20:11:09.130393150+00:00 stdout F [INFO] 10.217.0.87:46527 - 2906 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000835174s 2025-08-13T20:11:09.165076025+00:00 stdout F [INFO] 10.217.0.87:45989 - 3354 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000988799s 2025-08-13T20:11:09.165162427+00:00 stdout F [INFO] 10.217.0.87:40822 - 23382 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001084201s 2025-08-13T20:11:09.177453200+00:00 stdout F [INFO] 10.217.0.87:56347 - 34656 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000718601s 2025-08-13T20:11:09.177554863+00:00 stdout F [INFO] 10.217.0.87:45640 - 4199 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768752s 2025-08-13T20:11:09.185731777+00:00 stdout F [INFO] 10.217.0.87:36847 - 50417 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000633828s 2025-08-13T20:11:09.185731777+00:00 stdout F [INFO] 10.217.0.87:52308 - 60129 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000776212s 2025-08-13T20:11:09.221102691+00:00 stdout F [INFO] 10.217.0.87:47405 - 29150 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000558266s 2025-08-13T20:11:09.221102691+00:00 stdout F [INFO] 10.217.0.87:45271 - 42783 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000761961s 2025-08-13T20:11:09.231398136+00:00 stdout F [INFO] 10.217.0.87:33150 - 48647 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000595407s 2025-08-13T20:11:09.231444548+00:00 stdout F [INFO] 10.217.0.87:55990 - 64284 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000492205s 2025-08-13T20:11:09.241229768+00:00 stdout F [INFO] 10.217.0.87:35019 - 34439 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000612528s 2025-08-13T20:11:09.241851496+00:00 stdout F [INFO] 10.217.0.87:57066 - 18806 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000641939s 2025-08-13T20:11:09.275480630+00:00 stdout F [INFO] 10.217.0.87:46739 - 53451 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000772852s 2025-08-13T20:11:09.275853701+00:00 stdout F [INFO] 10.217.0.87:55427 - 54278 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00103188s 2025-08-13T20:11:09.276502500+00:00 stdout F [INFO] 10.217.0.87:49219 - 48743 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000627578s 2025-08-13T20:11:09.277077556+00:00 stdout F [INFO] 10.217.0.87:38267 - 55558 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001083721s 2025-08-13T20:11:09.285312502+00:00 stdout F [INFO] 10.217.0.87:57050 - 26639 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000527796s 2025-08-13T20:11:09.285312502+00:00 stdout F [INFO] 10.217.0.87:38263 - 35891 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000778362s 2025-08-13T20:11:09.328960133+00:00 stdout F [INFO] 10.217.0.87:35786 - 6519 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000763642s 2025-08-13T20:11:09.329544330+00:00 stdout F [INFO] 10.217.0.87:51541 - 48774 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001336938s 2025-08-13T20:11:09.330937810+00:00 stdout F [INFO] 10.217.0.87:45477 - 8398 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000767262s 2025-08-13T20:11:09.331439575+00:00 stdout F [INFO] 10.217.0.87:47873 - 29317 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001480123s 2025-08-13T20:11:09.339755763+00:00 stdout F [INFO] 10.217.0.87:53989 - 17357 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001479943s 2025-08-13T20:11:09.339946839+00:00 stdout F [INFO] 10.217.0.87:39615 - 48203 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001503723s 2025-08-13T20:11:09.371635847+00:00 stdout F [INFO] 10.217.0.87:33870 - 47553 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000803013s 2025-08-13T20:11:09.372254395+00:00 stdout F [INFO] 10.217.0.87:33091 - 45955 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001228876s 2025-08-13T20:11:09.392121074+00:00 stdout F [INFO] 10.217.0.87:43512 - 7191 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000619788s 2025-08-13T20:11:09.392121074+00:00 stdout F [INFO] 10.217.0.87:43215 - 45310 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000675009s 2025-08-13T20:11:09.427988023+00:00 stdout F [INFO] 10.217.0.87:33535 - 58269 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000669729s 2025-08-13T20:11:09.428161888+00:00 stdout F [INFO] 10.217.0.87:40410 - 40341 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000664079s 2025-08-13T20:11:09.448156561+00:00 stdout F [INFO] 10.217.0.87:54954 - 42761 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000628158s 2025-08-13T20:11:09.448156561+00:00 stdout F [INFO] 10.217.0.87:53421 - 38162 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00138773s 2025-08-13T20:11:09.449071847+00:00 stdout F [INFO] 10.217.0.87:55224 - 28314 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000527796s 2025-08-13T20:11:09.449071847+00:00 stdout F [INFO] 10.217.0.87:49659 - 6648 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000764562s 2025-08-13T20:11:09.482376662+00:00 stdout F [INFO] 10.217.0.87:48689 - 21850 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000638679s 2025-08-13T20:11:09.482376662+00:00 stdout F [INFO] 10.217.0.87:57101 - 38149 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000632798s 2025-08-13T20:11:09.503586530+00:00 stdout F [INFO] 10.217.0.87:52881 - 6877 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000808803s 2025-08-13T20:11:09.503586530+00:00 stdout F [INFO] 10.217.0.87:34201 - 21870 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000537925s 2025-08-13T20:11:09.505160085+00:00 stdout F [INFO] 10.217.0.87:35048 - 16070 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000575357s 2025-08-13T20:11:09.505404882+00:00 stdout F [INFO] 10.217.0.87:41863 - 24054 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00105403s 2025-08-13T20:11:09.537342868+00:00 stdout F [INFO] 10.217.0.87:60456 - 44692 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000797203s 2025-08-13T20:11:09.537342868+00:00 stdout F [INFO] 10.217.0.87:38893 - 64497 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001186754s 2025-08-13T20:11:09.565609508+00:00 stdout F [INFO] 10.217.0.87:49518 - 2987 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000829094s 2025-08-13T20:11:09.566091892+00:00 stdout F [INFO] 10.217.0.87:33729 - 40144 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000658899s 2025-08-13T20:11:09.567072360+00:00 stdout F [INFO] 10.217.0.87:53459 - 9026 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000851374s 2025-08-13T20:11:09.567072360+00:00 stdout F [INFO] 10.217.0.87:51738 - 51961 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000828444s 2025-08-13T20:11:09.619399371+00:00 stdout F [INFO] 10.217.0.87:46668 - 55956 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000985679s 2025-08-13T20:11:09.619708900+00:00 stdout F [INFO] 10.217.0.87:42988 - 40163 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000664769s 2025-08-13T20:11:09.621897792+00:00 stdout F [INFO] 10.217.0.87:48469 - 28257 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000531815s 2025-08-13T20:11:09.622085258+00:00 stdout F [INFO] 10.217.0.87:56697 - 386 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0006824s 2025-08-13T20:11:09.644941383+00:00 stdout F [INFO] 10.217.0.87:50544 - 34100 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001134353s 2025-08-13T20:11:09.645365005+00:00 stdout F [INFO] 10.217.0.87:53854 - 56413 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00140471s 2025-08-13T20:11:09.676304942+00:00 stdout F [INFO] 10.217.0.87:56020 - 5860 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000740331s 2025-08-13T20:11:09.676304942+00:00 stdout F [INFO] 10.217.0.87:39917 - 33779 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000581146s 2025-08-13T20:11:09.678225417+00:00 stdout F [INFO] 10.217.0.87:44903 - 15129 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000491185s 2025-08-13T20:11:09.678534746+00:00 stdout F [INFO] 10.217.0.87:47391 - 53784 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000419112s 2025-08-13T20:11:09.699531377+00:00 stdout F [INFO] 10.217.0.87:40503 - 29441 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000648348s 2025-08-13T20:11:09.699735143+00:00 stdout F [INFO] 10.217.0.87:56421 - 20914 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000838404s 2025-08-13T20:11:09.732739419+00:00 stdout F [INFO] 10.217.0.87:60895 - 25087 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000904066s 2025-08-13T20:11:09.732879243+00:00 stdout F [INFO] 10.217.0.87:45233 - 38453 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001086661s 2025-08-13T20:11:09.754002619+00:00 stdout F [INFO] 10.217.0.87:47687 - 30565 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000753871s 2025-08-13T20:11:09.754062771+00:00 stdout F [INFO] 10.217.0.87:56598 - 25031 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000797913s 2025-08-13T20:11:09.791738541+00:00 stdout F [INFO] 10.217.0.87:35890 - 8037 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070223s 2025-08-13T20:11:09.791738541+00:00 stdout F [INFO] 10.217.0.87:54483 - 44678 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000723801s 2025-08-13T20:11:09.801146801+00:00 stdout F [INFO] 10.217.0.87:50514 - 38291 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00313687s 2025-08-13T20:11:09.803285852+00:00 stdout F [INFO] 10.217.0.87:39839 - 32977 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001900175s 2025-08-13T20:11:09.810966062+00:00 stdout F [INFO] 10.217.0.87:48258 - 20246 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002353797s 2025-08-13T20:11:09.811439386+00:00 stdout F [INFO] 10.217.0.87:60135 - 51291 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002544313s 2025-08-13T20:11:09.815948235+00:00 stdout F [INFO] 10.217.0.87:40563 - 6633 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000910756s 2025-08-13T20:11:09.815948235+00:00 stdout F [INFO] 10.217.0.87:37855 - 63755 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000946797s 2025-08-13T20:11:09.855902510+00:00 stdout F [INFO] 10.217.0.87:55357 - 52910 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000672089s 2025-08-13T20:11:09.856062125+00:00 stdout F [INFO] 10.217.0.87:56983 - 37252 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001012639s 2025-08-13T20:11:09.871047725+00:00 stdout F [INFO] 10.217.0.87:35549 - 20648 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000976718s 2025-08-13T20:11:09.871047725+00:00 stdout F [INFO] 10.217.0.87:44595 - 1932 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002394738s 2025-08-13T20:11:09.871549349+00:00 stdout F [INFO] 10.217.0.87:40336 - 25626 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001868553s 2025-08-13T20:11:09.871549349+00:00 stdout F [INFO] 10.217.0.87:46912 - 10871 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003059918s 2025-08-13T20:11:09.910681771+00:00 stdout F [INFO] 10.217.0.87:45858 - 55022 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001129743s 2025-08-13T20:11:09.910974900+00:00 stdout F [INFO] 10.217.0.87:55087 - 63530 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001228406s 2025-08-13T20:11:09.914111879+00:00 stdout F [INFO] 10.217.0.87:38666 - 64429 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000674719s 2025-08-13T20:11:09.914236783+00:00 stdout F [INFO] 10.217.0.87:47423 - 54243 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000635178s 2025-08-13T20:11:09.924884558+00:00 stdout F [INFO] 10.217.0.87:55691 - 6387 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000615748s 2025-08-13T20:11:09.925091264+00:00 stdout F [INFO] 10.217.0.87:36726 - 7563 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000820113s 2025-08-13T20:11:09.933855386+00:00 stdout F [INFO] 10.217.0.87:41358 - 31832 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000607047s 2025-08-13T20:11:09.934167695+00:00 stdout F [INFO] 10.217.0.87:54170 - 36712 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000620148s 2025-08-13T20:11:09.967295634+00:00 stdout F [INFO] 10.217.0.87:58101 - 34961 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000751712s 2025-08-13T20:11:09.967366646+00:00 stdout F [INFO] 10.217.0.87:49474 - 38655 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000732611s 2025-08-13T20:11:09.969581790+00:00 stdout F [INFO] 10.217.0.87:47263 - 41007 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000959338s 2025-08-13T20:11:09.969630631+00:00 stdout F [INFO] 10.217.0.87:58934 - 8505 "AAAA IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00107031s 2025-08-13T20:11:09.979130774+00:00 stdout F [INFO] 10.217.0.87:58145 - 58179 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000754711s 2025-08-13T20:11:09.979260177+00:00 stdout F [INFO] 10.217.0.87:38045 - 32921 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000793963s 2025-08-13T20:11:09.988431340+00:00 stdout F [INFO] 10.217.0.87:58901 - 15757 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000785472s 2025-08-13T20:11:09.988598135+00:00 stdout F [INFO] 10.217.0.87:40838 - 44964 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001242825s 2025-08-13T20:11:10.020712196+00:00 stdout F [INFO] 10.217.0.87:47276 - 32294 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000511265s 2025-08-13T20:11:10.020857110+00:00 stdout F [INFO] 10.217.0.87:53692 - 50495 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000708841s 2025-08-13T20:11:10.031652339+00:00 stdout F [INFO] 10.217.0.87:35587 - 6934 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000771622s 2025-08-13T20:11:10.031652339+00:00 stdout F [INFO] 10.217.0.87:49877 - 38155 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000954388s 2025-08-13T20:11:10.040906155+00:00 stdout F [INFO] 10.217.0.87:38169 - 44285 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000416262s 2025-08-13T20:11:10.041016538+00:00 stdout F [INFO] 10.217.0.87:48760 - 59142 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00070476s 2025-08-13T20:11:10.075523907+00:00 stdout F [INFO] 10.217.0.87:40850 - 53105 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000811223s 2025-08-13T20:11:10.075755104+00:00 stdout F [INFO] 10.217.0.87:38297 - 14321 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000948948s 2025-08-13T20:11:10.084645699+00:00 stdout F [INFO] 10.217.0.87:34519 - 42542 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000430953s 2025-08-13T20:11:10.084704071+00:00 stdout F [INFO] 10.217.0.87:43438 - 12631 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00067879s 2025-08-13T20:11:10.093303337+00:00 stdout F [INFO] 10.217.0.87:56908 - 17450 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000708811s 2025-08-13T20:11:10.093442521+00:00 stdout F [INFO] 10.217.0.87:51618 - 15318 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000817544s 2025-08-13T20:11:10.101313267+00:00 stdout F [INFO] 10.217.0.87:37133 - 19887 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000712141s 2025-08-13T20:11:10.101512533+00:00 stdout F [INFO] 10.217.0.87:50412 - 11192 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000948597s 2025-08-13T20:11:10.129508615+00:00 stdout F [INFO] 10.217.0.87:43831 - 20450 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000750722s 2025-08-13T20:11:10.129556986+00:00 stdout F [INFO] 10.217.0.87:52358 - 12808 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000957057s 2025-08-13T20:11:10.136328551+00:00 stdout F [INFO] 10.217.0.87:52783 - 29135 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000594527s 2025-08-13T20:11:10.136569338+00:00 stdout F [INFO] 10.217.0.87:34678 - 60145 "A IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000918516s 2025-08-13T20:11:10.152319909+00:00 stdout F [INFO] 10.217.0.87:37166 - 20284 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00071952s 2025-08-13T20:11:10.152695840+00:00 stdout F [INFO] 10.217.0.87:58908 - 31337 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001094711s 2025-08-13T20:11:10.186369485+00:00 stdout F [INFO] 10.217.0.87:40691 - 64 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00106306s 2025-08-13T20:11:10.186424177+00:00 stdout F [INFO] 10.217.0.87:59312 - 58681 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000909676s 2025-08-13T20:11:10.212445733+00:00 stdout F [INFO] 10.217.0.87:36036 - 47426 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001638717s 2025-08-13T20:11:10.212445733+00:00 stdout F [INFO] 10.217.0.87:47264 - 35254 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001092741s 2025-08-13T20:11:10.214609215+00:00 stdout F [INFO] 10.217.0.87:60530 - 26229 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000657939s 2025-08-13T20:11:10.214609215+00:00 stdout F [INFO] 10.217.0.87:41544 - 60427 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001028729s 2025-08-13T20:11:10.240992232+00:00 stdout F [INFO] 10.217.0.87:54263 - 21128 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001061571s 2025-08-13T20:11:10.241217088+00:00 stdout F [INFO] 10.217.0.87:47748 - 54361 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001358849s 2025-08-13T20:11:10.282300966+00:00 stdout F [INFO] 10.217.0.87:49674 - 704 "A IN quay.io.crc.testing. 
udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.003252953s 2025-08-13T20:11:10.282300966+00:00 stdout F [INFO] 10.217.0.87:44821 - 63231 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.00314728s 2025-08-13T20:11:10.291850080+00:00 stdout F [INFO] 10.217.0.87:35498 - 30057 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000780553s 2025-08-13T20:11:10.292084226+00:00 stdout F [INFO] 10.217.0.87:34433 - 61062 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000761342s 2025-08-13T20:11:10.308115376+00:00 stdout F [INFO] 10.217.0.87:45267 - 53227 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000601597s 2025-08-13T20:11:10.308174398+00:00 stdout F [INFO] 10.217.0.87:53644 - 2574 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000985089s 2025-08-13T20:11:10.345653612+00:00 stdout F [INFO] 10.217.0.87:54134 - 43179 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000740462s 2025-08-13T20:11:10.345706304+00:00 stdout F [INFO] 10.217.0.87:51934 - 41503 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068563s 2025-08-13T20:11:10.347835605+00:00 stdout F [INFO] 10.217.0.87:48788 - 65440 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000528275s 2025-08-13T20:11:10.348054241+00:00 stdout F [INFO] 10.217.0.87:46192 - 13272 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000802233s 2025-08-13T20:11:10.365227793+00:00 stdout F [INFO] 10.217.0.87:56813 - 61336 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000841114s 2025-08-13T20:11:10.365558593+00:00 stdout F [INFO] 10.217.0.87:56406 - 44405 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001118022s 2025-08-13T20:11:10.404568631+00:00 stdout F [INFO] 10.217.0.87:39247 - 53082 "AAAA IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.000962568s 2025-08-13T20:11:10.404568631+00:00 stdout F [INFO] 10.217.0.87:39015 - 64029 "A IN quay.io.crc.testing. udp 48 false 1232" NXDOMAIN qr,rd,ra 37 0.001217245s 2025-08-13T20:11:10.407330981+00:00 stdout F [INFO] 10.217.0.87:48101 - 3473 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00104135s 2025-08-13T20:11:10.407428593+00:00 stdout F [INFO] 10.217.0.87:54432 - 43945 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000922756s 2025-08-13T20:11:10.427216141+00:00 stdout F [INFO] 10.217.0.87:58629 - 23429 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069069s 2025-08-13T20:11:10.427321594+00:00 stdout F [INFO] 10.217.0.87:53692 - 13549 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000727961s 2025-08-13T20:11:10.442966342+00:00 stdout F [INFO] 10.217.0.87:35003 - 8346 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000654699s 2025-08-13T20:11:10.443015794+00:00 stdout F [INFO] 10.217.0.87:57738 - 6845 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000876895s 2025-08-13T20:11:10.462218544+00:00 stdout F [INFO] 10.217.0.87:38434 - 47432 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001120352s 2025-08-13T20:11:10.462218544+00:00 stdout F [INFO] 10.217.0.87:46544 - 18300 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001356399s 2025-08-13T20:11:10.483870565+00:00 stdout F [INFO] 10.217.0.87:56416 - 36850 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000761392s 2025-08-13T20:11:10.483943937+00:00 stdout F [INFO] 10.217.0.87:35365 - 54110 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001481532s 2025-08-13T20:11:10.497518346+00:00 stdout F [INFO] 10.217.0.87:36394 - 20484 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000662119s 2025-08-13T20:11:10.497561348+00:00 stdout F [INFO] 10.217.0.87:47551 - 7794 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000662119s 2025-08-13T20:11:10.524954413+00:00 stdout F [INFO] 10.217.0.87:58045 - 1135 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000569947s 2025-08-13T20:11:10.525179859+00:00 stdout F [INFO] 10.217.0.87:45550 - 18182 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00102599s 2025-08-13T20:11:10.535454884+00:00 stdout F [INFO] 10.217.0.87:38958 - 24486 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001087191s 2025-08-13T20:11:10.535617379+00:00 stdout F [INFO] 10.217.0.87:35462 - 51414 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001264926s 2025-08-13T20:11:10.555337624+00:00 stdout F [INFO] 10.217.0.87:37227 - 51475 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000821594s 2025-08-13T20:11:10.555405996+00:00 stdout F [INFO] 10.217.0.87:35920 - 64361 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069905s 2025-08-13T20:11:10.582638297+00:00 stdout F [INFO] 10.217.0.87:46664 - 5406 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000808283s 2025-08-13T20:11:10.582867113+00:00 stdout F [INFO] 10.217.0.87:37306 - 48071 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00106375s 2025-08-13T20:11:10.594028313+00:00 stdout F [INFO] 10.217.0.87:59666 - 45350 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001113462s 2025-08-13T20:11:10.594028313+00:00 stdout F [INFO] 10.217.0.87:53509 - 55627 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001300197s 2025-08-13T20:11:10.610093124+00:00 stdout F [INFO] 10.217.0.87:46572 - 58676 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001320778s 2025-08-13T20:11:10.610148496+00:00 stdout F [INFO] 10.217.0.87:40323 - 64318 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00139729s 2025-08-13T20:11:10.635853543+00:00 stdout F [INFO] 10.217.0.87:53169 - 9990 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069213s 2025-08-13T20:11:10.636072669+00:00 stdout F [INFO] 10.217.0.87:50177 - 43293 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000754261s 2025-08-13T20:11:10.668321404+00:00 stdout F [INFO] 10.217.0.87:53293 - 62044 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001576965s 2025-08-13T20:11:10.668321404+00:00 stdout F [INFO] 10.217.0.87:57662 - 55233 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001761411s 2025-08-13T20:11:10.697836330+00:00 stdout F [INFO] 10.217.0.87:41254 - 48236 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001464082s 2025-08-13T20:11:10.697836330+00:00 stdout F [INFO] 10.217.0.87:37182 - 29642 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001693129s 2025-08-13T20:11:10.725196374+00:00 stdout F [INFO] 10.217.0.87:34029 - 25028 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000731301s 2025-08-13T20:11:10.725431641+00:00 stdout F [INFO] 10.217.0.87:51302 - 1519 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000817833s 2025-08-13T20:11:10.739264747+00:00 stdout F [INFO] 10.217.0.87:46810 - 32281 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000892646s 2025-08-13T20:11:10.739420212+00:00 stdout F [INFO] 10.217.0.87:43458 - 39469 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000972398s 2025-08-13T20:11:10.751142388+00:00 stdout F [INFO] 10.217.0.87:36895 - 9809 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000612998s 2025-08-13T20:11:10.751430606+00:00 stdout F [INFO] 10.217.0.87:39596 - 65208 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000677019s 2025-08-13T20:11:10.764475080+00:00 stdout F [INFO] 10.217.0.87:47548 - 14641 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000989968s 2025-08-13T20:11:10.764557333+00:00 stdout F [INFO] 10.217.0.87:43050 - 3786 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000903196s 2025-08-13T20:11:10.779200152+00:00 stdout F [INFO] 10.217.0.87:37918 - 49565 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001230145s 2025-08-13T20:11:10.779329346+00:00 stdout F [INFO] 10.217.0.87:41188 - 48358 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001293657s 2025-08-13T20:11:10.791840205+00:00 stdout F [INFO] 10.217.0.87:56862 - 47485 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001256216s 2025-08-13T20:11:10.791965058+00:00 stdout F [INFO] 10.217.0.87:56497 - 6626 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001321368s 2025-08-13T20:11:10.806885366+00:00 stdout F [INFO] 10.217.0.87:42559 - 25132 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000740422s 2025-08-13T20:11:10.807054091+00:00 stdout F [INFO] 10.217.0.87:38089 - 18651 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000656199s 2025-08-13T20:11:10.821087963+00:00 stdout F [INFO] 10.217.0.87:53932 - 43404 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000529605s 2025-08-13T20:11:10.821087963+00:00 stdout F [INFO] 10.217.0.87:53669 - 32799 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000495664s 2025-08-13T20:11:10.838031589+00:00 stdout F [INFO] 10.217.0.87:48051 - 47009 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00069481s 2025-08-13T20:11:10.838031589+00:00 stdout F [INFO] 10.217.0.87:49633 - 2906 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000731321s 2025-08-13T20:11:10.847942923+00:00 stdout F [INFO] 10.217.0.87:40866 - 54204 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000492464s 2025-08-13T20:11:10.847942923+00:00 stdout F [INFO] 10.217.0.87:55638 - 5418 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000507535s 2025-08-13T20:11:10.859496155+00:00 stdout F [INFO] 10.217.0.87:43040 - 23737 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000779872s 2025-08-13T20:11:10.859560776+00:00 stdout F [INFO] 10.217.0.87:32768 - 51691 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000733041s 2025-08-13T20:11:10.897474393+00:00 stdout F [INFO] 10.217.0.87:44023 - 5516 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068915s 2025-08-13T20:11:10.898269516+00:00 stdout F [INFO] 10.217.0.87:43332 - 54899 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001154573s 2025-08-13T20:11:10.913750840+00:00 stdout F [INFO] 10.217.0.87:44629 - 53763 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000848825s 2025-08-13T20:11:10.913846803+00:00 stdout F [INFO] 10.217.0.87:47543 - 42445 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000658709s 2025-08-13T20:11:10.949764543+00:00 stdout F [INFO] 10.217.0.87:60784 - 25294 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001517523s 2025-08-13T20:11:10.949888526+00:00 stdout F [INFO] 10.217.0.87:43931 - 17207 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00175024s 2025-08-13T20:11:10.968321445+00:00 stdout F [INFO] 10.217.0.87:39431 - 5861 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000533925s 2025-08-13T20:11:10.969151149+00:00 stdout F [INFO] 10.217.0.87:36790 - 35647 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001096441s 2025-08-13T20:11:10.989866742+00:00 stdout F [INFO] 10.217.0.87:40960 - 23472 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000554456s 2025-08-13T20:11:10.990262134+00:00 stdout F [INFO] 10.217.0.87:51522 - 65437 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000818503s 2025-08-13T20:11:11.006375476+00:00 stdout F [INFO] 10.217.0.87:52401 - 28255 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000934187s 2025-08-13T20:11:11.006555201+00:00 stdout F [INFO] 10.217.0.87:44627 - 52350 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000977718s 2025-08-13T20:11:11.006699505+00:00 stdout F [INFO] 10.217.0.87:39108 - 49176 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000870595s 2025-08-13T20:11:11.006741886+00:00 stdout F [INFO] 10.217.0.87:34385 - 6198 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001256896s 2025-08-13T20:11:11.032143735+00:00 stdout F [INFO] 10.217.0.87:49275 - 4543 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000543846s 2025-08-13T20:11:11.032388102+00:00 stdout F [INFO] 10.217.0.87:53875 - 52880 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000647548s 2025-08-13T20:11:11.042664306+00:00 stdout F [INFO] 10.217.0.87:49993 - 57969 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000487754s 2025-08-13T20:11:11.042990406+00:00 stdout F [INFO] 10.217.0.87:42332 - 18348 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00072954s 2025-08-13T20:11:11.060284971+00:00 stdout F [INFO] 10.217.0.87:46737 - 15987 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000730151s 2025-08-13T20:11:11.061205788+00:00 stdout F [INFO] 10.217.0.87:37223 - 10334 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001298888s 2025-08-13T20:11:11.061591239+00:00 stdout F [INFO] 10.217.0.87:46133 - 23565 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000556496s 2025-08-13T20:11:11.061872807+00:00 stdout F [INFO] 10.217.0.87:35095 - 42296 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000506325s 2025-08-13T20:11:11.086639487+00:00 stdout F [INFO] 10.217.0.87:34121 - 241 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000619908s 2025-08-13T20:11:11.086905245+00:00 stdout F [INFO] 10.217.0.87:53216 - 15697 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000627458s 2025-08-13T20:11:11.099091904+00:00 stdout F [INFO] 10.217.0.87:52035 - 14176 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000739021s 2025-08-13T20:11:11.099598979+00:00 stdout F [INFO] 10.217.0.87:43061 - 9950 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000636628s 2025-08-13T20:11:11.115611158+00:00 stdout F [INFO] 10.217.0.87:55396 - 28029 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00105816s 2025-08-13T20:11:11.115710071+00:00 stdout F [INFO] 10.217.0.87:47389 - 50072 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001448772s 2025-08-13T20:11:11.116040450+00:00 stdout F [INFO] 10.217.0.87:38105 - 49459 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001355359s 2025-08-13T20:11:11.116375710+00:00 stdout F [INFO] 10.217.0.87:60529 - 58340 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001838553s 2025-08-13T20:11:11.139878123+00:00 stdout F [INFO] 10.217.0.87:34386 - 39284 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000587697s 2025-08-13T20:11:11.139997127+00:00 stdout F [INFO] 10.217.0.87:33415 - 62766 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000665539s 2025-08-13T20:11:11.153224846+00:00 stdout F [INFO] 10.217.0.87:58006 - 26905 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000784372s 2025-08-13T20:11:11.153637368+00:00 stdout F [INFO] 10.217.0.87:56675 - 53466 "A IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001014779s 2025-08-13T20:11:11.173482927+00:00 stdout F [INFO] 10.217.0.87:37374 - 59512 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002216123s 2025-08-13T20:11:11.173804926+00:00 stdout F [INFO] 10.217.0.87:43308 - 15355 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.002719148s 2025-08-13T20:11:11.193315066+00:00 stdout F [INFO] 10.217.0.87:39323 - 45299 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000551476s 2025-08-13T20:11:11.193843811+00:00 stdout F [INFO] 10.217.0.87:60659 - 56528 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000749141s 2025-08-13T20:11:11.203904119+00:00 stdout F [INFO] 10.217.0.87:35418 - 56911 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000771722s 2025-08-13T20:11:11.204142896+00:00 stdout F [INFO] 10.217.0.87:39299 - 56238 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000817754s 2025-08-13T20:11:11.206533775+00:00 stdout F [INFO] 10.217.0.87:39203 - 4198 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000597697s 2025-08-13T20:11:11.206586266+00:00 stdout F [INFO] 10.217.0.87:33058 - 32028 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000648949s 2025-08-13T20:11:11.221888475+00:00 stdout F [INFO] 10.217.0.87:57419 - 33230 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000789802s 2025-08-13T20:11:11.222177353+00:00 stdout F [INFO] 10.217.0.87:45855 - 55824 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001099731s 2025-08-13T20:11:11.252482972+00:00 stdout F [INFO] 10.217.0.87:49168 - 33322 "AAAA IN registry.access.redhat.com.crc.testing. 
udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000678s 2025-08-13T20:11:11.253498791+00:00 stdout F [INFO] 10.217.0.87:41421 - 56012 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.001364589s 2025-08-13T20:11:11.258970418+00:00 stdout F [INFO] 10.217.0.87:34340 - 41230 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000517045s 2025-08-13T20:11:11.259189624+00:00 stdout F [INFO] 10.217.0.87:36332 - 57392 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.0006936s 2025-08-13T20:11:11.277766107+00:00 stdout F [INFO] 10.217.0.87:55318 - 35910 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000542755s 2025-08-13T20:11:11.277852489+00:00 stdout F [INFO] 10.217.0.87:57870 - 21939 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000753292s 2025-08-13T20:11:11.307979793+00:00 stdout F [INFO] 10.217.0.87:51817 - 47893 "A IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000649408s 2025-08-13T20:11:11.308105307+00:00 stdout F [INFO] 10.217.0.87:33695 - 38695 "AAAA IN registry.access.redhat.com.crc.testing. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.000747191s 2025-08-13T20:11:11.313892163+00:00 stdout F [INFO] 10.217.0.87:35898 - 19074 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00068864s 2025-08-13T20:11:11.314104319+00:00 stdout F [INFO] 10.217.0.87:47876 - 34289 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000799533s 2025-08-13T20:11:11.367508110+00:00 stdout F [INFO] 10.217.0.87:52614 - 11229 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.00069437s 2025-08-13T20:11:11.367664444+00:00 stdout F [INFO] 10.217.0.87:59094 - 25413 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.000768972s 2025-08-13T20:11:11.643660627+00:00 stdout F [INFO] 10.217.0.62:54796 - 40368 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001262316s 2025-08-13T20:11:11.643823292+00:00 stdout F [INFO] 10.217.0.62:57586 - 55468 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001439811s 2025-08-13T20:11:22.864396336+00:00 stdout F [INFO] 10.217.0.8:43207 - 51542 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001437441s 2025-08-13T20:11:22.864396336+00:00 stdout F [INFO] 10.217.0.8:32904 - 36126 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001346429s 2025-08-13T20:11:22.866861087+00:00 stdout F [INFO] 10.217.0.8:58933 - 4423 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001564905s 2025-08-13T20:11:22.867523206+00:00 stdout F [INFO] 10.217.0.8:59732 - 57745 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000567686s 2025-08-13T20:11:35.191236613+00:00 stdout F [INFO] 10.217.0.19:43624 - 31557 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004043076s 2025-08-13T20:11:35.191360136+00:00 stdout F [INFO] 10.217.0.19:42091 - 49079 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004665044s 2025-08-13T20:11:41.626012045+00:00 stdout F [INFO] 10.217.0.62:49209 - 7004 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00136998s 2025-08-13T20:11:41.626231631+00:00 stdout F [INFO] 10.217.0.62:48100 - 28186 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001462432s 2025-08-13T20:11:45.337920649+00:00 stdout F [INFO] 10.217.0.19:55508 - 38516 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000946417s 2025-08-13T20:11:45.337920649+00:00 stdout F [INFO] 10.217.0.19:57786 - 17416 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001212935s 2025-08-13T20:11:53.267739152+00:00 stdout F [INFO] 10.217.0.19:38914 - 46916 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00105049s 2025-08-13T20:11:53.268465283+00:00 stdout F [INFO] 10.217.0.19:41853 - 32507 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001277896s 2025-08-13T20:11:56.369747159+00:00 stdout F [INFO] 10.217.0.64:34223 - 33431 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004252351s 2025-08-13T20:11:56.369747159+00:00 stdout F [INFO] 10.217.0.64:34960 - 46826 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004715415s 2025-08-13T20:11:56.369747159+00:00 stdout F [INFO] 10.217.0.64:55176 - 46961 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002248995s 2025-08-13T20:11:56.369747159+00:00 stdout F [INFO] 10.217.0.64:47156 - 33166 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003679365s 2025-08-13T20:12:02.567650759+00:00 stdout F [INFO] 10.217.0.45:44131 - 57505 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00210487s 2025-08-13T20:12:02.567650759+00:00 stdout F [INFO] 10.217.0.45:47913 - 32986 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00244551s 2025-08-13T20:12:05.182422296+00:00 stdout F [INFO] 10.217.0.19:34869 - 29012 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00175998s 2025-08-13T20:12:05.187282006+00:00 stdout F [INFO] 10.217.0.19:47330 - 29258 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003057838s 2025-08-13T20:12:11.627950944+00:00 stdout F [INFO] 10.217.0.62:40275 - 29937 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002399889s 2025-08-13T20:12:11.628227361+00:00 stdout F [INFO] 10.217.0.62:39484 - 32290 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003273974s 2025-08-13T20:12:22.871627992+00:00 stdout F [INFO] 10.217.0.8:39575 - 52483 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004259432s 2025-08-13T20:12:22.871905410+00:00 stdout F [INFO] 10.217.0.8:53081 - 41816 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00452466s 2025-08-13T20:12:22.876477911+00:00 stdout F [INFO] 10.217.0.8:35821 - 12538 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002057629s 2025-08-13T20:12:22.877191531+00:00 stdout F [INFO] 10.217.0.8:48890 - 2287 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00069125s 2025-08-13T20:12:35.190887254+00:00 stdout F [INFO] 10.217.0.19:60434 - 29890 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002610955s 2025-08-13T20:12:35.191185352+00:00 stdout F [INFO] 10.217.0.19:47054 - 51416 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002316357s 2025-08-13T20:12:41.625981740+00:00 stdout F [INFO] 10.217.0.62:40202 - 46466 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001112132s 2025-08-13T20:12:41.625981740+00:00 stdout F [INFO] 10.217.0.62:38107 - 12855 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001321438s 2025-08-13T20:12:54.987282390+00:00 stdout F [INFO] 10.217.0.19:45800 - 50279 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003032327s 2025-08-13T20:12:54.990679337+00:00 stdout F [INFO] 10.217.0.19:53152 - 56491 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003755488s 2025-08-13T20:12:56.370738424+00:00 stdout F [INFO] 10.217.0.64:35513 - 44991 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001019439s 2025-08-13T20:12:56.370738424+00:00 stdout F [INFO] 10.217.0.64:45882 - 50490 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001013849s 2025-08-13T20:12:56.370738424+00:00 stdout F [INFO] 10.217.0.64:37570 - 4790 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001625246s 2025-08-13T20:12:56.376464509+00:00 stdout F [INFO] 10.217.0.64:46358 - 22273 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.008205426s 2025-08-13T20:13:02.615040274+00:00 stdout F [INFO] 10.217.0.45:37088 - 44026 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000938007s 2025-08-13T20:13:02.615040274+00:00 stdout F [INFO] 10.217.0.45:35431 - 27214 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000953788s 2025-08-13T20:13:05.198056111+00:00 stdout F [INFO] 10.217.0.19:35572 - 34597 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000881615s 2025-08-13T20:13:05.198056111+00:00 stdout F [INFO] 10.217.0.19:59582 - 43606 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002272025s 2025-08-13T20:13:11.639921623+00:00 stdout F [INFO] 10.217.0.62:52711 - 32476 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001424541s 2025-08-13T20:13:11.639921623+00:00 stdout F [INFO] 10.217.0.62:57132 - 59874 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001547004s 2025-08-13T20:13:22.865124441+00:00 stdout F [INFO] 10.217.0.8:59533 - 12813 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001793821s 2025-08-13T20:13:22.865124441+00:00 stdout F [INFO] 10.217.0.8:35766 - 13841 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002905793s 2025-08-13T20:13:22.866911342+00:00 stdout F [INFO] 10.217.0.8:35077 - 20431 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000996988s 2025-08-13T20:13:22.867195700+00:00 stdout F [INFO] 10.217.0.8:60321 - 18676 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001366739s 2025-08-13T20:13:35.203554812+00:00 stdout F [INFO] 10.217.0.19:50652 - 1843 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001442701s 2025-08-13T20:13:35.204251312+00:00 stdout F [INFO] 10.217.0.19:44566 - 54018 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004432657s 2025-08-13T20:13:41.632163817+00:00 stdout F [INFO] 10.217.0.62:59452 - 58538 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001657787s 2025-08-13T20:13:41.632163817+00:00 stdout F [INFO] 10.217.0.62:33873 - 61617 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00068974s 2025-08-13T20:13:51.958542921+00:00 stdout F [INFO] 10.217.0.19:34570 - 51195 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001959286s 2025-08-13T20:13:51.958542921+00:00 stdout F [INFO] 10.217.0.19:52745 - 6433 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002972695s 2025-08-13T20:13:56.366582744+00:00 stdout F [INFO] 10.217.0.64:58857 - 31612 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001163063s 2025-08-13T20:13:56.366997556+00:00 stdout F [INFO] 10.217.0.64:38313 - 44558 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001188384s 2025-08-13T20:13:56.367223423+00:00 stdout F [INFO] 10.217.0.64:43580 - 26926 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001452902s 2025-08-13T20:13:56.368567151+00:00 stdout F [INFO] 10.217.0.64:34606 - 26831 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001230565s 2025-08-13T20:14:02.654198796+00:00 stdout F [INFO] 10.217.0.45:34942 - 58963 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001013979s 2025-08-13T20:14:02.654198796+00:00 stdout F [INFO] 10.217.0.45:33630 - 17756 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001152493s 2025-08-13T20:14:04.578540679+00:00 stdout F [INFO] 10.217.0.19:36000 - 321 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000795613s 2025-08-13T20:14:04.578714824+00:00 stdout F [INFO] 10.217.0.19:39517 - 29052 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000842304s 2025-08-13T20:14:05.187584410+00:00 stdout F [INFO] 10.217.0.19:46122 - 58576 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001175993s 2025-08-13T20:14:05.187721404+00:00 stdout F [INFO] 10.217.0.19:57402 - 32181 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001507053s 2025-08-13T20:14:11.629651830+00:00 stdout F [INFO] 10.217.0.62:59159 - 41701 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001221915s 2025-08-13T20:14:11.629651830+00:00 stdout F [INFO] 10.217.0.62:50146 - 57371 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00104821s 2025-08-13T20:14:22.866394636+00:00 stdout F [INFO] 10.217.0.8:37824 - 64992 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002725368s 2025-08-13T20:14:22.866496279+00:00 stdout F [INFO] 10.217.0.8:59572 - 3383 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002805781s 2025-08-13T20:14:22.868010292+00:00 stdout F [INFO] 10.217.0.8:60365 - 35111 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000760842s 2025-08-13T20:14:22.868010292+00:00 stdout F [INFO] 10.217.0.8:32914 - 8157 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001276487s 2025-08-13T20:14:35.203869808+00:00 stdout F [INFO] 10.217.0.19:60879 - 49232 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003147531s 2025-08-13T20:14:35.203869808+00:00 stdout F [INFO] 10.217.0.19:52879 - 60081 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003865761s 2025-08-13T20:14:41.632919724+00:00 stdout F [INFO] 10.217.0.62:59797 - 26522 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001363269s 2025-08-13T20:14:41.632919724+00:00 stdout F [INFO] 10.217.0.62:35790 - 34336 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001472952s 2025-08-13T20:14:56.364108879+00:00 stdout F [INFO] 10.217.0.64:43764 - 10505 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003449399s 2025-08-13T20:14:56.364148850+00:00 stdout F [INFO] 10.217.0.64:56395 - 23031 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005052034s 2025-08-13T20:14:56.364314075+00:00 stdout F [INFO] 10.217.0.64:38623 - 43236 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00524927s 2025-08-13T20:14:56.364473110+00:00 stdout F [INFO] 10.217.0.64:48325 - 42341 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003925912s 2025-08-13T20:15:02.699019916+00:00 stdout F [INFO] 10.217.0.45:34508 - 55080 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001459191s 2025-08-13T20:15:02.699019916+00:00 stdout F [INFO] 10.217.0.45:37620 - 24835 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000971367s 2025-08-13T20:15:05.193938748+00:00 stdout F [INFO] 10.217.0.19:40224 - 64478 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002663827s 2025-08-13T20:15:05.193938748+00:00 stdout F [INFO] 10.217.0.19:56288 - 41557 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004044026s 2025-08-13T20:15:11.637670792+00:00 stdout F [INFO] 10.217.0.62:47926 - 48891 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000847794s 2025-08-13T20:15:11.637670792+00:00 stdout F [INFO] 10.217.0.62:39812 - 38108 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000968387s 2025-08-13T20:15:14.207678284+00:00 stdout F [INFO] 10.217.0.19:50192 - 41833 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000742601s 2025-08-13T20:15:14.207824008+00:00 stdout F [INFO] 10.217.0.19:35203 - 22436 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001242815s 2025-08-13T20:15:22.865495047+00:00 stdout F [INFO] 10.217.0.8:48730 - 3306 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001072211s 2025-08-13T20:15:22.865621930+00:00 stdout F [INFO] 10.217.0.8:57376 - 21441 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000529565s 2025-08-13T20:15:22.866675980+00:00 stdout F [INFO] 10.217.0.8:43108 - 55457 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000485033s 2025-08-13T20:15:22.866750692+00:00 stdout F [INFO] 10.217.0.8:37354 - 59539 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000607437s 2025-08-13T20:15:30.900572435+00:00 stdout F [INFO] 10.217.0.73:36123 - 30169 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001139382s 2025-08-13T20:15:30.900679548+00:00 stdout F [INFO] 10.217.0.73:59681 - 33505 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001239995s 2025-08-13T20:15:35.189505304+00:00 stdout F [INFO] 10.217.0.19:57428 - 57631 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000825914s 2025-08-13T20:15:35.196183465+00:00 stdout F [INFO] 10.217.0.19:53814 - 50916 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001068761s 2025-08-13T20:15:41.629469340+00:00 stdout F [INFO] 10.217.0.62:34583 - 42316 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001442091s 2025-08-13T20:15:41.629735768+00:00 stdout F [INFO] 10.217.0.62:59091 - 44739 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001546074s 2025-08-13T20:15:50.643763762+00:00 stdout F [INFO] 10.217.0.19:39438 - 48635 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001065651s 2025-08-13T20:15:50.644611426+00:00 stdout F [INFO] 10.217.0.19:53652 - 38101 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001177314s 2025-08-13T20:15:56.362018229+00:00 stdout F [INFO] 10.217.0.64:41088 - 873 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001000629s 2025-08-13T20:15:56.362251565+00:00 stdout F [INFO] 10.217.0.64:54845 - 9984 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001465762s 2025-08-13T20:15:56.362723509+00:00 stdout F [INFO] 10.217.0.64:55499 - 33178 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001249626s 2025-08-13T20:15:56.363073269+00:00 stdout F [INFO] 10.217.0.64:47932 - 55426 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002146091s 2025-08-13T20:16:02.739548364+00:00 stdout F [INFO] 10.217.0.45:60222 - 28348 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000946617s 2025-08-13T20:16:02.739548364+00:00 stdout F [INFO] 10.217.0.45:48591 - 5325 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001547304s 2025-08-13T20:16:05.205548545+00:00 stdout F [INFO] 10.217.0.19:53559 - 36203 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001881074s 2025-08-13T20:16:05.209619061+00:00 stdout F [INFO] 10.217.0.19:49043 - 24811 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001894224s 2025-08-13T20:16:11.634174071+00:00 stdout F [INFO] 10.217.0.62:48721 - 37751 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002248584s 2025-08-13T20:16:11.634174071+00:00 stdout F [INFO] 10.217.0.62:38022 - 4312 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002118481s 2025-08-13T20:16:22.867687818+00:00 stdout F [INFO] 10.217.0.8:33638 - 1399 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002938574s 2025-08-13T20:16:22.867687818+00:00 stdout F [INFO] 10.217.0.8:60628 - 5377 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002751959s 2025-08-13T20:16:22.869538900+00:00 stdout F [INFO] 10.217.0.8:50403 - 34129 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001442112s 2025-08-13T20:16:22.869829829+00:00 stdout F [INFO] 10.217.0.8:56570 - 9246 "A IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001772s 2025-08-13T20:16:23.832644995+00:00 stdout F [INFO] 10.217.0.19:37855 - 59033 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001472782s 2025-08-13T20:16:23.833301754+00:00 stdout F [INFO] 10.217.0.19:51228 - 32795 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001648597s 2025-08-13T20:16:35.191683097+00:00 stdout F [INFO] 10.217.0.19:58939 - 48494 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001907314s 2025-08-13T20:16:35.191683097+00:00 stdout F [INFO] 10.217.0.19:58218 - 25332 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003341335s 2025-08-13T20:16:41.630991336+00:00 stdout F [INFO] 10.217.0.62:36001 - 43032 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001942895s 2025-08-13T20:16:41.630991336+00:00 stdout F [INFO] 10.217.0.62:47530 - 49232 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002037278s 2025-08-13T20:16:56.366145027+00:00 stdout F [INFO] 10.217.0.64:41050 - 32678 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004154149s 2025-08-13T20:16:56.366145027+00:00 stdout F [INFO] 10.217.0.64:40954 - 48337 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00383684s 2025-08-13T20:16:56.366145027+00:00 stdout F [INFO] 10.217.0.64:40716 - 7441 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003682005s 2025-08-13T20:16:56.366145027+00:00 stdout F [INFO] 10.217.0.64:33635 - 35645 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003746728s 2025-08-13T20:17:02.788890842+00:00 stdout F [INFO] 10.217.0.45:34768 - 24306 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00141146s 2025-08-13T20:17:02.788934033+00:00 stdout F [INFO] 10.217.0.45:33409 - 18159 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002433539s 2025-08-13T20:17:05.225377351+00:00 stdout F [INFO] 10.217.0.19:40180 - 29563 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003059687s 2025-08-13T20:17:05.225767672+00:00 stdout F [INFO] 10.217.0.19:36494 - 38063 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003259813s 2025-08-13T20:17:11.648223999+00:00 stdout F [INFO] 10.217.0.62:46663 - 8219 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002281925s 2025-08-13T20:17:11.648223999+00:00 stdout F [INFO] 10.217.0.62:41844 - 54966 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000982588s 2025-08-13T20:17:22.871070539+00:00 stdout F [INFO] 10.217.0.8:32890 - 11124 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002656126s 2025-08-13T20:17:22.871352047+00:00 stdout F [INFO] 10.217.0.8:44256 - 52256 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003753657s 2025-08-13T20:17:22.874458756+00:00 stdout F [INFO] 10.217.0.8:58412 - 22097 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001787121s 2025-08-13T20:17:22.874552369+00:00 stdout F [INFO] 10.217.0.8:35431 - 14077 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001840333s 2025-08-13T20:17:34.179906137+00:00 stdout F [INFO] 10.217.0.19:44118 - 63889 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004259092s 2025-08-13T20:17:34.179906137+00:00 stdout F [INFO] 10.217.0.19:37748 - 10294 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005022833s 2025-08-13T20:17:35.190115476+00:00 stdout F [INFO] 10.217.0.19:40477 - 2326 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000728961s 2025-08-13T20:17:35.190115476+00:00 stdout F [INFO] 10.217.0.19:34273 - 12052 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000755701s 2025-08-13T20:17:41.668105549+00:00 stdout F [INFO] 10.217.0.62:46066 - 43508 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001331998s 2025-08-13T20:17:41.668105549+00:00 stdout F [INFO] 10.217.0.62:51181 - 37419 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00176431s 2025-08-13T20:17:49.346102970+00:00 stdout F [INFO] 10.217.0.19:36761 - 23898 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001356448s 2025-08-13T20:17:49.346102970+00:00 stdout F [INFO] 10.217.0.19:48274 - 65185 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001533424s 2025-08-13T20:17:56.366593235+00:00 stdout F [INFO] 10.217.0.64:38161 - 13410 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002194382s 2025-08-13T20:17:56.366593235+00:00 stdout F [INFO] 10.217.0.64:56655 - 1458 "AAAA IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003056427s 2025-08-13T20:17:56.366593235+00:00 stdout F [INFO] 10.217.0.64:33396 - 10926 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000674519s 2025-08-13T20:17:56.366593235+00:00 stdout F [INFO] 10.217.0.64:57160 - 47287 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000974197s 2025-08-13T20:18:02.989608509+00:00 stdout F [INFO] 10.217.0.45:45054 - 18970 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000822773s 2025-08-13T20:18:02.989696421+00:00 stdout F [INFO] 10.217.0.45:33510 - 45927 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001276436s 2025-08-13T20:18:05.199108315+00:00 stdout F [INFO] 10.217.0.19:54250 - 7398 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001587865s 2025-08-13T20:18:05.200078023+00:00 stdout F [INFO] 10.217.0.19:41213 - 54456 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000583547s 2025-08-13T20:18:11.579762720+00:00 stdout F [INFO] 10.217.0.62:41421 - 22525 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001183184s 2025-08-13T20:18:11.580219803+00:00 stdout F [INFO] 10.217.0.62:36929 - 36123 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000424312s 2025-08-13T20:18:11.649163252+00:00 stdout F [INFO] 10.217.0.62:42461 - 29602 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000733311s 2025-08-13T20:18:11.649199703+00:00 stdout F [INFO] 10.217.0.62:56212 - 10407 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000868245s 2025-08-13T20:18:22.870151721+00:00 stdout F [INFO] 10.217.0.8:49188 - 57176 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002603574s 2025-08-13T20:18:22.870151721+00:00 stdout F [INFO] 10.217.0.8:36995 - 42927 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002829531s 2025-08-13T20:18:26.402111553+00:00 stdout F [INFO] 10.217.0.19:58040 - 22048 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001067321s 2025-08-13T20:18:26.404015028+00:00 stdout F [INFO] 10.217.0.19:40164 - 61985 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002737419s 2025-08-13T20:18:33.587042474+00:00 stdout F [INFO] 10.217.0.82:53079 - 31680 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001442992s 2025-08-13T20:18:33.587042474+00:00 stdout F [INFO] 10.217.0.82:56870 - 30507 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001379969s 2025-08-13T20:18:34.092944759+00:00 stdout F [INFO] 10.217.0.82:35359 - 18217 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.003619673s 2025-08-13T20:18:34.094876035+00:00 stdout F [INFO] 10.217.0.82:46518 - 34600 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.001328408s 2025-08-13T20:18:35.226993336+00:00 stdout F [INFO] 10.217.0.19:33501 - 47608 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001215595s 2025-08-13T20:18:35.228134039+00:00 stdout F [INFO] 10.217.0.19:60161 - 50650 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002118781s 2025-08-13T20:18:41.636032550+00:00 stdout F [INFO] 10.217.0.62:38503 - 27459 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.0010409s 2025-08-13T20:18:41.636032550+00:00 stdout F [INFO] 10.217.0.62:33295 - 42276 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001495803s 2025-08-13T20:18:43.085667028+00:00 stdout F [INFO] 10.217.0.19:60883 - 52790 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001888864s 2025-08-13T20:18:43.086028788+00:00 stdout F [INFO] 10.217.0.19:36040 - 17832 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000813713s 2025-08-13T20:18:56.366735426+00:00 stdout F [INFO] 10.217.0.64:41379 - 8858 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002860291s 2025-08-13T20:18:56.366735426+00:00 stdout F [INFO] 10.217.0.64:54022 - 180 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004060546s 2025-08-13T20:18:56.366735426+00:00 stdout F [INFO] 10.217.0.64:47538 - 9813 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004242701s 2025-08-13T20:18:56.366735426+00:00 stdout F [INFO] 10.217.0.64:45471 - 9488 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004319933s 2025-08-13T20:19:04.585379040+00:00 stdout F [INFO] 10.217.0.45:33914 - 2665 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002576953s 2025-08-13T20:19:04.585379040+00:00 stdout F [INFO] 10.217.0.45:39467 - 47346 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002866012s 2025-08-13T20:19:05.231523052+00:00 stdout F [INFO] 10.217.0.19:40211 - 52386 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001249576s 2025-08-13T20:19:05.231523052+00:00 stdout F [INFO] 10.217.0.19:33331 - 15313 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000792802s 2025-08-13T20:19:11.643530019+00:00 stdout F [INFO] 10.217.0.62:33123 - 49781 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000986178s 2025-08-13T20:19:11.643530019+00:00 stdout F [INFO] 10.217.0.62:49419 - 12181 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000731901s 2025-08-13T20:19:22.872287200+00:00 stdout F [INFO] 10.217.0.8:38758 - 21581 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002758359s 2025-08-13T20:19:22.872287200+00:00 stdout F [INFO] 10.217.0.8:46333 - 9209 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003077518s 2025-08-13T20:19:33.482702098+00:00 stdout F [INFO] 10.217.0.62:60479 - 42267 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002999965s 2025-08-13T20:19:33.482702098+00:00 stdout F [INFO] 10.217.0.62:60878 - 6876 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003089438s 2025-08-13T20:19:33.502325339+00:00 stdout F [INFO] 10.217.0.62:39222 - 4986 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001012609s 2025-08-13T20:19:33.502364490+00:00 stdout F [INFO] 10.217.0.62:60567 - 47569 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001098921s 2025-08-13T20:19:35.213953105+00:00 stdout F [INFO] 10.217.0.19:46242 - 54493 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001431731s 2025-08-13T20:19:35.213953105+00:00 stdout F [INFO] 10.217.0.19:36706 - 45113 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001586256s 2025-08-13T20:19:38.306546218+00:00 stdout F [INFO] 10.217.0.62:38211 - 9131 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000923397s 2025-08-13T20:19:38.306630511+00:00 stdout F [INFO] 10.217.0.62:46397 - 22873 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00072067s 2025-08-13T20:19:41.636439144+00:00 stdout F [INFO] 10.217.0.62:47865 - 15773 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000987888s 2025-08-13T20:19:41.636439144+00:00 stdout F [INFO] 10.217.0.62:50558 - 51504 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.0010408s 2025-08-13T20:19:42.026544983+00:00 stdout F [INFO] 10.217.0.19:36856 - 18795 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000823494s 2025-08-13T20:19:42.026938724+00:00 stdout F [INFO] 10.217.0.19:34625 - 9057 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001165133s 2025-08-13T20:19:42.164504595+00:00 stdout F [INFO] 10.217.0.19:45942 - 61324 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000872075s 2025-08-13T20:19:42.164671840+00:00 stdout F [INFO] 10.217.0.19:41972 - 52459 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00106331s 2025-08-13T20:19:42.532376799+00:00 stdout F [INFO] 10.217.0.62:46454 - 63259 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000792552s
2025-08-13T20:19:42.535296553+00:00 stdout F [INFO] 10.217.0.62:47598 - 51230 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.004496309s
2025-08-13T20:19:45.401583608+00:00 stdout F [INFO] 10.217.0.62:56666 - 9412 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001494062s
2025-08-13T20:19:45.401583608+00:00 stdout F [INFO] 10.217.0.62:38089 - 57843 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001731939s
2025-08-13T20:19:48.020856666+00:00 stdout F [INFO] 10.217.0.19:49297 - 40636 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000908326s
2025-08-13T20:19:48.021008500+00:00 stdout F [INFO] 10.217.0.19:50217 - 33124 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000479443s
2025-08-13T20:19:52.707583818+00:00 stdout F [INFO] 10.217.0.19:41970 - 64782 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001206844s
2025-08-13T20:19:52.707583818+00:00 stdout F [INFO] 10.217.0.19:38499 - 56421 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001794481s
2025-08-13T20:19:56.368549176+00:00 stdout F [INFO] 10.217.0.64:45663 - 24171 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001554525s
2025-08-13T20:19:56.368829954+00:00 stdout F [INFO] 10.217.0.64:56459 - 43664 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00173986s
2025-08-13T20:19:56.369091442+00:00 stdout F [INFO] 10.217.0.64:46431 - 38097 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002498151s
2025-08-13T20:19:56.369270407+00:00 stdout F [INFO] 10.217.0.64:46771 - 2172 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002322767s
2025-08-13T20:20:01.071900077+00:00 stdout F [INFO] 10.217.0.19:36348 - 12264 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001161613s
2025-08-13T20:20:01.071900077+00:00 stdout F [INFO] 10.217.0.19:58531 - 49546 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000880615s
2025-08-13T20:20:01.094909815+00:00 stdout F [INFO] 10.217.0.19:45307 - 27566 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001574105s
2025-08-13T20:20:01.094909815+00:00 stdout F [INFO] 10.217.0.19:46628 - 46836 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001631457s
2025-08-13T20:20:04.641614348+00:00 stdout F [INFO] 10.217.0.45:35084 - 63330 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000804223s
2025-08-13T20:20:04.641962067+00:00 stdout F [INFO] 10.217.0.45:45080 - 40107 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001006019s
2025-08-13T20:20:05.195151847+00:00 stdout F [INFO] 10.217.0.19:56212 - 48003 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000984178s
2025-08-13T20:20:05.196163296+00:00 stdout F [INFO] 10.217.0.19:39285 - 14780 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002337637s
2025-08-13T20:20:08.340700585+00:00 stdout F [INFO] 10.217.0.19:53833 - 40272 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001970646s
2025-08-13T20:20:08.340700585+00:00 stdout F [INFO] 10.217.0.19:41946 - 24164 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002420129s
2025-08-13T20:20:08.363431225+00:00 stdout F [INFO] 10.217.0.19:46307 - 5452 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000778262s
2025-08-13T20:20:08.363431225+00:00 stdout F [INFO] 10.217.0.19:35245 - 39248 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000445573s
2025-08-13T20:20:08.502443628+00:00 stdout F [INFO] 10.217.0.19:44873 - 4998 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001938985s
2025-08-13T20:20:08.502494999+00:00 stdout F [INFO] 10.217.0.19:44407 - 54307 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001842663s
2025-08-13T20:20:11.654028374+00:00 stdout F [INFO] 10.217.0.62:38324 - 64366 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002006857s
2025-08-13T20:20:11.655275660+00:00 stdout F [INFO] 10.217.0.62:45022 - 17853 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00071154s
2025-08-13T20:20:22.874056324+00:00 stdout F [INFO] 10.217.0.8:56271 - 7607 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003067057s
2025-08-13T20:20:22.874056324+00:00 stdout F [INFO] 10.217.0.8:53777 - 62518 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004086857s
2025-08-13T20:20:22.875510136+00:00 stdout F [INFO] 10.217.0.8:47578 - 64153 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000949147s
2025-08-13T20:20:22.877051280+00:00 stdout F [INFO] 10.217.0.8:38799 - 52729 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000611507s
2025-08-13T20:20:30.984937467+00:00 stdout F [INFO] 10.217.0.73:34657 - 18108 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001264556s
2025-08-13T20:20:30.984937467+00:00 stdout F [INFO] 10.217.0.73:54396 - 42450 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000660829s
2025-08-13T20:20:35.217291606+00:00 stdout F [INFO] 10.217.0.19:58010 - 34249 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001632276s
2025-08-13T20:20:35.217291606+00:00 stdout F [INFO] 10.217.0.19:43193 - 27871 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001431521s
2025-08-13T20:20:41.640317422+00:00 stdout F [INFO] 10.217.0.62:33136 - 3157 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000968398s
2025-08-13T20:20:41.640317422+00:00 stdout F [INFO] 10.217.0.62:42410 - 28426 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001284317s
2025-08-13T20:20:56.375567479+00:00 stdout F [INFO] 10.217.0.64:46364 - 61033 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004658863s
2025-08-13T20:20:56.375655501+00:00 stdout F [INFO] 10.217.0.64:41556 - 23747 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003774458s
2025-08-13T20:20:56.375691332+00:00 stdout F [INFO] 10.217.0.64:37617 - 7065 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003570242s
2025-08-13T20:20:56.376064003+00:00 stdout F [INFO] 10.217.0.64:40574 - 17849 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004663733s
2025-08-13T20:21:02.344894508+00:00 stdout F [INFO] 10.217.0.19:36337 - 62106 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002328257s
2025-08-13T20:21:02.344894508+00:00 stdout F [INFO] 10.217.0.19:42735 - 19208 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002479191s
2025-08-13T20:21:04.715673334+00:00 stdout F [INFO] 10.217.0.45:52465 - 28402 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001366489s
2025-08-13T20:21:04.715673334+00:00 stdout F [INFO] 10.217.0.45:46488 - 13842 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00104926s
2025-08-13T20:21:05.226410270+00:00 stdout F [INFO] 10.217.0.19:46338 - 63671 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001718309s
2025-08-13T20:21:05.228319235+00:00 stdout F [INFO] 10.217.0.19:45280 - 55564 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000554165s
2025-08-13T20:21:11.645199285+00:00 stdout F [INFO] 10.217.0.62:40198 - 22184 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002646386s
2025-08-13T20:21:11.646182993+00:00 stdout F [INFO] 10.217.0.62:45571 - 1066 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003772287s
2025-08-13T20:21:22.874469926+00:00 stdout F [INFO] 10.217.0.8:52225 - 3320 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000639608s
2025-08-13T20:21:22.874469926+00:00 stdout F [INFO] 10.217.0.8:54099 - 33964 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003515781s
2025-08-13T20:21:22.877588095+00:00 stdout F [INFO] 10.217.0.8:52824 - 64521 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000438082s
2025-08-13T20:21:22.877652327+00:00 stdout F [INFO] 10.217.0.8:57555 - 59549 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001463112s
2025-08-13T20:21:35.222143431+00:00 stdout F [INFO] 10.217.0.19:60649 - 50574 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002335237s
2025-08-13T20:21:35.222143431+00:00 stdout F [INFO] 10.217.0.19:53041 - 33994 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006000452s
2025-08-13T20:21:41.644282842+00:00 stdout F [INFO] 10.217.0.62:40499 - 18965 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001027549s
2025-08-13T20:21:41.644545980+00:00 stdout F [INFO] 10.217.0.62:56677 - 33292 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001361999s
2025-08-13T20:21:46.720648942+00:00 stdout F [INFO] 10.217.0.19:52844 - 54819 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000996739s
2025-08-13T20:21:46.720648942+00:00 stdout F [INFO] 10.217.0.19:35401 - 32403 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00106424s
2025-08-13T20:21:56.365048060+00:00 stdout F [INFO] 10.217.0.64:53848 - 12324 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001368299s
2025-08-13T20:21:56.365048060+00:00 stdout F [INFO] 10.217.0.64:58459 - 32629 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001531664s
2025-08-13T20:21:56.365048060+00:00 stdout F [INFO] 10.217.0.64:54440 - 63440 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001854953s
2025-08-13T20:21:56.365450051+00:00 stdout F [INFO] 10.217.0.64:45586 - 58199 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003015456s
2025-08-13T20:22:04.772447424+00:00 stdout F [INFO] 10.217.0.45:47208 - 6310 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002282725s
2025-08-13T20:22:04.773388391+00:00 stdout F [INFO] 10.217.0.45:40257 - 64603 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002641876s
2025-08-13T20:22:05.225832351+00:00 stdout F [INFO] 10.217.0.19:48589 - 4475 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001206624s
2025-08-13T20:22:05.226189052+00:00 stdout F [INFO] 10.217.0.19:44554 - 62237 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001643067s
2025-08-13T20:22:09.059585917+00:00 stdout F [INFO] 10.217.0.8:46872 - 45123 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001205414s
2025-08-13T20:22:09.059585917+00:00 stdout F [INFO] 10.217.0.8:44809 - 58206 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001253486s
2025-08-13T20:22:09.061384968+00:00 stdout F [INFO] 10.217.0.8:40056 - 55732 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000912436s
2025-08-13T20:22:09.062433578+00:00 stdout F [INFO] 10.217.0.8:50159 - 60649 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000761292s
2025-08-13T20:22:11.644848192+00:00 stdout F [INFO] 10.217.0.62:55597 - 19802 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00140525s
2025-08-13T20:22:11.644848192+00:00 stdout F [INFO] 10.217.0.62:37054 - 62067 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001243296s
2025-08-13T20:22:11.959636427+00:00 stdout F [INFO] 10.217.0.19:37069 - 44420 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001090712s
2025-08-13T20:22:11.959636427+00:00 stdout F [INFO] 10.217.0.19:51196 - 10736 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001195314s
2025-08-13T20:22:18.381498281+00:00 stdout F [INFO] 10.217.0.82:33260 - 39922 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001861193s
2025-08-13T20:22:18.381703217+00:00 stdout F [INFO] 10.217.0.82:57325 - 15076 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002520261s
2025-08-13T20:22:18.819005535+00:00 stdout F [INFO] 10.217.0.82:59297 - 26216 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.00105767s
2025-08-13T20:22:18.819194680+00:00 stdout F [INFO] 10.217.0.82:59783 - 53810 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.001294797s
2025-08-13T20:22:18.930505661+00:00 stdout F [INFO] 10.217.0.87:37155 - 34712 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001688419s
2025-08-13T20:22:18.930695497+00:00 stdout F [INFO] 10.217.0.87:43269 - 1726 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001895594s
2025-08-13T20:22:18.996825837+00:00 stdout F [INFO] 10.217.0.87:57214 - 3424 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.001348508s
2025-08-13T20:22:18.996914279+00:00 stdout F [INFO] 10.217.0.87:44706 - 18025 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.002379739s
2025-08-13T20:22:22.879676495+00:00 stdout F [INFO] 10.217.0.8:46274 - 2923 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004247991s
2025-08-13T20:22:22.879676495+00:00 stdout F [INFO] 10.217.0.8:45842 - 32928 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004246601s
2025-08-13T20:22:22.881415405+00:00 stdout F [INFO] 10.217.0.8:57451 - 31094 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000848284s
2025-08-13T20:22:22.881448626+00:00 stdout F [INFO] 10.217.0.8:36628 - 63016 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001175654s
2025-08-13T20:22:35.224010916+00:00 stdout F [INFO] 10.217.0.19:43222 - 5389 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002881602s
2025-08-13T20:22:35.224683185+00:00 stdout F [INFO] 10.217.0.19:42907 - 51848 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004116278s
2025-08-13T20:22:41.646202026+00:00 stdout F [INFO] 10.217.0.62:43760 - 28539 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001611936s
2025-08-13T20:22:41.646202026+00:00 stdout F [INFO] 10.217.0.62:35224 - 43702 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002153242s
2025-08-13T20:22:56.369606611+00:00 stdout F [INFO] 10.217.0.64:59458 - 7965 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002547783s
2025-08-13T20:22:56.369606611+00:00 stdout F [INFO] 10.217.0.64:48398 - 54107 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002361567s
2025-08-13T20:22:56.370722603+00:00 stdout F [INFO] 10.217.0.64:36212 - 12955 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001255206s
2025-08-13T20:22:56.372176595+00:00 stdout F [INFO] 10.217.0.64:36092 - 52355 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001346788s
2025-08-13T20:23:04.846709970+00:00 stdout F [INFO] 10.217.0.45:40440 - 63694 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001198544s
2025-08-13T20:23:04.846827404+00:00 stdout F [INFO] 10.217.0.45:38541 - 28253 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001752361s
2025-08-13T20:23:05.213124442+00:00 stdout F [INFO] 10.217.0.19:56738 - 47459 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000649099s
2025-08-13T20:23:05.213124442+00:00 stdout F [INFO] 10.217.0.19:46391 - 51426 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00070577s
2025-08-13T20:23:11.652968446+00:00 stdout F [INFO] 10.217.0.62:53728 - 49866 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00210305s
2025-08-13T20:23:11.653085550+00:00 stdout F [INFO] 10.217.0.62:45456 - 15639 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002129911s
2025-08-13T20:23:21.588321740+00:00 stdout F [INFO] 10.217.0.19:60874 - 35135 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002127911s
2025-08-13T20:23:21.588321740+00:00 stdout F [INFO] 10.217.0.19:38718 - 31175 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002510541s
2025-08-13T20:23:22.871336408+00:00 stdout F [INFO] 10.217.0.8:36103 - 24028 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000782063s
2025-08-13T20:23:22.871515043+00:00 stdout F [INFO] 10.217.0.8:49021 - 53513 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000862385s
2025-08-13T20:23:22.873339626+00:00 stdout F [INFO] 10.217.0.8:37985 - 20136 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000531166s
2025-08-13T20:23:22.873339626+00:00 stdout F [INFO] 10.217.0.8:55991 - 14323 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000785993s
2025-08-13T20:23:35.238045523+00:00 stdout F [INFO] 10.217.0.19:36684 - 2848 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002465901s
2025-08-13T20:23:35.240079521+00:00 stdout F [INFO] 10.217.0.19:33588 - 27633 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001195794s
2025-08-13T20:23:41.647280071+00:00 stdout F [INFO] 10.217.0.62:42476 - 43342 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001642987s
2025-08-13T20:23:41.647415965+00:00 stdout F [INFO] 10.217.0.62:54440 - 6040 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001811622s
2025-08-13T20:23:45.402046379+00:00 stdout F [INFO] 10.217.0.19:44745 - 60022 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000837994s
2025-08-13T20:23:45.402108621+00:00 stdout F [INFO] 10.217.0.19:43993 - 40694 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000823644s
2025-08-13T20:23:56.365249950+00:00 stdout F [INFO] 10.217.0.64:40236 - 37886 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003663245s
2025-08-13T20:23:56.365372663+00:00 stdout F [INFO] 10.217.0.64:46539 - 61891 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004957812s
2025-08-13T20:23:56.365478967+00:00 stdout F [INFO] 10.217.0.64:56603 - 40332 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004119848s
2025-08-13T20:23:56.365851677+00:00 stdout F [INFO] 10.217.0.64:58763 - 26661 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004710615s
2025-08-13T20:24:04.894208688+00:00 stdout F [INFO] 10.217.0.45:53176 - 54604 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001489782s
2025-08-13T20:24:04.895019631+00:00 stdout F [INFO] 10.217.0.45:35227 - 1868 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001315408s
2025-08-13T20:24:05.215869215+00:00 stdout F [INFO] 10.217.0.19:50430 - 16440 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002552023s
2025-08-13T20:24:05.215869215+00:00 stdout F [INFO] 10.217.0.19:43727 - 5350 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002644575s
2025-08-13T20:24:11.654122291+00:00 stdout F [INFO] 10.217.0.62:47207 - 44200 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000967948s
2025-08-13T20:24:11.654122291+00:00 stdout F [INFO] 10.217.0.62:35930 - 61246 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000842024s
2025-08-13T20:24:22.877013284+00:00 stdout F [INFO] 10.217.0.8:34036 - 32585 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002436069s
2025-08-13T20:24:22.877013284+00:00 stdout F [INFO] 10.217.0.8:37756 - 19886 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002389609s
2025-08-13T20:24:22.877728545+00:00 stdout F [INFO] 10.217.0.8:43919 - 30620 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000891985s
2025-08-13T20:24:22.877890659+00:00 stdout F [INFO] 10.217.0.8:59344 - 6990 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00070076s
2025-08-13T20:24:31.218619713+00:00 stdout F [INFO] 10.217.0.19:38893 - 10072 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001308328s
2025-08-13T20:24:31.218619713+00:00 stdout F [INFO] 10.217.0.19:49427 - 61245 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000495315s
2025-08-13T20:24:35.210617970+00:00 stdout F [INFO] 10.217.0.19:34324 - 3422 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001206925s
2025-08-13T20:24:35.216068206+00:00 stdout F [INFO] 10.217.0.19:35128 - 8581 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000886335s
2025-08-13T20:24:41.649324149+00:00 stdout F [INFO] 10.217.0.62:45676 - 32314 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001269026s
2025-08-13T20:24:41.650352639+00:00 stdout F [INFO] 10.217.0.62:59504 - 25827 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002697517s
2025-08-13T20:24:56.364831731+00:00 stdout F [INFO] 10.217.0.64:46731 - 49096 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002134331s
2025-08-13T20:24:56.364904373+00:00 stdout F [INFO] 10.217.0.64:47783 - 39549 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003447299s
2025-08-13T20:24:56.364915843+00:00 stdout F [INFO] 10.217.0.64:58109 - 10846 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004238141s
2025-08-13T20:24:56.365140799+00:00 stdout F [INFO] 10.217.0.64:42859 - 26107 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003592322s
2025-08-13T20:25:04.954625974+00:00 stdout F [INFO] 10.217.0.45:56098 - 16684 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00105109s
2025-08-13T20:25:04.954625974+00:00 stdout F [INFO] 10.217.0.45:60242 - 56074 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001544635s
2025-08-13T20:25:05.216681297+00:00 stdout F [INFO] 10.217.0.19:43870 - 53964 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000880955s
2025-08-13T20:25:05.218245922+00:00 stdout F [INFO] 10.217.0.19:38683 - 13931 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002842042s
2025-08-13T20:25:11.655107217+00:00 stdout F [INFO] 10.217.0.62:42560 - 56615 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001234255s
2025-08-13T20:25:11.655152038+00:00 stdout F [INFO] 10.217.0.62:51024 - 56501 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001732259s
2025-08-13T20:25:22.875674448+00:00 stdout F [INFO] 10.217.0.8:37291 - 10141 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002284745s
2025-08-13T20:25:22.875674448+00:00 stdout F [INFO] 10.217.0.8:53584 - 44780 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00104894s
2025-08-13T20:25:22.876833261+00:00 stdout F [INFO] 10.217.0.8:44784 - 23561 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000829414s
2025-08-13T20:25:22.877283834+00:00 stdout F [INFO] 10.217.0.8:34509 - 32310 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000649649s
2025-08-13T20:25:31.108541627+00:00 stdout F [INFO] 10.217.0.73:54395 - 27034 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001383989s
2025-08-13T20:25:31.108673060+00:00 stdout F [INFO] 10.217.0.73:50430 - 47262 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000849244s
2025-08-13T20:25:35.223646663+00:00 stdout F [INFO] 10.217.0.19:59430 - 40751 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003284004s
2025-08-13T20:25:35.223646663+00:00 stdout F [INFO] 10.217.0.19:38543 - 12475 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003529581s
2025-08-13T20:25:40.839594434+00:00 stdout F [INFO] 10.217.0.19:54153 - 43863 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000966588s
2025-08-13T20:25:40.839594434+00:00 stdout F [INFO] 10.217.0.19:60269 - 32295 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001198244s
2025-08-13T20:25:41.654132535+00:00 stdout F [INFO] 10.217.0.62:57722 - 60781 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001192605s
2025-08-13T20:25:41.654225638+00:00 stdout F [INFO] 10.217.0.62:57240 - 9692 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000768012s
2025-08-13T20:25:44.091295833+00:00 stdout F [INFO] 10.217.0.19:47837 - 6058 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000956198s
2025-08-13T20:25:44.091480398+00:00 stdout F [INFO] 10.217.0.19:42351 - 47123 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001190144s
2025-08-13T20:25:56.362749204+00:00 stdout F [INFO] 10.217.0.64:57008 - 2843 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001875883s
2025-08-13T20:25:56.362749204+00:00 stdout F [INFO] 10.217.0.64:59145 - 41905 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000892495s
2025-08-13T20:25:56.364121553+00:00 stdout F [INFO] 10.217.0.64:54461 - 27316 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000776812s
2025-08-13T20:25:56.364366410+00:00 stdout F [INFO] 10.217.0.64:49156 - 29068 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001151533s
2025-08-13T20:26:05.008171728+00:00 stdout F [INFO] 10.217.0.45:56933 - 22116 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001376049s
2025-08-13T20:26:05.008171728+00:00 stdout F [INFO] 10.217.0.45:38062 - 21287 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001615627s
2025-08-13T20:26:05.225685178+00:00 stdout F [INFO] 10.217.0.19:42480 - 2952 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001715269s
2025-08-13T20:26:05.229846137+00:00 stdout F [INFO] 10.217.0.19:38966 - 45405 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000540405s
2025-08-13T20:26:11.660082231+00:00 stdout F [INFO] 10.217.0.62:36357 - 25846 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001317148s
2025-08-13T20:26:11.660082231+00:00 stdout F [INFO] 10.217.0.62:40131 - 27947 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001431501s
2025-08-13T20:26:22.885119750+00:00 stdout F [INFO] 10.217.0.8:39265 - 12506 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.008256897s
2025-08-13T20:26:22.885300815+00:00 stdout F [INFO] 10.217.0.8:60052 - 21113 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.009225783s
2025-08-13T20:26:22.887208370+00:00 stdout F [INFO] 10.217.0.8:50059 - 64664 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001089472s
2025-08-13T20:26:22.888349282+00:00 stdout F [INFO] 10.217.0.8:58525 - 41733 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002595604s
2025-08-13T20:26:35.217672305+00:00 stdout F [INFO] 10.217.0.19:50871 - 46595 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003050657s
2025-08-13T20:26:35.223889833+00:00 stdout F [INFO] 10.217.0.19:34499 - 21925 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001911215s
2025-08-13T20:26:41.663241068+00:00 stdout F [INFO] 10.217.0.62:37952 - 57689 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002296435s
2025-08-13T20:26:41.663450064+00:00 stdout F [INFO] 10.217.0.62:32910 - 1261 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002552483s
2025-08-13T20:26:50.459861487+00:00 stdout F [INFO] 10.217.0.19:59463 - 36264 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001129462s
2025-08-13T20:26:50.460897926+00:00 stdout F [INFO] 10.217.0.19:43264 - 12719 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003320605s
2025-08-13T20:26:56.361161538+00:00 stdout F [INFO] 10.217.0.64:40095 - 42976 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001215025s
2025-08-13T20:26:56.362470865+00:00 stdout F [INFO] 10.217.0.64:51703 - 18236 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00106133s
2025-08-13T20:26:56.362470865+00:00 stdout F [INFO] 10.217.0.64:42173 - 15221 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001270597s
2025-08-13T20:26:56.366062168+00:00 stdout F [INFO] 10.217.0.64:56428 - 48198 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.006411643s
2025-08-13T20:27:05.061678481+00:00 stdout F [INFO] 10.217.0.45:44274 - 14734 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001185503s
2025-08-13T20:27:05.061678481+00:00 stdout F [INFO] 10.217.0.45:55812 - 7493 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000598277s
2025-08-13T20:27:05.211882976+00:00 stdout F [INFO] 10.217.0.19:52471 - 16429 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001636387s
2025-08-13T20:27:05.215722676+00:00 stdout F [INFO] 10.217.0.19:54830 - 38464 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000740491s
2025-08-13T20:27:11.668728224+00:00 stdout F [INFO] 10.217.0.62:46677 - 64812 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001148663s
2025-08-13T20:27:11.668728224+00:00 stdout F [INFO] 10.217.0.62:51999 - 8607 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001020539s
2025-08-13T20:27:22.879288288+00:00 stdout F [INFO] 10.217.0.8:37705 - 46550 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002511792s
2025-08-13T20:27:22.879288288+00:00 stdout F [INFO] 10.217.0.8:37535 - 28497 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003007216s
2025-08-13T20:27:22.880758070+00:00 stdout F [INFO] 10.217.0.8:51257 - 27538 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001197134s
2025-08-13T20:27:22.880952866+00:00 stdout F [INFO] 10.217.0.8:53250 - 20415 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001178664s
2025-08-13T20:27:35.228295304+00:00 stdout F [INFO] 10.217.0.19:57583 - 19387 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003182911s
2025-08-13T20:27:35.228295304+00:00 stdout F [INFO] 10.217.0.19:53003 - 43498 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001328308s
2025-08-13T20:27:41.658872270+00:00 stdout F [INFO] 10.217.0.62:45081 - 383 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001441351s
2025-08-13T20:27:41.658872270+00:00 stdout F [INFO] 10.217.0.62:33850 - 51076 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001704479s
2025-08-13T20:27:42.780742399+00:00 stdout F [INFO] 10.217.0.19:39615 - 13995 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001031459s
2025-08-13T20:27:42.780742399+00:00 stdout F [INFO] 10.217.0.19:51920 - 40149 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000956647s
2025-08-13T20:27:56.376127303+00:00 stdout F [INFO] 10.217.0.64:58075 - 52190 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004438267s
2025-08-13T20:27:56.376127303+00:00 stdout F [INFO] 10.217.0.64:48199 - 47108 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005041774s
2025-08-13T20:27:56.376127303+00:00 stdout F [INFO] 10.217.0.64:51669 - 65440 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.005462966s
2025-08-13T20:27:56.376127303+00:00 stdout F [INFO] 10.217.0.64:47060 - 8587 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.005900449s
2025-08-13T20:27:58.930109056+00:00 stdout F [INFO] 10.217.0.19:55625 - 45930 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000983058s
2025-08-13T20:27:58.930109056+00:00 stdout F [INFO] 10.217.0.19:42361 - 34807 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001129392s
2025-08-13T20:28:00.077029005+00:00 stdout F [INFO] 10.217.0.19:53335 - 44549 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000760012s
2025-08-13T20:28:00.077029005+00:00 stdout F [INFO] 10.217.0.19:40680 - 63548 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000908866s
2025-08-13T20:28:05.109126285+00:00 stdout F [INFO] 10.217.0.45:51541 - 679 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000994358s
2025-08-13T20:28:05.109126285+00:00 stdout F [INFO] 10.217.0.45:51904 - 3423 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001150933s
2025-08-13T20:28:05.222457713+00:00 stdout F [INFO] 10.217.0.19:56911 - 31417 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001446252s 2025-08-13T20:28:05.222507354+00:00 stdout F [INFO] 10.217.0.19:35415 - 62687 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001685248s 2025-08-13T20:28:11.573842296+00:00 stdout F [INFO] 10.217.0.62:38831 - 24140 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.01252948s 2025-08-13T20:28:11.574213006+00:00 stdout F [INFO] 10.217.0.62:52754 - 21246 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.012967142s 2025-08-13T20:28:11.652482776+00:00 stdout F [INFO] 10.217.0.62:44148 - 61190 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00139288s 2025-08-13T20:28:11.652482776+00:00 stdout F [INFO] 10.217.0.62:40743 - 20956 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001218395s 2025-08-13T20:28:22.878744002+00:00 stdout F [INFO] 10.217.0.8:46071 - 3347 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002957745s 2025-08-13T20:28:22.878744002+00:00 stdout F [INFO] 10.217.0.8:60971 - 48670 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003520741s 2025-08-13T20:28:22.880619546+00:00 stdout F [INFO] 10.217.0.8:58226 - 45100 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001212545s 2025-08-13T20:28:22.880977026+00:00 stdout F [INFO] 10.217.0.8:44978 - 48564 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001512964s 2025-08-13T20:28:35.237885214+00:00 stdout F [INFO] 10.217.0.19:54157 - 35131 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002727999s 2025-08-13T20:28:35.238211853+00:00 stdout F [INFO] 10.217.0.19:37322 - 15708 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003765728s 2025-08-13T20:28:41.656353106+00:00 stdout F [INFO] 10.217.0.62:34350 - 14878 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001065571s 2025-08-13T20:28:41.656353106+00:00 stdout F [INFO] 10.217.0.62:42110 - 17498 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000952677s 2025-08-13T20:28:56.363747197+00:00 stdout F [INFO] 10.217.0.64:59117 - 59507 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003759828s 2025-08-13T20:28:56.363747197+00:00 stdout F [INFO] 10.217.0.64:39236 - 60297 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.004420417s 2025-08-13T20:28:56.364644013+00:00 stdout F [INFO] 10.217.0.64:39886 - 59994 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002289126s 2025-08-13T20:28:56.364993613+00:00 stdout F [INFO] 10.217.0.64:51482 - 47269 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001537855s 2025-08-13T20:29:05.158973531+00:00 stdout F [INFO] 10.217.0.45:49586 - 293 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001536384s 2025-08-13T20:29:05.158973531+00:00 stdout F [INFO] 10.217.0.45:37945 - 29941 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002622176s 2025-08-13T20:29:05.246980231+00:00 stdout F [INFO] 10.217.0.19:39196 - 22276 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000976709s 2025-08-13T20:29:05.247213568+00:00 stdout F [INFO] 10.217.0.19:50331 - 9060 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000653139s 2025-08-13T20:29:09.709941691+00:00 stdout F [INFO] 10.217.0.19:49760 - 39795 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001007329s 2025-08-13T20:29:09.710193458+00:00 stdout F [INFO] 10.217.0.19:52124 - 16792 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002368648s 2025-08-13T20:29:11.659446941+00:00 stdout F [INFO] 10.217.0.62:48548 - 25764 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000931517s 2025-08-13T20:29:11.659446941+00:00 stdout F [INFO] 10.217.0.62:52247 - 39213 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001345389s 2025-08-13T20:29:22.877995183+00:00 stdout F [INFO] 10.217.0.8:55820 - 24792 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004234281s 2025-08-13T20:29:22.877995183+00:00 stdout F [INFO] 10.217.0.8:36516 - 18894 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003181971s 2025-08-13T20:29:22.879826456+00:00 stdout F [INFO] 10.217.0.8:35384 - 34055 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.0010181s 2025-08-13T20:29:22.880051702+00:00 stdout F [INFO] 10.217.0.8:37455 - 35412 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001280907s 2025-08-13T20:29:33.508718749+00:00 stdout F [INFO] 10.217.0.62:52420 - 65467 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005813477s 2025-08-13T20:29:33.508718749+00:00 stdout F [INFO] 10.217.0.62:59946 - 45827 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.005716655s 2025-08-13T20:29:33.548111311+00:00 stdout F [INFO] 10.217.0.62:39576 - 4767 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001328578s 2025-08-13T20:29:33.549247944+00:00 stdout F [INFO] 10.217.0.62:39409 - 20523 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00106722s 2025-08-13T20:29:35.242365363+00:00 stdout F [INFO] 10.217.0.19:52444 - 13513 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000664429s 2025-08-13T20:29:35.245163164+00:00 stdout F [INFO] 10.217.0.19:57824 - 18608 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003030437s 2025-08-13T20:29:38.331371308+00:00 stdout F [INFO] 10.217.0.62:50284 - 47289 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000978308s 2025-08-13T20:29:38.331371308+00:00 stdout F [INFO] 10.217.0.62:54421 - 29466 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00069253s 2025-08-13T20:29:41.472703057+00:00 stdout F [INFO] 10.217.0.19:50960 - 37288 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00139353s 2025-08-13T20:29:41.472703057+00:00 stdout F [INFO] 10.217.0.19:40772 - 35345 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000867005s 2025-08-13T20:29:41.655946815+00:00 stdout F [INFO] 10.217.0.62:46017 - 45350 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000554786s 2025-08-13T20:29:41.657692135+00:00 stdout F [INFO] 10.217.0.62:38458 - 9290 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002201973s 2025-08-13T20:29:42.029686128+00:00 stdout F [INFO] 10.217.0.19:49384 - 17937 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001275446s 2025-08-13T20:29:42.030119931+00:00 stdout F [INFO] 10.217.0.19:34126 - 31563 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002031728s 2025-08-13T20:29:42.195863505+00:00 stdout F [INFO] 10.217.0.19:50292 - 8646 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000771612s 2025-08-13T20:29:42.195863505+00:00 stdout F [INFO] 10.217.0.19:41813 - 53616 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001189794s 2025-08-13T20:29:42.548149462+00:00 stdout F [INFO] 10.217.0.62:46761 - 59090 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001441271s 2025-08-13T20:29:42.548391029+00:00 stdout F [INFO] 10.217.0.62:44356 - 37096 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001953676s 2025-08-13T20:29:45.402769419+00:00 stdout F [INFO] 10.217.0.62:48046 - 45459 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000817103s 2025-08-13T20:29:45.402769419+00:00 stdout F [INFO] 10.217.0.62:49429 - 57122 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000449153s 2025-08-13T20:29:56.372192900+00:00 stdout F [INFO] 10.217.0.64:57376 - 19028 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003027407s 2025-08-13T20:29:56.372192900+00:00 stdout F [INFO] 10.217.0.64:53232 - 19566 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003590323s 2025-08-13T20:29:56.372192900+00:00 stdout F [INFO] 10.217.0.64:50097 - 25136 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003240624s 2025-08-13T20:29:56.372192900+00:00 stdout F [INFO] 10.217.0.64:47419 - 35557 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003612844s 2025-08-13T20:30:01.075376634+00:00 stdout F [INFO] 10.217.0.19:42400 - 32330 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00315681s 2025-08-13T20:30:01.075376634+00:00 stdout F [INFO] 10.217.0.19:43531 - 21848 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003187702s 2025-08-13T20:30:01.100151146+00:00 stdout F [INFO] 10.217.0.19:37195 - 2542 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000633388s 2025-08-13T20:30:01.100151146+00:00 stdout F [INFO] 10.217.0.19:48203 - 48919 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000567806s 2025-08-13T20:30:05.230154427+00:00 stdout F [INFO] 10.217.0.45:36707 - 63835 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001633947s 2025-08-13T20:30:05.230154427+00:00 stdout F [INFO] 10.217.0.45:54164 - 55506 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002323076s 2025-08-13T20:30:05.241957226+00:00 stdout F [INFO] 10.217.0.19:56331 - 59077 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003138751s 2025-08-13T20:30:05.241957226+00:00 stdout F [INFO] 10.217.0.19:45527 - 23788 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003551052s 2025-08-13T20:30:08.348942269+00:00 stdout F [INFO] 10.217.0.19:33338 - 2468 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00103812s 2025-08-13T20:30:08.348942269+00:00 stdout F [INFO] 10.217.0.19:45130 - 60291 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001927446s 2025-08-13T20:30:08.397565217+00:00 stdout F [INFO] 10.217.0.19:35130 - 55305 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000807704s 2025-08-13T20:30:08.397565217+00:00 stdout F [INFO] 10.217.0.19:43466 - 56354 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002227894s 2025-08-13T20:30:08.542631706+00:00 stdout F [INFO] 10.217.0.19:32789 - 5660 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001609206s 2025-08-13T20:30:08.542692378+00:00 stdout F [INFO] 10.217.0.19:34852 - 15652 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001358189s 2025-08-13T20:30:11.680732082+00:00 stdout F [INFO] 10.217.0.62:57583 - 18489 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001232076s 2025-08-13T20:30:11.680732082+00:00 stdout F [INFO] 10.217.0.62:41359 - 11743 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001834313s 2025-08-13T20:30:19.348744243+00:00 stdout F [INFO] 10.217.0.19:60500 - 65234 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002055099s 2025-08-13T20:30:19.348744243+00:00 stdout F [INFO] 10.217.0.19:34223 - 16093 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002216144s 2025-08-13T20:30:22.880352091+00:00 stdout F [INFO] 10.217.0.8:43568 - 50549 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001704109s 2025-08-13T20:30:22.880603728+00:00 stdout F [INFO] 10.217.0.8:47766 - 6147 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00312927s 2025-08-13T20:30:22.882453871+00:00 stdout F [INFO] 10.217.0.8:54083 - 61218 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000749791s 2025-08-13T20:30:22.882881273+00:00 stdout F [INFO] 10.217.0.8:54024 - 55183 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001355379s 2025-08-13T20:30:31.155485163+00:00 stdout F [INFO] 10.217.0.73:38005 - 4639 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002329047s 2025-08-13T20:30:31.155485163+00:00 stdout F [INFO] 10.217.0.73:38446 - 24841 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003351347s 2025-08-13T20:30:33.360074255+00:00 stdout F [INFO] 10.217.0.19:45240 - 18271 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001849443s 2025-08-13T20:30:33.360074255+00:00 stdout F [INFO] 10.217.0.19:56701 - 35882 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002291556s 2025-08-13T20:30:33.386018850+00:00 stdout F [INFO] 10.217.0.19:59664 - 50456 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004310074s 2025-08-13T20:30:33.386313009+00:00 stdout F [INFO] 10.217.0.19:58470 - 46639 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005115767s 2025-08-13T20:30:35.215184861+00:00 stdout F [INFO] 10.217.0.19:49596 - 39944 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002485991s 2025-08-13T20:30:35.217097636+00:00 stdout F [INFO] 10.217.0.19:37961 - 22633 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000818494s 2025-08-13T20:30:41.662681529+00:00 stdout F [INFO] 10.217.0.62:49564 - 50238 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000863175s 2025-08-13T20:30:41.662681529+00:00 stdout F [INFO] 10.217.0.62:38814 - 44130 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001675658s 2025-08-13T20:30:56.364671261+00:00 stdout F [INFO] 10.217.0.64:43125 - 35104 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002406829s 2025-08-13T20:30:56.364671261+00:00 stdout F [INFO] 10.217.0.64:35781 - 62498 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003089479s 2025-08-13T20:30:56.364671261+00:00 stdout F [INFO] 10.217.0.64:37735 - 22294 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00384699s 2025-08-13T20:30:56.364671261+00:00 stdout F [INFO] 10.217.0.64:37353 - 20789 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003985004s 2025-08-13T20:31:05.224302085+00:00 stdout F [INFO] 10.217.0.19:57220 - 36252 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001177644s 2025-08-13T20:31:05.224433359+00:00 stdout F [INFO] 10.217.0.19:59782 - 30926 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000577116s 2025-08-13T20:31:05.326079580+00:00 stdout F [INFO] 10.217.0.45:45832 - 46685 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000889425s 2025-08-13T20:31:05.326079580+00:00 stdout F [INFO] 10.217.0.45:40414 - 1549 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001170953s 2025-08-13T20:31:11.673462830+00:00 stdout F [INFO] 10.217.0.62:38378 - 23668 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002980646s 2025-08-13T20:31:11.673636245+00:00 stdout F [INFO] 10.217.0.62:40505 - 54986 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002723098s 2025-08-13T20:31:22.881264896+00:00 stdout F [INFO] 10.217.0.8:55399 - 52791 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003612124s 2025-08-13T20:31:22.881264896+00:00 stdout F [INFO] 10.217.0.8:38597 - 2000 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004345355s 2025-08-13T20:31:22.883736167+00:00 stdout F [INFO] 10.217.0.8:50837 - 15887 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.00104287s 2025-08-13T20:31:22.883736167+00:00 stdout F [INFO] 10.217.0.8:43696 - 36032 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001087321s 2025-08-13T20:31:28.968581310+00:00 stdout F [INFO] 10.217.0.19:39400 - 8426 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001307347s 2025-08-13T20:31:28.968882459+00:00 stdout F [INFO] 10.217.0.19:36145 - 62501 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001686888s 2025-08-13T20:31:35.240270472+00:00 stdout F [INFO] 10.217.0.19:36421 - 17002 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002607855s 2025-08-13T20:31:35.241578470+00:00 stdout F [INFO] 10.217.0.19:59967 - 53451 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004725376s 2025-08-13T20:31:40.160974940+00:00 stdout F [INFO] 10.217.0.19:38477 - 33833 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001063061s 2025-08-13T20:31:40.160974940+00:00 stdout F [INFO] 10.217.0.19:37658 - 8896 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001155733s 2025-08-13T20:31:41.663014977+00:00 stdout F [INFO] 10.217.0.62:51660 - 16571 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000794503s 2025-08-13T20:31:41.663014977+00:00 stdout F [INFO] 10.217.0.62:52306 - 23225 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000757912s 2025-08-13T20:31:56.361142474+00:00 stdout F [INFO] 10.217.0.64:60172 - 9058 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00172696s 2025-08-13T20:31:56.361142474+00:00 stdout F [INFO] 10.217.0.64:48943 - 4720 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003051658s 2025-08-13T20:31:56.363961735+00:00 stdout F [INFO] 10.217.0.64:50037 - 34579 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001489083s 2025-08-13T20:31:56.365112368+00:00 stdout F [INFO] 10.217.0.64:43784 - 39155 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001770351s 2025-08-13T20:32:05.226220156+00:00 stdout F [INFO] 10.217.0.19:43101 - 63759 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00175152s 2025-08-13T20:32:05.226220156+00:00 stdout F [INFO] 10.217.0.19:55083 - 62337 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002368358s 2025-08-13T20:32:05.381093178+00:00 stdout F [INFO] 10.217.0.45:52832 - 65452 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000965308s 2025-08-13T20:32:05.381833029+00:00 stdout F [INFO] 10.217.0.45:36205 - 44015 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001442462s 2025-08-13T20:32:11.669659366+00:00 stdout F [INFO] 10.217.0.62:46206 - 42199 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001127753s 2025-08-13T20:32:11.669659366+00:00 stdout F [INFO] 10.217.0.62:55044 - 47589 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001251706s 2025-08-13T20:32:22.887274833+00:00 stdout F [INFO] 10.217.0.8:57254 - 11688 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002743639s 2025-08-13T20:32:22.887327804+00:00 stdout F [INFO] 10.217.0.8:40915 - 29430 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003152451s 2025-08-13T20:32:22.889946979+00:00 stdout F [INFO] 10.217.0.8:58018 - 20602 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001598576s 2025-08-13T20:32:22.890211667+00:00 stdout F [INFO] 10.217.0.8:35529 - 19346 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001166383s 2025-08-13T20:32:35.244724858+00:00 stdout F [INFO] 10.217.0.19:45669 - 34426 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002952165s 2025-08-13T20:32:35.244724858+00:00 stdout F [INFO] 10.217.0.19:51147 - 51510 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003438999s 2025-08-13T20:32:38.580841186+00:00 stdout F [INFO] 10.217.0.19:43690 - 3324 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001319678s 2025-08-13T20:32:38.580988980+00:00 stdout F [INFO] 10.217.0.19:56587 - 24648 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001099972s 2025-08-13T20:32:41.667486954+00:00 stdout F [INFO] 10.217.0.62:55855 - 31828 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001149163s 2025-08-13T20:32:41.667486954+00:00 stdout F [INFO] 10.217.0.62:34829 - 3463 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001708029s 2025-08-13T20:32:56.366707428+00:00 stdout F [INFO] 10.217.0.64:51627 - 65020 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003232263s 2025-08-13T20:32:56.366707428+00:00 stdout F [INFO] 10.217.0.64:52149 - 8902 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.003574023s 2025-08-13T20:32:56.368728506+00:00 stdout F [INFO] 10.217.0.64:51624 - 35655 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000891486s 2025-08-13T20:32:56.369118527+00:00 stdout F [INFO] 10.217.0.64:60490 - 38135 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.00103067s 2025-08-13T20:33:05.238556718+00:00 stdout F [INFO] 10.217.0.19:59718 - 38615 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00139407s 2025-08-13T20:33:05.238651740+00:00 stdout F [INFO] 10.217.0.19:54459 - 19380 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001287367s 2025-08-13T20:33:05.431433402+00:00 stdout F [INFO] 10.217.0.45:50366 - 22220 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000820193s 2025-08-13T20:33:05.431991198+00:00 stdout F [INFO] 10.217.0.45:52058 - 7055 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000636528s 2025-08-13T20:33:11.674968627+00:00 stdout F [INFO] 10.217.0.62:53746 - 415 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00138406s 2025-08-13T20:33:11.674968627+00:00 stdout F [INFO] 10.217.0.62:46126 - 49757 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002078379s 2025-08-13T20:33:22.881237326+00:00 stdout F [INFO] 10.217.0.8:40930 - 16922 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00244019s 2025-08-13T20:33:22.881237326+00:00 stdout F [INFO] 10.217.0.8:34743 - 60811 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002328907s 2025-08-13T20:33:22.883380997+00:00 stdout F [INFO] 10.217.0.8:58520 - 29998 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001180044s 2025-08-13T20:33:22.883380997+00:00 stdout F [INFO] 10.217.0.8:33257 - 4060 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001460842s 2025-08-13T20:33:33.398825661+00:00 stdout F [INFO] 10.217.0.82:32850 - 61432 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003149961s 2025-08-13T20:33:33.398825661+00:00 stdout F [INFO] 10.217.0.82:33287 - 31809 "AAAA IN registry.redhat.io.crc.testing. 
udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.003351926s 2025-08-13T20:33:33.878633203+00:00 stdout F [INFO] 10.217.0.82:59293 - 7022 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.000809964s 2025-08-13T20:33:33.878633203+00:00 stdout F [INFO] 10.217.0.82:60830 - 12546 "AAAA IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.000924657s 2025-08-13T20:33:35.232345647+00:00 stdout F [INFO] 10.217.0.19:36818 - 54936 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001196634s 2025-08-13T20:33:35.232345647+00:00 stdout F [INFO] 10.217.0.19:41457 - 46863 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001223505s 2025-08-13T20:33:38.860763048+00:00 stdout F [INFO] 10.217.0.19:34258 - 15358 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001795122s 2025-08-13T20:33:38.860763048+00:00 stdout F [INFO] 10.217.0.19:42002 - 13607 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002488091s 2025-08-13T20:33:41.668370103+00:00 stdout F [INFO] 10.217.0.62:60543 - 3360 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001166404s 2025-08-13T20:33:41.668370103+00:00 stdout F [INFO] 10.217.0.62:56859 - 37798 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001088791s 2025-08-13T20:33:48.204972862+00:00 stdout F [INFO] 10.217.0.19:60096 - 61982 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000928067s 2025-08-13T20:33:48.206303960+00:00 stdout F [INFO] 10.217.0.19:60303 - 53773 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002658236s 2025-08-13T20:33:56.360014243+00:00 stdout F [INFO] 10.217.0.64:51406 - 31658 "AAAA IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001163614s 2025-08-13T20:33:56.360014243+00:00 stdout F [INFO] 10.217.0.64:48360 - 33699 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001532054s 2025-08-13T20:33:56.361576218+00:00 stdout F [INFO] 10.217.0.64:50492 - 53197 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000629309s 2025-08-13T20:33:56.362663479+00:00 stdout F [INFO] 10.217.0.64:48910 - 50058 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001494913s 2025-08-13T20:34:05.229192181+00:00 stdout F [INFO] 10.217.0.19:41931 - 36188 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001559254s 2025-08-13T20:34:05.229192181+00:00 stdout F [INFO] 10.217.0.19:56602 - 24472 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001454581s 2025-08-13T20:34:05.481479033+00:00 stdout F [INFO] 10.217.0.45:46813 - 52256 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000608437s 2025-08-13T20:34:05.481870144+00:00 stdout F [INFO] 10.217.0.45:42826 - 54158 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000893826s 2025-08-13T20:34:11.675507967+00:00 stdout F [INFO] 10.217.0.62:42151 - 37111 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001868794s 2025-08-13T20:34:11.679175752+00:00 stdout F [INFO] 10.217.0.62:59737 - 62734 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001298907s 2025-08-13T20:34:22.881957784+00:00 stdout F [INFO] 10.217.0.8:43179 - 46202 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002114951s 2025-08-13T20:34:22.881957784+00:00 stdout F [INFO] 10.217.0.8:42515 - 56482 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002568244s 2025-08-13T20:34:22.883479128+00:00 stdout F [INFO] 10.217.0.8:39755 - 48303 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000877095s 2025-08-13T20:34:22.883726015+00:00 stdout F [INFO] 10.217.0.8:39774 - 22805 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000591607s 2025-08-13T20:34:35.232811425+00:00 stdout F [INFO] 10.217.0.19:60022 - 37769 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003804899s 2025-08-13T20:34:35.232811425+00:00 stdout F [INFO] 10.217.0.19:39283 - 32612 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004103508s 2025-08-13T20:34:41.678398886+00:00 stdout F [INFO] 10.217.0.62:35730 - 62817 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001451642s 2025-08-13T20:34:41.678520039+00:00 stdout F [INFO] 10.217.0.62:47850 - 28599 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003685396s 2025-08-13T20:34:56.369902494+00:00 stdout F [INFO] 10.217.0.64:38897 - 642 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00246145s 2025-08-13T20:34:56.369902494+00:00 stdout F [INFO] 10.217.0.64:52998 - 17838 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002885533s 2025-08-13T20:34:56.369902494+00:00 stdout F [INFO] 10.217.0.64:52702 - 63679 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003412048s 2025-08-13T20:34:56.369902494+00:00 stdout F [INFO] 10.217.0.64:54492 - 49052 "A IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002701758s 2025-08-13T20:34:57.836345007+00:00 stdout F [INFO] 10.217.0.19:42490 - 13001 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002930475s 2025-08-13T20:34:57.836416799+00:00 stdout F [INFO] 10.217.0.19:46472 - 20548 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003218083s 2025-08-13T20:35:05.227701626+00:00 stdout F [INFO] 10.217.0.19:52788 - 47586 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00484177s 2025-08-13T20:35:05.227701626+00:00 stdout F [INFO] 10.217.0.19:51739 - 54276 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005381574s 2025-08-13T20:35:05.528765310+00:00 stdout F [INFO] 10.217.0.45:41036 - 749 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001621397s 2025-08-13T20:35:05.528765310+00:00 stdout F [INFO] 10.217.0.45:40449 - 23309 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001683448s 2025-08-13T20:35:11.673861094+00:00 stdout F [INFO] 10.217.0.62:50333 - 63789 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000766982s 2025-08-13T20:35:11.674033429+00:00 stdout F [INFO] 10.217.0.62:51423 - 972 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001759371s 2025-08-13T20:35:22.883890497+00:00 stdout F [INFO] 10.217.0.8:35533 - 23190 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002822982s 2025-08-13T20:35:22.883890497+00:00 stdout F [INFO] 10.217.0.8:48943 - 18475 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. 
udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003335756s 2025-08-13T20:35:22.884929167+00:00 stdout F [INFO] 10.217.0.8:51669 - 49205 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000912526s 2025-08-13T20:35:22.885243076+00:00 stdout F [INFO] 10.217.0.8:35489 - 30954 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000887996s 2025-08-13T20:35:31.201730869+00:00 stdout F [INFO] 10.217.0.73:36025 - 37682 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00104204s 2025-08-13T20:35:31.201730869+00:00 stdout F [INFO] 10.217.0.73:44698 - 26406 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001180854s 2025-08-13T20:35:35.230980092+00:00 stdout F [INFO] 10.217.0.19:50589 - 7915 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001655618s 2025-08-13T20:35:35.230980092+00:00 stdout F [INFO] 10.217.0.19:55591 - 64446 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001892654s 2025-08-13T20:35:37.541280062+00:00 stdout F [INFO] 10.217.0.19:36145 - 55938 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000741561s 2025-08-13T20:35:37.541280062+00:00 stdout F [INFO] 10.217.0.19:36908 - 8687 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000822553s 2025-08-13T20:35:41.673136284+00:00 stdout F [INFO] 10.217.0.62:47360 - 54352 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000790583s 2025-08-13T20:35:41.675182283+00:00 stdout F [INFO] 10.217.0.62:52225 - 16649 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001279057s 2025-08-13T20:35:56.367908224+00:00 stdout F [INFO] 10.217.0.64:40533 - 46498 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002024898s 2025-08-13T20:35:56.367908224+00:00 stdout F [INFO] 10.217.0.64:53651 - 56781 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001327868s 2025-08-13T20:35:56.367908224+00:00 stdout F [INFO] 10.217.0.64:40804 - 18331 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002547243s 2025-08-13T20:35:56.367908224+00:00 stdout F [INFO] 10.217.0.64:49348 - 64759 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001311637s 2025-08-13T20:36:05.242703867+00:00 stdout F [INFO] 10.217.0.19:52997 - 8478 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002759859s 2025-08-13T20:36:05.242703867+00:00 stdout F [INFO] 10.217.0.19:37916 - 39157 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003782448s 2025-08-13T20:36:05.582597387+00:00 stdout F [INFO] 10.217.0.45:41747 - 26619 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000789732s 2025-08-13T20:36:05.582597387+00:00 stdout F [INFO] 10.217.0.45:56187 - 63967 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000641038s 2025-08-13T20:36:07.456016709+00:00 stdout F [INFO] 10.217.0.19:55081 - 39422 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000868175s 2025-08-13T20:36:07.456016709+00:00 stdout F [INFO] 10.217.0.19:50072 - 41015 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000941947s 2025-08-13T20:36:11.676425877+00:00 stdout F [INFO] 10.217.0.62:37814 - 7136 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001359319s 2025-08-13T20:36:11.676425877+00:00 stdout F [INFO] 10.217.0.62:33472 - 12989 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001268686s 2025-08-13T20:36:22.883978362+00:00 stdout F [INFO] 10.217.0.8:58021 - 44272 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003780959s 2025-08-13T20:36:22.884527128+00:00 stdout F [INFO] 10.217.0.8:35563 - 10048 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004389926s 2025-08-13T20:36:22.889843281+00:00 stdout F [INFO] 10.217.0.8:51549 - 40045 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001655578s 2025-08-13T20:36:22.889843281+00:00 stdout F [INFO] 10.217.0.8:41648 - 55672 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001504023s 2025-08-13T20:36:35.261662630+00:00 stdout F [INFO] 10.217.0.19:44748 - 55600 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.007348752s 2025-08-13T20:36:35.261936927+00:00 stdout F [INFO] 10.217.0.19:58670 - 34210 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00869192s 2025-08-13T20:36:41.673019005+00:00 stdout F [INFO] 10.217.0.62:47774 - 57002 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000775162s 2025-08-13T20:36:41.673019005+00:00 stdout F [INFO] 10.217.0.62:58481 - 36932 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00069989s 2025-08-13T20:36:56.360899890+00:00 stdout F [INFO] 10.217.0.64:49690 - 6346 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001882895s 2025-08-13T20:36:56.360899890+00:00 stdout F [INFO] 10.217.0.64:33376 - 48808 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002107641s 2025-08-13T20:36:56.362291470+00:00 stdout F [INFO] 10.217.0.64:46296 - 927 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00072105s 2025-08-13T20:36:56.362291470+00:00 stdout F [INFO] 10.217.0.64:36520 - 31547 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.000756802s 2025-08-13T20:37:05.233258069+00:00 stdout F [INFO] 10.217.0.19:59772 - 14035 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002645536s 2025-08-13T20:37:05.233258069+00:00 stdout F [INFO] 10.217.0.19:37719 - 14045 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004455419s 2025-08-13T20:37:05.640907632+00:00 stdout F [INFO] 10.217.0.45:38874 - 9095 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000904947s 2025-08-13T20:37:05.640907632+00:00 stdout F [INFO] 10.217.0.45:47158 - 35513 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000973378s 2025-08-13T20:37:11.675012994+00:00 stdout F [INFO] 10.217.0.62:52041 - 28255 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001223665s 2025-08-13T20:37:11.675012994+00:00 stdout F [INFO] 10.217.0.62:53127 - 64300 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001225125s 2025-08-13T20:37:17.087581728+00:00 stdout F [INFO] 10.217.0.19:41701 - 21437 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001998977s 2025-08-13T20:37:17.088880585+00:00 stdout F [INFO] 10.217.0.19:35947 - 41854 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002072s 2025-08-13T20:37:22.884701150+00:00 stdout F [INFO] 10.217.0.8:33740 - 41869 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001245746s 2025-08-13T20:37:22.884864495+00:00 stdout F [INFO] 10.217.0.8:47477 - 64327 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002157842s 2025-08-13T20:37:22.885529914+00:00 stdout F [INFO] 10.217.0.8:36284 - 15082 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000625468s 2025-08-13T20:37:22.885654827+00:00 stdout F [INFO] 10.217.0.8:51765 - 18434 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000766943s 2025-08-13T20:37:35.259845730+00:00 stdout F [INFO] 10.217.0.19:57972 - 33316 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002853822s 2025-08-13T20:37:35.259933912+00:00 stdout F [INFO] 10.217.0.19:38435 - 20685 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000810904s 2025-08-13T20:37:36.229228104+00:00 stdout F [INFO] 10.217.0.19:52269 - 385 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000973738s 2025-08-13T20:37:36.229265696+00:00 stdout F [INFO] 10.217.0.19:51103 - 9389 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001151813s 2025-08-13T20:37:41.676036215+00:00 stdout F [INFO] 10.217.0.62:57770 - 42152 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00107205s 2025-08-13T20:37:41.676216830+00:00 stdout F [INFO] 10.217.0.62:38336 - 46730 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001156253s 2025-08-13T20:37:56.364031551+00:00 stdout F [INFO] 10.217.0.64:57040 - 20480 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003036318s 2025-08-13T20:37:56.364031551+00:00 stdout F [INFO] 10.217.0.64:58395 - 10701 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002653297s 2025-08-13T20:37:56.364031551+00:00 stdout F [INFO] 10.217.0.64:42967 - 18626 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002456391s 2025-08-13T20:37:56.364031551+00:00 stdout F [INFO] 10.217.0.64:46951 - 59146 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.002577614s 2025-08-13T20:38:05.237364761+00:00 stdout F [INFO] 10.217.0.19:58012 - 10248 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00347164s 2025-08-13T20:38:05.237436263+00:00 stdout F [INFO] 10.217.0.19:43124 - 45715 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003759018s 2025-08-13T20:38:05.684249535+00:00 stdout F [INFO] 10.217.0.45:44888 - 52674 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000809783s 2025-08-13T20:38:05.684453351+00:00 stdout F [INFO] 10.217.0.45:51261 - 16994 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. 
udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000764562s 2025-08-13T20:38:11.559879617+00:00 stdout F [INFO] 10.217.0.62:40287 - 3642 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001132343s 2025-08-13T20:38:11.559879617+00:00 stdout F [INFO] 10.217.0.62:43178 - 21365 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001228095s 2025-08-13T20:38:11.671238298+00:00 stdout F [INFO] 10.217.0.62:53652 - 48116 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000799183s 2025-08-13T20:38:11.671454264+00:00 stdout F [INFO] 10.217.0.62:47552 - 28323 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000930827s 2025-08-13T20:38:22.888462531+00:00 stdout F [INFO] 10.217.0.8:60705 - 40432 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.004011316s 2025-08-13T20:38:22.888594465+00:00 stdout F [INFO] 10.217.0.8:51067 - 23753 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002672687s 2025-08-13T20:38:22.890079978+00:00 stdout F [INFO] 10.217.0.8:49803 - 63924 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000994649s 2025-08-13T20:38:22.890333025+00:00 stdout F [INFO] 10.217.0.8:40392 - 33949 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001184864s 2025-08-13T20:38:26.708978077+00:00 stdout F [INFO] 10.217.0.19:48385 - 4246 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001006469s 2025-08-13T20:38:26.708978077+00:00 stdout F [INFO] 10.217.0.19:40797 - 41192 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001494034s 2025-08-13T20:38:35.230301728+00:00 stdout F [INFO] 10.217.0.19:35587 - 13762 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001269237s 2025-08-13T20:38:35.230301728+00:00 stdout F [INFO] 10.217.0.19:36692 - 31283 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001164714s 2025-08-13T20:38:41.688856539+00:00 stdout F [INFO] 10.217.0.62:43247 - 17071 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001293217s 2025-08-13T20:38:41.689246861+00:00 stdout F [INFO] 10.217.0.62:48262 - 45415 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001867384s 2025-08-13T20:38:49.075275340+00:00 stdout F [INFO] 10.217.0.8:48946 - 32235 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001550515s 2025-08-13T20:38:49.075275340+00:00 stdout F [INFO] 10.217.0.8:51973 - 5060 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000840614s 2025-08-13T20:38:49.075678612+00:00 stdout F [INFO] 10.217.0.8:58876 - 13836 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000530995s 2025-08-13T20:38:49.076039632+00:00 stdout F [INFO] 10.217.0.8:36573 - 30703 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000560596s 2025-08-13T20:38:56.362727357+00:00 stdout F [INFO] 10.217.0.64:36222 - 957 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001191765s 2025-08-13T20:38:56.362727357+00:00 stdout F [INFO] 10.217.0.64:34160 - 60776 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001621137s 2025-08-13T20:38:56.363249612+00:00 stdout F [INFO] 10.217.0.64:49956 - 6395 "A IN api-int.crc.testing.crc.testing. 
udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00106333s 2025-08-13T20:38:56.363525860+00:00 stdout F [INFO] 10.217.0.64:34527 - 596 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001017109s 2025-08-13T20:39:05.244100877+00:00 stdout F [INFO] 10.217.0.19:41309 - 38781 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001548515s 2025-08-13T20:39:05.248323839+00:00 stdout F [INFO] 10.217.0.19:39054 - 64832 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001515684s 2025-08-13T20:39:05.727542485+00:00 stdout F [INFO] 10.217.0.45:51846 - 56137 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000750081s 2025-08-13T20:39:05.727619187+00:00 stdout F [INFO] 10.217.0.45:33209 - 27535 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000454923s 2025-08-13T20:39:11.675957299+00:00 stdout F [INFO] 10.217.0.62:59504 - 55315 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001278777s 2025-08-13T20:39:11.675957299+00:00 stdout F [INFO] 10.217.0.62:53199 - 51466 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001345839s 2025-08-13T20:39:22.886569621+00:00 stdout F [INFO] 10.217.0.8:44293 - 63474 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002724538s 2025-08-13T20:39:22.886569621+00:00 stdout F [INFO] 10.217.0.8:45409 - 47820 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00310802s 2025-08-13T20:39:22.888062534+00:00 stdout F [INFO] 10.217.0.8:46886 - 26202 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001225315s 2025-08-13T20:39:22.889389783+00:00 stdout F [INFO] 10.217.0.8:34919 - 32205 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000997758s 2025-08-13T20:39:33.486891188+00:00 stdout F [INFO] 10.217.0.62:38465 - 36749 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003028897s 2025-08-13T20:39:33.487008922+00:00 stdout F [INFO] 10.217.0.62:59209 - 25765 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00138855s 2025-08-13T20:39:33.520351993+00:00 stdout F [INFO] 10.217.0.62:54424 - 53500 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002620206s 2025-08-13T20:39:33.520574099+00:00 stdout F [INFO] 10.217.0.62:34405 - 52422 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002562044s 2025-08-13T20:39:34.929199110+00:00 stdout F [INFO] 10.217.0.19:59832 - 28710 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00105181s 2025-08-13T20:39:34.929293403+00:00 stdout F [INFO] 10.217.0.19:52613 - 8655 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001231076s 2025-08-13T20:39:35.238379334+00:00 stdout F [INFO] 10.217.0.19:53361 - 39733 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001116923s 2025-08-13T20:39:35.238379334+00:00 stdout F [INFO] 10.217.0.19:40427 - 35608 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000605167s 2025-08-13T20:39:36.328013239+00:00 stdout F [INFO] 10.217.0.19:37534 - 9463 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000661189s 2025-08-13T20:39:36.328494453+00:00 stdout F [INFO] 10.217.0.19:40881 - 3299 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001098372s 2025-08-13T20:39:38.315116047+00:00 stdout F [INFO] 10.217.0.62:36946 - 5637 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001194124s 2025-08-13T20:39:38.315116047+00:00 stdout F [INFO] 10.217.0.62:36687 - 12833 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001436381s 2025-08-13T20:39:41.673972293+00:00 stdout F [INFO] 10.217.0.62:52646 - 36352 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000531765s 2025-08-13T20:39:41.673972293+00:00 stdout F [INFO] 10.217.0.62:55867 - 16523 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001416061s 2025-08-13T20:39:42.023083748+00:00 stdout F [INFO] 10.217.0.19:34636 - 26806 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000735301s 2025-08-13T20:39:42.023083748+00:00 stdout F [INFO] 10.217.0.19:35424 - 55421 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000683849s 2025-08-13T20:39:42.111955400+00:00 stdout F [INFO] 10.217.0.19:54314 - 8861 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000667829s 2025-08-13T20:39:42.112000792+00:00 stdout F [INFO] 10.217.0.19:33866 - 17232 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000617808s 2025-08-13T20:39:42.548542797+00:00 stdout F [INFO] 10.217.0.62:46643 - 51865 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000583177s 2025-08-13T20:39:42.548542797+00:00 stdout F [INFO] 10.217.0.62:57527 - 7741 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001258096s 2025-08-13T20:39:45.396443433+00:00 stdout F [INFO] 10.217.0.62:36515 - 31345 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002234595s 2025-08-13T20:39:45.396822784+00:00 stdout F [INFO] 10.217.0.62:45617 - 47171 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002168872s 2025-08-13T20:39:56.364665487+00:00 stdout F [INFO] 10.217.0.64:42472 - 6754 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.00345008s 2025-08-13T20:39:56.364665487+00:00 stdout F [INFO] 10.217.0.64:48733 - 49549 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.002868173s 2025-08-13T20:39:56.364665487+00:00 stdout F [INFO] 10.217.0.64:49502 - 50789 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004282123s 2025-08-13T20:39:56.365445019+00:00 stdout F [INFO] 10.217.0.64:43991 - 12908 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003339476s 2025-08-13T20:40:01.060378034+00:00 stdout F [INFO] 10.217.0.19:47434 - 25551 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001119163s 2025-08-13T20:40:01.060378034+00:00 stdout F [INFO] 10.217.0.19:51981 - 9328 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00137947s 2025-08-13T20:40:01.096217857+00:00 stdout F [INFO] 10.217.0.19:39773 - 1053 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001982967s 2025-08-13T20:40:01.096217857+00:00 stdout F [INFO] 10.217.0.19:34232 - 3657 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002569844s 2025-08-13T20:40:05.319413503+00:00 stdout F [INFO] 10.217.0.19:53209 - 19770 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001161094s 2025-08-13T20:40:05.330872523+00:00 stdout F [INFO] 10.217.0.19:47054 - 30121 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000943507s 2025-08-13T20:40:05.767886341+00:00 stdout F [INFO] 10.217.0.45:37617 - 52692 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000704861s 2025-08-13T20:40:05.767886341+00:00 stdout F [INFO] 10.217.0.45:36136 - 29941 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000623578s 2025-08-13T20:40:08.338925635+00:00 stdout F [INFO] 10.217.0.19:40398 - 2296 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000933276s 2025-08-13T20:40:08.338925635+00:00 stdout F [INFO] 10.217.0.19:37518 - 13823 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001020039s 2025-08-13T20:40:08.364254426+00:00 stdout F [INFO] 10.217.0.19:44262 - 12027 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000547166s 2025-08-13T20:40:08.364254426+00:00 stdout F [INFO] 10.217.0.19:52854 - 44366 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000602588s 2025-08-13T20:40:08.482902576+00:00 stdout F [INFO] 10.217.0.19:46869 - 62396 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000638609s 2025-08-13T20:40:08.482902576+00:00 stdout F [INFO] 10.217.0.19:59250 - 28552 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000966127s 2025-08-13T20:40:11.682310416+00:00 stdout F [INFO] 10.217.0.62:34923 - 63174 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000663609s 2025-08-13T20:40:11.682310416+00:00 stdout F [INFO] 10.217.0.62:48099 - 56805 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001330308s 2025-08-13T20:40:22.885384643+00:00 stdout F [INFO] 10.217.0.8:34635 - 60579 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002014208s 2025-08-13T20:40:22.885384643+00:00 stdout F [INFO] 10.217.0.8:50611 - 3878 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.000949757s 2025-08-13T20:40:22.887684909+00:00 stdout F [INFO] 10.217.0.8:50782 - 41587 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.000613388s 2025-08-13T20:40:22.888213085+00:00 stdout F [INFO] 10.217.0.8:60131 - 22403 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.002481611s 2025-08-13T20:40:31.249961554+00:00 stdout F [INFO] 10.217.0.73:56729 - 39008 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001197004s 2025-08-13T20:40:31.249961554+00:00 stdout F [INFO] 10.217.0.73:50058 - 34761 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001338929s 2025-08-13T20:40:35.247958776+00:00 stdout F [INFO] 10.217.0.19:32839 - 29831 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002160112s 2025-08-13T20:40:35.247958776+00:00 stdout F [INFO] 10.217.0.19:45865 - 28487 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002171683s 2025-08-13T20:40:41.677077336+00:00 stdout F [INFO] 10.217.0.62:39935 - 29145 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001932495s 2025-08-13T20:40:41.677077336+00:00 stdout F [INFO] 10.217.0.62:44261 - 57272 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.002200554s 2025-08-13T20:40:45.955422642+00:00 stdout F [INFO] 10.217.0.19:49738 - 38120 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000867465s 2025-08-13T20:40:45.955559056+00:00 stdout F [INFO] 10.217.0.19:38158 - 58947 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000880456s 2025-08-13T20:40:56.365382803+00:00 stdout F [INFO] 10.217.0.64:40460 - 52275 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.004399127s 2025-08-13T20:40:56.365452485+00:00 stdout F [INFO] 10.217.0.64:55259 - 53324 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003396638s 2025-08-13T20:40:56.365603599+00:00 stdout F [INFO] 10.217.0.64:41027 - 3240 "A IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.003673406s 2025-08-13T20:40:56.366382632+00:00 stdout F [INFO] 10.217.0.64:40992 - 1845 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.005853039s 2025-08-13T20:41:03.402012231+00:00 stdout F [INFO] 10.217.0.82:36224 - 24973 "A IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005070816s 2025-08-13T20:41:03.402012231+00:00 stdout F [INFO] 10.217.0.82:58091 - 39129 "AAAA IN registry.redhat.io.crc.testing. udp 59 false 1232" NXDOMAIN qr,rd,ra 48 0.005111768s 2025-08-13T20:41:03.942112441+00:00 stdout F [INFO] 10.217.0.82:45684 - 38759 "AAAA IN cdn01.quay.io.crc.testing. 
udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.002796691s 2025-08-13T20:41:03.942412880+00:00 stdout F [INFO] 10.217.0.82:60802 - 56082 "A IN cdn01.quay.io.crc.testing. udp 54 false 1232" NXDOMAIN qr,rd,ra 43 0.000828994s 2025-08-13T20:41:05.259690387+00:00 stdout F [INFO] 10.217.0.19:54024 - 37942 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001103122s 2025-08-13T20:41:05.259690387+00:00 stdout F [INFO] 10.217.0.19:55781 - 37536 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000913356s 2025-08-13T20:41:05.828109685+00:00 stdout F [INFO] 10.217.0.45:58352 - 47304 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000848504s 2025-08-13T20:41:05.828612010+00:00 stdout F [INFO] 10.217.0.45:37579 - 6043 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.000965248s 2025-08-13T20:41:11.678751738+00:00 stdout F [INFO] 10.217.0.62:37982 - 19302 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00105337s 2025-08-13T20:41:11.679052437+00:00 stdout F [INFO] 10.217.0.62:36701 - 22169 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001077221s 2025-08-13T20:41:22.888874237+00:00 stdout F [INFO] 10.217.0.8:40546 - 41194 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.002784941s 2025-08-13T20:41:22.888976080+00:00 stdout F [INFO] 10.217.0.8:49312 - 22262 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.00451814s 2025-08-13T20:41:22.890701110+00:00 stdout F [INFO] 10.217.0.8:35622 - 3186 "AAAA IN thanos-querier.openshift-monitoring.svc. 
udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001110432s 2025-08-13T20:41:22.893256974+00:00 stdout F [INFO] 10.217.0.8:46249 - 43292 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001048781s 2025-08-13T20:41:33.614180660+00:00 stdout F [INFO] 10.217.0.19:34045 - 59058 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003720818s 2025-08-13T20:41:33.614180660+00:00 stdout F [INFO] 10.217.0.19:42856 - 33590 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004016166s 2025-08-13T20:41:35.258012771+00:00 stdout F [INFO] 10.217.0.19:47082 - 44019 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004036316s 2025-08-13T20:41:35.258012771+00:00 stdout F [INFO] 10.217.0.19:53023 - 62739 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004330375s 2025-08-13T20:41:41.687703349+00:00 stdout F [INFO] 10.217.0.62:59948 - 65257 "A IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.003153271s 2025-08-13T20:41:41.687703349+00:00 stdout F [INFO] 10.217.0.62:46724 - 35067 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.00419401s 2025-08-13T20:41:55.588870352+00:00 stdout F [INFO] 10.217.0.19:60199 - 65263 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002129141s 2025-08-13T20:41:55.588870352+00:00 stdout F [INFO] 10.217.0.19:37883 - 24039 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003023257s 2025-08-13T20:41:56.366333937+00:00 stdout F [INFO] 10.217.0.64:45373 - 9031 "A IN api.crc.testing.crc.testing. 
udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.000812273s 2025-08-13T20:41:56.366770229+00:00 stdout F [INFO] 10.217.0.64:49602 - 22957 "AAAA IN api.crc.testing.crc.testing. udp 56 false 1232" NXDOMAIN qr,rd,ra 45 0.001103102s 2025-08-13T20:41:56.367114399+00:00 stdout F [INFO] 10.217.0.64:57192 - 51247 "AAAA IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001530174s 2025-08-13T20:41:56.369267911+00:00 stdout F [INFO] 10.217.0.64:43691 - 12648 "A IN api-int.crc.testing.crc.testing. udp 60 false 1232" NXDOMAIN qr,rd,ra 49 0.001527594s 2025-08-13T20:42:05.249404267+00:00 stdout F [INFO] 10.217.0.19:56767 - 13605 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001139273s 2025-08-13T20:42:05.249404267+00:00 stdout F [INFO] 10.217.0.19:42307 - 25762 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.002037729s 2025-08-13T20:42:05.884957991+00:00 stdout F [INFO] 10.217.0.45:34593 - 23662 "AAAA IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.001245216s 2025-08-13T20:42:05.886252038+00:00 stdout F [INFO] 10.217.0.45:41209 - 48819 "A IN canary-openshift-ingress-canary.apps-crc.testing.crc.testing. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.002737289s 2025-08-13T20:42:10.672284590+00:00 stdout F [INFO] 10.217.0.19:36729 - 28054 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.000588907s 2025-08-13T20:42:10.674088373+00:00 stdout F [INFO] 10.217.0.19:43538 - 27639 "A IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.001673758s 2025-08-13T20:42:11.682824284+00:00 stdout F [INFO] 10.217.0.62:47919 - 6529 "A IN console-openshift-console.apps-crc.testing.crc.testing. 
udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.000978938s 2025-08-13T20:42:11.682824284+00:00 stdout F [INFO] 10.217.0.62:48501 - 2072 "AAAA IN console-openshift-console.apps-crc.testing.crc.testing. udp 83 false 1232" NXDOMAIN qr,rd,ra 72 0.001006219s 2025-08-13T20:42:22.889244705+00:00 stdout F [INFO] 10.217.0.8:57028 - 4650 "AAAA IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.003507911s 2025-08-13T20:42:22.889244705+00:00 stdout F [INFO] 10.217.0.8:55542 - 40045 "A IN thanos-querier.openshift-monitoring.svc.crc.testing. udp 80 false 1232" NXDOMAIN qr,rd,ra 69 0.001010219s 2025-08-13T20:42:22.890900792+00:00 stdout F [INFO] 10.217.0.8:51800 - 58877 "A IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001256136s 2025-08-13T20:42:22.891156140+00:00 stdout F [INFO] 10.217.0.8:58299 - 14903 "AAAA IN thanos-querier.openshift-monitoring.svc. udp 68 false 1232" NXDOMAIN qr,rd,ra 57 0.001578475s 2025-08-13T20:42:35.276591715+00:00 stdout F [INFO] 10.217.0.19:42163 - 4628 "AAAA IN oauth-openshift.apps-crc.testing.crc.testing. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003242554s 2025-08-13T20:42:35.276591715+00:00 stdout F [INFO] 10.217.0.19:43475 - 45715 "A IN oauth-openshift.apps-crc.testing.crc.testing. 
udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004115779s 2025-08-13T20:42:43.641185657+00:00 stdout F [INFO] SIGTERM: Shutting down servers then terminating 2025-08-13T20:42:43.648934750+00:00 stdout F [INFO] plugin/health: Going into lameduck mode for 20s ././@LongLink0000644000000000000000000000024400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000755000175000017500000000000015117130654033000 5ustar zuulzuul././@LongLink0000644000000000000000000000025100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000644000175000017500000000202015117130647032776 0ustar zuulzuul2025-12-13T00:13:16.585080627+00:00 stderr F W1213 00:13:16.583759 1 deprecated.go:66] 2025-12-13T00:13:16.585080627+00:00 stderr F ==== Removed Flag Warning ====================== 2025-12-13T00:13:16.585080627+00:00 stderr F 2025-12-13T00:13:16.585080627+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-12-13T00:13:16.585080627+00:00 stderr F 2025-12-13T00:13:16.585080627+00:00 stderr F =============================================== 2025-12-13T00:13:16.585080627+00:00 stderr F 2025-12-13T00:13:16.585080627+00:00 stderr F I1213 00:13:16.584496 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:13:16.585080627+00:00 stderr F I1213 00:13:16.584528 1 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:13:16.585080627+00:00 stderr F I1213 00:13:16.585007 1 kube-rbac-proxy.go:395] Starting TCP socket on :9154 2025-12-13T00:13:16.587013132+00:00 stderr F I1213 00:13:16.585610 1 kube-rbac-proxy.go:402] Listening securely on :9154 ././@LongLink0000644000000000000000000000025100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-defa0000644000175000017500000000222515117130647033005 0ustar zuulzuul2025-08-13T19:59:22.553763903+00:00 stderr F W0813 19:59:22.553003 1 deprecated.go:66] 2025-08-13T19:59:22.553763903+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:22.553763903+00:00 stderr F 2025-08-13T19:59:22.553763903+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-08-13T19:59:22.553763903+00:00 stderr F 2025-08-13T19:59:22.553763903+00:00 stderr F =============================================== 2025-08-13T19:59:22.553763903+00:00 stderr F 2025-08-13T19:59:22.658394206+00:00 stderr F I0813 19:59:22.658018 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:22.658394206+00:00 stderr F I0813 19:59:22.658104 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:22.714929947+00:00 stderr F I0813 19:59:22.702562 1 kube-rbac-proxy.go:395] Starting TCP socket on :9154 2025-08-13T19:59:22.796718549+00:00 stderr F I0813 19:59:22.796259 1 kube-rbac-proxy.go:402] Listening securely on :9154 2025-08-13T20:42:42.272284042+00:00 stderr F I0813 20:42:42.271402 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000755000175000017500000000000015117130654033072 5ustar zuulzuul././@LongLink0000644000000000000000000000040400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000000012315117130646033071 0ustar zuulzuul2025-08-13T20:42:54.210046648+00:00 stderr F Shutting down, got signal: Terminated ././@LongLink0000644000000000000000000000040400000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/31c30f7e9c0640f9acb17a855004a8cb4f32333c72b230eb85e614bdf2c07efe.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000000000015117130646033063 0ustar zuulzuul././@LongLink0000644000000000000000000000033700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000755000175000017500000000000015117130654033072 5ustar zuulzuul././@LongLink0000644000000000000000000000034400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000000123015117130646033071 0ustar zuulzuul2025-12-13T00:13:17.129438099+00:00 stderr F I1213 00:13:17.126435 23 cmd.go:331] Waiting for process with process name "cluster-samples-operator" ... 
2025-12-13T00:13:17.129438099+00:00 stderr F I1213 00:13:17.127419 23 cmd.go:341] Watching for changes in: ([]string) (len=2 cap=2) { 2025-12-13T00:13:17.129438099+00:00 stderr F (string) (len=32) "/proc/7/root/etc/secrets/tls.crt", 2025-12-13T00:13:17.129438099+00:00 stderr F (string) (len=32) "/proc/7/root/etc/secrets/tls.key" 2025-12-13T00:13:17.129438099+00:00 stderr F } 2025-12-13T00:13:17.162554321+00:00 stderr F I1213 00:13:17.162506 23 observer_polling.go:159] Starting file observer ././@LongLink0000644000000000000000000000034400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000000770515117130646033106 0ustar zuulzuul2025-08-13T19:59:27.826236007+00:00 stderr F I0813 19:59:27.767514 20 cmd.go:331] Waiting for process with process name "cluster-samples-operator" ... 
2025-08-13T19:59:28.048819791+00:00 stderr F I0813 19:59:28.024085 20 cmd.go:341] Watching for changes in: ([]string) (len=2 cap=2) { 2025-08-13T19:59:28.048819791+00:00 stderr F (string) (len=32) "/proc/7/root/etc/secrets/tls.crt", 2025-08-13T19:59:28.048819791+00:00 stderr F (string) (len=32) "/proc/7/root/etc/secrets/tls.key" 2025-08-13T19:59:28.048819791+00:00 stderr F } 2025-08-13T19:59:28.504245112+00:00 stderr F I0813 19:59:28.496669 20 observer_polling.go:159] Starting file observer 2025-08-13T20:00:33.509123438+00:00 stderr F I0813 20:00:33.508126 20 observer_polling.go:120] Observed file "/proc/7/root/etc/secrets/tls.crt" has been modified (old="cdb17acdd32bfc0645d11444c7bea3d36372b393d1d931de26e0171fac0f40c1", new="efb887ba7696e196412e106436a291839e27a1e68375ce9e547b2e2b32b7e988") 2025-08-13T20:00:33.774400752+00:00 stderr F I0813 20:00:33.767934 20 cmd.go:292] Sending TERM signal to 7 ... 2025-08-13T20:00:34.034707685+00:00 stderr F W0813 20:00:34.034452 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 67, err: ) 2025-08-13T20:00:34.273950867+00:00 stderr F I0813 20:00:34.272174 20 observer_polling.go:114] Observed file "/proc/7/root/etc/secrets/tls.key" has been deleted 2025-08-13T20:00:34.273950867+00:00 stderr F I0813 20:00:34.272491 20 cmd.go:331] Waiting for process with process name "cluster-samples-operator" ... 
2025-08-13T20:00:34.273950867+00:00 stderr F W0813 20:00:34.273404 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 1, err: ) 2025-08-13T20:00:35.365647885+00:00 stderr F W0813 20:00:35.357860 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 2, err: ) 2025-08-13T20:00:36.281903040+00:00 stderr F W0813 20:00:36.279125 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 3, err: ) 2025-08-13T20:00:37.283066398+00:00 stderr F W0813 20:00:37.276990 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 4, err: ) 2025-08-13T20:00:38.308991331+00:00 stderr F W0813 20:00:38.305516 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 5, err: ) 2025-08-13T20:00:38.520605965+00:00 stderr F I0813 20:00:38.518352 20 observer_polling.go:162] Shutting down file observer 2025-08-13T20:00:39.284050724+00:00 stderr F W0813 20:00:39.283223 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 6, err: ) 2025-08-13T20:00:40.280375272+00:00 stderr F W0813 20:00:40.277531 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 7, err: ) 2025-08-13T20:00:41.276881747+00:00 stderr F W0813 20:00:41.276510 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 8, err: ) 2025-08-13T20:00:42.286214347+00:00 stderr F W0813 20:00:42.278618 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 9, err: ) 2025-08-13T20:00:43.275711531+00:00 stderr F W0813 20:00:43.275310 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 10, err: ) 2025-08-13T20:00:44.275672973+00:00 stderr F I0813 20:00:44.274906 20 cmd.go:341] Watching for changes in: ([]string) (len=2 cap=2) { 2025-08-13T20:00:44.275672973+00:00 stderr F (string) (len=33) "/proc/38/root/etc/secrets/tls.crt", 2025-08-13T20:00:44.275672973+00:00 stderr F (string) (len=33) 
"/proc/38/root/etc/secrets/tls.key" 2025-08-13T20:00:44.275672973+00:00 stderr F } 2025-08-13T20:00:45.576200637+00:00 stderr F I0813 20:00:45.575191 20 observer_polling.go:159] Starting file observer 2025-08-13T20:42:42.279269663+00:00 stderr F W0813 20:42:42.278947 20 cmd.go:197] Unable to determine PID for "cluster-samples-operator" (retry: 2529, err: ) ././@LongLink0000644000000000000000000000033100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000755000175000017500000000000015117130654033072 5ustar zuulzuul././@LongLink0000644000000000000000000000033600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000033314115117130646033102 0ustar zuulzuul2025-08-13T20:00:44.607246338+00:00 stderr F time="2025-08-13T20:00:44Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-08-13T20:00:44.607860915+00:00 stderr F time="2025-08-13T20:00:44Z" level=info msg="Go OS/Arch: linux/amd64" 2025-08-13T20:00:44.643869742+00:00 stderr F time="2025-08-13T20:00:44Z" level=info msg="template client &v1.TemplateV1Client{restClient:(*rest.RESTClient)(0xc000789040)}" 2025-08-13T20:00:44.643869742+00:00 stderr F time="2025-08-13T20:00:44Z" level=info msg="image client &v1.ImageV1Client{restClient:(*rest.RESTClient)(0xc0007890e0)}" 2025-08-13T20:00:46.045395765+00:00 stderr F time="2025-08-13T20:00:46Z" level=info msg="waiting for informer 
caches to sync" 2025-08-13T20:00:46.490100176+00:00 stderr F W0813 20:00:46.490036 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:46.490191988+00:00 stderr F E0813 20:00:46.490176 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:46.556919121+00:00 stderr F W0813 20:00:46.550017 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:46.556919121+00:00 stderr F E0813 20:00:46.550134 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:47.714511817+00:00 stderr F W0813 20:00:47.713583 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:47.714623810+00:00 stderr F E0813 20:00:47.714577 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:47.909393354+00:00 stderr F W0813 20:00:47.908970 38 reflector.go:539] 
github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:47.909393354+00:00 stderr F E0813 20:00:47.909304 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:50.029249460+00:00 stderr F W0813 20:00:50.028537 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:50.029249460+00:00 stderr F E0813 20:00:50.029222 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:50.706434319+00:00 stderr F W0813 20:00:50.705977 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:50.706434319+00:00 stderr F E0813 20:00:50.706347 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:54.091117328+00:00 stderr F W0813 20:00:54.087954 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the 
request (get templates.template.openshift.io) 2025-08-13T20:00:54.091117328+00:00 stderr F E0813 20:00:54.088658 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:00:56.391398369+00:00 stderr F W0813 20:00:56.389256 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:56.391398369+00:00 stderr F E0813 20:00:56.389730 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:06.689662603+00:00 stderr F W0813 20:01:06.687460 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:01:06.693053539+00:00 stderr F E0813 20:01:06.688866 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:01:08.723703810+00:00 stderr F W0813 20:01:08.722018 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:08.723703810+00:00 stderr F E0813 20:01:08.722756 38 reflector.go:147] 
github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:26.625246002+00:00 stderr F W0813 20:01:26.623011 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:01:26.633354693+00:00 stderr F E0813 20:01:26.633140 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:01:28.382170209+00:00 stderr F W0813 20:01:28.381670 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:28.382270301+00:00 stderr F E0813 20:01:28.382213 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:59.134006323+00:00 stderr F W0813 20:01:59.133064 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:01:59.134006323+00:00 stderr F E0813 20:01:59.133733 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server 
is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:02:17.309913758+00:00 stderr F W0813 20:02:17.309063 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:02:17.310005721+00:00 stderr F E0813 20:02:17.309935 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:02:36.746592578+00:00 stderr F W0813 20:02:36.745612 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.746592578+00:00 stderr F E0813 20:02:36.746462 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.876275642+00:00 stderr F W0813 20:02:49.875554 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.876275642+00:00 stderr F E0813 20:02:49.876223 38 reflector.go:147] 
github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.230143008+00:00 stderr F W0813 20:03:22.229209 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.230143008+00:00 stderr F E0813 20:03:22.229992 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.101454885+00:00 stderr F W0813 20:03:26.101276 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:26.101677402+00:00 stderr F E0813 20:03:26.101486 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.219366650+00:00 stderr F W0813 20:03:59.218715 38 reflector.go:539] 
github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.219366650+00:00 stderr F E0813 20:03:59.219326 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.239806871+00:00 stderr F W0813 20:04:06.239069 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.239806871+00:00 stderr F E0813 20:04:06.239748 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.574106586+00:00 stderr F W0813 20:04:37.561396 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.574975231+00:00 stderr F E0813 20:04:37.574405 38 reflector.go:147] 
github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: Get "https://10.217.4.1:443/apis/template.openshift.io/v1/namespaces/openshift/templates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:38.987897501+00:00 stderr F W0813 20:04:38.985355 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:38.987897501+00:00 stderr F E0813 20:04:38.985464 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: Get "https://10.217.4.1:443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:33.947007920+00:00 stderr F W0813 20:05:33.933063 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:05:33.947007920+00:00 stderr F E0813 20:05:33.942434 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:05:34.721279152+00:00 stderr F W0813 20:05:34.720916 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 
2025-08-13T20:05:34.721279152+00:00 stderr F E0813 20:05:34.721267 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:06:08.857838209+00:00 stderr F W0813 20:06:08.856659 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:06:08.857988443+00:00 stderr F E0813 20:06:08.857853 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:06:21.536005791+00:00 stderr F W0813 20:06:21.533951 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:06:21.536005791+00:00 stderr F E0813 20:06:21.534644 38 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:06:56.337219614+00:00 stderr F W0813 20:06:56.335478 38 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:06:56.337219614+00:00 stderr F E0813 20:06:56.336353 38 reflector.go:147] 
github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-08-13T20:07:06.011577326+00:00 stderr F W0813 20:07:06.010522 38 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:07:06.011577326+00:00 stderr F E0813 20:07:06.011292 38 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:08:02.348213355+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="started events processor" 2025-08-13T20:08:02.372559943+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.372559943+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-08-13T20:08:02.403632764+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-08-13T20:08:02.403632764+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-08-13T20:08:02.410222553+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.410222553+00:00 stderr F time="2025-08-13T20:08:02Z" level=info 
msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-08-13T20:08:02.427300232+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:02.427300232+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-08-13T20:08:02.429669810+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:02.435338613+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:08:02.435626801+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-08-13T20:08:02.447449820+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-08-13T20:08:02.447449820+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-08-13T20:08:02.448483380+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:02.455757238+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags" 2025-08-13T20:08:02.455757238+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-08-13T20:08:02.461821002+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream 
fuse7-java-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.461821002+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-08-13T20:08:02.803253941+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-08-13T20:08:02.803253941+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-08-13T20:08:02.807970266+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.807970266+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-08-13T20:08:02.872235759+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-08-13T20:08:02.872235759+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 2025-08-13T20:08:02.888209217+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.888209217+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-08-13T20:08:02.896845954+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:02.896845954+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or 
image imports in flight for imagestream java-runtime" 2025-08-13T20:08:02.901936570+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.901936570+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-08-13T20:08:02.910760503+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.910760503+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-08-13T20:08:02.914398708+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.914398708+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-08-13T20:08:02.921727088+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.921727088+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-08-13T20:08:02.926546616+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.926546616+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in 
flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-08-13T20:08:02.940195247+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:08:02.940195247+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-08-13T20:08:02.947607250+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-08-13T20:08:02.947672352+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:08:02.955914108+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.955914108+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-08-13T20:08:02.962386474+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.962386474+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-08-13T20:08:02.969634151+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:02.969634151+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream 
jboss-eap74-openjdk8-runtime-openshift" 2025-08-13T20:08:02.974028547+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:08:02.974028547+00:00 stderr F time="2025-08-13T20:08:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-08-13T20:08:02.982579443+00:00 stderr F time="2025-08-13T20:08:02Z" level=warning msg="Image import for imagestream jenkins-agent-base tag scheduled-upgrade generation 3 failed with detailed message Internal error occurred: registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:v4.13.0: Get \"https://registry.redhat.io/v2/ocp-tools-4/jenkins-agent-base-rhel8/manifests/v4.13.0\": unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" 2025-08-13T20:08:03.922922212+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="initiated an imagestreamimport retry for imagestream/tag jenkins-agent-base/scheduled-upgrade" 2025-08-13T20:08:03.935561485+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-08-13T20:08:03.935561485+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-08-13T20:08:03.938521480+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.938521480+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-08-13T20:08:03.942338049+00:00 stderr F time="2025-08-13T20:08:03Z" 
level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-08-13T20:08:03.942393101+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-08-13T20:08:03.945844949+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-08-13T20:08:03.945939002+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-08-13T20:08:03.952315515+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-08-13T20:08:03.952315515+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-08-13T20:08:03.960539321+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:08:03.960848920+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:08:03.965657717+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-08-13T20:08:03.965880724+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-08-13T20:08:03.969083446+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-08-13T20:08:03.969203969+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 
2025-08-13T20:08:03.973291166+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-08-13T20:08:03.973291166+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-08-13T20:08:03.977096866+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-08-13T20:08:03.977170668+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-08-13T20:08:03.980565505+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-08-13T20:08:03.980597226+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-08-13T20:08:03.984014454+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.984014454+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-08-13T20:08:03.986409873+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.986738382+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-08-13T20:08:03.989862012+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing 
tags" 2025-08-13T20:08:03.989969005+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-08-13T20:08:03.994198356+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.994198356+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-08-13T20:08:03.999411255+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-08-13T20:08:03.999511038+00:00 stderr F time="2025-08-13T20:08:03Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-08-13T20:08:04.009067372+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-08-13T20:08:04.009667959+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-08-13T20:08:04.013335484+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:04.013420207+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-08-13T20:08:04.016498465+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-08-13T20:08:04.016678530+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in 
flight for imagestream ubi8-openjdk-8" 2025-08-13T20:08:04.020407737+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:08:04.020590873+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-08-13T20:08:04.026368188+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:04.026455471+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-08-13T20:08:04.031284829+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearing error messages from configmap for stream jenkins-agent-base and tag scheduled-upgrade" 2025-08-13T20:08:04.040103402+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base" 2025-08-13T20:08:04.049922974+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-08-13T20:08:04.049922974+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-08-13T20:08:04.050622114+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="CRDUPDATE importerrors false update" 2025-08-13T20:08:04.053712322+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:04.053762564+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream 
ubi8-openjdk-21-runtime" 2025-08-13T20:08:04.057615664+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:08:04.057615664+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-08-13T20:08:07.114613122+00:00 stderr F time="2025-08-13T20:08:07Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:07.114613122+00:00 stderr F time="2025-08-13T20:08:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:07.222850344+00:00 stderr F time="2025-08-13T20:08:07Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:07.223007889+00:00 stderr F time="2025-08-13T20:08:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:11.779708813+00:00 stderr F time="2025-08-13T20:08:11Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:11.779708813+00:00 stderr F time="2025-08-13T20:08:11Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:11.861485008+00:00 stderr F time="2025-08-13T20:08:11Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:11.861485008+00:00 stderr F time="2025-08-13T20:08:11Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:08:12.282581371+00:00 stderr F time="2025-08-13T20:08:12Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:08:12.282581371+00:00 stderr F 
time="2025-08-13T20:08:12Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:09:54.830597497+00:00 stderr F time="2025-08-13T20:09:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:09:54.830597497+00:00 stderr F time="2025-08-13T20:09:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:09:55.128982603+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:09:55.129065715+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:10:01.566036489+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:10:01.566036489+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:10:39.024057289+00:00 stderr F time="2025-08-13T20:10:39Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:10:39.024302786+00:00 stderr F time="2025-08-13T20:10:39Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:10:59.781430907+00:00 stderr F time="2025-08-13T20:10:59Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:10:59.781430907+00:00 stderr F time="2025-08-13T20:10:59Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:11:00.084006942+00:00 stderr F time="2025-08-13T20:11:00Z" level=info msg="no global imagestream configuration 
will block imagestream creation using " 2025-08-13T20:11:00.084006942+00:00 stderr F time="2025-08-13T20:11:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:11:00.374907753+00:00 stderr F time="2025-08-13T20:11:00Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:11:00.375574102+00:00 stderr F time="2025-08-13T20:11:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:11:19.467053212+00:00 stderr F time="2025-08-13T20:11:19Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:11:19.467053212+00:00 stderr F time="2025-08-13T20:11:19Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:18:02.307339975+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:18:02.307339975+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-08-13T20:18:02.966757286+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:18:02.966757286+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:18:02.985382158+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:18:02.985382158+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 
2025-08-13T20:18:02.985382158+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags"
2025-08-13T20:18:02.985382158+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="There are no more errors or image imports in flight for imagestream python"
2025-08-13T20:18:03.520430957+00:00 stderr F time="2025-08-13T20:18:03Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags"
2025-08-13T20:18:03.520430957+00:00 stderr F time="2025-08-13T20:18:03Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7"
2025-08-13T20:18:04.535876206+00:00 stderr F time="2025-08-13T20:18:04Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:04.535876206+00:00 stderr F time="2025-08-13T20:18:04Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift"
2025-08-13T20:18:04.978403133+00:00 stderr F time="2025-08-13T20:18:04Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags"
2025-08-13T20:18:04.978403133+00:00 stderr F time="2025-08-13T20:18:04Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime"
2025-08-13T20:18:06.390925831+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags"
2025-08-13T20:18:06.390925831+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream php"
2025-08-13T20:18:06.407627178+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags"
2025-08-13T20:18:06.407627178+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql"
2025-08-13T20:18:06.416865322+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags"
2025-08-13T20:18:06.416865322+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11"
2025-08-13T20:18:06.446104327+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags"
2025-08-13T20:18:06.446104327+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime"
2025-08-13T20:18:06.452899421+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.452899421+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift"
2025-08-13T20:18:06.459854979+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags"
2025-08-13T20:18:06.459854979+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins"
2025-08-13T20:18:06.469164085+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags"
2025-08-13T20:18:06.469164085+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream golang"
2025-08-13T20:18:06.568908434+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:18:06.568908434+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8"
2025-08-13T20:18:06.574094902+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.574094902+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift"
2025-08-13T20:18:06.579286820+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.579286820+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift"
2025-08-13T20:18:06.582616595+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.582616595+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift"
2025-08-13T20:18:06.587947887+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags"
2025-08-13T20:18:06.587947887+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream redis"
2025-08-13T20:18:06.599131687+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags"
2025-08-13T20:18:06.599131687+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd"
2025-08-13T20:18:06.605735775+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.605735775+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift"
2025-08-13T20:18:06.612459677+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags"
2025-08-13T20:18:06.612459677+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime"
2025-08-13T20:18:06.623063180+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jenkins-agent-base already deleted so no worries on clearing tags"
2025-08-13T20:18:06.623063180+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base"
2025-08-13T20:18:06.634421014+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.634421014+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift"
2025-08-13T20:18:06.643141153+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags"
2025-08-13T20:18:06.643141153+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream java"
2025-08-13T20:18:06.654255851+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:18:06.654255851+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8"
2025-08-13T20:18:06.662020883+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags"
2025-08-13T20:18:06.662020883+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime"
2025-08-13T20:18:06.667241402+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.667241402+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift"
2025-08-13T20:18:06.685541374+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags"
2025-08-13T20:18:06.685541374+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21"
2025-08-13T20:18:06.688721875+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.688721875+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift"
2025-08-13T20:18:06.693616445+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags"
2025-08-13T20:18:06.693698747+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8"
2025-08-13T20:18:06.699126952+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags"
2025-08-13T20:18:06.699126952+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql"
2025-08-13T20:18:06.702904520+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags"
2025-08-13T20:18:06.702904520+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8"
2025-08-13T20:18:06.707837741+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.707897353+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift"
2025-08-13T20:18:06.714696467+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.714696467+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift"
2025-08-13T20:18:06.726013190+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags"
2025-08-13T20:18:06.726013190+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17"
2025-08-13T20:18:06.731489846+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags"
2025-08-13T20:18:06.731489846+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby"
2025-08-13T20:18:06.736492549+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:18:06.736492549+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8"
2025-08-13T20:18:06.739185426+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.739252558+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift"
2025-08-13T20:18:06.744658042+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags"
2025-08-13T20:18:06.744658042+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime"
2025-08-13T20:18:06.748432640+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.748432640+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift"
2025-08-13T20:18:06.751220370+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:18:06.751220370+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8"
2025-08-13T20:18:06.754631607+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags"
2025-08-13T20:18:06.754631607+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb"
2025-08-13T20:18:06.758684183+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags"
2025-08-13T20:18:06.758684183+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx"
2025-08-13T20:18:06.767243277+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags"
2025-08-13T20:18:06.767243277+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift"
2025-08-13T20:18:06.776049099+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags"
2025-08-13T20:18:06.776049099+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8"
2025-08-13T20:18:06.779757575+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags"
2025-08-13T20:18:06.779757575+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11"
2025-08-13T20:18:06.783944384+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags"
2025-08-13T20:18:06.783944384+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream perl"
2025-08-13T20:18:06.787472455+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags"
2025-08-13T20:18:06.787543947+00:00 stderr F time="2025-08-13T20:18:06Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11"
2025-08-13T20:19:54.880115698+00:00 stderr F time="2025-08-13T20:19:54Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:19:54.880115698+00:00 stderr F time="2025-08-13T20:19:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:19:55.141373815+00:00 stderr F time="2025-08-13T20:19:55Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:19:55.141494039+00:00 stderr F time="2025-08-13T20:19:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:22:18.915201214+00:00 stderr F time="2025-08-13T20:22:18Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags"
2025-08-13T20:22:18.915201214+00:00 stderr F time="2025-08-13T20:22:18Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins"
2025-08-13T20:28:02.308714006+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags"
2025-08-13T20:28:02.308714006+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime"
2025-08-13T20:28:02.319453535+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.319453535+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8"
2025-08-13T20:28:02.323832900+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags"
2025-08-13T20:28:02.323923243+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java"
2025-08-13T20:28:02.329429961+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.329429961+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8"
2025-08-13T20:28:02.333298503+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.333298503+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift"
2025-08-13T20:28:02.337924946+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.337924946+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8"
2025-08-13T20:28:02.341408756+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.341408756+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21"
2025-08-13T20:28:02.346147092+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.346147092+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift"
2025-08-13T20:28:02.350829617+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.350829617+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift"
2025-08-13T20:28:02.355305725+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags"
2025-08-13T20:28:02.355305725+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime"
2025-08-13T20:28:02.360209206+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags"
2025-08-13T20:28:02.360209206+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql"
2025-08-13T20:28:02.364812158+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.364812158+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17"
2025-08-13T20:28:02.371478110+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.371478110+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift"
2025-08-13T20:28:02.373967922+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.373967922+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift"
2025-08-13T20:28:02.378446610+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.378446610+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8"
2025-08-13T20:28:02.381110237+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags"
2025-08-13T20:28:02.381110237+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime"
2025-08-13T20:28:02.384421912+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags"
2025-08-13T20:28:02.384421912+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb"
2025-08-13T20:28:02.386399489+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.386399489+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift"
2025-08-13T20:28:02.389844498+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.389844498+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift"
2025-08-13T20:28:02.392765632+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.392765632+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8"
2025-08-13T20:28:02.395645355+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags"
2025-08-13T20:28:02.395645355+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby"
2025-08-13T20:28:02.403048348+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags"
2025-08-13T20:28:02.403142020+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx"
2025-08-13T20:28:02.411039457+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.411039457+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift"
2025-08-13T20:28:02.415214457+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.415214457+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11"
2025-08-13T20:28:02.419758268+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags"
2025-08-13T20:28:02.419758268+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream perl"
2025-08-13T20:28:02.424949507+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags"
2025-08-13T20:28:02.424949507+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream php"
2025-08-13T20:28:02.430269180+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.430269180+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11"
2025-08-13T20:28:02.440613397+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags"
2025-08-13T20:28:02.440613397+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime"
2025-08-13T20:28:02.444632473+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags"
2025-08-13T20:28:02.444632473+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream python"
2025-08-13T20:28:02.448248957+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.448248957+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8"
2025-08-13T20:28:02.454270920+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags"
2025-08-13T20:28:02.454270920+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs"
2025-08-13T20:28:02.463883476+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags"
2025-08-13T20:28:02.463883476+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet"
2025-08-13T20:28:02.467411198+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.467411198+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7"
2025-08-13T20:28:02.471754743+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags"
2025-08-13T20:28:02.471754743+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins"
2025-08-13T20:28:02.477402255+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags"
2025-08-13T20:28:02.477402255+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql"
2025-08-13T20:28:02.482131731+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags"
2025-08-13T20:28:02.482131731+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime"
2025-08-13T20:28:02.488397691+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags"
2025-08-13T20:28:02.488397691+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime"
2025-08-13T20:28:02.491967054+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.492082687+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift"
2025-08-13T20:28:02.495688911+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.495688911+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11"
2025-08-13T20:28:02.498623645+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags"
2025-08-13T20:28:02.498623645+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream redis"
2025-08-13T20:28:02.501133997+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags"
2025-08-13T20:28:02.501185759+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream golang"
2025-08-13T20:28:02.503971799+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.503971799+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift"
2025-08-13T20:28:02.506392838+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.506446380+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift"
2025-08-13T20:28:02.509020194+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.509020194+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift"
2025-08-13T20:28:02.511605938+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:28:02.511653530+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8"
2025-08-13T20:28:02.514507372+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jenkins-agent-base already deleted so no worries on clearing tags"
2025-08-13T20:28:02.514507372+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base"
2025-08-13T20:28:02.521752660+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.522160162+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift"
2025-08-13T20:28:02.526249639+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags"
2025-08-13T20:28:02.526249639+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd"
2025-08-13T20:28:02.529900994+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:28:02.529900994+00:00 stderr F time="2025-08-13T20:28:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift"
2025-08-13T20:29:54.885451542+00:00 stderr F time="2025-08-13T20:29:54Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:29:54.885451542+00:00 stderr F time="2025-08-13T20:29:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:29:55.027747743+00:00 stderr F time="2025-08-13T20:29:55Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:29:55.030641676+00:00 stderr F time="2025-08-13T20:29:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:29:55.147048872+00:00 stderr F time="2025-08-13T20:29:55Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:29:55.147048872+00:00 stderr F time="2025-08-13T20:29:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:38:02.309098888+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:38:02.309269413+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift"
2025-08-13T20:38:02.326326495+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:38:02.326326495+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift"
2025-08-13T20:38:02.338454975+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags"
2025-08-13T20:38:02.338454975+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift"
2025-08-13T20:38:02.342211403+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags"
2025-08-13T20:38:02.342211403+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21"
2025-08-13T20:38:02.346724143+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jenkins-agent-base already deleted so no worries on clearing tags"
2025-08-13T20:38:02.346865967+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base"
2025-08-13T20:38:02.351160871+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags"
2025-08-13T20:38:02.351160871+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java"
2025-08-13T20:38:02.354514998+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:38:02.354514998+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8"
2025-08-13T20:38:02.358551754+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:38:02.358598555+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift"
2025-08-13T20:38:02.362463087+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags"
2025-08-13T20:38:02.362463087+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8"
2025-08-13T20:38:02.366733010+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:38:02.366863374+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8"
2025-08-13T20:38:02.370736005+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags"
2025-08-13T20:38:02.370736005+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd"
2025-08-13T20:38:02.375228885+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags"
2025-08-13T20:38:02.375308407+00:00 stderr F time="2025-08-13T20:38:02Z" 
level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 2025-08-13T20:38:02.379249151+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.379295622+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 2025-08-13T20:38:02.383374900+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.383423071+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-08-13T20:38:02.385961164+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.386057937+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-08-13T20:38:02.388712063+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.388831257+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-08-13T20:38:02.391967277+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.391967277+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for 
imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-08-13T20:38:02.399498344+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.399547966+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-08-13T20:38:02.403921842+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.403990024+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift" 2025-08-13T20:38:02.407048532+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.407236408+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-08-13T20:38:02.409688868+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.409733350+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-08-13T20:38:02.413591791+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-08-13T20:38:02.413642652+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-08-13T20:38:02.416944857+00:00 stderr F time="2025-08-13T20:38:02Z" 
level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-08-13T20:38:02.416994169+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-08-13T20:38:02.419545742+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-08-13T20:38:02.419632375+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-08-13T20:38:02.422408055+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.422457466+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-08-13T20:38:02.425163324+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.425163324+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-08-13T20:38:02.427469901+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.427516402+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-08-13T20:38:02.430865569+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-08-13T20:38:02.430920290+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There 
are no more errors or image imports in flight for imagestream nginx" 2025-08-13T20:38:02.436699087+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-08-13T20:38:02.436699087+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-08-13T20:38:02.439641982+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.439641982+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-08-13T20:38:02.442081142+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.442081142+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-08-13T20:38:02.446396566+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.446396566+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-08-13T20:38:02.455233461+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-08-13T20:38:02.455233461+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-08-13T20:38:02.458619829+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream 
fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.458619829+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-08-13T20:38:02.463981274+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.463981274+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 2025-08-13T20:38:02.469686288+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-08-13T20:38:02.469686288+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream python" 2025-08-13T20:38:02.475182526+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.475182526+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-08-13T20:38:02.479040148+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.479040148+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-08-13T20:38:02.489055846+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-08-13T20:38:02.489055846+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or 
image imports in flight for imagestream postgresql" 2025-08-13T20:38:02.491998151+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-08-13T20:38:02.492242748+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-08-13T20:38:02.496285355+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-08-13T20:38:02.496285355+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-08-13T20:38:02.499108776+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.499213759+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-08-13T20:38:02.502560806+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags" 2025-08-13T20:38:02.502649578+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-08-13T20:38:02.507321043+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-08-13T20:38:02.507537159+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-08-13T20:38:02.512683188+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on 
clearing tags" 2025-08-13T20:38:02.513183312+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-08-13T20:38:02.518338191+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-08-13T20:38:02.518338191+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-08-13T20:38:02.522257654+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-08-13T20:38:02.522257654+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-08-13T20:38:02.528893235+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags" 2025-08-13T20:38:02.528893235+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-08-13T20:38:02.533061065+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-08-13T20:38:02.533061065+00:00 stderr F time="2025-08-13T20:38:02Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-08-13T20:39:54.858926167+00:00 stderr F time="2025-08-13T20:39:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:39:54.858926167+00:00 stderr F time="2025-08-13T20:39:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 
2025-08-13T20:39:54.929035027+00:00 stderr F time="2025-08-13T20:39:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:39:54.929035027+00:00 stderr F time="2025-08-13T20:39:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:39:55.130956989+00:00 stderr F time="2025-08-13T20:39:55Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:39:55.130956989+00:00 stderr F time="2025-08-13T20:39:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:20.405267752+00:00 stderr F time="2025-08-13T20:42:20Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:20.405267752+00:00 stderr F time="2025-08-13T20:42:20Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:20.467397943+00:00 stderr F time="2025-08-13T20:42:20Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:20.467397943+00:00 stderr F time="2025-08-13T20:42:20Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:21.394447020+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:21.396885090+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:21.465459317+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:21.465459317+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="At steady 
state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:22.370705595+00:00 stderr F time="2025-08-13T20:42:22Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:22.371446596+00:00 stderr F time="2025-08-13T20:42:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:22.433963449+00:00 stderr F time="2025-08-13T20:42:22Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:22.433963449+00:00 stderr F time="2025-08-13T20:42:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:23.380587010+00:00 stderr F time="2025-08-13T20:42:23Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:23.380587010+00:00 stderr F time="2025-08-13T20:42:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:23.461112032+00:00 stderr F time="2025-08-13T20:42:23Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:23.461112032+00:00 stderr F time="2025-08-13T20:42:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:24.381655721+00:00 stderr F time="2025-08-13T20:42:24Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:24.381655721+00:00 stderr F time="2025-08-13T20:42:24Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:24.458359573+00:00 stderr F time="2025-08-13T20:42:24Z" level=info msg="no global imagestream configuration will block imagestream creation using " 
2025-08-13T20:42:24.458471076+00:00 stderr F time="2025-08-13T20:42:24Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:25.417401671+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:25.417401671+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:25.505537472+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:25.505537472+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:26.408426263+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:26.408524316+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:26.480985995+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:26.480985995+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:27.472023757+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:27.472023757+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:27.612048704+00:00 stderr F time="2025-08-13T20:42:27Z" 
level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:27.614752602+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:28.392512125+00:00 stderr F time="2025-08-13T20:42:28Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:28.392512125+00:00 stderr F time="2025-08-13T20:42:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:28.468054283+00:00 stderr F time="2025-08-13T20:42:28Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:28.468054283+00:00 stderr F time="2025-08-13T20:42:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:29.407675142+00:00 stderr F time="2025-08-13T20:42:29Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:29.407719013+00:00 stderr F time="2025-08-13T20:42:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:29.479289737+00:00 stderr F time="2025-08-13T20:42:29Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:29.479289737+00:00 stderr F time="2025-08-13T20:42:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:30.404129860+00:00 stderr F time="2025-08-13T20:42:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:30.404129860+00:00 stderr F time="2025-08-13T20:42:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version 
correct" 2025-08-13T20:42:30.506750169+00:00 stderr F time="2025-08-13T20:42:30Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:30.506750169+00:00 stderr F time="2025-08-13T20:42:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:31.393037481+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:31.393037481+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:31.456042957+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:31.456042957+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:32.390306402+00:00 stderr F time="2025-08-13T20:42:32Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:32.390306402+00:00 stderr F time="2025-08-13T20:42:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:32.467966580+00:00 stderr F time="2025-08-13T20:42:32Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:32.467966580+00:00 stderr F time="2025-08-13T20:42:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:33.431971123+00:00 stderr F time="2025-08-13T20:42:33Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:33.431971123+00:00 stderr F time="2025-08-13T20:42:33Z" level=info 
msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:33.545267030+00:00 stderr F time="2025-08-13T20:42:33Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:33.545267030+00:00 stderr F time="2025-08-13T20:42:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:34.405879691+00:00 stderr F time="2025-08-13T20:42:34Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:34.405879691+00:00 stderr F time="2025-08-13T20:42:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:34.501982352+00:00 stderr F time="2025-08-13T20:42:34Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:34.501982352+00:00 stderr F time="2025-08-13T20:42:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:35.404360718+00:00 stderr F time="2025-08-13T20:42:35Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:35.405447990+00:00 stderr F time="2025-08-13T20:42:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:35.490723988+00:00 stderr F time="2025-08-13T20:42:35Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-08-13T20:42:35.490723988+00:00 stderr F time="2025-08-13T20:42:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-08-13T20:42:41.328870733+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="shutting down events processor" 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/2.log 2025-12-13T00:13:16.321211691+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-12-13T00:13:16.321211691+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="Go OS/Arch: linux/amd64" 2025-12-13T00:13:16.384567640+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="template client &v1.TemplateV1Client{restClient:(*rest.RESTClient)(0xc0003a7220)}" 2025-12-13T00:13:16.384567640+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="image client &v1.ImageV1Client{restClient:(*rest.RESTClient)(0xc0003a72c0)}" 2025-12-13T00:13:16.728873980+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="waiting for informer caches to sync" 2025-12-13T00:13:16.744915998+00:00 stderr F W1213 00:13:16.744818 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-12-13T00:13:16.744915998+00:00 stderr F E1213 00:13:16.744891 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-12-13T00:13:16.793350056+00:00 stderr F W1213 00:13:16.791250 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list 
*v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-12-13T00:13:16.793350056+00:00 stderr F E1213 00:13:16.791454 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-12-13T00:13:18.162616546+00:00 stderr F W1213 00:13:18.160247 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-12-13T00:13:18.162616546+00:00 stderr F E1213 00:13:18.160531 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-12-13T00:13:18.376797793+00:00 stderr F W1213 00:13:18.375150 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-12-13T00:13:18.376797793+00:00 stderr F E1213 00:13:18.375185 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-12-13T00:13:19.876272989+00:00 stderr F W1213 00:13:19.875450 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-12-13T00:13:19.876272989+00:00 stderr F E1213 
00:13:19.875717 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-12-13T00:13:21.016424461+00:00 stderr F W1213 00:13:21.016135 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-12-13T00:13:21.016526174+00:00 stderr F E1213 00:13:21.016510 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-12-13T00:13:25.272992671+00:00 stderr F W1213 00:13:25.272589 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-12-13T00:13:25.272992671+00:00 stderr F E1213 00:13:25.272971 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io) 2025-12-13T00:13:31.929668051+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="started events processor" 2025-12-13T00:13:31.929668051+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags" 2025-12-13T00:13:31.929668051+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime" 
2025-12-13T00:13:31.933077865+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags" 2025-12-13T00:13:31.933077865+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11" 2025-12-13T00:13:31.935616310+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:31.935616310+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift" 2025-12-13T00:13:31.937417301+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:31.937417301+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift" 2025-12-13T00:13:31.939268183+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:31.939268183+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift" 2025-12-13T00:13:31.965015228+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:13:31.965060019+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:13:31.985131014+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no 
worries on clearing tags" 2025-12-13T00:13:31.985131014+00:00 stderr F time="2025-12-13T00:13:31Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet" 2025-12-13T00:13:32.040577337+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:32.040577337+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift" 2025-12-13T00:13:32.045642177+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags" 2025-12-13T00:13:32.045642177+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11" 2025-12-13T00:13:32.048780442+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags" 2025-12-13T00:13:32.048780442+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream golang" 2025-12-13T00:13:32.051313708+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags" 2025-12-13T00:13:32.051352979+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd" 2025-12-13T00:13:32.053496551+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:32.053544883+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream 
jboss-eap-xp3-openjdk11-openshift" 2025-12-13T00:13:32.055142497+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:32.055179688+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift" 2025-12-13T00:13:32.057177985+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:32.057250577+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift" 2025-12-13T00:13:32.059436371+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:32.059436371+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift" 2025-12-13T00:13:32.061395597+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags" 2025-12-13T00:13:32.061438788+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime" 2025-12-13T00:13:32.063072833+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags" 2025-12-13T00:13:32.063112584+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream java" 2025-12-13T00:13:32.064982827+00:00 stderr F time="2025-12-13T00:13:32Z" 
level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:32.064982827+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift" 2025-12-13T00:13:32.066712456+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:32.066712456+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift" 2025-12-13T00:13:32.068084511+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:32.068084511+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift" 2025-12-13T00:13:32.069645514+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:32.069645514+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift" 2025-12-13T00:13:32.071131194+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:32.071131194+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift" 2025-12-13T00:13:32.073089030+00:00 stderr F 
time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-12-13T00:13:32.073089030+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8" 2025-12-13T00:13:32.074964883+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags" 2025-12-13T00:13:32.074964883+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb" 2025-12-13T00:13:32.077119455+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags" 2025-12-13T00:13:32.077119455+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql" 2025-12-13T00:13:32.079726282+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream jenkins-agent-base already deleted so no worries on clearing tags" 2025-12-13T00:13:32.079726282+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base" 2025-12-13T00:13:32.081946877+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags" 2025-12-13T00:13:32.081946877+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx" 2025-12-13T00:13:32.084252625+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags" 2025-12-13T00:13:32.084252625+00:00 stderr F 
time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins" 2025-12-13T00:13:32.088092053+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags" 2025-12-13T00:13:32.088092053+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8" 2025-12-13T00:13:32.088092053+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags" 2025-12-13T00:13:32.088092053+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7" 2025-12-13T00:13:32.090563777+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags" 2025-12-13T00:13:32.090563777+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql" 2025-12-13T00:13:32.093208206+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-12-13T00:13:32.093208206+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8" 2025-12-13T00:13:32.095644287+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags" 2025-12-13T00:13:32.095644287+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for 
imagestream python" 2025-12-13T00:13:32.098079119+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags" 2025-12-13T00:13:32.098079119+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs" 2025-12-13T00:13:32.102834430+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags" 2025-12-13T00:13:32.102834430+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream php" 2025-12-13T00:13:32.104262097+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags" 2025-12-13T00:13:32.104262097+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream perl" 2025-12-13T00:13:32.105912473+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags" 2025-12-13T00:13:32.105912473+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby" 2025-12-13T00:13:32.107970881+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-12-13T00:13:32.107970881+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8" 2025-12-13T00:13:32.110275969+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags" 2025-12-13T00:13:32.110275969+00:00 stderr F 
time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8" 2025-12-13T00:13:32.112183203+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags" 2025-12-13T00:13:32.112183203+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift" 2025-12-13T00:13:32.113732125+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags" 2025-12-13T00:13:32.113732125+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11" 2025-12-13T00:13:32.115173734+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags" 2025-12-13T00:13:32.115173734+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream redis" 2025-12-13T00:13:32.116323052+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags" 2025-12-13T00:13:32.116323052+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8" 2025-12-13T00:13:32.118214476+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags" 2025-12-13T00:13:32.118214476+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime" 
2025-12-13T00:13:32.120192123+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags" 2025-12-13T00:13:32.120192123+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8" 2025-12-13T00:13:32.121651941+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags" 2025-12-13T00:13:32.121651941+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17" 2025-12-13T00:13:32.123015677+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags" 2025-12-13T00:13:32.123015677+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime" 2025-12-13T00:13:32.124268359+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags" 2025-12-13T00:13:32.124268359+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime" 2025-12-13T00:13:32.125624815+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags" 2025-12-13T00:13:32.125624815+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21" 2025-12-13T00:13:32.127387214+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no 
worries on clearing tags" 2025-12-13T00:13:32.127387214+00:00 stderr F time="2025-12-13T00:13:32Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime" 2025-12-13T00:13:35.051727838+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:13:35.051727838+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:14:54.457156245+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:14:54.457156245+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:17:51.185051290+00:00 stderr F time="2025-12-13T00:17:51Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:17:51.185051290+00:00 stderr F time="2025-12-13T00:17:51Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:17:51.232447001+00:00 stderr F time="2025-12-13T00:17:51Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:17:51.232447001+00:00 stderr F time="2025-12-13T00:17:51Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:17:53.501993664+00:00 stderr F time="2025-12-13T00:17:53Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:17:53.502053005+00:00 stderr F time="2025-12-13T00:17:53Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:17:53.711750391+00:00 stderr F 
time="2025-12-13T00:17:53Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:17:53.711750391+00:00 stderr F time="2025-12-13T00:17:53Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:18:08.961249678+00:00 stderr F time="2025-12-13T00:18:08Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:18:08.961249678+00:00 stderr F time="2025-12-13T00:18:08Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:18:09.283283241+00:00 stderr F time="2025-12-13T00:18:09Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:18:09.283283241+00:00 stderr F time="2025-12-13T00:18:09Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:18:11.554734785+00:00 stderr F time="2025-12-13T00:18:11Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:18:11.554734785+00:00 stderr F time="2025-12-13T00:18:11Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:18:32.339740873+00:00 stderr F time="2025-12-13T00:18:32Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:18:32.339740873+00:00 stderr F time="2025-12-13T00:18:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:18:52.846006513+00:00 stderr F time="2025-12-13T00:18:52Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:18:52.846006513+00:00 stderr F time="2025-12-13T00:18:52Z" level=info msg="At steady state: config the same and exists is true, in 
progress false, and version correct" 2025-12-13T00:19:00.830341779+00:00 stderr F time="2025-12-13T00:19:00Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:00.830341779+00:00 stderr F time="2025-12-13T00:19:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:06.065043144+00:00 stderr F time="2025-12-13T00:19:06Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:06.065043144+00:00 stderr F time="2025-12-13T00:19:06Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:13.749462809+00:00 stderr F time="2025-12-13T00:19:13Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:13.749462809+00:00 stderr F time="2025-12-13T00:19:13Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:14.734749698+00:00 stderr F time="2025-12-13T00:19:14Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:14.734749698+00:00 stderr F time="2025-12-13T00:19:14Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:21.069785105+00:00 stderr F time="2025-12-13T00:19:21Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:21.069819976+00:00 stderr F time="2025-12-13T00:19:21Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:22.062911420+00:00 stderr F time="2025-12-13T00:19:22Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:22.062911420+00:00 stderr F 
time="2025-12-13T00:19:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:22.111179151+00:00 stderr F time="2025-12-13T00:19:22Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:22.111179151+00:00 stderr F time="2025-12-13T00:19:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:23.060002455+00:00 stderr F time="2025-12-13T00:19:23Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:23.060002455+00:00 stderr F time="2025-12-13T00:19:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:23.102614620+00:00 stderr F time="2025-12-13T00:19:23Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:23.102614620+00:00 stderr F time="2025-12-13T00:19:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:24.061740810+00:00 stderr F time="2025-12-13T00:19:24Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:24.061740810+00:00 stderr F time="2025-12-13T00:19:24Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:24.145422127+00:00 stderr F time="2025-12-13T00:19:24Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:24.145422127+00:00 stderr F time="2025-12-13T00:19:24Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:25.063651797+00:00 stderr F time="2025-12-13T00:19:25Z" level=info msg="no global imagestream configuration 
will block imagestream creation using " 2025-12-13T00:19:25.063651797+00:00 stderr F time="2025-12-13T00:19:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:25.129316799+00:00 stderr F time="2025-12-13T00:19:25Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:25.129316799+00:00 stderr F time="2025-12-13T00:19:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:26.071181100+00:00 stderr F time="2025-12-13T00:19:26Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:26.071181100+00:00 stderr F time="2025-12-13T00:19:26Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:26.145746457+00:00 stderr F time="2025-12-13T00:19:26Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:26.145746457+00:00 stderr F time="2025-12-13T00:19:26Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:27.061766327+00:00 stderr F time="2025-12-13T00:19:27Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:27.061766327+00:00 stderr F time="2025-12-13T00:19:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:27.134306627+00:00 stderr F time="2025-12-13T00:19:27Z" level=info msg="no global imagestream configuration will block imagestream creation using " 2025-12-13T00:19:27.134306627+00:00 stderr F time="2025-12-13T00:19:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:27.887002982+00:00 stderr 
F time="2025-12-13T00:19:27Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:27.887084464+00:00 stderr F time="2025-12-13T00:19:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:28.067009016+00:00 stderr F time="2025-12-13T00:19:28Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:28.067009016+00:00 stderr F time="2025-12-13T00:19:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:28.139113304+00:00 stderr F time="2025-12-13T00:19:28Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:28.139320200+00:00 stderr F time="2025-12-13T00:19:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:29.064775610+00:00 stderr F time="2025-12-13T00:19:29Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:29.064775610+00:00 stderr F time="2025-12-13T00:19:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:29.110738477+00:00 stderr F time="2025-12-13T00:19:29Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:29.110738477+00:00 stderr F time="2025-12-13T00:19:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:30.062405870+00:00 stderr F time="2025-12-13T00:19:30Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:30.062405870+00:00 stderr F time="2025-12-13T00:19:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:30.107451842+00:00 stderr F time="2025-12-13T00:19:30Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:30.107451842+00:00 stderr F time="2025-12-13T00:19:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:31.068149264+00:00 stderr F time="2025-12-13T00:19:31Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:31.068149264+00:00 stderr F time="2025-12-13T00:19:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:31.116111196+00:00 stderr F time="2025-12-13T00:19:31Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:31.116111196+00:00 stderr F time="2025-12-13T00:19:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:32.062972006+00:00 stderr F time="2025-12-13T00:19:32Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:32.062972006+00:00 stderr F time="2025-12-13T00:19:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:32.109281122+00:00 stderr F time="2025-12-13T00:19:32Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:32.109281122+00:00 stderr F time="2025-12-13T00:19:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:32.572275420+00:00 stderr F time="2025-12-13T00:19:32Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:32.572275420+00:00 stderr F time="2025-12-13T00:19:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:33.060114742+00:00 stderr F time="2025-12-13T00:19:33Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:33.060114742+00:00 stderr F time="2025-12-13T00:19:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:33.111668573+00:00 stderr F time="2025-12-13T00:19:33Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:33.111668573+00:00 stderr F time="2025-12-13T00:19:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:34.065012571+00:00 stderr F time="2025-12-13T00:19:34Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:34.065012571+00:00 stderr F time="2025-12-13T00:19:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:34.121654513+00:00 stderr F time="2025-12-13T00:19:34Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:34.121724185+00:00 stderr F time="2025-12-13T00:19:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:35.062234839+00:00 stderr F time="2025-12-13T00:19:35Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:35.062234839+00:00 stderr F time="2025-12-13T00:19:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:35.122964563+00:00 stderr F time="2025-12-13T00:19:35Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:35.122964563+00:00 stderr F time="2025-12-13T00:19:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:36.062908842+00:00 stderr F time="2025-12-13T00:19:36Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:36.063008155+00:00 stderr F time="2025-12-13T00:19:36Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:36.104861599+00:00 stderr F time="2025-12-13T00:19:36Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:36.104861599+00:00 stderr F time="2025-12-13T00:19:36Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:37.063304137+00:00 stderr F time="2025-12-13T00:19:37Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:37.063304137+00:00 stderr F time="2025-12-13T00:19:37Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:37.111800404+00:00 stderr F time="2025-12-13T00:19:37Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:37.111800404+00:00 stderr F time="2025-12-13T00:19:37Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:37.179556452+00:00 stderr F time="2025-12-13T00:19:37Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:37.179556452+00:00 stderr F time="2025-12-13T00:19:37Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:37.232151343+00:00 stderr F time="2025-12-13T00:19:37Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:37.232234765+00:00 stderr F time="2025-12-13T00:19:37Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:38.061503812+00:00 stderr F time="2025-12-13T00:19:38Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:38.061503812+00:00 stderr F time="2025-12-13T00:19:38Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:38.119287725+00:00 stderr F time="2025-12-13T00:19:38Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:38.119287725+00:00 stderr F time="2025-12-13T00:19:38Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:39.066066442+00:00 stderr F time="2025-12-13T00:19:39Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:39.066066442+00:00 stderr F time="2025-12-13T00:19:39Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:39.108730708+00:00 stderr F time="2025-12-13T00:19:39Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:39.108730708+00:00 stderr F time="2025-12-13T00:19:39Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:40.062721255+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:40.062721255+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:40.114022559+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:40.114022559+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:41.062238976+00:00 stderr F time="2025-12-13T00:19:41Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:41.062238976+00:00 stderr F time="2025-12-13T00:19:41Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:41.113891791+00:00 stderr F time="2025-12-13T00:19:41Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:41.113970443+00:00 stderr F time="2025-12-13T00:19:41Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:42.068955816+00:00 stderr F time="2025-12-13T00:19:42Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:42.068955816+00:00 stderr F time="2025-12-13T00:19:42Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:42.115043957+00:00 stderr F time="2025-12-13T00:19:42Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:42.115043957+00:00 stderr F time="2025-12-13T00:19:42Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:43.063520861+00:00 stderr F time="2025-12-13T00:19:43Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:43.063520861+00:00 stderr F time="2025-12-13T00:19:43Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:43.108046189+00:00 stderr F time="2025-12-13T00:19:43Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:43.108046189+00:00 stderr F time="2025-12-13T00:19:43Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:44.057862859+00:00 stderr F time="2025-12-13T00:19:44Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:44.057862859+00:00 stderr F time="2025-12-13T00:19:44Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:44.115125758+00:00 stderr F time="2025-12-13T00:19:44Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:44.115125758+00:00 stderr F time="2025-12-13T00:19:44Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:45.067016716+00:00 stderr F time="2025-12-13T00:19:45Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:45.067016716+00:00 stderr F time="2025-12-13T00:19:45Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:45.112475349+00:00 stderr F time="2025-12-13T00:19:45Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:45.112475349+00:00 stderr F time="2025-12-13T00:19:45Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:46.061645283+00:00 stderr F time="2025-12-13T00:19:46Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:46.061786966+00:00 stderr F time="2025-12-13T00:19:46Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:46.110507840+00:00 stderr F time="2025-12-13T00:19:46Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:46.110507840+00:00 stderr F time="2025-12-13T00:19:46Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:47.068180657+00:00 stderr F time="2025-12-13T00:19:47Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:47.068180657+00:00 stderr F time="2025-12-13T00:19:47Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:47.114661619+00:00 stderr F time="2025-12-13T00:19:47Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-12-13T00:19:47.114661619+00:00 stderr F time="2025-12-13T00:19:47Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:47.795148213+00:00 stderr F time="2025-12-13T00:19:47Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:19:47.795148213+00:00 stderr F time="2025-12-13T00:19:47Z" level=info msg="SamplesRegistry changed from to registry.redhat.io"
2025-12-13T00:19:47.795192775+00:00 stderr F time="2025-12-13T00:19:47Z" level=info msg="ENTERING UPSERT / STEADY STATE PATH ExistTrue true ImageInProgressFalse true VersionOK true ConfigChanged true ManagementStateChanged true"
2025-12-13T00:19:48.005845463+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime"
2025-12-13T00:19:48.006723007+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream dotnet-runtime"
2025-12-13T00:19:48.018716728+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift"
2025-12-13T00:19:48.019229982+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream jboss-eap74-openjdk11-runtime-openshift"
2025-12-13T00:19:48.035113710+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream fuse7-karaf-openshift"
2025-12-13T00:19:48.035578903+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift"
2025-12-13T00:19:48.072200883+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream postgresql13-for-sso75-openshift-rhel8"
2025-12-13T00:19:48.072351477+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8"
2025-12-13T00:19:48.133422101+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8"
2025-12-13T00:19:48.133647928+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8"
2025-12-13T00:19:48.194539756+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd"
2025-12-13T00:19:48.194748702+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream httpd"
2025-12-13T00:19:48.253853202+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins-agent-base"
2025-12-13T00:19:48.254127840+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream jenkins-agent-base"
2025-12-13T00:19:48.316315814+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream perl"
2025-12-13T00:19:48.316573431+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream perl"
2025-12-13T00:19:48.379466125+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream dotnet"
2025-12-13T00:19:48.379874267+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet"
2025-12-13T00:19:48.434626826+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift"
2025-12-13T00:19:48.434660957+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream jboss-eap74-openjdk11-openshift"
2025-12-13T00:19:48.501115530+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7"
2025-12-13T00:19:48.501233503+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream openjdk-11-rhel7"
2025-12-13T00:19:48.551884220+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21"
2025-12-13T00:19:48.553018801+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream ubi8-openjdk-21"
2025-12-13T00:19:48.615568916+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream python"
2025-12-13T00:19:48.617847618+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream python"
2025-12-13T00:19:48.674501891+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream jboss-datagrid73-openshift"
2025-12-13T00:19:48.675128048+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift"
2025-12-13T00:19:48.732465939+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream golang"
2025-12-13T00:19:48.732990334+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream golang"
2025-12-13T00:19:48.793315637+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream ubi8-openjdk-17-runtime"
2025-12-13T00:19:48.793806711+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime"
2025-12-13T00:19:48.851504932+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime"
2025-12-13T00:19:48.851504932+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream ubi8-openjdk-21-runtime"
2025-12-13T00:19:48.913486461+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8"
2025-12-13T00:19:48.913526952+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream sso75-openshift-rhel8"
2025-12-13T00:19:48.974816862+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="updated imagestream fuse7-eap-openshift-java11"
2025-12-13T00:19:48.977337492+00:00 stderr F time="2025-12-13T00:19:48Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11"
2025-12-13T00:19:49.008056239+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags"
2025-12-13T00:19:49.008056239+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21"
2025-12-13T00:19:49.031422903+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream jenkins"
2025-12-13T00:19:49.046788647+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins"
2025-12-13T00:19:49.111533952+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx"
2025-12-13T00:19:49.111533952+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream nginx"
2025-12-13T00:19:49.143987137+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags"
2025-12-13T00:19:49.143987137+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime"
2025-12-13T00:19:49.195548909+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream php"
2025-12-13T00:19:49.195842027+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream php"
2025-12-13T00:19:49.252542010+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream fuse7-karaf-openshift-jdk11"
2025-12-13T00:19:49.252542010+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11"
2025-12-13T00:19:49.312595806+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql"
2025-12-13T00:19:49.312991688+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream mysql"
2025-12-13T00:19:49.371028037+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift"
2025-12-13T00:19:49.371287624+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream jboss-eap-xp3-openjdk11-runtime-openshift"
2025-12-13T00:19:49.431598018+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream jboss-eap-xp4-openjdk11-runtime-openshift"
2025-12-13T00:19:49.432155743+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift"
2025-12-13T00:19:49.478246344+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags"
2025-12-13T00:19:49.478246344+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime"
2025-12-13T00:19:49.494343437+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream fuse7-java11-openshift"
2025-12-13T00:19:49.505961088+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift"
2025-12-13T00:19:49.572808791+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream ubi8-openjdk-8"
2025-12-13T00:19:49.573647385+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8"
2025-12-13T00:19:49.633627778+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream ruby"
2025-12-13T00:19:49.633627778+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby"
2025-12-13T00:19:49.690310441+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8"
2025-12-13T00:19:49.690876866+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream postgresql13-for-sso76-openshift-rhel8"
2025-12-13T00:19:49.753533405+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8"
2025-12-13T00:19:49.753860494+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8"
2025-12-13T00:19:49.813402895+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream ubi8-openjdk-11"
2025-12-13T00:19:49.814365022+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11"
2025-12-13T00:19:49.873755270+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17"
2025-12-13T00:19:49.873755270+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream ubi8-openjdk-17"
2025-12-13T00:19:49.934496194+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="updated imagestream nodejs"
2025-12-13T00:19:49.934825984+00:00 stderr F time="2025-12-13T00:19:49Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs"
2025-12-13T00:19:50.007537089+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift"
2025-12-13T00:19:50.008232458+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream jboss-eap-xp3-openjdk11-openshift"
2025-12-13T00:19:50.057050874+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift"
2025-12-13T00:19:50.059604594+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream fuse7-eap-openshift"
2025-12-13T00:19:50.118588971+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift"
2025-12-13T00:19:50.119106105+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream redhat-openjdk18-openshift"
2025-12-13T00:19:50.176985751+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream ubi8-openjdk-11-runtime"
2025-12-13T00:19:50.178922275+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime"
2025-12-13T00:19:50.235828724+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream mariadb"
2025-12-13T00:19:50.235828724+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb"
2025-12-13T00:19:50.293585397+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream redis"
2025-12-13T00:19:50.293585397+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream redis"
2025-12-13T00:19:50.353587181+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8"
2025-12-13T00:19:50.354139416+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream sso76-openshift-rhel8"
2025-12-13T00:19:50.414516281+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream jboss-eap-xp4-openjdk11-openshift"
2025-12-13T00:19:50.414619954+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift"
2025-12-13T00:19:50.471705158+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift"
2025-12-13T00:19:50.472403107+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream jboss-eap74-openjdk8-runtime-openshift"
2025-12-13T00:19:50.500505412+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags"
2025-12-13T00:19:50.500505412+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet"
2025-12-13T00:19:50.555431407+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime"
2025-12-13T00:19:50.555482598+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream java-runtime"
2025-12-13T00:19:50.617807126+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream ubi8-openjdk-8-runtime"
2025-12-13T00:19:50.618983349+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime"
2025-12-13T00:19:50.673013089+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream jboss-eap74-openjdk8-openshift"
2025-12-13T00:19:50.673013089+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift"
2025-12-13T00:19:50.733240440+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream fuse7-java-openshift"
2025-12-13T00:19:50.736422058+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift"
2025-12-13T00:19:50.796920725+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream java"
2025-12-13T00:19:50.797713928+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream java"
2025-12-13T00:19:50.857650240+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="updated imagestream postgresql"
2025-12-13T00:19:50.857650240+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="CRDUPDATE samples upserted; set clusteroperator ready, steady state"
2025-12-13T00:19:50.858100413+00:00 stderr F time="2025-12-13T00:19:50Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql"
2025-12-13T00:19:52.360477189+00:00 stderr F time="2025-12-13T00:19:52Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags"
2025-12-13T00:19:52.360477189+00:00 stderr F time="2025-12-13T00:19:52Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime"
2025-12-13T00:19:52.598468481+00:00 stderr F time="2025-12-13T00:19:52Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags"
2025-12-13T00:19:52.598468481+00:00 stderr F time="2025-12-13T00:19:52Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime"
2025-12-13T00:19:53.415817750+00:00 stderr F time="2025-12-13T00:19:53Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags"
2025-12-13T00:19:53.415817750+00:00 stderr F time="2025-12-13T00:19:53Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7"
2025-12-13T00:19:53.846135966+00:00 stderr F time="2025-12-13T00:19:53Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags"
2025-12-13T00:19:53.846135966+00:00 stderr F time="2025-12-13T00:19:53Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17"
2025-12-13T00:19:53.872565065+00:00 stderr F time="2025-12-13T00:19:53Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:19:53.872565065+00:00 stderr F time="2025-12-13T00:19:53Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:54.058021088+00:00 stderr F time="2025-12-13T00:19:54Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:19:54.058021088+00:00 stderr F time="2025-12-13T00:19:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:54.116211123+00:00 stderr F time="2025-12-13T00:19:54Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:19:54.116211123+00:00 stderr F time="2025-12-13T00:19:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:54.384601464+00:00 stderr F time="2025-12-13T00:19:54Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:19:54.384636015+00:00 stderr F time="2025-12-13T00:19:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:54.505710974+00:00 stderr F time="2025-12-13T00:19:54Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags"
2025-12-13T00:19:54.505710974+00:00 stderr F time="2025-12-13T00:19:54Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime"
2025-12-13T00:19:54.986736608+00:00 stderr F time="2025-12-13T00:19:54Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags"
2025-12-13T00:19:54.986736608+00:00 stderr F time="2025-12-13T00:19:54Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8"
2025-12-13T00:19:55.068748759+00:00 stderr F time="2025-12-13T00:19:55Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:19:55.068748759+00:00 stderr F time="2025-12-13T00:19:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:55.114229163+00:00 stderr F time="2025-12-13T00:19:55Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:19:55.114307255+00:00 stderr F time="2025-12-13T00:19:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:55.259121928+00:00 stderr F time="2025-12-13T00:19:55Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags"
2025-12-13T00:19:55.259121928+00:00 stderr F time="2025-12-13T00:19:55Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime"
2025-12-13T00:19:55.324855281+00:00 stderr F time="2025-12-13T00:19:55Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags"
2025-12-13T00:19:55.324957864+00:00 stderr F time="2025-12-13T00:19:55Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11"
2025-12-13T00:19:55.712754637+00:00 stderr F time="2025-12-13T00:19:55Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags"
2025-12-13T00:19:55.712838269+00:00 stderr F time="2025-12-13T00:19:55Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift"
2025-12-13T00:19:56.064625490+00:00 stderr F time="2025-12-13T00:19:56Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:19:56.064731183+00:00 stderr F time="2025-12-13T00:19:56Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:56.111722649+00:00 stderr F time="2025-12-13T00:19:56Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:19:56.111722649+00:00 stderr F time="2025-12-13T00:19:56Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:57.058774353+00:00 stderr F time="2025-12-13T00:19:57Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:19:57.058774353+00:00 stderr F time="2025-12-13T00:19:57Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:57.114530410+00:00 stderr F time="2025-12-13T00:19:57Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:19:57.114530410+00:00 stderr F time="2025-12-13T00:19:57Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:19:58.068215590+00:00 stderr F time="2025-12-13T00:19:58Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:19:58.068265491+00:00 stderr F time="2025-12-13T00:19:58Z" level=info msg="At steady state: config the same and exists is true, in
progress false, and version correct" 2025-12-13T00:19:58.130537058+00:00 stderr F time="2025-12-13T00:19:58Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:19:58.130537058+00:00 stderr F time="2025-12-13T00:19:58Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:59.059176866+00:00 stderr F time="2025-12-13T00:19:59Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:19:59.059176866+00:00 stderr F time="2025-12-13T00:19:59Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:19:59.102115550+00:00 stderr F time="2025-12-13T00:19:59Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:19:59.102115550+00:00 stderr F time="2025-12-13T00:19:59Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:00.066339668+00:00 stderr F time="2025-12-13T00:20:00Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:00.066455811+00:00 stderr F time="2025-12-13T00:20:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:00.138746555+00:00 stderr F time="2025-12-13T00:20:00Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:00.138746555+00:00 stderr F time="2025-12-13T00:20:00Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:01.063111194+00:00 stderr F time="2025-12-13T00:20:01Z" level=info msg="no global imagestream configuration will block 
imagestream creation using registry.redhat.io" 2025-12-13T00:20:01.063111194+00:00 stderr F time="2025-12-13T00:20:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:01.119374465+00:00 stderr F time="2025-12-13T00:20:01Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:01.119374465+00:00 stderr F time="2025-12-13T00:20:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:01.502567252+00:00 stderr F time="2025-12-13T00:20:01Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:01.502567252+00:00 stderr F time="2025-12-13T00:20:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:01.578126425+00:00 stderr F time="2025-12-13T00:20:01Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:01.578126425+00:00 stderr F time="2025-12-13T00:20:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:01.645751570+00:00 stderr F time="2025-12-13T00:20:01Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:01.645751570+00:00 stderr F time="2025-12-13T00:20:01Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:01.763418254+00:00 stderr F time="2025-12-13T00:20:01Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:01.763418254+00:00 stderr F time="2025-12-13T00:20:01Z" level=info msg="At steady state: config the same and 
exists is true, in progress false, and version correct" 2025-12-13T00:20:02.059752316+00:00 stderr F time="2025-12-13T00:20:02Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:02.059752316+00:00 stderr F time="2025-12-13T00:20:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:02.104828099+00:00 stderr F time="2025-12-13T00:20:02Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:02.105408494+00:00 stderr F time="2025-12-13T00:20:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:02.493048633+00:00 stderr F time="2025-12-13T00:20:02Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:02.493048633+00:00 stderr F time="2025-12-13T00:20:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:02.880004844+00:00 stderr F time="2025-12-13T00:20:02Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:02.880004844+00:00 stderr F time="2025-12-13T00:20:02Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:03.066342022+00:00 stderr F time="2025-12-13T00:20:03Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:03.066342022+00:00 stderr F time="2025-12-13T00:20:03Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:03.119220910+00:00 stderr F time="2025-12-13T00:20:03Z" level=info msg="no global imagestream 
configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:03.119220910+00:00 stderr F time="2025-12-13T00:20:03Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:03.686449821+00:00 stderr F time="2025-12-13T00:20:03Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:03.686449821+00:00 stderr F time="2025-12-13T00:20:03Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:04.064391893+00:00 stderr F time="2025-12-13T00:20:04Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:04.064391893+00:00 stderr F time="2025-12-13T00:20:04Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:04.118498165+00:00 stderr F time="2025-12-13T00:20:04Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:04.118589838+00:00 stderr F time="2025-12-13T00:20:04Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:05.059423931+00:00 stderr F time="2025-12-13T00:20:05Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:05.059423931+00:00 stderr F time="2025-12-13T00:20:05Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:05.111257060+00:00 stderr F time="2025-12-13T00:20:05Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:05.111257060+00:00 stderr F time="2025-12-13T00:20:05Z" level=info msg="At steady 
state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:06.072808004+00:00 stderr F time="2025-12-13T00:20:06Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:06.072808004+00:00 stderr F time="2025-12-13T00:20:06Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:06.141143519+00:00 stderr F time="2025-12-13T00:20:06Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:06.141143519+00:00 stderr F time="2025-12-13T00:20:06Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:07.063733569+00:00 stderr F time="2025-12-13T00:20:07Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:07.063828742+00:00 stderr F time="2025-12-13T00:20:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:07.124273869+00:00 stderr F time="2025-12-13T00:20:07Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:07.124397632+00:00 stderr F time="2025-12-13T00:20:07Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:08.075358874+00:00 stderr F time="2025-12-13T00:20:08Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:08.075358874+00:00 stderr F time="2025-12-13T00:20:08Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:08.128384056+00:00 stderr F time="2025-12-13T00:20:08Z" level=info msg="no 
global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:08.128384056+00:00 stderr F time="2025-12-13T00:20:08Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:09.060509110+00:00 stderr F time="2025-12-13T00:20:09Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:09.060576002+00:00 stderr F time="2025-12-13T00:20:09Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:09.103053283+00:00 stderr F time="2025-12-13T00:20:09Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:09.103053283+00:00 stderr F time="2025-12-13T00:20:09Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:10.069124422+00:00 stderr F time="2025-12-13T00:20:10Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:10.069124422+00:00 stderr F time="2025-12-13T00:20:10Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:10.129086865+00:00 stderr F time="2025-12-13T00:20:10Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:10.129086865+00:00 stderr F time="2025-12-13T00:20:10Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:11.066451683+00:00 stderr F time="2025-12-13T00:20:11Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:11.066451683+00:00 stderr F time="2025-12-13T00:20:11Z" level=info 
msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:11.113967853+00:00 stderr F time="2025-12-13T00:20:11Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:11.113967853+00:00 stderr F time="2025-12-13T00:20:11Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:12.068419932+00:00 stderr F time="2025-12-13T00:20:12Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:12.068419932+00:00 stderr F time="2025-12-13T00:20:12Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:12.127029438+00:00 stderr F time="2025-12-13T00:20:12Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:12.127029438+00:00 stderr F time="2025-12-13T00:20:12Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:13.062311497+00:00 stderr F time="2025-12-13T00:20:13Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:13.062311497+00:00 stderr F time="2025-12-13T00:20:13Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:13.102844216+00:00 stderr F time="2025-12-13T00:20:13Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:13.102903477+00:00 stderr F time="2025-12-13T00:20:13Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:14.063972278+00:00 stderr F time="2025-12-13T00:20:14Z" 
level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:14.064024999+00:00 stderr F time="2025-12-13T00:20:14Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:14.152488988+00:00 stderr F time="2025-12-13T00:20:14Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:14.152488988+00:00 stderr F time="2025-12-13T00:20:14Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:15.066776840+00:00 stderr F time="2025-12-13T00:20:15Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:15.066776840+00:00 stderr F time="2025-12-13T00:20:15Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:15.133454729+00:00 stderr F time="2025-12-13T00:20:15Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:15.133454729+00:00 stderr F time="2025-12-13T00:20:15Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:16.064593174+00:00 stderr F time="2025-12-13T00:20:16Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:16.064593174+00:00 stderr F time="2025-12-13T00:20:16Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:16.133646049+00:00 stderr F time="2025-12-13T00:20:16Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:16.133646049+00:00 stderr F 
time="2025-12-13T00:20:16Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:17.063306994+00:00 stderr F time="2025-12-13T00:20:17Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:17.063306994+00:00 stderr F time="2025-12-13T00:20:17Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:17.106312329+00:00 stderr F time="2025-12-13T00:20:17Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:17.106312329+00:00 stderr F time="2025-12-13T00:20:17Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:18.063094451+00:00 stderr F time="2025-12-13T00:20:18Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:18.063094451+00:00 stderr F time="2025-12-13T00:20:18Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:18.120987928+00:00 stderr F time="2025-12-13T00:20:18Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:18.120987928+00:00 stderr F time="2025-12-13T00:20:18Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:19.079177010+00:00 stderr F time="2025-12-13T00:20:19Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:19.079177010+00:00 stderr F time="2025-12-13T00:20:19Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:19.116654723+00:00 
stderr F time="2025-12-13T00:20:19Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:19.116654723+00:00 stderr F time="2025-12-13T00:20:19Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:20.067524924+00:00 stderr F time="2025-12-13T00:20:20Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:20.067524924+00:00 stderr F time="2025-12-13T00:20:20Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:20.117104691+00:00 stderr F time="2025-12-13T00:20:20Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:20.117104691+00:00 stderr F time="2025-12-13T00:20:20Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:21.061561744+00:00 stderr F time="2025-12-13T00:20:21Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:21.061561744+00:00 stderr F time="2025-12-13T00:20:21Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:21.122439942+00:00 stderr F time="2025-12-13T00:20:21Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:21.122439942+00:00 stderr F time="2025-12-13T00:20:21Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:22.060882210+00:00 stderr F time="2025-12-13T00:20:22Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 
2025-12-13T00:20:22.060882210+00:00 stderr F time="2025-12-13T00:20:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:22.107275999+00:00 stderr F time="2025-12-13T00:20:22Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:22.107275999+00:00 stderr F time="2025-12-13T00:20:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:23.063809535+00:00 stderr F time="2025-12-13T00:20:23Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:23.063809535+00:00 stderr F time="2025-12-13T00:20:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:23.103702495+00:00 stderr F time="2025-12-13T00:20:23Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:23.103702495+00:00 stderr F time="2025-12-13T00:20:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:24.062331929+00:00 stderr F time="2025-12-13T00:20:24Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:24.062331929+00:00 stderr F time="2025-12-13T00:20:24Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:24.109892590+00:00 stderr F time="2025-12-13T00:20:24Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:24.109892590+00:00 stderr F time="2025-12-13T00:20:24Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version 
correct" 2025-12-13T00:20:25.060170184+00:00 stderr F time="2025-12-13T00:20:25Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:25.060170184+00:00 stderr F time="2025-12-13T00:20:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:25.100608919+00:00 stderr F time="2025-12-13T00:20:25Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:25.100608919+00:00 stderr F time="2025-12-13T00:20:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:26.070059572+00:00 stderr F time="2025-12-13T00:20:26Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:26.070059572+00:00 stderr F time="2025-12-13T00:20:26Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:26.118167458+00:00 stderr F time="2025-12-13T00:20:26Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:26.118167458+00:00 stderr F time="2025-12-13T00:20:26Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:27.063161475+00:00 stderr F time="2025-12-13T00:20:27Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:27.063161475+00:00 stderr F time="2025-12-13T00:20:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:27.109267397+00:00 stderr F time="2025-12-13T00:20:27Z" level=info msg="no global imagestream configuration will block imagestream creation using 
registry.redhat.io" 2025-12-13T00:20:27.109267397+00:00 stderr F time="2025-12-13T00:20:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:28.064354634+00:00 stderr F time="2025-12-13T00:20:28Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:28.064354634+00:00 stderr F time="2025-12-13T00:20:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:28.134008383+00:00 stderr F time="2025-12-13T00:20:28Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:28.134008383+00:00 stderr F time="2025-12-13T00:20:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:29.064633066+00:00 stderr F time="2025-12-13T00:20:29Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:29.064633066+00:00 stderr F time="2025-12-13T00:20:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:29.111158238+00:00 stderr F time="2025-12-13T00:20:29Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:29.111253831+00:00 stderr F time="2025-12-13T00:20:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:30.074767249+00:00 stderr F time="2025-12-13T00:20:30Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:30.074767249+00:00 stderr F time="2025-12-13T00:20:30Z" level=info msg="At steady state: config the same and exists is true, in progress 
false, and version correct" 2025-12-13T00:20:30.118082824+00:00 stderr F time="2025-12-13T00:20:30Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:30.118082824+00:00 stderr F time="2025-12-13T00:20:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:30.485043013+00:00 stderr F time="2025-12-13T00:20:30Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:30.485043013+00:00 stderr F time="2025-12-13T00:20:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:30.534726052+00:00 stderr F time="2025-12-13T00:20:30Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:30.534726052+00:00 stderr F time="2025-12-13T00:20:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:31.061379074+00:00 stderr F time="2025-12-13T00:20:31Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:31.061482657+00:00 stderr F time="2025-12-13T00:20:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:31.108803182+00:00 stderr F time="2025-12-13T00:20:31Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io" 2025-12-13T00:20:31.108803182+00:00 stderr F time="2025-12-13T00:20:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct" 2025-12-13T00:20:32.069440251+00:00 stderr F time="2025-12-13T00:20:32Z" level=info msg="no global imagestream configuration will block 
imagestream creation using registry.redhat.io"
2025-12-13T00:20:32.069440251+00:00 stderr F time="2025-12-13T00:20:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:20:32.120520219+00:00 stderr F time="2025-12-13T00:20:32Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:20:32.120520219+00:00 stderr F time="2025-12-13T00:20:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:20:32.461520983+00:00 stderr F time="2025-12-13T00:20:32Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:20:32.461520983+00:00 stderr F time="2025-12-13T00:20:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:20:32.503700826+00:00 stderr F time="2025-12-13T00:20:32Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:20:32.503700826+00:00 stderr F time="2025-12-13T00:20:32Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:20:33.071453666+00:00 stderr F time="2025-12-13T00:20:33Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:20:33.071453666+00:00 stderr F time="2025-12-13T00:20:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:20:33.131649890+00:00 stderr F time="2025-12-13T00:20:33Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:20:33.131649890+00:00 stderr F time="2025-12-13T00:20:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:20:34.072716714+00:00 stderr F time="2025-12-13T00:20:34Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:20:34.072716714+00:00 stderr F time="2025-12-13T00:20:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:20:34.152799505+00:00 stderr F time="2025-12-13T00:20:34Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:20:34.152799505+00:00 stderr F time="2025-12-13T00:20:34Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:20:35.062464844+00:00 stderr F time="2025-12-13T00:20:35Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:20:35.062464844+00:00 stderr F time="2025-12-13T00:20:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:20:35.114879978+00:00 stderr F time="2025-12-13T00:20:35Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:20:35.114879978+00:00 stderr F time="2025-12-13T00:20:35Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:20:36.068199725+00:00 stderr F time="2025-12-13T00:20:36Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:20:36.068199725+00:00 stderr F time="2025-12-13T00:20:36Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:20:36.148016748+00:00 stderr F time="2025-12-13T00:20:36Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:20:36.148016748+00:00 stderr F time="2025-12-13T00:20:36Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:51.891890162+00:00 stderr F time="2025-12-13T00:21:51Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:51.891890162+00:00 stderr F time="2025-12-13T00:21:51Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:51.941725901+00:00 stderr F time="2025-12-13T00:21:51Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:51.941725901+00:00 stderr F time="2025-12-13T00:21:51Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:52.059887345+00:00 stderr F time="2025-12-13T00:21:52Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:52.059887345+00:00 stderr F time="2025-12-13T00:21:52Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:52.112020962+00:00 stderr F time="2025-12-13T00:21:52Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:52.112094353+00:00 stderr F time="2025-12-13T00:21:52Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:53.060738201+00:00 stderr F time="2025-12-13T00:21:53Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:53.060738201+00:00 stderr F time="2025-12-13T00:21:53Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:53.108234682+00:00 stderr F time="2025-12-13T00:21:53Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:53.108234682+00:00 stderr F time="2025-12-13T00:21:53Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:54.058715197+00:00 stderr F time="2025-12-13T00:21:54Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:54.058715197+00:00 stderr F time="2025-12-13T00:21:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:54.102522347+00:00 stderr F time="2025-12-13T00:21:54Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:54.102522347+00:00 stderr F time="2025-12-13T00:21:54Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:55.056710581+00:00 stderr F time="2025-12-13T00:21:55Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:55.056828444+00:00 stderr F time="2025-12-13T00:21:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:55.143490702+00:00 stderr F time="2025-12-13T00:21:55Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:55.143490702+00:00 stderr F time="2025-12-13T00:21:55Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:56.061741531+00:00 stderr F time="2025-12-13T00:21:56Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:56.061741531+00:00 stderr F time="2025-12-13T00:21:56Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:56.575999254+00:00 stderr F time="2025-12-13T00:21:56Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:56.575999254+00:00 stderr F time="2025-12-13T00:21:56Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:57.057305816+00:00 stderr F time="2025-12-13T00:21:57Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:57.057305816+00:00 stderr F time="2025-12-13T00:21:57Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-12-13T00:21:57.107707749+00:00 stderr F time="2025-12-13T00:21:57Z" level=info msg="no global imagestream configuration will block imagestream creation using registry.redhat.io"
2025-12-13T00:21:57.107707749+00:00 stderr F time="2025-12-13T00:21:57Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
././@LongLink0000644000000000000000000000033600000000000011605 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator/0.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samp0000644000175000017500000007432515117130646033110 0ustar zuulzuul
2025-08-13T19:59:23.745759522+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime"
2025-08-13T19:59:23.745759522+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="Go OS/Arch: linux/amd64"
2025-08-13T19:59:25.650871697+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="template client &v1.TemplateV1Client{restClient:(*rest.RESTClient)(0xc0005c8f00)}"
2025-08-13T19:59:25.650871697+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="image client &v1.ImageV1Client{restClient:(*rest.RESTClient)(0xc0005c8fa0)}"
2025-08-13T19:59:31.855068557+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="waiting for informer caches to sync"
2025-08-13T19:59:34.408955148+00:00 stderr F W0813 19:59:34.375704 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-08-13T19:59:34.408955148+00:00 stderr F E0813 19:59:34.378903 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-08-13T19:59:34.408955148+00:00 stderr F W0813 19:59:32.402524 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2025-08-13T19:59:34.408955148+00:00 stderr F E0813 19:59:34.378961 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2025-08-13T19:59:35.874613566+00:00 stderr F W0813 19:59:35.873930 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-08-13T19:59:35.874724959+00:00 stderr F E0813 19:59:35.874709 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-08-13T19:59:35.874972206+00:00 stderr F W0813 19:59:35.874946 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2025-08-13T19:59:35.875028847+00:00 stderr F E0813 19:59:35.875010 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2025-08-13T19:59:38.636207756+00:00 stderr F W0813 19:59:38.635498 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-08-13T19:59:38.636318039+00:00 stderr F E0813 19:59:38.636299 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-08-13T19:59:38.659493230+00:00 stderr F W0813 19:59:38.659430 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2025-08-13T19:59:38.659730837+00:00 stderr F E0813 19:59:38.659714 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2025-08-13T19:59:44.388028852+00:00 stderr F W0813 19:59:44.386596 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2025-08-13T19:59:44.388028852+00:00 stderr F W0813 19:59:44.387299 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-08-13T19:59:44.388028852+00:00 stderr F E0813 19:59:44.387307 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2025-08-13T19:59:44.388028852+00:00 stderr F E0813 19:59:44.387323 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-08-13T19:59:56.216972453+00:00 stderr F W0813 19:59:56.214886 7 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-08-13T19:59:56.216972453+00:00 stderr F E0813 19:59:56.215543 7 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-08-13T19:59:56.216972453+00:00 stderr F W0813 19:59:56.215600 7 reflector.go:539] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2025-08-13T19:59:56.216972453+00:00 stderr F E0813 19:59:56.215622 7 reflector.go:147] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: failed to list *v1.Template: the server is currently unable to handle the request (get templates.template.openshift.io)
2025-08-13T20:00:20.485015857+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="started events processor"
2025-08-13T20:00:20.695061156+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="clearImageStreamTagError: stream dotnet-runtime already deleted so no worries on clearing tags"
2025-08-13T20:00:20.695061156+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet-runtime"
2025-08-13T20:00:20.975764440+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift-java11 already deleted so no worries on clearing tags"
2025-08-13T20:00:20.975764440+00:00 stderr F time="2025-08-13T20:00:20Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift-java11"
2025-08-13T20:00:21.317402651+00:00 stderr F time="2025-08-13T20:00:21Z" level=info msg="clearImageStreamTagError: stream fuse7-eap-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:21.317402651+00:00 stderr F time="2025-08-13T20:00:21Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-eap-openshift"
2025-08-13T20:00:22.029163475+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:22.029163475+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift"
2025-08-13T20:00:22.088482377+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:22.121173649+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:22.363082328+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="clearImageStreamTagError: stream fuse7-java11-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:22.363082328+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java11-openshift"
2025-08-13T20:00:23.156969546+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="clearImageStreamTagError: stream fuse7-java-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:23.156969546+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-java-openshift"
2025-08-13T20:00:23.197957675+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="clearImageStreamTagError: stream dotnet already deleted so no worries on clearing tags"
2025-08-13T20:00:23.197957675+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="There are no more errors or image imports in flight for imagestream dotnet"
2025-08-13T20:00:23.444171106+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:23.514211193+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="clearImageStreamTagError: stream fuse7-karaf-openshift-jdk11 already deleted so no worries on clearing tags"
2025-08-13T20:00:23.514211193+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="There are no more errors or image imports in flight for imagestream fuse7-karaf-openshift-jdk11"
2025-08-13T20:00:23.514211193+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:23.795935686+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="clearImageStreamTagError: stream java already deleted so no worries on clearing tags"
2025-08-13T20:00:23.795935686+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="There are no more errors or image imports in flight for imagestream java"
2025-08-13T20:00:24.208376076+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="clearImageStreamTagError: stream golang already deleted so no worries on clearing tags"
2025-08-13T20:00:24.208454759+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="There are no more errors or image imports in flight for imagestream golang"
2025-08-13T20:00:24.576855803+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="clearImageStreamTagError: stream httpd already deleted so no worries on clearing tags"
2025-08-13T20:00:24.576855803+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="There are no more errors or image imports in flight for imagestream httpd"
2025-08-13T20:00:24.752157662+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="clearImageStreamTagError: stream java-runtime already deleted so no worries on clearing tags"
2025-08-13T20:00:24.752157662+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="There are no more errors or image imports in flight for imagestream java-runtime"
2025-08-13T20:00:24.953569996+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:24.953569996+00:00 stderr F time="2025-08-13T20:00:24Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-openshift"
2025-08-13T20:00:25.068954766+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp4-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:25.076592703+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp4-openjdk11-runtime-openshift"
2025-08-13T20:00:25.102950565+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:25.102950565+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:25.102950565+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-runtime-openshift"
2025-08-13T20:00:25.222950897+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-datagrid73-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:25.222950897+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-datagrid73-openshift"
2025-08-13T20:00:25.293323493+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:25.293323493+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-openshift"
2025-08-13T20:00:25.355468586+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="clearImageStreamTagError: stream jboss-eap-xp3-openjdk11-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:25.355468586+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap-xp3-openjdk11-runtime-openshift"
2025-08-13T20:00:25.373568142+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:26.063921706+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:26.092546152+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags"
2025-08-13T20:00:26.092546152+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk8-tomcat9-openshift-ubi8"
2025-08-13T20:00:26.098371999+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk11-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:26.098420160+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk11-openshift"
2025-08-13T20:00:26.104509704+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:26.104509704+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-openshift"
2025-08-13T20:00:26.108077015+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:26.116665780+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream mariadb already deleted so no worries on clearing tags"
2025-08-13T20:00:26.116665780+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream mariadb"
2025-08-13T20:00:26.132354727+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-eap74-openjdk8-runtime-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:26.132354727+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-eap74-openjdk8-runtime-openshift"
2025-08-13T20:00:26.137939257+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8 already deleted so no worries on clearing tags"
2025-08-13T20:00:26.140329065+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream jboss-webserver57-openjdk11-tomcat9-openshift-ubi8"
2025-08-13T20:00:26.989988312+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="clearImageStreamTagError: stream nginx already deleted so no worries on clearing tags"
2025-08-13T20:00:26.990079965+00:00 stderr F time="2025-08-13T20:00:26Z" level=info msg="There are no more errors or image imports in flight for imagestream nginx"
2025-08-13T20:00:27.020001668+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream jenkins already deleted so no worries on clearing tags"
2025-08-13T20:00:27.020001668+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream jenkins"
2025-08-13T20:00:27.047936015+00:00 stderr F time="2025-08-13T20:00:27Z" level=warning msg="Image import for imagestream jenkins-agent-base tag scheduled-upgrade generation 3 failed with detailed message Internal error occurred: registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:v4.13.0: Get \"https://registry.redhat.io/v2/ocp-tools-4/jenkins-agent-base-rhel8/manifests/v4.13.0\": unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication"
2025-08-13T20:00:27.227916267+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:27.457083101+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="initiated an imagestreamimport retry for imagestream/tag jenkins-agent-base/scheduled-upgrade"
2025-08-13T20:00:27.488319112+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream mysql already deleted so no worries on clearing tags"
2025-08-13T20:00:27.488528778+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream mysql"
2025-08-13T20:00:27.517631908+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream openjdk-11-rhel7 already deleted so no worries on clearing tags"
2025-08-13T20:00:27.535678842+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:27.535678842+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream openjdk-11-rhel7"
2025-08-13T20:00:27.567528931+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso75-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:00:27.567528931+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso75-openshift-rhel8"
2025-08-13T20:00:27.611589907+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream nodejs already deleted so no worries on clearing tags"
2025-08-13T20:00:27.611589907+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream nodejs"
2025-08-13T20:00:27.912992651+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream perl already deleted so no worries on clearing tags"
2025-08-13T20:00:27.916887692+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream perl"
2025-08-13T20:00:27.931633623+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream php already deleted so no worries on clearing tags"
2025-08-13T20:00:27.931633623+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream php"
2025-08-13T20:00:27.938067556+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="clearImageStreamTagError: stream redis already deleted so no worries on clearing tags"
2025-08-13T20:00:27.938067556+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="There are no more errors or image imports in flight for imagestream redis"
2025-08-13T20:00:28.025671784+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream postgresql already deleted so no worries on clearing tags"
2025-08-13T20:00:28.026030494+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql"
2025-08-13T20:00:28.038732106+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream postgresql13-for-sso76-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:00:28.038896461+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream postgresql13-for-sso76-openshift-rhel8"
2025-08-13T20:00:28.046319203+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream python already deleted so no worries on clearing tags"
2025-08-13T20:00:28.046395345+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream python"
2025-08-13T20:00:28.086291793+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:28.086672094+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream redhat-openjdk18-openshift already deleted so no worries on clearing tags"
2025-08-13T20:00:28.086710605+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream redhat-openjdk18-openshift"
2025-08-13T20:00:28.135997070+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ruby already deleted so no worries on clearing tags"
2025-08-13T20:00:28.135997070+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ruby"
2025-08-13T20:00:28.137027379+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:28.183570867+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11 already deleted so no worries on clearing tags"
2025-08-13T20:00:28.183570867+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11"
2025-08-13T20:00:28.429676585+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream sso75-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:00:28.429676585+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream sso75-openshift-rhel8"
2025-08-13T20:00:28.449937642+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream sso76-openshift-rhel8 already deleted so no worries on clearing tags"
2025-08-13T20:00:28.449937642+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream sso76-openshift-rhel8"
2025-08-13T20:00:28.499246578+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17-runtime already deleted so no worries on clearing tags"
2025-08-13T20:00:28.499246578+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17-runtime"
2025-08-13T20:00:28.513314029+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8 already deleted so no worries on clearing tags"
2025-08-13T20:00:28.513314029+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8"
2025-08-13T20:00:28.523039677+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-17 already deleted so no worries on clearing tags"
2025-08-13T20:00:28.523039677+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-17"
2025-08-13T20:00:28.541055630+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21 already deleted so no worries on clearing tags"
2025-08-13T20:00:28.541055630+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21"
2025-08-13T20:00:28.549866492+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-8-runtime already deleted so no worries on clearing tags"
2025-08-13T20:00:28.549866492+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-8-runtime"
2025-08-13T20:00:28.560148035+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-11-runtime already deleted so no worries on clearing tags"
2025-08-13T20:00:28.560148035+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-11-runtime"
2025-08-13T20:00:28.566894357+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="clearImageStreamTagError: stream ubi8-openjdk-21-runtime already deleted so no worries on clearing tags"
2025-08-13T20:00:28.566894357+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="There are no more errors or image imports in flight for imagestream ubi8-openjdk-21-runtime"
2025-08-13T20:00:28.570909852+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:28.583921022+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:29.732106741+00:00 stderr F time="2025-08-13T20:00:29Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:29.732106741+00:00 stderr F time="2025-08-13T20:00:29Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:30.031907490+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:30.031907490+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:30.300890139+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:30.300890139+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:30.833731993+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:30.833894197+00:00 stderr F time="2025-08-13T20:00:30Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:31.044594205+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:31.044757080+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:31.258115964+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:31.258263938+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:31.592207790+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:31.592207790+00:00 stderr F time="2025-08-13T20:00:31Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:33.490730614+00:00 stderr F time="2025-08-13T20:00:33Z" level=info msg="no global imagestream configuration will block imagestream creation using "
2025-08-13T20:00:33.491354072+00:00 stderr F time="2025-08-13T20:00:33Z" level=info msg="At steady state: config the same and exists is true, in progress false, and version correct"
2025-08-13T20:00:33.777940123+00:00 stderr F time="2025-08-13T20:00:33Z" level=info msg="shutting down events processor"
././@LongLink0000644000000000000000000000024100000000000011600 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015117130646033103 5ustar zuulzuul
././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroot
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/registry-server/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015117130654033102 5ustar zuulzuul
././@LongLink0000644000000000000000000000026600000000000011607 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/registry-server/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000067015117130646033110 0ustar zuulzuul2025-12-13T00:14:55.462060517+00:00 stderr F time="2025-12-13T00:14:55Z" level=info msg="starting pprof endpoint" address="localhost:6060" 2025-12-13T00:15:00.910846987+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="serving registry" configs=/extracted-catalog/catalog port=50051 2025-12-13T00:15:00.910846987+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="stopped caching cpu profile data" address="localhost:6060" ././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-content/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015117130654033102 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-content/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015117130646033073 0ustar zuulzuul././@LongLink0000644000000000000000000000026300000000000011604 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-utilities/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015117130654033102 5ustar zuulzuul././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-utilities/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015117130646033073 0ustar zuulzuul././@LongLink0000644000000000000000000000032100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015117130647033061 5ustar zuulzuul././@LongLink0000644000000000000000000000036200000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015117130654033057 5ustar zuulzuul././@LongLink0000644000000000000000000000036700000000000011611 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000117165015117130647033076 0ustar zuulzuul2025-08-13T20:05:36.514581427+00:00 stderr F I0813 20:05:36.511530 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T20:05:36.514581427+00:00 stderr F I0813 20:05:36.511831 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:36.536300489+00:00 stderr F I0813 20:05:36.536068 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:36.665065366+00:00 stderr F I0813 20:05:36.664937 1 builder.go:298] kube-controller-manager-operator version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-08-13T20:05:37.295386946+00:00 stderr F I0813 20:05:37.294764 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:37.295478189+00:00 stderr F W0813 20:05:37.295460 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:37.295518470+00:00 stderr F W0813 20:05:37.295504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:05:37.331330466+00:00 stderr F I0813 20:05:37.326144 1 secure_serving.go:213] Serving securely on [::]:8443
2025-08-13T20:05:37.331574223+00:00 stderr F I0813 20:05:37.331533 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology
2025-08-13T20:05:37.333384024+00:00 stderr F I0813 20:05:37.332718 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:05:37.333384024+00:00 stderr F I0813 20:05:37.333071 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T20:05:37.333653102+00:00 stderr F I0813 20:05:37.333580 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock...
2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.353574 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.353645 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.354854 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.354923 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.354994 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:05:37.355226880+00:00 stderr F I0813 20:05:37.355011 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:05:37.387949927+00:00 stderr F I0813 20:05:37.387768 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock
2025-08-13T20:05:37.402530115+00:00 stderr F I0813 20:05:37.390150 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-lock", UID:"d058fe3a-98b8-4ef3-8f29-e5f93bb21bb1", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31745", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-controller-manager-operator-6f6cb54958-rbddb_e17706f2-94c3-4c11-bacc-b114096fd37e became leader
2025-08-13T20:05:37.431259157+00:00 stderr F I0813 20:05:37.404425 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T20:05:37.431259157+00:00 stderr F I0813 20:05:37.423394 1 starter.go:88] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T20:05:37.431259157+00:00 stderr F I0813 20:05:37.428270 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T20:05:37.466191818+00:00 stderr F I0813 20:05:37.464311 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:05:37.466191818+00:00 stderr F I0813 20:05:37.464679 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T20:05:37.473592920+00:00 stderr F I0813 20:05:37.469592 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:05:37.690758209+00:00 stderr F I0813 20:05:37.690347 1 base_controller.go:67] Waiting for caches to sync for GarbageCollectorWatcherController
2025-08-13T20:05:37.691722566+00:00 stderr F I0813 20:05:37.691605 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController
2025-08-13T20:05:37.694602549+00:00 stderr F I0813 20:05:37.693336 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile
2025-08-13T20:05:37.696375289+00:00 stderr F I0813 20:05:37.696346 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T20:05:37.696446501+00:00 stderr F I0813 20:05:37.696430 1 base_controller.go:67] Waiting for caches to sync for SATokenSignerController
2025-08-13T20:05:37.697198463+00:00 stderr F I0813 20:05:37.697115 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-08-13T20:05:37.697220463+00:00 stderr F I0813 20:05:37.697193 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-08-13T20:05:37.697648446+00:00 stderr F I0813 20:05:37.697575 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources
2025-08-13T20:05:37.697648446+00:00 stderr F I0813 20:05:37.697631 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-controller-manager
2025-08-13T20:05:37.698286684+00:00 stderr F I0813 20:05:37.698231 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController
2025-08-13T20:05:37.698597563+00:00 stderr F I0813 20:05:37.698542 1 base_controller.go:67] Waiting for caches to sync for RevisionController
2025-08-13T20:05:37.711605395+00:00 stderr F I0813 20:05:37.711545 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController
2025-08-13T20:05:37.711720759+00:00 stderr F I0813 20:05:37.711704 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController
2025-08-13T20:05:37.713356496+00:00 stderr F I0813 20:05:37.713232 1 base_controller.go:67] Waiting for caches to sync for PruneController
2025-08-13T20:05:37.713356496+00:00 stderr F I0813 20:05:37.713305 1 base_controller.go:67] Waiting for caches to sync for NodeController
2025-08-13T20:05:37.713579652+00:00 stderr F I0813 20:05:37.713554 1 base_controller.go:67] Waiting for caches to sync for GuardController
2025-08-13T20:05:37.713664314+00:00 stderr F I0813 20:05:37.713647 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-08-13T20:05:37.713836589+00:00 stderr F I0813 20:05:37.713760 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T20:05:37.728300953+00:00 stderr F I0813 20:05:37.728166 1 base_controller.go:67] Waiting for caches to sync for InstallerController
2025-08-13T20:05:37.735956003+00:00 stderr F I0813 20:05:37.734446 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController
2025-08-13T20:05:37.941763616+00:00 stderr F I0813 20:05:37.940941 1 base_controller.go:73] Caches are synced for BackingResourceController
2025-08-13T20:05:37.941763616+00:00 stderr F I0813 20:05:37.941104 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ...
2025-08-13T20:05:37.945175514+00:00 stderr F I0813 20:05:37.943577 1 base_controller.go:73] Caches are synced for InstallerController
2025-08-13T20:05:37.945175514+00:00 stderr F I0813 20:05:37.943624 1 base_controller.go:110] Starting #1 worker of InstallerController controller ...
2025-08-13T20:05:37.963699194+00:00 stderr F I0813 20:05:37.961306 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-08-13T20:05:37.963699194+00:00 stderr F I0813 20:05:37.961486 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-08-13T20:05:37.963699194+00:00 stderr F I0813 20:05:37.963073 1 base_controller.go:73] Caches are synced for PruneController
2025-08-13T20:05:37.963699194+00:00 stderr F I0813 20:05:37.963085 1 base_controller.go:110] Starting #1 worker of PruneController controller ...
2025-08-13T20:05:37.986055495+00:00 stderr F I0813 20:05:37.985229 1 base_controller.go:73] Caches are synced for InstallerStateController
2025-08-13T20:05:37.986055495+00:00 stderr F I0813 20:05:37.985459 1 base_controller.go:73] Caches are synced for StaticPodStateController
2025-08-13T20:05:37.986055495+00:00 stderr F I0813 20:05:37.985485 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ...
2025-08-13T20:05:37.986055495+00:00 stderr F I0813 20:05:37.985494 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10
2025-08-13T20:05:37.989486673+00:00 stderr F E0813 20:05:37.989079 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-10" not found
2025-08-13T20:05:37.990072570+00:00 stderr F I0813 20:05:37.989540 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ...
2025-08-13T20:05:38.002449894+00:00 stderr F I0813 20:05:38.001002 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-controller-manager
2025-08-13T20:05:38.002449894+00:00 stderr F I0813 20:05:38.001048 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-controller-manager controller ...
2025-08-13T20:05:38.002449894+00:00 stderr F I0813 20:05:38.001931 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile
2025-08-13T20:05:38.002449894+00:00 stderr F I0813 20:05:38.001956 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ...
2025-08-13T20:05:38.009507136+00:00 stderr F I0813 20:05:38.009211 1 base_controller.go:73] Caches are synced for RevisionController
2025-08-13T20:05:38.009507136+00:00 stderr F I0813 20:05:38.009304 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-08-13T20:05:38.009552308+00:00 stderr F I0813 20:05:38.009494 1 base_controller.go:73] Caches are synced for MissingStaticPodController
2025-08-13T20:05:38.009552308+00:00 stderr F I0813 20:05:38.009515 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ...
2025-08-13T20:05:38.023609120+00:00 stderr F I0813 20:05:38.022638 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T20:05:38.023609120+00:00 stderr F I0813 20:05:38.022678 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T20:05:38.023609120+00:00 stderr F I0813 20:05:38.023510 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:05:38.054908236+00:00 stderr F E0813 20:05:38.054015 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]
2025-08-13T20:05:38.071474651+00:00 stderr F I0813 20:05:38.071276 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:05:38.108738358+00:00 stderr F I0813 20:05:38.107067 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]"
2025-08-13T20:05:38.231418341+00:00 stderr F I0813 20:05:38.231010 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:05:38.434263850+00:00 stderr F I0813 20:05:38.433834 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:05:38.499090686+00:00 stderr F I0813 20:05:38.499028 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources
2025-08-13T20:05:38.499168258+00:00 stderr F I0813 20:05:38.499150 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ...
2025-08-13T20:05:38.621559283+00:00 stderr F I0813 20:05:38.621492 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:05:38.629225383+00:00 stderr F I0813 20:05:38.629182 1 base_controller.go:73] Caches are synced for GuardController
2025-08-13T20:05:38.629297075+00:00 stderr F I0813 20:05:38.629281 1 base_controller.go:110] Starting #1 worker of GuardController controller ...
2025-08-13T20:05:38.713959039+00:00 stderr F I0813 20:05:38.713849 1 base_controller.go:73] Caches are synced for NodeController
2025-08-13T20:05:38.714049952+00:00 stderr F I0813 20:05:38.714034 1 base_controller.go:110] Starting #1 worker of NodeController controller ...
2025-08-13T20:05:38.815560509+00:00 stderr F I0813 20:05:38.815178 1 request.go:697] Waited for 1.117060289s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config/configmaps?limit=500&resourceVersion=0
2025-08-13T20:05:38.829328833+00:00 stderr F I0813 20:05:38.829250 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:05:39.048347936+00:00 stderr F I0813 20:05:39.047959 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:05:39.255540009+00:00 stderr F I0813 20:05:39.255468 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:05:39.290947193+00:00 stderr F I0813 20:05:39.290729 1 base_controller.go:73] Caches are synced for GarbageCollectorWatcherController
2025-08-13T20:05:39.291035846+00:00 stderr F I0813 20:05:39.291017 1 base_controller.go:110] Starting #1 worker of GarbageCollectorWatcherController controller ...
2025-08-13T20:05:39.419138504+00:00 stderr F I0813 20:05:39.418571 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:05:39.498210308+00:00 stderr F I0813 20:05:39.498044 1 base_controller.go:73] Caches are synced for ConfigObserver
2025-08-13T20:05:39.498299021+00:00 stderr F I0813 20:05:39.498283 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-08-13T20:05:39.620257543+00:00 stderr F I0813 20:05:39.620126 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:05:39.734896756+00:00 stderr F I0813 20:05:39.723880 1 base_controller.go:73] Caches are synced for SATokenSignerController
2025-08-13T20:05:39.734896756+00:00 stderr F I0813 20:05:39.723933 1 base_controller.go:110] Starting #1 worker of SATokenSignerController controller ...
2025-08-13T20:05:39.734896756+00:00 stderr F I0813 20:05:39.723982 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T20:05:39.734896756+00:00 stderr F I0813 20:05:39.723988 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T20:05:39.817011458+00:00 stderr F I0813 20:05:39.816753 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:05:39.893035685+00:00 stderr F I0813 20:05:39.892925 1 base_controller.go:73] Caches are synced for TargetConfigController
2025-08-13T20:05:39.893035685+00:00 stderr F I0813 20:05:39.892974 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ...
2025-08-13T20:05:39.898318626+00:00 stderr F I0813 20:05:39.898279 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-08-13T20:05:39.898378448+00:00 stderr F I0813 20:05:39.898363 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-08-13T20:05:40.013757722+00:00 stderr F I0813 20:05:40.013689 1 request.go:697] Waited for 2.027614104s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:05:41.215111744+00:00 stderr F I0813 20:05:41.213553 1 request.go:697] Waited for 1.489350039s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dopenshift-kube-apiserver 2025-08-13T20:05:42.218226789+00:00 stderr F I0813 20:05:42.218018 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' installer errors: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.218226789+00:00 stderr F I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.218226789+00:00 stderr F W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:42.218226789+00:00 stderr F F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:42.220484124+00:00 stderr F I0813 20:05:42.220398 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) {
2025-08-13T20:05:42.220484124+00:00 stderr F NodeName: (string) (len=3) "crc",
2025-08-13T20:05:42.220484124+00:00 stderr F CurrentRevision: (int32) 8,
2025-08-13T20:05:42.220484124+00:00 stderr F TargetRevision: (int32) 10,
2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedRevision: (int32) 10,
2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedTime: (*v1.Time)(0xc000a3d830)(2025-08-13 20:05:42.217747775 +0000 UTC m=+6.736788489),
2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed",
2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedCount: (int) 1, 
2025-08-13T20:05:42.220484124+00:00 stderr F LastFallbackCount: (int) 0,
2025-08-13T20:05:42.220484124+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) {
2025-08-13T20:05:42.220484124+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: 
connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n"
2025-08-13T20:05:42.220484124+00:00 stderr F }
2025-08-13T20:05:42.220484124+00:00 stderr F }
2025-08-13T20:05:42.220484124+00:00 stderr F because installer pod failed: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:42.220484124+00:00 stderr F I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:42.220484124+00:00 stderr F W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 
20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:42.220484124+00:00 stderr F F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:42.299764664+00:00 stderr F I0813 20:05:42.295707 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 
20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 
10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:42.326058137+00:00 stderr F I0813 20:05:42.323599 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get 
secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: "
2025-08-13T20:05:42.373419313+00:00 stderr F I0813 20:05:42.373161 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All 
master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection 
refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:05:42.404745160+00:00 stderr F I0813 20:05:42.404616 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-10,config-10,controller-manager-kubeconfig-10,kube-controller-cert-syncer-kubeconfig-10,kube-controller-manager-pod-10,recycler-config-10,service-ca-10,serviceaccount-ca-10]\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get 
secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: "
2025-08-13T20:05:42.413767369+00:00 stderr F I0813 20:05:42.413731 1 request.go:697] Waited for 1.327177765s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-infra
2025-08-13T20:05:42.629149796+00:00 stderr F I0813 20:05:42.629032 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SATokenSignerControllerOK' found expected kube-apiserver endpoints
2025-08-13T20:05:43.613509565+00:00 stderr F I0813 20:05:43.613161 1 request.go:697] Waited for 1.193070645s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager
2025-08-13T20:05:57.065456713+00:00 stderr F I0813 20:05:57.063355 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-10-retry-1-crc -n openshift-kube-controller-manager because it was missing
2025-08-13T20:05:57.088850643+00:00 stderr F I0813 
20:05:57.085745 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:05:57.131914866+00:00 stderr F I0813 20:05:57.131456 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:05:57.218625389+00:00 stderr F I0813 20:05:57.212321 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:05:58.028420369+00:00 stderr F I0813 20:05:58.027854 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:05:58.807715594+00:00 stderr F I0813 20:05:58.807067 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:06:00.112517718+00:00 stderr F I0813 20:06:00.112357 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Running phase
2025-08-13T20:06:33.497496611+00:00 stderr F I0813 20:06:33.496837 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Running phase
2025-08-13T20:06:35.691201926+00:00 stderr F I0813 20:06:35.688448 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because waiting for static pod of revision 10, found 8
2025-08-13T20:06:36.706043922+00:00 stderr F I0813 20:06:36.703456 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because waiting for static pod of revision 10, found 8
2025-08-13T20:06:50.660622631+00:00 stderr F I0813 20:06:50.658985 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because 
static pod is pending
2025-08-13T20:06:50.850225138+00:00 stderr F I0813 20:06:50.848552 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending
2025-08-13T20:06:51.121329900+00:00 stderr F I0813 20:06:51.119648 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending
2025-08-13T20:06:55.484035044+00:00 stderr F I0813 20:06:55.481643 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending
2025-08-13T20:06:55.563495791+00:00 stderr F I0813 20:06:55.563359 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 
10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:06:55.655427417+00:00 stderr F I0813 20:06:55.655320 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 
10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:55.658088293+00:00 stderr F I0813 20:06:55.658034 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container 
\"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " 2025-08-13T20:06:55.665956629+00:00 stderr F I0813 20:06:55.661252 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending 2025-08-13T20:06:55.683381938+00:00 stderr F E0813 20:06:55.683327 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:56.936619590+00:00 stderr F I0813 20:06:56.930386 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event \u0026Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e 
\u003cnil\u003e map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:57.091031747+00:00 stderr F I0813 20:06:57.090927 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-crc container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 
20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: " 2025-08-13T20:06:57.980483799+00:00 stderr F I0813 20:06:57.979433 1 request.go:697] Waited for 1.042308714s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-10-crc 2025-08-13T20:06:59.201359691+00:00 stderr F I0813 20:06:59.201295 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending 2025-08-13T20:07:01.121543755+00:00 stderr F I0813 20:07:01.121137 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because static pod is pending 2025-08-13T20:07:04.595151506+00:00 stderr F I0813 20:07:04.533289 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:07:04.595151506+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:07:04.595151506+00:00 stderr F CurrentRevision: (int32) 10, 2025-08-13T20:07:04.595151506+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:07:04.595151506+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:07:04.595151506+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00264f218)(2025-08-13 20:05:42 +0000 UTC), 2025-08-13T20:07:04.595151506+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:07:04.595151506+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:07:04.595151506+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:07:04.595151506+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:07:04.595151506+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:07:04.595151506+00:00 stderr F } 2025-08-13T20:07:04.595151506+00:00 stderr F } 2025-08-13T20:07:04.595151506+00:00 stderr F because static pod is ready 2025-08-13T20:07:04.677555779+00:00 stderr F I0813 20:07:04.677143 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", 
UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 8 to 10 because static pod is ready 2025-08-13T20:07:04.721704214+00:00 stderr F I0813 20:07:04.720531 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:07:04Z","message":"NodeInstallerProgressing: 1 node is at revision 10","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:04.783723732+00:00 stderr F I0813 20:07:04.783645 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nNodeInstallerDegraded: F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection 
refused\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 10"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10" 2025-08-13T20:07:05.988628468+00:00 stderr F I0813 20:07:05.985923 1 request.go:697] Waited for 1.015351241s due to client-side throttling, not priority and fairness, request: DELETE:https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider 2025-08-13T20:07:15.750061857+00:00 stderr F I0813 20:07:15.749143 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 11 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:15.830481992+00:00 stderr F I0813 20:07:15.825103 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-manager-pod-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:15.864035724+00:00 stderr F I0813 20:07:15.863695 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created 
ConfigMap/config-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:15.880677681+00:00 stderr F I0813 20:07:15.880518 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/cluster-policy-controller-config-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:15.896993799+00:00 stderr F I0813 20:07:15.896746 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/controller-manager-kubeconfig-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:16.533054716+00:00 stderr F I0813 20:07:16.531142 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-cert-syncer-kubeconfig-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:16.966016579+00:00 stderr F I0813 20:07:16.965723 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-11 -n openshift-kube-controller-manager because it was missing 
2025-08-13T20:07:17.163532201+00:00 stderr F I0813 20:07:17.163424 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/service-ca-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:17.730583569+00:00 stderr F I0813 20:07:17.730517 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/recycler-config-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:18.745910990+00:00 stderr F I0813 20:07:18.743413 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/service-account-private-key-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:19.932212122+00:00 stderr F I0813 20:07:19.894036 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:20.024030635+00:00 stderr F I0813 20:07:20.023839 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-11 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:07:20.190952711+00:00 stderr F I0813 20:07:20.190769 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 11 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:20.342014862+00:00 stderr F I0813 20:07:20.341946 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 11 created because required secret/localhost-recovery-client-token has changed 2025-08-13T20:07:21.518012327+00:00 stderr F I0813 20:07:21.517052 1 request.go:697] Waited for 1.17130022s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config 2025-08-13T20:07:21.914562077+00:00 stderr F I0813 20:07:21.914362 1 installer_controller.go:524] node crc with revision 10 is the oldest and needs new revision 11 2025-08-13T20:07:21.914562077+00:00 stderr F I0813 20:07:21.914465 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:07:21.914562077+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:07:21.914562077+00:00 stderr F CurrentRevision: (int32) 10, 
2025-08-13T20:07:21.914562077+00:00 stderr F TargetRevision: (int32) 11,
2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedRevision: (int32) 10,
2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedTime: (*v1.Time)(0xc002d33830)(2025-08-13 20:05:42 +0000 UTC),
2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed",
2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedCount: (int) 1,
2025-08-13T20:07:21.914562077+00:00 stderr F LastFallbackCount: (int) 0,
2025-08-13T20:07:21.914562077+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) {
2025-08-13T20:07:21.914562077+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n"
2025-08-13T20:07:21.914562077+00:00 stderr F }
2025-08-13T20:07:21.914562077+00:00 stderr F }
2025-08-13T20:07:21.957209780+00:00 stderr F I0813 20:07:21.956243 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:07:21Z","message":"NodeInstallerProgressing: 1 node is at revision 10; 0 nodes have achieved new revision 11","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10; 0 nodes have achieved new revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:07:21.971364466+00:00 stderr F I0813 20:07:21.959131 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 10 to 11 because node crc with revision 10 is the oldest
2025-08-13T20:07:22.018179948+00:00 stderr F I0813 20:07:22.017362 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 10; 0 nodes have achieved new revision 11"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10; 0 nodes have achieved new revision 11"
2025-08-13T20:07:22.198346004+00:00 stderr F I0813 20:07:22.177477 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-11-crc -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:23.112166394+00:00 stderr F I0813 20:07:23.109244 1 request.go:697] Waited for 1.121977728s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc
2025-08-13T20:07:24.113915855+00:00 stderr F I0813 20:07:24.112993 1 request.go:697] Waited for 1.361823165s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc
2025-08-13T20:07:24.340358286+00:00 stderr F I0813 20:07:24.340133 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-11-crc -n openshift-kube-controller-manager because it was missing
2025-08-13T20:07:25.328965391+00:00 stderr F I0813 20:07:25.328424 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:07:27.325769971+00:00 stderr F I0813 20:07:27.321708 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:07:29.319020529+00:00 stderr F I0813 20:07:29.318273 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase
2025-08-13T20:07:32.013351707+00:00 stderr F I0813 20:07:32.012561 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase
2025-08-13T20:08:01.280911734+00:00 stderr F I0813 20:08:01.278043 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase
2025-08-13T20:08:02.776377240+00:00 stderr F I0813 20:08:02.774488 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because waiting for static pod of revision 11, found 10
2025-08-13T20:08:03.237190373+00:00 stderr F I0813 20:08:03.236041 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because waiting for static pod of revision 11, found 10
2025-08-13T20:08:12.346025060+00:00 stderr F I0813 20:08:12.336670 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending
2025-08-13T20:08:12.393195983+00:00 stderr F I0813 20:08:12.392436 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending
2025-08-13T20:08:14.389113567+00:00 stderr F I0813 20:08:14.382351 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending
2025-08-13T20:08:22.396199897+00:00 stderr F I0813 20:08:22.394876 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending
2025-08-13T20:08:22.452822330+00:00 stderr F I0813 20:08:22.451881 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending
2025-08-13T20:08:23.367716581+00:00 stderr F I0813 20:08:23.367578 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because static pod is pending
2025-08-13T20:08:24.170857908+00:00 stderr F E0813 20:08:24.170609 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:24.371187832+00:00 stderr F I0813 20:08:24.369377 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:24.375219228+00:00 stderr F E0813 20:08:24.374420 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}"
2025-08-13T20:08:24.394850951+00:00 stderr F E0813 20:08:24.394715 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:24.568198841+00:00 stderr F I0813 20:08:24.567660 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:24.571906687+00:00 stderr F E0813 20:08:24.571270 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:24.765553469+00:00 stderr F I0813 20:08:24.765486 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:24.771100708+00:00 stderr F E0813 20:08:24.771014 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:24.971852124+00:00 stderr F I0813 20:08:24.969335 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:24.978861485+00:00 stderr F E0813 20:08:24.978496 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.165866856+00:00 stderr F I0813 20:08:25.165412 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.169325765+00:00 stderr F E0813 20:08:25.166448 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.364084589+00:00 stderr F I0813 20:08:25.363947 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.366759675+00:00 stderr F E0813 20:08:25.366692 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.704643453+00:00 stderr F I0813 20:08:25.698772 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:25.714601128+00:00 stderr F E0813 20:08:25.714305 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:26.369521486+00:00 stderr F I0813 20:08:26.360516 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:26.398639191+00:00 stderr F E0813 20:08:26.362744 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:27.685364933+00:00 stderr F I0813 20:08:27.685138 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:27.688204264+00:00 stderr F E0813 20:08:27.687213 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:29.431541927+00:00 stderr F E0813 20:08:29.429165 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}"
2025-08-13T20:08:30.267922087+00:00 stderr F I0813 20:08:30.266521 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:30.268515264+00:00 stderr F E0813 20:08:30.268373 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:35.396384583+00:00 stderr F I0813 20:08:35.395552 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:35.398464212+00:00 stderr F E0813 20:08:35.398026 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:37.708406420+00:00 stderr F E0813 20:08:37.707561 1 leaderelection.go:332] error retrieving resource lock openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager-operator/leases/kube-controller-manager-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:37.977995409+00:00 stderr F E0813 20:08:37.977379 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:37.991073454+00:00 stderr F E0813 20:08:37.991014 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:37.994263745+00:00 stderr F E0813 20:08:37.994197 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:37.995028277+00:00 stderr F E0813 20:08:37.994974 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:38.002764669+00:00 stderr F E0813 20:08:38.002720 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:38.003259743+00:00 stderr F E0813 20:08:38.003178 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:38.006769884+00:00 stderr F E0813 20:08:38.006664 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:38.015736081+00:00 stderr F E0813 20:08:38.015641 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:38.018489110+00:00 stderr F E0813 20:08:38.018422 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:38.036316851+00:00 stderr F E0813 20:08:38.036184 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:38.169417247+00:00 stderr F E0813 20:08:38.169315 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:38.371619185+00:00 stderr F E0813 20:08:38.371482 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:38.583145309+00:00 stderr F E0813 20:08:38.581142 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:38.770923113+00:00 stderr F E0813 20:08:38.770624 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:38.972205464+00:00 stderr F E0813 20:08:38.972095 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.378703148+00:00 stderr F E0813 20:08:39.378322 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:39.434393324+00:00 stderr F E0813 20:08:39.434225 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}"
2025-08-13T20:08:39.569598961+00:00 stderr F E0813 20:08:39.569421 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.772648372+00:00 stderr F E0813 20:08:39.772541 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.898155131+00:00 stderr F E0813 20:08:39.898073 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.923172408+00:00 stderr F E0813 20:08:39.922158 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.937491379+00:00 stderr F E0813 20:08:39.937365 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.965853612+00:00 stderr F E0813 20:08:39.963385 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.007860736+00:00 stderr F E0813 20:08:40.006971 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.092879614+00:00 stderr F E0813 20:08:40.092336 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.177913742+00:00 stderr F E0813 20:08:40.177511 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:40.256260358+00:00 stderr F E0813 20:08:40.255973 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.377703030+00:00 stderr F E0813 20:08:40.377602 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.586885368+00:00 stderr F E0813 20:08:40.585655 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.586885368+00:00 stderr F E0813 20:08:40.586491 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.997595883+00:00 stderr F E0813 20:08:40.997086 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:41.173972980+00:00 stderr F E0813 20:08:41.173275 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.236970396+00:00 stderr F E0813 20:08:41.232354 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.572877047+00:00 stderr F E0813 20:08:41.572494 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.004873103+00:00 stderr F E0813 20:08:42.002107 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, 
"manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.370864736+00:00 stderr F E0813 20:08:42.369130 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.520686582+00:00 stderr F E0813 20:08:42.520583 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.583390490+00:00 stderr F E0813 20:08:42.582572 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.782116257+00:00 stderr F E0813 20:08:42.781255 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial 
tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:43.575448852+00:00 stderr F E0813 20:08:43.575363 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:43.969587302+00:00 stderr F E0813 20:08:43.969477 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.373945716+00:00 stderr F E0813 20:08:44.373709 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.979102146+00:00 stderr F E0813 20:08:44.978928 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:45.105530441+00:00 stderr F E0813 20:08:45.101170 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.644941877+00:00 stderr F I0813 20:08:45.643771 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 11 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.647885141+00:00 stderr F E0813 20:08:45.646113 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.376183812+00:00 stderr F E0813 20:08:46.375706 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.533064029+00:00 stderr F E0813 20:08:46.532345 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.940860211+00:00 stderr F E0813 20:08:46.939076 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.384872593+00:00 stderr F E0813 20:08:48.382040 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:49.437137672+00:00 stderr F E0813 20:08:49.437038 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:08:50.225616298+00:00 stderr F E0813 20:08:50.225290 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.576501139+00:00 stderr F E0813 20:08:51.576436 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:51.656540804+00:00 stderr F E0813 20:08:51.656359 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.776989467+00:00 stderr F E0813 20:08:51.775149 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:52.064855021+00:00 stderr F E0813 20:08:52.064721 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.984195111+00:00 stderr F E0813 20:08:54.983672 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: 
connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:58.174975433+00:00 stderr F E0813 20:08:58.174827 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete 
"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:59.441281750+00:00 stderr F E0813 20:08:59.440725 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-operator.185b6c6f22a0eb36 openshift-kube-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-controller-manager-operator,Name:kube-controller-manager-operator,UID:8a9ccf98-e60f-4580-94d2-1560cf66cd74,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 11 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-controller-manager-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,LastTimestamp:2025-08-13 20:08:24.369081142 +0000 UTC m=+168.888121906,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-controller-manager-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:09:00.467884753+00:00 stderr F E0813 20:09:00.467455 1 base_controller.go:268] TargetConfigController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:01.433467986+00:00 stderr F E0813 20:09:01.433310 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/namespace-openshift-infra.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/recycler-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): Delete "https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:09:06.140281097+00:00 stderr F I0813 20:09:06.139717 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:09:06.140281097+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:09:06.140281097+00:00 stderr F CurrentRevision: (int32) 11, 2025-08-13T20:09:06.140281097+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00231f1a0)(2025-08-13 20:05:42 +0000 UTC), 2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:09:06.140281097+00:00 stderr F 
LastFailedCount: (int) 1, 2025-08-13T20:09:06.140281097+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:09:06.140281097+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:09:06.140281097+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": 
dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:09:06.140281097+00:00 stderr F } 2025-08-13T20:09:06.140281097+00:00 stderr F } 2025-08-13T20:09:06.140281097+00:00 stderr F because static pod is ready 2025-08-13T20:09:06.171341418+00:00 stderr F I0813 20:09:06.170651 1 helpers.go:260] lister was stale at resourceVersion=32529, live get showed resourceVersion=32975 2025-08-13T20:09:06.328351489+00:00 stderr F I0813 20:09:06.328133 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 10 to 11 because static pod is ready 2025-08-13T20:09:32.791415227+00:00 stderr F I0813 20:09:32.790410 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.842145974+00:00 stderr F I0813 20:09:35.841480 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:36.566471571+00:00 stderr F I0813 20:09:36.566334 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:36.867218033+00:00 stderr F I0813 20:09:36.867072 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.031180185+00:00 stderr F I0813 20:09:39.030762 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:09:39.077560615+00:00 stderr F I0813 20:09:39.075582 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.669969650+00:00 stderr F I0813 20:09:39.669137 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:41.868331488+00:00 stderr F I0813 20:09:41.867858 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:09:41.868331488+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:09:41.868331488+00:00 stderr F CurrentRevision: (int32) 11, 2025-08-13T20:09:41.868331488+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00167baa0)(2025-08-13 20:05:42 +0000 UTC), 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:09:41.868331488+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:09:41.868331488+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:09:41.868331488+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 
recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:09:41.868331488+00:00 stderr F } 2025-08-13T20:09:41.868331488+00:00 stderr F } 2025-08-13T20:09:41.868331488+00:00 stderr F because static pod is ready 2025-08-13T20:09:41.901710485+00:00 stderr F I0813 20:09:41.901606 1 helpers.go:260] lister was stale at resourceVersion=32529, live get showed resourceVersion=32995 2025-08-13T20:09:43.065986166+00:00 stderr F I0813 20:09:43.065471 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.755607148+00:00 stderr F I0813 20:09:43.753551 1 reflector.go:351] Caches populated for 
*v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.828929360+00:00 stderr F I0813 20:09:43.828769 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.868881685+00:00 stderr F I0813 20:09:43.866849 1 request.go:697] Waited for 1.008177855s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config/configmaps?resourceVersion=32682 2025-08-13T20:09:43.874975750+00:00 stderr F I0813 20:09:43.874058 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:44.267842224+00:00 stderr F I0813 20:09:44.266984 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:09:44.267842224+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:09:44.267842224+00:00 stderr F CurrentRevision: (int32) 11, 2025-08-13T20:09:44.267842224+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedRevision: (int32) 10, 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00184d668)(2025-08-13 20:05:42 +0000 UTC), 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:09:44.267842224+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:09:44.267842224+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:09:44.267842224+00:00 stderr F (string) (len=2059) "installer: go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nI0813 
20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nW0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\nF0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:09:44.267842224+00:00 stderr F } 2025-08-13T20:09:44.267842224+00:00 stderr F } 2025-08-13T20:09:44.267842224+00:00 stderr F because static pod is ready 2025-08-13T20:09:44.305548305+00:00 stderr F I0813 20:09:44.302039 1 helpers.go:260] lister was stale at 
resourceVersion=32529, live get showed resourceVersion=32995 2025-08-13T20:09:44.830307690+00:00 stderr F I0813 20:09:44.830050 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.070993701+00:00 stderr F I0813 20:09:45.070496 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.882929610+00:00 stderr F I0813 20:09:45.880547 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.698310448+00:00 stderr F I0813 20:09:46.698204 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.920826358+00:00 stderr F I0813 20:09:46.920723 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:47.441742102+00:00 stderr F I0813 20:09:47.441227 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:47.488483362+00:00 stderr F I0813 20:09:47.488294 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:47.865952065+00:00 stderr F I0813 20:09:47.865749 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:48.867515881+00:00 stderr F I0813 20:09:48.867436 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:49.465855626+00:00 stderr F I0813 20:09:49.465342 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:50.469199783+00:00 stderr F I0813 20:09:50.468636 1 reflector.go:351] Caches populated for *v1.Secret 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:50.880873876+00:00 stderr F I0813 20:09:50.878988 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:50.891951154+00:00 stderr F I0813 20:09:50.889425 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" 
(string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 
11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:50.912624487+00:00 stderr F I0813 20:09:50.911375 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ",Progressing changed 
from True to False ("NodeInstallerProgressing: 1 node is at revision 11"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 10; 0 nodes have achieved new revision 11" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11" 2025-08-13T20:09:51.070035449+00:00 stderr F I0813 20:09:51.069679 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.062438492+00:00 stderr F I0813 20:09:52.062144 1 request.go:697] Waited for 1.166969798s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:09:52.074969781+00:00 stderr F I0813 20:09:52.073887 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.087851291+00:00 stderr F E0813 20:09:52.086954 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.089358154+00:00 stderr F I0813 20:09:52.088079 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.100062871+00:00 stderr F E0813 20:09:52.099285 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.107549096+00:00 stderr F I0813 20:09:52.106752 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All 
master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 
10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.112456946+00:00 stderr F E0813 20:09:52.112377 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has 
been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.116400119+00:00 stderr F I0813 20:09:52.116182 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 
10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 
2025-08-13T20:09:52.133078288+00:00 stderr F E0813 20:09:52.133003 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.136254809+00:00 stderr F I0813 20:09:52.136195 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: 
connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 
11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.167870265+00:00 stderr F E0813 20:09:52.167244 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.177925303+00:00 stderr F I0813 20:09:52.175512 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 
10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.194191010+00:00 stderr F E0813 20:09:52.194117 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.356891154+00:00 stderr F I0813 20:09:52.356749 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.385455173+00:00 stderr F E0813 20:09:52.382429 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:52.704555382+00:00 stderr F I0813 20:09:52.704250 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All 
master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 
10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:52.715332131+00:00 stderr F E0813 20:09:52.714021 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has 
been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:53.119107698+00:00 stderr F I0813 20:09:53.117989 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:53.356032781+00:00 stderr F I0813 20:09:53.355849 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" 
(string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: ","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:53Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 
11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:53.364569826+00:00 stderr F E0813 20:09:53.363148 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:53.481039765+00:00 stderr F I0813 20:09:53.480338 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:53.799756183+00:00 stderr F I0813 20:09:53.799653 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:53Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:53.836003352+00:00 stderr F E0813 20:09:53.833661 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io 
"kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:54.645177441+00:00 stderr F I0813 20:09:54.644492 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:54Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:54.660597223+00:00 stderr F E0813 20:09:54.660244 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:54.862986776+00:00 stderr F I0813 20:09:54.862846 1 request.go:697] Waited for 1.062232365s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:09:54.870893733+00:00 stderr F I0813 20:09:54.868932 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are 
ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:54Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:54.878888712+00:00 stderr F E0813 20:09:54.877734 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:54.878888712+00:00 stderr F I0813 20:09:54.878416 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:54Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:54.889590369+00:00 stderr F E0813 
20:09:54.889437 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:59.784361326+00:00 stderr F I0813 20:09:59.782708 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:59Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:59.796398381+00:00 stderr F E0813 20:09:59.794549 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:10:01.485628003+00:00 stderr F I0813 20:10:01.485296 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:01.486631552+00:00 stderr F I0813 20:10:01.486593 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: 
All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:10:01.512229046+00:00 stderr F I0813 20:10:01.512052 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get 
\"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 
10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://10.217.4.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 10.217.4.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:10:05.576588314+00:00 stderr F I0813 20:10:05.574320 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:16.788647521+00:00 stderr F I0813 20:10:16.786138 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:17.632072662+00:00 stderr F I0813 20:10:17.631636 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.410729797+00:00 stderr F I0813 20:10:18.410459 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.615626252+00:00 stderr F I0813 20:10:18.615513 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.806435403+00:00 stderr F I0813 20:10:18.806372 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:20.493105591+00:00 stderr F I0813 20:10:20.492429 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:24.807862058+00:00 stderr F I0813 20:10:24.807400 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:31.619188005+00:00 stderr F I0813 20:10:31.618223 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:32.803599894+00:00 stderr F I0813 20:10:32.803378 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:35.876587429+00:00 stderr F I0813 20:10:35.875698 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:42:36.387525102+00:00 stderr F I0813 20:42:36.381175 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.387525102+00:00 stderr F I0813 20:42:36.382267 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.387525102+00:00 stderr F I0813 20:42:36.383047 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.387525102+00:00 stderr F I0813 20:42:36.383551 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.396116500+00:00 stderr F I0813 20:42:36.375892 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.439697676+00:00 stderr F I0813 20:42:36.438509 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.473221493+00:00 stderr F I0813 20:42:36.472939 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.517723086+00:00 stderr F I0813 20:42:36.515446 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.525477209+00:00 stderr F I0813 20:42:36.524990 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.525649584+00:00 stderr F I0813 20:42:36.525597 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.526007115+00:00 stderr F I0813 20:42:36.525975 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.526113798+00:00 stderr F I0813 20:42:36.526057 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.526590702+00:00 stderr F I0813 20:42:36.526528 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.526860159+00:00 stderr F I0813 20:42:36.526836 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.527274371+00:00 stderr F I0813 20:42:36.527249 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.527652012+00:00 stderr F I0813 20:42:36.527630 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.527982892+00:00 stderr F I0813 20:42:36.527959 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.528339942+00:00 stderr F I0813 20:42:36.528315 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.528704013+00:00 stderr F I0813 20:42:36.528683 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.529639970+00:00 stderr F I0813 20:42:36.529613 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.530079102+00:00 stderr F I0813 20:42:36.530057 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543610662+00:00 stderr F I0813 20:42:36.543448 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.551664234+00:00 stderr F I0813 20:42:36.551599 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.552394896+00:00 stderr F I0813 20:42:36.552291 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.552666603+00:00 stderr F I0813 20:42:36.552644 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.553034024+00:00 stderr F I0813 20:42:36.553007 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.553347353+00:00 stderr F I0813 20:42:36.553324 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.553991832+00:00 stderr F I0813 20:42:36.553964 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554282780+00:00 stderr F I0813 20:42:36.554257 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.565556575+00:00 stderr F I0813 20:42:36.564312 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.565556575+00:00 stderr F I0813 20:42:36.565334 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.565556575+00:00 stderr F I0813 20:42:36.565351 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.565556575+00:00 stderr F I0813 20:42:36.565544 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566095551+00:00 stderr F I0813 20:42:36.565899 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566119391+00:00 stderr F I0813 20:42:36.566108 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566487422+00:00 stderr F I0813 20:42:36.566257 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566502902+00:00 stderr F I0813 20:42:36.566491 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.566909994+00:00 stderr F I0813 20:42:36.566684 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:38.016213258+00:00 stderr F E0813 20:42:38.010560 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:38.018502994+00:00 stderr F E0813 20:42:38.018396 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.022073757+00:00 stderr F E0813 20:42:38.021141 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.029345557+00:00 stderr F E0813 20:42:38.029269 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.034276839+00:00 stderr F E0813 20:42:38.033926 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.036483353+00:00 stderr F E0813 20:42:38.035737 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:38.040625892+00:00 stderr F E0813 20:42:38.039940 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.047424938+00:00 stderr F E0813 20:42:38.047061 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.051877666+00:00 stderr F E0813 20:42:38.051510 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:38.060824984+00:00 stderr F E0813 20:42:38.060661 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.199407930+00:00 stderr F E0813 20:42:38.199318 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.405874742+00:00 stderr F E0813 20:42:38.403054 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-controller-manager-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:38.599883536+00:00 
stderr F E0813 20:42:38.599536 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.802826186+00:00 stderr F E0813 20:42:38.801003 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.858378818+00:00 stderr F I0813 20:42:38.858182 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:38.860434177+00:00 stderr F I0813 20:42:38.860371 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:38.860434177+00:00 stderr F I0813 20:42:38.860425 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:42:38.860457888+00:00 stderr F I0813 20:42:38.860446 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:38.860457888+00:00 stderr F I0813 20:42:38.860453 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:38.860520100+00:00 stderr F I0813 20:42:38.860468 1 base_controller.go:172] Shutting down SATokenSignerController ... 2025-08-13T20:42:38.860520100+00:00 stderr F I0813 20:42:38.860504 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:38.860533350+00:00 stderr F I0813 20:42:38.860520 1 base_controller.go:172] Shutting down GarbageCollectorWatcherController ... 2025-08-13T20:42:38.860543201+00:00 stderr F I0813 20:42:38.860535 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:42:38.860583682+00:00 stderr F I0813 20:42:38.860550 1 base_controller.go:172] Shutting down GuardController ... 
2025-08-13T20:42:38.860583682+00:00 stderr F I0813 20:42:38.860567 1 base_controller.go:172] Shutting down KubeControllerManagerStaticResources ... 2025-08-13T20:42:38.860930672+00:00 stderr F I0813 20:42:38.860883 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:38.860930672+00:00 stderr F I0813 20:42:38.860922 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:42:38.860950012+00:00 stderr F I0813 20:42:38.860936 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:42:38.860959843+00:00 stderr F I0813 20:42:38.860949 1 base_controller.go:172] Shutting down WorkerLatencyProfile ... 2025-08-13T20:42:38.860969773+00:00 stderr F I0813 20:42:38.860962 1 base_controller.go:172] Shutting down StatusSyncer_kube-controller-manager ... 2025-08-13T20:42:38.860979963+00:00 stderr F I0813 20:42:38.860967 1 base_controller.go:150] All StatusSyncer_kube-controller-manager post start hooks have been terminated 2025-08-13T20:42:38.860989933+00:00 stderr F I0813 20:42:38.860979 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:42:38.861177759+00:00 stderr F I0813 20:42:38.861129 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:42:38.861292472+00:00 stderr F I0813 20:42:38.861164 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:38.861891139+00:00 stderr F E0813 20:42:38.861726 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager-operator/leases/kube-controller-manager-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:38.863436384+00:00 stderr F E0813 20:42:38.861921 1 base_controller.go:268] InstallerStateController reconciliation failed: client rate limiter Wait returned an error: context canceled
2025-08-13T20:42:38.863436384+00:00 stderr F I0813 20:42:38.861973 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ...
2025-08-13T20:42:38.863436384+00:00 stderr F I0813 20:42:38.861984 1 base_controller.go:104] All InstallerStateController workers have been terminated
2025-08-13T20:42:38.863436384+00:00 stderr F I0813 20:42:38.862005 1 controller_manager.go:54] InstallerStateController controller terminated
2025-08-13T20:42:38.863436384+00:00 stderr F W0813 20:42:38.862143 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/0.log
2025-08-13T19:59:24.033453753+00:00 stderr F I0813 19:59:24.032400 1 cmd.go:240] Using service-serving-cert provided certificates
2025-08-13T19:59:24.033453753+00:00 stderr F I0813 19:59:24.033256 1 leaderelection.go:121] The leader election gives 4 retries
and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:24.045613349+00:00 stderr F I0813 19:59:24.045312 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:28.618292083+00:00 stderr F I0813 19:59:28.581671 1 builder.go:298] kube-controller-manager-operator version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-08-13T19:59:41.628019168+00:00 stderr F I0813 19:59:41.627091 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:41.688536273+00:00 stderr F I0813 19:59:41.686658 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2025-08-13T19:59:41.688536273+00:00 stderr F I0813 19:59:41.687289 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock... 2025-08-13T19:59:41.783485650+00:00 stderr F W0813 19:59:41.728767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:41.783485650+00:00 stderr F W0813 19:59:41.783334 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:41.912309252+00:00 stderr F I0813 19:59:41.912083 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:41.922961415+00:00 stderr F I0813 19:59:41.920266 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:41.922961415+00:00 stderr F I0813 19:59:41.921423 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923402 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923470 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923610 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923627 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923644 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.931931161+00:00 stderr F I0813 19:59:41.923650 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:41.967324630+00:00 stderr F I0813 19:59:41.964723 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock 2025-08-13T19:59:42.010618034+00:00 stderr F I0813 19:59:42.008445 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager-operator", 
Name:"kube-controller-manager-operator-lock", UID:"d058fe3a-98b8-4ef3-8f29-e5f93bb21bb1", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28303", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-controller-manager-operator-6f6cb54958-rbddb_222cdc59-c33e-47e3-9961-9d9799c1f827 became leader 2025-08-13T19:59:42.023767319+00:00 stderr F I0813 19:59:42.023455 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:42.023767319+00:00 stderr F I0813 19:59:42.023517 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:42.023966385+00:00 stderr F I0813 19:59:42.023881 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:42.023966385+00:00 stderr F I0813 19:59:42.023913 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:42.023982015+00:00 stderr F E0813 19:59:42.023960 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.023997666+00:00 stderr F E0813 19:59:42.023989 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.051606542+00:00 stderr F E0813 19:59:42.047922 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.057444769+00:00 stderr F E0813 19:59:42.056639 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.059437936+00:00 stderr F E0813 19:59:42.058443 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.067436374+00:00 stderr F E0813 19:59:42.067065 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.078942392+00:00 stderr F E0813 19:59:42.078701 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.096151052+00:00 stderr F E0813 19:59:42.094004 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.096151052+00:00 stderr F I0813 19:59:42.094275 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", 
"HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:42.096151052+00:00 stderr F I0813 19:59:42.095053 1 starter.go:88] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders 
DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:42.135419622+00:00 stderr F E0813 19:59:42.131231 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.149967616+00:00 stderr F E0813 19:59:42.149745 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.216747230+00:00 stderr F E0813 19:59:42.212138 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.328639559+00:00 stderr F E0813 
19:59:42.328373 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.380154118+00:00 stderr F E0813 19:59:42.379397 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.490361409+00:00 stderr F E0813 19:59:42.489984 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.734689153+00:00 stderr F E0813 19:59:42.730261 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.813911061+00:00 stderr F E0813 19:59:42.813173 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.093876912+00:00 stderr F I0813 19:59:43.093228 1 base_controller.go:67] Waiting for caches to sync for GarbageCollectorWatcherController 2025-08-13T19:59:43.207267524+00:00 stderr F I0813 19:59:43.193372 1 base_controller.go:67] Waiting for caches to sync for SATokenSignerController 2025-08-13T19:59:43.271215437+00:00 stderr F I0813 19:59:43.269044 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile 2025-08-13T19:59:44.345763787+00:00 stderr F I0813 19:59:44.345462 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T19:59:44.346128778+00:00 stderr F I0813 19:59:44.345751 1 base_controller.go:67] Waiting for caches to sync for RevisionController 
2025-08-13T19:59:44.397758109+00:00 stderr F I0813 19:59:44.397026 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T19:59:44.406665463+00:00 stderr F I0813 19:59:44.406579 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:44.408134075+00:00 stderr F I0813 19:59:44.407390 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-controller-manager 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454354 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.342036 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454585 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454640 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454668 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.454686 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.455379 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.455402 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.455416 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:44.457951755+00:00 stderr F I0813 19:59:44.455430 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T19:59:44.457951755+00:00 stderr F E0813 19:59:44.456040 1 configmap_cafile_content.go:243] 
kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.457951755+00:00 stderr F E0813 19:59:44.456098 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.605027117+00:00 stderr F I0813 19:59:44.604706 1 request.go:697] Waited for 1.261836099s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-infra/configmaps?limit=500&resourceVersion=0 2025-08-13T19:59:45.701060661+00:00 stderr F I0813 19:59:45.683986 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2025-08-13T19:59:45.777649654+00:00 stderr F E0813 19:59:45.777571 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:46.214107444+00:00 stderr F I0813 19:59:46.213425 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T19:59:49.333946649+00:00 stderr F E0813 19:59:49.333240 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.333946649+00:00 stderr F E0813 19:59:49.333693 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.660686732+00:00 stderr F I0813 19:59:49.658599 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T19:59:49.660686732+00:00 stderr F I0813 
19:59:49.658650 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T19:59:49.660686732+00:00 stderr F I0813 19:59:49.658701 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T19:59:49.660686732+00:00 stderr F I0813 19:59:49.658709 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T19:59:49.686514198+00:00 stderr F I0813 19:59:49.685489 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SignerUpdateRequired' "csr-signer-signer" in "openshift-kube-controller-manager-operator" requires a new signing cert/key pair: past its refresh time 2025-06-27 13:05:19 +0000 UTC 2025-08-13T19:59:52.221081929+00:00 stderr F E0813 19:59:52.216537 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:52.947399193+00:00 stderr F E0813 19:59:52.942757 1 base_controller.go:268] InstallerStateController reconciliation failed: kubecontrollermanagers.operator.openshift.io "cluster" not found 2025-08-13T19:59:53.014988420+00:00 stderr F I0813 19:59:53.014830 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T19:59:53.014988420+00:00 stderr F I0813 19:59:53.014901 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T19:59:53.247120117+00:00 stderr F I0813 19:59:53.000502 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T19:59:53.290763200+00:00 stderr F I0813 19:59:53.290133 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 
2025-08-13T19:59:53.327388354+00:00 stderr F I0813 19:59:53.246981 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T19:59:53.327388354+00:00 stderr F I0813 19:59:53.326055 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T19:59:53.327388354+00:00 stderr F I0813 19:59:53.246985 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T19:59:53.327388354+00:00 stderr F I0813 19:59:53.326119 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T19:59:53.343642967+00:00 stderr F I0813 19:59:53.246989 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:53.343642967+00:00 stderr F I0813 19:59:53.343520 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:53.348723092+00:00 stderr F E0813 19:59:53.347167 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.348723092+00:00 stderr F I0813 19:59:53.246992 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:53.348723092+00:00 stderr F I0813 19:59:53.347291 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:53.349069832+00:00 stderr F I0813 19:59:53.248700 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T19:59:53.349069832+00:00 stderr F I0813 19:59:53.349039 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 
2025-08-13T19:59:53.368211478+00:00 stderr F E0813 19:59:53.365245 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.370976496+00:00 stderr F I0813 19:59:53.370553 1 trace.go:236] Trace[1198132947]: "DeltaFIFO Pop Process" ID:openshift-infra/build-config-change-controller,Depth:24,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:53.014) (total time: 355ms): 2025-08-13T19:59:53.370976496+00:00 stderr F Trace[1198132947]: [355.508683ms] [355.508683ms] END 2025-08-13T19:59:53.379894541+00:00 stderr F E0813 19:59:53.379715 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.383234916+00:00 stderr F I0813 19:59:53.380322 1 trace.go:236] Trace[1066495986]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.193) (total time: 10186ms): 2025-08-13T19:59:53.383234916+00:00 stderr F Trace[1066495986]: ---"Objects listed" error: 10186ms (19:59:53.380) 2025-08-13T19:59:53.383234916+00:00 stderr F Trace[1066495986]: [10.186242865s] [10.186242865s] END 2025-08-13T19:59:53.383234916+00:00 stderr F I0813 19:59:53.380358 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.416024670+00:00 stderr F E0813 19:59:53.414698 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.420198000+00:00 stderr F I0813 19:59:53.420157 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.422652869+00:00 stderr F I0813 19:59:53.422216 1 reflector.go:351] Caches populated for *v1.ClusterRole from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.423601336+00:00 stderr F I0813 19:59:53.423567 1 trace.go:236] Trace[1991635445]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.194) (total time: 10228ms): 2025-08-13T19:59:53.423601336+00:00 stderr F Trace[1991635445]: ---"Objects listed" error: 10228ms (19:59:53.423) 2025-08-13T19:59:53.423601336+00:00 stderr F Trace[1991635445]: [10.228950532s] [10.228950532s] END 2025-08-13T19:59:53.423669918+00:00 stderr F I0813 19:59:53.423649 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.449483304+00:00 stderr F I0813 19:59:53.246974 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-controller-manager 2025-08-13T19:59:53.449483304+00:00 stderr F I0813 19:59:53.446162 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-controller-manager controller ... 
2025-08-13T19:59:53.463072922+00:00 stderr F I0813 19:59:53.463014 1 trace.go:236] Trace[1514590009]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.150) (total time: 10312ms): 2025-08-13T19:59:53.463072922+00:00 stderr F Trace[1514590009]: ---"Objects listed" error: 10301ms (19:59:53.452) 2025-08-13T19:59:53.463072922+00:00 stderr F Trace[1514590009]: [10.312669109s] [10.312669109s] END 2025-08-13T19:59:53.463172424+00:00 stderr F I0813 19:59:53.463153 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.525856751+00:00 stderr F E0813 19:59:53.525272 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.527213780+00:00 stderr F E0813 19:59:53.525952 1 base_controller.go:268] PruneController reconciliation failed: unable to set pruner pod ownerrefs: configmap "revision-status-8" not found 2025-08-13T19:59:53.548014333+00:00 stderr F I0813 19:59:53.544828 1 trace.go:236] Trace[1007991698]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.194) (total time: 10345ms): 2025-08-13T19:59:53.548014333+00:00 stderr F Trace[1007991698]: ---"Objects listed" error: 10345ms (19:59:53.539) 2025-08-13T19:59:53.548014333+00:00 stderr F Trace[1007991698]: [10.345534185s] [10.345534185s] END 2025-08-13T19:59:53.548014333+00:00 stderr F I0813 19:59:53.544882 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.548014333+00:00 stderr F I0813 19:59:53.545145 1 trace.go:236] Trace[1260161130]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.098) (total time: 10446ms): 2025-08-13T19:59:53.548014333+00:00 stderr F Trace[1260161130]: ---"Objects listed" 
error: 10446ms (19:59:53.545) 2025-08-13T19:59:53.548014333+00:00 stderr F Trace[1260161130]: [10.446396791s] [10.446396791s] END 2025-08-13T19:59:53.548014333+00:00 stderr F I0813 19:59:53.545153 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.572000517+00:00 stderr F I0813 19:59:53.569230 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile 2025-08-13T19:59:53.572000517+00:00 stderr F I0813 19:59:53.569288 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ... 2025-08-13T19:59:53.575611490+00:00 stderr F I0813 19:59:53.575519 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8 2025-08-13T19:59:53.633472279+00:00 stderr F I0813 19:59:53.633259 1 trace.go:236] Trace[1243592164]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:43.099) (total time: 10533ms): 2025-08-13T19:59:53.633472279+00:00 stderr F Trace[1243592164]: ---"Objects listed" error: 10533ms (19:59:53.633) 2025-08-13T19:59:53.633472279+00:00 stderr F Trace[1243592164]: [10.533904655s] [10.533904655s] END 2025-08-13T19:59:53.633472279+00:00 stderr F I0813 19:59:53.633362 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.675137747+00:00 stderr F I0813 19:59:53.671142 1 status_controller.go:218] clusteroperator/kube-controller-manager 
diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)","reason":"NodeController_MasterNodesReady","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.762284671+00:00 stderr F I0813 19:59:53.730591 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T19:59:53.762284671+00:00 stderr F I0813 19:59:53.760573 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T19:59:53.788307503+00:00 stderr F I0813 19:59:53.783223 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2025-08-13T19:59:53.788307503+00:00 stderr F I0813 19:59:53.783290 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 
2025-08-13T19:59:53.792922194+00:00 stderr F I0813 19:59:53.791421 1 trace.go:236] Trace[896251159]: "DeltaFIFO Pop Process" ID:openshift-config-managed/dashboard-k8s-resources-cluster,Depth:34,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:53.645) (total time: 145ms): 2025-08-13T19:59:53.792922194+00:00 stderr F Trace[896251159]: [145.698363ms] [145.698363ms] END 2025-08-13T19:59:53.804206756+00:00 stderr F I0813 19:59:53.803143 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T19:59:53.804206756+00:00 stderr F I0813 19:59:53.803216 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T19:59:53.820246093+00:00 stderr F I0813 19:59:53.819120 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:53.820246093+00:00 stderr F I0813 19:59:53.819168 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T19:59:53.820246093+00:00 stderr F I0813 19:59:53.819235 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T19:59:53.820246093+00:00 stderr F I0813 19:59:53.819240 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-08-13T19:59:53.830456084+00:00 stderr F I0813 19:59:53.829044 1 base_controller.go:73] Caches are synced for SATokenSignerController 2025-08-13T19:59:53.830456084+00:00 stderr F I0813 19:59:53.829253 1 base_controller.go:110] Starting #1 worker of SATokenSignerController controller ... 2025-08-13T19:59:53.830456084+00:00 stderr F I0813 19:59:53.829501 1 base_controller.go:73] Caches are synced for GarbageCollectorWatcherController 2025-08-13T19:59:53.830456084+00:00 stderr F I0813 19:59:53.829521 1 base_controller.go:110] Starting #1 worker of GarbageCollectorWatcherController controller ... 
2025-08-13T19:59:53.899573244+00:00 stderr F I0813 19:59:53.859656 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T19:59:53.899573244+00:00 stderr F I0813 19:59:53.859744 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:53.963060424+00:00 stderr F I0813 19:59:53.944658 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:53.963060424+00:00 stderr F I0813 19:59:53.950284 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:54.249369305+00:00 stderr F I0813 19:59:54.247394 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.289571031+00:00 stderr F I0813 19:59:54.251263 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("NodeControllerDegraded: 
The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)") 2025-08-13T19:59:54.310357104+00:00 stderr F I0813 19:59:54.305679 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SATokenSignerControllerOK' found expected kube-apiserver endpoints 2025-08-13T19:59:54.337892059+00:00 stderr F E0813 19:59:54.337627 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:54.346882635+00:00 stderr F I0813 19:59:54.341597 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.358706072+00:00 stderr F E0813 19:59:54.358653 1 base_controller.go:268] InstallerController reconciliation failed: missing required 
resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8] 2025-08-13T19:59:54.402212842+00:00 stderr F I0813 19:59:54.401970 1 core.go:359] ConfigMap "openshift-kube-controller-manager/service-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:06Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/description":{},"f:openshift.io/owning-component":{}}}},"manager":"service-ca-operator","operation":"Update","time":"2025-08-13T19:59:39Z"}],"resourceVersion":null,"uid":"8a52a0ef-1908-47be-bed5-31ee169c99a3"}} 2025-08-13T19:59:54.403505199+00:00 stderr F I0813 19:59:54.403422 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' 
reason: 'ConfigMapUpdated' Updated ConfigMap/service-ca -n openshift-kube-controller-manager: 2025-08-13T19:59:54.403505199+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:54.410230861+00:00 stderr F I0813 19:59:54.410168 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.411327932+00:00 stderr F I0813 19:59:54.411251 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") 2025-08-13T19:59:54.421904543+00:00 stderr F I0813 
19:59:54.418693 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 9 triggered by "required configmap/service-ca has changed" 2025-08-13T19:59:54.513584627+00:00 stderr F I0813 19:59:54.513520 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.514087421+00:00 stderr F I0813 19:59:54.514057 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 
'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8]" 2025-08-13T19:59:54.535333137+00:00 stderr P I0813 19:59:54.535256 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/i 2025-08-13T19:59:54.535405889+00:00 stderr F 
SVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pv
kPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:59:49Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-08-13T19:59:54.535596484+00:00 stderr F I0813 19:59:54.535557 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-controller-manager: 2025-08-13T19:59:54.535596484+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:54.619393933+00:00 stderr F I0813 19:59:54.619247 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/csr-signer-signer -n openshift-kube-controller-manager-operator because it changed 2025-08-13T19:59:54.619484395+00:00 stderr F I0813 19:59:54.619464 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CABundleUpdateRequired' "csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a 
new cert 2025-08-13T19:59:54.619513726+00:00 stderr F E0813 19:59:54.619406 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:54.754106093+00:00 stderr F I0813 19:59:54.753338 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.805605461+00:00 stderr F I0813 19:59:54.805538 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: 
cluster-policy-controller-config-8,config-8,controller-manager-kubeconfig-8,kube-controller-cert-syncer-kubeconfig-8,kube-controller-manager-pod-8,recycler-config-8,service-ca-8,serviceaccount-ca-8]" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T19:59:55.108944468+00:00 stderr F I0813 19:59:55.107484 1 core.go:359] ConfigMap "openshift-kube-controller-manager/aggregator-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIV+a/r/KBVSQwDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2FnZ3JlZ2F0b3It\nY2xpZW50LXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX2FnZ3JlZ2F0b3ItY2xpZW50LXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwz1oeDcXqAniG+VxAzEbZbeheswm\nibqk0LwWbA9YAD2aJCC2U0gbXouz0u1dzDnEuwzslM0OFq2kW+1RmEB1drVBkCMV\ny/gKGmRafqGt31/rDe81XneOBzrUC/rNVDZq7rx4wsZ8YzYkPhj1frvlCCWyOdyB\n+nWF+ZZQHLXeSuHuVGnfGqmckiQf/R8ITZp/vniyeOED0w8B9ZdfVHNYJksR/Vn2\ngslU8a/mluPzSCyD10aHnX5c75yTzW4TBQvytjkEpDR5LBoRmHiuL64999DtWonq\niX7TdcoQY1LuHyilaXIp0TazmkRb3ycHAY/RQ3xumj9I25D8eLCwWvI8GwIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\nWtUaz8JmZMUc/fPnQTR0L7R9wakwHwYDVR0jBBgwFoAUWtUaz8JmZMUc/fPnQTR0\nL7R9wakwDQYJKoZIhvcNAQELBQADggEBAECt0YM/4XvtPuVY5pY2aAXRuthsw2zQ\nnXVvR5jDsfpaNvMXQWaMjM1B+giNKhDDeLmcF7GlWmFfccqBPicgFhUgQANE3ALN\ngq2Wttd641M6i4B3UuRewNySj1sc12wfgAcwaRcecDTCsZo5yuF90z4mXpZs7MWh\nKCxYPpAtLqi17IF1tJVz/03L+6WD5kUProTELtY7/KBJYV/GONMG+KAMBjg1ikMK\njA0HQiCZiWDuW1ZdAwuvh6oRNWoQy6w9Wksard/AnfXUFBwNgULMp56+tOOPHxtm\nu3XYTN0dPJXsimSk4KfS0by8waS7ocoXa3LgQxb/6h0ympDbcWtgD0w=\n-----END 
CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:00Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}},"f:labels":{".":{},"f:auth.openshift.io/managed-certificate-type":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:33Z"}],"resourceVersion":null,"uid":"1d0d5c4a-d5a2-488a-94e2-bf622b67cadf"}} 2025-08-13T19:59:55.116031370+00:00 stderr F I0813 19:59:55.115686 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager: 2025-08-13T19:59:55.116031370+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:55.688617272+00:00 stderr F I0813 19:59:55.688549 1 core.go:359] ConfigMap "openshift-kube-controller-manager-operator/csr-controller-signer-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n"}} 2025-08-13T19:59:55.704932237+00:00 stderr F I0813 19:59:55.690301 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapUpdateFailed' Failed to update ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator: Operation cannot be fulfilled on configmaps "csr-controller-signer-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:55.714899181+00:00 stderr F I0813 
19:59:55.708597 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-manager-pod-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T19:59:55.762149678+00:00 stderr F I0813 19:59:55.761861 1 request.go:697] Waited for 1.076004342s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T19:59:56.762951177+00:00 stderr F I0813 19:59:56.762427 1 request.go:697] Waited for 1.055958541s due to client-side throttling, not priority and fairness, request: POST:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps 2025-08-13T19:59:57.074899158+00:00 stderr F I0813 19:59:57.074757 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T19:59:57.349627799+00:00 stderr F I0813 19:59:57.331149 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nCertRotation_CSRSigningCert_Degraded: Operation cannot be fulfilled on configmaps \"csr-controller-signer-ca\": the object has been modified; please apply your changes to the latest version and try 
again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:57.538299877+00:00 stderr F E0813 19:59:57.537920 1 base_controller.go:268] CertRotationController reconciliation failed: Operation cannot be fulfilled on configmaps "csr-controller-signer-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:57.612464112+00:00 stderr F I0813 19:59:57.611932 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RotationError' Operation cannot be fulfilled on configmaps "csr-controller-signer-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:57.729409045+00:00 stderr F I0813 19:59:57.729313 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master 
nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nCertRotation_CSRSigningCert_Degraded: Operation cannot be fulfilled on configmaps \"csr-controller-signer-ca\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T19:59:58.238010793+00:00 stderr F I0813 19:59:58.236969 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/cluster-policy-controller-config-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T19:59:58.340670809+00:00 stderr F I0813 19:59:58.333712 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:09Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:58.369221253+00:00 stderr F I0813 19:59:58.368719 1 request.go:697] Waited for 1.037197985s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa 
2025-08-13T19:59:58.427858545+00:00 stderr F I0813 19:59:58.422906 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nCertRotation_CSRSigningCert_Degraded: Operation cannot be fulfilled on configmaps \"csr-controller-signer-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T19:59:59.248931550+00:00 stderr F I0813 19:59:59.226352 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/controller-manager-kubeconfig-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T19:59:59.574132010+00:00 stderr F I0813 19:59:59.566388 1 request.go:697] Waited for 1.156719703s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:00:00.643511052+00:00 stderr F I0813 20:00:00.637169 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-cert-syncer-kubeconfig-9 -n 
openshift-kube-controller-manager because it was missing 2025-08-13T20:00:01.507455149+00:00 stderr F I0813 20:00:01.478751 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:02.034100931+00:00 stderr F I0813 20:00:02.023659 1 core.go:359] ConfigMap "openshift-kube-controller-manager-operator/csr-controller-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:02.034100931+00:00 stderr F I0813 20:00:02.024573 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapUpdateFailed' Failed to update ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator: Operation cannot be fulfilled on configmaps "csr-controller-ca": the object has been modified; please apply your changes to the latest 
version and try again 2025-08-13T20:00:02.209018759+00:00 stderr F I0813 20:00:02.206166 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/service-ca-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:02.885065476+00:00 stderr F I0813 20:00:02.884103 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/recycler-config-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:03.224935337+00:00 stderr F I0813 20:00:03.222157 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/service-account-private-key-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:03.729095862+00:00 stderr F I0813 20:00:03.725248 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:04.211702912+00:00 stderr F I0813 20:00:04.208754 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-9 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:04.786158933+00:00 stderr F I0813 20:00:04.781212 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 9 triggered by "required configmap/service-ca has changed" 2025-08-13T20:00:04.870048975+00:00 stderr F I0813 20:00:04.863754 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 9 created because required configmap/service-ca has changed 2025-08-13T20:00:04.884154757+00:00 stderr F W0813 20:00:04.883346 1 staticpod.go:38] revision 9 is unexpectedly already the latest available revision. This is a possible race! 
2025-08-13T20:00:04.891111825+00:00 stderr F E0813 20:00:04.889270 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 9 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769114 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.769040049 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769378 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.769361598 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769400 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.769386709 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769418 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.769407239 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769435 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.7694241 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769452 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.76944052 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769468 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.769456861 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769484 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.769472801 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769512 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.769496222 +0000 UTC))" 2025-08-13T20:00:05.774890346+00:00 stderr F I0813 20:00:05.769535 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.769523243 +0000 UTC))" 2025-08-13T20:00:05.790969514+00:00 stderr F I0813 20:00:05.790398 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-controller-manager-operator.svc,metrics.openshift-kube-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:00:05.790355257 +0000 UTC))" 
2025-08-13T20:00:05.790969514+00:00 stderr F I0813 20:00:05.790724 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:29 +0000 UTC to 2026-08-13 18:59:29 +0000 UTC (now=2025-08-13 20:00:05.790702476 +0000 UTC))" 2025-08-13T20:00:05.965070208+00:00 stderr F I0813 20:00:05.964149 1 request.go:697] Waited for 1.102447146s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-9-crc 2025-08-13T20:00:06.586405825+00:00 stderr F I0813 20:00:06.585096 1 installer_controller.go:524] node crc with revision 8 is the oldest and needs new revision 9 2025-08-13T20:00:06.586405825+00:00 stderr F I0813 20:00:06.585750 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:00:06.586405825+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:00:06.586405825+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:00:06.586405825+00:00 stderr F TargetRevision: (int32) 9, 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedRevision: (int32) 8, 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0020ea588)(2024-06-27 13:18:10 +0000 UTC), 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:00:06.586405825+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:00:06.586405825+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:00:06.586405825+00:00 stderr F (string) (len=2059) "installer: ry-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) 
(len=8 cap=8) {\n (string) (len=27) \"kube-controller-manager-pod\",\n (string) (len=6) \"config\",\n (string) (len=32) \"cluster-policy-controller-config\",\n (string) (len=29) \"controller-manager-kubeconfig\",\n (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=10) \"service-ca\",\n (string) (len=15) \"recycler-config\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"cloud-config\"\n },\n CertSecretNames: ([]string) (len=2 cap=2) {\n (string) (len=39) \"kube-controller-manager-client-cert-key\",\n (string) (len=10) \"csr-signer\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0627 13:15:36.458960 1 cmd.go:409] Getting controller reference for node crc\nI0627 13:15:36.472798 1 cmd.go:422] Waiting for installer revisions to settle for node crc\nI0627 13:15:36.476730 1 cmd.go:514] Waiting additional period after revisions have settled for node crc\nI0627 13:16:06.477243 1 cmd.go:520] Getting installer pods for node crc\nF0627 13:16:06.480777 1 cmd.go:105] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:00:06.586405825+00:00 stderr F } 
2025-08-13T20:00:06.586405825+00:00 stderr F } 2025-08-13T20:00:06.687925820+00:00 stderr F I0813 20:00:06.683522 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 8 to 9 because node crc with revision 8 is the oldest 2025-08-13T20:00:06.720211090+00:00 stderr F I0813 20:00:06.711726 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:06.760673034+00:00 stderr F I0813 20:00:06.760569 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True 
("NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9" 2025-08-13T20:00:06.836714792+00:00 stderr F I0813 20:00:06.834742 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-9-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:07.767166432+00:00 stderr F I0813 20:00:07.764212 1 request.go:697] Waited for 1.027872228s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-08-13T20:00:08.456826607+00:00 stderr P I0813 20:00:08.456025 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQU 2025-08-13T20:00:08.456932090+00:00 stderr F 
AA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi
4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:06Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-08-13T20:00:08.456932090+00:00 stderr F I0813 20:00:08.456716 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-controller-manager: 2025-08-13T20:00:08.456932090+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:08.966021067+00:00 stderr F I0813 20:00:08.962975 1 request.go:697] Waited for 1.36289333s due to client-side throttling, not priority and fairness, request: POST:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods 2025-08-13T20:00:09.447679141+00:00 stderr F I0813 20:00:09.447065 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-9-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:10.144016496+00:00 stderr F I0813 20:00:10.130744 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:10.628131900+00:00 stderr F I0813 20:00:10.557295 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new 
revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:10.855083031+00:00 stderr F I0813 20:00:10.854579 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:00:10.862919385+00:00 stderr F I0813 20:00:10.861255 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at 
revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:11.266415079+00:00 stderr F E0813 20:00:11.265447 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:11.937577357+00:00 stderr F I0813 20:00:11.937279 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:11.973540762+00:00 stderr F I0813 20:00:11.970921 1 request.go:697] Waited for 1.066126049s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config 2025-08-13T20:00:12.912112395+00:00 stderr F I0813 20:00:12.886263 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:16.706302621+00:00 stderr F I0813 20:00:16.705335 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:17.450477091+00:00 stderr F I0813 20:00:17.450091 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is 
not finished, but in Pending phase 2025-08-13T20:00:19.774153637+00:00 stderr F I0813 20:00:19.769354 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:19.935640232+00:00 stderr F I0813 20:00:19.935570 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:00:19.998082432+00:00 stderr F I0813 20:00:19.998012 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress 
because installer is not finished, but in Pending phase 2025-08-13T20:00:22.011667596+00:00 stderr F I0813 20:00:22.006462 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:23.446037179+00:00 stderr F I0813 20:00:23.444714 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:24.916564830+00:00 stderr F I0813 20:00:24.914283 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:26.792434459+00:00 stderr F I0813 20:00:26.784906 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:27.694565663+00:00 stderr F I0813 20:00:27.690687 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 10 triggered by "optional secret/serving-cert has changed" 2025-08-13T20:00:27.970950544+00:00 stderr F I0813 20:00:27.969903 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:00:29.012869353+00:00 stderr F I0813 20:00:29.011822 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-manager-pod-10 -n openshift-kube-controller-manager 
because it was missing 2025-08-13T20:00:29.746921794+00:00 stderr F I0813 20:00:29.744278 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:30.213415615+00:00 stderr F I0813 20:00:30.212313 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/cluster-policy-controller-config-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:30.800346431+00:00 stderr F I0813 20:00:30.785484 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/controller-manager-kubeconfig-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:31.266697908+00:00 stderr F I0813 20:00:31.250574 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-controller-cert-syncer-kubeconfig-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:31.794188219+00:00 stderr F I0813 20:00:31.793698 1 
event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:33.017819839+00:00 stderr F I0813 20:00:33.017439 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/service-ca-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:33.184819941+00:00 stderr F I0813 20:00:33.179616 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/recycler-config-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:33.622176352+00:00 stderr F I0813 20:00:33.622117 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/service-account-private-key-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:34.201159381+00:00 stderr F I0813 20:00:34.200690 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", 
Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:34.883284441+00:00 stderr F I0813 20:00:34.881509 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-10 -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:35.514679035+00:00 stderr F I0813 20:00:35.513692 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 10 triggered by "optional secret/serving-cert has changed" 2025-08-13T20:00:35.960155817+00:00 stderr F I0813 20:00:35.958416 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 10 created because optional secret/serving-cert has changed 2025-08-13T20:00:36.965344168+00:00 stderr F I0813 20:00:36.955393 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:00:36.965344168+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:00:36.965344168+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:00:36.965344168+00:00 stderr F TargetRevision: (int32) 10, 
2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedRevision: (int32) 8, 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedTime: (*v1.Time)(0xc001ff4780)(2024-06-27 13:18:10 +0000 UTC), 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:00:36.965344168+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:00:36.965344168+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:00:36.965344168+00:00 stderr F (string) (len=2059) "installer: ry-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\n (string) (len=27) \"kube-controller-manager-pod\",\n (string) (len=6) \"config\",\n (string) (len=32) \"cluster-policy-controller-config\",\n (string) (len=29) \"controller-manager-kubeconfig\",\n (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=10) \"service-ca\",\n (string) (len=15) \"recycler-config\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"cloud-config\"\n },\n CertSecretNames: ([]string) (len=2 cap=2) {\n (string) (len=39) \"kube-controller-manager-client-cert-key\",\n (string) (len=10) \"csr-signer\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 
2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0627 13:15:36.458960 1 cmd.go:409] Getting controller reference for node crc\nI0627 13:15:36.472798 1 cmd.go:422] Waiting for installer revisions to settle for node crc\nI0627 13:15:36.476730 1 cmd.go:514] Waiting additional period after revisions have settled for node crc\nI0627 13:16:06.477243 1 cmd.go:520] Getting installer pods for node crc\nF0627 13:16:06.480777 1 cmd.go:105] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 10.217.4.1:443: connect: connection refused\n" 2025-08-13T20:00:36.965344168+00:00 stderr F } 2025-08-13T20:00:36.965344168+00:00 stderr F } 2025-08-13T20:00:36.965344168+00:00 stderr F because new revision pending 2025-08-13T20:00:37.063979591+00:00 stderr F I0813 20:00:37.049572 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:37.084022262+00:00 stderr F I0813 20:00:37.082999 1 status_controller.go:218] 
clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:37.084022262+00:00 stderr F I0813 20:00:37.083556 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9" to "NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10" 2025-08-13T20:00:37.123864098+00:00 stderr F E0813 20:00:37.120427 1 base_controller.go:268] StatusSyncer_kube-controller-manager reconciliation failed: Operation cannot be fulfilled on 
clusteroperators.config.openshift.io "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:37.318318893+00:00 stderr F I0813 20:00:37.316069 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-10-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:38.190099651+00:00 stderr F I0813 20:00:38.189546 1 request.go:697] Waited for 1.147826339s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa 2025-08-13T20:00:39.723621538+00:00 stderr F I0813 20:00:39.722118 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-10-crc -n openshift-kube-controller-manager because it was missing 2025-08-13T20:00:40.578344999+00:00 stderr F I0813 20:00:40.573345 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:45.563774882+00:00 stderr F I0813 20:00:45.553082 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:46.701158604+00:00 stderr F I0813 20:00:46.694673 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 
2025-08-13T20:00:47.187761398+00:00 stderr F I0813 20:00:47.184743 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:56.229393959+00:00 stderr P I0813 20:00:56.223445 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxM 2025-08-13T20:00:56.229760529+00:00 stderr F 
TQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6
lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-08-13T20:00:56.239003423+00:00 stderr F I0813 20:00:56.231111 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-controller-manager: 2025-08-13T20:00:56.239003423+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:01:00.010151258+00:00 stderr F I0813 20:00:59.999206 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.999078302 +0000 UTC))" 2025-08-13T20:01:00.012078983+00:00 stderr F I0813 20:01:00.011490 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.008929353 +0000 UTC))" 2025-08-13T20:01:00.013182774+00:00 stderr F I0813 20:01:00.013038 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.012049852 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.013881 1 tlsconfig.go:178] "Loaded client CA" 
index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.013857453 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.013939 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.013921695 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.013963 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.013946756 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.014016 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.014000307 +0000 UTC))" 2025-08-13T20:01:00.014099750+00:00 stderr F I0813 20:01:00.014037 1 
tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.014023008 +0000 UTC))" 2025-08-13T20:01:00.014174712+00:00 stderr F I0813 20:01:00.014103 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.014042849 +0000 UTC))" 2025-08-13T20:01:00.014174712+00:00 stderr F I0813 20:01:00.014137 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.014118021 +0000 UTC))" 2025-08-13T20:01:00.014174712+00:00 stderr F I0813 20:01:00.014159 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.014147682 +0000 UTC))" 2025-08-13T20:01:00.031031783+00:00 stderr F I0813 20:01:00.030593 1 tlsconfig.go:200] "Loaded 
serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-controller-manager-operator.svc,metrics.openshift-kube-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:01:00.030551769 +0000 UTC))" 2025-08-13T20:01:00.031031783+00:00 stderr F I0813 20:01:00.031018 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:29 +0000 UTC to 2026-08-13 18:59:29 +0000 UTC (now=2025-08-13 20:01:00.030988972 +0000 UTC))" 2025-08-13T20:01:03.266043461+00:00 stderr P I0813 20:01:03.265053 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxM 2025-08-13T20:01:03.266368450+00:00 stderr F 
TQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6
lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-08-13T20:01:03.266368450+00:00 stderr F I0813 20:01:03.266008 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapUpdateFailed' Failed to update ConfigMap/client-ca -n openshift-kube-controller-manager: Operation cannot be fulfilled on configmaps "client-ca": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:05.228984283+00:00 stderr F I0813 20:01:05.179740 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"client-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:06.133168995+00:00 stderr F I0813 20:01:06.125619 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"client-ca\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:01:07.023349078+00:00 stderr F I0813 20:01:06.999583 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:06Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:07.023349078+00:00 stderr F I0813 20:01:07.002684 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:07.263262658+00:00 stderr F I0813 20:01:07.263101 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"client-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:01:07.839947862+00:00 stderr F I0813 20:01:07.743713 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:22.554548162+00:00 stderr F I0813 20:01:22.551707 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:23.629342647+00:00 stderr F I0813 20:01:23.629273 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:24.026047149+00:00 stderr F I0813 20:01:24.025682 1 installer_controller.go:512] "crc" is in transition to 10, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:26.338292640+00:00 stderr F I0813 20:01:26.337632 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:26.340007288+00:00 stderr F I0813 20:01:26.338395 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:26.340007288+00:00 stderr F I0813 20:01:26.339666 1 dynamic_serving_content.go:113] "Loaded a new 
cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:26.350195569+00:00 stderr F I0813 20:01:26.349343 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:26.348725367 +0000 UTC))" 2025-08-13T20:01:26.351715732+00:00 stderr F I0813 20:01:26.350769 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:26.350746725 +0000 UTC))" 2025-08-13T20:01:26.351715732+00:00 stderr F I0813 20:01:26.351583 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:26.351257289 +0000 UTC))" 2025-08-13T20:01:26.352549576+00:00 stderr F I0813 20:01:26.351755 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 
2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:26.351730833 +0000 UTC))" 2025-08-13T20:01:26.352549576+00:00 stderr F I0813 20:01:26.352233 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.352196976 +0000 UTC))" 2025-08-13T20:01:26.353318078+00:00 stderr F I0813 20:01:26.352607 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.352525505 +0000 UTC))" 2025-08-13T20:01:26.353318078+00:00 stderr F I0813 20:01:26.352745 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.352634238 +0000 UTC))" 2025-08-13T20:01:26.353318078+00:00 stderr F I0813 20:01:26.352866 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 
19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.352755722 +0000 UTC))" 2025-08-13T20:01:26.353318078+00:00 stderr F I0813 20:01:26.352956 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:26.352939397 +0000 UTC))" 2025-08-13T20:01:26.353728070+00:00 stderr F I0813 20:01:26.352981 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:26.352967808 +0000 UTC))" 2025-08-13T20:01:26.353728070+00:00 stderr F I0813 20:01:26.353666 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:26.353648137 +0000 UTC))" 2025-08-13T20:01:26.359825203+00:00 stderr F I0813 20:01:26.359621 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-controller-manager-operator.svc\" [serving] 
validServingFor=[metrics.openshift-kube-controller-manager-operator.svc,metrics.openshift-kube-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:22 +0000 UTC to 2027-08-13 20:00:23 +0000 UTC (now=2025-08-13 20:01:26.359510245 +0000 UTC))" 2025-08-13T20:01:26.361272905+00:00 stderr F I0813 20:01:26.360170 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:29 +0000 UTC to 2026-08-13 18:59:29 +0000 UTC (now=2025-08-13 20:01:26.360149493 +0000 UTC))" 2025-08-13T20:01:29.161403988+00:00 stderr F I0813 20:01:29.161063 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="b677063b32bdea5d0626c22feb3e33cd81922e655466381de19492c7e38d993f", new="048a1a430ed1deee9ccf052d51f855e6e8f2fee2d531560f57befbbfad23cd9d") 2025-08-13T20:01:29.161679096+00:00 stderr F W0813 20:01:29.161655 1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:29.161911722+00:00 stderr F I0813 20:01:29.161888 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="b10e6fe375e1e85006c2f2c6e133de2da450246bee57a078c8f61fda3a384f1b", new="4311dcc9562700df28c73989dc84c69933e91c13e7ee5e5588d7570593276c97") 2025-08-13T20:01:29.163290022+00:00 stderr F I0813 20:01:29.163222 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:29.163387054+00:00 stderr F I0813 20:01:29.163330 1 base_controller.go:172] Shutting down GarbageCollectorWatcherController ... 
2025-08-13T20:01:29.163417905+00:00 stderr F I0813 20:01:29.163352 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:29.163535239+00:00 stderr F I0813 20:01:29.163516 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:29.164066004+00:00 stderr F I0813 20:01:29.164005 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:29.164111315+00:00 stderr F I0813 20:01:29.164014 1 base_controller.go:172] Shutting down SATokenSignerController ... 2025-08-13T20:01:29.164139656+00:00 stderr F I0813 20:01:29.164101 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:01:29.164260049+00:00 stderr F I0813 20:01:29.164165 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:01:29.164260049+00:00 stderr F I0813 20:01:29.164221 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:01:29.164260049+00:00 stderr F I0813 20:01:29.164237 1 base_controller.go:172] Shutting down KubeControllerManagerStaticResources ... 2025-08-13T20:01:29.164692262+00:00 stderr F I0813 20:01:29.164607 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:29.164961759+00:00 stderr F I0813 20:01:29.164820 1 base_controller.go:172] Shutting down RevisionController ... 
2025-08-13T20:01:29.165095503+00:00 stderr F I0813 20:01:29.165074 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:29.165144755+00:00 stderr F I0813 20:01:29.165131 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:29.165466574+00:00 stderr F I0813 20:01:29.165378 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:01:29.165558206+00:00 stderr F I0813 20:01:29.165531 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:29.165771282+00:00 stderr F I0813 20:01:29.165666 1 base_controller.go:114] Shutting down worker of GarbageCollectorWatcherController controller ... 2025-08-13T20:01:29.165917257+00:00 stderr F I0813 20:01:29.165864 1 base_controller.go:104] All GarbageCollectorWatcherController workers have been terminated 2025-08-13T20:01:29.166484343+00:00 stderr F E0813 20:01:29.166374 1 base_controller.go:268] TargetConfigController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166426 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166437 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166451 1 base_controller.go:172] Shutting down WorkerLatencyProfile ... 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166463 1 base_controller.go:114] Shutting down worker of SATokenSignerController controller ... 2025-08-13T20:01:29.166484343+00:00 stderr F I0813 20:01:29.166469 1 base_controller.go:104] All SATokenSignerController workers have been terminated 2025-08-13T20:01:29.166537154+00:00 stderr F I0813 20:01:29.166496 1 base_controller.go:114] Shutting down worker of WorkerLatencyProfile controller ... 
2025-08-13T20:01:29.166537154+00:00 stderr F E0813 20:01:29.166497 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-controller-manager/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client": context canceled, "assets/kube-controller-manager/csr_approver_clusterrole.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/gce/cloud-provider-role.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-controller-manager/gce/cloud-provider-binding.yaml" (string): client rate limiter Wait returned an error: context canceled, client rate limiter Wait returned an error: context canceled] 2025-08-13T20:01:29.166537154+00:00 stderr F I0813 20:01:29.166502 1 base_controller.go:104] All WorkerLatencyProfile workers have been terminated 2025-08-13T20:01:29.166556025+00:00 stderr F I0813 20:01:29.166528 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:01:29.166571335+00:00 stderr F I0813 20:01:29.166560 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 
2025-08-13T20:01:29.166586586+00:00 stderr F I0813 20:01:29.166579 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:01:29.166598846+00:00 stderr F I0813 20:01:29.166587 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:01:29.166598846+00:00 stderr F I0813 20:01:29.166586 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:29.166598846+00:00 stderr F I0813 20:01:29.166593 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:01:29.166719459+00:00 stderr F I0813 20:01:29.166606 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:29.166719459+00:00 stderr F I0813 20:01:29.166687 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:29.166719459+00:00 stderr F I0813 20:01:29.166704 1 controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:01:29.166735550+00:00 stderr F I0813 20:01:29.166721 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 2025-08-13T20:01:29.166735550+00:00 stderr F I0813 20:01:29.166728 1 base_controller.go:104] All BackingResourceController workers have been terminated 2025-08-13T20:01:29.166747810+00:00 stderr F I0813 20:01:29.166733 1 controller_manager.go:54] BackingResourceController controller terminated 2025-08-13T20:01:29.167183443+00:00 stderr F I0813 20:01:29.167058 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 
2025-08-13T20:01:29.167183443+00:00 stderr F I0813 20:01:29.167087 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:01:29.167311166+00:00 stderr F I0813 20:01:29.167260 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:01:29.167311166+00:00 stderr F I0813 20:01:29.167303 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-controller-manager controller ... 2025-08-13T20:01:29.167327837+00:00 stderr F I0813 20:01:29.167315 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:01:29.167327837+00:00 stderr F I0813 20:01:29.167322 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:01:29.167340177+00:00 stderr F I0813 20:01:29.167289 1 base_controller.go:172] Shutting down StatusSyncer_kube-controller-manager ... 2025-08-13T20:01:29.167340177+00:00 stderr F I0813 20:01:29.167331 1 base_controller.go:150] All StatusSyncer_kube-controller-manager post start hooks have been terminated 2025-08-13T20:01:29.167340177+00:00 stderr F I0813 20:01:29.167336 1 base_controller.go:104] All StatusSyncer_kube-controller-manager workers have been terminated 2025-08-13T20:01:29.167354207+00:00 stderr F I0813 20:01:29.167336 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:01:29.167366448+00:00 stderr F I0813 20:01:29.167351 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:29.167378408+00:00 stderr F I0813 20:01:29.167363 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:01:29.167378408+00:00 stderr F I0813 20:01:29.167369 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:29.167378408+00:00 stderr F I0813 20:01:29.167374 1 controller_manager.go:54] LoggingSyncer controller terminated 2025-08-13T20:01:29.167391069+00:00 stderr F I0813 20:01:29.167381 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:29.167402949+00:00 stderr F I0813 20:01:29.167393 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:29.167402949+00:00 stderr F I0813 20:01:29.167393 1 builder.go:329] server exited 2025-08-13T20:01:29.167402949+00:00 stderr F I0813 20:01:29.167394 1 base_controller.go:114] Shutting down worker of GuardController controller ... 2025-08-13T20:01:29.167416179+00:00 stderr F I0813 20:01:29.167408 1 base_controller.go:104] All GuardController workers have been terminated 2025-08-13T20:01:29.167428310+00:00 stderr F I0813 20:01:29.167413 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:01:29.167428310+00:00 stderr F I0813 20:01:29.167418 1 controller_manager.go:54] GuardController controller terminated 2025-08-13T20:01:29.167441000+00:00 stderr F I0813 20:01:29.167429 1 base_controller.go:114] Shutting down worker of KubeControllerManagerStaticResources controller ... 2025-08-13T20:01:29.167441000+00:00 stderr F I0813 20:01:29.167437 1 base_controller.go:104] All KubeControllerManagerStaticResources workers have been terminated 2025-08-13T20:01:29.167456530+00:00 stderr F I0813 20:01:29.167447 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:01:29.167498632+00:00 stderr F I0813 20:01:29.167478 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 
2025-08-13T20:01:29.167556643+00:00 stderr F I0813 20:01:29.167541 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:01:29.167690917+00:00 stderr F I0813 20:01:29.167656 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:01:29.167728938+00:00 stderr F I0813 20:01:29.167658 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:01:29.167832991+00:00 stderr F I0813 20:01:29.167726 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:01:29.167897793+00:00 stderr F I0813 20:01:29.167583 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:01:29.167976565+00:00 stderr F I0813 20:01:29.167962 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:01:29.168040417+00:00 stderr F I0813 20:01:29.168007 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:01:29.168077568+00:00 stderr F I0813 20:01:29.168065 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:01:29.168108789+00:00 stderr F I0813 20:01:29.168097 1 controller_manager.go:54] NodeController controller terminated 2025-08-13T20:01:29.168180191+00:00 stderr F I0813 20:01:29.168140 1 base_controller.go:114] Shutting down worker of PruneController controller ... 
2025-08-13T20:01:29.168180191+00:00 stderr F I0813 20:01:29.168170 1 base_controller.go:104] All PruneController workers have been terminated 2025-08-13T20:01:29.168193111+00:00 stderr F I0813 20:01:29.168177 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:01:29.168238023+00:00 stderr F E0813 20:01:29.168082 1 base_controller.go:268] StaticPodStateController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:29.168337335+00:00 stderr F I0813 20:01:29.168320 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:01:29.168393597+00:00 stderr F I0813 20:01:29.168376 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:01:29.168426248+00:00 stderr F I0813 20:01:29.168414 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:01:29.168454419+00:00 stderr F I0813 20:01:29.168227 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:29.168508740+00:00 stderr F I0813 20:01:29.168495 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:29.168556802+00:00 stderr F I0813 20:01:29.168247 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:01:29.168592053+00:00 stderr F I0813 20:01:29.168580 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:29.169355535+00:00 stderr F I0813 20:01:29.169299 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 10 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc": context canceled 2025-08-13T20:01:29.169656083+00:00 stderr F E0813 20:01:29.169600 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": context canceled 2025-08-13T20:01:29.170080875+00:00 stderr F I0813 20:01:29.170025 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 2025-08-13T20:01:29.170080875+00:00 stderr F I0813 20:01:29.170055 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:01:29.170080875+00:00 stderr F I0813 20:01:29.170064 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:01:29.175724656+00:00 stderr F E0813 20:01:29.174684 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc": context canceled 2025-08-13T20:01:29.175724656+00:00 stderr F I0813 20:01:29.174870 1 base_controller.go:114] Shutting down worker of InstallerController controller ... 
2025-08-13T20:01:29.175724656+00:00 stderr F I0813 20:01:29.174922 1 base_controller.go:104] All InstallerController workers have been terminated 2025-08-13T20:01:29.175724656+00:00 stderr F I0813 20:01:29.174962 1 controller_manager.go:54] InstallerController controller terminated 2025-08-13T20:01:30.877494640+00:00 stderr F W0813 20:01:30.875423 1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator/2.log
2025-12-13T00:13:15.489161912+00:00 stderr F I1213 00:13:15.487356 1 cmd.go:240] Using service-serving-cert provided certificates 2025-12-13T00:13:15.489161912+00:00 stderr F I1213 00:13:15.487638 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:13:15.489161912+00:00 stderr F I1213 00:13:15.488628 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:15.523686722+00:00 stderr F I1213 00:13:15.513603 1 builder.go:298] kube-controller-manager-operator version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-12-13T00:13:16.116967098+00:00 stderr F I1213 00:13:16.116032 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:16.116967098+00:00 stderr F W1213 00:13:16.116451 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-12-13T00:13:16.116967098+00:00 stderr F W1213 00:13:16.116458 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:16.122968409+00:00 stderr F I1213 00:13:16.122232 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2025-12-13T00:13:16.122968409+00:00 stderr F I1213 00:13:16.122530 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock... 2025-12-13T00:13:16.128832466+00:00 stderr F I1213 00:13:16.128264 1 secure_serving.go:213] Serving securely on [::]:8443 2025-12-13T00:13:16.128832466+00:00 stderr F I1213 00:13:16.128483 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:16.128832466+00:00 stderr F I1213 00:13:16.128536 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:16.128832466+00:00 stderr F I1213 00:13:16.128619 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:16.128832466+00:00 stderr F I1213 00:13:16.128689 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:16.128832466+00:00 stderr F I1213 00:13:16.128717 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:16.128832466+00:00 stderr F I1213 00:13:16.128744 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:16.128832466+00:00 stderr F I1213 00:13:16.128762 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:16.129158308+00:00 stderr F I1213 00:13:16.128732 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:16.232420387+00:00 stderr F I1213 00:13:16.231450 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:13:16.232420387+00:00 stderr F I1213 00:13:16.231926 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:16.235048845+00:00 stderr F I1213 00:13:16.232876 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:19:27.712712496+00:00 stderr F I1213 00:19:27.711991 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock 2025-12-13T00:19:27.713007624+00:00 stderr F I1213 00:19:27.712680 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator-lock", UID:"d058fe3a-98b8-4ef3-8f29-e5f93bb21bb1", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42243", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-controller-manager-operator-6f6cb54958-rbddb_8c63a158-d040-4b6d-abc4-5618a2e4a62b became leader 2025-12-13T00:19:27.714120286+00:00 stderr F I1213 00:19:27.714102 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:19:27.719184405+00:00 stderr F I1213 00:19:27.719125 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to 
featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-12-13T00:19:27.719639667+00:00 stderr F I1213 00:19:27.719516 1 starter.go:88] FeatureGates initialized: 
knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-12-13T00:19:27.737538781+00:00 stderr F I1213 00:19:27.737488 1 base_controller.go:67] Waiting for caches to sync for GarbageCollectorWatcherController
2025-12-13T00:19:27.737709626+00:00 stderr F I1213 00:19:27.737683 1 base_controller.go:67] Waiting for caches to sync for SATokenSignerController
2025-12-13T00:19:27.737769097+00:00 stderr F I1213 00:19:27.737735 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController
2025-12-13T00:19:27.737781038+00:00 stderr F I1213 00:19:27.737765 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile
2025-12-13T00:19:27.737816999+00:00 stderr F I1213 00:19:27.737792 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-12-13T00:19:27.737850640+00:00 stderr F I1213 00:19:27.737828 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-12-13T00:19:27.738523988+00:00 stderr F I1213 00:19:27.737923 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-controller-manager
2025-12-13T00:19:27.738523988+00:00 stderr F I1213 00:19:27.738032 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources
2025-12-13T00:19:27.738888838+00:00 stderr F I1213 00:19:27.737683 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:27.739002241+00:00 stderr F I1213 00:19:27.738954 1 base_controller.go:67] Waiting for caches to sync for PruneController
2025-12-13T00:19:27.739042022+00:00 stderr F I1213 00:19:27.739017 1 base_controller.go:67] Waiting for caches to sync for NodeController
2025-12-13T00:19:27.739072623+00:00 stderr F I1213 00:19:27.739062 1 base_controller.go:67] Waiting for caches to sync for RevisionController
2025-12-13T00:19:27.739215277+00:00 stderr F I1213 00:19:27.739174 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController
2025-12-13T00:19:27.739215277+00:00 stderr F I1213 00:19:27.739188 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController
2025-12-13T00:19:27.739215277+00:00 stderr F I1213 00:19:27.739200 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController
2025-12-13T00:19:27.741712106+00:00 stderr F I1213 00:19:27.739507 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-12-13T00:19:27.741712106+00:00 stderr F I1213 00:19:27.739524 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-12-13T00:19:27.741712106+00:00 stderr F I1213 00:19:27.739530 1 base_controller.go:67] Waiting for caches to sync for InstallerController
2025-12-13T00:19:27.741712106+00:00 stderr F I1213 00:19:27.739532 1 base_controller.go:67] Waiting for caches to sync for GuardController
2025-12-13T00:19:27.747058793+00:00 stderr F I1213 00:19:27.746312 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController
2025-12-13T00:19:27.839240535+00:00 stderr F I1213 00:19:27.838759 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-controller-manager
2025-12-13T00:19:27.839240535+00:00 stderr F I1213 00:19:27.838794 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-controller-manager controller ...
2025-12-13T00:19:27.839240535+00:00 stderr F I1213 00:19:27.838812 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile
2025-12-13T00:19:27.839240535+00:00 stderr F I1213 00:19:27.838819 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ...
2025-12-13T00:19:27.839507263+00:00 stderr F I1213 00:19:27.839483 1 base_controller.go:73] Caches are synced for InstallerStateController
2025-12-13T00:19:27.839507263+00:00 stderr F I1213 00:19:27.839499 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ...
2025-12-13T00:19:27.839582635+00:00 stderr F I1213 00:19:27.839561 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-12-13T00:19:27.839622566+00:00 stderr F I1213 00:19:27.839593 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-12-13T00:19:27.839622566+00:00 stderr F I1213 00:19:27.839618 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-12-13T00:19:27.839648266+00:00 stderr F I1213 00:19:27.839604 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-12-13T00:19:27.839973495+00:00 stderr F I1213 00:19:27.839581 1 base_controller.go:73] Caches are synced for NodeController
2025-12-13T00:19:27.840020776+00:00 stderr F I1213 00:19:27.839997 1 base_controller.go:73] Caches are synced for GuardController
2025-12-13T00:19:27.840020776+00:00 stderr F I1213 00:19:27.840008 1 base_controller.go:110] Starting #1 worker of GuardController controller ...
2025-12-13T00:19:27.840046277+00:00 stderr F I1213 00:19:27.840002 1 base_controller.go:110] Starting #1 worker of NodeController controller ...
2025-12-13T00:19:27.840259544+00:00 stderr F I1213 00:19:27.839962 1 base_controller.go:73] Caches are synced for MissingStaticPodController
2025-12-13T00:19:27.840259544+00:00 stderr F I1213 00:19:27.840248 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ...
2025-12-13T00:19:27.840272914+00:00 stderr F I1213 00:19:27.839968 1 base_controller.go:73] Caches are synced for InstallerController
2025-12-13T00:19:27.840281205+00:00 stderr F I1213 00:19:27.840271 1 base_controller.go:110] Starting #1 worker of InstallerController controller ...
2025-12-13T00:19:27.840440699+00:00 stderr F I1213 00:19:27.839987 1 base_controller.go:73] Caches are synced for PruneController
2025-12-13T00:19:27.840440699+00:00 stderr F I1213 00:19:27.840430 1 base_controller.go:110] Starting #1 worker of PruneController controller ...
2025-12-13T00:19:27.840921102+00:00 stderr F I1213 00:19:27.840888 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11
2025-12-13T00:19:27.842039682+00:00 stderr F I1213 00:19:27.842007 1 base_controller.go:73] Caches are synced for StaticPodStateController
2025-12-13T00:19:27.842039682+00:00 stderr F I1213 00:19:27.842019 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ...
2025-12-13T00:19:27.847217395+00:00 stderr F I1213 00:19:27.847164 1 base_controller.go:73] Caches are synced for BackingResourceController
2025-12-13T00:19:27.847217395+00:00 stderr F I1213 00:19:27.847184 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ...
2025-12-13T00:19:27.861447288+00:00 stderr F E1213 00:19:27.861397 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11]
2025-12-13T00:19:27.864138362+00:00 stderr F E1213 00:19:27.864088 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11]
2025-12-13T00:19:27.864174303+00:00 stderr F I1213 00:19:27.864129 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11
2025-12-13T00:19:27.867507685+00:00 stderr F I1213 00:19:27.867464 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11
2025-12-13T00:19:27.867649018+00:00 stderr F E1213 00:19:27.867627 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11]
2025-12-13T00:19:27.871534716+00:00 stderr F I1213 00:19:27.871454 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-12-13T00:19:27.877361697+00:00 stderr F I1213 00:19:27.876770 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11]"
2025-12-13T00:19:27.888593237+00:00 stderr F E1213 00:19:27.888502 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11]
2025-12-13T00:19:27.888593237+00:00 stderr F I1213 00:19:27.888545 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11
2025-12-13T00:19:27.930014719+00:00 stderr F I1213 00:19:27.929918 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11
2025-12-13T00:19:27.930129882+00:00 stderr F E1213 00:19:27.930110 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11]
2025-12-13T00:19:27.935538321+00:00 stderr F I1213 00:19:27.935499 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:28.011170447+00:00 stderr F E1213 00:19:28.011124 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11]
2025-12-13T00:19:28.011254839+00:00 stderr F I1213 00:19:28.011188 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11
2025-12-13T00:19:28.135444133+00:00 stderr F I1213 00:19:28.135349 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:28.173372378+00:00 stderr F E1213 00:19:28.173253 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11]
2025-12-13T00:19:28.173924314+00:00 stderr F I1213 00:19:28.173873 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11
2025-12-13T00:19:28.336068956+00:00 stderr F I1213 00:19:28.335974 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:28.494462703+00:00 stderr F I1213 00:19:28.494379 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11
2025-12-13T00:19:28.494628118+00:00 stderr F E1213 00:19:28.494590 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11]
2025-12-13T00:19:28.537289144+00:00 stderr F I1213 00:19:28.537219 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:28.540805092+00:00 stderr F I1213 00:19:28.540647 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:19:28.540838733+00:00 stderr F I1213 00:19:28.540800 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-12-13T00:19:28.541989864+00:00 stderr F I1213 00:19:28.541883 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TargetUpdateRequired' "csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: past its refresh time 2025-06-27 13:05:19 +0000 UTC
2025-12-13T00:19:28.743362667+00:00 stderr F I1213 00:19:28.743277 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:28.840089914+00:00 stderr F I1213 00:19:28.840017 1 base_controller.go:73] Caches are synced for RevisionController
2025-12-13T00:19:28.840089914+00:00 stderr F I1213 00:19:28.840058 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-12-13T00:19:28.934511517+00:00 stderr F I1213 00:19:28.934446 1 request.go:697] Waited for 1.194827597s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets?limit=500&resourceVersion=0
2025-12-13T00:19:28.942033625+00:00 stderr F I1213 00:19:28.941971 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:29.159117301+00:00 stderr F I1213 00:19:29.159049 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:29.237814261+00:00 stderr F I1213 00:19:29.237754 1 base_controller.go:73] Caches are synced for GarbageCollectorWatcherController
2025-12-13T00:19:29.237814261+00:00 stderr F I1213 00:19:29.237781 1 base_controller.go:110] Starting #1 worker of GarbageCollectorWatcherController controller ...
2025-12-13T00:19:29.238028048+00:00 stderr F I1213 00:19:29.237975 1 base_controller.go:73] Caches are synced for ConfigObserver
2025-12-13T00:19:29.238044528+00:00 stderr F I1213 00:19:29.238026 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-12-13T00:19:29.239260731+00:00 stderr F I1213 00:19:29.239228 1 base_controller.go:73] Caches are synced for SATokenSignerController
2025-12-13T00:19:29.239260731+00:00 stderr F I1213 00:19:29.239239 1 base_controller.go:110] Starting #1 worker of SATokenSignerController controller ...
2025-12-13T00:19:29.336827922+00:00 stderr F I1213 00:19:29.336785 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:29.537174826+00:00 stderr F I1213 00:19:29.537113 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:29.737528611+00:00 stderr F I1213 00:19:29.737442 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:29.737917442+00:00 stderr F I1213 00:19:29.737875 1 base_controller.go:73] Caches are synced for TargetConfigController
2025-12-13T00:19:29.737917442+00:00 stderr F I1213 00:19:29.737895 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ...
2025-12-13T00:19:29.738705833+00:00 stderr F I1213 00:19:29.738669 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources
2025-12-13T00:19:29.738705833+00:00 stderr F I1213 00:19:29.738691 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ...
2025-12-13T00:19:29.938410541+00:00 stderr F I1213 00:19:29.938310 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:30.038759458+00:00 stderr F I1213 00:19:30.038675 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-12-13T00:19:30.038759458+00:00 stderr F I1213 00:19:30.038702 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-12-13T00:19:30.134194669+00:00 stderr F I1213 00:19:30.134112 1 request.go:697] Waited for 2.29445873s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller
2025-12-13T00:19:30.940833673+00:00 stderr F I1213 00:19:30.940703 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/csr-signer -n openshift-kube-controller-manager-operator because it changed
2025-12-13T00:19:31.134518273+00:00 stderr F I1213 00:19:31.134456 1 request.go:697] Waited for 1.997045168s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc
2025-12-13T00:19:32.134782656+00:00 stderr F I1213 00:19:32.134728 1 request.go:697] Waited for 1.596637857s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc
2025-12-13T00:19:32.552835433+00:00 stderr F I1213 00:19:32.552770 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-12-13T00:19:32.562432548+00:00 stderr F I1213 00:19:32.562283 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: csr-signer,kube-controller-manager-client-cert-key, secrets: localhost-recovery-client-token-11,service-account-private-key-11]" to "NodeControllerDegraded: All master nodes are ready"
2025-12-13T00:19:32.738271617+00:00 stderr F I1213 00:19:32.738203 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SATokenSignerControllerOK' found expected kube-apiserver endpoints
2025-12-13T00:19:33.135568802+00:00 stderr F I1213 00:19:33.135080 1 request.go:697] Waited for 1.397436865s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/serving-cert 2025-12-13T00:19:34.334371179+00:00 stderr F I1213 00:19:34.334318 1 request.go:697] Waited for 1.19491854s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller-config 2025-12-13T00:19:36.138385073+00:00 stderr F I1213 00:19:36.138309 1 core.go:359] ConfigMap "openshift-kube-controller-manager-operator/csr-signer-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END 
CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIcR/OLLSLArAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMjEzMDAxOTI3WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjU1ODUx\nNjgwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6WTXoQu2VKIz4bNU1\nroi5aLdtMLOHOZPTZWP8rbDAlFcmkwxZoTqRBJhbye3WwmRfz4nUIyeSNPh+GNYm\nM3yR3+98frga5YLx+Y7E6pK1+n2PrVK8iBx8xjvJfyGPBZeBcyKRYMicZIy3DxOU\n0c3K3fA49fWMYgj/44tPa+SaVGstoA6rEpnn5L+Ao8GVbAIlf6rlECazCWeC0O8p\n6iGgwvob77fI+rKk5iVt+u3zLjhaUh8Dosowvj2hHqOBPaBWcIWRSYHUqrCyFEuW\n1wwiy0U3s167Pj8TTyzuCGIVzF8pVVV5fT2Rw1KnQet2cad7AWtraQA1Zb2FVnYU\nmn0rAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBQm7WH/CnvohybNOmxrZVgtqfwGzjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAFzM2Noe+Dr8FGBxbWBZ6\nu08l4PUUri2k/PGpYsn6GFPeuXXgApMaRZFsysJxY2/cLrh1ssXiRXqeAdYQ5IKP\nS6pcrmNlrqjClRmHKK4uPJMFin7z9CysKZ3Z2GFpUCa3+Fek9rFLxSnYMLiRYEB3\nYm1ZM0G1VlsfnpW9huEl6i2edw9bCLUiojOxDIDadH9sCEFaruOmn1sIuee8V6yQ\n7xjq2IN6urfuk+L7G2c/8rMdhiGN0n6+z5vPugZWk2WXXbz7ZAEe6VUNTfP6u+zh\nCrEazlXXfIKU0XA1yVMCiicgJZ9QVWhkFJQS5nEMhoWk+tfY1tsvaN3lglbdcX6C\nSQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"}} 2025-12-13T00:19:36.139073212+00:00 stderr F I1213 00:19:36.139014 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator: 2025-12-13T00:19:36.139073212+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-12-13T00:19:36.738894091+00:00 stderr F I1213 00:19:36.738827 1 core.go:359] ConfigMap "openshift-kube-controller-manager-operator/csr-controller-ca" changes: 
{"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIcR/OLLSLArAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMjEzMDAxOTI3WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjU1ODUx\nNjgwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6WTXoQu2VKIz4bNU1\nroi5aLdtMLOHOZPTZWP8rbDAlFcmkwxZoTqRBJhbye3WwmRfz4nUIyeSNPh+GNYm\nM3yR3+98frga5YLx+Y7E6pK1+n2PrVK8iBx8xjvJfyGPBZeBcyKRYMicZIy3DxOU\n0c3K3fA49fWMYgj/44tPa+SaVGstoA6rEpnn5L+Ao8GVbAIlf6rlECazCWeC0O8p\n6iGgwvob77fI+rKk5iVt+u3zLjhaUh8Dosowvj2hHqOBPaBWcIWRSYHUqrCyFEuW\n1wwiy0U3s167Pj8TTyzuCGIVzF8pVVV5fT2Rw1KnQet2cad7AWtraQA1Zb2FVnYU\nmn0rAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBQm7WH/CnvohybNOmxrZVgtqfwGzjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAFzM2Noe+Dr8FGBxbWBZ6\nu08l4PUUri2k/PGpYsn6GFPeuXXgApMaRZFsysJxY2/cLrh1ssXiRXqeAdYQ5IKP\nS6pcrmNlrqjClRmHKK4uPJMFin7z9CysKZ3Z2GFpUCa3+Fek9rFLxSnYMLiRYEB3\nYm1ZM0G1VlsfnpW9huEl6i2edw9bCLUiojOxDIDadH9sCEFaruOmn1sIuee8V6yQ\n7xjq2IN6urfuk+L7G2c/8rMdhiGN0n6+z5vPugZWk2WXXbz7ZAEe6VUNTfP6u+zh\nCrEazlXXfIKU0XA1yVMCiicgJZ9QVWhkFJQS5nEMhoWk+tfY1tsvaN3lglbdcX6C\nSQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-12-13T00:19:36.739244591+00:00 stderr F I1213 00:19:36.739212 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator: 2025-12-13T00:19:36.739244591+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-12-13T00:19:37.137140813+00:00 stderr F I1213 00:19:37.136816 1 
core.go:359] ConfigMap "openshift-config-managed/csr-controller-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIcR/OLLSLArAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMjEzMDAxOTI3WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjU1ODUx\nNjgwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6WTXoQu2VKIz4bNU1\nroi5aLdtMLOHOZPTZWP8rbDAlFcmkwxZoTqRBJhbye3WwmRfz4nUIyeSNPh+GNYm\nM3yR3+98frga5YLx+Y7E6pK1+n2PrVK8iBx8xjvJfyGPBZeBcyKRYMicZIy3DxOU\n0c3K3fA49fWMYgj/44tPa+SaVGstoA6rEpnn5L+Ao8GVbAIlf6rlECazCWeC0O8p\n6iGgwvob77fI+rKk5iVt+u3zLjhaUh8Dosowvj2hHqOBPaBWcIWRSYHUqrCyFEuW\n1wwiy0U3s167Pj8TTyzuCGIVzF8pVVV5fT2Rw1KnQet2cad7AWtraQA1Zb2FVnYU\nmn0rAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBQm7WH/CnvohybNOmxrZVgtqfwGzjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAFzM2Noe+Dr8FGBxbWBZ6\nu08l4PUUri2k/PGpYsn6GFPeuXXgApMaRZFsysJxY2/cLrh1ssXiRXqeAdYQ5IKP\nS6pcrmNlrqjClRmHKK4uPJMFin7z9CysKZ3Z2GFpUCa3+Fek9rFLxSnYMLiRYEB3\nYm1ZM0G1VlsfnpW9huEl6i2edw9bCLUiojOxDIDadH9sCEFaruOmn1sIuee8V6yQ\n7xjq2IN6urfuk+L7G2c/8rMdhiGN0n6+z5vPugZWk2WXXbz7ZAEe6VUNTfP6u+zh\nCrEazlXXfIKU0XA1yVMCiicgJZ9QVWhkFJQS5nEMhoWk+tfY1tsvaN3lglbdcX6C\nSQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:13Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-controller-manager-operator","operation":"Update","time":"2025-12-13T00:19:36Z"}],"resourceVersion":null,"uid":"4aabbce1-72f4-478a-b382-9ed7c988ad76"}} 2025-12-13T00:19:37.139085377+00:00 stderr F I1213 00:19:37.138568 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapUpdateFailed' Failed to update ConfigMap/csr-controller-ca -n openshift-config-managed: Operation cannot be fulfilled on configmaps "csr-controller-ca": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:19:37.155056027+00:00 stderr F I1213 00:19:37.154995 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:19:37.163484670+00:00 stderr F I1213 00:19:37.163436 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to 
"NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again" 2025-12-13T00:19:37.172222571+00:00 stderr F I1213 00:19:37.172162 1 status_controller.go:218] clusteroperator/kube-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:54Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:50Z","message":"NodeInstallerProgressing: 1 node is at revision 11","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:53:43Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:07Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:19:37.179109730+00:00 stderr F I1213 00:19:37.179056 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"csr-controller-ca\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 
00:19:37.569190 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.569114604 +0000 UTC))" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 00:19:37.569311 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.569218977 +0000 UTC))" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 00:19:37.569370 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.569323261 +0000 UTC))" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 00:19:37.569402 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.569378172 +0000 UTC))" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 00:19:37.569457 1 tlsconfig.go:178] 
"Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.569409983 +0000 UTC))" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 00:19:37.569487 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.569465145 +0000 UTC))" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 00:19:37.569515 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.569494315 +0000 UTC))" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 00:19:37.569565 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.569522226 +0000 UTC))" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 00:19:37.569597 1 
tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.569571928 +0000 UTC))" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 00:19:37.569646 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.569607779 +0000 UTC))" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 00:19:37.569678 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.56965587 +0000 UTC))" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 00:19:37.569719 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.569689401 +0000 UTC))" 2025-12-13T00:19:37.570981686+00:00 stderr F I1213 00:19:37.570372 
1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-controller-manager-operator.svc,metrics.openshift-kube-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:22 +0000 UTC to 2027-08-13 20:00:23 +0000 UTC (now=2025-12-13 00:19:37.570337728 +0000 UTC))" 2025-12-13T00:19:37.571115390+00:00 stderr F I1213 00:19:37.571057 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584795\" (2025-12-12 23:13:15 +0000 UTC to 2026-12-12 23:13:15 +0000 UTC (now=2025-12-13 00:19:37.571027867 +0000 UTC))" 2025-12-13T00:19:38.334784378+00:00 stderr F I1213 00:19:38.334687 1 request.go:697] Waited for 1.179651518s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc 2025-12-13T00:19:39.335048369+00:00 stderr F I1213 00:19:39.334977 1 request.go:697] Waited for 1.19639555s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-11-crc 2025-12-13T00:19:39.943174818+00:00 stderr P I1213 00:19:39.943009 1 core.go:359] ConfigMap "openshift-kube-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIcR/OLLSLArAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMjEzMDAxOTI3WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjU1ODUx\nNjgwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6WTXoQu2VKIz4bNU1\nroi5aLdtMLOHOZPTZWP8rbDAlFcmkwxZoTqRBJhbye3WwmRfz4nUIyeSNPh+GNYm\nM3yR3+98frga5YLx+Y7E6pK1+n2PrVK8iBx8xjvJfyGPBZeBcyKRYMicZIy3DxOU\n0c3K3fA49fWMYgj/44tPa+SaVGstoA6rEpnn5L+Ao8GVbAIlf6rlECazCWeC0O8p\n6iGgwvob77fI+rKk5iVt+u3zLjhaUh8Dosowvj2hHqOBPaBWcIWRSYHUqrCyFEuW\n1wwiy0U3s167Pj8TTyzuCGIVzF8pVVV5fT2Rw1KnQet2cad7AWtraQA1Zb2FVnYU\nmn0rAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBQm7WH/CnvohybNOmxrZVgtqfwGzjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAFzM2Noe+Dr8FGBxbWBZ6\nu08l4PUUri2k/PGpYsn6GFPeuXXgApMaRZFsysJxY2/cLrh1ssXiRXqeAdYQ5IKP\nS6pcrmNlrqjClRmHKK4uPJMFin7z9CysKZ3Z2GFpUCa3+Fek9rFLxSnYMLiRYEB3\nYm1ZM0G1VlsfnpW9huEl6i2edw9bCLUiojOxDIDadH9sCEFaruOmn1sIuee8V6yQ\n7xjq2IN6urfuk+L7G2c/8rMdhiGN0n6+z5vPugZWk2WXXbz7ZAEe6VUNTfP6u+zh\nCrEazlXXfIKU0XA1yVMCiicgJZ9QVWhkFJQS5nEMhoWk+tfY1tsvaN3lglbdcX6C\nSQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGq 2025-12-13T00:19:39.943278101+00:00 stderr F 
jsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFc
mj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:57Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-12-13T00:19:38Z"}],"resourceVersion":null,"uid":"56468a41-d560-4106-974c-24e97afc9e77"}} 2025-12-13T00:19:39.943519108+00:00 stderr F I1213 00:19:39.943437 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8a9ccf98-e60f-4580-94d2-1560cf66cd74", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-controller-manager: 2025-12-13T00:19:39.943519108+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-12-13T00:21:45.425000796+00:00 stderr F I1213 00:21:45.424092 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:49.813999501+00:00 stderr F I1213 00:21:49.813473 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:51.962759819+00:00 stderr F I1213 00:21:51.962674 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:52.616628397+00:00 stderr F I1213 00:21:52.616554 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:53.431831695+00:00 stderr F I1213 00:21:53.431314 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:54.440796630+00:00 stderr F I1213 00:21:54.440723 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000023400000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015117130646033051 5ustar zuulzuul././@LongLink0000644000000000000000000000025300000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000755000175000017500000000000015117130654033050 5ustar zuulzuul././@LongLink0000644000000000000000000000026000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserv0000644000175000017500000114167615117130646033073 0ustar zuulzuul2025-12-13T00:20:51.943132456+00:00 stdout F flock: getting lock took 0.000009 seconds 2025-12-13T00:20:51.943378084+00:00 stdout F Copying system trust bundle ... 2025-12-13T00:20:51.965452899+00:00 stderr F I1213 00:20:51.965309 1 loader.go:395] Config loaded from file: /etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig 2025-12-13T00:20:51.966181299+00:00 stderr F Copying termination logs to "/var/log/kube-apiserver/termination.log" 2025-12-13T00:20:51.966292082+00:00 stderr F I1213 00:20:51.966265 1 main.go:161] Touching termination lock file "/var/log/kube-apiserver/.terminating" 2025-12-13T00:20:51.967845363+00:00 stderr F I1213 00:20:51.967521 1 main.go:219] Launching sub-process "/usr/bin/hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=192.168.126.11 -v=2 --permit-address-sharing" 2025-12-13T00:20:52.058651494+00:00 stderr F Flag --openshift-config has been deprecated, to be removed 2025-12-13T00:20:52.058972593+00:00 stderr F I1213 00:20:52.058626 18 flags.go:64] FLAG: --admission-control="[]" 2025-12-13T00:20:52.058972593+00:00 stderr F I1213 00:20:52.058894 18 flags.go:64] FLAG: 
--admission-control-config-file="" 2025-12-13T00:20:52.058972593+00:00 stderr F I1213 00:20:52.058900 18 flags.go:64] FLAG: --advertise-address="192.168.126.11" 2025-12-13T00:20:52.058972593+00:00 stderr F I1213 00:20:52.058906 18 flags.go:64] FLAG: --aggregator-reject-forwarding-redirect="true" 2025-12-13T00:20:52.059044495+00:00 stderr F I1213 00:20:52.058912 18 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-12-13T00:20:52.059044495+00:00 stderr F I1213 00:20:52.058989 18 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-12-13T00:20:52.059044495+00:00 stderr F I1213 00:20:52.058993 18 flags.go:64] FLAG: --allow-privileged="false" 2025-12-13T00:20:52.059044495+00:00 stderr F I1213 00:20:52.058997 18 flags.go:64] FLAG: --anonymous-auth="true" 2025-12-13T00:20:52.059044495+00:00 stderr F I1213 00:20:52.059001 18 flags.go:64] FLAG: --api-audiences="[]" 2025-12-13T00:20:52.059044495+00:00 stderr F I1213 00:20:52.059008 18 flags.go:64] FLAG: --apiserver-count="1" 2025-12-13T00:20:52.059044495+00:00 stderr F I1213 00:20:52.059014 18 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000" 2025-12-13T00:20:52.059044495+00:00 stderr F I1213 00:20:52.059018 18 flags.go:64] FLAG: --audit-log-batch-max-size="1" 2025-12-13T00:20:52.059044495+00:00 stderr F I1213 00:20:52.059022 18 flags.go:64] FLAG: --audit-log-batch-max-wait="0s" 2025-12-13T00:20:52.059044495+00:00 stderr F I1213 00:20:52.059027 18 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0" 2025-12-13T00:20:52.059044495+00:00 stderr F I1213 00:20:52.059032 18 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false" 2025-12-13T00:20:52.059044495+00:00 stderr F I1213 00:20:52.059037 18 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0" 2025-12-13T00:20:52.059059915+00:00 stderr F I1213 00:20:52.059043 18 flags.go:64] FLAG: --audit-log-compress="false" 2025-12-13T00:20:52.059059915+00:00 stderr F I1213 00:20:52.059048 18 flags.go:64] FLAG: --audit-log-format="json" 
2025-12-13T00:20:52.059059915+00:00 stderr F I1213 00:20:52.059052 18 flags.go:64] FLAG: --audit-log-maxage="0" 2025-12-13T00:20:52.059069715+00:00 stderr F I1213 00:20:52.059057 18 flags.go:64] FLAG: --audit-log-maxbackup="0" 2025-12-13T00:20:52.059069715+00:00 stderr F I1213 00:20:52.059061 18 flags.go:64] FLAG: --audit-log-maxsize="0" 2025-12-13T00:20:52.059078576+00:00 stderr F I1213 00:20:52.059066 18 flags.go:64] FLAG: --audit-log-mode="blocking" 2025-12-13T00:20:52.059078576+00:00 stderr F I1213 00:20:52.059071 18 flags.go:64] FLAG: --audit-log-path="" 2025-12-13T00:20:52.059087006+00:00 stderr F I1213 00:20:52.059075 18 flags.go:64] FLAG: --audit-log-truncate-enabled="false" 2025-12-13T00:20:52.059130947+00:00 stderr F I1213 00:20:52.059080 18 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760" 2025-12-13T00:20:52.059130947+00:00 stderr F I1213 00:20:52.059095 18 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400" 2025-12-13T00:20:52.059130947+00:00 stderr F I1213 00:20:52.059099 18 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1" 2025-12-13T00:20:52.059130947+00:00 stderr F I1213 00:20:52.059103 18 flags.go:64] FLAG: --audit-policy-file="" 2025-12-13T00:20:52.059130947+00:00 stderr F I1213 00:20:52.059107 18 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000" 2025-12-13T00:20:52.059130947+00:00 stderr F I1213 00:20:52.059112 18 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s" 2025-12-13T00:20:52.059130947+00:00 stderr F I1213 00:20:52.059116 18 flags.go:64] FLAG: --audit-webhook-batch-max-size="400" 2025-12-13T00:20:52.059130947+00:00 stderr F I1213 00:20:52.059120 18 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s" 2025-12-13T00:20:52.059130947+00:00 stderr F I1213 00:20:52.059124 18 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15" 2025-12-13T00:20:52.059159088+00:00 stderr F I1213 00:20:52.059128 18 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true" 
2025-12-13T00:20:52.059159088+00:00 stderr F I1213 00:20:52.059133 18 flags.go:64] FLAG: --audit-webhook-batch-throttle-qps="10" 2025-12-13T00:20:52.059159088+00:00 stderr F I1213 00:20:52.059138 18 flags.go:64] FLAG: --audit-webhook-config-file="" 2025-12-13T00:20:52.059159088+00:00 stderr F I1213 00:20:52.059142 18 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s" 2025-12-13T00:20:52.059159088+00:00 stderr F I1213 00:20:52.059147 18 flags.go:64] FLAG: --audit-webhook-mode="batch" 2025-12-13T00:20:52.059159088+00:00 stderr F I1213 00:20:52.059151 18 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false" 2025-12-13T00:20:52.059168748+00:00 stderr F I1213 00:20:52.059156 18 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760" 2025-12-13T00:20:52.059168748+00:00 stderr F I1213 00:20:52.059160 18 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400" 2025-12-13T00:20:52.059176158+00:00 stderr F I1213 00:20:52.059165 18 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1" 2025-12-13T00:20:52.059176158+00:00 stderr F I1213 00:20:52.059169 18 flags.go:64] FLAG: --authentication-config="" 2025-12-13T00:20:52.059184578+00:00 stderr F I1213 00:20:52.059173 18 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" 2025-12-13T00:20:52.059184578+00:00 stderr F I1213 00:20:52.059178 18 flags.go:64] FLAG: --authentication-token-webhook-config-file="" 2025-12-13T00:20:52.059202779+00:00 stderr F I1213 00:20:52.059182 18 flags.go:64] FLAG: --authentication-token-webhook-version="v1beta1" 2025-12-13T00:20:52.059202779+00:00 stderr F I1213 00:20:52.059186 18 flags.go:64] FLAG: --authorization-config="" 2025-12-13T00:20:52.059202779+00:00 stderr F I1213 00:20:52.059190 18 flags.go:64] FLAG: --authorization-mode="[]" 2025-12-13T00:20:52.059202779+00:00 stderr F I1213 00:20:52.059196 18 flags.go:64] FLAG: --authorization-policy-file="" 2025-12-13T00:20:52.059211209+00:00 stderr F I1213 00:20:52.059200 18 flags.go:64] FLAG: 
--authorization-webhook-cache-authorized-ttl="5m0s" 2025-12-13T00:20:52.059211209+00:00 stderr F I1213 00:20:52.059204 18 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" 2025-12-13T00:20:52.059218279+00:00 stderr F I1213 00:20:52.059209 18 flags.go:64] FLAG: --authorization-webhook-config-file="" 2025-12-13T00:20:52.059225270+00:00 stderr F I1213 00:20:52.059214 18 flags.go:64] FLAG: --authorization-webhook-version="v1beta1" 2025-12-13T00:20:52.059225270+00:00 stderr F I1213 00:20:52.059218 18 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-12-13T00:20:52.059232700+00:00 stderr F I1213 00:20:52.059223 18 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-12-13T00:20:52.059239600+00:00 stderr F I1213 00:20:52.059227 18 flags.go:64] FLAG: --client-ca-file="" 2025-12-13T00:20:52.059239600+00:00 stderr F I1213 00:20:52.059231 18 flags.go:64] FLAG: --cloud-config="" 2025-12-13T00:20:52.059250990+00:00 stderr F I1213 00:20:52.059235 18 flags.go:64] FLAG: --cloud-provider="" 2025-12-13T00:20:52.059291391+00:00 stderr F I1213 00:20:52.059241 18 flags.go:64] FLAG: --cloud-provider-gce-l7lb-src-cidrs="130.211.0.0/22,35.191.0.0/16" 2025-12-13T00:20:52.059291391+00:00 stderr F I1213 00:20:52.059253 18 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2025-12-13T00:20:52.059291391+00:00 stderr F I1213 00:20:52.059259 18 flags.go:64] FLAG: --contention-profiling="false" 2025-12-13T00:20:52.059291391+00:00 stderr F I1213 00:20:52.059263 18 flags.go:64] FLAG: --cors-allowed-origins="[]" 2025-12-13T00:20:52.059291391+00:00 stderr F I1213 00:20:52.059269 18 flags.go:64] FLAG: --debug-socket-path="" 2025-12-13T00:20:52.059291391+00:00 stderr F I1213 00:20:52.059273 18 flags.go:64] FLAG: --default-not-ready-toleration-seconds="300" 2025-12-13T00:20:52.059291391+00:00 stderr F I1213 00:20:52.059277 18 flags.go:64] FLAG: --default-unreachable-toleration-seconds="300" 
2025-12-13T00:20:52.059291391+00:00 stderr F I1213 00:20:52.059281 18 flags.go:64] FLAG: --default-watch-cache-size="100" 2025-12-13T00:20:52.059301922+00:00 stderr F I1213 00:20:52.059286 18 flags.go:64] FLAG: --delete-collection-workers="1" 2025-12-13T00:20:52.059301922+00:00 stderr F I1213 00:20:52.059290 18 flags.go:64] FLAG: --disable-admission-plugins="[]" 2025-12-13T00:20:52.059309092+00:00 stderr F I1213 00:20:52.059295 18 flags.go:64] FLAG: --disabled-metrics="[]" 2025-12-13T00:20:52.059309092+00:00 stderr F I1213 00:20:52.059300 18 flags.go:64] FLAG: --egress-selector-config-file="" 2025-12-13T00:20:52.059316312+00:00 stderr F I1213 00:20:52.059305 18 flags.go:64] FLAG: --enable-admission-plugins="[]" 2025-12-13T00:20:52.059316312+00:00 stderr F I1213 00:20:52.059310 18 flags.go:64] FLAG: --enable-aggregator-routing="false" 2025-12-13T00:20:52.059323392+00:00 stderr F I1213 00:20:52.059314 18 flags.go:64] FLAG: --enable-bootstrap-token-auth="false" 2025-12-13T00:20:52.059330352+00:00 stderr F I1213 00:20:52.059319 18 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-12-13T00:20:52.059363423+00:00 stderr F I1213 00:20:52.059325 18 flags.go:64] FLAG: --enable-logs-handler="true" 2025-12-13T00:20:52.059363423+00:00 stderr F I1213 00:20:52.059337 18 flags.go:64] FLAG: --enable-priority-and-fairness="true" 2025-12-13T00:20:52.059363423+00:00 stderr F I1213 00:20:52.059341 18 flags.go:64] FLAG: --encryption-provider-config="" 2025-12-13T00:20:52.059363423+00:00 stderr F I1213 00:20:52.059346 18 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false" 2025-12-13T00:20:52.059363423+00:00 stderr F I1213 00:20:52.059350 18 flags.go:64] FLAG: --endpoint-reconciler-type="lease" 2025-12-13T00:20:52.059363423+00:00 stderr F I1213 00:20:52.059354 18 flags.go:64] FLAG: --etcd-cafile="" 2025-12-13T00:20:52.059372693+00:00 stderr F I1213 00:20:52.059358 18 flags.go:64] FLAG: --etcd-certfile="" 2025-12-13T00:20:52.059372693+00:00 stderr F I1213 
00:20:52.059362 18 flags.go:64] FLAG: --etcd-compaction-interval="5m0s" 2025-12-13T00:20:52.059379834+00:00 stderr F I1213 00:20:52.059367 18 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s" 2025-12-13T00:20:52.059379834+00:00 stderr F I1213 00:20:52.059371 18 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s" 2025-12-13T00:20:52.059387084+00:00 stderr F I1213 00:20:52.059376 18 flags.go:64] FLAG: --etcd-healthcheck-timeout="2s" 2025-12-13T00:20:52.059387084+00:00 stderr F I1213 00:20:52.059380 18 flags.go:64] FLAG: --etcd-keyfile="" 2025-12-13T00:20:52.059394344+00:00 stderr F I1213 00:20:52.059385 18 flags.go:64] FLAG: --etcd-prefix="/registry" 2025-12-13T00:20:52.059401234+00:00 stderr F I1213 00:20:52.059389 18 flags.go:64] FLAG: --etcd-readycheck-timeout="2s" 2025-12-13T00:20:52.059422965+00:00 stderr F I1213 00:20:52.059394 18 flags.go:64] FLAG: --etcd-servers="[]" 2025-12-13T00:20:52.059422965+00:00 stderr F I1213 00:20:52.059399 18 flags.go:64] FLAG: --etcd-servers-overrides="[]" 2025-12-13T00:20:52.059422965+00:00 stderr F I1213 00:20:52.059404 18 flags.go:64] FLAG: --event-ttl="1h0m0s" 2025-12-13T00:20:52.059422965+00:00 stderr F I1213 00:20:52.059409 18 flags.go:64] FLAG: --external-hostname="" 2025-12-13T00:20:52.059454136+00:00 stderr F I1213 00:20:52.059413 18 flags.go:64] FLAG: --feature-gates="" 2025-12-13T00:20:52.059454136+00:00 stderr F I1213 00:20:52.059424 18 flags.go:64] FLAG: --goaway-chance="0" 2025-12-13T00:20:52.059454136+00:00 stderr F I1213 00:20:52.059431 18 flags.go:64] FLAG: --help="false" 2025-12-13T00:20:52.059454136+00:00 stderr F I1213 00:20:52.059435 18 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-12-13T00:20:52.059454136+00:00 stderr F I1213 00:20:52.059439 18 flags.go:64] FLAG: --kubelet-certificate-authority="" 2025-12-13T00:20:52.059454136+00:00 stderr F I1213 00:20:52.059443 18 flags.go:64] FLAG: --kubelet-client-certificate="" 2025-12-13T00:20:52.059454136+00:00 stderr F I1213 00:20:52.059447 
18 flags.go:64] FLAG: --kubelet-client-key="" 2025-12-13T00:20:52.059463766+00:00 stderr F I1213 00:20:52.059451 18 flags.go:64] FLAG: --kubelet-port="10250" 2025-12-13T00:20:52.059492147+00:00 stderr F I1213 00:20:52.059457 18 flags.go:64] FLAG: --kubelet-preferred-address-types="[Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP]" 2025-12-13T00:20:52.059492147+00:00 stderr F I1213 00:20:52.059468 18 flags.go:64] FLAG: --kubelet-read-only-port="10255" 2025-12-13T00:20:52.059492147+00:00 stderr F I1213 00:20:52.059472 18 flags.go:64] FLAG: --kubelet-timeout="5s" 2025-12-13T00:20:52.059492147+00:00 stderr F I1213 00:20:52.059477 18 flags.go:64] FLAG: --kubernetes-service-node-port="0" 2025-12-13T00:20:52.059492147+00:00 stderr F I1213 00:20:52.059481 18 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" 2025-12-13T00:20:52.059492147+00:00 stderr F I1213 00:20:52.059485 18 flags.go:64] FLAG: --livez-grace-period="0s" 2025-12-13T00:20:52.059500817+00:00 stderr F I1213 00:20:52.059489 18 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-12-13T00:20:52.059517447+00:00 stderr F I1213 00:20:52.059494 18 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-12-13T00:20:52.059517447+00:00 stderr F I1213 00:20:52.059505 18 flags.go:64] FLAG: --log-json-split-stream="false" 2025-12-13T00:20:52.059517447+00:00 stderr F I1213 00:20:52.059509 18 flags.go:64] FLAG: --logging-format="text" 2025-12-13T00:20:52.059525207+00:00 stderr F I1213 00:20:52.059514 18 flags.go:64] FLAG: --max-connection-bytes-per-sec="0" 2025-12-13T00:20:52.059525207+00:00 stderr F I1213 00:20:52.059518 18 flags.go:64] FLAG: --max-mutating-requests-inflight="200" 2025-12-13T00:20:52.059532428+00:00 stderr F I1213 00:20:52.059523 18 flags.go:64] FLAG: --max-requests-inflight="400" 2025-12-13T00:20:52.059532428+00:00 stderr F I1213 00:20:52.059527 18 flags.go:64] FLAG: --min-request-timeout="1800" 2025-12-13T00:20:52.059539638+00:00 stderr F I1213 00:20:52.059531 18 flags.go:64] FLAG: 
--oidc-ca-file="" 2025-12-13T00:20:52.059546598+00:00 stderr F I1213 00:20:52.059536 18 flags.go:64] FLAG: --oidc-client-id="" 2025-12-13T00:20:52.059546598+00:00 stderr F I1213 00:20:52.059540 18 flags.go:64] FLAG: --oidc-groups-claim="" 2025-12-13T00:20:52.059553648+00:00 stderr F I1213 00:20:52.059544 18 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-12-13T00:20:52.059560458+00:00 stderr F I1213 00:20:52.059549 18 flags.go:64] FLAG: --oidc-issuer-url="" 2025-12-13T00:20:52.059598139+00:00 stderr F I1213 00:20:52.059553 18 flags.go:64] FLAG: --oidc-required-claim="" 2025-12-13T00:20:52.059598139+00:00 stderr F I1213 00:20:52.059566 18 flags.go:64] FLAG: --oidc-signing-algs="[RS256]" 2025-12-13T00:20:52.059598139+00:00 stderr F I1213 00:20:52.059571 18 flags.go:64] FLAG: --oidc-username-claim="sub" 2025-12-13T00:20:52.059598139+00:00 stderr F I1213 00:20:52.059575 18 flags.go:64] FLAG: --oidc-username-prefix="" 2025-12-13T00:20:52.059598139+00:00 stderr F I1213 00:20:52.059579 18 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-12-13T00:20:52.059598139+00:00 stderr F I1213 00:20:52.059584 18 flags.go:64] FLAG: --peer-advertise-ip="" 2025-12-13T00:20:52.059598139+00:00 stderr F I1213 00:20:52.059589 18 flags.go:64] FLAG: --peer-advertise-port="" 2025-12-13T00:20:52.059612780+00:00 stderr F I1213 00:20:52.059593 18 flags.go:64] FLAG: --peer-ca-file="" 2025-12-13T00:20:52.059612780+00:00 stderr F I1213 00:20:52.059597 18 flags.go:64] FLAG: --permit-address-sharing="true" 2025-12-13T00:20:52.059612780+00:00 stderr F I1213 00:20:52.059602 18 flags.go:64] FLAG: --permit-port-sharing="false" 2025-12-13T00:20:52.059612780+00:00 stderr F I1213 00:20:52.059607 18 flags.go:64] FLAG: --profiling="true" 2025-12-13T00:20:52.059651871+00:00 stderr F I1213 00:20:52.059618 18 flags.go:64] FLAG: --proxy-client-cert-file="" 2025-12-13T00:20:52.059651871+00:00 stderr F I1213 00:20:52.059627 18 flags.go:64] FLAG: 
--proxy-client-key-file="" 2025-12-13T00:20:52.059651871+00:00 stderr F I1213 00:20:52.059631 18 flags.go:64] FLAG: --request-timeout="1m0s" 2025-12-13T00:20:52.059651871+00:00 stderr F I1213 00:20:52.059635 18 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-12-13T00:20:52.059651871+00:00 stderr F I1213 00:20:52.059640 18 flags.go:64] FLAG: --requestheader-client-ca-file="" 2025-12-13T00:20:52.059651871+00:00 stderr F I1213 00:20:52.059645 18 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[]" 2025-12-13T00:20:52.059661511+00:00 stderr F I1213 00:20:52.059650 18 flags.go:64] FLAG: --requestheader-group-headers="[]" 2025-12-13T00:20:52.059691922+00:00 stderr F I1213 00:20:52.059655 18 flags.go:64] FLAG: --requestheader-username-headers="[]" 2025-12-13T00:20:52.059691922+00:00 stderr F I1213 00:20:52.059660 18 flags.go:64] FLAG: --runtime-config="" 2025-12-13T00:20:52.059691922+00:00 stderr F I1213 00:20:52.059666 18 flags.go:64] FLAG: --secure-port="6443" 2025-12-13T00:20:52.059691922+00:00 stderr F I1213 00:20:52.059670 18 flags.go:64] FLAG: --send-retry-after-while-not-ready-once="false" 2025-12-13T00:20:52.059691922+00:00 stderr F I1213 00:20:52.059674 18 flags.go:64] FLAG: --service-account-extend-token-expiration="true" 2025-12-13T00:20:52.059691922+00:00 stderr F I1213 00:20:52.059678 18 flags.go:64] FLAG: --service-account-issuer="[]" 2025-12-13T00:20:52.059711042+00:00 stderr F I1213 00:20:52.059685 18 flags.go:64] FLAG: --service-account-jwks-uri="" 2025-12-13T00:20:52.059711042+00:00 stderr F I1213 00:20:52.059689 18 flags.go:64] FLAG: --service-account-key-file="[]" 2025-12-13T00:20:52.059711042+00:00 stderr F I1213 00:20:52.059694 18 flags.go:64] FLAG: --service-account-lookup="true" 2025-12-13T00:20:52.059711042+00:00 stderr F I1213 00:20:52.059698 18 flags.go:64] FLAG: --service-account-max-token-expiration="0s" 2025-12-13T00:20:52.059711042+00:00 stderr F I1213 00:20:52.059703 18 flags.go:64] FLAG: 
--service-account-signing-key-file="" 2025-12-13T00:20:52.059719453+00:00 stderr F I1213 00:20:52.059708 18 flags.go:64] FLAG: --service-cluster-ip-range="" 2025-12-13T00:20:52.059756044+00:00 stderr F I1213 00:20:52.059713 18 flags.go:64] FLAG: --service-node-port-range="30000-32767" 2025-12-13T00:20:52.059756044+00:00 stderr F I1213 00:20:52.059727 18 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-12-13T00:20:52.059756044+00:00 stderr F I1213 00:20:52.059732 18 flags.go:64] FLAG: --shutdown-delay-duration="0s" 2025-12-13T00:20:52.059756044+00:00 stderr F I1213 00:20:52.059736 18 flags.go:64] FLAG: --shutdown-send-retry-after="false" 2025-12-13T00:20:52.059756044+00:00 stderr F I1213 00:20:52.059740 18 flags.go:64] FLAG: --shutdown-watch-termination-grace-period="0s" 2025-12-13T00:20:52.059756044+00:00 stderr F I1213 00:20:52.059744 18 flags.go:64] FLAG: --storage-backend="" 2025-12-13T00:20:52.059756044+00:00 stderr F I1213 00:20:52.059748 18 flags.go:64] FLAG: --storage-media-type="application/vnd.kubernetes.protobuf" 2025-12-13T00:20:52.059770174+00:00 stderr F I1213 00:20:52.059753 18 flags.go:64] FLAG: --strict-transport-security-directives="[]" 2025-12-13T00:20:52.059770174+00:00 stderr F I1213 00:20:52.059759 18 flags.go:64] FLAG: --tls-cert-file="" 2025-12-13T00:20:52.059777424+00:00 stderr F I1213 00:20:52.059763 18 flags.go:64] FLAG: --tls-cipher-suites="[]" 2025-12-13T00:20:52.059777424+00:00 stderr F I1213 00:20:52.059769 18 flags.go:64] FLAG: --tls-min-version="" 2025-12-13T00:20:52.059784604+00:00 stderr F I1213 00:20:52.059773 18 flags.go:64] FLAG: --tls-private-key-file="" 2025-12-13T00:20:52.059808155+00:00 stderr F I1213 00:20:52.059777 18 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-12-13T00:20:52.059808155+00:00 stderr F I1213 00:20:52.059783 18 flags.go:64] FLAG: --token-auth-file="" 2025-12-13T00:20:52.059808155+00:00 stderr F I1213 00:20:52.059788 18 flags.go:64] FLAG: --tracing-config-file="" 
2025-12-13T00:20:52.059808155+00:00 stderr F I1213 00:20:52.059792 18 flags.go:64] FLAG: --v="2" 2025-12-13T00:20:52.059808155+00:00 stderr F I1213 00:20:52.059797 18 flags.go:64] FLAG: --version="false" 2025-12-13T00:20:52.059819035+00:00 stderr F I1213 00:20:52.059805 18 flags.go:64] FLAG: --vmodule="" 2025-12-13T00:20:52.059819035+00:00 stderr F I1213 00:20:52.059810 18 flags.go:64] FLAG: --watch-cache="true" 2025-12-13T00:20:52.059827696+00:00 stderr F I1213 00:20:52.059815 18 flags.go:64] FLAG: --watch-cache-sizes="[]" 2025-12-13T00:20:52.059971909+00:00 stderr F I1213 00:20:52.059862 18 plugins.go:83] "Registered admission plugin" plugin="authorization.openshift.io/RestrictSubjectBindings" 2025-12-13T00:20:52.059971909+00:00 stderr F I1213 00:20:52.059880 18 plugins.go:83] "Registered admission plugin" plugin="route.openshift.io/RouteHostAssignment" 2025-12-13T00:20:52.059971909+00:00 stderr F I1213 00:20:52.059887 18 plugins.go:83] "Registered admission plugin" plugin="image.openshift.io/ImagePolicy" 2025-12-13T00:20:52.059971909+00:00 stderr F I1213 00:20:52.059893 18 plugins.go:83] "Registered admission plugin" plugin="route.openshift.io/IngressAdmission" 2025-12-13T00:20:52.059971909+00:00 stderr F I1213 00:20:52.059900 18 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/ManagementCPUsOverride" 2025-12-13T00:20:52.059971909+00:00 stderr F I1213 00:20:52.059907 18 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/ManagedNode" 2025-12-13T00:20:52.059971909+00:00 stderr F I1213 00:20:52.059914 18 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/MixedCPUs" 2025-12-13T00:20:52.059971909+00:00 stderr F I1213 00:20:52.059921 18 plugins.go:83] "Registered admission plugin" plugin="scheduling.openshift.io/OriginPodNodeEnvironment" 2025-12-13T00:20:52.060117713+00:00 stderr F I1213 00:20:52.060032 18 plugins.go:83] "Registered admission plugin" 
plugin="autoscaling.openshift.io/ClusterResourceOverride" 2025-12-13T00:20:52.060117713+00:00 stderr F I1213 00:20:52.060056 18 plugins.go:83] "Registered admission plugin" plugin="quota.openshift.io/ClusterResourceQuota" 2025-12-13T00:20:52.060117713+00:00 stderr F I1213 00:20:52.060063 18 plugins.go:83] "Registered admission plugin" plugin="autoscaling.openshift.io/RunOnceDuration" 2025-12-13T00:20:52.060117713+00:00 stderr F I1213 00:20:52.060070 18 plugins.go:83] "Registered admission plugin" plugin="scheduling.openshift.io/PodNodeConstraints" 2025-12-13T00:20:52.060117713+00:00 stderr F I1213 00:20:52.060077 18 plugins.go:83] "Registered admission plugin" plugin="security.openshift.io/SecurityContextConstraint" 2025-12-13T00:20:52.060117713+00:00 stderr F I1213 00:20:52.060086 18 plugins.go:83] "Registered admission plugin" plugin="security.openshift.io/SCCExecRestrictions" 2025-12-13T00:20:52.060117713+00:00 stderr F I1213 00:20:52.060094 18 plugins.go:83] "Registered admission plugin" plugin="network.openshift.io/ExternalIPRanger" 2025-12-13T00:20:52.060140844+00:00 stderr F I1213 00:20:52.060102 18 plugins.go:83] "Registered admission plugin" plugin="network.openshift.io/RestrictedEndpointsAdmission" 2025-12-13T00:20:52.060140844+00:00 stderr F I1213 00:20:52.060114 18 plugins.go:83] "Registered admission plugin" plugin="storage.openshift.io/CSIInlineVolumeSecurity" 2025-12-13T00:20:52.060173215+00:00 stderr F I1213 00:20:52.060135 18 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateAPIServer" 2025-12-13T00:20:52.060173215+00:00 stderr F I1213 00:20:52.060147 18 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateAuthentication" 2025-12-13T00:20:52.060173215+00:00 stderr F I1213 00:20:52.060154 18 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateFeatureGate" 2025-12-13T00:20:52.060184895+00:00 stderr F I1213 00:20:52.060161 18 plugins.go:83] "Registered admission 
plugin" plugin="config.openshift.io/ValidateConsole"
2025-12-13T00:20:52.060184895+00:00 stderr F I1213 00:20:52.060172 18 plugins.go:83] "Registered admission plugin" plugin="operator.openshift.io/ValidateDNS"
2025-12-13T00:20:52.060213046+00:00 stderr F I1213 00:20:52.060178 18 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateImage"
2025-12-13T00:20:52.060213046+00:00 stderr F I1213 00:20:52.060189 18 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateOAuth"
2025-12-13T00:20:52.060213046+00:00 stderr F I1213 00:20:52.060197 18 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateProject"
2025-12-13T00:20:52.060226866+00:00 stderr F I1213 00:20:52.060204 18 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/DenyDeleteClusterConfiguration"
2025-12-13T00:20:52.060226866+00:00 stderr F I1213 00:20:52.060215 18 plugins.go:83] "Registered admission plugin" plugin="operator.openshift.io/DenyDeleteClusterOperators"
2025-12-13T00:20:52.060236866+00:00 stderr F I1213 00:20:52.060222 18 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateScheduler"
2025-12-13T00:20:52.060249957+00:00 stderr F I1213 00:20:52.060230 18 plugins.go:83] "Registered admission plugin" plugin="operator.openshift.io/ValidateKubeControllerManager"
2025-12-13T00:20:52.060249957+00:00 stderr F I1213 00:20:52.060241 18 plugins.go:83] "Registered admission plugin" plugin="quota.openshift.io/ValidateClusterResourceQuota"
2025-12-13T00:20:52.060286888+00:00 stderr F I1213 00:20:52.060248 18 plugins.go:83] "Registered admission plugin" plugin="security.openshift.io/ValidateSecurityContextConstraints"
2025-12-13T00:20:52.060286888+00:00 stderr F I1213 00:20:52.060259 18 plugins.go:83] "Registered admission plugin" plugin="authorization.openshift.io/ValidateRoleBindingRestriction"
2025-12-13T00:20:52.060286888+00:00 stderr F I1213 00:20:52.060266 18 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateNetwork"
2025-12-13T00:20:52.060286888+00:00 stderr F I1213 00:20:52.060274 18 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/ValidateAPIRequestCount"
2025-12-13T00:20:52.060299188+00:00 stderr F I1213 00:20:52.060282 18 plugins.go:83] "Registered admission plugin" plugin="config.openshift.io/RestrictExtremeWorkerLatencyProfile"
2025-12-13T00:20:52.060311108+00:00 stderr F I1213 00:20:52.060290 18 plugins.go:83] "Registered admission plugin" plugin="security.openshift.io/DefaultSecurityContextConstraints"
2025-12-13T00:20:52.060311108+00:00 stderr F I1213 00:20:52.060301 18 plugins.go:83] "Registered admission plugin" plugin="route.openshift.io/ValidateRoute"
2025-12-13T00:20:52.060328349+00:00 stderr F I1213 00:20:52.060307 18 plugins.go:83] "Registered admission plugin" plugin="route.openshift.io/DefaultRoute"
2025-12-13T00:20:52.065107458+00:00 stderr F W1213 00:20:52.065010 18 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
2025-12-13T00:20:52.065107458+00:00 stderr F I1213 00:20:52.065043 18 feature_gate.go:250] feature gates: &{map[]}
2025-12-13T00:20:52.065125929+00:00 stderr F W1213 00:20:52.065096 18 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
2025-12-13T00:20:52.065125929+00:00 stderr F I1213 00:20:52.065108 18 feature_gate.go:250] feature gates: &{map[]}
2025-12-13T00:20:52.065201341+00:00 stderr F W1213 00:20:52.065146 18 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
2025-12-13T00:20:52.065201341+00:00 stderr F I1213 00:20:52.065158 18 feature_gate.go:250] feature gates: &{map[]}
2025-12-13T00:20:52.065222681+00:00 stderr F W1213 00:20:52.065193 18 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
2025-12-13T00:20:52.065222681+00:00 stderr F I1213 00:20:52.065204 18 feature_gate.go:250] feature gates: &{map[]}
2025-12-13T00:20:52.065304833+00:00 stderr F W1213 00:20:52.065241 18 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
2025-12-13T00:20:52.065304833+00:00 stderr F I1213 00:20:52.065253 18 feature_gate.go:250] feature gates: &{map[]}
2025-12-13T00:20:52.065339424+00:00 stderr F W1213 00:20:52.065306 18 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
2025-12-13T00:20:52.065339424+00:00 stderr F I1213 00:20:52.065317 18 feature_gate.go:250] feature gates: &{map[]}
2025-12-13T00:20:52.065404766+00:00 stderr F W1213 00:20:52.065354 18 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
2025-12-13T00:20:52.065404766+00:00 stderr F I1213 00:20:52.065364 18 feature_gate.go:250] feature gates: &{map[]}
2025-12-13T00:20:52.065439537+00:00 stderr F W1213 00:20:52.065402 18 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
2025-12-13T00:20:52.065439537+00:00 stderr F I1213 00:20:52.065412 18 feature_gate.go:250] feature gates: &{map[]}
2025-12-13T00:20:52.065513149+00:00 stderr F I1213 00:20:52.065467 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]}
2025-12-13T00:20:52.065551670+00:00 stderr F W1213 00:20:52.065509 18 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
2025-12-13T00:20:52.065551670+00:00 stderr F I1213 00:20:52.065521 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]}
2025-12-13T00:20:52.065612102+00:00 stderr F W1213 00:20:52.065579 18 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
2025-12-13T00:20:52.065612102+00:00 stderr F I1213 00:20:52.065590 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]}
2025-12-13T00:20:52.065663903+00:00 stderr F W1213 00:20:52.065628 18 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
2025-12-13T00:20:52.065663903+00:00 stderr F I1213 00:20:52.065639 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]}
2025-12-13T00:20:52.065706224+00:00 stderr F W1213 00:20:52.065676 18 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
2025-12-13T00:20:52.065706224+00:00 stderr F I1213 00:20:52.065686 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]}
2025-12-13T00:20:52.065756715+00:00 stderr F W1213 00:20:52.065723 18 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
2025-12-13T00:20:52.065756715+00:00 stderr F I1213 00:20:52.065734 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]}
2025-12-13T00:20:52.065804437+00:00 stderr F W1213 00:20:52.065772 18 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
2025-12-13T00:20:52.065804437+00:00 stderr F I1213 00:20:52.065784 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]}
2025-12-13T00:20:52.065863868+00:00 stderr F W1213 00:20:52.065829 18 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
2025-12-13T00:20:52.065863868+00:00 stderr F I1213 00:20:52.065840 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]}
2025-12-13T00:20:52.065947171+00:00 stderr F W1213 00:20:52.065899 18 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
2025-12-13T00:20:52.065947171+00:00 stderr F I1213 00:20:52.065912 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]}
2025-12-13T00:20:52.066004102+00:00 stderr F W1213 00:20:52.065970 18 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
2025-12-13T00:20:52.066004102+00:00 stderr F I1213 00:20:52.065982 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]}
2025-12-13T00:20:52.066051353+00:00 stderr F W1213 00:20:52.066020 18 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
2025-12-13T00:20:52.066051353+00:00 stderr F I1213 00:20:52.066031 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true]}
2025-12-13T00:20:52.066113245+00:00 stderr F I1213 00:20:52.066070 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true]}
2025-12-13T00:20:52.066149826+00:00 stderr F I1213 00:20:52.066114 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false]}
2025-12-13T00:20:52.066189197+00:00 stderr F W1213 00:20:52.066158 18 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
2025-12-13T00:20:52.066189197+00:00 stderr F I1213 00:20:52.066169 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false]}
2025-12-13T00:20:52.066247029+00:00 stderr F I1213 00:20:52.066207 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066279399+00:00 stderr F W1213 00:20:52.066249 18 feature_gate.go:227] unrecognized feature gate: Example
2025-12-13T00:20:52.066279399+00:00 stderr F I1213 00:20:52.066259 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066341151+00:00 stderr F W1213 00:20:52.066296 18 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
2025-12-13T00:20:52.066341151+00:00 stderr F I1213 00:20:52.066307 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066385602+00:00 stderr F W1213 00:20:52.066344 18 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
2025-12-13T00:20:52.066385602+00:00 stderr F I1213 00:20:52.066354 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066425323+00:00 stderr F W1213 00:20:52.066394 18 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
2025-12-13T00:20:52.066425323+00:00 stderr F I1213 00:20:52.066404 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066485125+00:00 stderr F W1213 00:20:52.066442 18 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
2025-12-13T00:20:52.066485125+00:00 stderr F I1213 00:20:52.066452 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066524956+00:00 stderr F W1213 00:20:52.066491 18 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
2025-12-13T00:20:52.066524956+00:00 stderr F I1213 00:20:52.066501 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066571877+00:00 stderr F W1213 00:20:52.066540 18 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
2025-12-13T00:20:52.066571877+00:00 stderr F I1213 00:20:52.066550 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066664681+00:00 stderr F W1213 00:20:52.066622 18 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
2025-12-13T00:20:52.066664681+00:00 stderr F I1213 00:20:52.066634 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066705982+00:00 stderr F W1213 00:20:52.066670 18 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
2025-12-13T00:20:52.066705982+00:00 stderr F I1213 00:20:52.066681 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066766373+00:00 stderr F W1213 00:20:52.066719 18 feature_gate.go:227] unrecognized feature gate: GatewayAPI
2025-12-13T00:20:52.066766373+00:00 stderr F I1213 00:20:52.066729 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066805234+00:00 stderr F W1213 00:20:52.066767 18 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
2025-12-13T00:20:52.066805234+00:00 stderr F I1213 00:20:52.066777 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066846855+00:00 stderr F W1213 00:20:52.066813 18 feature_gate.go:227] unrecognized feature gate: ImagePolicy
2025-12-13T00:20:52.066846855+00:00 stderr F I1213 00:20:52.066823 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.066907287+00:00 stderr F W1213 00:20:52.066860 18 feature_gate.go:227] unrecognized feature gate: InsightsConfig
2025-12-13T00:20:52.066907287+00:00 stderr F I1213 00:20:52.066872 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.067012610+00:00 stderr F W1213 00:20:52.066958 18 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
2025-12-13T00:20:52.067012610+00:00 stderr F I1213 00:20:52.066970 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.067054961+00:00 stderr F W1213 00:20:52.067009 18 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
2025-12-13T00:20:52.067054961+00:00 stderr F I1213 00:20:52.067019 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.067089262+00:00 stderr F W1213 00:20:52.067057 18 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
2025-12-13T00:20:52.067089262+00:00 stderr F I1213 00:20:52.067067 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false]}
2025-12-13T00:20:52.067134803+00:00 stderr F W1213 00:20:52.067103 18 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
2025-12-13T00:20:52.067134803+00:00 stderr F I1213 00:20:52.067115 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]}
2025-12-13T00:20:52.067185394+00:00 stderr F W1213 00:20:52.067153 18 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
2025-12-13T00:20:52.067185394+00:00 stderr F I1213 00:20:52.067164 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]}
2025-12-13T00:20:52.067239046+00:00 stderr F W1213 00:20:52.067208 18 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
2025-12-13T00:20:52.067239046+00:00 stderr F I1213 00:20:52.067219 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]}
2025-12-13T00:20:52.067288437+00:00 stderr F W1213 00:20:52.067257 18 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
2025-12-13T00:20:52.067288437+00:00 stderr F I1213 00:20:52.067268 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]}
2025-12-13T00:20:52.067332638+00:00 stderr F W1213 00:20:52.067304 18 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
2025-12-13T00:20:52.067332638+00:00 stderr F I1213 00:20:52.067315 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true]}
2025-12-13T00:20:52.067389260+00:00 stderr F I1213 00:20:52.067354 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]}
2025-12-13T00:20:52.067430951+00:00 stderr F W1213 00:20:52.067397 18 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
2025-12-13T00:20:52.067430951+00:00 stderr F I1213 00:20:52.067408 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]}
2025-12-13T00:20:52.067472302+00:00 stderr F W1213 00:20:52.067444 18 feature_gate.go:227] unrecognized feature gate: MetricsServer
2025-12-13T00:20:52.067472302+00:00 stderr F I1213 00:20:52.067455 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]}
2025-12-13T00:20:52.067530944+00:00 stderr F W1213 00:20:52.067492 18 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
2025-12-13T00:20:52.067530944+00:00 stderr F I1213 00:20:52.067502 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]}
2025-12-13T00:20:52.067572235+00:00 stderr F W1213 00:20:52.067541 18 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
2025-12-13T00:20:52.067572235+00:00 stderr F I1213 00:20:52.067552 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]}
2025-12-13T00:20:52.067618396+00:00 stderr F W1213 00:20:52.067589 18 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
2025-12-13T00:20:52.067618396+00:00 stderr F I1213 00:20:52.067600 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]}
2025-12-13T00:20:52.067675598+00:00 stderr F W1213 00:20:52.067637 18 feature_gate.go:227] unrecognized feature gate: NewOLM
2025-12-13T00:20:52.067675598+00:00 stderr F I1213 00:20:52.067647 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]}
2025-12-13T00:20:52.067716169+00:00 stderr F W1213 00:20:52.067684 18 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
2025-12-13T00:20:52.067716169+00:00 stderr F I1213 00:20:52.067695 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false]}
2025-12-13T00:20:52.067764790+00:00 stderr F I1213 00:20:52.067733 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]}
2025-12-13T00:20:52.067816621+00:00 stderr F W1213 00:20:52.067777 18 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
2025-12-13T00:20:52.067816621+00:00 stderr F I1213 00:20:52.067788 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]}
2025-12-13T00:20:52.067868963+00:00 stderr F W1213 00:20:52.067837 18 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
2025-12-13T00:20:52.067868963+00:00 stderr F I1213 00:20:52.067848 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]}
2025-12-13T00:20:52.067916714+00:00 stderr F W1213 00:20:52.067885 18 feature_gate.go:227] unrecognized feature gate: PinnedImages
2025-12-13T00:20:52.067916714+00:00 stderr F I1213 00:20:52.067896 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]}
2025-12-13T00:20:52.067988226+00:00 stderr F W1213 00:20:52.067951 18 feature_gate.go:227] unrecognized feature gate: PlatformOperators
2025-12-13T00:20:52.067988226+00:00 stderr F I1213 00:20:52.067963 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]}
2025-12-13T00:20:52.068031517+00:00 stderr F W1213 00:20:52.068001 18 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
2025-12-13T00:20:52.068031517+00:00 stderr F I1213 00:20:52.068012 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false]}
2025-12-13T00:20:52.068100109+00:00 stderr F I1213 00:20:52.068050 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false]}
2025-12-13T00:20:52.068162620+00:00 stderr F I1213 00:20:52.068098 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false]}
2025-12-13T00:20:52.068186621+00:00 stderr F I1213 00:20:52.068149 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false]}
2025-12-13T00:20:52.068254563+00:00 stderr F I1213 00:20:52.068197 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false]}
2025-12-13T00:20:52.068304244+00:00 stderr F W1213 00:20:52.068270 18 feature_gate.go:227] unrecognized feature gate: SignatureStores
2025-12-13T00:20:52.068304244+00:00 stderr F I1213 00:20:52.068283 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false]}
2025-12-13T00:20:52.068367746+00:00 stderr F W1213 00:20:52.068325 18 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
2025-12-13T00:20:52.068367746+00:00 stderr F I1213 00:20:52.068336 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false]}
2025-12-13T00:20:52.068439358+00:00 stderr F I1213 00:20:52.068383 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]}
2025-12-13T00:20:52.068472199+00:00 stderr F W1213 00:20:52.068434 18 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
2025-12-13T00:20:52.068472199+00:00 stderr F I1213 00:20:52.068446 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]}
2025-12-13T00:20:52.068540551+00:00 stderr F W1213 00:20:52.068495 18 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
2025-12-13T00:20:52.068540551+00:00 stderr F I1213 00:20:52.068506 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]}
2025-12-13T00:20:52.068592882+00:00 stderr F W1213 00:20:52.068549 18 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
2025-12-13T00:20:52.068592882+00:00 stderr F I1213 00:20:52.068561 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]}
2025-12-13T00:20:52.068641393+00:00 stderr F W1213 00:20:52.068606 18 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
2025-12-13T00:20:52.068641393+00:00 stderr F I1213 00:20:52.068616 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]}
2025-12-13T00:20:52.068699705+00:00 stderr F W1213 00:20:52.068660 18 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
2025-12-13T00:20:52.068699705+00:00 stderr F I1213 00:20:52.068671 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false]}
2025-12-13T00:20:52.068761256+00:00 stderr F I1213 00:20:52.068717 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
2025-12-13T00:20:52.068799898+00:00 stderr F W1213 00:20:52.068768 18 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
2025-12-13T00:20:52.068808548+00:00 stderr F I1213 00:20:52.068779 18 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
2025-12-13T00:20:52.068895030+00:00 stderr F Flag --openshift-config has been deprecated, to be removed
2025-12-13T00:20:52.068895030+00:00 stderr F Flag --enable-logs-handler has been deprecated, This flag will be removed in v1.19
2025-12-13T00:20:52.068895030+00:00 stderr F Flag --kubelet-read-only-port has been deprecated, kubelet-read-only-port is deprecated and will be removed.
2025-12-13T00:20:52.068895030+00:00 stderr F I1213 00:20:52.068873 18 flags.go:64] FLAG: --admission-control="[]"
2025-12-13T00:20:52.068895030+00:00 stderr F I1213 00:20:52.068882 18 flags.go:64] FLAG: --admission-control-config-file="/tmp/kubeapiserver-admission-config.yaml2948189780"
2025-12-13T00:20:52.068942341+00:00 stderr F I1213 00:20:52.068889 18 flags.go:64] FLAG: --advertise-address="192.168.126.11"
2025-12-13T00:20:52.068942341+00:00 stderr F I1213 00:20:52.068901 18 flags.go:64] FLAG: --aggregator-reject-forwarding-redirect="true"
2025-12-13T00:20:52.068942341+00:00 stderr F I1213 00:20:52.068906 18 flags.go:64] FLAG: --allow-metric-labels="[]"
2025-12-13T00:20:52.068942341+00:00 stderr F I1213 00:20:52.068915 18 flags.go:64] FLAG: --allow-metric-labels-manifest=""
2025-12-13T00:20:52.068942341+00:00 stderr F I1213 00:20:52.068919 18 flags.go:64] FLAG: --allow-privileged="true"
2025-12-13T00:20:52.068955492+00:00 stderr F I1213 00:20:52.068924 18 flags.go:64] FLAG: --anonymous-auth="true"
2025-12-13T00:20:52.068991323+00:00 stderr F I1213 00:20:52.068946 18 flags.go:64] FLAG: --api-audiences="[https://kubernetes.default.svc]"
2025-12-13T00:20:52.068991323+00:00 stderr F I1213 00:20:52.068958 18 flags.go:64] FLAG: --apiserver-count="1"
2025-12-13T00:20:52.068991323+00:00 stderr F I1213 00:20:52.068963 18 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000"
2025-12-13T00:20:52.068991323+00:00 stderr F I1213 00:20:52.068968 18 flags.go:64] FLAG: --audit-log-batch-max-size="1"
2025-12-13T00:20:52.068991323+00:00 stderr F I1213 00:20:52.068973 18 flags.go:64] FLAG: --audit-log-batch-max-wait="0s"
2025-12-13T00:20:52.069000573+00:00 stderr F I1213 00:20:52.068978 18 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0"
2025-12-13T00:20:52.069000573+00:00 stderr F I1213 00:20:52.068983 18 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false"
2025-12-13T00:20:52.069000573+00:00 stderr F I1213 00:20:52.068988 18 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0"
2025-12-13T00:20:52.069000573+00:00 stderr F I1213 00:20:52.068994 18 flags.go:64] FLAG: --audit-log-compress="false"
2025-12-13T00:20:52.069008833+00:00 stderr F I1213 00:20:52.068999 18 flags.go:64] FLAG: --audit-log-format="json"
2025-12-13T00:20:52.069015923+00:00 stderr F I1213 00:20:52.069003 18 flags.go:64] FLAG: --audit-log-maxage="0"
2025-12-13T00:20:52.069015923+00:00 stderr F I1213 00:20:52.069008 18 flags.go:64] FLAG: --audit-log-maxbackup="10"
2025-12-13T00:20:52.069023303+00:00 stderr F I1213 00:20:52.069012 18 flags.go:64] FLAG: --audit-log-maxsize="200"
2025-12-13T00:20:52.069023303+00:00 stderr F I1213 00:20:52.069017 18 flags.go:64] FLAG: --audit-log-mode="blocking"
2025-12-13T00:20:52.069078415+00:00 stderr F I1213 00:20:52.069022 18 flags.go:64] FLAG: --audit-log-path="/var/log/kube-apiserver/audit.log"
2025-12-13T00:20:52.069078415+00:00 stderr F I1213 00:20:52.069032 18 flags.go:64] FLAG: --audit-log-truncate-enabled="false"
2025-12-13T00:20:52.069078415+00:00 stderr F I1213 00:20:52.069036 18 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760"
2025-12-13T00:20:52.069078415+00:00 stderr F I1213 00:20:52.069042 18 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400"
2025-12-13T00:20:52.069078415+00:00 stderr F I1213 00:20:52.069047 18 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1"
2025-12-13T00:20:52.069078415+00:00 stderr F I1213 00:20:52.069051 18 flags.go:64] FLAG: --audit-policy-file="/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-audit-policies/policy.yaml"
2025-12-13T00:20:52.069078415+00:00 stderr F I1213 00:20:52.069057 18 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000"
2025-12-13T00:20:52.069078415+00:00 stderr F I1213 00:20:52.069061 18 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s"
2025-12-13T00:20:52.069078415+00:00 stderr F I1213 00:20:52.069066 18 flags.go:64] FLAG: --audit-webhook-batch-max-size="400"
2025-12-13T00:20:52.069078415+00:00 stderr F I1213 00:20:52.069070 18 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s"
2025-12-13T00:20:52.069089745+00:00 stderr F I1213 00:20:52.069075 18 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15"
2025-12-13T00:20:52.069089745+00:00 stderr F I1213 00:20:52.069080 18 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true"
2025-12-13T00:20:52.069101396+00:00 stderr F I1213 00:20:52.069084 18 flags.go:64] FLAG: --audit-webhook-batch-throttle-qps="10"
2025-12-13T00:20:52.069101396+00:00 stderr F I1213 00:20:52.069089 18 flags.go:64] FLAG: --audit-webhook-config-file=""
2025-12-13T00:20:52.069101396+00:00 stderr F I1213 00:20:52.069094 18 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s"
2025-12-13T00:20:52.069109406+00:00 stderr F I1213 00:20:52.069099 18 flags.go:64] FLAG: --audit-webhook-mode="batch"
2025-12-13T00:20:52.069116426+00:00 stderr F I1213 00:20:52.069104 18 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false"
2025-12-13T00:20:52.069116426+00:00 stderr F I1213 00:20:52.069108 18 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760"
2025-12-13T00:20:52.069123656+00:00 stderr F I1213 00:20:52.069113 18 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400"
2025-12-13T00:20:52.069140817+00:00 stderr F I1213 00:20:52.069118 18 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1"
2025-12-13T00:20:52.069140817+00:00 stderr F I1213 00:20:52.069123 18 flags.go:64] FLAG: --authentication-config=""
2025-12-13T00:20:52.069140817+00:00 stderr F I1213 00:20:52.069127 18 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
2025-12-13T00:20:52.069148717+00:00 stderr F I1213 00:20:52.069132 18 flags.go:64] FLAG: --authentication-token-webhook-config-file="/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"
2025-12-13T00:20:52.069148717+00:00 stderr F I1213 00:20:52.069139 18 flags.go:64] FLAG: --authentication-token-webhook-version="v1"
2025-12-13T00:20:52.069155957+00:00 stderr F I1213 00:20:52.069143 18 flags.go:64] FLAG: --authorization-config=""
2025-12-13T00:20:52.069195748+00:00 stderr F I1213 00:20:52.069148 18 flags.go:64] FLAG: --authorization-mode="[Scope,SystemMasters,RBAC,Node]"
2025-12-13T00:20:52.069195748+00:00 stderr F I1213 00:20:52.069160 18 flags.go:64] FLAG: --authorization-policy-file=""
2025-12-13T00:20:52.069195748+00:00 stderr F I1213 00:20:52.069164 18 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
2025-12-13T00:20:52.069195748+00:00 stderr F I1213 00:20:52.069168 18 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
2025-12-13T00:20:52.069195748+00:00 stderr F I1213 00:20:52.069173 18 flags.go:64] FLAG: --authorization-webhook-config-file=""
2025-12-13T00:20:52.069195748+00:00 stderr F I1213 00:20:52.069178 18 flags.go:64] FLAG: --authorization-webhook-version="v1beta1"
2025-12-13T00:20:52.069195748+00:00 stderr F I1213 00:20:52.069182 18 flags.go:64] FLAG: --bind-address="0.0.0.0"
2025-12-13T00:20:52.069206068+00:00 stderr F I1213 00:20:52.069187 18 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes"
2025-12-13T00:20:52.069206068+00:00 stderr F I1213 00:20:52.069192 18 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
2025-12-13T00:20:52.069213329+00:00 stderr F I1213 00:20:52.069198 18 flags.go:64] FLAG: --cloud-config=""
2025-12-13T00:20:52.069249499+00:00 stderr F I1213 00:20:52.069203 18 flags.go:64] FLAG: --cloud-provider=""
2025-12-13T00:20:52.069249499+00:00 stderr F I1213 00:20:52.069208 18 flags.go:64] FLAG: --cloud-provider-gce-l7lb-src-cidrs="130.211.0.0/22,35.191.0.0/16"
2025-12-13T00:20:52.069249499+00:00 stderr F I1213 00:20:52.069221 18 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
2025-12-13T00:20:52.069249499+00:00 stderr F I1213 00:20:52.069228 18 flags.go:64] FLAG: --contention-profiling="false"
2025-12-13T00:20:52.069249499+00:00 stderr F I1213 00:20:52.069233 18 flags.go:64] FLAG: --cors-allowed-origins="[//127\\.0\\.0\\.1(:|$),//localhost(:|$)]"
2025-12-13T00:20:52.069249499+00:00 stderr F I1213 00:20:52.069239 18 flags.go:64] FLAG: --debug-socket-path=""
2025-12-13T00:20:52.069249499+00:00 stderr F I1213 00:20:52.069243 18 flags.go:64] FLAG: --default-not-ready-toleration-seconds="300"
2025-12-13T00:20:52.069268550+00:00 stderr F I1213 00:20:52.069248 18 flags.go:64] FLAG: --default-unreachable-toleration-seconds="300"
2025-12-13T00:20:52.069268550+00:00 stderr F I1213 00:20:52.069253 18 flags.go:64] FLAG: --default-watch-cache-size="100"
2025-12-13T00:20:52.069268550+00:00 stderr F I1213 00:20:52.069258 18 flags.go:64] FLAG: --delete-collection-workers="1"
2025-12-13T00:20:52.069315021+00:00 stderr F I1213 00:20:52.069262 18 flags.go:64] FLAG: --disable-admission-plugins="[]"
2025-12-13T00:20:52.069315021+00:00 stderr F I1213 00:20:52.069277 18 flags.go:64] FLAG: --disabled-metrics="[]"
2025-12-13T00:20:52.069315021+00:00 stderr F I1213 00:20:52.069282 18 flags.go:64] FLAG: --egress-selector-config-file=""
2025-12-13T00:20:52.069379194+00:00 stderr F I1213 00:20:52.069287 18 flags.go:64] FLAG:
--enable-admission-plugins="[CertificateApproval,CertificateSigning,CertificateSubjectRestriction,DefaultIngressClass,DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,MutatingAdmissionWebhook,NamespaceLifecycle,NodeRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,PersistentVolumeLabel,PodNodeSelector,PodTolerationRestriction,Priority,ResourceQuota,RuntimeClass,ServiceAccount,StorageObjectInUseProtection,TaintNodesByCondition,ValidatingAdmissionWebhook,ValidatingAdmissionPolicy,authorization.openshift.io/RestrictSubjectBindings,authorization.openshift.io/ValidateRoleBindingRestriction,config.openshift.io/DenyDeleteClusterConfiguration,config.openshift.io/ValidateAPIServer,config.openshift.io/ValidateAuthentication,config.openshift.io/ValidateConsole,config.openshift.io/ValidateFeatureGate,config.openshift.io/ValidateImage,config.openshift.io/ValidateOAuth,config.openshift.io/ValidateProject,config.openshift.io/ValidateScheduler,image.openshift.io/ImagePolicy,network.openshift.io/ExternalIPRanger,network.openshift.io/RestrictedEndpointsAdmission,quota.openshift.io/ClusterResourceQuota,quota.openshift.io/ValidateClusterResourceQuota,route.openshift.io/IngressAdmission,scheduling.openshift.io/OriginPodNodeEnvironment,security.openshift.io/DefaultSecurityContextConstraints,security.openshift.io/SCCExecRestrictions,security.openshift.io/SecurityContextConstraint,security.openshift.io/ValidateSecurityContextConstraints,storage.openshift.io/CSIInlineVolumeSecurity]" 2025-12-13T00:20:52.069379194+00:00 stderr F I1213 00:20:52.069344 18 flags.go:64] FLAG: --enable-aggregator-routing="true" 2025-12-13T00:20:52.069379194+00:00 stderr F I1213 00:20:52.069349 18 flags.go:64] FLAG: --enable-bootstrap-token-auth="false" 2025-12-13T00:20:52.069379194+00:00 stderr F I1213 00:20:52.069354 18 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-12-13T00:20:52.069379194+00:00 stderr F I1213 00:20:52.069358 18 flags.go:64] FLAG: 
--enable-logs-handler="false" 2025-12-13T00:20:52.069379194+00:00 stderr F I1213 00:20:52.069363 18 flags.go:64] FLAG: --enable-priority-and-fairness="true" 2025-12-13T00:20:52.069379194+00:00 stderr F I1213 00:20:52.069367 18 flags.go:64] FLAG: --encryption-provider-config="" 2025-12-13T00:20:52.069379194+00:00 stderr F I1213 00:20:52.069372 18 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false" 2025-12-13T00:20:52.069390344+00:00 stderr F I1213 00:20:52.069377 18 flags.go:64] FLAG: --endpoint-reconciler-type="lease" 2025-12-13T00:20:52.069417345+00:00 stderr F I1213 00:20:52.069381 18 flags.go:64] FLAG: --etcd-cafile="/etc/kubernetes/static-pod-resources/configmaps/etcd-serving-ca/ca-bundle.crt" 2025-12-13T00:20:52.069417345+00:00 stderr F I1213 00:20:52.069391 18 flags.go:64] FLAG: --etcd-certfile="/etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.crt" 2025-12-13T00:20:52.069417345+00:00 stderr F I1213 00:20:52.069396 18 flags.go:64] FLAG: --etcd-compaction-interval="5m0s" 2025-12-13T00:20:52.069417345+00:00 stderr F I1213 00:20:52.069401 18 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s" 2025-12-13T00:20:52.069417345+00:00 stderr F I1213 00:20:52.069406 18 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s" 2025-12-13T00:20:52.069430375+00:00 stderr F I1213 00:20:52.069410 18 flags.go:64] FLAG: --etcd-healthcheck-timeout="9s" 2025-12-13T00:20:52.069430375+00:00 stderr F I1213 00:20:52.069415 18 flags.go:64] FLAG: --etcd-keyfile="/etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.key" 2025-12-13T00:20:52.069430375+00:00 stderr F I1213 00:20:52.069420 18 flags.go:64] FLAG: --etcd-prefix="kubernetes.io" 2025-12-13T00:20:52.069438095+00:00 stderr F I1213 00:20:52.069425 18 flags.go:64] FLAG: --etcd-readycheck-timeout="9s" 2025-12-13T00:20:52.069490427+00:00 stderr F I1213 00:20:52.069429 18 flags.go:64] FLAG: --etcd-servers="[https://192.168.126.11:2379,https://localhost:2379]" 
2025-12-13T00:20:52.069519028+00:00 stderr F I1213 00:20:52.069473 18 flags.go:64] FLAG: --etcd-servers-overrides="[]" 2025-12-13T00:20:52.069519028+00:00 stderr F I1213 00:20:52.069491 18 flags.go:64] FLAG: --event-ttl="3h0m0s" 2025-12-13T00:20:52.069519028+00:00 stderr F I1213 00:20:52.069496 18 flags.go:64] FLAG: --external-hostname="" 2025-12-13T00:20:52.069555789+00:00 stderr F I1213 00:20:52.069500 18 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-12-13T00:20:52.069555789+00:00 stderr F I1213 00:20:52.069529 18 flags.go:64] FLAG: --goaway-chance="0" 2025-12-13T00:20:52.069555789+00:00 stderr F I1213 00:20:52.069535 18 flags.go:64] FLAG: --help="false" 2025-12-13T00:20:52.069555789+00:00 stderr F I1213 00:20:52.069540 18 flags.go:64] FLAG: --http2-max-streams-per-connection="2000" 2025-12-13T00:20:52.069593630+00:00 stderr F I1213 00:20:52.069544 18 flags.go:64] FLAG: --kubelet-certificate-authority="/etc/kubernetes/static-pod-resources/configmaps/kubelet-serving-ca/ca-bundle.crt" 2025-12-13T00:20:52.069593630+00:00 stderr F I1213 00:20:52.069555 18 flags.go:64] FLAG: --kubelet-client-certificate="/etc/kubernetes/static-pod-certs/secrets/kubelet-client/tls.crt" 2025-12-13T00:20:52.069593630+00:00 stderr F I1213 00:20:52.069560 18 flags.go:64] FLAG: --kubelet-client-key="/etc/kubernetes/static-pod-certs/secrets/kubelet-client/tls.key" 2025-12-13T00:20:52.069593630+00:00 stderr F I1213 00:20:52.069565 18 flags.go:64] FLAG: --kubelet-port="10250" 2025-12-13T00:20:52.069593630+00:00 stderr F I1213 00:20:52.069571 18 flags.go:64] FLAG: 
--kubelet-preferred-address-types="[InternalIP]" 2025-12-13T00:20:52.069593630+00:00 stderr F I1213 00:20:52.069576 18 flags.go:64] FLAG: --kubelet-read-only-port="0" 2025-12-13T00:20:52.069593630+00:00 stderr F I1213 00:20:52.069581 18 flags.go:64] FLAG: --kubelet-timeout="5s" 2025-12-13T00:20:52.069593630+00:00 stderr F I1213 00:20:52.069586 18 flags.go:64] FLAG: --kubernetes-service-node-port="0" 2025-12-13T00:20:52.069603800+00:00 stderr F I1213 00:20:52.069590 18 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" 2025-12-13T00:20:52.069603800+00:00 stderr F I1213 00:20:52.069595 18 flags.go:64] FLAG: --livez-grace-period="0s" 2025-12-13T00:20:52.069611210+00:00 stderr F I1213 00:20:52.069600 18 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-12-13T00:20:52.069618160+00:00 stderr F I1213 00:20:52.069605 18 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-12-13T00:20:52.069618160+00:00 stderr F I1213 00:20:52.069611 18 flags.go:64] FLAG: --log-json-split-stream="false" 2025-12-13T00:20:52.069629131+00:00 stderr F I1213 00:20:52.069615 18 flags.go:64] FLAG: --logging-format="text" 2025-12-13T00:20:52.069629131+00:00 stderr F I1213 00:20:52.069620 18 flags.go:64] FLAG: --max-connection-bytes-per-sec="0" 2025-12-13T00:20:52.069636521+00:00 stderr F I1213 00:20:52.069624 18 flags.go:64] FLAG: --max-mutating-requests-inflight="1000" 2025-12-13T00:20:52.069643781+00:00 stderr F I1213 00:20:52.069629 18 flags.go:64] FLAG: --max-requests-inflight="3000" 2025-12-13T00:20:52.069643781+00:00 stderr F I1213 00:20:52.069634 18 flags.go:64] FLAG: --min-request-timeout="3600" 2025-12-13T00:20:52.069643781+00:00 stderr F I1213 00:20:52.069638 18 flags.go:64] FLAG: --oidc-ca-file="" 2025-12-13T00:20:52.069651361+00:00 stderr F I1213 00:20:52.069643 18 flags.go:64] FLAG: --oidc-client-id="" 2025-12-13T00:20:52.069658321+00:00 stderr F I1213 00:20:52.069648 18 flags.go:64] FLAG: --oidc-groups-claim="" 2025-12-13T00:20:52.069658321+00:00 stderr F I1213 00:20:52.069652 
18 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-12-13T00:20:52.069665372+00:00 stderr F I1213 00:20:52.069656 18 flags.go:64] FLAG: --oidc-issuer-url="" 2025-12-13T00:20:52.069672152+00:00 stderr F I1213 00:20:52.069661 18 flags.go:64] FLAG: --oidc-required-claim="" 2025-12-13T00:20:52.069679102+00:00 stderr F I1213 00:20:52.069666 18 flags.go:64] FLAG: --oidc-signing-algs="[RS256]" 2025-12-13T00:20:52.069679102+00:00 stderr F I1213 00:20:52.069673 18 flags.go:64] FLAG: --oidc-username-claim="sub" 2025-12-13T00:20:52.069686152+00:00 stderr F I1213 00:20:52.069678 18 flags.go:64] FLAG: --oidc-username-prefix="" 2025-12-13T00:20:52.069693212+00:00 stderr F I1213 00:20:52.069682 18 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-12-13T00:20:52.069699992+00:00 stderr F I1213 00:20:52.069688 18 flags.go:64] FLAG: --peer-advertise-ip="" 2025-12-13T00:20:52.069699992+00:00 stderr F I1213 00:20:52.069692 18 flags.go:64] FLAG: --peer-advertise-port="" 2025-12-13T00:20:52.069707313+00:00 stderr F I1213 00:20:52.069696 18 flags.go:64] FLAG: --peer-ca-file="" 2025-12-13T00:20:52.069714103+00:00 stderr F I1213 00:20:52.069701 18 flags.go:64] FLAG: --permit-address-sharing="true" 2025-12-13T00:20:52.069714103+00:00 stderr F I1213 00:20:52.069705 18 flags.go:64] FLAG: --permit-port-sharing="false" 2025-12-13T00:20:52.069721173+00:00 stderr F I1213 00:20:52.069711 18 flags.go:64] FLAG: --profiling="true" 2025-12-13T00:20:52.069727963+00:00 stderr F I1213 00:20:52.069715 18 flags.go:64] FLAG: --proxy-client-cert-file="/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.crt" 2025-12-13T00:20:52.069769094+00:00 stderr F I1213 00:20:52.069722 18 flags.go:64] FLAG: --proxy-client-key-file="/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.key" 2025-12-13T00:20:52.069769094+00:00 stderr F I1213 00:20:52.069736 18 flags.go:64] FLAG: --request-timeout="1m0s" 2025-12-13T00:20:52.069769094+00:00 
stderr F I1213 00:20:52.069740 18 flags.go:64] FLAG: --requestheader-allowed-names="[kube-apiserver-proxy,system:kube-apiserver-proxy,system:openshift-aggregator]" 2025-12-13T00:20:52.069769094+00:00 stderr F I1213 00:20:52.069748 18 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-12-13T00:20:52.069769094+00:00 stderr F I1213 00:20:52.069753 18 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[X-Remote-Extra-]" 2025-12-13T00:20:52.069769094+00:00 stderr F I1213 00:20:52.069759 18 flags.go:64] FLAG: --requestheader-group-headers="[X-Remote-Group]" 2025-12-13T00:20:52.069803775+00:00 stderr F I1213 00:20:52.069765 18 flags.go:64] FLAG: --requestheader-username-headers="[X-Remote-User]" 2025-12-13T00:20:52.069803775+00:00 stderr F I1213 00:20:52.069776 18 flags.go:64] FLAG: --runtime-config="" 2025-12-13T00:20:52.069803775+00:00 stderr F I1213 00:20:52.069782 18 flags.go:64] FLAG: --secure-port="6443" 2025-12-13T00:20:52.069803775+00:00 stderr F I1213 00:20:52.069787 18 flags.go:64] FLAG: --send-retry-after-while-not-ready-once="true" 2025-12-13T00:20:52.069803775+00:00 stderr F I1213 00:20:52.069791 18 flags.go:64] FLAG: --service-account-extend-token-expiration="true" 2025-12-13T00:20:52.069833076+00:00 stderr F I1213 00:20:52.069796 18 flags.go:64] FLAG: --service-account-issuer="[https://kubernetes.default.svc]" 2025-12-13T00:20:52.069833076+00:00 stderr F I1213 00:20:52.069806 18 flags.go:64] FLAG: --service-account-jwks-uri="https://api.crc.testing:6443/openid/v1/jwks" 2025-12-13T00:20:52.069833076+00:00 stderr F I1213 00:20:52.069811 18 flags.go:64] FLAG: 
--service-account-key-file="[/etc/kubernetes/static-pod-resources/configmaps/sa-token-signing-certs/service-account-001.pub,/etc/kubernetes/static-pod-resources/configmaps/sa-token-signing-certs/service-account-002.pub,/etc/kubernetes/static-pod-resources/configmaps/sa-token-signing-certs/service-account-003.pub,/etc/kubernetes/static-pod-resources/configmaps/bound-sa-token-signing-certs/service-account-001.pub]" 2025-12-13T00:20:52.069833076+00:00 stderr F I1213 00:20:52.069824 18 flags.go:64] FLAG: --service-account-lookup="true" 2025-12-13T00:20:52.069842856+00:00 stderr F I1213 00:20:52.069829 18 flags.go:64] FLAG: --service-account-max-token-expiration="0s" 2025-12-13T00:20:52.069842856+00:00 stderr F I1213 00:20:52.069834 18 flags.go:64] FLAG: --service-account-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/bound-service-account-signing-key/service-account.key" 2025-12-13T00:20:52.069850166+00:00 stderr F I1213 00:20:52.069840 18 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23" 2025-12-13T00:20:52.069867327+00:00 stderr F I1213 00:20:52.069845 18 flags.go:64] FLAG: --service-node-port-range="30000-32767" 2025-12-13T00:20:52.069867327+00:00 stderr F I1213 00:20:52.069853 18 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-12-13T00:20:52.069867327+00:00 stderr F I1213 00:20:52.069858 18 flags.go:64] FLAG: --shutdown-delay-duration="0s" 2025-12-13T00:20:52.069875097+00:00 stderr F I1213 00:20:52.069862 18 flags.go:64] FLAG: --shutdown-send-retry-after="true" 2025-12-13T00:20:52.069875097+00:00 stderr F I1213 00:20:52.069867 18 flags.go:64] FLAG: --shutdown-watch-termination-grace-period="0s" 2025-12-13T00:20:52.069882447+00:00 stderr F I1213 00:20:52.069871 18 flags.go:64] FLAG: --storage-backend="etcd3" 2025-12-13T00:20:52.069889427+00:00 stderr F I1213 00:20:52.069876 18 flags.go:64] FLAG: --storage-media-type="application/vnd.kubernetes.protobuf" 2025-12-13T00:20:52.069889427+00:00 stderr F I1213 00:20:52.069881 18 
flags.go:64] FLAG: --strict-transport-security-directives="[max-age=31536000,includeSubDomains,preload]" 2025-12-13T00:20:52.069898248+00:00 stderr F I1213 00:20:52.069887 18 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt" 2025-12-13T00:20:52.069945009+00:00 stderr F I1213 00:20:52.069893 18 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-12-13T00:20:52.069945009+00:00 stderr F I1213 00:20:52.069907 18 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-12-13T00:20:52.069945009+00:00 stderr F I1213 00:20:52.069912 18 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2025-12-13T00:20:52.070006411+00:00 stderr F I1213 00:20:52.069918 18 flags.go:64] FLAG: --tls-sni-cert-key="[/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt,/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key;/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt,/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key;/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt,/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key;/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt,/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key;/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt,/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key]" 2025-12-13T00:20:52.070006411+00:00 stderr F I1213 00:20:52.069978 18 
flags.go:64] FLAG: --token-auth-file="" 2025-12-13T00:20:52.070006411+00:00 stderr F I1213 00:20:52.069983 18 flags.go:64] FLAG: --tracing-config-file="" 2025-12-13T00:20:52.070006411+00:00 stderr F I1213 00:20:52.069988 18 flags.go:64] FLAG: --v="2" 2025-12-13T00:20:52.070006411+00:00 stderr F I1213 00:20:52.069994 18 flags.go:64] FLAG: --version="false" 2025-12-13T00:20:52.070006411+00:00 stderr F I1213 00:20:52.070000 18 flags.go:64] FLAG: --vmodule="" 2025-12-13T00:20:52.070020351+00:00 stderr F I1213 00:20:52.070005 18 flags.go:64] FLAG: --watch-cache="true" 2025-12-13T00:20:52.070020351+00:00 stderr F I1213 00:20:52.070009 18 flags.go:64] FLAG: --watch-cache-sizes="[]" 2025-12-13T00:20:52.070124254+00:00 stderr F I1213 00:20:52.070089 18 options.go:222] external host was not specified, using 192.168.126.11 2025-12-13T00:20:52.072165348+00:00 stderr F I1213 00:20:52.072072 18 server.go:189] Version: v1.29.5+29c95f3 2025-12-13T00:20:52.072165348+00:00 stderr F I1213 00:20:52.072134 18 server.go:191] "Golang settings" GOGC="100" GOMAXPROCS="" GOTRACEBACK="" 2025-12-13T00:20:52.073081693+00:00 stderr F I1213 00:20:52.073012 18 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2025-12-13T00:20:52.073472034+00:00 stderr F I1213 00:20:52.073408 18 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key" 2025-12-13T00:20:52.074337307+00:00 stderr F I1213 00:20:52.074274 18 dynamic_serving_content.go:113] "Loaded a new cert/key pair" 
name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" 2025-12-13T00:20:52.075218401+00:00 stderr F I1213 00:20:52.075151 18 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key" 2025-12-13T00:20:52.075988792+00:00 stderr F I1213 00:20:52.075896 18 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" 2025-12-13T00:20:52.076917677+00:00 stderr F I1213 00:20:52.076853 18 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key" 2025-12-13T00:20:52.350438418+00:00 stderr F I1213 00:20:52.350321 18 apf_controller.go:292] NewTestableController "Controller" with serverConcurrencyLimit=4000, name=Controller, asFieldManager="api-priority-and-fairness-config-consumer-v1" 2025-12-13T00:20:52.350606142+00:00 stderr F I1213 00:20:52.350435 18 apf_controller.go:861] Introducing queues for priority level "exempt": config={"type":"Exempt","exempt":{"nominalConcurrencyShares":0,"lendablePercent":0}}, nominalCL=0, lendableCL=0, borrowingCL=4000, currentCL=0, quiescing=false (shares=0xc0004cef00, shareSum=5) 2025-12-13T00:20:52.350606142+00:00 stderr F I1213 00:20:52.350569 18 apf_controller.go:861] Introducing queues for priority level "catch-all": 
config={"type":"Limited","limited":{"nominalConcurrencyShares":5,"limitResponse":{"type":"Reject"},"lendablePercent":0}}, nominalCL=4000, lendableCL=0, borrowingCL=4000, currentCL=4000, quiescing=false (shares=0xc0004cfbc0, shareSum=5) 2025-12-13T00:20:52.360509079+00:00 stderr F I1213 00:20:52.360408 18 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-12-13T00:20:52.360545030+00:00 stderr F I1213 00:20:52.360497 18 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-12-13T00:20:52.391084774+00:00 stderr F I1213 00:20:52.390899 18 shared_informer.go:311] Waiting for caches to sync for node_authorizer 2025-12-13T00:20:52.391924188+00:00 stderr F I1213 00:20:52.391864 18 audit.go:340] Using audit backend: ignoreErrors 2025-12-13T00:20:52.406886941+00:00 stderr F I1213 00:20:52.406777 18 store.go:1579] "Monitoring resource count at path" resource="events" path="//events" 2025-12-13T00:20:52.407122457+00:00 stderr F I1213 00:20:52.407061 18 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-12-13T00:20:52.407174109+00:00 stderr F I1213 00:20:52.407139 18 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-12-13T00:20:52.409982364+00:00 stderr F W1213 00:20:52.409640 18 admission.go:76] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. 
2025-12-13T00:20:52.409982364+00:00 stderr F I1213 00:20:52.409839 18 admission.go:47] Admission plugin "autoscaling.openshift.io/ClusterResourceOverride" is not configured so it will be disabled. 2025-12-13T00:20:52.410045906+00:00 stderr F I1213 00:20:52.409965 18 admission.go:33] Admission plugin "autoscaling.openshift.io/RunOnceDuration" is not configured so it will be disabled. 2025-12-13T00:20:52.410045906+00:00 stderr F I1213 00:20:52.409976 18 admission.go:32] Admission plugin "scheduling.openshift.io/PodNodeConstraints" is not configured so it will be disabled. 2025-12-13T00:20:52.415926565+00:00 stderr F I1213 00:20:52.415837 18 plugins.go:157] Loaded 23 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,PodNodeSelector,Priority,DefaultTolerationSeconds,PodTolerationRestriction,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,autoscaling.openshift.io/ManagementCPUsOverride,scheduling.openshift.io/OriginPodNodeEnvironment,image.openshift.io/ImagePolicy,security.openshift.io/SecurityContextConstraint,route.openshift.io/RouteHostAssignment,autoscaling.openshift.io/MixedCPUs,route.openshift.io/DefaultRoute,security.openshift.io/DefaultSecurityContextConstraints,MutatingAdmissionWebhook. 
2025-12-13T00:20:52.415926565+00:00 stderr F I1213 00:20:52.415866 18 plugins.go:160] Loaded 47 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,PodNodeSelector,Priority,PodTolerationRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,autoscaling.openshift.io/ManagementCPUsOverride,authorization.openshift.io/RestrictSubjectBindings,scheduling.openshift.io/OriginPodNodeEnvironment,network.openshift.io/ExternalIPRanger,network.openshift.io/RestrictedEndpointsAdmission,image.openshift.io/ImagePolicy,security.openshift.io/SecurityContextConstraint,security.openshift.io/SCCExecRestrictions,route.openshift.io/IngressAdmission,storage.openshift.io/CSIInlineVolumeSecurity,autoscaling.openshift.io/ManagedNode,config.openshift.io/ValidateAPIServer,config.openshift.io/ValidateAuthentication,config.openshift.io/ValidateFeatureGate,config.openshift.io/ValidateConsole,operator.openshift.io/ValidateDNS,config.openshift.io/ValidateImage,config.openshift.io/ValidateOAuth,config.openshift.io/ValidateProject,config.openshift.io/DenyDeleteClusterConfiguration,operator.openshift.io/DenyDeleteClusterOperators,config.openshift.io/ValidateScheduler,quota.openshift.io/ValidateClusterResourceQuota,security.openshift.io/ValidateSecurityContextConstraints,authorization.openshift.io/ValidateRoleBindingRestriction,config.openshift.io/ValidateNetwork,config.openshift.io/ValidateAPIRequestCount,config.openshift.io/RestrictExtremeWorkerLatencyProfile,route.openshift.io/ValidateRoute,operator.openshift.io/ValidateKubeControllerManager,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota,quota.openshift.io/ClusterResourceQuota. 
2025-12-13T00:20:52.416290964+00:00 stderr F I1213 00:20:52.416234 18 instance.go:297] Using reconciler: lease 2025-12-13T00:20:52.435118583+00:00 stderr F I1213 00:20:52.434412 18 store.go:1579] "Monitoring resource count at path" resource="customresourcedefinitions.apiextensions.k8s.io" path="//apiextensions.k8s.io/customresourcedefinitions" 2025-12-13T00:20:52.437146538+00:00 stderr F I1213 00:20:52.436557 18 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager 2025-12-13T00:20:52.437146538+00:00 stderr F W1213 00:20:52.436576 18 genericapiserver.go:774] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources. 2025-12-13T00:20:52.463805867+00:00 stderr F I1213 00:20:52.463697 18 store.go:1579] "Monitoring resource count at path" resource="events" path="//events" 2025-12-13T00:20:52.471635918+00:00 stderr F I1213 00:20:52.471537 18 store.go:1579] "Monitoring resource count at path" resource="resourcequotas" path="//resourcequotas" 2025-12-13T00:20:52.473117508+00:00 stderr F I1213 00:20:52.473049 18 cacher.go:460] cacher (resourcequotas): initialized 2025-12-13T00:20:52.473117508+00:00 stderr F I1213 00:20:52.473093 18 reflector.go:351] Caches populated for *core.ResourceQuota from storage/cacher.go:/resourcequotas 2025-12-13T00:20:52.480106477+00:00 stderr F I1213 00:20:52.480001 18 store.go:1579] "Monitoring resource count at path" resource="secrets" path="//secrets" 2025-12-13T00:20:52.491352950+00:00 stderr F I1213 00:20:52.491233 18 store.go:1579] "Monitoring resource count at path" resource="configmaps" path="//configmaps" 2025-12-13T00:20:52.507242599+00:00 stderr F I1213 00:20:52.507123 18 store.go:1579] "Monitoring resource count at path" resource="namespaces" path="//namespaces" 2025-12-13T00:20:52.527107485+00:00 stderr F I1213 00:20:52.516046 18 cacher.go:460] cacher (namespaces): initialized 2025-12-13T00:20:52.527107485+00:00 stderr F I1213 00:20:52.516076 18 reflector.go:351] Caches populated for 
*core.Namespace from storage/cacher.go:/namespaces 2025-12-13T00:20:52.527107485+00:00 stderr F I1213 00:20:52.517679 18 store.go:1579] "Monitoring resource count at path" resource="serviceaccounts" path="//serviceaccounts" 2025-12-13T00:20:52.531090932+00:00 stderr F I1213 00:20:52.530322 18 cacher.go:460] cacher (serviceaccounts): initialized 2025-12-13T00:20:52.531090932+00:00 stderr F I1213 00:20:52.530360 18 reflector.go:351] Caches populated for *core.ServiceAccount from storage/cacher.go:/serviceaccounts 2025-12-13T00:20:52.533400135+00:00 stderr F I1213 00:20:52.533337 18 store.go:1579] "Monitoring resource count at path" resource="podtemplates" path="//podtemplates" 2025-12-13T00:20:52.534546696+00:00 stderr F I1213 00:20:52.534485 18 cacher.go:460] cacher (podtemplates): initialized 2025-12-13T00:20:52.534563307+00:00 stderr F I1213 00:20:52.534537 18 reflector.go:351] Caches populated for *core.PodTemplate from storage/cacher.go:/podtemplates 2025-12-13T00:20:52.536675803+00:00 stderr F I1213 00:20:52.536606 18 cacher.go:460] cacher (secrets): initialized 2025-12-13T00:20:52.536675803+00:00 stderr F I1213 00:20:52.536638 18 reflector.go:351] Caches populated for *core.Secret from storage/cacher.go:/secrets 2025-12-13T00:20:52.543008235+00:00 stderr F I1213 00:20:52.542870 18 store.go:1579] "Monitoring resource count at path" resource="limitranges" path="//limitranges" 2025-12-13T00:20:52.544439613+00:00 stderr F I1213 00:20:52.544370 18 cacher.go:460] cacher (limitranges): initialized 2025-12-13T00:20:52.544819233+00:00 stderr F I1213 00:20:52.544644 18 reflector.go:351] Caches populated for *core.LimitRange from storage/cacher.go:/limitranges 2025-12-13T00:20:52.554975707+00:00 stderr F I1213 00:20:52.554716 18 store.go:1579] "Monitoring resource count at path" resource="persistentvolumes" path="//persistentvolumes" 2025-12-13T00:20:52.556324913+00:00 stderr F I1213 00:20:52.556258 18 cacher.go:460] cacher (persistentvolumes): initialized 
2025-12-13T00:20:52.556324913+00:00 stderr F I1213 00:20:52.556284 18 reflector.go:351] Caches populated for *core.PersistentVolume from storage/cacher.go:/persistentvolumes 2025-12-13T00:20:52.557833204+00:00 stderr F I1213 00:20:52.557763 18 cacher.go:460] cacher (configmaps): initialized 2025-12-13T00:20:52.557833204+00:00 stderr F I1213 00:20:52.557781 18 reflector.go:351] Caches populated for *core.ConfigMap from storage/cacher.go:/configmaps 2025-12-13T00:20:52.565176992+00:00 stderr F I1213 00:20:52.564585 18 store.go:1579] "Monitoring resource count at path" resource="persistentvolumeclaims" path="//persistentvolumeclaims" 2025-12-13T00:20:52.567047443+00:00 stderr F I1213 00:20:52.566862 18 cacher.go:460] cacher (persistentvolumeclaims): initialized 2025-12-13T00:20:52.567047443+00:00 stderr F I1213 00:20:52.566885 18 reflector.go:351] Caches populated for *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims 2025-12-13T00:20:52.576271761+00:00 stderr F I1213 00:20:52.572281 18 store.go:1579] "Monitoring resource count at path" resource="endpoints" path="//services/endpoints" 2025-12-13T00:20:52.576271761+00:00 stderr F I1213 00:20:52.574273 18 cacher.go:460] cacher (endpoints): initialized 2025-12-13T00:20:52.576271761+00:00 stderr F I1213 00:20:52.574291 18 reflector.go:351] Caches populated for *core.Endpoints from storage/cacher.go:/services/endpoints 2025-12-13T00:20:52.582043228+00:00 stderr F I1213 00:20:52.579992 18 store.go:1579] "Monitoring resource count at path" resource="nodes" path="//minions" 2025-12-13T00:20:52.582043228+00:00 stderr F I1213 00:20:52.581050 18 cacher.go:460] cacher (nodes): initialized 2025-12-13T00:20:52.582043228+00:00 stderr F I1213 00:20:52.581067 18 reflector.go:351] Caches populated for *core.Node from storage/cacher.go:/minions 2025-12-13T00:20:52.596242691+00:00 stderr F I1213 00:20:52.589363 18 store.go:1579] "Monitoring resource count at path" resource="pods" path="//pods" 
2025-12-13T00:20:52.596868458+00:00 stderr F I1213 00:20:52.596801 18 store.go:1579] "Monitoring resource count at path" resource="services" path="//services/specs"
2025-12-13T00:20:52.604236416+00:00 stderr F I1213 00:20:52.604098 18 cacher.go:460] cacher (services): initialized
2025-12-13T00:20:52.604236416+00:00 stderr F I1213 00:20:52.604122 18 reflector.go:351] Caches populated for *core.Service from storage/cacher.go:/services/specs
2025-12-13T00:20:52.607351461+00:00 stderr F I1213 00:20:52.607273 18 store.go:1579] "Monitoring resource count at path" resource="serviceaccounts" path="//serviceaccounts"
2025-12-13T00:20:52.607714391+00:00 stderr F I1213 00:20:52.607642 18 cacher.go:460] cacher (pods): initialized
2025-12-13T00:20:52.607714391+00:00 stderr F I1213 00:20:52.607667 18 reflector.go:351] Caches populated for *core.Pod from storage/cacher.go:/pods
2025-12-13T00:20:52.614222266+00:00 stderr F I1213 00:20:52.614004 18 cacher.go:460] cacher (serviceaccounts): initialized
2025-12-13T00:20:52.614222266+00:00 stderr F I1213 00:20:52.614028 18 reflector.go:351] Caches populated for *core.ServiceAccount from storage/cacher.go:/serviceaccounts
2025-12-13T00:20:52.619398366+00:00 stderr F I1213 00:20:52.619053 18 store.go:1579] "Monitoring resource count at path" resource="replicationcontrollers" path="//controllers"
2025-12-13T00:20:52.621020409+00:00 stderr F I1213 00:20:52.620071 18 instance.go:707] Enabling API group "".
2025-12-13T00:20:52.623109256+00:00 stderr F I1213 00:20:52.623027 18 cacher.go:460] cacher (replicationcontrollers): initialized
2025-12-13T00:20:52.623109256+00:00 stderr F I1213 00:20:52.623054 18 reflector.go:351] Caches populated for *core.ReplicationController from storage/cacher.go:/controllers
2025-12-13T00:20:52.634341399+00:00 stderr F I1213 00:20:52.634197 18 cacher.go:460] cacher (customresourcedefinitions.apiextensions.k8s.io): initialized
2025-12-13T00:20:52.634341399+00:00 stderr F I1213 00:20:52.634226 18 reflector.go:351] Caches populated for *apiextensions.CustomResourceDefinition from storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions
2025-12-13T00:20:52.644642717+00:00 stderr F I1213 00:20:52.644558 18 handler.go:275] Adding GroupVersion v1 to ResourceManager
2025-12-13T00:20:52.644772130+00:00 stderr F I1213 00:20:52.644725 18 instance.go:694] API group "internal.apiserver.k8s.io" is not enabled, skipping.
2025-12-13T00:20:52.644807511+00:00 stderr F I1213 00:20:52.644766 18 instance.go:707] Enabling API group "authentication.k8s.io".
2025-12-13T00:20:52.644851312+00:00 stderr F I1213 00:20:52.644811 18 instance.go:707] Enabling API group "authorization.k8s.io".
2025-12-13T00:20:52.652276283+00:00 stderr F I1213 00:20:52.652216 18 store.go:1579] "Monitoring resource count at path" resource="horizontalpodautoscalers.autoscaling" path="//horizontalpodautoscalers"
2025-12-13T00:20:52.653305091+00:00 stderr F I1213 00:20:52.653212 18 cacher.go:460] cacher (horizontalpodautoscalers.autoscaling): initialized
2025-12-13T00:20:52.653305091+00:00 stderr F I1213 00:20:52.653233 18 reflector.go:351] Caches populated for *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
2025-12-13T00:20:52.660243727+00:00 stderr F I1213 00:20:52.660156 18 store.go:1579] "Monitoring resource count at path" resource="horizontalpodautoscalers.autoscaling" path="//horizontalpodautoscalers"
2025-12-13T00:20:52.660243727+00:00 stderr F I1213 00:20:52.660213 18 instance.go:707] Enabling API group "autoscaling".
2025-12-13T00:20:52.664187764+00:00 stderr F I1213 00:20:52.664122 18 cacher.go:460] cacher (horizontalpodautoscalers.autoscaling): initialized
2025-12-13T00:20:52.664187764+00:00 stderr F I1213 00:20:52.664142 18 reflector.go:351] Caches populated for *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
2025-12-13T00:20:52.668757397+00:00 stderr F I1213 00:20:52.668682 18 store.go:1579] "Monitoring resource count at path" resource="jobs.batch" path="//jobs"
2025-12-13T00:20:52.670035582+00:00 stderr F I1213 00:20:52.669971 18 cacher.go:460] cacher (jobs.batch): initialized
2025-12-13T00:20:52.670035582+00:00 stderr F I1213 00:20:52.669999 18 reflector.go:351] Caches populated for *batch.Job from storage/cacher.go:/jobs
2025-12-13T00:20:52.675481229+00:00 stderr F I1213 00:20:52.675399 18 store.go:1579] "Monitoring resource count at path" resource="cronjobs.batch" path="//cronjobs"
2025-12-13T00:20:52.675524240+00:00 stderr F I1213 00:20:52.675479 18 instance.go:707] Enabling API group "batch".
2025-12-13T00:20:52.677077252+00:00 stderr F I1213 00:20:52.676750 18 cacher.go:460] cacher (cronjobs.batch): initialized
2025-12-13T00:20:52.677077252+00:00 stderr F I1213 00:20:52.676772 18 reflector.go:351] Caches populated for *batch.CronJob from storage/cacher.go:/cronjobs
2025-12-13T00:20:52.684480082+00:00 stderr F I1213 00:20:52.684395 18 store.go:1579] "Monitoring resource count at path" resource="certificatesigningrequests.certificates.k8s.io" path="//certificatesigningrequests"
2025-12-13T00:20:52.684480082+00:00 stderr F I1213 00:20:52.684429 18 instance.go:707] Enabling API group "certificates.k8s.io".
2025-12-13T00:20:52.685850109+00:00 stderr F I1213 00:20:52.685781 18 cacher.go:460] cacher (certificatesigningrequests.certificates.k8s.io): initialized
2025-12-13T00:20:52.685850109+00:00 stderr F I1213 00:20:52.685798 18 reflector.go:351] Caches populated for *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
2025-12-13T00:20:52.691346318+00:00 stderr F I1213 00:20:52.691277 18 store.go:1579] "Monitoring resource count at path" resource="leases.coordination.k8s.io" path="//leases"
2025-12-13T00:20:52.691346318+00:00 stderr F I1213 00:20:52.691303 18 instance.go:707] Enabling API group "coordination.k8s.io".
2025-12-13T00:20:52.693437983+00:00 stderr F I1213 00:20:52.693379 18 cacher.go:460] cacher (leases.coordination.k8s.io): initialized
2025-12-13T00:20:52.693437983+00:00 stderr F I1213 00:20:52.693400 18 reflector.go:351] Caches populated for *coordination.Lease from storage/cacher.go:/leases
2025-12-13T00:20:52.699010514+00:00 stderr F I1213 00:20:52.698921 18 store.go:1579] "Monitoring resource count at path" resource="endpointslices.discovery.k8s.io" path="//endpointslices"
2025-12-13T00:20:52.699010514+00:00 stderr F I1213 00:20:52.698967 18 instance.go:707] Enabling API group "discovery.k8s.io".
2025-12-13T00:20:52.703582197+00:00 stderr F I1213 00:20:52.701199 18 cacher.go:460] cacher (endpointslices.discovery.k8s.io): initialized
2025-12-13T00:20:52.703582197+00:00 stderr F I1213 00:20:52.701215 18 reflector.go:351] Caches populated for *discovery.EndpointSlice from storage/cacher.go:/endpointslices
2025-12-13T00:20:52.705357556+00:00 stderr F I1213 00:20:52.705303 18 store.go:1579] "Monitoring resource count at path" resource="networkpolicies.networking.k8s.io" path="//networkpolicies"
2025-12-13T00:20:52.706427754+00:00 stderr F I1213 00:20:52.706322 18 cacher.go:460] cacher (networkpolicies.networking.k8s.io): initialized
2025-12-13T00:20:52.706427754+00:00 stderr F I1213 00:20:52.706342 18 reflector.go:351] Caches populated for *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
2025-12-13T00:20:52.711619314+00:00 stderr F I1213 00:20:52.711556 18 store.go:1579] "Monitoring resource count at path" resource="ingresses.networking.k8s.io" path="//ingress"
2025-12-13T00:20:52.712898558+00:00 stderr F I1213 00:20:52.712834 18 cacher.go:460] cacher (ingresses.networking.k8s.io): initialized
2025-12-13T00:20:52.712898558+00:00 stderr F I1213 00:20:52.712853 18 reflector.go:351] Caches populated for *networking.Ingress from storage/cacher.go:/ingress
2025-12-13T00:20:52.719791575+00:00 stderr F I1213 00:20:52.719696 18 store.go:1579] "Monitoring resource count at path" resource="ingressclasses.networking.k8s.io" path="//ingressclasses"
2025-12-13T00:20:52.719791575+00:00 stderr F I1213 00:20:52.719750 18 instance.go:707] Enabling API group "networking.k8s.io".
2025-12-13T00:20:52.720698839+00:00 stderr F I1213 00:20:52.720649 18 cacher.go:460] cacher (ingressclasses.networking.k8s.io): initialized
2025-12-13T00:20:52.720698839+00:00 stderr F I1213 00:20:52.720669 18 reflector.go:351] Caches populated for *networking.IngressClass from storage/cacher.go:/ingressclasses
2025-12-13T00:20:52.726766943+00:00 stderr F I1213 00:20:52.726709 18 store.go:1579] "Monitoring resource count at path" resource="runtimeclasses.node.k8s.io" path="//runtimeclasses"
2025-12-13T00:20:52.726766943+00:00 stderr F I1213 00:20:52.726742 18 instance.go:707] Enabling API group "node.k8s.io".
2025-12-13T00:20:52.727748400+00:00 stderr F I1213 00:20:52.727689 18 cacher.go:460] cacher (runtimeclasses.node.k8s.io): initialized
2025-12-13T00:20:52.727748400+00:00 stderr F I1213 00:20:52.727710 18 reflector.go:351] Caches populated for *node.RuntimeClass from storage/cacher.go:/runtimeclasses
2025-12-13T00:20:52.734783729+00:00 stderr F I1213 00:20:52.734681 18 store.go:1579] "Monitoring resource count at path" resource="poddisruptionbudgets.policy" path="//poddisruptionbudgets"
2025-12-13T00:20:52.734783729+00:00 stderr F I1213 00:20:52.734727 18 instance.go:707] Enabling API group "policy".
2025-12-13T00:20:52.741099529+00:00 stderr F I1213 00:20:52.741025 18 cacher.go:460] cacher (poddisruptionbudgets.policy): initialized
2025-12-13T00:20:52.741099529+00:00 stderr F I1213 00:20:52.741051 18 reflector.go:351] Caches populated for *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
2025-12-13T00:20:52.741596913+00:00 stderr F I1213 00:20:52.741535 18 store.go:1579] "Monitoring resource count at path" resource="roles.rbac.authorization.k8s.io" path="//roles"
2025-12-13T00:20:52.745379175+00:00 stderr F I1213 00:20:52.745313 18 cacher.go:460] cacher (roles.rbac.authorization.k8s.io): initialized
2025-12-13T00:20:52.745379175+00:00 stderr F I1213 00:20:52.745337 18 reflector.go:351] Caches populated for *rbac.Role from storage/cacher.go:/roles
2025-12-13T00:20:52.748732766+00:00 stderr F I1213 00:20:52.748544 18 store.go:1579] "Monitoring resource count at path" resource="rolebindings.rbac.authorization.k8s.io" path="//rolebindings"
2025-12-13T00:20:52.752740793+00:00 stderr F I1213 00:20:52.752669 18 cacher.go:460] cacher (rolebindings.rbac.authorization.k8s.io): initialized
2025-12-13T00:20:52.752740793+00:00 stderr F I1213 00:20:52.752691 18 reflector.go:351] Caches populated for *rbac.RoleBinding from storage/cacher.go:/rolebindings
2025-12-13T00:20:52.754731927+00:00 stderr F I1213 00:20:52.754672 18 store.go:1579] "Monitoring resource count at path" resource="clusterroles.rbac.authorization.k8s.io" path="//clusterroles"
2025-12-13T00:20:52.761747477+00:00 stderr F I1213 00:20:52.761675 18 store.go:1579] "Monitoring resource count at path" resource="clusterrolebindings.rbac.authorization.k8s.io" path="//clusterrolebindings"
2025-12-13T00:20:52.761769428+00:00 stderr F I1213 00:20:52.761731 18 instance.go:707] Enabling API group "rbac.authorization.k8s.io".
2025-12-13T00:20:52.761824659+00:00 stderr F I1213 00:20:52.761787 18 cacher.go:460] cacher (clusterroles.rbac.authorization.k8s.io): initialized
2025-12-13T00:20:52.761824659+00:00 stderr F I1213 00:20:52.761808 18 reflector.go:351] Caches populated for *rbac.ClusterRole from storage/cacher.go:/clusterroles
2025-12-13T00:20:52.766551186+00:00 stderr F I1213 00:20:52.766456 18 cacher.go:460] cacher (clusterrolebindings.rbac.authorization.k8s.io): initialized
2025-12-13T00:20:52.766551186+00:00 stderr F I1213 00:20:52.766484 18 reflector.go:351] Caches populated for *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
2025-12-13T00:20:52.769713491+00:00 stderr F I1213 00:20:52.769657 18 store.go:1579] "Monitoring resource count at path" resource="priorityclasses.scheduling.k8s.io" path="//priorityclasses"
2025-12-13T00:20:52.769713491+00:00 stderr F I1213 00:20:52.769687 18 instance.go:707] Enabling API group "scheduling.k8s.io".
2025-12-13T00:20:52.772338292+00:00 stderr F I1213 00:20:52.772265 18 cacher.go:460] cacher (priorityclasses.scheduling.k8s.io): initialized
2025-12-13T00:20:52.772338292+00:00 stderr F I1213 00:20:52.772283 18 reflector.go:351] Caches populated for *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
2025-12-13T00:20:52.776369822+00:00 stderr F I1213 00:20:52.776304 18 store.go:1579] "Monitoring resource count at path" resource="storageclasses.storage.k8s.io" path="//storageclasses"
2025-12-13T00:20:52.777412299+00:00 stderr F I1213 00:20:52.777356 18 cacher.go:460] cacher (storageclasses.storage.k8s.io): initialized
2025-12-13T00:20:52.777412299+00:00 stderr F I1213 00:20:52.777375 18 reflector.go:351] Caches populated for *storage.StorageClass from storage/cacher.go:/storageclasses
2025-12-13T00:20:52.782677021+00:00 stderr F I1213 00:20:52.782619 18 store.go:1579] "Monitoring resource count at path" resource="volumeattachments.storage.k8s.io" path="//volumeattachments"
2025-12-13T00:20:52.783500903+00:00 stderr F I1213 00:20:52.783424 18 cacher.go:460] cacher (volumeattachments.storage.k8s.io): initialized
2025-12-13T00:20:52.783553165+00:00 stderr F I1213 00:20:52.783505 18 reflector.go:351] Caches populated for *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
2025-12-13T00:20:52.789231498+00:00 stderr F I1213 00:20:52.789158 18 store.go:1579] "Monitoring resource count at path" resource="csinodes.storage.k8s.io" path="//csinodes"
2025-12-13T00:20:52.790725609+00:00 stderr F I1213 00:20:52.790519 18 cacher.go:460] cacher (csinodes.storage.k8s.io): initialized
2025-12-13T00:20:52.790725609+00:00 stderr F I1213 00:20:52.790537 18 reflector.go:351] Caches populated for *storage.CSINode from storage/cacher.go:/csinodes
2025-12-13T00:20:52.796686519+00:00 stderr F I1213 00:20:52.796633 18 store.go:1579] "Monitoring resource count at path" resource="csidrivers.storage.k8s.io" path="//csidrivers"
2025-12-13T00:20:52.797899573+00:00 stderr F I1213 00:20:52.797664 18 cacher.go:460] cacher (csidrivers.storage.k8s.io): initialized
2025-12-13T00:20:52.797899573+00:00 stderr F I1213 00:20:52.797685 18 reflector.go:351] Caches populated for *storage.CSIDriver from storage/cacher.go:/csidrivers
2025-12-13T00:20:52.805257301+00:00 stderr F I1213 00:20:52.803844 18 store.go:1579] "Monitoring resource count at path" resource="csistoragecapacities.storage.k8s.io" path="//csistoragecapacities"
2025-12-13T00:20:52.805257301+00:00 stderr F I1213 00:20:52.803915 18 instance.go:707] Enabling API group "storage.k8s.io".
2025-12-13T00:20:52.805257301+00:00 stderr F I1213 00:20:52.804670 18 cacher.go:460] cacher (csistoragecapacities.storage.k8s.io): initialized
2025-12-13T00:20:52.805257301+00:00 stderr F I1213 00:20:52.804682 18 reflector.go:351] Caches populated for *storage.CSIStorageCapacity from storage/cacher.go:/csistoragecapacities
2025-12-13T00:20:52.811568241+00:00 stderr F I1213 00:20:52.811246 18 store.go:1579] "Monitoring resource count at path" resource="flowschemas.flowcontrol.apiserver.k8s.io" path="//flowschemas"
2025-12-13T00:20:52.814976034+00:00 stderr F I1213 00:20:52.812901 18 cacher.go:460] cacher (flowschemas.flowcontrol.apiserver.k8s.io): initialized
2025-12-13T00:20:52.814976034+00:00 stderr F I1213 00:20:52.812923 18 reflector.go:351] Caches populated for *flowcontrol.FlowSchema from storage/cacher.go:/flowschemas
2025-12-13T00:20:52.818666483+00:00 stderr F I1213 00:20:52.818605 18 store.go:1579] "Monitoring resource count at path" resource="prioritylevelconfigurations.flowcontrol.apiserver.k8s.io" path="//prioritylevelconfigurations"
2025-12-13T00:20:52.820288047+00:00 stderr F I1213 00:20:52.820121 18 cacher.go:460] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): initialized
2025-12-13T00:20:52.820288047+00:00 stderr F I1213 00:20:52.820141 18 reflector.go:351] Caches populated for *flowcontrol.PriorityLevelConfiguration from storage/cacher.go:/prioritylevelconfigurations
2025-12-13T00:20:52.827284545+00:00 stderr F I1213 00:20:52.827202 18 store.go:1579] "Monitoring resource count at path" resource="flowschemas.flowcontrol.apiserver.k8s.io" path="//flowschemas"
2025-12-13T00:20:52.831494709+00:00 stderr F I1213 00:20:52.830620 18 cacher.go:460] cacher (flowschemas.flowcontrol.apiserver.k8s.io): initialized
2025-12-13T00:20:52.831494709+00:00 stderr F I1213 00:20:52.830642 18 reflector.go:351] Caches populated for *flowcontrol.FlowSchema from storage/cacher.go:/flowschemas
2025-12-13T00:20:52.835873847+00:00 stderr F I1213 00:20:52.834904 18 store.go:1579] "Monitoring resource count at path" resource="prioritylevelconfigurations.flowcontrol.apiserver.k8s.io" path="//prioritylevelconfigurations"
2025-12-13T00:20:52.835873847+00:00 stderr F I1213 00:20:52.834987 18 instance.go:707] Enabling API group "flowcontrol.apiserver.k8s.io".
2025-12-13T00:20:52.838591850+00:00 stderr F I1213 00:20:52.837176 18 cacher.go:460] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): initialized
2025-12-13T00:20:52.838591850+00:00 stderr F I1213 00:20:52.837194 18 reflector.go:351] Caches populated for *flowcontrol.PriorityLevelConfiguration from storage/cacher.go:/prioritylevelconfigurations
2025-12-13T00:20:52.841925240+00:00 stderr F I1213 00:20:52.841842 18 store.go:1579] "Monitoring resource count at path" resource="deployments.apps" path="//deployments"
2025-12-13T00:20:52.845634631+00:00 stderr F I1213 00:20:52.845583 18 cacher.go:460] cacher (deployments.apps): initialized
2025-12-13T00:20:52.845655951+00:00 stderr F I1213 00:20:52.845639 18 reflector.go:351] Caches populated for *apps.Deployment from storage/cacher.go:/deployments
2025-12-13T00:20:52.849208617+00:00 stderr F I1213 00:20:52.849156 18 store.go:1579] "Monitoring resource count at path" resource="statefulsets.apps" path="//statefulsets"
2025-12-13T00:20:52.850024429+00:00 stderr F I1213 00:20:52.849959 18 cacher.go:460] cacher (statefulsets.apps): initialized
2025-12-13T00:20:52.850024429+00:00 stderr F I1213 00:20:52.849976 18 reflector.go:351] Caches populated for *apps.StatefulSet from storage/cacher.go:/statefulsets
2025-12-13T00:20:52.856347679+00:00 stderr F I1213 00:20:52.856298 18 store.go:1579] "Monitoring resource count at path" resource="daemonsets.apps" path="//daemonsets"
2025-12-13T00:20:52.861715944+00:00 stderr F I1213 00:20:52.861655 18 cacher.go:460] cacher (daemonsets.apps): initialized
2025-12-13T00:20:52.861715944+00:00 stderr F I1213 00:20:52.861676 18 reflector.go:351] Caches populated for *apps.DaemonSet from storage/cacher.go:/daemonsets
2025-12-13T00:20:52.864797827+00:00 stderr F I1213 00:20:52.864752 18 store.go:1579] "Monitoring resource count at path" resource="replicasets.apps" path="//replicasets"
2025-12-13T00:20:52.879773472+00:00 stderr F I1213 00:20:52.879654 18 store.go:1579] "Monitoring resource count at path" resource="controllerrevisions.apps" path="//controllerrevisions"
2025-12-13T00:20:52.879889015+00:00 stderr F I1213 00:20:52.879844 18 instance.go:707] Enabling API group "apps".
2025-12-13T00:20:52.892695480+00:00 stderr F I1213 00:20:52.892602 18 store.go:1579] "Monitoring resource count at path" resource="validatingwebhookconfigurations.admissionregistration.k8s.io" path="//validatingwebhookconfigurations"
2025-12-13T00:20:52.894154250+00:00 stderr F I1213 00:20:52.894097 18 cacher.go:460] cacher (controllerrevisions.apps): initialized
2025-12-13T00:20:52.894154250+00:00 stderr F I1213 00:20:52.894118 18 reflector.go:351] Caches populated for *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
2025-12-13T00:20:52.897431308+00:00 stderr F I1213 00:20:52.897356 18 cacher.go:460] cacher (validatingwebhookconfigurations.admissionregistration.k8s.io): initialized
2025-12-13T00:20:52.897431308+00:00 stderr F I1213 00:20:52.897382 18 reflector.go:351] Caches populated for *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
2025-12-13T00:20:52.898177908+00:00 stderr F I1213 00:20:52.898124 18 cacher.go:460] cacher (replicasets.apps): initialized
2025-12-13T00:20:52.898195298+00:00 stderr F I1213 00:20:52.898180 18 reflector.go:351] Caches populated for *apps.ReplicaSet from storage/cacher.go:/replicasets
2025-12-13T00:20:52.902765942+00:00 stderr F I1213 00:20:52.902677 18 store.go:1579] "Monitoring resource count at path" resource="mutatingwebhookconfigurations.admissionregistration.k8s.io" path="//mutatingwebhookconfigurations"
2025-12-13T00:20:52.902765942+00:00 stderr F I1213 00:20:52.902731 18 instance.go:707] Enabling API group "admissionregistration.k8s.io".
2025-12-13T00:20:52.903557453+00:00 stderr F I1213 00:20:52.903495 18 cacher.go:460] cacher (mutatingwebhookconfigurations.admissionregistration.k8s.io): initialized
2025-12-13T00:20:52.903557453+00:00 stderr F I1213 00:20:52.903542 18 reflector.go:351] Caches populated for *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
2025-12-13T00:20:52.911677582+00:00 stderr F I1213 00:20:52.911570 18 store.go:1579] "Monitoring resource count at path" resource="events" path="//events"
2025-12-13T00:20:52.911677582+00:00 stderr F I1213 00:20:52.911636 18 instance.go:707] Enabling API group "events.k8s.io".
2025-12-13T00:20:52.911704843+00:00 stderr F I1213 00:20:52.911649 18 instance.go:694] API group "resource.k8s.io" is not enabled, skipping.
2025-12-13T00:20:52.920145811+00:00 stderr F I1213 00:20:52.920063 18 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.920145811+00:00 stderr F W1213 00:20:52.920087 18 genericapiserver.go:774] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.920145811+00:00 stderr F W1213 00:20:52.920094 18 genericapiserver.go:774] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
2025-12-13T00:20:52.920570492+00:00 stderr F I1213 00:20:52.920515 18 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.920570492+00:00 stderr F W1213 00:20:52.920532 18 genericapiserver.go:774] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.921439126+00:00 stderr F I1213 00:20:52.921381 18 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager
2025-12-13T00:20:52.922159236+00:00 stderr F I1213 00:20:52.922098 18 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager
2025-12-13T00:20:52.922159236+00:00 stderr F W1213 00:20:52.922116 18 genericapiserver.go:774] Skipping API autoscaling/v2beta1 because it has no resources.
2025-12-13T00:20:52.922159236+00:00 stderr F W1213 00:20:52.922124 18 genericapiserver.go:774] Skipping API autoscaling/v2beta2 because it has no resources.
2025-12-13T00:20:52.923535892+00:00 stderr F I1213 00:20:52.923464 18 handler.go:275] Adding GroupVersion batch v1 to ResourceManager
2025-12-13T00:20:52.923535892+00:00 stderr F W1213 00:20:52.923482 18 genericapiserver.go:774] Skipping API batch/v1beta1 because it has no resources.
2025-12-13T00:20:52.924237572+00:00 stderr F I1213 00:20:52.924152 18 handler.go:275] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.924237572+00:00 stderr F W1213 00:20:52.924168 18 genericapiserver.go:774] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.924237572+00:00 stderr F W1213 00:20:52.924172 18 genericapiserver.go:774] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
2025-12-13T00:20:52.924615142+00:00 stderr F I1213 00:20:52.924545 18 handler.go:275] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.924615142+00:00 stderr F W1213 00:20:52.924559 18 genericapiserver.go:774] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.924615142+00:00 stderr F W1213 00:20:52.924586 18 genericapiserver.go:774] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.925199727+00:00 stderr F I1213 00:20:52.925071 18 handler.go:275] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.926961695+00:00 stderr F I1213 00:20:52.926525 18 handler.go:275] Adding GroupVersion networking.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.926961695+00:00 stderr F W1213 00:20:52.926544 18 genericapiserver.go:774] Skipping API networking.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.926961695+00:00 stderr F W1213 00:20:52.926549 18 genericapiserver.go:774] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
2025-12-13T00:20:52.927036257+00:00 stderr F I1213 00:20:52.926907 18 handler.go:275] Adding GroupVersion node.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.927036257+00:00 stderr F W1213 00:20:52.926925 18 genericapiserver.go:774] Skipping API node.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.927036257+00:00 stderr F W1213 00:20:52.926945 18 genericapiserver.go:774] Skipping API node.k8s.io/v1alpha1 because it has no resources.
2025-12-13T00:20:52.927793238+00:00 stderr F I1213 00:20:52.927702 18 handler.go:275] Adding GroupVersion policy v1 to ResourceManager
2025-12-13T00:20:52.927793238+00:00 stderr F W1213 00:20:52.927722 18 genericapiserver.go:774] Skipping API policy/v1beta1 because it has no resources.
2025-12-13T00:20:52.930332106+00:00 stderr F I1213 00:20:52.930248 18 handler.go:275] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.930332106+00:00 stderr F W1213 00:20:52.930267 18 genericapiserver.go:774] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.930332106+00:00 stderr F W1213 00:20:52.930273 18 genericapiserver.go:774] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
2025-12-13T00:20:52.930616554+00:00 stderr F I1213 00:20:52.930542 18 handler.go:275] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.930616554+00:00 stderr F W1213 00:20:52.930553 18 genericapiserver.go:774] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.930616554+00:00 stderr F W1213 00:20:52.930558 18 genericapiserver.go:774] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
2025-12-13T00:20:52.932180985+00:00 stderr F I1213 00:20:52.932112 18 handler.go:275] Adding GroupVersion storage.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.932180985+00:00 stderr F W1213 00:20:52.932128 18 genericapiserver.go:774] Skipping API storage.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.932180985+00:00 stderr F W1213 00:20:52.932132 18 genericapiserver.go:774] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
2025-12-13T00:20:52.932987438+00:00 stderr F I1213 00:20:52.932907 18 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
2025-12-13T00:20:52.933723057+00:00 stderr F I1213 00:20:52.933646 18 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.933723057+00:00 stderr F W1213 00:20:52.933661 18 genericapiserver.go:774] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
2025-12-13T00:20:52.933723057+00:00 stderr F W1213 00:20:52.933665 18 genericapiserver.go:774] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.936203384+00:00 stderr F I1213 00:20:52.936131 18 handler.go:275] Adding GroupVersion apps v1 to ResourceManager
2025-12-13T00:20:52.936203384+00:00 stderr F W1213 00:20:52.936144 18 genericapiserver.go:774] Skipping API apps/v1beta2 because it has no resources.
2025-12-13T00:20:52.936203384+00:00 stderr F W1213 00:20:52.936149 18 genericapiserver.go:774] Skipping API apps/v1beta1 because it has no resources.
2025-12-13T00:20:52.936788320+00:00 stderr F I1213 00:20:52.936720 18 handler.go:275] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.936788320+00:00 stderr F W1213 00:20:52.936732 18 genericapiserver.go:774] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.936788320+00:00 stderr F W1213 00:20:52.936736 18 genericapiserver.go:774] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
2025-12-13T00:20:52.937263803+00:00 stderr F I1213 00:20:52.937197 18 handler.go:275] Adding GroupVersion events.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.937263803+00:00 stderr F W1213 00:20:52.937213 18 genericapiserver.go:774] Skipping API events.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.950454989+00:00 stderr F I1213 00:20:52.950304 18 store.go:1579] "Monitoring resource count at path" resource="apiservices.apiregistration.k8s.io" path="//apiregistration.k8s.io/apiservices"
2025-12-13T00:20:52.951336602+00:00 stderr F I1213 00:20:52.951255 18 handler.go:275] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
2025-12-13T00:20:52.951336602+00:00 stderr F W1213 00:20:52.951278 18 genericapiserver.go:774] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
2025-12-13T00:20:52.952226897+00:00 stderr F I1213 00:20:52.952149 18 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="aggregator-proxy-cert::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.crt::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.key"
2025-12-13T00:20:52.969304408+00:00 stderr F I1213 00:20:52.969194 18 cacher.go:460] cacher (apiservices.apiregistration.k8s.io): initialized
2025-12-13T00:20:52.969334099+00:00 stderr F I1213 00:20:52.969295 18 reflector.go:351] Caches populated for *apiregistration.APIService from storage/cacher.go:/apiregistration.k8s.io/apiservices
2025-12-13T00:20:53.330603497+00:00 stderr F I1213 00:20:53.329948 18 genericapiserver.go:576] "[graceful-termination] using HTTP Server shutdown timeout" shutdownTimeout="2s"
2025-12-13T00:20:53.330877435+00:00 stderr F I1213 00:20:53.330799 18 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
2025-12-13T00:20:53.330877435+00:00 stderr F I1213 00:20:53.330850 18 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
2025-12-13T00:20:53.331360018+00:00 stderr F I1213 00:20:53.331275 18 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key"
2025-12-13T00:20:53.331723107+00:00 stderr F I1213 00:20:53.331597 18 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key"
2025-12-13T00:20:53.332109788+00:00 stderr F I1213 00:20:53.331973 18 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key"
2025-12-13T00:20:53.332471357+00:00 stderr F I1213 00:20:53.332375 18 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key"
2025-12-13T00:20:53.333016753+00:00 stderr F I1213 00:20:53.332960 18 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key"
2025-12-13T00:20:53.333620999+00:00 stderr F I1213 00:20:53.333365 18 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key"
2025-12-13T00:20:53.333620999+00:00 stderr F I1213 00:20:53.333414 18 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:20:53.333376962 +0000 UTC))"
2025-12-13T00:20:53.333620999+00:00 stderr F I1213 00:20:53.333454 18 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:20:53.333432904 +0000 UTC))"
2025-12-13T00:20:53.333620999+00:00 stderr F I1213 00:20:53.333477 18 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:20:53.333460615 +0000 UTC))"
2025-12-13T00:20:53.333620999+00:00 stderr F I1213 00:20:53.333499 18 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:20:53.333482875 +0000 UTC))"
2025-12-13T00:20:53.333620999+00:00 stderr F I1213 00:20:53.333520 18 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:20:53.333503836 +0000 UTC))"
2025-12-13T00:20:53.333620999+00:00 stderr F I1213 00:20:53.333542 18 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:53.333525646 +0000 UTC))"
2025-12-13T00:20:53.333620999+00:00 stderr F I1213 00:20:53.333564 18 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:53.333547837 +0000 UTC))"
2025-12-13T00:20:53.333620999+00:00 stderr F I1213 00:20:53.333587 18 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:53.333570947 +0000 UTC))"
2025-12-13T00:20:53.333722092+00:00 stderr F I1213 00:20:53.333669 18 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:20:53.333592408 +0000 UTC))"
2025-12-13T00:20:53.333722092+00:00 stderr F I1213
00:20:53.333703 18 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:53.333685831 +0000 UTC))" 2025-12-13T00:20:53.333844615+00:00 stderr F I1213 00:20:53.333761 18 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:53.333742372 +0000 UTC))" 2025-12-13T00:20:53.334270026+00:00 stderr F I1213 00:20:53.334133 18 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" certDetail="\"10.217.4.1\" [serving] validServingFor=[10.217.4.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local,10.217.4.1] issuer=\"kube-apiserver-service-network-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:53.334112992 +0000 UTC))" 2025-12-13T00:20:53.334613085+00:00 stderr F I1213 00:20:53.334550 18 named_certificates.go:53] "Loaded SNI cert" index=5 
certName="sni-serving-cert::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key" certDetail="\"localhost-recovery\" [serving] validServingFor=[localhost-recovery] issuer=\"openshift-kube-apiserver-operator_localhost-recovery-serving-signer@1719406013\" (2024-06-26 12:47:06 +0000 UTC to 2034-06-24 12:46:53 +0000 UTC (now=2025-12-13 00:20:53.334530653 +0000 UTC))" 2025-12-13T00:20:53.334976395+00:00 stderr F I1213 00:20:53.334899 18 named_certificates.go:53] "Loaded SNI cert" index=4 certName="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" certDetail="\"api-int.crc.testing\" [serving] validServingFor=[api-int.crc.testing] issuer=\"kube-apiserver-lb-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:53.334884402 +0000 UTC))" 2025-12-13T00:20:53.335275093+00:00 stderr F I1213 00:20:53.335226 18 named_certificates.go:53] "Loaded SNI cert" index=3 certName="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key" certDetail="\"api.crc.testing\" [serving] validServingFor=[api.crc.testing] issuer=\"kube-apiserver-lb-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:53.335209711 +0000 UTC))" 2025-12-13T00:20:53.335598771+00:00 stderr F I1213 00:20:53.335531 18 named_certificates.go:53] "Loaded SNI cert" index=2 certName="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key" certDetail="\"10.217.4.1\" [serving] 
validServingFor=[10.217.4.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local,10.217.4.1] issuer=\"kube-apiserver-service-network-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:53.335509709 +0000 UTC))" 2025-12-13T00:20:53.335978793+00:00 stderr F I1213 00:20:53.335905 18 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key" certDetail="\"127.0.0.1\" [serving] validServingFor=[127.0.0.1,localhost,127.0.0.1] issuer=\"kube-apiserver-localhost-signer\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:53.33588408 +0000 UTC))" 2025-12-13T00:20:53.336350322+00:00 stderr F I1213 00:20:53.336289 18 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765585252\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765585252\" (2025-12-12 23:20:52 +0000 UTC to 2026-12-12 23:20:52 +0000 UTC (now=2025-12-13 00:20:53.33626942 +0000 UTC))" 2025-12-13T00:20:53.336423904+00:00 stderr F I1213 00:20:53.336375 18 secure_serving.go:213] Serving securely on [::]:6443 2025-12-13T00:20:53.336822155+00:00 stderr F I1213 00:20:53.336666 18 genericapiserver.go:700] [graceful-termination] waiting for shutdown to be initiated 2025-12-13T00:20:53.336822155+00:00 stderr F I1213 00:20:53.336668 18 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:20:53.337129783+00:00 stderr F I1213 00:20:53.336958 18 controller.go:80] Starting OpenAPI V3 AggregationController 2025-12-13T00:20:53.337146274+00:00 stderr F I1213 00:20:53.337095 18 apf_controller.go:374] Starting API 
Priority and Fairness config controller 2025-12-13T00:20:53.337235366+00:00 stderr F I1213 00:20:53.337173 18 controller.go:78] Starting OpenAPI AggregationController 2025-12-13T00:20:53.337497423+00:00 stderr F I1213 00:20:53.337427 18 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller 2025-12-13T00:20:53.337681988+00:00 stderr F I1213 00:20:53.337529 18 customresource_discovery_controller.go:289] Starting DiscoveryController 2025-12-13T00:20:53.337769080+00:00 stderr F I1213 00:20:53.337722 18 handler_discovery.go:412] Starting ResourceDiscoveryManager 2025-12-13T00:20:53.337914334+00:00 stderr F I1213 00:20:53.337842 18 aggregator.go:163] waiting for initial CRD sync... 2025-12-13T00:20:53.338032247+00:00 stderr F I1213 00:20:53.337990 18 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.crt::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.key" 2025-12-13T00:20:53.338084009+00:00 stderr F I1213 00:20:53.338051 18 apiaccess_count_controller.go:89] Starting APIRequestCount controller. 
2025-12-13T00:20:53.338144200+00:00 stderr F I1213 00:20:53.338102 18 apiservice_controller.go:97] Starting APIServiceRegistrationController 2025-12-13T00:20:53.338144200+00:00 stderr F I1213 00:20:53.338115 18 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller 2025-12-13T00:20:53.338288364+00:00 stderr F I1213 00:20:53.338234 18 crdregistration_controller.go:112] Starting crd-autoregister controller 2025-12-13T00:20:53.338288364+00:00 stderr F I1213 00:20:53.338259 18 shared_informer.go:311] Waiting for caches to sync for crd-autoregister 2025-12-13T00:20:53.338371356+00:00 stderr F I1213 00:20:53.338338 18 gc_controller.go:78] Starting apiserver lease garbage collector 2025-12-13T00:20:53.338554002+00:00 stderr F I1213 00:20:53.338503 18 controller.go:133] Starting OpenAPI controller 2025-12-13T00:20:53.338554002+00:00 stderr F I1213 00:20:53.338538 18 controller.go:85] Starting OpenAPI V3 controller 2025-12-13T00:20:53.338586483+00:00 stderr F I1213 00:20:53.338554 18 naming_controller.go:291] Starting NamingConditionController 2025-12-13T00:20:53.338586483+00:00 stderr F I1213 00:20:53.338568 18 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller 2025-12-13T00:20:53.338586483+00:00 stderr F I1213 00:20:53.338571 18 establishing_controller.go:76] Starting EstablishingController 2025-12-13T00:20:53.338633214+00:00 stderr F I1213 00:20:53.338598 18 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController 2025-12-13T00:20:53.338644275+00:00 stderr F I1213 00:20:53.338618 18 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController 2025-12-13T00:20:53.338644275+00:00 stderr F I1213 00:20:53.338635 18 crd_finalizer.go:266] Starting CRDFinalizer 2025-12-13T00:20:53.338897501+00:00 stderr F I1213 00:20:53.338846 18 available_controller.go:445] Starting AvailableConditionController 
2025-12-13T00:20:53.338897501+00:00 stderr F I1213 00:20:53.338861 18 cache.go:32] Waiting for caches to sync for AvailableConditionController controller 2025-12-13T00:20:53.338978043+00:00 stderr F I1213 00:20:53.338924 18 system_namespaces_controller.go:67] Starting system namespaces controller 2025-12-13T00:20:53.339135238+00:00 stderr F I1213 00:20:53.339084 18 controller.go:116] Starting legacy_token_tracking_controller 2025-12-13T00:20:53.339135238+00:00 stderr F I1213 00:20:53.339102 18 shared_informer.go:311] Waiting for caches to sync for configmaps 2025-12-13T00:20:53.339174859+00:00 stderr F I1213 00:20:53.338601 18 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller 2025-12-13T00:20:53.340345060+00:00 stderr F I1213 00:20:53.340286 18 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-12-13T00:20:53.340437012+00:00 stderr F I1213 00:20:53.340396 18 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-12-13T00:20:53.341879722+00:00 stderr F I1213 00:20:53.341806 18 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/secrets" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. 2025-12-13T00:20:53.341999105+00:00 stderr F I1213 00:20:53.341926 18 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/configmaps" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. 
2025-12-13T00:20:53.342371485+00:00 stderr F W1213 00:20:53.342321 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-machine-api/secrets" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.344567984+00:00 stderr F W1213 00:20:53.342856 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-marketplace/secrets" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.344567984+00:00 stderr F W1213 00:20:53.343023 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-dns/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.344567984+00:00 stderr F W1213 00:20:53.343409 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-etcd-operator/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.344567984+00:00 stderr F W1213 00:20:53.343476 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-multus/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 
2025-12-13T00:20:53.344567984+00:00 stderr F W1213 00:20:53.343810 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-marketplace/secrets" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.345996562+00:00 stderr F W1213 00:20:53.344628 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-cluster-machine-approver/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.347200835+00:00 stderr F W1213 00:20:53.347124 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-controller-manager/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.352552520+00:00 stderr F W1213 00:20:53.350903 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-ovn-kubernetes/pods" (source IP 38.102.83.51:41200, user agent "crc/ovnkube@254d920e214b (linux/amd64) kubernetes/v0.29.2") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.352725145+00:00 stderr F W1213 00:20:53.352684 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 
2025-12-13T00:20:53.353056723+00:00 stderr F W1213 00:20:53.352973 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-authentication/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.353188457+00:00 stderr F W1213 00:20:53.353137 18 patch_genericapiserver.go:204] Request to "/apis/config.openshift.io/v1/clusterversions" (source IP 38.102.83.51:41192, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.353332751+00:00 stderr F W1213 00:20:53.353279 18 patch_genericapiserver.go:204] Request to "/apis/storage.k8s.io/v1/csistoragecapacities" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.354381179+00:00 stderr F W1213 00:20:53.354089 18 patch_genericapiserver.go:204] Request to "/apis/apps/v1/deployments" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.354381179+00:00 stderr F W1213 00:20:53.354343 18 patch_genericapiserver.go:204] Request to "/api/v1/services" (source IP 38.102.83.51:41200, user agent "crc/ovnkube@254d920e214b (linux/amd64) kubernetes/v0.29.2") before server is ready, possibly a sign for a broken load balancer setup. 
2025-12-13T00:20:53.354582854+00:00 stderr F W1213 00:20:53.354534 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/kube-system/configmaps" (source IP 38.102.83.51:41206, user agent "kube-scheduler/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.354635015+00:00 stderr F W1213 00:20:53.354598 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-authentication/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.355036866+00:00 stderr F W1213 00:20:53.354792 18 patch_genericapiserver.go:204] Request to "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.356685331+00:00 stderr F W1213 00:20:53.355137 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-network-diagnostics/endpoints" (source IP 38.102.83.51:41192, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.356685331+00:00 stderr F W1213 00:20:53.355250 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-console/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 
2025-12-13T00:20:53.356685331+00:00 stderr F W1213 00:20:53.355393 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-etcd-operator/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.356685331+00:00 stderr F W1213 00:20:53.355515 18 patch_genericapiserver.go:204] Request to "/apis/rbac.authorization.k8s.io/v1/rolebindings" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.356685331+00:00 stderr F W1213 00:20:53.355779 18 patch_genericapiserver.go:204] Request to "/apis/monitoring.openshift.io/v1/alertrelabelconfigs" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.356685331+00:00 stderr F W1213 00:20:53.355865 18 patch_genericapiserver.go:204] Request to "/apis/config.openshift.io/v1/clusterversions/version/status" (source IP 38.102.83.51:41240, user agent "cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format/openshift-cluster-version") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.356685331+00:00 stderr F W1213 00:20:53.356427 18 patch_genericapiserver.go:204] Request to "/apis/monitoring.coreos.com/v1/probes" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 
2025-12-13T00:20:53.357535634+00:00 stderr F W1213 00:20:53.357487 18 patch_genericapiserver.go:204] Request to "/apis/k8s.ovn.org/v1/egressfirewalls" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.358377067+00:00 stderr F W1213 00:20:53.358194 18 patch_genericapiserver.go:204] Request to "/apis/helm.openshift.io/v1beta1/projecthelmchartrepositories" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.360371980+00:00 stderr F W1213 00:20:53.359580 18 patch_genericapiserver.go:204] Request to "/api/v1/services" (source IP 38.102.83.51:41312, user agent "kube-scheduler/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21/scheduler") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.360371980+00:00 stderr F W1213 00:20:53.359658 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-network-diagnostics/services" (source IP 38.102.83.51:41192, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.364339068+00:00 stderr F W1213 00:20:53.361993 18 patch_genericapiserver.go:204] Request to "/apis/k8s.ovn.org/v1/egressfirewalls" (source IP 38.102.83.51:41320, user agent "crc/ovnkube@254d920e214b (linux/amd64) kubernetes/v0.29.2") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.366195237+00:00 stderr F I1213 00:20:53.366115 18 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/pods" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. 
This client probably does not watch /readyz and might get inconsistent answers. 2025-12-13T00:20:53.369704103+00:00 stderr F I1213 00:20:53.369627 18 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/pods" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. 2025-12-13T00:20:53.375043107+00:00 stderr F E1213 00:20:53.374890 18 sdn_readyz_wait.go:100] api-openshift-apiserver-available did not find any IPs for kubernetes.default.svc endpoint 2025-12-13T00:20:53.382535258+00:00 stderr F I1213 00:20:53.382359 18 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.383451844+00:00 stderr F E1213 00:20:53.383285 18 sdn_readyz_wait.go:100] api-openshift-oauth-apiserver-available did not find any IPs for kubernetes.default.svc endpoint 2025-12-13T00:20:53.384927523+00:00 stderr F I1213 00:20:53.384864 18 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.387235295+00:00 stderr F W1213 00:20:53.387126 18 patch_genericapiserver.go:204] Request to "/apis/k8s.ovn.org/v1/egressservices" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup. 
2025-12-13T00:20:53.387741679+00:00 stderr F I1213 00:20:53.387639 18 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.388627482+00:00 stderr F I1213 00:20:53.388535 18 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.389198099+00:00 stderr F I1213 00:20:53.389125 18 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.389913988+00:00 stderr F W1213 00:20:53.389808 18 patch_genericapiserver.go:204] Request to "/apis/config.openshift.io/v1/proxies" (source IP 38.102.83.51:41240, user agent "cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format/shared-informer") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.390576425+00:00 stderr F I1213 00:20:53.390484 18 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.391032748+00:00 stderr F I1213 00:20:53.390946 18 reflector.go:351] Caches populated for *v1.PriorityLevelConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.391103549+00:00 stderr F I1213 00:20:53.391055 18 reflector.go:351] Caches populated for *v1.PriorityClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.391194112+00:00 stderr F I1213 00:20:53.391128 18 reflector.go:351] Caches populated for *v1.VolumeAttachment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.391295605+00:00 stderr F I1213 00:20:53.391238 18 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.391424298+00:00 stderr F I1213 00:20:53.391325 18 reflector.go:351] Caches populated for *v1.Lease from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.391535081+00:00 stderr F I1213 00:20:53.391470 18 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.391993534+00:00 stderr F I1213 00:20:53.391904 18 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.392236631+00:00 stderr F I1213 00:20:53.392182 18 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.394038419+00:00 stderr F I1213 00:20:53.393970 18 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.399090515+00:00 stderr F I1213 00:20:53.399019 18 reflector.go:351] Caches populated for *v1.FlowSchema from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.399542177+00:00 stderr F I1213 00:20:53.399475 18 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.400613567+00:00 stderr F I1213 00:20:53.400526 18 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.401114560+00:00 stderr F I1213 00:20:53.401048 18 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.401196002+00:00 stderr F I1213 00:20:53.401144 18 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.401556092+00:00 stderr F I1213 00:20:53.401497 18 reflector.go:351] Caches populated for *v1.APIService from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.412586429+00:00 stderr F I1213 00:20:53.412461 18 reflector.go:351] Caches populated 
for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.414773719+00:00 stderr F I1213 00:20:53.414676 18 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.419705982+00:00 stderr F I1213 00:20:53.419582 18 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.421234883+00:00 stderr F I1213 00:20:53.421153 18 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.425538679+00:00 stderr F I1213 00:20:53.425420 18 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.429162937+00:00 stderr F I1213 00:20:53.429067 18 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.437335807+00:00 stderr F I1213 00:20:53.437174 18 apf_controller.go:379] Running API Priority and Fairness config worker 2025-12-13T00:20:53.437335807+00:00 stderr F I1213 00:20:53.437215 18 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process 2025-12-13T00:20:53.438009135+00:00 stderr F I1213 00:20:53.437683 18 apf_controller.go:861] Introducing queues for priority level "global-default": config={"type":"Limited","limited":{"nominalConcurrencyShares":20,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":50}}, nominalCL=314, lendableCL=157, borrowingCL=4000, currentCL=236, quiescing=false (shares=0xc007d28a90, shareSum=255) 2025-12-13T00:20:53.438009135+00:00 stderr F I1213 00:20:53.437723 18 apf_controller.go:861] Introducing queues for priority level "openshift-control-plane-operators": 
config={"type":"Limited","limited":{"nominalConcurrencyShares":10,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":33}}, nominalCL=157, lendableCL=52, borrowingCL=4000, currentCL=131, quiescing=false (shares=0xc007d28b98, shareSum=255) 2025-12-13T00:20:53.438009135+00:00 stderr F I1213 00:20:53.437741 18 apf_controller.go:861] Introducing queues for priority level "workload-low": config={"type":"Limited","limited":{"nominalConcurrencyShares":100,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":90}}, nominalCL=1569, lendableCL=1412, borrowingCL=4000, currentCL=863, quiescing=false (shares=0xc007d28c90, shareSum=255) 2025-12-13T00:20:53.438009135+00:00 stderr F I1213 00:20:53.437763 18 apf_controller.go:869] Retaining queues for priority level "catch-all": config={"type":"Limited","limited":{"nominalConcurrencyShares":5,"limitResponse":{"type":"Reject"},"lendablePercent":0}}, nominalCL=79, lendableCL=0, borrowingCL=4000, currentCL=4000, quiescing=false, numPending=0 (shares=0xc007d28a18, shareSum=255) 2025-12-13T00:20:53.438009135+00:00 stderr F I1213 00:20:53.437776 18 apf_controller.go:861] Introducing queues for priority level "node-high": config={"type":"Limited","limited":{"nominalConcurrencyShares":40,"limitResponse":{"type":"Queue","queuing":{"queues":64,"handSize":6,"queueLengthLimit":50}},"lendablePercent":25}}, nominalCL=628, lendableCL=157, borrowingCL=4000, currentCL=550, quiescing=false (shares=0xc007d28b28, shareSum=255) 2025-12-13T00:20:53.438009135+00:00 stderr F I1213 00:20:53.437787 18 apf_controller.go:861] Introducing queues for priority level "system": config={"type":"Limited","limited":{"nominalConcurrencyShares":30,"limitResponse":{"type":"Queue","queuing":{"queues":64,"handSize":6,"queueLengthLimit":50}},"lendablePercent":33}}, nominalCL=471, lendableCL=155, borrowingCL=4000, currentCL=394, quiescing=false 
(shares=0xc007d28be8, shareSum=255) 2025-12-13T00:20:53.438009135+00:00 stderr F I1213 00:20:53.437833 18 apf_controller.go:861] Introducing queues for priority level "leader-election": config={"type":"Limited","limited":{"nominalConcurrencyShares":10,"limitResponse":{"type":"Queue","queuing":{"queues":16,"handSize":4,"queueLengthLimit":50}},"lendablePercent":0}}, nominalCL=157, lendableCL=0, borrowingCL=4000, currentCL=157, quiescing=false (shares=0xc007d28ae0, shareSum=255) 2025-12-13T00:20:53.438009135+00:00 stderr F I1213 00:20:53.437850 18 apf_controller.go:861] Introducing queues for priority level "workload-high": config={"type":"Limited","limited":{"nominalConcurrencyShares":40,"limitResponse":{"type":"Queue","queuing":{"queues":128,"handSize":6,"queueLengthLimit":50}},"lendablePercent":50}}, nominalCL=628, lendableCL=314, borrowingCL=4000, currentCL=471, quiescing=false (shares=0xc007d28c40, shareSum=255) 2025-12-13T00:20:53.438009135+00:00 stderr F I1213 00:20:53.437888 18 apf_controller.go:455] "Update CurrentCL" plName="node-high" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=1072 concurrencyDenominator=1072 backstop=false 2025-12-13T00:20:53.438009135+00:00 stderr F I1213 00:20:53.437963 18 apf_controller.go:455] "Update CurrentCL" plName="system" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=719 concurrencyDenominator=719 backstop=false 2025-12-13T00:20:53.438120648+00:00 stderr F I1213 00:20:53.438069 18 apf_controller.go:455] "Update CurrentCL" plName="exempt" seatDemandHighWatermark=3 seatDemandAvg=3 seatDemandStdev=0 seatDemandSmoothed=3 fairFrac=2.275724843661171 currentCL=7 concurrencyDenominator=7 backstop=false 2025-12-13T00:20:53.438183550+00:00 stderr F I1213 00:20:53.438142 18 apf_controller.go:455] "Update CurrentCL" plName="leader-election" seatDemandHighWatermark=0 seatDemandAvg=0 
seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=357 concurrencyDenominator=357 backstop=false 2025-12-13T00:20:53.438276313+00:00 stderr F I1213 00:20:53.438226 18 apf_controller.go:455] "Update CurrentCL" plName="workload-high" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=715 concurrencyDenominator=715 backstop=false 2025-12-13T00:20:53.438416136+00:00 stderr F I1213 00:20:53.438323 18 cache.go:39] Caches are synced for APIServiceRegistrationController controller 2025-12-13T00:20:53.438416136+00:00 stderr F I1213 00:20:53.438354 18 apf_controller.go:455] "Update CurrentCL" plName="workload-low" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=357 concurrencyDenominator=357 backstop=false 2025-12-13T00:20:53.438524869+00:00 stderr F I1213 00:20:53.438475 18 apf_controller.go:455] "Update CurrentCL" plName="catch-all" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=180 concurrencyDenominator=180 backstop=false 2025-12-13T00:20:53.438595811+00:00 stderr F I1213 00:20:53.438551 18 apf_controller.go:455] "Update CurrentCL" plName="global-default" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=357 concurrencyDenominator=357 backstop=false 2025-12-13T00:20:53.438735725+00:00 stderr F I1213 00:20:53.438685 18 apf_controller.go:455] "Update CurrentCL" plName="openshift-control-plane-operators" seatDemandHighWatermark=0 seatDemandAvg=0 seatDemandStdev=0 seatDemandSmoothed=0 fairFrac=2.275724843661171 currentCL=239 concurrencyDenominator=239 backstop=false 2025-12-13T00:20:53.438982311+00:00 stderr F I1213 00:20:53.438906 18 cache.go:39] Caches are synced for AvailableConditionController controller 2025-12-13T00:20:53.439202647+00:00 stderr F I1213 00:20:53.439133 18 
shared_informer.go:318] Caches are synced for configmaps 2025-12-13T00:20:53.439251598+00:00 stderr F I1213 00:20:53.439226 18 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller 2025-12-13T00:20:53.458109348+00:00 stderr F I1213 00:20:53.458008 18 healthz.go:261] informer-sync,poststarthook/start-apiextensions-controllers,poststarthook/crd-informer-synced,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes,poststarthook/apiservice-registration-controller check failed: readyz 2025-12-13T00:20:53.458109348+00:00 stderr F [-]informer-sync failed: 2 informers not started yet: [*v1.Secret *v1.ConfigMap] 2025-12-13T00:20:53.458109348+00:00 stderr F [-]poststarthook/start-apiextensions-controllers failed: not finished 2025-12-13T00:20:53.458109348+00:00 stderr F [-]poststarthook/crd-informer-synced failed: not finished 2025-12-13T00:20:53.458109348+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished 2025-12-13T00:20:53.458109348+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished 2025-12-13T00:20:53.458109348+00:00 stderr F [-]poststarthook/apiservice-registration-controller failed: not finished 2025-12-13T00:20:53.458333144+00:00 stderr F I1213 00:20:53.458275 18 handler.go:275] Adding GroupVersion image.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.461461948+00:00 stderr F I1213 00:20:53.459020 18 handler.go:275] Adding GroupVersion template.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.461662263+00:00 stderr F I1213 00:20:53.461618 18 handler.go:275] Adding GroupVersion security.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.461942842+00:00 stderr F I1213 00:20:53.461879 18 handler.go:275] Adding GroupVersion project.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.466772521+00:00 stderr F I1213 00:20:53.466692 18 handler.go:275] Adding GroupVersion build.openshift.io v1 to ResourceManager 
2025-12-13T00:20:53.467528862+00:00 stderr F I1213 00:20:53.467455 18 handler.go:275] Adding GroupVersion quota.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.468474848+00:00 stderr F I1213 00:20:53.468393 18 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.468474848+00:00 stderr F I1213 00:20:53.468403 18 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.471718665+00:00 stderr F I1213 00:20:53.471646 18 handler.go:275] Adding GroupVersion user.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.471718665+00:00 stderr F I1213 00:20:53.471673 18 handler.go:275] Adding GroupVersion authorization.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.472139826+00:00 stderr F I1213 00:20:53.472085 18 handler.go:275] Adding GroupVersion oauth.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.475303501+00:00 stderr F W1213 00:20:53.475207 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup. 
2025-12-13T00:20:53.480161933+00:00 stderr F I1213 00:20:53.480080 18 handler.go:275] Adding GroupVersion route.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.480253635+00:00 stderr F I1213 00:20:53.480203 18 handler.go:275] Adding GroupVersion packages.operators.coreos.com v1 to ResourceManager 2025-12-13T00:20:53.483714668+00:00 stderr F I1213 00:20:53.483619 18 handler.go:275] Adding GroupVersion apps.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.493593356+00:00 stderr F I1213 00:20:53.493269 18 shared_informer.go:318] Caches are synced for node_authorizer 2025-12-13T00:20:53.494286284+00:00 stderr F E1213 00:20:53.494217 18 controller.go:146] Error updating APIService "v1.apps.openshift.io" with err: failed to download v1.apps.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.494286284+00:00 stderr F , Header: map[Audit-Id:[97fcd1cc-1bdc-4c36-9e0c-85a379df12e3] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:53 GMT] X-Content-Type-Options:[nosniff]] 2025-12-13T00:20:53.494604692+00:00 stderr F I1213 00:20:53.494493 18 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io 2025-12-13T00:20:53.498746665+00:00 stderr F E1213 00:20:53.498652 18 controller.go:146] Error updating APIService "v1.authorization.openshift.io" with err: failed to download v1.authorization.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.498746665+00:00 stderr F , Header: 
map[Audit-Id:[951d1e6b-23f4-4ef7-a95b-9d712712afa6] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:53 GMT] X-Content-Type-Options:[nosniff]] 2025-12-13T00:20:53.500474501+00:00 stderr F E1213 00:20:53.500407 18 controller.go:146] Error updating APIService "v1.build.openshift.io" with err: failed to download v1.build.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.500474501+00:00 stderr F , Header: map[Audit-Id:[38e1fd0d-b95f-440e-a532-fbd858f800af] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:53 GMT] X-Content-Type-Options:[nosniff]] 2025-12-13T00:20:53.501847198+00:00 stderr F E1213 00:20:53.501782 18 controller.go:146] Error updating APIService "v1.image.openshift.io" with err: failed to download v1.image.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.501847198+00:00 stderr F , Header: map[Audit-Id:[4fe61d7a-360e-4820-8462-477d7362ce3f] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:53 GMT] X-Content-Type-Options:[nosniff]] 2025-12-13T00:20:53.504092379+00:00 stderr F E1213 00:20:53.504006 18 controller.go:146] Error updating APIService "v1.oauth.openshift.io" with err: failed to download v1.oauth.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.504092379+00:00 stderr F , Header: map[Audit-Id:[fc26a8a1-a513-46f6-9bb5-afa1a6cc9737] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:53 GMT] X-Content-Type-Options:[nosniff]] 2025-12-13T00:20:53.506393290+00:00 stderr F E1213 00:20:53.506322 18 controller.go:146] Error updating APIService "v1.packages.operators.coreos.com" with err: failed to download v1.packages.operators.coreos.com: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.506393290+00:00 stderr F , Header: map[Audit-Id:[10344df9-b14e-48d1-9f4c-5585157ffb02] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:53 GMT] X-Content-Type-Options:[nosniff]] 2025-12-13T00:20:53.506596777+00:00 stderr F I1213 00:20:53.506539 18 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:53.508659092+00:00 stderr F E1213 00:20:53.508572 18 controller.go:146] Error updating APIService "v1.project.openshift.io" with err: failed to download v1.project.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.508659092+00:00 stderr F , Header: map[Audit-Id:[314d5acc-6151-4306-bbf9-e33d105255d5] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; 
charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:53 GMT] X-Content-Type-Options:[nosniff]] 2025-12-13T00:20:53.510423230+00:00 stderr F E1213 00:20:53.510340 18 controller.go:146] Error updating APIService "v1.quota.openshift.io" with err: failed to download v1.quota.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.510423230+00:00 stderr F , Header: map[Audit-Id:[05a10a67-c929-4c0c-bd37-db0fee6cc1b5] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:53 GMT] X-Content-Type-Options:[nosniff]] 2025-12-13T00:20:53.512144806+00:00 stderr F E1213 00:20:53.512094 18 controller.go:146] Error updating APIService "v1.route.openshift.io" with err: failed to download v1.route.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.512144806+00:00 stderr F , Header: map[Audit-Id:[6504f27a-bff8-4fb5-9843-3be473819000] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:53 GMT] X-Content-Type-Options:[nosniff]] 2025-12-13T00:20:53.513915074+00:00 stderr F E1213 00:20:53.513846 18 controller.go:146] Error updating APIService "v1.security.openshift.io" with err: failed to download v1.security.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-12-13T00:20:53.513915074+00:00 stderr F , Header: map[Audit-Id:[be73fc9f-1423-41e5-89dc-3ff9866203ae] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:53 GMT] X-Content-Type-Options:[nosniff]] 2025-12-13T00:20:53.517117710+00:00 stderr F E1213 00:20:53.517067 18 controller.go:146] Error updating APIService "v1.template.openshift.io" with err: failed to download v1.template.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.517117710+00:00 stderr F , Header: map[Audit-Id:[bf4200b1-23b4-4727-8cd5-39dcb9156b49] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:53 GMT] X-Content-Type-Options:[nosniff]] 2025-12-13T00:20:53.518847057+00:00 stderr F E1213 00:20:53.518772 18 controller.go:146] Error updating APIService "v1.user.openshift.io" with err: failed to download v1.user.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.518847057+00:00 stderr F , Header: map[Audit-Id:[e571aa1f-6897-41ea-a712-86669233b1f3] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:53 GMT] X-Content-Type-Options:[nosniff]] 2025-12-13T00:20:53.521571821+00:00 stderr F W1213 00:20:53.521503 18 patch_genericapiserver.go:204] Request to "/apis/apiextensions.k8s.io/v1/customresourcedefinitions" (source IP 38.102.83.51:41192, user agent 
"network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup. 2025-12-13T00:20:53.534034526+00:00 stderr F I1213 00:20:53.533795 18 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-controller-manager/configmaps" (user agent "cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers. 2025-12-13T00:20:53.537156501+00:00 stderr F I1213 00:20:53.537100 18 genericapiserver.go:527] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-12-13T00:20:53.538344593+00:00 stderr F I1213 00:20:53.538299 18 shared_informer.go:318] Caches are synced for crd-autoregister 2025-12-13T00:20:53.538491837+00:00 stderr F I1213 00:20:53.538306 18 handler.go:275] Adding GroupVersion controlplane.operator.openshift.io v1alpha1 to ResourceManager 2025-12-13T00:20:53.538569479+00:00 stderr F I1213 00:20:53.538533 18 handler.go:275] Adding GroupVersion ingress.operator.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.538691222+00:00 stderr F I1213 00:20:53.538641 18 handler.go:275] Adding GroupVersion config.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.538691222+00:00 stderr F I1213 00:20:53.538682 18 handler.go:275] Adding GroupVersion machine.openshift.io v1beta1 to ResourceManager 2025-12-13T00:20:53.538760914+00:00 stderr F I1213 00:20:53.538716 18 handler.go:275] Adding GroupVersion operators.coreos.com v1 to ResourceManager 2025-12-13T00:20:53.538760914+00:00 stderr F I1213 00:20:53.538744 18 handler.go:275] Adding GroupVersion operators.coreos.com v2 to ResourceManager 2025-12-13T00:20:53.538801665+00:00 stderr F I1213 00:20:53.538772 18 handler.go:275] Adding GroupVersion ipam.cluster.x-k8s.io v1alpha1 to ResourceManager 2025-12-13T00:20:53.538825016+00:00 stderr F I1213 
00:20:53.538796 18 handler.go:275] Adding GroupVersion ipam.cluster.x-k8s.io v1beta1 to ResourceManager 2025-12-13T00:20:53.538856087+00:00 stderr F I1213 00:20:53.538822 18 handler.go:275] Adding GroupVersion whereabouts.cni.cncf.io v1alpha1 to ResourceManager 2025-12-13T00:20:53.538938859+00:00 stderr F I1213 00:20:53.538904 18 handler.go:275] Adding GroupVersion monitoring.coreos.com v1 to ResourceManager 2025-12-13T00:20:53.539090733+00:00 stderr F I1213 00:20:53.539050 18 handler.go:275] Adding GroupVersion console.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.539140694+00:00 stderr F I1213 00:20:53.539100 18 handler.go:275] Adding GroupVersion machineconfiguration.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.539196566+00:00 stderr F I1213 00:20:53.539162 18 handler.go:275] Adding GroupVersion helm.openshift.io v1beta1 to ResourceManager 2025-12-13T00:20:53.539487974+00:00 stderr F I1213 00:20:53.539441 18 handler.go:275] Adding GroupVersion operator.openshift.io v1 to ResourceManager 2025-12-13T00:20:53.539763571+00:00 stderr F I1213 00:20:53.539668 18 controller.go:222] Updating CRD OpenAPI spec because adminnetworkpolicies.policy.networking.k8s.io changed 2025-12-13T00:20:53.539763571+00:00 stderr F I1213 00:20:53.539722 18 controller.go:222] Updating CRD OpenAPI spec because adminpolicybasedexternalroutes.k8s.ovn.org changed 2025-12-13T00:20:53.539763571+00:00 stderr F I1213 00:20:53.539726 18 handler.go:275] Adding GroupVersion operators.coreos.com v1alpha1 to ResourceManager 2025-12-13T00:20:53.539763571+00:00 stderr F I1213 00:20:53.539738 18 controller.go:222] Updating CRD OpenAPI spec because alertingrules.monitoring.openshift.io changed 2025-12-13T00:20:53.539763571+00:00 stderr F I1213 00:20:53.539748 18 controller.go:222] Updating CRD OpenAPI spec because alertmanagerconfigs.monitoring.coreos.com changed 2025-12-13T00:20:53.539792162+00:00 stderr F I1213 00:20:53.539759 18 controller.go:222] Updating CRD OpenAPI spec because 
alertmanagers.monitoring.coreos.com changed 2025-12-13T00:20:53.539792162+00:00 stderr F I1213 00:20:53.539775 18 controller.go:222] Updating CRD OpenAPI spec because alertrelabelconfigs.monitoring.openshift.io changed 2025-12-13T00:20:53.539800332+00:00 stderr F I1213 00:20:53.539787 18 controller.go:222] Updating CRD OpenAPI spec because apirequestcounts.apiserver.openshift.io changed 2025-12-13T00:20:53.539827563+00:00 stderr F I1213 00:20:53.539797 18 controller.go:222] Updating CRD OpenAPI spec because apiservers.config.openshift.io changed 2025-12-13T00:20:53.539827563+00:00 stderr F I1213 00:20:53.539811 18 controller.go:222] Updating CRD OpenAPI spec because authentications.config.openshift.io changed 2025-12-13T00:20:53.539861474+00:00 stderr F I1213 00:20:53.539825 18 controller.go:222] Updating CRD OpenAPI spec because authentications.operator.openshift.io changed 2025-12-13T00:20:53.539861474+00:00 stderr F I1213 00:20:53.539837 18 controller.go:222] Updating CRD OpenAPI spec because baselineadminnetworkpolicies.policy.networking.k8s.io changed 2025-12-13T00:20:53.539861474+00:00 stderr F I1213 00:20:53.539848 18 controller.go:222] Updating CRD OpenAPI spec because builds.config.openshift.io changed 2025-12-13T00:20:53.539890744+00:00 stderr F I1213 00:20:53.539858 18 controller.go:222] Updating CRD OpenAPI spec because catalogsources.operators.coreos.com changed 2025-12-13T00:20:53.539890744+00:00 stderr F I1213 00:20:53.539870 18 controller.go:222] Updating CRD OpenAPI spec because clusterautoscalers.autoscaling.openshift.io changed 2025-12-13T00:20:53.539890744+00:00 stderr F I1213 00:20:53.539878 18 controller.go:222] Updating CRD OpenAPI spec because clustercsidrivers.operator.openshift.io changed 2025-12-13T00:20:53.539919135+00:00 stderr F I1213 00:20:53.539887 18 controller.go:222] Updating CRD OpenAPI spec because clusteroperators.config.openshift.io changed 2025-12-13T00:20:53.539919135+00:00 stderr F I1213 00:20:53.539900 18 
controller.go:222] Updating CRD OpenAPI spec because clusterresourcequotas.quota.openshift.io changed 2025-12-13T00:20:53.539919135+00:00 stderr F I1213 00:20:53.539908 18 controller.go:222] Updating CRD OpenAPI spec because clusterserviceversions.operators.coreos.com changed 2025-12-13T00:20:53.539980197+00:00 stderr F I1213 00:20:53.539919 18 controller.go:222] Updating CRD OpenAPI spec because clusterversions.config.openshift.io changed 2025-12-13T00:20:53.539980197+00:00 stderr F I1213 00:20:53.539935 18 controller.go:222] Updating CRD OpenAPI spec because configs.imageregistry.operator.openshift.io changed 2025-12-13T00:20:53.540022208+00:00 stderr F I1213 00:20:53.539960 18 controller.go:222] Updating CRD OpenAPI spec because configs.operator.openshift.io changed 2025-12-13T00:20:53.540022208+00:00 stderr F I1213 00:20:53.539969 18 controller.go:222] Updating CRD OpenAPI spec because configs.samples.operator.openshift.io changed 2025-12-13T00:20:53.540022208+00:00 stderr F I1213 00:20:53.539977 18 controller.go:222] Updating CRD OpenAPI spec because consoleclidownloads.console.openshift.io changed 2025-12-13T00:20:53.540022208+00:00 stderr F I1213 00:20:53.539984 18 controller.go:222] Updating CRD OpenAPI spec because consoleexternalloglinks.console.openshift.io changed 2025-12-13T00:20:53.540022208+00:00 stderr F I1213 00:20:53.539991 18 controller.go:222] Updating CRD OpenAPI spec because consolelinks.console.openshift.io changed 2025-12-13T00:20:53.540022208+00:00 stderr F I1213 00:20:53.540001 18 controller.go:222] Updating CRD OpenAPI spec because consolenotifications.console.openshift.io changed 2025-12-13T00:20:53.540022208+00:00 stderr F I1213 00:20:53.540009 18 controller.go:222] Updating CRD OpenAPI spec because consoleplugins.console.openshift.io changed 2025-12-13T00:20:53.540035538+00:00 stderr F I1213 00:20:53.540017 18 controller.go:222] Updating CRD OpenAPI spec because consolequickstarts.console.openshift.io changed 
2025-12-13T00:20:53.540067589+00:00 stderr F I1213 00:20:53.540024 18 controller.go:222] Updating CRD OpenAPI spec because consoles.config.openshift.io changed 2025-12-13T00:20:53.540067589+00:00 stderr F I1213 00:20:53.540036 18 controller.go:222] Updating CRD OpenAPI spec because consoles.operator.openshift.io changed 2025-12-13T00:20:53.540067589+00:00 stderr F I1213 00:20:53.540044 18 controller.go:222] Updating CRD OpenAPI spec because consolesamples.console.openshift.io changed 2025-12-13T00:20:53.540067589+00:00 stderr F I1213 00:20:53.540051 18 controller.go:222] Updating CRD OpenAPI spec because consoleyamlsamples.console.openshift.io changed 2025-12-13T00:20:53.540067589+00:00 stderr F I1213 00:20:53.540059 18 controller.go:222] Updating CRD OpenAPI spec because containerruntimeconfigs.machineconfiguration.openshift.io changed 2025-12-13T00:20:53.540099540+00:00 stderr F I1213 00:20:53.540067 18 controller.go:222] Updating CRD OpenAPI spec because controllerconfigs.machineconfiguration.openshift.io changed 2025-12-13T00:20:53.540099540+00:00 stderr F I1213 00:20:53.540079 18 controller.go:222] Updating CRD OpenAPI spec because controlplanemachinesets.machine.openshift.io changed 2025-12-13T00:20:53.540099540+00:00 stderr F I1213 00:20:53.540088 18 controller.go:222] Updating CRD OpenAPI spec because csisnapshotcontrollers.operator.openshift.io changed 2025-12-13T00:20:53.540128702+00:00 stderr F I1213 00:20:53.540096 18 controller.go:222] Updating CRD OpenAPI spec because dnses.config.openshift.io changed 2025-12-13T00:20:53.540128702+00:00 stderr F I1213 00:20:53.540109 18 controller.go:222] Updating CRD OpenAPI spec because dnses.operator.openshift.io changed 2025-12-13T00:20:53.540128702+00:00 stderr F I1213 00:20:53.540118 18 controller.go:222] Updating CRD OpenAPI spec because dnsrecords.ingress.operator.openshift.io changed 2025-12-13T00:20:53.540166333+00:00 stderr F I1213 00:20:53.540126 18 controller.go:222] Updating CRD OpenAPI spec because 
egressfirewalls.k8s.ovn.org changed
2025-12-13T00:20:53.540166333+00:00 stderr F I1213 00:20:53.540137 18 controller.go:222] Updating CRD OpenAPI spec because egressips.k8s.ovn.org changed
2025-12-13T00:20:53.540166333+00:00 stderr F I1213 00:20:53.540146 18 controller.go:222] Updating CRD OpenAPI spec because egressqoses.k8s.ovn.org changed
2025-12-13T00:20:53.540166333+00:00 stderr F I1213 00:20:53.540155 18 controller.go:222] Updating CRD OpenAPI spec because egressrouters.network.operator.openshift.io changed
2025-12-13T00:20:53.540195313+00:00 stderr F I1213 00:20:53.540163 18 controller.go:222] Updating CRD OpenAPI spec because egressservices.k8s.ovn.org changed
2025-12-13T00:20:53.540195313+00:00 stderr F I1213 00:20:53.540177 18 controller.go:222] Updating CRD OpenAPI spec because etcds.operator.openshift.io changed
2025-12-13T00:20:53.540195313+00:00 stderr F I1213 00:20:53.540185 18 controller.go:222] Updating CRD OpenAPI spec because featuregates.config.openshift.io changed
2025-12-13T00:20:53.540226664+00:00 stderr F I1213 00:20:53.540195 18 controller.go:222] Updating CRD OpenAPI spec because helmchartrepositories.helm.openshift.io changed
2025-12-13T00:20:53.540226664+00:00 stderr F I1213 00:20:53.540207 18 controller.go:222] Updating CRD OpenAPI spec because imagecontentpolicies.config.openshift.io changed
2025-12-13T00:20:53.540226664+00:00 stderr F I1213 00:20:53.540217 18 controller.go:222] Updating CRD OpenAPI spec because imagecontentsourcepolicies.operator.openshift.io changed
2025-12-13T00:20:53.540267965+00:00 stderr F I1213 00:20:53.540228 18 controller.go:222] Updating CRD OpenAPI spec because imagedigestmirrorsets.config.openshift.io changed
2025-12-13T00:20:53.540267965+00:00 stderr F I1213 00:20:53.540241 18 controller.go:222] Updating CRD OpenAPI spec because imagepruners.imageregistry.operator.openshift.io changed
2025-12-13T00:20:53.540267965+00:00 stderr F I1213 00:20:53.540250 18 controller.go:222] Updating CRD OpenAPI spec because images.config.openshift.io changed
2025-12-13T00:20:53.540267965+00:00 stderr F I1213 00:20:53.540259 18 controller.go:222] Updating CRD OpenAPI spec because imagetagmirrorsets.config.openshift.io changed
2025-12-13T00:20:53.540301906+00:00 stderr F I1213 00:20:53.540269 18 controller.go:222] Updating CRD OpenAPI spec because infrastructures.config.openshift.io changed
2025-12-13T00:20:53.540301906+00:00 stderr F I1213 00:20:53.540284 18 controller.go:222] Updating CRD OpenAPI spec because ingresscontrollers.operator.openshift.io changed
2025-12-13T00:20:53.540301906+00:00 stderr F I1213 00:20:53.540295 18 controller.go:222] Updating CRD OpenAPI spec because ingresses.config.openshift.io changed
2025-12-13T00:20:53.540348737+00:00 stderr F I1213 00:20:53.540305 18 controller.go:222] Updating CRD OpenAPI spec because installplans.operators.coreos.com changed
2025-12-13T00:20:53.540348737+00:00 stderr F I1213 00:20:53.540319 18 controller.go:222] Updating CRD OpenAPI spec because ipaddressclaims.ipam.cluster.x-k8s.io changed
2025-12-13T00:20:53.540348737+00:00 stderr F I1213 00:20:53.540331 18 controller.go:222] Updating CRD OpenAPI spec because ipaddresses.ipam.cluster.x-k8s.io changed
2025-12-13T00:20:53.540348737+00:00 stderr F I1213 00:20:53.540342 18 controller.go:222] Updating CRD OpenAPI spec because ippools.whereabouts.cni.cncf.io changed
2025-12-13T00:20:53.540383668+00:00 stderr F I1213 00:20:53.540351 18 controller.go:222] Updating CRD OpenAPI spec because kubeapiservers.operator.openshift.io changed
2025-12-13T00:20:53.540383668+00:00 stderr F I1213 00:20:53.540363 18 controller.go:222] Updating CRD OpenAPI spec because kubecontrollermanagers.operator.openshift.io changed
2025-12-13T00:20:53.540383668+00:00 stderr F I1213 00:20:53.540371 18 controller.go:222] Updating CRD OpenAPI spec because kubeletconfigs.machineconfiguration.openshift.io changed
2025-12-13T00:20:53.540383668+00:00 stderr F I1213 00:20:53.540374 18 handler.go:275] Adding GroupVersion samples.operator.openshift.io v1 to ResourceManager
2025-12-13T00:20:53.540392409+00:00 stderr F I1213 00:20:53.540380 18 controller.go:222] Updating CRD OpenAPI spec because kubeschedulers.operator.openshift.io changed
2025-12-13T00:20:53.540419739+00:00 stderr F I1213 00:20:53.540388 18 controller.go:222] Updating CRD OpenAPI spec because kubestorageversionmigrators.operator.openshift.io changed
2025-12-13T00:20:53.540419739+00:00 stderr F I1213 00:20:53.540401 18 controller.go:222] Updating CRD OpenAPI spec because machineautoscalers.autoscaling.openshift.io changed
2025-12-13T00:20:53.540419739+00:00 stderr F I1213 00:20:53.540409 18 controller.go:222] Updating CRD OpenAPI spec because machineconfigpools.machineconfiguration.openshift.io changed
2025-12-13T00:20:53.540458820+00:00 stderr F I1213 00:20:53.540417 18 controller.go:222] Updating CRD OpenAPI spec because machineconfigs.machineconfiguration.openshift.io changed
2025-12-13T00:20:53.540458820+00:00 stderr F I1213 00:20:53.540433 18 controller.go:222] Updating CRD OpenAPI spec because machineconfigurations.operator.openshift.io changed
2025-12-13T00:20:53.540458820+00:00 stderr F I1213 00:20:53.540441 18 controller.go:222] Updating CRD OpenAPI spec because machinehealthchecks.machine.openshift.io changed
2025-12-13T00:20:53.540458820+00:00 stderr F I1213 00:20:53.540451 18 controller.go:222] Updating CRD OpenAPI spec because machines.machine.openshift.io changed
2025-12-13T00:20:53.540493801+00:00 stderr F I1213 00:20:53.540460 18 controller.go:222] Updating CRD OpenAPI spec because machinesets.machine.openshift.io changed
2025-12-13T00:20:53.540493801+00:00 stderr F I1213 00:20:53.540473 18 controller.go:222] Updating CRD OpenAPI spec because metal3remediations.infrastructure.cluster.x-k8s.io changed
2025-12-13T00:20:53.540493801+00:00 stderr F I1213 00:20:53.540482 18 controller.go:222] Updating CRD OpenAPI spec because metal3remediationtemplates.infrastructure.cluster.x-k8s.io changed
2025-12-13T00:20:53.540522662+00:00 stderr F I1213 00:20:53.540491 18 controller.go:222] Updating CRD OpenAPI spec because network-attachment-definitions.k8s.cni.cncf.io changed
2025-12-13T00:20:53.540522662+00:00 stderr F I1213 00:20:53.540503 18 controller.go:222] Updating CRD OpenAPI spec because networks.config.openshift.io changed
2025-12-13T00:20:53.540522662+00:00 stderr F I1213 00:20:53.540514 18 controller.go:222] Updating CRD OpenAPI spec because networks.operator.openshift.io changed
2025-12-13T00:20:53.540560213+00:00 stderr F I1213 00:20:53.540522 18 controller.go:222] Updating CRD OpenAPI spec because nodes.config.openshift.io changed
2025-12-13T00:20:53.540560213+00:00 stderr F I1213 00:20:53.540533 18 controller.go:222] Updating CRD OpenAPI spec because oauths.config.openshift.io changed
2025-12-13T00:20:53.540560213+00:00 stderr F I1213 00:20:53.540542 18 controller.go:222] Updating CRD OpenAPI spec because olmconfigs.operators.coreos.com changed
2025-12-13T00:20:53.540560213+00:00 stderr F I1213 00:20:53.540551 18 controller.go:222] Updating CRD OpenAPI spec because openshiftapiservers.operator.openshift.io changed
2025-12-13T00:20:53.540593094+00:00 stderr F I1213 00:20:53.540559 18 controller.go:222] Updating CRD OpenAPI spec because openshiftcontrollermanagers.operator.openshift.io changed
2025-12-13T00:20:53.540593094+00:00 stderr F I1213 00:20:53.540571 18 controller.go:222] Updating CRD OpenAPI spec because operatorconditions.operators.coreos.com changed
2025-12-13T00:20:53.540593094+00:00 stderr F I1213 00:20:53.540579 18 controller.go:222] Updating CRD OpenAPI spec because operatorgroups.operators.coreos.com changed
2025-12-13T00:20:53.540601464+00:00 stderr F I1213 00:20:53.540587 18 controller.go:222] Updating CRD OpenAPI spec because operatorhubs.config.openshift.io changed
2025-12-13T00:20:53.540628215+00:00 stderr F I1213 00:20:53.540595 18 controller.go:222] Updating CRD OpenAPI spec because operatorpkis.network.operator.openshift.io changed
2025-12-13T00:20:53.540628215+00:00 stderr F I1213 00:20:53.540609 18 controller.go:222] Updating CRD OpenAPI spec because operators.operators.coreos.com changed
2025-12-13T00:20:53.540628215+00:00 stderr F I1213 00:20:53.540616 18 controller.go:222] Updating CRD OpenAPI spec because overlappingrangeipreservations.whereabouts.cni.cncf.io changed
2025-12-13T00:20:53.540664406+00:00 stderr F I1213 00:20:53.540625 18 controller.go:222] Updating CRD OpenAPI spec because podmonitors.monitoring.coreos.com changed
2025-12-13T00:20:53.540664406+00:00 stderr F I1213 00:20:53.540637 18 controller.go:222] Updating CRD OpenAPI spec because podnetworkconnectivitychecks.controlplane.operator.openshift.io changed
2025-12-13T00:20:53.540664406+00:00 stderr F I1213 00:20:53.540645 18 controller.go:222] Updating CRD OpenAPI spec because probes.monitoring.coreos.com changed
2025-12-13T00:20:53.540664406+00:00 stderr F I1213 00:20:53.540653 18 controller.go:222] Updating CRD OpenAPI spec because projecthelmchartrepositories.helm.openshift.io changed
2025-12-13T00:20:53.540697417+00:00 stderr F I1213 00:20:53.540662 18 controller.go:222] Updating CRD OpenAPI spec because projects.config.openshift.io changed
2025-12-13T00:20:53.540697417+00:00 stderr F I1213 00:20:53.540674 18 controller.go:222] Updating CRD OpenAPI spec because prometheuses.monitoring.coreos.com changed
2025-12-13T00:20:53.540697417+00:00 stderr F I1213 00:20:53.540682 18 controller.go:222] Updating CRD OpenAPI spec because prometheusrules.monitoring.coreos.com changed
2025-12-13T00:20:53.540724967+00:00 stderr F I1213 00:20:53.540691 18 controller.go:222] Updating CRD OpenAPI spec because proxies.config.openshift.io changed
2025-12-13T00:20:53.540724967+00:00 stderr F I1213 00:20:53.540703 18 controller.go:222] Updating CRD OpenAPI spec because rangeallocations.security.internal.openshift.io changed
2025-12-13T00:20:53.540724967+00:00 stderr F I1213 00:20:53.540713 18 controller.go:222] Updating CRD OpenAPI spec because rolebindingrestrictions.authorization.openshift.io changed
2025-12-13T00:20:53.540759598+00:00 stderr F I1213 00:20:53.540722 18 controller.go:222] Updating CRD OpenAPI spec because schedulers.config.openshift.io changed
2025-12-13T00:20:53.540759598+00:00 stderr F I1213 00:20:53.540731 18 handler.go:275] Adding GroupVersion security.internal.openshift.io v1 to ResourceManager
2025-12-13T00:20:53.540759598+00:00 stderr F I1213 00:20:53.540735 18 controller.go:222] Updating CRD OpenAPI spec because securitycontextconstraints.security.openshift.io changed
2025-12-13T00:20:53.540759598+00:00 stderr F I1213 00:20:53.540746 18 controller.go:222] Updating CRD OpenAPI spec because servicecas.operator.openshift.io changed
2025-12-13T00:20:53.540759598+00:00 stderr F I1213 00:20:53.540746 18 handler.go:275] Adding GroupVersion security.openshift.io v1 to ResourceManager
2025-12-13T00:20:53.540799089+00:00 stderr F I1213 00:20:53.540763 18 controller.go:222] Updating CRD OpenAPI spec because servicemonitors.monitoring.coreos.com changed
2025-12-13T00:20:53.540799089+00:00 stderr F I1213 00:20:53.540786 18 controller.go:222] Updating CRD OpenAPI spec because storages.operator.openshift.io changed
2025-12-13T00:20:53.540834630+00:00 stderr F I1213 00:20:53.540800 18 controller.go:222] Updating CRD OpenAPI spec because storagestates.migration.k8s.io changed
2025-12-13T00:20:53.540834630+00:00 stderr F I1213 00:20:53.540814 18 controller.go:222] Updating CRD OpenAPI spec because storageversionmigrations.migration.k8s.io changed
2025-12-13T00:20:53.540834630+00:00 stderr F I1213 00:20:53.540822 18 controller.go:222] Updating CRD OpenAPI spec because subscriptions.operators.coreos.com changed
2025-12-13T00:20:53.540863801+00:00 stderr F I1213 00:20:53.540831 18 controller.go:222] Updating CRD OpenAPI spec because thanosrulers.monitoring.coreos.com changed
2025-12-13T00:20:53.540863801+00:00 stderr F I1213 00:20:53.540839 18 handler.go:275] Adding GroupVersion infrastructure.cluster.x-k8s.io v1alpha5 to ResourceManager
2025-12-13T00:20:53.540894262+00:00 stderr F I1213 00:20:53.540860 18 handler.go:275] Adding GroupVersion infrastructure.cluster.x-k8s.io v1beta1 to ResourceManager
2025-12-13T00:20:53.541135828+00:00 stderr F I1213 00:20:53.541082 18 handler.go:275] Adding GroupVersion quota.openshift.io v1 to ResourceManager
2025-12-13T00:20:53.541224931+00:00 stderr F I1213 00:20:53.541179 18 aggregator.go:165] initial CRD sync complete...
2025-12-13T00:20:53.541224931+00:00 stderr F I1213 00:20:53.541200 18 autoregister_controller.go:141] Starting autoregister controller
2025-12-13T00:20:53.541224931+00:00 stderr F I1213 00:20:53.541204 18 handler.go:275] Adding GroupVersion imageregistry.operator.openshift.io v1 to ResourceManager
2025-12-13T00:20:53.541224931+00:00 stderr F I1213 00:20:53.541208 18 cache.go:32] Waiting for caches to sync for autoregister controller
2025-12-13T00:20:53.541224931+00:00 stderr F I1213 00:20:53.541214 18 cache.go:39] Caches are synced for autoregister controller
2025-12-13T00:20:53.541268272+00:00 stderr F I1213 00:20:53.541232 18 handler.go:275] Adding GroupVersion k8s.ovn.org v1 to ResourceManager
2025-12-13T00:20:53.541370495+00:00 stderr F I1213 00:20:53.541325 18 handler.go:275] Adding GroupVersion migration.k8s.io v1alpha1 to ResourceManager
2025-12-13T00:20:53.541922049+00:00 stderr F I1213 00:20:53.541854 18 handler.go:275] Adding GroupVersion network.operator.openshift.io v1 to ResourceManager
2025-12-13T00:20:53.542433813+00:00 stderr F I1213 00:20:53.542376 18 handler.go:275] Adding GroupVersion operators.coreos.com v1alpha2 to ResourceManager
2025-12-13T00:20:53.542634738+00:00 stderr F I1213 00:20:53.542587 18 handler.go:275] Adding GroupVersion authorization.openshift.io v1 to ResourceManager
2025-12-13T00:20:53.542800923+00:00 stderr F I1213 00:20:53.542755 18 handler.go:275] Adding GroupVersion autoscaling.openshift.io v1 to ResourceManager
2025-12-13T00:20:53.542917647+00:00 stderr F I1213 00:20:53.542870 18 handler.go:275] Adding GroupVersion machine.openshift.io v1 to ResourceManager
2025-12-13T00:20:53.543256526+00:00 stderr F I1213 00:20:53.543203 18 handler.go:275] Adding GroupVersion monitoring.coreos.com v1alpha1 to ResourceManager
2025-12-13T00:20:53.543256526+00:00 stderr F I1213 00:20:53.543232 18 handler.go:275] Adding GroupVersion monitoring.coreos.com v1beta1 to ResourceManager
2025-12-13T00:20:53.543333248+00:00 stderr F I1213 00:20:53.543291 18 handler.go:275] Adding GroupVersion autoscaling.openshift.io v1beta1 to ResourceManager
2025-12-13T00:20:53.543472732+00:00 stderr F I1213 00:20:53.543426 18 handler.go:275] Adding GroupVersion policy.networking.k8s.io v1alpha1 to ResourceManager
2025-12-13T00:20:53.543472732+00:00 stderr F I1213 00:20:53.543451 18 handler.go:275] Adding GroupVersion apiserver.openshift.io v1 to ResourceManager
2025-12-13T00:20:53.543580295+00:00 stderr F I1213 00:20:53.543535 18 handler.go:275] Adding GroupVersion console.openshift.io v1alpha1 to ResourceManager
2025-12-13T00:20:53.543733379+00:00 stderr F I1213 00:20:53.543686 18 handler.go:275] Adding GroupVersion k8s.cni.cncf.io v1 to ResourceManager
2025-12-13T00:20:53.544378766+00:00 stderr F I1213 00:20:53.544321 18 handler.go:275] Adding GroupVersion operator.openshift.io v1alpha1 to ResourceManager
2025-12-13T00:20:53.544655873+00:00 stderr F I1213 00:20:53.544603 18 handler.go:275] Adding GroupVersion monitoring.openshift.io v1 to ResourceManager
2025-12-13T00:20:53.564222931+00:00 stderr F W1213 00:20:53.563563 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:53.564222931+00:00 stderr F I1213 00:20:53.564174 18 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2025-12-13T00:20:53.564222931+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2025-12-13T00:20:53.564222931+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2025-12-13T00:20:53.575177177+00:00 stderr F I1213 00:20:53.575057 18 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-config-managed/configmaps" (user agent "cluster-kube-scheduler-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
2025-12-13T00:20:53.627396826+00:00 stderr F W1213 00:20:53.627227 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-apiserver/services" (source IP 38.102.83.51:41192, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:53.634506028+00:00 stderr F W1213 00:20:53.634406 18 patch_genericapiserver.go:204] Request to "/api/v1/configmaps" (source IP 38.102.83.51:41192, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:53.657792727+00:00 stderr F I1213 00:20:53.657653 18 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc" (user agent "cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
2025-12-13T00:20:53.662552795+00:00 stderr F I1213 00:20:53.662470 18 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2025-12-13T00:20:53.662552795+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2025-12-13T00:20:53.662552795+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2025-12-13T00:20:53.708262019+00:00 stderr F W1213 00:20:53.708106 18 patch_genericapiserver.go:204] Request to "/apis/monitoring.coreos.com/v1/alertmanagers" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:53.764190287+00:00 stderr F I1213 00:20:53.764057 18 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2025-12-13T00:20:53.764190287+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2025-12-13T00:20:53.764190287+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2025-12-13T00:20:53.862564552+00:00 stderr F I1213 00:20:53.862441 18 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2025-12-13T00:20:53.862564552+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2025-12-13T00:20:53.862564552+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2025-12-13T00:20:53.876453977+00:00 stderr F W1213 00:20:53.876319 18 patch_genericapiserver.go:204] Request to "/apis/controlplane.operator.openshift.io/v1alpha1/podnetworkconnectivitychecks" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:53.938186763+00:00 stderr F W1213 00:20:53.937910 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-apiserver-operator/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:53.962051936+00:00 stderr F I1213 00:20:53.961880 18 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2025-12-13T00:20:53.962051936+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2025-12-13T00:20:53.962051936+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2025-12-13T00:20:53.972194601+00:00 stderr F W1213 00:20:53.972037 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-apiserver/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.005325355+00:00 stderr F W1213 00:20:54.005185 18 patch_genericapiserver.go:204] Request to "/apis/template.openshift.io/v1/templates" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.016325701+00:00 stderr F W1213 00:20:54.016175 18 patch_genericapiserver.go:204] Request to "/apis/node.k8s.io/v1/runtimeclasses" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.056174027+00:00 stderr F W1213 00:20:54.055848 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-controller-manager-operator/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.062811585+00:00 stderr F I1213 00:20:54.062649 18 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2025-12-13T00:20:54.062811585+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2025-12-13T00:20:54.062811585+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2025-12-13T00:20:54.080234205+00:00 stderr F I1213 00:20:54.080067 18 patch_genericapiserver.go:201] Loopback request to "/api/v1/namespaces/openshift-config/secrets" (user agent "cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready. This client probably does not watch /readyz and might get inconsistent answers.
2025-12-13T00:20:54.101429518+00:00 stderr F W1213 00:20:54.101278 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-service-ca/secrets" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.119309720+00:00 stderr F W1213 00:20:54.119170 18 patch_genericapiserver.go:204] Request to "/apis/config.openshift.io/v1/featuregates" (source IP 38.102.83.51:41192, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.140685237+00:00 stderr F W1213 00:20:54.140460 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-oauth-apiserver/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.145267631+00:00 stderr F W1213 00:20:54.145136 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-authentication/secrets" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.146631357+00:00 stderr F W1213 00:20:54.146486 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-config/configmaps" (source IP 38.102.83.51:41240, user agent "cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format/openshift-config") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.165465706+00:00 stderr F W1213 00:20:54.165241 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-controller-manager-operator/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.166096683+00:00 stderr F I1213 00:20:54.165970 18 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2025-12-13T00:20:54.166096683+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2025-12-13T00:20:54.166096683+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2025-12-13T00:20:54.196868243+00:00 stderr F W1213 00:20:54.196055 18 patch_genericapiserver.go:204] Request to "/apis/monitoring.coreos.com/v1/servicemonitors" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.199036012+00:00 stderr F W1213 00:20:54.198175 18 patch_genericapiserver.go:204] Request to "/apis/storage.k8s.io/v1/csistoragecapacities" (source IP 38.102.83.51:41312, user agent "kube-scheduler/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21/scheduler") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.262488693+00:00 stderr F I1213 00:20:54.262336 18 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2025-12-13T00:20:54.262488693+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2025-12-13T00:20:54.262488693+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2025-12-13T00:20:54.339006419+00:00 stderr F W1213 00:20:54.338017 18 patch_genericapiserver.go:204] Request to "/apis/operator.openshift.io/v1/openshiftapiservers" (source IP 38.102.83.51:41192, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.365431402+00:00 stderr F I1213 00:20:54.365250 18 healthz.go:261] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
2025-12-13T00:20:54.365431402+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2025-12-13T00:20:54.365431402+00:00 stderr F [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
2025-12-13T00:20:54.365431402+00:00 stderr F I1213 00:20:54.365325 18 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
2025-12-13T00:20:54.389833980+00:00 stderr F W1213 00:20:54.389426 18 patch_genericapiserver.go:204] Request to "/apis/machine.openshift.io/v1beta1/machines" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.400263072+00:00 stderr F W1213 00:20:54.399714 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-cluster-samples-operator/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.442534763+00:00 stderr F E1213 00:20:54.442386 18 controller.go:102] loading OpenAPI spec for "v1.template.openshift.io" failed with: failed to download v1.template.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:54.442534763+00:00 stderr F , Header: map[Audit-Id:[beb9d5d7-5e3e-4253-a02c-a38c7f9c526e] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:54 GMT] X-Content-Type-Options:[nosniff]]
2025-12-13T00:20:54.442534763+00:00 stderr F I1213 00:20:54.442410 18 controller.go:109] OpenAPI AggregationController: action for item v1.template.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.452547782+00:00 stderr F E1213 00:20:54.452457 18 controller.go:102] loading OpenAPI spec for "v1.image.openshift.io" failed with: failed to download v1.image.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:54.452547782+00:00 stderr F , Header: map[Audit-Id:[88e39e02-10b0-4fda-8ca2-afc01240e0f1] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:54 GMT] X-Content-Type-Options:[nosniff]]
2025-12-13T00:20:54.452547782+00:00 stderr F I1213 00:20:54.452474 18 controller.go:109] OpenAPI AggregationController: action for item v1.image.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.454686160+00:00 stderr F E1213 00:20:54.454603 18 controller.go:102] loading OpenAPI spec for "v1.security.openshift.io" failed with: failed to download v1.security.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:54.454686160+00:00 stderr F , Header: map[Audit-Id:[2efde32d-8659-43dd-ba08-e4b4a96d25e0] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:54 GMT] X-Content-Type-Options:[nosniff]]
2025-12-13T00:20:54.454686160+00:00 stderr F I1213 00:20:54.454626 18 controller.go:109] OpenAPI AggregationController: action for item v1.security.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.460551618+00:00 stderr F E1213 00:20:54.460462 18 controller.go:102] loading OpenAPI spec for "v1.project.openshift.io" failed with: failed to download v1.project.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:54.460551618+00:00 stderr F , Header: map[Audit-Id:[5475146a-62ac-498c-97d7-37028c7b6f47] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:54 GMT] X-Content-Type-Options:[nosniff]]
2025-12-13T00:20:54.460551618+00:00 stderr F I1213 00:20:54.460478 18 controller.go:109] OpenAPI AggregationController: action for item v1.project.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.462097301+00:00 stderr F I1213 00:20:54.462042 18 healthz.go:261] poststarthook/rbac/bootstrap-roles check failed: readyz
2025-12-13T00:20:54.462097301+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2025-12-13T00:20:54.462434070+00:00 stderr F E1213 00:20:54.462360 18 controller.go:102] loading OpenAPI spec for "v1.user.openshift.io" failed with: failed to download v1.user.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:54.462434070+00:00 stderr F , Header: map[Audit-Id:[b4b2b349-357d-486f-9666-2689feb63619] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:54 GMT] X-Content-Type-Options:[nosniff]]
2025-12-13T00:20:54.462434070+00:00 stderr F I1213 00:20:54.462379 18 controller.go:109] OpenAPI AggregationController: action for item v1.user.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.463808486+00:00 stderr F E1213 00:20:54.463748 18 controller.go:102] loading OpenAPI spec for "v1.build.openshift.io" failed with: failed to download v1.build.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:54.463808486+00:00 stderr F , Header: map[Audit-Id:[743ceb9a-ffd9-444d-a02b-a74580928c3f] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:54 GMT] X-Content-Type-Options:[nosniff]]
2025-12-13T00:20:54.463808486+00:00 stderr F I1213 00:20:54.463763 18 controller.go:109] OpenAPI AggregationController: action for item v1.build.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.465283786+00:00 stderr F E1213 00:20:54.465217 18 controller.go:102] loading OpenAPI spec for "v1.quota.openshift.io" failed with: failed to download v1.quota.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:54.465283786+00:00 stderr F , Header: map[Audit-Id:[87d2c1fa-a4b7-4a17-9126-4508948871f8] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:54 GMT] X-Content-Type-Options:[nosniff]]
2025-12-13T00:20:54.465283786+00:00 stderr F I1213 00:20:54.465238 18 controller.go:109] OpenAPI AggregationController: action for item v1.quota.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.467331491+00:00 stderr F E1213 00:20:54.467235 18 controller.go:102] loading OpenAPI spec for "v1.oauth.openshift.io" failed with: failed to download v1.oauth.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:54.467331491+00:00 stderr F , Header: map[Audit-Id:[c0b114e6-3a10-4c96-85a7-393b92bc66e6] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:54 GMT] X-Content-Type-Options:[nosniff]]
2025-12-13T00:20:54.467331491+00:00 stderr F I1213 00:20:54.467257 18 controller.go:109] OpenAPI AggregationController: action for item v1.oauth.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.468999506+00:00 stderr F E1213 00:20:54.468828 18 controller.go:102] loading OpenAPI spec for "v1.authorization.openshift.io" failed with: failed to download v1.authorization.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:54.468999506+00:00 stderr F , Header: map[Audit-Id:[93558883-64db-402d-aa51-a05f78de20ca] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:54 GMT] X-Content-Type-Options:[nosniff]]
2025-12-13T00:20:54.468999506+00:00 stderr F I1213 00:20:54.468861 18 controller.go:109] OpenAPI AggregationController: action for item v1.authorization.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.470050134+00:00 stderr F E1213 00:20:54.469988 18 controller.go:113] loading OpenAPI spec for "v1.template.openshift.io" failed with: Error, could not get list of group versions for APIService
2025-12-13T00:20:54.470050134+00:00 stderr F I1213 00:20:54.470006 18 controller.go:126] OpenAPI AggregationController: action for item v1.template.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.471032431+00:00 stderr F E1213 00:20:54.470590 18 controller.go:102] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to download v1.packages.operators.coreos.com: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:54.471032431+00:00 stderr F , Header: map[Audit-Id:[b914a05d-b1f6-4851-b146-8137dc7e7d33] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:54 GMT] X-Content-Type-Options:[nosniff]]
2025-12-13T00:20:54.471090983+00:00 stderr F I1213 00:20:54.471044 18 controller.go:109] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
2025-12-13T00:20:54.471254747+00:00 stderr F E1213 00:20:54.471196 18 controller.go:113] loading OpenAPI spec for "v1.image.openshift.io" failed with: Error, could not get list of group versions for APIService
2025-12-13T00:20:54.471720660+00:00 stderr F I1213 00:20:54.471658 18 controller.go:126] OpenAPI AggregationController: action for item v1.image.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.473328754+00:00 stderr F E1213 00:20:54.473258 18 controller.go:113] loading OpenAPI spec for "v1.security.openshift.io" failed with: Error, could not get list of group versions for APIService
2025-12-13T00:20:54.473328754+00:00 stderr F I1213 00:20:54.473274 18 controller.go:126] OpenAPI AggregationController: action for item v1.security.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.474784432+00:00 stderr F E1213 00:20:54.474714 18 controller.go:113] loading OpenAPI spec for "v1.project.openshift.io" failed with: Error, could not get list of group versions for APIService
2025-12-13T00:20:54.474784432+00:00 stderr F I1213 00:20:54.474729 18 controller.go:126] OpenAPI AggregationController: action for item v1.project.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.476865349+00:00 stderr F E1213 00:20:54.476790 18 controller.go:102] loading OpenAPI spec for "v1.route.openshift.io" failed with: failed to download v1.route.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:54.476865349+00:00 stderr F , Header: map[Audit-Id:[b1188f5b-e07a-4f11-99d0-255a4d5777ac] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:54 GMT] X-Content-Type-Options:[nosniff]]
2025-12-13T00:20:54.476865349+00:00 stderr F I1213 00:20:54.476809 18 controller.go:109] OpenAPI AggregationController: action for item v1.route.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.477006533+00:00 stderr F E1213 00:20:54.476900 18 controller.go:113] loading OpenAPI spec for "v1.user.openshift.io" failed with: Error, could not get list of group versions for APIService
2025-12-13T00:20:54.478286157+00:00 stderr F I1213 00:20:54.478032 18 controller.go:126] OpenAPI AggregationController: action for item v1.user.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.479650654+00:00 stderr F E1213 00:20:54.479457 18 controller.go:113] loading OpenAPI spec for "v1.build.openshift.io" failed with: Error, could not get list of group versions for APIService
2025-12-13T00:20:54.479650654+00:00 stderr F I1213 00:20:54.479471 18 controller.go:126] OpenAPI AggregationController: action for item v1.build.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.481317658+00:00 stderr F E1213 00:20:54.481245 18 controller.go:113] loading OpenAPI spec for "v1.quota.openshift.io" failed with: Error, could not get list of group versions for APIService
2025-12-13T00:20:54.481317658+00:00 stderr F I1213 00:20:54.481257 18 controller.go:126] OpenAPI AggregationController: action for item v1.quota.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.482907112+00:00 stderr F E1213 00:20:54.482796 18 controller.go:102] loading OpenAPI spec for "v1.apps.openshift.io" failed with: failed to download v1.apps.openshift.io: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:54.482907112+00:00 stderr F , Header: map[Audit-Id:[fd055a02-db9d-4bed-9500-ec07fa6ef9f3] Cache-Control:[no-cache, private] Content-Length:[184] Content-Type:[text/plain; charset=utf-8] Date:[Sat, 13 Dec 2025 00:20:54 GMT] X-Content-Type-Options:[nosniff]]
2025-12-13T00:20:54.482907112+00:00 stderr F I1213 00:20:54.482820 18 controller.go:109] OpenAPI AggregationController: action for item v1.apps.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.482907112+00:00 stderr F E1213 00:20:54.482800 18 controller.go:113] loading OpenAPI spec for "v1.oauth.openshift.io" failed with: Error, could not get list of group versions for APIService
2025-12-13T00:20:54.483912148+00:00 stderr F I1213 00:20:54.483843 18 controller.go:126] OpenAPI AggregationController: action for item v1.oauth.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.485378578+00:00 stderr F E1213 00:20:54.485320 18 controller.go:113] loading OpenAPI spec for "v1.authorization.openshift.io" failed with: Error, could not get list of group versions for APIService
2025-12-13T00:20:54.485378578+00:00 stderr F I1213 00:20:54.485338 18 controller.go:126] OpenAPI AggregationController: action for item v1.authorization.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.487313661+00:00 stderr F E1213 00:20:54.487243 18 controller.go:113] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: Error, could not get list of group versions for APIService
2025-12-13T00:20:54.487313661+00:00 stderr F I1213 00:20:54.487260 18 controller.go:126] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.
2025-12-13T00:20:54.489175361+00:00 stderr F E1213 00:20:54.489117 18 controller.go:113] loading OpenAPI spec for "v1.route.openshift.io" failed with: Error, could not get list of group versions for APIService
2025-12-13T00:20:54.489175361+00:00 stderr F I1213 00:20:54.489136 18 controller.go:126] OpenAPI AggregationController: action for item v1.route.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.491910244+00:00 stderr F E1213 00:20:54.491818 18 controller.go:113] loading OpenAPI spec for "v1.apps.openshift.io" failed with: Error, could not get list of group versions for APIService
2025-12-13T00:20:54.491910244+00:00 stderr F I1213 00:20:54.491829 18 controller.go:126] OpenAPI AggregationController: action for item v1.apps.openshift.io: Rate Limited Requeue.
2025-12-13T00:20:54.519883219+00:00 stderr F W1213 00:20:54.519695 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.538154933+00:00 stderr F W1213 00:20:54.538029 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-marketplace/secrets" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.558184703+00:00 stderr F W1213 00:20:54.558064 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-marketplace/secrets" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.561539033+00:00 stderr F I1213 00:20:54.561449 18 healthz.go:261] poststarthook/rbac/bootstrap-roles check failed: readyz
2025-12-13T00:20:54.561539033+00:00 stderr F [-]poststarthook/rbac/bootstrap-roles failed: not finished
2025-12-13T00:20:54.612634692+00:00 stderr F W1213 00:20:54.612296 18 patch_genericapiserver.go:204] Request to "/apis/authorization.openshift.io/v1/rolebindingrestrictions" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.625447818+00:00 stderr F W1213 00:20:54.625373 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-ovn-kubernetes/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.625972272+00:00 stderr F W1213 00:20:54.625836 18 patch_genericapiserver.go:204] Request to "/apis/operator.openshift.io/v1/ingresscontrollers" (source IP 38.102.83.51:41236, user agent "cluster-policy-controller/v0.0.0 (linux/amd64) kubernetes/$Format") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.626296281+00:00 stderr F W1213 00:20:54.626222 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-apiserver/services" (source IP 38.102.83.51:41192, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.629237020+00:00 stderr F W1213 00:20:54.629159 18 patch_genericapiserver.go:204] Request to "/apis/network.operator.openshift.io/v1/egressrouters" (source IP 38.102.83.51:41192, user agent "network-operator/4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.630458064+00:00 stderr F W1213 00:20:54.630364 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.642699594+00:00 stderr F W1213 00:20:54.642615 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-ingress/secrets" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.650809943+00:00 stderr F W1213 00:20:54.650716 18 patch_genericapiserver.go:204] Request to "/apis/discovery.k8s.io/v1/endpointslices" (source IP 38.102.83.51:41200, user agent "crc/ovnkube@254d920e214b (linux/amd64) kubernetes/v0.29.2") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.654315997+00:00 stderr F W1213 00:20:54.654242 18 patch_genericapiserver.go:204] Request to "/api/v1/namespaces/openshift-console/configmaps" (source IP 38.102.83.51:41172, user agent "kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21") before server is ready, possibly a sign for a broken load balancer setup.
2025-12-13T00:20:54.662506948+00:00 stderr F I1213 00:20:54.662401 18 patch_genericapiserver.go:93] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-crc", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'KubeAPIReadyz' readyz=true
2025-12-13T00:20:54.668069228+00:00 stderr F W1213 00:20:54.667923 18 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.126.11]
2025-12-13T00:20:54.669767184+00:00 stderr F I1213 00:20:54.669685 18 controller.go:624] quota admission added evaluator for: endpoints
2025-12-13T00:20:54.674844822+00:00 stderr F I1213 00:20:54.673464 18 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
2025-12-13T00:20:54.679749933+00:00 stderr F I1213 00:20:54.679656 18 store.go:1579] "Monitoring resource count at path" resource="dnsrecords.ingress.operator.openshift.io" path="//ingress.operator.openshift.io/dnsrecords"
2025-12-13T00:20:54.681160812+00:00 stderr F I1213 00:20:54.681007 18 cacher.go:460] cacher (dnsrecords.ingress.operator.openshift.io): initialized
2025-12-13T00:20:54.681160812+00:00 stderr F I1213 00:20:54.681029 18 reflector.go:351] Caches populated for ingress.operator.openshift.io/v1, Kind=DNSRecord from storage/cacher.go:/ingress.operator.openshift.io/dnsrecords
2025-12-13T00:20:54.701518561+00:00 stderr F I1213 00:20:54.701355 18 store.go:1579] "Monitoring resource count at path" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io" path="//whereabouts.cni.cncf.io/overlappingrangeipreservations"
2025-12-13T00:20:54.702452006+00:00 stderr F I1213 00:20:54.702315 18 cacher.go:460] cacher (overlappingrangeipreservations.whereabouts.cni.cncf.io): initialized
2025-12-13T00:20:54.702452006+00:00 stderr F I1213 00:20:54.702364 18 reflector.go:351] Caches populated for whereabouts.cni.cncf.io/v1alpha1, Kind=OverlappingRangeIPReservation from storage/cacher.go:/whereabouts.cni.cncf.io/overlappingrangeipreservations
2025-12-13T00:20:54.763532944+00:00 stderr F I1213 00:20:54.763412 18 store.go:1579] "Monitoring resource count at path" resource="infrastructures.config.openshift.io" path="//config.openshift.io/infrastructures"
2025-12-13T00:20:54.766526845+00:00 stderr F I1213 00:20:54.766422 18 cacher.go:460] cacher (infrastructures.config.openshift.io): initialized
2025-12-13T00:20:54.766526845+00:00 stderr F I1213 00:20:54.766460 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Infrastructure from storage/cacher.go:/config.openshift.io/infrastructures
2025-12-13T00:20:54.775740774+00:00 stderr F I1213 00:20:54.775600 18 store.go:1579] "Monitoring resource count at path" resource="egressqoses.k8s.ovn.org" path="//k8s.ovn.org/egressqoses"
2025-12-13T00:20:54.777229174+00:00 stderr F I1213 00:20:54.777168 18 cacher.go:460] cacher (egressqoses.k8s.ovn.org): initialized
2025-12-13T00:20:54.777263995+00:00 stderr F I1213 00:20:54.777220 18 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=EgressQoS from storage/cacher.go:/k8s.ovn.org/egressqoses
2025-12-13T00:20:54.784577143+00:00 stderr F I1213 00:20:54.784480 18 store.go:1579] "Monitoring resource count at path" resource="network-attachment-definitions.k8s.cni.cncf.io" path="//k8s.cni.cncf.io/network-attachment-definitions"
2025-12-13T00:20:54.787778669+00:00 stderr F I1213 00:20:54.787671 18 cacher.go:460] cacher (network-attachment-definitions.k8s.cni.cncf.io): initialized
2025-12-13T00:20:54.787778669+00:00 stderr F I1213 00:20:54.787708 18 reflector.go:351] Caches populated for k8s.cni.cncf.io/v1, Kind=NetworkAttachmentDefinition from storage/cacher.go:/k8s.cni.cncf.io/network-attachment-definitions
2025-12-13T00:20:54.804683735+00:00 stderr F I1213 00:20:54.804590 18 store.go:1579] "Monitoring resource count at path" resource="metal3remediations.infrastructure.cluster.x-k8s.io" path="//infrastructure.cluster.x-k8s.io/metal3remediations"
2025-12-13T00:20:54.806452933+00:00 stderr F I1213 00:20:54.806366 18 cacher.go:460] cacher (metal3remediations.infrastructure.cluster.x-k8s.io): initialized
2025-12-13T00:20:54.806452933+00:00 stderr F I1213 00:20:54.806403 18 reflector.go:351] Caches populated for infrastructure.cluster.x-k8s.io/v1alpha5, Kind=Metal3Remediation from storage/cacher.go:/infrastructure.cluster.x-k8s.io/metal3remediations
2025-12-13T00:20:54.812597319+00:00 stderr F I1213 00:20:54.812527 18 store.go:1579] "Monitoring resource count at path" resource="metal3remediations.infrastructure.cluster.x-k8s.io" path="//infrastructure.cluster.x-k8s.io/metal3remediations"
2025-12-13T00:20:54.813441431+00:00 stderr F I1213 00:20:54.813389 18 cacher.go:460] cacher (metal3remediations.infrastructure.cluster.x-k8s.io): initialized
2025-12-13T00:20:54.813441431+00:00 stderr F I1213 00:20:54.813415 18 reflector.go:351] Caches populated for infrastructure.cluster.x-k8s.io/v1beta1, Kind=Metal3Remediation from storage/cacher.go:/infrastructure.cluster.x-k8s.io/metal3remediations
2025-12-13T00:20:54.823119242+00:00 stderr F I1213 00:20:54.823025 18 store.go:1579] "Monitoring resource count at path" resource="adminpolicybasedexternalroutes.k8s.ovn.org" path="//k8s.ovn.org/adminpolicybasedexternalroutes"
2025-12-13T00:20:54.824099749+00:00 stderr F I1213 00:20:54.824015 18 cacher.go:460] cacher (adminpolicybasedexternalroutes.k8s.ovn.org): initialized
2025-12-13T00:20:54.824099749+00:00 stderr F I1213 00:20:54.824061 18 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=AdminPolicyBasedExternalRoute from storage/cacher.go:/k8s.ovn.org/adminpolicybasedexternalroutes
2025-12-13T00:20:54.944588560+00:00 stderr F I1213 00:20:54.944473 18 store.go:1579] "Monitoring resource count at path" resource="egressips.k8s.ovn.org" path="//k8s.ovn.org/egressips"
2025-12-13T00:20:54.945785432+00:00 stderr F I1213 00:20:54.945720 18 cacher.go:460] cacher (egressips.k8s.ovn.org): initialized
2025-12-13T00:20:54.945785432+00:00 stderr F I1213 00:20:54.945739 18 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=EgressIP from storage/cacher.go:/k8s.ovn.org/egressips
2025-12-13T00:20:54.979378319+00:00 stderr F I1213 00:20:54.979260 18 store.go:1579] "Monitoring resource count at path" resource="machinesets.machine.openshift.io" path="//machine.openshift.io/machinesets"
2025-12-13T00:20:54.980064618+00:00 stderr F I1213 00:20:54.979997 18 cacher.go:460] cacher (machinesets.machine.openshift.io): initialized
2025-12-13T00:20:54.980064618+00:00 stderr F I1213 00:20:54.980022 18 reflector.go:351] Caches populated for machine.openshift.io/v1beta1, Kind=MachineSet from storage/cacher.go:/machine.openshift.io/machinesets
2025-12-13T00:20:55.056667145+00:00 stderr F I1213 00:20:55.056547 18 store.go:1579] "Monitoring resource count at path" resource="networks.operator.openshift.io" path="//operator.openshift.io/networks"
2025-12-13T00:20:55.059194433+00:00 stderr F I1213 00:20:55.059088 18 cacher.go:460] cacher (networks.operator.openshift.io): initialized
2025-12-13T00:20:55.059194433+00:00 stderr F I1213 00:20:55.059113 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Network from storage/cacher.go:/operator.openshift.io/networks
2025-12-13T00:20:55.180809164+00:00 stderr F I1213 00:20:55.180646 18 store.go:1579] "Monitoring resource count at path" resource="machinehealthchecks.machine.openshift.io" path="//machine.openshift.io/machinehealthchecks"
2025-12-13T00:20:55.182697745+00:00 stderr F I1213 00:20:55.182595 18 cacher.go:460] cacher (machinehealthchecks.machine.openshift.io): initialized
2025-12-13T00:20:55.182697745+00:00 stderr F I1213 00:20:55.182622 18 reflector.go:351] Caches populated for machine.openshift.io/v1beta1, Kind=MachineHealthCheck from storage/cacher.go:/machine.openshift.io/machinehealthchecks
2025-12-13T00:20:55.239345794+00:00 stderr F I1213 00:20:55.239199 18 store.go:1579] "Monitoring resource count at path" resource="installplans.operators.coreos.com" path="//operators.coreos.com/installplans"
2025-12-13T00:20:55.240607599+00:00 stderr F I1213 00:20:55.240498 18 cacher.go:460] cacher (installplans.operators.coreos.com): initialized
2025-12-13T00:20:55.240607599+00:00 stderr F I1213 00:20:55.240519 18 reflector.go:351] Caches populated for operators.coreos.com/v1alpha1, Kind=InstallPlan from storage/cacher.go:/operators.coreos.com/installplans
2025-12-13T00:20:55.454337886+00:00 stderr F I1213 00:20:55.454215 18 store.go:1579] "Monitoring resource count at path" resource="clusterresourcequotas.quota.openshift.io" path="//quota.openshift.io/clusterresourcequotas"
2025-12-13T00:20:55.455706342+00:00 stderr F I1213 00:20:55.455600 18 cacher.go:460] cacher (clusterresourcequotas.quota.openshift.io): initialized
2025-12-13T00:20:55.455706342+00:00 stderr F I1213 00:20:55.455631 18 reflector.go:351] Caches populated for quota.openshift.io/v1, Kind=ClusterResourceQuota from storage/cacher.go:/quota.openshift.io/clusterresourcequotas
2025-12-13T00:20:55.522875205+00:00 stderr F I1213 00:20:55.522753 18 store.go:1579] "Monitoring resource count at path" resource="alertmanagerconfigs.monitoring.coreos.com" path="//monitoring.coreos.com/alertmanagerconfigs"
2025-12-13T00:20:55.524214491+00:00 stderr F I1213 00:20:55.524115 18 cacher.go:460] cacher (alertmanagerconfigs.monitoring.coreos.com): initialized
2025-12-13T00:20:55.524214491+00:00 stderr F I1213 00:20:55.524144 18 reflector.go:351] Caches populated for monitoring.coreos.com/v1alpha1, Kind=AlertmanagerConfig from storage/cacher.go:/monitoring.coreos.com/alertmanagerconfigs
2025-12-13T00:20:55.538495227+00:00 stderr F I1213 00:20:55.538339 18 store.go:1579] "Monitoring resource count at path" resource="alertmanagerconfigs.monitoring.coreos.com" path="//monitoring.coreos.com/alertmanagerconfigs"
2025-12-13T00:20:55.545262479+00:00 stderr F I1213 00:20:55.544282 18 cacher.go:460] cacher (alertmanagerconfigs.monitoring.coreos.com): initialized
2025-12-13T00:20:55.545262479+00:00 stderr F I1213 00:20:55.544307 18 reflector.go:351] Caches populated for monitoring.coreos.com/v1beta1, Kind=AlertmanagerConfig from storage/cacher.go:/monitoring.coreos.com/alertmanagerconfigs
2025-12-13T00:20:55.555470165+00:00 stderr F I1213 00:20:55.554883 18 store.go:1579] "Monitoring resource count at path" resource="machineconfigs.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/machineconfigs"
2025-12-13T00:20:55.569538404+00:00 stderr F I1213 00:20:55.569390 18 cacher.go:460] cacher (machineconfigs.machineconfiguration.openshift.io): initialized
2025-12-13T00:20:55.569538404+00:00 stderr F I1213 00:20:55.569424 18 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=MachineConfig from storage/cacher.go:/machineconfiguration.openshift.io/machineconfigs
2025-12-13T00:20:56.083055431+00:00 stderr F I1213 00:20:56.081991 18 store.go:1579] "Monitoring resource count at path" resource="egressfirewalls.k8s.ovn.org" path="//k8s.ovn.org/egressfirewalls"
2025-12-13T00:20:56.083212635+00:00 stderr F I1213 00:20:56.083155 18 cacher.go:460] cacher (egressfirewalls.k8s.ovn.org): initialized
2025-12-13T00:20:56.083212635+00:00 stderr F I1213 00:20:56.083184 18 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=EgressFirewall from storage/cacher.go:/k8s.ovn.org/egressfirewalls
2025-12-13T00:20:56.107933922+00:00 stderr F I1213 00:20:56.107644 18 store.go:1579] "Monitoring resource count at path" resource="featuregates.config.openshift.io" path="//config.openshift.io/featuregates"
2025-12-13T00:20:56.109744522+00:00 stderr F I1213 00:20:56.109668 18 cacher.go:460] cacher (featuregates.config.openshift.io): initialized
2025-12-13T00:20:56.109744522+00:00 stderr F I1213 00:20:56.109697 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=FeatureGate from storage/cacher.go:/config.openshift.io/featuregates
2025-12-13T00:20:56.535480500+00:00 stderr F I1213 00:20:56.535346 18 store.go:1579] "Monitoring resource count at path" resource="adminnetworkpolicies.policy.networking.k8s.io" path="//policy.networking.k8s.io/adminnetworkpolicies"
2025-12-13T00:20:56.538153432+00:00 stderr F I1213 00:20:56.537480 18 cacher.go:460] cacher (adminnetworkpolicies.policy.networking.k8s.io): initialized
2025-12-13T00:20:56.538153432+00:00 stderr F I1213 00:20:56.537503 18 reflector.go:351] Caches populated for policy.networking.k8s.io/v1alpha1, Kind=AdminNetworkPolicy from storage/cacher.go:/policy.networking.k8s.io/adminnetworkpolicies
2025-12-13T00:20:56.660640917+00:00 stderr F I1213 00:20:56.660536 18 store.go:1579] "Monitoring resource count at path" resource="networks.config.openshift.io" path="//config.openshift.io/networks"
2025-12-13T00:20:56.663982118+00:00 stderr F I1213 00:20:56.663900 18 cacher.go:460] cacher (networks.config.openshift.io): initialized
2025-12-13T00:20:56.663982118+00:00 stderr F I1213 00:20:56.663924 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Network from storage/cacher.go:/config.openshift.io/networks
2025-12-13T00:20:56.868406854+00:00 stderr F I1213 00:20:56.868299 18 store.go:1579] "Monitoring resource count at path" resource="operatorconditions.operators.coreos.com" path="//operators.coreos.com/operatorconditions"
2025-12-13T00:20:56.869434271+00:00 stderr F I1213 00:20:56.869377 18 cacher.go:460] cacher (operatorconditions.operators.coreos.com): initialized
2025-12-13T00:20:56.869434271+00:00 stderr F I1213 00:20:56.869403 18 reflector.go:351] Caches populated for operators.coreos.com/v1, Kind=OperatorCondition from storage/cacher.go:/operators.coreos.com/operatorconditions
2025-12-13T00:20:56.875983978+00:00 stderr F I1213 00:20:56.875910 18 store.go:1579] "Monitoring resource count at path" resource="operatorconditions.operators.coreos.com" path="//operators.coreos.com/operatorconditions"
2025-12-13T00:20:56.877976172+00:00 stderr F I1213 00:20:56.877895 18 cacher.go:460] cacher (operatorconditions.operators.coreos.com): initialized
2025-12-13T00:20:56.877976172+00:00 stderr F I1213 00:20:56.877923 18 reflector.go:351] Caches populated for operators.coreos.com/v2, Kind=OperatorCondition from storage/cacher.go:/operators.coreos.com/operatorconditions
2025-12-13T00:20:57.001553306+00:00 stderr F I1213 00:20:57.001474 18 store.go:1579] "Monitoring resource count at path" resource="subscriptions.operators.coreos.com" path="//operators.coreos.com/subscriptions"
2025-12-13T00:20:57.002733239+00:00 stderr F I1213 00:20:57.002683 18 cacher.go:460] cacher (subscriptions.operators.coreos.com): initialized
2025-12-13T00:20:57.002733239+00:00 stderr F I1213 00:20:57.002707 18 reflector.go:351] Caches populated for operators.coreos.com/v1alpha1, Kind=Subscription from storage/cacher.go:/operators.coreos.com/subscriptions
2025-12-13T00:20:57.042389069+00:00 stderr F I1213 00:20:57.042130 18 store.go:1579] "Monitoring resource count at path" resource="kubeschedulers.operator.openshift.io" path="//operator.openshift.io/kubeschedulers"
2025-12-13T00:20:57.044672871+00:00 stderr F I1213 00:20:57.044013 18 cacher.go:460] cacher (kubeschedulers.operator.openshift.io): initialized
2025-12-13T00:20:57.044870306+00:00 stderr F I1213 00:20:57.044792 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=KubeScheduler from storage/cacher.go:/operator.openshift.io/kubeschedulers
2025-12-13T00:20:57.063700533+00:00 stderr F I1213 00:20:57.063608 18 store.go:1579] "Monitoring resource count at path" resource="controlplanemachinesets.machine.openshift.io" path="//machine.openshift.io/controlplanemachinesets"
2025-12-13T00:20:57.064877886+00:00 stderr F I1213 00:20:57.064824 18 cacher.go:460] cacher (controlplanemachinesets.machine.openshift.io): initialized
2025-12-13T00:20:57.064909697+00:00 stderr F I1213 00:20:57.064872 18 reflector.go:351] Caches populated for machine.openshift.io/v1, Kind=ControlPlaneMachineSet from storage/cacher.go:/machine.openshift.io/controlplanemachinesets
2025-12-13T00:20:57.104569377+00:00 stderr F I1213 00:20:57.102099 18 store.go:1579] "Monitoring resource count at path" resource="podmonitors.monitoring.coreos.com" path="//monitoring.coreos.com/podmonitors"
2025-12-13T00:20:57.104569377+00:00 stderr F I1213 00:20:57.103409 18 cacher.go:460] cacher (podmonitors.monitoring.coreos.com): initialized
2025-12-13T00:20:57.104569377+00:00 stderr F I1213 00:20:57.103422 18 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=PodMonitor from storage/cacher.go:/monitoring.coreos.com/podmonitors
2025-12-13T00:20:57.115909102+00:00 stderr F I1213 00:20:57.115851 18 store.go:1579] "Monitoring resource count at path" resource="machineautoscalers.autoscaling.openshift.io" path="//autoscaling.openshift.io/machineautoscalers"
2025-12-13T00:20:57.117540227+00:00 stderr F I1213 00:20:57.117453 18 cacher.go:460] cacher (machineautoscalers.autoscaling.openshift.io): initialized
2025-12-13T00:20:57.117540227+00:00 stderr F I1213 00:20:57.117488 18 reflector.go:351] Caches populated for autoscaling.openshift.io/v1beta1, Kind=MachineAutoscaler from storage/cacher.go:/autoscaling.openshift.io/machineautoscalers
2025-12-13T00:20:57.144610757+00:00 stderr F I1213 00:20:57.144499 18 store.go:1579] "Monitoring resource count at path" resource="clusterversions.config.openshift.io" path="//config.openshift.io/clusterversions"
2025-12-13T00:20:57.150054354+00:00 stderr F I1213 00:20:57.149934 18 cacher.go:460] cacher (clusterversions.config.openshift.io): initialized
2025-12-13T00:20:57.150054354+00:00 stderr F I1213 00:20:57.149981 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ClusterVersion from storage/cacher.go:/config.openshift.io/clusterversions
2025-12-13T00:20:57.227757171+00:00 stderr F I1213 00:20:57.227634 18 store.go:1579] "Monitoring resource count at path" resource="catalogsources.operators.coreos.com" path="//operators.coreos.com/catalogsources"
2025-12-13T00:20:57.231979664+00:00 stderr F I1213 00:20:57.231803 18 cacher.go:460] cacher (catalogsources.operators.coreos.com): initialized
2025-12-13T00:20:57.231979664+00:00 stderr F I1213 00:20:57.231843 18 reflector.go:351] Caches populated for operators.coreos.com/v1alpha1, Kind=CatalogSource from storage/cacher.go:/operators.coreos.com/catalogsources
2025-12-13T00:20:57.299793215+00:00 stderr F I1213 00:20:57.299682 18 store.go:1579] "Monitoring resource count at path" resource="securitycontextconstraints.security.openshift.io" path="//security.openshift.io/securitycontextconstraints"
2025-12-13T00:20:57.307976626+00:00 stderr F I1213 00:20:57.307844 18 cacher.go:460] cacher (securitycontextconstraints.security.openshift.io): initialized
2025-12-13T00:20:57.307976626+00:00 stderr F I1213 00:20:57.307879 18 reflector.go:351] Caches populated for security.openshift.io/v1, Kind=SecurityContextConstraints from storage/cacher.go:/security.openshift.io/securitycontextconstraints
2025-12-13T00:20:57.532687519+00:00 stderr F I1213 00:20:57.532558 18 store.go:1579] "Monitoring resource count at path" resource="operatorpkis.network.operator.openshift.io" path="//network.operator.openshift.io/operatorpkis"
2025-12-13T00:20:57.535601918+00:00 stderr F I1213 00:20:57.535488 18 cacher.go:460] cacher (operatorpkis.network.operator.openshift.io): initialized
2025-12-13T00:20:57.535601918+00:00 stderr F I1213 00:20:57.535536 18 reflector.go:351] Caches populated for network.operator.openshift.io/v1, Kind=OperatorPKI from storage/cacher.go:/network.operator.openshift.io/operatorpkis
2025-12-13T00:20:57.884113722+00:00 stderr F I1213 00:20:57.884006 18 store.go:1579] "Monitoring resource count at path" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" path="//infrastructure.cluster.x-k8s.io/metal3remediationtemplates"
2025-12-13T00:20:57.885664624+00:00 stderr F I1213 00:20:57.885609 18 cacher.go:460] cacher (metal3remediationtemplates.infrastructure.cluster.x-k8s.io): initialized
2025-12-13T00:20:57.885664624+00:00 stderr F I1213 00:20:57.885633 18 reflector.go:351] Caches populated for infrastructure.cluster.x-k8s.io/v1alpha5, Kind=Metal3RemediationTemplate from storage/cacher.go:/infrastructure.cluster.x-k8s.io/metal3remediationtemplates
2025-12-13T00:20:57.891312187+00:00 stderr F I1213 00:20:57.891252 18 store.go:1579] "Monitoring resource count at path" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" path="//infrastructure.cluster.x-k8s.io/metal3remediationtemplates"
2025-12-13T00:20:57.892419936+00:00 stderr F I1213 00:20:57.892366 18 cacher.go:460] cacher (metal3remediationtemplates.infrastructure.cluster.x-k8s.io): initialized
2025-12-13T00:20:57.892465497+00:00 stderr F I1213 00:20:57.892399 18 reflector.go:351] Caches populated for infrastructure.cluster.x-k8s.io/v1beta1, Kind=Metal3RemediationTemplate from storage/cacher.go:/infrastructure.cluster.x-k8s.io/metal3remediationtemplates
2025-12-13T00:20:57.980895334+00:00 stderr F I1213 00:20:57.980705 18 store.go:1579] "Monitoring resource count at path" resource="egressservices.k8s.ovn.org" path="//k8s.ovn.org/egressservices"
2025-12-13T00:20:57.983146655+00:00 stderr F I1213 00:20:57.983017 18 cacher.go:460] cacher (egressservices.k8s.ovn.org): initialized
2025-12-13T00:20:57.983146655+00:00 stderr F I1213 00:20:57.983039 18 reflector.go:351] Caches populated for k8s.ovn.org/v1, Kind=EgressService from storage/cacher.go:/k8s.ovn.org/egressservices
2025-12-13T00:20:58.174655033+00:00 stderr F I1213 00:20:58.174494 18 store.go:1579] "Monitoring resource count at path" resource="ippools.whereabouts.cni.cncf.io" path="//whereabouts.cni.cncf.io/ippools"
2025-12-13T00:20:58.177034287+00:00 stderr F I1213 00:20:58.176296 18 cacher.go:460] cacher (ippools.whereabouts.cni.cncf.io): initialized
2025-12-13T00:20:58.177034287+00:00 stderr F I1213 00:20:58.176318 18 reflector.go:351] Caches populated for whereabouts.cni.cncf.io/v1alpha1, Kind=IPPool from storage/cacher.go:/whereabouts.cni.cncf.io/ippools
2025-12-13T00:20:58.250355865+00:00 stderr F I1213 00:20:58.250223 18 store.go:1579] "Monitoring resource count at path" resource="clusterserviceversions.operators.coreos.com" path="//operators.coreos.com/clusterserviceversions"
2025-12-13T00:20:58.253163471+00:00 stderr F I1213 00:20:58.253084 18 cacher.go:460] cacher (clusterserviceversions.operators.coreos.com): initialized
2025-12-13T00:20:58.253163471+00:00 stderr F I1213 00:20:58.253122 18 reflector.go:351] Caches populated for operators.coreos.com/v1alpha1, Kind=ClusterServiceVersion from storage/cacher.go:/operators.coreos.com/clusterserviceversions
2025-12-13T00:20:58.267094637+00:00 stderr F I1213 00:20:58.266796 18 store.go:1579] "Monitoring resource count at path" resource="baselineadminnetworkpolicies.policy.networking.k8s.io" path="//policy.networking.k8s.io/baselineadminnetworkpolicies"
2025-12-13T00:20:58.267990661+00:00 stderr F I1213 00:20:58.267890 18 cacher.go:460] cacher (baselineadminnetworkpolicies.policy.networking.k8s.io): initialized
2025-12-13T00:20:58.267990661+00:00 stderr F I1213 00:20:58.267930 18 reflector.go:351] Caches populated for policy.networking.k8s.io/v1alpha1, Kind=BaselineAdminNetworkPolicy from storage/cacher.go:/policy.networking.k8s.io/baselineadminnetworkpolicies
2025-12-13T00:20:58.349673145+00:00 stderr F I1213 00:20:58.349577 18 store.go:1579] "Monitoring resource count at path" resource="prometheuses.monitoring.coreos.com" path="//monitoring.coreos.com/prometheuses"
2025-12-13T00:20:58.351191846+00:00 stderr F I1213 00:20:58.351108 18 cacher.go:460] cacher (prometheuses.monitoring.coreos.com): initialized
2025-12-13T00:20:58.351191846+00:00 stderr F I1213 00:20:58.351138 18 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=Prometheus from storage/cacher.go:/monitoring.coreos.com/prometheuses
2025-12-13T00:20:58.377039434+00:00 stderr F I1213 00:20:58.376168 18 store.go:1579] "Monitoring resource count at path" resource="alertrelabelconfigs.monitoring.openshift.io" path="//monitoring.openshift.io/alertrelabelconfigs"
2025-12-13T00:20:58.377382503+00:00 stderr F I1213 00:20:58.377300 18 cacher.go:460] cacher (alertrelabelconfigs.monitoring.openshift.io): initialized
2025-12-13T00:20:58.377382503+00:00 stderr F I1213 00:20:58.377326 18 reflector.go:351] Caches populated for monitoring.openshift.io/v1, Kind=AlertRelabelConfig from storage/cacher.go:/monitoring.openshift.io/alertrelabelconfigs
2025-12-13T00:20:58.384435834+00:00 stderr F I1213 00:20:58.384333 18 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:20:58.386929921+00:00 stderr F I1213 00:20:58.386822 18 store.go:1579] "Monitoring resource count at path" resource="probes.monitoring.coreos.com" path="//monitoring.coreos.com/probes"
2025-12-13T00:20:58.388323608+00:00 stderr F I1213 00:20:58.388269 18 cacher.go:460] cacher (probes.monitoring.coreos.com): initialized
2025-12-13T00:20:58.388323608+00:00 stderr F I1213 00:20:58.388306 18 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=Probe from storage/cacher.go:/monitoring.coreos.com/probes
2025-12-13T00:20:58.392608784+00:00 stderr F I1213 00:20:58.392509 18 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:20:58.393411536+00:00 stderr F I1213 00:20:58.393338 18 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:20:58.394580087+00:00 stderr F I1213 00:20:58.394510 18 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:20:58.397184397+00:00 stderr F I1213 00:20:58.397068 18 store.go:1579] "Monitoring resource count at path" resource="projecthelmchartrepositories.helm.openshift.io" path="//helm.openshift.io/projecthelmchartrepositories"
2025-12-13T00:20:58.399337485+00:00 stderr F I1213 00:20:58.399226 18 cacher.go:460] cacher (projecthelmchartrepositories.helm.openshift.io): initialized
2025-12-13T00:20:58.399337485+00:00 stderr F I1213 00:20:58.399252 18 reflector.go:351] Caches populated for helm.openshift.io/v1beta1, Kind=ProjectHelmChartRepository from storage/cacher.go:/helm.openshift.io/projecthelmchartrepositories
2025-12-13T00:20:58.416582971+00:00 stderr F I1213 00:20:58.413501 18 store.go:1579] "Monitoring resource count at path" resource="apirequestcounts.apiserver.openshift.io" path="//apiserver.openshift.io/apirequestcounts"
2025-12-13T00:20:58.463071596+00:00 stderr F I1213 00:20:58.459717 18 store.go:1579] "Monitoring resource count at path" resource="proxies.config.openshift.io" path="//config.openshift.io/proxies"
2025-12-13T00:20:58.463071596+00:00 stderr F I1213 00:20:58.460687 18 cacher.go:460] cacher (proxies.config.openshift.io): initialized
2025-12-13T00:20:58.463071596+00:00 stderr F I1213 00:20:58.460702 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Proxy from storage/cacher.go:/config.openshift.io/proxies
2025-12-13T00:20:58.475228993+00:00 stderr F I1213 00:20:58.475120 18 store.go:1579] "Monitoring resource count at path" resource="kubeapiservers.operator.openshift.io" path="//operator.openshift.io/kubeapiservers"
2025-12-13T00:20:58.479528580+00:00 stderr F I1213 00:20:58.478873 18 cacher.go:460] cacher (kubeapiservers.operator.openshift.io): initialized
2025-12-13T00:20:58.479528580+00:00 stderr F I1213 00:20:58.478906 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=KubeAPIServer from storage/cacher.go:/operator.openshift.io/kubeapiservers
2025-12-13T00:20:58.480810484+00:00 stderr F I1213 00:20:58.480722 18 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
2025-12-13T00:20:58.482342456+00:00 stderr F I1213 00:20:58.482279 18 trace.go:236] Trace[891564077]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b3c9fa93-62de-4eb4-aaf2-da115e386699,client:38.102.83.51,api-group:coordination.k8s.io,api-version:v1,name:crc,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc,user-agent:kubelet/v1.29.5+29c95f3 (linux/amd64) kubernetes/a0d6f21,verb:PUT (13-Dec-2025 00:20:57.173) (total time: 1309ms):
2025-12-13T00:20:58.482342456+00:00 stderr F Trace[891564077]: ["GuaranteedUpdate etcd3" audit-id:b3c9fa93-62de-4eb4-aaf2-da115e386699,key:/leases/kube-node-lease/crc,type:*coordination.Lease,resource:leases.coordination.k8s.io 1309ms (00:20:57.173)
2025-12-13T00:20:58.482342456+00:00 stderr F Trace[891564077]: ---"About to Encode" 1307ms (00:20:58.480)]
2025-12-13T00:20:58.482342456+00:00 stderr F Trace[891564077]: [1.30923043s] [1.30923043s] END
2025-12-13T00:20:58.538219714+00:00 stderr F I1213 00:20:58.538127 18 store.go:1579] "Monitoring resource count at path" resource="clusteroperators.config.openshift.io" path="//config.openshift.io/clusteroperators"
2025-12-13T00:20:58.562385835+00:00 stderr F I1213 00:20:58.562274 18 cacher.go:460] cacher (clusteroperators.config.openshift.io): initialized
2025-12-13T00:20:58.562385835+00:00 stderr F I1213 00:20:58.562309 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ClusterOperator from storage/cacher.go:/config.openshift.io/clusteroperators
2025-12-13T00:20:58.612654372+00:00 stderr F I1213 00:20:58.612553 18 store.go:1579] "Monitoring resource count at path" resource="ipaddressclaims.ipam.cluster.x-k8s.io" path="//ipam.cluster.x-k8s.io/ipaddressclaims"
2025-12-13T00:20:58.616972879+00:00 stderr F I1213 00:20:58.616863 18 cacher.go:460] cacher (ipaddressclaims.ipam.cluster.x-k8s.io): initialized
2025-12-13T00:20:58.616972879+00:00 stderr F I1213 00:20:58.616891 18 reflector.go:351] Caches populated for ipam.cluster.x-k8s.io/v1alpha1, Kind=IPAddressClaim from storage/cacher.go:/ipam.cluster.x-k8s.io/ipaddressclaims
2025-12-13T00:20:58.621820140+00:00 stderr F I1213 00:20:58.621709 18 store.go:1579] "Monitoring resource count at path" resource="ipaddressclaims.ipam.cluster.x-k8s.io" path="//ipam.cluster.x-k8s.io/ipaddressclaims"
2025-12-13T00:20:58.623658579+00:00 stderr F I1213 00:20:58.623571 18 cacher.go:460] cacher (ipaddressclaims.ipam.cluster.x-k8s.io): initialized
2025-12-13T00:20:58.623658579+00:00 stderr F I1213 00:20:58.623600 18 reflector.go:351] Caches populated for ipam.cluster.x-k8s.io/v1beta1, Kind=IPAddressClaim from storage/cacher.go:/ipam.cluster.x-k8s.io/ipaddressclaims
2025-12-13T00:20:58.672292652+00:00 stderr F I1213 00:20:58.672198 18 cacher.go:460] cacher (apirequestcounts.apiserver.openshift.io): initialized
2025-12-13T00:20:58.672292652+00:00 stderr F I1213 00:20:58.672225 18 reflector.go:351] Caches populated for apiserver.openshift.io/v1, Kind=APIRequestCount from storage/cacher.go:/apiserver.openshift.io/apirequestcounts
2025-12-13T00:20:58.736911665+00:00 stderr F I1213 00:20:58.736818 18 store.go:1579] "Monitoring resource count at path" resource="alertmanagers.monitoring.coreos.com" path="//monitoring.coreos.com/alertmanagers"
2025-12-13T00:20:58.737846190+00:00 stderr F I1213 00:20:58.737756 18 cacher.go:460] cacher (alertmanagers.monitoring.coreos.com): initialized
2025-12-13T00:20:58.737846190+00:00 stderr F I1213 00:20:58.737802 18 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=Alertmanager from storage/cacher.go:/monitoring.coreos.com/alertmanagers
2025-12-13T00:20:58.893727677+00:00 stderr F I1213 00:20:58.893612 18 store.go:1579] "Monitoring resource count at path" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io" path="//controlplane.operator.openshift.io/podnetworkconnectivitychecks"
2025-12-13T00:20:58.899503003+00:00 stderr F I1213 00:20:58.899407 18 cacher.go:460] cacher (podnetworkconnectivitychecks.controlplane.operator.openshift.io): initialized
2025-12-13T00:20:58.899503003+00:00 stderr F I1213 00:20:58.899439 18 reflector.go:351] Caches populated for controlplane.operator.openshift.io/v1alpha1, Kind=PodNetworkConnectivityCheck from storage/cacher.go:/controlplane.operator.openshift.io/podnetworkconnectivitychecks
2025-12-13T00:20:58.964860656+00:00 stderr F I1213 00:20:58.964755 18 store.go:1579] "Monitoring resource count at path" resource="controllerconfigs.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/controllerconfigs"
2025-12-13T00:20:58.968297119+00:00 stderr F I1213 00:20:58.968203 18 cacher.go:460] cacher (controllerconfigs.machineconfiguration.openshift.io): initialized
2025-12-13T00:20:58.968297119+00:00 stderr F I1213 00:20:58.968228 18 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=ControllerConfig from storage/cacher.go:/machineconfiguration.openshift.io/controllerconfigs
2025-12-13T00:20:58.975603636+00:00 stderr F I1213 00:20:58.975522 18 store.go:1579] "Monitoring resource count at path" resource="ipaddresses.ipam.cluster.x-k8s.io" path="//ipam.cluster.x-k8s.io/ipaddresses"
2025-12-13T00:20:58.976596813+00:00 stderr F I1213 00:20:58.976542 18 cacher.go:460] cacher (ipaddresses.ipam.cluster.x-k8s.io): initialized
2025-12-13T00:20:58.976616183+00:00 stderr F I1213 00:20:58.976600 18 reflector.go:351] Caches populated for ipam.cluster.x-k8s.io/v1alpha1, Kind=IPAddress from storage/cacher.go:/ipam.cluster.x-k8s.io/ipaddresses
2025-12-13T00:20:58.984982789+00:00 stderr F I1213 00:20:58.984911 18 store.go:1579] "Monitoring resource count at path" resource="ipaddresses.ipam.cluster.x-k8s.io" path="//ipam.cluster.x-k8s.io/ipaddresses"
2025-12-13T00:20:58.986007847+00:00 stderr F I1213 00:20:58.985914 18 cacher.go:460] cacher (ipaddresses.ipam.cluster.x-k8s.io): initialized
2025-12-13T00:20:58.986007847+00:00 stderr F I1213 00:20:58.985960 18 reflector.go:351] Caches populated for ipam.cluster.x-k8s.io/v1beta1, Kind=IPAddress from storage/cacher.go:/ipam.cluster.x-k8s.io/ipaddresses
2025-12-13T00:20:59.192158469+00:00 stderr F I1213 00:20:59.191958 18 store.go:1579] "Monitoring resource count at path" resource="alertingrules.monitoring.openshift.io" path="//monitoring.openshift.io/alertingrules"
2025-12-13T00:20:59.192996422+00:00 stderr F I1213 00:20:59.192809 18 cacher.go:460] cacher (alertingrules.monitoring.openshift.io): initialized
2025-12-13T00:20:59.192996422+00:00 stderr F I1213 00:20:59.192841 18 reflector.go:351] Caches populated for monitoring.openshift.io/v1, Kind=AlertingRule from storage/cacher.go:/monitoring.openshift.io/alertingrules
2025-12-13T00:20:59.209141508+00:00 stderr F I1213 00:20:59.208988 18 store.go:1579] "Monitoring resource count at path" resource="servicemonitors.monitoring.coreos.com" path="//monitoring.coreos.com/servicemonitors"
2025-12-13T00:20:59.233739712+00:00 stderr F I1213 00:20:59.233616 18 cacher.go:460] cacher (servicemonitors.monitoring.coreos.com): initialized
2025-12-13T00:20:59.233739712+00:00 stderr F I1213 00:20:59.233650 18 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=ServiceMonitor from storage/cacher.go:/monitoring.coreos.com/servicemonitors
2025-12-13T00:20:59.355522227+00:00 stderr F I1213 00:20:59.355433 18 store.go:1579] "Monitoring resource count at path" resource="openshiftapiservers.operator.openshift.io" path="//operator.openshift.io/openshiftapiservers"
2025-12-13T00:20:59.358146738+00:00 stderr F I1213 00:20:59.358086 18 cacher.go:460] cacher (openshiftapiservers.operator.openshift.io): initialized
2025-12-13T00:20:59.358146738+00:00 stderr F I1213 00:20:59.358106 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=OpenShiftAPIServer from storage/cacher.go:/operator.openshift.io/openshiftapiservers
2025-12-13T00:20:59.407669115+00:00 stderr F I1213 00:20:59.406597 18 store.go:1579] "Monitoring resource count at path" resource="machines.machine.openshift.io" path="//machine.openshift.io/machines"
2025-12-13T00:20:59.408187499+00:00 stderr F I1213 00:20:59.408101 18 cacher.go:460] cacher (machines.machine.openshift.io): initialized
2025-12-13T00:20:59.408187499+00:00 stderr F I1213 00:20:59.408139 18 reflector.go:351] Caches populated for machine.openshift.io/v1beta1, Kind=Machine from storage/cacher.go:/machine.openshift.io/machines
2025-12-13T00:20:59.491079226+00:00 stderr F I1213 00:20:59.490982 18 store.go:1579] "Monitoring resource count at path" resource="operatorhubs.config.openshift.io" path="//config.openshift.io/operatorhubs"
2025-12-13T00:20:59.493069590+00:00 stderr F I1213 00:20:59.493007 18 cacher.go:460] cacher (operatorhubs.config.openshift.io): initialized
2025-12-13T00:20:59.493069590+00:00 stderr F I1213 00:20:59.493029 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=OperatorHub from storage/cacher.go:/config.openshift.io/operatorhubs
2025-12-13T00:20:59.518419914+00:00 stderr F I1213 00:20:59.518285 18 store.go:1579] "Monitoring resource count at path" resource="builds.config.openshift.io" path="//config.openshift.io/builds"
2025-12-13T00:20:59.521685912+00:00 stderr F I1213 00:20:59.521593 18 cacher.go:460] cacher (builds.config.openshift.io): initialized
2025-12-13T00:20:59.521685912+00:00 stderr F I1213 00:20:59.521643 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Build from storage/cacher.go:/config.openshift.io/builds
2025-12-13T00:20:59.533860480+00:00 stderr F I1213 00:20:59.533254 18 store.go:1579] "Monitoring resource count at path" resource="egressrouters.network.operator.openshift.io" path="//network.operator.openshift.io/egressrouters"
2025-12-13T00:20:59.537646442+00:00 stderr F I1213 00:20:59.537555 18 cacher.go:460] cacher (egressrouters.network.operator.openshift.io): initialized
2025-12-13T00:20:59.538042994+00:00 stderr F I1213 00:20:59.537958 18 reflector.go:351] Caches populated for network.operator.openshift.io/v1, Kind=EgressRouter from storage/cacher.go:/network.operator.openshift.io/egressrouters
2025-12-13T00:20:59.625171845+00:00 stderr F I1213 00:20:59.624990 18 store.go:1579] "Monitoring resource count at path" resource="rolebindingrestrictions.authorization.openshift.io" path="//authorization.openshift.io/rolebindingrestrictions"
2025-12-13T00:20:59.626295125+00:00 stderr F I1213 00:20:59.626237 18 cacher.go:460] cacher (rolebindingrestrictions.authorization.openshift.io): initialized
2025-12-13T00:20:59.626331636+00:00 stderr F I1213 00:20:59.626304 18 reflector.go:351] Caches populated for authorization.openshift.io/v1, Kind=RoleBindingRestriction from storage/cacher.go:/authorization.openshift.io/rolebindingrestrictions
2025-12-13T00:20:59.640782495+00:00 stderr F I1213 00:20:59.640683 18 store.go:1579] "Monitoring resource count at path" resource="configs.operator.openshift.io" path="//operator.openshift.io/configs"
2025-12-13T00:20:59.643552320+00:00 stderr F I1213 00:20:59.643416 18 cacher.go:460] cacher (configs.operator.openshift.io): initialized
2025-12-13T00:20:59.643552320+00:00 stderr F I1213 00:20:59.643442 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Config from storage/cacher.go:/operator.openshift.io/configs
2025-12-13T00:20:59.664564248+00:00 stderr F I1213 00:20:59.664434 18 store.go:1579] "Monitoring resource count at path" resource="ingresscontrollers.operator.openshift.io" path="//operator.openshift.io/ingresscontrollers"
2025-12-13T00:20:59.666635753+00:00 stderr F I1213 00:20:59.666495 18 cacher.go:460] cacher (ingresscontrollers.operator.openshift.io): initialized
2025-12-13T00:20:59.666635753+00:00 stderr F I1213 00:20:59.666523 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=IngressController from storage/cacher.go:/operator.openshift.io/ingresscontrollers
2025-12-13T00:20:59.673877559+00:00 stderr F I1213 00:20:59.673758 18 store.go:1579] "Monitoring resource count at path" resource="nodes.config.openshift.io" path="//config.openshift.io/nodes"
2025-12-13T00:20:59.675220565+00:00 stderr F I1213 00:20:59.675151 18 cacher.go:460] cacher (nodes.config.openshift.io): initialized
2025-12-13T00:20:59.675220565+00:00 stderr F I1213 00:20:59.675172 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Node from storage/cacher.go:/config.openshift.io/nodes
2025-12-13T00:20:59.739238442+00:00 stderr F I1213 00:20:59.739129 18 store.go:1579] "Monitoring resource count at path" resource="storageversionmigrations.migration.k8s.io" path="//migration.k8s.io/storageversionmigrations"
2025-12-13T00:20:59.741647567+00:00 stderr F I1213 00:20:59.741579 18 cacher.go:460] cacher (storageversionmigrations.migration.k8s.io): initialized
2025-12-13T00:20:59.741647567+00:00 stderr F I1213 00:20:59.741596 18 reflector.go:351] Caches populated for migration.k8s.io/v1alpha1, Kind=StorageVersionMigration from storage/cacher.go:/migration.k8s.io/storageversionmigrations
2025-12-13T00:21:00.217226200+00:00 stderr F I1213 00:21:00.217111 18 store.go:1579] "Monitoring resource count at path" resource="thanosrulers.monitoring.coreos.com" path="//monitoring.coreos.com/thanosrulers"
2025-12-13T00:21:00.218417352+00:00 stderr F I1213 00:21:00.218346 18 cacher.go:460] cacher (thanosrulers.monitoring.coreos.com): initialized
2025-12-13T00:21:00.218417352+00:00 stderr F I1213 00:21:00.218371 18 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=ThanosRuler from storage/cacher.go:/monitoring.coreos.com/thanosrulers
2025-12-13T00:21:00.946561501+00:00 stderr F I1213 00:21:00.946454 18 store.go:1579] "Monitoring resource count at path" resource="kubecontrollermanagers.operator.openshift.io" path="//operator.openshift.io/kubecontrollermanagers"
2025-12-13T00:21:00.950200130+00:00 stderr F I1213 00:21:00.950126 18 cacher.go:460] cacher (kubecontrollermanagers.operator.openshift.io): initialized
2025-12-13T00:21:00.950264252+00:00 stderr F I1213 00:21:00.950188 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=KubeControllerManager from storage/cacher.go:/operator.openshift.io/kubecontrollermanagers
2025-12-13T00:21:01.244991095+00:00 stderr F I1213 00:21:01.244815 18 store.go:1579] "Monitoring resource count at path" resource="operatorgroups.operators.coreos.com" path="//operators.coreos.com/operatorgroups"
2025-12-13T00:21:01.248575311+00:00 stderr F I1213 00:21:01.248459 18 cacher.go:460] cacher (operatorgroups.operators.coreos.com): initialized
2025-12-13T00:21:01.248627772+00:00 stderr F I1213 00:21:01.248567 18 reflector.go:351] Caches populated for operators.coreos.com/v1, Kind=OperatorGroup from storage/cacher.go:/operators.coreos.com/operatorgroups
2025-12-13T00:21:01.252744344+00:00 stderr F I1213 00:21:01.252638 18 store.go:1579] "Monitoring resource count at path" resource="operatorgroups.operators.coreos.com" path="//operators.coreos.com/operatorgroups"
2025-12-13T00:21:01.252786325+00:00 stderr F I1213 00:21:01.252716 18 controller.go:624] quota admission added evaluator for: namespaces
2025-12-13T00:21:01.255260572+00:00 stderr F I1213 00:21:01.255057 18 cacher.go:460] cacher (operatorgroups.operators.coreos.com): initialized
2025-12-13T00:21:01.255260572+00:00 stderr F I1213 00:21:01.255080 18 reflector.go:351] Caches populated for operators.coreos.com/v1alpha2, Kind=OperatorGroup from storage/cacher.go:/operators.coreos.com/operatorgroups
2025-12-13T00:21:01.641713920+00:00 stderr F I1213 00:21:01.641558 18 store.go:1579] "Monitoring resource count at path" resource="prometheusrules.monitoring.coreos.com" path="//monitoring.coreos.com/prometheusrules"
2025-12-13T00:21:01.658512083+00:00 stderr F I1213 00:21:01.658374 18 cacher.go:460] cacher (prometheusrules.monitoring.coreos.com): initialized
2025-12-13T00:21:01.658512083+00:00 stderr F I1213 00:21:01.658408 18 reflector.go:351] Caches populated for monitoring.coreos.com/v1, Kind=PrometheusRule from storage/cacher.go:/monitoring.coreos.com/prometheusrules
2025-12-13T00:21:11.462535011+00:00 stderr F I1213 00:21:11.462371 18 controller.go:624] quota admission added evaluator for: podnetworkconnectivitychecks.controlplane.operator.openshift.io
2025-12-13T00:21:11.462535011+00:00 stderr F I1213 00:21:11.462405 18 controller.go:624] quota admission added evaluator for: podnetworkconnectivitychecks.controlplane.operator.openshift.io
2025-12-13T00:21:19.480235066+00:00 stderr F I1213 00:21:19.480093 18 store.go:1579] "Monitoring resource count at path" resource="apiservers.config.openshift.io" path="//config.openshift.io/apiservers"
2025-12-13T00:21:19.481361647+00:00 stderr F I1213 00:21:19.481296 18 cacher.go:460] cacher (apiservers.config.openshift.io): initialized
2025-12-13T00:21:19.481361647+00:00 stderr F I1213 00:21:19.481319 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=APIServer from storage/cacher.go:/config.openshift.io/apiservers
2025-12-13T00:21:19.501258334+00:00 stderr F I1213 00:21:19.501146 18 store.go:1579] "Monitoring resource count at path" resource="ingresses.config.openshift.io" path="//config.openshift.io/ingresses"
2025-12-13T00:21:19.502514078+00:00 stderr F I1213 00:21:19.502452 18 cacher.go:460] cacher (ingresses.config.openshift.io): initialized
2025-12-13T00:21:19.502514078+00:00 stderr F I1213 00:21:19.502496 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Ingress from storage/cacher.go:/config.openshift.io/ingresses
2025-12-13T00:21:20.269681570+00:00 stderr F I1213 00:21:20.268111 18 store.go:1579] "Monitoring resource count at path" resource="authentications.operator.openshift.io" path="//operator.openshift.io/authentications"
2025-12-13T00:21:20.273357928+00:00 stderr F I1213 00:21:20.273106 18 cacher.go:460] cacher (authentications.operator.openshift.io): initialized
2025-12-13T00:21:20.273357928+00:00 stderr F I1213 00:21:20.273135 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Authentication from storage/cacher.go:/operator.openshift.io/authentications
2025-12-13T00:21:20.356217945+00:00 stderr F I1213 00:21:20.356076 18 store.go:1579] "Monitoring resource count at path" resource="images.config.openshift.io" path="//config.openshift.io/images"
2025-12-13T00:21:20.357751726+00:00 stderr F I1213 00:21:20.357692 18 cacher.go:460] cacher (images.config.openshift.io): initialized
2025-12-13T00:21:20.357813838+00:00 stderr F I1213 00:21:20.357772 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Image from storage/cacher.go:/config.openshift.io/images
2025-12-13T00:21:21.237322001+00:00 stderr F I1213 00:21:21.237209 18 store.go:1579] "Monitoring resource count at path" resource="consoleplugins.console.openshift.io" path="//console.openshift.io/consoleplugins"
2025-12-13T00:21:21.238191805+00:00 stderr F I1213 00:21:21.238047 18 cacher.go:460] cacher (consoleplugins.console.openshift.io): initialized
2025-12-13T00:21:21.238191805+00:00 stderr F I1213 00:21:21.238076 18 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsolePlugin from storage/cacher.go:/console.openshift.io/consoleplugins
2025-12-13T00:21:21.245956355+00:00 stderr F I1213 00:21:21.245837 18 store.go:1579] "Monitoring resource count at path" resource="consoleplugins.console.openshift.io" path="//console.openshift.io/consoleplugins"
2025-12-13T00:21:21.246532900+00:00 stderr F I1213 00:21:21.246458 18 cacher.go:460] cacher (consoleplugins.console.openshift.io): initialized
2025-12-13T00:21:21.246532900+00:00 stderr F I1213 00:21:21.246482 18 reflector.go:351] Caches populated for console.openshift.io/v1alpha1, Kind=ConsolePlugin from storage/cacher.go:/console.openshift.io/consoleplugins
2025-12-13T00:21:21.495966711+00:00 stderr F I1213 00:21:21.495830 18 store.go:1579] "Monitoring resource count at path" resource="machineconfigpools.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/machineconfigpools"
2025-12-13T00:21:21.498170261+00:00 stderr F I1213 00:21:21.498084 18 cacher.go:460] cacher (machineconfigpools.machineconfiguration.openshift.io): initialized
2025-12-13T00:21:21.498170261+00:00 stderr F I1213 00:21:21.498102 18 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=MachineConfigPool from storage/cacher.go:/machineconfiguration.openshift.io/machineconfigpools
2025-12-13T00:21:21.739863923+00:00 stderr F I1213 00:21:21.739745 18 store.go:1579] "Monitoring resource count at path" resource="rangeallocations.security.internal.openshift.io" path="//security.internal.openshift.io/rangeallocations"
2025-12-13T00:21:21.743302826+00:00 stderr F I1213 00:21:21.743232 18 cacher.go:460] cacher (rangeallocations.security.internal.openshift.io): initialized
2025-12-13T00:21:21.743302826+00:00 stderr F I1213 00:21:21.743251 18 reflector.go:351] Caches populated for security.internal.openshift.io/v1, Kind=RangeAllocation from storage/cacher.go:/security.internal.openshift.io/rangeallocations
2025-12-13T00:21:22.033038935+00:00 stderr F I1213 00:21:22.032898 18 store.go:1579] "Monitoring resource count at path" resource="dnses.operator.openshift.io" path="//operator.openshift.io/dnses"
2025-12-13T00:21:22.034603997+00:00 stderr F I1213 00:21:22.034472 18 cacher.go:460] cacher (dnses.operator.openshift.io): initialized
2025-12-13T00:21:22.034603997+00:00 stderr F I1213 00:21:22.034493 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=DNS from storage/cacher.go:/operator.openshift.io/dnses
2025-12-13T00:21:22.073793894+00:00 stderr F I1213 00:21:22.073640 18 store.go:1579] "Monitoring resource count at path" resource="oauths.config.openshift.io" path="//config.openshift.io/oauths"
2025-12-13T00:21:22.075269625+00:00 stderr F I1213 00:21:22.075189 18 cacher.go:460] cacher (oauths.config.openshift.io): initialized
2025-12-13T00:21:22.075269625+00:00 stderr F I1213 00:21:22.075220 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=OAuth from storage/cacher.go:/config.openshift.io/oauths
2025-12-13T00:21:22.083441715+00:00 stderr F I1213 00:21:22.083354 18 store.go:1579] "Monitoring resource count at path" resource="schedulers.config.openshift.io" path="//config.openshift.io/schedulers"
2025-12-13T00:21:22.084557975+00:00 stderr F I1213 00:21:22.084477 18 cacher.go:460] cacher (schedulers.config.openshift.io): initialized
2025-12-13T00:21:22.084557975+00:00 stderr F I1213 00:21:22.084498 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Scheduler from storage/cacher.go:/config.openshift.io/schedulers
2025-12-13T00:21:22.094022840+00:00 stderr F I1213 00:21:22.093902 18 store.go:1579] "Monitoring resource count at path" resource="consoles.config.openshift.io" path="//config.openshift.io/consoles"
2025-12-13T00:21:22.095184042+00:00 stderr F I1213 00:21:22.095115 18 cacher.go:460] cacher (consoles.config.openshift.io): initialized
2025-12-13T00:21:22.095184042+00:00 stderr F I1213 00:21:22.095137 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Console from storage/cacher.go:/config.openshift.io/consoles
2025-12-13T00:21:22.437654093+00:00 stderr F I1213 00:21:22.437500 18 controller.go:624] quota admission added evaluator for: csistoragecapacities.storage.k8s.io
2025-12-13T00:21:22.437654093+00:00 stderr F I1213 00:21:22.437543 18 controller.go:624] quota admission added evaluator for: csistoragecapacities.storage.k8s.io
2025-12-13T00:21:22.596723926+00:00 stderr F I1213 00:21:22.596099 18 store.go:1579] "Monitoring resource count at path" resource="kubeletconfigs.machineconfiguration.openshift.io" path="//machineconfiguration.openshift.io/kubeletconfigs"
2025-12-13T00:21:22.597185548+00:00 stderr F I1213 00:21:22.597094 18 cacher.go:460] cacher (kubeletconfigs.machineconfiguration.openshift.io): initialized
2025-12-13T00:21:22.597185548+00:00 stderr F I1213 00:21:22.597115 18 reflector.go:351] Caches populated for machineconfiguration.openshift.io/v1, Kind=KubeletConfig from storage/cacher.go:/machineconfiguration.openshift.io/kubeletconfigs
2025-12-13T00:21:23.439527788+00:00 stderr F I1213 00:21:23.439374 18 apf_controller.go:455] "Update CurrentCL" plName="exempt" seatDemandHighWatermark=5 seatDemandAvg=0.010547555820270551 seatDemandStdev=0.14093798587061263 seatDemandSmoothed=2.8105536187418827 fairFrac=2.275969968319699 currentCL=6 concurrencyDenominator=6 backstop=false
2025-12-13T00:21:24.111266175+00:00 stderr F I1213 00:21:24.111144 18 store.go:1579] "Monitoring resource count at path" resource="imagecontentsourcepolicies.operator.openshift.io" path="//operator.openshift.io/imagecontentsourcepolicies"
2025-12-13T00:21:24.115451838+00:00 stderr F I1213 00:21:24.115328 18 cacher.go:460] cacher (imagecontentsourcepolicies.operator.openshift.io): initialized
2025-12-13T00:21:24.115451838+00:00 stderr F I1213 00:21:24.115367 18 reflector.go:351] Caches populated for operator.openshift.io/v1alpha1, Kind=ImageContentSourcePolicy from storage/cacher.go:/operator.openshift.io/imagecontentsourcepolicies
2025-12-13T00:21:25.053328487+00:00 stderr F I1213 00:21:25.053179 18 store.go:1579] "Monitoring resource count at path" resource="kubestorageversionmigrators.operator.openshift.io" path="//operator.openshift.io/kubestorageversionmigrators"
2025-12-13T00:21:25.054465127+00:00 stderr F I1213 00:21:25.054366 18 cacher.go:460] cacher (kubestorageversionmigrators.operator.openshift.io): initialized
2025-12-13T00:21:25.054465127+00:00 stderr F I1213 00:21:25.054393 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=KubeStorageVersionMigrator from storage/cacher.go:/operator.openshift.io/kubestorageversionmigrators
2025-12-13T00:21:27.189833839+00:00 stderr F I1213 00:21:27.189726 18 store.go:1579] "Monitoring resource count at path" resource="consolenotifications.console.openshift.io" path="//console.openshift.io/consolenotifications"
2025-12-13T00:21:27.191282258+00:00 stderr F I1213 00:21:27.191205 18 cacher.go:460] cacher (consolenotifications.console.openshift.io): initialized
2025-12-13T00:21:27.191282258+00:00 stderr F I1213 00:21:27.191236 18 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleNotification from storage/cacher.go:/console.openshift.io/consolenotifications
2025-12-13T00:21:27.294130414+00:00 stderr F I1213 00:21:27.294003 18 store.go:1579] "Monitoring resource count at path" resource="consoleclidownloads.console.openshift.io" path="//console.openshift.io/consoleclidownloads"
2025-12-13T00:21:27.295815429+00:00 stderr F I1213 00:21:27.295734 18 cacher.go:460] cacher (consoleclidownloads.console.openshift.io): initialized
2025-12-13T00:21:27.295815429+00:00 stderr F I1213 00:21:27.295766 18 reflector.go:351] Caches populated for console.openshift.io/v1, Kind=ConsoleCLIDownload from storage/cacher.go:/console.openshift.io/consoleclidownloads
2025-12-13T00:21:27.491729475+00:00 stderr F I1213 00:21:27.491601 18 store.go:1579] "Monitoring resource count at path" resource="olmconfigs.operators.coreos.com" path="//operators.coreos.com/olmconfigs"
2025-12-13T00:21:27.493955836+00:00 stderr F I1213 00:21:27.493345 18 cacher.go:460] cacher (olmconfigs.operators.coreos.com): initialized
2025-12-13T00:21:27.493955836+00:00 stderr F I1213 00:21:27.493376 18 reflector.go:351] Caches populated for operators.coreos.com/v1, Kind=OLMConfig from storage/cacher.go:/operators.coreos.com/olmconfigs
2025-12-13T00:21:35.954076169+00:00 stderr F I1213 00:21:35.953728 18 store.go:1579] "Monitoring resource count at path" resource="openshiftcontrollermanagers.operator.openshift.io" path="//operator.openshift.io/openshiftcontrollermanagers"
2025-12-13T00:21:35.956424073+00:00 stderr F I1213 00:21:35.956275 18 cacher.go:460] cacher (openshiftcontrollermanagers.operator.openshift.io): initialized
2025-12-13T00:21:35.956424073+00:00 stderr F I1213 00:21:35.956303 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=OpenShiftControllerManager from storage/cacher.go:/operator.openshift.io/openshiftcontrollermanagers
2025-12-13T00:21:36.530190866+00:00 stderr F I1213 00:21:36.530040 18 store.go:1579] "Monitoring resource count at path" resource="imagetagmirrorsets.config.openshift.io" path="//config.openshift.io/imagetagmirrorsets"
2025-12-13T00:21:36.531411728+00:00 stderr F I1213 00:21:36.531273 18 cacher.go:460] cacher (imagetagmirrorsets.config.openshift.io): initialized
2025-12-13T00:21:36.531411728+00:00 stderr F I1213 00:21:36.531362 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ImageTagMirrorSet from storage/cacher.go:/config.openshift.io/imagetagmirrorsets
2025-12-13T00:21:38.512290294+00:00 stderr F I1213 00:21:38.512066 18 store.go:1579] "Monitoring resource count at path" resource="etcds.operator.openshift.io" path="//operator.openshift.io/etcds"
2025-12-13T00:21:38.517010000+00:00 stderr F I1213 00:21:38.516713 18 cacher.go:460] cacher (etcds.operator.openshift.io): initialized
2025-12-13T00:21:38.517010000+00:00 stderr F I1213 00:21:38.516734 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Etcd from storage/cacher.go:/operator.openshift.io/etcds
2025-12-13T00:21:39.325279416+00:00 stderr F I1213 00:21:39.324011 18 store.go:1579] "Monitoring resource count at path" resource="authentications.config.openshift.io" path="//config.openshift.io/authentications"
2025-12-13T00:21:39.326745533+00:00 stderr F I1213 00:21:39.326513 18 cacher.go:460] cacher (authentications.config.openshift.io): initialized
2025-12-13T00:21:39.326745533+00:00 stderr F I1213 00:21:39.326535 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Authentication from storage/cacher.go:/config.openshift.io/authentications
2025-12-13T00:21:40.445644390+00:00 stderr F I1213 00:21:40.444985 18 store.go:1579] "Monitoring resource count at path" resource="imagedigestmirrorsets.config.openshift.io" path="//config.openshift.io/imagedigestmirrorsets"
2025-12-13T00:21:40.446303216+00:00 stderr F I1213 00:21:40.446239 18 cacher.go:460] cacher (imagedigestmirrorsets.config.openshift.io): initialized
2025-12-13T00:21:40.447368503+00:00 stderr F I1213 00:21:40.446394 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=ImageDigestMirrorSet from storage/cacher.go:/config.openshift.io/imagedigestmirrorsets
2025-12-13T00:21:41.937265041+00:00 stderr F I1213 00:21:41.937119 18 store.go:1579] "Monitoring resource count at path" resource="dnses.config.openshift.io" path="//config.openshift.io/dnses"
2025-12-13T00:21:41.939075475+00:00 stderr F I1213 00:21:41.938964 18 cacher.go:460] cacher (dnses.config.openshift.io): initialized
2025-12-13T00:21:41.939075475+00:00 stderr F I1213 00:21:41.938994 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=DNS from storage/cacher.go:/config.openshift.io/dnses
2025-12-13T00:21:42.803426475+00:00 stderr F I1213 00:21:42.803232 18 store.go:1579] "Monitoring resource count at path" resource="servicecas.operator.openshift.io" path="//operator.openshift.io/servicecas"
2025-12-13T00:21:42.805346073+00:00 stderr F I1213
00:21:42.805250 18 cacher.go:460] cacher (servicecas.operator.openshift.io): initialized 2025-12-13T00:21:42.805346073+00:00 stderr F I1213 00:21:42.805274 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=ServiceCA from storage/cacher.go:/operator.openshift.io/servicecas 2025-12-13T00:21:44.102909477+00:00 stderr F I1213 00:21:44.102751 18 store.go:1579] "Monitoring resource count at path" resource="projects.config.openshift.io" path="//config.openshift.io/projects" 2025-12-13T00:21:44.103967523+00:00 stderr F I1213 00:21:44.103882 18 cacher.go:460] cacher (projects.config.openshift.io): initialized 2025-12-13T00:21:44.103967523+00:00 stderr F I1213 00:21:44.103906 18 reflector.go:351] Caches populated for config.openshift.io/v1, Kind=Project from storage/cacher.go:/config.openshift.io/projects 2025-12-13T00:21:51.441060692+00:00 stderr F I1213 00:21:51.440848 18 trace.go:236] Trace[1210452737]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:90f34b49-5ef9-453d-9450-fcaac85bf30d,client:38.102.83.182,api-group:,api-version:v1,name:,subresource:,namespace:openshift-must-gather-zffxd,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/openshift-must-gather-zffxd/pods,user-agent:oc/4.20.0 (linux/amd64) kubernetes/0963a01,verb:POST (13-Dec-2025 00:21:50.367) (total time: 1072ms): 2025-12-13T00:21:51.441060692+00:00 stderr F Trace[1210452737]: ---"Write to database call failed" len:1721,err:pods "must-gather-" is forbidden: error looking up service account openshift-must-gather-zffxd/default: serviceaccount "default" not found 1072ms (00:21:51.440) 2025-12-13T00:21:51.441060692+00:00 stderr F Trace[1210452737]: [1.072816431s] [1.072816431s] END 2025-12-13T00:21:53.411137944+00:00 stderr F I1213 00:21:53.411023 18 store.go:1579] "Monitoring resource count at path" resource="operators.operators.coreos.com" path="//operators.coreos.com/operators" 2025-12-13T00:21:53.412183230+00:00 stderr F I1213 
00:21:53.412105 18 cacher.go:460] cacher (operators.operators.coreos.com): initialized 2025-12-13T00:21:53.412183230+00:00 stderr F I1213 00:21:53.412122 18 reflector.go:351] Caches populated for operators.coreos.com/v1, Kind=Operator from storage/cacher.go:/operators.coreos.com/operators 2025-12-13T00:21:54.517245395+00:00 stderr F I1213 00:21:54.517113 18 store.go:1579] "Monitoring resource count at path" resource="machineconfigurations.operator.openshift.io" path="//operator.openshift.io/machineconfigurations" 2025-12-13T00:21:54.519452810+00:00 stderr F I1213 00:21:54.519384 18 cacher.go:460] cacher (machineconfigurations.operator.openshift.io): initialized 2025-12-13T00:21:54.519452810+00:00 stderr F I1213 00:21:54.519401 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=MachineConfiguration from storage/cacher.go:/operator.openshift.io/machineconfigurations 2025-12-13T00:21:56.566949342+00:00 stderr F I1213 00:21:56.566761 18 store.go:1579] "Monitoring resource count at path" resource="configs.samples.operator.openshift.io" path="//samples.operator.openshift.io/configs" 2025-12-13T00:21:56.569623448+00:00 stderr F I1213 00:21:56.569535 18 cacher.go:460] cacher (configs.samples.operator.openshift.io): initialized 2025-12-13T00:21:56.569623448+00:00 stderr F I1213 00:21:56.569558 18 reflector.go:351] Caches populated for samples.operator.openshift.io/v1, Kind=Config from storage/cacher.go:/samples.operator.openshift.io/configs 2025-12-13T00:21:56.924103720+00:00 stderr F I1213 00:21:56.923306 18 store.go:1579] "Monitoring resource count at path" resource="imagepruners.imageregistry.operator.openshift.io" path="//imageregistry.operator.openshift.io/imagepruners" 2025-12-13T00:21:56.924741777+00:00 stderr F I1213 00:21:56.924644 18 cacher.go:460] cacher (imagepruners.imageregistry.operator.openshift.io): initialized 2025-12-13T00:21:56.924741777+00:00 stderr F I1213 00:21:56.924671 18 reflector.go:351] Caches populated for 
imageregistry.operator.openshift.io/v1, Kind=ImagePruner from storage/cacher.go:/imageregistry.operator.openshift.io/imagepruners 2025-12-13T00:21:57.137461154+00:00 stderr F I1213 00:21:57.137350 18 store.go:1579] "Monitoring resource count at path" resource="consoles.operator.openshift.io" path="//operator.openshift.io/consoles" 2025-12-13T00:21:57.139962995+00:00 stderr F I1213 00:21:57.139867 18 cacher.go:460] cacher (consoles.operator.openshift.io): initialized 2025-12-13T00:21:57.139962995+00:00 stderr F I1213 00:21:57.139896 18 reflector.go:351] Caches populated for operator.openshift.io/v1, Kind=Console from storage/cacher.go:/operator.openshift.io/consoles

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup/0.log

2025-12-13T00:20:50.891257022+00:00 stdout F Fixing audit permissions ... 2025-12-13T00:20:50.896663958+00:00 stdout F Acquiring exclusive lock /var/log/kube-apiserver/.lock ...
2025-12-13T00:20:50.897724526+00:00 stdout F flock: getting lock took 0.000008 seconds

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints/0.log

2025-12-13T00:20:53.215499271+00:00 stderr F W1213 00:20:53.215325 1 cmd.go:245] Using insecure, self-signed certificates 2025-12-13T00:20:53.215761948+00:00 stderr F I1213 00:20:53.215623 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765585253 cert, and key in /tmp/serving-cert-1408744315/serving-signer.crt, /tmp/serving-cert-1408744315/serving-signer.key 2025-12-13T00:20:53.650917921+00:00 stderr F I1213 00:20:53.650849 1 observer_polling.go:159] Starting file observer 2025-12-13T00:20:58.664891402+00:00 stderr F I1213 00:20:58.664834 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2025-12-13T00:20:58.666487154+00:00 stderr F I1213 00:20:58.666451 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1408744315/tls.crt::/tmp/serving-cert-1408744315/tls.key" 2025-12-13T00:20:59.175720816+00:00 stderr F I1213 00:20:59.175647 1 requestheader_controller.go:244] Loaded a new
request header values for RequestHeaderAuthRequestController 2025-12-13T00:20:59.177074353+00:00 stderr F I1213 00:20:59.176990 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-12-13T00:20:59.177074353+00:00 stderr F I1213 00:20:59.177008 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-12-13T00:20:59.177074353+00:00 stderr F I1213 00:20:59.177020 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-12-13T00:20:59.177074353+00:00 stderr F I1213 00:20:59.177025 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-12-13T00:20:59.179862368+00:00 stderr F I1213 00:20:59.179796 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:20:59.179862368+00:00 stderr F I1213 00:20:59.179818 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-12-13T00:20:59.179862368+00:00 stderr F W1213 00:20:59.179826 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:20:59.179862368+00:00 stderr F W1213 00:20:59.179835 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-12-13T00:20:59.185032728+00:00 stderr F I1213 00:20:59.184405 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:20:59.185032728+00:00 stderr F I1213 00:20:59.184434 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:20:59.185032728+00:00 stderr F I1213 00:20:59.184506 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-1408744315/tls.crt::/tmp/serving-cert-1408744315/tls.key" 2025-12-13T00:20:59.185032728+00:00 stderr F I1213 00:20:59.184544 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:20:59.185032728+00:00 stderr F I1213 00:20:59.184565 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:20:59.185032728+00:00 stderr F I1213 00:20:59.184683 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1408744315/tls.crt::/tmp/serving-cert-1408744315/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1765585253\" (2025-12-13 00:20:52 +0000 UTC to 2026-01-12 00:20:53 +0000 UTC (now=2025-12-13 00:20:59.184641996 +0000 UTC))" 2025-12-13T00:20:59.185032728+00:00 stderr F I1213 00:20:59.184725 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:20:59.185032728+00:00 stderr F I1213 00:20:59.184735 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:20:59.185164371+00:00 stderr F I1213 00:20:59.185125 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765585259\" 
[serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765585258\" (2025-12-12 23:20:58 +0000 UTC to 2026-12-12 23:20:58 +0000 UTC (now=2025-12-13 00:20:59.185095189 +0000 UTC))" 2025-12-13T00:20:59.185202752+00:00 stderr F I1213 00:20:59.185164 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:20:59.185222533+00:00 stderr F I1213 00:20:59.185210 1 secure_serving.go:213] Serving securely on [::]:17697 2025-12-13T00:20:59.185241033+00:00 stderr F I1213 00:20:59.185231 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-12-13T00:20:59.188123181+00:00 stderr F I1213 00:20:59.187505 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:59.188123181+00:00 stderr F I1213 00:20:59.187569 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:59.188263575+00:00 stderr F I1213 00:20:59.188215 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:59.189653792+00:00 stderr F I1213 00:20:59.189621 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2025-12-13T00:20:59.284641915+00:00 stderr F I1213 00:20:59.284574 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:20:59.284722837+00:00 stderr F I1213 00:20:59.284694 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:20:59.284916922+00:00 stderr F I1213 00:20:59.284890 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:20:59.285126088+00:00 stderr F I1213 00:20:59.285088 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:20:59.285059606 +0000 UTC))" 2025-12-13T00:20:59.285126088+00:00 stderr F I1213 00:20:59.285121 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:20:59.285105807 +0000 UTC))" 2025-12-13T00:20:59.285182800+00:00 stderr F I1213 00:20:59.285160 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:20:59.285126578 +0000 UTC))" 2025-12-13T00:20:59.285213160+00:00 stderr F I1213 00:20:59.285190 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:20:59.285173099 +0000 UTC))" 2025-12-13T00:20:59.285230591+00:00 stderr F I1213 00:20:59.285219 1 tlsconfig.go:178] "Loaded client CA" index=4 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:59.28520295 +0000 UTC))" 2025-12-13T00:20:59.285261942+00:00 stderr F I1213 00:20:59.285241 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:59.285225191 +0000 UTC))" 2025-12-13T00:20:59.285291162+00:00 stderr F I1213 00:20:59.285267 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:59.285251071 +0000 UTC))" 2025-12-13T00:20:59.285298973+00:00 stderr F I1213 00:20:59.285293 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:59.285276672 +0000 UTC))" 2025-12-13T00:20:59.285336164+00:00 stderr F I1213 00:20:59.285316 1 tlsconfig.go:178] "Loaded 
client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:20:59.285299283 +0000 UTC))" 2025-12-13T00:20:59.285345604+00:00 stderr F I1213 00:20:59.285339 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:20:59.285325313 +0000 UTC))" 2025-12-13T00:20:59.285383205+00:00 stderr F I1213 00:20:59.285363 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:20:59.285345504 +0000 UTC))" 2025-12-13T00:20:59.285688513+00:00 stderr F I1213 00:20:59.285658 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1408744315/tls.crt::/tmp/serving-cert-1408744315/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1765585253\" (2025-12-13 00:20:52 +0000 UTC to 2026-01-12 00:20:53 +0000 UTC (now=2025-12-13 00:20:59.285644512 +0000 UTC))" 2025-12-13T00:20:59.286068034+00:00 stderr F I1213 00:20:59.286032 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" 
certDetail="\"apiserver-loopback-client@1765585259\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765585258\" (2025-12-12 23:20:58 +0000 UTC to 2026-12-12 23:20:58 +0000 UTC (now=2025-12-13 00:20:59.286011083 +0000 UTC))" 2025-12-13T00:20:59.286225318+00:00 stderr F I1213 00:20:59.286197 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:20:59.286170277 +0000 UTC))" 2025-12-13T00:20:59.286225318+00:00 stderr F I1213 00:20:59.286218 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:20:59.286207788 +0000 UTC))" 2025-12-13T00:20:59.286250709+00:00 stderr F I1213 00:20:59.286235 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:20:59.286222738 +0000 UTC))" 2025-12-13T00:20:59.286258039+00:00 stderr F I1213 00:20:59.286252 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:20:59.286240679 +0000 UTC))" 2025-12-13T00:20:59.286288240+00:00 stderr F I1213 00:20:59.286269 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:59.286256649 +0000 UTC))" 2025-12-13T00:20:59.286296170+00:00 stderr F I1213 00:20:59.286288 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:59.28627659 +0000 UTC))" 2025-12-13T00:20:59.286325231+00:00 stderr F I1213 00:20:59.286307 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:59.2862924 +0000 UTC))" 2025-12-13T00:20:59.286333021+00:00 stderr F I1213 00:20:59.286326 1 tlsconfig.go:178] "Loaded 
client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:59.286314221 +0000 UTC))" 2025-12-13T00:20:59.286369582+00:00 stderr F I1213 00:20:59.286343 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:20:59.286331761 +0000 UTC))" 2025-12-13T00:20:59.286369582+00:00 stderr F I1213 00:20:59.286361 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:20:59.286351142 +0000 UTC))" 2025-12-13T00:20:59.286394373+00:00 stderr F I1213 00:20:59.286378 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:20:59.286366332 +0000 UTC))" 2025-12-13T00:20:59.286401773+00:00 stderr F I1213 00:20:59.286395 1 tlsconfig.go:178] 
"Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:20:59.286383032 +0000 UTC))" 2025-12-13T00:20:59.287639816+00:00 stderr F I1213 00:20:59.287600 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1408744315/tls.crt::/tmp/serving-cert-1408744315/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1765585253\" (2025-12-13 00:20:52 +0000 UTC to 2026-01-12 00:20:53 +0000 UTC (now=2025-12-13 00:20:59.287395239 +0000 UTC))" 2025-12-13T00:20:59.288284003+00:00 stderr F I1213 00:20:59.288250 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765585259\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765585258\" (2025-12-12 23:20:58 +0000 UTC to 2026-12-12 23:20:58 +0000 UTC (now=2025-12-13 00:20:59.288195881 +0000 UTC))" 2025-12-13T00:20:59.416750950+00:00 stderr F I1213 00:20:59.416665 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:59.489900554+00:00 stderr F I1213 00:20:59.489822 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2025-12-13T00:20:59.489900554+00:00 stderr F I1213 00:20:59.489868 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 
2025-12-13T00:20:59.489966225+00:00 stderr F I1213 00:20:59.489958 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2025-12-13T00:20:59.489977806+00:00 stderr F I1213 00:20:59.489964 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2025-12-13T00:20:59.489977806+00:00 stderr F I1213 00:20:59.489969 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2025-12-13T00:20:59.490052588+00:00 stderr F I1213 00:20:59.490004 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 2025-12-13T00:20:59.490082888+00:00 stderr F I1213 00:20:59.490040 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 2025-12-13T00:20:59.490082888+00:00 stderr F I1213 00:20:59.490074 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2025-12-13T00:20:59.490082888+00:00 stderr F I1213 00:20:59.490078 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2025-12-13T00:20:59.492291568+00:00 stderr F I1213 00:20:59.492266 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:59.519187864+00:00 stderr F I1213 00:20:59.519142 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:59.590194700+00:00 stderr F I1213 00:20:59.590143 1 base_controller.go:73] Caches are synced for check-endpoints 2025-12-13T00:20:59.590262122+00:00 stderr F I1213 00:20:59.590247 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz/0.log:
2025-12-13T00:20:52.970293634+00:00 stderr F I1213 00:20:52.970147 1 readyz.go:111] Listening on 0.0.0.0:6080
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller/0.log:
2025-12-13T00:20:52.635008166+00:00 stderr F W1213 00:20:52.634867 1 cmd.go:245] Using insecure, self-signed certificates
2025-12-13T00:20:52.635208812+00:00 stderr F I1213 00:20:52.635186 1 crypto.go:601] Generating new CA for cert-regeneration-controller-signer@1765585252 cert, and key in /tmp/serving-cert-933709920/serving-signer.crt, /tmp/serving-cert-933709920/serving-signer.key 2025-12-13T00:20:53.082249245+00:00 stderr F I1213 00:20:53.082182 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:20:53.083088948+00:00 stderr F I1213 00:20:53.083041 1 observer_polling.go:159] Starting file observer 2025-12-13T00:20:58.404072014+00:00 stderr F I1213 00:20:58.403939 1 builder.go:299] cert-regeneration-controller version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2025-12-13T00:20:58.408976236+00:00 stderr F I1213 00:20:58.408934 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-12-13T00:20:58.409327455+00:00 stderr F I1213 00:20:58.409300 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-apiserver/cert-regeneration-controller-lock... 
2025-12-13T00:20:58.419983732+00:00 stderr F I1213 00:20:58.417514 1 leaderelection.go:260] successfully acquired lease openshift-kube-apiserver/cert-regeneration-controller-lock 2025-12-13T00:20:58.419983732+00:00 stderr F I1213 00:20:58.418117 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-apiserver", Name:"cert-regeneration-controller-lock", UID:"eb250dab-ea81-4164-a3be-f2834c870dea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42971", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_93e1c5fc-7ffa-4593-8432-762656d91f92 became leader 2025-12-13T00:20:58.445411449+00:00 stderr F I1213 00:20:58.445354 1 cabundlesyncer.go:82] Starting CA bundle controller 2025-12-13T00:20:58.445411449+00:00 stderr F I1213 00:20:58.445384 1 shared_informer.go:311] Waiting for caches to sync for CABundleController 2025-12-13T00:20:58.445643585+00:00 stderr F I1213 00:20:58.445602 1 certrotationcontroller.go:886] Starting CertRotation 2025-12-13T00:20:58.445643585+00:00 stderr F I1213 00:20:58.445624 1 certrotationcontroller.go:851] Waiting for CertRotation 2025-12-13T00:20:58.448207415+00:00 stderr F I1213 00:20:58.448156 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:58.450030273+00:00 stderr F I1213 00:20:58.450010 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:58.450306461+00:00 stderr F I1213 00:20:58.450290 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:58.450760103+00:00 stderr F I1213 00:20:58.450742 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:58.453314202+00:00 stderr F I1213 00:20:58.453276 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:58.458280076+00:00 stderr F I1213 00:20:58.458243 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:58.465212604+00:00 stderr F I1213 00:20:58.465176 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:58.477137085+00:00 stderr F I1213 00:20:58.477079 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:58.479724315+00:00 stderr F I1213 00:20:58.479675 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:58.484661248+00:00 stderr F I1213 00:20:58.484596 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:20:58.546301602+00:00 stderr F I1213 00:20:58.546198 1 shared_informer.go:318] Caches are synced for CABundleController 2025-12-13T00:20:58.546344913+00:00 stderr F I1213 00:20:58.546261 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-12-13T00:20:58.546344913+00:00 stderr F I1213 00:20:58.546331 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-12-13T00:20:58.546344913+00:00 stderr F I1213 00:20:58.546340 1 internalloadbalancer.go:27] syncing internal loadbalancer hostnames: api-int.crc.testing 2025-12-13T00:20:58.546356683+00:00 stderr F I1213 00:20:58.546346 1 certrotationcontroller.go:869] Finished waiting for CertRotation 2025-12-13T00:20:58.546637581+00:00 stderr F I1213 00:20:58.546430 1 externalloadbalancer.go:27] syncing external 
loadbalancer hostnames: api.crc.testing 2025-12-13T00:20:58.546637581+00:00 stderr F I1213 00:20:58.546449 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-12-13T00:20:58.546637581+00:00 stderr F I1213 00:20:58.546503 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-12-13T00:20:58.546637581+00:00 stderr F I1213 00:20:58.546511 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:20:58.546637581+00:00 stderr F I1213 00:20:58.546518 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-12-13T00:20:58.546637581+00:00 stderr F I1213 00:20:58.546553 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-12-13T00:20:58.546637581+00:00 stderr F I1213 00:20:58.546557 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:20:58.546637581+00:00 stderr F I1213 00:20:58.546561 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-12-13T00:20:58.546637581+00:00 stderr F I1213 00:20:58.546599 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-12-13T00:20:58.546637581+00:00 stderr F I1213 00:20:58.546613 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:20:58.546637581+00:00 stderr F I1213 00:20:58.546620 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-12-13T00:20:58.546653231+00:00 stderr F I1213 00:20:58.546643 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-12-13T00:20:58.546653231+00:00 stderr F I1213 00:20:58.546649 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:20:58.546660741+00:00 stderr F I1213 00:20:58.546653 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-12-13T00:20:58.546694022+00:00 stderr F I1213 00:20:58.546667 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-12-13T00:20:58.546694022+00:00 stderr F I1213 00:20:58.546677 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:20:58.546694022+00:00 stderr F I1213 00:20:58.546682 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-12-13T00:20:58.547565245+00:00 stderr F I1213 00:20:58.547531 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-12-13T00:20:58.547565245+00:00 stderr F I1213 00:20:58.547550 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:20:58.547565245+00:00 stderr F I1213 00:20:58.547556 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-12-13T00:20:58.547593526+00:00 stderr F I1213 00:20:58.547583 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-12-13T00:20:58.547593526+00:00 stderr F I1213 00:20:58.547590 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:20:58.547601056+00:00 stderr F I1213 00:20:58.547595 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-12-13T00:20:58.547633407+00:00 stderr F I1213 00:20:58.547611 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-12-13T00:20:58.547633407+00:00 stderr F I1213 00:20:58.547622 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:20:58.547633407+00:00 stderr F I1213 00:20:58.547626 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-12-13T00:20:58.548328576+00:00 stderr F I1213 00:20:58.548304 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-12-13T00:20:58.548328576+00:00 stderr F I1213 00:20:58.548319 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:20:58.548328576+00:00 stderr F I1213 00:20:58.548324 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-12-13T00:20:58.548368557+00:00 stderr F I1213 00:20:58.548343 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-12-13T00:20:58.548368557+00:00 stderr F I1213 00:20:58.548351 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:20:58.548368557+00:00 stderr F I1213 00:20:58.548354 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-12-13T00:20:58.549659432+00:00 stderr F I1213 00:20:58.549637 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-12-13T00:20:58.549659432+00:00 stderr F I1213 00:20:58.549651 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:20:58.549659432+00:00 stderr F I1213 00:20:58.549656 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-12-13T00:20:58.549864258+00:00 stderr F I1213 00:20:58.549830 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:20:58.549864258+00:00 stderr F I1213 00:20:58.549848 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:20:58.549864258+00:00 stderr F I1213 00:20:58.549854 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer/0.log:
2025-12-13T00:20:52.252906856+00:00 stderr F I1213 00:20:52.252695 1 observer_polling.go:159] Starting file observer
2025-12-13T00:20:52.253061900+00:00 stderr F I1213 00:20:52.252788 1 base_controller.go:67] Waiting for caches to sync for CertSyncController
2025-12-13T00:20:58.453845917+00:00 stderr F I1213 00:20:58.453783 1 base_controller.go:73] Caches are synced for CertSyncController
2025-12-13T00:20:58.453845917+00:00 stderr F I1213 00:20:58.453808 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...
2025-12-13T00:20:58.455503611+00:00 stderr F I1213 00:20:58.453870 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]
2025-12-13T00:20:58.455503611+00:00 stderr F I1213 00:20:58.454205 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/0.log:
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ -f /env/_master ]]
2025-08-13T19:50:47.008111355+00:00 stderr F + ovn_v4_join_subnet_opt=
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ '' != '' ]]
2025-08-13T19:50:47.008111355+00:00 stderr F + ovn_v6_join_subnet_opt=
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ '' != '' ]]
2025-08-13T19:50:47.008111355+00:00 stderr F + ovn_v4_transit_switch_subnet_opt=
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ '' != '' ]]
2025-08-13T19:50:47.008111355+00:00 stderr F + ovn_v6_transit_switch_subnet_opt=
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ '' != '' ]]
2025-08-13T19:50:47.008111355+00:00 stderr F + dns_name_resolver_enabled_flag=
2025-08-13T19:50:47.008111355+00:00 stderr F + [[ false == \t\r\u\e ]]
2025-08-13T19:50:47.008704362+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N'
2025-08-13T19:50:47.017865444+00:00 stdout F I0813 19:50:47.015608289 - ovnkube-control-plane - start ovnkube --init-cluster-manager crc
2025-08-13T19:50:47.017888264+00:00 stderr F + echo 'I0813 19:50:47.015608289 - ovnkube-control-plane - start ovnkube --init-cluster-manager crc'
2025-08-13T19:50:47.017888264+00:00 stderr F + exec /usr/bin/ovnkube --enable-interconnect --init-cluster-manager crc --config-file=/run/ovnkube-config/ovnkube.conf --loglevel 4 --metrics-bind-address 127.0.0.1:29108 --metrics-enable-pprof --metrics-enable-config-duration
2025-08-13T19:50:50.137364720+00:00 stderr F I0813 19:50:50.135597 1 config.go:2178] Parsed config file /run/ovnkube-config/ovnkube.conf
2025-08-13T19:50:50.138350768+00:00 stderr F I0813 19:50:50.137279 1 config.go:2179] Parsed config: {Default:{MTU:1400 RoutableMTU:0 ConntrackZone:64000 HostMasqConntrackZone:0 OVNMasqConntrackZone:0 HostNodePortConntrackZone:0 ReassemblyConntrackZone:0 EncapType:geneve EncapIP: EncapPort:6081 InactivityProbe:100000 OpenFlowProbe:180 OfctrlWaitBeforeClear:0 MonitorAll:true LFlowCacheEnable:true LFlowCacheLimit:0 LFlowCacheLimitKb:1048576 RawClusterSubnets:10.217.0.0/22/23 ClusterSubnets:[] EnableUDPAggregation:true Zone:global} Logging:{File: CNIFile: LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:4 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} Monitoring:{RawNetFlowTargets: RawSFlowTargets: RawIPFIXTargets: NetFlowTargets:[] SFlowTargets:[] IPFIXTargets:[]} IPFIX:{Sampling:400 CacheActiveTimeout:60 CacheMaxFlows:0} CNI:{ConfDir:/etc/cni/net.d Plugin:ovn-k8s-cni-overlay} OVNKubernetesFeature:{EnableAdminNetworkPolicy:true EnableEgressIP:true EgressIPReachabiltyTotalTimeout:1 EnableEgressFirewall:true EnableEgressQoS:true EnableEgressService:true EgressIPNodeHealthCheckPort:9107 EnableMultiNetwork:true EnableMultiNetworkPolicy:false EnableStatelessNetPol:false EnableInterconnect:false EnableMultiExternalGateway:true EnablePersistentIPs:false EnableDNSNameResolver:false EnableServiceTemplateSupport:false} Kubernetes:{BootstrapKubeconfig: CertDir: CertDuration:10m0s Kubeconfig: CACert: CAData:[] APIServer:https://api-int.crc.testing:6443 Token: TokenFile: CompatServiceCIDR: RawServiceCIDRs:10.217.4.0/23 ServiceCIDRs:[] OVNConfigNamespace:openshift-ovn-kubernetes OVNEmptyLbEvents:false PodIP: RawNoHostSubnetNodes: NoHostSubnetNodes: HostNetworkNamespace:openshift-host-network PlatformType:None HealthzBindAddress:0.0.0.0:10256 CompatMetricsBindAddress: CompatOVNMetricsBindAddress: CompatMetricsEnablePprof:false DNSServiceNamespace:openshift-dns DNSServiceName:dns-default} Metrics:{BindAddress: OVNMetricsBindAddress: 
ExportOVSMetrics:false EnablePprof:false NodeServerPrivKey: NodeServerCert: EnableConfigDuration:false EnableScaleMetrics:false} OvnNorth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} OvnSouth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} Gateway:{Mode:shared Interface: EgressGWInterface: NextHop: VLANID:0 NodeportEnable:true DisableSNATMultipleGWs:false V4JoinSubnet:100.64.0.0/16 V6JoinSubnet:fd98::/64 V4MasqueradeSubnet:169.254.169.0/29 V6MasqueradeSubnet:fd69::/125 MasqueradeIPs:{V4OVNMasqueradeIP:169.254.169.1 V6OVNMasqueradeIP:fd69::1 V4HostMasqueradeIP:169.254.169.2 V6HostMasqueradeIP:fd69::2 V4HostETPLocalMasqueradeIP:169.254.169.3 V6HostETPLocalMasqueradeIP:fd69::3 V4DummyNextHopMasqueradeIP:169.254.169.4 V6DummyNextHopMasqueradeIP:fd69::4 V4OVNServiceHairpinMasqueradeIP:169.254.169.5 V6OVNServiceHairpinMasqueradeIP:fd69::5} DisablePacketMTUCheck:false RouterSubnet: SingleNode:false DisableForwarding:false AllowNoUplink:false} MasterHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} ClusterMgrHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} HybridOverlay:{Enabled:false RawClusterSubnets: ClusterSubnets:[] VXLANPort:4789} OvnKubeNode:{Mode:full DPResourceDeviceIdsMap:map[] MgmtPortNetdev: MgmtPortDPResourceName:} ClusterManager:{V4TransitSwitchSubnet:100.88.0.0/16 V6TransitSwitchSubnet:fd97::/64}} 2025-08-13T19:50:50.196606393+00:00 stderr F I0813 19:50:50.194947 1 leaderelection.go:250] attempting to acquire leader lease openshift-ovn-kubernetes/ovn-kubernetes-master... 
2025-08-13T19:50:50.221444453+00:00 stderr F I0813 19:50:50.219718 1 metrics.go:532] Starting metrics server at address "127.0.0.1:29108" 2025-08-13T19:50:51.311304292+00:00 stderr F I0813 19:50:51.308268 1 leaderelection.go:260] successfully acquired lease openshift-ovn-kubernetes/ovn-kubernetes-master 2025-08-13T19:50:51.313362211+00:00 stderr F I0813 19:50:51.312386 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-ovn-kubernetes", Name:"ovn-kubernetes-master", UID:"252a0995-33c8-4721-8f3e-d993f8bb73c8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"25604", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ovnkube-control-plane-77c846df58-6l97b became leader 2025-08-13T19:50:51.323193252+00:00 stderr F I0813 19:50:51.313472 1 ovnkube.go:386] Won leader election; in active mode 2025-08-13T19:50:51.783390905+00:00 stderr F I0813 19:50:51.783329 1 secondary_network_cluster_manager.go:38] Creating secondary network cluster manager 2025-08-13T19:50:51.785274099+00:00 stderr F I0813 19:50:51.785245 1 egressservice_cluster.go:97] Setting up event handlers for Egress Services 2025-08-13T19:50:51.786952736+00:00 stderr F I0813 19:50:51.786879 1 clustermanager.go:123] Starting the cluster manager 2025-08-13T19:50:51.787671477+00:00 stderr F I0813 19:50:51.787651 1 factory.go:405] Starting watch factory 2025-08-13T19:50:51.792769953+00:00 stderr F I0813 19:50:51.792729 1 reflector.go:289] Starting reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.793460833+00:00 stderr F I0813 19:50:51.792660 1 reflector.go:289] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.793460833+00:00 stderr F I0813 19:50:51.793314 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.821478453+00:00 stderr F I0813 19:50:51.796024 1 reflector.go:325] Listing and watching *v1.EndpointSlice from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.822853883+00:00 stderr F I0813 19:50:51.796034 1 reflector.go:289] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.833061844+00:00 stderr F I0813 19:50:51.833013 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.833449055+00:00 stderr F I0813 19:50:51.793187 1 reflector.go:289] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.833536038+00:00 stderr F I0813 19:50:51.833482 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.866091138+00:00 stderr F I0813 19:50:51.866025 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.879486051+00:00 stderr F I0813 19:50:51.879300 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:51.886033998+00:00 stderr F I0813 19:50:51.884115 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:52.016009213+00:00 stderr F I0813 19:50:52.013643 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:50:52.138039531+00:00 stderr F I0813 19:50:52.133891 1 reflector.go:289] Starting reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.138039531+00:00 stderr F I0813 19:50:52.134003 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.225412408+00:00 stderr F I0813 19:50:52.223535 1 reflector.go:351] Caches populated for *v1.EgressIP from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.252296116+00:00 stderr F I0813 19:50:52.246244 1 reflector.go:289] Starting reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.252296116+00:00 stderr F I0813 19:50:52.246268 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.256614110+00:00 stderr F I0813 19:50:52.255960 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.352689005+00:00 stderr F I0813 19:50:52.352212 1 reflector.go:289] Starting reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.352689005+00:00 stderr F I0813 19:50:52.352334 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.360154769+00:00 stderr F I0813 19:50:52.359628 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.455446802+00:00 stderr F I0813 19:50:52.454619 1 reflector.go:289] Starting reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.455446802+00:00 stderr F I0813 19:50:52.455116 1 reflector.go:325] Listing and watching 
*v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.473755876+00:00 stderr F I0813 19:50:52.473481 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.559030883+00:00 stderr F I0813 19:50:52.558718 1 reflector.go:289] Starting reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.559030883+00:00 stderr F I0813 19:50:52.558875 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.563253964+00:00 stderr F I0813 19:50:52.563219 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T19:50:52.705244052+00:00 stderr F I0813 19:50:52.702912 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T19:50:52.705244052+00:00 stderr F I0813 19:50:52.703033 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T19:50:52.705244052+00:00 stderr F I0813 19:50:52.703049 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T19:50:52.706934640+00:00 stderr F I0813 19:50:52.706318 1 zone_cluster_controller.go:244] Node crc has the id 2 set 2025-08-13T19:50:52.714757014+00:00 stderr F I0813 19:50:52.714187 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 
k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-08-13T19:50:52.809440979+00:00 stderr F E0813 19:50:52.803290 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:50:52.809440979+00:00 stderr F E0813 19:50:52.803425 1 obj_retry.go:533] Failed to create *v1.Node crc, error: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z 2025-08-13T19:50:52.809440979+00:00 stderr F I0813 19:50:52.803522 1 secondary_network_cluster_manager.go:65] Starting secondary network cluster manager 2025-08-13T19:50:52.856380130+00:00 stderr F I0813 19:50:52.853690 1 network_attach_def_controller.go:134] Starting cluster-manager NAD controller 2025-08-13T19:50:52.856380130+00:00 stderr F I0813 19:50:52.855582 1 shared_informer.go:311] Waiting for caches to sync for cluster-manager 2025-08-13T19:50:52.857419180+00:00 stderr F I0813 19:50:52.857099 1 reflector.go:289] Starting reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T19:50:52.857419180+00:00 stderr F I0813 19:50:52.857151 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 
2025-08-13T19:50:52.870162564+00:00 stderr F I0813 19:50:52.869381 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117
2025-08-13T19:50:52.958608312+00:00 stderr F I0813 19:50:52.957364 1 shared_informer.go:318] Caches are synced for cluster-manager
2025-08-13T19:50:52.958608312+00:00 stderr F I0813 19:50:52.957688 1 network_attach_def_controller.go:182] Starting repairing loop for cluster-manager
2025-08-13T19:50:52.959329373+00:00 stderr F I0813 19:50:52.959096 1 network_attach_def_controller.go:184] Finished repairing loop for cluster-manager: 1.407231ms err:
2025-08-13T19:50:52.959329373+00:00 stderr F I0813 19:50:52.959265 1 network_attach_def_controller.go:153] Starting workers for cluster-manager NAD controller
2025-08-13T19:50:52.969034880+00:00 stderr F W0813 19:50:52.966294 1 egressip_healthcheck.go:165] Health checking using insecure connection
2025-08-13T19:50:53.964938774+00:00 stderr F W0813 19:50:53.963445 1 egressip_healthcheck.go:182] Could not connect to crc (10.217.0.2:9107): context deadline exceeded
2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975184 1 egressip_controller.go:459] EgressIP node reachability enabled and using gRPC port 9107
2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975308 1 egressservice_cluster.go:170] Starting Egress Services Controller
2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975431 1 shared_informer.go:311] Waiting for caches to sync for egressservices
2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975463 1 shared_informer.go:318] Caches are synced for egressservices
2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975470 1 shared_informer.go:311] Waiting for caches to sync for egressservices_services
2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975481 1 shared_informer.go:318] Caches are synced for egressservices_services
2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975489 1 shared_informer.go:311] Waiting for caches to sync for egressservices_endpointslices
2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975494 1 shared_informer.go:318] Caches are synced for egressservices_endpointslices
2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975501 1 shared_informer.go:311] Waiting for caches to sync for egressservices_nodes
2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975506 1 shared_informer.go:318] Caches are synced for egressservices_nodes
2025-08-13T19:50:53.976914906+00:00 stderr F I0813 19:50:53.975510 1 egressservice_cluster.go:187] Repairing Egress Services
2025-08-13T19:50:53.979904332+00:00 stderr F I0813 19:50:53.978374 1 kube.go:267] Setting labels map[] on node crc
2025-08-13T19:50:54.041876633+00:00 stderr F E0813 19:50:54.041484 1 egressservice_cluster.go:190] Failed to repair Egress Services entries: failed to remove stale labels map[] from node crc, err: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z
2025-08-13T19:50:54.041876633+00:00 stderr F I0813 19:50:54.041708 1 status_manager.go:210] Starting StatusManager with typed managers: map[adminpolicybasedexternalroutes:0xc000543280 egressfirewalls:0xc000543640 egressqoses:0xc000543a00]
2025-08-13T19:50:54.058164418+00:00 stderr F I0813 19:50:54.056436 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc
2025-08-13T19:50:54.060862575+00:00 stderr F I0813 19:50:54.060104 1 controller.go:69] Adding controller zone_tracker event handlers
2025-08-13T19:50:54.063860881+00:00 stderr F I0813 19:50:54.063359 1 shared_informer.go:311] Waiting for caches to sync for zone_tracker
2025-08-13T19:50:54.063860881+00:00 stderr F I0813 19:50:54.063452 1 shared_informer.go:318] Caches are synced for zone_tracker
2025-08-13T19:50:54.063860881+00:00 stderr F I0813 19:50:54.063583 1 status_manager.go:234] StatusManager got zones update: map[crc:{}]
2025-08-13T19:50:54.071896051+00:00 stderr F I0813 19:50:54.071052 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 2.665757ms
2025-08-13T19:50:54.071896051+00:00 stderr F I0813 19:50:54.071883 1 controller.go:199] Controller egressfirewalls_statusmanager: full reconcile
2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.072169 1 controller.go:199] Controller egressqoses_statusmanager: full reconcile
2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076453 1 controller.go:199] Controller adminpolicybasedexternalroutes_statusmanager: full reconcile
2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076479 1 status_manager.go:234] StatusManager got zones update: map[crc:{}]
2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076560 1 controller.go:199] Controller adminpolicybasedexternalroutes_statusmanager: full reconcile
2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076567 1 controller.go:199] Controller egressfirewalls_statusmanager: full reconcile
2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076573 1 controller.go:199] Controller egressqoses_statusmanager: full reconcile
2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076580 1 controller.go:93] Starting controller zone_tracker with 1 workers
2025-08-13T19:50:54.077179112+00:00 stderr F I0813 19:50:54.076820 1 controller.go:69] Adding controller egressqoses_statusmanager event handlers
2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.089351 1 shared_informer.go:311] Waiting for caches to sync for egressqoses_statusmanager
2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.090597 1 shared_informer.go:318] Caches are synced for egressqoses_statusmanager
2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.090843 1 controller.go:93] Starting controller egressqoses_statusmanager with 1 workers
2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.090973 1 controller.go:69] Adding controller adminpolicybasedexternalroutes_statusmanager event handlers
2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.091220 1 shared_informer.go:311] Waiting for caches to sync for adminpolicybasedexternalroutes_statusmanager
2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.091228 1 shared_informer.go:318] Caches are synced for adminpolicybasedexternalroutes_statusmanager
2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.091236 1 controller.go:93] Starting controller adminpolicybasedexternalroutes_statusmanager with 1 workers
2025-08-13T19:50:54.094110796+00:00 stderr F I0813 19:50:54.091582 1 controller.go:69] Adding controller egressfirewalls_statusmanager event handlers
2025-08-13T19:50:54.110862645+00:00 stderr F I0813 19:50:54.110354 1 shared_informer.go:311] Waiting for caches to sync for egressfirewalls_statusmanager
2025-08-13T19:50:54.110862645+00:00 stderr F I0813 19:50:54.110587 1 shared_informer.go:318] Caches are synced for egressfirewalls_statusmanager
2025-08-13T19:50:54.110862645+00:00 stderr F I0813 19:50:54.110615 1 controller.go:93] Starting controller egressfirewalls_statusmanager with 1 workers
2025-08-13T19:51:22.807417979+00:00 stderr F I0813 19:51:22.806770 1 obj_retry.go:296] Retry object setup: *v1.Node crc
2025-08-13T19:51:22.807519982+00:00 stderr F I0813 19:51:22.807427 1 obj_retry.go:358] Adding new object: *v1.Node crc
2025-08-13T19:51:22.808847550+00:00 stderr F I0813 19:51:22.807642 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc
2025-08-13T19:51:22.823338894+00:00 stderr F E0813 19:51:22.823289 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:22Z is after 2024-12-26T00:46:02Z
2025-08-13T19:51:22.823429986+00:00 stderr F I0813 19:51:22.823396 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:22Z is after 2024-12-26T00:46:02Z
2025-08-13T19:51:52.807413517+00:00 stderr F I0813 19:51:52.806947 1 obj_retry.go:296] Retry object setup: *v1.Node crc
2025-08-13T19:51:52.807413517+00:00 stderr F I0813 19:51:52.807028 1 obj_retry.go:358] Adding new object: *v1.Node crc
2025-08-13T19:51:52.807413517+00:00 stderr F I0813 19:51:52.807070 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc
2025-08-13T19:51:52.820544981+00:00 stderr F E0813 19:51:52.820407 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z
2025-08-13T19:51:52.820677944+00:00 stderr F I0813 19:51:52.820623 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z
2025-08-13T19:52:22.806992436+00:00 stderr F I0813 19:52:22.806741 1 obj_retry.go:296] Retry object setup: *v1.Node crc
2025-08-13T19:52:22.806992436+00:00 stderr F I0813 19:52:22.806917 1 obj_retry.go:358] Adding new object: *v1.Node crc
2025-08-13T19:52:22.807074858+00:00 stderr F I0813 19:52:22.806963 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc
2025-08-13T19:52:22.830313290+00:00 stderr F E0813 19:52:22.829879 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:22Z is after 2024-12-26T00:46:02Z
2025-08-13T19:52:22.830313290+00:00 stderr F I0813 19:52:22.829935 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:22Z is after 2024-12-26T00:46:02Z
2025-08-13T19:52:52.807391803+00:00 stderr F I0813 19:52:52.807110 1 obj_retry.go:296] Retry object setup: *v1.Node crc
2025-08-13T19:52:52.807391803+00:00 stderr F I0813 19:52:52.807254 1 obj_retry.go:358] Adding new object: *v1.Node crc
2025-08-13T19:52:52.807391803+00:00 stderr F I0813 19:52:52.807342 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc
2025-08-13T19:52:52.825204490+00:00 stderr F E0813 19:52:52.825150 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:52Z is after 2024-12-26T00:46:02Z
2025-08-13T19:52:52.825331403+00:00 stderr F I0813 19:52:52.825297 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:52Z is after 2024-12-26T00:46:02Z
2025-08-13T19:53:22.807657141+00:00 stderr F I0813 19:53:22.807266 1 obj_retry.go:296] Retry object setup: *v1.Node crc
2025-08-13T19:53:22.807657141+00:00 stderr F I0813 19:53:22.807365 1 obj_retry.go:358] Adding new object: *v1.Node crc
2025-08-13T19:53:22.811209102+00:00 stderr F I0813 19:53:22.807710 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc
2025-08-13T19:53:22.827475965+00:00 stderr F E0813 19:53:22.827347 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z
2025-08-13T19:53:22.827475965+00:00 stderr F I0813 19:53:22.827388 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z
2025-08-13T19:54:22.807698924+00:00 stderr F I0813 19:54:22.807408 1 obj_retry.go:296] Retry object setup: *v1.Node crc
2025-08-13T19:54:22.807698924+00:00 stderr F I0813 19:54:22.807554 1 obj_retry.go:358] Adding new object: *v1.Node crc
2025-08-13T19:54:22.808136047+00:00 stderr F I0813 19:54:22.807918 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc
2025-08-13T19:54:22.820529181+00:00 stderr F E0813 19:54:22.820369 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:22Z is after 2024-12-26T00:46:02Z
2025-08-13T19:54:22.820529181+00:00 stderr F I0813 19:54:22.820465 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:22Z is after 2024-12-26T00:46:02Z
2025-08-13T19:55:52.807349912+00:00 stderr F I0813 19:55:52.807274 1 obj_retry.go:296] Retry object setup: *v1.Node crc
2025-08-13T19:55:52.807456975+00:00 stderr F I0813 19:55:52.807442 1 obj_retry.go:358] Adding new object: *v1.Node crc
2025-08-13T19:55:52.807543087+00:00 stderr F I0813 19:55:52.807501 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc
2025-08-13T19:55:52.829499734+00:00 stderr F E0813 19:55:52.829448 1 kube.go:137] Error in setting annotation on node crc: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:52Z is after 2024-12-26T00:46:02Z
2025-08-13T19:55:52.829591107+00:00 stderr F I0813 19:55:52.829564 1 obj_retry.go:370] Retry add failed for *v1.Node crc, will try again later: node add failed for crc, will try again later: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:52Z is after 2024-12-26T00:46:02Z
2025-08-13T19:56:40.869402446+00:00 stderr F I0813 19:56:40.869230 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 7 items received
2025-08-13T19:56:56.883563870+00:00 stderr F I0813 19:56:56.883367 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 8 items received
2025-08-13T19:57:18.475975004+00:00 stderr F I0813 19:57:18.475715 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 7 items received
2025-08-13T19:57:22.807232882+00:00 stderr F I0813 19:57:22.807088 1 obj_retry.go:296] Retry object setup: *v1.Node crc
2025-08-13T19:57:22.807232882+00:00 stderr F I0813 19:57:22.807163 1 obj_retry.go:358] Adding new object: *v1.Node crc
2025-08-13T19:57:22.807461968+00:00 stderr F I0813 19:57:22.807315 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc
2025-08-13T19:57:22.821010285+00:00 stderr F I0813 19:57:22.820923 1 obj_retry.go:379] Retry successful for *v1.Node crc after 8 failed attempt(s)
2025-08-13T19:57:34.937879056+00:00 stderr F I0813 19:57:34.936725 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T19:57:34.937879056+00:00 stderr F I0813 19:57:34.936855 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T19:57:34.937879056+00:00 stderr F I0813 19:57:34.936874 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T19:57:35.246653813+00:00 stderr F I0813 19:57:35.245549 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T19:57:35.246653813+00:00 stderr F I0813 19:57:35.245606 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T19:57:35.246653813+00:00 stderr F I0813 19:57:35.245619 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T19:57:48.015393389+00:00 stderr F I0813 19:57:48.014453 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc
2025-08-13T19:57:48.016040407+00:00 stderr F I0813 19:57:48.015562 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 1.166293ms
2025-08-13T19:58:08.260292188+00:00 stderr F I0813 19:58:08.260083 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 9 items received
2025-08-13T19:58:11.367112049+00:00 stderr F I0813 19:58:11.366955 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 8 items received
2025-08-13T19:58:28.569171919+00:00 stderr F I0813 19:58:28.569104 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 8 items received
2025-08-13T19:58:30.882072410+00:00 stderr F I0813 19:58:30.882015 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 8 items received
2025-08-13T19:59:31.931567968+00:00 stderr F I0813 19:59:31.931278 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T19:59:31.931723763+00:00 stderr F I0813 19:59:31.931703 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T19:59:31.931768044+00:00 stderr F I0813 19:59:31.931753 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T19:59:36.344737507+00:00 stderr F I0813 19:59:36.344667 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T19:59:36.344924792+00:00 stderr F I0813 19:59:36.344903 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T19:59:36.344973634+00:00 stderr F I0813 19:59:36.344959 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T19:59:39.558692531+00:00 stderr F I0813 19:59:39.558625 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T19:59:39.558876106+00:00 stderr F I0813 19:59:39.558765 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T19:59:39.558924078+00:00 stderr F I0813 19:59:39.558909 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T19:59:41.449920391+00:00 stderr F I0813 19:59:41.446758 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T19:59:41.449920391+00:00 stderr F I0813 19:59:41.447112 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T19:59:41.449920391+00:00 stderr F I0813 19:59:41.447132 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T19:59:46.552350166+00:00 stderr F I0813 19:59:46.552263 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T19:59:46.552350166+00:00 stderr F I0813 19:59:46.552292 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T19:59:46.552350166+00:00 stderr F I0813 19:59:46.552301 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:00:06.018912174+00:00 stderr F I0813 20:00:06.018621 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 267 items received
2025-08-13T20:00:10.536567409+00:00 stderr F I0813 20:00:10.536496 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:00:10.546479252+00:00 stderr F I0813 20:00:10.546405 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:00:10.553482522+00:00 stderr F I0813 20:00:10.553237 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:00:10.904909542+00:00 stderr F I0813 20:00:10.901702 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 117 items received
2025-08-13T20:00:17.374942857+00:00 stderr F I0813 20:00:17.372350 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:00:17.374942857+00:00 stderr F I0813 20:00:17.372401 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:00:17.374942857+00:00 stderr F I0813 20:00:17.372442 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:00:20.964953522+00:00 stderr F I0813 20:00:20.964555 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:00:20.964953522+00:00 stderr F I0813 20:00:20.964670 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:00:20.964953522+00:00 stderr F I0813 20:00:20.964684 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:00:27.217829259+00:00 stderr F I0813 20:00:27.207613 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:00:27.217829259+00:00 stderr F I0813 20:00:27.207664 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:00:27.217829259+00:00 stderr F I0813 20:00:27.207677 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:00:29.178003061+00:00 stderr F I0813 20:00:29.175693 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:00:29.178003061+00:00 stderr F I0813 20:00:29.175986 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:00:29.178003061+00:00 stderr F I0813 20:00:29.176002 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:00:37.239886826+00:00 stderr F I0813 20:00:37.239669 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 10 items received
2025-08-13T20:01:06.730935180+00:00 stderr F I0813 20:01:06.721539 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:01:06.730935180+00:00 stderr F I0813 20:01:06.721626 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:01:06.730935180+00:00 stderr F I0813 20:01:06.721637 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:01:20.183921586+00:00 stderr F I0813 20:01:20.183454 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:01:20.183921586+00:00 stderr F I0813 20:01:20.183508 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:01:20.183921586+00:00 stderr F I0813 20:01:20.183522 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:01:31.483369536+00:00 stderr F I0813 20:01:31.482592 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:01:31.483369536+00:00 stderr F I0813 20:01:31.482642 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:01:31.483369536+00:00 stderr F I0813 20:01:31.482653 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:02:27.230902514+00:00 stderr F E0813 20:02:27.230715 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused
2025-08-13T20:02:29.555021265+00:00 stderr F I0813 20:02:29.543445 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 5 items received
2025-08-13T20:02:29.567434599+00:00 stderr F I0813 20:02:29.559371 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=8m24s&timeoutSeconds=504&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:29.618309570+00:00 stderr F I0813 20:02:29.617674 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 29 items received
2025-08-13T20:02:29.636020835+00:00 stderr F I0813 20:02:29.634907 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 41 items received
2025-08-13T20:02:29.636020835+00:00 stderr F I0813 20:02:29.635379 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=7m17s&timeoutSeconds=437&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:29.636020835+00:00 stderr F I0813 20:02:29.635438 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=6m7s&timeoutSeconds=367&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:29.650166689+00:00 stderr F I0813 20:02:29.648275 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 190 items received
2025-08-13T20:02:29.650864819+00:00 stderr F I0813 20:02:29.650415 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=9m24s&timeoutSeconds=564&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:29.728826623+00:00 stderr F I0813 20:02:29.728640 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 3 items received
2025-08-13T20:02:29.730307945+00:00 stderr F I0813 20:02:29.729917 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=5m26s&timeoutSeconds=326&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:29.766212049+00:00 stderr F I0813 20:02:29.766065 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 1 items received
2025-08-13T20:02:29.769020879+00:00 stderr F I0813 20:02:29.768893 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=9m32s&timeoutSeconds=572&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:29.786236280+00:00 stderr F I0813 20:02:29.785403 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 5 items received
2025-08-13T20:02:29.790440980+00:00 stderr F I0813 20:02:29.790200 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 4 items received
2025-08-13T20:02:29.791201392+00:00 stderr F I0813 20:02:29.790954 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=7m26s&timeoutSeconds=446&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:29.791201392+00:00 stderr F I0813 20:02:29.791122 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m51s&timeoutSeconds=411&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:29.821539418+00:00 stderr F I0813 20:02:29.819711 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 4 items received
2025-08-13T20:02:29.826120728+00:00 stderr F I0813 20:02:29.823557 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m17s&timeoutSeconds=377&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:29.850282528+00:00 stderr F I0813 20:02:29.848563 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 4 items received
2025-08-13T20:02:29.863768742+00:00 stderr F I0813 20:02:29.862059 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:30.426758203+00:00 stderr F I0813 20:02:30.426630 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=5m0s&timeoutSeconds=300&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:30.613562752+00:00 stderr F I0813 20:02:30.613459 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=6m49s&timeoutSeconds=409&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:30.622867737+00:00 stderr F I0813 20:02:30.622702 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=5m40s&timeoutSeconds=340&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:30.734187682+00:00 stderr F I0813 20:02:30.733902 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m49s&timeoutSeconds=349&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:30.827708730+00:00 stderr F I0813 20:02:30.827599 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m30s&timeoutSeconds=510&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:30.969348540+00:00 stderr F I0813 20:02:30.969254 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:30.986989914+00:00 stderr F I0813 20:02:30.986882 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=9m21s&timeoutSeconds=561&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:31.148498841+00:00 stderr F I0813 20:02:31.148283 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=7m59s&timeoutSeconds=479&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:31.185367203+00:00 stderr F I0813 20:02:31.185265 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=5m28s&timeoutSeconds=328&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:31.205262971+00:00 stderr F I0813 20:02:31.205151 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=5m37s&timeoutSeconds=337&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:02:32.811521383+00:00 stderr F I0813 20:02:32.811380 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch
of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m53s&timeoutSeconds=353&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:32.967218024+00:00 stderr F I0813 20:02:32.966975 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.267354416+00:00 stderr F I0813 20:02:33.267244 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=5m25s&timeoutSeconds=325&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.283575399+00:00 stderr F I0813 20:02:33.283445 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.311052643+00:00 stderr F I0813 20:02:33.310927 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get 
"https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.497556503+00:00 stderr F I0813 20:02:33.497437 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=9m37s&timeoutSeconds=577&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.812968671+00:00 stderr F I0813 20:02:33.812640 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=9m17s&timeoutSeconds=557&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:33.896226736+00:00 stderr F I0813 20:02:33.896089 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=6m43s&timeoutSeconds=403&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:34.043488597+00:00 stderr F I0813 20:02:34.043275 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 192.168.130.11:6443: connect: 
connection refused - backing off 2025-08-13T20:02:34.127397161+00:00 stderr F I0813 20:02:34.124967 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=5m29s&timeoutSeconds=329&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:36.687493352+00:00 stderr F I0813 20:02:36.687328 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=7m6s&timeoutSeconds=426&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:37.379416071+00:00 stderr F I0813 20:02:37.379304 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=7m34s&timeoutSeconds=454&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:37.775255493+00:00 stderr F I0813 20:02:37.775089 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=7m16s&timeoutSeconds=436&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:38.458428071+00:00 stderr F I0813 20:02:38.458252 1 reflector.go:425] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:38.775456795+00:00 stderr F I0813 20:02:38.775309 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m57s&timeoutSeconds=357&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.049732509+00:00 stderr F I0813 20:02:39.049528 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=7m58s&timeoutSeconds=478&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.266981397+00:00 stderr F I0813 20:02:39.266750 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=9m17s&timeoutSeconds=557&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.303708335+00:00 stderr F I0813 20:02:39.303525 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get 
"https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=6m31s&timeoutSeconds=391&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.370876081+00:00 stderr F I0813 20:02:39.369179 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=7m29s&timeoutSeconds=449&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:40.381014997+00:00 stderr F I0813 20:02:40.380834 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=6m14s&timeoutSeconds=374&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:44.406123425+00:00 stderr F I0813 20:02:44.406046 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=7m26s&timeoutSeconds=446&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:47.103889375+00:00 stderr F I0813 20:02:47.103631 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=8m40s&timeoutSeconds=520&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:48.121575296+00:00 stderr F I0813 20:02:48.121432 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=7m59s&timeoutSeconds=479&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:48.670409442+00:00 stderr F I0813 20:02:48.670299 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=6m15s&timeoutSeconds=375&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:49.210064637+00:00 stderr F I0813 20:02:49.209985 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:49.265045125+00:00 stderr F I0813 20:02:49.264951 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get 
"https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=6m51s&timeoutSeconds=411&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:49.804731651+00:00 stderr F I0813 20:02:49.804662 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=8m4s&timeoutSeconds=484&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:50.484299458+00:00 stderr F I0813 20:02:50.484202 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=9m35s&timeoutSeconds=575&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:50.736101791+00:00 stderr F I0813 20:02:50.736002 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=5m34s&timeoutSeconds=334&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:51.559878001+00:00 stderr F I0813 20:02:51.559728 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=8m18s&timeoutSeconds=498&watch=true": dial tcp 192.168.130.11:6443: connect: connection 
refused - backing off 2025-08-13T20:02:53.234302317+00:00 stderr F E0813 20:02:53.233647 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:01.312578358+00:00 stderr F I0813 20:03:01.312464 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=7m40s&timeoutSeconds=460&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:03.902879611+00:00 stderr F I0813 20:03:03.902658 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:04.115010862+00:00 stderr F I0813 20:03:04.114909 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:04.838387938+00:00 stderr F I0813 20:03:04.838262 1 reflector.go:425] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:05.747741640+00:00 stderr F I0813 20:03:05.747634 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=8m48s&timeoutSeconds=528&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:07.821412355+00:00 stderr F I0813 20:03:07.821340 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=5m5s&timeoutSeconds=305&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:08.326698340+00:00 stderr F I0813 20:03:08.326594 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=7m33s&timeoutSeconds=453&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:09.777199688+00:00 stderr F I0813 20:03:09.777085 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=6m33s&timeoutSeconds=393&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:10.723459022+00:00 stderr F I0813 20:03:10.723347 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:12.627312223+00:00 stderr F I0813 20:03:12.627170 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=9m29s&timeoutSeconds=569&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:19.234222432+00:00 stderr F E0813 20:03:19.233955 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:30.614507142+00:00 stderr F I0813 20:03:30.614265 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=30619&timeout=8m35s&timeoutSeconds=515&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:31.360239635+00:00 stderr F I0813 20:03:31.360179 1 reflector.go:425] 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=30528&timeout=5m37s&timeoutSeconds=337&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:34.585615087+00:00 stderr F I0813 20:03:34.585528 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=30644&timeout=8m39s&timeoutSeconds=519&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:36.600715002+00:00 stderr F I0813 20:03:36.600587 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m0s&timeoutSeconds=360&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:37.001654079+00:00 stderr F I0813 20:03:37.001568 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=30717&timeout=9m37s&timeoutSeconds=577&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:45.054617405+00:00 stderr F I0813 20:03:45.054499 1 reflector.go:425] 
k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=30708&timeout=5m8s&timeoutSeconds=308&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:45.234548188+00:00 stderr F E0813 20:03:45.234001 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:55.172078932+00:00 stderr F I0813 20:03:55.171354 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30556&timeout=5m56s&timeoutSeconds=356&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:59.287601266+00:00 stderr F I0813 20:03:59.287477 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=30713&timeout=8m44s&timeoutSeconds=524&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:59.287647398+00:00 stderr F I0813 20:03:59.287609 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=30574&timeout=6m2s&timeoutSeconds=362&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 
2025-08-13T20:03:59.288739989+00:00 stderr F I0813 20:03:59.287711 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=30542&timeout=6m40s&timeoutSeconds=400&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:04:08.669168343+00:00 stderr F I0813 20:04:08.669023 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall closed with: too old resource version: 30619 (30722) 2025-08-13T20:04:11.111392643+00:00 stderr F I0813 20:04:11.109692 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute closed with: too old resource version: 30717 (30724) 2025-08-13T20:04:15.899353869+00:00 stderr F I0813 20:04:15.899290 1 reflector.go:449] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition closed with: too old resource version: 30528 (30740) 2025-08-13T20:04:21.176697857+00:00 stderr F I0813 20:04:21.175306 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService closed with: too old resource version: 30644 (30758) 2025-08-13T20:04:32.645446258+00:00 stderr F I0813 20:04:32.644704 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS closed with: too old resource version: 30620 (30792) 2025-08-13T20:04:34.205071440+00:00 stderr F 
I0813 20:04:34.204960 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod closed with: too old resource version: 30713 (30718) 2025-08-13T20:04:35.147270382+00:00 stderr F I0813 20:04:35.144833 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node closed with: too old resource version: 30556 (30718) 2025-08-13T20:04:43.777106034+00:00 stderr F I0813 20:04:43.776984 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice closed with: too old resource version: 30708 (30718) 2025-08-13T20:04:44.225162345+00:00 stderr F I0813 20:04:44.221249 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service closed with: too old resource version: 30574 (30718) 2025-08-13T20:04:51.763337638+00:00 stderr F I0813 20:04:51.763204 1 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:04:51.774225740+00:00 stderr F I0813 20:04:51.774007 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:04:58.390947006+00:00 stderr F I0813 20:04:58.390185 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP closed with: too old resource version: 30542 (30756) 2025-08-13T20:05:04.968210444+00:00 stderr F I0813 20:05:04.968084 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:04.971231360+00:00 stderr F I0813 20:05:04.971150 1 reflector.go:351] Caches populated for *v1.EgressFirewall from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:10.117422378+00:00 stderr F I0813 20:05:10.117132 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:10.178603540+00:00 stderr F I0813 20:05:10.178488 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:10.489933925+00:00 stderr F I0813 20:05:10.489759 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:10.495159075+00:00 stderr F I0813 20:05:10.495008 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:13.591742609+00:00 stderr F I0813 20:05:13.591678 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T20:05:13.595271800+00:00 stderr F I0813 20:05:13.595146 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T20:05:14.414762877+00:00 stderr F I0813 20:05:14.414695 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:14.435980504+00:00 stderr F I0813 20:05:14.435741 1 reflector.go:351] Caches populated for *v1.Service from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:14.973598850+00:00 stderr F I0813 20:05:14.973540 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:14.982395342+00:00 stderr F I0813 20:05:14.982351 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:20.334415922+00:00 stderr F I0813 20:05:20.334322 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:20.343099340+00:00 stderr F I0813 20:05:20.342971 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:23.010243446+00:00 stderr F I0813 20:05:23.007982 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:23.085513441+00:00 stderr F I0813 20:05:23.085366 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:40.178921662+00:00 stderr F I0813 20:05:40.178700 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:05:40.183126722+00:00 stderr F I0813 20:05:40.181664 1 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:06:07.034686591+00:00 stderr F I0813 20:06:07.034127 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:07.034686591+00:00 stderr F I0813 20:06:07.034184 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:07.034686591+00:00 stderr F I0813 20:06:07.034195 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:06:14.803185921+00:00 
stderr F I0813 20:06:14.802522 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:14.803185921+00:00 stderr F I0813 20:06:14.802582 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:14.803185921+00:00 stderr F I0813 20:06:14.802595 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:06:49.903899045+00:00 stderr F I0813 20:06:49.900523 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:49.903899045+00:00 stderr F I0813 20:06:49.900591 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:49.903899045+00:00 stderr F I0813 20:06:49.900602 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:06:50.817129669+00:00 stderr F I0813 20:06:50.815968 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:50.817129669+00:00 stderr F I0813 20:06:50.816015 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:50.817129669+00:00 stderr F I0813 20:06:50.816025 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:06:51.440337467+00:00 stderr F I0813 20:06:51.439960 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-08-13T20:06:51.440337467+00:00 stderr F I0813 20:06:51.440007 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-08-13T20:06:51.440337467+00:00 stderr F I0813 20:06:51.440018 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-08-13T20:08:26.090759383+00:00 stderr F I0813 20:08:26.089857 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 8 items received 2025-08-13T20:08:26.238631123+00:00 stderr F I0813 20:08:26.238046 1 reflector.go:800] 
k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 222 items received 2025-08-13T20:08:26.259847771+00:00 stderr F I0813 20:08:26.259645 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 3 items received 2025-08-13T20:08:26.277692593+00:00 stderr F I0813 20:08:26.276954 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32875&timeout=8m9s&timeoutSeconds=489&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.277692593+00:00 stderr F I0813 20:08:26.277048 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=5m23s&timeoutSeconds=323&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.277692593+00:00 stderr F I0813 20:08:26.277107 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=32843&timeout=5m20s&timeoutSeconds=320&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.292522258+00:00 stderr F I0813 20:08:26.292372 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 39 items received 2025-08-13T20:08:26.298874000+00:00 stderr F I0813 20:08:26.298551 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get 
"https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=7m5s&timeoutSeconds=425&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.365083809+00:00 stderr F I0813 20:08:26.364958 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 3 items received 2025-08-13T20:08:26.378009589+00:00 stderr F I0813 20:08:26.377942 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.393010649+00:00 stderr F I0813 20:08:26.392832 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 2 items received 2025-08-13T20:08:26.397152478+00:00 stderr F I0813 20:08:26.395718 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=32690&timeout=5m31s&timeoutSeconds=331&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.435614541+00:00 stderr F I0813 20:08:26.435110 1 reflector.go:800] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 3 items received 2025-08-13T20:08:26.437728472+00:00 stderr F I0813 20:08:26.436745 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=32732&timeout=6m30s&timeoutSeconds=390&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.455392008+00:00 stderr F I0813 20:08:26.454752 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 3 items received 2025-08-13T20:08:26.485053188+00:00 stderr F I0813 20:08:26.479510 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=32829&timeout=8m13s&timeoutSeconds=493&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.505489924+00:00 stderr F I0813 20:08:26.504755 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 3 items received 2025-08-13T20:08:26.520279368+00:00 stderr F I0813 20:08:26.520223 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=32798&timeout=6m12s&timeoutSeconds=372&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:26.524691415+00:00 stderr F I0813 20:08:26.524132 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 3 items received 2025-08-13T20:08:26.536512614+00:00 stderr F I0813 20:08:26.535729 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=32814&timeout=5m11s&timeoutSeconds=311&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.439708929+00:00 stderr F I0813 20:08:27.437496 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=5m56s&timeoutSeconds=356&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.556914909+00:00 stderr F I0813 20:08:27.554592 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.556914909+00:00 stderr F I0813 20:08:27.554639 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of 
*v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=32843&timeout=5m22s&timeoutSeconds=322&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.556914909+00:00 stderr F I0813 20:08:27.554759 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=32798&timeout=6m4s&timeoutSeconds=364&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.581716051+00:00 stderr F I0813 20:08:27.581655 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=32732&timeout=5m28s&timeoutSeconds=328&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.827098016+00:00 stderr F I0813 20:08:27.827034 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=9m32s&timeoutSeconds=572&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.848690705+00:00 stderr F I0813 20:08:27.847430 1 reflector.go:425] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=32690&timeout=6m59s&timeoutSeconds=419&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.856049506+00:00 stderr F I0813 20:08:27.854639 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32875&timeout=7m43s&timeoutSeconds=463&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:27.874058453+00:00 stderr F I0813 20:08:27.873571 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=32829&timeout=8m59s&timeoutSeconds=539&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:28.064758340+00:00 stderr F I0813 20:08:28.064695 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=32814&timeout=8m40s&timeoutSeconds=520&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:29.331667863+00:00 stderr F I0813 20:08:29.331478 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of 
*v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=5m44s&timeoutSeconds=344&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:29.808479154+00:00 stderr F I0813 20:08:29.808398 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=32829&timeout=6m7s&timeoutSeconds=367&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.058301906+00:00 stderr F I0813 20:08:30.058178 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=7m22s&timeoutSeconds=442&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.059010837+00:00 stderr F I0813 20:08:30.058958 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=32690&timeout=6m29s&timeoutSeconds=389&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.073310337+00:00 stderr F I0813 20:08:30.073260 1 reflector.go:425] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=32798&timeout=5m43s&timeoutSeconds=343&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.449455051+00:00 stderr F I0813 20:08:30.449389 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32875&timeout=7m52s&timeoutSeconds=472&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.647255572+00:00 stderr F I0813 20:08:30.647054 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=32732&timeout=9m42s&timeoutSeconds=582&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.682630716+00:00 stderr F I0813 20:08:30.682560 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=32843&timeout=9m49s&timeoutSeconds=589&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.707877010+00:00 stderr F I0813 20:08:30.707073 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=5m45s&timeoutSeconds=345&watch=true": dial 
tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:30.992885373+00:00 stderr F I0813 20:08:30.992644 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=32814&timeout=6m17s&timeoutSeconds=377&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:32.581485986+00:00 stderr F I0813 20:08:32.581307 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=7m45s&timeoutSeconds=465&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:34.324217982+00:00 stderr F I0813 20:08:34.324044 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=32829&timeout=6m2s&timeoutSeconds=362&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:34.885714871+00:00 stderr F I0813 20:08:34.885543 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=8m27s&timeoutSeconds=507&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:35.087059034+00:00 stderr F 
I0813 20:08:35.086944 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=32732&timeout=6m37s&timeoutSeconds=397&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:35.131211520+00:00 stderr F I0813 20:08:35.131130 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=5m13s&timeoutSeconds=313&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:35.199171128+00:00 stderr F E0813 20:08:35.198986 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:08:35.380952220+00:00 stderr F I0813 20:08:35.380700 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=32690&timeout=6m54s&timeoutSeconds=414&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:35.555910746+00:00 stderr F I0813 20:08:35.555680 1 reflector.go:425] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=32798&timeout=5m50s&timeoutSeconds=350&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:36.059852564+00:00 stderr F I0813 20:08:36.059700 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32875&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:36.252016463+00:00 stderr F I0813 20:08:36.251855 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=32843&timeout=7m51s&timeoutSeconds=471&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:36.662864323+00:00 stderr F I0813 20:08:36.661881 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=32814&timeout=7m40s&timeoutSeconds=460&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:08:44.451271463+00:00 stderr F I0813 20:08:44.451033 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService closed with: too old resource version: 32732 (32918) 
2025-08-13T20:08:44.844100816+00:00 stderr F I0813 20:08:44.843246 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP closed with: too old resource version: 32690 (32918) 2025-08-13T20:08:45.099720865+00:00 stderr F I0813 20:08:45.098729 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node closed with: too old resource version: 32875 (32913) 2025-08-13T20:08:45.348963531+00:00 stderr F I0813 20:08:45.348844 1 reflector.go:449] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition closed with: too old resource version: 32829 (32918) 2025-08-13T20:08:46.097685517+00:00 stderr F I0813 20:08:46.097516 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS closed with: too old resource version: 32814 (32918) 2025-08-13T20:08:47.272971763+00:00 stderr F I0813 20:08:47.269375 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall closed with: too old resource version: 32798 (32918) 2025-08-13T20:08:47.290868536+00:00 stderr F I0813 20:08:47.290586 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service closed with: too old resource version: 32843 (32913) 2025-08-13T20:08:47.326887289+00:00 stderr F I0813 20:08:47.326144 1 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=32909&timeout=7m15s&timeoutSeconds=435&watch=true 
2025-08-13T20:08:47.334399114+00:00 stderr F I0813 20:08:47.334296 1 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=32837&timeout=7m19s&timeoutSeconds=439&watch=true 2025-08-13T20:08:47.336839134+00:00 stderr F I0813 20:08:47.334591 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice closed with: too old resource version: 32909 (32913) 2025-08-13T20:08:47.343543856+00:00 stderr F I0813 20:08:47.342605 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute closed with: too old resource version: 32837 (32918) 2025-08-13T20:08:48.497341147+00:00 stderr F I0813 20:08:48.497028 1 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32901&timeout=5m40s&timeoutSeconds=340&watch=true 2025-08-13T20:08:48.500319193+00:00 stderr F I0813 20:08:48.500176 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod closed with: too old resource version: 32901 (32913) 2025-08-13T20:09:00.365291112+00:00 stderr F I0813 20:09:00.365157 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:00.370503812+00:00 stderr F I0813 20:09:00.370417 1 reflector.go:351] Caches populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:09:02.210017681+00:00 stderr F I0813 20:09:02.208986 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:02.217704731+00:00 
stderr F I0813 20:09:02.217587 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:03.178241281+00:00 stderr F I0813 20:09:03.178084 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:03.183834491+00:00 stderr F I0813 20:09:03.183731 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:03.256741451+00:00 stderr F I0813 20:09:03.256609 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140
2025-08-13T20:09:03.258701918+00:00 stderr F I0813 20:09:03.258600 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140
2025-08-13T20:09:04.923135168+00:00 stderr F I0813 20:09:04.922990 1 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140
2025-08-13T20:09:04.925005072+00:00 stderr F I0813 20:09:04.924884 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140
2025-08-13T20:09:07.684448359+00:00 stderr F I0813 20:09:07.684300 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140
2025-08-13T20:09:07.686339683+00:00 stderr F I0813 20:09:07.686236 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140
2025-08-13T20:09:08.255527921+00:00 stderr F I0813 20:09:08.255435 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117
2025-08-13T20:09:08.257354203+00:00 stderr F I0813 20:09:08.257266 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117
2025-08-13T20:09:08.414106628+00:00 stderr F I0813 20:09:08.413950 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:08.421467149+00:00 stderr F I0813 20:09:08.421398 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:09.945760781+00:00 stderr F I0813 20:09:09.945677 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140
2025-08-13T20:09:09.948237462+00:00 stderr F I0813 20:09:09.948206 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140
2025-08-13T20:09:10.057412292+00:00 stderr F I0813 20:09:10.057293 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:10.081556054+00:00 stderr F I0813 20:09:10.081404 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:15:03.950126047+00:00 stderr F I0813 20:15:03.949893 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 6 items received
2025-08-13T20:16:32.083872167+00:00 stderr F I0813 20:16:32.083696 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 58 items received
2025-08-13T20:16:58.188249111+00:00 stderr F I0813 20:16:58.188171 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 10 items received
2025-08-13T20:17:00.691109205+00:00 stderr F I0813 20:17:00.689700 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 8 items received
2025-08-13T20:17:05.423536700+00:00 stderr F I0813 20:17:05.423393 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 9 items received
2025-08-13T20:17:25.219155214+00:00 stderr F I0813 20:17:25.219006 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 19 items received
2025-08-13T20:17:27.261038704+00:00 stderr F I0813 20:17:27.259905 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 9 items received
2025-08-13T20:17:29.374880989+00:00 stderr F I0813 20:17:29.373161 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 10 items received
2025-08-13T20:18:30.926830606+00:00 stderr F I0813 20:18:30.926629 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 10 items received
2025-08-13T20:18:46.261049777+00:00 stderr F I0813 20:18:46.260940 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 10 items received
2025-08-13T20:22:38.193464370+00:00 stderr F I0813 20:22:38.192907 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 7 items received
2025-08-13T20:23:32.958254448+00:00 stderr F I0813 20:23:32.957887 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 10 items received
2025-08-13T20:23:54.378534512+00:00 stderr F I0813 20:23:54.377539 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 8 items received
2025-08-13T20:24:42.425620956+00:00 stderr F I0813 20:24:42.425447 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 9 items received
2025-08-13T20:25:18.087113742+00:00 stderr F I0813 20:25:18.086745 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 73 items received
2025-08-13T20:25:45.268485504+00:00 stderr F I0813 20:25:45.268365 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 10 items received
2025-08-13T20:26:09.696522126+00:00 stderr F I0813 20:26:09.696386 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 10 items received
2025-08-13T20:26:19.269665650+00:00 stderr F I0813 20:26:19.269527 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 9 items received
2025-08-13T20:26:36.221522339+00:00 stderr F I0813 20:26:36.221340 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 11 items received
2025-08-13T20:27:13.935322595+00:00 stderr F I0813 20:27:13.935240 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 9 items received
2025-08-13T20:29:38.197111009+00:00 stderr F I0813 20:29:38.196647 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 8 items received
2025-08-13T20:30:32.963915317+00:00 stderr F I0813 20:30:32.963851 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 7 items received
2025-08-13T20:32:27.381759922+00:00 stderr F I0813 20:32:27.381663 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 10 items received
2025-08-13T20:33:02.700387816+00:00 stderr F I0813 20:33:02.700286 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 8 items received
2025-08-13T20:34:09.277196906+00:00 stderr F I0813 20:34:09.276918 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 10 items received
2025-08-13T20:34:15.428383736+00:00 stderr F I0813 20:34:15.428177 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 11 items received
2025-08-13T20:34:37.095177679+00:00 stderr F I0813 20:34:37.094865 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 87 items received
2025-08-13T20:35:23.231610773+00:00 stderr F I0813 20:35:23.231439 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 9 items received
2025-08-13T20:35:51.277656871+00:00 stderr F I0813 20:35:51.277406 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 11 items received
2025-08-13T20:37:01.945895656+00:00 stderr F I0813 20:37:01.945690 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 10 items received
2025-08-13T20:37:25.979330639+00:00 stderr F I0813 20:37:25.979247 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 8 items received
2025-08-13T20:38:59.199981815+00:00 stderr F I0813 20:38:59.199865 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 13 items received
2025-08-13T20:40:09.103307343+00:00 stderr F I0813 20:40:09.102623 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 38 items received
2025-08-13T20:41:27.433448588+00:00 stderr F I0813 20:41:27.432302 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 9 items received
2025-08-13T20:42:06.389294881+00:00 stderr F I0813 20:42:06.388390 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 11 items received
2025-08-13T20:42:13.322337202+00:00 stderr F I0813 20:42:13.322153 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:42:13.322337202+00:00 stderr F I0813 20:42:13.322268 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:42:13.322337202+00:00 stderr F I0813 20:42:13.322281 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:42:14.045741887+00:00 stderr F I0813 20:42:14.045658 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:42:14.046010575+00:00 stderr F I0813 20:42:14.045977 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:42:14.046096927+00:00 stderr F I0813 20:42:14.046069 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:42:16.636605691+00:00 stderr F I0813 20:42:16.636532 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:42:16.636711154+00:00 stderr F I0813 20:42:16.636691 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:42:16.636762906+00:00 stderr F I0813 20:42:16.636744 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:42:21.922931155+00:00 stderr F I0813 20:42:21.922766 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:42:21.923024178+00:00 stderr F I0813 20:42:21.923009 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:42:21.923060749+00:00 stderr F I0813 20:42:21.923047 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:42:24.100400083+00:00 stderr F I0813 20:42:24.099356 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23]
2025-08-13T20:42:24.100400083+00:00 stderr F I0813 20:42:24.099402 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc
2025-08-13T20:42:24.100400083+00:00 stderr F I0813 20:42:24.099413 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc
2025-08-13T20:42:36.331250880+00:00 stderr F I0813 20:42:36.331166 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.331349213+00:00 stderr F I0813 20:42:36.331311 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 7 items received
2025-08-13T20:42:36.331592590+00:00 stderr F I0813 20:42:36.331573 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.331637981+00:00 stderr F I0813 20:42:36.331624 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 5 items received
2025-08-13T20:42:36.331898209+00:00 stderr F I0813 20:42:36.331828 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.331990341+00:00 stderr F I0813 20:42:36.331914 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 0 items received
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.340240 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.340465 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 8 items received
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.340514 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.340560 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 24 items received
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.342976 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.343115 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 6 items received
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378411 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378439 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 5 items received
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378738 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378756 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 11 items received
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378898 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.378911 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 9 items received
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.379107 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.397251893+00:00 stderr F I0813 20:42:36.379119 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 1 items received
2025-08-13T20:42:36.468721613+00:00 stderr F I0813 20:42:36.468553 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=37518&timeout=6m18s&timeoutSeconds=378&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:36.469025152+00:00 stderr F I0813 20:42:36.468987 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=37382&timeout=7m30s&timeoutSeconds=450&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:36.469131115+00:00 stderr F I0813 20:42:36.469109 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=37379&timeout=8m55s&timeoutSeconds=535&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:36.469254349+00:00 stderr F I0813 20:42:36.469202 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=37457&timeout=5m27s&timeoutSeconds=327&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:36.469357872+00:00 stderr F I0813 20:42:36.469335 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=37460&timeout=8m0s&timeoutSeconds=480&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:36.469467895+00:00 stderr F I0813 20:42:36.469426 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=37339&timeout=7m32s&timeoutSeconds=452&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:36.469565308+00:00 stderr F I0813 20:42:36.469543 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=37395&timeout=7m11s&timeoutSeconds=431&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:36.469649140+00:00 stderr F I0813 20:42:36.469629 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=37387&timeout=7m17s&timeoutSeconds=437&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:36.474744937+00:00 stderr F I0813 20:42:36.474712 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=37483&timeout=8m48s&timeoutSeconds=528&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:36.512134905+00:00 stderr F I0813 20:42:36.510142 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=37463&timeout=5m3s&timeoutSeconds=303&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:37.323390694+00:00 stderr F I0813 20:42:37.323003 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=37518&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:37.366734713+00:00 stderr F I0813 20:42:37.365096 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=37463&timeout=7m18s&timeoutSeconds=438&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:37.368280888+00:00 stderr F I0813 20:42:37.368204 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=37339&timeout=5m43s&timeoutSeconds=343&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:37.469326791+00:00 stderr F I0813 20:42:37.469035 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=37382&timeout=5m52s&timeoutSeconds=352&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:37.507438310+00:00 stderr F I0813 20:42:37.507270 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=37457&timeout=9m21s&timeoutSeconds=561&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:37.576580683+00:00 stderr F I0813 20:42:37.576505 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=37387&timeout=5m25s&timeoutSeconds=325&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:37.719706180+00:00 stderr F I0813 20:42:37.719420 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=37460&timeout=9m3s&timeoutSeconds=543&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:37.761913206+00:00 stderr F I0813 20:42:37.761702 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=37395&timeout=8m11s&timeoutSeconds=491&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:37.893640494+00:00 stderr F I0813 20:42:37.893106 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=37483&timeout=5m24s&timeoutSeconds=324&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:38.751053994+00:00 stderr F I0813 20:42:38.747855 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=37379&timeout=9m47s&timeoutSeconds=587&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:39.422669377+00:00 stderr F I0813 20:42:39.422559 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=37387&timeout=8m51s&timeoutSeconds=531&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:39.647877839+00:00 stderr F I0813 20:42:39.647552 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=37483&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:39.747394858+00:00 stderr F I0813 20:42:39.747311 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=37463&timeout=6m33s&timeoutSeconds=393&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:40.014528689+00:00 stderr F I0813 20:42:40.014415 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=37339&timeout=7m2s&timeoutSeconds=422&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:40.169436885+00:00 stderr F I0813 20:42:40.169161 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=37457&timeout=7m22s&timeoutSeconds=442&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:40.328353177+00:00 stderr F I0813 20:42:40.328195 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=37518&timeout=8m57s&timeoutSeconds=537&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:40.614885258+00:00 stderr F I0813 20:42:40.614162 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=37382&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:40.765210432+00:00 stderr F I0813 20:42:40.765040 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=37460&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:40.865262196+00:00 stderr F I0813 20:42:40.864737 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=37395&timeout=7m6s&timeoutSeconds=426&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:41.495986521+00:00 stderr F I0813 20:42:41.495869 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=37379&timeout=5m11s&timeoutSeconds=311&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:43.194981592+00:00 stderr F I0813 20:42:43.194872 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=37463&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:43.685720091+00:00 stderr F I0813 20:42:43.685610 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=37457&timeout=9m58s&timeoutSeconds=598&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:43.877741517+00:00 stderr F I0813 20:42:43.877605 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=37483&timeout=6m47s&timeoutSeconds=407&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:44.027481544+00:00 stderr F I0813 20:42:44.027363 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=37387&timeout=7m13s&timeoutSeconds=433&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:44.107128320+00:00 stderr F I0813 20:42:44.107005 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=37339&timeout=6m0s&timeoutSeconds=360&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:45.013280195+00:00 stderr F I0813 20:42:45.013115 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=37460&timeout=8m35s&timeoutSeconds=515&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:45.065594613+00:00 stderr F I0813 20:42:45.065486 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=37395&timeout=9m9s&timeoutSeconds=549&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:45.541671939+00:00 stderr F I0813 20:42:45.541600 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=37379&timeout=5m57s&timeoutSeconds=357&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:45.848624948+00:00 stderr F I0813 20:42:45.848520 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=37382&timeout=8m1s&timeoutSeconds=481&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:45.871742875+00:00 stderr F I0813 20:42:45.871625 1 ovnkube.go:129] Received signal terminated. Shutting down
2025-08-13T20:42:45.871742875+00:00 stderr F I0813 20:42:45.871726 1 ovnkube.go:577] Stopping ovnkube...
2025-08-13T20:42:45.871926880+00:00 stderr F I0813 20:42:45.871874 1 metrics.go:552] Stopping metrics server at address "127.0.0.1:29108" 2025-08-13T20:42:45.872162857+00:00 stderr F I0813 20:42:45.872084 1 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:45.872178347+00:00 stderr F I0813 20:42:45.872161 1 reflector.go:295] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872398174+00:00 stderr F I0813 20:42:45.872346 1 clustermanager.go:170] Stopping the cluster manager 2025-08-13T20:42:45.872412244+00:00 stderr F I0813 20:42:45.872395 1 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872482296+00:00 stderr F I0813 20:42:45.872426 1 secondary_network_cluster_manager.go:100] Stopping secondary network cluster manager 2025-08-13T20:42:45.872482296+00:00 stderr F I0813 20:42:45.872458 1 network_attach_def_controller.go:166] Shutting down cluster-manager NAD controller 2025-08-13T20:42:45.872482296+00:00 stderr F I0813 20:42:45.872461 1 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:45.872497767+00:00 stderr F I0813 20:42:45.872488 1 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872595679+00:00 stderr F I0813 20:42:45.872537 1 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872608610+00:00 stderr F I0813 
20:42:45.872591 1 reflector.go:295] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:45.872618260+00:00 stderr F I0813 20:42:45.872341 1 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-08-13T20:42:45.872841806+00:00 stderr F I0813 20:42:45.872669 1 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-08-13T20:42:45.872841806+00:00 stderr F I0813 20:42:45.872371 1 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:45.873977959+00:00 stderr F E0813 20:42:45.873903 1 handler.go:199] Removing already-removed *v1.EgressIP event handler 4 2025-08-13T20:42:45.874051631+00:00 stderr F E0813 20:42:45.873997 1 handler.go:199] Removing already-removed *v1.Node event handler 1 2025-08-13T20:42:45.874079042+00:00 stderr F E0813 20:42:45.874061 1 handler.go:199] Removing already-removed *v1.Node event handler 2 2025-08-13T20:42:45.874090303+00:00 stderr F E0813 20:42:45.874077 1 handler.go:199] Removing already-removed *v1.Node event handler 3 2025-08-13T20:42:45.875302367+00:00 stderr F I0813 20:42:45.875214 1 egressservice_cluster.go:218] Shutting down Egress Services controller 2025-08-13T20:42:45.877599304+00:00 stderr F I0813 20:42:45.877560 1 ovnkube.go:581] Stopped ovnkube 2025-08-13T20:42:45.877876942+00:00 stderr F E0813 20:42:45.877692 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:42:45.880356233+00:00 stderr F I0813 20:42:45.880204 1 ovnkube.go:395] No longer 
leader; exiting
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager/1.log
2025-12-13T00:11:04.341261938+00:00 stderr F + [[ -f /env/_master ]] 2025-12-13T00:11:04.341261938+00:00 stderr F + ovn_v4_join_subnet_opt= 2025-12-13T00:11:04.341261938+00:00 stderr F + [[ '' != '' ]] 2025-12-13T00:11:04.341261938+00:00 stderr F + ovn_v6_join_subnet_opt= 2025-12-13T00:11:04.341261938+00:00 stderr F + [[ '' != '' ]] 2025-12-13T00:11:04.341261938+00:00 stderr F + ovn_v4_transit_switch_subnet_opt= 2025-12-13T00:11:04.341261938+00:00 stderr F + [[ '' != '' ]] 2025-12-13T00:11:04.341261938+00:00 stderr F + ovn_v6_transit_switch_subnet_opt= 2025-12-13T00:11:04.341261938+00:00 stderr F + [[ '' != '' ]] 2025-12-13T00:11:04.341604777+00:00 stderr F + dns_name_resolver_enabled_flag= 2025-12-13T00:11:04.341604777+00:00 stderr F + [[ false == \t\r\u\e ]] 2025-12-13T00:11:04.341991487+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-12-13T00:11:04.345719338+00:00 stdout F I1213 00:11:04.344754432 - ovnkube-control-plane - start ovnkube --init-cluster-manager crc 2025-12-13T00:11:04.345751369+00:00 stderr F + echo 'I1213 00:11:04.344754432 - ovnkube-control-plane - start ovnkube --init-cluster-manager crc' 2025-12-13T00:11:04.345751369+00:00 stderr F + exec /usr/bin/ovnkube --enable-interconnect --init-cluster-manager crc --config-file=/run/ovnkube-config/ovnkube.conf --loglevel 4 --metrics-bind-address 127.0.0.1:29108 --metrics-enable-pprof --metrics-enable-config-duration 2025-12-13T00:11:04.468334741+00:00 stderr F I1213 00:11:04.468175 1 config.go:2178] Parsed config file
/run/ovnkube-config/ovnkube.conf 2025-12-13T00:11:04.468360441+00:00 stderr F I1213 00:11:04.468265 1 config.go:2179] Parsed config: {Default:{MTU:1400 RoutableMTU:0 ConntrackZone:64000 HostMasqConntrackZone:0 OVNMasqConntrackZone:0 HostNodePortConntrackZone:0 ReassemblyConntrackZone:0 EncapType:geneve EncapIP: EncapPort:6081 InactivityProbe:100000 OpenFlowProbe:180 OfctrlWaitBeforeClear:0 MonitorAll:true LFlowCacheEnable:true LFlowCacheLimit:0 LFlowCacheLimitKb:1048576 RawClusterSubnets:10.217.0.0/22/23 ClusterSubnets:[] EnableUDPAggregation:true Zone:global} Logging:{File: CNIFile: LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:4 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} Monitoring:{RawNetFlowTargets: RawSFlowTargets: RawIPFIXTargets: NetFlowTargets:[] SFlowTargets:[] IPFIXTargets:[]} IPFIX:{Sampling:400 CacheActiveTimeout:60 CacheMaxFlows:0} CNI:{ConfDir:/etc/cni/net.d Plugin:ovn-k8s-cni-overlay} OVNKubernetesFeature:{EnableAdminNetworkPolicy:true EnableEgressIP:true EgressIPReachabiltyTotalTimeout:1 EnableEgressFirewall:true EnableEgressQoS:true EnableEgressService:true EgressIPNodeHealthCheckPort:9107 EnableMultiNetwork:true EnableMultiNetworkPolicy:false EnableStatelessNetPol:false EnableInterconnect:false EnableMultiExternalGateway:true EnablePersistentIPs:false EnableDNSNameResolver:false EnableServiceTemplateSupport:false} Kubernetes:{BootstrapKubeconfig: CertDir: CertDuration:10m0s Kubeconfig: CACert: CAData:[] APIServer:https://api-int.crc.testing:6443 Token: TokenFile: CompatServiceCIDR: RawServiceCIDRs:10.217.4.0/23 ServiceCIDRs:[] OVNConfigNamespace:openshift-ovn-kubernetes OVNEmptyLbEvents:false PodIP: RawNoHostSubnetNodes: NoHostSubnetNodes: HostNetworkNamespace:openshift-host-network PlatformType:None HealthzBindAddress:0.0.0.0:10256 CompatMetricsBindAddress: CompatOVNMetricsBindAddress: CompatMetricsEnablePprof:false DNSServiceNamespace:openshift-dns DNSServiceName:dns-default} Metrics:{BindAddress: 
OVNMetricsBindAddress: ExportOVSMetrics:false EnablePprof:false NodeServerPrivKey: NodeServerCert: EnableConfigDuration:false EnableScaleMetrics:false} OvnNorth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} OvnSouth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} Gateway:{Mode:shared Interface: EgressGWInterface: NextHop: VLANID:0 NodeportEnable:true DisableSNATMultipleGWs:false V4JoinSubnet:100.64.0.0/16 V6JoinSubnet:fd98::/64 V4MasqueradeSubnet:169.254.169.0/29 V6MasqueradeSubnet:fd69::/125 MasqueradeIPs:{V4OVNMasqueradeIP:169.254.169.1 V6OVNMasqueradeIP:fd69::1 V4HostMasqueradeIP:169.254.169.2 V6HostMasqueradeIP:fd69::2 V4HostETPLocalMasqueradeIP:169.254.169.3 V6HostETPLocalMasqueradeIP:fd69::3 V4DummyNextHopMasqueradeIP:169.254.169.4 V6DummyNextHopMasqueradeIP:fd69::4 V4OVNServiceHairpinMasqueradeIP:169.254.169.5 V6OVNServiceHairpinMasqueradeIP:fd69::5} DisablePacketMTUCheck:false RouterSubnet: SingleNode:false DisableForwarding:false AllowNoUplink:false} MasterHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} ClusterMgrHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} HybridOverlay:{Enabled:false RawClusterSubnets: ClusterSubnets:[] VXLANPort:4789} OvnKubeNode:{Mode:full DPResourceDeviceIdsMap:map[] MgmtPortNetdev: MgmtPortDPResourceName:} ClusterManager:{V4TransitSwitchSubnet:100.88.0.0/16 V6TransitSwitchSubnet:fd97::/64}} 2025-12-13T00:11:04.477291564+00:00 stderr F I1213 00:11:04.477249 1 leaderelection.go:250] attempting to acquire leader lease openshift-ovn-kubernetes/ovn-kubernetes-master... 
2025-12-13T00:11:04.477336125+00:00 stderr F I1213 00:11:04.477239 1 metrics.go:532] Starting metrics server at address "127.0.0.1:29108" 2025-12-13T00:11:04.499445536+00:00 stderr F I1213 00:11:04.499390 1 leaderelection.go:260] successfully acquired lease openshift-ovn-kubernetes/ovn-kubernetes-master 2025-12-13T00:11:04.500842025+00:00 stderr F I1213 00:11:04.499672 1 ovnkube.go:386] Won leader election; in active mode 2025-12-13T00:11:04.500842025+00:00 stderr F I1213 00:11:04.499820 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-ovn-kubernetes", Name:"ovn-kubernetes-master", UID:"191c52d0-8e80-4a25-b5b6-abbf211ef81a", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"38225", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ovnkube-control-plane-77c846df58-6l97b became leader 2025-12-13T00:11:04.513336224+00:00 stderr F I1213 00:11:04.510794 1 secondary_network_cluster_manager.go:38] Creating secondary network cluster manager 2025-12-13T00:11:04.513336224+00:00 stderr F I1213 00:11:04.511799 1 egressservice_cluster.go:97] Setting up event handlers for Egress Services 2025-12-13T00:11:04.513336224+00:00 stderr F I1213 00:11:04.511914 1 clustermanager.go:123] Starting the cluster manager 2025-12-13T00:11:04.513336224+00:00 stderr F I1213 00:11:04.511922 1 factory.go:405] Starting watch factory 2025-12-13T00:11:04.513336224+00:00 stderr F I1213 00:11:04.513041 1 reflector.go:289] Starting reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:04.513336224+00:00 stderr F I1213 00:11:04.513058 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:04.513336224+00:00 stderr F I1213 00:11:04.513203 1 reflector.go:289] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:04.513336224+00:00 stderr F I1213 00:11:04.513208 1 reflector.go:325] Listing and watching *v1.Pod from 
k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:04.513463987+00:00 stderr F I1213 00:11:04.513430 1 reflector.go:289] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:04.515751969+00:00 stderr F I1213 00:11:04.515227 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:04.515751969+00:00 stderr F I1213 00:11:04.515364 1 reflector.go:289] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:04.515751969+00:00 stderr F I1213 00:11:04.515374 1 reflector.go:325] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:04.520772596+00:00 stderr F I1213 00:11:04.519389 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:04.520772596+00:00 stderr F I1213 00:11:04.520323 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:04.529202295+00:00 stderr F I1213 00:11:04.527583 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:04.544014867+00:00 stderr F I1213 00:11:04.543195 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:04.612734415+00:00 stderr F I1213 00:11:04.612673 1 reflector.go:289] Starting reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:04.612734415+00:00 stderr F I1213 00:11:04.612704 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:04.627894387+00:00 stderr F I1213 00:11:04.627814 1 reflector.go:351] Caches populated for *v1.EgressIP from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:04.717846091+00:00 stderr F I1213 00:11:04.717771 1 reflector.go:289] Starting reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:04.717846091+00:00 stderr F I1213 00:11:04.717807 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:04.750545491+00:00 stderr F I1213 00:11:04.750467 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:04.819284458+00:00 stderr F I1213 00:11:04.819228 1 reflector.go:289] Starting reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:04.819284458+00:00 stderr F I1213 00:11:04.819262 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:04.835959421+00:00 stderr F I1213 00:11:04.835817 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:04.919995835+00:00 stderr F I1213 00:11:04.919912 1 reflector.go:289] Starting reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:04.919995835+00:00 stderr F I1213 00:11:04.919974 1 reflector.go:325] Listing and watching 
*v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:04.935099376+00:00 stderr F I1213 00:11:04.935058 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:05.020993700+00:00 stderr F I1213 00:11:05.020156 1 reflector.go:289] Starting reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:05.020993700+00:00 stderr F I1213 00:11:05.020187 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:05.036754389+00:00 stderr F I1213 00:11:05.036702 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:11:05.121535462+00:00 stderr F I1213 00:11:05.121447 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-12-13T00:11:05.121535462+00:00 stderr F I1213 00:11:05.121492 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-12-13T00:11:05.121535462+00:00 stderr F I1213 00:11:05.121503 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-12-13T00:11:05.121535462+00:00 stderr F I1213 00:11:05.121528 1 zone_cluster_controller.go:244] Node crc has the id 2 set 2025-12-13T00:11:05.121752728+00:00 stderr F I1213 00:11:05.121719 1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-id:2 
k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.2/16"}] on node crc 2025-12-13T00:11:05.133325023+00:00 stderr F I1213 00:11:05.133277 1 secondary_network_cluster_manager.go:65] Starting secondary network cluster manager 2025-12-13T00:11:05.133345713+00:00 stderr F I1213 00:11:05.133328 1 network_attach_def_controller.go:134] Starting cluster-manager NAD controller 2025-12-13T00:11:05.133478618+00:00 stderr F I1213 00:11:05.133458 1 shared_informer.go:311] Waiting for caches to sync for cluster-manager 2025-12-13T00:11:05.134214967+00:00 stderr F I1213 00:11:05.134174 1 reflector.go:289] Starting reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-12-13T00:11:05.134214967+00:00 stderr F I1213 00:11:05.134197 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-12-13T00:11:05.150424588+00:00 stderr F I1213 00:11:05.149980 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-12-13T00:11:05.234006689+00:00 stderr F I1213 00:11:05.233918 1 shared_informer.go:318] Caches are synced for cluster-manager 2025-12-13T00:11:05.234006689+00:00 stderr F I1213 00:11:05.233983 1 network_attach_def_controller.go:182] Starting repairing loop for cluster-manager 2025-12-13T00:11:05.234237995+00:00 stderr F I1213 00:11:05.234206 1 network_attach_def_controller.go:184] Finished repairing loop for cluster-manager: 226.026µs err: 2025-12-13T00:11:05.234237995+00:00 stderr F I1213 00:11:05.234224 1 network_attach_def_controller.go:153] Starting workers for cluster-manager NAD controller 2025-12-13T00:11:05.236394334+00:00 stderr F W1213 
00:11:05.236371 1 egressip_healthcheck.go:165] Health checking using insecure connection 2025-12-13T00:11:06.237987724+00:00 stderr F W1213 00:11:06.237836 1 egressip_healthcheck.go:182] Could not connect to crc (10.217.0.2:9107): context deadline exceeded 2025-12-13T00:11:06.237987724+00:00 stderr F I1213 00:11:06.237911 1 egressip_controller.go:459] EgressIP node reachability enabled and using gRPC port 9107 2025-12-13T00:11:06.237987724+00:00 stderr F I1213 00:11:06.237919 1 egressservice_cluster.go:170] Starting Egress Services Controller 2025-12-13T00:11:06.237987724+00:00 stderr F I1213 00:11:06.237948 1 shared_informer.go:311] Waiting for caches to sync for egressservices 2025-12-13T00:11:06.237987724+00:00 stderr F I1213 00:11:06.237957 1 shared_informer.go:318] Caches are synced for egressservices 2025-12-13T00:11:06.237987724+00:00 stderr F I1213 00:11:06.237963 1 shared_informer.go:311] Waiting for caches to sync for egressservices_services 2025-12-13T00:11:06.237987724+00:00 stderr F I1213 00:11:06.237967 1 shared_informer.go:318] Caches are synced for egressservices_services 2025-12-13T00:11:06.237987724+00:00 stderr F I1213 00:11:06.237975 1 shared_informer.go:311] Waiting for caches to sync for egressservices_endpointslices 2025-12-13T00:11:06.237987724+00:00 stderr F I1213 00:11:06.237980 1 shared_informer.go:318] Caches are synced for egressservices_endpointslices 2025-12-13T00:11:06.238071977+00:00 stderr F I1213 00:11:06.237986 1 shared_informer.go:311] Waiting for caches to sync for egressservices_nodes 2025-12-13T00:11:06.238071977+00:00 stderr F I1213 00:11:06.237992 1 shared_informer.go:318] Caches are synced for egressservices_nodes 2025-12-13T00:11:06.238071977+00:00 stderr F I1213 00:11:06.237996 1 egressservice_cluster.go:187] Repairing Egress Services 2025-12-13T00:11:06.238082917+00:00 stderr F I1213 00:11:06.238068 1 kube.go:267] Setting labels map[] on node crc 2025-12-13T00:11:06.239025292+00:00 stderr F I1213 00:11:06.238294 1 
node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-12-13T00:11:06.239025292+00:00 stderr F I1213 00:11:06.238347 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-12-13T00:11:06.239025292+00:00 stderr F I1213 00:11:06.238370 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258046 1 status_manager.go:210] Starting StatusManager with typed managers: map[adminpolicybasedexternalroutes:0xc0004e5280 egressfirewalls:0xc0004e5640 egressqoses:0xc0004e5a00] 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258127 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258151 1 controller.go:69] Adding controller zone_tracker event handlers 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258224 1 shared_informer.go:311] Waiting for caches to sync for zone_tracker 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258233 1 shared_informer.go:318] Caches are synced for zone_tracker 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258253 1 status_manager.go:234] StatusManager got zones update: map[crc:{}] 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258268 1 controller.go:199] Controller adminpolicybasedexternalroutes_statusmanager: full reconcile 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258353 1 controller.go:199] Controller egressfirewalls_statusmanager: full reconcile 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258363 1 controller.go:199] Controller egressqoses_statusmanager: full reconcile 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258370 1 status_manager.go:234] StatusManager got zones update: map[crc:{}] 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258376 1 controller.go:199] Controller 
adminpolicybasedexternalroutes_statusmanager: full reconcile 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258380 1 controller.go:199] Controller egressfirewalls_statusmanager: full reconcile 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258383 1 controller.go:199] Controller egressqoses_statusmanager: full reconcile 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258390 1 controller.go:93] Starting controller zone_tracker with 1 workers 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258406 1 controller.go:69] Adding controller egressqoses_statusmanager event handlers 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258422 1 shared_informer.go:311] Waiting for caches to sync for egressqoses_statusmanager 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258429 1 shared_informer.go:318] Caches are synced for egressqoses_statusmanager 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258434 1 controller.go:93] Starting controller egressqoses_statusmanager with 1 workers 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258445 1 controller.go:69] Adding controller adminpolicybasedexternalroutes_statusmanager event handlers 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258458 1 shared_informer.go:311] Waiting for caches to sync for adminpolicybasedexternalroutes_statusmanager 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258463 1 shared_informer.go:318] Caches are synced for adminpolicybasedexternalroutes_statusmanager 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258468 1 controller.go:93] Starting controller adminpolicybasedexternalroutes_statusmanager with 1 workers 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258483 1 controller.go:69] Adding controller egressfirewalls_statusmanager event handlers 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258499 1 shared_informer.go:311] Waiting for caches to sync 
for egressfirewalls_statusmanager 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258503 1 shared_informer.go:318] Caches are synced for egressfirewalls_statusmanager 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258509 1 controller.go:93] Starting controller egressfirewalls_statusmanager with 1 workers 2025-12-13T00:11:06.258899232+00:00 stderr F I1213 00:11:06.258182 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 63.572µs 2025-12-13T00:12:40.456536294+00:00 stderr F I1213 00:12:40.456442 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-12-13T00:12:40.456536294+00:00 stderr F I1213 00:12:40.456488 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-12-13T00:12:40.456536294+00:00 stderr F I1213 00:12:40.456504 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-12-13T00:12:40.539078617+00:00 stderr F I1213 00:12:40.539014 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-12-13T00:12:40.539078617+00:00 stderr F I1213 00:12:40.539036 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-12-13T00:12:40.539078617+00:00 stderr F I1213 00:12:40.539044 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-12-13T00:12:58.191102679+00:00 stderr F I1213 00:12:58.190979 1 egressservice_cluster_node.go:167] Processing sync for Egress Service node crc 2025-12-13T00:12:58.191102679+00:00 stderr F I1213 00:12:58.191028 1 egressservice_cluster_node.go:170] Finished syncing Egress Service node crc: 97.492µs 2025-12-13T00:17:07.522315325+00:00 stderr F I1213 00:17:07.522200 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 8 items received 2025-12-13T00:17:44.753015974+00:00 stderr F I1213 00:17:44.752886 1 reflector.go:800] 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 7 items received 2025-12-13T00:17:58.155872532+00:00 stderr F I1213 00:17:58.155772 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch close - *v1.NetworkAttachmentDefinition total 7 items received 2025-12-13T00:18:25.038762951+00:00 stderr F I1213 00:18:25.038369 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 8 items received 2025-12-13T00:18:38.837640830+00:00 stderr F I1213 00:18:38.837537 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 8 items received 2025-12-13T00:18:54.732456061+00:00 stderr F I1213 00:18:54.732363 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-12-13T00:18:54.732456061+00:00 stderr F I1213 00:18:54.732398 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-12-13T00:18:54.732456061+00:00 stderr F I1213 00:18:54.732408 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-12-13T00:19:07.530426901+00:00 stderr F I1213 00:19:07.530352 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 20 items received 2025-12-13T00:19:10.645528979+00:00 stderr F I1213 00:19:10.643882 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-12-13T00:19:10.645528979+00:00 stderr F I1213 00:19:10.643925 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-12-13T00:19:10.645528979+00:00 stderr F I1213 00:19:10.643967 1 node_allocator.go:407] 
Allowed existing subnets [10.217.0.0/23] on node crc 2025-12-13T00:19:11.559921393+00:00 stderr F I1213 00:19:11.559798 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-12-13T00:19:11.559921393+00:00 stderr F I1213 00:19:11.559826 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-12-13T00:19:11.559921393+00:00 stderr F I1213 00:19:11.559836 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-12-13T00:19:21.937464921+00:00 stderr F I1213 00:19:21.936905 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 9 items received 2025-12-13T00:19:37.629110069+00:00 stderr F I1213 00:19:37.629043 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 9 items received 2025-12-13T00:19:49.522463643+00:00 stderr F I1213 00:19:49.522400 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 148 items received 2025-12-13T00:20:09.768661347+00:00 stderr F I1213 00:20:09.767451 1 node_allocator.go:371] Expected 1 subnets on node crc, found 1: [10.217.0.0/23] 2025-12-13T00:20:09.768661347+00:00 stderr F I1213 00:20:09.767496 1 node_allocator.go:385] Valid subnet 10.217.0.0/23 allocated on node crc 2025-12-13T00:20:09.768661347+00:00 stderr F I1213 00:20:09.767515 1 node_allocator.go:407] Allowed existing subnets [10.217.0.0/23] on node crc 2025-12-13T00:20:27.546279978+00:00 stderr F I1213 00:20:27.546161 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 423 items received 2025-12-13T00:20:36.754397042+00:00 stderr F E1213 00:20:36.754301 1 leaderelection.go:332] error retrieving resource lock openshift-ovn-kubernetes/ovn-kubernetes-master: Get 
"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:38.426241895+00:00 stderr F I1213 00:20:38.426167 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Service total 3 items received 2025-12-13T00:20:38.427113669+00:00 stderr F I1213 00:20:38.426483 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42796&timeout=8m33s&timeoutSeconds=513&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.427725596+00:00 stderr F I1213 00:20:38.427376 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Node total 5 items received 2025-12-13T00:20:38.427725596+00:00 stderr F I1213 00:20:38.427563 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42804&timeout=9m24s&timeoutSeconds=564&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.437134229+00:00 stderr F I1213 00:20:38.437049 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.EndpointSlice total 1 items received 2025-12-13T00:20:38.437526390+00:00 stderr F I1213 00:20:38.437359 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42930&timeout=8m0s&timeoutSeconds=480&watch=true": dial tcp 
38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.490362056+00:00 stderr F I1213 00:20:38.490287 1 reflector.go:800] k8s.io/client-go/informers/factory.go:159: Watch close - *v1.Pod total 3 items received 2025-12-13T00:20:38.491193069+00:00 stderr F I1213 00:20:38.490595 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42927&timeout=9m28s&timeoutSeconds=568&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.501446375+00:00 stderr F I1213 00:20:38.497909 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressFirewall total 2 items received 2025-12-13T00:20:38.501446375+00:00 stderr F I1213 00:20:38.498514 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42438&timeout=9m35s&timeoutSeconds=575&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.525878714+00:00 stderr F I1213 00:20:38.525757 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressIP total 0 items received 2025-12-13T00:20:38.526442999+00:00 stderr F I1213 00:20:38.526332 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42302&timeout=9m1s&timeoutSeconds=541&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.526442999+00:00 stderr F I1213 00:20:38.526368 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressService total 1 items received 2025-12-13T00:20:38.529369168+00:00 stderr F I1213 00:20:38.528796 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42847&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.543842279+00:00 stderr F I1213 00:20:38.543752 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.AdminPolicyBasedExternalRoute total 2 items received 2025-12-13T00:20:38.544283180+00:00 stderr F I1213 00:20:38.544169 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42867&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.546670105+00:00 stderr F I1213 00:20:38.546480 1 reflector.go:800] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: Watch 
close - *v1.NetworkAttachmentDefinition total 2 items received 2025-12-13T00:20:38.546749227+00:00 stderr F I1213 00:20:38.546723 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42688&timeout=8m4s&timeoutSeconds=484&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.550574011+00:00 stderr F I1213 00:20:38.550529 1 reflector.go:800] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: Watch close - *v1.EgressQoS total 1 items received 2025-12-13T00:20:38.560729474+00:00 stderr F I1213 00:20:38.558569 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42307&timeout=8m8s&timeoutSeconds=488&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.270370635+00:00 stderr F I1213 00:20:39.269912 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42930&timeout=6m1s&timeoutSeconds=361&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.277722712+00:00 stderr F I1213 00:20:39.277673 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get 
"https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42796&timeout=9m12s&timeoutSeconds=552&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.495010556+00:00 stderr F I1213 00:20:39.494549 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42927&timeout=8m17s&timeoutSeconds=497&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.542788086+00:00 stderr F I1213 00:20:39.542733 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42438&timeout=6m15s&timeoutSeconds=375&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.627425129+00:00 stderr F I1213 00:20:39.627351 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42307&timeout=6m56s&timeoutSeconds=416&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.695945569+00:00 stderr F I1213 00:20:39.695868 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42302&timeout=6m4s&timeoutSeconds=364&watch=true": dial 
tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.728434095+00:00 stderr F I1213 00:20:39.728362 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42867&timeout=9m16s&timeoutSeconds=556&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.766984295+00:00 stderr F I1213 00:20:39.766909 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42847&timeout=7m24s&timeoutSeconds=444&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.831463525+00:00 stderr F I1213 00:20:39.831391 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42688&timeout=9m1s&timeoutSeconds=541&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:39.893074928+00:00 stderr F I1213 00:20:39.893003 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42804&timeout=5m22s&timeoutSeconds=322&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:41.351615226+00:00 stderr F 
I1213 00:20:41.351501 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42438&timeout=6m34s&timeoutSeconds=394&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:41.634803938+00:00 stderr F I1213 00:20:41.634723 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42302&timeout=8m24s&timeoutSeconds=504&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:41.815776371+00:00 stderr F I1213 00:20:41.815681 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42867&timeout=7m38s&timeoutSeconds=458&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:41.894599278+00:00 stderr F I1213 00:20:41.894506 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42796&timeout=8m41s&timeoutSeconds=521&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.033016884+00:00 stderr F I1213 00:20:42.032871 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: 
watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42804&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.053166497+00:00 stderr F I1213 00:20:42.053060 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42927&timeout=8m46s&timeoutSeconds=526&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.131767268+00:00 stderr F I1213 00:20:42.131692 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42688&timeout=6m29s&timeoutSeconds=389&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.216364961+00:00 stderr F I1213 00:20:42.216296 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42930&timeout=9m16s&timeoutSeconds=556&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.472383920+00:00 stderr F I1213 00:20:42.472285 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get 
"https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42307&timeout=5m22s&timeoutSeconds=322&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:42.553604882+00:00 stderr F I1213 00:20:42.553528 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42847&timeout=6m58s&timeoutSeconds=418&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:45.018735994+00:00 stderr F I1213 00:20:45.018634 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42438&timeout=9m39s&timeoutSeconds=579&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:45.649443533+00:00 stderr F I1213 00:20:45.649369 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressips?allowWatchBookmarks=true&resourceVersion=42302&timeout=8m49s&timeoutSeconds=529&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:45.715352681+00:00 stderr F I1213 00:20:45.715246 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod returned Get "https://api-int.crc.testing:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=42927&timeout=6m42s&timeoutSeconds=402&watch=true": dial tcp 38.102.83.51:6443: connect: 
connection refused - backing off 2025-12-13T00:20:46.530756984+00:00 stderr F I1213 00:20:46.530662 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressservices?allowWatchBookmarks=true&resourceVersion=42847&timeout=7m24s&timeoutSeconds=444&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:46.671362079+00:00 stderr F I1213 00:20:46.671296 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes?allowWatchBookmarks=true&resourceVersion=42867&timeout=8m58s&timeoutSeconds=538&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:47.586631997+00:00 stderr F I1213 00:20:47.586542 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice returned Get "https://api-int.crc.testing:6443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=kubernetes.io%2Fservice-name%2Ckubernetes.io%2Fservice-name%21%3D%2C%21service.kubernetes.io%2Fheadless&resourceVersion=42930&timeout=6m42s&timeoutSeconds=402&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:47.715453083+00:00 stderr F I1213 00:20:47.715363 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Service returned Get "https://api-int.crc.testing:6443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=42796&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 
2025-12-13T00:20:48.268153398+00:00 stderr F I1213 00:20:48.268061 1 reflector.go:425] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition returned Get "https://api-int.crc.testing:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=42688&timeout=8m59s&timeoutSeconds=539&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:48.387797776+00:00 stderr F I1213 00:20:48.387723 1 reflector.go:425] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node returned Get "https://api-int.crc.testing:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=42804&timeout=8m0s&timeoutSeconds=480&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:48.566218871+00:00 stderr F I1213 00:20:48.565756 1 reflector.go:425] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressQoS returned Get "https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressqoses?allowWatchBookmarks=true&resourceVersion=42307&timeout=7m46s&timeoutSeconds=466&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:54.813377149+00:00 stderr F I1213 00:20:54.813266 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice closed with: too old resource version: 42930 (42935) 2025-12-13T00:20:54.825521287+00:00 stderr F I1213 00:20:54.825137 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140: watch of *v1.AdminPolicyBasedExternalRoute closed with: too old resource version: 42867 (42940) 2025-12-13T00:20:56.476618472+00:00 stderr F I1213 00:20:56.476345 1 reflector.go:449] 
k8s.io/client-go/informers/factory.go:159: watch of *v1.Service closed with: too old resource version: 42796 (42935) 2025-12-13T00:20:56.887815118+00:00 stderr F I1213 00:20:56.887746 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Pod closed with: too old resource version: 42927 (42935) 2025-12-13T00:20:57.752186932+00:00 stderr F I1213 00:20:57.752096 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressIP closed with: too old resource version: 42302 (42940) 2025-12-13T00:20:58.443905898+00:00 stderr F I1213 00:20:58.443109 1 with_retry.go:234] Got a Retry-After 5s response for attempt 1 to https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?allowWatchBookmarks=true&resourceVersion=42438&timeout=8m15s&timeoutSeconds=495&watch=true 2025-12-13T00:20:58.446710694+00:00 stderr F I1213 00:20:58.446477 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressFirewall closed with: too old resource version: 42438 (42967) 2025-12-13T00:20:59.301102749+00:00 stderr F I1213 00:20:59.300777 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140: watch of *v1.EgressService closed with: too old resource version: 42847 (42967) 2025-12-13T00:21:00.312182243+00:00 stderr F I1213 00:21:00.312120 1 reflector.go:449] github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117: watch of *v1.NetworkAttachmentDefinition closed with: too old resource version: 42688 (42940) 2025-12-13T00:21:00.312793019+00:00 stderr F I1213 00:21:00.312738 1 reflector.go:449] github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140: watch of 
*v1.EgressQoS closed with: too old resource version: 42307 (42940) 2025-12-13T00:21:00.772391671+00:00 stderr F I1213 00:21:00.772324 1 reflector.go:449] k8s.io/client-go/informers/factory.go:159: watch of *v1.Node closed with: too old resource version: 42804 (42935) 2025-12-13T00:21:09.835265900+00:00 stderr F I1213 00:21:09.835202 1 reflector.go:325] Listing and watching *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:09.841966722+00:00 stderr F I1213 00:21:09.841726 1 reflector.go:351] Caches populated for *v1.AdminPolicyBasedExternalRoute from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:11.517909356+00:00 stderr F I1213 00:21:11.517839 1 reflector.go:325] Listing and watching *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:11.519405927+00:00 stderr F I1213 00:21:11.519351 1 reflector.go:351] Caches populated for *v1.EgressFirewall from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:11.694557653+00:00 stderr F I1213 00:21:11.694483 1 reflector.go:325] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:11.711208702+00:00 stderr F I1213 00:21:11.711106 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:12.352139138+00:00 stderr F I1213 00:21:12.351414 1 reflector.go:325] Listing and watching *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:12.353063302+00:00 stderr F I1213 00:21:12.352998 1 reflector.go:351] Caches 
populated for *v1.EgressIP from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:12.658383381+00:00 stderr F I1213 00:21:12.658263 1 reflector.go:325] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:12.661808273+00:00 stderr F I1213 00:21:12.661757 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:14.622478321+00:00 stderr F I1213 00:21:14.621702 1 reflector.go:325] Listing and watching *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:14.641639219+00:00 stderr F I1213 00:21:14.640418 1 reflector.go:351] Caches populated for *v1.EgressQoS from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:17.356135539+00:00 stderr F I1213 00:21:17.356066 1 reflector.go:325] Listing and watching *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-12-13T00:21:17.357737622+00:00 stderr F I1213 00:21:17.357699 1 reflector.go:351] Caches populated for *v1.NetworkAttachmentDefinition from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117 2025-12-13T00:21:18.444025035+00:00 stderr F I1213 00:21:18.443887 1 reflector.go:325] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:18.447360175+00:00 stderr F I1213 00:21:18.447289 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:21.321618087+00:00 stderr F I1213 00:21:21.321554 1 reflector.go:325] Listing and watching *v1.Node from 
k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:21.328127302+00:00 stderr F I1213 00:21:21.328101 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:21.768581168+00:00 stderr F I1213 00:21:21.768522 1 reflector.go:325] Listing and watching *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140 2025-12-13T00:21:21.769720899+00:00 stderr F I1213 00:21:21.769679 1 reflector.go:351] Caches populated for *v1.EgressService from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/0.log

2025-08-13T19:50:44.187310475+00:00 stdout F 2025-08-13T19:50:44+00:00 INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy 2025-08-13T19:50:45.643448822+00:00 stderr F W0813 19:50:45.641749 1 deprecated.go:66] 2025-08-13T19:50:45.643448822+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:50:45.643448822+00:00 stderr F 2025-08-13T19:50:45.643448822+00:00
stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:50:45.643448822+00:00 stderr F 2025-08-13T19:50:45.643448822+00:00 stderr F =============================================== 2025-08-13T19:50:45.643448822+00:00 stderr F 2025-08-13T19:50:45.669538817+00:00 stderr F I0813 19:50:45.668563 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:50:45.669538817+00:00 stderr F I0813 19:50:45.668714 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:50:45.683583669+00:00 stderr F I0813 19:50:45.683419 1 kube-rbac-proxy.go:395] Starting TCP socket on :9108 2025-08-13T19:50:45.688697335+00:00 stderr F I0813 19:50:45.688343 1 kube-rbac-proxy.go:402] Listening securely on :9108 2025-08-13T20:42:46.651250588+00:00 stderr F I0813 20:42:46.650986 1 kube-rbac-proxy.go:493] received interrupt, shutting down

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy/1.log

2025-12-13T00:11:03.629546646+00:00 stdout F 2025-12-13T00:11:03+00:00 INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy 2025-12-13T00:11:03.669990564+00:00 stderr F W1213 00:11:03.668785 1 deprecated.go:66] 2025-12-13T00:11:03.669990564+00:00 stderr F ==== Removed Flag Warning ====================== 2025-12-13T00:11:03.669990564+00:00 stderr F 2025-12-13T00:11:03.669990564+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-12-13T00:11:03.669990564+00:00 stderr F 2025-12-13T00:11:03.669990564+00:00 stderr F =============================================== 2025-12-13T00:11:03.669990564+00:00 stderr F 2025-12-13T00:11:03.669990564+00:00 stderr F I1213 00:11:03.669429 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:11:03.669990564+00:00 stderr F I1213 00:11:03.669465 1 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:11:03.669990564+00:00 stderr F I1213 00:11:03.669957 1 kube-rbac-proxy.go:395] Starting TCP socket on :9108 2025-12-13T00:11:03.670296643+00:00 stderr F I1213 00:11:03.670265 1 kube-rbac-proxy.go:402] Listening securely on :9108

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/1.log

2025-12-13T00:13:16.312081284+00:00 stderr F W1213 00:13:16.311901 1 deprecated.go:66] 2025-12-13T00:13:16.312081284+00:00 stderr F ==== Removed Flag Warning ====================== 2025-12-13T00:13:16.312081284+00:00 stderr F 2025-12-13T00:13:16.312081284+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-12-13T00:13:16.312081284+00:00 stderr F 2025-12-13T00:13:16.312081284+00:00 stderr F =============================================== 2025-12-13T00:13:16.312081284+00:00 stderr F 2025-12-13T00:13:16.312081284+00:00 stderr F I1213 00:13:16.312039 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-12-13T00:13:16.312801488+00:00 stderr F I1213 00:13:16.312774 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:13:16.312832469+00:00 stderr F I1213 00:13:16.312813 1 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:13:16.318847891+00:00 stderr F I1213 00:13:16.313411 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2025-12-13T00:13:16.318847891+00:00 stderr F I1213 00:13:16.313872 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy/0.log

2025-08-13T19:59:12.246877885+00:00 stderr F W0813 19:59:12.142202 1 deprecated.go:66] 2025-08-13T19:59:12.246877885+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:12.246877885+00:00 stderr F 2025-08-13T19:59:12.246877885+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:59:12.246877885+00:00 stderr F 2025-08-13T19:59:12.246877885+00:00 stderr F =============================================== 2025-08-13T19:59:12.246877885+00:00 stderr F 2025-08-13T19:59:12.246877885+00:00 stderr F I0813 19:59:12.224140 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-08-13T19:59:12.550927052+00:00 stderr F I0813 19:59:12.536614 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:12.551285632+00:00 stderr F I0813 19:59:12.551190 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:12.558915680+00:00 stderr F I0813 19:59:12.558720 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2025-08-13T19:59:12.724634104+00:00 stderr F I0813 19:59:12.644589 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001 2025-08-13T20:42:43.553128358+00:00 stderr F I0813 20:42:43.552988 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000755000175000017500000000000015117130654033002 5ustar zuulzuul././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-conf0000644000175000017500000011012715117130646033007 0ustar zuulzuul2025-12-13T00:13:14.540488294+00:00 stderr F I1213 00:13:14.540323 1 start.go:52] Version: 4.16.0 (Raw: 
v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty, Hash: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-12-13T00:13:14.541097305+00:00 stderr F I1213 00:13:14.541016 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:13:14.541097305+00:00 stderr F I1213 00:13:14.541026 1 metrics.go:100] Registering Prometheus metrics 2025-12-13T00:13:14.541118766+00:00 stderr F I1213 00:13:14.541102 1 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-12-13T00:13:14.560377003+00:00 stderr F I1213 00:13:14.559185 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-config-operator/machine-config... 2025-12-13T00:18:51.425189224+00:00 stderr F I1213 00:18:51.424534 1 leaderelection.go:260] successfully acquired lease openshift-machine-config-operator/machine-config 2025-12-13T00:18:51.488435328+00:00 stderr F I1213 00:18:51.488333 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:18:51.596094207+00:00 stderr F I1213 00:18:51.594295 1 start.go:127] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig 
InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-12-13T00:18:51.615522493+00:00 stderr F I1213 00:18:51.615208 1 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", 
"ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-12-13T00:18:52.023360808+00:00 stderr F I1213 00:18:52.023254 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-12-13T00:18:52.113774801+00:00 stderr F I1213 00:18:52.113699 1 operator.go:376] Starting MachineConfigOperator 2025-12-13T00:19:13.714198656+00:00 stderr F E1213 00:19:13.713846 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:13.729153308+00:00 stderr F I1213 00:19:13.729041 1 event.go:364] Event(v1.ObjectReference{Kind:"", Namespace:"openshift-machine-config-operator", Name:"machine-config", UID:"7f2912cb-6bc0-4410-a0a8-5ee7300fa84b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'OperatorDegraded: RequiredPoolsFailed' Failed to resync 4.16.0 
because: error required MachineConfigPool master is paused and cannot sync until it is unpaused 2025-12-13T00:19:21.039672605+00:00 stderr F E1213 00:19:21.039524 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:22.052070891+00:00 stderr F E1213 00:19:22.050627 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:23.048639752+00:00 stderr F E1213 00:19:23.047638 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:24.048054752+00:00 stderr F E1213 00:19:24.046903 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:25.049043985+00:00 stderr F E1213 00:19:25.047695 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:26.052964459+00:00 stderr F E1213 00:19:26.052895 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:27.050399303+00:00 stderr F E1213 00:19:27.050299 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. 
Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:28.049615326+00:00 stderr F E1213 00:19:28.048681 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:29.050411474+00:00 stderr F E1213 00:19:29.048638 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:30.047719635+00:00 stderr F E1213 00:19:30.046510 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:31.048158443+00:00 stderr F E1213 00:19:31.046996 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:32.047256382+00:00 stderr F E1213 00:19:32.046558 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:33.048227364+00:00 stderr F E1213 00:19:33.047880 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:34.048869226+00:00 stderr F E1213 00:19:34.047547 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:35.048417398+00:00 stderr F E1213 00:19:35.045949 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:36.047679171+00:00 stderr F E1213 00:19:36.047602 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:37.048990212+00:00 stderr F E1213 00:19:37.048081 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:38.048149843+00:00 stderr F E1213 00:19:38.047118 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:39.051133060+00:00 stderr F E1213 00:19:39.051080 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:40.048263475+00:00 stderr F E1213 00:19:40.046798 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:41.047947602+00:00 stderr F E1213 00:19:41.046882 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:42.049322035+00:00 stderr F E1213 00:19:42.049239 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:43.049987297+00:00 stderr F E1213 00:19:43.049671 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:44.049143509+00:00 stderr F E1213 00:19:44.048414 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:45.050996884+00:00 stderr F E1213 00:19:45.047205 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:46.054680850+00:00 stderr F E1213 00:19:46.054378 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:47.050089979+00:00 stderr F E1213 00:19:47.049696 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:48.051007148+00:00 stderr F E1213 00:19:48.049683 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:49.047094495+00:00 stderr F E1213 00:19:49.047044 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:50.048323693+00:00 stderr F E1213 00:19:50.048254 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:51.048587205+00:00 stderr F E1213 00:19:51.047694 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:52.048019174+00:00 stderr F E1213 00:19:52.045444 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:53.051341840+00:00 stderr F E1213 00:19:53.050425 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:54.046725637+00:00 stderr F E1213 00:19:54.045090 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:55.048077979+00:00 stderr F E1213 00:19:55.047762 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:56.047314403+00:00 stderr F E1213 00:19:56.047239 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:57.046876774+00:00 stderr F E1213 00:19:57.045971 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:58.052519977+00:00 stderr F E1213 00:19:58.048958 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:19:59.045707665+00:00 stderr F E1213 00:19:59.045622 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:00.055254563+00:00 stderr F E1213 00:20:00.053874 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:01.046254449+00:00 stderr F E1213 00:20:01.046147 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:02.050234843+00:00 stderr F E1213 00:20:02.049729 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:03.057858628+00:00 stderr F E1213 00:20:03.057264 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:04.048078014+00:00 stderr F E1213 00:20:04.047591 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:05.049280251+00:00 stderr F E1213 00:20:05.049208 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:06.047427935+00:00 stderr F E1213 00:20:06.047305 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:07.050989077+00:00 stderr F E1213 00:20:07.050646 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:08.058414757+00:00 stderr F E1213 00:20:08.058025 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:09.048299092+00:00 stderr F E1213 00:20:09.048224 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:10.050114398+00:00 stderr F E1213 00:20:10.049440 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:11.046385830+00:00 stderr F E1213 00:20:11.045963 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:12.047802683+00:00 stderr F E1213 00:20:12.047740 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:13.047923301+00:00 stderr F E1213 00:20:13.046006 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:14.047451742+00:00 stderr F E1213 00:20:14.047391 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:15.050577293+00:00 stderr F E1213 00:20:15.049677 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:16.052042899+00:00 stderr F E1213 00:20:16.049881 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:17.046506651+00:00 stderr F E1213 00:20:17.046289 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:18.048809668+00:00 stderr F E1213 00:20:18.047409 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:19.045645776+00:00 stderr F E1213 00:20:19.045566 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:20.051952934+00:00 stderr F E1213 00:20:20.049271 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:21.052501005+00:00 stderr F E1213 00:20:21.052422 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:22.047899982+00:00 stderr F E1213 00:20:22.047830 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:23.047241149+00:00 stderr F E1213 00:20:23.046725 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:24.051506760+00:00 stderr F E1213 00:20:24.050407 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:25.047049992+00:00 stderr F E1213 00:20:25.046193 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:26.049400181+00:00 stderr F E1213 00:20:26.049320 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:27.047720970+00:00 stderr F E1213 00:20:27.047175 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:28.054472321+00:00 stderr F E1213 00:20:28.054407 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:29.050871876+00:00 stderr F E1213 00:20:29.047511 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:30.048070933+00:00 stderr F E1213 00:20:30.047527 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:31.050056232+00:00 stderr F E1213 00:20:31.049805 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:32.053792010+00:00 stderr F E1213 00:20:32.053685 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:33.048688942+00:00 stderr F E1213 00:20:33.048625 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:34.059779065+00:00 stderr F E1213 00:20:34.059712 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:35.047034378+00:00 stderr F E1213 00:20:35.046903 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:36.049127880+00:00 stderr F E1213 00:20:36.046849 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:20:51.443761331+00:00 stderr F E1213 00:20:51.443324 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:20.059577210+00:00 stderr F E1213 00:21:20.059031 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:21.045469554+00:00 stderr F E1213 00:21:21.045391 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:22.045338527+00:00 stderr F E1213 00:21:22.045036 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:23.048254590+00:00 stderr F E1213 00:21:23.047902 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:24.049229011+00:00 stderr F E1213 00:21:24.049138 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:25.047378316+00:00 stderr F E1213 00:21:25.046923 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:26.045891920+00:00 stderr F E1213 00:21:26.045761 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:27.047988722+00:00 stderr F E1213 00:21:27.047899 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:28.061670466+00:00 stderr F E1213 00:21:28.054432 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:29.046840910+00:00 stderr F E1213 00:21:29.046762 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:30.048728496+00:00 stderr F E1213 00:21:30.048662 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:31.053907340+00:00 stderr F E1213 00:21:31.053188 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:32.049135446+00:00 stderr F E1213 00:21:32.049054 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:33.050131477+00:00 stderr F E1213 00:21:33.048613 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:34.049819943+00:00 stderr F E1213 00:21:34.049713 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:35.049237492+00:00 stderr F E1213 00:21:35.049098 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:36.048244030+00:00 stderr F E1213 00:21:36.048156 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:37.053304968+00:00 stderr F E1213 00:21:37.053226 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:38.046739581+00:00 stderr F E1213 00:21:38.046662 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:39.047383122+00:00 stderr F E1213 00:21:39.047312 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:40.048490974+00:00 stderr F E1213 00:21:40.048175 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:41.050244933+00:00 stderr F E1213 00:21:41.049950 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:42.047999702+00:00 stderr F E1213 00:21:42.047884 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:43.052664193+00:00 stderr F E1213 00:21:43.052300 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:44.048539546+00:00 stderr F E1213 00:21:44.047982 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:45.048528830+00:00 stderr F E1213 00:21:45.048426 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:46.059715441+00:00 stderr F E1213 00:21:46.059644 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:47.052118769+00:00 stderr F E1213 00:21:47.051750 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:48.049789957+00:00 stderr F E1213 00:21:48.049684 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:48.723917244+00:00 stderr F I1213 00:21:48.723870 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-12-13T00:21:49.047561887+00:00 stderr F E1213 00:21:49.047437 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:50.046956228+00:00 stderr F E1213 00:21:50.046856 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:51.048193483+00:00 stderr F E1213 00:21:51.048129 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:52.050052683+00:00 stderr F E1213 00:21:52.049974 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:53.050109179+00:00 stderr F E1213 00:21:53.049548 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. 
Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:54.046676689+00:00 stderr F E1213 00:21:54.046595 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:55.047565236+00:00 stderr F E1213 00:21:55.047478 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:56.048043793+00:00 stderr F E1213 00:21:56.047929 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 0, ready 0, updated: 0, unavailable: 0)" 2025-12-13T00:21:57.046361826+00:00 stderr F E1213 00:21:57.046281 1 sync.go:1508] Error syncing Required MachineConfigPools: "error MachineConfigPool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)"

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator/0.log

2025-08-13T19:59:03.793003276+00:00 stderr F I0813 19:59:03.792203 1 start.go:52] Version: 4.16.0 (Raw: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty, Hash: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:59:03.810241137+00:00 stderr F I0813 19:59:03.806593 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s.
Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:03.811597626+00:00 stderr F I0813 19:59:03.811551 1 metrics.go:100] Registering Prometheus metrics 2025-08-13T19:59:03.811901004+00:00 stderr F I0813 19:59:03.811879 1 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-08-13T19:59:05.606955533+00:00 stderr F I0813 19:59:05.602330 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-config-operator/machine-config... 2025-08-13T19:59:06.352317300+00:00 stderr F I0813 19:59:06.351645 1 leaderelection.go:260] successfully acquired lease openshift-machine-config-operator/machine-config 2025-08-13T19:59:06.624953102+00:00 stderr F I0813 19:59:06.621000 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:07.641337833+00:00 stderr F I0813 19:59:07.604244 1 start.go:127] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig 
NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:07.641337833+00:00 stderr F I0813 19:59:07.605338 1 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", 
"InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:17.879134960+00:00 stderr F I0813 19:59:17.868467 1 trace.go:236] Trace[2088585093]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:07.208) (total time: 10647ms): 2025-08-13T19:59:17.879134960+00:00 stderr F Trace[2088585093]: ---"Objects listed" error: 10625ms (19:59:17.833) 2025-08-13T19:59:17.879134960+00:00 stderr F Trace[2088585093]: [10.647548997s] [10.647548997s] END 2025-08-13T19:59:17.879134960+00:00 stderr F I0813 19:59:17.878865 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-08-13T19:59:18.050386191+00:00 stderr F I0813 19:59:18.044491 1 operator.go:376] Starting MachineConfigOperator 2025-08-13T20:02:26.192144382+00:00 stderr F I0813 20:02:26.175662 1 sync.go:419] Detecting changes in merged-trusted-image-registry-ca, creating patch 2025-08-13T20:02:26.192144382+00:00 stderr F I0813 20:02:26.191605 1 sync.go:424] JSONPATCH: 2025-08-13T20:02:26.192144382+00:00 stderr F {"data":{"image-registry.openshift-image-registry.svc..5000":"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n","image-registry.openshift-image-registry.svc.cluster.local..5000":"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"namespace":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:03:15.119052825+00:00 stderr F E0813 20:03:15.118870 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.125557835+00:00 stderr F E0813 20:04:15.124918 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.128905567+00:00 stderr F E0813 20:05:15.128095 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.763145649+00:00 stderr F E0813 20:05:15.763084 1 operator.go:448] error syncing progressing status: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:24.640926653+00:00 stderr F I0813 20:06:24.640737 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-08-13T20:09:39.828705851+00:00 stderr F I0813 20:09:39.828618 1 operator.go:396] Change observed to kube-apiserver-server-ca 2025-08-13T20:42:28.362519840+00:00 stderr F E0813 20:42:28.360448 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:29.356362823+00:00 stderr F E0813 20:42:29.352075 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:30.358835494+00:00 stderr F E0813 20:42:30.355618 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:31.358397922+00:00 stderr F E0813 20:42:31.356276 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab 
rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.362554222+00:00 stderr F E0813 20:42:32.361311 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:33.358700581+00:00 stderr F E0813 20:42:33.358518 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:34.357667441+00:00 stderr F E0813 20:42:34.355686 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:35.354656805+00:00 stderr F E0813 20:42:35.353969 1 sync.go:1528] Failed to stamp bootimages configmap: failed to grab rendered MC rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, error: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:44.634028771+00:00 stderr F I0813 20:42:44.633156 1 helpers.go:93] Shutting down due to: terminated 2025-08-13T20:42:44.634028771+00:00 stderr F I0813 20:42:44.633964 1 helpers.go:96] Context cancelled 2025-08-13T20:42:44.637659186+00:00 stderr F I0813 20:42:44.637441 1 operator.go:386] Shutting down MachineConfigOperator 2025-08-13T20:42:44.639461528+00:00 stderr F E0813 20:42:44.638840 1 leaderelection.go:308] Failed to release lock: Put 
"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:44.641299241+00:00 stderr F I0813 20:42:44.641137 1 start.go:150] Stopped leading. Terminating.

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/2.log

2025-12-13T00:10:44.892209195+00:00 stderr F W1213 00:10:44.892050 1 deprecated.go:66] 2025-12-13T00:10:44.892209195+00:00 stderr F ==== Removed Flag Warning ====================== 2025-12-13T00:10:44.892209195+00:00 stderr F 2025-12-13T00:10:44.892209195+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-12-13T00:10:44.892209195+00:00 stderr F 2025-12-13T00:10:44.892209195+00:00 stderr F =============================================== 2025-12-13T00:10:44.892209195+00:00 stderr F 2025-12-13T00:10:44.892209195+00:00 stderr F I1213 00:10:44.892168 1 kube-rbac-proxy.go:530] Reading config file: /etc/kubernetes/crio-metrics-proxy.cfg 2025-12-13T00:10:44.894711188+00:00 stderr F I1213 00:10:44.894680 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:10:44.895004476+00:00 stderr F I1213 00:10:44.894917 1 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:10:44.895245163+00:00 stderr F I1213 00:10:44.895197 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca::/etc/kubernetes/kubelet-ca.crt" 2025-12-13T00:10:44.895405888+00:00 stderr F I1213 00:10:44.895380 1 kube-rbac-proxy.go:395] Starting TCP socket on :9637 2025-12-13T00:10:44.895968844+00:00 stderr F I1213 00:10:44.895912 1 kube-rbac-proxy.go:402] Listening securely on :9637 2025-12-13T00:18:54.703042880+00:00 stderr F I1213 00:18:54.702969 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-12-13T00:19:10.600210479+00:00 stderr F I1213 00:19:10.600132 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-12-13T00:19:11.544286671+00:00 stderr F I1213 00:19:11.544187 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt"

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/1.log

2025-12-13T00:06:37.333456077+00:00 stderr F W1213 00:06:37.333149 1 deprecated.go:66] 2025-12-13T00:06:37.333456077+00:00 stderr F ==== Removed Flag Warning ====================== 2025-12-13T00:06:37.333456077+00:00 stderr F 2025-12-13T00:06:37.333456077+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-12-13T00:06:37.333456077+00:00 stderr F 2025-12-13T00:06:37.333456077+00:00 stderr F =============================================== 2025-12-13T00:06:37.333456077+00:00 stderr F 2025-12-13T00:06:37.333456077+00:00 stderr F I1213 00:06:37.333305 1 kube-rbac-proxy.go:530] Reading config file: /etc/kubernetes/crio-metrics-proxy.cfg 2025-12-13T00:06:37.337406867+00:00 stderr F I1213 00:06:37.336437 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:06:37.337406867+00:00 stderr F I1213 00:06:37.336902 1 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:06:37.337406867+00:00 stderr F I1213 00:06:37.336977 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca::/etc/kubernetes/kubelet-ca.crt" 2025-12-13T00:06:37.338190540+00:00 stderr F I1213 00:06:37.337434 1 kube-rbac-proxy.go:395] Starting TCP socket on :9637 2025-12-13T00:06:37.338190540+00:00 stderr F I1213 00:06:37.338012 1 kube-rbac-proxy.go:402] Listening securely on :9637 2025-12-13T00:09:51.173912410+00:00 stderr F I1213 00:09:51.171376 1 kube-rbac-proxy.go:493] received interrupt, shutting down

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio/0.log

2025-08-13T19:44:01.126546932+00:00 stderr F W0813 19:44:01.123942 1 deprecated.go:66] 2025-08-13T19:44:01.126546932+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:44:01.126546932+00:00 stderr F 2025-08-13T19:44:01.126546932+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:44:01.126546932+00:00 stderr F 2025-08-13T19:44:01.126546932+00:00 stderr F =============================================== 2025-08-13T19:44:01.126546932+00:00 stderr F 2025-08-13T19:44:01.126546932+00:00 stderr F I0813 19:44:01.124657 1 kube-rbac-proxy.go:530] Reading config file: /etc/kubernetes/crio-metrics-proxy.cfg 2025-08-13T19:44:01.175293808+00:00 stderr F I0813 19:44:01.175109 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:44:01.179226024+00:00 stderr F I0813 19:44:01.176759 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:44:01.182533911+00:00 stderr F I0813 19:44:01.181865 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca::/etc/kubernetes/kubelet-ca.crt" 2025-08-13T19:44:01.182613100+00:00 stderr F I0813 19:44:01.182582 1 kube-rbac-proxy.go:395] Starting TCP socket on :9637 2025-08-13T19:44:01.184817475+00:00 stderr F I0813 19:44:01.184752 1 kube-rbac-proxy.go:402] Listening securely on :9637 2025-08-13T19:51:58.235350662+00:00 stderr F I0813 19:51:58.235115 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T19:55:11.101868577+00:00
stderr F I0813 19:55:11.101578 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T19:59:31.671386093+00:00 stderr F I0813 19:59:31.671100 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T19:59:36.085180598+00:00 stderr F I0813 19:59:36.068810 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T19:59:41.058153354+00:00 stderr F I0813 19:59:41.058038 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:06:06.964137311+00:00 stderr F I0813 20:06:06.954550 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:06:14.717482026+00:00 stderr F I0813 20:06:14.717284 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:06:47.017157131+00:00 stderr F I0813 20:06:47.016195 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:06:50.034528031+00:00 stderr F I0813 20:06:50.034310 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" 
file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:06:51.358262373+00:00 stderr F I0813 20:06:51.347263 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:42:13.287019643+00:00 stderr F I0813 20:42:13.282260 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:42:13.985706076+00:00 stderr F I0813 20:42:13.985582 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:42:16.592484969+00:00 stderr F I0813 20:42:16.592362 1 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="can't remove non-existent watcher: /etc/kubernetes/kubelet-ca.crt" 2025-08-13T20:42:47.532180885+00:00 stderr F I0813 20:42:47.532038 1 kube-rbac-proxy.go:493] received interrupt, shutting down

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/0.log

2025-08-13T19:43:57.536596541+00:00 stdout P Waiting for kubelet key and certificate to be available

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/2.log

2025-12-13T00:10:44.377136531+00:00 stdout P Waiting for kubelet key and certificate to be available

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup/1.log

2025-12-13T00:06:36.216914756+00:00 stdout P Waiting for kubelet key and certificate to be available

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/0.log

2025-08-13T19:50:44.179030078+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:50:44.237874850+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:50:44.289116574+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:50:44.432193094+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:50:44.488575685+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:50:44.596138679+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:51:44.793865665+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:51:44.800948797+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:51:44.807599916+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:51:44.816644794+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:51:44.820681709+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:51:44.824596800+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:52:44.840933394+00:00
stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:52:44.848563401+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:52:44.857305630+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:52:44.866752939+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:52:44.873025757+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:52:44.877576937+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:53:44.895479215+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:53:44.904981736+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:53:44.916047412+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:53:44.927692085+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:53:44.931149313+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:53:44.935619241+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:54:44.950291420+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:54:44.959097171+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:54:44.972143633+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:54:44.985855865+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:54:44.989359554+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:54:44.993524073+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:55:45.010639720+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:55:45.019616857+00:00 stdout F image-registry.openshift-image-registry.svc..5000 
2025-08-13T19:55:45.025727891+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:55:45.034734568+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:55:45.038501826+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:55:45.042243723+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:56:45.054496589+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:56:45.061991153+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:56:45.072677698+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:56:45.095294644+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:56:45.102084498+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:56:45.109215231+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:57:45.130749729+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:57:45.137631605+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:57:45.144302516+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:57:45.154681432+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:57:45.158736538+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:57:45.163941517+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:58:45.177593190+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:58:45.184123356+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:58:45.190436746+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:58:45.202169050+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:58:45.205926076+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:58:45.213955125+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T19:59:45.467363579+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:59:45.990733368+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T19:59:46.348749022+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T19:59:46.413734585+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T19:59:46.497041809+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T19:59:46.855909219+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:00:47.400675889+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:00:47.671347326+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:00:47.768266490+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:00:47.917191257+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:00:47.985897916+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:00:48.049021466+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:01:48.345872873+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:01:48.925360996+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:01:48.933652583+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:01:48.944651026+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:01:48.949217056+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 
2025-08-13T20:01:48.953745906+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:02:49.892068063+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:02:49.995503284+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:02:50.002994117+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:02:50.014342111+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:02:50.018820879+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:02:50.024269954+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:03:50.089607897+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:03:50.099923071+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:03:50.109966648+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:03:50.119533921+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:03:50.127032885+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:03:50.130664498+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:04:50.144878272+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:04:50.205034795+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:04:50.216842483+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:04:50.230380341+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:04:50.234838258+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:04:50.242270821+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:05:50.258590982+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:05:50.269693380+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:05:50.281888759+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:05:50.300749569+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:05:50.305997930+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:05:50.310677554+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:06:50.341001987+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:06:50.367363093+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:06:50.379169482+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:06:50.392109473+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:06:50.395078188+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:06:50.398663201+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:07:50.415535003+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:07:50.426037825+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:07:50.440053076+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:07:50.457313681+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:07:50.476738488+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:07:50.481346290+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:08:50.505933265+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:08:50.511639988+00:00 stdout F image-registry.openshift-image-registry.svc..5000 
2025-08-13T20:08:50.519055261+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:08:50.532878397+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:08:50.536563493+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:08:50.542156553+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:09:50.561316124+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:09:50.588121533+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:09:50.647838885+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:09:50.702960175+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:09:50.714519997+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:09:50.719568922+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:10:50.739488450+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:10:50.747188040+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:10:50.754209522+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:10:50.767261916+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:10:50.772190487+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:10:50.776316666+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:11:50.794322108+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:11:50.803628205+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:11:50.813347114+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:11:50.825701918+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:11:50.830052603+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:11:50.834189241+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:12:50.848728773+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:12:50.854349454+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:12:50.863721563+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:12:50.870427995+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:12:50.874595945+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:12:50.878092745+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:13:50.889843871+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:13:50.897546822+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:13:50.903580425+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:13:50.913621003+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:13:50.917484983+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:13:50.922079615+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:14:50.934660344+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:14:50.946599046+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:14:50.955119770+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:14:50.965687243+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:14:50.970257264+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 
2025-08-13T20:14:50.974985180+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:15:50.990328799+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:15:50.998571764+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:15:51.007188290+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:15:51.021448067+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:15:51.026611565+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:15:51.031947957+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:16:51.046189254+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:16:51.054469721+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:16:51.064193108+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:16:51.075767989+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:16:51.079844505+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:16:51.083611393+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:17:51.095301291+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:17:51.106125390+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:17:51.122001684+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:17:51.132767541+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:17:51.140506572+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:17:51.148720447+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:18:51.164139785+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:18:51.171831135+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:18:51.184226349+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:18:51.204513368+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:18:51.215682157+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:18:51.223433189+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:19:51.242413375+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:19:51.255073497+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:19:51.264893077+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:19:51.276414546+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:19:51.280402130+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:19:51.287049410+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:20:51.299959742+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:20:51.305878951+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:20:51.312962754+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:20:51.323135154+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:20:51.326248163+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:20:51.330379541+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:21:51.343764978+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:21:51.352820006+00:00 stdout F image-registry.openshift-image-registry.svc..5000 
2025-08-13T20:21:51.361837164+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:21:51.372254532+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:21:51.376860923+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:21:51.381518287+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:22:51.398739807+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:22:51.406085316+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:22:51.412600773+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:22:51.421683252+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:22:51.425453870+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:22:51.429822705+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:23:51.445523366+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:23:51.453693890+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:23:51.460924166+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:23:51.471057946+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:23:51.478256292+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:23:51.481915147+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:24:51.495866769+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:24:51.503283901+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:24:51.512755712+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:24:51.527875515+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:24:51.532710953+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:24:51.536374068+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:25:51.549034970+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:25:51.557078810+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:25:51.565216283+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:25:51.575481146+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:25:51.580249903+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:25:51.584875085+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:26:51.598649328+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:26:51.605657798+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:26:51.612983138+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:26:51.622559972+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:26:51.627584345+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:26:51.631536758+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:27:51.647962447+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:27:51.658085056+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:27:51.665942891+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:27:51.677450850+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:27:51.681587858+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 
2025-08-13T20:27:51.685494920+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:28:51.706042488+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:28:51.720733561+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:28:51.729951246+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:28:51.755563072+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:28:51.768132063+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:28:51.772989693+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:29:51.793283737+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:29:51.806941069+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:29:51.816148324+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:29:51.830997841+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:29:51.837600431+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:29:51.842635045+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:30:51.859244339+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:30:51.871837001+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:30:51.879857372+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:30:51.892769683+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:30:51.898760945+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:30:51.902899894+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:31:51.915497901+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:31:51.926827037+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:31:51.934752705+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:31:51.946344638+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:31:51.950888159+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:31:51.954856063+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:32:51.967487272+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:32:51.973539756+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:32:51.981221237+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:32:51.991501942+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:32:51.995170888+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:32:51.999110381+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:33:52.015352083+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:33:52.028486171+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:33:52.036493631+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:33:52.053462279+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:33:52.062454957+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:33:52.069294024+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:34:52.087208945+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:34:52.094613198+00:00 stdout F image-registry.openshift-image-registry.svc..5000 
2025-08-13T20:34:52.102524996+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:34:52.111355859+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:34:52.122145500+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:34:52.126930647+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:35:52.148494694+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:35:52.156479084+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:35:52.169187349+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:35:52.182749539+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:35:52.188410682+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:35:52.194130316+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:36:52.212706927+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:36:52.225700141+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:36:52.236669598+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:36:52.248599752+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:36:52.252548165+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:36:52.257011264+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:37:52.271169813+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:37:52.282038636+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:37:52.291913851+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:37:52.315675506+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:37:52.324174341+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:37:52.329417192+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:38:52.345967174+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:38:52.353704657+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:38:52.364194029+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:38:52.374217218+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:38:52.378944234+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:38:52.382480376+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:39:52.408991716+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:39:52.419278352+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:39:52.432388350+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:39:52.445580601+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:39:52.451324546+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:39:52.456467214+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:40:52.475429215+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:40:52.484263909+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:40:52.492461906+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:40:52.503455673+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:40:52.507405917+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 
2025-08-13T20:40:52.511266108+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:41:52.525956568+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:41:52.538127589+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-08-13T20:41:52.556161139+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:41:52.577509234+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-08-13T20:41:52.582425076+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-08-13T20:41:52.590598602+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-08-13T20:42:46.781617096+00:00 stdout F shutting down node-ca ././@LongLink0000644000000000000000000000025000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000644000175000017500000001504015117130646033033 0ustar zuulzuul2025-12-13T00:11:03.501702851+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:11:03.507851238+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-12-13T00:11:03.521841048+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-12-13T00:11:03.552534782+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:11:03.557591790+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-12-13T00:11:03.562568025+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-12-13T00:12:03.571962179+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:12:03.579691990+00:00 stdout F image-registry.openshift-image-registry.svc..5000 
2025-12-13T00:12:03.585604313+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-12-13T00:12:03.594664148+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:12:03.597849681+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-12-13T00:12:03.601079354+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-12-13T00:13:03.612867953+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:13:03.622538048+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-12-13T00:13:03.631715526+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-12-13T00:13:03.644983612+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:13:03.649864926+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-12-13T00:13:03.655621560+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-12-13T00:14:03.669740436+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:14:03.675368245+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-12-13T00:14:03.682593068+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-12-13T00:14:03.691514298+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:14:03.695168221+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-12-13T00:14:03.698831623+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-12-13T00:15:03.710182501+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:15:03.717520518+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-12-13T00:15:03.724514172+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-12-13T00:15:03.731739315+00:00 stdout F 
default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:15:03.734726411+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-12-13T00:15:03.739876867+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-12-13T00:16:03.759183926+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:16:03.767102619+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-12-13T00:16:03.773694345+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-12-13T00:16:03.785212616+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:16:03.789785310+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-12-13T00:16:03.795396976+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-12-13T00:17:03.809968885+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:17:03.819439244+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-12-13T00:17:03.827110663+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-12-13T00:17:03.837547888+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:17:03.841469416+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-12-13T00:17:03.845271039+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-12-13T00:18:03.855888186+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:18:03.867759971+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-12-13T00:18:03.872566119+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-12-13T00:18:03.879210496+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:18:03.882420041+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 
2025-12-13T00:18:03.885497594+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-12-13T00:19:03.899787598+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:19:03.914171585+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-12-13T00:19:03.921754303+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-12-13T00:19:03.932356045+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:19:03.937310672+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-12-13T00:19:03.940817019+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-12-13T00:20:03.951699126+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:20:03.957163746+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-12-13T00:20:03.962103963+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-12-13T00:20:03.968789107+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:20:03.971489351+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-12-13T00:20:03.974498494+00:00 stdout F image-registry.openshift-image-registry.svc:5000 2025-12-13T00:21:03.985826564+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:21:03.992517165+00:00 stdout F image-registry.openshift-image-registry.svc..5000 2025-12-13T00:21:03.998028604+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local..5000 2025-12-13T00:21:04.005549536+00:00 stdout F default-route-openshift-image-registry.apps-crc.testing 2025-12-13T00:21:04.009069862+00:00 stdout F image-registry.openshift-image-registry.svc.cluster.local:5000 2025-12-13T00:21:04.012485594+00:00 stdout F image-registry.openshift-image-registry.svc:5000
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/2.log
2025-12-13T00:13:14.629699262+00:00 stderr F I1213 00:13:14.629540 1 cmd.go:241] Using service-serving-cert provided certificates 2025-12-13T00:13:14.630229069+00:00 stderr F I1213 00:13:14.630192 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-12-13T00:13:14.630740277+00:00 stderr F I1213 00:13:14.630705 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:14.657499196+00:00 stderr F I1213 00:13:14.657434 1 builder.go:299] openshift-controller-manager-operator version 4.16.0-202406131906.p0.g8996996.assembly.stream.el9-8996996-899699681f8bb984d0f249dec171e630440c461b 2025-12-13T00:13:14.879248507+00:00 stderr F I1213 00:13:14.878715 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:14.879248507+00:00 stderr F W1213 00:13:14.879193 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:14.879248507+00:00 stderr F W1213 00:13:14.879199 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:14.882141074+00:00 stderr F I1213 00:13:14.882049 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-12-13T00:13:14.883696637+00:00 stderr F I1213 00:13:14.883619 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock... 
2025-12-13T00:13:14.885484187+00:00 stderr F I1213 00:13:14.885120 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:14.885484187+00:00 stderr F I1213 00:13:14.885312 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:14.885484187+00:00 stderr F I1213 00:13:14.885351 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:14.885484187+00:00 stderr F I1213 00:13:14.885380 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:14.885484187+00:00 stderr F I1213 00:13:14.885340 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:14.885484187+00:00 stderr F I1213 00:13:14.885433 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:14.886081737+00:00 stderr F I1213 00:13:14.885997 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:14.886666717+00:00 stderr F I1213 00:13:14.886577 1 secure_serving.go:213] Serving securely on [::]:8443 2025-12-13T00:13:14.886666717+00:00 stderr F I1213 00:13:14.886629 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:14.986874254+00:00 stderr F I1213 00:13:14.985988 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:14.987117672+00:00 stderr F I1213 00:13:14.987056 1 shared_informer.go:318] Caches are synced for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:14.987205315+00:00 stderr F I1213 00:13:14.987071 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:18:49.073554358+00:00 stderr F I1213 00:18:49.072860 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock 2025-12-13T00:18:49.074927306+00:00 stderr F I1213 00:18:49.073424 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-lock", UID:"decdf66a-787e-4779-ba30-d58496563da0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42077", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-controller-manager-operator-7978d7d7f6-2nt8z_e033e88c-810c-4658-abb1-a505eb6b5dd4 became leader 2025-12-13T00:18:49.106490917+00:00 stderr F I1213 00:18:49.106402 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:18:49.111790433+00:00 stderr F I1213 00:18:49.111718 1 starter.go:115] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather 
InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-12-13T00:18:49.112363508+00:00 stderr F I1213 00:18:49.112242 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", 
"ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-12-13T00:18:49.245091688+00:00 stderr F I1213 00:18:49.244976 1 base_controller.go:67] Waiting for caches to sync for ImagePullSecretCleanupController 2025-12-13T00:18:49.245214211+00:00 stderr F I1213 00:18:49.245178 1 operator.go:145] Starting OpenShiftControllerManagerOperator 2025-12-13T00:18:49.245282713+00:00 stderr F I1213 00:18:49.245248 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-12-13T00:18:49.245328044+00:00 stderr F I1213 00:18:49.245301 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-12-13T00:18:49.245364385+00:00 stderr F I1213 00:18:49.245339 1 base_controller.go:67] Waiting for caches to sync for UserCAObservationController 2025-12-13T00:18:49.245521641+00:00 stderr F I1213 00:18:49.245493 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-controller-manager 2025-12-13T00:18:49.245561252+00:00 stderr F I1213 
00:18:49.245538 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-12-13T00:18:49.249329815+00:00 stderr F I1213 00:18:49.249268 1 base_controller.go:67] Waiting for caches to sync for OpenshiftControllerManagerStaticResources 2025-12-13T00:18:49.345478386+00:00 stderr F I1213 00:18:49.345413 1 base_controller.go:73] Caches are synced for UserCAObservationController 2025-12-13T00:18:49.345478386+00:00 stderr F I1213 00:18:49.345438 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-12-13T00:18:49.345478386+00:00 stderr F I1213 00:18:49.345448 1 base_controller.go:110] Starting #1 worker of UserCAObservationController controller ... 2025-12-13T00:18:49.345478386+00:00 stderr F I1213 00:18:49.345457 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-12-13T00:18:49.345544918+00:00 stderr F I1213 00:18:49.345427 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-12-13T00:18:49.345596789+00:00 stderr F I1213 00:18:49.345580 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-12-13T00:18:49.345644991+00:00 stderr F I1213 00:18:49.345539 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-controller-manager 2025-12-13T00:18:49.345683192+00:00 stderr F I1213 00:18:49.345601 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-12-13T00:18:49.345694322+00:00 stderr F I1213 00:18:49.345680 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-12-13T00:18:49.345724073+00:00 stderr F I1213 00:18:49.345709 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-controller-manager controller ... 
2025-12-13T00:18:49.350057133+00:00 stderr F I1213 00:18:49.350005 1 base_controller.go:73] Caches are synced for OpenshiftControllerManagerStaticResources 2025-12-13T00:18:49.350057133+00:00 stderr F I1213 00:18:49.350044 1 base_controller.go:110] Starting #1 worker of OpenshiftControllerManagerStaticResources controller ... 2025-12-13T00:18:49.497820998+00:00 stderr F I1213 00:18:49.497753 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:18:49.545340568+00:00 stderr F I1213 00:18:49.545251 1 base_controller.go:73] Caches are synced for ImagePullSecretCleanupController 2025-12-13T00:18:49.545340568+00:00 stderr F I1213 00:18:49.545297 1 base_controller.go:110] Starting #1 worker of ImagePullSecretCleanupController controller ... 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 00:19:37.577779 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.577706811 +0000 UTC))" 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 00:19:37.578045 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.578021561 +0000 UTC))" 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 00:19:37.578080 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.578054331 +0000 UTC))" 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 00:19:37.578112 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.578088172 +0000 UTC))" 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 00:19:37.578138 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.578118283 +0000 UTC))" 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 00:19:37.578176 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.578146154 +0000 UTC))" 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 
00:19:37.578205 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.578183835 +0000 UTC))" 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 00:19:37.578234 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.578212086 +0000 UTC))" 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 00:19:37.578263 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.578240697 +0000 UTC))" 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 00:19:37.578298 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.578273767 +0000 UTC))" 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 00:19:37.578330 1 
tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.578307088 +0000 UTC))" 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 00:19:37.578362 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.578337099 +0000 UTC))" 2025-12-13T00:19:37.579030398+00:00 stderr F I1213 00:19:37.578914 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:14 +0000 UTC to 2027-08-13 20:00:15 +0000 UTC (now=2025-12-13 00:19:37.578883274 +0000 UTC))" 2025-12-13T00:19:37.583570944+00:00 stderr F I1213 00:19:37.583530 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584794\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584794\" (2025-12-12 23:13:14 +0000 UTC to 2026-12-12 23:13:14 +0000 UTC (now=2025-12-13 00:19:37.583499682 +0000 UTC))" 
2025-12-13T00:20:49.089894703+00:00 stderr F E1213 00:20:49.089412 1 leaderelection.go:332] error retrieving resource lock openshift-controller-manager-operator/openshift-controller-manager-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager-operator/leases/openshift-controller-manager-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:49.427999827+00:00 stderr P E1213 00:20:49.427849 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp
10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-12-13T00:20:49.466151026+00:00 stderr P E1213 00:20:49.466001 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: [... verbatim repeat of the preceding per-asset "dial tcp 10.217.4.1:443: connect: connection refused" error list elided ...]
2025-12-13T00:20:49.715885055+00:00 stderr P E1213 00:20:49.715789 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: [... verbatim repeat of the preceding per-asset "dial tcp 10.217.4.1:443: connect: connection refused" error list elided ...]
2025-12-13T00:20:50.496475329+00:00 stderr P E1213 00:20:50.495963 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused,
"assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-12-13T00:20:50.496542281+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:52.836517274+00:00 stderr P E1213 00:20:52.836413 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused,
"assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
[identical OpenshiftControllerManagerStaticResources reconciliation failures repeated at 2025-12-13T00:20:53, 2025-12-13T00:20:54, and 2025-12-13T00:20:56]
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-12-13T00:20:56.134147969+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:58.723038421+00:00 stderr P E1213 00:20:58.722906 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-12-13T00:20:58.723130723+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:03.872578989+00:00 stderr P E1213 00:21:03.872055 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: 
connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: 
connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-12-13T00:21:03.872649120+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" 
(string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:14.143485436+00:00 stderr P E1213 00:21:14.143023 1 base_controller.go:268] OpenshiftControllerManagerStaticResources reconciliation failed: ["assets/openshift-controller-manager/informer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/informer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:leader-locking-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/openshift-controller-manager/route-controller-manager-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-separate-sa-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-route-controller-manager/services/route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/openshift-controller-manager/route-controller-manager-tokenreview-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:tokenreview-openshift-route-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/route-controller-manager-ingress-to-route-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/old-leader-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/roles/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/separate-sa-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-infra/rolebindings/system:openshift:sa-creating-openshift-controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-controller-manager/services/controller-manager": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s": dial tcp 10.217.4 2025-12-13T00:21:14.143536927+00:00 stderr F .1:443: connect: connection refused, "assets/openshift-controller-manager/servicemonitor-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:update-buildconfig-status": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:deployer": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/deployer-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:deployer": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/image-trigger-controller-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-controller-manager:image-trigger-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/roles/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, "assets/openshift-controller-manager/leader-ingress-to-route-controller-rolebinding.yaml" (string): Delete "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-route-controller-manager/rolebindings/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:47.864322612+00:00 stderr F I1213 00:21:47.863585 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:52.234324057+00:00 stderr F I1213 
00:21:52.234239 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:56.562609714+00:00 stderr F I1213 00:21:56.554385 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/1.log
2025-08-13T20:05:39.477713001+00:00 stderr F I0813 20:05:39.477011 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:05:39.480944024+00:00 stderr F I0813 20:05:39.480813 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:39.482761686+00:00 stderr F I0813 20:05:39.482640 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:39.616820695+00:00 stderr F I0813 20:05:39.612634 1 builder.go:299] openshift-controller-manager-operator version 4.16.0-202406131906.p0.g8996996.assembly.stream.el9-8996996-899699681f8bb984d0f249dec171e630440c461b 2025-08-13T20:05:39.971388769+00:00 stderr F I0813 20:05:39.970377 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:39.971388769+00:00 stderr F W0813 20:05:39.970434 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:05:39.971388769+00:00 stderr F W0813 20:05:39.970443 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:39.979965674+00:00 stderr F I0813 20:05:39.978309 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:39.986559793+00:00 stderr F I0813 20:05:39.986511 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock... 2025-08-13T20:05:39.991060022+00:00 stderr F I0813 20:05:39.991028 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:39.991487234+00:00 stderr F I0813 20:05:39.991445 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:05:39.991591387+00:00 stderr F I0813 20:05:39.991531 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:39.992067701+00:00 stderr F I0813 20:05:39.992036 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:39.993505212+00:00 stderr F I0813 20:05:39.993450 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:39.993673207+00:00 stderr F I0813 20:05:39.992697 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:39.993687587+00:00 stderr F I0813 20:05:39.993668 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:39.993845892+00:00 stderr F I0813 20:05:39.992951 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T20:05:39.993888643+00:00 stderr F I0813 20:05:39.993837 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:40.095002829+00:00 stderr F I0813 20:05:40.094054 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:40.095197894+00:00 stderr F I0813 20:05:40.095170 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:40.095346198+00:00 stderr F I0813 20:05:40.095286 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:08:48.315010930+00:00 stderr F E0813 20:08:48.305568 1 leaderelection.go:332] error retrieving resource lock openshift-controller-manager-operator/openshift-controller-manager-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager-operator/leases/openshift-controller-manager-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:10:59.203663912+00:00 stderr F I0813 20:10:59.202174 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock 2025-08-13T20:10:59.208330496+00:00 stderr F I0813 20:10:59.205651 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-lock", UID:"decdf66a-787e-4779-ba30-d58496563da0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"33251", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-controller-manager-operator-7978d7d7f6-2nt8z_97a869b3-b1e5-4caf-ad32-912e51043cee became leader 2025-08-13T20:10:59.232052376+00:00 stderr F I0813 20:10:59.231988 1 simple_featuregate_reader.go:171] Starting 
feature-gate-detector 2025-08-13T20:10:59.243593237+00:00 stderr F I0813 20:10:59.243456 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:10:59.244106501+00:00 stderr F I0813 20:10:59.243119 1 starter.go:115] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 
2025-08-13T20:10:59.298685346+00:00 stderr F I0813 20:10:59.298610 1 base_controller.go:67] Waiting for caches to sync for ImagePullSecretCleanupController 2025-08-13T20:10:59.309429584+00:00 stderr F I0813 20:10:59.309371 1 base_controller.go:67] Waiting for caches to sync for OpenshiftControllerManagerStaticResources 2025-08-13T20:10:59.309539297+00:00 stderr F I0813 20:10:59.309523 1 operator.go:145] Starting OpenShiftControllerManagerOperator 2025-08-13T20:10:59.310213146+00:00 stderr F I0813 20:10:59.310182 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:10:59.310296649+00:00 stderr F I0813 20:10:59.310279 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:10:59.310347910+00:00 stderr F I0813 20:10:59.310335 1 base_controller.go:67] Waiting for caches to sync for UserCAObservationController 2025-08-13T20:10:59.310550696+00:00 stderr F I0813 20:10:59.310525 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-controller-manager 2025-08-13T20:10:59.310699400+00:00 stderr F I0813 20:10:59.310676 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:10:59.411721487+00:00 stderr F I0813 20:10:59.411566 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:10:59.411721487+00:00 stderr F I0813 20:10:59.411639 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:10:59.411844520+00:00 stderr F I0813 20:10:59.411737 1 base_controller.go:73] Caches are synced for UserCAObservationController 2025-08-13T20:10:59.411844520+00:00 stderr F I0813 20:10:59.411745 1 base_controller.go:110] Starting #1 worker of UserCAObservationController controller ... 
2025-08-13T20:10:59.412006155+00:00 stderr F I0813 20:10:59.411946 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.418689607+00:00 stderr F I0813 20:10:59.418610 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.431131613+00:00 stderr F I0813 20:10:59.430873 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.442558351+00:00 stderr F I0813 20:10:59.442445 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.443436006+00:00 stderr F I0813 20:10:59.443371 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.448866902+00:00 stderr F I0813 20:10:59.448819 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.455659307+00:00 stderr F I0813 20:10:59.455614 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.459617080+00:00 stderr F I0813 20:10:59.458001 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.469865814+00:00 stderr F I0813 20:10:59.469573 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.510179590+00:00 stderr F I0813 20:10:59.510092 1 base_controller.go:73] Caches are synced for OpenshiftControllerManagerStaticResources 2025-08-13T20:10:59.510280923+00:00 stderr F I0813 20:10:59.510258 1 base_controller.go:110] Starting #1 worker of OpenshiftControllerManagerStaticResources controller ... 
2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511488 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-controller-manager 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511529 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-controller-manager controller ... 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511581 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511589 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511616 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:10:59.511697503+00:00 stderr F I0813 20:10:59.511629 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:10:59.552331418+00:00 stderr F I0813 20:10:59.552188 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:59.563880430+00:00 stderr F I0813 20:10:59.563622 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"f6a079a2c81073c36b72bd771673556c9f87406689988ef3e993138968845bcc"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"30489"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T20:10:59.587185468+00:00 stderr F I0813 20:10:59.585561 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", 
UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T20:10:59.596895606+00:00 stderr F I0813 20:10:59.596737 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"a8b1e7668d4b183445825c8d9d8daba3bc69d22add1b4a1e25e5081d7b9c2cd7"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"30534"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"
serving-cert"}}]}}}} 2025-08-13T20:10:59.601860628+00:00 stderr F I0813 20:10:59.601714 1 base_controller.go:73] Caches are synced for ImagePullSecretCleanupController 2025-08-13T20:10:59.602059954+00:00 stderr F I0813 20:10:59.602030 1 base_controller.go:110] Starting #1 worker of ImagePullSecretCleanupController controller ... 2025-08-13T20:10:59.614457720+00:00 stderr F I0813 20:10:59.614278 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T20:10:59.650180144+00:00 stderr F I0813 20:10:59.650071 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 15, desired generation is 16.\nProgressing: deployment/route-controller-manager: observed generation is 13, desired generation is 14.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:59Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:10:59.677105436+00:00 stderr F I0813 20:10:59.676700 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", 
Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 14, desired generation is 15.\nProgressing: deployment/route-controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 15, desired generation is 16.\nProgressing: deployment/route-controller-manager: observed generation is 13, desired generation is 14.",Available changed from False to True ("All is well")
2025-08-13T20:10:59.996206105+00:00 stderr F I0813 20:10:59.996116       1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:59Z","message":"Available: no route controller manager deployment pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:11:00.047997790+00:00 stderr F I0813 20:11:00.047867       1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 15, desired generation is 16.\nProgressing: deployment/route-controller-manager: observed generation is 13, desired generation is 14." to "Progressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1",Available changed from True to False ("Available: no route controller manager deployment pods available on any node.")
2025-08-13T20:11:00.317103545+00:00 stderr F I0813 20:11:00.316543       1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:59Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:11:00.340647020+00:00 stderr F I0813 20:11:00.339734       1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available message changed from "Available: no route controller manager deployment pods available on any node." to "Available: no pods available on any node."
2025-08-13T20:11:19.410645645+00:00 stderr F I0813 20:11:19.409794       1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:11:19Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:11:19Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:11:19.441531050+00:00 stderr F I0813 20:11:19.436405       1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")
2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.420037       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430567       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430613       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430625       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430641       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430655       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430667       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430679       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.442177138+00:00 stderr F I0813 20:42:36.430691       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.469924708+00:00 stderr F I0813 20:42:36.430703       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.640999140+00:00 stderr F I0813 20:42:36.430724       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.641265608+00:00 stderr F I0813 20:42:36.430733       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.641460023+00:00 stderr F I0813 20:42:36.430743       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.642325258+00:00 stderr F I0813 20:42:36.430752       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.644339656+00:00 stderr F I0813 20:42:36.422097       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.644450870+00:00 stderr F I0813 20:42:36.430828       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430849       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430860       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430891       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430908       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430919       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430929       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.645439398+00:00 stderr F I0813 20:42:36.430940       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.645688405+00:00 stderr F I0813 20:42:36.430950       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.645845310+00:00 stderr F I0813 20:42:36.430960       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.646290823+00:00 stderr F I0813 20:42:36.430970       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.646290823+00:00 stderr F I0813 20:42:36.430981       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.646484298+00:00 stderr F I0813 20:42:36.430991       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.646500289+00:00 stderr F I0813 20:42:36.431008       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.646677554+00:00 stderr F I0813 20:42:36.431018       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.646696514+00:00 stderr F I0813 20:42:36.431038       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.646708345+00:00 stderr F I0813 20:42:36.431049       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.651420531+00:00 stderr F I0813 20:42:36.431059       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.651566525+00:00 stderr F I0813 20:42:36.431071       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.652165712+00:00 stderr F I0813 20:42:36.431081       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.652720058+00:00 stderr F I0813 20:42:36.431096       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.652883633+00:00 stderr F I0813 20:42:36.431111       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.655507488+00:00 stderr F I0813 20:42:36.431122       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.655507488+00:00 stderr F I0813 20:42:36.431132       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.655507488+00:00 stderr F I0813 20:42:36.431147       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.655889539+00:00 stderr F I0813 20:42:36.431164       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.656338182+00:00 stderr F I0813 20:42:36.422033       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.657733093+00:00 stderr F I0813 20:42:36.422154       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.659246806+00:00 stderr F I0813 20:42:36.422194       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.659583316+00:00 stderr F I0813 20:42:36.422249       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:40.377512944+00:00 stderr F I0813 20:42:40.375202       1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller.
2025-08-13T20:42:40.377512944+00:00 stderr F I0813 20:42:40.376636       1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:42:40.377736721+00:00 stderr F I0813 20:42:40.377656       1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated"
2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379633       1 base_controller.go:172] Shutting down OpenshiftControllerManagerStaticResources ...
2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379656       1 base_controller.go:172] Shutting down ResourceSyncController ...
2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379685       1 base_controller.go:172] Shutting down ImagePullSecretCleanupController ...
2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379689       1 base_controller.go:172] Shutting down StatusSyncer_openshift-controller-manager ...
2025-08-13T20:42:40.379708598+00:00 stderr F I0813 20:42:40.379697       1 base_controller.go:172] Shutting down ConfigObserver ...
2025-08-13T20:42:40.379735989+00:00 stderr F I0813 20:42:40.379712       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...
2025-08-13T20:42:40.379735989+00:00 stderr F I0813 20:42:40.379719       1 base_controller.go:104] All ResourceSyncController workers have been terminated
2025-08-13T20:42:40.380063278+00:00 stderr F I0813 20:42:40.379990       1 base_controller.go:172] Shutting down UserCAObservationController ...
2025-08-13T20:42:40.380163951+00:00 stderr F I0813 20:42:40.380116       1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:42:40.380163951+00:00 stderr F I0813 20:42:40.380153       1 base_controller.go:114] Shutting down worker of ImagePullSecretCleanupController controller ...
2025-08-13T20:42:40.380180221+00:00 stderr F I0813 20:42:40.380162       1 base_controller.go:104] All ImagePullSecretCleanupController workers have been terminated
2025-08-13T20:42:40.380191982+00:00 stderr F I0813 20:42:40.380171       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...
2025-08-13T20:42:40.380206042+00:00 stderr F I0813 20:42:40.380196       1 base_controller.go:104] All ConfigObserver workers have been terminated
2025-08-13T20:42:40.380252273+00:00 stderr F I0813 20:42:40.380210       1 base_controller.go:114] Shutting down worker of UserCAObservationController controller ...
2025-08-13T20:42:40.380378737+00:00 stderr F I0813 20:42:40.380219       1 base_controller.go:104] All UserCAObservationController workers have been terminated
2025-08-13T20:42:40.381132839+00:00 stderr F I0813 20:42:40.379696       1 base_controller.go:150] All StatusSyncer_openshift-controller-manager post start hooks have been terminated
2025-08-13T20:42:40.381132839+00:00 stderr F I0813 20:42:40.381115       1 base_controller.go:114] Shutting down worker of OpenshiftControllerManagerStaticResources controller ...
2025-08-13T20:42:40.381132839+00:00 stderr F I0813 20:42:40.381125       1 base_controller.go:104] All OpenshiftControllerManagerStaticResources workers have been terminated
2025-08-13T20:42:40.381155830+00:00 stderr F I0813 20:42:40.381134       1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...
2025-08-13T20:42:40.381155830+00:00 stderr F I0813 20:42:40.381140       1 base_controller.go:104] All StatusSyncer_openshift-controller-manager workers have been terminated
2025-08-13T20:42:40.381360435+00:00 stderr F I0813 20:42:40.380217       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
2025-08-13T20:42:40.381360435+00:00 stderr F I0813 20:42:40.381248       1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
2025-08-13T20:42:40.381360435+00:00 stderr F I0813 20:42:40.381248       1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:42:40.381904751+00:00 stderr F I0813 20:42:40.380269       1 operator.go:151] Shutting down OpenShiftControllerManagerOperator
2025-08-13T20:42:40.383730464+00:00 stderr F I0813 20:42:40.382525       1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain"
2025-08-13T20:42:40.383730464+00:00 stderr F W0813 20:42:40.382565       1 builder.go:131] graceful termination failed, controllers failed with error: stopped

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log

2025-08-13T19:59:22.501014350+00:00 stderr F I0813 19:59:22.498949       1 cmd.go:241] Using service-serving-cert
provided certificates
2025-08-13T19:59:22.518155678+00:00 stderr F I0813 19:59:22.517397       1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:22.740731243+00:00 stderr F I0813 19:59:22.705993       1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:24.642256226+00:00 stderr F I0813 19:59:24.638870       1 builder.go:299] openshift-controller-manager-operator version 4.16.0-202406131906.p0.g8996996.assembly.stream.el9-8996996-899699681f8bb984d0f249dec171e630440c461b
2025-08-13T19:59:34.801442426+00:00 stderr F I0813 19:59:34.794211       1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:59:34.801442426+00:00 stderr F W0813 19:59:34.799728       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:34.801442426+00:00 stderr F W0813 19:59:34.799743       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:34.853910601+00:00 stderr F I0813 19:59:34.852730       1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T19:59:34.930903836+00:00 stderr F I0813 19:59:34.924662       1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock...
2025-08-13T19:59:35.156718343+00:00 stderr F I0813 19:59:35.156653       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:59:35.274368056+00:00 stderr F I0813 19:59:35.237965       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:59:35.274496450+00:00 stderr F I0813 19:59:35.198935       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:59:35.274530241+00:00 stderr F I0813 19:59:35.199918       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T19:59:35.274556002+00:00 stderr F I0813 19:59:35.204331       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.274581893+00:00 stderr F I0813 19:59:35.253733       1 secure_serving.go:213] Serving securely on [::]:8443
2025-08-13T19:59:35.284174546+00:00 stderr F I0813 19:59:35.284131       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:35.352969556+00:00 stderr F I0813 19:59:35.296585       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:35.596705014+00:00 stderr F I0813 19:59:35.309268       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T19:59:35.605019651+00:00 stderr F I0813 19:59:35.584214       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:35.834367478+00:00 stderr F I0813 19:59:35.584244       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:35.843013895+00:00 stderr F I0813 19:59:35.842575       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:35.843500579+00:00 stderr F E0813 19:59:35.843458       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.843619232+00:00 stderr F E0813 19:59:35.843597       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.852731992+00:00 stderr F E0813 19:59:35.852661       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.879277559+00:00 stderr F E0813 19:59:35.876410       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.879277559+00:00 stderr F E0813 19:59:35.876472       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.887279527+00:00 stderr F E0813 19:59:35.887219       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.904343823+00:00 stderr F E0813 19:59:35.904291       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.912429184+00:00 stderr F E0813 19:59:35.912348       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.959033952+00:00 stderr F E0813 19:59:35.958973       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.959146965+00:00 stderr F E0813 19:59:35.959131       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.040763242+00:00 stderr F E0813 19:59:36.040676       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.054607717+00:00 stderr F E0813 19:59:36.054573       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.232656712+00:00 stderr F E0813 19:59:36.232421       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.331167670+00:00 stderr F I0813 19:59:36.329754       1 leaderelection.go:260] successfully acquired lease openshift-controller-manager-operator/openshift-controller-manager-operator-lock
2025-08-13T19:59:36.344607243+00:00 stderr F I0813 19:59:36.344427       1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator-lock", UID:"decdf66a-787e-4779-ba30-d58496563da0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28188", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-controller-manager-operator-7978d7d7f6-2nt8z_9d027477-b136-47ef-a304-2dd35bc9cd4d became leader
2025-08-13T19:59:36.434342421+00:00 stderr F E0813 19:59:36.434276       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.557910153+00:00 stderr F E0813 19:59:36.555387       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.789066893+00:00 stderr F E0813 19:59:36.782173       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.792564442+00:00 stderr F I0813 19:59:36.792518       1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T19:59:36.888852057+00:00 stderr F I0813 19:59:36.847317       1 starter.go:115] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T19:59:36.995937550+00:00 stderr F I0813 19:59:36.848765       1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T19:59:37.271882766+00:00 stderr F E0813 19:59:37.268698       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:37.436072326+00:00 stderr F E0813 19:59:37.422409       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:37.628636835+00:00 stderr F I0813 19:59:37.627170       1 base_controller.go:67] Waiting for caches to sync for ImagePullSecretCleanupController
2025-08-13T19:59:37.691184378+00:00 stderr F I0813 19:59:37.674414       1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-08-13T19:59:38.100083794+00:00 stderr F I0813 19:59:38.097705       1 base_controller.go:67] Waiting for caches to sync for UserCAObservationController
2025-08-13T19:59:39.611923168+00:00 stderr F E0813 19:59:39.610157       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:39.611923168+00:00 stderr F E0813 19:59:39.610561       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.899051       1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.899362       1 base_controller.go:67] Waiting for caches to sync for OpenshiftControllerManagerStaticResources
2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.899512       1 operator.go:145] Starting OpenShiftControllerManagerOperator
2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.900209       1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-08-13T19:59:39.904962902+00:00 stderr F I0813 19:59:39.901615       1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-controller-manager
2025-08-13T19:59:42.993629804+00:00 stderr F E0813 19:59:42.978620       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.993629804+00:00 stderr F E0813 19:59:42.993327       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:43.024014960+00:00 stderr F I0813 19:59:43.021103       1 trace.go:236] Trace[411137344]: "DeltaFIFO Pop Process" ID:default,Depth:63,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.802) (total time: 214ms):
2025-08-13T19:59:43.024014960+00:00 stderr F Trace[411137344]: [214.00944ms] [214.00944ms] END
2025-08-13T19:59:43.033957184+00:00 stderr F I0813 19:59:43.032975       1 base_controller.go:73] Caches are synced for UserCAObservationController
2025-08-13T19:59:43.033957184+00:00 stderr F I0813 19:59:43.033125       1 base_controller.go:110] Starting #1 worker of UserCAObservationController controller ...
2025-08-13T19:59:43.033957184+00:00 stderr F I0813 19:59:43.033291       1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T19:59:43.033957184+00:00 stderr F I0813 19:59:43.033300       1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T19:59:43.120035607+00:00 stderr F I0813 19:59:43.107927       1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:43.120035607+00:00 stderr F I0813 19:59:43.108215       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:43.120035607+00:00 stderr F I0813 19:59:43.109536       1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:43.124359101+00:00 stderr F I0813 19:59:43.123596       1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:43.219533754+00:00 stderr F I0813 19:59:43.217513       1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:43.361143030+00:00 stderr F I0813 19:59:43.355828       1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:43.368662565+00:00 stderr F I0813 19:59:43.368109       1 trace.go:236] Trace[1825624301]: "DeltaFIFO Pop Process" ID:cluster-admin,Depth:188,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:43.255) (total time: 112ms):
2025-08-13T19:59:43.368662565+00:00 stderr F Trace[1825624301]: [112.148267ms] [112.148267ms] END
2025-08-13T19:59:43.477358833+00:00 stderr F I0813 19:59:43.476446       1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:43.637191349+00:00 stderr F I0813 19:59:43.636024       1 trace.go:236] Trace[430823319]: "DeltaFIFO Pop Process" ID:multus-group,Depth:146,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:43.432) (total time: 202ms):
2025-08-13T19:59:43.637191349+00:00 stderr F Trace[430823319]: [202.973466ms] [202.973466ms] END
2025-08-13T19:59:45.230277591+00:00 stderr F I0813 19:59:45.131770       1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:45.264485136+00:00 stderr F I0813 19:59:45.235273       1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:46.966283855+00:00 stderr F I0813 19:59:46.929670       1 trace.go:236] Trace[1910610313]: "DeltaFIFO Pop Process" ID:aggregate-olm-edit,Depth:223,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:45.235) (total time: 1693ms):
2025-08-13T19:59:46.966283855+00:00 stderr F Trace[1910610313]: [1.693771071s] [1.693771071s] END
2025-08-13T19:59:47.040871171+00:00 stderr F I0813 19:59:46.973674       1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:47.190433995+00:00 stderr F I0813 19:59:47.190257       1 trace.go:236] Trace[287803092]: "DeltaFIFO Pop Process" ID:net-attach-def-project,Depth:179,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:47.076) (total time: 113ms):
2025-08-13T19:59:47.190433995+00:00 stderr F Trace[287803092]: [113.420843ms] [113.420843ms] END
2025-08-13T19:59:47.267690217+00:00 stderr F I0813 19:59:47.079030       1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-08-13T19:59:47.304922349+00:00 stderr F I0813 19:59:47.293440       1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-08-13T19:59:47.340553874+00:00 stderr F I0813 19:59:47.340491       1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:47.342163810+00:00 stderr F I0813 19:59:47.341746       1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.401080       1 base_controller.go:73] Caches are synced for ConfigObserver
2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.401137       1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.401095       1 base_controller.go:73] Caches are synced for OpenshiftControllerManagerStaticResources
2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.401182       1 base_controller.go:110] Starting #1 worker of OpenshiftControllerManagerStaticResources controller ...
2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.409057       1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-controller-manager
2025-08-13T19:59:47.412154455+00:00 stderr F I0813 19:59:47.409075       1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-controller-manager controller ...
2025-08-13T19:59:48.127951600+00:00 stderr F E0813 19:59:48.115094 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:48.128310830+00:00 stderr F E0813 19:59:48.128272 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.115961985+00:00 stderr P I0813 19:59:49.109919 1 core.go:341] ConfigMap "openshift-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ 2025-08-13T19:59:49.116028847+00:00 stderr F 
/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ
4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:36Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T19:59:49.116028847+00:00 stderr F I0813 19:59:49.111960 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-controller-manager: 2025-08-13T19:59:49.116028847+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:49.865857840+00:00 stderr P I0813 19:59:49.863221 1 core.go:341] ConfigMap "openshift-route-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/ 2025-08-13T19:59:49.865993994+00:00 stderr F 
iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+p
vkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:36Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T19:59:49.899495549+00:00 stderr F I0813 19:59:49.899305 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-route-controller-manager: 2025-08-13T19:59:49.899495549+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:50.414174951+00:00 stderr F I0813 19:59:50.410733 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"1accb1ac836aa33f8208a614d8657e2894b298fdc84501b2ced8f0aea7081a7e"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"28451"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T19:59:50.764362733+00:00 stderr F I0813 19:59:50.763635 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", 
UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T19:59:51.005057045+00:00 stderr F I0813 19:59:51.002574 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"bd022aaa08f7a35114f39954475616ebb6669ca5cfd704016d15dc0be2736d06"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"28474"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName
":"serving-cert"}}]}}}} 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.802982 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.802735174 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803080 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.803061153 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803107 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.803090464 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803124 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.803113735 +0000 UTC))" 
2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803143 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803131585 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803214 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803148676 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803326 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803221628 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803352 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 
19:59:51.803334771 +0000 UTC))" 2025-08-13T19:59:51.803821515+00:00 stderr F I0813 19:59:51.803370 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.803359372 +0000 UTC))" 2025-08-13T19:59:51.890136225+00:00 stderr F I0813 19:59:51.803714 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:51.803696981 +0000 UTC))" 2025-08-13T19:59:51.890136225+00:00 stderr F I0813 19:59:51.865124 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115167\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 19:59:51.865080941 +0000 UTC))" 2025-08-13T19:59:51.890136225+00:00 stderr F I0813 19:59:51.862192 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager 
because it changed 2025-08-13T19:59:51.908679914+00:00 stderr F I0813 19:59:51.908319 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:52.226393681+00:00 stderr F I0813 19:59:52.224708 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:52Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:52.309720466+00:00 stderr F I0813 19:59:52.307397 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: 
deployment/route-controller-manager: observed generation is 8, desired generation is 9.",Available changed from False to True ("All is well") 2025-08-13T19:59:52.632061214+00:00 stderr F I0813 19:59:52.630311 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:52Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.011012106+00:00 stderr F I0813 19:59:52.950889 1 trace.go:236] Trace[587110061]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:37.636) (total time: 15314ms): 2025-08-13T19:59:53.011012106+00:00 stderr F Trace[587110061]: ---"Objects listed" error: 15314ms (19:59:52.950) 2025-08-13T19:59:53.011012106+00:00 stderr F Trace[587110061]: [15.31460899s] [15.31460899s] END 2025-08-13T19:59:53.011402667+00:00 stderr F I0813 19:59:53.011356 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.012131518+00:00 stderr F E0813 19:59:52.962146 1 base_controller.go:268] StatusSyncer_openshift-controller-manager reconciliation failed: Operation cannot be fulfilled on 
clusteroperators.config.openshift.io "openshift-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:53.033095616+00:00 stderr F I0813 19:59:53.033026 1 base_controller.go:73] Caches are synced for ImagePullSecretCleanupController 2025-08-13T19:59:53.039407866+00:00 stderr F I0813 19:59:53.039368 1 base_controller.go:110] Starting #1 worker of ImagePullSecretCleanupController controller ... 2025-08-13T19:59:53.294618370+00:00 stderr F I0813 19:59:53.294426 1 core.go:341] ConfigMap "openshift-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-controller-manager.client-ca.configmap":"MfAL0g=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T19:59:53.309522275+00:00 stderr F I0813 19:59:53.295151 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-controller-manager: 2025-08-13T19:59:53.309522275+00:00 stderr F cause by changes in data.openshift-controller-manager.client-ca.configmap 2025-08-13T19:59:53.404664057+00:00 stderr F I0813 19:59:53.402032 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.client-ca.configmap":"MfAL0g=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T19:59:53.404664057+00:00 stderr F I0813 19:59:53.402516 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2025-08-13T19:59:53.404664057+00:00 stderr F cause by changes in data.openshift-route-controller-manager.client-ca.configmap 2025-08-13T19:59:53.610534275+00:00 stderr F I0813 19:59:53.610241 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"9560e1ebfed279fa4b00d6622d2d0c7548ddaafda3bf7adb90c2675e98237adc"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"28609"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cer
t","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T19:59:53.839948774+00:00 stderr F I0813 19:59:53.792171 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T19:59:53.839948774+00:00 stderr F I0813 19:59:53.817658 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"55202ddf9b00b1660ea2cb5f3c4a6bcc3fe4ffc25c2e72e085bdaf7de2334698"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"28614"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPat
h":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T19:59:53.899455651+00:00 stderr F I0813 19:59:53.858690 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T19:59:54.035239521+00:00 stderr F I0813 19:59:54.035105 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.195940292+00:00 stderr F I0813 19:59:54.195299 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 10, desired generation is 11.\nProgressing: deployment/route-controller-manager: observed generation is 8, desired generation is 9." to "Progressing: deployment/controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.") 2025-08-13T19:59:56.033043570+00:00 stderr F I0813 19:59:56.025824 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 
1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:56.508439492+00:00 stderr F I0813 19:59:56.501318 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" 2025-08-13T19:59:57.112488860+00:00 stderr F I0813 19:59:57.111042 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: available 
replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:57.287965542+00:00 stderr F E0813 19:59:57.279216 1 base_controller.go:268] StatusSyncer_openshift-controller-manager reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-controller-manager": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757458 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.757412947 +0000 UTC))" 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757647 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.757632283 +0000 UTC))" 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 
20:00:05.757667 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.757654564 +0000 UTC))" 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757719 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.757672245 +0000 UTC))" 2025-08-13T20:00:05.762510063+00:00 stderr F I0813 20:00:05.757753 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.757728736 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773048 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.77295724 +0000 UTC))" 
2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773142 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773125205 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773167 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773149976 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773188 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.773176797 +0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773207 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773196747 
+0000 UTC))" 2025-08-13T20:00:05.785996492+00:00 stderr F I0813 20:00:05.773564 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 20:00:05.773546847 +0000 UTC))" 2025-08-13T20:00:05.829599936+00:00 stderr F I0813 20:00:05.828483 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115167\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:00:05.828419362 +0000 UTC))" 2025-08-13T20:00:08.126868879+00:00 stderr P I0813 20:00:08.115317 1 core.go:341] ConfigMap "openshift-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IB 2025-08-13T20:00:08.126971252+00:00 stderr F 
DwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKM
tIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:05Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:00:08.143879924+00:00 stderr F I0813 20:00:08.143169 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-controller-manager: 2025-08-13T20:00:08.143879924+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:08.734625589+00:00 stderr P I0813 20:00:08.729751 1 core.go:341] ConfigMap "openshift-route-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQ 2025-08-13T20:00:08.734699641+00:00 stderr F 
UAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vI
i4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:05Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:00:08.735633667+00:00 stderr F I0813 20:00:08.735589 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-route-controller-manager: 2025-08-13T20:00:08.735633667+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:08.868503696+00:00 stderr F I0813 20:00:08.868378 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"fc47afb69400ec93f9f5988897f8473ca4c4bdea046fab70a0e165e5b4cc33c1"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"28980"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"n
ame":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T20:00:09.328665487+00:00 stderr F I0813 20:00:09.303747 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T20:00:09.761062066+00:00 stderr F I0813 20:00:09.759946 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"2997ec2c10eb5744ab5a4358602bc88f99f56641a9b7132faff9749e82773557"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"28996"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":
"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T20:00:10.271157441+00:00 stderr F I0813 20:00:10.267309 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T20:00:10.956011969+00:00 stderr F I0813 20:00:10.955606 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: observed generation is 10, desired generation is 11.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:10Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:11.317181517+00:00 stderr F I0813 20:00:11.315079 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: observed generation is 10, desired generation is 11.",Available changed from False to True ("All is well") 2025-08-13T20:00:20.807714778+00:00 stderr F I0813 20:00:20.797011 1 core.go:341] ConfigMap "openshift-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-controller-manager.client-ca.configmap":"nkDd2A==","openshift-controller-manager.serving-cert.secret":"inPi3w=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:20.807714778+00:00 stderr F I0813 20:00:20.799219 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-controller-manager: 2025-08-13T20:00:20.807714778+00:00 
stderr F cause by changes in data.openshift-controller-manager.client-ca.configmap,data.openshift-controller-manager.serving-cert.secret 2025-08-13T20:00:22.570906954+00:00 stderr F I0813 20:00:22.561663 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.client-ca.configmap":"nkDd2A=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:22.570906954+00:00 stderr F I0813 20:00:22.563114 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2025-08-13T20:00:22.570906954+00:00 stderr F cause by changes in data.openshift-route-controller-manager.client-ca.configmap 2025-08-13T20:00:23.296960378+00:00 stderr F I0813 20:00:23.294715 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: 
{"metadata":{"annotations":{"operator.openshift.io/spec-hash":"6bcc007c303cd189832d7956655ea4a765a39149dbd0e31991e8b4f86ad92eeb"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"29406","configmaps/openshift-service-ca":"29219"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T20:00:23.454507270+00:00 stderr F I0813 20:00:23.453199 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", 
Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T20:00:23.804053877+00:00 stderr F I0813 20:00:23.802669 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"d2cfbecabf7957093938fbbf20602fed627c8dcf88c7baa3039d1aa76706feb6"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/config":"29435"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{
"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T20:00:24.288062459+00:00 stderr F I0813 20:00:24.286413 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T20:00:24.762012823+00:00 stderr F I0813 20:00:24.761237 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 13, desired generation is 14.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas \u003e 1\nProgressing: deployment/route-controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:24Z","message":"Available: no pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:24.887286025+00:00 stderr F I0813 20:00:24.885987 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", 
Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: observed generation is 10, desired generation is 11." to "Progressing: deployment/controller-manager: observed generation is 13, desired generation is 14.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.") 2025-08-13T20:00:40.856666175+00:00 stderr F I0813 20:00:40.845673 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:24Z","message":"Available: no route controller manager deployment pods available on any node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 
2025-08-13T20:00:40.990130500+00:00 stderr F I0813 20:00:40.969350 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 13, desired generation is 14.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 11, desired generation is 12.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available message changed from "Available: no pods available on any node." to "Available: no route controller manager deployment pods available on any node." 
2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.049126 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:00.048880262 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.049726 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.049707486 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.049761 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.049732806 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050027 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.049766497 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 
stderr F I0813 20:01:00.050108 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050042495 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050128 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050114927 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050146 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050134378 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050166 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050152008 +0000 UTC))" 
2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050184 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.050172309 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050252 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.05019623 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050294 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.050261171 +0000 UTC))" 2025-08-13T20:01:00.051034334+00:00 stderr F I0813 20:01:00.050992 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 
+0000 UTC (now=2025-08-13 20:01:00.050970852 +0000 UTC))" 2025-08-13T20:01:00.053948177+00:00 stderr F I0813 20:01:00.051276 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115167\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:01:00.05126006 +0000 UTC))" 2025-08-13T20:01:08.673277712+00:00 stderr P I0813 20:01:08.666614 1 core.go:341] ConfigMap "openshift-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1Nj 2025-08-13T20:01:08.674489597+00:00 stderr F 
YwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u
6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:01:08.713898331+00:00 stderr F I0813 20:01:08.711258 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-controller-manager: 2025-08-13T20:01:08.713898331+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:01:09.637633520+00:00 stderr F I0813 20:01:09.637216 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.serving-cert.secret":"6wVDCg=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:01:09.654632895+00:00 stderr F I0813 20:01:09.637763 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2025-08-13T20:01:09.654632895+00:00 stderr F cause by changes in data.openshift-route-controller-manager.serving-cert.secret 2025-08-13T20:01:10.338074073+00:00 stderr P I0813 20:01:10.337043 1 core.go:341] ConfigMap "openshift-route-controller-manager/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUx 2025-08-13T20:01:10.338280109+00:00 stderr F 
MTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf
6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:01:10.344929248+00:00 stderr F I0813 20:01:10.344119 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-route-controller-manager: 2025-08-13T20:01:10.344929248+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:01:14.401913999+00:00 stderr F I0813 20:01:14.394388 1 apps.go:154] Deployment "openshift-controller-manager/controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"f04f287b763c8ca46f7254ec00d7f77f509cbf1bfd94b52bf4b4d93869c665c0"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"30255"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["openshift-controller-manager","start"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"proxy-ca-bundles"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePer
iodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"openshift-global-ca"},"name":"proxy-ca-bundles"}]}}}} 2025-08-13T20:01:17.951922354+00:00 stderr F I0813 20:01:17.951238 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed 2025-08-13T20:01:18.815583180+00:00 stderr F I0813 20:01:18.814730 1 apps.go:154] Deployment "openshift-route-controller-manager/route-controller-manager" changes: {"metadata":{"annotations":{"operator.openshift.io/spec-hash":"b9a81854272e0c5424d4034a7ee3633ceff604e77368302720feb1f03d857755"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"configmaps/client-ca":"30328","configmaps/config":"30299"}},"spec":{"containers":[{"args":["--config=/var/run/configmaps/config/config.yaml","-v=2"],"command":["route-controller-manager","start"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"initialDelaySeconds":30},"name":"route-controller-manager","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"}},"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscala
tion":false,"capabilities":{"drop":["ALL"]}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/client-ca","name":"client-ca"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"terminationGracePeriodSeconds":null,"volumes":[{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"client-ca"},"name":"client-ca"},{"name":"serving-cert","secret":{"secretName":"serving-cert"}}]}}}} 2025-08-13T20:01:20.174150267+00:00 stderr F I0813 20:01:20.173618 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed 2025-08-13T20:01:21.295577804+00:00 stderr F I0813 20:01:21.295110 1 status_controller.go:218] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"Progressing: deployment/controller-manager: observed generation is 14, desired generation is 15.\nProgressing: deployment/route-controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas \u003e 1","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:24Z","message":"Available: no route controller manager deployment pods available on any 
node.","reason":"_NoPodsAvailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:05Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:21.648275290+00:00 stderr F I0813 20:01:21.638715 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 14, desired generation is 15.\nProgressing: deployment/route-controller-manager: observed generation is 12, desired generation is 13.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" 2025-08-13T20:01:23.289898379+00:00 stderr F I0813 20:01:23.275156 1 core.go:341] ConfigMap "openshift-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-controller-manager.client-ca.configmap":"d_XbdQ=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:01:23.304622698+00:00 stderr F I0813 20:01:23.293916 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-controller-manager: 
2025-08-13T20:01:23.304622698+00:00 stderr F cause by changes in data.openshift-controller-manager.client-ca.configmap 2025-08-13T20:01:28.280990974+00:00 stderr F I0813 20:01:28.280362 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:28.281032975+00:00 stderr F I0813 20:01:28.280998 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:28.281407776+00:00 stderr F I0813 20:01:28.281358 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:28.282386073+00:00 stderr F I0813 20:01:28.282302 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:28.28226776 +0000 UTC))" 2025-08-13T20:01:28.282428555+00:00 stderr F I0813 20:01:28.282362 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:28.282335452 +0000 UTC))" 2025-08-13T20:01:28.282428555+00:00 stderr F I0813 20:01:28.282384 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:28.282369833 +0000 UTC))" 2025-08-13T20:01:28.282428555+00:00 stderr F I0813 20:01:28.282414 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:28.282402004 +0000 UTC))" 2025-08-13T20:01:28.282498927+00:00 stderr F I0813 20:01:28.282431 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282420344 +0000 UTC))" 2025-08-13T20:01:28.282498927+00:00 stderr F I0813 20:01:28.282451 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282438615 +0000 UTC))" 2025-08-13T20:01:28.282498927+00:00 stderr F I0813 
20:01:28.282474 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282456705 +0000 UTC))" 2025-08-13T20:01:28.282513407+00:00 stderr F I0813 20:01:28.282495 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282482476 +0000 UTC))" 2025-08-13T20:01:28.282735313+00:00 stderr F I0813 20:01:28.282523 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:28.282501107 +0000 UTC))" 2025-08-13T20:01:28.282735313+00:00 stderr F I0813 20:01:28.282564 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:28.282548048 +0000 UTC))" 2025-08-13T20:01:28.282735313+00:00 stderr F I0813 20:01:28.282589 1 
tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.282577829 +0000 UTC))" 2025-08-13T20:01:28.289939799+00:00 stderr F I0813 20:01:28.283343 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-controller-manager-operator.svc\" [serving] validServingFor=[metrics.openshift-controller-manager-operator.svc,metrics.openshift-controller-manager-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:14 +0000 UTC to 2027-08-13 20:00:15 +0000 UTC (now=2025-08-13 20:01:28.28332429 +0000 UTC))" 2025-08-13T20:01:28.289939799+00:00 stderr F I0813 20:01:28.284285 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115174\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115167\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:01:28.284143234 +0000 UTC))" 2025-08-13T20:01:31.519181458+00:00 stderr F I0813 20:01:31.515303 1 core.go:341] ConfigMap "openshift-route-controller-manager/config" changes: {"apiVersion":"v1","data":{"openshift-route-controller-manager.client-ca.configmap":"d_XbdQ=="},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:01:31.520220907+00:00 stderr F I0813 20:01:31.520148 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", 
Name:"openshift-controller-manager-operator", UID:"945d64e1-c873-4e9d-b5ff-47904d2b347f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/config -n openshift-route-controller-manager: 2025-08-13T20:01:31.520220907+00:00 stderr F cause by changes in data.openshift-route-controller-manager.client-ca.configmap 2025-08-13T20:01:32.710407505+00:00 stderr F I0813 20:01:32.709976 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="f4b72f648a02bf4d745720b461c43dc88e5b533156c427b7905f426178ca53a1", new="d241a06236d5f1f5f86885717c7d346103e02b5d1ed9dcf4c19f7f338250fbcb") 2025-08-13T20:01:32.711219518+00:00 stderr F W0813 20:01:32.710474 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:32.711219518+00:00 stderr F I0813 20:01:32.710576 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="9fa7e5fbef9e286ed42003219ce81736b0a30e8ce2f7dd520c0c149b834fa6a0", new="db6902c5c5fee4f9a52663b228002d42646911159d139a2d4d9110064da348fd") 2025-08-13T20:01:32.711219518+00:00 stderr F I0813 20:01:32.710987 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:32.711219518+00:00 stderr F I0813 20:01:32.711074 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:32.711219518+00:00 stderr F I0813 20:01:32.711163 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:32.711678161+00:00 stderr F I0813 20:01:32.711622 1 base_controller.go:172] Shutting down StatusSyncer_openshift-controller-manager ... 2025-08-13T20:01:32.711678161+00:00 stderr F I0813 20:01:32.711623 1 base_controller.go:172] Shutting down OpenshiftControllerManagerStaticResources ... 
2025-08-13T20:01:32.711946829+00:00 stderr F I0813 20:01:32.711872 1 operator.go:151] Shutting down OpenShiftControllerManagerOperator 2025-08-13T20:01:32.712037742+00:00 stderr F I0813 20:01:32.711949 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:32.712098493+00:00 stderr F I0813 20:01:32.711995 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:01:32.713875504+00:00 stderr F I0813 20:01:32.712115 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:32.713875504+00:00 stderr F W0813 20:01:32.712173 1 builder.go:131] graceful termination failed, controllers failed with error: stopped
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/1.log
2025-12-13T00:13:16.145798576+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="log level info" 2025-12-13T00:13:16.145798576+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="TLS keys set, using https for metrics" 2025-12-13T00:13:16.316747700+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=pods" 2025-12-13T00:13:16.316747700+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=roles" 2025-12-13T00:13:16.316747700+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=rolebindings" 2025-12-13T00:13:16.316747700+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="skipping irrelevant gvr" gvr="batch/v1, Resource=jobs" 2025-12-13T00:13:16.316797723+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=serviceaccounts" 2025-12-13T00:13:16.316797723+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=configmaps" 2025-12-13T00:13:16.316797723+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=services" 2025-12-13T00:13:16.528158075+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="detected ability to filter informers" canFilter=true 2025-12-13T00:13:16.547356160+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="OpenShift Proxy API available - setting up watch for Proxy type" 2025-12-13T00:13:16.547356160+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="OpenShift Proxy query will be used to fetch cluster proxy configuration" 2025-12-13T00:13:16.547356160+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="[CSV NS Plug-in] setting up csv namespace plug-in for namespaces: []" 2025-12-13T00:13:16.547356160+00:00 stderr F time="2025-12-13T00:13:16Z" 
level=info msg="[CSV NS Plug-in] registering namespace informer" 2025-12-13T00:13:16.547356160+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="[CSV NS Plug-in] setting up namespace: " 2025-12-13T00:13:16.547356160+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="[CSV NS Plug-in] registered csv queue informer for: " 2025-12-13T00:13:16.547356160+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="[CSV NS Plug-in] finished setting up csv namespace labeler plugin" 2025-12-13T00:13:16.567307650+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-12-13T00:13:16.570968473+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="operator ready" 2025-12-13T00:13:16.570968473+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="starting informers..." 2025-12-13T00:13:16.570968473+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="informers started" 2025-12-13T00:13:16.570968473+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="waiting for caches to sync..." 2025-12-13T00:13:16.768100618+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="starting workers..." 
2025-12-13T00:13:16.768472590+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="Initializing cluster operator monitor for package server" 2025-12-13T00:13:16.768481120+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="monitoring the following components [operator-lifecycle-manager-packageserver]" monitor=clusteroperator 2025-12-13T00:13:16.773874622+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="starting clusteroperator monitor loop" monitor=clusteroperator 2025-12-13T00:13:16.774272795+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v2.OperatorCondition"} 2025-12-13T00:13:16.774286305+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.Role"} 2025-12-13T00:13:16.774286305+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.RoleBinding"} 2025-12-13T00:13:16.774304756+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.Deployment"} 2025-12-13T00:13:16.774313266+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting Controller","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition"} 2025-12-13T00:13:16.774498682+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting 
EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Operator"} 2025-12-13T00:13:16.774506943+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Deployment"} 2025-12-13T00:13:16.774506943+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Namespace"} 2025-12-13T00:13:16.774513933+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.CustomResourceDefinition"} 2025-12-13T00:13:16.774540434+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.APIService"} 2025-12-13T00:13:16.774540434+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.Subscription"} 2025-12-13T00:13:16.774548044+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.InstallPlan"} 2025-12-13T00:13:16.774548044+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-12-13T00:13:16.774568845+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v2.OperatorCondition"} 2025-12-13T00:13:16.774575575+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.774575575+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.774582475+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.774589095+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.774589095+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.774618556+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.774618556+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting 
EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.774639827+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting Controller","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator"} 2025-12-13T00:13:16.779450229+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-12-13T00:13:16.779450229+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v2.OperatorCondition"} 2025-12-13T00:13:16.779450229+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting Controller","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Deployment"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting 
EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Namespace"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Service"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.CustomResourceDefinition"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.APIService"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.Subscription"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v2.OperatorCondition"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting 
EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting Controller","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting 
EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.Subscription"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.InstallPlan"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting Controller","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"kind source: *v1.ClusterOperator"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"channel source: 0xc00049a180"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-12-13T00:13:16.781738755+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:16Z","msg":"Starting Controller","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator"} 2025-12-13T00:13:16.814377202+00:00 
stderr F time="2025-12-13T00:13:16Z" level=info msg="ClusterOperator api is present" monitor=clusteroperator 2025-12-13T00:13:16.814377202+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="initializing clusteroperator resource(s) for [operator-lifecycle-manager-packageserver]" monitor=clusteroperator 2025-12-13T00:13:16.879966307+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="initialized cluster resource - operator-lifecycle-manager-packageserver" monitor=clusteroperator 2025-12-13T00:13:17.090864313+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:17Z","msg":"Starting workers","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","worker count":1} 2025-12-13T00:13:17.090864313+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:17Z","msg":"Starting workers","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","worker count":1} 2025-12-13T00:13:17.090864313+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:17Z","msg":"Starting workers","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","worker count":1} 2025-12-13T00:13:17.090864313+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:17Z","msg":"Starting workers","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","worker count":1} 2025-12-13T00:13:17.090864313+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:17Z","msg":"Starting workers","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","worker count":1} 2025-12-13T00:13:17.090864313+00:00 stderr F {"level":"info","ts":"2025-12-13T00:13:17Z","msg":"Starting workers","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","worker count":1} 2025-12-13T00:13:17.324985809+00:00 stderr F 
time="2025-12-13T00:13:17Z" level=warning msg="unhealthy component: apiServices not installed" csv=packageserver id=wa74B namespace=openshift-operator-lifecycle-manager phase=Succeeded strategy=deployment 2025-12-13T00:13:17.329071387+00:00 stderr F I1213 00:13:17.326252 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28612", FieldPath:""}): type: 'Warning' reason: 'ComponentUnhealthy' apiServices not installed 2025-12-13T00:13:17.514715435+00:00 stderr F time="2025-12-13T00:13:17Z" level=warning msg="unhealthy component: apiServices not installed" csv=packageserver id=magN1 namespace=openshift-operator-lifecycle-manager phase=Succeeded strategy=deployment 2025-12-13T00:13:17.514921042+00:00 stderr F I1213 00:13:17.514897 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28612", FieldPath:""}): type: 'Warning' reason: 'ComponentUnhealthy' apiServices not installed 2025-12-13T00:13:17.533927421+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest version and try again" csv=packageserver id=vPbd/ namespace=openshift-operator-lifecycle-manager phase=Succeeded 2025-12-13T00:13:17.533927421+00:00 stderr F E1213 00:13:17.533388 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com 
"packageserver": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:13:17.613748022+00:00 stderr F time="2025-12-13T00:13:17Z" level=warning msg="needs reinstall: apiServices not installed" csv=packageserver id=kauXq namespace=openshift-operator-lifecycle-manager phase=Failed strategy=deployment 2025-12-13T00:13:17.613748022+00:00 stderr F I1213 00:13:17.613336 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40230", FieldPath:""}): type: 'Normal' reason: 'NeedsReinstall' apiServices not installed 2025-12-13T00:13:17.644597829+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="scheduling ClusterServiceVersion for install" csv=packageserver id=edId1 namespace=openshift-operator-lifecycle-manager phase=Pending 2025-12-13T00:13:17.644852088+00:00 stderr F I1213 00:13:17.644827 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40261", FieldPath:""}): type: 'Normal' reason: 'AllRequirementsMet' all requirements found, attempting install 2025-12-13T00:13:17.689030602+00:00 stderr F time="2025-12-13T00:13:17Z" level=warning msg="reusing existing cert packageserver-service-cert" 2025-12-13T00:13:17.827010709+00:00 stderr F I1213 00:13:17.826744 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40269", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' waiting for install components to report healthy 
2025-12-13T00:13:17.854945118+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="install strategy successful" csv=packageserver id=JztGU namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-12-13T00:13:17.855898450+00:00 stderr F I1213 00:13:17.855283 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40283", FieldPath:""}): type: 'Normal' reason: 'InstallWaiting' apiServices not installed 2025-12-13T00:13:17.898399918+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="install strategy successful" csv=packageserver id=cPva4 namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-12-13T00:13:17.909992647+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="install strategy successful" csv=packageserver id=WeT1Z namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-12-13T00:13:17.940988589+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="install strategy successful" csv=packageserver id=cL8bW namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-12-13T00:13:17.955739215+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="install strategy successful" csv=packageserver id=tUWKf namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-12-13T00:13:17.962691968+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="install strategy successful" csv=packageserver id=eVYtX namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-12-13T00:13:17.987554874+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="install strategy successful" csv=packageserver id=/zqgr namespace=openshift-operator-lifecycle-manager phase=Installing 
strategy=deployment 2025-12-13T00:13:18.099048880+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="install strategy successful" csv=packageserver id=Mad+L namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-12-13T00:13:18.195873793+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="install strategy successful" csv=packageserver id=pi0TT namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-12-13T00:13:18.337880986+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="install strategy successful" csv=packageserver id=EQeHm namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-12-13T00:13:18.337880986+00:00 stderr F I1213 00:13:18.337629 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"40287", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' install strategy completed with no errors 2025-12-13T00:14:26.491500003+00:00 stderr F 2025/12/13 00:14:26 http: TLS handshake error from 10.217.0.17:51004: tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "x509: invalid signature: parent certificate cannot sign this kind of certificate" while trying to verify candidate authority certificate "Red Hat, Inc.") home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator/0.log 2025-08-13T19:59:23.408499878+00:00 stderr F 
time="2025-08-13T19:59:23Z" level=info msg="log level info" 2025-08-13T19:59:23.408499878+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="TLS keys set, using https for metrics" 2025-08-13T19:59:26.390932373+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=configmaps" 2025-08-13T19:59:26.390932373+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="batch/v1, Resource=jobs" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=rolebindings" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=services" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=pods" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=roles" 2025-08-13T19:59:26.440401473+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="skipping irrelevant gvr" gvr="/v1, Resource=serviceaccounts" 2025-08-13T19:59:28.078977601+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="detected ability to filter informers" canFilter=true 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="OpenShift Proxy API available - setting up watch for Proxy type" 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="OpenShift Proxy query will be used to fetch cluster proxy configuration" 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] setting up csv namespace plug-in for namespaces: []" 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] registering namespace 
informer" 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] setting up namespace: " 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] registered csv queue informer for: " 2025-08-13T19:59:28.977500913+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="[CSV NS Plug-in] finished setting up csv namespace labeler plugin" 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="operator ready" 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="starting informers..." 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="informers started" 2025-08-13T19:59:29.042326870+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="waiting for caches to sync..." 2025-08-13T19:59:32.924188753+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="starting workers..." 
2025-08-13T19:59:33.031704498+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="Initializing cluster operator monitor for package server" 2025-08-13T19:59:33.032017877+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="monitoring the following components [operator-lifecycle-manager-packageserver]" monitor=clusteroperator 2025-08-13T19:59:33.055910848+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="starting clusteroperator monitor loop" monitor=clusteroperator 2025-08-13T19:59:33.246755698+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v2.OperatorCondition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.Role"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.RoleBinding"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","source":"kind source: *v1.Deployment"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting Controller","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting 
EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Operator"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Deployment"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.Namespace"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.CustomResourceDefinition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.APIService"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.Subscription"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.InstallPlan"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:33.322918839+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v2.OperatorCondition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting 
EventSource","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting Controller","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v2.OperatorCondition"} 2025-08-13T19:59:33.322918839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting Controller","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T19:59:33.382939450+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ClusterOperator api is present" monitor=clusteroperator 2025-08-13T19:59:33.382939450+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="initializing clusteroperator resource(s) for [operator-lifecycle-manager-packageserver]" monitor=clusteroperator 2025-08-13T19:59:33.542897060+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="initialized cluster resource - operator-lifecycle-manager-packageserver" monitor=clusteroperator 2025-08-13T19:59:33.553977566+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: 
*v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:33.554066839+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Deployment"} 2025-08-13T19:59:33.554105840+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Namespace"} 2025-08-13T19:59:33.554139551+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.Service"} 2025-08-13T19:59:33.554170521+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.CustomResourceDefinition"} 2025-08-13T19:59:33.554200772+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.APIService"} 2025-08-13T19:59:33.554544782+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1alpha1.Subscription"} 2025-08-13T19:59:33.554595594+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v2.OperatorCondition"} 
2025-08-13T19:59:33.555013716+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.555085628+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.555128149+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.555159860+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.555770787+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.556103227+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: *v1.PartialObjectMetadata"} 2025-08-13T19:59:33.556141448+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting EventSource","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","source":"kind source: 
*v1.PartialObjectMetadata"} 2025-08-13T19:59:33.556177709+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:33Z","msg":"Starting Controller","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T19:59:36.144826478+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.Subscription"} 2025-08-13T19:59:36.144826478+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:36.144826478+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","source":"kind source: *v1alpha1.InstallPlan"} 2025-08-13T19:59:36.144826478+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting Controller","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription"} 2025-08-13T19:59:36.144826478+00:00 stderr F 2025/08/13 19:59:36 http: TLS handshake error from 10.217.0.2:45622: EOF 2025-08-13T19:59:36.432375625+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"kind source: *v1.ClusterOperator"} 2025-08-13T19:59:36.432375625+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"channel source: 0xc000b88840"} 2025-08-13T19:59:36.432375625+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting EventSource","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","source":"kind source: *v1alpha1.ClusterServiceVersion"} 2025-08-13T19:59:36.432375625+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:36Z","msg":"Starting Controller","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator"} 2025-08-13T19:59:36.572124379+00:00 stderr F 2025/08/13 19:59:36 http: TLS handshake error from 10.217.0.2:45620: EOF 2025-08-13T19:59:36.941610091+00:00 stderr F time="2025-08-13T19:59:36Z" level=warning msg="install timed out" csv=packageserver id=89lYL namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:36.979210383+00:00 stderr F I0813 19:59:36.967910 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"23959", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install timeout 2025-08-13T19:59:37.168694394+00:00 stderr F I0813 19:59:37.167075 1 trace.go:236] Trace[437002520]: "DeltaFIFO Pop Process" ID:openshift-machine-api/master-user-data,Depth:124,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:36.927) (total time: 224ms): 2025-08-13T19:59:37.168694394+00:00 stderr F Trace[437002520]: [224.761967ms] [224.761967ms] END 2025-08-13T19:59:39.160935043+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition","worker count":1} 2025-08-13T19:59:39.161732036+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting 
workers","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator","worker count":1} 2025-08-13T19:59:39.162600960+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","worker count":1} 2025-08-13T19:59:39.163362922+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription","worker count":1} 2025-08-13T19:59:39.167356936+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion","worker count":1} 2025-08-13T19:59:39.181857399+00:00 stderr F {"level":"info","ts":"2025-08-13T19:59:39Z","msg":"Starting workers","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator","worker count":1} 2025-08-13T19:59:39.409155649+00:00 stderr F time="2025-08-13T19:59:39Z" level=warning msg="install timed out" csv=packageserver id=wFTJL namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:39.491326631+00:00 stderr F I0813 19:59:39.461032 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"23959", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install timeout 2025-08-13T19:59:39.678121055+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest 
version and try again" csv=packageserver id=YtP4v namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:39.678121055+00:00 stderr F E0813 19:59:39.677187 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:40.946415519+00:00 stderr F time="2025-08-13T19:59:40Z" level=warning msg="needs reinstall: apiServices not installed" csv=packageserver id=Ot2sV namespace=openshift-operator-lifecycle-manager phase=Failed strategy=deployment 2025-08-13T19:59:40.956756573+00:00 stderr F I0813 19:59:40.956497 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28220", FieldPath:""}): type: 'Normal' reason: 'NeedsReinstall' apiServices not installed 2025-08-13T19:59:41.432936177+00:00 stderr F time="2025-08-13T19:59:41Z" level=info msg="scheduling ClusterServiceVersion for install" csv=packageserver id=GN3+B namespace=openshift-operator-lifecycle-manager phase=Pending 2025-08-13T19:59:41.432936177+00:00 stderr F I0813 19:59:41.421229 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28286", FieldPath:""}): type: 'Normal' reason: 'AllRequirementsMet' all requirements found, attempting install 2025-08-13T19:59:42.403819742+00:00 stderr F time="2025-08-13T19:59:42Z" level=warning msg="reusing existing cert packageserver-service-cert" 2025-08-13T19:59:46.244202112+00:00 
stderr F I0813 19:59:46.239490 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28295", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' waiting for install components to report healthy 2025-08-13T19:59:47.823165362+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="install strategy successful" csv=packageserver id=6ssjB namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:47.824931572+00:00 stderr F I0813 19:59:47.823956 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28383", FieldPath:""}): type: 'Normal' reason: 'InstallWaiting' apiServices not installed 2025-08-13T19:59:48.876738185+00:00 stderr F time="2025-08-13T19:59:48Z" level=info msg="install strategy successful" csv=packageserver id=pY0V5 namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:48.880552904+00:00 stderr F I0813 19:59:48.877424 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28383", FieldPath:""}): type: 'Normal' reason: 'InstallWaiting' apiServices not installed 2025-08-13T19:59:48.965878547+00:00 stderr F time="2025-08-13T19:59:48Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"packageserver\": the object has been modified; please apply your changes to the latest version and try again" 
csv=packageserver id=k/B5q namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:48.976565881+00:00 stderr F E0813 19:59:48.973051 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "packageserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:49.634998790+00:00 stderr F time="2025-08-13T19:59:49Z" level=info msg="install strategy successful" csv=packageserver id=HgOMt namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:50.622862930+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="install strategy successful" csv=packageserver id=9BQnX namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:51.329144563+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="install strategy successful" csv=packageserver id=dbNQV namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:52.031285079+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="install strategy successful" csv=packageserver id=rj7n7 namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:52.332868995+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="install strategy successful" csv=packageserver id=JU2gq namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:52.533734821+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="install strategy successful" csv=packageserver id=I3wSK namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:52.820222188+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="install strategy successful" 
csv=packageserver id=nCH5J namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:53.054214688+00:00 stderr F time="2025-08-13T19:59:53Z" level=info msg="could not query for GVK in api discovery" err="the server is currently unable to handle the request" group=packages.operators.coreos.com kind=PackageManifest version=v1 2025-08-13T19:59:53.110286586+00:00 stderr F time="2025-08-13T19:59:53Z" level=info msg="could not update install status" csv=packageserver error="the server is currently unable to handle the request" id=nhmlX namespace=openshift-operator-lifecycle-manager phase=Installing 2025-08-13T19:59:53.110286586+00:00 stderr F E0813 19:59:53.107826 1 queueinformer_operator.go:319] sync {"update" "openshift-operator-lifecycle-manager/packageserver"} failed: the server is currently unable to handle the request 2025-08-13T19:59:53.290519303+00:00 stderr F time="2025-08-13T19:59:53Z" level=info msg="install strategy successful" csv=packageserver id=+yVoI namespace=openshift-operator-lifecycle-manager phase=Installing strategy=deployment 2025-08-13T19:59:53.297101411+00:00 stderr F I0813 19:59:53.296413 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"0beab272-7637-4d44-b3aa-502dcafbc929", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"28423", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' install strategy completed with no errors 2025-08-13T20:03:41.629323813+00:00 stderr F time="2025-08-13T20:03:41Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:41.629323813+00:00 stderr F time="2025-08-13T20:03:41Z" level=error msg="status update error - failed to ensure initial clusteroperator 
name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:56.983016152+00:00 stderr F time="2025-08-13T20:03:56Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:56.984258098+00:00 stderr F time="2025-08-13T20:03:56Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:59.043288057+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:59.043288057+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.071758619+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 
10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:59.071758619+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.102757923+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="update existing cluster role failed: nil" error="Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/olm.og.openshift-cluster-monitoring.admin-2SOrzhaSHllEqB6Becsc9Z2BniBuXZxdBrPmIq\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.103560566+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="update existing cluster role failed: nil" error="Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/olm.og.openshift-cluster-monitoring.edit-brB9auo7mhdQtycRdrSZm5XlKKbUjCe698FPlD\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.105920163+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="update existing cluster role failed: nil" error="Get \"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/olm.og.openshift-cluster-monitoring.view-9QCGFNcofBHQ2DeWEf2qFa4NWqTOGskUedO4Tz\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:59.117917436+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:03:59.118091211+00:00 stderr F time="2025-08-13T20:03:59Z" level=error msg="failed to get a client for operator deployment - Get 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:04:02.371881741+00:00 stderr F time="2025-08-13T20:04:02Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:04:02.371920852+00:00 stderr F time="2025-08-13T20:04:02Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:04:02.610171729+00:00 stderr F time="2025-08-13T20:04:02Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:04:02.610171729+00:00 stderr F time="2025-08-13T20:04:02Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:04:05.519098082+00:00 stderr F time="2025-08-13T20:04:05Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 
2025-08-13T20:04:05.519098082+00:00 stderr F time="2025-08-13T20:04:05Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:05:07.886185865+00:00 stderr F time="2025-08-13T20:05:07Z" level=error msg="status update error - failed to ensure initial clusteroperator name=operator-lifecycle-manager-packageserver - Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver\": dial tcp 10.217.4.1:443: connect: connection refused" monitor=clusteroperator 2025-08-13T20:05:07.897930811+00:00 stderr F time="2025-08-13T20:05:07Z" level=error msg="failed to get a client for operator deployment - Get \"https://10.217.4.1:443/apis/operators.coreos.com/v1/namespaces/openshift-operator-lifecycle-manager/operatorgroups\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:42:43.424379186+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="exiting from clusteroperator monitor loop" monitor=clusteroperator 2025-08-13T20:42:43.425568160+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for non leader election runnables"} 2025-08-13T20:42:43.432031667+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for leader election runnables"} 2025-08-13T20:42:43.433208311+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription"} 2025-08-13T20:42:43.433208311+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to 
finish","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T20:42:43.433258182+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T20:42:43.433344975+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T20:42:43.433344975+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"clusterserviceversion","controllerGroup":"operators.coreos.com","controllerKind":"ClusterServiceVersion"} 2025-08-13T20:42:43.433344975+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"subscription","controllerGroup":"operators.coreos.com","controllerKind":"Subscription"} 2025-08-13T20:42:43.433360345+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator"} 2025-08-13T20:42:43.433360345+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition"} 2025-08-13T20:42:43.433360345+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"clusteroperator","controllerGroup":"config.openshift.io","controllerKind":"ClusterOperator"} 2025-08-13T20:42:43.433429997+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers 
finished","controller":"operatorcondition","controllerGroup":"operators.coreos.com","controllerKind":"OperatorCondition"} 2025-08-13T20:42:43.433429997+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator"} 2025-08-13T20:42:43.433429997+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"All workers finished","controller":"operator","controllerGroup":"operators.coreos.com","controllerKind":"Operator"} 2025-08-13T20:42:43.433429997+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for caches"} 2025-08-13T20:42:43.435462176+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for webhooks"} 2025-08-13T20:42:43.435462176+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Stopping and waiting for HTTP servers"} 2025-08-13T20:42:43.435462176+00:00 stderr F {"level":"info","ts":"2025-08-13T20:42:43Z","msg":"Wait completed, proceeding to shutdown the manager"}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/4.log
2025-12-13T00:11:03.377648100+00:00 stderr F I1213 00:11:03.377399 1 template.go:560] "msg"="starting router" "logger"="router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 4d9b8c4afa6cd89b41f4bd5e7c09ccddd8679bc6\nversionFromGit: 4.0.0-527-g4d9b8c4a\ngitTreeState: clean\nbuildDate: 2024-06-13T22:04:06Z\n" 2025-12-13T00:11:03.383244562+00:00 stderr F I1213 00:11:03.383200 1 metrics.go:156] "msg"="router health and metrics port listening on HTTP and HTTPS" "address"="0.0.0.0:1936" "logger"="metrics" 2025-12-13T00:11:03.391717292+00:00 stderr F I1213 00:11:03.391643 1 router.go:217] "msg"="creating a new template router" "logger"="template" "writeDir"="/var/lib/haproxy" 2025-12-13T00:11:03.391751873+00:00 stderr F I1213 00:11:03.391730 1 router.go:302] "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" "logger"="template" 2025-12-13T00:11:03.393140590+00:00 stderr F I1213 00:11:03.393114 1 router.go:372] "msg"="watching for changes" "logger"="template" "path"="/etc/pki/tls/private" 2025-12-13T00:11:03.393573602+00:00 stderr F I1213 00:11:03.393171 1 router.go:282] "msg"="router is including routes in all namespaces" "logger"="router" 2025-12-13T00:11:03.865370504+00:00 stderr F I1213 00:11:03.865316 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:11:03.865370504+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:11:03.865370504+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:11:04.865585777+00:00 stderr F I1213
00:11:04.865541 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:11:04.865585777+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:11:04.865585777+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:11:05.864244927+00:00 stderr F I1213 00:11:05.864182 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:11:05.864244927+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:11:05.864244927+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:11:06.865096887+00:00 stderr F I1213 00:11:06.865055 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:11:06.865096887+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:11:06.865096887+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:11:07.863391787+00:00 stderr F I1213 00:11:07.863306 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:11:07.863391787+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:11:07.863391787+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:11:08.863782094+00:00 stderr F I1213 00:11:08.863717 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:11:08.863782094+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:11:08.863782094+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:11:09.864809299+00:00 stderr F I1213 00:11:09.864682 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:11:09.864809299+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:11:09.864809299+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:11:10.862855436+00:00 stderr F I1213 00:11:10.862784 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:11:10.862855436+00:00 stderr F 
[-]backend-http failed: backend reported failure
2025-12-13T00:11:10.862855436+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:11.862814619+00:00 stderr F I1213 00:11:11.862684 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:11.862814619+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:11.862814619+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:12.864123919+00:00 stderr F I1213 00:11:12.864045 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:12.864123919+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:12.864123919+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:13.862870587+00:00 stderr F I1213 00:11:13.862776 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:13.862870587+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:13.862870587+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:14.863778457+00:00 stderr F I1213 00:11:14.863645 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:14.863778457+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:14.863778457+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:15.863245405+00:00 stderr F I1213 00:11:15.863106 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:15.863245405+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:15.863245405+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:16.864475754+00:00 stderr F I1213 00:11:16.864399 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:16.864475754+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:16.864475754+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:17.863902402+00:00 stderr F I1213 00:11:17.863803 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:17.863902402+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:17.863902402+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:18.863750833+00:00 stderr F I1213 00:11:18.863651 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:18.863750833+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:18.863750833+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:19.863717726+00:00 stderr F I1213 00:11:19.863624 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:19.863717726+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:19.863717726+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:20.867369335+00:00 stderr F I1213 00:11:20.867244 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:20.867369335+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:20.867369335+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:21.863665463+00:00 stderr F I1213 00:11:21.863281 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:21.863665463+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:21.863665463+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:22.867986091+00:00 stderr F I1213 00:11:22.867080 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:22.867986091+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:22.867986091+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:23.864020792+00:00 stderr F I1213 00:11:23.863222 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:23.864020792+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:23.864020792+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:24.863083549+00:00 stderr F I1213 00:11:24.862990 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:24.863083549+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:24.863083549+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:25.865703458+00:00 stderr F I1213 00:11:25.865046 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:25.865703458+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:25.865703458+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:26.863532490+00:00 stderr F I1213 00:11:26.863423 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:26.863532490+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:26.863532490+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:27.862852605+00:00 stderr F I1213 00:11:27.862767 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:27.862852605+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:27.862852605+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:28.863838317+00:00 stderr F I1213 00:11:28.863718 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:28.863838317+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:28.863838317+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:29.863412449+00:00 stderr F I1213 00:11:29.863282 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:29.863412449+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:29.863412449+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:30.863325491+00:00 stderr F I1213 00:11:30.863261 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:30.863325491+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:30.863325491+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:31.864541850+00:00 stderr F I1213 00:11:31.864463 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:31.864541850+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:31.864541850+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:32.863618087+00:00 stderr F I1213 00:11:32.863535 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:32.863618087+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:32.863618087+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:33.392178464+00:00 stderr F W1213 00:11:33.392045 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:11:33.392245136+00:00 stderr F I1213 00:11:33.392186 1 trace.go:236] Trace[842260070]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Dec-2025 00:11:03.389) (total time: 30002ms):
2025-12-13T00:11:33.392245136+00:00 stderr F Trace[842260070]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (00:11:33.392)
2025-12-13T00:11:33.392245136+00:00 stderr F Trace[842260070]: [30.002237651s] [30.002237651s] END
2025-12-13T00:11:33.392245136+00:00 stderr F E1213 00:11:33.392211 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:11:33.394386078+00:00 stderr F W1213 00:11:33.394311 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:11:33.394507711+00:00 stderr F I1213 00:11:33.394481 1 trace.go:236] Trace[1689451182]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Dec-2025 00:11:03.393) (total time: 30000ms):
2025-12-13T00:11:33.394507711+00:00 stderr F Trace[1689451182]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:11:33.394)
2025-12-13T00:11:33.394507711+00:00 stderr F Trace[1689451182]: [30.000488937s] [30.000488937s] END
2025-12-13T00:11:33.394622425+00:00 stderr F E1213 00:11:33.394590 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:11:33.394858971+00:00 stderr F W1213 00:11:33.394803 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:11:33.395005615+00:00 stderr F I1213 00:11:33.394976 1 trace.go:236] Trace[1431477786]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Dec-2025 00:11:03.394) (total time: 30000ms):
2025-12-13T00:11:33.395005615+00:00 stderr F Trace[1431477786]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:11:33.394)
2025-12-13T00:11:33.395005615+00:00 stderr F Trace[1431477786]: [30.0006288s] [30.0006288s] END
2025-12-13T00:11:33.395085998+00:00 stderr F E1213 00:11:33.395061 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:11:33.863635612+00:00 stderr F I1213 00:11:33.863568 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:33.863635612+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:33.863635612+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:34.863362499+00:00 stderr F I1213 00:11:34.863275 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:34.863362499+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:34.863362499+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:35.864078174+00:00 stderr F I1213 00:11:35.864005 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:35.864078174+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:35.864078174+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:36.864263173+00:00 stderr F I1213 00:11:36.864188 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:36.864263173+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:36.864263173+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:37.863893717+00:00 stderr F I1213 00:11:37.863840 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:37.863893717+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:37.863893717+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:38.863884610+00:00 stderr F I1213 00:11:38.863796 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:38.863884610+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:38.863884610+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:39.863962867+00:00 stderr F I1213 00:11:39.863818 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:39.863962867+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:39.863962867+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:40.863686693+00:00 stderr F I1213 00:11:40.863571 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:40.863686693+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:40.863686693+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:41.863708918+00:00 stderr F I1213 00:11:41.863625 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:41.863708918+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:41.863708918+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:42.863210308+00:00 stderr F I1213 00:11:42.863120 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:42.863210308+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:42.863210308+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:43.863389047+00:00 stderr F I1213 00:11:43.863306 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:43.863389047+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:43.863389047+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:44.864379540+00:00 stderr F I1213 00:11:44.864292 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:44.864379540+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:44.864379540+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:45.862261103+00:00 stderr F I1213 00:11:45.862187 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:45.862261103+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:45.862261103+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:46.863114504+00:00 stderr F I1213 00:11:46.863048 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:46.863114504+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:46.863114504+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:47.862947233+00:00 stderr F I1213 00:11:47.862848 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:47.862947233+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:47.862947233+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:48.863874595+00:00 stderr F I1213 00:11:48.863801 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:48.863874595+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:48.863874595+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:49.864051385+00:00 stderr F I1213 00:11:49.863909 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:49.864051385+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:49.864051385+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:50.863491043+00:00 stderr F I1213 00:11:50.863352 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:50.863491043+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:50.863491043+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:51.863198738+00:00 stderr F I1213 00:11:51.863117 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:51.863198738+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:51.863198738+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:52.864179741+00:00 stderr F I1213 00:11:52.864087 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:52.864179741+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:52.864179741+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:53.863910548+00:00 stderr F I1213 00:11:53.863818 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:53.863910548+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:53.863910548+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:54.867131784+00:00 stderr F I1213 00:11:54.867026 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:54.867131784+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:54.867131784+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:55.863899016+00:00 stderr F I1213 00:11:55.863836 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:55.863899016+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:55.863899016+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:56.863170197+00:00 stderr F I1213 00:11:56.863085 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:56.863170197+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:56.863170197+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:57.864864229+00:00 stderr F I1213 00:11:57.864777 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:57.864864229+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:57.864864229+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:58.864595649+00:00 stderr F I1213 00:11:58.864470 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:58.864595649+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:58.864595649+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:11:59.863563770+00:00 stderr F I1213 00:11:59.863502 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:11:59.863563770+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:11:59.863563770+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:00.863856995+00:00 stderr F I1213 00:12:00.863775 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:00.863856995+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:00.863856995+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:01.863358430+00:00 stderr F I1213 00:12:01.863302 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:01.863358430+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:01.863358430+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:02.862833544+00:00 stderr F I1213 00:12:02.862742 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:02.862833544+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:02.862833544+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:03.863322845+00:00 stderr F I1213 00:12:03.863224 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:03.863322845+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:03.863322845+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:04.505679255+00:00 stderr F W1213 00:12:04.505591 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:04.505679255+00:00 stderr F I1213 00:12:04.505659 1 trace.go:236] Trace[1291301429]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Dec-2025 00:11:34.505) (total time: 30000ms):
2025-12-13T00:12:04.505679255+00:00 stderr F Trace[1291301429]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:12:04.505)
2025-12-13T00:12:04.505679255+00:00 stderr F Trace[1291301429]: [30.000564046s] [30.000564046s] END
2025-12-13T00:12:04.505739037+00:00 stderr F E1213 00:12:04.505676 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:04.728637765+00:00 stderr F W1213 00:12:04.728510 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:04.728637765+00:00 stderr F I1213 00:12:04.728567 1 trace.go:236] Trace[1675089735]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Dec-2025 00:11:34.727) (total time: 30001ms):
2025-12-13T00:12:04.728637765+00:00 stderr F Trace[1675089735]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:12:04.728)
2025-12-13T00:12:04.728637765+00:00 stderr F Trace[1675089735]: [30.001411223s] [30.001411223s] END
2025-12-13T00:12:04.728637765+00:00 stderr F E1213 00:12:04.728579 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:04.855210612+00:00 stderr F W1213 00:12:04.855113 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:04.855210612+00:00 stderr F I1213 00:12:04.855192 1 trace.go:236] Trace[989716109]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Dec-2025 00:11:34.854) (total time: 30000ms):
2025-12-13T00:12:04.855210612+00:00 stderr F Trace[989716109]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:12:04.855)
2025-12-13T00:12:04.855210612+00:00 stderr F Trace[989716109]: [30.000845921s] [30.000845921s] END
2025-12-13T00:12:04.855262123+00:00 stderr F E1213 00:12:04.855210 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:04.863666112+00:00 stderr F I1213 00:12:04.863589 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:04.863666112+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:04.863666112+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:05.863552077+00:00 stderr F I1213 00:12:05.863454 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:05.863552077+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:05.863552077+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:06.862533579+00:00 stderr F I1213 00:12:06.862461 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:06.862533579+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:06.862533579+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:07.862487825+00:00 stderr F I1213 00:12:07.862420 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:07.862487825+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:07.862487825+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:08.890556112+00:00 stderr F I1213 00:12:08.890473 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:08.890556112+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:08.890556112+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:09.863142338+00:00 stderr F I1213 00:12:09.863024 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:09.863142338+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:09.863142338+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:10.863236908+00:00 stderr F I1213 00:12:10.863167 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:10.863236908+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:10.863236908+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:11.863737619+00:00 stderr F I1213 00:12:11.863653 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:11.863737619+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:11.863737619+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:12.863980093+00:00 stderr F I1213 00:12:12.863908 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:12.863980093+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:12.863980093+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:13.862656797+00:00 stderr F I1213 00:12:13.862599 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:13.862656797+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:13.862656797+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:14.863295801+00:00 stderr F I1213 00:12:14.863248 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:14.863295801+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:14.863295801+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:15.863461504+00:00 stderr F I1213 00:12:15.863412 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:15.863461504+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:15.863461504+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:16.864314803+00:00 stderr F I1213 00:12:16.864233 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:16.864314803+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:16.864314803+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:17.863301455+00:00 stderr F I1213 00:12:17.863242 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:17.863301455+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:17.863301455+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:18.863798366+00:00 stderr F I1213 00:12:18.863730 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:18.863798366+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:18.863798366+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:19.862826299+00:00 stderr F I1213 00:12:19.862777 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:19.862826299+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:19.862826299+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:20.864033809+00:00 stderr F I1213 00:12:20.863927 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:20.864033809+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:20.864033809+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:21.863439622+00:00 stderr F I1213 00:12:21.863363 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:21.863439622+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:21.863439622+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:22.863558743+00:00 stderr F I1213 00:12:22.863481 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:22.863558743+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:22.863558743+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:23.863326834+00:00 stderr F I1213 00:12:23.863246 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:23.863326834+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:23.863326834+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:24.863615869+00:00 stderr F I1213 00:12:24.863541 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:24.863615869+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:24.863615869+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:25.862957210+00:00 stderr F I1213 00:12:25.862879 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:25.862957210+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:25.862957210+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:26.863842020+00:00 stderr F I1213 00:12:26.863784 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:26.863842020+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:26.863842020+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:27.863660814+00:00 stderr F I1213 00:12:27.863564 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:27.863660814+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:27.863660814+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:28.864722080+00:00 stderr F I1213 00:12:28.864612 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:28.864722080+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:28.864722080+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:29.863809824+00:00 stderr F I1213 00:12:29.863718 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:29.863809824+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:29.863809824+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:30.863460323+00:00 stderr F I1213 00:12:30.863403 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:30.863460323+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:30.863460323+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:31.863243995+00:00 stderr F I1213 00:12:31.863170 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:31.863243995+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:31.863243995+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:32.863227372+00:00 stderr F I1213 00:12:32.863156 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:32.863227372+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:32.863227372+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:33.863351043+00:00 stderr F I1213 00:12:33.863275 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:33.863351043+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:33.863351043+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:34.864462919+00:00 stderr F I1213 00:12:34.864344 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:34.864462919+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:34.864462919+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:35.863362318+00:00 stderr F I1213 00:12:35.863303 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:35.863362318+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:35.863362318+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:36.811580482+00:00 stderr F W1213 00:12:36.811469 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:36.811650974+00:00 stderr F I1213 00:12:36.811616 1 trace.go:236] Trace[570848254]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Dec-2025 00:12:06.810) (total time: 30001ms):
2025-12-13T00:12:36.811650974+00:00 stderr F Trace[570848254]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:12:36.811)
2025-12-13T00:12:36.811650974+00:00 stderr F Trace[570848254]: [30.00130954s] [30.00130954s] END
2025-12-13T00:12:36.811661594+00:00 stderr F E1213 00:12:36.811645 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:36.863571902+00:00 stderr F I1213 00:12:36.863461 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:36.863571902+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:36.863571902+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:36.891700752+00:00 stderr F W1213 00:12:36.891624 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:36.891890327+00:00 stderr F I1213 00:12:36.891841 1 trace.go:236] Trace[1112355127]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Dec-2025 00:12:06.890) (total time: 30001ms):
2025-12-13T00:12:36.891890327+00:00 stderr F Trace[1112355127]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:12:36.891)
2025-12-13T00:12:36.891890327+00:00 stderr F Trace[1112355127]: [30.001042373s] [30.001042373s] END
2025-12-13T00:12:36.892053711+00:00 stderr F E1213 00:12:36.892014 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:37.579525394+00:00 stderr F W1213 00:12:37.579422 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:37.579525394+00:00 stderr F I1213 00:12:37.579513 1 trace.go:236] Trace[1439180330]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Dec-2025 00:12:07.578) (total time: 30000ms):
2025-12-13T00:12:37.579525394+00:00 stderr F Trace[1439180330]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:12:37.579)
2025-12-13T00:12:37.579525394+00:00 stderr F Trace[1439180330]: [30.000930201s] [30.000930201s] END
2025-12-13T00:12:37.579585455+00:00 stderr F E1213 00:12:37.579531 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:37.864267757+00:00 stderr F I1213 00:12:37.864173 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:37.864267757+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:37.864267757+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:38.863036303+00:00 stderr F I1213 00:12:38.862971 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:38.863036303+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:38.863036303+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:39.863694088+00:00 stderr F I1213 00:12:39.863609 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:39.863694088+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:39.863694088+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:40.864079886+00:00 stderr F I1213 00:12:40.864022 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:40.864079886+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:40.864079886+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:41.294549075+00:00 stderr F W1213 00:12:41.294420 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-12-13T00:12:41.294549075+00:00 stderr F E1213 00:12:41.294500 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-12-13T00:12:41.760462233+00:00 stderr F I1213 00:12:41.760367 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33
2025-12-13T00:12:41.863434977+00:00 stderr F I1213 00:12:41.863323 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:41.863434977+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:41.863434977+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:42.863513457+00:00 stderr F I1213 00:12:42.863456 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:42.863513457+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:42.863513457+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:42.912493249+00:00 stderr F I1213 00:12:42.912437 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2025-12-13T00:12:43.863993198+00:00 stderr F I1213 00:12:43.863865 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:43.863993198+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:43.863993198+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:44.864225671+00:00 stderr F I1213 00:12:44.864137 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:44.864225671+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:44.864225671+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:45.862785401+00:00 stderr F I1213 00:12:45.862660 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-12-13T00:12:45.862785401+00:00 stderr F [-]backend-http failed: backend reported failure
2025-12-13T00:12:45.862785401+00:00 stderr F [-]has-synced failed: Router not synced
2025-12-13T00:12:46.864386750+00:00 stderr F I1213 
00:12:46.864274 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:46.864386750+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:46.864386750+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:47.863499935+00:00 stderr F I1213 00:12:47.863441 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:47.863499935+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:47.863499935+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:48.864136169+00:00 stderr F I1213 00:12:48.864066 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:48.864136169+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:48.864136169+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:49.864757913+00:00 stderr F I1213 00:12:49.864664 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:49.864757913+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:49.864757913+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:50.342475239+00:00 stderr F W1213 00:12:50.342374 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-12-13T00:12:50.342475239+00:00 stderr F E1213 00:12:50.342440 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-12-13T00:12:50.864135025+00:00 stderr F I1213 00:12:50.863911 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:50.864135025+00:00 stderr F [-]backend-http failed: backend reported failure 
2025-12-13T00:12:50.864135025+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:51.864002619+00:00 stderr F I1213 00:12:51.863893 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:51.864002619+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:51.864002619+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:52.864344666+00:00 stderr F I1213 00:12:52.864268 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:52.864344666+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:52.864344666+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:53.863083990+00:00 stderr F I1213 00:12:53.863028 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:53.863083990+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:53.863083990+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:54.863278153+00:00 stderr F I1213 00:12:54.863199 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:54.863278153+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:54.863278153+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:55.866315880+00:00 stderr F I1213 00:12:55.865754 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:55.866315880+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:55.866315880+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:56.863991908+00:00 stderr F I1213 00:12:56.863873 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:56.863991908+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:56.863991908+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:57.863550473+00:00 stderr F I1213 
00:12:57.863465 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:57.863550473+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:57.863550473+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:58.863858520+00:00 stderr F I1213 00:12:58.863741 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:58.863858520+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:58.863858520+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:12:59.864618548+00:00 stderr F I1213 00:12:59.864545 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:12:59.864618548+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:12:59.864618548+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:13:00.863425374+00:00 stderr F I1213 00:13:00.863282 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:13:00.863425374+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:13:00.863425374+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:13:01.863802720+00:00 stderr F I1213 00:13:01.863704 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:13:01.863802720+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:13:01.863802720+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:13:02.863070908+00:00 stderr F I1213 00:13:02.863003 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-12-13T00:13:02.863070908+00:00 stderr F [-]backend-http failed: backend reported failure 2025-12-13T00:13:02.863070908+00:00 stderr F [-]has-synced failed: Router not synced 2025-12-13T00:13:02.873445037+00:00 stderr F E1213 00:13:02.873380 1 factory.go:130] failed to sync cache for *v1.Route shared informer 2025-12-13T00:13:02.875257047+00:00 stderr F I1213 
00:13:02.875221 1 template.go:844] "msg"="Shutdown requested, waiting 45s for new connections to cease" "logger"="router" 2025-12-13T00:13:02.879850752+00:00 stderr F E1213 00:13:02.879772 1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory 2025-12-13T00:13:02.962172318+00:00 stderr F I1213 00:13:02.962105 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-12-13T00:13:47.878593709+00:00 stderr F I1213 00:13:47.878063 1 template.go:846] "msg"="Instructing the template router to terminate" "logger"="router" 2025-12-13T00:13:47.910043836+00:00 stderr F I1213 00:13:47.909988 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Shutting down\n" 2025-12-13T00:13:47.910043836+00:00 stderr F I1213 00:13:47.910036 1 template.go:850] "msg"="Shutdown complete, exiting" "logger"="router"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/5.log
2025-12-13T00:14:21.960352047+00:00 stderr F I1213 00:14:21.959592 1 template.go:560] "msg"="starting router" "logger"="router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 4d9b8c4afa6cd89b41f4bd5e7c09ccddd8679bc6\nversionFromGit: 4.0.0-527-g4d9b8c4a\ngitTreeState: clean\nbuildDate: 2024-06-13T22:04:06Z\n" 2025-12-13T00:14:21.968300003+00:00 stderr F I1213 00:14:21.968185 1 metrics.go:156] "msg"="router health and metrics port listening on HTTP and HTTPS" "address"="0.0.0.0:1936" "logger"="metrics" 2025-12-13T00:14:21.971828396+00:00 stderr F I1213 00:14:21.971125 1
router.go:217] "msg"="creating a new template router" "logger"="template" "writeDir"="/var/lib/haproxy" 2025-12-13T00:14:21.971828396+00:00 stderr F I1213 00:14:21.971185 1 router.go:302] "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" "logger"="template" 2025-12-13T00:14:21.971828396+00:00 stderr F I1213 00:14:21.971635 1 router.go:372] "msg"="watching for changes" "logger"="template" "path"="/etc/pki/tls/private" 2025-12-13T00:14:21.971828396+00:00 stderr F I1213 00:14:21.971700 1 router.go:282] "msg"="router is including routes in all namespaces" "logger"="router" 2025-12-13T00:14:21.992646676+00:00 stderr F I1213 00:14:21.992543 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-12-13T00:14:21.997824702+00:00 stderr F I1213 00:14:21.997359 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33 2025-12-13T00:14:21.997824702+00:00 stderr F I1213 00:14:21.997559 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-12-13T00:14:22.077433983+00:00 stderr F E1213 00:14:22.077384 1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory 2025-12-13T00:14:22.131413869+00:00 stderr F I1213 00:14:22.131360 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-12-13T00:14:27.524635902+00:00 stderr F I1213 00:14:27.524555 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-12-13T00:15:43.096458611+00:00 stderr F I1213 00:15:43.096370 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking 
http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-12-13T00:18:52.804121468+00:00 stderr F I1213 00:18:52.804043 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-12-13T00:18:57.807425401+00:00 stderr F I1213 00:18:57.807361 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n" 2025-12-13T00:21:26.122550269+00:00 stderr F I1213 00:21:26.121909 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124 2025-12-13T00:21:26.317970052+00:00 stderr F I1213 00:21:26.317854 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/2.log
2025-08-13T19:56:16.897628970+00:00 stderr F I0813 19:56:16.897259 1 template.go:560] "msg"="starting router" "logger"="router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 4d9b8c4afa6cd89b41f4bd5e7c09ccddd8679bc6\nversionFromGit: 4.0.0-527-g4d9b8c4a\ngitTreeState: clean\nbuildDate: 2024-06-13T22:04:06Z\n" 2025-08-13T19:56:16.903853848+00:00 stderr F I0813 19:56:16.902128 1 metrics.go:156] "msg"="router health and metrics port listening on HTTP and HTTPS" "address"="0.0.0.0:1936" "logger"="metrics" 2025-08-13T19:56:16.906242926+00:00 stderr F I0813 19:56:16.906162 1 router.go:217] "msg"="creating a new template router" "logger"="template"
"writeDir"="/var/lib/haproxy" 2025-08-13T19:56:16.906321448+00:00 stderr F I0813 19:56:16.906281 1 router.go:302] "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" "logger"="template" 2025-08-13T19:56:16.906988387+00:00 stderr F I0813 19:56:16.906917 1 router.go:372] "msg"="watching for changes" "logger"="template" "path"="/etc/pki/tls/private" 2025-08-13T19:56:16.907098210+00:00 stderr F I0813 19:56:16.907056 1 router.go:282] "msg"="router is including routes in all namespaces" "logger"="router" 2025-08-13T19:56:17.432753830+00:00 stderr F I0813 19:56:17.432644 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:17.432753830+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:17.432753830+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:18.431893231+00:00 stderr F I0813 19:56:18.431619 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:18.431893231+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:18.431893231+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:19.432022349+00:00 stderr F I0813 19:56:19.431929 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:19.432022349+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:19.432022349+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:20.432639101+00:00 stderr F I0813 19:56:20.432527 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:20.432639101+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:20.432639101+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:21.432452011+00:00 stderr F I0813 19:56:21.432309 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:21.432452011+00:00 stderr F [-]backend-http failed: backend reported 
failure 2025-08-13T19:56:21.432452011+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:22.432538597+00:00 stderr F I0813 19:56:22.432355 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:22.432538597+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:22.432538597+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:23.432382067+00:00 stderr F I0813 19:56:23.432308 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:23.432382067+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:23.432382067+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:24.433473033+00:00 stderr F I0813 19:56:24.433330 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:24.433473033+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:24.433473033+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:25.433471328+00:00 stderr F I0813 19:56:25.431263 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:25.433471328+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:25.433471328+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:26.432059721+00:00 stderr F I0813 19:56:26.431979 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:26.432059721+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:26.432059721+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:27.434503986+00:00 stderr F I0813 19:56:27.434363 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:27.434503986+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:27.434503986+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:28.432741230+00:00 stderr F 
I0813 19:56:28.432682 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:28.432741230+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:28.432741230+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:29.439648112+00:00 stderr F I0813 19:56:29.439264 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:29.439648112+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:29.439648112+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:30.435213880+00:00 stderr F I0813 19:56:30.433077 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:30.435213880+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:30.435213880+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:31.433340482+00:00 stderr F I0813 19:56:31.433017 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:31.433340482+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:31.433340482+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:32.433555683+00:00 stderr F I0813 19:56:32.433494 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:32.433555683+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:32.433555683+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:33.434362750+00:00 stderr F I0813 19:56:33.434286 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:33.434362750+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:33.434362750+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:34.432458420+00:00 stderr F I0813 19:56:34.432307 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:34.432458420+00:00 stderr F 
[-]backend-http failed: backend reported failure 2025-08-13T19:56:34.432458420+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:35.432746463+00:00 stderr F I0813 19:56:35.432248 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:35.432746463+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:35.432746463+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:36.433244751+00:00 stderr F I0813 19:56:36.433192 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:36.433244751+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:36.433244751+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:37.431705222+00:00 stderr F I0813 19:56:37.431613 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:37.431705222+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:37.431705222+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:38.432325045+00:00 stderr F I0813 19:56:38.432166 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:38.432325045+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:38.432325045+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:39.431535137+00:00 stderr F I0813 19:56:39.431441 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:39.431535137+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:39.431535137+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:40.431416609+00:00 stderr F I0813 19:56:40.431310 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:40.431416609+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:40.431416609+00:00 stderr F [-]has-synced failed: Router not synced 
2025-08-13T19:56:41.431708842+00:00 stderr F I0813 19:56:41.431601 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:41.431708842+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:41.431708842+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:42.433355784+00:00 stderr F I0813 19:56:42.433239 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:42.433355784+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:42.433355784+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:43.433293677+00:00 stderr F I0813 19:56:43.433204 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:43.433293677+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:43.433293677+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:44.437035918+00:00 stderr F I0813 19:56:44.436975 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:44.437035918+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:44.437035918+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:45.434593002+00:00 stderr F I0813 19:56:45.434106 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:45.434593002+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:45.434593002+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:46.433540697+00:00 stderr F I0813 19:56:46.433398 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:46.433540697+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:46.433540697+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:46.905667848+00:00 stderr F W0813 19:56:46.905508 1 reflector.go:539] 
github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.905970527+00:00 stderr F I0813 19:56:46.905939 1 trace.go:236] Trace[1642337220]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Aug-2025 19:56:16.903) (total time: 30002ms): 2025-08-13T19:56:46.905970527+00:00 stderr F Trace[1642337220]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:46.905) 2025-08-13T19:56:46.905970527+00:00 stderr F Trace[1642337220]: [30.002025157s] [30.002025157s] END 2025-08-13T19:56:46.906068190+00:00 stderr F E0813 19:56:46.906047 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.908969293+00:00 stderr F W0813 19:56:46.908900 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.909091246+00:00 stderr F I0813 19:56:46.909070 1 trace.go:236] Trace[436705070]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:56:16.907) (total time: 30001ms): 2025-08-13T19:56:46.909091246+00:00 stderr F Trace[436705070]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:46.908) 2025-08-13T19:56:46.909091246+00:00 stderr F Trace[436705070]: [30.001298066s] 
[30.001298066s] END 2025-08-13T19:56:46.909150168+00:00 stderr F E0813 19:56:46.909134 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.909259201+00:00 stderr F W0813 19:56:46.909072 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:46.909357404+00:00 stderr F I0813 19:56:46.909284 1 trace.go:236] Trace[2037442418]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:56:16.907) (total time: 30001ms): 2025-08-13T19:56:46.909357404+00:00 stderr F Trace[2037442418]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:46.908) 2025-08-13T19:56:46.909357404+00:00 stderr F Trace[2037442418]: [30.001572733s] [30.001572733s] END 2025-08-13T19:56:46.909357404+00:00 stderr F E0813 19:56:46.909336 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:47.432589043+00:00 stderr F I0813 19:56:47.432529 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:47.432589043+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:47.432589043+00:00 stderr F [-]has-synced failed: Router not synced 
2025-08-13T19:56:48.432951947+00:00 stderr F I0813 19:56:48.432525 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:48.432951947+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:48.432951947+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:49.432066177+00:00 stderr F I0813 19:56:49.431941 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:49.432066177+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:49.432066177+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:50.432125873+00:00 stderr F I0813 19:56:50.432032 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:50.432125873+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:50.432125873+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:51.432001853+00:00 stderr F I0813 19:56:51.431906 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:51.432001853+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:51.432001853+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:52.432737878+00:00 stderr F I0813 19:56:52.432631 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:52.432737878+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:52.432737878+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:53.432868787+00:00 stderr F I0813 19:56:53.432673 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:56:53.432868787+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:56:53.432868787+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:56:54.433275973+00:00 stderr F I0813 19:56:54.432875 1 healthz.go:261] backend-http,has-synced check failed: healthz 
2025-08-13T19:56:54.433275973+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:56:54.433275973+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:56:55.432271579+00:00 stderr F I0813 19:56:55.432202 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:56:55.432271579+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:56:55.432271579+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:56:56.432325345+00:00 stderr F I0813 19:56:56.432186 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:56:56.432325345+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:56:56.432325345+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:56:57.432645998+00:00 stderr F I0813 19:56:57.432569 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:56:57.432645998+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:56:57.432645998+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:56:58.432199589+00:00 stderr F I0813 19:56:58.432124 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:56:58.432199589+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:56:58.432199589+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:56:59.433020537+00:00 stderr F I0813 19:56:59.432864 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:56:59.433020537+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:56:59.433020537+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:00.433573017+00:00 stderr F I0813 19:57:00.433509 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:00.433573017+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:00.433573017+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:01.432562092+00:00 stderr F I0813 19:57:01.432418 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:01.432562092+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:01.432562092+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:02.433124814+00:00 stderr F I0813 19:57:02.433003 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:02.433124814+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:02.433124814+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:03.432258824+00:00 stderr F I0813 19:57:03.432162 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:03.432258824+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:03.432258824+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:04.432722113+00:00 stderr F I0813 19:57:04.432145 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:04.432722113+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:04.432722113+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:05.431260714+00:00 stderr F I0813 19:57:05.431205 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:05.431260714+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:05.431260714+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:06.432030181+00:00 stderr F I0813 19:57:06.431977 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:06.432030181+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:06.432030181+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:07.431464501+00:00 stderr F I0813 19:57:07.431406 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:07.431464501+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:07.431464501+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:08.432085384+00:00 stderr F I0813 19:57:08.431989 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:08.432085384+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:08.432085384+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:09.432053706+00:00 stderr F I0813 19:57:09.431951 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:09.432053706+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:09.432053706+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:10.432654007+00:00 stderr F I0813 19:57:10.432610 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:10.432654007+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:10.432654007+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:11.431640704+00:00 stderr F I0813 19:57:11.431588 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:11.431640704+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:11.431640704+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:12.434203290+00:00 stderr F I0813 19:57:12.434111 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:12.434203290+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:12.434203290+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:13.433521056+00:00 stderr F I0813 19:57:13.433423 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:13.433521056+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:13.433521056+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:14.433498481+00:00 stderr F I0813 19:57:14.433149 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:14.433498481+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:14.433498481+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:15.432030014+00:00 stderr F I0813 19:57:15.431954 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:15.432030014+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:15.432030014+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:16.434512699+00:00 stderr F I0813 19:57:16.434364 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:16.434512699+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:16.434512699+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:17.433485565+00:00 stderr F I0813 19:57:17.432334 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:17.433485565+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:17.433485565+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:17.783031196+00:00 stderr F W0813 19:57:17.780368 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T19:57:17.783031196+00:00 stderr F I0813 19:57:17.780622 1 trace.go:236] Trace[1748179681]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:56:47.772) (total time: 30007ms):
2025-08-13T19:57:17.783031196+00:00 stderr F Trace[1748179681]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30007ms (19:57:17.780)
2025-08-13T19:57:17.783031196+00:00 stderr F Trace[1748179681]: [30.007586633s] [30.007586633s] END
2025-08-13T19:57:17.783031196+00:00 stderr F E0813 19:57:17.780688 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T19:57:18.038305706+00:00 stderr F W0813 19:57:18.037956 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T19:57:18.038305706+00:00 stderr F I0813 19:57:18.038270 1 trace.go:236] Trace[1900261663]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:56:48.036) (total time: 30001ms):
2025-08-13T19:57:18.038305706+00:00 stderr F Trace[1900261663]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:57:18.037)
2025-08-13T19:57:18.038305706+00:00 stderr F Trace[1900261663]: [30.001957303s] [30.001957303s] END
2025-08-13T19:57:18.038418509+00:00 stderr F E0813 19:57:18.038308 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.217.4.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T19:57:18.135368408+00:00 stderr F W0813 19:57:18.135253 1 reflector.go:539] github.com/openshift/router/pkg/router/template/service_lookup.go:33: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T19:57:18.135478171+00:00 stderr F I0813 19:57:18.135462 1 trace.go:236] Trace[1715725877]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Aug-2025 19:56:48.134) (total time: 30001ms):
2025-08-13T19:57:18.135478171+00:00 stderr F Trace[1715725877]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:57:18.135)
2025-08-13T19:57:18.135478171+00:00 stderr F Trace[1715725877]: [30.001251833s] [30.001251833s] END
2025-08-13T19:57:18.135532372+00:00 stderr F E0813 19:57:18.135510 1 reflector.go:147] github.com/openshift/router/pkg/router/template/service_lookup.go:33: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.217.4.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T19:57:18.431943426+00:00 stderr F I0813 19:57:18.431887 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:18.431943426+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:18.431943426+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:19.431645322+00:00 stderr F I0813 19:57:19.431575 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:19.431645322+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:19.431645322+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:20.435646542+00:00 stderr F I0813 19:57:20.435520 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:20.435646542+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:20.435646542+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:21.431757945+00:00 stderr F I0813 19:57:21.431628 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:21.431757945+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:21.431757945+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:22.433349495+00:00 stderr F I0813 19:57:22.433199 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:22.433349495+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:22.433349495+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:23.431476856+00:00 stderr F I0813 19:57:23.431395 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:23.431476856+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:23.431476856+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:24.431854371+00:00 stderr F I0813 19:57:24.431497 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:24.431854371+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:24.431854371+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:25.433028709+00:00 stderr F I0813 19:57:25.432858 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:25.433028709+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:25.433028709+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:26.433458567+00:00 stderr F I0813 19:57:26.433214 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:26.433458567+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:26.433458567+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:27.433281236+00:00 stderr F I0813 19:57:27.433191 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:27.433281236+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:27.433281236+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:28.432026316+00:00 stderr F I0813 19:57:28.431902 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:28.432026316+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:28.432026316+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:29.433555235+00:00 stderr F I0813 19:57:29.433273 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:29.433555235+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:29.433555235+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:30.431537989+00:00 stderr F I0813 19:57:30.431465 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:30.431537989+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:30.431537989+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:31.431923854+00:00 stderr F I0813 19:57:31.431866 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:31.431923854+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:31.431923854+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:32.431204570+00:00 stderr F I0813 19:57:32.431059 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:32.431204570+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:32.431204570+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:33.433366346+00:00 stderr F I0813 19:57:33.433269 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:33.433366346+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:33.433366346+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:34.432642709+00:00 stderr F I0813 19:57:34.432592 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:34.432642709+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:34.432642709+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:35.432859600+00:00 stderr F I0813 19:57:35.432686 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:35.432859600+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:35.432859600+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:36.437029824+00:00 stderr F I0813 19:57:36.436741 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:36.437029824+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:36.437029824+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:36.933443039+00:00 stderr F W0813 19:57:36.933286 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:57:36.933583323+00:00 stderr F I0813 19:57:36.933566 1 trace.go:236] Trace[2081447682]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:57:19.880) (total time: 17053ms):
2025-08-13T19:57:36.933583323+00:00 stderr F Trace[2081447682]: ---"Objects listed" error:the server is currently unable to handle the request (get routes.route.openshift.io) 17052ms (19:57:36.933)
2025-08-13T19:57:36.933583323+00:00 stderr F Trace[2081447682]: [17.053173756s] [17.053173756s] END
2025-08-13T19:57:36.933668565+00:00 stderr F E0813 19:57:36.933651 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:57:36.968356316+00:00 stderr F I0813 19:57:36.968289 1 trace.go:236] Trace[1148232741]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 19:57:19.880) (total time: 17087ms):
2025-08-13T19:57:36.968356316+00:00 stderr F Trace[1148232741]: ---"Objects listed" error: 17087ms (19:57:36.967)
2025-08-13T19:57:36.968356316+00:00 stderr F Trace[1148232741]: [17.087755934s] [17.087755934s] END
2025-08-13T19:57:36.968468509+00:00 stderr F I0813 19:57:36.968427 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2025-08-13T19:57:37.032534479+00:00 stderr F I0813 19:57:37.032393 1 trace.go:236] Trace[287589973]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/template/service_lookup.go:33 (13-Aug-2025 19:57:21.283) (total time: 15748ms):
2025-08-13T19:57:37.032534479+00:00 stderr F Trace[287589973]: ---"Objects listed" error: 15748ms (19:57:37.031)
2025-08-13T19:57:37.032534479+00:00 stderr F Trace[287589973]: [15.748753569s] [15.748753569s] END
2025-08-13T19:57:37.032609361+00:00 stderr F I0813 19:57:37.032593 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33
2025-08-13T19:57:37.434524076+00:00 stderr F I0813 19:57:37.434316 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:37.434524076+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:37.434524076+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:38.433052679+00:00 stderr F I0813 19:57:38.432706 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:38.433052679+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:38.433052679+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:39.435744381+00:00 stderr F I0813 19:57:39.435644 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:39.435744381+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:39.435744381+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:40.432219425+00:00 stderr F I0813 19:57:40.432159 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:40.432219425+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:40.432219425+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:41.766582397+00:00 stderr F I0813 19:57:41.766530 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:41.766582397+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:41.766582397+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:42.001205487+00:00 stderr F W0813 19:57:42.001067 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:57:42.001205487+00:00 stderr F E0813 19:57:42.001137 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:57:42.434103768+00:00 stderr F I0813 19:57:42.434003 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:42.434103768+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:42.434103768+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:43.432131466+00:00 stderr F I0813 19:57:43.432032 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:43.432131466+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:43.432131466+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:44.431976806+00:00 stderr F I0813 19:57:44.431854 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:44.431976806+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:44.431976806+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:45.432675090+00:00 stderr F I0813 19:57:45.432578 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:45.432675090+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:45.432675090+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:46.433509920+00:00 stderr F I0813 19:57:46.433334 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:46.433509920+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:46.433509920+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:47.432967569+00:00 stderr F I0813 19:57:47.432819 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:47.432967569+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:47.432967569+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:48.447547809+00:00 stderr F I0813 19:57:48.447211 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:48.447547809+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:48.447547809+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:49.433520123+00:00 stderr F I0813 19:57:49.433466 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:49.433520123+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:49.433520123+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:50.439672484+00:00 stderr F I0813 19:57:50.438318 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:50.439672484+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:50.439672484+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:50.477344599+00:00 stderr F W0813 19:57:50.477079 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:57:50.477344599+00:00 stderr F E0813 19:57:50.477160 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:57:51.433433181+00:00 stderr F I0813 19:57:51.433291 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:51.433433181+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:51.433433181+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:52.431983943+00:00 stderr F I0813 19:57:52.431923 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:52.431983943+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:52.431983943+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:53.432056270+00:00 stderr F I0813 19:57:53.431981 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:53.432056270+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:53.432056270+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:54.433287283+00:00 stderr F I0813 19:57:54.433181 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:54.433287283+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:54.433287283+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:55.434947235+00:00 stderr F I0813 19:57:55.434648 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:55.434947235+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:55.434947235+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:56.432574223+00:00 stderr F I0813 19:57:56.432093 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:56.432574223+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:56.432574223+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:57.433262348+00:00 stderr F I0813 19:57:57.433165 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:57.433262348+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:57.433262348+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:58.431917455+00:00 stderr F I0813 19:57:58.431696 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:58.431917455+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:58.431917455+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:57:59.432912138+00:00 stderr F I0813 19:57:59.432742 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:57:59.432912138+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:57:59.432912138+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:00.431893395+00:00 stderr F I0813 19:58:00.431689 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:00.431893395+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:00.431893395+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:01.434020642+00:00 stderr F I0813 19:58:01.433872 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:01.434020642+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:01.434020642+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:02.435928520+00:00 stderr F I0813 19:58:02.435742 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:02.435928520+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:02.435928520+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:03.433875287+00:00 stderr F I0813 19:58:03.433666 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:03.433875287+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:03.433875287+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:04.431119524+00:00 stderr F I0813 19:58:04.430898 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:04.431119524+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:04.431119524+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:05.432938093+00:00 stderr F I0813 19:58:05.432765 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:05.432938093+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:05.432938093+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:06.435314875+00:00 stderr F I0813 19:58:06.434429 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:06.435314875+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:06.435314875+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:07.434281442+00:00 stderr F I0813 19:58:07.434059 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:07.434281442+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:07.434281442+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:08.433708781+00:00 stderr F I0813 19:58:08.433055 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:08.433708781+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:08.433708781+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:09.437898085+00:00 stderr F I0813 19:58:09.437671 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:09.437898085+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:09.437898085+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:10.432124936+00:00 stderr F I0813 19:58:10.431734 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:10.432124936+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:10.432124936+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:11.034272901+00:00 stderr F W0813 19:58:11.034023 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:58:11.034272901+00:00 stderr F E0813 19:58:11.034095 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:58:11.432869003+00:00 stderr F I0813 19:58:11.432656 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:11.432869003+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:11.432869003+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:12.433933519+00:00 stderr F I0813 19:58:12.432599 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:12.433933519+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:12.433933519+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:13.431929237+00:00 stderr F I0813 19:58:13.431765 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:13.431929237+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:13.431929237+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:14.431767508+00:00 stderr F I0813 19:58:14.431622 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:14.431767508+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:14.431767508+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:15.434103371+00:00 stderr F I0813 19:58:15.434008 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:15.434103371+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:15.434103371+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:16.433465119+00:00 stderr F I0813 19:58:16.433332 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:58:16.433465119+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:58:16.433465119+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:58:16.457355050+00:00 stderr F E0813 19:58:16.457214 1 factory.go:130] failed to sync cache for *v1.Route shared informer
2025-08-13T19:58:16.463553086+00:00 stderr F I0813 19:58:16.463474 1 template.go:844] "msg"="Shutdown requested, waiting 45s for new connections to cease" "logger"="router"
2025-08-13T19:58:16.466863901+00:00 stderr F E0813 19:58:16.466693 1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory
2025-08-13T19:58:16.525526943+00:00 stderr F I0813 19:58:16.525381 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T19:59:01.471118640+00:00 stderr F I0813 19:59:01.470384 1 template.go:846] "msg"="Instructing the template router to terminate" "logger"="router"
2025-08-13T19:59:02.673417482+00:00 stderr F I0813 19:59:02.673063 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Shutting down\n"
2025-08-13T19:59:02.673417482+00:00 stderr F I0813 19:59:02.673291 1 template.go:850] "msg"="Shutdown complete, exiting" "logger"="router"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router/3.log
2025-08-13T19:59:11.304677918+00:00 stderr F I0813 19:59:11.297505 1 template.go:560] "msg"="starting router" "logger"="router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 4d9b8c4afa6cd89b41f4bd5e7c09ccddd8679bc6\nversionFromGit: 4.0.0-527-g4d9b8c4a\ngitTreeState: clean\nbuildDate: 2024-06-13T22:04:06Z\n"
2025-08-13T19:59:11.673366627+00:00 stderr F I0813 19:59:11.672883 1 metrics.go:156] "msg"="router health and metrics port listening on HTTP and HTTPS" "address"="0.0.0.0:1936" "logger"="metrics"
2025-08-13T19:59:11.769373724+00:00 stderr F I0813 19:59:11.769189 1 router.go:217] "msg"="creating a new template router" "logger"="template" "writeDir"="/var/lib/haproxy"
2025-08-13T19:59:11.771932617+00:00 stderr F I0813 19:59:11.769613 1 router.go:302] "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" "logger"="template"
2025-08-13T19:59:11.777891327+00:00 stderr F I0813 19:59:11.773270 1 router.go:372] "msg"="watching for changes" "logger"="template" "path"="/etc/pki/tls/private"
2025-08-13T19:59:11.777891327+00:00 stderr F I0813 19:59:11.773414 1 router.go:282] "msg"="router is including routes in all namespaces" "logger"="router"
2025-08-13T19:59:11.886728879+00:00 stderr F W0813 19:59:11.886634 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:11.886728879+00:00 stderr F E0813 19:59:11.886705 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:12.672132258+00:00 stderr F I0813 19:59:12.671738 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:12.672132258+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:12.672132258+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:12.681237917+00:00 stderr F I0813 19:59:12.678724 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33
2025-08-13T19:59:12.851562443+00:00 stderr F I0813 19:59:12.851175 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2025-08-13T19:59:13.380116350+00:00 stderr F W0813 19:59:13.379504 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:13.380116350+00:00 stderr F E0813 19:59:13.380097 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:13.459080341+00:00 stderr F I0813 19:59:13.456206 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:13.459080341+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:13.459080341+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:14.445501809+00:00 stderr F I0813 19:59:14.439664 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:14.445501809+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:14.445501809+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:15.468994886+00:00 stderr F I0813 19:59:15.459972 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:15.468994886+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:15.468994886+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:16.104399630+00:00 stderr F W0813 19:59:16.103133 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:16.104399630+00:00 stderr F E0813 19:59:16.103188 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:16.447703496+00:00 stderr F I0813 19:59:16.446220 1 healthz.go:261]
backend-http,has-synced check failed: healthz 2025-08-13T19:59:16.447703496+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:16.447703496+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:17.449471752+00:00 stderr F I0813 19:59:17.447716 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:17.449471752+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:17.449471752+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:18.455398967+00:00 stderr F I0813 19:59:18.454926 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:18.455398967+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:18.455398967+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:19.478456090+00:00 stderr F I0813 19:59:19.478324 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:19.478456090+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:19.478456090+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:20.443736896+00:00 stderr F I0813 19:59:20.443497 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:20.443736896+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:20.443736896+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:21.154386143+00:00 stderr F W0813 19:59:21.149472 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:21.154386143+00:00 stderr F E0813 19:59:21.149615 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the 
request (get routes.route.openshift.io) 2025-08-13T19:59:21.441014694+00:00 stderr F I0813 19:59:21.438582 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:21.441014694+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:21.441014694+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:22.436881922+00:00 stderr F I0813 19:59:22.436077 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:22.436881922+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:22.436881922+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:23.444874115+00:00 stderr F I0813 19:59:23.444249 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:23.444874115+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:23.444874115+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:24.443249214+00:00 stderr F I0813 19:59:24.441639 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:24.443249214+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:24.443249214+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:25.434856409+00:00 stderr F I0813 19:59:25.433416 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:25.434856409+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:25.434856409+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:26.455137963+00:00 stderr F I0813 19:59:26.452756 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:26.455137963+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:26.455137963+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:27.442377615+00:00 stderr F I0813 19:59:27.441945 1 healthz.go:261] 
backend-http,has-synced check failed: healthz 2025-08-13T19:59:27.442377615+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:27.442377615+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:28.441913556+00:00 stderr F I0813 19:59:28.440770 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:28.441913556+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:28.441913556+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:29.432264236+00:00 stderr F I0813 19:59:29.432104 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:29.432264236+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:29.432264236+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:30.434530676+00:00 stderr F I0813 19:59:30.434323 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:30.434530676+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:30.434530676+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:31.436502367+00:00 stderr F I0813 19:59:31.436416 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:31.436502367+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:31.436502367+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:32.438185499+00:00 stderr F I0813 19:59:32.436917 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:32.438185499+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:32.438185499+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:32.780568869+00:00 stderr F W0813 19:59:32.779563 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to 
handle the request (get routes.route.openshift.io) 2025-08-13T19:59:32.780568869+00:00 stderr F E0813 19:59:32.779636 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:33.442515279+00:00 stderr F I0813 19:59:33.439470 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:33.442515279+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:33.442515279+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:34.440001973+00:00 stderr F I0813 19:59:34.432664 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:34.440001973+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:34.440001973+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:35.437748283+00:00 stderr F I0813 19:59:35.437596 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:35.437748283+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:35.437748283+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:36.432709335+00:00 stderr F I0813 19:59:36.432555 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:36.432709335+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:36.432709335+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:37.441529821+00:00 stderr F I0813 19:59:37.441270 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:37.441529821+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:37.441529821+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:38.437191323+00:00 stderr F I0813 19:59:38.435413 1 healthz.go:261] 
backend-http,has-synced check failed: healthz 2025-08-13T19:59:38.437191323+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:38.437191323+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:39.443362104+00:00 stderr F I0813 19:59:39.443272 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:39.443362104+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:39.443362104+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:40.463715509+00:00 stderr F I0813 19:59:40.439513 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:40.463715509+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:40.463715509+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:41.444454315+00:00 stderr F I0813 19:59:41.444077 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:41.444454315+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:41.444454315+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:42.447673783+00:00 stderr F I0813 19:59:42.447382 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:42.447673783+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:42.447673783+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:43.439405871+00:00 stderr F I0813 19:59:43.439305 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:43.439405871+00:00 stderr F [-]backend-http failed: backend reported failure 2025-08-13T19:59:43.439405871+00:00 stderr F [-]has-synced failed: Router not synced 2025-08-13T19:59:44.440923330+00:00 stderr F I0813 19:59:44.438757 1 healthz.go:261] backend-http,has-synced check failed: healthz 2025-08-13T19:59:44.440923330+00:00 stderr F [-]backend-http failed: backend reported failure 
2025-08-13T19:59:44.440923330+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:45.438150496+00:00 stderr F I0813 19:59:45.436473 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:45.438150496+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:45.438150496+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:46.442893156+00:00 stderr F I0813 19:59:46.437085 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:46.442893156+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:46.442893156+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:47.574656328+00:00 stderr F I0813 19:59:47.573480 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:47.574656328+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:47.574656328+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:48.435955780+00:00 stderr F I0813 19:59:48.434292 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:48.435955780+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:48.435955780+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:49.444657025+00:00 stderr F I0813 19:59:49.441019 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:49.444657025+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:49.444657025+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:50.441014186+00:00 stderr F I0813 19:59:50.440726 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:50.441014186+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:50.441014186+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:51.438074809+00:00 stderr F I0813 19:59:51.437388 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:51.438074809+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:51.438074809+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:52.271503646+00:00 stderr F W0813 19:59:52.271401 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:52.271503646+00:00 stderr F E0813 19:59:52.271467 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:52.437958611+00:00 stderr F I0813 19:59:52.435764 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:52.437958611+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:52.437958611+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:53.437003279+00:00 stderr F I0813 19:59:53.436582 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:53.437003279+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:53.437003279+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:54.435994195+00:00 stderr F I0813 19:59:54.435870 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:54.435994195+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:54.435994195+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:55.447610142+00:00 stderr F I0813 19:59:55.444133 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:55.447610142+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:55.447610142+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:56.435445371+00:00 stderr F I0813 19:59:56.435342 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:56.435445371+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:56.435445371+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:57.437007470+00:00 stderr F I0813 19:59:57.436719 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:57.437007470+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:57.437007470+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:58.441676389+00:00 stderr F I0813 19:59:58.441568 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:58.441676389+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:58.441676389+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T19:59:59.441219781+00:00 stderr F I0813 19:59:59.435551 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T19:59:59.441219781+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T19:59:59.441219781+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:00.449216644+00:00 stderr F I0813 20:00:00.449017 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:00.449216644+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:00.449216644+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:01.435257071+00:00 stderr F I0813 20:00:01.435136 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:01.435257071+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:01.435257071+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:02.435075695+00:00 stderr F I0813 20:00:02.434217 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:02.435075695+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:02.435075695+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:03.435159681+00:00 stderr F I0813 20:00:03.435040 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:03.435159681+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:03.435159681+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:04.434374502+00:00 stderr F I0813 20:00:04.433405 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:04.434374502+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:04.434374502+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:05.435923430+00:00 stderr F I0813 20:00:05.434204 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:05.435923430+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:05.435923430+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:06.436268404+00:00 stderr F I0813 20:00:06.435113 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:06.436268404+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:06.436268404+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:07.444277557+00:00 stderr F I0813 20:00:07.444061 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:07.444277557+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:07.444277557+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:08.440445100+00:00 stderr F I0813 20:00:08.440318 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:08.440445100+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:08.440445100+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:09.436634906+00:00 stderr F I0813 20:00:09.436496 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:09.436634906+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:09.436634906+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:10.444501654+00:00 stderr F I0813 20:00:10.443430 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:10.444501654+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:10.444501654+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:11.447671037+00:00 stderr F I0813 20:00:11.442535 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:11.447671037+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:11.447671037+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:12.432586351+00:00 stderr F I0813 20:00:12.432076 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:12.432586351+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:12.432586351+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:13.443036294+00:00 stderr F I0813 20:00:13.441571 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:13.443036294+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:13.443036294+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:14.439365563+00:00 stderr F I0813 20:00:14.437341 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:14.439365563+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:14.439365563+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:15.439457808+00:00 stderr F I0813 20:00:15.439401 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:15.439457808+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:15.439457808+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:16.445945307+00:00 stderr F I0813 20:00:16.445316 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:16.445945307+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:16.445945307+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:17.448372851+00:00 stderr F I0813 20:00:17.448314 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:17.448372851+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:17.448372851+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:18.458895964+00:00 stderr F I0813 20:00:18.452920 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:18.458895964+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:18.458895964+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:19.443228891+00:00 stderr F I0813 20:00:19.439909 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:19.443228891+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:19.443228891+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:20.452264183+00:00 stderr F I0813 20:00:20.451613 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:20.452264183+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:20.452264183+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:21.444279109+00:00 stderr F I0813 20:00:21.441573 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:21.444279109+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:21.444279109+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:22.442027309+00:00 stderr F I0813 20:00:22.441223 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:22.442027309+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:22.442027309+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:23.442934930+00:00 stderr F I0813 20:00:23.441492 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:23.442934930+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:23.442934930+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:24.460454004+00:00 stderr F I0813 20:00:24.460307 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:24.460454004+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:24.460454004+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:25.448935401+00:00 stderr F I0813 20:00:25.441689 1 healthz.go:261] backend-http,has-synced check failed: healthz
2025-08-13T20:00:25.448935401+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:25.448935401+00:00 stderr F [-]has-synced failed: Router not synced
2025-08-13T20:00:26.031207313+00:00 stderr F I0813 20:00:26.031093 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2025-08-13T20:00:26.092081849+00:00 stderr F E0813 20:00:26.090484 1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory
2025-08-13T20:00:26.742089514+00:00 stderr F I0813 20:00:26.741483 1 healthz.go:261] backend-http check failed: healthz
2025-08-13T20:00:26.742089514+00:00 stderr F [-]backend-http failed: backend reported failure
2025-08-13T20:00:27.296769290+00:00 stderr F I0813 20:00:27.296070 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:00:31.257363322+00:00 stderr F I0813 20:00:31.255569 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:00:41.640227457+00:00 stderr F I0813 20:00:41.636935 1 template.go:941] "msg"="reloaded metrics certificate" "cert"="/etc/pki/tls/metrics-certs/tls.crt" "key"="/etc/pki/tls/metrics-certs/tls.key" "logger"="router"
2025-08-13T20:01:09.895922795+00:00 stderr F I0813 20:01:09.894921 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:01:24.804112015+00:00 stderr F W0813 20:01:24.803826 1 reflector.go:462] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 23; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
2025-08-13T20:02:17.794050969+00:00 stderr F W0813 20:02:17.793041 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:02:17.794050969+00:00 stderr F E0813 20:02:17.793139 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:03:19.527530230+00:00 stderr F W0813 20:03:19.527086 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T20:03:19.527530230+00:00 stderr F I0813 20:03:19.527406 1 trace.go:236] Trace[310903481]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 20:02:49.524) (total time: 30002ms):
2025-08-13T20:03:19.527530230+00:00 stderr F Trace[310903481]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout 30002ms (20:03:19.527)
2025-08-13T20:03:19.527530230+00:00 stderr F Trace[310903481]: [30.002341853s] [30.002341853s] END
2025-08-13T20:03:19.527530230+00:00 stderr F E0813 20:03:19.527464 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T20:04:28.566676368+00:00 stderr F W0813 20:04:28.566243 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T20:04:28.566676368+00:00 stderr F I0813 20:04:28.566510 1 trace.go:236] Trace[1141895289]: "Reflector ListAndWatch" name:github.com/openshift/router/pkg/router/controller/factory/factory.go:124 (13-Aug-2025 20:03:58.471) (total time: 30095ms):
2025-08-13T20:04:28.566676368+00:00 stderr F Trace[1141895289]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout 30082ms (20:04:28.553)
2025-08-13T20:04:28.566676368+00:00 stderr F Trace[1141895289]: [30.095336028s] [30.095336028s] END
2025-08-13T20:04:28.566676368+00:00 stderr F E0813 20:04:28.566571 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/routes?resourceVersion=29471": dial tcp 10.217.4.1:443: i/o timeout
2025-08-13T20:05:22.531648891+00:00 stderr F W0813 20:05:22.531122 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:05:22.533378770+00:00 stderr F E0813 20:05:22.533163 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:05:23.709339045+00:00 stderr F I0813 20:05:23.709111 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33
2025-08-13T20:05:23.943489560+00:00 stderr F I0813 20:05:23.943369 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2025-08-13T20:05:24.175916606+00:00 stderr F I0813 20:05:24.175825 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:05:29.171169261+00:00 stderr F I0813 20:05:29.171023 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:05:40.055113516+00:00 stderr F I0813 20:05:40.055049 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:05:45.053675044+00:00 stderr F I0813 20:05:45.050701 1 router.go:669] "msg"="router reloaded" "logger"="template" "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"
2025-08-13T20:06:04.550607868+00:00 stderr F W0813 20:06:04.550499 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:06:04.551886485+00:00 stderr F E0813 20:06:04.551731 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:06:53.870533142+00:00 stderr F W0813 20:06:53.870282 1 reflector.go:539] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:06:53.870533142+00:00 stderr F E0813 20:06:53.870427 1 reflector.go:147] github.com/openshift/router/pkg/router/controller/factory/factory.go:124: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:07:40.667625093+00:00 stderr F I0813 20:07:40.666123 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2025-08-13T20:09:05.305116521+00:00 stderr F I0813 20:09:05.304952 1 reflector.go:351] Caches populated for *v1.EndpointSlice from github.com/openshift/router/pkg/router/controller/factory/factory.go:124
2025-08-13T20:09:05.342759711+00:00 stderr F I0813 20:09:05.342612 1 reflector.go:351] Caches populated for *v1.Service from github.com/openshift/router/pkg/router/template/service_lookup.go:33
2025-08-13T20:42:36.362457950+00:00 stderr F I0813 20:42:36.351003 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.362457950+00:00 stderr F I0813 20:42:36.351200 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.362457950+00:00 stderr F I0813 20:42:36.351247 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:42.965144927+00:00 stderr F I0813 20:42:42.964922 1 template.go:844] "msg"="Shutdown requested, waiting 45s for new connections to cease" "logger"="router"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/5.log
2025-12-13T00:18:00.184199692+00:00 stderr F 
I1213 00:18:00.184116 24012 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-12-13T00:18:00.184992114+00:00 stderr F I1213 00:18:00.184349 24012 update.go:2595] Running: mount --rbind /run/secrets /rootfs/run/secrets 2025-12-13T00:18:00.189729519+00:00 stderr F I1213 00:18:00.189685 24012 update.go:2595] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin 2025-12-13T00:18:00.192864352+00:00 stderr F I1213 00:18:00.192813 24012 daemon.go:513] using appropriate binary for source=rhel-9 target=rhel-9 2025-12-13T00:18:00.276604759+00:00 stderr F I1213 00:18:00.276492 24012 daemon.go:566] Invoking re-exec /run/bin/machine-config-daemon 2025-12-13T00:18:00.317768234+00:00 stderr F I1213 00:18:00.317674 24012 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-12-13T00:18:00.318065051+00:00 stderr F E1213 00:18:00.318020 24012 rpm-ostree.go:276] Merged secret file does not exist; defaulting to cluster pull secret 2025-12-13T00:18:00.318205305+00:00 stderr F I1213 00:18:00.318154 24012 rpm-ostree.go:263] Linking ostree authfile to /var/lib/kubelet/config.json 2025-12-13T00:18:00.625110416+00:00 stderr F I1213 00:18:00.625031 24012 daemon.go:317] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 (416.94.202406172220-0) 3aa42b3e55a31016873768eb92311a9ba07c871ac81e126e9561a61d8f9d2f24 2025-12-13T00:18:00.626679267+00:00 stderr F I1213 00:18:00.626626 24012 start.go:134] overriding kubernetes api to https://api-int.crc.testing:6443 2025-12-13T00:18:00.633216281+00:00 stderr F I1213 00:18:00.631417 24012 metrics.go:100] Registering Prometheus metrics 2025-12-13T00:18:00.633216281+00:00 stderr F I1213 00:18:00.631523 24012 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-12-13T00:18:00.647197713+00:00 
stderr F I1213 00:18:00.647068 24012 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:18:00.648607520+00:00 stderr F I1213 00:18:00.648567 24012 writer.go:88] NodeWriter initialized with credentials from /var/lib/kubelet/kubeconfig 2025-12-13T00:18:00.654677601+00:00 stderr F I1213 00:18:00.654535 24012 start.go:214] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-12-13T00:18:00.654677601+00:00 
stderr F I1213 00:18:00.654640 24012 update.go:2610] Starting to manage node: crc 2025-12-13T00:18:00.659022317+00:00 stderr F I1213 00:18:00.658965 24012 rpm-ostree.go:308] Running captured: rpm-ostree status 2025-12-13T00:18:00.659767427+00:00 stderr F I1213 00:18:00.659677 24012 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", 
"NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-12-13T00:18:00.723823800+00:00 stderr F I1213 00:18:00.723379 24012 daemon.go:1727] State: idle 2025-12-13T00:18:00.723823800+00:00 stderr F Deployments: 2025-12-13T00:18:00.723823800+00:00 stderr F * ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-12-13T00:18:00.723823800+00:00 stderr F Digest: sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-12-13T00:18:00.723823800+00:00 stderr F Version: 416.94.202406172220-0 (2024-06-17T22:24:17Z) 2025-12-13T00:18:00.723823800+00:00 stderr F LocalPackages: hyperv-daemons-0-0.42.20190303git.el9.x86_64 2025-12-13T00:18:00.723823800+00:00 stderr F hyperv-daemons-license-0-0.42.20190303git.el9.noarch 2025-12-13T00:18:00.723823800+00:00 stderr F hypervfcopyd-0-0.42.20190303git.el9.x86_64 2025-12-13T00:18:00.723823800+00:00 stderr F hypervkvpd-0-0.42.20190303git.el9.x86_64 2025-12-13T00:18:00.723823800+00:00 stderr F hypervvssd-0-0.42.20190303git.el9.x86_64 2025-12-13T00:18:00.723823800+00:00 stderr F I1213 00:18:00.723789 24012 coreos.go:53] CoreOS aleph version: mtime=2022-08-01 23:42:11 +0000 UTC 2025-12-13T00:18:00.723823800+00:00 stderr F { 2025-12-13T00:18:00.723823800+00:00 stderr F "container-image": { 2025-12-13T00:18:00.723823800+00:00 stderr F "image-digest": "sha256:ea0cef07b0cd5ba8ff8c487324bf6a4df15fa31e69962a8e8fb7d00f1caa7f1d", 2025-12-13T00:18:00.723823800+00:00 stderr F "image-labels": { 2025-12-13T00:18:00.723823800+00:00 stderr F 
"containers.bootc": "1", 2025-12-13T00:18:00.723823800+00:00 stderr F "coreos-assembler.image-config-checksum": "626c78cc9e2da6ffd642d678ac6109f5532e817e107932464c8222f8d3492491", 2025-12-13T00:18:00.723823800+00:00 stderr F "coreos-assembler.image-input-checksum": "6f9238f67f75298a27a620eacd78d313fd451541fc877d3f6aa681c0f6b22811", 2025-12-13T00:18:00.723823800+00:00 stderr F "io.openshift.build.version-display-names": "machine-os=Red Hat Enterprise Linux CoreOS", 2025-12-13T00:18:00.723823800+00:00 stderr F "io.openshift.build.versions": "machine-os=416.94.202405291527-0", 2025-12-13T00:18:00.723823800+00:00 stderr F "org.opencontainers.image.revision": "b01a02cc1b92e2dacf12d9b5cddf690a2439ce64", 2025-12-13T00:18:00.723823800+00:00 stderr F "org.opencontainers.image.source": "https://github.com/openshift/os", 2025-12-13T00:18:00.723823800+00:00 stderr F "org.opencontainers.image.version": "416.94.202405291527-0", 2025-12-13T00:18:00.723823800+00:00 stderr F "ostree.bootable": "true", 2025-12-13T00:18:00.723823800+00:00 stderr F "ostree.commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-12-13T00:18:00.723823800+00:00 stderr F "ostree.final-diffid": "sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1", 2025-12-13T00:18:00.723823800+00:00 stderr F "ostree.linux": "5.14.0-427.18.1.el9_4.x86_64", 2025-12-13T00:18:00.723823800+00:00 stderr F "rpmostree.inputhash": "a00190b296e00061b72bcbc4ace9fb4b317e86da7d8c58471b027239035b05d6" 2025-12-13T00:18:00.723823800+00:00 stderr F }, 2025-12-13T00:18:00.723823800+00:00 stderr F "image-name": "oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive" 2025-12-13T00:18:00.723823800+00:00 stderr F }, 2025-12-13T00:18:00.723823800+00:00 stderr F "osbuild-version": "114", 2025-12-13T00:18:00.723823800+00:00 stderr F "ostree-commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-12-13T00:18:00.723823800+00:00 stderr F "ref": 
"docker://ostree-image-signed:oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive", 2025-12-13T00:18:00.723823800+00:00 stderr F "version": "416.94.202405291527-0" 2025-12-13T00:18:00.723823800+00:00 stderr F } 2025-12-13T00:18:00.723950313+00:00 stderr F I1213 00:18:00.723897 24012 coreos.go:70] Ignition provisioning: time=2024-06-26T12:42:18Z 2025-12-13T00:18:00.723950313+00:00 stderr F I1213 00:18:00.723918 24012 rpm-ostree.go:308] Running captured: journalctl --list-boots 2025-12-13T00:18:00.732073799+00:00 stderr F I1213 00:18:00.731354 24012 daemon.go:1736] journalctl --list-boots: 2025-12-13T00:18:00.732073799+00:00 stderr F IDX BOOT ID FIRST ENTRY LAST ENTRY 2025-12-13T00:18:00.732073799+00:00 stderr F -4 286f1119e01c427899b4130371f705c5 Thu 2024-06-27 13:36:35 UTC Thu 2024-06-27 13:36:39 UTC 2025-12-13T00:18:00.732073799+00:00 stderr F -3 2ff245ef1efc4648b6c81a61c24bb5db Thu 2024-06-27 13:37:00 UTC Thu 2024-06-27 13:37:11 UTC 2025-12-13T00:18:00.732073799+00:00 stderr F -2 7bac8de7aad04ed8a9adc4391f6449b7 Wed 2025-08-13 19:43:15 UTC Wed 2025-08-13 20:42:52 UTC 2025-12-13T00:18:00.732073799+00:00 stderr F -1 ac3ee373ffba44fcbb674f979afd649b Sat 2025-12-13 00:06:18 UTC Sat 2025-12-13 00:10:07 UTC 2025-12-13T00:18:00.732073799+00:00 stderr F 0 6b04f78578a947bdb1b0d69223fec89b Sat 2025-12-13 00:10:12 UTC Sat 2025-12-13 00:18:00 UTC 2025-12-13T00:18:00.732073799+00:00 stderr F I1213 00:18:00.732052 24012 rpm-ostree.go:308] Running captured: systemctl list-units --state=failed --no-legend 2025-12-13T00:18:00.740646027+00:00 stderr F I1213 00:18:00.740579 24012 daemon.go:1751] systemd service state: OK 2025-12-13T00:18:00.740646027+00:00 stderr F I1213 00:18:00.740602 24012 daemon.go:1327] Starting MachineConfigDaemon 2025-12-13T00:18:00.740691468+00:00 stderr F I1213 00:18:00.740670 24012 daemon.go:1334] Enabling Kubelet Healthz Monitor 2025-12-13T00:18:01.656884698+00:00 stderr F I1213 00:18:01.656748 24012 daemon.go:647] Node crc is part of the 
control plane 2025-12-13T00:18:01.714393908+00:00 stderr F I1213 00:18:01.713432 24012 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs1244327315 --cleanup 2025-12-13T00:18:01.715690792+00:00 stderr F [2025-12-13T00:18:01Z INFO nmstatectl] Nmstate version: 2.2.29 2025-12-13T00:18:01.715729913+00:00 stdout F 2025-12-13T00:18:01.715741633+00:00 stderr F [2025-12-13T00:18:01Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-12-13T00:18:01.730303100+00:00 stderr F I1213 00:18:01.730207 24012 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-12-13T00:18:01.730349661+00:00 stderr F E1213 00:18:01.730317 24012 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:18:01.730349661+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:18:03.746410885+00:00 stderr F I1213 00:18:03.746309 24012 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs1648024386 --cleanup 2025-12-13T00:18:03.750786481+00:00 stderr F [2025-12-13T00:18:03Z INFO nmstatectl] Nmstate version: 2.2.29 2025-12-13T00:18:03.750838652+00:00 stdout F 2025-12-13T00:18:03.750848723+00:00 stderr F [2025-12-13T00:18:03Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-12-13T00:18:03.760212342+00:00 stderr F I1213 00:18:03.760101 24012 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-12-13T00:18:03.760212342+00:00 stderr F E1213 00:18:03.760180 24012 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:18:03.760212342+00:00 stderr F machineconfig.machineconfiguration.openshift.io 
"rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:18:07.774914387+00:00 stderr F I1213 00:18:07.774788 24012 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs4221852909 --cleanup 2025-12-13T00:18:07.778399109+00:00 stderr F [2025-12-13T00:18:07Z INFO nmstatectl] Nmstate version: 2.2.29 2025-12-13T00:18:07.778435560+00:00 stderr F [2025-12-13T00:18:07Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-12-13T00:18:07.778452061+00:00 stdout F 2025-12-13T00:18:07.792685899+00:00 stderr F I1213 00:18:07.792008 24012 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-12-13T00:18:07.792685899+00:00 stderr F E1213 00:18:07.792066 24012 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:18:07.792685899+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:18:15.808185629+00:00 stderr F I1213 00:18:15.807378 24012 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs678157614 --cleanup 2025-12-13T00:18:15.810525570+00:00 stderr F [2025-12-13T00:18:15Z INFO nmstatectl] Nmstate version: 2.2.29 2025-12-13T00:18:15.810656174+00:00 stdout F 2025-12-13T00:18:15.810664674+00:00 stderr F [2025-12-13T00:18:15Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-12-13T00:18:15.821073221+00:00 stderr F I1213 00:18:15.820930 24012 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-12-13T00:18:15.821182454+00:00 stderr F E1213 00:18:15.821130 24012 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:18:15.821182454+00:00 stderr F 
machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:18:31.837356799+00:00 stderr F I1213 00:18:31.836823 24012 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs4113748575 --cleanup 2025-12-13T00:18:31.841559466+00:00 stderr F [2025-12-13T00:18:31Z INFO nmstatectl] Nmstate version: 2.2.29 2025-12-13T00:18:31.841723020+00:00 stdout F 2025-12-13T00:18:31.841736960+00:00 stderr F [2025-12-13T00:18:31Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-12-13T00:18:31.856041675+00:00 stderr F I1213 00:18:31.855917 24012 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-12-13T00:18:31.856041675+00:00 stderr F E1213 00:18:31.856002 24012 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:18:31.856041675+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:18:54.733175031+00:00 stderr F I1213 00:18:54.732536 24012 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 42115 2025-12-13T00:19:03.869167234+00:00 stderr F I1213 00:19:03.869080 24012 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs4163660905 --cleanup 2025-12-13T00:19:03.872475605+00:00 stderr F [2025-12-13T00:19:03Z INFO nmstatectl] Nmstate version: 2.2.29 2025-12-13T00:19:03.872619599+00:00 stdout F 2025-12-13T00:19:03.872627379+00:00 stderr F [2025-12-13T00:19:03Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-12-13T00:19:03.881208515+00:00 stderr F I1213 00:19:03.881130 24012 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 
2025-12-13T00:19:03.881239926+00:00 stderr F E1213 00:19:03.881211 24012 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:19:03.881239926+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:10.653020225+00:00 stderr F I1213 00:19:10.649230 24012 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 42168 2025-12-13T00:19:11.558900755+00:00 stderr F I1213 00:19:11.558694 24012 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 42173 2025-12-13T00:20:03.898218661+00:00 stderr F I1213 00:20:03.898090 24012 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs132957146 --cleanup 2025-12-13T00:20:03.900342610+00:00 stderr F [2025-12-13T00:20:03Z INFO nmstatectl] Nmstate version: 2.2.29 2025-12-13T00:20:03.900418202+00:00 stderr F [2025-12-13T00:20:03Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-12-13T00:20:03.900442873+00:00 stdout F 2025-12-13T00:20:03.908211457+00:00 stderr F I1213 00:20:03.908131 24012 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-12-13T00:20:03.908238128+00:00 stderr F E1213 00:20:03.908209 24012 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:20:03.908238128+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:21:03.918634281+00:00 stderr F I1213 00:21:03.918514 24012 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs3524785058 --cleanup 2025-12-13T00:21:03.922119296+00:00 stderr F [2025-12-13T00:21:03Z INFO nmstatectl] Nmstate 
version: 2.2.29 2025-12-13T00:21:03.922192538+00:00 stdout F 2025-12-13T00:21:03.922201248+00:00 stderr F [2025-12-13T00:21:03Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-12-13T00:21:03.933980976+00:00 stderr F I1213 00:21:03.933842 24012 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-12-13T00:21:03.933980976+00:00 stderr F E1213 00:21:03.933901 24012 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:21:03.933980976+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:21:24.382952216+00:00 stderr F I1213 00:21:24.382840 24012 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 42173 2025-12-13T00:21:29.773140528+00:00 stderr F I1213 00:21:29.772566 24012 daemon.go:1363] Shutting down MachineConfigDaemon
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/6.log
2025-12-13T00:21:31.829321604+00:00 stderr F I1213 00:21:31.829223 32044 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-12-13T00:21:31.829485519+00:00 stderr F I1213 00:21:31.829456 32044 update.go:2595] Running: mount --rbind /run/secrets /rootfs/run/secrets 2025-12-13T00:21:31.833763094+00:00 stderr F I1213 00:21:31.833717 32044 update.go:2595] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin 2025-12-13T00:21:31.836595710+00:00 stderr F I1213 
00:21:31.836542 32044 daemon.go:513] using appropriate binary for source=rhel-9 target=rhel-9 2025-12-13T00:21:31.895190372+00:00 stderr F I1213 00:21:31.895121 32044 daemon.go:566] Invoking re-exec /run/bin/machine-config-daemon 2025-12-13T00:21:31.933066503+00:00 stderr F I1213 00:21:31.932992 32044 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-12-13T00:21:31.933346062+00:00 stderr F E1213 00:21:31.933314 32044 rpm-ostree.go:276] Merged secret file does not exist; defaulting to cluster pull secret 2025-12-13T00:21:31.933449524+00:00 stderr F I1213 00:21:31.933417 32044 rpm-ostree.go:263] Linking ostree authfile to /var/lib/kubelet/config.json 2025-12-13T00:21:32.179887194+00:00 stderr F I1213 00:21:32.179796 32044 daemon.go:317] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 (416.94.202406172220-0) 3aa42b3e55a31016873768eb92311a9ba07c871ac81e126e9561a61d8f9d2f24 2025-12-13T00:21:32.181053426+00:00 stderr F I1213 00:21:32.180998 32044 start.go:134] overriding kubernetes api to https://api-int.crc.testing:6443 2025-12-13T00:21:32.182077863+00:00 stderr F I1213 00:21:32.182007 32044 metrics.go:100] Registering Prometheus metrics 2025-12-13T00:21:32.182175045+00:00 stderr F I1213 00:21:32.182128 32044 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-12-13T00:21:32.199640497+00:00 stderr F I1213 00:21:32.199576 32044 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:21:32.201384204+00:00 stderr F I1213 00:21:32.201350 32044 writer.go:88] NodeWriter initialized with credentials from /var/lib/kubelet/kubeconfig 2025-12-13T00:21:32.210283734+00:00 stderr F I1213 00:21:32.210127 32044 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 
2025-12-13T00:21:32.211841596+00:00 stderr F I1213 00:21:32.210992 32044 start.go:214] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-12-13T00:21:32.211841596+00:00 stderr F I1213 00:21:32.211825 32044 update.go:2610] Starting to manage node: crc 2025-12-13T00:21:32.215579197+00:00 stderr F I1213 00:21:32.215504 32044 rpm-ostree.go:308] Running captured: rpm-ostree status 2025-12-13T00:21:32.258578508+00:00 stderr F I1213 
00:21:32.258487 32044 daemon.go:1727] State: idle 2025-12-13T00:21:32.258578508+00:00 stderr F Deployments: 2025-12-13T00:21:32.258578508+00:00 stderr F * ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-12-13T00:21:32.258578508+00:00 stderr F Digest: sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-12-13T00:21:32.258578508+00:00 stderr F Version: 416.94.202406172220-0 (2024-06-17T22:24:17Z) 2025-12-13T00:21:32.258578508+00:00 stderr F LocalPackages: hyperv-daemons-0-0.42.20190303git.el9.x86_64 2025-12-13T00:21:32.258578508+00:00 stderr F hyperv-daemons-license-0-0.42.20190303git.el9.noarch 2025-12-13T00:21:32.258578508+00:00 stderr F hypervfcopyd-0-0.42.20190303git.el9.x86_64 2025-12-13T00:21:32.258578508+00:00 stderr F hypervkvpd-0-0.42.20190303git.el9.x86_64 2025-12-13T00:21:32.258578508+00:00 stderr F hypervvssd-0-0.42.20190303git.el9.x86_64 2025-12-13T00:21:32.261910088+00:00 stderr F I1213 00:21:32.261821 32044 coreos.go:53] CoreOS aleph version: mtime=2022-08-01 23:42:11 +0000 UTC 2025-12-13T00:21:32.261910088+00:00 stderr F { 2025-12-13T00:21:32.261910088+00:00 stderr F "container-image": { 2025-12-13T00:21:32.261910088+00:00 stderr F "image-digest": "sha256:ea0cef07b0cd5ba8ff8c487324bf6a4df15fa31e69962a8e8fb7d00f1caa7f1d", 2025-12-13T00:21:32.261910088+00:00 stderr F "image-labels": { 2025-12-13T00:21:32.261910088+00:00 stderr F "containers.bootc": "1", 2025-12-13T00:21:32.261910088+00:00 stderr F "coreos-assembler.image-config-checksum": "626c78cc9e2da6ffd642d678ac6109f5532e817e107932464c8222f8d3492491", 2025-12-13T00:21:32.261910088+00:00 stderr F "coreos-assembler.image-input-checksum": "6f9238f67f75298a27a620eacd78d313fd451541fc877d3f6aa681c0f6b22811", 2025-12-13T00:21:32.261910088+00:00 stderr F "io.openshift.build.version-display-names": "machine-os=Red Hat Enterprise Linux CoreOS", 2025-12-13T00:21:32.261910088+00:00 
stderr F "io.openshift.build.versions": "machine-os=416.94.202405291527-0", 2025-12-13T00:21:32.261910088+00:00 stderr F "org.opencontainers.image.revision": "b01a02cc1b92e2dacf12d9b5cddf690a2439ce64", 2025-12-13T00:21:32.261910088+00:00 stderr F "org.opencontainers.image.source": "https://github.com/openshift/os", 2025-12-13T00:21:32.261910088+00:00 stderr F "org.opencontainers.image.version": "416.94.202405291527-0", 2025-12-13T00:21:32.261910088+00:00 stderr F "ostree.bootable": "true", 2025-12-13T00:21:32.261910088+00:00 stderr F "ostree.commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-12-13T00:21:32.261910088+00:00 stderr F "ostree.final-diffid": "sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1", 2025-12-13T00:21:32.261910088+00:00 stderr F "ostree.linux": "5.14.0-427.18.1.el9_4.x86_64", 2025-12-13T00:21:32.261910088+00:00 stderr F "rpmostree.inputhash": "a00190b296e00061b72bcbc4ace9fb4b317e86da7d8c58471b027239035b05d6" 2025-12-13T00:21:32.261910088+00:00 stderr F }, 2025-12-13T00:21:32.261910088+00:00 stderr F "image-name": "oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive" 2025-12-13T00:21:32.261910088+00:00 stderr F }, 2025-12-13T00:21:32.261910088+00:00 stderr F "osbuild-version": "114", 2025-12-13T00:21:32.261910088+00:00 stderr F "ostree-commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-12-13T00:21:32.261910088+00:00 stderr F "ref": "docker://ostree-image-signed:oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive", 2025-12-13T00:21:32.261910088+00:00 stderr F "version": "416.94.202405291527-0" 2025-12-13T00:21:32.261910088+00:00 stderr F } 2025-12-13T00:21:32.262091952+00:00 stderr F I1213 00:21:32.262047 32044 coreos.go:70] Ignition provisioning: time=2024-06-26T12:42:18Z 2025-12-13T00:21:32.262091952+00:00 stderr F I1213 00:21:32.262072 32044 rpm-ostree.go:308] Running captured: journalctl --list-boots 
2025-12-13T00:21:32.269480872+00:00 stderr F I1213 00:21:32.269412 32044 daemon.go:1736] journalctl --list-boots: 2025-12-13T00:21:32.269480872+00:00 stderr F IDX BOOT ID FIRST ENTRY LAST ENTRY 2025-12-13T00:21:32.269480872+00:00 stderr F -4 286f1119e01c427899b4130371f705c5 Thu 2024-06-27 13:36:35 UTC Thu 2024-06-27 13:36:39 UTC 2025-12-13T00:21:32.269480872+00:00 stderr F -3 2ff245ef1efc4648b6c81a61c24bb5db Thu 2024-06-27 13:37:00 UTC Thu 2024-06-27 13:37:11 UTC 2025-12-13T00:21:32.269480872+00:00 stderr F -2 7bac8de7aad04ed8a9adc4391f6449b7 Wed 2025-08-13 19:43:15 UTC Wed 2025-08-13 20:42:52 UTC 2025-12-13T00:21:32.269480872+00:00 stderr F -1 ac3ee373ffba44fcbb674f979afd649b Sat 2025-12-13 00:06:18 UTC Sat 2025-12-13 00:10:07 UTC 2025-12-13T00:21:32.269480872+00:00 stderr F 0 6b04f78578a947bdb1b0d69223fec89b Sat 2025-12-13 00:10:12 UTC Sat 2025-12-13 00:21:32 UTC 2025-12-13T00:21:32.269480872+00:00 stderr F I1213 00:21:32.269442 32044 rpm-ostree.go:308] Running captured: systemctl list-units --state=failed --no-legend 2025-12-13T00:21:32.280134679+00:00 stderr F I1213 00:21:32.280083 32044 daemon.go:1751] systemd service state: OK 2025-12-13T00:21:32.280134679+00:00 stderr F I1213 00:21:32.280109 32044 daemon.go:1327] Starting MachineConfigDaemon 2025-12-13T00:21:32.280182060+00:00 stderr F I1213 00:21:32.280162 32044 daemon.go:1334] Enabling Kubelet Healthz Monitor 2025-12-13T00:21:33.213187287+00:00 stderr F I1213 00:21:33.213104 32044 daemon.go:647] Node crc is part of the control plane 2025-12-13T00:21:33.270725850+00:00 stderr F I1213 00:21:33.270646 32044 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs2212711218 --cleanup 2025-12-13T00:21:33.272387835+00:00 stderr F [2025-12-13T00:21:33Z INFO nmstatectl] Nmstate version: 2.2.29 2025-12-13T00:21:33.272410096+00:00 stderr F [2025-12-13T00:21:33Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 
2025-12-13T00:21:33.272417896+00:00 stdout F 2025-12-13T00:21:33.279660481+00:00 stderr F I1213 00:21:33.279556 32044 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-12-13T00:21:33.279660481+00:00 stderr F E1213 00:21:33.279642 32044 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:21:33.279660481+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:21:35.290673387+00:00 stderr F I1213 00:21:35.290582 32044 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs2815911385 --cleanup 2025-12-13T00:21:35.292534958+00:00 stderr F [2025-12-13T00:21:35Z INFO nmstatectl] Nmstate version: 2.2.29 2025-12-13T00:21:35.292570689+00:00 stdout F 2025-12-13T00:21:35.292578159+00:00 stderr F [2025-12-13T00:21:35Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-12-13T00:21:35.299444044+00:00 stderr F I1213 00:21:35.299402 32044 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-12-13T00:21:35.299501535+00:00 stderr F E1213 00:21:35.299465 32044 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:21:35.299501535+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:21:39.311373143+00:00 stderr F I1213 00:21:39.311271 32044 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs943689762 --cleanup 2025-12-13T00:21:39.312966153+00:00 stderr F [2025-12-13T00:21:39Z INFO nmstatectl] Nmstate version: 2.2.29 2025-12-13T00:21:39.312966153+00:00 stderr F [2025-12-13T00:21:39Z INFO nmstatectl::persist_nic] /etc/systemd/network does 
not exist, no need to clean up 2025-12-13T00:21:39.313005454+00:00 stdout F 2025-12-13T00:21:39.325584143+00:00 stderr F I1213 00:21:39.322698 32044 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-12-13T00:21:39.325584143+00:00 stderr F E1213 00:21:39.322743 32044 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:21:39.325584143+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:21:47.340523933+00:00 stderr F I1213 00:21:47.340461 32044 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs1357373632 --cleanup 2025-12-13T00:21:47.344206324+00:00 stderr F [2025-12-13T00:21:47Z INFO nmstatectl] Nmstate version: 2.2.29 2025-12-13T00:21:47.344293716+00:00 stdout F 2025-12-13T00:21:47.344300066+00:00 stderr F [2025-12-13T00:21:47Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-12-13T00:21:47.352434907+00:00 stderr F I1213 00:21:47.352359 32044 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-12-13T00:21:47.352455538+00:00 stderr F E1213 00:21:47.352432 32044 writer.go:226] Marking Degraded due to: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:21:47.352455538+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/2.log
2025-08-13T19:57:10.537128831+00:00 stderr F I0813 19:57:10.536908 22232 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:57:10.537343497+00:00 stderr F I0813 19:57:10.537327 22232 update.go:2595] Running: mount --rbind /run/secrets /rootfs/run/secrets 2025-08-13T19:57:10.544102180+00:00 stderr F I0813 19:57:10.543965 22232 update.go:2595] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin 2025-08-13T19:57:10.548261819+00:00 stderr F I0813 19:57:10.548151 22232 daemon.go:513] using appropriate binary for source=rhel-9 target=rhel-9 2025-08-13T19:57:10.660138773+00:00 stderr F I0813 19:57:10.660004 22232 daemon.go:566] Invoking re-exec /run/bin/machine-config-daemon 2025-08-13T19:57:10.729900775+00:00 stderr F I0813 19:57:10.729682 22232 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:57:10.730192564+00:00 stderr F E0813 19:57:10.730121 22232 rpm-ostree.go:276] Merged secret file does not exist; defaulting to cluster pull secret 2025-08-13T19:57:10.730305507+00:00 stderr F I0813 19:57:10.730252 22232 rpm-ostree.go:263] Linking ostree authfile to /var/lib/kubelet/config.json 2025-08-13T19:57:10.952575284+00:00 stderr F I0813 19:57:10.952524 22232 daemon.go:317] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 (416.94.202406172220-0) 3aa42b3e55a31016873768eb92311a9ba07c871ac81e126e9561a61d8f9d2f24
2025-08-13T19:57:10.953974444+00:00 stderr F I0813 19:57:10.953931 22232 start.go:134] overriding kubernetes api to https://api-int.crc.testing:6443 2025-08-13T19:57:10.955283391+00:00 stderr F I0813 19:57:10.955215 22232 metrics.go:100] Registering Prometheus metrics 2025-08-13T19:57:10.955349303+00:00 stderr F I0813 19:57:10.955309 22232 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-08-13T19:57:10.986460181+00:00 stderr F I0813 19:57:10.985143 22232 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:57:10.986460181+00:00 stderr F I0813 19:57:10.985317 22232 writer.go:88] NodeWriter initialized with credentials from /var/lib/kubelet/kubeconfig 2025-08-13T19:57:11.001876102+00:00 stderr F I0813 19:57:11.000562 22232 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", 
"EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:57:11.001876102+00:00 stderr F I0813 19:57:11.001140 22232 start.go:214] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes 
ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:57:11.001876102+00:00 stderr F I0813 19:57:11.001213 22232 update.go:2610] Starting to manage node: crc 2025-08-13T19:57:11.013211645+00:00 stderr F I0813 19:57:11.009509 22232 rpm-ostree.go:308] Running captured: rpm-ostree status 2025-08-13T19:57:11.071088538+00:00 stderr F I0813 19:57:11.070943 22232 daemon.go:1727] State: idle 2025-08-13T19:57:11.071088538+00:00 stderr F Deployments: 2025-08-13T19:57:11.071088538+00:00 stderr F * ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-08-13T19:57:11.071088538+00:00 stderr F Digest: sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-08-13T19:57:11.071088538+00:00 stderr F Version: 416.94.202406172220-0 (2024-06-17T22:24:17Z) 2025-08-13T19:57:11.071088538+00:00 stderr F LocalPackages: hyperv-daemons-0-0.42.20190303git.el9.x86_64 2025-08-13T19:57:11.071088538+00:00 stderr F hyperv-daemons-license-0-0.42.20190303git.el9.noarch 2025-08-13T19:57:11.071088538+00:00 stderr F hypervfcopyd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:57:11.071088538+00:00 stderr F hypervkvpd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:57:11.071088538+00:00 stderr F hypervvssd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:57:11.071903531+00:00 stderr F I0813 19:57:11.071718 
22232 coreos.go:53] CoreOS aleph version: mtime=2022-08-01 23:42:11 +0000 UTC 2025-08-13T19:57:11.071903531+00:00 stderr F { 2025-08-13T19:57:11.071903531+00:00 stderr F "container-image": { 2025-08-13T19:57:11.071903531+00:00 stderr F "image-digest": "sha256:ea0cef07b0cd5ba8ff8c487324bf6a4df15fa31e69962a8e8fb7d00f1caa7f1d", 2025-08-13T19:57:11.071903531+00:00 stderr F "image-labels": { 2025-08-13T19:57:11.071903531+00:00 stderr F "containers.bootc": "1", 2025-08-13T19:57:11.071903531+00:00 stderr F "coreos-assembler.image-config-checksum": "626c78cc9e2da6ffd642d678ac6109f5532e817e107932464c8222f8d3492491", 2025-08-13T19:57:11.071903531+00:00 stderr F "coreos-assembler.image-input-checksum": "6f9238f67f75298a27a620eacd78d313fd451541fc877d3f6aa681c0f6b22811", 2025-08-13T19:57:11.071903531+00:00 stderr F "io.openshift.build.version-display-names": "machine-os=Red Hat Enterprise Linux CoreOS", 2025-08-13T19:57:11.071903531+00:00 stderr F "io.openshift.build.versions": "machine-os=416.94.202405291527-0", 2025-08-13T19:57:11.071903531+00:00 stderr F "org.opencontainers.image.revision": "b01a02cc1b92e2dacf12d9b5cddf690a2439ce64", 2025-08-13T19:57:11.071903531+00:00 stderr F "org.opencontainers.image.source": "https://github.com/openshift/os", 2025-08-13T19:57:11.071903531+00:00 stderr F "org.opencontainers.image.version": "416.94.202405291527-0", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree.bootable": "true", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree.commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree.final-diffid": "sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree.linux": "5.14.0-427.18.1.el9_4.x86_64", 2025-08-13T19:57:11.071903531+00:00 stderr F "rpmostree.inputhash": "a00190b296e00061b72bcbc4ace9fb4b317e86da7d8c58471b027239035b05d6" 2025-08-13T19:57:11.071903531+00:00 stderr F }, 
2025-08-13T19:57:11.071903531+00:00 stderr F "image-name": "oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive" 2025-08-13T19:57:11.071903531+00:00 stderr F }, 2025-08-13T19:57:11.071903531+00:00 stderr F "osbuild-version": "114", 2025-08-13T19:57:11.071903531+00:00 stderr F "ostree-commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-08-13T19:57:11.071903531+00:00 stderr F "ref": "docker://ostree-image-signed:oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive", 2025-08-13T19:57:11.071903531+00:00 stderr F "version": "416.94.202405291527-0" 2025-08-13T19:57:11.071903531+00:00 stderr F } 2025-08-13T19:57:11.072011204+00:00 stderr F I0813 19:57:11.071965 22232 coreos.go:70] Ignition provisioning: time=2024-06-26T12:42:18Z 2025-08-13T19:57:11.072011204+00:00 stderr F I0813 19:57:11.071980 22232 rpm-ostree.go:308] Running captured: journalctl --list-boots 2025-08-13T19:57:11.084910483+00:00 stderr F I0813 19:57:11.084726 22232 daemon.go:1736] journalctl --list-boots: 2025-08-13T19:57:11.084910483+00:00 stderr F IDX BOOT ID FIRST ENTRY LAST ENTRY 2025-08-13T19:57:11.084910483+00:00 stderr F -2 286f1119e01c427899b4130371f705c5 Thu 2024-06-27 13:36:35 UTC Thu 2024-06-27 13:36:39 UTC 2025-08-13T19:57:11.084910483+00:00 stderr F -1 2ff245ef1efc4648b6c81a61c24bb5db Thu 2024-06-27 13:37:00 UTC Thu 2024-06-27 13:37:11 UTC 2025-08-13T19:57:11.084910483+00:00 stderr F 0 7bac8de7aad04ed8a9adc4391f6449b7 Wed 2025-08-13 19:43:15 UTC Wed 2025-08-13 19:57:11 UTC 2025-08-13T19:57:11.084910483+00:00 stderr F I0813 19:57:11.084819 22232 rpm-ostree.go:308] Running captured: systemctl list-units --state=failed --no-legend 2025-08-13T19:57:11.098599324+00:00 stderr F I0813 19:57:11.097905 22232 daemon.go:1751] systemd service state: OK 2025-08-13T19:57:11.098599324+00:00 stderr F I0813 19:57:11.097949 22232 daemon.go:1327] Starting MachineConfigDaemon 2025-08-13T19:57:11.098599324+00:00 stderr F I0813 19:57:11.098084 22232 
daemon.go:1334] Enabling Kubelet Healthz Monitor 2025-08-13T19:57:11.990345587+00:00 stderr F I0813 19:57:11.990162 22232 daemon.go:647] Node crc is part of the control plane 2025-08-13T19:57:12.010350119+00:00 stderr F I0813 19:57:12.010192 22232 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs1986687374 --cleanup 2025-08-13T19:57:12.013985292+00:00 stderr F [2025-08-13T19:57:12Z INFO nmstatectl] Nmstate version: 2.2.29 2025-08-13T19:57:12.014047174+00:00 stdout F 2025-08-13T19:57:12.014055814+00:00 stderr F [2025-08-13T19:57:12Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-08-13T19:57:12.025548503+00:00 stderr F I0813 19:57:12.025258 22232 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-08-13T19:57:12.025548503+00:00 stderr F I0813 19:57:12.025316 22232 daemon.go:1680] Current+desired config: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:57:12.025548503+00:00 stderr F I0813 19:57:12.025328 22232 daemon.go:1695] state: Done 2025-08-13T19:57:12.025548503+00:00 stderr F I0813 19:57:12.025374 22232 update.go:2595] Running: rpm-ostree cleanup -r 2025-08-13T19:57:12.086057180+00:00 stdout F Deployments unchanged. 2025-08-13T19:57:12.096181789+00:00 stderr F I0813 19:57:12.096073 22232 daemon.go:2096] Validating against current config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:57:12.096716985+00:00 stderr F I0813 19:57:12.096505 22232 daemon.go:2008] SSH key location ("/home/core/.ssh/authorized_keys.d/ignition") up-to-date! 
2025-08-13T19:57:12.367529387+00:00 stderr F W0813 19:57:12.367366 22232 daemon.go:2601] Unable to check manifest for matching hash: error parsing image name "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883": reading manifest sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized 2025-08-13T19:57:12.367529387+00:00 stderr F I0813 19:57:12.367431 22232 rpm-ostree.go:308] Running captured: rpm-ostree kargs 2025-08-13T19:57:12.438064461+00:00 stderr F I0813 19:57:12.437991 22232 update.go:2610] Validated on-disk state 2025-08-13T19:57:12.443242619+00:00 stderr F I0813 19:57:12.443192 22232 daemon.go:2198] Completing update to target MachineConfig: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:57:22.467338806+00:00 stderr F I0813 19:57:22.467156 22232 update.go:2610] Update completed for config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 and node has been successfully uncordoned 2025-08-13T19:57:22.488642294+00:00 stderr F I0813 19:57:22.487546 22232 daemon.go:2223] In desired state MachineConfig: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:57:22.505041773+00:00 stderr F I0813 19:57:22.504737 22232 config_drift_monitor.go:246] Config Drift Monitor started 2025-08-13T19:57:22.505041773+00:00 stderr F I0813 19:57:22.504953 22232 daemon.go:735] Transitioned from state: -> Done 2025-08-13T19:58:11.115676832+00:00 stderr F I0813 19:58:11.115363 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 23037 2025-08-13T19:58:22.507690785+00:00 stderr F I0813 19:58:22.507325 22232 daemon.go:858] Starting health listener on 127.0.0.1:8798 2025-08-13T19:59:31.927707858+00:00 stderr F I0813 19:59:31.927502 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 
28133 2025-08-13T19:59:36.357925063+00:00 stderr F I0813 19:59:36.357581 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 28190 2025-08-13T19:59:40.727962252+00:00 stderr F W0813 19:59:40.718365 22232 daemon.go:2601] Unable to check manifest for matching hash: error parsing image name "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883": reading manifest sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized 2025-08-13T19:59:40.727962252+00:00 stderr F I0813 19:59:40.718636 22232 rpm-ostree.go:308] Running captured: rpm-ostree kargs 2025-08-13T19:59:41.458408873+00:00 stderr F I0813 19:59:41.458316 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 28276 2025-08-13T19:59:45.463438017+00:00 stderr F I0813 19:59:45.461866 22232 daemon.go:921] Preflight config drift check successful (took 5.102890289s) 2025-08-13T19:59:45.473484843+00:00 stderr F I0813 19:59:45.469582 22232 config_drift_monitor.go:255] Config Drift Monitor has shut down 2025-08-13T19:59:46.475458464+00:00 stderr F I0813 19:59:46.475394 22232 update.go:2632] Adding SIGTERM protection 2025-08-13T19:59:46.535535217+00:00 stderr F I0813 19:59:46.535393 22232 update.go:1011] Checking Reconcilable for config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T19:59:47.009685103+00:00 stderr F I0813 19:59:47.009130 22232 reconcile.go:175] user data to be verified before ssh update: { [] core [ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBACrKovYP/jwO+a53sdlihFLUWOCBZJORwFiQNgBoHgse9pb7UsuVllby/9PMvDGujPs69Sme2dqr+ruV4PQyRs6BAD87myXikco4bl4QHlNCxCiMl4UGh+qGe3xP1pvMotXd+Cl6yzdvgyhr9MuMLVjrj2IZWM5hjJC3ZAAd98IO0E4xQ==] } 
2025-08-13T19:59:47.009685103+00:00 stderr F I0813 19:59:47.009225 22232 reconcile.go:151] SSH Keys reconcilable 2025-08-13T19:59:47.357071315+00:00 stderr F I0813 19:59:47.349732 22232 update.go:2610] Starting update from rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a: &{osUpdate:false kargs:false fips:false passwd:true files:false units:false kernelType:false extensions:false} 2025-08-13T19:59:47.929262636+00:00 stderr F I0813 19:59:47.928599 22232 upgrade_monitor.go:324] MCN Featuregate is not enabled. Please enable the TechPreviewNoUpgrade featureset to use MachineConfigNodes 2025-08-13T19:59:47.929262636+00:00 stderr F I0813 19:59:47.928671 22232 update.go:1135] Changes do not require drain, skipping. 2025-08-13T19:59:47.929262636+00:00 stderr F I0813 19:59:47.928751 22232 update.go:1824] Updating files 2025-08-13T19:59:47.929262636+00:00 stderr F I0813 19:59:47.928761 22232 file_writers.go:233] Writing file "/usr/local/bin/nm-clean-initrd-state.sh" 2025-08-13T19:59:48.033316733+00:00 stderr F I0813 19:59:48.033244 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/01-ipv6.conf" 2025-08-13T19:59:48.696336273+00:00 stderr F I0813 19:59:48.689565 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/20-keyfiles.conf" 2025-08-13T19:59:49.030039376+00:00 stderr F I0813 19:59:49.022894 22232 file_writers.go:233] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt" 2025-08-13T19:59:49.484090729+00:00 stderr F I0813 19:59:49.483886 22232 file_writers.go:233] Writing file "/etc/kubernetes/apiserver-url.env" 2025-08-13T19:59:49.585269533+00:00 stderr F I0813 19:59:49.585015 22232 file_writers.go:233] Writing file "/etc/audit/rules.d/mco-audit-quiet-containers.rules" 2025-08-13T19:59:49.689499053+00:00 stderr F I0813 19:59:49.687103 22232 file_writers.go:233] Writing file "/etc/tmpfiles.d/cleanup-cni.conf" 2025-08-13T19:59:49.956253737+00:00 stderr F I0813 
19:59:49.949356 22232 file_writers.go:233] Writing file "/usr/local/bin/configure-ovs.sh" 2025-08-13T19:59:50.261551160+00:00 stderr F I0813 19:59:50.261371 22232 file_writers.go:233] Writing file "/etc/containers/storage.conf" 2025-08-13T19:59:50.385114742+00:00 stderr F I0813 19:59:50.385051 22232 file_writers.go:233] Writing file "/etc/mco/proxy.env" 2025-08-13T19:59:50.741647616+00:00 stderr F I0813 19:59:50.722828 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf" 2025-08-13T19:59:51.109315807+00:00 stderr F I0813 19:59:51.109245 22232 file_writers.go:233] Writing file "/etc/modules-load.d/iptables.conf" 2025-08-13T19:59:51.192529159+00:00 stderr F I0813 19:59:51.192456 22232 file_writers.go:233] Writing file "/etc/node-sizing-enabled.env" 2025-08-13T19:59:51.218742786+00:00 stderr F I0813 19:59:51.215769 22232 file_writers.go:233] Writing file "/usr/local/sbin/dynamic-system-reserved-calc.sh" 2025-08-13T19:59:51.460395665+00:00 stderr F I0813 19:59:51.460311 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf" 2025-08-13T19:59:51.521491636+00:00 stderr F I0813 19:59:51.521209 22232 file_writers.go:233] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf" 2025-08-13T19:59:51.562914567+00:00 stderr F I0813 19:59:51.560100 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/sdn.conf" 2025-08-13T19:59:51.696166476+00:00 stderr F I0813 19:59:51.684283 22232 file_writers.go:233] Writing file "/etc/nmstate/nmstate.conf" 2025-08-13T19:59:51.777634438+00:00 stderr F I0813 19:59:51.777471 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh" 2025-08-13T19:59:52.071225977+00:00 stderr F I0813 19:59:52.069309 22232 file_writers.go:233] Writing file "/var/lib/kubelet/config.json" 2025-08-13T19:59:52.171214088+00:00 stderr F I0813 19:59:52.170641 22232 file_writers.go:233] Writing file "/etc/kubernetes/ca.crt" 
2025-08-13T19:59:52.276324934+00:00 stderr F I0813 19:59:52.276183 22232 file_writers.go:233] Writing file "/etc/dnsmasq.conf"
2025-08-13T19:59:52.514677088+00:00 stderr F I0813 19:59:52.513265 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/forcedns-rhel9-fix"
2025-08-13T19:59:52.705937040+00:00 stderr F I0813 19:59:52.704355 22232 file_writers.go:233] Writing file "/etc/sysctl.d/arp.conf"
2025-08-13T19:59:52.794668479+00:00 stderr F I0813 19:59:52.793886 22232 file_writers.go:233] Writing file "/etc/sysctl.d/gc-thresh.conf"
2025-08-13T19:59:52.920257429+00:00 stderr F I0813 19:59:52.920192 22232 file_writers.go:233] Writing file "/etc/sysctl.d/inotify.conf"
2025-08-13T19:59:53.087749554+00:00 stderr F I0813 19:59:53.087687 22232 file_writers.go:233] Writing file "/etc/sysctl.d/enable-userfaultfd.conf"
2025-08-13T19:59:53.152336605+00:00 stderr F I0813 19:59:53.152274 22232 file_writers.go:233] Writing file "/etc/sysctl.d/vm-max-map.conf"
2025-08-13T19:59:53.237315557+00:00 stderr F I0813 19:59:53.237175 22232 file_writers.go:233] Writing file "/usr/local/bin/mco-hostname"
2025-08-13T19:59:53.267968770+00:00 stderr F I0813 19:59:53.267902 22232 file_writers.go:233] Writing file "/usr/local/bin/recover-kubeconfig.sh"
2025-08-13T19:59:53.297734859+00:00 stderr F I0813 19:59:53.296534 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy"
2025-08-13T19:59:53.435545397+00:00 stderr F I0813 19:59:53.435193 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl"
2025-08-13T19:59:53.530363950+00:00 stderr F I0813 19:59:53.530307 22232 file_writers.go:233] Writing file "/usr/local/bin/wait-for-primary-ip.sh"
2025-08-13T19:59:53.563872605+00:00 stderr F I0813 19:59:53.563294 22232 file_writers.go:233] Writing file "/etc/containers/registries.conf"
2025-08-13T19:59:53.635688322+00:00 stderr F I0813 19:59:53.635414 22232 file_writers.go:233] Writing file "/etc/crio/crio.conf.d/00-default"
2025-08-13T19:59:53.689104175+00:00 stderr F I0813 19:59:53.684418 22232 file_writers.go:233] Writing file "/etc/containers/policy.json"
2025-08-13T19:59:53.737416492+00:00 stderr F I0813 19:59:53.737245 22232 file_writers.go:233] Writing file "/etc/kubernetes/cloud.conf"
2025-08-13T19:59:53.975887440+00:00 stderr F I0813 19:59:53.975753 22232 file_writers.go:233] Writing file "/etc/kubernetes/crio-metrics-proxy.cfg"
2025-08-13T19:59:54.027015427+00:00 stderr F I0813 19:59:54.026269 22232 file_writers.go:233] Writing file "/etc/kubernetes/manifests/criometricsproxy.yaml"
2025-08-13T19:59:54.359685090+00:00 stderr F I0813 19:59:54.358593 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet.conf"
2025-08-13T19:59:54.453024270+00:00 stderr F I0813 19:59:54.452862 22232 file_writers.go:233] Writing file "/usr/local/bin/kubenswrapper"
2025-08-13T19:59:54.508676137+00:00 stderr F I0813 19:59:54.508492 22232 file_writers.go:293] Writing systemd unit "NetworkManager-clean-initrd-state.service"
2025-08-13T19:59:54.524150608+00:00 stderr F I0813 19:59:54.520664 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T19:59:54.552559448+00:00 stderr F I0813 19:59:54.552348 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T19:59:54.552559448+00:00 stderr F I0813 19:59:54.552424 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-profile-unix-socket.conf"
2025-08-13T19:59:54.585310891+00:00 stderr F I0813 19:59:54.575814 22232 file_writers.go:207] Writing systemd unit dropin "05-mco-ordering.conf"
2025-08-13T19:59:54.590633523+00:00 stderr F I0813 19:59:54.590146 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T19:59:57.032018376+00:00 stderr F I0813 19:59:57.026027 22232 update.go:2118] Preset systemd unit crio.service
2025-08-13T19:59:57.032018376+00:00 stderr F I0813 19:59:57.026074 22232 file_writers.go:293] Writing systemd unit "disable-mglru.service"
2025-08-13T19:59:57.066021635+00:00 stderr F I0813 19:59:57.061688 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T19:59:57.207558150+00:00 stderr F I0813 19:59:57.201523 22232 update.go:2155] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist.
2025-08-13T19:59:57.207558150+00:00 stderr F )
2025-08-13T19:59:57.207558150+00:00 stderr F I0813 19:59:57.201627 22232 file_writers.go:293] Writing systemd unit "firstboot-osupdate.target"
2025-08-13T19:59:57.491331609+00:00 stderr F I0813 19:59:57.484415 22232 file_writers.go:293] Writing systemd unit "kubelet-auto-node-size.service"
2025-08-13T19:59:57.503493765+00:00 stderr F I0813 19:59:57.503151 22232 file_writers.go:293] Writing systemd unit "kubelet-dependencies.target"
2025-08-13T19:59:59.829564801+00:00 stderr F I0813 19:59:59.826592 22232 update.go:2118] Preset systemd unit kubelet-dependencies.target
2025-08-13T19:59:59.829664244+00:00 stderr F I0813 19:59:59.829642 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T19:59:59.838453104+00:00 stderr F I0813 19:59:59.838391 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T19:59:59.838532937+00:00 stderr F I0813 19:59:59.838512 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T19:59:59.843176079+00:00 stderr F I0813 19:59:59.843066 22232 file_writers.go:293] Writing systemd unit "kubelet.service"
2025-08-13T19:59:59.857110866+00:00 stderr F I0813 19:59:59.857036 22232 file_writers.go:293] Writing systemd unit "kubens.service"
2025-08-13T19:59:59.871352322+00:00 stderr F I0813 19:59:59.870212 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-firstboot.service"
2025-08-13T19:59:59.894870933+00:00 stderr F I0813 19:59:59.894672 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-pull.service"
2025-08-13T19:59:59.910213540+00:00 stderr F I0813 19:59:59.909423 22232 file_writers.go:293] Writing systemd unit "node-valid-hostname.service"
2025-08-13T19:59:59.917510698+00:00 stderr F I0813 19:59:59.917338 22232 file_writers.go:293] Writing systemd unit "nodeip-configuration.service"
2025-08-13T19:59:59.931979601+00:00 stderr F I0813 19:59:59.930098 22232 file_writers.go:293] Writing systemd unit "ovs-configuration.service"
2025-08-13T19:59:59.933565056+00:00 stderr F I0813 19:59:59.933409 22232 file_writers.go:207] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf"
2025-08-13T20:00:02.559116822+00:00 stderr F I0813 20:00:02.552812 22232 update.go:2118] Preset systemd unit ovs-vswitchd.service
2025-08-13T20:00:02.559116822+00:00 stderr F I0813 20:00:02.552900 22232 file_writers.go:207] Writing systemd unit dropin "10-ovsdb-restart.conf"
2025-08-13T20:00:02.794046180+00:00 stderr F I0813 20:00:02.793485 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:00:03.064705348+00:00 stderr F I0813 20:00:03.064320 22232 update.go:2155] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist.
2025-08-13T20:00:03.064705348+00:00 stderr F )
2025-08-13T20:00:03.064705348+00:00 stderr F I0813 20:00:03.064467 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:00:03.064705348+00:00 stderr F I0813 20:00:03.064495 22232 file_writers.go:207] Writing systemd unit dropin "mco-controlplane-nice.conf"
2025-08-13T20:00:05.470235199+00:00 stderr F I0813 20:00:05.470067 22232 update.go:2118] Preset systemd unit rpm-ostreed.service
2025-08-13T20:00:05.470235199+00:00 stderr F I0813 20:00:05.470165 22232 file_writers.go:293] Writing systemd unit "wait-for-primary-ip.service"
2025-08-13T20:00:05.512909075+00:00 stderr F I0813 20:00:05.511641 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T20:00:05.621250585+00:00 stderr F I0813 20:00:05.621138 22232 update.go:2155] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist.
2025-08-13T20:00:05.621250585+00:00 stderr F )
2025-08-13T20:00:05.621250585+00:00 stderr F I0813 20:00:05.621185 22232 file_writers.go:293] Writing systemd unit "kubelet-cleanup.service"
2025-08-13T20:00:05.630887549+00:00 stderr F I0813 20:00:05.630133 22232 file_writers.go:293] Writing systemd unit "dummy-network.service"
2025-08-13T20:00:07.534870020+00:00 stderr F I0813 20:00:07.525150 22232 update.go:2096] Enabled systemd units: [NetworkManager-clean-initrd-state.service disable-mglru.service firstboot-osupdate.target kubelet-auto-node-size.service kubelet.service machine-config-daemon-firstboot.service machine-config-daemon-pull.service node-valid-hostname.service nodeip-configuration.service openvswitch.service ovs-configuration.service ovsdb-server.service wait-for-primary-ip.service kubelet-cleanup.service dummy-network.service]
2025-08-13T20:00:09.652570903+00:00 stderr F I0813 20:00:09.652173 22232 update.go:2107] Disabled systemd units [kubens.service]
2025-08-13T20:00:09.652648545+00:00 stderr F I0813 20:00:09.652607 22232 update.go:1887] Deleting stale data
2025-08-13T20:00:09.653243112+00:00 stderr F I0813 20:00:09.652960 22232 update.go:2293] updating the permission of the kubeconfig to: 0o600
2025-08-13T20:00:09.653243112+00:00 stderr F I0813 20:00:09.653074 22232 update.go:2316] updating SSH keys
2025-08-13T20:00:09.661636461+00:00 stderr F I0813 20:00:09.655549 22232 update.go:2217] Writing SSH keys to "/home/core/.ssh/authorized_keys.d/ignition"
2025-08-13T20:00:09.669466795+00:00 stderr F I0813 20:00:09.668341 22232 update.go:2259] Checking if absent users need to be disconfigured
2025-08-13T20:00:09.947245965+00:00 stderr F I0813 20:00:09.947112 22232 update.go:2284] Password has been configured
2025-08-13T20:00:10.034967376+00:00 stderr F I0813 20:00:10.029610 22232 update.go:2610] Node has Desired Config rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, skipping reboot
2025-08-13T20:00:10.083516291+00:00 stderr F I0813 20:00:10.081628 22232 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful
2025-08-13T20:00:10.083516291+00:00 stderr F I0813 20:00:10.081684 22232 daemon.go:1686] Current config: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5
2025-08-13T20:00:10.083516291+00:00 stderr F I0813 20:00:10.081695 22232 daemon.go:1687] Desired config: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:00:10.083516291+00:00 stderr F I0813 20:00:10.081701 22232 daemon.go:1695] state: Done
2025-08-13T20:00:10.140755523+00:00 stderr F I0813 20:00:10.139823 22232 daemon.go:2198] Completing update to target MachineConfig: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:00:20.784312371+00:00 stderr F I0813 20:00:20.784137 22232 update.go:2610] Update completed for config rendered-master-ef556ead28ddfad01c34ac56c7adfb5a and node has been successfully uncordoned
2025-08-13T20:00:20.937038686+00:00 stderr F I0813 20:00:20.936759 22232 daemon.go:2223] In desired state MachineConfig: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:00:21.054077223+00:00 stderr F I0813 20:00:21.053931 22232 config_drift_monitor.go:246] Config Drift Monitor started
2025-08-13T20:00:21.054077223+00:00 stderr F I0813 20:00:21.054062 22232 update.go:2640] Removing SIGTERM protection
2025-08-13T20:00:28.570968623+00:00 stderr F W0813 20:00:28.569716 22232 daemon.go:2601] Unable to check manifest for matching hash: error parsing image name "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883": reading manifest sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized
2025-08-13T20:00:28.570968623+00:00 stderr F I0813 20:00:28.569765 22232 rpm-ostree.go:308] Running captured: rpm-ostree kargs
2025-08-13T20:00:29.061416487+00:00 stderr F I0813 20:00:29.058613 22232 daemon.go:921] Preflight config drift check successful (took 838.048625ms)
2025-08-13T20:00:29.061416487+00:00 stderr F I0813 20:00:29.059008 22232 config_drift_monitor.go:255] Config Drift Monitor has shut down
2025-08-13T20:00:29.153688818+00:00 stderr F I0813 20:00:29.153551 22232 update.go:2632] Adding SIGTERM protection
2025-08-13T20:00:29.590755131+00:00 stderr F I0813 20:00:29.584560 22232 update.go:1011] Checking Reconcilable for config rendered-master-ef556ead28ddfad01c34ac56c7adfb5a to rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:00:29.828121039+00:00 stderr F I0813 20:00:29.826580 22232 update.go:2610] Starting update from rendered-master-ef556ead28ddfad01c34ac56c7adfb5a to rendered-master-11405dc064e9fc83a779a06d1cd665b3: &{osUpdate:false kargs:false fips:false passwd:false files:true units:false kernelType:false extensions:false}
2025-08-13T20:00:29.859963587+00:00 stderr F I0813 20:00:29.858244 22232 upgrade_monitor.go:324] MCN Featuregate is not enabled. Please enable the TechPreviewNoUpgrade featureset to use MachineConfigNodes
2025-08-13T20:00:29.859963587+00:00 stderr F I0813 20:00:29.858329 22232 update.go:1135] Changes do not require drain, skipping.
2025-08-13T20:00:29.859963587+00:00 stderr F I0813 20:00:29.858389 22232 update.go:1824] Updating files
2025-08-13T20:00:29.859963587+00:00 stderr F I0813 20:00:29.858397 22232 file_writers.go:233] Writing file "/usr/local/bin/nm-clean-initrd-state.sh"
2025-08-13T20:00:29.928929103+00:00 stderr F I0813 20:00:29.927253 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/01-ipv6.conf"
2025-08-13T20:00:29.956200881+00:00 stderr F I0813 20:00:29.953938 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/20-keyfiles.conf"
2025-08-13T20:00:29.986563087+00:00 stderr F I0813 20:00:29.986401 22232 file_writers.go:233] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt"
2025-08-13T20:00:30.003994594+00:00 stderr F I0813 20:00:30.002598 22232 file_writers.go:233] Writing file "/etc/kubernetes/apiserver-url.env"
2025-08-13T20:00:30.058935210+00:00 stderr F I0813 20:00:30.058265 22232 file_writers.go:233] Writing file "/etc/audit/rules.d/mco-audit-quiet-containers.rules"
2025-08-13T20:00:30.138138529+00:00 stderr F I0813 20:00:30.137114 22232 file_writers.go:233] Writing file "/etc/tmpfiles.d/cleanup-cni.conf"
2025-08-13T20:00:30.180542928+00:00 stderr F I0813 20:00:30.180426 22232 file_writers.go:233] Writing file "/usr/local/bin/configure-ovs.sh"
2025-08-13T20:00:30.205171880+00:00 stderr F I0813 20:00:30.205111 22232 file_writers.go:233] Writing file "/etc/containers/storage.conf"
2025-08-13T20:00:30.320662783+00:00 stderr F I0813 20:00:30.318958 22232 file_writers.go:233] Writing file "/etc/mco/proxy.env"
2025-08-13T20:00:30.446189862+00:00 stderr F I0813 20:00:30.446118 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf"
2025-08-13T20:00:30.712471235+00:00 stderr F I0813 20:00:30.712360 22232 file_writers.go:233] Writing file "/etc/modules-load.d/iptables.conf"
2025-08-13T20:00:30.749099069+00:00 stderr F I0813 20:00:30.749001 22232 file_writers.go:233] Writing file "/etc/node-sizing-enabled.env"
2025-08-13T20:00:30.793060323+00:00 stderr F I0813 20:00:30.792857 22232 file_writers.go:233] Writing file "/usr/local/sbin/dynamic-system-reserved-calc.sh"
2025-08-13T20:00:30.822506863+00:00 stderr F I0813 20:00:30.822361 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf"
2025-08-13T20:00:30.855605566+00:00 stderr F I0813 20:00:30.855477 22232 file_writers.go:233] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf"
2025-08-13T20:00:30.893684752+00:00 stderr F I0813 20:00:30.893516 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/sdn.conf"
2025-08-13T20:00:30.930010118+00:00 stderr F I0813 20:00:30.929909 22232 file_writers.go:233] Writing file "/etc/nmstate/nmstate.conf"
2025-08-13T20:00:30.949467253+00:00 stderr F I0813 20:00:30.949256 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh"
2025-08-13T20:00:30.975131595+00:00 stderr F I0813 20:00:30.975007 22232 file_writers.go:233] Writing file "/var/lib/kubelet/config.json"
2025-08-13T20:00:31.026386506+00:00 stderr F I0813 20:00:31.023025 22232 file_writers.go:233] Writing file "/etc/kubernetes/ca.crt"
2025-08-13T20:00:31.054892399+00:00 stderr F I0813 20:00:31.053008 22232 file_writers.go:233] Writing file "/etc/dnsmasq.conf"
2025-08-13T20:00:31.236716744+00:00 stderr F I0813 20:00:31.236482 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/forcedns-rhel9-fix"
2025-08-13T20:00:31.287532863+00:00 stderr F I0813 20:00:31.287281 22232 file_writers.go:233] Writing file "/etc/sysctl.d/arp.conf"
2025-08-13T20:00:31.367567625+00:00 stderr F I0813 20:00:31.367362 22232 file_writers.go:233] Writing file "/etc/sysctl.d/gc-thresh.conf"
2025-08-13T20:00:31.458897719+00:00 stderr F I0813 20:00:31.458746 22232 file_writers.go:233] Writing file "/etc/sysctl.d/inotify.conf"
2025-08-13T20:00:31.519898518+00:00 stderr F I0813 20:00:31.518075 22232 file_writers.go:233] Writing file "/etc/sysctl.d/enable-userfaultfd.conf"
2025-08-13T20:00:31.640943030+00:00 stderr F I0813 20:00:31.639712 22232 file_writers.go:233] Writing file "/etc/sysctl.d/vm-max-map.conf"
2025-08-13T20:00:31.995951702+00:00 stderr F I0813 20:00:31.995010 22232 file_writers.go:233] Writing file "/usr/local/bin/mco-hostname"
2025-08-13T20:00:32.941604476+00:00 stderr F I0813 20:00:32.941205 22232 file_writers.go:233] Writing file "/usr/local/bin/recover-kubeconfig.sh"
2025-08-13T20:00:33.099537179+00:00 stderr F I0813 20:00:33.098742 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy"
2025-08-13T20:00:33.259858511+00:00 stderr F I0813 20:00:33.259051 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl"
2025-08-13T20:00:33.358342259+00:00 stderr F I0813 20:00:33.357730 22232 file_writers.go:233] Writing file "/usr/local/bin/wait-for-primary-ip.sh"
2025-08-13T20:00:33.429253221+00:00 stderr F I0813 20:00:33.421085 22232 file_writers.go:233] Writing file "/etc/containers/registries.conf"
2025-08-13T20:00:33.718915470+00:00 stderr F I0813 20:00:33.714664 22232 file_writers.go:233] Writing file "/etc/crio/crio.conf.d/00-default"
2025-08-13T20:00:33.950983087+00:00 stderr F I0813 20:00:33.949050 22232 file_writers.go:233] Writing file "/etc/containers/policy.json"
2025-08-13T20:00:34.153575824+00:00 stderr F I0813 20:00:34.152013 22232 file_writers.go:233] Writing file "/etc/kubernetes/cloud.conf"
2025-08-13T20:00:34.337065366+00:00 stderr F I0813 20:00:34.335199 22232 file_writers.go:233] Writing file "/etc/kubernetes/crio-metrics-proxy.cfg"
2025-08-13T20:00:34.391819897+00:00 stderr F I0813 20:00:34.391466 22232 file_writers.go:233] Writing file "/etc/kubernetes/manifests/criometricsproxy.yaml"
2025-08-13T20:00:34.773422499+00:00 stderr F I0813 20:00:34.769368 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet.conf"
2025-08-13T20:00:34.913323098+00:00 stderr F I0813 20:00:34.912048 22232 file_writers.go:233] Writing file "/usr/local/bin/kubenswrapper"
2025-08-13T20:00:34.991521507+00:00 stderr F I0813 20:00:34.989402 22232 file_writers.go:293] Writing systemd unit "NetworkManager-clean-initrd-state.service"
2025-08-13T20:00:35.019358491+00:00 stderr F I0813 20:00:35.019218 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T20:00:35.085055785+00:00 stderr F I0813 20:00:35.082897 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:00:35.085055785+00:00 stderr F I0813 20:00:35.083651 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-profile-unix-socket.conf"
2025-08-13T20:00:35.103228133+00:00 stderr F I0813 20:00:35.101299 22232 file_writers.go:207] Writing systemd unit dropin "05-mco-ordering.conf"
2025-08-13T20:00:35.157459499+00:00 stderr F I0813 20:00:35.157364 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T20:00:37.250278123+00:00 stderr F I0813 20:00:37.250022 22232 update.go:2118] Preset systemd unit crio.service
2025-08-13T20:00:37.250278123+00:00 stderr F I0813 20:00:37.250190 22232 file_writers.go:293] Writing systemd unit "disable-mglru.service"
2025-08-13T20:00:37.253953048+00:00 stderr F I0813 20:00:37.253910 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T20:00:37.405302973+00:00 stderr F I0813 20:00:37.405192 22232 update.go:2155] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist.
2025-08-13T20:00:37.405302973+00:00 stderr F )
2025-08-13T20:00:37.405302973+00:00 stderr F I0813 20:00:37.405273 22232 file_writers.go:293] Writing systemd unit "firstboot-osupdate.target"
2025-08-13T20:00:37.420331222+00:00 stderr F I0813 20:00:37.417654 22232 file_writers.go:293] Writing systemd unit "kubelet-auto-node-size.service"
2025-08-13T20:00:37.423037109+00:00 stderr F I0813 20:00:37.422320 22232 file_writers.go:293] Writing systemd unit "kubelet-dependencies.target"
2025-08-13T20:00:39.911409521+00:00 stderr F I0813 20:00:39.911340 22232 update.go:2118] Preset systemd unit kubelet-dependencies.target
2025-08-13T20:00:39.911522365+00:00 stderr F I0813 20:00:39.911506 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T20:00:39.919367808+00:00 stderr F I0813 20:00:39.919290 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:00:39.919452881+00:00 stderr F I0813 20:00:39.919431 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T20:00:39.927199332+00:00 stderr F I0813 20:00:39.927170 22232 file_writers.go:293] Writing systemd unit "kubelet.service"
2025-08-13T20:00:39.932952896+00:00 stderr F I0813 20:00:39.932926 22232 file_writers.go:293] Writing systemd unit "kubens.service"
2025-08-13T20:00:39.946758159+00:00 stderr F I0813 20:00:39.946698 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-firstboot.service"
2025-08-13T20:00:39.957654460+00:00 stderr F I0813 20:00:39.956581 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-pull.service"
2025-08-13T20:00:39.981065418+00:00 stderr F I0813 20:00:39.980511 22232 file_writers.go:293] Writing systemd unit "node-valid-hostname.service"
2025-08-13T20:00:39.999995307+00:00 stderr F I0813 20:00:39.999741 22232 file_writers.go:293] Writing systemd unit "nodeip-configuration.service"
2025-08-13T20:00:40.023938250+00:00 stderr F I0813 20:00:40.023650 22232 file_writers.go:293] Writing systemd unit "ovs-configuration.service"
2025-08-13T20:00:40.040043639+00:00 stderr F I0813 20:00:40.035400 22232 file_writers.go:207] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf"
2025-08-13T20:00:42.156462427+00:00 stderr F I0813 20:00:42.155441 22232 update.go:2118] Preset systemd unit ovs-vswitchd.service
2025-08-13T20:00:42.156462427+00:00 stderr F I0813 20:00:42.155486 22232 file_writers.go:207] Writing systemd unit dropin "10-ovsdb-restart.conf"
2025-08-13T20:00:42.566969882+00:00 stderr F I0813 20:00:42.561559 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:00:42.671871473+00:00 stderr F I0813 20:00:42.670285 22232 update.go:2155] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist.
2025-08-13T20:00:42.671871473+00:00 stderr F )
2025-08-13T20:00:42.671871473+00:00 stderr F I0813 20:00:42.670332 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:00:42.671871473+00:00 stderr F I0813 20:00:42.670357 22232 file_writers.go:207] Writing systemd unit dropin "mco-controlplane-nice.conf"
2025-08-13T20:00:46.439115612+00:00 stderr F I0813 20:00:46.435710 22232 update.go:2118] Preset systemd unit rpm-ostreed.service
2025-08-13T20:00:46.439115612+00:00 stderr F I0813 20:00:46.435881 22232 file_writers.go:293] Writing systemd unit "wait-for-primary-ip.service"
2025-08-13T20:00:46.556598492+00:00 stderr F I0813 20:00:46.553101 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T20:00:46.749134532+00:00 stderr F I0813 20:00:46.744992 22232 update.go:2155] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist.
2025-08-13T20:00:46.749134532+00:00 stderr F )
2025-08-13T20:00:46.749134532+00:00 stderr F I0813 20:00:46.745047 22232 file_writers.go:293] Writing systemd unit "kubelet-cleanup.service"
2025-08-13T20:00:46.979914161+00:00 stderr F I0813 20:00:46.960227 22232 file_writers.go:293] Writing systemd unit "dummy-network.service"
2025-08-13T20:00:50.349518792+00:00 stderr F I0813 20:00:50.332944 22232 update.go:2096] Enabled systemd units: [NetworkManager-clean-initrd-state.service disable-mglru.service firstboot-osupdate.target kubelet-auto-node-size.service kubelet.service machine-config-daemon-firstboot.service machine-config-daemon-pull.service node-valid-hostname.service nodeip-configuration.service openvswitch.service ovs-configuration.service ovsdb-server.service wait-for-primary-ip.service kubelet-cleanup.service dummy-network.service]
2025-08-13T20:00:52.701873276+00:00 stderr F I0813 20:00:52.701092 22232 update.go:2107] Disabled systemd units [kubens.service]
2025-08-13T20:00:52.701873276+00:00 stderr F I0813 20:00:52.701211 22232 update.go:1887] Deleting stale data
2025-08-13T20:00:52.701873276+00:00 stderr F I0813 20:00:52.701287 22232 update.go:2293] updating the permission of the kubeconfig to: 0o600
2025-08-13T20:00:52.701873276+00:00 stderr F I0813 20:00:52.701558 22232 update.go:2259] Checking if absent users need to be disconfigured
2025-08-13T20:01:04.903053940+00:00 stderr F I0813 20:01:04.902693 22232 update.go:2284] Password has been configured
2025-08-13T20:01:05.406232137+00:00 stderr F I0813 20:01:05.404157 22232 update.go:2610] Node has Desired Config rendered-master-11405dc064e9fc83a779a06d1cd665b3, skipping reboot
2025-08-13T20:01:05.669318508+00:00 stderr F I0813 20:01:05.666377 22232 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful
2025-08-13T20:01:05.669318508+00:00 stderr F I0813 20:01:05.666613 22232 daemon.go:1686] Current config: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:01:05.669318508+00:00 stderr F I0813 20:01:05.666626 22232 daemon.go:1687] Desired config: rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:01:05.669318508+00:00 stderr F I0813 20:01:05.666638 22232 daemon.go:1695] state: Done
2025-08-13T20:01:05.722514235+00:00 stderr F I0813 20:01:05.721976 22232 daemon.go:2198] Completing update to target MachineConfig: rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:01:29.333812924+00:00 stderr F I0813 20:01:29.332455 22232 update.go:2610] Update completed for config rendered-master-11405dc064e9fc83a779a06d1cd665b3 and node has been successfully uncordoned
2025-08-13T20:01:31.494362400+00:00 stderr F I0813 20:01:31.488741 22232 daemon.go:2223] In desired state MachineConfig: rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:01:31.628122714+00:00 stderr F I0813 20:01:31.627395 22232 config_drift_monitor.go:246] Config Drift Monitor started
2025-08-13T20:01:31.628122714+00:00 stderr F I0813 20:01:31.627505 22232 update.go:2640] Removing SIGTERM protection
2025-08-13T20:05:15.716149813+00:00 stderr F I0813 20:05:15.715368 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 28276
2025-08-13T20:06:07.027530456+00:00 stderr F I0813 20:06:07.027467 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 31988
2025-08-13T20:06:14.811137908+00:00 stderr F I0813 20:06:14.810074 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32019
2025-08-13T20:06:49.883441178+00:00 stderr F I0813 20:06:49.877623 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32205
2025-08-13T20:06:50.782933748+00:00 stderr F I0813 20:06:50.782658 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32210
2025-08-13T20:06:51.459098074+00:00 stderr F I0813 20:06:51.458152 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32257
2025-08-13T20:09:06.622937465+00:00 stderr F I0813 20:09:06.622671 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32257
2025-08-13T20:36:38.651731771+00:00 stderr F I0813 20:36:38.651488 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 32257
2025-08-13T20:42:13.301299505+00:00 stderr F I0813 20:42:13.300560 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 37415
2025-08-13T20:42:14.048707112+00:00 stderr F I0813 20:42:14.026597 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 37419
2025-08-13T20:42:16.625653205+00:00 stderr F I0813 20:42:16.625182 22232 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 37431
2025-08-13T20:42:23.469674098+00:00 stderr F I0813 20:42:23.469267 22232 rpm-ostree.go:308] Running captured: rpm-ostree kargs
2025-08-13T20:42:24.066938248+00:00 stderr F I0813 20:42:24.065572 22232 daemon.go:921] Preflight config drift check successful (took 1.190182413s)
2025-08-13T20:42:24.072380705+00:00 stderr F I0813 20:42:24.072300 22232 config_drift_monitor.go:255] Config Drift Monitor has shut down
2025-08-13T20:42:24.105901431+00:00 stderr F I0813 20:42:24.104044 22232 update.go:2632] Adding SIGTERM protection
2025-08-13T20:42:24.149125037+00:00 stderr F I0813 20:42:24.149020 22232 update.go:1011] Checking Reconcilable for config rendered-master-11405dc064e9fc83a779a06d1cd665b3 to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:42:24.198179062+00:00 stderr F I0813 20:42:24.198116 22232 update.go:2610] Starting update from rendered-master-11405dc064e9fc83a779a06d1cd665b3 to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a: &{osUpdate:false kargs:false fips:false passwd:false files:true units:false kernelType:false extensions:false}
2025-08-13T20:42:24.214914824+00:00 stderr F I0813 20:42:24.214755 22232 upgrade_monitor.go:324] MCN Featuregate is not enabled. Please enable the TechPreviewNoUpgrade featureset to use MachineConfigNodes
2025-08-13T20:42:24.214914824+00:00 stderr F I0813 20:42:24.214848 22232 update.go:1135] Changes do not require drain, skipping.
2025-08-13T20:42:24.214914824+00:00 stderr F I0813 20:42:24.214909 22232 update.go:1824] Updating files
2025-08-13T20:42:24.214956025+00:00 stderr F I0813 20:42:24.214916 22232 file_writers.go:233] Writing file "/usr/local/bin/nm-clean-initrd-state.sh"
2025-08-13T20:42:24.252122577+00:00 stderr F I0813 20:42:24.251988 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/01-ipv6.conf"
2025-08-13T20:42:24.263958918+00:00 stderr F I0813 20:42:24.263917 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/20-keyfiles.conf"
2025-08-13T20:42:24.277697514+00:00 stderr F I0813 20:42:24.277629 22232 file_writers.go:233] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt"
2025-08-13T20:42:24.290403600+00:00 stderr F I0813 20:42:24.290214 22232 file_writers.go:233] Writing file "/etc/kubernetes/apiserver-url.env"
2025-08-13T20:42:24.301490240+00:00 stderr F I0813 20:42:24.301378 22232 file_writers.go:233] Writing file "/etc/audit/rules.d/mco-audit-quiet-containers.rules"
2025-08-13T20:42:24.311638183+00:00 stderr F I0813 20:42:24.311569 22232 file_writers.go:233] Writing file "/etc/tmpfiles.d/cleanup-cni.conf"
2025-08-13T20:42:24.322919988+00:00 stderr F I0813 20:42:24.322846 22232 file_writers.go:233] Writing file "/usr/local/bin/configure-ovs.sh"
2025-08-13T20:42:24.336051967+00:00 stderr F I0813 20:42:24.335914 22232 file_writers.go:233] Writing file "/etc/containers/storage.conf"
2025-08-13T20:42:24.361308765+00:00 stderr F I0813 20:42:24.361173 22232 file_writers.go:233] Writing file "/etc/mco/proxy.env"
2025-08-13T20:42:24.382915808+00:00 stderr F I0813 20:42:24.382510 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf"
2025-08-13T20:42:24.404915332+00:00 stderr F I0813 20:42:24.403312 22232 file_writers.go:233] Writing file "/etc/modules-load.d/iptables.conf"
2025-08-13T20:42:24.431580261+00:00 stderr F I0813 20:42:24.430042 22232 file_writers.go:233] Writing file "/etc/node-sizing-enabled.env"
2025-08-13T20:42:24.449065225+00:00 stderr F I0813 20:42:24.449000 22232 file_writers.go:233] Writing file "/usr/local/sbin/dynamic-system-reserved-calc.sh"
2025-08-13T20:42:24.467327221+00:00 stderr F I0813 20:42:24.467204 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf"
2025-08-13T20:42:24.479612755+00:00 stderr F I0813 20:42:24.479511 22232 file_writers.go:233] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf"
2025-08-13T20:42:24.497209363+00:00 stderr F I0813 20:42:24.494684 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/sdn.conf"
2025-08-13T20:42:24.511488174+00:00 stderr F I0813 20:42:24.511369 22232 file_writers.go:233] Writing file "/etc/nmstate/nmstate.conf"
2025-08-13T20:42:24.525574921+00:00 stderr F I0813 20:42:24.525487 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh"
2025-08-13T20:42:24.542909090+00:00 stderr F I0813 20:42:24.542836 22232 file_writers.go:233] Writing file "/var/lib/kubelet/config.json"
2025-08-13T20:42:24.555617787+00:00 stderr F I0813 20:42:24.555531 22232 file_writers.go:233] Writing file "/etc/kubernetes/ca.crt"
2025-08-13T20:42:24.571710431+00:00 stderr F I0813 20:42:24.571625 22232 file_writers.go:233] Writing file "/etc/dnsmasq.conf"
2025-08-13T20:42:24.597107223+00:00 stderr F I0813 20:42:24.596981 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/forcedns-rhel9-fix"
2025-08-13T20:42:24.612088595+00:00 stderr F I0813 20:42:24.611990 22232 file_writers.go:233] Writing file "/etc/sysctl.d/arp.conf"
2025-08-13T20:42:24.626033247+00:00 stderr F I0813 20:42:24.625139 22232 file_writers.go:233] Writing file "/etc/sysctl.d/gc-thresh.conf"
2025-08-13T20:42:24.640163794+00:00 stderr F I0813 20:42:24.640072 22232 file_writers.go:233] Writing file "/etc/sysctl.d/inotify.conf"
2025-08-13T20:42:24.654876238+00:00 stderr F I0813 20:42:24.654702 22232 file_writers.go:233] Writing file "/etc/sysctl.d/enable-userfaultfd.conf"
2025-08-13T20:42:24.678018696+00:00 stderr F I0813 20:42:24.677958 22232 file_writers.go:233] Writing file "/etc/sysctl.d/vm-max-map.conf"
2025-08-13T20:42:24.691753702+00:00 stderr F I0813 20:42:24.691701 22232 file_writers.go:233] Writing file "/usr/local/bin/mco-hostname"
2025-08-13T20:42:24.710099500+00:00 stderr F I0813 20:42:24.709966 22232 file_writers.go:233] Writing file "/usr/local/bin/recover-kubeconfig.sh"
2025-08-13T20:42:24.722011874+00:00 stderr F I0813 20:42:24.721912 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy"
2025-08-13T20:42:24.745032968+00:00 stderr F I0813 20:42:24.744917 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl"
2025-08-13T20:42:24.755827509+00:00 stderr F I0813 20:42:24.755676 22232 file_writers.go:233] Writing file "/usr/local/bin/wait-for-primary-ip.sh"
2025-08-13T20:42:24.770988076+00:00 stderr F I0813 20:42:24.770880 22232 file_writers.go:233] Writing file "/etc/containers/registries.conf"
2025-08-13T20:42:24.790473008+00:00 stderr F I0813 20:42:24.789876 22232 file_writers.go:233] Writing file "/etc/crio/crio.conf.d/00-default"
2025-08-13T20:42:24.807441977+00:00 stderr F I0813 20:42:24.807336 22232 file_writers.go:233] Writing file "/etc/containers/policy.json"
2025-08-13T20:42:24.844291519+00:00 stderr F I0813 20:42:24.844124 22232 file_writers.go:233] Writing file "/etc/kubernetes/cloud.conf"
2025-08-13T20:42:24.855415970+00:00 stderr F I0813 20:42:24.855268 22232 file_writers.go:233] Writing file "/etc/kubernetes/crio-metrics-proxy.cfg"
2025-08-13T20:42:24.866124419+00:00 stderr F I0813 20:42:24.866022 22232 file_writers.go:233] Writing file "/etc/kubernetes/manifests/criometricsproxy.yaml"
2025-08-13T20:42:24.881380209+00:00 stderr F I0813 20:42:24.879962 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet.conf"
2025-08-13T20:42:24.894569969+00:00 stderr F I0813 20:42:24.894399 22232 file_writers.go:233] Writing file "/usr/local/bin/kubenswrapper"
2025-08-13T20:42:24.908705326+00:00 stderr F I0813 20:42:24.908483 22232 file_writers.go:293] Writing systemd unit "NetworkManager-clean-initrd-state.service"
2025-08-13T20:42:24.911642411+00:00 stderr F I0813 20:42:24.911520 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T20:42:24.917033356+00:00 stderr F I0813 20:42:24.916871 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:42:24.917033356+00:00 stderr F I0813 20:42:24.916973 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-profile-unix-socket.conf"
2025-08-13T20:42:24.919952361+00:00 stderr F I0813 20:42:24.919753 22232 file_writers.go:207] Writing systemd unit dropin "05-mco-ordering.conf"
2025-08-13T20:42:24.924153292+00:00 stderr F I0813 20:42:24.923975 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T20:42:26.791141357+00:00 stderr F I0813 20:42:26.791027 22232 update.go:2118] Preset systemd unit crio.service
2025-08-13T20:42:26.791141357+00:00 stderr F I0813 20:42:26.791080 22232 file_writers.go:293] Writing systemd unit "disable-mglru.service"
2025-08-13T20:42:26.794389151+00:00 stderr F I0813 20:42:26.794326 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf"
2025-08-13T20:42:26.850068746+00:00 stderr F I0813 20:42:26.849861 22232 update.go:2155] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist.
2025-08-13T20:42:26.850068746+00:00 stderr F )
2025-08-13T20:42:26.850068746+00:00 stderr F I0813 20:42:26.849932 22232 file_writers.go:293] Writing systemd unit "firstboot-osupdate.target"
2025-08-13T20:42:26.853442923+00:00 stderr F I0813 20:42:26.853333 22232 file_writers.go:293] Writing systemd unit "kubelet-auto-node-size.service"
2025-08-13T20:42:26.859037604+00:00 stderr F I0813 20:42:26.858955 22232 file_writers.go:293] Writing systemd unit "kubelet-dependencies.target"
2025-08-13T20:42:28.576991194+00:00 stderr F I0813 20:42:28.575017 22232 update.go:2118] Preset systemd unit kubelet-dependencies.target
2025-08-13T20:42:28.576991194+00:00 stderr F I0813 20:42:28.575063 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf"
2025-08-13T20:42:28.580716541+00:00 stderr F I0813 20:42:28.580693 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write
2025-08-13T20:42:28.580854265+00:00 stderr F I0813 20:42:28.580751 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf"
2025-08-13T20:42:28.584480740+00:00 stderr F I0813 20:42:28.584458 22232 file_writers.go:293] Writing systemd unit "kubelet.service"
2025-08-13T20:42:28.593538641+00:00 stderr F I0813 20:42:28.593516 22232 file_writers.go:293] Writing systemd unit "kubens.service"
2025-08-13T20:42:28.601360256+00:00 stderr F I0813 20:42:28.599059 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-firstboot.service"
2025-08-13T20:42:28.602883280+00:00 stderr F I0813 20:42:28.602706 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-pull.service"
2025-08-13T20:42:28.611321354+00:00 stderr F I0813 20:42:28.611279 22232 file_writers.go:293] Writing systemd unit "node-valid-hostname.service"
2025-08-13T20:42:28.614991489+00:00 stderr F I0813 20:42:28.614967 22232 file_writers.go:293] Writing
systemd unit "nodeip-configuration.service" 2025-08-13T20:42:28.618508081+00:00 stderr F I0813 20:42:28.618438 22232 file_writers.go:293] Writing systemd unit "ovs-configuration.service" 2025-08-13T20:42:28.621332872+00:00 stderr F I0813 20:42:28.621310 22232 file_writers.go:207] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf" 2025-08-13T20:42:30.087391678+00:00 stderr F I0813 20:42:30.086209 22232 update.go:2118] Preset systemd unit ovs-vswitchd.service 2025-08-13T20:42:30.087510012+00:00 stderr F I0813 20:42:30.087491 22232 file_writers.go:207] Writing systemd unit dropin "10-ovsdb-restart.conf" 2025-08-13T20:42:30.093930477+00:00 stderr F I0813 20:42:30.093554 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:30.115671134+00:00 stderr F I0813 20:42:30.114339 22232 update.go:2155] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist. 2025-08-13T20:42:30.115671134+00:00 stderr F ) 2025-08-13T20:42:30.115671134+00:00 stderr F I0813 20:42:30.115519 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:30.115671134+00:00 stderr F I0813 20:42:30.115562 22232 file_writers.go:207] Writing systemd unit dropin "mco-controlplane-nice.conf" 2025-08-13T20:42:31.509335524+00:00 stderr F I0813 20:42:31.509270 22232 update.go:2118] Preset systemd unit rpm-ostreed.service 2025-08-13T20:42:31.509424616+00:00 stderr F I0813 20:42:31.509409 22232 file_writers.go:293] Writing systemd unit "wait-for-primary-ip.service" 2025-08-13T20:42:31.513537035+00:00 stderr F I0813 20:42:31.513011 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf" 2025-08-13T20:42:31.546347121+00:00 stderr F I0813 20:42:31.546288 22232 update.go:2155] Could not reset unit preset for zincati.service, skipping. 
(Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist. 2025-08-13T20:42:31.546347121+00:00 stderr F ) 2025-08-13T20:42:31.546415863+00:00 stderr F I0813 20:42:31.546400 22232 file_writers.go:293] Writing systemd unit "kubelet-cleanup.service" 2025-08-13T20:42:31.552157308+00:00 stderr F I0813 20:42:31.552085 22232 file_writers.go:293] Writing systemd unit "dummy-network.service" 2025-08-13T20:42:33.062583914+00:00 stderr F I0813 20:42:33.061640 22232 update.go:2096] Enabled systemd units: [NetworkManager-clean-initrd-state.service disable-mglru.service firstboot-osupdate.target kubelet-auto-node-size.service kubelet.service machine-config-daemon-firstboot.service machine-config-daemon-pull.service node-valid-hostname.service nodeip-configuration.service openvswitch.service ovs-configuration.service ovsdb-server.service wait-for-primary-ip.service kubelet-cleanup.service dummy-network.service] 2025-08-13T20:42:34.628042927+00:00 stderr F I0813 20:42:34.627976 22232 update.go:2107] Disabled systemd units [kubens.service] 2025-08-13T20:42:34.628143799+00:00 stderr F I0813 20:42:34.628129 22232 update.go:1887] Deleting stale data 2025-08-13T20:42:34.628275183+00:00 stderr F I0813 20:42:34.628255 22232 update.go:2293] updating the permission of the kubeconfig to: 0o600 2025-08-13T20:42:34.629196670+00:00 stderr F I0813 20:42:34.629177 22232 update.go:2259] Checking if absent users need to be disconfigured 2025-08-13T20:42:34.902454208+00:00 stderr F I0813 20:42:34.901737 22232 update.go:2284] Password has been configured 2025-08-13T20:42:34.910419238+00:00 stderr F I0813 20:42:34.910383 22232 update.go:2610] Node has Desired Config rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, skipping reboot 2025-08-13T20:42:34.974693461+00:00 stderr F I0813 20:42:34.974482 22232 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-08-13T20:42:34.984181794+00:00 stderr F I0813 20:42:34.984085 22232 
update.go:2259] Checking if absent users need to be disconfigured 2025-08-13T20:42:35.064191821+00:00 stderr F I0813 20:42:35.064034 22232 update.go:2284] Password has been configured 2025-08-13T20:42:35.064191821+00:00 stderr F I0813 20:42:35.064092 22232 update.go:1824] Updating files 2025-08-13T20:42:35.064191821+00:00 stderr F I0813 20:42:35.064105 22232 file_writers.go:233] Writing file "/usr/local/bin/nm-clean-initrd-state.sh" 2025-08-13T20:42:35.080327306+00:00 stderr F I0813 20:42:35.080203 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/01-ipv6.conf" 2025-08-13T20:42:35.092598230+00:00 stderr F I0813 20:42:35.092489 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/20-keyfiles.conf" 2025-08-13T20:42:35.103657929+00:00 stderr F I0813 20:42:35.103567 22232 file_writers.go:233] Writing file "/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt" 2025-08-13T20:42:35.120481844+00:00 stderr F I0813 20:42:35.120434 22232 file_writers.go:233] Writing file "/etc/kubernetes/apiserver-url.env" 2025-08-13T20:42:35.135935139+00:00 stderr F I0813 20:42:35.135849 22232 file_writers.go:233] Writing file "/etc/audit/rules.d/mco-audit-quiet-containers.rules" 2025-08-13T20:42:35.149484510+00:00 stderr F I0813 20:42:35.149389 22232 file_writers.go:233] Writing file "/etc/tmpfiles.d/cleanup-cni.conf" 2025-08-13T20:42:35.161465315+00:00 stderr F I0813 20:42:35.161379 22232 file_writers.go:233] Writing file "/usr/local/bin/configure-ovs.sh" 2025-08-13T20:42:35.182567244+00:00 stderr F I0813 20:42:35.182459 22232 file_writers.go:233] Writing file "/etc/containers/storage.conf" 2025-08-13T20:42:35.199287336+00:00 stderr F I0813 20:42:35.199095 22232 file_writers.go:233] Writing file "/etc/mco/proxy.env" 2025-08-13T20:42:35.213657570+00:00 stderr F I0813 20:42:35.213564 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/10-default-env-godebug.conf" 2025-08-13T20:42:35.225871972+00:00 stderr F I0813 
20:42:35.225709 22232 file_writers.go:233] Writing file "/etc/modules-load.d/iptables.conf" 2025-08-13T20:42:35.250539514+00:00 stderr F I0813 20:42:35.250122 22232 file_writers.go:233] Writing file "/etc/node-sizing-enabled.env" 2025-08-13T20:42:35.273525386+00:00 stderr F I0813 20:42:35.271163 22232 file_writers.go:233] Writing file "/usr/local/sbin/dynamic-system-reserved-calc.sh" 2025-08-13T20:42:35.301671208+00:00 stderr F I0813 20:42:35.301016 22232 file_writers.go:233] Writing file "/etc/systemd/system.conf.d/kubelet-cgroups.conf" 2025-08-13T20:42:35.328518312+00:00 stderr F I0813 20:42:35.328357 22232 file_writers.go:233] Writing file "/etc/systemd/system/kubelet.service.d/20-logging.conf" 2025-08-13T20:42:35.352840333+00:00 stderr F I0813 20:42:35.350893 22232 file_writers.go:233] Writing file "/etc/NetworkManager/conf.d/sdn.conf" 2025-08-13T20:42:35.373667023+00:00 stderr F I0813 20:42:35.371918 22232 file_writers.go:233] Writing file "/etc/nmstate/nmstate.conf" 2025-08-13T20:42:35.404121861+00:00 stderr F I0813 20:42:35.403939 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/pre-up.d/10-ofport-request.sh" 2025-08-13T20:42:35.436752152+00:00 stderr F I0813 20:42:35.436632 22232 file_writers.go:233] Writing file "/var/lib/kubelet/config.json" 2025-08-13T20:42:35.453877856+00:00 stderr F I0813 20:42:35.453638 22232 file_writers.go:233] Writing file "/etc/kubernetes/ca.crt" 2025-08-13T20:42:35.468862658+00:00 stderr F I0813 20:42:35.468286 22232 file_writers.go:233] Writing file "/etc/dnsmasq.conf" 2025-08-13T20:42:35.491322875+00:00 stderr F I0813 20:42:35.490614 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/forcedns-rhel9-fix" 2025-08-13T20:42:35.504189526+00:00 stderr F I0813 20:42:35.504055 22232 file_writers.go:233] Writing file "/etc/sysctl.d/arp.conf" 2025-08-13T20:42:35.520377993+00:00 stderr F I0813 20:42:35.518283 22232 file_writers.go:233] Writing file "/etc/sysctl.d/gc-thresh.conf" 
2025-08-13T20:42:35.539960518+00:00 stderr F I0813 20:42:35.538639 22232 file_writers.go:233] Writing file "/etc/sysctl.d/inotify.conf" 2025-08-13T20:42:35.556549106+00:00 stderr F I0813 20:42:35.556450 22232 file_writers.go:233] Writing file "/etc/sysctl.d/enable-userfaultfd.conf" 2025-08-13T20:42:35.568008026+00:00 stderr F I0813 20:42:35.567730 22232 file_writers.go:233] Writing file "/etc/sysctl.d/vm-max-map.conf" 2025-08-13T20:42:35.577947273+00:00 stderr F I0813 20:42:35.577848 22232 file_writers.go:233] Writing file "/usr/local/bin/mco-hostname" 2025-08-13T20:42:35.588734984+00:00 stderr F I0813 20:42:35.588631 22232 file_writers.go:233] Writing file "/usr/local/bin/recover-kubeconfig.sh" 2025-08-13T20:42:35.601888393+00:00 stderr F I0813 20:42:35.601827 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet-plugins/volume/exec/.dummy" 2025-08-13T20:42:35.615974709+00:00 stderr F I0813 20:42:35.615912 22232 file_writers.go:233] Writing file "/etc/NetworkManager/dispatcher.d/99-vsphere-disable-tx-udp-tnl" 2025-08-13T20:42:35.629195690+00:00 stderr F I0813 20:42:35.629134 22232 file_writers.go:233] Writing file "/usr/local/bin/wait-for-primary-ip.sh" 2025-08-13T20:42:35.651689649+00:00 stderr F I0813 20:42:35.651629 22232 file_writers.go:233] Writing file "/etc/containers/registries.conf" 2025-08-13T20:42:35.669093641+00:00 stderr F I0813 20:42:35.669030 22232 file_writers.go:233] Writing file "/etc/crio/crio.conf.d/00-default" 2025-08-13T20:42:35.682464926+00:00 stderr F I0813 20:42:35.682366 22232 file_writers.go:233] Writing file "/etc/containers/policy.json" 2025-08-13T20:42:35.696846231+00:00 stderr F I0813 20:42:35.696688 22232 file_writers.go:233] Writing file "/etc/kubernetes/cloud.conf" 2025-08-13T20:42:35.709859166+00:00 stderr F I0813 20:42:35.708958 22232 file_writers.go:233] Writing file "/etc/kubernetes/crio-metrics-proxy.cfg" 2025-08-13T20:42:35.721989996+00:00 stderr F I0813 20:42:35.721926 22232 file_writers.go:233] Writing file 
"/etc/kubernetes/manifests/criometricsproxy.yaml" 2025-08-13T20:42:35.736007530+00:00 stderr F I0813 20:42:35.735745 22232 file_writers.go:233] Writing file "/etc/kubernetes/kubelet.conf" 2025-08-13T20:42:35.753092332+00:00 stderr F I0813 20:42:35.752857 22232 file_writers.go:233] Writing file "/usr/local/bin/kubenswrapper" 2025-08-13T20:42:35.780277376+00:00 stderr F I0813 20:42:35.779309 22232 file_writers.go:293] Writing systemd unit "NetworkManager-clean-initrd-state.service" 2025-08-13T20:42:35.783961322+00:00 stderr F I0813 20:42:35.783555 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf" 2025-08-13T20:42:35.789884023+00:00 stderr F I0813 20:42:35.788276 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:35.789884023+00:00 stderr F I0813 20:42:35.788335 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-profile-unix-socket.conf" 2025-08-13T20:42:35.791366746+00:00 stderr F I0813 20:42:35.791309 22232 file_writers.go:207] Writing systemd unit dropin "05-mco-ordering.conf" 2025-08-13T20:42:35.794091574+00:00 stderr F I0813 20:42:35.794017 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf" 2025-08-13T20:42:37.702915466+00:00 stderr F I0813 20:42:37.702382 22232 update.go:2118] Preset systemd unit crio.service 2025-08-13T20:42:37.702915466+00:00 stderr F I0813 20:42:37.702519 22232 file_writers.go:293] Writing systemd unit "disable-mglru.service" 2025-08-13T20:42:37.707136597+00:00 stderr F I0813 20:42:37.706349 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf" 2025-08-13T20:42:37.893965314+00:00 stderr F I0813 20:42:37.893585 22232 update.go:2155] Could not reset unit preset for docker.socket, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file docker.socket does not exist. 
2025-08-13T20:42:37.893965314+00:00 stderr F ) 2025-08-13T20:42:37.893965314+00:00 stderr F I0813 20:42:37.893629 22232 file_writers.go:293] Writing systemd unit "firstboot-osupdate.target" 2025-08-13T20:42:37.897539277+00:00 stderr F I0813 20:42:37.897475 22232 file_writers.go:293] Writing systemd unit "kubelet-auto-node-size.service" 2025-08-13T20:42:37.901289415+00:00 stderr F I0813 20:42:37.901217 22232 file_writers.go:293] Writing systemd unit "kubelet-dependencies.target" 2025-08-13T20:42:39.316065013+00:00 stderr F I0813 20:42:39.315961 22232 update.go:2118] Preset systemd unit kubelet-dependencies.target 2025-08-13T20:42:39.316114875+00:00 stderr F I0813 20:42:39.316068 22232 file_writers.go:207] Writing systemd unit dropin "01-kubens.conf" 2025-08-13T20:42:39.321291884+00:00 stderr F I0813 20:42:39.320616 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:39.321291884+00:00 stderr F I0813 20:42:39.320721 22232 file_writers.go:207] Writing systemd unit dropin "10-mco-default-madv.conf" 2025-08-13T20:42:39.324873447+00:00 stderr F I0813 20:42:39.323944 22232 file_writers.go:293] Writing systemd unit "kubelet.service" 2025-08-13T20:42:39.327420141+00:00 stderr F I0813 20:42:39.327371 22232 file_writers.go:293] Writing systemd unit "kubens.service" 2025-08-13T20:42:39.330551491+00:00 stderr F I0813 20:42:39.330511 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-firstboot.service" 2025-08-13T20:42:39.334212687+00:00 stderr F I0813 20:42:39.334104 22232 file_writers.go:293] Writing systemd unit "machine-config-daemon-pull.service" 2025-08-13T20:42:39.337603454+00:00 stderr F I0813 20:42:39.337539 22232 file_writers.go:293] Writing systemd unit "node-valid-hostname.service" 2025-08-13T20:42:39.339667454+00:00 stderr F I0813 20:42:39.339601 22232 file_writers.go:293] Writing systemd unit "nodeip-configuration.service" 2025-08-13T20:42:39.341844787+00:00 stderr F I0813 20:42:39.341732 
22232 file_writers.go:293] Writing systemd unit "ovs-configuration.service" 2025-08-13T20:42:39.344166914+00:00 stderr F I0813 20:42:39.344100 22232 file_writers.go:207] Writing systemd unit dropin "10-ovs-vswitchd-restart.conf" 2025-08-13T20:42:41.135897899+00:00 stderr F I0813 20:42:41.135709 22232 update.go:2118] Preset systemd unit ovs-vswitchd.service 2025-08-13T20:42:41.135897899+00:00 stderr F I0813 20:42:41.135751 22232 file_writers.go:207] Writing systemd unit dropin "10-ovsdb-restart.conf" 2025-08-13T20:42:41.139032159+00:00 stderr F I0813 20:42:41.138978 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:41.405824051+00:00 stderr F W0813 20:42:41.405685 22232 daemon.go:1366] Got an error from auxiliary tools: kubelet health check has failed 1 times: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused 2025-08-13T20:42:42.159850680+00:00 stderr F I0813 20:42:42.159719 22232 update.go:2155] Could not reset unit preset for pivot.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file pivot.service does not exist. 
2025-08-13T20:42:42.159850680+00:00 stderr F ) 2025-08-13T20:42:42.159850680+00:00 stderr F I0813 20:42:42.159768 22232 file_writers.go:193] Dropin for 10-mco-default-env.conf has no content, skipping write 2025-08-13T20:42:42.159983064+00:00 stderr F I0813 20:42:42.159942 22232 file_writers.go:207] Writing systemd unit dropin "mco-controlplane-nice.conf" 2025-08-13T20:42:43.488849285+00:00 stderr F I0813 20:42:43.488711 22232 update.go:2118] Preset systemd unit rpm-ostreed.service 2025-08-13T20:42:43.488849285+00:00 stderr F I0813 20:42:43.488755 22232 file_writers.go:293] Writing systemd unit "wait-for-primary-ip.service" 2025-08-13T20:42:43.492146470+00:00 stderr F I0813 20:42:43.492068 22232 file_writers.go:207] Writing systemd unit dropin "mco-disabled.conf" 2025-08-13T20:42:44.411025122+00:00 stderr F I0813 20:42:44.410920 22232 update.go:2155] Could not reset unit preset for zincati.service, skipping. (Error msg: error running preset on unit: Failed to preset unit: Unit file zincati.service does not exist. 
2025-08-13T20:42:44.411025122+00:00 stderr F ) 2025-08-13T20:42:44.411025122+00:00 stderr F I0813 20:42:44.410979 22232 file_writers.go:293] Writing systemd unit "kubelet-cleanup.service" 2025-08-13T20:42:44.415625774+00:00 stderr F I0813 20:42:44.415545 22232 file_writers.go:293] Writing systemd unit "dummy-network.service" 2025-08-13T20:42:44.951528605+00:00 stderr F I0813 20:42:44.951462 22232 daemon.go:1302] Got SIGTERM, but actively updating
--- home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon/1.log ---
2025-08-13T19:54:10.556685779+00:00 stderr F I0813 19:54:10.556542 19129 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:54:10.557371878+00:00 stderr F I0813 19:54:10.557253 19129 update.go:2595] Running: mount --rbind /run/secrets /rootfs/run/secrets 2025-08-13T19:54:10.560882359+00:00 stderr F I0813 19:54:10.560852 19129 update.go:2595] Running: mount --rbind /usr/bin /rootfs/run/machine-config-daemon-bin 2025-08-13T19:54:10.564159432+00:00 stderr F I0813 19:54:10.564115 19129 daemon.go:513] using appropriate binary for source=rhel-9 target=rhel-9 2025-08-13T19:54:10.671602010+00:00 stderr F I0813 19:54:10.671421 19129 daemon.go:566] Invoking re-exec /run/bin/machine-config-daemon 2025-08-13T19:54:10.760550110+00:00 stderr F I0813 19:54:10.759528 19129 start.go:68] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:54:10.761339942+00:00 stderr F E0813 19:54:10.761303 19129 rpm-ostree.go:276] Merged secret file does not
exist; defaulting to cluster pull secret 2025-08-13T19:54:10.761511917+00:00 stderr F I0813 19:54:10.761490 19129 rpm-ostree.go:263] Linking ostree authfile to /var/lib/kubelet/config.json 2025-08-13T19:54:10.973048117+00:00 stderr F I0813 19:54:10.972984 19129 daemon.go:317] Booted osImageURL: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 (416.94.202406172220-0) 3aa42b3e55a31016873768eb92311a9ba07c871ac81e126e9561a61d8f9d2f24 2025-08-13T19:54:10.975268441+00:00 stderr F I0813 19:54:10.975225 19129 start.go:134] overriding kubernetes api to https://api-int.crc.testing:6443 2025-08-13T19:54:10.976896327+00:00 stderr F I0813 19:54:10.976617 19129 metrics.go:100] Registering Prometheus metrics 2025-08-13T19:54:10.978138493+00:00 stderr F I0813 19:54:10.978067 19129 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-08-13T19:54:11.001263983+00:00 stderr F I0813 19:54:11.001140 19129 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:54:11.004521666+00:00 stderr F I0813 19:54:11.004417 19129 writer.go:88] NodeWriter initialized with credentials from /var/lib/kubelet/kubeconfig 2025-08-13T19:54:11.016089966+00:00 stderr F I0813 19:54:11.015988 19129 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", 
"ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:54:11.017386113+00:00 stderr F I0813 19:54:11.016256 19129 start.go:214] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver 
DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:54:11.018157775+00:00 stderr F I0813 19:54:11.018022 19129 update.go:2610] Starting to manage node: crc 2025-08-13T19:54:11.022896291+00:00 stderr F I0813 19:54:11.022232 19129 rpm-ostree.go:308] Running captured: rpm-ostree status 2025-08-13T19:54:11.074973758+00:00 stderr F I0813 19:54:11.074899 19129 daemon.go:1727] State: idle 2025-08-13T19:54:11.074973758+00:00 stderr F Deployments: 2025-08-13T19:54:11.074973758+00:00 stderr F * ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-08-13T19:54:11.074973758+00:00 stderr F Digest: sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 2025-08-13T19:54:11.074973758+00:00 stderr F Version: 416.94.202406172220-0 (2024-06-17T22:24:17Z) 
2025-08-13T19:54:11.074973758+00:00 stderr F LocalPackages: hyperv-daemons-0-0.42.20190303git.el9.x86_64 2025-08-13T19:54:11.074973758+00:00 stderr F hyperv-daemons-license-0-0.42.20190303git.el9.noarch 2025-08-13T19:54:11.074973758+00:00 stderr F hypervfcopyd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:54:11.074973758+00:00 stderr F hypervkvpd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:54:11.074973758+00:00 stderr F hypervvssd-0-0.42.20190303git.el9.x86_64 2025-08-13T19:54:11.075414900+00:00 stderr F I0813 19:54:11.075378 19129 coreos.go:53] CoreOS aleph version: mtime=2022-08-01 23:42:11 +0000 UTC 2025-08-13T19:54:11.075414900+00:00 stderr F { 2025-08-13T19:54:11.075414900+00:00 stderr F "container-image": { 2025-08-13T19:54:11.075414900+00:00 stderr F "image-digest": "sha256:ea0cef07b0cd5ba8ff8c487324bf6a4df15fa31e69962a8e8fb7d00f1caa7f1d", 2025-08-13T19:54:11.075414900+00:00 stderr F "image-labels": { 2025-08-13T19:54:11.075414900+00:00 stderr F "containers.bootc": "1", 2025-08-13T19:54:11.075414900+00:00 stderr F "coreos-assembler.image-config-checksum": "626c78cc9e2da6ffd642d678ac6109f5532e817e107932464c8222f8d3492491", 2025-08-13T19:54:11.075414900+00:00 stderr F "coreos-assembler.image-input-checksum": "6f9238f67f75298a27a620eacd78d313fd451541fc877d3f6aa681c0f6b22811", 2025-08-13T19:54:11.075414900+00:00 stderr F "io.openshift.build.version-display-names": "machine-os=Red Hat Enterprise Linux CoreOS", 2025-08-13T19:54:11.075414900+00:00 stderr F "io.openshift.build.versions": "machine-os=416.94.202405291527-0", 2025-08-13T19:54:11.075414900+00:00 stderr F "org.opencontainers.image.revision": "b01a02cc1b92e2dacf12d9b5cddf690a2439ce64", 2025-08-13T19:54:11.075414900+00:00 stderr F "org.opencontainers.image.source": "https://github.com/openshift/os", 2025-08-13T19:54:11.075414900+00:00 stderr F "org.opencontainers.image.version": "416.94.202405291527-0", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree.bootable": "true", 
2025-08-13T19:54:11.075414900+00:00 stderr F "ostree.commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree.final-diffid": "sha256:12787d84fa137cd5649a9005efe98ec9d05ea46245fdc50aecb7dd007f2035b1", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree.linux": "5.14.0-427.18.1.el9_4.x86_64", 2025-08-13T19:54:11.075414900+00:00 stderr F "rpmostree.inputhash": "a00190b296e00061b72bcbc4ace9fb4b317e86da7d8c58471b027239035b05d6" 2025-08-13T19:54:11.075414900+00:00 stderr F }, 2025-08-13T19:54:11.075414900+00:00 stderr F "image-name": "oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive" 2025-08-13T19:54:11.075414900+00:00 stderr F }, 2025-08-13T19:54:11.075414900+00:00 stderr F "osbuild-version": "114", 2025-08-13T19:54:11.075414900+00:00 stderr F "ostree-commit": "de1e21f16c43056a1ef1999d682e171b8cfe4db701e6ae00fb12347b360552f7", 2025-08-13T19:54:11.075414900+00:00 stderr F "ref": "docker://ostree-image-signed:oci-archive:/rhcos-416.94.202405291527-0-ostree.x86_64.ociarchive", 2025-08-13T19:54:11.075414900+00:00 stderr F "version": "416.94.202405291527-0" 2025-08-13T19:54:11.075414900+00:00 stderr F } 2025-08-13T19:54:11.075551544+00:00 stderr F I0813 19:54:11.075531 19129 coreos.go:70] Ignition provisioning: time=2024-06-26T12:42:18Z 2025-08-13T19:54:11.075591915+00:00 stderr F I0813 19:54:11.075578 19129 rpm-ostree.go:308] Running captured: journalctl --list-boots 2025-08-13T19:54:11.085158798+00:00 stderr F I0813 19:54:11.085050 19129 daemon.go:1736] journalctl --list-boots: 2025-08-13T19:54:11.085158798+00:00 stderr F IDX BOOT ID FIRST ENTRY LAST ENTRY 2025-08-13T19:54:11.085158798+00:00 stderr F -2 286f1119e01c427899b4130371f705c5 Thu 2024-06-27 13:36:35 UTC Thu 2024-06-27 13:36:39 UTC 2025-08-13T19:54:11.085158798+00:00 stderr F -1 2ff245ef1efc4648b6c81a61c24bb5db Thu 2024-06-27 13:37:00 UTC Thu 2024-06-27 13:37:11 UTC 2025-08-13T19:54:11.085158798+00:00 stderr F 0 
7bac8de7aad04ed8a9adc4391f6449b7 Wed 2025-08-13 19:43:15 UTC Wed 2025-08-13 19:54:11 UTC 2025-08-13T19:54:11.085158798+00:00 stderr F I0813 19:54:11.085107 19129 rpm-ostree.go:308] Running captured: systemctl list-units --state=failed --no-legend 2025-08-13T19:54:11.096965756+00:00 stderr F I0813 19:54:11.096023 19129 daemon.go:1751] systemd service state: OK 2025-08-13T19:54:11.096965756+00:00 stderr F I0813 19:54:11.096062 19129 daemon.go:1327] Starting MachineConfigDaemon 2025-08-13T19:54:11.096965756+00:00 stderr F I0813 19:54:11.096159 19129 daemon.go:1334] Enabling Kubelet Healthz Monitor 2025-08-13T19:54:12.018003324+00:00 stderr F I0813 19:54:12.017747 19129 daemon.go:647] Node crc is part of the control plane 2025-08-13T19:54:12.039676423+00:00 stderr F I0813 19:54:12.039574 19129 daemon.go:1899] Running: /run/machine-config-daemon-bin/nmstatectl persist-nic-names --root / --kargs-out /tmp/nmstate-kargs7331210 --cleanup 2025-08-13T19:54:12.044998325+00:00 stderr F [2025-08-13T19:54:12Z INFO nmstatectl] Nmstate version: 2.2.29 2025-08-13T19:54:12.045105768+00:00 stdout F 2025-08-13T19:54:12.045116079+00:00 stderr F [2025-08-13T19:54:12Z INFO nmstatectl::persist_nic] /etc/systemd/network does not exist, no need to clean up 2025-08-13T19:54:12.053293452+00:00 stderr F I0813 19:54:12.053225 19129 daemon.go:1563] Previous boot ostree-finalize-staged.service appears successful 2025-08-13T19:54:12.053293452+00:00 stderr F I0813 19:54:12.053264 19129 daemon.go:1680] Current+desired config: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:54:12.053293452+00:00 stderr F I0813 19:54:12.053273 19129 daemon.go:1695] state: Done 2025-08-13T19:54:12.053318993+00:00 stderr F I0813 19:54:12.053307 19129 update.go:2595] Running: rpm-ostree cleanup -r 2025-08-13T19:54:12.115456337+00:00 stdout F Deployments unchanged. 
2025-08-13T19:54:12.126749860+00:00 stderr F I0813 19:54:12.126665 19129 daemon.go:2096] Validating against current config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:54:12.127261514+00:00 stderr F I0813 19:54:12.127202 19129 daemon.go:2008] SSH key location ("/home/core/.ssh/authorized_keys.d/ignition") up-to-date! 2025-08-13T19:54:12.425502300+00:00 stderr F W0813 19:54:12.425404 19129 daemon.go:2601] Unable to check manifest for matching hash: error parsing image name "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883": reading manifest sha256:eaa7835f2ec7d2513a76e30a41c21ce62ec11313fab2f8f3f46dd4999957a883 in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized 2025-08-13T19:54:12.425502300+00:00 stderr F I0813 19:54:12.425443 19129 rpm-ostree.go:308] Running captured: rpm-ostree kargs 2025-08-13T19:54:12.499133842+00:00 stderr F I0813 19:54:12.498978 19129 update.go:2610] Validated on-disk state 2025-08-13T19:54:12.504581478+00:00 stderr F I0813 19:54:12.504490 19129 daemon.go:2198] Completing update to target MachineConfig: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:54:22.534493943+00:00 stderr F I0813 19:54:22.534332 19129 update.go:2610] Update completed for config rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 and node has been successfully uncordoned 2025-08-13T19:54:22.551916871+00:00 stderr F I0813 19:54:22.551858 19129 daemon.go:2223] In desired state MachineConfig: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5 2025-08-13T19:54:22.567300330+00:00 stderr F I0813 19:54:22.567229 19129 config_drift_monitor.go:246] Config Drift Monitor started 2025-08-13T19:55:11.143020151+00:00 stderr F I0813 19:55:11.142956 19129 certificate_writer.go:288] Certificate was synced from controllerconfig resourceVersion 23037 2025-08-13T19:57:10.161621328+00:00 stderr F I0813 19:57:10.160577 19129 
daemon.go:1363] Shutting down MachineConfigDaemon
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/1.log
2025-12-13T00:11:03.553805287+00:00 stderr F W1213 00:11:03.553103 5582 deprecated.go:66] 2025-12-13T00:11:03.553805287+00:00 stderr F ==== Removed Flag Warning ====================== 2025-12-13T00:11:03.553805287+00:00 stderr F 2025-12-13T00:11:03.553805287+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-12-13T00:11:03.553805287+00:00 stderr F 2025-12-13T00:11:03.553805287+00:00 stderr F =============================================== 2025-12-13T00:11:03.553805287+00:00 stderr F 2025-12-13T00:11:03.553805287+00:00 stderr F I1213 00:11:03.553210 5582 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-12-13T00:11:03.555865883+00:00 stderr F I1213 00:11:03.555259 5582 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:11:03.555865883+00:00 stderr F I1213 00:11:03.555311 5582 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:11:03.558048912+00:00 stderr F I1213 00:11:03.556623 5582 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2025-12-13T00:11:03.558048912+00:00 stderr F I1213 00:11:03.557003 5582 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy/0.log
2025-08-13T19:50:45.604729576+00:00 stderr F W0813 19:50:45.602545 13767 deprecated.go:66] 2025-08-13T19:50:45.604729576+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:50:45.604729576+00:00 stderr F 2025-08-13T19:50:45.604729576+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-08-13T19:50:45.604729576+00:00 stderr F 2025-08-13T19:50:45.604729576+00:00 stderr F =============================================== 2025-08-13T19:50:45.604729576+00:00 stderr F 2025-08-13T19:50:45.604729576+00:00 stderr F I0813 19:50:45.602752 13767 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-08-13T19:50:45.618370216+00:00 stderr F I0813 19:50:45.617282 13767 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:50:45.618370216+00:00 stderr F I0813 19:50:45.617490 13767 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:50:45.649070553+00:00 stderr F I0813 19:50:45.647523 13767 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001 2025-08-13T19:50:45.662264789+00:00 stderr F I0813 19:50:45.660720 13767 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001 2025-08-13T20:42:46.043998571+00:00 stderr F I0813 20:42:46.043741 13767 kube-rbac-proxy.go:493] received interrupt, shutting down
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer/0.log
2025-08-13T20:07:36.638917816+00:00 stderr F I0813 20:07:36.636214 1 cmd.go:92] &{ true {false} installer true map[cert-configmaps:0xc000a0f180 cert-dir:0xc000a0f360 cert-secrets:0xc000a0f0e0 configmaps:0xc000a0ec80 namespace:0xc000a0eaa0 optional-cert-configmaps:0xc000a0f2c0 optional-cert-secrets:0xc000a0f220 optional-configmaps:0xc000a0edc0 optional-secrets:0xc000a0ed20 pod:0xc000a0eb40 pod-manifest-dir:0xc000a0ef00 resource-dir:0xc000a0ee60 revision:0xc000a0ea00 secrets:0xc000a0ebe0 v:0xc000a1e780] [0xc000a1e780 0xc000a0ea00 0xc000a0eaa0 0xc000a0eb40 0xc000a0ee60 0xc000a0ef00 0xc000a0ec80 0xc000a0edc0 0xc000a0ebe0 0xc000a0ed20 0xc000a0f360 0xc000a0f180 0xc000a0f2c0 0xc000a0f0e0 0xc000a0f220] [] map[cert-configmaps:0xc000a0f180 cert-dir:0xc000a0f360 cert-secrets:0xc000a0f0e0 configmaps:0xc000a0ec80 help:0xc000a1eb40 kubeconfig:0xc000a0e960 log-flush-frequency:0xc000a1e6e0 namespace:0xc000a0eaa0 optional-cert-configmaps:0xc000a0f2c0 optional-cert-secrets:0xc000a0f220 optional-configmaps:0xc000a0edc0 optional-secrets:0xc000a0ed20 pod:0xc000a0eb40 pod-manifest-dir:0xc000a0ef00 pod-manifests-lock-file:0xc000a0f040 resource-dir:0xc000a0ee60 revision:0xc000a0ea00 secrets:0xc000a0ebe0 timeout-duration:0xc000a0efa0 v:0xc000a1e780 vmodule:0xc000a1e820] [0xc000a0e960 0xc000a0ea00 0xc000a0eaa0 0xc000a0eb40 0xc000a0ebe0 0xc000a0ec80 0xc000a0ed20 0xc000a0edc0 0xc000a0ee60 0xc000a0ef00 0xc000a0efa0 0xc000a0f040 0xc000a0f0e0 0xc000a0f180 0xc000a0f220 0xc000a0f2c0 0xc000a0f360 0xc000a1e6e0 0xc000a1e780 0xc000a1e820 0xc000a1eb40] [0xc000a0f180 0xc000a0f360 0xc000a0f0e0 0xc000a0ec80 0xc000a1eb40 0xc000a0e960 0xc000a1e6e0 0xc000a0eaa0 
0xc000a0f2c0 0xc000a0f220 0xc000a0edc0 0xc000a0ed20 0xc000a0eb40 0xc000a0ef00 0xc000a0f040 0xc000a0ee60 0xc000a0ea00 0xc000a0ebe0 0xc000a0efa0 0xc000a1e780 0xc000a1e820] map[104:0xc000a1eb40 118:0xc000a1e780] [] -1 0 0xc000a023c0 true 0xa51380 []} 2025-08-13T20:07:36.638917816+00:00 stderr F I0813 20:07:36.636984 1 cmd.go:93] (*installerpod.InstallOptions)(0xc000a0a340)({ 2025-08-13T20:07:36.638917816+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:07:36.638917816+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:07:36.638917816+00:00 stderr F Revision: (string) (len=2) "12", 2025-08-13T20:07:36.638917816+00:00 stderr F NodeName: (string) "", 2025-08-13T20:07:36.638917816+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-apiserver", 2025-08-13T20:07:36.638917816+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-apiserver-pod", 2025-08-13T20:07:36.638917816+00:00 stderr F SecretNamePrefixes: ([]string) (len=3 cap=4) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=11) "etcd-client", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=34) "localhost-recovery-serving-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=17) "encryption-config", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "webhook-authenticator" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=18) "kube-apiserver-pod", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=37) "kube-apiserver-cert-syncer-kubeconfig", 
2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=28) "bound-sa-token-signing-certs", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=15) "etcd-serving-ca", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=18) "kubelet-serving-ca", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=22) "sa-token-signing-certs", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=29) "kube-apiserver-audit-policies" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=3 cap=4) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=14) "oauth-metadata", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=12) "cloud-config", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=24) "kube-apiserver-server-ca" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F CertSecretNames: ([]string) (len=10 cap=16) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=17) "aggregator-client", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=30) "localhost-serving-cert-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=31) "service-network-serving-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=37) "external-loadbalancer-serving-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=37) "internal-loadbalancer-serving-certkey", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=33) "bound-service-account-signing-key", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=40) "control-plane-node-admin-client-cert-key", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=31) "check-endpoints-client-cert-key", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=14) "kubelet-client", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=16) "node-kubeconfigs" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 
2025-08-13T20:07:36.638917816+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=17) "user-serving-cert", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-000", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-001", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-002", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-003", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-004", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-005", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-006", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-007", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-008", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=21) "user-serving-cert-009" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=9) "client-ca", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=29) "control-plane-node-kubeconfig", 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=26) "check-endpoints-kubeconfig" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:07:36.638917816+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2025-08-13T20:07:36.638917816+00:00 stderr F }, 2025-08-13T20:07:36.638917816+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", 
2025-08-13T20:07:36.638917816+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:07:36.638917816+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:07:36.638917816+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:07:36.638917816+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-08-13T20:07:36.638917816+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:07:36.638917816+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:07:36.638917816+00:00 stderr F }) 2025-08-13T20:07:36.647040639+00:00 stderr F I0813 20:07:36.644426 1 cmd.go:410] Getting controller reference for node crc 2025-08-13T20:07:36.676170244+00:00 stderr F I0813 20:07:36.676103 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2025-08-13T20:07:36.729144953+00:00 stderr F I0813 20:07:36.729077 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting 2025-08-13T20:07:46.742419322+00:00 stderr F I0813 20:07:46.742303 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2025-08-13T20:08:16.746174226+00:00 stderr F I0813 20:08:16.745979 1 cmd.go:521] Getting installer pods for node crc 2025-08-13T20:08:16.759085647+00:00 stderr F I0813 20:08:16.758998 1 cmd.go:539] Latest installer revision for node crc is: 12 2025-08-13T20:08:16.759085647+00:00 stderr F I0813 20:08:16.759051 1 cmd.go:428] Querying kubelet version for node crc 2025-08-13T20:08:16.766542280+00:00 stderr F I0813 20:08:16.766427 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:08:16.766638183+00:00 stderr F I0813 20:08:16.766618 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12" ... 
2025-08-13T20:08:16.767661233+00:00 stderr F I0813 20:08:16.767400 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12" ... 2025-08-13T20:08:16.767719274+00:00 stderr F I0813 20:08:16.767702 1 cmd.go:226] Getting secrets ... 2025-08-13T20:08:16.777540006+00:00 stderr F I0813 20:08:16.777422 1 copy.go:32] Got secret openshift-kube-apiserver/etcd-client-12 2025-08-13T20:08:16.787704677+00:00 stderr F I0813 20:08:16.784537 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-client-token-12 2025-08-13T20:08:16.790097306+00:00 stderr F I0813 20:08:16.790066 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-serving-certkey-12 2025-08-13T20:08:16.792622618+00:00 stderr F I0813 20:08:16.792596 1 copy.go:24] Failed to get secret openshift-kube-apiserver/encryption-config-12: secrets "encryption-config-12" not found 2025-08-13T20:08:16.798056444+00:00 stderr F I0813 20:08:16.797997 1 copy.go:32] Got secret openshift-kube-apiserver/webhook-authenticator-12 2025-08-13T20:08:16.798153447+00:00 stderr F I0813 20:08:16.798139 1 cmd.go:239] Getting config maps ... 
2025-08-13T20:08:16.807858675+00:00 stderr F I0813 20:08:16.803474 1 copy.go:60] Got configMap openshift-kube-apiserver/bound-sa-token-signing-certs-12 2025-08-13T20:08:16.830867405+00:00 stderr F I0813 20:08:16.828390 1 copy.go:60] Got configMap openshift-kube-apiserver/config-12 2025-08-13T20:08:16.837695091+00:00 stderr F I0813 20:08:16.836653 1 copy.go:60] Got configMap openshift-kube-apiserver/etcd-serving-ca-12 2025-08-13T20:08:16.953566323+00:00 stderr F I0813 20:08:16.953507 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-audit-policies-12 2025-08-13T20:08:17.152655191+00:00 stderr F I0813 20:08:17.152585 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-cert-syncer-kubeconfig-12 2025-08-13T20:08:17.356523426+00:00 stderr F I0813 20:08:17.356468 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-pod-12 2025-08-13T20:08:17.560140594+00:00 stderr F I0813 20:08:17.558391 1 copy.go:60] Got configMap openshift-kube-apiserver/kubelet-serving-ca-12 2025-08-13T20:08:17.752620232+00:00 stderr F I0813 20:08:17.752494 1 copy.go:60] Got configMap openshift-kube-apiserver/sa-token-signing-certs-12 2025-08-13T20:08:17.958951147+00:00 stderr F I0813 20:08:17.958857 1 copy.go:52] Failed to get config map openshift-kube-apiserver/cloud-config-12: configmaps "cloud-config-12" not found 2025-08-13T20:08:18.158022865+00:00 stderr F I0813 20:08:18.156610 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-server-ca-12 2025-08-13T20:08:18.352269654+00:00 stderr F I0813 20:08:18.352156 1 copy.go:60] Got configMap openshift-kube-apiserver/oauth-metadata-12 2025-08-13T20:08:18.352369817+00:00 stderr F I0813 20:08:18.352353 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/etcd-client" ... 
2025-08-13T20:08:18.353113228+00:00 stderr F I0813 20:08:18.353009 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/etcd-client/tls.crt" ... 2025-08-13T20:08:18.353340855+00:00 stderr F I0813 20:08:18.353319 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/etcd-client/tls.key" ... 2025-08-13T20:08:18.353507260+00:00 stderr F I0813 20:08:18.353487 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token" ... 2025-08-13T20:08:18.353665544+00:00 stderr F I0813 20:08:18.353647 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-08-13T20:08:18.354413516+00:00 stderr F I0813 20:08:18.353881 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:08:18.354604501+00:00 stderr F I0813 20:08:18.354582 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token/ca.crt" ... 2025-08-13T20:08:18.354759505+00:00 stderr F I0813 20:08:18.354739 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:08:18.355074254+00:00 stderr F I0813 20:08:18.354982 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-serving-certkey" ... 2025-08-13T20:08:18.355209868+00:00 stderr F I0813 20:08:18.355190 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-serving-certkey/tls.crt" ... 
2025-08-13T20:08:18.355372803+00:00 stderr F I0813 20:08:18.355353 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/localhost-recovery-serving-certkey/tls.key" ... 2025-08-13T20:08:18.356621279+00:00 stderr F I0813 20:08:18.356597 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/webhook-authenticator" ... 2025-08-13T20:08:18.356843525+00:00 stderr F I0813 20:08:18.356820 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/secrets/webhook-authenticator/kubeConfig" ... 2025-08-13T20:08:18.357036741+00:00 stderr F I0813 20:08:18.357015 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/bound-sa-token-signing-certs" ... 2025-08-13T20:08:18.357222456+00:00 stderr F I0813 20:08:18.357203 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/bound-sa-token-signing-certs/service-account-001.pub" ... 2025-08-13T20:08:18.357353840+00:00 stderr F I0813 20:08:18.357335 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/config" ... 2025-08-13T20:08:18.357935916+00:00 stderr F I0813 20:08:18.357489 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/config/config.yaml" ... 2025-08-13T20:08:18.359209643+00:00 stderr F I0813 20:08:18.359186 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/etcd-serving-ca" ... 2025-08-13T20:08:18.359448360+00:00 stderr F I0813 20:08:18.359410 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/etcd-serving-ca/ca-bundle.crt" ... 
2025-08-13T20:08:18.359598054+00:00 stderr F I0813 20:08:18.359579 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-audit-policies" ... 2025-08-13T20:08:18.359723238+00:00 stderr F I0813 20:08:18.359705 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-audit-policies/policy.yaml" ... 2025-08-13T20:08:18.359975605+00:00 stderr F I0813 20:08:18.359952 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-cert-syncer-kubeconfig" ... 2025-08-13T20:08:18.360104889+00:00 stderr F I0813 20:08:18.360086 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig" ... 2025-08-13T20:08:18.360250723+00:00 stderr F I0813 20:08:18.360232 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod" ... 2025-08-13T20:08:18.360398597+00:00 stderr F I0813 20:08:18.360378 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod/forceRedeploymentReason" ... 2025-08-13T20:08:18.360577992+00:00 stderr F I0813 20:08:18.360555 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod/kube-apiserver-startup-monitor-pod.yaml" ... 2025-08-13T20:08:18.360767418+00:00 stderr F I0813 20:08:18.360716 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod/pod.yaml" ... 2025-08-13T20:08:18.361041326+00:00 stderr F I0813 20:08:18.361018 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-pod/version" ... 
2025-08-13T20:08:18.361181070+00:00 stderr F I0813 20:08:18.361161 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kubelet-serving-ca" ... 2025-08-13T20:08:18.361530210+00:00 stderr F I0813 20:08:18.361505 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kubelet-serving-ca/ca-bundle.crt" ... 2025-08-13T20:08:18.361743146+00:00 stderr F I0813 20:08:18.361708 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/sa-token-signing-certs" ... 2025-08-13T20:08:18.361983433+00:00 stderr F I0813 20:08:18.361934 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/sa-token-signing-certs/service-account-001.pub" ... 2025-08-13T20:08:18.362161068+00:00 stderr F I0813 20:08:18.362141 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/sa-token-signing-certs/service-account-002.pub" ... 2025-08-13T20:08:18.362731884+00:00 stderr F I0813 20:08:18.362563 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/sa-token-signing-certs/service-account-003.pub" ... 2025-08-13T20:08:18.365098422+00:00 stderr F I0813 20:08:18.364128 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-server-ca" ... 2025-08-13T20:08:18.365098422+00:00 stderr F I0813 20:08:18.364307 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/kube-apiserver-server-ca/ca-bundle.crt" ... 2025-08-13T20:08:18.365098422+00:00 stderr F I0813 20:08:18.364445 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/oauth-metadata" ... 
2025-08-13T20:08:18.365098422+00:00 stderr F I0813 20:08:18.364556 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/configmaps/oauth-metadata/oauthMetadata" ... 2025-08-13T20:08:18.367237633+00:00 stderr F I0813 20:08:18.367154 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs" ... 2025-08-13T20:08:18.367237633+00:00 stderr F I0813 20:08:18.367217 1 cmd.go:226] Getting secrets ... 2025-08-13T20:08:18.553964087+00:00 stderr F I0813 20:08:18.551863 1 copy.go:32] Got secret openshift-kube-apiserver/aggregator-client 2025-08-13T20:08:18.755506325+00:00 stderr F I0813 20:08:18.755397 1 copy.go:32] Got secret openshift-kube-apiserver/bound-service-account-signing-key 2025-08-13T20:08:18.961047308+00:00 stderr F I0813 20:08:18.959393 1 copy.go:32] Got secret openshift-kube-apiserver/check-endpoints-client-cert-key 2025-08-13T20:08:19.157436069+00:00 stderr F I0813 20:08:19.157218 1 copy.go:32] Got secret openshift-kube-apiserver/control-plane-node-admin-client-cert-key 2025-08-13T20:08:19.361055187+00:00 stderr F I0813 20:08:19.360239 1 copy.go:32] Got secret openshift-kube-apiserver/external-loadbalancer-serving-certkey 2025-08-13T20:08:19.554011299+00:00 stderr F I0813 20:08:19.552970 1 copy.go:32] Got secret openshift-kube-apiserver/internal-loadbalancer-serving-certkey 2025-08-13T20:08:19.758824191+00:00 stderr F I0813 20:08:19.758068 1 copy.go:32] Got secret openshift-kube-apiserver/kubelet-client 2025-08-13T20:08:19.952841864+00:00 stderr F I0813 20:08:19.952693 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-serving-cert-certkey 2025-08-13T20:08:20.155019351+00:00 stderr F I0813 20:08:20.152283 1 copy.go:32] Got secret openshift-kube-apiserver/node-kubeconfigs 2025-08-13T20:08:20.355912051+00:00 stderr F I0813 20:08:20.354435 1 copy.go:32] Got secret openshift-kube-apiserver/service-network-serving-certkey 2025-08-13T20:08:20.557744358+00:00 stderr F 
I0813 20:08:20.557637 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert: secrets "user-serving-cert" not found 2025-08-13T20:08:20.752183112+00:00 stderr F I0813 20:08:20.752136 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-000: secrets "user-serving-cert-000" not found 2025-08-13T20:08:20.951144607+00:00 stderr F I0813 20:08:20.951043 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-001: secrets "user-serving-cert-001" not found 2025-08-13T20:08:21.153445757+00:00 stderr F I0813 20:08:21.153364 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-002: secrets "user-serving-cert-002" not found 2025-08-13T20:08:21.358305480+00:00 stderr F I0813 20:08:21.358147 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: secrets "user-serving-cert-003" not found 2025-08-13T20:08:21.552003393+00:00 stderr F I0813 20:08:21.550526 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: secrets "user-serving-cert-004" not found 2025-08-13T20:08:21.754680194+00:00 stderr F I0813 20:08:21.754592 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-005: secrets "user-serving-cert-005" not found 2025-08-13T20:08:21.952917598+00:00 stderr F I0813 20:08:21.952702 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-006: secrets "user-serving-cert-006" not found 2025-08-13T20:08:22.158927624+00:00 stderr F I0813 20:08:22.156637 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-007: secrets "user-serving-cert-007" not found 2025-08-13T20:08:22.368824392+00:00 stderr F I0813 20:08:22.365760 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-008: secrets "user-serving-cert-008" not found 2025-08-13T20:08:22.559923021+00:00 stderr F I0813 20:08:22.557941 1 copy.go:24] Failed to get secret 
openshift-kube-apiserver/user-serving-cert-009: secrets "user-serving-cert-009" not found 2025-08-13T20:08:22.559923021+00:00 stderr F I0813 20:08:22.558009 1 cmd.go:239] Getting config maps ... 2025-08-13T20:08:22.753218473+00:00 stderr F I0813 20:08:22.752977 1 copy.go:60] Got configMap openshift-kube-apiserver/aggregator-client-ca 2025-08-13T20:08:22.958543640+00:00 stderr F I0813 20:08:22.958422 1 copy.go:60] Got configMap openshift-kube-apiserver/check-endpoints-kubeconfig 2025-08-13T20:08:23.152462540+00:00 stderr F I0813 20:08:23.152406 1 copy.go:60] Got configMap openshift-kube-apiserver/client-ca 2025-08-13T20:08:23.355346357+00:00 stderr F I0813 20:08:23.355215 1 copy.go:60] Got configMap openshift-kube-apiserver/control-plane-node-kubeconfig 2025-08-13T20:08:23.633038419+00:00 stderr F I0813 20:08:23.632400 1 copy.go:60] Got configMap openshift-kube-apiserver/trusted-ca-bundle 2025-08-13T20:08:23.633038419+00:00 stderr F I0813 20:08:23.632844 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client" ... 2025-08-13T20:08:23.633354288+00:00 stderr F I0813 20:08:23.633123 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.crt" ... 2025-08-13T20:08:23.634466239+00:00 stderr F I0813 20:08:23.634378 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.key" ... 2025-08-13T20:08:23.634614964+00:00 stderr F I0813 20:08:23.634555 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key" ... 2025-08-13T20:08:23.634614964+00:00 stderr F I0813 20:08:23.634596 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.pub" ... 
2025-08-13T20:08:23.639930056+00:00 stderr F I0813 20:08:23.635714 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.key" ... 2025-08-13T20:08:23.639930056+00:00 stderr F I0813 20:08:23.636052 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key" ... 2025-08-13T20:08:23.639930056+00:00 stderr F I0813 20:08:23.636078 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.crt" ... 2025-08-13T20:08:23.643978932+00:00 stderr F I0813 20:08:23.643653 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.key" ... 2025-08-13T20:08:23.643978932+00:00 stderr F I0813 20:08:23.643958 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key" ... 2025-08-13T20:08:23.644008123+00:00 stderr F I0813 20:08:23.643993 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.key" ... 2025-08-13T20:08:23.646206346+00:00 stderr F I0813 20:08:23.646135 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.crt" ... 2025-08-13T20:08:23.646387351+00:00 stderr F I0813 20:08:23.646327 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey" ... 2025-08-13T20:08:23.646387351+00:00 stderr F I0813 20:08:23.646374 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.crt" ... 
2025-08-13T20:08:23.646686290+00:00 stderr F I0813 20:08:23.646604 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.key" ... 2025-08-13T20:08:23.653659940+00:00 stderr F I0813 20:08:23.653566 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey" ... 2025-08-13T20:08:23.653728932+00:00 stderr F I0813 20:08:23.653695 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt" ... 2025-08-13T20:08:23.656370107+00:00 stderr F I0813 20:08:23.656272 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" ... 2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.658150 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client" ... 2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.658211 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.crt" ... 2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.659295 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.key" ... 2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.659416 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey" ... 2025-08-13T20:08:23.660841906+00:00 stderr F I0813 20:08:23.659443 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.crt" ... 
2025-08-13T20:08:23.663432750+00:00 stderr F I0813 20:08:23.662231 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.key" ... 2025-08-13T20:08:23.663432750+00:00 stderr F I0813 20:08:23.662485 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs" ... 2025-08-13T20:08:23.663432750+00:00 stderr F I0813 20:08:23.662504 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-ext.kubeconfig" ... 2025-08-13T20:08:23.664350806+00:00 stderr F I0813 20:08:23.664307 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-int.kubeconfig" ... 2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.664501 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig" ... 2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.665514 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig" ... 2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.665643 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey" ... 2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.665658 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.crt" ... 2025-08-13T20:08:23.666085116+00:00 stderr F I0813 20:08:23.665954 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.key" ... 
2025-08-13T20:08:23.666230140+00:00 stderr F I0813 20:08:23.666179 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca" ... 2025-08-13T20:08:23.666230140+00:00 stderr F I0813 20:08:23.666201 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ... 2025-08-13T20:08:23.666524959+00:00 stderr F I0813 20:08:23.666397 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig" ... 2025-08-13T20:08:23.666524959+00:00 stderr F I0813 20:08:23.666438 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig/kubeconfig" ... 2025-08-13T20:08:23.675065014+00:00 stderr F I0813 20:08:23.674383 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca" ... 2025-08-13T20:08:23.727612830+00:00 stderr F I0813 20:08:23.678973 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca/ca-bundle.crt" ... 2025-08-13T20:08:23.728107704+00:00 stderr F I0813 20:08:23.728022 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig" ... 2025-08-13T20:08:23.728170316+00:00 stderr F I0813 20:08:23.728088 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig/kubeconfig" ... 2025-08-13T20:08:23.730094851+00:00 stderr F I0813 20:08:23.730033 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle" ... 
2025-08-13T20:08:23.730271116+00:00 stderr F I0813 20:08:23.730167 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ... 2025-08-13T20:08:23.730853573+00:00 stderr F I0813 20:08:23.730656 1 cmd.go:332] Getting pod configmaps/kube-apiserver-pod-12 -n openshift-kube-apiserver 2025-08-13T20:08:23.756294352+00:00 stderr F I0813 20:08:23.754194 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:08:23.756294352+00:00 stderr F I0813 20:08:23.754297 1 cmd.go:376] Writing a pod under "kube-apiserver-startup-monitor-pod.yaml" key 2025-08-13T20:08:23.756294352+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"12"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=12","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{
"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:08:23.767161554+00:00 stderr F I0813 20:08:23.767055 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/kube-apiserver-startup-monitor-pod.yaml" ... 2025-08-13T20:08:23.767422721+00:00 stderr F I0813 20:08:23.767311 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml" ... 
2025-08-13T20:08:23.767422721+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"12"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=12","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"termi
nationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:08:23.767702189+00:00 stderr F I0813 20:08:23.767602 1 cmd.go:376] Writing a pod under "kube-apiserver-pod.yaml" key 2025-08-13T20:08:23.767702189+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"12"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. 
we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. 
If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 
--permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"12"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"
name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50M 2025-08-13T20:08:23.767733110+00:00 stderr F 
i"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:08:23.768399900+00:00 stderr F I0813 20:08:23.768332 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12/kube-apiserver-pod.yaml" ... 2025-08-13T20:08:23.768798391+00:00 stderr F I0813 20:08:23.768742 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 
2025-08-13T20:08:23.768926285+00:00 stderr F I0813 20:08:23.768828 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 2025-08-13T20:08:23.768926285+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"12"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-12"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. 
we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. 
If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 
--permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"12"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"
name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"re 2025-08-13T20:08:23.768952735+00:00 stderr F 
quests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/1.log
2025-08-13T20:01:21.034056637+00:00 stderr F I0813 20:01:21.027560 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:01:21.034056637+00:00 stderr F I0813 20:01:21.027993 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:01:21.034056637+00:00 stderr F I0813 20:01:21.033219 1 observer_polling.go:159] Starting file observer 2025-08-13T20:01:21.804748812+00:00 stderr F I0813 20:01:21.804107 1 builder.go:299] console-operator version - 2025-08-13T20:01:23.971163904+00:00 stderr F I0813 20:01:23.969130 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:23.971163904+00:00 stderr F W0813 20:01:23.969729 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:23.971163904+00:00 stderr F W0813 20:01:23.969910 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.040286 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.040734 1 leaderelection.go:250] attempting to acquire leader lease openshift-console-operator/console-operator-lock... 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043251 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043292 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043447 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043467 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043492 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.043500 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:24.045353390+00:00 stderr F I0813 20:01:24.044628 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:24.108125490+00:00 stderr F I0813 20:01:24.103247 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:01:24.108125490+00:00 stderr F I0813 20:01:24.103350 1 tlsconfig.go:240] "Starting 
DynamicServingCertificateController" 2025-08-13T20:01:24.148206102+00:00 stderr F I0813 20:01:24.144496 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:24.148206102+00:00 stderr F I0813 20:01:24.144576 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:24.148206102+00:00 stderr F I0813 20:01:24.145110 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:01:31.459571568+00:00 stderr F I0813 20:01:31.458825 1 leaderelection.go:260] successfully acquired lease openshift-console-operator/console-operator-lock 2025-08-13T20:01:31.509035438+00:00 stderr F I0813 20:01:31.508679 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-console-operator", Name:"console-operator-lock", UID:"30020b87-25e8-41b0-a858-a4ef10623cf0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"30535", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' console-operator-5dbbc74dc9-cp5cd_26c3518c-99b0-4182-9f74-a6df43fe8de3 became leader 2025-08-13T20:01:31.661391592+00:00 stderr F I0813 20:01:31.656860 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:01:31.683595846+00:00 stderr F I0813 20:01:31.679742 1 starter.go:206] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider 
ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:01:31.683595846+00:00 stderr F I0813 20:01:31.680373 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", 
"VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:01:36.909432847+00:00 stderr F I0813 20:01:36.897612 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:01:36.909432847+00:00 stderr F I0813 20:01:36.901422 1 base_controller.go:67] Waiting for caches to sync for ConsoleCLIDownloadsController 2025-08-13T20:01:36.992913177+00:00 stderr F I0813 20:01:36.992598 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.992895 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.992994 1 base_controller.go:67] Waiting for caches to sync 
for ManagementStateController 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.993019 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.993031 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2025-08-13T20:01:36.993120243+00:00 stderr F I0813 20:01:36.993048 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2025-08-13T20:01:36.993626828+00:00 stderr F I0813 20:01:36.993059 1 base_controller.go:67] Waiting for caches to sync for DownloadsRouteController 2025-08-13T20:01:36.993626828+00:00 stderr F I0813 20:01:36.993200 1 base_controller.go:67] Waiting for caches to sync for ConsoleOperator 2025-08-13T20:01:36.993643118+00:00 stderr F I0813 20:01:36.993223 1 base_controller.go:67] Waiting for caches to sync for InformerWithSwitchController 2025-08-13T20:01:36.993643118+00:00 stderr F I0813 20:01:36.993637 1 base_controller.go:73] Caches are synced for InformerWithSwitchController 2025-08-13T20:01:36.993733931+00:00 stderr F I0813 20:01:36.993692 1 base_controller.go:110] Starting #1 worker of InformerWithSwitchController controller ... 
2025-08-13T20:01:36.994068770+00:00 stderr F I0813 20:01:36.993365 1 base_controller.go:67] Waiting for caches to sync for ConsoleRouteController 2025-08-13T20:01:36.994091761+00:00 stderr F I0813 20:01:36.993446 1 base_controller.go:67] Waiting for caches to sync for CLIOIDCClientStatusController 2025-08-13T20:01:36.994091761+00:00 stderr F I0813 20:01:36.993461 1 base_controller.go:67] Waiting for caches to sync for ConsoleDownloadsDeploymentSyncController 2025-08-13T20:01:36.994091761+00:00 stderr F I0813 20:01:36.993471 1 base_controller.go:67] Waiting for caches to sync for HealthCheckController 2025-08-13T20:01:36.994104791+00:00 stderr F I0813 20:01:36.993479 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2025-08-13T20:01:36.994104791+00:00 stderr F I0813 20:01:36.993486 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2025-08-13T20:01:36.994115792+00:00 stderr F I0813 20:01:36.993494 1 base_controller.go:67] Waiting for caches to sync for OAuthClientSecretController 2025-08-13T20:01:36.994126842+00:00 stderr F I0813 20:01:36.993498 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_console 2025-08-13T20:01:36.994126842+00:00 stderr F I0813 20:01:36.993505 1 base_controller.go:67] Waiting for caches to sync for OIDCSetupController 2025-08-13T20:01:36.994137602+00:00 stderr F I0813 20:01:36.993513 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2025-08-13T20:01:36.994971946+00:00 stderr F I0813 20:01:36.994439 1 base_controller.go:67] Waiting for caches to sync for ClusterUpgradeNotificationController 2025-08-13T20:01:37.000550155+00:00 stderr F I0813 20:01:36.996683 1 base_controller.go:73] Caches are synced for ClusterUpgradeNotificationController 2025-08-13T20:01:37.000550155+00:00 stderr F I0813 20:01:36.998209 1 base_controller.go:110] Starting #1 worker of ClusterUpgradeNotificationController controller ... 
2025-08-13T20:01:37.000550155+00:00 stderr F E0813 20:01:36.999530 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T20:01:37.019403323+00:00 stderr F W0813 20:01:37.018234 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:37.019403323+00:00 stderr F E0813 20:01:37.018301 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:37.098673342+00:00 stderr F I0813 20:01:37.093008 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:01:37.098673342+00:00 stderr F I0813 20:01:37.098659 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:01:37.098709193+00:00 stderr F I0813 20:01:37.093245 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2025-08-13T20:01:37.098709193+00:00 stderr F I0813 20:01:37.098683 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 2025-08-13T20:01:37.098709193+00:00 stderr F I0813 20:01:37.093279 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-08-13T20:01:37.098721753+00:00 stderr F I0813 20:01:37.098709 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 2025-08-13T20:01:37.098858247+00:00 stderr F I0813 20:01:37.093289 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:01:37.098858247+00:00 stderr F I0813 20:01:37.098766 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 
2025-08-13T20:01:37.098858247+00:00 stderr F I0813 20:01:37.093302 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2025-08-13T20:01:37.098878058+00:00 stderr F I0813 20:01:37.098858 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 2025-08-13T20:01:37.110930882+00:00 stderr F I0813 20:01:37.110829 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2025-08-13T20:01:37.111337563+00:00 stderr F I0813 20:01:37.111260 1 base_controller.go:73] Caches are synced for CLIOIDCClientStatusController 2025-08-13T20:01:37.111337563+00:00 stderr F I0813 20:01:37.111296 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111416 1 base_controller.go:73] Caches are synced for StatusSyncer_console 2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111304 1 base_controller.go:110] Starting #1 worker of CLIOIDCClientStatusController controller ... 2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111447 1 base_controller.go:110] Starting #1 worker of StatusSyncer_console controller ... 2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111329 1 base_controller.go:73] Caches are synced for ConsoleDownloadsDeploymentSyncController 2025-08-13T20:01:37.111501118+00:00 stderr F I0813 20:01:37.111483 1 base_controller.go:110] Starting #1 worker of ConsoleDownloadsDeploymentSyncController controller ... 2025-08-13T20:01:37.112073594+00:00 stderr F I0813 20:01:37.111337 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2025-08-13T20:01:37.112073594+00:00 stderr F I0813 20:01:37.112043 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 
2025-08-13T20:01:37.112248559+00:00 stderr F I0813 20:01:37.111345 1 base_controller.go:73] Caches are synced for OAuthClientSecretController 2025-08-13T20:01:37.112297170+00:00 stderr F I0813 20:01:37.112281 1 base_controller.go:110] Starting #1 worker of OAuthClientSecretController controller ... 2025-08-13T20:01:37.115871872+00:00 stderr F I0813 20:01:37.115684 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:01:37.115871872+00:00 stderr F I0813 20:01:37.115732 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:01:37.139159046+00:00 stderr F I0813 20:01:37.138982 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:37.157168150+00:00 stderr F I0813 20:01:37.156393 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:37.184583552+00:00 stderr F I0813 20:01:37.184437 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:01:37.193553177+00:00 stderr F I0813 20:01:37.193494 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:01:37.193633830+00:00 stderr F I0813 20:01:37.193619 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:01:37.198707084+00:00 stderr F I0813 20:01:37.198596 1 base_controller.go:73] Caches are synced for OIDCSetupController 2025-08-13T20:01:37.198707084+00:00 stderr F I0813 20:01:37.198656 1 base_controller.go:110] Starting #1 worker of OIDCSetupController controller ... 
2025-08-13T20:01:38.553891696+00:00 stderr F W0813 20:01:38.553477 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:38.553891696+00:00 stderr F E0813 20:01:38.553542 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:40.636470078+00:00 stderr F W0813 20:01:40.635957 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:40.636470078+00:00 stderr F E0813 20:01:40.636399 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:46.988630633+00:00 stderr F W0813 20:01:46.987604 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:46.988681465+00:00 stderr F E0813 20:01:46.988648 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:51.227743396+00:00 stderr F I0813 20:01:51.224273 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is 
currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:55.704917637+00:00 stderr F W0813 20:01:55.704384 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 
2025-08-13T20:01:55.704917637+00:00 stderr F E0813 20:01:55.704888 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:56.189227326+00:00 stderr F I0813 20:01:56.178481 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available" 2025-08-13T20:02:10.877413274+00:00 stderr F W0813 20:02:10.876711 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:02:10.877413274+00:00 stderr F E0813 20:02:10.877391 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:02:35.304614522+00:00 stderr F E0813 20:02:35.303652 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.001196281+00:00 stderr F E0813 20:02:37.001089 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete 
"https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.006719239+00:00 stderr F I0813 20:02:37.006649 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.006719239+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.006719239+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.006719239+00:00 stderr F ... // 43 identical elements 2025-08-13T20:02:37.006719239+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:37.006719239+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:37.006719239+00:00 stderr F - { 2025-08-13T20:02:37.006719239+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.006719239+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.006719239+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:37.006719239+00:00 stderr F - }, 2025-08-13T20:02:37.006719239+00:00 stderr F + { 2025-08-13T20:02:37.006719239+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.006719239+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.006719239+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.001171681 +0000 UTC m=+76.206270985", 2025-08-13T20:02:37.006719239+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:37.006719239+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:37.006719239+00:00 stderr F + }, 2025-08-13T20:02:37.006719239+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 
2025-08-13T20:02:37.006719239+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.006719239+00:00 stderr F ... // 14 identical elements 2025-08-13T20:02:37.006719239+00:00 stderr F }, 2025-08-13T20:02:37.006719239+00:00 stderr F Version: "", 2025-08-13T20:02:37.006719239+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.006719239+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.006719239+00:00 stderr F } 2025-08-13T20:02:37.008377726+00:00 stderr F E0813 20:02:37.008348 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.015011335+00:00 stderr F E0813 20:02:37.014935 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.017540448+00:00 stderr F I0813 20:02:37.017287 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.017540448+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.017540448+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.017540448+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:37.017540448+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:37.017540448+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:37.017540448+00:00 stderr F - { 2025-08-13T20:02:37.017540448+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.017540448+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.017540448+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:37.017540448+00:00 stderr F - }, 2025-08-13T20:02:37.017540448+00:00 stderr F + { 2025-08-13T20:02:37.017540448+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.017540448+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.017540448+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.014994335 +0000 UTC m=+76.220093699", 2025-08-13T20:02:37.017540448+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:37.017540448+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:37.017540448+00:00 stderr F + }, 2025-08-13T20:02:37.017540448+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:37.017540448+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.017540448+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:37.017540448+00:00 stderr F }, 2025-08-13T20:02:37.017540448+00:00 stderr F Version: "", 2025-08-13T20:02:37.017540448+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.017540448+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.017540448+00:00 stderr F } 2025-08-13T20:02:37.018674320+00:00 stderr F E0813 20:02:37.018650 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.030345693+00:00 stderr F E0813 20:02:37.030317 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.032925026+00:00 stderr F I0813 20:02:37.032590 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.032925026+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.032925026+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.032925026+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:37.032925026+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:37.032925026+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:37.032925026+00:00 stderr F - { 2025-08-13T20:02:37.032925026+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.032925026+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.032925026+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:37.032925026+00:00 stderr F - }, 2025-08-13T20:02:37.032925026+00:00 stderr F + { 2025-08-13T20:02:37.032925026+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.032925026+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.032925026+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.030398664 +0000 UTC m=+76.235497818", 2025-08-13T20:02:37.032925026+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:37.032925026+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:37.032925026+00:00 stderr F + }, 2025-08-13T20:02:37.032925026+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:37.032925026+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.032925026+00:00 stderr F ... 
// 14 identical elements
2025-08-13T20:02:37.032925026+00:00 stderr F },
2025-08-13T20:02:37.032925026+00:00 stderr F Version: "",
2025-08-13T20:02:37.032925026+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:37.032925026+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.032925026+00:00 stderr F }
2025-08-13T20:02:37.035612393+00:00 stderr F E0813 20:02:37.035519 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.056986483+00:00 stderr F E0813 20:02:37.056954 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.059095323+00:00 stderr F I0813 20:02:37.059069 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.059095323+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:37.059095323+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.059095323+00:00 stderr F ...
// 43 identical elements
2025-08-13T20:02:37.059095323+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:02:37.059095323+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:02:37.059095323+00:00 stderr F - {
2025-08-13T20:02:37.059095323+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.059095323+00:00 stderr F - Status: "False",
2025-08-13T20:02:37.059095323+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:02:37.059095323+00:00 stderr F - },
2025-08-13T20:02:37.059095323+00:00 stderr F + {
2025-08-13T20:02:37.059095323+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.059095323+00:00 stderr F + Status: "True",
2025-08-13T20:02:37.059095323+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.057040674 +0000 UTC m=+76.262139848",
2025-08-13T20:02:37.059095323+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:02:37.059095323+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:02:37.059095323+00:00 stderr F + },
2025-08-13T20:02:37.059095323+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:02:37.059095323+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:37.059095323+00:00 stderr F ...
// 14 identical elements
2025-08-13T20:02:37.059095323+00:00 stderr F },
2025-08-13T20:02:37.059095323+00:00 stderr F Version: "",
2025-08-13T20:02:37.059095323+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:37.059095323+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.059095323+00:00 stderr F }
2025-08-13T20:02:37.060330618+00:00 stderr F E0813 20:02:37.060220 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.103134429+00:00 stderr F E0813 20:02:37.103081 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.106226368+00:00 stderr F I0813 20:02:37.106197 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.106226368+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:37.106226368+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.106226368+00:00 stderr F ...
// 43 identical elements
2025-08-13T20:02:37.106226368+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:02:37.106226368+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:02:37.106226368+00:00 stderr F - {
2025-08-13T20:02:37.106226368+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.106226368+00:00 stderr F - Status: "False",
2025-08-13T20:02:37.106226368+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:02:37.106226368+00:00 stderr F - },
2025-08-13T20:02:37.106226368+00:00 stderr F + {
2025-08-13T20:02:37.106226368+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:37.106226368+00:00 stderr F + Status: "True",
2025-08-13T20:02:37.106226368+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.103221472 +0000 UTC m=+76.308320556",
2025-08-13T20:02:37.106226368+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:02:37.106226368+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:02:37.106226368+00:00 stderr F + },
2025-08-13T20:02:37.106226368+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:02:37.106226368+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:37.106226368+00:00 stderr F ...
// 14 identical elements
2025-08-13T20:02:37.106226368+00:00 stderr F },
2025-08-13T20:02:37.106226368+00:00 stderr F Version: "",
2025-08-13T20:02:37.106226368+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:37.106226368+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.106226368+00:00 stderr F }
2025-08-13T20:02:37.107951347+00:00 stderr F E0813 20:02:37.107861 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.121601696+00:00 stderr F E0813 20:02:37.121368 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.124528880+00:00 stderr F I0813 20:02:37.124409 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.124528880+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:37.124528880+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.124528880+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:02:37.124528880+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:37.124528880+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:37.124528880+00:00 stderr F - {
2025-08-13T20:02:37.124528880+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:37.124528880+00:00 stderr F - Status: "False",
2025-08-13T20:02:37.124528880+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:37.124528880+00:00 stderr F - },
2025-08-13T20:02:37.124528880+00:00 stderr F + {
2025-08-13T20:02:37.124528880+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:37.124528880+00:00 stderr F + Status: "True",
2025-08-13T20:02:37.124528880+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.121440242 +0000 UTC m=+76.326539426",
2025-08-13T20:02:37.124528880+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:37.124528880+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:02:37.124528880+00:00 stderr F + },
2025-08-13T20:02:37.124528880+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:37.124528880+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:37.124528880+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:02:37.124528880+00:00 stderr F },
2025-08-13T20:02:37.124528880+00:00 stderr F Version: "",
2025-08-13T20:02:37.124528880+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:37.124528880+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.124528880+00:00 stderr F }
2025-08-13T20:02:37.126343221+00:00 stderr F E0813 20:02:37.126256 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.126520716+00:00 stderr F E0813 20:02:37.126442 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.129926634+00:00 stderr F E0813 20:02:37.129762 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.130551841+00:00 stderr F E0813 20:02:37.130431 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.132014023+00:00 stderr F I0813 20:02:37.131960 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.132014023+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:37.132014023+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.132014023+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:02:37.132014023+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:37.132014023+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:02:37.132014023+00:00 stderr F - {
2025-08-13T20:02:37.132014023+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:37.132014023+00:00 stderr F - Status: "False",
2025-08-13T20:02:37.132014023+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:02:37.132014023+00:00 stderr F - },
2025-08-13T20:02:37.132014023+00:00 stderr F + {
2025-08-13T20:02:37.132014023+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:37.132014023+00:00 stderr F + Status: "True",
2025-08-13T20:02:37.132014023+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.126492286 +0000 UTC m=+76.331591490",
2025-08-13T20:02:37.132014023+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:37.132014023+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:02:37.132014023+00:00 stderr F + },
2025-08-13T20:02:37.132014023+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:37.132014023+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:02:37.132014023+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:02:37.132014023+00:00 stderr F },
2025-08-13T20:02:37.132014023+00:00 stderr F Version: "",
2025-08-13T20:02:37.132014023+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:37.132014023+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.132014023+00:00 stderr F }
2025-08-13T20:02:37.133736572+00:00 stderr F I0813 20:02:37.133640 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.133736572+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:37.133736572+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.133736572+00:00 stderr F ... // 19 identical elements
2025-08-13T20:02:37.133736572+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:37.133736572+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:37.133736572+00:00 stderr F - {
2025-08-13T20:02:37.133736572+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:37.133736572+00:00 stderr F - Status: "False",
2025-08-13T20:02:37.133736572+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:37.133736572+00:00 stderr F - },
2025-08-13T20:02:37.133736572+00:00 stderr F + {
2025-08-13T20:02:37.133736572+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:37.133736572+00:00 stderr F + Status: "True",
2025-08-13T20:02:37.133736572+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.130450949 +0000 UTC m=+76.335550003",
2025-08-13T20:02:37.133736572+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:37.133736572+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:02:37.133736572+00:00 stderr F + },
2025-08-13T20:02:37.133736572+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:37.133736572+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:37.133736572+00:00 stderr F ... // 38 identical elements
2025-08-13T20:02:37.133736572+00:00 stderr F },
2025-08-13T20:02:37.133736572+00:00 stderr F Version: "",
2025-08-13T20:02:37.133736572+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:37.133736572+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.133736572+00:00 stderr F }
2025-08-13T20:02:37.134305229+00:00 stderr F E0813 20:02:37.134188 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.134900546+00:00 stderr F E0813 20:02:37.134697 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.135542744+00:00 stderr F I0813 20:02:37.135463 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.135542744+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:37.135542744+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.135542744+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:37.135542744+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:37.135542744+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:37.135542744+00:00 stderr F - {
2025-08-13T20:02:37.135542744+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:37.135542744+00:00 stderr F - Status: "False",
2025-08-13T20:02:37.135542744+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:37.135542744+00:00 stderr F - },
2025-08-13T20:02:37.135542744+00:00 stderr F + {
2025-08-13T20:02:37.135542744+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:37.135542744+00:00 stderr F + Status: "True",
2025-08-13T20:02:37.135542744+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.126359392 +0000 UTC m=+76.331458536",
2025-08-13T20:02:37.135542744+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:37.135542744+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:02:37.135542744+00:00 stderr F + },
2025-08-13T20:02:37.135542744+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:37.135542744+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:37.135542744+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:37.135542744+00:00 stderr F },
2025-08-13T20:02:37.135542744+00:00 stderr F Version: "",
2025-08-13T20:02:37.135542744+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:37.135542744+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.135542744+00:00 stderr F }
2025-08-13T20:02:37.136386588+00:00 stderr F I0813 20:02:37.136322 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.136386588+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:37.136386588+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.136386588+00:00 stderr F ... // 10 identical elements
2025-08-13T20:02:37.136386588+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:37.136386588+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:37.136386588+00:00 stderr F - {
2025-08-13T20:02:37.136386588+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:37.136386588+00:00 stderr F - Status: "False",
2025-08-13T20:02:37.136386588+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:37.136386588+00:00 stderr F - },
2025-08-13T20:02:37.136386588+00:00 stderr F + {
2025-08-13T20:02:37.136386588+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:37.136386588+00:00 stderr F + Status: "True",
2025-08-13T20:02:37.136386588+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.134232746 +0000 UTC m=+76.339331950",
2025-08-13T20:02:37.136386588+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:37.136386588+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:02:37.136386588+00:00 stderr F + },
2025-08-13T20:02:37.136386588+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:37.136386588+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:37.136386588+00:00 stderr F ... // 47 identical elements
2025-08-13T20:02:37.136386588+00:00 stderr F },
2025-08-13T20:02:37.136386588+00:00 stderr F Version: "",
2025-08-13T20:02:37.136386588+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:37.136386588+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.136386588+00:00 stderr F }
2025-08-13T20:02:37.136681286+00:00 stderr F E0813 20:02:37.136602 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.136954614+00:00 stderr F E0813 20:02:37.136764 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.137758157+00:00 stderr F E0813 20:02:37.137668 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.138021025+00:00 stderr F E0813 20:02:37.137945 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.138645062+00:00 stderr F I0813 20:02:37.138562 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.138645062+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:37.138645062+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.138645062+00:00 stderr F ... // 10 identical elements
2025-08-13T20:02:37.138645062+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:37.138645062+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:37.138645062+00:00 stderr F - {
2025-08-13T20:02:37.138645062+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:37.138645062+00:00 stderr F - Status: "False",
2025-08-13T20:02:37.138645062+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:37.138645062+00:00 stderr F - },
2025-08-13T20:02:37.138645062+00:00 stderr F + {
2025-08-13T20:02:37.138645062+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:37.138645062+00:00 stderr F + Status: "True",
2025-08-13T20:02:37.138645062+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.136946624 +0000 UTC m=+76.342045848",
2025-08-13T20:02:37.138645062+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:37.138645062+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:02:37.138645062+00:00 stderr F + },
2025-08-13T20:02:37.138645062+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:37.138645062+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:37.138645062+00:00 stderr F ... // 47 identical elements
2025-08-13T20:02:37.138645062+00:00 stderr F },
2025-08-13T20:02:37.138645062+00:00 stderr F Version: "",
2025-08-13T20:02:37.138645062+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:37.138645062+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.138645062+00:00 stderr F }
2025-08-13T20:02:37.143150611+00:00 stderr F E0813 20:02:37.143051 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.144031406+00:00 stderr F E0813 20:02:37.143962 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.144376696+00:00 stderr F E0813 20:02:37.144241 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.145581960+00:00 stderr F I0813 20:02:37.145476 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.145581960+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:37.145581960+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.145581960+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:37.145581960+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:37.145581960+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:37.145581960+00:00 stderr F - {
2025-08-13T20:02:37.145581960+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:37.145581960+00:00 stderr F - Status: "False",
2025-08-13T20:02:37.145581960+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:37.145581960+00:00 stderr F - },
2025-08-13T20:02:37.145581960+00:00 stderr F + {
2025-08-13T20:02:37.145581960+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:37.145581960+00:00 stderr F + Status: "True",
2025-08-13T20:02:37.145581960+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.143086079 +0000 UTC m=+76.348185273",
2025-08-13T20:02:37.145581960+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:37.145581960+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:02:37.145581960+00:00 stderr F + },
2025-08-13T20:02:37.145581960+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:37.145581960+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:37.145581960+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:37.145581960+00:00 stderr F },
2025-08-13T20:02:37.145581960+00:00 stderr F Version: "",
2025-08-13T20:02:37.145581960+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:37.145581960+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.145581960+00:00 stderr F }
2025-08-13T20:02:37.146268710+00:00 stderr F I0813 20:02:37.146151 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.146268710+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:37.146268710+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.146268710+00:00 stderr F ... // 2 identical elements
2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:02:37.146268710+00:00 stderr F - {
2025-08-13T20:02:37.146268710+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:37.146268710+00:00 stderr F - Status: "False",
2025-08-13T20:02:37.146268710+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:02:37.146268710+00:00 stderr F - },
2025-08-13T20:02:37.146268710+00:00 stderr F + {
2025-08-13T20:02:37.146268710+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:37.146268710+00:00 stderr F + Status: "True",
2025-08-13T20:02:37.146268710+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.144033216 +0000 UTC m=+76.349132530",
2025-08-13T20:02:37.146268710+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:37.146268710+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:02:37.146268710+00:00 stderr F + },
2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:02:37.146268710+00:00 stderr F ... // 55 identical elements
2025-08-13T20:02:37.146268710+00:00 stderr F },
2025-08-13T20:02:37.146268710+00:00 stderr F Version: "",
2025-08-13T20:02:37.146268710+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:37.146268710+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:37.146268710+00:00 stderr F }
2025-08-13T20:02:37.146268710+00:00 stderr F I0813 20:02:37.146196 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:37.146268710+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:37.146268710+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:37.146268710+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:37.146268710+00:00 stderr F - {
2025-08-13T20:02:37.146268710+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:37.146268710+00:00 stderr F - Status: "False",
2025-08-13T20:02:37.146268710+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:37.146268710+00:00 stderr F - },
2025-08-13T20:02:37.146268710+00:00 stderr F + {
2025-08-13T20:02:37.146268710+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:37.146268710+00:00 stderr F + Status: "True",
2025-08-13T20:02:37.146268710+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.144273173 +0000 UTC m=+76.349372367",
2025-08-13T20:02:37.146268710+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:37.146268710+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:02:37.146268710+00:00 stderr F + },
2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:37.146268710+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:37.146268710+00:00 stderr F ...
// 38 identical elements 2025-08-13T20:02:37.146268710+00:00 stderr F }, 2025-08-13T20:02:37.146268710+00:00 stderr F Version: "", 2025-08-13T20:02:37.146268710+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.146268710+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.146268710+00:00 stderr F } 2025-08-13T20:02:37.147033832+00:00 stderr F E0813 20:02:37.146954 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.150736087+00:00 stderr F I0813 20:02:37.150114 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.150736087+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.150736087+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.150736087+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:37.150736087+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:37.150736087+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.150736087+00:00 stderr F - { 2025-08-13T20:02:37.150736087+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.150736087+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.150736087+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:37.150736087+00:00 stderr F - }, 2025-08-13T20:02:37.150736087+00:00 stderr F + { 2025-08-13T20:02:37.150736087+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.150736087+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.150736087+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.147029161 +0000 UTC m=+76.352128676", 2025-08-13T20:02:37.150736087+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.150736087+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:37.150736087+00:00 stderr F + }, 2025-08-13T20:02:37.150736087+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:37.150736087+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.150736087+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:37.150736087+00:00 stderr F }, 2025-08-13T20:02:37.150736087+00:00 stderr F Version: "", 2025-08-13T20:02:37.150736087+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.150736087+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.150736087+00:00 stderr F } 2025-08-13T20:02:37.189472952+00:00 stderr F E0813 20:02:37.189377 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.192793477+00:00 stderr F I0813 20:02:37.192709 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.192793477+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.192793477+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.192793477+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:37.192793477+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:37.192793477+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:37.192793477+00:00 stderr F - { 2025-08-13T20:02:37.192793477+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.192793477+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.192793477+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:37.192793477+00:00 stderr F - }, 2025-08-13T20:02:37.192793477+00:00 stderr F + { 2025-08-13T20:02:37.192793477+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:37.192793477+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.192793477+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.189460582 +0000 UTC m=+76.394559916", 2025-08-13T20:02:37.192793477+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:37.192793477+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:37.192793477+00:00 stderr F + }, 2025-08-13T20:02:37.192793477+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:37.192793477+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.192793477+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:37.192793477+00:00 stderr F }, 2025-08-13T20:02:37.192793477+00:00 stderr F Version: "", 2025-08-13T20:02:37.192793477+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.192793477+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.192793477+00:00 stderr F } 2025-08-13T20:02:37.211320526+00:00 stderr F E0813 20:02:37.211235 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.223465952+00:00 stderr F E0813 20:02:37.223395 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.225205742+00:00 stderr F I0813 20:02:37.225143 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.225205742+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.225205742+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.225205742+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:37.225205742+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:37.225205742+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:37.225205742+00:00 stderr F - { 2025-08-13T20:02:37.225205742+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.225205742+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.225205742+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:37.225205742+00:00 stderr F - }, 2025-08-13T20:02:37.225205742+00:00 stderr F + { 2025-08-13T20:02:37.225205742+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:37.225205742+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.225205742+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.223457082 +0000 UTC m=+76.428556296", 2025-08-13T20:02:37.225205742+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.225205742+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:37.225205742+00:00 stderr F + }, 2025-08-13T20:02:37.225205742+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:37.225205742+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.225205742+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:37.225205742+00:00 stderr F }, 2025-08-13T20:02:37.225205742+00:00 stderr F Version: "", 2025-08-13T20:02:37.225205742+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.225205742+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.225205742+00:00 stderr F } 2025-08-13T20:02:37.410190499+00:00 stderr F E0813 20:02:37.410003 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.422460029+00:00 stderr F E0813 20:02:37.421621 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.428228663+00:00 stderr F I0813 20:02:37.427543 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.428228663+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.428228663+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.428228663+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.428228663+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.428228663+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.428228663+00:00 stderr F - { 2025-08-13T20:02:37.428228663+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.428228663+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.428228663+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.428228663+00:00 stderr F - }, 2025-08-13T20:02:37.428228663+00:00 stderr F + { 2025-08-13T20:02:37.428228663+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.428228663+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.428228663+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.422604473 +0000 UTC m=+76.627703867", 2025-08-13T20:02:37.428228663+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.428228663+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:37.428228663+00:00 stderr F + }, 2025-08-13T20:02:37.428228663+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.428228663+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.428228663+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:37.428228663+00:00 stderr F }, 2025-08-13T20:02:37.428228663+00:00 stderr F Version: "", 2025-08-13T20:02:37.428228663+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.428228663+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.428228663+00:00 stderr F } 2025-08-13T20:02:37.610046740+00:00 stderr F E0813 20:02:37.609937 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.633734006+00:00 stderr F E0813 20:02:37.633628 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.635942429+00:00 stderr F I0813 20:02:37.635888 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.635942429+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.635942429+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.635942429+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:37.635942429+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:37.635942429+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:37.635942429+00:00 stderr F - { 2025-08-13T20:02:37.635942429+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:37.635942429+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.635942429+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:37.635942429+00:00 stderr F - }, 2025-08-13T20:02:37.635942429+00:00 stderr F + { 2025-08-13T20:02:37.635942429+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:37.635942429+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.635942429+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.633691365 +0000 UTC m=+76.838790539", 2025-08-13T20:02:37.635942429+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.635942429+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:37.635942429+00:00 stderr F + }, 2025-08-13T20:02:37.635942429+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:37.635942429+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:37.635942429+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:02:37.635942429+00:00 stderr F }, 2025-08-13T20:02:37.635942429+00:00 stderr F Version: "", 2025-08-13T20:02:37.635942429+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.635942429+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.635942429+00:00 stderr F } 2025-08-13T20:02:37.809628154+00:00 stderr F E0813 20:02:37.809489 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.821830842+00:00 stderr F E0813 20:02:37.821736 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.824464047+00:00 stderr F I0813 20:02:37.824383 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:37.824464047+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:37.824464047+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:37.824464047+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:37.824464047+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:37.824464047+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:37.824464047+00:00 stderr F - { 2025-08-13T20:02:37.824464047+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:37.824464047+00:00 stderr F - Status: "False", 2025-08-13T20:02:37.824464047+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:37.824464047+00:00 stderr F - }, 2025-08-13T20:02:37.824464047+00:00 stderr F + { 2025-08-13T20:02:37.824464047+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:37.824464047+00:00 stderr F + Status: "True", 2025-08-13T20:02:37.824464047+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:37.821865803 +0000 UTC m=+77.026965177", 2025-08-13T20:02:37.824464047+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:37.824464047+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:37.824464047+00:00 stderr F + }, 2025-08-13T20:02:37.824464047+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:37.824464047+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:37.824464047+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:37.824464047+00:00 stderr F }, 2025-08-13T20:02:37.824464047+00:00 stderr F Version: "", 2025-08-13T20:02:37.824464047+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:37.824464047+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:37.824464047+00:00 stderr F } 2025-08-13T20:02:38.009616618+00:00 stderr F E0813 20:02:38.009553 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.022606408+00:00 stderr F E0813 20:02:38.022533 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.024687358+00:00 stderr F I0813 20:02:38.024601 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.024687358+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.024687358+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.024687358+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:38.024687358+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:38.024687358+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:38.024687358+00:00 stderr F - { 2025-08-13T20:02:38.024687358+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:38.024687358+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.024687358+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:38.024687358+00:00 stderr F - }, 2025-08-13T20:02:38.024687358+00:00 stderr F + { 2025-08-13T20:02:38.024687358+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:38.024687358+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.024687358+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.022592148 +0000 UTC m=+77.227691292", 2025-08-13T20:02:38.024687358+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:38.024687358+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:38.024687358+00:00 stderr F + }, 2025-08-13T20:02:38.024687358+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:38.024687358+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:38.024687358+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:38.024687358+00:00 stderr F }, 2025-08-13T20:02:38.024687358+00:00 stderr F Version: "", 2025-08-13T20:02:38.024687358+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.024687358+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.024687358+00:00 stderr F } 2025-08-13T20:02:38.208207273+00:00 stderr F I0813 20:02:38.208072 1 request.go:697] Waited for 1.014815649s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:02:38.210816507+00:00 stderr F E0813 20:02:38.210699 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.374117816+00:00 stderr F E0813 20:02:38.373953 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.377282966+00:00 stderr F I0813 20:02:38.377135 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.377282966+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.377282966+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.377282966+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:38.377282966+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:38.377282966+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:38.377282966+00:00 stderr F - { 2025-08-13T20:02:38.377282966+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:38.377282966+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.377282966+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:38.377282966+00:00 stderr F - }, 2025-08-13T20:02:38.377282966+00:00 stderr F + { 2025-08-13T20:02:38.377282966+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:38.377282966+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.377282966+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.374052224 +0000 UTC m=+77.579151698", 2025-08-13T20:02:38.377282966+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:38.377282966+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:38.377282966+00:00 stderr F + }, 2025-08-13T20:02:38.377282966+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:38.377282966+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:38.377282966+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:38.377282966+00:00 stderr F }, 2025-08-13T20:02:38.377282966+00:00 stderr F Version: "", 2025-08-13T20:02:38.377282966+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.377282966+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.377282966+00:00 stderr F } 2025-08-13T20:02:38.409229868+00:00 stderr F E0813 20:02:38.409114 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.432512932+00:00 stderr F E0813 20:02:38.432459 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.434983522+00:00 stderr F I0813 20:02:38.434956 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.434983522+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.434983522+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.434983522+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:38.434983522+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:38.434983522+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:38.434983522+00:00 stderr F - { 2025-08-13T20:02:38.434983522+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:38.434983522+00:00 stderr F - Status: "False", 2025-08-13T20:02:38.434983522+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:38.434983522+00:00 stderr F - }, 2025-08-13T20:02:38.434983522+00:00 stderr F + { 2025-08-13T20:02:38.434983522+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:38.434983522+00:00 stderr F + Status: "True", 2025-08-13T20:02:38.434983522+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.432589994 +0000 UTC m=+77.637689208", 2025-08-13T20:02:38.434983522+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:38.434983522+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:38.434983522+00:00 stderr F + }, 2025-08-13T20:02:38.434983522+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:38.434983522+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:38.434983522+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:38.434983522+00:00 stderr F }, 2025-08-13T20:02:38.434983522+00:00 stderr F Version: "", 2025-08-13T20:02:38.434983522+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:38.434983522+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:38.434983522+00:00 stderr F } 2025-08-13T20:02:38.609067568+00:00 stderr F E0813 20:02:38.608938 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.631156048+00:00 stderr F E0813 20:02:38.631058 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.633299470+00:00 stderr F I0813 20:02:38.633058 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:38.633299470+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:38.633299470+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:38.633299470+00:00 stderr F ... 
// 19 identical elements
2025-08-13T20:02:38.633299470+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:38.633299470+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:38.633299470+00:00 stderr F - {
2025-08-13T20:02:38.633299470+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:38.633299470+00:00 stderr F - Status: "False",
2025-08-13T20:02:38.633299470+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:38.633299470+00:00 stderr F - },
2025-08-13T20:02:38.633299470+00:00 stderr F + {
2025-08-13T20:02:38.633299470+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:38.633299470+00:00 stderr F + Status: "True",
2025-08-13T20:02:38.633299470+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.631231761 +0000 UTC m=+77.836330955",
2025-08-13T20:02:38.633299470+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:38.633299470+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:02:38.633299470+00:00 stderr F + },
2025-08-13T20:02:38.633299470+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:38.633299470+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:38.633299470+00:00 stderr F ... // 38 identical elements
2025-08-13T20:02:38.633299470+00:00 stderr F },
2025-08-13T20:02:38.633299470+00:00 stderr F Version: "",
2025-08-13T20:02:38.633299470+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:38.633299470+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:38.633299470+00:00 stderr F }
2025-08-13T20:02:38.809022192+00:00 stderr F E0813 20:02:38.808964 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:38.831928866+00:00 stderr F E0813 20:02:38.831696 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:38.833717297+00:00 stderr F I0813 20:02:38.833666 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:38.833717297+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:38.833717297+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:38.833717297+00:00 stderr F ... // 2 identical elements
2025-08-13T20:02:38.833717297+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:38.833717297+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:02:38.833717297+00:00 stderr F - {
2025-08-13T20:02:38.833717297+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:38.833717297+00:00 stderr F - Status: "False",
2025-08-13T20:02:38.833717297+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:02:38.833717297+00:00 stderr F - },
2025-08-13T20:02:38.833717297+00:00 stderr F + {
2025-08-13T20:02:38.833717297+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:38.833717297+00:00 stderr F + Status: "True",
2025-08-13T20:02:38.833717297+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:38.831754401 +0000 UTC m=+78.036853625",
2025-08-13T20:02:38.833717297+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:38.833717297+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:02:38.833717297+00:00 stderr F + },
2025-08-13T20:02:38.833717297+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:38.833717297+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:02:38.833717297+00:00 stderr F ... // 55 identical elements
2025-08-13T20:02:38.833717297+00:00 stderr F },
2025-08-13T20:02:38.833717297+00:00 stderr F Version: "",
2025-08-13T20:02:38.833717297+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:38.833717297+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:38.833717297+00:00 stderr F }
2025-08-13T20:02:39.008508193+00:00 stderr F E0813 20:02:39.008405 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.033549498+00:00 stderr F E0813 20:02:39.033436 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.036017018+00:00 stderr F I0813 20:02:39.035943 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:39.036017018+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:39.036017018+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:39.036017018+00:00 stderr F ... // 19 identical elements
2025-08-13T20:02:39.036017018+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:39.036017018+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:39.036017018+00:00 stderr F - {
2025-08-13T20:02:39.036017018+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:39.036017018+00:00 stderr F - Status: "False",
2025-08-13T20:02:39.036017018+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:39.036017018+00:00 stderr F - },
2025-08-13T20:02:39.036017018+00:00 stderr F + {
2025-08-13T20:02:39.036017018+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:39.036017018+00:00 stderr F + Status: "True",
2025-08-13T20:02:39.036017018+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.033546278 +0000 UTC m=+78.238645532",
2025-08-13T20:02:39.036017018+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:39.036017018+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:02:39.036017018+00:00 stderr F + },
2025-08-13T20:02:39.036017018+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:39.036017018+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:39.036017018+00:00 stderr F ... // 38 identical elements
2025-08-13T20:02:39.036017018+00:00 stderr F },
2025-08-13T20:02:39.036017018+00:00 stderr F Version: "",
2025-08-13T20:02:39.036017018+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:39.036017018+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:39.036017018+00:00 stderr F }
2025-08-13T20:02:39.209076115+00:00 stderr F E0813 20:02:39.209019 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.233264635+00:00 stderr F E0813 20:02:39.233185 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.234977154+00:00 stderr F I0813 20:02:39.234925 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:39.234977154+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:39.234977154+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:39.234977154+00:00 stderr F ... // 10 identical elements
2025-08-13T20:02:39.234977154+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:39.234977154+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:39.234977154+00:00 stderr F - {
2025-08-13T20:02:39.234977154+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:39.234977154+00:00 stderr F - Status: "False",
2025-08-13T20:02:39.234977154+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:39.234977154+00:00 stderr F - },
2025-08-13T20:02:39.234977154+00:00 stderr F + {
2025-08-13T20:02:39.234977154+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:39.234977154+00:00 stderr F + Status: "True",
2025-08-13T20:02:39.234977154+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.233222854 +0000 UTC m=+78.438322008",
2025-08-13T20:02:39.234977154+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:39.234977154+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:02:39.234977154+00:00 stderr F + },
2025-08-13T20:02:39.234977154+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:39.234977154+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:39.234977154+00:00 stderr F ... // 47 identical elements
2025-08-13T20:02:39.234977154+00:00 stderr F },
2025-08-13T20:02:39.234977154+00:00 stderr F Version: "",
2025-08-13T20:02:39.234977154+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:39.234977154+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:39.234977154+00:00 stderr F }
2025-08-13T20:02:39.408274038+00:00 stderr F I0813 20:02:39.407918 1 request.go:697] Waited for 1.030436886s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status
2025-08-13T20:02:39.409755390+00:00 stderr F E0813 20:02:39.409699 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.609879939+00:00 stderr F E0813 20:02:39.609674 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.652287489+00:00 stderr F E0813 20:02:39.652176 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.654922334+00:00 stderr F I0813 20:02:39.654739 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:39.654922334+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:39.654922334+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:39.654922334+00:00 stderr F ... // 10 identical elements
2025-08-13T20:02:39.654922334+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:39.654922334+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:39.654922334+00:00 stderr F - {
2025-08-13T20:02:39.654922334+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:39.654922334+00:00 stderr F - Status: "False",
2025-08-13T20:02:39.654922334+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:39.654922334+00:00 stderr F - },
2025-08-13T20:02:39.654922334+00:00 stderr F + {
2025-08-13T20:02:39.654922334+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:39.654922334+00:00 stderr F + Status: "True",
2025-08-13T20:02:39.654922334+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.652405342 +0000 UTC m=+78.857504516",
2025-08-13T20:02:39.654922334+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:39.654922334+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:02:39.654922334+00:00 stderr F + },
2025-08-13T20:02:39.654922334+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:39.654922334+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:39.654922334+00:00 stderr F ... // 47 identical elements
2025-08-13T20:02:39.654922334+00:00 stderr F },
2025-08-13T20:02:39.654922334+00:00 stderr F Version: "",
2025-08-13T20:02:39.654922334+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:39.654922334+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:39.654922334+00:00 stderr F }
2025-08-13T20:02:39.733002751+00:00 stderr F E0813 20:02:39.732689 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.736083259+00:00 stderr F I0813 20:02:39.735966 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:39.736083259+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:39.736083259+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:39.736083259+00:00 stderr F ... // 43 identical elements
2025-08-13T20:02:39.736083259+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:02:39.736083259+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:02:39.736083259+00:00 stderr F - {
2025-08-13T20:02:39.736083259+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:39.736083259+00:00 stderr F - Status: "False",
2025-08-13T20:02:39.736083259+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:02:39.736083259+00:00 stderr F - },
2025-08-13T20:02:39.736083259+00:00 stderr F + {
2025-08-13T20:02:39.736083259+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:39.736083259+00:00 stderr F + Status: "True",
2025-08-13T20:02:39.736083259+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.732833536 +0000 UTC m=+78.937932970",
2025-08-13T20:02:39.736083259+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:02:39.736083259+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:02:39.736083259+00:00 stderr F + },
2025-08-13T20:02:39.736083259+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:02:39.736083259+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:39.736083259+00:00 stderr F ... // 14 identical elements
2025-08-13T20:02:39.736083259+00:00 stderr F },
2025-08-13T20:02:39.736083259+00:00 stderr F Version: "",
2025-08-13T20:02:39.736083259+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:39.736083259+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:39.736083259+00:00 stderr F }
2025-08-13T20:02:39.810677247+00:00 stderr F E0813 20:02:39.810556 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.855908507+00:00 stderr F E0813 20:02:39.855723 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.858237484+00:00 stderr F I0813 20:02:39.858171 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:39.858237484+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:39.858237484+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:39.858237484+00:00 stderr F ... // 19 identical elements
2025-08-13T20:02:39.858237484+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:39.858237484+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:39.858237484+00:00 stderr F - {
2025-08-13T20:02:39.858237484+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:39.858237484+00:00 stderr F - Status: "False",
2025-08-13T20:02:39.858237484+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:39.858237484+00:00 stderr F - },
2025-08-13T20:02:39.858237484+00:00 stderr F + {
2025-08-13T20:02:39.858237484+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:39.858237484+00:00 stderr F + Status: "True",
2025-08-13T20:02:39.858237484+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:39.855873176 +0000 UTC m=+79.060972630",
2025-08-13T20:02:39.858237484+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:39.858237484+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:02:39.858237484+00:00 stderr F + },
2025-08-13T20:02:39.858237484+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:39.858237484+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:39.858237484+00:00 stderr F ... // 38 identical elements
2025-08-13T20:02:39.858237484+00:00 stderr F },
2025-08-13T20:02:39.858237484+00:00 stderr F Version: "",
2025-08-13T20:02:39.858237484+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:39.858237484+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:39.858237484+00:00 stderr F }
2025-08-13T20:02:40.011393083+00:00 stderr F E0813 20:02:40.010825 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:40.055184292+00:00 stderr F E0813 20:02:40.055065 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:40.057880279+00:00 stderr F I0813 20:02:40.057721 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:40.057880279+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:40.057880279+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:40.057880279+00:00 stderr F ... // 2 identical elements
2025-08-13T20:02:40.057880279+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:40.057880279+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:02:40.057880279+00:00 stderr F - {
2025-08-13T20:02:40.057880279+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:40.057880279+00:00 stderr F - Status: "False",
2025-08-13T20:02:40.057880279+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:02:40.057880279+00:00 stderr F - },
2025-08-13T20:02:40.057880279+00:00 stderr F + {
2025-08-13T20:02:40.057880279+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:40.057880279+00:00 stderr F + Status: "True",
2025-08-13T20:02:40.057880279+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:40.055165632 +0000 UTC m=+79.260264806",
2025-08-13T20:02:40.057880279+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:40.057880279+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:02:40.057880279+00:00 stderr F + },
2025-08-13T20:02:40.057880279+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:40.057880279+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:02:40.057880279+00:00 stderr F ... // 55 identical elements
2025-08-13T20:02:40.057880279+00:00 stderr F },
2025-08-13T20:02:40.057880279+00:00 stderr F Version: "",
2025-08-13T20:02:40.057880279+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:40.057880279+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:40.057880279+00:00 stderr F }
2025-08-13T20:02:40.211032458+00:00 stderr F E0813 20:02:40.210726 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:40.254105447+00:00 stderr F E0813 20:02:40.253832 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:40.257162694+00:00 stderr F I0813 20:02:40.257024 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:40.257162694+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:40.257162694+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:40.257162694+00:00 stderr F ... // 19 identical elements
2025-08-13T20:02:40.257162694+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:40.257162694+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:40.257162694+00:00 stderr F - {
2025-08-13T20:02:40.257162694+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:40.257162694+00:00 stderr F - Status: "False",
2025-08-13T20:02:40.257162694+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:40.257162694+00:00 stderr F - },
2025-08-13T20:02:40.257162694+00:00 stderr F + {
2025-08-13T20:02:40.257162694+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:40.257162694+00:00 stderr F + Status: "True",
2025-08-13T20:02:40.257162694+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:40.253945202 +0000 UTC m=+79.459044626",
2025-08-13T20:02:40.257162694+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:40.257162694+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:02:40.257162694+00:00 stderr F + },
2025-08-13T20:02:40.257162694+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:40.257162694+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:40.257162694+00:00 stderr F ... // 38 identical elements
2025-08-13T20:02:40.257162694+00:00 stderr F },
2025-08-13T20:02:40.257162694+00:00 stderr F Version: "",
2025-08-13T20:02:40.257162694+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:40.257162694+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:40.257162694+00:00 stderr F }
2025-08-13T20:02:40.408925773+00:00 stderr F I0813 20:02:40.408472 1 request.go:697] Waited for 1.17324116s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status
2025-08-13T20:02:40.410283412+00:00 stderr F E0813 20:02:40.410202 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:40.453893266+00:00 stderr F E0813 20:02:40.453701 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:40.459104355+00:00 stderr F I0813 20:02:40.458981 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:40.459104355+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:40.459104355+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:40.459104355+00:00 stderr F ... // 10 identical elements
2025-08-13T20:02:40.459104355+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:40.459104355+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:40.459104355+00:00 stderr F - {
2025-08-13T20:02:40.459104355+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:40.459104355+00:00 stderr F - Status: "False",
2025-08-13T20:02:40.459104355+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:40.459104355+00:00 stderr F - },
2025-08-13T20:02:40.459104355+00:00 stderr F + {
2025-08-13T20:02:40.459104355+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:40.459104355+00:00 stderr F + Status: "True",
2025-08-13T20:02:40.459104355+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:40.45403173 +0000 UTC m=+79.659131874",
2025-08-13T20:02:40.459104355+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:40.459104355+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:02:40.459104355+00:00 stderr F + },
2025-08-13T20:02:40.459104355+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:40.459104355+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:40.459104355+00:00 stderr F ... // 47 identical elements
2025-08-13T20:02:40.459104355+00:00 stderr F },
2025-08-13T20:02:40.459104355+00:00 stderr F Version: "",
2025-08-13T20:02:40.459104355+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:40.459104355+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:40.459104355+00:00 stderr F }
2025-08-13T20:02:40.610141543+00:00 stderr F E0813 20:02:40.609707 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:40.693087400+00:00 stderr F E0813 20:02:40.693030 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:40.695862119+00:00 stderr F I0813 20:02:40.695752 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:40.695862119+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:40.695862119+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:40.695862119+00:00 stderr F ... // 10 identical elements
2025-08-13T20:02:40.695862119+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:40.695862119+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:40.695862119+00:00 stderr F - {
2025-08-13T20:02:40.695862119+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:40.695862119+00:00 stderr F - Status: "False",
2025-08-13T20:02:40.695862119+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:40.695862119+00:00 stderr F - },
2025-08-13T20:02:40.695862119+00:00 stderr F + {
2025-08-13T20:02:40.695862119+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:40.695862119+00:00 stderr F + Status: "True",
2025-08-13T20:02:40.695862119+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:40.693200443 +0000 UTC m=+79.898299647",
2025-08-13T20:02:40.695862119+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:40.695862119+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:02:40.695862119+00:00 stderr F + },
2025-08-13T20:02:40.695862119+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:40.695862119+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:40.695862119+00:00 stderr F ... // 47 identical elements
2025-08-13T20:02:40.695862119+00:00 stderr F },
2025-08-13T20:02:40.695862119+00:00 stderr F Version: "",
2025-08-13T20:02:40.695862119+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:40.695862119+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:40.695862119+00:00 stderr F }
2025-08-13T20:02:40.809537742+00:00 stderr F E0813 20:02:40.809455 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:41.009569698+00:00 stderr F E0813 20:02:41.009458 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:41.092326639+00:00 stderr F E0813 20:02:41.092276 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:41.094510742+00:00 stderr F I0813 20:02:41.094483 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:41.094510742+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:41.094510742+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:41.094510742+00:00 stderr F ... // 19 identical elements
2025-08-13T20:02:41.094510742+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:41.094510742+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:41.094510742+00:00 stderr F - {
2025-08-13T20:02:41.094510742+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:41.094510742+00:00 stderr F - Status: "False",
2025-08-13T20:02:41.094510742+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:41.094510742+00:00 stderr F - },
2025-08-13T20:02:41.094510742+00:00 stderr F + {
2025-08-13T20:02:41.094510742+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:41.094510742+00:00 stderr F + Status: "True",
2025-08-13T20:02:41.094510742+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.092412442 +0000 UTC m=+80.297511526",
2025-08-13T20:02:41.094510742+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:41.094510742+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:02:41.094510742+00:00 stderr F + },
2025-08-13T20:02:41.094510742+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:41.094510742+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:41.094510742+00:00 stderr F ... // 38 identical elements
2025-08-13T20:02:41.094510742+00:00 stderr F },
2025-08-13T20:02:41.094510742+00:00 stderr F Version: "",
2025-08-13T20:02:41.094510742+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:41.094510742+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:41.094510742+00:00 stderr F }
2025-08-13T20:02:41.209867223+00:00 stderr F E0813 20:02:41.209685 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:41.292589613+00:00 stderr F E0813 20:02:41.292529 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:41.295547067+00:00 stderr F I0813 20:02:41.295514 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:41.295547067+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:41.295547067+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:41.295547067+00:00 stderr F ... // 2 identical elements
2025-08-13T20:02:41.295547067+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:41.295547067+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:02:41.295547067+00:00 stderr F - {
2025-08-13T20:02:41.295547067+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:41.295547067+00:00 stderr F - Status: "False",
2025-08-13T20:02:41.295547067+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:02:41.295547067+00:00 stderr F - },
2025-08-13T20:02:41.295547067+00:00 stderr F + {
2025-08-13T20:02:41.295547067+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:41.295547067+00:00 stderr F + Status: "True",
2025-08-13T20:02:41.295547067+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.292687976 +0000 UTC m=+80.497787250",
2025-08-13T20:02:41.295547067+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:41.295547067+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:02:41.295547067+00:00 stderr F + },
2025-08-13T20:02:41.295547067+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:41.295547067+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:02:41.295547067+00:00 stderr F ...
// 55 identical elements 2025-08-13T20:02:41.295547067+00:00 stderr F }, 2025-08-13T20:02:41.295547067+00:00 stderr F Version: "", 2025-08-13T20:02:41.295547067+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.295547067+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.295547067+00:00 stderr F } 2025-08-13T20:02:41.408646444+00:00 stderr F I0813 20:02:41.408524 1 request.go:697] Waited for 1.151170611s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:02:41.410157427+00:00 stderr F E0813 20:02:41.410122 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.452286258+00:00 stderr F E0813 20:02:41.452087 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.455177620+00:00 stderr F I0813 20:02:41.455081 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.455177620+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.455177620+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.455177620+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:02:41.455177620+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:41.455177620+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:41.455177620+00:00 stderr F - { 2025-08-13T20:02:41.455177620+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:41.455177620+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.455177620+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:41.455177620+00:00 stderr F - }, 2025-08-13T20:02:41.455177620+00:00 stderr F + { 2025-08-13T20:02:41.455177620+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:41.455177620+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.455177620+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.452167594 +0000 UTC m=+80.657266869", 2025-08-13T20:02:41.455177620+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:41.455177620+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:41.455177620+00:00 stderr F + }, 2025-08-13T20:02:41.455177620+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:41.455177620+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:41.455177620+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:41.455177620+00:00 stderr F }, 2025-08-13T20:02:41.455177620+00:00 stderr F Version: "", 2025-08-13T20:02:41.455177620+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.455177620+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.455177620+00:00 stderr F } 2025-08-13T20:02:41.492750232+00:00 stderr F E0813 20:02:41.492654 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.498085424+00:00 stderr F I0813 20:02:41.498014 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.498085424+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.498085424+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.498085424+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:41.498085424+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:41.498085424+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:41.498085424+00:00 stderr F - { 2025-08-13T20:02:41.498085424+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:41.498085424+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.498085424+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:41.498085424+00:00 stderr F - }, 2025-08-13T20:02:41.498085424+00:00 stderr F + { 2025-08-13T20:02:41.498085424+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:41.498085424+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.498085424+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.492727882 +0000 UTC m=+80.697827116", 2025-08-13T20:02:41.498085424+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:41.498085424+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:41.498085424+00:00 stderr F + }, 2025-08-13T20:02:41.498085424+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:41.498085424+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:41.498085424+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:41.498085424+00:00 stderr F }, 2025-08-13T20:02:41.498085424+00:00 stderr F Version: "", 2025-08-13T20:02:41.498085424+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.498085424+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.498085424+00:00 stderr F } 2025-08-13T20:02:41.609886224+00:00 stderr F E0813 20:02:41.609696 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.692903652+00:00 stderr F E0813 20:02:41.692665 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.695712942+00:00 stderr F I0813 20:02:41.695640 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.695712942+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.695712942+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.695712942+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:41.695712942+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:41.695712942+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:41.695712942+00:00 stderr F - { 2025-08-13T20:02:41.695712942+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:41.695712942+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.695712942+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:41.695712942+00:00 stderr F - }, 2025-08-13T20:02:41.695712942+00:00 stderr F + { 2025-08-13T20:02:41.695712942+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:41.695712942+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.695712942+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.692911722 +0000 UTC m=+80.898011126", 2025-08-13T20:02:41.695712942+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:41.695712942+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:41.695712942+00:00 stderr F + }, 2025-08-13T20:02:41.695712942+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:41.695712942+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:41.695712942+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:41.695712942+00:00 stderr F }, 2025-08-13T20:02:41.695712942+00:00 stderr F Version: "", 2025-08-13T20:02:41.695712942+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.695712942+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.695712942+00:00 stderr F } 2025-08-13T20:02:41.809092337+00:00 stderr F E0813 20:02:41.808992 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.971767138+00:00 stderr F E0813 20:02:41.971645 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.975059611+00:00 stderr F I0813 20:02:41.975003 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:41.975059611+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:41.975059611+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:41.975059611+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:41.975059611+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:41.975059611+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:41.975059611+00:00 stderr F - { 2025-08-13T20:02:41.975059611+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:41.975059611+00:00 stderr F - Status: "False", 2025-08-13T20:02:41.975059611+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:41.975059611+00:00 stderr F - }, 2025-08-13T20:02:41.975059611+00:00 stderr F + { 2025-08-13T20:02:41.975059611+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:41.975059611+00:00 stderr F + Status: "True", 2025-08-13T20:02:41.975059611+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:41.971699266 +0000 UTC m=+81.176798460", 2025-08-13T20:02:41.975059611+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:41.975059611+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:41.975059611+00:00 stderr F + }, 2025-08-13T20:02:41.975059611+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:41.975059611+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:41.975059611+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:41.975059611+00:00 stderr F }, 2025-08-13T20:02:41.975059611+00:00 stderr F Version: "", 2025-08-13T20:02:41.975059611+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:41.975059611+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:41.975059611+00:00 stderr F } 2025-08-13T20:02:42.009924016+00:00 stderr F E0813 20:02:42.009727 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.173314598+00:00 stderr F E0813 20:02:42.173193 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.175037337+00:00 stderr F I0813 20:02:42.174736 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:42.175037337+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:42.175037337+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:42.175037337+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:42.175037337+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:42.175037337+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:42.175037337+00:00 stderr F - { 2025-08-13T20:02:42.175037337+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:42.175037337+00:00 stderr F - Status: "False", 2025-08-13T20:02:42.175037337+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:42.175037337+00:00 stderr F - }, 2025-08-13T20:02:42.175037337+00:00 stderr F + { 2025-08-13T20:02:42.175037337+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:42.175037337+00:00 stderr F + Status: "True", 2025-08-13T20:02:42.175037337+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:42.173233336 +0000 UTC m=+81.378332610", 2025-08-13T20:02:42.175037337+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:42.175037337+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:42.175037337+00:00 stderr F + }, 2025-08-13T20:02:42.175037337+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:42.175037337+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:42.175037337+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:42.175037337+00:00 stderr F }, 2025-08-13T20:02:42.175037337+00:00 stderr F Version: "", 2025-08-13T20:02:42.175037337+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:42.175037337+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:42.175037337+00:00 stderr F } 2025-08-13T20:02:42.209069618+00:00 stderr F E0813 20:02:42.208979 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.373713945+00:00 stderr F E0813 20:02:42.373274 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.376973758+00:00 stderr F I0813 20:02:42.376690 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:42.376973758+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:42.376973758+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:42.376973758+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:42.376973758+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:42.376973758+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:42.376973758+00:00 stderr F - { 2025-08-13T20:02:42.376973758+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:42.376973758+00:00 stderr F - Status: "False", 2025-08-13T20:02:42.376973758+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:42.376973758+00:00 stderr F - }, 2025-08-13T20:02:42.376973758+00:00 stderr F + { 2025-08-13T20:02:42.376973758+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:42.376973758+00:00 stderr F + Status: "True", 2025-08-13T20:02:42.376973758+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:42.373340805 +0000 UTC m=+81.578439969", 2025-08-13T20:02:42.376973758+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:42.376973758+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:42.376973758+00:00 stderr F + }, 2025-08-13T20:02:42.376973758+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:42.376973758+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:42.376973758+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:02:42.376973758+00:00 stderr F }, 2025-08-13T20:02:42.376973758+00:00 stderr F Version: "", 2025-08-13T20:02:42.376973758+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:42.376973758+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:42.376973758+00:00 stderr F } 2025-08-13T20:02:42.410688000+00:00 stderr F E0813 20:02:42.410560 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.608154843+00:00 stderr F I0813 20:02:42.607562 1 request.go:697] Waited for 1.109243645s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:02:42.610078638+00:00 stderr F E0813 20:02:42.609974 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.772931454+00:00 stderr F E0813 20:02:42.772878 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.775012764+00:00 stderr F I0813 20:02:42.774987 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:42.775012764+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:42.775012764+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:42.775012764+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:42.775012764+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:42.775012764+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:42.775012764+00:00 stderr F - { 2025-08-13T20:02:42.775012764+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:42.775012764+00:00 stderr F - Status: "False", 2025-08-13T20:02:42.775012764+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:42.775012764+00:00 stderr F - }, 2025-08-13T20:02:42.775012764+00:00 stderr F + { 2025-08-13T20:02:42.775012764+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:42.775012764+00:00 stderr F + Status: "True", 2025-08-13T20:02:42.775012764+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:42.773033487 +0000 UTC m=+81.978132701", 2025-08-13T20:02:42.775012764+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:42.775012764+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:42.775012764+00:00 stderr F + }, 2025-08-13T20:02:42.775012764+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:42.775012764+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:42.775012764+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:42.775012764+00:00 stderr F }, 2025-08-13T20:02:42.775012764+00:00 stderr F Version: "", 2025-08-13T20:02:42.775012764+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:42.775012764+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:42.775012764+00:00 stderr F } 2025-08-13T20:02:42.809667162+00:00 stderr F E0813 20:02:42.809511 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.972950400+00:00 stderr F E0813 20:02:42.972830 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.983392638+00:00 stderr F I0813 20:02:42.983251 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:42.983392638+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:42.983392638+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:42.983392638+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:42.983392638+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:42.983392638+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:42.983392638+00:00 stderr F - { 2025-08-13T20:02:42.983392638+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:42.983392638+00:00 stderr F - Status: "False", 2025-08-13T20:02:42.983392638+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:42.983392638+00:00 stderr F - }, 2025-08-13T20:02:42.983392638+00:00 stderr F + { 2025-08-13T20:02:42.983392638+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:42.983392638+00:00 stderr F + Status: "True", 2025-08-13T20:02:42.983392638+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:42.972919719 +0000 UTC m=+82.178018933", 2025-08-13T20:02:42.983392638+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:42.983392638+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:42.983392638+00:00 stderr F + }, 2025-08-13T20:02:42.983392638+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:42.983392638+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:42.983392638+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:42.983392638+00:00 stderr F }, 2025-08-13T20:02:42.983392638+00:00 stderr F Version: "", 2025-08-13T20:02:42.983392638+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:42.983392638+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:42.983392638+00:00 stderr F } 2025-08-13T20:02:43.009577656+00:00 stderr F E0813 20:02:43.009483 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.211282169+00:00 stderr F E0813 20:02:43.210955 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.332547909+00:00 stderr F E0813 20:02:43.332470 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.335764841+00:00 stderr F I0813 20:02:43.335650 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:43.335764841+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:43.335764841+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:43.335764841+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:43.335764841+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:43.335764841+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:43.335764841+00:00 stderr F - { 2025-08-13T20:02:43.335764841+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:43.335764841+00:00 stderr F - Status: "False", 2025-08-13T20:02:43.335764841+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:43.335764841+00:00 stderr F - }, 2025-08-13T20:02:43.335764841+00:00 stderr F + { 2025-08-13T20:02:43.335764841+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:43.335764841+00:00 stderr F + Status: "True", 2025-08-13T20:02:43.335764841+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.332531919 +0000 UTC m=+82.537631143", 2025-08-13T20:02:43.335764841+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:43.335764841+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:43.335764841+00:00 stderr F + }, 2025-08-13T20:02:43.335764841+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:43.335764841+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:43.335764841+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:43.335764841+00:00 stderr F }, 2025-08-13T20:02:43.335764841+00:00 stderr F Version: "", 2025-08-13T20:02:43.335764841+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:43.335764841+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:43.335764841+00:00 stderr F } 2025-08-13T20:02:43.410632006+00:00 stderr F E0813 20:02:43.410544 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.533812061+00:00 stderr F E0813 20:02:43.533681 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.535882560+00:00 stderr F I0813 20:02:43.535707 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:43.535882560+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:43.535882560+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:43.535882560+00:00 stderr F ... 
// 19 identical elements
2025-08-13T20:02:43.535882560+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:43.535882560+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:43.535882560+00:00 stderr F - {
2025-08-13T20:02:43.535882560+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:43.535882560+00:00 stderr F - Status: "False",
2025-08-13T20:02:43.535882560+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:43.535882560+00:00 stderr F - },
2025-08-13T20:02:43.535882560+00:00 stderr F + {
2025-08-13T20:02:43.535882560+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:43.535882560+00:00 stderr F + Status: "True",
2025-08-13T20:02:43.535882560+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.533750569 +0000 UTC m=+82.738849793",
2025-08-13T20:02:43.535882560+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:43.535882560+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:02:43.535882560+00:00 stderr F + },
2025-08-13T20:02:43.535882560+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:43.535882560+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:43.535882560+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:43.535882560+00:00 stderr F },
2025-08-13T20:02:43.535882560+00:00 stderr F Version: "",
2025-08-13T20:02:43.535882560+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:43.535882560+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:43.535882560+00:00 stderr F }
2025-08-13T20:02:43.613427862+00:00 stderr F E0813 20:02:43.613301 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:43.692654802+00:00 stderr F E0813 20:02:43.692552 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:43.694875885+00:00 stderr F I0813 20:02:43.694731 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:43.694875885+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:43.694875885+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:43.694875885+00:00 stderr F ...
// 43 identical elements
2025-08-13T20:02:43.694875885+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:02:43.694875885+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:02:43.694875885+00:00 stderr F - {
2025-08-13T20:02:43.694875885+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:43.694875885+00:00 stderr F - Status: "False",
2025-08-13T20:02:43.694875885+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:02:43.694875885+00:00 stderr F - },
2025-08-13T20:02:43.694875885+00:00 stderr F + {
2025-08-13T20:02:43.694875885+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:43.694875885+00:00 stderr F + Status: "True",
2025-08-13T20:02:43.694875885+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.692635021 +0000 UTC m=+82.897734245",
2025-08-13T20:02:43.694875885+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:02:43.694875885+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:02:43.694875885+00:00 stderr F + },
2025-08-13T20:02:43.694875885+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:02:43.694875885+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:43.694875885+00:00 stderr F ...
// 14 identical elements
2025-08-13T20:02:43.694875885+00:00 stderr F },
2025-08-13T20:02:43.694875885+00:00 stderr F Version: "",
2025-08-13T20:02:43.694875885+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:43.694875885+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:43.694875885+00:00 stderr F }
2025-08-13T20:02:43.738092648+00:00 stderr F E0813 20:02:43.738038 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:43.740215249+00:00 stderr F I0813 20:02:43.740186 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:43.740215249+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:43.740215249+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:43.740215249+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:02:43.740215249+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:43.740215249+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:02:43.740215249+00:00 stderr F - {
2025-08-13T20:02:43.740215249+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:43.740215249+00:00 stderr F - Status: "False",
2025-08-13T20:02:43.740215249+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:02:43.740215249+00:00 stderr F - },
2025-08-13T20:02:43.740215249+00:00 stderr F + {
2025-08-13T20:02:43.740215249+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:43.740215249+00:00 stderr F + Status: "True",
2025-08-13T20:02:43.740215249+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.738236632 +0000 UTC m=+82.943335826",
2025-08-13T20:02:43.740215249+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:43.740215249+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:02:43.740215249+00:00 stderr F + },
2025-08-13T20:02:43.740215249+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:43.740215249+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:02:43.740215249+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:02:43.740215249+00:00 stderr F },
2025-08-13T20:02:43.740215249+00:00 stderr F Version: "",
2025-08-13T20:02:43.740215249+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:43.740215249+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:43.740215249+00:00 stderr F }
2025-08-13T20:02:43.810450213+00:00 stderr F E0813 20:02:43.810300 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:43.937389464+00:00 stderr F E0813 20:02:43.937257 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:43.943501168+00:00 stderr F I0813 20:02:43.943424 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:43.943501168+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:43.943501168+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:43.943501168+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:43.943501168+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:43.943501168+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:43.943501168+00:00 stderr F - {
2025-08-13T20:02:43.943501168+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:43.943501168+00:00 stderr F - Status: "False",
2025-08-13T20:02:43.943501168+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:43.943501168+00:00 stderr F - },
2025-08-13T20:02:43.943501168+00:00 stderr F + {
2025-08-13T20:02:43.943501168+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:43.943501168+00:00 stderr F + Status: "True",
2025-08-13T20:02:43.943501168+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:43.937455325 +0000 UTC m=+83.142555230",
2025-08-13T20:02:43.943501168+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:43.943501168+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:02:43.943501168+00:00 stderr F + },
2025-08-13T20:02:43.943501168+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:43.943501168+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:43.943501168+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:43.943501168+00:00 stderr F },
2025-08-13T20:02:43.943501168+00:00 stderr F Version: "",
2025-08-13T20:02:43.943501168+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:43.943501168+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:43.943501168+00:00 stderr F }
2025-08-13T20:02:44.009754338+00:00 stderr F E0813 20:02:44.009698 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.133475947+00:00 stderr F E0813 20:02:44.133326 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.136048801+00:00 stderr F I0813 20:02:44.136012 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:44.136048801+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:44.136048801+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:44.136048801+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:02:44.136048801+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:44.136048801+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:44.136048801+00:00 stderr F - {
2025-08-13T20:02:44.136048801+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:44.136048801+00:00 stderr F - Status: "False",
2025-08-13T20:02:44.136048801+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:44.136048801+00:00 stderr F - },
2025-08-13T20:02:44.136048801+00:00 stderr F + {
2025-08-13T20:02:44.136048801+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:44.136048801+00:00 stderr F + Status: "True",
2025-08-13T20:02:44.136048801+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:44.133948111 +0000 UTC m=+83.339047395",
2025-08-13T20:02:44.136048801+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:44.136048801+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:02:44.136048801+00:00 stderr F + },
2025-08-13T20:02:44.136048801+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:44.136048801+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:44.136048801+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:02:44.136048801+00:00 stderr F },
2025-08-13T20:02:44.136048801+00:00 stderr F Version: "",
2025-08-13T20:02:44.136048801+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:44.136048801+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:44.136048801+00:00 stderr F }
2025-08-13T20:02:44.210293679+00:00 stderr F E0813 20:02:44.210220 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.410241533+00:00 stderr F E0813 20:02:44.410145 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.610735652+00:00 stderr F E0813 20:02:44.610572 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.653021509+00:00 stderr F E0813 20:02:44.652960 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.654728947+00:00 stderr F I0813 20:02:44.654702 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:44.654728947+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:44.654728947+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:44.654728947+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:02:44.654728947+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:44.654728947+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:44.654728947+00:00 stderr F - {
2025-08-13T20:02:44.654728947+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:44.654728947+00:00 stderr F - Status: "False",
2025-08-13T20:02:44.654728947+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:44.654728947+00:00 stderr F - },
2025-08-13T20:02:44.654728947+00:00 stderr F + {
2025-08-13T20:02:44.654728947+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:44.654728947+00:00 stderr F + Status: "True",
2025-08-13T20:02:44.654728947+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:44.653127182 +0000 UTC m=+83.858226276",
2025-08-13T20:02:44.654728947+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:44.654728947+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:02:44.654728947+00:00 stderr F + },
2025-08-13T20:02:44.654728947+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:44.654728947+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:44.654728947+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:02:44.654728947+00:00 stderr F },
2025-08-13T20:02:44.654728947+00:00 stderr F Version: "",
2025-08-13T20:02:44.654728947+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:44.654728947+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:44.654728947+00:00 stderr F }
2025-08-13T20:02:44.809875933+00:00 stderr F E0813 20:02:44.809720 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.852525840+00:00 stderr F E0813 20:02:44.852477 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.854682591+00:00 stderr F I0813 20:02:44.854656 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:44.854682591+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:44.854682591+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:44.854682591+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:44.854682591+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:44.854682591+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:44.854682591+00:00 stderr F - {
2025-08-13T20:02:44.854682591+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:44.854682591+00:00 stderr F - Status: "False",
2025-08-13T20:02:44.854682591+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:44.854682591+00:00 stderr F - },
2025-08-13T20:02:44.854682591+00:00 stderr F + {
2025-08-13T20:02:44.854682591+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:44.854682591+00:00 stderr F + Status: "True",
2025-08-13T20:02:44.854682591+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:44.852597902 +0000 UTC m=+84.057697016",
2025-08-13T20:02:44.854682591+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:44.854682591+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:02:44.854682591+00:00 stderr F + },
2025-08-13T20:02:44.854682591+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:44.854682591+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:44.854682591+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:44.854682591+00:00 stderr F },
2025-08-13T20:02:44.854682591+00:00 stderr F Version: "",
2025-08-13T20:02:44.854682591+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:44.854682591+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:44.854682591+00:00 stderr F }
2025-08-13T20:02:45.010542128+00:00 stderr F E0813 20:02:45.010409 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.209175334+00:00 stderr F E0813 20:02:45.208632 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.254010902+00:00 stderr F E0813 20:02:45.253914 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.256010770+00:00 stderr F I0813 20:02:45.255942 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:45.256010770+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:45.256010770+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:45.256010770+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:02:45.256010770+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:45.256010770+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:02:45.256010770+00:00 stderr F - {
2025-08-13T20:02:45.256010770+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:45.256010770+00:00 stderr F - Status: "False",
2025-08-13T20:02:45.256010770+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:02:45.256010770+00:00 stderr F - },
2025-08-13T20:02:45.256010770+00:00 stderr F + {
2025-08-13T20:02:45.256010770+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:45.256010770+00:00 stderr F + Status: "True",
2025-08-13T20:02:45.256010770+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:45.253957231 +0000 UTC m=+84.459056385",
2025-08-13T20:02:45.256010770+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:45.256010770+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:02:45.256010770+00:00 stderr F + },
2025-08-13T20:02:45.256010770+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:45.256010770+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:02:45.256010770+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:02:45.256010770+00:00 stderr F },
2025-08-13T20:02:45.256010770+00:00 stderr F Version: "",
2025-08-13T20:02:45.256010770+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:45.256010770+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:45.256010770+00:00 stderr F }
2025-08-13T20:02:45.409763546+00:00 stderr F E0813 20:02:45.409605 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.453226156+00:00 stderr F E0813 20:02:45.453104 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.456240132+00:00 stderr F I0813 20:02:45.456165 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:45.456240132+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:45.456240132+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:45.456240132+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:45.456240132+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:45.456240132+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:45.456240132+00:00 stderr F - {
2025-08-13T20:02:45.456240132+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:45.456240132+00:00 stderr F - Status: "False",
2025-08-13T20:02:45.456240132+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:45.456240132+00:00 stderr F - },
2025-08-13T20:02:45.456240132+00:00 stderr F + {
2025-08-13T20:02:45.456240132+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:45.456240132+00:00 stderr F + Status: "True",
2025-08-13T20:02:45.456240132+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:45.453183404 +0000 UTC m=+84.658282768",
2025-08-13T20:02:45.456240132+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:45.456240132+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:02:45.456240132+00:00 stderr F + },
2025-08-13T20:02:45.456240132+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:45.456240132+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:45.456240132+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:45.456240132+00:00 stderr F },
2025-08-13T20:02:45.456240132+00:00 stderr F Version: "",
2025-08-13T20:02:45.456240132+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:45.456240132+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:45.456240132+00:00 stderr F }
2025-08-13T20:02:45.610164553+00:00 stderr F E0813 20:02:45.609866 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.654445656+00:00 stderr F E0813 20:02:45.654335 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.658755559+00:00 stderr F I0813 20:02:45.658340 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:45.658755559+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:45.658755559+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:45.658755559+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:02:45.658755559+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:45.658755559+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:45.658755559+00:00 stderr F - {
2025-08-13T20:02:45.658755559+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:45.658755559+00:00 stderr F - Status: "False",
2025-08-13T20:02:45.658755559+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:45.658755559+00:00 stderr F - },
2025-08-13T20:02:45.658755559+00:00 stderr F + {
2025-08-13T20:02:45.658755559+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:45.658755559+00:00 stderr F + Status: "True",
2025-08-13T20:02:45.658755559+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:45.654403195 +0000 UTC m=+84.859502529",
2025-08-13T20:02:45.658755559+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:45.658755559+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:02:45.658755559+00:00 stderr F + },
2025-08-13T20:02:45.658755559+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:45.658755559+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:45.658755559+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:02:45.658755559+00:00 stderr F },
2025-08-13T20:02:45.658755559+00:00 stderr F Version: "",
2025-08-13T20:02:45.658755559+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:45.658755559+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:45.658755559+00:00 stderr F }
2025-08-13T20:02:45.812534816+00:00 stderr F E0813 20:02:45.812435 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.009827914+00:00 stderr F E0813 20:02:46.009709 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.491641599+00:00 stderr F E0813 20:02:46.491579 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.493878643+00:00 stderr F I0813 20:02:46.493768 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:46.493878643+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:46.493878643+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:46.493878643+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:02:46.493878643+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:46.493878643+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:46.493878643+00:00 stderr F - {
2025-08-13T20:02:46.493878643+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:46.493878643+00:00 stderr F - Status: "False",
2025-08-13T20:02:46.493878643+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:46.493878643+00:00 stderr F - },
2025-08-13T20:02:46.493878643+00:00 stderr F + {
2025-08-13T20:02:46.493878643+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:46.493878643+00:00 stderr F + Status: "True",
2025-08-13T20:02:46.493878643+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:46.491770803 +0000 UTC m=+85.696870017",
2025-08-13T20:02:46.493878643+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:46.493878643+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:02:46.493878643+00:00 stderr F + },
2025-08-13T20:02:46.493878643+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:46.493878643+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:46.493878643+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:02:46.493878643+00:00 stderr F },
2025-08-13T20:02:46.493878643+00:00 stderr F Version: "",
2025-08-13T20:02:46.493878643+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:46.493878643+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:46.493878643+00:00 stderr F }
2025-08-13T20:02:46.495978243+00:00 stderr F E0813 20:02:46.495218 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.692088328+00:00 stderr F E0813 20:02:46.692031 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.694663531+00:00 stderr F I0813 20:02:46.694637 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:46.694663531+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:46.694663531+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:46.694663531+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:46.694663531+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:46.694663531+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:46.694663531+00:00 stderr F - {
2025-08-13T20:02:46.694663531+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:46.694663531+00:00 stderr F - Status: "False",
2025-08-13T20:02:46.694663531+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:46.694663531+00:00 stderr F - },
2025-08-13T20:02:46.694663531+00:00 stderr F + {
2025-08-13T20:02:46.694663531+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:46.694663531+00:00 stderr F + Status: "True",
2025-08-13T20:02:46.694663531+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:46.692192771 +0000 UTC m=+85.897291995",
2025-08-13T20:02:46.694663531+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:46.694663531+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:02:46.694663531+00:00 stderr F + },
2025-08-13T20:02:46.694663531+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:46.694663531+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:46.694663531+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:46.694663531+00:00 stderr F },
2025-08-13T20:02:46.694663531+00:00 stderr F Version: "",
2025-08-13T20:02:46.694663531+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:46.694663531+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:46.694663531+00:00 stderr F }
2025-08-13T20:02:46.696121993+00:00 stderr F E0813 20:02:46.696027 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.893951606+00:00 stderr F E0813 20:02:46.893726 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.896709045+00:00 stderr F I0813 20:02:46.896600 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:46.896709045+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:46.896709045+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:46.896709045+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:02:46.896709045+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:46.896709045+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:02:46.896709045+00:00 stderr F - {
2025-08-13T20:02:46.896709045+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:46.896709045+00:00 stderr F - Status: "False",
2025-08-13T20:02:46.896709045+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:02:46.896709045+00:00 stderr F - },
2025-08-13T20:02:46.896709045+00:00 stderr F + {
2025-08-13T20:02:46.896709045+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:46.896709045+00:00 stderr F + Status: "True",
2025-08-13T20:02:46.896709045+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:46.893873064 +0000 UTC m=+86.098972518",
2025-08-13T20:02:46.896709045+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:46.896709045+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:02:46.896709045+00:00 stderr F + },
2025-08-13T20:02:46.896709045+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:46.896709045+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:02:46.896709045+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:02:46.896709045+00:00 stderr F },
2025-08-13T20:02:46.896709045+00:00 stderr F Version: "",
2025-08-13T20:02:46.896709045+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:46.896709045+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:46.896709045+00:00 stderr F }
2025-08-13T20:02:46.898198017+00:00 stderr F E0813 20:02:46.898159 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.972571309+00:00 stderr F E0813 20:02:46.972327 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:46.975397639+00:00 stderr F I0813 20:02:46.975340 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:46.975397639+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:46.975397639+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:46.975397639+00:00 stderr F ...
// 43 identical elements
2025-08-13T20:02:46.975397639+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:02:46.975397639+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:02:46.975397639+00:00 stderr F - {
2025-08-13T20:02:46.975397639+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:46.975397639+00:00 stderr F - Status: "False",
2025-08-13T20:02:46.975397639+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:02:46.975397639+00:00 stderr F - },
2025-08-13T20:02:46.975397639+00:00 stderr F + {
2025-08-13T20:02:46.975397639+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:02:46.975397639+00:00 stderr F + Status: "True",
2025-08-13T20:02:46.975397639+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:46.972388134 +0000 UTC m=+86.177487448",
2025-08-13T20:02:46.975397639+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:02:46.975397639+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:02:46.975397639+00:00 stderr F + },
2025-08-13T20:02:46.975397639+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:02:46.975397639+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:46.975397639+00:00 stderr F ...
// 14 identical elements
2025-08-13T20:02:46.975397639+00:00 stderr F },
2025-08-13T20:02:46.975397639+00:00 stderr F Version: "",
2025-08-13T20:02:46.975397639+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:46.975397639+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:46.975397639+00:00 stderr F }
2025-08-13T20:02:46.977408187+00:00 stderr F E0813 20:02:46.977378 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:47.094554669+00:00 stderr F E0813 20:02:47.094426 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:47.096831594+00:00 stderr F I0813 20:02:47.096699 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:47.096831594+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:47.096831594+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:47.096831594+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:47.096831594+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:47.096831594+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:47.096831594+00:00 stderr F - {
2025-08-13T20:02:47.096831594+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:47.096831594+00:00 stderr F - Status: "False",
2025-08-13T20:02:47.096831594+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:47.096831594+00:00 stderr F - },
2025-08-13T20:02:47.096831594+00:00 stderr F + {
2025-08-13T20:02:47.096831594+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:47.096831594+00:00 stderr F + Status: "True",
2025-08-13T20:02:47.096831594+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:47.094501647 +0000 UTC m=+86.299600961",
2025-08-13T20:02:47.096831594+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:47.096831594+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:02:47.096831594+00:00 stderr F + },
2025-08-13T20:02:47.096831594+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:47.096831594+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:47.096831594+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:47.096831594+00:00 stderr F },
2025-08-13T20:02:47.096831594+00:00 stderr F Version: "",
2025-08-13T20:02:47.096831594+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:47.096831594+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:47.096831594+00:00 stderr F }
2025-08-13T20:02:47.098274195+00:00 stderr F E0813 20:02:47.098170 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:47.292954108+00:00 stderr F E0813 20:02:47.292596 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:47.295254294+00:00 stderr F I0813 20:02:47.295053 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:47.295254294+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:47.295254294+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:47.295254294+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:02:47.295254294+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:47.295254294+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:47.295254294+00:00 stderr F - {
2025-08-13T20:02:47.295254294+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:47.295254294+00:00 stderr F - Status: "False",
2025-08-13T20:02:47.295254294+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:47.295254294+00:00 stderr F - },
2025-08-13T20:02:47.295254294+00:00 stderr F + {
2025-08-13T20:02:47.295254294+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:47.295254294+00:00 stderr F + Status: "True",
2025-08-13T20:02:47.295254294+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:47.29266478 +0000 UTC m=+86.497763994",
2025-08-13T20:02:47.295254294+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:47.295254294+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:02:47.295254294+00:00 stderr F + },
2025-08-13T20:02:47.295254294+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:47.295254294+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:47.295254294+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:02:47.295254294+00:00 stderr F },
2025-08-13T20:02:47.295254294+00:00 stderr F Version: "",
2025-08-13T20:02:47.295254294+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:47.295254294+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:47.295254294+00:00 stderr F }
2025-08-13T20:02:47.296515290+00:00 stderr F E0813 20:02:47.296303 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:49.058882924+00:00 stderr F E0813 20:02:49.058373 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:49.060875371+00:00 stderr F I0813 20:02:49.060752 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:49.060875371+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:49.060875371+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:49.060875371+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:02:49.060875371+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:49.060875371+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:49.060875371+00:00 stderr F - {
2025-08-13T20:02:49.060875371+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:49.060875371+00:00 stderr F - Status: "False",
2025-08-13T20:02:49.060875371+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:49.060875371+00:00 stderr F - },
2025-08-13T20:02:49.060875371+00:00 stderr F + {
2025-08-13T20:02:49.060875371+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:49.060875371+00:00 stderr F + Status: "True",
2025-08-13T20:02:49.060875371+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.058833793 +0000 UTC m=+88.263933037",
2025-08-13T20:02:49.060875371+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:49.060875371+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:02:49.060875371+00:00 stderr F + },
2025-08-13T20:02:49.060875371+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:49.060875371+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:49.060875371+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:02:49.060875371+00:00 stderr F },
2025-08-13T20:02:49.060875371+00:00 stderr F Version: "",
2025-08-13T20:02:49.060875371+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:49.060875371+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:49.060875371+00:00 stderr F }
2025-08-13T20:02:49.061996513+00:00 stderr F E0813 20:02:49.061969 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:49.258492338+00:00 stderr F E0813 20:02:49.258437 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:49.261381521+00:00 stderr F I0813 20:02:49.261351 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:49.261381521+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:49.261381521+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:49.261381521+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:49.261381521+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:49.261381521+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:49.261381521+00:00 stderr F - {
2025-08-13T20:02:49.261381521+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:49.261381521+00:00 stderr F - Status: "False",
2025-08-13T20:02:49.261381521+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:49.261381521+00:00 stderr F - },
2025-08-13T20:02:49.261381521+00:00 stderr F + {
2025-08-13T20:02:49.261381521+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:49.261381521+00:00 stderr F + Status: "True",
2025-08-13T20:02:49.261381521+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.258576731 +0000 UTC m=+88.463675915",
2025-08-13T20:02:49.261381521+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:49.261381521+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:02:49.261381521+00:00 stderr F + },
2025-08-13T20:02:49.261381521+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:49.261381521+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:49.261381521+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:49.261381521+00:00 stderr F },
2025-08-13T20:02:49.261381521+00:00 stderr F Version: "",
2025-08-13T20:02:49.261381521+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:49.261381521+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:49.261381521+00:00 stderr F }
2025-08-13T20:02:49.262738109+00:00 stderr F E0813 20:02:49.262649 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:49.463326122+00:00 stderr F E0813 20:02:49.463212 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:49.464921058+00:00 stderr F I0813 20:02:49.464835 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:49.464921058+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:49.464921058+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:49.464921058+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:02:49.464921058+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:49.464921058+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:02:49.464921058+00:00 stderr F - {
2025-08-13T20:02:49.464921058+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:49.464921058+00:00 stderr F - Status: "False",
2025-08-13T20:02:49.464921058+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:02:49.464921058+00:00 stderr F - },
2025-08-13T20:02:49.464921058+00:00 stderr F + {
2025-08-13T20:02:49.464921058+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:02:49.464921058+00:00 stderr F + Status: "True",
2025-08-13T20:02:49.464921058+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.463280471 +0000 UTC m=+88.668379695",
2025-08-13T20:02:49.464921058+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:49.464921058+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:02:49.464921058+00:00 stderr F + },
2025-08-13T20:02:49.464921058+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:02:49.464921058+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:02:49.464921058+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:02:49.464921058+00:00 stderr F },
2025-08-13T20:02:49.464921058+00:00 stderr F Version: "",
2025-08-13T20:02:49.464921058+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:49.464921058+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:49.464921058+00:00 stderr F }
2025-08-13T20:02:49.466155773+00:00 stderr F E0813 20:02:49.466097 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:49.662081552+00:00 stderr F E0813 20:02:49.661961 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:49.665182730+00:00 stderr F I0813 20:02:49.664728 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:49.665182730+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:49.665182730+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:49.665182730+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:02:49.665182730+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:02:49.665182730+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:02:49.665182730+00:00 stderr F - {
2025-08-13T20:02:49.665182730+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:02:49.665182730+00:00 stderr F - Status: "False",
2025-08-13T20:02:49.665182730+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:02:49.665182730+00:00 stderr F - },
2025-08-13T20:02:49.665182730+00:00 stderr F + {
2025-08-13T20:02:49.665182730+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:02:49.665182730+00:00 stderr F + Status: "True",
2025-08-13T20:02:49.665182730+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.662041651 +0000 UTC m=+88.867141105",
2025-08-13T20:02:49.665182730+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:49.665182730+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:02:49.665182730+00:00 stderr F + },
2025-08-13T20:02:49.665182730+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:02:49.665182730+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:49.665182730+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:02:49.665182730+00:00 stderr F },
2025-08-13T20:02:49.665182730+00:00 stderr F Version: "",
2025-08-13T20:02:49.665182730+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:49.665182730+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:49.665182730+00:00 stderr F }
2025-08-13T20:02:49.666707054+00:00 stderr F E0813 20:02:49.666621 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:49.859386661+00:00 stderr F E0813 20:02:49.859269 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:49.862996234+00:00 stderr F I0813 20:02:49.862758 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:49.862996234+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:49.862996234+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:49.862996234+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:02:49.862996234+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:02:49.862996234+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:02:49.862996234+00:00 stderr F - {
2025-08-13T20:02:49.862996234+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:02:49.862996234+00:00 stderr F - Status: "False",
2025-08-13T20:02:49.862996234+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:02:49.862996234+00:00 stderr F - },
2025-08-13T20:02:49.862996234+00:00 stderr F + {
2025-08-13T20:02:49.862996234+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:02:49.862996234+00:00 stderr F + Status: "True",
2025-08-13T20:02:49.862996234+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:49.85935953 +0000 UTC m=+89.064458744",
2025-08-13T20:02:49.862996234+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:02:49.862996234+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:02:49.862996234+00:00 stderr F + },
2025-08-13T20:02:49.862996234+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:02:49.862996234+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:02:49.862996234+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:02:49.862996234+00:00 stderr F },
2025-08-13T20:02:49.862996234+00:00 stderr F Version: "",
2025-08-13T20:02:49.862996234+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:02:49.862996234+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:02:49.862996234+00:00 stderr F }
2025-08-13T20:02:49.864553388+00:00 stderr F E0813 20:02:49.864490 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:52.100349989+00:00 stderr F E0813 20:02:52.100206 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:52.102302825+00:00 stderr F I0813 20:02:52.102244 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:02:52.102302825+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:02:52.102302825+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:02:52.102302825+00:00 stderr F ...
// 43 identical elements 2025-08-13T20:02:52.102302825+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:02:52.102302825+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:02:52.102302825+00:00 stderr F - { 2025-08-13T20:02:52.102302825+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:52.102302825+00:00 stderr F - Status: "False", 2025-08-13T20:02:52.102302825+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:02:52.102302825+00:00 stderr F - }, 2025-08-13T20:02:52.102302825+00:00 stderr F + { 2025-08-13T20:02:52.102302825+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:02:52.102302825+00:00 stderr F + Status: "True", 2025-08-13T20:02:52.102302825+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:52.100318508 +0000 UTC m=+91.305417732", 2025-08-13T20:02:52.102302825+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:02:52.102302825+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:02:52.102302825+00:00 stderr F + }, 2025-08-13T20:02:52.102302825+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:02:52.102302825+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:52.102302825+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:02:52.102302825+00:00 stderr F }, 2025-08-13T20:02:52.102302825+00:00 stderr F Version: "", 2025-08-13T20:02:52.102302825+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:52.102302825+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:52.102302825+00:00 stderr F } 2025-08-13T20:02:52.103716805+00:00 stderr F E0813 20:02:52.103639 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.922128069+00:00 stderr F W0813 20:02:53.921703 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.922128069+00:00 stderr F E0813 20:02:53.921904 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.185693288+00:00 stderr F E0813 20:02:54.185571 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.188333574+00:00 stderr F I0813 20:02:54.188235 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.188333574+00:00 stderr F ObservedGeneration: 1, 
2025-08-13T20:02:54.188333574+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.188333574+00:00 stderr F ... // 10 identical elements 2025-08-13T20:02:54.188333574+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:54.188333574+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:54.188333574+00:00 stderr F - { 2025-08-13T20:02:54.188333574+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:54.188333574+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.188333574+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:54.188333574+00:00 stderr F - }, 2025-08-13T20:02:54.188333574+00:00 stderr F + { 2025-08-13T20:02:54.188333574+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:54.188333574+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.188333574+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.185654827 +0000 UTC m=+93.390754171", 2025-08-13T20:02:54.188333574+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.188333574+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:02:54.188333574+00:00 stderr F + }, 2025-08-13T20:02:54.188333574+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:54.188333574+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:54.188333574+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:54.188333574+00:00 stderr F }, 2025-08-13T20:02:54.188333574+00:00 stderr F Version: "", 2025-08-13T20:02:54.188333574+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.188333574+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.188333574+00:00 stderr F } 2025-08-13T20:02:54.190895287+00:00 stderr F E0813 20:02:54.190263 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.385528939+00:00 stderr F E0813 20:02:54.385395 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.389274516+00:00 stderr F I0813 20:02:54.389174 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.389274516+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.389274516+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.389274516+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:54.389274516+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:54.389274516+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:54.389274516+00:00 stderr F - { 2025-08-13T20:02:54.389274516+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:54.389274516+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.389274516+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:54.389274516+00:00 stderr F - }, 2025-08-13T20:02:54.389274516+00:00 stderr F + { 2025-08-13T20:02:54.389274516+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:54.389274516+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.389274516+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.385467807 +0000 UTC m=+93.590566981", 2025-08-13T20:02:54.389274516+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.389274516+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:02:54.389274516+00:00 stderr F + }, 2025-08-13T20:02:54.389274516+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:54.389274516+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:54.389274516+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:54.389274516+00:00 stderr F }, 2025-08-13T20:02:54.389274516+00:00 stderr F Version: "", 2025-08-13T20:02:54.389274516+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.389274516+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.389274516+00:00 stderr F } 2025-08-13T20:02:54.394194976+00:00 stderr F E0813 20:02:54.394097 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.590049464+00:00 stderr F E0813 20:02:54.589960 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.592184545+00:00 stderr F I0813 20:02:54.592117 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.592184545+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.592184545+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.592184545+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:02:54.592184545+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:54.592184545+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:02:54.592184545+00:00 stderr F - { 2025-08-13T20:02:54.592184545+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:54.592184545+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.592184545+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:02:54.592184545+00:00 stderr F - }, 2025-08-13T20:02:54.592184545+00:00 stderr F + { 2025-08-13T20:02:54.592184545+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:02:54.592184545+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.592184545+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.590033023 +0000 UTC m=+93.795132307", 2025-08-13T20:02:54.592184545+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.592184545+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:02:54.592184545+00:00 stderr F + }, 2025-08-13T20:02:54.592184545+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:02:54.592184545+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:02:54.592184545+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:02:54.592184545+00:00 stderr F }, 2025-08-13T20:02:54.592184545+00:00 stderr F Version: "", 2025-08-13T20:02:54.592184545+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.592184545+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.592184545+00:00 stderr F } 2025-08-13T20:02:54.593226934+00:00 stderr F E0813 20:02:54.593183 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.789140294+00:00 stderr F E0813 20:02:54.789091 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.791350607+00:00 stderr F I0813 20:02:54.791303 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.791350607+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.791350607+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.791350607+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:02:54.791350607+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:02:54.791350607+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:02:54.791350607+00:00 stderr F - { 2025-08-13T20:02:54.791350607+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:02:54.791350607+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.791350607+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:02:54.791350607+00:00 stderr F - }, 2025-08-13T20:02:54.791350607+00:00 stderr F + { 2025-08-13T20:02:54.791350607+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:02:54.791350607+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.791350607+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.789221796 +0000 UTC m=+93.994320850", 2025-08-13T20:02:54.791350607+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.791350607+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:02:54.791350607+00:00 stderr F + }, 2025-08-13T20:02:54.791350607+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:02:54.791350607+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:54.791350607+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:02:54.791350607+00:00 stderr F }, 2025-08-13T20:02:54.791350607+00:00 stderr F Version: "", 2025-08-13T20:02:54.791350607+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.791350607+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.791350607+00:00 stderr F } 2025-08-13T20:02:54.793879449+00:00 stderr F E0813 20:02:54.793257 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.987382889+00:00 stderr F E0813 20:02:54.987331 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:54.990219700+00:00 stderr F I0813 20:02:54.990194 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:02:54.990219700+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:02:54.990219700+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:02:54.990219700+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:02:54.990219700+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:02:54.990219700+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:02:54.990219700+00:00 stderr F - { 2025-08-13T20:02:54.990219700+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:02:54.990219700+00:00 stderr F - Status: "False", 2025-08-13T20:02:54.990219700+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:02:54.990219700+00:00 stderr F - }, 2025-08-13T20:02:54.990219700+00:00 stderr F + { 2025-08-13T20:02:54.990219700+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:02:54.990219700+00:00 stderr F + Status: "True", 2025-08-13T20:02:54.990219700+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:02:54.987465371 +0000 UTC m=+94.192564445", 2025-08-13T20:02:54.990219700+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:02:54.990219700+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:02:54.990219700+00:00 stderr F + }, 2025-08-13T20:02:54.990219700+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:02:54.990219700+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:02:54.990219700+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:02:54.990219700+00:00 stderr F }, 2025-08-13T20:02:54.990219700+00:00 stderr F Version: "", 2025-08-13T20:02:54.990219700+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:02:54.990219700+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:02:54.990219700+00:00 stderr F } 2025-08-13T20:02:54.991403564+00:00 stderr F E0813 20:02:54.991380 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.347261235+00:00 stderr F E0813 20:03:02.346578 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:02.348916012+00:00 stderr F I0813 20:03:02.348863 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:02.348916012+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:02.348916012+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:02.348916012+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:03:02.348916012+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:03:02.348916012+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:03:02.348916012+00:00 stderr F - { 2025-08-13T20:03:02.348916012+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:02.348916012+00:00 stderr F - Status: "False", 2025-08-13T20:03:02.348916012+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:03:02.348916012+00:00 stderr F - }, 2025-08-13T20:03:02.348916012+00:00 stderr F + { 2025-08-13T20:03:02.348916012+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:02.348916012+00:00 stderr F + Status: "True", 2025-08-13T20:03:02.348916012+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:02.347210973 +0000 UTC m=+101.552310187", 2025-08-13T20:03:02.348916012+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:03:02.348916012+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:03:02.348916012+00:00 stderr F + }, 2025-08-13T20:03:02.348916012+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:03:02.348916012+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:02.348916012+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:03:02.348916012+00:00 stderr F }, 2025-08-13T20:03:02.348916012+00:00 stderr F Version: "", 2025-08-13T20:03:02.348916012+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:02.348916012+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:02.348916012+00:00 stderr F } 2025-08-13T20:03:02.350906699+00:00 stderr F E0813 20:03:02.350729 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.433210680+00:00 stderr F E0813 20:03:04.433095 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.435199707+00:00 stderr F I0813 20:03:04.435118 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:04.435199707+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:04.435199707+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:04.435199707+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:04.435199707+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:04.435199707+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:04.435199707+00:00 stderr F - { 2025-08-13T20:03:04.435199707+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:04.435199707+00:00 stderr F - Status: "False", 2025-08-13T20:03:04.435199707+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:04.435199707+00:00 stderr F - }, 2025-08-13T20:03:04.435199707+00:00 stderr F + { 2025-08-13T20:03:04.435199707+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:04.435199707+00:00 stderr F + Status: "True", 2025-08-13T20:03:04.435199707+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:04.433156558 +0000 UTC m=+103.638255912", 2025-08-13T20:03:04.435199707+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:04.435199707+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:03:04.435199707+00:00 stderr F + }, 2025-08-13T20:03:04.435199707+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:04.435199707+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:04.435199707+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:03:04.435199707+00:00 stderr F }, 2025-08-13T20:03:04.435199707+00:00 stderr F Version: "", 2025-08-13T20:03:04.435199707+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:04.435199707+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:04.435199707+00:00 stderr F } 2025-08-13T20:03:04.436316628+00:00 stderr F E0813 20:03:04.436239 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.637671522+00:00 stderr F E0813 20:03:04.637543 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.640534914+00:00 stderr F I0813 20:03:04.640422 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:04.640534914+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:04.640534914+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:04.640534914+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:04.640534914+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:04.640534914+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:04.640534914+00:00 stderr F - { 2025-08-13T20:03:04.640534914+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:04.640534914+00:00 stderr F - Status: "False", 2025-08-13T20:03:04.640534914+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:04.640534914+00:00 stderr F - }, 2025-08-13T20:03:04.640534914+00:00 stderr F + { 2025-08-13T20:03:04.640534914+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:04.640534914+00:00 stderr F + Status: "True", 2025-08-13T20:03:04.640534914+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:04.637684293 +0000 UTC m=+103.842783657", 2025-08-13T20:03:04.640534914+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:04.640534914+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:03:04.640534914+00:00 stderr F + }, 2025-08-13T20:03:04.640534914+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:04.640534914+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:04.640534914+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:03:04.640534914+00:00 stderr F }, 2025-08-13T20:03:04.640534914+00:00 stderr F Version: "", 2025-08-13T20:03:04.640534914+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:04.640534914+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:04.640534914+00:00 stderr F } 2025-08-13T20:03:04.642434758+00:00 stderr F E0813 20:03:04.642398 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.838517592+00:00 stderr F E0813 20:03:04.838410 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.840338214+00:00 stderr F I0813 20:03:04.840273 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:04.840338214+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:04.840338214+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:04.840338214+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:03:04.840338214+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:04.840338214+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:03:04.840338214+00:00 stderr F - { 2025-08-13T20:03:04.840338214+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:04.840338214+00:00 stderr F - Status: "False", 2025-08-13T20:03:04.840338214+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:03:04.840338214+00:00 stderr F - }, 2025-08-13T20:03:04.840338214+00:00 stderr F + { 2025-08-13T20:03:04.840338214+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:04.840338214+00:00 stderr F + Status: "True", 2025-08-13T20:03:04.840338214+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:04.838476711 +0000 UTC m=+104.043575975", 2025-08-13T20:03:04.840338214+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:04.840338214+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:03:04.840338214+00:00 stderr F + }, 2025-08-13T20:03:04.840338214+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:04.840338214+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:03:04.840338214+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:03:04.840338214+00:00 stderr F }, 2025-08-13T20:03:04.840338214+00:00 stderr F Version: "", 2025-08-13T20:03:04.840338214+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:04.840338214+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:04.840338214+00:00 stderr F } 2025-08-13T20:03:04.841729014+00:00 stderr F E0813 20:03:04.841647 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.036209562+00:00 stderr F E0813 20:03:05.036004 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.041749930+00:00 stderr F I0813 20:03:05.041664 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:05.041749930+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:05.041749930+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:05.041749930+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:05.041749930+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:05.041749930+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:05.041749930+00:00 stderr F - { 2025-08-13T20:03:05.041749930+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:05.041749930+00:00 stderr F - Status: "False", 2025-08-13T20:03:05.041749930+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:05.041749930+00:00 stderr F - }, 2025-08-13T20:03:05.041749930+00:00 stderr F + { 2025-08-13T20:03:05.041749930+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:05.041749930+00:00 stderr F + Status: "True", 2025-08-13T20:03:05.041749930+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:05.036119899 +0000 UTC m=+104.241219113", 2025-08-13T20:03:05.041749930+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:05.041749930+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:03:05.041749930+00:00 stderr F + }, 2025-08-13T20:03:05.041749930+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:05.041749930+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:05.041749930+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:03:05.041749930+00:00 stderr F }, 2025-08-13T20:03:05.041749930+00:00 stderr F Version: "", 2025-08-13T20:03:05.041749930+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:05.041749930+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:05.041749930+00:00 stderr F } 2025-08-13T20:03:05.042971324+00:00 stderr F E0813 20:03:05.042921 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.234516949+00:00 stderr F E0813 20:03:05.234399 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:05.237211016+00:00 stderr F I0813 20:03:05.237104 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:05.237211016+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:05.237211016+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:05.237211016+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:05.237211016+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:05.237211016+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:05.237211016+00:00 stderr F - { 2025-08-13T20:03:05.237211016+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:05.237211016+00:00 stderr F - Status: "False", 2025-08-13T20:03:05.237211016+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:05.237211016+00:00 stderr F - }, 2025-08-13T20:03:05.237211016+00:00 stderr F + { 2025-08-13T20:03:05.237211016+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:05.237211016+00:00 stderr F + Status: "True", 2025-08-13T20:03:05.237211016+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:05.234450737 +0000 UTC m=+104.439549901", 2025-08-13T20:03:05.237211016+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:05.237211016+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:03:05.237211016+00:00 stderr F + }, 2025-08-13T20:03:05.237211016+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:05.237211016+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:05.237211016+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:03:05.237211016+00:00 stderr F }, 2025-08-13T20:03:05.237211016+00:00 stderr F Version: "", 2025-08-13T20:03:05.237211016+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:05.237211016+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:05.237211016+00:00 stderr F } 2025-08-13T20:03:05.239046088+00:00 stderr F E0813 20:03:05.238768 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.836279379+00:00 stderr F E0813 20:03:22.835474 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:22.839112150+00:00 stderr F I0813 20:03:22.839011 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:22.839112150+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:22.839112150+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:22.839112150+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:03:22.839112150+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:03:22.839112150+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:03:22.839112150+00:00 stderr F - { 2025-08-13T20:03:22.839112150+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:22.839112150+00:00 stderr F - Status: "False", 2025-08-13T20:03:22.839112150+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:03:22.839112150+00:00 stderr F - }, 2025-08-13T20:03:22.839112150+00:00 stderr F + { 2025-08-13T20:03:22.839112150+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:22.839112150+00:00 stderr F + Status: "True", 2025-08-13T20:03:22.839112150+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:22.83629378 +0000 UTC m=+122.041393044", 2025-08-13T20:03:22.839112150+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:03:22.839112150+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:03:22.839112150+00:00 stderr F + }, 2025-08-13T20:03:22.839112150+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:03:22.839112150+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:22.839112150+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:03:22.839112150+00:00 stderr F }, 2025-08-13T20:03:22.839112150+00:00 stderr F Version: "", 2025-08-13T20:03:22.839112150+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:22.839112150+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:22.839112150+00:00 stderr F } 2025-08-13T20:03:22.841314113+00:00 stderr F E0813 20:03:22.841211 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.921173105+00:00 stderr F E0813 20:03:24.921029 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:24.923902473+00:00 stderr F I0813 20:03:24.923866 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:24.923902473+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:24.923902473+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:24.923902473+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:24.923902473+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:24.923902473+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:24.923902473+00:00 stderr F - { 2025-08-13T20:03:24.923902473+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:24.923902473+00:00 stderr F - Status: "False", 2025-08-13T20:03:24.923902473+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:24.923902473+00:00 stderr F - }, 2025-08-13T20:03:24.923902473+00:00 stderr F + { 2025-08-13T20:03:24.923902473+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:24.923902473+00:00 stderr F + Status: "True", 2025-08-13T20:03:24.923902473+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:24.921269628 +0000 UTC m=+124.126368962", 2025-08-13T20:03:24.923902473+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:24.923902473+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:03:24.923902473+00:00 stderr F + }, 2025-08-13T20:03:24.923902473+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:24.923902473+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:24.923902473+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:03:24.923902473+00:00 stderr F }, 2025-08-13T20:03:24.923902473+00:00 stderr F Version: "", 2025-08-13T20:03:24.923902473+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:24.923902473+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:24.923902473+00:00 stderr F } 2025-08-13T20:03:24.925448307+00:00 stderr F E0813 20:03:24.925394 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.126030619+00:00 stderr F E0813 20:03:25.125969 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.129151838+00:00 stderr F I0813 20:03:25.129119 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:25.129151838+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:25.129151838+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:25.129151838+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:25.129151838+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:25.129151838+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:25.129151838+00:00 stderr F - { 2025-08-13T20:03:25.129151838+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:25.129151838+00:00 stderr F - Status: "False", 2025-08-13T20:03:25.129151838+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:25.129151838+00:00 stderr F - }, 2025-08-13T20:03:25.129151838+00:00 stderr F + { 2025-08-13T20:03:25.129151838+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:25.129151838+00:00 stderr F + Status: "True", 2025-08-13T20:03:25.129151838+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:25.126180074 +0000 UTC m=+124.331279348", 2025-08-13T20:03:25.129151838+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:25.129151838+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:03:25.129151838+00:00 stderr F + }, 2025-08-13T20:03:25.129151838+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:25.129151838+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:25.129151838+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:03:25.129151838+00:00 stderr F }, 2025-08-13T20:03:25.129151838+00:00 stderr F Version: "", 2025-08-13T20:03:25.129151838+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:25.129151838+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:25.129151838+00:00 stderr F } 2025-08-13T20:03:25.130926699+00:00 stderr F E0813 20:03:25.130880 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.326744395+00:00 stderr F E0813 20:03:25.326646 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.329996568+00:00 stderr F I0813 20:03:25.329836 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:25.329996568+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:25.329996568+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:25.329996568+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:03:25.329996568+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:25.329996568+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:03:25.329996568+00:00 stderr F - { 2025-08-13T20:03:25.329996568+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:25.329996568+00:00 stderr F - Status: "False", 2025-08-13T20:03:25.329996568+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:03:25.329996568+00:00 stderr F - }, 2025-08-13T20:03:25.329996568+00:00 stderr F + { 2025-08-13T20:03:25.329996568+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:25.329996568+00:00 stderr F + Status: "True", 2025-08-13T20:03:25.329996568+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:25.326953871 +0000 UTC m=+124.532053195", 2025-08-13T20:03:25.329996568+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:25.329996568+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:03:25.329996568+00:00 stderr F + }, 2025-08-13T20:03:25.329996568+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:25.329996568+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:03:25.329996568+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:03:25.329996568+00:00 stderr F }, 2025-08-13T20:03:25.329996568+00:00 stderr F Version: "", 2025-08-13T20:03:25.329996568+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:25.329996568+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:25.329996568+00:00 stderr F } 2025-08-13T20:03:25.332283563+00:00 stderr F E0813 20:03:25.332211 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.525548166+00:00 stderr F E0813 20:03:25.525415 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.528057577+00:00 stderr F I0813 20:03:25.527970 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:25.528057577+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:25.528057577+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:25.528057577+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:25.528057577+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:25.528057577+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:25.528057577+00:00 stderr F - { 2025-08-13T20:03:25.528057577+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:25.528057577+00:00 stderr F - Status: "False", 2025-08-13T20:03:25.528057577+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:25.528057577+00:00 stderr F - }, 2025-08-13T20:03:25.528057577+00:00 stderr F + { 2025-08-13T20:03:25.528057577+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:25.528057577+00:00 stderr F + Status: "True", 2025-08-13T20:03:25.528057577+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:25.525488714 +0000 UTC m=+124.730587908", 2025-08-13T20:03:25.528057577+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:25.528057577+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:03:25.528057577+00:00 stderr F + }, 2025-08-13T20:03:25.528057577+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:25.528057577+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:25.528057577+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:03:25.528057577+00:00 stderr F }, 2025-08-13T20:03:25.528057577+00:00 stderr F Version: "", 2025-08-13T20:03:25.528057577+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:25.528057577+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:25.528057577+00:00 stderr F } 2025-08-13T20:03:25.529081187+00:00 stderr F E0813 20:03:25.528986 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.722506365+00:00 stderr F E0813 20:03:25.722404 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.724716808+00:00 stderr F I0813 20:03:25.724619 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:25.724716808+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:25.724716808+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:25.724716808+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:25.724716808+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:25.724716808+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:25.724716808+00:00 stderr F - { 2025-08-13T20:03:25.724716808+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:25.724716808+00:00 stderr F - Status: "False", 2025-08-13T20:03:25.724716808+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:25.724716808+00:00 stderr F - }, 2025-08-13T20:03:25.724716808+00:00 stderr F + { 2025-08-13T20:03:25.724716808+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:25.724716808+00:00 stderr F + Status: "True", 2025-08-13T20:03:25.724716808+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:25.722476354 +0000 UTC m=+124.927575578", 2025-08-13T20:03:25.724716808+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:25.724716808+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:03:25.724716808+00:00 stderr F + }, 2025-08-13T20:03:25.724716808+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:25.724716808+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:25.724716808+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:03:25.724716808+00:00 stderr F }, 2025-08-13T20:03:25.724716808+00:00 stderr F Version: "", 2025-08-13T20:03:25.724716808+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:25.724716808+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:25.724716808+00:00 stderr F } 2025-08-13T20:03:25.726247031+00:00 stderr F E0813 20:03:25.726156 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.309131285+00:00 stderr F E0813 20:03:35.308076 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.002192285+00:00 stderr F E0813 20:03:37.002088 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.005224771+00:00 stderr F I0813 20:03:37.005161 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.005224771+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.005224771+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.005224771+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:03:37.005224771+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:03:37.005224771+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:03:37.005224771+00:00 stderr F - { 2025-08-13T20:03:37.005224771+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:37.005224771+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.005224771+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:03:37.005224771+00:00 stderr F - }, 2025-08-13T20:03:37.005224771+00:00 stderr F + { 2025-08-13T20:03:37.005224771+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:03:37.005224771+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.005224771+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.002153564 +0000 UTC m=+136.207252708", 2025-08-13T20:03:37.005224771+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:03:37.005224771+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:03:37.005224771+00:00 stderr F + }, 2025-08-13T20:03:37.005224771+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:03:37.005224771+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:37.005224771+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:03:37.005224771+00:00 stderr F }, 2025-08-13T20:03:37.005224771+00:00 stderr F Version: "", 2025-08-13T20:03:37.005224771+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.005224771+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.005224771+00:00 stderr F } 2025-08-13T20:03:37.006494927+00:00 stderr F E0813 20:03:37.006414 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.115226139+00:00 stderr F E0813 20:03:37.115175 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.117290808+00:00 stderr F I0813 20:03:37.117251 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.117290808+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.117290808+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.117290808+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:03:37.117290808+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:37.117290808+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:37.117290808+00:00 stderr F - { 2025-08-13T20:03:37.117290808+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:37.117290808+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.117290808+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:37.117290808+00:00 stderr F - }, 2025-08-13T20:03:37.117290808+00:00 stderr F + { 2025-08-13T20:03:37.117290808+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:37.117290808+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.117290808+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.115319432 +0000 UTC m=+136.320418566", 2025-08-13T20:03:37.117290808+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.117290808+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:03:37.117290808+00:00 stderr F + }, 2025-08-13T20:03:37.117290808+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:03:37.117290808+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:37.117290808+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:03:37.117290808+00:00 stderr F }, 2025-08-13T20:03:37.117290808+00:00 stderr F Version: "", 2025-08-13T20:03:37.117290808+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.117290808+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.117290808+00:00 stderr F } 2025-08-13T20:03:37.118891614+00:00 stderr F E0813 20:03:37.118757 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.119163772+00:00 stderr F E0813 20:03:37.117534 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.120903341+00:00 stderr F I0813 20:03:37.120758 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.120903341+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.120903341+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.120903341+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:37.120903341+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:37.120903341+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:37.120903341+00:00 stderr F - { 2025-08-13T20:03:37.120903341+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:37.120903341+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.120903341+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:37.120903341+00:00 stderr F - }, 2025-08-13T20:03:37.120903341+00:00 stderr F + { 2025-08-13T20:03:37.120903341+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:37.120903341+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.120903341+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.118886654 +0000 UTC m=+136.323986058", 2025-08-13T20:03:37.120903341+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.120903341+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:03:37.120903341+00:00 stderr F + }, 2025-08-13T20:03:37.120903341+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:37.120903341+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:37.120903341+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:03:37.120903341+00:00 stderr F }, 2025-08-13T20:03:37.120903341+00:00 stderr F Version: "", 2025-08-13T20:03:37.120903341+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.120903341+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.120903341+00:00 stderr F } 2025-08-13T20:03:37.121382915+00:00 stderr F I0813 20:03:37.121358 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.121382915+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.121382915+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.121382915+00:00 stderr F ... // 2 identical elements 2025-08-13T20:03:37.121382915+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:37.121382915+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:03:37.121382915+00:00 stderr F - { 2025-08-13T20:03:37.121382915+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:37.121382915+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.121382915+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:03:37.121382915+00:00 stderr F - }, 2025-08-13T20:03:37.121382915+00:00 stderr F + { 2025-08-13T20:03:37.121382915+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:03:37.121382915+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.121382915+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.119211223 +0000 UTC m=+136.324310487", 2025-08-13T20:03:37.121382915+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.121382915+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:03:37.121382915+00:00 stderr F + }, 2025-08-13T20:03:37.121382915+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:03:37.121382915+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:03:37.121382915+00:00 stderr F ... // 55 identical elements 2025-08-13T20:03:37.121382915+00:00 stderr F }, 2025-08-13T20:03:37.121382915+00:00 stderr F Version: "", 2025-08-13T20:03:37.121382915+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.121382915+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.121382915+00:00 stderr F } 2025-08-13T20:03:37.121892799+00:00 stderr F E0813 20:03:37.121868 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.122761554+00:00 stderr F E0813 20:03:37.122662 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.123134605+00:00 stderr F E0813 20:03:37.123101 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.128750475+00:00 stderr F E0813 20:03:37.128724
1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.131482963+00:00 stderr F I0813 20:03:37.131423 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.131482963+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.131482963+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.131482963+00:00 stderr F ... // 19 identical elements 2025-08-13T20:03:37.131482963+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:03:37.131482963+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:03:37.131482963+00:00 stderr F - { 2025-08-13T20:03:37.131482963+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:03:37.131482963+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.131482963+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:03:37.131482963+00:00 stderr F - }, 2025-08-13T20:03:37.131482963+00:00 stderr F + { 2025-08-13T20:03:37.131482963+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:03:37.131482963+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.131482963+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.128898429 +0000 UTC m=+136.333997793", 2025-08-13T20:03:37.131482963+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.131482963+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:03:37.131482963+00:00 stderr F + }, 2025-08-13T20:03:37.131482963+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 
2025-08-13T20:03:37.131482963+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:37.131482963+00:00 stderr F ... // 38 identical elements 2025-08-13T20:03:37.131482963+00:00 stderr F }, 2025-08-13T20:03:37.131482963+00:00 stderr F Version: "", 2025-08-13T20:03:37.131482963+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.131482963+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.131482963+00:00 stderr F } 2025-08-13T20:03:37.132311937+00:00 stderr F E0813 20:03:37.132267 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.136277390+00:00 stderr F E0813 20:03:37.136211 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.136968619+00:00 stderr F I0813 20:03:37.136907 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:03:37.136968619+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:03:37.136968619+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:03:37.136968619+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:03:37.136968619+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:03:37.136968619+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:03:37.136968619+00:00 stderr F - { 2025-08-13T20:03:37.136968619+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:03:37.136968619+00:00 stderr F - Status: "False", 2025-08-13T20:03:37.136968619+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:03:37.136968619+00:00 stderr F - }, 2025-08-13T20:03:37.136968619+00:00 stderr F + { 2025-08-13T20:03:37.136968619+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:03:37.136968619+00:00 stderr F + Status: "True", 2025-08-13T20:03:37.136968619+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:03:37.132306286 +0000 UTC m=+136.337405491", 2025-08-13T20:03:37.136968619+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:03:37.136968619+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:03:37.136968619+00:00 stderr F + }, 2025-08-13T20:03:37.136968619+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:03:37.136968619+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:03:37.136968619+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:03:37.136968619+00:00 stderr F }, 2025-08-13T20:03:37.136968619+00:00 stderr F Version: "", 2025-08-13T20:03:37.136968619+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:03:37.136968619+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:03:37.136968619+00:00 stderr F } 2025-08-13T20:03:37.146010717+00:00 stderr F E0813 20:03:37.145622 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:50.832318355+00:00 stderr F W0813 20:03:50.831668 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:50.832318355+00:00 stderr F E0813 20:03:50.832276 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:03.811403786+00:00 stderr F E0813 20:04:03.807034 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:03.825416906+00:00 stderr F I0813 20:04:03.824582 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:03.825416906+00:00 stderr F 
ObservedGeneration: 1, 2025-08-13T20:04:03.825416906+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:03.825416906+00:00 stderr F ... // 43 identical elements 2025-08-13T20:04:03.825416906+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:04:03.825416906+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:04:03.825416906+00:00 stderr F - { 2025-08-13T20:04:03.825416906+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:04:03.825416906+00:00 stderr F - Status: "False", 2025-08-13T20:04:03.825416906+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:04:03.825416906+00:00 stderr F - }, 2025-08-13T20:04:03.825416906+00:00 stderr F + { 2025-08-13T20:04:03.825416906+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:04:03.825416906+00:00 stderr F + Status: "True", 2025-08-13T20:04:03.825416906+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:03.807627468 +0000 UTC m=+163.012726792", 2025-08-13T20:04:03.825416906+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:04:03.825416906+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:04:03.825416906+00:00 stderr F + }, 2025-08-13T20:04:03.825416906+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:04:03.825416906+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:03.825416906+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:04:03.825416906+00:00 stderr F }, 2025-08-13T20:04:03.825416906+00:00 stderr F Version: "", 2025-08-13T20:04:03.825416906+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:03.825416906+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:03.825416906+00:00 stderr F } 2025-08-13T20:04:03.858934092+00:00 stderr F E0813 20:04:03.853628 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.896446866+00:00 stderr F E0813 20:04:05.893516 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.923424836+00:00 stderr F I0813 20:04:05.923124 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:05.923424836+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:05.923424836+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:05.923424836+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:04:05.923424836+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:04:05.923424836+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:05.923424836+00:00 stderr F - { 2025-08-13T20:04:05.923424836+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:04:05.923424836+00:00 stderr F - Status: "False", 2025-08-13T20:04:05.923424836+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:04:05.923424836+00:00 stderr F - }, 2025-08-13T20:04:05.923424836+00:00 stderr F + { 2025-08-13T20:04:05.923424836+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:04:05.923424836+00:00 stderr F + Status: "True", 2025-08-13T20:04:05.923424836+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:05.894241363 +0000 UTC m=+165.099340748", 2025-08-13T20:04:05.923424836+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:05.923424836+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:04:05.923424836+00:00 stderr F + }, 2025-08-13T20:04:05.923424836+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:04:05.923424836+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:05.923424836+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:04:05.923424836+00:00 stderr F }, 2025-08-13T20:04:05.923424836+00:00 stderr F Version: "", 2025-08-13T20:04:05.923424836+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:05.923424836+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:05.923424836+00:00 stderr F } 2025-08-13T20:04:05.937612201+00:00 stderr F E0813 20:04:05.936704 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.095120444+00:00 stderr F E0813 20:04:06.095053 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.098044487+00:00 stderr F I0813 20:04:06.098014 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:06.098044487+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:06.098044487+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:06.098044487+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:04:06.098044487+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:04:06.098044487+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:04:06.098044487+00:00 stderr F - { 2025-08-13T20:04:06.098044487+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:04:06.098044487+00:00 stderr F - Status: "False", 2025-08-13T20:04:06.098044487+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:04:06.098044487+00:00 stderr F - }, 2025-08-13T20:04:06.098044487+00:00 stderr F + { 2025-08-13T20:04:06.098044487+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:04:06.098044487+00:00 stderr F + Status: "True", 2025-08-13T20:04:06.098044487+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:06.095211036 +0000 UTC m=+165.300310130", 2025-08-13T20:04:06.098044487+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:06.098044487+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:04:06.098044487+00:00 stderr F + }, 2025-08-13T20:04:06.098044487+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:04:06.098044487+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:06.098044487+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:04:06.098044487+00:00 stderr F }, 2025-08-13T20:04:06.098044487+00:00 stderr F Version: "", 2025-08-13T20:04:06.098044487+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:06.098044487+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:06.098044487+00:00 stderr F } 2025-08-13T20:04:06.102716631+00:00 stderr F E0813 20:04:06.100421 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.296891860+00:00 stderr F E0813 20:04:06.296625 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.300179564+00:00 stderr F I0813 20:04:06.300020 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:06.300179564+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:06.300179564+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:06.300179564+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:04:06.300179564+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:04:06.300179564+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:04:06.300179564+00:00 stderr F - { 2025-08-13T20:04:06.300179564+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:04:06.300179564+00:00 stderr F - Status: "False", 2025-08-13T20:04:06.300179564+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:04:06.300179564+00:00 stderr F - }, 2025-08-13T20:04:06.300179564+00:00 stderr F + { 2025-08-13T20:04:06.300179564+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:04:06.300179564+00:00 stderr F + Status: "True", 2025-08-13T20:04:06.300179564+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:06.296729665 +0000 UTC m=+165.501828979", 2025-08-13T20:04:06.300179564+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:06.300179564+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:04:06.300179564+00:00 stderr F + }, 2025-08-13T20:04:06.300179564+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:04:06.300179564+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:04:06.300179564+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:04:06.300179564+00:00 stderr F }, 2025-08-13T20:04:06.300179564+00:00 stderr F Version: "", 2025-08-13T20:04:06.300179564+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:06.300179564+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:06.300179564+00:00 stderr F } 2025-08-13T20:04:06.301719637+00:00 stderr F E0813 20:04:06.301617 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.491970365+00:00 stderr F E0813 20:04:06.491887 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.494330812+00:00 stderr F I0813 20:04:06.494219 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:06.494330812+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:06.494330812+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:06.494330812+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:04:06.494330812+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:04:06.494330812+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:04:06.494330812+00:00 stderr F - { 2025-08-13T20:04:06.494330812+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:04:06.494330812+00:00 stderr F - Status: "False", 2025-08-13T20:04:06.494330812+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:04:06.494330812+00:00 stderr F - }, 2025-08-13T20:04:06.494330812+00:00 stderr F + { 2025-08-13T20:04:06.494330812+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:04:06.494330812+00:00 stderr F + Status: "True", 2025-08-13T20:04:06.494330812+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:06.491934914 +0000 UTC m=+165.697034178", 2025-08-13T20:04:06.494330812+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:06.494330812+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:04:06.494330812+00:00 stderr F + }, 2025-08-13T20:04:06.494330812+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:04:06.494330812+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:06.494330812+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:04:06.494330812+00:00 stderr F }, 2025-08-13T20:04:06.494330812+00:00 stderr F Version: "", 2025-08-13T20:04:06.494330812+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:06.494330812+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:06.494330812+00:00 stderr F } 2025-08-13T20:04:06.495895007+00:00 stderr F E0813 20:04:06.495730 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.689370306+00:00 stderr F E0813 20:04:06.689248 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.691285151+00:00 stderr F I0813 20:04:06.691211 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:06.691285151+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:06.691285151+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:06.691285151+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:04:06.691285151+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:04:06.691285151+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:06.691285151+00:00 stderr F - { 2025-08-13T20:04:06.691285151+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:04:06.691285151+00:00 stderr F - Status: "False", 2025-08-13T20:04:06.691285151+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:04:06.691285151+00:00 stderr F - }, 2025-08-13T20:04:06.691285151+00:00 stderr F + { 2025-08-13T20:04:06.691285151+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:04:06.691285151+00:00 stderr F + Status: "True", 2025-08-13T20:04:06.691285151+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:06.689319605 +0000 UTC m=+165.894418939", 2025-08-13T20:04:06.691285151+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:04:06.691285151+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:04:06.691285151+00:00 stderr F + }, 2025-08-13T20:04:06.691285151+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:04:06.691285151+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:04:06.691285151+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:04:06.691285151+00:00 stderr F }, 2025-08-13T20:04:06.691285151+00:00 stderr F Version: "", 2025-08-13T20:04:06.691285151+00:00 stderr F ReadyReplicas: 0, 2025-08-13T20:04:06.691285151+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:04:06.691285151+00:00 stderr F } 2025-08-13T20:04:06.692658550+00:00 stderr F E0813 20:04:06.692545 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:35.324389024+00:00 stderr F E0813 20:04:35.323337 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.015702996+00:00 stderr F E0813 20:04:37.015359 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:37.032889778+00:00 stderr F I0813 20:04:37.032444 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:04:37.032889778+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:04:37.032889778+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:04:37.032889778+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:04:37.032889778+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:04:37.032889778+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:04:37.032889778+00:00 stderr F - { 2025-08-13T20:04:37.032889778+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:04:37.032889778+00:00 stderr F - Status: "False", 2025-08-13T20:04:37.032889778+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:04:37.032889778+00:00 stderr F - }, 2025-08-13T20:04:37.032889778+00:00 stderr F + { 2025-08-13T20:04:37.032889778+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:04:37.032889778+00:00 stderr F + Status: "True", 2025-08-13T20:04:37.032889778+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.015561332 +0000 UTC m=+196.220660786", 2025-08-13T20:04:37.032889778+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:04:37.032889778+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:04:37.032889778+00:00 stderr F + }, 2025-08-13T20:04:37.032889778+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:04:37.032889778+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:01:06 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...}, 2025-08-13T20:04:37.032889778+00:00 stderr F ... 
// 14 identical elements
2025-08-13T20:04:37.032889778+00:00 stderr F },
2025-08-13T20:04:37.032889778+00:00 stderr F Version: "",
2025-08-13T20:04:37.032889778+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:04:37.032889778+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:04:37.032889778+00:00 stderr F }
2025-08-13T20:04:37.040484775+00:00 stderr F E0813 20:04:37.040421 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:37.127144927+00:00 stderr F E0813 20:04:37.126951 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:37.130468242+00:00 stderr F I0813 20:04:37.130394 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:04:37.130468242+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:04:37.130468242+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:04:37.130468242+00:00 stderr F ... // 19 identical elements
2025-08-13T20:04:37.130468242+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:04:37.130468242+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:04:37.130468242+00:00 stderr F - {
2025-08-13T20:04:37.130468242+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:04:37.130468242+00:00 stderr F - Status: "False",
2025-08-13T20:04:37.130468242+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:04:37.130468242+00:00 stderr F - },
2025-08-13T20:04:37.130468242+00:00 stderr F + {
2025-08-13T20:04:37.130468242+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:04:37.130468242+00:00 stderr F + Status: "True",
2025-08-13T20:04:37.130468242+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.127039864 +0000 UTC m=+196.332139128",
2025-08-13T20:04:37.130468242+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:04:37.130468242+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:04:37.130468242+00:00 stderr F + },
2025-08-13T20:04:37.130468242+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:04:37.130468242+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:04:37.130468242+00:00 stderr F ... // 38 identical elements
2025-08-13T20:04:37.130468242+00:00 stderr F },
2025-08-13T20:04:37.130468242+00:00 stderr F Version: "",
2025-08-13T20:04:37.130468242+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:04:37.130468242+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:04:37.130468242+00:00 stderr F }
2025-08-13T20:04:37.141613431+00:00 stderr F E0813 20:04:37.141491 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:37.141957491+00:00 stderr F E0813 20:04:37.141922 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:37.142348962+00:00 stderr F E0813 20:04:37.142323 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:37.142905478+00:00 stderr F E0813 20:04:37.142812 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:37.145037209+00:00 stderr F I0813 20:04:37.144970 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:04:37.145037209+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:04:37.145037209+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:04:37.145037209+00:00 stderr F ... // 19 identical elements
2025-08-13T20:04:37.145037209+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:04:37.145037209+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:04:37.145037209+00:00 stderr F - {
2025-08-13T20:04:37.145037209+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:04:37.145037209+00:00 stderr F - Status: "False",
2025-08-13T20:04:37.145037209+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:04:37.145037209+00:00 stderr F - },
2025-08-13T20:04:37.145037209+00:00 stderr F + {
2025-08-13T20:04:37.145037209+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:04:37.145037209+00:00 stderr F + Status: "True",
2025-08-13T20:04:37.145037209+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.142403284 +0000 UTC m=+196.347502578",
2025-08-13T20:04:37.145037209+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:04:37.145037209+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:04:37.145037209+00:00 stderr F + },
2025-08-13T20:04:37.145037209+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:04:37.145037209+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:04:37.145037209+00:00 stderr F ... // 38 identical elements
2025-08-13T20:04:37.145037209+00:00 stderr F },
2025-08-13T20:04:37.145037209+00:00 stderr F Version: "",
2025-08-13T20:04:37.145037209+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:04:37.145037209+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:04:37.145037209+00:00 stderr F }
2025-08-13T20:04:37.145271676+00:00 stderr F I0813 20:04:37.145211 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:04:37.145271676+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:04:37.145271676+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:04:37.145271676+00:00 stderr F ... // 2 identical elements
2025-08-13T20:04:37.145271676+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:04:37.145271676+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:04:37.145271676+00:00 stderr F - {
2025-08-13T20:04:37.145271676+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:04:37.145271676+00:00 stderr F - Status: "False",
2025-08-13T20:04:37.145271676+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:04:37.145271676+00:00 stderr F - },
2025-08-13T20:04:37.145271676+00:00 stderr F + {
2025-08-13T20:04:37.145271676+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:04:37.145271676+00:00 stderr F + Status: "True",
2025-08-13T20:04:37.145271676+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.142885227 +0000 UTC m=+196.347984652",
2025-08-13T20:04:37.145271676+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:04:37.145271676+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:04:37.145271676+00:00 stderr F + },
2025-08-13T20:04:37.145271676+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:04:37.145271676+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:04:37.145271676+00:00 stderr F ... // 55 identical elements
2025-08-13T20:04:37.145271676+00:00 stderr F },
2025-08-13T20:04:37.145271676+00:00 stderr F Version: "",
2025-08-13T20:04:37.145271676+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:04:37.145271676+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:04:37.145271676+00:00 stderr F }
2025-08-13T20:04:37.145656917+00:00 stderr F I0813 20:04:37.145549 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:04:37.145656917+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:04:37.145656917+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:04:37.145656917+00:00 stderr F ... // 10 identical elements
2025-08-13T20:04:37.145656917+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:04:37.145656917+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:04:37.145656917+00:00 stderr F - {
2025-08-13T20:04:37.145656917+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:04:37.145656917+00:00 stderr F - Status: "False",
2025-08-13T20:04:37.145656917+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:04:37.145656917+00:00 stderr F - },
2025-08-13T20:04:37.145656917+00:00 stderr F + {
2025-08-13T20:04:37.145656917+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:04:37.145656917+00:00 stderr F + Status: "True",
2025-08-13T20:04:37.145656917+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.141552419 +0000 UTC m=+196.346651603",
2025-08-13T20:04:37.145656917+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:04:37.145656917+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:04:37.145656917+00:00 stderr F + },
2025-08-13T20:04:37.145656917+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:04:37.145656917+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:04:37.145656917+00:00 stderr F ... // 47 identical elements
2025-08-13T20:04:37.145656917+00:00 stderr F },
2025-08-13T20:04:37.145656917+00:00 stderr F Version: "",
2025-08-13T20:04:37.145656917+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:04:37.145656917+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:04:37.145656917+00:00 stderr F }
2025-08-13T20:04:37.148239511+00:00 stderr F E0813 20:04:37.148127 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:37.148386505+00:00 stderr F E0813 20:04:37.148301 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:37.174762180+00:00 stderr F E0813 20:04:37.174421 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:37.174762180+00:00 stderr F E0813 20:04:37.174495 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:37.222108896+00:00 stderr F I0813 20:04:37.219913 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:04:37.222108896+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:04:37.222108896+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:04:37.222108896+00:00 stderr F ... // 10 identical elements
2025-08-13T20:04:37.222108896+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:04:37.222108896+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:01:07 +0000 UTC"}, Reason: "FailedDeleteCustomRoutes", ...},
2025-08-13T20:04:37.222108896+00:00 stderr F - {
2025-08-13T20:04:37.222108896+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:04:37.222108896+00:00 stderr F - Status: "False",
2025-08-13T20:04:37.222108896+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:04:37.222108896+00:00 stderr F - },
2025-08-13T20:04:37.222108896+00:00 stderr F + {
2025-08-13T20:04:37.222108896+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:04:37.222108896+00:00 stderr F + Status: "True",
2025-08-13T20:04:37.222108896+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:04:37.174489833 +0000 UTC m=+196.379589187",
2025-08-13T20:04:37.222108896+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:04:37.222108896+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:04:37.222108896+00:00 stderr F + },
2025-08-13T20:04:37.222108896+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:04:37.222108896+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:04:37.222108896+00:00 stderr F ... // 47 identical elements
2025-08-13T20:04:37.222108896+00:00 stderr F },
2025-08-13T20:04:37.222108896+00:00 stderr F Version: "",
2025-08-13T20:04:37.222108896+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:04:37.222108896+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:04:37.222108896+00:00 stderr F }
2025-08-13T20:04:37.240222785+00:00 stderr F E0813 20:04:37.239626 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:40.159436499+00:00 stderr F W0813 20:04:40.159270 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:40.159436499+00:00 stderr F E0813 20:04:40.159417 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:15.939682414+00:00 stderr F W0813 20:05:15.939006 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:15.939682414+00:00 stderr F E0813 20:05:15.939606 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:58.894029135+00:00 stderr F I0813 20:05:58.893245 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:00.833503405+00:00 stderr F W0813 20:06:00.833311 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:06:00.833503405+00:00 stderr F E0813 20:06:00.833411 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:06:07.070995261+00:00 stderr F I0813 20:06:07.070544 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:09.506747891+00:00 stderr F I0813 20:06:09.505328 1 reflector.go:351] Caches populated for *v1.ConsolePlugin from github.com/openshift/client-go/console/informers/externalversions/factory.go:125
2025-08-13T20:06:10.008389275+00:00 stderr F I0813 20:06:10.008146 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:11.858828575+00:00 stderr F I0813 20:06:11.858679 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:14.574120261+00:00 stderr F I0813 20:06:14.571912 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:16.430994894+00:00 stderr F I0813 20:06:16.428932 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
2025-08-13T20:06:16.503308295+00:00 stderr F I0813 20:06:16.502856 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:19.745989123+00:00 stderr F I0813 20:06:19.745756 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:21.584488059+00:00 stderr F I0813 20:06:21.583599 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:24.265860302+00:00 stderr F I0813 20:06:24.265660 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:27.103264534+00:00 stderr F I0813 20:06:27.103083 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:30.102166851+00:00 stderr F I0813 20:06:30.101372 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:30.696617163+00:00 stderr F I0813 20:06:30.692061 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:32.583329881+00:00 stderr F I0813 20:06:32.577091 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:34.743077822+00:00 stderr F I0813 20:06:34.741619 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:37.031253456+00:00 stderr F I0813 20:06:37.029264 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:38.867945805+00:00 stderr F I0813 20:06:38.867053 1 reflector.go:351] Caches populated for *v1.ConsoleCLIDownload from github.com/openshift/client-go/console/informers/externalversions/factory.go:125
2025-08-13T20:06:39.042915812+00:00 stderr F I0813 20:06:39.041057 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:39.826960871+00:00 stderr F I0813 20:06:39.826296 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:40.339728663+00:00 stderr F I0813 20:06:40.339674 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:44.448014411+00:00 stderr F I0813 20:06:44.446671 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:46.224728850+00:00 stderr F W0813 20:06:46.224243 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:06:46.224728850+00:00 stderr F E0813 20:06:46.224304 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:06:48.474750951+00:00 stderr F I0813 20:06:48.471933 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:48.536239163+00:00 stderr F I0813 20:06:48.536187 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125
2025-08-13T20:06:50.189428322+00:00 stderr F I0813 20:06:50.188710 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
2025-08-13T20:06:51.116409329+00:00 stderr F I0813 20:06:51.116171 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:55.690126201+00:00 stderr F I0813 20:06:55.687737 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
2025-08-13T20:06:57.820930254+00:00 stderr F I0813 20:06:57.814209 1 reflector.go:351] Caches populated for operators.coreos.com/v1, Resource=olmconfigs from k8s.io/client-go/dynamic/dynamicinformer/informer.go:108
2025-08-13T20:07:04.256997701+00:00 stderr F I0813 20:07:04.256271 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:07:05.194607533+00:00 stderr F I0813 20:07:05.193920 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:07:29.199852862+00:00 stderr F W0813 20:07:29.198739 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:07:29.199852862+00:00 stderr F E0813 20:07:29.199674 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:08:11.536021817+00:00 stderr F I0813 20:08:11.534969 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125
2025-08-13T20:08:11.599255470+00:00 stderr F I0813 20:08:11.599129 1 base_controller.go:73] Caches are synced for HealthCheckController
2025-08-13T20:08:11.599615240+00:00 stderr F I0813 20:08:11.599512 1 base_controller.go:73] Caches are synced for ConsoleRouteController
2025-08-13T20:08:11.599682932+00:00 stderr F I0813 20:08:11.599608 1 base_controller.go:73] Caches are synced for DownloadsRouteController
2025-08-13T20:08:11.599682932+00:00 stderr F I0813 20:08:11.599612 1 base_controller.go:73] Caches are synced for ConsoleOperator
2025-08-13T20:08:11.600076293+00:00 stderr F I0813 20:08:11.599243 1 base_controller.go:110] Starting #1 worker of HealthCheckController controller ...
2025-08-13T20:08:11.600330041+00:00 stderr F I0813 20:08:11.599912 1 base_controller.go:73] Caches are synced for OAuthClientsController
2025-08-13T20:08:11.600393192+00:00 stderr F I0813 20:08:11.600358 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ...
2025-08-13T20:08:11.601047241+00:00 stderr F I0813 20:08:11.599595 1 base_controller.go:110] Starting #1 worker of ConsoleRouteController controller ...
2025-08-13T20:08:11.601182305+00:00 stderr F I0813 20:08:11.599678 1 base_controller.go:110] Starting #1 worker of DownloadsRouteController controller ...
2025-08-13T20:08:11.601873235+00:00 stderr F I0813 20:08:11.599678 1 base_controller.go:110] Starting #1 worker of ConsoleOperator controller ...
2025-08-13T20:08:11.609975107+00:00 stderr F I0813 20:08:11.609743 1 base_controller.go:73] Caches are synced for ConsoleCLIDownloadsController
2025-08-13T20:08:11.609975107+00:00 stderr F I0813 20:08:11.609956 1 base_controller.go:110] Starting #1 worker of ConsoleCLIDownloadsController controller ...
2025-08-13T20:08:11.611633065+00:00 stderr F I0813 20:08:11.611351 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling
2025-08-13T20:08:11.647366239+00:00 stderr F I0813 20:08:11.647226 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:11.647366239+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:11.647366239+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:11.647366239+00:00 stderr F ... // 7 identical elements
2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:08:11.647366239+00:00 stderr F - {
2025-08-13T20:08:11.647366239+00:00 stderr F - Type: "DownloadsCustomRouteSyncDegraded",
2025-08-13T20:08:11.647366239+00:00 stderr F - Status: "True",
2025-08-13T20:08:11.647366239+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:07 +0000 UTC",
2025-08-13T20:08:11.647366239+00:00 stderr F - Reason: "FailedDeleteCustomRoutes",
2025-08-13T20:08:11.647366239+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)",
2025-08-13T20:08:11.647366239+00:00 stderr F - },
2025-08-13T20:08:11.647366239+00:00 stderr F + {
2025-08-13T20:08:11.647366239+00:00 stderr F + Type: "DownloadsCustomRouteSyncDegraded",
2025-08-13T20:08:11.647366239+00:00 stderr F + Status: "False",
2025-08-13T20:08:11.647366239+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.642454678 +0000 UTC m=+410.847553882",
2025-08-13T20:08:11.647366239+00:00 stderr F + },
2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:11.647366239+00:00 stderr F - {
2025-08-13T20:08:11.647366239+00:00 stderr F - Type: "DownloadsCustomRouteSyncUpgradeable",
2025-08-13T20:08:11.647366239+00:00 stderr F - Status: "False",
2025-08-13T20:08:11.647366239+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:07 +0000 UTC",
2025-08-13T20:08:11.647366239+00:00 stderr F - Reason: "FailedDeleteCustomRoutes",
2025-08-13T20:08:11.647366239+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)",
2025-08-13T20:08:11.647366239+00:00 stderr F - },
2025-08-13T20:08:11.647366239+00:00 stderr F + {
2025-08-13T20:08:11.647366239+00:00 stderr F + Type: "DownloadsCustomRouteSyncUpgradeable",
2025-08-13T20:08:11.647366239+00:00 stderr F + Status: "True",
2025-08-13T20:08:11.647366239+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.642456158 +0000 UTC m=+410.847555162",
2025-08-13T20:08:11.647366239+00:00 stderr F + },
2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}},
2025-08-13T20:08:11.647366239+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:11.647366239+00:00 stderr F ... // 48 identical elements
2025-08-13T20:08:11.647366239+00:00 stderr F },
2025-08-13T20:08:11.647366239+00:00 stderr F Version: "",
2025-08-13T20:08:11.647366239+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:08:11.647366239+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:11.647366239+00:00 stderr F }
2025-08-13T20:08:11.647426461+00:00 stderr F I0813 20:08:11.647372 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:11.647426461+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:11.647426461+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:11.647426461+00:00 stderr F ... // 45 identical elements
2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:08:11.647426461+00:00 stderr F - {
2025-08-13T20:08:11.647426461+00:00 stderr F - Type: "ConsoleCustomRouteSyncDegraded",
2025-08-13T20:08:11.647426461+00:00 stderr F - Status: "True",
2025-08-13T20:08:11.647426461+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:06 +0000 UTC",
2025-08-13T20:08:11.647426461+00:00 stderr F - Reason: "FailedDeleteCustomRoutes",
2025-08-13T20:08:11.647426461+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)",
2025-08-13T20:08:11.647426461+00:00 stderr F - },
2025-08-13T20:08:11.647426461+00:00 stderr F + {
2025-08-13T20:08:11.647426461+00:00 stderr F + Type: "ConsoleCustomRouteSyncDegraded",
2025-08-13T20:08:11.647426461+00:00 stderr F + Status: "False",
2025-08-13T20:08:11.647426461+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.644182268 +0000 UTC m=+410.849281462",
2025-08-13T20:08:11.647426461+00:00 stderr F + },
2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}},
2025-08-13T20:08:11.647426461+00:00 stderr F - {
2025-08-13T20:08:11.647426461+00:00 stderr F - Type: "ConsoleCustomRouteSyncUpgradeable",
2025-08-13T20:08:11.647426461+00:00 stderr F - Status: "False",
2025-08-13T20:08:11.647426461+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:06 +0000 UTC",
2025-08-13T20:08:11.647426461+00:00 stderr F - Reason: "FailedDeleteCustomRoutes",
2025-08-13T20:08:11.647426461+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)",
2025-08-13T20:08:11.647426461+00:00 stderr F - },
2025-08-13T20:08:11.647426461+00:00 stderr F + {
2025-08-13T20:08:11.647426461+00:00 stderr F + Type: "ConsoleCustomRouteSyncUpgradeable",
2025-08-13T20:08:11.647426461+00:00 stderr F + Status: "True",
2025-08-13T20:08:11.647426461+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.644184158 +0000 UTC m=+410.849283172",
2025-08-13T20:08:11.647426461+00:00 stderr F + },
2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}},
2025-08-13T20:08:11.647426461+00:00 stderr F {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}},
2025-08-13T20:08:11.647426461+00:00 stderr F ... // 10 identical elements
2025-08-13T20:08:11.647426461+00:00 stderr F },
2025-08-13T20:08:11.647426461+00:00 stderr F Version: "",
2025-08-13T20:08:11.647426461+00:00 stderr F ReadyReplicas: 0,
2025-08-13T20:08:11.647426461+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:11.647426461+00:00 stderr F }
2025-08-13T20:08:11.684401991+00:00 stderr F I0813 20:08:11.684097 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:11.684401991+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:11.684401991+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:11.684401991+00:00 stderr F ... // 13 identical elements
2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:11.684401991+00:00 stderr F - {
2025-08-13T20:08:11.684401991+00:00 stderr F - Type: "SyncLoopRefreshProgressing",
2025-08-13T20:08:11.684401991+00:00 stderr F - Status: "True",
2025-08-13T20:08:11.684401991+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC",
2025-08-13T20:08:11.684401991+00:00 stderr F - Reason: "InProgress",
2025-08-13T20:08:11.684401991+00:00 stderr F - Message: "working toward version 4.16.0, 0 replicas available",
2025-08-13T20:08:11.684401991+00:00 stderr F - },
2025-08-13T20:08:11.684401991+00:00 stderr F + {
2025-08-13T20:08:11.684401991+00:00 stderr F + Type: "SyncLoopRefreshProgressing",
2025-08-13T20:08:11.684401991+00:00 stderr F + Status: "False",
2025-08-13T20:08:11.684401991+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.681641202 +0000 UTC m=+410.886740456",
2025-08-13T20:08:11.684401991+00:00 stderr F + },
2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}},
2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}},
2025-08-13T20:08:11.684401991+00:00 stderr F ... // 6 identical elements
2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "ServiceCASyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "TrustedCASyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:11.684401991+00:00 stderr F - {
2025-08-13T20:08:11.684401991+00:00 stderr F - Type: "DeploymentAvailable",
2025-08-13T20:08:11.684401991+00:00 stderr F - Status: "False",
2025-08-13T20:08:11.684401991+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC",
2025-08-13T20:08:11.684401991+00:00 stderr F - Reason: "InsufficientReplicas",
2025-08-13T20:08:11.684401991+00:00 stderr F - Message: "0 replicas available for console deployment",
2025-08-13T20:08:11.684401991+00:00 stderr F - },
2025-08-13T20:08:11.684401991+00:00 stderr F + {
2025-08-13T20:08:11.684401991+00:00 stderr F + Type: "DeploymentAvailable",
2025-08-13T20:08:11.684401991+00:00 stderr F + Status: "True",
2025-08-13T20:08:11.684401991+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.681643252 +0000 UTC m=+410.886742386",
2025-08-13T20:08:11.684401991+00:00 stderr F + },
2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "TrustedCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:11.684401991+00:00 stderr F {Type: "OAuthServingCertValidationProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:11.684401991+00:00 stderr F ... // 33 identical elements
2025-08-13T20:08:11.684401991+00:00 stderr F },
2025-08-13T20:08:11.684401991+00:00 stderr F Version: "",
2025-08-13T20:08:11.684401991+00:00 stderr F - ReadyReplicas: 0,
2025-08-13T20:08:11.684401991+00:00 stderr F + ReadyReplicas: 1,
2025-08-13T20:08:11.684401991+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:11.684401991+00:00 stderr F }
2025-08-13T20:08:11.726243231+00:00 stderr F I0813 20:08:11.724414 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io
console-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:08:11.738741229+00:00 stderr F I0813 20:08:11.737607 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:11.738741229+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:11.738741229+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:11.738741229+00:00 stderr F ... // 45 identical elements 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F - { 2025-08-13T20:08:11.738741229+00:00 stderr F - Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:08:11.738741229+00:00 stderr F - Status: "True", 2025-08-13T20:08:11.738741229+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:06 +0000 UTC", 2025-08-13T20:08:11.738741229+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.738741229+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:08:11.738741229+00:00 stderr F - }, 2025-08-13T20:08:11.738741229+00:00 stderr F + { 2025-08-13T20:08:11.738741229+00:00 stderr F + Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:08:11.738741229+00:00 stderr F + Status: "False", 2025-08-13T20:08:11.738741229+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.705518836 +0000 UTC m=+410.910617990", 2025-08-13T20:08:11.738741229+00:00 stderr F + }, 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleCustomRouteSyncProgressing", Status: 
"False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F - { 2025-08-13T20:08:11.738741229+00:00 stderr F - Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.738741229+00:00 stderr F - Status: "False", 2025-08-13T20:08:11.738741229+00:00 stderr F - LastTransitionTime: s"2025-08-13 20:01:06 +0000 UTC", 2025-08-13T20:08:11.738741229+00:00 stderr F - Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:08:11.738741229+00:00 stderr F - Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:08:11.738741229+00:00 stderr F - }, 2025-08-13T20:08:11.738741229+00:00 stderr F + { 2025-08-13T20:08:11.738741229+00:00 stderr F + Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:08:11.738741229+00:00 stderr F + Status: "True", 2025-08-13T20:08:11.738741229+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.705520676 +0000 UTC m=+410.910619680", 2025-08-13T20:08:11.738741229+00:00 stderr F + }, 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:08:11.738741229+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:11.738741229+00:00 stderr F }, 2025-08-13T20:08:11.738741229+00:00 stderr F Version: "", 2025-08-13T20:08:11.738741229+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:11.738741229+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:11.738741229+00:00 stderr F } 2025-08-13T20:08:11.765700862+00:00 stderr F I0813 20:08:11.764955 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'",Upgradeable message changed from "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)" to "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request 
(delete routes.route.openshift.io console-custom)" 2025-08-13T20:08:11.787871757+00:00 stderr F I0813 20:08:11.787363 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:08:11Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"RouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:08:11.798038939+00:00 stderr F I0813 20:08:11.797846 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:11.798038939+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:11.798038939+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:08:11.798038939+00:00 stderr F - { 
2025-08-13T20:08:11.798038939+00:00 stderr F - Type: "RouteHealthDegraded", 2025-08-13T20:08:11.798038939+00:00 stderr F - Status: "True", 2025-08-13T20:08:11.798038939+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:08:11.798038939+00:00 stderr F - Reason: "StatusError", 2025-08-13T20:08:11.798038939+00:00 stderr F - Message: "route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'", 2025-08-13T20:08:11.798038939+00:00 stderr F - }, 2025-08-13T20:08:11.798038939+00:00 stderr F + { 2025-08-13T20:08:11.798038939+00:00 stderr F + Type: "RouteHealthDegraded", 2025-08-13T20:08:11.798038939+00:00 stderr F + Status: "False", 2025-08-13T20:08:11.798038939+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.795014552 +0000 UTC m=+411.000113566", 2025-08-13T20:08:11.798038939+00:00 stderr F + }, 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:11.798038939+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "AuthStatusHandlerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:08:11.798038939+00:00 stderr F {Type: "AuthStatusHandlerProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:08:11.798038939+00:00 stderr F - { 2025-08-13T20:08:11.798038939+00:00 stderr F - Type: "RouteHealthAvailable", 2025-08-13T20:08:11.798038939+00:00 stderr F - Status: "False", 2025-08-13T20:08:11.798038939+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:08:11.798038939+00:00 stderr F - Reason: "StatusError", 2025-08-13T20:08:11.798038939+00:00 stderr F - Message: "route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'", 2025-08-13T20:08:11.798038939+00:00 stderr F - }, 2025-08-13T20:08:11.798038939+00:00 stderr F + { 2025-08-13T20:08:11.798038939+00:00 stderr F + Type: "RouteHealthAvailable", 2025-08-13T20:08:11.798038939+00:00 stderr F + Status: "True", 2025-08-13T20:08:11.798038939+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:11.795013432 +0000 UTC m=+411.000112516", 2025-08-13T20:08:11.798038939+00:00 stderr F + }, 2025-08-13T20:08:11.798038939+00:00 stderr F }, 2025-08-13T20:08:11.798038939+00:00 stderr F Version: "", 2025-08-13T20:08:11.798038939+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:11.798038939+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:11.798038939+00:00 stderr F } 2025-08-13T20:08:11.810129396+00:00 stderr F I0813 20:08:11.810025 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", 
UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False ("All is well"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" to "RouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" 2025-08-13T20:08:11.817676502+00:00 stderr F I0813 20:08:11.815021 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:08:11Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"RouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:08:11Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:08:11.829639345+00:00 stderr F E0813 20:08:11.828392 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 
2025-08-13T20:08:12.214932842+00:00 stderr F I0813 20:08:12.214692 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:08:12Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:08:11Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:08:12Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2025-08-13T20:08:12Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:08:12.240848715+00:00 stderr F I0813 20:08:12.240681 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Available changed from False to True ("All is well"),Upgradeable changed from False to True ("All is well") 2025-08-13T20:08:35.466638407+00:00 stderr F E0813 20:08:35.465262 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.009647895+00:00 stderr F E0813 20:08:37.009556 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:37.012931140+00:00 stderr F I0813 20:08:37.012550 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.012931140+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.012931140+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.012931140+00:00 stderr F ... // 43 identical elements 2025-08-13T20:08:37.012931140+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.012931140+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.012931140+00:00 stderr F - { 2025-08-13T20:08:37.012931140+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.012931140+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.012931140+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.012931140+00:00 stderr F - }, 2025-08-13T20:08:37.012931140+00:00 stderr F + { 2025-08-13T20:08:37.012931140+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.012931140+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.012931140+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.009736018 +0000 UTC m=+436.214835452", 2025-08-13T20:08:37.012931140+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.012931140+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.012931140+00:00 stderr F + }, 2025-08-13T20:08:37.012931140+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.012931140+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 
UTC"}}, 2025-08-13T20:08:37.012931140+00:00 stderr F ... // 14 identical elements 2025-08-13T20:08:37.012931140+00:00 stderr F }, 2025-08-13T20:08:37.012931140+00:00 stderr F Version: "", 2025-08-13T20:08:37.012931140+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.012931140+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.012931140+00:00 stderr F } 2025-08-13T20:08:37.014415062+00:00 stderr F E0813 20:08:37.014389 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.024056229+00:00 stderr F E0813 20:08:37.024023 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.025952823+00:00 stderr F I0813 20:08:37.025888 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.025952823+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.025952823+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.025952823+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:37.025952823+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.025952823+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.025952823+00:00 stderr F - { 2025-08-13T20:08:37.025952823+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.025952823+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.025952823+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.025952823+00:00 stderr F - }, 2025-08-13T20:08:37.025952823+00:00 stderr F + { 2025-08-13T20:08:37.025952823+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.025952823+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.025952823+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.02412334 +0000 UTC m=+436.229222504", 2025-08-13T20:08:37.025952823+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.025952823+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.025952823+00:00 stderr F + }, 2025-08-13T20:08:37.025952823+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.025952823+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.025952823+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.025952823+00:00 stderr F }, 2025-08-13T20:08:37.025952823+00:00 stderr F Version: "", 2025-08-13T20:08:37.025952823+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.025952823+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.025952823+00:00 stderr F } 2025-08-13T20:08:37.027692853+00:00 stderr F E0813 20:08:37.027653 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.039249144+00:00 stderr F E0813 20:08:37.039190 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.041532590+00:00 stderr F I0813 20:08:37.041481 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.041532590+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.041532590+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.041532590+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:37.041532590+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.041532590+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.041532590+00:00 stderr F - { 2025-08-13T20:08:37.041532590+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.041532590+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.041532590+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.041532590+00:00 stderr F - }, 2025-08-13T20:08:37.041532590+00:00 stderr F + { 2025-08-13T20:08:37.041532590+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.041532590+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.041532590+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.039340387 +0000 UTC m=+436.244439561", 2025-08-13T20:08:37.041532590+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.041532590+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.041532590+00:00 stderr F + }, 2025-08-13T20:08:37.041532590+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.041532590+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.041532590+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.041532590+00:00 stderr F }, 2025-08-13T20:08:37.041532590+00:00 stderr F Version: "", 2025-08-13T20:08:37.041532590+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.041532590+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.041532590+00:00 stderr F } 2025-08-13T20:08:37.043141576+00:00 stderr F E0813 20:08:37.043110 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.068983347+00:00 stderr F E0813 20:08:37.068842 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.071530750+00:00 stderr F I0813 20:08:37.071381 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.071530750+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.071530750+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.071530750+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:37.071530750+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.071530750+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.071530750+00:00 stderr F - { 2025-08-13T20:08:37.071530750+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.071530750+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.071530750+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.071530750+00:00 stderr F - }, 2025-08-13T20:08:37.071530750+00:00 stderr F + { 2025-08-13T20:08:37.071530750+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.071530750+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.071530750+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.068923635 +0000 UTC m=+436.274022979", 2025-08-13T20:08:37.071530750+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.071530750+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.071530750+00:00 stderr F + }, 2025-08-13T20:08:37.071530750+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.071530750+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.071530750+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.071530750+00:00 stderr F }, 2025-08-13T20:08:37.071530750+00:00 stderr F Version: "", 2025-08-13T20:08:37.071530750+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.071530750+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.071530750+00:00 stderr F } 2025-08-13T20:08:37.074563647+00:00 stderr F E0813 20:08:37.074482 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.118723633+00:00 stderr F E0813 20:08:37.118547 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.120936056+00:00 stderr F I0813 20:08:37.120846 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.120936056+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.120936056+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.120936056+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:37.120936056+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.120936056+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.120936056+00:00 stderr F - { 2025-08-13T20:08:37.120936056+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.120936056+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.120936056+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.120936056+00:00 stderr F - }, 2025-08-13T20:08:37.120936056+00:00 stderr F + { 2025-08-13T20:08:37.120936056+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.120936056+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.120936056+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.118607149 +0000 UTC m=+436.323706353", 2025-08-13T20:08:37.120936056+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.120936056+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.120936056+00:00 stderr F + }, 2025-08-13T20:08:37.120936056+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.120936056+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.120936056+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.120936056+00:00 stderr F }, 2025-08-13T20:08:37.120936056+00:00 stderr F Version: "", 2025-08-13T20:08:37.120936056+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.120936056+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.120936056+00:00 stderr F } 2025-08-13T20:08:37.122380188+00:00 stderr F E0813 20:08:37.122297 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.126693301+00:00 stderr F E0813 20:08:37.126620 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.128860763+00:00 stderr F E0813 20:08:37.128828 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.129044779+00:00 stderr F I0813 20:08:37.128986 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.129044779+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.129044779+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.129044779+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:37.129044779+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.129044779+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:37.129044779+00:00 stderr F - { 2025-08-13T20:08:37.129044779+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.129044779+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.129044779+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:37.129044779+00:00 stderr F - }, 2025-08-13T20:08:37.129044779+00:00 stderr F + { 2025-08-13T20:08:37.129044779+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.129044779+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.129044779+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.126777714 +0000 UTC m=+436.331918699", 2025-08-13T20:08:37.129044779+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.129044779+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:37.129044779+00:00 stderr F + }, 2025-08-13T20:08:37.129044779+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.129044779+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:37.129044779+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:37.129044779+00:00 stderr F }, 2025-08-13T20:08:37.129044779+00:00 stderr F Version: "", 2025-08-13T20:08:37.129044779+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.129044779+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.129044779+00:00 stderr F } 2025-08-13T20:08:37.129133261+00:00 stderr F E0813 20:08:37.129080 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.130269294+00:00 stderr F E0813 20:08:37.130192 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.131451618+00:00 stderr F I0813 20:08:37.131424 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.131451618+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.131451618+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.131451618+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:37.131451618+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.131451618+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.131451618+00:00 stderr F - { 2025-08-13T20:08:37.131451618+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.131451618+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.131451618+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.131451618+00:00 stderr F - }, 2025-08-13T20:08:37.131451618+00:00 stderr F + { 2025-08-13T20:08:37.131451618+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.131451618+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.131451618+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.128947986 +0000 UTC m=+436.334047360", 2025-08-13T20:08:37.131451618+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.131451618+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:37.131451618+00:00 stderr F + }, 2025-08-13T20:08:37.131451618+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.131451618+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.131451618+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:37.131451618+00:00 stderr F }, 2025-08-13T20:08:37.131451618+00:00 stderr F Version: "", 2025-08-13T20:08:37.131451618+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.131451618+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.131451618+00:00 stderr F } 2025-08-13T20:08:37.132122847+00:00 stderr F E0813 20:08:37.131759 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.132520558+00:00 stderr F E0813 20:08:37.132497 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.134950188+00:00 stderr F E0813 20:08:37.134854 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.135505744+00:00 stderr F I0813 20:08:37.135447 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.135505744+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.135505744+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.135505744+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:37.135505744+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.135505744+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.135505744+00:00 stderr F - { 2025-08-13T20:08:37.135505744+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.135505744+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.135505744+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.135505744+00:00 stderr F - }, 2025-08-13T20:08:37.135505744+00:00 stderr F + { 2025-08-13T20:08:37.135505744+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.135505744+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.135505744+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.131959352 +0000 UTC m=+436.337599672", 2025-08-13T20:08:37.135505744+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.135505744+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:37.135505744+00:00 stderr F + }, 2025-08-13T20:08:37.135505744+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.135505744+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.135505744+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.135505744+00:00 stderr F }, 2025-08-13T20:08:37.135505744+00:00 stderr F Version: "", 2025-08-13T20:08:37.135505744+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.135505744+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.135505744+00:00 stderr F } 2025-08-13T20:08:37.136976046+00:00 stderr F E0813 20:08:37.136520 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.136976046+00:00 stderr F I0813 20:08:37.136883 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.136976046+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.136976046+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.136976046+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:37.136976046+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.136976046+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.136976046+00:00 stderr F - { 2025-08-13T20:08:37.136976046+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.136976046+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.136976046+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.136976046+00:00 stderr F - }, 2025-08-13T20:08:37.136976046+00:00 stderr F + { 2025-08-13T20:08:37.136976046+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.136976046+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.136976046+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.129233434 +0000 UTC m=+436.334332668", 2025-08-13T20:08:37.136976046+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.136976046+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:37.136976046+00:00 stderr F + }, 2025-08-13T20:08:37.136976046+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.136976046+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.136976046+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.136976046+00:00 stderr F }, 2025-08-13T20:08:37.136976046+00:00 stderr F Version: "", 2025-08-13T20:08:37.136976046+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.136976046+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.136976046+00:00 stderr F } 2025-08-13T20:08:37.137052298+00:00 stderr F I0813 20:08:37.137032 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.137052298+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.137052298+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.137052298+00:00 stderr F ... // 10 identical elements 2025-08-13T20:08:37.137052298+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.137052298+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.137052298+00:00 stderr F - { 2025-08-13T20:08:37.137052298+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.137052298+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.137052298+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.137052298+00:00 stderr F - }, 2025-08-13T20:08:37.137052298+00:00 stderr F + { 2025-08-13T20:08:37.137052298+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.137052298+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.137052298+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.13256414 +0000 UTC m=+436.337663404", 2025-08-13T20:08:37.137052298+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.137052298+00:00 stderr F + Message: `Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:37.137052298+00:00 stderr F + }, 2025-08-13T20:08:37.137052298+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.137052298+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.137052298+00:00 stderr F ... // 47 identical elements 2025-08-13T20:08:37.137052298+00:00 stderr F }, 2025-08-13T20:08:37.137052298+00:00 stderr F Version: "", 2025-08-13T20:08:37.137052298+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.137052298+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.137052298+00:00 stderr F } 2025-08-13T20:08:37.138667285+00:00 stderr F E0813 20:08:37.138621 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.138667285+00:00 stderr F E0813 20:08:37.138645 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.138865790+00:00 stderr F E0813 20:08:37.138778 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.140730034+00:00 stderr F I0813 20:08:37.140654 1 helpers.go:201] Operator status changed: 
&v1.OperatorStatus{ 2025-08-13T20:08:37.140730034+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.140730034+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.140730034+00:00 stderr F ... // 2 identical elements 2025-08-13T20:08:37.140730034+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.140730034+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:37.140730034+00:00 stderr F - { 2025-08-13T20:08:37.140730034+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.140730034+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.140730034+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:37.140730034+00:00 stderr F - }, 2025-08-13T20:08:37.140730034+00:00 stderr F + { 2025-08-13T20:08:37.140730034+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.140730034+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.140730034+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.138652834 +0000 UTC m=+436.343751888", 2025-08-13T20:08:37.140730034+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.140730034+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:37.140730034+00:00 stderr F + }, 2025-08-13T20:08:37.140730034+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.140730034+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:37.140730034+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:37.140730034+00:00 stderr F }, 2025-08-13T20:08:37.140730034+00:00 stderr F Version: "", 2025-08-13T20:08:37.140730034+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.140730034+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.140730034+00:00 stderr F } 2025-08-13T20:08:37.141507606+00:00 stderr F E0813 20:08:37.141454 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.145027597+00:00 stderr F E0813 20:08:37.144866 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.147193069+00:00 stderr F I0813 20:08:37.147113 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.147193069+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.147193069+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.147193069+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:37.147193069+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.147193069+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.147193069+00:00 stderr F - { 2025-08-13T20:08:37.147193069+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.147193069+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.147193069+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.147193069+00:00 stderr F - }, 2025-08-13T20:08:37.147193069+00:00 stderr F + { 2025-08-13T20:08:37.147193069+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.147193069+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.147193069+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.141502156 +0000 UTC m=+436.346601450", 2025-08-13T20:08:37.147193069+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.147193069+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:37.147193069+00:00 stderr F + }, 2025-08-13T20:08:37.147193069+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.147193069+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.147193069+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:37.147193069+00:00 stderr F }, 2025-08-13T20:08:37.147193069+00:00 stderr F Version: "", 2025-08-13T20:08:37.147193069+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.147193069+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.147193069+00:00 stderr F } 2025-08-13T20:08:37.147287632+00:00 stderr F E0813 20:08:37.147229 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.147414245+00:00 stderr F E0813 20:08:37.147333 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.149523346+00:00 stderr F I0813 20:08:37.149277 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.149523346+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.149523346+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.149523346+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:37.149523346+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.149523346+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.149523346+00:00 stderr F - { 2025-08-13T20:08:37.149523346+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.149523346+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.149523346+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.149523346+00:00 stderr F - }, 2025-08-13T20:08:37.149523346+00:00 stderr F + { 2025-08-13T20:08:37.149523346+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.149523346+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.149523346+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.144932974 +0000 UTC m=+436.350032228", 2025-08-13T20:08:37.149523346+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.149523346+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:37.149523346+00:00 stderr F + }, 2025-08-13T20:08:37.149523346+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.149523346+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.149523346+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.149523346+00:00 stderr F }, 2025-08-13T20:08:37.149523346+00:00 stderr F Version: "", 2025-08-13T20:08:37.149523346+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.149523346+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.149523346+00:00 stderr F } 2025-08-13T20:08:37.149849525+00:00 stderr F I0813 20:08:37.149707 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.149849525+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.149849525+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.149849525+00:00 stderr F ... // 19 identical elements 2025-08-13T20:08:37.149849525+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.149849525+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.149849525+00:00 stderr F - { 2025-08-13T20:08:37.149849525+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.149849525+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.149849525+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.149849525+00:00 stderr F - }, 2025-08-13T20:08:37.149849525+00:00 stderr F + { 2025-08-13T20:08:37.149849525+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.149849525+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.149849525+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.147289752 +0000 UTC m=+436.352388986", 2025-08-13T20:08:37.149849525+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.149849525+00:00 stderr F + Message: `Get 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:37.149849525+00:00 stderr F + }, 2025-08-13T20:08:37.149849525+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.149849525+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.149849525+00:00 stderr F ... // 38 identical elements 2025-08-13T20:08:37.149849525+00:00 stderr F }, 2025-08-13T20:08:37.149849525+00:00 stderr F Version: "", 2025-08-13T20:08:37.149849525+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.149849525+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.149849525+00:00 stderr F } 2025-08-13T20:08:37.150974307+00:00 stderr F I0813 20:08:37.150709 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.150974307+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.150974307+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.150974307+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:37.150974307+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.150974307+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.150974307+00:00 stderr F - { 2025-08-13T20:08:37.150974307+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.150974307+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.150974307+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.150974307+00:00 stderr F - }, 2025-08-13T20:08:37.150974307+00:00 stderr F + { 2025-08-13T20:08:37.150974307+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.150974307+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.150974307+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.147427256 +0000 UTC m=+436.352526510", 2025-08-13T20:08:37.150974307+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.150974307+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:37.150974307+00:00 stderr F + }, 2025-08-13T20:08:37.150974307+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.150974307+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.150974307+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:37.150974307+00:00 stderr F }, 2025-08-13T20:08:37.150974307+00:00 stderr F Version: "", 2025-08-13T20:08:37.150974307+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.150974307+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.150974307+00:00 stderr F } 2025-08-13T20:08:37.205003486+00:00 stderr F E0813 20:08:37.204868 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.217414432+00:00 stderr F I0813 20:08:37.214036 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.217414432+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.217414432+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.217414432+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:37.217414432+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:37.217414432+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:37.217414432+00:00 stderr F - { 2025-08-13T20:08:37.217414432+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.217414432+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.217414432+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:37.217414432+00:00 stderr F - }, 2025-08-13T20:08:37.217414432+00:00 stderr F + { 2025-08-13T20:08:37.217414432+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:37.217414432+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.217414432+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.204941165 +0000 UTC m=+436.410040379", 2025-08-13T20:08:37.217414432+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:37.217414432+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:37.217414432+00:00 stderr F + }, 2025-08-13T20:08:37.217414432+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:37.217414432+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.217414432+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:37.217414432+00:00 stderr F }, 2025-08-13T20:08:37.217414432+00:00 stderr F Version: "", 2025-08-13T20:08:37.217414432+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.217414432+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.217414432+00:00 stderr F } 2025-08-13T20:08:37.224473165+00:00 stderr F E0813 20:08:37.224400 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.241217205+00:00 stderr F E0813 20:08:37.241152 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.243916362+00:00 stderr F I0813 20:08:37.243773 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.243916362+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.243916362+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.243916362+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:37.243916362+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.243916362+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:37.243916362+00:00 stderr F - { 2025-08-13T20:08:37.243916362+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.243916362+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.243916362+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:37.243916362+00:00 stderr F - }, 2025-08-13T20:08:37.243916362+00:00 stderr F + { 2025-08-13T20:08:37.243916362+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:37.243916362+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.243916362+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.241369539 +0000 UTC m=+436.446468853", 2025-08-13T20:08:37.243916362+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.243916362+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:37.243916362+00:00 stderr F + }, 2025-08-13T20:08:37.243916362+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:37.243916362+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:37.243916362+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:37.243916362+00:00 stderr F }, 2025-08-13T20:08:37.243916362+00:00 stderr F Version: "", 2025-08-13T20:08:37.243916362+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.243916362+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.243916362+00:00 stderr F } 2025-08-13T20:08:37.416690046+00:00 stderr F E0813 20:08:37.416629 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.431698516+00:00 stderr F E0813 20:08:37.431585 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.434256229+00:00 stderr F I0813 20:08:37.434128 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.434256229+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.434256229+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.434256229+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:37.434256229+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:37.434256229+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:37.434256229+00:00 stderr F - { 2025-08-13T20:08:37.434256229+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.434256229+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.434256229+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:37.434256229+00:00 stderr F - }, 2025-08-13T20:08:37.434256229+00:00 stderr F + { 2025-08-13T20:08:37.434256229+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:37.434256229+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.434256229+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.431642004 +0000 UTC m=+436.636741178", 2025-08-13T20:08:37.434256229+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.434256229+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:37.434256229+00:00 stderr F + }, 2025-08-13T20:08:37.434256229+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:37.434256229+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.434256229+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:37.434256229+00:00 stderr F }, 2025-08-13T20:08:37.434256229+00:00 stderr F Version: "", 2025-08-13T20:08:37.434256229+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.434256229+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.434256229+00:00 stderr F } 2025-08-13T20:08:37.615607029+00:00 stderr F E0813 20:08:37.615445 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.629245430+00:00 stderr F E0813 20:08:37.629132 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.632642387+00:00 stderr F I0813 20:08:37.632497 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.632642387+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.632642387+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.632642387+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:37.632642387+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.632642387+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.632642387+00:00 stderr F - { 2025-08-13T20:08:37.632642387+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.632642387+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.632642387+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.632642387+00:00 stderr F - }, 2025-08-13T20:08:37.632642387+00:00 stderr F + { 2025-08-13T20:08:37.632642387+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.632642387+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.632642387+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.629412355 +0000 UTC m=+436.834511619", 2025-08-13T20:08:37.632642387+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.632642387+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:37.632642387+00:00 stderr F + }, 2025-08-13T20:08:37.632642387+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.632642387+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.632642387+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.632642387+00:00 stderr F }, 2025-08-13T20:08:37.632642387+00:00 stderr F Version: "", 2025-08-13T20:08:37.632642387+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.632642387+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.632642387+00:00 stderr F } 2025-08-13T20:08:37.817262301+00:00 stderr F E0813 20:08:37.816548 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.832922820+00:00 stderr F E0813 20:08:37.832497 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.838182340+00:00 stderr F I0813 20:08:37.835045 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:37.838182340+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:37.838182340+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:37.838182340+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:37.838182340+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:37.838182340+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:37.838182340+00:00 stderr F - { 2025-08-13T20:08:37.838182340+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:37.838182340+00:00 stderr F - Status: "False", 2025-08-13T20:08:37.838182340+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:37.838182340+00:00 stderr F - }, 2025-08-13T20:08:37.838182340+00:00 stderr F + { 2025-08-13T20:08:37.838182340+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:37.838182340+00:00 stderr F + Status: "True", 2025-08-13T20:08:37.838182340+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:37.83257012 +0000 UTC m=+437.037669464", 2025-08-13T20:08:37.838182340+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:37.838182340+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:37.838182340+00:00 stderr F + }, 2025-08-13T20:08:37.838182340+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:37.838182340+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:37.838182340+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:37.838182340+00:00 stderr F }, 2025-08-13T20:08:37.838182340+00:00 stderr F Version: "", 2025-08-13T20:08:37.838182340+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:37.838182340+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:37.838182340+00:00 stderr F } 2025-08-13T20:08:38.017333607+00:00 stderr F E0813 20:08:38.016268 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.028539988+00:00 stderr F E0813 20:08:38.028429 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.030472094+00:00 stderr F I0813 20:08:38.030194 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.030472094+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.030472094+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.030472094+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:38.030472094+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:38.030472094+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:38.030472094+00:00 stderr F - { 2025-08-13T20:08:38.030472094+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:38.030472094+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.030472094+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:38.030472094+00:00 stderr F - }, 2025-08-13T20:08:38.030472094+00:00 stderr F + { 2025-08-13T20:08:38.030472094+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:38.030472094+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.030472094+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.028485107 +0000 UTC m=+437.233584351", 2025-08-13T20:08:38.030472094+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:38.030472094+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:38.030472094+00:00 stderr F + }, 2025-08-13T20:08:38.030472094+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:38.030472094+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:38.030472094+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:38.030472094+00:00 stderr F }, 2025-08-13T20:08:38.030472094+00:00 stderr F Version: "", 2025-08-13T20:08:38.030472094+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.030472094+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.030472094+00:00 stderr F } 2025-08-13T20:08:38.214811839+00:00 stderr F E0813 20:08:38.214667 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.377653358+00:00 stderr F E0813 20:08:38.377601 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.379842010+00:00 stderr F I0813 20:08:38.379755 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.379842010+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.379842010+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.379842010+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:08:38.379842010+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:08:38.379842010+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:08:38.379842010+00:00 stderr F - { 2025-08-13T20:08:38.379842010+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:38.379842010+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.379842010+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:08:38.379842010+00:00 stderr F - }, 2025-08-13T20:08:38.379842010+00:00 stderr F + { 2025-08-13T20:08:38.379842010+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:08:38.379842010+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.379842010+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.37774217 +0000 UTC m=+437.582841404", 2025-08-13T20:08:38.379842010+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:08:38.379842010+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:08:38.379842010+00:00 stderr F + }, 2025-08-13T20:08:38.379842010+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:08:38.379842010+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:38.379842010+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:08:38.379842010+00:00 stderr F }, 2025-08-13T20:08:38.379842010+00:00 stderr F Version: "", 2025-08-13T20:08:38.379842010+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.379842010+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.379842010+00:00 stderr F } 2025-08-13T20:08:38.420703232+00:00 stderr F I0813 20:08:38.420561 1 request.go:697] Waited for 1.16981886s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:08:38.422954186+00:00 stderr F E0813 20:08:38.422867 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.446282985+00:00 stderr F E0813 20:08:38.446243 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.448131668+00:00 stderr F I0813 20:08:38.448104 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.448131668+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.448131668+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.448131668+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:08:38.448131668+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:38.448131668+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:08:38.448131668+00:00 stderr F - { 2025-08-13T20:08:38.448131668+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:38.448131668+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.448131668+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:08:38.448131668+00:00 stderr F - }, 2025-08-13T20:08:38.448131668+00:00 stderr F + { 2025-08-13T20:08:38.448131668+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:08:38.448131668+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.448131668+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.446349037 +0000 UTC m=+437.651448241", 2025-08-13T20:08:38.448131668+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:38.448131668+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:08:38.448131668+00:00 stderr F + }, 2025-08-13T20:08:38.448131668+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:08:38.448131668+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:08:38.448131668+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:08:38.448131668+00:00 stderr F }, 2025-08-13T20:08:38.448131668+00:00 stderr F Version: "", 2025-08-13T20:08:38.448131668+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.448131668+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.448131668+00:00 stderr F } 2025-08-13T20:08:38.616531016+00:00 stderr F E0813 20:08:38.616131 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.641005418+00:00 stderr F E0813 20:08:38.640926 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.642727708+00:00 stderr F I0813 20:08:38.642666 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.642727708+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.642727708+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.642727708+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:38.642727708+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:38.642727708+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:38.642727708+00:00 stderr F - { 2025-08-13T20:08:38.642727708+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:38.642727708+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.642727708+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:38.642727708+00:00 stderr F - }, 2025-08-13T20:08:38.642727708+00:00 stderr F + { 2025-08-13T20:08:38.642727708+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:38.642727708+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.642727708+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.640990858 +0000 UTC m=+437.846090092", 2025-08-13T20:08:38.642727708+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:38.642727708+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:38.642727708+00:00 stderr F + }, 2025-08-13T20:08:38.642727708+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:38.642727708+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:38.642727708+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:38.642727708+00:00 stderr F }, 2025-08-13T20:08:38.642727708+00:00 stderr F Version: "", 2025-08-13T20:08:38.642727708+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.642727708+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.642727708+00:00 stderr F } 2025-08-13T20:08:38.815355887+00:00 stderr F E0813 20:08:38.815104 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.838994335+00:00 stderr F E0813 20:08:38.838944 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.841204708+00:00 stderr F I0813 20:08:38.841152 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:38.841204708+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:38.841204708+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:38.841204708+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:38.841204708+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:38.841204708+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:38.841204708+00:00 stderr F - { 2025-08-13T20:08:38.841204708+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:38.841204708+00:00 stderr F - Status: "False", 2025-08-13T20:08:38.841204708+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:38.841204708+00:00 stderr F - }, 2025-08-13T20:08:38.841204708+00:00 stderr F + { 2025-08-13T20:08:38.841204708+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:38.841204708+00:00 stderr F + Status: "True", 2025-08-13T20:08:38.841204708+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:38.839076657 +0000 UTC m=+438.044175821", 2025-08-13T20:08:38.841204708+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:38.841204708+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:38.841204708+00:00 stderr F + }, 2025-08-13T20:08:38.841204708+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:38.841204708+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:38.841204708+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:38.841204708+00:00 stderr F }, 2025-08-13T20:08:38.841204708+00:00 stderr F Version: "", 2025-08-13T20:08:38.841204708+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:38.841204708+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:38.841204708+00:00 stderr F } 2025-08-13T20:08:39.020182970+00:00 stderr F E0813 20:08:39.019603 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.043126507+00:00 stderr F E0813 20:08:39.043075 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.044739084+00:00 stderr F I0813 20:08:39.044699 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:39.044739084+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:39.044739084+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:39.044739084+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:39.044739084+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:39.044739084+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:39.044739084+00:00 stderr F - { 2025-08-13T20:08:39.044739084+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:39.044739084+00:00 stderr F - Status: "False", 2025-08-13T20:08:39.044739084+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:39.044739084+00:00 stderr F - }, 2025-08-13T20:08:39.044739084+00:00 stderr F + { 2025-08-13T20:08:39.044739084+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:39.044739084+00:00 stderr F + Status: "True", 2025-08-13T20:08:39.044739084+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.04320704 +0000 UTC m=+438.248306194", 2025-08-13T20:08:39.044739084+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:39.044739084+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:39.044739084+00:00 stderr F + }, 2025-08-13T20:08:39.044739084+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:39.044739084+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:39.044739084+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:39.044739084+00:00 stderr F }, 2025-08-13T20:08:39.044739084+00:00 stderr F Version: "", 2025-08-13T20:08:39.044739084+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:39.044739084+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:39.044739084+00:00 stderr F } 2025-08-13T20:08:39.216042325+00:00 stderr F E0813 20:08:39.215998 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.240292330+00:00 stderr F E0813 20:08:39.238950 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.241651919+00:00 stderr F I0813 20:08:39.241606 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:39.241651919+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:39.241651919+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:39.241651919+00:00 stderr F ... 
// 10 identical elements
2025-08-13T20:08:39.241651919+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:39.241651919+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:39.241651919+00:00 stderr F - {
2025-08-13T20:08:39.241651919+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:39.241651919+00:00 stderr F - Status: "False",
2025-08-13T20:08:39.241651919+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:39.241651919+00:00 stderr F - },
2025-08-13T20:08:39.241651919+00:00 stderr F + {
2025-08-13T20:08:39.241651919+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:39.241651919+00:00 stderr F + Status: "True",
2025-08-13T20:08:39.241651919+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.239087206 +0000 UTC m=+438.444186720",
2025-08-13T20:08:39.241651919+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:39.241651919+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:08:39.241651919+00:00 stderr F + },
2025-08-13T20:08:39.241651919+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:39.241651919+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:39.241651919+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:39.241651919+00:00 stderr F },
2025-08-13T20:08:39.241651919+00:00 stderr F Version: "",
2025-08-13T20:08:39.241651919+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:39.241651919+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:39.241651919+00:00 stderr F }
2025-08-13T20:08:39.415781821+00:00 stderr F E0813 20:08:39.415305 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.614970782+00:00 stderr F I0813 20:08:39.613913 1 request.go:697] Waited for 1.165510665s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status
2025-08-13T20:08:39.615019093+00:00 stderr F E0813 20:08:39.614983 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.666494069+00:00 stderr F E0813 20:08:39.666435 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.672193102+00:00 stderr F I0813 20:08:39.671638 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:39.672193102+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:39.672193102+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:39.672193102+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:08:39.672193102+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:39.672193102+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:08:39.672193102+00:00 stderr F - {
2025-08-13T20:08:39.672193102+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:39.672193102+00:00 stderr F - Status: "False",
2025-08-13T20:08:39.672193102+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:08:39.672193102+00:00 stderr F - },
2025-08-13T20:08:39.672193102+00:00 stderr F + {
2025-08-13T20:08:39.672193102+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:39.672193102+00:00 stderr F + Status: "True",
2025-08-13T20:08:39.672193102+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.666922771 +0000 UTC m=+438.872022185",
2025-08-13T20:08:39.672193102+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:39.672193102+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:08:39.672193102+00:00 stderr F + },
2025-08-13T20:08:39.672193102+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:39.672193102+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:08:39.672193102+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:08:39.672193102+00:00 stderr F },
2025-08-13T20:08:39.672193102+00:00 stderr F Version: "",
2025-08-13T20:08:39.672193102+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:39.672193102+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:39.672193102+00:00 stderr F }
2025-08-13T20:08:39.738606097+00:00 stderr F E0813 20:08:39.738379 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.740856061+00:00 stderr F I0813 20:08:39.740551 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:39.740856061+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:39.740856061+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:39.740856061+00:00 stderr F ...
// 43 identical elements
2025-08-13T20:08:39.740856061+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:08:39.740856061+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:08:39.740856061+00:00 stderr F - {
2025-08-13T20:08:39.740856061+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:08:39.740856061+00:00 stderr F - Status: "False",
2025-08-13T20:08:39.740856061+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:08:39.740856061+00:00 stderr F - },
2025-08-13T20:08:39.740856061+00:00 stderr F + {
2025-08-13T20:08:39.740856061+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:08:39.740856061+00:00 stderr F + Status: "True",
2025-08-13T20:08:39.740856061+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.738432722 +0000 UTC m=+438.943531886",
2025-08-13T20:08:39.740856061+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:08:39.740856061+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:08:39.740856061+00:00 stderr F + },
2025-08-13T20:08:39.740856061+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:08:39.740856061+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:39.740856061+00:00 stderr F ...
// 14 identical elements
2025-08-13T20:08:39.740856061+00:00 stderr F },
2025-08-13T20:08:39.740856061+00:00 stderr F Version: "",
2025-08-13T20:08:39.740856061+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:39.740856061+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:39.740856061+00:00 stderr F }
2025-08-13T20:08:39.815139801+00:00 stderr F E0813 20:08:39.815043 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.861647434+00:00 stderr F E0813 20:08:39.861102 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.863310732+00:00 stderr F I0813 20:08:39.863196 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:39.863310732+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:39.863310732+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:39.863310732+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:39.863310732+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:39.863310732+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:39.863310732+00:00 stderr F - {
2025-08-13T20:08:39.863310732+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:39.863310732+00:00 stderr F - Status: "False",
2025-08-13T20:08:39.863310732+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:39.863310732+00:00 stderr F - },
2025-08-13T20:08:39.863310732+00:00 stderr F + {
2025-08-13T20:08:39.863310732+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:39.863310732+00:00 stderr F + Status: "True",
2025-08-13T20:08:39.863310732+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:39.861184181 +0000 UTC m=+439.066283455",
2025-08-13T20:08:39.863310732+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:39.863310732+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:08:39.863310732+00:00 stderr F + },
2025-08-13T20:08:39.863310732+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:39.863310732+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:39.863310732+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:39.863310732+00:00 stderr F },
2025-08-13T20:08:39.863310732+00:00 stderr F Version: "",
2025-08-13T20:08:39.863310732+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:39.863310732+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:39.863310732+00:00 stderr F }
2025-08-13T20:08:40.023122134+00:00 stderr F E0813 20:08:40.022978 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.068376981+00:00 stderr F E0813 20:08:40.068301 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.070879393+00:00 stderr F I0813 20:08:40.070814 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:40.070879393+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:40.070879393+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:40.070879393+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:40.070879393+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:40.070879393+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:40.070879393+00:00 stderr F - {
2025-08-13T20:08:40.070879393+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:40.070879393+00:00 stderr F - Status: "False",
2025-08-13T20:08:40.070879393+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:40.070879393+00:00 stderr F - },
2025-08-13T20:08:40.070879393+00:00 stderr F + {
2025-08-13T20:08:40.070879393+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:40.070879393+00:00 stderr F + Status: "True",
2025-08-13T20:08:40.070879393+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:40.068369611 +0000 UTC m=+439.273468915",
2025-08-13T20:08:40.070879393+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:40.070879393+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:08:40.070879393+00:00 stderr F + },
2025-08-13T20:08:40.070879393+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:40.070879393+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:40.070879393+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:40.070879393+00:00 stderr F },
2025-08-13T20:08:40.070879393+00:00 stderr F Version: "",
2025-08-13T20:08:40.070879393+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:40.070879393+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:40.070879393+00:00 stderr F }
2025-08-13T20:08:40.218648330+00:00 stderr F E0813 20:08:40.218518 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.262499887+00:00 stderr F E0813 20:08:40.262217 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.270760884+00:00 stderr F I0813 20:08:40.267420 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:40.270760884+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:40.270760884+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:40.270760884+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:40.270760884+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:40.270760884+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:40.270760884+00:00 stderr F - {
2025-08-13T20:08:40.270760884+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:40.270760884+00:00 stderr F - Status: "False",
2025-08-13T20:08:40.270760884+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:40.270760884+00:00 stderr F - },
2025-08-13T20:08:40.270760884+00:00 stderr F + {
2025-08-13T20:08:40.270760884+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:40.270760884+00:00 stderr F + Status: "True",
2025-08-13T20:08:40.270760884+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:40.262340883 +0000 UTC m=+439.467440147",
2025-08-13T20:08:40.270760884+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:40.270760884+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:08:40.270760884+00:00 stderr F + },
2025-08-13T20:08:40.270760884+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:40.270760884+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:40.270760884+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:40.270760884+00:00 stderr F },
2025-08-13T20:08:40.270760884+00:00 stderr F Version: "",
2025-08-13T20:08:40.270760884+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:40.270760884+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:40.270760884+00:00 stderr F }
2025-08-13T20:08:40.416578585+00:00 stderr F E0813 20:08:40.416357 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.477135771+00:00 stderr F E0813 20:08:40.476206 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.519601119+00:00 stderr F I0813 20:08:40.518976 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:40.519601119+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:40.519601119+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:40.519601119+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:40.519601119+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:40.519601119+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:40.519601119+00:00 stderr F - {
2025-08-13T20:08:40.519601119+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:40.519601119+00:00 stderr F - Status: "False",
2025-08-13T20:08:40.519601119+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:40.519601119+00:00 stderr F - },
2025-08-13T20:08:40.519601119+00:00 stderr F + {
2025-08-13T20:08:40.519601119+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:40.519601119+00:00 stderr F + Status: "True",
2025-08-13T20:08:40.519601119+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:40.476273256 +0000 UTC m=+439.681372500",
2025-08-13T20:08:40.519601119+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:40.519601119+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:08:40.519601119+00:00 stderr F + },
2025-08-13T20:08:40.519601119+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:40.519601119+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:40.519601119+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:40.519601119+00:00 stderr F },
2025-08-13T20:08:40.519601119+00:00 stderr F Version: "",
2025-08-13T20:08:40.519601119+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:40.519601119+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:40.519601119+00:00 stderr F }
2025-08-13T20:08:40.622538090+00:00 stderr F E0813 20:08:40.622405 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.705527799+00:00 stderr F E0813 20:08:40.705212 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.707585248+00:00 stderr F I0813 20:08:40.707517 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:40.707585248+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:40.707585248+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:40.707585248+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:08:40.707585248+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:40.707585248+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:08:40.707585248+00:00 stderr F - {
2025-08-13T20:08:40.707585248+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:40.707585248+00:00 stderr F - Status: "False",
2025-08-13T20:08:40.707585248+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:08:40.707585248+00:00 stderr F - },
2025-08-13T20:08:40.707585248+00:00 stderr F + {
2025-08-13T20:08:40.707585248+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:40.707585248+00:00 stderr F + Status: "True",
2025-08-13T20:08:40.707585248+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:40.705510879 +0000 UTC m=+439.910610083",
2025-08-13T20:08:40.707585248+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:40.707585248+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:08:40.707585248+00:00 stderr F + },
2025-08-13T20:08:40.707585248+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:40.707585248+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:08:40.707585248+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:08:40.707585248+00:00 stderr F },
2025-08-13T20:08:40.707585248+00:00 stderr F Version: "",
2025-08-13T20:08:40.707585248+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:40.707585248+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:40.707585248+00:00 stderr F }
2025-08-13T20:08:40.814590936+00:00 stderr F I0813 20:08:40.814038 1 request.go:697] Waited for 1.073262681s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status
2025-08-13T20:08:40.830016859+00:00 stderr F E0813 20:08:40.826096 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.020581012+00:00 stderr F E0813 20:08:41.020441 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.106359572+00:00 stderr F E0813 20:08:41.106226 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.108849493+00:00 stderr F I0813 20:08:41.108071 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:41.108849493+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:41.108849493+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:41.108849493+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:41.108849493+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:41.108849493+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:41.108849493+00:00 stderr F - {
2025-08-13T20:08:41.108849493+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:41.108849493+00:00 stderr F - Status: "False",
2025-08-13T20:08:41.108849493+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:41.108849493+00:00 stderr F - },
2025-08-13T20:08:41.108849493+00:00 stderr F + {
2025-08-13T20:08:41.108849493+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:41.108849493+00:00 stderr F + Status: "True",
2025-08-13T20:08:41.108849493+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.10630736 +0000 UTC m=+440.311406594",
2025-08-13T20:08:41.108849493+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:41.108849493+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:08:41.108849493+00:00 stderr F + },
2025-08-13T20:08:41.108849493+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:41.108849493+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:41.108849493+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:41.108849493+00:00 stderr F },
2025-08-13T20:08:41.108849493+00:00 stderr F Version: "",
2025-08-13T20:08:41.108849493+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:41.108849493+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:41.108849493+00:00 stderr F }
2025-08-13T20:08:41.220472763+00:00 stderr F E0813 20:08:41.219218 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.312987176+00:00 stderr F E0813 20:08:41.310744 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.317692401+00:00 stderr F I0813 20:08:41.317583 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:41.317692401+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:41.317692401+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:41.317692401+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:41.317692401+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:41.317692401+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:41.317692401+00:00 stderr F - {
2025-08-13T20:08:41.317692401+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:41.317692401+00:00 stderr F - Status: "False",
2025-08-13T20:08:41.317692401+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:41.317692401+00:00 stderr F - },
2025-08-13T20:08:41.317692401+00:00 stderr F + {
2025-08-13T20:08:41.317692401+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:41.317692401+00:00 stderr F + Status: "True",
2025-08-13T20:08:41.317692401+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.310887966 +0000 UTC m=+440.516016311",
2025-08-13T20:08:41.317692401+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:41.317692401+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:08:41.317692401+00:00 stderr F + },
2025-08-13T20:08:41.317692401+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:41.317692401+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:41.317692401+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:41.317692401+00:00 stderr F },
2025-08-13T20:08:41.317692401+00:00 stderr F Version: "",
2025-08-13T20:08:41.317692401+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:41.317692401+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:41.317692401+00:00 stderr F }
2025-08-13T20:08:41.415911757+00:00 stderr F E0813 20:08:41.415712 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.470984576+00:00 stderr F E0813 20:08:41.470499 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.474142146+00:00 stderr F I0813 20:08:41.474075 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:41.474142146+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:41.474142146+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:41.474142146+00:00 stderr F ...
// 43 identical elements
2025-08-13T20:08:41.474142146+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:08:41.474142146+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:08:41.474142146+00:00 stderr F - {
2025-08-13T20:08:41.474142146+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:08:41.474142146+00:00 stderr F - Status: "False",
2025-08-13T20:08:41.474142146+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:08:41.474142146+00:00 stderr F - },
2025-08-13T20:08:41.474142146+00:00 stderr F + {
2025-08-13T20:08:41.474142146+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:08:41.474142146+00:00 stderr F + Status: "True",
2025-08-13T20:08:41.474142146+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.470928164 +0000 UTC m=+440.676027688",
2025-08-13T20:08:41.474142146+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:08:41.474142146+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:08:41.474142146+00:00 stderr F + },
2025-08-13T20:08:41.474142146+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:08:41.474142146+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:41.474142146+00:00 stderr F ...
// 14 identical elements
2025-08-13T20:08:41.474142146+00:00 stderr F },
2025-08-13T20:08:41.474142146+00:00 stderr F Version: "",
2025-08-13T20:08:41.474142146+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:41.474142146+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:41.474142146+00:00 stderr F }
2025-08-13T20:08:41.499540955+00:00 stderr F E0813 20:08:41.499430 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.501244953+00:00 stderr F I0813 20:08:41.501116 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:41.501244953+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:41.501244953+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:41.501244953+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:41.501244953+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:41.501244953+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:41.501244953+00:00 stderr F - {
2025-08-13T20:08:41.501244953+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:41.501244953+00:00 stderr F - Status: "False",
2025-08-13T20:08:41.501244953+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:41.501244953+00:00 stderr F - },
2025-08-13T20:08:41.501244953+00:00 stderr F + {
2025-08-13T20:08:41.501244953+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:41.501244953+00:00 stderr F + Status: "True",
2025-08-13T20:08:41.501244953+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.499509634 +0000 UTC m=+440.704608858",
2025-08-13T20:08:41.501244953+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:41.501244953+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:08:41.501244953+00:00 stderr F + },
2025-08-13T20:08:41.501244953+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:41.501244953+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:41.501244953+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:41.501244953+00:00 stderr F },
2025-08-13T20:08:41.501244953+00:00 stderr F Version: "",
2025-08-13T20:08:41.501244953+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:41.501244953+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:41.501244953+00:00 stderr F }
2025-08-13T20:08:41.617100685+00:00 stderr F E0813 20:08:41.617046 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.702115823+00:00 stderr F E0813 20:08:41.700869 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.703451601+00:00 stderr F I0813 20:08:41.703292 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:41.703451601+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:41.703451601+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:41.703451601+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:41.703451601+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:41.703451601+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:41.703451601+00:00 stderr F - {
2025-08-13T20:08:41.703451601+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:41.703451601+00:00 stderr F - Status: "False",
2025-08-13T20:08:41.703451601+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:41.703451601+00:00 stderr F - },
2025-08-13T20:08:41.703451601+00:00 stderr F + {
2025-08-13T20:08:41.703451601+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:41.703451601+00:00 stderr F + Status: "True",
2025-08-13T20:08:41.703451601+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.700951129 +0000 UTC m=+440.906050523",
2025-08-13T20:08:41.703451601+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:41.703451601+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:08:41.703451601+00:00 stderr F + },
2025-08-13T20:08:41.703451601+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:41.703451601+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:41.703451601+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:41.703451601+00:00 stderr F },
2025-08-13T20:08:41.703451601+00:00 stderr F Version: "",
2025-08-13T20:08:41.703451601+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:41.703451601+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:41.703451601+00:00 stderr F }
2025-08-13T20:08:41.815672598+00:00 stderr F E0813 20:08:41.815214 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.988818273+00:00 stderr F E0813 20:08:41.988457 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.998696056+00:00 stderr F I0813 20:08:41.990208 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:41.998696056+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:41.998696056+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:41.998696056+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:08:41.998696056+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:41.998696056+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:08:41.998696056+00:00 stderr F - {
2025-08-13T20:08:41.998696056+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:41.998696056+00:00 stderr F - Status: "False",
2025-08-13T20:08:41.998696056+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:08:41.998696056+00:00 stderr F - },
2025-08-13T20:08:41.998696056+00:00 stderr F + {
2025-08-13T20:08:41.998696056+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:41.998696056+00:00 stderr F + Status: "True",
2025-08-13T20:08:41.998696056+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:41.988499663 +0000 UTC m=+441.193598837",
2025-08-13T20:08:41.998696056+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:41.998696056+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:08:41.998696056+00:00 stderr F + },
2025-08-13T20:08:41.998696056+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:41.998696056+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:08:41.998696056+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:08:41.998696056+00:00 stderr F },
2025-08-13T20:08:41.998696056+00:00 stderr F Version: "",
2025-08-13T20:08:41.998696056+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:41.998696056+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:41.998696056+00:00 stderr F }
2025-08-13T20:08:42.021301974+00:00 stderr F E0813 20:08:42.018061 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.186702406+00:00 stderr F E0813 20:08:42.186597 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.191558915+00:00 stderr F I0813 20:08:42.190611 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:42.191558915+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:42.191558915+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:42.191558915+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:42.191558915+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:42.191558915+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:42.191558915+00:00 stderr F - {
2025-08-13T20:08:42.191558915+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:42.191558915+00:00 stderr F - Status: "False",
2025-08-13T20:08:42.191558915+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:42.191558915+00:00 stderr F - },
2025-08-13T20:08:42.191558915+00:00 stderr F + {
2025-08-13T20:08:42.191558915+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:42.191558915+00:00 stderr F + Status: "True",
2025-08-13T20:08:42.191558915+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:42.186660165 +0000 UTC m=+441.391759379",
2025-08-13T20:08:42.191558915+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:42.191558915+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:08:42.191558915+00:00 stderr F + },
2025-08-13T20:08:42.191558915+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:42.191558915+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:42.191558915+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:42.191558915+00:00 stderr F },
2025-08-13T20:08:42.191558915+00:00 stderr F Version: "",
2025-08-13T20:08:42.191558915+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:42.191558915+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:42.191558915+00:00 stderr F }
2025-08-13T20:08:42.217340755+00:00 stderr F E0813 20:08:42.216746 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.387145983+00:00 stderr F E0813 20:08:42.386163 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.389334026+00:00 stderr F I0813 20:08:42.387949 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:42.389334026+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:42.389334026+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:42.389334026+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:42.389334026+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:42.389334026+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:42.389334026+00:00 stderr F - {
2025-08-13T20:08:42.389334026+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:42.389334026+00:00 stderr F - Status: "False",
2025-08-13T20:08:42.389334026+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:42.389334026+00:00 stderr F - },
2025-08-13T20:08:42.389334026+00:00 stderr F + {
2025-08-13T20:08:42.389334026+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:42.389334026+00:00 stderr F + Status: "True",
2025-08-13T20:08:42.389334026+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:42.386224797 +0000 UTC m=+441.591324171",
2025-08-13T20:08:42.389334026+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:42.389334026+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:08:42.389334026+00:00 stderr F + },
2025-08-13T20:08:42.389334026+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:42.389334026+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:42.389334026+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:42.389334026+00:00 stderr F },
2025-08-13T20:08:42.389334026+00:00 stderr F Version: "",
2025-08-13T20:08:42.389334026+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:42.389334026+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:42.389334026+00:00 stderr F }
2025-08-13T20:08:42.420863810+00:00 stderr F E0813 20:08:42.418609 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.614367508+00:00 stderr F I0813 20:08:42.613673 1 request.go:697] Waited for 1.112255989s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status
2025-08-13T20:08:42.619195186+00:00 stderr F E0813 20:08:42.616075 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.777729212+00:00 stderr F E0813 20:08:42.777624 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.781402937+00:00 stderr F I0813 20:08:42.781368 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:42.781402937+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:42.781402937+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:42.781402937+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:42.781402937+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:42.781402937+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:42.781402937+00:00 stderr F - {
2025-08-13T20:08:42.781402937+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:42.781402937+00:00 stderr F - Status: "False",
2025-08-13T20:08:42.781402937+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:42.781402937+00:00 stderr F - },
2025-08-13T20:08:42.781402937+00:00 stderr F + {
2025-08-13T20:08:42.781402937+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:42.781402937+00:00 stderr F + Status: "True",
2025-08-13T20:08:42.781402937+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:42.777866465 +0000 UTC m=+441.982965660",
2025-08-13T20:08:42.781402937+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:42.781402937+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:08:42.781402937+00:00 stderr F + },
2025-08-13T20:08:42.781402937+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:42.781402937+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:42.781402937+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:42.781402937+00:00 stderr F },
2025-08-13T20:08:42.781402937+00:00 stderr F Version: "",
2025-08-13T20:08:42.781402937+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:42.781402937+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:42.781402937+00:00 stderr F }
2025-08-13T20:08:42.820097516+00:00 stderr F E0813 20:08:42.817337 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.980056452+00:00 stderr F E0813 20:08:42.979975 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.984561011+00:00 stderr F I0813 20:08:42.984516 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:42.984561011+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:42.984561011+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:42.984561011+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:42.984561011+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:42.984561011+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:42.984561011+00:00 stderr F - {
2025-08-13T20:08:42.984561011+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:42.984561011+00:00 stderr F - Status: "False",
2025-08-13T20:08:42.984561011+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:42.984561011+00:00 stderr F - },
2025-08-13T20:08:42.984561011+00:00 stderr F + {
2025-08-13T20:08:42.984561011+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:42.984561011+00:00 stderr F + Status: "True",
2025-08-13T20:08:42.984561011+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:42.980152504 +0000 UTC m=+442.185251748",
2025-08-13T20:08:42.984561011+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:42.984561011+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:08:42.984561011+00:00 stderr F + },
2025-08-13T20:08:42.984561011+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:42.984561011+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:42.984561011+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:42.984561011+00:00 stderr F },
2025-08-13T20:08:42.984561011+00:00 stderr F Version: "",
2025-08-13T20:08:42.984561011+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:42.984561011+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:42.984561011+00:00 stderr F }
2025-08-13T20:08:43.026203515+00:00 stderr F E0813 20:08:43.026107 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.219599940+00:00 stderr F E0813 20:08:43.219361 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.351032518+00:00 stderr F E0813 20:08:43.350753 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.358561944+00:00 stderr F I0813 20:08:43.357865 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:43.358561944+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:43.358561944+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:43.358561944+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:08:43.358561944+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:43.358561944+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:08:43.358561944+00:00 stderr F - {
2025-08-13T20:08:43.358561944+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:43.358561944+00:00 stderr F - Status: "False",
2025-08-13T20:08:43.358561944+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:08:43.358561944+00:00 stderr F - },
2025-08-13T20:08:43.358561944+00:00 stderr F + {
2025-08-13T20:08:43.358561944+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:43.358561944+00:00 stderr F + Status: "True",
2025-08-13T20:08:43.358561944+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.350980336 +0000 UTC m=+442.556079610",
2025-08-13T20:08:43.358561944+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:43.358561944+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:08:43.358561944+00:00 stderr F + },
2025-08-13T20:08:43.358561944+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:43.358561944+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:08:43.358561944+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:08:43.358561944+00:00 stderr F },
2025-08-13T20:08:43.358561944+00:00 stderr F Version: "",
2025-08-13T20:08:43.358561944+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:43.358561944+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:43.358561944+00:00 stderr F }
2025-08-13T20:08:43.420359366+00:00 stderr F E0813 20:08:43.420292 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.544543156+00:00 stderr F E0813 20:08:43.541670 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.544543156+00:00 stderr F I0813 20:08:43.543481 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:43.544543156+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:43.544543156+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:43.544543156+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:43.544543156+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:43.544543156+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:43.544543156+00:00 stderr F - {
2025-08-13T20:08:43.544543156+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:43.544543156+00:00 stderr F - Status: "False",
2025-08-13T20:08:43.544543156+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:43.544543156+00:00 stderr F - },
2025-08-13T20:08:43.544543156+00:00 stderr F + {
2025-08-13T20:08:43.544543156+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:43.544543156+00:00 stderr F + Status: "True",
2025-08-13T20:08:43.544543156+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.541760616 +0000 UTC m=+442.746859820",
2025-08-13T20:08:43.544543156+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:43.544543156+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:08:43.544543156+00:00 stderr F + },
2025-08-13T20:08:43.544543156+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:43.544543156+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:43.544543156+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:43.544543156+00:00 stderr F },
2025-08-13T20:08:43.544543156+00:00 stderr F Version: "",
2025-08-13T20:08:43.544543156+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:43.544543156+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:43.544543156+00:00 stderr F }
2025-08-13T20:08:43.621723299+00:00 stderr F E0813 20:08:43.617077 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.704040239+00:00 stderr F E0813 20:08:43.700450 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.707824157+00:00 stderr F I0813 20:08:43.707671 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:43.707824157+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:43.707824157+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:43.707824157+00:00 stderr F ...
// 43 identical elements
2025-08-13T20:08:43.707824157+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:08:43.707824157+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:08:43.707824157+00:00 stderr F - {
2025-08-13T20:08:43.707824157+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:08:43.707824157+00:00 stderr F - Status: "False",
2025-08-13T20:08:43.707824157+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:08:43.707824157+00:00 stderr F - },
2025-08-13T20:08:43.707824157+00:00 stderr F + {
2025-08-13T20:08:43.707824157+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:08:43.707824157+00:00 stderr F + Status: "True",
2025-08-13T20:08:43.707824157+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.700504517 +0000 UTC m=+442.905603752",
2025-08-13T20:08:43.707824157+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:08:43.707824157+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:08:43.707824157+00:00 stderr F + },
2025-08-13T20:08:43.707824157+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:08:43.707824157+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:43.707824157+00:00 stderr F ...
// 14 identical elements
2025-08-13T20:08:43.707824157+00:00 stderr F },
2025-08-13T20:08:43.707824157+00:00 stderr F Version: "",
2025-08-13T20:08:43.707824157+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:43.707824157+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:43.707824157+00:00 stderr F }
2025-08-13T20:08:43.747565477+00:00 stderr F E0813 20:08:43.747500 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.754464205+00:00 stderr F I0813 20:08:43.749557 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:43.754464205+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:43.754464205+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:43.754464205+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:43.754464205+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:43.754464205+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:43.754464205+00:00 stderr F - {
2025-08-13T20:08:43.754464205+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:43.754464205+00:00 stderr F - Status: "False",
2025-08-13T20:08:43.754464205+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:43.754464205+00:00 stderr F - },
2025-08-13T20:08:43.754464205+00:00 stderr F + {
2025-08-13T20:08:43.754464205+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:43.754464205+00:00 stderr F + Status: "True",
2025-08-13T20:08:43.754464205+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.74766571 +0000 UTC m=+442.952764914",
2025-08-13T20:08:43.754464205+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:43.754464205+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:08:43.754464205+00:00 stderr F + },
2025-08-13T20:08:43.754464205+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:43.754464205+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:43.754464205+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:43.754464205+00:00 stderr F },
2025-08-13T20:08:43.754464205+00:00 stderr F Version: "",
2025-08-13T20:08:43.754464205+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:43.754464205+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:43.754464205+00:00 stderr F }
2025-08-13T20:08:43.817070210+00:00 stderr F E0813 20:08:43.816973 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.954648214+00:00 stderr F E0813 20:08:43.954369 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.958497534+00:00 stderr F I0813 20:08:43.958002 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:43.958497534+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:43.958497534+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:43.958497534+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:43.958497534+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:43.958497534+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:43.958497534+00:00 stderr F - {
2025-08-13T20:08:43.958497534+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:43.958497534+00:00 stderr F - Status: "False",
2025-08-13T20:08:43.958497534+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:43.958497534+00:00 stderr F - },
2025-08-13T20:08:43.958497534+00:00 stderr F + {
2025-08-13T20:08:43.958497534+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:43.958497534+00:00 stderr F + Status: "True",
2025-08-13T20:08:43.958497534+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:43.954440868 +0000 UTC m=+443.159540072",
2025-08-13T20:08:43.958497534+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:43.958497534+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:08:43.958497534+00:00 stderr F + },
2025-08-13T20:08:43.958497534+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:43.958497534+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:43.958497534+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:43.958497534+00:00 stderr F },
2025-08-13T20:08:43.958497534+00:00 stderr F Version: "",
2025-08-13T20:08:43.958497534+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:43.958497534+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:43.958497534+00:00 stderr F }
2025-08-13T20:08:44.016321952+00:00 stderr F E0813 20:08:44.015605 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:44.140135912+00:00 stderr F E0813 20:08:44.140038 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:44.142288394+00:00 stderr F I0813 20:08:44.142194 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:44.142288394+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:44.142288394+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:44.142288394+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:44.142288394+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:44.142288394+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:44.142288394+00:00 stderr F - {
2025-08-13T20:08:44.142288394+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:44.142288394+00:00 stderr F - Status: "False",
2025-08-13T20:08:44.142288394+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:44.142288394+00:00 stderr F - },
2025-08-13T20:08:44.142288394+00:00 stderr F + {
2025-08-13T20:08:44.142288394+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:44.142288394+00:00 stderr F + Status: "True",
2025-08-13T20:08:44.142288394+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:44.140098821 +0000 UTC m=+443.345198165",
2025-08-13T20:08:44.142288394+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:44.142288394+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:08:44.142288394+00:00 stderr F + },
2025-08-13T20:08:44.142288394+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:44.142288394+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:44.142288394+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:44.142288394+00:00 stderr F },
2025-08-13T20:08:44.142288394+00:00 stderr F Version: "",
2025-08-13T20:08:44.142288394+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:44.142288394+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:44.142288394+00:00 stderr F }
2025-08-13T20:08:44.217677275+00:00 stderr F E0813 20:08:44.216206 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:44.414622152+00:00 stderr F E0813 20:08:44.414500 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:44.627485295+00:00 stderr F E0813 20:08:44.625858 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:44.660118291+00:00 stderr F E0813 20:08:44.659512 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:44.664169017+00:00 stderr F I0813 20:08:44.661294 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:44.664169017+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:44.664169017+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:44.664169017+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:08:44.664169017+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:44.664169017+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:08:44.664169017+00:00 stderr F - {
2025-08-13T20:08:44.664169017+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:44.664169017+00:00 stderr F - Status: "False",
2025-08-13T20:08:44.664169017+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:08:44.664169017+00:00 stderr F - },
2025-08-13T20:08:44.664169017+00:00 stderr F + {
2025-08-13T20:08:44.664169017+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:44.664169017+00:00 stderr F + Status: "True",
2025-08-13T20:08:44.664169017+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:44.659587885 +0000 UTC m=+443.864687109",
2025-08-13T20:08:44.664169017+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:44.664169017+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:08:44.664169017+00:00 stderr F + },
2025-08-13T20:08:44.664169017+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:44.664169017+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:08:44.664169017+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:08:44.664169017+00:00 stderr F },
2025-08-13T20:08:44.664169017+00:00 stderr F Version: "",
2025-08-13T20:08:44.664169017+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:44.664169017+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:44.664169017+00:00 stderr F }
2025-08-13T20:08:44.816568856+00:00 stderr F E0813 20:08:44.816386 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:44.858538260+00:00 stderr F E0813 20:08:44.858437 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:44.861991979+00:00 stderr F I0813 20:08:44.860457 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:44.861991979+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:44.861991979+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:44.861991979+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:44.861991979+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:44.861991979+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:44.861991979+00:00 stderr F - {
2025-08-13T20:08:44.861991979+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:44.861991979+00:00 stderr F - Status: "False",
2025-08-13T20:08:44.861991979+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:44.861991979+00:00 stderr F - },
2025-08-13T20:08:44.861991979+00:00 stderr F + {
2025-08-13T20:08:44.861991979+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:44.861991979+00:00 stderr F + Status: "True",
2025-08-13T20:08:44.861991979+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:44.858510859 +0000 UTC m=+444.063610093",
2025-08-13T20:08:44.861991979+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:44.861991979+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:08:44.861991979+00:00 stderr F + },
2025-08-13T20:08:44.861991979+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:44.861991979+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:44.861991979+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:44.861991979+00:00 stderr F },
2025-08-13T20:08:44.861991979+00:00 stderr F Version: "",
2025-08-13T20:08:44.861991979+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:44.861991979+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:44.861991979+00:00 stderr F }
2025-08-13T20:08:45.015636224+00:00 stderr F E0813 20:08:45.015347 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:45.215994528+00:00 stderr F E0813 20:08:45.214927 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:45.268234216+00:00 stderr F E0813 20:08:45.268094 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:45.270711397+00:00 stderr F I0813 20:08:45.270626 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:45.270711397+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:45.270711397+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:45.270711397+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:45.270711397+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:45.270711397+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:45.270711397+00:00 stderr F - {
2025-08-13T20:08:45.270711397+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:45.270711397+00:00 stderr F - Status: "False",
2025-08-13T20:08:45.270711397+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:45.270711397+00:00 stderr F - },
2025-08-13T20:08:45.270711397+00:00 stderr F + {
2025-08-13T20:08:45.270711397+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:45.270711397+00:00 stderr F + Status: "True",
2025-08-13T20:08:45.270711397+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:45.268154714 +0000 UTC m=+444.473253928",
2025-08-13T20:08:45.270711397+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:45.270711397+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:08:45.270711397+00:00 stderr F + },
2025-08-13T20:08:45.270711397+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:45.270711397+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:45.270711397+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:45.270711397+00:00 stderr F },
2025-08-13T20:08:45.270711397+00:00 stderr F Version: "",
2025-08-13T20:08:45.270711397+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:45.270711397+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:45.270711397+00:00 stderr F }
2025-08-13T20:08:45.422093407+00:00 stderr F E0813 20:08:45.414446 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:45.459687385+00:00 stderr F E0813 20:08:45.459567 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:45.462676851+00:00 stderr F I0813 20:08:45.462418 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:45.462676851+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:45.462676851+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:45.462676851+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:45.462676851+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:45.462676851+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:45.462676851+00:00 stderr F - {
2025-08-13T20:08:45.462676851+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:45.462676851+00:00 stderr F - Status: "False",
2025-08-13T20:08:45.462676851+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:45.462676851+00:00 stderr F - },
2025-08-13T20:08:45.462676851+00:00 stderr F + {
2025-08-13T20:08:45.462676851+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:45.462676851+00:00 stderr F + Status: "True",
2025-08-13T20:08:45.462676851+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:45.459644254 +0000 UTC m=+444.664743638",
2025-08-13T20:08:45.462676851+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:45.462676851+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:08:45.462676851+00:00 stderr F + },
2025-08-13T20:08:45.462676851+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:45.462676851+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:45.462676851+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:45.462676851+00:00 stderr F },
2025-08-13T20:08:45.462676851+00:00 stderr F Version: "",
2025-08-13T20:08:45.462676851+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:45.462676851+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:45.462676851+00:00 stderr F }
2025-08-13T20:08:45.618344284+00:00 stderr F E0813 20:08:45.617682 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:45.657507677+00:00 stderr F E0813 20:08:45.657306 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:45.663505239+00:00 stderr F I0813 20:08:45.663430 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:45.663505239+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:45.663505239+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:45.663505239+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:45.663505239+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:45.663505239+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:45.663505239+00:00 stderr F - {
2025-08-13T20:08:45.663505239+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:45.663505239+00:00 stderr F - Status: "False",
2025-08-13T20:08:45.663505239+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:45.663505239+00:00 stderr F - },
2025-08-13T20:08:45.663505239+00:00 stderr F + {
2025-08-13T20:08:45.663505239+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:45.663505239+00:00 stderr F + Status: "True",
2025-08-13T20:08:45.663505239+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:45.657411114 +0000 UTC m=+444.862510488",
2025-08-13T20:08:45.663505239+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:45.663505239+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:08:45.663505239+00:00 stderr F + },
2025-08-13T20:08:45.663505239+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:45.663505239+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:45.663505239+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:45.663505239+00:00 stderr F },
2025-08-13T20:08:45.663505239+00:00 stderr F Version: "",
2025-08-13T20:08:45.663505239+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:45.663505239+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:45.663505239+00:00 stderr F }
2025-08-13T20:08:45.815375213+00:00 stderr F E0813 20:08:45.815224 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.015615404+00:00 stderr F E0813 20:08:46.015490 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.501005391+00:00 stderr F E0813 20:08:46.500924 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.502990298+00:00 stderr F I0813 20:08:46.502964 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:46.502990298+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:46.502990298+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:46.502990298+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:08:46.502990298+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:46.502990298+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:08:46.502990298+00:00 stderr F - {
2025-08-13T20:08:46.502990298+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:46.502990298+00:00 stderr F - Status: "False",
2025-08-13T20:08:46.502990298+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:08:46.502990298+00:00 stderr F - },
2025-08-13T20:08:46.502990298+00:00 stderr F + {
2025-08-13T20:08:46.502990298+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:46.502990298+00:00 stderr F + Status: "True",
2025-08-13T20:08:46.502990298+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:46.501100054 +0000 UTC m=+445.706199228",
2025-08-13T20:08:46.502990298+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:46.502990298+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:08:46.502990298+00:00 stderr F + },
2025-08-13T20:08:46.502990298+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:46.502990298+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:08:46.502990298+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:08:46.502990298+00:00 stderr F },
2025-08-13T20:08:46.502990298+00:00 stderr F Version: "",
2025-08-13T20:08:46.502990298+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:46.502990298+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:46.502990298+00:00 stderr F }
2025-08-13T20:08:46.505478419+00:00 stderr F E0813 20:08:46.505412 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.706044639+00:00 stderr F E0813 20:08:46.705954 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.708050136+00:00 stderr F I0813 20:08:46.707877 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:46.708050136+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:46.708050136+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:46.708050136+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:46.708050136+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:46.708050136+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:46.708050136+00:00 stderr F - {
2025-08-13T20:08:46.708050136+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:46.708050136+00:00 stderr F - Status: "False",
2025-08-13T20:08:46.708050136+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:46.708050136+00:00 stderr F - },
2025-08-13T20:08:46.708050136+00:00 stderr F + {
2025-08-13T20:08:46.708050136+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:46.708050136+00:00 stderr F + Status: "True",
2025-08-13T20:08:46.708050136+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:46.706148572 +0000 UTC m=+445.911247926",
2025-08-13T20:08:46.708050136+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:46.708050136+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:08:46.708050136+00:00 stderr F + },
2025-08-13T20:08:46.708050136+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:46.708050136+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:46.708050136+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:46.708050136+00:00 stderr F },
2025-08-13T20:08:46.708050136+00:00 stderr F Version: "",
2025-08-13T20:08:46.708050136+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:46.708050136+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:46.708050136+00:00 stderr F }
2025-08-13T20:08:46.709550969+00:00 stderr F E0813 20:08:46.709501 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.901179643+00:00 stderr F E0813 20:08:46.900695 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.906398613+00:00 stderr F I0813 20:08:46.906310 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:46.906398613+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:46.906398613+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:46.906398613+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:46.906398613+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:46.906398613+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:46.906398613+00:00 stderr F - {
2025-08-13T20:08:46.906398613+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:46.906398613+00:00 stderr F - Status: "False",
2025-08-13T20:08:46.906398613+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:46.906398613+00:00 stderr F - },
2025-08-13T20:08:46.906398613+00:00 stderr F + {
2025-08-13T20:08:46.906398613+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:46.906398613+00:00 stderr F + Status: "True",
2025-08-13T20:08:46.906398613+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:46.901114832 +0000 UTC m=+446.106214156",
2025-08-13T20:08:46.906398613+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:46.906398613+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:08:46.906398613+00:00 stderr F + },
2025-08-13T20:08:46.906398613+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:46.906398613+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:46.906398613+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:46.906398613+00:00 stderr F },
2025-08-13T20:08:46.906398613+00:00 stderr F Version: "",
2025-08-13T20:08:46.906398613+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:46.906398613+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:46.906398613+00:00 stderr F }
2025-08-13T20:08:46.907769112+00:00 stderr F E0813 20:08:46.907662 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.977342047+00:00 stderr F E0813 20:08:46.977226 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.979552040+00:00 stderr F I0813 20:08:46.979470 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:46.979552040+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:46.979552040+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:46.979552040+00:00 stderr F ...
// 43 identical elements
2025-08-13T20:08:46.979552040+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:08:46.979552040+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:08:46.979552040+00:00 stderr F - {
2025-08-13T20:08:46.979552040+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:08:46.979552040+00:00 stderr F - Status: "False",
2025-08-13T20:08:46.979552040+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:08:46.979552040+00:00 stderr F - },
2025-08-13T20:08:46.979552040+00:00 stderr F + {
2025-08-13T20:08:46.979552040+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:08:46.979552040+00:00 stderr F + Status: "True",
2025-08-13T20:08:46.979552040+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:46.977278655 +0000 UTC m=+446.182377829",
2025-08-13T20:08:46.979552040+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:08:46.979552040+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:08:46.979552040+00:00 stderr F + },
2025-08-13T20:08:46.979552040+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:08:46.979552040+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:46.979552040+00:00 stderr F ...
// 14 identical elements
2025-08-13T20:08:46.979552040+00:00 stderr F },
2025-08-13T20:08:46.979552040+00:00 stderr F Version: "",
2025-08-13T20:08:46.979552040+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:46.979552040+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:46.979552040+00:00 stderr F }
2025-08-13T20:08:46.983096882+00:00 stderr F E0813 20:08:46.982938 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:47.098157441+00:00 stderr F E0813 20:08:47.098074 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:47.102136425+00:00 stderr F I0813 20:08:47.100739 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:47.102136425+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:47.102136425+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:47.102136425+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:47.102136425+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:47.102136425+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:47.102136425+00:00 stderr F - {
2025-08-13T20:08:47.102136425+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:47.102136425+00:00 stderr F - Status: "False",
2025-08-13T20:08:47.102136425+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:47.102136425+00:00 stderr F - },
2025-08-13T20:08:47.102136425+00:00 stderr F + {
2025-08-13T20:08:47.102136425+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:47.102136425+00:00 stderr F + Status: "True",
2025-08-13T20:08:47.102136425+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:47.098144541 +0000 UTC m=+446.303243835",
2025-08-13T20:08:47.102136425+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:47.102136425+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:08:47.102136425+00:00 stderr F + },
2025-08-13T20:08:47.102136425+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:47.102136425+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:47.102136425+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:47.102136425+00:00 stderr F },
2025-08-13T20:08:47.102136425+00:00 stderr F Version: "",
2025-08-13T20:08:47.102136425+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:47.102136425+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:47.102136425+00:00 stderr F }
2025-08-13T20:08:47.105107300+00:00 stderr F E0813 20:08:47.105013 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:47.299446792+00:00 stderr F E0813 20:08:47.298300 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:47.314666088+00:00 stderr F I0813 20:08:47.299923 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:47.314666088+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:47.314666088+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:47.314666088+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:47.314666088+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:47.314666088+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:47.314666088+00:00 stderr F - {
2025-08-13T20:08:47.314666088+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:47.314666088+00:00 stderr F - Status: "False",
2025-08-13T20:08:47.314666088+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:47.314666088+00:00 stderr F - },
2025-08-13T20:08:47.314666088+00:00 stderr F + {
2025-08-13T20:08:47.314666088+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:47.314666088+00:00 stderr F + Status: "True",
2025-08-13T20:08:47.314666088+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:47.298356561 +0000 UTC m=+446.503455745",
2025-08-13T20:08:47.314666088+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:47.314666088+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:08:47.314666088+00:00 stderr F + },
2025-08-13T20:08:47.314666088+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:47.314666088+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:47.314666088+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:47.314666088+00:00 stderr F },
2025-08-13T20:08:47.314666088+00:00 stderr F Version: "",
2025-08-13T20:08:47.314666088+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:47.314666088+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:47.314666088+00:00 stderr F }
2025-08-13T20:08:47.314666088+00:00 stderr F E0813 20:08:47.301488 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.073730023+00:00 stderr F E0813 20:08:49.073678 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.078610473+00:00 stderr F I0813 20:08:49.077425 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:49.078610473+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:49.078610473+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:49.078610473+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:08:49.078610473+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:49.078610473+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:08:49.078610473+00:00 stderr F - {
2025-08-13T20:08:49.078610473+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:49.078610473+00:00 stderr F - Status: "False",
2025-08-13T20:08:49.078610473+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:08:49.078610473+00:00 stderr F - },
2025-08-13T20:08:49.078610473+00:00 stderr F + {
2025-08-13T20:08:49.078610473+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:49.078610473+00:00 stderr F + Status: "True",
2025-08-13T20:08:49.078610473+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.073921938 +0000 UTC m=+448.279021332",
2025-08-13T20:08:49.078610473+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:49.078610473+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:08:49.078610473+00:00 stderr F + },
2025-08-13T20:08:49.078610473+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:49.078610473+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:08:49.078610473+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:08:49.078610473+00:00 stderr F },
2025-08-13T20:08:49.078610473+00:00 stderr F Version: "",
2025-08-13T20:08:49.078610473+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:49.078610473+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:49.078610473+00:00 stderr F }
2025-08-13T20:08:49.082722501+00:00 stderr F E0813 20:08:49.082646 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.272062159+00:00 stderr F E0813 20:08:49.271885 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.275436466+00:00 stderr F I0813 20:08:49.275396 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:49.275436466+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:49.275436466+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:49.275436466+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:49.275436466+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:49.275436466+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:49.275436466+00:00 stderr F - {
2025-08-13T20:08:49.275436466+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:49.275436466+00:00 stderr F - Status: "False",
2025-08-13T20:08:49.275436466+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:49.275436466+00:00 stderr F - },
2025-08-13T20:08:49.275436466+00:00 stderr F + {
2025-08-13T20:08:49.275436466+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:49.275436466+00:00 stderr F + Status: "True",
2025-08-13T20:08:49.275436466+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.271969677 +0000 UTC m=+448.477068891",
2025-08-13T20:08:49.275436466+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:49.275436466+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:08:49.275436466+00:00 stderr F + },
2025-08-13T20:08:49.275436466+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:49.275436466+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:49.275436466+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:49.275436466+00:00 stderr F },
2025-08-13T20:08:49.275436466+00:00 stderr F Version: "",
2025-08-13T20:08:49.275436466+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:49.275436466+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:49.275436466+00:00 stderr F }
2025-08-13T20:08:49.277923097+00:00 stderr F E0813 20:08:49.276996 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.470053486+00:00 stderr F E0813 20:08:49.469879 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.472027282+00:00 stderr F I0813 20:08:49.471955 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:49.472027282+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:49.472027282+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:49.472027282+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:49.472027282+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:49.472027282+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:49.472027282+00:00 stderr F - {
2025-08-13T20:08:49.472027282+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:49.472027282+00:00 stderr F - Status: "False",
2025-08-13T20:08:49.472027282+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:49.472027282+00:00 stderr F - },
2025-08-13T20:08:49.472027282+00:00 stderr F + {
2025-08-13T20:08:49.472027282+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:49.472027282+00:00 stderr F + Status: "True",
2025-08-13T20:08:49.472027282+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.469974794 +0000 UTC m=+448.675074108",
2025-08-13T20:08:49.472027282+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:49.472027282+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:08:49.472027282+00:00 stderr F + },
2025-08-13T20:08:49.472027282+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:49.472027282+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:49.472027282+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:49.472027282+00:00 stderr F },
2025-08-13T20:08:49.472027282+00:00 stderr F Version: "",
2025-08-13T20:08:49.472027282+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:49.472027282+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:49.472027282+00:00 stderr F }
2025-08-13T20:08:49.473334110+00:00 stderr F E0813 20:08:49.473274 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.667066634+00:00 stderr F E0813 20:08:49.666953 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.672081948+00:00 stderr F I0813 20:08:49.671872 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:49.672081948+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:49.672081948+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:49.672081948+00:00 stderr F ...
// 19 identical elements
2025-08-13T20:08:49.672081948+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:08:49.672081948+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:08:49.672081948+00:00 stderr F - {
2025-08-13T20:08:49.672081948+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:08:49.672081948+00:00 stderr F - Status: "False",
2025-08-13T20:08:49.672081948+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:08:49.672081948+00:00 stderr F - },
2025-08-13T20:08:49.672081948+00:00 stderr F + {
2025-08-13T20:08:49.672081948+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:08:49.672081948+00:00 stderr F + Status: "True",
2025-08-13T20:08:49.672081948+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.667166027 +0000 UTC m=+448.872265331",
2025-08-13T20:08:49.672081948+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:49.672081948+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:08:49.672081948+00:00 stderr F + },
2025-08-13T20:08:49.672081948+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:08:49.672081948+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:49.672081948+00:00 stderr F ...
// 38 identical elements
2025-08-13T20:08:49.672081948+00:00 stderr F },
2025-08-13T20:08:49.672081948+00:00 stderr F Version: "",
2025-08-13T20:08:49.672081948+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:49.672081948+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:49.672081948+00:00 stderr F }
2025-08-13T20:08:49.673729856+00:00 stderr F E0813 20:08:49.673622 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.863610400+00:00 stderr F E0813 20:08:49.863525 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.865616637+00:00 stderr F I0813 20:08:49.865399 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:49.865616637+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:49.865616637+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:49.865616637+00:00 stderr F ...
// 10 identical elements
2025-08-13T20:08:49.865616637+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:08:49.865616637+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:49.865616637+00:00 stderr F - {
2025-08-13T20:08:49.865616637+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:08:49.865616637+00:00 stderr F - Status: "False",
2025-08-13T20:08:49.865616637+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:08:49.865616637+00:00 stderr F - },
2025-08-13T20:08:49.865616637+00:00 stderr F + {
2025-08-13T20:08:49.865616637+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:08:49.865616637+00:00 stderr F + Status: "True",
2025-08-13T20:08:49.865616637+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:49.863575439 +0000 UTC m=+449.068674603",
2025-08-13T20:08:49.865616637+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:49.865616637+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:08:49.865616637+00:00 stderr F + },
2025-08-13T20:08:49.865616637+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:08:49.865616637+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:08:49.865616637+00:00 stderr F ...
// 47 identical elements
2025-08-13T20:08:49.865616637+00:00 stderr F },
2025-08-13T20:08:49.865616637+00:00 stderr F Version: "",
2025-08-13T20:08:49.865616637+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:49.865616637+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:49.865616637+00:00 stderr F }
2025-08-13T20:08:49.867161861+00:00 stderr F E0813 20:08:49.867084 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:52.105158806+00:00 stderr F E0813 20:08:52.105106 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:52.107373990+00:00 stderr F I0813 20:08:52.107315 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:52.107373990+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:52.107373990+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:52.107373990+00:00 stderr F ...
// 43 identical elements
2025-08-13T20:08:52.107373990+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:08:52.107373990+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:08:52.107373990+00:00 stderr F - {
2025-08-13T20:08:52.107373990+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:08:52.107373990+00:00 stderr F - Status: "False",
2025-08-13T20:08:52.107373990+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:08:52.107373990+00:00 stderr F - },
2025-08-13T20:08:52.107373990+00:00 stderr F + {
2025-08-13T20:08:52.107373990+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:08:52.107373990+00:00 stderr F + Status: "True",
2025-08-13T20:08:52.107373990+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:52.105270139 +0000 UTC m=+451.310369213",
2025-08-13T20:08:52.107373990+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:08:52.107373990+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:08:52.107373990+00:00 stderr F + },
2025-08-13T20:08:52.107373990+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:08:52.107373990+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:08:52.107373990+00:00 stderr F ...
// 14 identical elements
2025-08-13T20:08:52.107373990+00:00 stderr F },
2025-08-13T20:08:52.107373990+00:00 stderr F Version: "",
2025-08-13T20:08:52.107373990+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:52.107373990+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:52.107373990+00:00 stderr F }
2025-08-13T20:08:52.109545982+00:00 stderr F E0813 20:08:52.108731 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:54.207325077+00:00 stderr F E0813 20:08:54.206946 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:54.210384064+00:00 stderr F I0813 20:08:54.209528 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:54.210384064+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:54.210384064+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:54.210384064+00:00 stderr F ...
// 2 identical elements
2025-08-13T20:08:54.210384064+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:54.210384064+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:08:54.210384064+00:00 stderr F - {
2025-08-13T20:08:54.210384064+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:54.210384064+00:00 stderr F - Status: "False",
2025-08-13T20:08:54.210384064+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:08:54.210384064+00:00 stderr F - },
2025-08-13T20:08:54.210384064+00:00 stderr F + {
2025-08-13T20:08:54.210384064+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:08:54.210384064+00:00 stderr F + Status: "True",
2025-08-13T20:08:54.210384064+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.207307786 +0000 UTC m=+453.412407020",
2025-08-13T20:08:54.210384064+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:08:54.210384064+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:08:54.210384064+00:00 stderr F + },
2025-08-13T20:08:54.210384064+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:08:54.210384064+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:08:54.210384064+00:00 stderr F ...
// 55 identical elements
2025-08-13T20:08:54.210384064+00:00 stderr F },
2025-08-13T20:08:54.210384064+00:00 stderr F Version: "",
2025-08-13T20:08:54.210384064+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:08:54.210384064+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:08:54.210384064+00:00 stderr F }
2025-08-13T20:08:54.212992689+00:00 stderr F E0813 20:08:54.211585 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:54.401921646+00:00 stderr F E0813 20:08:54.399433 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:54.403230154+00:00 stderr F I0813 20:08:54.401987 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:08:54.403230154+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:08:54.403230154+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:08:54.403230154+00:00 stderr F ...
// 10 identical elements 2025-08-13T20:08:54.403230154+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:54.403230154+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:54.403230154+00:00 stderr F - { 2025-08-13T20:08:54.403230154+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:54.403230154+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.403230154+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:54.403230154+00:00 stderr F - }, 2025-08-13T20:08:54.403230154+00:00 stderr F + { 2025-08-13T20:08:54.403230154+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:54.403230154+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.403230154+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.399525887 +0000 UTC m=+453.604625211", 2025-08-13T20:08:54.403230154+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.403230154+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:08:54.403230154+00:00 stderr F + }, 2025-08-13T20:08:54.403230154+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:54.403230154+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:54.403230154+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:54.403230154+00:00 stderr F }, 2025-08-13T20:08:54.403230154+00:00 stderr F Version: "", 2025-08-13T20:08:54.403230154+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.403230154+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.403230154+00:00 stderr F } 2025-08-13T20:08:54.404868690+00:00 stderr F E0813 20:08:54.404648 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.598589065+00:00 stderr F E0813 20:08:54.598467 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.600546971+00:00 stderr F I0813 20:08:54.600449 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.600546971+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.600546971+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.600546971+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:54.600546971+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:54.600546971+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:54.600546971+00:00 stderr F - { 2025-08-13T20:08:54.600546971+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:54.600546971+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.600546971+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:54.600546971+00:00 stderr F - }, 2025-08-13T20:08:54.600546971+00:00 stderr F + { 2025-08-13T20:08:54.600546971+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:54.600546971+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.600546971+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.598694108 +0000 UTC m=+453.803793192", 2025-08-13T20:08:54.600546971+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.600546971+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:08:54.600546971+00:00 stderr F + }, 2025-08-13T20:08:54.600546971+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:54.600546971+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:54.600546971+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:54.600546971+00:00 stderr F }, 2025-08-13T20:08:54.600546971+00:00 stderr F Version: "", 2025-08-13T20:08:54.600546971+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.600546971+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.600546971+00:00 stderr F } 2025-08-13T20:08:54.606496561+00:00 stderr F E0813 20:08:54.605662 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.803139279+00:00 stderr F E0813 20:08:54.803041 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.804947011+00:00 stderr F I0813 20:08:54.804771 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.804947011+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.804947011+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.804947011+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:08:54.804947011+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:08:54.804947011+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:08:54.804947011+00:00 stderr F - { 2025-08-13T20:08:54.804947011+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:08:54.804947011+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.804947011+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:08:54.804947011+00:00 stderr F - }, 2025-08-13T20:08:54.804947011+00:00 stderr F + { 2025-08-13T20:08:54.804947011+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:08:54.804947011+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.804947011+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.803096458 +0000 UTC m=+454.008195642", 2025-08-13T20:08:54.804947011+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.804947011+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:08:54.804947011+00:00 stderr F + }, 2025-08-13T20:08:54.804947011+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:08:54.804947011+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:54.804947011+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:08:54.804947011+00:00 stderr F }, 2025-08-13T20:08:54.804947011+00:00 stderr F Version: "", 2025-08-13T20:08:54.804947011+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.804947011+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.804947011+00:00 stderr F } 2025-08-13T20:08:54.812567070+00:00 stderr F E0813 20:08:54.812132 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.994513956+00:00 stderr F E0813 20:08:54.994469 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.998052418+00:00 stderr F I0813 20:08:54.998024 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:08:54.998052418+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:08:54.998052418+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:08:54.998052418+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:08:54.998052418+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:08:54.998052418+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:08:54.998052418+00:00 stderr F - { 2025-08-13T20:08:54.998052418+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:08:54.998052418+00:00 stderr F - Status: "False", 2025-08-13T20:08:54.998052418+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:08:54.998052418+00:00 stderr F - }, 2025-08-13T20:08:54.998052418+00:00 stderr F + { 2025-08-13T20:08:54.998052418+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:08:54.998052418+00:00 stderr F + Status: "True", 2025-08-13T20:08:54.998052418+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:08:54.994837066 +0000 UTC m=+454.199936400", 2025-08-13T20:08:54.998052418+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:08:54.998052418+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:08:54.998052418+00:00 stderr F + }, 2025-08-13T20:08:54.998052418+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:08:54.998052418+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:08:54.998052418+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:08:54.998052418+00:00 stderr F }, 2025-08-13T20:08:54.998052418+00:00 stderr F Version: "", 2025-08-13T20:08:54.998052418+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:08:54.998052418+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:08:54.998052418+00:00 stderr F } 2025-08-13T20:08:55.000666323+00:00 stderr F E0813 20:08:54.999518 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:29.132447831+00:00 stderr F I0813 20:09:29.131961 1 reflector.go:351] Caches populated for *v1.ConsolePlugin from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-08-13T20:09:31.727617477+00:00 stderr F I0813 20:09:31.726940 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:33.466439120+00:00 stderr F I0813 20:09:33.466285 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:38.293980459+00:00 stderr F I0813 20:09:38.293268 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:38.438579575+00:00 stderr F I0813 20:09:38.438168 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:39.328846060+00:00 stderr F I0813 20:09:39.328717 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 
2025-08-13T20:09:41.223935102+00:00 stderr F I0813 20:09:41.221758 1 reflector.go:351] Caches populated for *v1.ConsoleCLIDownload from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-08-13T20:09:42.504541109+00:00 stderr F I0813 20:09:42.504138 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:43.362113407+00:00 stderr F I0813 20:09:43.361586 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:43.809874274+00:00 stderr F I0813 20:09:43.808495 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:44.589466095+00:00 stderr F I0813 20:09:44.589375 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:44.848149892+00:00 stderr F I0813 20:09:44.848068 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:45.227115527+00:00 stderr F I0813 20:09:45.227056 1 reflector.go:351] Caches populated for operators.coreos.com/v1, Resource=olmconfigs from k8s.io/client-go/dynamic/dynamicinformer/informer.go:108 2025-08-13T20:09:45.377858239+00:00 stderr F I0813 20:09:45.376607 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:45.788857763+00:00 stderr F I0813 20:09:45.786291 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:48.932375290+00:00 stderr F I0813 20:09:48.931130 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 
2025-08-13T20:09:50.007225758+00:00 stderr F I0813 20:09:50.006990 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:51.632549997+00:00 stderr F I0813 20:09:51.628985 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:52.739357640+00:00 stderr F I0813 20:09:52.737293 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:53.232652774+00:00 stderr F I0813 20:09:53.232458 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:08.318075215+00:00 stderr F I0813 20:10:08.307610 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:09.116330181+00:00 stderr F I0813 20:10:09.115726 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:13.496759312+00:00 stderr F I0813 20:10:13.496314 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:10:14.844105511+00:00 stderr F I0813 20:10:14.842060 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:21.201057198+00:00 stderr F I0813 20:10:21.200251 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:26.969289168+00:00 stderr F I0813 20:10:26.960073 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:27.566027517+00:00 stderr F I0813 20:10:27.565375 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:28.128064702+00:00 stderr F I0813 20:10:28.127899 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:29.661418915+00:00 stderr F I0813 20:10:29.639889 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:38.203711119+00:00 stderr F I0813 20:10:38.202869 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.413613 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.411381 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.417730 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.411546 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424210440+00:00 stderr F I0813 20:42:36.420250 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.470292238+00:00 stderr F I0813 20:42:36.424386 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.479606457+00:00 stderr F I0813 20:42:36.478993 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.479606457+00:00 stderr F I0813 20:42:36.479403 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486942459+00:00 stderr F I0813 20:42:36.482209 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.487206916+00:00 
stderr F I0813 20:42:36.487174 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.488106742+00:00 stderr F I0813 20:42:36.488085 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.488608537+00:00 stderr F I0813 20:42:36.488588 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.489345088+00:00 stderr F I0813 20:42:36.489320 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.489868833+00:00 stderr F I0813 20:42:36.489841 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.490397378+00:00 stderr F I0813 20:42:36.490371 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.490919933+00:00 stderr F I0813 20:42:36.490891 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491293134+00:00 stderr F I0813 20:42:36.491263 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.498563 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.499140 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.499433 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.500829 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.501329 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: 
unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.501561 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.501902 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.502180 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.502471 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.515605195+00:00 stderr F I0813 20:42:36.513653 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516357957+00:00 stderr F I0813 20:42:36.516322 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516766668+00:00 stderr F I0813 20:42:36.516740 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.517208181+00:00 stderr F I0813 20:42:36.517182 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.517532220+00:00 stderr F I0813 20:42:36.517509 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.535073116+00:00 stderr F I0813 20:42:36.411567 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.724747085+00:00 stderr F E0813 20:42:36.724674 1 leaderelection.go:332] error retrieving resource lock openshift-console-operator/console-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.047531220+00:00 stderr F E0813 
20:42:37.047077 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.072970634+00:00 stderr F I0813 20:42:37.072881 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.072970634+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.072970634+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.072970634+00:00 stderr F ... // 43 identical elements 2025-08-13T20:42:37.072970634+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.072970634+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.072970634+00:00 stderr F - { 2025-08-13T20:42:37.072970634+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.072970634+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.072970634+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.072970634+00:00 stderr F - }, 2025-08-13T20:42:37.072970634+00:00 stderr F + { 2025-08-13T20:42:37.072970634+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.072970634+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.072970634+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.047612663 +0000 UTC m=+2476.252711917", 2025-08-13T20:42:37.072970634+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.072970634+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.072970634+00:00 stderr F + }, 2025-08-13T20:42:37.072970634+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: 
"False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.072970634+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.072970634+00:00 stderr F ... // 14 identical elements 2025-08-13T20:42:37.072970634+00:00 stderr F }, 2025-08-13T20:42:37.072970634+00:00 stderr F Version: "", 2025-08-13T20:42:37.072970634+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.072970634+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.072970634+00:00 stderr F } 2025-08-13T20:42:37.080679256+00:00 stderr F E0813 20:42:37.080588 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.091382095+00:00 stderr F E0813 20:42:37.088612 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.093880137+00:00 stderr F I0813 20:42:37.091708 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.093880137+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.093880137+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.093880137+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:37.093880137+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.093880137+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.093880137+00:00 stderr F - { 2025-08-13T20:42:37.093880137+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.093880137+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.093880137+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.093880137+00:00 stderr F - }, 2025-08-13T20:42:37.093880137+00:00 stderr F + { 2025-08-13T20:42:37.093880137+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.093880137+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.093880137+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.088681547 +0000 UTC m=+2476.293780771", 2025-08-13T20:42:37.093880137+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.093880137+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.093880137+00:00 stderr F + }, 2025-08-13T20:42:37.093880137+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.093880137+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.093880137+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:42:37.093880137+00:00 stderr F }, 2025-08-13T20:42:37.093880137+00:00 stderr F Version: "", 2025-08-13T20:42:37.093880137+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.093880137+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.093880137+00:00 stderr F } 2025-08-13T20:42:37.093880137+00:00 stderr F E0813 20:42:37.093400 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.104514653+00:00 stderr F E0813 20:42:37.104469 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.107567551+00:00 stderr F I0813 20:42:37.107531 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.107567551+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.107567551+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.107567551+00:00 stderr F ... 
// 43 identical elements
2025-08-13T20:42:37.107567551+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:42:37.107567551+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:42:37.107567551+00:00 stderr F - {
2025-08-13T20:42:37.107567551+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:42:37.107567551+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.107567551+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:42:37.107567551+00:00 stderr F - },
2025-08-13T20:42:37.107567551+00:00 stderr F + {
2025-08-13T20:42:37.107567551+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:42:37.107567551+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.107567551+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.104589836 +0000 UTC m=+2476.309689100",
2025-08-13T20:42:37.107567551+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:42:37.107567551+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:42:37.107567551+00:00 stderr F + },
2025-08-13T20:42:37.107567551+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:42:37.107567551+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:42:37.107567551+00:00 stderr F ... // 14 identical elements
2025-08-13T20:42:37.107567551+00:00 stderr F },
2025-08-13T20:42:37.107567551+00:00 stderr F Version: "",
2025-08-13T20:42:37.107567551+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.107567551+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.107567551+00:00 stderr F }
2025-08-13T20:42:37.108659843+00:00 stderr F E0813 20:42:37.108634 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.130076070+00:00 stderr F E0813 20:42:37.130008 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.134904380+00:00 stderr F I0813 20:42:37.134145 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.134904380+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.134904380+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.134904380+00:00 stderr F ... // 43 identical elements
2025-08-13T20:42:37.134904380+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:42:37.134904380+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:42:37.134904380+00:00 stderr F - {
2025-08-13T20:42:37.134904380+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:42:37.134904380+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.134904380+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:42:37.134904380+00:00 stderr F - },
2025-08-13T20:42:37.134904380+00:00 stderr F + {
2025-08-13T20:42:37.134904380+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:42:37.134904380+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.134904380+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.130170743 +0000 UTC m=+2476.335269987",
2025-08-13T20:42:37.134904380+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:42:37.134904380+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:42:37.134904380+00:00 stderr F + },
2025-08-13T20:42:37.134904380+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:42:37.134904380+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:42:37.134904380+00:00 stderr F ... // 14 identical elements
2025-08-13T20:42:37.134904380+00:00 stderr F },
2025-08-13T20:42:37.134904380+00:00 stderr F Version: "",
2025-08-13T20:42:37.134904380+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.134904380+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.134904380+00:00 stderr F }
2025-08-13T20:42:37.136082974+00:00 stderr F E0813 20:42:37.136006 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.152401314+00:00 stderr F E0813 20:42:37.152276 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.153696011+00:00 stderr F E0813 20:42:37.153636 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.154406892+00:00 stderr F I0813 20:42:37.154344 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.154406892+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.154406892+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.154406892+00:00 stderr F ... // 2 identical elements
2025-08-13T20:42:37.154406892+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:42:37.154406892+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:42:37.154406892+00:00 stderr F - {
2025-08-13T20:42:37.154406892+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:42:37.154406892+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.154406892+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:42:37.154406892+00:00 stderr F - },
2025-08-13T20:42:37.154406892+00:00 stderr F + {
2025-08-13T20:42:37.154406892+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:42:37.154406892+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.154406892+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.152329982 +0000 UTC m=+2476.357429176",
2025-08-13T20:42:37.154406892+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:37.154406892+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:42:37.154406892+00:00 stderr F + },
2025-08-13T20:42:37.154406892+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:42:37.154406892+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:42:37.154406892+00:00 stderr F ... // 55 identical elements
2025-08-13T20:42:37.154406892+00:00 stderr F },
2025-08-13T20:42:37.154406892+00:00 stderr F Version: "",
2025-08-13T20:42:37.154406892+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.154406892+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.154406892+00:00 stderr F }
2025-08-13T20:42:37.154951978+00:00 stderr F E0813 20:42:37.154902 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.156953415+00:00 stderr F E0813 20:42:37.156911 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.157254174+00:00 stderr F I0813 20:42:37.157149 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.157254174+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.157254174+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.157254174+00:00 stderr F ... // 10 identical elements
2025-08-13T20:42:37.157254174+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:42:37.157254174+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:42:37.157254174+00:00 stderr F - {
2025-08-13T20:42:37.157254174+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:42:37.157254174+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.157254174+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:42:37.157254174+00:00 stderr F - },
2025-08-13T20:42:37.157254174+00:00 stderr F + {
2025-08-13T20:42:37.157254174+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:42:37.157254174+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.157254174+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.153704092 +0000 UTC m=+2476.358803346",
2025-08-13T20:42:37.157254174+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:37.157254174+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:42:37.157254174+00:00 stderr F + },
2025-08-13T20:42:37.157254174+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:42:37.157254174+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:37.157254174+00:00 stderr F ... // 47 identical elements
2025-08-13T20:42:37.157254174+00:00 stderr F },
2025-08-13T20:42:37.157254174+00:00 stderr F Version: "",
2025-08-13T20:42:37.157254174+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.157254174+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.157254174+00:00 stderr F }
2025-08-13T20:42:37.157768069+00:00 stderr F E0813 20:42:37.157694 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.158715096+00:00 stderr F I0813 20:42:37.158674 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.158715096+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.158715096+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.158715096+00:00 stderr F ... // 19 identical elements
2025-08-13T20:42:37.158715096+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:42:37.158715096+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:42:37.158715096+00:00 stderr F - {
2025-08-13T20:42:37.158715096+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:42:37.158715096+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.158715096+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:42:37.158715096+00:00 stderr F - },
2025-08-13T20:42:37.158715096+00:00 stderr F + {
2025-08-13T20:42:37.158715096+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:42:37.158715096+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.158715096+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.156949085 +0000 UTC m=+2476.362048299",
2025-08-13T20:42:37.158715096+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:37.158715096+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:42:37.158715096+00:00 stderr F + },
2025-08-13T20:42:37.158715096+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:42:37.158715096+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:37.158715096+00:00 stderr F ... // 38 identical elements
2025-08-13T20:42:37.158715096+00:00 stderr F },
2025-08-13T20:42:37.158715096+00:00 stderr F Version: "",
2025-08-13T20:42:37.158715096+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.158715096+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.158715096+00:00 stderr F }
2025-08-13T20:42:37.161049123+00:00 stderr F E0813 20:42:37.160953 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.162561037+00:00 stderr F E0813 20:42:37.162500 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.168692684+00:00 stderr F E0813 20:42:37.166942 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.170063273+00:00 stderr F I0813 20:42:37.169966 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.170063273+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.170063273+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.170063273+00:00 stderr F ... // 2 identical elements
2025-08-13T20:42:37.170063273+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:42:37.170063273+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:42:37.170063273+00:00 stderr F - {
2025-08-13T20:42:37.170063273+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:42:37.170063273+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.170063273+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:42:37.170063273+00:00 stderr F - },
2025-08-13T20:42:37.170063273+00:00 stderr F + {
2025-08-13T20:42:37.170063273+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:42:37.170063273+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.170063273+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.162545296 +0000 UTC m=+2476.367644531",
2025-08-13T20:42:37.170063273+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:37.170063273+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:42:37.170063273+00:00 stderr F + },
2025-08-13T20:42:37.170063273+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:42:37.170063273+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:42:37.170063273+00:00 stderr F ... // 55 identical elements
2025-08-13T20:42:37.170063273+00:00 stderr F },
2025-08-13T20:42:37.170063273+00:00 stderr F Version: "",
2025-08-13T20:42:37.170063273+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.170063273+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.170063273+00:00 stderr F }
2025-08-13T20:42:37.170813665+00:00 stderr F E0813 20:42:37.170710 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.171744182+00:00 stderr F E0813 20:42:37.171719 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.174842371+00:00 stderr F I0813 20:42:37.174733 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.174842371+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.174842371+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.174842371+00:00 stderr F ... // 19 identical elements
2025-08-13T20:42:37.174842371+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:42:37.174842371+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:42:37.174842371+00:00 stderr F - {
2025-08-13T20:42:37.174842371+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:42:37.174842371+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.174842371+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:42:37.174842371+00:00 stderr F - },
2025-08-13T20:42:37.174842371+00:00 stderr F + {
2025-08-13T20:42:37.174842371+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:42:37.174842371+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.174842371+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.169093895 +0000 UTC m=+2476.374193069",
2025-08-13T20:42:37.174842371+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:37.174842371+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:42:37.174842371+00:00 stderr F + },
2025-08-13T20:42:37.174842371+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:42:37.174842371+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:37.174842371+00:00 stderr F ... // 38 identical elements
2025-08-13T20:42:37.174842371+00:00 stderr F },
2025-08-13T20:42:37.174842371+00:00 stderr F Version: "",
2025-08-13T20:42:37.174842371+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.174842371+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.174842371+00:00 stderr F }
2025-08-13T20:42:37.175020506+00:00 stderr F E0813 20:42:37.174961 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.175419478+00:00 stderr F E0813 20:42:37.175365 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.175579562+00:00 stderr F I0813 20:42:37.175559 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.175579562+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.175579562+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.175579562+00:00 stderr F ... // 10 identical elements
2025-08-13T20:42:37.175579562+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:42:37.175579562+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:42:37.175579562+00:00 stderr F - {
2025-08-13T20:42:37.175579562+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:42:37.175579562+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.175579562+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:42:37.175579562+00:00 stderr F - },
2025-08-13T20:42:37.175579562+00:00 stderr F + {
2025-08-13T20:42:37.175579562+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:42:37.175579562+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.175579562+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.172106842 +0000 UTC m=+2476.377206116",
2025-08-13T20:42:37.175579562+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:37.175579562+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:42:37.175579562+00:00 stderr F + },
2025-08-13T20:42:37.175579562+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:42:37.175579562+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:37.175579562+00:00 stderr F ... // 47 identical elements
2025-08-13T20:42:37.175579562+00:00 stderr F },
2025-08-13T20:42:37.175579562+00:00 stderr F Version: "",
2025-08-13T20:42:37.175579562+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.175579562+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.175579562+00:00 stderr F }
2025-08-13T20:42:37.177169998+00:00 stderr F E0813 20:42:37.177145 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.178307351+00:00 stderr F E0813 20:42:37.178282 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.181095601+00:00 stderr F I0813 20:42:37.181070 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.181095601+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.181095601+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.181095601+00:00 stderr F ... // 43 identical elements
2025-08-13T20:42:37.181095601+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:42:37.181095601+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:42:37.181095601+00:00 stderr F - {
2025-08-13T20:42:37.181095601+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:42:37.181095601+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.181095601+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:42:37.181095601+00:00 stderr F - },
2025-08-13T20:42:37.181095601+00:00 stderr F + {
2025-08-13T20:42:37.181095601+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:42:37.181095601+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.181095601+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.178357482 +0000 UTC m=+2476.383456656",
2025-08-13T20:42:37.181095601+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:42:37.181095601+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:42:37.181095601+00:00 stderr F + },
2025-08-13T20:42:37.181095601+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:42:37.181095601+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:42:37.181095601+00:00 stderr F ... // 14 identical elements
2025-08-13T20:42:37.181095601+00:00 stderr F },
2025-08-13T20:42:37.181095601+00:00 stderr F Version: "",
2025-08-13T20:42:37.181095601+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.181095601+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.181095601+00:00 stderr F }
2025-08-13T20:42:37.181326208+00:00 stderr F E0813 20:42:37.181273 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.182698968+00:00 stderr F E0813 20:42:37.182673 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.184335625+00:00 stderr F E0813 20:42:37.183135 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.193748266+00:00 stderr F I0813 20:42:37.193682 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.193748266+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.193748266+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.193748266+00:00 stderr F ... // 10 identical elements
2025-08-13T20:42:37.193748266+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:42:37.193748266+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:42:37.193748266+00:00 stderr F - {
2025-08-13T20:42:37.193748266+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:42:37.193748266+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.193748266+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:42:37.193748266+00:00 stderr F - },
2025-08-13T20:42:37.193748266+00:00 stderr F + {
2025-08-13T20:42:37.193748266+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:42:37.193748266+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.193748266+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.181330718 +0000 UTC m=+2476.386429932",
2025-08-13T20:42:37.193748266+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:37.193748266+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:42:37.193748266+00:00 stderr F + },
2025-08-13T20:42:37.193748266+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:42:37.193748266+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:37.193748266+00:00 stderr F ... // 47 identical elements
2025-08-13T20:42:37.193748266+00:00 stderr F },
2025-08-13T20:42:37.193748266+00:00 stderr F Version: "",
2025-08-13T20:42:37.193748266+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.193748266+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.193748266+00:00 stderr F }
2025-08-13T20:42:37.196264659+00:00 stderr F E0813 20:42:37.195842 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.197475353+00:00 stderr F I0813 20:42:37.197447 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.197475353+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.197475353+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.197475353+00:00 stderr F ... // 2 identical elements
2025-08-13T20:42:37.197475353+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:42:37.197475353+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:42:37.197475353+00:00 stderr F - {
2025-08-13T20:42:37.197475353+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:42:37.197475353+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.197475353+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:42:37.197475353+00:00 stderr F - },
2025-08-13T20:42:37.197475353+00:00 stderr F + {
2025-08-13T20:42:37.197475353+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:42:37.197475353+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.197475353+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.182755359 +0000 UTC m=+2476.387854593",
2025-08-13T20:42:37.197475353+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:37.197475353+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:42:37.197475353+00:00 stderr F + },
2025-08-13T20:42:37.197475353+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:42:37.197475353+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:42:37.197475353+00:00 stderr F ... // 55 identical elements
2025-08-13T20:42:37.197475353+00:00 stderr F },
2025-08-13T20:42:37.197475353+00:00 stderr F Version: "",
2025-08-13T20:42:37.197475353+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.197475353+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.197475353+00:00 stderr F }
2025-08-13T20:42:37.197866805+00:00 stderr F I0813 20:42:37.197734 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.197866805+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.197866805+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.197866805+00:00 stderr F ... // 19 identical elements
2025-08-13T20:42:37.197866805+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:42:37.197866805+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:42:37.197866805+00:00 stderr F - {
2025-08-13T20:42:37.197866805+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:42:37.197866805+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.197866805+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:42:37.197866805+00:00 stderr F - },
2025-08-13T20:42:37.197866805+00:00 stderr F + {
2025-08-13T20:42:37.197866805+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:42:37.197866805+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.197866805+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.195891338 +0000 UTC m=+2476.400990662",
2025-08-13T20:42:37.197866805+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:37.197866805+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:42:37.197866805+00:00 stderr F + },
2025-08-13T20:42:37.197866805+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:42:37.197866805+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:37.197866805+00:00 stderr F ... // 38 identical elements
2025-08-13T20:42:37.197866805+00:00 stderr F },
2025-08-13T20:42:37.197866805+00:00 stderr F Version: "",
2025-08-13T20:42:37.197866805+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.197866805+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.197866805+00:00 stderr F }
2025-08-13T20:42:37.201089658+00:00 stderr F I0813 20:42:37.199658 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.201089658+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.201089658+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.201089658+00:00 stderr F ... // 19 identical elements
2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:42:37.201089658+00:00 stderr F - {
2025-08-13T20:42:37.201089658+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:42:37.201089658+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.201089658+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:42:37.201089658+00:00 stderr F - },
2025-08-13T20:42:37.201089658+00:00 stderr F + {
2025-08-13T20:42:37.201089658+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:42:37.201089658+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.201089658+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.175048387 +0000 UTC m=+2476.380147631",
2025-08-13T20:42:37.201089658+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:37.201089658+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:42:37.201089658+00:00 stderr F + },
2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:37.201089658+00:00 stderr F ... // 38 identical elements
2025-08-13T20:42:37.201089658+00:00 stderr F },
2025-08-13T20:42:37.201089658+00:00 stderr F Version: "",
2025-08-13T20:42:37.201089658+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:37.201089658+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:37.201089658+00:00 stderr F }
2025-08-13T20:42:37.201089658+00:00 stderr F I0813 20:42:37.200041 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:37.201089658+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:37.201089658+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:37.201089658+00:00 stderr F ... // 10 identical elements
2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:42:37.201089658+00:00 stderr F - {
2025-08-13T20:42:37.201089658+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:42:37.201089658+00:00 stderr F - Status: "False",
2025-08-13T20:42:37.201089658+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:42:37.201089658+00:00 stderr F - },
2025-08-13T20:42:37.201089658+00:00 stderr F + {
2025-08-13T20:42:37.201089658+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:42:37.201089658+00:00 stderr F + Status: "True",
2025-08-13T20:42:37.201089658+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.183194532 +0000 UTC m=+2476.388293796",
2025-08-13T20:42:37.201089658+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:37.201089658+00:00 stderr F + Message: `Get
"https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:42:37.201089658+00:00 stderr F + }, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.201089658+00:00 stderr F ... // 47 identical elements 2025-08-13T20:42:37.201089658+00:00 stderr F }, 2025-08-13T20:42:37.201089658+00:00 stderr F Version: "", 2025-08-13T20:42:37.201089658+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.201089658+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.201089658+00:00 stderr F } 2025-08-13T20:42:37.278750157+00:00 stderr F E0813 20:42:37.278081 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.361374069+00:00 stderr F E0813 20:42:37.360021 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.362399578+00:00 stderr F I0813 20:42:37.361872 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.362399578+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.362399578+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.362399578+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:37.362399578+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:37.362399578+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:37.362399578+00:00 stderr F - { 2025-08-13T20:42:37.362399578+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.362399578+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.362399578+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:37.362399578+00:00 stderr F - }, 2025-08-13T20:42:37.362399578+00:00 stderr F + { 2025-08-13T20:42:37.362399578+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:37.362399578+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.362399578+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.360093372 +0000 UTC m=+2476.565192556", 2025-08-13T20:42:37.362399578+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:37.362399578+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:37.362399578+00:00 stderr F + }, 2025-08-13T20:42:37.362399578+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:37.362399578+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.362399578+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:42:37.362399578+00:00 stderr F }, 2025-08-13T20:42:37.362399578+00:00 stderr F Version: "", 2025-08-13T20:42:37.362399578+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.362399578+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.362399578+00:00 stderr F } 2025-08-13T20:42:37.480029010+00:00 stderr F E0813 20:42:37.478833 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.499478650+00:00 stderr F E0813 20:42:37.499402 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.508584733+00:00 stderr F I0813 20:42:37.508555 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.508584733+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.508584733+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.508584733+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:37.508584733+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:37.508584733+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:37.508584733+00:00 stderr F - { 2025-08-13T20:42:37.508584733+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.508584733+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.508584733+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:37.508584733+00:00 stderr F - }, 2025-08-13T20:42:37.508584733+00:00 stderr F + { 2025-08-13T20:42:37.508584733+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:37.508584733+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.508584733+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.499967515 +0000 UTC m=+2476.705066729", 2025-08-13T20:42:37.508584733+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.508584733+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:42:37.508584733+00:00 stderr F + }, 2025-08-13T20:42:37.508584733+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:37.508584733+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.508584733+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:42:37.508584733+00:00 stderr F }, 2025-08-13T20:42:37.508584733+00:00 stderr F Version: "", 2025-08-13T20:42:37.508584733+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.508584733+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.508584733+00:00 stderr F } 2025-08-13T20:42:37.678252325+00:00 stderr F E0813 20:42:37.678114 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.709321850+00:00 stderr F E0813 20:42:37.709216 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.714349285+00:00 stderr F I0813 20:42:37.714323 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.714349285+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.714349285+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.714349285+00:00 stderr F ... 
// 2 identical elements 2025-08-13T20:42:37.714349285+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.714349285+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:42:37.714349285+00:00 stderr F - { 2025-08-13T20:42:37.714349285+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.714349285+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.714349285+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC", 2025-08-13T20:42:37.714349285+00:00 stderr F - }, 2025-08-13T20:42:37.714349285+00:00 stderr F + { 2025-08-13T20:42:37.714349285+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded", 2025-08-13T20:42:37.714349285+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.714349285+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.709431683 +0000 UTC m=+2476.914531117", 2025-08-13T20:42:37.714349285+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.714349285+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `..., 2025-08-13T20:42:37.714349285+00:00 stderr F + }, 2025-08-13T20:42:37.714349285+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:42:37.714349285+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:42:37.714349285+00:00 stderr F ... 
// 55 identical elements 2025-08-13T20:42:37.714349285+00:00 stderr F }, 2025-08-13T20:42:37.714349285+00:00 stderr F Version: "", 2025-08-13T20:42:37.714349285+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.714349285+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.714349285+00:00 stderr F } 2025-08-13T20:42:37.880558617+00:00 stderr F E0813 20:42:37.880503 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.902973793+00:00 stderr F E0813 20:42:37.902903 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.905525307+00:00 stderr F I0813 20:42:37.905036 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:37.905525307+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:37.905525307+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:37.905525307+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:37.905525307+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:37.905525307+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:37.905525307+00:00 stderr F - { 2025-08-13T20:42:37.905525307+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:37.905525307+00:00 stderr F - Status: "False", 2025-08-13T20:42:37.905525307+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:37.905525307+00:00 stderr F - }, 2025-08-13T20:42:37.905525307+00:00 stderr F + { 2025-08-13T20:42:37.905525307+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:37.905525307+00:00 stderr F + Status: "True", 2025-08-13T20:42:37.905525307+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:37.902964833 +0000 UTC m=+2477.108064057", 2025-08-13T20:42:37.905525307+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:37.905525307+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`..., 2025-08-13T20:42:37.905525307+00:00 stderr F + }, 2025-08-13T20:42:37.905525307+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:37.905525307+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:37.905525307+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:42:37.905525307+00:00 stderr F }, 2025-08-13T20:42:37.905525307+00:00 stderr F Version: "", 2025-08-13T20:42:37.905525307+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:37.905525307+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:37.905525307+00:00 stderr F } 2025-08-13T20:42:38.079416610+00:00 stderr F E0813 20:42:38.079292 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.087560055+00:00 stderr F E0813 20:42:38.087532 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.095856864+00:00 stderr F I0813 20:42:38.095829 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.095856864+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.095856864+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.095856864+00:00 stderr F ... 
// 19 identical elements 2025-08-13T20:42:38.095856864+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}}, 2025-08-13T20:42:38.095856864+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}}, 2025-08-13T20:42:38.095856864+00:00 stderr F - { 2025-08-13T20:42:38.095856864+00:00 stderr F - Type: "PDBSyncDegraded", 2025-08-13T20:42:38.095856864+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.095856864+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC", 2025-08-13T20:42:38.095856864+00:00 stderr F - }, 2025-08-13T20:42:38.095856864+00:00 stderr F + { 2025-08-13T20:42:38.095856864+00:00 stderr F + Type: "PDBSyncDegraded", 2025-08-13T20:42:38.095856864+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.095856864+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.087688979 +0000 UTC m=+2477.292788183", 2025-08-13T20:42:38.095856864+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:38.095856864+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `..., 2025-08-13T20:42:38.095856864+00:00 stderr F + }, 2025-08-13T20:42:38.095856864+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}}, 2025-08-13T20:42:38.095856864+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:38.095856864+00:00 stderr F ... 
// 38 identical elements 2025-08-13T20:42:38.095856864+00:00 stderr F }, 2025-08-13T20:42:38.095856864+00:00 stderr F Version: "", 2025-08-13T20:42:38.095856864+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.095856864+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.095856864+00:00 stderr F } 2025-08-13T20:42:38.281930229+00:00 stderr F I0813 20:42:38.281864 1 request.go:697] Waited for 1.077706361s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status 2025-08-13T20:42:38.284905095+00:00 stderr F E0813 20:42:38.284725 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.297294822+00:00 stderr F E0813 20:42:38.297210 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.299001191+00:00 stderr F I0813 20:42:38.298970 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.299001191+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.299001191+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.299001191+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:38.299001191+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:38.299001191+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:38.299001191+00:00 stderr F - { 2025-08-13T20:42:38.299001191+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:38.299001191+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.299001191+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:38.299001191+00:00 stderr F - }, 2025-08-13T20:42:38.299001191+00:00 stderr F + { 2025-08-13T20:42:38.299001191+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:38.299001191+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.299001191+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.297415085 +0000 UTC m=+2477.502514279", 2025-08-13T20:42:38.299001191+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:38.299001191+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `..., 2025-08-13T20:42:38.299001191+00:00 stderr F + }, 2025-08-13T20:42:38.299001191+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:38.299001191+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:38.299001191+00:00 stderr F ... 
// 47 identical elements 2025-08-13T20:42:38.299001191+00:00 stderr F }, 2025-08-13T20:42:38.299001191+00:00 stderr F Version: "", 2025-08-13T20:42:38.299001191+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.299001191+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.299001191+00:00 stderr F } 2025-08-13T20:42:38.481760610+00:00 stderr F E0813 20:42:38.481054 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.645268924+00:00 stderr F E0813 20:42:38.643682 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.647826998+00:00 stderr F I0813 20:42:38.645476 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.647826998+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.647826998+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.647826998+00:00 stderr F ... 
// 43 identical elements 2025-08-13T20:42:38.647826998+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:42:38.647826998+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}}, 2025-08-13T20:42:38.647826998+00:00 stderr F - { 2025-08-13T20:42:38.647826998+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:38.647826998+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.647826998+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC", 2025-08-13T20:42:38.647826998+00:00 stderr F - }, 2025-08-13T20:42:38.647826998+00:00 stderr F + { 2025-08-13T20:42:38.647826998+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded", 2025-08-13T20:42:38.647826998+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.647826998+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.64373663 +0000 UTC m=+2477.848835834", 2025-08-13T20:42:38.647826998+00:00 stderr F + Reason: "FailedDelete", 2025-08-13T20:42:38.647826998+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`..., 2025-08-13T20:42:38.647826998+00:00 stderr F + }, 2025-08-13T20:42:38.647826998+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:42:38.647826998+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:38.647826998+00:00 stderr F ... 
// 14 identical elements 2025-08-13T20:42:38.647826998+00:00 stderr F }, 2025-08-13T20:42:38.647826998+00:00 stderr F Version: "", 2025-08-13T20:42:38.647826998+00:00 stderr F ReadyReplicas: 1, 2025-08-13T20:42:38.647826998+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:42:38.647826998+00:00 stderr F } 2025-08-13T20:42:38.679048798+00:00 stderr F E0813 20:42:38.677606 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.699143977+00:00 stderr F E0813 20:42:38.699048 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.702053281+00:00 stderr F I0813 20:42:38.700674 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{ 2025-08-13T20:42:38.702053281+00:00 stderr F ObservedGeneration: 1, 2025-08-13T20:42:38.702053281+00:00 stderr F Conditions: []v1.OperatorCondition{ 2025-08-13T20:42:38.702053281+00:00 stderr F ... 
// 10 identical elements 2025-08-13T20:42:38.702053281+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:42:38.702053281+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-08-13T20:42:38.702053281+00:00 stderr F - { 2025-08-13T20:42:38.702053281+00:00 stderr F - Type: "ServiceSyncDegraded", 2025-08-13T20:42:38.702053281+00:00 stderr F - Status: "False", 2025-08-13T20:42:38.702053281+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC", 2025-08-13T20:42:38.702053281+00:00 stderr F - }, 2025-08-13T20:42:38.702053281+00:00 stderr F + { 2025-08-13T20:42:38.702053281+00:00 stderr F + Type: "ServiceSyncDegraded", 2025-08-13T20:42:38.702053281+00:00 stderr F + Status: "True", 2025-08-13T20:42:38.702053281+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.699106996 +0000 UTC m=+2477.904206200", 2025-08-13T20:42:38.702053281+00:00 stderr F + Reason: "FailedApply", 2025-08-13T20:42:38.702053281+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`..., 2025-08-13T20:42:38.702053281+00:00 stderr F + }, 2025-08-13T20:42:38.702053281+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:42:38.702053281+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:42:38.702053281+00:00 stderr F ... 
// 47 identical elements
2025-08-13T20:42:38.702053281+00:00 stderr F },
2025-08-13T20:42:38.702053281+00:00 stderr F Version: "",
2025-08-13T20:42:38.702053281+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:38.702053281+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:38.702053281+00:00 stderr F }
2025-08-13T20:42:38.877903141+00:00 stderr F E0813 20:42:38.877464 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:38.920318844+00:00 stderr F E0813 20:42:38.920260 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:38.922484606+00:00 stderr F I0813 20:42:38.922457 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:38.922484606+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:38.922484606+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:38.922484606+00:00 stderr F ... // 2 identical elements
2025-08-13T20:42:38.922484606+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:42:38.922484606+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:42:38.922484606+00:00 stderr F - {
2025-08-13T20:42:38.922484606+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:42:38.922484606+00:00 stderr F - Status: "False",
2025-08-13T20:42:38.922484606+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:42:38.922484606+00:00 stderr F - },
2025-08-13T20:42:38.922484606+00:00 stderr F + {
2025-08-13T20:42:38.922484606+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:42:38.922484606+00:00 stderr F + Status: "True",
2025-08-13T20:42:38.922484606+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:38.920400506 +0000 UTC m=+2478.125499590",
2025-08-13T20:42:38.922484606+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:38.922484606+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:42:38.922484606+00:00 stderr F + },
2025-08-13T20:42:38.922484606+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:42:38.922484606+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:42:38.922484606+00:00 stderr F ... // 55 identical elements
2025-08-13T20:42:38.922484606+00:00 stderr F },
2025-08-13T20:42:38.922484606+00:00 stderr F Version: "",
2025-08-13T20:42:38.922484606+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:38.922484606+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:38.922484606+00:00 stderr F }
2025-08-13T20:42:39.078488314+00:00 stderr F E0813 20:42:39.077955 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.122207924+00:00 stderr F E0813 20:42:39.122164 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.123761269+00:00 stderr F I0813 20:42:39.123735 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:39.123761269+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:39.123761269+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:39.123761269+00:00 stderr F ... // 19 identical elements
2025-08-13T20:42:39.123761269+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:42:39.123761269+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:42:39.123761269+00:00 stderr F - {
2025-08-13T20:42:39.123761269+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:42:39.123761269+00:00 stderr F - Status: "False",
2025-08-13T20:42:39.123761269+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:42:39.123761269+00:00 stderr F - },
2025-08-13T20:42:39.123761269+00:00 stderr F + {
2025-08-13T20:42:39.123761269+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:42:39.123761269+00:00 stderr F + Status: "True",
2025-08-13T20:42:39.123761269+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.122300367 +0000 UTC m=+2478.327399581",
2025-08-13T20:42:39.123761269+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:39.123761269+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:42:39.123761269+00:00 stderr F + },
2025-08-13T20:42:39.123761269+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:42:39.123761269+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:39.123761269+00:00 stderr F ... // 38 identical elements
2025-08-13T20:42:39.123761269+00:00 stderr F },
2025-08-13T20:42:39.123761269+00:00 stderr F Version: "",
2025-08-13T20:42:39.123761269+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:39.123761269+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:39.123761269+00:00 stderr F }
2025-08-13T20:42:39.279080997+00:00 stderr F E0813 20:42:39.278603 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.291431193+00:00 stderr F E0813 20:42:39.291116 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.295216782+00:00 stderr F I0813 20:42:39.294507 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:39.295216782+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:39.295216782+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:39.295216782+00:00 stderr F ... // 19 identical elements
2025-08-13T20:42:39.295216782+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:42:39.295216782+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:42:39.295216782+00:00 stderr F - {
2025-08-13T20:42:39.295216782+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:42:39.295216782+00:00 stderr F - Status: "False",
2025-08-13T20:42:39.295216782+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:42:39.295216782+00:00 stderr F - },
2025-08-13T20:42:39.295216782+00:00 stderr F + {
2025-08-13T20:42:39.295216782+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:42:39.295216782+00:00 stderr F + Status: "True",
2025-08-13T20:42:39.295216782+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.291218057 +0000 UTC m=+2478.496317251",
2025-08-13T20:42:39.295216782+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:39.295216782+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:42:39.295216782+00:00 stderr F + },
2025-08-13T20:42:39.295216782+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:42:39.295216782+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:39.295216782+00:00 stderr F ... // 38 identical elements
2025-08-13T20:42:39.295216782+00:00 stderr F },
2025-08-13T20:42:39.295216782+00:00 stderr F Version: "",
2025-08-13T20:42:39.295216782+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:39.295216782+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:39.295216782+00:00 stderr F }
2025-08-13T20:42:39.478261130+00:00 stderr F I0813 20:42:39.477886 1 request.go:697] Waited for 1.178505017s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status
2025-08-13T20:42:39.479417903+00:00 stderr F E0813 20:42:39.479296 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.501115318+00:00 stderr F E0813 20:42:39.501043 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.503024893+00:00 stderr F I0813 20:42:39.502969 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:39.503024893+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:39.503024893+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:39.503024893+00:00 stderr F ... // 10 identical elements
2025-08-13T20:42:39.503024893+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:42:39.503024893+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:42:39.503024893+00:00 stderr F - {
2025-08-13T20:42:39.503024893+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:42:39.503024893+00:00 stderr F - Status: "False",
2025-08-13T20:42:39.503024893+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:42:39.503024893+00:00 stderr F - },
2025-08-13T20:42:39.503024893+00:00 stderr F + {
2025-08-13T20:42:39.503024893+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:42:39.503024893+00:00 stderr F + Status: "True",
2025-08-13T20:42:39.503024893+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.501092018 +0000 UTC m=+2478.706191252",
2025-08-13T20:42:39.503024893+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:39.503024893+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:42:39.503024893+00:00 stderr F + },
2025-08-13T20:42:39.503024893+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:42:39.503024893+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:39.503024893+00:00 stderr F ... // 47 identical elements
2025-08-13T20:42:39.503024893+00:00 stderr F },
2025-08-13T20:42:39.503024893+00:00 stderr F Version: "",
2025-08-13T20:42:39.503024893+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:39.503024893+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:39.503024893+00:00 stderr F }
2025-08-13T20:42:39.678143251+00:00 stderr F E0813 20:42:39.678091 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.877598372+00:00 stderr F E0813 20:42:39.877538 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.919911252+00:00 stderr F E0813 20:42:39.919849 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.922002062+00:00 stderr F I0813 20:42:39.921973 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:39.922002062+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:39.922002062+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:39.922002062+00:00 stderr F ... // 10 identical elements
2025-08-13T20:42:39.922002062+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:42:39.922002062+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:42:39.922002062+00:00 stderr F - {
2025-08-13T20:42:39.922002062+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:42:39.922002062+00:00 stderr F - Status: "False",
2025-08-13T20:42:39.922002062+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:42:39.922002062+00:00 stderr F - },
2025-08-13T20:42:39.922002062+00:00 stderr F + {
2025-08-13T20:42:39.922002062+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:42:39.922002062+00:00 stderr F + Status: "True",
2025-08-13T20:42:39.922002062+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.920004324 +0000 UTC m=+2479.125103398",
2025-08-13T20:42:39.922002062+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:39.922002062+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/downloads": dial tcp 10.217.4.1:443: connect: connectio`...,
2025-08-13T20:42:39.922002062+00:00 stderr F + },
2025-08-13T20:42:39.922002062+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:42:39.922002062+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:39.922002062+00:00 stderr F ... // 47 identical elements
2025-08-13T20:42:39.922002062+00:00 stderr F },
2025-08-13T20:42:39.922002062+00:00 stderr F Version: "",
2025-08-13T20:42:39.922002062+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:39.922002062+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:39.922002062+00:00 stderr F }
2025-08-13T20:42:39.999384503+00:00 stderr F E0813 20:42:39.999332 1 status.go:130] ConsoleNotificationSyncDegraded FailedDelete Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.001633568+00:00 stderr F I0813 20:42:40.001606 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:40.001633568+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:40.001633568+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:40.001633568+00:00 stderr F ... // 43 identical elements
2025-08-13T20:42:40.001633568+00:00 stderr F {Type: "RedirectServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-08-13T20:42:40.001633568+00:00 stderr F {Type: "ManagementStateDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:15 +0000 UTC"}},
2025-08-13T20:42:40.001633568+00:00 stderr F - {
2025-08-13T20:42:40.001633568+00:00 stderr F - Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:42:40.001633568+00:00 stderr F - Status: "False",
2025-08-13T20:42:40.001633568+00:00 stderr F - LastTransitionTime: s"2024-06-26 12:54:17 +0000 UTC",
2025-08-13T20:42:40.001633568+00:00 stderr F - },
2025-08-13T20:42:40.001633568+00:00 stderr F + {
2025-08-13T20:42:40.001633568+00:00 stderr F + Type: "ConsoleNotificationSyncDegraded",
2025-08-13T20:42:40.001633568+00:00 stderr F + Status: "True",
2025-08-13T20:42:40.001633568+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:39.999473585 +0000 UTC m=+2479.204572669",
2025-08-13T20:42:40.001633568+00:00 stderr F + Reason: "FailedDelete",
2025-08-13T20:42:40.001633568+00:00 stderr F + Message: `Delete "https://10.217.4.1:443/apis/console.openshift.io/v1/consolenotifications/cluster-upgrade": dial tcp 10.217.4.1:443: conn`...,
2025-08-13T20:42:40.001633568+00:00 stderr F + },
2025-08-13T20:42:40.001633568+00:00 stderr F {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}},
2025-08-13T20:42:40.001633568+00:00 stderr F {Type: "ConsoleCustomRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:42:40.001633568+00:00 stderr F ... // 14 identical elements
2025-08-13T20:42:40.001633568+00:00 stderr F },
2025-08-13T20:42:40.001633568+00:00 stderr F Version: "",
2025-08-13T20:42:40.001633568+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:40.001633568+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:40.001633568+00:00 stderr F }
2025-08-13T20:42:40.077993219+00:00 stderr F E0813 20:42:40.077941 1 base_controller.go:268] ConsoleDownloadsDeploymentSyncController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.160942641+00:00 stderr F E0813 20:42:40.160874 1 status.go:130] DownloadsDeploymentSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.163209416+00:00 stderr F I0813 20:42:40.163164 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:40.163209416+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:40.163209416+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:40.163209416+00:00 stderr F ... // 2 identical elements
2025-08-13T20:42:40.163209416+00:00 stderr F {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:42:40.163209416+00:00 stderr F {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...},
2025-08-13T20:42:40.163209416+00:00 stderr F - {
2025-08-13T20:42:40.163209416+00:00 stderr F - Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:42:40.163209416+00:00 stderr F - Status: "False",
2025-08-13T20:42:40.163209416+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:41 +0000 UTC",
2025-08-13T20:42:40.163209416+00:00 stderr F - },
2025-08-13T20:42:40.163209416+00:00 stderr F + {
2025-08-13T20:42:40.163209416+00:00 stderr F + Type: "DownloadsDeploymentSyncDegraded",
2025-08-13T20:42:40.163209416+00:00 stderr F + Status: "True",
2025-08-13T20:42:40.163209416+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:40.16092296 +0000 UTC m=+2479.366022164",
2025-08-13T20:42:40.163209416+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:40.163209416+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads": dial tcp 10.217.4.1:443: connect: `...,
2025-08-13T20:42:40.163209416+00:00 stderr F + },
2025-08-13T20:42:40.163209416+00:00 stderr F {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-08-13T20:42:40.163209416+00:00 stderr F {Type: "DownloadsDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}},
2025-08-13T20:42:40.163209416+00:00 stderr F ... // 55 identical elements
2025-08-13T20:42:40.163209416+00:00 stderr F },
2025-08-13T20:42:40.163209416+00:00 stderr F Version: "",
2025-08-13T20:42:40.163209416+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:40.163209416+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:40.163209416+00:00 stderr F }
2025-08-13T20:42:40.278471909+00:00 stderr F E0813 20:42:40.278269 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.359349001+00:00 stderr F E0813 20:42:40.359265 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.361268266+00:00 stderr F I0813 20:42:40.361120 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:40.361268266+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:40.361268266+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:40.361268266+00:00 stderr F ... // 19 identical elements
2025-08-13T20:42:40.361268266+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:42:40.361268266+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:42:40.361268266+00:00 stderr F - {
2025-08-13T20:42:40.361268266+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:42:40.361268266+00:00 stderr F - Status: "False",
2025-08-13T20:42:40.361268266+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:42:40.361268266+00:00 stderr F - },
2025-08-13T20:42:40.361268266+00:00 stderr F + {
2025-08-13T20:42:40.361268266+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:42:40.361268266+00:00 stderr F + Status: "True",
2025-08-13T20:42:40.361268266+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:40.35931355 +0000 UTC m=+2479.564412834",
2025-08-13T20:42:40.361268266+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:40.361268266+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads": dial tcp 10.217.4.1:443`...,
2025-08-13T20:42:40.361268266+00:00 stderr F + },
2025-08-13T20:42:40.361268266+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:42:40.361268266+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:40.361268266+00:00 stderr F ... // 38 identical elements
2025-08-13T20:42:40.361268266+00:00 stderr F },
2025-08-13T20:42:40.361268266+00:00 stderr F Version: "",
2025-08-13T20:42:40.361268266+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:40.361268266+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:40.361268266+00:00 stderr F }
2025-08-13T20:42:40.478274389+00:00 stderr F E0813 20:42:40.478160 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.500108769+00:00 stderr F E0813 20:42:40.499999 1 status.go:130] PDBSyncDegraded FailedApply Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.502410005+00:00 stderr F I0813 20:42:40.502353 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:40.502410005+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:40.502410005+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:40.502410005+00:00 stderr F ... // 19 identical elements
2025-08-13T20:42:40.502410005+00:00 stderr F {Type: "ODODownloadsSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:58 +0000 UTC"}},
2025-08-13T20:42:40.502410005+00:00 stderr F {Type: "ResourceSyncControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:01 +0000 UTC"}},
2025-08-13T20:42:40.502410005+00:00 stderr F - {
2025-08-13T20:42:40.502410005+00:00 stderr F - Type: "PDBSyncDegraded",
2025-08-13T20:42:40.502410005+00:00 stderr F - Status: "False",
2025-08-13T20:42:40.502410005+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:17:28 +0000 UTC",
2025-08-13T20:42:40.502410005+00:00 stderr F - },
2025-08-13T20:42:40.502410005+00:00 stderr F + {
2025-08-13T20:42:40.502410005+00:00 stderr F + Type: "PDBSyncDegraded",
2025-08-13T20:42:40.502410005+00:00 stderr F + Status: "True",
2025-08-13T20:42:40.502410005+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:40.500058417 +0000 UTC m=+2479.705157681",
2025-08-13T20:42:40.502410005+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:40.502410005+00:00 stderr F + Message: `Get "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console": dial tcp 10.217.4.1:443: `...,
2025-08-13T20:42:40.502410005+00:00 stderr F + },
2025-08-13T20:42:40.502410005+00:00 stderr F {Type: "PDBSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:03 +0000 UTC"}},
2025-08-13T20:42:40.502410005+00:00 stderr F {Type: "ConsoleConfigDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:40.502410005+00:00 stderr F ... // 38 identical elements
2025-08-13T20:42:40.502410005+00:00 stderr F },
2025-08-13T20:42:40.502410005+00:00 stderr F Version: "",
2025-08-13T20:42:40.502410005+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:40.502410005+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:40.502410005+00:00 stderr F }
2025-08-13T20:42:40.677891154+00:00 stderr F I0813 20:42:40.677742 1 request.go:697] Waited for 1.174569142s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status
2025-08-13T20:42:40.678918274+00:00 stderr F E0813 20:42:40.678855 1 base_controller.go:268] ConsoleServiceController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.721323527+00:00 stderr F E0813 20:42:40.721165 1 status.go:130] ServiceSyncDegraded FailedApply Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.724381145+00:00 stderr F I0813 20:42:40.724319 1 helpers.go:201] Operator status changed: &v1.OperatorStatus{
2025-08-13T20:42:40.724381145+00:00 stderr F ObservedGeneration: 1,
2025-08-13T20:42:40.724381145+00:00 stderr F Conditions: []v1.OperatorCondition{
2025-08-13T20:42:40.724381145+00:00 stderr F ... // 10 identical elements
2025-08-13T20:42:40.724381145+00:00 stderr F {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}},
2025-08-13T20:42:40.724381145+00:00 stderr F {Type: "DownloadsCustomRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-08-13T20:42:40.724381145+00:00 stderr F - {
2025-08-13T20:42:40.724381145+00:00 stderr F - Type: "ServiceSyncDegraded",
2025-08-13T20:42:40.724381145+00:00 stderr F - Status: "False",
2025-08-13T20:42:40.724381145+00:00 stderr F - LastTransitionTime: s"2024-06-27 13:29:40 +0000 UTC",
2025-08-13T20:42:40.724381145+00:00 stderr F - },
2025-08-13T20:42:40.724381145+00:00 stderr F + {
2025-08-13T20:42:40.724381145+00:00 stderr F + Type: "ServiceSyncDegraded",
2025-08-13T20:42:40.724381145+00:00 stderr F + Status: "True",
2025-08-13T20:42:40.724381145+00:00 stderr F + LastTransitionTime: s"2025-08-13 20:42:40.721275235 +0000 UTC m=+2479.926374889",
2025-08-13T20:42:40.724381145+00:00 stderr F + Reason: "FailedApply",
2025-08-13T20:42:40.724381145+00:00 stderr F + Message: `Get "https://10.217.4.1:443/api/v1/namespaces/openshift-console/services/console": dial tcp 10.217.4.1:443: connect: connection `...,
2025-08-13T20:42:40.724381145+00:00 stderr F + },
2025-08-13T20:42:40.724381145+00:00 stderr F {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}},
2025-08-13T20:42:40.724381145+00:00 stderr F {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-08-13T20:42:40.724381145+00:00 stderr F ... // 47 identical elements
2025-08-13T20:42:40.724381145+00:00 stderr F },
2025-08-13T20:42:40.724381145+00:00 stderr F Version: "",
2025-08-13T20:42:40.724381145+00:00 stderr F ReadyReplicas: 1,
2025-08-13T20:42:40.724381145+00:00 stderr F Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-08-13T20:42:40.724381145+00:00 stderr F }
2025-08-13T20:42:40.799625384+00:00 stderr F I0813 20:42:40.798651 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller.
2025-08-13T20:42:40.801046815+00:00 stderr F I0813 20:42:40.799884 1 leaderelection.go:285] failed to renew lease openshift-console-operator/console-operator-lock: timed out waiting for the condition
2025-08-13T20:42:40.801210510+00:00 stderr F I0813 20:42:40.801130 1 base_controller.go:172] Shutting down ConsoleCLIDownloadsController ...
2025-08-13T20:42:40.801210510+00:00 stderr F I0813 20:42:40.801202 1 base_controller.go:172] Shutting down HealthCheckController ...
2025-08-13T20:42:40.801294602+00:00 stderr F I0813 20:42:40.801250 1 base_controller.go:172] Shutting down ConsoleOperator ...
2025-08-13T20:42:40.801307573+00:00 stderr F I0813 20:42:40.801290 1 base_controller.go:172] Shutting down DownloadsRouteController ...
2025-08-13T20:42:40.801317543+00:00 stderr F I0813 20:42:40.801310 1 base_controller.go:172] Shutting down ConsoleRouteController ...
2025-08-13T20:42:40.801368394+00:00 stderr F I0813 20:42:40.801326 1 base_controller.go:172] Shutting down OAuthClientsController ...
2025-08-13T20:42:40.801368394+00:00 stderr F I0813 20:42:40.801361 1 base_controller.go:172] Shutting down OIDCSetupController ...
2025-08-13T20:42:40.801384975+00:00 stderr F I0813 20:42:40.801377 1 base_controller.go:172] Shutting down ResourceSyncController ...
2025-08-13T20:42:40.801433016+00:00 stderr F I0813 20:42:40.801394 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ...
2025-08-13T20:42:40.801433016+00:00 stderr F I0813 20:42:40.801424 1 base_controller.go:172] Shutting down OAuthClientSecretController ...
2025-08-13T20:42:40.801445507+00:00 stderr F I0813 20:42:40.801439 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ...
2025-08-13T20:42:40.801457967+00:00 stderr F I0813 20:42:40.801450 1 base_controller.go:172] Shutting down ConsoleDownloadsDeploymentSyncController ...
2025-08-13T20:42:40.801508098+00:00 stderr F I0813 20:42:40.801468 1 base_controller.go:172] Shutting down StatusSyncer_console ...
2025-08-13T20:42:40.801623962+00:00 stderr F I0813 20:42:40.801569 1 base_controller.go:172] Shutting down CLIOIDCClientStatusController ...
2025-08-13T20:42:40.801623962+00:00 stderr F I0813 20:42:40.801605 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ...
2025-08-13T20:42:40.801623962+00:00 stderr F I0813 20:42:40.801618 1 base_controller.go:172] Shutting down ConsoleServiceController ...
2025-08-13T20:42:40.801650342+00:00 stderr F I0813 20:42:40.801630 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ...
2025-08-13T20:42:40.801650342+00:00 stderr F I0813 20:42:40.801642 1 base_controller.go:172] Shutting down ManagementStateController ...
2025-08-13T20:42:40.801660693+00:00 stderr F I0813 20:42:40.801654 1 base_controller.go:172] Shutting down ConsoleServiceController ...
2025-08-13T20:42:40.801673243+00:00 stderr F I0813 20:42:40.801665 1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:42:40.801683563+00:00 stderr F I0813 20:42:40.801676 1 base_controller.go:172] Shutting down ClusterUpgradeNotificationController ...
2025-08-13T20:42:40.801695694+00:00 stderr F I0813 20:42:40.801689 1 base_controller.go:172] Shutting down InformerWithSwitchController ...
2025-08-13T20:42:40.801754715+00:00 stderr F I0813 20:42:40.801711 1 base_controller.go:114] Shutting down worker of CLIOIDCClientStatusController controller ...
2025-08-13T20:42:40.801754715+00:00 stderr F I0813 20:42:40.801736 1 base_controller.go:104] All CLIOIDCClientStatusController workers have been terminated
2025-08-13T20:42:40.801767756+00:00 stderr F I0813 20:42:40.801753 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ...
2025-08-13T20:42:40.801767756+00:00 stderr F I0813 20:42:40.801760 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated
2025-08-13T20:42:40.801870469+00:00 stderr F I0813 20:42:40.801851 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...
2025-08-13T20:42:40.801870469+00:00 stderr F I0813 20:42:40.801860 1 base_controller.go:104] All ManagementStateController workers have been terminated
2025-08-13T20:42:40.801887949+00:00 stderr F I0813 20:42:40.801868 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
2025-08-13T20:42:40.801887949+00:00 stderr F I0813 20:42:40.801874 1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:42:40.801887949+00:00 stderr F I0813 20:42:40.801881 1 base_controller.go:114] Shutting down worker of InformerWithSwitchController controller ...
2025-08-13T20:42:40.801899380+00:00 stderr F I0813 20:42:40.801886 1 base_controller.go:104] All InformerWithSwitchController workers have been terminated
2025-08-13T20:42:40.802006443+00:00 stderr F E0813 20:42:40.801959 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: client rate limiter Wait returned an error: context canceled
2025-08-13T20:42:40.802019993+00:00 stderr F I0813 20:42:40.802004 1 base_controller.go:114] Shutting down worker of PodDisruptionBudgetController controller ...
2025-08-13T20:42:40.802019993+00:00 stderr F I0813 20:42:40.802013 1 base_controller.go:104] All PodDisruptionBudgetController workers have been terminated
2025-08-13T20:42:40.802182328+00:00 stderr F E0813 20:42:40.802113 1 base_controller.go:268] ConsoleServiceController reconciliation failed: client rate limiter Wait returned an error: context canceled
2025-08-13T20:42:40.802334832+00:00 stderr F I0813 20:42:40.801490 1 base_controller.go:150] All StatusSyncer_console post start hooks have been terminated
2025-08-13T20:42:40.802334832+00:00 stderr F I0813 20:42:40.802308 1 base_controller.go:114] Shutting down worker of OAuthClientsController controller ...
2025-08-13T20:42:40.802334832+00:00 stderr F I0813 20:42:40.802328 1 base_controller.go:104] All OAuthClientsController workers have been terminated
2025-08-13T20:42:40.802351263+00:00 stderr F I0813 20:42:40.802338 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...
2025-08-13T20:42:40.802351263+00:00 stderr F I0813 20:42:40.802330 1 base_controller.go:114] Shutting down worker of OIDCSetupController controller ...
2025-08-13T20:42:40.802418525+00:00 stderr F I0813 20:42:40.802367 1 base_controller.go:104] All OIDCSetupController workers have been terminated
2025-08-13T20:42:40.802418525+00:00 stderr F I0813 20:42:40.802347 1 base_controller.go:104] All ResourceSyncController workers have been terminated
2025-08-13T20:42:40.802418525+00:00 stderr F I0813 20:42:40.802406 1 base_controller.go:114] Shutting down worker of StatusSyncer_console controller ...
2025-08-13T20:42:40.802441615+00:00 stderr F I0813 20:42:40.802433 1 base_controller.go:114] Shutting down worker of ConsoleCLIDownloadsController controller ...
2025-08-13T20:42:40.802454836+00:00 stderr F I0813 20:42:40.802445 1 base_controller.go:104] All ConsoleCLIDownloadsController workers have been terminated 2025-08-13T20:42:40.802464726+00:00 stderr F I0813 20:42:40.802447 1 base_controller.go:104] All StatusSyncer_console workers have been terminated 2025-08-13T20:42:40.802464726+00:00 stderr F I0813 20:42:40.802396 1 base_controller.go:114] Shutting down worker of OAuthClientSecretController controller ... 2025-08-13T20:42:40.802474956+00:00 stderr F I0813 20:42:40.802462 1 base_controller.go:114] Shutting down worker of HealthCheckController controller ... 2025-08-13T20:42:40.802474956+00:00 stderr F I0813 20:42:40.802470 1 base_controller.go:104] All OAuthClientSecretController workers have been terminated 2025-08-13T20:42:40.802485237+00:00 stderr F I0813 20:42:40.802387 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:42:40.802485237+00:00 stderr F I0813 20:42:40.802477 1 base_controller.go:104] All HealthCheckController workers have been terminated 2025-08-13T20:42:40.802495347+00:00 stderr F I0813 20:42:40.802480 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:42:40.802505137+00:00 stderr F I0813 20:42:40.802494 1 base_controller.go:114] Shutting down worker of ConsoleOperator controller ... 2025-08-13T20:42:40.802514787+00:00 stderr F I0813 20:42:40.802506 1 base_controller.go:104] All ConsoleOperator workers have been terminated 2025-08-13T20:42:40.802607070+00:00 stderr F I0813 20:42:40.802552 1 base_controller.go:114] Shutting down worker of DownloadsRouteController controller ... 
2025-08-13T20:42:40.802607070+00:00 stderr F E0813 20:42:40.802586 1 base_controller.go:268] PodDisruptionBudgetController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:40.802607070+00:00 stderr F I0813 20:42:40.802593 1 base_controller.go:104] All DownloadsRouteController workers have been terminated 2025-08-13T20:42:40.802622521+00:00 stderr F I0813 20:42:40.802610 1 base_controller.go:114] Shutting down worker of ConsoleRouteController controller ... 2025-08-13T20:42:40.802632471+00:00 stderr F I0813 20:42:40.802621 1 base_controller.go:104] All ConsoleRouteController workers have been terminated 2025-08-13T20:42:40.803512196+00:00 stderr F W0813 20:42:40.803457 1 builder.go:131] graceful termination failed, controllers failed with error: stopped
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log
2025-08-13T19:59:35.430117225+00:00 stderr F I0813 19:59:35.366203 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T19:59:35.430117225+00:00 stderr F I0813 19:59:35.428254 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:35.614651095+00:00 stderr F I0813 19:59:35.614467 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:37.273990016+00:00 stderr F I0813 19:59:37.272267 1 builder.go:299] console-operator version - 2025-08-13T19:59:42.268464924+00:00 stderr F I0813 19:59:42.264480 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:42.268464924+00:00 stderr F W0813 19:59:42.265251 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:42.268464924+00:00 stderr F W0813 19:59:42.265265 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:42.417642327+00:00 stderr F I0813 19:59:42.417146 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:42.447585060+00:00 stderr F I0813 19:59:42.447491 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:42.460890169+00:00 stderr F I0813 19:59:42.447747 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:42.473611792+00:00 stderr F I0813 19:59:42.464867 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:42.473611792+00:00 stderr F I0813 19:59:42.470576 1 leaderelection.go:250] attempting to acquire leader lease openshift-console-operator/console-operator-lock... 
2025-08-13T19:59:42.473611792+00:00 stderr F I0813 19:59:42.471496 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:42.484648657+00:00 stderr F I0813 19:59:42.479569 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:42.485093609+00:00 stderr F I0813 19:59:42.485053 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:42.485156761+00:00 stderr F I0813 19:59:42.485136 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:42.494534208+00:00 stderr F I0813 19:59:42.493042 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.494534208+00:00 stderr F I0813 19:59:42.493284 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:42.624227255+00:00 stderr F I0813 19:59:42.570766 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:42.689951678+00:00 stderr F I0813 19:59:42.685458 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:42.742681721+00:00 stderr F I0813 19:59:42.742026 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:42.742923228+00:00 stderr F E0813 19:59:42.742750 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.742923228+00:00 stderr F E0813 19:59:42.742910 1 
configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.749667020+00:00 stderr F E0813 19:59:42.749548 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.749667020+00:00 stderr F E0813 19:59:42.749610 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.761053454+00:00 stderr F E0813 19:59:42.760935 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.762733642+00:00 stderr F E0813 19:59:42.762656 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.788828846+00:00 stderr F E0813 19:59:42.788226 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.788828846+00:00 stderr F E0813 19:59:42.788471 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.857348119+00:00 stderr F E0813 19:59:42.828627 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.857348119+00:00 stderr F E0813 19:59:42.837936 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.918957066+00:00 stderr F E0813 19:59:42.915022 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.926281904+00:00 stderr F E0813 19:59:42.926087 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.077938407+00:00 stderr F E0813 19:59:43.077073 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.096293421+00:00 stderr F E0813 19:59:43.093707 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.140896612+00:00 stderr F I0813 19:59:43.138759 1 leaderelection.go:260] successfully acquired lease openshift-console-operator/console-operator-lock 2025-08-13T19:59:43.140896612+00:00 stderr F I0813 19:59:43.139101 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-console-operator", Name:"console-operator-lock", UID:"30020b87-25e8-41b0-a858-a4ef10623cf0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28335", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' console-operator-5dbbc74dc9-cp5cd_b845076a-9ad5-4a9c-bbd4-efec7e6dc1b0 became leader 
2025-08-13T19:59:43.398872586+00:00 stderr F E0813 19:59:43.398742 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.414467350+00:00 stderr F E0813 19:59:43.414400 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.612315180+00:00 stderr F I0813 19:59:43.603718 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:43.613618117+00:00 stderr F I0813 19:59:43.613557 1 starter.go:206] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS 
RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:43.623642823+00:00 stderr F I0813 19:59:43.614377 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", 
"MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:45.601309287+00:00 stderr F E0813 19:59:45.600542 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:45.601434231+00:00 stderr F E0813 19:59:45.601415 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:46.883192447+00:00 stderr F E0813 19:59:46.883025 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:46.898282427+00:00 stderr F E0813 19:59:46.898118 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.387849482+00:00 stderr F I0813 19:59:47.387336 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T19:59:47.389257912+00:00 stderr F I0813 19:59:47.388575 1 base_controller.go:67] Waiting for caches to sync for InformerWithSwitchController 
2025-08-13T19:59:47.389257912+00:00 stderr F I0813 19:59:47.388614 1 base_controller.go:73] Caches are synced for InformerWithSwitchController 2025-08-13T19:59:47.389257912+00:00 stderr F I0813 19:59:47.388689 1 base_controller.go:110] Starting #1 worker of InformerWithSwitchController controller ... 2025-08-13T19:59:47.389757287+00:00 stderr F I0813 19:59:47.389689 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:47.391083334+00:00 stderr F I0813 19:59:47.390899 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_console 2025-08-13T19:59:47.391309231+00:00 stderr F I0813 19:59:47.391175 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:47.391326521+00:00 stderr F I0813 19:59:47.391312 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-08-13T19:59:47.391346322+00:00 stderr F I0813 19:59:47.391331 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:47.391359512+00:00 stderr F I0813 19:59:47.391352 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2025-08-13T19:59:47.391435754+00:00 stderr F I0813 19:59:47.391367 1 base_controller.go:67] Waiting for caches to sync for ConsoleRouteController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.391538 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.391745 1 base_controller.go:67] Waiting for caches to sync for DownloadsRouteController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.391763 1 base_controller.go:67] Waiting for caches to sync for ConsoleOperator 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.392019 1 base_controller.go:67] Waiting for caches to sync for ConsoleCLIDownloadsController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.392231 1 
base_controller.go:67] Waiting for caches to sync for ConsoleDownloadsDeploymentSyncController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398725 1 base_controller.go:67] Waiting for caches to sync for HealthCheckController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398868 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398891 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398909 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398926 1 base_controller.go:67] Waiting for caches to sync for OAuthClientSecretController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398940 1 base_controller.go:67] Waiting for caches to sync for OIDCSetupController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398953 1 base_controller.go:67] Waiting for caches to sync for CLIOIDCClientStatusController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398968 1 base_controller.go:67] Waiting for caches to sync for ClusterUpgradeNotificationController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398973 1 base_controller.go:73] Caches are synced for ClusterUpgradeNotificationController 2025-08-13T19:59:47.400935045+00:00 stderr F I0813 19:59:47.398978 1 base_controller.go:110] Starting #1 worker of ClusterUpgradeNotificationController controller ... 
2025-08-13T19:59:47.401900803+00:00 stderr F E0813 19:59:47.401812 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.410046175+00:00 stderr F E0813 19:59:47.409978 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.420544574+00:00 stderr F E0813 19:59:47.420492 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.447700848+00:00 stderr F E0813 19:59:47.442074 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.489532211+00:00 stderr F E0813 19:59:47.489400 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.585230899+00:00 stderr F E0813 19:59:47.574356 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.742574994+00:00 stderr F W0813 19:59:47.742493 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:47.798358105+00:00 stderr F E0813 19:59:47.794599 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-08-13T19:59:47.823880362+00:00 stderr F W0813 19:59:47.822633 1 reflector.go:539] github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 
2025-08-13T19:59:47.830907522+00:00 stderr F E0813 19:59:47.830857 1 reflector.go:147] github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106: Failed to watch *v1.OAuthClient: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:47.860633060+00:00 stderr F E0813 19:59:47.860553 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:49.510969275+00:00 stderr F I0813 19:59:49.508335 1 base_controller.go:73] Caches are synced for CLIOIDCClientStatusController 2025-08-13T19:59:49.510969275+00:00 stderr F I0813 19:59:49.509186 1 base_controller.go:110] Starting #1 worker of CLIOIDCClientStatusController controller ... 2025-08-13T19:59:49.510969275+00:00 stderr F I0813 19:59:49.509247 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T19:59:49.510969275+00:00 stderr F I0813 19:59:49.509255 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T19:59:49.515921776+00:00 stderr F I0813 19:59:49.515608 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-08-13T19:59:49.515921776+00:00 stderr F I0813 19:59:49.515678 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 2025-08-13T19:59:49.520307491+00:00 stderr F I0813 19:59:49.520186 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:49.533243100+00:00 stderr F I0813 19:59:49.531943 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T19:59:49.533243100+00:00 stderr F I0813 19:59:49.532167 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2025-08-13T19:59:49.533243100+00:00 stderr F I0813 19:59:49.532184 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 2025-08-13T19:59:49.535538356+00:00 stderr F I0813 19:59:49.533502 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2025-08-13T19:59:49.535538356+00:00 stderr F I0813 19:59:49.533548 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 2025-08-13T19:59:49.541890497+00:00 stderr F I0813 19:59:49.536144 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:49.541890497+00:00 stderr F I0813 19:59:49.536295 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:49.618398968+00:00 stderr F I0813 19:59:49.618299 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2025-08-13T19:59:49.618398968+00:00 stderr F I0813 19:59:49.618337 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 2025-08-13T19:59:49.646890379+00:00 stderr F I0813 19:59:49.644990 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2025-08-13T19:59:49.646890379+00:00 stderr F I0813 19:59:49.645071 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 
2025-08-13T19:59:49.721671560+00:00 stderr F I0813 19:59:49.718432 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:49.721671560+00:00 stderr F I0813 19:59:49.718955 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:49.721671560+00:00 stderr F E0813 19:59:49.719549 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.721671560+00:00 stderr F E0813 19:59:49.719590 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.898184732+00:00 stderr F I0813 19:59:49.898121 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:49.940917540+00:00 stderr F I0813 19:59:49.928583 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:49.940917540+00:00 stderr F I0813 19:59:49.936265 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:49.942210487+00:00 stderr F I0813 19:59:49.942177 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:49.948889987+00:00 stderr F I0813 19:59:49.945321 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:50.002587738+00:00 stderr F I0813 19:59:50.001186 1 base_controller.go:73] Caches are synced for StatusSyncer_console 2025-08-13T19:59:50.002587738+00:00 stderr F I0813 19:59:50.001291 1 
base_controller.go:110] Starting #1 worker of StatusSyncer_console controller ... 2025-08-13T19:59:50.398113313+00:00 stderr F I0813 19:59:50.330334 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:50.402557329+00:00 stderr F I0813 19:59:50.402519 1 base_controller.go:73] Caches are synced for OIDCSetupController 2025-08-13T19:59:50.402662812+00:00 stderr F I0813 19:59:50.402640 1 base_controller.go:110] Starting #1 worker of OIDCSetupController controller ... 2025-08-13T19:59:50.402731154+00:00 stderr F I0813 19:59:50.402517 1 base_controller.go:73] Caches are synced for ConsoleDownloadsDeploymentSyncController 2025-08-13T19:59:50.402823577+00:00 stderr F I0813 19:59:50.402760 1 base_controller.go:110] Starting #1 worker of ConsoleDownloadsDeploymentSyncController controller ... 2025-08-13T19:59:50.678549937+00:00 stderr F W0813 19:59:50.636559 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:50.678549937+00:00 stderr F E0813 19:59:50.636746 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:50.678549937+00:00 stderr F W0813 19:59:50.637078 1 reflector.go:539] github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:50.678549937+00:00 stderr F E0813 19:59:50.637099 1 reflector.go:147] github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106: Failed to watch *v1.OAuthClient: failed to list *v1.OAuthClient: the server is 
currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:51.714595091+00:00 stderr F I0813 19:59:51.713173 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::OAuthClientSync_FailedRegister::OAuthClientsController_SyncError::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection 
refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:52.756014067+00:00 stderr F I0813 19:59:52.744048 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.766049034+00:00 stderr F I0813 19:59:52.744062 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.986403855+00:00 stderr F I0813 19:59:52.978266 1 base_controller.go:73] Caches are synced for OAuthClientSecretController 2025-08-13T19:59:52.986403855+00:00 stderr F I0813 19:59:52.978319 1 base_controller.go:110] Starting #1 worker of OAuthClientSecretController controller ... 
2025-08-13T19:59:53.101033032+00:00 stderr F I0813 19:59:53.096560 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.118272194+00:00 stderr F I0813 19:59:53.106464 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded changed from False to True ("ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:47256->10.217.4.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): 
Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:53.128385752+00:00 stderr F I0813 19:59:53.128276 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::OAuthClientSync_FailedRegister::OAuthClientsController_SyncError::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup 
console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.174648771+00:00 stderr F E0813 19:59:53.145522 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:53.174648771+00:00 stderr F I0813 19:59:53.172768 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get 
\"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::OAuthClientSync_FailedRegister::OAuthClientsController_SyncError::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.216904325+00:00 stderr F E0813 19:59:53.208544 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on 
clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:53.220559200+00:00 stderr F I0813 19:59:53.217599 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:53.221030863+00:00 stderr F I0813 19:59:53.220678 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:53.221165027+00:00 stderr F I0813 19:59:53.221102 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:53.229032401+00:00 stderr F I0813 19:59:53.222660 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:53.222200806 +0000 UTC))" 2025-08-13T19:59:53.240469716+00:00 stderr F I0813 19:59:53.240433 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:53.240379524 +0000 UTC))" 2025-08-13T19:59:53.240560859+00:00 stderr F I0813 19:59:53.240540 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 
13:05:20 +0000 UTC (now=2025-08-13 19:59:53.240514928 +0000 UTC))" 2025-08-13T19:59:53.241113115+00:00 stderr F I0813 19:59:53.241084 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:53.240698013 +0000 UTC))" 2025-08-13T19:59:53.241191047+00:00 stderr F I0813 19:59:53.241170 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.241152076 +0000 UTC))" 2025-08-13T19:59:53.241241448+00:00 stderr F I0813 19:59:53.241228 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.241212937 +0000 UTC))" 2025-08-13T19:59:53.241293460+00:00 stderr F I0813 19:59:53.241280 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 
+0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.241260189 +0000 UTC))" 2025-08-13T19:59:53.241341011+00:00 stderr F I0813 19:59:53.241325 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.24131169 +0000 UTC))" 2025-08-13T19:59:53.241386082+00:00 stderr F I0813 19:59:53.241373 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.241359553 +0000 UTC))" 2025-08-13T19:59:53.243396370+00:00 stderr F I0813 19:59:53.243370 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:53:40 +0000 UTC to 2026-06-26 12:53:41 +0000 UTC (now=2025-08-13 19:59:53.243349938 +0000 UTC))" 2025-08-13T19:59:53.244109660+00:00 stderr F I0813 19:59:53.244085 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.303105192+00:00 stderr F W0813 19:59:53.303040 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list 
*v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:53.303400070+00:00 stderr F E0813 19:59:53.303374 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:53.304328677+00:00 stderr F I0813 19:59:53.244764 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115181\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115179\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 19:59:53.24376803 +0000 UTC))" 2025-08-13T19:59:54.584047775+00:00 stderr F I0813 19:59:54.583331 1 reflector.go:351] Caches populated for *v1.OAuthClient from github.com/openshift/console-operator/pkg/console/controllers/util/informers.go:106 2025-08-13T19:59:59.114234360+00:00 stderr F W0813 19:59:59.112599 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:59.114711074+00:00 stderr F E0813 19:59:59.114643 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.736477 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 
2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.736397648 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.818989 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.818933081 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819019 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.819004153 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819043 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.819026214 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819062 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 
+0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819050635 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819085 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819068345 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819112 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819093806 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819157 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819119837 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819188 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" 
(2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.819171098 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819220 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.819203849 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819560 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:53:40 +0000 UTC to 2026-06-26 12:53:41 +0000 UTC (now=2025-08-13 20:00:05.819537779 +0000 UTC))" 2025-08-13T20:00:05.821703540+00:00 stderr F I0813 20:00:05.819995 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115181\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115179\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:00:05.81991988 +0000 UTC))" 2025-08-13T20:00:09.419606640+00:00 stderr F I0813 20:00:09.418744 1 reflector.go:351] Caches populated for *v1.ConsolePlugin from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-08-13T20:00:10.061828092+00:00 stderr F I0813 20:00:10.056742 1 reflector.go:351] Caches populated for *v1.Route from 
github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092230 1 base_controller.go:73] Caches are synced for ConsoleCLIDownloadsController 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092282 1 base_controller.go:110] Starting #1 worker of ConsoleCLIDownloadsController controller ... 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092324 1 base_controller.go:73] Caches are synced for ConsoleRouteController 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092329 1 base_controller.go:110] Starting #1 worker of ConsoleRouteController controller ... 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092343 1 base_controller.go:73] Caches are synced for DownloadsRouteController 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092348 1 base_controller.go:110] Starting #1 worker of DownloadsRouteController controller ... 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092366 1 base_controller.go:73] Caches are synced for ConsoleOperator 2025-08-13T20:00:10.093178246+00:00 stderr F I0813 20:00:10.092733 1 base_controller.go:110] Starting #1 worker of ConsoleOperator controller ... 2025-08-13T20:00:10.121956577+00:00 stderr F I0813 20:00:10.121143 1 base_controller.go:73] Caches are synced for OAuthClientsController 2025-08-13T20:00:10.121956577+00:00 stderr F I0813 20:00:10.121235 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 2025-08-13T20:00:10.151497129+00:00 stderr F I0813 20:00:10.151095 1 base_controller.go:73] Caches are synced for HealthCheckController 2025-08-13T20:00:10.151558671+00:00 stderr F I0813 20:00:10.151535 1 base_controller.go:110] Starting #1 worker of HealthCheckController controller ... 
2025-08-13T20:00:10.205152159+00:00 stderr F I0813 20:00:10.169180 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling 2025-08-13T20:00:11.778877921+00:00 stderr F I0813 20:00:11.778245 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:11.778877921+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:11.778877921+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:11.778877921+00:00 stderr F    ... // 38 identical elements 2025-08-13T20:00:11.778877921+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:00:11.778877921+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:00:11.778877921+00:00 stderr F -  { 2025-08-13T20:00:11.778877921+00:00 stderr F -  Type: "OAuthClientSyncDegraded", 2025-08-13T20:00:11.778877921+00:00 stderr F -  Status: "True", 2025-08-13T20:00:11.778877921+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:11.778877921+00:00 stderr F -  Reason: "FailedRegister", 2025-08-13T20:00:11.778877921+00:00 stderr F -  Message: "the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)", 2025-08-13T20:00:11.778877921+00:00 stderr F -  }, 2025-08-13T20:00:11.778877921+00:00 stderr F +  { 2025-08-13T20:00:11.778877921+00:00 stderr F +  Type: "OAuthClientSyncDegraded", 2025-08-13T20:00:11.778877921+00:00 stderr F +  Status: "False", 2025-08-13T20:00:11.778877921+00:00 stderr F +  LastTransitionTime: s"2025-08-13 
20:00:10.35249456 +0000 UTC m=+46.642382196", 2025-08-13T20:00:11.778877921+00:00 stderr F +  }, 2025-08-13T20:00:11.778877921+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}}, 2025-08-13T20:00:11.778877921+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-08-13T20:00:11.778877921+00:00 stderr F    ... // 19 identical elements 2025-08-13T20:00:11.778877921+00:00 stderr F    }, 2025-08-13T20:00:11.778877921+00:00 stderr F    Version: "", 2025-08-13T20:00:11.778877921+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:11.778877921+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:11.778877921+00:00 stderr F   } 2025-08-13T20:00:11.869674490+00:00 stderr F I0813 20:00:11.864551 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:11.869674490+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:11.869674490+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:11.869674490+00:00 stderr F    ... 
// 45 identical elements 2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:00:11.869674490+00:00 stderr F -  { 2025-08-13T20:00:11.869674490+00:00 stderr F -  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:00:11.869674490+00:00 stderr F -  Status: "True", 2025-08-13T20:00:11.869674490+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:11.869674490+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:11.869674490+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:00:11.869674490+00:00 stderr F -  }, 2025-08-13T20:00:11.869674490+00:00 stderr F +  { 2025-08-13T20:00:11.869674490+00:00 stderr F +  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:00:11.869674490+00:00 stderr F +  Status: "False", 2025-08-13T20:00:11.869674490+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.776893212 +0000 UTC m=+47.066780948", 2025-08-13T20:00:11.869674490+00:00 stderr F +  }, 2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:11.869674490+00:00 stderr F -  { 2025-08-13T20:00:11.869674490+00:00 stderr F -  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:00:11.869674490+00:00 stderr F -  Status: "False", 2025-08-13T20:00:11.869674490+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:11.869674490+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:11.869674490+00:00 stderr F -  Message: "the server is currently 
unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:00:11.869674490+00:00 stderr F -  }, 2025-08-13T20:00:11.869674490+00:00 stderr F +  { 2025-08-13T20:00:11.869674490+00:00 stderr F +  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:00:11.869674490+00:00 stderr F +  Status: "True", 2025-08-13T20:00:11.869674490+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.776895672 +0000 UTC m=+47.066782988", 2025-08-13T20:00:11.869674490+00:00 stderr F +  }, 2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:11.869674490+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:11.869674490+00:00 stderr F    ... // 10 identical elements 2025-08-13T20:00:11.869674490+00:00 stderr F    }, 2025-08-13T20:00:11.869674490+00:00 stderr F    Version: "", 2025-08-13T20:00:11.869674490+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:11.869674490+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:11.869674490+00:00 stderr F   } 2025-08-13T20:00:11.906421638+00:00 stderr F I0813 20:00:11.906049 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:11.906421638+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:11.906421638+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: 
"RouteHealthDegraded", Status: "True", LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, Reason: "FailedGet", ...}, 2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:11.906421638+00:00 stderr F -  { 2025-08-13T20:00:11.906421638+00:00 stderr F -  Type: "OAuthClientsControllerDegraded", 2025-08-13T20:00:11.906421638+00:00 stderr F -  Status: "True", 2025-08-13T20:00:11.906421638+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:11.906421638+00:00 stderr F -  Reason: "SyncError", 2025-08-13T20:00:11.906421638+00:00 stderr F -  Message: "the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)", 2025-08-13T20:00:11.906421638+00:00 stderr F -  }, 2025-08-13T20:00:11.906421638+00:00 stderr F +  { 2025-08-13T20:00:11.906421638+00:00 stderr F +  Type: "OAuthClientsControllerDegraded", 2025-08-13T20:00:11.906421638+00:00 stderr F +  Status: "False", 2025-08-13T20:00:11.906421638+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:11.893081228 +0000 UTC m=+48.182968804", 2025-08-13T20:00:11.906421638+00:00 stderr F +  Reason: "AsExpected", 2025-08-13T20:00:11.906421638+00:00 stderr F +  }, 2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}}, 2025-08-13T20:00:11.906421638+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:11.906421638+00:00 stderr F    ... 
// 56 identical elements 2025-08-13T20:00:11.906421638+00:00 stderr F    }, 2025-08-13T20:00:11.906421638+00:00 stderr F    Version: "", 2025-08-13T20:00:11.906421638+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:11.906421638+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:11.906421638+00:00 stderr F   } 2025-08-13T20:00:12.014816399+00:00 stderr F I0813 20:00:12.014451 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.014816399+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.014816399+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.014816399+00:00 stderr F    ... // 7 identical elements 2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:00:12.014816399+00:00 stderr F -  { 2025-08-13T20:00:12.014816399+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.014816399+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.014816399+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.014816399+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.014816399+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.014816399+00:00 stderr F -  }, 2025-08-13T20:00:12.014816399+00:00 stderr F +  { 2025-08-13T20:00:12.014816399+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 
2025-08-13T20:00:12.014816399+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.014816399+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.999440878 +0000 UTC m=+47.289328174", 2025-08-13T20:00:12.014816399+00:00 stderr F +  }, 2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.014816399+00:00 stderr F -  { 2025-08-13T20:00:12.014816399+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.014816399+00:00 stderr F -  Status: "False", 2025-08-13T20:00:12.014816399+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.014816399+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.014816399+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.014816399+00:00 stderr F -  }, 2025-08-13T20:00:12.014816399+00:00 stderr F +  { 2025-08-13T20:00:12.014816399+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.014816399+00:00 stderr F +  Status: "True", 2025-08-13T20:00:12.014816399+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:10.999434448 +0000 UTC m=+47.289325784", 2025-08-13T20:00:12.014816399+00:00 stderr F +  }, 2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:00:12.014816399+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:12.014816399+00:00 stderr F    ... 
// 48 identical elements 2025-08-13T20:00:12.014816399+00:00 stderr F    }, 2025-08-13T20:00:12.014816399+00:00 stderr F    Version: "", 2025-08-13T20:00:12.014816399+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.014816399+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.014816399+00:00 stderr F   } 2025-08-13T20:00:12.096945711+00:00 stderr F I0813 20:00:12.094858 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.096945711+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.096945711+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "True", LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, Reason: "FailedGet", ...}, 2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:12.096945711+00:00 stderr F -  { 2025-08-13T20:00:12.096945711+00:00 stderr F -  Type: "OAuthClientsControllerDegraded", 2025-08-13T20:00:12.096945711+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.096945711+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.096945711+00:00 stderr F -  Reason: "SyncError", 2025-08-13T20:00:12.096945711+00:00 stderr F -  Message: "the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)", 2025-08-13T20:00:12.096945711+00:00 stderr F -  }, 2025-08-13T20:00:12.096945711+00:00 stderr F +  { 
2025-08-13T20:00:12.096945711+00:00 stderr F +  Type: "OAuthClientsControllerDegraded", 2025-08-13T20:00:12.096945711+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.096945711+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.082043956 +0000 UTC m=+48.371931642", 2025-08-13T20:00:12.096945711+00:00 stderr F +  Reason: "AsExpected", 2025-08-13T20:00:12.096945711+00:00 stderr F +  }, 2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}}, 2025-08-13T20:00:12.096945711+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:12.096945711+00:00 stderr F    ... // 56 identical elements 2025-08-13T20:00:12.096945711+00:00 stderr F    }, 2025-08-13T20:00:12.096945711+00:00 stderr F    Version: "", 2025-08-13T20:00:12.096945711+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.096945711+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.096945711+00:00 stderr F   } 2025-08-13T20:00:12.279647800+00:00 stderr F I0813 20:00:12.275503 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route 
(https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::OAuthClientsController_SyncError::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:12.279647800+00:00 stderr F I0813 20:00:12.278446 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.279647800+00:00 
stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.279647800+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.279647800+00:00 stderr F    ... // 7 identical elements 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F -  { 2025-08-13T20:00:12.279647800+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.279647800+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.279647800+00:00 stderr F -  }, 2025-08-13T20:00:12.279647800+00:00 stderr F +  { 2025-08-13T20:00:12.279647800+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.279647800+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.279647800+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.16985366 +0000 UTC m=+48.459741076", 2025-08-13T20:00:12.279647800+00:00 stderr F +  }, 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F -  { 2025-08-13T20:00:12.279647800+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Status: "False", 2025-08-13T20:00:12.279647800+00:00 stderr F -  LastTransitionTime: s"2024-06-27 
13:34:18 +0000 UTC", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.279647800+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.279647800+00:00 stderr F -  }, 2025-08-13T20:00:12.279647800+00:00 stderr F +  { 2025-08-13T20:00:12.279647800+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.279647800+00:00 stderr F +  Status: "True", 2025-08-13T20:00:12.279647800+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.16985055 +0000 UTC m=+48.459738236", 2025-08-13T20:00:12.279647800+00:00 stderr F +  }, 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:12.279647800+00:00 stderr F    ... // 48 identical elements 2025-08-13T20:00:12.279647800+00:00 stderr F    }, 2025-08-13T20:00:12.279647800+00:00 stderr F    Version: "", 2025-08-13T20:00:12.279647800+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.279647800+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.279647800+00:00 stderr F   } 2025-08-13T20:00:12.297038556+00:00 stderr F I0813 20:00:12.295727 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.297038556+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.297038556+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.297038556+00:00 stderr F    ... 
// 45 identical elements 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F -  { 2025-08-13T20:00:12.297038556+00:00 stderr F -  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.297038556+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:00:12.297038556+00:00 stderr F -  }, 2025-08-13T20:00:12.297038556+00:00 stderr F +  { 2025-08-13T20:00:12.297038556+00:00 stderr F +  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:00:12.297038556+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.297038556+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.286220938 +0000 UTC m=+48.576108394", 2025-08-13T20:00:12.297038556+00:00 stderr F +  }, 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F -  { 2025-08-13T20:00:12.297038556+00:00 stderr F -  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Status: "False", 2025-08-13T20:00:12.297038556+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.297038556+00:00 stderr F -  Message: "the server is currently 
unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:00:12.297038556+00:00 stderr F -  }, 2025-08-13T20:00:12.297038556+00:00 stderr F +  { 2025-08-13T20:00:12.297038556+00:00 stderr F +  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.297038556+00:00 stderr F +  Status: "True", 2025-08-13T20:00:12.297038556+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.286222948 +0000 UTC m=+48.576110244", 2025-08-13T20:00:12.297038556+00:00 stderr F +  }, 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:00:12.297038556+00:00 stderr F    ... // 10 identical elements 2025-08-13T20:00:12.297038556+00:00 stderr F    }, 2025-08-13T20:00:12.297038556+00:00 stderr F    Version: "", 2025-08-13T20:00:12.297038556+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.297038556+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.297038556+00:00 stderr F   } 2025-08-13T20:00:12.378818518+00:00 stderr F I0813 20:00:12.378663 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io 
console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" 2025-08-13T20:00:12.536265268+00:00 stderr F E0813 20:00:12.534526 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:12.536265268+00:00 stderr F E0813 20:00:12.534603 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 
192.168.130.11:443: connect: connection refused 2025-08-13T20:00:12.545689446+00:00 stderr F I0813 20:00:12.539000 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.545689446+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.545689446+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:12.545689446+00:00 stderr F    { 2025-08-13T20:00:12.545689446+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:00:12.545689446+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:12.545689446+00:00 stderr F    Reason: "FailedGet", 2025-08-13T20:00:12.545689446+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:12.545689446+00:00 stderr F    "failed to GET route (https://console-openshift-console.apps-crc.", 2025-08-13T20:00:12.545689446+00:00 stderr F    `testing): Get "https://console-openshift-console.apps-crc.testin`, 2025-08-13T20:00:12.545689446+00:00 stderr F    `g": dial tcp`, 2025-08-13T20:00:12.545689446+00:00 stderr F -  ":", 2025-08-13T20:00:12.545689446+00:00 stderr F    " ", 2025-08-13T20:00:12.545689446+00:00 stderr F -  "lookup console-openshift-console.apps-crc.testing on 10.217.4.10", 2025-08-13T20:00:12.545689446+00:00 stderr F -  ":53: read udp 10.217.0.62:46199->10.217.4.10:53: read", 2025-08-13T20:00:12.545689446+00:00 stderr F +  "192.168.130.11:443: connect", 2025-08-13T20:00:12.545689446+00:00 stderr F    ": connection refused", 2025-08-13T20:00:12.545689446+00:00 stderr F    }, ""), 2025-08-13T20:00:12.545689446+00:00 stderr F    }, 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 
2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:00:12.545689446+00:00 stderr F    ... // 55 identical elements 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "AuthStatusHandlerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:12.545689446+00:00 stderr F    {Type: "AuthStatusHandlerProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:12.545689446+00:00 stderr F    { 2025-08-13T20:00:12.545689446+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:00:12.545689446+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:12.545689446+00:00 stderr F    Reason: "FailedGet", 2025-08-13T20:00:12.545689446+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:12.545689446+00:00 stderr F    "failed to GET route (https://console-openshift-console.apps-crc.", 2025-08-13T20:00:12.545689446+00:00 stderr F    `testing): Get "https://console-openshift-console.apps-crc.testin`, 2025-08-13T20:00:12.545689446+00:00 stderr F    `g": dial tcp`, 2025-08-13T20:00:12.545689446+00:00 stderr F -  ":", 2025-08-13T20:00:12.545689446+00:00 stderr F    " ", 2025-08-13T20:00:12.545689446+00:00 stderr F -  "lookup console-openshift-console.apps-crc.testing on 10.217.4.10", 2025-08-13T20:00:12.545689446+00:00 stderr F -  ":53: read udp 10.217.0.62:46199->10.217.4.10:53: read", 2025-08-13T20:00:12.545689446+00:00 stderr F +  "192.168.130.11:443: connect", 2025-08-13T20:00:12.545689446+00:00 stderr F    ": connection refused", 2025-08-13T20:00:12.545689446+00:00 stderr F    }, ""), 2025-08-13T20:00:12.545689446+00:00 stderr F    }, 2025-08-13T20:00:12.545689446+00:00 stderr F    }, 2025-08-13T20:00:12.545689446+00:00 stderr F    Version: "", 
2025-08-13T20:00:12.545689446+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.545689446+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.545689446+00:00 stderr F   } 2025-08-13T20:00:12.888645326+00:00 stderr F I0813 20:00:12.851613 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.888645326+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.888645326+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:12.888645326+00:00 stderr F    { 2025-08-13T20:00:12.888645326+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:00:12.888645326+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:12.888645326+00:00 stderr F    Reason: "FailedGet", 2025-08-13T20:00:12.888645326+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:12.888645326+00:00 stderr F    "failed to GET route (https://console-openshift-console.apps-crc.", 2025-08-13T20:00:12.888645326+00:00 stderr F    `testing): Get "https://console-openshift-console.apps-crc.testin`, 2025-08-13T20:00:12.888645326+00:00 stderr F    `g": dial tcp`, 2025-08-13T20:00:12.888645326+00:00 stderr F -  ":", 2025-08-13T20:00:12.888645326+00:00 stderr F    " ", 2025-08-13T20:00:12.888645326+00:00 stderr F -  "lookup console-openshift-console.apps-crc.testing on 10.217.4.10", 2025-08-13T20:00:12.888645326+00:00 stderr F -  ":53: read udp 10.217.0.62:46199->10.217.4.10:53: read", 2025-08-13T20:00:12.888645326+00:00 stderr F +  "192.168.130.11:443: connect", 2025-08-13T20:00:12.888645326+00:00 stderr 
F    ": connection refused", 2025-08-13T20:00:12.888645326+00:00 stderr F    }, ""), 2025-08-13T20:00:12.888645326+00:00 stderr F    }, 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:00:12.888645326+00:00 stderr F    ... // 55 identical elements 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "AuthStatusHandlerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:12.888645326+00:00 stderr F    {Type: "AuthStatusHandlerProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:12.888645326+00:00 stderr F    { 2025-08-13T20:00:12.888645326+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:00:12.888645326+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:12.888645326+00:00 stderr F    Reason: "FailedGet", 2025-08-13T20:00:12.888645326+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:12.888645326+00:00 stderr F    "failed to GET route (https://console-openshift-console.apps-crc.", 2025-08-13T20:00:12.888645326+00:00 stderr F    `testing): Get "https://console-openshift-console.apps-crc.testin`, 2025-08-13T20:00:12.888645326+00:00 stderr F    `g": dial tcp`, 2025-08-13T20:00:12.888645326+00:00 stderr F -  ":", 2025-08-13T20:00:12.888645326+00:00 stderr F    " ", 2025-08-13T20:00:12.888645326+00:00 stderr F -  "lookup console-openshift-console.apps-crc.testing on 10.217.4.10", 2025-08-13T20:00:12.888645326+00:00 stderr F -  ":53: read udp 10.217.0.62:46199->10.217.4.10:53: read", 2025-08-13T20:00:12.888645326+00:00 stderr F +  "192.168.130.11:443: connect", 2025-08-13T20:00:12.888645326+00:00 stderr F    ": connection refused", 2025-08-13T20:00:12.888645326+00:00 stderr F    }, ""), 2025-08-13T20:00:12.888645326+00:00 stderr F    }, 2025-08-13T20:00:12.888645326+00:00 stderr F    }, 2025-08-13T20:00:12.888645326+00:00 stderr F    Version: "", 2025-08-13T20:00:12.888645326+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.888645326+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.888645326+00:00 stderr F   } 2025-08-13T20:00:12.890976112+00:00 stderr F I0813 20:00:12.865542 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io 
console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199-\u003e10.217.4.10:53: read: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 
2025-08-13T20:00:12.969377458+00:00 stderr F I0813 20:00:12.966556 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:12.969377458+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:12.969377458+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:12.969377458+00:00 stderr F    ... // 7 identical elements 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F -  { 2025-08-13T20:00:12.969377458+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Status: "True", 2025-08-13T20:00:12.969377458+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.969377458+00:00 stderr F -  }, 2025-08-13T20:00:12.969377458+00:00 stderr F +  { 2025-08-13T20:00:12.969377458+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:12.969377458+00:00 stderr F +  Status: "False", 2025-08-13T20:00:12.969377458+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.962050399 +0000 UTC m=+49.251937695", 2025-08-13T20:00:12.969377458+00:00 stderr F +  }, 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F -  { 2025-08-13T20:00:12.969377458+00:00 stderr F -  Type: 
"DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Status: "False", 2025-08-13T20:00:12.969377458+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:12.969377458+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:12.969377458+00:00 stderr F -  }, 2025-08-13T20:00:12.969377458+00:00 stderr F +  { 2025-08-13T20:00:12.969377458+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:12.969377458+00:00 stderr F +  Status: "True", 2025-08-13T20:00:12.969377458+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:12.962047889 +0000 UTC m=+49.251935475", 2025-08-13T20:00:12.969377458+00:00 stderr F +  }, 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:12.969377458+00:00 stderr F    ... 
// 48 identical elements 2025-08-13T20:00:12.969377458+00:00 stderr F    }, 2025-08-13T20:00:12.969377458+00:00 stderr F    Version: "", 2025-08-13T20:00:12.969377458+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:12.969377458+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:12.969377458+00:00 stderr F   } 2025-08-13T20:00:13.033632120+00:00 stderr F E0813 20:00:13.029315 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:13.033632120+00:00 stderr F E0813 20:00:13.029355 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:13.073359003+00:00 stderr F I0813 20:00:13.072674 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nOAuthClientsControllerDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" to 
"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" 2025-08-13T20:00:13.091020806+00:00 stderr F E0813 20:00:13.090678 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.260640553+00:00 stderr F I0813 20:00:13.258400 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"DownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route 
(https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.344880425+00:00 stderr F I0813 20:00:13.344591 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" to "DownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial 
tcp 192.168.130.11:443: connect: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp: lookup console-openshift-console.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.62:46199->10.217.4.10:53: read: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused",Upgradeable message changed from "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)" to "DownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)" 2025-08-13T20:00:13.484965749+00:00 stderr F I0813 20:00:13.483043 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:13.484965749+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:13.484965749+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:13.484965749+00:00 stderr F    ... 
// 7 identical elements 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F -  { 2025-08-13T20:00:13.484965749+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Status: "True", 2025-08-13T20:00:13.484965749+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:13.484965749+00:00 stderr F -  }, 2025-08-13T20:00:13.484965749+00:00 stderr F +  { 2025-08-13T20:00:13.484965749+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:00:13.484965749+00:00 stderr F +  Status: "False", 2025-08-13T20:00:13.484965749+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:13.440922653 +0000 UTC m=+49.730810019", 2025-08-13T20:00:13.484965749+00:00 stderr F +  }, 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F -  { 2025-08-13T20:00:13.484965749+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Status: "False", 2025-08-13T20:00:13.484965749+00:00 stderr F -  LastTransitionTime: s"2024-06-27 13:34:18 +0000 UTC", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:00:13.484965749+00:00 stderr F -  Message: "the server 
is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:00:13.484965749+00:00 stderr F -  }, 2025-08-13T20:00:13.484965749+00:00 stderr F +  { 2025-08-13T20:00:13.484965749+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:00:13.484965749+00:00 stderr F +  Status: "True", 2025-08-13T20:00:13.484965749+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:00:13.440919713 +0000 UTC m=+49.730807239", 2025-08-13T20:00:13.484965749+00:00 stderr F +  }, 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:13.484965749+00:00 stderr F    ... // 48 identical elements 2025-08-13T20:00:13.484965749+00:00 stderr F    }, 2025-08-13T20:00:13.484965749+00:00 stderr F    Version: "", 2025-08-13T20:00:13.484965749+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:13.484965749+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:13.484965749+00:00 stderr F   } 2025-08-13T20:00:13.587009919+00:00 stderr F E0813 20:00:13.586324 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.587009919+00:00 stderr F E0813 20:00:13.586656 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: 
connect: connection refused 2025-08-13T20:00:13.587743220+00:00 stderr F E0813 20:00:13.587122 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.752083226+00:00 stderr F I0813 20:00:13.751974 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:14.129461516+00:00 stderr F I0813 20:00:14.122744 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "DownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused",Upgradeable changed from False to True ("All is well") 2025-08-13T20:00:14.175457698+00:00 stderr F E0813 20:00:14.175390 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.175553251+00:00 stderr F E0813 20:00:14.175539 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.175988813+00:00 stderr F E0813 20:00:14.175880 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.526271251+00:00 stderr F E0813 20:00:14.494881 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:14.526425155+00:00 stderr F E0813 20:00:14.526399 1 status.go:130] 
DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:14.584962424+00:00 stderr F I0813 20:00:14.581817 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:14Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:14.743107083+00:00 stderr F E0813 20:00:14.742362 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.743107083+00:00 stderr F E0813 20:00:14.742404 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get 
"https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.743107083+00:00 stderr F E0813 20:00:14.742578 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.394056514+00:00 stderr F E0813 20:00:15.393954 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.394115755+00:00 stderr F E0813 20:00:15.394101 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.399979303+00:00 stderr F E0813 20:00:15.399954 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.741117730+00:00 stderr F E0813 20:00:15.740427 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.741117730+00:00 stderr F E0813 20:00:15.741014 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 
2025-08-13T20:00:15.750376894+00:00 stderr F E0813 20:00:15.750217 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.005937411+00:00 stderr F E0813 20:00:16.005193 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.005937411+00:00 stderr F E0813 20:00:16.005266 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.005937411+00:00 stderr F E0813 20:00:16.005459 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.200377335+00:00 stderr F E0813 20:00:16.200319 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.200448337+00:00 stderr F E0813 20:00:16.200431 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.200679204+00:00 stderr F E0813 20:00:16.200650 1 base_controller.go:268] HealthCheckController reconciliation 
failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.452760992+00:00 stderr F E0813 20:00:16.452659 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.452908646+00:00 stderr F E0813 20:00:16.452858 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.453274616+00:00 stderr F E0813 20:00:16.453131 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.705107057+00:00 stderr F E0813 20:00:16.703434 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:17.638125962+00:00 stderr F I0813 20:00:17.637402 1 apps.go:154] Deployment "openshift-console/console" changes: 
{"metadata":{"annotations":{"console.openshift.io/service-ca-config-version":"29218","operator.openshift.io/spec-hash":"b2372c4f2f3d3abb58592f8e229a7b3901addc8a288a978cd753c769ea967ca8"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"strategy":{"rollingUpdate":{"maxSurge":null,"maxUnavailable":null}},"template":{"metadata":{"annotations":{"console.openshift.io/service-ca-config-version":"29218"}},"spec":{"containers":[{"command":["/opt/bridge/bin/bridge","--public-dir=/opt/bridge/static","--config=/var/console-config/console-config.yaml","--service-ca-file=/var/service-ca/service-ca.crt","--v=2"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae","imagePullPolicy":"IfNotPresent","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":1,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"console","ports":[{"containerPort":8443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":false},"startupProbe":{"failureThreshold":30,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/serving-cert","name":"console-serving-cert","readOnly":true},{"mountPath":"/var/oauth-config","name":"console-oauth-config","readOnly":true},{"mountPath":"/var/console-config","name":"console-config","readOnly":true},{"mountPath":"/var/serv
ice-ca","name":"service-ca","readOnly":true},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"trusted-ca-bundle","readOnly":true},{"mountPath":"/var/oauth-serving-cert","name":"oauth-serving-cert","readOnly":true}]}],"dnsPolicy":null,"serviceAccount":null,"volumes":[{"name":"console-serving-cert","secret":{"secretName":"console-serving-cert"}},{"name":"console-oauth-config","secret":{"secretName":"console-oauth-config"}},{"configMap":{"name":"console-config"},"name":"console-config"},{"configMap":{"name":"service-ca"},"name":"service-ca"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"trusted-ca-bundle"},"name":"trusted-ca-bundle"},{"configMap":{"name":"oauth-serving-cert"},"name":"oauth-serving-cert"}]}}}} 2025-08-13T20:00:17.981299097+00:00 stderr F E0813 20:00:17.980371 1 status.go:130] SyncLoopRefreshProgressing InProgress changes made during sync updates, additional sync expected 2025-08-13T20:00:17.981299097+00:00 stderr F E0813 20:00:17.980549 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:17.984023174+00:00 stderr F I0813 20:00:17.983958 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/console -n openshift-console because it changed 2025-08-13T20:00:18.129528263+00:00 stderr F I0813 20:00:18.129456 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:18.129528263+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:18.129528263+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:18.129528263+00:00 stderr F    ... 
// 13 identical elements 2025-08-13T20:00:18.129528263+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:18.129528263+00:00 stderr F    {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:00:18.129528263+00:00 stderr F    { 2025-08-13T20:00:18.129528263+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:00:18.129528263+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:18.129528263+00:00 stderr F    Reason: "InProgress", 2025-08-13T20:00:18.129528263+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:18.129528263+00:00 stderr F -  "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:00:18.129528263+00:00 stderr F +  "changes made during sync updates, additional sync expected", 2025-08-13T20:00:18.129528263+00:00 stderr F    }, ""), 2025-08-13T20:00:18.129528263+00:00 stderr F    }, 2025-08-13T20:00:18.129528263+00:00 stderr F    {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:00:18.129528263+00:00 stderr F    {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:00:18.129528263+00:00 stderr F    ... // 44 identical elements 2025-08-13T20:00:18.129528263+00:00 stderr F    }, 2025-08-13T20:00:18.129528263+00:00 stderr F    Version: "", 2025-08-13T20:00:18.129528263+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:18.129528263+00:00 stderr F    Generations: []v1.GenerationStatus{ 2025-08-13T20:00:18.129528263+00:00 stderr F    {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, 2025-08-13T20:00:18.129528263+00:00 stderr F    { 2025-08-13T20:00:18.129528263+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:00:18.129528263+00:00 stderr F    Namespace: "openshift-console", 2025-08-13T20:00:18.129528263+00:00 stderr F    Name: "console", 2025-08-13T20:00:18.129528263+00:00 stderr F -  LastGeneration: 3, 2025-08-13T20:00:18.129528263+00:00 stderr F +  LastGeneration: 4, 2025-08-13T20:00:18.129528263+00:00 stderr F    Hash: "", 2025-08-13T20:00:18.129528263+00:00 stderr F    }, 2025-08-13T20:00:18.129528263+00:00 stderr F    }, 2025-08-13T20:00:18.129528263+00:00 stderr F   } 2025-08-13T20:00:18.203709259+00:00 stderr F E0813 20:00:18.203654 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.203857193+00:00 stderr F E0813 20:00:18.203763 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.204171062+00:00 stderr F E0813 20:00:18.204143 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.594160241+00:00 stderr F I0813 20:00:18.560202 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection 
refused","reason":"RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:18.906190398+00:00 stderr F I0813 20:00:18.905262 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" 2025-08-13T20:00:19.181957811+00:00 stderr F I0813 20:00:19.180766 1 apps.go:154] Deployment "openshift-console/console" changes: 
{"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"strategy":{"rollingUpdate":{"maxSurge":null,"maxUnavailable":null}},"template":{"spec":{"containers":[{"command":["/opt/bridge/bin/bridge","--public-dir=/opt/bridge/static","--config=/var/console-config/console-config.yaml","--service-ca-file=/var/service-ca/service-ca.crt","--v=2"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae","imagePullPolicy":"IfNotPresent","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":1,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"console","ports":[{"containerPort":8443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":false},"startupProbe":{"failureThreshold":30,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/serving-cert","name":"console-serving-cert","readOnly":true},{"mountPath":"/var/oauth-config","name":"console-oauth-config","readOnly":true},{"mountPath":"/var/console-config","name":"console-config","readOnly":true},{"mountPath":"/var/service-ca","name":"service-ca","readOnly":true},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"trusted-ca-bundle","readOnly":true},{"mountPath":"/var/oauth-serving-cert","name":"oauth-serving-cert","readOnly":true}]}],"dnsPolicy":null,"serviceAccount":null,"volumes":[
{"name":"console-serving-cert","secret":{"secretName":"console-serving-cert"}},{"name":"console-oauth-config","secret":{"secretName":"console-oauth-config"}},{"configMap":{"name":"console-config"},"name":"console-config"},{"configMap":{"name":"service-ca"},"name":"service-ca"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"trusted-ca-bundle"},"name":"trusted-ca-bundle"},{"configMap":{"name":"oauth-serving-cert"},"name":"oauth-serving-cert"}]}}}} 2025-08-13T20:00:19.273735058+00:00 stderr F E0813 20:00:19.273406 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.273735058+00:00 stderr F E0813 20:00:19.273483 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.273930014+00:00 stderr F E0813 20:00:19.273739 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.396090437+00:00 stderr F E0813 20:00:19.393079 1 status.go:130] SyncLoopRefreshProgressing InProgress changes made during sync updates, additional sync expected 2025-08-13T20:00:19.396090437+00:00 stderr F E0813 20:00:19.393134 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:19.396090437+00:00 stderr F I0813 20:00:19.394369 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", 
UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/console -n openshift-console because it changed 2025-08-13T20:00:19.941306583+00:00 stderr F E0813 20:00:19.928673 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:19.941306583+00:00 stderr F E0813 20:00:19.938591 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:20.063366034+00:00 stderr F I0813 20:00:20.058701 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:20.063366034+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:20.063366034+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:20.063366034+00:00 stderr F    ... // 13 identical elements 2025-08-13T20:00:20.063366034+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:00:20.063366034+00:00 stderr F    {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:00:20.063366034+00:00 stderr F    { 2025-08-13T20:00:20.063366034+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:00:20.063366034+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:20.063366034+00:00 stderr F    Reason: "InProgress", 2025-08-13T20:00:20.063366034+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:20.063366034+00:00 stderr F -  "changes made during sync updates, additional sync expected", 2025-08-13T20:00:20.063366034+00:00 stderr F +  "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:00:20.063366034+00:00 stderr F    }, ""), 2025-08-13T20:00:20.063366034+00:00 stderr F    }, 2025-08-13T20:00:20.063366034+00:00 stderr F    {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:00:20.063366034+00:00 stderr F    {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:00:20.063366034+00:00 stderr F    ... // 44 identical elements 2025-08-13T20:00:20.063366034+00:00 stderr F    }, 2025-08-13T20:00:20.063366034+00:00 stderr F    Version: "", 2025-08-13T20:00:20.063366034+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:20.063366034+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:00:20.063366034+00:00 stderr F   } 2025-08-13T20:00:20.461970770+00:00 stderr F I0813 20:00:20.459046 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection 
refused","reason":"RouteHealth_FailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"Deployment_InsufficientReplicas::RouteHealth_FailedGet","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:20.641330564+00:00 stderr F I0813 20:00:20.641200 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available" 2025-08-13T20:00:20.795446338+00:00 stderr F E0813 20:00:20.788140 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:20.795446338+00:00 stderr F E0813 20:00:20.788183 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:21.156084512+00:00 stderr F E0813 20:00:21.155039 1 
status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.156084512+00:00 stderr F E0813 20:00:21.155745 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.156084512+00:00 stderr F E0813 20:00:21.156032 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.709299925+00:00 stderr F E0813 20:00:21.705293 1 status.go:130] RouteHealthDegraded FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.709299925+00:00 stderr F E0813 20:00:21.705709 1 status.go:130] RouteHealthAvailable FailedGet failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:21.709299925+00:00 stderr F E0813 20:00:21.705972 1 base_controller.go:268] HealthCheckController reconciliation failed: failed to GET route (https://console-openshift-console.apps-crc.testing): Get "https://console-openshift-console.apps-crc.testing": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:22.124634068+00:00 stderr F E0813 20:00:22.117354 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 
2025-08-13T20:00:22.124634068+00:00 stderr F E0813 20:00:22.117397 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:27.342212836+00:00 stderr F E0813 20:00:27.338400 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:27.342212836+00:00 stderr F E0813 20:00:27.338884 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:29.403930083+00:00 stderr F E0813 20:00:29.403323 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:29.404013726+00:00 stderr F E0813 20:00:29.403997 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:32.183362086+00:00 stderr F E0813 20:00:32.174301 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:32.183362086+00:00 stderr F E0813 20:00:32.174954 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:32.187702610+00:00 stderr F I0813 20:00:32.187561 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:00:32.187702610+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:00:32.187702610+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-08-13T20:00:32.187702610+00:00 stderr F    { 2025-08-13T20:00:32.187702610+00:00 stderr F    Type: "RouteHealthDegraded", 2025-08-13T20:00:32.187702610+00:00 stderr F    Status: 
"True", 2025-08-13T20:00:32.187702610+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:32.187702610+00:00 stderr F -  Reason: "FailedGet", 2025-08-13T20:00:32.187702610+00:00 stderr F +  Reason: "StatusError", 2025-08-13T20:00:32.187702610+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:32.187702610+00:00 stderr F -  "failed to GET route (", 2025-08-13T20:00:32.187702610+00:00 stderr F +  "route not yet available, ", 2025-08-13T20:00:32.187702610+00:00 stderr F    "https://console-openshift-console.apps-crc.testing", 2025-08-13T20:00:32.187702610+00:00 stderr F -  `): Get "https://console-openshift-console.apps-crc.testing": dia`, 2025-08-13T20:00:32.187702610+00:00 stderr F -  "l tcp 192.168.130.11:443: connect: connection refused", 2025-08-13T20:00:32.187702610+00:00 stderr F +  " returns '503 Service Unavailable'", 2025-08-13T20:00:32.187702610+00:00 stderr F    }, ""), 2025-08-13T20:00:32.187702610+00:00 stderr F    }, 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "OAuthClientsControllerDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:00:12 +0000 UTC"}, Reason: "AsExpected", ...}, 2025-08-13T20:00:32.187702610+00:00 stderr F    ... 
// 55 identical elements 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "AuthStatusHandlerDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:32.187702610+00:00 stderr F    {Type: "AuthStatusHandlerProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:20 +0000 UTC"}}, 2025-08-13T20:00:32.187702610+00:00 stderr F    { 2025-08-13T20:00:32.187702610+00:00 stderr F    Type: "RouteHealthAvailable", 2025-08-13T20:00:32.187702610+00:00 stderr F    Status: "False", 2025-08-13T20:00:32.187702610+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:00:32.187702610+00:00 stderr F -  Reason: "FailedGet", 2025-08-13T20:00:32.187702610+00:00 stderr F +  Reason: "StatusError", 2025-08-13T20:00:32.187702610+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:00:32.187702610+00:00 stderr F -  "failed to GET route (", 2025-08-13T20:00:32.187702610+00:00 stderr F +  "route not yet available, ", 2025-08-13T20:00:32.187702610+00:00 stderr F    "https://console-openshift-console.apps-crc.testing", 2025-08-13T20:00:32.187702610+00:00 stderr F -  `): Get "https://console-openshift-console.apps-crc.testing": dia`, 2025-08-13T20:00:32.187702610+00:00 stderr F -  "l tcp 192.168.130.11:443: connect: connection refused", 2025-08-13T20:00:32.187702610+00:00 stderr F +  " returns '503 Service Unavailable'", 2025-08-13T20:00:32.187702610+00:00 stderr F    }, ""), 2025-08-13T20:00:32.187702610+00:00 stderr F    }, 2025-08-13T20:00:32.187702610+00:00 stderr F    }, 2025-08-13T20:00:32.187702610+00:00 stderr F    Version: "", 2025-08-13T20:00:32.187702610+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:00:32.187702610+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 
2025-08-13T20:00:32.187702610+00:00 stderr F   } 2025-08-13T20:00:33.048627008+00:00 stderr F E0813 20:00:33.030179 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:33.426894334+00:00 stderr F I0813 20:00:33.413078 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:33.651450317+00:00 stderr F I0813 20:00:33.646207 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route 
(https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: failed to GET route (https://console-openshift-console.apps-crc.testing): Get \"https://console-openshift-console.apps-crc.testing\": dial tcp 192.168.130.11:443: connect: connection refused" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" 2025-08-13T20:00:33.658043525+00:00 stderr F E0813 20:00:33.640606 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:33.658668352+00:00 stderr F E0813 20:00:33.658630 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:33.870570415+00:00 stderr F I0813 20:00:33.870467 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, 
https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:33.996030952+00:00 stderr F E0813 20:00:33.993652 1 base_controller.go:268] StatusSyncer_console reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "console": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:34.083989390+00:00 stderr F E0813 20:00:34.083684 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:34.083989390+00:00 stderr F E0813 20:00:34.083728 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:34.084042942+00:00 stderr F E0813 20:00:34.084025 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:34.166760700+00:00 stderr F E0813 20:00:34.165395 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:00:34.166760700+00:00 stderr F E0813 20:00:34.165741 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:35.974390653+00:00 stderr F E0813 20:00:35.963976 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 
2025-08-13T20:00:35.974390653+00:00 stderr F E0813 20:00:35.974218 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:00:36.148861788+00:00 stderr F E0813 20:00:36.148466 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:36.148861788+00:00 stderr F E0813 20:00:36.148707 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:36.149084144+00:00 stderr F E0813 20:00:36.148959 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:41.791228523+00:00 stderr F E0813 20:00:41.768136 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:41.791362577+00:00 stderr F E0813 20:00:41.791342 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:41.797892713+00:00 stderr F E0813 20:00:41.791605 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.961555 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 
20:00:59.955325084 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.962859 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.962733346 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.962930 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.962911841 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963081 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.963064525 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963120 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC 
(now=2025-08-13 20:00:59.963105756 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963142 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963126837 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963159 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963147707 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963195 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963164118 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963214 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 
19:59:54 +0000 UTC (now=2025-08-13 20:00:59.963201899 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963235 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.9632251 +0000 UTC))" 2025-08-13T20:00:59.964374812+00:00 stderr F I0813 20:00:59.963332 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.963315972 +0000 UTC))" 2025-08-13T20:00:59.979997458+00:00 stderr F I0813 20:00:59.970434 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:53:40 +0000 UTC to 2026-06-26 12:53:41 +0000 UTC (now=2025-08-13 20:00:59.964032113 +0000 UTC))" 2025-08-13T20:00:59.979997458+00:00 stderr F I0813 20:00:59.978952 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115181\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115179\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:00:59.978871596 +0000 
UTC))" 2025-08-13T20:01:06.989884093+00:00 stderr F E0813 20:01:06.988167 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:06.989884093+00:00 stderr F E0813 20:01:06.988728 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:06.993574719+00:00 stderr F E0813 20:01:06.988515 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:06.993646521+00:00 stderr F E0813 20:01:06.993631 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.015583326+00:00 stderr F I0813 20:01:07.015493 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:07.015583326+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:07.015583326+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:07.015583326+00:00 stderr F    ... 
// 45 identical elements 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F -  { 2025-08-13T20:01:07.015583326+00:00 stderr F -  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:01:07.015583326+00:00 stderr F -  Status: "False", 2025-08-13T20:01:07.015583326+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-08-13T20:01:07.015583326+00:00 stderr F -  }, 2025-08-13T20:01:07.015583326+00:00 stderr F +  { 2025-08-13T20:01:07.015583326+00:00 stderr F +  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Status: "True", 2025-08-13T20:01:07.015583326+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:06.993883917 +0000 UTC m=+103.283771773", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:01:07.015583326+00:00 stderr F +  }, 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F -  { 2025-08-13T20:01:07.015583326+00:00 stderr F -  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.015583326+00:00 stderr F -  Status: "True", 2025-08-13T20:01:07.015583326+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-08-13T20:01:07.015583326+00:00 stderr F -  }, 2025-08-13T20:01:07.015583326+00:00 stderr F +  { 2025-08-13T20:01:07.015583326+00:00 stderr F +  Type: 
"ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Status: "False", 2025-08-13T20:01:07.015583326+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:06.993886447 +0000 UTC m=+103.283773743", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.015583326+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:01:07.015583326+00:00 stderr F +  }, 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.015583326+00:00 stderr F    ... // 10 identical elements 2025-08-13T20:01:07.015583326+00:00 stderr F    }, 2025-08-13T20:01:07.015583326+00:00 stderr F    Version: "", 2025-08-13T20:01:07.015583326+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:07.015583326+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:07.015583326+00:00 stderr F   } 2025-08-13T20:01:07.078335875+00:00 stderr F I0813 20:01:07.065635 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:07.078335875+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:07.078335875+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:07.078335875+00:00 stderr F    ... 
// 7 identical elements 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F -  { 2025-08-13T20:01:07.078335875+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:01:07.078335875+00:00 stderr F -  Status: "False", 2025-08-13T20:01:07.078335875+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:13 +0000 UTC", 2025-08-13T20:01:07.078335875+00:00 stderr F -  }, 2025-08-13T20:01:07.078335875+00:00 stderr F +  { 2025-08-13T20:01:07.078335875+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Status: "True", 2025-08-13T20:01:07.078335875+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:06.988753571 +0000 UTC m=+103.278640987", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:01:07.078335875+00:00 stderr F +  }, 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F -  { 2025-08-13T20:01:07.078335875+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.078335875+00:00 stderr F -  Status: "True", 2025-08-13T20:01:07.078335875+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:13 +0000 UTC", 2025-08-13T20:01:07.078335875+00:00 stderr F -  }, 2025-08-13T20:01:07.078335875+00:00 stderr F +  { 2025-08-13T20:01:07.078335875+00:00 stderr F +  
Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Status: "False", 2025-08-13T20:01:07.078335875+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:06.988755081 +0000 UTC m=+103.278642367", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.078335875+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:01:07.078335875+00:00 stderr F +  }, 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:01:07.078335875+00:00 stderr F    ... // 48 identical elements 2025-08-13T20:01:07.078335875+00:00 stderr F    }, 2025-08-13T20:01:07.078335875+00:00 stderr F    Version: "", 2025-08-13T20:01:07.078335875+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:07.078335875+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:07.078335875+00:00 stderr F   } 2025-08-13T20:01:07.264500694+00:00 stderr F E0813 20:01:07.264400 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.275896009+00:00 stderr F E0813 20:01:07.272629 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.275896009+00:00 stderr F E0813 20:01:07.272826 1 status.go:130] 
ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.278903904+00:00 stderr F I0813 20:01:07.276103 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:07.278903904+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:07.278903904+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:07.278903904+00:00 stderr F    ... // 45 identical elements 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleNotificationSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleNotificationSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:17 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F -  { 2025-08-13T20:01:07.278903904+00:00 stderr F -  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:01:07.278903904+00:00 stderr F -  Status: "False", 2025-08-13T20:01:07.278903904+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-08-13T20:01:07.278903904+00:00 stderr F -  }, 2025-08-13T20:01:07.278903904+00:00 stderr F +  { 2025-08-13T20:01:07.278903904+00:00 stderr F +  Type: "ConsoleCustomRouteSyncDegraded", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Status: "True", 2025-08-13T20:01:07.278903904+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:07.273043147 +0000 UTC m=+103.562930533", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:01:07.278903904+00:00 stderr F +  }, 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: 
s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F -  { 2025-08-13T20:01:07.278903904+00:00 stderr F -  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.278903904+00:00 stderr F -  Status: "True", 2025-08-13T20:01:07.278903904+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-08-13T20:01:07.278903904+00:00 stderr F -  }, 2025-08-13T20:01:07.278903904+00:00 stderr F +  { 2025-08-13T20:01:07.278903904+00:00 stderr F +  Type: "ConsoleCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Status: "False", 2025-08-13T20:01:07.278903904+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:07.273040577 +0000 UTC m=+103.562928533", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.278903904+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)", 2025-08-13T20:01:07.278903904+00:00 stderr F +  }, 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F    {Type: "ConsoleDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:18 +0000 UTC"}}, 2025-08-13T20:01:07.278903904+00:00 stderr F    ... 
// 10 identical elements 2025-08-13T20:01:07.278903904+00:00 stderr F    }, 2025-08-13T20:01:07.278903904+00:00 stderr F    Version: "", 2025-08-13T20:01:07.278903904+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:07.278903904+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:07.278903904+00:00 stderr F   } 2025-08-13T20:01:07.587016780+00:00 stderr F I0813 20:01:07.538002 1 core.go:341] ConfigMap "openshift-console/console-config" changes: {"apiVersion":"v1","data":{"console-config.yaml":"apiVersion: console.openshift.io/v1\nauth:\n authType: openshift\n clientID: console\n clientSecretFile: /var/oauth-config/clientSecret\n oauthEndpointCAFile: /var/oauth-serving-cert/ca-bundle.crt\nclusterInfo:\n consoleBaseAddress: https://console-openshift-console.apps-crc.testing\n controlPlaneTopology: SingleReplica\n masterPublicURL: https://api.crc.testing:6443\n nodeArchitectures:\n - amd64\n nodeOperatingSystems:\n - linux\n releaseVersion: 4.16.0\ncustomization:\n branding: ocp\n documentationBaseURL: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.16/\nkind: ConsoleConfig\nproviders: {}\nservingInfo:\n bindAddress: https://[::]:8443\n certFile: /var/serving-cert/tls.crt\n keyFile: /var/serving-cert/tls.key\nsession: {}\ntelemetry:\n CLUSTER_ID: a84dabf3-edcf-4828-b6a1-f9d3a6f02304\n SEGMENT_API_HOST: console.redhat.com/connections/api/v1\n SEGMENT_JS_HOST: console.redhat.com/connections/cdn\n SEGMENT_PUBLIC_API_KEY: BnuS1RP39EmLQjP21ko67oDjhbl9zpNU\n TELEMETER_CLIENT_DISABLED: \"true\"\n"},"kind":"ConfigMap","metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:01:07.587016780+00:00 stderr F I0813 20:01:07.543698 1 helpers.go:184] lister was stale at resourceVersion=29730, live get showed 
resourceVersion=30184 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.544621 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F I0813 20:01:07.551336 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:07.587016780+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:07.587016780+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:07.587016780+00:00 stderr F    ... // 7 identical elements 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "DownloadsDefaultRouteSyncUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:52 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F -  { 2025-08-13T20:01:07.587016780+00:00 stderr F -  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:01:07.587016780+00:00 stderr F -  Status: "False", 2025-08-13T20:01:07.587016780+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:13 +0000 UTC", 2025-08-13T20:01:07.587016780+00:00 stderr F -  }, 2025-08-13T20:01:07.587016780+00:00 stderr F +  { 2025-08-13T20:01:07.587016780+00:00 stderr F +  Type: "DownloadsCustomRouteSyncDegraded", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Status: "True", 2025-08-13T20:01:07.587016780+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:07.544159288 +0000 UTC m=+103.834046734", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:01:07.587016780+00:00 stderr F +  }, 
2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "DownloadsCustomRouteSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:44 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F -  { 2025-08-13T20:01:07.587016780+00:00 stderr F -  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.587016780+00:00 stderr F -  Status: "True", 2025-08-13T20:01:07.587016780+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:13 +0000 UTC", 2025-08-13T20:01:07.587016780+00:00 stderr F -  }, 2025-08-13T20:01:07.587016780+00:00 stderr F +  { 2025-08-13T20:01:07.587016780+00:00 stderr F +  Type: "DownloadsCustomRouteSyncUpgradeable", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Status: "False", 2025-08-13T20:01:07.587016780+00:00 stderr F +  LastTransitionTime: s"2025-08-13 20:01:07.544161238 +0000 UTC m=+103.834048644", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Reason: "FailedDeleteCustomRoutes", 2025-08-13T20:01:07.587016780+00:00 stderr F +  Message: "the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)", 2025-08-13T20:01:07.587016780+00:00 stderr F +  }, 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "ServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:01:07.587016780+00:00 stderr F    ... 
// 48 identical elements 2025-08-13T20:01:07.587016780+00:00 stderr F    }, 2025-08-13T20:01:07.587016780+00:00 stderr F    Version: "", 2025-08-13T20:01:07.587016780+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:07.587016780+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:07.587016780+00:00 stderr F   } 2025-08-13T20:01:07.587016780+00:00 stderr F I0813 20:01:07.551749 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/console-config -n openshift-console: 2025-08-13T20:01:07.587016780+00:00 stderr F cause by changes in data.console-config.yaml 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.555037 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.555048 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.555177 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.558442 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 
stderr F E0813 20:01:07.558451 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.587016780+00:00 stderr F E0813 20:01:07.558591 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.652762715+00:00 stderr F E0813 20:01:07.646400 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.652762715+00:00 stderr F E0813 20:01:07.646455 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.652762715+00:00 stderr F E0813 20:01:07.646626 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.697226813+00:00 stderr F E0813 20:01:07.696232 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.697226813+00:00 stderr F E0813 20:01:07.696304 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.697226813+00:00 stderr F E0813 20:01:07.696511 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.869646059+00:00 stderr F I0813 20:01:07.823208 1 
status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:07.881203309+00:00 stderr F E0813 20:01:07.880855 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.894914909+00:00 stderr F E0813 20:01:07.894527 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io 
console-custom) 2025-08-13T20:01:07.904815762+00:00 stderr F E0813 20:01:07.898203 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.904815762+00:00 stderr F E0813 20:01:07.884409 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:07.910492344+00:00 stderr F E0813 20:01:07.909147 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.910667659+00:00 stderr F E0813 20:01:07.910649 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.911087881+00:00 stderr F E0813 20:01:07.911062 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:07.916939968+00:00 stderr F E0813 20:01:07.916908 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:07.917004159+00:00 stderr F E0813 20:01:07.916989 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:07.917282507+00:00 stderr F E0813 20:01:07.917254 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 
2025-08-13T20:01:08.110279260+00:00 stderr F E0813 20:01:08.084310 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.110279260+00:00 stderr F E0813 20:01:08.084362 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.110279260+00:00 stderr F E0813 20:01:08.084596 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.110863267+00:00 stderr F I0813 20:01:08.110664 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'",Upgradeable changed from True to False ("ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)") 2025-08-13T20:01:08.139733750+00:00 stderr F E0813 20:01:08.118295 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 
2025-08-13T20:01:08.139733750+00:00 stderr F E0813 20:01:08.118382 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.139733750+00:00 stderr F E0813 20:01:08.118624 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.177380183+00:00 stderr F I0813 20:01:08.177300 1 apps.go:154] Deployment "openshift-console/console" changes: {"metadata":{"annotations":{"console.openshift.io/console-config-version":"30193","operator.openshift.io/spec-hash":"f1efe610e88a03d177d7da7c48414aef173b52463548255cc8d0eb8b0da2387b"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"strategy":{"rollingUpdate":{"maxSurge":null,"maxUnavailable":null}},"template":{"metadata":{"annotations":{"console.openshift.io/console-config-version":"30193"}},"spec":{"containers":[{"command":["/opt/bridge/bin/bridge","--public-dir=/opt/bridge/static","--config=/var/console-config/console-config.yaml","--service-ca-file=/var/service-ca/service-ca.crt","--v=2"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae","imagePullPolicy":"IfNotPresent","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":1,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"console","ports":[{"containerPort":8443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m
","memory":"100Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":false},"startupProbe":{"failureThreshold":30,"httpGet":{"path":"/health","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/serving-cert","name":"console-serving-cert","readOnly":true},{"mountPath":"/var/oauth-config","name":"console-oauth-config","readOnly":true},{"mountPath":"/var/console-config","name":"console-config","readOnly":true},{"mountPath":"/var/service-ca","name":"service-ca","readOnly":true},{"mountPath":"/etc/pki/ca-trust/extracted/pem","name":"trusted-ca-bundle","readOnly":true},{"mountPath":"/var/oauth-serving-cert","name":"oauth-serving-cert","readOnly":true}]}],"dnsPolicy":null,"serviceAccount":null,"volumes":[{"name":"console-serving-cert","secret":{"secretName":"console-serving-cert"}},{"name":"console-oauth-config","secret":{"secretName":"console-oauth-config"}},{"configMap":{"name":"console-config"},"name":"console-config"},{"configMap":{"name":"service-ca"},"name":"service-ca"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"trusted-ca-bundle"},"name":"trusted-ca-bundle"},{"configMap":{"name":"oauth-serving-cert"},"name":"oauth-serving-cert"}]}}}} 2025-08-13T20:01:08.227915294+00:00 stderr F E0813 20:01:08.225041 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.269017006+00:00 stderr F E0813 20:01:08.262308 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.269017006+00:00 stderr F E0813 20:01:08.263064 1 status.go:130] ConsoleCustomRouteSyncDegraded 
FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:08.269017006+00:00 stderr F E0813 20:01:08.263077 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:08.269017006+00:00 stderr F E0813 20:01:08.263497 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:08.276528661+00:00 stderr F I0813 20:01:08.274511 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service 
Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:08.283445968+00:00 stderr F E0813 20:01:08.282072 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.324892079+00:00 stderr F E0813 20:01:08.324712 1 status.go:130] SyncLoopRefreshProgressing InProgress changes made during sync updates, additional sync expected 2025-08-13T20:01:08.324892079+00:00 stderr F E0813 20:01:08.324766 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:01:08.327935766+00:00 stderr F I0813 20:01:08.327887 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/console -n openshift-console because it changed 2025-08-13T20:01:08.350113509+00:00 stderr F I0813 20:01:08.347209 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: 
/var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:08.350113509+00:00 stderr F I0813 20:01:08.347910 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:08.409921724+00:00 stderr F I0813 20:01:08.405262 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:08.414879955+00:00 stderr F I0813 20:01:08.414034 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:08.41398487 +0000 UTC))" 2025-08-13T20:01:08.415876944+00:00 stderr F I0813 20:01:08.414953 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:08.414931167 +0000 UTC))" 2025-08-13T20:01:08.415969196+00:00 stderr F I0813 20:01:08.415945 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:08.415914925 
+0000 UTC))" 2025-08-13T20:01:08.416032318+00:00 stderr F I0813 20:01:08.416013 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:08.415991547 +0000 UTC))" 2025-08-13T20:01:08.416092010+00:00 stderr F I0813 20:01:08.416073 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:08.416057379 +0000 UTC))" 2025-08-13T20:01:08.416142221+00:00 stderr F I0813 20:01:08.416129 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:08.416112411 +0000 UTC))" 2025-08-13T20:01:08.416246064+00:00 stderr F I0813 20:01:08.416230 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC 
(now=2025-08-13 20:01:08.416210733 +0000 UTC))" 2025-08-13T20:01:08.416304896+00:00 stderr F I0813 20:01:08.416288 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:08.416267875 +0000 UTC))" 2025-08-13T20:01:08.416354447+00:00 stderr F I0813 20:01:08.416342 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:08.416327387 +0000 UTC))" 2025-08-13T20:01:08.416457460+00:00 stderr F I0813 20:01:08.416437 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:08.416421569 +0000 UTC))" 2025-08-13T20:01:08.416518042+00:00 stderr F I0813 20:01:08.416497 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 
20:01:08.416480951 +0000 UTC))" 2025-08-13T20:01:08.417026957+00:00 stderr F I0813 20:01:08.416992 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:12 +0000 UTC to 2027-08-13 20:00:13 +0000 UTC (now=2025-08-13 20:01:08.416966885 +0000 UTC))" 2025-08-13T20:01:08.420489225+00:00 stderr F I0813 20:01:08.420460 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115181\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115179\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:01:08.420425854 +0000 UTC))" 2025-08-13T20:01:08.738228045+00:00 stderr F E0813 20:01:08.733037 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.738228045+00:00 stderr F E0813 20:01:08.733079 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.752475751+00:00 stderr F E0813 20:01:08.746554 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:08.826232414+00:00 stderr F I0813 20:01:08.765352 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'" to "ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'",Upgradeable message changed from "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)" to "ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)" 2025-08-13T20:01:08.826232414+00:00 stderr F I0813 20:01:08.817376 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:08.826232414+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:08.826232414+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:08.826232414+00:00 stderr F    ... 
// 13 identical elements 2025-08-13T20:01:08.826232414+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:01:08.826232414+00:00 stderr F    {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:01:08.826232414+00:00 stderr F    { 2025-08-13T20:01:08.826232414+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:01:08.826232414+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:01:08.826232414+00:00 stderr F    Reason: "InProgress", 2025-08-13T20:01:08.826232414+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:01:08.826232414+00:00 stderr F -  "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:01:08.826232414+00:00 stderr F +  "changes made during sync updates, additional sync expected", 2025-08-13T20:01:08.826232414+00:00 stderr F    }, ""), 2025-08-13T20:01:08.826232414+00:00 stderr F    }, 2025-08-13T20:01:08.826232414+00:00 stderr F    {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:01:08.826232414+00:00 stderr F    {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:01:08.826232414+00:00 stderr F    ... // 44 identical elements 2025-08-13T20:01:08.826232414+00:00 stderr F    }, 2025-08-13T20:01:08.826232414+00:00 stderr F    Version: "", 2025-08-13T20:01:08.826232414+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:08.826232414+00:00 stderr F    Generations: []v1.GenerationStatus{ 2025-08-13T20:01:08.826232414+00:00 stderr F    {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, 2025-08-13T20:01:08.826232414+00:00 stderr F    { 2025-08-13T20:01:08.826232414+00:00 stderr F    ... 
// 2 identical fields 2025-08-13T20:01:08.826232414+00:00 stderr F    Namespace: "openshift-console", 2025-08-13T20:01:08.826232414+00:00 stderr F    Name: "console", 2025-08-13T20:01:08.826232414+00:00 stderr F -  LastGeneration: 4, 2025-08-13T20:01:08.826232414+00:00 stderr F +  LastGeneration: 5, 2025-08-13T20:01:08.826232414+00:00 stderr F    Hash: "", 2025-08-13T20:01:08.826232414+00:00 stderr F    }, 2025-08-13T20:01:08.826232414+00:00 stderr F    }, 2025-08-13T20:01:08.826232414+00:00 stderr F   } 2025-08-13T20:01:09.226328592+00:00 stderr F E0813 20:01:09.174533 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.226500987+00:00 stderr F E0813 20:01:09.226463 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.250915843+00:00 stderr F E0813 20:01:09.250812 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.409214567+00:00 stderr F E0813 20:01:09.409153 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.409302429+00:00 stderr F E0813 20:01:09.409287 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.412097119+00:00 stderr F E0813 20:01:09.410035 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 
2025-08-13T20:01:09.450065452+00:00 stderr F E0813 20:01:09.449911 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.450065452+00:00 stderr F E0813 20:01:09.449991 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.450697530+00:00 stderr F E0813 20:01:09.450576 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.500530601+00:00 stderr F I0813 20:01:09.493874 1 status_controller.go:218] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"ConsoleCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable'","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes::RouteHealth_StatusError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected","reason":"SyncLoopRefresh_InProgress","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:18Z","message":"DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service 
Unavailable'","reason":"Deployment_InsufficientReplicas::RouteHealth_StatusError","status":"False","type":"Available"},{"lastTransitionTime":"2025-08-13T20:01:07Z","message":"ConsoleCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nDownloadsCustomRouteSyncUpgradeable: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom)","reason":"ConsoleCustomRouteSync_FailedDeleteCustomRoutes::DownloadsCustomRouteSync_FailedDeleteCustomRoutes","status":"False","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:53:43Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:09.622490358+00:00 stderr F E0813 20:01:09.614090 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.622490358+00:00 stderr F E0813 20:01:09.614158 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.622490358+00:00 stderr F E0813 20:01:09.614374 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:09.648479929+00:00 stderr F E0813 20:01:09.645963 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.648479929+00:00 stderr F E0813 20:01:09.646046 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.648479929+00:00 
stderr F E0813 20:01:09.646219 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:09.924185441+00:00 stderr F I0813 20:01:09.915994 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.16.0, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" 2025-08-13T20:01:09.958638953+00:00 stderr F E0813 20:01:09.958438 1 status.go:130] RouteHealthDegraded StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:01:09.958638953+00:00 stderr F E0813 20:01:09.958476 1 status.go:130] RouteHealthAvailable StatusError route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:01:09.958869540+00:00 stderr F E0813 20:01:09.958723 1 base_controller.go:268] HealthCheckController reconciliation failed: route not yet available, https://console-openshift-console.apps-crc.testing returns '503 Service Unavailable' 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199527 1 status.go:130] ConsoleCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199568 1 status.go:130] ConsoleCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 
2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199727 1 base_controller.go:268] ConsoleRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199932 1 status.go:130] DownloadsCustomRouteSyncDegraded FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.199941 1 status.go:130] DownloadsCustomRouteSyncUpgradeable FailedDeleteCustomRoutes the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:10.207044716+00:00 stderr F E0813 20:01:10.200059 1 base_controller.go:268] DownloadsRouteController reconciliation failed: the server is currently unable to handle the request (delete routes.route.openshift.io downloads-custom) 2025-08-13T20:01:10.235524539+00:00 stderr F E0813 20:01:10.235060 1 status.go:130] SyncLoopRefreshProgressing InProgress working toward version 4.16.0, 0 replicas available 2025-08-13T20:01:10.235524539+00:00 stderr F E0813 20:01:10.235387 1 status.go:130] DeploymentAvailable InsufficientReplicas 0 replicas available for console deployment 2025-08-13T20:01:10.443988903+00:00 stderr F I0813 20:01:10.426019 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-08-13T20:01:10.443988903+00:00 stderr F    ObservedGeneration: 1, 2025-08-13T20:01:10.443988903+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-08-13T20:01:10.443988903+00:00 stderr F    ... 
// 13 identical elements 2025-08-13T20:01:10.443988903+00:00 stderr F    {Type: "ServiceSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:46 +0000 UTC"}}, 2025-08-13T20:01:10.443988903+00:00 stderr F    {Type: "SyncLoopRefreshDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-08-13T20:01:10.443988903+00:00 stderr F    { 2025-08-13T20:01:10.443988903+00:00 stderr F    ... // 2 identical fields 2025-08-13T20:01:10.443988903+00:00 stderr F    LastTransitionTime: {Time: s"2024-06-27 13:34:18 +0000 UTC"}, 2025-08-13T20:01:10.443988903+00:00 stderr F    Reason: "InProgress", 2025-08-13T20:01:10.443988903+00:00 stderr F    Message: strings.Join({ 2025-08-13T20:01:10.443988903+00:00 stderr F -  "changes made during sync updates, additional sync expected", 2025-08-13T20:01:10.443988903+00:00 stderr F +  "working toward version 4.16.0, 0 replicas available", 2025-08-13T20:01:10.443988903+00:00 stderr F    }, ""), 2025-08-13T20:01:10.443988903+00:00 stderr F    }, 2025-08-13T20:01:10.443988903+00:00 stderr F    {Type: "OAuthClientSecretSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:01:10.443988903+00:00 stderr F    {Type: "OAuthClientSecretSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:55 +0000 UTC"}}, 2025-08-13T20:01:10.443988903+00:00 stderr F    ... 
// 44 identical elements 2025-08-13T20:01:10.443988903+00:00 stderr F    }, 2025-08-13T20:01:10.443988903+00:00 stderr F    Version: "", 2025-08-13T20:01:10.443988903+00:00 stderr F    ReadyReplicas: 0, 2025-08-13T20:01:10.443988903+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-08-13T20:01:10.443988903+00:00 stderr F   } 2025-08-13T20:01:10.648973078+00:00 stderr F I0813 20:01:10.648051 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="986026bc94c265a214cb3459ff9cc01d5aa0eabbc41959f11d26b6222c432f4b", new="c8d612f3b74dc6507c61e4d04d4ecf5c547ff292af799c7a689fe7a15e5377e0") 2025-08-13T20:01:10.684128900+00:00 stderr F W0813 20:01:10.679640 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:10.684128900+00:00 stderr F I0813 20:01:10.680909 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="4b5d87903056afff0f59aa1059503707e0decf9c5ece89d2e759b1a6adbf089a", new="b9e8e76d9d6343210f883954e57c9ccdef1698a4fed96aca367288053d3b1f02") 2025-08-13T20:01:10.684128900+00:00 stderr F I0813 20:01:10.683590 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:10.684128900+00:00 stderr F I0813 20:01:10.683741 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:10.684188392+00:00 stderr F I0813 20:01:10.684120 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:10.684188392+00:00 stderr F I0813 20:01:10.684129 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ... 
2025-08-13T20:01:10.684349227+00:00 stderr F I0813 20:01:10.684313 1 base_controller.go:172] Shutting down PodDisruptionBudgetController ... 2025-08-13T20:01:10.684398598+00:00 stderr F I0813 20:01:10.684385 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:01:10.684434139+00:00 stderr F I0813 20:01:10.684408 1 base_controller.go:172] Shutting down ClusterUpgradeNotificationController ... 2025-08-13T20:01:10.684480740+00:00 stderr F I0813 20:01:10.684468 1 base_controller.go:172] Shutting down ConsoleServiceController ... 2025-08-13T20:01:10.684521261+00:00 stderr F I0813 20:01:10.684509 1 base_controller.go:172] Shutting down ConsoleServiceController ... 2025-08-13T20:01:10.684556742+00:00 stderr F I0813 20:01:10.684517 1 base_controller.go:172] Shutting down InformerWithSwitchController ... 2025-08-13T20:01:10.684587203+00:00 stderr F W0813 20:01:10.684548 1 builder.go:131] graceful termination failed, controllers failed with error: stopped 2025-08-13T20:01:10.685148869+00:00 stderr F I0813 20:01:10.684633 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/2.log
2025-12-13T00:13:13.858664274+00:00 stderr F I1213 00:13:13.858315 1 cmd.go:241] Using service-serving-cert provided certificates 2025-12-13T00:13:13.858664274+00:00 stderr F I1213 00:13:13.858440 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}. 2025-12-13T00:13:13.859161340+00:00 stderr F I1213 00:13:13.859139 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:13.948486512+00:00 stderr F I1213 00:13:13.948155 1 builder.go:299] console-operator version - 2025-12-13T00:13:14.238186127+00:00 stderr F I1213 00:13:14.237839 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:14.238186127+00:00 stderr F W1213 00:13:14.238066 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:14.238186127+00:00 stderr F W1213 00:13:14.238072 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:14.241353893+00:00 stderr F I1213 00:13:14.241293 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-12-13T00:13:14.242231062+00:00 stderr F I1213 00:13:14.241599 1 leaderelection.go:250] attempting to acquire leader lease openshift-console-operator/console-operator-lock... 
2025-12-13T00:13:14.249868719+00:00 stderr F I1213 00:13:14.249838 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:14.249868719+00:00 stderr F I1213 00:13:14.249856 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:14.249941952+00:00 stderr F I1213 00:13:14.249908 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:14.249941952+00:00 stderr F I1213 00:13:14.249923 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:14.249953972+00:00 stderr F I1213 00:13:14.249922 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:14.249961052+00:00 stderr F I1213 00:13:14.249952 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:14.250448589+00:00 stderr F I1213 00:13:14.250418 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:14.251314447+00:00 stderr F I1213 00:13:14.251288 1 secure_serving.go:213] Serving securely on [::]:8443 2025-12-13T00:13:14.251353749+00:00 stderr F I1213 00:13:14.251334 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:14.351638999+00:00 stderr F I1213 00:13:14.350746 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:13:14.351638999+00:00 stderr F I1213 00:13:14.350825 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
2025-12-13T00:13:14.351638999+00:00 stderr F I1213 00:13:14.351496 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:18:27.151663914+00:00 stderr F I1213 00:18:27.151187 1 leaderelection.go:260] successfully acquired lease openshift-console-operator/console-operator-lock 2025-12-13T00:18:27.151663914+00:00 stderr F I1213 00:18:27.151237 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-console-operator", Name:"console-operator-lock", UID:"30020b87-25e8-41b0-a858-a4ef10623cf0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41998", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' console-operator-5dbbc74dc9-cp5cd_3caa6c05-ca23-474c-b89a-0fb382d9f9a1 became leader 2025-12-13T00:18:27.160176798+00:00 stderr F I1213 00:18:27.160133 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:18:27.163280964+00:00 stderr F I1213 00:18:27.163242 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, 
Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-12-13T00:18:27.163364056+00:00 stderr F I1213 00:18:27.163319 1 starter.go:206] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate 
GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-12-13T00:18:27.177469295+00:00 stderr F I1213 00:18:27.177293 1 base_controller.go:67] Waiting for caches to sync for HealthCheckController 2025-12-13T00:18:27.177469295+00:00 stderr F I1213 00:18:27.177332 1 base_controller.go:67] Waiting for caches to sync for DownloadsRouteController 2025-12-13T00:18:27.177469295+00:00 stderr F I1213 00:18:27.177351 1 base_controller.go:67] Waiting for caches to sync for ConsoleOperator 2025-12-13T00:18:27.177469295+00:00 stderr F I1213 00:18:27.177364 1 base_controller.go:67] Waiting for caches to sync for ConsoleCLIDownloadsController 2025-12-13T00:18:27.177469295+00:00 stderr F I1213 00:18:27.177386 1 base_controller.go:67] Waiting for caches to sync for ConsoleDownloadsDeploymentSyncController 2025-12-13T00:18:27.177469295+00:00 stderr F I1213 00:18:27.177400 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 2025-12-13T00:18:27.177469295+00:00 stderr F I1213 00:18:27.177415 1 base_controller.go:67] Waiting for caches to sync for PodDisruptionBudgetController 
2025-12-13T00:18:27.177469295+00:00 stderr F I1213 00:18:27.177429 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController 2025-12-13T00:18:27.177469295+00:00 stderr F I1213 00:18:27.177443 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-12-13T00:18:27.177469295+00:00 stderr F I1213 00:18:27.177457 1 base_controller.go:67] Waiting for caches to sync for InformerWithSwitchController 2025-12-13T00:18:27.177469295+00:00 stderr F I1213 00:18:27.177461 1 base_controller.go:73] Caches are synced for InformerWithSwitchController 2025-12-13T00:18:27.177506326+00:00 stderr F I1213 00:18:27.177466 1 base_controller.go:110] Starting #1 worker of InformerWithSwitchController controller ... 2025-12-13T00:18:27.178545094+00:00 stderr F I1213 00:18:27.178503 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-12-13T00:18:27.178562725+00:00 stderr F I1213 00:18:27.178551 1 base_controller.go:67] Waiting for caches to sync for OAuthClientSecretController 2025-12-13T00:18:27.178581225+00:00 stderr F I1213 00:18:27.178567 1 base_controller.go:67] Waiting for caches to sync for OIDCSetupController 2025-12-13T00:18:27.178588775+00:00 stderr F I1213 00:18:27.178583 1 base_controller.go:67] Waiting for caches to sync for CLIOIDCClientStatusController 2025-12-13T00:18:27.178628817+00:00 stderr F I1213 00:18:27.178599 1 base_controller.go:67] Waiting for caches to sync for ClusterUpgradeNotificationController 2025-12-13T00:18:27.178628817+00:00 stderr F I1213 00:18:27.178609 1 base_controller.go:73] Caches are synced for ClusterUpgradeNotificationController 2025-12-13T00:18:27.178628817+00:00 stderr F I1213 00:18:27.178616 1 base_controller.go:110] Starting #1 worker of ClusterUpgradeNotificationController controller ... 
2025-12-13T00:18:27.178665478+00:00 stderr F I1213 00:18:27.178645 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-12-13T00:18:27.178665478+00:00 stderr F I1213 00:18:27.178661 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController 2025-12-13T00:18:27.178682798+00:00 stderr F I1213 00:18:27.178677 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2025-12-13T00:18:27.179370338+00:00 stderr F I1213 00:18:27.178781 1 base_controller.go:67] Waiting for caches to sync for ConsoleServiceController 2025-12-13T00:18:27.179370338+00:00 stderr F I1213 00:18:27.179272 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-12-13T00:18:27.180364695+00:00 stderr F I1213 00:18:27.180323 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_console 2025-12-13T00:18:27.180431457+00:00 stderr F E1213 00:18:27.180394 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-12-13T00:18:27.180431457+00:00 stderr F I1213 00:18:27.180416 1 base_controller.go:67] Waiting for caches to sync for ConsoleRouteController 2025-12-13T00:18:27.186042111+00:00 stderr F E1213 00:18:27.185501 1 base_controller.go:268] ClusterUpgradeNotificationController reconciliation failed: console.operator.openshift.io "cluster" not found 2025-12-13T00:18:27.277889404+00:00 stderr F I1213 00:18:27.277783 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-12-13T00:18:27.277889404+00:00 stderr F I1213 00:18:27.277860 1 base_controller.go:73] Caches are synced for DownloadsRouteController 2025-12-13T00:18:27.277889404+00:00 stderr F I1213 00:18:27.277865 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-12-13T00:18:27.277889404+00:00 stderr F I1213 00:18:27.277877 1 base_controller.go:110] Starting #1 worker of DownloadsRouteController controller ... 2025-12-13T00:18:27.278049179+00:00 stderr F I1213 00:18:27.278016 1 base_controller.go:73] Caches are synced for ConsoleDownloadsDeploymentSyncController 2025-12-13T00:18:27.278061449+00:00 stderr F I1213 00:18:27.278045 1 base_controller.go:110] Starting #1 worker of ConsoleDownloadsDeploymentSyncController controller ... 2025-12-13T00:18:27.278111440+00:00 stderr F I1213 00:18:27.278085 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2025-12-13T00:18:27.278111440+00:00 stderr F I1213 00:18:27.278093 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 2025-12-13T00:18:27.278119830+00:00 stderr F I1213 00:18:27.277817 1 base_controller.go:73] Caches are synced for ConsoleCLIDownloadsController 2025-12-13T00:18:27.278150351+00:00 stderr F I1213 00:18:27.278125 1 base_controller.go:73] Caches are synced for PodDisruptionBudgetController 2025-12-13T00:18:27.278158182+00:00 stderr F I1213 00:18:27.278149 1 base_controller.go:110] Starting #1 worker of PodDisruptionBudgetController controller ... 2025-12-13T00:18:27.278208183+00:00 stderr F I1213 00:18:27.278182 1 base_controller.go:73] Caches are synced for OAuthClientsController 2025-12-13T00:18:27.278208183+00:00 stderr F I1213 00:18:27.278192 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 2025-12-13T00:18:27.278216583+00:00 stderr F I1213 00:18:27.278128 1 base_controller.go:110] Starting #1 worker of ConsoleCLIDownloadsController controller ... 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.278608 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.278620 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 
2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.278626 1 base_controller.go:73] Caches are synced for OAuthClientSecretController 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.278654 1 base_controller.go:110] Starting #1 worker of OAuthClientSecretController controller ... 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.278867 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.278877 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.278893 1 base_controller.go:73] Caches are synced for CLIOIDCClientStatusController 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.278901 1 base_controller.go:110] Starting #1 worker of CLIOIDCClientStatusController controller ... 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.278918 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.278923 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.279033 1 base_controller.go:73] Caches are synced for ConsoleServiceController 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.279042 1 base_controller.go:110] Starting #1 worker of ConsoleServiceController controller ... 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.279132 1 base_controller.go:73] Caches are synced for ManagementStateController 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.279139 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ... 
2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.279335 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-12-13T00:18:27.279913180+00:00 stderr F I1213 00:18:27.279343 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-12-13T00:18:27.281871254+00:00 stderr F I1213 00:18:27.280685 1 base_controller.go:73] Caches are synced for ConsoleRouteController 2025-12-13T00:18:27.281871254+00:00 stderr F I1213 00:18:27.280717 1 base_controller.go:110] Starting #1 worker of ConsoleRouteController controller ... 2025-12-13T00:18:27.281871254+00:00 stderr F I1213 00:18:27.280821 1 base_controller.go:73] Caches are synced for StatusSyncer_console 2025-12-13T00:18:27.281871254+00:00 stderr F I1213 00:18:27.280830 1 base_controller.go:110] Starting #1 worker of StatusSyncer_console controller ... 2025-12-13T00:18:27.385108880+00:00 stderr F I1213 00:18:27.384999 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:18:27.478042473+00:00 stderr F I1213 00:18:27.477970 1 base_controller.go:73] Caches are synced for ConsoleOperator 2025-12-13T00:18:27.478042473+00:00 stderr F I1213 00:18:27.478010 1 base_controller.go:110] Starting #1 worker of ConsoleOperator controller ... 2025-12-13T00:18:27.478104805+00:00 stderr F I1213 00:18:27.477985 1 base_controller.go:73] Caches are synced for HealthCheckController 2025-12-13T00:18:27.478104805+00:00 stderr F I1213 00:18:27.478071 1 base_controller.go:110] Starting #1 worker of HealthCheckController controller ... 
2025-12-13T00:18:27.478203237+00:00 stderr F I1213 00:18:27.478161 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"e977212b-5bb5-4096-9f11-353076a2ebeb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling 2025-12-13T00:18:27.480056949+00:00 stderr F I1213 00:18:27.479317 1 base_controller.go:73] Caches are synced for OIDCSetupController 2025-12-13T00:18:27.480056949+00:00 stderr F I1213 00:18:27.479345 1 base_controller.go:110] Starting #1 worker of OIDCSetupController controller ... 2025-12-13T00:19:37.580571731+00:00 stderr F I1213 00:19:37.580337 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.580259062 +0000 UTC))" 2025-12-13T00:19:37.580571731+00:00 stderr F I1213 00:19:37.580557 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.58053955 +0000 UTC))" 2025-12-13T00:19:37.580618402+00:00 stderr F I1213 00:19:37.580580 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] 
issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.580563891 +0000 UTC))" 2025-12-13T00:19:37.580618402+00:00 stderr F I1213 00:19:37.580602 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.580586811 +0000 UTC))" 2025-12-13T00:19:37.580629103+00:00 stderr F I1213 00:19:37.580622 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.580608342 +0000 UTC))" 2025-12-13T00:19:37.580662283+00:00 stderr F I1213 00:19:37.580643 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.580629543 +0000 UTC))" 2025-12-13T00:19:37.580672024+00:00 stderr F I1213 00:19:37.580665 1 tlsconfig.go:178] "Loaded client CA" index=6 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.580652013 +0000 UTC))" 2025-12-13T00:19:37.581624190+00:00 stderr F I1213 00:19:37.580686 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.580670954 +0000 UTC))" 2025-12-13T00:19:37.581624190+00:00 stderr F I1213 00:19:37.580753 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.580737546 +0000 UTC))" 2025-12-13T00:19:37.581624190+00:00 stderr F I1213 00:19:37.580776 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.580761156 +0000 UTC))" 2025-12-13T00:19:37.581624190+00:00 stderr F I1213 00:19:37.580800 1 tlsconfig.go:178] "Loaded client CA" index=10 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.580784597 +0000 UTC))" 2025-12-13T00:19:37.581624190+00:00 stderr F I1213 00:19:37.580836 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.580805057 +0000 UTC))" 2025-12-13T00:19:37.581624190+00:00 stderr F I1213 00:19:37.581293 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-console-operator.svc\" [serving] validServingFor=[metrics.openshift-console-operator.svc,metrics.openshift-console-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:12 +0000 UTC to 2027-08-13 20:00:13 +0000 UTC (now=2025-12-13 00:19:37.5812727 +0000 UTC))" 2025-12-13T00:19:37.581891437+00:00 stderr F I1213 00:19:37.581845 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584794\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584794\" (2025-12-12 23:13:13 +0000 UTC to 2026-12-12 23:13:13 +0000 UTC (now=2025-12-13 00:19:37.581803815 +0000 UTC))" 2025-12-13T00:20:57.998054907+00:00 stderr F E1213 00:20:57.997228 1 status.go:130] 
OAuthClientSyncDegraded FailedRegister Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.000284988+00:00 stderr F I1213 00:20:58.000235 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:20:58.000284988+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:20:58.000284988+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:20:58.000284988+00:00 stderr F    ... // 38 identical elements 2025-12-13T00:20:58.000284988+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-12-13T00:20:58.000284988+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-12-13T00:20:58.000284988+00:00 stderr F -  { 2025-12-13T00:20:58.000284988+00:00 stderr F -  Type: "OAuthClientSyncDegraded", 2025-12-13T00:20:58.000284988+00:00 stderr F -  Status: "False", 2025-12-13T00:20:58.000284988+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC", 2025-12-13T00:20:58.000284988+00:00 stderr F -  }, 2025-12-13T00:20:58.000284988+00:00 stderr F +  { 2025-12-13T00:20:58.000284988+00:00 stderr F +  Type: "OAuthClientSyncDegraded", 2025-12-13T00:20:58.000284988+00:00 stderr F +  Status: "True", 2025-12-13T00:20:58.000284988+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:57.998045527 +0000 UTC m=+464.889159339", 2025-12-13T00:20:58.000284988+00:00 stderr F +  Reason: "FailedRegister", 2025-12-13T00:20:58.000284988+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`, 2025-12-13T00:20:58.000284988+00:00 stderr F +  }, 2025-12-13T00:20:58.000284988+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 
+0000 UTC"}}, 2025-12-13T00:20:58.000284988+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-12-13T00:20:58.000284988+00:00 stderr F    ... // 19 identical elements 2025-12-13T00:20:58.000284988+00:00 stderr F    }, 2025-12-13T00:20:58.000284988+00:00 stderr F    Version: "", 2025-12-13T00:20:58.000284988+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:20:58.000284988+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:20:58.000284988+00:00 stderr F   } 2025-12-13T00:20:58.004337757+00:00 stderr F I1213 00:20:58.004282 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:20:58.004337757+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:20:58.004337757+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:20:58.004337757+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-12-13T00:20:58.004337757+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-12-13T00:20:58.004337757+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:20:58.004337757+00:00 stderr F -  { 2025-12-13T00:20:58.004337757+00:00 stderr F -  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:20:58.004337757+00:00 stderr F -  Status: "False", 2025-12-13T00:20:58.004337757+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-12-13T00:20:58.004337757+00:00 stderr F -  Reason: "AsExpected", 2025-12-13T00:20:58.004337757+00:00 stderr F -  }, 
2025-12-13T00:20:58.004337757+00:00 stderr F +  { 2025-12-13T00:20:58.004337757+00:00 stderr F +  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:20:58.004337757+00:00 stderr F +  Status: "True", 2025-12-13T00:20:58.004337757+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.00188958 +0000 UTC m=+464.893003402", 2025-12-13T00:20:58.004337757+00:00 stderr F +  Reason: "SyncError", 2025-12-13T00:20:58.004337757+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`..., 2025-12-13T00:20:58.004337757+00:00 stderr F +  }, 2025-12-13T00:20:58.004337757+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}}, 2025-12-13T00:20:58.004337757+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:20:58.004337757+00:00 stderr F    ... 
// 56 identical elements 2025-12-13T00:20:58.004337757+00:00 stderr F    }, 2025-12-13T00:20:58.004337757+00:00 stderr F    Version: "", 2025-12-13T00:20:58.004337757+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:20:58.004337757+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:20:58.004337757+00:00 stderr F   } 2025-12-13T00:20:58.005236690+00:00 stderr F W1213 00:20:58.005166 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.005236690+00:00 stderr F E1213 00:20:58.005218 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.011228173+00:00 stderr F E1213 00:20:58.011186 1 status.go:130] OAuthClientSyncDegraded FailedRegister Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.013180525+00:00 stderr F I1213 00:20:58.013143 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:20:58.013180525+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:20:58.013180525+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:20:58.013180525+00:00 stderr F    ... 
// 38 identical elements 2025-12-13T00:20:58.013180525+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-12-13T00:20:58.013180525+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-12-13T00:20:58.013180525+00:00 stderr F -  { 2025-12-13T00:20:58.013180525+00:00 stderr F -  Type: "OAuthClientSyncDegraded", 2025-12-13T00:20:58.013180525+00:00 stderr F -  Status: "False", 2025-12-13T00:20:58.013180525+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC", 2025-12-13T00:20:58.013180525+00:00 stderr F -  }, 2025-12-13T00:20:58.013180525+00:00 stderr F +  { 2025-12-13T00:20:58.013180525+00:00 stderr F +  Type: "OAuthClientSyncDegraded", 2025-12-13T00:20:58.013180525+00:00 stderr F +  Status: "True", 2025-12-13T00:20:58.013180525+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.011219423 +0000 UTC m=+464.902333235", 2025-12-13T00:20:58.013180525+00:00 stderr F +  Reason: "FailedRegister", 2025-12-13T00:20:58.013180525+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`, 2025-12-13T00:20:58.013180525+00:00 stderr F +  }, 2025-12-13T00:20:58.013180525+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}}, 2025-12-13T00:20:58.013180525+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-12-13T00:20:58.013180525+00:00 stderr F    ... 
// 19 identical elements 2025-12-13T00:20:58.013180525+00:00 stderr F    }, 2025-12-13T00:20:58.013180525+00:00 stderr F    Version: "", 2025-12-13T00:20:58.013180525+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:20:58.013180525+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:20:58.013180525+00:00 stderr F   } 2025-12-13T00:20:58.016337460+00:00 stderr F I1213 00:20:58.016259 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:20:58.016337460+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:20:58.016337460+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:20:58.016337460+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-12-13T00:20:58.016337460+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-12-13T00:20:58.016337460+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:20:58.016337460+00:00 stderr F -  { 2025-12-13T00:20:58.016337460+00:00 stderr F -  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:20:58.016337460+00:00 stderr F -  Status: "False", 2025-12-13T00:20:58.016337460+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-12-13T00:20:58.016337460+00:00 stderr F -  Reason: "AsExpected", 2025-12-13T00:20:58.016337460+00:00 stderr F -  }, 2025-12-13T00:20:58.016337460+00:00 stderr F +  { 2025-12-13T00:20:58.016337460+00:00 stderr F +  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:20:58.016337460+00:00 stderr F +  Status: "True", 2025-12-13T00:20:58.016337460+00:00 
stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.014169672 +0000 UTC m=+464.905283494", 2025-12-13T00:20:58.016337460+00:00 stderr F +  Reason: "SyncError", 2025-12-13T00:20:58.016337460+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`..., 2025-12-13T00:20:58.016337460+00:00 stderr F +  }, 2025-12-13T00:20:58.016337460+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}}, 2025-12-13T00:20:58.016337460+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:20:58.016337460+00:00 stderr F    ... // 56 identical elements 2025-12-13T00:20:58.016337460+00:00 stderr F    }, 2025-12-13T00:20:58.016337460+00:00 stderr F    Version: "", 2025-12-13T00:20:58.016337460+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:20:58.016337460+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:20:58.016337460+00:00 stderr F   } 2025-12-13T00:20:58.016983648+00:00 stderr F W1213 00:20:58.016952 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.016983648+00:00 stderr F E1213 00:20:58.016978 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.028092848+00:00 stderr F E1213 00:20:58.028037 1 status.go:130] OAuthClientSyncDegraded FailedRegister Get 
"https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.029712731+00:00 stderr F I1213 00:20:58.029675 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:20:58.029712731+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:20:58.029712731+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:20:58.029712731+00:00 stderr F    ... // 38 identical elements 2025-12-13T00:20:58.029712731+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-12-13T00:20:58.029712731+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-12-13T00:20:58.029712731+00:00 stderr F -  { 2025-12-13T00:20:58.029712731+00:00 stderr F -  Type: "OAuthClientSyncDegraded", 2025-12-13T00:20:58.029712731+00:00 stderr F -  Status: "False", 2025-12-13T00:20:58.029712731+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC", 2025-12-13T00:20:58.029712731+00:00 stderr F -  }, 2025-12-13T00:20:58.029712731+00:00 stderr F +  { 2025-12-13T00:20:58.029712731+00:00 stderr F +  Type: "OAuthClientSyncDegraded", 2025-12-13T00:20:58.029712731+00:00 stderr F +  Status: "True", 2025-12-13T00:20:58.029712731+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.028077958 +0000 UTC m=+464.919191760", 2025-12-13T00:20:58.029712731+00:00 stderr F +  Reason: "FailedRegister", 2025-12-13T00:20:58.029712731+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`, 2025-12-13T00:20:58.029712731+00:00 stderr F +  }, 2025-12-13T00:20:58.029712731+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}}, 
2025-12-13T00:20:58.029712731+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-12-13T00:20:58.029712731+00:00 stderr F    ... // 19 identical elements 2025-12-13T00:20:58.029712731+00:00 stderr F    }, 2025-12-13T00:20:58.029712731+00:00 stderr F    Version: "", 2025-12-13T00:20:58.029712731+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:20:58.029712731+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:20:58.029712731+00:00 stderr F   } 2025-12-13T00:20:58.031975332+00:00 stderr F I1213 00:20:58.031893 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:20:58.031975332+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:20:58.031975332+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:20:58.031975332+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-12-13T00:20:58.031975332+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-12-13T00:20:58.031975332+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:20:58.031975332+00:00 stderr F -  { 2025-12-13T00:20:58.031975332+00:00 stderr F -  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:20:58.031975332+00:00 stderr F -  Status: "False", 2025-12-13T00:20:58.031975332+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-12-13T00:20:58.031975332+00:00 stderr F -  Reason: "AsExpected", 2025-12-13T00:20:58.031975332+00:00 stderr F -  }, 
2025-12-13T00:20:58.031975332+00:00 stderr F +  { 2025-12-13T00:20:58.031975332+00:00 stderr F +  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:20:58.031975332+00:00 stderr F +  Status: "True", 2025-12-13T00:20:58.031975332+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.030364559 +0000 UTC m=+464.921478371", 2025-12-13T00:20:58.031975332+00:00 stderr F +  Reason: "SyncError", 2025-12-13T00:20:58.031975332+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`..., 2025-12-13T00:20:58.031975332+00:00 stderr F +  }, 2025-12-13T00:20:58.031975332+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}}, 2025-12-13T00:20:58.031975332+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:20:58.031975332+00:00 stderr F    ... 
// 56 identical elements 2025-12-13T00:20:58.031975332+00:00 stderr F    }, 2025-12-13T00:20:58.031975332+00:00 stderr F    Version: "", 2025-12-13T00:20:58.031975332+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:20:58.031975332+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:20:58.031975332+00:00 stderr F   } 2025-12-13T00:20:58.032544508+00:00 stderr F W1213 00:20:58.032484 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.032544508+00:00 stderr F E1213 00:20:58.032512 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.054196092+00:00 stderr F E1213 00:20:58.054137 1 status.go:130] OAuthClientSyncDegraded FailedRegister Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.056110494+00:00 stderr F I1213 00:20:58.056088 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:20:58.056110494+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:20:58.056110494+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:20:58.056110494+00:00 stderr F    ... 
// 38 identical elements 2025-12-13T00:20:58.056110494+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-12-13T00:20:58.056110494+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-12-13T00:20:58.056110494+00:00 stderr F -  { 2025-12-13T00:20:58.056110494+00:00 stderr F -  Type: "OAuthClientSyncDegraded", 2025-12-13T00:20:58.056110494+00:00 stderr F -  Status: "False", 2025-12-13T00:20:58.056110494+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC", 2025-12-13T00:20:58.056110494+00:00 stderr F -  }, 2025-12-13T00:20:58.056110494+00:00 stderr F +  { 2025-12-13T00:20:58.056110494+00:00 stderr F +  Type: "OAuthClientSyncDegraded", 2025-12-13T00:20:58.056110494+00:00 stderr F +  Status: "True", 2025-12-13T00:20:58.056110494+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.054260954 +0000 UTC m=+464.945374766", 2025-12-13T00:20:58.056110494+00:00 stderr F +  Reason: "FailedRegister", 2025-12-13T00:20:58.056110494+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`, 2025-12-13T00:20:58.056110494+00:00 stderr F +  }, 2025-12-13T00:20:58.056110494+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}}, 2025-12-13T00:20:58.056110494+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-12-13T00:20:58.056110494+00:00 stderr F    ... 
// 19 identical elements 2025-12-13T00:20:58.056110494+00:00 stderr F    }, 2025-12-13T00:20:58.056110494+00:00 stderr F    Version: "", 2025-12-13T00:20:58.056110494+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:20:58.056110494+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:20:58.056110494+00:00 stderr F   } 2025-12-13T00:20:58.058862348+00:00 stderr F I1213 00:20:58.058821 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:20:58.058862348+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:20:58.058862348+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:20:58.058862348+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-12-13T00:20:58.058862348+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-12-13T00:20:58.058862348+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:20:58.058862348+00:00 stderr F -  { 2025-12-13T00:20:58.058862348+00:00 stderr F -  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:20:58.058862348+00:00 stderr F -  Status: "False", 2025-12-13T00:20:58.058862348+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-12-13T00:20:58.058862348+00:00 stderr F -  Reason: "AsExpected", 2025-12-13T00:20:58.058862348+00:00 stderr F -  }, 2025-12-13T00:20:58.058862348+00:00 stderr F +  { 2025-12-13T00:20:58.058862348+00:00 stderr F +  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:20:58.058862348+00:00 stderr F +  Status: "True", 2025-12-13T00:20:58.058862348+00:00 
stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.057106321 +0000 UTC m=+464.948220133", 2025-12-13T00:20:58.058862348+00:00 stderr F +  Reason: "SyncError", 2025-12-13T00:20:58.058862348+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`..., 2025-12-13T00:20:58.058862348+00:00 stderr F +  }, 2025-12-13T00:20:58.058862348+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}}, 2025-12-13T00:20:58.058862348+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:20:58.058862348+00:00 stderr F    ... // 56 identical elements 2025-12-13T00:20:58.058862348+00:00 stderr F    }, 2025-12-13T00:20:58.058862348+00:00 stderr F    Version: "", 2025-12-13T00:20:58.058862348+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:20:58.058862348+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:20:58.058862348+00:00 stderr F   } 2025-12-13T00:20:58.059685480+00:00 stderr F W1213 00:20:58.059655 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.059700661+00:00 stderr F E1213 00:20:58.059683 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.101129979+00:00 stderr F E1213 00:20:58.101069 1 status.go:130] OAuthClientSyncDegraded FailedRegister Get 
"https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.102748512+00:00 stderr F I1213 00:20:58.102715 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:20:58.102748512+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:20:58.102748512+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:20:58.102748512+00:00 stderr F    ... // 38 identical elements 2025-12-13T00:20:58.102748512+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-12-13T00:20:58.102748512+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-12-13T00:20:58.102748512+00:00 stderr F -  { 2025-12-13T00:20:58.102748512+00:00 stderr F -  Type: "OAuthClientSyncDegraded", 2025-12-13T00:20:58.102748512+00:00 stderr F -  Status: "False", 2025-12-13T00:20:58.102748512+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC", 2025-12-13T00:20:58.102748512+00:00 stderr F -  }, 2025-12-13T00:20:58.102748512+00:00 stderr F +  { 2025-12-13T00:20:58.102748512+00:00 stderr F +  Type: "OAuthClientSyncDegraded", 2025-12-13T00:20:58.102748512+00:00 stderr F +  Status: "True", 2025-12-13T00:20:58.102748512+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.101110818 +0000 UTC m=+464.992224630", 2025-12-13T00:20:58.102748512+00:00 stderr F +  Reason: "FailedRegister", 2025-12-13T00:20:58.102748512+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`, 2025-12-13T00:20:58.102748512+00:00 stderr F +  }, 2025-12-13T00:20:58.102748512+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}}, 
2025-12-13T00:20:58.102748512+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-12-13T00:20:58.102748512+00:00 stderr F    ... // 19 identical elements 2025-12-13T00:20:58.102748512+00:00 stderr F    }, 2025-12-13T00:20:58.102748512+00:00 stderr F    Version: "", 2025-12-13T00:20:58.102748512+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:20:58.102748512+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:20:58.102748512+00:00 stderr F   } 2025-12-13T00:20:58.105926868+00:00 stderr F I1213 00:20:58.105863 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:20:58.105926868+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:20:58.105926868+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:20:58.105926868+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-12-13T00:20:58.105926868+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-12-13T00:20:58.105926868+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:20:58.105926868+00:00 stderr F -  { 2025-12-13T00:20:58.105926868+00:00 stderr F -  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:20:58.105926868+00:00 stderr F -  Status: "False", 2025-12-13T00:20:58.105926868+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-12-13T00:20:58.105926868+00:00 stderr F -  Reason: "AsExpected", 2025-12-13T00:20:58.105926868+00:00 stderr F -  }, 
2025-12-13T00:20:58.105926868+00:00 stderr F +  { 2025-12-13T00:20:58.105926868+00:00 stderr F +  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:20:58.105926868+00:00 stderr F +  Status: "True", 2025-12-13T00:20:58.105926868+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.103628586 +0000 UTC m=+464.994742408", 2025-12-13T00:20:58.105926868+00:00 stderr F +  Reason: "SyncError", 2025-12-13T00:20:58.105926868+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`..., 2025-12-13T00:20:58.105926868+00:00 stderr F +  }, 2025-12-13T00:20:58.105926868+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}}, 2025-12-13T00:20:58.105926868+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:20:58.105926868+00:00 stderr F    ... 
// 56 identical elements 2025-12-13T00:20:58.105926868+00:00 stderr F    }, 2025-12-13T00:20:58.105926868+00:00 stderr F    Version: "", 2025-12-13T00:20:58.105926868+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:20:58.105926868+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:20:58.105926868+00:00 stderr F   } 2025-12-13T00:20:58.106838763+00:00 stderr F W1213 00:20:58.106806 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.106882464+00:00 stderr F E1213 00:20:58.106871 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.188353282+00:00 stderr F E1213 00:20:58.188311 1 status.go:130] OAuthClientSyncDegraded FailedRegister Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.190041597+00:00 stderr F I1213 00:20:58.190021 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:20:58.190041597+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:20:58.190041597+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:20:58.190041597+00:00 stderr F    ... 
// 38 identical elements 2025-12-13T00:20:58.190041597+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}}, 2025-12-13T00:20:58.190041597+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}}, 2025-12-13T00:20:58.190041597+00:00 stderr F -  { 2025-12-13T00:20:58.190041597+00:00 stderr F -  Type: "OAuthClientSyncDegraded", 2025-12-13T00:20:58.190041597+00:00 stderr F -  Status: "False", 2025-12-13T00:20:58.190041597+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC", 2025-12-13T00:20:58.190041597+00:00 stderr F -  }, 2025-12-13T00:20:58.190041597+00:00 stderr F +  { 2025-12-13T00:20:58.190041597+00:00 stderr F +  Type: "OAuthClientSyncDegraded", 2025-12-13T00:20:58.190041597+00:00 stderr F +  Status: "True", 2025-12-13T00:20:58.190041597+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.188443605 +0000 UTC m=+465.079557417", 2025-12-13T00:20:58.190041597+00:00 stderr F +  Reason: "FailedRegister", 2025-12-13T00:20:58.190041597+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`, 2025-12-13T00:20:58.190041597+00:00 stderr F +  }, 2025-12-13T00:20:58.190041597+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}}, 2025-12-13T00:20:58.190041597+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-12-13T00:20:58.190041597+00:00 stderr F    ... 
// 19 identical elements 2025-12-13T00:20:58.190041597+00:00 stderr F    }, 2025-12-13T00:20:58.190041597+00:00 stderr F    Version: "", 2025-12-13T00:20:58.190041597+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:20:58.190041597+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:20:58.190041597+00:00 stderr F   } 2025-12-13T00:20:58.203605864+00:00 stderr F I1213 00:20:58.203577 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:20:58.203605864+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:20:58.203605864+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:20:58.203605864+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-12-13T00:20:58.203605864+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-12-13T00:20:58.203605864+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:20:58.203605864+00:00 stderr F -  { 2025-12-13T00:20:58.203605864+00:00 stderr F -  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:20:58.203605864+00:00 stderr F -  Status: "False", 2025-12-13T00:20:58.203605864+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-12-13T00:20:58.203605864+00:00 stderr F -  Reason: "AsExpected", 2025-12-13T00:20:58.203605864+00:00 stderr F -  }, 2025-12-13T00:20:58.203605864+00:00 stderr F +  { 2025-12-13T00:20:58.203605864+00:00 stderr F +  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:20:58.203605864+00:00 stderr F +  Status: "True", 2025-12-13T00:20:58.203605864+00:00 
stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.202129654 +0000 UTC m=+465.093243466",
2025-12-13T00:20:58.203605864+00:00 stderr F +  Reason: "SyncError",
2025-12-13T00:20:58.203605864+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`...,
2025-12-13T00:20:58.203605864+00:00 stderr F +  },
2025-12-13T00:20:58.203605864+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}},
2025-12-13T00:20:58.203605864+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:20:58.203605864+00:00 stderr F    ... // 56 identical elements
2025-12-13T00:20:58.203605864+00:00 stderr F    },
2025-12-13T00:20:58.203605864+00:00 stderr F    Version: "",
2025-12-13T00:20:58.203605864+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:20:58.203605864+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:20:58.203605864+00:00 stderr F   }
2025-12-13T00:20:58.409078949+00:00 stderr F W1213 00:20:58.409004 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:58.409078949+00:00 stderr F E1213 00:20:58.409051 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:58.571389538+00:00 stderr F E1213 00:20:58.571348 1 status.go:130] OAuthClientSyncDegraded FailedRegister Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:58.572986881+00:00 stderr F I1213 00:20:58.572967 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:20:58.572986881+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:20:58.572986881+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:20:58.572986881+00:00 stderr F    ... // 38 identical elements
2025-12-13T00:20:58.572986881+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}},
2025-12-13T00:20:58.572986881+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-12-13T00:20:58.572986881+00:00 stderr F -  {
2025-12-13T00:20:58.572986881+00:00 stderr F -  Type: "OAuthClientSyncDegraded",
2025-12-13T00:20:58.572986881+00:00 stderr F -  Status: "False",
2025-12-13T00:20:58.572986881+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC",
2025-12-13T00:20:58.572986881+00:00 stderr F -  },
2025-12-13T00:20:58.572986881+00:00 stderr F +  {
2025-12-13T00:20:58.572986881+00:00 stderr F +  Type: "OAuthClientSyncDegraded",
2025-12-13T00:20:58.572986881+00:00 stderr F +  Status: "True",
2025-12-13T00:20:58.572986881+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.571457081 +0000 UTC m=+465.462570893",
2025-12-13T00:20:58.572986881+00:00 stderr F +  Reason: "FailedRegister",
2025-12-13T00:20:58.572986881+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`,
2025-12-13T00:20:58.572986881+00:00 stderr F +  },
2025-12-13T00:20:58.572986881+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}},
2025-12-13T00:20:58.572986881+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-12-13T00:20:58.572986881+00:00 stderr F    ... // 19 identical elements
2025-12-13T00:20:58.572986881+00:00 stderr F    },
2025-12-13T00:20:58.572986881+00:00 stderr F    Version: "",
2025-12-13T00:20:58.572986881+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:20:58.572986881+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:20:58.572986881+00:00 stderr F   }
2025-12-13T00:20:58.603739281+00:00 stderr F I1213 00:20:58.603690 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:20:58.603739281+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:20:58.603739281+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:20:58.603739281+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...},
2025-12-13T00:20:58.603739281+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-12-13T00:20:58.603739281+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:20:58.603739281+00:00 stderr F -  {
2025-12-13T00:20:58.603739281+00:00 stderr F -  Type: "OAuthClientsControllerDegraded",
2025-12-13T00:20:58.603739281+00:00 stderr F -  Status: "False",
2025-12-13T00:20:58.603739281+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC",
2025-12-13T00:20:58.603739281+00:00 stderr F -  Reason: "AsExpected",
2025-12-13T00:20:58.603739281+00:00 stderr F -  },
2025-12-13T00:20:58.603739281+00:00 stderr F +  {
2025-12-13T00:20:58.603739281+00:00 stderr F +  Type: "OAuthClientsControllerDegraded",
2025-12-13T00:20:58.603739281+00:00 stderr F +  Status: "True",
2025-12-13T00:20:58.603739281+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:58.602288443 +0000 UTC m=+465.493402265",
2025-12-13T00:20:58.603739281+00:00 stderr F +  Reason: "SyncError",
2025-12-13T00:20:58.603739281+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`...,
2025-12-13T00:20:58.603739281+00:00 stderr F +  },
2025-12-13T00:20:58.603739281+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}},
2025-12-13T00:20:58.603739281+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:20:58.603739281+00:00 stderr F    ... // 56 identical elements
2025-12-13T00:20:58.603739281+00:00 stderr F    },
2025-12-13T00:20:58.603739281+00:00 stderr F    Version: "",
2025-12-13T00:20:58.603739281+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:20:58.603739281+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:20:58.603739281+00:00 stderr F   }
2025-12-13T00:20:58.802278189+00:00 stderr F W1213 00:20:58.802203 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:58.802278189+00:00 stderr F E1213 00:20:58.802252 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.125222033+00:00 stderr F E1213 00:20:59.124041 1 status.go:130] OAuthClientSyncDegraded FailedRegister Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.126340914+00:00 stderr F I1213 00:20:59.125687 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:20:59.126340914+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:20:59.126340914+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:20:59.126340914+00:00 stderr F    ... // 38 identical elements
2025-12-13T00:20:59.126340914+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}},
2025-12-13T00:20:59.126340914+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-12-13T00:20:59.126340914+00:00 stderr F -  {
2025-12-13T00:20:59.126340914+00:00 stderr F -  Type: "OAuthClientSyncDegraded",
2025-12-13T00:20:59.126340914+00:00 stderr F -  Status: "False",
2025-12-13T00:20:59.126340914+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC",
2025-12-13T00:20:59.126340914+00:00 stderr F -  },
2025-12-13T00:20:59.126340914+00:00 stderr F +  {
2025-12-13T00:20:59.126340914+00:00 stderr F +  Type: "OAuthClientSyncDegraded",
2025-12-13T00:20:59.126340914+00:00 stderr F +  Status: "True",
2025-12-13T00:20:59.126340914+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:59.124097433 +0000 UTC m=+466.015211235",
2025-12-13T00:20:59.126340914+00:00 stderr F +  Reason: "FailedRegister",
2025-12-13T00:20:59.126340914+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`,
2025-12-13T00:20:59.126340914+00:00 stderr F +  },
2025-12-13T00:20:59.126340914+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}},
2025-12-13T00:20:59.126340914+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-12-13T00:20:59.126340914+00:00 stderr F    ... // 19 identical elements
2025-12-13T00:20:59.126340914+00:00 stderr F    },
2025-12-13T00:20:59.126340914+00:00 stderr F    Version: "",
2025-12-13T00:20:59.126340914+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:20:59.126340914+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:20:59.126340914+00:00 stderr F   }
2025-12-13T00:20:59.133011014+00:00 stderr F I1213 00:20:59.131257 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:20:59.133011014+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:20:59.133011014+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:20:59.133011014+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...},
2025-12-13T00:20:59.133011014+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-12-13T00:20:59.133011014+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:20:59.133011014+00:00 stderr F -  {
2025-12-13T00:20:59.133011014+00:00 stderr F -  Type: "OAuthClientsControllerDegraded",
2025-12-13T00:20:59.133011014+00:00 stderr F -  Status: "False",
2025-12-13T00:20:59.133011014+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC",
2025-12-13T00:20:59.133011014+00:00 stderr F -  Reason: "AsExpected",
2025-12-13T00:20:59.133011014+00:00 stderr F -  },
2025-12-13T00:20:59.133011014+00:00 stderr F +  {
2025-12-13T00:20:59.133011014+00:00 stderr F +  Type: "OAuthClientsControllerDegraded",
2025-12-13T00:20:59.133011014+00:00 stderr F +  Status: "True",
2025-12-13T00:20:59.133011014+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:59.126536009 +0000 UTC m=+466.017649821",
2025-12-13T00:20:59.133011014+00:00 stderr F +  Reason: "SyncError",
2025-12-13T00:20:59.133011014+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`...,
2025-12-13T00:20:59.133011014+00:00 stderr F +  },
2025-12-13T00:20:59.133011014+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}},
2025-12-13T00:20:59.133011014+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:20:59.133011014+00:00 stderr F    ... // 56 identical elements
2025-12-13T00:20:59.133011014+00:00 stderr F    },
2025-12-13T00:20:59.133011014+00:00 stderr F    Version: "",
2025-12-13T00:20:59.133011014+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:20:59.133011014+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:20:59.133011014+00:00 stderr F   }
2025-12-13T00:20:59.202228302+00:00 stderr F W1213 00:20:59.202139 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.202228302+00:00 stderr F E1213 00:20:59.202188 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.843977129+00:00 stderr F E1213 00:20:59.843918 1 status.go:130] OAuthClientSyncDegraded FailedRegister Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.845528330+00:00 stderr F I1213 00:20:59.845498 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:20:59.845528330+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:20:59.845528330+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:20:59.845528330+00:00 stderr F    ... // 38 identical elements
2025-12-13T00:20:59.845528330+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}},
2025-12-13T00:20:59.845528330+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-12-13T00:20:59.845528330+00:00 stderr F -  {
2025-12-13T00:20:59.845528330+00:00 stderr F -  Type: "OAuthClientSyncDegraded",
2025-12-13T00:20:59.845528330+00:00 stderr F -  Status: "False",
2025-12-13T00:20:59.845528330+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC",
2025-12-13T00:20:59.845528330+00:00 stderr F -  },
2025-12-13T00:20:59.845528330+00:00 stderr F +  {
2025-12-13T00:20:59.845528330+00:00 stderr F +  Type: "OAuthClientSyncDegraded",
2025-12-13T00:20:59.845528330+00:00 stderr F +  Status: "True",
2025-12-13T00:20:59.845528330+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:59.843973089 +0000 UTC m=+466.735086901",
2025-12-13T00:20:59.845528330+00:00 stderr F +  Reason: "FailedRegister",
2025-12-13T00:20:59.845528330+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`,
2025-12-13T00:20:59.845528330+00:00 stderr F +  },
2025-12-13T00:20:59.845528330+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}},
2025-12-13T00:20:59.845528330+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-12-13T00:20:59.845528330+00:00 stderr F    ... // 19 identical elements
2025-12-13T00:20:59.845528330+00:00 stderr F    },
2025-12-13T00:20:59.845528330+00:00 stderr F    Version: "",
2025-12-13T00:20:59.845528330+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:20:59.845528330+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:20:59.845528330+00:00 stderr F   }
2025-12-13T00:20:59.847681469+00:00 stderr F I1213 00:20:59.847637 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:20:59.847681469+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:20:59.847681469+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:20:59.847681469+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...},
2025-12-13T00:20:59.847681469+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-12-13T00:20:59.847681469+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:20:59.847681469+00:00 stderr F -  {
2025-12-13T00:20:59.847681469+00:00 stderr F -  Type: "OAuthClientsControllerDegraded",
2025-12-13T00:20:59.847681469+00:00 stderr F -  Status: "False",
2025-12-13T00:20:59.847681469+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC",
2025-12-13T00:20:59.847681469+00:00 stderr F -  Reason: "AsExpected",
2025-12-13T00:20:59.847681469+00:00 stderr F -  },
2025-12-13T00:20:59.847681469+00:00 stderr F +  {
2025-12-13T00:20:59.847681469+00:00 stderr F +  Type: "OAuthClientsControllerDegraded",
2025-12-13T00:20:59.847681469+00:00 stderr F +  Status: "True",
2025-12-13T00:20:59.847681469+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:20:59.84623061 +0000 UTC m=+466.737344422",
2025-12-13T00:20:59.847681469+00:00 stderr F +  Reason: "SyncError",
2025-12-13T00:20:59.847681469+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`...,
2025-12-13T00:20:59.847681469+00:00 stderr F +  },
2025-12-13T00:20:59.847681469+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}},
2025-12-13T00:20:59.847681469+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:20:59.847681469+00:00 stderr F    ... // 56 identical elements
2025-12-13T00:20:59.847681469+00:00 stderr F    },
2025-12-13T00:20:59.847681469+00:00 stderr F    Version: "",
2025-12-13T00:20:59.847681469+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:20:59.847681469+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:20:59.847681469+00:00 stderr F   }
2025-12-13T00:20:59.848354596+00:00 stderr F W1213 00:20:59.848270 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.848354596+00:00 stderr F E1213 00:20:59.848307 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:01.130798043+00:00 stderr F E1213 00:21:01.130494 1 status.go:130] OAuthClientSyncDegraded FailedRegister Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:01.132694774+00:00 stderr F I1213 00:21:01.132636 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:21:01.132694774+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:21:01.132694774+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:21:01.132694774+00:00 stderr F    ... // 38 identical elements
2025-12-13T00:21:01.132694774+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}},
2025-12-13T00:21:01.132694774+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-12-13T00:21:01.132694774+00:00 stderr F -  {
2025-12-13T00:21:01.132694774+00:00 stderr F -  Type: "OAuthClientSyncDegraded",
2025-12-13T00:21:01.132694774+00:00 stderr F -  Status: "False",
2025-12-13T00:21:01.132694774+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC",
2025-12-13T00:21:01.132694774+00:00 stderr F -  },
2025-12-13T00:21:01.132694774+00:00 stderr F +  {
2025-12-13T00:21:01.132694774+00:00 stderr F +  Type: "OAuthClientSyncDegraded",
2025-12-13T00:21:01.132694774+00:00 stderr F +  Status: "True",
2025-12-13T00:21:01.132694774+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:21:01.130598098 +0000 UTC m=+468.021711910",
2025-12-13T00:21:01.132694774+00:00 stderr F +  Reason: "FailedRegister",
2025-12-13T00:21:01.132694774+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`,
2025-12-13T00:21:01.132694774+00:00 stderr F +  },
2025-12-13T00:21:01.132694774+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}},
2025-12-13T00:21:01.132694774+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-12-13T00:21:01.132694774+00:00 stderr F    ... // 19 identical elements
2025-12-13T00:21:01.132694774+00:00 stderr F    },
2025-12-13T00:21:01.132694774+00:00 stderr F    Version: "",
2025-12-13T00:21:01.132694774+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:21:01.132694774+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:21:01.132694774+00:00 stderr F   }
2025-12-13T00:21:01.136153217+00:00 stderr F I1213 00:21:01.136091 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:21:01.136153217+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:21:01.136153217+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:21:01.136153217+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...},
2025-12-13T00:21:01.136153217+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-12-13T00:21:01.136153217+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:21:01.136153217+00:00 stderr F -  {
2025-12-13T00:21:01.136153217+00:00 stderr F -  Type: "OAuthClientsControllerDegraded",
2025-12-13T00:21:01.136153217+00:00 stderr F -  Status: "False",
2025-12-13T00:21:01.136153217+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC",
2025-12-13T00:21:01.136153217+00:00 stderr F -  Reason: "AsExpected",
2025-12-13T00:21:01.136153217+00:00 stderr F -  },
2025-12-13T00:21:01.136153217+00:00 stderr F +  {
2025-12-13T00:21:01.136153217+00:00 stderr F +  Type: "OAuthClientsControllerDegraded",
2025-12-13T00:21:01.136153217+00:00 stderr F +  Status: "True",
2025-12-13T00:21:01.136153217+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:21:01.133604999 +0000 UTC m=+468.024718831",
2025-12-13T00:21:01.136153217+00:00 stderr F +  Reason: "SyncError",
2025-12-13T00:21:01.136153217+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`...,
2025-12-13T00:21:01.136153217+00:00 stderr F +  },
2025-12-13T00:21:01.136153217+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}},
2025-12-13T00:21:01.136153217+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:21:01.136153217+00:00 stderr F    ... // 56 identical elements
2025-12-13T00:21:01.136153217+00:00 stderr F    },
2025-12-13T00:21:01.136153217+00:00 stderr F    Version: "",
2025-12-13T00:21:01.136153217+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:21:01.136153217+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:21:01.136153217+00:00 stderr F   }
2025-12-13T00:21:01.137126393+00:00 stderr F W1213 00:21:01.137058 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:01.137126393+00:00 stderr F E1213 00:21:01.137105 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:03.700069753+00:00 stderr F E1213 00:21:03.699464 1 status.go:130] OAuthClientSyncDegraded FailedRegister Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:03.702037956+00:00 stderr F I1213 00:21:03.701977 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:21:03.702037956+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:21:03.702037956+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:21:03.702037956+00:00 stderr F    ... // 38 identical elements
2025-12-13T00:21:03.702037956+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}},
2025-12-13T00:21:03.702037956+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-12-13T00:21:03.702037956+00:00 stderr F -  {
2025-12-13T00:21:03.702037956+00:00 stderr F -  Type: "OAuthClientSyncDegraded",
2025-12-13T00:21:03.702037956+00:00 stderr F -  Status: "False",
2025-12-13T00:21:03.702037956+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC",
2025-12-13T00:21:03.702037956+00:00 stderr F -  },
2025-12-13T00:21:03.702037956+00:00 stderr F +  {
2025-12-13T00:21:03.702037956+00:00 stderr F +  Type: "OAuthClientSyncDegraded",
2025-12-13T00:21:03.702037956+00:00 stderr F +  Status: "True",
2025-12-13T00:21:03.702037956+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:21:03.700025102 +0000 UTC m=+470.591138934",
2025-12-13T00:21:03.702037956+00:00 stderr F +  Reason: "FailedRegister",
2025-12-13T00:21:03.702037956+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`,
2025-12-13T00:21:03.702037956+00:00 stderr F +  },
2025-12-13T00:21:03.702037956+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}},
2025-12-13T00:21:03.702037956+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-12-13T00:21:03.702037956+00:00 stderr F    ... // 19 identical elements
2025-12-13T00:21:03.702037956+00:00 stderr F    },
2025-12-13T00:21:03.702037956+00:00 stderr F    Version: "",
2025-12-13T00:21:03.702037956+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:21:03.702037956+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:21:03.702037956+00:00 stderr F   }
2025-12-13T00:21:03.704846632+00:00 stderr F I1213 00:21:03.704791 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:21:03.704846632+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:21:03.704846632+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:21:03.704846632+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...},
2025-12-13T00:21:03.704846632+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-12-13T00:21:03.704846632+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:21:03.704846632+00:00 stderr F -  {
2025-12-13T00:21:03.704846632+00:00 stderr F -  Type: "OAuthClientsControllerDegraded",
2025-12-13T00:21:03.704846632+00:00 stderr F -  Status: "False",
2025-12-13T00:21:03.704846632+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC",
2025-12-13T00:21:03.704846632+00:00 stderr F -  Reason: "AsExpected",
2025-12-13T00:21:03.704846632+00:00 stderr F -  },
2025-12-13T00:21:03.704846632+00:00 stderr F +  {
2025-12-13T00:21:03.704846632+00:00 stderr F +  Type: "OAuthClientsControllerDegraded",
2025-12-13T00:21:03.704846632+00:00 stderr F +  Status: "True",
2025-12-13T00:21:03.704846632+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:21:03.703123075 +0000 UTC m=+470.594236887",
2025-12-13T00:21:03.704846632+00:00 stderr F +  Reason: "SyncError",
2025-12-13T00:21:03.704846632+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`...,
2025-12-13T00:21:03.704846632+00:00 stderr F +  },
2025-12-13T00:21:03.704846632+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}},
2025-12-13T00:21:03.704846632+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:21:03.704846632+00:00 stderr F    ... // 56 identical elements
2025-12-13T00:21:03.704846632+00:00 stderr F    },
2025-12-13T00:21:03.704846632+00:00 stderr F    Version: "",
2025-12-13T00:21:03.704846632+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:21:03.704846632+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:21:03.704846632+00:00 stderr F   }
2025-12-13T00:21:03.705746446+00:00 stderr F W1213 00:21:03.705690 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:03.705746446+00:00 stderr F E1213 00:21:03.705731 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:08.827680981+00:00 stderr F E1213 00:21:08.827590 1 status.go:130] OAuthClientSyncDegraded FailedRegister Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:08.829884700+00:00 stderr F I1213 00:21:08.829823 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:21:08.829884700+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:21:08.829884700+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:21:08.829884700+00:00 stderr F    ... // 38 identical elements
2025-12-13T00:21:08.829884700+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}},
2025-12-13T00:21:08.829884700+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-12-13T00:21:08.829884700+00:00 stderr F -  {
2025-12-13T00:21:08.829884700+00:00 stderr F -  Type: "OAuthClientSyncDegraded",
2025-12-13T00:21:08.829884700+00:00 stderr F -  Status: "False",
2025-12-13T00:21:08.829884700+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC",
2025-12-13T00:21:08.829884700+00:00 stderr F -  },
2025-12-13T00:21:08.829884700+00:00 stderr F +  {
2025-12-13T00:21:08.829884700+00:00 stderr F +  Type: "OAuthClientSyncDegraded",
2025-12-13T00:21:08.829884700+00:00 stderr F +  Status: "True",
2025-12-13T00:21:08.829884700+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:21:08.827628089 +0000 UTC m=+475.718741911",
2025-12-13T00:21:08.829884700+00:00 stderr F +  Reason: "FailedRegister",
2025-12-13T00:21:08.829884700+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`,
2025-12-13T00:21:08.829884700+00:00 stderr F +  },
2025-12-13T00:21:08.829884700+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}},
2025-12-13T00:21:08.829884700+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}},
2025-12-13T00:21:08.829884700+00:00 stderr F    ... // 19 identical elements
2025-12-13T00:21:08.829884700+00:00 stderr F    },
2025-12-13T00:21:08.829884700+00:00 stderr F    Version: "",
2025-12-13T00:21:08.829884700+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:21:08.829884700+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:21:08.829884700+00:00 stderr F   }
2025-12-13T00:21:08.834162856+00:00 stderr F I1213 00:21:08.834108 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:21:08.834162856+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:21:08.834162856+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:21:08.834162856+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...},
2025-12-13T00:21:08.834162856+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}},
2025-12-13T00:21:08.834162856+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:21:08.834162856+00:00 stderr F -  {
2025-12-13T00:21:08.834162856+00:00 stderr F -  Type: "OAuthClientsControllerDegraded",
2025-12-13T00:21:08.834162856+00:00 stderr F -  Status: "False",
2025-12-13T00:21:08.834162856+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC",
2025-12-13T00:21:08.834162856+00:00 stderr F -  Reason: "AsExpected",
2025-12-13T00:21:08.834162856+00:00 stderr F -  },
2025-12-13T00:21:08.834162856+00:00 stderr F +  {
2025-12-13T00:21:08.834162856+00:00 stderr F +  Type: "OAuthClientsControllerDegraded",
2025-12-13T00:21:08.834162856+00:00 stderr F +  Status: "True",
2025-12-13T00:21:08.834162856+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:21:08.830835406 +0000 UTC m=+475.721949248",
2025-12-13T00:21:08.834162856+00:00 stderr F +  Reason: "SyncError",
2025-12-13T00:21:08.834162856+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`...,
2025-12-13T00:21:08.834162856+00:00 stderr F +  },
2025-12-13T00:21:08.834162856+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}},
2025-12-13T00:21:08.834162856+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}},
2025-12-13T00:21:08.834162856+00:00 stderr F    ... // 56 identical elements
2025-12-13T00:21:08.834162856+00:00 stderr F    },
2025-12-13T00:21:08.834162856+00:00 stderr F    Version: "",
2025-12-13T00:21:08.834162856+00:00 stderr F    ReadyReplicas: 1,
2025-12-13T00:21:08.834162856+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}},
2025-12-13T00:21:08.834162856+00:00 stderr F   }
2025-12-13T00:21:08.835154842+00:00 stderr F W1213 00:21:08.835095 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:08.835154842+00:00 stderr F E1213 00:21:08.835144 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:19.077955991+00:00 stderr F E1213 00:21:19.077024 1 status.go:130] OAuthClientSyncDegraded FailedRegister Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:19.080080498+00:00 stderr F I1213 00:21:19.080046 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{
2025-12-13T00:21:19.080080498+00:00 stderr F    ObservedGeneration: 1,
2025-12-13T00:21:19.080080498+00:00 stderr F    Conditions: []v1.OperatorCondition{
2025-12-13T00:21:19.080080498+00:00 stderr F    ... // 38 identical elements
2025-12-13T00:21:19.080080498+00:00 stderr F    {Type: "ConfigMapSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:40 +0000 UTC"}},
2025-12-13T00:21:19.080080498+00:00 stderr F    {Type: "ServiceCASyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:06 +0000 UTC"}},
2025-12-13T00:21:19.080080498+00:00 stderr F -  {
2025-12-13T00:21:19.080080498+00:00 stderr F -  Type: "OAuthClientSyncDegraded",
2025-12-13T00:21:19.080080498+00:00 stderr F -  Status: "False",
2025-12-13T00:21:19.080080498+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:10 +0000 UTC",
2025-12-13T00:21:19.080080498+00:00 stderr F -  },
2025-12-13T00:21:19.080080498+00:00 stderr F +  {
2025-12-13T00:21:19.080080498+00:00 stderr F +  Type: "OAuthClientSyncDegraded",
2025-12-13T00:21:19.080080498+00:00 stderr F +  Status: "True",
2025-12-13T00:21:19.080080498+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:21:19.078096005 +0000 UTC m=+485.969209837",
2025-12-13T00:21:19.080080498+00:00 stderr F +  Reason: "FailedRegister",
2025-12-13T00:21:19.080080498+00:00 stderr F +  Message: `Get "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients/console": dial tcp 10.217.4.1:443: connect: connection refused`,
2025-12-13T00:21:19.080080498+00:00 stderr F +  },
2025-12-13T00:21:19.080080498+00:00 stderr F    {Type: "OAuthClientSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:09 +0000 UTC"}},
2025-12-13T00:21:19.080080498+00:00 stderr F    {Type: "RedirectServiceSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:54:13 +0000 UTC"}}, 2025-12-13T00:21:19.080080498+00:00 stderr F    ... // 19 identical elements 2025-12-13T00:21:19.080080498+00:00 stderr F    }, 2025-12-13T00:21:19.080080498+00:00 stderr F    Version: "", 2025-12-13T00:21:19.080080498+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:21:19.080080498+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:21:19.080080498+00:00 stderr F   } 2025-12-13T00:21:19.082971276+00:00 stderr F I1213 00:21:19.082911 1 helpers.go:201] Operator status changed:   &v1.OperatorStatus{ 2025-12-13T00:21:19.082971276+00:00 stderr F    ObservedGeneration: 1, 2025-12-13T00:21:19.082971276+00:00 stderr F    Conditions: []v1.OperatorCondition{ 2025-12-13T00:21:19.082971276+00:00 stderr F    {Type: "UnsupportedConfigOverridesUpgradeable", Status: "True", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}, Reason: "NoUnsupportedConfigOverrides", ...}, 2025-12-13T00:21:19.082971276+00:00 stderr F    {Type: "RouteHealthDegraded", Status: "False", LastTransitionTime: {Time: s"2025-08-13 20:08:11 +0000 UTC"}}, 2025-12-13T00:21:19.082971276+00:00 stderr F    {Type: "RouteHealthProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:21:19.082971276+00:00 stderr F -  { 2025-12-13T00:21:19.082971276+00:00 stderr F -  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:21:19.082971276+00:00 stderr F -  Status: "False", 2025-12-13T00:21:19.082971276+00:00 stderr F -  LastTransitionTime: s"2025-08-13 20:00:12 +0000 UTC", 2025-12-13T00:21:19.082971276+00:00 stderr F -  Reason: "AsExpected", 2025-12-13T00:21:19.082971276+00:00 stderr F -  }, 
2025-12-13T00:21:19.082971276+00:00 stderr F +  { 2025-12-13T00:21:19.082971276+00:00 stderr F +  Type: "OAuthClientsControllerDegraded", 2025-12-13T00:21:19.082971276+00:00 stderr F +  Status: "True", 2025-12-13T00:21:19.082971276+00:00 stderr F +  LastTransitionTime: s"2025-12-13 00:21:19.081339362 +0000 UTC m=+485.972453174", 2025-12-13T00:21:19.082971276+00:00 stderr F +  Reason: "SyncError", 2025-12-13T00:21:19.082971276+00:00 stderr F +  Message: `Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection`..., 2025-12-13T00:21:19.082971276+00:00 stderr F +  }, 2025-12-13T00:21:19.082971276+00:00 stderr F    {Type: "DownloadsDeploymentSyncDegraded", Status: "False", LastTransitionTime: {Time: s"2024-06-27 13:29:41 +0000 UTC"}}, 2025-12-13T00:21:19.082971276+00:00 stderr F    {Type: "DownloadsDeploymentSyncProgressing", Status: "False", LastTransitionTime: {Time: s"2024-06-26 12:53:43 +0000 UTC"}}, 2025-12-13T00:21:19.082971276+00:00 stderr F    ... 
// 56 identical elements 2025-12-13T00:21:19.082971276+00:00 stderr F    }, 2025-12-13T00:21:19.082971276+00:00 stderr F    Version: "", 2025-12-13T00:21:19.082971276+00:00 stderr F    ReadyReplicas: 1, 2025-12-13T00:21:19.082971276+00:00 stderr F    Generations: {{Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "downloads", ...}, {Group: "apps", Resource: "deployments", Namespace: "openshift-console", Name: "console", ...}}, 2025-12-13T00:21:19.082971276+00:00 stderr F   } 2025-12-13T00:21:19.083877511+00:00 stderr F W1213 00:21:19.083829 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:19.083877511+00:00 stderr F E1213 00:21:19.083871 1 base_controller.go:268] OAuthClientsController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/consoles/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:48.354417390+00:00 stderr F I1213 00:21:48.353793 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:53.691677154+00:00 stderr F I1213 00:21:53.691345 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:54.311479851+00:00 stderr F I1213 00:21:54.311432 1 reflector.go:351] Caches populated for *v1.ConsolePlugin from github.com/openshift/client-go/console/informers/externalversions/factory.go:125 2025-12-13T00:21:56.100874366+00:00 stderr F I1213 00:21:56.100385 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/1.log
2025-12-13T00:11:15.297292996+00:00 stdout F Sat Dec 13 00:11:15 UTC 2025 2025-12-13T00:14:22.697903619+00:00 stdout F 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter/0.log
2025-08-13T19:51:17.827657214+00:00 stdout F Wed Aug 13 19:51:17 UTC 2025 2025-08-13T19:51:34.627632491+00:00 stderr F time="2025-08-13T19:51:34Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded" 2025-08-13T19:51:35.128202382+00:00 stdout F 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/0.log
2025-08-13T20:11:02.960889364+00:00 stderr F I0813 20:11:02.960329 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:11:02.962080249+00:00 stderr F I0813 20:11:02.961753 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.16.0-202406131906.p0.g1432fe0.assembly.stream.el9-1432fe0) 2025-08-13T20:11:02.976121501+00:00 stderr F I0813 20:11:02.976055 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3deb112ca908d86a8b7f07feb4e0da8204aab510e2799d2dccdab5e5905d1b24" 2025-08-13T20:11:02.976121501+00:00 stderr F I0813 20:11:02.976091 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f69708d7711c9fdc19d1b60591f04a3061fe1f796853e2daea9edea688b2e086" 2025-08-13T20:11:02.976374458+00:00 stderr F I0813 20:11:02.976349 1 standalone_apiserver.go:105] Started health checks at 0.0.0.0:8443 2025-08-13T20:11:02.977037737+00:00 stderr F I0813 20:11:02.976985 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers... 
2025-08-13T20:11:03.001296893+00:00 stderr F I0813 20:11:03.000593 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager/openshift-master-controllers 2025-08-13T20:11:03.001400006+00:00 stderr F I0813 20:11:03.001133 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager", Name:"openshift-master-controllers", UID:"05722deb-fc4c-4763-8689-9fea6b1f7ec9", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"33358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' controller-manager-778975cc4f-x5vcf became leader 2025-08-13T20:11:03.004702171+00:00 stderr F I0813 20:11:03.004599 1 controller_manager.go:145] Starting "openshift.io/serviceaccount" 2025-08-13T20:11:03.004702171+00:00 stderr F I0813 20:11:03.004686 1 serviceaccount.go:16] openshift.io/serviceaccount: no managed names specified 2025-08-13T20:11:03.004732031+00:00 stderr F W0813 20:11:03.004699 1 controller_manager.go:152] Skipping "openshift.io/serviceaccount" 2025-08-13T20:11:03.004732031+00:00 stderr F I0813 20:11:03.004708 1 controller_manager.go:145] Starting "openshift.io/origin-namespace" 2025-08-13T20:11:03.012836514+00:00 stderr F I0813 20:11:03.012692 1 controller_manager.go:155] Started "openshift.io/origin-namespace" 2025-08-13T20:11:03.012836514+00:00 stderr F I0813 20:11:03.012731 1 controller_manager.go:145] Starting "openshift.io/image-import" 2025-08-13T20:11:03.017441236+00:00 stderr F I0813 20:11:03.017343 1 imagestream_controller.go:66] Starting image stream controller 2025-08-13T20:11:03.022117020+00:00 stderr F I0813 20:11:03.021134 1 controller_manager.go:155] Started "openshift.io/image-import" 2025-08-13T20:11:03.022117020+00:00 stderr F I0813 20:11:03.021241 1 controller_manager.go:145] Starting "openshift.io/templateinstancefinalizer" 2025-08-13T20:11:03.022117020+00:00 stderr F I0813 20:11:03.021350 1 scheduled_image_controller.go:68] Starting scheduled import controller 2025-08-13T20:11:03.028759010+00:00 
stderr F I0813 20:11:03.028660 1 controller_manager.go:155] Started "openshift.io/templateinstancefinalizer" 2025-08-13T20:11:03.028759010+00:00 stderr F I0813 20:11:03.028734 1 controller_manager.go:145] Starting "openshift.io/unidling" 2025-08-13T20:11:03.029112391+00:00 stderr F I0813 20:11:03.028885 1 templateinstance_finalizer.go:189] TemplateInstanceFinalizer controller waiting for cache sync 2025-08-13T20:11:03.037609334+00:00 stderr F I0813 20:11:03.037503 1 controller_manager.go:155] Started "openshift.io/unidling" 2025-08-13T20:11:03.038359756+00:00 stderr F I0813 20:11:03.038263 1 controller_manager.go:145] Starting "openshift.io/builder-serviceaccount" 2025-08-13T20:11:03.041640580+00:00 stderr F I0813 20:11:03.041595 1 controller_manager.go:155] Started "openshift.io/builder-serviceaccount" 2025-08-13T20:11:03.041756663+00:00 stderr F I0813 20:11:03.041706 1 controller_manager.go:145] Starting "openshift.io/deployer-serviceaccount" 2025-08-13T20:11:03.042480464+00:00 stderr F I0813 20:11:03.042400 1 serviceaccounts_controller.go:111] "Starting service account controller" 2025-08-13T20:11:03.042550836+00:00 stderr F I0813 20:11:03.042535 1 shared_informer.go:311] Waiting for caches to sync for service account 2025-08-13T20:11:03.045358456+00:00 stderr F I0813 20:11:03.045299 1 controller_manager.go:155] Started "openshift.io/deployer-serviceaccount" 2025-08-13T20:11:03.045358456+00:00 stderr F I0813 20:11:03.045337 1 controller_manager.go:145] Starting "openshift.io/deploymentconfig" 2025-08-13T20:11:03.045713826+00:00 stderr F I0813 20:11:03.045604 1 serviceaccounts_controller.go:111] "Starting service account controller" 2025-08-13T20:11:03.045736167+00:00 stderr F I0813 20:11:03.045724 1 shared_informer.go:311] Waiting for caches to sync for service account 2025-08-13T20:11:03.051008128+00:00 stderr F I0813 20:11:03.050886 1 controller_manager.go:155] Started "openshift.io/deploymentconfig" 2025-08-13T20:11:03.051008128+00:00 stderr F I0813 
20:11:03.050944 1 controller_manager.go:145] Starting "openshift.io/templateinstance" 2025-08-13T20:11:03.051386649+00:00 stderr F I0813 20:11:03.051242 1 factory.go:78] Starting deploymentconfig controller 2025-08-13T20:11:03.065455903+00:00 stderr F I0813 20:11:03.065352 1 controller_manager.go:155] Started "openshift.io/templateinstance" 2025-08-13T20:11:03.065455903+00:00 stderr F I0813 20:11:03.065397 1 controller_manager.go:145] Starting "openshift.io/serviceaccount-pull-secrets" 2025-08-13T20:11:03.068100168+00:00 stderr F I0813 20:11:03.068064 1 controller_manager.go:155] Started "openshift.io/serviceaccount-pull-secrets" 2025-08-13T20:11:03.068212132+00:00 stderr F I0813 20:11:03.068186 1 controller_manager.go:145] Starting "openshift.io/deployer-rolebindings" 2025-08-13T20:11:03.068317465+00:00 stderr F I0813 20:11:03.068249 1 registry_urls_observation_controller.go:139] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_urls" 2025-08-13T20:11:03.068317465+00:00 stderr F I0813 20:11:03.068310 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_urls 2025-08-13T20:11:03.068393337+00:00 stderr F I0813 20:11:03.068341 1 keyid_observation_controller.go:164] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_kids" 2025-08-13T20:11:03.068393337+00:00 stderr F I0813 20:11:03.068373 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_kids 2025-08-13T20:11:03.068410747+00:00 stderr F I0813 20:11:03.068401 1 legacy_token_secret_controller.go:109] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret" 2025-08-13T20:11:03.068422458+00:00 stderr F I0813 20:11:03.068408 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_legacy-token-secret 2025-08-13T20:11:03.068434378+00:00 stderr F I0813 20:11:03.068426 1 
image_pull_secret_controller.go:301] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret" 2025-08-13T20:11:03.068446038+00:00 stderr F I0813 20:11:03.068432 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_image-pull-secret 2025-08-13T20:11:03.068457779+00:00 stderr F I0813 20:11:03.068448 1 legacy_image_pull_secret_controller.go:131] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret" 2025-08-13T20:11:03.068457779+00:00 stderr F I0813 20:11:03.068454 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret 2025-08-13T20:11:03.068704496+00:00 stderr F I0813 20:11:03.068203 1 service_account_controller.go:336] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_service-account" 2025-08-13T20:11:03.068704496+00:00 stderr F I0813 20:11:03.068622 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_service-account 2025-08-13T20:11:03.076382426+00:00 stderr F I0813 20:11:03.076273 1 controller_manager.go:155] Started "openshift.io/deployer-rolebindings" 2025-08-13T20:11:03.076555351+00:00 stderr F I0813 20:11:03.076483 1 controller_manager.go:145] Starting "openshift.io/image-signature-import" 2025-08-13T20:11:03.077265761+00:00 stderr F I0813 20:11:03.076414 1 defaultrolebindings.go:154] Starting DeployerRoleBindingController 2025-08-13T20:11:03.077265761+00:00 stderr F I0813 20:11:03.077204 1 shared_informer.go:311] Waiting for caches to sync for DeployerRoleBindingController 2025-08-13T20:11:03.080546645+00:00 stderr F I0813 20:11:03.080439 1 controller_manager.go:155] Started "openshift.io/image-signature-import" 2025-08-13T20:11:03.080546645+00:00 stderr F I0813 20:11:03.080472 1 controller_manager.go:145] Starting "openshift.io/deployer" 
2025-08-13T20:11:03.099499309+00:00 stderr F I0813 20:11:03.094601 1 controller_manager.go:155] Started "openshift.io/deployer" 2025-08-13T20:11:03.099499309+00:00 stderr F I0813 20:11:03.094726 1 controller_manager.go:145] Starting "openshift.io/image-trigger" 2025-08-13T20:11:03.099499309+00:00 stderr F I0813 20:11:03.094683 1 factory.go:73] Starting deployer controller 2025-08-13T20:11:03.113845090+00:00 stderr F I0813 20:11:03.113658 1 controller_manager.go:155] Started "openshift.io/image-trigger" 2025-08-13T20:11:03.113946443+00:00 stderr F I0813 20:11:03.113871 1 image_trigger_controller.go:229] Starting trigger controller 2025-08-13T20:11:03.114174979+00:00 stderr F I0813 20:11:03.114046 1 controller_manager.go:145] Starting "openshift.io/image-puller-rolebindings" 2025-08-13T20:11:03.118686329+00:00 stderr F I0813 20:11:03.118620 1 controller_manager.go:155] Started "openshift.io/image-puller-rolebindings" 2025-08-13T20:11:03.118686329+00:00 stderr F W0813 20:11:03.118641 1 controller_manager.go:142] "openshift.io/default-rolebindings" is disabled 2025-08-13T20:11:03.118686329+00:00 stderr F I0813 20:11:03.118648 1 controller_manager.go:145] Starting "openshift.io/build" 2025-08-13T20:11:03.119373948+00:00 stderr F I0813 20:11:03.119263 1 defaultrolebindings.go:154] Starting ImagePullerRoleBindingController 2025-08-13T20:11:03.119373948+00:00 stderr F I0813 20:11:03.119301 1 shared_informer.go:311] Waiting for caches to sync for ImagePullerRoleBindingController 2025-08-13T20:11:03.126268746+00:00 stderr F I0813 20:11:03.126188 1 controller_manager.go:155] Started "openshift.io/build" 2025-08-13T20:11:03.126268746+00:00 stderr F I0813 20:11:03.126228 1 controller_manager.go:145] Starting "openshift.io/build-config-change" 2025-08-13T20:11:03.135650695+00:00 stderr F I0813 20:11:03.135601 1 controller_manager.go:155] Started "openshift.io/build-config-change" 2025-08-13T20:11:03.135730517+00:00 stderr F I0813 20:11:03.135712 1 controller_manager.go:145] 
Starting "openshift.io/builder-rolebindings" 2025-08-13T20:11:03.139568287+00:00 stderr F I0813 20:11:03.139490 1 controller_manager.go:155] Started "openshift.io/builder-rolebindings" 2025-08-13T20:11:03.139568287+00:00 stderr F I0813 20:11:03.139538 1 controller_manager.go:157] Started Origin Controllers 2025-08-13T20:11:03.139651340+00:00 stderr F I0813 20:11:03.139628 1 defaultrolebindings.go:154] Starting BuilderRoleBindingController 2025-08-13T20:11:03.139741142+00:00 stderr F I0813 20:11:03.139723 1 shared_informer.go:311] Waiting for caches to sync for BuilderRoleBindingController 2025-08-13T20:11:03.169260069+00:00 stderr F I0813 20:11:03.169193 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.173518971+00:00 stderr F I0813 20:11:03.173458 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.176474546+00:00 stderr F I0813 20:11:03.176362 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.177665040+00:00 stderr F I0813 20:11:03.177622 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.180083009+00:00 stderr F I0813 20:11:03.180003 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.181215371+00:00 stderr F I0813 20:11:03.181106 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.181507830+00:00 stderr F I0813 20:11:03.181444 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.182751745+00:00 stderr F I0813 20:11:03.182712 1 reflector.go:351] Caches populated for 
*v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.185134084+00:00 stderr F I0813 20:11:03.183022 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.190246390+00:00 stderr F I0813 20:11:03.190190 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.194325617+00:00 stderr F I0813 20:11:03.194225 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.198378524+00:00 stderr F W0813 20:11:03.198291 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:11:03.199765823+00:00 stderr F I0813 20:11:03.199544 1 reflector.go:351] Caches populated for *v1.TemplateInstance from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.202071779+00:00 stderr F I0813 20:11:03.201023 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.214545197+00:00 stderr F I0813 20:11:03.210094 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.221080394+00:00 stderr F I0813 20:11:03.220985 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.222953158+00:00 stderr F I0813 20:11:03.222883 1 reflector.go:351] Caches populated for *v1.DeploymentConfig from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.223201495+00:00 stderr F I0813 20:11:03.223176 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.224708568+00:00 stderr F I0813 20:11:03.224663 1 reflector.go:351] Caches populated for 
*v1.Service from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.227238901+00:00 stderr F I0813 20:11:03.227174 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.227449147+00:00 stderr F I0813 20:11:03.227385 1 reflector.go:351] Caches populated for *v1.BuildConfig from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.231337349+00:00 stderr F I0813 20:11:03.231271 1 templateinstance_finalizer.go:194] Starting TemplateInstanceFinalizer controller 2025-08-13T20:11:03.234503929+00:00 stderr F I0813 20:11:03.234457 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.239160863+00:00 stderr F I0813 20:11:03.238714 1 buildconfig_controller.go:212] Starting buildconfig controller 2025-08-13T20:11:03.255859232+00:00 stderr F I0813 20:11:03.255749 1 factory.go:85] deploymentconfig controller caches are synced. Starting workers. 
2025-08-13T20:11:03.258393844+00:00 stderr F I0813 20:11:03.258294 1 shared_informer.go:318] Caches are synced for service account 2025-08-13T20:11:03.269350588+00:00 stderr F I0813 20:11:03.269203 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.269546314+00:00 stderr F I0813 20:11:03.269467 1 shared_informer.go:318] Caches are synced for service account 2025-08-13T20:11:03.279697525+00:00 stderr F I0813 20:11:03.279610 1 templateinstance_controller.go:297] Starting TemplateInstance controller 2025-08-13T20:11:03.279697525+00:00 stderr F I0813 20:11:03.279670 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_urls 2025-08-13T20:11:03.279753797+00:00 stderr F I0813 20:11:03.279691 1 registry_urls_observation_controller.go:146] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_urls" 2025-08-13T20:11:03.280877749+00:00 stderr F I0813 20:11:03.280840 1 shared_informer.go:318] Caches are synced for DeployerRoleBindingController 2025-08-13T20:11:03.282687271+00:00 stderr F W0813 20:11:03.282655 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:11:03.286173031+00:00 stderr F I0813 20:11:03.286089 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-08-13T20:11:03.300431929+00:00 stderr F I0813 20:11:03.300296 1 factory.go:80] Deployer controller caches are synced. Starting workers. 
2025-08-13T20:11:03.323603564+00:00 stderr F I0813 20:11:03.323502 1 shared_informer.go:318] Caches are synced for ImagePullerRoleBindingController
2025-08-13T20:11:03.366003709+00:00 stderr F I0813 20:11:03.365879 1 shared_informer.go:318] Caches are synced for BuilderRoleBindingController
2025-08-13T20:11:03.371257520+00:00 stderr F I0813 20:11:03.369556 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.392538160+00:00 stderr F I0813 20:11:03.392303 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.500976489+00:00 stderr F I0813 20:11:03.497362 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.514602170+00:00 stderr F I0813 20:11:03.513510 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:11:03.527231062+00:00 stderr F I0813 20:11:03.526879 1 build_controller.go:503] Starting build controller
2025-08-13T20:11:03.527231062+00:00 stderr F I0813 20:11:03.526940 1 build_controller.go:505] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000
2025-08-13T20:11:03.568669160+00:00 stderr F I0813 20:11:03.568575 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret
2025-08-13T20:11:03.568727282+00:00 stderr F I0813 20:11:03.568691 1 legacy_image_pull_secret_controller.go:138] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret"
2025-08-13T20:11:03.568727282+00:00 stderr F I0813 20:11:03.568720 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_service-account
2025-08-13T20:11:03.568827745+00:00 stderr F I0813 20:11:03.568734 1 service_account_controller.go:343] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_service-account"
2025-08-13T20:11:03.569076492+00:00 stderr F I0813 20:11:03.568993 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_kids
2025-08-13T20:11:03.569076492+00:00 stderr F I0813 20:11:03.569026 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_legacy-token-secret
2025-08-13T20:11:03.569076492+00:00 stderr F I0813 20:11:03.569044 1 legacy_token_secret_controller.go:116] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret"
2025-08-13T20:11:03.569076492+00:00 stderr F I0813 20:11:03.569038 1 keyid_observation_controller.go:172] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_kids"
2025-08-13T20:11:03.569341419+00:00 stderr F I0813 20:11:03.569012 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_image-pull-secret
2025-08-13T20:11:03.569357520+00:00 stderr F I0813 20:11:03.569339 1 image_pull_secret_controller.go:327] Waiting for service account token signing cert to be observed
2025-08-13T20:11:03.569367700+00:00 stderr F I0813 20:11:03.569352 1 image_pull_secret_controller.go:313] Waiting for image registry urls to be observed
2025-08-13T20:11:03.569367700+00:00 stderr F I0813 20:11:03.569356 1 image_pull_secret_controller.go:330] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"]
2025-08-13T20:11:03.569382131+00:00 stderr F I0813 20:11:03.569370 1 image_pull_secret_controller.go:317] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"]
2025-08-13T20:11:03.569448183+00:00 stderr F I0813 20:11:03.569389 1 image_pull_secret_controller.go:374] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret"
2025-08-13T20:19:56.300590604+00:00 stderr F W0813 20:19:56.295341 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:20:24.817863086+00:00 stderr F I0813 20:20:24.810877 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:21:03.289154765+00:00 stderr F I0813 20:21:03.287183 1 image_pull_secret_controller.go:353] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"]
2025-08-13T20:21:03.519750035+00:00 stderr F I0813 20:21:03.519604 1 image_pull_secret_controller.go:363] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"]
2025-08-13T20:25:55.305263486+00:00 stderr F W0813 20:25:55.303637 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:29:57.085327979+00:00 stderr F I0813 20:29:57.083719 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-08-13T20:31:03.288080738+00:00 stderr F I0813 20:31:03.284011 1 image_pull_secret_controller.go:353] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"]
2025-08-13T20:31:03.518212103+00:00 stderr F I0813 20:31:03.518089 1 image_pull_secret_controller.go:363] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"]
2025-08-13T20:34:21.325502573+00:00 stderr F W0813 20:34:21.324495 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:41:03.284951366+00:00 stderr F I0813 20:41:03.282391 1 image_pull_secret_controller.go:353] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"]
2025-08-13T20:41:03.572943138+00:00 stderr F I0813 20:41:03.572884 1 image_pull_secret_controller.go:363] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"]
2025-08-13T20:42:35.632999370+00:00 stderr F I0813 20:42:35.631768 1 keyid_observation_controller.go:174] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_kids"
2025-08-13T20:42:35.634115802+00:00 stderr F I0813 20:42:35.633951 1 project_finalizer_controller.go:74] Shutting down
2025-08-13T20:42:35.634352309+00:00 stderr F I0813 20:42:35.634323 1 legacy_token_secret_controller.go:118] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret"
2025-08-13T20:42:35.634432091+00:00 stderr F I0813 20:42:35.634416 1 service_account_controller.go:345] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_service-account"
2025-08-13T20:42:35.634506093+00:00 stderr F I0813 20:42:35.634490 1 legacy_image_pull_secret_controller.go:140] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret"
2025-08-13T20:42:35.635735189+00:00 stderr F I0813 20:42:35.635662 1 serviceaccounts_controller.go:123] "Shutting down service account controller"
2025-08-13T20:42:35.637739937+00:00 stderr F I0813 20:42:35.636859 1 scheduled_image_controller.go:81] Shutting down image stream controller
2025-08-13T20:42:35.637739937+00:00 stderr F I0813 20:42:35.636904 1 imagestream_controller.go:81] Shutting down image stream controller
2025-08-13T20:42:35.637739937+00:00 stderr F I0813 20:42:35.636963 1 image_trigger_controller.go:245] Shutting down trigger controller
2025-08-13T20:42:35.638701804+00:00 stderr F I0813 20:42:35.638352 1 image_pull_secret_controller.go:376] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret"
2025-08-13T20:42:35.638701804+00:00 stderr F I0813 20:42:35.638478 1 serviceaccounts_controller.go:123] "Shutting down service account controller"
2025-08-13T20:42:35.638701804+00:00 stderr F I0813 20:42:35.638527 1 buildconfig_controller.go:219] Shutting down buildconfig controller
2025-08-13T20:42:35.638753276+00:00 stderr F I0813 20:42:35.638736 1 build_controller.go:521] Shutting down build controller
2025-08-13T20:42:35.638896900+00:00 stderr F I0813 20:42:35.638876 1 signature_import_controller.go:81] Shutting down
2025-08-13T20:42:35.638987433+00:00 stderr F I0813 20:42:35.638942 1 factory.go:88] Shutting down deployer controller
2025-08-13T20:42:35.639000763+00:00 stderr F I0813 20:42:35.638986 1 defaultrolebindings.go:166] Shutting down DeployerRoleBindingController
2025-08-13T20:42:35.639026874+00:00 stderr F I0813 20:42:35.634119 1 registry_urls_observation_controller.go:148] "Shutting down controller" name="openshift.io/internal-image-registry-pull-secrets_urls"
2025-08-13T20:42:35.639449976+00:00 stderr F I0813 20:42:35.638947 1 defaultrolebindings.go:166] Shutting down BuilderRoleBindingController
2025-08-13T20:42:35.639600830+00:00 stderr F I0813 20:42:35.639579 1 templateinstance_finalizer.go:201] Stopping TemplateInstanceFinalizer controller
2025-08-13T20:42:35.639742184+00:00 stderr F I0813 20:42:35.639722 1 defaultrolebindings.go:166] Shutting down ImagePullerRoleBindingController
2025-08-13T20:42:35.640471215+00:00 stderr F I0813 20:42:35.640011 1 factory.go:95] Shutting down deploymentconfig controller
2025-08-13T20:42:35.652143072+00:00 stderr F W0813 20:42:35.649302 1 controller_manager.go:107] Controller Manager received stop signal: leaderelection lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager/1.log

2025-12-13T00:13:15.565002720+00:00 stderr F I1213 00:13:15.562460 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-12-13T00:13:15.588971906+00:00 stderr F I1213 00:13:15.588556 1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (4.16.0-202406131906.p0.g1432fe0.assembly.stream.el9-1432fe0)
2025-12-13T00:13:15.612510997+00:00 stderr F I1213 00:13:15.612451 1 controller_manager.go:50] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3deb112ca908d86a8b7f07feb4e0da8204aab510e2799d2dccdab5e5905d1b24"
2025-12-13T00:13:15.612510997+00:00 stderr F I1213 00:13:15.612478 1 controller_manager.go:56] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f69708d7711c9fdc19d1b60591f04a3061fe1f796853e2daea9edea688b2e086"
2025-12-13T00:13:15.612987313+00:00 stderr F I1213 00:13:15.612717 1 standalone_apiserver.go:105] Started health checks at 0.0.0.0:8443
2025-12-13T00:13:15.621682265+00:00 stderr F I1213 00:13:15.621175 1 leaderelection.go:250] attempting to acquire leader lease openshift-controller-manager/openshift-master-controllers...
2025-12-13T00:13:15.929014462+00:00 stderr F I1213 00:13:15.926018 1 leaderelection.go:260] successfully acquired lease openshift-controller-manager/openshift-master-controllers
2025-12-13T00:13:15.932320563+00:00 stderr F I1213 00:13:15.932250 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-controller-manager", Name:"openshift-master-controllers", UID:"05722deb-fc4c-4763-8689-9fea6b1f7ec9", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"40062", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' controller-manager-778975cc4f-x5vcf became leader
2025-12-13T00:13:16.005037626+00:00 stderr F I1213 00:13:16.004977 1 controller_manager.go:145] Starting "openshift.io/serviceaccount-pull-secrets"
2025-12-13T00:13:16.011309507+00:00 stderr F I1213 00:13:16.011270 1 controller_manager.go:155] Started "openshift.io/serviceaccount-pull-secrets"
2025-12-13T00:13:16.011365679+00:00 stderr F I1213 00:13:16.011355 1 controller_manager.go:145] Starting "openshift.io/build-config-change"
2025-12-13T00:13:16.012113615+00:00 stderr F I1213 00:13:16.012085 1 registry_urls_observation_controller.go:139] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_urls"
2025-12-13T00:13:16.013598784+00:00 stderr F I1213 00:13:16.012214 1 legacy_token_secret_controller.go:109] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret"
2025-12-13T00:13:16.013598784+00:00 stderr F I1213 00:13:16.012349 1 legacy_image_pull_secret_controller.go:131] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret"
2025-12-13T00:13:16.013598784+00:00 stderr F I1213 00:13:16.012426 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret
2025-12-13T00:13:16.013598784+00:00 stderr F I1213 00:13:16.012465 1 image_pull_secret_controller.go:301] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret"
2025-12-13T00:13:16.013598784+00:00 stderr F I1213 00:13:16.012471 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_image-pull-secret
2025-12-13T00:13:16.013598784+00:00 stderr F I1213 00:13:16.012496 1 service_account_controller.go:336] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_service-account"
2025-12-13T00:13:16.013598784+00:00 stderr F I1213 00:13:16.012500 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_service-account
2025-12-13T00:13:16.013598784+00:00 stderr F I1213 00:13:16.012616 1 keyid_observation_controller.go:164] "Starting controller" name="openshift.io/internal-image-registry-pull-secrets_kids"
2025-12-13T00:13:16.013598784+00:00 stderr F I1213 00:13:16.012641 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_kids
2025-12-13T00:13:16.013598784+00:00 stderr F I1213 00:13:16.013433 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_legacy-token-secret
2025-12-13T00:13:16.013663786+00:00 stderr F I1213 00:13:16.013650 1 shared_informer.go:311] Waiting for caches to sync for openshift.io/internal-image-registry-pull-secrets_urls
2025-12-13T00:13:16.026485697+00:00 stderr F I1213 00:13:16.026423 1 controller_manager.go:155] Started "openshift.io/build-config-change"
2025-12-13T00:13:16.026485697+00:00 stderr F I1213 00:13:16.026445 1 controller_manager.go:145] Starting "openshift.io/deployer-serviceaccount"
2025-12-13T00:13:16.032754368+00:00 stderr F I1213 00:13:16.032090 1 controller_manager.go:155] Started "openshift.io/deployer-serviceaccount"
2025-12-13T00:13:16.032754368+00:00 stderr F I1213 00:13:16.032118 1 controller_manager.go:145] Starting "openshift.io/image-signature-import"
2025-12-13T00:13:16.032754368+00:00 stderr F I1213 00:13:16.032472 1 serviceaccounts_controller.go:111] "Starting service account controller"
2025-12-13T00:13:16.032754368+00:00 stderr F I1213 00:13:16.032483 1 shared_informer.go:311] Waiting for caches to sync for service account
2025-12-13T00:13:16.035664206+00:00 stderr F I1213 00:13:16.035633 1 controller_manager.go:155] Started "openshift.io/image-signature-import"
2025-12-13T00:13:16.035711897+00:00 stderr F I1213 00:13:16.035701 1 controller_manager.go:145] Starting "openshift.io/serviceaccount"
2025-12-13T00:13:16.035765499+00:00 stderr F I1213 00:13:16.035749 1 serviceaccount.go:16] openshift.io/serviceaccount: no managed names specified
2025-12-13T00:13:16.035790220+00:00 stderr F W1213 00:13:16.035780 1 controller_manager.go:152] Skipping "openshift.io/serviceaccount"
2025-12-13T00:13:16.035814061+00:00 stderr F W1213 00:13:16.035804 1 controller_manager.go:142] "openshift.io/default-rolebindings" is disabled
2025-12-13T00:13:16.035836611+00:00 stderr F I1213 00:13:16.035827 1 controller_manager.go:145] Starting "openshift.io/build"
2025-12-13T00:13:16.046555282+00:00 stderr F I1213 00:13:16.046488 1 controller_manager.go:155] Started "openshift.io/build"
2025-12-13T00:13:16.046555282+00:00 stderr F I1213 00:13:16.046509 1 controller_manager.go:145] Starting "openshift.io/deployer"
2025-12-13T00:13:16.051996535+00:00 stderr F I1213 00:13:16.051954 1 controller_manager.go:155] Started "openshift.io/deployer"
2025-12-13T00:13:16.052094348+00:00 stderr F I1213 00:13:16.052082 1 controller_manager.go:145] Starting "openshift.io/image-puller-rolebindings"
2025-12-13T00:13:16.052427089+00:00 stderr F I1213 00:13:16.052413 1 factory.go:73] Starting deployer controller
2025-12-13T00:13:16.061253385+00:00 stderr F I1213 00:13:16.061197 1 controller_manager.go:155] Started "openshift.io/image-puller-rolebindings"
2025-12-13T00:13:16.061253385+00:00 stderr F I1213 00:13:16.061219 1 controller_manager.go:145] Starting "openshift.io/templateinstance"
2025-12-13T00:13:16.066027316+00:00 stderr F I1213 00:13:16.065989 1 defaultrolebindings.go:154] Starting ImagePullerRoleBindingController
2025-12-13T00:13:16.066069717+00:00 stderr F I1213 00:13:16.066059 1 shared_informer.go:311] Waiting for caches to sync for ImagePullerRoleBindingController
2025-12-13T00:13:16.101001511+00:00 stderr F I1213 00:13:16.100367 1 controller_manager.go:155] Started "openshift.io/templateinstance"
2025-12-13T00:13:16.101001511+00:00 stderr F I1213 00:13:16.100387 1 controller_manager.go:145] Starting "openshift.io/unidling"
2025-12-13T00:13:16.109965543+00:00 stderr F I1213 00:13:16.109027 1 controller_manager.go:155] Started "openshift.io/unidling"
2025-12-13T00:13:16.109965543+00:00 stderr F I1213 00:13:16.109047 1 controller_manager.go:145] Starting "openshift.io/origin-namespace"
2025-12-13T00:13:16.114097301+00:00 stderr F I1213 00:13:16.114060 1 controller_manager.go:155] Started "openshift.io/origin-namespace"
2025-12-13T00:13:16.114140243+00:00 stderr F I1213 00:13:16.114129 1 controller_manager.go:145] Starting "openshift.io/templateinstancefinalizer"
2025-12-13T00:13:16.127760960+00:00 stderr F I1213 00:13:16.127326 1 controller_manager.go:155] Started "openshift.io/templateinstancefinalizer"
2025-12-13T00:13:16.127760960+00:00 stderr F I1213 00:13:16.127346 1 controller_manager.go:145] Starting "openshift.io/builder-serviceaccount"
2025-12-13T00:13:16.127760960+00:00 stderr F I1213 00:13:16.127544 1 templateinstance_finalizer.go:189] TemplateInstanceFinalizer controller waiting for cache sync
2025-12-13T00:13:16.137978494+00:00 stderr F I1213 00:13:16.137662 1 controller_manager.go:155] Started "openshift.io/builder-serviceaccount"
2025-12-13T00:13:16.137978494+00:00 stderr F I1213 00:13:16.137690 1 controller_manager.go:145] Starting "openshift.io/builder-rolebindings"
2025-12-13T00:13:16.137978494+00:00 stderr F I1213 00:13:16.137732 1 serviceaccounts_controller.go:111] "Starting service account controller"
2025-12-13T00:13:16.137978494+00:00 stderr F I1213 00:13:16.137754 1 shared_informer.go:311] Waiting for caches to sync for service account
2025-12-13T00:13:16.141375638+00:00 stderr F I1213 00:13:16.141345 1 controller_manager.go:155] Started "openshift.io/builder-rolebindings"
2025-12-13T00:13:16.141419329+00:00 stderr F I1213 00:13:16.141409 1 controller_manager.go:145] Starting "openshift.io/deploymentconfig"
2025-12-13T00:13:16.141570884+00:00 stderr F I1213 00:13:16.141558 1 defaultrolebindings.go:154] Starting BuilderRoleBindingController
2025-12-13T00:13:16.141598865+00:00 stderr F I1213 00:13:16.141589 1 shared_informer.go:311] Waiting for caches to sync for BuilderRoleBindingController
2025-12-13T00:13:16.164335739+00:00 stderr F I1213 00:13:16.164176 1 controller_manager.go:155] Started "openshift.io/deploymentconfig"
2025-12-13T00:13:16.164335739+00:00 stderr F I1213 00:13:16.164275 1 controller_manager.go:145] Starting "openshift.io/deployer-rolebindings"
2025-12-13T00:13:16.164435213+00:00 stderr F I1213 00:13:16.164416 1 factory.go:78] Starting deploymentconfig controller
2025-12-13T00:13:16.173980864+00:00 stderr F I1213 00:13:16.173910 1 controller_manager.go:155] Started "openshift.io/deployer-rolebindings"
2025-12-13T00:13:16.173980864+00:00 stderr F I1213 00:13:16.173968 1 controller_manager.go:145] Starting "openshift.io/image-trigger"
2025-12-13T00:13:16.176580740+00:00 stderr F I1213 00:13:16.174097 1 defaultrolebindings.go:154] Starting DeployerRoleBindingController
2025-12-13T00:13:16.176580740+00:00 stderr F I1213 00:13:16.174109 1 shared_informer.go:311] Waiting for caches to sync for DeployerRoleBindingController
2025-12-13T00:13:16.190604412+00:00 stderr F I1213 00:13:16.190564 1 controller_manager.go:155] Started "openshift.io/image-trigger"
2025-12-13T00:13:16.190653303+00:00 stderr F I1213 00:13:16.190643 1 controller_manager.go:145] Starting "openshift.io/image-import"
2025-12-13T00:13:16.190856871+00:00 stderr F I1213 00:13:16.190843 1 image_trigger_controller.go:229] Starting trigger controller
2025-12-13T00:13:16.202025786+00:00 stderr F I1213 00:13:16.201968 1 imagestream_controller.go:66] Starting image stream controller
2025-12-13T00:13:16.205233814+00:00 stderr F I1213 00:13:16.202719 1 controller_manager.go:155] Started "openshift.io/image-import"
2025-12-13T00:13:16.205233814+00:00 stderr F I1213 00:13:16.202735 1 controller_manager.go:157] Started Origin Controllers
2025-12-13T00:13:16.205233814+00:00 stderr F I1213 00:13:16.203053 1 scheduled_image_controller.go:68] Starting scheduled import controller
2025-12-13T00:13:16.211133022+00:00 stderr F W1213 00:13:16.211088 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)
2025-12-13T00:13:16.211202524+00:00 stderr F E1213 00:13:16.211190 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)
2025-12-13T00:13:16.211610158+00:00 stderr F I1213 00:13:16.211593 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.211797504+00:00 stderr F I1213 00:13:16.211606 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.211855746+00:00 stderr F W1213 00:13:16.211838 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io)
2025-12-13T00:13:16.211863866+00:00 stderr F E1213 00:13:16.211855 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io)
2025-12-13T00:13:16.211976260+00:00 stderr F I1213 00:13:16.211922 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.212009631+00:00 stderr F I1213 00:13:16.211789 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.212057433+00:00 stderr F W1213 00:13:16.211813 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-12-13T00:13:16.212065463+00:00 stderr F E1213 00:13:16.212055 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-12-13T00:13:16.224865174+00:00 stderr F I1213 00:13:16.224800 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.229257611+00:00 stderr F I1213 00:13:16.226685 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.231521497+00:00 stderr F I1213 00:13:16.230256 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.231632651+00:00 stderr F W1213 00:13:16.231614 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)
2025-12-13T00:13:16.231675882+00:00 stderr F E1213 00:13:16.231664 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)
2025-12-13T00:13:16.240881961+00:00 stderr F I1213 00:13:16.240827 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.242843097+00:00 stderr F W1213 00:13:16.242820 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io)
2025-12-13T00:13:16.242911910+00:00 stderr F E1213 00:13:16.242894 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Image: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io)
2025-12-13T00:13:16.243092576+00:00 stderr F W1213 00:13:16.243066 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)
2025-12-13T00:13:16.243132917+00:00 stderr F E1213 00:13:16.243114 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)
2025-12-13T00:13:16.243354254+00:00 stderr F I1213 00:13:16.243339 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.249455409+00:00 stderr F I1213 00:13:16.248393 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.254845420+00:00 stderr F I1213 00:13:16.251417 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.259047042+00:00 stderr F I1213 00:13:16.259002 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.358555495+00:00 stderr F I1213 00:13:16.353090 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.373512258+00:00 stderr F I1213 00:13:16.371818 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.389616779+00:00 stderr F I1213 00:13:16.389564 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.391258064+00:00 stderr F I1213 00:13:16.391225 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.398506138+00:00 stderr F I1213 00:13:16.398379 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.401256181+00:00 stderr F I1213 00:13:16.401214 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.416771312+00:00 stderr F I1213 00:13:16.416725 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_urls
2025-12-13T00:13:16.416838144+00:00 stderr F I1213 00:13:16.416821 1 registry_urls_observation_controller.go:146] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_urls"
2025-12-13T00:13:16.424016635+00:00 stderr F I1213 00:13:16.423964 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.432995727+00:00 stderr F I1213 00:13:16.432953 1 shared_informer.go:318] Caches are synced for service account
2025-12-13T00:13:16.437970244+00:00 stderr F I1213 00:13:16.437821 1 shared_informer.go:318] Caches are synced for service account
2025-12-13T00:13:16.442303929+00:00 stderr F I1213 00:13:16.442123 1 shared_informer.go:318] Caches are synced for BuilderRoleBindingController
2025-12-13T00:13:16.453032850+00:00 stderr F I1213 00:13:16.452991 1 factory.go:80] Deployer controller caches are synced. Starting workers.
2025-12-13T00:13:16.467010740+00:00 stderr F I1213 00:13:16.466972 1 shared_informer.go:318] Caches are synced for ImagePullerRoleBindingController
2025-12-13T00:13:16.475967781+00:00 stderr F I1213 00:13:16.475085 1 shared_informer.go:318] Caches are synced for DeployerRoleBindingController
2025-12-13T00:13:16.568436128+00:00 stderr F I1213 00:13:16.568388 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.608083810+00:00 stderr F I1213 00:13:16.608014 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229
2025-12-13T00:13:16.615234531+00:00 stderr F I1213 00:13:16.615188 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_service-account
2025-12-13T00:13:16.615310483+00:00 stderr F I1213 00:13:16.615291 1 service_account_controller.go:343] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_service-account"
2025-12-13T00:13:16.615837261+00:00 stderr F I1213 00:13:16.615191 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_legacy-token-secret
2025-12-13T00:13:16.615886002+00:00 stderr F I1213 00:13:16.615871 1 legacy_token_secret_controller.go:116] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-token-secret"
2025-12-13T00:13:16.616325797+00:00 stderr F I1213 00:13:16.615221 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret
2025-12-13T00:13:16.616377089+00:00 stderr F I1213 00:13:16.616362 1 legacy_image_pull_secret_controller.go:138] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_legacy-image-pull-secret"
2025-12-13T00:13:16.616408450+00:00 stderr F I1213 00:13:16.615241 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_image-pull-secret
2025-12-13T00:13:16.616448261+00:00 stderr F I1213 00:13:16.616436 1 image_pull_secret_controller.go:327] Waiting for service account token signing cert to be observed
2025-12-13T00:13:16.616475652+00:00 stderr F I1213 00:13:16.615253 1 shared_informer.go:318] Caches are synced for openshift.io/internal-image-registry-pull-secrets_kids
2025-12-13T00:13:16.616512183+00:00 stderr F I1213 00:13:16.616499 1 keyid_observation_controller.go:172] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_kids"
2025-12-13T00:13:16.616670229+00:00 stderr F I1213 00:13:16.616655 1 image_pull_secret_controller.go:330] "Observed service account token signing certs" kids=["Eis1R21gHpHFLAkJU-GQ-azSF6VzwnC1XhhzxsZx2Qg"]
2025-12-13T00:13:16.616707411+00:00 stderr F I1213 00:13:16.616695 1 image_pull_secret_controller.go:313] Waiting for image registry urls to be observed
2025-12-13T00:13:16.616743132+00:00 stderr F I1213 00:13:16.616730 1 image_pull_secret_controller.go:317] "Observed image registry urls" urls=["10.217.4.41:5000","default-route-openshift-image-registry.apps-crc.testing","image-registry.openshift-image-registry.svc.cluster.local:5000","image-registry.openshift-image-registry.svc:5000"]
2025-12-13T00:13:16.616792323+00:00 stderr F I1213 00:13:16.616776 1 image_pull_secret_controller.go:374] "Started controller" name="openshift.io/internal-image-registry-pull-secrets_image-pull-secret"
2025-12-13T00:13:16.617249239+00:00 stderr F I1213 00:13:16.617224 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="default" name="builder-dockercfg-hn9nn" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-26 10:06:14.753120965 +0000 UTC"
2025-12-13T00:13:16.617296990+00:00 stderr F I1213 00:13:16.617284 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="default" name="builder-dockercfg-hn9nn" serviceaccount="builder"
2025-12-13T00:13:16.617661712+00:00 stderr F I1213 00:13:16.617643 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="default" name="default-dockercfg-rwmqp" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-26 10:06:14.752952923 +0000 UTC"
2025-12-13T00:13:16.618105937+00:00 stderr F I1213 00:13:16.618089 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="default" name="default-dockercfg-rwmqp" serviceaccount="default"
2025-12-13T00:13:16.618409317+00:00 stderr F I1213 00:13:16.618372 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="default" name="deployer-dockercfg-rxncs" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-26 10:06:14.752669622 +0000 UTC"
2025-12-13T00:13:16.618409317+00:00 stderr F I1213 00:13:16.618400 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="default" name="deployer-dockercfg-rxncs" serviceaccount="deployer"
2025-12-13T00:13:16.618522611+00:00 stderr F I1213 00:13:16.618504 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="builder-dockercfg-68c6h" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-26 10:06:14.7526071 +0000 UTC"
2025-12-13T00:13:16.618560822+00:00 stderr F I1213 00:13:16.618548 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="builder-dockercfg-68c6h" serviceaccount="builder"
2025-12-13T00:13:16.618928344+00:00 stderr F I1213 00:13:16.618688 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="csi-hostpath-provisioner-sa-dockercfg-nqbbq" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-26 10:06:14.75253183 +0000 UTC"
2025-12-13T00:13:16.618928344+00:00 stderr F I1213 00:13:16.618706 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="csi-hostpath-provisioner-sa-dockercfg-nqbbq" serviceaccount="csi-hostpath-provisioner-sa"
2025-12-13T00:13:16.673833870+00:00 stderr F I1213 00:13:16.673572 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="csi-provisioner-dockercfg-m4vbf" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-26 10:06:14.730583252 +0000 UTC"
2025-12-13T00:13:16.673833870+00:00 stderr F I1213 00:13:16.673810 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="csi-provisioner-dockercfg-m4vbf" serviceaccount="csi-provisioner"
2025-12-13T00:13:16.674019126+00:00 stderr F I1213 00:13:16.673989 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="default-dockercfg-svxcm" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-26 10:06:14.73043131 +0000 UTC"
2025-12-13T00:13:16.674071858+00:00 stderr F I1213 00:13:16.674054 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="default-dockercfg-svxcm" serviceaccount="default"
2025-12-13T00:13:16.697537736+00:00 stderr F I1213 00:13:16.697490 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="hostpath-provisioner" name="deployer-dockercfg-xtrqb" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-26 10:06:16.121017155 +0000 UTC"
2025-12-13T00:13:16.697607748+00:00 stderr F I1213 00:13:16.697593 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="hostpath-provisioner" name="deployer-dockercfg-xtrqb" serviceaccount="deployer"
2025-12-13T00:13:16.698723276+00:00 stderr F I1213 00:13:16.698694 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-node-lease" name="builder-dockercfg-fhvt9" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-26 10:06:16.120532554 +0000 UTC"
2025-12-13T00:13:16.698780588+00:00 stderr F I1213 00:13:16.698765 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-node-lease" name="builder-dockercfg-fhvt9" serviceaccount="builder"
2025-12-13T00:13:16.700473134+00:00 stderr F I1213 00:13:16.700443 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-node-lease" name="default-dockercfg-dp7cf" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-26 10:06:16.119833803 +0000 UTC"
2025-12-13T00:13:16.700533756+00:00 stderr F I1213 00:13:16.700520 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-node-lease"
name="default-dockercfg-dp7cf" serviceaccount="default" 2025-12-13T00:13:16.718159719+00:00 stderr F I1213 00:13:16.718118 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-node-lease" name="deployer-dockercfg-l8zq8" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-26 10:06:16.112766149 +0000 UTC" 2025-12-13T00:13:16.718216071+00:00 stderr F I1213 00:13:16.718205 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-node-lease" name="deployer-dockercfg-l8zq8" serviceaccount="deployer" 2025-12-13T00:13:16.736674941+00:00 stderr F I1213 00:13:16.736402 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-public" name="builder-dockercfg-pq2fn" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-26 10:06:16.105449979 +0000 UTC" 2025-12-13T00:13:16.736674941+00:00 stderr F I1213 00:13:16.736621 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-public" name="builder-dockercfg-pq2fn" serviceaccount="builder" 2025-12-13T00:13:16.741967619+00:00 stderr F I1213 00:13:16.737988 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-public" name="default-dockercfg-mg7xn" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-26 10:06:16.104821638 +0000 UTC" 2025-12-13T00:13:16.741967619+00:00 stderr F I1213 00:13:16.738015 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-public" name="default-dockercfg-mg7xn" serviceaccount="default" 2025-12-13T00:13:16.749049207+00:00 stderr F I1213 00:13:16.748996 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be 
refreshed" reason="auth token needs to be refreshed" ns="kube-public" name="deployer-dockercfg-4blxw" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-26 10:06:16.100411854 +0000 UTC" 2025-12-13T00:13:16.749049207+00:00 stderr F I1213 00:13:16.749036 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-public" name="deployer-dockercfg-4blxw" serviceaccount="deployer" 2025-12-13T00:13:16.753620440+00:00 stderr F I1213 00:13:16.750161 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="attachdetach-controller-dockercfg-fdtjb" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:42 +0000 UTC" refreshTime="2025-06-26 10:06:16.099942974 +0000 UTC" 2025-12-13T00:13:16.753620440+00:00 stderr F I1213 00:13:16.750194 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="attachdetach-controller-dockercfg-fdtjb" serviceaccount="attachdetach-controller" 2025-12-13T00:13:16.753620440+00:00 stderr F I1213 00:13:16.750739 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="builder-dockercfg-kkqp2" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-26 10:06:17.499710443 +0000 UTC" 2025-12-13T00:13:16.753620440+00:00 stderr F I1213 00:13:16.750750 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="builder-dockercfg-kkqp2" serviceaccount="builder" 2025-12-13T00:13:16.779355956+00:00 stderr F I1213 00:13:16.779312 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="certificate-controller-dockercfg-9v2kj" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 
+0000 UTC" refreshTime="2025-06-26 10:06:17.488284962 +0000 UTC" 2025-12-13T00:13:16.779423688+00:00 stderr F I1213 00:13:16.779412 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="certificate-controller-dockercfg-9v2kj" serviceaccount="certificate-controller" 2025-12-13T00:13:16.803765675+00:00 stderr F I1213 00:13:16.803723 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="clusterrole-aggregation-controller-dockercfg-2tcfh" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-26 10:06:17.478520822 +0000 UTC" 2025-12-13T00:13:16.803832588+00:00 stderr F I1213 00:13:16.803821 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="clusterrole-aggregation-controller-dockercfg-2tcfh" serviceaccount="clusterrole-aggregation-controller" 2025-12-13T00:13:16.804205350+00:00 stderr F I1213 00:13:16.804186 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="cronjob-controller-dockercfg-g2sp5" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-26 10:06:17.47833244 +0000 UTC" 2025-12-13T00:13:16.804246252+00:00 stderr F I1213 00:13:16.804235 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="cronjob-controller-dockercfg-g2sp5" serviceaccount="cronjob-controller" 2025-12-13T00:13:16.828852478+00:00 stderr F I1213 00:13:16.828802 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="daemon-set-controller-dockercfg-pjkzz" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-26 10:06:17.468494134 
+0000 UTC" 2025-12-13T00:13:16.828852478+00:00 stderr F I1213 00:13:16.828828 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="daemon-set-controller-dockercfg-pjkzz" serviceaccount="daemon-set-controller" 2025-12-13T00:13:16.868658587+00:00 stderr F I1213 00:13:16.867319 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="default-dockercfg-q6b6n" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-26 10:06:17.453082432 +0000 UTC" 2025-12-13T00:13:16.868658587+00:00 stderr F I1213 00:13:16.867342 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="default-dockercfg-q6b6n" serviceaccount="default" 2025-12-13T00:13:16.879995528+00:00 stderr F I1213 00:13:16.879842 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="deployer-dockercfg-bscn9" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-26 10:06:17.448074064 +0000 UTC" 2025-12-13T00:13:16.879995528+00:00 stderr F I1213 00:13:16.879865 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="deployer-dockercfg-bscn9" serviceaccount="deployer" 2025-12-13T00:13:16.899052097+00:00 stderr F I1213 00:13:16.898983 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="deployment-controller-dockercfg-xwj9s" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-26 10:06:17.440418234 +0000 UTC" 2025-12-13T00:13:16.899052097+00:00 stderr F I1213 00:13:16.899011 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="deployment-controller-dockercfg-xwj9s" 
serviceaccount="deployment-controller" 2025-12-13T00:13:16.918535372+00:00 stderr F I1213 00:13:16.916171 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="disruption-controller-dockercfg-27hxh" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-26 10:06:17.433548592 +0000 UTC" 2025-12-13T00:13:16.918535372+00:00 stderr F I1213 00:13:16.916229 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="disruption-controller-dockercfg-27hxh" serviceaccount="disruption-controller" 2025-12-13T00:13:16.921232703+00:00 stderr F I1213 00:13:16.919475 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="endpoint-controller-dockercfg-fnmd9" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-26 10:06:17.432221851 +0000 UTC" 2025-12-13T00:13:16.921232703+00:00 stderr F I1213 00:13:16.919503 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="endpoint-controller-dockercfg-fnmd9" serviceaccount="endpoint-controller" 2025-12-13T00:13:16.925751695+00:00 stderr F I1213 00:13:16.925443 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="endpointslice-controller-dockercfg-kvrd9" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-26 10:06:17.42983363 +0000 UTC" 2025-12-13T00:13:16.925751695+00:00 stderr F I1213 00:13:16.925465 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="endpointslice-controller-dockercfg-kvrd9" serviceaccount="endpointslice-controller" 2025-12-13T00:13:16.964604200+00:00 stderr F I1213 
00:13:16.963996 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="endpointslicemirroring-controller-dockercfg-skzmn" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:43 +0000 UTC" refreshTime="2025-06-26 10:06:17.414413184 +0000 UTC" 2025-12-13T00:13:16.964604200+00:00 stderr F I1213 00:13:16.964040 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="endpointslicemirroring-controller-dockercfg-skzmn" serviceaccount="endpointslicemirroring-controller" 2025-12-13T00:13:16.964604200+00:00 stderr F I1213 00:13:16.964402 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="ephemeral-volume-controller-dockercfg-jfqhh" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:44 +0000 UTC" refreshTime="2025-06-26 10:06:18.814245478 +0000 UTC" 2025-12-13T00:13:16.964657532+00:00 stderr F I1213 00:13:16.964622 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="ephemeral-volume-controller-dockercfg-jfqhh" serviceaccount="ephemeral-volume-controller" 2025-12-13T00:13:16.996562024+00:00 stderr F I1213 00:13:16.996506 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="expand-controller-dockercfg-ls7wp" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:44 +0000 UTC" refreshTime="2025-06-26 10:06:18.801407864 +0000 UTC" 2025-12-13T00:13:16.996562024+00:00 stderr F I1213 00:13:16.996529 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="expand-controller-dockercfg-ls7wp" serviceaccount="expand-controller" 2025-12-13T00:13:17.022354600+00:00 stderr F I1213 00:13:17.021150 1 image_pull_secret_controller.go:286] "Internal registry pull secret 
needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="generic-garbage-collector-dockercfg-wqxkz" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-06-26 10:06:21.591555573 +0000 UTC" 2025-12-13T00:13:17.022354600+00:00 stderr F I1213 00:13:17.021188 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="generic-garbage-collector-dockercfg-wqxkz" serviceaccount="generic-garbage-collector" 2025-12-13T00:13:17.022354600+00:00 stderr F I1213 00:13:17.021889 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="horizontal-pod-autoscaler-dockercfg-5mlhd" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-06-26 10:06:21.591252366 +0000 UTC" 2025-12-13T00:13:17.022354600+00:00 stderr F I1213 00:13:17.021904 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="horizontal-pod-autoscaler-dockercfg-5mlhd" serviceaccount="horizontal-pod-autoscaler" 2025-12-13T00:13:17.078884371+00:00 stderr F I1213 00:13:17.078746 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="job-controller-dockercfg-wq5x7" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-06-26 10:06:21.568512986 +0000 UTC" 2025-12-13T00:13:17.078884371+00:00 stderr F I1213 00:13:17.078770 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="job-controller-dockercfg-wq5x7" serviceaccount="job-controller" 2025-12-13T00:13:17.079199141+00:00 stderr F W1213 00:13:17.079175 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get 
imagestreams.image.openshift.io) 2025-12-13T00:13:17.079213182+00:00 stderr F E1213 00:13:17.079205 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-12-13T00:13:17.079707638+00:00 stderr F I1213 00:13:17.079686 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="legacy-service-account-token-cleaner-dockercfg-qqxct" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-06-26 10:06:21.568132733 +0000 UTC" 2025-12-13T00:13:17.079716408+00:00 stderr F I1213 00:13:17.079705 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="legacy-service-account-token-cleaner-dockercfg-qqxct" serviceaccount="legacy-service-account-token-cleaner" 2025-12-13T00:13:17.103426635+00:00 stderr F I1213 00:13:17.103091 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="namespace-controller-dockercfg-5hkmr" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:46 +0000 UTC" refreshTime="2025-06-26 10:06:21.558777487 +0000 UTC" 2025-12-13T00:13:17.103426635+00:00 stderr F I1213 00:13:17.103120 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="namespace-controller-dockercfg-5hkmr" serviceaccount="namespace-controller" 2025-12-13T00:13:17.118969187+00:00 stderr F I1213 00:13:17.118783 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="persistent-volume-binder-dockercfg-49lxl" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:47 +0000 UTC" refreshTime="2025-06-26 10:06:22.952498728 
+0000 UTC" 2025-12-13T00:13:17.118969187+00:00 stderr F I1213 00:13:17.118810 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="persistent-volume-binder-dockercfg-49lxl" serviceaccount="persistent-volume-binder" 2025-12-13T00:13:17.119961280+00:00 stderr F I1213 00:13:17.119435 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="pod-garbage-collector-dockercfg-9jzsm" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:47 +0000 UTC" refreshTime="2025-06-26 10:06:22.952232683 +0000 UTC" 2025-12-13T00:13:17.119961280+00:00 stderr F I1213 00:13:17.119454 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="pod-garbage-collector-dockercfg-9jzsm" serviceaccount="pod-garbage-collector" 2025-12-13T00:13:17.119961280+00:00 stderr F I1213 00:13:17.119594 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="node-controller-dockercfg-r8598" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:47 +0000 UTC" refreshTime="2025-06-26 10:06:22.952178426 +0000 UTC" 2025-12-13T00:13:17.119961280+00:00 stderr F I1213 00:13:17.119617 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="node-controller-dockercfg-r8598" serviceaccount="node-controller" 2025-12-13T00:13:17.119961280+00:00 stderr F I1213 00:13:17.119713 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="pv-protection-controller-dockercfg-r2lrg" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:47 +0000 UTC" refreshTime="2025-06-26 10:06:22.952120663 +0000 UTC" 2025-12-13T00:13:17.119961280+00:00 stderr F I1213 00:13:17.119724 1 
image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="pv-protection-controller-dockercfg-r2lrg" serviceaccount="pv-protection-controller" 2025-12-13T00:13:17.147484915+00:00 stderr F W1213 00:13:17.147427 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2025-12-13T00:13:17.147484915+00:00 stderr F E1213 00:13:17.147452 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2025-12-13T00:13:17.148121056+00:00 stderr F I1213 00:13:17.148068 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="pvc-protection-controller-dockercfg-zqpk9" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-06-26 10:06:24.340782258 +0000 UTC" 2025-12-13T00:13:17.148121056+00:00 stderr F I1213 00:13:17.148088 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="pvc-protection-controller-dockercfg-zqpk9" serviceaccount="pvc-protection-controller" 2025-12-13T00:13:17.161152495+00:00 stderr F I1213 00:13:17.158237 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="replicaset-controller-dockercfg-m7w7t" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-06-26 10:06:24.336716654 +0000 UTC" 2025-12-13T00:13:17.161152495+00:00 stderr F I1213 00:13:17.158263 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="replicaset-controller-dockercfg-m7w7t" 
serviceaccount="replicaset-controller" 2025-12-13T00:13:17.163350109+00:00 stderr F I1213 00:13:17.163303 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="replication-controller-dockercfg-zx22f" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-06-26 10:06:24.334690834 +0000 UTC" 2025-12-13T00:13:17.163350109+00:00 stderr F I1213 00:13:17.163332 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="replication-controller-dockercfg-zx22f" serviceaccount="replication-controller" 2025-12-13T00:13:17.171110339+00:00 stderr F I1213 00:13:17.171065 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="resourcequota-controller-dockercfg-f7clv" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-06-26 10:06:24.331589366 +0000 UTC" 2025-12-13T00:13:17.171176251+00:00 stderr F I1213 00:13:17.171164 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="resourcequota-controller-dockercfg-f7clv" serviceaccount="resourcequota-controller" 2025-12-13T00:13:17.171775482+00:00 stderr F I1213 00:13:17.171756 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="root-ca-cert-publisher-dockercfg-4z4hh" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-06-26 10:06:24.331304376 +0000 UTC" 2025-12-13T00:13:17.171815193+00:00 stderr F I1213 00:13:17.171802 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="root-ca-cert-publisher-dockercfg-4z4hh" serviceaccount="root-ca-cert-publisher" 
2025-12-13T00:13:17.212696466+00:00 stderr F I1213 00:13:17.210362 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="service-account-controller-dockercfg-wvw6s" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:48 +0000 UTC" refreshTime="2025-06-26 10:06:24.315867949 +0000 UTC" 2025-12-13T00:13:17.212696466+00:00 stderr F I1213 00:13:17.210391 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="service-account-controller-dockercfg-wvw6s" serviceaccount="service-account-controller" 2025-12-13T00:13:17.218953647+00:00 stderr F I1213 00:13:17.218901 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="service-ca-cert-publisher-dockercfg-npjg7" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-26 10:06:25.712452158 +0000 UTC" 2025-12-13T00:13:17.218953647+00:00 stderr F I1213 00:13:17.218946 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="service-ca-cert-publisher-dockercfg-npjg7" serviceaccount="service-ca-cert-publisher" 2025-12-13T00:13:17.220753057+00:00 stderr F I1213 00:13:17.219453 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="service-controller-dockercfg-4cv62" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-26 10:06:25.71223099 +0000 UTC" 2025-12-13T00:13:17.220753057+00:00 stderr F I1213 00:13:17.219476 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="service-controller-dockercfg-4cv62" serviceaccount="service-controller" 2025-12-13T00:13:17.220753057+00:00 stderr F I1213 00:13:17.220103 1 image_pull_secret_controller.go:286] 
"Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="statefulset-controller-dockercfg-ndvv5" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-26 10:06:25.711965118 +0000 UTC" 2025-12-13T00:13:17.220753057+00:00 stderr F I1213 00:13:17.220117 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="statefulset-controller-dockercfg-ndvv5" serviceaccount="statefulset-controller" 2025-12-13T00:13:17.230487205+00:00 stderr F I1213 00:13:17.230441 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="kube-system" name="ttl-after-finished-controller-dockercfg-7wg62" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-26 10:06:25.707836379 +0000 UTC" 2025-12-13T00:13:17.230487205+00:00 stderr F I1213 00:13:17.230471 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="kube-system" name="ttl-after-finished-controller-dockercfg-7wg62" serviceaccount="ttl-after-finished-controller" 2025-12-13T00:13:17.250145975+00:00 stderr F I1213 00:13:17.249151 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver-operator" name="builder-dockercfg-fcp4f" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-26 10:06:25.700354092 +0000 UTC" 2025-12-13T00:13:17.250145975+00:00 stderr F I1213 00:13:17.249183 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver-operator" name="builder-dockercfg-fcp4f" serviceaccount="builder" 2025-12-13T00:13:17.265560623+00:00 stderr F I1213 00:13:17.265510 1 image_pull_secret_controller.go:286] "Internal registry pull secret 
needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver-operator" name="default-dockercfg-qknsb" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-26 10:06:25.693811124 +0000 UTC" 2025-12-13T00:13:17.265560623+00:00 stderr F I1213 00:13:17.265546 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver-operator" name="default-dockercfg-qknsb" serviceaccount="default" 2025-12-13T00:13:17.272466465+00:00 stderr F I1213 00:13:17.272428 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver-operator" name="deployer-dockercfg-rk5zr" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-26 10:06:25.691041779 +0000 UTC" 2025-12-13T00:13:17.272530758+00:00 stderr F I1213 00:13:17.272518 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver-operator" name="deployer-dockercfg-rk5zr" serviceaccount="deployer" 2025-12-13T00:13:17.297006309+00:00 stderr F I1213 00:13:17.295000 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver-operator" name="openshift-apiserver-operator-dockercfg-vw4hh" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:50 +0000 UTC" refreshTime="2025-06-26 10:06:27.082015608 +0000 UTC" 2025-12-13T00:13:17.297006309+00:00 stderr F I1213 00:13:17.295040 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver-operator" name="openshift-apiserver-operator-dockercfg-vw4hh" serviceaccount="openshift-apiserver-operator" 2025-12-13T00:13:17.297006309+00:00 stderr F I1213 00:13:17.295190 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" 
ns="openshift-apiserver" name="builder-dockercfg-bsrrx" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:49 +0000 UTC" refreshTime="2025-06-26 10:06:25.681933193 +0000 UTC" 2025-12-13T00:13:17.297006309+00:00 stderr F I1213 00:13:17.295207 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver" name="builder-dockercfg-bsrrx" serviceaccount="builder" 2025-12-13T00:13:17.302009968+00:00 stderr F I1213 00:13:17.301954 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver" name="default-dockercfg-hxncm" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:50 +0000 UTC" refreshTime="2025-06-26 10:06:27.079242482 +0000 UTC" 2025-12-13T00:13:17.302009968+00:00 stderr F I1213 00:13:17.301987 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver" name="default-dockercfg-hxncm" serviceaccount="default" 2025-12-13T00:13:17.347615220+00:00 stderr F I1213 00:13:17.347572 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver" name="deployer-dockercfg-qkt4v" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-06-26 10:06:28.460983685 +0000 UTC" 2025-12-13T00:13:17.347690082+00:00 stderr F I1213 00:13:17.347676 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver" name="deployer-dockercfg-qkt4v" serviceaccount="deployer" 2025-12-13T00:13:17.357769601+00:00 stderr F I1213 00:13:17.357722 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-apiserver" name="openshift-apiserver-sa-dockercfg-r9fjc" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:51 
+0000 UTC" refreshTime="2025-06-26 10:06:28.456936249 +0000 UTC" 2025-12-13T00:13:17.357837074+00:00 stderr F I1213 00:13:17.357826 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-apiserver" name="openshift-apiserver-sa-dockercfg-r9fjc" serviceaccount="openshift-apiserver-sa" 2025-12-13T00:13:17.378998815+00:00 stderr F I1213 00:13:17.377775 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication-operator" name="authentication-operator-dockercfg-7rvdq" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-06-26 10:06:28.448900814 +0000 UTC" 2025-12-13T00:13:17.378998815+00:00 stderr F I1213 00:13:17.377802 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication-operator" name="authentication-operator-dockercfg-7rvdq" serviceaccount="authentication-operator" 2025-12-13T00:13:17.378998815+00:00 stderr F I1213 00:13:17.378088 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication-operator" name="builder-dockercfg-gr58d" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 20:59:51 +0000 UTC" refreshTime="2025-06-26 10:06:28.448771115 +0000 UTC" 2025-12-13T00:13:17.378998815+00:00 stderr F I1213 00:13:17.378103 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication-operator" name="builder-dockercfg-gr58d" serviceaccount="builder" 2025-12-13T00:13:17.378998815+00:00 stderr F I1213 00:13:17.378395 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication-operator" name="default-dockercfg-mpz9v" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:51 +0000 UTC" 
refreshTime="2025-06-26 10:06:28.448647746 +0000 UTC" 2025-12-13T00:13:17.378998815+00:00 stderr F I1213 00:13:17.378407 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication-operator" name="default-dockercfg-mpz9v" serviceaccount="default" 2025-12-13T00:13:17.390665257+00:00 stderr F I1213 00:13:17.390602 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication-operator" name="deployer-dockercfg-7xqgr" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-06-26 10:07:38.443773202 +0000 UTC" 2025-12-13T00:13:17.390665257+00:00 stderr F I1213 00:13:17.390636 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication-operator" name="deployer-dockercfg-7xqgr" serviceaccount="deployer" 2025-12-13T00:13:17.404260224+00:00 stderr F I1213 00:13:17.402806 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication" name="builder-dockercfg-wbrzn" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-06-26 10:07:38.43889071 +0000 UTC" 2025-12-13T00:13:17.404260224+00:00 stderr F I1213 00:13:17.402834 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication" name="builder-dockercfg-wbrzn" serviceaccount="builder" 2025-12-13T00:13:17.413864066+00:00 stderr F W1213 00:13:17.413809 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2025-12-13T00:13:17.413864066+00:00 stderr F E1213 00:13:17.413837 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: the server is currently unable to 
handle the request (get builds.build.openshift.io) 2025-12-13T00:13:17.414710504+00:00 stderr F I1213 00:13:17.414412 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication" name="default-dockercfg-8smsw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-06-26 10:07:38.434244858 +0000 UTC" 2025-12-13T00:13:17.414710504+00:00 stderr F I1213 00:13:17.414432 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication" name="default-dockercfg-8smsw" serviceaccount="default" 2025-12-13T00:13:17.435472293+00:00 stderr F I1213 00:13:17.435417 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication" name="deployer-dockercfg-txlvt" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-06-26 10:07:38.42584324 +0000 UTC" 2025-12-13T00:13:17.435472293+00:00 stderr F I1213 00:13:17.435439 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication" name="deployer-dockercfg-txlvt" serviceaccount="deployer" 2025-12-13T00:13:17.455224726+00:00 stderr F I1213 00:13:17.454309 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-authentication" name="oauth-openshift-dockercfg-6sd5l" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:41 +0000 UTC" refreshTime="2025-06-26 10:07:38.418287734 +0000 UTC" 2025-12-13T00:13:17.455224726+00:00 stderr F I1213 00:13:17.454335 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-authentication" name="oauth-openshift-dockercfg-6sd5l" serviceaccount="oauth-openshift" 2025-12-13T00:13:17.458989382+00:00 stderr F I1213 
00:13:17.457952 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-network-config-controller" name="builder-dockercfg-4stzg" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-06-26 10:07:44.016840842 +0000 UTC" 2025-12-13T00:13:17.458989382+00:00 stderr F I1213 00:13:17.457983 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-network-config-controller" name="builder-dockercfg-4stzg" serviceaccount="builder" 2025-12-13T00:13:17.468214933+00:00 stderr F I1213 00:13:17.465577 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-network-config-controller" name="default-dockercfg-bswg4" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-06-26 10:07:44.013780907 +0000 UTC" 2025-12-13T00:13:17.468214933+00:00 stderr F I1213 00:13:17.465626 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-network-config-controller" name="default-dockercfg-bswg4" serviceaccount="default" 2025-12-13T00:13:17.484476089+00:00 stderr F I1213 00:13:17.483894 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-network-config-controller" name="deployer-dockercfg-95h82" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-06-26 10:07:44.006454909 +0000 UTC" 2025-12-13T00:13:17.484476089+00:00 stderr F I1213 00:13:17.483945 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-network-config-controller" name="deployer-dockercfg-95h82" serviceaccount="deployer" 2025-12-13T00:13:17.492288381+00:00 stderr F I1213 00:13:17.485322 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-platform-infra" name="builder-dockercfg-88rrx" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-06-26 10:07:44.005894062 +0000 UTC" 2025-12-13T00:13:17.492288381+00:00 stderr F I1213 00:13:17.485346 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-platform-infra" name="builder-dockercfg-88rrx" serviceaccount="builder" 2025-12-13T00:13:17.500800547+00:00 stderr F W1213 00:13:17.500755 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2025-12-13T00:13:17.500876240+00:00 stderr F E1213 00:13:17.500864 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2025-12-13T00:13:17.501129608+00:00 stderr F I1213 00:13:17.501097 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cloud-platform-infra" name="default-dockercfg-7xbdb" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:45 +0000 UTC" refreshTime="2025-06-26 10:07:43.99957281 +0000 UTC" 2025-12-13T00:13:17.501129608+00:00 stderr F I1213 00:13:17.501121 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-platform-infra" name="default-dockercfg-7xbdb" serviceaccount="default" 2025-12-13T00:13:17.501250782+00:00 stderr F I1213 00:13:17.501230 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" 
ns="openshift-cloud-platform-infra" name="deployer-dockercfg-d4ldp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-06-26 10:07:45.399520984 +0000 UTC" 2025-12-13T00:13:17.501286764+00:00 stderr F I1213 00:13:17.501276 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cloud-platform-infra" name="deployer-dockercfg-d4ldp" serviceaccount="deployer" 2025-12-13T00:13:17.513640739+00:00 stderr F I1213 00:13:17.513596 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-machine-approver" name="builder-dockercfg-dkg74" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-06-26 10:07:45.394571917 +0000 UTC" 2025-12-13T00:13:17.513712642+00:00 stderr F I1213 00:13:17.513701 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-machine-approver" name="builder-dockercfg-dkg74" serviceaccount="builder" 2025-12-13T00:13:17.515775910+00:00 stderr F I1213 00:13:17.515742 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-machine-approver" name="default-dockercfg-89xjf" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-06-26 10:07:45.393714369 +0000 UTC" 2025-12-13T00:13:17.515840493+00:00 stderr F I1213 00:13:17.515827 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-machine-approver" name="default-dockercfg-89xjf" serviceaccount="default" 2025-12-13T00:13:17.533296770+00:00 stderr F I1213 00:13:17.530349 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-machine-approver" name="deployer-dockercfg-vb2qm" url="10.217.4.41:5000" expirtyTime="2025-08-13 
21:00:46 +0000 UTC" refreshTime="2025-06-26 10:07:45.387872216 +0000 UTC" 2025-12-13T00:13:17.533296770+00:00 stderr F I1213 00:13:17.530378 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-machine-approver" name="deployer-dockercfg-vb2qm" serviceaccount="deployer" 2025-12-13T00:13:17.535580387+00:00 stderr F I1213 00:13:17.535554 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-machine-approver" name="machine-approver-sa-dockercfg-6nbmk" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:00:46 +0000 UTC" refreshTime="2025-06-26 10:07:45.385787026 +0000 UTC" 2025-12-13T00:13:17.535595797+00:00 stderr F I1213 00:13:17.535584 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-machine-approver" name="machine-approver-sa-dockercfg-6nbmk" serviceaccount="machine-approver-sa" 2025-12-13T00:13:17.549010897+00:00 stderr F I1213 00:13:17.548959 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-samples-operator" name="builder-dockercfg-bgnkz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:47 +0000 UTC" refreshTime="2025-06-26 10:07:46.78043651 +0000 UTC" 2025-12-13T00:13:17.549010897+00:00 stderr F I1213 00:13:17.548983 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-samples-operator" name="builder-dockercfg-bgnkz" serviceaccount="builder" 2025-12-13T00:13:17.553870501+00:00 stderr F I1213 00:13:17.553807 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-samples-operator" name="cluster-samples-operator-dockercfg-q289q" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:47 +0000 UTC" 
refreshTime="2025-06-26 10:07:46.778484201 +0000 UTC" 2025-12-13T00:13:17.553870501+00:00 stderr F I1213 00:13:17.553827 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-samples-operator" name="cluster-samples-operator-dockercfg-q289q" serviceaccount="cluster-samples-operator" 2025-12-13T00:13:17.565358227+00:00 stderr F I1213 00:13:17.565305 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-samples-operator" name="default-dockercfg-78cjw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:47 +0000 UTC" refreshTime="2025-06-26 10:07:46.77388987 +0000 UTC" 2025-12-13T00:13:17.565358227+00:00 stderr F I1213 00:13:17.565333 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-samples-operator" name="default-dockercfg-78cjw" serviceaccount="default" 2025-12-13T00:13:17.569456995+00:00 stderr F I1213 00:13:17.569421 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-samples-operator" name="deployer-dockercfg-hx9zf" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:47 +0000 UTC" refreshTime="2025-06-26 10:07:46.772239747 +0000 UTC" 2025-12-13T00:13:17.569456995+00:00 stderr F I1213 00:13:17.569444 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-samples-operator" name="deployer-dockercfg-hx9zf" serviceaccount="deployer" 2025-12-13T00:13:17.601383967+00:00 stderr F I1213 00:13:17.601140 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-storage-operator" name="builder-dockercfg-l8dbc" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:00:48 +0000 UTC" refreshTime="2025-06-26 
10:07:48.159556977 +0000 UTC" 2025-12-13T00:13:17.601383967+00:00 stderr F I1213 00:13:17.601353 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-storage-operator" name="builder-dockercfg-l8dbc" serviceaccount="builder" 2025-12-13T00:13:17.603842790+00:00 stderr F I1213 00:13:17.602055 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-storage-operator" name="default-dockercfg-l44fb" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:48 +0000 UTC" refreshTime="2025-06-26 10:07:48.159183941 +0000 UTC" 2025-12-13T00:13:17.603842790+00:00 stderr F I1213 00:13:17.602074 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-storage-operator" name="default-dockercfg-l44fb" serviceaccount="default" 2025-12-13T00:13:17.603842790+00:00 stderr F I1213 00:13:17.602372 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-storage-operator" name="deployer-dockercfg-g97l8" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:00:49 +0000 UTC" refreshTime="2025-06-26 10:07:49.559056488 +0000 UTC" 2025-12-13T00:13:17.603842790+00:00 stderr F I1213 00:13:17.602383 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-storage-operator" name="deployer-dockercfg-g97l8" serviceaccount="deployer" 2025-12-13T00:13:17.603842790+00:00 stderr F I1213 00:13:17.602648 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-version" name="builder-dockercfg-l4k9s" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:00:48 +0000 UTC" refreshTime="2025-06-26 10:07:48.158946476 +0000 UTC" 
2025-12-13T00:13:17.603842790+00:00 stderr F I1213 00:13:17.602658 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-version" name="builder-dockercfg-l4k9s" serviceaccount="builder" 2025-12-13T00:13:17.610322358+00:00 stderr F I1213 00:13:17.610280 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-version" name="default-dockercfg-5wpfz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:49 +0000 UTC" refreshTime="2025-06-26 10:07:49.55589823 +0000 UTC" 2025-12-13T00:13:17.610322358+00:00 stderr F I1213 00:13:17.610312 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-version" name="default-dockercfg-5wpfz" serviceaccount="default" 2025-12-13T00:13:17.643779422+00:00 stderr F I1213 00:13:17.642447 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-cluster-version" name="deployer-dockercfg-r7kd4" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:50 +0000 UTC" refreshTime="2025-06-26 10:07:50.943040914 +0000 UTC" 2025-12-13T00:13:17.643779422+00:00 stderr F I1213 00:13:17.642745 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-cluster-version" name="deployer-dockercfg-r7kd4" serviceaccount="deployer" 2025-12-13T00:13:17.646392650+00:00 stderr F I1213 00:13:17.645380 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-managed" name="builder-dockercfg-nndcv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:50 +0000 UTC" refreshTime="2025-06-26 10:07:50.94185929 +0000 UTC" 2025-12-13T00:13:17.646392650+00:00 stderr F I1213 00:13:17.645402 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-managed" 
name="builder-dockercfg-nndcv" serviceaccount="builder" 2025-12-13T00:13:17.651113239+00:00 stderr F I1213 00:13:17.651064 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-managed" name="default-dockercfg-5zsff" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:00:50 +0000 UTC" refreshTime="2025-06-26 10:07:50.939585226 +0000 UTC" 2025-12-13T00:13:17.651113239+00:00 stderr F I1213 00:13:17.651086 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-managed" name="default-dockercfg-5zsff" serviceaccount="default" 2025-12-13T00:13:17.651365957+00:00 stderr F I1213 00:13:17.651345 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-managed" name="deployer-dockercfg-v47lz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:56 +0000 UTC" refreshTime="2025-06-26 10:07:59.339474738 +0000 UTC" 2025-12-13T00:13:17.651400518+00:00 stderr F I1213 00:13:17.651390 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-managed" name="deployer-dockercfg-v47lz" serviceaccount="deployer" 2025-12-13T00:13:17.651455210+00:00 stderr F I1213 00:13:17.651424 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-operator" name="builder-dockercfg-lbblj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:00:50 +0000 UTC" refreshTime="2025-06-26 10:07:50.939443321 +0000 UTC" 2025-12-13T00:13:17.651463550+00:00 stderr F I1213 00:13:17.651453 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-operator" name="builder-dockercfg-lbblj" serviceaccount="builder" 2025-12-13T00:13:17.668345328+00:00 stderr F I1213 00:13:17.668282 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-operator" name="default-dockercfg-rltwn" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-06-26 10:08:11.932697858 +0000 UTC" 2025-12-13T00:13:17.668345328+00:00 stderr F I1213 00:13:17.668335 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-operator" name="default-dockercfg-rltwn" serviceaccount="default" 2025-12-13T00:13:17.676354927+00:00 stderr F I1213 00:13:17.676310 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-operator" name="deployer-dockercfg-8tp68" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-06-26 10:08:11.92949007 +0000 UTC" 2025-12-13T00:13:17.676354927+00:00 stderr F I1213 00:13:17.676337 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-operator" name="deployer-dockercfg-8tp68" serviceaccount="deployer" 2025-12-13T00:13:17.719098453+00:00 stderr F I1213 00:13:17.719013 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config-operator" name="openshift-config-operator-dockercfg-6jthd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-06-26 10:08:11.912406672 +0000 UTC" 2025-12-13T00:13:17.719098453+00:00 stderr F I1213 00:13:17.719049 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config-operator" name="openshift-config-operator-dockercfg-6jthd" serviceaccount="openshift-config-operator" 2025-12-13T00:13:17.726163741+00:00 stderr F I1213 00:13:17.726112 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs 
to be refreshed" ns="openshift-config" name="builder-dockercfg-c75dg" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-06-26 10:08:11.90957889 +0000 UTC" 2025-12-13T00:13:17.726203652+00:00 stderr F I1213 00:13:17.726181 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config" name="builder-dockercfg-c75dg" serviceaccount="builder" 2025-12-13T00:13:17.747673393+00:00 stderr F I1213 00:13:17.747611 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config" name="default-dockercfg-hbnsp" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:05 +0000 UTC" refreshTime="2025-06-26 10:08:11.900973728 +0000 UTC" 2025-12-13T00:13:17.747673393+00:00 stderr F I1213 00:13:17.747650 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config" name="default-dockercfg-hbnsp" serviceaccount="default" 2025-12-13T00:13:17.781102726+00:00 stderr F I1213 00:13:17.780611 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-config" name="deployer-dockercfg-q8mb8" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-26 10:08:14.687766261 +0000 UTC" 2025-12-13T00:13:17.781102726+00:00 stderr F I1213 00:13:17.780635 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-config" name="deployer-dockercfg-q8mb8" serviceaccount="deployer" 2025-12-13T00:13:17.801761861+00:00 stderr F W1213 00:13:17.801705 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-12-13T00:13:17.801761861+00:00 stderr F W1213 00:13:17.801722 1 reflector.go:539] 
k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2025-12-13T00:13:17.801761861+00:00 stderr F E1213 00:13:17.801735 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-12-13T00:13:17.801761861+00:00 stderr F E1213 00:13:17.801746 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Image: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2025-12-13T00:13:17.812052346+00:00 stderr F I1213 00:13:17.811965 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-operator" name="builder-dockercfg-5h26t" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-26 10:08:14.675244515 +0000 UTC" 2025-12-13T00:13:17.812052346+00:00 stderr F I1213 00:13:17.812024 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-operator" name="builder-dockercfg-5h26t" serviceaccount="builder" 2025-12-13T00:13:17.867360315+00:00 stderr F I1213 00:13:17.867305 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-operator" name="console-operator-dockercfg-lwp4z" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-26 10:08:14.653092531 +0000 UTC" 2025-12-13T00:13:17.867389026+00:00 stderr F I1213 00:13:17.867360 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-operator" 
name="console-operator-dockercfg-lwp4z" serviceaccount="console-operator" 2025-12-13T00:13:17.873348276+00:00 stderr F I1213 00:13:17.873144 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-operator" name="default-dockercfg-vgw7h" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-26 10:08:14.650756313 +0000 UTC" 2025-12-13T00:13:17.873348276+00:00 stderr F I1213 00:13:17.873174 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-operator" name="default-dockercfg-vgw7h" serviceaccount="default" 2025-12-13T00:13:17.885982821+00:00 stderr F I1213 00:13:17.885918 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-operator" name="deployer-dockercfg-cgf7g" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-26 10:08:14.645645773 +0000 UTC" 2025-12-13T00:13:17.886042623+00:00 stderr F I1213 00:13:17.886031 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-operator" name="deployer-dockercfg-cgf7g" serviceaccount="deployer" 2025-12-13T00:13:17.907986471+00:00 stderr F I1213 00:13:17.907924 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-user-settings" name="builder-dockercfg-s9kk5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-26 10:08:16.03684257 +0000 UTC" 2025-12-13T00:13:17.908046693+00:00 stderr F I1213 00:13:17.908036 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-user-settings" name="builder-dockercfg-s9kk5" serviceaccount="builder" 2025-12-13T00:13:17.956092817+00:00 
stderr F I1213 00:13:17.955470       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-user-settings" name="default-dockercfg-mkcsd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-26 10:08:14.61782625 +0000 UTC"
2025-12-13T00:13:17.956092817+00:00 stderr F I1213 00:13:17.955495       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-user-settings" name="default-dockercfg-mkcsd" serviceaccount="default"
2025-12-13T00:13:17.994258309+00:00 stderr F I1213 00:13:17.994193       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console-user-settings" name="deployer-dockercfg-s5pld" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-26 10:08:14.602335174 +0000 UTC"
2025-12-13T00:13:17.994258309+00:00 stderr F I1213 00:13:17.994220       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console-user-settings" name="deployer-dockercfg-s5pld" serviceaccount="deployer"
2025-12-13T00:13:18.002905579+00:00 stderr F I1213 00:13:18.002799       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console" name="builder-dockercfg-nmnq6" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:07 +0000 UTC" refreshTime="2025-06-26 10:08:14.598892922 +0000 UTC"
2025-12-13T00:13:18.002905579+00:00 stderr F I1213 00:13:18.002825       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console" name="builder-dockercfg-nmnq6" serviceaccount="builder"
2025-12-13T00:13:18.011745257+00:00 stderr F I1213 00:13:18.011657       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console" name="console-dockercfg-ng44q" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-26 10:08:15.995349575 +0000 UTC"
2025-12-13T00:13:18.011745257+00:00 stderr F I1213 00:13:18.011680       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console" name="console-dockercfg-ng44q" serviceaccount="console"
2025-12-13T00:13:18.053020423+00:00 stderr F I1213 00:13:18.050323       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console" name="default-dockercfg-bv4gd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-26 10:08:15.979882691 +0000 UTC"
2025-12-13T00:13:18.053020423+00:00 stderr F I1213 00:13:18.050349       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console" name="default-dockercfg-bv4gd" serviceaccount="default"
2025-12-13T00:13:18.091827577+00:00 stderr F I1213 00:13:18.091760       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-console" name="deployer-dockercfg-mpsf7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-26 10:08:15.963315518 +0000 UTC"
2025-12-13T00:13:18.091827577+00:00 stderr F I1213 00:13:18.091792       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-console" name="deployer-dockercfg-mpsf7" serviceaccount="deployer"
2025-12-13T00:13:18.140579876+00:00 stderr F I1213 00:13:18.140519       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager-operator" name="builder-dockercfg-rn5hk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-26 10:08:15.943804955 +0000 UTC"
2025-12-13T00:13:18.140671479+00:00 stderr F I1213 00:13:18.140655       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager-operator" name="builder-dockercfg-rn5hk" serviceaccount="builder"
2025-12-13T00:13:18.167453539+00:00 stderr F I1213 00:13:18.167309       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager-operator" name="default-dockercfg-hmzqd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-26 10:08:15.933088507 +0000 UTC"
2025-12-13T00:13:18.167453539+00:00 stderr F I1213 00:13:18.167336       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager-operator" name="default-dockercfg-hmzqd" serviceaccount="default"
2025-12-13T00:13:18.189425737+00:00 stderr F I1213 00:13:18.189362       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager-operator" name="deployer-dockercfg-8mhlz" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-26 10:08:15.924271542 +0000 UTC"
2025-12-13T00:13:18.189425737+00:00 stderr F I1213 00:13:18.189396       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager-operator" name="deployer-dockercfg-8mhlz" serviceaccount="deployer"
2025-12-13T00:13:18.189631854+00:00 stderr F I1213 00:13:18.189609       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager-operator" name="openshift-controller-manager-operator-dockercfg-zx7mb" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-26 10:08:15.924169963 +0000 UTC"
2025-12-13T00:13:18.189677255+00:00 stderr F I1213 00:13:18.189664       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager-operator" name="openshift-controller-manager-operator-dockercfg-zx7mb" serviceaccount="openshift-controller-manager-operator"
2025-12-13T00:13:18.248115029+00:00 stderr F I1213 00:13:18.248065       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager" name="builder-dockercfg-gmnbf" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-26 10:08:15.900788606 +0000 UTC"
2025-12-13T00:13:18.248212512+00:00 stderr F I1213 00:13:18.248197       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager" name="builder-dockercfg-gmnbf" serviceaccount="builder"
2025-12-13T00:13:18.262734640+00:00 stderr F I1213 00:13:18.262493       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager" name="default-dockercfg-vdmzk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:08 +0000 UTC" refreshTime="2025-06-26 10:08:15.89501424 +0000 UTC"
2025-12-13T00:13:18.262734640+00:00 stderr F I1213 00:13:18.262521       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager" name="default-dockercfg-vdmzk" serviceaccount="default"
2025-12-13T00:13:18.297943884+00:00 stderr F I1213 00:13:18.297872       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager" name="deployer-dockercfg-q4jdx" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-06-26 10:08:17.28086412 +0000 UTC"
2025-12-13T00:13:18.297943884+00:00 stderr F I1213 00:13:18.297905       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager" name="deployer-dockercfg-q4jdx" serviceaccount="deployer"
2025-12-13T00:13:18.310012249+00:00 stderr F I1213 00:13:18.309212       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-controller-manager" name="openshift-controller-manager-sa-dockercfg-58g82" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-06-26 10:08:17.276332616 +0000 UTC"
2025-12-13T00:13:18.310012249+00:00 stderr F I1213 00:13:18.309244       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-controller-manager" name="openshift-controller-manager-sa-dockercfg-58g82" serviceaccount="openshift-controller-manager-sa"
2025-12-13T00:13:18.324061511+00:00 stderr F I1213 00:13:18.318554       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns-operator" name="builder-dockercfg-pnlmc" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-06-26 10:08:17.272590606 +0000 UTC"
2025-12-13T00:13:18.324061511+00:00 stderr F I1213 00:13:18.318581       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns-operator" name="builder-dockercfg-pnlmc" serviceaccount="builder"
2025-12-13T00:13:18.383966824+00:00 stderr F I1213 00:13:18.380751       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns-operator" name="default-dockercfg-zzdtv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-06-26 10:08:17.24771367 +0000 UTC"
2025-12-13T00:13:18.383966824+00:00 stderr F I1213 00:13:18.380802       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns-operator" name="default-dockercfg-zzdtv" serviceaccount="default"
2025-12-13T00:13:18.403510080+00:00 stderr F I1213 00:13:18.403173       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns-operator" name="deployer-dockercfg-ft65g" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:09 +0000 UTC" refreshTime="2025-06-26 10:08:17.238743577 +0000 UTC"
2025-12-13T00:13:18.403510080+00:00 stderr F I1213 00:13:18.403197       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns-operator" name="deployer-dockercfg-ft65g" serviceaccount="deployer"
2025-12-13T00:13:18.430590891+00:00 stderr F I1213 00:13:18.430541       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns-operator" name="dns-operator-dockercfg-wgzbx" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:01:10 +0000 UTC" refreshTime="2025-06-26 10:08:18.627794965 +0000 UTC"
2025-12-13T00:13:18.430590891+00:00 stderr F I1213 00:13:18.430567       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns-operator" name="dns-operator-dockercfg-wgzbx" serviceaccount="dns-operator"
2025-12-13T00:13:18.459382508+00:00 stderr F I1213 00:13:18.459318       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="builder-dockercfg-hlsv2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:10 +0000 UTC" refreshTime="2025-06-26 10:08:18.616287638 +0000 UTC"
2025-12-13T00:13:18.459382508+00:00 stderr F I1213 00:13:18.459351       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="builder-dockercfg-hlsv2" serviceaccount="builder"
2025-12-13T00:13:18.463826608+00:00 stderr F I1213 00:13:18.463294       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="default-dockercfg-4pr8h" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:10 +0000 UTC" refreshTime="2025-06-26 10:08:18.614700413 +0000 UTC"
2025-12-13T00:13:18.463826608+00:00 stderr F I1213 00:13:18.463316       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="default-dockercfg-4pr8h" serviceaccount="default"
2025-12-13T00:13:18.511396836+00:00 stderr F I1213 00:13:18.511117       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="deployer-dockercfg-45hhc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:10 +0000 UTC" refreshTime="2025-06-26 10:08:18.59556543 +0000 UTC"
2025-12-13T00:13:18.511396836+00:00 stderr F I1213 00:13:18.511143       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="deployer-dockercfg-45hhc" serviceaccount="deployer"
2025-12-13T00:13:18.542302995+00:00 stderr F I1213 00:13:18.542251       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="dns-dockercfg-dff28" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:01:18 +0000 UTC" refreshTime="2025-06-26 10:08:29.783112291 +0000 UTC"
2025-12-13T00:13:18.542302995+00:00 stderr F I1213 00:13:18.542277       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="dns-dockercfg-dff28" serviceaccount="dns"
2025-12-13T00:13:18.565716781+00:00 stderr F I1213 00:13:18.565663       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-dns" name="node-resolver-dockercfg-5kr6x" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:18 +0000 UTC" refreshTime="2025-06-26 10:08:29.773746501 +0000 UTC"
2025-12-13T00:13:18.565716781+00:00 stderr F I1213 00:13:18.565691       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-dns" name="node-resolver-dockercfg-5kr6x" serviceaccount="node-resolver"
2025-12-13T00:13:18.587197873+00:00 stderr F I1213 00:13:18.587083       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd-operator" name="builder-dockercfg-sf67n" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:20 +0000 UTC" refreshTime="2025-06-26 10:08:32.565182369 +0000 UTC"
2025-12-13T00:13:18.587197873+00:00 stderr F I1213 00:13:18.587107       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd-operator" name="builder-dockercfg-sf67n" serviceaccount="builder"
2025-12-13T00:13:18.602385613+00:00 stderr F I1213 00:13:18.602277       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd-operator" name="default-dockercfg-xdg4w" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:20 +0000 UTC" refreshTime="2025-06-26 10:08:32.559102553 +0000 UTC"
2025-12-13T00:13:18.602385613+00:00 stderr F I1213 00:13:18.602303       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd-operator" name="default-dockercfg-xdg4w" serviceaccount="default"
2025-12-13T00:13:18.648053098+00:00 stderr F I1213 00:13:18.647733       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd-operator" name="deployer-dockercfg-zmpgs" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:20 +0000 UTC" refreshTime="2025-06-26 10:08:32.540921346 +0000 UTC"
2025-12-13T00:13:18.648053098+00:00 stderr F I1213 00:13:18.647791       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd-operator" name="deployer-dockercfg-zmpgs" serviceaccount="deployer"
2025-12-13T00:13:18.688478156+00:00 stderr F I1213 00:13:18.688421       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd-operator" name="etcd-operator-dockercfg-hwzhz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-26 10:08:33.924644862 +0000 UTC"
2025-12-13T00:13:18.688478156+00:00 stderr F I1213 00:13:18.688451       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd-operator" name="etcd-operator-dockercfg-hwzhz" serviceaccount="etcd-operator"
2025-12-13T00:13:18.702357693+00:00 stderr F I1213 00:13:18.702304       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="builder-dockercfg-sqwsk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-26 10:08:33.9190918 +0000 UTC"
2025-12-13T00:13:18.702386884+00:00 stderr F I1213 00:13:18.702354       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="builder-dockercfg-sqwsk" serviceaccount="builder"
2025-12-13T00:13:18.722462688+00:00 stderr F I1213 00:13:18.722402       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="default-dockercfg-vd62w" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:01:20 +0000 UTC" refreshTime="2025-06-26 10:08:32.511058154 +0000 UTC"
2025-12-13T00:13:18.722462688+00:00 stderr F I1213 00:13:18.722427       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="default-dockercfg-vd62w" serviceaccount="default"
2025-12-13T00:13:18.746357301+00:00 stderr F I1213 00:13:18.744568       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="deployer-dockercfg-p6hbm" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-26 10:08:33.90218284 +0000 UTC"
2025-12-13T00:13:18.746357301+00:00 stderr F I1213 00:13:18.744594       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="deployer-dockercfg-p6hbm" serviceaccount="deployer"
2025-12-13T00:13:18.784799542+00:00 stderr F I1213 00:13:18.784750       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="etcd-backup-sa-dockercfg-rd8b5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-26 10:08:33.886111396 +0000 UTC"
2025-12-13T00:13:18.784799542+00:00 stderr F I1213 00:13:18.784778       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="etcd-backup-sa-dockercfg-rd8b5" serviceaccount="etcd-backup-sa"
2025-12-13T00:13:18.829677681+00:00 stderr F I1213 00:13:18.829631       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="etcd-sa-dockercfg-cgskw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-26 10:08:33.868158293 +0000 UTC"
2025-12-13T00:13:18.829677681+00:00 stderr F I1213 00:13:18.829654       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="etcd-sa-dockercfg-cgskw" serviceaccount="etcd-sa"
2025-12-13T00:13:18.834555615+00:00 stderr F I1213 00:13:18.834514       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-etcd" name="installer-sa-dockercfg-gxvhz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-26 10:08:33.866209239 +0000 UTC"
2025-12-13T00:13:18.834617017+00:00 stderr F I1213 00:13:18.834605       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-etcd" name="installer-sa-dockercfg-gxvhz" serviceaccount="installer-sa"
2025-12-13T00:13:18.854898059+00:00 stderr F I1213 00:13:18.854852       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-host-network" name="builder-dockercfg-h5pg5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-26 10:08:33.858079994 +0000 UTC"
2025-12-13T00:13:18.854898059+00:00 stderr F I1213 00:13:18.854882       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-host-network" name="builder-dockercfg-h5pg5" serviceaccount="builder"
2025-12-13T00:13:18.888901941+00:00 stderr F I1213 00:13:18.888818       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-host-network" name="default-dockercfg-swwqf" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:01:21 +0000 UTC" refreshTime="2025-06-26 10:08:33.844484089 +0000 UTC"
2025-12-13T00:13:18.888901941+00:00 stderr F I1213 00:13:18.888852       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-host-network" name="default-dockercfg-swwqf" serviceaccount="default"
2025-12-13T00:13:18.921163105+00:00 stderr F I1213 00:13:18.921101       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-host-network" name="deployer-dockercfg-ddh74" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-06-26 10:14:19.631576907 +0000 UTC"
2025-12-13T00:13:18.921270968+00:00 stderr F I1213 00:13:18.921249       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-host-network" name="deployer-dockercfg-ddh74" serviceaccount="deployer"
2025-12-13T00:13:18.961515781+00:00 stderr F I1213 00:13:18.961458       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="builder-dockercfg-2jkwc" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-06-26 10:14:19.615429705 +0000 UTC"
2025-12-13T00:13:18.961515781+00:00 stderr F I1213 00:13:18.961485       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="builder-dockercfg-2jkwc" serviceaccount="builder"
2025-12-13T00:13:18.978214233+00:00 stderr F I1213 00:13:18.978165       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="cluster-image-registry-operator-dockercfg-ddjzq" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-06-26 10:14:19.60874874 +0000 UTC"
2025-12-13T00:13:18.978295085+00:00 stderr F I1213 00:13:18.978280       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="cluster-image-registry-operator-dockercfg-ddjzq" serviceaccount="cluster-image-registry-operator"
2025-12-13T00:13:18.990833016+00:00 stderr F I1213 00:13:18.990364       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="default-dockercfg-w58sb" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-06-26 10:14:19.603866536 +0000 UTC"
2025-12-13T00:13:18.990833016+00:00 stderr F I1213 00:13:18.990390       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="default-dockercfg-w58sb" serviceaccount="default"
2025-12-13T00:13:19.022433798+00:00 stderr F I1213 00:13:19.022387       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="deployer-dockercfg-5sk9l" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:28 +0000 UTC" refreshTime="2025-06-26 10:14:19.591059298 +0000 UTC"
2025-12-13T00:13:19.022506240+00:00 stderr F I1213 00:13:19.022490       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="deployer-dockercfg-5sk9l" serviceaccount="deployer"
2025-12-13T00:13:19.028419329+00:00 stderr F W1213 00:13:19.028370       1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)
2025-12-13T00:13:19.028419329+00:00 stderr F E1213 00:13:19.028400       1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)
2025-12-13T00:13:19.048303358+00:00 stderr F I1213 00:13:19.048216       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="node-ca-dockercfg-mcgx9" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-26 10:14:20.980736763 +0000 UTC"
2025-12-13T00:13:19.048303358+00:00 stderr F I1213 00:13:19.048247       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="node-ca-dockercfg-mcgx9" serviceaccount="node-ca"
2025-12-13T00:13:19.183977446+00:00 stderr F I1213 00:13:19.180883       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="pruner-dockercfg-nzhll" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-26 10:14:20.927659568 +0000 UTC"
2025-12-13T00:13:19.183977446+00:00 stderr F I1213 00:13:19.180911       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="pruner-dockercfg-nzhll" serviceaccount="pruner"
2025-12-13T00:13:19.183977446+00:00 stderr F I1213 00:13:19.181284       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-image-registry" name="registry-dockercfg-q786x" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-26 10:14:20.927492334 +0000 UTC"
2025-12-13T00:13:19.183977446+00:00 stderr F I1213 00:13:19.181296       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-image-registry" name="registry-dockercfg-q786x" serviceaccount="registry"
2025-12-13T00:13:19.183977446+00:00 stderr F I1213 00:13:19.181867       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="build-config-change-controller-dockercfg-x9cbn" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-26 10:14:20.92725931 +0000 UTC"
2025-12-13T00:13:19.183977446+00:00 stderr F I1213 00:13:19.181879       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="build-config-change-controller-dockercfg-x9cbn" serviceaccount="build-config-change-controller"
2025-12-13T00:13:19.233601644+00:00 stderr F I1213 00:13:19.233547       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="build-controller-dockercfg-6s44z" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-26 10:14:20.906592792 +0000 UTC"
2025-12-13T00:13:19.233601644+00:00 stderr F I1213 00:13:19.233571       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="build-controller-dockercfg-6s44z" serviceaccount="build-controller"
2025-12-13T00:13:19.244334535+00:00 stderr F I1213 00:13:19.244273       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="builder-dockercfg-ztkx9" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-26 10:14:20.90230684 +0000 UTC"
2025-12-13T00:13:19.244334535+00:00 stderr F I1213 00:13:19.244307       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="builder-dockercfg-ztkx9" serviceaccount="builder"
2025-12-13T00:13:19.256390920+00:00 stderr F I1213 00:13:19.256340       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="cluster-csr-approver-controller-dockercfg-4n58l" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-26 10:14:20.897478501 +0000 UTC"
2025-12-13T00:13:19.256390920+00:00 stderr F I1213 00:13:19.256371       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="cluster-csr-approver-controller-dockercfg-4n58l" serviceaccount="cluster-csr-approver-controller"
2025-12-13T00:13:19.260892220+00:00 stderr F I1213 00:13:19.260850       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="cluster-quota-reconciliation-controller-dockercfg-6clv4" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:29 +0000 UTC" refreshTime="2025-06-26 10:14:20.895671741 +0000 UTC"
2025-12-13T00:13:19.260892220+00:00 stderr F I1213 00:13:19.260878       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="cluster-quota-reconciliation-controller-dockercfg-6clv4" serviceaccount="cluster-quota-reconciliation-controller"
2025-12-13T00:13:19.278440041+00:00 stderr F I1213 00:13:19.278392       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="default-dockercfg-qcclx" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.288655265 +0000 UTC"
2025-12-13T00:13:19.278440041+00:00 stderr F I1213 00:13:19.278419       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="default-dockercfg-qcclx" serviceaccount="default"
2025-12-13T00:13:19.375577674+00:00 stderr F I1213 00:13:19.375338       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="default-rolebindings-controller-dockercfg-mjvl7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.249877018 +0000 UTC"
2025-12-13T00:13:19.375577674+00:00 stderr F I1213 00:13:19.375551       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="default-rolebindings-controller-dockercfg-mjvl7" serviceaccount="default-rolebindings-controller"
2025-12-13T00:13:19.382063623+00:00 stderr F I1213 00:13:19.382015       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="deployer-controller-dockercfg-nps8b" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.247210556 +0000 UTC"
2025-12-13T00:13:19.382063623+00:00 stderr F I1213 00:13:19.382048       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="deployer-controller-dockercfg-nps8b" serviceaccount="deployer-controller"
2025-12-13T00:13:19.388261211+00:00 stderr F I1213 00:13:19.388093       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="deployer-dockercfg-fjtnq" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.244772326 +0000 UTC"
2025-12-13T00:13:19.388261211+00:00 stderr F I1213 00:13:19.388116       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="deployer-dockercfg-fjtnq" serviceaccount="deployer"
2025-12-13T00:13:19.401356281+00:00 stderr F I1213 00:13:19.401303       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="deploymentconfig-controller-dockercfg-7sjgp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.239491449 +0000 UTC"
2025-12-13T00:13:19.401356281+00:00 stderr F I1213 00:13:19.401330       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="deploymentconfig-controller-dockercfg-7sjgp" serviceaccount="deploymentconfig-controller"
2025-12-13T00:13:19.421502508+00:00 stderr F I1213 00:13:19.421402       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="image-import-controller-dockercfg-wtcck" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.231453878 +0000 UTC"
2025-12-13T00:13:19.421502508+00:00 stderr F I1213 00:13:19.421436       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="image-import-controller-dockercfg-wtcck" serviceaccount="image-import-controller"
2025-12-13T00:13:19.507217957+00:00 stderr F I1213 00:13:19.507169       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="image-trigger-controller-dockercfg-75z9g" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.197144354 +0000 UTC"
2025-12-13T00:13:19.507217957+00:00 stderr F I1213 00:13:19.507195       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="image-trigger-controller-dockercfg-75z9g" serviceaccount="image-trigger-controller"
2025-12-13T00:13:19.514426800+00:00 stderr F I1213 00:13:19.514380       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="ingress-to-route-controller-dockercfg-486s5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.194259189 +0000 UTC"
2025-12-13T00:13:19.514426800+00:00 stderr F I1213 00:13:19.514407       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="ingress-to-route-controller-dockercfg-486s5" serviceaccount="ingress-to-route-controller"
2025-12-13T00:13:19.527578662+00:00 stderr F I1213 00:13:19.527533       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="namespace-security-allocation-controller-dockercfg-d9nzv" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.188998448 +0000 UTC"
2025-12-13T00:13:19.527578662+00:00 stderr F I1213 00:13:19.527558       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="namespace-security-allocation-controller-dockercfg-d9nzv" serviceaccount="namespace-security-allocation-controller"
2025-12-13T00:13:19.534547866+00:00 stderr F I1213 00:13:19.534510       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="node-bootstrapper-dockercfg-mj85j" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.18620659 +0000 UTC"
2025-12-13T00:13:19.534601458+00:00 stderr F I1213 00:13:19.534590       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="node-bootstrapper-dockercfg-mj85j" serviceaccount="node-bootstrapper"
2025-12-13T00:13:19.548875537+00:00 stderr F W1213 00:13:19.548603       1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)
2025-12-13T00:13:19.548875537+00:00 stderr F E1213 00:13:19.548630       1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)
2025-12-13T00:13:19.561389358+00:00 stderr F I1213 00:13:19.561323       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="origin-namespace-controller-dockercfg-5s4zt" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.175485402 +0000 UTC"
2025-12-13T00:13:19.561389358+00:00 stderr F I1213 00:13:19.561357       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="origin-namespace-controller-dockercfg-5s4zt" serviceaccount="origin-namespace-controller"
2025-12-13T00:13:19.640506307+00:00 stderr F I1213 00:13:19.640448       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="podsecurity-admission-label-syncer-controller-dockercfg-b5pxh" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.143835802 +0000 UTC"
2025-12-13T00:13:19.640506307+00:00 stderr F I1213 00:13:19.640480       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="podsecurity-admission-label-syncer-controller-dockercfg-b5pxh" serviceaccount="podsecurity-admission-label-syncer-controller"
2025-12-13T00:13:19.653267886+00:00 stderr F I1213 00:13:19.653223       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="privileged-namespaces-psa-label-syncer-dockercfg-lm8jh" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.138725943 +0000 UTC"
2025-12-13T00:13:19.653267886+00:00 stderr F I1213 00:13:19.653254       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="privileged-namespaces-psa-label-syncer-dockercfg-lm8jh" serviceaccount="privileged-namespaces-psa-label-syncer"
2025-12-13T00:13:19.659743363+00:00 stderr F I1213 00:13:19.659703       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="pv-recycler-controller-dockercfg-d76pz" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.136142636 +0000 UTC"
2025-12-13T00:13:19.659757284+00:00 stderr F I1213 00:13:19.659740       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="pv-recycler-controller-dockercfg-d76pz" serviceaccount="pv-recycler-controller"
2025-12-13T00:13:19.683844682+00:00 stderr F I1213 00:13:19.683788       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="resourcequota-controller-dockercfg-mlv87" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.126499541 +0000 UTC"
2025-12-13T00:13:19.683844682+00:00 stderr F I1213 00:13:19.683817       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="resourcequota-controller-dockercfg-mlv87" serviceaccount="resourcequota-controller"
2025-12-13T00:13:19.703221544+00:00 stderr F I1213 00:13:19.703165       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="serviceaccount-controller-dockercfg-l8hfk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.118747552 +0000 UTC"
2025-12-13T00:13:19.703221544+00:00 stderr F I1213 00:13:19.703195       1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="serviceaccount-controller-dockercfg-l8hfk" serviceaccount="serviceaccount-controller"
2025-12-13T00:13:19.780678297+00:00 stderr F I1213 00:13:19.780604       1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be
refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="serviceaccount-pull-secrets-controller-dockercfg-hshqh" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.087771683 +0000 UTC" 2025-12-13T00:13:19.780678297+00:00 stderr F I1213 00:13:19.780632 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="serviceaccount-pull-secrets-controller-dockercfg-hshqh" serviceaccount="serviceaccount-pull-secrets-controller" 2025-12-13T00:13:19.787256877+00:00 stderr F I1213 00:13:19.787211 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="template-instance-controller-dockercfg-f72bl" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.485129618 +0000 UTC" 2025-12-13T00:13:19.787256877+00:00 stderr F I1213 00:13:19.787238 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="template-instance-controller-dockercfg-f72bl" serviceaccount="template-instance-controller" 2025-12-13T00:13:19.810213990+00:00 stderr F I1213 00:13:19.809698 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="template-instance-finalizer-controller-dockercfg-xwvr9" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:30 +0000 UTC" refreshTime="2025-06-26 10:14:22.076132491 +0000 UTC" 2025-12-13T00:13:19.810213990+00:00 stderr F I1213 00:13:19.809978 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="template-instance-finalizer-controller-dockercfg-xwvr9" serviceaccount="template-instance-finalizer-controller" 2025-12-13T00:13:19.821274780+00:00 stderr F I1213 
00:13:19.821229 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-infra" name="unidling-controller-dockercfg-ndddq" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.471521257 +0000 UTC" 2025-12-13T00:13:19.821274780+00:00 stderr F I1213 00:13:19.821250 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-infra" name="unidling-controller-dockercfg-ndddq" serviceaccount="unidling-controller" 2025-12-13T00:13:19.839884546+00:00 stderr F I1213 00:13:19.839822 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-canary" name="builder-dockercfg-jjc4r" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.464082206 +0000 UTC" 2025-12-13T00:13:19.839884546+00:00 stderr F I1213 00:13:19.839846 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-canary" name="builder-dockercfg-jjc4r" serviceaccount="builder" 2025-12-13T00:13:19.919764691+00:00 stderr F I1213 00:13:19.919524 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-canary" name="default-dockercfg-4clxc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.432201975 +0000 UTC" 2025-12-13T00:13:19.919764691+00:00 stderr F I1213 00:13:19.919550 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-canary" name="default-dockercfg-4clxc" serviceaccount="default" 2025-12-13T00:13:19.922590656+00:00 stderr F I1213 00:13:19.922545 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token 
needs to be refreshed" ns="openshift-ingress-canary" name="deployer-dockercfg-njf4l" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.430994998 +0000 UTC" 2025-12-13T00:13:19.922590656+00:00 stderr F I1213 00:13:19.922573 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-canary" name="deployer-dockercfg-njf4l" serviceaccount="deployer" 2025-12-13T00:13:19.948426894+00:00 stderr F I1213 00:13:19.948378 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-operator" name="builder-dockercfg-qnlh9" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.420662032 +0000 UTC" 2025-12-13T00:13:19.948426894+00:00 stderr F I1213 00:13:19.948408 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-operator" name="builder-dockercfg-qnlh9" serviceaccount="builder" 2025-12-13T00:13:19.962529587+00:00 stderr F I1213 00:13:19.962493 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-operator" name="default-dockercfg-dbsd9" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.415014638 +0000 UTC" 2025-12-13T00:13:19.962529587+00:00 stderr F I1213 00:13:19.962518 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-operator" name="default-dockercfg-dbsd9" serviceaccount="default" 2025-12-13T00:13:19.982340743+00:00 stderr F I1213 00:13:19.982265 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-operator" name="deployer-dockercfg-m9j7c" url="default-route-openshift-image-registry.apps-crc.testing" 
expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.407106296 +0000 UTC" 2025-12-13T00:13:19.982340743+00:00 stderr F I1213 00:13:19.982292 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-operator" name="deployer-dockercfg-m9j7c" serviceaccount="deployer" 2025-12-13T00:13:20.016372466+00:00 stderr F W1213 00:13:20.016313 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-12-13T00:13:20.016372466+00:00 stderr F E1213 00:13:20.016352 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-12-13T00:13:20.040524038+00:00 stderr F I1213 00:13:20.040472 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress-operator" name="ingress-operator-dockercfg-sxxwd" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.38382407 +0000 UTC" 2025-12-13T00:13:20.040524038+00:00 stderr F I1213 00:13:20.040502 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress-operator" name="ingress-operator-dockercfg-sxxwd" serviceaccount="ingress-operator" 2025-12-13T00:13:20.061162002+00:00 stderr F I1213 00:13:20.061130 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress" name="builder-dockercfg-dc6f6" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.375559805 +0000 UTC" 2025-12-13T00:13:20.061162002+00:00 stderr F I1213 
00:13:20.061154 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress" name="builder-dockercfg-dc6f6" serviceaccount="builder" 2025-12-13T00:13:20.082381485+00:00 stderr F I1213 00:13:20.082327 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress" name="default-dockercfg-dvqwl" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.367084543 +0000 UTC" 2025-12-13T00:13:20.082381485+00:00 stderr F I1213 00:13:20.082351 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress" name="default-dockercfg-dvqwl" serviceaccount="default" 2025-12-13T00:13:20.095004129+00:00 stderr F I1213 00:13:20.094973 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress" name="deployer-dockercfg-6cpmp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.362022034 +0000 UTC" 2025-12-13T00:13:20.095004129+00:00 stderr F I1213 00:13:20.094998 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress" name="deployer-dockercfg-6cpmp" serviceaccount="deployer" 2025-12-13T00:13:20.120744983+00:00 stderr F I1213 00:13:20.120680 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ingress" name="router-dockercfg-n864z" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.351740224 +0000 UTC" 2025-12-13T00:13:20.120744983+00:00 stderr F I1213 00:13:20.120706 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ingress" name="router-dockercfg-n864z" serviceaccount="router" 
2025-12-13T00:13:20.183850594+00:00 stderr F I1213 00:13:20.183796 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kni-infra" name="builder-dockercfg-pzdvv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.326492067 +0000 UTC" 2025-12-13T00:13:20.183850594+00:00 stderr F I1213 00:13:20.183823 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kni-infra" name="builder-dockercfg-pzdvv" serviceaccount="builder" 2025-12-13T00:13:20.201668563+00:00 stderr F I1213 00:13:20.201516 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kni-infra" name="default-dockercfg-2zsnk" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.319405365 +0000 UTC" 2025-12-13T00:13:20.201668563+00:00 stderr F I1213 00:13:20.201539 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kni-infra" name="default-dockercfg-2zsnk" serviceaccount="default" 2025-12-13T00:13:20.227696238+00:00 stderr F I1213 00:13:20.227640 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kni-infra" name="deployer-dockercfg-v52pl" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.308955498 +0000 UTC" 2025-12-13T00:13:20.227696238+00:00 stderr F I1213 00:13:20.227666 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kni-infra" name="deployer-dockercfg-v52pl" serviceaccount="deployer" 2025-12-13T00:13:20.244164420+00:00 stderr F I1213 00:13:20.244106 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth 
token needs to be refreshed" ns="openshift-kube-apiserver-operator" name="builder-dockercfg-2cs69" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.302368841 +0000 UTC" 2025-12-13T00:13:20.244164420+00:00 stderr F I1213 00:13:20.244134 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver-operator" name="builder-dockercfg-2cs69" serviceaccount="builder" 2025-12-13T00:13:20.277920295+00:00 stderr F I1213 00:13:20.277857 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver-operator" name="default-dockercfg-7dskq" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:31 +0000 UTC" refreshTime="2025-06-26 10:14:23.288871328 +0000 UTC" 2025-12-13T00:13:20.277920295+00:00 stderr F I1213 00:13:20.277890 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver-operator" name="default-dockercfg-7dskq" serviceaccount="default" 2025-12-13T00:13:20.345180585+00:00 stderr F I1213 00:13:20.345116 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver-operator" name="deployer-dockercfg-xjjdg" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.661967335 +0000 UTC" 2025-12-13T00:13:20.345180585+00:00 stderr F I1213 00:13:20.345148 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver-operator" name="deployer-dockercfg-xjjdg" serviceaccount="deployer" 2025-12-13T00:13:20.358580365+00:00 stderr F I1213 00:13:20.358538 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver-operator" name="kube-apiserver-operator-dockercfg-n4bm9" 
url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.656600926 +0000 UTC" 2025-12-13T00:13:20.358580365+00:00 stderr F I1213 00:13:20.358572 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver-operator" name="kube-apiserver-operator-dockercfg-n4bm9" serviceaccount="kube-apiserver-operator" 2025-12-13T00:13:20.384598480+00:00 stderr F I1213 00:13:20.384535 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="builder-dockercfg-rrcrf" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.646197613 +0000 UTC" 2025-12-13T00:13:20.384598480+00:00 stderr F I1213 00:13:20.384559 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="builder-dockercfg-rrcrf" serviceaccount="builder" 2025-12-13T00:13:20.389241986+00:00 stderr F I1213 00:13:20.389208 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="default-dockercfg-dlw8f" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.644329854 +0000 UTC" 2025-12-13T00:13:20.389256126+00:00 stderr F I1213 00:13:20.389241 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="default-dockercfg-dlw8f" serviceaccount="default" 2025-12-13T00:13:20.400035178+00:00 stderr F W1213 00:13:20.400003 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2025-12-13T00:13:20.400049499+00:00 stderr F E1213 00:13:20.400038 1 reflector.go:147] 
k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Image: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2025-12-13T00:13:20.411842675+00:00 stderr F I1213 00:13:20.411683 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="deployer-dockercfg-vr4tw" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.635346197 +0000 UTC" 2025-12-13T00:13:20.411842675+00:00 stderr F I1213 00:13:20.411714 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="deployer-dockercfg-vr4tw" serviceaccount="deployer" 2025-12-13T00:13:20.474969847+00:00 stderr F I1213 00:13:20.474206 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="installer-sa-dockercfg-4kgh8" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.610332064 +0000 UTC" 2025-12-13T00:13:20.474969847+00:00 stderr F I1213 00:13:20.474239 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-apiserver" name="installer-sa-dockercfg-4kgh8" serviceaccount="installer-sa" 2025-12-13T00:13:20.487971053+00:00 stderr F I1213 00:13:20.487763 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-apiserver" name="localhost-recovery-client-dockercfg-qll5d" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.604906562 +0000 UTC" 2025-12-13T00:13:20.487971053+00:00 stderr F I1213 00:13:20.487788 1 image_pull_secret_controller.go:163] 
"Refreshing image pull secret" ns="openshift-kube-apiserver" name="localhost-recovery-client-dockercfg-qll5d" serviceaccount="localhost-recovery-client" 2025-12-13T00:13:20.523275070+00:00 stderr F I1213 00:13:20.523220 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager-operator" name="builder-dockercfg-lfl8l" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.590727197 +0000 UTC" 2025-12-13T00:13:20.523275070+00:00 stderr F I1213 00:13:20.523255 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager-operator" name="builder-dockercfg-lfl8l" serviceaccount="builder" 2025-12-13T00:13:20.529642283+00:00 stderr F I1213 00:13:20.529583 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager-operator" name="default-dockercfg-ztmg5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.588179188 +0000 UTC" 2025-12-13T00:13:20.529642283+00:00 stderr F I1213 00:13:20.529628 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager-operator" name="default-dockercfg-ztmg5" serviceaccount="default" 2025-12-13T00:13:20.544433431+00:00 stderr F I1213 00:13:20.544377 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager-operator" name="deployer-dockercfg-qpmjq" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.582260505 +0000 UTC" 2025-12-13T00:13:20.544433431+00:00 stderr F I1213 00:13:20.544405 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" 
ns="openshift-kube-controller-manager-operator" name="deployer-dockercfg-qpmjq" serviceaccount="deployer" 2025-12-13T00:13:20.559744015+00:00 stderr F W1213 00:13:20.559664 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2025-12-13T00:13:20.559744015+00:00 stderr F E1213 00:13:20.559713 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2025-12-13T00:13:20.608829895+00:00 stderr F I1213 00:13:20.608773 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager-operator" name="kube-controller-manager-operator-dockercfg-mwmd7" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.556515971 +0000 UTC" 2025-12-13T00:13:20.608829895+00:00 stderr F I1213 00:13:20.608806 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager-operator" name="kube-controller-manager-operator-dockercfg-mwmd7" serviceaccount="kube-controller-manager-operator" 2025-12-13T00:13:20.627562154+00:00 stderr F I1213 00:13:20.627505 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="builder-dockercfg-4xp92" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.549013328 +0000 UTC" 2025-12-13T00:13:20.627562154+00:00 stderr F I1213 00:13:20.627540 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" 
ns="openshift-kube-controller-manager" name="builder-dockercfg-4xp92" serviceaccount="builder" 2025-12-13T00:13:20.673236169+00:00 stderr F I1213 00:13:20.672313 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="default-dockercfg-8rtql" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.531087406 +0000 UTC" 2025-12-13T00:13:20.673236169+00:00 stderr F I1213 00:13:20.672337 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="default-dockercfg-8rtql" serviceaccount="default" 2025-12-13T00:13:20.673236169+00:00 stderr F I1213 00:13:20.672685 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="deployer-dockercfg-bnp5r" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.530932056 +0000 UTC" 2025-12-13T00:13:20.673236169+00:00 stderr F I1213 00:13:20.672698 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="deployer-dockercfg-bnp5r" serviceaccount="deployer" 2025-12-13T00:13:20.674583284+00:00 stderr F I1213 00:13:20.674545 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="installer-sa-dockercfg-dl9g2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.530191067 +0000 UTC" 2025-12-13T00:13:20.674583284+00:00 stderr F I1213 00:13:20.674571 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="installer-sa-dockercfg-dl9g2" 
serviceaccount="installer-sa" 2025-12-13T00:13:20.719131171+00:00 stderr F W1213 00:13:20.719064 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-12-13T00:13:20.719166012+00:00 stderr F E1213 00:13:20.719125 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-12-13T00:13:20.742055351+00:00 stderr F I1213 00:13:20.741957 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="kube-controller-manager-sa-dockercfg-4jsp6" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.503237334 +0000 UTC" 2025-12-13T00:13:20.742055351+00:00 stderr F I1213 00:13:20.741987 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="kube-controller-manager-sa-dockercfg-4jsp6" serviceaccount="kube-controller-manager-sa" 2025-12-13T00:13:20.770026261+00:00 stderr F I1213 00:13:20.768643 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-controller-manager" name="localhost-recovery-client-dockercfg-862fd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.492559142 +0000 UTC" 2025-12-13T00:13:20.770026261+00:00 stderr F I1213 00:13:20.768683 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-controller-manager" name="localhost-recovery-client-dockercfg-862fd" 
serviceaccount="localhost-recovery-client" 2025-12-13T00:13:20.790302442+00:00 stderr F I1213 00:13:20.790239 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler-operator" name="builder-dockercfg-2h8s9" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.483917829 +0000 UTC" 2025-12-13T00:13:20.790302442+00:00 stderr F I1213 00:13:20.790266 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler-operator" name="builder-dockercfg-2h8s9" serviceaccount="builder" 2025-12-13T00:13:20.815011323+00:00 stderr F I1213 00:13:20.804230 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler-operator" name="default-dockercfg-vmhb5" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.478320629 +0000 UTC" 2025-12-13T00:13:20.815011323+00:00 stderr F I1213 00:13:20.804262 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler-operator" name="default-dockercfg-vmhb5" serviceaccount="default" 2025-12-13T00:13:20.815656264+00:00 stderr F I1213 00:13:20.815575 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler-operator" name="deployer-dockercfg-ltvsw" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.473785176 +0000 UTC" 2025-12-13T00:13:20.815656264+00:00 stderr F I1213 00:13:20.815611 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler-operator" name="deployer-dockercfg-ltvsw" serviceaccount="deployer" 2025-12-13T00:13:20.870345482+00:00 
stderr F I1213 00:13:20.870004 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler-operator" name="openshift-kube-scheduler-operator-dockercfg-w67m2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.452011948 +0000 UTC" 2025-12-13T00:13:20.870345482+00:00 stderr F I1213 00:13:20.870037 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler-operator" name="openshift-kube-scheduler-operator-dockercfg-w67m2" serviceaccount="openshift-kube-scheduler-operator" 2025-12-13T00:13:20.901000032+00:00 stderr F I1213 00:13:20.900723 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="builder-dockercfg-wmtqz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.439724647 +0000 UTC" 2025-12-13T00:13:20.901000032+00:00 stderr F I1213 00:13:20.900751 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="builder-dockercfg-wmtqz" serviceaccount="builder" 2025-12-13T00:13:20.923607912+00:00 stderr F I1213 00:13:20.922777 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="default-dockercfg-m6rz5" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.430905123 +0000 UTC" 2025-12-13T00:13:20.923607912+00:00 stderr F I1213 00:13:20.922811 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="default-dockercfg-m6rz5" serviceaccount="default" 2025-12-13T00:13:20.928969922+00:00 stderr F I1213 00:13:20.928903 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="deployer-dockercfg-mks8v" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:32 +0000 UTC" refreshTime="2025-06-26 10:14:24.428447573 +0000 UTC" 2025-12-13T00:13:20.929020864+00:00 stderr F I1213 00:13:20.928927 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="deployer-dockercfg-mks8v" serviceaccount="deployer" 2025-12-13T00:13:20.946773590+00:00 stderr F I1213 00:13:20.946719 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="installer-sa-dockercfg-9ln8g" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.821322525 +0000 UTC" 2025-12-13T00:13:20.946773590+00:00 stderr F I1213 00:13:20.946743 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="installer-sa-dockercfg-9ln8g" serviceaccount="installer-sa" 2025-12-13T00:13:21.002300656+00:00 stderr F I1213 00:13:21.002200 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="localhost-recovery-client-dockercfg-b5dfm" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.799151992 +0000 UTC" 2025-12-13T00:13:21.002300656+00:00 stderr F I1213 00:13:21.002259 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="localhost-recovery-client-dockercfg-b5dfm" serviceaccount="localhost-recovery-client" 2025-12-13T00:13:21.042576829+00:00 stderr F I1213 00:13:21.042499 1 image_pull_secret_controller.go:286] "Internal registry pull secret 
needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-scheduler" name="openshift-kube-scheduler-sa-dockercfg-d9dtc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.78301147 +0000 UTC" 2025-12-13T00:13:21.042576829+00:00 stderr F I1213 00:13:21.042551 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-scheduler" name="openshift-kube-scheduler-sa-dockercfg-d9dtc" serviceaccount="openshift-kube-scheduler-sa" 2025-12-13T00:13:21.054559392+00:00 stderr F I1213 00:13:21.054485 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator-operator" name="builder-dockercfg-cp4s8" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.778218944 +0000 UTC" 2025-12-13T00:13:21.054559392+00:00 stderr F I1213 00:13:21.054544 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator-operator" name="builder-dockercfg-cp4s8" serviceaccount="builder" 2025-12-13T00:13:21.074411819+00:00 stderr F I1213 00:13:21.074352 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator-operator" name="default-dockercfg-47tpp" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.770274762 +0000 UTC" 2025-12-13T00:13:21.074411819+00:00 stderr F I1213 00:13:21.074386 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator-operator" name="default-dockercfg-47tpp" serviceaccount="default" 2025-12-13T00:13:21.088006866+00:00 stderr F I1213 00:13:21.087959 1 image_pull_secret_controller.go:286] 
"Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator-operator" name="deployer-dockercfg-vphfw" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.764839219 +0000 UTC" 2025-12-13T00:13:21.088006866+00:00 stderr F I1213 00:13:21.087993 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator-operator" name="deployer-dockercfg-vphfw" serviceaccount="deployer" 2025-12-13T00:13:21.133847956+00:00 stderr F I1213 00:13:21.133789 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator-operator" name="kube-storage-version-migrator-operator-dockercfg-8l4fr" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.746499371 +0000 UTC" 2025-12-13T00:13:21.133847956+00:00 stderr F I1213 00:13:21.133820 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator-operator" name="kube-storage-version-migrator-operator-dockercfg-8l4fr" serviceaccount="kube-storage-version-migrator-operator" 2025-12-13T00:13:21.186975382+00:00 stderr F I1213 00:13:21.184772 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator" name="builder-dockercfg-lvk8s" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.726103566 +0000 UTC" 2025-12-13T00:13:21.186975382+00:00 stderr F I1213 00:13:21.184800 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator" name="builder-dockercfg-lvk8s" serviceaccount="builder" 
2025-12-13T00:13:21.193777320+00:00 stderr F I1213 00:13:21.190682 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator" name="default-dockercfg-4vhnw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.723747678 +0000 UTC" 2025-12-13T00:13:21.193777320+00:00 stderr F I1213 00:13:21.190709 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator" name="default-dockercfg-4vhnw" serviceaccount="default" 2025-12-13T00:13:21.216866926+00:00 stderr F I1213 00:13:21.216812 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator" name="deployer-dockercfg-tzgw6" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.713291766 +0000 UTC" 2025-12-13T00:13:21.216963049+00:00 stderr F I1213 00:13:21.216946 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator" name="deployer-dockercfg-tzgw6" serviceaccount="deployer" 2025-12-13T00:13:21.223682025+00:00 stderr F I1213 00:13:21.223627 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-kube-storage-version-migrator" name="kube-storage-version-migrator-sa-dockercfg-5v9xj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.710565183 +0000 UTC" 2025-12-13T00:13:21.223682025+00:00 stderr F I1213 00:13:21.223663 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-kube-storage-version-migrator" name="kube-storage-version-migrator-sa-dockercfg-5v9xj" serviceaccount="kube-storage-version-migrator-sa" 
2025-12-13T00:13:21.274306606+00:00 stderr F I1213 00:13:21.274257 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="builder-dockercfg-t2jdt" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.690312486 +0000 UTC" 2025-12-13T00:13:21.274410009+00:00 stderr F I1213 00:13:21.274396 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="builder-dockercfg-t2jdt" serviceaccount="builder" 2025-12-13T00:13:21.314424114+00:00 stderr F I1213 00:13:21.314366 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="cluster-autoscaler-dockercfg-5ld89" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.674281312 +0000 UTC" 2025-12-13T00:13:21.314507687+00:00 stderr F I1213 00:13:21.314492 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="cluster-autoscaler-dockercfg-5ld89" serviceaccount="cluster-autoscaler" 2025-12-13T00:13:21.329077207+00:00 stderr F I1213 00:13:21.329018 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="cluster-autoscaler-operator-dockercfg-nlvfr" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.668408939 +0000 UTC" 2025-12-13T00:13:21.329077207+00:00 stderr F I1213 00:13:21.329050 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="cluster-autoscaler-operator-dockercfg-nlvfr" serviceaccount="cluster-autoscaler-operator" 2025-12-13T00:13:21.348035253+00:00 stderr F I1213 00:13:21.347973 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="control-plane-machine-set-operator-dockercfg-7wtgv" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.660833728 +0000 UTC" 2025-12-13T00:13:21.348035253+00:00 stderr F I1213 00:13:21.348004 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="control-plane-machine-set-operator-dockercfg-7wtgv" serviceaccount="control-plane-machine-set-operator" 2025-12-13T00:13:21.362777649+00:00 stderr F I1213 00:13:21.362717 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="default-dockercfg-t6f6m" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.65492895 +0000 UTC" 2025-12-13T00:13:21.362868372+00:00 stderr F I1213 00:13:21.362853 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="default-dockercfg-t6f6m" serviceaccount="default" 2025-12-13T00:13:21.417682663+00:00 stderr F I1213 00:13:21.417625 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="deployer-dockercfg-z5zkh" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.632963008 +0000 UTC" 2025-12-13T00:13:21.417682663+00:00 stderr F I1213 00:13:21.417650 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="deployer-dockercfg-z5zkh" serviceaccount="deployer" 2025-12-13T00:13:21.454436078+00:00 stderr F I1213 00:13:21.454370 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="machine-api-controllers-dockercfg-5gkdn" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.618266102 +0000 UTC" 2025-12-13T00:13:21.454436078+00:00 stderr F I1213 00:13:21.454400 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="machine-api-controllers-dockercfg-5gkdn" serviceaccount="machine-api-controllers" 2025-12-13T00:13:21.468041766+00:00 stderr F I1213 00:13:21.467976 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="machine-api-operator-dockercfg-q7fmc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.612834371 +0000 UTC" 2025-12-13T00:13:21.468041766+00:00 stderr F I1213 00:13:21.468012 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="machine-api-operator-dockercfg-q7fmc" serviceaccount="machine-api-operator" 2025-12-13T00:13:21.480548596+00:00 stderr F I1213 00:13:21.480485 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-api" name="machine-api-termination-handler-dockercfg-86pgj" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.607816294 +0000 UTC" 2025-12-13T00:13:21.480548596+00:00 stderr F I1213 00:13:21.480515 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-api" name="machine-api-termination-handler-dockercfg-86pgj" serviceaccount="machine-api-termination-handler" 2025-12-13T00:13:21.502917168+00:00 stderr F I1213 
00:13:21.502859 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="builder-dockercfg-kjnvp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.598870618 +0000 UTC" 2025-12-13T00:13:21.503012051+00:00 stderr F I1213 00:13:21.502997 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="builder-dockercfg-kjnvp" serviceaccount="builder" 2025-12-13T00:13:21.547464155+00:00 stderr F I1213 00:13:21.546890 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="default-dockercfg-lh7f8" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.581255862 +0000 UTC" 2025-12-13T00:13:21.547464155+00:00 stderr F I1213 00:13:21.546915 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="default-dockercfg-lh7f8" serviceaccount="default" 2025-12-13T00:13:21.593746390+00:00 stderr F I1213 00:13:21.593703 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="deployer-dockercfg-8qpnv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.562531274 +0000 UTC" 2025-12-13T00:13:21.593809072+00:00 stderr F I1213 00:13:21.593798 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="deployer-dockercfg-8qpnv" serviceaccount="deployer" 2025-12-13T00:13:21.608621429+00:00 stderr F I1213 00:13:21.608568 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth 
token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-config-controller-dockercfg-wtlbj" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.556585258 +0000 UTC" 2025-12-13T00:13:21.608621429+00:00 stderr F I1213 00:13:21.608594 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-config-controller-dockercfg-wtlbj" serviceaccount="machine-config-controller" 2025-12-13T00:13:21.613587586+00:00 stderr F I1213 00:13:21.613538 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-config-daemon-dockercfg-rfwqs" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.55459665 +0000 UTC" 2025-12-13T00:13:21.613587586+00:00 stderr F I1213 00:13:21.613562 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-config-daemon-dockercfg-rfwqs" serviceaccount="machine-config-daemon" 2025-12-13T00:13:21.640760810+00:00 stderr F I1213 00:13:21.640692 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-config-operator-dockercfg-vhshz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.543735186 +0000 UTC" 2025-12-13T00:13:21.640760810+00:00 stderr F I1213 00:13:21.640721 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-config-operator-dockercfg-vhshz" serviceaccount="machine-config-operator" 2025-12-13T00:13:21.681489179+00:00 stderr F I1213 00:13:21.681440 1 image_pull_secret_controller.go:286] "Internal 
registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-config-server-dockercfg-xm5kr" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.527440226 +0000 UTC" 2025-12-13T00:13:21.681567291+00:00 stderr F I1213 00:13:21.681552 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-config-server-dockercfg-xm5kr" serviceaccount="machine-config-server" 2025-12-13T00:13:21.728234759+00:00 stderr F I1213 00:13:21.728168 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-os-builder-dockercfg-p6ljl" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.50874979 +0000 UTC" 2025-12-13T00:13:21.728234759+00:00 stderr F I1213 00:13:21.728204 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-os-builder-dockercfg-p6ljl" serviceaccount="machine-os-builder" 2025-12-13T00:13:21.734665425+00:00 stderr F I1213 00:13:21.734603 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="machine-os-puller-dockercfg-bkkmv" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.506173515 +0000 UTC" 2025-12-13T00:13:21.734665425+00:00 stderr F I1213 00:13:21.734634 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="machine-os-puller-dockercfg-bkkmv" serviceaccount="machine-os-puller" 2025-12-13T00:13:21.749203614+00:00 stderr F I1213 00:13:21.749148 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-machine-config-operator" name="node-bootstrapper-dockercfg-8hvnd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.500356179 +0000 UTC" 2025-12-13T00:13:21.749203614+00:00 stderr F I1213 00:13:21.749182 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-machine-config-operator" name="node-bootstrapper-dockercfg-8hvnd" serviceaccount="node-bootstrapper" 2025-12-13T00:13:21.782493943+00:00 stderr F I1213 00:13:21.782435 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="builder-dockercfg-w5l7k" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.487040588 +0000 UTC" 2025-12-13T00:13:21.782493943+00:00 stderr F I1213 00:13:21.782468 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="builder-dockercfg-w5l7k" serviceaccount="builder" 2025-12-13T00:13:21.815557733+00:00 stderr F I1213 00:13:21.815494 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="certified-operators-dockercfg-twmwc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.473819189 +0000 UTC" 2025-12-13T00:13:21.815557733+00:00 stderr F I1213 00:13:21.815525 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="certified-operators-dockercfg-twmwc" serviceaccount="certified-operators" 2025-12-13T00:13:21.861674383+00:00 stderr F I1213 00:13:21.861626 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth 
token needs to be refreshed" ns="openshift-marketplace" name="community-operators-dockercfg-sv888" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.455363128 +0000 UTC" 2025-12-13T00:13:21.861746226+00:00 stderr F I1213 00:13:21.861731 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="community-operators-dockercfg-sv888" serviceaccount="community-operators" 2025-12-13T00:13:21.867786688+00:00 stderr F I1213 00:13:21.867749 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="default-dockercfg-4w6pc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.452911006 +0000 UTC" 2025-12-13T00:13:21.867786688+00:00 stderr F I1213 00:13:21.867774 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="default-dockercfg-4w6pc" serviceaccount="default" 2025-12-13T00:13:21.887286764+00:00 stderr F I1213 00:13:21.887232 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="deployer-dockercfg-wdpgc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.445121363 +0000 UTC" 2025-12-13T00:13:21.887286764+00:00 stderr F I1213 00:13:21.887263 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="deployer-dockercfg-wdpgc" serviceaccount="deployer" 2025-12-13T00:13:21.915176391+00:00 stderr F I1213 00:13:21.915114 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="marketplace-operator-dockercfg-b4zbk" url="image-registry.openshift-image-registry.svc:5000" 
expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.433970405 +0000 UTC" 2025-12-13T00:13:21.915176391+00:00 stderr F I1213 00:13:21.915145 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="marketplace-operator-dockercfg-b4zbk" serviceaccount="marketplace-operator" 2025-12-13T00:13:21.947893100+00:00 stderr F I1213 00:13:21.947841 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="redhat-marketplace-dockercfg-kpdvz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.420876293 +0000 UTC" 2025-12-13T00:13:21.947893100+00:00 stderr F I1213 00:13:21.947877 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="redhat-marketplace-dockercfg-kpdvz" serviceaccount="redhat-marketplace" 2025-12-13T00:13:21.995039834+00:00 stderr F I1213 00:13:21.994980 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-marketplace" name="redhat-operators-dockercfg-dwn4s" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.40203086 +0000 UTC" 2025-12-13T00:13:21.995039834+00:00 stderr F I1213 00:13:21.995012 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-marketplace" name="redhat-operators-dockercfg-dwn4s" serviceaccount="redhat-operators" 2025-12-13T00:13:22.002056850+00:00 stderr F I1213 00:13:22.002003 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-monitoring" name="builder-dockercfg-c82h4" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 
10:14:25.399215053 +0000 UTC" 2025-12-13T00:13:22.002056850+00:00 stderr F I1213 00:13:22.002038 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-monitoring" name="builder-dockercfg-c82h4" serviceaccount="builder" 2025-12-13T00:13:22.027017369+00:00 stderr F I1213 00:13:22.026964 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-monitoring" name="cluster-monitoring-operator-dockercfg-vg26t" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.389239302 +0000 UTC" 2025-12-13T00:13:22.027142233+00:00 stderr F I1213 00:13:22.027115 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-monitoring" name="cluster-monitoring-operator-dockercfg-vg26t" serviceaccount="cluster-monitoring-operator" 2025-12-13T00:13:22.054308406+00:00 stderr F I1213 00:13:22.054232 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-monitoring" name="default-dockercfg-vffxx" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.378323779 +0000 UTC" 2025-12-13T00:13:22.054308406+00:00 stderr F I1213 00:13:22.054269 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-monitoring" name="default-dockercfg-vffxx" serviceaccount="default" 2025-12-13T00:13:22.087673047+00:00 stderr F I1213 00:13:22.087624 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-monitoring" name="deployer-dockercfg-fzkn2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.364961602 +0000 UTC" 2025-12-13T00:13:22.087673047+00:00 stderr F I1213 00:13:22.087649 1 
image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-monitoring" name="deployer-dockercfg-fzkn2" serviceaccount="deployer" 2025-12-13T00:13:22.128439017+00:00 stderr F I1213 00:13:22.128380 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="builder-dockercfg-wqmk7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.748666347 +0000 UTC" 2025-12-13T00:13:22.128439017+00:00 stderr F I1213 00:13:22.128415 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="builder-dockercfg-wqmk7" serviceaccount="builder" 2025-12-13T00:13:22.142064775+00:00 stderr F I1213 00:13:22.142013 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="default-dockercfg-smth4" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.743207771 +0000 UTC" 2025-12-13T00:13:22.142064775+00:00 stderr F I1213 00:13:22.142040 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="default-dockercfg-smth4" serviceaccount="default" 2025-12-13T00:13:22.168216263+00:00 stderr F I1213 00:13:22.168151 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="deployer-dockercfg-lbcm2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:33 +0000 UTC" refreshTime="2025-06-26 10:14:25.332755104 +0000 UTC" 2025-12-13T00:13:22.168216263+00:00 stderr F I1213 00:13:22.168182 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="deployer-dockercfg-lbcm2" serviceaccount="deployer" 2025-12-13T00:13:22.195605534+00:00 stderr F I1213 00:13:22.195543 1 
image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="metrics-daemon-sa-dockercfg-22xbz" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.721799492 +0000 UTC" 2025-12-13T00:13:22.195605534+00:00 stderr F I1213 00:13:22.195578 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="metrics-daemon-sa-dockercfg-22xbz" serviceaccount="metrics-daemon-sa" 2025-12-13T00:13:22.229303846+00:00 stderr F I1213 00:13:22.229248 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="multus-ac-dockercfg-ltm2q" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.708318171 +0000 UTC" 2025-12-13T00:13:22.229303846+00:00 stderr F I1213 00:13:22.229277 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="multus-ac-dockercfg-ltm2q" serviceaccount="multus-ac" 2025-12-13T00:13:22.261718516+00:00 stderr F I1213 00:13:22.261658 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-multus" name="multus-ancillary-tools-dockercfg-6hnwp" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.695353055 +0000 UTC" 2025-12-13T00:13:22.261718516+00:00 stderr F I1213 00:13:22.261692 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="multus-ancillary-tools-dockercfg-6hnwp" serviceaccount="multus-ancillary-tools" 2025-12-13T00:13:22.281908244+00:00 stderr F I1213 00:13:22.281848 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth 
token needs to be refreshed" ns="openshift-multus" name="multus-dockercfg-2hmjh" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.687275744 +0000 UTC" 2025-12-13T00:13:22.281908244+00:00 stderr F I1213 00:13:22.281884 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-multus" name="multus-dockercfg-2hmjh" serviceaccount="multus" 2025-12-13T00:13:22.307211904+00:00 stderr F I1213 00:13:22.307156 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-diagnostics" name="builder-dockercfg-v84zj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.677152603 +0000 UTC" 2025-12-13T00:13:22.307211904+00:00 stderr F I1213 00:13:22.307189 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-diagnostics" name="builder-dockercfg-v84zj" serviceaccount="builder" 2025-12-13T00:13:22.328040994+00:00 stderr F I1213 00:13:22.327991 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-diagnostics" name="default-dockercfg-l7mph" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.668816088 +0000 UTC" 2025-12-13T00:13:22.328040994+00:00 stderr F I1213 00:13:22.328014 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-diagnostics" name="default-dockercfg-l7mph" serviceaccount="default" 2025-12-13T00:13:22.368456302+00:00 stderr F I1213 00:13:22.368401 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-diagnostics" name="deployer-dockercfg-xfj6f" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 
21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.652651957 +0000 UTC" 2025-12-13T00:13:22.368456302+00:00 stderr F I1213 00:13:22.368430 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-diagnostics" name="deployer-dockercfg-xfj6f" serviceaccount="deployer" 2025-12-13T00:13:22.388676061+00:00 stderr F I1213 00:13:22.388618 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-diagnostics" name="network-diagnostics-dockercfg-lpz5v" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.644564725 +0000 UTC" 2025-12-13T00:13:22.388676061+00:00 stderr F I1213 00:13:22.388646 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-diagnostics" name="network-diagnostics-dockercfg-lpz5v" serviceaccount="network-diagnostics" 2025-12-13T00:13:22.421791754+00:00 stderr F I1213 00:13:22.421742 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-node-identity" name="builder-dockercfg-jflnh" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.631316187 +0000 UTC" 2025-12-13T00:13:22.421791754+00:00 stderr F I1213 00:13:22.421768 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-node-identity" name="builder-dockercfg-jflnh" serviceaccount="builder" 2025-12-13T00:13:22.442545441+00:00 stderr F I1213 00:13:22.442469 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-node-identity" name="default-dockercfg-75lxp" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.62302733 +0000 UTC" 
2025-12-13T00:13:22.442545441+00:00 stderr F I1213 00:13:22.442502 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-node-identity" name="default-dockercfg-75lxp" serviceaccount="default" 2025-12-13T00:13:22.468183833+00:00 stderr F I1213 00:13:22.468115 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-node-identity" name="deployer-dockercfg-rj2df" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.612766893 +0000 UTC" 2025-12-13T00:13:22.468183833+00:00 stderr F I1213 00:13:22.468147 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-node-identity" name="deployer-dockercfg-rj2df" serviceaccount="deployer" 2025-12-13T00:13:22.509428549+00:00 stderr F I1213 00:13:22.509341 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-node-identity" name="network-node-identity-dockercfg-q58sj" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.596279522 +0000 UTC" 2025-12-13T00:13:22.509428549+00:00 stderr F I1213 00:13:22.509406 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-node-identity" name="network-node-identity-dockercfg-q58sj" serviceaccount="network-node-identity" 2025-12-13T00:13:22.521505544+00:00 stderr F I1213 00:13:22.521452 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="builder-dockercfg-rm8cp" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.591431087 +0000 UTC" 2025-12-13T00:13:22.521505544+00:00 stderr F I1213 
00:13:22.521477 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="builder-dockercfg-rm8cp" serviceaccount="builder" 2025-12-13T00:13:22.562408929+00:00 stderr F I1213 00:13:22.562348 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="cluster-network-operator-dockercfg-fknq9" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.575077762 +0000 UTC" 2025-12-13T00:13:22.562408929+00:00 stderr F I1213 00:13:22.562378 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="cluster-network-operator-dockercfg-fknq9" serviceaccount="cluster-network-operator" 2025-12-13T00:13:22.574620459+00:00 stderr F I1213 00:13:22.574552 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="default-dockercfg-qbblb" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.570194118 +0000 UTC" 2025-12-13T00:13:22.574620459+00:00 stderr F I1213 00:13:22.574587 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="default-dockercfg-qbblb" serviceaccount="default" 2025-12-13T00:13:22.607611898+00:00 stderr F I1213 00:13:22.607543 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="deployer-dockercfg-dxzsm" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.556999726 +0000 UTC" 2025-12-13T00:13:22.607611898+00:00 stderr F I1213 00:13:22.607593 1 image_pull_secret_controller.go:163] "Refreshing image 
pull secret" ns="openshift-network-operator" name="deployer-dockercfg-dxzsm" serviceaccount="deployer" 2025-12-13T00:13:22.640538055+00:00 stderr F I1213 00:13:22.640214 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-network-operator" name="iptables-alerter-dockercfg-m85pb" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.543926126 +0000 UTC" 2025-12-13T00:13:22.640538055+00:00 stderr F I1213 00:13:22.640244 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-network-operator" name="iptables-alerter-dockercfg-m85pb" serviceaccount="iptables-alerter" 2025-12-13T00:13:22.663312170+00:00 stderr F I1213 00:13:22.663273 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-node" name="builder-dockercfg-x5dlr" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.534702378 +0000 UTC" 2025-12-13T00:13:22.663371402+00:00 stderr F I1213 00:13:22.663360 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-node" name="builder-dockercfg-x5dlr" serviceaccount="builder" 2025-12-13T00:13:22.702886719+00:00 stderr F I1213 00:13:22.702781 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-node" name="default-dockercfg-rkcl2" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.518907594 +0000 UTC" 2025-12-13T00:13:22.702886719+00:00 stderr F I1213 00:13:22.702812 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-node" name="default-dockercfg-rkcl2" serviceaccount="default" 2025-12-13T00:13:22.714180999+00:00 
stderr F I1213 00:13:22.714131 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-node" name="deployer-dockercfg-ng566" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.51436295 +0000 UTC" 2025-12-13T00:13:22.714252461+00:00 stderr F I1213 00:13:22.714237 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-node" name="deployer-dockercfg-ng566" serviceaccount="deployer" 2025-12-13T00:13:22.754198853+00:00 stderr F I1213 00:13:22.754142 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-nutanix-infra" name="builder-dockercfg-zj94t" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.49835804 +0000 UTC" 2025-12-13T00:13:22.754198853+00:00 stderr F I1213 00:13:22.754174 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-nutanix-infra" name="builder-dockercfg-zj94t" serviceaccount="builder" 2025-12-13T00:13:22.762467751+00:00 stderr F I1213 00:13:22.762427 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-nutanix-infra" name="default-dockercfg-w25km" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:34 +0000 UTC" refreshTime="2025-06-26 10:14:26.495042184 +0000 UTC" 2025-12-13T00:13:22.762467751+00:00 stderr F I1213 00:13:22.762458 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-nutanix-infra" name="default-dockercfg-w25km" serviceaccount="default" 2025-12-13T00:13:22.794318672+00:00 stderr F I1213 00:13:22.794272 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-nutanix-infra" 
name="deployer-dockercfg-dgzbc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.882303024 +0000 UTC" 2025-12-13T00:13:22.794378534+00:00 stderr F I1213 00:13:22.794366 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-nutanix-infra" name="deployer-dockercfg-dgzbc" serviceaccount="deployer" 2025-12-13T00:13:22.839814381+00:00 stderr F I1213 00:13:22.839768 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-oauth-apiserver" name="builder-dockercfg-ssklj" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.864105317 +0000 UTC" 2025-12-13T00:13:22.839881143+00:00 stderr F I1213 00:13:22.839870 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-oauth-apiserver" name="builder-dockercfg-ssklj" serviceaccount="builder" 2025-12-13T00:13:22.868168093+00:00 stderr F I1213 00:13:22.868116 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-oauth-apiserver" name="default-dockercfg-m5zsx" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.852766212 +0000 UTC" 2025-12-13T00:13:22.868168093+00:00 stderr F I1213 00:13:22.868144 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-oauth-apiserver" name="default-dockercfg-m5zsx" serviceaccount="default" 2025-12-13T00:13:22.894585612+00:00 stderr F I1213 00:13:22.894531 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-oauth-apiserver" name="deployer-dockercfg-s7krv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 
10:14:27.842199216 +0000 UTC" 2025-12-13T00:13:22.894585612+00:00 stderr F I1213 00:13:22.894561 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-oauth-apiserver" name="deployer-dockercfg-s7krv" serviceaccount="deployer" 2025-12-13T00:13:22.925020493+00:00 stderr F I1213 00:13:22.919797 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-oauth-apiserver" name="oauth-apiserver-sa-dockercfg-qvbzg" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.832093285 +0000 UTC" 2025-12-13T00:13:22.925020493+00:00 stderr F I1213 00:13:22.919827 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-oauth-apiserver" name="oauth-apiserver-sa-dockercfg-qvbzg" serviceaccount="oauth-apiserver-sa" 2025-12-13T00:13:22.948981909+00:00 stderr F I1213 00:13:22.941331 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-openstack-infra" name="builder-dockercfg-74x9t" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.823479795 +0000 UTC" 2025-12-13T00:13:22.948981909+00:00 stderr F I1213 00:13:22.941358 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-openstack-infra" name="builder-dockercfg-74x9t" serviceaccount="builder" 2025-12-13T00:13:23.007905399+00:00 stderr F I1213 00:13:23.007844 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-openstack-infra" name="default-dockercfg-n4wxs" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.79687339 +0000 UTC" 2025-12-13T00:13:23.007905399+00:00 stderr F I1213 00:13:23.007868 1 image_pull_secret_controller.go:163] 
"Refreshing image pull secret" ns="openshift-openstack-infra" name="default-dockercfg-n4wxs" serviceaccount="default" 2025-12-13T00:13:23.015291857+00:00 stderr F I1213 00:13:23.015242 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-openstack-infra" name="deployer-dockercfg-ddvf7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.793911098 +0000 UTC" 2025-12-13T00:13:23.015291857+00:00 stderr F I1213 00:13:23.015262 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-openstack-infra" name="deployer-dockercfg-ddvf7" serviceaccount="deployer" 2025-12-13T00:13:23.034260714+00:00 stderr F I1213 00:13:23.034201 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="builder-dockercfg-kvbw2" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.786330624 +0000 UTC" 2025-12-13T00:13:23.034260714+00:00 stderr F I1213 00:13:23.034224 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="builder-dockercfg-kvbw2" serviceaccount="builder" 2025-12-13T00:13:23.079847647+00:00 stderr F I1213 00:13:23.079779 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="collect-profiles-dockercfg-45g9d" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.768102827 +0000 UTC" 2025-12-13T00:13:23.079847647+00:00 stderr F I1213 00:13:23.079818 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" 
name="collect-profiles-dockercfg-45g9d" serviceaccount="collect-profiles" 2025-12-13T00:13:23.080481808+00:00 stderr F I1213 00:13:23.080448 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="default-dockercfg-sps9x" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.76783637 +0000 UTC" 2025-12-13T00:13:23.080495588+00:00 stderr F I1213 00:13:23.080484 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="default-dockercfg-sps9x" serviceaccount="default" 2025-12-13T00:13:23.114975226+00:00 stderr F I1213 00:13:23.114906 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="deployer-dockercfg-fm6b6" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.754050174 +0000 UTC" 2025-12-13T00:13:23.114975226+00:00 stderr F I1213 00:13:23.114946 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operator-lifecycle-manager" name="deployer-dockercfg-fm6b6" serviceaccount="deployer" 2025-12-13T00:13:23.151963359+00:00 stderr F I1213 00:13:23.151628 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operator-lifecycle-manager" name="olm-operator-serviceaccount-dockercfg-ncpbj" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.739362082 +0000 UTC" 2025-12-13T00:13:23.151963359+00:00 stderr F I1213 00:13:23.151672 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" 
ns="openshift-operator-lifecycle-manager" name="olm-operator-serviceaccount-dockercfg-ncpbj" serviceaccount="olm-operator-serviceaccount" 2025-12-13T00:13:23.167533813+00:00 stderr F I1213 00:13:23.167469 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operators" name="builder-dockercfg-bmp44" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.733027221 +0000 UTC" 2025-12-13T00:13:23.167533813+00:00 stderr F I1213 00:13:23.167499 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operators" name="builder-dockercfg-bmp44" serviceaccount="builder" 2025-12-13T00:13:23.201381240+00:00 stderr F I1213 00:13:23.201328 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operators" name="default-dockercfg-6cjkw" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.719521482 +0000 UTC" 2025-12-13T00:13:23.201413501+00:00 stderr F I1213 00:13:23.201388 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operators" name="default-dockercfg-6cjkw" serviceaccount="default" 2025-12-13T00:13:23.214660486+00:00 stderr F I1213 00:13:23.214602 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-operators" name="deployer-dockercfg-kwdj9" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.71417531 +0000 UTC" 2025-12-13T00:13:23.214660486+00:00 stderr F I1213 00:13:23.214639 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-operators" name="deployer-dockercfg-kwdj9" 
serviceaccount="deployer" 2025-12-13T00:13:23.255069444+00:00 stderr F I1213 00:13:23.255016 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovirt-infra" name="builder-dockercfg-686g2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.698005339 +0000 UTC" 2025-12-13T00:13:23.255069444+00:00 stderr F I1213 00:13:23.255041 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovirt-infra" name="builder-dockercfg-686g2" serviceaccount="builder" 2025-12-13T00:13:23.281123610+00:00 stderr F I1213 00:13:23.281040 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovirt-infra" name="default-dockercfg-j8jz7" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.687610374 +0000 UTC" 2025-12-13T00:13:23.281123610+00:00 stderr F I1213 00:13:23.281070 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovirt-infra" name="default-dockercfg-j8jz7" serviceaccount="default" 2025-12-13T00:13:23.307423203+00:00 stderr F I1213 00:13:23.307376 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovirt-infra" name="deployer-dockercfg-kmtk7" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.677061872 +0000 UTC" 2025-12-13T00:13:23.307423203+00:00 stderr F I1213 00:13:23.307400 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovirt-infra" name="deployer-dockercfg-kmtk7" serviceaccount="deployer" 2025-12-13T00:13:23.335580490+00:00 stderr F I1213 00:13:23.335516 1 image_pull_secret_controller.go:286] "Internal 
registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="builder-dockercfg-gvwbd" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.665809426 +0000 UTC" 2025-12-13T00:13:23.335580490+00:00 stderr F I1213 00:13:23.335546 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="builder-dockercfg-gvwbd" serviceaccount="builder" 2025-12-13T00:13:23.347291153+00:00 stderr F I1213 00:13:23.347222 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="default-dockercfg-gd6sg" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.661124152 +0000 UTC" 2025-12-13T00:13:23.347291153+00:00 stderr F I1213 00:13:23.347248 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="default-dockercfg-gd6sg" serviceaccount="default" 2025-12-13T00:13:23.394178009+00:00 stderr F I1213 00:13:23.394103 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="deployer-dockercfg-nhhff" url="image-registry.openshift-image-registry.svc:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.642371782 +0000 UTC" 2025-12-13T00:13:23.394178009+00:00 stderr F I1213 00:13:23.394155 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="deployer-dockercfg-nhhff" serviceaccount="deployer" 2025-12-13T00:13:23.414675877+00:00 stderr F I1213 00:13:23.414610 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" 
ns="openshift-ovn-kubernetes" name="ovn-kubernetes-control-plane-dockercfg-76h6h" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.634170266 +0000 UTC" 2025-12-13T00:13:23.414675877+00:00 stderr F I1213 00:13:23.414646 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="ovn-kubernetes-control-plane-dockercfg-76h6h" serviceaccount="ovn-kubernetes-control-plane" 2025-12-13T00:13:23.448767242+00:00 stderr F I1213 00:13:23.448699 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-ovn-kubernetes" name="ovn-kubernetes-node-dockercfg-jpwlq" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:35 +0000 UTC" refreshTime="2025-06-26 10:14:27.620534616 +0000 UTC" 2025-12-13T00:13:23.448767242+00:00 stderr F I1213 00:13:23.448728 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-ovn-kubernetes" name="ovn-kubernetes-node-dockercfg-jpwlq" serviceaccount="ovn-kubernetes-node" 2025-12-13T00:13:23.469648735+00:00 stderr F I1213 00:13:23.469561 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-route-controller-manager" name="builder-dockercfg-6nwr2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:29.012189696 +0000 UTC" 2025-12-13T00:13:23.469648735+00:00 stderr F I1213 00:13:23.469585 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-route-controller-manager" name="builder-dockercfg-6nwr2" serviceaccount="builder" 2025-12-13T00:13:23.481947648+00:00 stderr F I1213 00:13:23.481881 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-route-controller-manager" 
name="default-dockercfg-gd9rc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:29.007264154 +0000 UTC" 2025-12-13T00:13:23.481947648+00:00 stderr F I1213 00:13:23.481905 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-route-controller-manager" name="default-dockercfg-gd9rc" serviceaccount="default" 2025-12-13T00:13:23.535787566+00:00 stderr F I1213 00:13:23.535728 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-route-controller-manager" name="deployer-dockercfg-6wg24" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.985723734 +0000 UTC" 2025-12-13T00:13:23.535787566+00:00 stderr F I1213 00:13:23.535762 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-route-controller-manager" name="deployer-dockercfg-6wg24" serviceaccount="deployer" 2025-12-13T00:13:23.542607876+00:00 stderr F I1213 00:13:23.542549 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-route-controller-manager" name="route-controller-manager-sa-dockercfg-9r4gl" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.982996046 +0000 UTC" 2025-12-13T00:13:23.542607876+00:00 stderr F I1213 00:13:23.542582 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-route-controller-manager" name="route-controller-manager-sa-dockercfg-9r4gl" serviceaccount="route-controller-manager-sa" 2025-12-13T00:13:23.588432806+00:00 stderr F I1213 00:13:23.588353 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca-operator" 
name="builder-dockercfg-bdttl" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.964678816 +0000 UTC" 2025-12-13T00:13:23.588432806+00:00 stderr F I1213 00:13:23.588398 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca-operator" name="builder-dockercfg-bdttl" serviceaccount="builder" 2025-12-13T00:13:23.595685740+00:00 stderr F I1213 00:13:23.595631 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca-operator" name="default-dockercfg-ptcxl" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.961761485 +0000 UTC" 2025-12-13T00:13:23.595685740+00:00 stderr F I1213 00:13:23.595655 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca-operator" name="default-dockercfg-ptcxl" serviceaccount="default" 2025-12-13T00:13:23.620965669+00:00 stderr F I1213 00:13:23.620837 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca-operator" name="deployer-dockercfg-2k6vp" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.95167695 +0000 UTC" 2025-12-13T00:13:23.620965669+00:00 stderr F I1213 00:13:23.620866 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca-operator" name="deployer-dockercfg-2k6vp" serviceaccount="deployer" 2025-12-13T00:13:23.667793223+00:00 stderr F I1213 00:13:23.667690 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca-operator" name="service-ca-operator-dockercfg-zrj8d" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 
UTC" refreshTime="2025-06-26 10:14:28.932938578 +0000 UTC" 2025-12-13T00:13:23.667793223+00:00 stderr F I1213 00:13:23.667721 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca-operator" name="service-ca-operator-dockercfg-zrj8d" serviceaccount="service-ca-operator" 2025-12-13T00:13:23.688709915+00:00 stderr F I1213 00:13:23.688640 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca" name="builder-dockercfg-hklq6" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.924558519 +0000 UTC" 2025-12-13T00:13:23.688709915+00:00 stderr F I1213 00:13:23.688673 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca" name="builder-dockercfg-hklq6" serviceaccount="builder" 2025-12-13T00:13:23.722364036+00:00 stderr F I1213 00:13:23.722303 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca" name="default-dockercfg-r6bvc" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.91109179 +0000 UTC" 2025-12-13T00:13:23.722364036+00:00 stderr F I1213 00:13:23.722337 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca" name="default-dockercfg-r6bvc" serviceaccount="default" 2025-12-13T00:13:23.735745556+00:00 stderr F I1213 00:13:23.735673 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca" name="deployer-dockercfg-k7zp8" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.905743166 +0000 UTC" 2025-12-13T00:13:23.735745556+00:00 stderr F I1213 00:13:23.735713 1 image_pull_secret_controller.go:163] 
"Refreshing image pull secret" ns="openshift-service-ca" name="deployer-dockercfg-k7zp8" serviceaccount="deployer" 2025-12-13T00:13:23.754853938+00:00 stderr F I1213 00:13:23.754778 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-service-ca" name="service-ca-dockercfg-79vsd" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.898101418 +0000 UTC" 2025-12-13T00:13:23.754853938+00:00 stderr F I1213 00:13:23.754842 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-service-ca" name="service-ca-dockercfg-79vsd" serviceaccount="service-ca" 2025-12-13T00:13:23.762402361+00:00 stderr F W1213 00:13:23.762348 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2025-12-13T00:13:23.762402361+00:00 stderr F E1213 00:13:23.762374 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io) 2025-12-13T00:13:23.801376951+00:00 stderr F I1213 00:13:23.801168 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-user-workload-monitoring" name="builder-dockercfg-dktpk" url="image-registry.openshift-image-registry.svc.cluster.local:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.879548638 +0000 UTC" 2025-12-13T00:13:23.801376951+00:00 stderr F I1213 00:13:23.801201 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-user-workload-monitoring" name="builder-dockercfg-dktpk" serviceaccount="builder" 
2025-12-13T00:13:23.821159076+00:00 stderr F I1213 00:13:23.821107 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-user-workload-monitoring" name="default-dockercfg-qbbwv" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.871585047 +0000 UTC" 2025-12-13T00:13:23.821159076+00:00 stderr F I1213 00:13:23.821141 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-user-workload-monitoring" name="default-dockercfg-qbbwv" serviceaccount="default" 2025-12-13T00:13:23.877993566+00:00 stderr F I1213 00:13:23.877949 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-user-workload-monitoring" name="deployer-dockercfg-cxqvw" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.848850207 +0000 UTC" 2025-12-13T00:13:23.877993566+00:00 stderr F I1213 00:13:23.877978 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-user-workload-monitoring" name="deployer-dockercfg-cxqvw" serviceaccount="deployer" 2025-12-13T00:13:23.879229957+00:00 stderr F I1213 00:13:23.879188 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-vsphere-infra" name="builder-dockercfg-d58tr" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.848339674 +0000 UTC" 2025-12-13T00:13:23.879229957+00:00 stderr F I1213 00:13:23.879218 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-vsphere-infra" name="builder-dockercfg-d58tr" serviceaccount="builder" 2025-12-13T00:13:23.888851890+00:00 stderr F I1213 00:13:23.888613 1 image_pull_secret_controller.go:286] "Internal registry pull secret 
needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-vsphere-infra" name="default-dockercfg-mvb5j" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.844572963 +0000 UTC" 2025-12-13T00:13:23.888851890+00:00 stderr F I1213 00:13:23.888645 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-vsphere-infra" name="default-dockercfg-mvb5j" serviceaccount="default" 2025-12-13T00:13:23.934366570+00:00 stderr F I1213 00:13:23.934310 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift-vsphere-infra" name="deployer-dockercfg-tlw89" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.826287417 +0000 UTC" 2025-12-13T00:13:23.934366570+00:00 stderr F I1213 00:13:23.934335 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift-vsphere-infra" name="deployer-dockercfg-tlw89" serviceaccount="deployer" 2025-12-13T00:13:23.956377880+00:00 stderr F I1213 00:13:23.956305 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift" name="builder-dockercfg-7bl85" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.817490242 +0000 UTC" 2025-12-13T00:13:23.956377880+00:00 stderr F I1213 00:13:23.956329 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift" name="builder-dockercfg-7bl85" serviceaccount="builder" 2025-12-13T00:13:23.988772228+00:00 stderr F I1213 00:13:23.988704 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift" name="default-dockercfg-mqnf2" url="10.217.4.41:5000" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 
10:14:28.80453185 +0000 UTC" 2025-12-13T00:13:23.988772228+00:00 stderr F I1213 00:13:23.988735 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift" name="default-dockercfg-mqnf2" serviceaccount="default" 2025-12-13T00:13:24.002341804+00:00 stderr F I1213 00:13:24.001824 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="openshift" name="deployer-dockercfg-xhxvc" url="default-route-openshift-image-registry.apps-crc.testing" expirtyTime="2025-08-13 21:05:36 +0000 UTC" refreshTime="2025-06-26 10:14:28.799285706 +0000 UTC" 2025-12-13T00:13:24.002341804+00:00 stderr F I1213 00:13:24.001859 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openshift" name="deployer-dockercfg-xhxvc" serviceaccount="deployer" 2025-12-13T00:13:24.017446231+00:00 stderr F I1213 00:13:24.016653 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="default" name="builder-dockercfg-hn9nn" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-26 10:06:11.79335293 +0000 UTC" 2025-12-13T00:13:24.017446231+00:00 stderr F I1213 00:13:24.016695 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="default" name="builder-dockercfg-hn9nn" serviceaccount="builder" 2025-12-13T00:13:24.061376718+00:00 stderr F I1213 00:13:24.061055 1 image_pull_secret_controller.go:286] "Internal registry pull secret needs to be refreshed" reason="auth token needs to be refreshed" ns="default" name="deployer-dockercfg-rxncs" url="10.217.4.41:5000" expirtyTime="2025-08-13 20:59:41 +0000 UTC" refreshTime="2025-06-26 10:06:11.77560069 +0000 UTC" 2025-12-13T00:13:24.061376718+00:00 stderr F I1213 00:13:24.061089 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="default" name="deployer-dockercfg-rxncs" 
serviceaccount="deployer" 2025-12-13T00:13:24.415379463+00:00 stderr F W1213 00:13:24.415317 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2025-12-13T00:13:24.415379463+00:00 stderr F E1213 00:13:24.415359 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io) 2025-12-13T00:13:24.437928710+00:00 stderr F W1213 00:13:24.437879 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-12-13T00:13:24.437928710+00:00 stderr F E1213 00:13:24.437912 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: the server is currently unable to handle the request (get templateinstances.template.openshift.io) 2025-12-13T00:13:24.882249401+00:00 stderr F W1213 00:13:24.881752 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2025-12-13T00:13:24.882249401+00:00 stderr F E1213 00:13:24.882237 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.Image: failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io) 2025-12-13T00:13:25.111973860+00:00 stderr F W1213 00:13:25.111896 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2025-12-13T00:13:25.111973860+00:00 stderr F E1213 
00:13:25.111949 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io) 2025-12-13T00:13:26.026740349+00:00 stderr F W1213 00:13:26.026635 1 reflector.go:539] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-12-13T00:13:26.026740349+00:00 stderr F E1213 00:13:26.026705 1 reflector.go:147] k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-12-13T00:13:32.154324629+00:00 stderr F I1213 00:13:32.154022 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-12-13T00:13:32.658892254+00:00 stderr F I1213 00:13:32.658826 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-12-13T00:13:33.465958523+00:00 stderr F I1213 00:13:33.465874 1 reflector.go:351] Caches populated for *v1.BuildConfig from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-12-13T00:13:33.526917031+00:00 stderr F I1213 00:13:33.526857 1 buildconfig_controller.go:212] Starting buildconfig controller 2025-12-13T00:13:34.607053625+00:00 stderr F I1213 00:13:34.606955 1 reflector.go:351] Caches populated for *v1.TemplateInstance from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-12-13T00:13:34.627799273+00:00 stderr F I1213 00:13:34.627721 1 templateinstance_finalizer.go:194] Starting TemplateInstanceFinalizer controller 2025-12-13T00:13:34.700822167+00:00 stderr F I1213 00:13:34.700731 1 templateinstance_controller.go:297] Starting TemplateInstance controller 2025-12-13T00:13:35.412368606+00:00 
stderr F W1213 00:13:35.412304 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-12-13T00:13:35.423657136+00:00 stderr F I1213 00:13:35.423589 1 reflector.go:351] Caches populated for *v1.DeploymentConfig from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-12-13T00:13:35.428835560+00:00 stderr F W1213 00:13:35.428781 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-12-13T00:13:35.464844760+00:00 stderr F I1213 00:13:35.464747 1 factory.go:85] deploymentconfig controller caches are synced. Starting workers. 2025-12-13T00:13:35.847809848+00:00 stderr F I1213 00:13:35.847756 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v1.5.2/tools/cache/reflector.go:229 2025-12-13T00:13:35.947973684+00:00 stderr F I1213 00:13:35.947784 1 build_controller.go:503] Starting build controller 2025-12-13T00:13:35.947973684+00:00 stderr F I1213 00:13:35.947805 1 build_controller.go:505] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000 2025-12-13T00:19:40.374978695+00:00 stderr F I1213 00:19:40.370675 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack" name="default-dockercfg-bmhhr" expected=4 actual=0 2025-12-13T00:19:40.374978695+00:00 stderr F I1213 00:19:40.371221 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack" name="default-dockercfg-bmhhr" serviceaccount="default" 2025-12-13T00:19:40.374978695+00:00 stderr F I1213 00:19:40.371058 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack" name="deployer-dockercfg-srhn6" expected=4 actual=0 2025-12-13T00:19:40.374978695+00:00 stderr F I1213 00:19:40.371368 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" 
ns="openstack" name="deployer-dockercfg-srhn6" serviceaccount="deployer" 2025-12-13T00:19:40.378723158+00:00 stderr F I1213 00:19:40.378695 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack" name="builder-dockercfg-22wl9" expected=4 actual=0 2025-12-13T00:19:40.378781439+00:00 stderr F I1213 00:19:40.378762 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack" name="builder-dockercfg-22wl9" serviceaccount="builder" 2025-12-13T00:19:40.407985205+00:00 stderr F I1213 00:19:40.405670 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack" name="default-dockercfg-bmhhr" expected=4 actual=0 2025-12-13T00:19:40.407985205+00:00 stderr F I1213 00:19:40.405699 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack" name="default-dockercfg-bmhhr" serviceaccount="default" 2025-12-13T00:19:41.022188122+00:00 stderr F I1213 00:19:41.021690 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack-operators" name="builder-dockercfg-s7rn7" expected=4 actual=0 2025-12-13T00:19:41.022188122+00:00 stderr F I1213 00:19:41.022161 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack-operators" name="builder-dockercfg-s7rn7" serviceaccount="builder" 2025-12-13T00:19:41.022337706+00:00 stderr F I1213 00:19:41.021803 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack-operators" name="default-dockercfg-rq62v" expected=4 actual=0 2025-12-13T00:19:41.022348626+00:00 stderr F I1213 00:19:41.021824 1 image_pull_secret_controller.go:233] "Internal registry pull secret auth data does not contain the correct number of entries" ns="openstack-operators" 
name="deployer-dockercfg-5v2tb" expected=4 actual=0 2025-12-13T00:19:41.022378697+00:00 stderr F I1213 00:19:41.022353 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack-operators" name="deployer-dockercfg-5v2tb" serviceaccount="deployer" 2025-12-13T00:19:41.022543651+00:00 stderr F I1213 00:19:41.022482 1 image_pull_secret_controller.go:163] "Refreshing image pull secret" ns="openstack-operators" name="default-dockercfg-rq62v" serviceaccount="default" 2025-12-13T00:20:38.143409303+00:00 stderr F E1213 00:20:38.142758 1 leaderelection.go:332] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager/leases/openshift-master-controllers": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:04.146611343+00:00 stderr F E1213 00:21:04.146185 1 leaderelection.go:332] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager/leases/openshift-master-controllers": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:21.907563669+00:00 stderr F W1213 00:21:21.907080 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/1.log
2025-08-13T20:05:31.350875728+00:00 stdout F Overwriting root TLS certificate authority trust store 2025-08-13T20:05:34.560759116+00:00 stderr F I0813 20:05:34.538410 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:34.573181841+00:00 stderr F I0813 20:05:34.572394 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:35.152155431+00:00 stderr F I0813 20:05:35.151753 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:35.153203081+00:00 stderr F I0813 20:05:35.152517 1 leaderelection.go:250] attempting to acquire leader lease openshift-image-registry/openshift-master-controllers...
2025-08-13T20:05:35.236413874+00:00 stderr F I0813 20:05:35.234124 1 leaderelection.go:260] successfully acquired lease openshift-image-registry/openshift-master-controllers 2025-08-13T20:05:35.236594939+00:00 stderr F I0813 20:05:35.236561 1 main.go:33] Cluster Image Registry Operator Version: v4.16.0-202406131906.p0.g0fc07ed.assembly.stream.el9-dirty 2025-08-13T20:05:35.236630760+00:00 stderr F I0813 20:05:35.236617 1 main.go:34] Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime 2025-08-13T20:05:35.236661851+00:00 stderr F I0813 20:05:35.236649 1 main.go:35] Go OS/Arch: linux/amd64 2025-08-13T20:05:35.236703192+00:00 stderr F I0813 20:05:35.236680 1 main.go:66] Watching files [/var/run/configmaps/trusted-ca/tls-ca-bundle.pem /etc/secrets/tls.crt /etc/secrets/tls.key]... 2025-08-13T20:05:35.237208107+00:00 stderr F I0813 20:05:35.236940 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-image-registry", Name:"openshift-master-controllers", UID:"661e70ba-12a1-40d0-a89e-ec3849239f8f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-image-registry-operator-7769bd8d7d-q5cvv_54cb8f8a-f1fe-4304-818b-373b62828b0d became leader 2025-08-13T20:05:35.432352325+00:00 stderr F I0813 20:05:35.432268 1 metrics.go:88] Starting MetricsController 2025-08-13T20:05:35.433254951+00:00 stderr F I0813 20:05:35.433184 1 imageregistrycertificates.go:207] Starting ImageRegistryCertificatesController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.435108 1 clusteroperator.go:143] Starting ClusterOperatorStatusController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.435151 1 nodecadaemon.go:202] Starting NodeCADaemonController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.433210 1 imageconfig.go:86] Starting ImageConfigController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.435274 1 azurestackcloud.go:172] Starting 
AzureStackCloudController 2025-08-13T20:05:35.441323172+00:00 stderr F I0813 20:05:35.435290 1 azurepathfixcontroller.go:317] Starting AzurePathFixController 2025-08-13T20:05:35.444151743+00:00 stderr F I0813 20:05:35.441931 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:35.916298394+00:00 stderr F I0813 20:05:35.900142 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:35.916298394+00:00 stderr F I0813 20:05:35.900192 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:05:35.916298394+00:00 stderr F I0813 20:05:35.900349 1 nodecadaemon.go:209] Started NodeCADaemonController 2025-08-13T20:05:35.977097965+00:00 stderr F W0813 20:05:35.881106 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:35.977097965+00:00 stderr F E0813 20:05:35.920501 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:35.977097965+00:00 stderr F I0813 20:05:35.924914 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:05:35.977212858+00:00 stderr F W0813 20:05:35.896004 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:35.977301521+00:00 stderr F E0813 20:05:35.977280 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list 
*v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:35.977617480+00:00 stderr F I0813 20:05:35.977441 1 azurestackcloud.go:179] Started AzureStackCloudController 2025-08-13T20:05:35.996186362+00:00 stderr F I0813 20:05:35.988230 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.013418735+00:00 stderr F I0813 20:05:36.013360 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.030196465+00:00 stderr F I0813 20:05:36.028088 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.035365954+00:00 stderr F I0813 20:05:36.035338 1 clusteroperator.go:150] Started ClusterOperatorStatusController 2025-08-13T20:05:36.036283540+00:00 stderr F I0813 20:05:36.036260 1 azurepathfixcontroller.go:324] Started AzurePathFixController 2025-08-13T20:05:36.073980569+00:00 stderr F I0813 20:05:36.067391 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.133655429+00:00 stderr F I0813 20:05:36.133375 1 controllerimagepruner.go:386] Starting ImagePrunerController 2025-08-13T20:05:36.280345169+00:00 stderr F I0813 20:05:36.280188 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:05:36.333671476+00:00 stderr F I0813 20:05:36.333611 1 imageregistrycertificates.go:214] Started ImageRegistryCertificatesController 2025-08-13T20:05:37.020230946+00:00 stderr F W0813 20:05:37.018982 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:37.020230946+00:00 stderr F E0813 20:05:37.019044 1 reflector.go:147] 
github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:37.281066526+00:00 stderr F W0813 20:05:37.280612 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:37.281066526+00:00 stderr F E0813 20:05:37.280700 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:39.504821258+00:00 stderr F W0813 20:05:39.504581 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:39.504821258+00:00 stderr F E0813 20:05:39.504637 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:40.202603800+00:00 stderr F W0813 20:05:40.202537 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:40.202693703+00:00 stderr F E0813 20:05:40.202679 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get 
imagestreams.image.openshift.io) 2025-08-13T20:05:45.347291943+00:00 stderr F W0813 20:05:45.346582 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:45.347291943+00:00 stderr F E0813 20:05:45.347206 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:05:45.904703685+00:00 stderr F W0813 20:05:45.904527 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:45.904703685+00:00 stderr F E0813 20:05:45.904584 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:05:52.005179407+00:00 stderr F I0813 20:05:52.004586 1 reflector.go:351] Caches populated for *v1.ImageStream from github.com/openshift/client-go/image/informers/externalversions/factory.go:125 2025-08-13T20:05:52.033375184+00:00 stderr F I0813 20:05:52.033311 1 metrics.go:94] Started MetricsController 2025-08-13T20:05:54.197518657+00:00 stderr F I0813 20:05:54.197085 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:05:54.236459062+00:00 stderr F I0813 20:05:54.236312 1 imageconfig.go:93] Started ImageConfigController 2025-08-13T20:05:54.236585266+00:00 stderr F I0813 20:05:54.236562 1 controller.go:452] Starting Controller 
2025-08-13T20:07:14.860751489+00:00 stderr F E0813 20:07:14.860066 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:07:14.864649781+00:00 stderr F E0813 20:07:14.864593 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:07:30.246382098+00:00 stderr F W0813 20:07:30.245491 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:07:30.246382098+00:00 stderr F E0813 20:07:30.246168 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:07:37.313181438+00:00 stderr F I0813 20:07:37.312479 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:08:10.069237273+00:00 stderr F I0813 20:08:10.067066 1 reflector.go:351] Caches populated for *v1.ImageStream from github.com/openshift/client-go/image/informers/externalversions/factory.go:125 2025-08-13T20:08:35.407215323+00:00 stderr F E0813 20:08:35.406443 1 leaderelection.go:332] error retrieving resource lock openshift-image-registry/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-image-registry/leases/openshift-master-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:09:29.484992848+00:00 stderr F I0813 20:09:29.484498 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:33.942276903+00:00 stderr F I0813 20:09:33.941268 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:36.313329223+00:00 stderr F I0813 20:09:36.312206 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:37.685525694+00:00 stderr F I0813 20:09:37.685404 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:39.680454581+00:00 stderr F I0813 20:09:39.678471 1 reflector.go:351] Caches populated for *v1.ImagePruner from github.com/openshift/client-go/imageregistry/informers/externalversions/factory.go:125 2025-08-13T20:09:42.768944279+00:00 stderr F I0813 20:09:42.767471 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:43.115880377+00:00 stderr F I0813 20:09:43.115025 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:44.458981734+00:00 stderr F I0813 20:09:44.458753 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:48.363674635+00:00 stderr F I0813 20:09:48.363526 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:49.508340314+00:00 stderr F I0813 20:09:49.507835 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:52.284846749+00:00 stderr F I0813 20:09:52.284085 1 reflector.go:351] Caches populated for *v1.Infrastructure from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:57.366653799+00:00 stderr F I0813 20:09:57.364300 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:07.683255424+00:00 stderr F I0813 20:10:07.681191 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:15.606673474+00:00 stderr F I0813 20:10:15.605733 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:15.953871779+00:00 stderr F I0813 20:10:15.953646 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:29.627732939+00:00 stderr F I0813 20:10:29.627031 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:30.282694577+00:00 stderr F I0813 20:10:30.282122 1 reflector.go:351] Caches populated for *v1.Config from github.com/openshift/client-go/imageregistry/informers/externalversions/factory.go:125 2025-08-13T20:10:33.630065369+00:00 stderr F I0813 20:10:33.628984 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:43.053059944+00:00 stderr F I0813 20:10:43.052423 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:43.982593184+00:00 stderr F I0813 20:10:43.981760 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:42:36.400650211+00:00 stderr F I0813 20:42:36.399905 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.404340367+00:00 stderr F I0813 20:42:36.404002 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415105747+00:00 stderr 
F I0813 20:42:36.408974 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415105747+00:00 stderr F I0813 20:42:36.409471 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.425743904+00:00 stderr F I0813 20:42:36.398202 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.499699596+00:00 stderr F I0813 20:42:36.499303 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516513441+00:00 stderr F I0813 20:42:36.516428 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.541373478+00:00 stderr F I0813 20:42:36.541150 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568582222+00:00 stderr F I0813 20:42:36.568013 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.584575623+00:00 stderr F I0813 20:42:36.584446 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.584976065+00:00 stderr F I0813 20:42:36.584877 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.585458739+00:00 stderr F I0813 20:42:36.585398 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.586120668+00:00 stderr F I0813 20:42:36.586047 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.586513629+00:00 stderr F I0813 20:42:36.586436 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.586865359+00:00 stderr F I0813 20:42:36.585473 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.586865359+00:00 stderr F I0813 20:42:36.586739 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.587276561+00:00 stderr F I0813 20:42:36.587143 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.587560869+00:00 stderr F I0813 20:42:36.586722 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.588023573+00:00 stderr F I0813 20:42:36.587939 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.588713133+00:00 stderr F I0813 20:42:36.588590 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.588987971+00:00 stderr F I0813 20:42:36.588931 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.589666410+00:00 stderr F I0813 20:42:36.586706 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.715605201+00:00 stderr F E0813 20:42:36.715085 1 leaderelection.go:332] error retrieving resource lock openshift-image-registry/openshift-master-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-image-registry/leases/openshift-master-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.215010560+00:00 stderr F I0813 20:42:39.213109 1 main.go:52] Received SIGTERM or SIGINT signal, shutting down the operator. 2025-08-13T20:42:39.217512372+00:00 stderr F I0813 20:42:39.217454 1 controllerimagepruner.go:390] Shutting down ImagePrunerController ... 2025-08-13T20:42:39.217538183+00:00 stderr F I0813 20:42:39.217517 1 controller.go:456] Shutting down Controller ... 
2025-08-13T20:42:39.217561664+00:00 stderr F I0813 20:42:39.217544 1 imageconfig.go:95] Shutting down ImageConfigController 2025-08-13T20:42:39.217573774+00:00 stderr F I0813 20:42:39.217567 1 metrics.go:96] Shutting down MetricsController 2025-08-13T20:42:39.218483870+00:00 stderr F I0813 20:42:39.217577 1 imageregistrycertificates.go:216] Shutting down ImageRegistryCertificatesController 2025-08-13T20:42:39.218851141+00:00 stderr F I0813 20:42:39.218713 1 leaderelection.go:285] failed to renew lease openshift-image-registry/openshift-master-controllers: timed out waiting for the condition 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222866 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222930 1 azurepathfixcontroller.go:326] Shutting down AzurePathFixController 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222943 1 clusteroperator.go:152] Shutting down ClusterOperatorStatusController 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222953 1 azurestackcloud.go:181] Shutting down AzureStackCloudController 2025-08-13T20:42:39.223121694+00:00 stderr F I0813 20:42:39.222962 1 nodecadaemon.go:211] Shutting down NodeCADaemonController 2025-08-13T20:42:39.224584866+00:00 stderr F I0813 20:42:39.223740 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:42:39.226353487+00:00 stderr F I0813 20:42:39.226310 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:39.229605441+00:00 stderr F E0813 20:42:39.229511 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-image-registry/leases/openshift-master-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.230585389+00:00 stderr F W0813 20:42:39.230513 1 leaderelection.go:84] leader election lost home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/0.log0000644000175000017500000013306715117130646033045 0ustar zuulzuul2025-08-13T19:59:07.238527301+00:00 stdout F Overwriting root TLS certificate authority trust store 2025-08-13T19:59:32.468279537+00:00 stderr F I0813 19:59:32.410410 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:32.532267201+00:00 stderr F I0813 19:59:32.529815 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:34.559928351+00:00 stderr F I0813 19:59:34.555025 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:34.559928351+00:00 stderr F I0813 19:59:34.559332 1 leaderelection.go:250] attempting to acquire leader lease openshift-image-registry/openshift-master-controllers... 
2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.698956 1 leaderelection.go:260] successfully acquired lease openshift-image-registry/openshift-master-controllers 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.699759 1 main.go:33] Cluster Image Registry Operator Version: v4.16.0-202406131906.p0.g0fc07ed.assembly.stream.el9-dirty 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.699856 1 main.go:34] Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.699863 1 main.go:35] Go OS/Arch: linux/amd64 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.699996 1 main.go:66] Watching files [/var/run/configmaps/trusted-ca/tls-ca-bundle.pem /etc/secrets/tls.crt /etc/secrets/tls.key]... 2025-08-13T19:59:34.716124384+00:00 stderr F I0813 19:59:34.706495 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-image-registry", Name:"openshift-master-controllers", UID:"661e70ba-12a1-40d0-a89e-ec3849239f8f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28177", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-image-registry-operator-7769bd8d7d-q5cvv_6e4ef831-9ca6-4914-905d-7214cad92174 became leader 2025-08-13T19:59:36.902238389+00:00 stderr F I0813 19:59:36.899333 1 metrics.go:88] Starting MetricsController 2025-08-13T19:59:36.902238389+00:00 stderr F I0813 19:59:36.901727 1 nodecadaemon.go:202] Starting NodeCADaemonController 2025-08-13T19:59:37.182626381+00:00 stderr F I0813 19:59:37.158995 1 azurestackcloud.go:172] Starting AzureStackCloudController 2025-08-13T19:59:37.197080523+00:00 stderr F I0813 19:59:37.189139 1 azurepathfixcontroller.go:317] Starting AzurePathFixController 2025-08-13T19:59:37.197080523+00:00 stderr F I0813 19:59:37.191135 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:37.197080523+00:00 stderr F I0813 19:59:37.191173 1 imageconfig.go:86] Starting 
ImageConfigController 2025-08-13T19:59:37.243681972+00:00 stderr F I0813 19:59:37.242191 1 imageregistrycertificates.go:207] Starting ImageRegistryCertificatesController 2025-08-13T19:59:37.247402978+00:00 stderr F I0813 19:59:37.247377 1 clusteroperator.go:143] Starting ClusterOperatorStatusController 2025-08-13T19:59:37.413710078+00:00 stderr F I0813 19:59:37.381885 1 azurestackcloud.go:179] Started AzureStackCloudController 2025-08-13T19:59:37.501316355+00:00 stderr F I0813 19:59:37.444165 1 nodecadaemon.go:209] Started NodeCADaemonController 2025-08-13T19:59:38.007381131+00:00 stderr F E0813 19:59:38.007004 1 azurestackcloud.go:76] AzureStackCloudController: unable to sync: config.imageregistry.operator.openshift.io "cluster" not found, requeuing 2025-08-13T19:59:39.197697421+00:00 stderr F I0813 19:59:39.197219 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:39.522447768+00:00 stderr F I0813 19:59:39.197982 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T19:59:39.591490816+00:00 stderr F I0813 19:59:39.577516 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:39.591490816+00:00 stderr F I0813 19:59:39.579410 1 reflector.go:351] Caches populated for *v1.ImagePruner from github.com/openshift/client-go/imageregistry/informers/externalversions/factory.go:125 2025-08-13T19:59:39.591490816+00:00 stderr F W0813 19:59:39.579746 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:39.591490816+00:00 stderr F E0813 19:59:39.580001 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:39.652526036+00:00 stderr F W0813 19:59:39.652136 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:39.652526036+00:00 stderr F E0813 19:59:39.652274 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:39.781990576+00:00 stderr F I0813 19:59:39.757255 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:39.901159103+00:00 stderr F I0813 19:59:39.881095 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:39.901159103+00:00 stderr F 
I0813 19:59:39.888743 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:39.901159103+00:00 stderr F I0813 19:59:39.889591 1 azurepathfixcontroller.go:324] Started AzurePathFixController 2025-08-13T19:59:39.948356619+00:00 stderr F I0813 19:59:39.948219 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:39.983099779+00:00 stderr F I0813 19:59:39.982996 1 clusteroperator.go:150] Started ClusterOperatorStatusController 2025-08-13T19:59:40.086146276+00:00 stderr F I0813 19:59:40.079998 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:40.148072042+00:00 stderr F I0813 19:59:40.148002 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:40.246921819+00:00 stderr F I0813 19:59:40.246315 1 imageregistrycertificates.go:214] Started ImageRegistryCertificatesController 2025-08-13T19:59:40.820345945+00:00 stderr F W0813 19:59:40.797598 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:40.820481959+00:00 stderr F E0813 19:59:40.820460 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:41.074455408+00:00 stderr F W0813 19:59:41.074352 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:41.074455408+00:00 stderr F E0813 
19:59:41.074410 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:41.135983282+00:00 stderr F I0813 19:59:41.135814 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: removed:apiVersion="config.openshift.io/v1", removed:kind="ClusterOperator", changed:metadata.managedFields.2.time={"2024-06-27T13:34:18Z" -> "2025-08-13T19:59:40Z"}, changed:metadata.resourceVersion={"23930" -> "28282"}, changed:status.conditions.0.message={"Available: The deployment does not have available replicas\nNodeCADaemonAvailable: The daemon set node-ca does not have available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The deployment does not have available replicas\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"NoReplicasAvailable::NodeCADaemonNoAvailableReplicas" -> "NoReplicasAvailable"}, changed:status.conditions.1.message={"Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deploying node pods" -> "Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed"}, changed:status.conditions.1.reason={"DeploymentNotCompleted::NodeCADaemonUnavailable" -> "DeploymentNotCompleted"} 2025-08-13T19:59:41.141502780+00:00 stderr F I0813 19:59:41.136956 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:41.307344807+00:00 stderr F I0813 19:59:41.306542 1 controllerimagepruner.go:386] Starting ImagePrunerController 2025-08-13T19:59:42.817613347+00:00 stderr F W0813 19:59:42.814334 1 reflector.go:539] 
github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:42.817613347+00:00 stderr F E0813 19:59:42.815089 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:43.437966550+00:00 stderr F W0813 19:59:43.437445 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:43.437966550+00:00 stderr F E0813 19:59:43.437509 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:47.824966453+00:00 stderr F W0813 19:59:47.824230 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:47.824966453+00:00 stderr F E0813 19:59:47.824864 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:48.187447036+00:00 stderr F W0813 19:59:48.187101 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 
2025-08-13T19:59:48.187447036+00:00 stderr F E0813 19:59:48.187195 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:54.721302158+00:00 stderr F W0813 19:59:54.720490 1 reflector.go:539] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:54.721302158+00:00 stderr F E0813 19:59:54.721160 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T19:59:58.226936117+00:00 stderr F W0813 19:59:58.226058 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:58.226936117+00:00 stderr F E0813 19:59:58.226686 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:00:15.597920077+00:00 stderr F I0813 20:00:15.596690 1 reflector.go:351] Caches populated for *v1.ImageStream from github.com/openshift/client-go/image/informers/externalversions/factory.go:125 2025-08-13T20:00:15.602244620+00:00 stderr F I0813 20:00:15.602202 1 metrics.go:94] Started MetricsController 2025-08-13T20:00:17.585059728+00:00 stderr F I0813 20:00:17.568887 1 generator.go:62] object *v1.ConfigMap, Namespace=openshift-image-registry, Name=serviceca updated: 
removed:metadata.annotations.openshift.io/description="Configmap is added/updated with a data item containing the CA signing bundle that can be used to verify service-serving certificates", removed:metadata.annotations.openshift.io/owning-component="service-ca", changed:metadata.resourceVersion={"29256" -> "29269"} 2025-08-13T20:00:17.629035862+00:00 stderr F I0813 20:00:17.598084 1 generator.go:62] object *v1.ConfigMap, Namespace=openshift-image-registry, Name=image-registry-certificates updated: changed:data.image-registry.openshift-image-registry.svc..5000={"-----BEGIN CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIQ8BvhCMTtRcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA4MjUxMjQ3MDZaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBSHmdGnJknQSxTvpkMa8GYETWnG\n0TANBgkqhkiG9w0BAQsFAAOCAQEAAoJbZdck+sGqSy8/82bjeTrcuNzegezF5kjp\nLRqVOHCh7gDbM2kVONtVb642umD5+RFzLDUZZlDyuMV9eCpAZH44DIseU89LC/+t\nLmmKaXSbWjExzoU1bdCCV8DnuiIEkeRmuzs7NC0IjZHxhSxbcEsicRWuyDTsBPbG\nTyuolJjhPa1uKZaEk2ASdT9SgLRFbjPGn13MKX3FwgohOQaPEPqWEPDbPljNkROi\nQsb9CUeZg+vsRRhZiADo/PamU9mLJ/8ePXIBzcE0VdgmRmDa4H92gV0ie+/fFrU3\nRBZMOM4iG1SW0yexNbJ+DYmxvDdxP/3btkulPGxFKue/4fzlyw==\n-----END CERTIFICATE-----\n" -> "-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"}, changed:data.image-registry.openshift-image-registry.svc.cluster.local..5000={"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIQ8BvhCMTtRcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA4MjUxMjQ3MDZaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBSHmdGnJknQSxTvpkMa8GYETWnG\n0TANBgkqhkiG9w0BAQsFAAOCAQEAAoJbZdck+sGqSy8/82bjeTrcuNzegezF5kjp\nLRqVOHCh7gDbM2kVONtVb642umD5+RFzLDUZZlDyuMV9eCpAZH44DIseU89LC/+t\nLmmKaXSbWjExzoU1bdCCV8DnuiIEkeRmuzs7NC0IjZHxhSxbcEsicRWuyDTsBPbG\nTyuolJjhPa1uKZaEk2ASdT9SgLRFbjPGn13MKX3FwgohOQaPEPqWEPDbPljNkROi\nQsb9CUeZg+vsRRhZiADo/PamU9mLJ/8ePXIBzcE0VdgmRmDa4H92gV0ie+/fFrU3\nRBZMOM4iG1SW0yexNbJ+DYmxvDdxP/3btkulPGxFKue/4fzlyw==\n-----END CERTIFICATE-----\n" -> "-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"}, changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:f0bdbd4b4b483117b7890bf3f3d072b58bd5c5ac221322fb5c069a25e1e8cb66" -> "sha256:653bf3ca288d3df55507222afe41f33beb70f23514182de2cc295244a1d79403"}, changed:metadata.managedFields.0.time={"2024-06-27T13:18:57Z" -> "2025-08-13T20:00:17Z"}, changed:metadata.resourceVersion={"18030" -> "29265"} 2025-08-13T20:00:17.949627914+00:00 stderr F I0813 20:00:17.948689 1 generator.go:62] object *v1.ConfigMap, Namespace=openshift-config-managed, Name=image-registry-ca updated: changed:data.image-registry.openshift-image-registry.svc..5000={"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIQ8BvhCMTtRcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA4MjUxMjQ3MDZaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBSHmdGnJknQSxTvpkMa8GYETWnG\n0TANBgkqhkiG9w0BAQsFAAOCAQEAAoJbZdck+sGqSy8/82bjeTrcuNzegezF5kjp\nLRqVOHCh7gDbM2kVONtVb642umD5+RFzLDUZZlDyuMV9eCpAZH44DIseU89LC/+t\nLmmKaXSbWjExzoU1bdCCV8DnuiIEkeRmuzs7NC0IjZHxhSxbcEsicRWuyDTsBPbG\nTyuolJjhPa1uKZaEk2ASdT9SgLRFbjPGn13MKX3FwgohOQaPEPqWEPDbPljNkROi\nQsb9CUeZg+vsRRhZiADo/PamU9mLJ/8ePXIBzcE0VdgmRmDa4H92gV0ie+/fFrU3\nRBZMOM4iG1SW0yexNbJ+DYmxvDdxP/3btkulPGxFKue/4fzlyw==\n-----END CERTIFICATE-----\n" -> "-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"}, changed:data.image-registry.openshift-image-registry.svc.cluster.local..5000={"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIQ8BvhCMTtRcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA4MjUxMjQ3MDZaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBSHmdGnJknQSxTvpkMa8GYETWnG\n0TANBgkqhkiG9w0BAQsFAAOCAQEAAoJbZdck+sGqSy8/82bjeTrcuNzegezF5kjp\nLRqVOHCh7gDbM2kVONtVb642umD5+RFzLDUZZlDyuMV9eCpAZH44DIseU89LC/+t\nLmmKaXSbWjExzoU1bdCCV8DnuiIEkeRmuzs7NC0IjZHxhSxbcEsicRWuyDTsBPbG\nTyuolJjhPa1uKZaEk2ASdT9SgLRFbjPGn13MKX3FwgohOQaPEPqWEPDbPljNkROi\nQsb9CUeZg+vsRRhZiADo/PamU9mLJ/8ePXIBzcE0VdgmRmDa4H92gV0ie+/fFrU3\nRBZMOM4iG1SW0yexNbJ+DYmxvDdxP/3btkulPGxFKue/4fzlyw==\n-----END CERTIFICATE-----\n" -> "-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"}, changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:d537ad4b8475d1cab44f6a9af72391f500985389b6664e1dba8e546d76b40b02" -> "sha256:8a4b2012ebacc19d57e4aae623e3012d9b0e3cf1957da5588850ec19f97baa40"}, changed:metadata.managedFields.0.time={"2024-06-27T13:18:53Z" -> "2025-08-13T20:00:17Z"}, changed:metadata.resourceVersion={"17963" -> "29281"} 2025-08-13T20:00:22.798386460+00:00 stderr F I0813 20:00:22.779164 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:00:22.798386460+00:00 stderr F I0813 20:00:22.792146 1 imageconfig.go:93] Started ImageConfigController 2025-08-13T20:00:22.848609373+00:00 stderr F I0813 20:00:22.848251 1 
controller.go:452] Starting Controller 2025-08-13T20:00:23.585293700+00:00 stderr F I0813 20:00:23.584565 1 generator.go:62] object *v1.Secret, Namespace=openshift-image-registry, Name=installation-pull-secrets updated: changed:data..dockerconfigjson={ -> }, changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:085fdb2709b57d501872b4e20b38e3618d21be40f24851b4fad2074469e1fa6d" -> "sha256:134d2023417aa99dc70c099f12731fc3d94cb8fe5fef3d499d5c1ff70d124cfb"}, changed:metadata.managedFields.0.time={"2024-06-27T13:34:15Z" -> "2025-08-13T20:00:23Z"}, changed:metadata.resourceVersion={"23543" -> "29461"} 2025-08-13T20:00:24.745952665+00:00 stderr F I0813 20:00:24.745340 1 apps.go:154] Deployment "openshift-image-registry/image-registry" changes: {"metadata":{"annotations":{"imageregistry.operator.openshift.io/checksum":"sha256:cee8150fa701d72b2f46d7bcceb33c236c438f750bbc1b40d8b605e0b4d71ff6","operator.openshift.io/spec-hash":"2cb17909e0e1b27c8d2f6aa675d4f102e5035c1403d0e41f721e84207e2599da"}},"spec":{"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"imageregistry.operator.openshift.io/dependencies-checksum":"sha256:c059b55dea10a776646453f9fb9086f121d0269969b8a65f980859a763a4ec10"}},"spec":{"containers":[{"command":["/bin/sh","-c","mkdir -p /etc/pki/ca-trust/extracted/edk2 /etc/pki/ca-trust/extracted/java /etc/pki/ca-trust/extracted/openssl /etc/pki/ca-trust/extracted/pem \u0026\u0026 update-ca-trust extract \u0026\u0026 exec 
/usr/bin/dockerregistry"],"env":[{"name":"REGISTRY_STORAGE","value":"filesystem"},{"name":"REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY","value":"/registry"},{"name":"REGISTRY_HTTP_ADDR","value":":5000"},{"name":"REGISTRY_HTTP_NET","value":"tcp"},{"name":"REGISTRY_HTTP_SECRET","value":"56a187e4d57194474f191fc1c74d365975dcd1d1b128301855ac1d25d0b497b3a0ba226d71cd71c0d3f1e86c45bc36460c308cfd8ce5ec72262061fe2bf42b78"},{"name":"REGISTRY_LOG_LEVEL","value":"info"},{"name":"REGISTRY_OPENSHIFT_QUOTA_ENABLED","value":"true"},{"name":"REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR","value":"inmemory"},{"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_INTERVAL","value":"10s"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_THRESHOLD","value":"1"},{"name":"REGISTRY_OPENSHIFT_METRICS_ENABLED","value":"true"},{"name":"REGISTRY_OPENSHIFT_SERVER_ADDR","value":"image-registry.openshift-image-registry.svc:5000"},{"name":"REGISTRY_HTTP_TLS_CERTIFICATE","value":"/etc/secrets/tls.crt"},{"name":"REGISTRY_HTTP_TLS_KEY","value":"/etc/secrets/tls.key"}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":5,"timeoutSeconds":5},"name":"registry","ports":[{"containerPort":5000,"protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":15,"timeoutSeconds":5},"resources":{"requests":{"cpu":"100m","memory":"256Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/registry","name":"registry-storage"},{"mountPath":"/etc/secrets","name":"registry-tls"},{"mountPath":"/etc/pki/ca-trust/extracted","name":"ca-trust-extracted"},{"mountPath":"/etc/pki/ca-trust/source/anchors","name":"registry-ce
rtificates"},{"mountPath":"/usr/share/pki/ca-trust-source","name":"trusted-ca"},{"mountPath":"/var/lib/kubelet/","name":"installation-pull-secrets"},{"mountPath":"/var/run/secrets/openshift/serviceaccount","name":"bound-sa-token","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"volumes":[{"name":"registry-storage","persistentVolumeClaim":{"claimName":"crc-image-registry-storage"}},{"name":"registry-tls","projected":{"sources":[{"secret":{"name":"image-registry-tls"}}]}},{"emptyDir":{},"name":"ca-trust-extracted"},{"configMap":{"name":"image-registry-certificates"},"name":"registry-certificates"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"anchors/ca-bundle.crt"}],"name":"trusted-ca","optional":true},"name":"trusted-ca"},{"name":"installation-pull-secrets","secret":{"items":[{"key":".dockerconfigjson","path":"config.json"}],"optional":true,"secretName":"installation-pull-secrets"}},{"name":"bound-sa-token","projected":{"sources":[{"serviceAccountToken":{"audience":"openshift","path":"token"}}]}}]}}}} 2025-08-13T20:00:24.918966429+00:00 stderr F I0813 20:00:24.918342 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator", UID:"485aecbc-d986-4290-a12b-2be6eccbc76c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/image-registry -n openshift-image-registry because it changed 2025-08-13T20:00:24.975235403+00:00 stderr F I0813 20:00:24.965374 1 generator.go:62] object *v1.Deployment, Namespace=openshift-image-registry, Name=image-registry updated: changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:f7ef6312b4aa9a5819f99115a43ad18318ade18e78d0d43d8f8db34ee8a97e8d" -> "sha256:cee8150fa701d72b2f46d7bcceb33c236c438f750bbc1b40d8b605e0b4d71ff6"}, 
changed:metadata.annotations.operator.openshift.io/spec-hash={"944e90126c7956be2484e645afe5c783cacf55a40f11cb132e07b25294ee50fa" -> "2cb17909e0e1b27c8d2f6aa675d4f102e5035c1403d0e41f721e84207e2599da"}, changed:metadata.generation={"3.000000" -> "4.000000"}, changed:metadata.managedFields.0.manager={"cluster-image-registry-operator" -> "kube-controller-manager"}, added:metadata.managedFields.0.subresource="status", changed:metadata.managedFields.0.time={"2024-06-27T13:34:15Z" -> "2025-08-13T19:49:58Z"}, changed:metadata.managedFields.1.manager={"kube-controller-manager" -> "cluster-image-registry-operator"}, removed:metadata.managedFields.1.subresource="status", changed:metadata.managedFields.1.time={"2025-08-13T19:49:58Z" -> "2025-08-13T20:00:24Z"}, changed:metadata.resourceVersion={"25235" -> "29506"}, changed:spec.template.metadata.annotations.imageregistry.operator.openshift.io/dependencies-checksum={"sha256:c4eb23334aa8e38243d604088f7d430c98cd061e527b1f39182877df0dc8680c" -> "sha256:c059b55dea10a776646453f9fb9086f121d0269969b8a65f980859a763a4ec10"} 2025-08-13T20:00:25.031080526+00:00 stderr F I0813 20:00:25.029193 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.2.lastTransitionTime={"2024-06-26T12:53:37Z" -> "2025-08-13T20:00:25Z"}, added:status.conditions.2.message="The deployment does not have available replicas", added:status.conditions.2.reason="Unavailable", changed:status.conditions.2.status={"False" -> "True"}, changed:status.generations.1.lastGeneration={"3.000000" -> "4.000000"} 2025-08-13T20:00:25.122118682+00:00 stderr F I0813 20:00:25.122061 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.2.time={"2025-08-13T19:59:40Z" -> "2025-08-13T20:00:25Z"}, changed:metadata.resourceVersion={"28282" -> "29540"}, changed:status.conditions.2.lastTransitionTime={"2024-06-26T12:53:37Z" -> "2025-08-13T20:00:25Z"}, 
added:status.conditions.2.message="Degraded: The deployment does not have available replicas", changed:status.conditions.2.reason={"AsExpected" -> "Unavailable"}, changed:status.conditions.2.status={"False" -> "True"} 2025-08-13T20:00:49.469631363+00:00 stderr F E0813 20:00:49.468721 1 reflector.go:147] github.com/openshift/client-go/image/informers/externalversions/factory.go:125: Failed to watch *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io) 2025-08-13T20:00:49.493293998+00:00 stderr F E0813 20:00:49.489473 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:09.481028325+00:00 stderr F I0813 20:01:09.479275 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.1.lastTransitionTime={"2024-06-27T13:34:18Z" -> "2025-08-13T20:01:09Z"}, changed:status.conditions.1.message={"The deployment does not have available replicas" -> "The registry has minimum availability"}, changed:status.conditions.1.reason={"NoReplicasAvailable" -> "MinimumAvailability"}, changed:status.conditions.1.status={"False" -> "True"}, changed:status.conditions.2.lastTransitionTime={"2025-08-13T20:00:25Z" -> "2025-08-13T20:01:09Z"}, removed:status.conditions.2.message="The deployment does not have available replicas", removed:status.conditions.2.reason="Unavailable", changed:status.conditions.2.status={"True" -> "False"}, changed:status.readyReplicas={"0.000000" -> "1.000000"} 2025-08-13T20:01:10.204765482+00:00 stderr F I0813 20:01:10.204334 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.2.time={"2025-08-13T20:00:25Z" -> "2025-08-13T20:01:09Z"}, changed:metadata.resourceVersion={"29540" -> "30319"}, 
changed:status.conditions.0.lastTransitionTime={"2024-06-27T13:34:14Z" -> "2025-08-13T20:01:09Z"}, changed:status.conditions.0.message={"Available: The deployment does not have available replicas\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry has minimum availability\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"NoReplicasAvailable" -> "MinimumAvailability"}, changed:status.conditions.0.status={"False" -> "True"}, changed:status.conditions.2.lastTransitionTime={"2025-08-13T20:00:25Z" -> "2025-08-13T20:01:09Z"}, removed:status.conditions.2.message="Degraded: The deployment does not have available replicas", changed:status.conditions.2.reason={"Unavailable" -> "AsExpected"}, changed:status.conditions.2.status={"True" -> "False"} 2025-08-13T20:01:19.011059104+00:00 stderr F W0813 20:01:19.006176 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:19.011059104+00:00 stderr F E0813 20:01:19.006869 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:01:21.283886640+00:00 stderr F I0813 20:01:21.283335 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2024-06-27T13:34:15Z" -> "2025-08-13T20:01:21Z"}, changed:status.conditions.0.message={"The deployment has not completed" -> "The registry is ready"}, changed:status.conditions.0.reason={"DeploymentNotCompleted" -> "Ready"}, 
changed:status.conditions.0.status={"True" -> "False"}, changed:status.conditions.1.message={"The registry has minimum availability" -> "The registry is ready"}, changed:status.conditions.1.reason={"MinimumAvailability" -> "Ready"} 2025-08-13T20:01:21.697944527+00:00 stderr F I0813 20:01:21.697887 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.2.time={"2025-08-13T20:01:09Z" -> "2025-08-13T20:01:21Z"}, changed:metadata.resourceVersion={"30319" -> "30445"}, changed:status.conditions.0.message={"Available: The registry has minimum availability\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry is ready\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"MinimumAvailability" -> "Ready"}, changed:status.conditions.1.lastTransitionTime={"2024-06-27T13:34:14Z" -> "2025-08-13T20:01:21Z"}, changed:status.conditions.1.message={"Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed" -> "Progressing: The registry is ready\nNodeCADaemonProgressing: The daemon set node-ca is deployed"}, changed:status.conditions.1.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.1.status={"True" -> "False"} 2025-08-13T20:01:22.535888750+00:00 stderr F I0813 20:01:22.535112 1 observer_polling.go:120] Observed file "/etc/secrets/tls.crt" has been modified (old="1b4fefb1e2cc07f8a890596a1d5f574f6950f0d920567a2f2dc82a253d167d29", new="c608c7a9b6a1e5e93b05aeb838482ea7878695c06fd899cda56bfdd48b1f7613") 2025-08-13T20:01:22.536123607+00:00 stderr F W0813 20:01:22.536099 1 builder.go:154] Restart triggered because of file /etc/secrets/tls.crt was modified 2025-08-13T20:01:22.536476357+00:00 stderr F I0813 20:01:22.536316 1 observer_polling.go:120] Observed 
file "/etc/secrets/tls.key" has been modified (old="ce0633a34805c4dab1eb6f9f90254ed254d7f3cf0899dcbde466b988b655d7c9", new="2d36d9ce30dbba8b01ea274d2bf93b78a3e49924e61177e7a6e202a5e6fd5047") 2025-08-13T20:01:22.536712773+00:00 stderr F I0813 20:01:22.536659 1 main.go:54] Watched file changed, shutting down the operator. 2025-08-13T20:01:22.538703110+00:00 stderr F I0813 20:01:22.538674 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:22.540514022+00:00 stderr F I0813 20:01:22.538889 1 nodecadaemon.go:211] Shutting down NodeCADaemonController 2025-08-13T20:01:22.565977378+00:00 stderr F I0813 20:01:22.538933 1 azurestackcloud.go:181] Shutting down AzureStackCloudController 2025-08-13T20:01:22.566163403+00:00 stderr F I0813 20:01:22.539232 1 controller.go:456] Shutting down Controller ... 2025-08-13T20:01:22.566208224+00:00 stderr F I0813 20:01:22.539238 1 clusteroperator.go:152] Shutting down ClusterOperatorStatusController 2025-08-13T20:01:22.566243185+00:00 stderr F I0813 20:01:22.539258 1 imageconfig.go:95] Shutting down ImageConfigController 2025-08-13T20:01:22.566372099+00:00 stderr F I0813 20:01:22.539259 1 azurepathfixcontroller.go:326] Shutting down AzurePathFixController 2025-08-13T20:01:22.566426581+00:00 stderr F I0813 20:01:22.539290 1 metrics.go:96] Shutting down MetricsController 2025-08-13T20:01:22.566454101+00:00 stderr F I0813 20:01:22.539308 1 controllerimagepruner.go:390] Shutting down ImagePrunerController ... 2025-08-13T20:01:22.566491692+00:00 stderr F I0813 20:01:22.539334 1 imageregistrycertificates.go:216] Shutting down ImageRegistryCertificatesController 2025-08-13T20:01:22.566533594+00:00 stderr F I0813 20:01:22.539593 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:01:22.566898264+00:00 stderr F I0813 20:01:22.566827       1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:01:23.271401421+00:00 stderr F W0813 20:01:23.271351       1 leaderelection.go:84] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator/2.log

2025-12-13T00:13:14.848089250+00:00 stdout F Overwriting root TLS certificate authority trust store 2025-12-13T00:13:16.028407162+00:00 stderr F I1213 00:13:16.025590 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:13:16.032282722+00:00 stderr F I1213 00:13:16.031788 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:16.144440861+00:00 stderr F I1213 00:13:16.143382 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-12-13T00:13:16.145065302+00:00 stderr F I1213 00:13:16.144664 1 leaderelection.go:250] attempting to acquire leader lease openshift-image-registry/openshift-master-controllers... 
2025-12-13T00:18:31.662183499+00:00 stderr F I1213 00:18:31.661720 1 leaderelection.go:260] successfully acquired lease openshift-image-registry/openshift-master-controllers 2025-12-13T00:18:31.662231641+00:00 stderr F I1213 00:18:31.661839 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-image-registry", Name:"openshift-master-controllers", UID:"661e70ba-12a1-40d0-a89e-ec3849239f8f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42009", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-image-registry-operator-7769bd8d7d-q5cvv_74b0281e-d0ac-481a-ad97-05613d1ff6c4 became leader 2025-12-13T00:18:31.662322813+00:00 stderr F I1213 00:18:31.662304 1 main.go:33] Cluster Image Registry Operator Version: v4.16.0-202406131906.p0.g0fc07ed.assembly.stream.el9-dirty 2025-12-13T00:18:31.662322813+00:00 stderr F I1213 00:18:31.662314 1 main.go:34] Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime 2025-12-13T00:18:31.662322813+00:00 stderr F I1213 00:18:31.662319 1 main.go:35] Go OS/Arch: linux/amd64 2025-12-13T00:18:31.662356844+00:00 stderr F I1213 00:18:31.662324 1 main.go:66] Watching files [/var/run/configmaps/trusted-ca/tls-ca-bundle.pem /etc/secrets/tls.crt /etc/secrets/tls.key]... 
2025-12-13T00:18:31.672322219+00:00 stderr F I1213 00:18:31.672277 1 metrics.go:88] Starting MetricsController 2025-12-13T00:18:31.672872024+00:00 stderr F I1213 00:18:31.672849 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-12-13T00:18:31.672920026+00:00 stderr F I1213 00:18:31.672876 1 azurestackcloud.go:172] Starting AzureStackCloudController 2025-12-13T00:18:31.673003138+00:00 stderr F I1213 00:18:31.672970 1 clusteroperator.go:143] Starting ClusterOperatorStatusController 2025-12-13T00:18:31.673070950+00:00 stderr F I1213 00:18:31.672890 1 azurepathfixcontroller.go:317] Starting AzurePathFixController 2025-12-13T00:18:31.673132891+00:00 stderr F I1213 00:18:31.672917 1 nodecadaemon.go:202] Starting NodeCADaemonController 2025-12-13T00:18:31.673132891+00:00 stderr F I1213 00:18:31.672927 1 imageregistrycertificates.go:207] Starting ImageRegistryCertificatesController 2025-12-13T00:18:31.673132891+00:00 stderr F I1213 00:18:31.672960 1 imageconfig.go:86] Starting ImageConfigController 2025-12-13T00:18:31.773478549+00:00 stderr F I1213 00:18:31.773381 1 imageconfig.go:93] Started ImageConfigController 2025-12-13T00:18:31.773478549+00:00 stderr F I1213 00:18:31.773427 1 azurestackcloud.go:179] Started AzureStackCloudController 2025-12-13T00:18:31.773607062+00:00 stderr F I1213 00:18:31.773544 1 nodecadaemon.go:209] Started NodeCADaemonController 2025-12-13T00:18:31.773696115+00:00 stderr F I1213 00:18:31.773651 1 imageregistrycertificates.go:214] Started ImageRegistryCertificatesController 2025-12-13T00:18:31.773792167+00:00 stderr F I1213 00:18:31.773747 1 metrics.go:94] Started MetricsController 2025-12-13T00:18:31.774221489+00:00 stderr F I1213 00:18:31.774170 1 controller.go:452] Starting Controller 2025-12-13T00:18:31.774221489+00:00 stderr F I1213 00:18:31.774192 1 controllerimagepruner.go:386] Starting ImagePrunerController 2025-12-13T00:18:31.774275420+00:00 stderr F I1213 00:18:31.774239 1 azurepathfixcontroller.go:324] Started 
AzurePathFixController 2025-12-13T00:18:31.774316451+00:00 stderr F I1213 00:18:31.774291 1 clusteroperator.go:150] Started ClusterOperatorStatusController 2025-12-13T00:18:31.774316451+00:00 stderr F I1213 00:18:31.774292 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-12-13T00:18:31.774316451+00:00 stderr F I1213 00:18:31.774311 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-12-13T00:18:31.870239396+00:00 stderr F I1213 00:18:31.868893 1 generator.go:62] object *v1.Secret, Namespace=openshift-image-registry, Name=installation-pull-secrets updated: changed:data..dockerconfigjson={ -> }, changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:134d2023417aa99dc70c099f12731fc3d94cb8fe5fef3d499d5c1ff70d124cfb" -> "sha256:cd9658903c20944eb60992db7d1167845b9660771a71716e1875bc10e5145610"}, changed:metadata.managedFields.0.time={"2025-08-13T20:00:23Z" -> "2025-12-13T00:18:31Z"}, changed:metadata.resourceVersion={"29461" -> "42012"} 2025-12-13T00:18:32.274495763+00:00 stderr F I1213 00:18:32.274372 1 apps.go:154] Deployment "openshift-image-registry/image-registry" changes: {"metadata":{"annotations":{"imageregistry.operator.openshift.io/checksum":"sha256:3616891ac97a04dff8f52e8fc01cee609bbfbe0247bfa3ef0f9ebbcc435b27f1","operator.openshift.io/spec-hash":"3abd68f3c2e68f9a4d2c85d68647a58f3da61ebcaeeecc1baedcf649ce0065c8"}},"spec":{"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"imageregistry.operator.openshift.io/dependencies-checksum":"sha256:53869d9c320e001c11f9e0c8b26efab68c1d93a6051736c231681cabec99482e"}},"spec":{"containers":[{"command":["/bin/sh","-c","mkdir -p /etc/pki/ca-trust/extracted/edk2 /etc/pki/ca-trust/extracted/java /etc/pki/ca-trust/extracted/openssl /etc/pki/ca-trust/extracted/pem \u0026\u0026 update-ca-trust extract \u0026\u0026 exec 
/usr/bin/dockerregistry"],"env":[{"name":"REGISTRY_STORAGE","value":"filesystem"},{"name":"REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY","value":"/registry"},{"name":"REGISTRY_HTTP_ADDR","value":":5000"},{"name":"REGISTRY_HTTP_NET","value":"tcp"},{"name":"REGISTRY_HTTP_SECRET","value":"56a187e4d57194474f191fc1c74d365975dcd1d1b128301855ac1d25d0b497b3a0ba226d71cd71c0d3f1e86c45bc36460c308cfd8ce5ec72262061fe2bf42b78"},{"name":"REGISTRY_LOG_LEVEL","value":"info"},{"name":"REGISTRY_OPENSHIFT_QUOTA_ENABLED","value":"true"},{"name":"REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR","value":"inmemory"},{"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_INTERVAL","value":"10s"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_THRESHOLD","value":"1"},{"name":"REGISTRY_OPENSHIFT_METRICS_ENABLED","value":"true"},{"name":"REGISTRY_OPENSHIFT_SERVER_ADDR","value":"image-registry.openshift-image-registry.svc:5000"},{"name":"REGISTRY_HTTP_TLS_CERTIFICATE","value":"/etc/secrets/tls.crt"},{"name":"REGISTRY_HTTP_TLS_KEY","value":"/etc/secrets/tls.key"}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":5,"timeoutSeconds":5},"name":"registry","ports":[{"containerPort":5000,"protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":15,"timeoutSeconds":5},"resources":{"requests":{"cpu":"100m","memory":"256Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/registry","name":"registry-storage"},{"mountPath":"/etc/secrets","name":"registry-tls"},{"mountPath":"/etc/pki/ca-trust/extracted","name":"ca-trust-extracted"},{"mountPath":"/etc/pki/ca-trust/source/anchors","name":"registry-ce
rtificates"},{"mountPath":"/usr/share/pki/ca-trust-source","name":"trusted-ca"},{"mountPath":"/var/lib/kubelet/","name":"installation-pull-secrets"},{"mountPath":"/var/run/secrets/openshift/serviceaccount","name":"bound-sa-token","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"volumes":[{"name":"registry-storage","persistentVolumeClaim":{"claimName":"crc-image-registry-storage"}},{"name":"registry-tls","projected":{"sources":[{"secret":{"name":"image-registry-tls"}}]}},{"emptyDir":{},"name":"ca-trust-extracted"},{"configMap":{"name":"image-registry-certificates"},"name":"registry-certificates"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"anchors/ca-bundle.crt"}],"name":"trusted-ca","optional":true},"name":"trusted-ca"},{"name":"installation-pull-secrets","secret":{"items":[{"key":".dockerconfigjson","path":"config.json"}],"optional":true,"secretName":"installation-pull-secrets"}},{"name":"bound-sa-token","projected":{"sources":[{"serviceAccountToken":{"audience":"openshift","path":"token"}}]}}]}}}} 2025-12-13T00:18:32.296865400+00:00 stderr F I1213 00:18:32.293245 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator", UID:"485aecbc-d986-4290-a12b-2be6eccbc76c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/image-registry -n openshift-image-registry because it changed 2025-12-13T00:18:32.298629869+00:00 stderr F I1213 00:18:32.298577 1 generator.go:62] object *v1.Deployment, Namespace=openshift-image-registry, Name=image-registry updated: changed:metadata.annotations.imageregistry.operator.openshift.io/checksum={"sha256:cee8150fa701d72b2f46d7bcceb33c236c438f750bbc1b40d8b605e0b4d71ff6" -> "sha256:3616891ac97a04dff8f52e8fc01cee609bbfbe0247bfa3ef0f9ebbcc435b27f1"}, 
changed:metadata.annotations.operator.openshift.io/spec-hash={"2cb17909e0e1b27c8d2f6aa675d4f102e5035c1403d0e41f721e84207e2599da" -> "3abd68f3c2e68f9a4d2c85d68647a58f3da61ebcaeeecc1baedcf649ce0065c8"}, changed:metadata.generation={"4.000000" -> "5.000000"}, changed:metadata.managedFields.0.manager={"cluster-image-registry-operator" -> "kube-controller-manager"}, added:metadata.managedFields.0.subresource="status", changed:metadata.managedFields.0.time={"2025-08-13T20:00:24Z" -> "2025-12-13T00:15:43Z"}, changed:metadata.managedFields.1.manager={"kube-controller-manager" -> "cluster-image-registry-operator"}, removed:metadata.managedFields.1.subresource="status", changed:metadata.managedFields.1.time={"2025-12-13T00:15:43Z" -> "2025-12-13T00:18:32Z"}, changed:metadata.resourceVersion={"41559" -> "42014"}, changed:spec.template.metadata.annotations.imageregistry.operator.openshift.io/dependencies-checksum={"sha256:c059b55dea10a776646453f9fb9086f121d0269969b8a65f980859a763a4ec10" -> "sha256:53869d9c320e001c11f9e0c8b26efab68c1d93a6051736c231681cabec99482e"} 2025-12-13T00:18:32.302156856+00:00 stderr F I1213 00:18:32.301699 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2025-08-13T20:01:21Z" -> "2025-12-13T00:18:32Z"}, changed:status.conditions.0.message={"The registry is ready" -> "The deployment has not completed"}, changed:status.conditions.0.reason={"Ready" -> "DeploymentNotCompleted"}, changed:status.conditions.0.status={"False" -> "True"}, changed:status.conditions.1.message={"The registry is ready" -> "The registry has minimum availability"}, changed:status.conditions.1.reason={"Ready" -> "MinimumAvailability"}, changed:status.generations.1.lastGeneration={"4.000000" -> "5.000000"} 2025-12-13T00:18:32.324814451+00:00 stderr F I1213 00:18:32.324736 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: removed:apiVersion="config.openshift.io/v1", 
removed:kind="ClusterOperator", changed:metadata.managedFields.2.time={"2025-08-13T20:01:21Z" -> "2025-12-13T00:18:32Z"}, changed:metadata.resourceVersion={"30445" -> "42019"}, changed:status.conditions.0.message={"Available: The registry is ready\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry has minimum availability\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"Ready" -> "MinimumAvailability"}, changed:status.conditions.1.lastTransitionTime={"2025-08-13T20:01:21Z" -> "2025-12-13T00:18:32Z"}, changed:status.conditions.1.message={"Progressing: The registry is ready\nNodeCADaemonProgressing: The daemon set node-ca is deployed" -> "Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed"}, changed:status.conditions.1.reason={"Ready" -> "DeploymentNotCompleted"}, changed:status.conditions.1.status={"False" -> "True"} 2025-12-13T00:18:33.274630702+00:00 stderr F I1213 00:18:33.274531 1 apps.go:154] Deployment "openshift-image-registry/image-registry" changes: {"spec":{"revisionHistoryLimit":null,"template":{"spec":{"containers":[{"command":["/bin/sh","-c","mkdir -p /etc/pki/ca-trust/extracted/edk2 /etc/pki/ca-trust/extracted/java /etc/pki/ca-trust/extracted/openssl /etc/pki/ca-trust/extracted/pem \u0026\u0026 update-ca-trust extract \u0026\u0026 exec 
/usr/bin/dockerregistry"],"env":[{"name":"REGISTRY_STORAGE","value":"filesystem"},{"name":"REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY","value":"/registry"},{"name":"REGISTRY_HTTP_ADDR","value":":5000"},{"name":"REGISTRY_HTTP_NET","value":"tcp"},{"name":"REGISTRY_HTTP_SECRET","value":"56a187e4d57194474f191fc1c74d365975dcd1d1b128301855ac1d25d0b497b3a0ba226d71cd71c0d3f1e86c45bc36460c308cfd8ce5ec72262061fe2bf42b78"},{"name":"REGISTRY_LOG_LEVEL","value":"info"},{"name":"REGISTRY_OPENSHIFT_QUOTA_ENABLED","value":"true"},{"name":"REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR","value":"inmemory"},{"name":"REGISTRY_STORAGE_DELETE_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_ENABLED","value":"true"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_INTERVAL","value":"10s"},{"name":"REGISTRY_HEALTH_STORAGEDRIVER_THRESHOLD","value":"1"},{"name":"REGISTRY_OPENSHIFT_METRICS_ENABLED","value":"true"},{"name":"REGISTRY_OPENSHIFT_SERVER_ADDR","value":"image-registry.openshift-image-registry.svc:5000"},{"name":"REGISTRY_HTTP_TLS_CERTIFICATE","value":"/etc/secrets/tls.crt"},{"name":"REGISTRY_HTTP_TLS_KEY","value":"/etc/secrets/tls.key"}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":5,"timeoutSeconds":5},"name":"registry","ports":[{"containerPort":5000,"protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/healthz","port":5000,"scheme":"HTTPS"},"initialDelaySeconds":15,"timeoutSeconds":5},"resources":{"requests":{"cpu":"100m","memory":"256Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/registry","name":"registry-storage"},{"mountPath":"/etc/secrets","name":"registry-tls"},{"mountPath":"/etc/pki/ca-trust/extracted","name":"ca-trust-extracted"},{"mountPath":"/etc/pki/ca-trust/source/anchors","name":"registry-ce
rtificates"},{"mountPath":"/usr/share/pki/ca-trust-source","name":"trusted-ca"},{"mountPath":"/var/lib/kubelet/","name":"installation-pull-secrets"},{"mountPath":"/var/run/secrets/openshift/serviceaccount","name":"bound-sa-token","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"serviceAccount":null,"volumes":[{"name":"registry-storage","persistentVolumeClaim":{"claimName":"crc-image-registry-storage"}},{"name":"registry-tls","projected":{"sources":[{"secret":{"name":"image-registry-tls"}}]}},{"emptyDir":{},"name":"ca-trust-extracted"},{"configMap":{"name":"image-registry-certificates"},"name":"registry-certificates"},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"anchors/ca-bundle.crt"}],"name":"trusted-ca","optional":true},"name":"trusted-ca"},{"name":"installation-pull-secrets","secret":{"items":[{"key":".dockerconfigjson","path":"config.json"}],"optional":true,"secretName":"installation-pull-secrets"}},{"name":"bound-sa-token","projected":{"sources":[{"serviceAccountToken":{"audience":"openshift","path":"token"}}]}}]}}}} 2025-12-13T00:18:33.284376691+00:00 stderr F I1213 00:18:33.284301 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-image-registry", Name:"cluster-image-registry-operator", UID:"485aecbc-d986-4290-a12b-2be6eccbc76c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/image-registry -n openshift-image-registry because it changed 2025-12-13T00:18:33.286903091+00:00 stderr F I1213 00:18:33.286858 1 generator.go:62] object *v1.Deployment, Namespace=openshift-image-registry, Name=image-registry updated: 2025-12-13T00:18:33.288522365+00:00 stderr F I1213 00:18:33.288469 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2025-08-13T20:01:21Z" -> "2025-12-13T00:18:33Z"}, changed:status.conditions.0.message={"The registry is ready" -> "The 
deployment has not completed"}, changed:status.conditions.0.reason={"Ready" -> "DeploymentNotCompleted"}, changed:status.conditions.0.status={"False" -> "True"}, changed:status.conditions.1.message={"The registry is ready" -> "The registry has minimum availability"}, changed:status.conditions.1.reason={"Ready" -> "MinimumAvailability"}, changed:status.generations.1.lastGeneration={"4.000000" -> "5.000000"} 2025-12-13T00:18:33.294669155+00:00 stderr F E1213 00:18:33.294576 1 controller.go:377] unable to sync: Operation cannot be fulfilled on configs.imageregistry.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again, requeuing 2025-12-13T00:18:52.779729845+00:00 stderr F I1213 00:18:52.779625 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.readyReplicas={"1.000000" -> "2.000000"} 2025-12-13T00:18:52.818074152+00:00 stderr F I1213 00:18:52.818023 1 controller.go:338] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.0.lastTransitionTime={"2025-12-13T00:18:32Z" -> "2025-12-13T00:18:52Z"}, changed:status.conditions.0.message={"The deployment has not completed" -> "The registry is ready"}, changed:status.conditions.0.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.0.status={"True" -> "False"}, changed:status.conditions.1.message={"The registry has minimum availability" -> "The registry is ready"}, changed:status.conditions.1.reason={"MinimumAvailability" -> "Ready"}, changed:status.readyReplicas={"2.000000" -> "1.000000"} 2025-12-13T00:18:52.834998869+00:00 stderr F I1213 00:18:52.833169 1 generator.go:62] object *v1.ClusterOperator, Name=image-registry updated: changed:metadata.managedFields.2.time={"2025-12-13T00:18:32Z" -> "2025-12-13T00:18:52Z"}, changed:metadata.resourceVersion={"42019" -> "42111"}, changed:status.conditions.0.message={"Available: The registry has minimum 
availability\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created" -> "Available: The registry is ready\nNodeCADaemonAvailable: The daemon set node-ca has available replicas\nImagePrunerAvailable: Pruner CronJob has been created"}, changed:status.conditions.0.reason={"MinimumAvailability" -> "Ready"}, changed:status.conditions.1.lastTransitionTime={"2025-12-13T00:18:32Z" -> "2025-12-13T00:18:52Z"}, changed:status.conditions.1.message={"Progressing: The deployment has not completed\nNodeCADaemonProgressing: The daemon set node-ca is deployed" -> "Progressing: The registry is ready\nNodeCADaemonProgressing: The daemon set node-ca is deployed"}, changed:status.conditions.1.reason={"DeploymentNotCompleted" -> "Ready"}, changed:status.conditions.1.status={"True" -> "False"}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/1.log
2025-12-13T00:13:16.170748125+00:00 stderr F W1213 00:13:16.170260 1 cmd.go:237] Using insecure, self-signed certificates 2025-12-13T00:13:16.174962397+00:00 stderr F I1213 00:13:16.171022 1 crypto.go:601] Generating new CA for service-ca-controller-signer@1765584796 cert, and key in /tmp/serving-cert-326572618/serving-signer.crt, /tmp/serving-cert-326572618/serving-signer.key 2025-12-13T00:13:17.085757441+00:00 stderr F I1213 00:13:17.085337 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-12-13T00:13:17.091024068+00:00 stderr F I1213 00:13:17.090999 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:17.140968707+00:00 stderr F I1213 00:13:17.140909 1 builder.go:271] service-ca-controller version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2025-12-13T00:13:17.143607045+00:00 stderr F I1213 00:13:17.141828 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-326572618/tls.crt::/tmp/serving-cert-326572618/tls.key" 2025-12-13T00:13:17.918964258+00:00 stderr F I1213 00:13:17.917359 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-12-13T00:13:17.929298436+00:00 stderr F I1213 00:13:17.928648 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-12-13T00:13:17.929298436+00:00 stderr F I1213 00:13:17.928667 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-12-13T00:13:17.929298436+00:00 stderr F I1213 00:13:17.928679 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-12-13T00:13:17.929298436+00:00 stderr F I1213 00:13:17.928684 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-12-13T00:13:17.943231464+00:00 stderr F I1213 00:13:17.940055 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:17.943231464+00:00 stderr F W1213 00:13:17.940086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:17.943231464+00:00 stderr F W1213 00:13:17.940095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-12-13T00:13:17.943231464+00:00 stderr F I1213 00:13:17.940377 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-12-13T00:13:17.949545626+00:00 stderr F I1213 00:13:17.944509 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-326572618/tls.crt::/tmp/serving-cert-326572618/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1765584796\" (2025-12-13 00:13:16 +0000 UTC to 2026-01-12 00:13:17 +0000 UTC (now=2025-12-13 00:13:17.944481447 +0000 UTC))" 2025-12-13T00:13:17.949925449+00:00 stderr F I1213 00:13:17.949904 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:17 +0000 UTC to 2026-12-12 23:13:17 +0000 UTC (now=2025-12-13 00:13:17.949870747 +0000 UTC))" 2025-12-13T00:13:17.949999262+00:00 stderr F I1213 00:13:17.949986 1 secure_serving.go:210] Serving securely on [::]:8443 2025-12-13T00:13:17.950048394+00:00 stderr F I1213 00:13:17.950035 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-12-13T00:13:17.950076375+00:00 stderr F I1213 00:13:17.944811 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:17.950104766+00:00 stderr F I1213 00:13:17.950095 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:17.950277981+00:00 stderr F I1213 00:13:17.950266 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:17.950415486+00:00 stderr F I1213 00:13:17.944836 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:17.950443487+00:00 
stderr F I1213 00:13:17.950434 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:17.952831297+00:00 stderr F I1213 00:13:17.944847 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:17.952831297+00:00 stderr F I1213 00:13:17.951127 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:17.952831297+00:00 stderr F I1213 00:13:17.945016 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-326572618/tls.crt::/tmp/serving-cert-326572618/tls.key" 2025-12-13T00:13:17.952831297+00:00 stderr F I1213 00:13:17.945988 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-12-13T00:13:17.952831297+00:00 stderr F I1213 00:13:17.951658 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca/service-ca-controller-lock... 
2025-12-13T00:13:18.053377135+00:00 stderr F I1213 00:13:18.053011 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:18.053377135+00:00 stderr F I1213 00:13:18.053092 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:13:18.053495999+00:00 stderr F I1213 00:13:18.053473 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:13:18.053450558 +0000 UTC))" 2025-12-13T00:13:18.053504879+00:00 stderr F I1213 00:13:18.053497 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:13:18.053487369 +0000 UTC))" 2025-12-13T00:13:18.053534250+00:00 stderr F I1213 00:13:18.053517 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:18.053501939 +0000 UTC))" 2025-12-13T00:13:18.053560571+00:00 stderr F I1213 00:13:18.053474 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
2025-12-13T00:13:18.053600893+00:00 stderr F I1213 00:13:18.053561 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:18.05352757 +0000 UTC))" 2025-12-13T00:13:18.053686886+00:00 stderr F I1213 00:13:18.053674 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.053635094 +0000 UTC))" 2025-12-13T00:13:18.053726588+00:00 stderr F I1213 00:13:18.053717 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.053702147 +0000 UTC))" 2025-12-13T00:13:18.053761499+00:00 stderr F I1213 00:13:18.053752 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 
00:13:18.053740138 +0000 UTC))" 2025-12-13T00:13:18.053796040+00:00 stderr F I1213 00:13:18.053787 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.053774919 +0000 UTC))" 2025-12-13T00:13:18.053830161+00:00 stderr F I1213 00:13:18.053821 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:13:18.05380939 +0000 UTC))" 2025-12-13T00:13:18.053866712+00:00 stderr F I1213 00:13:18.053856 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:13:18.053845612 +0000 UTC))" 2025-12-13T00:13:18.063673571+00:00 stderr F I1213 00:13:18.063630 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-326572618/tls.crt::/tmp/serving-cert-326572618/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1765584796\" (2025-12-13 00:13:16 +0000 UTC to 2026-01-12 00:13:17 +0000 UTC (now=2025-12-13 00:13:18.063596739 +0000 UTC))" 2025-12-13T00:13:18.064040224+00:00 stderr F I1213 
00:13:18.064023 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:17 +0000 UTC to 2026-12-12 23:13:17 +0000 UTC (now=2025-12-13 00:13:18.063999262 +0000 UTC))" 2025-12-13T00:13:18.064230430+00:00 stderr F I1213 00:13:18.064215 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:13:18.064199509 +0000 UTC))" 2025-12-13T00:13:18.064283502+00:00 stderr F I1213 00:13:18.064273 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:13:18.064261801 +0000 UTC))" 2025-12-13T00:13:18.064329573+00:00 stderr F I1213 00:13:18.064310 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:18.064297592 +0000 UTC))" 2025-12-13T00:13:18.064365094+00:00 stderr F I1213 00:13:18.064355 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:18.064343594 +0000 UTC))" 2025-12-13T00:13:18.064410546+00:00 stderr F I1213 00:13:18.064397 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.064378865 +0000 UTC))" 2025-12-13T00:13:18.064462618+00:00 stderr F I1213 00:13:18.064449 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.064429557 +0000 UTC))" 2025-12-13T00:13:18.064506569+00:00 stderr F I1213 00:13:18.064496 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.064482318 +0000 UTC))" 2025-12-13T00:13:18.064542340+00:00 stderr F I1213 00:13:18.064533 1 tlsconfig.go:178] 
"Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.06452069 +0000 UTC))" 2025-12-13T00:13:18.064578511+00:00 stderr F I1213 00:13:18.064569 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:13:18.064555261 +0000 UTC))" 2025-12-13T00:13:18.064620723+00:00 stderr F I1213 00:13:18.064611 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:13:18.064598102 +0000 UTC))" 2025-12-13T00:13:18.064656344+00:00 stderr F I1213 00:13:18.064647 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.064635203 +0000 UTC))" 2025-12-13T00:13:18.064987886+00:00 stderr F I1213 00:13:18.064925 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/tmp/serving-cert-326572618/tls.crt::/tmp/serving-cert-326572618/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1765584796\" (2025-12-13 00:13:16 +0000 UTC to 2026-01-12 00:13:17 +0000 UTC (now=2025-12-13 00:13:18.064912134 +0000 UTC))" 2025-12-13T00:13:18.065355118+00:00 stderr F I1213 00:13:18.065336 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:17 +0000 UTC to 2026-12-12 23:13:17 +0000 UTC (now=2025-12-13 00:13:18.065318437 +0000 UTC))" 2025-12-13T00:19:02.318556436+00:00 stderr F I1213 00:19:02.318070 1 leaderelection.go:260] successfully acquired lease openshift-service-ca/service-ca-controller-lock 2025-12-13T00:19:02.318648238+00:00 stderr F I1213 00:19:02.318241 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca", Name:"service-ca-controller-lock", UID:"0db0ad19-cc12-475d-ab84-bad75b08b334", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42142", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-666f99b6f-kk8kg_8198c57c-28a3-405e-a589-07f81785f995 became leader 2025-12-13T00:19:02.331858003+00:00 stderr F I1213 00:19:02.331800 1 base_controller.go:67] Waiting for caches to sync for ConfigMapCABundleInjector 2025-12-13T00:19:02.332030597+00:00 stderr F I1213 00:19:02.331986 1 base_controller.go:67] Waiting for caches to sync for ValidatingWebhookCABundleInjector 2025-12-13T00:19:02.332063168+00:00 stderr F I1213 00:19:02.332029 1 base_controller.go:67] Waiting for caches to sync for LegacyVulnerableConfigMapCABundleInjector 2025-12-13T00:19:02.338120255+00:00 stderr F I1213 00:19:02.332602 1 base_controller.go:67] Waiting for caches to sync for APIServiceCABundleInjector 
2025-12-13T00:19:02.338120255+00:00 stderr F I1213 00:19:02.332639 1 base_controller.go:73] Caches are synced for APIServiceCABundleInjector 2025-12-13T00:19:02.338120255+00:00 stderr F I1213 00:19:02.332649 1 base_controller.go:110] Starting #1 worker of APIServiceCABundleInjector controller ... 2025-12-13T00:19:02.338120255+00:00 stderr F I1213 00:19:02.332659 1 base_controller.go:110] Starting #2 worker of APIServiceCABundleInjector controller ... 2025-12-13T00:19:02.338345321+00:00 stderr F I1213 00:19:02.338305 1 base_controller.go:110] Starting #3 worker of APIServiceCABundleInjector controller ... 2025-12-13T00:19:02.338372772+00:00 stderr F I1213 00:19:02.338341 1 base_controller.go:110] Starting #4 worker of APIServiceCABundleInjector controller ... 2025-12-13T00:19:02.338372772+00:00 stderr F I1213 00:19:02.338353 1 base_controller.go:110] Starting #5 worker of APIServiceCABundleInjector controller ... 2025-12-13T00:19:02.338733702+00:00 stderr F I1213 00:19:02.338702 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertController 2025-12-13T00:19:02.339267197+00:00 stderr F I1213 00:19:02.331889 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertUpdateController 2025-12-13T00:19:02.339281518+00:00 stderr F I1213 00:19:02.331914 1 base_controller.go:67] Waiting for caches to sync for CRDCABundleInjector 2025-12-13T00:19:02.339288878+00:00 stderr F I1213 00:19:02.331967 1 base_controller.go:67] Waiting for caches to sync for MutatingWebhookCABundleInjector 2025-12-13T00:19:02.433690910+00:00 stderr F I1213 00:19:02.433431 1 base_controller.go:73] Caches are synced for ValidatingWebhookCABundleInjector 2025-12-13T00:19:02.433690910+00:00 stderr F I1213 00:19:02.433472 1 base_controller.go:110] Starting #1 worker of ValidatingWebhookCABundleInjector controller ... 
2025-12-13T00:19:02.433690910+00:00 stderr F I1213 00:19:02.433482 1 base_controller.go:110] Starting #2 worker of ValidatingWebhookCABundleInjector controller ... 2025-12-13T00:19:02.433690910+00:00 stderr F I1213 00:19:02.433489 1 base_controller.go:110] Starting #3 worker of ValidatingWebhookCABundleInjector controller ... 2025-12-13T00:19:02.433690910+00:00 stderr F I1213 00:19:02.433494 1 base_controller.go:110] Starting #4 worker of ValidatingWebhookCABundleInjector controller ... 2025-12-13T00:19:02.433690910+00:00 stderr F I1213 00:19:02.433499 1 base_controller.go:110] Starting #5 worker of ValidatingWebhookCABundleInjector controller ... 2025-12-13T00:19:02.439639994+00:00 stderr F I1213 00:19:02.439616 1 base_controller.go:73] Caches are synced for MutatingWebhookCABundleInjector 2025-12-13T00:19:02.439690556+00:00 stderr F I1213 00:19:02.439676 1 base_controller.go:110] Starting #1 worker of MutatingWebhookCABundleInjector controller ... 2025-12-13T00:19:02.439721877+00:00 stderr F I1213 00:19:02.439711 1 base_controller.go:110] Starting #2 worker of MutatingWebhookCABundleInjector controller ... 2025-12-13T00:19:02.439758518+00:00 stderr F I1213 00:19:02.439745 1 base_controller.go:110] Starting #3 worker of MutatingWebhookCABundleInjector controller ... 2025-12-13T00:19:02.439790758+00:00 stderr F I1213 00:19:02.439780 1 base_controller.go:110] Starting #4 worker of MutatingWebhookCABundleInjector controller ... 2025-12-13T00:19:02.439820339+00:00 stderr F I1213 00:19:02.439809 1 base_controller.go:110] Starting #5 worker of MutatingWebhookCABundleInjector controller ... 2025-12-13T00:19:02.539644322+00:00 stderr F I1213 00:19:02.539548 1 base_controller.go:73] Caches are synced for ServiceServingCertController 2025-12-13T00:19:02.539644322+00:00 stderr F I1213 00:19:02.539597 1 base_controller.go:110] Starting #1 worker of ServiceServingCertController controller ... 
2025-12-13T00:19:02.539644322+00:00 stderr F I1213 00:19:02.539618 1 base_controller.go:110] Starting #2 worker of ServiceServingCertController controller ... 2025-12-13T00:19:02.539644322+00:00 stderr F I1213 00:19:02.539565 1 base_controller.go:73] Caches are synced for CRDCABundleInjector 2025-12-13T00:19:02.539644322+00:00 stderr F I1213 00:19:02.539632 1 base_controller.go:110] Starting #3 worker of ServiceServingCertController controller ... 2025-12-13T00:19:02.539728304+00:00 stderr F I1213 00:19:02.539643 1 base_controller.go:110] Starting #4 worker of ServiceServingCertController controller ... 2025-12-13T00:19:02.539728304+00:00 stderr F I1213 00:19:02.539655 1 base_controller.go:110] Starting #5 worker of ServiceServingCertController controller ... 2025-12-13T00:19:02.539728304+00:00 stderr F I1213 00:19:02.539598 1 base_controller.go:73] Caches are synced for ServiceServingCertUpdateController 2025-12-13T00:19:02.539728304+00:00 stderr F I1213 00:19:02.539703 1 base_controller.go:110] Starting #1 worker of ServiceServingCertUpdateController controller ... 2025-12-13T00:19:02.539728304+00:00 stderr F I1213 00:19:02.539712 1 base_controller.go:110] Starting #2 worker of ServiceServingCertUpdateController controller ... 2025-12-13T00:19:02.539728304+00:00 stderr F I1213 00:19:02.539720 1 base_controller.go:110] Starting #3 worker of ServiceServingCertUpdateController controller ... 2025-12-13T00:19:02.539771726+00:00 stderr F I1213 00:19:02.539728 1 base_controller.go:110] Starting #4 worker of ServiceServingCertUpdateController controller ... 2025-12-13T00:19:02.539771726+00:00 stderr F I1213 00:19:02.539737 1 base_controller.go:110] Starting #5 worker of ServiceServingCertUpdateController controller ... 2025-12-13T00:19:02.541134284+00:00 stderr F I1213 00:19:02.541098 1 base_controller.go:110] Starting #1 worker of CRDCABundleInjector controller ... 
2025-12-13T00:19:02.541179105+00:00 stderr F I1213 00:19:02.541151 1 base_controller.go:110] Starting #2 worker of CRDCABundleInjector controller ... 2025-12-13T00:19:02.541179105+00:00 stderr F I1213 00:19:02.541172 1 base_controller.go:110] Starting #3 worker of CRDCABundleInjector controller ... 2025-12-13T00:19:02.541189955+00:00 stderr F I1213 00:19:02.541182 1 base_controller.go:110] Starting #4 worker of CRDCABundleInjector controller ... 2025-12-13T00:19:02.541201515+00:00 stderr F I1213 00:19:02.541194 1 base_controller.go:110] Starting #5 worker of CRDCABundleInjector controller ... 2025-12-13T00:19:02.632964015+00:00 stderr F I1213 00:19:02.632872 1 base_controller.go:73] Caches are synced for ConfigMapCABundleInjector 2025-12-13T00:19:02.633046537+00:00 stderr F I1213 00:19:02.633030 1 base_controller.go:110] Starting #1 worker of ConfigMapCABundleInjector controller ... 2025-12-13T00:19:02.633089778+00:00 stderr F I1213 00:19:02.633078 1 base_controller.go:110] Starting #2 worker of ConfigMapCABundleInjector controller ... 2025-12-13T00:19:02.633122189+00:00 stderr F I1213 00:19:02.633110 1 base_controller.go:110] Starting #3 worker of ConfigMapCABundleInjector controller ... 2025-12-13T00:19:02.633153280+00:00 stderr F I1213 00:19:02.633142 1 base_controller.go:110] Starting #4 worker of ConfigMapCABundleInjector controller ... 2025-12-13T00:19:02.633183201+00:00 stderr F I1213 00:19:02.633172 1 base_controller.go:110] Starting #5 worker of ConfigMapCABundleInjector controller ... 2025-12-13T00:19:02.633236903+00:00 stderr F I1213 00:19:02.633025 1 base_controller.go:73] Caches are synced for LegacyVulnerableConfigMapCABundleInjector 2025-12-13T00:19:02.633269394+00:00 stderr F I1213 00:19:02.633236 1 base_controller.go:110] Starting #1 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 
2025-12-13T00:19:02.633269394+00:00 stderr F I1213 00:19:02.633257 1 base_controller.go:110] Starting #2 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-12-13T00:19:02.633269394+00:00 stderr F I1213 00:19:02.633262 1 base_controller.go:110] Starting #3 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-12-13T00:19:02.633283445+00:00 stderr F I1213 00:19:02.633270 1 base_controller.go:110] Starting #4 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-12-13T00:19:02.633283445+00:00 stderr F I1213 00:19:02.633274 1 base_controller.go:110] Starting #5 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-12-13T00:19:37.574455402+00:00 stderr F I1213 00:19:37.573861 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.573795464 +0000 UTC))" 2025-12-13T00:19:37.574585825+00:00 stderr F I1213 00:19:37.574563 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.574525743 +0000 UTC))" 2025-12-13T00:19:37.574666207+00:00 stderr F I1213 00:19:37.574647 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] 
issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.574615666 +0000 UTC))" 2025-12-13T00:19:37.574759560+00:00 stderr F I1213 00:19:37.574736 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.574692868 +0000 UTC))" 2025-12-13T00:19:37.574832942+00:00 stderr F I1213 00:19:37.574814 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.574788141 +0000 UTC))" 2025-12-13T00:19:37.574905324+00:00 stderr F I1213 00:19:37.574887 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.574858823 +0000 UTC))" 2025-12-13T00:19:37.574999627+00:00 stderr F I1213 00:19:37.574980 1 tlsconfig.go:178] "Loaded client CA" index=6 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.574930185 +0000 UTC))" 2025-12-13T00:19:37.575071229+00:00 stderr F I1213 00:19:37.575056 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.575031938 +0000 UTC))" 2025-12-13T00:19:37.575137151+00:00 stderr F I1213 00:19:37.575121 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.57509491 +0000 UTC))" 2025-12-13T00:19:37.575210633+00:00 stderr F I1213 00:19:37.575195 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.575171432 +0000 UTC))" 2025-12-13T00:19:37.575299986+00:00 stderr F I1213 00:19:37.575283 1 tlsconfig.go:178] "Loaded client CA" index=10 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.575234064 +0000 UTC))" 2025-12-13T00:19:37.575364537+00:00 stderr F I1213 00:19:37.575349 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.575323166 +0000 UTC))" 2025-12-13T00:19:37.579710637+00:00 stderr F I1213 00:19:37.579654 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-326572618/tls.crt::/tmp/serving-cert-326572618/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1765584796\" (2025-12-13 00:13:16 +0000 UTC to 2026-01-12 00:13:17 +0000 UTC (now=2025-12-13 00:19:37.579619534 +0000 UTC))" 2025-12-13T00:19:37.580295173+00:00 stderr F I1213 00:19:37.580245 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:17 +0000 UTC to 2026-12-12 23:13:17 +0000 UTC (now=2025-12-13 00:19:37.58021523 +0000 UTC))" 2025-12-13T00:19:40.276069838+00:00 stderr F I1213 00:19:40.271322 1 configmap.go:109] updating configmap openstack/openshift-service-ca.crt with the service signing CA bundle 2025-12-13T00:19:41.006869670+00:00 stderr F 
I1213 00:19:41.006356 1 configmap.go:109] updating configmap openstack-operators/openshift-service-ca.crt with the service signing CA bundle 2025-12-13T00:21:02.337564417+00:00 stderr F E1213 00:21:02.337029 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca/service-ca-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller/0.log
2025-08-13T19:59:53.905156333+00:00 stderr F W0813 19:59:53.904104 1 cmd.go:237] Using insecure, self-signed certificates 2025-08-13T19:59:53.905741480+00:00 stderr F I0813 19:59:53.905400 1 crypto.go:601] Generating new CA for service-ca-controller-signer@1755115193 cert, and key in /tmp/serving-cert-2770977124/serving-signer.crt, /tmp/serving-cert-2770977124/serving-signer.key 2025-08-13T19:59:56.792955952+00:00 stderr F I0813 19:59:56.785561 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T19:59:56.795018501+00:00 stderr F I0813 19:59:56.794758 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:57.536248119+00:00 stderr F I0813 19:59:57.532205 1 builder.go:271] service-ca-controller version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2025-08-13T19:59:57.570850555+00:00 stderr F I0813 19:59:57.568995 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" 2025-08-13T20:00:00.381378871+00:00 stderr F I0813 20:00:00.361664 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:00:00.690333237+00:00 stderr F I0813 20:00:00.690271 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:00:00.690551553+00:00 stderr F I0813 20:00:00.690534 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:00:00.690677027+00:00 stderr F I0813 20:00:00.690658 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:00:00.690713028+00:00 stderr F I0813 20:00:00.690701 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:00:00.779052026+00:00 stderr F I0813 20:00:00.773924 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:00:00.779052026+00:00 stderr F I0813 20:00:00.774096 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:00:00.779052026+00:00 stderr F W0813 20:00:00.774116 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:00.779052026+00:00 stderr F W0813 20:00:00.774123 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:00:00.877103421+00:00 stderr F I0813 20:00:00.870488 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:00:00.877103421+00:00 stderr F I0813 20:00:00.871260 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca/service-ca-controller-lock... 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.966590 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967111 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967373 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967386 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967427 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:00.969972448+00:00 stderr F I0813 20:00:00.967435 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:00.978537222+00:00 stderr F I0813 20:00:00.978440 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" 2025-08-13T20:00:01.237433212+00:00 stderr F I0813 20:00:00.994394 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" certDetail="\"localhost\" [serving] 
validServingFor=[localhost] issuer=\"service-ca-controller-signer@1755115193\" (2025-08-13 19:59:55 +0000 UTC to 2025-09-12 19:59:56 +0000 UTC (now=2025-08-13 20:00:00.994350203 +0000 UTC))" 2025-08-13T20:00:01.275449706+00:00 stderr F I0813 20:00:01.275385 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:01.279296405+00:00 stderr F I0813 20:00:01.010429 1 leaderelection.go:260] successfully acquired lease openshift-service-ca/service-ca-controller-lock 2025-08-13T20:00:01.279622185+00:00 stderr F I0813 20:00:01.012093 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca", Name:"service-ca-controller-lock", UID:"0db0ad19-cc12-475d-ab84-bad75b08b334", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28836", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-666f99b6f-kk8kg_02e86f09-10c8-4c1e-ad51-e1a5d4b40977 became leader 2025-08-13T20:00:01.279657306+00:00 stderr F I0813 20:00:01.084581 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:01.308670713+00:00 stderr F I0813 20:00:01.196057 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:00:01.530259199+00:00 stderr F I0813 20:00:01.530197 1 base_controller.go:67] Waiting for caches to sync for CRDCABundleInjector 2025-08-13T20:00:01.530383673+00:00 stderr F I0813 20:00:01.530364 1 base_controller.go:67] Waiting for caches to sync for MutatingWebhookCABundleInjector 2025-08-13T20:00:01.530430114+00:00 stderr F I0813 20:00:01.530417 1 base_controller.go:67] Waiting for caches to sync for ConfigMapCABundleInjector 2025-08-13T20:00:01.530882487+00:00 stderr F I0813 20:00:01.530768 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115200\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115199\" (2025-08-13 18:59:57 +0000 UTC to 2026-08-13 18:59:57 +0000 UTC (now=2025-08-13 20:00:01.530729443 +0000 UTC))" 2025-08-13T20:00:01.530963879+00:00 stderr F I0813 20:00:01.530940 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T20:00:01.531060582+00:00 stderr F I0813 20:00:01.531037 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:00:01.531119894+00:00 stderr F I0813 20:00:01.531102 1 base_controller.go:67] Waiting for caches to sync for APIServiceCABundleInjector 2025-08-13T20:00:01.531203066+00:00 stderr F I0813 20:00:01.531180 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:00:01.531558186+00:00 stderr F I0813 20:00:01.531528 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:01.531387811 +0000 UTC))" 2025-08-13T20:00:01.531647709+00:00 stderr F I0813 20:00:01.531623 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:01.531593267 +0000 UTC))" 2025-08-13T20:00:01.531753892+00:00 stderr F I0813 20:00:01.531728 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:01.53169742 +0000 UTC))" 2025-08-13T20:00:01.583070505+00:00 stderr F I0813 20:00:01.536568 1 base_controller.go:67] Waiting for caches to sync for ValidatingWebhookCABundleInjector 2025-08-13T20:00:01.583070505+00:00 stderr F I0813 20:00:01.536977 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertUpdateController 2025-08-13T20:00:01.583070505+00:00 stderr F I0813 20:00:01.558136 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertController 2025-08-13T20:00:01.583302481+00:00 stderr F I0813 20:00:01.583259 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:01.583185308 +0000 UTC))" 2025-08-13T20:00:01.583375023+00:00 stderr F I0813 20:00:01.583354 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.583331552 +0000 UTC))" 2025-08-13T20:00:01.583433285+00:00 stderr F I0813 20:00:01.583416 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.583397484 +0000 UTC))" 2025-08-13T20:00:01.583497847+00:00 stderr F I0813 20:00:01.583478 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.583459136 +0000 UTC))" 2025-08-13T20:00:01.583627270+00:00 stderr F I0813 20:00:01.583607 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.583585969 +0000 UTC))" 2025-08-13T20:00:01.585390851+00:00 stderr F I0813 20:00:01.585359 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:01.585325739 +0000 UTC))" 2025-08-13T20:00:01.586007958+00:00 stderr F I0813 20:00:01.585984 1 tlsconfig.go:200] "Loaded serving 
cert" certName="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1755115193\" (2025-08-13 19:59:55 +0000 UTC to 2025-09-12 19:59:56 +0000 UTC (now=2025-08-13 20:00:01.585965167 +0000 UTC))" 2025-08-13T20:00:01.597391783+00:00 stderr F I0813 20:00:01.597350 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115200\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115199\" (2025-08-13 18:59:57 +0000 UTC to 2026-08-13 18:59:57 +0000 UTC (now=2025-08-13 20:00:01.59730462 +0000 UTC))" 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670628 1 base_controller.go:73] Caches are synced for MutatingWebhookCABundleInjector 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670675 1 base_controller.go:110] Starting #1 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670687 1 base_controller.go:110] Starting #2 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670692 1 base_controller.go:110] Starting #3 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.670703 1 base_controller.go:110] Starting #4 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.680986 1 base_controller.go:73] Caches are synced for APIServiceCABundleInjector 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.681021 1 base_controller.go:110] Starting #1 worker of APIServiceCABundleInjector controller ... 
2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.681031 1 base_controller.go:110] Starting #2 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.690336402+00:00 stderr F I0813 20:00:01.681038 1 base_controller.go:110] Starting #3 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.691703231+00:00 stderr F I0813 20:00:01.691667 1 apiservice.go:62] updating apiservice v1.apps.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.703267051+00:00 stderr F I0813 20:00:01.703221 1 apiservice.go:62] updating apiservice v1.authorization.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.761934793+00:00 stderr F I0813 20:00:01.761658 1 base_controller.go:110] Starting #5 worker of MutatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783472 1 base_controller.go:110] Starting #4 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783523 1 base_controller.go:110] Starting #5 worker of APIServiceCABundleInjector controller ... 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783573 1 apiservice.go:62] updating apiservice v1.build.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783755 1 apiservice.go:62] updating apiservice v1.image.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.785012081+00:00 stderr F I0813 20:00:01.783893 1 apiservice.go:62] updating apiservice v1.oauth.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.880091571+00:00 stderr F I0813 20:00:01.880026 1 base_controller.go:73] Caches are synced for ValidatingWebhookCABundleInjector 2025-08-13T20:00:01.880192074+00:00 stderr F I0813 20:00:01.880176 1 base_controller.go:110] Starting #1 worker of ValidatingWebhookCABundleInjector controller ... 
2025-08-13T20:00:01.880236895+00:00 stderr F I0813 20:00:01.880223 1 base_controller.go:110] Starting #2 worker of ValidatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.880268266+00:00 stderr F I0813 20:00:01.880256 1 base_controller.go:110] Starting #3 worker of ValidatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.880304517+00:00 stderr F I0813 20:00:01.880290 1 base_controller.go:110] Starting #4 worker of ValidatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.880334788+00:00 stderr F I0813 20:00:01.880323 1 base_controller.go:110] Starting #5 worker of ValidatingWebhookCABundleInjector controller ... 2025-08-13T20:00:01.880497083+00:00 stderr F I0813 20:00:01.880473 1 admissionwebhook.go:116] updating validatingwebhookconfiguration controlplanemachineset.machine.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.882019036+00:00 stderr F I0813 20:00:01.881994 1 admissionwebhook.go:116] updating validatingwebhookconfiguration multus.openshift.io with the service signing CA bundle 2025-08-13T20:00:01.922162840+00:00 stderr F I0813 20:00:01.918431 1 base_controller.go:67] Waiting for caches to sync for LegacyVulnerableConfigMapCABundleInjector 2025-08-13T20:00:02.498433851+00:00 stderr F I0813 20:00:02.491383 1 apiservice.go:62] updating apiservice v1.project.openshift.io with the service signing CA bundle 2025-08-13T20:00:02.507945382+00:00 stderr F I0813 20:00:02.498640 1 apiservice.go:62] updating apiservice v1.quota.openshift.io with the service signing CA bundle 2025-08-13T20:00:02.507945382+00:00 stderr F I0813 20:00:02.501503 1 apiservice.go:62] updating apiservice v1.security.openshift.io with the service signing CA bundle 2025-08-13T20:00:02.507945382+00:00 stderr F I0813 20:00:02.506265 1 apiservice.go:62] updating apiservice v1.route.openshift.io with the service signing CA bundle 2025-08-13T20:00:02.878309293+00:00 stderr F E0813 20:00:02.873725 1 base_controller.go:266] 
"APIServiceCABundleInjector" controller failed to sync "v1.apps.openshift.io", err: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.apps.openshift.io": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:02.878309293+00:00 stderr F I0813 20:00:02.874328 1 apiservice.go:62] updating apiservice v1.template.openshift.io with the service signing CA bundle 2025-08-13T20:00:03.200411708+00:00 stderr F I0813 20:00:03.183130 1 apiservice.go:62] updating apiservice v1.user.openshift.io with the service signing CA bundle 2025-08-13T20:00:03.200411708+00:00 stderr F I0813 20:00:03.183506 1 apiservice.go:62] updating apiservice v1.apps.openshift.io with the service signing CA bundle 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292110 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:06.292061932 +0000 UTC))" 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292742 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:06.292719861 +0000 UTC))" 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292769 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:06.292749792 +0000 UTC))" 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292874 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:06.292826134 +0000 UTC))" 2025-08-13T20:00:06.292934267+00:00 stderr F I0813 20:00:06.292902 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.292885976 +0000 UTC))" 2025-08-13T20:00:06.292992459+00:00 stderr F I0813 20:00:06.292927 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.292909116 +0000 UTC))" 2025-08-13T20:00:06.292992459+00:00 stderr F I0813 20:00:06.292951 1 tlsconfig.go:178] "Loaded client CA" index=6 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.292935147 +0000 UTC))" 2025-08-13T20:00:06.292992459+00:00 stderr F I0813 20:00:06.292974 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.292957398 +0000 UTC))" 2025-08-13T20:00:06.293007149+00:00 stderr F I0813 20:00:06.292999 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:06.292980798 +0000 UTC))" 2025-08-13T20:00:06.297475096+00:00 stderr F I0813 20:00:06.293026 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:06.293007269 +0000 UTC))" 2025-08-13T20:00:06.297475096+00:00 stderr F I0813 20:00:06.293450 1 tlsconfig.go:200] "Loaded serving 
cert" certName="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1755115193\" (2025-08-13 19:59:55 +0000 UTC to 2025-09-12 19:59:56 +0000 UTC (now=2025-08-13 20:00:06.293427291 +0000 UTC))" 2025-08-13T20:00:06.297475096+00:00 stderr F I0813 20:00:06.293893 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115200\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115199\" (2025-08-13 18:59:57 +0000 UTC to 2026-08-13 18:59:57 +0000 UTC (now=2025-08-13 20:00:06.293768051 +0000 UTC))" 2025-08-13T20:00:06.488525964+00:00 stderr F I0813 20:00:06.485459 1 base_controller.go:73] Caches are synced for ServiceServingCertController 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507060 1 base_controller.go:110] Starting #1 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507115 1 base_controller.go:110] Starting #2 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507121 1 base_controller.go:110] Starting #3 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507130 1 base_controller.go:110] Starting #4 worker of ServiceServingCertController controller ... 2025-08-13T20:00:06.507458814+00:00 stderr F I0813 20:00:06.507135 1 base_controller.go:110] Starting #5 worker of ServiceServingCertController controller ... 
2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.486469 1 base_controller.go:73] Caches are synced for ServiceServingCertUpdateController 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518259 1 base_controller.go:110] Starting #1 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518276 1 base_controller.go:110] Starting #2 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518281 1 base_controller.go:110] Starting #3 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518290 1 base_controller.go:110] Starting #4 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.518336814+00:00 stderr F I0813 20:00:06.518295 1 base_controller.go:110] Starting #5 worker of ServiceServingCertUpdateController controller ... 2025-08-13T20:00:06.548071772+00:00 stderr F I0813 20:00:06.547959 1 base_controller.go:73] Caches are synced for CRDCABundleInjector 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548118 1 base_controller.go:110] Starting #1 worker of CRDCABundleInjector controller ... 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548214 1 base_controller.go:110] Starting #2 worker of CRDCABundleInjector controller ... 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548221 1 base_controller.go:110] Starting #3 worker of CRDCABundleInjector controller ... 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548227 1 base_controller.go:110] Starting #4 worker of CRDCABundleInjector controller ... 2025-08-13T20:00:06.549093651+00:00 stderr F I0813 20:00:06.548231 1 base_controller.go:110] Starting #5 worker of CRDCABundleInjector controller ... 
2025-08-13T20:00:06.745229034+00:00 stderr F I0813 20:00:06.745082 1 crd.go:69] updating customresourcedefinition alertmanagerconfigs.monitoring.coreos.com conversion webhook config with the service signing CA bundle 2025-08-13T20:00:06.879052430+00:00 stderr F I0813 20:00:06.878991 1 crd.go:69] updating customresourcedefinition consoleplugins.console.openshift.io conversion webhook config with the service signing CA bundle 2025-08-13T20:00:11.997074793+00:00 stderr F I0813 20:00:11.996116 1 trace.go:236] Trace[2035768624]: "DeltaFIFO Pop Process" ID:openshift-authentication/kube-root-ca.crt,Depth:468,Reason:slow event handlers blocking the queue (13-Aug-2025 20:00:10.956) (total time: 1039ms): 2025-08-13T20:00:11.997074793+00:00 stderr F Trace[2035768624]: [1.039966012s] [1.039966012s] END 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232646 1 base_controller.go:73] Caches are synced for LegacyVulnerableConfigMapCABundleInjector 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232763 1 base_controller.go:110] Starting #1 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232822 1 base_controller.go:110] Starting #2 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232828 1 base_controller.go:110] Starting #3 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232855 1 base_controller.go:110] Starting #4 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232862 1 base_controller.go:110] Starting #5 worker of LegacyVulnerableConfigMapCABundleInjector controller ... 
2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232933 1 base_controller.go:73] Caches are synced for ConfigMapCABundleInjector 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232940 1 base_controller.go:110] Starting #1 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232945 1 base_controller.go:110] Starting #2 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232952 1 base_controller.go:110] Starting #3 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232957 1 base_controller.go:110] Starting #4 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.232962 1 base_controller.go:110] Starting #5 worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233002 1 configmap.go:109] updating configmap default/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233395 1 configmap.go:109] updating configmap hostpath-provisioner/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233493 1 configmap.go:109] updating configmap kube-node-lease/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233767 1 configmap.go:109] updating configmap kube-public/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.235028008+00:00 stderr F I0813 20:00:12.233979 1 configmap.go:109] updating configmap kube-system/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.307510945+00:00 stderr F I0813 20:00:12.307368 1 configmap.go:109] updating configmap openshift-apiserver-operator/openshift-service-ca.crt with the service 
signing CA bundle 2025-08-13T20:00:12.312396304+00:00 stderr F I0813 20:00:12.312358 1 configmap.go:109] updating configmap openshift-apiserver/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.347208117+00:00 stderr F I0813 20:00:12.345914 1 configmap.go:109] updating configmap openshift-authentication-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.347208117+00:00 stderr F I0813 20:00:12.346312 1 configmap.go:109] updating configmap openshift-authentication-operator/service-ca-bundle with the service signing CA bundle 2025-08-13T20:00:12.347208117+00:00 stderr F I0813 20:00:12.346883 1 configmap.go:109] updating configmap openshift-authentication/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.363920953+00:00 stderr F I0813 20:00:12.359612 1 configmap.go:109] updating configmap openshift-authentication/v4-0-config-system-service-ca with the service signing CA bundle 2025-08-13T20:00:12.390245744+00:00 stderr F I0813 20:00:12.390087 1 configmap.go:109] updating configmap openshift-cloud-network-config-controller/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.653877521+00:00 stderr F I0813 20:00:12.652872 1 configmap.go:109] updating configmap openshift-cloud-platform-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.863091997+00:00 stderr F I0813 20:00:12.821553 1 configmap.go:109] updating configmap openshift-cluster-machine-approver/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.863091997+00:00 stderr F I0813 20:00:12.822059 1 configmap.go:109] updating configmap openshift-cluster-samples-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:12.863091997+00:00 stderr F I0813 20:00:12.835767 1 configmap.go:109] updating configmap openshift-cluster-storage-operator/openshift-service-ca.crt with the service signing CA bundle 
2025-08-13T20:00:12.876028596+00:00 stderr F I0813 20:00:12.873745 1 configmap.go:109] updating configmap openshift-cluster-version/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.012099386+00:00 stderr F I0813 20:00:13.003114 1 configmap.go:109] updating configmap openshift-config-managed/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.173388725+00:00 stderr F I0813 20:00:13.173331 1 configmap.go:109] updating configmap openshift-config-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.377111194+00:00 stderr F I0813 20:00:13.355691 1 configmap.go:109] updating configmap openshift-config/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.601168223+00:00 stderr F I0813 20:00:13.600240 1 configmap.go:109] updating configmap openshift-console-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:13.812380335+00:00 stderr F I0813 20:00:13.806721 1 configmap.go:109] updating configmap openshift-console-user-settings/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:14.609907106+00:00 stderr F I0813 20:00:14.607658 1 configmap.go:109] updating configmap openshift-console/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:14.727356554+00:00 stderr F I0813 20:00:14.724130 1 configmap.go:109] updating configmap openshift-console/service-ca with the service signing CA bundle 2025-08-13T20:00:15.440417256+00:00 stderr F I0813 20:00:15.439282 1 configmap.go:109] updating configmap openshift-controller-manager-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:15.515047674+00:00 stderr F I0813 20:00:15.513957 1 configmap.go:109] updating configmap openshift-controller-manager/openshift-service-ca with the service signing CA bundle 2025-08-13T20:00:16.540070781+00:00 stderr F I0813 20:00:16.539585 1 configmap.go:109] updating 
configmap openshift-controller-manager/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.540382920+00:00 stderr F I0813 20:00:16.540302 1 configmap.go:109] updating configmap openshift-dns-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.560032700+00:00 stderr F I0813 20:00:16.559963 1 configmap.go:109] updating configmap openshift-dns/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.560454953+00:00 stderr F I0813 20:00:16.560428 1 configmap.go:109] updating configmap openshift-etcd-operator/etcd-service-ca-bundle with the service signing CA bundle 2025-08-13T20:00:16.687434013+00:00 stderr F I0813 20:00:16.680442 1 configmap.go:109] updating configmap openshift-etcd-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.762941806+00:00 stderr F I0813 20:00:16.750734 1 configmap.go:109] updating configmap openshift-etcd/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.792458758+00:00 stderr F I0813 20:00:16.792392 1 configmap.go:109] updating configmap openshift-host-network/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.792644173+00:00 stderr F I0813 20:00:16.792622 1 configmap.go:109] updating configmap openshift-image-registry/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.792730676+00:00 stderr F I0813 20:00:16.792711 1 configmap.go:109] updating configmap openshift-image-registry/serviceca with the service signing CA bundle 2025-08-13T20:00:16.796090031+00:00 stderr F I0813 20:00:16.796058 1 configmap.go:109] updating configmap openshift-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:16.818726757+00:00 stderr F I0813 20:00:16.818316 1 configmap.go:109] updating configmap openshift-ingress-canary/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.049716543+00:00 stderr 
F I0813 20:00:17.049660 1 configmap.go:109] updating configmap openshift-ingress-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.317574771+00:00 stderr F I0813 20:00:17.315887 1 configmap.go:109] updating configmap openshift-ingress/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.455584107+00:00 stderr F I0813 20:00:17.452246 1 configmap.go:109] updating configmap openshift-ingress/service-ca-bundle with the service signing CA bundle 2025-08-13T20:00:17.570884264+00:00 stderr F I0813 20:00:17.570709 1 configmap.go:109] updating configmap openshift-kni-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.955957004+00:00 stderr F I0813 20:00:17.953019 1 configmap.go:109] updating configmap openshift-kube-apiserver-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:17.955957004+00:00 stderr F I0813 20:00:17.953016 1 configmap.go:109] updating configmap openshift-kube-apiserver/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:18.188114034+00:00 stderr F I0813 20:00:18.187887 1 configmap.go:109] updating configmap openshift-kube-controller-manager-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:18.522050445+00:00 stderr F I0813 20:00:18.521633 1 configmap.go:109] updating configmap openshift-kube-controller-manager/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:18.825947690+00:00 stderr F I0813 20:00:18.825317 1 configmap.go:109] updating configmap openshift-kube-scheduler-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:18.933481736+00:00 stderr F I0813 20:00:18.886131 1 configmap.go:109] updating configmap openshift-kube-scheduler/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:19.208135938+00:00 stderr F I0813 20:00:19.207619 1 configmap.go:109] updating configmap 
openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:19.587523646+00:00 stderr F I0813 20:00:19.586045 1 configmap.go:109] updating configmap openshift-kube-storage-version-migrator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:19.606589159+00:00 stderr F I0813 20:00:19.606153 1 configmap.go:109] updating configmap openshift-machine-api/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:20.009422926+00:00 stderr F I0813 20:00:20.002186 1 configmap.go:109] updating configmap openshift-machine-config-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:20.009422926+00:00 stderr F I0813 20:00:20.002507 1 configmap.go:109] updating configmap openshift-marketplace/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:20.895244294+00:00 stderr F I0813 20:00:20.890157 1 configmap.go:109] updating configmap openshift-monitoring/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:20.895421089+00:00 stderr F I0813 20:00:20.895375 1 request.go:697] Waited for 1.250420235s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/api/v1/namespaces/openshift-machine-api/configmaps/openshift-service-ca.crt 2025-08-13T20:00:21.179931351+00:00 stderr F I0813 20:00:21.005663 1 configmap.go:109] updating configmap openshift-multus/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.188084174+00:00 stderr F I0813 20:00:21.016159 1 configmap.go:109] updating configmap openshift-network-node-identity/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.219745687+00:00 stderr F I0813 20:00:21.016256 1 configmap.go:109] updating configmap openshift-network-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.220406856+00:00 stderr F I0813 20:00:21.014189 1 
configmap.go:109] updating configmap openshift-network-diagnostics/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.986147588+00:00 stderr F I0813 20:00:21.974352 1 configmap.go:109] updating configmap openshift-node/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:21.995887216+00:00 stderr F I0813 20:00:21.993021 1 configmap.go:109] updating configmap openshift-nutanix-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.048125206+00:00 stderr F I0813 20:00:22.042403 1 configmap.go:109] updating configmap openshift-oauth-apiserver/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.048125206+00:00 stderr F I0813 20:00:22.042635 1 configmap.go:109] updating configmap openshift-openstack-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.048125206+00:00 stderr F I0813 20:00:22.042704 1 configmap.go:109] updating configmap openshift-operator-lifecycle-manager/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.133694216+00:00 stderr F I0813 20:00:22.130556 1 configmap.go:109] updating configmap openshift-operators/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.139828371+00:00 stderr F I0813 20:00:22.138307 1 configmap.go:109] updating configmap openshift-ovirt-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.514936557+00:00 stderr F I0813 20:00:22.506140 1 configmap.go:109] updating configmap openshift-ovn-kubernetes/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:22.620769205+00:00 stderr F I0813 20:00:22.591045 1 configmap.go:109] updating configmap openshift-route-controller-manager/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.145944782+00:00 stderr F I0813 20:00:23.144593 1 configmap.go:109] updating configmap 
openshift-service-ca-operator/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.185630594+00:00 stderr F I0813 20:00:23.185527 1 configmap.go:109] updating configmap openshift-service-ca/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.222596558+00:00 stderr F I0813 20:00:23.222544 1 configmap.go:109] updating configmap openshift-user-workload-monitoring/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.344223196+00:00 stderr F I0813 20:00:23.337740 1 configmap.go:109] updating configmap openshift-vsphere-infra/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:23.801506735+00:00 stderr F I0813 20:00:23.795233 1 configmap.go:109] updating configmap openshift/openshift-service-ca.crt with the service signing CA bundle 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.991128 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.990982871 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.991967 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.991945759 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992043 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.99197661 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992090 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.992072732 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992118 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992102963 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992147 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992130494 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992191 
1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992179605 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992209 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992196796 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992227 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.992214366 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992253 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.992242747 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992278 1 tlsconfig.go:178] "Loaded 
client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992259978 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.992932 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1755115193\" (2025-08-13 19:59:55 +0000 UTC to 2025-09-12 19:59:56 +0000 UTC (now=2025-08-13 20:00:59.992908786 +0000 UTC))" 2025-08-13T20:00:59.996017265+00:00 stderr F I0813 20:00:59.993287 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115200\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115199\" (2025-08-13 18:59:57 +0000 UTC to 2026-08-13 18:59:57 +0000 UTC (now=2025-08-13 20:00:59.993263116 +0000 UTC))" 2025-08-13T20:03:15.140543818+00:00 stderr F E0813 20:03:15.139505 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca/service-ca-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.507620624+00:00 stderr F E0813 20:04:15.506936 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca/service-ca-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:05:15.192296423+00:00 stderr F E0813 20:05:15.190409 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca/service-ca-controller-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:36.387910163+00:00 stderr F I0813 20:42:36.387265 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.388108869+00:00 stderr F I0813 20:42:36.387996 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.386245 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.386322 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.396159 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.396305 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.429408 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.429654 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.429735 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437605156+00:00 stderr F I0813 20:42:36.429929 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:41.745388331+00:00 stderr F I0813 20:42:41.744466 
1 cmd.go:121] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:41.745575776+00:00 stderr F I0813 20:42:41.745476 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:41.745575776+00:00 stderr F I0813 20:42:41.745524 1 base_controller.go:172] Shutting down ConfigMapCABundleInjector ... 2025-08-13T20:42:41.745592667+00:00 stderr F I0813 20:42:41.745571 1 base_controller.go:172] Shutting down LegacyVulnerableConfigMapCABundleInjector ... 2025-08-13T20:42:41.745592667+00:00 stderr F I0813 20:42:41.745575 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:41.745602877+00:00 stderr F I0813 20:42:41.745590 1 base_controller.go:172] Shutting down CRDCABundleInjector ... 2025-08-13T20:42:41.745602877+00:00 stderr F I0813 20:42:41.745594 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:41.745647888+00:00 stderr F I0813 20:42:41.745613 1 genericapiserver.go:605] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:41.745647888+00:00 stderr F I0813 20:42:41.745637 1 base_controller.go:172] Shutting down ServiceServingCertUpdateController ... 2025-08-13T20:42:41.745754302+00:00 stderr F I0813 20:42:41.745684 1 base_controller.go:172] Shutting down ServiceServingCertController ... 2025-08-13T20:42:41.745754302+00:00 stderr F I0813 20:42:41.745701 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:41.745857274+00:00 stderr F I0813 20:42:41.745770 1 base_controller.go:172] Shutting down ValidatingWebhookCABundleInjector ... 
2025-08-13T20:42:41.745857274+00:00 stderr F I0813 20:42:41.745820 1 genericapiserver.go:639] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:41.745876575+00:00 stderr F I0813 20:42:41.745860 1 genericapiserver.go:630] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:41.745876575+00:00 stderr F I0813 20:42:41.745868 1 base_controller.go:172] Shutting down APIServiceCABundleInjector ... 2025-08-13T20:42:41.745887015+00:00 stderr F I0813 20:42:41.745876 1 genericapiserver.go:671] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:41.745897006+00:00 stderr F I0813 20:42:41.745886 1 base_controller.go:172] Shutting down MutatingWebhookCABundleInjector ... 2025-08-13T20:42:41.745951497+00:00 stderr F I0813 20:42:41.745904 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.745972808+00:00 stderr F I0813 20:42:41.745951 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.745972808+00:00 stderr F I0813 20:42:41.745961 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746189854+00:00 stderr F I0813 20:42:41.745969 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746189854+00:00 stderr F I0813 20:42:41.745980 1 base_controller.go:114] Shutting down worker of ConfigMapCABundleInjector controller ... 
2025-08-13T20:42:41.746189854+00:00 stderr F I0813 20:42:41.746006 1 base_controller.go:104] All ConfigMapCABundleInjector workers have been terminated 2025-08-13T20:42:41.746335778+00:00 stderr F I0813 20:42:41.746152 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:41.746543974+00:00 stderr F I0813 20:42:41.746486 1 secure_serving.go:255] Stopped listening on [::]:8443 2025-08-13T20:42:41.746559525+00:00 stderr F I0813 20:42:41.746540 1 genericapiserver.go:588] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:41.746569765+00:00 stderr F I0813 20:42:41.746559 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:41.746605376+00:00 stderr F I0813 20:42:41.746584 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:41.746689328+00:00 stderr F I0813 20:42:41.746642 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/tmp/serving-cert-2770977124/tls.crt::/tmp/serving-cert-2770977124/tls.key" 2025-08-13T20:42:41.746870454+00:00 stderr F I0813 20:42:41.746834 1 base_controller.go:114] Shutting down worker of ServiceServingCertController controller ... 2025-08-13T20:42:41.746870454+00:00 stderr F I0813 20:42:41.746864 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746938256+00:00 stderr F I0813 20:42:41.746906 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746951286+00:00 stderr F I0813 20:42:41.746936 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 
2025-08-13T20:42:41.746951286+00:00 stderr F I0813 20:42:41.746945 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746962886+00:00 stderr F I0813 20:42:41.746952 1 base_controller.go:114] Shutting down worker of LegacyVulnerableConfigMapCABundleInjector controller ... 2025-08-13T20:42:41.746974507+00:00 stderr F I0813 20:42:41.746962 1 base_controller.go:104] All LegacyVulnerableConfigMapCABundleInjector workers have been terminated 2025-08-13T20:42:41.747098780+00:00 stderr F I0813 20:42:41.747030 1 genericapiserver.go:701] [graceful-termination] apiserver is exiting 2025-08-13T20:42:41.747098780+00:00 stderr F I0813 20:42:41.747076 1 builder.go:302] server exited 2025-08-13T20:42:41.747499952+00:00 stderr F E0813 20:42:41.747390 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.747646846+00:00 stderr F W0813 20:42:41.747540 1 leaderelection.go:85] leader election lost 2025-08-13T20:42:41.747646846+00:00 stderr F I0813 20:42:41.747611 1 base_controller.go:114] Shutting down worker of ServiceServingCertController controller ... 
././@LongLink0000644000000000000000000000027700000000000011611 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-op0000755000175000017500000000000015117130647033071 5ustar zuulzuul././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-op0000755000175000017500000000000015117130654033067 5ustar zuulzuul././@LongLink0000644000000000000000000000034100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-op0000644000175000017500000016357615117130647033115 0ustar zuulzuul2025-12-13T00:13:15.572916226+00:00 stderr F I1213 00:13:15.572068 1 cmd.go:240] Using service-serving-cert provided certificates 2025-12-13T00:13:15.572916226+00:00 stderr F I1213 00:13:15.572484 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-12-13T00:13:15.574707117+00:00 stderr F I1213 00:13:15.574357 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:15.650480413+00:00 stderr F I1213 00:13:15.649325 1 builder.go:298] openshift-apiserver-operator version - 2025-12-13T00:13:16.187261459+00:00 stderr F I1213 00:13:16.181565 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:16.187261459+00:00 stderr F W1213 00:13:16.182091 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:16.187261459+00:00 stderr F W1213 00:13:16.182100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:16.187261459+00:00 stderr F I1213 00:13:16.186496 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-12-13T00:13:16.187261459+00:00 stderr F I1213 00:13:16.187025 1 leaderelection.go:250] attempting to acquire leader lease openshift-apiserver-operator/openshift-apiserver-operator-lock... 
2025-12-13T00:13:16.189361620+00:00 stderr F I1213 00:13:16.188659 1 secure_serving.go:213] Serving securely on [::]:8443 2025-12-13T00:13:16.189361620+00:00 stderr F I1213 00:13:16.188718 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:16.192386462+00:00 stderr F I1213 00:13:16.191542 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:16.192386462+00:00 stderr F I1213 00:13:16.191660 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:16.192386462+00:00 stderr F I1213 00:13:16.191729 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:16.192386462+00:00 stderr F I1213 00:13:16.191855 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:16.192386462+00:00 stderr F I1213 00:13:16.191861 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:16.192386462+00:00 stderr F I1213 00:13:16.191873 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:16.192386462+00:00 stderr F I1213 00:13:16.191878 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:16.294978690+00:00 stderr F I1213 00:13:16.292005 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:16.294978690+00:00 stderr F I1213 00:13:16.292053 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 
2025-12-13T00:13:16.294978690+00:00 stderr F I1213 00:13:16.292140 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:19:00.456847670+00:00 stderr F I1213 00:19:00.456202 1 leaderelection.go:260] successfully acquired lease openshift-apiserver-operator/openshift-apiserver-operator-lock 2025-12-13T00:19:00.456986514+00:00 stderr F I1213 00:19:00.456375 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"d9b35288-1c3d-4620-987e-0e2acf09bc76", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42133", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-apiserver-operator-7c88c4c865-kn67m_dd87b64a-1f22-43ce-8663-af167a79f732 became leader 2025-12-13T00:19:00.463103952+00:00 stderr F I1213 00:19:00.463034 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:19:00.465194460+00:00 stderr F I1213 00:19:00.465121 1 starter.go:133] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack 
MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-12-13T00:19:00.465478418+00:00 stderr F I1213 00:19:00.465377 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", 
"EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-12-13T00:19:00.479407712+00:00 stderr F I1213 00:19:00.479311 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-12-13T00:19:00.479914866+00:00 stderr F I1213 00:19:00.479862 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-12-13T00:19:00.479914866+00:00 stderr F I1213 00:19:00.479900 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-12-13T00:19:00.479950207+00:00 stderr F I1213 00:19:00.479925 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-12-13T00:19:00.480045370+00:00 stderr F I1213 00:19:00.480003 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-12-13T00:19:00.480056800+00:00 stderr F I1213 00:19:00.480041 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-apiserver 2025-12-13T00:19:00.480228315+00:00 stderr F I1213 00:19:00.480155 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 
2025-12-13T00:19:00.480239995+00:00 stderr F I1213 00:19:00.480225 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2025-12-13T00:19:00.480279556+00:00 stderr F I1213 00:19:00.480255 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-12-13T00:19:00.480319667+00:00 stderr F I1213 00:19:00.480297 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-12-13T00:19:00.480371388+00:00 stderr F I1213 00:19:00.480240 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2025-12-13T00:19:00.671111948+00:00 stderr F I1213 00:19:00.480229 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAPIServerWorkloadController 2025-12-13T00:19:00.671164919+00:00 stderr F I1213 00:19:00.671120 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-12-13T00:19:00.671164919+00:00 stderr F I1213 00:19:00.671144 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-12-13T00:19:00.671436347+00:00 stderr F I1213 00:19:00.671396 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-12-13T00:19:00.673119414+00:00 stderr F I1213 00:19:00.673093 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-12-13T00:19:00.673510414+00:00 stderr F I1213 00:19:00.673486 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-12-13T00:19:00.675144570+00:00 stderr F I1213 00:19:00.675120 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-12-13T00:19:00.675212832+00:00 stderr F I1213 00:19:00.675191 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-apiserver 2025-12-13T00:19:00.772068602+00:00 stderr F I1213 00:19:00.771373 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 
2025-12-13T00:19:00.772068602+00:00 stderr F I1213 00:19:00.771816 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-12-13T00:19:00.775447375+00:00 stderr F I1213 00:19:00.775403 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-apiserver 2025-12-13T00:19:00.775461785+00:00 stderr F I1213 00:19:00.775446 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-apiserver controller ... 2025-12-13T00:19:00.779991940+00:00 stderr F I1213 00:19:00.779954 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-12-13T00:19:00.780019281+00:00 stderr F I1213 00:19:00.779987 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-12-13T00:19:00.780089913+00:00 stderr F I1213 00:19:00.780053 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-12-13T00:19:00.780089913+00:00 stderr F I1213 00:19:00.780079 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-12-13T00:19:00.780567636+00:00 stderr F I1213 00:19:00.780478 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-apiserver 2025-12-13T00:19:00.780567636+00:00 stderr F I1213 00:19:00.780501 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-apiserver controller ... 2025-12-13T00:19:00.780567636+00:00 stderr F I1213 00:19:00.780527 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2025-12-13T00:19:00.780567636+00:00 stderr F I1213 00:19:00.780541 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 
2025-12-13T00:19:00.787232500+00:00 stderr F I1213 00:19:00.787191 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:00.827095659+00:00 stderr F I1213 00:19:00.824586 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:00.991189564+00:00 stderr F I1213 00:19:00.991096 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:01.084757004+00:00 stderr F I1213 00:19:01.084685 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:01.137514889+00:00 stderr F I1213 00:19:01.137457 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-12-13T00:19:01.288468621+00:00 stderr F I1213 00:19:01.288371 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:01.482823761+00:00 stderr F I1213 00:19:01.482739 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:01.680121421+00:00 stderr F I1213 00:19:01.680011 1 request.go:697] Waited for 1.02266579s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets?limit=500&resourceVersion=0 2025-12-13T00:19:01.683065242+00:00 stderr F I1213 00:19:01.682831 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:01.882305666+00:00 stderr F I1213 00:19:01.882260 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:02.081830579+00:00 stderr F I1213 00:19:02.081758 1 reflector.go:351] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:02.308395986+00:00 stderr F I1213 00:19:02.308320 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:02.512481543+00:00 stderr F I1213 00:19:02.512416 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:02.680149746+00:00 stderr F I1213 00:19:02.680051 1 request.go:697] Waited for 2.006489858s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/nodes?limit=500&resourceVersion=0 2025-12-13T00:19:02.682088260+00:00 stderr F I1213 00:19:02.682042 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:02.779570578+00:00 stderr F I1213 00:19:02.779443 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-12-13T00:19:02.779570578+00:00 stderr F I1213 00:19:02.779479 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-12-13T00:19:02.883307939+00:00 stderr F I1213 00:19:02.883181 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:02.974167824+00:00 stderr F I1213 00:19:02.974053 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-12-13T00:19:02.974167824+00:00 stderr F I1213 00:19:02.974123 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 
2025-12-13T00:19:02.974752320+00:00 stderr F I1213 00:19:02.974695 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-12-13T00:19:03.084784564+00:00 stderr F I1213 00:19:03.084740 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:03.282225318+00:00 stderr F I1213 00:19:03.282141 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:03.372971711+00:00 stderr F I1213 00:19:03.371531 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-12-13T00:19:03.372971711+00:00 stderr F I1213 00:19:03.371570 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-12-13T00:19:03.373664190+00:00 stderr F I1213 00:19:03.373631 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-12-13T00:19:03.373687621+00:00 stderr F I1213 00:19:03.373662 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-12-13T00:19:03.373745352+00:00 stderr F I1213 00:19:03.373730 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-12-13T00:19:03.373775613+00:00 stderr F I1213 00:19:03.373764 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 
2025-12-13T00:19:03.376030066+00:00 stderr F I1213 00:19:03.375986 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-12-13T00:19:03.376030066+00:00 stderr F I1213 00:19:03.376003 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-12-13T00:19:03.380280822+00:00 stderr F I1213 00:19:03.380243 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-12-13T00:19:03.380280822+00:00 stderr F I1213 00:19:03.380259 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-12-13T00:19:03.380280822+00:00 stderr F I1213 00:19:03.380265 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-12-13T00:19:03.380298803+00:00 stderr F I1213 00:19:03.380278 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-12-13T00:19:03.380298803+00:00 stderr F I1213 00:19:03.380293 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2025-12-13T00:19:03.380306323+00:00 stderr F I1213 00:19:03.380298 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2025-12-13T00:19:03.491289974+00:00 stderr F I1213 00:19:03.491176 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:03.571970949+00:00 stderr F I1213 00:19:03.571843 1 base_controller.go:73] Caches are synced for RevisionController 2025-12-13T00:19:03.571970949+00:00 stderr F I1213 00:19:03.571882 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-12-13T00:19:03.572259547+00:00 stderr F I1213 00:19:03.572193 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-12-13T00:19:03.572259547+00:00 stderr F I1213 00:19:03.572239 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 
2025-12-13T00:19:03.572413151+00:00 stderr F I1213 00:19:03.572338 1 base_controller.go:73] Caches are synced for OpenShiftAPIServerWorkloadController 2025-12-13T00:19:03.572413151+00:00 stderr F I1213 00:19:03.572378 1 base_controller.go:110] Starting #1 worker of OpenShiftAPIServerWorkloadController controller ... 2025-12-13T00:19:03.572413151+00:00 stderr F I1213 00:19:03.572386 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-12-13T00:19:03.580587156+00:00 stderr F I1213 00:19:03.580500 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-12-13T00:19:03.580587156+00:00 stderr F I1213 00:19:03.580538 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-12-13T00:19:03.880907937+00:00 stderr F I1213 00:19:03.880261 1 request.go:697] Waited for 3.098886911s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver 2025-12-13T00:19:37.570296267+00:00 stderr F I1213 00:19:37.570089 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.57004478 +0000 UTC))" 2025-12-13T00:19:37.570386120+00:00 stderr F I1213 00:19:37.570368 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.570343699 +0000 UTC))" 2025-12-13T00:19:37.570445961+00:00 stderr F I1213 00:19:37.570430 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.57040734 +0000 UTC))" 2025-12-13T00:19:37.570499523+00:00 stderr F I1213 00:19:37.570485 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.570465742 +0000 UTC))" 2025-12-13T00:19:37.570563085+00:00 stderr F I1213 00:19:37.570541 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.570519733 +0000 UTC))" 2025-12-13T00:19:37.570627586+00:00 stderr F I1213 00:19:37.570613 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.570592845 +0000 UTC))" 2025-12-13T00:19:37.570680518+00:00 stderr F I1213 00:19:37.570667 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.570646497 +0000 UTC))" 2025-12-13T00:19:37.570735439+00:00 stderr F I1213 00:19:37.570720 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.570699518 +0000 UTC))" 2025-12-13T00:19:37.570790961+00:00 stderr F I1213 00:19:37.570777 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.57075432 +0000 UTC))" 2025-12-13T00:19:37.570845182+00:00 stderr F I1213 00:19:37.570831 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.570812361 +0000 UTC))" 2025-12-13T00:19:37.570896694+00:00 stderr F I1213 00:19:37.570883 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.570864183 +0000 UTC))" 2025-12-13T00:19:37.570976286+00:00 stderr F I1213 00:19:37.570959 1 tlsconfig.go:178] "Loaded client CA" 
index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.570914924 +0000 UTC))" 2025-12-13T00:19:37.571404378+00:00 stderr F I1213 00:19:37.571383 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:07 +0000 UTC to 2027-08-13 20:00:08 +0000 UTC (now=2025-12-13 00:19:37.571359326 +0000 UTC))" 2025-12-13T00:19:37.571834929+00:00 stderr F I1213 00:19:37.571816 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584795\" (2025-12-12 23:13:15 +0000 UTC to 2026-12-12 23:13:15 +0000 UTC (now=2025-12-13 00:19:37.571793318 +0000 UTC))" 2025-12-13T00:20:42.991329573+00:00 stderr F E1213 00:20:42.990745 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:43.585259140+00:00 stderr F E1213 00:20:43.585184 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:43.592628499+00:00 
stderr F E1213 00:20:43.592588 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:43.605251500+00:00 stderr F E1213 00:20:43.605183 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:43.628735193+00:00 stderr F E1213 00:20:43.628671 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:43.672256448+00:00 stderr F E1213 00:20:43.672203 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:43.756427539+00:00 stderr F E1213 00:20:43.756204 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:43.920143167+00:00 stderr F E1213 00:20:43.919828 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:44.243336899+00:00 stderr F E1213 00:20:44.243278 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:44.584911347+00:00 stderr F E1213 00:20:44.584832 1 base_controller.go:268] 
APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:44.886339341+00:00 stderr F E1213 00:20:44.886281 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:46.170473823+00:00 stderr F E1213 00:20:46.170362 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:46.385193526+00:00 stderr F E1213 00:20:46.383825 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:48.185384134+00:00 stderr F E1213 00:20:48.185291 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:48.735106288+00:00 stderr F E1213 00:20:48.735022 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:49.984361900+00:00 stderr F E1213 00:20:49.984014 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.784619370+00:00 stderr F 
E1213 00:20:51.784389 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.586021280+00:00 stderr F E1213 00:20:53.585225 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.586021280+00:00 stderr F E1213 00:20:53.585255 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.858372469+00:00 stderr F E1213 00:20:53.858302 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.385811276+00:00 stderr F E1213 00:20:55.385335 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.185252504+00:00 stderr F E1213 00:20:57.184865 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.983601412+00:00 stderr F E1213 00:20:58.983546 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.473940838+00:00 stderr F E1213 00:21:00.473845 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.782601117+00:00 stderr F E1213 00:21:00.782531 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.784171659+00:00 stderr F E1213 00:21:00.784124 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.788408204+00:00 stderr F E1213 00:21:00.788360 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, 
"v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:00.788501396+00:00 stderr F E1213 00:21:00.788470 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.799833712+00:00 stderr F E1213 00:21:00.799775 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-12-13T00:21:00.804302503+00:00 stderr F E1213 00:21:00.804233 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:01.184060530+00:00 stderr F E1213 00:21:01.183974 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:01.385623919+00:00 stderr F E1213 00:21:01.385557 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:01.584108676+00:00 stderr F E1213 00:21:01.584070 1 base_controller.go:268] 
NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:01.983632046+00:00 stderr F E1213 00:21:01.983575 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.383790754+00:00 stderr F E1213 00:21:02.383447 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.584651054+00:00 stderr F E1213 00:21:02.584606 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.586327920+00:00 stderr F E1213 00:21:02.586297 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: 
connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:02.983854127+00:00 stderr F E1213 00:21:02.983571 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:03.386851971+00:00 stderr F E1213 00:21:03.386762 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: 
connect: connection refused] 2025-12-13T00:21:03.788634263+00:00 stderr F E1213 00:21:03.788558 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:04.183241142+00:00 stderr F E1213 00:21:04.183179 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:04.384160153+00:00 stderr F E1213 00:21:04.384083 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:04.385526390+00:00 stderr F E1213 00:21:04.385480 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:04.986239450+00:00 stderr F E1213 00:21:04.986171 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:05.585775428+00:00 stderr F E1213 00:21:05.585683 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, 
"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:05.783432753+00:00 stderr F E1213 00:21:05.783364 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:06.387274647+00:00 stderr F E1213 00:21:06.387188 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial 
tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:07.676531837+00:00 stderr F E1213 00:21:07.676409 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:08.345216891+00:00 
stderr F E1213 00:21:08.345148 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:10.243256010+00:00 stderr F E1213 00:21:10.242580 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:12.831970686+00:00 stderr F E1213 00:21:12.831904 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:13.466859658+00:00 stderr F E1213 00:21:13.466792 1 base_controller.go:268] 
NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:13.586422634+00:00 stderr F E1213 00:21:13.586339 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:14.343711589+00:00 stderr F E1213 00:21:14.343290 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:14.427511731+00:00 stderr F E1213 00:21:14.427436 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:15.369177831+00:00 stderr F E1213 00:21:15.368766 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection 
refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:49.335268604+00:00 stderr F I1213 00:21:49.334718 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-12-13T00:21:49.509276956+00:00 stderr F I1213 00:21:49.509211 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:51.973784391+00:00 stderr F I1213 00:21:51.973710 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-12-13T00:21:52.865965557+00:00 stderr F I1213 00:21:52.865827 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 ././@LongLink0000644000000000000000000000034100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-op0000644000175000017500000064223615117130647033110 0ustar zuulzuul2025-08-13T20:00:32.623453154+00:00 stderr F I0813 20:00:32.622897 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T20:00:32.623453154+00:00 stderr F I0813 20:00:32.623330 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}. 2025-08-13T20:00:32.625343068+00:00 stderr F I0813 20:00:32.623961 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:33.276056553+00:00 stderr F I0813 20:00:33.274072 1 builder.go:298] openshift-apiserver-operator version - 2025-08-13T20:00:36.573424133+00:00 stderr F I0813 20:00:36.572654 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:00:36.573424133+00:00 stderr F W0813 20:00:36.573276 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:36.573424133+00:00 stderr F W0813 20:00:36.573285 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.880697 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.883661 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.883892 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.884303 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.884422 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.886040 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:36.887505149+00:00 stderr F I0813 20:00:36.886058 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:36.904856823+00:00 stderr F I0813 20:00:36.903498 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:00:36.904856823+00:00 stderr F I0813 20:00:36.903553 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:00:36.923594798+00:00 stderr F I0813 20:00:36.920666 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:00:36.923594798+00:00 stderr F I0813 20:00:36.921349 1 leaderelection.go:250] attempting to acquire leader lease openshift-apiserver-operator/openshift-apiserver-operator-lock... 2025-08-13T20:00:36.989631781+00:00 stderr F I0813 20:00:36.986773 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:36.990036712+00:00 stderr F I0813 20:00:36.990004 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:37.010309200+00:00 stderr F I0813 20:00:37.010248 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:00:37.044255768+00:00 stderr F I0813 20:00:37.044199 1 leaderelection.go:260] successfully acquired lease openshift-apiserver-operator/openshift-apiserver-operator-lock 2025-08-13T20:00:37.064193987+00:00 stderr F I0813 20:00:37.046499 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"d9b35288-1c3d-4620-987e-0e2acf09bc76", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"29854", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-apiserver-operator-7c88c4c865-kn67m_a533f076-9102-4f1c-ac58-2cc3fe6b65c6 became leader 
2025-08-13T20:00:37.064279739+00:00 stderr F I0813 20:00:37.059666 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:00:37.151322491+00:00 stderr F I0813 20:00:37.151228 1 starter.go:133] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:00:37.203134919+00:00 stderr F I0813 20:00:37.201263 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", 
Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", 
"VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:00:37.642070444+00:00 stderr F I0813 20:00:37.637452 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T20:00:37.650333960+00:00 stderr F I0813 20:00:37.643338 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:00:37.659630705+00:00 stderr F I0813 20:00:37.650486 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:00:37.659732778+00:00 stderr F I0813 20:00:37.658070 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:00:37.659787410+00:00 stderr F I0813 20:00:37.658093 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:00:37.680132630+00:00 stderr F I0813 20:00:37.680081 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:00:37.680234053+00:00 stderr F I0813 20:00:37.680217 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-apiserver 2025-08-13T20:00:38.720224967+00:00 stderr F I0813 20:00:38.719249 1 request.go:697] Waited for 1.078839913s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver-operator/secrets?limit=500&resourceVersion=0 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738681 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738733 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAPIServerWorkloadController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738746 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738760 1 base_controller.go:67] 
Waiting for caches to sync for auditPolicyController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738772 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738827 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-apiserver 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.738905 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.739020 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.739036 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T20:00:38.743698556+00:00 stderr F I0813 20:00:38.739050 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T20:00:38.805124978+00:00 stderr F I0813 20:00:38.797689 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-08-13T20:00:38.805124978+00:00 stderr F I0813 20:00:38.798463 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T20:00:41.851229294+00:00 stderr F I0813 20:00:41.800206 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2025-08-13T20:00:42.094738887+00:00 stderr F I0813 20:00:42.094226 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2025-08-13T20:00:42.095003695+00:00 stderr F I0813 20:00:41.805143 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T20:00:42.095003695+00:00 stderr F I0813 20:00:42.094944 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 
2025-08-13T20:00:42.095003695+00:00 stderr F I0813 20:00:41.848831 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-08-13T20:00:42.095051716+00:00 stderr F I0813 20:00:42.095011 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-08-13T20:00:42.101372966+00:00 stderr F I0813 20:00:42.101290 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:00:42.101372966+00:00 stderr F I0813 20:00:42.101339 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:00:42.102589461+00:00 stderr F I0813 20:00:41.849214 1 base_controller.go:73] Caches are synced for OpenShiftAPIServerWorkloadController 2025-08-13T20:00:42.102589461+00:00 stderr F I0813 20:00:42.102575 1 base_controller.go:110] Starting #1 worker of OpenShiftAPIServerWorkloadController controller ... 2025-08-13T20:00:42.102870659+00:00 stderr F I0813 20:00:41.849228 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-08-13T20:00:42.102870659+00:00 stderr F I0813 20:00:42.102854 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T20:00:42.103007643+00:00 stderr F I0813 20:00:41.849236 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T20:00:42.103007643+00:00 stderr F I0813 20:00:42.102939 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 2025-08-13T20:00:42.103378783+00:00 stderr F I0813 20:00:41.849559 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T20:00:42.103378783+00:00 stderr F I0813 20:00:42.103367 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 
2025-08-13T20:00:42.103721293+00:00 stderr F I0813 20:00:42.103666 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:00:42.103721293+00:00 stderr F I0813 20:00:41.849576 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-08-13T20:00:42.103721293+00:00 stderr F I0813 20:00:42.103710 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:42.110133 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:41.849585 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:42.110192 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:41.849679 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:00:42.112283647+00:00 stderr F I0813 20:00:42.110352 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:41.849703 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.150683 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:41.961021 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.150763 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:41.961108 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.151139 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.021017 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.151487 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.032626 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-apiserver 2025-08-13T20:00:42.152647168+00:00 stderr F I0813 20:00:42.151738 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-apiserver controller ... 2025-08-13T20:00:42.203779096+00:00 stderr F I0813 20:00:42.202033 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-apiserver 2025-08-13T20:00:42.203779096+00:00 stderr F I0813 20:00:42.202077 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-apiserver controller ... 
2025-08-13T20:00:42.260567555+00:00 stderr F I0813 20:00:42.260259 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:00:42.299249098+00:00 stderr F I0813 20:00:42.298608 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:00:42.322060519+00:00 stderr F I0813 20:00:42.321963 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:00:42.327430152+00:00 stderr F I0813 20:00:42.323555 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:00:42.362673617+00:00 stderr F I0813 20:00:42.360270 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:00:42.362673617+00:00 stderr F I0813 20:00:42.360314 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:00:42.670889245+00:00 stderr F I0813 20:00:42.669476 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:00:42.743163146+00:00 stderr F I0813 20:00:42.742372 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T20:00:42.743163146+00:00 stderr F I0813 20:00:42.742670 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 
2025-08-13T20:00:45.524920664+00:00 stderr F I0813 20:00:45.508635 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServicesAvailable: PreconditionNotReady","reason":"APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:45.821627595+00:00 stderr F I0813 20:00:45.812347 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: PreconditionNotReady") 2025-08-13T20:01:00.051369613+00:00 stderr F I0813 20:00:59.999945 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.999559696 +0000 UTC))" 2025-08-13T20:01:00.051369613+00:00 stderr F I0813 20:01:00.051334 1 tlsconfig.go:178] "Loaded client CA" index=1 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.051282991 +0000 UTC))" 2025-08-13T20:01:00.051451885+00:00 stderr F I0813 20:01:00.051370 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.051347462 +0000 UTC))" 2025-08-13T20:01:00.051451885+00:00 stderr F I0813 20:01:00.051403 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.051380133 +0000 UTC))" 2025-08-13T20:01:00.051451885+00:00 stderr F I0813 20:01:00.051429 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.051415584 +0000 UTC))" 2025-08-13T20:01:00.051566589+00:00 stderr F I0813 20:01:00.051481 1 tlsconfig.go:178] "Loaded 
client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.051435195 +0000 UTC))" 2025-08-13T20:01:00.051680422+00:00 stderr F I0813 20:01:00.051592 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.051564049 +0000 UTC))" 2025-08-13T20:01:00.051680422+00:00 stderr F I0813 20:01:00.051651 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.05162805 +0000 UTC))" 2025-08-13T20:01:00.051680422+00:00 stderr F I0813 20:01:00.051670 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.051659281 +0000 UTC))" 2025-08-13T20:01:00.051711783+00:00 stderr F I0813 20:01:00.051697 1 
tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.051680092 +0000 UTC))" 2025-08-13T20:01:00.051812326+00:00 stderr F I0813 20:01:00.051723 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.051707923 +0000 UTC))" 2025-08-13T20:01:00.052406013+00:00 stderr F I0813 20:01:00.052327 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:07 +0000 UTC to 2027-08-13 20:00:08 +0000 UTC (now=2025-08-13 20:01:00.05230566 +0000 UTC))" 2025-08-13T20:01:00.052915467+00:00 stderr F I0813 20:01:00.052691 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115236\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115234\" (2025-08-13 19:00:33 +0000 UTC to 2026-08-13 19:00:33 +0000 UTC (now=2025-08-13 20:01:00.05265489 +0000 UTC))" 2025-08-13T20:01:00.075203073+00:00 stderr F I0813 20:01:00.074877 1 status_controller.go:218] 
clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:04.674153274+00:00 stderr F I0813 20:01:04.672724 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" 2025-08-13T20:02:16.819652382+00:00 stderr F I0813 20:02:16.817488 1 status_controller.go:218] 
clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:02:17.836711546+00:00 stderr F I0813 20:02:17.836540 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)"
2025-08-13T20:02:31.903143649+00:00 stderr F E0813 20:02:31.902389 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.915566814+00:00 stderr F E0813 20:02:31.915523 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.929668616+00:00 stderr F E0813 20:02:31.929568 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.953661520+00:00 stderr F E0813 20:02:31.953490 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.998468599+00:00 stderr F E0813 20:02:31.998309 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.094028095+00:00 stderr F E0813 20:02:32.091120 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.123160126+00:00 stderr F E0813 20:02:32.123058 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.132639056+00:00 stderr F E0813 20:02:32.132542 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.149736984+00:00 stderr F E0813 20:02:32.149581 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.174235313+00:00 stderr F E0813 20:02:32.174067 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.218270729+00:00 stderr F E0813 20:02:32.218136 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.300349550+00:00 stderr F E0813 20:02:32.300297 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.500620033+00:00 stderr F E0813 20:02:32.500564 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.700608449+00:00 stderr F E0813 20:02:32.700509 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.901161410+00:00 stderr F E0813 20:02:32.901109 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:33.225520973+00:00 stderr F E0813 20:02:33.225466 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:33.345324831+00:00 stderr F E0813 20:02:33.345274 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:33.888991740+00:00 stderr F E0813 20:02:33.888337 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:34.632116838+00:00 stderr F E0813 20:02:34.632014 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:35.172962687+00:00 stderr F E0813 20:02:35.172879 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.197114260+00:00 stderr F E0813 20:02:37.197026 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.737456145+00:00 stderr F E0813 20:02:37.737335 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:42.123017143+00:00 stderr F E0813 20:02:42.122325 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:42.155753357+00:00 stderr F E0813 20:02:42.155690 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:42.163199439+00:00 stderr F E0813 20:02:42.162939 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:42.163199439+00:00 stderr F E0813 20:02:42.163106 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:42.174492282+00:00 stderr F E0813 20:02:42.174391 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:42.179982768+00:00 stderr F E0813 20:02:42.179920 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:42.322272097+00:00 stderr F E0813 20:02:42.322144 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:42.521540853+00:00 stderr F E0813 20:02:42.521307 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:42.922004457+00:00 stderr F E0813 20:02:42.921908 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:43.124500944+00:00 stderr F E0813 20:02:43.124391 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:43.323042348+00:00 stderr F E0813 20:02:43.322908 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:43.521237212+00:00 stderr F E0813 20:02:43.521044 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:43.920910073+00:00 stderr F E0813 20:02:43.920768 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:44.323879629+00:00 stderr F E0813 20:02:44.323719 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:44.521968140+00:00 stderr F E0813 20:02:44.521871 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.124535789+00:00 stderr F E0813 20:02:45.124346 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:45.319572743+00:00 stderr F E0813 20:02:45.319381 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:45.924420698+00:00 stderr F E0813 20:02:45.924237 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:46.523057565+00:00 stderr F E0813 20:02:46.522935 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:46.720616821+00:00 stderr F E0813 20:02:46.720477 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:47.336036357+00:00 stderr F E0813 20:02:47.335935 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:47.994396188+00:00 stderr F E0813 20:02:47.993511 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:49.283347107+00:00 stderr F E0813 20:02:49.283216 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:49.283472481+00:00 stderr F E0813 20:02:49.283445 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:51.855959557+00:00 stderr F E0813 20:02:51.855716 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:52.123110748+00:00 stderr F E0813 20:02:52.122730 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:52.567385021+00:00 stderr F E0813 20:02:52.567277 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:54.409958606+00:00 stderr F E0813 20:02:54.409592 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:56.099987657+00:00 stderr F E0813 20:02:56.099735 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:56.992247701+00:00 stderr F E0813 20:02:56.991649 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:02.124640714+00:00 stderr F E0813 20:03:02.123737 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:03.807227862+00:00 stderr F E0813 20:03:03.807117 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:04.652390122+00:00 stderr F E0813 20:03:04.652187 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:07.242069628+00:00 stderr F E0813 20:03:07.241944 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:12.125445946+00:00 stderr F E0813 20:03:12.124657 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:13.051087142+00:00 stderr F E0813 20:03:13.050957 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:22.132117281+00:00 stderr F E0813 20:03:22.131173 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:25.136296422+00:00 stderr F E0813 20:03:25.136123 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:27.738604899+00:00 stderr F E0813 20:03:27.738450 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:32.124766154+00:00 stderr F E0813 20:03:32.124621 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:42.126310690+00:00 stderr F E0813 20:03:42.125532 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:42.154910106+00:00 stderr F E0813 20:03:42.154730 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:42.161279778+00:00 stderr F E0813 20:03:42.161177 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:42.205909101+00:00 stderr F I0813 20:03:42.205800 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:03:42.207388163+00:00 stderr F E0813 20:03:42.207359 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:42.213896209+00:00 stderr F I0813 20:03:42.213740 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:03:42.215527656+00:00 stderr F E0813 20:03:42.215513 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:42.227506847+00:00 stderr F I0813 20:03:42.227401 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:03:42.229187455+00:00 stderr F E0813 20:03:42.229161 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:42.250562135+00:00 stderr F I0813 20:03:42.250442 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.252163971+00:00 stderr F E0813 20:03:42.252100 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.293449997+00:00 stderr F I0813 20:03:42.293392 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.295256739+00:00 stderr F E0813 20:03:42.295181 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put 
"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.377169276+00:00 stderr F I0813 20:03:42.377103 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.379393929+00:00 stderr F E0813 20:03:42.379299 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.541071301+00:00 stderr F I0813 20:03:42.540941 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds 
pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.545997002+00:00 stderr F E0813 20:03:42.545764 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.866756742+00:00 stderr F I0813 20:03:42.866645 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:42.867915695+00:00 stderr F E0813 20:03:42.867756 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.509672403+00:00 stderr F I0813 20:03:43.509555 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:43Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:43.511247028+00:00 stderr F E0813 20:03:43.511182 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put 
"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:44.793050554+00:00 stderr F I0813 20:03:44.792933 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:44Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:44.795308088+00:00 stderr F E0813 20:03:44.795247 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:47.357068477+00:00 stderr F I0813 20:03:47.356945 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:47Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds 
pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:47.359056754+00:00 stderr F E0813 20:03:47.358985 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:52.133416455+00:00 stderr F E0813 20:03:52.132768 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:52.493938720+00:00 stderr F I0813 20:03:52.486096 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:03:52Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:03:52.493938720+00:00 stderr F E0813 20:03:52.488188 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:54.024984158+00:00 stderr F E0813 20:03:54.024284 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:56.105893981+00:00 stderr F E0813 20:03:56.104408 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.137089403+00:00 stderr F E0813 20:04:02.135923 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.739498108+00:00 stderr F I0813 20:04:02.734664 1 status_controller.go:218] 
clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:04:02Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:04:02.740609460+00:00 stderr F E0813 20:04:02.740483 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.102662429+00:00 stderr F E0813 20:04:06.101089 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:08.723089732+00:00 stderr F E0813 20:04:08.722924 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, 
"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:12.136377423+00:00 stderr F E0813 20:04:12.135667 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:22.140944498+00:00 stderr F E0813 20:04:22.139880 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.242015481+00:00 stderr F I0813 20:04:23.241952 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:04:23Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds 
pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:04:23.246984593+00:00 stderr F E0813 20:04:23.246831 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:32.136452333+00:00 stderr F E0813 20:04:32.135741 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:42.157979619+00:00 stderr F E0813 20:04:42.157223 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:42.178114396+00:00 stderr F E0813 20:04:42.177968 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:42.183583663+00:00 stderr F E0813 20:04:42.183519 1 
base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:42.205522291+00:00 stderr F I0813 20:04:42.205452 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:04:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available 
on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:04:42.207164318+00:00 stderr F E0813 20:04:42.207094 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:52.143327400+00:00 stderr F E0813 20:04:52.142325 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:56.135917062+00:00 stderr F E0813 20:04:56.134039 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:02.150079333+00:00 stderr F E0813 20:05:02.149400 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:04.226529326+00:00 stderr F I0813 20:05:04.223354 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:05:04Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 
containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:04.226756632+00:00 stderr F E0813 20:05:04.226667 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:12.436760004+00:00 stderr F E0813 20:05:12.436000 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.982682525+00:00 stderr F E0813 20:05:15.982157 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:42.214735049+00:00 stderr F I0813 20:05:42.213508 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:05:42Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are 
unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:42.275706025+00:00 stderr F I0813 20:05:42.275473 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-67cbf64bc9-jjfds pod)") 2025-08-13T20:05:57.285378861+00:00 stderr F I0813 20:05:57.284355 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:05:57.523034657+00:00 stderr F I0813 20:05:57.519246 1 core.go:359] ConfigMap "openshift-apiserver/image-import-ca" changes: {"data":{"image-registry.openshift-image-registry.svc..5000":"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n","image-registry.openshift-image-registry.svc.cluster.local..5000":"-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIIbihq1OwPREcwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNTA4MTMxOTU5MzdaFw0yNzEwMTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC2SrcqXyamkN5ClE8zJybmMdgjr1+CqvK5\njAzS7OnXolDqvqp9kNvBU1VOSmm/Qym3Tsze6Ucw5fBMKt4PMNPaKMOE471qBrgG\n4jT3Tv3mI/YxRNBOb10/4xSDuBdqMshz/OYI3WKqkv93p+zNAroVHJa2h3PHmvSN\nfyOEv14ACktUNccUXPlqWF3Uz9wj8FpFalj2zCQ4Yd8wi4zdLURpjYTE+MSkev2G\nBmiAPuDyq+QKkF6OmFHYUGlrIrmGjN29lTTaG7ycdF8wL6/5z7ZVjgQ7C335NQRE\nZgOuX6LQlreriUfVQwMjTZtHcJjR80JX6jdnoYungAu7Ga6UbY3rAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTI9D4l\njQqWVvbaXItjvDhtvYTKpDAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAUQ9+s0Z9Zf639n7maG40/i4BWOJ3B6v58ACX\nnELfIMtGF30+yq9pKFPZ8B3cQOLRTuDwETotVjhZ9SSYgot5qFKHRrjzxns29+ty\nQymqPySlQp4SPs9UT5RpURJT5H9OjSaA3IsYHDoBuiXOf7YIepyPwOLI9L5kjmys\ns1LbKHJCsG9k6g8dAdg8OADPSJo/jgZ5vG0z6IwwnNGjRWhATKMoCmaIbj3vaO49\nwm9IQH6Uus92Rw5aDN8rmVfizaJ5Lg91TJAibz9CEGX/5UfUohJbGSbx/zUEphsn\nUnmYVHHHANesur55NcOCEVNBqrV2AP59z2LgTdbNaBYTTT1nSw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDUTCCAjmgAwIBAgIICIHOnq3eEiQwDQYJKoZIhvcNAQELBQAwNjE0MDIGA1UE\nAwwrb3BlbnNoaWZ0LXNlcnZpY2Utc2VydmluZy1zaWduZXJAMTcxOTQwNjAyNjAe\nFw0yNDA2MjYxMjQ3MDVaFw0yNjA5MTIxOTU5MzhaMDYxNDAyBgNVBAMMK29wZW5z\naGlmdC1zZXJ2aWNlLXNlcnZpbmctc2lnbmVyQDE3MTk0MDYwMjYwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCxoU+/aEqL1riTJ24B4W1MOmHSscVIUnSQ\nyo76+YwjI4kwEtKZT90wMjNO0XFnQ1TbvLlOOpLOhGoKYRl4iuUptuuHmrpuO2h+\nTZfHzF8y5hLnYAQf+5UA0WcvyVvWU6pfEOQBc6st4FVSFeVe8UGcr2M5bBIZ6AIr\nJnLsUH1kUBAY1eMGXvkonkzvZ082MfhyEYtSzSf9vE1Zp8Lgi5mHXi8hG7eGI1W7\nsVu/j0c6nMafnq/1ePXSejoc4pUwGx9q3nnr97hGEV6cTegkOfwZaBGw8QU5CQBM\nnkf1Z9tzH4gJMLJnsGnhx4t8h2M3CPDOYe9/1WJsynTBXgRtmlVdAgMBAAGjYzBh\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSHmdGn\nJknQSxTvpkMa8GYETWnG0TAfBgNVHSMEGDAWgBTI9D4ljQqWVvbaXItjvDhtvYTK\npDANBgkqhkiG9w0BAQsFAAOCAQEAsd7bU+1Q+dFpqmoVa4MOv65kMyXfZnJtcX09\nsHldKnCG6NrB0edChmIFOLUejZZ+4JH2olGNxkIeXfTqygv7lw2TWVF13yGavnTY\ngzj6UWVu3XK4Vkt01EgueHEbJ5ei1uiW5b/xzga54nDfLXdQvTeemwpUDMB+95/t\nuCpFX7+ZIvLazzJ/yKtFDUokHy94hoHuEe2VdAkOUbAP3Z3QbA8uMu94wjecFTup\nsf0gAMIVQFpXuwH1/DQM/831Rc/QCb8/3p8sJ57gMojE0uiwYW3hF27/nDV5VUSa\nM2hZHYoOUW6os5t7FH/aXdAfGmwrS1meRMZ9AvUUHhuFkpdfjQ==\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:05:57.525101056+00:00 stderr F I0813 20:05:57.523954 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/image-import-ca -n openshift-apiserver: 2025-08-13T20:05:57.525101056+00:00 stderr F cause by changes in data.image-registry.openshift-image-registry.svc..5000,data.image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:05:57.677947233+00:00 stderr F I0813 20:05:57.676320 1 apps.go:154] Deployment 
"openshift-apiserver/apiserver" changes: {"metadata":{"annotations":{"operator.openshift.io/dep-openshift-apiserver.image-import-ca.configmap":"ZjlHVA==","operator.openshift.io/spec-hash":"7538696d7771eb6997d5f9627023b75abea5bcd941bd000eddd83452c44c117a"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"operator.openshift.io/dep-openshift-apiserver.image-import-ca.configmap":"ZjlHVA=="}},"spec":{"containers":[{"args":["if [ -s /var/run/configmaps/trusted-ca-bundle/tls-ca-bundle.pem ]; then\n echo \"Copying system trust bundle\"\n cp -f /var/run/configmaps/trusted-ca-bundle/tls-ca-bundle.pem /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec openshift-apiserver start --config=/var/run/configmaps/config/config.yaml -v=2\n"],"command":["/bin/bash","-ec"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":10},"name":"openshift-apiserver","ports":[{"containerPort":8443}],"readinessProbe":{"failureThreshold":1,"httpGet":{"path":"readyz","port":8443,"scheme":"HTTPS"},"periodSeconds":5,"successThreshold":1,"timeoutSeconds":10},"resources":{"requests":{"cpu":"100m","memory":"200Mi"}},"securityContext":{"privileged":true,"readOnlyRootFilesystem":false,"runAsUser":0},"startupProbe":{"failureThreshold":30,"httpGet":{"path":"healthz","port":8443,"scheme":"HTTPS"},"periodSeconds":5,"successThreshold":1,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/lib/kubelet/","name":"node-pullsecrets","readOnly":true},{"mountPath":"/var/run/con
figmaps/config","name":"config"},{"mountPath":"/var/run/configmaps/audit","name":"audit"},{"mountPath":"/var/run/secrets/etcd-client","name":"etcd-client"},{"mountPath":"/var/run/configmaps/etcd-serving-ca","name":"etcd-serving-ca"},{"mountPath":"/var/run/configmaps/image-import-ca","name":"image-import-ca"},{"mountPath":"/var/run/configmaps/trusted-ca-bundle","name":"trusted-ca-bundle"},{"mountPath":"/var/run/secrets/serving-cert","name":"serving-cert"},{"mountPath":"/var/run/secrets/encryption-config","name":"encryption-config"},{"mountPath":"/var/log/openshift-apiserver","name":"audit-dir"}]},{"args":["--listen","0.0.0.0:17698","--namespace","$(POD_NAMESPACE)","--v","2"],"command":["cluster-kube-apiserver-operator","check-endpoints"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","imagePullPolicy":"IfNotPresent","name":"openshift-apiserver-check-endpoints","ports":[{"containerPort":17698,"name":"check-endpoints","protocol":"TCP"}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError"}],"dnsPolicy":null,"initContainers":[{"command":["sh","-c","chmod 0700 /var/log/openshift-apiserver \u0026\u0026 touch /var/log/openshift-apiserver/audit.log \u0026\u0026 chmod 0600 
/var/log/openshift-apiserver/*"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78","imagePullPolicy":"IfNotPresent","name":"fix-audit-permissions","resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"securityContext":{"privileged":true,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/log/openshift-apiserver","name":"audit-dir"}]}],"restartPolicy":null,"schedulerName":null,"securityContext":null,"serviceAccount":null,"volumes":[{"hostPath":{"path":"/var/lib/kubelet/","type":"Directory"},"name":"node-pullsecrets"},{"configMap":{"name":"config"},"name":"config"},{"configMap":{"name":"audit-1"},"name":"audit"},{"name":"etcd-client","secret":{"defaultMode":384,"secretName":"etcd-client"}},{"configMap":{"name":"etcd-serving-ca"},"name":"etcd-serving-ca"},{"configMap":{"name":"image-import-ca","optional":true},"name":"image-import-ca"},{"name":"serving-cert","secret":{"defaultMode":384,"secretName":"serving-cert"}},{"configMap":{"items":[{"key":"ca-bundle.crt","path":"tls-ca-bundle.pem"}],"name":"trusted-ca-bundle","optional":true},"name":"trusted-ca-bundle"},{"name":"encryption-config","secret":{"defaultMode":384,"optional":true,"secretName":"encryption-config-1"}},{"hostPath":{"path":"/var/log/openshift-apiserver"},"name":"audit-dir"}]}}}} 2025-08-13T20:05:57.712527173+00:00 stderr F I0813 20:05:57.710377 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/apiserver -n openshift-apiserver because it changed 2025-08-13T20:05:57.792728349+00:00 stderr F I0813 20:05:57.792468 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31903 
2025-08-13T20:06:06.054104191+00:00 stderr F I0813 20:06:06.052307 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:06.357041686+00:00 stderr F I0813 20:06:06.356939 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:06.445729146+00:00 stderr F I0813 20:06:06.445316 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31909 2025-08-13T20:06:06.477306380+00:00 stderr F I0813 20:06:06.477196 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31909 2025-08-13T20:06:11.032088889+00:00 stderr F I0813 20:06:11.031411 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:11.299406344+00:00 stderr F I0813 20:06:11.299320 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:13.285884901+00:00 stderr F I0813 20:06:13.284224 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:14.486063610+00:00 stderr F I0813 20:06:14.485923 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:15.055726412+00:00 stderr F I0813 20:06:15.053898 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:15.459610248+00:00 stderr F I0813 20:06:15.459453 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:17.607433323+00:00 stderr F I0813 20:06:17.607118 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.355976544+00:00 stderr F I0813 20:06:19.354945 1 
reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.425608978+00:00 stderr F I0813 20:06:19.425257 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.623244878+00:00 stderr F I0813 20:06:19.623160 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31984 2025-08-13T20:06:19.669516183+00:00 stderr F I0813 20:06:19.668198 1 helpers.go:184] lister was stale at resourceVersion=30641, live get showed resourceVersion=31984 2025-08-13T20:06:19.768531358+00:00 stderr F I0813 20:06:19.768080 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:20.567335762+00:00 stderr F I0813 20:06:20.566920 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:21.953215278+00:00 stderr F I0813 20:06:21.952766 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:22.176324287+00:00 stderr F I0813 20:06:22.176251 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go/dynamic/dynamicinformer/informer.go:108 2025-08-13T20:06:23.357625934+00:00 stderr F I0813 20:06:23.357518 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:23.807047764+00:00 stderr F I0813 20:06:23.806931 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.827925682+00:00 stderr F E0813 20:06:23.827635 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:23.829383633+00:00 stderr F I0813 20:06:23.829291 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:23.834485579+00:00 stderr F I0813 20:06:23.834379 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.842751926+00:00 stderr F E0813 20:06:23.842631 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:23.855149561+00:00 stderr F I0813 20:06:23.855012 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.862187083+00:00 stderr F E0813 20:06:23.862066 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: 
Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:23.884900333+00:00 stderr F I0813 20:06:23.884209 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.891101781+00:00 stderr F E0813 20:06:23.891048 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:23.932631530+00:00 stderr F I0813 20:06:23.932329 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in 
apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:23.940998389+00:00 stderr F E0813 20:06:23.940698 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:24.024163381+00:00 stderr F I0813 20:06:24.023531 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: 
PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:24.031439149+00:00 stderr F E0813 20:06:24.031375 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:24.193903412+00:00 stderr F I0813 20:06:24.193399 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:24.201835099+00:00 stderr F E0813 20:06:24.201172 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: 
Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:24.524439637+00:00 stderr F I0813 20:06:24.522204 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:24.536657377+00:00 stderr F E0813 20:06:24.536589 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:24.971001895+00:00 stderr F I0813 20:06:24.970928 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:25.177934641+00:00 stderr F I0813 20:06:25.177836 1 status_controller.go:218] clusteroperator/openshift-apiserver diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:25.185500097+00:00 stderr F E0813 20:06:25.185333 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:26.476481046+00:00 stderr F I0813 20:06:26.476125 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no 
apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:26.492605468+00:00 stderr F E0813 20:06:26.492414 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:29.054767127+00:00 stderr F I0813 20:06:29.053969 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:29.065161865+00:00 stderr F E0813 20:06:29.063297 1 
base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:06:32.761019095+00:00 stderr F I0813 20:06:32.749752 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:33.373923428+00:00 stderr F I0813 20:06:33.370632 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:34.188020078+00:00 stderr F I0813 20:06:34.186083 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:06:34.201345500+00:00 stderr F E0813 20:06:34.201140 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:06:34.629962859+00:00 stderr F I0813 20:06:34.629748 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
2025-08-13T20:06:35.628992572+00:00 stderr F I0813 20:06:35.623979 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132
2025-08-13T20:06:37.573096771+00:00 stderr F I0813 20:06:37.559256 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:39.863063066+00:00 stderr F I0813 20:06:39.854404 1 reflector.go:351] Caches populated for *v1.OpenShiftAPIServer from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125
2025-08-13T20:06:40.874002091+00:00 stderr F I0813 20:06:40.873474 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:41.517049087+00:00 stderr F I0813 20:06:41.516981 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141
2025-08-13T20:06:42.215088761+00:00 stderr F I0813 20:06:42.212846 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:06:42.230288416+00:00 stderr F E0813 20:06:42.228844 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:06:42.403380339+00:00 stderr F I0813 20:06:42.402470 1 reflector.go:351] Caches populated for *v1.Project from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:42.618395174+00:00 stderr F I0813 20:06:42.618271 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:43.381188904+00:00 stderr F I0813 20:06:43.380529 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:43.702493616+00:00 stderr F I0813 20:06:43.702335 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:44.447973600+00:00 stderr F I0813 20:06:44.446315 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:06:44.709153819+00:00 stderr F E0813 20:06:44.707647 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:06:45.663485390+00:00 stderr F I0813 20:06:45.663401 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:46.013983068+00:00 stderr F I0813 20:06:46.013676 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:46.630691410+00:00 stderr F I0813 20:06:46.629730 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:47.052339409+00:00 stderr F I0813 20:06:47.051586 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
2025-08-13T20:06:48.619540462+00:00 stderr F I0813 20:06:48.619377 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141
2025-08-13T20:06:49.788030533+00:00 stderr F I0813 20:06:49.784534 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:50.214571492+00:00 stderr F I0813 20:06:50.214240 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:52.329369285+00:00 stderr F I0813 20:06:52.329004 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:52.581238027+00:00 stderr F I0813 20:06:52.581124 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:53.108701330+00:00 stderr F I0813 20:06:53.106313 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:53.850514579+00:00 stderr F I0813 20:06:53.850039 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:53.852128345+00:00 stderr F I0813 20:06:53.851354 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:06:54.563951803+00:00 stderr F I0813 20:06:54.563666 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded changed from True to False ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)")
2025-08-13T20:06:57.320102145+00:00 stderr F I0813 20:06:57.319606 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:07:01.233182636+00:00 stderr F I0813 20:07:01.232496 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:07:17.890728631+00:00 stderr F I0813 20:07:17.881267 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:07:18.637895843+00:00 stderr F I0813 20:07:18.635400 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-67cbf64bc9-jjfds pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()"
2025-08-13T20:07:21.193390350+00:00 stderr F I0813 20:07:21.178719 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:07:21.265029694+00:00 stderr F I0813 20:07:21.263587 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)"
2025-08-13T20:07:24.311204460+00:00 stderr F I0813 20:07:24.310365 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:07:24.368491813+00:00 stderr F I0813 20:07:24.367473 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)"
2025-08-13T20:07:27.080709085+00:00 stderr F I0813 20:07:27.078242 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7fc54b8dd7-d2bhp pod)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:07:27.121975998+00:00 stderr F I0813 20:07:27.118623 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7fc54b8dd7-d2bhp pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7fc54b8dd7-d2bhp pod)"
2025-08-13T20:07:32.597969319+00:00 stderr F I0813 20:07:32.596873 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:07:32.667490962+00:00 stderr F I0813 20:07:32.658855 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7fc54b8dd7-d2bhp pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()"
2025-08-13T20:07:32.750329137+00:00 stderr F E0813 20:07:32.750225 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https"
2025-08-13T20:07:32.750329137+00:00 stderr F apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https"
2025-08-13T20:07:32.759322405+00:00 stderr F I0813 20:07:32.752724 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServerDeployment_NoPod::APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:07:32.792450235+00:00 stderr F I0813 20:07:32.792049 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\""
2025-08-13T20:07:34.176923719+00:00 stderr F I0813 20:07:34.170758 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:45Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.","reason":"APIServerDeployment_NoPod","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:07:34.195911053+00:00 stderr F I0813 20:07:34.195827 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node."
2025-08-13T20:07:36.311843249+00:00 stderr F I0813 20:07:36.311067 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:53Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:07:36Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:07:36.346835552+00:00 stderr F I0813 20:07:36.346490 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" to "All is well",Available changed from False to True ("All is well")
2025-08-13T20:08:32.179388330+00:00 stderr F E0813 20:08:32.178583 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.189343265+00:00 stderr F E0813 20:08:32.189232 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.203144140+00:00 stderr F E0813 20:08:32.203070 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.228753714+00:00 stderr F E0813 20:08:32.228697 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.275595507+00:00 stderr F E0813 20:08:32.275537 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.289332641+00:00 stderr F E0813 20:08:32.289208 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.360314446+00:00 stderr F E0813 20:08:32.360136 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.527515109+00:00 stderr F E0813 20:08:32.527362 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.856120041+00:00 stderr F E0813 20:08:32.855691 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:33.501560646+00:00 stderr F E0813 20:08:33.501472 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:33.881626283+00:00 stderr F E0813 20:08:33.881532 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:34.787856165+00:00 stderr F E0813 20:08:34.786299 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:35.683511705+00:00 stderr F E0813 20:08:35.682585 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:37.352861596+00:00 stderr F E0813 20:08:37.352634 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:37.486967511+00:00 stderr F E0813 20:08:37.486760 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:39.283191570+00:00 stderr F E0813 20:08:39.282318 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.096391326+00:00 stderr F E0813 20:08:41.095863 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.181862167+00:00 stderr F E0813 20:08:42.179122 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.207859713+00:00 stderr F E0813 20:08:42.207708 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.208449180+00:00 stderr F E0813 20:08:42.207955 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.229404461+00:00 stderr F E0813 20:08:42.229189 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.229404461+00:00 stderr F E0813 20:08:42.229343 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:42.268871782+00:00 stderr F E0813 20:08:42.267704 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:42.382185411+00:00 stderr F E0813 20:08:42.381628 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.766076528+00:00 stderr F E0813 20:08:42.766028 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.891857444+00:00 stderr F E0813 20:08:42.889247 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.973855744+00:00 stderr F E0813 20:08:42.971113 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.367543341+00:00 stderr F E0813 20:08:43.367458 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.570761568+00:00 stderr F E0813 20:08:43.570658 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:43.769219928+00:00 stderr F E0813 20:08:43.769164 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:44.368482849+00:00 stderr F E0813 20:08:44.366601 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:44.576396050+00:00 stderr F E0813 20:08:44.572921 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:44.687210467+00:00 stderr F E0813 20:08:44.687127 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:45.170720400+00:00 stderr F E0813 20:08:45.170091 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get
"https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:45.367202854+00:00 stderr F E0813 20:08:45.366522 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.970722057+00:00 stderr F E0813 20:08:45.970545 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" 
(string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.481114281+00:00 stderr F E0813 20:08:46.481039 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.570690708+00:00 stderr F E0813 20:08:46.570554 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:08:46.766928924+00:00 stderr F E0813 20:08:46.766834 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.389880155+00:00 stderr F E0813 20:08:47.387262 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:48.059283578+00:00 stderr F E0813 20:08:48.059137 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:48.287271614+00:00 stderr F E0813 20:08:48.287221 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.328361563+00:00 stderr F E0813 20:08:49.328167 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.350525429+00:00 stderr F E0813 20:08:49.350239 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:50.082487095+00:00 stderr F E0813 20:08:50.081155 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.883630395+00:00 stderr F E0813 20:08:51.883481 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.921413918+00:00 stderr F E0813 20:08:51.921209 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:52.180256249+00:00 stderr F E0813 20:08:52.180202 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.684069314+00:00 stderr F E0813 20:08:53.682653 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.453376391+00:00 stderr F E0813 20:08:54.451082 1 base_controller.go:268] NamespaceFinalizerController_openshift-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.419142742+00:00 stderr F E0813 20:08:56.418869 1 leaderelection.go:332] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get 
"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.052167092+00:00 stderr F E0813 20:08:57.052043 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["v3.11.0/openshift-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "v3.11.0/openshift-apiserver/pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-apiserver/poddisruptionbudgets/openshift-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:09:29.824090490+00:00 stderr F I0813 20:09:29.822973 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:31.240366727+00:00 stderr F I0813 20:09:31.239942 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:36.295474821+00:00 stderr F I0813 20:09:36.294940 1 reflector.go:351] Caches populated for *v1.Endpoints from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:36.472729763+00:00 stderr F I0813 20:09:36.472581 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.011372797+00:00 stderr F I0813 20:09:39.011140 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.190142743+00:00 stderr F I0813 20:09:39.189974 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:39.949937737+00:00 stderr F I0813 20:09:39.949751 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:40.632052473+00:00 stderr F I0813 20:09:40.631458 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:42.697111970+00:00 stderr F I0813 20:09:42.696118 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go/dynamic/dynamicinformer/informer.go:108 2025-08-13T20:09:44.948264682+00:00 stderr F I0813 20:09:44.947880 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:45.090233883+00:00 stderr F I0813 20:09:45.090158 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:45.949887510+00:00 stderr F I0813 20:09:45.949400 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:46.360890224+00:00 stderr F I0813 20:09:46.360329 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:48.613127477+00:00 stderr F I0813 20:09:48.612972 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration 
from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T20:09:48.923736133+00:00 stderr F I0813 20:09:48.922579 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:49.794622532+00:00 stderr F I0813 20:09:49.793994 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:50.355175894+00:00 stderr F I0813 20:09:50.352768 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:50.502105847+00:00 stderr F I0813 20:09:50.500868 1 reflector.go:351] Caches populated for *v1.OpenShiftAPIServer from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:52.014953081+00:00 stderr F I0813 20:09:52.013366 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:52.469163333+00:00 stderr F I0813 20:09:52.468239 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:09:52.841878240+00:00 stderr F I0813 20:09:52.840162 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:09:53.419225933+00:00 stderr F I0813 20:09:53.416296 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:54.028120271+00:00 stderr F I0813 20:09:54.026244 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:54.472161952+00:00 stderr F I0813 20:09:54.471226 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:54.804352795+00:00 stderr F I0813 
20:09:54.803583 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:55.208708598+00:00 stderr F I0813 20:09:55.208626 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:56.289377132+00:00 stderr F I0813 20:09:56.288969 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:57.105866032+00:00 stderr F I0813 20:09:57.105686 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:05.900208772+00:00 stderr F I0813 20:10:05.899418 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:12.147163668+00:00 stderr F I0813 20:10:12.146472 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:15.256298709+00:00 stderr F I0813 20:10:15.242093 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:20.755480033+00:00 stderr F I0813 20:10:20.754899 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:21.980853556+00:00 stderr F I0813 20:10:21.980244 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:23.527055176+00:00 stderr F I0813 20:10:23.526690 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:23.859439076+00:00 stderr F I0813 20:10:23.859356 1 reflector.go:351] Caches populated for *v1.Project from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 
2025-08-13T20:10:24.663610433+00:00 stderr F I0813 20:10:24.662181 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:10:26.217124734+00:00 stderr F I0813 20:10:26.216375 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:27.979341987+00:00 stderr F I0813 20:10:27.979198 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:29.016644348+00:00 stderr F I0813 20:10:29.016063 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:33.261888743+00:00 stderr F I0813 20:10:33.261552 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:34.479462892+00:00 stderr F I0813 20:10:34.478962 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:34.698461010+00:00 stderr F I0813 20:10:34.697436 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:36.213468297+00:00 stderr F I0813 20:10:36.213066 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:37.684223695+00:00 stderr F I0813 20:10:37.684042 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:11:00.169609486+00:00 stderr F I0813 20:11:00.165362 1 request.go:697] Waited for 1.001815503s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/route.openshift.io/v1 2025-08-13T20:21:41.161724381+00:00 stderr F I0813 20:21:41.160393 1 
request.go:697] Waited for 1.000116633s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/template.openshift.io/v1 2025-08-13T20:42:36.411058991+00:00 stderr F I0813 20:42:36.401963 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.415098707+00:00 stderr F I0813 20:42:36.414695 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421596965+00:00 stderr F I0813 20:42:36.416342 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421596965+00:00 stderr F I0813 20:42:36.419879 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421596965+00:00 stderr F I0813 20:42:36.421094 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.441999143+00:00 stderr F I0813 20:42:36.436903 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454961217+00:00 stderr F I0813 20:42:36.452746 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464151 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464389 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464482 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464527 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464587 1 streamwatcher.go:111] Unexpected EOF 
during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.464646 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.465696 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.465866 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466178 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466359 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466497 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466640 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466733 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466873 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.466952 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.467003 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469843446+00:00 stderr F I0813 20:42:36.467068 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 
20:42:36.483443 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.483970 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.484065 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.484189 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.486676531+00:00 stderr F I0813 20:42:36.484396 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.508609013+00:00 stderr F I0813 20:42:36.507478 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.547376721+00:00 stderr F I0813 20:42:36.547282 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.563360252+00:00 stderr F I0813 20:42:36.563269 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.563690941+00:00 stderr F I0813 20:42:36.563595 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.571200458+00:00 stderr F I0813 20:42:36.571058 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.579639761+00:00 stderr F I0813 20:42:36.579547 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.579848497+00:00 stderr F I0813 20:42:36.579727 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.580176156+00:00 stderr F I0813 20:42:36.580117 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.585343085+00:00 stderr F I0813 20:42:36.584706 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.585343085+00:00 stderr F I0813 20:42:36.585101 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.601554873+00:00 stderr F I0813 20:42:36.601410 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.604017204+00:00 stderr F I0813 20:42:36.603073 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.604017204+00:00 stderr F I0813 20:42:36.603646 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.604410115+00:00 stderr F I0813 20:42:36.604202 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.605026003+00:00 stderr F I0813 20:42:36.604971 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:41.560395498+00:00 stderr F I0813 20:42:41.559614 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:41.564204617+00:00 stderr F I0813 20:42:41.562965 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:41.565885616+00:00 stderr F I0813 20:42:41.565738 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:41.567214934+00:00 stderr F I0813 20:42:41.567135 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:42:41.567214934+00:00 stderr F I0813 20:42:41.567198 1 base_controller.go:172] Shutting down NamespaceFinalizerController_openshift-apiserver ... 
2025-08-13T20:42:41.567470661+00:00 stderr F I0813 20:42:41.567393 1 base_controller.go:172] Shutting down StatusSyncer_openshift-apiserver ... 2025-08-13T20:42:41.567531943+00:00 stderr F I0813 20:42:41.567493 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:42:41.567531943+00:00 stderr F I0813 20:42:41.567512 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:41.567577785+00:00 stderr F I0813 20:42:41.567549 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 2025-08-13T20:42:41.567591225+00:00 stderr F I0813 20:42:41.567554 1 base_controller.go:172] Shutting down SecretRevisionPruneController ... 2025-08-13T20:42:41.568050328+00:00 stderr F I0813 20:42:41.567536 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:42:41.568158161+00:00 stderr F I0813 20:42:41.567213 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:42:41.568158161+00:00 stderr F I0813 20:42:41.568152 1 base_controller.go:172] Shutting down APIServerStaticResources ... 2025-08-13T20:42:41.568176892+00:00 stderr F I0813 20:42:41.568166 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:41.568189682+00:00 stderr F I0813 20:42:41.568179 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:42:41.568201383+00:00 stderr F I0813 20:42:41.568193 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:42:41.568213603+00:00 stderr F I0813 20:42:41.568206 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:42:41.568263164+00:00 stderr F I0813 20:42:41.568220 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:42:41.568278355+00:00 stderr F I0813 20:42:41.568267 1 base_controller.go:172] Shutting down APIServiceController_openshift-apiserver ... 
2025-08-13T20:42:41.568290495+00:00 stderr F I0813 20:42:41.568279 1 base_controller.go:172] Shutting down OpenShiftAPIServerWorkloadController ... 2025-08-13T20:42:41.568302435+00:00 stderr F I0813 20:42:41.568293 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:41.568314006+00:00 stderr F I0813 20:42:41.568305 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:42:41.568880472+00:00 stderr F I0813 20:42:41.567455 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-apiserver controller ... 2025-08-13T20:42:41.568880472+00:00 stderr F I0813 20:42:41.567589 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:41.568880472+00:00 stderr F I0813 20:42:41.568842 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ... 2025-08-13T20:42:41.568880472+00:00 stderr F I0813 20:42:41.568869 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated 2025-08-13T20:42:41.568970345+00:00 stderr F I0813 20:42:41.568101 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:42:41.568970345+00:00 stderr F I0813 20:42:41.568948 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:42:41.568970345+00:00 stderr F I0813 20:42:41.568319 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 2025-08-13T20:42:41.568970345+00:00 stderr F I0813 20:42:41.568964 1 base_controller.go:114] Shutting down worker of NamespaceFinalizerController_openshift-apiserver controller ... 2025-08-13T20:42:41.568989525+00:00 stderr F I0813 20:42:41.568974 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 
2025-08-13T20:42:41.568989525+00:00 stderr F I0813 20:42:41.568981 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:42:41.568989525+00:00 stderr F I0813 20:42:41.568982 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:42:41.569004836+00:00 stderr F I0813 20:42:41.568996 1 base_controller.go:114] Shutting down worker of APIServerStaticResources controller ... 2025-08-13T20:42:41.569016266+00:00 stderr F I0813 20:42:41.569008 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:42:41.569027306+00:00 stderr F I0813 20:42:41.569019 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:41.569038277+00:00 stderr F I0813 20:42:41.569030 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:42:41.569049237+00:00 stderr F I0813 20:42:41.569038 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 2025-08-13T20:42:41.569060767+00:00 stderr F I0813 20:42:41.569046 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 2025-08-13T20:42:41.569072158+00:00 stderr F I0813 20:42:41.569059 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:42:41.569083528+00:00 stderr F I0813 20:42:41.569067 1 base_controller.go:104] All APIServerStaticResources workers have been terminated 2025-08-13T20:42:41.569130949+00:00 stderr F I0813 20:42:41.569092 1 base_controller.go:114] Shutting down worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T20:42:41.569130949+00:00 stderr F I0813 20:42:41.569126 1 base_controller.go:114] Shutting down worker of OpenShiftAPIServerWorkloadController controller ... 
2025-08-13T20:42:41.569143190+00:00 stderr F I0813 20:42:41.569135 1 base_controller.go:104] All OpenShiftAPIServerWorkloadController workers have been terminated 2025-08-13T20:42:41.569152760+00:00 stderr F I0813 20:42:41.569145 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:42:41.569163740+00:00 stderr F I0813 20:42:41.569153 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:42:41.569217322+00:00 stderr F I0813 20:42:41.569184 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:42:41.569217322+00:00 stderr F I0813 20:42:41.569211 1 base_controller.go:104] All NamespaceFinalizerController_openshift-apiserver workers have been terminated 2025-08-13T20:42:41.569276544+00:00 stderr F I0813 20:42:41.569251 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:42:41.569276544+00:00 stderr F I0813 20:42:41.569263 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:42:41.569290064+00:00 stderr F I0813 20:42:41.569274 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:42:41.569290064+00:00 stderr F I0813 20:42:41.569284 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:42:41.569302034+00:00 stderr F I0813 20:42:41.569293 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:42:41.569313445+00:00 stderr F I0813 20:42:41.569303 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:42:41.570580971+00:00 stderr F I0813 20:42:41.570511 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:41.571554339+00:00 stderr F I0813 20:42:41.571477 1 simple_featuregate_reader.go:177] Shutting 
down feature-gate-detector 2025-08-13T20:42:41.571822437+00:00 stderr F I0813 20:42:41.571727 1 base_controller.go:114] Shutting down worker of SecretRevisionPruneController controller ... 2025-08-13T20:42:41.571822437+00:00 stderr F I0813 20:42:41.571759 1 base_controller.go:104] All SecretRevisionPruneController workers have been terminated 2025-08-13T20:42:41.572262430+00:00 stderr F I0813 20:42:41.572191 1 base_controller.go:104] All APIServiceController_openshift-apiserver workers have been terminated 2025-08-13T20:42:41.572279030+00:00 stderr F I0813 20:42:41.572263 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:42:41.572570778+00:00 stderr F I0813 20:42:41.572519 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:41.572683722+00:00 stderr F I0813 20:42:41.572279 1 base_controller.go:150] All StatusSyncer_openshift-apiserver post start hooks have been terminated 2025-08-13T20:42:41.572683722+00:00 stderr F I0813 20:42:41.572664 1 base_controller.go:104] All StatusSyncer_openshift-apiserver workers have been terminated 2025-08-13T20:42:41.573062053+00:00 stderr F I0813 20:42:41.572997 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:41.574001360+00:00 stderr F I0813 20:42:41.573930 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:41.574739081+00:00 stderr F I0813 20:42:41.574010 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:41.574865255+00:00 stderr F I0813 20:42:41.574102 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:41.574896446+00:00 stderr F I0813 20:42:41.574375 1 secure_serving.go:258] Stopped listening on [::]:8443 
2025-08-13T20:42:41.574984268+00:00 stderr F E0813 20:42:41.574407 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:41.574984268+00:00 stderr F I0813 20:42:41.574941 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
2025-08-13T20:42:41.574984268+00:00 stderr F I0813 20:42:41.574958 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting
2025-08-13T20:42:41.574984268+00:00 stderr F I0813 20:42:41.574974 1 builder.go:329] server exited
2025-08-13T20:42:41.575146793+00:00 stderr F I0813 20:42:41.574458 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:42:41.575146793+00:00 stderr F I0813 20:42:41.573201 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
2025-08-13T20:42:41.576441030+00:00 stderr F I0813 20:42:41.576366 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"d9b35288-1c3d-4620-987e-0e2acf09bc76", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"37378", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-apiserver-operator-7c88c4c865-kn67m_a533f076-9102-4f1c-ac58-2cc3fe6b65c6 stopped leading
2025-08-13T20:42:41.578623853+00:00 stderr F W0813 20:42:41.578497 1 leaderelection.go:84] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator/0.log

2025-08-13T19:59:34.135123902+00:00 stderr F I0813 19:59:34.115398 1 cmd.go:240] Using service-serving-cert provided certificates
2025-08-13T19:59:34.136554623+00:00 stderr F I0813 19:59:34.136493 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:34.219283961+00:00 stderr F I0813 19:59:34.207593 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:35.552098122+00:00 stderr F I0813 19:59:35.549458 1 builder.go:298] openshift-apiserver-operator version -
2025-08-13T19:59:41.023017032+00:00 stderr F I0813 19:59:41.021715 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:59:41.023017032+00:00 stderr F W0813 19:59:41.022603 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:41.023017032+00:00 stderr F W0813 19:59:41.022613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:41.121703825+00:00 stderr F I0813 19:59:41.096993 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:41.121703825+00:00 stderr F I0813 19:59:41.097644 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:41.124078033+00:00 stderr F I0813 19:59:41.123190 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:41.133035848+00:00 stderr F I0813 19:59:41.132945 1 leaderelection.go:250] attempting to acquire leader lease openshift-apiserver-operator/openshift-apiserver-operator-lock... 2025-08-13T19:59:41.144280409+00:00 stderr F I0813 19:59:41.136102 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.144507235+00:00 stderr F I0813 19:59:41.144422 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:41.145183715+00:00 stderr F I0813 19:59:41.141975 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:41.145266797+00:00 stderr F I0813 19:59:41.145244 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:41.230075334+00:00 stderr F I0813 19:59:41.206005 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:41.230075334+00:00 stderr F I0813 19:59:41.207491 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:41.258358011+00:00 stderr F I0813 19:59:41.234516 1 tlsconfig.go:240] "Starting 
DynamicServingCertificateController" 2025-08-13T19:59:41.507550514+00:00 stderr F I0813 19:59:41.490492 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:41.622481470+00:00 stderr F I0813 19:59:41.562304 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:41.622481470+00:00 stderr F E0813 19:59:41.562884 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.622481470+00:00 stderr F E0813 19:59:41.562933 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.622481470+00:00 stderr F E0813 19:59:41.594311 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.638918819+00:00 stderr F E0813 19:59:41.638768 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.642404768+00:00 stderr F E0813 19:59:41.642245 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.687986337+00:00 stderr F E0813 19:59:41.686170 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T19:59:41.792514437+00:00 stderr F E0813 19:59:41.790374 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.854587096+00:00 stderr F E0813 19:59:41.854483 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.854587096+00:00 stderr F E0813 19:59:41.854544 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.924636323+00:00 stderr F I0813 19:59:41.889206 1 leaderelection.go:260] successfully acquired lease openshift-apiserver-operator/openshift-apiserver-operator-lock 2025-08-13T19:59:41.950578953+00:00 stderr F I0813 19:59:41.948675 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:42.022083821+00:00 stderr F I0813 19:59:42.018917 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"d9b35288-1c3d-4620-987e-0e2acf09bc76", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28297", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-apiserver-operator-7c88c4c865-kn67m_f4e743f1-18d7-4ed5-bf22-f0d7b2e289da became leader 2025-08-13T19:59:42.022083821+00:00 stderr F E0813 19:59:42.019038 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.022083821+00:00 stderr F E0813 19:59:42.019069 1 configmap_cafile_content.go:243] 
kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.100370823+00:00 stderr F E0813 19:59:42.100311 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.184075459+00:00 stderr F E0813 19:59:42.184009 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.305423498+00:00 stderr F E0813 19:59:42.266711 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.308877906+00:00 stderr F I0813 19:59:42.308767 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:42.324312196+00:00 stderr F I0813 19:59:42.314029 1 starter.go:133] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather 
InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T19:59:42.325739457+00:00 stderr F I0813 19:59:42.325498 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", 
"ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:42.505465430+00:00 stderr F E0813 19:59:42.505124 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:42.587987932+00:00 stderr F E0813 19:59:42.587167 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.157083323+00:00 stderr F E0813 19:59:43.155147 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.243961490+00:00 stderr F E0813 19:59:43.242357 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.423531584+00:00 stderr F I0813 19:59:44.350010 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T19:59:44.423531584+00:00 stderr F I0813 19:59:44.370098 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:44.425127769+00:00 stderr F I0813 19:59:44.425086 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAPIServerWorkloadController 2025-08-13T19:59:44.425207902+00:00 stderr F I0813 19:59:44.425189 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.428650 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.452637 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-apiserver 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.452689 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.452706 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.452940 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453112 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453130 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453280 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453303 1 
base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453360 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453383 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453447 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T19:59:44.463005019+00:00 stderr F I0813 19:59:44.453460 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T19:59:44.499400267+00:00 stderr F E0813 19:59:44.497287 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.529484614+00:00 stderr F E0813 19:59:44.528348 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.637050180+00:00 stderr F I0813 19:59:44.636109 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources 2025-08-13T19:59:44.680926361+00:00 stderr F I0813 19:59:44.678565 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_openshift-apiserver 2025-08-13T19:59:45.346473993+00:00 stderr F I0813 19:59:45.333954 1 trace.go:236] Trace[880847307]: "DeltaFIFO Pop Process" ID:config-operator,Depth:22,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:45.152) (total time: 180ms): 2025-08-13T19:59:45.346473993+00:00 stderr F Trace[880847307]: [180.65508ms] [180.65508ms] END 2025-08-13T19:59:45.347405809+00:00 stderr F I0813 19:59:45.347363 1 request.go:697] Waited for 1.074015845s due to client-side throttling, not priority and 
fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps?limit=500&resourceVersion=0 2025-08-13T19:59:46.881152029+00:00 stderr F I0813 19:59:46.880389 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-apiserver 2025-08-13T19:59:46.914408827+00:00 stderr F I0813 19:59:46.911315 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-apiserver controller ... 2025-08-13T19:59:46.914408827+00:00 stderr F I0813 19:59:46.880450 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2025-08-13T19:59:46.914408827+00:00 stderr F I0813 19:59:46.911661 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 2025-08-13T19:59:47.039904944+00:00 stderr F I0813 19:59:47.038817 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2025-08-13T19:59:47.039904944+00:00 stderr F I0813 19:59:47.038944 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 2025-08-13T19:59:47.050216138+00:00 stderr F I0813 19:59:47.049173 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T19:59:47.050216138+00:00 stderr F I0813 19:59:47.049312 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T19:59:47.268890551+00:00 stderr F I0813 19:59:47.268767 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-08-13T19:59:47.268993164+00:00 stderr F I0813 19:59:47.268972 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-08-13T19:59:47.269063136+00:00 stderr F I0813 19:59:47.269043 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T19:59:47.269097157+00:00 stderr F I0813 19:59:47.269085 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 
2025-08-13T19:59:47.318959819+00:00 stderr F E0813 19:59:47.282138 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.318959819+00:00 stderr F E0813 19:59:47.282332 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.359016360+00:00 stderr F I0813 19:59:47.358950 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T19:59:47.359111993+00:00 stderr F I0813 19:59:47.359088 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-08-13T19:59:47.385992089+00:00 stderr F I0813 19:59:47.365671 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T19:59:47.385992089+00:00 stderr F I0813 19:59:47.365900 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.147306 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.152040 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.151264 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.152162 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 
2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.151273 1 base_controller.go:73] Caches are synced for StatusSyncer_openshift-apiserver 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.153572 1 base_controller.go:110] Starting #1 worker of StatusSyncer_openshift-apiserver controller ... 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.151281 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T19:59:48.178270634+00:00 stderr F I0813 19:59:48.153614 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T19:59:48.178754988+00:00 stderr F I0813 19:59:48.178520 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-08-13T19:59:48.178984295+00:00 stderr F I0813 19:59:48.178887 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.189988 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.151349 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.190079 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.151410 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.190101 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 
2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.151426 1 base_controller.go:73] Caches are synced for OpenShiftAPIServerWorkloadController 2025-08-13T19:59:48.200723534+00:00 stderr F I0813 19:59:48.190641 1 base_controller.go:110] Starting #1 worker of OpenShiftAPIServerWorkloadController controller ... 2025-08-13T19:59:48.201255449+00:00 stderr F I0813 19:59:48.201208 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T19:59:48.201310881+00:00 stderr F I0813 19:59:48.151535 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:48.201365282+00:00 stderr F I0813 19:59:48.201348 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
2025-08-13T19:59:48.379765438+00:00 stderr F I0813 19:59:48.377924 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:52.679660601+00:00 stderr F I0813 19:59:52.678581 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()") 2025-08-13T19:59:52.815968576+00:00 stderr F I0813 19:59:52.814213 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:52.856690037+00:00 stderr F I0813 19:59:52.856631 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 
2025-08-13T19:59:52.947888527+00:00 stderr F I0813 19:59:52.930025 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.985924371+00:00 stderr F I0813 19:59:52.985768 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:52.985684334 +0000 UTC))" 2025-08-13T19:59:52.985924371+00:00 stderr F I0813 19:59:52.985892 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:52.98587307 +0000 UTC))" 2025-08-13T19:59:52.985924371+00:00 stderr F I0813 19:59:52.985914 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:52.98589904 +0000 UTC))" 2025-08-13T19:59:52.986020194+00:00 stderr F I0813 19:59:52.985939 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" 
[] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:52.985919701 +0000 UTC))" 2025-08-13T19:59:52.989767381+00:00 stderr F I0813 19:59:52.989708 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.985948382 +0000 UTC))" 2025-08-13T19:59:52.989870384+00:00 stderr F I0813 19:59:52.989827 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.98974213 +0000 UTC))" 2025-08-13T19:59:52.989922125+00:00 stderr F I0813 19:59:52.989879 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.989863123 +0000 UTC))" 2025-08-13T19:59:52.989934526+00:00 stderr F I0813 19:59:52.989918 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.989903795 +0000 UTC))" 2025-08-13T19:59:52.989986507+00:00 stderr F I0813 19:59:52.989946 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.989924315 +0000 UTC))" 2025-08-13T19:59:52.990365098+00:00 stderr F I0813 19:59:52.990315 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:52.990295666 +0000 UTC))" 2025-08-13T19:59:53.015943127+00:00 stderr F I0813 19:59:52.990714 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115176\" (2025-08-13 18:59:35 +0000 UTC to 2026-08-13 18:59:35 +0000 UTC (now=2025-08-13 19:59:52.990651836 +0000 UTC))" 2025-08-13T19:59:53.044103870+00:00 stderr F I0813 19:59:52.941961 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.150721773+00:00 stderr F I0813 19:59:54.136572 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.150721773+00:00 stderr F I0813 19:59:54.141652 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.186460882+00:00 stderr F I0813 19:59:54.186325 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:54.186460882+00:00 stderr F I0813 19:59:54.186427 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:55.132750746+00:00 stderr F I0813 19:59:55.131992 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-67cbf64bc9-mtx25 pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:55.849174409+00:00 stderr F I0813 19:59:55.847137 1 trace.go:236] Trace[896855870]: "Reflector ListAndWatch" name:k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 (13-Aug-2025 19:59:44.428) (total time: 11413ms): 2025-08-13T19:59:55.849174409+00:00 stderr F Trace[896855870]: ---"Objects listed" 
error: 11412ms (19:59:55.841) 2025-08-13T19:59:55.849174409+00:00 stderr F Trace[896855870]: [11.413184959s] [11.413184959s] END 2025-08-13T19:59:55.849174409+00:00 stderr F I0813 19:59:55.847586 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:141 2025-08-13T19:59:55.852177834+00:00 stderr F I0813 19:59:55.850951 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T19:59:55.852177834+00:00 stderr F I0813 19:59:55.851008 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-08-13T19:59:56.081976655+00:00 stderr F I0813 19:59:56.063059 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-67cbf64bc9-mtx25 pod)" 2025-08-13T20:00:00.935404843+00:00 stderr F I0813 20:00:00.926421 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-67cbf64bc9-mtx25 pod)","reason":"APIServerDeployment_UnavailablePod","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name 
\"https\"","reason":"APIServerDeployment_NoPod::APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:01.007640322+00:00 stderr F E0813 20:00:01.007204 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F 
apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.007640322+00:00 stderr F apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:01.324598297+00:00 stderr F I0813 20:00:01.321168 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses 
with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" 2025-08-13T20:00:01.389367993+00:00 stderr F I0813 20:00:01.389240 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in 
\"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:01.733756640+00:00 stderr F I0813 20:00:01.729465 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with 
port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:01.733756640+00:00 stderr F I0813 20:00:01.702742 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in 
\"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name 
\"https\"" 2025-08-13T20:00:01.893116002+00:00 stderr F E0813 20:00:01.892352 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:02.341122666+00:00 stderr F I0813 20:00:02.338716 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:02.380369065+00:00 stderr F E0813 20:00:02.378934 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in "openshift-apiserver" have no addresses with port name "https" 2025-08-13T20:00:02.898135048+00:00 stderr F I0813 20:00:02.894040 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All 
is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:17Z","message":"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"","reason":"APIServices_Error","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:02.960055244+00:00 stderr F E0813 20:00:02.959041 1 base_controller.go:268] StatusSyncer_openshift-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "openshift-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:02.961464364+00:00 stderr F I0813 20:00:02.960628 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name 
\"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" 2025-08-13T20:00:05.445899825+00:00 stderr F I0813 20:00:05.444491 1 status_controller.go:218] clusteroperator/openshift-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:00:01Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:49Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:00:05Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:05.525944197+00:00 stderr F I0813 20:00:05.520820 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"bf9e0c20-07cb-4537-b7f9-efae9f964f5e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") 2025-08-13T20:00:05.783362207+00:00 stderr F I0813 20:00:05.767722 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.767683 +0000 UTC))" 2025-08-13T20:00:05.783543412+00:00 stderr F I0813 20:00:05.783514 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.78345736 +0000 UTC))" 2025-08-13T20:00:05.783626085+00:00 stderr F I0813 20:00:05.783603 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" 
(2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.783570633 +0000 UTC))" 2025-08-13T20:00:05.783685886+00:00 stderr F I0813 20:00:05.783668 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.783647445 +0000 UTC))" 2025-08-13T20:00:05.783761848+00:00 stderr F I0813 20:00:05.783741 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783705937 +0000 UTC))" 2025-08-13T20:00:05.784002945+00:00 stderr F I0813 20:00:05.783985 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783963184 +0000 UTC))" 2025-08-13T20:00:05.784071537+00:00 stderr F I0813 20:00:05.784056 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.784040176 +0000 UTC))" 2025-08-13T20:00:05.798856269+00:00 stderr F I0813 20:00:05.784390 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.784092188 +0000 UTC))" 2025-08-13T20:00:05.799622201+00:00 stderr F I0813 20:00:05.799596 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.799511798 +0000 UTC))" 2025-08-13T20:00:05.799693793+00:00 stderr F I0813 20:00:05.799679 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.799652172 +0000 UTC))" 2025-08-13T20:00:05.800171056+00:00 stderr F I0813 20:00:05.800144 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] 
validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 20:00:05.800123445 +0000 UTC))" 2025-08-13T20:00:05.800500196+00:00 stderr F I0813 20:00:05.800476 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115176\" (2025-08-13 18:59:35 +0000 UTC to 2026-08-13 18:59:35 +0000 UTC (now=2025-08-13 20:00:05.800458495 +0000 UTC))" 2025-08-13T20:00:28.273069989+00:00 stderr F I0813 20:00:28.266938 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:00:28.273069989+00:00 stderr F I0813 20:00:28.272188 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.272753 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:28.272702608 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275316 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:28.275274432 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275353 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:28.275331893 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275377 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:28.275358604 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275399 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275385875 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275428 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275405685 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275448 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275435676 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275467 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275454277 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275494 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:28.275476947 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275524 1 tlsconfig.go:178] "Loaded client 
CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:28.275506228 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.275954 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-apiserver-operator.svc,metrics.openshift-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:07 +0000 UTC to 2027-08-13 20:00:08 +0000 UTC (now=2025-08-13 20:00:28.27593466 +0000 UTC))" 2025-08-13T20:00:28.280901242+00:00 stderr F I0813 20:00:28.279726 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115180\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115176\" (2025-08-13 18:59:35 +0000 UTC to 2026-08-13 18:59:35 +0000 UTC (now=2025-08-13 20:00:28.279546053 +0000 UTC))" 2025-08-13T20:00:28.309754775+00:00 stderr F I0813 20:00:28.308236 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:29.270989773+00:00 stderr F I0813 20:00:29.263950 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="32d40e5d8e7856640e9aa0fa689da20ce17efb0f940d3f20467677d758499f97", new="619bd1166efa93cd4a6ecce49845ad14ea28915925e46f2a1ae6f0f79bf4e301") 
2025-08-13T20:00:29.270989773+00:00 stderr F W0813 20:00:29.264566 1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:00:29.270989773+00:00 stderr F I0813 20:00:29.264640 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="2deeedad0b20e8edd995d2b2452184a3a6d229c608ee3d8e51038616f572694e", new="cc755c6895764b0695467ba8a77e5ba8598a858e253fc2c3e013de460f5584c5") 2025-08-13T20:00:29.271215499+00:00 stderr F I0813 20:00:29.271181 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:00:29.271295661+00:00 stderr F I0813 20:00:29.271281 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:00:29.271485157+00:00 stderr F I0813 20:00:29.271434 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:00:29.271501527+00:00 stderr F I0813 20:00:29.271493 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:00:29.271551519+00:00 stderr F I0813 20:00:29.271514 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:00:29.271562879+00:00 stderr F I0813 20:00:29.271550 1 base_controller.go:172] Shutting down APIServerStaticResources ... 2025-08-13T20:00:29.271682982+00:00 stderr F I0813 20:00:29.271572 1 base_controller.go:172] Shutting down SecretRevisionPruneController ... 2025-08-13T20:00:29.271682982+00:00 stderr F I0813 20:00:29.271586 1 base_controller.go:172] Shutting down NamespaceFinalizerController_openshift-apiserver ... 2025-08-13T20:00:29.271738294+00:00 stderr F I0813 20:00:29.271698 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 
2025-08-13T20:00:29.271738294+00:00 stderr F I0813 20:00:29.271732 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:00:29.271860937+00:00 stderr F I0813 20:00:29.271748 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:00:29.271860937+00:00 stderr F I0813 20:00:29.271770 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:00:29.271860937+00:00 stderr F I0813 20:00:29.271828 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:00:29.271880868+00:00 stderr F I0813 20:00:29.271858 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:00:29.271880868+00:00 stderr F I0813 20:00:29.271874 1 base_controller.go:114] Shutting down worker of APIServerStaticResources controller ... 2025-08-13T20:00:29.271890288+00:00 stderr F I0813 20:00:29.271881 1 base_controller.go:104] All APIServerStaticResources workers have been terminated 2025-08-13T20:00:29.271899219+00:00 stderr F I0813 20:00:29.271889 1 base_controller.go:114] Shutting down worker of SecretRevisionPruneController controller ... 2025-08-13T20:00:29.271899219+00:00 stderr F I0813 20:00:29.271895 1 base_controller.go:104] All SecretRevisionPruneController workers have been terminated 2025-08-13T20:00:29.271908319+00:00 stderr F I0813 20:00:29.271902 1 base_controller.go:114] Shutting down worker of NamespaceFinalizerController_openshift-apiserver controller ... 2025-08-13T20:00:29.271917359+00:00 stderr F I0813 20:00:29.271908 1 base_controller.go:104] All NamespaceFinalizerController_openshift-apiserver workers have been terminated 2025-08-13T20:00:29.271983701+00:00 stderr F I0813 20:00:29.271959 1 base_controller.go:172] Shutting down ConfigObserver ... 
2025-08-13T20:00:29.272080664+00:00 stderr F I0813 20:00:29.272047 1 base_controller.go:172] Shutting down OpenShiftAPIServerWorkloadController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273043 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273101 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273117 1 base_controller.go:172] Shutting down APIServiceController_openshift-apiserver ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273129 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273144 1 base_controller.go:172] Shutting down StatusSyncer_openshift-apiserver ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273149 1 base_controller.go:150] All StatusSyncer_openshift-apiserver post start hooks have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273175 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273188 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273201 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273210 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273217 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273226 1 base_controller.go:114] Shutting down worker of APIServiceController_openshift-apiserver controller ... 
2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273235 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273240 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273249 1 base_controller.go:114] Shutting down worker of StatusSyncer_openshift-apiserver controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273255 1 base_controller.go:104] All StatusSyncer_openshift-apiserver workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273263 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273267 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273274 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273279 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273307 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273318 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 
2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273323 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273412 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273463 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273552 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273577 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273602 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:00:29.275102630+00:00 stderr F I0813 20:00:29.273725 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 
2025-08-13T20:00:29.276727666+00:00 stderr F I0813 20:00:29.276689 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:29.277242271+00:00 stderr F I0813 20:00:29.277217 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:00:29.277347174+00:00 stderr F I0813 20:00:29.277326 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:29.280376320+00:00 stderr F I0813 20:00:29.277485 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:00:29.281461021+00:00 stderr F E0813 20:00:29.281299 1 base_controller.go:268] OpenShiftAPIServerWorkloadController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:00:29.281558024+00:00 stderr F I0813 20:00:29.281539 1 base_controller.go:114] Shutting down worker of OpenShiftAPIServerWorkloadController controller ... 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.281357 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 
2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.282340 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.282541 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.282553 1 base_controller.go:104] All APIServiceController_openshift-apiserver workers have been terminated 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294267 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294408 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294470 1 builder.go:329] server exited 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294494 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:00:29.325557889+00:00 stderr F I0813 20:00:29.294505 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:00:29.325711713+00:00 stderr F I0813 20:00:29.325682 1 base_controller.go:104] All OpenShiftAPIServerWorkloadController workers have been terminated 2025-08-13T20:00:29.325923179+00:00 stderr F I0813 20:00:29.325901 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:00:29.326444954+00:00 stderr F I0813 20:00:29.326421 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 
2025-08-13T20:00:29.326606559+00:00 stderr F E0813 20:00:29.326582 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/controlplane.operator.openshift.io/v1alpha1/namespaces/openshift-apiserver/podnetworkconnectivitychecks": context canceled
2025-08-13T20:00:29.326673950+00:00 stderr F I0813 20:00:29.326657 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ...
2025-08-13T20:00:29.326720562+00:00 stderr F I0813 20:00:29.326704 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated
2025-08-13T20:00:29.326878466+00:00 stderr F I0813 20:00:29.326859 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...
2025-08-13T20:00:29.326955608+00:00 stderr F I0813 20:00:29.326939 1 base_controller.go:104] All ResourceSyncController workers have been terminated
2025-08-13T20:00:29.406027283+00:00 stderr F W0813 20:00:29.405506 1 leaderelection.go:84] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29426400-gwxp5_51a3de13-562b-4bfd-aacb-e8832c894074/collect-profiles/0.log

2025-12-13T00:14:24.040642736+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="error verifying provided cert and key: certificate has expired"
2025-12-13T00:14:24.041353079+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="generating a new cert and key"
2025-12-13T00:14:26.491661449+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="error retrieving pprof profile: Get \"https://olm-operator-metrics:8443/debug/pprof/heap\": remote error: tls: unknown certificate authority"
2025-12-13T00:14:26.520890119+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="error retrieving pprof profile: Get \"https://catalog-operator-metrics:8443/debug/pprof/heap\": remote error: tls: unknown certificate authority"

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/1.log

2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797147 1 migrator.go:18] FLAG: --add_dir_header="false"
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797273 1 migrator.go:18] FLAG: --alsologtostderr="true"
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797277 1 migrator.go:18] FLAG: --kube-api-burst="1000"
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797282 1 migrator.go:18] FLAG: --kube-api-qps="40"
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797287 1 migrator.go:18] FLAG: --kubeconfig=""
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797291 1 migrator.go:18] FLAG: --log_backtrace_at=":0"
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797296 1 migrator.go:18] FLAG: --log_dir=""
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797300 1 migrator.go:18] FLAG: --log_file=""
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797303 1 migrator.go:18] FLAG: --log_file_max_size="1800"
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797306 1 migrator.go:18] FLAG: --logtostderr="true"
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797309 1 migrator.go:18] FLAG: --one_output="false"
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797312 1 migrator.go:18] FLAG: --skip_headers="false"
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797315 1 migrator.go:18] FLAG: --skip_log_headers="false"
2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797318 1 migrator.go:18] FLAG: 
--stderrthreshold="2" 2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797321 1 migrator.go:18] FLAG: --v="2" 2025-12-13T00:13:14.797428608+00:00 stderr F I1213 00:13:14.797324 1 migrator.go:18] FLAG: --vmodule="" ././@LongLink0000644000000000000000000000030300000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage0000644000175000017500000000376615117130646033062 0ustar zuulzuul2025-08-13T19:59:09.272236653+00:00 stderr F I0813 19:59:09.269668 1 migrator.go:18] FLAG: --add_dir_header="false" 2025-08-13T19:59:09.315418924+00:00 stderr F I0813 19:59:09.315376 1 migrator.go:18] FLAG: --alsologtostderr="true" 2025-08-13T19:59:09.315482885+00:00 stderr F I0813 19:59:09.315453 1 migrator.go:18] FLAG: --kube-api-burst="1000" 2025-08-13T19:59:09.315541857+00:00 stderr F I0813 19:59:09.315512 1 migrator.go:18] FLAG: --kube-api-qps="40" 2025-08-13T19:59:09.315594069+00:00 stderr F I0813 19:59:09.315571 1 migrator.go:18] FLAG: --kubeconfig="" 2025-08-13T19:59:09.315638740+00:00 stderr F I0813 19:59:09.315617 1 migrator.go:18] FLAG: --log_backtrace_at=":0" 2025-08-13T19:59:09.315681631+00:00 stderr F I0813 19:59:09.315659 1 migrator.go:18] FLAG: --log_dir="" 2025-08-13T19:59:09.315724812+00:00 stderr F I0813 19:59:09.315710 1 migrator.go:18] FLAG: --log_file="" 2025-08-13T19:59:09.315765133+00:00 stderr F I0813 19:59:09.315747 1 migrator.go:18] FLAG: --log_file_max_size="1800" 2025-08-13T19:59:09.315911708+00:00 stderr F I0813 19:59:09.315890 1 migrator.go:18] FLAG: --logtostderr="true" 2025-08-13T19:59:09.316095953+00:00 stderr F I0813 19:59:09.315951 1 migrator.go:18] FLAG: --one_output="false" 2025-08-13T19:59:09.316153515+00:00 stderr F I0813 19:59:09.316130 1 migrator.go:18] FLAG: 
--skip_headers="false" 2025-08-13T19:59:09.316200426+00:00 stderr F I0813 19:59:09.316184 1 migrator.go:18] FLAG: --skip_log_headers="false" 2025-08-13T19:59:09.316252307+00:00 stderr F I0813 19:59:09.316234 1 migrator.go:18] FLAG: --stderrthreshold="2" 2025-08-13T19:59:09.316293679+00:00 stderr F I0813 19:59:09.316280 1 migrator.go:18] FLAG: --v="2" 2025-08-13T19:59:09.316350440+00:00 stderr F I0813 19:59:09.316311 1 migrator.go:18] FLAG: --vmodule="" 2025-08-13T20:42:36.480632977+00:00 stderr F I0813 20:42:36.473590 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF ././@LongLink0000644000000000000000000000026700000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-o0000755000175000017500000000000015117130646032731 5ustar zuulzuul././@LongLink0000644000000000000000000000031300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-o0000755000175000017500000000000015117130654032730 5ustar zuulzuul././@LongLink0000644000000000000000000000032000000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-o0000644000175000017500000005633015117130646032742 0ustar zuulzuul2025-12-13T00:13:16.207831221+00:00 stderr F I1213 
00:13:16.204914 1 cmd.go:233] Using service-serving-cert provided certificates 2025-12-13T00:13:16.207831221+00:00 stderr F I1213 00:13:16.205273 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:13:16.207831221+00:00 stderr F I1213 00:13:16.206381 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:16.311012378+00:00 stderr F I1213 00:13:16.309838 1 builder.go:271] service-ca-operator version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2025-12-13T00:13:16.311012378+00:00 stderr F I1213 00:13:16.310669 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:17.952818407+00:00 stderr F I1213 00:13:17.952288 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-12-13T00:13:17.959754940+00:00 stderr F I1213 00:13:17.959331 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-12-13T00:13:17.959754940+00:00 stderr F I1213 00:13:17.959350 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-12-13T00:13:17.959754940+00:00 stderr F I1213 00:13:17.959363 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-12-13T00:13:17.959754940+00:00 stderr F I1213 00:13:17.959368 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-12-13T00:13:17.962357107+00:00 stderr F I1213 00:13:17.962028 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:17.962357107+00:00 stderr F W1213 00:13:17.962043 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2025-12-13T00:13:17.962357107+00:00 stderr F W1213 00:13:17.962049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:17.962357107+00:00 stderr F I1213 00:13:17.962083 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-12-13T00:13:17.966999533+00:00 stderr F I1213 00:13:17.965749 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:17.966999533+00:00 stderr F I1213 00:13:17.965790 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:17.966999533+00:00 stderr F I1213 00:13:17.965825 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:17.966999533+00:00 stderr F I1213 00:13:17.965832 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:17.966999533+00:00 stderr F I1213 00:13:17.965845 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:17.966999533+00:00 stderr F I1213 00:13:17.965849 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:17.966999533+00:00 stderr F I1213 00:13:17.966306 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:17.966999533+00:00 stderr F I1213 00:13:17.966504 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-12-13T00:13:17.966999533+00:00 stderr F I1213 00:13:17.966635 1 
tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-12-13 00:13:17.96661291 +0000 UTC))" 2025-12-13T00:13:17.966999533+00:00 stderr F I1213 00:13:17.966791 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca-operator/service-ca-operator-lock... 2025-12-13T00:13:17.967051365+00:00 stderr F I1213 00:13:17.967017 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:13:17.966999523 +0000 UTC))" 2025-12-13T00:13:17.967051365+00:00 stderr F I1213 00:13:17.967032 1 secure_serving.go:210] Serving securely on [::]:8443 2025-12-13T00:13:17.967051365+00:00 stderr F I1213 00:13:17.967046 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-12-13T00:13:17.967062465+00:00 stderr F I1213 00:13:17.967056 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:18.069989013+00:00 stderr F I1213 00:13:18.066023 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:18.069989013+00:00 stderr F I1213 00:13:18.066105 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:13:18.069989013+00:00 stderr F I1213 00:13:18.066205 1 shared_informer.go:318] Caches are synced for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:18.069989013+00:00 stderr F I1213 00:13:18.066370 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.066336911 +0000 UTC))" 2025-12-13T00:13:18.069989013+00:00 stderr F I1213 00:13:18.066736 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-12-13 00:13:18.066714863 +0000 UTC))" 2025-12-13T00:13:18.077976832+00:00 stderr F I1213 00:13:18.077374 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:13:18.07733427 +0000 UTC))" 2025-12-13T00:13:18.077976832+00:00 stderr F I1213 00:13:18.077581 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 
00:13:18.077566228 +0000 UTC))" 2025-12-13T00:13:18.077976832+00:00 stderr F I1213 00:13:18.077601 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:13:18.077591859 +0000 UTC))" 2025-12-13T00:13:18.077976832+00:00 stderr F I1213 00:13:18.077621 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:18.077605259 +0000 UTC))" 2025-12-13T00:13:18.077976832+00:00 stderr F I1213 00:13:18.077639 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:18.07762549 +0000 UTC))" 2025-12-13T00:13:18.077976832+00:00 stderr F I1213 00:13:18.077654 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC 
(now=2025-12-13 00:13:18.077643271 +0000 UTC))" 2025-12-13T00:13:18.077976832+00:00 stderr F I1213 00:13:18.077669 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.077659521 +0000 UTC))" 2025-12-13T00:13:18.077976832+00:00 stderr F I1213 00:13:18.077684 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.077673772 +0000 UTC))" 2025-12-13T00:13:18.077976832+00:00 stderr F I1213 00:13:18.077697 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.077687722 +0000 UTC))" 2025-12-13T00:13:18.077976832+00:00 stderr F I1213 00:13:18.077712 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 
19:59:54 +0000 UTC (now=2025-12-13 00:13:18.077701543 +0000 UTC))" 2025-12-13T00:13:18.077976832+00:00 stderr F I1213 00:13:18.077734 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:13:18.077719643 +0000 UTC))" 2025-12-13T00:13:18.077976832+00:00 stderr F I1213 00:13:18.077749 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.077739824 +0000 UTC))" 2025-12-13T00:13:18.081973217+00:00 stderr F I1213 00:13:18.078075 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-12-13 00:13:18.078061665 +0000 UTC))" 2025-12-13T00:13:18.081973217+00:00 stderr F I1213 00:13:18.078353 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 
00:13:18.078342954 +0000 UTC))" 2025-12-13T00:19:35.790246293+00:00 stderr F I1213 00:19:35.789713 1 leaderelection.go:260] successfully acquired lease openshift-service-ca-operator/service-ca-operator-lock 2025-12-13T00:19:35.791510098+00:00 stderr F I1213 00:19:35.791449 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-lock", UID:"76f50757-9318-4fb1-b428-2b64c99787ea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42296", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-operator-546b4f8984-pwccz_23992dc0-5f38-4ec3-a225-581f0ac9116d became leader 2025-12-13T00:19:35.794052098+00:00 stderr F I1213 00:19:35.794013 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-12-13T00:19:35.796164747+00:00 stderr F I1213 00:19:35.796131 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-12-13T00:19:35.796164747+00:00 stderr F I1213 00:19:35.796146 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-12-13T00:19:35.796164747+00:00 stderr F I1213 00:19:35.796154 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-12-13T00:19:35.799974581+00:00 stderr F I1213 00:19:35.797272 1 base_controller.go:67] Waiting for caches to sync for ServiceCAOperator 2025-12-13T00:19:35.799974581+00:00 stderr F I1213 00:19:35.797537 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_service-ca 2025-12-13T00:19:35.898330803+00:00 stderr F I1213 00:19:35.898226 1 base_controller.go:73] Caches are synced for ServiceCAOperator 2025-12-13T00:19:35.898330803+00:00 stderr F I1213 00:19:35.898269 1 base_controller.go:110] Starting #1 worker of ServiceCAOperator controller ... 
2025-12-13T00:19:35.898330803+00:00 stderr F I1213 00:19:35.898309 1 base_controller.go:73] Caches are synced for StatusSyncer_service-ca 2025-12-13T00:19:35.898376955+00:00 stderr F I1213 00:19:35.898336 1 base_controller.go:110] Starting #1 worker of StatusSyncer_service-ca controller ... 2025-12-13T00:19:36.295061743+00:00 stderr F I1213 00:19:36.294927 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-12-13T00:19:36.295061743+00:00 stderr F I1213 00:19:36.294989 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561217 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.561187896 +0000 UTC))" 2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561250 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.561235158 +0000 UTC))" 2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561285 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC 
(now=2025-12-13 00:19:37.561255838 +0000 UTC))" 2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561300 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.561289459 +0000 UTC))" 2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561315 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.5613054 +0000 UTC))" 2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561336 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.56131992 +0000 UTC))" 2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561350 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 
19:49:26 +0000 UTC (now=2025-12-13 00:19:37.561340301 +0000 UTC))" 2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561365 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.561354031 +0000 UTC))" 2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561383 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.561373302 +0000 UTC))" 2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561399 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.561389642 +0000 UTC))" 2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561414 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 
2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.561403052 +0000 UTC))"
2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561427 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.561417873 +0000 UTC))"
2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.561749 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-12-13 00:19:37.561710231 +0000 UTC))"
2025-12-13T00:19:37.564305642+00:00 stderr F I1213 00:19:37.562026 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:19:37.562014389 +0000 UTC))"
=== home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/1.log ===
2025-08-13T20:05:46.695171711+00:00 stderr F I0813 20:05:46.693109 1 cmd.go:233] Using service-serving-cert provided certificates
2025-08-13T20:05:46.698320051+00:00 stderr F I0813 20:05:46.697106 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:05:46.701930994+00:00 stderr F I0813 20:05:46.700578 1 observer_polling.go:159] Starting file observer
2025-08-13T20:05:46.766204245+00:00 stderr F I0813 20:05:46.766110 1 builder.go:271] service-ca-operator version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5-
2025-08-13T20:05:46.768050687+00:00 stderr F I0813 20:05:46.768021 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:05:47.202189140+00:00 stderr F I0813 20:05:47.202074 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T20:05:47.209249832+00:00 stderr F I0813 20:05:47.209203 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400
2025-08-13T20:05:47.209327574+00:00 stderr F I0813 20:05:47.209313 1 maxinflight.go:145] "Initialized mutatingChan" len=200
2025-08-13T20:05:47.209388826+00:00 stderr F I0813 20:05:47.209374 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400
2025-08-13T20:05:47.209432707+00:00 stderr F I0813 20:05:47.209417 1 maxinflight.go:120] "Set 
denominator for mutating requests" limit=200 2025-08-13T20:05:47.234427463+00:00 stderr F I0813 20:05:47.234338 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:47.234427463+00:00 stderr F W0813 20:05:47.234389 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:47.234427463+00:00 stderr F W0813 20:05:47.234399 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:47.234479114+00:00 stderr F I0813 20:05:47.234405 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:05:47.239726024+00:00 stderr F I0813 20:05:47.239644 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:47.239726024+00:00 stderr F I0813 20:05:47.239690 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:47.239852348+00:00 stderr F I0813 20:05:47.239830 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:47.240207368+00:00 stderr F I0813 20:05:47.240155 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-08-13 20:05:47.240124436 +0000 UTC))" 2025-08-13T20:05:47.240207368+00:00 stderr F I0813 20:05:47.240178 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T20:05:47.240223789+00:00 stderr F I0813 20:05:47.240207 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:47.240819806+00:00 stderr F I0813 20:05:47.240600 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115547\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115546\" (2025-08-13 19:05:46 +0000 UTC to 2026-08-13 19:05:46 +0000 UTC (now=2025-08-13 20:05:47.240573979 +0000 UTC))" 2025-08-13T20:05:47.240819806+00:00 stderr F I0813 20:05:47.240685 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T20:05:47.240819806+00:00 stderr F I0813 20:05:47.240714 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:05:47.241303220+00:00 stderr F I0813 20:05:47.241198 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:47.242046351+00:00 stderr F I0813 20:05:47.241956 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:47.242115163+00:00 stderr F I0813 20:05:47.242060 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:47.242115163+00:00 stderr F I0813 20:05:47.242100 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:47.243732009+00:00 stderr F I0813 20:05:47.243709 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca-operator/service-ca-operator-lock... 
2025-08-13T20:05:47.269716273+00:00 stderr F I0813 20:05:47.269602 1 leaderelection.go:260] successfully acquired lease openshift-service-ca-operator/service-ca-operator-lock 2025-08-13T20:05:47.272188094+00:00 stderr F I0813 20:05:47.272007 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-lock", UID:"76f50757-9318-4fb1-b428-2b64c99787ea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31850", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-operator-546b4f8984-pwccz_cd77d967-43e4-48e5-af41-05935a4b105a became leader 2025-08-13T20:05:47.306324662+00:00 stderr F I0813 20:05:47.306185 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:05:47.328919019+00:00 stderr F I0813 20:05:47.328769 1 base_controller.go:67] Waiting for caches to sync for ServiceCAOperator 2025-08-13T20:05:47.330111923+00:00 stderr F I0813 20:05:47.329537 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:47.330460283+00:00 stderr F I0813 20:05:47.330404 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:47.330460283+00:00 stderr F I0813 20:05:47.330431 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T20:05:47.336307180+00:00 stderr F I0813 20:05:47.336161 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_service-ca 2025-08-13T20:05:47.341560371+00:00 stderr F I0813 20:05:47.341414 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:47.342178988+00:00 stderr F I0813 20:05:47.342066 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.342018904 +0000 UTC))" 2025-08-13T20:05:47.345069291+00:00 stderr F I0813 20:05:47.344954 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:05:47.345166924+00:00 stderr F I0813 20:05:47.344954 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:47.345447712+00:00 stderr F I0813 20:05:47.345315 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-08-13 20:05:47.345247906 +0000 UTC))" 2025-08-13T20:05:47.345686099+00:00 stderr F I0813 20:05:47.345621 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115547\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115546\" (2025-08-13 19:05:46 +0000 UTC to 2026-08-13 19:05:46 +0000 UTC (now=2025-08-13 20:05:47.345601186 +0000 UTC))" 2025-08-13T20:05:47.347929553+00:00 stderr F I0813 20:05:47.347827 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:05:47.347745868 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353588 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:05:47.348304684 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353744 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:05:47.353713179 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353765 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:05:47.3537514 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353925 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.353820102 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353957 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.353941695 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.353974 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.353962766 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354005 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.353985766 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354030 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:05:47.354012267 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354054 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:05:47.354040398 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354182 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:05:47.35412156 +0000 UTC))" 2025-08-13T20:05:47.354594884+00:00 stderr F I0813 20:05:47.354568 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-08-13 20:05:47.354544052 +0000 UTC))" 2025-08-13T20:05:47.358087384+00:00 stderr F I0813 20:05:47.357824 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115547\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115546\" (2025-08-13 19:05:46 +0000 UTC to 2026-08-13 19:05:46 +0000 UTC (now=2025-08-13 20:05:47.357737924 +0000 UTC))" 2025-08-13T20:05:47.430618321+00:00 stderr F I0813 20:05:47.430480 1 base_controller.go:73] Caches are synced for ServiceCAOperator 2025-08-13T20:05:47.430668902+00:00 stderr F I0813 20:05:47.430627 1 base_controller.go:110] Starting #1 worker of ServiceCAOperator controller ... 2025-08-13T20:05:47.437831297+00:00 stderr F I0813 20:05:47.437741 1 base_controller.go:73] Caches are synced for StatusSyncer_service-ca 2025-08-13T20:05:47.437999972+00:00 stderr F I0813 20:05:47.437834 1 base_controller.go:110] Starting #1 worker of StatusSyncer_service-ca controller ... 2025-08-13T20:05:47.807892884+00:00 stderr F I0813 20:05:47.807418 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:05:47.807892884+00:00 stderr F I0813 20:05:47.807476 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T20:08:49.261077294+00:00 stderr F E0813 20:08:49.260139 1 leaderelection.go:332] error retrieving resource lock openshift-service-ca-operator/service-ca-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca-operator/leases/service-ca-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.386102 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.385986 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.386126 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.386147 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410661849+00:00 stderr F I0813 20:42:36.396431 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410947878+00:00 stderr F I0813 20:42:36.410909 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.430056448+00:00 stderr F I0813 20:42:36.429968 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446965186+00:00 stderr F I0813 20:42:36.444700 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446965186+00:00 stderr F I0813 20:42:36.444988 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446965186+00:00 stderr F I0813 20:42:36.445099 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.446965186+00:00 stderr F I0813 
20:42:36.445265 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454712929+00:00 stderr F I0813 20:42:36.453927 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454983407+00:00 stderr F I0813 20:42:36.454916 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.507069389+00:00 stderr F I0813 20:42:36.503562 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516213932+00:00 stderr F I0813 20:42:36.515748 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531605046+00:00 stderr F I0813 20:42:36.516940 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531735290+00:00 stderr F I0813 20:42:36.516985 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538946848+00:00 stderr F I0813 20:42:36.519622 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.538946848+00:00 stderr F I0813 20:42:36.519747 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:40.273148956+00:00 stderr F I0813 20:42:40.269755 1 cmd.go:121] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:40.275012839+00:00 stderr F I0813 20:42:40.274951 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:40.275037130+00:00 stderr F I0813 20:42:40.275010 1 base_controller.go:172] Shutting down StatusSyncer_service-ca ... 
2025-08-13T20:42:40.275901815+00:00 stderr F I0813 20:42:40.275017 1 base_controller.go:150] All StatusSyncer_service-ca post start hooks have been terminated 2025-08-13T20:42:40.275901815+00:00 stderr F I0813 20:42:40.275890 1 base_controller.go:172] Shutting down ServiceCAOperator ... 2025-08-13T20:42:40.276134082+00:00 stderr F I0813 20:42:40.274141 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.281716873+00:00 stderr F I0813 20:42:40.281678 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:40.281880867+00:00 stderr F I0813 20:42:40.281865 1 genericapiserver.go:605] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:40.282949078+00:00 stderr F I0813 20:42:40.281702 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.282949078+00:00 stderr F I0813 20:42:40.281741 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:40.282949078+00:00 stderr F I0813 20:42:40.282934 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.273874 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.281762 1 base_controller.go:114] Shutting down worker of ServiceCAOperator controller ... 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.283146 1 base_controller.go:104] All ServiceCAOperator workers have been terminated 2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.281754 1 base_controller.go:114] Shutting down worker of StatusSyncer_service-ca controller ... 
2025-08-13T20:42:40.283165404+00:00 stderr F I0813 20:42:40.283157 1 base_controller.go:104] All StatusSyncer_service-ca workers have been terminated 2025-08-13T20:42:40.283275118+00:00 stderr F I0813 20:42:40.281769 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:42:40.283275118+00:00 stderr F I0813 20:42:40.283200 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:40.284330578+00:00 stderr F I0813 20:42:40.284206 1 genericapiserver.go:639] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:40.285855702+00:00 stderr F I0813 20:42:40.285683 1 secure_serving.go:255] Stopped listening on [::]:8443 2025-08-13T20:42:40.285855702+00:00 stderr F I0813 20:42:40.285725 1 genericapiserver.go:588] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:40.285855702+00:00 stderr F I0813 20:42:40.285838 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:40.285946314+00:00 stderr F I0813 20:42:40.285886 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:40.287334595+00:00 stderr F I0813 20:42:40.287275 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:40.287426957+00:00 stderr F I0813 20:42:40.287375 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:40.287506089+00:00 stderr F I0813 20:42:40.287457 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:42:40.287506089+00:00 stderr F I0813 20:42:40.287477 1 genericapiserver.go:630] [graceful-termination] in-flight non long-running 
request(s) have drained 2025-08-13T20:42:40.287523390+00:00 stderr F I0813 20:42:40.287504 1 genericapiserver.go:671] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:40.287700335+00:00 stderr F I0813 20:42:40.285932 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:40.288313053+00:00 stderr F I0813 20:42:40.288252 1 genericapiserver.go:701] [graceful-termination] apiserver is exiting 2025-08-13T20:42:40.289258460+00:00 stderr F I0813 20:42:40.289175 1 builder.go:302] server exited 2025-08-13T20:42:40.290335971+00:00 stderr F E0813 20:42:40.290246 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca-operator/leases/service-ca-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.291335530+00:00 stderr F I0813 20:42:40.291154 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-lock", UID:"76f50757-9318-4fb1-b428-2b64c99787ea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"37355", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-operator-546b4f8984-pwccz_cd77d967-43e4-48e5-af41-05935a4b105a stopped leading 2025-08-13T20:42:40.292880134+00:00 stderr F W0813 20:42:40.292702 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator/0.log
2025-08-13T19:59:15.609666557+00:00 stderr F I0813 19:59:15.581369 1 cmd.go:233] Using 
service-serving-cert provided certificates 2025-08-13T19:59:15.609666557+00:00 stderr F I0813 19:59:15.596283 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:15.899439107+00:00 stderr F I0813 19:59:15.895681 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:17.925502651+00:00 stderr F I0813 19:59:17.918436 1 builder.go:271] service-ca-operator version v4.16.0-202406131906.p0.g538c7b9.assembly.stream.el9-0-g6d77dd5- 2025-08-13T19:59:19.416925586+00:00 stderr F I0813 19:59:19.416426 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:27.199472670+00:00 stderr F I0813 19:59:27.164373 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:27.361919811+00:00 stderr F I0813 19:59:27.359754 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:27.361919811+00:00 stderr F I0813 19:59:27.359870 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:27.361919811+00:00 stderr F I0813 19:59:27.359974 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T19:59:27.361919811+00:00 stderr F I0813 19:59:27.359983 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T19:59:27.539080441+00:00 stderr F I0813 19:59:27.537724 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:27.539080441+00:00 stderr F W0813 19:59:27.538022 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:27.539080441+00:00 stderr F W0813 19:59:27.538031 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:27.544388722+00:00 stderr F I0813 19:59:27.541588 1 genericapiserver.go:525] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T19:59:27.754735019+00:00 stderr F I0813 19:59:27.754668 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:27.783809777+00:00 stderr F I0813 19:59:27.782997 1 leaderelection.go:250] attempting to acquire leader lease openshift-service-ca-operator/service-ca-operator-lock... 2025-08-13T19:59:27.811830216+00:00 stderr F I0813 19:59:27.811703 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 19:59:27.811669361 +0000 UTC))" 2025-08-13T19:59:27.825500576+00:00 stderr F I0813 19:59:27.825462 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:27.825998880+00:00 stderr F I0813 19:59:27.825978 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:27.836639763+00:00 stderr F I0813 19:59:27.836563 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:27.837165378+00:00 stderr F I0813 19:59:27.837143 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:27.838887667+00:00 stderr F I0813 19:59:27.838569 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:27.838887667+00:00 stderr F I0813 19:59:27.838654 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:28.034964727+00:00 stderr F I0813 19:59:28.032713 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:28.069188012+00:00 stderr F I0813 19:59:28.053879 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 19:59:27.812142195 +0000 UTC))" 2025-08-13T19:59:28.069188012+00:00 stderr F I0813 19:59:28.054053 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T19:59:28.069188012+00:00 stderr F I0813 19:59:28.054180 1 genericapiserver.go:673] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T19:59:28.069188012+00:00 stderr F I0813 19:59:28.054368 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:28.207045091+00:00 stderr F I0813 19:59:28.204297 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:28.327321909+00:00 stderr F I0813 19:59:28.291761 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:28.425916980+00:00 stderr 
F E0813 19:59:28.423919 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.425916980+00:00 stderr F E0813 19:59:28.424055 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.425916980+00:00 stderr F I0813 19:59:28.425035 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:28.425002914 +0000 UTC))" 2025-08-13T19:59:28.425916980+00:00 stderr F I0813 19:59:28.425070 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:28.425050155 +0000 UTC))" 2025-08-13T19:59:28.436945004+00:00 stderr F E0813 19:59:28.436254 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.508061931+00:00 stderr F E0813 19:59:28.507546 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:28.511589642+00:00 stderr F I0813 19:59:28.511486 1 
shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:28.523985505+00:00 stderr F E0813 19:59:28.522980 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:28.523985505+00:00 stderr F E0813 19:59:28.523131 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:28.533477246+00:00 stderr F I0813 19:59:28.533326 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:28.425082696 +0000 UTC))"
2025-08-13T19:59:28.533477246+00:00 stderr F I0813 19:59:28.533400 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:28.533377223 +0000 UTC))"
2025-08-13T19:59:28.534188476+00:00 stderr F I0813 19:59:28.533661 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:28.533417554 +0000 UTC))"
2025-08-13T19:59:28.534188476+00:00 stderr F I0813 19:59:28.533718 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:28.533696162 +0000 UTC))"
2025-08-13T19:59:28.534188476+00:00 stderr F I0813 19:59:28.533754 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:28.533733783 +0000 UTC))"
2025-08-13T19:59:28.534188476+00:00 stderr F I0813 19:59:28.533870 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:28.533761984 +0000 UTC))"
2025-08-13T19:59:28.534476044+00:00 stderr F I0813 19:59:28.534395 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 19:59:28.534367511 +0000 UTC))"
2025-08-13T19:59:28.562541554+00:00 stderr F E0813 19:59:28.559065 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:28.562541554+00:00 stderr F E0813 19:59:28.559198 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:28.600922078+00:00 stderr F E0813 19:59:28.600126 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:28.629967166+00:00 stderr F E0813 19:59:28.627224 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:28.687441405+00:00 stderr F E0813 19:59:28.686150 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:28.708043192+00:00 stderr F E0813 19:59:28.707397 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:28.756934746+00:00 stderr F I0813 19:59:28.756063 1 leaderelection.go:260] successfully acquired lease openshift-service-ca-operator/service-ca-operator-lock
2025-08-13T19:59:28.776867694+00:00 stderr F I0813 19:59:28.758535 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator-lock", UID:"76f50757-9318-4fb1-b428-2b64c99787ea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28107", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-operator-546b4f8984-pwccz_c8767c2b-8c87-42c2-ac6c-176a996e5e2c became leader
2025-08-13T19:59:28.819010675+00:00 stderr F I0813 19:59:28.813946 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 19:59:28.534766172 +0000 UTC))"
2025-08-13T19:59:28.894882238+00:00 stderr F I0813 19:59:28.877043 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-08-13T19:59:28.910128372+00:00 stderr F E0813 19:59:28.901710 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:28.910128372+00:00 stderr F E0813 19:59:28.901828 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:28.951357188+00:00 stderr F I0813 19:59:28.951299 1 base_controller.go:67] Waiting for caches to sync for ServiceCAOperator
2025-08-13T19:59:28.951456871+00:00 stderr F I0813 19:59:28.951439 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:59:28.951975935+00:00 stderr F I0813 19:59:28.951949 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_service-ca
2025-08-13T19:59:31.323743733+00:00 stderr F I0813 19:59:31.292566 1 request.go:697] Waited for 2.247873836s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets?limit=500&resourceVersion=0
2025-08-13T19:59:31.532284157+00:00 stderr F I0813 19:59:31.453284 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T19:59:31.532284157+00:00 stderr F I0813 19:59:31.532072 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T19:59:33.853457023+00:00 stderr F E0813 19:59:33.847947 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:33.853457023+00:00 stderr F E0813 19:59:33.848557 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:34.226722143+00:00 stderr F I0813 19:59:34.154580 1 base_controller.go:73] Caches are synced for StatusSyncer_service-ca
2025-08-13T19:59:34.226722143+00:00 stderr F I0813 19:59:34.154670 1 base_controller.go:110] Starting #1 worker of StatusSyncer_service-ca controller ...
2025-08-13T19:59:34.226722143+00:00 stderr F I0813 19:59:34.155324 1 base_controller.go:73] Caches are synced for ServiceCAOperator
2025-08-13T19:59:34.226722143+00:00 stderr F I0813 19:59:34.155334 1 base_controller.go:110] Starting #1 worker of ServiceCAOperator controller ...
2025-08-13T19:59:34.504016287+00:00 stderr F E0813 19:59:34.497677 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:34.504016287+00:00 stderr F E0813 19:59:34.497764 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.801623925+00:00 stderr F E0813 19:59:35.778348 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.825044193+00:00 stderr F E0813 19:59:35.824071 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.955154397+00:00 stderr F I0813 19:59:36.955035 1 rotate.go:99] Rotating service CA due to the CA being past the mid-point of its validity.
2025-08-13T19:59:38.336152503+00:00 stderr F I0813 19:59:38.333455 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-08-13T19:59:38.336152503+00:00 stderr F I0813 19:59:38.334696 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-08-13T19:59:38.364878932+00:00 stderr F E0813 19:59:38.364613 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:38.386488598+00:00 stderr F E0813 19:59:38.385301 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:39.234761047+00:00 stderr F I0813 19:59:39.234566 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/signing-key -n openshift-service-ca because it changed
2025-08-13T19:59:39.234761047+00:00 stderr F I0813 19:59:39.234628 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ServiceCARotated' Rotating service CA due to the CA being past the mid-point of its validity.
2025-08-13T19:59:39.234761047+00:00 stderr F The previous CA will be trusted until 2026-09-12T19:59:38Z
2025-08-13T19:59:39.508502940+00:00 stderr F I0813 19:59:39.461379 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/service-ca -n openshift-config-managed:
2025-08-13T19:59:39.508502940+00:00 stderr F cause by changes in data.ca-bundle.crt
2025-08-13T19:59:39.508502940+00:00 stderr F I0813 19:59:39.477989 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/signing-cabundle -n openshift-service-ca:
2025-08-13T19:59:39.508502940+00:00 stderr F cause by changes in data.ca-bundle.crt
2025-08-13T19:59:39.756660394+00:00 stderr F I0813 19:59:39.756568 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/service-ca -n openshift-service-ca because it changed
2025-08-13T19:59:39.943240583+00:00 stderr F I0813 19:59:39.941911 1 status_controller.go:213] clusteroperator/service-ca diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:15Z","message":"Progressing: \nProgressing: service-ca is updating","reason":"_ManagedDeploymentsAvailable","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
2025-08-13T19:59:40.032656072+00:00 stderr F I0813 19:59:40.032507 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/service-ca changed: Progressing message changed from "Progressing: \nProgressing: service-ca does not have available replicas" to "Progressing: \nProgressing: service-ca is updating"
2025-08-13T19:59:40.563249286+00:00 stderr F I0813 19:59:40.559852 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/service-ca -n openshift-service-ca because it changed
2025-08-13T19:59:40.887673554+00:00 stderr F I0813 19:59:40.887078 1 status_controller.go:213] clusteroperator/service-ca diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:34:15Z","message":"Progressing: \nProgressing: service-ca does not have available replicas","reason":"_ManagedDeploymentsAvailable","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
2025-08-13T19:59:41.121658674+00:00 stderr F I0813 19:59:41.119997 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/service-ca changed: Progressing message changed from "Progressing: \nProgressing: service-ca is updating" to "Progressing: \nProgressing: service-ca does not have available replicas"
2025-08-13T19:59:43.486698869+00:00 stderr F E0813 19:59:43.485454 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:43.520859453+00:00 stderr F E0813 19:59:43.515370 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:51.907020147+00:00 stderr F I0813 19:59:51.906115 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.905972337 +0000 UTC))"
2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916595 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.906769359 +0000 UTC))"
2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916660 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.916633141 +0000 UTC))"
2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916891 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.916724353 +0000 UTC))"
2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916936 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.916913229 +0000 UTC))"
2025-08-13T19:59:51.917932748+00:00 stderr F I0813 19:59:51.916967 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.91694589 +0000 UTC))"
2025-08-13T19:59:51.942556640+00:00 stderr F I0813 19:59:51.942353 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.942279082 +0000 UTC))"
2025-08-13T19:59:51.942556640+00:00 stderr F I0813 19:59:51.942455 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.942436646 +0000 UTC))"
2025-08-13T19:59:51.942556640+00:00 stderr F I0813 19:59:51.942475 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.942462417 +0000 UTC))"
2025-08-13T19:59:51.945436062+00:00 stderr F I0813 19:59:51.945304 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 19:59:51.945250866 +0000 UTC))"
2025-08-13T19:59:51.955585631+00:00 stderr F I0813 19:59:51.955361 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 19:59:51.95521261 +0000 UTC))"
2025-08-13T19:59:52.001613473+00:00 stderr F I0813 19:59:51.999483 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T19:59:53.136393990+00:00 stderr F I0813 19:59:53.135266 1 status_controller.go:213] clusteroperator/service-ca diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T19:59:53Z","message":"Progressing: All service-ca-operator deployments updated","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:06Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
2025-08-13T19:59:53.166916290+00:00 stderr F I0813 19:59:53.166657 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca-operator", Name:"service-ca-operator", UID:"4e10c137-983b-49c4-b977-7d19390e427b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated")
2025-08-13T20:00:05.796004788+00:00 stderr F I0813 20:00:05.775547 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.775474452 +0000 UTC))"
2025-08-13T20:00:05.796196303+00:00 stderr F I0813 20:00:05.796174 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.796110161 +0000 UTC))"
2025-08-13T20:00:05.796261825+00:00 stderr F I0813 20:00:05.796245 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.796221094 +0000 UTC))"
2025-08-13T20:00:05.796323487+00:00 stderr F I0813 20:00:05.796308 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.796285846 +0000 UTC))"
2025-08-13T20:00:05.796417939+00:00 stderr F I0813 20:00:05.796360 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.796344797 +0000 UTC))"
2025-08-13T20:00:05.796482201+00:00 stderr F I0813 20:00:05.796468 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.79644076 +0000 UTC))"
2025-08-13T20:00:05.796551873+00:00 stderr F I0813 20:00:05.796533 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.796501622 +0000 UTC))"
2025-08-13T20:00:05.796614985+00:00 stderr F I0813 20:00:05.796599 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.796577774 +0000 UTC))"
2025-08-13T20:00:05.796687467+00:00 stderr F I0813 20:00:05.796674 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.796658416 +0000 UTC))"
2025-08-13T20:00:05.796747519+00:00 stderr F I0813 20:00:05.796732 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.796708588 +0000 UTC))"
2025-08-13T20:00:05.797270254+00:00 stderr F I0813 20:00:05.797212 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 20:00:05.797190811 +0000 UTC))"
2025-08-13T20:00:05.797677035+00:00 stderr F I0813 20:00:05.797646 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 20:00:05.797621284 +0000 UTC))"
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.968609 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.968081498 +0000 UTC))"
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997119 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.997060114 +0000 UTC))"
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997182 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.997132747 +0000 UTC))"
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997201 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.997189298 +0000 UTC))"
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997284 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997211129 +0000 UTC))"
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997369 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997293401 +0000 UTC))"
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997459 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997377464 +0000 UTC))"
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997485 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997469856 +0000 UTC))"
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997511 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.997495797 +0000 UTC))"
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997553 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.997524678 +0000 UTC))"
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.997573 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997561949 +0000 UTC))"
2025-08-13T20:01:00.000684918+00:00 stderr F I0813 20:00:59.998251 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:19 +0000 UTC to 2026-06-26 12:47:20 +0000 UTC (now=2025-08-13 20:00:59.998212147 +0000 UTC))"
2025-08-13T20:01:00.024263490+00:00 stderr F I0813 20:01:00.015459 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 20:01:00.015395877 +0000 UTC))"
2025-08-13T20:01:31.257242288+00:00 stderr F I0813 20:01:31.255926 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="can't remove non-existent watcher: /var/run/secrets/serving-cert/tls.key"
2025-08-13T20:01:31.257353131+00:00 stderr F I0813 20:01:31.257289 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="can't remove non-existent watcher: /var/run/secrets/serving-cert/tls.crt"
2025-08-13T20:01:31.258978588+00:00 stderr F I0813 20:01:31.258095 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259200 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:31.259092921 +0000 UTC))"
2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259309 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:31.259262486 +0000 UTC))"
2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259346 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:31.259318087 +0000 UTC))"
2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259371 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:31.259352518 +0000 UTC))"
2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259397 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.259385379 +0000 UTC))"
2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259415 1 tlsconfig.go:178] "Loaded
client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.25940386 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259432 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.25942063 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259483 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.259437051 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259500 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:31.259488302 +0000 UTC))" 2025-08-13T20:01:31.259600666+00:00 stderr F I0813 20:01:31.259542 1 
tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:31.259529983 +0000 UTC))" 2025-08-13T20:01:31.259630776+00:00 stderr F I0813 20:01:31.259608 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:31.259554514 +0000 UTC))" 2025-08-13T20:01:31.260928813+00:00 stderr F I0813 20:01:31.260054 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-service-ca-operator.svc\" [serving] validServingFor=[metrics.openshift-service-ca-operator.svc,metrics.openshift-service-ca-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:47 +0000 UTC to 2027-08-13 20:00:48 +0000 UTC (now=2025-08-13 20:01:31.260027158 +0000 UTC))" 2025-08-13T20:01:31.260928813+00:00 stderr F I0813 20:01:31.260391 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115166\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:19 +0000 UTC to 2026-08-13 18:59:19 +0000 UTC (now=2025-08-13 20:01:31.260373778 +0000 UTC))" 2025-08-13T20:01:36.007462308+00:00 stderr F I0813 20:01:36.007012 1 observer_polling.go:120] Observed file 
"/var/run/secrets/serving-cert/tls.crt" has been modified (old="f269835458ddba2137165862fff7fd217525c9e40a9c6f489e3967be4db60372", new="cfa284ff196c7f240b2c9d12ecb58ca59660cd15e39fec28f4ebbfa7f9c29fb3") 2025-08-13T20:01:36.007530580+00:00 stderr F W0813 20:01:36.007467 1 builder.go:132] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:36.007639663+00:00 stderr F I0813 20:01:36.007582 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="6f1dcb91ce339e53ed60d5e5e4ed5a83d1b7ad3e174dad355b404f5099a93f97", new="31318292ca2ee5278a39f1090b20b1f370fcbc25269641a3cee345c20dd36b58") 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.008876 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.008946 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.008994 1 genericapiserver.go:605] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009336 1 genericapiserver.go:639] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009375 1 genericapiserver.go:630] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009395 1 genericapiserver.go:671] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009449 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:36.009951309+00:00 stderr F I0813 20:01:36.009486 1 
object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011416 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011631 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011682 1 base_controller.go:172] Shutting down ServiceCAOperator ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011699 1 base_controller.go:172] Shutting down StatusSyncer_service-ca ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012644 1 base_controller.go:150] All StatusSyncer_service-ca post start hooks have been terminated 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.011709 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012354 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012677 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012363 1 base_controller.go:114] Shutting down worker of ServiceCAOperator controller ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012712 1 base_controller.go:104] All ServiceCAOperator workers have been terminated 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012569 1 base_controller.go:114] Shutting down worker of StatusSyncer_service-ca controller ... 
2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012762 1 base_controller.go:104] All StatusSyncer_service-ca workers have been terminated 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012813 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012888 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012916 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:36.013199231+00:00 stderr F I0813 20:01:36.012923 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:36.013602943+00:00 stderr F I0813 20:01:36.013542 1 secure_serving.go:255] Stopped listening on [::]:8443 2025-08-13T20:01:36.013618823+00:00 stderr F I0813 20:01:36.013595 1 genericapiserver.go:588] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:36.013618823+00:00 stderr F I0813 20:01:36.013613 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:36.014059546+00:00 stderr F I0813 20:01:36.014014 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:36.014082177+00:00 stderr F I0813 20:01:36.014063 1 genericapiserver.go:701] [graceful-termination] apiserver is exiting 2025-08-13T20:01:36.014145768+00:00 stderr F I0813 20:01:36.014102 1 builder.go:302] server exited 2025-08-13T20:01:40.236433072+00:00 stderr F W0813 20:01:40.231558 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/1.log
2025-08-13T20:00:59.204934877+00:00 stderr F I0813 20:00:59.201500 1 profiler.go:21] Starting profiling endpoint at http://127.0.0.1:6060/debug/pprof/ 2025-08-13T20:00:59.221560951+00:00 stderr F I0813 20:00:59.220554 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:59.227991934+00:00 stderr F I0813 20:00:59.227901 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T20:00:59.230144035+00:00 stderr F I0813 20:00:59.229979 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T20:00:59.237435533+00:00 stderr F I0813 20:00:59.235747 1 observer_polling.go:159] Starting file observer 2025-08-13T20:01:06.985203130+00:00 stderr F I0813 20:01:06.952113 1 builder.go:298] openshift-cluster-etcd-operator version 4.16.0-202406131906.p0.gdc4f4e8.assembly.stream.el9-dc4f4e8-dc4f4e858ba8395dce6883242c7d12009685d145 2025-08-13T20:01:20.230947497+00:00 stderr F I0813 20:01:20.230035 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:20.230947497+00:00 stderr F W0813 20:01:20.230635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:20.230947497+00:00 stderr F W0813 20:01:20.230645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:20.949569307+00:00 stderr F I0813 20:01:20.949138 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:01:20.950898435+00:00 stderr F I0813 20:01:20.950870 1 leaderelection.go:250] attempting to acquire leader lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock... 
2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957052 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957150 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957211 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957229 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957253 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:20.960736406+00:00 stderr F I0813 20:01:20.957261 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:20.962225238+00:00 stderr F I0813 20:01:20.962192 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:01:20.962340592+00:00 stderr F I0813 20:01:20.962318 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:20.962574838+00:00 stderr F I0813 20:01:20.962552 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:21.071599137+00:00 stderr F I0813 20:01:21.071521 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:21.072017149+00:00 stderr F I0813 20:01:21.071742 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 
2025-08-13T20:01:21.072623786+00:00 stderr F I0813 20:01:21.071762 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:21.420290630+00:00 stderr F I0813 20:01:21.417525 1 leaderelection.go:260] successfully acquired lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock 2025-08-13T20:01:21.420502616+00:00 stderr F I0813 20:01:21.420447 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-etcd-operator", Name:"openshift-cluster-etcd-operator-lock", UID:"1b330a9d-47ca-437b-91e1-6481a2280da8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"30429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' etcd-operator-768d5b5d86-722mg_41a3e9d3-dcbb-4c99-b08b-92fe107d13b5 became leader 2025-08-13T20:01:21.693231682+00:00 stderr F I0813 20:01:21.693176 1 starter.go:166] recorded cluster versions: map[etcd:4.16.0 operator:4.16.0 raw-internal:4.16.0] 2025-08-13T20:01:21.803696622+00:00 stderr F I0813 20:01:21.803631 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:01:22.115996337+00:00 stderr F I0813 20:01:22.114639 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", 
"NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:01:22.115996337+00:00 stderr F I0813 20:01:22.114639 1 starter.go:445] FeatureGates initializedenabled[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration 
VSphereStaticIPs]disabled[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:01:22.115996337+00:00 stderr F I0813 20:01:22.114760 1 starter.go:499] waiting for cluster version informer sync... 2025-08-13T20:01:22.835208314+00:00 stderr F I0813 20:01:22.833505 1 starter.go:522] Detected available machine API, starting vertical scaling related controllers and informers... 
2025-08-13T20:01:22.835208314+00:00 stderr F I0813 20:01:22.834925 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T20:01:22.836957933+00:00 stderr F I0813 20:01:22.835586 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberRemovalController 2025-08-13T20:01:22.836957933+00:00 stderr F I0813 20:01:22.835631 1 base_controller.go:67] Waiting for caches to sync for MachineDeletionHooksController 2025-08-13T20:01:22.836957933+00:00 stderr F I0813 20:01:22.835942 1 base_controller.go:67] Waiting for caches to sync for EtcdCertSignerController 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838357 1 base_controller.go:67] Waiting for caches to sync for FSyncController 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838389 1 base_controller.go:73] Caches are synced for FSyncController 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838398 1 base_controller.go:110] Starting #1 worker of FSyncController controller ... 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838614 1 base_controller.go:67] Waiting for caches to sync for EtcdStaticResources 2025-08-13T20:01:22.838859178+00:00 stderr F I0813 20:01:22.838633 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843569 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843631 1 base_controller.go:67] Waiting for caches to sync for EtcdCertCleanerController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843638 1 base_controller.go:73] Caches are synced for EtcdCertCleanerController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843644 1 base_controller.go:110] Starting #1 worker of EtcdCertCleanerController controller ... 
2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843715 1 base_controller.go:67] Waiting for caches to sync for EtcdEndpointsController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.843734 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844119 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844149 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844166 1 base_controller.go:67] Waiting for caches to sync for EtcdMembersController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844171 1 base_controller.go:73] Caches are synced for EtcdMembersController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844176 1 base_controller.go:110] Starting #1 worker of EtcdMembersController controller ... 
2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844207 1 base_controller.go:67] Waiting for caches to sync for BootstrapTeardownController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844222 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844905 1 base_controller.go:67] Waiting for caches to sync for ScriptController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844929 1 base_controller.go:67] Waiting for caches to sync for DefragController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844961 1 envvarcontroller.go:193] Starting EnvVarController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.844980 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:01:22.845886058+00:00 stderr F E0813 20:01:22.845191 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.845358 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.845681 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T20:01:22.845886058+00:00 stderr F I0813 20:01:22.845700 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.846999 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.847159 1 base_controller.go:67] Waiting 
for caches to sync for BackingResourceController 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.847181 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.847196 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:01:22.847932026+00:00 stderr F I0813 20:01:22.847225 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T20:01:22.851604501+00:00 stderr F I0813 20:01:22.851481 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T20:01:22.851868739+00:00 stderr F E0813 20:01:22.851701 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T20:01:22.852231279+00:00 stderr F I0813 20:01:22.852034 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found 2025-08-13T20:01:22.866823725+00:00 stderr F I0813 20:01:22.865982 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_etcd 2025-08-13T20:01:22.866823725+00:00 stderr F I0813 20:01:22.866018 1 base_controller.go:73] Caches are synced for StatusSyncer_etcd 2025-08-13T20:01:22.866823725+00:00 stderr F I0813 20:01:22.866027 1 base_controller.go:110] Starting #1 worker of StatusSyncer_etcd controller ... 
2025-08-13T20:01:22.907728821+00:00 stderr F E0813 20:01:22.907578 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:22.945245131+00:00 stderr F I0813 20:01:22.945145 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:01:22.945245131+00:00 stderr F I0813 20:01:22.945194 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:22.945523019+00:00 stderr F I0813 20:01:22.945492 1 base_controller.go:73] Caches are synced for DefragController 2025-08-13T20:01:22.945563420+00:00 stderr F I0813 20:01:22.945549 1 base_controller.go:110] Starting #1 worker of DefragController controller ... 2025-08-13T20:01:22.947247808+00:00 stderr F I0813 20:01:22.947181 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T20:01:22.947302060+00:00 stderr F I0813 20:01:22.947287 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T20:01:22.948105423+00:00 stderr F I0813 20:01:22.948050 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T20:01:22.948105423+00:00 stderr F I0813 20:01:22.948083 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T20:01:22.948126773+00:00 stderr F I0813 20:01:22.948109 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:01:22.948126773+00:00 stderr F I0813 20:01:22.948114 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 
2025-08-13T20:01:22.948223566+00:00 stderr F I0813 20:01:22.948206 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:01:22.948262877+00:00 stderr F I0813 20:01:22.948248 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:01:22.952234560+00:00 stderr F I0813 20:01:22.952136 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T20:01:22.952234560+00:00 stderr F I0813 20:01:22.952166 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T20:01:22.952550329+00:00 stderr F I0813 20:01:22.952431 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:22.956176783+00:00 stderr F E0813 20:01:22.956032 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:22.964645374+00:00 stderr F I0813 20:01:22.964470 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:22.988731301+00:00 stderr F E0813 20:01:22.988570 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.022307729+00:00 stderr F E0813 20:01:23.022169 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.040754585+00:00 stderr F I0813 20:01:23.040666 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.045177371+00:00 stderr F I0813 20:01:23.045113 1 base_controller.go:73] 
Caches are synced for ScriptController 2025-08-13T20:01:23.045177371+00:00 stderr F I0813 20:01:23.045158 1 base_controller.go:110] Starting #1 worker of ScriptController controller ... 2025-08-13T20:01:23.045208242+00:00 stderr F I0813 20:01:23.045200 1 envvarcontroller.go:199] caches synced 2025-08-13T20:01:23.082200166+00:00 stderr F E0813 20:01:23.082082 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.183869165+00:00 stderr F E0813 20:01:23.183735 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.240423948+00:00 stderr F I0813 20:01:23.240267 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.244659499+00:00 stderr F I0813 20:01:23.244611 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T20:01:23.244714180+00:00 stderr F I0813 20:01:23.244699 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T20:01:23.253151011+00:00 stderr F I0813 20:01:23.248015 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T20:01:23.253151011+00:00 stderr F I0813 20:01:23.248116 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 
2025-08-13T20:01:23.253151011+00:00 stderr F I0813 20:01:23.252637 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3 2025-08-13T20:01:23.253223563+00:00 stderr F I0813 20:01:23.253148 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T20:01:23.253223563+00:00 stderr F I0813 20:01:23.253159 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T20:01:23.253223563+00:00 stderr F I0813 20:01:23.253185 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T20:01:23.253223563+00:00 stderr F I0813 20:01:23.253189 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 
2025-08-13T20:01:23.363347963+00:00 stderr F E0813 20:01:23.363263 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.475565093+00:00 stderr F I0813 20:01:23.475449 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.478628840+00:00 stderr F E0813 20:01:23.478489 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced 2025-08-13T20:01:23.480253096+00:00 stderr F I0813 20:01:23.480151 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:23.481074550+00:00 stderr F I0813 20:01:23.481010 1 prune_controller.go:269] Nothing to prune 
2025-08-13T20:01:23.539188437+00:00 stderr F I0813 20:01:23.539021 1 base_controller.go:73] Caches are synced for EtcdStaticResources 2025-08-13T20:01:23.539188437+00:00 stderr F I0813 20:01:23.539124 1 base_controller.go:110] Starting #1 worker of EtcdStaticResources controller ... 2025-08-13T20:01:23.671176050+00:00 stderr F I0813 20:01:23.661078 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.711893141+00:00 stderr F E0813 20:01:23.710921 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:23.738303444+00:00 stderr F I0813 20:01:23.737021 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T20:01:23.738303444+00:00 stderr F I0813 20:01:23.737068 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T20:01:23.738303444+00:00 stderr F I0813 20:01:23.737113 1 base_controller.go:73] Caches are synced for ClusterMemberRemovalController 2025-08-13T20:01:23.738303444+00:00 stderr F I0813 20:01:23.737119 1 base_controller.go:110] Starting #1 worker of ClusterMemberRemovalController controller ... 2025-08-13T20:01:23.739571481+00:00 stderr F I0813 20:01:23.739135 1 base_controller.go:73] Caches are synced for EtcdCertSignerController 2025-08-13T20:01:23.739571481+00:00 stderr F I0813 20:01:23.739217 1 base_controller.go:110] Starting #1 worker of EtcdCertSignerController controller ... 2025-08-13T20:01:23.739752336+00:00 stderr F I0813 20:01:23.739723 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T20:01:23.739863029+00:00 stderr F I0813 20:01:23.739821 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 
2025-08-13T20:01:23.740017183+00:00 stderr F E0813 20:01:23.739997 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.745144849+00:00 stderr F I0813 20:01:23.745064 1 base_controller.go:73] Caches are synced for EtcdEndpointsController 2025-08-13T20:01:23.745144849+00:00 stderr F I0813 20:01:23.745101 1 base_controller.go:110] Starting #1 worker of EtcdEndpointsController controller ... 2025-08-13T20:01:23.745293434+00:00 stderr F I0813 20:01:23.745208 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:23.745365916+00:00 stderr F I0813 20:01:23.745308 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:01:23.745365916+00:00 stderr F I0813 20:01:23.745348 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T20:01:23.751584763+00:00 stderr F I0813 20:01:23.746990 1 base_controller.go:73] Caches are synced for ClusterMemberController 2025-08-13T20:01:23.751584763+00:00 stderr F I0813 20:01:23.747024 1 base_controller.go:110] Starting #1 worker of ClusterMemberController controller ... 
2025-08-13T20:01:23.751584763+00:00 stderr F I0813 20:01:23.747092 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:23.760225729+00:00 stderr F E0813 20:01:23.760115 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.770430700+00:00 stderr F E0813 20:01:23.770333 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.794754584+00:00 stderr F E0813 20:01:23.794566 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.837589226+00:00 stderr F E0813 20:01:23.835259 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.845963064+00:00 stderr F I0813 20:01:23.841472 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:23.876753702+00:00 stderr F E0813 20:01:23.876682 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:23.887741586+00:00 stderr F E0813 20:01:23.886505 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.903355931+00:00 stderr F I0813 20:01:23.893974 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:23.905662336+00:00 stderr F I0813 20:01:23.905547 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No 
unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T20:01:23.909276520+00:00 stderr F I0813 20:01:23.905930 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:23.917638608+00:00 stderr F E0813 20:01:23.917231 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:23.984460943+00:00 stderr F I0813 20:01:23.984397 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:24.011359700+00:00 stderr F E0813 20:01:24.008314 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: configmaps lister not synced 
2025-08-13T20:01:24.011359700+00:00 stderr F E0813 20:01:24.009051 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController missing env var values 2025-08-13T20:01:24.012053910+00:00 stderr F I0813 20:01:24.012017 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T20:01:24.030213498+00:00 stderr F E0813 20:01:24.026310 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.030213498+00:00 stderr F E0813 20:01:24.026659 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.038711430+00:00 stderr F I0813 20:01:24.037373 1 request.go:697] Waited for 1.198785213s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/secrets?limit=500&resourceVersion=0 2025-08-13T20:01:24.042444197+00:00 stderr F E0813 20:01:24.041417 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.043651721+00:00 stderr F I0813 20:01:24.043530 1 status_controller.go:218] 
clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:24.046381469+00:00 stderr F I0813 20:01:24.043917 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:24.048881210+00:00 stderr F I0813 20:01:24.048655 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:24.086625656+00:00 stderr F E0813 20:01:24.086488 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.169315404+00:00 stderr F E0813 20:01:24.169212 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.240874095+00:00 stderr F I0813 20:01:24.239677 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244433 1 base_controller.go:73] Caches are synced for BootstrapTeardownController 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244464 1 base_controller.go:110] Starting #1 worker of BootstrapTeardownController controller ... 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244495 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244500 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:01:24.244991062+00:00 stderr F I0813 20:01:24.244696 1 etcdcli_pool.go:70] creating a new cached client 2025-08-13T20:01:24.331661633+00:00 stderr F E0813 20:01:24.330450 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:24.337326115+00:00 stderr F I0813 20:01:24.336511 1 base_controller.go:73] Caches are synced for MachineDeletionHooksController 2025-08-13T20:01:24.337326115+00:00 stderr F I0813 20:01:24.336565 1 base_controller.go:110] Starting #1 worker of MachineDeletionHooksController controller ... 2025-08-13T20:01:24.359629601+00:00 stderr F E0813 20:01:24.358162 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:24.445604942+00:00 stderr F I0813 20:01:24.445404 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:01:24.551146152+00:00 stderr F I0813 20:01:24.547065 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:01:24.551146152+00:00 stderr F I0813 20:01:24.547110 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T20:01:24.652969385+00:00 stderr F E0813 20:01:24.652424 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:25.238667776+00:00 stderr F I0813 20:01:25.236481 1 request.go:697] Waited for 1.696867805s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd 2025-08-13T20:01:25.293606912+00:00 stderr F E0813 20:01:25.293553 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:25.678198389+00:00 stderr F E0813 20:01:25.678017 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:26.575472982+00:00 stderr F E0813 20:01:26.575394 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:28.261052465+00:00 stderr F E0813 20:01:28.258990 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:29.139588316+00:00 stderr F E0813 20:01:29.136873 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:29.233407391+00:00 stderr F I0813 20:01:29.233029 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:29.234221804+00:00 stderr F E0813 20:01:29.234128 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values 2025-08-13T20:01:29.261491642+00:00 stderr F E0813 20:01:29.261353 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: 
[configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3] 2025-08-13T20:01:29.263634923+00:00 stderr F E0813 20:01:29.262756 1 envvarcontroller.go:231] key failed with : configmap "etcd-endpoints" not found 2025-08-13T20:01:29.263896160+00:00 stderr F I0813 20:01:29.263863 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EnvVarControllerUpdatingStatus' Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:29.364256102+00:00 stderr F E0813 20:01:29.360701 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:29.364256102+00:00 stderr F I0813 20:01:29.361877 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: 
All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:31.538098707+00:00 stderr F I0813 20:01:31.502003 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members 
found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:31.538098707+00:00 stderr F I0813 20:01:31.502226 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T20:01:33.450883531+00:00 stderr F E0813 20:01:33.448764 1 base_controller.go:268] FSyncController reconciliation failed: 
Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:35.546125383+00:00 stderr F I0813 20:01:35.545906 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:36.913351769+00:00 stderr F E0813 20:01:36.909745 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:36.913351769+00:00 stderr F I0813 20:01:36.913015 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:42.986081825+00:00 stderr F I0813 20:01:42.983366 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:42.998117788+00:00 stderr F I0813 20:01:42.995881 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:43.031426538+00:00 stderr F I0813 20:01:43.030573 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not 
synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T20:01:43.705192780+00:00 stderr F E0813 20:01:43.705110 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:01:56.176110932+00:00 stderr F E0813 20:01:56.135500 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:56.176110932+00:00 stderr F I0813 20:01:56.146405 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members 
found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:56.176110932+00:00 stderr F I0813 20:01:56.168699 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:01:58.241888256+00:00 stderr F I0813 20:01:58.240961 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:01:58.244003966+00:00 stderr F I0813 
20:01:58.243317 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, configmaps: etcd-endpoints-3,etcd-metrics-proxy-client-ca-3,etcd-metrics-proxy-serving-ca-3,etcd-peer-client-ca-3,etcd-pod-3,etcd-serving-ca-3]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-08-13T20:01:59.464618610+00:00 stderr F E0813 20:01:59.454164 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:02:04.200344733+00:00 stderr F E0813 20:02:04.199129 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:02:22.853589283+00:00 stderr F E0813 20:02:22.852691 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:02:27.916072140+00:00 stderr F E0813 20:02:27.915438 1 
base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused - error from a previous attempt: dial tcp 10.217.4.1:443: connect: connection reset by peer, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:27.944047368+00:00 stderr F E0813 20:02:27.943599 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection 
refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:28.310388068+00:00 stderr F E0813 20:02:28.310273 1 base_controller.go:268] 
EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:29.121359753+00:00 stderr F E0813 20:02:29.121111 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:29.910865446+00:00 stderr F E0813 20:02:29.910411 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.712047130+00:00 stderr F E0813 20:02:30.711543 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:31.521644476+00:00 stderr F E0813 20:02:31.521476 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:31.904943841+00:00 stderr F E0813 20:02:31.904879 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error on serving cert sync for node crc: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/secrets/etcd-serving-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.905151887+00:00 stderr F I0813 20:02:31.905066 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.909971884+00:00 stderr F E0813 20:02:31.909916 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" 
event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:02:31.913650089+00:00 stderr F E0813 20:02:31.913236 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.914249526+00:00 stderr F I0813 20:02:31.914150 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.917480528+00:00 stderr F E0813 20:02:31.917423 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.917612332+00:00 stderr F I0813 20:02:31.917558 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.930318194+00:00 stderr F E0813 20:02:31.930241 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.930375156+00:00 stderr F I0813 20:02:31.930349 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.975917815+00:00 stderr F E0813 20:02:31.975647 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.975917815+00:00 stderr F I0813 20:02:31.975896 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.059535441+00:00 stderr F E0813 20:02:32.059417 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.059535441+00:00 stderr F I0813 20:02:32.059488 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.223615501+00:00 stderr F E0813 20:02:32.223472 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.223615501+00:00 stderr F I0813 20:02:32.223557 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.310012026+00:00 stderr F E0813 20:02:32.309952 1 base_controller.go:268] EtcdStaticResources 
reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": 
dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:32.548999514+00:00 stderr F E0813 20:02:32.548871 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.548999514+00:00 stderr F I0813 20:02:32.548968 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.111589303+00:00 stderr F E0813 20:02:33.111451 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:33.194473277+00:00 stderr F E0813 20:02:33.194344 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.194708674+00:00 stderr F I0813 20:02:33.194606 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.407945453+00:00 stderr F E0813 20:02:34.407394 1 
base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:34.481051079+00:00 stderr F E0813 20:02:34.480960 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.481220563+00:00 stderr F I0813 20:02:34.481118 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.983359442+00:00 stderr F E0813 20:02:36.983231 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:37.045231388+00:00 stderr F E0813 20:02:37.045168 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.045450154+00:00 stderr F I0813 20:02:37.045403 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.823228592+00:00 stderr F E0813 20:02:40.822097 1 
event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:02:42.119568465+00:00 stderr F E0813 20:02:42.119470 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, 
"etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.172278588+00:00 stderr F E0813 20:02:42.172223 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.172419712+00:00 stderr F I0813 20:02:42.172387 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' 
reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.181326239+00:00 stderr F E0813 20:02:45.181006 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:02:50.826440458+00:00 stderr F E0813 20:02:50.826086 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:02:52.378155643+00:00 stderr F E0813 20:02:52.378050 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: 
connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:52.440554073+00:00 stderr F E0813 20:02:52.440430 1 
base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.440626755+00:00 stderr F I0813 20:02:52.440586 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:00.828872299+00:00 stderr F E0813 20:03:00.828215 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:10.831582506+00:00 stderr F E0813 20:03:10.831026 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:12.883654656+00:00 stderr F E0813 20:03:12.883340 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 
10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:12.929214286+00:00 stderr F E0813 20:03:12.929073 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:12.929364070+00:00 stderr F I0813 20:03:12.929250 1 
event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:20.838115277+00:00 stderr F E0813 20:03:20.837631 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:22.859631275+00:00 stderr F E0813 20:03:22.859033 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 
2025-08-13T20:03:22.957431345+00:00 stderr F E0813 20:03:22.957273 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:22.967341118+00:00 stderr F E0813 20:03:22.967259 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:22.982678106+00:00 stderr F E0813 20:03:22.982628 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.008378959+00:00 stderr F E0813 20:03:23.008287 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.053133056+00:00 stderr F E0813 20:03:23.053046 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.053411613+00:00 stderr F I0813 20:03:23.053338 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.056392899+00:00 stderr F E0813 20:03:23.056266 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.057256643+00:00 stderr F E0813 20:03:23.057197 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.057374826+00:00 stderr F E0813 20:03:23.057317 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:23.057744617+00:00 stderr F E0813 20:03:23.057688 1 base_controller.go:266] "ScriptController" controller failed to sync "", 
err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.057887711+00:00 stderr F I0813 20:03:23.057738 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.062345748+00:00 stderr F E0813 20:03:23.062284 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.062372019+00:00 stderr F I0813 20:03:23.062353 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.356343475+00:00 stderr F E0813 20:03:23.356244 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.356714196+00:00 stderr F I0813 20:03:23.356653 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.557486883+00:00 stderr F E0813 20:03:23.557332 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:23.757595502+00:00 stderr F E0813 20:03:23.757413 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.757595502+00:00 stderr F I0813 20:03:23.757553 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:23.780075193+00:00 stderr F E0813 20:03:23.780005 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused
2025-08-13T20:03:23.786199928+00:00 stderr F E0813 20:03:23.786101 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:23.786246539+00:00 stderr F I0813 20:03:23.786202 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:23.787520656+00:00 stderr F E0813 20:03:23.787490 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}"
2025-08-13T20:03:23.957981659+00:00 stderr F E0813 20:03:23.957886 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:23.957981659+00:00 stderr F I0813 20:03:23.957943 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.154655539+00:00 stderr F E0813 20:03:24.154590 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.158468888+00:00 stderr F E0813 20:03:24.158439 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.357587489+00:00 stderr F E0813 20:03:24.357513 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.357626330+00:00 stderr F I0813 20:03:24.357589 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.505373634+00:00 stderr F E0813 20:03:24.505108 1 leaderelection.go:332] error retrieving resource lock openshift-etcd-operator/openshift-cluster-etcd-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.551723986+00:00 stderr F I0813 20:03:24.551623 1 request.go:697] Waited for 1.009458816s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd
2025-08-13T20:03:24.558413797+00:00 stderr F E0813 20:03:24.558307 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.558517610+00:00 stderr F I0813 20:03:24.558484 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.757933178+00:00 stderr F E0813 20:03:24.757746 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.758035851+00:00 stderr F I0813 20:03:24.757993 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.958230252+00:00 stderr F E0813 20:03:24.958137 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:25.153714949+00:00 stderr F E0813 20:03:25.153597 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:25.158041973+00:00 stderr F E0813 20:03:25.157992 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:25.158129965+00:00 stderr F I0813 20:03:25.158072 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:25.358476540+00:00 stderr F E0813 20:03:25.358158 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:25.358615464+00:00 stderr F I0813 20:03:25.358296 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:25.558516896+00:00 stderr F E0813 20:03:25.558383 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:25.752531091+00:00 stderr F I0813 20:03:25.752453 1 request.go:697] Waited for 1.193181188s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts
2025-08-13T20:03:25.758576004+00:00 stderr F E0813 20:03:25.757732 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:25.758576004+00:00 stderr F I0813 20:03:25.757892 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:25.857071064+00:00 stderr F E0813 20:03:25.856945 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}"
2025-08-13T20:03:25.958207469+00:00 stderr F E0813 20:03:25.958059 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:25.958207469+00:00 stderr F I0813 20:03:25.958158 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:26.153532231+00:00 stderr F E0813 20:03:26.153414 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:26.158027719+00:00 stderr F E0813 20:03:26.157943 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:26.357712246+00:00 stderr F E0813 20:03:26.357613 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:26.357827169+00:00 stderr F I0813 20:03:26.357705 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:26.557380822+00:00 stderr F E0813 20:03:26.557258 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:26.758231992+00:00 stderr F E0813 20:03:26.758118 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:26.958319360+00:00 stderr F E0813 20:03:26.958176 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:26.958319360+00:00 stderr F I0813 20:03:26.958226 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:27.157973746+00:00 stderr F E0813 20:03:27.157640 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:27.157973746+00:00 stderr F I0813 20:03:27.157736 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:27.352905736+00:00 stderr F I0813 20:03:27.352124 1 request.go:697] Waited for 1.178087108s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller
2025-08-13T20:03:27.355495730+00:00 stderr F E0813 20:03:27.355381 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:27.558159942+00:00 stderr F E0813 20:03:27.558037 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:27.754411790+00:00 stderr F E0813 20:03:27.754272 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:27.957162104+00:00 stderr F E0813 20:03:27.957062 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:27.957613027+00:00 stderr F I0813 20:03:27.957217 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:28.156063508+00:00 stderr F E0813 20:03:28.155964 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:28.355594340+00:00 stderr F E0813 20:03:28.355485 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:28.444511917+00:00 stderr F E0813 20:03:28.444455 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:28.444640480+00:00 stderr F I0813 20:03:28.444615 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:28.551331324+00:00 stderr F I0813 20:03:28.551276 1 request.go:697] Waited for 1.154931357s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller
2025-08-13T20:03:28.553475925+00:00 stderr F E0813 20:03:28.553451 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:28.762089566+00:00 stderr F E0813 20:03:28.761998 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:28.957700287+00:00 stderr F E0813 20:03:28.957497 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:28.957700287+00:00 stderr F I0813 20:03:28.957571 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:29.355682610+00:00 stderr F E0813 20:03:29.355320 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:29.537526057+00:00 stderr F E0813 20:03:29.537405 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}"
2025-08-13T20:03:29.556090137+00:00 stderr F E0813 20:03:29.555992 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:29.765203002+00:00 stderr F E0813 20:03:29.765039 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:29.765297575+00:00 stderr F I0813 20:03:29.765179 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:29.958615090+00:00 stderr F E0813 20:03:29.958470 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:30.153994804+00:00 stderr F E0813 20:03:30.153764 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:30.359384223+00:00 stderr F E0813 20:03:30.359136 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:30.555817727+00:00 stderr F E0813 20:03:30.555628 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:30.755600527+00:00 stderr F E0813 20:03:30.755477 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:30.755941116+00:00 stderr F I0813 20:03:30.755697 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:30.840899200+00:00 stderr F E0813 20:03:30.840733 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}"
2025-08-13T20:03:31.012560357+00:00 stderr F E0813 20:03:31.012387 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:31.012560357+00:00 stderr F I0813 20:03:31.012536 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:31.159309693+00:00 stderr F E0813 20:03:31.159218 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:31.356281642+00:00 stderr F E0813 20:03:31.356157 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:31.356618142+00:00 stderr F I0813 20:03:31.356536 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:31.554473645+00:00 stderr F E0813 20:03:31.554337 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:31.755144050+00:00 stderr F E0813 20:03:31.755021 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:31.955864116+00:00 stderr F E0813 20:03:31.955695 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:31.956168485+00:00 stderr F I0813 20:03:31.956101 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:32.353411897+00:00 stderr F E0813 20:03:32.353282 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:32.555996746+00:00 stderr F E0813 20:03:32.555895 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:32.556484370+00:00 stderr F I0813 20:03:32.556404 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:32.757106004+00:00 stderr F E0813 20:03:32.756929 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:32.958043466+00:00 stderr F E0813 20:03:32.957913 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:33.160642345+00:00 stderr F E0813 20:03:33.156525 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:33.160642345+00:00 stderr F I0813 20:03:33.158978 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:33.559447462+00:00 stderr F E0813 20:03:33.558753 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:33.757573125+00:00 stderr F E0813 20:03:33.757461 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:33.757573125+00:00 stderr F I0813 20:03:33.757547 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:33.957753815+00:00 stderr F E0813 20:03:33.957575 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:34.161032784+00:00 stderr F E0813 20:03:34.159741 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:34.355140732+00:00 stderr F E0813 20:03:34.355038 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:34.355239124+00:00 stderr F I0813 20:03:34.355179 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:34.755950416+00:00 stderr F E0813 20:03:34.755750 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:34.956662761+00:00 stderr F E0813 20:03:34.956237 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:34.956662761+00:00 stderr F I0813 20:03:34.956341 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:35.169619416+00:00 stderr F E0813 20:03:35.169293 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443:
connect: connection refused 2025-08-13T20:03:35.557111580+00:00 stderr F E0813 20:03:35.557005 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.861442162+00:00 stderr F E0813 20:03:35.861273 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:35.962182946+00:00 stderr F E0813 20:03:35.962038 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:35.962252108+00:00 stderr F I0813 20:03:35.962169 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.139711820+00:00 stderr F E0813 20:03:36.139611 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.139925116+00:00 stderr F I0813 20:03:36.139700 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.157481337+00:00 stderr F E0813 20:03:36.157376 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.556533341+00:00 stderr F E0813 20:03:36.556330 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.556533341+00:00 stderr F I0813 20:03:36.556415 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.759466900+00:00 stderr F E0813 20:03:36.759334 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:36.953633649+00:00 stderr F E0813 20:03:36.953531 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.356673327+00:00 stderr F E0813 20:03:37.356579 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.555997593+00:00 stderr F E0813 20:03:37.555915 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:37.959219406+00:00 stderr F E0813 20:03:37.959149 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:38.355437569+00:00 stderr F E0813 20:03:38.355380 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:38.556167655+00:00 stderr F E0813 20:03:38.556111 1 
base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:38.556307989+00:00 stderr F I0813 20:03:38.556219 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:38.955301350+00:00 stderr F E0813 20:03:38.955074 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:39.156957833+00:00 stderr F E0813 20:03:39.156656 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.356349261+00:00 stderr F E0813 20:03:39.356168 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:03:39.356349261+00:00 stderr F I0813 20:03:39.356305 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:39.541460462+00:00 stderr F E0813 20:03:39.541323 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:39.760090389+00:00 stderr F E0813 20:03:39.758278 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:03:40.410572265+00:00 stderr F E0813 20:03:40.410156 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:40.554832701+00:00 stderr F E0813 20:03:40.554724 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:40.844255727+00:00 stderr F E0813 20:03:40.844164 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:41.697942270+00:00 stderr F E0813 20:03:41.696580 1 base_controller.go:268] TargetConfigController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.076366176+00:00 stderr F E0813 20:03:42.076310 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.683909466+00:00 stderr F E0813 20:03:42.683724 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.122655472+00:00 stderr F E0813 20:03:43.122546 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.681258448+00:00 stderr F E0813 20:03:43.681139 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.681320709+00:00 stderr F I0813 20:03:43.681273 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:44.263280191+00:00 stderr F E0813 20:03:44.263223 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:03:44.482826264+00:00 stderr F E0813 20:03:44.482146 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:44.482826264+00:00 stderr F I0813 20:03:44.482228 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:45.864068586+00:00 stderr F E0813 20:03:45.863953 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:46.393626573+00:00 stderr F E0813 20:03:46.393442 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:46.393626573+00:00 stderr F I0813 20:03:46.393548 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:48.262920558+00:00 stderr F E0813 20:03:48.262499 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.207182985+00:00 stderr F E0813 20:03:49.207077 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:49.391407061+00:00 stderr F 
E0813 20:03:49.391305 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.545107794+00:00 stderr F E0813 20:03:49.544819 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:50.846910431+00:00 stderr F E0813 20:03:50.846526 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:03:52.325943768+00:00 stderr F E0813 20:03:52.322551 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:52.940969334+00:00 stderr F E0813 20:03:52.940373 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.932722016+00:00 stderr F E0813 20:03:53.931117 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 
10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:53.950185344+00:00 stderr F E0813 20:03:53.946198 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.950185344+00:00 stderr F I0813 20:03:53.946277 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.974173028+00:00 stderr F E0813 20:03:53.974113 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.974330663+00:00 stderr F I0813 20:03:53.974301 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:54.735887598+00:00 stderr F E0813 20:03:54.734474 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:54.735887598+00:00 stderr F I0813 20:03:54.735026 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:03:55.873922234+00:00 stderr F E0813 20:03:55.873512 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:03:58.509095808+00:00 stderr F E0813 20:03:58.508630 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.547894352+00:00 stderr F E0813 20:03:59.547725 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:03:59.658252940+00:00 stderr F E0813 20:03:59.647921 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:00.859073585+00:00 stderr F E0813 20:04:00.855063 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection 
refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:05.881238883+00:00 stderr F E0813 20:04:05.878083 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:07.554363661+00:00 stderr F E0813 20:04:07.554312 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:07.554545866+00:00 stderr F I0813 20:04:07.554507 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:09.551030160+00:00 stderr F E0813 20:04:09.550761 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:09.724299903+00:00 stderr F E0813 20:04:09.722238 1 base_controller.go:268] BackingResourceController reconciliation failed: 
["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:10.862056161+00:00 stderr F E0813 20:04:10.861468 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:12.806143739+00:00 stderr F E0813 20:04:12.806031 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:13.424606892+00:00 stderr F E0813 20:04:13.424481 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:14.434228454+00:00 stderr F E0813 20:04:14.434114 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:14.434322137+00:00 stderr F I0813 20:04:14.434260 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.221155572+00:00 stderr F E0813 20:04:15.221031 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:15.221155572+00:00 stderr F I0813 20:04:15.221124 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:04:15.894947624+00:00 stderr F E0813 20:04:15.892361 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:18.996137771+00:00 stderr F E0813 20:04:18.995688 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.556094465+00:00 stderr F E0813 20:04:19.555673 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:20.142189435+00:00 stderr F E0813 20:04:20.142129 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.869223985+00:00 stderr F E0813 20:04:20.867149 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection 
refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:20.869223985+00:00 stderr F E0813 20:04:20.867241 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:20.869580806+00:00 stderr F E0813 20:04:20.869524 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 
openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:22.857735944+00:00 stderr F E0813 20:04:22.857600 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:04:22.961238868+00:00 stderr F E0813 20:04:22.959978 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:23.051968907+00:00 stderr F E0813 
20:04:23.051911 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.052106241+00:00 stderr F I0813 20:04:23.052049 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.052967065+00:00 stderr F E0813 20:04:23.052883 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.054510849+00:00 stderr F E0813 20:04:23.054456 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.054532100+00:00 stderr F I0813 20:04:23.054510 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.261045995+00:00 stderr F E0813 20:04:23.260917 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.262444295+00:00 stderr F E0813 20:04:23.262379 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.559501720+00:00 stderr F E0813 20:04:23.559378 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 
10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:23.760595529+00:00 stderr F E0813 20:04:23.760494 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.761087813+00:00 stderr F E0813 20:04:23.760634 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.761243628+00:00 stderr F I0813 20:04:23.760744 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:24.238534455+00:00 stderr F E0813 20:04:24.236428 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:04:24.238534455+00:00 stderr F I0813 20:04:24.237021 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:24.511558044+00:00 stderr F E0813 20:04:24.510976 1 leaderelection.go:332] error retrieving resource lock openshift-etcd-operator/openshift-cluster-etcd-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:25.895454472+00:00 stderr F E0813 20:04:25.895088 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:29.439894403+00:00 stderr F E0813 20:04:29.438675 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:29.557961674+00:00 stderr F E0813 20:04:29.557907 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:35.898907446+00:00 stderr F E0813 20:04:35.898239 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:39.444014303+00:00 stderr F E0813 20:04:39.442629 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:39.582918490+00:00 stderr F E0813 20:04:39.582396 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:45.905067071+00:00 stderr F E0813 20:04:45.903678 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:48.532965973+00:00 stderr F E0813 20:04:48.528929 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:48.532965973+00:00 stderr F I0813 20:04:48.532009 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:49.445322549+00:00 stderr F E0813 20:04:49.444886 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 
20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:49.585467953+00:00 stderr F E0813 20:04:49.585240 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:04:50.693385169+00:00 stderr F E0813 20:04:50.693311 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:53.773511932+00:00 stderr F E0813 20:04:53.773174 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:54.390655633+00:00 stderr F E0813 20:04:54.390251 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:55.404886357+00:00 stderr F E0813 20:04:55.404737 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:55.405273519+00:00 stderr F I0813 20:04:55.404969 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:55.909447746+00:00 stderr F E0813 20:04:55.909312 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 
openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:04:56.190326910+00:00 stderr F E0813 20:04:56.190154 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:56.190381151+00:00 stderr F I0813 20:04:56.190310 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:59.452388172+00:00 stderr F E0813 20:04:59.451291 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 
10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:04:59.593054480+00:00 stderr F E0813 20:04:59.591542 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC 
m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:00.060713062+00:00 stderr F E0813 20:05:00.058154 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:01.130496844+00:00 stderr F E0813 20:05:01.130339 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.967346406+00:00 stderr F E0813 20:05:05.966528 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:05:05.967413718+00:00 stderr F E0813 20:05:05.967379 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:05:05.969130917+00:00 stderr F E0813 20:05:05.969101 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c29267bd768\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put 
\"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.957751582 +0000 UTC m=+145.691815432,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:05:09.457429598+00:00 stderr F E0813 20:05:09.456992 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:05:09.598327073+00:00 stderr F E0813 20:05:09.598138 1 event.go:355] 
"Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:09.598327073+00:00 stderr F E0813 20:05:09.598283 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 
20:03:23.053022202 +0000 UTC m=+144.787086132,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:09.600359401+00:00 stderr F E0813 20:05:09.600235 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c28facabbfa\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.057670635 +0000 UTC m=+144.791734625,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:13.643070308+00:00 stderr F E0813 20:05:13.642637 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c28facabbfa\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c28facabbfa openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.053022202 +0000 UTC m=+144.787086132,LastTimestamp:2025-08-13 20:03:23.057670635 +0000 UTC m=+144.791734625,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-08-13T20:05:13.767411739+00:00 stderr F E0813 20:05:13.767326 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c29267bd768\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c29267bd768 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-08-13 20:03:23.786049384 +0000 UTC m=+145.520113284,LastTimestamp:2025-08-13 20:03:23.957751582 +0000 UTC m=+145.691815432,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-08-13T20:05:19.462051600+00:00 stderr F E0813 20:05:19.461334 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31.904730724 +0000 UTC m=+93.638794625,LastTimestamp:2025-08-13 20:02:31.913200936 +0000 UTC m=+93.647264956,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:05:22.879651976+00:00 stderr F E0813 20:05:22.878646 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:05:29.048246551+00:00 stderr F E0813 20:05:29.046023 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup 
thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:06:02.210131115+00:00 stderr F I0813 20:06:02.208595 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:03.033576196+00:00 stderr F I0813 20:06:03.033438 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:06.445424067+00:00 stderr F I0813 20:06:06.444339 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:07.608175334+00:00 stderr F I0813 20:06:07.608122 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:08.612634827+00:00 stderr F I0813 20:06:08.611959 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:09.414593552+00:00 stderr F I0813 20:06:09.414529 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:12.348169958+00:00 stderr F I0813 20:06:12.348000 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:12.836973505+00:00 stderr F I0813 20:06:12.835622 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:15.988378780+00:00 stderr F I0813 20:06:15.985500 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:22.843903184+00:00 stderr F I0813 20:06:22.842449 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:22.861297962+00:00 stderr F E0813 20:06:22.861183 1 base_controller.go:268] 
FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:06:23.072250683+00:00 stderr F I0813 20:06:23.072111 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:23.617974219+00:00 stderr F I0813 20:06:23.617182 1 reflector.go:351] Caches populated for *v1.Etcd from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:24.659640189+00:00 stderr F I0813 20:06:24.659469 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=etcds from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:24.676183782+00:00 stderr F I0813 20:06:24.675972 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:27.158032882+00:00 stderr F I0813 20:06:27.157843 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:27.558221692+00:00 stderr F I0813 20:06:27.558152 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:28.358379415+00:00 stderr F I0813 20:06:28.358311 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.174883817+00:00 stderr F I0813 20:06:29.174307 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.207098629+00:00 stderr F I0813 20:06:29.206943 1 reflector.go:351] Caches populated for *v1beta1.Machine from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.561481258+00:00 stderr F I0813 20:06:29.557522 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:30.560253238+00:00 stderr F I0813 
20:06:30.560145 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:31.778853495+00:00 stderr F I0813 20:06:31.706217 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:32.160490047+00:00 stderr F I0813 20:06:32.160293 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.148306849+00:00 stderr F I0813 20:06:33.147728 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.372511267+00:00 stderr F I0813 20:06:33.365420 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:34.644663810+00:00 stderr F I0813 20:06:34.644608 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:37.201202349+00:00 stderr F I0813 20:06:37.200137 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:39.541999251+00:00 stderr F I0813 20:06:39.522707 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:42.650540216+00:00 stderr F I0813 20:06:42.649910 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.907959937+00:00 stderr F I0813 20:06:43.897571 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:45.809039123+00:00 stderr F I0813 20:06:45.808925 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:50.511690451+00:00 stderr F I0813 20:06:50.502367 
1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:50.553990194+00:00 stderr F I0813 20:06:50.552587 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:57.566845679+00:00 stderr F I0813 20:06:57.561917 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:02.371117412+00:00 stderr F I0813 20:07:02.370179 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:04.722915769+00:00 stderr F I0813 20:07:04.722764 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:10.046854770+00:00 stderr F I0813 20:07:10.044642 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:22.928281282+00:00 stderr F E0813 20:07:22.926421 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:08:22.866636195+00:00 stderr F E0813 20:08:22.865703 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:08:24.161995414+00:00 stderr F E0813 20:08:24.161937 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error on peer cert sync for node crc: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/secrets/etcd-peer-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.184943762+00:00 stderr F I0813 20:08:24.182280 1 
event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.223274251+00:00 stderr F E0813 20:08:24.222855 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 31528 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31 +0000 UTC,LastTimestamp:2025-08-13 20:08:24.159629196 +0000 UTC m=+445.893693186,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:08:24.359258560+00:00 stderr F I0813 20:08:24.358504 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 
'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.359258560+00:00 stderr F E0813 20:08:24.359213 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.569482947+00:00 stderr F E0813 20:08:24.567947 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.569482947+00:00 stderr F I0813 20:08:24.568045 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.690223139+00:00 stderr F E0813 20:08:24.689972 1 leaderelection.go:332] error retrieving resource lock openshift-etcd-operator/openshift-cluster-etcd-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.757401485+00:00 stderr F E0813 20:08:24.757221 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.757401485+00:00 stderr F I0813 20:08:24.757345 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.956251467+00:00 stderr F E0813 20:08:24.956169 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.956301158+00:00 stderr F I0813 20:08:24.956261 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.153401538+00:00 stderr F E0813 20:08:25.153177 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.155126058+00:00 stderr F I0813 20:08:25.153268 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 
'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.350054066+00:00 stderr F E0813 20:08:25.350001 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.350190790+00:00 stderr F I0813 20:08:25.350166 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.716429421+00:00 stderr F E0813 20:08:25.714732 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.716429421+00:00 stderr F I0813 20:08:25.714869 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.387466040+00:00 stderr F E0813 20:08:26.377075 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.387466040+00:00 stderr F I0813 20:08:26.377191 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.679662479+00:00 stderr F E0813 20:08:27.678724 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.679662479+00:00 stderr F I0813 20:08:27.678885 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.268107832+00:00 stderr F E0813 20:08:30.267513 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.268107832+00:00 stderr F I0813 20:08:30.267695 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.645977526+00:00 stderr F E0813 20:08:31.645840 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 31528 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31 +0000 UTC,LastTimestamp:2025-08-13 20:08:24.159629196 +0000 UTC m=+445.893693186,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:08:35.396656560+00:00 stderr F E0813 20:08:35.393708 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.396656560+00:00 stderr F I0813 20:08:35.394473 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.650405050+00:00 stderr F E0813 20:08:41.648539 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 31528 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31 +0000 UTC,LastTimestamp:2025-08-13 20:08:24.159629196 +0000 UTC m=+445.893693186,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:08:45.655396576+00:00 stderr F E0813 20:08:45.654638 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:45.655446358+00:00 stderr F I0813 20:08:45.654841 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.655745521+00:00 stderr F E0813 20:08:51.655347 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events/etcd-operator.185b6c1d121dbe64\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.185b6c1d121dbe64 openshift-etcd-operator 31528 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-08-13 20:02:31 +0000 UTC,LastTimestamp:2025-08-13 20:08:24.159629196 +0000 UTC m=+445.893693186,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-08-13T20:09:22.866896272+00:00 stderr F E0813 20:09:22.866157 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup 
thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:09:28.690253803+00:00 stderr F I0813 20:09:28.689154 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:29.796471329+00:00 stderr F I0813 20:09:29.796019 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:29.950505895+00:00 stderr F I0813 20:09:29.950372 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:30.868878596+00:00 stderr F I0813 20:09:30.868418 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:32.896868311+00:00 stderr F I0813 20:09:32.896678 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:33.825653849+00:00 stderr F I0813 20:09:33.825145 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:34.341240701+00:00 stderr F I0813 20:09:34.341160 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:34.508565869+00:00 stderr F I0813 20:09:34.508463 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.550575104+00:00 stderr F I0813 20:09:35.550203 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:36.763696894+00:00 stderr F I0813 20:09:36.763292 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:37.445863483+00:00 stderr F I0813 20:09:37.445703 1 reflector.go:351] Caches populated for *v1.Secret from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.041151681+00:00 stderr F I0813 20:09:39.040874 1 reflector.go:351] Caches populated for *v1.Etcd from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.059841067+00:00 stderr F I0813 20:09:39.059728 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.775376112+00:00 stderr F I0813 20:09:39.775289 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.877276473+00:00 stderr F I0813 20:09:39.877139 1 reflector.go:351] Caches populated for *v1beta1.Machine from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:42.554569883+00:00 stderr F I0813 20:09:42.553989 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:44.886981005+00:00 stderr F I0813 20:09:44.885150 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.269155223+00:00 stderr F I0813 20:09:45.269000 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.299185674+00:00 stderr F I0813 20:09:45.291372 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=etcds from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.301321245+00:00 stderr F I0813 20:09:45.301283 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.321293898+00:00 stderr F I0813 20:09:45.305596 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:45.683142992+00:00 stderr F I0813 20:09:45.683015 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.246210666+00:00 stderr F I0813 20:09:46.246035 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.483549671+00:00 stderr F I0813 20:09:46.480153 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:48.069358597+00:00 stderr F I0813 20:09:48.066045 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:48.655359978+00:00 stderr F I0813 20:09:48.655206 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:50.226945617+00:00 stderr F I0813 20:09:50.224154 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.332282769+00:00 stderr F I0813 20:09:52.331846 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:54.468656401+00:00 stderr F I0813 20:09:54.468339 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.661100629+00:00 stderr F I0813 20:09:55.652730 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:56.423555399+00:00 stderr F I0813 20:09:56.423197 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:08.511851241+00:00 stderr F I0813 20:10:08.507574 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:14.081605350+00:00 stderr F I0813 20:10:14.079826 1 reflector.go:351] Caches populated for *v1.FeatureGate from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:21.924516261+00:00 stderr F I0813 20:10:21.923721 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:22.870503533+00:00 stderr F E0813 20:10:22.870263 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:10:25.574907120+00:00 stderr F I0813 20:10:25.574734 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:35.172134291+00:00 stderr F I0813 20:10:35.168313 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:43.224549840+00:00 stderr F I0813 20:10:43.223556 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:11:22.868676659+00:00 stderr F E0813 20:11:22.868202 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:12:22.881343790+00:00 stderr F E0813 20:12:22.877458 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:13:22.867875010+00:00 stderr F E0813 20:13:22.867581 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:14:22.868843436+00:00 
stderr F E0813 20:14:22.868197 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:15:22.867512094+00:00 stderr F E0813 20:15:22.867177 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:16:22.870828367+00:00 stderr F E0813 20:16:22.870141 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:17:22.876213876+00:00 stderr F E0813 20:17:22.875455 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:18:22.877519331+00:00 stderr F E0813 20:18:22.876956 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:19:22.880135864+00:00 stderr F E0813 20:19:22.879267 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:20:22.877295837+00:00 stderr F E0813 20:20:22.876172 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 
10.217.4.10:53: no such host 2025-08-13T20:21:22.884254785+00:00 stderr F E0813 20:21:22.878816 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:22:09.064314212+00:00 stderr F E0813 20:22:09.063425 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:22:22.884322808+00:00 stderr F E0813 20:22:22.882273 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:23:22.874296613+00:00 stderr F E0813 20:23:22.873563 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:24:22.880233946+00:00 stderr F E0813 20:24:22.879473 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:25:22.877911382+00:00 stderr F E0813 20:25:22.877520 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:26:22.890656328+00:00 stderr F E0813 20:26:22.889040 1 base_controller.go:268] FSyncController reconciliation failed: Post 
"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:27:22.881600995+00:00 stderr F E0813 20:27:22.881267 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:28:22.881465280+00:00 stderr F E0813 20:28:22.881095 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:29:22.880966008+00:00 stderr F E0813 20:29:22.880293 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:30:22.883764059+00:00 stderr F E0813 20:30:22.883122 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:31:22.885060755+00:00 stderr F E0813 20:31:22.884194 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:32:22.891502994+00:00 stderr F E0813 20:32:22.890695 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:33:22.884361515+00:00 stderr F E0813 20:33:22.883390 1 
base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:34:22.885392643+00:00 stderr F E0813 20:34:22.884208 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:35:22.886406289+00:00 stderr F E0813 20:35:22.885518 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:36:22.891921201+00:00 stderr F E0813 20:36:22.889617 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:37:22.886728388+00:00 stderr F E0813 20:37:22.886091 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:38:22.891078907+00:00 stderr F E0813 20:38:22.890725 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:38:49.076961309+00:00 stderr F E0813 20:38:49.076496 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 
2025-08-13T20:39:22.889738503+00:00 stderr F E0813 20:39:22.888686 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:40:22.889029618+00:00 stderr F E0813 20:40:22.888462 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:41:22.895517819+00:00 stderr F E0813 20:41:22.893211 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:42:22.891578222+00:00 stderr F E0813 20:42:22.891138 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:42:36.405189682+00:00 stderr F I0813 20:42:36.341327 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418486595+00:00 stderr F I0813 20:42:36.344825 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418486595+00:00 stderr F I0813 20:42:36.352873 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418486595+00:00 stderr F I0813 20:42:36.352895 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418628739+00:00 stderr F I0813 20:42:36.352915 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.352971 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353002 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353046 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353068 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353086 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353132 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353217 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450170978+00:00 stderr F I0813 20:42:36.353266 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.450603191+00:00 stderr F I0813 20:42:36.353283 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.451313861+00:00 stderr F I0813 20:42:36.353299 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.451663361+00:00 stderr F I0813 20:42:36.353316 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.463306937+00:00 stderr F I0813 20:42:36.353361 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.463971006+00:00 stderr F I0813 20:42:36.353378 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464496041+00:00 stderr F I0813 20:42:36.353393 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.464896573+00:00 stderr F I0813 20:42:36.353411 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.465287254+00:00 stderr F I0813 20:42:36.353428 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.465534021+00:00 stderr F I0813 20:42:36.353445 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.466008965+00:00 stderr F I0813 20:42:36.353460 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.466395156+00:00 stderr F I0813 20:42:36.353475 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.466717065+00:00 stderr F I0813 20:42:36.353491 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.467177499+00:00 stderr F I0813 20:42:36.353506 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.467517179+00:00 stderr F I0813 20:42:36.353521 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.467915070+00:00 stderr F I0813 20:42:36.353535 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.468287871+00:00 stderr F I0813 20:42:36.353552 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.468517787+00:00 stderr F I0813 20:42:36.353568 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.468972860+00:00 stderr F I0813 20:42:36.353585 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469270839+00:00 stderr F I0813 20:42:36.353602 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.469545767+00:00 stderr F I0813 20:42:36.353617 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.511689472+00:00 stderr F I0813 20:42:36.353630 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531136833+00:00 stderr F I0813 20:42:36.353647 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531136833+00:00 stderr F I0813 20:42:36.342482 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.120750 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125509 1 base_controller.go:172] Shutting down ClusterMemberController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125549 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125596 1 base_controller.go:172] Shutting down EtcdEndpointsController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125613 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125628 1 base_controller.go:172] Shutting down EtcdCertSignerController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125644 1 base_controller.go:172] Shutting down ClusterMemberRemovalController ... 2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125663 1 base_controller.go:172] Shutting down MissingStaticPodController ... 
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125685 1 base_controller.go:172] Shutting down EtcdStaticResources ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125703 1 base_controller.go:172] Shutting down StaticPodStateController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125717 1 base_controller.go:172] Shutting down InstallerStateController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125732 1 base_controller.go:172] Shutting down GuardController ...
2025-08-13T20:42:41.125878520+00:00 stderr F I0813 20:42:41.125750 1 base_controller.go:172] Shutting down InstallerController ...
2025-08-13T20:42:41.126171969+00:00 stderr F I0813 20:42:41.126131 1 base_controller.go:172] Shutting down ScriptController ...
2025-08-13T20:42:41.126296192+00:00 stderr F I0813 20:42:41.126270 1 base_controller.go:172] Shutting down PruneController ...
2025-08-13T20:42:41.126367504+00:00 stderr F I0813 20:42:41.126349 1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:42:41.126429406+00:00 stderr F I0813 20:42:41.126412 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ...
2025-08-13T20:42:41.126499908+00:00 stderr F I0813 20:42:41.126481 1 base_controller.go:172] Shutting down BackingResourceController ...
2025-08-13T20:42:41.126563860+00:00 stderr F I0813 20:42:41.126545 1 base_controller.go:172] Shutting down NodeController ...
2025-08-13T20:42:41.126642342+00:00 stderr F I0813 20:42:41.126618 1 base_controller.go:172] Shutting down DefragController ...
2025-08-13T20:42:41.126717944+00:00 stderr F I0813 20:42:41.126697 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ...
2025-08-13T20:42:41.126853818+00:00 stderr F I0813 20:42:41.126765 1 base_controller.go:172] Shutting down StatusSyncer_etcd ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127575 1 base_controller.go:172] Shutting down BootstrapTeardownController ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127632 1 base_controller.go:172] Shutting down ResourceSyncController ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127646 1 base_controller.go:172] Shutting down MachineDeletionHooksController ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127671 1 base_controller.go:114] Shutting down worker of ClusterMemberController controller ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127675 1 base_controller.go:172] Shutting down ConfigObserver ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127706 1 base_controller.go:114] Shutting down worker of BootstrapTeardownController controller ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.128508 1 envvarcontroller.go:209] Shutting down EnvVarController
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127684 1 base_controller.go:104] All ClusterMemberController workers have been terminated
2025-08-13T20:42:41.129861045+00:00 stderr F E0813 20:42:41.128859 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.127889 1 base_controller.go:104] All BootstrapTeardownController workers have been terminated
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129352 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129526 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129543 1 base_controller.go:104] All ResourceSyncController workers have been terminated
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129560 1 base_controller.go:114] Shutting down worker of MachineDeletionHooksController controller ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129570 1 base_controller.go:104] All MachineDeletionHooksController workers have been terminated
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129630 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129654 1 base_controller.go:104] All ConfigObserver workers have been terminated
2025-08-13T20:42:41.129861045+00:00 stderr F I0813 20:42:41.129830 1 base_controller.go:172] Shutting down EtcdMembersController ...
2025-08-13T20:42:41.129907136+00:00 stderr F I0813 20:42:41.129897 1 base_controller.go:172] Shutting down EtcdCertCleanerController ...
2025-08-13T20:42:41.129981228+00:00 stderr F I0813 20:42:41.126895 1 base_controller.go:150] All StatusSyncer_etcd post start hooks have been terminated
2025-08-13T20:42:41.132899923+00:00 stderr F I0813 20:42:41.130971 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated"
2025-08-13T20:42:41.132899923+00:00 stderr F W0813 20:42:41.132287 1 leaderelection.go:84] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/0.log

2025-08-13T19:59:19.439582302+00:00 stderr F I0813 19:59:18.611601 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:19.439582302+00:00 stderr F I0813 19:59:18.511681 1 profiler.go:21] Starting profiling endpoint at http://127.0.0.1:6060/debug/pprof/
2025-08-13T19:59:19.497175674+00:00 stderr F I0813 19:59:18.676219 1 cmd.go:240] Using service-serving-cert provided certificates
2025-08-13T19:59:19.497590426+00:00 stderr F I0813 19:59:19.497506 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:19.742684212+00:00 stderr F I0813 19:59:19.687992 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:24.021392759+00:00 stderr F I0813 19:59:24.020506 1 builder.go:298] openshift-cluster-etcd-operator version 4.16.0-202406131906.p0.gdc4f4e8.assembly.stream.el9-dc4f4e8-dc4f4e858ba8395dce6883242c7d12009685d145
2025-08-13T19:59:29.136189326+00:00 stderr F I0813 19:59:29.134492 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:59:29.136189326+00:00 stderr F W0813 19:59:29.135111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:29.136189326+00:00 stderr F W0813 19:59:29.135122 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:29.236993950+00:00 stderr F I0813 19:59:29.235641 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:59:29.242432165+00:00 stderr F I0813 19:59:29.240557 1 secure_serving.go:213] Serving securely on [::]:8443
2025-08-13T19:59:29.242432165+00:00 stderr F I0813 19:59:29.241447 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:59:29.273372357+00:00 stderr F I0813 19:59:29.272043 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:29.313331306+00:00 stderr F I0813 19:59:29.312113 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.313331306+00:00 stderr F I0813 19:59:29.312268 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:29.364638288+00:00 stderr F I0813 19:59:29.364375 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T19:59:29.367155910+00:00 stderr F I0813 19:59:29.365767 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T19:59:29.555894250+00:00 stderr F I0813 19:59:29.546100 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T19:59:29.555894250+00:00 stderr F I0813 19:59:29.546695 1 leaderelection.go:250] attempting to acquire leader lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock...
2025-08-13T19:59:29.569083816+00:00 stderr F I0813 19:59:29.567958 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:59:29.569083816+00:00 stderr F I0813 19:59:29.569001 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:29.713290977+00:00 stderr F I0813 19:59:29.713230 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:29.724559348+00:00 stderr F E0813 19:59:29.722298 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.725038032+00:00 stderr F E0813 19:59:29.725012 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.736378395+00:00 stderr F E0813 19:59:29.730717 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.736378395+00:00 stderr F E0813 19:59:29.734086 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.743035945+00:00 stderr F E0813 19:59:29.742379 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.744939749+00:00 stderr F E0813 19:59:29.744270 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.766698299+00:00 stderr F E0813 19:59:29.763559 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.766698299+00:00 stderr F E0813 19:59:29.766038 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.774469311+00:00 stderr F I0813 19:59:29.774188 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:29.912187596+00:00 stderr F E0813 19:59:29.908002 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.912187596+00:00 stderr F E0813 19:59:29.908064 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.990180180+00:00 stderr F E0813 19:59:29.990117 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:30.039085414+00:00 stderr F E0813 19:59:30.039024 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:30.167941217+00:00 stderr F E0813 19:59:30.167862 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:30.204724855+00:00 stderr F E0813 19:59:30.204571 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:30.489536614+00:00 stderr F E0813 19:59:30.488746 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:30.525665924+00:00 stderr F E0813 19:59:30.525598 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:31.151526054+00:00 stderr F E0813 19:59:31.151456 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:31.175321802+00:00 stderr F I0813 19:59:31.175063 1 leaderelection.go:260] successfully acquired lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock
2025-08-13T19:59:31.175363553+00:00 stderr F E0813 19:59:31.175316 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:31.208255471+00:00 stderr F I0813 19:59:31.208171 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-etcd-operator", Name:"openshift-cluster-etcd-operator-lock", UID:"1b330a9d-47ca-437b-91e1-6481a2280da8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28128", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' etcd-operator-768d5b5d86-722mg_13b0a2d9-9b76-44b9-abad-e471c2f65ca3 became leader
2025-08-13T19:59:31.550569339+00:00 stderr F I0813 19:59:31.549964 1 starter.go:166] recorded cluster versions: map[etcd:4.16.0 operator:4.16.0 raw-internal:4.16.0]
2025-08-13T19:59:32.441695689+00:00 stderr F E0813 19:59:32.437879 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:32.458201790+00:00 stderr F E0813 19:59:32.458107 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:33.874763240+00:00 stderr F I0813 19:59:33.874019 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T19:59:34.190295535+00:00 stderr F I0813 19:59:34.184537 1 starter.go:445] FeatureGates initializedenabled[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereStaticIPs]disabled[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T19:59:34.190295535+00:00 stderr F I0813 19:59:34.184682 1 starter.go:499] waiting for cluster version informer sync...
2025-08-13T19:59:34.282931475+00:00 stderr F I0813 19:59:34.280679 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T19:59:34.412808117+00:00 stderr F I0813 19:59:34.403536 1 starter.go:522] Detected available machine API, starting vertical scaling related controllers and informers...
2025-08-13T19:59:34.599604162+00:00 stderr F I0813 19:59:34.599224 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberRemovalController
2025-08-13T19:59:35.074728296+00:00 stderr F I0813 19:59:35.074419 1 base_controller.go:67] Waiting for caches to sync for MachineDeletionHooksController
2025-08-13T19:59:35.080904012+00:00 stderr F I0813 19:59:35.080535 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController
2025-08-13T19:59:35.248213521+00:00 stderr F I0813 19:59:35.115441 1 base_controller.go:67] Waiting for caches to sync for FSyncController
2025-08-13T19:59:35.248213521+00:00 stderr F I0813 19:59:35.247896 1 base_controller.go:73] Caches are synced for FSyncController
2025-08-13T19:59:35.248213521+00:00 stderr F I0813 19:59:35.247922 1 base_controller.go:110] Starting #1 worker of FSyncController controller ...
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243497 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243594 1 base_controller.go:67] Waiting for caches to sync for EtcdCertSignerController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243610 1 base_controller.go:67] Waiting for caches to sync for EtcdCertCleanerController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.313378 1 base_controller.go:73] Caches are synced for EtcdCertCleanerController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.313393 1 base_controller.go:110] Starting #1 worker of EtcdCertCleanerController controller ...
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243630 1 base_controller.go:67] Waiting for caches to sync for EtcdEndpointsController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243649 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243908 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_etcd
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243926 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243939 1 base_controller.go:67] Waiting for caches to sync for ClusterMemberController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243951 1 base_controller.go:67] Waiting for caches to sync for EtcdMembersController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.313995 1 base_controller.go:73] Caches are synced for EtcdMembersController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.314001 1 base_controller.go:110] Starting #1 worker of EtcdMembersController controller ...
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243962 1 base_controller.go:67] Waiting for caches to sync for BootstrapTeardownController
2025-08-13T19:59:35.315551700+00:00 stderr F E0813 19:59:35.314409 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243973 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.314517 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found
2025-08-13T19:59:35.315551700+00:00 stderr F I0813 19:59:35.243992 1 base_controller.go:67] Waiting for caches to sync for ScriptController
2025-08-13T19:59:35.388949042+00:00 stderr F I0813 19:59:35.244004 1 base_controller.go:67] Waiting for caches to sync for DefragController
2025-08-13T19:59:35.389179068+00:00 stderr F E0813 19:59:35.389154 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced
2025-08-13T19:59:35.389382674+00:00 stderr F I0813 19:59:35.244225 1 envvarcontroller.go:193] Starting EnvVarController
2025-08-13T19:59:35.391058182+00:00 stderr F I0813 19:59:35.391020 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found
2025-08-13T19:59:35.468019336+00:00 stderr F I0813 19:59:35.244247 1 base_controller.go:67] Waiting for caches to sync for RevisionController
2025-08-13T19:59:35.483227819+00:00 stderr F E0813 19:59:35.476989 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244441 1 base_controller.go:67] Waiting for caches to sync for InstallerController
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.478066 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244455 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244539 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244551 1 base_controller.go:67] Waiting for caches to sync for PruneController
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244567 1 base_controller.go:67] Waiting for caches to sync for NodeController
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244736 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244751 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244761 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.244770 1 base_controller.go:67] Waiting for caches to sync for GuardController
2025-08-13T19:59:35.483227819+00:00 stderr F I0813 19:59:35.245278 1 base_controller.go:67] Waiting for caches to sync for EtcdStaticResources
2025-08-13T19:59:35.529198640+00:00 stderr F E0813 19:59:35.245323 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.529198640+00:00 stderr F E0813 19:59:35.504348 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced
2025-08-13T19:59:35.529198640+00:00 stderr F E0813 19:59:35.509325 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.529198640+00:00 stderr F I0813 19:59:35.509567 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found
2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229019 1 base_controller.go:73] Caches are synced for StatusSyncer_etcd
2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229573 1 base_controller.go:110] Starting #1 worker of StatusSyncer_etcd controller ...
2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229618 1 base_controller.go:73] Caches are synced for BootstrapTeardownController
2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229624 1 base_controller.go:110] Starting #1 worker of BootstrapTeardownController controller ...
2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229640 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-08-13T19:59:38.230932083+00:00 stderr F I0813 19:59:38.229645 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-08-13T19:59:38.315580626+00:00 stderr F I0813 19:59:38.315147 1 base_controller.go:73] Caches are synced for MachineDeletionHooksController
2025-08-13T19:59:38.315580626+00:00 stderr F I0813 19:59:38.315168 1 base_controller.go:110] Starting #1 worker of MachineDeletionHooksController controller ...
2025-08-13T19:59:38.445108599+00:00 stderr F E0813 19:59:38.444373 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-08-13T19:59:38.515916647+00:00 stderr F E0813 19:59:38.503438 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-08-13T19:59:38.566434227+00:00 stderr F I0813 19:59:38.566370 1 base_controller.go:73] Caches are synced for PruneController
2025-08-13T19:59:38.575941538+00:00 stderr F I0813 19:59:38.575907 1 base_controller.go:110] Starting #1 worker of PruneController controller ...
2025-08-13T19:59:38.577320148+00:00 stderr F I0813 19:59:38.577282 1 prune_controller.go:269] Nothing to prune
2025-08-13T19:59:38.606453528+00:00 stderr F I0813 19:59:38.597541 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.681984 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.682102 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.682111 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.654724 1 base_controller.go:73] Caches are synced for DefragController
2025-08-13T19:59:38.688754724+00:00 stderr F I0813 19:59:38.683248 1 base_controller.go:110] Starting #1 worker of DefragController controller ...
2025-08-13T19:59:38.838939705+00:00 stderr F I0813 19:59:38.836534 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:38.843890516+00:00 stderr F I0813 19:59:38.842074 1 base_controller.go:73] Caches are synced for ScriptController
2025-08-13T19:59:38.874057006+00:00 stderr F I0813 19:59:38.872051 1 base_controller.go:110] Starting #1 worker of ScriptController controller ...
2025-08-13T19:59:38.982047663+00:00 stderr F I0813 19:59:38.967712 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:38.982047663+00:00 stderr F E0813 19:59:38.970342 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T19:59:38.982047663+00:00 stderr F I0813 19:59:38.971545 1 reflector.go:351] Caches populated for *v1.Etcd from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:39.036536197+00:00 stderr F I0813 19:59:39.036468 1 base_controller.go:73] Caches are synced for NodeController
2025-08-13T19:59:39.036630399+00:00 stderr F I0813 19:59:39.036612 1 base_controller.go:110] Starting #1 worker of NodeController controller ...
2025-08-13T19:59:39.098609596+00:00 stderr F I0813 19:59:39.098186 1 envvarcontroller.go:199] caches synced
2025-08-13T19:59:39.512175835+00:00 stderr F E0813 19:59:39.511575 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-08-13T19:59:39.554505982+00:00 stderr F I0813 19:59:39.554323 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:39.555444998+00:00 stderr F E0813 19:59:39.555415 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-08-13T19:59:39.597947860+00:00 stderr F E0813 19:59:39.597723 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-08-13T19:59:39.614323977+00:00 stderr F E0813 19:59:39.614245 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T19:59:41.444218429+00:00 stderr F I0813 19:59:41.402415 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:41.444218429+00:00 stderr F I0813 19:59:41.410329 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:41.510703544+00:00 stderr F E0813 19:59:41.453610 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-08-13T19:59:41.575286225+00:00 stderr F E0813 19:59:41.491014 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T19:59:41.688451451+00:00 stderr F E0813 19:59:41.686147 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.716241883+00:00 stderr F E0813 19:59:41.714508 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-08-13T19:59:41.772432005+00:00 stderr F E0813 19:59:41.772374 1 envvarcontroller.go:231] key failed with : configmap "etcd-endpoints" not found
2025-08-13T19:59:41.792715813+00:00 stderr F I0813 19:59:41.792615 1 prune_controller.go:269] Nothing to prune
2025-08-13T19:59:41.797374666+00:00 stderr F E0813 19:59:41.793568 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-08-13T19:59:41.813208317+00:00 stderr F I0813 19:59:41.802685 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:41.815346458+00:00 stderr F I0813 19:59:41.815115 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:42.030002477+00:00 stderr F I0813 19:59:42.029895 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:42.032700264+00:00 stderr F I0813 19:59:42.032424 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:42.045939881+00:00 stderr F E0813 19:59:42.045747 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-08-13T19:59:42.050357667+00:00 stderr F I0813 19:59:42.050321 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:42Z","message":"EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)","reason":"EnvVarController_Error::NodeController_MasterNodesReady","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:42.064411728+00:00 stderr F I0813 19:59:42.051461 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)")
2025-08-13T19:59:42.073188328+00:00 stderr F E0813 19:59:42.070405 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T19:59:42.084309165+00:00 stderr F I0813 19:59:42.057153 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:42.084309165+00:00 stderr F I0813 19:59:42.059321 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.104760 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.220958 1 base_controller.go:73] Caches are synced for InstallerStateController
2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.239738 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ...
2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.221066 1 base_controller.go:73] Caches are synced for StaticPodStateController
2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.239829 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ...
2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.221247 1 base_controller.go:73] Caches are synced for GuardController
2025-08-13T19:59:42.240711363+00:00 stderr F I0813 19:59:42.240287 1 base_controller.go:110] Starting #1 worker of GuardController controller ...
2025-08-13T19:59:42.244982595+00:00 stderr F I0813 19:59:42.241586 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:42.244982595+00:00 stderr F I0813 19:59:42.242611 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:42.267741034+00:00 stderr F I0813 19:59:42.267670 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:42.274360852+00:00 stderr F I0813 19:59:42.274307 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:42.364353727+00:00 stderr F I0813 19:59:42.221285 1 base_controller.go:73] Caches are synced for InstallerController
2025-08-13T19:59:42.364470211+00:00 stderr F I0813 19:59:42.364447 1 base_controller.go:110] Starting #1 worker of InstallerController controller ...
2025-08-13T19:59:42.381169387+00:00 stderr F I0813 19:59:42.368972 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:42.385226692+00:00 stderr F I0813 19:59:42.221433 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:42.411953034+00:00 stderr F I0813 19:59:42.411890 1 base_controller.go:73] Caches are synced for BackingResourceController
2025-08-13T19:59:42.415565527+00:00 stderr F I0813 19:59:42.415474 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ...
2025-08-13T19:59:42.415733612+00:00 stderr F I0813 19:59:42.415686 1 base_controller.go:73] Caches are synced for ClusterMemberRemovalController
2025-08-13T19:59:42.415963909+00:00 stderr F I0813 19:59:42.415763 1 base_controller.go:110] Starting #1 worker of ClusterMemberRemovalController controller ...
2025-08-13T19:59:42.439443888+00:00 stderr F I0813 19:59:42.438475 1 etcdcli_pool.go:70] creating a new cached client
2025-08-13T19:59:42.455607459+00:00 stderr F E0813 19:59:42.455540 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T19:59:42.455983369+00:00 stderr F I0813 19:59:42.455959 1 base_controller.go:73] Caches are synced for EtcdEndpointsController
2025-08-13T19:59:42.456028521+00:00 stderr F I0813 19:59:42.456015 1 base_controller.go:110] Starting #1 worker of EtcdEndpointsController controller ...
2025-08-13T19:59:42.456351400+00:00 stderr F I0813 19:59:42.456328 1 base_controller.go:73] Caches are synced for TargetConfigController
2025-08-13T19:59:42.456389951+00:00 stderr F I0813 19:59:42.456377 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ...
2025-08-13T19:59:42.456532865+00:00 stderr F I0813 19:59:42.456516 1 base_controller.go:73] Caches are synced for EtcdCertSignerController
2025-08-13T19:59:42.456565556+00:00 stderr F I0813 19:59:42.456553 1 base_controller.go:110] Starting #1 worker of EtcdCertSignerController controller ...
2025-08-13T19:59:42.459898211+00:00 stderr F I0813 19:59:42.459871 1 trace.go:236] Trace[761031083]: "DeltaFIFO Pop Process" ID:system:controller:pvc-protection-controller,Depth:106,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.274) (total time: 185ms):
2025-08-13T19:59:42.459898211+00:00 stderr F Trace[761031083]: [185.063576ms] [185.063576ms] END
2025-08-13T19:59:42.515986220+00:00 stderr F I0813 19:59:42.514103 1 base_controller.go:73] Caches are synced for MissingStaticPodController
2025-08-13T19:59:42.515986220+00:00 stderr F I0813 19:59:42.515190 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ...
2025-08-13T19:59:42.521079815+00:00 stderr F I0813 19:59:42.519198 1 etcdcli_pool.go:70] creating a new cached client
2025-08-13T19:59:42.521079815+00:00 stderr F I0813 19:59:42.520748 1 base_controller.go:73] Caches are synced for ClusterMemberController
2025-08-13T19:59:42.521079815+00:00 stderr F I0813 19:59:42.520766 1 base_controller.go:110] Starting #1 worker of ClusterMemberController controller ...
2025-08-13T19:59:42.529508635+00:00 stderr F I0813 19:59:42.529367 1 etcdcli_pool.go:70] creating a new cached client
2025-08-13T19:59:42.541220119+00:00 stderr F I0813 19:59:42.535950 1 base_controller.go:73] Caches are synced for RevisionController
2025-08-13T19:59:42.541220119+00:00 stderr F I0813 19:59:42.536164 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-08-13T19:59:42.542379502+00:00 stderr F I0813 19:59:42.542265 1 etcdcli_pool.go:70] creating a new cached client
2025-08-13T19:59:43.388393107+00:00 stderr F I0813 19:59:43.174330 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-08-13T19:59:43.437977401+00:00 stderr F I0813 19:59:43.437366 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-08-13T19:59:43.442007086+00:00 stderr F I0813 19:59:43.190035 1 trace.go:236] Trace[790508828]: "DeltaFIFO Pop Process" ID:system:openshift:controller:serviceaccount-controller,Depth:41,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.552) (total time: 637ms):
2025-08-13T19:59:43.442007086+00:00 stderr F Trace[790508828]: [637.43805ms] [637.43805ms] END
2025-08-13T19:59:43.461142281+00:00 stderr F I0813 19:59:43.190583 1 base_controller.go:73] Caches are synced for ConfigObserver
2025-08-13T19:59:43.461142281+00:00 stderr F I0813 19:59:43.460085 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-08-13T19:59:43.469627043+00:00 stderr F I0813 19:59:43.235059 1 prune_controller.go:269] Nothing to prune
2025-08-13T19:59:43.469885710+00:00 stderr F E0813 19:59:43.240476 1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced
2025-08-13T19:59:43.470704823+00:00 stderr F E0813 19:59:43.240674 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:43.470889159+00:00 stderr F I0813 19:59:43.248768 1 etcdcli_pool.go:70] creating a new cached client
2025-08-13T19:59:43.477025134+00:00 stderr F I0813 19:59:43.476267 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:38Z","message":"EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)","reason":"EnvVarController_Error::EtcdMembersController_ErrorUpdatingReportEtcdMembers::NodeController_MasterNodesReady","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:43.494418570+00:00 stderr F I0813 19:59:43.255933 1 helpers.go:184] lister was stale at resourceVersion=28242, live get showed resourceVersion=28318
2025-08-13T19:59:43.494418570+00:00 stderr F E0813 19:59:43.329586 1 envvarcontroller.go:231] key failed with : configmap "etcd-endpoints" not found
2025-08-13T19:59:43.494418570+00:00 stderr F E0813 19:59:43.353066 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:43.513639467+00:00 stderr F I0813 19:59:43.512527 1 base_controller.go:73] Caches are synced for EtcdStaticResources
2025-08-13T19:59:43.513639467+00:00 stderr F I0813 19:59:43.512590 1 base_controller.go:110] Starting #1 worker of EtcdStaticResources controller ...
2025-08-13T19:59:43.687120012+00:00 stderr F E0813 19:59:43.683712 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T19:59:43.926955579+00:00 stderr F E0813 19:59:43.891241 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values
2025-08-13T19:59:44.024683875+00:00 stderr F I0813 19:59:44.017254 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:38Z","message":"EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values","reason":"EnvVarController_Error::EtcdMembersController_ErrorUpdatingReportEtcdMembers::NodeController_MasterNodesReady::ScriptController_Error","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:44.115764081+00:00 stderr F I0813 19:59:44.040471 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)" to "EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)"
2025-08-13T19:59:44.115764081+00:00 stderr F I0813 19:59:44.066649 1 prune_controller.go:269] Nothing to prune
2025-08-13T19:59:44.115764081+00:00 stderr F E0813 19:59:44.076233 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T19:59:44.460925990+00:00 stderr F I0813 19:59:44.406579 1 prune_controller.go:269] Nothing to prune
2025-08-13T19:59:44.491474701+00:00 stderr F E0813 19:59:44.491373 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T19:59:44.501905558+00:00 stderr F I0813 19:59:44.499984 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:38Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values","reason":"EtcdMembersController_ErrorUpdatingReportEtcdMembers::NodeController_MasterNodesReady::ScriptController_Error","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:44.510090791+00:00 stderr F I0813 19:59:44.507255 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)" to "EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"
2025-08-13T19:59:45.405026942+00:00 stderr F E0813 19:59:45.402517 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T19:59:46.080965999+00:00 stderr F I0813 19:59:46.077699 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EnvVarControllerDegraded: configmap \"etcd-endpoints\" not found\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"
2025-08-13T19:59:46.527543259+00:00 stderr F I0813 19:59:46.523634 1 prune_controller.go:269] Nothing to prune
2025-08-13T19:59:46.544948805+00:00 stderr F I0813 19:59:46.544689 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:46.832987076+00:00 stderr F E0813 19:59:46.821501 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T19:59:47.102551730+00:00 stderr F I0813 19:59:47.102160 1 prune_controller.go:269] Nothing to prune
2025-08-13T19:59:47.363246801+00:00 stderr F I0813 19:59:47.317985 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:47.363246801+00:00 stderr F I0813 19:59:47.334353 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded changed from True to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found")
2025-08-13T19:59:47.767981449+00:00 stderr F I0813 19:59:47.766996 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
2025-08-13T19:59:48.064737628+00:00 stderr F I0813 19:59:48.051566 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:48.064737628+00:00 stderr F I0813 19:59:48.055625 1 prune_controller.go:269] Nothing to prune
2025-08-13T19:59:48.611643899+00:00 stderr F I0813 19:59:48.609906 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:48.611643899+00:00 stderr F I0813 19:59:48.610301 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
2025-08-13T19:59:48.812943397+00:00 stderr F E0813 19:59:48.806732 1 base_controller.go:268] StatusSyncer_etcd reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "etcd": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:49.572577771+00:00 stderr F E0813 19:59:49.572209 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-08-13T19:59:51.830068133+00:00 stderr F I0813 19:59:51.825763 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T19:59:51.928955572+00:00 stderr F E0813 19:59:51.927523 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:51.961419857+00:00 stderr F I0813 19:59:51.961204 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.961109858 +0000 UTC))"
2025-08-13T19:59:51.961559271+00:00 stderr F I0813 19:59:51.961543 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.96152121 +0000 UTC))"
2025-08-13T19:59:51.961682245+00:00 stderr F I0813 19:59:51.961664 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.961640153 +0000 UTC))"
2025-08-13T19:59:51.961743026+00:00 stderr F I0813 19:59:51.961724 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.961704295 +0000 UTC))"
2025-08-13T19:59:51.961958763+00:00 stderr F I0813 19:59:51.961828 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.961762997 +0000 UTC))"
2025-08-13T19:59:51.962028875+00:00 stderr F I0813 19:59:51.962009 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.961990933 +0000 UTC))"
2025-08-13T19:59:51.962091646+00:00 stderr F I0813 19:59:51.962076 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.962050705 +0000 UTC))"
2025-08-13T19:59:51.962191349+00:00 stderr F I0813 19:59:51.962170 1 tlsconfig.go:178] 
"Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.962119247 +0000 UTC))" 2025-08-13T19:59:51.962244911+00:00 stderr F I0813 19:59:51.962231 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.96221247 +0000 UTC))" 2025-08-13T19:59:51.962662473+00:00 stderr F I0813 19:59:51.962641 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-etcd-operator.svc\" [serving] validServingFor=[metrics.openshift-etcd-operator.svc,metrics.openshift-etcd-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 19:59:51.962616281 +0000 UTC))" 2025-08-13T19:59:51.963046684+00:00 stderr F I0813 19:59:51.963019 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115167\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 19:59:51.963000952 +0000 UTC))" 2025-08-13T19:59:54.765899009+00:00 stderr F E0813 19:59:54.737133 1 base_controller.go:268] FSyncController 
reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:00:05.196145423+00:00 stderr F E0813 20:00:05.186288 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:00:05.765550329+00:00 stderr F I0813 20:00:05.765069 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.765023964 +0000 UTC))" 2025-08-13T20:00:05.783205133+00:00 stderr F I0813 20:00:05.783177 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.783096489 +0000 UTC))" 2025-08-13T20:00:05.783328776+00:00 stderr F I0813 20:00:05.783309 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.783252924 +0000 UTC))" 2025-08-13T20:00:05.783385568+00:00 stderr F I0813 
20:00:05.783368 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.783351887 +0000 UTC))" 2025-08-13T20:00:05.783446519+00:00 stderr F I0813 20:00:05.783426 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783409788 +0000 UTC))" 2025-08-13T20:00:05.783515761+00:00 stderr F I0813 20:00:05.783502 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783485201 +0000 UTC))" 2025-08-13T20:00:05.783586993+00:00 stderr F I0813 20:00:05.783570 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783552522 +0000 UTC))" 
2025-08-13T20:00:05.783645745+00:00 stderr F I0813 20:00:05.783629 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783610724 +0000 UTC))" 2025-08-13T20:00:05.783709567+00:00 stderr F I0813 20:00:05.783694 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.783672236 +0000 UTC))" 2025-08-13T20:00:05.783764869+00:00 stderr F I0813 20:00:05.783746 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.783731698 +0000 UTC))" 2025-08-13T20:00:05.784556051+00:00 stderr F I0813 20:00:05.784526 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-etcd-operator.svc\" [serving] validServingFor=[metrics.openshift-etcd-operator.svc,metrics.openshift-etcd-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 
UTC (now=2025-08-13 20:00:05.78450408 +0000 UTC))" 2025-08-13T20:00:05.838385956+00:00 stderr F I0813 20:00:05.834488 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115167\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:00:05.784988084 +0000 UTC))" 2025-08-13T20:00:25.750118848+00:00 stderr F E0813 20:00:25.749354 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:00:35.469254510+00:00 stderr F E0813 20:00:35.468465 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-08-13T20:00:40.271403626+00:00 stderr F I0813 20:00:40.269711 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.307409 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308144 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC 
(now=2025-08-13 20:00:40.308098593 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308174 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:40.308159874 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308233 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:40.308183055 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308251 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:40.308239327 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308269 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 
19:49:26 +0000 UTC (now=2025-08-13 20:00:40.308257217 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308288 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.308276678 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308307 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.308293358 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308333 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.308313139 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308353 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC 
to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:40.30834121 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308399 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:40.30836468 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.308688 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-etcd-operator.svc\" [serving] validServingFor=[metrics.openshift-etcd-operator.svc,metrics.openshift-etcd-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:18 +0000 UTC to 2027-08-13 20:00:19 +0000 UTC (now=2025-08-13 20:00:40.308670389 +0000 UTC))" 2025-08-13T20:00:40.309143053+00:00 stderr F I0813 20:00:40.309022 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115167\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115166\" (2025-08-13 18:59:24 +0000 UTC to 2026-08-13 18:59:24 +0000 UTC (now=2025-08-13 20:00:40.309005079 +0000 UTC))" 2025-08-13T20:00:40.309778251+00:00 stderr F I0813 20:00:40.309716 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.342161 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been 
modified (old="7cc073c73c8c431e58bd599b22534e379416474b407ab24fcc4003eb26d4d413", new="d7210eb97e9fdfd0251de57fa3139593b93cc605b31992f9f8fc921c7054baed") 2025-08-13T20:00:41.345958686+00:00 stderr F W0813 20:00:41.342771 1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.342183 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="052d72315f99f9ec238bcbb8fe50e397b66ccc142d1ac794929899df735a889d", new="8fce782bf4ffc195a605185b97044a1cf85fb8408671072386211fc5d0f7f9b4") 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.343639 1 cmd.go:160] exiting because "/var/run/secrets/serving-cert/tls.key" changed 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344070 1 observer_polling.go:120] Observed file "/var/run/configmaps/etcd-service-ca/service-ca.crt" has been modified (old="4aae21eb5e7288ebd1f51edb8217b701366dd5aec958415476bca84ab942e90c", new="51e7a388d2ba2794fb8e557a4c52a736ae262d754d30e8729e9392e40100869b") 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344086 1 cmd.go:160] exiting because "/var/run/configmaps/etcd-service-ca/service-ca.crt" changed 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344144 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="7cc073c73c8c431e58bd599b22534e379416474b407ab24fcc4003eb26d4d413", new="d7210eb97e9fdfd0251de57fa3139593b93cc605b31992f9f8fc921c7054baed") 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344153 1 cmd.go:160] exiting because "/var/run/secrets/serving-cert/tls.crt" changed 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344617 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:00:41.345958686+00:00 stderr F I0813 20:00:41.344649 1 genericapiserver.go:536] 
"[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:00:41.353961434+00:00 stderr F I0813 20:00:41.352308 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:00:41.387389708+00:00 stderr F I0813 20:00:41.385105 1 base_controller.go:172] Shutting down EtcdStaticResources ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.393930 1 envvarcontroller.go:209] Shutting down EnvVarController 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394029 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394084 1 base_controller.go:172] Shutting down ScriptController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394116 1 base_controller.go:172] Shutting down DefragController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394132 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394148 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394171 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394376 1 base_controller.go:172] Shutting down MachineDeletionHooksController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394470 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394490 1 base_controller.go:172] Shutting down BootstrapTeardownController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394512 1 base_controller.go:172] Shutting down StatusSyncer_etcd ... 
2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394526 1 base_controller.go:150] All StatusSyncer_etcd post start hooks have been terminated 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394649 1 base_controller.go:172] Shutting down FSyncController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394665 1 base_controller.go:172] Shutting down EtcdMembersController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.394682 1 base_controller.go:172] Shutting down EtcdCertCleanerController ... 2025-08-13T20:00:41.395887190+00:00 stderr F I0813 20:00:41.395094 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:00:41.450948430+00:00 stderr F I0813 20:00:41.450279 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:41.450948430+00:00 stderr F I0813 20:00:41.450362 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:00:41.450948430+00:00 stderr F I0813 20:00:41.450540 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:00:41.450948430+00:00 stderr F I0813 20:00:41.450579 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:00:41.457009103+00:00 stderr F I0813 20:00:41.456673 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:00:41.457009103+00:00 stderr F I0813 20:00:41.456755 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465575 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465631 1 base_controller.go:172] Shutting down RevisionController ... 
2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465673 1 base_controller.go:172] Shutting down ClusterMemberController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465719 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465751 1 base_controller.go:172] Shutting down EtcdCertSignerController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465860 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.465892 1 base_controller.go:172] Shutting down EtcdEndpointsController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.466643 1 base_controller.go:172] Shutting down ClusterMemberRemovalController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.466720 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.466750 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467204 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467253 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467270 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467280 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467341 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 
2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467357 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467454 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467468 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467483 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467495 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467501 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467515 1 base_controller.go:114] Shutting down worker of ClusterMemberController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467527 1 base_controller.go:104] All ClusterMemberController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467548 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467558 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467563 1 controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467576 1 base_controller.go:114] Shutting down worker of EtcdCertSignerController controller ... 
2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467586 1 base_controller.go:104] All EtcdCertSignerController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467602 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467618 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467633 1 base_controller.go:114] Shutting down worker of EtcdEndpointsController controller ... 2025-08-13T20:00:41.469118258+00:00 stderr F I0813 20:00:41.467647 1 base_controller.go:104] All EtcdEndpointsController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471415 1 base_controller.go:114] Shutting down worker of PruneController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471485 1 base_controller.go:104] All PruneController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471493 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471525 1 base_controller.go:114] Shutting down worker of MachineDeletionHooksController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471534 1 base_controller.go:104] All MachineDeletionHooksController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471623 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471632 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471642 1 base_controller.go:114] Shutting down worker of BootstrapTeardownController controller ... 
2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471650 1 base_controller.go:104] All BootstrapTeardownController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471660 1 base_controller.go:114] Shutting down worker of StatusSyncer_etcd controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471670 1 base_controller.go:104] All StatusSyncer_etcd workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471680 1 base_controller.go:114] Shutting down worker of FSyncController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471688 1 base_controller.go:104] All FSyncController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471697 1 base_controller.go:114] Shutting down worker of EtcdMembersController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471707 1 base_controller.go:104] All EtcdMembersController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471716 1 base_controller.go:114] Shutting down worker of EtcdCertCleanerController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.471724 1 base_controller.go:104] All EtcdCertCleanerController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.472098 1 base_controller.go:114] Shutting down worker of ClusterMemberRemovalController controller ... 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.472111 1 base_controller.go:104] All ClusterMemberRemovalController workers have been terminated 2025-08-13T20:00:41.472479784+00:00 stderr F I0813 20:00:41.472298 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:00:41.473148593+00:00 stderr F I0813 20:00:41.472906 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 
2025-08-13T20:00:41.473148593+00:00 stderr F I0813 20:00:41.472940       1 base_controller.go:104] All BackingResourceController workers have been terminated
2025-08-13T20:00:41.473148593+00:00 stderr F I0813 20:00:41.472949       1 controller_manager.go:54] BackingResourceController controller terminated
2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.473459       1 base_controller.go:114] Shutting down worker of InstallerController controller ...
2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.473707       1 base_controller.go:104] All InstallerController workers have been terminated
2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.473762       1 controller_manager.go:54] InstallerController controller terminated
2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474073       1 base_controller.go:114] Shutting down worker of GuardController controller ...
2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474175       1 base_controller.go:104] All GuardController workers have been terminated
2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474217       1 controller_manager.go:54] GuardController controller terminated
2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474233       1 base_controller.go:114] Shutting down worker of NodeController controller ...
2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474250       1 base_controller.go:104] All NodeController workers have been terminated
2025-08-13T20:00:41.474724778+00:00 stderr F I0813 20:00:41.474259       1 controller_manager.go:54] NodeController controller terminated
2025-08-13T20:00:41.475861680+00:00 stderr F I0813 20:00:41.475635       1 secure_serving.go:258] Stopped listening on [::]:8443
2025-08-13T20:00:41.475861680+00:00 stderr F I0813 20:00:41.475688       1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
2025-08-13T20:00:41.475861680+00:00 stderr F I0813 20:00:41.475742       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
2025-08-13T20:00:41.475882591+00:00 stderr F I0813 20:00:41.475872       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530027       1 base_controller.go:172] Shutting down InstallerStateController ...
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530740       1 base_controller.go:114] Shutting down worker of EtcdStaticResources controller ...
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530756       1 base_controller.go:104] All EtcdStaticResources workers have been terminated
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530780       1 base_controller.go:114] Shutting down worker of InstallerStateController controller ...
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530956       1 base_controller.go:104] All InstallerStateController workers have been terminated
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.530971       1 controller_manager.go:54] InstallerStateController controller terminated
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534080       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534227       1 base_controller.go:114] Shutting down worker of DefragController controller ...
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534240       1 base_controller.go:104] All DefragController workers have been terminated
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534251       1 base_controller.go:114] Shutting down worker of ScriptController controller ...
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534258       1 base_controller.go:104] All ScriptController workers have been terminated
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534280       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534311       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534320       1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:00:41.538024743+00:00 stderr F I0813 20:00:41.534325       1 controller_manager.go:54] LoggingSyncer controller terminated
2025-08-13T20:00:41.541754909+00:00 stderr F I0813 20:00:41.541461       1 genericapiserver.go:699] [graceful-termination] apiserver is exiting
2025-08-13T20:00:41.541754909+00:00 stderr F I0813 20:00:41.541554       1 builder.go:329] server exited
2025-08-13T20:00:41.551133457+00:00 stderr F I0813 20:00:41.542312       1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="052d72315f99f9ec238bcbb8fe50e397b66ccc142d1ac794929899df735a889d", new="8fce782bf4ffc195a605185b97044a1cf85fb8408671072386211fc5d0f7f9b4")
2025-08-13T20:00:41.551133457+00:00 stderr F W0813 20:00:41.542993       1 leaderelection.go:84] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator/2.log
2025-12-13T00:13:15.496880732+00:00 stderr F I1213 00:13:15.494350       1 cmd.go:240] Using service-serving-cert provided certificates
2025-12-13T00:13:15.496880732+00:00 stderr F I1213 00:13:15.494702       1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-12-13T00:13:15.496880732+00:00 stderr F I1213 00:13:15.495493       1 profiler.go:21] Starting profiling endpoint at http://127.0.0.1:6060/debug/pprof/
2025-12-13T00:13:15.496880732+00:00 stderr F I1213 00:13:15.495690       1 observer_polling.go:159] Starting file observer
2025-12-13T00:13:15.528056149+00:00 stderr F I1213 00:13:15.520672       1 observer_polling.go:159] Starting file observer
2025-12-13T00:13:15.610925284+00:00 stderr F I1213 00:13:15.610634       1 builder.go:298] openshift-cluster-etcd-operator version 4.16.0-202406131906.p0.gdc4f4e8.assembly.stream.el9-dc4f4e8-dc4f4e858ba8395dce6883242c7d12009685d145
2025-12-13T00:13:16.641987810+00:00 stderr F I1213 00:13:16.640818       1 secure_serving.go:57] Forcing use of http/1.1 only
2025-12-13T00:13:16.641987810+00:00 stderr F W1213 00:13:16.641243       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-12-13T00:13:16.641987810+00:00 stderr F W1213 00:13:16.641249       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-12-13T00:13:16.677986639+00:00 stderr F I1213 00:13:16.662312       1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-12-13T00:13:16.677986639+00:00 stderr F I1213 00:13:16.676961       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-12-13T00:13:16.677986639+00:00 stderr F I1213 00:13:16.665972       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-12-13T00:13:16.677986639+00:00 stderr F I1213 00:13:16.665998       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-12-13T00:13:16.678236298+00:00 stderr F I1213 00:13:16.678182       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-12-13T00:13:16.679038995+00:00 stderr F I1213 00:13:16.678372       1 leaderelection.go:250] attempting to acquire leader lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock...
2025-12-13T00:13:16.679038995+00:00 stderr F I1213 00:13:16.678476       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-12-13T00:13:16.679038995+00:00 stderr F I1213 00:13:16.678490       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-12-13T00:13:16.681613771+00:00 stderr F I1213 00:13:16.681258       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-12-13T00:13:16.707528722+00:00 stderr F I1213 00:13:16.707291       1 secure_serving.go:213] Serving securely on [::]:8443
2025-12-13T00:13:16.707528722+00:00 stderr F I1213 00:13:16.707342       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-12-13T00:13:16.879838761+00:00 stderr F I1213 00:13:16.879249       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-12-13T00:13:16.880072340+00:00 stderr F I1213 00:13:16.880029       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-12-13T00:13:16.881986214+00:00 stderr F I1213 00:13:16.881794       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-12-13T00:17:50.919640373+00:00 stderr F I1213 00:17:50.919178       1 leaderelection.go:260] successfully acquired lease openshift-etcd-operator/openshift-cluster-etcd-operator-lock
2025-12-13T00:17:50.922265923+00:00 stderr F I1213 00:17:50.922205       1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-etcd-operator", Name:"openshift-cluster-etcd-operator-lock", UID:"1b330a9d-47ca-437b-91e1-6481a2280da8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41875", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' etcd-operator-768d5b5d86-722mg_5da26f8f-62a3-43d2-9d61-e02e6165a79a became leader
2025-12-13T00:17:50.925956801+00:00 stderr F I1213 00:17:50.925906       1 starter.go:166] recorded cluster versions: map[etcd:4.16.0 operator:4.16.0 raw-internal:4.16.0]
2025-12-13T00:17:50.936038960+00:00 stderr F I1213 00:17:50.934894       1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-12-13T00:17:50.938279259+00:00 stderr F I1213 00:17:50.937670       1 starter.go:445] FeatureGates initializedenabled[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereStaticIPs]disabled[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-12-13T00:17:50.938378632+00:00 stderr F I1213 00:17:50.938362       1 starter.go:499] waiting for cluster version informer sync...
2025-12-13T00:17:50.938439704+00:00 stderr F I1213 00:17:50.938420       1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-12-13T00:17:51.041789022+00:00 stderr F I1213 00:17:51.041712       1 starter.go:522] Detected available machine API, starting vertical scaling related controllers and informers...
2025-12-13T00:17:51.043734763+00:00 stderr F I1213 00:17:51.043685       1 base_controller.go:67] Waiting for caches to sync for ClusterMemberRemovalController
2025-12-13T00:17:51.045750066+00:00 stderr F I1213 00:17:51.044085       1 base_controller.go:67] Waiting for caches to sync for MachineDeletionHooksController
2025-12-13T00:17:51.048208802+00:00 stderr F I1213 00:17:51.048153       1 base_controller.go:67] Waiting for caches to sync for EtcdCertCleanerController
2025-12-13T00:17:51.048208802+00:00 stderr F I1213 00:17:51.048199       1 base_controller.go:73] Caches are synced for EtcdCertCleanerController
2025-12-13T00:17:51.048228172+00:00 stderr F I1213 00:17:51.048209       1 base_controller.go:110] Starting #1 worker of EtcdCertCleanerController controller ...
2025-12-13T00:17:51.048417197+00:00 stderr F I1213 00:17:51.048375       1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController
2025-12-13T00:17:51.048436828+00:00 stderr F I1213 00:17:51.048416       1 base_controller.go:67] Waiting for caches to sync for EtcdEndpointsController
2025-12-13T00:17:51.048651383+00:00 stderr F I1213 00:17:51.048453       1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-12-13T00:17:51.048784837+00:00 stderr F I1213 00:17:51.048732       1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-12-13T00:17:51.048848559+00:00 stderr F I1213 00:17:51.048823       1 base_controller.go:67] Waiting for caches to sync for DefragController
2025-12-13T00:17:51.048860209+00:00 stderr F I1213 00:17:51.048851       1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController
2025-12-13T00:17:51.048867549+00:00 stderr F I1213 00:17:51.048857       1 base_controller.go:67] Waiting for caches to sync for ScriptController
2025-12-13T00:17:51.048903260+00:00 stderr F I1213 00:17:51.048876       1 base_controller.go:67] Waiting for caches to sync for InstallerStateController
2025-12-13T00:17:51.049005323+00:00 stderr F I1213 00:17:51.048971       1 base_controller.go:67] Waiting for caches to sync for EtcdCertSignerController
2025-12-13T00:17:51.049049304+00:00 stderr F I1213 00:17:51.049007       1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-12-13T00:17:51.049049304+00:00 stderr F I1213 00:17:51.049041       1 base_controller.go:67] Waiting for caches to sync for ClusterMemberController
2025-12-13T00:17:51.049088985+00:00 stderr F I1213 00:17:51.049068       1 base_controller.go:67] Waiting for caches to sync for EtcdMembersController
2025-12-13T00:17:51.049145046+00:00 stderr F I1213 00:17:51.049124       1 base_controller.go:73] Caches are synced for EtcdMembersController
2025-12-13T00:17:51.049153797+00:00 stderr F I1213 00:17:51.049138       1 base_controller.go:110] Starting #1 worker of EtcdMembersController controller ...
2025-12-13T00:17:51.049306071+00:00 stderr F I1213 00:17:51.049274       1 base_controller.go:67] Waiting for caches to sync for BootstrapTeardownController
2025-12-13T00:17:51.049354082+00:00 stderr F I1213 00:17:51.049333       1 base_controller.go:67] Waiting for caches to sync for InstallerController
2025-12-13T00:17:51.049380893+00:00 stderr F I1213 00:17:51.049370       1 base_controller.go:67] Waiting for caches to sync for PruneController
2025-12-13T00:17:51.049407763+00:00 stderr F I1213 00:17:51.049391       1 base_controller.go:67] Waiting for caches to sync for NodeController
2025-12-13T00:17:51.049482816+00:00 stderr F I1213 00:17:51.049451       1 base_controller.go:67] Waiting for caches to sync for BackingResourceController
2025-12-13T00:17:51.049515767+00:00 stderr F I1213 00:17:51.049493       1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-12-13T00:17:51.049558368+00:00 stderr F I1213 00:17:51.049547       1 base_controller.go:67] Waiting for caches to sync for GuardController
2025-12-13T00:17:51.049621800+00:00 stderr F I1213 00:17:51.049538       1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-12-13T00:17:51.049704482+00:00 stderr F I1213 00:17:51.049680       1 base_controller.go:67] Waiting for caches to sync for FSyncController
2025-12-13T00:17:51.049704482+00:00 stderr F I1213 00:17:51.049693       1 base_controller.go:73] Caches are synced for FSyncController
2025-12-13T00:17:51.049704482+00:00 stderr F I1213 00:17:51.049700       1 base_controller.go:110] Starting #1 worker of FSyncController controller ...
2025-12-13T00:17:51.050416151+00:00 stderr F I1213 00:17:51.050376       1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_etcd
2025-12-13T00:17:51.052038643+00:00 stderr F E1213 00:17:51.052010       1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced
2025-12-13T00:17:51.053666197+00:00 stderr F I1213 00:17:51.053626       1 envvarcontroller.go:193] Starting EnvVarController
2025-12-13T00:17:51.054065028+00:00 stderr F I1213 00:17:51.054036       1 base_controller.go:67] Waiting for caches to sync for TargetConfigController
2025-12-13T00:17:51.054212461+00:00 stderr F I1213 00:17:51.054184       1 base_controller.go:67] Waiting for caches to sync for RevisionController
2025-12-13T00:17:51.054498599+00:00 stderr F I1213 00:17:51.054471       1 base_controller.go:67] Waiting for caches to sync for EtcdStaticResources
2025-12-13T00:17:51.054731705+00:00 stderr F I1213 00:17:51.054557       1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ReportEtcdMembersErrorUpdatingStatus' etcds.operator.openshift.io "cluster" not found
2025-12-13T00:17:51.072185770+00:00 stderr F E1213 00:17:51.072117       1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-12-13T00:17:51.076689359+00:00 stderr F E1213 00:17:51.076644       1 base_controller.go:268] EtcdMembersController reconciliation failed: getting cache client could not retrieve endpoints: node lister not synced
2025-12-13T00:17:51.080032728+00:00 stderr F E1213 00:17:51.079991       1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-12-13T00:17:51.087186158+00:00 stderr F I1213 00:17:51.087128       1 etcdcli_pool.go:70] creating a new cached client
2025-12-13T00:17:51.093118326+00:00 stderr F E1213 00:17:51.093080       1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-12-13T00:17:51.116811486+00:00 stderr F E1213 00:17:51.116738       1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-12-13T00:17:51.144481681+00:00 stderr F I1213 00:17:51.144417       1 base_controller.go:73] Caches are synced for ClusterMemberRemovalController
2025-12-13T00:17:51.144481681+00:00 stderr F I1213 00:17:51.144441       1 base_controller.go:110] Starting #1 worker of ClusterMemberRemovalController controller ...
2025-12-13T00:17:51.146185337+00:00 stderr F I1213 00:17:51.146131       1 base_controller.go:73] Caches are synced for MachineDeletionHooksController
2025-12-13T00:17:51.146185337+00:00 stderr F I1213 00:17:51.146153       1 base_controller.go:110] Starting #1 worker of MachineDeletionHooksController controller ...
2025-12-13T00:17:51.149500925+00:00 stderr F I1213 00:17:51.149453       1 base_controller.go:73] Caches are synced for InstallerStateController
2025-12-13T00:17:51.149500925+00:00 stderr F I1213 00:17:51.149467       1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ...
2025-12-13T00:17:51.149517925+00:00 stderr F I1213 00:17:51.149465       1 base_controller.go:73] Caches are synced for StaticPodStateController
2025-12-13T00:17:51.149517925+00:00 stderr F I1213 00:17:51.149512       1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ...
2025-12-13T00:17:51.149596677+00:00 stderr F I1213 00:17:51.149552       1 base_controller.go:73] Caches are synced for EtcdEndpointsController
2025-12-13T00:17:51.149596677+00:00 stderr F I1213 00:17:51.149562       1 base_controller.go:110] Starting #1 worker of EtcdEndpointsController controller ...
2025-12-13T00:17:51.149596677+00:00 stderr F I1213 00:17:51.149578       1 base_controller.go:73] Caches are synced for MissingStaticPodController
2025-12-13T00:17:51.149596677+00:00 stderr F I1213 00:17:51.149590       1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-12-13T00:17:51.149607948+00:00 stderr F I1213 00:17:51.149593       1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ...
2025-12-13T00:17:51.149607948+00:00 stderr F I1213 00:17:51.149601       1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-12-13T00:17:51.149645499+00:00 stderr F I1213 00:17:51.149621       1 base_controller.go:73] Caches are synced for NodeController
2025-12-13T00:17:51.149653129+00:00 stderr F I1213 00:17:51.149642       1 base_controller.go:110] Starting #1 worker of NodeController controller ...
2025-12-13T00:17:51.149676159+00:00 stderr F I1213 00:17:51.149658       1 base_controller.go:73] Caches are synced for PruneController
2025-12-13T00:17:51.149676159+00:00 stderr F I1213 00:17:51.149668       1 base_controller.go:110] Starting #1 worker of PruneController controller ...
2025-12-13T00:17:51.149712340+00:00 stderr F I1213 00:17:51.149613       1 base_controller.go:73] Caches are synced for GuardController
2025-12-13T00:17:51.149712340+00:00 stderr F I1213 00:17:51.149695       1 base_controller.go:110] Starting #1 worker of GuardController controller ...
2025-12-13T00:17:51.149712340+00:00 stderr F I1213 00:17:51.149676       1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-12-13T00:17:51.149712340+00:00 stderr F I1213 00:17:51.149706       1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-12-13T00:17:51.149766082+00:00 stderr F I1213 00:17:51.149737       1 base_controller.go:73] Caches are synced for InstallerController
2025-12-13T00:17:51.149766082+00:00 stderr F I1213 00:17:51.149753       1 base_controller.go:110] Starting #1 worker of InstallerController controller ...
2025-12-13T00:17:51.150045919+00:00 stderr F I1213 00:17:51.150002       1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-12-13T00:17:51.150045919+00:00 stderr F I1213 00:17:51.150023       1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-12-13T00:17:51.150134692+00:00 stderr F I1213 00:17:51.150087       1 base_controller.go:73] Caches are synced for BootstrapTeardownController
2025-12-13T00:17:51.150134692+00:00 stderr F I1213 00:17:51.150104       1 base_controller.go:110] Starting #1 worker of BootstrapTeardownController controller ...
2025-12-13T00:17:51.150134692+00:00 stderr F I1213 00:17:51.150108       1 prune_controller.go:269] Nothing to prune
2025-12-13T00:17:51.150134692+00:00 stderr F I1213 00:17:51.150111       1 base_controller.go:73] Caches are synced for DefragController
2025-12-13T00:17:51.150134692+00:00 stderr F I1213 00:17:51.150128       1 base_controller.go:73] Caches are synced for ScriptController
2025-12-13T00:17:51.150146272+00:00 stderr F I1213 00:17:51.150134       1 base_controller.go:110] Starting #1 worker of ScriptController controller ...
2025-12-13T00:17:51.150146272+00:00 stderr F I1213 00:17:51.150135       1 base_controller.go:110] Starting #1 worker of DefragController controller ...
2025-12-13T00:17:51.150146272+00:00 stderr F I1213 00:17:51.150139       1 base_controller.go:73] Caches are synced for ClusterMemberController
2025-12-13T00:17:51.150154432+00:00 stderr F I1213 00:17:51.150146       1 base_controller.go:110] Starting #1 worker of ClusterMemberController controller ...
2025-12-13T00:17:51.150305457+00:00 stderr F I1213 00:17:51.150287       1 etcdcli_pool.go:70] creating a new cached client
2025-12-13T00:17:51.150452391+00:00 stderr F E1213 00:17:51.150413       1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.154239871+00:00 stderr F I1213 00:17:51.154211       1 base_controller.go:73] Caches are synced for TargetConfigController
2025-12-13T00:17:51.154239871+00:00 stderr F I1213 00:17:51.154224       1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ...
2025-12-13T00:17:51.154273162+00:00 stderr F I1213 00:17:51.154241       1 base_controller.go:73] Caches are synced for RevisionController
2025-12-13T00:17:51.154273162+00:00 stderr F I1213 00:17:51.154246       1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-12-13T00:17:51.156179123+00:00 stderr F I1213 00:17:51.156140       1 envvarcontroller.go:199] caches synced
2025-12-13T00:17:51.156425000+00:00 stderr F E1213 00:17:51.156390       1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.156449930+00:00 stderr F I1213 00:17:51.156434       1 base_controller.go:73] Caches are synced for StatusSyncer_etcd
2025-12-13T00:17:51.156449930+00:00 stderr F I1213 00:17:51.156440       1 base_controller.go:110] Starting #1 worker of StatusSyncer_etcd controller ...
2025-12-13T00:17:51.156527602+00:00 stderr F E1213 00:17:51.156494       1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.157978210+00:00 stderr F E1213 00:17:51.157717       1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.160520408+00:00 stderr F E1213 00:17:51.160472       1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
2025-12-13T00:17:51.161843774+00:00 stderr F E1213 00:17:51.161799       1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": missing env var values
2025-12-13T00:17:51.162034389+00:00 stderr F I1213 00:17:51.161995       1 prune_controller.go:269] Nothing to prune
2025-12-13T00:17:51.162532642+00:00 stderr F E1213 00:17:51.162498       1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.162755908+00:00 stderr F I1213 00:17:51.162723       1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-12-13T00:17:51.163292672+00:00 stderr F E1213 00:17:51.163256       1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.163813415+00:00 stderr F E1213 00:17:51.163778       1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.164153804+00:00 stderr F E1213 00:17:51.164098       1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.164702010+00:00 stderr F E1213 00:17:51.164669       1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.167438442+00:00 stderr F E1213 00:17:51.167381       1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.170127034+00:00 stderr F I1213 00:17:51.170059       1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
2025-12-13T00:17:51.173868283+00:00 stderr F E1213 00:17:51.173804       1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.174196182+00:00 stderr F E1213 00:17:51.174146       1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.189317923+00:00 stderr F I1213 00:17:51.189180       1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace \"openshift-etcd\" not found\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-12-13T00:17:51.190258589+00:00 stderr F E1213 00:17:51.190223       1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.191302477+00:00 stderr F E1213 00:17:51.191272       1 base_controller.go:268] EtcdEndpointsController reconciliation failed: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found
2025-12-13T00:17:51.192220861+00:00 stderr F E1213 00:17:51.192191       1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed
to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.192706774+00:00 stderr F E1213 00:17:51.192686 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.193861275+00:00 stderr F I1213 00:17:51.193823 1 prune_controller.go:269] Nothing to prune 2025-12-13T00:17:51.196547396+00:00 stderr F E1213 00:17:51.195852 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.196547396+00:00 stderr F I1213 00:17:51.196251 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace \"openshift-etcd\" not found\nEtcdMembersDegraded: No unhealthy members found" 2025-12-13T00:17:51.197983905+00:00 stderr F E1213 00:17:51.197829 1 base_controller.go:268] 
EtcdEndpointsController reconciliation failed: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.198266432+00:00 stderr F E1213 00:17:51.198232 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.201542979+00:00 stderr F E1213 00:17:51.201521 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.208386761+00:00 stderr F E1213 00:17:51.208349 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.209943922+00:00 stderr F E1213 00:17:51.209890 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.243812573+00:00 stderr F E1213 00:17:51.243750 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:17:51.247922022+00:00 stderr F I1213 00:17:51.247606 1 reflector.go:351] Caches populated for *v1.Secret from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:17:51.252823752+00:00 stderr F E1213 00:17:51.252781 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.254906498+00:00 stderr F E1213 00:17:51.254874 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.276485692+00:00 stderr F E1213 00:17:51.276463 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.336528828+00:00 stderr F E1213 00:17:51.336467 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.368893919+00:00 stderr F E1213 00:17:51.368836 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.407949796+00:00 stderr F E1213 00:17:51.407864 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no 
such host 2025-12-13T00:17:51.438011166+00:00 stderr F E1213 00:17:51.437892 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.484326208+00:00 stderr F I1213 00:17:51.484265 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:17:51.499373998+00:00 stderr F E1213 00:17:51.499295 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.549672255+00:00 stderr F I1213 00:17:51.549604 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-12-13T00:17:51.549784158+00:00 stderr F I1213 00:17:51.549760 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 
2025-12-13T00:17:51.647675701+00:00 stderr F I1213 00:17:51.647615 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:17:51.689421791+00:00 stderr F E1213 00:17:51.689335 1 base_controller.go:268] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.731872779+00:00 stderr F E1213 00:17:51.731808 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:17:51.758838507+00:00 stderr F E1213 00:17:51.758794 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.822513129+00:00 stderr F E1213 00:17:51.822455 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:51.848622024+00:00 stderr F I1213 00:17:51.848503 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:17:51.896126937+00:00 stderr F E1213 00:17:51.896057 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 
2025-12-13T00:17:52.048999131+00:00 stderr F I1213 00:17:52.048885 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:17:52.247247942+00:00 stderr F I1213 00:17:52.247144 1 request.go:697] Waited for 1.191268704s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets?limit=500&resourceVersion=0 2025-12-13T00:17:52.258005299+00:00 stderr F I1213 00:17:52.257885 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:17:52.349255544+00:00 stderr F I1213 00:17:52.349179 1 base_controller.go:73] Caches are synced for EtcdCertSignerController 2025-12-13T00:17:52.349255544+00:00 stderr F I1213 00:17:52.349206 1 base_controller.go:110] Starting #1 worker of EtcdCertSignerController controller ... 2025-12-13T00:17:52.349639775+00:00 stderr F E1213 00:17:52.349594 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: EtcdCertSignerController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:52.356510657+00:00 stderr F E1213 00:17:52.356393 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: EtcdCertSignerController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:52.367769537+00:00 stderr F E1213 00:17:52.367610 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: EtcdCertSignerController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 
2025-12-13T00:17:52.376176740+00:00 stderr F E1213 00:17:52.376114 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:17:52.388919099+00:00 stderr F E1213 00:17:52.388863 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: EtcdCertSignerController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:52.399979984+00:00 stderr F E1213 00:17:52.399893 1 base_controller.go:268] TargetConfigController reconciliation failed: TargetConfigController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:52.430219627+00:00 stderr F E1213 00:17:52.430123 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: EtcdCertSignerController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:52.467915650+00:00 stderr F E1213 00:17:52.467776 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:52.484202833+00:00 stderr F I1213 00:17:52.484115 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:17:52.511395885+00:00 stderr F E1213 00:17:52.511316 1 base_controller.go:268] EtcdCertSignerController reconciliation 
failed: EtcdCertSignerController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace "openshift-etcd" not found 2025-12-13T00:17:52.549644003+00:00 stderr F I1213 00:17:52.549579 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-12-13T00:17:52.549747975+00:00 stderr F I1213 00:17:52.549724 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-12-13T00:17:52.549813177+00:00 stderr F I1213 00:17:52.549764 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-12-13T00:17:52.557278066+00:00 stderr F I1213 00:17:52.557209 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-12-13T00:17:52.650729240+00:00 stderr F I1213 00:17:52.650651 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:17:52.655207350+00:00 stderr F I1213 00:17:52.655015 1 base_controller.go:73] Caches are synced for EtcdStaticResources 2025-12-13T00:17:52.655207350+00:00 stderr F I1213 00:17:52.655046 1 base_controller.go:110] Starting #1 worker of EtcdStaticResources controller ... 
2025-12-13T00:17:53.246541432+00:00 stderr F I1213 00:17:53.246448 1 request.go:697] Waited for 2.095868615s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-crc 2025-12-13T00:17:53.474798541+00:00 stderr F I1213 00:17:53.474384 1 prune_controller.go:269] Nothing to prune 2025-12-13T00:17:53.475124429+00:00 stderr F I1213 00:17:53.475029 1 etcdcli_pool.go:70] creating a new cached client 2025-12-13T00:17:53.476479126+00:00 stderr F I1213 00:17:53.476321 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace \"openshift-etcd\" not found\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:17:53.488159096+00:00 stderr F I1213 00:17:53.486230 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", 
UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace \"openshift-etcd\" not found\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace \"openshift-etcd\" not found\nEtcdMembersDegraded: No unhealthy members found" 2025-12-13T00:17:53.659752548+00:00 stderr F E1213 00:17:53.659696 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:17:53.692855658+00:00 stderr F I1213 00:17:53.692762 1 prune_controller.go:269] Nothing to prune 2025-12-13T00:17:53.693138956+00:00 stderr F I1213 00:17:53.692961 1 status_controller.go:218] clusteroperator/etcd diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:46Z","message":"NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"NodeInstallerProgressing: 1 node is at revision 3\nEtcdMembersProgressing: No unstarted etcd members 
found","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:52:01Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3\nEtcdMembersAvailable: 1 members are available","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:15Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:17:53.698863748+00:00 stderr F I1213 00:17:53.698800 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace \"openshift-etcd\" not found\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" 2025-12-13T00:17:54.246596462+00:00 stderr F I1213 00:17:54.246504 1 request.go:697] Waited for 1.0677723s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod 2025-12-13T00:17:55.446417963+00:00 stderr F I1213 00:17:55.446095 1 request.go:697] Waited for 1.196073771s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/restore-etcd-pod 2025-12-13T00:17:56.224873341+00:00 stderr F E1213 00:17:56.224811 1 
base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:18:01.351404096+00:00 stderr F E1213 00:18:01.350670 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:18:11.599525956+00:00 stderr F E1213 00:18:11.598849 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:18:32.087721494+00:00 stderr F E1213 00:18:32.087228 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:18:51.055041707+00:00 stderr F E1213 00:18:51.054720 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:19:13.055187515+00:00 stderr F E1213 00:19:13.054980 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:19:37.577409053+00:00 stderr F I1213 00:19:37.576834 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.576786736 +0000 UTC))" 2025-12-13T00:19:37.577409053+00:00 stderr F I1213 00:19:37.577213 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.577187327 +0000 UTC))" 2025-12-13T00:19:37.577409053+00:00 stderr F I1213 00:19:37.577246 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.577222538 +0000 UTC))" 2025-12-13T00:19:37.577409053+00:00 stderr F I1213 00:19:37.577277 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.577253069 +0000 UTC))" 2025-12-13T00:19:37.577409053+00:00 stderr F I1213 00:19:37.577307 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.5772852 +0000 UTC))" 2025-12-13T00:19:37.577409053+00:00 stderr F I1213 00:19:37.577353 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.57731653 +0000 UTC))" 2025-12-13T00:19:37.577409053+00:00 stderr F I1213 00:19:37.577390 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.577368052 +0000 UTC))" 2025-12-13T00:19:37.577489585+00:00 stderr F I1213 00:19:37.577419 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.577397303 +0000 UTC))" 2025-12-13T00:19:37.577489585+00:00 stderr F I1213 00:19:37.577449 1 tlsconfig.go:178] "Loaded client CA" index=8 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.577426843 +0000 UTC))" 2025-12-13T00:19:37.577489585+00:00 stderr F I1213 00:19:37.577476 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.577457684 +0000 UTC))" 2025-12-13T00:19:37.577607248+00:00 stderr F I1213 00:19:37.577510 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.577487535 +0000 UTC))" 2025-12-13T00:19:37.577607248+00:00 stderr F I1213 00:19:37.577552 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.577526346 +0000 UTC))" 2025-12-13T00:19:37.578186195+00:00 stderr F I1213 00:19:37.578126 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-etcd-operator.svc\" [serving] validServingFor=[metrics.openshift-etcd-operator.svc,metrics.openshift-etcd-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:18 +0000 UTC to 2027-08-13 20:00:19 +0000 UTC (now=2025-12-13 00:19:37.578092713 +0000 UTC))" 2025-12-13T00:19:37.578703859+00:00 stderr F I1213 00:19:37.578664 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584796\" (2025-12-12 23:13:15 +0000 UTC to 2026-12-12 23:13:15 +0000 UTC (now=2025-12-13 00:19:37.578638137 +0000 UTC))" 2025-12-13T00:19:51.059084495+00:00 stderr F E1213 00:19:51.057718 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:20:50.952309300+00:00 stderr F E1213 00:20:50.951783 1 leaderelection.go:332] error retrieving resource lock openshift-etcd-operator/openshift-cluster-etcd-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.057379125+00:00 stderr F E1213 00:20:51.057308 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:20:51.152118261+00:00 stderr F E1213 00:20:51.152040 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.153145869+00:00 stderr F E1213 00:20:51.153041 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.156019716+00:00 stderr F E1213 00:20:51.155898 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.156019716+00:00 stderr F I1213 00:20:51.155971 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.156922021+00:00 stderr F E1213 00:20:51.156878 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.18809e6e73d08f1a openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection 
refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-12-13 00:20:51.155873562 +0000 UTC m=+455.932164091,LastTimestamp:2025-12-13 00:20:51.155873562 +0000 UTC m=+455.932164091,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-12-13T00:20:51.158360080+00:00 stderr F E1213 00:20:51.158302 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.158360080+00:00 stderr F E1213 00:20:51.158306 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.158422911+00:00 stderr F I1213 00:20:51.158386 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.159063889+00:00 stderr F E1213 00:20:51.159038 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.18809e6e73f54143 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-12-13 00:20:51.158278467 +0000 UTC m=+455.934568996,LastTimestamp:2025-12-13 00:20:51.158278467 +0000 UTC m=+455.934568996,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-12-13T00:20:51.160030205+00:00 stderr F E1213 00:20:51.159983 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.160048916+00:00 stderr F I1213 00:20:51.160027 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.160168629+00:00 stderr F E1213 00:20:51.160135 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.160622631+00:00 stderr F E1213 00:20:51.160585 1 
base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.163792106+00:00 stderr F E1213 00:20:51.163742 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.163792106+00:00 stderr F I1213 00:20:51.163780 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.168618517+00:00 stderr F E1213 00:20:51.168573 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.168651128+00:00 stderr F I1213 00:20:51.168611 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.182614885+00:00 stderr F E1213 00:20:51.182565 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.182641575+00:00 stderr F I1213 00:20:51.182615 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.208064921+00:00 stderr F E1213 00:20:51.207881 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.208064921+00:00 stderr F I1213 00:20:51.207958 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.353551887+00:00 stderr F E1213 00:20:51.353481 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.353725082+00:00 stderr F I1213 00:20:51.353538 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.553746909+00:00 stderr F E1213 00:20:51.553694 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.752842912+00:00 stderr F E1213 00:20:51.752794 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.753458229+00:00 stderr F E1213 00:20:51.753423 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.753487070+00:00 stderr F I1213 00:20:51.753465 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.955109770+00:00 stderr F E1213 00:20:51.954557 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.955109770+00:00 stderr F I1213 00:20:51.955067 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:52.153710129+00:00 stderr F E1213 00:20:52.153386 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:52.153747470+00:00 stderr F I1213 00:20:52.153466 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:52.353476330+00:00 stderr F E1213 00:20:52.353407 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:52.554425033+00:00 stderr F E1213 00:20:52.554302 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial 
tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:52.555281685+00:00 stderr F E1213 00:20:52.555132 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:52.755571640+00:00 stderr F E1213 00:20:52.755419 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:52.755889299+00:00 stderr F I1213 00:20:52.755726 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:52.756506076+00:00 stderr F E1213 00:20:52.756468 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.18809e6ed3276045 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put 
\"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-12-13 00:20:52.755398725 +0000 UTC m=+457.531689254,LastTimestamp:2025-12-13 00:20:52.755398725 +0000 UTC m=+457.531689254,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-12-13T00:20:52.954810737+00:00 stderr F E1213 00:20:52.954754 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:52.954855378+00:00 stderr F I1213 00:20:52.954822 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.153606301+00:00 stderr F E1213 00:20:53.153545 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.153745475+00:00 stderr F I1213 00:20:53.153704 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): 
type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.363567616+00:00 stderr F E1213 00:20:53.363306 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.363567616+00:00 stderr F I1213 00:20:53.363374 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.556733869+00:00 stderr F E1213 00:20:53.556669 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.562471294+00:00 stderr F E1213 00:20:53.562395 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.751748412+00:00 stderr F I1213 00:20:53.751162 1 request.go:697] Waited for 1.093812007s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd 2025-12-13T00:20:53.759228824+00:00 stderr F E1213 00:20:53.759165 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.954758860+00:00 stderr F E1213 00:20:53.954682 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:54.153749710+00:00 stderr F E1213 00:20:54.153701 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.153789461+00:00 stderr F I1213 00:20:54.153751 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.356058159+00:00 stderr F E1213 00:20:54.355874 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.356058159+00:00 stderr F E1213 
00:20:54.355915 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.356058159+00:00 stderr F I1213 00:20:54.356000 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.553472676+00:00 stderr F E1213 00:20:54.553408 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.553554648+00:00 stderr F I1213 00:20:54.553504 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.753829603+00:00 stderr F E1213 00:20:54.753768 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.753908465+00:00 stderr F I1213 00:20:54.753864 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.954108047+00:00 stderr F E1213 00:20:54.954039 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.153481607+00:00 stderr F E1213 00:20:55.153406 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.153520908+00:00 stderr F I1213 00:20:55.153477 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.352594519+00:00 stderr F E1213 00:20:55.352533 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.353826683+00:00 stderr F E1213 00:20:55.353773 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:55.381658384+00:00 stderr F E1213 00:20:55.381556 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.18809e6e73f54143 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-12-13 00:20:51.158278467 +0000 UTC m=+455.934568996,LastTimestamp:2025-12-13 00:20:51.158278467 +0000 UTC m=+455.934568996,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-12-13T00:20:55.751722200+00:00 stderr F I1213 00:20:55.751275 1 request.go:697] Waited for 1.197481525s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts 2025-12-13T00:20:55.753585341+00:00 stderr F E1213 00:20:55.753556 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.753623702+00:00 stderr F I1213 00:20:55.753597 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.953497325+00:00 stderr F E1213 00:20:55.953448 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.953685290+00:00 stderr F I1213 00:20:55.953659 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:56.066161405+00:00 stderr F E1213 00:20:56.066084 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:56.154216872+00:00 
stderr F E1213 00:20:56.154159 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:56.154255953+00:00 stderr F I1213 00:20:56.154212 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:56.353421847+00:00 stderr F E1213 00:20:56.353260 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:56.354239999+00:00 stderr F E1213 00:20:56.354171 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:56.554004549+00:00 stderr F E1213 00:20:56.553950 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:56.554055891+00:00 stderr F I1213 00:20:56.554009 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 
'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:56.753427451+00:00 stderr F E1213 00:20:56.753350 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:56.953172611+00:00 stderr F E1213 00:20:56.953125 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:56.953202892+00:00 stderr F I1213 00:20:56.953178 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.153615790+00:00 stderr F E1213 00:20:57.153526 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 
2025-12-13T00:20:57.153868716+00:00 stderr F I1213 00:20:57.153832 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.355883848+00:00 stderr F E1213 00:20:57.355797 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:57.554083606+00:00 stderr F E1213 00:20:57.554002 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.753141318+00:00 stderr F E1213 00:20:57.753060 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.754022402+00:00 stderr F E1213 00:20:57.753971 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.861486852+00:00 stderr F E1213 00:20:57.861425 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.18809e6e73d08f1a openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-12-13 00:20:51.155873562 +0000 UTC m=+455.932164091,LastTimestamp:2025-12-13 00:20:51.155873562 +0000 UTC m=+455.932164091,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-12-13T00:20:57.953516775+00:00 stderr F E1213 00:20:57.953445 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:58.153650225+00:00 stderr F E1213 00:20:58.153567 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.153779659+00:00 stderr F I1213 00:20:58.153694 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.353558720+00:00 stderr F E1213 00:20:58.353492 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.353626212+00:00 stderr F I1213 00:20:58.353582 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.556129636+00:00 stderr F E1213 00:20:58.554154 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.754503860+00:00 stderr F E1213 00:20:58.754446 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:59.118499672+00:00 stderr F E1213 00:20:59.118421 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.118499672+00:00 stderr F I1213 00:20:59.118475 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.154780491+00:00 stderr F E1213 00:20:59.154705 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.354453389+00:00 stderr F E1213 00:20:59.354388 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.354534241+00:00 stderr F I1213 00:20:59.354491 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-12-13T00:20:59.554006444+00:00 stderr F E1213 00:20:59.553797 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.554006444+00:00 stderr F I1213 00:20:59.553846 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.753868488+00:00 stderr F E1213 00:20:59.753803 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:59.952198519+00:00 stderr F E1213 00:20:59.951955 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.351137464+00:00 stderr F I1213 00:21:00.351069 1 request.go:697] Waited for 1.155990415s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-crc 2025-12-13T00:21:00.353540149+00:00 stderr F E1213 00:21:00.353468 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.555469208+00:00 stderr F E1213 00:21:00.555419 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.555530980+00:00 stderr F I1213 00:21:00.555504 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.756880403+00:00 stderr F E1213 00:21:00.756836 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:00.954642799+00:00 stderr F E1213 00:21:00.954605 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:01.153819694+00:00 stderr F E1213 00:21:01.153758 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:01.352848875+00:00 stderr F E1213 00:21:01.352803 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:01.353173344+00:00 stderr F I1213 00:21:01.353130 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:01.956142975+00:00 stderr F E1213 00:21:01.953272 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:01.956142975+00:00 stderr F I1213 00:21:01.953481 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.106875782+00:00 stderr F E1213 00:21:02.106803 1 event.go:355] "Unable to write event 
(may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.18809e6ed3276045 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-12-13 00:20:52.755398725 +0000 UTC m=+457.531689254,LastTimestamp:2025-12-13 00:20:52.755398725 +0000 UTC m=+457.531689254,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-12-13T00:21:02.117780176+00:00 stderr F E1213 00:21:02.117731 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.117814666+00:00 stderr F I1213 00:21:02.117776 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-12-13T00:21:02.363065705+00:00 stderr F E1213 00:21:02.362134 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:02.554314275+00:00 stderr F E1213 00:21:02.554015 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.754987861+00:00 stderr F E1213 00:21:02.754848 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.955005018+00:00 stderr F E1213 00:21:02.954871 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.955074650+00:00 stderr F I1213 00:21:02.955033 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:03.352421592+00:00 stderr F E1213 00:21:03.352327 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:03.754139583+00:00 stderr F E1213 00:21:03.754073 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:03.754206395+00:00 stderr F I1213 00:21:03.754172 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:03.956640127+00:00 stderr F E1213 00:21:03.956578 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, 
"etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:04.153889779+00:00 stderr F E1213 00:21:04.153816 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:04.243950989+00:00 stderr F E1213 00:21:04.243876 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:04.243998001+00:00 stderr 
F I1213 00:21:04.243927 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:04.355016327+00:00 stderr F E1213 00:21:04.354888 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:04.553924674+00:00 stderr F E1213 00:21:04.553850 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:04.554017187+00:00 stderr F I1213 00:21:04.553954 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.153307419+00:00 stderr F E1213 
00:21:05.153248 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.153353410+00:00 stderr F I1213 00:21:05.153297 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.383286305+00:00 stderr F E1213 00:21:05.383209 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.18809e6e73f54143 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-12-13 00:20:51.158278467 +0000 UTC m=+455.934568996,LastTimestamp:2025-12-13 00:20:51.158278467 +0000 UTC m=+455.934568996,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 
2025-12-13T00:21:05.554237928+00:00 stderr F E1213 00:21:05.554162 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.757636817+00:00 stderr F E1213 00:21:05.757565 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.954853798+00:00 stderr F E1213 00:21:05.954783 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.954907379+00:00 stderr F I1213 00:21:05.954849 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:06.556604076+00:00 stderr F E1213 00:21:06.556519 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:06.753326304+00:00 stderr F E1213 00:21:06.753251 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:06.954638517+00:00 stderr F E1213 00:21:06.954525 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:06.954638517+00:00 stderr F I1213 00:21:06.954588 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:07.241114947+00:00 stderr F E1213 00:21:07.241038 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:07.241171708+00:00 stderr F I1213 00:21:07.241107 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:07.555312525+00:00 stderr F E1213 00:21:07.555222 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection 
refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:07.864149980+00:00 stderr F E1213 00:21:07.864061 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.18809e6e73d08f1a openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-12-13 00:20:51.155873562 +0000 UTC m=+455.932164091,LastTimestamp:2025-12-13 00:20:51.155873562 +0000 UTC m=+455.932164091,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-12-13T00:21:07.955080044+00:00 stderr F E1213 00:21:07.955008 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:07.955289239+00:00 stderr F I1213 00:21:07.955077 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 
'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:08.156519319+00:00 stderr F E1213 00:21:08.156423 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:08.755910404+00:00 stderr F E1213 00:21:08.755851 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:08.952908739+00:00 stderr F E1213 00:21:08.952831 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:09.155012403+00:00 stderr F E1213 00:21:09.154909 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:09.955457744+00:00 stderr F E1213 00:21:09.954904 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:09.955457744+00:00 stderr F I1213 00:21:09.955042 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-12-13T00:21:10.154089484+00:00 stderr F E1213 00:21:10.154027 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:10.755086812+00:00 stderr F E1213 00:21:10.755006 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:10.955654834+00:00 stderr F E1213 00:21:10.955615 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:10.955851239+00:00 stderr F I1213 00:21:10.955778 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:11.155392604+00:00 stderr F E1213 00:21:11.155324 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:11.353596002+00:00 stderr F E1213 00:21:11.353533 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:12.108309838+00:00 stderr F E1213 00:21:12.108222 1 event.go:355] "Unable to write event (may retry after sleeping)" 
err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.18809e6ed3276045 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdCertSignerControllerUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,Host:,},FirstTimestamp:2025-12-13 00:20:52.755398725 +0000 UTC m=+457.531689254,LastTimestamp:2025-12-13 00:20:52.755398725 +0000 UTC m=+457.531689254,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller,ReportingInstance:,}" 2025-12-13T00:21:12.155827560+00:00 stderr F E1213 00:21:12.154491 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:12.756146400+00:00 stderr F E1213 00:21:12.756074 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: 
connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:12.953776533+00:00 stderr F E1213 00:21:12.953716 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:13.155243439+00:00 stderr F E1213 00:21:13.155154 1 base_controller.go:268] BackingResourceController reconciliation 
failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-etcd-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:13.755302321+00:00 stderr F E1213 00:21:13.755239 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:14.355445106+00:00 stderr F E1213 00:21:14.355373 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:14.492325959+00:00 stderr F E1213 00:21:14.492290 1 base_controller.go:268] EtcdEndpointsController reconciliation failed: applying configmap update failed :Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-endpoints": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:14.492493504+00:00 stderr F I1213 00:21:14.492386 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdEndpointsErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:14.754923175+00:00 stderr F E1213 00:21:14.754655 1 
base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:15.384469284+00:00 stderr F E1213 00:21:15.384233 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.18809e6e73f54143 openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:EtcdEndpointsErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,Host:,},FirstTimestamp:2025-12-13 00:20:51.158278467 +0000 UTC m=+455.934568996,LastTimestamp:2025-12-13 00:20:51.158278467 +0000 UTC m=+455.934568996,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller,ReportingInstance:,}" 2025-12-13T00:21:15.553694871+00:00 stderr F E1213 00:21:15.553647 1 base_controller.go:268] ScriptController reconciliation failed: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:15.553740912+00:00 stderr F I1213 00:21:15.553709 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:15.954335682+00:00 stderr F E1213 00:21:15.953887 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:16.156645971+00:00 stderr F E1213 00:21:16.156602 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" 
(string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:16.354148550+00:00 stderr F E1213 00:21:16.354105 1 base_controller.go:266] "ScriptController" controller failed to sync "", err: "configmap/etcd-pod": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-scripts": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:16.354221812+00:00 stderr F I1213 00:21:16.354162 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ScriptControllerErrorUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:17.319208152+00:00 stderr F E1213 00:21:17.319080 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:17.484703838+00:00 stderr F E1213 00:21:17.484668 1 base_controller.go:268] EtcdCertSignerController reconciliation failed: error getting openshift-config/etcd-signer: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config/secrets/etcd-signer": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:17.484823651+00:00 stderr F I1213 00:21:17.484798 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-etcd-operator", Name:"etcd-operator", UID:"fb798e33-6d4c-4082-a60c-594a9db7124a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'EtcdCertSignerControllerUpdatingStatus' Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:17.557777310+00:00 stderr F E1213 00:21:17.557727 1 base_controller.go:268] EtcdStaticResources reconciliation failed: ["etcd/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/services/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/minimal-sm.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/prometheus-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s": dial 
tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-backup-sa": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-cr.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:operator:etcd-backup-role": dial tcp 10.217.4.1:443: connect: connection refused, "etcd/backups-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:17.865832313+00:00 stderr F E1213 00:21:17.865769 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-etcd-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{etcd-operator.18809e6e73d08f1a openshift-etcd-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-etcd-operator,Name:etcd-operator,UID:fb798e33-6d4c-4082-a60c-594a9db7124a,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ScriptControllerErrorUpdatingStatus,Message:Put \"https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:openshift-cluster-etcd-operator-script-controller-scriptcontroller,Host:,},FirstTimestamp:2025-12-13 00:20:51.155873562 +0000 UTC m=+455.932164091,LastTimestamp:2025-12-13 00:20:51.155873562 +0000 UTC m=+455.932164091,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:openshift-cluster-etcd-operator-script-controller-scriptcontroller,ReportingInstance:,}" 2025-12-13T00:21:18.520150140+00:00 stderr F E1213 00:21:18.520077 1 base_controller.go:266] "TargetConfigController" controller failed to sync "", err: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:19.194396603+00:00 stderr F E1213 00:21:19.194310 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:51.060394383+00:00 stderr F E1213 00:21:51.060205 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host 2025-12-13T00:21:55.905278362+00:00 stderr F I1213 00:21:55.904771 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:56.901606215+00:00 stderr F E1213 00:21:56.901540 1 base_controller.go:268] FSyncController reconciliation failed: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 10.217.4.10:53: no such host
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner/0.log
2025-08-13T19:59:23.961605895+00:00 stderr F I0813 19:59:23.946532 1 cmd.go:40] &{ true {false} prune true map[cert-dir:0xc000543180 max-eligible-revision:0xc000542f00 protected-revisions:0xc000542fa0 resource-dir:0xc000543040 static-pod-name:0xc0005430e0 v:0xc000543860] [0xc000543860 0xc000542f00 0xc000542fa0 0xc000543040 0xc000543180 0xc0005430e0] [] map[cert-dir:0xc000543180 help:0xc000543c20 log-flush-frequency:0xc0005437c0 max-eligible-revision:0xc000542f00 protected-revisions:0xc000542fa0 resource-dir:0xc000543040 static-pod-name:0xc0005430e0 v:0xc000543860 vmodule:0xc000543900] [0xc000542f00 0xc000542fa0 0xc000543040 0xc0005430e0 0xc000543180 0xc0005437c0 0xc000543860 0xc000543900 0xc000543c20] [0xc000543180 0xc000543c20 0xc0005437c0 0xc000542f00 0xc000542fa0 0xc000543040 0xc0005430e0 0xc000543860 0xc000543900] map[104:0xc000543c20 118:0xc000543860] [] -1 0 0xc000548120 true 0x73b100 []} 2025-08-13T19:59:23.962267703+00:00 stderr F I0813 19:59:23.962243 1 cmd.go:41] (*prune.PruneOptions)(0xc0004fe2d0)({ 2025-08-13T19:59:23.962267703+00:00 stderr F MaxEligibleRevision: (int) 8, 2025-08-13T19:59:23.962267703+00:00 stderr F ProtectedRevisions: ([]int) (len=5 cap=5) {
2025-08-13T19:59:23.962267703+00:00 stderr F (int) 4, 2025-08-13T19:59:23.962267703+00:00 stderr F (int) 5, 2025-08-13T19:59:23.962267703+00:00 stderr F (int) 6, 2025-08-13T19:59:23.962267703+00:00 stderr F (int) 7, 2025-08-13T19:59:23.962267703+00:00 stderr F (int) 8 2025-08-13T19:59:23.962267703+00:00 stderr F }, 2025-08-13T19:59:23.962267703+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T19:59:23.962267703+00:00 stderr F CertDir: (string) (len=29) "kube-controller-manager-certs", 2025-08-13T19:59:23.962267703+00:00 stderr F StaticPodName: (string) (len=27) "kube-controller-manager-pod" 2025-08-13T19:59:23.962267703+00:00 stderr F }) ././@LongLink0000644000000000000000000000024200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_netwo0000755000175000017500000000000015117130647033222 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_netwo0000755000175000017500000000000015117130654033220 5ustar zuulzuul././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_netwo0000644000175000017500000006530715117130647033237 0ustar zuulzuul2025-12-13T00:13:15.018290460+00:00 stderr 
F I1213 00:13:15.016958 1 main.go:45] Version:216149b14d9cb61ae90ac65d839a448fb11075bb
2025-12-13T00:13:15.018290460+00:00 stderr F I1213 00:13:15.017279 1 main.go:46] Starting with config{ :9091 crc}
2025-12-13T00:13:15.020890177+00:00 stderr F W1213 00:13:15.019507 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-12-13T00:13:15.024207058+00:00 stderr F I1213 00:13:15.023605 1 controller.go:42] Setting up event handlers
2025-12-13T00:13:15.024207058+00:00 stderr F I1213 00:13:15.023738 1 podmetrics.go:101] Serving network metrics
2025-12-13T00:13:15.024207058+00:00 stderr F I1213 00:13:15.023745 1 controller.go:101] Starting pod controller
2025-12-13T00:13:15.024207058+00:00 stderr F I1213 00:13:15.023748 1 controller.go:104] Waiting for informer caches to sync
2025-12-13T00:13:15.325435981+00:00 stderr F I1213 00:13:15.324676 1 controller.go:109] Starting workers
2025-12-13T00:13:15.325435981+00:00 stderr F I1213 00:13:15.325159 1 controller.go:114] Started workers
2025-12-13T00:13:15.325435981+00:00 stderr F I1213 00:13:15.325252 1 controller.go:192] Received pod 'csi-hostpathplugin-hvm8g'
2025-12-13T00:13:15.325435981+00:00 stderr F I1213 00:13:15.325359 1 controller.go:151] Successfully synced 'hostpath-provisioner/csi-hostpathplugin-hvm8g'
2025-12-13T00:13:15.325435981+00:00 stderr F I1213 00:13:15.325367 1 controller.go:192] Received pod 'openshift-apiserver-operator-7c88c4c865-kn67m'
2025-12-13T00:13:15.325435981+00:00 stderr F I1213 00:13:15.325378 1 controller.go:151] Successfully synced 'openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m'
2025-12-13T00:13:15.325435981+00:00 stderr F I1213 00:13:15.325383 1 controller.go:192] Received pod 'apiserver-7fc54b8dd7-d2bhp'
2025-12-13T00:13:15.325435981+00:00 stderr F I1213 00:13:15.325392 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-7fc54b8dd7-d2bhp'
2025-12-13T00:13:15.325435981+00:00 stderr F I1213 00:13:15.325395 1 controller.go:192] Received pod 'authentication-operator-7cc7ff75d5-g9qv8'
2025-12-13T00:13:15.325435981+00:00 stderr F I1213 00:13:15.325404 1 controller.go:151] Successfully synced 'openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8'
2025-12-13T00:13:15.325435981+00:00 stderr F I1213 00:13:15.325408 1 controller.go:192] Received pod 'oauth-openshift-74fc7c67cc-xqf8b'
2025-12-13T00:13:15.325435981+00:00 stderr F I1213 00:13:15.325427 1 controller.go:151] Successfully synced 'openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b'
2025-12-13T00:13:15.325480042+00:00 stderr F I1213 00:13:15.325432 1 controller.go:192] Received pod 'cluster-samples-operator-bc474d5d6-wshwg'
2025-12-13T00:13:15.325480042+00:00 stderr F I1213 00:13:15.325442 1 controller.go:151] Successfully synced 'openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg'
2025-12-13T00:13:15.325480042+00:00 stderr F I1213 00:13:15.325449 1 controller.go:192] Received pod 'openshift-config-operator-77658b5b66-dq5sc'
2025-12-13T00:13:15.325480042+00:00 stderr F I1213 00:13:15.325457 1 controller.go:151] Successfully synced 'openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc'
2025-12-13T00:13:15.325480042+00:00 stderr F I1213 00:13:15.325461 1 controller.go:192] Received pod 'console-conversion-webhook-595f9969b-l6z49'
2025-12-13T00:13:15.325480042+00:00 stderr F I1213 00:13:15.325469 1 controller.go:151] Successfully synced 'openshift-console-operator/console-conversion-webhook-595f9969b-l6z49'
2025-12-13T00:13:15.325480042+00:00 stderr F I1213 00:13:15.325475 1 controller.go:192] Received pod 'console-operator-5dbbc74dc9-cp5cd'
2025-12-13T00:13:15.325510893+00:00 stderr F I1213 00:13:15.325491 1 controller.go:151] Successfully synced 'openshift-console-operator/console-operator-5dbbc74dc9-cp5cd'
2025-12-13T00:13:15.325510893+00:00 stderr F I1213 00:13:15.325495 1 controller.go:192] Received pod 'console-644bb77b49-5x5xk'
2025-12-13T00:13:15.325510893+00:00 stderr F I1213 00:13:15.325504 1 controller.go:151] Successfully synced 'openshift-console/console-644bb77b49-5x5xk'
2025-12-13T00:13:15.325510893+00:00 stderr F I1213 00:13:15.325508 1 controller.go:192] Received pod 'downloads-65476884b9-9wcvx'
2025-12-13T00:13:15.325539834+00:00 stderr F I1213 00:13:15.325516 1 controller.go:151] Successfully synced 'openshift-console/downloads-65476884b9-9wcvx'
2025-12-13T00:13:15.325539834+00:00 stderr F I1213 00:13:15.325520 1 controller.go:192] Received pod 'openshift-controller-manager-operator-7978d7d7f6-2nt8z'
2025-12-13T00:13:15.325539834+00:00 stderr F I1213 00:13:15.325531 1 controller.go:151] Successfully synced 'openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z'
2025-12-13T00:13:15.325539834+00:00 stderr F I1213 00:13:15.325534 1 controller.go:192] Received pod 'controller-manager-778975cc4f-x5vcf'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325553 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-778975cc4f-x5vcf'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325562 1 controller.go:192] Received pod 'dns-operator-75f687757b-nz2xb'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325572 1 controller.go:151] Successfully synced 'openshift-dns-operator/dns-operator-75f687757b-nz2xb'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325576 1 controller.go:192] Received pod 'dns-default-gbw49'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325584 1 controller.go:151] Successfully synced 'openshift-dns/dns-default-gbw49'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325588 1 controller.go:192] Received pod 'etcd-operator-768d5b5d86-722mg'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325596 1 controller.go:151] Successfully synced 'openshift-etcd-operator/etcd-operator-768d5b5d86-722mg'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325600 1 controller.go:192] Received pod 'cluster-image-registry-operator-7769bd8d7d-q5cvv'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325608 1 controller.go:151] Successfully synced 'openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325615 1 controller.go:192] Received pod 'image-registry-75779c45fd-v2j2v'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325624 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-75779c45fd-v2j2v'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325628 1 controller.go:192] Received pod 'ingress-canary-2vhcn'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325636 1 controller.go:151] Successfully synced 'openshift-ingress-canary/ingress-canary-2vhcn'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325640 1 controller.go:192] Received pod 'ingress-operator-7d46d5bb6d-rrg6t'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325648 1 controller.go:151] Successfully synced 'openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325652 1 controller.go:192] Received pod 'kube-apiserver-operator-78d54458c4-sc8h7'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325660 1 controller.go:151] Successfully synced 'openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325664 1 controller.go:192] Received pod 'installer-12-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325672 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-12-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325675 1 controller.go:192] Received pod 'installer-9-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325686 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-9-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325690 1 controller.go:192] Received pod 'kube-controller-manager-operator-6f6cb54958-rbddb'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325707 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325711 1 controller.go:192] Received pod 'installer-10-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325725 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-10-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325728 1 controller.go:192] Received pod 'installer-10-retry-1-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325740 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-10-retry-1-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325746 1 controller.go:192] Received pod 'installer-11-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325759 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-11-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325763 1 controller.go:192] Received pod 'revision-pruner-10-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325771 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-10-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325775 1 controller.go:192] Received pod 'revision-pruner-11-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325783 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-11-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325787 1 controller.go:192] Received pod 'revision-pruner-8-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325795 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-8-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325799 1 controller.go:192] Received pod 'revision-pruner-9-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325807 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-9-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325811 1 controller.go:192] Received pod 'openshift-kube-scheduler-operator-5d9b995f6b-fcgd7'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325821 1 controller.go:151] Successfully synced 'openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325824 1 controller.go:192] Received pod 'installer-7-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325835 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/installer-7-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325839 1 controller.go:192] Received pod 'installer-8-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325847 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/installer-8-crc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325851 1 controller.go:192] Received pod 'kube-storage-version-migrator-operator-686c6c748c-qbnnr'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325859 1 controller.go:151] Successfully synced 'openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325865 1 controller.go:192] Received pod 'migrator-f7c6d88df-q2fnv'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325876 1 controller.go:151] Successfully synced 'openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325879 1 controller.go:192] Received pod 'control-plane-machine-set-operator-649bd778b4-tt5tw'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325888 1 controller.go:151] Successfully synced 'openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325892 1 controller.go:192] Received pod 'machine-api-operator-788b7c6b6c-ctdmb'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325900 1 controller.go:151] Successfully synced 'openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325904 1 controller.go:192] Received pod 'machine-config-controller-6df6df6b6b-58shh'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325912 1 controller.go:151] Successfully synced 'openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325915 1 controller.go:192] Received pod 'machine-config-operator-76788bff89-wkjgm'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325923 1 controller.go:151] Successfully synced 'openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325945 1 controller.go:192] Received pod 'certified-operators-7287f'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325958 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-7287f'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325962 1 controller.go:192] Received pod 'community-operators-8jhz6'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325970 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-8jhz6'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325974 1 controller.go:192] Received pod 'community-operators-sdddl'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325982 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-sdddl'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325986 1 controller.go:192] Received pod 'marketplace-operator-8b455464d-f9xdt'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.325994 1 controller.go:151] Successfully synced 'openshift-marketplace/marketplace-operator-8b455464d-f9xdt'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.326000 1 controller.go:192] Received pod 'redhat-marketplace-8s8pc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.326008 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-8s8pc'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.326012 1 controller.go:192] Received pod 'redhat-operators-f4jkp'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.326021 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-f4jkp'
2025-12-13T00:13:15.326292609+00:00 stderr F I1213 00:13:15.326025 1 controller.go:192] Received pod 'multus-admission-controller-6c7c885997-4hbbc'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328603 1 controller.go:151] Successfully synced 'openshift-multus/multus-admission-controller-6c7c885997-4hbbc'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328658 1 controller.go:192] Received pod 'network-metrics-daemon-qdfr4'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328685 1 controller.go:151] Successfully synced 'openshift-multus/network-metrics-daemon-qdfr4'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328689 1 controller.go:192] Received pod 'network-check-source-5c5478f8c-vqvt7'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328816 1 controller.go:151] Successfully synced 'openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328821 1 controller.go:192] Received pod 'network-check-target-v54bt'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328839 1 controller.go:151] Successfully synced 'openshift-network-diagnostics/network-check-target-v54bt'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328844 1 controller.go:192] Received pod 'apiserver-69c565c9b6-vbdpd'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328862 1 controller.go:151] Successfully synced 'openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328867 1 controller.go:192] Received pod 'catalog-operator-857456c46-7f5wf'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328885 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328890 1 controller.go:192] Received pod 'collect-profiles-29251920-wcws2'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328907 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.328915 1 controller.go:192] Received pod 'collect-profiles-29251935-d7x6j'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329031 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329039 1 controller.go:192] Received pod 'collect-profiles-29251950-x8jjd'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329053 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329057 1 controller.go:192] Received pod 'olm-operator-6d8474f75f-x54mh'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329070 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329074 1 controller.go:192] Received pod 'package-server-manager-84d578d794-jw7r2'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329083 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329090 1 controller.go:192] Received pod 'packageserver-8464bcc55b-sjnqz'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329099 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329102 1 controller.go:192] Received pod 'route-controller-manager-776b8b7477-sfpvs'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329114 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329118 1 controller.go:192] Received pod 'service-ca-operator-546b4f8984-pwccz'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329129 1 controller.go:151] Successfully synced 'openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329133 1 controller.go:192] Received pod 'service-ca-666f99b6f-kk8kg'
2025-12-13T00:13:15.331846586+00:00 stderr F I1213 00:13:15.329234 1 controller.go:151] Successfully synced 'openshift-service-ca/service-ca-666f99b6f-kk8kg'
2025-12-13T00:14:21.825247702+00:00 stderr F I1213 00:14:21.822799 1 controller.go:192] Received pod 'image-pruner-29426400-tlv26'
2025-12-13T00:14:21.825247702+00:00 stderr F I1213 00:14:21.823600 1 controller.go:151] Successfully synced 'openshift-image-registry/image-pruner-29426400-tlv26'
2025-12-13T00:14:21.896450502+00:00 stderr F I1213 00:14:21.896406 1 controller.go:192] Received pod 'collect-profiles-29426400-gwxp5'
2025-12-13T00:14:21.896477763+00:00 stderr F I1213 00:14:21.896448 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5'
2025-12-13T00:14:22.101424544+00:00 stderr F I1213 00:14:22.101373 1 controller.go:192] Received pod 'certified-operators-999b2'
2025-12-13T00:14:22.101424544+00:00 stderr F I1213 00:14:22.101416 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-999b2'
2025-12-13T00:14:22.179258138+00:00 stderr F I1213 00:14:22.179177 1 controller.go:192] Received pod 'redhat-operators-scn2m'
2025-12-13T00:14:22.179258138+00:00 stderr F I1213 00:14:22.179212 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-scn2m'
2025-12-13T00:14:22.191957127+00:00 stderr F I1213 00:14:22.191879 1 controller.go:192] Received pod 'redhat-marketplace-5fv2w'
2025-12-13T00:14:22.191957127+00:00 stderr F I1213 00:14:22.191921 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-5fv2w'
2025-12-13T00:14:29.680365308+00:00 stderr F I1213 00:14:29.679813 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2'
2025-12-13T00:14:38.514260072+00:00 stderr F I1213 00:14:38.513466 1 controller.go:192] Received pod 'marketplace-operator-8b455464d-kghgr'
2025-12-13T00:14:38.514260072+00:00 stderr F I1213 00:14:38.513993 1 controller.go:151] Successfully synced 'openshift-marketplace/marketplace-operator-8b455464d-kghgr'
2025-12-13T00:14:38.887893029+00:00 stderr F I1213 00:14:38.887429 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-7287f'
2025-12-13T00:14:38.988964371+00:00 stderr F I1213 00:14:38.988888 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-5fv2w'
2025-12-13T00:14:39.901692856+00:00 stderr F I1213 00:14:39.901402 1 controller.go:151] Successfully synced 'openshift-marketplace/marketplace-operator-8b455464d-f9xdt'
2025-12-13T00:14:40.975350089+00:00 stderr F I1213 00:14:40.975016 1 controller.go:192] Received pod 'certified-operators-lcrg8'
2025-12-13T00:14:40.975350089+00:00 stderr F I1213 00:14:40.975055 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-lcrg8'
2025-12-13T00:14:42.820863586+00:00 stderr F I1213 00:14:42.820236 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-sdddl'
2025-12-13T00:14:43.025851919+00:00 stderr F I1213 00:14:43.025793 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-scn2m'
2025-12-13T00:14:43.053681854+00:00 stderr F I1213 00:14:43.053618 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-8jhz6'
2025-12-13T00:14:43.080312531+00:00 stderr F I1213 00:14:43.080259 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-f4jkp'
2025-12-13T00:14:43.106466381+00:00 stderr F I1213 00:14:43.106406 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-999b2'
2025-12-13T00:14:43.966694389+00:00 stderr F I1213 00:14:43.966024 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-8s8pc'
2025-12-13T00:14:44.038741327+00:00 stderr F I1213 00:14:44.037999 1 controller.go:192] Received pod 'community-operators-fs22p'
2025-12-13T00:14:44.038741327+00:00 stderr F I1213 00:14:44.038039 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-fs22p'
2025-12-13T00:14:45.356633065+00:00 stderr F I1213 00:14:45.356370 1 controller.go:192] Received pod 'redhat-operators-zg7cl'
2025-12-13T00:14:45.356633065+00:00 stderr F I1213 00:14:45.356628 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-zg7cl'
2025-12-13T00:14:46.768649960+00:00 stderr F I1213 00:14:46.768266 1 controller.go:192] Received pod 'redhat-marketplace-nv4pl'
2025-12-13T00:14:46.768649960+00:00 stderr F I1213 00:14:46.768597 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nv4pl'
2025-12-13T00:14:46.983237441+00:00 stderr F I1213 00:14:46.981101 1 controller.go:192] Received pod 'community-operators-s2hxn'
2025-12-13T00:14:46.983237441+00:00 stderr F I1213 00:14:46.981147 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-s2hxn'
2025-12-13T00:15:00.966140415+00:00 stderr F I1213 00:15:00.965288 1 controller.go:192] Received pod 'collect-profiles-29426415-vhdrh'
2025-12-13T00:15:00.966140415+00:00 stderr F I1213 00:15:00.966132 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh'
2025-12-13T00:15:41.684240451+00:00 stderr F I1213 00:15:41.683740 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j'
2025-12-13T00:15:47.982767534+00:00 stderr F I1213 00:15:47.982171 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-fs22p'
2025-12-13T00:17:12.382904737+00:00 stderr F I1213 00:17:12.382063 1 controller.go:151] Successfully synced 'openshift-multus/cni-sysctl-allowlist-ds-7gz4s'
2025-12-13T00:18:33.097727974+00:00 stderr F I1213 00:18:33.097238 1 controller.go:192] Received pod 'image-registry-75b7bb6564-rnjvj'
2025-12-13T00:18:33.097727974+00:00 stderr F I1213 00:18:33.097707 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-75b7bb6564-rnjvj'
2025-12-13T00:19:19.022162462+00:00 stderr F I1213 00:19:19.021348 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-75779c45fd-v2j2v'
2025-12-13T00:19:57.542224335+00:00 stderr F I1213 00:19:57.541632 1 controller.go:192] Received
pod 'installer-13-crc'
2025-12-13T00:19:57.542224335+00:00 stderr F I1213 00:19:57.542142 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-13-crc'
2025-12-13T00:20:02.278878888+00:00 stderr F I1213 00:20:02.278212 1 controller.go:151] Successfully synced 'openshift-ovn-kubernetes/ovnkube-node-44qcg'
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon/0.log
2025-08-13T19:59:14.203072729+00:00 stderr F I0813 19:59:14.200136 1 main.go:45] Version:216149b14d9cb61ae90ac65d839a448fb11075bb
2025-08-13T19:59:14.203072729+00:00 stderr F I0813 19:59:14.201374 1 main.go:46] Starting with config{ :9091 crc}
2025-08-13T19:59:14.243408519+00:00 stderr F W0813 19:59:14.238333 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-08-13T19:59:15.680406063+00:00 stderr F I0813 19:59:15.679464 1 controller.go:42] Setting up event handlers
2025-08-13T19:59:15.819134228+00:00 stderr F I0813 19:59:15.811144 1 podmetrics.go:101] Serving network metrics
2025-08-13T19:59:15.819134228+00:00 stderr F I0813 19:59:15.811485 1 controller.go:101] Starting pod controller
2025-08-13T19:59:15.819134228+00:00 stderr F I0813 19:59:15.811492 1 controller.go:104] Waiting for informer caches to sync
2025-08-13T19:59:22.235926353+00:00 stderr F I0813 19:59:22.214243 1 controller.go:109] Starting workers
2025-08-13T19:59:22.251194178+00:00 stderr F I0813 19:59:22.239598 1 controller.go:114] Started workers
2025-08-13T19:59:22.251194178+00:00 stderr F I0813 19:59:22.239961 1 controller.go:192] Received pod 'csi-hostpathplugin-hvm8g'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.292973 1 controller.go:192] Received pod 'openshift-apiserver-operator-7c88c4c865-kn67m'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293124 1 controller.go:151] Successfully synced 'openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293138 1 controller.go:192] Received pod 'apiserver-67cbf64bc9-mtx25'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293157 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-67cbf64bc9-mtx25'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293165 1 controller.go:192] Received pod 'authentication-operator-7cc7ff75d5-g9qv8'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293176 1 controller.go:151] Successfully synced 'openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293182 1 controller.go:192] Received pod 'oauth-openshift-765b47f944-n2lhl'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293199 1 controller.go:151] Successfully synced 'openshift-authentication/oauth-openshift-765b47f944-n2lhl'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293206 1 controller.go:192] Received pod 'cluster-samples-operator-bc474d5d6-wshwg'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293217 1 controller.go:151] Successfully synced 'openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293221 1 controller.go:192] Received pod 'openshift-config-operator-77658b5b66-dq5sc'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293246 1 controller.go:151] Successfully synced 'openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293253 1 controller.go:192] Received pod 'console-conversion-webhook-595f9969b-l6z49'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293269 1 controller.go:151] Successfully synced 'openshift-console-operator/console-conversion-webhook-595f9969b-l6z49'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293276 1 controller.go:192] Received pod 'console-operator-5dbbc74dc9-cp5cd'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293287 1 controller.go:151] Successfully synced 'openshift-console-operator/console-operator-5dbbc74dc9-cp5cd'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293292 1 controller.go:192] Received pod 'console-84fccc7b6-mkncc'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293306 1 controller.go:151] Successfully synced 'openshift-console/console-84fccc7b6-mkncc'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293311 1 controller.go:192] Received pod 'downloads-65476884b9-9wcvx'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293327 1 controller.go:151] Successfully synced 'openshift-console/downloads-65476884b9-9wcvx'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293334 1 controller.go:192] Received pod 'openshift-controller-manager-operator-7978d7d7f6-2nt8z'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293345 1 controller.go:151] Successfully synced 'openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z'
2025-08-13T19:59:22.303066597+00:00 stderr F I0813 19:59:22.293350 1 controller.go:192] Received pod 'controller-manager-6ff78978b4-q4vv8'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.350984 1 controller.go:151] Successfully synced 'hostpath-provisioner/csi-hostpathplugin-hvm8g'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403421 1 controller.go:192] Received pod 'dns-operator-75f687757b-nz2xb'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403501 1 controller.go:151] Successfully synced 'openshift-dns-operator/dns-operator-75f687757b-nz2xb'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403513 1 controller.go:192] Received pod 'dns-default-gbw49'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403534 1 controller.go:151] Successfully synced 'openshift-dns/dns-default-gbw49'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403539 1 controller.go:192] Received pod 'etcd-operator-768d5b5d86-722mg'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403550 1 controller.go:151] Successfully synced 'openshift-etcd-operator/etcd-operator-768d5b5d86-722mg'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403556 1 controller.go:192] Received pod 'cluster-image-registry-operator-7769bd8d7d-q5cvv'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403567 1 controller.go:151] Successfully synced 'openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403572 1 controller.go:192] Received pod 'image-registry-585546dd8b-v5m4t'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403583 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-585546dd8b-v5m4t'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403587 1 controller.go:192] Received pod 'ingress-canary-2vhcn'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403598 1 controller.go:151] Successfully synced 'openshift-ingress-canary/ingress-canary-2vhcn'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403607 1 controller.go:192] Received pod 'ingress-operator-7d46d5bb6d-rrg6t'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403624 1 controller.go:151] Successfully synced 'openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403629 1 controller.go:192] Received pod 'kube-apiserver-operator-78d54458c4-sc8h7'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403659 1 controller.go:151] Successfully synced 'openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403665 1 controller.go:192] Received pod 'kube-controller-manager-operator-6f6cb54958-rbddb'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403675 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403680 1 controller.go:192] Received pod 'revision-pruner-8-crc'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403691 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-8-crc'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403696 1 controller.go:192] Received pod 'openshift-kube-scheduler-operator-5d9b995f6b-fcgd7'
2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403721 1 controller.go:151] Successfully synced
'openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403728 1 controller.go:192] Received pod 'kube-storage-version-migrator-operator-686c6c748c-qbnnr' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403744 1 controller.go:151] Successfully synced 'openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403751 1 controller.go:192] Received pod 'migrator-f7c6d88df-q2fnv' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403767 1 controller.go:151] Successfully synced 'openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403828 1 controller.go:192] Received pod 'control-plane-machine-set-operator-649bd778b4-tt5tw' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403905 1 controller.go:151] Successfully synced 'openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403915 1 controller.go:192] Received pod 'machine-api-operator-788b7c6b6c-ctdmb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403929 1 controller.go:151] Successfully synced 'openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403935 1 controller.go:192] Received pod 'machine-config-controller-6df6df6b6b-58shh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403944 1 controller.go:151] Successfully synced 'openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403949 1 controller.go:192] Received pod 'machine-config-operator-76788bff89-wkjgm' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403960 1 controller.go:151] Successfully synced 
'openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403973 1 controller.go:192] Received pod 'certified-operators-7287f' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403985 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-7287f' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.403990 1 controller.go:192] Received pod 'certified-operators-g4v97' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404000 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-g4v97' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404005 1 controller.go:192] Received pod 'community-operators-8jhz6' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404015 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-8jhz6' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404024 1 controller.go:192] Received pod 'community-operators-k9qqb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404039 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-k9qqb' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404043 1 controller.go:192] Received pod 'marketplace-operator-8b455464d-f9xdt' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404055 1 controller.go:151] Successfully synced 'openshift-marketplace/marketplace-operator-8b455464d-f9xdt' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404060 1 controller.go:192] Received pod 'redhat-marketplace-8s8pc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404070 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-8s8pc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404075 1 controller.go:192] Received pod 'redhat-marketplace-rmwfn' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 
19:59:22.404093 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-rmwfn' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404099 1 controller.go:192] Received pod 'redhat-operators-dcqzh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404109 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-dcqzh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404114 1 controller.go:192] Received pod 'redhat-operators-f4jkp' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404124 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-f4jkp' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404128 1 controller.go:192] Received pod 'multus-admission-controller-6c7c885997-4hbbc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404141 1 controller.go:151] Successfully synced 'openshift-multus/multus-admission-controller-6c7c885997-4hbbc' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404149 1 controller.go:192] Received pod 'network-metrics-daemon-qdfr4' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404162 1 controller.go:151] Successfully synced 'openshift-multus/network-metrics-daemon-qdfr4' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404166 1 controller.go:192] Received pod 'network-check-source-5c5478f8c-vqvt7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404176 1 controller.go:151] Successfully synced 'openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404180 1 controller.go:192] Received pod 'network-check-target-v54bt' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404191 1 controller.go:151] Successfully synced 'openshift-network-diagnostics/network-check-target-v54bt' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404196 1 controller.go:192] Received pod 
'apiserver-69c565c9b6-vbdpd' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404212 1 controller.go:151] Successfully synced 'openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404218 1 controller.go:192] Received pod 'catalog-operator-857456c46-7f5wf' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404229 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404233 1 controller.go:192] Received pod 'collect-profiles-29251905-zmjv9' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404243 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404248 1 controller.go:192] Received pod 'olm-operator-6d8474f75f-x54mh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404257 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404262 1 controller.go:192] Received pod 'package-server-manager-84d578d794-jw7r2' 2025-08-13T19:59:22.415768500+00:00 stderr F I0813 19:59:22.404278 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2' 2025-08-13T19:59:22.415768500+00:00 stderr P I0813 19:59:22.404286 1 con 2025-08-13T19:59:22.416128400+00:00 stderr F troller.go:192] Received pod 'packageserver-8464bcc55b-sjnqz' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404298 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404302 1 controller.go:192] Received pod 'route-controller-manager-5c4dbb8899-tchz5' 
2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404318 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404323 1 controller.go:192] Received pod 'service-ca-operator-546b4f8984-pwccz' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404345 1 controller.go:151] Successfully synced 'openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404349 1 controller.go:192] Received pod 'service-ca-666f99b6f-vlbxv' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404361 1 controller.go:151] Successfully synced 'openshift-service-ca/service-ca-666f99b6f-vlbxv' 2025-08-13T19:59:22.416128400+00:00 stderr F I0813 19:59:22.404375 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-6ff78978b4-q4vv8' 2025-08-13T19:59:47.793265519+00:00 stderr F I0813 19:59:47.789858 1 controller.go:192] Received pod 'service-ca-666f99b6f-kk8kg' 2025-08-13T19:59:47.798560020+00:00 stderr F I0813 19:59:47.795202 1 controller.go:151] Successfully synced 'openshift-service-ca/service-ca-666f99b6f-kk8kg' 2025-08-13T19:59:52.481966496+00:00 stderr F I0813 19:59:52.454353 1 controller.go:151] Successfully synced 'openshift-service-ca/service-ca-666f99b6f-vlbxv' 2025-08-13T19:59:54.699326581+00:00 stderr F I0813 19:59:54.695209 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-6ff78978b4-q4vv8' 2025-08-13T20:00:00.033952767+00:00 stderr F I0813 20:00:00.016212 1 controller.go:192] Received pod 'controller-manager-c4dd57946-mpxjt' 2025-08-13T20:00:00.033952767+00:00 stderr F I0813 20:00:00.031888 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-c4dd57946-mpxjt' 2025-08-13T20:00:01.733428071+00:00 stderr F I0813 20:00:01.686450 1 controller.go:192] Received 
pod 'route-controller-manager-5b77f9fd48-hb8xt' 2025-08-13T20:00:01.733428071+00:00 stderr F I0813 20:00:01.687228 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt' 2025-08-13T20:00:05.392908974+00:00 stderr F I0813 20:00:05.381095 1 controller.go:192] Received pod 'collect-profiles-29251920-wcws2' 2025-08-13T20:00:05.392908974+00:00 stderr F I0813 20:00:05.390516 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2' 2025-08-13T20:00:12.837175058+00:00 stderr F I0813 20:00:12.831533 1 controller.go:192] Received pod 'revision-pruner-9-crc' 2025-08-13T20:00:12.837175058+00:00 stderr F I0813 20:00:12.832637 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-9-crc' 2025-08-13T20:00:13.415470178+00:00 stderr F I0813 20:00:13.414068 1 controller.go:192] Received pod 'installer-9-crc' 2025-08-13T20:00:13.415470178+00:00 stderr F I0813 20:00:13.414423 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-9-crc' 2025-08-13T20:00:18.087146555+00:00 stderr F I0813 20:00:18.085440 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt' 2025-08-13T20:00:19.199895773+00:00 stderr F I0813 20:00:19.199376 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-c4dd57946-mpxjt' 2025-08-13T20:00:23.817216183+00:00 stderr F I0813 20:00:23.814290 1 controller.go:192] Received pod 'route-controller-manager-6cfd9fc8fc-7sbzw' 2025-08-13T20:00:23.817216183+00:00 stderr F I0813 20:00:23.815556 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.262921 1 controller.go:192] Received pod 'installer-9-crc' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263147 1 
controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-9-crc' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263157 1 controller.go:192] Received pod 'console-5d9678894c-wx62n' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263168 1 controller.go:151] Successfully synced 'openshift-console/console-5d9678894c-wx62n' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263180 1 controller.go:192] Received pod 'controller-manager-67685c4459-7p2h8' 2025-08-13T20:00:24.263536059+00:00 stderr F I0813 20:00:24.263191 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-67685c4459-7p2h8' 2025-08-13T20:00:27.488118776+00:00 stderr F I0813 20:00:27.486613 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5' 2025-08-13T20:00:27.637899127+00:00 stderr F I0813 20:00:27.636553 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-585546dd8b-v5m4t' 2025-08-13T20:00:29.897264330+00:00 stderr F I0813 20:00:29.895923 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-67685c4459-7p2h8' 2025-08-13T20:00:30.862824172+00:00 stderr F I0813 20:00:30.859600 1 controller.go:192] Received pod 'image-registry-7cbd5666ff-bbfrf' 2025-08-13T20:00:30.862824172+00:00 stderr F I0813 20:00:30.860019 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-7cbd5666ff-bbfrf' 2025-08-13T20:00:30.904206192+00:00 stderr F I0813 20:00:30.904154 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw' 2025-08-13T20:00:30.971050898+00:00 stderr F I0813 20:00:30.970969 1 controller.go:192] Received pod 'image-registry-75779c45fd-v2j2v' 2025-08-13T20:00:30.971139431+00:00 stderr F I0813 20:00:30.971121 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-75779c45fd-v2j2v' 
2025-08-13T20:00:31.082874517+00:00 stderr F I0813 20:00:31.082592 1 controller.go:192] Received pod 'controller-manager-78589965b8-vmcwt' 2025-08-13T20:00:31.082874517+00:00 stderr F I0813 20:00:31.082653 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-78589965b8-vmcwt' 2025-08-13T20:00:36.540416602+00:00 stderr F I0813 20:00:36.533143 1 controller.go:192] Received pod 'route-controller-manager-846977c6bc-7gjhh' 2025-08-13T20:00:36.540416602+00:00 stderr F I0813 20:00:36.533597 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh' 2025-08-13T20:00:41.455954023+00:00 stderr F I0813 20:00:41.454478 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-67cbf64bc9-mtx25' 2025-08-13T20:00:45.801433589+00:00 stderr F I0813 20:00:45.784549 1 controller.go:192] Received pod 'installer-10-crc' 2025-08-13T20:00:45.801433589+00:00 stderr F I0813 20:00:45.798645 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-10-crc' 2025-08-13T20:00:45.900313318+00:00 stderr F I0813 20:00:45.873345 1 controller.go:192] Received pod 'revision-pruner-10-crc' 2025-08-13T20:00:45.900531905+00:00 stderr F I0813 20:00:45.900504 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-10-crc' 2025-08-13T20:00:45.951490608+00:00 stderr F I0813 20:00:45.949257 1 controller.go:192] Received pod 'installer-7-crc' 2025-08-13T20:00:45.951490608+00:00 stderr F I0813 20:00:45.949399 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/installer-7-crc' 2025-08-13T20:00:47.103216737+00:00 stderr F I0813 20:00:47.102333 1 controller.go:151] Successfully synced 'openshift-authentication/oauth-openshift-765b47f944-n2lhl' 2025-08-13T20:01:00.041614175+00:00 stderr F I0813 20:01:00.039017 1 controller.go:192] Received pod 'oauth-openshift-74fc7c67cc-xqf8b' 2025-08-13T20:01:00.054891223+00:00 stderr F 
I0813 20:01:00.054311 1 controller.go:151] Successfully synced 'openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b' 2025-08-13T20:01:00.080921856+00:00 stderr F I0813 20:01:00.078163 1 controller.go:192] Received pod 'apiserver-67cbf64bc9-jjfds' 2025-08-13T20:01:00.080921856+00:00 stderr F I0813 20:01:00.078340 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-67cbf64bc9-jjfds' 2025-08-13T20:01:30.283968186+00:00 stderr F I0813 20:01:30.282197 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-9-crc' 2025-08-13T20:01:30.283968186+00:00 stderr F I0813 20:01:30.283523 1 controller.go:192] Received pod 'console-644bb77b49-5x5xk' 2025-08-13T20:01:30.283968186+00:00 stderr F I0813 20:01:30.283591 1 controller.go:151] Successfully synced 'openshift-console/console-644bb77b49-5x5xk' 2025-08-13T20:06:25.106206287+00:00 stderr F I0813 20:06:25.105569 1 controller.go:192] Received pod 'installer-10-retry-1-crc' 2025-08-13T20:06:25.106544196+00:00 stderr F I0813 20:06:25.105651 1 controller.go:192] Received pod 'controller-manager-598fc85fd4-8wlsm' 2025-08-13T20:06:25.109592163+00:00 stderr F I0813 20:06:25.109563 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-10-retry-1-crc' 2025-08-13T20:06:25.109648825+00:00 stderr F I0813 20:06:25.109635 1 controller.go:192] Received pod 'route-controller-manager-6884dcf749-n4qpx' 2025-08-13T20:06:25.109696206+00:00 stderr F I0813 20:06:25.109682 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx' 2025-08-13T20:06:25.109731127+00:00 stderr F I0813 20:06:25.109719 1 controller.go:151] Successfully synced 'openshift-console/console-84fccc7b6-mkncc' 2025-08-13T20:06:25.109764118+00:00 stderr F I0813 20:06:25.109753 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-78589965b8-vmcwt' 2025-08-13T20:06:25.109920543+00:00 stderr F I0813 
20:06:25.109906 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh' 2025-08-13T20:06:25.109960774+00:00 stderr F I0813 20:06:25.109949 1 controller.go:151] Successfully synced 'openshift-console/console-5d9678894c-wx62n' 2025-08-13T20:06:25.109990255+00:00 stderr F I0813 20:06:25.109979 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/kube-apiserver-startup-monitor-crc' 2025-08-13T20:06:25.110058797+00:00 stderr F I0813 20:06:25.110002 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-598fc85fd4-8wlsm' 2025-08-13T20:06:25.110085968+00:00 stderr F I0813 20:06:25.110010 1 controller.go:151] Successfully synced 'openshift-image-registry/image-registry-7cbd5666ff-bbfrf' 2025-08-13T20:06:32.084964732+00:00 stderr F I0813 20:06:32.083272 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-rmwfn' 2025-08-13T20:06:34.485249150+00:00 stderr F I0813 20:06:34.484717 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-dcqzh' 2025-08-13T20:06:34.779125875+00:00 stderr F I0813 20:06:34.778404 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-g4v97' 2025-08-13T20:06:35.123551180+00:00 stderr F I0813 20:06:35.123213 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-k9qqb' 2025-08-13T20:06:35.928407086+00:00 stderr F I0813 20:06:35.928154 1 controller.go:192] Received pod 'redhat-marketplace-4txfd' 2025-08-13T20:06:35.929223220+00:00 stderr F I0813 20:06:35.929127 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-4txfd' 2025-08-13T20:06:36.643606302+00:00 stderr F I0813 20:06:36.642995 1 controller.go:192] Received pod 'certified-operators-cfdk8' 2025-08-13T20:06:36.643736426+00:00 stderr F I0813 20:06:36.643721 1 controller.go:151] Successfully synced 
'openshift-marketplace/certified-operators-cfdk8' 2025-08-13T20:06:37.666895990+00:00 stderr F I0813 20:06:37.665212 1 controller.go:192] Received pod 'redhat-operators-pmqwc' 2025-08-13T20:06:37.666895990+00:00 stderr F I0813 20:06:37.665954 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-pmqwc' 2025-08-13T20:06:39.388929202+00:00 stderr F I0813 20:06:39.388413 1 controller.go:192] Received pod 'community-operators-p7svp' 2025-08-13T20:06:39.389035525+00:00 stderr F I0813 20:06:39.389019 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-p7svp' 2025-08-13T20:06:49.910865775+00:00 stderr F I0813 20:06:49.910061 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/kube-controller-manager-crc' 2025-08-13T20:07:05.606433680+00:00 stderr F I0813 20:07:05.590499 1 controller.go:192] Received pod 'installer-11-crc' 2025-08-13T20:07:05.606433680+00:00 stderr F I0813 20:07:05.591545 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-11-crc' 2025-08-13T20:07:09.571116321+00:00 stderr F I0813 20:07:09.568456 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-4txfd' 2025-08-13T20:07:17.348392622+00:00 stderr F I0813 20:07:17.344598 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-67cbf64bc9-jjfds' 2025-08-13T20:07:21.108945819+00:00 stderr F I0813 20:07:21.104214 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-cfdk8' 2025-08-13T20:07:21.346738817+00:00 stderr F I0813 20:07:21.346147 1 controller.go:192] Received pod 'apiserver-7fc54b8dd7-d2bhp' 2025-08-13T20:07:21.346738817+00:00 stderr F I0813 20:07:21.346207 1 controller.go:151] Successfully synced 'openshift-apiserver/apiserver-7fc54b8dd7-d2bhp' 2025-08-13T20:07:23.498192142+00:00 stderr F I0813 20:07:23.495542 1 controller.go:192] Received pod 'revision-pruner-11-crc' 2025-08-13T20:07:23.498192142+00:00 stderr F I0813 
20:07:23.497749 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/revision-pruner-11-crc' 2025-08-13T20:07:24.100475910+00:00 stderr F I0813 20:07:24.094438 1 controller.go:192] Received pod 'installer-8-crc' 2025-08-13T20:07:24.100475910+00:00 stderr F I0813 20:07:24.095461 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/installer-8-crc' 2025-08-13T20:07:26.190039028+00:00 stderr F I0813 20:07:26.184720 1 controller.go:192] Received pod 'installer-11-crc' 2025-08-13T20:07:26.190039028+00:00 stderr F I0813 20:07:26.189167 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/installer-11-crc' 2025-08-13T20:07:34.911652425+00:00 stderr F I0813 20:07:34.905613 1 controller.go:192] Received pod 'installer-12-crc' 2025-08-13T20:07:34.911652425+00:00 stderr F I0813 20:07:34.906370 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-12-crc' 2025-08-13T20:07:41.042723107+00:00 stderr F I0813 20:07:41.042017 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/installer-11-crc' 2025-08-13T20:07:42.641679050+00:00 stderr F I0813 20:07:42.640862 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-p7svp' 2025-08-13T20:08:01.087979002+00:00 stderr F I0813 20:08:01.087350 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-pmqwc' 2025-08-13T20:08:08.267400173+00:00 stderr F I0813 20:08:08.264471 1 controller.go:151] Successfully synced 'openshift-kube-scheduler/openshift-kube-scheduler-crc' 2025-08-13T20:08:12.291050774+00:00 stderr F I0813 20:08:12.289975 1 controller.go:151] Successfully synced 'openshift-kube-controller-manager/kube-controller-manager-crc' 2025-08-13T20:10:21.976031818+00:00 stderr F I0813 20:10:21.975169 1 controller.go:151] Successfully synced 'openshift-kube-apiserver/kube-apiserver-startup-monitor-crc' 2025-08-13T20:10:50.613462976+00:00 stderr F I0813 20:10:50.611261 1 controller.go:151] 
Successfully synced 'openshift-multus/cni-sysctl-allowlist-ds-jx5m8' 2025-08-13T20:11:00.793692470+00:00 stderr F I0813 20:11:00.789632 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-598fc85fd4-8wlsm' 2025-08-13T20:11:00.852394873+00:00 stderr F I0813 20:11:00.852251 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx' 2025-08-13T20:11:02.192181446+00:00 stderr F I0813 20:11:02.192032 1 controller.go:192] Received pod 'controller-manager-778975cc4f-x5vcf' 2025-08-13T20:11:02.192437293+00:00 stderr F I0813 20:11:02.192326 1 controller.go:151] Successfully synced 'openshift-controller-manager/controller-manager-778975cc4f-x5vcf' 2025-08-13T20:11:02.283304968+00:00 stderr F I0813 20:11:02.281406 1 controller.go:192] Received pod 'route-controller-manager-776b8b7477-sfpvs' 2025-08-13T20:11:02.283304968+00:00 stderr F I0813 20:11:02.281657 1 controller.go:151] Successfully synced 'openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs' 2025-08-13T20:15:01.024125786+00:00 stderr F I0813 20:15:01.017300 1 controller.go:192] Received pod 'collect-profiles-29251935-d7x6j' 2025-08-13T20:15:01.024125786+00:00 stderr F I0813 20:15:01.020210 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j' 2025-08-13T20:16:58.871475492+00:00 stderr F I0813 20:16:58.866052 1 controller.go:192] Received pod 'certified-operators-8bbjz' 2025-08-13T20:16:58.871475492+00:00 stderr F I0813 20:16:58.870147 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-8bbjz' 2025-08-13T20:17:01.049845789+00:00 stderr F I0813 20:17:01.049067 1 controller.go:192] Received pod 'redhat-marketplace-nsk78' 2025-08-13T20:17:01.049906111+00:00 stderr F I0813 20:17:01.049893 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nsk78' 2025-08-13T20:17:21.605438507+00:00 
stderr F I0813 20:17:21.604580 1 controller.go:192] Received pod 'redhat-operators-swl5s' 2025-08-13T20:17:21.605556170+00:00 stderr F I0813 20:17:21.605442 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-swl5s' 2025-08-13T20:17:31.144134404+00:00 stderr F I0813 20:17:31.142706 1 controller.go:192] Received pod 'community-operators-tfv59' 2025-08-13T20:17:31.144134404+00:00 stderr F I0813 20:17:31.143821 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-tfv59' 2025-08-13T20:17:36.938844574+00:00 stderr F I0813 20:17:36.938059 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nsk78' 2025-08-13T20:17:45.316184057+00:00 stderr F I0813 20:17:45.314435 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-8bbjz' 2025-08-13T20:18:56.581914971+00:00 stderr F I0813 20:18:56.581177 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-tfv59' 2025-08-13T20:19:38.583898355+00:00 stderr F I0813 20:19:38.580924 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-swl5s' 2025-08-13T20:27:06.515561143+00:00 stderr F I0813 20:27:06.514117 1 controller.go:192] Received pod 'redhat-marketplace-jbzn9' 2025-08-13T20:27:06.522213573+00:00 stderr F I0813 20:27:06.521390 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-jbzn9' 2025-08-13T20:27:06.626739732+00:00 stderr F I0813 20:27:06.626334 1 controller.go:192] Received pod 'certified-operators-xldzg' 2025-08-13T20:27:06.626769043+00:00 stderr F I0813 20:27:06.626758 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-xldzg' 2025-08-13T20:27:30.206946955+00:00 stderr F I0813 20:27:30.204841 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-xldzg' 2025-08-13T20:27:30.240926917+00:00 stderr F I0813 20:27:30.239904 1 controller.go:151] Successfully synced 
'openshift-marketplace/redhat-marketplace-jbzn9' 2025-08-13T20:28:44.070439440+00:00 stderr F I0813 20:28:44.069725 1 controller.go:192] Received pod 'community-operators-hvwvm' 2025-08-13T20:28:44.071889282+00:00 stderr F I0813 20:28:44.070535 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-hvwvm' 2025-08-13T20:29:07.485652753+00:00 stderr F I0813 20:29:07.484254 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-hvwvm' 2025-08-13T20:29:30.800937402+00:00 stderr F I0813 20:29:30.800342 1 controller.go:192] Received pod 'redhat-operators-zdwjn' 2025-08-13T20:29:30.801141928+00:00 stderr F I0813 20:29:30.801085 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-zdwjn' 2025-08-13T20:30:02.812209420+00:00 stderr F I0813 20:30:02.808013 1 controller.go:192] Received pod 'collect-profiles-29251950-x8jjd' 2025-08-13T20:30:02.812209420+00:00 stderr F I0813 20:30:02.808719 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd' 2025-08-13T20:30:08.194448628+00:00 stderr F I0813 20:30:08.193089 1 controller.go:151] Successfully synced 'openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9' 2025-08-13T20:30:34.188255151+00:00 stderr F I0813 20:30:34.187324 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-zdwjn' 2025-08-13T20:37:48.902847474+00:00 stderr F I0813 20:37:48.896192 1 controller.go:192] Received pod 'redhat-marketplace-nkzlk' 2025-08-13T20:37:48.902847474+00:00 stderr F I0813 20:37:48.899938 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nkzlk' 2025-08-13T20:38:09.866597412+00:00 stderr F I0813 20:38:09.865627 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-marketplace-nkzlk' 2025-08-13T20:38:36.821311697+00:00 stderr F I0813 20:38:36.817256 1 controller.go:192] Received pod 'certified-operators-4kmbv' 
2025-08-13T20:38:36.826287260+00:00 stderr F I0813 20:38:36.824009 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-4kmbv' 2025-08-13T20:38:58.234193701+00:00 stderr F I0813 20:38:58.233474 1 controller.go:151] Successfully synced 'openshift-marketplace/certified-operators-4kmbv' 2025-08-13T20:41:22.218162180+00:00 stderr F I0813 20:41:22.216147 1 controller.go:192] Received pod 'redhat-operators-k2tgr' 2025-08-13T20:41:22.218162180+00:00 stderr F I0813 20:41:22.217285 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-k2tgr' 2025-08-13T20:42:15.662994211+00:00 stderr F I0813 20:42:15.659488 1 controller.go:151] Successfully synced 'openshift-marketplace/redhat-operators-k2tgr' 2025-08-13T20:42:27.892076227+00:00 stderr F I0813 20:42:27.887166 1 controller.go:192] Received pod 'community-operators-sdddl' 2025-08-13T20:42:27.892076227+00:00 stderr F I0813 20:42:27.888532 1 controller.go:151] Successfully synced 'openshift-marketplace/community-operators-sdddl' 2025-08-13T20:42:44.151210941+00:00 stderr F I0813 20:42:44.150298 1 controller.go:116] Shutting down workers ././@LongLink0000644000000000000000000000026200000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_netwo0000755000175000017500000000000015117130654033220 5ustar zuulzuul././@LongLink0000644000000000000000000000026700000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_netwo0000644000175000017500000000202015117130647033216 0ustar 
zuulzuul2025-12-13T00:13:16.612058934+00:00 stderr F W1213 00:13:16.610542 1 deprecated.go:66] 2025-12-13T00:13:16.612058934+00:00 stderr F ==== Removed Flag Warning ====================== 2025-12-13T00:13:16.612058934+00:00 stderr F 2025-12-13T00:13:16.612058934+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-12-13T00:13:16.612058934+00:00 stderr F 2025-12-13T00:13:16.612058934+00:00 stderr F =============================================== 2025-12-13T00:13:16.612058934+00:00 stderr F 2025-12-13T00:13:16.612058934+00:00 stderr F I1213 00:13:16.611228 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:13:16.612058934+00:00 stderr F I1213 00:13:16.611270 1 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:13:16.612058934+00:00 stderr F I1213 00:13:16.611995 1 kube-rbac-proxy.go:395] Starting TCP socket on :8443 2025-12-13T00:13:16.612828890+00:00 stderr F I1213 00:13:16.612343 1 kube-rbac-proxy.go:402] Listening securely on :8443 ././@LongLink0000644000000000000000000000026700000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_netwo0000644000175000017500000000222515117130647033225 0ustar zuulzuul2025-08-13T19:59:23.418270946+00:00 stderr F W0813 19:59:23.412030 1 deprecated.go:66] 2025-08-13T19:59:23.418270946+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:23.418270946+00:00 stderr F 2025-08-13T19:59:23.418270946+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-08-13T19:59:23.418270946+00:00 stderr F 2025-08-13T19:59:23.418270946+00:00 stderr F =============================================== 2025-08-13T19:59:23.418270946+00:00 stderr F 2025-08-13T19:59:23.418270946+00:00 stderr F I0813 19:59:23.413617 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:23.418270946+00:00 stderr F I0813 19:59:23.413677 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:23.432166223+00:00 stderr F I0813 19:59:23.432126 1 kube-rbac-proxy.go:395] Starting TCP socket on :8443 2025-08-13T19:59:23.450188556+00:00 stderr F I0813 19:59:23.450142 1 kube-rbac-proxy.go:402] Listening securely on :8443 2025-08-13T20:42:43.082211382+00:00 stderr F I0813 20:42:43.081428 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000024400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015117130647033104 5ustar zuulzuul././@LongLink0000644000000000000000000000026400000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-content/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015117130654033102 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-content/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015117130647033074 0ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-utilities/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015117130654033102 5ustar zuulzuul././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-utilities/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000000015117130647033074 0ustar zuulzuul././@LongLink0000644000000000000000000000026400000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/registry-server/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000755000175000017500000000000015117130654033102 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/registry-server/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_0000644000175000017500000000067015117130647033111 0ustar zuulzuul2025-12-13T00:15:35.514790141+00:00 stderr F time="2025-12-13T00:15:35Z" level=info msg="starting pprof endpoint" address="localhost:6060" 2025-12-13T00:15:38.358206073+00:00 stderr F time="2025-12-13T00:15:38Z" level=info msg="serving registry" configs=/extracted-catalog/catalog port=50051 2025-12-13T00:15:38.358206073+00:00 stderr F time="2025-12-13T00:15:38Z" level=info msg="stopped caching cpu profile data" address="localhost:6060" ././@LongLink0000644000000000000000000000026700000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000755000175000017500000000000015117130646033047 5ustar zuulzuul././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000755000175000017500000000000015117130653033045 5ustar zuulzuul././@LongLink0000644000000000000000000000031400000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000644000175000017500000002334315117130646033056 0ustar zuulzuul2025-08-13T19:59:38.816516836+00:00 stderr F W0813 19:59:38.811105 1 cmd.go:245] Using insecure, self-signed certificates 2025-08-13T19:59:42.551613375+00:00 stderr F I0813 19:59:42.550430 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:43.926057823+00:00 stderr F I0813 19:59:43.924317 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98 2025-08-13T19:59:49.044814837+00:00 stderr F I0813 19:59:49.018356 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:49.044814837+00:00 stderr F W0813 19:59:49.041495 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:49.044814837+00:00 stderr F W0813 19:59:49.041514 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:49.353686632+00:00 stderr F I0813 19:59:49.352555 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-737639254/tls.crt::/tmp/serving-cert-737639254/tls.key" 2025-08-13T19:59:49.357611804+00:00 stderr F I0813 19:59:49.356070 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.407921948+00:00 stderr F I0813 19:59:49.404952 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:49.407921948+00:00 stderr F I0813 19:59:49.405109 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:49.407921948+00:00 stderr F I0813 19:59:49.406528 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:49.407921948+00:00 stderr F I0813 19:59:49.406576 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:49.411376676+00:00 stderr F I0813 19:59:49.409598 1 secure_serving.go:213] Serving securely on [::]:17698 2025-08-13T19:59:49.481618619+00:00 stderr F I0813 19:59:49.410082 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:49.488757592+00:00 stderr F I0813 19:59:49.481955 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:49.695002670+00:00 stderr F I0813 19:59:49.694746 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:49.696477542+00:00 stderr F E0813 19:59:49.695955 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content 
for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.696477542+00:00 stderr F E0813 19:59:49.696059 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.734727193+00:00 stderr F I0813 19:59:49.722956 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:49.734727193+00:00 stderr F E0813 19:59:49.723053 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.736936305+00:00 stderr F E0813 19:59:49.735304 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.736936305+00:00 stderr F E0813 19:59:49.735356 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.829455883+00:00 stderr F E0813 19:59:49.829356 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.850042510+00:00 stderr F E0813 19:59:49.849956 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.879030246+00:00 stderr F E0813 19:59:49.877548 1 configmap_cafile_content.go:243] key failed with : missing 
content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.891467700+00:00 stderr F E0813 19:59:49.891384 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.996736061+00:00 stderr F E0813 19:59:49.996679 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:49.996927636+00:00 stderr F E0813 19:59:49.996905 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.118993176+00:00 stderr F E0813 19:59:50.116700 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.167541430+00:00 stderr F E0813 19:59:50.166359 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.276515046+00:00 stderr F I0813 19:59:50.267147 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2025-08-13T19:59:50.293519981+00:00 stderr F E0813 19:59:50.278355 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.315617001+00:00 stderr F I0813 19:59:50.307588 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 
2025-08-13T19:59:50.489017894+00:00 stderr F E0813 19:59:50.488212 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:50.618445184+00:00 stderr F E0813 19:59:50.618021 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.147165276+00:00 stderr F E0813 19:59:51.144391 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.258646404+00:00 stderr F E0813 19:59:51.258287 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:55.970593550+00:00 stderr F I0813 19:59:55.968309 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2025-08-13T19:59:55.970593550+00:00 stderr F I0813 19:59:55.970408 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 2025-08-13T19:59:55.978477274+00:00 stderr F I0813 19:59:55.977334 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2025-08-13T19:59:55.978477274+00:00 stderr F I0813 19:59:55.977375 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2025-08-13T19:59:55.978477274+00:00 stderr F I0813 19:59:55.977385 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2025-08-13T19:59:55.978477274+00:00 stderr F I0813 19:59:55.977448 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 
2025-08-13T19:59:55.994104840+00:00 stderr F I0813 19:59:55.991731 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2025-08-13T19:59:56.043528339+00:00 stderr F I0813 19:59:56.043238 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 2025-08-13T19:59:56.043528339+00:00 stderr F I0813 19:59:56.043391 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2025-08-13T19:59:56.347940657+00:00 stderr F I0813 19:59:56.345926 1 base_controller.go:73] Caches are synced for check-endpoints 2025-08-13T19:59:56.347940657+00:00 stderr F I0813 19:59:56.346237 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... 2025-08-13T20:42:42.524890884+00:00 stderr F I0813 20:42:42.523195 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. ././@LongLink0000644000000000000000000000031400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diag0000644000175000017500000001156015117130646033054 0ustar zuulzuul2025-12-13T00:13:16.060099697+00:00 stderr F W1213 00:13:16.059063 1 cmd.go:245] Using insecure, self-signed certificates 2025-12-13T00:13:16.960808142+00:00 stderr F I1213 00:13:16.954279 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:17.087025473+00:00 stderr F I1213 00:13:17.086230 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98 2025-12-13T00:13:18.083990214+00:00 stderr F I1213 00:13:18.079756 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:18.083990214+00:00 stderr F W1213 00:13:18.080137 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:18.083990214+00:00 stderr F W1213 00:13:18.080143 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:18.092444038+00:00 stderr F I1213 00:13:18.091972 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:18.095037635+00:00 stderr F I1213 00:13:18.092534 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:18.095037635+00:00 stderr F I1213 00:13:18.092596 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:18.095037635+00:00 stderr F I1213 00:13:18.092625 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:18.095037635+00:00 stderr F I1213 00:13:18.092647 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:18.095037635+00:00 stderr F I1213 00:13:18.092652 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:18.095942856+00:00 stderr F I1213 00:13:18.095902 1 secure_serving.go:213] Serving securely on [::]:17698 2025-12-13T00:13:18.096050550+00:00 stderr F I1213 00:13:18.095968 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-2514764047/tls.crt::/tmp/serving-cert-2514764047/tls.key" 2025-12-13T00:13:18.096128882+00:00 stderr F I1213 00:13:18.096057 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:18.143061559+00:00 stderr F I1213 00:13:18.142350 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 
2025-12-13T00:13:18.196656590+00:00 stderr F I1213 00:13:18.193100 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:18.196656590+00:00 stderr F I1213 00:13:18.193235 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:18.196656590+00:00 stderr F I1213 00:13:18.194611 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:13:18.643319529+00:00 stderr F I1213 00:13:18.643017 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart 2025-12-13T00:13:18.643319529+00:00 stderr F I1213 00:13:18.643059 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ... 2025-12-13T00:13:18.643319529+00:00 stderr F I1213 00:13:18.643125 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2025-12-13T00:13:18.643319529+00:00 stderr F I1213 00:13:18.643130 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2025-12-13T00:13:18.643319529+00:00 stderr F I1213 00:13:18.643134 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2025-12-13T00:13:18.643319529+00:00 stderr F I1213 00:13:18.643134 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 2025-12-13T00:13:18.643319529+00:00 stderr F I1213 00:13:18.643187 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 
2025-12-13T00:13:18.643319529+00:00 stderr F I1213 00:13:18.643193 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2025-12-13T00:13:18.643319529+00:00 stderr F I1213 00:13:18.643222 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2025-12-13T00:13:18.746641511+00:00 stderr F I1213 00:13:18.746021 1 base_controller.go:73] Caches are synced for check-endpoints 2025-12-13T00:13:18.746641511+00:00 stderr F I1213 00:13:18.746047 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... ././@LongLink0000644000000000000000000000026000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node0000755000175000017500000000000015117130646033070 5ustar zuulzuul././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node0000755000175000017500000000000015117130654033067 5ustar zuulzuul././@LongLink0000644000000000000000000000027500000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node0000644000175000017500000000745615117130646033106 0ustar zuulzuul2025-12-13T00:11:03.280627913+00:00 stderr F + [[ -f /env/_master ]] 2025-12-13T00:11:03.280627913+00:00 stderr F + ho_enable=--enable-hybrid-overlay 
2025-12-13T00:11:03.281489356+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-12-13T00:11:03.285637609+00:00 stdout F I1213 00:11:03.283788259 - network-node-identity - start webhook 2025-12-13T00:11:03.285650570+00:00 stderr F + echo 'I1213 00:11:03.283788259 - network-node-identity - start webhook' 2025-12-13T00:11:03.285650570+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --webhook-cert-dir=/etc/webhook-cert --webhook-host=127.0.0.1 --webhook-port=9743 --enable-hybrid-overlay --enable-interconnect --disable-approver --extra-allowed-user=system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane --wait-for-kubernetes-api=200s --pod-admission-conditions=/var/run/ovnkube-identity-config/additional-pod-admission-cond.json --loglevel=2 2025-12-13T00:11:03.441727491+00:00 stderr F I1213 00:11:03.440913 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:2 port:9743 host:127.0.0.1 certDir:/etc/webhook-cert metricsAddress:0 leaseNamespace: enableInterconnect:true enableHybridOverlay:true disableWebhook:false disableApprover:true waitForKAPIDuration:200000000000 localKAPIPort:6443 extraAllowedUsers:{slice:[system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane] hasBeenSet:true} csrAcceptanceConditionFile: csrAcceptanceConditions:[] podAdmissionConditionFile:/var/run/ovnkube-identity-config/additional-pod-admission-cond.json podAdmissionConditions:[]} 2025-12-13T00:11:03.441727491+00:00 stderr F W1213 00:11:03.441597 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
2025-12-13T00:11:03.450766757+00:00 stderr F I1213 00:11:03.450701 1 ovnkubeidentity.go:351] Waiting for caches to sync 2025-12-13T00:11:03.469603609+00:00 stderr F I1213 00:11:03.469286 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:03.552025909+00:00 stderr F I1213 00:11:03.551985 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-12-13T00:11:03.552301306+00:00 stderr F I1213 00:11:03.552287 1 ovnkubeidentity.go:430] Starting the webhook server 2025-12-13T00:11:03.552463220+00:00 stderr F I1213 00:11:03.552431 1 certwatcher.go:115] "Starting certificate watcher" logger="controller-runtime.certwatcher" 2025-12-13T00:11:23.014978019+00:00 stderr F 2025/12/13 00:11:23 http: TLS handshake error from 127.0.0.1:34072: EOF 2025-12-13T00:13:12.459238759+00:00 stderr F 2025/12/13 00:13:12 http: TLS handshake error from 127.0.0.1:44644: EOF 2025-12-13T00:13:12.491408831+00:00 stderr F 2025/12/13 00:13:12 http: TLS handshake error from 127.0.0.1:44656: EOF 2025-12-13T00:17:57.690081018+00:00 stderr F I1213 00:17:57.689984 1 certwatcher.go:180] "certificate event" logger="controller-runtime.certwatcher" event="REMOVE \"/etc/webhook-cert/tls.key\"" 2025-12-13T00:17:57.691805324+00:00 stderr F I1213 00:17:57.691747 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-12-13T00:17:57.691855365+00:00 stderr F I1213 00:17:57.691825 1 certwatcher.go:180] "certificate event" logger="controller-runtime.certwatcher" event="REMOVE \"/etc/webhook-cert/tls.crt\"" 2025-12-13T00:17:57.692957884+00:00 stderr F I1213 00:17:57.692895 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-12-13T00:21:16.867362230+00:00 stderr F I1213 00:21:16.867301 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 
././@LongLink0000644000000000000000000000027500000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node0000644000175000017500000271462415117130646033112 0ustar zuulzuul2025-08-13T19:50:43.127891376+00:00 stderr F + [[ -f /env/_master ]] 2025-08-13T19:50:43.128098882+00:00 stderr F + ho_enable=--enable-hybrid-overlay 2025-08-13T19:50:43.129082180+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-08-13T19:50:43.184222956+00:00 stderr F + echo 'I0813 19:50:43.132736804 - network-node-identity - start webhook' 2025-08-13T19:50:43.184322418+00:00 stdout F I0813 19:50:43.132736804 - network-node-identity - start webhook 2025-08-13T19:50:43.184928436+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --webhook-cert-dir=/etc/webhook-cert --webhook-host=127.0.0.1 --webhook-port=9743 --enable-hybrid-overlay --enable-interconnect --disable-approver --extra-allowed-user=system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane --wait-for-kubernetes-api=200s --pod-admission-conditions=/var/run/ovnkube-identity-config/additional-pod-admission-cond.json --loglevel=2 2025-08-13T19:50:47.188251283+00:00 stderr F I0813 19:50:47.185908 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:2 port:9743 host:127.0.0.1 certDir:/etc/webhook-cert metricsAddress:0 leaseNamespace: enableInterconnect:true enableHybridOverlay:true disableWebhook:false disableApprover:true waitForKAPIDuration:200000000000 localKAPIPort:6443 extraAllowedUsers:{slice:[system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane] hasBeenSet:true} csrAcceptanceConditionFile: csrAcceptanceConditions:[] 
podAdmissionConditionFile:/var/run/ovnkube-identity-config/additional-pod-admission-cond.json podAdmissionConditions:[]}
2025-08-13T19:50:47.188251283+00:00 stderr F W0813 19:50:47.188191 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-08-13T19:50:47.261974120+00:00 stderr F I0813 19:50:47.259425 1 ovnkubeidentity.go:351] Waiting for caches to sync
2025-08-13T19:50:47.517742120+00:00 stderr F I0813 19:50:47.517497 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:50:47.577168279+00:00 stderr F I0813 19:50:47.576751 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher"
2025-08-13T19:50:47.590146560+00:00 stderr F I0813 19:50:47.587224 1 ovnkubeidentity.go:430] Starting the webhook server
2025-08-13T19:50:47.594226226+00:00 stderr F I0813 19:50:47.582558 1 certwatcher.go:115] "Starting certificate watcher" logger="controller-runtime.certwatcher"
2025-08-13T19:50:47.745047647+00:00 stderr F 2025/08/13 19:50:47 http: TLS handshake error from 127.0.0.1:53894: remote error: tls: bad certificate
[... the entry "http: TLS handshake error from 127.0.0.1:<port>: remote error: tls: bad certificate" repeats continuously, with varying ephemeral source ports, from 2025-08-13T19:50:47 through 2025-08-13T19:51:04 ...]
2025-08-13T19:51:04.497659110+00:00 stderr F 2025/08/13 19:51:04 http: TLS
handshake error from 127.0.0.1:37704: remote error: tls: bad certificate 2025-08-13T19:51:04.610452053+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37716: remote error: tls: bad certificate 2025-08-13T19:51:04.640043379+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37720: remote error: tls: bad certificate 2025-08-13T19:51:04.668125171+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37732: remote error: tls: bad certificate 2025-08-13T19:51:04.746185803+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37734: remote error: tls: bad certificate 2025-08-13T19:51:04.946409465+00:00 stderr F 2025/08/13 19:51:04 http: TLS handshake error from 127.0.0.1:37748: remote error: tls: bad certificate 2025-08-13T19:51:05.034016969+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37754: remote error: tls: bad certificate 2025-08-13T19:51:05.061009981+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37766: remote error: tls: bad certificate 2025-08-13T19:51:05.091070990+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37770: remote error: tls: bad certificate 2025-08-13T19:51:05.131356681+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37776: remote error: tls: bad certificate 2025-08-13T19:51:05.205135240+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37780: remote error: tls: bad certificate 2025-08-13T19:51:05.255356405+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37794: remote error: tls: bad certificate 2025-08-13T19:51:05.387906103+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37810: remote error: tls: bad certificate 2025-08-13T19:51:05.443694848+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37816: remote error: tls: bad certificate 
2025-08-13T19:51:05.519501824+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37824: remote error: tls: bad certificate 2025-08-13T19:51:05.543353656+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37834: remote error: tls: bad certificate 2025-08-13T19:51:05.620419769+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37848: remote error: tls: bad certificate 2025-08-13T19:51:05.643328973+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37864: remote error: tls: bad certificate 2025-08-13T19:51:05.746574714+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37876: remote error: tls: bad certificate 2025-08-13T19:51:05.817102220+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37888: remote error: tls: bad certificate 2025-08-13T19:51:05.962096764+00:00 stderr F 2025/08/13 19:51:05 http: TLS handshake error from 127.0.0.1:37898: remote error: tls: bad certificate 2025-08-13T19:51:06.000694597+00:00 stderr F 2025/08/13 19:51:06 http: TLS handshake error from 127.0.0.1:37910: remote error: tls: bad certificate 2025-08-13T19:51:06.087982192+00:00 stderr F 2025/08/13 19:51:06 http: TLS handshake error from 127.0.0.1:37920: remote error: tls: bad certificate 2025-08-13T19:51:06.151663462+00:00 stderr F 2025/08/13 19:51:06 http: TLS handshake error from 127.0.0.1:37926: remote error: tls: bad certificate 2025-08-13T19:51:07.520441000+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37936: remote error: tls: bad certificate 2025-08-13T19:51:07.584917952+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37942: remote error: tls: bad certificate 2025-08-13T19:51:07.637642999+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37950: remote error: tls: bad certificate 2025-08-13T19:51:07.664908569+00:00 stderr F 2025/08/13 19:51:07 http: TLS 
handshake error from 127.0.0.1:37956: remote error: tls: bad certificate 2025-08-13T19:51:07.688984427+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37960: remote error: tls: bad certificate 2025-08-13T19:51:07.736749202+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37976: remote error: tls: bad certificate 2025-08-13T19:51:07.798507217+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37990: remote error: tls: bad certificate 2025-08-13T19:51:07.826221579+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:37994: remote error: tls: bad certificate 2025-08-13T19:51:07.859079428+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:38000: remote error: tls: bad certificate 2025-08-13T19:51:07.896648992+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:38008: remote error: tls: bad certificate 2025-08-13T19:51:07.936917273+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:38018: remote error: tls: bad certificate 2025-08-13T19:51:07.967011883+00:00 stderr F 2025/08/13 19:51:07 http: TLS handshake error from 127.0.0.1:38030: remote error: tls: bad certificate 2025-08-13T19:51:08.022760096+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38032: remote error: tls: bad certificate 2025-08-13T19:51:08.058379924+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38046: remote error: tls: bad certificate 2025-08-13T19:51:08.094905998+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38048: remote error: tls: bad certificate 2025-08-13T19:51:08.119050568+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38058: remote error: tls: bad certificate 2025-08-13T19:51:08.146479512+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38066: remote error: tls: bad certificate 
2025-08-13T19:51:08.177530120+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38070: remote error: tls: bad certificate 2025-08-13T19:51:08.201698570+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38078: remote error: tls: bad certificate 2025-08-13T19:51:08.231027769+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38094: remote error: tls: bad certificate 2025-08-13T19:51:08.259730369+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38102: remote error: tls: bad certificate 2025-08-13T19:51:08.289092248+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38110: remote error: tls: bad certificate 2025-08-13T19:51:08.309397008+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:38114: remote error: tls: bad certificate 2025-08-13T19:51:08.936181992+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:55404: remote error: tls: bad certificate 2025-08-13T19:51:08.979532561+00:00 stderr F 2025/08/13 19:51:08 http: TLS handshake error from 127.0.0.1:55406: remote error: tls: bad certificate 2025-08-13T19:51:09.008619833+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55422: remote error: tls: bad certificate 2025-08-13T19:51:09.045163547+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55424: remote error: tls: bad certificate 2025-08-13T19:51:09.072073746+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55434: remote error: tls: bad certificate 2025-08-13T19:51:09.111767841+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55442: remote error: tls: bad certificate 2025-08-13T19:51:09.139935806+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55444: remote error: tls: bad certificate 2025-08-13T19:51:09.168216894+00:00 stderr F 2025/08/13 19:51:09 http: TLS 
handshake error from 127.0.0.1:55456: remote error: tls: bad certificate 2025-08-13T19:51:09.200169877+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55470: remote error: tls: bad certificate 2025-08-13T19:51:09.232715287+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55486: remote error: tls: bad certificate 2025-08-13T19:51:09.267215783+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55494: remote error: tls: bad certificate 2025-08-13T19:51:09.296091539+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55504: remote error: tls: bad certificate 2025-08-13T19:51:09.343610397+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55512: remote error: tls: bad certificate 2025-08-13T19:51:09.848109466+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55528: remote error: tls: bad certificate 2025-08-13T19:51:09.878282038+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55530: remote error: tls: bad certificate 2025-08-13T19:51:09.916370777+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55534: remote error: tls: bad certificate 2025-08-13T19:51:09.945860180+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55544: remote error: tls: bad certificate 2025-08-13T19:51:09.970298888+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55548: remote error: tls: bad certificate 2025-08-13T19:51:09.989120466+00:00 stderr F 2025/08/13 19:51:09 http: TLS handshake error from 127.0.0.1:55554: remote error: tls: bad certificate 2025-08-13T19:51:10.010124886+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55564: remote error: tls: bad certificate 2025-08-13T19:51:10.029567152+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55572: remote error: tls: bad certificate 
2025-08-13T19:51:10.082567337+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55582: remote error: tls: bad certificate 2025-08-13T19:51:10.200974171+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55590: remote error: tls: bad certificate 2025-08-13T19:51:10.269140019+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55596: remote error: tls: bad certificate 2025-08-13T19:51:10.526060142+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55600: remote error: tls: bad certificate 2025-08-13T19:51:10.565277933+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55614: remote error: tls: bad certificate 2025-08-13T19:51:10.621054507+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55624: remote error: tls: bad certificate 2025-08-13T19:51:10.680709671+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55636: remote error: tls: bad certificate 2025-08-13T19:51:10.708190187+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55650: remote error: tls: bad certificate 2025-08-13T19:51:10.765766342+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55658: remote error: tls: bad certificate 2025-08-13T19:51:10.813054613+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55670: remote error: tls: bad certificate 2025-08-13T19:51:10.851977196+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55672: remote error: tls: bad certificate 2025-08-13T19:51:10.909139640+00:00 stderr F 2025/08/13 19:51:10 http: TLS handshake error from 127.0.0.1:55674: remote error: tls: bad certificate 2025-08-13T19:51:11.112006178+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55678: remote error: tls: bad certificate 2025-08-13T19:51:11.134733157+00:00 stderr F 2025/08/13 19:51:11 http: TLS 
handshake error from 127.0.0.1:55682: remote error: tls: bad certificate 2025-08-13T19:51:11.179219989+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55692: remote error: tls: bad certificate 2025-08-13T19:51:11.199674053+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55698: remote error: tls: bad certificate 2025-08-13T19:51:11.236478535+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55710: remote error: tls: bad certificate 2025-08-13T19:51:11.258368291+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55724: remote error: tls: bad certificate 2025-08-13T19:51:11.277408425+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55740: remote error: tls: bad certificate 2025-08-13T19:51:11.310586903+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55750: remote error: tls: bad certificate 2025-08-13T19:51:11.337531274+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55762: remote error: tls: bad certificate 2025-08-13T19:51:11.359350087+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55764: remote error: tls: bad certificate 2025-08-13T19:51:11.384598969+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55778: remote error: tls: bad certificate 2025-08-13T19:51:11.405238439+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55794: remote error: tls: bad certificate 2025-08-13T19:51:11.441067473+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55804: remote error: tls: bad certificate 2025-08-13T19:51:11.466377236+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55816: remote error: tls: bad certificate 2025-08-13T19:51:11.488383965+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55832: remote error: tls: bad certificate 
2025-08-13T19:51:11.513172993+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55842: remote error: tls: bad certificate 2025-08-13T19:51:11.541749410+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55856: remote error: tls: bad certificate 2025-08-13T19:51:11.569414231+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55858: remote error: tls: bad certificate 2025-08-13T19:51:11.583654368+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55872: remote error: tls: bad certificate 2025-08-13T19:51:11.604279097+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55888: remote error: tls: bad certificate 2025-08-13T19:51:11.623957230+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55890: remote error: tls: bad certificate 2025-08-13T19:51:11.645020082+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55902: remote error: tls: bad certificate 2025-08-13T19:51:11.662619235+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55914: remote error: tls: bad certificate 2025-08-13T19:51:11.677941633+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55922: remote error: tls: bad certificate 2025-08-13T19:51:11.696610586+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55924: remote error: tls: bad certificate 2025-08-13T19:51:11.720107488+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55936: remote error: tls: bad certificate 2025-08-13T19:51:11.740327816+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55952: remote error: tls: bad certificate 2025-08-13T19:51:11.757151677+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55962: remote error: tls: bad certificate 2025-08-13T19:51:11.778065764+00:00 stderr F 2025/08/13 19:51:11 http: TLS 
handshake error from 127.0.0.1:55976: remote error: tls: bad certificate 2025-08-13T19:51:11.800046323+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55982: remote error: tls: bad certificate 2025-08-13T19:51:11.818430788+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55986: remote error: tls: bad certificate 2025-08-13T19:51:11.835852666+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:55996: remote error: tls: bad certificate 2025-08-13T19:51:11.869709524+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56006: remote error: tls: bad certificate 2025-08-13T19:51:11.901452691+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56022: remote error: tls: bad certificate 2025-08-13T19:51:11.925191249+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56032: remote error: tls: bad certificate 2025-08-13T19:51:11.944592524+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56038: remote error: tls: bad certificate 2025-08-13T19:51:11.964132062+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56044: remote error: tls: bad certificate 2025-08-13T19:51:11.987549822+00:00 stderr F 2025/08/13 19:51:11 http: TLS handshake error from 127.0.0.1:56050: remote error: tls: bad certificate 2025-08-13T19:51:12.007037978+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56062: remote error: tls: bad certificate 2025-08-13T19:51:12.026225957+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56070: remote error: tls: bad certificate 2025-08-13T19:51:12.049928134+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56074: remote error: tls: bad certificate 2025-08-13T19:51:12.066472967+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56080: remote error: tls: bad certificate 
2025-08-13T19:51:12.094082986+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56084: remote error: tls: bad certificate 2025-08-13T19:51:12.121739217+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56098: remote error: tls: bad certificate 2025-08-13T19:51:12.157597582+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56100: remote error: tls: bad certificate 2025-08-13T19:51:12.177716267+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56116: remote error: tls: bad certificate 2025-08-13T19:51:12.207557610+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56128: remote error: tls: bad certificate 2025-08-13T19:51:12.232195604+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56140: remote error: tls: bad certificate 2025-08-13T19:51:12.247984205+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56142: remote error: tls: bad certificate 2025-08-13T19:51:12.263884989+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56148: remote error: tls: bad certificate 2025-08-13T19:51:12.283077628+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56150: remote error: tls: bad certificate 2025-08-13T19:51:12.301991789+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56166: remote error: tls: bad certificate 2025-08-13T19:51:12.315766972+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56176: remote error: tls: bad certificate 2025-08-13T19:51:12.331283186+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56178: remote error: tls: bad certificate 2025-08-13T19:51:12.349030813+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56182: remote error: tls: bad certificate 2025-08-13T19:51:12.366279616+00:00 stderr F 2025/08/13 19:51:12 http: TLS 
handshake error from 127.0.0.1:56186: remote error: tls: bad certificate 2025-08-13T19:51:12.391048224+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56188: remote error: tls: bad certificate 2025-08-13T19:51:12.407750971+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56196: remote error: tls: bad certificate 2025-08-13T19:51:12.424480839+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56198: remote error: tls: bad certificate 2025-08-13T19:51:12.441757163+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56200: remote error: tls: bad certificate 2025-08-13T19:51:12.460493379+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56216: remote error: tls: bad certificate 2025-08-13T19:51:12.487909002+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56218: remote error: tls: bad certificate 2025-08-13T19:51:12.503107247+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56222: remote error: tls: bad certificate 2025-08-13T19:51:12.519890386+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56232: remote error: tls: bad certificate 2025-08-13T19:51:12.541567386+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56248: remote error: tls: bad certificate 2025-08-13T19:51:12.559672193+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56250: remote error: tls: bad certificate 2025-08-13T19:51:12.574982911+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56252: remote error: tls: bad certificate 2025-08-13T19:51:12.599274365+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56258: remote error: tls: bad certificate 2025-08-13T19:51:12.622022115+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56262: remote error: tls: bad certificate 
2025-08-13T19:51:12.643713105+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56268: remote error: tls: bad certificate 2025-08-13T19:51:12.660965638+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56284: remote error: tls: bad certificate 2025-08-13T19:51:12.676703558+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56296: remote error: tls: bad certificate 2025-08-13T19:51:12.695582158+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56308: remote error: tls: bad certificate 2025-08-13T19:51:12.712148481+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56312: remote error: tls: bad certificate 2025-08-13T19:51:12.729627371+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56328: remote error: tls: bad certificate 2025-08-13T19:51:12.746057320+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56340: remote error: tls: bad certificate 2025-08-13T19:51:12.769516241+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56348: remote error: tls: bad certificate 2025-08-13T19:51:12.795541405+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56358: remote error: tls: bad certificate 2025-08-13T19:51:12.811990965+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56368: remote error: tls: bad certificate 2025-08-13T19:51:12.829954018+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56382: remote error: tls: bad certificate 2025-08-13T19:51:12.846641755+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56398: remote error: tls: bad certificate 2025-08-13T19:51:12.873191424+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56402: remote error: tls: bad certificate 2025-08-13T19:51:12.893746191+00:00 stderr F 2025/08/13 19:51:12 http: TLS 
handshake error from 127.0.0.1:56418: remote error: tls: bad certificate 2025-08-13T19:51:12.909586664+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56430: remote error: tls: bad certificate 2025-08-13T19:51:12.935738082+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56434: remote error: tls: bad certificate 2025-08-13T19:51:12.951402209+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56450: remote error: tls: bad certificate 2025-08-13T19:51:12.971053681+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56458: remote error: tls: bad certificate 2025-08-13T19:51:12.988277633+00:00 stderr F 2025/08/13 19:51:12 http: TLS handshake error from 127.0.0.1:56464: remote error: tls: bad certificate 2025-08-13T19:51:13.002164440+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56470: remote error: tls: bad certificate 2025-08-13T19:51:13.018524058+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56480: remote error: tls: bad certificate 2025-08-13T19:51:13.037715756+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56484: remote error: tls: bad certificate 2025-08-13T19:51:13.055989508+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56492: remote error: tls: bad certificate 2025-08-13T19:51:13.077931436+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56502: remote error: tls: bad certificate 2025-08-13T19:51:13.094364215+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56516: remote error: tls: bad certificate 2025-08-13T19:51:13.108995613+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56532: remote error: tls: bad certificate 2025-08-13T19:51:13.126513384+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56538: remote error: tls: bad certificate 
2025-08-13T19:51:13.139469984+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56554: remote error: tls: bad certificate 2025-08-13T19:51:13.157605813+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56558: remote error: tls: bad certificate 2025-08-13T19:51:13.176663217+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56566: remote error: tls: bad certificate 2025-08-13T19:51:13.193339994+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56568: remote error: tls: bad certificate 2025-08-13T19:51:13.210146094+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56570: remote error: tls: bad certificate 2025-08-13T19:51:13.237200328+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56586: remote error: tls: bad certificate 2025-08-13T19:51:13.257216900+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56602: remote error: tls: bad certificate 2025-08-13T19:51:13.275069500+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56614: remote error: tls: bad certificate 2025-08-13T19:51:13.296548644+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56618: remote error: tls: bad certificate 2025-08-13T19:51:13.336249349+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56632: remote error: tls: bad certificate 2025-08-13T19:51:13.375628944+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56642: remote error: tls: bad certificate 2025-08-13T19:51:13.417585093+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56654: remote error: tls: bad certificate 2025-08-13T19:51:13.460748847+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56662: remote error: tls: bad certificate 2025-08-13T19:51:13.496887470+00:00 stderr F 2025/08/13 19:51:13 http: TLS 
handshake error from 127.0.0.1:56672: remote error: tls: bad certificate 2025-08-13T19:51:13.538101458+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56674: remote error: tls: bad certificate 2025-08-13T19:51:13.577150764+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56676: remote error: tls: bad certificate 2025-08-13T19:51:13.614976155+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56692: remote error: tls: bad certificate 2025-08-13T19:51:13.654620938+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56704: remote error: tls: bad certificate 2025-08-13T19:51:13.696578717+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56720: remote error: tls: bad certificate 2025-08-13T19:51:13.736185909+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56726: remote error: tls: bad certificate 2025-08-13T19:51:13.776889442+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56738: remote error: tls: bad certificate 2025-08-13T19:51:13.825333577+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56750: remote error: tls: bad certificate 2025-08-13T19:51:13.878017402+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56754: remote error: tls: bad certificate 2025-08-13T19:51:13.918018146+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56762: remote error: tls: bad certificate 2025-08-13T19:51:13.940852788+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56778: remote error: tls: bad certificate 2025-08-13T19:51:13.975715405+00:00 stderr F 2025/08/13 19:51:13 http: TLS handshake error from 127.0.0.1:56784: remote error: tls: bad certificate 2025-08-13T19:51:14.017318674+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56788: remote error: tls: bad certificate 
2025-08-13T19:51:14.055215897+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56796: remote error: tls: bad certificate
2025-08-13T19:51:14.104613509+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56808: remote error: tls: bad certificate
2025-08-13T19:51:14.136732097+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56814: remote error: tls: bad certificate
2025-08-13T19:51:14.173913969+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56822: remote error: tls: bad certificate
2025-08-13T19:51:14.220596933+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56832: remote error: tls: bad certificate
2025-08-13T19:51:14.255966324+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56846: remote error: tls: bad certificate
2025-08-13T19:51:14.301126084+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56858: remote error: tls: bad certificate
2025-08-13T19:51:14.330879975+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56862: remote error: tls: bad certificate
2025-08-13T19:51:14.337535445+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56872: remote error: tls: bad certificate
2025-08-13T19:51:14.356650221+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56882: remote error: tls: bad certificate
2025-08-13T19:51:14.385160856+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56894: remote error: tls: bad certificate
2025-08-13T19:51:14.386954837+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56906: remote error: tls: bad certificate
2025-08-13T19:51:14.405027844+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56910: remote error: tls: bad certificate
2025-08-13T19:51:14.418855549+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56920: remote error: tls: bad certificate
2025-08-13T19:51:14.430440620+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56926: remote error: tls: bad certificate
2025-08-13T19:51:14.455565418+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56930: remote error: tls: bad certificate
2025-08-13T19:51:14.496387855+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56934: remote error: tls: bad certificate
2025-08-13T19:51:14.537382497+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56950: remote error: tls: bad certificate
2025-08-13T19:51:14.578863402+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56966: remote error: tls: bad certificate
2025-08-13T19:51:14.625207267+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56968: remote error: tls: bad certificate
2025-08-13T19:51:14.666023853+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56970: remote error: tls: bad certificate
2025-08-13T19:51:14.697620196+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56982: remote error: tls: bad certificate
2025-08-13T19:51:14.737162267+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:56998: remote error: tls: bad certificate
2025-08-13T19:51:14.779391794+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57008: remote error: tls: bad certificate
2025-08-13T19:51:14.832110740+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57016: remote error: tls: bad certificate
2025-08-13T19:51:14.857645800+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57024: remote error: tls: bad certificate
2025-08-13T19:51:14.899008222+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57028: remote error: tls: bad certificate
2025-08-13T19:51:14.942346031+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57038: remote error: tls: bad certificate
2025-08-13T19:51:14.975859139+00:00 stderr F 2025/08/13 19:51:14 http: TLS handshake error from 127.0.0.1:57040: remote error: tls: bad certificate
2025-08-13T19:51:15.016744567+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57046: remote error: tls: bad certificate
2025-08-13T19:51:15.054408254+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57056: remote error: tls: bad certificate
2025-08-13T19:51:15.094640014+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57060: remote error: tls: bad certificate
2025-08-13T19:51:15.139407263+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57066: remote error: tls: bad certificate
2025-08-13T19:51:15.176716129+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57068: remote error: tls: bad certificate
2025-08-13T19:51:15.220768898+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57076: remote error: tls: bad certificate
2025-08-13T19:51:15.252941698+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57092: remote error: tls: bad certificate
2025-08-13T19:51:15.358891256+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57104: remote error: tls: bad certificate
2025-08-13T19:51:15.619594007+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57112: remote error: tls: bad certificate
2025-08-13T19:51:15.684559674+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57120: remote error: tls: bad certificate
2025-08-13T19:51:15.848595482+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57122: remote error: tls: bad certificate
2025-08-13T19:51:15.893192097+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57130: remote error: tls: bad certificate
2025-08-13T19:51:16.000179064+00:00 stderr F 2025/08/13 19:51:15 http: TLS handshake error from 127.0.0.1:57140: remote error: tls: bad certificate
2025-08-13T19:51:16.049123203+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57144: remote error: tls: bad certificate
2025-08-13T19:51:16.079272745+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57146: remote error: tls: bad certificate
2025-08-13T19:51:16.288498265+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57162: remote error: tls: bad certificate
2025-08-13T19:51:16.361176402+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57172: remote error: tls: bad certificate
2025-08-13T19:51:16.380248147+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57180: remote error: tls: bad certificate
2025-08-13T19:51:16.402147563+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57182: remote error: tls: bad certificate
2025-08-13T19:51:16.419362805+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57190: remote error: tls: bad certificate
2025-08-13T19:51:16.437168534+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57194: remote error: tls: bad certificate
2025-08-13T19:51:16.470389923+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57204: remote error: tls: bad certificate
2025-08-13T19:51:16.492042662+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57218: remote error: tls: bad certificate
2025-08-13T19:51:16.512763464+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57220: remote error: tls: bad certificate
2025-08-13T19:51:16.532359734+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57224: remote error: tls: bad certificate
2025-08-13T19:51:16.552450229+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57230: remote error: tls: bad certificate
2025-08-13T19:51:16.572054709+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57238: remote error: tls: bad certificate
2025-08-13T19:51:16.594323935+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57242: remote error: tls: bad certificate
2025-08-13T19:51:16.637283303+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57254: remote error: tls: bad certificate
2025-08-13T19:51:16.652766666+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57256: remote error: tls: bad certificate
2025-08-13T19:51:16.669300648+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57260: remote error: tls: bad certificate
2025-08-13T19:51:16.700094658+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57262: remote error: tls: bad certificate
2025-08-13T19:51:16.724684431+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57268: remote error: tls: bad certificate
2025-08-13T19:51:16.744451996+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57272: remote error: tls: bad certificate
2025-08-13T19:51:16.758405375+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57280: remote error: tls: bad certificate
2025-08-13T19:51:16.773868067+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57290: remote error: tls: bad certificate
2025-08-13T19:51:16.795180356+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57296: remote error: tls: bad certificate
2025-08-13T19:51:16.830660130+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57310: remote error: tls: bad certificate
2025-08-13T19:51:16.847641125+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57314: remote error: tls: bad certificate
2025-08-13T19:51:16.864753594+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57326: remote error: tls: bad certificate
2025-08-13T19:51:16.886215948+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57330: remote error: tls: bad certificate
2025-08-13T19:51:16.907208558+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57338: remote error: tls: bad certificate
2025-08-13T19:51:16.923319178+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57348: remote error: tls: bad certificate
2025-08-13T19:51:16.937390440+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57362: remote error: tls: bad certificate
2025-08-13T19:51:16.953968324+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57376: remote error: tls: bad certificate
2025-08-13T19:51:16.969876929+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57384: remote error: tls: bad certificate
2025-08-13T19:51:16.988606234+00:00 stderr F 2025/08/13 19:51:16 http: TLS handshake error from 127.0.0.1:57398: remote error: tls: bad certificate
2025-08-13T19:51:17.004299883+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57414: remote error: tls: bad certificate
2025-08-13T19:51:17.019194058+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57422: remote error: tls: bad certificate
2025-08-13T19:51:17.037623665+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57424: remote error: tls: bad certificate
2025-08-13T19:51:17.054081786+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57440: remote error: tls: bad certificate
2025-08-13T19:51:17.070039562+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57456: remote error: tls: bad certificate
2025-08-13T19:51:17.095944982+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57470: remote error: tls: bad certificate
2025-08-13T19:51:17.136189822+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57474: remote error: tls: bad certificate
2025-08-13T19:51:17.175875847+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57482: remote error: tls: bad certificate
2025-08-13T19:51:17.218141185+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57494: remote error: tls: bad certificate
2025-08-13T19:51:17.255710168+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57498: remote error: tls: bad certificate
2025-08-13T19:51:17.297227785+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57500: remote error: tls: bad certificate
2025-08-13T19:51:17.336637281+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57506: remote error: tls: bad certificate
2025-08-13T19:51:17.377241142+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57518: remote error: tls: bad certificate
2025-08-13T19:51:17.419906141+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57526: remote error: tls: bad certificate
2025-08-13T19:51:17.454640834+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57540: remote error: tls: bad certificate
2025-08-13T19:51:17.495319557+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57542: remote error: tls: bad certificate
2025-08-13T19:51:17.545379047+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57548: remote error: tls: bad certificate
2025-08-13T19:51:17.574591122+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57556: remote error: tls: bad certificate
2025-08-13T19:51:17.617194900+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57564: remote error: tls: bad certificate
2025-08-13T19:51:17.658094319+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57578: remote error: tls: bad certificate
2025-08-13T19:51:17.696045924+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57588: remote error: tls: bad certificate
2025-08-13T19:51:17.736190631+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57598: remote error: tls: bad certificate
2025-08-13T19:51:17.775011651+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57610: remote error: tls: bad certificate
2025-08-13T19:51:17.816130585+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57618: remote error: tls: bad certificate
2025-08-13T19:51:17.857471686+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57622: remote error: tls: bad certificate
2025-08-13T19:51:17.893332051+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57634: remote error: tls: bad certificate
2025-08-13T19:51:17.937021800+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57646: remote error: tls: bad certificate
2025-08-13T19:51:17.974630585+00:00 stderr F 2025/08/13 19:51:17 http: TLS handshake error from 127.0.0.1:57658: remote error: tls: bad certificate
2025-08-13T19:51:18.018132358+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57660: remote error: tls: bad certificate
2025-08-13T19:51:18.055549548+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57674: remote error: tls: bad certificate
2025-08-13T19:51:18.098351461+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57676: remote error: tls: bad certificate
2025-08-13T19:51:18.140087574+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57690: remote error: tls: bad certificate
2025-08-13T19:51:18.175060213+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57702: remote error: tls: bad certificate
2025-08-13T19:51:18.218624428+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57718: remote error: tls: bad certificate
2025-08-13T19:51:18.258133277+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57734: remote error: tls: bad certificate
2025-08-13T19:51:18.296634998+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57740: remote error: tls: bad certificate
2025-08-13T19:51:18.337196447+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57748: remote error: tls: bad certificate
2025-08-13T19:51:18.378149598+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57764: remote error: tls: bad certificate
2025-08-13T19:51:18.415622969+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57774: remote error: tls: bad certificate
2025-08-13T19:51:18.458387461+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57790: remote error: tls: bad certificate
2025-08-13T19:51:18.495199003+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57798: remote error: tls: bad certificate
2025-08-13T19:51:18.539216271+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57800: remote error: tls: bad certificate
2025-08-13T19:51:18.577979439+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57804: remote error: tls: bad certificate
2025-08-13T19:51:18.614594906+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57808: remote error: tls: bad certificate
2025-08-13T19:51:18.655243847+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57812: remote error: tls: bad certificate
2025-08-13T19:51:18.696197528+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:57816: remote error: tls: bad certificate
2025-08-13T19:51:18.734933055+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48794: remote error: tls: bad certificate
2025-08-13T19:51:18.782486814+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48798: remote error: tls: bad certificate
2025-08-13T19:51:18.815491857+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48814: remote error: tls: bad certificate
2025-08-13T19:51:18.853877375+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48826: remote error: tls: bad certificate
2025-08-13T19:51:18.896680738+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48842: remote error: tls: bad certificate
2025-08-13T19:51:18.938149613+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48844: remote error: tls: bad certificate
2025-08-13T19:51:18.973016829+00:00 stderr F 2025/08/13 19:51:18 http: TLS handshake error from 127.0.0.1:48858: remote error: tls: bad certificate
2025-08-13T19:51:19.018100608+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48872: remote error: tls: bad certificate
2025-08-13T19:51:19.057138824+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48888: remote error: tls: bad certificate
2025-08-13T19:51:19.095294264+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48902: remote error: tls: bad certificate
2025-08-13T19:51:19.136037609+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48904: remote error: tls: bad certificate
2025-08-13T19:51:19.175518737+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48910: remote error: tls: bad certificate
2025-08-13T19:51:19.218533266+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48918: remote error: tls: bad certificate
2025-08-13T19:51:19.256003887+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48928: remote error: tls: bad certificate
2025-08-13T19:51:19.298466001+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48932: remote error: tls: bad certificate
2025-08-13T19:51:19.335958282+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48946: remote error: tls: bad certificate
2025-08-13T19:51:19.378441817+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48956: remote error: tls: bad certificate
2025-08-13T19:51:19.416966898+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48964: remote error: tls: bad certificate
2025-08-13T19:51:19.456156398+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48970: remote error: tls: bad certificate
2025-08-13T19:51:19.497225832+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48984: remote error: tls: bad certificate
2025-08-13T19:51:19.535277549+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48986: remote error: tls: bad certificate
2025-08-13T19:51:19.574767638+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:48988: remote error: tls: bad certificate
2025-08-13T19:51:19.619140096+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49002: remote error: tls: bad certificate
2025-08-13T19:51:19.760504665+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49018: remote error: tls: bad certificate
2025-08-13T19:51:19.797073871+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49034: remote error: tls: bad certificate
2025-08-13T19:51:19.819899484+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49046: remote error: tls: bad certificate
2025-08-13T19:51:19.845909868+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49054: remote error: tls: bad certificate
2025-08-13T19:51:19.870769268+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49056: remote error: tls: bad certificate
2025-08-13T19:51:19.890556244+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49070: remote error: tls: bad certificate
2025-08-13T19:51:19.907127727+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49076: remote error: tls: bad certificate
2025-08-13T19:51:19.935927590+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49082: remote error: tls: bad certificate
2025-08-13T19:51:19.975422599+00:00 stderr F 2025/08/13 19:51:19 http: TLS handshake error from 127.0.0.1:49092: remote error: tls: bad certificate
2025-08-13T19:51:20.014550047+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49106: remote error: tls: bad certificate
2025-08-13T19:51:20.054914621+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49122: remote error: tls: bad certificate
2025-08-13T19:51:20.095512841+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49124: remote error: tls: bad certificate
2025-08-13T19:51:20.136422211+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49130: remote error: tls: bad certificate
2025-08-13T19:51:20.179887973+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49138: remote error: tls: bad certificate
2025-08-13T19:51:20.216285713+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49144: remote error: tls: bad certificate
2025-08-13T19:51:20.256577895+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49154: remote error: tls: bad certificate
2025-08-13T19:51:20.294250642+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49166: remote error: tls: bad certificate
2025-08-13T19:51:20.338264850+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49176: remote error: tls: bad certificate
2025-08-13T19:51:20.378175010+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49192: remote error: tls: bad certificate
2025-08-13T19:51:20.420151640+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49208: remote error: tls: bad certificate
2025-08-13T19:51:20.455171891+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49224: remote error: tls: bad certificate
2025-08-13T19:51:20.497421358+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49230: remote error: tls: bad certificate
2025-08-13T19:51:20.539308466+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49232: remote error: tls: bad certificate
2025-08-13T19:51:20.576921541+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49242: remote error: tls: bad certificate
2025-08-13T19:51:20.620756943+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49248: remote error: tls: bad certificate
2025-08-13T19:51:20.655605999+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49256: remote error: tls: bad certificate
2025-08-13T19:51:20.701613924+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49262: remote error: tls: bad certificate
2025-08-13T19:51:20.737487670+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49268: remote error: tls: bad certificate
2025-08-13T19:51:20.775267579+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49272: remote error: tls: bad certificate
2025-08-13T19:51:20.814435639+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49284: remote error: tls: bad certificate
2025-08-13T19:51:20.861732111+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49296: remote error: tls: bad certificate
2025-08-13T19:51:20.897545024+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49306: remote error: tls: bad certificate
2025-08-13T19:51:20.935412567+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49312: remote error: tls: bad certificate
2025-08-13T19:51:20.976219453+00:00 stderr F 2025/08/13 19:51:20 http: TLS handshake error from 127.0.0.1:49314: remote error: tls: bad certificate
2025-08-13T19:51:21.015857946+00:00 stderr F 2025/08/13 19:51:21 http: TLS handshake error from 127.0.0.1:49324: remote error: tls: bad certificate
2025-08-13T19:51:22.820721319+00:00 stderr F 2025/08/13 19:51:22 http: TLS handshake error from 127.0.0.1:49336: remote error: tls: bad certificate
2025-08-13T19:51:24.650938228+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49344: remote error: tls: bad certificate
2025-08-13T19:51:24.671006000+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49358: remote error: tls: bad certificate
2025-08-13T19:51:24.693388507+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49364: remote error: tls: bad certificate
2025-08-13T19:51:24.713368767+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49370: remote error: tls: bad certificate
2025-08-13T19:51:24.735238740+00:00 stderr F 2025/08/13 19:51:24 http: TLS handshake error from 127.0.0.1:49382: remote error: tls: bad certificate
2025-08-13T19:51:25.229155691+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49384: remote error: tls: bad certificate
2025-08-13T19:51:25.246324780+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49398: remote error: tls: bad certificate
2025-08-13T19:51:25.263464138+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49406: remote error: tls: bad certificate
2025-08-13T19:51:25.277217700+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49418: remote error: tls: bad certificate
2025-08-13T19:51:25.293293068+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49422: remote error: tls: bad certificate
2025-08-13T19:51:25.310585241+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49424: remote error: tls: bad certificate
2025-08-13T19:51:25.325511446+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49426: remote error: tls: bad certificate
2025-08-13T19:51:25.341518872+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49432: remote error: tls: bad certificate
2025-08-13T19:51:25.364286111+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49440: remote error: tls: bad certificate
2025-08-13T19:51:25.381632405+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49450: remote error: tls: bad certificate
2025-08-13T19:51:25.398575458+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49462: remote error: tls: bad certificate
2025-08-13T19:51:25.414258465+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49464: remote error: tls: bad certificate
2025-08-13T19:51:25.430355163+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49466: remote error: tls: bad certificate
2025-08-13T19:51:25.454371238+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49478: remote error: tls: bad certificate
2025-08-13T19:51:25.471703371+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49482: remote error: tls: bad certificate
2025-08-13T19:51:25.487219043+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49490: remote error: tls: bad certificate
2025-08-13T19:51:25.505701240+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49500: remote error: tls: bad certificate
2025-08-13T19:51:25.524707501+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49502: remote error: tls: bad certificate
2025-08-13T19:51:25.543520727+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49504: remote error: tls: bad certificate
2025-08-13T19:51:25.560540912+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49520: remote error: tls: bad certificate
2025-08-13T19:51:25.578019030+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49530: remote error: tls: bad certificate
2025-08-13T19:51:25.610920948+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49540: remote error: tls: bad certificate
2025-08-13T19:51:25.627284454+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49556: remote error: tls: bad certificate
2025-08-13T19:51:25.645934675+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49558: remote error: tls: bad certificate
2025-08-13T19:51:25.663733742+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49574: remote error: tls: bad certificate
2025-08-13T19:51:25.681895430+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49584: remote error: tls: bad certificate
2025-08-13T19:51:25.705085211+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49598: remote error: tls: bad certificate
2025-08-13T19:51:25.722750824+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49600: remote error: tls: bad certificate
2025-08-13T19:51:25.741721714+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49606: remote error: tls: bad certificate
2025-08-13T19:51:25.763099613+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49616: remote error: tls: bad certificate
2025-08-13T19:51:25.782352652+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49624: remote error: tls: bad certificate
2025-08-13T19:51:25.802023502+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49626: remote error: tls: bad certificate
2025-08-13T19:51:25.821910699+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49634: remote error: tls: bad certificate
2025-08-13T19:51:25.841640791+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49642: remote error: tls: bad certificate
2025-08-13T19:51:25.862193877+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49654: remote error: tls: bad certificate
2025-08-13T19:51:25.877658537+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49656: remote error: tls: bad certificate
2025-08-13T19:51:25.892992604+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49662: remote error: tls: bad certificate
2025-08-13T19:51:25.918943874+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49670: remote error: tls: bad certificate
2025-08-13T19:51:25.945947213+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49682: remote error: tls: bad certificate
2025-08-13T19:51:25.977615655+00:00 stderr F 2025/08/13 19:51:25 http: TLS handshake error from 127.0.0.1:49694: remote error: tls: bad certificate
2025-08-13T19:51:26.003864763+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49708: remote error: tls: bad certificate
2025-08-13T19:51:26.020383534+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49720: remote error: tls: bad certificate
2025-08-13T19:51:26.038353276+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49726: remote error: tls: bad certificate
2025-08-13T19:51:26.058040357+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49732: remote error: tls: bad certificate
2025-08-13T19:51:26.081340790+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49734: remote error: tls: bad certificate
2025-08-13T19:51:26.097624354+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49744: remote error: tls: bad certificate
2025-08-13T19:51:26.113924419+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49758: remote error: tls: bad certificate
2025-08-13T19:51:26.129926265+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49764: remote error: tls: bad certificate
2025-08-13T19:51:26.144420338+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49772: remote error: tls: bad certificate
2025-08-13T19:51:26.167900957+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49776: remote error: tls: bad certificate
2025-08-13T19:51:26.181471043+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49780: remote error: tls: bad certificate
2025-08-13T19:51:26.195262196+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49796: remote error: tls: bad certificate
2025-08-13T19:51:26.213324421+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49802: remote error: tls: bad certificate
2025-08-13T19:51:26.230124529+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49810: remote error: tls: bad certificate
2025-08-13T19:51:26.246231448+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49816: remote error: tls: bad certificate
2025-08-13T19:51:26.269661956+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49820: remote error: tls: bad certificate
2025-08-13T19:51:26.286405443+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49824: remote error: tls: bad certificate
2025-08-13T19:51:26.303175101+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49836: remote error: tls: bad certificate
2025-08-13T19:51:26.318382014+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49838: remote error: tls: bad certificate
2025-08-13T19:51:26.335882373+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49846: remote error: tls: bad certificate
2025-08-13T19:51:26.350144109+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49860: remote error: tls: bad certificate
2025-08-13T19:51:26.365631970+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49874: remote error: tls: bad certificate
2025-08-13T19:51:26.383921891+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49882: remote error: tls: bad certificate
2025-08-13T19:51:26.400607027+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49896: remote error: tls: bad certificate
2025-08-13T19:51:26.419436783+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49906: remote error: tls: bad certificate
2025-08-13T19:51:26.438519207+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49918: remote error: tls: bad certificate
2025-08-13T19:51:26.454055969+00:00 stderr F 2025/08/13 19:51:26 http: TLS handshake error from 127.0.0.1:49920: remote error: tls: bad certificate
2025-08-13T19:51:34.888909375+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44154: remote error: tls: bad certificate
2025-08-13T19:51:34.911553090+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44164: remote error: tls: bad certificate
2025-08-13T19:51:34.932418284+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44174: remote error: tls: bad certificate
2025-08-13T19:51:34.953989569+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44182: remote error: tls: bad certificate
2025-08-13T19:51:34.977100287+00:00 stderr F 2025/08/13 19:51:34 http: TLS handshake error from 127.0.0.1:44194: remote error: tls: bad certificate
2025-08-13T19:51:35.230890378+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44204: remote error: tls: bad certificate
2025-08-13T19:51:35.246219745+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44220: remote error: tls: bad certificate
2025-08-13T19:51:35.260329997+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44228: remote error: tls: bad certificate
2025-08-13T19:51:35.278439343+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44238: remote error: tls: bad certificate
2025-08-13T19:51:35.298312789+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44250: remote error: tls: bad certificate
2025-08-13T19:51:35.324349951+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44252: remote error: tls: bad certificate
2025-08-13T19:51:35.341884941+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44268: remote error: tls: bad certificate
2025-08-13T19:51:35.358668909+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44274: remote error: tls: bad certificate
2025-08-13T19:51:35.374070068+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44276: remote error: tls: bad certificate
2025-08-13T19:51:35.387741277+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44282: remote error: tls: bad certificate
2025-08-13T19:51:35.412084041+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44294: remote error: tls: bad certificate
2025-08-13T19:51:35.428288072+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44298: remote error: tls: bad certificate
2025-08-13T19:51:35.445766120+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44314: remote error: tls: bad certificate
2025-08-13T19:51:35.461766916+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44318: remote error: tls: bad certificate
2025-08-13T19:51:35.481981802+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44334: remote error: tls: bad certificate
2025-08-13T19:51:35.506534571+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44342: remote error: tls: bad certificate
2025-08-13T19:51:35.521755915+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44348: remote error: tls: bad certificate
2025-08-13T19:51:35.536405812+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44360: remote error: tls: bad certificate
2025-08-13T19:51:35.551632586+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44374: remote error: tls: bad certificate
2025-08-13T19:51:35.572519391+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44376: remote error: tls: bad certificate
2025-08-13T19:51:35.596243597+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44388: remote error: tls: bad certificate
2025-08-13T19:51:35.613958172+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44394: remote error: tls: bad certificate
2025-08-13T19:51:35.631295326+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44410: remote error: tls: bad certificate
2025-08-13T19:51:35.650620427+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44412: remote error: tls: bad certificate
2025-08-13T19:51:35.667202959+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44426: remote error: tls: bad certificate
2025-08-13T19:51:35.680952901+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44442: remote error: tls: bad certificate
2025-08-13T19:51:35.699039085+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44446: remote error: tls: bad certificate
2025-08-13T19:51:35.714517136+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44460: remote error: tls: bad certificate
2025-08-13T19:51:35.734411193+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44472: remote error: tls: bad certificate
2025-08-13T19:51:35.749755970+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44480: remote error: tls: bad certificate
2025-08-13T19:51:35.769064090+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44486: remote error: tls: bad certificate
2025-08-13T19:51:35.793466566+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44494: remote error: tls: bad certificate
2025-08-13T19:51:35.809748190+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44504: remote error: tls: bad certificate
2025-08-13T19:51:35.823169672+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44508: remote error: tls: bad certificate
2025-08-13T19:51:35.844928822+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44522: remote error: tls: bad certificate
2025-08-13T19:51:35.862579885+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44532: remote error: tls: bad certificate
2025-08-13T19:51:35.878615402+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44536: remote error: tls: bad certificate
2025-08-13T19:51:35.893603579+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44552: remote error: tls: bad certificate
2025-08-13T19:51:35.908233105+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44554: remote error: tls: bad certificate
2025-08-13T19:51:35.924323204+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44558: remote error: tls: bad certificate
2025-08-13T19:51:35.938981422+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44572: remote error: tls: bad certificate
2025-08-13T19:51:35.953170416+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44582: remote error: tls: bad certificate
2025-08-13T19:51:35.968663917+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44596: remote error: tls: bad certificate
2025-08-13T19:51:35.998586760+00:00 stderr F 2025/08/13 19:51:35 http: TLS handshake error from 127.0.0.1:44604: remote error: tls: bad certificate
2025-08-13T19:51:36.013735551+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44620: remote error: tls: bad certificate
2025-08-13T19:51:36.033660039+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44626: remote error: tls: bad certificate
2025-08-13T19:51:36.048400469+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44628: remote error: tls: bad certificate
2025-08-13T19:51:36.063665594+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44640: remote error: tls: bad certificate
2025-08-13T19:51:36.105005242+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44654: remote error: tls: bad certificate
2025-08-13T19:51:36.170414595+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44664: remote error: tls: bad certificate
2025-08-13T19:51:36.185508785+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44680: remote error: tls: bad certificate
2025-08-13T19:51:36.210122447+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44696: remote error: tls: bad certificate
2025-08-13T19:51:36.225264118+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44710: remote error: tls: bad certificate
2025-08-13T19:51:36.240088881+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44724: remote error: tls: bad certificate
2025-08-13T19:51:36.254623595+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44740: remote error: tls: bad certificate
2025-08-13T19:51:36.265593497+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44746: remote error: tls: bad certificate
2025-08-13T19:51:36.279867284+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44754: remote error: tls: bad certificate
2025-08-13T19:51:36.294984375+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44760: remote error: tls: bad certificate
2025-08-13T19:51:36.315271863+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44766: remote error: tls: bad certificate
2025-08-13T19:51:36.331644569+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44770: remote error: tls: bad certificate
2025-08-13T19:51:36.349393065+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44776: remote error: tls: bad certificate
2025-08-13T19:51:36.367290895+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44778: remote error: tls: bad certificate
2025-08-13T19:51:36.383337162+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44790: remote error: tls: bad certificate
2025-08-13T19:51:36.398226026+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44798: remote error: tls: bad certificate
2025-08-13T19:51:36.415058326+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44804: remote error: tls: bad certificate
2025-08-13T19:51:36.429761165+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44818: remote error: tls: bad certificate
2025-08-13T19:51:36.447405417+00:00 stderr F 2025/08/13 19:51:36 http: TLS handshake error from 127.0.0.1:44828: remote error: tls: bad certificate
2025-08-13T19:51:45.229423954+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58312: remote error: tls: bad certificate
2025-08-13T19:51:45.247171210+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58324: remote error: tls: bad certificate
2025-08-13T19:51:45.257149764+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58336: remote error: tls: bad certificate
2025-08-13T19:51:45.273425098+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58352: remote error: tls: bad certificate
2025-08-13T19:51:45.283935737+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58366: remote error: tls: bad certificate
2025-08-13T19:51:45.295551288+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58372: remote error: tls: bad certificate
2025-08-13T19:51:45.308979121+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58378: remote error: tls: bad certificate
2025-08-13T19:51:45.312430609+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58382: remote error: tls: bad certificate
2025-08-13T19:51:45.331969776+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58394: remote error: tls: bad certificate
2025-08-13T19:51:45.331969776+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58404: remote error: tls: bad certificate
2025-08-13T19:51:45.349488195+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58406: remote error: tls: bad certificate
2025-08-13T19:51:45.352435989+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58416: remote error: tls: bad certificate
2025-08-13T19:51:45.369491465+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58422: remote error: tls: bad certificate
2025-08-13T19:51:45.388581459+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58436: remote error: tls: bad certificate
2025-08-13T19:51:45.407101377+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58452: remote error: tls: bad certificate
2025-08-13T19:51:45.424728109+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58454: remote error: tls: bad certificate
2025-08-13T19:51:45.441388033+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58464: remote error: tls: bad certificate
2025-08-13T19:51:45.457253205+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58472: remote error: tls: bad certificate
2025-08-13T19:51:45.480961341+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58486: remote error: tls: bad certificate
2025-08-13T19:51:45.500220840+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58488: remote error: tls: bad certificate
2025-08-13T19:51:45.520414815+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58502: remote error: tls: bad certificate
2025-08-13T19:51:45.539251772+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58518: remote error: tls: bad certificate
2025-08-13T19:51:45.557504182+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58526: remote error: tls: bad certificate
2025-08-13T19:51:45.571286554+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58536: remote error: tls: bad certificate
2025-08-13T19:51:45.589704359+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58542: remote error: tls: bad certificate
2025-08-13T19:51:45.607948879+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58548: remote error: tls: bad certificate
2025-08-13T19:51:45.625083857+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58560: remote error: tls: bad certificate
2025-08-13T19:51:45.646020864+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58568: remote error: tls: bad certificate
2025-08-13T19:51:45.667046433+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58584: remote error: tls: bad certificate
2025-08-13T19:51:45.690323306+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58598: remote error: tls: bad certificate
2025-08-13T19:51:45.709131022+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58604: remote error: tls: bad certificate
2025-08-13T19:51:45.725559170+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58620: remote error: tls: bad certificate
2025-08-13T19:51:45.747176896+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58636: remote error: tls: bad certificate
2025-08-13T19:51:45.765553239+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58638: remote error: tls: bad certificate
2025-08-13T19:51:45.785663762+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58642: remote error: tls: bad certificate
2025-08-13T19:51:45.804608922+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58648: remote error: tls: bad certificate
2025-08-13T19:51:45.820687760+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58662: remote error: tls: bad certificate
2025-08-13T19:51:45.837835759+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58678: remote error: tls: bad certificate
2025-08-13T19:51:45.855400129+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58694: remote error: tls: bad certificate
2025-08-13T19:51:45.874041070+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58706: remote error: tls: bad certificate
2025-08-13T19:51:45.895304306+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58718: remote error: tls: bad certificate
2025-08-13T19:51:45.918644221+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58722: remote error: tls: bad certificate
2025-08-13T19:51:45.937194459+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58728: remote error: tls: bad certificate
2025-08-13T19:51:45.956282993+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58742: remote error: tls: bad certificate
2025-08-13T19:51:45.973995498+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58752: remote error: tls: bad certificate
2025-08-13T19:51:45.991052044+00:00 stderr F 2025/08/13 19:51:45 http: TLS handshake error from 127.0.0.1:58758: remote error: tls: bad certificate
2025-08-13T19:51:46.014019558+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58772: remote error: tls: bad certificate
2025-08-13T19:51:46.036255592+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58782: remote error: tls: bad certificate
2025-08-13T19:51:46.060187624+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58790: remote error: tls: bad certificate
2025-08-13T19:51:46.079926496+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58806: remote error: tls: bad certificate
2025-08-13T19:51:46.100487972+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58818: remote error: tls: bad certificate
2025-08-13T19:51:46.128986464+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58834: remote error: tls: bad certificate
2025-08-13T19:51:46.144252709+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58844: remote error: tls: bad certificate
2025-08-13T19:51:46.158254278+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58852: remote error: tls: bad certificate
2025-08-13T19:51:46.178462863+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58854: remote error: tls: bad certificate
2025-08-13T19:51:46.198764252+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58862: remote error: tls: bad certificate
2025-08-13T19:51:46.213966015+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58874: remote error: tls: bad certificate
2025-08-13T19:51:46.227996355+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58882: remote error: tls: bad certificate
2025-08-13T19:51:46.246208283+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58894: remote error: tls: bad certificate
2025-08-13T19:51:46.266552293+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58900: remote error: tls: bad certificate
2025-08-13T19:51:46.283127445+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58908: remote error: tls: bad certificate
2025-08-13T19:51:46.298365510+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58910: remote error: tls: bad certificate
2025-08-13T19:51:46.323312250+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58926: remote error: tls: bad certificate
2025-08-13T19:51:46.339574334+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58938: remote error: tls: bad certificate
2025-08-13T19:51:46.360373316+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58952: remote error: tls: bad certificate
2025-08-13T19:51:46.379384908+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58954: remote error: tls: bad certificate
2025-08-13T19:51:46.397547285+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58966: remote error: tls: bad certificate
2025-08-13T19:51:46.416198287+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58972: remote error: tls: bad certificate
2025-08-13T19:51:46.433426266+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:58988: remote error: tls: bad certificate
2025-08-13T19:51:46.447897129+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:59002: remote error: tls: bad certificate
2025-08-13T19:51:46.468113175+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:59014: remote error: tls: bad certificate
2025-08-13T19:51:46.491347087+00:00 stderr F 2025/08/13 19:51:46 http: TLS handshake error from 127.0.0.1:59022: remote error: tls: bad certificate
2025-08-13T19:51:49.076095659+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48056: remote error: tls: bad certificate
2025-08-13T19:51:49.107637277+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48070: remote error: tls: bad certificate
2025-08-13T19:51:49.128606465+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48082: remote error: tls: bad certificate
2025-08-13T19:51:49.149867760+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48094: remote error: tls: bad certificate
2025-08-13T19:51:49.168729248+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48110: remote error: tls: bad certificate
2025-08-13T19:51:49.185655820+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48112: remote error: tls: bad certificate
2025-08-13T19:51:49.212078173+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48120: remote error: tls: bad certificate
2025-08-13T19:51:49.232245838+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48128: remote error: tls: bad certificate
2025-08-13T19:51:49.250031674+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48138: remote error: tls: bad certificate
2025-08-13T19:51:49.273241696+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48146: remote error: tls: bad certificate
2025-08-13T19:51:49.291936728+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48158: remote error: tls: bad certificate
2025-08-13T19:51:49.308127519+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48160: remote error: tls: bad certificate
2025-08-13T19:51:49.327733988+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48166: remote error: tls: bad certificate
2025-08-13T19:51:49.337156937+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48170: remote error: tls: bad certificate
2025-08-13T19:51:49.354172371+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48178: remote error: tls: bad certificate
2025-08-13T19:51:49.380365958+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48192: remote error: tls: bad certificate
2025-08-13T19:51:49.411187506+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48204: remote error: tls: bad certificate
2025-08-13T19:51:49.440935043+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48212: remote error: tls: bad certificate
2025-08-13T19:51:49.460707737+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48222: remote error: tls: bad certificate
2025-08-13T19:51:49.491663629+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48238: remote error: tls: bad certificate
2025-08-13T19:51:49.509939839+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48248: remote error: tls: bad certificate
2025-08-13T19:51:49.537520105+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48256: remote error: tls: bad certificate
2025-08-13T19:51:49.560039917+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48272: remote error: tls: bad certificate
2025-08-13T19:51:49.575920549+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48288: remote error: tls: bad certificate
2025-08-13T19:51:49.600389346+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48300: remote error: tls: bad certificate
2025-08-13T19:51:49.618001058+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48302: remote error: tls: bad certificate
2025-08-13T19:51:49.644572915+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48310: remote error: tls: bad certificate
2025-08-13T19:51:49.672764988+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48324: remote error: tls: bad certificate
2025-08-13T19:51:49.692653795+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48330: remote error: tls: bad certificate
2025-08-13T19:51:49.716426202+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48334: remote error: tls: bad certificate
2025-08-13T19:51:49.760494018+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48344: remote error: tls: bad certificate
2025-08-13T19:51:49.806922041+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48350: remote error: tls: bad certificate
2025-08-13T19:51:49.835891726+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48362: remote error: tls: bad certificate
2025-08-13T19:51:49.865521470+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48374: remote error: tls: bad certificate
2025-08-13T19:51:49.892461968+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48378: remote error: tls: bad certificate
2025-08-13T19:51:49.935357250+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48394: remote error: tls: bad certificate
2025-08-13T19:51:49.982184734+00:00 stderr F 2025/08/13 19:51:49 http: TLS handshake error from 127.0.0.1:48410: remote error: tls: bad certificate
2025-08-13T19:51:50.011342255+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48424: remote error: tls: bad certificate
2025-08-13T19:51:50.037960573+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48438: remote error: tls: bad certificate
2025-08-13T19:51:50.058617801+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48454: remote error: tls: bad certificate
2025-08-13T19:51:50.105753704+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48468: remote error: tls: bad certificate
2025-08-13T19:51:50.135751038+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48470: remote error: tls: bad certificate
2025-08-13T19:51:50.158219968+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48472: remote error: tls: bad certificate
2025-08-13T19:51:50.201533853+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48476: remote error: tls: bad certificate
2025-08-13T19:51:50.244723343+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48490: remote error: tls: bad certificate
2025-08-13T19:51:50.315070827+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48502: remote error: tls: bad certificate
2025-08-13T19:51:50.376929440+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48508: remote error: tls: bad certificate
2025-08-13T19:51:50.555041034+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48514: remote error: tls: bad certificate
2025-08-13T19:51:50.609665641+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48524: remote error: tls: bad certificate
2025-08-13T19:51:50.672921733+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48532: remote error: tls: bad certificate
2025-08-13T19:51:50.769471844+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48544: remote error: tls: bad certificate
2025-08-13T19:51:50.831486710+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48560: remote error: tls: bad certificate
2025-08-13T19:51:50.863310337+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48566: remote error: tls: bad certificate
2025-08-13T19:51:50.908341840+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48580: remote error: tls: bad certificate
2025-08-13T19:51:50.924962964+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48596: remote error: tls: bad certificate
2025-08-13T19:51:50.948964588+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48606: remote error: tls: bad certificate
2025-08-13T19:51:50.968907536+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48614: remote error: tls: bad certificate
2025-08-13T19:51:51.001053502+00:00 stderr F 2025/08/13 19:51:50 http: TLS handshake error from 127.0.0.1:48626: remote error: tls: bad certificate
2025-08-13T19:51:51.019893938+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48638: remote error: tls: bad certificate
2025-08-13T19:51:51.046002942+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48644: remote error: tls: bad certificate
2025-08-13T19:51:51.076886692+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48646: remote error: tls: bad certificate
2025-08-13T19:51:51.109906753+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48662: remote error: tls: bad certificate
2025-08-13T19:51:51.133918747+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48674: remote error: tls: bad certificate
2025-08-13T19:51:51.154886284+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48684: remote error: tls: bad certificate
2025-08-13T19:51:51.185132866+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48698: remote error: tls: bad certificate
2025-08-13T19:51:51.225982780+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48712: remote error: tls: bad certificate
2025-08-13T19:51:51.248045369+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48718: remote error: tls: bad certificate
2025-08-13T19:51:51.275053928+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48728: remote error: tls: bad certificate
2025-08-13T19:51:51.307019519+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48732: remote error: tls: bad certificate
2025-08-13T19:51:51.330985142+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48746: remote error: tls: bad certificate
2025-08-13T19:51:51.360095031+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48760: remote error: tls: bad certificate
2025-08-13T19:51:51.386985437+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48772: remote error: tls: bad certificate
2025-08-13T19:51:51.414079099+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48780: remote error: tls: bad certificate
2025-08-13T19:51:51.443983831+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48792: remote error: tls: bad certificate
2025-08-13T19:51:51.468964643+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48802: remote error: tls: bad certificate
2025-08-13T19:51:51.498901096+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48808: remote error: tls: bad certificate
2025-08-13T19:51:51.510905168+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48814: remote error: tls: bad certificate
2025-08-13T19:51:51.523940479+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48822: remote error: tls: bad certificate
2025-08-13T19:51:51.547918962+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48824: remote error: tls: bad certificate
2025-08-13T19:51:51.585071241+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48830: remote error: tls: bad certificate
2025-08-13T19:51:51.604072822+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48832: remote error: tls: bad certificate
2025-08-13T19:51:51.632913834+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48840: remote error: tls: bad certificate
2025-08-13T19:51:51.652868552+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48848: remote error: tls: bad certificate
2025-08-13T19:51:51.671007739+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48864: remote error: tls: bad certificate
2025-08-13T19:51:51.688964061+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48876: remote error: tls: bad certificate
2025-08-13T19:51:51.706877871+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48878: remote error: tls: bad certificate
2025-08-13T19:51:51.722889297+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48884: remote error: tls: bad certificate
2025-08-13T19:51:51.741873858+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48900: remote error: tls: bad certificate
2025-08-13T19:51:51.762867906+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48908: remote error: tls: bad certificate
2025-08-13T19:51:51.779890261+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48922: remote error: tls: bad certificate 2025-08-13T19:51:51.803894675+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48934: remote error: tls: bad certificate 2025-08-13T19:51:51.827888129+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48938: remote error: tls: bad certificate 2025-08-13T19:51:51.848906338+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48948: remote error: tls: bad certificate 2025-08-13T19:51:51.875903357+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48950: remote error: tls: bad certificate 2025-08-13T19:51:51.895929788+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48952: remote error: tls: bad certificate 2025-08-13T19:51:51.917910464+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48956: remote error: tls: bad certificate 2025-08-13T19:51:51.938886121+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48968: remote error: tls: bad certificate 2025-08-13T19:51:51.959876029+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48974: remote error: tls: bad certificate 2025-08-13T19:51:51.977891333+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48980: remote error: tls: bad certificate 2025-08-13T19:51:51.995901926+00:00 stderr F 2025/08/13 19:51:51 http: TLS handshake error from 127.0.0.1:48982: remote error: tls: bad certificate 2025-08-13T19:51:52.012876879+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:48986: remote error: tls: bad certificate 2025-08-13T19:51:52.030907153+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:48992: remote error: tls: bad certificate 2025-08-13T19:51:52.057915593+00:00 stderr F 2025/08/13 19:51:52 http: TLS 
handshake error from 127.0.0.1:48994: remote error: tls: bad certificate 2025-08-13T19:51:52.080271780+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49000: remote error: tls: bad certificate 2025-08-13T19:51:52.100929988+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49014: remote error: tls: bad certificate 2025-08-13T19:51:52.118900100+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49016: remote error: tls: bad certificate 2025-08-13T19:51:52.144946212+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49030: remote error: tls: bad certificate 2025-08-13T19:51:52.166445025+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49038: remote error: tls: bad certificate 2025-08-13T19:51:52.189448370+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49042: remote error: tls: bad certificate 2025-08-13T19:51:52.217337345+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49054: remote error: tls: bad certificate 2025-08-13T19:51:52.244161329+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49060: remote error: tls: bad certificate 2025-08-13T19:51:52.255997516+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49076: remote error: tls: bad certificate 2025-08-13T19:51:52.274023970+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49086: remote error: tls: bad certificate 2025-08-13T19:51:52.304564450+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49098: remote error: tls: bad certificate 2025-08-13T19:51:52.330130308+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49104: remote error: tls: bad certificate 2025-08-13T19:51:52.361858712+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49112: remote error: tls: bad certificate 
2025-08-13T19:51:52.386456493+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49120: remote error: tls: bad certificate 2025-08-13T19:51:52.414108811+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49134: remote error: tls: bad certificate 2025-08-13T19:51:52.444031763+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49136: remote error: tls: bad certificate 2025-08-13T19:51:52.468897962+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49140: remote error: tls: bad certificate 2025-08-13T19:51:52.490647532+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49156: remote error: tls: bad certificate 2025-08-13T19:51:52.518408173+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49158: remote error: tls: bad certificate 2025-08-13T19:51:52.554922713+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49168: remote error: tls: bad certificate 2025-08-13T19:51:52.570868187+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49176: remote error: tls: bad certificate 2025-08-13T19:51:52.603080335+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49180: remote error: tls: bad certificate 2025-08-13T19:51:52.626078340+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49194: remote error: tls: bad certificate 2025-08-13T19:51:52.653371248+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49198: remote error: tls: bad certificate 2025-08-13T19:51:52.674299524+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49206: remote error: tls: bad certificate 2025-08-13T19:51:52.698756671+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49218: remote error: tls: bad certificate 2025-08-13T19:51:52.719594044+00:00 stderr F 2025/08/13 19:51:52 http: TLS 
handshake error from 127.0.0.1:49230: remote error: tls: bad certificate 2025-08-13T19:51:52.740359316+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49232: remote error: tls: bad certificate 2025-08-13T19:51:52.775948640+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49238: remote error: tls: bad certificate 2025-08-13T19:51:52.802281040+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49250: remote error: tls: bad certificate 2025-08-13T19:51:52.816648530+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49260: remote error: tls: bad certificate 2025-08-13T19:51:52.819327866+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49272: remote error: tls: bad certificate 2025-08-13T19:51:52.838937585+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49274: remote error: tls: bad certificate 2025-08-13T19:51:52.862078924+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49286: remote error: tls: bad certificate 2025-08-13T19:51:52.888863847+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49292: remote error: tls: bad certificate 2025-08-13T19:51:52.915021762+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49296: remote error: tls: bad certificate 2025-08-13T19:51:52.938728128+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49304: remote error: tls: bad certificate 2025-08-13T19:51:52.963520884+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49308: remote error: tls: bad certificate 2025-08-13T19:51:52.986878080+00:00 stderr F 2025/08/13 19:51:52 http: TLS handshake error from 127.0.0.1:49318: remote error: tls: bad certificate 2025-08-13T19:51:53.016400571+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49334: remote error: tls: bad certificate 
2025-08-13T19:51:53.033557990+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49348: remote error: tls: bad certificate 2025-08-13T19:51:53.437623582+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49356: remote error: tls: bad certificate 2025-08-13T19:51:53.463060217+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49358: remote error: tls: bad certificate 2025-08-13T19:51:53.481366338+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49364: remote error: tls: bad certificate 2025-08-13T19:51:53.503377355+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49372: remote error: tls: bad certificate 2025-08-13T19:51:53.528933483+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49386: remote error: tls: bad certificate 2025-08-13T19:51:53.553239986+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49388: remote error: tls: bad certificate 2025-08-13T19:51:53.571636570+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49402: remote error: tls: bad certificate 2025-08-13T19:51:53.593067800+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49418: remote error: tls: bad certificate 2025-08-13T19:51:53.617272199+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49428: remote error: tls: bad certificate 2025-08-13T19:51:53.639766470+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49432: remote error: tls: bad certificate 2025-08-13T19:51:53.658890745+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49440: remote error: tls: bad certificate 2025-08-13T19:51:53.677946088+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49442: remote error: tls: bad certificate 2025-08-13T19:51:53.696568608+00:00 stderr F 2025/08/13 19:51:53 http: TLS 
handshake error from 127.0.0.1:49458: remote error: tls: bad certificate 2025-08-13T19:51:53.714454278+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49460: remote error: tls: bad certificate 2025-08-13T19:51:53.733860061+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49472: remote error: tls: bad certificate 2025-08-13T19:51:53.751093742+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49482: remote error: tls: bad certificate 2025-08-13T19:51:53.776317781+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49496: remote error: tls: bad certificate 2025-08-13T19:51:53.796904197+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49504: remote error: tls: bad certificate 2025-08-13T19:51:53.812332277+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49520: remote error: tls: bad certificate 2025-08-13T19:51:53.827761696+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49526: remote error: tls: bad certificate 2025-08-13T19:51:53.845674277+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49542: remote error: tls: bad certificate 2025-08-13T19:51:53.863381401+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49558: remote error: tls: bad certificate 2025-08-13T19:51:53.881640031+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49568: remote error: tls: bad certificate 2025-08-13T19:51:53.899249993+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49576: remote error: tls: bad certificate 2025-08-13T19:51:53.915200718+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49592: remote error: tls: bad certificate 2025-08-13T19:51:53.932169221+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49600: remote error: tls: bad certificate 
2025-08-13T19:51:53.956525415+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49612: remote error: tls: bad certificate 2025-08-13T19:51:53.972322225+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49622: remote error: tls: bad certificate 2025-08-13T19:51:53.986572571+00:00 stderr F 2025/08/13 19:51:53 http: TLS handshake error from 127.0.0.1:49628: remote error: tls: bad certificate 2025-08-13T19:51:54.001901338+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49634: remote error: tls: bad certificate 2025-08-13T19:51:54.018089079+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49640: remote error: tls: bad certificate 2025-08-13T19:51:54.042053942+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49646: remote error: tls: bad certificate 2025-08-13T19:51:54.071468220+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49650: remote error: tls: bad certificate 2025-08-13T19:51:54.107311621+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49660: remote error: tls: bad certificate 2025-08-13T19:51:54.154198657+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49666: remote error: tls: bad certificate 2025-08-13T19:51:54.190040388+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49674: remote error: tls: bad certificate 2025-08-13T19:51:54.227873526+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49682: remote error: tls: bad certificate 2025-08-13T19:51:54.267584117+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49690: remote error: tls: bad certificate 2025-08-13T19:51:54.315009608+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49698: remote error: tls: bad certificate 2025-08-13T19:51:54.350597882+00:00 stderr F 2025/08/13 19:51:54 http: TLS 
handshake error from 127.0.0.1:49714: remote error: tls: bad certificate 2025-08-13T19:51:54.393021381+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49728: remote error: tls: bad certificate 2025-08-13T19:51:54.427350599+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49738: remote error: tls: bad certificate 2025-08-13T19:51:54.466697610+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49752: remote error: tls: bad certificate 2025-08-13T19:51:54.510339604+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49768: remote error: tls: bad certificate 2025-08-13T19:51:54.551517927+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49772: remote error: tls: bad certificate 2025-08-13T19:51:54.585918877+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49780: remote error: tls: bad certificate 2025-08-13T19:51:54.588643785+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49782: remote error: tls: bad certificate 2025-08-13T19:51:54.626564205+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49796: remote error: tls: bad certificate 2025-08-13T19:51:54.673626936+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49804: remote error: tls: bad certificate 2025-08-13T19:51:54.706184093+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49808: remote error: tls: bad certificate 2025-08-13T19:51:54.747370227+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49812: remote error: tls: bad certificate 2025-08-13T19:51:54.786201023+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49828: remote error: tls: bad certificate 2025-08-13T19:51:54.825520463+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49838: remote error: tls: bad certificate 
2025-08-13T19:51:54.868887379+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49850: remote error: tls: bad certificate 2025-08-13T19:51:54.906038627+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49866: remote error: tls: bad certificate 2025-08-13T19:51:54.948265401+00:00 stderr F 2025/08/13 19:51:54 http: TLS handshake error from 127.0.0.1:49882: remote error: tls: bad certificate 2025-08-13T19:51:55.071308806+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49890: remote error: tls: bad certificate 2025-08-13T19:51:55.090315868+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49894: remote error: tls: bad certificate 2025-08-13T19:51:55.108698042+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49898: remote error: tls: bad certificate 2025-08-13T19:51:55.123356209+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49904: remote error: tls: bad certificate 2025-08-13T19:51:55.147871908+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49910: remote error: tls: bad certificate 2025-08-13T19:51:55.187059074+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49924: remote error: tls: bad certificate 2025-08-13T19:51:55.226522038+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49928: remote error: tls: bad certificate 2025-08-13T19:51:55.266248410+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49936: remote error: tls: bad certificate 2025-08-13T19:51:55.306411905+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49948: remote error: tls: bad certificate 2025-08-13T19:51:55.347991039+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49954: remote error: tls: bad certificate 2025-08-13T19:51:55.386904648+00:00 stderr F 2025/08/13 19:51:55 http: TLS 
handshake error from 127.0.0.1:49960: remote error: tls: bad certificate 2025-08-13T19:51:55.427086683+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49974: remote error: tls: bad certificate 2025-08-13T19:51:55.466236038+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49986: remote error: tls: bad certificate 2025-08-13T19:51:55.506382512+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:49996: remote error: tls: bad certificate 2025-08-13T19:51:55.546525846+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50004: remote error: tls: bad certificate 2025-08-13T19:51:55.597896839+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50012: remote error: tls: bad certificate 2025-08-13T19:51:55.643475588+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50026: remote error: tls: bad certificate 2025-08-13T19:51:55.672675510+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50042: remote error: tls: bad certificate 2025-08-13T19:51:55.686856504+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50046: remote error: tls: bad certificate 2025-08-13T19:51:55.708117569+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50072: remote error: tls: bad certificate 2025-08-13T19:51:55.708869141+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50058: remote error: tls: bad certificate 2025-08-13T19:51:55.727301646+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50074: remote error: tls: bad certificate 2025-08-13T19:51:55.746007539+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50096: remote error: tls: bad certificate 2025-08-13T19:51:55.748355306+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50088: remote error: tls: bad certificate 
2025-08-13T19:51:55.767081920+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50104: remote error: tls: bad certificate 2025-08-13T19:51:55.789621482+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50106: remote error: tls: bad certificate 2025-08-13T19:51:55.828289813+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50120: remote error: tls: bad certificate 2025-08-13T19:51:55.866932614+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50134: remote error: tls: bad certificate 2025-08-13T19:51:55.909077455+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50142: remote error: tls: bad certificate 2025-08-13T19:51:55.944401361+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50144: remote error: tls: bad certificate 2025-08-13T19:51:55.987899461+00:00 stderr F 2025/08/13 19:51:55 http: TLS handshake error from 127.0.0.1:50160: remote error: tls: bad certificate 2025-08-13T19:51:56.026151471+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50174: remote error: tls: bad certificate 2025-08-13T19:51:56.068135107+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50182: remote error: tls: bad certificate 2025-08-13T19:51:56.110857324+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50184: remote error: tls: bad certificate 2025-08-13T19:51:56.148175537+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50188: remote error: tls: bad certificate 2025-08-13T19:51:56.187425075+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50204: remote error: tls: bad certificate 2025-08-13T19:51:56.225651835+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50210: remote error: tls: bad certificate 2025-08-13T19:51:56.267155117+00:00 stderr F 2025/08/13 19:51:56 http: TLS 
handshake error from 127.0.0.1:50218: remote error: tls: bad certificate 2025-08-13T19:51:56.309512874+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50230: remote error: tls: bad certificate 2025-08-13T19:51:56.347987120+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50236: remote error: tls: bad certificate 2025-08-13T19:51:56.389556494+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50240: remote error: tls: bad certificate 2025-08-13T19:51:56.427295400+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50246: remote error: tls: bad certificate 2025-08-13T19:51:56.467683890+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50258: remote error: tls: bad certificate 2025-08-13T19:51:56.507263138+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50272: remote error: tls: bad certificate 2025-08-13T19:51:56.546551757+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50276: remote error: tls: bad certificate 2025-08-13T19:51:56.586389562+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50280: remote error: tls: bad certificate 2025-08-13T19:51:56.633498824+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50286: remote error: tls: bad certificate 2025-08-13T19:51:56.672197457+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50288: remote error: tls: bad certificate 2025-08-13T19:51:56.707403420+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50304: remote error: tls: bad certificate 2025-08-13T19:51:56.748165521+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50314: remote error: tls: bad certificate 2025-08-13T19:51:56.785644649+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50320: remote error: tls: bad certificate 
2025-08-13T19:51:56.830688413+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50330: remote error: tls: bad certificate 2025-08-13T19:51:56.868089638+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50334: remote error: tls: bad certificate 2025-08-13T19:51:56.908528840+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50344: remote error: tls: bad certificate 2025-08-13T19:51:56.947165361+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50346: remote error: tls: bad certificate 2025-08-13T19:51:56.987654225+00:00 stderr F 2025/08/13 19:51:56 http: TLS handshake error from 127.0.0.1:50348: remote error: tls: bad certificate 2025-08-13T19:51:57.035987822+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50364: remote error: tls: bad certificate 2025-08-13T19:51:57.064526955+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50368: remote error: tls: bad certificate 2025-08-13T19:51:57.105877663+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50380: remote error: tls: bad certificate 2025-08-13T19:51:57.145515902+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50388: remote error: tls: bad certificate 2025-08-13T19:51:57.189225367+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50394: remote error: tls: bad certificate 2025-08-13T19:51:57.230210824+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50408: remote error: tls: bad certificate 2025-08-13T19:51:57.267122786+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50418: remote error: tls: bad certificate 2025-08-13T19:51:57.307740073+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50422: remote error: tls: bad certificate 2025-08-13T19:51:57.347906888+00:00 stderr F 2025/08/13 19:51:57 http: TLS 
handshake error from 127.0.0.1:50428: remote error: tls: bad certificate 2025-08-13T19:51:57.389106251+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50432: remote error: tls: bad certificate 2025-08-13T19:51:57.425764166+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50434: remote error: tls: bad certificate 2025-08-13T19:51:57.468734730+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50444: remote error: tls: bad certificate 2025-08-13T19:51:57.509384488+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50452: remote error: tls: bad certificate 2025-08-13T19:51:57.550016526+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50462: remote error: tls: bad certificate 2025-08-13T19:51:57.588688438+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50470: remote error: tls: bad certificate 2025-08-13T19:51:57.628283016+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50476: remote error: tls: bad certificate 2025-08-13T19:51:57.668227604+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50490: remote error: tls: bad certificate 2025-08-13T19:51:57.709942472+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50492: remote error: tls: bad certificate 2025-08-13T19:51:57.745881856+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50494: remote error: tls: bad certificate 2025-08-13T19:51:57.788856991+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50498: remote error: tls: bad certificate 2025-08-13T19:51:57.825960748+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50510: remote error: tls: bad certificate 2025-08-13T19:51:57.865308779+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50512: remote error: tls: bad certificate 
2025-08-13T19:51:57.907450620+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50518: remote error: tls: bad certificate
2025-08-13T19:51:57.947535242+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50524: remote error: tls: bad certificate
2025-08-13T19:51:57.987173751+00:00 stderr F 2025/08/13 19:51:57 http: TLS handshake error from 127.0.0.1:50530: remote error: tls: bad certificate
2025-08-13T19:51:58.027241023+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50542: remote error: tls: bad certificate
2025-08-13T19:51:58.070232508+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50558: remote error: tls: bad certificate
2025-08-13T19:51:58.107928581+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50574: remote error: tls: bad certificate
2025-08-13T19:51:58.147928351+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50580: remote error: tls: bad certificate
2025-08-13T19:51:58.186857380+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50590: remote error: tls: bad certificate
2025-08-13T19:51:58.230539035+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50604: remote error: tls: bad certificate
2025-08-13T19:51:58.266508570+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50614: remote error: tls: bad certificate
2025-08-13T19:51:58.306916331+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50618: remote error: tls: bad certificate
2025-08-13T19:51:58.345992464+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50632: remote error: tls: bad certificate
2025-08-13T19:51:58.389033810+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50642: remote error: tls: bad certificate
2025-08-13T19:51:58.428723821+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50656: remote error: tls: bad certificate
2025-08-13T19:51:58.467154726+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50670: remote error: tls: bad certificate
2025-08-13T19:51:58.507931868+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50680: remote error: tls: bad certificate
2025-08-13T19:51:58.544065057+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50682: remote error: tls: bad certificate
2025-08-13T19:51:58.586625050+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50690: remote error: tls: bad certificate
2025-08-13T19:51:58.626725882+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50700: remote error: tls: bad certificate
2025-08-13T19:51:58.669401158+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50712: remote error: tls: bad certificate
2025-08-13T19:51:58.707880285+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:50724: remote error: tls: bad certificate
2025-08-13T19:51:58.748734129+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:35984: remote error: tls: bad certificate
2025-08-13T19:51:58.792397833+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:35986: remote error: tls: bad certificate
2025-08-13T19:51:58.826091813+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:35994: remote error: tls: bad certificate
2025-08-13T19:51:58.872266098+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:36002: remote error: tls: bad certificate
2025-08-13T19:51:58.912637468+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:36008: remote error: tls: bad certificate
2025-08-13T19:51:58.948347306+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:36012: remote error: tls: bad certificate
2025-08-13T19:51:58.989556140+00:00 stderr F 2025/08/13 19:51:58 http: TLS handshake error from 127.0.0.1:36016: remote error: tls: bad certificate
2025-08-13T19:51:59.026058560+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36018: remote error: tls: bad certificate
2025-08-13T19:51:59.066424630+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36030: remote error: tls: bad certificate
2025-08-13T19:51:59.105723980+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36042: remote error: tls: bad certificate
2025-08-13T19:51:59.149355533+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36054: remote error: tls: bad certificate
2025-08-13T19:51:59.187990094+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36066: remote error: tls: bad certificate
2025-08-13T19:51:59.231429931+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36076: remote error: tls: bad certificate
2025-08-13T19:51:59.271972936+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36086: remote error: tls: bad certificate
2025-08-13T19:51:59.308136197+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36096: remote error: tls: bad certificate
2025-08-13T19:51:59.346877050+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36098: remote error: tls: bad certificate
2025-08-13T19:51:59.387407615+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36110: remote error: tls: bad certificate
2025-08-13T19:51:59.430143873+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36116: remote error: tls: bad certificate
2025-08-13T19:51:59.468856036+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36130: remote error: tls: bad certificate
2025-08-13T19:51:59.511932223+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36140: remote error: tls: bad certificate
2025-08-13T19:51:59.546727344+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36156: remote error: tls: bad certificate
2025-08-13T19:51:59.588886105+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36168: remote error: tls: bad certificate
2025-08-13T19:51:59.635268037+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36172: remote error: tls: bad certificate
2025-08-13T19:51:59.669538453+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36184: remote error: tls: bad certificate
2025-08-13T19:51:59.707252578+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36188: remote error: tls: bad certificate
2025-08-13T19:51:59.745287591+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36194: remote error: tls: bad certificate
2025-08-13T19:51:59.788976006+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36200: remote error: tls: bad certificate
2025-08-13T19:51:59.828248605+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36204: remote error: tls: bad certificate
2025-08-13T19:51:59.866762582+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36208: remote error: tls: bad certificate
2025-08-13T19:51:59.909291384+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36214: remote error: tls: bad certificate
2025-08-13T19:51:59.947633607+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36224: remote error: tls: bad certificate
2025-08-13T19:51:59.987223464+00:00 stderr F 2025/08/13 19:51:59 http: TLS handshake error from 127.0.0.1:36236: remote error: tls: bad certificate
2025-08-13T19:52:00.027913324+00:00 stderr F 2025/08/13 19:52:00 http: TLS handshake error from 127.0.0.1:36242: remote error: tls: bad certificate
2025-08-13T19:52:00.067411429+00:00 stderr F 2025/08/13 19:52:00 http: TLS handshake error from 127.0.0.1:36246: remote error: tls: bad certificate
2025-08-13T19:52:00.111442214+00:00 stderr F 2025/08/13 19:52:00 http: TLS handshake error from 127.0.0.1:36260: remote error: tls: bad certificate
2025-08-13T19:52:00.157400113+00:00 stderr F 2025/08/13 19:52:00 http: TLS handshake error from 127.0.0.1:36276: remote error: tls: bad certificate
2025-08-13T19:52:00.190490616+00:00 stderr F 2025/08/13 19:52:00 http: TLS handshake error from 127.0.0.1:36292: remote error: tls: bad certificate
2025-08-13T19:52:00.229408055+00:00 stderr F 2025/08/13 19:52:00 http: TLS handshake error from 127.0.0.1:36294: remote error: tls: bad certificate
2025-08-13T19:52:01.167641625+00:00 stderr F 2025/08/13 19:52:01 http: TLS handshake error from 127.0.0.1:36298: remote error: tls: bad certificate
2025-08-13T19:52:01.184061993+00:00 stderr F 2025/08/13 19:52:01 http: TLS handshake error from 127.0.0.1:36308: remote error: tls: bad certificate
2025-08-13T19:52:01.201028956+00:00 stderr F 2025/08/13 19:52:01 http: TLS handshake error from 127.0.0.1:36318: remote error: tls: bad certificate
2025-08-13T19:52:01.219742989+00:00 stderr F 2025/08/13 19:52:01 http: TLS handshake error from 127.0.0.1:36324: remote error: tls: bad certificate
2025-08-13T19:52:01.236437835+00:00 stderr F 2025/08/13 19:52:01 http: TLS handshake error from 127.0.0.1:36326: remote error: tls: bad certificate
2025-08-13T19:52:01.251860544+00:00 stderr F 2025/08/13 19:52:01 http: TLS handshake error from 127.0.0.1:36332: remote error: tls: bad certificate
2025-08-13T19:52:05.236931622+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36340: remote error: tls: bad certificate
2025-08-13T19:52:05.266534105+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36356: remote error: tls: bad certificate
2025-08-13T19:52:05.287584185+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36368: remote error: tls: bad certificate
2025-08-13T19:52:05.307226524+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36374: remote error: tls: bad certificate
2025-08-13T19:52:05.324289521+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36390: remote error: tls: bad certificate
2025-08-13T19:52:05.342008606+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36392: remote error: tls: bad certificate
2025-08-13T19:52:05.360517743+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36400: remote error: tls: bad certificate
2025-08-13T19:52:05.377038244+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36404: remote error: tls: bad certificate
2025-08-13T19:52:05.397209168+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36418: remote error: tls: bad certificate
2025-08-13T19:52:05.415505810+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36424: remote error: tls: bad certificate
2025-08-13T19:52:05.432891705+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36428: remote error: tls: bad certificate
2025-08-13T19:52:05.451260688+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36442: remote error: tls: bad certificate
2025-08-13T19:52:05.469665253+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36444: remote error: tls: bad certificate
2025-08-13T19:52:05.490288090+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36450: remote error: tls: bad certificate
2025-08-13T19:52:05.509049285+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36456: remote error: tls: bad certificate
2025-08-13T19:52:05.532361669+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36462: remote error: tls: bad certificate
2025-08-13T19:52:05.552752400+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36468: remote error: tls: bad certificate
2025-08-13T19:52:05.570749933+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36482: remote error: tls: bad certificate
2025-08-13T19:52:05.584704090+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36494: remote error: tls: bad certificate
2025-08-13T19:52:05.603485405+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36506: remote error: tls: bad certificate
2025-08-13T19:52:05.620620393+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36518: remote error: tls: bad certificate
2025-08-13T19:52:05.635519638+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36532: remote error: tls: bad certificate
2025-08-13T19:52:05.653222512+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36548: remote error: tls: bad certificate
2025-08-13T19:52:05.673669555+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36564: remote error: tls: bad certificate
2025-08-13T19:52:05.698261476+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36570: remote error: tls: bad certificate
2025-08-13T19:52:05.716276199+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36578: remote error: tls: bad certificate
2025-08-13T19:52:05.735325551+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36580: remote error: tls: bad certificate
2025-08-13T19:52:05.755627170+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36584: remote error: tls: bad certificate
2025-08-13T19:52:05.771925954+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36596: remote error: tls: bad certificate
2025-08-13T19:52:05.788884888+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36612: remote error: tls: bad certificate
2025-08-13T19:52:05.808722333+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36614: remote error: tls: bad certificate
2025-08-13T19:52:05.824987336+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36622: remote error: tls: bad certificate
2025-08-13T19:52:05.839405767+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36634: remote error: tls: bad certificate
2025-08-13T19:52:05.854456306+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36638: remote error: tls: bad certificate
2025-08-13T19:52:05.879326394+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36644: remote error: tls: bad certificate
2025-08-13T19:52:05.935344450+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36646: remote error: tls: bad certificate
2025-08-13T19:52:05.957058429+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36660: remote error: tls: bad certificate
2025-08-13T19:52:05.972047726+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36664: remote error: tls: bad certificate
2025-08-13T19:52:05.985517710+00:00 stderr F 2025/08/13 19:52:05 http: TLS handshake error from 127.0.0.1:36680: remote error: tls: bad certificate
2025-08-13T19:52:06.003663327+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36682: remote error: tls: bad certificate
2025-08-13T19:52:06.017726227+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36686: remote error: tls: bad certificate
2025-08-13T19:52:06.026959610+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36692: remote error: tls: bad certificate
2025-08-13T19:52:06.037721947+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36706: remote error: tls: bad certificate
2025-08-13T19:52:06.046892248+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36708: remote error: tls: bad certificate
2025-08-13T19:52:06.050854951+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36712: remote error: tls: bad certificate
2025-08-13T19:52:06.066339242+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36722: remote error: tls: bad certificate
2025-08-13T19:52:06.068868084+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36724: remote error: tls: bad certificate
2025-08-13T19:52:06.083759869+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36740: remote error: tls: bad certificate
2025-08-13T19:52:06.097086158+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36750: remote error: tls: bad certificate
2025-08-13T19:52:06.117000506+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36762: remote error: tls: bad certificate
2025-08-13T19:52:06.146716492+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36774: remote error: tls: bad certificate
2025-08-13T19:52:06.167216166+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36776: remote error: tls: bad certificate
2025-08-13T19:52:06.183553462+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36782: remote error: tls: bad certificate
2025-08-13T19:52:06.199196368+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36798: remote error: tls: bad certificate
2025-08-13T19:52:06.214675209+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36808: remote error: tls: bad certificate
2025-08-13T19:52:06.234711359+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36814: remote error: tls: bad certificate
2025-08-13T19:52:06.251624711+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36816: remote error: tls: bad certificate
2025-08-13T19:52:06.268128402+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36826: remote error: tls: bad certificate
2025-08-13T19:52:06.287366830+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36840: remote error: tls: bad certificate
2025-08-13T19:52:06.303615793+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36844: remote error: tls: bad certificate
2025-08-13T19:52:06.317056476+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36846: remote error: tls: bad certificate
2025-08-13T19:52:06.340746921+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36852: remote error: tls: bad certificate
2025-08-13T19:52:06.360356229+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36862: remote error: tls: bad certificate
2025-08-13T19:52:06.379545686+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36866: remote error: tls: bad certificate
2025-08-13T19:52:06.399064262+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36876: remote error: tls: bad certificate
2025-08-13T19:52:06.413962877+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36888: remote error: tls: bad certificate
2025-08-13T19:52:06.429262262+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36892: remote error: tls: bad certificate
2025-08-13T19:52:06.447497802+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36902: remote error: tls: bad certificate
2025-08-13T19:52:06.462392416+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36916: remote error: tls: bad certificate
2025-08-13T19:52:06.479386651+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36918: remote error: tls: bad certificate
2025-08-13T19:52:06.494960344+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36920: remote error: tls: bad certificate
2025-08-13T19:52:06.509599981+00:00 stderr F 2025/08/13 19:52:06 http: TLS handshake error from 127.0.0.1:36926: remote error: tls: bad certificate
2025-08-13T19:52:09.232627872+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38862: remote error: tls: bad certificate
2025-08-13T19:52:09.252110557+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38870: remote error: tls: bad certificate
2025-08-13T19:52:09.266079425+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38886: remote error: tls: bad certificate
2025-08-13T19:52:09.280973830+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38894: remote error: tls: bad certificate
2025-08-13T19:52:09.303372578+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38904: remote error: tls: bad certificate
2025-08-13T19:52:09.318289233+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38918: remote error: tls: bad certificate
2025-08-13T19:52:09.344658414+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38928: remote error: tls: bad certificate
2025-08-13T19:52:09.363391458+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38942: remote error: tls: bad certificate
2025-08-13T19:52:09.383070928+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38958: remote error: tls: bad certificate
2025-08-13T19:52:09.401216235+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38972: remote error: tls: bad certificate
2025-08-13T19:52:09.425712023+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38982: remote error: tls: bad certificate
2025-08-13T19:52:09.449089149+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38988: remote error: tls: bad certificate
2025-08-13T19:52:09.463050887+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38992: remote error: tls: bad certificate
2025-08-13T19:52:09.481665857+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:38998: remote error: tls: bad certificate
2025-08-13T19:52:09.503006726+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39002: remote error: tls: bad certificate
2025-08-13T19:52:09.531838787+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39016: remote error: tls: bad certificate
2025-08-13T19:52:09.548955925+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39032: remote error: tls: bad certificate
2025-08-13T19:52:09.565971849+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39042: remote error: tls: bad certificate
2025-08-13T19:52:09.583457428+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39048: remote error: tls: bad certificate
2025-08-13T19:52:09.607585995+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39060: remote error: tls: bad certificate
2025-08-13T19:52:09.632363741+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39066: remote error: tls: bad certificate
2025-08-13T19:52:09.653141543+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39068: remote error: tls: bad certificate
2025-08-13T19:52:09.675610153+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39084: remote error: tls: bad certificate
2025-08-13T19:52:09.691690161+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39100: remote error: tls: bad certificate
2025-08-13T19:52:09.715331175+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39110: remote error: tls: bad certificate
2025-08-13T19:52:09.731424563+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39112: remote error: tls: bad certificate
2025-08-13T19:52:09.755659974+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39128: remote error: tls: bad certificate
2025-08-13T19:52:09.790123386+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39130: remote error: tls: bad certificate
2025-08-13T19:52:09.810916228+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39136: remote error: tls: bad certificate
2025-08-13T19:52:09.835099087+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39148: remote error: tls: bad certificate
2025-08-13T19:52:09.852948146+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39160: remote error: tls: bad certificate
2025-08-13T19:52:09.868696204+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39176: remote error: tls: bad certificate
2025-08-13T19:52:09.889639671+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39180: remote error: tls: bad certificate
2025-08-13T19:52:09.907500820+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39186: remote error: tls: bad certificate
2025-08-13T19:52:09.923646710+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39198: remote error: tls: bad certificate
2025-08-13T19:52:09.939259035+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39210: remote error: tls: bad certificate
2025-08-13T19:52:09.953406518+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39218: remote error: tls: bad certificate
2025-08-13T19:52:09.977557946+00:00 stderr F 2025/08/13 19:52:09 http: TLS handshake error from 127.0.0.1:39234: remote error: tls: bad certificate
2025-08-13T19:52:10.413194018+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39238: remote error: tls: bad certificate
2025-08-13T19:52:10.434192256+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39242: remote error: tls: bad certificate
2025-08-13T19:52:10.458764126+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39256: remote error: tls: bad certificate
2025-08-13T19:52:10.491059726+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39268: remote error: tls: bad certificate
2025-08-13T19:52:10.510424878+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39278: remote error: tls: bad certificate
2025-08-13T19:52:10.530265113+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39290: remote error: tls: bad certificate
2025-08-13T19:52:10.556875341+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39294: remote error: tls: bad certificate
2025-08-13T19:52:10.576465279+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39308: remote error: tls: bad certificate
2025-08-13T19:52:10.596180591+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39324: remote error: tls: bad certificate
2025-08-13T19:52:10.611123507+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39330: remote error: tls: bad certificate
2025-08-13T19:52:10.627052181+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39336: remote error: tls: bad certificate
2025-08-13T19:52:10.647875534+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39342: remote error: tls: bad certificate
2025-08-13T19:52:10.669904852+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39344: remote error: tls: bad certificate
2025-08-13T19:52:10.690860479+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39346: remote error: tls: bad certificate
2025-08-13T19:52:10.708047158+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39362: remote error: tls: bad certificate
2025-08-13T19:52:10.727132012+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39372: remote error: tls: bad certificate
2025-08-13T19:52:10.743498018+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39378: remote error: tls: bad certificate
2025-08-13T19:52:10.762885421+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39380: remote error: tls: bad certificate
2025-08-13T19:52:10.777832007+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39384: remote error: tls: bad certificate
2025-08-13T19:52:10.797945690+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39400: remote error: tls: bad certificate
2025-08-13T19:52:10.814191273+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39410: remote error: tls: bad certificate
2025-08-13T19:52:10.823028084+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39416: remote error: tls: bad certificate
2025-08-13T19:52:10.831588288+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39418: remote error: tls: bad certificate
2025-08-13T19:52:10.852153644+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39424: remote error: tls: bad certificate
2025-08-13T19:52:10.871182816+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39430: remote error: tls: bad certificate
2025-08-13T19:52:10.886941185+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39446: remote error: tls: bad certificate
2025-08-13T19:52:10.907397778+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39462: remote error: tls: bad certificate
2025-08-13T19:52:10.927762728+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39474: remote error: tls: bad certificate
2025-08-13T19:52:10.944996149+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39490: remote error: tls: bad certificate
2025-08-13T19:52:10.963029773+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39502: remote error: tls: bad certificate
2025-08-13T19:52:10.978043161+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39512: remote error: tls: bad certificate
2025-08-13T19:52:10.994343835+00:00 stderr F 2025/08/13 19:52:10 http: TLS handshake error from 127.0.0.1:39524: remote error: tls: bad certificate
2025-08-13T19:52:11.009925899+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39540: remote error: tls: bad certificate
2025-08-13T19:52:11.035461807+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39552: remote error: tls: bad certificate
2025-08-13T19:52:11.054002075+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39564: remote error: tls: bad certificate
2025-08-13T19:52:11.071367320+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39572: remote error: tls: bad certificate
2025-08-13T19:52:11.089543108+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39584: remote error: tls: bad certificate
2025-08-13T19:52:11.107528200+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39594: remote error: tls: bad certificate
2025-08-13T19:52:11.127368165+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39610: remote error: tls: bad certificate
2025-08-13T19:52:11.144221796+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39618: remote error: tls: bad certificate
2025-08-13T19:52:11.161006134+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39624: remote error: tls: bad certificate
2025-08-13T19:52:11.179588303+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39626: remote error: tls: bad certificate
2025-08-13T19:52:11.196653949+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39630: remote error: tls: bad certificate
2025-08-13T19:52:11.221924519+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39644: remote error: tls: bad certificate
2025-08-13T19:52:11.236727971+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39658: remote error: tls: bad certificate
2025-08-13T19:52:11.257653857+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39674: remote error: tls: bad certificate
2025-08-13T19:52:11.281644401+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39682: remote error: tls: bad certificate
2025-08-13T19:52:11.298472340+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39688: remote error: tls: bad certificate
2025-08-13T19:52:11.315032192+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39702: remote error: tls: bad certificate
2025-08-13T19:52:11.331735458+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39712: remote error: tls: bad certificate
2025-08-13T19:52:11.347877758+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39720: remote error: tls: bad certificate
2025-08-13T19:52:11.364930624+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39722: remote error: tls: bad certificate
2025-08-13T19:52:11.381492806+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39730: remote error: tls: bad certificate
2025-08-13T19:52:11.396977417+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39746: remote error: tls: bad certificate
2025-08-13T19:52:11.411919512+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39748: remote error: tls: bad certificate
2025-08-13T19:52:11.425877970+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39752: remote error: tls: bad certificate
2025-08-13T19:52:11.442925426+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39768: remote error: tls: bad certificate
2025-08-13T19:52:11.457592704+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39780: remote error: tls: bad certificate
2025-08-13T19:52:11.474499765+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39794: remote error: tls: bad certificate
2025-08-13T19:52:11.489083600+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39798: remote error: tls: bad certificate
2025-08-13T19:52:11.504001685+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39812: remote error: tls: bad certificate
2025-08-13T19:52:11.527297039+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39824: remote error: tls: bad certificate
2025-08-13T19:52:11.544857089+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39838: remote error: tls: bad certificate
2025-08-13T19:52:11.561618747+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39842: remote error: tls: bad certificate
2025-08-13T19:52:11.578022934+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39856: remote error: tls: bad certificate
2025-08-13T19:52:11.592737703+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39862: remote error: tls: bad certificate
2025-08-13T19:52:11.605950840+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39864: remote error: tls: bad certificate
2025-08-13T19:52:11.620057401+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39874: remote error: tls: bad certificate
2025-08-13T19:52:11.632685761+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39884: remote error: tls: bad certificate
2025-08-13T19:52:11.644263651+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39900: remote error: tls: bad certificate
2025-08-13T19:52:11.657287642+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39906: remote error: tls: bad certificate
2025-08-13T19:52:11.669849710+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39920: remote error: tls: bad certificate
2025-08-13T19:52:11.684449876+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39932: remote error: tls: bad certificate
2025-08-13T19:52:11.700539075+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39946: remote error: tls: bad certificate
2025-08-13T19:52:11.717894039+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39948: remote error: tls: bad certificate
2025-08-13T19:52:11.741139591+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39958: remote error: tls: bad certificate
2025-08-13T19:52:11.783948171+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39972: remote error: tls: bad certificate
2025-08-13T19:52:11.825070713+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39984: remote error: tls: bad certificate
2025-08-13T19:52:11.862937661+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39986: remote error: tls: bad certificate
2025-08-13T19:52:11.902239011+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:39996: remote error: tls: bad certificate
2025-08-13T19:52:11.942315013+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:40006: remote error: tls: bad certificate
2025-08-13T19:52:11.983443985+00:00 stderr F 2025/08/13 19:52:11 http: TLS handshake error from 127.0.0.1:40012: remote error: tls: bad certificate
2025-08-13T19:52:12.021448508+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40024: remote error: tls: bad certificate
2025-08-13T19:52:12.060711606+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40030: remote error: tls: bad certificate
2025-08-13T19:52:12.102512157+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40046: remote error: tls: bad certificate
2025-08-13T19:52:12.167097067+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40048: remote error: tls: bad certificate
2025-08-13T19:52:12.217665528+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40052: remote error: tls: bad certificate
2025-08-13T19:52:12.240474918+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40054: remote error: tls: bad certificate
2025-08-13T19:52:12.261388544+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40056: remote error: tls: bad certificate
2025-08-13T19:52:12.301742003+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40060: remote error: tls: bad certificate
2025-08-13T19:52:12.341023962+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40066: remote error: tls: bad certificate
2025-08-13T19:52:12.381319700+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40080: remote error: tls: bad certificate
2025-08-13T19:52:12.422914886+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40092: remote error: tls: bad certificate
2025-08-13T19:52:12.463697788+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40102: remote error: tls: bad certificate
2025-08-13T19:52:12.504071048+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40116: remote error: tls: bad certificate
2025-08-13T19:52:12.547703351+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40122: remote error: tls: bad certificate
2025-08-13T19:52:12.581839253+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40128: remote error: tls: bad certificate
2025-08-13T19:52:12.622860002+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40138: remote error: tls: bad certificate
2025-08-13T19:52:12.661340779+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40140: remote error: tls: bad certificate
2025-08-13T19:52:12.701079951+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40148: remote error: tls: bad certificate
2025-08-13T19:52:12.740683089+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40152: remote error: tls: bad certificate
2025-08-13T19:52:12.781331177+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40166: remote error: tls: bad certificate
2025-08-13T19:52:12.822731237+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40178: remote error: tls: bad certificate
2025-08-13T19:52:12.864116206+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40192: remote error: tls: bad certificate
2025-08-13T19:52:12.902372336+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40206: remote error: tls: bad certificate
2025-08-13T19:52:12.940494812+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40216: remote error: tls: bad certificate
2025-08-13T19:52:12.981758598+00:00 stderr F 2025/08/13 19:52:12 http: TLS handshake error from 127.0.0.1:40232: remote error: tls: bad certificate
2025-08-13T19:52:13.031258298+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40248: remote error: tls: bad certificate
2025-08-13T19:52:13.066912604+00:00 stderr F 2025/08/13 19:52:13 http: TLS
handshake error from 127.0.0.1:40262: remote error: tls: bad certificate 2025-08-13T19:52:13.107282584+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40268: remote error: tls: bad certificate 2025-08-13T19:52:13.142567149+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40280: remote error: tls: bad certificate 2025-08-13T19:52:13.183164186+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40288: remote error: tls: bad certificate 2025-08-13T19:52:13.223954458+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40300: remote error: tls: bad certificate 2025-08-13T19:52:13.264955716+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40308: remote error: tls: bad certificate 2025-08-13T19:52:13.309511016+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40320: remote error: tls: bad certificate 2025-08-13T19:52:13.348260370+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40330: remote error: tls: bad certificate 2025-08-13T19:52:13.382583788+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40344: remote error: tls: bad certificate 2025-08-13T19:52:13.422758932+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40350: remote error: tls: bad certificate 2025-08-13T19:52:13.464926023+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40366: remote error: tls: bad certificate 2025-08-13T19:52:13.503688118+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40374: remote error: tls: bad certificate 2025-08-13T19:52:13.545931881+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40386: remote error: tls: bad certificate 2025-08-13T19:52:13.582022830+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40398: remote error: tls: bad certificate 
2025-08-13T19:52:13.621601367+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40406: remote error: tls: bad certificate 2025-08-13T19:52:13.664280403+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40422: remote error: tls: bad certificate 2025-08-13T19:52:13.703613304+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40436: remote error: tls: bad certificate 2025-08-13T19:52:13.744616232+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40440: remote error: tls: bad certificate 2025-08-13T19:52:13.785130986+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40454: remote error: tls: bad certificate 2025-08-13T19:52:13.822426749+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40456: remote error: tls: bad certificate 2025-08-13T19:52:13.861883983+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40470: remote error: tls: bad certificate 2025-08-13T19:52:13.910098867+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40472: remote error: tls: bad certificate 2025-08-13T19:52:13.939739281+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40488: remote error: tls: bad certificate 2025-08-13T19:52:13.983495428+00:00 stderr F 2025/08/13 19:52:13 http: TLS handshake error from 127.0.0.1:40490: remote error: tls: bad certificate 2025-08-13T19:52:14.021704327+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40494: remote error: tls: bad certificate 2025-08-13T19:52:14.063676833+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40504: remote error: tls: bad certificate 2025-08-13T19:52:14.104156576+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40516: remote error: tls: bad certificate 2025-08-13T19:52:14.144366011+00:00 stderr F 2025/08/13 19:52:14 http: TLS 
handshake error from 127.0.0.1:40520: remote error: tls: bad certificate 2025-08-13T19:52:14.191540715+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40534: remote error: tls: bad certificate 2025-08-13T19:52:14.221933991+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40544: remote error: tls: bad certificate 2025-08-13T19:52:14.268723094+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40560: remote error: tls: bad certificate 2025-08-13T19:52:14.308665693+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40564: remote error: tls: bad certificate 2025-08-13T19:52:14.342111615+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40566: remote error: tls: bad certificate 2025-08-13T19:52:14.388072655+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40578: remote error: tls: bad certificate 2025-08-13T19:52:14.421615230+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40582: remote error: tls: bad certificate 2025-08-13T19:52:14.461890188+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40584: remote error: tls: bad certificate 2025-08-13T19:52:14.501881407+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40598: remote error: tls: bad certificate 2025-08-13T19:52:14.543302267+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40608: remote error: tls: bad certificate 2025-08-13T19:52:14.588453424+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40622: remote error: tls: bad certificate 2025-08-13T19:52:14.623759570+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40624: remote error: tls: bad certificate 2025-08-13T19:52:14.661446144+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40638: remote error: tls: bad certificate 
2025-08-13T19:52:14.701385821+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40648: remote error: tls: bad certificate 2025-08-13T19:52:14.742138102+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40658: remote error: tls: bad certificate 2025-08-13T19:52:14.781102153+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40666: remote error: tls: bad certificate 2025-08-13T19:52:14.822133612+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40678: remote error: tls: bad certificate 2025-08-13T19:52:14.861476223+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40688: remote error: tls: bad certificate 2025-08-13T19:52:14.902266185+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40690: remote error: tls: bad certificate 2025-08-13T19:52:14.942502491+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40696: remote error: tls: bad certificate 2025-08-13T19:52:14.989922982+00:00 stderr F 2025/08/13 19:52:14 http: TLS handshake error from 127.0.0.1:40702: remote error: tls: bad certificate 2025-08-13T19:52:15.035294025+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40716: remote error: tls: bad certificate 2025-08-13T19:52:15.062418068+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40718: remote error: tls: bad certificate 2025-08-13T19:52:15.101994224+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40730: remote error: tls: bad certificate 2025-08-13T19:52:15.142621022+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40742: remote error: tls: bad certificate 2025-08-13T19:52:15.194532501+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40756: remote error: tls: bad certificate 2025-08-13T19:52:15.224587177+00:00 stderr F 2025/08/13 19:52:15 http: TLS 
handshake error from 127.0.0.1:40758: remote error: tls: bad certificate 2025-08-13T19:52:15.266966404+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40772: remote error: tls: bad certificate 2025-08-13T19:52:15.307219001+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40784: remote error: tls: bad certificate 2025-08-13T19:52:15.346391457+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40786: remote error: tls: bad certificate 2025-08-13T19:52:15.388867288+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40788: remote error: tls: bad certificate 2025-08-13T19:52:15.426451058+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40798: remote error: tls: bad certificate 2025-08-13T19:52:15.462024732+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40800: remote error: tls: bad certificate 2025-08-13T19:52:15.504317767+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40808: remote error: tls: bad certificate 2025-08-13T19:52:15.541993750+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40824: remote error: tls: bad certificate 2025-08-13T19:52:15.587716863+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40830: remote error: tls: bad certificate 2025-08-13T19:52:15.620622270+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40840: remote error: tls: bad certificate 2025-08-13T19:52:15.662918125+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40844: remote error: tls: bad certificate 2025-08-13T19:52:15.700840836+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40850: remote error: tls: bad certificate 2025-08-13T19:52:15.743609805+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40866: remote error: tls: bad certificate 
2025-08-13T19:52:15.788287597+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40876: remote error: tls: bad certificate 2025-08-13T19:52:15.821371540+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40892: remote error: tls: bad certificate 2025-08-13T19:52:15.862442170+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40908: remote error: tls: bad certificate 2025-08-13T19:52:15.901707219+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40910: remote error: tls: bad certificate 2025-08-13T19:52:15.944092526+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40920: remote error: tls: bad certificate 2025-08-13T19:52:15.989336395+00:00 stderr F 2025/08/13 19:52:15 http: TLS handshake error from 127.0.0.1:40936: remote error: tls: bad certificate 2025-08-13T19:52:16.021855652+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40946: remote error: tls: bad certificate 2025-08-13T19:52:16.063159029+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40950: remote error: tls: bad certificate 2025-08-13T19:52:16.104336372+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40952: remote error: tls: bad certificate 2025-08-13T19:52:16.143946810+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40968: remote error: tls: bad certificate 2025-08-13T19:52:16.185765572+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40972: remote error: tls: bad certificate 2025-08-13T19:52:16.224690171+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40988: remote error: tls: bad certificate 2025-08-13T19:52:16.271144184+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:40992: remote error: tls: bad certificate 2025-08-13T19:52:16.303604989+00:00 stderr F 2025/08/13 19:52:16 http: TLS 
handshake error from 127.0.0.1:41002: remote error: tls: bad certificate 2025-08-13T19:52:16.343666201+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41018: remote error: tls: bad certificate 2025-08-13T19:52:16.384617087+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41034: remote error: tls: bad certificate 2025-08-13T19:52:16.421874179+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41046: remote error: tls: bad certificate 2025-08-13T19:52:16.427717065+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41060: remote error: tls: bad certificate 2025-08-13T19:52:16.450360310+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41076: remote error: tls: bad certificate 2025-08-13T19:52:16.463030341+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41078: remote error: tls: bad certificate 2025-08-13T19:52:16.474139578+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41088: remote error: tls: bad certificate 2025-08-13T19:52:16.495102035+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41092: remote error: tls: bad certificate 2025-08-13T19:52:16.508591420+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41094: remote error: tls: bad certificate 2025-08-13T19:52:16.516873826+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41096: remote error: tls: bad certificate 2025-08-13T19:52:16.543182675+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41104: remote error: tls: bad certificate 2025-08-13T19:52:16.583160114+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41118: remote error: tls: bad certificate 2025-08-13T19:52:16.622651059+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41134: remote error: tls: bad certificate 
2025-08-13T19:52:16.662400672+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41150: remote error: tls: bad certificate 2025-08-13T19:52:16.702245907+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41156: remote error: tls: bad certificate 2025-08-13T19:52:16.742052621+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41168: remote error: tls: bad certificate 2025-08-13T19:52:16.784443129+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41172: remote error: tls: bad certificate 2025-08-13T19:52:16.822364699+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41182: remote error: tls: bad certificate 2025-08-13T19:52:16.863114030+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41194: remote error: tls: bad certificate 2025-08-13T19:52:16.904113078+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41198: remote error: tls: bad certificate 2025-08-13T19:52:16.942268855+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41204: remote error: tls: bad certificate 2025-08-13T19:52:16.981872174+00:00 stderr F 2025/08/13 19:52:16 http: TLS handshake error from 127.0.0.1:41206: remote error: tls: bad certificate 2025-08-13T19:52:17.024432966+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41208: remote error: tls: bad certificate 2025-08-13T19:52:17.069047137+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41214: remote error: tls: bad certificate 2025-08-13T19:52:17.108159092+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41226: remote error: tls: bad certificate 2025-08-13T19:52:17.151381943+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41230: remote error: tls: bad certificate 2025-08-13T19:52:17.189513860+00:00 stderr F 2025/08/13 19:52:17 http: TLS 
handshake error from 127.0.0.1:41240: remote error: tls: bad certificate 2025-08-13T19:52:17.228182411+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41248: remote error: tls: bad certificate 2025-08-13T19:52:17.266487893+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41264: remote error: tls: bad certificate 2025-08-13T19:52:17.307342997+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41272: remote error: tls: bad certificate 2025-08-13T19:52:17.346971366+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41276: remote error: tls: bad certificate 2025-08-13T19:52:17.383013183+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41278: remote error: tls: bad certificate 2025-08-13T19:52:17.425935346+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41282: remote error: tls: bad certificate 2025-08-13T19:52:17.464476354+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41286: remote error: tls: bad certificate 2025-08-13T19:52:17.502745114+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41296: remote error: tls: bad certificate 2025-08-13T19:52:17.544722150+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41308: remote error: tls: bad certificate 2025-08-13T19:52:17.584005599+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41316: remote error: tls: bad certificate 2025-08-13T19:52:17.629548597+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41318: remote error: tls: bad certificate 2025-08-13T19:52:17.665853351+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41334: remote error: tls: bad certificate 2025-08-13T19:52:17.703093872+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41348: remote error: tls: bad certificate 
2025-08-13T19:52:17.744529713+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41354: remote error: tls: bad certificate 2025-08-13T19:52:17.785493290+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41364: remote error: tls: bad certificate 2025-08-13T19:52:17.823942645+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41380: remote error: tls: bad certificate 2025-08-13T19:52:17.871266064+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41384: remote error: tls: bad certificate 2025-08-13T19:52:17.904524381+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41396: remote error: tls: bad certificate 2025-08-13T19:52:17.947506846+00:00 stderr F 2025/08/13 19:52:17 http: TLS handshake error from 127.0.0.1:41408: remote error: tls: bad certificate 2025-08-13T19:52:22.825275536+00:00 stderr F 2025/08/13 19:52:22 http: TLS handshake error from 127.0.0.1:58334: remote error: tls: bad certificate 2025-08-13T19:52:25.223326149+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58338: remote error: tls: bad certificate 2025-08-13T19:52:25.240954791+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58350: remote error: tls: bad certificate 2025-08-13T19:52:25.296952207+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58364: remote error: tls: bad certificate 2025-08-13T19:52:25.338490320+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58368: remote error: tls: bad certificate 2025-08-13T19:52:25.356747301+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58384: remote error: tls: bad certificate 2025-08-13T19:52:25.372677344+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58386: remote error: tls: bad certificate 2025-08-13T19:52:25.387531868+00:00 stderr F 2025/08/13 19:52:25 http: TLS 
handshake error from 127.0.0.1:58402: remote error: tls: bad certificate 2025-08-13T19:52:25.407517437+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58412: remote error: tls: bad certificate 2025-08-13T19:52:25.424520621+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58416: remote error: tls: bad certificate 2025-08-13T19:52:25.442419241+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58430: remote error: tls: bad certificate 2025-08-13T19:52:25.459381585+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58434: remote error: tls: bad certificate 2025-08-13T19:52:25.475724650+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58440: remote error: tls: bad certificate 2025-08-13T19:52:25.501438643+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58442: remote error: tls: bad certificate 2025-08-13T19:52:25.516681827+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58450: remote error: tls: bad certificate 2025-08-13T19:52:25.531246612+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58458: remote error: tls: bad certificate 2025-08-13T19:52:25.545003804+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58466: remote error: tls: bad certificate 2025-08-13T19:52:25.561746321+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58472: remote error: tls: bad certificate 2025-08-13T19:52:25.579395714+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58480: remote error: tls: bad certificate 2025-08-13T19:52:25.593453455+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58490: remote error: tls: bad certificate 2025-08-13T19:52:25.609185443+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58492: remote error: tls: bad certificate 
2025-08-13T19:52:25.625933600+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58498: remote error: tls: bad certificate 2025-08-13T19:52:25.639068834+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58514: remote error: tls: bad certificate 2025-08-13T19:52:25.652001703+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58524: remote error: tls: bad certificate 2025-08-13T19:52:25.666618059+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58534: remote error: tls: bad certificate 2025-08-13T19:52:25.682511092+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58536: remote error: tls: bad certificate 2025-08-13T19:52:25.701233115+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58552: remote error: tls: bad certificate 2025-08-13T19:52:25.716989544+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58560: remote error: tls: bad certificate 2025-08-13T19:52:25.735908663+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58566: remote error: tls: bad certificate 2025-08-13T19:52:25.754454232+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58582: remote error: tls: bad certificate 2025-08-13T19:52:25.769279854+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58594: remote error: tls: bad certificate 2025-08-13T19:52:25.783194450+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58600: remote error: tls: bad certificate 2025-08-13T19:52:25.799534336+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58602: remote error: tls: bad certificate 2025-08-13T19:52:25.818184086+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58608: remote error: tls: bad certificate 2025-08-13T19:52:25.843068765+00:00 stderr F 2025/08/13 19:52:25 http: TLS 
handshake error from 127.0.0.1:58618: remote error: tls: bad certificate 2025-08-13T19:52:25.858664330+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58632: remote error: tls: bad certificate 2025-08-13T19:52:25.876926030+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58642: remote error: tls: bad certificate 2025-08-13T19:52:25.900269615+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58644: remote error: tls: bad certificate 2025-08-13T19:52:25.916197429+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58654: remote error: tls: bad certificate 2025-08-13T19:52:25.933082790+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58666: remote error: tls: bad certificate 2025-08-13T19:52:25.950171327+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58678: remote error: tls: bad certificate 2025-08-13T19:52:25.968612212+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58694: remote error: tls: bad certificate 2025-08-13T19:52:25.985738720+00:00 stderr F 2025/08/13 19:52:25 http: TLS handshake error from 127.0.0.1:58706: remote error: tls: bad certificate 2025-08-13T19:52:26.003170677+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58708: remote error: tls: bad certificate 2025-08-13T19:52:26.027500480+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58716: remote error: tls: bad certificate 2025-08-13T19:52:26.054754387+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58718: remote error: tls: bad certificate 2025-08-13T19:52:26.069362183+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58730: remote error: tls: bad certificate 2025-08-13T19:52:26.085567544+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58740: remote error: tls: bad certificate 
2025-08-13T19:52:26.098739770+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58742: remote error: tls: bad certificate 2025-08-13T19:52:26.115668642+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58756: remote error: tls: bad certificate 2025-08-13T19:52:26.135727873+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58764: remote error: tls: bad certificate 2025-08-13T19:52:26.149984610+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58766: remote error: tls: bad certificate 2025-08-13T19:52:26.167277312+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58776: remote error: tls: bad certificate 2025-08-13T19:52:26.181919249+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58786: remote error: tls: bad certificate 2025-08-13T19:52:26.198191933+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58798: remote error: tls: bad certificate 2025-08-13T19:52:26.216253928+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58804: remote error: tls: bad certificate 2025-08-13T19:52:26.234710754+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58816: remote error: tls: bad certificate 2025-08-13T19:52:26.255699392+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58826: remote error: tls: bad certificate 2025-08-13T19:52:26.275589568+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58832: remote error: tls: bad certificate 2025-08-13T19:52:26.293391465+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58836: remote error: tls: bad certificate 2025-08-13T19:52:26.309984128+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58850: remote error: tls: bad certificate 2025-08-13T19:52:26.327238480+00:00 stderr F 2025/08/13 19:52:26 http: TLS 
handshake error from 127.0.0.1:58856: remote error: tls: bad certificate 2025-08-13T19:52:26.350983526+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58864: remote error: tls: bad certificate 2025-08-13T19:52:26.367071035+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58870: remote error: tls: bad certificate 2025-08-13T19:52:26.386034035+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58880: remote error: tls: bad certificate 2025-08-13T19:52:26.404960874+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58896: remote error: tls: bad certificate 2025-08-13T19:52:26.473646541+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58900: remote error: tls: bad certificate 2025-08-13T19:52:26.487644650+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58912: remote error: tls: bad certificate 2025-08-13T19:52:26.693322640+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58926: remote error: tls: bad certificate 2025-08-13T19:52:26.712385813+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58936: remote error: tls: bad certificate 2025-08-13T19:52:26.738643721+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58950: remote error: tls: bad certificate 2025-08-13T19:52:26.760013040+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58962: remote error: tls: bad certificate 2025-08-13T19:52:26.793966847+00:00 stderr F 2025/08/13 19:52:26 http: TLS handshake error from 127.0.0.1:58968: remote error: tls: bad certificate 2025-08-13T19:52:35.227220003+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:59988: remote error: tls: bad certificate 2025-08-13T19:52:35.249591660+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:59992: remote error: tls: bad certificate 
2025-08-13T19:52:35.268651212+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60004: remote error: tls: bad certificate 2025-08-13T19:52:35.292355107+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60018: remote error: tls: bad certificate 2025-08-13T19:52:35.314572919+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60030: remote error: tls: bad certificate 2025-08-13T19:52:35.338897801+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60046: remote error: tls: bad certificate 2025-08-13T19:52:35.361975338+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60048: remote error: tls: bad certificate 2025-08-13T19:52:35.380907637+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60058: remote error: tls: bad certificate 2025-08-13T19:52:35.404499658+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60070: remote error: tls: bad certificate 2025-08-13T19:52:35.451247219+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60082: remote error: tls: bad certificate 2025-08-13T19:52:35.479362279+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60096: remote error: tls: bad certificate 2025-08-13T19:52:35.501621333+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60106: remote error: tls: bad certificate 2025-08-13T19:52:35.523306260+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60114: remote error: tls: bad certificate 2025-08-13T19:52:35.540165600+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60118: remote error: tls: bad certificate 2025-08-13T19:52:35.558233844+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60130: remote error: tls: bad certificate 2025-08-13T19:52:35.580755125+00:00 stderr F 2025/08/13 19:52:35 http: TLS 
handshake error from 127.0.0.1:60142: remote error: tls: bad certificate 2025-08-13T19:52:35.599640012+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60156: remote error: tls: bad certificate 2025-08-13T19:52:35.617128680+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60170: remote error: tls: bad certificate 2025-08-13T19:52:35.634223507+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60182: remote error: tls: bad certificate 2025-08-13T19:52:35.652244880+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60192: remote error: tls: bad certificate 2025-08-13T19:52:35.670488279+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60200: remote error: tls: bad certificate 2025-08-13T19:52:35.694291766+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60202: remote error: tls: bad certificate 2025-08-13T19:52:35.718700431+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60210: remote error: tls: bad certificate 2025-08-13T19:52:35.740032238+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60226: remote error: tls: bad certificate 2025-08-13T19:52:35.761107728+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60240: remote error: tls: bad certificate 2025-08-13T19:52:35.785249005+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60248: remote error: tls: bad certificate 2025-08-13T19:52:35.805376058+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60262: remote error: tls: bad certificate 2025-08-13T19:52:35.830883094+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60276: remote error: tls: bad certificate 2025-08-13T19:52:35.848616099+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60290: remote error: tls: bad certificate 
2025-08-13T19:52:35.869591326+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60294: remote error: tls: bad certificate 2025-08-13T19:52:35.886356883+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60298: remote error: tls: bad certificate 2025-08-13T19:52:35.908531544+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60314: remote error: tls: bad certificate 2025-08-13T19:52:35.935897313+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60328: remote error: tls: bad certificate 2025-08-13T19:52:35.959551676+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60338: remote error: tls: bad certificate 2025-08-13T19:52:35.979486513+00:00 stderr F 2025/08/13 19:52:35 http: TLS handshake error from 127.0.0.1:60348: remote error: tls: bad certificate 2025-08-13T19:52:36.010430514+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60360: remote error: tls: bad certificate 2025-08-13T19:52:36.028902670+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60372: remote error: tls: bad certificate 2025-08-13T19:52:36.047386496+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60382: remote error: tls: bad certificate 2025-08-13T19:52:36.064362099+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60390: remote error: tls: bad certificate 2025-08-13T19:52:36.088390823+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60402: remote error: tls: bad certificate 2025-08-13T19:52:36.102733541+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60408: remote error: tls: bad certificate 2025-08-13T19:52:36.117908473+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60414: remote error: tls: bad certificate 2025-08-13T19:52:36.135445892+00:00 stderr F 2025/08/13 19:52:36 http: TLS 
handshake error from 127.0.0.1:60426: remote error: tls: bad certificate 2025-08-13T19:52:36.160731252+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60436: remote error: tls: bad certificate 2025-08-13T19:52:36.176887242+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60442: remote error: tls: bad certificate 2025-08-13T19:52:36.191049525+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60452: remote error: tls: bad certificate 2025-08-13T19:52:36.215687576+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60462: remote error: tls: bad certificate 2025-08-13T19:52:36.285396160+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60476: remote error: tls: bad certificate 2025-08-13T19:52:36.310347920+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60478: remote error: tls: bad certificate 2025-08-13T19:52:36.337593086+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60486: remote error: tls: bad certificate 2025-08-13T19:52:36.356948637+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60492: remote error: tls: bad certificate 2025-08-13T19:52:36.380633021+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60496: remote error: tls: bad certificate 2025-08-13T19:52:36.399318903+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60500: remote error: tls: bad certificate 2025-08-13T19:52:36.422595145+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60504: remote error: tls: bad certificate 2025-08-13T19:52:36.438301642+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60516: remote error: tls: bad certificate 2025-08-13T19:52:36.460266597+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60522: remote error: tls: bad certificate 
2025-08-13T19:52:36.494429050+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60530: remote error: tls: bad certificate 2025-08-13T19:52:36.519513724+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60536: remote error: tls: bad certificate 2025-08-13T19:52:36.542643321+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60552: remote error: tls: bad certificate 2025-08-13T19:52:36.561230200+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60564: remote error: tls: bad certificate 2025-08-13T19:52:36.583634798+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60570: remote error: tls: bad certificate 2025-08-13T19:52:36.607348422+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60582: remote error: tls: bad certificate 2025-08-13T19:52:36.631427048+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60598: remote error: tls: bad certificate 2025-08-13T19:52:36.654232857+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60614: remote error: tls: bad certificate 2025-08-13T19:52:36.671708964+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60622: remote error: tls: bad certificate 2025-08-13T19:52:36.689869311+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60636: remote error: tls: bad certificate 2025-08-13T19:52:36.707753290+00:00 stderr F 2025/08/13 19:52:36 http: TLS handshake error from 127.0.0.1:60650: remote error: tls: bad certificate 2025-08-13T19:52:37.193249368+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60660: remote error: tls: bad certificate 2025-08-13T19:52:37.227900124+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60672: remote error: tls: bad certificate 2025-08-13T19:52:37.253324988+00:00 stderr F 2025/08/13 19:52:37 http: TLS 
handshake error from 127.0.0.1:60682: remote error: tls: bad certificate 2025-08-13T19:52:37.289531118+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60698: remote error: tls: bad certificate 2025-08-13T19:52:37.320870740+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60702: remote error: tls: bad certificate 2025-08-13T19:52:37.359422918+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60718: remote error: tls: bad certificate 2025-08-13T19:52:37.385714426+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60724: remote error: tls: bad certificate 2025-08-13T19:52:37.406349833+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60736: remote error: tls: bad certificate 2025-08-13T19:52:37.439943789+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60740: remote error: tls: bad certificate 2025-08-13T19:52:37.465069355+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60754: remote error: tls: bad certificate 2025-08-13T19:52:37.484755785+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60768: remote error: tls: bad certificate 2025-08-13T19:52:37.511383423+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60780: remote error: tls: bad certificate 2025-08-13T19:52:37.540501142+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60794: remote error: tls: bad certificate 2025-08-13T19:52:37.560563783+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60800: remote error: tls: bad certificate 2025-08-13T19:52:37.586075869+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60804: remote error: tls: bad certificate 2025-08-13T19:52:37.602590359+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60808: remote error: tls: bad certificate 
2025-08-13T19:52:37.622554317+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60824: remote error: tls: bad certificate 2025-08-13T19:52:37.644898103+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60826: remote error: tls: bad certificate 2025-08-13T19:52:37.657870252+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60838: remote error: tls: bad certificate 2025-08-13T19:52:37.663043059+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60852: remote error: tls: bad certificate 2025-08-13T19:52:37.680747583+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60864: remote error: tls: bad certificate 2025-08-13T19:52:37.699417155+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60868: remote error: tls: bad certificate 2025-08-13T19:52:37.716896672+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60884: remote error: tls: bad certificate 2025-08-13T19:52:37.735016848+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60898: remote error: tls: bad certificate 2025-08-13T19:52:37.752608598+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60908: remote error: tls: bad certificate 2025-08-13T19:52:37.772235147+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60910: remote error: tls: bad certificate 2025-08-13T19:52:37.791632809+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60926: remote error: tls: bad certificate 2025-08-13T19:52:37.808453488+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60934: remote error: tls: bad certificate 2025-08-13T19:52:37.829219909+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60936: remote error: tls: bad certificate 2025-08-13T19:52:37.843633199+00:00 stderr F 2025/08/13 19:52:37 http: TLS 
handshake error from 127.0.0.1:60948: remote error: tls: bad certificate 2025-08-13T19:52:37.859916393+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60950: remote error: tls: bad certificate 2025-08-13T19:52:37.877126023+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60956: remote error: tls: bad certificate 2025-08-13T19:52:37.893380835+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60964: remote error: tls: bad certificate 2025-08-13T19:52:37.909553905+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60966: remote error: tls: bad certificate 2025-08-13T19:52:37.927916698+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60982: remote error: tls: bad certificate 2025-08-13T19:52:37.940262830+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:60996: remote error: tls: bad certificate 2025-08-13T19:52:37.958360625+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:32774: remote error: tls: bad certificate 2025-08-13T19:52:37.976464500+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:32790: remote error: tls: bad certificate 2025-08-13T19:52:37.996353576+00:00 stderr F 2025/08/13 19:52:37 http: TLS handshake error from 127.0.0.1:32798: remote error: tls: bad certificate 2025-08-13T19:52:38.013785722+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32800: remote error: tls: bad certificate 2025-08-13T19:52:38.031618750+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32806: remote error: tls: bad certificate 2025-08-13T19:52:38.048144550+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32818: remote error: tls: bad certificate 2025-08-13T19:52:38.066642077+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32832: remote error: tls: bad certificate 
2025-08-13T19:52:38.087573122+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32848: remote error: tls: bad certificate 2025-08-13T19:52:38.107236172+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32858: remote error: tls: bad certificate 2025-08-13T19:52:38.121904749+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32862: remote error: tls: bad certificate 2025-08-13T19:52:38.147047965+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32870: remote error: tls: bad certificate 2025-08-13T19:52:38.160669693+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32878: remote error: tls: bad certificate 2025-08-13T19:52:38.178071618+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32886: remote error: tls: bad certificate 2025-08-13T19:52:38.197938123+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32900: remote error: tls: bad certificate 2025-08-13T19:52:38.220079204+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32904: remote error: tls: bad certificate 2025-08-13T19:52:38.239529217+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32918: remote error: tls: bad certificate 2025-08-13T19:52:38.263114808+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32930: remote error: tls: bad certificate 2025-08-13T19:52:38.283980442+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32932: remote error: tls: bad certificate 2025-08-13T19:52:38.301495141+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32946: remote error: tls: bad certificate 2025-08-13T19:52:38.318188146+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32958: remote error: tls: bad certificate 2025-08-13T19:52:38.334608393+00:00 stderr F 2025/08/13 19:52:38 http: TLS 
handshake error from 127.0.0.1:32966: remote error: tls: bad certificate 2025-08-13T19:52:38.368160578+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32970: remote error: tls: bad certificate 2025-08-13T19:52:38.390010810+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32984: remote error: tls: bad certificate 2025-08-13T19:52:38.413753726+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:32986: remote error: tls: bad certificate 2025-08-13T19:52:38.432204501+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33000: remote error: tls: bad certificate 2025-08-13T19:52:38.456990616+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33004: remote error: tls: bad certificate 2025-08-13T19:52:38.472303832+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33006: remote error: tls: bad certificate 2025-08-13T19:52:38.486705692+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33016: remote error: tls: bad certificate 2025-08-13T19:52:38.502739549+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33030: remote error: tls: bad certificate 2025-08-13T19:52:38.519135215+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33044: remote error: tls: bad certificate 2025-08-13T19:52:38.532735662+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33058: remote error: tls: bad certificate 2025-08-13T19:52:38.549972963+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33074: remote error: tls: bad certificate 2025-08-13T19:52:38.565916947+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33090: remote error: tls: bad certificate 2025-08-13T19:52:38.579729910+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33104: remote error: tls: bad certificate 
2025-08-13T19:52:38.595466578+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33118: remote error: tls: bad certificate 2025-08-13T19:52:38.617000141+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33132: remote error: tls: bad certificate 2025-08-13T19:52:38.633136970+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33148: remote error: tls: bad certificate 2025-08-13T19:52:38.649141945+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33162: remote error: tls: bad certificate 2025-08-13T19:52:38.664753450+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33172: remote error: tls: bad certificate 2025-08-13T19:52:38.704004367+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:33180: remote error: tls: bad certificate 2025-08-13T19:52:38.738288043+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:49942: remote error: tls: bad certificate 2025-08-13T19:52:38.778373084+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:49958: remote error: tls: bad certificate 2025-08-13T19:52:38.818572478+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:49974: remote error: tls: bad certificate 2025-08-13T19:52:38.860590264+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:49988: remote error: tls: bad certificate 2025-08-13T19:52:38.911176863+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:50004: remote error: tls: bad certificate 2025-08-13T19:52:38.937672828+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:50008: remote error: tls: bad certificate 2025-08-13T19:52:38.978188481+00:00 stderr F 2025/08/13 19:52:38 http: TLS handshake error from 127.0.0.1:50014: remote error: tls: bad certificate 2025-08-13T19:52:39.019203178+00:00 stderr F 2025/08/13 19:52:39 http: TLS 
handshake error from 127.0.0.1:50024: remote error: tls: bad certificate 2025-08-13T19:52:39.062579053+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50036: remote error: tls: bad certificate 2025-08-13T19:52:39.100899163+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50040: remote error: tls: bad certificate 2025-08-13T19:52:39.140337246+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50044: remote error: tls: bad certificate 2025-08-13T19:52:39.179467790+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50048: remote error: tls: bad certificate 2025-08-13T19:52:39.223054360+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50064: remote error: tls: bad certificate 2025-08-13T19:52:39.258021305+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50072: remote error: tls: bad certificate 2025-08-13T19:52:39.297485249+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50086: remote error: tls: bad certificate 2025-08-13T19:52:39.341536392+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50092: remote error: tls: bad certificate 2025-08-13T19:52:39.384896026+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50094: remote error: tls: bad certificate 2025-08-13T19:52:39.420532681+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50106: remote error: tls: bad certificate 2025-08-13T19:52:39.460883209+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50114: remote error: tls: bad certificate 2025-08-13T19:52:39.500563549+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50116: remote error: tls: bad certificate 2025-08-13T19:52:39.539577799+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50122: remote error: tls: bad certificate 
2025-08-13T19:52:39.580488363+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50130: remote error: tls: bad certificate 2025-08-13T19:52:39.620439841+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50144: remote error: tls: bad certificate 2025-08-13T19:52:39.660903792+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50146: remote error: tls: bad certificate 2025-08-13T19:52:39.699542742+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50148: remote error: tls: bad certificate 2025-08-13T19:52:39.739991853+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50152: remote error: tls: bad certificate 2025-08-13T19:52:39.781197136+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50158: remote error: tls: bad certificate 2025-08-13T19:52:39.818201909+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50164: remote error: tls: bad certificate 2025-08-13T19:52:39.859011151+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50174: remote error: tls: bad certificate 2025-08-13T19:52:39.900138121+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50184: remote error: tls: bad certificate 2025-08-13T19:52:39.940674485+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50198: remote error: tls: bad certificate 2025-08-13T19:52:39.981173628+00:00 stderr F 2025/08/13 19:52:39 http: TLS handshake error from 127.0.0.1:50208: remote error: tls: bad certificate 2025-08-13T19:52:40.021504846+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50210: remote error: tls: bad certificate 2025-08-13T19:52:40.060499815+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50212: remote error: tls: bad certificate 2025-08-13T19:52:40.099227238+00:00 stderr F 2025/08/13 19:52:40 http: TLS 
handshake error from 127.0.0.1:50218: remote error: tls: bad certificate 2025-08-13T19:52:40.138207596+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50234: remote error: tls: bad certificate 2025-08-13T19:52:40.179364457+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50248: remote error: tls: bad certificate 2025-08-13T19:52:40.220187179+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50250: remote error: tls: bad certificate 2025-08-13T19:52:40.266923459+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50262: remote error: tls: bad certificate 2025-08-13T19:52:40.300463004+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50278: remote error: tls: bad certificate 2025-08-13T19:52:40.338439655+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50280: remote error: tls: bad certificate 2025-08-13T19:52:40.380567274+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50296: remote error: tls: bad certificate 2025-08-13T19:52:40.418242506+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50310: remote error: tls: bad certificate 2025-08-13T19:52:40.465546533+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50320: remote error: tls: bad certificate 2025-08-13T19:52:40.496763391+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50332: remote error: tls: bad certificate 2025-08-13T19:52:40.538459038+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50334: remote error: tls: bad certificate 2025-08-13T19:52:40.578203929+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50340: remote error: tls: bad certificate 2025-08-13T19:52:40.620458722+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50354: remote error: tls: bad certificate 
2025-08-13T19:52:40.660864622+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50360: remote error: tls: bad certificate 2025-08-13T19:52:40.699334777+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50366: remote error: tls: bad certificate 2025-08-13T19:52:40.739588543+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50382: remote error: tls: bad certificate 2025-08-13T19:52:40.779523379+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50388: remote error: tls: bad certificate 2025-08-13T19:52:40.822485122+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50392: remote error: tls: bad certificate 2025-08-13T19:52:40.860665839+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50400: remote error: tls: bad certificate 2025-08-13T19:52:40.900133812+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50404: remote error: tls: bad certificate 2025-08-13T19:52:40.941152159+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50416: remote error: tls: bad certificate 2025-08-13T19:52:40.978907494+00:00 stderr F 2025/08/13 19:52:40 http: TLS handshake error from 127.0.0.1:50424: remote error: tls: bad certificate 2025-08-13T19:52:41.019480659+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50428: remote error: tls: bad certificate 2025-08-13T19:52:41.058350335+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50436: remote error: tls: bad certificate 2025-08-13T19:52:41.098664902+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50438: remote error: tls: bad certificate 2025-08-13T19:52:41.142159720+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50450: remote error: tls: bad certificate 2025-08-13T19:52:41.185477073+00:00 stderr F 2025/08/13 19:52:41 http: TLS 
handshake error from 127.0.0.1:50464: remote error: tls: bad certificate 2025-08-13T19:52:41.238757810+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50470: remote error: tls: bad certificate 2025-08-13T19:52:41.258080630+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50480: remote error: tls: bad certificate 2025-08-13T19:52:41.298741647+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50486: remote error: tls: bad certificate 2025-08-13T19:52:41.341063322+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50496: remote error: tls: bad certificate 2025-08-13T19:52:41.382123200+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50508: remote error: tls: bad certificate 2025-08-13T19:52:41.418920798+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50520: remote error: tls: bad certificate 2025-08-13T19:52:41.464397882+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50536: remote error: tls: bad certificate 2025-08-13T19:52:41.498985476+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50538: remote error: tls: bad certificate 2025-08-13T19:52:41.538845161+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50546: remote error: tls: bad certificate 2025-08-13T19:52:41.578564341+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50554: remote error: tls: bad certificate 2025-08-13T19:52:41.619309401+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50562: remote error: tls: bad certificate 2025-08-13T19:52:41.665264039+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50578: remote error: tls: bad certificate 2025-08-13T19:52:41.698925307+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50590: remote error: tls: bad certificate 
2025-08-13T19:52:41.738637707+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50600: remote error: tls: bad certificate
2025-08-13T19:52:41.783739051+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50616: remote error: tls: bad certificate
2025-08-13T19:52:41.826310292+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50624: remote error: tls: bad certificate
2025-08-13T19:52:41.860895847+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50628: remote error: tls: bad certificate
2025-08-13T19:52:41.898189188+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50632: remote error: tls: bad certificate
2025-08-13T19:52:41.938887107+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50638: remote error: tls: bad certificate
2025-08-13T19:52:41.979832382+00:00 stderr F 2025/08/13 19:52:41 http: TLS handshake error from 127.0.0.1:50648: remote error: tls: bad certificate
2025-08-13T19:52:42.020386316+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50664: remote error: tls: bad certificate
2025-08-13T19:52:42.085024836+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50674: remote error: tls: bad certificate
2025-08-13T19:52:42.108896596+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50684: remote error: tls: bad certificate
2025-08-13T19:52:42.142605725+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50698: remote error: tls: bad certificate
2025-08-13T19:52:42.181068060+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50706: remote error: tls: bad certificate
2025-08-13T19:52:42.220756259+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50716: remote error: tls: bad certificate
2025-08-13T19:52:42.262327832+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50730: remote error: tls: bad certificate
2025-08-13T19:52:42.301191519+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50734: remote error: tls: bad certificate
2025-08-13T19:52:42.339704595+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50738: remote error: tls: bad certificate
2025-08-13T19:52:42.381139314+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50754: remote error: tls: bad certificate
2025-08-13T19:52:42.419863786+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50766: remote error: tls: bad certificate
2025-08-13T19:52:42.465001591+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50776: remote error: tls: bad certificate
2025-08-13T19:52:42.498351280+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50790: remote error: tls: bad certificate
2025-08-13T19:52:42.539761019+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50802: remote error: tls: bad certificate
2025-08-13T19:52:42.579370336+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50804: remote error: tls: bad certificate
2025-08-13T19:52:42.617901903+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50814: remote error: tls: bad certificate
2025-08-13T19:52:42.658504158+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50816: remote error: tls: bad certificate
2025-08-13T19:52:42.700273977+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50818: remote error: tls: bad certificate
2025-08-13T19:52:42.739716560+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50832: remote error: tls: bad certificate
2025-08-13T19:52:42.783097475+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50840: remote error: tls: bad certificate
2025-08-13T19:52:42.818829582+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50846: remote error: tls: bad certificate
2025-08-13T19:52:42.858970954+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50854: remote error: tls: bad certificate
2025-08-13T19:52:42.898699005+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50862: remote error: tls: bad certificate
2025-08-13T19:52:42.938124517+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50868: remote error: tls: bad certificate
2025-08-13T19:52:42.978704422+00:00 stderr F 2025/08/13 19:52:42 http: TLS handshake error from 127.0.0.1:50880: remote error: tls: bad certificate
2025-08-13T19:52:43.021602073+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50884: remote error: tls: bad certificate
2025-08-13T19:52:43.058596086+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50896: remote error: tls: bad certificate
2025-08-13T19:52:43.101869527+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50906: remote error: tls: bad certificate
2025-08-13T19:52:43.139448487+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50912: remote error: tls: bad certificate
2025-08-13T19:52:43.177199172+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50920: remote error: tls: bad certificate
2025-08-13T19:52:43.233126223+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50926: remote error: tls: bad certificate
2025-08-13T19:52:43.260025009+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50930: remote error: tls: bad certificate
2025-08-13T19:52:43.304552106+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50932: remote error: tls: bad certificate
2025-08-13T19:52:43.337638108+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50936: remote error: tls: bad certificate
2025-08-13T19:52:43.384202933+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50952: remote error: tls: bad certificate
2025-08-13T19:52:43.421442143+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50964: remote error: tls: bad certificate
2025-08-13T19:52:43.467144854+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50968: remote error: tls: bad certificate
2025-08-13T19:52:43.484164738+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50984: remote error: tls: bad certificate
2025-08-13T19:52:43.501910653+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50990: remote error: tls: bad certificate
2025-08-13T19:52:43.510504498+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50994: remote error: tls: bad certificate
2025-08-13T19:52:43.541136590+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:50996: remote error: tls: bad certificate
2025-08-13T19:52:43.577963828+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51010: remote error: tls: bad certificate
2025-08-13T19:52:43.581344194+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51020: remote error: tls: bad certificate
2025-08-13T19:52:43.623530135+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51022: remote error: tls: bad certificate
2025-08-13T19:52:43.660688303+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51038: remote error: tls: bad certificate
2025-08-13T19:52:43.699336413+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51042: remote error: tls: bad certificate
2025-08-13T19:52:43.739646039+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51058: remote error: tls: bad certificate
2025-08-13T19:52:43.781623654+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51060: remote error: tls: bad certificate
2025-08-13T19:52:43.819892993+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51066: remote error: tls: bad certificate
2025-08-13T19:52:43.851440761+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51076: remote error: tls: bad certificate
2025-08-13T19:52:43.859924972+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51086: remote error: tls: bad certificate
2025-08-13T19:52:43.899604992+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51094: remote error: tls: bad certificate
2025-08-13T19:52:43.944966503+00:00 stderr F 2025/08/13 19:52:43 http: TLS handshake error from 127.0.0.1:51098: remote error: tls: bad certificate
2025-08-13T19:52:45.225947302+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51112: remote error: tls: bad certificate
2025-08-13T19:52:45.240247699+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51122: remote error: tls: bad certificate
2025-08-13T19:52:45.256047379+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51130: remote error: tls: bad certificate
2025-08-13T19:52:45.271318753+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51146: remote error: tls: bad certificate
2025-08-13T19:52:45.296477029+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51162: remote error: tls: bad certificate
2025-08-13T19:52:45.313750891+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51176: remote error: tls: bad certificate
2025-08-13T19:52:45.330859518+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51182: remote error: tls: bad certificate
2025-08-13T19:52:45.347684477+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51192: remote error: tls: bad certificate
2025-08-13T19:52:45.365724360+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51204: remote error: tls: bad certificate
2025-08-13T19:52:45.382989652+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51210: remote error: tls: bad certificate
2025-08-13T19:52:45.399849591+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51224: remote error: tls: bad certificate
2025-08-13T19:52:45.416096524+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51236: remote error: tls: bad certificate
2025-08-13T19:52:45.432255704+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51246: remote error: tls: bad certificate
2025-08-13T19:52:45.448523286+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51254: remote error: tls: bad certificate
2025-08-13T19:52:45.464452620+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51270: remote error: tls: bad certificate
2025-08-13T19:52:45.481724901+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51272: remote error: tls: bad certificate
2025-08-13T19:52:45.502720799+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51288: remote error: tls: bad certificate
2025-08-13T19:52:45.523698776+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51300: remote error: tls: bad certificate
2025-08-13T19:52:45.542021538+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51316: remote error: tls: bad certificate
2025-08-13T19:52:45.558209038+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51328: remote error: tls: bad certificate
2025-08-13T19:52:45.573304198+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51338: remote error: tls: bad certificate
2025-08-13T19:52:45.590321372+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51342: remote error: tls: bad certificate
2025-08-13T19:52:45.605003560+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51352: remote error: tls: bad certificate
2025-08-13T19:52:45.621551881+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51362: remote error: tls: bad certificate
2025-08-13T19:52:45.638193525+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51374: remote error: tls: bad certificate
2025-08-13T19:52:45.653460929+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51388: remote error: tls: bad certificate
2025-08-13T19:52:45.670676119+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51392: remote error: tls: bad certificate
2025-08-13T19:52:45.687166648+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51400: remote error: tls: bad certificate
2025-08-13T19:52:45.704924204+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51402: remote error: tls: bad certificate
2025-08-13T19:52:45.724972404+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51412: remote error: tls: bad certificate
2025-08-13T19:52:45.741738981+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51428: remote error: tls: bad certificate
2025-08-13T19:52:45.761212126+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51444: remote error: tls: bad certificate
2025-08-13T19:52:45.779491976+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51460: remote error: tls: bad certificate
2025-08-13T19:52:45.796329015+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51472: remote error: tls: bad certificate
2025-08-13T19:52:45.811374093+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51474: remote error: tls: bad certificate
2025-08-13T19:52:45.826917786+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51480: remote error: tls: bad certificate
2025-08-13T19:52:45.844705732+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51486: remote error: tls: bad certificate
2025-08-13T19:52:45.868352425+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51496: remote error: tls: bad certificate
2025-08-13T19:52:45.889509207+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51504: remote error: tls: bad certificate
2025-08-13T19:52:45.907740935+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51506: remote error: tls: bad certificate
2025-08-13T19:52:45.924041009+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51516: remote error: tls: bad certificate
2025-08-13T19:52:45.938378668+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51530: remote error: tls: bad certificate
2025-08-13T19:52:45.957604385+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51532: remote error: tls: bad certificate
2025-08-13T19:52:45.974873586+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51542: remote error: tls: bad certificate
2025-08-13T19:52:45.995423291+00:00 stderr F 2025/08/13 19:52:45 http: TLS handshake error from 127.0.0.1:51546: remote error: tls: bad certificate
2025-08-13T19:52:46.012261460+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51560: remote error: tls: bad certificate
2025-08-13T19:52:46.028545314+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51576: remote error: tls: bad certificate
2025-08-13T19:52:46.052997680+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51582: remote error: tls: bad certificate
2025-08-13T19:52:46.068505791+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51596: remote error: tls: bad certificate
2025-08-13T19:52:46.083164058+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51608: remote error: tls: bad certificate
2025-08-13T19:52:46.097340252+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51622: remote error: tls: bad certificate
2025-08-13T19:52:46.116855027+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51636: remote error: tls: bad certificate
2025-08-13T19:52:46.134343505+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51648: remote error: tls: bad certificate
2025-08-13T19:52:46.149321121+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51654: remote error: tls: bad certificate
2025-08-13T19:52:46.164659358+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51670: remote error: tls: bad certificate
2025-08-13T19:52:46.182469634+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51674: remote error: tls: bad certificate
2025-08-13T19:52:46.219869249+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51684: remote error: tls: bad certificate
2025-08-13T19:52:46.260067963+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51696: remote error: tls: bad certificate
2025-08-13T19:52:46.302070268+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51712: remote error: tls: bad certificate
2025-08-13T19:52:46.339118923+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51726: remote error: tls: bad certificate
2025-08-13T19:52:46.381685234+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51742: remote error: tls: bad certificate
2025-08-13T19:52:46.424425790+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51744: remote error: tls: bad certificate
2025-08-13T19:52:46.460672882+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51746: remote error: tls: bad certificate
2025-08-13T19:52:46.499684202+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51758: remote error: tls: bad certificate
2025-08-13T19:52:46.539385302+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51760: remote error: tls: bad certificate
2025-08-13T19:52:46.577085205+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51770: remote error: tls: bad certificate
2025-08-13T19:52:46.618899655+00:00 stderr F 2025/08/13 19:52:46 http: TLS handshake error from 127.0.0.1:51780: remote error: tls: bad certificate
2025-08-13T19:52:47.525561588+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51788: remote error: tls: bad certificate
2025-08-13T19:52:47.549911571+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51800: remote error: tls: bad certificate
2025-08-13T19:52:47.571483325+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51804: remote error: tls: bad certificate
2025-08-13T19:52:47.591991639+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51818: remote error: tls: bad certificate
2025-08-13T19:52:47.613573423+00:00 stderr F 2025/08/13 19:52:47 http: TLS handshake error from 127.0.0.1:51826: remote error: tls: bad certificate
2025-08-13T19:52:52.822261126+00:00 stderr F 2025/08/13 19:52:52 http: TLS handshake error from 127.0.0.1:41480: remote error: tls: bad certificate
2025-08-13T19:52:53.250892205+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41492: remote error: tls: bad certificate
2025-08-13T19:52:53.288911887+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41498: remote error: tls: bad certificate
2025-08-13T19:52:53.306545969+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41502: remote error: tls: bad certificate
2025-08-13T19:52:53.329355968+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41510: remote error: tls: bad certificate
2025-08-13T19:52:53.371528159+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41522: remote error: tls: bad certificate
2025-08-13T19:52:53.388894133+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41532: remote error: tls: bad certificate
2025-08-13T19:52:53.409691325+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41534: remote error: tls: bad certificate
2025-08-13T19:52:53.428690276+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41542: remote error: tls: bad certificate
2025-08-13T19:52:53.448461298+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41556: remote error: tls: bad certificate
2025-08-13T19:52:53.465886094+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41558: remote error: tls: bad certificate
2025-08-13T19:52:53.489946899+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41564: remote error: tls: bad certificate
2025-08-13T19:52:53.524639517+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41572: remote error: tls: bad certificate
2025-08-13T19:52:53.544525362+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41582: remote error: tls: bad certificate
2025-08-13T19:52:53.561312900+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41594: remote error: tls: bad certificate
2025-08-13T19:52:53.595445612+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41600: remote error: tls: bad certificate
2025-08-13T19:52:53.621247736+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41614: remote error: tls: bad certificate
2025-08-13T19:52:53.642441509+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41624: remote error: tls: bad certificate
2025-08-13T19:52:53.658166437+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41638: remote error: tls: bad certificate
2025-08-13T19:52:53.676036176+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41640: remote error: tls: bad certificate
2025-08-13T19:52:53.701036027+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41646: remote error: tls: bad certificate
2025-08-13T19:52:53.720359397+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41652: remote error: tls: bad certificate
2025-08-13T19:52:53.744577396+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41660: remote error: tls: bad certificate
2025-08-13T19:52:53.763927677+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41674: remote error: tls: bad certificate
2025-08-13T19:52:53.782188737+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41686: remote error: tls: bad certificate
2025-08-13T19:52:53.799675384+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41700: remote error: tls: bad certificate
2025-08-13T19:52:53.823228715+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41704: remote error: tls: bad certificate
2025-08-13T19:52:53.874706880+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41716: remote error: tls: bad certificate
2025-08-13T19:52:53.907436572+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41726: remote error: tls: bad certificate
2025-08-13T19:52:53.929080088+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41738: remote error: tls: bad certificate
2025-08-13T19:52:53.951725902+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41754: remote error: tls: bad certificate
2025-08-13T19:52:53.982139838+00:00 stderr F 2025/08/13 19:52:53 http: TLS handshake error from 127.0.0.1:41766: remote error: tls: bad certificate
2025-08-13T19:52:54.005000498+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41772: remote error: tls: bad certificate
2025-08-13T19:52:54.032761258+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41776: remote error: tls: bad certificate
2025-08-13T19:52:54.055918458+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41780: remote error: tls: bad certificate
2025-08-13T19:52:54.080633471+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41782: remote error: tls: bad certificate
2025-08-13T19:52:54.100256039+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41784: remote error: tls: bad certificate
2025-08-13T19:52:54.121094663+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41794: remote error: tls: bad certificate
2025-08-13T19:52:54.143504460+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41800: remote error: tls: bad certificate
2025-08-13T19:52:54.175703647+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41808: remote error: tls: bad certificate
2025-08-13T19:52:54.201592554+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41810: remote error: tls: bad certificate
2025-08-13T19:52:54.247903882+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41826: remote error: tls: bad certificate
2025-08-13T19:52:54.286569942+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41834: remote error: tls: bad certificate
2025-08-13T19:52:54.327975851+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41842: remote error: tls: bad certificate
2025-08-13T19:52:54.355567666+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41854: remote error: tls: bad certificate
2025-08-13T19:52:54.381564386+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41868: remote error: tls: bad certificate
2025-08-13T19:52:54.432900547+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41880: remote error: tls: bad certificate
2025-08-13T19:52:54.461137820+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41896: remote error: tls: bad certificate
2025-08-13T19:52:54.484679820+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41908: remote error: tls: bad certificate
2025-08-13T19:52:54.507568811+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41910: remote error: tls: bad certificate
2025-08-13T19:52:54.530932366+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41922: remote error: tls: bad certificate
2025-08-13T19:52:54.555939808+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41930: remote error: tls: bad certificate
2025-08-13T19:52:54.578209102+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41934: remote error: tls: bad certificate
2025-08-13T19:52:54.598142099+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41940: remote error: tls: bad certificate
2025-08-13T19:52:54.618382465+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41948: remote error: tls: bad certificate
2025-08-13T19:52:54.640995009+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41964: remote error: tls: bad certificate
2025-08-13T19:52:54.658542898+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41980: remote error: tls: bad certificate
2025-08-13T19:52:54.677479797+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41982: remote error: tls: bad certificate
2025-08-13T19:52:54.694943944+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41992: remote error: tls: bad certificate
2025-08-13T19:52:54.714329486+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:41998: remote error: tls: bad certificate
2025-08-13T19:52:54.734549962+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42008: remote error: tls: bad certificate
2025-08-13T19:52:54.753707347+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42018: remote error: tls: bad certificate
2025-08-13T19:52:54.771011529+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42032: remote error: tls: bad certificate
2025-08-13T19:52:54.793527740+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42036: remote error: tls: bad certificate
2025-08-13T19:52:54.810987377+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42044: remote error: tls: bad certificate
2025-08-13T19:52:54.830154683+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42050: remote error: tls: bad certificate
2025-08-13T19:52:54.856083541+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42066: remote error: tls: bad certificate
2025-08-13T19:52:54.870171852+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42082: remote error: tls: bad certificate
2025-08-13T19:52:54.898071926+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42098: remote error: tls: bad certificate
2025-08-13T19:52:54.914767671+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42114: remote error: tls: bad certificate
2025-08-13T19:52:54.936620233+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42118: remote error: tls: bad certificate
2025-08-13T19:52:54.953233386+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42120: remote error: tls: bad certificate
2025-08-13T19:52:54.985152344+00:00 stderr F 2025/08/13 19:52:54 http: TLS handshake error from 127.0.0.1:42132: remote error: tls: bad certificate
2025-08-13T19:52:55.002014864+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42148: remote error: tls: bad certificate
2025-08-13T19:52:55.054629911+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42156: remote error: tls: bad certificate
2025-08-13T19:52:55.071086060+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42162: remote error: tls: bad certificate
2025-08-13T19:52:55.092919321+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42170: remote error: tls: bad certificate
2025-08-13T19:52:55.118983953+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42186: remote error: tls: bad certificate
2025-08-13T19:52:55.133997631+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42192: remote error: tls: bad certificate
2025-08-13T19:52:55.151021575+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42208: remote error: tls: bad certificate
2025-08-13T19:52:55.170148129+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42212: remote error: tls: bad certificate
2025-08-13T19:52:55.242229321+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42224: remote error: tls: bad certificate
2025-08-13T19:52:55.273872862+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42238: remote error: tls: bad certificate
2025-08-13T19:52:55.291481883+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42252: remote error: tls: bad certificate
2025-08-13T19:52:55.308587490+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42260: remote error: tls: bad certificate
2025-08-13T19:52:55.323694750+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42276: remote error: tls: bad certificate
2025-08-13T19:52:55.339687145+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42282: remote error: tls: bad certificate
2025-08-13T19:52:55.358117329+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42294: remote error: tls: bad certificate
2025-08-13T19:52:55.375193855+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42306: remote error: tls: bad certificate
2025-08-13T19:52:55.394539286+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42310: remote error: tls: bad certificate
2025-08-13T19:52:55.412068405+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42316: remote error: tls: bad certificate
2025-08-13T19:52:55.427551085+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42322: remote error: tls: bad certificate
2025-08-13T19:52:55.443897061+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42332: remote error: tls: bad certificate
2025-08-13T19:52:55.461866062+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42334: remote error: tls: bad certificate
2025-08-13T19:52:55.479754801+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42348: remote error: tls: bad certificate
2025-08-13T19:52:55.495305974+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42358: remote error: tls: bad certificate
2025-08-13T19:52:55.512859304+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42370: remote error: tls: bad certificate
2025-08-13T19:52:55.528691324+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42374: remote error: tls: bad certificate
2025-08-13T19:52:55.544188645+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42382: remote error: tls: bad certificate
2025-08-13T19:52:55.558555384+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42396: remote error: tls: bad certificate
2025-08-13T19:52:55.574377544+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42408: remote error: tls: bad certificate
2025-08-13T19:52:55.589147665+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42410: remote error: tls: bad certificate
2025-08-13T19:52:55.603646977+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42418: remote error: tls: bad certificate
2025-08-13T19:52:55.620698043+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42420: remote error: tls: bad certificate
2025-08-13T19:52:55.638494469+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42432: remote error: tls: bad certificate
2025-08-13T19:52:55.653157467+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42442: remote error: tls: bad certificate
2025-08-13T19:52:55.668553795+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42446: remote error: tls: bad certificate
2025-08-13T19:52:55.685881678+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42462: remote error: tls: bad certificate
2025-08-13T19:52:55.699141055+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42466: remote error: tls: bad certificate
2025-08-13T19:52:55.716705675+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42474: remote error: tls: bad certificate
2025-08-13T19:52:55.731095095+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42478: remote error: tls: bad certificate
2025-08-13T19:52:55.748325765+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42488: remote error: tls: bad certificate
2025-08-13T19:52:55.764921978+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42492: remote error: tls: bad certificate
2025-08-13T19:52:55.780853921+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42508: remote error: tls: bad certificate
2025-08-13T19:52:55.804416932+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42522: remote error: tls: bad certificate
2025-08-13T19:52:55.845388358+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42536: remote error: tls: bad certificate
2025-08-13T19:52:55.887732713+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42552: remote error: tls: bad certificate
2025-08-13T19:52:55.924672094+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42556: remote error: tls: bad certificate
2025-08-13T19:52:55.965096165+00:00 stderr F 2025/08/13 19:52:55 http: TLS handshake error from 127.0.0.1:42562: remote error: tls: bad certificate
2025-08-13T19:52:56.006509904+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42574: remote error: tls: bad certificate
2025-08-13T19:52:56.048707594+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42576: remote error: tls: bad certificate
2025-08-13T19:52:56.085449970+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42582: remote error: tls: bad certificate
2025-08-13T19:52:56.126100917+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42590: remote error: tls: bad certificate
2025-08-13T19:52:56.164636584+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42606: remote error: tls: bad certificate
2025-08-13T19:52:56.214036610+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42608: remote error: tls: bad certificate
2025-08-13T19:52:56.248078759+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42622: remote error: tls: bad certificate
2025-08-13T19:52:56.284706301+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42624: remote error: tls: bad certificate
2025-08-13T19:52:56.334890570+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42636: remote error: tls: bad certificate
2025-08-13T19:52:56.364250135+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42646: remote error: tls: bad certificate
2025-08-13T19:52:56.407463465+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42654: remote error: tls: bad certificate
2025-08-13T19:52:56.445212330+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42670: remote error: tls: bad certificate
2025-08-13T19:52:56.485499106+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42676: remote error: tls: bad certificate
2025-08-13T19:52:56.524645670+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42692: remote error: tls: bad certificate
2025-08-13T19:52:56.565708889+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42700: remote error: tls: bad certificate
2025-08-13T19:52:56.602967379+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42702: remote error: tls: bad certificate
2025-08-13T19:52:56.642951327+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42714: remote error: tls: bad certificate
2025-08-13T19:52:56.684217562+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42720: remote error: tls: bad certificate
2025-08-13T19:52:56.724915080+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42722: remote error: tls: bad certificate
2025-08-13T19:52:56.766377500+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42730: remote error: tls: bad certificate
2025-08-13T19:52:56.806280176+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42738: remote error: tls: bad certificate
2025-08-13T19:52:56.848622651+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42742: remote error: tls: bad certificate
2025-08-13T19:52:56.887936530+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42756: remote error: tls: bad certificate
2025-08-13T19:52:56.925047886+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42760: remote error: tls: bad certificate
2025-08-13T19:52:56.966627020+00:00 stderr F 2025/08/13 19:52:56 http: TLS handshake error from 127.0.0.1:42776: remote error: tls: bad certificate
2025-08-13T19:52:57.004946620+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42782: remote error: tls: bad certificate
2025-08-13T19:52:57.046337938+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42784: remote error: tls: bad certificate
2025-08-13T19:52:57.083754583+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42796: remote error: tls: bad certificate
2025-08-13T19:52:57.123685920+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42800: remote error: tls: bad certificate
2025-08-13T19:52:57.165114619+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42810: remote error: tls: bad certificate
2025-08-13T19:52:57.217762127+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42826: remote error: tls: bad certificate
2025-08-13T19:52:57.245899898+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42834: remote error: tls: bad certificate
2025-08-13T19:52:57.286054221+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42838: remote error: tls: bad certificate
2025-08-13T19:52:57.322986532+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42842: remote error: tls: bad certificate
2025-08-13T19:52:57.366049608+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42852: remote error: tls: bad certificate
2025-08-13T19:52:57.404543844+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42864: remote error: tls: bad certificate
2025-08-13T19:52:57.443362368+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42874: remote error: tls: bad certificate
2025-08-13T19:52:57.484454008+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42886: remote error: tls: bad certificate
2025-08-13T19:52:57.523669134+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42890: remote error: tls: bad certificate
2025-08-13T19:52:57.563286882+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42904: remote error: tls: bad certificate
2025-08-13T19:52:57.617717321+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42914: remote error: tls: bad certificate
2025-08-13T19:52:57.646665185+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42930: remote error: tls: bad certificate
2025-08-13T19:52:57.692141989+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42932: remote error: tls: bad certificate
2025-08-13T19:52:57.723772439+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42940: remote error: tls: bad certificate
2025-08-13T19:52:57.764528949+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42956: remote error: tls: bad certificate
2025-08-13T19:52:57.813446482+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42962: remote error: tls: bad certificate
2025-08-13T19:52:57.840955855+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42964: remote error: tls: bad certificate
2025-08-13T19:52:57.848026926+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42976: remote error: tls: bad certificate
2025-08-13T19:52:57.864354691+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42978: remote error: tls: bad certificate
2025-08-13T19:52:57.889663811+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42986: remote error: tls: bad certificate
2025-08-13T19:52:57.889663811+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:42992: remote error: tls: bad certificate
2025-08-13T19:52:57.908096476+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:43006: remote error: tls: bad certificate
2025-08-13T19:52:57.928752624+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:43014: remote error: tls: bad certificate
2025-08-13T19:52:57.931942654+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:43018: remote error: tls: bad certificate
2025-08-13T19:52:57.972696274+00:00 stderr F 2025/08/13 19:52:57 http: TLS handshake error from 127.0.0.1:43022: remote error: tls: bad certificate
2025-08-13T19:52:58.004026516+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43036: remote error: tls: bad certificate
2025-08-13T19:52:58.077624340+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43040: remote error: tls: bad certificate
2025-08-13T19:52:58.099196944+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43048: remote error: tls: bad certificate
2025-08-13T19:52:58.144243276+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43056: remote error: tls: bad certificate
2025-08-13T19:52:58.164125772+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43062: remote error: tls: bad certificate
2025-08-13T19:52:58.206422075+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43070: remote error: tls: bad certificate
2025-08-13T19:52:58.246240849+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43078: remote error: tls: bad certificate
2025-08-13T19:52:58.288872202+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43084: remote error: tls: bad certificate
2025-08-13T19:52:58.341151290+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43100: remote error: tls: bad certificate
2025-08-13T19:52:58.365887304+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43106: remote error: tls: bad certificate
2025-08-13T19:52:58.409760853+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43118: remote error: tls: bad certificate
2025-08-13T19:52:58.448663510+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43132: remote error: tls: bad certificate
2025-08-13T19:52:58.486195918+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43140: remote error: tls: bad certificate
2025-08-13T19:52:58.526132395+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43154: remote error: tls: bad certificate
2025-08-13T19:52:58.565579728+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43158: remote error: tls: bad certificate
2025-08-13T19:52:58.612442612+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43170: remote error: tls: bad certificate
2025-08-13T19:52:58.647630263+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43186: remote error: tls: bad certificate
2025-08-13T19:52:58.688646890+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43188: remote error: tls: bad certificate
2025-08-13T19:52:58.725170550+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:43196: remote error: tls: bad certificate
2025-08-13T19:52:58.766512877+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54328: remote error: tls: bad certificate
2025-08-13T19:52:58.805249169+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54332: remote error: tls: bad certificate
2025-08-13T19:52:58.845984689+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54340: remote error: tls: bad certificate
2025-08-13T19:52:58.885730500+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54350: remote error: tls: bad certificate
2025-08-13T19:52:58.926610643+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54362: remote error: tls: bad certificate
2025-08-13T19:52:58.967522538+00:00 stderr F 2025/08/13 19:52:58 http: TLS handshake error from 127.0.0.1:54376: remote error: tls: bad certificate
2025-08-13T19:52:59.010137541+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54386: remote error: tls: bad certificate
2025-08-13T19:52:59.045977381+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54398: remote error: tls: bad certificate
2025-08-13T19:52:59.084213709+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54408: remote error: tls: bad certificate
2025-08-13T19:52:59.123999671+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54422: remote error: tls: bad certificate
2025-08-13T19:52:59.166915983+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54438: remote error: tls: bad certificate
2025-08-13T19:52:59.203270908+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54452: remote error: tls: bad certificate
2025-08-13T19:52:59.248928677+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54462: remote error: tls: bad certificate
2025-08-13T19:52:59.286589269+00:00 stderr F 2025/08/13 19:52:59 http: TLS handshake error from 127.0.0.1:54466: remote error: tls: bad certificate
2025-08-13T19:53:05.229994755+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54472: remote error: tls: bad certificate
2025-08-13T19:53:05.246737951+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54488: remote error: tls: bad certificate
2025-08-13T19:53:05.263674723+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54504: remote error: tls: bad certificate
2025-08-13T19:53:05.285403972+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54516: remote error: tls: bad certificate
2025-08-13T19:53:05.301223082+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54520: remote error: tls: bad certificate
2025-08-13T19:53:05.322422625+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54536: remote error: tls: bad certificate
2025-08-13T19:53:05.342665881+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54552: remote error: tls: bad certificate
2025-08-13T19:53:05.362065353+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54558: remote error: tls: bad certificate
2025-08-13T19:53:05.382523416+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54560: remote error: tls: bad certificate
2025-08-13T19:53:05.398259064+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54572: remote error: tls: bad certificate
2025-08-13T19:53:05.418216132+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54578: remote error: tls: bad certificate
2025-08-13T19:53:05.435298288+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54594: remote error: tls: bad certificate
2025-08-13T19:53:05.453285800+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54606: remote error: tls: bad certificate
2025-08-13T19:53:05.471257461+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54618: remote error: tls: bad certificate
2025-08-13T19:53:05.498908108+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54626: remote error: tls: bad certificate
2025-08-13T19:53:05.520762580+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54640: remote error: tls: bad certificate
2025-08-13T19:53:05.539638158+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54642: remote error: tls: bad certificate
2025-08-13T19:53:05.556648482+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54656: remote error: tls: bad certificate
2025-08-13T19:53:05.574329645+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54664: remote error: tls: bad certificate
2025-08-13T19:53:05.592428570+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54678: remote error: tls: bad certificate
2025-08-13T19:53:05.608616891+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54688: remote error: tls: bad certificate
2025-08-13T19:53:05.625933634+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54698: remote error: tls: bad certificate
2025-08-13T19:53:05.641921509+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54710: remote error: tls: bad certificate
2025-08-13T19:53:05.658406988+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54720: remote error: tls: bad certificate
2025-08-13T19:53:05.683173693+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54724: remote error: tls: bad certificate
2025-08-13T19:53:05.703444870+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54738: remote error: tls: bad certificate
2025-08-13T19:53:05.724500489+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54750: remote error: tls: bad certificate
2025-08-13T19:53:05.742389318+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54754: remote error: tls: bad certificate
2025-08-13T19:53:05.758951460+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54762: remote error: tls: bad certificate
2025-08-13T19:53:05.790780435+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54774: remote error: tls: bad certificate
2025-08-13T19:53:05.807976765+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54780: remote error: tls: bad certificate
2025-08-13T19:53:05.823508947+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54782: remote error: tls: bad certificate
2025-08-13T19:53:05.841977013+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54794: remote error: tls: bad certificate
2025-08-13T19:53:05.863257048+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54798: remote error: tls: bad certificate
2025-08-13T19:53:05.878192283+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54804: remote error: tls: bad certificate
2025-08-13T19:53:05.895533517+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54810: remote error: tls: bad certificate
2025-08-13T19:53:05.921506496+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54812: remote error: tls: bad certificate
2025-08-13T19:53:05.941344901+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54816: remote error: tls: bad certificate
2025-08-13T19:53:05.956712448+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54824: remote error: tls: bad certificate
2025-08-13T19:53:05.974279778+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54834: remote error: tls: bad certificate
2025-08-13T19:53:05.989635535+00:00 stderr F 2025/08/13 19:53:05 http: TLS handshake error from 127.0.0.1:54838: remote error: tls: bad certificate
2025-08-13T19:53:06.007107003+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54846: remote error: tls: bad certificate
2025-08-13T19:53:06.025421104+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54854: remote error: tls: bad certificate
2025-08-13T19:53:06.048625044+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54860: remote error: tls: bad certificate
2025-08-13T19:53:06.071913707+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54874: remote error: tls: bad certificate
2025-08-13T19:53:06.089215219+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54890: remote error: tls: bad certificate
2025-08-13T19:53:06.105432961+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54904: remote error: tls: bad certificate
2025-08-13T19:53:06.125697978+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54918: remote error: tls: bad certificate
2025-08-13T19:53:06.144250206+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54924: remote error: tls: bad certificate
2025-08-13T19:53:06.174491027+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54930: remote error: tls: bad certificate
2025-08-13T19:53:06.191339976+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54936: remote error: tls: bad certificate
2025-08-13T19:53:06.206277961+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54944: remote error: tls: bad certificate
2025-08-13T19:53:06.223619465+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54956: remote error: tls: bad certificate
2025-08-13T19:53:06.245045025+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54964: remote error: tls: bad certificate
2025-08-13T19:53:06.259867487+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54970: remote error: tls: bad certificate
2025-08-13T19:53:06.277542920+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54978: remote error: tls: bad certificate
2025-08-13T19:53:06.295119290+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54986: remote error: tls: bad certificate
2025-08-13T19:53:06.312107523+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54992: remote error: tls: bad certificate
2025-08-13T19:53:06.327216773+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:54998: remote error: tls: bad certificate
2025-08-13T19:53:06.343608990+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55012: remote error: tls: bad certificate
2025-08-13T19:53:06.361256012+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55020: remote error: tls: bad certificate
2025-08-13T19:53:06.381916410+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55036: remote error: tls: bad certificate
2025-08-13T19:53:06.396906137+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55044: remote error: tls: bad certificate
2025-08-13T19:53:06.413956552+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55048: remote error: tls: bad certificate
2025-08-13T19:53:06.428983080+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55052: remote error: tls: bad certificate
2025-08-13T19:53:06.444305856+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55066: remote error: tls: bad certificate
2025-08-13T19:53:06.461992179+00:00 stderr F 2025/08/13 19:53:06 http: TLS handshake error from 127.0.0.1:55074: remote error: tls: bad certificate
2025-08-13T19:53:08.124377283+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55090: remote error: tls: bad certificate
2025-08-13T19:53:08.144708872+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55098: remote error: tls: bad certificate
2025-08-13T19:53:08.169200089+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55100: remote error: tls: bad certificate
2025-08-13T19:53:08.191587256+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55108: remote error: tls: bad certificate
2025-08-13T19:53:08.215764564+00:00 stderr F 2025/08/13 19:53:08 http: TLS handshake error from 127.0.0.1:55122: remote error: tls: bad certificate
2025-08-13T19:53:15.230254308+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57320: remote error: tls: bad certificate
2025-08-13T19:53:15.254519958+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57336: remote error: tls: bad certificate
2025-08-13T19:53:15.273246881+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57338: remote error: tls: bad certificate
2025-08-13T19:53:15.290993626+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57344: remote error: tls: bad certificate
2025-08-13T19:53:15.308042922+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57360: remote error: tls: bad certificate
2025-08-13T19:53:15.323416099+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57376: remote error: tls: bad certificate
2025-08-13T19:53:15.348010609+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57378: remote error: tls: bad certificate
2025-08-13T19:53:15.364569510+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57382: remote error: tls: bad certificate
2025-08-13T19:53:15.383879610+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57388: remote error: tls: bad certificate
2025-08-13T19:53:15.403854139+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57404: remote error: tls: bad certificate
2025-08-13T19:53:15.421906672+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57406: remote error: tls: bad certificate
2025-08-13T19:53:15.441069268+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57422: remote error: tls: bad certificate
2025-08-13T19:53:15.472902974+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57438: remote error: tls: bad certificate
2025-08-13T19:53:15.488501388+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57442: remote error: tls: bad certificate
2025-08-13T19:53:15.502286220+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57454: remote error: tls: bad certificate
2025-08-13T19:53:15.519585602+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57456: remote error: tls: bad certificate
2025-08-13T19:53:15.544101080+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57458: remote error: tls: bad certificate
2025-08-13T19:53:15.561890296+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57462: remote error: tls: bad certificate
2025-08-13T19:53:15.579199579+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57464: remote error: tls: bad certificate
2025-08-13T19:53:15.599265390+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57468: remote error: tls: bad certificate
2025-08-13T19:53:15.617419847+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57482: remote error: tls: bad certificate
2025-08-13T19:53:15.633664659+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57486: remote error: tls: bad certificate
2025-08-13T19:53:15.648979215+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57502: remote error: tls: bad certificate
2025-08-13T19:53:15.668605694+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57504: remote error: tls: bad certificate
2025-08-13T19:53:15.685357471+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57516: remote error: tls: bad certificate
2025-08-13T19:53:15.702692944+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57530: remote error: tls: bad certificate
2025-08-13T19:53:15.720031608+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57532: remote error: tls: bad certificate
2025-08-13T19:53:15.738600986+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57534: remote error: tls: bad certificate
2025-08-13T19:53:15.761599861+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57536: remote error: tls: bad certificate
2025-08-13T19:53:15.776962568+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57546: remote error: tls: bad certificate
2025-08-13T19:53:15.791767109+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57558: remote error: tls: bad certificate
2025-08-13T19:53:15.809112343+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57568: remote error: tls: bad certificate
2025-08-13T19:53:15.825866270+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57574: remote error: tls: bad certificate
2025-08-13T19:53:15.841345910+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57580: remote error: tls: bad certificate
2025-08-13T19:53:15.855971407+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57584: remote error: tls: bad certificate
2025-08-13T19:53:15.871254862+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57590: remote error: tls: bad certificate
2025-08-13T19:53:15.888957645+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57596: remote error: tls: bad certificate
2025-08-13T19:53:15.907379430+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57608: remote error: tls: bad certificate
2025-08-13T19:53:15.925199656+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57620: remote error: tls: bad certificate
2025-08-13T19:53:15.942733185+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57626: remote error: tls: bad certificate
2025-08-13T19:53:15.960165881+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57630: remote error: tls: bad certificate
2025-08-13T19:53:15.976287160+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57636: remote error: tls: bad certificate
2025-08-13T19:53:15.994373955+00:00 stderr F 2025/08/13 19:53:15 http: TLS handshake error from 127.0.0.1:57644: remote error: tls: bad certificate
2025-08-13T19:53:16.010107503+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57648: remote error: tls: bad certificate
2025-08-13T19:53:16.024446411+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57654: remote error: tls: bad certificate
2025-08-13T19:53:16.040982671+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57670: remote error: tls: bad certificate
2025-08-13T19:53:16.056222665+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57686: remote error: tls: bad certificate
2025-08-13T19:53:16.073181838+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57690: remote error: tls: bad certificate
2025-08-13T19:53:16.091155159+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57696: remote error: tls: bad certificate
2025-08-13T19:53:16.107908546+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57710: remote error: tls: bad certificate
2025-08-13T19:53:16.130296923+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57718: remote error: tls: bad certificate
2025-08-13T19:53:16.147045200+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57728: remote error: tls: bad certificate
2025-08-13T19:53:16.161113440+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57736: remote error: tls: bad certificate
2025-08-13T19:53:16.179346959+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57744: remote error: tls: bad certificate
2025-08-13T19:53:16.194681446+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57750: remote error: tls: bad certificate
2025-08-13T19:53:16.214370226+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57754: remote error: tls: bad certificate
2025-08-13T19:53:16.230876996+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57766: remote error: tls: bad certificate
2025-08-13T19:53:16.247018645+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57768: remote error: tls: bad certificate
2025-08-13T19:53:16.264169344+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57772: remote error: tls: bad certificate
2025-08-13T19:53:16.280287652+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57776: remote error: tls: bad certificate
2025-08-13T19:53:16.300380054+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57792: remote error: tls: bad certificate
2025-08-13T19:53:16.315936757+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57808: remote error: tls: bad certificate
2025-08-13T19:53:16.334189337+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57816: remote error: tls: bad certificate
2025-08-13T19:53:16.353718072+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57818: remote error: tls: bad certificate
2025-08-13T19:53:16.373083804+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57824: remote error: tls: bad certificate
2025-08-13T19:53:16.399998900+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57838: remote error: tls: bad certificate
2025-08-13T19:53:16.418068144+00:00 stderr F 2025/08/13 19:53:16 http: TLS handshake error from 127.0.0.1:57844: remote error: tls: bad certificate
2025-08-13T19:53:18.646465778+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:57856: remote error: tls: bad certificate
2025-08-13T19:53:18.680082655+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:57872: remote error: tls: bad certificate
2025-08-13T19:53:18.706134956+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:57882: remote error: tls: bad certificate
2025-08-13T19:53:18.728389240+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:57884: remote error: tls: bad certificate
2025-08-13T19:53:18.749899032+00:00 stderr F 2025/08/13 19:53:18 http: TLS handshake error from 127.0.0.1:34698: remote error: tls: bad certificate
2025-08-13T19:53:22.591021295+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34710: remote error: tls: bad certificate
2025-08-13T19:53:22.605655702+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34724: remote error: tls: bad certificate
2025-08-13T19:53:22.620566116+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34736: remote error: tls: bad certificate
2025-08-13T19:53:22.645705832+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34746: remote error: tls: bad certificate 2025-08-13T19:53:22.698604287+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34750: remote error: tls: bad certificate 2025-08-13T19:53:22.723939048+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34754: remote error: tls: bad certificate 2025-08-13T19:53:22.751387410+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34760: remote error: tls: bad certificate 2025-08-13T19:53:22.776338900+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34774: remote error: tls: bad certificate 2025-08-13T19:53:22.803865243+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34788: remote error: tls: bad certificate 2025-08-13T19:53:22.825047576+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34800: remote error: tls: bad certificate 2025-08-13T19:53:22.826663162+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34796: remote error: tls: bad certificate 2025-08-13T19:53:22.844406477+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34804: remote error: tls: bad certificate 2025-08-13T19:53:22.862061109+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34820: remote error: tls: bad certificate 2025-08-13T19:53:22.880748861+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34824: remote error: tls: bad certificate 2025-08-13T19:53:22.902638684+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34832: remote error: tls: bad certificate 2025-08-13T19:53:22.921644185+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34838: remote error: tls: bad certificate 2025-08-13T19:53:22.933935435+00:00 stderr F 2025/08/13 19:53:22 http: TLS 
handshake error from 127.0.0.1:34840: remote error: tls: bad certificate 2025-08-13T19:53:22.950113506+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34850: remote error: tls: bad certificate 2025-08-13T19:53:22.965595246+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34858: remote error: tls: bad certificate 2025-08-13T19:53:22.981860829+00:00 stderr F 2025/08/13 19:53:22 http: TLS handshake error from 127.0.0.1:34870: remote error: tls: bad certificate 2025-08-13T19:53:23.001581990+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34884: remote error: tls: bad certificate 2025-08-13T19:53:23.021316992+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34896: remote error: tls: bad certificate 2025-08-13T19:53:23.039603773+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34900: remote error: tls: bad certificate 2025-08-13T19:53:23.063196324+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34910: remote error: tls: bad certificate 2025-08-13T19:53:23.080436494+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34914: remote error: tls: bad certificate 2025-08-13T19:53:23.100300379+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34922: remote error: tls: bad certificate 2025-08-13T19:53:23.122367917+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34938: remote error: tls: bad certificate 2025-08-13T19:53:23.141321797+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34944: remote error: tls: bad certificate 2025-08-13T19:53:23.159916806+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34948: remote error: tls: bad certificate 2025-08-13T19:53:23.178090393+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34954: remote error: tls: bad certificate 
2025-08-13T19:53:23.196761804+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34968: remote error: tls: bad certificate 2025-08-13T19:53:23.220670975+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34976: remote error: tls: bad certificate 2025-08-13T19:53:23.248884208+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34990: remote error: tls: bad certificate 2025-08-13T19:53:23.266045596+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:34992: remote error: tls: bad certificate 2025-08-13T19:53:23.285069508+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35000: remote error: tls: bad certificate 2025-08-13T19:53:23.307900188+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35008: remote error: tls: bad certificate 2025-08-13T19:53:23.326323412+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35020: remote error: tls: bad certificate 2025-08-13T19:53:23.346357272+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35026: remote error: tls: bad certificate 2025-08-13T19:53:23.362327797+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35038: remote error: tls: bad certificate 2025-08-13T19:53:23.379161196+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35044: remote error: tls: bad certificate 2025-08-13T19:53:23.402203582+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35046: remote error: tls: bad certificate 2025-08-13T19:53:23.417482587+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35056: remote error: tls: bad certificate 2025-08-13T19:53:23.447920933+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35058: remote error: tls: bad certificate 2025-08-13T19:53:23.473523032+00:00 stderr F 2025/08/13 19:53:23 http: TLS 
handshake error from 127.0.0.1:35074: remote error: tls: bad certificate 2025-08-13T19:53:23.500123999+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35084: remote error: tls: bad certificate 2025-08-13T19:53:23.518177983+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35098: remote error: tls: bad certificate 2025-08-13T19:53:23.536484004+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35112: remote error: tls: bad certificate 2025-08-13T19:53:23.563257856+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35122: remote error: tls: bad certificate 2025-08-13T19:53:23.585182160+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35130: remote error: tls: bad certificate 2025-08-13T19:53:23.605052625+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35134: remote error: tls: bad certificate 2025-08-13T19:53:23.631633512+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35148: remote error: tls: bad certificate 2025-08-13T19:53:23.653284638+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35156: remote error: tls: bad certificate 2025-08-13T19:53:23.674903663+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35162: remote error: tls: bad certificate 2025-08-13T19:53:23.693084581+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35168: remote error: tls: bad certificate 2025-08-13T19:53:23.716563189+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35180: remote error: tls: bad certificate 2025-08-13T19:53:23.734459658+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35184: remote error: tls: bad certificate 2025-08-13T19:53:23.753267944+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35186: remote error: tls: bad certificate 
2025-08-13T19:53:23.771244175+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35188: remote error: tls: bad certificate 2025-08-13T19:53:23.786530830+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35194: remote error: tls: bad certificate 2025-08-13T19:53:23.808616299+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35210: remote error: tls: bad certificate 2025-08-13T19:53:23.833299832+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35218: remote error: tls: bad certificate 2025-08-13T19:53:23.850043338+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35228: remote error: tls: bad certificate 2025-08-13T19:53:23.865424936+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35242: remote error: tls: bad certificate 2025-08-13T19:53:23.885454916+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35250: remote error: tls: bad certificate 2025-08-13T19:53:23.903011546+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35260: remote error: tls: bad certificate 2025-08-13T19:53:23.921135541+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35262: remote error: tls: bad certificate 2025-08-13T19:53:23.933987027+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35268: remote error: tls: bad certificate 2025-08-13T19:53:23.939688169+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35282: remote error: tls: bad certificate 2025-08-13T19:53:23.961271644+00:00 stderr F 2025/08/13 19:53:23 http: TLS handshake error from 127.0.0.1:35290: remote error: tls: bad certificate 2025-08-13T19:53:24.608612958+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35304: remote error: tls: bad certificate 2025-08-13T19:53:24.627416963+00:00 stderr F 2025/08/13 19:53:24 http: TLS 
handshake error from 127.0.0.1:35308: remote error: tls: bad certificate 2025-08-13T19:53:24.648132722+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35312: remote error: tls: bad certificate 2025-08-13T19:53:24.667632877+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35314: remote error: tls: bad certificate 2025-08-13T19:53:24.687181144+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35326: remote error: tls: bad certificate 2025-08-13T19:53:24.708126890+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35334: remote error: tls: bad certificate 2025-08-13T19:53:24.726086821+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35338: remote error: tls: bad certificate 2025-08-13T19:53:24.749867818+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35344: remote error: tls: bad certificate 2025-08-13T19:53:24.769401404+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35346: remote error: tls: bad certificate 2025-08-13T19:53:24.788411295+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35348: remote error: tls: bad certificate 2025-08-13T19:53:24.806189681+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35362: remote error: tls: bad certificate 2025-08-13T19:53:24.823518654+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35364: remote error: tls: bad certificate 2025-08-13T19:53:24.839507479+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35380: remote error: tls: bad certificate 2025-08-13T19:53:24.863265816+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35382: remote error: tls: bad certificate 2025-08-13T19:53:24.888878835+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35384: remote error: tls: bad certificate 
2025-08-13T19:53:24.903199092+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35390: remote error: tls: bad certificate 2025-08-13T19:53:24.922199463+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35392: remote error: tls: bad certificate 2025-08-13T19:53:24.938519207+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35408: remote error: tls: bad certificate 2025-08-13T19:53:24.964687082+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35412: remote error: tls: bad certificate 2025-08-13T19:53:24.984069714+00:00 stderr F 2025/08/13 19:53:24 http: TLS handshake error from 127.0.0.1:35428: remote error: tls: bad certificate 2025-08-13T19:53:25.010507486+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35442: remote error: tls: bad certificate 2025-08-13T19:53:25.027988314+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35456: remote error: tls: bad certificate 2025-08-13T19:53:25.046068598+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35460: remote error: tls: bad certificate 2025-08-13T19:53:25.066701436+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35464: remote error: tls: bad certificate 2025-08-13T19:53:25.092472169+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35466: remote error: tls: bad certificate 2025-08-13T19:53:25.110581865+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35478: remote error: tls: bad certificate 2025-08-13T19:53:25.127249079+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35484: remote error: tls: bad certificate 2025-08-13T19:53:25.144859750+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35494: remote error: tls: bad certificate 2025-08-13T19:53:25.168981347+00:00 stderr F 2025/08/13 19:53:25 http: TLS 
handshake error from 127.0.0.1:35510: remote error: tls: bad certificate 2025-08-13T19:53:25.187260547+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35522: remote error: tls: bad certificate 2025-08-13T19:53:25.203738546+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35536: remote error: tls: bad certificate 2025-08-13T19:53:25.233270696+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35544: remote error: tls: bad certificate 2025-08-13T19:53:25.250883778+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35558: remote error: tls: bad certificate 2025-08-13T19:53:25.266268146+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35560: remote error: tls: bad certificate 2025-08-13T19:53:25.281882170+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35570: remote error: tls: bad certificate 2025-08-13T19:53:25.301460557+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35580: remote error: tls: bad certificate 2025-08-13T19:53:25.319433589+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35582: remote error: tls: bad certificate 2025-08-13T19:53:25.339662705+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35590: remote error: tls: bad certificate 2025-08-13T19:53:25.362344160+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35592: remote error: tls: bad certificate 2025-08-13T19:53:25.381756173+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35608: remote error: tls: bad certificate 2025-08-13T19:53:25.404191501+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35610: remote error: tls: bad certificate 2025-08-13T19:53:25.420409283+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35612: remote error: tls: bad certificate 
2025-08-13T19:53:25.434623817+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35626: remote error: tls: bad certificate 2025-08-13T19:53:25.454376730+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35634: remote error: tls: bad certificate 2025-08-13T19:53:25.472355041+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35636: remote error: tls: bad certificate 2025-08-13T19:53:25.492913216+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35650: remote error: tls: bad certificate 2025-08-13T19:53:25.507830691+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35654: remote error: tls: bad certificate 2025-08-13T19:53:25.523375113+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35668: remote error: tls: bad certificate 2025-08-13T19:53:25.538526555+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35676: remote error: tls: bad certificate 2025-08-13T19:53:25.555080726+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35682: remote error: tls: bad certificate 2025-08-13T19:53:25.568680863+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35698: remote error: tls: bad certificate 2025-08-13T19:53:25.585471281+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35710: remote error: tls: bad certificate 2025-08-13T19:53:25.611140111+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35712: remote error: tls: bad certificate 2025-08-13T19:53:25.626273972+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35722: remote error: tls: bad certificate 2025-08-13T19:53:25.642274498+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35730: remote error: tls: bad certificate 2025-08-13T19:53:25.659359294+00:00 stderr F 2025/08/13 19:53:25 http: TLS 
handshake error from 127.0.0.1:35732: remote error: tls: bad certificate 2025-08-13T19:53:25.680177646+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35734: remote error: tls: bad certificate 2025-08-13T19:53:25.700996869+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35740: remote error: tls: bad certificate 2025-08-13T19:53:25.716110819+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35742: remote error: tls: bad certificate 2025-08-13T19:53:25.740514744+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35748: remote error: tls: bad certificate 2025-08-13T19:53:25.759281618+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35764: remote error: tls: bad certificate 2025-08-13T19:53:25.775606942+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35776: remote error: tls: bad certificate 2025-08-13T19:53:25.797337521+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35792: remote error: tls: bad certificate 2025-08-13T19:53:25.826253004+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35796: remote error: tls: bad certificate 2025-08-13T19:53:25.866070427+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35800: remote error: tls: bad certificate 2025-08-13T19:53:25.909065421+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35806: remote error: tls: bad certificate 2025-08-13T19:53:25.945585070+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35814: remote error: tls: bad certificate 2025-08-13T19:53:25.985547478+00:00 stderr F 2025/08/13 19:53:25 http: TLS handshake error from 127.0.0.1:35824: remote error: tls: bad certificate 2025-08-13T19:53:26.029852279+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35836: remote error: tls: bad certificate 
2025-08-13T19:53:26.068288143+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35840: remote error: tls: bad certificate 2025-08-13T19:53:26.105680077+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35846: remote error: tls: bad certificate 2025-08-13T19:53:26.146863929+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35856: remote error: tls: bad certificate 2025-08-13T19:53:26.190884572+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35870: remote error: tls: bad certificate 2025-08-13T19:53:26.228761950+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35880: remote error: tls: bad certificate 2025-08-13T19:53:26.266262067+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35896: remote error: tls: bad certificate 2025-08-13T19:53:26.309062875+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35908: remote error: tls: bad certificate 2025-08-13T19:53:26.351186044+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35912: remote error: tls: bad certificate 2025-08-13T19:53:26.386908341+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35924: remote error: tls: bad certificate 2025-08-13T19:53:26.425640023+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35938: remote error: tls: bad certificate 2025-08-13T19:53:26.465864668+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35950: remote error: tls: bad certificate 2025-08-13T19:53:26.511654321+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35966: remote error: tls: bad certificate 2025-08-13T19:53:26.550470566+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35974: remote error: tls: bad certificate 2025-08-13T19:53:26.590697781+00:00 stderr F 2025/08/13 19:53:26 http: TLS 
handshake error from 127.0.0.1:35984: remote error: tls: bad certificate 2025-08-13T19:53:26.626270743+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35988: remote error: tls: bad certificate 2025-08-13T19:53:26.666348433+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:35994: remote error: tls: bad certificate 2025-08-13T19:53:26.704407896+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36008: remote error: tls: bad certificate 2025-08-13T19:53:26.744889409+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36014: remote error: tls: bad certificate 2025-08-13T19:53:26.785546426+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36016: remote error: tls: bad certificate 2025-08-13T19:53:26.823026432+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36032: remote error: tls: bad certificate 2025-08-13T19:53:26.864359949+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36040: remote error: tls: bad certificate 2025-08-13T19:53:26.937361086+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36042: remote error: tls: bad certificate 2025-08-13T19:53:26.985947649+00:00 stderr F 2025/08/13 19:53:26 http: TLS handshake error from 127.0.0.1:36050: remote error: tls: bad certificate 2025-08-13T19:53:27.005917928+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36052: remote error: tls: bad certificate 2025-08-13T19:53:27.023608801+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36062: remote error: tls: bad certificate 2025-08-13T19:53:27.064141945+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36066: remote error: tls: bad certificate 2025-08-13T19:53:27.107856529+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36078: remote error: tls: bad certificate 
2025-08-13T19:53:27.147151868+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36090: remote error: tls: bad certificate 2025-08-13T19:53:27.186036994+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36106: remote error: tls: bad certificate 2025-08-13T19:53:27.225337063+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36120: remote error: tls: bad certificate 2025-08-13T19:53:27.267027029+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36122: remote error: tls: bad certificate 2025-08-13T19:53:27.305330799+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36126: remote error: tls: bad certificate 2025-08-13T19:53:27.344620998+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36134: remote error: tls: bad certificate 2025-08-13T19:53:27.387692233+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36144: remote error: tls: bad certificate 2025-08-13T19:53:27.428065963+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36148: remote error: tls: bad certificate 2025-08-13T19:53:27.465972601+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36154: remote error: tls: bad certificate 2025-08-13T19:53:27.505528007+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36158: remote error: tls: bad certificate 2025-08-13T19:53:27.545444804+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36172: remote error: tls: bad certificate 2025-08-13T19:53:27.583631270+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36182: remote error: tls: bad certificate 2025-08-13T19:53:27.626555522+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36190: remote error: tls: bad certificate 2025-08-13T19:53:27.664970126+00:00 stderr F 2025/08/13 19:53:27 http: TLS 
handshake error from 127.0.0.1:36202: remote error: tls: bad certificate 2025-08-13T19:53:27.706243150+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36208: remote error: tls: bad certificate 2025-08-13T19:53:27.750895711+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36212: remote error: tls: bad certificate 2025-08-13T19:53:27.785310780+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36222: remote error: tls: bad certificate 2025-08-13T19:53:27.824358372+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36234: remote error: tls: bad certificate 2025-08-13T19:53:27.863650190+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36242: remote error: tls: bad certificate 2025-08-13T19:53:27.907160328+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36248: remote error: tls: bad certificate 2025-08-13T19:53:27.944426589+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36260: remote error: tls: bad certificate 2025-08-13T19:53:27.988175244+00:00 stderr F 2025/08/13 19:53:27 http: TLS handshake error from 127.0.0.1:36266: remote error: tls: bad certificate 2025-08-13T19:53:28.025438775+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36276: remote error: tls: bad certificate 2025-08-13T19:53:28.066518224+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36292: remote error: tls: bad certificate 2025-08-13T19:53:28.104514646+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36302: remote error: tls: bad certificate 2025-08-13T19:53:28.147359285+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36304: remote error: tls: bad certificate 2025-08-13T19:53:28.183104632+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36316: remote error: tls: bad certificate 
2025-08-13T19:53:28.228401082+00:00 stderr F 2025/08/13 19:53:28 http: TLS handshake error from 127.0.0.1:36324: remote error: tls: bad certificate
[... repeated near-identical "http: TLS handshake error from 127.0.0.1:<port>: remote error: tls: bad certificate" entries from 19:53:28 through 19:53:42 elided ...]
2025-08-13T19:53:42.439923510+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51454: remote error: tls: bad certificate
handshake error from 127.0.0.1:51464: remote error: tls: bad certificate 2025-08-13T19:53:42.510876456+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51466: remote error: tls: bad certificate 2025-08-13T19:53:42.546915215+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51474: remote error: tls: bad certificate 2025-08-13T19:53:42.564044934+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51482: remote error: tls: bad certificate 2025-08-13T19:53:42.579032892+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51486: remote error: tls: bad certificate 2025-08-13T19:53:42.597240472+00:00 stderr F 2025/08/13 19:53:42 http: TLS handshake error from 127.0.0.1:51492: remote error: tls: bad certificate 2025-08-13T19:53:45.226381044+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51494: remote error: tls: bad certificate 2025-08-13T19:53:45.241320290+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51500: remote error: tls: bad certificate 2025-08-13T19:53:45.263977617+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51506: remote error: tls: bad certificate 2025-08-13T19:53:45.279577513+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51522: remote error: tls: bad certificate 2025-08-13T19:53:45.299569743+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51524: remote error: tls: bad certificate 2025-08-13T19:53:45.315553690+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51536: remote error: tls: bad certificate 2025-08-13T19:53:45.341116639+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51546: remote error: tls: bad certificate 2025-08-13T19:53:45.361326947+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51562: remote error: tls: bad certificate 
2025-08-13T19:53:45.386511596+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51574: remote error: tls: bad certificate 2025-08-13T19:53:45.405372184+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51590: remote error: tls: bad certificate 2025-08-13T19:53:45.426649522+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51602: remote error: tls: bad certificate 2025-08-13T19:53:45.442340130+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51604: remote error: tls: bad certificate 2025-08-13T19:53:45.460097047+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51612: remote error: tls: bad certificate 2025-08-13T19:53:45.479731118+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51624: remote error: tls: bad certificate 2025-08-13T19:53:45.533474902+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51640: remote error: tls: bad certificate 2025-08-13T19:53:45.551358853+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51654: remote error: tls: bad certificate 2025-08-13T19:53:45.565606420+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51670: remote error: tls: bad certificate 2025-08-13T19:53:45.580892986+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51682: remote error: tls: bad certificate 2025-08-13T19:53:45.598843629+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51684: remote error: tls: bad certificate 2025-08-13T19:53:45.614548227+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51700: remote error: tls: bad certificate 2025-08-13T19:53:45.631554293+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51712: remote error: tls: bad certificate 2025-08-13T19:53:45.646499990+00:00 stderr F 2025/08/13 19:53:45 http: TLS 
handshake error from 127.0.0.1:51724: remote error: tls: bad certificate 2025-08-13T19:53:45.664373710+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51740: remote error: tls: bad certificate 2025-08-13T19:53:45.679107241+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51748: remote error: tls: bad certificate 2025-08-13T19:53:45.701559222+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51764: remote error: tls: bad certificate 2025-08-13T19:53:45.720105771+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51772: remote error: tls: bad certificate 2025-08-13T19:53:45.737989852+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51780: remote error: tls: bad certificate 2025-08-13T19:53:45.756665025+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51796: remote error: tls: bad certificate 2025-08-13T19:53:45.775487953+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51806: remote error: tls: bad certificate 2025-08-13T19:53:45.795995138+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51808: remote error: tls: bad certificate 2025-08-13T19:53:45.815296229+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51816: remote error: tls: bad certificate 2025-08-13T19:53:45.837343949+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51818: remote error: tls: bad certificate 2025-08-13T19:53:45.854453197+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51822: remote error: tls: bad certificate 2025-08-13T19:53:45.872035029+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51824: remote error: tls: bad certificate 2025-08-13T19:53:45.889322273+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51826: remote error: tls: bad certificate 
2025-08-13T19:53:45.910040195+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51828: remote error: tls: bad certificate 2025-08-13T19:53:45.935540523+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51842: remote error: tls: bad certificate 2025-08-13T19:53:45.954573896+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51856: remote error: tls: bad certificate 2025-08-13T19:53:45.972642602+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51858: remote error: tls: bad certificate 2025-08-13T19:53:45.992005025+00:00 stderr F 2025/08/13 19:53:45 http: TLS handshake error from 127.0.0.1:51860: remote error: tls: bad certificate 2025-08-13T19:53:46.008844656+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51868: remote error: tls: bad certificate 2025-08-13T19:53:46.024552044+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51870: remote error: tls: bad certificate 2025-08-13T19:53:46.040129379+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51882: remote error: tls: bad certificate 2025-08-13T19:53:46.054839769+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51894: remote error: tls: bad certificate 2025-08-13T19:53:46.069429396+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51898: remote error: tls: bad certificate 2025-08-13T19:53:46.084429934+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51914: remote error: tls: bad certificate 2025-08-13T19:53:46.106535985+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51928: remote error: tls: bad certificate 2025-08-13T19:53:46.134769421+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51930: remote error: tls: bad certificate 2025-08-13T19:53:46.159466717+00:00 stderr F 2025/08/13 19:53:46 http: TLS 
handshake error from 127.0.0.1:51934: remote error: tls: bad certificate 2025-08-13T19:53:46.178661315+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51948: remote error: tls: bad certificate 2025-08-13T19:53:46.195898767+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51956: remote error: tls: bad certificate 2025-08-13T19:53:46.219998745+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51958: remote error: tls: bad certificate 2025-08-13T19:53:46.241092777+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51972: remote error: tls: bad certificate 2025-08-13T19:53:46.260916013+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51976: remote error: tls: bad certificate 2025-08-13T19:53:46.278870666+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51980: remote error: tls: bad certificate 2025-08-13T19:53:46.299576877+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51992: remote error: tls: bad certificate 2025-08-13T19:53:46.316179501+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:51998: remote error: tls: bad certificate 2025-08-13T19:53:46.333503016+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52006: remote error: tls: bad certificate 2025-08-13T19:53:46.349693378+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52022: remote error: tls: bad certificate 2025-08-13T19:53:46.368871606+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52034: remote error: tls: bad certificate 2025-08-13T19:53:46.389220157+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52038: remote error: tls: bad certificate 2025-08-13T19:53:46.409970849+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52044: remote error: tls: bad certificate 
2025-08-13T19:53:46.429712483+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52056: remote error: tls: bad certificate 2025-08-13T19:53:46.448750617+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52058: remote error: tls: bad certificate 2025-08-13T19:53:46.480049740+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52068: remote error: tls: bad certificate 2025-08-13T19:53:46.501206315+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52076: remote error: tls: bad certificate 2025-08-13T19:53:46.522698028+00:00 stderr F 2025/08/13 19:53:46 http: TLS handshake error from 127.0.0.1:52088: remote error: tls: bad certificate 2025-08-13T19:53:49.617164196+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44584: remote error: tls: bad certificate 2025-08-13T19:53:49.636610121+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44600: remote error: tls: bad certificate 2025-08-13T19:53:49.659212496+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44614: remote error: tls: bad certificate 2025-08-13T19:53:49.678895508+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44626: remote error: tls: bad certificate 2025-08-13T19:53:49.696625414+00:00 stderr F 2025/08/13 19:53:49 http: TLS handshake error from 127.0.0.1:44630: remote error: tls: bad certificate 2025-08-13T19:53:52.227095057+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44646: remote error: tls: bad certificate 2025-08-13T19:53:52.268926331+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44660: remote error: tls: bad certificate 2025-08-13T19:53:52.294504561+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44676: remote error: tls: bad certificate 2025-08-13T19:53:52.314733569+00:00 stderr F 2025/08/13 19:53:52 http: TLS 
handshake error from 127.0.0.1:44684: remote error: tls: bad certificate 2025-08-13T19:53:52.333155155+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44686: remote error: tls: bad certificate 2025-08-13T19:53:52.353277979+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44702: remote error: tls: bad certificate 2025-08-13T19:53:52.380062434+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44710: remote error: tls: bad certificate 2025-08-13T19:53:52.414549829+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44718: remote error: tls: bad certificate 2025-08-13T19:53:52.440060697+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44720: remote error: tls: bad certificate 2025-08-13T19:53:52.457727922+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44724: remote error: tls: bad certificate 2025-08-13T19:53:52.472683809+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44734: remote error: tls: bad certificate 2025-08-13T19:53:52.496383826+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44750: remote error: tls: bad certificate 2025-08-13T19:53:52.517896320+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44758: remote error: tls: bad certificate 2025-08-13T19:53:52.541147764+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44768: remote error: tls: bad certificate 2025-08-13T19:53:52.557736047+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44778: remote error: tls: bad certificate 2025-08-13T19:53:52.574990300+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44792: remote error: tls: bad certificate 2025-08-13T19:53:52.603636518+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44806: remote error: tls: bad certificate 
2025-08-13T19:53:52.619749288+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44812: remote error: tls: bad certificate 2025-08-13T19:53:52.635091136+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44828: remote error: tls: bad certificate 2025-08-13T19:53:52.660011058+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44830: remote error: tls: bad certificate 2025-08-13T19:53:52.680188254+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44836: remote error: tls: bad certificate 2025-08-13T19:53:52.696991764+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44838: remote error: tls: bad certificate 2025-08-13T19:53:52.711068896+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44854: remote error: tls: bad certificate 2025-08-13T19:53:52.729148802+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44866: remote error: tls: bad certificate 2025-08-13T19:53:52.745949452+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44868: remote error: tls: bad certificate 2025-08-13T19:53:52.762474943+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44872: remote error: tls: bad certificate 2025-08-13T19:53:52.778204163+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44888: remote error: tls: bad certificate 2025-08-13T19:53:52.796036292+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44898: remote error: tls: bad certificate 2025-08-13T19:53:52.818504463+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44902: remote error: tls: bad certificate 2025-08-13T19:53:52.837562267+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44916: remote error: tls: bad certificate 2025-08-13T19:53:52.856105647+00:00 stderr F 2025/08/13 19:53:52 http: TLS 
handshake error from 127.0.0.1:44930: remote error: tls: bad certificate 2025-08-13T19:53:52.875528422+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44940: remote error: tls: bad certificate 2025-08-13T19:53:52.894423241+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44942: remote error: tls: bad certificate 2025-08-13T19:53:52.916570783+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44950: remote error: tls: bad certificate 2025-08-13T19:53:52.936099211+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44956: remote error: tls: bad certificate 2025-08-13T19:53:52.955720791+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44972: remote error: tls: bad certificate 2025-08-13T19:53:52.972533801+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44984: remote error: tls: bad certificate 2025-08-13T19:53:52.988563959+00:00 stderr F 2025/08/13 19:53:52 http: TLS handshake error from 127.0.0.1:44994: remote error: tls: bad certificate 2025-08-13T19:53:53.005485722+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45008: remote error: tls: bad certificate 2025-08-13T19:53:53.022024175+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45012: remote error: tls: bad certificate 2025-08-13T19:53:53.038862085+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45016: remote error: tls: bad certificate 2025-08-13T19:53:53.056149359+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45032: remote error: tls: bad certificate 2025-08-13T19:53:53.076240013+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45038: remote error: tls: bad certificate 2025-08-13T19:53:53.093720342+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45054: remote error: tls: bad certificate 
2025-08-13T19:53:53.110699807+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45056: remote error: tls: bad certificate 2025-08-13T19:53:53.129115762+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45068: remote error: tls: bad certificate 2025-08-13T19:53:53.145554232+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45076: remote error: tls: bad certificate 2025-08-13T19:53:53.161638391+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45088: remote error: tls: bad certificate 2025-08-13T19:53:53.178665857+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45098: remote error: tls: bad certificate 2025-08-13T19:53:53.194976583+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45102: remote error: tls: bad certificate 2025-08-13T19:53:53.217332771+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45116: remote error: tls: bad certificate 2025-08-13T19:53:53.232177375+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45126: remote error: tls: bad certificate 2025-08-13T19:53:53.247967496+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45140: remote error: tls: bad certificate 2025-08-13T19:53:53.273479805+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45152: remote error: tls: bad certificate 2025-08-13T19:53:53.291633863+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45166: remote error: tls: bad certificate 2025-08-13T19:53:53.305135728+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45178: remote error: tls: bad certificate 2025-08-13T19:53:53.322149134+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45192: remote error: tls: bad certificate 2025-08-13T19:53:53.338193382+00:00 stderr F 2025/08/13 19:53:53 http: TLS 
handshake error from 127.0.0.1:45194: remote error: tls: bad certificate 2025-08-13T19:53:53.357901645+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45202: remote error: tls: bad certificate 2025-08-13T19:53:53.380898872+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45214: remote error: tls: bad certificate 2025-08-13T19:53:53.399923025+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45228: remote error: tls: bad certificate 2025-08-13T19:53:53.418956578+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45240: remote error: tls: bad certificate 2025-08-13T19:53:53.438110095+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45248: remote error: tls: bad certificate 2025-08-13T19:53:53.456393227+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45252: remote error: tls: bad certificate 2025-08-13T19:53:53.473543627+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45254: remote error: tls: bad certificate 2025-08-13T19:53:53.493036634+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45270: remote error: tls: bad certificate 2025-08-13T19:53:53.512355815+00:00 stderr F 2025/08/13 19:53:53 http: TLS handshake error from 127.0.0.1:45276: remote error: tls: bad certificate 2025-08-13T19:53:55.236033843+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45290: remote error: tls: bad certificate 2025-08-13T19:53:55.252968866+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45298: remote error: tls: bad certificate 2025-08-13T19:53:55.270724493+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45310: remote error: tls: bad certificate 2025-08-13T19:53:55.287538742+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45326: remote error: tls: bad certificate 
2025-08-13T19:53:55.302703325+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45338: remote error: tls: bad certificate 2025-08-13T19:53:55.321409719+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45352: remote error: tls: bad certificate 2025-08-13T19:53:55.338625911+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45356: remote error: tls: bad certificate 2025-08-13T19:53:55.358347094+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45358: remote error: tls: bad certificate 2025-08-13T19:53:55.373379503+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45370: remote error: tls: bad certificate 2025-08-13T19:53:55.390440810+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45382: remote error: tls: bad certificate 2025-08-13T19:53:55.436477805+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45392: remote error: tls: bad certificate 2025-08-13T19:53:55.465944856+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45396: remote error: tls: bad certificate 2025-08-13T19:53:55.491653850+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45400: remote error: tls: bad certificate 2025-08-13T19:53:55.511147616+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45408: remote error: tls: bad certificate 2025-08-13T19:53:55.526959548+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45410: remote error: tls: bad certificate 2025-08-13T19:53:55.544041555+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45416: remote error: tls: bad certificate 2025-08-13T19:53:55.560329050+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45430: remote error: tls: bad certificate 2025-08-13T19:53:55.575998688+00:00 stderr F 2025/08/13 19:53:55 http: TLS 
handshake error from 127.0.0.1:45434: remote error: tls: bad certificate 2025-08-13T19:53:55.590063159+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45442: remote error: tls: bad certificate 2025-08-13T19:53:55.602979578+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45444: remote error: tls: bad certificate 2025-08-13T19:53:55.618206203+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45446: remote error: tls: bad certificate 2025-08-13T19:53:55.632362677+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45458: remote error: tls: bad certificate 2025-08-13T19:53:55.652016208+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45462: remote error: tls: bad certificate 2025-08-13T19:53:55.670677731+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45476: remote error: tls: bad certificate 2025-08-13T19:53:55.692228507+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45478: remote error: tls: bad certificate 2025-08-13T19:53:55.707612656+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45492: remote error: tls: bad certificate 2025-08-13T19:53:55.725362833+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45500: remote error: tls: bad certificate 2025-08-13T19:53:55.743701086+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45506: remote error: tls: bad certificate 2025-08-13T19:53:55.759030924+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45520: remote error: tls: bad certificate 2025-08-13T19:53:55.776724589+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45528: remote error: tls: bad certificate 2025-08-13T19:53:55.795650500+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45534: remote error: tls: bad certificate 
2025-08-13T19:53:55.815980670+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45546: remote error: tls: bad certificate 2025-08-13T19:53:55.834473528+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45556: remote error: tls: bad certificate 2025-08-13T19:53:55.849135396+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45564: remote error: tls: bad certificate 2025-08-13T19:53:55.864593148+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45580: remote error: tls: bad certificate 2025-08-13T19:53:55.890759775+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45590: remote error: tls: bad certificate 2025-08-13T19:53:55.909515351+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45596: remote error: tls: bad certificate 2025-08-13T19:53:55.926053343+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45608: remote error: tls: bad certificate 2025-08-13T19:53:55.941849684+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45612: remote error: tls: bad certificate 2025-08-13T19:53:55.961326830+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45616: remote error: tls: bad certificate 2025-08-13T19:53:55.979036006+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45620: remote error: tls: bad certificate 2025-08-13T19:53:55.995602028+00:00 stderr F 2025/08/13 19:53:55 http: TLS handshake error from 127.0.0.1:45632: remote error: tls: bad certificate 2025-08-13T19:53:56.012747358+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45646: remote error: tls: bad certificate 2025-08-13T19:53:56.030061782+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45648: remote error: tls: bad certificate 2025-08-13T19:53:56.046427580+00:00 stderr F 2025/08/13 19:53:56 http: TLS 
handshake error from 127.0.0.1:45656: remote error: tls: bad certificate 2025-08-13T19:53:56.069072727+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45672: remote error: tls: bad certificate 2025-08-13T19:53:56.085597829+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45678: remote error: tls: bad certificate 2025-08-13T19:53:56.101531444+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45690: remote error: tls: bad certificate 2025-08-13T19:53:56.118885179+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45700: remote error: tls: bad certificate 2025-08-13T19:53:56.133745703+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45702: remote error: tls: bad certificate 2025-08-13T19:53:56.158297445+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45712: remote error: tls: bad certificate 2025-08-13T19:53:56.179437368+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45724: remote error: tls: bad certificate 2025-08-13T19:53:56.194996012+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45726: remote error: tls: bad certificate 2025-08-13T19:53:56.215381685+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45740: remote error: tls: bad certificate 2025-08-13T19:53:56.232753340+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45744: remote error: tls: bad certificate 2025-08-13T19:53:56.247559913+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45754: remote error: tls: bad certificate 2025-08-13T19:53:56.261352467+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45770: remote error: tls: bad certificate 2025-08-13T19:53:56.280958337+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45784: remote error: tls: bad certificate 
2025-08-13T19:53:56.298734144+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45786: remote error: tls: bad certificate 2025-08-13T19:53:56.314708520+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45802: remote error: tls: bad certificate 2025-08-13T19:53:56.330703157+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45804: remote error: tls: bad certificate 2025-08-13T19:53:56.347122716+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45814: remote error: tls: bad certificate 2025-08-13T19:53:56.372862041+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45828: remote error: tls: bad certificate 2025-08-13T19:53:56.393062768+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45838: remote error: tls: bad certificate 2025-08-13T19:53:56.414088778+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45848: remote error: tls: bad certificate 2025-08-13T19:53:56.431538687+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45864: remote error: tls: bad certificate 2025-08-13T19:53:56.448332786+00:00 stderr F 2025/08/13 19:53:56 http: TLS handshake error from 127.0.0.1:45872: remote error: tls: bad certificate 2025-08-13T19:53:59.994288411+00:00 stderr F 2025/08/13 19:53:59 http: TLS handshake error from 127.0.0.1:55278: remote error: tls: bad certificate 2025-08-13T19:54:00.014664192+00:00 stderr F 2025/08/13 19:54:00 http: TLS handshake error from 127.0.0.1:55280: remote error: tls: bad certificate 2025-08-13T19:54:00.034988413+00:00 stderr F 2025/08/13 19:54:00 http: TLS handshake error from 127.0.0.1:55296: remote error: tls: bad certificate 2025-08-13T19:54:00.055701424+00:00 stderr F 2025/08/13 19:54:00 http: TLS handshake error from 127.0.0.1:55310: remote error: tls: bad certificate 2025-08-13T19:54:00.075603002+00:00 stderr F 2025/08/13 19:54:00 http: TLS 
handshake error from 127.0.0.1:55314: remote error: tls: bad certificate 2025-08-13T19:54:03.801344573+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55322: remote error: tls: bad certificate 2025-08-13T19:54:03.819261265+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55328: remote error: tls: bad certificate 2025-08-13T19:54:03.840228804+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55336: remote error: tls: bad certificate 2025-08-13T19:54:03.858451464+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55348: remote error: tls: bad certificate 2025-08-13T19:54:03.877581670+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55360: remote error: tls: bad certificate 2025-08-13T19:54:03.896873111+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55368: remote error: tls: bad certificate 2025-08-13T19:54:03.915740430+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55372: remote error: tls: bad certificate 2025-08-13T19:54:03.934873826+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55376: remote error: tls: bad certificate 2025-08-13T19:54:03.951560583+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55378: remote error: tls: bad certificate 2025-08-13T19:54:03.967153948+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55388: remote error: tls: bad certificate 2025-08-13T19:54:03.986569662+00:00 stderr F 2025/08/13 19:54:03 http: TLS handshake error from 127.0.0.1:55396: remote error: tls: bad certificate 2025-08-13T19:54:04.011360750+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55404: remote error: tls: bad certificate 2025-08-13T19:54:04.028307304+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55410: remote error: tls: bad certificate 
2025-08-13T19:54:04.045060242+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55422: remote error: tls: bad certificate 2025-08-13T19:54:04.061246445+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55436: remote error: tls: bad certificate 2025-08-13T19:54:04.076940903+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55440: remote error: tls: bad certificate 2025-08-13T19:54:04.089880132+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55446: remote error: tls: bad certificate 2025-08-13T19:54:04.107173836+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55448: remote error: tls: bad certificate 2025-08-13T19:54:04.121222957+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55460: remote error: tls: bad certificate 2025-08-13T19:54:04.139381656+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55464: remote error: tls: bad certificate 2025-08-13T19:54:04.156322149+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55478: remote error: tls: bad certificate 2025-08-13T19:54:04.175953050+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55486: remote error: tls: bad certificate 2025-08-13T19:54:04.193653455+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55492: remote error: tls: bad certificate 2025-08-13T19:54:04.216049095+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55500: remote error: tls: bad certificate 2025-08-13T19:54:04.232299699+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55510: remote error: tls: bad certificate 2025-08-13T19:54:04.249470209+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55514: remote error: tls: bad certificate 2025-08-13T19:54:04.266014841+00:00 stderr F 2025/08/13 19:54:04 http: TLS 
handshake error from 127.0.0.1:55516: remote error: tls: bad certificate 2025-08-13T19:54:04.283309595+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55522: remote error: tls: bad certificate 2025-08-13T19:54:04.298301583+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55534: remote error: tls: bad certificate 2025-08-13T19:54:04.314131095+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55544: remote error: tls: bad certificate 2025-08-13T19:54:04.332145940+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55548: remote error: tls: bad certificate 2025-08-13T19:54:04.349726382+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55556: remote error: tls: bad certificate 2025-08-13T19:54:04.366151321+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55572: remote error: tls: bad certificate 2025-08-13T19:54:04.384687490+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55574: remote error: tls: bad certificate 2025-08-13T19:54:04.399412240+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55584: remote error: tls: bad certificate 2025-08-13T19:54:04.422123929+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55586: remote error: tls: bad certificate 2025-08-13T19:54:04.440418811+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55602: remote error: tls: bad certificate 2025-08-13T19:54:04.458900569+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55610: remote error: tls: bad certificate 2025-08-13T19:54:04.473187077+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55624: remote error: tls: bad certificate 2025-08-13T19:54:04.486955510+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55626: remote error: tls: bad certificate 
2025-08-13T19:54:04.508549017+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55636: remote error: tls: bad certificate 2025-08-13T19:54:04.529548556+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55638: remote error: tls: bad certificate 2025-08-13T19:54:04.546664235+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55642: remote error: tls: bad certificate 2025-08-13T19:54:04.562135227+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55654: remote error: tls: bad certificate 2025-08-13T19:54:04.582879889+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55666: remote error: tls: bad certificate 2025-08-13T19:54:04.609154709+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55674: remote error: tls: bad certificate 2025-08-13T19:54:04.628435650+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55688: remote error: tls: bad certificate 2025-08-13T19:54:04.645911439+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55698: remote error: tls: bad certificate 2025-08-13T19:54:04.669073030+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55700: remote error: tls: bad certificate 2025-08-13T19:54:04.687314331+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55704: remote error: tls: bad certificate 2025-08-13T19:54:04.701906648+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55718: remote error: tls: bad certificate 2025-08-13T19:54:04.724113212+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55732: remote error: tls: bad certificate 2025-08-13T19:54:04.744842514+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55736: remote error: tls: bad certificate 2025-08-13T19:54:04.761768077+00:00 stderr F 2025/08/13 19:54:04 http: TLS 
handshake error from 127.0.0.1:55748: remote error: tls: bad certificate 2025-08-13T19:54:04.791839526+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55758: remote error: tls: bad certificate 2025-08-13T19:54:04.809122739+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55766: remote error: tls: bad certificate 2025-08-13T19:54:04.824963751+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55774: remote error: tls: bad certificate 2025-08-13T19:54:04.842395249+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55790: remote error: tls: bad certificate 2025-08-13T19:54:04.862607836+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55792: remote error: tls: bad certificate 2025-08-13T19:54:04.883321408+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55800: remote error: tls: bad certificate 2025-08-13T19:54:04.910074012+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55810: remote error: tls: bad certificate 2025-08-13T19:54:04.934903461+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55812: remote error: tls: bad certificate 2025-08-13T19:54:04.956612600+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55828: remote error: tls: bad certificate 2025-08-13T19:54:04.984290721+00:00 stderr F 2025/08/13 19:54:04 http: TLS handshake error from 127.0.0.1:55840: remote error: tls: bad certificate 2025-08-13T19:54:05.004661692+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55852: remote error: tls: bad certificate 2025-08-13T19:54:05.029091190+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55854: remote error: tls: bad certificate 2025-08-13T19:54:05.068919787+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55866: remote error: tls: bad certificate 
2025-08-13T19:54:05.231187881+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55868: remote error: tls: bad certificate 2025-08-13T19:54:05.253435246+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55872: remote error: tls: bad certificate 2025-08-13T19:54:05.283089523+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55874: remote error: tls: bad certificate 2025-08-13T19:54:05.297448933+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55884: remote error: tls: bad certificate 2025-08-13T19:54:05.313188692+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55894: remote error: tls: bad certificate 2025-08-13T19:54:05.339418061+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55898: remote error: tls: bad certificate 2025-08-13T19:54:05.360628287+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55910: remote error: tls: bad certificate 2025-08-13T19:54:05.377985413+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55918: remote error: tls: bad certificate 2025-08-13T19:54:05.395087051+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55928: remote error: tls: bad certificate 2025-08-13T19:54:05.411553941+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55944: remote error: tls: bad certificate 2025-08-13T19:54:05.429766171+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55958: remote error: tls: bad certificate 2025-08-13T19:54:05.449028661+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55966: remote error: tls: bad certificate 2025-08-13T19:54:05.467946011+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:55980: remote error: tls: bad certificate 2025-08-13T19:54:05.485629346+00:00 stderr F 2025/08/13 19:54:05 http: TLS 
handshake error from 127.0.0.1:55994: remote error: tls: bad certificate 2025-08-13T19:54:05.501872070+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56010: remote error: tls: bad certificate 2025-08-13T19:54:05.516638442+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56014: remote error: tls: bad certificate 2025-08-13T19:54:05.531454335+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56016: remote error: tls: bad certificate 2025-08-13T19:54:05.545691651+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56028: remote error: tls: bad certificate 2025-08-13T19:54:05.562223723+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56042: remote error: tls: bad certificate 2025-08-13T19:54:05.578884149+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56058: remote error: tls: bad certificate 2025-08-13T19:54:05.590260494+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56066: remote error: tls: bad certificate 2025-08-13T19:54:05.605460888+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56080: remote error: tls: bad certificate 2025-08-13T19:54:05.620248910+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56086: remote error: tls: bad certificate 2025-08-13T19:54:05.638069889+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56102: remote error: tls: bad certificate 2025-08-13T19:54:05.653660004+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56106: remote error: tls: bad certificate 2025-08-13T19:54:05.673205262+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56116: remote error: tls: bad certificate 2025-08-13T19:54:05.726669629+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56122: remote error: tls: bad certificate 
2025-08-13T19:54:05.768411221+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56124: remote error: tls: bad certificate 2025-08-13T19:54:05.774735111+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56126: remote error: tls: bad certificate 2025-08-13T19:54:05.795135264+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56128: remote error: tls: bad certificate 2025-08-13T19:54:05.812221972+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56134: remote error: tls: bad certificate 2025-08-13T19:54:05.830135333+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56142: remote error: tls: bad certificate 2025-08-13T19:54:05.845655206+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56158: remote error: tls: bad certificate 2025-08-13T19:54:05.861531970+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56172: remote error: tls: bad certificate 2025-08-13T19:54:05.877125745+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56184: remote error: tls: bad certificate 2025-08-13T19:54:05.910217470+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56194: remote error: tls: bad certificate 2025-08-13T19:54:05.948091351+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56208: remote error: tls: bad certificate 2025-08-13T19:54:05.988656779+00:00 stderr F 2025/08/13 19:54:05 http: TLS handshake error from 127.0.0.1:56210: remote error: tls: bad certificate 2025-08-13T19:54:06.029546066+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56214: remote error: tls: bad certificate 2025-08-13T19:54:06.076607450+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56226: remote error: tls: bad certificate 2025-08-13T19:54:06.107183223+00:00 stderr F 2025/08/13 19:54:06 http: TLS 
handshake error from 127.0.0.1:56238: remote error: tls: bad certificate 2025-08-13T19:54:06.146499386+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56252: remote error: tls: bad certificate 2025-08-13T19:54:06.194484836+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56266: remote error: tls: bad certificate 2025-08-13T19:54:06.228528168+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56282: remote error: tls: bad certificate 2025-08-13T19:54:06.271680090+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56292: remote error: tls: bad certificate 2025-08-13T19:54:06.309391997+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56298: remote error: tls: bad certificate 2025-08-13T19:54:06.350234173+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56310: remote error: tls: bad certificate 2025-08-13T19:54:06.386162899+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56314: remote error: tls: bad certificate 2025-08-13T19:54:06.428316272+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56326: remote error: tls: bad certificate 2025-08-13T19:54:06.476694274+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56334: remote error: tls: bad certificate 2025-08-13T19:54:06.509834350+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56346: remote error: tls: bad certificate 2025-08-13T19:54:06.553684632+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56358: remote error: tls: bad certificate 2025-08-13T19:54:06.591259765+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56374: remote error: tls: bad certificate 2025-08-13T19:54:06.630498745+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56380: remote error: tls: bad certificate 
2025-08-13T19:54:06.666598066+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56388: remote error: tls: bad certificate 2025-08-13T19:54:06.705427025+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56398: remote error: tls: bad certificate 2025-08-13T19:54:06.748031691+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56406: remote error: tls: bad certificate 2025-08-13T19:54:06.789061583+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56416: remote error: tls: bad certificate 2025-08-13T19:54:06.828297713+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56426: remote error: tls: bad certificate 2025-08-13T19:54:06.869535760+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56440: remote error: tls: bad certificate 2025-08-13T19:54:06.906655370+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56452: remote error: tls: bad certificate 2025-08-13T19:54:06.947239849+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56464: remote error: tls: bad certificate 2025-08-13T19:54:06.990194796+00:00 stderr F 2025/08/13 19:54:06 http: TLS handshake error from 127.0.0.1:56474: remote error: tls: bad certificate 2025-08-13T19:54:07.028611323+00:00 stderr F 2025/08/13 19:54:07 http: TLS handshake error from 127.0.0.1:56480: remote error: tls: bad certificate 2025-08-13T19:54:07.072629619+00:00 stderr F 2025/08/13 19:54:07 http: TLS handshake error from 127.0.0.1:56496: remote error: tls: bad certificate 2025-08-13T19:54:07.110073149+00:00 stderr F 2025/08/13 19:54:07 http: TLS handshake error from 127.0.0.1:56498: remote error: tls: bad certificate 2025-08-13T19:54:07.147519778+00:00 stderr F 2025/08/13 19:54:07 http: TLS handshake error from 127.0.0.1:56504: remote error: tls: bad certificate 2025-08-13T19:54:10.324143599+00:00 stderr F 2025/08/13 19:54:10 http: TLS 
handshake error from 127.0.0.1:53800: remote error: tls: bad certificate 2025-08-13T19:54:10.351052607+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53810: remote error: tls: bad certificate 2025-08-13T19:54:10.379028716+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53826: remote error: tls: bad certificate 2025-08-13T19:54:10.411750200+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53836: remote error: tls: bad certificate 2025-08-13T19:54:10.438112913+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53842: remote error: tls: bad certificate 2025-08-13T19:54:10.843878839+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53844: remote error: tls: bad certificate 2025-08-13T19:54:10.870279393+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53860: remote error: tls: bad certificate 2025-08-13T19:54:10.892950700+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53868: remote error: tls: bad certificate 2025-08-13T19:54:10.913163997+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53882: remote error: tls: bad certificate 2025-08-13T19:54:10.931237763+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53892: remote error: tls: bad certificate 2025-08-13T19:54:10.955366742+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53900: remote error: tls: bad certificate 2025-08-13T19:54:10.974190990+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53908: remote error: tls: bad certificate 2025-08-13T19:54:10.995896150+00:00 stderr F 2025/08/13 19:54:10 http: TLS handshake error from 127.0.0.1:53916: remote error: tls: bad certificate 2025-08-13T19:54:11.025141415+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53924: remote error: tls: bad certificate 
2025-08-13T19:54:11.046967078+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53926: remote error: tls: bad certificate 2025-08-13T19:54:11.066142645+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53942: remote error: tls: bad certificate 2025-08-13T19:54:11.084764397+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53958: remote error: tls: bad certificate 2025-08-13T19:54:11.106043095+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53972: remote error: tls: bad certificate 2025-08-13T19:54:11.132503240+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53982: remote error: tls: bad certificate 2025-08-13T19:54:11.154251721+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53986: remote error: tls: bad certificate 2025-08-13T19:54:11.173139731+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:53994: remote error: tls: bad certificate 2025-08-13T19:54:11.194903502+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54004: remote error: tls: bad certificate 2025-08-13T19:54:11.220897794+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54008: remote error: tls: bad certificate 2025-08-13T19:54:11.239054393+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54022: remote error: tls: bad certificate 2025-08-13T19:54:11.257567391+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54032: remote error: tls: bad certificate 2025-08-13T19:54:11.273608709+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54042: remote error: tls: bad certificate 2025-08-13T19:54:11.289194604+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54048: remote error: tls: bad certificate 2025-08-13T19:54:11.305607913+00:00 stderr F 2025/08/13 19:54:11 http: TLS 
handshake error from 127.0.0.1:54064: remote error: tls: bad certificate 2025-08-13T19:54:11.321356573+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54068: remote error: tls: bad certificate 2025-08-13T19:54:11.347699525+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54076: remote error: tls: bad certificate 2025-08-13T19:54:11.366058569+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54086: remote error: tls: bad certificate 2025-08-13T19:54:11.385265668+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54094: remote error: tls: bad certificate 2025-08-13T19:54:11.408454650+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54106: remote error: tls: bad certificate 2025-08-13T19:54:11.428514993+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54110: remote error: tls: bad certificate 2025-08-13T19:54:11.449096430+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54114: remote error: tls: bad certificate 2025-08-13T19:54:11.464211522+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54130: remote error: tls: bad certificate 2025-08-13T19:54:11.481318800+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54134: remote error: tls: bad certificate 2025-08-13T19:54:11.499429527+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54148: remote error: tls: bad certificate 2025-08-13T19:54:11.516029891+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54164: remote error: tls: bad certificate 2025-08-13T19:54:11.534594972+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54176: remote error: tls: bad certificate 2025-08-13T19:54:11.549282941+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54190: remote error: tls: bad certificate 
2025-08-13T19:54:11.567407278+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54194: remote error: tls: bad certificate 2025-08-13T19:54:11.587741259+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54208: remote error: tls: bad certificate 2025-08-13T19:54:11.606065312+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54222: remote error: tls: bad certificate 2025-08-13T19:54:11.622622815+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54236: remote error: tls: bad certificate 2025-08-13T19:54:11.639475956+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54240: remote error: tls: bad certificate 2025-08-13T19:54:11.653602620+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54250: remote error: tls: bad certificate 2025-08-13T19:54:11.669066781+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54256: remote error: tls: bad certificate 2025-08-13T19:54:11.688043553+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54268: remote error: tls: bad certificate 2025-08-13T19:54:11.704404330+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54278: remote error: tls: bad certificate 2025-08-13T19:54:11.719577643+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54284: remote error: tls: bad certificate 2025-08-13T19:54:11.734508770+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54290: remote error: tls: bad certificate 2025-08-13T19:54:11.750367353+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54294: remote error: tls: bad certificate 2025-08-13T19:54:11.766342269+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54306: remote error: tls: bad certificate 2025-08-13T19:54:11.783522849+00:00 stderr F 2025/08/13 19:54:11 http: TLS 
handshake error from 127.0.0.1:54316: remote error: tls: bad certificate 2025-08-13T19:54:11.800200285+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54328: remote error: tls: bad certificate 2025-08-13T19:54:11.817072067+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54330: remote error: tls: bad certificate 2025-08-13T19:54:11.833085354+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54336: remote error: tls: bad certificate 2025-08-13T19:54:11.849252126+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54352: remote error: tls: bad certificate 2025-08-13T19:54:11.865088588+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54360: remote error: tls: bad certificate 2025-08-13T19:54:11.881589549+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54374: remote error: tls: bad certificate 2025-08-13T19:54:11.897978597+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54382: remote error: tls: bad certificate 2025-08-13T19:54:11.915336733+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54388: remote error: tls: bad certificate 2025-08-13T19:54:11.931051652+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54390: remote error: tls: bad certificate 2025-08-13T19:54:11.948364826+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54404: remote error: tls: bad certificate 2025-08-13T19:54:11.968452890+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54420: remote error: tls: bad certificate 2025-08-13T19:54:11.989436089+00:00 stderr F 2025/08/13 19:54:11 http: TLS handshake error from 127.0.0.1:54424: remote error: tls: bad certificate 2025-08-13T19:54:12.004551540+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54432: remote error: tls: bad certificate 
2025-08-13T19:54:12.026195218+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54446: remote error: tls: bad certificate
2025-08-13T19:54:12.037979565+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54452: remote error: tls: bad certificate
2025-08-13T19:54:12.052939342+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54466: remote error: tls: bad certificate
2025-08-13T19:54:12.071769910+00:00 stderr F 2025/08/13 19:54:12 http: TLS handshake error from 127.0.0.1:54470: remote error: tls: bad certificate
2025-08-13T19:54:15.853328536+00:00 stderr F 2025/08/13 19:54:15 http: TLS handshake error from 127.0.0.1:54476: remote error: tls: bad certificate
2025-08-13T19:54:15.966140586+00:00 stderr F 2025/08/13 19:54:15 http: TLS handshake error from 127.0.0.1:54490: remote error: tls: bad certificate
2025-08-13T19:54:15.987193217+00:00 stderr F 2025/08/13 19:54:15 http: TLS handshake error from 127.0.0.1:54496: remote error: tls: bad certificate
2025-08-13T19:54:16.003514784+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54500: remote error: tls: bad certificate
2025-08-13T19:54:16.022766333+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54510: remote error: tls: bad certificate
2025-08-13T19:54:16.038674267+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54524: remote error: tls: bad certificate
2025-08-13T19:54:16.054370536+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54540: remote error: tls: bad certificate
2025-08-13T19:54:16.143197972+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54552: remote error: tls: bad certificate
2025-08-13T19:54:16.158607582+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54566: remote error: tls: bad certificate
2025-08-13T19:54:16.175517075+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54574: remote error: tls: bad certificate
2025-08-13T19:54:16.191456580+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54580: remote error: tls: bad certificate
2025-08-13T19:54:16.205409408+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54592: remote error: tls: bad certificate
2025-08-13T19:54:16.223006201+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54602: remote error: tls: bad certificate
2025-08-13T19:54:16.239594454+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54608: remote error: tls: bad certificate
2025-08-13T19:54:16.258661369+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54622: remote error: tls: bad certificate
2025-08-13T19:54:16.276867389+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54628: remote error: tls: bad certificate
2025-08-13T19:54:16.294151662+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54644: remote error: tls: bad certificate
2025-08-13T19:54:16.312158906+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54654: remote error: tls: bad certificate
2025-08-13T19:54:16.329057329+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54670: remote error: tls: bad certificate
2025-08-13T19:54:16.347042602+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54682: remote error: tls: bad certificate
2025-08-13T19:54:16.367316011+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54686: remote error: tls: bad certificate
2025-08-13T19:54:16.386183990+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54688: remote error: tls: bad certificate
2025-08-13T19:54:16.402416114+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54702: remote error: tls: bad certificate
2025-08-13T19:54:16.417756592+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54716: remote error: tls: bad certificate
2025-08-13T19:54:16.433322186+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54724: remote error: tls: bad certificate
2025-08-13T19:54:16.457360412+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54740: remote error: tls: bad certificate
2025-08-13T19:54:16.479533656+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54754: remote error: tls: bad certificate
2025-08-13T19:54:16.499071644+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54768: remote error: tls: bad certificate
2025-08-13T19:54:16.524058997+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54784: remote error: tls: bad certificate
2025-08-13T19:54:16.540699042+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54790: remote error: tls: bad certificate
2025-08-13T19:54:16.560202489+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54806: remote error: tls: bad certificate
2025-08-13T19:54:16.576866315+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54814: remote error: tls: bad certificate
2025-08-13T19:54:16.594332203+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54822: remote error: tls: bad certificate
2025-08-13T19:54:16.609170517+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54828: remote error: tls: bad certificate
2025-08-13T19:54:16.626135522+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54830: remote error: tls: bad certificate
2025-08-13T19:54:16.650720813+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54834: remote error: tls: bad certificate
2025-08-13T19:54:16.675662296+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54848: remote error: tls: bad certificate
2025-08-13T19:54:16.695085490+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54850: remote error: tls: bad certificate
2025-08-13T19:54:16.711583271+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54852: remote error: tls: bad certificate
2025-08-13T19:54:16.725719775+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54860: remote error: tls: bad certificate
2025-08-13T19:54:16.744172602+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54870: remote error: tls: bad certificate
2025-08-13T19:54:16.761632880+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54882: remote error: tls: bad certificate
2025-08-13T19:54:16.780635192+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54884: remote error: tls: bad certificate
2025-08-13T19:54:16.798684607+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54898: remote error: tls: bad certificate
2025-08-13T19:54:16.817738631+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54904: remote error: tls: bad certificate
2025-08-13T19:54:16.835916381+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54918: remote error: tls: bad certificate
2025-08-13T19:54:16.859974438+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54928: remote error: tls: bad certificate
2025-08-13T19:54:16.915907125+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54938: remote error: tls: bad certificate
2025-08-13T19:54:16.931963703+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54940: remote error: tls: bad certificate
2025-08-13T19:54:16.952153640+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54954: remote error: tls: bad certificate
2025-08-13T19:54:16.969020321+00:00 stderr F 2025/08/13 19:54:16 http: TLS handshake error from 127.0.0.1:54962: remote error: tls: bad certificate
2025-08-13T19:54:17.086860076+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54970: remote error: tls: bad certificate
2025-08-13T19:54:17.104701715+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54978: remote error: tls: bad certificate
2025-08-13T19:54:17.123251545+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54980: remote error: tls: bad certificate
2025-08-13T19:54:17.143195575+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54986: remote error: tls: bad certificate
2025-08-13T19:54:17.161661582+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:54998: remote error: tls: bad certificate
2025-08-13T19:54:17.179342307+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55010: remote error: tls: bad certificate
2025-08-13T19:54:17.199657357+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55018: remote error: tls: bad certificate
2025-08-13T19:54:17.218228467+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55030: remote error: tls: bad certificate
2025-08-13T19:54:17.237230160+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55044: remote error: tls: bad certificate
2025-08-13T19:54:17.251893088+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55046: remote error: tls: bad certificate
2025-08-13T19:54:17.270647944+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55062: remote error: tls: bad certificate
2025-08-13T19:54:17.287085233+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55070: remote error: tls: bad certificate
2025-08-13T19:54:17.308525895+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55072: remote error: tls: bad certificate
2025-08-13T19:54:17.324706747+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55088: remote error: tls: bad certificate
2025-08-13T19:54:17.341763604+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55102: remote error: tls: bad certificate
2025-08-13T19:54:17.360314454+00:00 stderr F 2025/08/13 19:54:17 http: TLS handshake error from 127.0.0.1:55116: remote error: tls: bad certificate
2025-08-13T19:54:20.721525557+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53216: remote error: tls: bad certificate
2025-08-13T19:54:20.746320355+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53224: remote error: tls: bad certificate
2025-08-13T19:54:20.771123713+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53234: remote error: tls: bad certificate
2025-08-13T19:54:20.796913209+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53236: remote error: tls: bad certificate
2025-08-13T19:54:20.821871722+00:00 stderr F 2025/08/13 19:54:20 http: TLS handshake error from 127.0.0.1:53250: remote error: tls: bad certificate
2025-08-13T19:54:22.817745111+00:00 stderr F 2025/08/13 19:54:22 http: TLS handshake error from 127.0.0.1:53254: remote error: tls: bad certificate
2025-08-13T19:54:25.227300862+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53266: remote error: tls: bad certificate
2025-08-13T19:54:25.245855302+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53276: remote error: tls: bad certificate
2025-08-13T19:54:25.262362583+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53282: remote error: tls: bad certificate
2025-08-13T19:54:25.281349155+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53292: remote error: tls: bad certificate
2025-08-13T19:54:25.301179791+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53302: remote error: tls: bad certificate
2025-08-13T19:54:25.324967410+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53318: remote error: tls: bad certificate
2025-08-13T19:54:25.364442558+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53326: remote error: tls: bad certificate
2025-08-13T19:54:25.397158082+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53336: remote error: tls: bad certificate
2025-08-13T19:54:25.419758267+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53338: remote error: tls: bad certificate
2025-08-13T19:54:25.436679680+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53350: remote error: tls: bad certificate
2025-08-13T19:54:25.453617054+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53364: remote error: tls: bad certificate
2025-08-13T19:54:25.469684553+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53380: remote error: tls: bad certificate
2025-08-13T19:54:25.490530948+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53388: remote error: tls: bad certificate
2025-08-13T19:54:25.508708857+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53400: remote error: tls: bad certificate
2025-08-13T19:54:25.525215248+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53404: remote error: tls: bad certificate
2025-08-13T19:54:25.537947032+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53412: remote error: tls: bad certificate
2025-08-13T19:54:25.554302069+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53422: remote error: tls: bad certificate
2025-08-13T19:54:25.568329099+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53436: remote error: tls: bad certificate
2025-08-13T19:54:25.582991348+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53444: remote error: tls: bad certificate
2025-08-13T19:54:25.596693629+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53456: remote error: tls: bad certificate
2025-08-13T19:54:25.616365821+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53470: remote error: tls: bad certificate
2025-08-13T19:54:25.632041678+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53482: remote error: tls: bad certificate
2025-08-13T19:54:25.653220783+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53488: remote error: tls: bad certificate
2025-08-13T19:54:25.670881997+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53492: remote error: tls: bad certificate
2025-08-13T19:54:25.688280574+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53508: remote error: tls: bad certificate
2025-08-13T19:54:25.706843854+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53520: remote error: tls: bad certificate
2025-08-13T19:54:25.722669636+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53536: remote error: tls: bad certificate
2025-08-13T19:54:25.740935798+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53550: remote error: tls: bad certificate
2025-08-13T19:54:25.757637885+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53556: remote error: tls: bad certificate
2025-08-13T19:54:25.773219450+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53568: remote error: tls: bad certificate
2025-08-13T19:54:25.788331711+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53584: remote error: tls: bad certificate
2025-08-13T19:54:25.810117233+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53600: remote error: tls: bad certificate
2025-08-13T19:54:25.827526600+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53604: remote error: tls: bad certificate
2025-08-13T19:54:25.843408304+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53616: remote error: tls: bad certificate
2025-08-13T19:54:25.858583567+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53622: remote error: tls: bad certificate
2025-08-13T19:54:25.874072539+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53624: remote error: tls: bad certificate
2025-08-13T19:54:25.890998383+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53640: remote error: tls: bad certificate
2025-08-13T19:54:25.905653711+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53646: remote error: tls: bad certificate
2025-08-13T19:54:25.926635510+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53648: remote error: tls: bad certificate
2025-08-13T19:54:25.942071211+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53656: remote error: tls: bad certificate
2025-08-13T19:54:25.957762569+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53666: remote error: tls: bad certificate
2025-08-13T19:54:25.975213977+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53668: remote error: tls: bad certificate
2025-08-13T19:54:25.992678176+00:00 stderr F 2025/08/13 19:54:25 http: TLS handshake error from 127.0.0.1:53678: remote error: tls: bad certificate
2025-08-13T19:54:26.010514455+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53692: remote error: tls: bad certificate
2025-08-13T19:54:26.025077611+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53706: remote error: tls: bad certificate
2025-08-13T19:54:26.039340648+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53708: remote error: tls: bad certificate
2025-08-13T19:54:26.066560865+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53724: remote error: tls: bad certificate
2025-08-13T19:54:26.089577813+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53728: remote error: tls: bad certificate
2025-08-13T19:54:26.107120494+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53744: remote error: tls: bad certificate
2025-08-13T19:54:26.126047744+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53754: remote error: tls: bad certificate
2025-08-13T19:54:26.142558895+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53756: remote error: tls: bad certificate
2025-08-13T19:54:26.158527671+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53766: remote error: tls: bad certificate
2025-08-13T19:54:26.176040571+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53770: remote error: tls: bad certificate
2025-08-13T19:54:26.192314476+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53772: remote error: tls: bad certificate
2025-08-13T19:54:26.210127685+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53776: remote error: tls: bad certificate
2025-08-13T19:54:26.228091938+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53786: remote error: tls: bad certificate
2025-08-13T19:54:26.244063774+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53792: remote error: tls: bad certificate
2025-08-13T19:54:26.259651109+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53800: remote error: tls: bad certificate
2025-08-13T19:54:26.278102256+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53804: remote error: tls: bad certificate
2025-08-13T19:54:26.297222942+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53816: remote error: tls: bad certificate
2025-08-13T19:54:26.316242055+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53832: remote error: tls: bad certificate
2025-08-13T19:54:26.332867529+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53848: remote error: tls: bad certificate
2025-08-13T19:54:26.350388950+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53864: remote error: tls: bad certificate
2025-08-13T19:54:26.365764699+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53870: remote error: tls: bad certificate
2025-08-13T19:54:26.382657631+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53872: remote error: tls: bad certificate
2025-08-13T19:54:26.404027901+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53884: remote error: tls: bad certificate
2025-08-13T19:54:26.415088637+00:00 stderr F 2025/08/13 19:54:26 http: TLS handshake error from 127.0.0.1:53886: remote error: tls: bad certificate
2025-08-13T19:54:31.046433198+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41316: remote error: tls: bad certificate
2025-08-13T19:54:31.068376685+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41320: remote error: tls: bad certificate
2025-08-13T19:54:31.089971020+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41322: remote error: tls: bad certificate
2025-08-13T19:54:31.111292849+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41338: remote error: tls: bad certificate
2025-08-13T19:54:31.131290610+00:00 stderr F 2025/08/13 19:54:31 http: TLS handshake error from 127.0.0.1:41344: remote error: tls: bad certificate
2025-08-13T19:54:35.227171052+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41346: remote error: tls: bad certificate
2025-08-13T19:54:35.252922997+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41360: remote error: tls: bad certificate
2025-08-13T19:54:35.276163581+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41362: remote error: tls: bad certificate
2025-08-13T19:54:35.304159070+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41376: remote error: tls: bad certificate
2025-08-13T19:54:35.340938550+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41378: remote error: tls: bad certificate
2025-08-13T19:54:35.369984870+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41384: remote error: tls: bad certificate
2025-08-13T19:54:35.397360532+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41386: remote error: tls: bad certificate
2025-08-13T19:54:35.413134552+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41396: remote error: tls: bad certificate
2025-08-13T19:54:35.427249325+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41410: remote error: tls: bad certificate
2025-08-13T19:54:35.443855749+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41412: remote error: tls: bad certificate
2025-08-13T19:54:35.459520286+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41420: remote error: tls: bad certificate
2025-08-13T19:54:35.477164140+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41422: remote error: tls: bad certificate
2025-08-13T19:54:35.496001008+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41436: remote error: tls: bad certificate
2025-08-13T19:54:35.513180579+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41450: remote error: tls: bad certificate
2025-08-13T19:54:35.530507003+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41462: remote error: tls: bad certificate
2025-08-13T19:54:35.545536492+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41466: remote error: tls: bad certificate
2025-08-13T19:54:35.563984219+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41470: remote error: tls: bad certificate
2025-08-13T19:54:35.584946118+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41482: remote error: tls: bad certificate
2025-08-13T19:54:35.602649913+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41490: remote error: tls: bad certificate
2025-08-13T19:54:35.616981962+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41500: remote error: tls: bad certificate
2025-08-13T19:54:35.632596948+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41508: remote error: tls: bad certificate
2025-08-13T19:54:35.649045698+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41518: remote error: tls: bad certificate
2025-08-13T19:54:35.666645051+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41520: remote error: tls: bad certificate
2025-08-13T19:54:35.684569622+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41524: remote error: tls: bad certificate
2025-08-13T19:54:35.704045878+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41534: remote error: tls: bad certificate
2025-08-13T19:54:35.726147510+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41542: remote error: tls: bad certificate
2025-08-13T19:54:35.742761164+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41548: remote error: tls: bad certificate
2025-08-13T19:54:35.761563161+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41560: remote error: tls: bad certificate
2025-08-13T19:54:35.777918448+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41574: remote error: tls: bad certificate
2025-08-13T19:54:35.792654319+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41580: remote error: tls: bad certificate
2025-08-13T19:54:35.815639335+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41588: remote error: tls: bad certificate
2025-08-13T19:54:35.829742187+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41594: remote error: tls: bad certificate
2025-08-13T19:54:35.845423815+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41610: remote error: tls: bad certificate
2025-08-13T19:54:35.861880605+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41626: remote error: tls: bad certificate
2025-08-13T19:54:35.877927283+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41636: remote error: tls: bad certificate
2025-08-13T19:54:35.895002341+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41642: remote error: tls: bad certificate
2025-08-13T19:54:35.908226958+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41646: remote error: tls: bad certificate
2025-08-13T19:54:35.930885865+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41656: remote error: tls: bad certificate
2025-08-13T19:54:35.945323178+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41672: remote error: tls: bad certificate
2025-08-13T19:54:35.963176427+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41682: remote error: tls: bad certificate
2025-08-13T19:54:35.982345695+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41688: remote error: tls: bad certificate
2025-08-13T19:54:35.997435986+00:00 stderr F 2025/08/13 19:54:35 http: TLS handshake error from 127.0.0.1:41698: remote error: tls: bad certificate
2025-08-13T19:54:36.015206053+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41702: remote error: tls: bad certificate
2025-08-13T19:54:36.030573252+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41704: remote error: tls: bad certificate
2025-08-13T19:54:36.047594838+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41712: remote error: tls: bad certificate
2025-08-13T19:54:36.067366642+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41728: remote error: tls: bad certificate
2025-08-13T19:54:36.082614408+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41730: remote error: tls: bad certificate
2025-08-13T19:54:36.101377354+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41744: remote error: tls: bad certificate
2025-08-13T19:54:36.117705550+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41746: remote error: tls: bad certificate
2025-08-13T19:54:36.133877822+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41752: remote error: tls: bad certificate
2025-08-13T19:54:36.148239472+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41756: remote error: tls: bad certificate
2025-08-13T19:54:36.164068494+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41768: remote error: tls: bad certificate
2025-08-13T19:54:36.181239164+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41778: remote error: tls: bad certificate
2025-08-13T19:54:36.205109676+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41784: remote error: tls: bad certificate
2025-08-13T19:54:36.225618141+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41786: remote error: tls: bad certificate
2025-08-13T19:54:36.244268714+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41796: remote error: tls: bad certificate
2025-08-13T19:54:36.259244651+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41806: remote error: tls: bad certificate
2025-08-13T19:54:36.276200615+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41812: remote error: tls: bad certificate
2025-08-13T19:54:36.301162898+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41824: remote error: tls: bad certificate
2025-08-13T19:54:36.325165024+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41838: remote error: tls: bad certificate
2025-08-13T19:54:36.349232481+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41852: remote error: tls: bad certificate
2025-08-13T19:54:36.367540803+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41868: remote error: tls: bad certificate
2025-08-13T19:54:36.391947470+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41880: remote error: tls: bad certificate
2025-08-13T19:54:36.427244898+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41884: remote error: tls: bad certificate
2025-08-13T19:54:36.451516341+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41900: remote error: tls: bad certificate
2025-08-13T19:54:36.468951009+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41904: remote error: tls: bad certificate
2025-08-13T19:54:36.491550874+00:00 stderr F 2025/08/13 19:54:36 http: TLS handshake error from 127.0.0.1:41912: remote error: tls: bad certificate
2025-08-13T19:54:41.519071725+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50242: remote error: tls: bad certificate
2025-08-13T19:54:41.538992693+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50258: remote error: tls: bad certificate
2025-08-13T19:54:41.563178683+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50266: remote error: tls: bad certificate
2025-08-13T19:54:41.626025906+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50276: remote error: tls: bad certificate
2025-08-13T19:54:41.651701259+00:00 stderr F 2025/08/13 19:54:41 http: TLS handshake error from 127.0.0.1:50286: remote error: tls: bad certificate
2025-08-13T19:54:45.236328752+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50292: remote error: tls: bad certificate
2025-08-13T19:54:45.257041153+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50296: remote error: tls: bad certificate
2025-08-13T19:54:45.273909114+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50298: remote error: tls: bad certificate
2025-08-13T19:54:45.288248913+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50302: remote error: tls: bad certificate
2025-08-13T19:54:45.305045482+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50306: remote error: tls: bad certificate
2025-08-13T19:54:45.320968267+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50312: remote error: tls: bad certificate
2025-08-13T19:54:45.339462534+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50326: remote error: tls: bad certificate
2025-08-13T19:54:45.367084432+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50334: remote error: tls: bad certificate
2025-08-13T19:54:45.384769607+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50344: remote error: tls: bad certificate
2025-08-13T19:54:45.399928829+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50346: remote error: tls: bad certificate
2025-08-13T19:54:45.419268830+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50354: remote error: tls: bad certificate
2025-08-13T19:54:45.436260775+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50370: remote error: tls: bad certificate
2025-08-13T19:54:45.452531150+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50382: remote error: tls: bad certificate
2025-08-13T19:54:45.472156780+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50384: remote error: tls: bad certificate
2025-08-13T19:54:45.488235818+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50386: remote error: tls: bad certificate
2025-08-13T19:54:45.505031358+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50396: remote error: tls: bad certificate
2025-08-13T19:54:45.519045817+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50412: remote error: tls: bad certificate
2025-08-13T19:54:45.536148925+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50418: remote error: tls: bad certificate
2025-08-13T19:54:45.552672057+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50428: remote error: tls: bad certificate
2025-08-13T19:54:45.567490320+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50440: remote error: tls: bad certificate
2025-08-13T19:54:45.586848402+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50446: remote error: tls: bad certificate
2025-08-13T19:54:45.609313923+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50448: remote error: tls: bad certificate
2025-08-13T19:54:45.629726306+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50460: remote error: tls: bad certificate
2025-08-13T19:54:45.646340430+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50464: remote error: tls: bad certificate
2025-08-13T19:54:45.661195803+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50478: remote error: tls: bad certificate
2025-08-13T19:54:45.675670796+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50480: remote error: tls: bad certificate
2025-08-13T19:54:45.691987512+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50496: remote error: tls: bad certificate
2025-08-13T19:54:45.707129494+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50502: remote error: tls: bad certificate
2025-08-13T19:54:45.721192075+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50508: remote error: tls: bad certificate
2025-08-13T19:54:45.736476481+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50514: remote error: tls: bad certificate
2025-08-13T19:54:45.751938933+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50516: remote error: tls: bad certificate
2025-08-13T19:54:45.767759134+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50528: remote error: tls: bad certificate
2025-08-13T19:54:45.783994357+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50542: remote error: tls: bad certificate
2025-08-13T19:54:45.797493492+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50548: remote error: tls: bad certificate
2025-08-13T19:54:45.813383646+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50562: remote error: tls: bad certificate
2025-08-13T19:54:45.827280712+00:00 stderr F 2025/08/13 19:54:45 http: TLS 
handshake error from 127.0.0.1:50564: remote error: tls: bad certificate 2025-08-13T19:54:45.844484013+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50572: remote error: tls: bad certificate 2025-08-13T19:54:45.858049520+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50580: remote error: tls: bad certificate 2025-08-13T19:54:45.874998724+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50584: remote error: tls: bad certificate 2025-08-13T19:54:45.896306262+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50588: remote error: tls: bad certificate 2025-08-13T19:54:45.913262116+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50590: remote error: tls: bad certificate 2025-08-13T19:54:45.929432387+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50596: remote error: tls: bad certificate 2025-08-13T19:54:45.948160311+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50606: remote error: tls: bad certificate 2025-08-13T19:54:45.964073845+00:00 stderr F 2025/08/13 19:54:45 http: TLS handshake error from 127.0.0.1:50612: remote error: tls: bad certificate 2025-08-13T19:54:46.006916808+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50616: remote error: tls: bad certificate 2025-08-13T19:54:46.020915227+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50628: remote error: tls: bad certificate 2025-08-13T19:54:46.055171725+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50642: remote error: tls: bad certificate 2025-08-13T19:54:46.069256957+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50654: remote error: tls: bad certificate 2025-08-13T19:54:46.092022466+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50670: remote error: tls: bad certificate 
2025-08-13T19:54:46.116726671+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50676: remote error: tls: bad certificate 2025-08-13T19:54:46.136084183+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50678: remote error: tls: bad certificate 2025-08-13T19:54:46.151573485+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50694: remote error: tls: bad certificate 2025-08-13T19:54:46.166209213+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50698: remote error: tls: bad certificate 2025-08-13T19:54:46.184561257+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50714: remote error: tls: bad certificate 2025-08-13T19:54:46.202747295+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50724: remote error: tls: bad certificate 2025-08-13T19:54:46.227492002+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50738: remote error: tls: bad certificate 2025-08-13T19:54:46.255444979+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50742: remote error: tls: bad certificate 2025-08-13T19:54:46.274494723+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50744: remote error: tls: bad certificate 2025-08-13T19:54:46.296945013+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50758: remote error: tls: bad certificate 2025-08-13T19:54:46.318310713+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50770: remote error: tls: bad certificate 2025-08-13T19:54:46.348319239+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50786: remote error: tls: bad certificate 2025-08-13T19:54:46.367607499+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50790: remote error: tls: bad certificate 2025-08-13T19:54:46.389904406+00:00 stderr F 2025/08/13 19:54:46 http: TLS 
handshake error from 127.0.0.1:50798: remote error: tls: bad certificate 2025-08-13T19:54:46.409265978+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50802: remote error: tls: bad certificate 2025-08-13T19:54:46.433139139+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50814: remote error: tls: bad certificate 2025-08-13T19:54:46.452182303+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50820: remote error: tls: bad certificate 2025-08-13T19:54:46.467259783+00:00 stderr F 2025/08/13 19:54:46 http: TLS handshake error from 127.0.0.1:50826: remote error: tls: bad certificate 2025-08-13T19:54:47.001052853+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50840: remote error: tls: bad certificate 2025-08-13T19:54:47.019595873+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50846: remote error: tls: bad certificate 2025-08-13T19:54:47.039075658+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50848: remote error: tls: bad certificate 2025-08-13T19:54:47.056672000+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50854: remote error: tls: bad certificate 2025-08-13T19:54:47.073365597+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50860: remote error: tls: bad certificate 2025-08-13T19:54:47.091295068+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50862: remote error: tls: bad certificate 2025-08-13T19:54:47.111718101+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50868: remote error: tls: bad certificate 2025-08-13T19:54:47.137031453+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50874: remote error: tls: bad certificate 2025-08-13T19:54:47.157653012+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50890: remote error: tls: bad certificate 
2025-08-13T19:54:47.176022046+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50892: remote error: tls: bad certificate 2025-08-13T19:54:47.196487130+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50908: remote error: tls: bad certificate 2025-08-13T19:54:47.227302899+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50920: remote error: tls: bad certificate 2025-08-13T19:54:47.404909387+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50924: remote error: tls: bad certificate 2025-08-13T19:54:47.423644541+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50930: remote error: tls: bad certificate 2025-08-13T19:54:47.441867851+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50942: remote error: tls: bad certificate 2025-08-13T19:54:47.457562599+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50950: remote error: tls: bad certificate 2025-08-13T19:54:47.474138762+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50964: remote error: tls: bad certificate 2025-08-13T19:54:47.495737168+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50978: remote error: tls: bad certificate 2025-08-13T19:54:47.514976567+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:50992: remote error: tls: bad certificate 2025-08-13T19:54:47.526215338+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51002: remote error: tls: bad certificate 2025-08-13T19:54:47.537221872+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51008: remote error: tls: bad certificate 2025-08-13T19:54:47.554406162+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51012: remote error: tls: bad certificate 2025-08-13T19:54:47.575909946+00:00 stderr F 2025/08/13 19:54:47 http: TLS 
handshake error from 127.0.0.1:51016: remote error: tls: bad certificate 2025-08-13T19:54:47.599926841+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51024: remote error: tls: bad certificate 2025-08-13T19:54:47.616442193+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51030: remote error: tls: bad certificate 2025-08-13T19:54:47.632249434+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51032: remote error: tls: bad certificate 2025-08-13T19:54:47.648476656+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51046: remote error: tls: bad certificate 2025-08-13T19:54:47.663969029+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51062: remote error: tls: bad certificate 2025-08-13T19:54:47.679760609+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51064: remote error: tls: bad certificate 2025-08-13T19:54:47.697938208+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51078: remote error: tls: bad certificate 2025-08-13T19:54:47.711034242+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51092: remote error: tls: bad certificate 2025-08-13T19:54:47.729134198+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51100: remote error: tls: bad certificate 2025-08-13T19:54:47.743552369+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51104: remote error: tls: bad certificate 2025-08-13T19:54:47.762242143+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51116: remote error: tls: bad certificate 2025-08-13T19:54:47.782497591+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51122: remote error: tls: bad certificate 2025-08-13T19:54:47.799375712+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51132: remote error: tls: bad certificate 
2025-08-13T19:54:47.817075917+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51136: remote error: tls: bad certificate 2025-08-13T19:54:47.838958022+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51146: remote error: tls: bad certificate 2025-08-13T19:54:47.855899665+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51152: remote error: tls: bad certificate 2025-08-13T19:54:47.876687658+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51166: remote error: tls: bad certificate 2025-08-13T19:54:47.898643715+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51180: remote error: tls: bad certificate 2025-08-13T19:54:47.921446705+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51190: remote error: tls: bad certificate 2025-08-13T19:54:47.935888657+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51198: remote error: tls: bad certificate 2025-08-13T19:54:47.960274003+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51214: remote error: tls: bad certificate 2025-08-13T19:54:47.978524504+00:00 stderr F 2025/08/13 19:54:47 http: TLS handshake error from 127.0.0.1:51230: remote error: tls: bad certificate 2025-08-13T19:54:48.005522014+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51244: remote error: tls: bad certificate 2025-08-13T19:54:48.046443822+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51260: remote error: tls: bad certificate 2025-08-13T19:54:48.064538418+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51262: remote error: tls: bad certificate 2025-08-13T19:54:48.080613007+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51274: remote error: tls: bad certificate 2025-08-13T19:54:48.099628939+00:00 stderr F 2025/08/13 19:54:48 http: TLS 
handshake error from 127.0.0.1:51282: remote error: tls: bad certificate 2025-08-13T19:54:48.121064661+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51294: remote error: tls: bad certificate 2025-08-13T19:54:48.145690804+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51304: remote error: tls: bad certificate 2025-08-13T19:54:48.167332531+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51318: remote error: tls: bad certificate 2025-08-13T19:54:48.185262613+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51334: remote error: tls: bad certificate 2025-08-13T19:54:48.202229307+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51336: remote error: tls: bad certificate 2025-08-13T19:54:48.220639022+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51342: remote error: tls: bad certificate 2025-08-13T19:54:48.238681507+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51344: remote error: tls: bad certificate 2025-08-13T19:54:48.266258334+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51346: remote error: tls: bad certificate 2025-08-13T19:54:48.282623311+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51354: remote error: tls: bad certificate 2025-08-13T19:54:48.297842775+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51368: remote error: tls: bad certificate 2025-08-13T19:54:48.312705359+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51376: remote error: tls: bad certificate 2025-08-13T19:54:48.366947119+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51378: remote error: tls: bad certificate 2025-08-13T19:54:48.383752057+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51390: remote error: tls: bad certificate 
2025-08-13T19:54:48.420479774+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51404: remote error: tls: bad certificate 2025-08-13T19:54:48.460301511+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51416: remote error: tls: bad certificate 2025-08-13T19:54:48.503072411+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51418: remote error: tls: bad certificate 2025-08-13T19:54:48.539468199+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51434: remote error: tls: bad certificate 2025-08-13T19:54:48.580702536+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51450: remote error: tls: bad certificate 2025-08-13T19:54:48.622874469+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51462: remote error: tls: bad certificate 2025-08-13T19:54:48.665528796+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51470: remote error: tls: bad certificate 2025-08-13T19:54:48.699564097+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:51476: remote error: tls: bad certificate 2025-08-13T19:54:48.743502741+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37316: remote error: tls: bad certificate 2025-08-13T19:54:48.781037412+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37318: remote error: tls: bad certificate 2025-08-13T19:54:48.820597231+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37334: remote error: tls: bad certificate 2025-08-13T19:54:48.861330103+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37348: remote error: tls: bad certificate 2025-08-13T19:54:48.900106410+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37354: remote error: tls: bad certificate 2025-08-13T19:54:48.941313065+00:00 stderr F 2025/08/13 19:54:48 http: TLS 
handshake error from 127.0.0.1:37366: remote error: tls: bad certificate 2025-08-13T19:54:48.982903331+00:00 stderr F 2025/08/13 19:54:48 http: TLS handshake error from 127.0.0.1:37368: remote error: tls: bad certificate 2025-08-13T19:54:49.025480786+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37378: remote error: tls: bad certificate 2025-08-13T19:54:49.061073002+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37384: remote error: tls: bad certificate 2025-08-13T19:54:49.101488645+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37390: remote error: tls: bad certificate 2025-08-13T19:54:49.139364575+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37400: remote error: tls: bad certificate 2025-08-13T19:54:49.181347173+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37414: remote error: tls: bad certificate 2025-08-13T19:54:49.228421736+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37418: remote error: tls: bad certificate 2025-08-13T19:54:49.264503406+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37420: remote error: tls: bad certificate 2025-08-13T19:54:49.303318783+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37426: remote error: tls: bad certificate 2025-08-13T19:54:49.341191304+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37434: remote error: tls: bad certificate 2025-08-13T19:54:49.385224931+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37442: remote error: tls: bad certificate 2025-08-13T19:54:49.421119105+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37448: remote error: tls: bad certificate 2025-08-13T19:54:49.463034371+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37456: remote error: tls: bad certificate 
2025-08-13T19:54:49.507885881+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37462: remote error: tls: bad certificate 2025-08-13T19:54:49.541032956+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37464: remote error: tls: bad certificate 2025-08-13T19:54:49.594906904+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37466: remote error: tls: bad certificate 2025-08-13T19:54:49.622143321+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37472: remote error: tls: bad certificate 2025-08-13T19:54:49.666360792+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37482: remote error: tls: bad certificate 2025-08-13T19:54:49.699256001+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37488: remote error: tls: bad certificate 2025-08-13T19:54:49.742683830+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37492: remote error: tls: bad certificate 2025-08-13T19:54:49.787688754+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37506: remote error: tls: bad certificate 2025-08-13T19:54:49.827469789+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37518: remote error: tls: bad certificate 2025-08-13T19:54:49.863624121+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37534: remote error: tls: bad certificate 2025-08-13T19:54:49.901086060+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37544: remote error: tls: bad certificate 2025-08-13T19:54:49.940695440+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37548: remote error: tls: bad certificate 2025-08-13T19:54:49.980859066+00:00 stderr F 2025/08/13 19:54:49 http: TLS handshake error from 127.0.0.1:37550: remote error: tls: bad certificate 2025-08-13T19:54:50.050907135+00:00 stderr F 2025/08/13 19:54:50 http: TLS 
handshake error from 127.0.0.1:37554: remote error: tls: bad certificate 2025-08-13T19:54:50.137901097+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37564: remote error: tls: bad certificate 2025-08-13T19:54:50.151417743+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37572: remote error: tls: bad certificate 2025-08-13T19:54:50.165392081+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37584: remote error: tls: bad certificate 2025-08-13T19:54:50.184057924+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37592: remote error: tls: bad certificate 2025-08-13T19:54:50.232257059+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37602: remote error: tls: bad certificate 2025-08-13T19:54:50.263172202+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37612: remote error: tls: bad certificate 2025-08-13T19:54:50.299941171+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37620: remote error: tls: bad certificate 2025-08-13T19:54:50.339362235+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37628: remote error: tls: bad certificate 2025-08-13T19:54:50.383579637+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37642: remote error: tls: bad certificate 2025-08-13T19:54:50.422077116+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37644: remote error: tls: bad certificate 2025-08-13T19:54:50.459365299+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37646: remote error: tls: bad certificate 2025-08-13T19:54:50.503114028+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37654: remote error: tls: bad certificate 2025-08-13T19:54:50.542227284+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37662: remote error: tls: bad certificate 
2025-08-13T19:54:50.580443804+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37666: remote error: tls: bad certificate 2025-08-13T19:54:50.618663025+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37670: remote error: tls: bad certificate 2025-08-13T19:54:50.662091074+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37674: remote error: tls: bad certificate 2025-08-13T19:54:50.703324070+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37690: remote error: tls: bad certificate 2025-08-13T19:54:50.742994632+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37706: remote error: tls: bad certificate 2025-08-13T19:54:50.781174782+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37716: remote error: tls: bad certificate 2025-08-13T19:54:50.822514951+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37724: remote error: tls: bad certificate 2025-08-13T19:54:50.864965152+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37734: remote error: tls: bad certificate 2025-08-13T19:54:50.903082800+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37744: remote error: tls: bad certificate 2025-08-13T19:54:50.939730446+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37746: remote error: tls: bad certificate 2025-08-13T19:54:50.980327724+00:00 stderr F 2025/08/13 19:54:50 http: TLS handshake error from 127.0.0.1:37762: remote error: tls: bad certificate 2025-08-13T19:54:51.028358395+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37776: remote error: tls: bad certificate 2025-08-13T19:54:51.063714993+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37786: remote error: tls: bad certificate 2025-08-13T19:54:51.099078432+00:00 stderr F 2025/08/13 19:54:51 http: TLS 
handshake error from 127.0.0.1:37798: remote error: tls: bad certificate 2025-08-13T19:54:51.138421105+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37810: remote error: tls: bad certificate 2025-08-13T19:54:51.184977953+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37822: remote error: tls: bad certificate 2025-08-13T19:54:51.224947554+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37824: remote error: tls: bad certificate 2025-08-13T19:54:51.259583092+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37838: remote error: tls: bad certificate 2025-08-13T19:54:51.301650752+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37840: remote error: tls: bad certificate 2025-08-13T19:54:51.344008331+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37850: remote error: tls: bad certificate 2025-08-13T19:54:51.381655405+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37854: remote error: tls: bad certificate 2025-08-13T19:54:51.422474030+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37870: remote error: tls: bad certificate 2025-08-13T19:54:51.461519404+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37886: remote error: tls: bad certificate 2025-08-13T19:54:51.502654738+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37900: remote error: tls: bad certificate 2025-08-13T19:54:51.542258418+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37916: remote error: tls: bad certificate 2025-08-13T19:54:51.594379955+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37932: remote error: tls: bad certificate 2025-08-13T19:54:51.622457366+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37948: remote error: tls: bad certificate 
2025-08-13T19:54:51.660415729+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37960: remote error: tls: bad certificate 2025-08-13T19:54:51.703872139+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37974: remote error: tls: bad certificate 2025-08-13T19:54:51.741578755+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37988: remote error: tls: bad certificate 2025-08-13T19:54:51.784254163+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:37992: remote error: tls: bad certificate 2025-08-13T19:54:51.819879569+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38000: remote error: tls: bad certificate 2025-08-13T19:54:51.859965063+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38010: remote error: tls: bad certificate 2025-08-13T19:54:51.878121091+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38026: remote error: tls: bad certificate 2025-08-13T19:54:51.902353582+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38034: remote error: tls: bad certificate 2025-08-13T19:54:51.906515861+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38036: remote error: tls: bad certificate 2025-08-13T19:54:51.927728697+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38052: remote error: tls: bad certificate 2025-08-13T19:54:51.942359554+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38058: remote error: tls: bad certificate 2025-08-13T19:54:51.948343675+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38066: remote error: tls: bad certificate 2025-08-13T19:54:51.967201263+00:00 stderr F 2025/08/13 19:54:51 http: TLS handshake error from 127.0.0.1:38076: remote error: tls: bad certificate 2025-08-13T19:54:51.981193772+00:00 stderr F 2025/08/13 19:54:51 http: TLS 
handshake error from 127.0.0.1:38082: remote error: tls: bad certificate 2025-08-13T19:54:52.020268547+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38088: remote error: tls: bad certificate 2025-08-13T19:54:52.061667128+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38096: remote error: tls: bad certificate 2025-08-13T19:54:52.107280290+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38106: remote error: tls: bad certificate 2025-08-13T19:54:52.141223088+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38122: remote error: tls: bad certificate 2025-08-13T19:54:52.184956296+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38136: remote error: tls: bad certificate 2025-08-13T19:54:52.223043423+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38138: remote error: tls: bad certificate 2025-08-13T19:54:52.262898920+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38150: remote error: tls: bad certificate 2025-08-13T19:54:52.302521061+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38156: remote error: tls: bad certificate 2025-08-13T19:54:52.417076959+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38158: remote error: tls: bad certificate 2025-08-13T19:54:52.437246314+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38162: remote error: tls: bad certificate 2025-08-13T19:54:52.456128063+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38168: remote error: tls: bad certificate 2025-08-13T19:54:52.490100172+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38172: remote error: tls: bad certificate 2025-08-13T19:54:52.513515161+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38182: remote error: tls: bad certificate 
2025-08-13T19:54:52.577266329+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38188: remote error: tls: bad certificate 2025-08-13T19:54:52.614880222+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38190: remote error: tls: bad certificate 2025-08-13T19:54:52.636364625+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38196: remote error: tls: bad certificate 2025-08-13T19:54:52.661284256+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38200: remote error: tls: bad certificate 2025-08-13T19:54:52.702261005+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38208: remote error: tls: bad certificate 2025-08-13T19:54:52.740693602+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38212: remote error: tls: bad certificate 2025-08-13T19:54:52.782375921+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38224: remote error: tls: bad certificate 2025-08-13T19:54:52.820380875+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38228: remote error: tls: bad certificate 2025-08-13T19:54:52.859233114+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38236: remote error: tls: bad certificate 2025-08-13T19:54:52.909319983+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38238: remote error: tls: bad certificate 2025-08-13T19:54:52.939708260+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38244: remote error: tls: bad certificate 2025-08-13T19:54:52.980378011+00:00 stderr F 2025/08/13 19:54:52 http: TLS handshake error from 127.0.0.1:38254: remote error: tls: bad certificate 2025-08-13T19:54:53.020623659+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38270: remote error: tls: bad certificate 2025-08-13T19:54:53.062672749+00:00 stderr F 2025/08/13 19:54:53 http: TLS 
handshake error from 127.0.0.1:38276: remote error: tls: bad certificate 2025-08-13T19:54:53.102471034+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38290: remote error: tls: bad certificate 2025-08-13T19:54:53.140476859+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38296: remote error: tls: bad certificate 2025-08-13T19:54:53.179913034+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38308: remote error: tls: bad certificate 2025-08-13T19:54:53.222325944+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38312: remote error: tls: bad certificate 2025-08-13T19:54:53.258608689+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38314: remote error: tls: bad certificate 2025-08-13T19:54:53.298877468+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38328: remote error: tls: bad certificate 2025-08-13T19:54:53.347174317+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38338: remote error: tls: bad certificate 2025-08-13T19:54:53.397025109+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38346: remote error: tls: bad certificate 2025-08-13T19:54:53.444224336+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38350: remote error: tls: bad certificate 2025-08-13T19:54:53.497524887+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38366: remote error: tls: bad certificate 2025-08-13T19:54:53.533090441+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38378: remote error: tls: bad certificate 2025-08-13T19:54:53.553177204+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38386: remote error: tls: bad certificate 2025-08-13T19:54:53.582106060+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38392: remote error: tls: bad certificate 
2025-08-13T19:54:53.674882677+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38400: remote error: tls: bad certificate 2025-08-13T19:54:53.702545147+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38410: remote error: tls: bad certificate 2025-08-13T19:54:53.724945836+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38416: remote error: tls: bad certificate 2025-08-13T19:54:53.765981567+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38422: remote error: tls: bad certificate 2025-08-13T19:54:53.787168921+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38426: remote error: tls: bad certificate 2025-08-13T19:54:53.819082872+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38430: remote error: tls: bad certificate 2025-08-13T19:54:53.861335407+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38446: remote error: tls: bad certificate 2025-08-13T19:54:53.948517495+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38454: remote error: tls: bad certificate 2025-08-13T19:54:53.966378695+00:00 stderr F 2025/08/13 19:54:53 http: TLS handshake error from 127.0.0.1:38466: remote error: tls: bad certificate 2025-08-13T19:54:55.227851349+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38480: remote error: tls: bad certificate 2025-08-13T19:54:55.242359483+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38484: remote error: tls: bad certificate 2025-08-13T19:54:55.257741052+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38486: remote error: tls: bad certificate 2025-08-13T19:54:55.273574374+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38502: remote error: tls: bad certificate 2025-08-13T19:54:55.290432665+00:00 stderr F 2025/08/13 19:54:55 http: TLS 
handshake error from 127.0.0.1:38506: remote error: tls: bad certificate 2025-08-13T19:54:55.308278864+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38516: remote error: tls: bad certificate 2025-08-13T19:54:55.331014593+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38522: remote error: tls: bad certificate 2025-08-13T19:54:55.345660111+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38524: remote error: tls: bad certificate 2025-08-13T19:54:55.362567503+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38528: remote error: tls: bad certificate 2025-08-13T19:54:55.379289260+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38532: remote error: tls: bad certificate 2025-08-13T19:54:55.396632185+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38536: remote error: tls: bad certificate 2025-08-13T19:54:55.414524296+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38540: remote error: tls: bad certificate 2025-08-13T19:54:55.443186163+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38556: remote error: tls: bad certificate 2025-08-13T19:54:55.459181660+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38572: remote error: tls: bad certificate 2025-08-13T19:54:55.475361311+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38586: remote error: tls: bad certificate 2025-08-13T19:54:55.498190953+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38602: remote error: tls: bad certificate 2025-08-13T19:54:55.523388592+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38610: remote error: tls: bad certificate 2025-08-13T19:54:55.539278625+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38622: remote error: tls: bad certificate 
2025-08-13T19:54:55.556160687+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38634: remote error: tls: bad certificate 2025-08-13T19:54:55.574049117+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38642: remote error: tls: bad certificate 2025-08-13T19:54:55.588104238+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38652: remote error: tls: bad certificate 2025-08-13T19:54:55.604112455+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38664: remote error: tls: bad certificate 2025-08-13T19:54:55.617740784+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38668: remote error: tls: bad certificate 2025-08-13T19:54:55.636686545+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38680: remote error: tls: bad certificate 2025-08-13T19:54:55.670200451+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38686: remote error: tls: bad certificate 2025-08-13T19:54:55.688522254+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38702: remote error: tls: bad certificate 2025-08-13T19:54:55.710979924+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38706: remote error: tls: bad certificate 2025-08-13T19:54:55.763993497+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38712: remote error: tls: bad certificate 2025-08-13T19:54:55.783688559+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38716: remote error: tls: bad certificate 2025-08-13T19:54:55.800447597+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38732: remote error: tls: bad certificate 2025-08-13T19:54:55.818536713+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38738: remote error: tls: bad certificate 2025-08-13T19:54:55.837187016+00:00 stderr F 2025/08/13 19:54:55 http: TLS 
handshake error from 127.0.0.1:38752: remote error: tls: bad certificate 2025-08-13T19:54:55.853265814+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38764: remote error: tls: bad certificate 2025-08-13T19:54:55.870504146+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38772: remote error: tls: bad certificate 2025-08-13T19:54:55.889106467+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38788: remote error: tls: bad certificate 2025-08-13T19:54:55.909333034+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38796: remote error: tls: bad certificate 2025-08-13T19:54:55.926467863+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38806: remote error: tls: bad certificate 2025-08-13T19:54:55.945926278+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38810: remote error: tls: bad certificate 2025-08-13T19:54:55.973565937+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38816: remote error: tls: bad certificate 2025-08-13T19:54:55.999894898+00:00 stderr F 2025/08/13 19:54:55 http: TLS handshake error from 127.0.0.1:38826: remote error: tls: bad certificate 2025-08-13T19:54:56.016873703+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38838: remote error: tls: bad certificate 2025-08-13T19:54:56.035196045+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38848: remote error: tls: bad certificate 2025-08-13T19:54:56.051654435+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38856: remote error: tls: bad certificate 2025-08-13T19:54:56.067487627+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38862: remote error: tls: bad certificate 2025-08-13T19:54:56.088916998+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38878: remote error: tls: bad certificate 
2025-08-13T19:54:56.111201194+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38884: remote error: tls: bad certificate 2025-08-13T19:54:56.127637483+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38894: remote error: tls: bad certificate 2025-08-13T19:54:56.142056813+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38906: remote error: tls: bad certificate 2025-08-13T19:54:56.166355907+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38912: remote error: tls: bad certificate 2025-08-13T19:54:56.190515486+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38922: remote error: tls: bad certificate 2025-08-13T19:54:56.207329826+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38924: remote error: tls: bad certificate 2025-08-13T19:54:56.224884987+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38938: remote error: tls: bad certificate 2025-08-13T19:54:56.248384287+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38948: remote error: tls: bad certificate 2025-08-13T19:54:56.266689340+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38958: remote error: tls: bad certificate 2025-08-13T19:54:56.282990775+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38964: remote error: tls: bad certificate 2025-08-13T19:54:56.298635591+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38968: remote error: tls: bad certificate 2025-08-13T19:54:56.315967746+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38972: remote error: tls: bad certificate 2025-08-13T19:54:56.333487666+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38984: remote error: tls: bad certificate 2025-08-13T19:54:56.350389798+00:00 stderr F 2025/08/13 19:54:56 http: TLS 
handshake error from 127.0.0.1:38992: remote error: tls: bad certificate 2025-08-13T19:54:56.368765802+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38994: remote error: tls: bad certificate 2025-08-13T19:54:56.384156131+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:38996: remote error: tls: bad certificate 2025-08-13T19:54:56.419667275+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39004: remote error: tls: bad certificate 2025-08-13T19:54:56.461618872+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39012: remote error: tls: bad certificate 2025-08-13T19:54:56.501147229+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39024: remote error: tls: bad certificate 2025-08-13T19:54:56.543489418+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39036: remote error: tls: bad certificate 2025-08-13T19:54:56.581312257+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39042: remote error: tls: bad certificate 2025-08-13T19:54:56.620140895+00:00 stderr F 2025/08/13 19:54:56 http: TLS handshake error from 127.0.0.1:39048: remote error: tls: bad certificate 2025-08-13T19:55:02.072369324+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49228: remote error: tls: bad certificate 2025-08-13T19:55:02.095266718+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49244: remote error: tls: bad certificate 2025-08-13T19:55:02.121954589+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49250: remote error: tls: bad certificate 2025-08-13T19:55:02.145891702+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49256: remote error: tls: bad certificate 2025-08-13T19:55:02.181856968+00:00 stderr F 2025/08/13 19:55:02 http: TLS handshake error from 127.0.0.1:49268: remote error: tls: bad certificate 
2025-08-13T19:55:03.229036497+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49270: remote error: tls: bad certificate 2025-08-13T19:55:03.245168697+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49278: remote error: tls: bad certificate 2025-08-13T19:55:03.263196582+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49286: remote error: tls: bad certificate 2025-08-13T19:55:03.281314219+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49296: remote error: tls: bad certificate 2025-08-13T19:55:03.300451624+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49310: remote error: tls: bad certificate 2025-08-13T19:55:03.319897959+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49312: remote error: tls: bad certificate 2025-08-13T19:55:03.347261069+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49316: remote error: tls: bad certificate 2025-08-13T19:55:03.364128491+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49330: remote error: tls: bad certificate 2025-08-13T19:55:03.379416757+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49338: remote error: tls: bad certificate 2025-08-13T19:55:03.394573079+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49344: remote error: tls: bad certificate 2025-08-13T19:55:03.415680942+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49358: remote error: tls: bad certificate 2025-08-13T19:55:03.431020299+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49366: remote error: tls: bad certificate 2025-08-13T19:55:03.450436733+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49368: remote error: tls: bad certificate 2025-08-13T19:55:03.468695354+00:00 stderr F 2025/08/13 19:55:03 http: TLS 
handshake error from 127.0.0.1:49380: remote error: tls: bad certificate 2025-08-13T19:55:03.486017728+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49392: remote error: tls: bad certificate 2025-08-13T19:55:03.552110164+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49402: remote error: tls: bad certificate 2025-08-13T19:55:03.569330636+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49404: remote error: tls: bad certificate 2025-08-13T19:55:03.586444334+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49418: remote error: tls: bad certificate 2025-08-13T19:55:03.600984759+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49424: remote error: tls: bad certificate 2025-08-13T19:55:03.618684434+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49436: remote error: tls: bad certificate 2025-08-13T19:55:03.634714241+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49448: remote error: tls: bad certificate 2025-08-13T19:55:03.651106139+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49460: remote error: tls: bad certificate 2025-08-13T19:55:03.669949057+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49464: remote error: tls: bad certificate 2025-08-13T19:55:03.686266662+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49468: remote error: tls: bad certificate 2025-08-13T19:55:03.703419112+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49482: remote error: tls: bad certificate 2025-08-13T19:55:03.724175074+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49484: remote error: tls: bad certificate 2025-08-13T19:55:03.740313314+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49492: remote error: tls: bad certificate 
2025-08-13T19:55:03.756300640+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49508: remote error: tls: bad certificate 2025-08-13T19:55:03.778144294+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49510: remote error: tls: bad certificate 2025-08-13T19:55:03.796837767+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49518: remote error: tls: bad certificate 2025-08-13T19:55:03.811734952+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49526: remote error: tls: bad certificate 2025-08-13T19:55:03.828271844+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49528: remote error: tls: bad certificate 2025-08-13T19:55:03.843378505+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49534: remote error: tls: bad certificate 2025-08-13T19:55:03.861767820+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49542: remote error: tls: bad certificate 2025-08-13T19:55:03.897177870+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49556: remote error: tls: bad certificate 2025-08-13T19:55:03.917193541+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49558: remote error: tls: bad certificate 2025-08-13T19:55:03.933047284+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49564: remote error: tls: bad certificate 2025-08-13T19:55:03.950366808+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49574: remote error: tls: bad certificate 2025-08-13T19:55:03.971187182+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49580: remote error: tls: bad certificate 2025-08-13T19:55:03.987624301+00:00 stderr F 2025/08/13 19:55:03 http: TLS handshake error from 127.0.0.1:49590: remote error: tls: bad certificate 2025-08-13T19:55:04.003668679+00:00 stderr F 2025/08/13 19:55:04 http: TLS 
handshake error from 127.0.0.1:49600: remote error: tls: bad certificate 2025-08-13T19:55:04.019499970+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49610: remote error: tls: bad certificate 2025-08-13T19:55:04.035059394+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49626: remote error: tls: bad certificate 2025-08-13T19:55:04.050753602+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49638: remote error: tls: bad certificate 2025-08-13T19:55:04.067880621+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49640: remote error: tls: bad certificate 2025-08-13T19:55:04.097679831+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49650: remote error: tls: bad certificate 2025-08-13T19:55:04.114539012+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49654: remote error: tls: bad certificate 2025-08-13T19:55:04.129129608+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49670: remote error: tls: bad certificate 2025-08-13T19:55:04.142750187+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49682: remote error: tls: bad certificate 2025-08-13T19:55:04.160052891+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49686: remote error: tls: bad certificate 2025-08-13T19:55:04.176010726+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49690: remote error: tls: bad certificate 2025-08-13T19:55:04.190972713+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49694: remote error: tls: bad certificate 2025-08-13T19:55:04.206735953+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49702: remote error: tls: bad certificate 2025-08-13T19:55:04.226151637+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49706: remote error: tls: bad certificate 
2025-08-13T19:55:04.240415524+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49722: remote error: tls: bad certificate 2025-08-13T19:55:04.254145705+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49724: remote error: tls: bad certificate 2025-08-13T19:55:04.270585565+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49730: remote error: tls: bad certificate 2025-08-13T19:55:04.292159220+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49738: remote error: tls: bad certificate 2025-08-13T19:55:04.309279679+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49740: remote error: tls: bad certificate 2025-08-13T19:55:04.326258183+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49752: remote error: tls: bad certificate 2025-08-13T19:55:04.344220726+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49766: remote error: tls: bad certificate 2025-08-13T19:55:04.362106736+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49772: remote error: tls: bad certificate 2025-08-13T19:55:04.378886265+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49776: remote error: tls: bad certificate 2025-08-13T19:55:04.395214081+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49780: remote error: tls: bad certificate 2025-08-13T19:55:04.412095992+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49786: remote error: tls: bad certificate 2025-08-13T19:55:04.428080138+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49798: remote error: tls: bad certificate 2025-08-13T19:55:04.443888319+00:00 stderr F 2025/08/13 19:55:04 http: TLS handshake error from 127.0.0.1:49812: remote error: tls: bad certificate 2025-08-13T19:55:05.227723685+00:00 stderr F 2025/08/13 19:55:05 http: TLS 
handshake error from 127.0.0.1:49822: remote error: tls: bad certificate 2025-08-13T19:55:05.247336744+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49836: remote error: tls: bad certificate 2025-08-13T19:55:05.263668160+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49840: remote error: tls: bad certificate 2025-08-13T19:55:05.298717480+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49846: remote error: tls: bad certificate 2025-08-13T19:55:05.324918628+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49862: remote error: tls: bad certificate 2025-08-13T19:55:05.352543047+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49876: remote error: tls: bad certificate 2025-08-13T19:55:05.369881471+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49886: remote error: tls: bad certificate 2025-08-13T19:55:05.390610883+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49900: remote error: tls: bad certificate 2025-08-13T19:55:05.415905724+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49902: remote error: tls: bad certificate 2025-08-13T19:55:05.433602269+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49914: remote error: tls: bad certificate 2025-08-13T19:55:05.449245916+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49918: remote error: tls: bad certificate 2025-08-13T19:55:05.479003685+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49932: remote error: tls: bad certificate 2025-08-13T19:55:05.495592618+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49948: remote error: tls: bad certificate 2025-08-13T19:55:05.510164614+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49960: remote error: tls: bad certificate 
2025-08-13T19:55:05.525165502+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49976: remote error: tls: bad certificate 2025-08-13T19:55:05.544950246+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:49992: remote error: tls: bad certificate 2025-08-13T19:55:05.563106044+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50008: remote error: tls: bad certificate 2025-08-13T19:55:05.579723289+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50014: remote error: tls: bad certificate 2025-08-13T19:55:05.600977585+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50024: remote error: tls: bad certificate 2025-08-13T19:55:05.621222902+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50038: remote error: tls: bad certificate 2025-08-13T19:55:05.643762036+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50050: remote error: tls: bad certificate 2025-08-13T19:55:05.661943755+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50060: remote error: tls: bad certificate 2025-08-13T19:55:05.683520490+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50074: remote error: tls: bad certificate 2025-08-13T19:55:05.705586820+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50078: remote error: tls: bad certificate 2025-08-13T19:55:05.727709081+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50088: remote error: tls: bad certificate 2025-08-13T19:55:05.744994374+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50090: remote error: tls: bad certificate 2025-08-13T19:55:05.763105891+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50096: remote error: tls: bad certificate 2025-08-13T19:55:05.780056955+00:00 stderr F 2025/08/13 19:55:05 http: TLS 
handshake error from 127.0.0.1:50104: remote error: tls: bad certificate 2025-08-13T19:55:05.798433359+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50114: remote error: tls: bad certificate 2025-08-13T19:55:05.820980692+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50128: remote error: tls: bad certificate 2025-08-13T19:55:05.844742590+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50142: remote error: tls: bad certificate 2025-08-13T19:55:05.859925023+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50150: remote error: tls: bad certificate 2025-08-13T19:55:05.877955148+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50160: remote error: tls: bad certificate 2025-08-13T19:55:05.896653292+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50166: remote error: tls: bad certificate 2025-08-13T19:55:05.915192051+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50170: remote error: tls: bad certificate 2025-08-13T19:55:05.932388981+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50174: remote error: tls: bad certificate 2025-08-13T19:55:05.950755436+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50176: remote error: tls: bad certificate 2025-08-13T19:55:05.967581306+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50180: remote error: tls: bad certificate 2025-08-13T19:55:05.986260369+00:00 stderr F 2025/08/13 19:55:05 http: TLS handshake error from 127.0.0.1:50196: remote error: tls: bad certificate 2025-08-13T19:55:06.001455402+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50202: remote error: tls: bad certificate 2025-08-13T19:55:06.025355184+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50212: remote error: tls: bad certificate 
2025-08-13T19:55:06.042577586+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50222: remote error: tls: bad certificate 2025-08-13T19:55:06.056247496+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50236: remote error: tls: bad certificate 2025-08-13T19:55:06.072909791+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50248: remote error: tls: bad certificate 2025-08-13T19:55:06.092756498+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50264: remote error: tls: bad certificate 2025-08-13T19:55:06.109374162+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50270: remote error: tls: bad certificate 2025-08-13T19:55:06.123302560+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50284: remote error: tls: bad certificate 2025-08-13T19:55:06.139014788+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50286: remote error: tls: bad certificate 2025-08-13T19:55:06.155313473+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50302: remote error: tls: bad certificate 2025-08-13T19:55:06.169356584+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50314: remote error: tls: bad certificate 2025-08-13T19:55:06.182737056+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50316: remote error: tls: bad certificate 2025-08-13T19:55:06.197899328+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50326: remote error: tls: bad certificate 2025-08-13T19:55:06.215977304+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50330: remote error: tls: bad certificate 2025-08-13T19:55:06.234231425+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50332: remote error: tls: bad certificate 2025-08-13T19:55:06.253554847+00:00 stderr F 2025/08/13 19:55:06 http: TLS 
handshake error from 127.0.0.1:50344: remote error: tls: bad certificate 2025-08-13T19:55:06.270428588+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50358: remote error: tls: bad certificate 2025-08-13T19:55:06.286947419+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50374: remote error: tls: bad certificate 2025-08-13T19:55:06.303024518+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50380: remote error: tls: bad certificate 2025-08-13T19:55:06.329432042+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50384: remote error: tls: bad certificate 2025-08-13T19:55:06.348380412+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50390: remote error: tls: bad certificate 2025-08-13T19:55:06.366329395+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50392: remote error: tls: bad certificate 2025-08-13T19:55:06.386994805+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50408: remote error: tls: bad certificate 2025-08-13T19:55:06.425499093+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50420: remote error: tls: bad certificate 2025-08-13T19:55:06.465500825+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50422: remote error: tls: bad certificate 2025-08-13T19:55:06.506290269+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50434: remote error: tls: bad certificate 2025-08-13T19:55:06.545177628+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50444: remote error: tls: bad certificate 2025-08-13T19:55:06.594091194+00:00 stderr F 2025/08/13 19:55:06 http: TLS handshake error from 127.0.0.1:50460: remote error: tls: bad certificate 2025-08-13T19:55:12.331142743+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46964: remote error: tls: bad certificate 
2025-08-13T19:55:12.354480659+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46968: remote error: tls: bad certificate
2025-08-13T19:55:12.378760032+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46982: remote error: tls: bad certificate
2025-08-13T19:55:12.403076225+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46992: remote error: tls: bad certificate
2025-08-13T19:55:12.426234696+00:00 stderr F 2025/08/13 19:55:12 http: TLS handshake error from 127.0.0.1:46996: remote error: tls: bad certificate
2025-08-13T19:55:15.225173818+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47006: remote error: tls: bad certificate
2025-08-13T19:55:15.285497839+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47020: remote error: tls: bad certificate
2025-08-13T19:55:15.311115810+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47036: remote error: tls: bad certificate
2025-08-13T19:55:15.337115672+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47048: remote error: tls: bad certificate
2025-08-13T19:55:15.358567974+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47050: remote error: tls: bad certificate
2025-08-13T19:55:15.379595424+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47052: remote error: tls: bad certificate
2025-08-13T19:55:15.394880050+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47066: remote error: tls: bad certificate
2025-08-13T19:55:15.408566091+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47078: remote error: tls: bad certificate
2025-08-13T19:55:15.423299021+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47092: remote error: tls: bad certificate
2025-08-13T19:55:15.439522234+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47106: remote error: tls: bad certificate
2025-08-13T19:55:15.454868982+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47110: remote error: tls: bad certificate
2025-08-13T19:55:15.471000052+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47114: remote error: tls: bad certificate
2025-08-13T19:55:15.488863622+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47122: remote error: tls: bad certificate
2025-08-13T19:55:15.515423570+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47128: remote error: tls: bad certificate
2025-08-13T19:55:15.534169575+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47134: remote error: tls: bad certificate
2025-08-13T19:55:15.549044499+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47150: remote error: tls: bad certificate
2025-08-13T19:55:15.564432328+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47158: remote error: tls: bad certificate
2025-08-13T19:55:15.583125322+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47160: remote error: tls: bad certificate
2025-08-13T19:55:15.596480623+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47168: remote error: tls: bad certificate
2025-08-13T19:55:15.615202497+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47176: remote error: tls: bad certificate
2025-08-13T19:55:15.636354100+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47180: remote error: tls: bad certificate
2025-08-13T19:55:15.654612161+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47192: remote error: tls: bad certificate
2025-08-13T19:55:15.674444857+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47206: remote error: tls: bad certificate
2025-08-13T19:55:15.691401601+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47214: remote error: tls: bad certificate
2025-08-13T19:55:15.712371689+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47220: remote error: tls: bad certificate
2025-08-13T19:55:15.730596589+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47226: remote error: tls: bad certificate
2025-08-13T19:55:15.753963686+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47230: remote error: tls: bad certificate
2025-08-13T19:55:15.774295636+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47232: remote error: tls: bad certificate
2025-08-13T19:55:15.796067738+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47242: remote error: tls: bad certificate
2025-08-13T19:55:15.820903106+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47248: remote error: tls: bad certificate
2025-08-13T19:55:15.840719152+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47256: remote error: tls: bad certificate
2025-08-13T19:55:15.861375081+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47260: remote error: tls: bad certificate
2025-08-13T19:55:15.881214097+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47270: remote error: tls: bad certificate
2025-08-13T19:55:15.900874318+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47282: remote error: tls: bad certificate
2025-08-13T19:55:15.930890674+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47292: remote error: tls: bad certificate
2025-08-13T19:55:15.952244104+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47304: remote error: tls: bad certificate
2025-08-13T19:55:15.969910698+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47306: remote error: tls: bad certificate
2025-08-13T19:55:15.985747920+00:00 stderr F 2025/08/13 19:55:15 http: TLS handshake error from 127.0.0.1:47312: remote error: tls: bad certificate
2025-08-13T19:55:16.001598612+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47324: remote error: tls: bad certificate
2025-08-13T19:55:16.021975563+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47328: remote error: tls: bad certificate
2025-08-13T19:55:16.040118041+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47342: remote error: tls: bad certificate
2025-08-13T19:55:16.056385355+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47356: remote error: tls: bad certificate
2025-08-13T19:55:16.076656634+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47366: remote error: tls: bad certificate
2025-08-13T19:55:16.094177364+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47374: remote error: tls: bad certificate
2025-08-13T19:55:16.116261034+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47386: remote error: tls: bad certificate
2025-08-13T19:55:16.142067610+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47392: remote error: tls: bad certificate
2025-08-13T19:55:16.158906441+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47402: remote error: tls: bad certificate
2025-08-13T19:55:16.174759553+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47414: remote error: tls: bad certificate
2025-08-13T19:55:16.192625763+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47424: remote error: tls: bad certificate
2025-08-13T19:55:16.211655506+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47440: remote error: tls: bad certificate
2025-08-13T19:55:16.234517378+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47452: remote error: tls: bad certificate
2025-08-13T19:55:16.253576982+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47468: remote error: tls: bad certificate
2025-08-13T19:55:16.275850697+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47474: remote error: tls: bad certificate
2025-08-13T19:55:16.296477066+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47476: remote error: tls: bad certificate
2025-08-13T19:55:16.312267216+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47488: remote error: tls: bad certificate
2025-08-13T19:55:16.329667203+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47502: remote error: tls: bad certificate
2025-08-13T19:55:16.348455249+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47516: remote error: tls: bad certificate
2025-08-13T19:55:16.365290759+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47520: remote error: tls: bad certificate
2025-08-13T19:55:16.382462859+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47526: remote error: tls: bad certificate
2025-08-13T19:55:16.399512616+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47528: remote error: tls: bad certificate
2025-08-13T19:55:16.414212135+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47544: remote error: tls: bad certificate
2025-08-13T19:55:16.430592093+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47552: remote error: tls: bad certificate
2025-08-13T19:55:16.449546983+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47554: remote error: tls: bad certificate
2025-08-13T19:55:16.468413702+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47558: remote error: tls: bad certificate
2025-08-13T19:55:16.489704129+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47562: remote error: tls: bad certificate
2025-08-13T19:55:16.508568797+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47574: remote error: tls: bad certificate
2025-08-13T19:55:16.528029303+00:00 stderr F 2025/08/13 19:55:16 http: TLS handshake error from 127.0.0.1:47588: remote error: tls: bad certificate
2025-08-13T19:55:22.724258620+00:00 stderr F 2025/08/13 19:55:22 http: TLS handshake error from 127.0.0.1:60124: remote error: tls: bad certificate
2025-08-13T19:55:22.747266076+00:00 stderr F 2025/08/13 19:55:22 http: TLS handshake error from 127.0.0.1:60134: remote error: tls: bad certificate
2025-08-13T19:55:22.773129974+00:00 stderr F 2025/08/13 19:55:22 http: TLS handshake error from 127.0.0.1:60142: remote error: tls: bad certificate
2025-08-13T19:55:22.796035168+00:00 stderr F 2025/08/13 19:55:22 http: TLS handshake error from 127.0.0.1:60154: remote error: tls: bad certificate
2025-08-13T19:55:22.822198264+00:00 stderr F 2025/08/13 19:55:22 http: TLS handshake error from 127.0.0.1:60158: remote error: tls: bad certificate
2025-08-13T19:55:25.226013992+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60168: remote error: tls: bad certificate
2025-08-13T19:55:25.243915683+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60176: remote error: tls: bad certificate
2025-08-13T19:55:25.263236734+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60184: remote error: tls: bad certificate
2025-08-13T19:55:25.279889680+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60194: remote error: tls: bad certificate
2025-08-13T19:55:25.311380298+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60204: remote error: tls: bad certificate
2025-08-13T19:55:25.335893618+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60216: remote error: tls: bad certificate
2025-08-13T19:55:25.352574434+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60232: remote error: tls: bad certificate
2025-08-13T19:55:25.374067167+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60242: remote error: tls: bad certificate
2025-08-13T19:55:25.390984790+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60250: remote error: tls: bad certificate
2025-08-13T19:55:25.408735036+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60262: remote error: tls: bad certificate
2025-08-13T19:55:25.425133544+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60270: remote error: tls: bad certificate
2025-08-13T19:55:25.444543878+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60280: remote error: tls: bad certificate
2025-08-13T19:55:25.460848663+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60296: remote error: tls: bad certificate
2025-08-13T19:55:25.476048517+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60304: remote error: tls: bad certificate
2025-08-13T19:55:25.498246390+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60310: remote error: tls: bad certificate
2025-08-13T19:55:25.522613145+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60318: remote error: tls: bad certificate
2025-08-13T19:55:25.542240625+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60330: remote error: tls: bad certificate
2025-08-13T19:55:25.559260111+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60342: remote error: tls: bad certificate
2025-08-13T19:55:25.574486505+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60354: remote error: tls: bad certificate
2025-08-13T19:55:25.593993892+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60368: remote error: tls: bad certificate
2025-08-13T19:55:25.614568859+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60374: remote error: tls: bad certificate
2025-08-13T19:55:25.631331347+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60376: remote error: tls: bad certificate
2025-08-13T19:55:25.647363455+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60382: remote error: tls: bad certificate
2025-08-13T19:55:25.666637255+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60398: remote error: tls: bad certificate
2025-08-13T19:55:25.682739784+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60404: remote error: tls: bad certificate
2025-08-13T19:55:25.698456613+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60408: remote error: tls: bad certificate
2025-08-13T19:55:25.715578541+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60416: remote error: tls: bad certificate
2025-08-13T19:55:25.731499376+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60430: remote error: tls: bad certificate
2025-08-13T19:55:25.748157641+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60444: remote error: tls: bad certificate
2025-08-13T19:55:25.764416645+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60460: remote error: tls: bad certificate
2025-08-13T19:55:25.783475489+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60468: remote error: tls: bad certificate
2025-08-13T19:55:25.806465025+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60470: remote error: tls: bad certificate
2025-08-13T19:55:25.827758752+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60482: remote error: tls: bad certificate
2025-08-13T19:55:25.845141138+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60484: remote error: tls: bad certificate
2025-08-13T19:55:25.865158879+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60492: remote error: tls: bad certificate
2025-08-13T19:55:25.886289762+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60496: remote error: tls: bad certificate
2025-08-13T19:55:25.906937921+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60504: remote error: tls: bad certificate
2025-08-13T19:55:25.923766471+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60520: remote error: tls: bad certificate
2025-08-13T19:55:25.942594599+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60522: remote error: tls: bad certificate
2025-08-13T19:55:25.961747565+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60528: remote error: tls: bad certificate
2025-08-13T19:55:25.982764275+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60540: remote error: tls: bad certificate
2025-08-13T19:55:25.998073442+00:00 stderr F 2025/08/13 19:55:25 http: TLS handshake error from 127.0.0.1:60550: remote error: tls: bad certificate
2025-08-13T19:55:26.019044340+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60564: remote error: tls: bad certificate
2025-08-13T19:55:26.036205540+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60578: remote error: tls: bad certificate
2025-08-13T19:55:26.052680450+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60592: remote error: tls: bad certificate
2025-08-13T19:55:26.070468387+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60594: remote error: tls: bad certificate
2025-08-13T19:55:26.088758559+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60602: remote error: tls: bad certificate
2025-08-13T19:55:26.107552705+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60608: remote error: tls: bad certificate
2025-08-13T19:55:26.126840926+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60622: remote error: tls: bad certificate
2025-08-13T19:55:26.144071408+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60626: remote error: tls: bad certificate
2025-08-13T19:55:26.163052349+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60634: remote error: tls: bad certificate
2025-08-13T19:55:26.178105229+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60640: remote error: tls: bad certificate
2025-08-13T19:55:26.194324331+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60652: remote error: tls: bad certificate
2025-08-13T19:55:26.215030852+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60658: remote error: tls: bad certificate
2025-08-13T19:55:26.233077127+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60662: remote error: tls: bad certificate
2025-08-13T19:55:26.246435338+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60666: remote error: tls: bad certificate
2025-08-13T19:55:26.263378962+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60670: remote error: tls: bad certificate
2025-08-13T19:55:26.278150433+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60686: remote error: tls: bad certificate
2025-08-13T19:55:26.298293728+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60688: remote error: tls: bad certificate
2025-08-13T19:55:26.316858798+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60704: remote error: tls: bad certificate
2025-08-13T19:55:26.338270129+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60718: remote error: tls: bad certificate
2025-08-13T19:55:26.355897422+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60720: remote error: tls: bad certificate
2025-08-13T19:55:26.373379100+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60722: remote error: tls: bad certificate
2025-08-13T19:55:26.393035921+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60730: remote error: tls: bad certificate
2025-08-13T19:55:26.409610044+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60742: remote error: tls: bad certificate
2025-08-13T19:55:26.426176577+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60756: remote error: tls: bad certificate
2025-08-13T19:55:26.445216310+00:00 stderr F 2025/08/13 19:55:26 http: TLS handshake error from 127.0.0.1:60770: remote error: tls: bad certificate
2025-08-13T19:55:30.187158407+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40300: remote error: tls: bad certificate
2025-08-13T19:55:30.210267796+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40304: remote error: tls: bad certificate
2025-08-13T19:55:30.227160748+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40316: remote error: tls: bad certificate
2025-08-13T19:55:30.250921076+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40322: remote error: tls: bad certificate
2025-08-13T19:55:30.276694852+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40332: remote error: tls: bad certificate
2025-08-13T19:55:30.297585858+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40340: remote error: tls: bad certificate
2025-08-13T19:55:30.313942174+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40344: remote error: tls: bad certificate
2025-08-13T19:55:30.330537588+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40352: remote error: tls: bad certificate
2025-08-13T19:55:30.350941500+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40360: remote error: tls: bad certificate
2025-08-13T19:55:30.369636034+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40370: remote error: tls: bad certificate
2025-08-13T19:55:30.388164132+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40382: remote error: tls: bad certificate
2025-08-13T19:55:30.405098565+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40394: remote error: tls: bad certificate
2025-08-13T19:55:30.422513152+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40400: remote error: tls: bad certificate
2025-08-13T19:55:30.443720767+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40410: remote error: tls: bad certificate
2025-08-13T19:55:30.462170194+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40426: remote error: tls: bad certificate
2025-08-13T19:55:30.478958423+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40432: remote error: tls: bad certificate
2025-08-13T19:55:30.493470887+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40448: remote error: tls: bad certificate
2025-08-13T19:55:30.508128785+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40464: remote error: tls: bad certificate
2025-08-13T19:55:30.524500612+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40470: remote error: tls: bad certificate
2025-08-13T19:55:30.539152440+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40474: remote error: tls: bad certificate
2025-08-13T19:55:30.555514727+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40478: remote error: tls: bad certificate
2025-08-13T19:55:30.572329757+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40488: remote error: tls: bad certificate
2025-08-13T19:55:30.593456840+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40490: remote error: tls: bad certificate
2025-08-13T19:55:30.610402143+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40502: remote error: tls: bad certificate
2025-08-13T19:55:30.634690056+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40508: remote error: tls: bad certificate
2025-08-13T19:55:30.655562932+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40514: remote error: tls: bad certificate
2025-08-13T19:55:30.674533753+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40528: remote error: tls: bad certificate
2025-08-13T19:55:30.695425009+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40534: remote error: tls: bad certificate
2025-08-13T19:55:30.713093333+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40540: remote error: tls: bad certificate
2025-08-13T19:55:30.731038856+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40544: remote error: tls: bad certificate
2025-08-13T19:55:30.746182788+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40548: remote error: tls: bad certificate
2025-08-13T19:55:30.761851995+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40550: remote error: tls: bad certificate
2025-08-13T19:55:30.779347764+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40564: remote error: tls: bad certificate
2025-08-13T19:55:30.803917115+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40578: remote error: tls: bad certificate
2025-08-13T19:55:30.825400208+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40584: remote error: tls: bad certificate
2025-08-13T19:55:30.843969128+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40586: remote error: tls: bad certificate
2025-08-13T19:55:30.861194199+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40590: remote error: tls: bad certificate
2025-08-13T19:55:30.877293299+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40594: remote error: tls: bad certificate
2025-08-13T19:55:30.895121527+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40598: remote error: tls: bad certificate
2025-08-13T19:55:30.910407503+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40606: remote error: tls: bad certificate
2025-08-13T19:55:30.935099758+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40608: remote error: tls: bad certificate
2025-08-13T19:55:30.952342530+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40614: remote error: tls: bad certificate
2025-08-13T19:55:30.967850072+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40622: remote error: tls: bad certificate
2025-08-13T19:55:30.987941256+00:00 stderr F 2025/08/13 19:55:30 http: TLS handshake error from 127.0.0.1:40634: remote error: tls: bad certificate
2025-08-13T19:55:31.004267672+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40648: remote error: tls: bad certificate
2025-08-13T19:55:31.019576318+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40664: remote error: tls: bad certificate
2025-08-13T19:55:31.035443981+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40672: remote error: tls: bad certificate
2025-08-13T19:55:31.050174521+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40676: remote error: tls: bad certificate
2025-08-13T19:55:31.075765112+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40678: remote error: tls: bad certificate
2025-08-13T19:55:31.095842565+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40694: remote error: tls: bad certificate
2025-08-13T19:55:31.111585564+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40700: remote error: tls: bad certificate
2025-08-13T19:55:31.130693119+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40706: remote error: tls: bad certificate
2025-08-13T19:55:31.149737572+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40720: remote error: tls: bad certificate
2025-08-13T19:55:31.167994773+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40734: remote error: tls: bad certificate
2025-08-13T19:55:31.186055539+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40740: remote error: tls: bad certificate
2025-08-13T19:55:31.202670953+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40748: remote error: tls: bad certificate
2025-08-13T19:55:31.222940151+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40750: remote error: tls: bad certificate
2025-08-13T19:55:31.241208052+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40752: remote error: tls: bad certificate
2025-08-13T19:55:31.260454701+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40766: remote error: tls: bad certificate
2025-08-13T19:55:31.276256072+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40782: remote error: tls: bad certificate
2025-08-13T19:55:31.296312925+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40784: remote error: tls: bad certificate
2025-08-13T19:55:31.320436283+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40798: remote error: tls: bad certificate
2025-08-13T19:55:31.341494914+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40804: remote error: tls: bad certificate
2025-08-13T19:55:31.360622269+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40810: remote error: tls: bad certificate
2025-08-13T19:55:31.385463358+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40824: remote error: tls: bad certificate
2025-08-13T19:55:31.410011959+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40830: remote error: tls: bad certificate
2025-08-13T19:55:31.429662579+00:00 stderr F 2025/08/13 19:55:31 http: TLS handshake error from 127.0.0.1:40838: remote error: tls: bad certificate
2025-08-13T19:55:33.027972933+00:00 stderr F 2025/08/13 19:55:33 http: TLS handshake error from 127.0.0.1:40840: remote error: tls: bad certificate
2025-08-13T19:55:33.057389732+00:00 stderr F 2025/08/13 19:55:33 http: TLS handshake error from 127.0.0.1:40850: remote error: tls: bad certificate
2025-08-13T19:55:33.085435843+00:00 stderr F 2025/08/13 19:55:33 http: TLS handshake error from 127.0.0.1:40854: remote error: tls: bad certificate
2025-08-13T19:55:33.104620350+00:00 stderr F 2025/08/13 19:55:33 http: TLS handshake error from 127.0.0.1:40866: remote error: tls: bad certificate
2025-08-13T19:55:33.126115503+00:00 stderr F 2025/08/13 19:55:33 http: TLS handshake error from 127.0.0.1:40870: remote error: tls: bad certificate
2025-08-13T19:55:35.227045600+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40884: remote error: tls: bad certificate
2025-08-13T19:55:35.246573448+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40898: remote error: tls: bad certificate
2025-08-13T19:55:35.268547395+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40906: remote error: tls: bad certificate
2025-08-13T19:55:35.284942182+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40914: remote error: tls: bad certificate
2025-08-13T19:55:35.302182604+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40930: remote error: tls: bad certificate
2025-08-13T19:55:35.322002280+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40936: remote error: tls: bad certificate
2025-08-13T19:55:35.344218634+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40948: remote error: tls: bad certificate
2025-08-13T19:55:35.361602570+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40950: remote error: tls: bad certificate
2025-08-13T19:55:35.383517945+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40966: remote error: tls: bad certificate
2025-08-13T19:55:35.400439768+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40976: remote error: tls: bad certificate
2025-08-13T19:55:35.417424883+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40992: remote error: tls: bad certificate
2025-08-13T19:55:35.436431915+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:40994: remote error: tls: bad certificate
2025-08-13T19:55:35.454947543+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41010: remote error: tls: bad certificate
2025-08-13T19:55:35.471143235+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41014: remote error: tls: bad certificate
2025-08-13T19:55:35.502896571+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41022: remote error: tls: bad certificate
2025-08-13T19:55:35.575462881+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41030: remote error: tls: bad certificate
2025-08-13T19:55:35.575754439+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41026: remote error: tls: bad certificate
2025-08-13T19:55:35.603601564+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41034: remote error: tls: bad certificate
2025-08-13T19:55:35.621914196+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41044: remote error: tls: bad certificate
2025-08-13T19:55:35.642222516+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41050: remote error: tls: bad certificate
2025-08-13T19:55:35.662885595+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41064: remote error: tls: bad certificate
2025-08-13T19:55:35.681107765+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41066: remote error: tls: bad certificate
2025-08-13T19:55:35.699474799+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41074: remote error: tls: bad certificate
2025-08-13T19:55:35.716397602+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41090: remote error: tls: bad certificate
2025-08-13T19:55:35.735190078+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41098: remote error: tls: bad certificate
2025-08-13T19:55:35.753157131+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41110: remote error: tls: bad certificate
2025-08-13T19:55:35.772507463+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41118: remote error: tls: bad certificate
2025-08-13T19:55:35.786490422+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41132: remote error: tls: bad certificate
2025-08-13T19:55:35.803189179+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41142: remote error: tls: bad certificate
2025-08-13T19:55:35.819016400+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41156: remote error: tls: bad certificate
2025-08-13T19:55:35.844026044+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41168: remote error: tls: bad certificate
2025-08-13T19:55:35.860004930+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41180: remote error: tls: bad certificate
2025-08-13T19:55:35.876589063+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41190: remote error: tls: bad certificate
2025-08-13T19:55:35.900619959+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41194: remote error: tls: bad certificate
2025-08-13T19:55:35.916129001+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41208: remote error: tls: bad certificate
2025-08-13T19:55:35.935918986+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41224: remote error: tls: bad certificate
2025-08-13T19:55:35.956083701+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41236: remote error: tls: bad certificate
2025-08-13T19:55:35.974952029+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41238: remote error: tls: bad certificate
2025-08-13T19:55:35.994320032+00:00 stderr F 2025/08/13 19:55:35 http: TLS handshake error from 127.0.0.1:41250: remote error: tls: bad certificate
2025-08-13T19:55:36.011561524+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41254: remote error: tls: bad certificate
2025-08-13T19:55:36.027715485+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41270: remote error: tls: bad certificate
2025-08-13T19:55:36.045503063+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41274: remote error: tls: bad certificate
2025-08-13T19:55:36.063375353+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41286: remote error: tls: bad certificate
2025-08-13T19:55:36.088994874+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41302: remote error: tls: bad certificate
2025-08-13T19:55:36.108676985+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41310: remote error: tls: bad certificate
2025-08-13T19:55:36.132025821+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41314: remote error: tls: bad certificate
2025-08-13T19:55:36.149121519+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41324: remote error: tls: bad certificate
2025-08-13T19:55:36.164097096+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41326: remote error: tls: bad certificate
2025-08-13T19:55:36.177856509+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41332: remote error: tls: bad certificate
2025-08-13T19:55:36.190974313+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41336: remote error: tls: bad certificate
2025-08-13T19:55:36.207574167+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41338: remote error: tls: bad certificate
2025-08-13T19:55:36.224421308+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41342: remote error: tls: bad certificate
2025-08-13T19:55:36.238508850+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41358: remote error: tls: bad certificate
2025-08-13T19:55:36.258044717+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41364: remote error: tls: bad certificate
2025-08-13T19:55:36.286055726+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41376: remote error: tls: bad certificate
2025-08-13T19:55:36.300371135+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41382: remote error: tls: bad certificate
2025-08-13T19:55:36.313492559+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41396: remote error: tls: bad certificate
2025-08-13T19:55:36.328174548+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41404: remote error: tls: bad certificate
2025-08-13T19:55:36.345859293+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41410: remote error: tls: bad certificate
2025-08-13T19:55:36.365382020+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41412: remote error: tls: bad certificate
2025-08-13T19:55:36.380294025+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41426: remote error: tls: bad certificate
2025-08-13T19:55:36.396448136+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41440: remote error: tls: bad certificate
2025-08-13T19:55:36.411076403+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41446: remote error: tls: bad certificate
2025-08-13T19:55:36.425283589+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41454: remote error: tls: bad certificate
2025-08-13T19:55:36.439167365+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41466: remote error: tls: bad certificate
2025-08-13T19:55:36.456911581+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41478: remote error: tls: bad certificate
2025-08-13T19:55:36.477905750+00:00 stderr F 2025/08/13 19:55:36 http: TLS handshake error from 127.0.0.1:41486: remote error: tls: bad certificate
2025-08-13T19:55:43.542359063+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39088: remote error: tls: bad certificate
2025-08-13T19:55:43.562344944+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39090: remote error: tls: bad certificate
2025-08-13T19:55:43.579504574+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39096: remote error: tls: bad certificate
2025-08-13T19:55:43.600220455+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39102: remote error: tls: bad certificate
2025-08-13T19:55:43.623744817+00:00 stderr F 2025/08/13 19:55:43 http: TLS handshake error from 127.0.0.1:39106: remote error: tls: bad certificate
2025-08-13T19:55:45.230706044+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39110: remote error: tls: bad certificate
2025-08-13T19:55:45.249200712+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39120: remote error: tls: bad certificate
2025-08-13T19:55:45.271050146+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39130: remote error: tls: bad certificate
2025-08-13T19:55:45.290850271+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39144: remote error: tls: bad certificate
2025-08-13T19:55:45.310144452+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39152: remote error: tls: bad certificate
2025-08-13T19:55:45.323583086+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39154: remote error: tls: bad certificate
2025-08-13T19:55:45.341274101+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39160: remote error: tls: bad certificate
2025-08-13T19:55:45.362244860+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39164: remote error: tls: bad certificate
2025-08-13T19:55:45.381071508+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39170: remote error: tls: bad certificate
2025-08-13T19:55:45.403018454+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39174: remote error: tls: bad certificate
2025-08-13T19:55:45.421341798+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39188: remote error: tls: bad certificate
2025-08-13T19:55:45.437453518+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39202: remote error: tls: bad certificate
2025-08-13T19:55:45.452191819+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39210: remote error: tls: bad certificate
2025-08-13T19:55:45.469058160+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39224: remote error: tls: bad certificate
2025-08-13T19:55:45.488306020+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39226: remote error: tls: bad certificate
2025-08-13T19:55:45.506644104+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39236: remote error: tls: bad certificate
2025-08-13T19:55:45.524887524+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39244: remote error: tls: bad certificate
2025-08-13T19:55:45.544297949+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39250: remote error: tls: bad certificate
2025-08-13T19:55:45.560512862+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39262: remote error: tls: bad certificate
2025-08-13T19:55:45.577144087+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39276: remote error: tls: bad certificate
2025-08-13T19:55:45.594452551+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39288: remote error: tls: bad certificate
2025-08-13T19:55:45.610481149+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39294: remote error: tls: bad certificate
2025-08-13T19:55:45.625259311+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39306: remote error: tls: bad certificate
2025-08-13T19:55:45.642920215+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39322: remote error: tls: bad certificate
2025-08-13T19:55:45.660028343+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39324: remote error: tls: bad certificate
2025-08-13T19:55:45.676483103+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39334: remote error: tls: bad certificate
2025-08-13T19:55:45.692538492+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39344: remote error: tls: bad certificate
2025-08-13T19:55:45.711744340+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39356: remote error: tls: bad certificate
2025-08-13T19:55:45.728074646+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39366: remote error: tls: bad certificate
2025-08-13T19:55:45.743032844+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39378: remote error: tls: bad certificate
2025-08-13T19:55:45.759152424+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39384: remote error: tls: bad certificate
2025-08-13T19:55:45.775766568+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39396: remote error: tls: bad certificate
2025-08-13T19:55:45.796582783+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39406: remote error: tls: bad certificate
2025-08-13T19:55:45.818418716+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39412: remote error: tls: bad certificate
2025-08-13T19:55:45.836564444+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39426: remote error: tls: bad certificate
2025-08-13T19:55:45.854195358+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39438: remote error: tls: bad certificate
2025-08-13T19:55:45.879333456+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39454: remote error: tls: bad certificate
2025-08-13T19:55:45.899477641+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39468: remote error: tls: bad certificate
2025-08-13T19:55:45.918489234+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39484: remote error: tls: bad certificate
2025-08-13T19:55:45.945675230+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39500: remote error: tls: bad certificate
2025-08-13T19:55:45.968765109+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39504: remote error: tls: bad certificate
2025-08-13T19:55:45.987233797+00:00 stderr F 2025/08/13 19:55:45 http: TLS handshake error from 127.0.0.1:39510: remote error: tls: bad certificate
2025-08-13T19:55:46.008955997+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39514: remote error: tls: bad certificate
2025-08-13T19:55:46.027873827+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39530: remote error: tls: bad certificate
2025-08-13T19:55:46.052034017+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39546: remote error: tls: bad certificate
2025-08-13T19:55:46.077634698+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39554: remote error: tls: bad certificate
2025-08-13T19:55:46.099153273+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39560: remote error: tls: bad certificate
2025-08-13T19:55:46.116990972+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39572: remote error: tls: bad certificate
2025-08-13T19:55:46.135143360+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39576: remote error: tls: bad certificate
2025-08-13T19:55:46.154638557+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39582: remote error: tls: bad certificate
2025-08-13T19:55:46.175216695+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39588: remote error: tls: bad certificate
2025-08-13T19:55:46.204208832+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39604: remote error: tls: bad certificate
2025-08-13T19:55:46.224925874+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39616: remote error: tls: bad certificate
2025-08-13T19:55:46.246351985+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39620: remote error: tls: bad certificate
2025-08-13T19:55:46.270031801+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39624: remote error: tls: bad certificate
2025-08-13T19:55:46.291949157+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39640: remote error: tls: bad certificate
2025-08-13T19:55:46.321080139+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39650: remote error: tls: bad certificate
2025-08-13T19:55:46.339222077+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39664: remote error: tls: bad certificate
2025-08-13T19:55:46.355744949+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39666: remote error: tls: bad certificate
2025-08-13T19:55:46.374868885+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39672: remote error: tls: bad certificate
2025-08-13T19:55:46.398128389+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39678: remote error: tls: bad certificate
2025-08-13T19:55:46.420764495+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39690: remote error: tls: bad certificate
2025-08-13T19:55:46.439880921+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39700: remote error: tls: bad certificate
2025-08-13T19:55:46.460968123+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39712: remote error: tls: bad certificate
2025-08-13T19:55:46.491667340+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39716: remote error: tls: bad certificate
2025-08-13T19:55:46.511737343+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39726: remote error: tls: bad certificate
2025-08-13T19:55:46.529936343+00:00 stderr F 2025/08/13 19:55:46 http: TLS handshake error from 127.0.0.1:39742: remote error: tls: bad certificate
2025-08-13T19:55:52.824556573+00:00 stderr F 2025/08/13 19:55:52 http: TLS handshake error from 127.0.0.1:44446: remote error: tls: bad certificate
2025-08-13T19:55:53.780167759+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44456: remote error: tls: bad certificate
2025-08-13T19:55:53.807762677+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44472: remote error: tls: bad certificate
2025-08-13T19:55:53.845223767+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44484: remote error: tls: bad certificate
2025-08-13T19:55:53.883531911+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44494: remote error: tls: bad certificate
2025-08-13T19:55:53.910349246+00:00 stderr F 2025/08/13 19:55:53 http: TLS handshake error from 127.0.0.1:44510: remote error: tls: bad certificate
2025-08-13T19:55:55.224732629+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44516: remote error: tls: bad certificate
2025-08-13T19:55:55.241277971+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44532: remote error: tls: bad certificate
2025-08-13T19:55:55.261690894+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44538: remote error: tls: bad certificate
2025-08-13T19:55:55.280748038+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44550: remote error: tls: bad certificate
2025-08-13T19:55:55.299029930+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44552: remote error: tls: bad certificate
2025-08-13T19:55:55.316595082+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44564: remote error: tls: bad certificate
2025-08-13T19:55:55.331861858+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44578: remote error: tls: bad certificate
2025-08-13T19:55:55.358071616+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44584: remote error: tls: bad certificate
2025-08-13T19:55:55.375025030+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44586: remote error: tls: bad certificate
2025-08-13T19:55:55.395235878+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44594: remote error: tls: bad certificate
2025-08-13T19:55:55.419379097+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44596: remote error: tls: bad certificate
2025-08-13T19:55:55.439173882+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44604: remote error: tls: bad certificate
2025-08-13T19:55:55.455989842+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44608: remote error: tls: bad certificate
2025-08-13T19:55:55.469614712+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44616: remote error: tls: bad certificate
2025-08-13T19:55:55.496532970+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44626: remote error: tls: bad certificate
2025-08-13T19:55:55.520431953+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44642: remote error: tls: bad certificate
2025-08-13T19:55:55.536882282+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44650: remote error: tls: bad certificate
2025-08-13T19:55:55.560878418+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44660: remote error: tls: bad certificate
2025-08-13T19:55:55.575270509+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44662: remote error: tls: bad certificate
2025-08-13T19:55:55.591161202+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44672: remote error: tls: bad certificate
2025-08-13T19:55:55.612307966+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44682: remote error: tls: bad certificate
2025-08-13T19:55:55.631002370+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44692: remote error: tls: bad certificate
2025-08-13T19:55:55.648979183+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44696: remote error: tls: bad certificate
2025-08-13T19:55:55.666100582+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44700: remote error: tls: bad certificate
2025-08-13T19:55:55.682322165+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44706: remote error: tls: bad certificate
2025-08-13T19:55:55.699884547+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44718: remote error: tls: bad certificate
2025-08-13T19:55:55.715290757+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44724: remote error: tls: bad certificate
2025-08-13T19:55:55.733637551+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44732: remote error: tls: bad certificate
2025-08-13T19:55:55.750501372+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44746: remote error: tls: bad certificate
2025-08-13T19:55:55.764348138+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44760: remote error: tls: bad certificate
2025-08-13T19:55:55.778900093+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44772: remote error: tls: bad certificate
2025-08-13T19:55:55.795024764+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44784: remote error: tls: bad certificate
2025-08-13T19:55:55.812279856+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44796: remote error: tls: bad certificate
2025-08-13T19:55:55.832994698+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44810: remote error: tls: bad certificate
2025-08-13T19:55:55.849993373+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44818: remote error: tls: bad certificate
2025-08-13T19:55:55.866183645+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44828: remote error: tls: bad certificate
2025-08-13T19:55:55.881169033+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44834: remote error: tls: bad certificate
2025-08-13T19:55:55.900964599+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44840: remote error: tls: bad certificate
2025-08-13T19:55:55.920877277+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44846: remote error: tls: bad certificate
2025-08-13T19:55:55.935490895+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44858: remote error: tls: bad certificate
2025-08-13T19:55:55.950882664+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44864: remote error: tls: bad certificate
2025-08-13T19:55:55.966917262+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44876: remote error: tls: bad certificate
2025-08-13T19:55:55.985651847+00:00 stderr F 2025/08/13 19:55:55 http: TLS handshake error from 127.0.0.1:44880: remote error: tls: bad certificate
2025-08-13T19:55:56.000949174+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44884: remote error: tls: bad certificate
2025-08-13T19:55:56.016273631+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44894: remote error: tls: bad certificate
2025-08-13T19:55:56.033253086+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44900: remote error: tls: bad certificate
2025-08-13T19:55:56.052562828+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44906: remote error: tls: bad certificate
2025-08-13T19:55:56.066392292+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44908: remote error: tls: bad certificate
2025-08-13T19:55:56.083670006+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44916: remote error: tls: bad certificate
2025-08-13T19:55:56.110284246+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44932: remote error: tls: bad certificate
2025-08-13T19:55:56.134524528+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44940: remote error: tls: bad certificate
2025-08-13T19:55:56.156586868+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44944: remote error: tls: bad certificate
2025-08-13T19:55:56.182486117+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44956: remote error: tls: bad certificate
2025-08-13T19:55:56.203323543+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44958: remote error: tls: bad certificate
2025-08-13T19:55:56.225880157+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44962: remote error: tls: bad certificate
2025-08-13T19:55:56.242304366+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44974: remote error: tls: bad certificate
2025-08-13T19:55:56.262851962+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:44988: remote error: tls: bad certificate
2025-08-13T19:55:56.283443650+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45000: remote error: tls: bad certificate
2025-08-13T19:55:56.302729671+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45016: remote error: tls: bad certificate
2025-08-13T19:55:56.320443107+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45030: remote error: tls: bad certificate
2025-08-13T19:55:56.338330158+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45038: remote error: tls: bad certificate
2025-08-13T19:55:56.353768378+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45040: remote error: tls: bad certificate
2025-08-13T19:55:56.367973094+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45052: remote error: tls: bad certificate
2025-08-13T19:55:56.384070984+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45058: remote error: tls: bad certificate
2025-08-13T19:55:56.406155034+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45060: remote error: tls: bad certificate
2025-08-13T19:55:56.423310914+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45064: remote error: tls: bad certificate
2025-08-13T19:55:56.439226739+00:00 stderr F 2025/08/13 19:55:56 http: TLS handshake error from 127.0.0.1:45074: remote error: tls: bad certificate
2025-08-13T19:56:04.212256863+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:34962: remote error: tls: bad certificate
2025-08-13T19:56:04.232744598+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:34966: remote error: tls: bad certificate
2025-08-13T19:56:04.251181244+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:34972: remote error: tls: bad certificate
2025-08-13T19:56:04.274170991+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:34988: remote error: tls: bad certificate
2025-08-13T19:56:04.295367696+00:00 stderr F 2025/08/13 19:56:04 http: TLS handshake error from 127.0.0.1:35004: remote error: tls: bad certificate
2025-08-13T19:56:05.226453313+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35006: remote error: tls: bad certificate
2025-08-13T19:56:05.244457487+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35022: remote error: tls: bad certificate
2025-08-13T19:56:05.262510113+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35032: remote error: tls: bad certificate
2025-08-13T19:56:05.279905070+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35040: remote error: tls: bad certificate
2025-08-13T19:56:05.297311857+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35046: remote error: tls: bad certificate
2025-08-13T19:56:05.318033838+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35062: remote error: tls: bad certificate
2025-08-13T19:56:05.335469256+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35064: remote error: tls: bad certificate
2025-08-13T19:56:05.353410969+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35080: remote error: tls: bad certificate
2025-08-13T19:56:05.371305510+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35094: remote error: tls: bad certificate
2025-08-13T19:56:05.384066564+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35110: remote error: tls: bad certificate
2025-08-13T19:56:05.401367898+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35114: remote error: tls: bad certificate
2025-08-13T19:56:05.417630852+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35120: remote error: tls: bad certificate
2025-08-13T19:56:05.433924208+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35134: remote error: tls: bad certificate
2025-08-13T19:56:05.458980173+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35144: remote error: tls: bad certificate
2025-08-13T19:56:05.477643286+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35154: remote error: tls: bad certificate
2025-08-13T19:56:05.493881660+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35156: remote error: tls: bad certificate
2025-08-13T19:56:05.509205347+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35162: remote error: tls: bad certificate
2025-08-13T19:56:05.526420229+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35176: remote error: tls: bad certificate
2025-08-13T19:56:05.549349164+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35184: remote error: tls: bad certificate
2025-08-13T19:56:05.563699803+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35200: remote error: tls: bad certificate
2025-08-13T19:56:05.579660349+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35212: remote error: tls: bad certificate
2025-08-13T19:56:05.595921573+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35226: remote error: tls: bad certificate
2025-08-13T19:56:05.611611262+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35240: remote error: tls: bad certificate
2025-08-13T19:56:05.630486300+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35250: remote error: tls: bad certificate
2025-08-13T19:56:05.646580120+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35262: remote error: tls: bad certificate
2025-08-13T19:56:05.662331250+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35278: remote error: tls: bad certificate
2025-08-13T19:56:05.688963440+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35286: remote error: tls: bad certificate
2025-08-13T19:56:05.708348024+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35290: remote error: tls: bad certificate 2025-08-13T19:56:05.728864560+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35302: remote error: tls: bad certificate 2025-08-13T19:56:05.748190062+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35306: remote error: tls: bad certificate 2025-08-13T19:56:05.767613786+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35320: remote error: tls: bad certificate 2025-08-13T19:56:05.789348337+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35328: remote error: tls: bad certificate 2025-08-13T19:56:05.811254952+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35330: remote error: tls: bad certificate 2025-08-13T19:56:05.833526528+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35336: remote error: tls: bad certificate 2025-08-13T19:56:05.851578174+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35344: remote error: tls: bad certificate 2025-08-13T19:56:05.872663876+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35346: remote error: tls: bad certificate 2025-08-13T19:56:05.899258815+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35350: remote error: tls: bad certificate 2025-08-13T19:56:05.917351002+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35356: remote error: tls: bad certificate 2025-08-13T19:56:05.940849733+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35362: remote error: tls: bad certificate 2025-08-13T19:56:05.961742819+00:00 stderr F 2025/08/13 19:56:05 http: TLS handshake error from 127.0.0.1:35378: remote error: tls: bad certificate 2025-08-13T19:56:05.982911784+00:00 stderr F 2025/08/13 19:56:05 http: TLS 
handshake error from 127.0.0.1:35384: remote error: tls: bad certificate 2025-08-13T19:56:06.006127677+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35388: remote error: tls: bad certificate 2025-08-13T19:56:06.022642348+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35392: remote error: tls: bad certificate 2025-08-13T19:56:06.046241482+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35408: remote error: tls: bad certificate 2025-08-13T19:56:06.073366907+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35410: remote error: tls: bad certificate 2025-08-13T19:56:06.094022967+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35422: remote error: tls: bad certificate 2025-08-13T19:56:06.112557836+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35426: remote error: tls: bad certificate 2025-08-13T19:56:06.133374070+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35432: remote error: tls: bad certificate 2025-08-13T19:56:06.157912101+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35440: remote error: tls: bad certificate 2025-08-13T19:56:06.175482033+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35450: remote error: tls: bad certificate 2025-08-13T19:56:06.197201003+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35452: remote error: tls: bad certificate 2025-08-13T19:56:06.217123102+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35460: remote error: tls: bad certificate 2025-08-13T19:56:06.237505664+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35476: remote error: tls: bad certificate 2025-08-13T19:56:06.255534069+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35484: remote error: tls: bad certificate 
2025-08-13T19:56:06.275069526+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35494: remote error: tls: bad certificate 2025-08-13T19:56:06.292607227+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35506: remote error: tls: bad certificate 2025-08-13T19:56:06.312165656+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35522: remote error: tls: bad certificate 2025-08-13T19:56:06.329967944+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35528: remote error: tls: bad certificate 2025-08-13T19:56:06.351057836+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35542: remote error: tls: bad certificate 2025-08-13T19:56:06.367303130+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35558: remote error: tls: bad certificate 2025-08-13T19:56:06.382659809+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35562: remote error: tls: bad certificate 2025-08-13T19:56:06.400181979+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35568: remote error: tls: bad certificate 2025-08-13T19:56:06.416965998+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35578: remote error: tls: bad certificate 2025-08-13T19:56:06.436212358+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35590: remote error: tls: bad certificate 2025-08-13T19:56:06.459046810+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35604: remote error: tls: bad certificate 2025-08-13T19:56:06.478109724+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35614: remote error: tls: bad certificate 2025-08-13T19:56:06.497619531+00:00 stderr F 2025/08/13 19:56:06 http: TLS handshake error from 127.0.0.1:35616: remote error: tls: bad certificate 2025-08-13T19:56:14.523282831+00:00 stderr F 2025/08/13 19:56:14 http: TLS 
handshake error from 127.0.0.1:48366: remote error: tls: bad certificate 2025-08-13T19:56:14.543111027+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48368: remote error: tls: bad certificate 2025-08-13T19:56:14.561732609+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48378: remote error: tls: bad certificate 2025-08-13T19:56:14.586077774+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48386: remote error: tls: bad certificate 2025-08-13T19:56:14.606737354+00:00 stderr F 2025/08/13 19:56:14 http: TLS handshake error from 127.0.0.1:48402: remote error: tls: bad certificate 2025-08-13T19:56:15.225634456+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48404: remote error: tls: bad certificate 2025-08-13T19:56:15.241406516+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48406: remote error: tls: bad certificate 2025-08-13T19:56:15.257578618+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48422: remote error: tls: bad certificate 2025-08-13T19:56:15.274052669+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48428: remote error: tls: bad certificate 2025-08-13T19:56:15.291650561+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48444: remote error: tls: bad certificate 2025-08-13T19:56:15.309985504+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48460: remote error: tls: bad certificate 2025-08-13T19:56:15.332499957+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48464: remote error: tls: bad certificate 2025-08-13T19:56:15.352566890+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48472: remote error: tls: bad certificate 2025-08-13T19:56:15.376644158+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48488: remote error: tls: bad certificate 
2025-08-13T19:56:15.395280090+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48500: remote error: tls: bad certificate 2025-08-13T19:56:15.412434010+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48502: remote error: tls: bad certificate 2025-08-13T19:56:15.431586287+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48516: remote error: tls: bad certificate 2025-08-13T19:56:15.447224413+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48528: remote error: tls: bad certificate 2025-08-13T19:56:15.462534960+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48536: remote error: tls: bad certificate 2025-08-13T19:56:15.487488243+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48544: remote error: tls: bad certificate 2025-08-13T19:56:15.504228111+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48548: remote error: tls: bad certificate 2025-08-13T19:56:15.520344662+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48558: remote error: tls: bad certificate 2025-08-13T19:56:15.539117308+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48572: remote error: tls: bad certificate 2025-08-13T19:56:15.554470726+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48588: remote error: tls: bad certificate 2025-08-13T19:56:15.570651978+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48602: remote error: tls: bad certificate 2025-08-13T19:56:15.586091099+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48604: remote error: tls: bad certificate 2025-08-13T19:56:15.602721954+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48618: remote error: tls: bad certificate 2025-08-13T19:56:15.618959948+00:00 stderr F 2025/08/13 19:56:15 http: TLS 
handshake error from 127.0.0.1:48622: remote error: tls: bad certificate 2025-08-13T19:56:15.637066125+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48630: remote error: tls: bad certificate 2025-08-13T19:56:15.652735732+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48642: remote error: tls: bad certificate 2025-08-13T19:56:15.668418590+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48656: remote error: tls: bad certificate 2025-08-13T19:56:15.683131270+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48670: remote error: tls: bad certificate 2025-08-13T19:56:15.696912744+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48686: remote error: tls: bad certificate 2025-08-13T19:56:15.720008783+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48692: remote error: tls: bad certificate 2025-08-13T19:56:15.735206527+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48702: remote error: tls: bad certificate 2025-08-13T19:56:15.758105811+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48704: remote error: tls: bad certificate 2025-08-13T19:56:15.777319080+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48708: remote error: tls: bad certificate 2025-08-13T19:56:15.798733631+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48714: remote error: tls: bad certificate 2025-08-13T19:56:15.814436700+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48726: remote error: tls: bad certificate 2025-08-13T19:56:15.834244675+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48740: remote error: tls: bad certificate 2025-08-13T19:56:15.854860184+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48756: remote error: tls: bad certificate 
2025-08-13T19:56:15.872314452+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48770: remote error: tls: bad certificate 2025-08-13T19:56:15.891735747+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48786: remote error: tls: bad certificate 2025-08-13T19:56:15.913677114+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48798: remote error: tls: bad certificate 2025-08-13T19:56:15.942193228+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48810: remote error: tls: bad certificate 2025-08-13T19:56:15.964604088+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48826: remote error: tls: bad certificate 2025-08-13T19:56:15.985579917+00:00 stderr F 2025/08/13 19:56:15 http: TLS handshake error from 127.0.0.1:48838: remote error: tls: bad certificate 2025-08-13T19:56:16.019097874+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48848: remote error: tls: bad certificate 2025-08-13T19:56:16.045058055+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48850: remote error: tls: bad certificate 2025-08-13T19:56:16.066288041+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48856: remote error: tls: bad certificate 2025-08-13T19:56:16.091576833+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48868: remote error: tls: bad certificate 2025-08-13T19:56:16.106235992+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48882: remote error: tls: bad certificate 2025-08-13T19:56:16.122164446+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48892: remote error: tls: bad certificate 2025-08-13T19:56:16.146996326+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48906: remote error: tls: bad certificate 2025-08-13T19:56:16.163858827+00:00 stderr F 2025/08/13 19:56:16 http: TLS 
handshake error from 127.0.0.1:48914: remote error: tls: bad certificate 2025-08-13T19:56:16.181500171+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48920: remote error: tls: bad certificate 2025-08-13T19:56:16.198979520+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48928: remote error: tls: bad certificate 2025-08-13T19:56:16.220998039+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48942: remote error: tls: bad certificate 2025-08-13T19:56:16.237085798+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48958: remote error: tls: bad certificate 2025-08-13T19:56:16.269023870+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48972: remote error: tls: bad certificate 2025-08-13T19:56:16.282916207+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48974: remote error: tls: bad certificate 2025-08-13T19:56:16.300881490+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48986: remote error: tls: bad certificate 2025-08-13T19:56:16.318491013+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48996: remote error: tls: bad certificate 2025-08-13T19:56:16.337081954+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:48998: remote error: tls: bad certificate 2025-08-13T19:56:16.361762798+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49010: remote error: tls: bad certificate 2025-08-13T19:56:16.379760432+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49016: remote error: tls: bad certificate 2025-08-13T19:56:16.397045626+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49020: remote error: tls: bad certificate 2025-08-13T19:56:16.414064142+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49030: remote error: tls: bad certificate 
2025-08-13T19:56:16.428882755+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49042: remote error: tls: bad certificate 2025-08-13T19:56:16.443707818+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49056: remote error: tls: bad certificate 2025-08-13T19:56:16.459309754+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49058: remote error: tls: bad certificate 2025-08-13T19:56:16.481357933+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49074: remote error: tls: bad certificate 2025-08-13T19:56:16.501159139+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49086: remote error: tls: bad certificate 2025-08-13T19:56:16.518668019+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49088: remote error: tls: bad certificate 2025-08-13T19:56:16.536877229+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49100: remote error: tls: bad certificate 2025-08-13T19:56:16.556201451+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49114: remote error: tls: bad certificate 2025-08-13T19:56:16.574138933+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49122: remote error: tls: bad certificate 2025-08-13T19:56:16.592120766+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49124: remote error: tls: bad certificate 2025-08-13T19:56:16.608378450+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49132: remote error: tls: bad certificate 2025-08-13T19:56:16.627979080+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49136: remote error: tls: bad certificate 2025-08-13T19:56:16.661932740+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49144: remote error: tls: bad certificate 2025-08-13T19:56:16.678177733+00:00 stderr F 2025/08/13 19:56:16 http: TLS 
handshake error from 127.0.0.1:49154: remote error: tls: bad certificate 2025-08-13T19:56:16.713048449+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49168: remote error: tls: bad certificate 2025-08-13T19:56:16.731738853+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49182: remote error: tls: bad certificate 2025-08-13T19:56:16.749534101+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49190: remote error: tls: bad certificate 2025-08-13T19:56:16.766628639+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49202: remote error: tls: bad certificate 2025-08-13T19:56:16.785175379+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49210: remote error: tls: bad certificate 2025-08-13T19:56:16.804865541+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49218: remote error: tls: bad certificate 2025-08-13T19:56:16.833077097+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49234: remote error: tls: bad certificate 2025-08-13T19:56:16.852035838+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49250: remote error: tls: bad certificate 2025-08-13T19:56:16.875039805+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49266: remote error: tls: bad certificate 2025-08-13T19:56:16.901080509+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49274: remote error: tls: bad certificate 2025-08-13T19:56:16.917706683+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49290: remote error: tls: bad certificate 2025-08-13T19:56:16.937721095+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49296: remote error: tls: bad certificate 2025-08-13T19:56:16.956477951+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49298: remote error: tls: bad certificate 
2025-08-13T19:56:16.976037679+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49314: remote error: tls: bad certificate 2025-08-13T19:56:16.991848391+00:00 stderr F 2025/08/13 19:56:16 http: TLS handshake error from 127.0.0.1:49318: remote error: tls: bad certificate 2025-08-13T19:56:17.005632784+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49334: remote error: tls: bad certificate 2025-08-13T19:56:17.030932497+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49342: remote error: tls: bad certificate 2025-08-13T19:56:17.045555164+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49358: remote error: tls: bad certificate 2025-08-13T19:56:17.061213182+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49370: remote error: tls: bad certificate 2025-08-13T19:56:17.102422098+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49386: remote error: tls: bad certificate 2025-08-13T19:56:17.141107583+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49394: remote error: tls: bad certificate 2025-08-13T19:56:17.178469190+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49408: remote error: tls: bad certificate 2025-08-13T19:56:17.226950744+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49424: remote error: tls: bad certificate 2025-08-13T19:56:17.259250946+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49426: remote error: tls: bad certificate 2025-08-13T19:56:17.300422942+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49440: remote error: tls: bad certificate 2025-08-13T19:56:17.340849686+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49444: remote error: tls: bad certificate 2025-08-13T19:56:17.379916752+00:00 stderr F 2025/08/13 19:56:17 http: TLS 
handshake error from 127.0.0.1:49448: remote error: tls: bad certificate 2025-08-13T19:56:17.419144322+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49452: remote error: tls: bad certificate 2025-08-13T19:56:17.458899517+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49454: remote error: tls: bad certificate 2025-08-13T19:56:17.502985276+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49466: remote error: tls: bad certificate 2025-08-13T19:56:17.546481698+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49476: remote error: tls: bad certificate 2025-08-13T19:56:17.578102301+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49492: remote error: tls: bad certificate 2025-08-13T19:56:17.622084926+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49496: remote error: tls: bad certificate 2025-08-13T19:56:17.660241796+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49506: remote error: tls: bad certificate 2025-08-13T19:56:17.700904407+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49512: remote error: tls: bad certificate 2025-08-13T19:56:17.741491176+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49516: remote error: tls: bad certificate 2025-08-13T19:56:17.780104989+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49526: remote error: tls: bad certificate 2025-08-13T19:56:17.817659271+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49530: remote error: tls: bad certificate 2025-08-13T19:56:17.858408045+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49540: remote error: tls: bad certificate 2025-08-13T19:56:17.900224919+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49550: remote error: tls: bad certificate 
2025-08-13T19:56:17.938504802+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49562: remote error: tls: bad certificate 2025-08-13T19:56:17.979759980+00:00 stderr F 2025/08/13 19:56:17 http: TLS handshake error from 127.0.0.1:49568: remote error: tls: bad certificate 2025-08-13T19:56:18.024401465+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49576: remote error: tls: bad certificate 2025-08-13T19:56:18.060726722+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49580: remote error: tls: bad certificate 2025-08-13T19:56:18.099416237+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49584: remote error: tls: bad certificate 2025-08-13T19:56:18.141405796+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49592: remote error: tls: bad certificate 2025-08-13T19:56:18.182291913+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49604: remote error: tls: bad certificate 2025-08-13T19:56:18.223781818+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49606: remote error: tls: bad certificate 2025-08-13T19:56:18.260134486+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49612: remote error: tls: bad certificate 2025-08-13T19:56:18.313018516+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49622: remote error: tls: bad certificate 2025-08-13T19:56:18.352276627+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49630: remote error: tls: bad certificate 2025-08-13T19:56:18.385901677+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49634: remote error: tls: bad certificate 2025-08-13T19:56:18.421409611+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49650: remote error: tls: bad certificate 2025-08-13T19:56:18.460554168+00:00 stderr F 2025/08/13 19:56:18 http: TLS 
handshake error from 127.0.0.1:49662: remote error: tls: bad certificate 2025-08-13T19:56:18.497052690+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49672: remote error: tls: bad certificate 2025-08-13T19:56:18.540230523+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49680: remote error: tls: bad certificate 2025-08-13T19:56:18.580440782+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49688: remote error: tls: bad certificate 2025-08-13T19:56:18.621526565+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49698: remote error: tls: bad certificate 2025-08-13T19:56:18.660588940+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49700: remote error: tls: bad certificate 2025-08-13T19:56:18.699347827+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:49702: remote error: tls: bad certificate 2025-08-13T19:56:18.739736240+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60210: remote error: tls: bad certificate 2025-08-13T19:56:18.778538288+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60212: remote error: tls: bad certificate 2025-08-13T19:56:18.822894145+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60224: remote error: tls: bad certificate 2025-08-13T19:56:18.867266522+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60238: remote error: tls: bad certificate 2025-08-13T19:56:18.901464949+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60254: remote error: tls: bad certificate 2025-08-13T19:56:18.941194953+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60260: remote error: tls: bad certificate 2025-08-13T19:56:18.978890780+00:00 stderr F 2025/08/13 19:56:18 http: TLS handshake error from 127.0.0.1:60274: remote error: tls: bad certificate 
2025-08-13T19:56:19.023520184+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60282: remote error: tls: bad certificate 2025-08-13T19:56:19.059161562+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60284: remote error: tls: bad certificate 2025-08-13T19:56:19.098975159+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60292: remote error: tls: bad certificate 2025-08-13T19:56:19.141963816+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60304: remote error: tls: bad certificate 2025-08-13T19:56:19.178275523+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60308: remote error: tls: bad certificate 2025-08-13T19:56:19.217360059+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60316: remote error: tls: bad certificate 2025-08-13T19:56:19.260116940+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60320: remote error: tls: bad certificate 2025-08-13T19:56:19.298187827+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60324: remote error: tls: bad certificate 2025-08-13T19:56:19.340386132+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60336: remote error: tls: bad certificate 2025-08-13T19:56:19.382478584+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60348: remote error: tls: bad certificate 2025-08-13T19:56:19.423060843+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60360: remote error: tls: bad certificate 2025-08-13T19:56:19.459342529+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60372: remote error: tls: bad certificate 2025-08-13T19:56:19.502522042+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60382: remote error: tls: bad certificate 2025-08-13T19:56:19.541055782+00:00 stderr F 2025/08/13 19:56:19 http: TLS 
handshake error from 127.0.0.1:60384: remote error: tls: bad certificate 2025-08-13T19:56:19.580605031+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60386: remote error: tls: bad certificate 2025-08-13T19:56:19.691913910+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60390: remote error: tls: bad certificate 2025-08-13T19:56:19.723640376+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60404: remote error: tls: bad certificate 2025-08-13T19:56:19.748067543+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60414: remote error: tls: bad certificate 2025-08-13T19:56:19.767062166+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60416: remote error: tls: bad certificate 2025-08-13T19:56:19.830188728+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60432: remote error: tls: bad certificate 2025-08-13T19:56:19.847652487+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60442: remote error: tls: bad certificate 2025-08-13T19:56:19.873422673+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60450: remote error: tls: bad certificate 2025-08-13T19:56:19.903512852+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60452: remote error: tls: bad certificate 2025-08-13T19:56:19.940493338+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60458: remote error: tls: bad certificate 2025-08-13T19:56:19.979206143+00:00 stderr F 2025/08/13 19:56:19 http: TLS handshake error from 127.0.0.1:60474: remote error: tls: bad certificate 2025-08-13T19:56:20.019782402+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60480: remote error: tls: bad certificate 2025-08-13T19:56:20.060214907+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60492: remote error: tls: bad certificate 
2025-08-13T19:56:20.097512922+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60502: remote error: tls: bad certificate
2025-08-13T19:56:20.138601255+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60514: remote error: tls: bad certificate
2025-08-13T19:56:20.179310167+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60528: remote error: tls: bad certificate
2025-08-13T19:56:20.224400055+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60544: remote error: tls: bad certificate
2025-08-13T19:56:20.263304946+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60556: remote error: tls: bad certificate
2025-08-13T19:56:20.302375711+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60560: remote error: tls: bad certificate
2025-08-13T19:56:20.352099811+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60574: remote error: tls: bad certificate
2025-08-13T19:56:20.383428046+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60580: remote error: tls: bad certificate
2025-08-13T19:56:20.419989800+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60588: remote error: tls: bad certificate
2025-08-13T19:56:20.493288673+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60600: remote error: tls: bad certificate
2025-08-13T19:56:20.516630530+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60604: remote error: tls: bad certificate
2025-08-13T19:56:20.543469776+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60608: remote error: tls: bad certificate
2025-08-13T19:56:20.589926052+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60624: remote error: tls: bad certificate
2025-08-13T19:56:20.621435993+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60640: remote error: tls: bad certificate
2025-08-13T19:56:20.658229593+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60646: remote error: tls: bad certificate
2025-08-13T19:56:20.697298119+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60660: remote error: tls: bad certificate
2025-08-13T19:56:20.741112860+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60664: remote error: tls: bad certificate
2025-08-13T19:56:20.782320987+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60670: remote error: tls: bad certificate
2025-08-13T19:56:20.819863509+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60672: remote error: tls: bad certificate
2025-08-13T19:56:20.860204190+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60674: remote error: tls: bad certificate
2025-08-13T19:56:20.897859876+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60680: remote error: tls: bad certificate
2025-08-13T19:56:20.938916668+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60682: remote error: tls: bad certificate
2025-08-13T19:56:20.981937877+00:00 stderr F 2025/08/13 19:56:20 http: TLS handshake error from 127.0.0.1:60698: remote error: tls: bad certificate
2025-08-13T19:56:21.020673772+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60714: remote error: tls: bad certificate
2025-08-13T19:56:21.059742628+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60718: remote error: tls: bad certificate
2025-08-13T19:56:21.103108316+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60722: remote error: tls: bad certificate
2025-08-13T19:56:21.141400180+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60726: remote error: tls: bad certificate
2025-08-13T19:56:21.181011431+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60736: remote error: tls: bad certificate
2025-08-13T19:56:21.221674872+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60738: remote error: tls: bad certificate
2025-08-13T19:56:21.259679737+00:00 stderr F 2025/08/13 19:56:21 http: TLS handshake error from 127.0.0.1:60746: remote error: tls: bad certificate
2025-08-13T19:56:24.713609512+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60752: remote error: tls: bad certificate
2025-08-13T19:56:24.736715922+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60764: remote error: tls: bad certificate
2025-08-13T19:56:24.756431835+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60766: remote error: tls: bad certificate
2025-08-13T19:56:24.776895689+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60772: remote error: tls: bad certificate
2025-08-13T19:56:24.795441349+00:00 stderr F 2025/08/13 19:56:24 http: TLS handshake error from 127.0.0.1:60782: remote error: tls: bad certificate
2025-08-13T19:56:25.228152195+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60794: remote error: tls: bad certificate
2025-08-13T19:56:25.245030607+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60806: remote error: tls: bad certificate
2025-08-13T19:56:25.266129119+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60808: remote error: tls: bad certificate
2025-08-13T19:56:25.283865286+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60824: remote error: tls: bad certificate
2025-08-13T19:56:25.301257042+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60836: remote error: tls: bad certificate
2025-08-13T19:56:25.315609002+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60844: remote error: tls: bad certificate
2025-08-13T19:56:25.333272366+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60858: remote error: tls: bad certificate
2025-08-13T19:56:25.348940394+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60864: remote error: tls: bad certificate
2025-08-13T19:56:25.370581172+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60872: remote error: tls: bad certificate
2025-08-13T19:56:25.394051512+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60874: remote error: tls: bad certificate
2025-08-13T19:56:25.405926421+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60888: remote error: tls: bad certificate
2025-08-13T19:56:25.422310069+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60902: remote error: tls: bad certificate
2025-08-13T19:56:25.438460950+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60912: remote error: tls: bad certificate
2025-08-13T19:56:25.454018324+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60924: remote error: tls: bad certificate
2025-08-13T19:56:25.471947896+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60938: remote error: tls: bad certificate
2025-08-13T19:56:25.489410415+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60944: remote error: tls: bad certificate
2025-08-13T19:56:25.506624236+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60952: remote error: tls: bad certificate
2025-08-13T19:56:25.531308851+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60958: remote error: tls: bad certificate
2025-08-13T19:56:25.546933968+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60974: remote error: tls: bad certificate
2025-08-13T19:56:25.561763591+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60984: remote error: tls: bad certificate
2025-08-13T19:56:25.578331604+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60988: remote error: tls: bad certificate
2025-08-13T19:56:25.598173051+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:60994: remote error: tls: bad certificate
2025-08-13T19:56:25.620897300+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32778: remote error: tls: bad certificate
2025-08-13T19:56:25.638916323+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32794: remote error: tls: bad certificate
2025-08-13T19:56:25.653611893+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32800: remote error: tls: bad certificate
2025-08-13T19:56:25.668929760+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32808: remote error: tls: bad certificate
2025-08-13T19:56:25.688427757+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32822: remote error: tls: bad certificate
2025-08-13T19:56:25.700366698+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32838: remote error: tls: bad certificate
2025-08-13T19:56:25.713189444+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32840: remote error: tls: bad certificate
2025-08-13T19:56:25.730241761+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32854: remote error: tls: bad certificate
2025-08-13T19:56:25.750001565+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32870: remote error: tls: bad certificate
2025-08-13T19:56:25.766507936+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32878: remote error: tls: bad certificate
2025-08-13T19:56:25.781043662+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32888: remote error: tls: bad certificate
2025-08-13T19:56:25.799626672+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32902: remote error: tls: bad certificate
2025-08-13T19:56:25.816681149+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32908: remote error: tls: bad certificate
2025-08-13T19:56:25.836270739+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32922: remote error: tls: bad certificate
2025-08-13T19:56:25.852295126+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32938: remote error: tls: bad certificate
2025-08-13T19:56:25.874538131+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32940: remote error: tls: bad certificate
2025-08-13T19:56:25.899968987+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32944: remote error: tls: bad certificate
2025-08-13T19:56:25.918739933+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32956: remote error: tls: bad certificate
2025-08-13T19:56:25.936371367+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32958: remote error: tls: bad certificate
2025-08-13T19:56:25.952628741+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32960: remote error: tls: bad certificate
2025-08-13T19:56:25.971218142+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32964: remote error: tls: bad certificate
2025-08-13T19:56:25.996639018+00:00 stderr F 2025/08/13 19:56:25 http: TLS handshake error from 127.0.0.1:32980: remote error: tls: bad certificate
2025-08-13T19:56:26.015352622+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:32982: remote error: tls: bad certificate
2025-08-13T19:56:26.032520442+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:32986: remote error: tls: bad certificate
2025-08-13T19:56:26.048177349+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33000: remote error: tls: bad certificate
2025-08-13T19:56:26.065905696+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33016: remote error: tls: bad certificate
2025-08-13T19:56:26.100657058+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33032: remote error: tls: bad certificate
2025-08-13T19:56:26.161130055+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33036: remote error: tls: bad certificate
2025-08-13T19:56:26.181934369+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33038: remote error: tls: bad certificate
2025-08-13T19:56:26.207480808+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33044: remote error: tls: bad certificate
2025-08-13T19:56:26.228406646+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33052: remote error: tls: bad certificate
2025-08-13T19:56:26.245143444+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33066: remote error: tls: bad certificate
2025-08-13T19:56:26.274306257+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33072: remote error: tls: bad certificate
2025-08-13T19:56:26.294680768+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33080: remote error: tls: bad certificate
2025-08-13T19:56:26.309468851+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33088: remote error: tls: bad certificate
2025-08-13T19:56:26.324231492+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33094: remote error: tls: bad certificate
2025-08-13T19:56:26.350863583+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33100: remote error: tls: bad certificate
2025-08-13T19:56:26.373687874+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33110: remote error: tls: bad certificate
2025-08-13T19:56:26.394339524+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33126: remote error: tls: bad certificate
2025-08-13T19:56:26.422339264+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33138: remote error: tls: bad certificate
2025-08-13T19:56:26.447093040+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33148: remote error: tls: bad certificate
2025-08-13T19:56:26.463037236+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33162: remote error: tls: bad certificate
2025-08-13T19:56:26.480013631+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33168: remote error: tls: bad certificate
2025-08-13T19:56:26.499265120+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33176: remote error: tls: bad certificate
2025-08-13T19:56:26.516897704+00:00 stderr F 2025/08/13 19:56:26 http: TLS handshake error from 127.0.0.1:33190: remote error: tls: bad certificate
2025-08-13T19:56:28.230030422+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33202: remote error: tls: bad certificate
2025-08-13T19:56:28.256028364+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33210: remote error: tls: bad certificate
2025-08-13T19:56:28.276163999+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33222: remote error: tls: bad certificate
2025-08-13T19:56:28.372292724+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33232: remote error: tls: bad certificate
2025-08-13T19:56:28.434955944+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33238: remote error: tls: bad certificate
2025-08-13T19:56:28.462683985+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33246: remote error: tls: bad certificate
2025-08-13T19:56:28.520193828+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33250: remote error: tls: bad certificate
2025-08-13T19:56:28.534392783+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33254: remote error: tls: bad certificate
2025-08-13T19:56:28.558302796+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33268: remote error: tls: bad certificate
2025-08-13T19:56:28.591037830+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33278: remote error: tls: bad certificate
2025-08-13T19:56:28.609239080+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33286: remote error: tls: bad certificate
2025-08-13T19:56:28.632140994+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33290: remote error: tls: bad certificate
2025-08-13T19:56:28.652923158+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33292: remote error: tls: bad certificate
2025-08-13T19:56:28.672149507+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33298: remote error: tls: bad certificate
2025-08-13T19:56:28.691951712+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33308: remote error: tls: bad certificate
2025-08-13T19:56:28.711067508+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33312: remote error: tls: bad certificate
2025-08-13T19:56:28.725863900+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:33326: remote error: tls: bad certificate
2025-08-13T19:56:28.743639948+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40208: remote error: tls: bad certificate
2025-08-13T19:56:28.765675137+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40224: remote error: tls: bad certificate
2025-08-13T19:56:28.788074757+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40232: remote error: tls: bad certificate
2025-08-13T19:56:28.806516774+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40234: remote error: tls: bad certificate
2025-08-13T19:56:28.829392007+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40242: remote error: tls: bad certificate
2025-08-13T19:56:28.851298442+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40248: remote error: tls: bad certificate
2025-08-13T19:56:28.869924884+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40254: remote error: tls: bad certificate
2025-08-13T19:56:28.914021143+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40256: remote error: tls: bad certificate
2025-08-13T19:56:28.934675163+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40260: remote error: tls: bad certificate
2025-08-13T19:56:28.953654265+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40276: remote error: tls: bad certificate
2025-08-13T19:56:28.976501517+00:00 stderr F 2025/08/13 19:56:28 http: TLS handshake error from 127.0.0.1:40284: remote error: tls: bad certificate
2025-08-13T19:56:29.022968854+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40294: remote error: tls: bad certificate
2025-08-13T19:56:29.041279527+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40298: remote error: tls: bad certificate
2025-08-13T19:56:29.060927628+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40304: remote error: tls: bad certificate
2025-08-13T19:56:29.078177891+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40308: remote error: tls: bad certificate
2025-08-13T19:56:29.111441111+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40310: remote error: tls: bad certificate
2025-08-13T19:56:29.133208612+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40312: remote error: tls: bad certificate
2025-08-13T19:56:29.149290632+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40328: remote error: tls: bad certificate
2025-08-13T19:56:29.167316776+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40334: remote error: tls: bad certificate
2025-08-13T19:56:29.180703269+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40336: remote error: tls: bad certificate
2025-08-13T19:56:29.198915218+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40346: remote error: tls: bad certificate
2025-08-13T19:56:29.224107937+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40360: remote error: tls: bad certificate
2025-08-13T19:56:29.237585812+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40376: remote error: tls: bad certificate
2025-08-13T19:56:29.252533569+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40392: remote error: tls: bad certificate
2025-08-13T19:56:29.270678007+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40406: remote error: tls: bad certificate
2025-08-13T19:56:29.286617012+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40416: remote error: tls: bad certificate
2025-08-13T19:56:29.312717657+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40422: remote error: tls: bad certificate
2025-08-13T19:56:29.333581853+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40428: remote error: tls: bad certificate
2025-08-13T19:56:29.356018324+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40438: remote error: tls: bad certificate
2025-08-13T19:56:29.383142538+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40444: remote error: tls: bad certificate
2025-08-13T19:56:29.444007006+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40450: remote error: tls: bad certificate
2025-08-13T19:56:29.594660068+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40464: remote error: tls: bad certificate
2025-08-13T19:56:29.615071811+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40470: remote error: tls: bad certificate
2025-08-13T19:56:29.638519201+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40484: remote error: tls: bad certificate
2025-08-13T19:56:29.694117008+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40494: remote error: tls: bad certificate
2025-08-13T19:56:29.717957479+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40496: remote error: tls: bad certificate
2025-08-13T19:56:29.734070789+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40510: remote error: tls: bad certificate
2025-08-13T19:56:29.752045142+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40512: remote error: tls: bad certificate
2025-08-13T19:56:29.771756565+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40520: remote error: tls: bad certificate
2025-08-13T19:56:29.792145097+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40524: remote error: tls: bad certificate
2025-08-13T19:56:29.809265456+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40532: remote error: tls: bad certificate
2025-08-13T19:56:29.828091824+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40546: remote error: tls: bad certificate
2025-08-13T19:56:29.851987096+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40556: remote error: tls: bad certificate
2025-08-13T19:56:29.874285573+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40564: remote error: tls: bad certificate
2025-08-13T19:56:29.899997457+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40568: remote error: tls: bad certificate
2025-08-13T19:56:29.921379078+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40574: remote error: tls: bad certificate
2025-08-13T19:56:29.951109226+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40578: remote error: tls: bad certificate
2025-08-13T19:56:29.969078150+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40584: remote error: tls: bad certificate
2025-08-13T19:56:29.993981281+00:00 stderr F 2025/08/13 19:56:29 http: TLS handshake error from 127.0.0.1:40600: remote error: tls: bad certificate
2025-08-13T19:56:30.017090871+00:00 stderr F 2025/08/13 19:56:30 http: TLS handshake error from 127.0.0.1:40604: remote error: tls: bad certificate
2025-08-13T19:56:35.187288604+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40608: remote error: tls: bad certificate
2025-08-13T19:56:35.213660707+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40622: remote error: tls: bad certificate
2025-08-13T19:56:35.228910922+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40634: remote error: tls: bad certificate
2025-08-13T19:56:35.249924482+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40650: remote error: tls: bad certificate
2025-08-13T19:56:35.252557028+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40656: remote error: tls: bad certificate
2025-08-13T19:56:35.277141189+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40666: remote error: tls: bad certificate
2025-08-13T19:56:35.284230962+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40670: remote error: tls: bad certificate
2025-08-13T19:56:35.300761394+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40672: remote error: tls: bad certificate
2025-08-13T19:56:35.303906424+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40684: remote error: tls: bad certificate
2025-08-13T19:56:35.316918835+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40700: remote error: tls: bad certificate
2025-08-13T19:56:35.332963674+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40712: remote error: tls: bad certificate
2025-08-13T19:56:35.348473916+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40728: remote error: tls: bad certificate
2025-08-13T19:56:35.365609396+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40736: remote error: tls: bad certificate
2025-08-13T19:56:35.387059708+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40738: remote error: tls: bad certificate
2025-08-13T19:56:35.404192377+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40752: remote error: tls: bad certificate
2025-08-13T19:56:35.425539397+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40768: remote error: tls: bad certificate
2025-08-13T19:56:35.442012117+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40772: remote error: tls: bad certificate
2025-08-13T19:56:35.461092042+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40780: remote error: tls: bad certificate
2025-08-13T19:56:35.477609224+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40784: remote error: tls: bad certificate
2025-08-13T19:56:35.496233846+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40792: remote error: tls: bad certificate
2025-08-13T19:56:35.512696136+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40804: remote error: tls: bad certificate
2025-08-13T19:56:35.531024449+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40812: remote error: tls: bad certificate
2025-08-13T19:56:35.548935241+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40828: remote error: tls: bad certificate
2025-08-13T19:56:35.567623174+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40836: remote error: tls: bad certificate
2025-08-13T19:56:35.584547287+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40846: remote error: tls: bad certificate
2025-08-13T19:56:35.607592976+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40856: remote error: tls: bad certificate
2025-08-13T19:56:35.629469020+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40858: remote error: tls: bad certificate
2025-08-13T19:56:35.645192989+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40862: remote error: tls: bad certificate
2025-08-13T19:56:35.661243847+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40870: remote error: tls: bad certificate
2025-08-13T19:56:35.677320057+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40872: remote error: tls: bad certificate
2025-08-13T19:56:35.701640621+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40884: remote error: tls: bad certificate
2025-08-13T19:56:35.721678683+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40898: remote error: tls: bad certificate
2025-08-13T19:56:35.737307960+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40914: remote error: tls: bad certificate
2025-08-13T19:56:35.754155081+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40922: remote error: tls: bad certificate
2025-08-13T19:56:35.769055146+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40926: remote error: tls: bad certificate
2025-08-13T19:56:35.785245618+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40928: remote error: tls: bad certificate
2025-08-13T19:56:35.800938696+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40942: remote error: tls: bad certificate
2025-08-13T19:56:35.822369228+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40956: remote error: tls: bad certificate
2025-08-13T19:56:35.841470544+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40970: remote error: tls: bad certificate
2025-08-13T19:56:35.860976601+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40978: remote error: tls: bad certificate
2025-08-13T19:56:35.881642571+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40988: remote error: tls: bad certificate
2025-08-13T19:56:35.911084762+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:40992: remote error: tls: bad certificate
2025-08-13T19:56:35.929287982+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:41000: remote error: tls: bad certificate
2025-08-13T19:56:35.944178507+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:41006: remote error: tls: bad certificate
2025-08-13T19:56:35.962487619+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:41008: remote error: tls: bad certificate
2025-08-13T19:56:35.984109867+00:00 stderr F 2025/08/13 19:56:35 http: TLS handshake error from 127.0.0.1:41024: remote error: tls: bad certificate
2025-08-13T19:56:36.002488702+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41034: remote error: tls: bad certificate
2025-08-13T19:56:36.019123537+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41038: remote error: tls: bad certificate
2025-08-13T19:56:36.039185030+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41054: remote error: tls: bad certificate
2025-08-13T19:56:36.126893464+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41068: remote error: tls: bad certificate
2025-08-13T19:56:36.147953065+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41074: remote error: tls: bad certificate
2025-08-13T19:56:36.188928986+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41076: remote error: tls: bad certificate
2025-08-13T19:56:36.191859689+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41084: remote error: tls: bad certificate
2025-08-13T19:56:36.210958865+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41090: remote error: tls: bad certificate
2025-08-13T19:56:36.230747780+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41096: remote error: tls: bad certificate
2025-08-13T19:56:36.247985622+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41098: remote error: tls: bad certificate
2025-08-13T19:56:36.263572937+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41100: remote error: tls: bad certificate
2025-08-13T19:56:36.287882181+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41116: remote error: tls: bad certificate
2025-08-13T19:56:36.309217850+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41126: remote error: tls: bad certificate
2025-08-13T19:56:36.330068596+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41130: remote error: tls: bad certificate
2025-08-13T19:56:36.353389761+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41142: remote error: tls: bad certificate
2025-08-13T19:56:36.372585129+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41144: remote error: tls: bad certificate
2025-08-13T19:56:36.395342199+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41160: remote error: tls: bad certificate
2025-08-13T19:56:36.412297303+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41172: remote error: tls: bad certificate
2025-08-13T19:56:36.431758728+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41188: remote error: tls: bad certificate
2025-08-13T19:56:36.449463374+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41202: remote error: tls: bad certificate
2025-08-13T19:56:36.466698776+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41214: remote error: tls: bad certificate
2025-08-13T19:56:36.484006140+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41216: remote error: tls: bad certificate
2025-08-13T19:56:36.503870588+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41230: remote error: tls: bad certificate
2025-08-13T19:56:36.522234852+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41246: remote error: tls: bad certificate
2025-08-13T19:56:36.539405542+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41258: remote error: tls: bad certificate
2025-08-13T19:56:36.555686927+00:00 stderr F 2025/08/13 19:56:36 http: TLS handshake error from 127.0.0.1:41262: remote error: tls: bad certificate
2025-08-13T19:56:45.232549103+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55276: remote error: tls: bad certificate
2025-08-13T19:56:45.254060377+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55286: remote error: tls: bad certificate
2025-08-13T19:56:45.274263444+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55292: remote error: tls: bad certificate 2025-08-13T19:56:45.295747548+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55304: remote error: tls: bad certificate 2025-08-13T19:56:45.315181213+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55314: remote error: tls: bad certificate 2025-08-13T19:56:45.333756853+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55318: remote error: tls: bad certificate 2025-08-13T19:56:45.354737182+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55334: remote error: tls: bad certificate 2025-08-13T19:56:45.373444306+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55342: remote error: tls: bad certificate 2025-08-13T19:56:45.393533520+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55350: remote error: tls: bad certificate 2025-08-13T19:56:45.410423982+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55360: remote error: tls: bad certificate 2025-08-13T19:56:45.427273023+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55368: remote error: tls: bad certificate 2025-08-13T19:56:45.444438213+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55378: remote error: tls: bad certificate 2025-08-13T19:56:45.460449321+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55384: remote error: tls: bad certificate 2025-08-13T19:56:45.481026718+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55398: remote error: tls: bad certificate 2025-08-13T19:56:45.505608970+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55414: remote error: tls: bad certificate 2025-08-13T19:56:45.521532935+00:00 stderr F 2025/08/13 19:56:45 http: TLS 
handshake error from 127.0.0.1:55426: remote error: tls: bad certificate 2025-08-13T19:56:45.530583033+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55428: remote error: tls: bad certificate 2025-08-13T19:56:45.544694146+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55436: remote error: tls: bad certificate 2025-08-13T19:56:45.547977270+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55438: remote error: tls: bad certificate 2025-08-13T19:56:45.563034420+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55452: remote error: tls: bad certificate 2025-08-13T19:56:45.565378167+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55468: remote error: tls: bad certificate 2025-08-13T19:56:45.579891871+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55484: remote error: tls: bad certificate 2025-08-13T19:56:45.586223532+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55486: remote error: tls: bad certificate 2025-08-13T19:56:45.600715616+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55488: remote error: tls: bad certificate 2025-08-13T19:56:45.609232269+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55496: remote error: tls: bad certificate 2025-08-13T19:56:45.618993548+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55512: remote error: tls: bad certificate 2025-08-13T19:56:45.635656164+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55526: remote error: tls: bad certificate 2025-08-13T19:56:45.652487294+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55540: remote error: tls: bad certificate 2025-08-13T19:56:45.675504121+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55542: remote error: tls: bad certificate 
2025-08-13T19:56:45.705092476+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55552: remote error: tls: bad certificate 2025-08-13T19:56:45.721005711+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55566: remote error: tls: bad certificate 2025-08-13T19:56:45.735256278+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55570: remote error: tls: bad certificate 2025-08-13T19:56:45.750643727+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55582: remote error: tls: bad certificate 2025-08-13T19:56:45.777755061+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55598: remote error: tls: bad certificate 2025-08-13T19:56:45.799503372+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55606: remote error: tls: bad certificate 2025-08-13T19:56:45.816620341+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55614: remote error: tls: bad certificate 2025-08-13T19:56:45.835108589+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55620: remote error: tls: bad certificate 2025-08-13T19:56:45.852698741+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55630: remote error: tls: bad certificate 2025-08-13T19:56:45.874041651+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55632: remote error: tls: bad certificate 2025-08-13T19:56:45.890161121+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55638: remote error: tls: bad certificate 2025-08-13T19:56:45.907023903+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55644: remote error: tls: bad certificate 2025-08-13T19:56:45.925525041+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55648: remote error: tls: bad certificate 2025-08-13T19:56:45.943903096+00:00 stderr F 2025/08/13 19:56:45 http: TLS 
handshake error from 127.0.0.1:55662: remote error: tls: bad certificate 2025-08-13T19:56:45.963412623+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55668: remote error: tls: bad certificate 2025-08-13T19:56:45.980134920+00:00 stderr F 2025/08/13 19:56:45 http: TLS handshake error from 127.0.0.1:55674: remote error: tls: bad certificate 2025-08-13T19:56:46.000327957+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55688: remote error: tls: bad certificate 2025-08-13T19:56:46.017528248+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55702: remote error: tls: bad certificate 2025-08-13T19:56:46.032597078+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55712: remote error: tls: bad certificate 2025-08-13T19:56:46.047546835+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55722: remote error: tls: bad certificate 2025-08-13T19:56:46.063708677+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55732: remote error: tls: bad certificate 2025-08-13T19:56:46.081696430+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55738: remote error: tls: bad certificate 2025-08-13T19:56:46.097068889+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55746: remote error: tls: bad certificate 2025-08-13T19:56:46.110881664+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55748: remote error: tls: bad certificate 2025-08-13T19:56:46.126720106+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55762: remote error: tls: bad certificate 2025-08-13T19:56:46.140744616+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55766: remote error: tls: bad certificate 2025-08-13T19:56:46.160978344+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55770: remote error: tls: bad certificate 
2025-08-13T19:56:46.180014938+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55772: remote error: tls: bad certificate 2025-08-13T19:56:46.200083631+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55776: remote error: tls: bad certificate 2025-08-13T19:56:46.217562270+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55780: remote error: tls: bad certificate 2025-08-13T19:56:46.235698008+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55782: remote error: tls: bad certificate 2025-08-13T19:56:46.251926321+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55790: remote error: tls: bad certificate 2025-08-13T19:56:46.267677251+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55800: remote error: tls: bad certificate 2025-08-13T19:56:46.283754510+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55808: remote error: tls: bad certificate 2025-08-13T19:56:46.301716153+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55820: remote error: tls: bad certificate 2025-08-13T19:56:46.315549568+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55828: remote error: tls: bad certificate 2025-08-13T19:56:46.331707829+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55834: remote error: tls: bad certificate 2025-08-13T19:56:46.348641403+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55838: remote error: tls: bad certificate 2025-08-13T19:56:46.373020389+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55850: remote error: tls: bad certificate 2025-08-13T19:56:46.389893821+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55852: remote error: tls: bad certificate 2025-08-13T19:56:46.402739798+00:00 stderr F 2025/08/13 19:56:46 http: TLS 
handshake error from 127.0.0.1:55862: remote error: tls: bad certificate 2025-08-13T19:56:46.419895958+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55866: remote error: tls: bad certificate 2025-08-13T19:56:46.436435340+00:00 stderr F 2025/08/13 19:56:46 http: TLS handshake error from 127.0.0.1:55876: remote error: tls: bad certificate 2025-08-13T19:56:55.251937089+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57292: remote error: tls: bad certificate 2025-08-13T19:56:55.303880243+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57306: remote error: tls: bad certificate 2025-08-13T19:56:55.334475496+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57318: remote error: tls: bad certificate 2025-08-13T19:56:55.358762930+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57324: remote error: tls: bad certificate 2025-08-13T19:56:55.391517465+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57326: remote error: tls: bad certificate 2025-08-13T19:56:55.414621175+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57332: remote error: tls: bad certificate 2025-08-13T19:56:55.434673147+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57338: remote error: tls: bad certificate 2025-08-13T19:56:55.459612429+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57346: remote error: tls: bad certificate 2025-08-13T19:56:55.481214456+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57362: remote error: tls: bad certificate 2025-08-13T19:56:55.504114070+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57364: remote error: tls: bad certificate 2025-08-13T19:56:55.519919441+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57378: remote error: tls: bad certificate 
2025-08-13T19:56:55.540751916+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57386: remote error: tls: bad certificate 2025-08-13T19:56:55.568641093+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57396: remote error: tls: bad certificate 2025-08-13T19:56:55.587000887+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57400: remote error: tls: bad certificate 2025-08-13T19:56:55.604516937+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57410: remote error: tls: bad certificate 2025-08-13T19:56:55.623090237+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57426: remote error: tls: bad certificate 2025-08-13T19:56:55.645844407+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57442: remote error: tls: bad certificate 2025-08-13T19:56:55.667699131+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57452: remote error: tls: bad certificate 2025-08-13T19:56:55.687297951+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57456: remote error: tls: bad certificate 2025-08-13T19:56:55.706473568+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57472: remote error: tls: bad certificate 2025-08-13T19:56:55.724226585+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57482: remote error: tls: bad certificate 2025-08-13T19:56:55.739952854+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57498: remote error: tls: bad certificate 2025-08-13T19:56:55.754345995+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57502: remote error: tls: bad certificate 2025-08-13T19:56:55.774045348+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57508: remote error: tls: bad certificate 2025-08-13T19:56:55.791300540+00:00 stderr F 2025/08/13 19:56:55 http: TLS 
handshake error from 127.0.0.1:57514: remote error: tls: bad certificate 2025-08-13T19:56:55.809028357+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57518: remote error: tls: bad certificate 2025-08-13T19:56:55.839509397+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57530: remote error: tls: bad certificate 2025-08-13T19:56:55.857904082+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57538: remote error: tls: bad certificate 2025-08-13T19:56:55.864706607+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57546: remote error: tls: bad certificate 2025-08-13T19:56:55.880642142+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57548: remote error: tls: bad certificate 2025-08-13T19:56:55.889942997+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57556: remote error: tls: bad certificate 2025-08-13T19:56:55.912389418+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57558: remote error: tls: bad certificate 2025-08-13T19:56:55.919864572+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57570: remote error: tls: bad certificate 2025-08-13T19:56:55.930057443+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57576: remote error: tls: bad certificate 2025-08-13T19:56:55.946863433+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57590: remote error: tls: bad certificate 2025-08-13T19:56:55.953193543+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57596: remote error: tls: bad certificate 2025-08-13T19:56:55.972275648+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57612: remote error: tls: bad certificate 2025-08-13T19:56:55.979193276+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57628: remote error: tls: bad certificate 
2025-08-13T19:56:55.999218188+00:00 stderr F 2025/08/13 19:56:55 http: TLS handshake error from 127.0.0.1:57630: remote error: tls: bad certificate 2025-08-13T19:56:56.019032933+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57642: remote error: tls: bad certificate 2025-08-13T19:56:56.038596002+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57646: remote error: tls: bad certificate 2025-08-13T19:56:56.059092197+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57660: remote error: tls: bad certificate 2025-08-13T19:56:56.077239265+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57672: remote error: tls: bad certificate 2025-08-13T19:56:56.093169110+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57684: remote error: tls: bad certificate 2025-08-13T19:56:56.109713403+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57694: remote error: tls: bad certificate 2025-08-13T19:56:56.129161008+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57696: remote error: tls: bad certificate 2025-08-13T19:56:56.145271858+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57706: remote error: tls: bad certificate 2025-08-13T19:56:56.165505986+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57718: remote error: tls: bad certificate 2025-08-13T19:56:56.184171279+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57726: remote error: tls: bad certificate 2025-08-13T19:56:56.200467834+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57736: remote error: tls: bad certificate 2025-08-13T19:56:56.218955612+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57750: remote error: tls: bad certificate 2025-08-13T19:56:56.238091339+00:00 stderr F 2025/08/13 19:56:56 http: TLS 
handshake error from 127.0.0.1:57754: remote error: tls: bad certificate 2025-08-13T19:56:56.255381722+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57766: remote error: tls: bad certificate 2025-08-13T19:56:56.273644024+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57778: remote error: tls: bad certificate 2025-08-13T19:56:56.292020579+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57792: remote error: tls: bad certificate 2025-08-13T19:56:56.312756671+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57802: remote error: tls: bad certificate 2025-08-13T19:56:56.334546453+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57814: remote error: tls: bad certificate 2025-08-13T19:56:56.357384105+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57820: remote error: tls: bad certificate 2025-08-13T19:56:56.380181586+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57828: remote error: tls: bad certificate 2025-08-13T19:56:56.395532484+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57838: remote error: tls: bad certificate 2025-08-13T19:56:56.415227557+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57854: remote error: tls: bad certificate 2025-08-13T19:56:56.436356060+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57866: remote error: tls: bad certificate 2025-08-13T19:56:56.454692693+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57868: remote error: tls: bad certificate 2025-08-13T19:56:56.474250182+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57874: remote error: tls: bad certificate 2025-08-13T19:56:56.490920798+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57884: remote error: tls: bad certificate 
2025-08-13T19:56:56.508707886+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57890: remote error: tls: bad certificate 2025-08-13T19:56:56.528748348+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57892: remote error: tls: bad certificate 2025-08-13T19:56:56.547876064+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57894: remote error: tls: bad certificate 2025-08-13T19:56:56.567367651+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57906: remote error: tls: bad certificate 2025-08-13T19:56:56.584531361+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57916: remote error: tls: bad certificate 2025-08-13T19:56:56.600718173+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57932: remote error: tls: bad certificate 2025-08-13T19:56:56.617605085+00:00 stderr F 2025/08/13 19:56:56 http: TLS handshake error from 127.0.0.1:57948: remote error: tls: bad certificate 2025-08-13T19:57:05.230301916+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45610: remote error: tls: bad certificate 2025-08-13T19:57:05.248021232+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45624: remote error: tls: bad certificate 2025-08-13T19:57:05.273461108+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45634: remote error: tls: bad certificate 2025-08-13T19:57:05.305141713+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45636: remote error: tls: bad certificate 2025-08-13T19:57:05.325868925+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45640: remote error: tls: bad certificate 2025-08-13T19:57:05.342881591+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45656: remote error: tls: bad certificate 2025-08-13T19:57:05.367365400+00:00 stderr F 2025/08/13 19:57:05 http: TLS 
handshake error from 127.0.0.1:45662: remote error: tls: bad certificate 2025-08-13T19:57:05.386722343+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45678: remote error: tls: bad certificate 2025-08-13T19:57:05.407868847+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45686: remote error: tls: bad certificate 2025-08-13T19:57:05.424290465+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45692: remote error: tls: bad certificate 2025-08-13T19:57:05.442185886+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45698: remote error: tls: bad certificate 2025-08-13T19:57:05.460172400+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45712: remote error: tls: bad certificate 2025-08-13T19:57:05.478593426+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45724: remote error: tls: bad certificate 2025-08-13T19:57:05.495328664+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45730: remote error: tls: bad certificate 2025-08-13T19:57:05.511511216+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45742: remote error: tls: bad certificate 2025-08-13T19:57:05.529711046+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45748: remote error: tls: bad certificate 2025-08-13T19:57:05.548125822+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45754: remote error: tls: bad certificate 2025-08-13T19:57:05.570967944+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45758: remote error: tls: bad certificate 2025-08-13T19:57:05.586724984+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45768: remote error: tls: bad certificate 2025-08-13T19:57:05.605996564+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45780: remote error: tls: bad certificate 
2025-08-13T19:57:05.622596058+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45786: remote error: tls: bad certificate 2025-08-13T19:57:05.643546386+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45798: remote error: tls: bad certificate 2025-08-13T19:57:05.660334256+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45802: remote error: tls: bad certificate 2025-08-13T19:57:05.681204832+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45814: remote error: tls: bad certificate 2025-08-13T19:57:05.702463449+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45826: remote error: tls: bad certificate 2025-08-13T19:57:05.723456308+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45840: remote error: tls: bad certificate 2025-08-13T19:57:05.745062205+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45854: remote error: tls: bad certificate 2025-08-13T19:57:05.763269205+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45860: remote error: tls: bad certificate 2025-08-13T19:57:05.788013961+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45870: remote error: tls: bad certificate 2025-08-13T19:57:05.805992145+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45878: remote error: tls: bad certificate 2025-08-13T19:57:05.822549078+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45882: remote error: tls: bad certificate 2025-08-13T19:57:05.846873602+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45896: remote error: tls: bad certificate 2025-08-13T19:57:05.864119685+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45910: remote error: tls: bad certificate 2025-08-13T19:57:05.880618756+00:00 stderr F 2025/08/13 19:57:05 http: TLS 
handshake error from 127.0.0.1:45920: remote error: tls: bad certificate 2025-08-13T19:57:05.898885137+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45926: remote error: tls: bad certificate 2025-08-13T19:57:05.919036103+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45940: remote error: tls: bad certificate 2025-08-13T19:57:05.934533455+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45956: remote error: tls: bad certificate 2025-08-13T19:57:05.951481689+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45966: remote error: tls: bad certificate 2025-08-13T19:57:05.969558015+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45972: remote error: tls: bad certificate 2025-08-13T19:57:05.988580929+00:00 stderr F 2025/08/13 19:57:05 http: TLS handshake error from 127.0.0.1:45986: remote error: tls: bad certificate 2025-08-13T19:57:06.008454336+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46002: remote error: tls: bad certificate 2025-08-13T19:57:06.025493193+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46018: remote error: tls: bad certificate 2025-08-13T19:57:06.046371529+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46020: remote error: tls: bad certificate 2025-08-13T19:57:06.065501075+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46028: remote error: tls: bad certificate 2025-08-13T19:57:06.088051199+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46032: remote error: tls: bad certificate 2025-08-13T19:57:06.105245760+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46036: remote error: tls: bad certificate 2025-08-13T19:57:06.124323665+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46046: remote error: tls: bad certificate 
2025-08-13T19:57:06.146168149+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46048: remote error: tls: bad certificate 2025-08-13T19:57:06.166367205+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46064: remote error: tls: bad certificate 2025-08-13T19:57:06.185961905+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46072: remote error: tls: bad certificate 2025-08-13T19:57:06.203137765+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46074: remote error: tls: bad certificate 2025-08-13T19:57:06.284907090+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46082: remote error: tls: bad certificate 2025-08-13T19:57:06.313498497+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46098: remote error: tls: bad certificate 2025-08-13T19:57:06.334065354+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46106: remote error: tls: bad certificate 2025-08-13T19:57:06.337165382+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46116: remote error: tls: bad certificate 2025-08-13T19:57:06.361948500+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46128: remote error: tls: bad certificate 2025-08-13T19:57:06.372693227+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46130: remote error: tls: bad certificate 2025-08-13T19:57:06.398034201+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46142: remote error: tls: bad certificate 2025-08-13T19:57:06.409981882+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46144: remote error: tls: bad certificate 2025-08-13T19:57:06.421979374+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46158: remote error: tls: bad certificate 2025-08-13T19:57:06.431347652+00:00 stderr F 2025/08/13 19:57:06 http: TLS 
handshake error from 127.0.0.1:46166: remote error: tls: bad certificate 2025-08-13T19:57:06.441501522+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46170: remote error: tls: bad certificate 2025-08-13T19:57:06.448276865+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46172: remote error: tls: bad certificate 2025-08-13T19:57:06.464864639+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46188: remote error: tls: bad certificate 2025-08-13T19:57:06.483985125+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46194: remote error: tls: bad certificate 2025-08-13T19:57:06.501266748+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46196: remote error: tls: bad certificate 2025-08-13T19:57:06.520198659+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46210: remote error: tls: bad certificate 2025-08-13T19:57:06.539083878+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46212: remote error: tls: bad certificate 2025-08-13T19:57:06.559295805+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46218: remote error: tls: bad certificate 2025-08-13T19:57:06.574943602+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46224: remote error: tls: bad certificate 2025-08-13T19:57:06.596600171+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46236: remote error: tls: bad certificate 2025-08-13T19:57:06.614022658+00:00 stderr F 2025/08/13 19:57:06 http: TLS handshake error from 127.0.0.1:46252: remote error: tls: bad certificate 2025-08-13T19:57:10.600037687+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55290: remote error: tls: bad certificate 2025-08-13T19:57:10.621431758+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55298: remote error: tls: bad certificate 
2025-08-13T19:57:10.635941312+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55308: remote error: tls: bad certificate
2025-08-13T19:57:10.650896939+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55318: remote error: tls: bad certificate
2025-08-13T19:57:10.669900572+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55320: remote error: tls: bad certificate
2025-08-13T19:57:10.689583654+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55336: remote error: tls: bad certificate
2025-08-13T19:57:10.706357373+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55352: remote error: tls: bad certificate
2025-08-13T19:57:10.724895082+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55354: remote error: tls: bad certificate
2025-08-13T19:57:10.739721396+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55368: remote error: tls: bad certificate
2025-08-13T19:57:10.755175867+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55370: remote error: tls: bad certificate
2025-08-13T19:57:10.777086313+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55374: remote error: tls: bad certificate
2025-08-13T19:57:10.795641893+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55388: remote error: tls: bad certificate
2025-08-13T19:57:10.814930223+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55394: remote error: tls: bad certificate
2025-08-13T19:57:10.832184976+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55400: remote error: tls: bad certificate
2025-08-13T19:57:10.847275607+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55402: remote error: tls: bad certificate
2025-08-13T19:57:10.889940626+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55410: remote error: tls: bad certificate
2025-08-13T19:57:10.907945069+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55420: remote error: tls: bad certificate
2025-08-13T19:57:10.932224823+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55426: remote error: tls: bad certificate
2025-08-13T19:57:10.956453325+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55440: remote error: tls: bad certificate
2025-08-13T19:57:10.973945124+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55444: remote error: tls: bad certificate
2025-08-13T19:57:10.997486336+00:00 stderr F 2025/08/13 19:57:10 http: TLS handshake error from 127.0.0.1:55448: remote error: tls: bad certificate
2025-08-13T19:57:11.018105455+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55452: remote error: tls: bad certificate
2025-08-13T19:57:11.037043386+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55456: remote error: tls: bad certificate
2025-08-13T19:57:11.065667623+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55472: remote error: tls: bad certificate
2025-08-13T19:57:11.085483659+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55482: remote error: tls: bad certificate
2025-08-13T19:57:11.101455615+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55488: remote error: tls: bad certificate
2025-08-13T19:57:11.124011199+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55502: remote error: tls: bad certificate
2025-08-13T19:57:11.142479637+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55508: remote error: tls: bad certificate
2025-08-13T19:57:11.160491541+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55516: remote error: tls: bad certificate
2025-08-13T19:57:11.181507541+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55520: remote error: tls: bad certificate
2025-08-13T19:57:11.197179268+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55536: remote error: tls: bad certificate
2025-08-13T19:57:11.221586335+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55540: remote error: tls: bad certificate
2025-08-13T19:57:11.240347491+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55544: remote error: tls: bad certificate
2025-08-13T19:57:11.254939198+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55546: remote error: tls: bad certificate
2025-08-13T19:57:11.272016065+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55552: remote error: tls: bad certificate
2025-08-13T19:57:11.286613462+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55566: remote error: tls: bad certificate
2025-08-13T19:57:11.306205962+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55572: remote error: tls: bad certificate
2025-08-13T19:57:11.323518106+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55586: remote error: tls: bad certificate
2025-08-13T19:57:11.342847668+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55600: remote error: tls: bad certificate
2025-08-13T19:57:11.362680484+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55612: remote error: tls: bad certificate
2025-08-13T19:57:11.380238716+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55626: remote error: tls: bad certificate
2025-08-13T19:57:11.406290580+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55634: remote error: tls: bad certificate
2025-08-13T19:57:11.422362459+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55644: remote error: tls: bad certificate
2025-08-13T19:57:11.441281279+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55660: remote error: tls: bad certificate
2025-08-13T19:57:11.462956728+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55668: remote error: tls: bad certificate
2025-08-13T19:57:11.480138568+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55670: remote error: tls: bad certificate
2025-08-13T19:57:11.498075931+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55674: remote error: tls: bad certificate
2025-08-13T19:57:11.515517819+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55676: remote error: tls: bad certificate
2025-08-13T19:57:11.534354167+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55688: remote error: tls: bad certificate
2025-08-13T19:57:11.556291493+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55698: remote error: tls: bad certificate
2025-08-13T19:57:11.576725386+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55704: remote error: tls: bad certificate
2025-08-13T19:57:11.598127228+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55718: remote error: tls: bad certificate
2025-08-13T19:57:11.613018833+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55722: remote error: tls: bad certificate
2025-08-13T19:57:11.629109082+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55728: remote error: tls: bad certificate
2025-08-13T19:57:11.647121027+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55740: remote error: tls: bad certificate
2025-08-13T19:57:11.665426079+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55756: remote error: tls: bad certificate
2025-08-13T19:57:11.682359963+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55764: remote error: tls: bad certificate
2025-08-13T19:57:11.700356587+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55770: remote error: tls: bad certificate
2025-08-13T19:57:11.713879813+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55786: remote error: tls: bad certificate
2025-08-13T19:57:11.733523184+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55796: remote error: tls: bad certificate
2025-08-13T19:57:11.777744517+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55798: remote error: tls: bad certificate
2025-08-13T19:57:11.802611327+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55808: remote error: tls: bad certificate
2025-08-13T19:57:11.829885545+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55810: remote error: tls: bad certificate
2025-08-13T19:57:11.858481022+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55820: remote error: tls: bad certificate
2025-08-13T19:57:11.886875443+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55826: remote error: tls: bad certificate
2025-08-13T19:57:11.910943450+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55830: remote error: tls: bad certificate
2025-08-13T19:57:11.931481526+00:00 stderr F 2025/08/13 19:57:11 http: TLS handshake error from 127.0.0.1:55842: remote error: tls: bad certificate
2025-08-13T19:57:14.325250390+00:00 stderr F I0813 19:57:14.325188       1 certwatcher.go:180] "certificate event" logger="controller-runtime.certwatcher" event="REMOVE \"/etc/webhook-cert/tls.crt\""
2025-08-13T19:57:14.326847276+00:00 stderr F I0813 19:57:14.326768       1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher"
2025-08-13T19:57:14.326941248+00:00 stderr F I0813 19:57:14.326923       1 certwatcher.go:180] "certificate event" logger="controller-runtime.certwatcher" event="REMOVE \"/etc/webhook-cert/tls.key\""
2025-08-13T19:57:14.327263998+00:00 stderr F I0813 19:57:14.327247       1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher"
2025-08-13T20:04:58.811366355+00:00 stderr F I0813 20:04:58.810626       1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:03.888436763+00:00 stderr F I0813 20:09:03.887738       1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:11:01.590145355+00:00 stderr F 2025/08/13 20:11:01 http: TLS handshake error from 127.0.0.1:59200: EOF
2025-08-13T20:41:21.499517961+00:00 stderr F 2025/08/13 20:41:21 http: TLS handshake error from 127.0.0.1:56308: EOF
2025-08-13T20:42:36.402219136+00:00 stderr F I0813 20:42:36.399978       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:46.384981592+00:00 stderr F I0813 20:42:46.384029       1 ovnkubeidentity.go:297] Received signal terminated....
2025-08-13T20:42:46.384981592+00:00 stderr F I0813 20:42:46.384942       1 ovnkubeidentity.go:77] Waiting (3m20s) for kubernetes-api to stop...
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/1.log
2025-08-13T20:03:52.252101561+00:00 stderr F + [[ -f /env/_master ]]
2025-08-13T20:03:52.252902494+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N'
2025-08-13T20:03:52.258513094+00:00 stdout F I0813 20:03:52.257953568 - network-node-identity - start approver
2025-08-13T20:03:52.258534125+00:00 stderr F + echo 'I0813 20:03:52.257953568 - network-node-identity - start approver'
2025-08-13T20:03:52.259139592+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --disable-webhook --csr-acceptance-conditions=/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json --loglevel=4
2025-08-13T20:03:52.393001511+00:00 stderr F I0813 20:03:52.392657       1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:4 port:9443 host:localhost certDir: metricsAddress:0 leaseNamespace: enableInterconnect:false enableHybridOverlay:false disableWebhook:true disableApprover:false waitForKAPIDuration:0 localKAPIPort:6443 extraAllowedUsers:{slice:[] hasBeenSet:false} csrAcceptanceConditionFile:/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json csrAcceptanceConditions:[] podAdmissionConditionFile: podAdmissionConditions:[]}
2025-08-13T20:03:52.393001511+00:00 stderr F W0813 20:03:52.392885       1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-08-13T20:03:52.398011554+00:00 stderr F I0813 20:03:52.397928       1 ovnkubeidentity.go:471] Starting certificate signing request approver
2025-08-13T20:03:52.398136927+00:00 stderr F I0813 20:03:52.398089       1 leaderelection.go:250] attempting to acquire leader lease openshift-network-node-identity/ovnkube-identity...
2025-08-13T20:03:52.403382227+00:00 stderr F E0813 20:03:52.403301       1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused
2025-08-13T20:03:52.403406728+00:00 stderr F I0813 20:03:52.403375       1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity
2025-08-13T20:04:31.982134314+00:00 stderr F I0813 20:04:31.980755       1 leaderelection.go:354] lock is held by crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 and has not yet expired
2025-08-13T20:04:31.982134314+00:00 stderr F I0813 20:04:31.980925       1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity
2025-08-13T20:05:00.826537302+00:00 stderr F I0813 20:05:00.826382       1 leaderelection.go:354] lock is held by crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 and has not yet expired
2025-08-13T20:05:00.826537302+00:00 stderr F I0813 20:05:00.826425       1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity
2025-08-13T20:05:28.356696557+00:00 stderr F I0813 20:05:28.355349       1 leaderelection.go:354] lock is held by crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 and has not yet expired
2025-08-13T20:05:28.356696557+00:00 stderr F I0813 20:05:28.355398       1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity
2025-08-13T20:06:04.133623897+00:00 stderr F I0813 20:06:04.133466       1 leaderelection.go:260] successfully acquired lease openshift-network-node-identity/ovnkube-identity
2025-08-13T20:06:04.160531788+00:00 stderr F I0813 20:06:04.159733       1 recorder.go:104] "crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"affbead6-e1b0-4053-844d-1baff2e26ac5","apiVersion":"coordination.k8s.io/v1","resourceVersion":"31975"} reason="LeaderElection"
2025-08-13T20:06:04.164734928+00:00 stderr F I0813 20:06:04.164577       1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest"
2025-08-13T20:06:04.164734928+00:00 stderr F I0813 20:06:04.164702       1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest"
2025-08-13T20:06:04.211696293+00:00 stderr F I0813 20:06:04.211552       1 reflector.go:289] Starting reflector *v1.CertificateSigningRequest (9h50m56.013963378s) from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105
2025-08-13T20:06:04.211696293+00:00 stderr F I0813 20:06:04.211604       1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105
2025-08-13T20:06:04.253691615+00:00 stderr F I0813 20:06:04.253587       1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105
2025-08-13T20:06:04.309922526+00:00 stderr F I0813 20:06:04.309672       1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=1
2025-08-13T20:06:04.346192274+00:00 stderr F I0813 20:06:04.346069       1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 60.352µs
2025-08-13T20:06:04.346385140+00:00 stderr F I0813 20:06:04.346317       1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 29.561µs
2025-08-13T20:06:04.346551995+00:00 stderr F I0813 20:06:04.346480       1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 22.77µs
2025-08-13T20:06:04.346604026+00:00 stderr F I0813 20:06:04.346568       1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 19.381µs
2025-08-13T20:06:04.346665638+00:00 stderr F I0813 20:06:04.346625       1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 18.661µs
2025-08-13T20:06:04.346920915+00:00 stderr F I0813 20:06:04.346762       1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 15.041µs
2025-08-13T20:08:26.272461173+00:00 stderr F E0813 20:08:26.272090       1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused
2025-08-13T20:08:26.297370847+00:00 stderr F I0813 20:08:26.293078       1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 2 items received
2025-08-13T20:08:26.308511837+00:00 stderr F I0813 20:08:26.303578       1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32771&timeoutSeconds=599&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:08:27.178411178+00:00 stderr F I0813 20:08:27.177465       1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32771&timeoutSeconds=541&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:08:29.356501355+00:00 stderr F I0813 20:08:29.356300       1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32771&timeoutSeconds=472&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:08:34.382094292+00:00 stderr F I0813 20:08:34.376626       1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=32771&timeoutSeconds=456&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:08:45.578744749+00:00 stderr F I0813 20:08:45.578627       1 reflector.go:449] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest closed with: too old resource version: 32771 (32913)
2025-08-13T20:09:03.647365741+00:00 stderr F I0813 20:09:03.646470       1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105
2025-08-13T20:09:03.663866894+00:00 stderr F I0813 20:09:03.660568       1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105
2025-08-13T20:09:03.663866894+00:00 stderr F I0813 20:09:03.663174       1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 62.051µs
2025-08-13T20:09:03.663866894+00:00 stderr F I0813 20:09:03.663258       1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 38.281µs
2025-08-13T20:09:03.663866894+00:00 stderr F I0813 20:09:03.663320       1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 39.452µs
2025-08-13T20:09:03.665673406+00:00 stderr F I0813 20:09:03.664963       1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 1.617777ms
2025-08-13T20:09:03.665673406+00:00 stderr F I0813 20:09:03.665162       1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 37.851µs
2025-08-13T20:09:03.665673406+00:00 stderr F I0813 20:09:03.665323       1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 19.921µs
2025-08-13T20:14:41.664443608+00:00 stderr F I0813 20:14:41.664204       1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 6 items received
2025-08-13T20:23:18.672638163+00:00 stderr F I0813 20:23:18.672237       1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 10 items received
2025-08-13T20:32:30.677906302+00:00 stderr F I0813 20:32:30.677644       1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 11 items received
2025-08-13T20:42:11.687999923+00:00 stderr F I0813 20:42:11.684848       1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 11 items received
2025-08-13T20:42:36.392193737+00:00 stderr F I0813 20:42:36.345690       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.412533663+00:00 stderr F I0813 20:42:36.412501       1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 0 items received
2025-08-13T20:42:36.547542146+00:00 stderr F I0813 20:42:36.546111       1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=37397&timeoutSeconds=395&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:38.747969165+00:00 stderr F I0813 20:42:38.747179       1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=37397&timeoutSeconds=562&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off
2025-08-13T20:42:40.143997762+00:00 stderr F I0813 20:42:40.143888       1 ovnkubeidentity.go:297] Received signal terminated....
2025-08-13T20:42:40.159443517+00:00 stderr F I0813 20:42:40.159310       1 internal.go:516] "Stopping and waiting for non leader election runnables"
2025-08-13T20:42:40.160173778+00:00 stderr F I0813 20:42:40.160116       1 internal.go:520] "Stopping and waiting for leader election runnables"
2025-08-13T20:42:40.161560158+00:00 stderr F I0813 20:42:40.161479       1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest"
2025-08-13T20:42:40.161560158+00:00 stderr F I0813 20:42:40.161527       1 controller.go:242] "All workers finished" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest"
2025-08-13T20:42:40.162157096+00:00 stderr F I0813 20:42:40.162096       1 internal.go:526] "Stopping and waiting for caches"
2025-08-13T20:42:40.162300340+00:00 stderr F I0813 20:42:40.162205       1 reflector.go:295] Stopping reflector *v1.CertificateSigningRequest (9h50m56.013963378s) from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105
2025-08-13T20:42:40.163638278+00:00 stderr F I0813 20:42:40.163570       1 internal.go:530] "Stopping and waiting for webhooks"
2025-08-13T20:42:40.163638278+00:00 stderr F I0813 20:42:40.163608       1 internal.go:533] "Stopping and waiting for HTTP servers"
2025-08-13T20:42:40.163638278+00:00 stderr F I0813 20:42:40.163621       1 internal.go:537] "Wait completed, proceeding to shutdown the manager"
2025-08-13T20:42:40.168672223+00:00 stderr F E0813 20:42:40.167441       1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused
2025-08-13T20:42:40.169350333+00:00 stderr F I0813 20:42:40.168700       1 recorder.go:104] "crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 stopped leading" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"affbead6-e1b0-4053-844d-1baff2e26ac5","apiVersion":"coordination.k8s.io/v1","resourceVersion":"37542"} reason="LeaderElection"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/2.log
2025-12-13T00:11:03.577069709+00:00 stderr F + [[ -f /env/_master ]]
2025-12-13T00:11:03.577069709+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N'
2025-12-13T00:11:03.581267493+00:00 stdout F I1213 00:11:03.580701098 - network-node-identity - start approver
2025-12-13T00:11:03.581291423+00:00 stderr F + echo 'I1213 00:11:03.580701098 - network-node-identity - start approver'
2025-12-13T00:11:03.581291423+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --disable-webhook --csr-acceptance-conditions=/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json --loglevel=4
2025-12-13T00:11:03.624229641+00:00 stderr F I1213 00:11:03.624105       1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:4 port:9443 host:localhost certDir: metricsAddress:0 leaseNamespace: enableInterconnect:false enableHybridOverlay:false disableWebhook:true disableApprover:false waitForKAPIDuration:0 localKAPIPort:6443 extraAllowedUsers:{slice:[] hasBeenSet:false} csrAcceptanceConditionFile:/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json csrAcceptanceConditions:[] podAdmissionConditionFile: podAdmissionConditions:[]}
2025-12-13T00:11:03.624291293+00:00 stderr F W1213 00:11:03.624279       1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-12-13T00:11:03.625167426+00:00 stderr F I1213 00:11:03.625151       1 ovnkubeidentity.go:471] Starting certificate signing request approver
2025-12-13T00:11:03.625278189+00:00 stderr F I1213 00:11:03.625262       1 leaderelection.go:250] attempting to acquire leader lease openshift-network-node-identity/ovnkube-identity...
2025-12-13T00:11:03.636687089+00:00 stderr F I1213 00:11:03.636614       1 leaderelection.go:354] lock is held by crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 and has not yet expired
2025-12-13T00:11:03.636687089+00:00 stderr F I1213 00:11:03.636636       1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity
2025-12-13T00:11:24.275164319+00:00 stderr F I1213 00:11:24.275068       1 leaderelection.go:354] lock is held by crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 and has not yet expired
2025-12-13T00:11:24.275164319+00:00 stderr F I1213 00:11:24.275093       1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity
2025-12-13T00:12:03.013243131+00:00 stderr F I1213 00:12:03.013174       1 leaderelection.go:354] lock is held by crc_f4814eaa-1834-4eb2-bf62-46477a6c9ac3 and has not yet expired
2025-12-13T00:12:03.013243131+00:00 stderr F I1213 00:12:03.013199       1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity
2025-12-13T00:12:38.686141470+00:00 stderr F I1213 00:12:38.686018       1 leaderelection.go:260] successfully acquired lease openshift-network-node-identity/ovnkube-identity
2025-12-13T00:12:38.686380586+00:00 stderr F I1213 00:12:38.686308       1 recorder.go:104] "crc_34b87a46-e705-4fe2-a934-4cbba726f071 became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"affbead6-e1b0-4053-844d-1baff2e26ac5","apiVersion":"coordination.k8s.io/v1","resourceVersion":"39898"} reason="LeaderElection"
2025-12-13T00:12:38.686502619+00:00 stderr F I1213 00:12:38.686453 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-12-13T00:12:38.686502619+00:00 stderr F I1213 00:12:38.686478 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-12-13T00:12:38.692318350+00:00 stderr F I1213 00:12:38.692218 1 reflector.go:289] Starting reflector *v1.CertificateSigningRequest (10h3m0.242954749s) from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:12:38.692318350+00:00 stderr F I1213 00:12:38.692285 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:12:38.694701202+00:00 stderr F I1213 00:12:38.694624 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:12:38.791298480+00:00 stderr F I1213 00:12:38.791202 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=1 2025-12-13T00:12:38.791839004+00:00 stderr F I1213 00:12:38.791782 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 20.14µs 2025-12-13T00:12:38.792200504+00:00 stderr F I1213 00:12:38.792158 1 recorder.go:104] "CSR \"csr-fbkxn\" has been approved" logger="events" type="Normal" object={"kind":"CertificateSigningRequest","name":"csr-fbkxn"} reason="CSRApproved" 2025-12-13T00:12:38.796505425+00:00 stderr F I1213 00:12:38.796448 1 approver.go:230] Finished syncing CSR csr-fbkxn for crc node in 4.559779ms 2025-12-13T00:12:38.796634309+00:00 stderr F I1213 00:12:38.796595 1 
approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 26.961µs 2025-12-13T00:12:38.796723401+00:00 stderr F I1213 00:12:38.796687 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 27.031µs 2025-12-13T00:12:38.796756992+00:00 stderr F I1213 00:12:38.796741 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 28.59µs 2025-12-13T00:12:38.796795253+00:00 stderr F I1213 00:12:38.796774 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 18.95µs 2025-12-13T00:12:38.796888275+00:00 stderr F I1213 00:12:38.796852 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 20.79µs 2025-12-13T00:12:38.796906256+00:00 stderr F I1213 00:12:38.796889 1 approver.go:230] Finished syncing CSR csr-fbkxn for unknown node in 17.76µs 2025-12-13T00:12:38.825617601+00:00 stderr F I1213 00:12:38.825345 1 approver.go:230] Finished syncing CSR csr-fbkxn for unknown node in 129.974µs 2025-12-13T00:12:51.015512576+00:00 stderr F I1213 00:12:51.015379 1 recorder.go:104] "CSR \"csr-c7kn9\" has been approved" logger="events" type="Normal" object={"kind":"CertificateSigningRequest","name":"csr-c7kn9"} reason="CSRApproved" 2025-12-13T00:12:51.030896915+00:00 stderr F I1213 00:12:51.030804 1 approver.go:230] Finished syncing CSR csr-c7kn9 for crc node in 15.674426ms 2025-12-13T00:12:51.031110850+00:00 stderr F I1213 00:12:51.031069 1 approver.go:230] Finished syncing CSR csr-c7kn9 for unknown node in 164.154µs 2025-12-13T00:12:51.045343740+00:00 stderr F I1213 00:12:51.045284 1 approver.go:230] Finished syncing CSR csr-c7kn9 for unknown node in 113.523µs 2025-12-13T00:20:38.440053378+00:00 stderr F I1213 00:20:38.439966 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 13 items received 2025-12-13T00:20:38.440664385+00:00 stderr F I1213 00:20:38.440619 1 reflector.go:425] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=42934&timeoutSeconds=571&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:38.984776808+00:00 stderr F E1213 00:20:38.984704 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:39.556307041+00:00 stderr F I1213 00:20:39.556199 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=42934&timeoutSeconds=553&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:41.950718732+00:00 stderr F I1213 00:20:41.950638 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=42934&timeoutSeconds=488&watch=true": dial tcp 38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:47.911878404+00:00 stderr F I1213 00:20:47.911783 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=42934&timeoutSeconds=564&watch=true": dial tcp 
38.102.83.51:6443: connect: connection refused - backing off 2025-12-13T00:20:58.491967025+00:00 stderr F I1213 00:20:58.491866 1 reflector.go:449] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest closed with: too old resource version: 42934 (42935) 2025-12-13T00:21:21.728967159+00:00 stderr F I1213 00:21:21.728901 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:21:21.730700006+00:00 stderr F I1213 00:21:21.730654 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:21:21.731073366+00:00 stderr F I1213 00:21:21.731037 1 approver.go:230] Finished syncing CSR csr-c7kn9 for unknown node in 13.28µs 2025-12-13T00:21:21.731090186+00:00 stderr F I1213 00:21:21.731072 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 13.13µs 2025-12-13T00:21:21.731114617+00:00 stderr F I1213 00:21:21.731100 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 18.921µs 2025-12-13T00:21:21.731147198+00:00 stderr F I1213 00:21:21.731128 1 approver.go:230] Finished syncing CSR csr-fbkxn for unknown node in 16.49µs 2025-12-13T00:21:21.731169719+00:00 stderr F I1213 00:21:21.731152 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 12.42µs 2025-12-13T00:21:21.731192269+00:00 stderr F I1213 00:21:21.731176 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 12.001µs 2025-12-13T00:21:21.731248481+00:00 stderr F I1213 00:21:21.731225 1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 13.18µs 2025-12-13T00:21:21.731323203+00:00 stderr F I1213 00:21:21.731304 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 14.51µs
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log
2025-08-13T19:50:46.034128438+00:00 stderr F + [[ -f /env/_master ]] 2025-08-13T19:50:46.061916362+00:00 stderr F ++ date '+%m%d %H:%M:%S.%N' 2025-08-13T19:50:46.146976623+00:00 stdout F I0813 19:50:46.088947634 - network-node-identity - start approver 2025-08-13T19:50:46.147024094+00:00 stderr F + echo 'I0813 19:50:46.088947634 - network-node-identity - start approver' 2025-08-13T19:50:46.147024094+00:00 stderr F + exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 --disable-webhook --csr-acceptance-conditions=/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json --loglevel=4 2025-08-13T19:50:47.258007987+00:00 stderr F I0813 19:50:47.257460 1 ovnkubeidentity.go:132] Config: {kubeconfig: apiServer:https://api-int.crc.testing:6443 logLevel:4 port:9443 host:localhost certDir: metricsAddress:0 leaseNamespace: enableInterconnect:false enableHybridOverlay:false disableWebhook:true disableApprover:false waitForKAPIDuration:0 localKAPIPort:6443 extraAllowedUsers:{slice:[] hasBeenSet:false} csrAcceptanceConditionFile:/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json csrAcceptanceConditions:[] podAdmissionConditionFile: podAdmissionConditions:[]} 2025-08-13T19:50:47.259446928+00:00 stderr F W0813 19:50:47.259418 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-08-13T19:50:47.270012330+00:00 stderr F I0813 19:50:47.269622 1 ovnkubeidentity.go:471] Starting certificate signing request approver 2025-08-13T19:50:47.283521506+00:00 stderr F I0813 19:50:47.282897 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-node-identity/ovnkube-identity... 2025-08-13T19:50:47.517100522+00:00 stderr F I0813 19:50:47.516520 1 leaderelection.go:354] lock is held by crc_b2366d4a-899d-4575-ad93-10121ab7b42a and has not yet expired 2025-08-13T19:50:47.517190414+00:00 stderr F I0813 19:50:47.517169 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T19:51:24.777301158+00:00 stderr F I0813 19:51:24.777094 1 leaderelection.go:354] lock is held by crc_b2366d4a-899d-4575-ad93-10121ab7b42a and has not yet expired 2025-08-13T19:51:24.777301158+00:00 stderr F I0813 19:51:24.777172 1 leaderelection.go:255] failed to acquire lease openshift-network-node-identity/ovnkube-identity 2025-08-13T19:51:48.849637807+00:00 stderr F I0813 19:51:48.849570 1 leaderelection.go:260] successfully acquired lease openshift-network-node-identity/ovnkube-identity 2025-08-13T19:51:48.850659406+00:00 stderr F I0813 19:51:48.850602 1 recorder.go:104] "crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"313ac7f3-4ba1-4dd0-b6b5-40f9f3a73f08","apiVersion":"coordination.k8s.io/v1","resourceVersion":"26671"} reason="LeaderElection" 2025-08-13T19:51:48.851389587+00:00 stderr F I0813 19:51:48.851289 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T19:51:48.851494360+00:00 stderr F I0813 19:51:48.851475 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" 
controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T19:51:48.856428690+00:00 stderr F I0813 19:51:48.856310 1 reflector.go:289] Starting reflector *v1.CertificateSigningRequest (9h28m13.239043519s) from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:51:48.856428690+00:00 stderr F I0813 19:51:48.856387 1 reflector.go:325] Listing and watching *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:51:48.869034989+00:00 stderr F I0813 19:51:48.868974 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:51:48.957216392+00:00 stderr F I0813 19:51:48.957111 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=1 2025-08-13T19:51:48.959225619+00:00 stderr F I0813 19:51:48.959157 1 approver.go:230] Finished syncing CSR csr-zbgxc for unknown node in 16.16µs 2025-08-13T19:51:48.959737294+00:00 stderr F I0813 19:51:48.959701 1 recorder.go:104] "CSR \"csr-dpjmc\" has been approved" logger="events" type="Normal" object={"kind":"CertificateSigningRequest","name":"csr-dpjmc"} reason="CSRApproved" 2025-08-13T19:51:48.971123478+00:00 stderr F I0813 19:51:48.971041 1 approver.go:230] Finished syncing CSR csr-dpjmc for crc node in 11.734344ms 2025-08-13T19:51:48.971246191+00:00 stderr F I0813 19:51:48.971186 1 approver.go:230] Finished syncing CSR csr-f6vwn for unknown node in 59.072µs 2025-08-13T19:51:48.971320084+00:00 stderr F I0813 19:51:48.971288 1 approver.go:230] Finished syncing CSR csr-kzl9s for unknown node in 35.321µs 2025-08-13T19:51:48.971392386+00:00 stderr F I0813 19:51:48.971339 1 approver.go:230] Finished syncing CSR csr-pcfpp for unknown node in 24.941µs 2025-08-13T19:51:48.971698394+00:00 stderr F I0813 
19:51:48.971647 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 26.731µs 2025-08-13T19:51:48.983918182+00:00 stderr F I0813 19:51:48.983873 1 approver.go:230] Finished syncing CSR csr-dpjmc for unknown node in 132.074µs 2025-08-13T19:57:39.305594345+00:00 stderr F I0813 19:57:39.305421 1 recorder.go:104] "CSR \"csr-fxkbs\" has been approved" logger="events" type="Normal" object={"kind":"CertificateSigningRequest","name":"csr-fxkbs"} reason="CSRApproved" 2025-08-13T19:57:39.316617830+00:00 stderr F I0813 19:57:39.316333 1 approver.go:230] Finished syncing CSR csr-fxkbs for crc node in 12.345512ms 2025-08-13T19:57:39.316617830+00:00 stderr F I0813 19:57:39.316501 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 61.332µs 2025-08-13T19:57:39.330480056+00:00 stderr F I0813 19:57:39.330297 1 approver.go:230] Finished syncing CSR csr-fxkbs for unknown node in 62.762µs 2025-08-13T19:58:46.872170084+00:00 stderr F I0813 19:58:46.872041 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 15 items received 2025-08-13T20:02:29.640165504+00:00 stderr F I0813 20:02:29.639336 1 reflector.go:800] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: Watch close - *v1.CertificateSigningRequest total 6 items received 2025-08-13T20:02:29.652962089+00:00 stderr F I0813 20:02:29.650652 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=373&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:31.045345338+00:00 stderr F I0813 20:02:31.045086 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of 
*v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=339&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:34.125400204+00:00 stderr F I0813 20:02:34.125327 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=508&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:39.664872788+00:00 stderr F I0813 20:02:39.664401 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=503&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:02:40.049084358+00:00 stderr F E0813 20:02:40.048894 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:02:47.690939252+00:00 stderr F I0813 20:02:47.690717 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=566&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 
2025-08-13T20:03:00.053560022+00:00 stderr F E0813 20:03:00.053423 1 leaderelection.go:332] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:00.839901233+00:00 stderr F I0813 20:03:00.839743 1 reflector.go:425] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.CertificateSigningRequest returned Get "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=30560&timeoutSeconds=591&watch=true": dial tcp 192.168.130.11:6443: connect: connection refused - backing off 2025-08-13T20:03:10.047475339+00:00 stderr F I0813 20:03:10.047083 1 leaderelection.go:285] failed to renew lease openshift-network-node-identity/ovnkube-identity: timed out waiting for the condition 2025-08-13T20:03:10.050289689+00:00 stderr F E0813 20:03:10.050206 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:03:10.050881566+00:00 stderr F I0813 20:03:10.050704 1 recorder.go:104] "crc_9a6fd3ed-e0b7-4ff5-b6f5-4bc33b4b2a02 stopped leading" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-network-node-identity","name":"ovnkube-identity","uid":"affbead6-e1b0-4053-844d-1baff2e26ac5","apiVersion":"coordination.k8s.io/v1","resourceVersion":"30647"} reason="LeaderElection" 2025-08-13T20:03:10.051385910+00:00 stderr F I0813 20:03:10.051306 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051417 1 internal.go:520] "Stopping and waiting for leader election 
runnables" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051459 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051469 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051476 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:03:10.052057009+00:00 stderr F I0813 20:03:10.051484 1 internal.go:537] "Wait completed, proceeding to shutdown the manager" 2025-08-13T20:03:10.052057009+00:00 stderr F error running approver: leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/2.log
2025-12-13T00:13:14.781427530+00:00 stderr F I1213 00:13:14.781171 1 cmd.go:233] Using service-serving-cert provided certificates 2025-12-13T00:13:14.782170176+00:00 stderr F I1213 00:13:14.782042 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:13:14.792159071+00:00 stderr F I1213 00:13:14.788229 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:14.825974967+00:00 stderr F I1213 00:13:14.825902 1 builder.go:271] openshift-kube-storage-version-migrator-operator version 4.16.0-202406131906.p0.gbf6afbb.assembly.stream.el9-bf6afbb-bf6afbb820531b4adc3a52f78a90f317c5580bad 2025-12-13T00:13:15.307991034+00:00 stderr F I1213 00:13:15.307395 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:15.307991034+00:00 stderr F W1213 00:13:15.307974 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:15.307991034+00:00 stderr F W1213 00:13:15.307983 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-12-13T00:13:15.308419899+00:00 stderr F I1213 00:13:15.308403 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-12-13T00:13:15.310191968+00:00 stderr F I1213 00:13:15.310159 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock... 2025-12-13T00:13:15.317154503+00:00 stderr F I1213 00:13:15.316659 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:15.317154503+00:00 stderr F I1213 00:13:15.316686 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:15.318884300+00:00 stderr F I1213 00:13:15.317592 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:15.318884300+00:00 stderr F I1213 00:13:15.317612 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:15.318884300+00:00 stderr F I1213 00:13:15.317680 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:15.318884300+00:00 stderr F I1213 00:13:15.317717 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:15.319279313+00:00 stderr F I1213 00:13:15.319058 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:15.351189256+00:00 stderr F I1213 00:13:15.349534 1 secure_serving.go:210] Serving securely on [::]:8443 2025-12-13T00:13:15.351189256+00:00 stderr F I1213 00:13:15.349615 1 
tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:15.417726552+00:00 stderr F I1213 00:13:15.417670 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:15.417803045+00:00 stderr F I1213 00:13:15.417776 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:15.417850466+00:00 stderr F I1213 00:13:15.417748 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:17:49.968066262+00:00 stderr F I1213 00:17:49.967298 1 leaderelection.go:260] successfully acquired lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock 2025-12-13T00:17:49.968066262+00:00 stderr F I1213 00:17:49.967450 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"openshift-kube-storage-version-migrator-operator-lock", UID:"f11eeab2-1d72-43eb-b632-de6b959eb2b8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41870", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-storage-version-migrator-operator-686c6c748c-qbnnr_9624852e-428e-4afd-99cd-f79ccd5f207e became leader 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990099 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990152 1 base_controller.go:67] Waiting for caches to sync for StaticConditionsController 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990159 1 base_controller.go:73] Caches are synced for StaticConditionsController 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990165 1 base_controller.go:110] Starting #1 worker of StaticConditionsController controller ... 
2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990222 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigrator 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990233 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigratorStaticResources 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990276 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigratorStaticResources 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990287 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigratorStaticResources controller ... 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990429 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990436 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990440 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990521 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990532 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990754 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-storage-version-migrator 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990783 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-storage-version-migrator 2025-12-13T00:17:49.990808488+00:00 stderr F I1213 00:17:49.990795 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-storage-version-migrator controller ... 
2025-12-13T00:17:50.091214807+00:00 stderr F I1213 00:17:50.091148 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigrator 2025-12-13T00:17:50.091214807+00:00 stderr F I1213 00:17:50.091177 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigrator controller ... 2025-12-13T00:19:37.572749605+00:00 stderr F I1213 00:19:37.571531 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.571465019 +0000 UTC))" 2025-12-13T00:19:37.572749605+00:00 stderr F I1213 00:19:37.572092 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.572064905 +0000 UTC))" 2025-12-13T00:19:37.572749605+00:00 stderr F I1213 00:19:37.572127 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.572102146 +0000 UTC))" 2025-12-13T00:19:37.572749605+00:00 stderr F I1213 00:19:37.572157 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.572134638 +0000 UTC))" 2025-12-13T00:19:37.572749605+00:00 stderr F I1213 00:19:37.572194 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.572170669 +0000 UTC))" 2025-12-13T00:19:37.572749605+00:00 stderr F I1213 00:19:37.572226 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.57220275 +0000 UTC))" 2025-12-13T00:19:37.572749605+00:00 stderr F I1213 00:19:37.572257 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.572233291 +0000 UTC))" 2025-12-13T00:19:37.572749605+00:00 stderr F I1213 00:19:37.572300 1 tlsconfig.go:178] "Loaded 
client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.572269912 +0000 UTC))" 2025-12-13T00:19:37.572749605+00:00 stderr F I1213 00:19:37.572331 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.572308583 +0000 UTC))" 2025-12-13T00:19:37.572749605+00:00 stderr F I1213 00:19:37.572371 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.572342664 +0000 UTC))" 2025-12-13T00:19:37.572749605+00:00 stderr F I1213 00:19:37.572404 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.572380475 +0000 UTC))" 2025-12-13T00:19:37.572749605+00:00 stderr F I1213 00:19:37.572436 1 tlsconfig.go:178] 
"Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.572413496 +0000 UTC))" 2025-12-13T00:19:37.575643485+00:00 stderr F I1213 00:19:37.573702 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-storage-version-migrator-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-storage-version-migrator-operator.svc,metrics.openshift-kube-storage-version-migrator-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:26 +0000 UTC to 2027-08-13 20:00:27 +0000 UTC (now=2025-12-13 00:19:37.57365929 +0000 UTC))" 2025-12-13T00:19:37.575643485+00:00 stderr F I1213 00:19:37.574395 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584795\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584795\" (2025-12-12 23:13:14 +0000 UTC to 2026-12-12 23:13:14 +0000 UTC (now=2025-12-13 00:19:37.574366389 +0000 UTC))" 2025-12-13T00:20:50.000167207+00:00 stderr F E1213 00:20:49.999638 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:50.009242391+00:00 stderr F E1213 00:20:50.009211 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:50.011079081+00:00 stderr F E1213 00:20:50.011035 1 leaderelection.go:332] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock?timeout=4m0s": dial tcp 
10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.022368566+00:00 stderr F E1213 00:20:50.022338 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:50.045183941+00:00 stderr F E1213 00:20:50.045145 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:50.088689825+00:00 stderr F E1213 00:20:50.088627 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:50.099591229+00:00 stderr F W1213 00:20:50.099551 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.099591229+00:00 stderr F E1213 00:20:50.099582 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.106496316+00:00 stderr F W1213 00:20:50.106456 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.106496316+00:00 
stderr F E1213 00:20:50.106487 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.126098725+00:00 stderr F W1213 00:20:50.126045 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.126098725+00:00 stderr F E1213 00:20:50.126078 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.148543461+00:00 stderr F W1213 00:20:50.148463 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.148543461+00:00 stderr F E1213 00:20:50.148503 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.191091308+00:00 stderr F W1213 00:20:50.191020 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.191091308+00:00 stderr F E1213 00:20:50.191069 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get 
"https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.275376653+00:00 stderr F W1213 00:20:50.275322 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.275376653+00:00 stderr F E1213 00:20:50.275354 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.402061431+00:00 stderr F E1213 00:20:50.402004 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:50.600773573+00:00 stderr F W1213 00:20:50.600339 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.600773573+00:00 stderr F E1213 00:20:50.600728 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.800402261+00:00 stderr F E1213 00:20:50.800355 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:51.000705886+00:00 stderr F W1213 00:20:51.000608 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.000705886+00:00 stderr F E1213 00:20:51.000643 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get 
"https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.200481606+00:00 stderr F E1213 00:20:51.200417 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:51.642882194+00:00 stderr F W1213 00:20:51.642827 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.642882194+00:00 stderr F E1213 00:20:51.642858 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.844572717+00:00 stderr F E1213 00:20:51.844508 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:52.932067102+00:00 stderr F W1213 00:20:52.929145 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:52.932067102+00:00 stderr F E1213 00:20:52.929610 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.129414978+00:00 stderr F E1213 00:20:53.129359 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection 
refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:55.494195201+00:00 stderr F W1213 00:20:55.493841 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.494195201+00:00 stderr F E1213 00:20:55.494146 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.694397374+00:00 stderr F E1213 00:20:55.694336 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection 
refused] 2025-12-13T00:21:00.617555453+00:00 stderr F W1213 00:21:00.617254 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.617555453+00:00 stderr F E1213 00:21:00.617485 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.819019000+00:00 stderr F E1213 00:21:00.818723 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:10.861719389+00:00 stderr F W1213 00:21:10.861244 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:10.861719389+00:00 stderr F E1213 00:21:10.861682 1 
base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:11.062977830+00:00 stderr F E1213 00:21:11.062924 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/1.log
2025-08-13T20:00:45.618270286+00:00 stderr F I0813 20:00:45.616690 1 cmd.go:233] Using service-serving-cert provided certificates 2025-08-13T20:00:45.618270286+00:00 stderr F I0813 20:00:45.617047 1 leaderelection.go:122] The leader election gives 4
retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:00:45.618270286+00:00 stderr F I0813 20:00:45.617822 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:46.189065082+00:00 stderr F I0813 20:00:46.188028 1 builder.go:271] openshift-kube-storage-version-migrator-operator version 4.16.0-202406131906.p0.gbf6afbb.assembly.stream.el9-bf6afbb-bf6afbb820531b4adc3a52f78a90f317c5580bad 2025-08-13T20:00:57.392828055+00:00 stderr F I0813 20:00:57.391890 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:00:57.392828055+00:00 stderr F W0813 20:00:57.392512 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:57.392828055+00:00 stderr F W0813 20:00:57.392521 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:03.400228587+00:00 stderr F I0813 20:01:03.372176 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:03.400228587+00:00 stderr F I0813 20:01:03.376636 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:03.402490702+00:00 stderr F I0813 20:01:03.382316 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:03.408512604+00:00 stderr F I0813 20:01:03.382385 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:03.409650426+00:00 stderr F I0813 20:01:03.382414 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:03.433956279+00:00 stderr F I0813 20:01:03.427085 1 
shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:03.463050499+00:00 stderr F I0813 20:01:03.447590 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:03.467044513+00:00 stderr F I0813 20:01:03.464488 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:03.467044513+00:00 stderr F I0813 20:01:03.464725 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T20:01:03.551232493+00:00 stderr F I0813 20:01:03.549119 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:03.586958362+00:00 stderr F I0813 20:01:03.585324 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:01:03.602420253+00:00 stderr F I0813 20:01:03.598102 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock... 
2025-08-13T20:01:03.646296424+00:00 stderr F I0813 20:01:03.644984 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:03.665302456+00:00 stderr F I0813 20:01:03.665177 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:02:58.875037003+00:00 stderr F E0813 20:02:58.874152 1 leaderelection.go:332] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:47.552341092+00:00 stderr F E0813 20:04:47.551352 1 leaderelection.go:332] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:25.498325085+00:00 stderr F I0813 20:06:25.496973 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"openshift-kube-storage-version-migrator-operator-lock", UID:"f11eeab2-1d72-43eb-b632-de6b959eb2b8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"32049", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-storage-version-migrator-operator-686c6c748c-qbnnr_7f35bdb1-fde8-47f2-9c84-83ee0362fb0d became leader 2025-08-13T20:06:25.499967312+00:00 stderr F I0813 20:06:25.499398 1 leaderelection.go:260] successfully acquired lease 
openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587371 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigratorStaticResources 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587401 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigrator 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587485 1 base_controller.go:67] Waiting for caches to sync for StaticConditionsController 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587504 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:06:25.588894659+00:00 stderr F I0813 20:06:25.587368 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:06:25.595438816+00:00 stderr F I0813 20:06:25.595316 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-storage-version-migrator 2025-08-13T20:06:25.595438816+00:00 stderr F I0813 20:06:25.595372 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-storage-version-migrator 2025-08-13T20:06:25.595438816+00:00 stderr F I0813 20:06:25.595391 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-storage-version-migrator controller ... 2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687663 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687688 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigrator 2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687728 1 base_controller.go:73] Caches are synced for StaticConditionsController 2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687709 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 
2025-08-13T20:06:25.687902114+00:00 stderr F I0813 20:06:25.687739 1 base_controller.go:110] Starting #1 worker of StaticConditionsController controller ... 2025-08-13T20:06:25.688692727+00:00 stderr F I0813 20:06:25.687731 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigrator controller ... 2025-08-13T20:06:25.688692727+00:00 stderr F I0813 20:06:25.688646 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigratorStaticResources 2025-08-13T20:06:25.688692727+00:00 stderr F I0813 20:06:25.688661 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigratorStaticResources controller ... 2025-08-13T20:06:25.691854777+00:00 stderr F I0813 20:06:25.689721 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:06:25.691854777+00:00 stderr F I0813 20:06:25.689750 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:08:25.599533059+00:00 stderr F E0813 20:08:25.597985 1 leaderelection.go:332] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.734448848+00:00 stderr F W0813 20:08:25.734004 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.735539009+00:00 stderr F E0813 20:08:25.734976 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:08:25.735539009+00:00 stderr F E0813 20:08:25.735112 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.745218906+00:00 stderr F W0813 20:08:25.745141 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.745249227+00:00 stderr F E0813 20:08:25.745211 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.750312662+00:00 stderr F E0813 20:08:25.750288 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, 
"kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.759633450+00:00 stderr F W0813 20:08:25.759430 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.759633450+00:00 stderr F E0813 20:08:25.759503 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.770068159+00:00 stderr F E0813 20:08:25.770022 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.789868556+00:00 stderr F W0813 20:08:25.788442 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.789868556+00:00 stderr F E0813 20:08:25.788577 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.806198235+00:00 stderr F E0813 20:08:25.806145 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.857742602+00:00 stderr F W0813 
20:08:25.852403 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.857742602+00:00 stderr F E0813 20:08:25.852463 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.861860991+00:00 stderr F E0813 20:08:25.860102 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.943490411+00:00 stderr F W0813 20:08:25.943359 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.943548423+00:00 stderr F E0813 20:08:25.943538 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation 
failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.167700009+00:00 stderr F W0813 20:08:26.147031 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.168963606+00:00 stderr F E0813 20:08:26.168842 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.336574241+00:00 stderr F E0813 20:08:26.336238 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:26.535848075+00:00 stderr F W0813 20:08:26.534337 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.535848075+00:00 stderr F E0813 20:08:26.534557 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.734114189+00:00 stderr F E0813 20:08:26.734020 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:27.063517294+00:00 stderr F E0813 20:08:27.063454 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:27.184202064+00:00 stderr F W0813 20:08:27.184100 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.184202064+00:00 stderr F E0813 20:08:27.184168 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.720215052+00:00 stderr F E0813 20:08:27.719910 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: 
connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:28.473667634+00:00 stderr F W0813 20:08:28.473522 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:28.473667634+00:00 stderr F E0813 20:08:28.473628 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.023002023+00:00 stderr F E0813 20:08:29.022568 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:31.039953282+00:00 stderr F W0813 20:08:31.039353 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.039953282+00:00 stderr F E0813 20:08:31.039685 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.591380821+00:00 stderr F E0813 20:08:31.591277 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:36.165164983+00:00 stderr F W0813 20:08:36.164486 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.165164983+00:00 stderr F E0813 20:08:36.165119 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get 
"https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.719395434+00:00 stderr F E0813 20:08:36.718222 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.418918077+00:00 stderr F W0813 20:08:46.416514 1 base_controller.go:232] Updating status of "KubeStorageVersionMigrator" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.418918077+00:00 stderr F E0813 20:08:46.418697 1 base_controller.go:268] KubeStorageVersionMigrator reconciliation failed: Get "https://10.217.4.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:46.965185829+00:00 stderr F E0813 20:08:46.964570 1 base_controller.go:268] KubeStorageVersionMigratorStaticResources reconciliation failed: ["kube-storage-version-migrator/namespace.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa": dial tcp 10.217.4.1:443: connect: connection refused, "kube-storage-version-migrator/roles.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-version-migration-migrator": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubestorageversionmigrators/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:42:36.461846675+00:00 stderr F I0813 20:42:36.430750 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461846675+00:00 stderr F I0813 20:42:36.438450 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.467606601+00:00 stderr F I0813 20:42:36.443218 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.471980047+00:00 stderr F I0813 20:42:36.438644 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.472057619+00:00 stderr F I0813 20:42:36.451160 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.478446954+00:00 stderr F I0813 20:42:36.441451 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.483736176+00:00 stderr F I0813 20:42:36.451208 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:41.244733697+00:00 stderr F I0813 20:42:41.242977 1 cmd.go:121] Received SIGTERM or SIGINT signal, shutting 
down controller. 2025-08-13T20:42:41.245123478+00:00 stderr F I0813 20:42:41.245089 1 base_controller.go:172] Shutting down KubeStorageVersionMigrator ... 2025-08-13T20:42:41.245201180+00:00 stderr F I0813 20:42:41.245181 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:41.245334404+00:00 stderr F I0813 20:42:41.245305 1 base_controller.go:172] Shutting down KubeStorageVersionMigratorStaticResources ... 2025-08-13T20:42:41.245404646+00:00 stderr F I0813 20:42:41.245314 1 base_controller.go:172] Shutting down StatusSyncer_kube-storage-version-migrator ... 2025-08-13T20:42:41.246118767+00:00 stderr F I0813 20:42:41.246045 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:42:41.246118767+00:00 stderr F I0813 20:42:41.245437 1 base_controller.go:150] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated 2025-08-13T20:42:41.246338343+00:00 stderr F I0813 20:42:41.246165 1 base_controller.go:172] Shutting down StaticConditionsController ... 2025-08-13T20:42:41.246648952+00:00 stderr F I0813 20:42:41.246111 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:42:41.246648952+00:00 stderr F I0813 20:42:41.246205 1 base_controller.go:114] Shutting down worker of StaticConditionsController controller ... 2025-08-13T20:42:41.246720334+00:00 stderr F I0813 20:42:41.246375 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigrator controller ... 
2025-08-13T20:42:41.247062304+00:00 stderr F I0813 20:42:41.245736 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:41.247193198+00:00 stderr F I0813 20:42:41.247165 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:41.247711623+00:00 stderr F I0813 20:42:41.247664 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:42:41.247711623+00:00 stderr F I0813 20:42:41.247686 1 base_controller.go:104] All StaticConditionsController workers have been terminated 2025-08-13T20:42:41.247734993+00:00 stderr F I0813 20:42:41.247715 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigratorStaticResources controller ... 2025-08-13T20:42:41.247734993+00:00 stderr F I0813 20:42:41.247725 1 base_controller.go:104] All KubeStorageVersionMigratorStaticResources workers have been terminated 2025-08-13T20:42:41.247749164+00:00 stderr F I0813 20:42:41.247733 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:42:41.247749164+00:00 stderr F I0813 20:42:41.247739 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:41.248002231+00:00 stderr F W0813 20:42:41.247762 1 builder.go:109] graceful termination failed, controllers failed with error: stopped
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log
2025-08-13T19:59:35.330178547+00:00 stderr F I0813 19:59:35.328428 1 cmd.go:233] Using service-serving-cert provided certificates 2025-08-13T19:59:35.355461738+00:00 stderr F I0813 19:59:35.332273 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:35.355461738+00:00 stderr F I0813 19:59:35.339700 1 observer_polling.go:159] Starting file observer 2025-08-13T19:59:37.266163302+00:00 stderr F I0813 19:59:37.198407 1 builder.go:271] openshift-kube-storage-version-migrator-operator version 4.16.0-202406131906.p0.gbf6afbb.assembly.stream.el9-bf6afbb-bf6afbb820531b4adc3a52f78a90f317c5580bad 2025-08-13T19:59:43.151264028+00:00 stderr F I0813 19:59:43.111663 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T19:59:43.151420742+00:00 stderr F W0813 19:59:43.151395 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T19:59:43.151455293+00:00 stderr F W0813 19:59:43.151443 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T19:59:43.215601542+00:00 stderr F I0813 19:59:43.215510 1 builder.go:412] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T19:59:43.243873157+00:00 stderr F I0813 19:59:43.243742 1 secure_serving.go:210] Serving securely on [::]:8443 2025-08-13T19:59:43.244636699+00:00 stderr F I0813 19:59:43.244613 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:43.344536237+00:00 stderr F I0813 19:59:43.342200 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:43.344536237+00:00 stderr F I0813 19:59:43.245514 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:43.344536237+00:00 stderr F I0813 19:59:43.246310 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:43.344536237+00:00 stderr F I0813 19:59:43.343769 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:43.386934916+00:00 stderr F I0813 19:59:43.386311 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock... 
2025-08-13T19:59:43.406232186+00:00 stderr F I0813 19:59:43.315336 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.406232186+00:00 stderr F I0813 19:59:43.405896 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:43.480296707+00:00 stderr F I0813 19:59:43.268900 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:44.354915128+00:00 stderr F I0813 19:59:44.351258 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:44.432892331+00:00 stderr F I0813 19:59:44.432397 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:44.467855027+00:00 stderr F I0813 19:59:44.467672 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:44.490205385+00:00 stderr F E0813 19:59:44.488873 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.490205385+00:00 stderr F E0813 19:59:44.488994 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.520984552+00:00 stderr F E0813 19:59:44.519629 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.554039574+00:00 stderr F E0813 19:59:44.553983 1 configmap_cafile_content.go:243] 
kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.564088650+00:00 stderr F E0813 19:59:44.564054 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.573408966+00:00 stderr F E0813 19:59:44.572182 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.613393246+00:00 stderr F E0813 19:59:44.588701 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.613393246+00:00 stderr F E0813 19:59:44.604003 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.633157629+00:00 stderr F E0813 19:59:44.632735 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.654976221+00:00 stderr F E0813 19:59:44.654441 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:44.713615723+00:00 stderr F E0813 19:59:44.713422 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T19:59:44.736544557+00:00 stderr F E0813 19:59:44.736012 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.877195426+00:00 stderr F E0813 19:59:44.873915 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.900236113+00:00 stderr F E0813 19:59:44.900115 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.213142002+00:00 stderr F E0813 19:59:45.212320 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.238307950+00:00 stderr F E0813 19:59:45.235450 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.856013558+00:00 stderr F E0813 19:59:45.855286 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.879065245+00:00 stderr F E0813 19:59:45.878010 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:46.461090794+00:00 stderr F I0813 19:59:46.460595 1 leaderelection.go:260] successfully acquired lease openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock
2025-08-13T19:59:46.462002060+00:00 stderr F I0813 19:59:46.461413 1 event.go:298] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"openshift-kube-storage-version-migrator-operator-lock", UID:"f11eeab2-1d72-43eb-b632-de6b959eb2b8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28371", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-storage-version-migrator-operator-686c6c748c-qbnnr_1a321c6b-3aae-44eb-a5dc-fa9e08493642 became leader
2025-08-13T19:59:47.139745260+00:00 stderr F E0813 19:59:47.136166 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:47.160170902+00:00 stderr F E0813 19:59:47.160114 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.211266 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigratorStaticResources
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.212603 1 base_controller.go:67] Waiting for caches to sync for StaticConditionsController
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.212710 1 base_controller.go:67] Waiting for caches to sync for KubeStorageVersionMigrator
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.212710 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController
2025-08-13T19:59:47.237902628+00:00 stderr F I0813 19:59:47.231932 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-storage-version-migrator
2025-08-13T19:59:47.250364453+00:00 stderr F I0813 19:59:47.208997 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:59:47.356337384+00:00 stderr F I0813 19:59:47.356281 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T19:59:47.356431077+00:00 stderr F I0813 19:59:47.356417 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T19:59:47.413240606+00:00 stderr F I0813 19:59:47.413189 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigratorStaticResources
2025-08-13T19:59:47.413321108+00:00 stderr F I0813 19:59:47.413301 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigratorStaticResources controller ...
2025-08-13T19:59:47.413666138+00:00 stderr F I0813 19:59:47.413500 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-08-13T19:59:47.413666138+00:00 stderr F I0813 19:59:47.413658 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-08-13T19:59:47.418444654+00:00 stderr F I0813 19:59:47.415580 1 base_controller.go:73] Caches are synced for StaticConditionsController
2025-08-13T19:59:47.418444654+00:00 stderr F I0813 19:59:47.415616 1 base_controller.go:110] Starting #1 worker of StaticConditionsController controller ...
2025-08-13T19:59:47.834501895+00:00 stderr F I0813 19:59:47.819048 1 base_controller.go:73] Caches are synced for KubeStorageVersionMigrator
2025-08-13T19:59:47.834501895+00:00 stderr F I0813 19:59:47.819436 1 base_controller.go:110] Starting #1 worker of KubeStorageVersionMigrator controller ...
2025-08-13T19:59:47.933420705+00:00 stderr F I0813 19:59:47.932095 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-storage-version-migrator
2025-08-13T19:59:47.933420705+00:00 stderr F I0813 19:59:47.932166 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-storage-version-migrator controller ...
2025-08-13T19:59:48.570223548+00:00 stderr F I0813 19:59:48.569636 1 status_controller.go:213] clusteroperator/kube-storage-version-migrator diff {"status":{"conditions":[{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
2025-08-13T19:59:48.812035401+00:00 stderr F I0813 19:59:48.811900 1 event.go:298] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")
2025-08-13T19:59:49.703053540+00:00 stderr F E0813 19:59:49.702187 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:49.722226326+00:00 stderr F E0813 19:59:49.721050 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:51.828009674+00:00 stderr F I0813 19:59:51.826737 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876354 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.876167977 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876408 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.876391944 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876427 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.876414654 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876444 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.876432905 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876461 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876449915 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876480 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876466376 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876496 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876484806 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876517 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876500697 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.876539 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.876525097 +0000 UTC))"
2025-08-13T19:59:51.877228817+00:00 stderr F I0813 19:59:51.877084 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-storage-version-migrator-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-storage-version-migrator-operator.svc,metrics.openshift-kube-storage-version-migrator-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 19:59:51.877056793 +0000 UTC))"
2025-08-13T19:59:51.877674800+00:00 stderr F I0813 19:59:51.877505 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115182\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 19:59:51.877488145 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.730045 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.729990125 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747565 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.747497084 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747598 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.747581067 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747621 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.747605227 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747644 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747631328 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747663 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747650479 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747705 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747667939 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747727 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747714131 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747743 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.747731741 +0000 UTC))"
2025-08-13T20:00:05.747968468+00:00 stderr F I0813 20:00:05.747766 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.747755002 +0000 UTC))"
2025-08-13T20:00:05.749320426+00:00 stderr F I0813 20:00:05.748276 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-storage-version-migrator-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-storage-version-migrator-operator.svc,metrics.openshift-kube-storage-version-migrator-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 20:00:05.748243846 +0000 UTC))"
2025-08-13T20:00:05.772445416+00:00 stderr F I0813 20:00:05.769337 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115182\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:00:05.769289016 +0000 UTC))"
2025-08-13T20:00:34.342539512+00:00 stderr F I0813 20:00:34.340074 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="can't remove non-existent watcher: /var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:34.342539512+00:00 stderr F I0813 20:00:34.342506 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="can't remove non-existent watcher: /var/run/secrets/serving-cert/tls.crt"
2025-08-13T20:00:34.347134543+00:00 stderr F I0813 20:00:34.346908 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347876 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:34.347766291 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347908 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:34.347894655 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347944 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:34.347915466 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347964 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:34.347950157 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.347989 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.347970797 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.348009 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.347997468 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.348026 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.348014418 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.348051 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.348031179 +0000 UTC))"
2025-08-13T20:00:34.348145752+00:00 stderr F I0813 20:00:34.348082 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:34.34806024 +0000 UTC))"
2025-08-13T20:00:34.348300997+00:00 stderr F I0813 20:00:34.348159 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:34.348095171 +0000 UTC))"
2025-08-13T20:00:34.351572430+00:00 stderr F I0813 20:00:34.348503 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-storage-version-migrator-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-storage-version-migrator-operator.svc,metrics.openshift-kube-storage-version-migrator-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:26 +0000 UTC to 2027-08-13 20:00:27 +0000 UTC (now=2025-08-13 20:00:34.348471461 +0000 UTC))"
2025-08-13T20:00:34.357169469+00:00 stderr F I0813 20:00:34.356184 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115182\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115178\" (2025-08-13 18:59:37 +0000 UTC to 2026-08-13 18:59:37 +0000 UTC (now=2025-08-13 20:00:34.356076198 +0000 UTC))"
2025-08-13T20:00:35.369679330+00:00 stderr F I0813 20:00:35.364527 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="43548186e7ce5eab21976aea3b471207a358b9f8fb63bf325b8f4755a5142ae9", new="cdf9d7851715e3205e610dc8b06ddc4b8a158c767e0f50cab6e974e6fee4d6bf")
2025-08-13T20:00:35.369679330+00:00 stderr F W0813 20:00:35.369569 1 builder.go:132] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified
2025-08-13T20:00:35.369753702+00:00 stderr F I0813 20:00:35.369717 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="9e10b51cb3256c60ae44b395564462b79050e988d1626d5f34804f849a3655a7", new="f7b6ebeaff863e5f1a2771d98136282ec8f6675eb20222ebefd0d2097785c6f3")
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372131 1 genericapiserver.go:681] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372256 1 genericapiserver.go:538] "[graceful-termination] shutdown event" name="ShutdownInitiated"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372306 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372714 1 base_controller.go:172] Shutting down KubeStorageVersionMigrator ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372742 1 base_controller.go:172] Shutting down StaticConditionsController ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.372894 1 genericapiserver.go:639] "[graceful-termination] not going to wait for active watch request(s) to drain"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373245 1 base_controller.go:114] Shutting down worker of StaticConditionsController controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373320 1 base_controller.go:104] All StaticConditionsController workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373339 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373685 1 secure_serving.go:255] Stopped listening on [::]:8443
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.373702 1 genericapiserver.go:588] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.376760 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.376876 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.376922 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.376984 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377018 1 genericapiserver.go:701] [graceful-termination] apiserver is exiting
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377039 1 builder.go:302] server exited
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377111 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigrator controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377129 1 base_controller.go:104] All KubeStorageVersionMigrator workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377162 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377182 1 base_controller.go:172] Shutting down KubeStorageVersionMigratorStaticResources ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377194 1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377277 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377284 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377292 1 base_controller.go:114] Shutting down worker of KubeStorageVersionMigratorStaticResources controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377298 1 base_controller.go:104] All KubeStorageVersionMigratorStaticResources workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377307 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377314 1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.377334 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-storage-version-migrator controller ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.378324 1 base_controller.go:172] Shutting down StatusSyncer_kube-storage-version-migrator ...
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.378427 1 base_controller.go:150] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F I0813 20:00:35.378437 1 base_controller.go:104] All StatusSyncer_kube-storage-version-migrator workers have been terminated
2025-08-13T20:00:35.384702929+00:00 stderr F W0813 20:00:35.381309 1 builder.go:109] graceful termination failed, controllers failed with error: stopped
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/extract-content/0.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/registry-server/0.log
2025-12-13T00:15:08.213758889+00:00 stderr F time="2025-12-13T00:15:08Z" level=info msg="starting pprof endpoint" address="localhost:6060"
2025-12-13T00:15:09.063354405+00:00 stderr F time="2025-12-13T00:15:09Z" level=info msg="serving registry" configs=/extracted-catalog/catalog port=50051
2025-12-13T00:15:09.063409807+00:00 stderr F time="2025-12-13T00:15:09Z" level=info msg="stopped caching cpu profile data" address="localhost:6060"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/extract-utilities/0.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/1.log
2025-12-13T00:13:17.125982873+00:00 stdout F Copying system trust bundle
2025-12-13T00:13:18.049041830+00:00 stderr F W1213 00:13:18.048972 1 feature_gate.go:239] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
2025-12-13T00:13:18.049041830+00:00 stderr F I1213 00:13:18.049013 1 feature_gate.go:249] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
2025-12-13T00:13:18.049104882+00:00 stderr F W1213 00:13:18.049046 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallAzure"
2025-12-13T00:13:18.049104882+00:00 stderr F W1213 00:13:18.049051 1 config.go:121] Ignoring unknown FeatureGate "BuildCSIVolumes"
2025-12-13T00:13:18.049104882+00:00 stderr F W1213 00:13:18.049055 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallIBMCloud"
2025-12-13T00:13:18.049104882+00:00 stderr F W1213 00:13:18.049072 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderAzure"
2025-12-13T00:13:18.049104882+00:00 stderr F W1213 00:13:18.049077 1 config.go:121] Ignoring unknown FeatureGate "AzureWorkloadIdentity"
2025-12-13T00:13:18.049104882+00:00 stderr F W1213 00:13:18.049081 1 config.go:121] Ignoring unknown FeatureGate "DNSNameResolver"
2025-12-13T00:13:18.049104882+00:00 stderr F W1213 00:13:18.049086 1 config.go:121] Ignoring unknown FeatureGate "Example"
2025-12-13T00:13:18.049104882+00:00 stderr F W1213 00:13:18.049090 1 config.go:121] Ignoring unknown FeatureGate "GCPClusterHostedDNS"
2025-12-13T00:13:18.049104882+00:00 stderr F W1213 00:13:18.049094 1 config.go:121] Ignoring unknown FeatureGate "InsightsConfigAPI"
2025-12-13T00:13:18.049104882+00:00 stderr F W1213 00:13:18.049098 1 config.go:121] Ignoring unknown FeatureGate "PlatformOperators"
2025-12-13T00:13:18.049104882+00:00 stderr F W1213 00:13:18.049101 1 config.go:121] Ignoring unknown FeatureGate "AutomatedEtcdBackup"
2025-12-13T00:13:18.049117262+00:00 stderr F W1213 00:13:18.049104 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstall"
2025-12-13T00:13:18.049117262+00:00 stderr F W1213 00:13:18.049108 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallAWS"
2025-12-13T00:13:18.049117262+00:00 stderr F W1213 00:13:18.049111 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallGCP"
2025-12-13T00:13:18.049117262+00:00 stderr F W1213 00:13:18.049114 1 config.go:121] Ignoring unknown FeatureGate "MachineConfigNodes"
2025-12-13T00:13:18.049126033+00:00 stderr F W1213 00:13:18.049118 1 config.go:121] Ignoring unknown FeatureGate "NodeDisruptionPolicy"
2025-12-13T00:13:18.049126033+00:00 stderr F W1213 00:13:18.049121 1 config.go:121] Ignoring unknown FeatureGate "PinnedImages"
2025-12-13T00:13:18.049135903+00:00 stderr F W1213 00:13:18.049131 1 config.go:121] Ignoring unknown FeatureGate "AlibabaPlatform"
2025-12-13T00:13:18.049143133+00:00 stderr F W1213 00:13:18.049135 1 config.go:121] Ignoring unknown FeatureGate "GCPLabelsTags"
2025-12-13T00:13:18.049143133+00:00 stderr F W1213 00:13:18.049139 1 config.go:121] Ignoring unknown FeatureGate "InstallAlternateInfrastructureAWS"
2025-12-13T00:13:18.049151024+00:00 stderr F W1213 00:13:18.049142 1 config.go:121] Ignoring unknown FeatureGate "SigstoreImageVerification"
2025-12-13T00:13:18.049151024+00:00 stderr F W1213 00:13:18.049146 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallVSphere"
2025-12-13T00:13:18.049159774+00:00 stderr F W1213 00:13:18.049149 1 config.go:121] Ignoring unknown FeatureGate "VSphereControlPlaneMachineSet"
2025-12-13T00:13:18.049166674+00:00 stderr F W1213 00:13:18.049158 1 config.go:121] Ignoring unknown FeatureGate "CSIDriverSharedResource"
2025-12-13T00:13:18.049166674+00:00 stderr F W1213 00:13:18.049162 1 config.go:121] Ignoring unknown FeatureGate "ExternalRouteCertificate"
2025-12-13T00:13:18.049190755+00:00 stderr F W1213 00:13:18.049165 1 config.go:121] Ignoring unknown FeatureGate "MachineAPIOperatorDisableMachineHealthCheckController"
2025-12-13T00:13:18.049190755+00:00 stderr F W1213 00:13:18.049170 1 config.go:121] Ignoring unknown FeatureGate "ChunkSizeMiB"
2025-12-13T00:13:18.049190755+00:00 stderr F W1213 00:13:18.049175 1 config.go:121] Ignoring unknown FeatureGate "ExternalOIDC"
2025-12-13T00:13:18.049190755+00:00 stderr F W1213 00:13:18.049178 1 config.go:121] Ignoring unknown FeatureGate "ManagedBootImages"
2025-12-13T00:13:18.049190755+00:00 stderr F W1213 00:13:18.049181 1 config.go:121] Ignoring unknown FeatureGate "EtcdBackendQuota"
2025-12-13T00:13:18.049190755+00:00 stderr F W1213 00:13:18.049184 1 config.go:121] Ignoring unknown FeatureGate "GatewayAPI"
2025-12-13T00:13:18.049190755+00:00 stderr F W1213 00:13:18.049188 1 config.go:121] Ignoring unknown FeatureGate "InsightsOnDemandDataGather"
2025-12-13T00:13:18.049200405+00:00 stderr F W1213 00:13:18.049191 1 config.go:121] Ignoring unknown FeatureGate "NetworkLiveMigration"
2025-12-13T00:13:18.049200405+00:00 stderr F W1213 00:13:18.049195 1 config.go:121] Ignoring unknown FeatureGate "OpenShiftPodSecurityAdmission"
2025-12-13T00:13:18.049200405+00:00 stderr F W1213 00:13:18.049198 1 config.go:121] Ignoring unknown FeatureGate "UpgradeStatus"
2025-12-13T00:13:18.049208245+00:00 stderr F W1213 00:13:18.049201 1 config.go:121] Ignoring unknown FeatureGate "VSphereMultiVCenters"
2025-12-13T00:13:18.049208245+00:00 stderr F W1213 00:13:18.049205 1 config.go:121] Ignoring unknown FeatureGate "VSphereStaticIPs"
2025-12-13T00:13:18.049215546+00:00 stderr F W1213 00:13:18.049208 1 config.go:121] Ignoring unknown FeatureGate "ImagePolicy"
2025-12-13T00:13:18.049215546+00:00 stderr F W1213 00:13:18.049212 1 config.go:121] Ignoring unknown FeatureGate "InsightsConfig"
2025-12-13T00:13:18.049222896+00:00 stderr F W1213 00:13:18.049215 1 config.go:121] Ignoring unknown FeatureGate "MetricsServer"
2025-12-13T00:13:18.049222896+00:00 stderr F W1213 00:13:18.049219 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallNutanix"
2025-12-13T00:13:18.049230046+00:00 stderr F W1213 00:13:18.049222 1 config.go:121] Ignoring unknown FeatureGate "MachineAPIProviderOpenStack"
2025-12-13T00:13:18.049230046+00:00 stderr F W1213 00:13:18.049225 1 config.go:121] Ignoring unknown FeatureGate "VolumeGroupSnapshot"
2025-12-13T00:13:18.049237476+00:00 stderr F W1213 00:13:18.049229 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProvider"
2025-12-13T00:13:18.049237476+00:00 stderr F W1213 00:13:18.049232 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallOpenStack"
2025-12-13T00:13:18.049245157+00:00 stderr F W1213 00:13:18.049235 1 config.go:121] Ignoring unknown FeatureGate "NetworkDiagnosticsConfig"
2025-12-13T00:13:18.049245157+00:00 stderr F W1213 00:13:18.049239 1 config.go:121] Ignoring unknown FeatureGate "OnClusterBuild"
2025-12-13T00:13:18.049245157+00:00 stderr F W1213 00:13:18.049243 1 config.go:121] Ignoring unknown FeatureGate "PrivateHostedZoneAWS"
2025-12-13T00:13:18.049252937+00:00 stderr F W1213 00:13:18.049246 1 config.go:121] Ignoring unknown FeatureGate "SignatureStores"
2025-12-13T00:13:18.049252937+00:00 stderr F W1213 00:13:18.049250 1 config.go:121] Ignoring unknown FeatureGate "AdminNetworkPolicy"
2025-12-13T00:13:18.049260357+00:00 stderr F W1213 00:13:18.049253 1 config.go:121] Ignoring unknown FeatureGate "MixedCPUsAllocation"
2025-12-13T00:13:18.049260357+00:00 stderr F W1213 00:13:18.049256 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallPowerVS"
2025-12-13T00:13:18.049267447+00:00 stderr F W1213 00:13:18.049260 1 config.go:121] Ignoring unknown FeatureGate "NewOLM"
2025-12-13T00:13:18.049267447+00:00 stderr F W1213 00:13:18.049263 1 config.go:121] Ignoring unknown FeatureGate "BareMetalLoadBalancer"
2025-12-13T00:13:18.049278218+00:00 stderr F W1213 00:13:18.049266 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderExternal"
2025-12-13T00:13:18.049278218+00:00 stderr F W1213 00:13:18.049270 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderGCP"
2025-12-13T00:13:18.049278218+00:00 stderr F W1213 00:13:18.049273 1 config.go:121] Ignoring unknown FeatureGate "HardwareSpeed"
2025-12-13T00:13:18.049285998+00:00 stderr F W1213 00:13:18.049279 1 config.go:121] Ignoring unknown FeatureGate "MetricsCollectionProfiles"
2025-12-13T00:13:18.049285998+00:00 stderr F W1213 00:13:18.049282 1 config.go:121] Ignoring unknown FeatureGate "VSphereDriverConfiguration"
2025-12-13T00:13:18.065023937+00:00 stderr F I1213 00:13:18.064973 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-12-13T00:13:18.475009953+00:00 stderr F I1213 00:13:18.474957 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-12-13T00:13:18.492013725+00:00 stderr F I1213 00:13:18.491964 1 audit.go:340] Using audit backend: ignoreErrors
2025-12-13T00:13:18.493610609+00:00 stderr F I1213 00:13:18.493551 1 plugins.go:83] "Registered admission plugin" plugin="NamespaceLifecycle"
2025-12-13T00:13:18.493655410+00:00 stderr F I1213 00:13:18.493644 1 plugins.go:83] "Registered admission plugin" plugin="ValidatingAdmissionWebhook"
2025-12-13T00:13:18.493679941+00:00 stderr F I1213 00:13:18.493671 1 plugins.go:83] "Registered admission plugin" plugin="MutatingAdmissionWebhook"
2025-12-13T00:13:18.493702942+00:00 stderr F I1213 00:13:18.493694 1 plugins.go:83] "Registered admission plugin" plugin="ValidatingAdmissionPolicy"
2025-12-13T00:13:18.494462927+00:00 stderr F I1213 00:13:18.494446 1 admission.go:48] Admission plugin "project.openshift.io/ProjectRequestLimit" is not configured so it will be disabled.
2025-12-13T00:13:18.497383955+00:00 stderr F I1213 00:13:18.497365 1 plugins.go:157] Loaded 5 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,build.openshift.io/BuildConfigSecretInjector,image.openshift.io/ImageLimitRange,image.openshift.io/ImagePolicy,MutatingAdmissionWebhook.
2025-12-13T00:13:18.497421456+00:00 stderr F I1213 00:13:18.497409 1 plugins.go:160] Loaded 9 validating admission controller(s) successfully in the following order: OwnerReferencesPermissionEnforcement,build.openshift.io/BuildConfigSecretInjector,build.openshift.io/BuildByStrategy,image.openshift.io/ImageLimitRange,image.openshift.io/ImagePolicy,quota.openshift.io/ClusterResourceQuota,route.openshift.io/RequiredRouteAnnotations,ValidatingAdmissionWebhook,ResourceQuota.
2025-12-13T00:13:18.498120640+00:00 stderr F I1213 00:13:18.498103 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2025-12-13T00:13:18.498158231+00:00 stderr F I1213 00:13:18.498147 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2025-12-13T00:13:18.498355938+00:00 stderr F I1213 00:13:18.498341 1 maxinflight.go:116] "Set denominator for readonly requests" limit=3000
2025-12-13T00:13:18.498386179+00:00 stderr F I1213 00:13:18.498377 1 maxinflight.go:120] "Set denominator for mutating requests" limit=1500
2025-12-13T00:13:18.518168583+00:00 stderr F I1213 00:13:18.518131 1 store.go:1579] "Monitoring resource count at path" resource="rangeallocations.security.openshift.io" path="//rangeallocations"
2025-12-13T00:13:18.520868474+00:00 stderr F I1213 00:13:18.520754 1 cacher.go:451] cacher (rangeallocations.security.openshift.io): initialized
2025-12-13T00:13:18.520868474+00:00 stderr F I1213 00:13:18.520798 1 reflector.go:351] Caches populated for *security.RangeAllocation from storage/cacher.go:/rangeallocations
2025-12-13T00:13:18.522663635+00:00 stderr F I1213 00:13:18.522642 1 handler.go:275] Adding GroupVersion security.openshift.io v1 to ResourceManager
2025-12-13T00:13:18.522779009+00:00 stderr F I1213 00:13:18.522764 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2025-12-13T00:13:18.522807799+00:00 stderr F I1213 00:13:18.522798 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2025-12-13T00:13:18.525466769+00:00 stderr F I1213 00:13:18.525446 1 handler.go:275] Adding GroupVersion quota.openshift.io v1 to ResourceManager
2025-12-13T00:13:18.525542831+00:00 stderr F I1213 00:13:18.525530 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2025-12-13T00:13:18.525569702+00:00 stderr F I1213 00:13:18.525560 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2025-12-13T00:13:18.534685618+00:00 stderr F I1213 00:13:18.534660 1 store.go:1579] "Monitoring resource count at path" resource="routes.route.openshift.io" path="//routes"
2025-12-13T00:13:18.536055025+00:00 stderr F I1213 00:13:18.536007 1 handler.go:275] Adding GroupVersion route.openshift.io v1 to ResourceManager
2025-12-13T00:13:18.536158398+00:00 stderr F I1213 00:13:18.536145 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2025-12-13T00:13:18.536186899+00:00 stderr F I1213 00:13:18.536177 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2025-12-13T00:13:18.538366923+00:00 stderr F I1213 00:13:18.538096 1 cacher.go:451] cacher (routes.route.openshift.io): initialized
2025-12-13T00:13:18.538366923+00:00 stderr F I1213 00:13:18.538135 1 reflector.go:351] Caches populated for *route.Route from storage/cacher.go:/routes
2025-12-13T00:13:18.544396945+00:00 stderr F I1213 00:13:18.544357 1 store.go:1579] "Monitoring resource count at path" resource="templates.template.openshift.io" path="//templates"
2025-12-13T00:13:18.554237685+00:00 stderr F I1213 00:13:18.554185 1 store.go:1579] "Monitoring resource count at path" resource="templateinstances.template.openshift.io" path="//templateinstances"
2025-12-13T00:13:18.561962885+00:00 stderr F I1213 00:13:18.561900 1 cacher.go:451] cacher (templateinstances.template.openshift.io): initialized
2025-12-13T00:13:18.561962885+00:00 stderr F I1213 00:13:18.561924 1 reflector.go:351] Caches populated for *template.TemplateInstance from storage/cacher.go:/templateinstances
2025-12-13T00:13:18.565224675+00:00 stderr F I1213 00:13:18.565183 1 cacher.go:451] cacher (templates.template.openshift.io): initialized
2025-12-13T00:13:18.565292887+00:00 stderr F I1213 00:13:18.565279 1 reflector.go:351] Caches populated for *template.Template from storage/cacher.go:/templates
2025-12-13T00:13:18.648184643+00:00 stderr F I1213 00:13:18.648127 1 store.go:1579] "Monitoring resource count at path" resource="brokertemplateinstances.template.openshift.io" path="//brokertemplateinstances"
2025-12-13T00:13:18.650076896+00:00 stderr F I1213 00:13:18.649298 1 cacher.go:451] cacher (brokertemplateinstances.template.openshift.io): initialized
2025-12-13T00:13:18.650076896+00:00 stderr F I1213 00:13:18.649314 1 reflector.go:351] Caches populated for *template.BrokerTemplateInstance from storage/cacher.go:/brokertemplateinstances
2025-12-13T00:13:18.651561806+00:00 stderr F I1213 00:13:18.651537 1 handler.go:275] Adding GroupVersion template.openshift.io v1 to ResourceManager
2025-12-13T00:13:18.651685430+00:00 stderr F I1213 00:13:18.651665 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2025-12-13T00:13:18.651721621+00:00 stderr F I1213 00:13:18.651710 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2025-12-13T00:13:18.662090780+00:00 stderr F I1213 00:13:18.662031 1 store.go:1579] "Monitoring resource count at path" resource="deploymentconfigs.apps.openshift.io" path="//deploymentconfigs"
2025-12-13T00:13:18.665449993+00:00 stderr F I1213 00:13:18.665407 1 handler.go:275] Adding GroupVersion apps.openshift.io v1 to ResourceManager
2025-12-13T00:13:18.665482654+00:00 stderr F I1213 00:13:18.665474 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2025-12-13T00:13:18.665491594+00:00 stderr F I1213 00:13:18.665483 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2025-12-13T00:13:18.666161146+00:00 stderr F I1213 00:13:18.666139 1 cacher.go:451] cacher (deploymentconfigs.apps.openshift.io): initialized
2025-12-13T00:13:18.666161146+00:00 stderr F I1213 00:13:18.666155 1 reflector.go:351] Caches populated for *apps.DeploymentConfig from storage/cacher.go:/deploymentconfigs
2025-12-13T00:13:18.671900379+00:00 stderr F I1213 00:13:18.671835 1 handler.go:275] Adding GroupVersion authorization.openshift.io v1 to ResourceManager
2025-12-13T00:13:18.671926170+00:00 stderr F I1213 00:13:18.671917 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2025-12-13T00:13:18.671952641+00:00 stderr F I1213 00:13:18.671925 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2025-12-13T00:13:18.684843734+00:00 stderr F I1213 00:13:18.684796 1 store.go:1579] "Monitoring resource count at path" resource="builds.build.openshift.io" path="//builds"
2025-12-13T00:13:18.685759235+00:00 stderr F I1213 00:13:18.685733 1 cacher.go:451] cacher (builds.build.openshift.io): initialized
2025-12-13T00:13:18.685808396+00:00 stderr F I1213 00:13:18.685795 1 reflector.go:351] Caches populated for *build.Build from storage/cacher.go:/builds
2025-12-13T00:13:18.694765427+00:00 stderr F I1213 00:13:18.694705 1 store.go:1579] "Monitoring resource count at path" resource="buildconfigs.build.openshift.io" path="//buildconfigs"
2025-12-13T00:13:18.699706033+00:00 stderr F I1213 00:13:18.699649 1 cacher.go:451] cacher (buildconfigs.build.openshift.io): initialized
2025-12-13T00:13:18.699706033+00:00 stderr F I1213 00:13:18.699675 1 reflector.go:351] Caches populated for *build.BuildConfig from storage/cacher.go:/buildconfigs
2025-12-13T00:13:18.700606863+00:00 stderr F I1213 00:13:18.700579 1 handler.go:275] Adding GroupVersion build.openshift.io v1 to ResourceManager
2025-12-13T00:13:18.700727007+00:00 stderr F I1213 00:13:18.700710 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2025-12-13T00:13:18.700761849+00:00 stderr F I1213 00:13:18.700751 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2025-12-13T00:13:18.717732200+00:00 stderr F I1213 00:13:18.717263 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca, incoming err:
2025-12-13T00:13:18.717798462+00:00 stderr F I1213 00:13:18.717784 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca
2025-12-13T00:13:18.717873394+00:00 stderr F I1213 00:13:18.717861 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123, incoming err:
2025-12-13T00:13:18.717902695+00:00 stderr F I1213 00:13:18.717892 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123
2025-12-13T00:13:18.717999219+00:00 stderr F I1213 00:13:18.717982 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/default-route-openshift-image-registry.apps-crc.testing, incoming err:
2025-12-13T00:13:18.718103052+00:00 stderr F I1213 00:13:18.718087 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/image-registry.openshift-image-registry.svc..5000, incoming err:
2025-12-13T00:13:18.718204335+00:00 stderr F I1213 00:13:18.718190 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/image-registry.openshift-image-registry.svc.cluster.local..5000, incoming err:
2025-12-13T00:13:18.718325549+00:00 stderr F I1213 00:13:18.718310 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..data, incoming err:
2025-12-13T00:13:18.718359571+00:00 stderr F I1213 00:13:18.718348 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/..data
2025-12-13T00:13:18.718396862+00:00 stderr F I1213 00:13:18.718386 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/default-route-openshift-image-registry.apps-crc.testing, incoming err:
2025-12-13T00:13:18.718423603+00:00 stderr F I1213 00:13:18.718413 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/default-route-openshift-image-registry.apps-crc.testing
2025-12-13T00:13:18.718457094+00:00 stderr F I1213 00:13:18.718447 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc..5000, incoming err:
2025-12-13T00:13:18.718483835+00:00 stderr F I1213 00:13:18.718473 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc..5000
2025-12-13T00:13:18.718524026+00:00 stderr F I1213 00:13:18.718513 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000, incoming err:
2025-12-13T00:13:18.718551987+00:00 stderr F I1213 00:13:18.718542 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000
2025-12-13T00:13:18.727744096+00:00 stderr F I1213 00:13:18.727702 1 store.go:1579] "Monitoring resource count at path" resource="images.image.openshift.io" path="//images"
2025-12-13T00:13:18.742109988+00:00 stderr F I1213 00:13:18.742034 1 store.go:1579] "Monitoring resource count at path" resource="imagestreams.image.openshift.io" path="//imagestreams"
2025-12-13T00:13:18.758629063+00:00 stderr F I1213 00:13:18.758577 1 cacher.go:451] cacher (imagestreams.image.openshift.io): initialized
2025-12-13T00:13:18.758698916+00:00 stderr F I1213 00:13:18.758685 1 reflector.go:351] Caches populated for *image.ImageStream from storage/cacher.go:/imagestreams
2025-12-13T00:13:18.766913612+00:00 stderr F I1213 00:13:18.766876 1 handler.go:275] Adding GroupVersion image.openshift.io v1 to ResourceManager
2025-12-13T00:13:18.767008095+00:00 stderr F W1213 00:13:18.766991 1 genericapiserver.go:756] Skipping API image.openshift.io/1.0 because it has no resources.
2025-12-13T00:13:18.767053716+00:00 stderr F W1213 00:13:18.767041 1 genericapiserver.go:756] Skipping API image.openshift.io/pre012 because it has no resources.
2025-12-13T00:13:18.767302005+00:00 stderr F I1213 00:13:18.767285 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2025-12-13T00:13:18.767338576+00:00 stderr F I1213 00:13:18.767327 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2025-12-13T00:13:18.770375938+00:00 stderr F I1213 00:13:18.770353 1 handler.go:275] Adding GroupVersion project.openshift.io v1 to ResourceManager
2025-12-13T00:13:18.770469101+00:00 stderr F I1213 00:13:18.770454 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000
2025-12-13T00:13:18.770501512+00:00 stderr F I1213 00:13:18.770491 1 maxinflight.go:145] "Initialized mutatingChan" len=1500
2025-12-13T00:13:18.811511750+00:00 stderr F I1213 00:13:18.811460 1 cacher.go:451] cacher (images.image.openshift.io): initialized
2025-12-13T00:13:18.811576203+00:00 stderr F I1213 00:13:18.811565 1 reflector.go:351] Caches populated for *image.Image from storage/cacher.go:/images
2025-12-13T00:13:19.192309066+00:00 stderr F I1213 00:13:19.192250 1 server.go:50] Starting master on 0.0.0.0:8443 (v0.0.0-master+$Format:%H$)
2025-12-13T00:13:19.192569925+00:00 stderr F I1213 00:13:19.192547 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
2025-12-13T00:13:19.202606513+00:00 stderr F I1213 00:13:19.202083 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-12-13T00:13:19.202606513+00:00 stderr F I1213 00:13:19.202122 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-12-13T00:13:19.202606513+00:00 stderr F I1213 00:13:19.202131 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-12-13T00:13:19.202606513+00:00 stderr F I1213 00:13:19.202159 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-12-13T00:13:19.202606513+00:00 stderr F I1213 00:13:19.202091 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-12-13T00:13:19.202606513+00:00 stderr F I1213 00:13:19.202210 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-12-13T00:13:19.203893945+00:00 stderr F I1213 00:13:19.203860 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-12-13 00:13:19.203814623 +0000 UTC))"
2025-12-13T00:13:19.204209566+00:00 stderr F I1213 00:13:19.204185 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584798\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584798\" (2025-12-12 23:13:18 +0000 UTC to 2026-12-12 23:13:18 +0000 UTC (now=2025-12-13 00:13:19.204162944 +0000 UTC))"
2025-12-13T00:13:19.204222676+00:00 stderr F I1213 00:13:19.204215 1 secure_serving.go:213] Serving securely on [::]:8443
2025-12-13T00:13:19.204276388+00:00 stderr F I1213 00:13:19.204258 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated
2025-12-13T00:13:19.204300949+00:00 stderr F I1213 00:13:19.204284 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-12-13T00:13:19.204412253+00:00 stderr F I1213 00:13:19.204381 1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller
2025-12-13T00:13:19.204471025+00:00 stderr F I1213 00:13:19.204445 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-12-13T00:13:19.204967981+00:00 stderr F I1213 00:13:19.204926 1 openshift_apiserver.go:593] Using default project node label selector:
2025-12-13T00:13:19.682577300+00:00 stderr F I1213 00:13:19.682485 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.683596754+00:00 stderr F I1213 00:13:19.683433 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.683886054+00:00 stderr F I1213 00:13:19.683864 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.686397728+00:00 stderr F I1213 00:13:19.686372 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.686649887+00:00 stderr F I1213 00:13:19.686622 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.687949271+00:00 stderr F I1213 00:13:19.687895 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.689965149+00:00 stderr F I1213 00:13:19.689912 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.691560402+00:00 stderr F I1213 00:13:19.691534 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.691706687+00:00 stderr F I1213 00:13:19.691686 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.692416840+00:00 stderr F I1213 00:13:19.692381 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.692480084+00:00 stderr F I1213 00:13:19.692458 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.693055283+00:00 stderr F I1213 00:13:19.693025 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.695104081+00:00 stderr F I1213 00:13:19.694983 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.695327959+00:00 stderr F I1213 00:13:19.695300 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.695538236+00:00 stderr F I1213 00:13:19.695515 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.696094335+00:00 stderr F I1213 00:13:19.695996 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.706100900+00:00 stderr F I1213 00:13:19.706035 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-12-13T00:13:19.706132622+00:00 stderr F I1213 00:13:19.706099 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-12-13T00:13:19.706168903+00:00 stderr F I1213 00:13:19.706153 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-12-13T00:13:19.706333998+00:00 stderr F I1213 00:13:19.706305 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:19.706262516 +0000 UTC))"
2025-12-13T00:13:19.706408071+00:00 stderr F I1213 00:13:19.706381 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.706603898+00:00 stderr F I1213 00:13:19.706575 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-12-13T00:13:19.706639099+00:00 stderr F I1213 00:13:19.706619 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-12-13 00:13:19.706601438 +0000 UTC))"
2025-12-13T00:13:19.706982691+00:00 stderr F I1213 00:13:19.706961 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584798\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584798\" (2025-12-12 23:13:18 +0000 UTC to 2026-12-12 23:13:18 +0000 UTC (now=2025-12-13 00:13:19.706917729 +0000 UTC))"
2025-12-13T00:13:19.707276961+00:00 stderr F I1213 00:13:19.707194 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:13:19.707176267 +0000 UTC))"
2025-12-13T00:13:19.707276961+00:00 stderr F I1213 00:13:19.707221 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:13:19.707210558 +0000 UTC))"
2025-12-13T00:13:19.707310122+00:00 stderr F I1213 00:13:19.707292 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:19.707277021 +0000 UTC))"
2025-12-13T00:13:19.707318322+00:00 stderr F I1213 00:13:19.707312 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:19.707301791 +0000 UTC))"
2025-12-13T00:13:19.707347813+00:00 stderr F I1213 00:13:19.707330 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:19.707316652 +0000 UTC))" 2025-12-13T00:13:19.707370144+00:00 stderr F I1213 00:13:19.707354 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:19.707341693 +0000 UTC))" 2025-12-13T00:13:19.707377944+00:00 stderr F I1213 00:13:19.707372 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:19.707362363 +0000 UTC))" 2025-12-13T00:13:19.707405935+00:00 stderr F I1213 00:13:19.707389 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 
00:13:19.707378064 +0000 UTC))" 2025-12-13T00:13:19.707413965+00:00 stderr F I1213 00:13:19.707408 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:13:19.707397235 +0000 UTC))" 2025-12-13T00:13:19.707421275+00:00 stderr F I1213 00:13:19.707408 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:19.707445706+00:00 stderr F I1213 00:13:19.707426 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:13:19.707414815 +0000 UTC))" 2025-12-13T00:13:19.707455316+00:00 stderr F I1213 00:13:19.707447 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:19.707436786 +0000 UTC))" 2025-12-13T00:13:19.707862930+00:00 stderr F I1213 00:13:19.707837 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:19.707911972+00:00 stderr F I1213 00:13:19.707894 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-12-13 00:13:19.707879991 +0000 UTC))" 2025-12-13T00:13:19.708191321+00:00 stderr F I1213 00:13:19.708172 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584798\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584798\" (2025-12-12 23:13:18 +0000 UTC to 2026-12-12 23:13:18 +0000 UTC (now=2025-12-13 00:13:19.70816081 +0000 UTC))" 2025-12-13T00:13:19.708867833+00:00 stderr F I1213 00:13:19.708829 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:19.731787704+00:00 stderr F I1213 00:13:19.731734 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:19.737601460+00:00 stderr F I1213 00:13:19.737551 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:19.799397606+00:00 stderr F I1213 00:13:19.798965 1 reflector.go:351] Caches populated for *etcd.ImageLayers from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:19:37.568813046+00:00 stderr F I1213 00:19:37.568330 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 
00:19:37.568296782 +0000 UTC))" 2025-12-13T00:19:37.568813046+00:00 stderr F I1213 00:19:37.568360 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.568349453 +0000 UTC))" 2025-12-13T00:19:37.568813046+00:00 stderr F I1213 00:19:37.568377 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.568364634 +0000 UTC))" 2025-12-13T00:19:37.568813046+00:00 stderr F I1213 00:19:37.568391 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.568381184 +0000 UTC))" 2025-12-13T00:19:37.568813046+00:00 stderr F I1213 00:19:37.568406 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC 
(now=2025-12-13 00:19:37.568395605 +0000 UTC))" 2025-12-13T00:19:37.568813046+00:00 stderr F I1213 00:19:37.568420 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.568409565 +0000 UTC))" 2025-12-13T00:19:37.568813046+00:00 stderr F I1213 00:19:37.568435 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.568424565 +0000 UTC))" 2025-12-13T00:19:37.568813046+00:00 stderr F I1213 00:19:37.568449 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.568438596 +0000 UTC))" 2025-12-13T00:19:37.568813046+00:00 stderr F I1213 00:19:37.568463 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 
19:59:54 +0000 UTC (now=2025-12-13 00:19:37.568452676 +0000 UTC))" 2025-12-13T00:19:37.568813046+00:00 stderr F I1213 00:19:37.568481 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.568471077 +0000 UTC))" 2025-12-13T00:19:37.568813046+00:00 stderr F I1213 00:19:37.568496 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.568486117 +0000 UTC))" 2025-12-13T00:19:37.568813046+00:00 stderr F I1213 00:19:37.568513 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.568503698 +0000 UTC))" 2025-12-13T00:19:37.568891688+00:00 stderr F I1213 00:19:37.568818 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 
UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-12-13 00:19:37.568801796 +0000 UTC))" 2025-12-13T00:19:37.570654677+00:00 stderr F I1213 00:19:37.570611 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584798\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584798\" (2025-12-12 23:13:18 +0000 UTC to 2026-12-12 23:13:18 +0000 UTC (now=2025-12-13 00:19:37.570592405 +0000 UTC))" 2025-12-13T00:19:48.977856886+00:00 stderr F E1213 00:19:48.977767 1 strategy.go:60] unable to parse manifest for "sha256:4f35566977c35306a8f2102841ceb7fa10a6d9ac47c079131caed5655140f9b2": unexpected end of JSON input 2025-12-13T00:19:49.139078132+00:00 stderr F E1213 00:19:49.139008 1 strategy.go:60] unable to parse manifest for "sha256:55dc61c31ea50a8f7a45e993a9b3220097974948b5cd1ab3f317e7702e8cb6fc": unexpected end of JSON input 2025-12-13T00:19:49.469999397+00:00 stderr F E1213 00:19:49.469927 1 strategy.go:60] unable to parse manifest for "sha256:5dfcc5b000a1fab4be66bbd43e4db44b61176e2bcba9c24f6fe887dea9b7fd49": unexpected end of JSON input 2025-12-13T00:19:49.471505288+00:00 stderr F E1213 00:19:49.471458 1 strategy.go:60] unable to parse manifest for "sha256:7f501bba8a09957a0ac28ba0c20768f087cf0b16d92139b3f8f0758e9f60691f": unexpected end of JSON input 2025-12-13T00:19:49.473257797+00:00 stderr F E1213 00:19:49.473220 1 strategy.go:60] unable to parse manifest for "sha256:55a832a2dd32c4ab288b2c76e1c531bd6df07651010f7b9f8f983dff5ee584ab": unexpected end of JSON input 2025-12-13T00:19:49.476926357+00:00 stderr F I1213 00:19:49.476874 1 trace.go:236] Trace[411304484]: "Create" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ac507cee-6a6a-4a60-920c-62201a82c5aa,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Dec-2025 00:19:48.031) (total time: 1445ms): 2025-12-13T00:19:49.476926357+00:00 stderr F Trace[411304484]: ---"Write to database call succeeded" len:795 1444ms (00:19:49.476) 2025-12-13T00:19:49.476926357+00:00 stderr F Trace[411304484]: [1.445714455s] [1.445714455s] END 2025-12-13T00:19:50.489171529+00:00 stderr F E1213 00:19:50.489098 1 strategy.go:60] unable to parse manifest for "sha256:70a21b3f93c05843ce9d07f125b1464436caf01680bb733754a2a5df5bc3b11b": unexpected end of JSON input 2025-12-13T00:19:50.490663331+00:00 stderr F E1213 00:19:50.490626 1 strategy.go:60] unable to parse manifest for "sha256:8ef04c895436412065c0f1090db68060d2bb339a400e8653ca3a370211690d1f": unexpected end of JSON input 2025-12-13T00:19:50.492903233+00:00 stderr F E1213 00:19:50.492865 1 strategy.go:60] unable to parse manifest for "sha256:7201e059b92acc55fe9fe1cc390d44e92f0e2af297fbe52b3f1bb56327f59624": unexpected end of JSON input 2025-12-13T00:19:50.497677214+00:00 stderr F I1213 00:19:50.497611 1 trace.go:236] Trace[384636720]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:18fecf53-7d6d-4040-804e-e7f7e67ea9f9,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) 
kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Dec-2025 00:19:48.379) (total time: 2117ms): 2025-12-13T00:19:50.497677214+00:00 stderr F Trace[384636720]: ---"Write to database call succeeded" len:739 2117ms (00:19:50.496) 2025-12-13T00:19:50.497677214+00:00 stderr F Trace[384636720]: [2.117914231s] [2.117914231s] END 2025-12-13T00:19:52.349294511+00:00 stderr F E1213 00:19:52.349198 1 strategy.go:60] unable to parse manifest for "sha256:df8858f0c01ae1657a14234a94f6785cbb2fba7f12c9d0325f427a3f1284481b": unexpected end of JSON input 2025-12-13T00:19:52.351459301+00:00 stderr F E1213 00:19:52.351285 1 strategy.go:60] unable to parse manifest for "sha256:9b5d2fc574a13613f18fa983ac2901593c1e812836e918095bc3d15b6cc4ba57": unexpected end of JSON input 2025-12-13T00:19:52.353110367+00:00 stderr F E1213 00:19:52.353021 1 strategy.go:60] unable to parse manifest for "sha256:57ab1f0ad24e02143978fc79c5219a02c4d6a5a27225ee5454c85a47839b6ddc": unexpected end of JSON input 2025-12-13T00:19:52.359050850+00:00 stderr F I1213 00:19:52.358604 1 trace.go:236] Trace[646023941]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:633ba2bc-0d09-4b32-aee0-0ea0bf94bd47,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Dec-2025 00:19:50.556) (total time: 1801ms): 2025-12-13T00:19:52.359050850+00:00 stderr F Trace[646023941]: ---"Write to database call succeeded" len:506 1800ms (00:19:52.357) 2025-12-13T00:19:52.359050850+00:00 stderr F Trace[646023941]: [1.801803553s] [1.801803553s] END 2025-12-13T00:19:52.576597099+00:00 stderr F E1213 00:19:52.576484 1 strategy.go:60] unable to 
parse manifest for "sha256:a8e4081414cfa644e212ded354dfee12706e63afb19a27c0c0ae2c8c64e56ca6": unexpected end of JSON input 2025-12-13T00:19:52.579823098+00:00 stderr F E1213 00:19:52.579728 1 strategy.go:60] unable to parse manifest for "sha256:38c7e4f7dea04bb536f05d78e0107ebc2a3607cf030db7f5c249f13ce1f52d59": unexpected end of JSON input 2025-12-13T00:19:52.582172483+00:00 stderr F E1213 00:19:52.582102 1 strategy.go:60] unable to parse manifest for "sha256:d2f17aaf2f871fda5620466d69ac67b9c355c0bae5912a1dbef9a51ca8813e50": unexpected end of JSON input 2025-12-13T00:19:52.584497256+00:00 stderr F E1213 00:19:52.584438 1 strategy.go:60] unable to parse manifest for "sha256:e4be2fb7216f432632819b2441df42a5a0063f7f473c2923ca6912b2d64b7494": unexpected end of JSON input 2025-12-13T00:19:52.586382559+00:00 stderr F E1213 00:19:52.586321 1 strategy.go:60] unable to parse manifest for "sha256:14de89e89efc97aee3b50141108b7833708c3a93ad90bf89940025ab5267ba86": unexpected end of JSON input 2025-12-13T00:19:52.588240340+00:00 stderr F E1213 00:19:52.588202 1 strategy.go:60] unable to parse manifest for "sha256:f438230ed2c2e609d0d7dbc430ccf1e9bad2660e6410187fd6e9b14a2952e70b": unexpected end of JSON input 2025-12-13T00:19:52.590306316+00:00 stderr F E1213 00:19:52.590241 1 strategy.go:60] unable to parse manifest for "sha256:f953734d89252219c3dcd8f703ba8b58c9c8a0f5dfa9425c9e56ec0834f7d288": unexpected end of JSON input 2025-12-13T00:19:52.592441885+00:00 stderr F E1213 00:19:52.592392 1 strategy.go:60] unable to parse manifest for "sha256:e4223a60b887ec24cad7dd70fdb6c3f2c107fb7118331be6f45d626219cfe7f3": unexpected end of JSON input 2025-12-13T00:19:52.597447484+00:00 stderr F I1213 00:19:52.597411 1 trace.go:236] Trace[1516141171]: "Create" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ae3aaab2-c1bd-4ad6-b960-b18b119800f7,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Dec-2025 00:19:48.794) (total time: 3802ms): 2025-12-13T00:19:52.597447484+00:00 stderr F Trace[1516141171]: ---"Write to database call succeeded" len:1025 3801ms (00:19:52.596) 2025-12-13T00:19:52.597447484+00:00 stderr F Trace[1516141171]: [3.802496881s] [3.802496881s] END 2025-12-13T00:19:53.387777957+00:00 stderr F E1213 00:19:53.387698 1 strategy.go:60] unable to parse manifest for "sha256:5fb3543c0d42146f0506c1ea4d09575131da6a2f27885729b7cfce13a0fa90e3": unexpected end of JSON input 2025-12-13T00:19:53.391218452+00:00 stderr F E1213 00:19:53.391194 1 strategy.go:60] unable to parse manifest for "sha256:1d68b58a73f4cf15fcd886ab39fddf18be923b52b24cb8ec3ab1da2d3e9bd5f6": unexpected end of JSON input 2025-12-13T00:19:53.394460732+00:00 stderr F E1213 00:19:53.394414 1 strategy.go:60] unable to parse manifest for "sha256:7de877b0e748cdb47cb702400f3ddaa3c3744a022887e2213c2bb27775ab4b25": unexpected end of JSON input 2025-12-13T00:19:53.396824886+00:00 stderr F E1213 00:19:53.396802 1 strategy.go:60] unable to parse manifest for "sha256:af9c08644ca057d83ef4b7d8de1489f01c5a52ff8670133b8a09162831b7fb34": unexpected end of JSON input 2025-12-13T00:19:53.399929592+00:00 stderr F E1213 00:19:53.399843 1 strategy.go:60] unable to parse manifest for "sha256:b053401886c06581d3c296855525cc13e0613100a596ed007bb69d5f8e972346": unexpected end of JSON input 2025-12-13T00:19:53.401822444+00:00 stderr F E1213 00:19:53.401793 1 strategy.go:60] unable to parse manifest for 
"sha256:61555b923dabe4ff734279ed1bdb9eb6d450c760e1cc04463cf88608ac8d1338": unexpected end of JSON input 2025-12-13T00:19:53.403680266+00:00 stderr F E1213 00:19:53.403622 1 strategy.go:60] unable to parse manifest for "sha256:9ab26cb4005e9b60fd6349950957bbd0120efba216036da53c547c6f1c9e5e7f": unexpected end of JSON input 2025-12-13T00:19:53.405614240+00:00 stderr F E1213 00:19:53.405552 1 strategy.go:60] unable to parse manifest for "sha256:2254dc2f421f496b504aafbbd8ea37e660652c4b6b4f9a0681664b10873be7fe": unexpected end of JSON input 2025-12-13T00:19:53.407312326+00:00 stderr F E1213 00:19:53.407255 1 strategy.go:60] unable to parse manifest for "sha256:e4b1599ba6e88f6df7c4e67d6397371d61b6829d926411184e9855e71e840b8c": unexpected end of JSON input 2025-12-13T00:19:53.413095945+00:00 stderr F I1213 00:19:53.412614 1 trace.go:236] Trace[1012392263]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:cc16d96e-1097-480a-b710-8c5b62ba847c,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Dec-2025 00:19:48.502) (total time: 4910ms): 2025-12-13T00:19:53.413095945+00:00 stderr F Trace[1012392263]: ---"Write to database call succeeded" len:1132 4909ms (00:19:53.411) 2025-12-13T00:19:53.413095945+00:00 stderr F Trace[1012392263]: [4.910528345s] [4.910528345s] END 2025-12-13T00:19:53.819446840+00:00 stderr F E1213 00:19:53.819363 1 strategy.go:60] unable to parse manifest for "sha256:d186c94f8843f854d77b2b05d10efb0d272f88a4bf4f1d8ebe304428b9396392": unexpected end of JSON input 2025-12-13T00:19:53.821690622+00:00 stderr F E1213 00:19:53.821643 1 strategy.go:60] unable to parse manifest for 
"sha256:e37aeaeb0159194a9855350e13e399470f39ce340d6381069933742990741fb8": unexpected end of JSON input 2025-12-13T00:19:53.824212922+00:00 stderr F E1213 00:19:53.824025 1 strategy.go:60] unable to parse manifest for "sha256:f89a54e6d1340be8ddd84a602cb4f1f27c1983417f655941645bf11809d49f18": unexpected end of JSON input 2025-12-13T00:19:53.826621548+00:00 stderr F E1213 00:19:53.826521 1 strategy.go:60] unable to parse manifest for "sha256:739fac452e78a21a16b66e0451b85590b9e48ec7a1ed3887fbb9ed85cf564275": unexpected end of JSON input 2025-12-13T00:19:53.829320213+00:00 stderr F E1213 00:19:53.829273 1 strategy.go:60] unable to parse manifest for "sha256:0eea1d20aaa26041edf26b925fb204d839e5b93122190191893a0299b2e1b589": unexpected end of JSON input 2025-12-13T00:19:53.833019834+00:00 stderr F E1213 00:19:53.832770 1 strategy.go:60] unable to parse manifest for "sha256:3b94ccfa422b8ba0014302a3cfc6916b69f0f5a9dfd757b6704049834d4ff0ae": unexpected end of JSON input 2025-12-13T00:19:53.835137723+00:00 stderr F E1213 00:19:53.835077 1 strategy.go:60] unable to parse manifest for "sha256:46a4e73ddb085d1f36b39903ea13ba307bb958789707e9afde048764b3e3cae2": unexpected end of JSON input 2025-12-13T00:19:53.837450067+00:00 stderr F E1213 00:19:53.837396 1 strategy.go:60] unable to parse manifest for "sha256:bcb0e15cc9d2d3449f0b1acac7b0275035a80e1b3b835391b5464f7bf4553b89": unexpected end of JSON input 2025-12-13T00:19:53.844777178+00:00 stderr F I1213 00:19:53.844705 1 trace.go:236] Trace[2094029544]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:88756d71-dde1-47c0-b6b6-c9198a068863,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) 
kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Dec-2025 00:19:49.873) (total time: 3970ms): 2025-12-13T00:19:53.844777178+00:00 stderr F Trace[2094029544]: ---"Write to database call succeeded" len:953 3969ms (00:19:53.843) 2025-12-13T00:19:53.844777178+00:00 stderr F Trace[2094029544]: [3.970999668s] [3.970999668s] END 2025-12-13T00:19:54.488439297+00:00 stderr F E1213 00:19:54.488363 1 strategy.go:60] unable to parse manifest for "sha256:431753c8a6a8541fdc0edd3385b2c765925d244fdd2347d2baa61303789696be": unexpected end of JSON input 2025-12-13T00:19:54.490262528+00:00 stderr F E1213 00:19:54.490221 1 strategy.go:60] unable to parse manifest for "sha256:64acf3403b5c2c85f7a28f326c63f1312b568db059c66d90b34e3c59fde3a74b": unexpected end of JSON input 2025-12-13T00:19:54.491774739+00:00 stderr F E1213 00:19:54.491722 1 strategy.go:60] unable to parse manifest for "sha256:74051f86b00fb102e34276f03a310c16bc57b9c2a001a56ba66359e15ee48ba6": unexpected end of JSON input 2025-12-13T00:19:54.493204479+00:00 stderr F E1213 00:19:54.493163 1 strategy.go:60] unable to parse manifest for "sha256:33d4dff40514e91d86b42e90b24b09a5ca770d9f67657c936363d348cd33d188": unexpected end of JSON input 2025-12-13T00:19:54.494558856+00:00 stderr F E1213 00:19:54.494522 1 strategy.go:60] unable to parse manifest for "sha256:7711108ef60ef6f0536bfa26914af2afaf6455ce6e4c4abd391e31a2d95d0178": unexpected end of JSON input 2025-12-13T00:19:54.495750659+00:00 stderr F E1213 00:19:54.495688 1 strategy.go:60] unable to parse manifest for "sha256:b163564be6ed5b80816e61a4ee31e42f42dbbf345253daac10ecc9fadf31baa3": unexpected end of JSON input 2025-12-13T00:19:54.496957812+00:00 stderr F E1213 00:19:54.496877 1 strategy.go:60] unable to parse manifest for "sha256:920ff7e5efc777cb523669c425fd7b553176c9f4b34a85ceddcb548c2ac5f78a": unexpected end of JSON input 2025-12-13T00:19:54.498010321+00:00 stderr F E1213 00:19:54.497970 1 strategy.go:60] unable to parse 
manifest for "sha256:32a5e806bd88b40568d46864fd313541498e38fabfc5afb5f3bdfe052c4b4c5f": unexpected end of JSON input 2025-12-13T00:19:54.499201694+00:00 stderr F E1213 00:19:54.499169 1 strategy.go:60] unable to parse manifest for "sha256:229ee7b88c5f700c95d557d0b37b8f78dbb6b125b188c3bf050cfdb32aec7962": unexpected end of JSON input 2025-12-13T00:19:54.500315565+00:00 stderr F E1213 00:19:54.500279 1 strategy.go:60] unable to parse manifest for "sha256:78bf175cecb15524b2ef81bff8cc11acdf7c0f74c08417f0e443483912e4878a": unexpected end of JSON input 2025-12-13T00:19:54.504500611+00:00 stderr F I1213 00:19:54.504432 1 trace.go:236] Trace[101739858]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:5a8b30c5-a513-4e96-88e7-71d4e07b1935,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Dec-2025 00:19:50.176) (total time: 4328ms): 2025-12-13T00:19:54.504500611+00:00 stderr F Trace[101739858]: ---"Write to database call succeeded" len:1241 4327ms (00:19:54.503) 2025-12-13T00:19:54.504500611+00:00 stderr F Trace[101739858]: [4.328110066s] [4.328110066s] END 2025-12-13T00:19:54.966005296+00:00 stderr F E1213 00:19:54.965927 1 strategy.go:60] unable to parse manifest for "sha256:496e23be70520863bce6f7cdc54d280aca2c133d06e992795c4dcbde1a9dd1ab": unexpected end of JSON input 2025-12-13T00:19:54.968779393+00:00 stderr F E1213 00:19:54.968732 1 strategy.go:60] unable to parse manifest for "sha256:022488b1bf697b7dd8c393171a3247bef4ea545a9ab828501e72168f2aac9415": unexpected end of JSON input 2025-12-13T00:19:54.970492540+00:00 stderr F E1213 00:19:54.970447 1 strategy.go:60] unable to parse manifest for 
"sha256:7164a06e9ba98a3ce9991bd7019512488efe30895175bb463e255f00eb9421fd": unexpected end of JSON input 2025-12-13T00:19:54.972130045+00:00 stderr F E1213 00:19:54.972082 1 strategy.go:60] unable to parse manifest for "sha256:81684e422367a075ac113e69ea11d8721416ce4bedea035e25313c5e726fd7d1": unexpected end of JSON input 2025-12-13T00:19:54.973622796+00:00 stderr F E1213 00:19:54.973545 1 strategy.go:60] unable to parse manifest for "sha256:b838fa18dab68d43a19f0c329c3643850691b8f9915823c4f8d25685eb293a11": unexpected end of JSON input 2025-12-13T00:19:54.975083706+00:00 stderr F E1213 00:19:54.975009 1 strategy.go:60] unable to parse manifest for "sha256:8a5b580b76c2fc2dfe55d13bb0dd53e8c71d718fc1a3773264b1710f49060222": unexpected end of JSON input 2025-12-13T00:19:54.976550066+00:00 stderr F E1213 00:19:54.976487 1 strategy.go:60] unable to parse manifest for "sha256:2f59ad75b66a3169b0b03032afb09aa3cfa531dbd844e3d3a562246e7d09c282": unexpected end of JSON input 2025-12-13T00:19:54.977733260+00:00 stderr F E1213 00:19:54.977621 1 strategy.go:60] unable to parse manifest for "sha256:9d759db3bb650e5367216ce261779c5a58693fc7ae10f21cd264011562bd746d": unexpected end of JSON input 2025-12-13T00:19:54.979001844+00:00 stderr F E1213 00:19:54.978922 1 strategy.go:60] unable to parse manifest for "sha256:bf5e518dba2aa935829d9db88d933a264e54ffbfa80041b41287fd70c1c35ba5": unexpected end of JSON input 2025-12-13T00:19:54.980310811+00:00 stderr F E1213 00:19:54.980241 1 strategy.go:60] unable to parse manifest for "sha256:f7ca08a8dda3610fcc10cc1fe5f5d0b9f8fc7a283b01975d0fe2c1e77ae06193": unexpected end of JSON input 2025-12-13T00:19:54.986157442+00:00 stderr F I1213 00:19:54.986071 1 trace.go:236] Trace[1829601361]: "Create" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7d3b68bd-a711-4b85-ab31-c55078ec27bf,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Dec-2025 00:19:49.574) (total time: 5411ms): 2025-12-13T00:19:54.986157442+00:00 stderr F Trace[1829601361]: ---"Write to database call succeeded" len:1142 5409ms (00:19:54.984) 2025-12-13T00:19:54.986157442+00:00 stderr F Trace[1829601361]: [5.411618773s] [5.411618773s] END 2025-12-13T00:19:55.234753706+00:00 stderr F E1213 00:19:55.234674 1 strategy.go:60] unable to parse manifest for "sha256:7bcc365e0ba823ed020ee6e6c3e0c23be5871c8dea3f7f1a65029002c83f9e55": unexpected end of JSON input 2025-12-13T00:19:55.236823473+00:00 stderr F E1213 00:19:55.236778 1 strategy.go:60] unable to parse manifest for "sha256:6a9e81b2eea2f32f2750909b6aa037c2c2e68be3bc9daf3c7a3163c9e1df379f": unexpected end of JSON input 2025-12-13T00:19:55.238788878+00:00 stderr F E1213 00:19:55.238708 1 strategy.go:60] unable to parse manifest for "sha256:00cf28cf9a6c427962f922855a6cc32692c760764ce2ce7411cf605dd510367f": unexpected end of JSON input 2025-12-13T00:19:55.240575458+00:00 stderr F E1213 00:19:55.240545 1 strategy.go:60] unable to parse manifest for "sha256:2cee344e4cfcfdc9a117fd82baa6f2d5daa7eeed450e02cd5d5554b424410439": unexpected end of JSON input 2025-12-13T00:19:55.242502500+00:00 stderr F E1213 00:19:55.242416 1 strategy.go:60] unable to parse manifest for "sha256:aa02a20c2edf83a009746b45a0fd2e0b4a2b224fdef1581046f6afef38c0bee2": unexpected end of JSON input 2025-12-13T00:19:55.244269049+00:00 stderr F E1213 00:19:55.244234 1 strategy.go:60] unable to parse manifest for 
"sha256:59b88fb0c467ca43bf3c1af6bfd8777577638dd8079f995cdb20b6f4e20ce0b6": unexpected end of JSON input 2025-12-13T00:19:55.246373817+00:00 stderr F E1213 00:19:55.246329 1 strategy.go:60] unable to parse manifest for "sha256:603d10af5e3476add5b5726fdef893033869ae89824ee43949a46c9f004ef65d": unexpected end of JSON input 2025-12-13T00:19:55.247885929+00:00 stderr F E1213 00:19:55.247848 1 strategy.go:60] unable to parse manifest for "sha256:eed7e29bf583e4f01e170bb9f22f2a78098bf15243269b670c307caa6813b783": unexpected end of JSON input 2025-12-13T00:19:55.250145691+00:00 stderr F E1213 00:19:55.250000 1 strategy.go:60] unable to parse manifest for "sha256:b80a514f136f738736d6bf654dc3258c13b04a819e001dd8a39ef2f7475fd9d9": unexpected end of JSON input 2025-12-13T00:19:55.251643012+00:00 stderr F E1213 00:19:55.251588 1 strategy.go:60] unable to parse manifest for "sha256:7ef75cdbc399425105060771cb8e700198cc0bddcfb60bf4311bf87ea62fd440": unexpected end of JSON input 2025-12-13T00:19:55.258592004+00:00 stderr F I1213 00:19:55.258542 1 trace.go:236] Trace[635167875]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:fcc83673-d0ff-4a6a-9df3-6df1b358a474,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Dec-2025 00:19:50.618) (total time: 4640ms): 2025-12-13T00:19:55.258592004+00:00 stderr F Trace[635167875]: ---"Write to database call succeeded" len:1230 4637ms (00:19:55.255) 2025-12-13T00:19:55.258592004+00:00 stderr F Trace[635167875]: [4.640253562s] [4.640253562s] END 2025-12-13T00:19:55.301919558+00:00 stderr F E1213 00:19:55.301808 1 strategy.go:60] unable to parse manifest for 
"sha256:e851770fd181ef49193111f7afcdbf872ad23f3a8234e0e07a742c4ca2882c3d": unexpected end of JSON input 2025-12-13T00:19:55.304997434+00:00 stderr F E1213 00:19:55.304913 1 strategy.go:60] unable to parse manifest for "sha256:ce5c0becf829aca80734b4caf3ab6b76cb00f7d78f4e39fb136636a764dea7f6": unexpected end of JSON input 2025-12-13T00:19:55.307329358+00:00 stderr F E1213 00:19:55.307293 1 strategy.go:60] unable to parse manifest for "sha256:3f00540ce2a3a01d2a147a7d73825fe78697be213a050bd09edae36266d6bc40": unexpected end of JSON input 2025-12-13T00:19:55.309229910+00:00 stderr F E1213 00:19:55.309191 1 strategy.go:60] unable to parse manifest for "sha256:868224c3b7c309b9e04003af70a5563af8e4c662f0c53f2a7606e0573c9fad85": unexpected end of JSON input 2025-12-13T00:19:55.311018610+00:00 stderr F E1213 00:19:55.310976 1 strategy.go:60] unable to parse manifest for "sha256:0669a28577b41bb05c67492ef18a1d48a299ac54d1500df8f9f8f760ce4be24b": unexpected end of JSON input 2025-12-13T00:19:55.312731127+00:00 stderr F E1213 00:19:55.312685 1 strategy.go:60] unable to parse manifest for "sha256:9036a59a8275f9c205ef5fc674f38c0495275a1a7912029f9a784406bb00b1f5": unexpected end of JSON input 2025-12-13T00:19:55.314274390+00:00 stderr F E1213 00:19:55.314239 1 strategy.go:60] unable to parse manifest for "sha256:425e2c7c355bea32be238aa2c7bdd363b6ab3709412bdf095efe28a8f6c07d84": unexpected end of JSON input 2025-12-13T00:19:55.315425721+00:00 stderr F E1213 00:19:55.315386 1 strategy.go:60] unable to parse manifest for "sha256:67fee4b64b269f5666a1051d806635b675903ef56d07b7cc019d3d59ff1aa97c": unexpected end of JSON input 2025-12-13T00:19:55.316707827+00:00 stderr F E1213 00:19:55.316669 1 strategy.go:60] unable to parse manifest for "sha256:b85cbdbc289752c91ac7f468cffef916fe9ab01865f3e32cfcc44ccdd633b168": unexpected end of JSON input 2025-12-13T00:19:55.317881719+00:00 stderr F E1213 00:19:55.317843 1 strategy.go:60] unable to parse manifest for 
"sha256:663eb81388ae8f824e7920c272f6d2e2274cf6c140d61416607261cdce9d50e2": unexpected end of JSON input 2025-12-13T00:19:55.323474453+00:00 stderr F I1213 00:19:55.323422 1 trace.go:236] Trace[385972154]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:219c8096-b7db-46e2-a0e7-b095bc786a8c,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Dec-2025 00:19:49.813) (total time: 5509ms): 2025-12-13T00:19:55.323474453+00:00 stderr F Trace[385972154]: ---"Write to database call succeeded" len:1153 5507ms (00:19:55.321) 2025-12-13T00:19:55.323474453+00:00 stderr F Trace[385972154]: [5.509819011s] [5.509819011s] END 2025-12-13T00:19:55.681289550+00:00 stderr F E1213 00:19:55.681214 1 strategy.go:60] unable to parse manifest for "sha256:2ae058ee7239213fb495491112be8cc7e6d6661864fd399deb27f23f50f05eb4": unexpected end of JSON input 2025-12-13T00:19:55.683961434+00:00 stderr F E1213 00:19:55.683919 1 strategy.go:60] unable to parse manifest for "sha256:db3f5192237bfdab2355304f17916e09bc29d6d529fdec48b09a08290ae35905": unexpected end of JSON input 2025-12-13T00:19:55.685664860+00:00 stderr F E1213 00:19:55.685613 1 strategy.go:60] unable to parse manifest for "sha256:b4cb02a4e7cb915b6890d592ed5b4ab67bcef19bf855029c95231f51dd071352": unexpected end of JSON input 2025-12-13T00:19:55.687259384+00:00 stderr F E1213 00:19:55.687201 1 strategy.go:60] unable to parse manifest for "sha256:fa9556628c15b8eb22cafccb737b3fbcecfd681a5c2cfea3302dd771c644a7db": unexpected end of JSON input 2025-12-13T00:19:55.688731855+00:00 stderr F E1213 00:19:55.688696 1 strategy.go:60] unable to parse manifest for 
"sha256:a0a6db2dcdb3d49e36bd0665e3e00f242a690391700e42cab14e86b154152bfd": unexpected end of JSON input 2025-12-13T00:19:55.690180365+00:00 stderr F E1213 00:19:55.690109 1 strategy.go:60] unable to parse manifest for "sha256:e90172ca0f09acf5db1721bd7df304dffd184e00145072132cb71c7f0797adf6": unexpected end of JSON input 2025-12-13T00:19:55.692794537+00:00 stderr F E1213 00:19:55.691649 1 strategy.go:60] unable to parse manifest for "sha256:421d1f6a10e263677b7687ccea8e4a59058e2e3c80585505eec9a9c2e6f9f40e": unexpected end of JSON input 2025-12-13T00:19:55.693830995+00:00 stderr F E1213 00:19:55.693807 1 strategy.go:60] unable to parse manifest for "sha256:6c009f430da02bdcff618a7dcd085d7d22547263eeebfb8d6377a4cf6f58769d": unexpected end of JSON input 2025-12-13T00:19:55.695643085+00:00 stderr F E1213 00:19:55.695603 1 strategy.go:60] unable to parse manifest for "sha256:dc84fed0f6f40975a2277c126438c8aa15c70eeac75981dbaa4b6b853eff61a6": unexpected end of JSON input 2025-12-13T00:19:55.697546338+00:00 stderr F E1213 00:19:55.697479 1 strategy.go:60] unable to parse manifest for "sha256:78af15475eac13d2ff439b33a9c3bdd39147858a824c420e8042fd5f35adce15": unexpected end of JSON input 2025-12-13T00:19:55.699457370+00:00 stderr F E1213 00:19:55.699411 1 strategy.go:60] unable to parse manifest for "sha256:06bbbf9272d5c5161f444388593e9bd8db793d8a2d95a50b429b3c0301fafcdd": unexpected end of JSON input 2025-12-13T00:19:55.701330182+00:00 stderr F E1213 00:19:55.701302 1 strategy.go:60] unable to parse manifest for "sha256:caba895933209aa9a4f3121f9ec8e5e8013398ab4f72bd3ff255227aad8d2c3e": unexpected end of JSON input 2025-12-13T00:19:55.703234785+00:00 stderr F E1213 00:19:55.703159 1 strategy.go:60] unable to parse manifest for "sha256:dbe9905fe2b20ed30b0e2d64543016fa9c145eeb5a678f720ba9d2055f0c9f88": unexpected end of JSON input 2025-12-13T00:19:55.709557419+00:00 stderr F I1213 00:19:55.709004 1 trace.go:236] Trace[1962571723]: "Create" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b766a7e4-0c0e-464d-b0fa-cf18c1017c18,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Dec-2025 00:19:50.118) (total time: 5590ms): 2025-12-13T00:19:55.709557419+00:00 stderr F Trace[1962571723]: ---"Write to database call succeeded" len:1743 5588ms (00:19:55.707) 2025-12-13T00:19:55.709557419+00:00 stderr F Trace[1962571723]: [5.590409413s] [5.590409413s] END 2025-12-13T00:20:53.495646111+00:00 stderr F E1213 00:20:53.493627 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.495646111+00:00 stderr F E1213 00:20:53.493689 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.498055395+00:00 stderr F E1213 00:20:53.497937 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.498055395+00:00 stderr F E1213 00:20:53.498022 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.500075670+00:00 stderr F E1213 00:20:53.500051 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": 
dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.500128692+00:00 stderr F E1213 00:20:53.500116 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.501529840+00:00 stderr F E1213 00:20:53.501489 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.501529840+00:00 stderr F E1213 00:20:53.501512 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.508082986+00:00 stderr F E1213 00:20:53.508051 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.508153978+00:00 stderr F E1213 00:20:53.508136 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.510125522+00:00 stderr F E1213 00:20:53.510079 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.510125522+00:00 stderr F E1213 00:20:53.510105 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.511567130+00:00 stderr F E1213 00:20:53.511511 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.511634952+00:00 stderr F E1213 00:20:53.511596 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.513526633+00:00 stderr F E1213 00:20:53.513451 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.513545324+00:00 stderr F E1213 00:20:53.513531 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.516779661+00:00 stderr F E1213 00:20:53.516742 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.516798321+00:00 stderr F E1213 00:20:53.516775 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.441935716+00:00 stderr F E1213 00:20:54.441857 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.441996627+00:00 stderr F E1213 00:20:54.441960 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.451882475+00:00 stderr F E1213 00:20:54.451840 1 webhook.go:253] Failed to make webhook authorizer 
request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.451922606+00:00 stderr F E1213 00:20:54.451902 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.454328921+00:00 stderr F E1213 00:20:54.454281 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.454328921+00:00 stderr F E1213 00:20:54.454311 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.459913931+00:00 stderr F E1213 00:20:54.459884 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.460010854+00:00 stderr F E1213 00:20:54.459994 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.463431556+00:00 stderr F E1213 00:20:54.463390 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.463431556+00:00 stderr F E1213 00:20:54.463426 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.464764403+00:00 stderr F E1213 00:20:54.464738 1 webhook.go:253] Failed to make 
webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.464838515+00:00 stderr F E1213 00:20:54.464824 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.468321938+00:00 stderr F E1213 00:20:54.468302 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.468378600+00:00 stderr F E1213 00:20:54.468364 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.469602902+00:00 stderr F E1213 00:20:54.469562 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.469619843+00:00 stderr F E1213 00:20:54.469597 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.470959520+00:00 stderr F E1213 00:20:54.470911 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.470959520+00:00 stderr F E1213 00:20:54.470946 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.472999425+00:00 stderr F E1213 00:20:54.472952 1 webhook.go:253] 
Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.473014655+00:00 stderr F E1213 00:20:54.472998 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.474482274+00:00 stderr F E1213 00:20:54.474437 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.474482274+00:00 stderr F E1213 00:20:54.474469 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.476498239+00:00 stderr F E1213 00:20:54.476463 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.476515049+00:00 stderr F E1213 00:20:54.476497 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.479188172+00:00 stderr F E1213 00:20:54.479153 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.479204912+00:00 stderr F E1213 00:20:54.479191 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.480923578+00:00 stderr F E1213 00:20:54.480871 1 
webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.480923578+00:00 stderr F E1213 00:20:54.480900 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.481054071+00:00 stderr F E1213 00:20:54.481035 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.481099443+00:00 stderr F E1213 00:20:54.481087 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.485155823+00:00 stderr F E1213 00:20:54.485039 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.485155823+00:00 stderr F E1213 00:20:54.485072 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.488746829+00:00 stderr F E1213 00:20:54.488718 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.488806701+00:00 stderr F E1213 00:20:54.488792 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.491524994+00:00 stderr F E1213 
00:20:54.491493 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.491542345+00:00 stderr F E1213 00:20:54.491533 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.931503147+00:00 stderr F E1213 00:20:54.931440 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.931639050+00:00 stderr F E1213 00:20:54.931579 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.337897313+00:00 stderr F E1213 00:20:57.337776 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.338018726+00:00 stderr F E1213 00:20:57.337977 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.480000307+00:00 stderr F E1213 00:20:57.479871 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.480096600+00:00 stderr F E1213 00:20:57.480054 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.170741397+00:00 
stderr F E1213 00:20:58.169855 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.170741397+00:00 stderr F E1213 00:20:58.170410 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.456182579+00:00 stderr F E1213 00:20:58.456022 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.456216570+00:00 stderr F E1213 00:20:58.456178 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.009918362+00:00 stderr F E1213 00:20:59.009839 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.010043445+00:00 stderr F E1213 00:20:59.010004 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.161189434+00:00 stderr F E1213 00:20:59.161121 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.161268076+00:00 stderr F E1213 00:20:59.161239 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-12-13T00:20:59.482719700+00:00 stderr F E1213 00:20:59.482626 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.482719700+00:00 stderr F E1213 00:20:59.482674 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.332450213+00:00 stderr F E1213 00:21:05.332338 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.332450213+00:00 stderr F E1213 00:21:05.332421 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:09.489111860+00:00 stderr F E1213 00:21:09.488806 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:09.489111860+00:00 stderr F E1213 00:21:09.488905 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:13.914029504+00:00 stderr F E1213 00:21:13.913520 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:13.914029504+00:00 stderr F E1213 00:21:13.913572 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-12-13T00:21:15.339542311+00:00 stderr F E1213 00:21:15.338987 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:15.339582982+00:00 stderr F E1213 00:21:15.339544 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:16.052779528+00:00 stderr F E1213 00:21:16.052696 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:16.052833900+00:00 stderr F E1213 00:21:16.052770 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:17.118989709+00:00 stderr F E1213 00:21:17.118888 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:17.119283567+00:00 stderr F E1213 00:21:17.118981 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:17.866770018+00:00 stderr F E1213 00:21:17.866730 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:17.866811709+00:00 stderr F E1213 00:21:17.866793 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 
10.217.4.1:443: connect: connection refused 2025-12-13T00:21:18.522296297+00:00 stderr F E1213 00:21:18.522232 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:18.522343588+00:00 stderr F E1213 00:21:18.522294 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:18.548486204+00:00 stderr F E1213 00:21:18.548424 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:18.548532595+00:00 stderr F E1213 00:21:18.548485 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:18.669597622+00:00 stderr F E1213 00:21:18.669526 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:18.669658454+00:00 stderr F E1213 00:21:18.669641 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:19.031981520+00:00 stderr F E1213 00:21:19.031709 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:19.031981520+00:00 stderr F E1213 00:21:19.031776 1 errors.go:77] Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:19.171212068+00:00 stderr F E1213 00:21:19.171071 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:19.171212068+00:00 stderr F E1213 00:21:19.171149 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:57.115292207+00:00 stderr F I1213 00:21:57.115226 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver/0.log

2025-08-13T20:07:23.235321095+00:00 stdout F Copying system trust bundle
2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.020450 1 feature_gate.go:239] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
2025-08-13T20:07:25.021836665+00:00 stderr F I0813 20:07:25.020513 1 feature_gate.go:249] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021595 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstall" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021612 1 config.go:121] Ignoring unknown FeatureGate "OpenShiftPodSecurityAdmission" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021617 1 config.go:121] Ignoring unknown FeatureGate "SigstoreImageVerification" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021621 1 config.go:121] Ignoring unknown FeatureGate "SignatureStores" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021626 1 config.go:121] Ignoring unknown FeatureGate "NewOLM" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021630 1 config.go:121] Ignoring unknown FeatureGate "AzureWorkloadIdentity" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021635 1 config.go:121] Ignoring unknown FeatureGate "CSIDriverSharedResource" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021639 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallPowerVS" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021716 1 config.go:121] Ignoring unknown FeatureGate "AdminNetworkPolicy" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021722 1 config.go:121] Ignoring unknown FeatureGate "Example" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021727 1 config.go:121] Ignoring unknown FeatureGate "MixedCPUsAllocation" 
2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021731 1 config.go:121] Ignoring unknown FeatureGate "MetricsCollectionProfiles" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021736 1 config.go:121] Ignoring unknown FeatureGate "PrivateHostedZoneAWS" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021740 1 config.go:121] Ignoring unknown FeatureGate "UpgradeStatus" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021745 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallAzure" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021750 1 config.go:121] Ignoring unknown FeatureGate "DNSNameResolver" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021754 1 config.go:121] Ignoring unknown FeatureGate "NetworkDiagnosticsConfig" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021758 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallVSphere" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021762 1 config.go:121] Ignoring unknown FeatureGate "ExternalOIDC" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021767 1 config.go:121] Ignoring unknown FeatureGate "InsightsConfigAPI" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021809 1 config.go:121] Ignoring unknown FeatureGate "NodeDisruptionPolicy" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021818 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderGCP" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021822 1 config.go:121] Ignoring unknown FeatureGate "GCPClusterHostedDNS" 2025-08-13T20:07:25.021836665+00:00 stderr F W0813 20:07:25.021826 1 config.go:121] Ignoring unknown FeatureGate "HardwareSpeed" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021830 1 config.go:121] Ignoring unknown FeatureGate "InsightsConfig" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021837 1 config.go:121] Ignoring unknown FeatureGate 
"VSphereMultiVCenters" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021841 1 config.go:121] Ignoring unknown FeatureGate "BuildCSIVolumes" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021845 1 config.go:121] Ignoring unknown FeatureGate "ChunkSizeMiB" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021860 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallNutanix" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021865 1 config.go:121] Ignoring unknown FeatureGate "VSphereStaticIPs" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021869 1 config.go:121] Ignoring unknown FeatureGate "ImagePolicy" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021898 1 config.go:121] Ignoring unknown FeatureGate "ManagedBootImages" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021907 1 config.go:121] Ignoring unknown FeatureGate "PinnedImages" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021912 1 config.go:121] Ignoring unknown FeatureGate "VSphereControlPlaneMachineSet" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021916 1 config.go:121] Ignoring unknown FeatureGate "NetworkLiveMigration" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021920 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallOpenStack" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021924 1 config.go:121] Ignoring unknown FeatureGate "GCPLabelsTags" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021929 1 config.go:121] Ignoring unknown FeatureGate "MachineAPIProviderOpenStack" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021933 1 config.go:121] Ignoring unknown FeatureGate "MetricsServer" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021938 1 config.go:121] Ignoring unknown FeatureGate "VSphereDriverConfiguration" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021942 1 config.go:121] 
Ignoring unknown FeatureGate "EtcdBackendQuota" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021946 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderExternal" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021950 1 config.go:121] Ignoring unknown FeatureGate "MachineAPIOperatorDisableMachineHealthCheckController" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021956 1 config.go:121] Ignoring unknown FeatureGate "PlatformOperators" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021960 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallGCP" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021964 1 config.go:121] Ignoring unknown FeatureGate "OnClusterBuild" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021969 1 config.go:121] Ignoring unknown FeatureGate "VolumeGroupSnapshot" 2025-08-13T20:07:25.021978869+00:00 stderr F W0813 20:07:25.021973 1 config.go:121] Ignoring unknown FeatureGate "ExternalRouteCertificate" 2025-08-13T20:07:25.021997390+00:00 stderr F W0813 20:07:25.021978 1 config.go:121] Ignoring unknown FeatureGate "AlibabaPlatform" 2025-08-13T20:07:25.021997390+00:00 stderr F W0813 20:07:25.021982 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallAWS" 2025-08-13T20:07:25.021997390+00:00 stderr F W0813 20:07:25.021986 1 config.go:121] Ignoring unknown FeatureGate "ClusterAPIInstallIBMCloud" 2025-08-13T20:07:25.021997390+00:00 stderr F W0813 20:07:25.021991 1 config.go:121] Ignoring unknown FeatureGate "AutomatedEtcdBackup" 2025-08-13T20:07:25.022010260+00:00 stderr F W0813 20:07:25.021995 1 config.go:121] Ignoring unknown FeatureGate "BareMetalLoadBalancer" 2025-08-13T20:07:25.022010260+00:00 stderr F W0813 20:07:25.022000 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProvider" 2025-08-13T20:07:25.022010260+00:00 stderr F W0813 20:07:25.022004 1 config.go:121] Ignoring unknown FeatureGate 
"InstallAlternateInfrastructureAWS" 2025-08-13T20:07:25.022050681+00:00 stderr F W0813 20:07:25.022008 1 config.go:121] Ignoring unknown FeatureGate "ExternalCloudProviderAzure" 2025-08-13T20:07:25.022050681+00:00 stderr F W0813 20:07:25.022013 1 config.go:121] Ignoring unknown FeatureGate "MachineConfigNodes" 2025-08-13T20:07:25.022050681+00:00 stderr F W0813 20:07:25.022017 1 config.go:121] Ignoring unknown FeatureGate "GatewayAPI" 2025-08-13T20:07:25.022050681+00:00 stderr F W0813 20:07:25.022022 1 config.go:121] Ignoring unknown FeatureGate "InsightsOnDemandDataGather" 2025-08-13T20:07:25.025046997+00:00 stderr F I0813 20:07:25.024565 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:07:25.679468230+00:00 stderr F I0813 20:07:25.676369 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:07:25.711720245+00:00 stderr F I0813 20:07:25.709906 1 audit.go:340] Using audit backend: ignoreErrors 2025-08-13T20:07:25.735643920+00:00 stderr F I0813 20:07:25.735038 1 plugins.go:83] "Registered admission plugin" plugin="NamespaceLifecycle" 2025-08-13T20:07:25.735643920+00:00 stderr F I0813 20:07:25.735540 1 plugins.go:83] "Registered admission plugin" plugin="ValidatingAdmissionWebhook" 2025-08-13T20:07:25.735643920+00:00 stderr F I0813 20:07:25.735556 1 plugins.go:83] "Registered admission plugin" plugin="MutatingAdmissionWebhook" 2025-08-13T20:07:25.735643920+00:00 stderr F I0813 20:07:25.735562 1 plugins.go:83] "Registered admission plugin" plugin="ValidatingAdmissionPolicy" 2025-08-13T20:07:25.736633209+00:00 stderr F I0813 20:07:25.736209 1 admission.go:48] Admission plugin "project.openshift.io/ProjectRequestLimit" is not configured so it will be disabled. 
2025-08-13T20:07:25.773228408+00:00 stderr F I0813 20:07:25.773109 1 plugins.go:157] Loaded 5 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,build.openshift.io/BuildConfigSecretInjector,image.openshift.io/ImageLimitRange,image.openshift.io/ImagePolicy,MutatingAdmissionWebhook. 2025-08-13T20:07:25.773228408+00:00 stderr F I0813 20:07:25.773159 1 plugins.go:160] Loaded 9 validating admission controller(s) successfully in the following order: OwnerReferencesPermissionEnforcement,build.openshift.io/BuildConfigSecretInjector,build.openshift.io/BuildByStrategy,image.openshift.io/ImageLimitRange,image.openshift.io/ImagePolicy,quota.openshift.io/ClusterResourceQuota,route.openshift.io/RequiredRouteAnnotations,ValidatingAdmissionWebhook,ResourceQuota. 2025-08-13T20:07:25.782088992+00:00 stderr F I0813 20:07:25.773960 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:25.782088992+00:00 stderr F I0813 20:07:25.774009 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:25.782088992+00:00 stderr F I0813 20:07:25.774038 1 maxinflight.go:116] "Set denominator for readonly requests" limit=3000 2025-08-13T20:07:25.782088992+00:00 stderr F I0813 20:07:25.774045 1 maxinflight.go:120] "Set denominator for mutating requests" limit=1500 2025-08-13T20:07:25.823566991+00:00 stderr F I0813 20:07:25.823427 1 store.go:1579] "Monitoring resource count at path" resource="builds.build.openshift.io" path="//builds" 2025-08-13T20:07:25.833390993+00:00 stderr F I0813 20:07:25.833321 1 store.go:1579] "Monitoring resource count at path" resource="buildconfigs.build.openshift.io" path="//buildconfigs" 2025-08-13T20:07:25.841593648+00:00 stderr F I0813 20:07:25.841419 1 handler.go:275] Adding GroupVersion build.openshift.io v1 to ResourceManager 2025-08-13T20:07:25.841593648+00:00 stderr F I0813 20:07:25.841554 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:25.841593648+00:00 
stderr F I0813 20:07:25.841566 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:25.846512949+00:00 stderr F I0813 20:07:25.846443 1 handler.go:275] Adding GroupVersion quota.openshift.io v1 to ResourceManager 2025-08-13T20:07:25.846623362+00:00 stderr F I0813 20:07:25.846572 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:25.846623362+00:00 stderr F I0813 20:07:25.846612 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.020963071+00:00 stderr F I0813 20:07:26.020524 1 store.go:1579] "Monitoring resource count at path" resource="routes.route.openshift.io" path="//routes" 2025-08-13T20:07:26.025418569+00:00 stderr F I0813 20:07:26.024741 1 handler.go:275] Adding GroupVersion route.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.025418569+00:00 stderr F I0813 20:07:26.025146 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.025418569+00:00 stderr F I0813 20:07:26.025164 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.046650937+00:00 stderr F I0813 20:07:26.046552 1 store.go:1579] "Monitoring resource count at path" resource="rangeallocations.security.openshift.io" path="//rangeallocations" 2025-08-13T20:07:26.057699754+00:00 stderr F I0813 20:07:26.057587 1 handler.go:275] Adding GroupVersion security.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.057933581+00:00 stderr F I0813 20:07:26.057844 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.057933581+00:00 stderr F I0813 20:07:26.057912 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.074950239+00:00 stderr F I0813 20:07:26.071717 1 store.go:1579] "Monitoring resource count at path" resource="deploymentconfigs.apps.openshift.io" path="//deploymentconfigs" 2025-08-13T20:07:26.094870880+00:00 stderr F I0813 20:07:26.094722 1 cacher.go:451] cacher (buildconfigs.build.openshift.io): initialized 
2025-08-13T20:07:26.095282612+00:00 stderr F I0813 20:07:26.095244 1 cacher.go:451] cacher (rangeallocations.security.openshift.io): initialized 2025-08-13T20:07:26.095351024+00:00 stderr F I0813 20:07:26.095335 1 reflector.go:351] Caches populated for *security.RangeAllocation from storage/cacher.go:/rangeallocations 2025-08-13T20:07:26.095544449+00:00 stderr F I0813 20:07:26.095525 1 cacher.go:451] cacher (deploymentconfigs.apps.openshift.io): initialized 2025-08-13T20:07:26.095588340+00:00 stderr F I0813 20:07:26.095575 1 reflector.go:351] Caches populated for *apps.DeploymentConfig from storage/cacher.go:/deploymentconfigs 2025-08-13T20:07:26.096232139+00:00 stderr F I0813 20:07:26.096203 1 cacher.go:451] cacher (builds.build.openshift.io): initialized 2025-08-13T20:07:26.099095701+00:00 stderr F I0813 20:07:26.099067 1 reflector.go:351] Caches populated for *build.Build from storage/cacher.go:/builds 2025-08-13T20:07:26.099946485+00:00 stderr F I0813 20:07:26.099859 1 reflector.go:351] Caches populated for *build.BuildConfig from storage/cacher.go:/buildconfigs 2025-08-13T20:07:26.101696446+00:00 stderr F I0813 20:07:26.101627 1 handler.go:275] Adding GroupVersion apps.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.102704534+00:00 stderr F I0813 20:07:26.102633 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.102704534+00:00 stderr F I0813 20:07:26.102685 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.132331104+00:00 stderr F I0813 20:07:26.132237 1 cacher.go:451] cacher (routes.route.openshift.io): initialized 2025-08-13T20:07:26.132331104+00:00 stderr F I0813 20:07:26.132293 1 reflector.go:351] Caches populated for *route.Route from storage/cacher.go:/routes 2025-08-13T20:07:26.173127094+00:00 stderr F I0813 20:07:26.173021 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca, incoming err: 2025-08-13T20:07:26.173127094+00:00 stderr F I0813 20:07:26.173074 
1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca 2025-08-13T20:07:26.173186895+00:00 stderr F I0813 20:07:26.173126 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123, incoming err: 2025-08-13T20:07:26.173186895+00:00 stderr F I0813 20:07:26.173132 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123 2025-08-13T20:07:26.173186895+00:00 stderr F I0813 20:07:26.173146 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/default-route-openshift-image-registry.apps-crc.testing, incoming err: 2025-08-13T20:07:26.173252247+00:00 stderr F I0813 20:07:26.173208 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/image-registry.openshift-image-registry.svc..5000, incoming err: 2025-08-13T20:07:26.173362670+00:00 stderr F I0813 20:07:26.173322 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..2025_08_13_20_07_20.3215003123/image-registry.openshift-image-registry.svc.cluster.local..5000, incoming err: 2025-08-13T20:07:26.173467893+00:00 stderr F I0813 20:07:26.173425 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/..data, incoming err: 2025-08-13T20:07:26.173467893+00:00 stderr F I0813 20:07:26.173453 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/..data 2025-08-13T20:07:26.173480724+00:00 stderr F I0813 20:07:26.173473 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/default-route-openshift-image-registry.apps-crc.testing, incoming err: 2025-08-13T20:07:26.173490634+00:00 stderr F I0813 20:07:26.173479 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/default-route-openshift-image-registry.apps-crc.testing 
2025-08-13T20:07:26.173502834+00:00 stderr F I0813 20:07:26.173496 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc..5000, incoming err: 2025-08-13T20:07:26.173512675+00:00 stderr F I0813 20:07:26.173501 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc..5000 2025-08-13T20:07:26.173522885+00:00 stderr F I0813 20:07:26.173513 1 apiserver.go:156] reading image import ca path: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000, incoming err: 2025-08-13T20:07:26.173522885+00:00 stderr F I0813 20:07:26.173518 1 apiserver.go:161] skipping dir or symlink: /var/run/configmaps/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 2025-08-13T20:07:26.229558662+00:00 stderr F I0813 20:07:26.229472 1 store.go:1579] "Monitoring resource count at path" resource="images.image.openshift.io" path="//images" 2025-08-13T20:07:26.262353442+00:00 stderr F I0813 20:07:26.262258 1 store.go:1579] "Monitoring resource count at path" resource="imagestreams.image.openshift.io" path="//imagestreams" 2025-08-13T20:07:26.345720652+00:00 stderr F I0813 20:07:26.345645 1 cacher.go:451] cacher (imagestreams.image.openshift.io): initialized 2025-08-13T20:07:26.349918072+00:00 stderr F I0813 20:07:26.349863 1 reflector.go:351] Caches populated for *image.ImageStream from storage/cacher.go:/imagestreams 2025-08-13T20:07:26.369736351+00:00 stderr F I0813 20:07:26.369613 1 handler.go:275] Adding GroupVersion image.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.370657947+00:00 stderr F W0813 20:07:26.370604 1 genericapiserver.go:756] Skipping API image.openshift.io/1.0 because it has no resources. 2025-08-13T20:07:26.370657947+00:00 stderr F W0813 20:07:26.370641 1 genericapiserver.go:756] Skipping API image.openshift.io/pre012 because it has no resources. 
2025-08-13T20:07:26.371867112+00:00 stderr F I0813 20:07:26.371470 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.372350326+00:00 stderr F I0813 20:07:26.372293 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.385405080+00:00 stderr F I0813 20:07:26.385348 1 handler.go:275] Adding GroupVersion project.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.385561074+00:00 stderr F I0813 20:07:26.385511 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.385561074+00:00 stderr F I0813 20:07:26.385546 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.416983185+00:00 stderr F I0813 20:07:26.416673 1 store.go:1579] "Monitoring resource count at path" resource="templates.template.openshift.io" path="//templates" 2025-08-13T20:07:26.436443863+00:00 stderr F I0813 20:07:26.436348 1 store.go:1579] "Monitoring resource count at path" resource="templateinstances.template.openshift.io" path="//templateinstances" 2025-08-13T20:07:26.443011912+00:00 stderr F I0813 20:07:26.442942 1 cacher.go:451] cacher (templateinstances.template.openshift.io): initialized 2025-08-13T20:07:26.443040462+00:00 stderr F I0813 20:07:26.443009 1 reflector.go:351] Caches populated for *template.TemplateInstance from storage/cacher.go:/templateinstances 2025-08-13T20:07:26.455495849+00:00 stderr F I0813 20:07:26.454602 1 store.go:1579] "Monitoring resource count at path" resource="brokertemplateinstances.template.openshift.io" path="//brokertemplateinstances" 2025-08-13T20:07:26.458248898+00:00 stderr F I0813 20:07:26.458174 1 cacher.go:451] cacher (templates.template.openshift.io): initialized 2025-08-13T20:07:26.458248898+00:00 stderr F I0813 20:07:26.458240 1 reflector.go:351] Caches populated for *template.Template from storage/cacher.go:/templates 2025-08-13T20:07:26.460369019+00:00 stderr F I0813 20:07:26.460124 1 cacher.go:451] cacher (images.image.openshift.io): initialized 
2025-08-13T20:07:26.460369019+00:00 stderr F I0813 20:07:26.460182 1 reflector.go:351] Caches populated for *image.Image from storage/cacher.go:/images 2025-08-13T20:07:26.463958292+00:00 stderr F I0813 20:07:26.463908 1 handler.go:275] Adding GroupVersion template.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.464163788+00:00 stderr F I0813 20:07:26.464118 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.464163788+00:00 stderr F I0813 20:07:26.464149 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:26.464408955+00:00 stderr F I0813 20:07:26.464364 1 cacher.go:451] cacher (brokertemplateinstances.template.openshift.io): initialized 2025-08-13T20:07:26.464455926+00:00 stderr F I0813 20:07:26.464418 1 reflector.go:351] Caches populated for *template.BrokerTemplateInstance from storage/cacher.go:/brokertemplateinstances 2025-08-13T20:07:26.484150241+00:00 stderr F I0813 20:07:26.483684 1 handler.go:275] Adding GroupVersion authorization.openshift.io v1 to ResourceManager 2025-08-13T20:07:26.484232063+00:00 stderr F I0813 20:07:26.483850 1 maxinflight.go:139] "Initialized nonMutatingChan" len=3000 2025-08-13T20:07:26.484232063+00:00 stderr F I0813 20:07:26.483908 1 maxinflight.go:145] "Initialized mutatingChan" len=1500 2025-08-13T20:07:27.540162388+00:00 stderr F I0813 20:07:27.537590 1 server.go:50] Starting master on 0.0.0.0:8443 (v0.0.0-master+$Format:%H$) 2025-08-13T20:07:27.540162388+00:00 stderr F I0813 20:07:27.537839 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551033 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551069 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 
20:07:27.551125 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551144 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551172 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551226 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:07:27.551402810+00:00 stderr F I0813 20:07:27.551313 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:07:27.551267526 +0000 UTC))" 2025-08-13T20:07:27.551645367+00:00 stderr F I0813 20:07:27.551598 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115645\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:27.551574325 +0000 UTC))" 2025-08-13T20:07:27.551660568+00:00 stderr F I0813 20:07:27.551646 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:07:27.551725429+00:00 stderr F I0813 20:07:27.551686 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be 
initiated 2025-08-13T20:07:27.551740480+00:00 stderr F I0813 20:07:27.551729 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:07:27.551972457+00:00 stderr F I0813 20:07:27.551937 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:07:27.552988366+00:00 stderr F I0813 20:07:27.552961 1 openshift_apiserver.go:593] Using default project node label selector: 2025-08-13T20:07:27.553094279+00:00 stderr F I0813 20:07:27.553077 1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller 2025-08-13T20:07:27.572047802+00:00 stderr F I0813 20:07:27.571750 1 healthz.go:261] poststarthook/authorization.openshift.io-bootstrapclusterroles,poststarthook/authorization.openshift.io-ensurenodebootstrap-sa check failed: healthz 2025-08-13T20:07:27.572047802+00:00 stderr F [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: not finished 2025-08-13T20:07:27.572047802+00:00 stderr F [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: not finished 2025-08-13T20:07:27.576639694+00:00 stderr F I0813 20:07:27.575071 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.576639694+00:00 stderr F I0813 20:07:27.575487 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.576639694+00:00 stderr F I0813 20:07:27.575951 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.576639694+00:00 stderr F I0813 20:07:27.576187 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.577214520+00:00 stderr F I0813 20:07:27.577137 
1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.577962422+00:00 stderr F I0813 20:07:27.577938 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.578227459+00:00 stderr F I0813 20:07:27.578206 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.588645468+00:00 stderr F I0813 20:07:27.585074 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.591386827+00:00 stderr F I0813 20:07:27.591347 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.593579579+00:00 stderr F I0813 20:07:27.593471 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.593949880+00:00 stderr F I0813 20:07:27.593924 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.595380011+00:00 stderr F I0813 20:07:27.595134 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.596744490+00:00 stderr F I0813 20:07:27.596714 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.598120860+00:00 stderr F I0813 20:07:27.598093 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.600147248+00:00 stderr F I0813 20:07:27.600118 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 
2025-08-13T20:07:27.614977173+00:00 stderr F I0813 20:07:27.611062 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.614977173+00:00 stderr F I0813 20:07:27.611978 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.616675152+00:00 stderr F I0813 20:07:27.615986 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.618291068+00:00 stderr F I0813 20:07:27.617171 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.620571273+00:00 stderr F I0813 20:07:27.620521 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.627531003+00:00 stderr F I0813 20:07:27.627368 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.651381577+00:00 stderr F I0813 20:07:27.651267 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:07:27.651381577+00:00 stderr F I0813 20:07:27.651361 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:07:27.652189850+00:00 stderr F I0813 20:07:27.651622 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:07:27.652189850+00:00 stderr F I0813 20:07:27.651749 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.651709916 +0000 UTC))" 2025-08-13T20:07:27.652219171+00:00 stderr F I0813 20:07:27.652193 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:07:27.652171329 +0000 UTC))" 2025-08-13T20:07:27.652622852+00:00 stderr F I0813 20:07:27.652480 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115645\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:27.652463258 +0000 UTC))" 2025-08-13T20:07:27.654845316+00:00 stderr F I0813 20:07:27.654701 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:07:27.654653201 +0000 UTC))" 2025-08-13T20:07:27.654873837+00:00 stderr F I0813 20:07:27.654844 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC 
to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:07:27.654817125 +0000 UTC))" 2025-08-13T20:07:27.654922278+00:00 stderr F I0813 20:07:27.654901 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:27.654853046 +0000 UTC))" 2025-08-13T20:07:27.654938689+00:00 stderr F I0813 20:07:27.654928 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:27.654912818 +0000 UTC))" 2025-08-13T20:07:27.654953739+00:00 stderr F I0813 20:07:27.654946 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.654934669 +0000 UTC))" 2025-08-13T20:07:27.654980680+00:00 stderr F I0813 20:07:27.654970 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.654952329 +0000 UTC))" 2025-08-13T20:07:27.655065712+00:00 stderr F I0813 20:07:27.654994 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.65497726 +0000 UTC))" 2025-08-13T20:07:27.655065712+00:00 stderr F I0813 20:07:27.655043 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.655025441 +0000 UTC))" 2025-08-13T20:07:27.655082973+00:00 stderr F I0813 20:07:27.655067 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:07:27.655050672 +0000 UTC))" 2025-08-13T20:07:27.655094843+00:00 stderr F I0813 20:07:27.655086 1 tlsconfig.go:178] "Loaded client CA" index=9 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:07:27.655075233 +0000 UTC))" 2025-08-13T20:07:27.655130624+00:00 stderr F I0813 20:07:27.655107 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:27.655092493 +0000 UTC))" 2025-08-13T20:07:27.658235403+00:00 stderr F I0813 20:07:27.658033 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-apiserver.svc\" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:07:27.658011927 +0000 UTC))" 2025-08-13T20:07:27.658620234+00:00 stderr F I0813 20:07:27.658438 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115645\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:27.658417289 +0000 UTC))" 2025-08-13T20:07:27.661626220+00:00 stderr F I0813 20:07:27.661547 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 
2025-08-13T20:07:27.705870569+00:00 stderr F I0813 20:07:27.703063 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:27.851120933+00:00 stderr F I0813 20:07:27.851060 1 reflector.go:351] Caches populated for *etcd.ImageLayers from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:08:03.895683401+00:00 stderr F E0813 20:08:03.895372 1 strategy.go:60] unable to parse manifest for "sha256:b95b77cd1b794265f246037453886664374732b4da033339ff08e95ec410994c": unexpected end of JSON input 2025-08-13T20:08:03.914238963+00:00 stderr F I0813 20:08:03.913112 1 trace.go:236] Trace[499527712]: "Create" accept:application/json, */*,audit-id:064330a5-92f0-4ee6-b57c-a4a65def5eee,client:10.217.0.46,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:cluster-samples-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:POST (13-Aug-2025 20:08:03.000) (total time: 912ms): 2025-08-13T20:08:03.914238963+00:00 stderr F Trace[499527712]: ---"Write to database call succeeded" len:436 908ms (20:08:03.911) 2025-08-13T20:08:03.914238963+00:00 stderr F Trace[499527712]: [912.494531ms] [912.494531ms] END 2025-08-13T20:08:42.680410211+00:00 stderr F E0813 20:08:42.679639 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.680410211+00:00 stderr F E0813 20:08:42.680000 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.687005750+00:00 stderr F E0813 20:08:42.686917 1 webhook.go:253] Failed to make webhook authorizer request: 
Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.687221417+00:00 stderr F E0813 20:08:42.687005 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.691993933+00:00 stderr F E0813 20:08:42.691953 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.692173599+00:00 stderr F E0813 20:08:42.692135 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.695301828+00:00 stderr F E0813 20:08:42.695256 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.695398301+00:00 stderr F E0813 20:08:42.695380 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.747725021+00:00 stderr F E0813 20:08:42.747669 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.747938077+00:00 stderr F E0813 20:08:42.747888 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.752690934+00:00 stderr F E0813 20:08:42.752657 1 webhook.go:253] Failed to make webhook 
authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.752819277+00:00 stderr F E0813 20:08:42.752753 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.756291127+00:00 stderr F E0813 20:08:42.756265 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.756361749+00:00 stderr F E0813 20:08:42.756345 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.760296242+00:00 stderr F E0813 20:08:42.760195 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.760377704+00:00 stderr F E0813 20:08:42.760333 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.763271847+00:00 stderr F E0813 20:08:42.763191 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.763293148+00:00 stderr F E0813 20:08:42.763279 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.517637905+00:00 stderr F E0813 20:08:43.517509 1 webhook.go:253] Failed to 
make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.517637905+00:00 stderr F E0813 20:08:43.517602 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.522309689+00:00 stderr F E0813 20:08:43.521746 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.522309689+00:00 stderr F E0813 20:08:43.521870 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.525432328+00:00 stderr F E0813 20:08:43.525397 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.525530711+00:00 stderr F E0813 20:08:43.525514 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.534683723+00:00 stderr F E0813 20:08:43.534064 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.534683723+00:00 stderr F E0813 20:08:43.534151 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.536467764+00:00 stderr F E0813 20:08:43.536396 1 
webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.536542457+00:00 stderr F E0813 20:08:43.536487 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.546050469+00:00 stderr F E0813 20:08:43.545968 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.546185853+00:00 stderr F E0813 20:08:43.546164 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.582191905+00:00 stderr F E0813 20:08:43.581025 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.582191905+00:00 stderr F E0813 20:08:43.581113 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.597845344+00:00 stderr F E0813 20:08:43.597742 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.598038620+00:00 stderr F E0813 20:08:43.598016 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.605990648+00:00 stderr F E0813 
20:08:43.605034 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.605990648+00:00 stderr F E0813 20:08:43.605299 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.617762435+00:00 stderr F E0813 20:08:43.617714 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.618463275+00:00 stderr F E0813 20:08:43.618441 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.631235552+00:00 stderr F E0813 20:08:43.631111 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.631235552+00:00 stderr F E0813 20:08:43.631195 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.633063684+00:00 stderr F E0813 20:08:43.633030 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.633179007+00:00 stderr F E0813 20:08:43.633158 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.664941568+00:00 
stderr F E0813 20:08:43.664815 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.664941568+00:00 stderr F E0813 20:08:43.664883 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.665028590+00:00 stderr F E0813 20:08:43.664756 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.665098482+00:00 stderr F E0813 20:08:43.665082 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.670473807+00:00 stderr F E0813 20:08:43.670001 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.670473807+00:00 stderr F E0813 20:08:43.670073 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.671117565+00:00 stderr F E0813 20:08:43.670938 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.671117565+00:00 stderr F E0813 20:08:43.670992 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:43.674330557+00:00 stderr F E0813 20:08:43.674193 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.674330557+00:00 stderr F E0813 20:08:43.674263 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.677569540+00:00 stderr F E0813 20:08:43.677422 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.677569540+00:00 stderr F E0813 20:08:43.677516 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.518948113+00:00 stderr F E0813 20:08:44.518836 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:44.519334454+00:00 stderr F E0813 20:08:44.519105 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.255174302+00:00 stderr F E0813 20:08:45.255113 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.255495301+00:00 stderr F E0813 20:08:45.255471 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:08:45.782377827+00:00 stderr F E0813 20:08:45.782289 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.782557402+00:00 stderr F E0813 20:08:45.782437 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.639857322+00:00 stderr F E0813 20:08:47.638317 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.639857322+00:00 stderr F E0813 20:08:47.638519 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.879284687+00:00 stderr F E0813 20:08:47.879093 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.879455542+00:00 stderr F E0813 20:08:47.879342 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.159244084+00:00 stderr F E0813 20:08:48.159178 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.159371267+00:00 stderr F E0813 20:08:48.159305 1 webhook.go:253] Failed to make webhook authorizer request: Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.159446559+00:00 stderr F E0813 20:08:48.159400 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.159477540+00:00 stderr F E0813 20:08:48.159408 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.168062356+00:00 stderr F E0813 20:08:48.168036 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.168134548+00:00 stderr F E0813 20:08:48.168119 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.200477386+00:00 stderr F E0813 20:08:48.200372 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.200694172+00:00 stderr F E0813 20:08:48.200614 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.204462580+00:00 stderr F E0813 20:08:48.204390 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.204605784+00:00 stderr F E0813 20:08:48.204549 1 errors.go:77] Post 
"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.215371863+00:00 stderr F E0813 20:08:48.215286 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.215519747+00:00 stderr F E0813 20:08:48.215463 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.236359945+00:00 stderr F E0813 20:08:48.236258 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.236598911+00:00 stderr F E0813 20:08:48.236510 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.268222328+00:00 stderr F E0813 20:08:48.268104 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.268717872+00:00 stderr F E0813 20:08:48.268693 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.482943234+00:00 stderr F E0813 20:08:48.480051 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.482943234+00:00 stderr F E0813 20:08:48.480202 1 
errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.568999762+00:00 stderr F E0813 20:08:48.568880 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.570530536+00:00 stderr F E0813 20:08:48.569033 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.580451170+00:00 stderr F E0813 20:08:48.580403 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.580549253+00:00 stderr F E0813 20:08:48.580532 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.012194089+00:00 stderr F E0813 20:08:49.012114 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.012219579+00:00 stderr F E0813 20:08:49.012201 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.326890131+00:00 stderr F E0813 20:08:49.326755 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.327013085+00:00 stderr F E0813 
20:08:49.326884 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.673518909+00:00 stderr F E0813 20:08:49.673432 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.673628263+00:00 stderr F E0813 20:08:49.673527 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.723191914+00:00 stderr F E0813 20:08:49.723034 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.723191914+00:00 stderr F E0813 20:08:49.723164 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.819041402+00:00 stderr F E0813 20:08:49.818928 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.819041402+00:00 stderr F E0813 20:08:49.818993 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.824400875+00:00 stderr F E0813 20:08:49.824312 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.824400875+00:00 stderr F E0813 20:08:49.824372 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.955995548+00:00 stderr F E0813 20:08:49.955762 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:49.955995548+00:00 stderr F E0813 20:08:49.955940 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:50.066716563+00:00 stderr F E0813 20:08:50.066586 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:50.066716563+00:00 stderr F E0813 20:08:50.066667 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:50.139565671+00:00 stderr F E0813 20:08:50.138008 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:50.139565671+00:00 stderr F E0813 20:08:50.138109 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:51.306224180+00:00 stderr F E0813 20:08:51.306053 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:51.306224180+00:00 stderr F E0813 20:08:51.306140 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:51.772305213+00:00 stderr F E0813 20:08:51.771570 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:51.772305213+00:00 stderr F E0813 20:08:51.772201 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:51.803185008+00:00 stderr F E0813 20:08:51.803111 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:51.803239480+00:00 stderr F E0813 20:08:51.803183 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:52.176496652+00:00 stderr F E0813 20:08:52.176361 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:52.176496652+00:00 stderr F E0813 20:08:52.176478 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:52.487428976+00:00 stderr F E0813 20:08:52.487371 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:52.487649713+00:00 stderr F E0813 20:08:52.487587 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:52.496487466+00:00 stderr F E0813 20:08:52.496452 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:52.496672701+00:00 stderr F E0813 20:08:52.496648 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:52.810637063+00:00 stderr F E0813 20:08:52.810556 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:52.810637063+00:00 stderr F E0813 20:08:52.810624 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:52.814600507+00:00 stderr F E0813 20:08:52.814523 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:52.814600507+00:00 stderr F E0813 20:08:52.814589 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:53.029824327+00:00 stderr F E0813 20:08:53.029690 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:53.029975582+00:00 stderr F E0813 20:08:53.029890 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:56.010145296+00:00 stderr F E0813 20:08:56.009052 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:56.010145296+00:00 stderr F E0813 20:08:56.009245 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:56.600006948+00:00 stderr F E0813 20:08:56.599861 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:56.600116191+00:00 stderr F E0813 20:08:56.600000 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:57.304175456+00:00 stderr F E0813 20:08:57.304076 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:57.304227648+00:00 stderr F E0813 20:08:57.304172 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:57.337506012+00:00 stderr F E0813 20:08:57.337392 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:57.337506012+00:00 stderr F E0813 20:08:57.337457 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:57.610713075+00:00 stderr F E0813 20:08:57.610572 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:57.610713075+00:00 stderr F E0813 20:08:57.610688 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:57.794263737+00:00 stderr F E0813 20:08:57.794096 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:57.794263737+00:00 stderr F E0813 20:08:57.794186 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:58.658566789+00:00 stderr F E0813 20:08:58.658408 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:58.658633531+00:00 stderr F E0813 20:08:58.658563 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:58.713140353+00:00 stderr F E0813 20:08:58.713075 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:58.713287518+00:00 stderr F E0813 20:08:58.713269 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:58.742951748+00:00 stderr F E0813 20:08:58.742866 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:58.743079232+00:00 stderr F E0813 20:08:58.743062 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:09:34.148973349+00:00 stderr F I0813 20:09:34.148730 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:34.869006983+00:00 stderr F I0813 20:09:34.868932 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:35.576449306+00:00 stderr F I0813 20:09:35.576346 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:35.784050738+00:00 stderr F I0813 20:09:35.783984 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:37.050415975+00:00 stderr F I0813 20:09:37.050268 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:37.312150859+00:00 stderr F I0813 20:09:37.310042 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:39.898706978+00:00 stderr F I0813 20:09:39.898600 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:46.110149745+00:00 stderr F I0813 20:09:46.110030 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:48.834429082+00:00 stderr F I0813 20:09:48.833277 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:48.892728974+00:00 stderr F I0813 20:09:48.892672 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:51.791420782+00:00 stderr F I0813 20:09:51.791292 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:53.442499030+00:00 stderr F I0813 20:09:53.435370 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:56.934371515+00:00 stderr F I0813 20:09:56.934268 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:59.692452641+00:00 stderr F I0813 20:09:59.692353 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:10:03.923274763+00:00 stderr F I0813 20:10:03.923110 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:10:05.041617317+00:00 stderr F I0813 20:10:05.041496 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:10:13.806297546+00:00 stderr F I0813 20:10:13.806192 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:10:14.139715366+00:00 stderr F I0813 20:10:14.139628 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:10:35.573370085+00:00 stderr F I0813 20:10:35.573172 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:10:35.662360187+00:00 stderr F I0813 20:10:35.662208 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:10:36.571610796+00:00 stderr F I0813 20:10:36.571495 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:17:34.122353834+00:00 stderr F I0813 20:17:34.118870 1 trace.go:236] Trace[1693487721]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:1fe089cf-6742-4b4d-a354-6690bd53f61e,client:10.217.0.19,api-group:route.openshift.io,api-version:v1,name:oauth-openshift,subresource:,namespace:openshift-authentication,protocol:HTTP/2.0,resource:routes,scope:resource,url:/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes/oauth-openshift,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (13-Aug-2025 20:17:33.447) (total time: 671ms):
2025-08-13T20:17:34.122353834+00:00 stderr F Trace[1693487721]: ---"About to write a response" 670ms (20:17:34.118)
2025-08-13T20:17:34.122353834+00:00 stderr F Trace[1693487721]: [671.040603ms] [671.040603ms] END
2025-08-13T20:18:26.366246249+00:00 stderr F I0813 20:18:26.362702 1 trace.go:236] Trace[1917860378]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:07499f87-1a86-4458-a0a3-18f0c0d9c932,client:10.217.0.19,api-group:route.openshift.io,api-version:v1,name:oauth-openshift,subresource:,namespace:openshift-authentication,protocol:HTTP/2.0,resource:routes,scope:resource,url:/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes/oauth-openshift,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (13-Aug-2025 20:18:25.572) (total time: 789ms):
2025-08-13T20:18:26.366246249+00:00 stderr F Trace[1917860378]: ---"About to write a response" 788ms (20:18:26.360)
2025-08-13T20:18:26.366246249+00:00 stderr F Trace[1917860378]: [789.800313ms] [789.800313ms] END
2025-08-13T20:18:34.542003345+00:00 stderr F E0813 20:18:34.540993 1 strategy.go:60] unable to parse manifest for "sha256:b95b77cd1b794265f246037453886664374732b4da033339ff08e95ec410994c": unexpected end of JSON input
2025-08-13T20:18:34.763501320+00:00 stderr F I0813 20:18:34.762714 1 trace.go:236] Trace[2007589504]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:05d22fbd-a8ec-46ef-8a2c-3a920dd334ea,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Aug-2025 20:18:33.432) (total time: 1330ms):
2025-08-13T20:18:34.763501320+00:00 stderr F Trace[2007589504]: ---"Write to database call succeeded" len:287 1316ms (20:18:34.751)
2025-08-13T20:18:34.763501320+00:00 stderr F Trace[2007589504]: [1.330074082s] [1.330074082s] END
2025-08-13T20:22:18.881545212+00:00 stderr F E0813 20:22:18.880539 1 strategy.go:60] unable to parse manifest for "sha256:5f73c1b804b7ff63f61151b4f194fe45c645de27671a182582eac8b3fcb30dd4": unexpected end of JSON input
2025-08-13T20:22:18.903663024+00:00 stderr F I0813 20:22:18.903580 1 trace.go:236] Trace[1121898041]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:da392399-76ee-46af-89a0-9e901be9ab96,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Aug-2025 20:22:18.339) (total time: 563ms):
2025-08-13T20:22:18.903663024+00:00 stderr F Trace[1121898041]: ---"Write to database call succeeded" len:274 562ms (20:22:18.902)
2025-08-13T20:22:18.903663024+00:00 stderr F Trace[1121898041]: [563.883166ms] [563.883166ms] END
2025-08-13T20:33:33.938260917+00:00 stderr F E0813 20:33:33.937966 1 strategy.go:60] unable to parse manifest for "sha256:b95b77cd1b794265f246037453886664374732b4da033339ff08e95ec410994c": unexpected end of JSON input
2025-08-13T20:33:33.953413383+00:00 stderr F I0813 20:33:33.951719 1 trace.go:236] Trace[1496715544]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:11e9111b-024c-48db-a0d2-0ce5b8dbf5ab,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Aug-2025 20:33:33.343) (total time: 607ms):
2025-08-13T20:33:33.953413383+00:00 stderr F Trace[1496715544]: ---"Write to database call succeeded" len:287 606ms (20:33:33.950)
2025-08-13T20:33:33.953413383+00:00 stderr F Trace[1496715544]: [607.831872ms] [607.831872ms] END
2025-08-13T20:41:04.002220584+00:00 stderr F E0813 20:41:04.001878 1 strategy.go:60] unable to parse manifest for "sha256:5f73c1b804b7ff63f61151b4f194fe45c645de27671a182582eac8b3fcb30dd4": unexpected end of JSON input
2025-08-13T20:41:04.014897090+00:00 stderr F I0813 20:41:04.013060 1 trace.go:236] Trace[586486215]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:934f2366-fb55-42fc-80ba-80239bd09acf,client:10.217.0.87,api-group:image.openshift.io,api-version:v1,name:,subresource:,namespace:openshift,protocol:HTTP/2.0,resource:imagestreamimports,scope:resource,url:/apis/image.openshift.io/v1/namespaces/openshift/imagestreamimports,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format/system:serviceaccount:openshift-infra:image-import-controller,verb:POST (13-Aug-2025 20:41:03.334) (total time: 678ms):
2025-08-13T20:41:04.014897090+00:00 stderr F Trace[586486215]: ---"Write to database call succeeded" len:274 676ms (20:41:04.011)
2025-08-13T20:41:04.014897090+00:00 stderr F Trace[586486215]: [678.206432ms] [678.206432ms] END
2025-08-13T20:42:36.314535278+00:00 stderr F I0813 20:42:36.313683 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.321893450+00:00 stderr F I0813 20:42:36.321625 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.322135567+00:00 stderr F I0813 20:42:36.322017 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382031144+00:00 stderr F I0813 20:42:36.316915 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.414129159+00:00 stderr F I0813 20:42:36.316948 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.415028545+00:00 stderr F I0813 20:42:36.316971 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.415411526+00:00 stderr F I0813 20:42:36.316985 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.415672034+00:00 stderr F I0813 20:42:36.316998 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.415962002+00:00 stderr F I0813 20:42:36.317012 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.437615566+00:00 stderr F I0813 20:42:36.317023 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.441272512+00:00 stderr F I0813 20:42:36.317122 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.441877979+00:00 stderr F I0813 20:42:36.317139 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.442279471+00:00 stderr F I0813 20:42:36.317166 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.442527238+00:00 stderr F I0813 20:42:36.317179 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.445592236+00:00 stderr F I0813 20:42:36.317191 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.446068460+00:00 stderr F I0813 20:42:36.317208 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.446335858+00:00 stderr F I0813 20:42:36.317253 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.457722176+00:00 stderr F I0813 20:42:36.317274 1 streamwatcher.go:111] Unexpected EOF during
watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.460092924+00:00 stderr F I0813 20:42:36.317291 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.460092924+00:00 stderr F I0813 20:42:36.317307 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.460092924+00:00 stderr F I0813 20:42:36.317326 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:38.033451965+00:00 stderr F I0813 20:42:38.032809 1 genericapiserver.go:689] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:42:38.033451965+00:00 stderr F I0813 20:42:38.032891 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="ShutdownInitiated"
2025-08-13T20:42:38.035510985+00:00 stderr F I0813 20:42:38.034578 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-apiserver", Name:"apiserver-7fc54b8dd7-d2bhp", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving
2025-08-13T20:42:38.035510985+00:00 stderr F I0813 20:42:38.032881 1 genericapiserver.go:696] [graceful-termination] RunPreShutdownHooks has completed
2025-08-13T20:42:38.041608500+00:00 stderr F W0813 20:42:38.041489 1 genericapiserver.go:1060] failed to create event openshift-apiserver/apiserver-7fc54b8dd7-d2bhp.185b6e4d4a884464: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/events": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/1.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions/0.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/0.log
2025-08-13T20:07:24.772404583+00:00 stderr F W0813 20:07:24.771967 1 cmd.go:245] Using insecure, self-signed certificates
2025-08-13T20:07:24.777288853+00:00 stderr F I0813 20:07:24.776657 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755115644 cert, and key in /tmp/serving-cert-2464900357/serving-signer.crt, /tmp/serving-cert-2464900357/serving-signer.key
2025-08-13T20:07:25.331597096+00:00 stderr F I0813 20:07:25.325173 1 observer_polling.go:159] Starting file observer
2025-08-13T20:07:25.420711131+00:00 stderr F I0813 20:07:25.420175 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2
2025-08-13T20:07:25.435699481+00:00 stderr F I0813 20:07:25.435631 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key"
2025-08-13T20:07:26.555451245+00:00 stderr F I0813 20:07:26.516921 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T20:07:26.570867117+00:00 stderr F I0813 20:07:26.569648 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400
2025-08-13T20:07:26.570867117+00:00 stderr F I0813 20:07:26.569692 1 maxinflight.go:145] "Initialized mutatingChan" len=200
2025-08-13T20:07:26.570867117+00:00 stderr F I0813 20:07:26.569711 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400
2025-08-13T20:07:26.570867117+00:00 stderr F I0813 20:07:26.569717 1 maxinflight.go:120] "Set denominator for mutating requests"
limit=200 2025-08-13T20:07:26.603546134+00:00 stderr F I0813 20:07:26.602166 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:07:26.603546134+00:00 stderr F I0813 20:07:26.602519 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:07:26.603546134+00:00 stderr F W0813 20:07:26.602556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:07:26.603546134+00:00 stderr F W0813 20:07:26.602565 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622290 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622370 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622410 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622447 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.623139 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.622993 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:07:26.624606658+00:00 stderr F I0813 20:07:26.623361 1 shared_informer.go:311] Waiting for caches to sync for 
RequestHeaderAuthRequestController 2025-08-13T20:07:26.626915804+00:00 stderr F I0813 20:07:26.626743 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart 2025-08-13T20:07:26.631294090+00:00 stderr F I0813 20:07:26.631213 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:26.631294090+00:00 stderr F I0813 20:07:26.631217 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:26.633355789+00:00 stderr F I0813 20:07:26.631563 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:26.646267679+00:00 stderr F I0813 20:07:26.646177 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1755115644\" (2025-08-13 20:07:24 +0000 UTC to 2025-09-12 20:07:25 +0000 UTC (now=2025-08-13 20:07:26.646104804 +0000 UTC))" 2025-08-13T20:07:26.646669191+00:00 stderr F I0813 20:07:26.646580 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115646\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115646\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:26.646549117 +0000 UTC))" 2025-08-13T20:07:26.646669191+00:00 stderr F I0813 20:07:26.646640 1 secure_serving.go:213] Serving securely on [::]:17698 2025-08-13T20:07:26.646684411+00:00 stderr F I0813 20:07:26.646675 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:07:26.646754703+00:00 stderr F I0813 20:07:26.646702 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 
2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.726158 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.726391 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.726754 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:07:26.726710525 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727133 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727492 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:07:26.726770897 +0000 UTC))" 2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727561 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:26.727525769 +0000 UTC))" 
2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727583 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:26.72756897 +0000 UTC))"
2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727602 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.727589171 +0000 UTC))"
2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727622 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.727608371 +0000 UTC))"
2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727641 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.727627762 +0000 UTC))"
2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727660 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.727646862 +0000 UTC))"
2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727682 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:07:26.727668973 +0000 UTC))"
2025-08-13T20:07:26.728155007+00:00 stderr F I0813 20:07:26.727700 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:07:26.727688594 +0000 UTC))"
2025-08-13T20:07:26.733341906+00:00 stderr F I0813 20:07:26.733256 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1755115644\" (2025-08-13 20:07:24 +0000 UTC to 2025-09-12 20:07:25 +0000 UTC (now=2025-08-13 20:07:26.733224312 +0000 UTC))"
2025-08-13T20:07:26.733905142+00:00 stderr F I0813 20:07:26.733831 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115646\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115646\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:26.733764008 +0000 UTC))"
2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734252 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:07:26.734215741 +0000 UTC))"
2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734304 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:07:26.734285943 +0000 UTC))"
2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734325 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:26.734311043 +0000 UTC))"
2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734345 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:07:26.734332014 +0000 UTC))"
2025-08-13T20:07:26.734398026+00:00 stderr F I0813 20:07:26.734392 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.734378285 +0000 UTC))"
2025-08-13T20:07:26.734438267+00:00 stderr F I0813 20:07:26.734411 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.734398416 +0000 UTC))"
2025-08-13T20:07:26.734448417+00:00 stderr F I0813 20:07:26.734439 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.734416206 +0000 UTC))"
2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734459 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.734445447 +0000 UTC))"
2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734505 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:07:26.734487998 +0000 UTC))"
2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734535 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:07:26.734523269 +0000 UTC))"
2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734570 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:07:26.7345425 +0000 UTC))"
2025-08-13T20:07:26.734931861+00:00 stderr F I0813 20:07:26.734901 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2464900357/tls.crt::/tmp/serving-cert-2464900357/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1755115644\" (2025-08-13 20:07:24 +0000 UTC to 2025-09-12 20:07:25 +0000 UTC (now=2025-08-13 20:07:26.734860739 +0000 UTC))"
2025-08-13T20:07:26.735531588+00:00 stderr F I0813 20:07:26.735199 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115646\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115646\" (2025-08-13 19:07:25 +0000 UTC to 2026-08-13 19:07:25 +0000 UTC (now=2025-08-13 20:07:26.735180778 +0000 UTC))"
2025-08-13T20:07:27.249549866+00:00 stderr F I0813 20:07:27.249414 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:07:27.328082557+00:00 stderr F I0813 20:07:27.327932 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart
2025-08-13T20:07:27.328258672+00:00 stderr F I0813 20:07:27.328237 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ...
2025-08-13T20:07:27.328386936+00:00 stderr F I0813 20:07:27.328372 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop
2025-08-13T20:07:27.329224680+00:00 stderr F I0813 20:07:27.329204 1 base_controller.go:73] Caches are synced for CheckEndpointsStop
2025-08-13T20:07:27.329332243+00:00 stderr F I0813 20:07:27.329318 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ...
2025-08-13T20:07:27.340871004+00:00 stderr F I0813 20:07:27.330220 1 base_controller.go:67] Waiting for caches to sync for check-endpoints
2025-08-13T20:07:27.349559893+00:00 stderr F I0813 20:07:27.349500 1 base_controller.go:73] Caches are synced for check-endpoints
2025-08-13T20:07:27.349665676+00:00 stderr F I0813 20:07:27.349644 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ...
2025-08-13T20:07:27.349838191+00:00 stderr F I0813 20:07:27.330947 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ...
2025-08-13T20:07:27.349909243+00:00 stderr F I0813 20:07:27.331010 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ...
2025-08-13T20:07:27.349959345+00:00 stderr F I0813 20:07:27.349941 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated
2025-08-13T20:07:27.350053037+00:00 stderr F I0813 20:07:27.348318 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:07:27.350318915+00:00 stderr F I0813 20:07:27.349376 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:38.535505534+00:00 stderr F I0813 20:09:38.533238 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:38.910861766+00:00 stderr F I0813 20:09:38.910681 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:48.325177351+00:00 stderr F I0813 20:09:48.324483 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:49.727563139+00:00 stderr F I0813 20:09:49.726230 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:55.459037956+00:00 stderr F I0813 20:09:55.458404 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:58.082039310+00:00 stderr F I0813 20:09:58.081099 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:42:36.316616768+00:00 stderr F I0813 20:42:36.315144 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.319874512+00:00 stderr F I0813 20:42:36.319513 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.320954413+00:00 stderr F I0813 20:42:36.320719 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.321202490+00:00 stderr F I0813 20:42:36.321099 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.323320731+00:00 stderr F I0813 20:42:36.322202 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.323320731+00:00 stderr F I0813 20:42:36.322520 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:37.888191757+00:00 stderr F I0813 20:42:37.887497 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller.
2025-08-13T20:42:37.888191757+00:00 stderr F I0813 20:42:37.888159 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated"
2025-08-13T20:42:37.888314061+00:00 stderr F I0813 20:42:37.888189 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"
2025-08-13T20:42:37.888514306+00:00 stderr F I0813 20:42:37.888377 1 base_controller.go:172] Shutting down check-endpoints ...
2025-08-13T20:42:37.888753863+00:00 stderr F I0813 20:42:37.888529 1 base_controller.go:172] Shutting down CheckEndpointsStop ...
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints/1.log
2025-12-13T00:13:18.931163291+00:00 stderr F W1213 00:13:18.931035 1 cmd.go:245] Using insecure, self-signed certificates
2025-12-13T00:13:18.931462931+00:00 stderr F I1213 00:13:18.931439 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765584798 cert, and key in /tmp/serving-cert-2994726760/serving-signer.crt, /tmp/serving-cert-2994726760/serving-signer.key
2025-12-13T00:13:19.801229067+00:00 stderr F I1213 00:13:19.800844 1 observer_polling.go:159] Starting file observer
2025-12-13T00:13:19.814101770+00:00 stderr F I1213 00:13:19.814037 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2
2025-12-13T00:13:19.815114684+00:00 stderr F I1213 00:13:19.815060 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-2994726760/tls.crt::/tmp/serving-cert-2994726760/tls.key"
2025-12-13T00:13:20.147015107+00:00 stderr F I1213 00:13:20.146971 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-12-13T00:13:20.148400093+00:00 stderr F I1213 00:13:20.148378 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400
2025-12-13T00:13:20.148438384+00:00 stderr F I1213 00:13:20.148428 1 maxinflight.go:145] "Initialized mutatingChan" len=200
2025-12-13T00:13:20.148469425+00:00 stderr F I1213 00:13:20.148460 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400
2025-12-13T00:13:20.148493196+00:00 stderr F I1213 00:13:20.148484 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200
2025-12-13T00:13:20.151810867+00:00 stderr F I1213 00:13:20.151790 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-12-13T00:13:20.151857619+00:00 stderr F W1213 00:13:20.151847 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-12-13T00:13:20.151881800+00:00 stderr F W1213 00:13:20.151873 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-12-13T00:13:20.155013956+00:00 stderr F I1213 00:13:20.154990 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
2025-12-13T00:13:20.162182626+00:00 stderr F I1213 00:13:20.161454 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-12-13T00:13:20.162182626+00:00 stderr F I1213 00:13:20.161486 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-12-13T00:13:20.162182626+00:00 stderr F I1213 00:13:20.161513 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-12-13T00:13:20.162182626+00:00 stderr F I1213 00:13:20.161521 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-12-13T00:13:20.162182626+00:00 stderr F I1213 00:13:20.161532 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-12-13T00:13:20.162182626+00:00 stderr F I1213 00:13:20.161537 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-12-13T00:13:20.162182626+00:00 stderr F I1213 00:13:20.161948 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2994726760/tls.crt::/tmp/serving-cert-2994726760/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1765584798\" (2025-12-13 00:13:18 +0000 UTC to 2026-01-12 00:13:19 +0000 UTC (now=2025-12-13 00:13:20.161888186 +0000 UTC))"
2025-12-13T00:13:20.162269369+00:00 stderr F I1213 00:13:20.162233 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsTimeToStart
2025-12-13T00:13:20.162439005+00:00 stderr F I1213 00:13:20.162316 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584800\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584799\" (2025-12-12 23:13:19 +0000 UTC to 2026-12-12 23:13:19 +0000 UTC (now=2025-12-13 00:13:20.162285509 +0000 UTC))"
2025-12-13T00:13:20.162439005+00:00 stderr F I1213 00:13:20.162354 1 secure_serving.go:213] Serving securely on [::]:17698
2025-12-13T00:13:20.162439005+00:00 stderr F I1213 00:13:20.162377 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated
2025-12-13T00:13:20.162439005+00:00 stderr F I1213 00:13:20.162398 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-2994726760/tls.crt::/tmp/serving-cert-2994726760/tls.key"
2025-12-13T00:13:20.162481966+00:00 stderr F I1213 00:13:20.162465 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-12-13T00:13:20.163831902+00:00 stderr F I1213 00:13:20.163770 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:13:20.164287977+00:00 stderr F I1213 00:13:20.164264 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:13:20.164660149+00:00 stderr F I1213 00:13:20.164627 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264017 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264133 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264154 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264487 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:13:20.264459783 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264509 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:13:20.264496244 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264527 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:20.264513585 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264542 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:20.264531026 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264559 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:20.264547126 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264581 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:20.264564157 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264600 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:20.264585837 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264618 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:20.264604528 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264636 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:13:20.264624709 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264651 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:13:20.264640999 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.264926 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2994726760/tls.crt::/tmp/serving-cert-2994726760/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1765584798\" (2025-12-13 00:13:18 +0000 UTC to 2026-01-12 00:13:19 +0000 UTC (now=2025-12-13 00:13:20.264908598 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.265189 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584800\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584799\" (2025-12-12 23:13:19 +0000 UTC to 2026-12-12 23:13:19 +0000 UTC (now=2025-12-13 00:13:20.265175707 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.265343 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:13:20.265330412 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.265364 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:13:20.265353523 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.265380 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:20.265368163 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.265395 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:20.265384114 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr F I1213 00:13:20.265411 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:20.265399284 +0000 UTC))"
2025-12-13T00:13:20.266991058+00:00 stderr P I1213 00:13:20.265427 1 tlsconfig.go:178] "Loaded client CA" index=5 certName=
2025-12-13T00:13:20.267059361+00:00 stderr F "client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:20.265415685 +0000 UTC))"
2025-12-13T00:13:20.267059361+00:00 stderr F I1213 00:13:20.265448 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:20.265431136 +0000 UTC))"
2025-12-13T00:13:20.267059361+00:00 stderr F I1213 00:13:20.265466 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:20.265452116 +0000 UTC))"
2025-12-13T00:13:20.267059361+00:00 stderr F I1213 00:13:20.265481 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:13:20.265469607 +0000 UTC))"
2025-12-13T00:13:20.275983700+00:00 stderr F I1213 00:13:20.273007 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:13:20.265491208 +0000 UTC))"
2025-12-13T00:13:20.275983700+00:00 stderr F I1213 00:13:20.273059 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:20.273044872 +0000 UTC))"
2025-12-13T00:13:20.275983700+00:00 stderr F I1213 00:13:20.273341 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2994726760/tls.crt::/tmp/serving-cert-2994726760/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1765584798\" (2025-12-13 00:13:18 +0000 UTC to 2026-01-12 00:13:19 +0000 UTC (now=2025-12-13 00:13:20.273328351 +0000 UTC))"
2025-12-13T00:13:20.275983700+00:00 stderr F I1213 00:13:20.273587 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584800\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584799\" (2025-12-12 23:13:19 +0000 UTC to 2026-12-12 23:13:19 +0000 UTC (now=2025-12-13 00:13:20.273573129 +0000 UTC))"
2025-12-13T00:13:20.530604206+00:00 stderr F I1213 00:13:20.530278 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:13:20.563316195+00:00 stderr F I1213 00:13:20.563241 1 base_controller.go:73] Caches are synced for CheckEndpointsTimeToStart
2025-12-13T00:13:20.563316195+00:00 stderr F I1213 00:13:20.563280 1 base_controller.go:110] Starting #1 worker of CheckEndpointsTimeToStart controller ...
2025-12-13T00:13:20.563392377+00:00 stderr F I1213 00:13:20.563366 1 base_controller.go:67] Waiting for caches to sync for CheckEndpointsStop 2025-12-13T00:13:20.563392377+00:00 stderr F I1213 00:13:20.563380 1 base_controller.go:73] Caches are synced for CheckEndpointsStop 2025-12-13T00:13:20.563392377+00:00 stderr F I1213 00:13:20.563385 1 base_controller.go:110] Starting #1 worker of CheckEndpointsStop controller ... 2025-12-13T00:13:20.563428219+00:00 stderr F I1213 00:13:20.563405 1 base_controller.go:172] Shutting down CheckEndpointsTimeToStart ... 2025-12-13T00:13:20.563462380+00:00 stderr F I1213 00:13:20.563443 1 base_controller.go:67] Waiting for caches to sync for check-endpoints 2025-12-13T00:13:20.563474040+00:00 stderr F I1213 00:13:20.563468 1 base_controller.go:114] Shutting down worker of CheckEndpointsTimeToStart controller ... 2025-12-13T00:13:20.563482170+00:00 stderr F I1213 00:13:20.563475 1 base_controller.go:104] All CheckEndpointsTimeToStart workers have been terminated 2025-12-13T00:13:20.575774974+00:00 stderr F I1213 00:13:20.575715 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:13:20.576041233+00:00 stderr F I1213 00:13:20.575921 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:13:20.664151993+00:00 stderr F I1213 00:13:20.664094 1 base_controller.go:73] Caches are synced for check-endpoints 2025-12-13T00:13:20.664151993+00:00 stderr F I1213 00:13:20.664120 1 base_controller.go:110] Starting #1 worker of check-endpoints controller ... 
2025-12-13T00:19:37.564181419+00:00 stderr F I1213 00:19:37.563721 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.563674284 +0000 UTC))" 2025-12-13T00:19:37.564280562+00:00 stderr F I1213 00:19:37.564268 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.564250141 +0000 UTC))" 2025-12-13T00:19:37.564337273+00:00 stderr F I1213 00:19:37.564326 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.564297082 +0000 UTC))" 2025-12-13T00:19:37.564393885+00:00 stderr F I1213 00:19:37.564382 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.564359024 +0000 UTC))" 2025-12-13T00:19:37.564451446+00:00 
stderr F I1213 00:19:37.564440 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.564408975 +0000 UTC))" 2025-12-13T00:19:37.564499038+00:00 stderr F I1213 00:19:37.564488 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.564474177 +0000 UTC))" 2025-12-13T00:19:37.564546319+00:00 stderr F I1213 00:19:37.564525 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.564512848 +0000 UTC))" 2025-12-13T00:19:37.564593730+00:00 stderr F I1213 00:19:37.564583 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.564566649 +0000 UTC))" 
2025-12-13T00:19:37.564655032+00:00 stderr F I1213 00:19:37.564642 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.564617251 +0000 UTC))" 2025-12-13T00:19:37.564714723+00:00 stderr F I1213 00:19:37.564702 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.564683743 +0000 UTC))" 2025-12-13T00:19:37.564769795+00:00 stderr F I1213 00:19:37.564759 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.564732214 +0000 UTC))" 2025-12-13T00:19:37.564815736+00:00 stderr F I1213 00:19:37.564805 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.564791756 +0000 
UTC))" 2025-12-13T00:19:37.569498796+00:00 stderr F I1213 00:19:37.566197 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-2994726760/tls.crt::/tmp/serving-cert-2994726760/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"check-endpoints-signer@1765584798\" (2025-12-13 00:13:18 +0000 UTC to 2026-01-12 00:13:19 +0000 UTC (now=2025-12-13 00:19:37.565159186 +0000 UTC))" 2025-12-13T00:19:37.569498796+00:00 stderr F I1213 00:19:37.567146 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584800\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584799\" (2025-12-12 23:13:19 +0000 UTC to 2026-12-12 23:13:19 +0000 UTC (now=2025-12-13 00:19:37.56710913 +0000 UTC))" 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/0.log
2025-08-13T19:59:19.965638478+00:00 stderr F time="2025-08-13T19:59:19Z" level=info msg="Using in-cluster kube client config" 2025-08-13T19:59:20.758915041+00:00 stderr F time="2025-08-13T19:59:20Z" level=info msg="Defaulting Interval to '12h0m0s'" 2025-08-13T19:59:21.874938483+00:00 stderr F I0813 19:59:21.865254 1 handler.go:275] Adding GroupVersion packages.operators.coreos.com v1 to ResourceManager 2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="operator ready" 2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="starting informers..." 2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="informers started" 2025-08-13T19:59:22.045221487+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="waiting for caches to sync..." 2025-08-13T19:59:22.456175441+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="starting workers..." 
2025-08-13T19:59:22.478751865+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connecting to source" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T19:59:22.707297850+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connecting to source" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T19:59:22.802301208+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connecting to source" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T19:59:22.948340281+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connecting to source" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T19:59:23.487362766+00:00 stderr F I0813 19:59:23.487270 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:23.487461169+00:00 stderr F I0813 19:59:23.487443 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:23.617713662+00:00 stderr F I0813 19:59:23.616166 1 secure_serving.go:213] Serving securely on [::]:5443 2025-08-13T19:59:23.658459603+00:00 stderr F I0813 19:59:23.658385 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:23.658534185+00:00 stderr F I0813 19:59:23.658520 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:23.658735911+00:00 stderr F I0813 19:59:23.658704 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:23.658818703+00:00 
stderr F I0813 19:59:23.658762 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:23.658948727+00:00 stderr F I0813 19:59:23.658931 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.658987408+00:00 stderr F I0813 19:59:23.658970 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:23.660177232+00:00 stderr F I0813 19:59:23.659931 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:23.683340952+00:00 stderr F I0813 19:59:23.683297 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:23.683464446+00:00 stderr F I0813 19:59:23.660065 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key" 2025-08-13T19:59:23.697110585+00:00 stderr F I0813 19:59:23.693327 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:23.707916413+00:00 stderr F W0813 19:59:23.706281 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:50216->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:23.749929450+00:00 stderr F W0813 19:59:23.749230 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:59130->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:23.789565970+00:00 stderr F W0813 19:59:23.774492 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:34813->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:23.789565970+00:00 stderr F I0813 19:59:23.665072 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.789565970+00:00 stderr F I0813 19:59:23.788885 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:23.851459845+00:00 stderr F I0813 19:59:23.838251 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.859441 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.862289 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.862499 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.862529 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.872977 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.976643633+00:00 stderr F W0813 19:59:23.959545 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:46183->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.959998 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.976643633+00:00 stderr F E0813 19:59:23.960030 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.960080 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.960117 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:23.976643633+00:00 stderr F I0813 19:59:23.960260 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:23.977470177+00:00 stderr F E0813 19:59:23.977429 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.977557779+00:00 stderr F 
E0813 19:59:23.977533 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.978249119+00:00 stderr F E0813 19:59:23.978222 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.988976935+00:00 stderr F E0813 19:59:23.988881 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.996523760+00:00 stderr F E0813 19:59:23.996485 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:23.999168595+00:00 stderr F E0813 19:59:23.999078 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.003033765+00:00 stderr F E0813 19:59:24.002966 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.015609204+00:00 stderr F E0813 19:59:24.014282 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.033490984+00:00 stderr F E0813 19:59:24.012013 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.033653068+00:00 stderr F E0813 19:59:24.033622 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.044284161+00:00 stderr F E0813 19:59:24.043528 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.054992656+00:00 stderr F E0813 19:59:24.054324 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.054992656+00:00 stderr F E0813 19:59:24.054381 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.078767994+00:00 stderr F E0813 19:59:24.077021 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.097853998+00:00 stderr F E0813 19:59:24.095203 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.125724753+00:00 stderr F E0813 19:59:24.124005 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.136417948+00:00 stderr F 
E0813 19:59:24.136044 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.160041621+00:00 stderr F E0813 19:59:24.157966 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.176171051+00:00 stderr F E0813 19:59:24.175957 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.288501183+00:00 stderr F E0813 19:59:24.284585 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.300258988+00:00 stderr F E0813 19:59:24.300092 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.322673827+00:00 stderr F E0813 19:59:24.321370 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.338433826+00:00 stderr F E0813 19:59:24.338312 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.657089129+00:00 stderr F E0813 19:59:24.642557 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.657089129+00:00 stderr F E0813 19:59:24.643580 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.657089129+00:00 stderr F E0813 19:59:24.646489 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:24.682534534+00:00 stderr F E0813 19:59:24.678028 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:25.291438891+00:00 stderr F E0813 19:59:25.286624 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:25.291438891+00:00 stderr F E0813 19:59:25.287059 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:25.291438891+00:00 stderr F E0813 19:59:25.287505 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:25.301457677+00:00 stderr F W0813 19:59:25.298317 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:55001->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:25.318372169+00:00 stderr F W0813 19:59:25.318170 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:54845->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:25.318478232+00:00 stderr F W0813 19:59:25.318405 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49886->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:25.318491072+00:00 stderr F W0813 19:59:25.318472 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:60072->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:25.318637136+00:00 stderr F E0813 19:59:25.318603 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:26.249019077+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace
2025-08-13T19:59:26.256263304+00:00 stderr F time="2025-08-13T19:59:26Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:55001->10.217.4.10:53: read: connection refused\"" source="{community-operators openshift-marketplace}"
2025-08-13T19:59:26.295117731+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace
2025-08-13T19:59:26.295117731+00:00 stderr F time="2025-08-13T19:59:26Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:60072->10.217.4.10:53: read: connection refused\"" source="{certified-operators openshift-marketplace}"
2025-08-13T19:59:26.583654216+00:00 stderr F E0813 19:59:26.582010 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:26.583654216+00:00 stderr F E0813 19:59:26.582431 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:26.584416318+00:00 stderr F E0813 19:59:26.584012 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:26.605873800+00:00 stderr F E0813 19:59:26.605652 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:27.095678202+00:00 stderr F E0813 19:59:27.094975 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.123041232+00:00 stderr F E0813 19:59:27.122181 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.123041232+00:00 stderr F E0813 19:59:27.122689 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.123218527+00:00 stderr F E0813 19:59:27.123107 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.176560857+00:00 stderr F E0813 19:59:27.126044 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.247275103+00:00 stderr F W0813 19:59:27.245344 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43093->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:27.489334133+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace
2025-08-13T19:59:27.489534669+00:00 stderr F time="2025-08-13T19:59:27Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49886->10.217.4.10:53: read: connection refused\"" source="{redhat-marketplace openshift-marketplace}"
2025-08-13T19:59:27.611118185+00:00 stderr F W0813 19:59:27.610462 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38532->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:27.611508286+00:00 stderr F W0813 19:59:27.611349 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38374->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:27.666886914+00:00 stderr F W0813 19:59:27.662119 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:53325->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:27.722828369+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace
2025-08-13T19:59:27.722999974+00:00 stderr F time="2025-08-13T19:59:27Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38532->10.217.4.10:53: read: connection refused\"" source="{redhat-operators openshift-marketplace}"
2025-08-13T19:59:27.892387802+00:00 stderr F E0813 19:59:27.892315 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.894210574+00:00 stderr F E0813 19:59:27.893518 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.894714329+00:00 stderr F E0813 19:59:27.893532 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.895250094+00:00 stderr F E0813 19:59:27.893611 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.896173530+00:00 stderr F E0813 19:59:27.893652 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.939682790+00:00 stderr F E0813 19:59:27.907594 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.975969835+00:00 stderr F E0813 19:59:27.975907 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:27.979482245+00:00 stderr F E0813 19:59:27.979448 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.285637351+00:00 stderr F E0813 19:59:28.149888 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.305910959+00:00 stderr F E0813 19:59:28.153221 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.305910959+00:00 stderr F E0813 19:59:28.154254 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.321043260+00:00 stderr F E0813 19:59:28.155060 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.514887246+00:00 stderr F E0813 19:59:28.509691 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.514887246+00:00 stderr F E0813 19:59:28.510964 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.514887246+00:00 stderr F E0813 19:59:28.511241 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.538281293+00:00 stderr F E0813 19:59:28.538083 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.539471637+00:00 stderr F E0813 19:59:28.538879 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.580161727+00:00 stderr F E0813 19:59:28.576581 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.586629151+00:00 stderr F E0813 19:59:28.583976 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.586629151+00:00 stderr F E0813 19:59:28.586318 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.596542443+00:00 stderr F E0813 19:59:28.595931 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.617760498+00:00 stderr F E0813 19:59:28.615621 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.622035180+00:00 stderr F E0813 19:59:28.620897 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.622035180+00:00 stderr F E0813 19:59:28.621387 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.622035180+00:00 stderr F E0813 19:59:28.621475 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.627189007+00:00 stderr F E0813 19:59:28.626373 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.627189007+00:00 stderr F E0813 19:59:28.626969 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.702275718+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace
2025-08-13T19:59:28.702328449+00:00 stderr F time="2025-08-13T19:59:28Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43093->10.217.4.10:53: read: connection refused\"" source="{certified-operators openshift-marketplace}"
2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.860255 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.860620 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.861078 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.861521 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:28.862011231+00:00 stderr F E0813 19:59:28.861751 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:29.142703442+00:00 stderr F E0813 19:59:29.142638 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.153274173+00:00 stderr F E0813 19:59:29.153156 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.155073075+00:00 stderr F E0813 19:59:29.153425 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.166039687+00:00 stderr F E0813 19:59:29.165990 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:29.258589855+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace
2025-08-13T19:59:29.258742140+00:00 stderr F time="2025-08-13T19:59:29Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:38374->10.217.4.10:53: read: connection refused\"" source="{community-operators openshift-marketplace}"
2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.419873 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.420250 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.420442 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.420622 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:29.422345723+00:00 stderr F E0813 19:59:29.420927 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:29.626727279+00:00 stderr F W0813 19:59:29.625254 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49867->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:29.928732508+00:00 stderr F W0813 19:59:29.916278 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:52709->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:30.160441743+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace
2025-08-13T19:59:30.168229755+00:00 stderr F time="2025-08-13T19:59:30Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:52709->10.217.4.10:53: read: connection refused\"" source="{redhat-marketplace openshift-marketplace}"
2025-08-13T19:59:30.201098882+00:00 stderr F E0813 19:59:30.201004 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:30.205076685+00:00 stderr F E0813 19:59:30.201367 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:30.205076685+00:00 stderr F E0813 19:59:30.201700 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:30.261903495+00:00 stderr F E0813 19:59:30.242444 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:30.275320018+00:00 stderr F E0813 19:59:30.275283 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:30.571214002+00:00 stderr F W0813 19:59:30.571157 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:40819->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:30.798852981+00:00 stderr F W0813 19:59:30.796635 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43500->10.217.4.10:53: read: connection refused"
2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.705416 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.706088 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.706282 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.706505 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:31.732417732+00:00 stderr F E0813 19:59:31.724590 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:33.186124520+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace
2025-08-13T19:59:33.186291375+00:00 stderr F time="2025-08-13T19:59:33Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43500->10.217.4.10:53: read: connection refused\"" source="{redhat-operators openshift-marketplace}"
2025-08-13T19:59:33.731313711+00:00 stderr F W0813 19:59:33.731250 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused"
2025-08-13T19:59:34.232634982+00:00 stderr F W0813 19:59:34.231206 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused"
2025-08-13T19:59:34.280269209+00:00 stderr F E0813 19:59:34.266200 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:34.283483771+00:00 stderr F E0813 19:59:34.281889 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:34.305643193+00:00 stderr F E0813 19:59:34.305574 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:34.306139547+00:00 stderr F E0813 19:59:34.306120 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:34.545033677+00:00 stderr F E0813 19:59:34.522194 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:34.545033677+00:00 stderr F E0813 19:59:34.522523 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:34.674225549+00:00 stderr F E0813 19:59:34.673123 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:34.706747196+00:00 stderr F E0813 19:59:34.705343 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:34.754865488+00:00 stderr F E0813 19:59:34.669727 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:34.850960187+00:00 stderr F W0813 19:59:34.850308 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused"
2025-08-13T19:59:35.058219095+00:00 stderr F W0813 19:59:35.056680 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused"
2025-08-13T19:59:39.987047992+00:00 stderr F E0813 19:59:39.985498 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:40.039932499+00:00 stderr F E0813 19:59:40.039605 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:40.100807584+00:00 stderr F E0813 19:59:40.100483 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:40.260306171+00:00 stderr F E0813 19:59:40.243282 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:40.283765190+00:00 stderr F E0813 19:59:40.260700 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:40.700961392+00:00 stderr F W0813 19:59:40.680350 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused"
2025-08-13T19:59:40.788635771+00:00 stderr F W0813 19:59:40.788513 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused"
2025-08-13T19:59:41.074215822+00:00 stderr F W0813 19:59:41.073168 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused"
2025-08-13T19:59:41.319629407+00:00 stderr F W0813 19:59:41.315176 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused"
2025-08-13T19:59:44.540736135+00:00 stderr F E0813 19:59:44.524881 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.540736135+00:00 stderr F E0813 19:59:44.540258 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.633273543+00:00 stderr F E0813 19:59:44.633111 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.633371135+00:00 stderr F E0813 19:59:44.633328 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:49.932636194+00:00 stderr F W0813 19:59:49.931997 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }.
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T19:59:50.531744722+00:00 stderr F E0813 19:59:50.530292 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:50.871163378+00:00 stderr F E0813 19:59:50.844545 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:50.871163378+00:00 stderr F E0813 19:59:50.845564 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:50.871163378+00:00 stderr F E0813 19:59:50.852372 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:50.871571450+00:00 stderr F E0813 19:59:50.871534 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:51.326018234+00:00 stderr F W0813 19:59:51.325587 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T19:59:51.450971396+00:00 stderr F W0813 19:59:51.450692 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T19:59:52.606671350+00:00 stderr F W0813 19:59:52.605750 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:00:03.507188075+00:00 stderr F W0813 20:00:03.505514 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:00:06.623020709+00:00 stderr F W0813 20:00:06.622177 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:00:06.623415940+00:00 stderr F W0813 20:00:06.623258 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:00:07.675007495+00:00 stderr F W0813 20:00:07.671547 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:00:27.945990342+00:00 stderr F W0813 20:00:27.941701 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:00:29.988055289+00:00 stderr F W0813 20:00:29.985630 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:00:31.935746376+00:00 stderr F W0813 20:00:31.935224 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:00:35.857438308+00:00 stderr F W0813 20:00:35.855636 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:01:10.363073166+00:00 stderr F W0813 20:01:10.361738 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:01:11.010767704+00:00 stderr F W0813 20:01:11.009529 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:01:11.878877308+00:00 stderr F W0813 20:01:11.876332 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:01:21.349283955+00:00 stderr F W0813 20:01:21.348270 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:02:22.615119270+00:00 stderr F W0813 20:02:22.614245 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:02:24.747603143+00:00 stderr F W0813 20:02:24.746366 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:02:26.988925622+00:00 stderr F W0813 20:02:26.988481 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:02:30.453457865+00:00 stderr F W0813 20:02:30.452679 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:03:52.874053825+00:00 stderr F W0813 20:03:52.871907 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:04:04.401934902+00:00 stderr F E0813 20:04:04.401068 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:04.402998952+00:00 stderr F E0813 20:04:04.402948 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.281572926+00:00 stderr F E0813 20:04:05.280372 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.281572926+00:00 stderr F E0813 20:04:05.281070 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.286935489+00:00 stderr F E0813 20:04:05.284345 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.286935489+00:00 stderr F E0813 20:04:05.284410 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:09.215685534+00:00 stderr F W0813 20:04:09.215602 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:04:29.629577865+00:00 stderr F W0813 20:04:29.629008 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:04:35.640297450+00:00 stderr F W0813 20:04:35.639529 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:05:05.288435794+00:00 stderr F E0813 20:05:05.287900 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.288496986+00:00 stderr F E0813 20:05:05.288432 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.321420159+00:00 stderr F E0813 20:05:05.321363 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.321516132+00:00 stderr F E0813 20:05:05.321498 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:33.199988341+00:00 
stderr F time="2025-08-13T20:06:33Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:06:33.200242358+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:06:46.091914232+00:00 stderr F time="2025-08-13T20:06:46Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:06:46.092101228+00:00 stderr F time="2025-08-13T20:06:46Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-08-13T20:06:46.092101228+00:00 stderr F time="2025-08-13T20:06:46Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:06:48.336406265+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 
2025-08-13T20:07:00.663275096+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:07:19.651681279+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:07:41.934622389+00:00 stderr F time="2025-08-13T20:07:41Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:07:50.348659746+00:00 stderr F time="2025-08-13T20:07:50Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:08:18.638742578+00:00 stderr F time="2025-08-13T20:08:18Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:08:42.737265511+00:00 stderr F E0813 20:08:42.735492 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.744188410+00:00 stderr F 
E0813 20:08:42.744107 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.621857893+00:00 stderr F E0813 20:08:43.618322 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.621857893+00:00 stderr F E0813 20:08:43.618410 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.625036794+00:00 stderr F E0813 20:08:43.624526 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.625036794+00:00 stderr F E0813 20:08:43.624621 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:40.151279360+00:00 stderr F time="2025-08-13T20:09:40Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:09:40.151279360+00:00 stderr F time="2025-08-13T20:09:40Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:09:49.496565397+00:00 stderr F time="2025-08-13T20:09:49Z" level=info msg="updating PackageManifest based on 
CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:09:52.875492813+00:00 stderr F time="2025-08-13T20:09:52Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:16:58.160638342+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:17:00.205634852+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:17:16.072748758+00:00 stderr F time="2025-08-13T20:17:16Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:17:30.179055164+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:27:05.651766544+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="updating PackageManifest 
based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:27:05.841924311+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:28:43.332268651+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:29:25.916234348+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:29:25.916953379+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:29:33.240381805+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:29:36.514472290+00:00 stderr F time="2025-08-13T20:29:36Z" level=info 
msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:29:44.206538782+00:00 stderr F time="2025-08-13T20:29:44Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:29:52.126891777+00:00 stderr F time="2025-08-13T20:29:52Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:37:48.225824045+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-08-13T20:38:36.094882524+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-08-13T20:41:21.481556744+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-08-13T20:42:26.028870400+00:00 stderr F time="2025-08-13T20:42:26Z" 
level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-08-13T20:42:39.339171660+00:00 stderr F W0813 20:42:39.338527 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:42:39.734393723+00:00 stderr F W0813 20:42:39.734342 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:42:39.951022478+00:00 stderr F W0813 20:42:39.950937 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:42:40.047724586+00:00 stderr F W0813 20:42:40.047677 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:42:40.344992617+00:00 stderr F W0813 20:42:40.344945 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:42:40.742693633+00:00 stderr F W0813 20:42:40.742601 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:42:40.959143163+00:00 stderr F W0813 20:42:40.959053 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:42:41.055167932+00:00 stderr F W0813 20:42:41.055001 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:42:41.670657416+00:00 stderr F W0813 20:42:41.670253 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-08-13T20:42:42.480053712+00:00 stderr F W0813 20:42:42.479949 1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-08-13T20:42:42.551026478+00:00 stderr F W0813 20:42:42.550906 1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-08-13T20:42:42.839825044+00:00 stderr F W0813 20:42:42.839695 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-08-13T20:42:44.057333295+00:00 stderr F I0813 20:42:44.056154 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:44.057333295+00:00 stderr F I0813 20:42:44.056440 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:44.057445078+00:00 stderr F I0813 20:42:44.057377 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:44.057554841+00:00 stderr F I0813 20:42:44.057509 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:44.057614363+00:00 stderr F I0813 20:42:44.057573 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:44.057673074+00:00 stderr F I0813 20:42:44.057635 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:44.058142168+00:00 stderr F I0813 20:42:44.057951 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key" 2025-08-13T20:42:44.058402505+00:00 stderr F I0813 20:42:44.058348 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:44.058523589+00:00 stderr F I0813 20:42:44.058456 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:44.059177148+00:00 stderr F I0813 20:42:44.059151 1 secure_serving.go:258] Stopped listening on [::]:5443 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver/1.log
2025-12-13T00:13:16.700006599+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="Using in-cluster kube client config"
2025-12-13T00:13:16.716264905+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="Defaulting Interval to '12h0m0s'"
2025-12-13T00:13:16.782674927+00:00 stderr F I1213 00:13:16.781797 1 handler.go:275] Adding GroupVersion packages.operators.coreos.com v1 to ResourceManager
2025-12-13T00:13:16.804437299+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3"
2025-12-13T00:13:16.804437299+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="operator ready"
2025-12-13T00:13:16.804437299+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="starting informers..."
2025-12-13T00:13:16.804437299+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="informers started"
2025-12-13T00:13:16.804437299+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="waiting for caches to sync..."
2025-12-13T00:13:16.907072617+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="starting workers..."
2025-12-13T00:13:16.909183748+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="connecting to source" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-12-13T00:13:16.911008670+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="connecting to source" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-12-13T00:13:16.911008670+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="connecting to source" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-12-13T00:13:16.911113363+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="connecting to source" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-12-13T00:13:16.942004501+00:00 stderr F W1213 00:13:16.938778 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49276->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:16.942004501+00:00 stderr F W1213 00:13:16.938796 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:34728->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:16.942004501+00:00 stderr F W1213 00:13:16.939707 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43405->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:16.944601589+00:00 stderr F W1213 00:13:16.943360 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:36500->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:17.104515112+00:00 stderr F I1213 00:13:17.104236 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:17.104515112+00:00 stderr F I1213 00:13:17.104416 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:17.104515112+00:00 stderr F I1213 00:13:17.104462 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:17.104515112+00:00 stderr F I1213 00:13:17.104473 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:17.104515112+00:00 stderr F I1213 00:13:17.104486 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:17.104515112+00:00 stderr F I1213 00:13:17.104492 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:17.104816222+00:00 stderr F I1213 00:13:17.104798 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:17.104816222+00:00 stderr F I1213 00:13:17.104809 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:17.105118682+00:00 stderr F I1213 00:13:17.105100 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:17.105118682+00:00 stderr F I1213 00:13:17.105111 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:17.105131352+00:00 stderr F I1213 00:13:17.105121 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:17.105131352+00:00 stderr F I1213 00:13:17.105126 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:17.106135696+00:00 stderr F I1213 00:13:17.105629 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key" 2025-12-13T00:13:17.112352494+00:00 stderr F I1213 00:13:17.111979 1 secure_serving.go:213] Serving securely on [::]:5443 2025-12-13T00:13:17.112352494+00:00 stderr F I1213 00:13:17.112085 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:17.209673165+00:00 stderr F I1213 00:13:17.207184 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:17.209673165+00:00 stderr F I1213 00:13:17.207248 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:13:17.209673165+00:00 stderr F I1213 00:13:17.207351 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:17.209673165+00:00 stderr F I1213 00:13:17.207382 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:17.209673165+00:00 stderr F I1213 00:13:17.207395 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:13:17.209673165+00:00 stderr F I1213 00:13:17.207427 1 shared_informer.go:318] 
Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:17.264108375+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-12-13T00:13:17.264108375+00:00 stderr F time="2025-12-13T00:13:17Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:43405->10.217.4.10:53: read: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-12-13T00:13:18.011849070+00:00 stderr F W1213 00:13:18.011218 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:37226->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:18.014994576+00:00 stderr F W1213 00:13:18.013532 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:55459->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:18.018095990+00:00 stderr F W1213 00:13:18.017323 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:39455->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:18.021946059+00:00 stderr F W1213 00:13:18.020710 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:37509->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:18.057070430+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-12-13T00:13:18.057070430+00:00 stderr F time="2025-12-13T00:13:18Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:37509->10.217.4.10:53: read: connection refused\"" source="{redhat-operators openshift-marketplace}" 2025-12-13T00:13:18.651323098+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-12-13T00:13:18.651440152+00:00 stderr F time="2025-12-13T00:13:18Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:37226->10.217.4.10:53: read: connection refused\"" source="{community-operators openshift-marketplace}" 2025-12-13T00:13:19.253063258+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" 
address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-12-13T00:13:19.253205823+00:00 stderr F time="2025-12-13T00:13:19Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:39455->10.217.4.10:53: read: connection refused\"" source="{certified-operators openshift-marketplace}" 2025-12-13T00:13:19.354643021+00:00 stderr F W1213 00:13:19.354582 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:52965->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:19.488963405+00:00 stderr F W1213 00:13:19.488335 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:49429->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:19.753666249+00:00 stderr F W1213 00:13:19.753602 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:46507->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:19.850886906+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-12-13T00:13:19.850886906+00:00 stderr F time="2025-12-13T00:13:19Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:55459->10.217.4.10:53: read: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-12-13T00:13:19.876979852+00:00 stderr F W1213 00:13:19.873540 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:57412->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:20.451495687+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-12-13T00:13:20.451610551+00:00 stderr F time="2025-12-13T00:13:20Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:52965->10.217.4.10:53: read: connection refused\"" source="{redhat-operators openshift-marketplace}" 2025-12-13T00:13:21.828388264+00:00 stderr F W1213 00:13:21.828330 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:54000->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:22.111118255+00:00 stderr F W1213 00:13:22.111047 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup certified-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:59286->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:22.415538394+00:00 stderr F W1213 00:13:22.415483 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:44406->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:22.745568593+00:00 stderr F W1213 00:13:22.744747 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup redhat-marketplace.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:45865->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:25.366272196+00:00 stderr F W1213 00:13:25.366212 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup community-operators.openshift-marketplace.svc on 10.217.4.10:53: read udp 10.217.0.43:53314->10.217.4.10:53: read: connection refused" 2025-12-13T00:13:26.319974972+00:00 stderr F W1213 00:13:26.318355 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-12-13T00:13:26.319974972+00:00 stderr F W1213 00:13:26.318404 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-12-13T00:13:27.180389043+00:00 stderr F W1213 00:13:27.180322 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-12-13T00:13:32.421741475+00:00 stderr F W1213 00:13:32.421170 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-12-13T00:13:32.897667727+00:00 stderr F W1213 00:13:32.897606 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-12-13T00:13:33.299851531+00:00 stderr F W1213 00:13:33.299749 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-12-13T00:13:33.648917660+00:00 stderr F W1213 00:13:33.647860 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-12-13T00:13:44.209619533+00:00 stderr F W1213 00:13:44.209058 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-12-13T00:13:44.502193105+00:00 stderr F W1213 00:13:44.502121 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-12-13T00:13:44.901405909+00:00 stderr F W1213 00:13:44.901026 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-12-13T00:13:45.369685204+00:00 stderr F W1213 00:13:45.369265 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-12-13T00:13:58.482434661+00:00 stderr F W1213 00:13:58.481914 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-12-13T00:13:58.914250791+00:00 stderr F W1213 00:13:58.914193 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-12-13T00:14:01.532800851+00:00 stderr F W1213 00:14:01.532730 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-12-13T00:14:02.998270654+00:00 stderr F W1213 00:14:02.998212 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-12-13T00:14:21.158174897+00:00 stderr F W1213 00:14:21.158077 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-12-13T00:14:25.131728219+00:00 stderr F W1213 00:14:25.131665 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-12-13T00:14:30.841634748+00:00 stderr F W1213 00:14:30.841577 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "certified-operators.openshift-marketplace.svc:50051", ServerName: "certified-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.249:50051: connect: connection refused" 2025-12-13T00:14:37.731301241+00:00 stderr F W1213 00:14:37.730375 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-12-13T00:14:38.735142747+00:00 stderr F W1213 00:14:38.735076 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-12-13T00:14:40.197081117+00:00 stderr F W1213 00:14:40.197032 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-12-13T00:14:40.992195581+00:00 stderr F time="2025-12-13T00:14:40Z" level=warning msg="error getting package stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-12-13T00:14:43.468149554+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-12-13T00:14:43.468149554+00:00 stderr F time="2025-12-13T00:14:43Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused\"" source="{community-operators openshift-marketplace}" 2025-12-13T00:14:44.154119337+00:00 stderr F W1213 00:14:44.154051 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-12-13T00:14:44.464827481+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-12-13T00:14:44.464827481+00:00 stderr F time="2025-12-13T00:14:44Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused\"" source="{redhat-operators openshift-marketplace}" 2025-12-13T00:14:46.061065501+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-12-13T00:14:46.061065501+00:00 stderr F time="2025-12-13T00:14:46Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-12-13T00:14:47.548147050+00:00 stderr F W1213 00:14:47.547804 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-12-13T00:14:54.341649241+00:00 stderr F W1213 00:14:54.341485 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-12-13T00:14:59.080012201+00:00 stderr F W1213 00:14:59.079535 1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-operators.openshift-marketplace.svc:50051", ServerName: "redhat-operators.openshift-marketplace.svc:50051", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused" 2025-12-13T00:15:06.779815929+00:00 stderr F W1213 00:15:06.779751 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "redhat-marketplace.openshift-marketplace.svc:50051", ServerName: "redhat-marketplace.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused" 2025-12-13T00:15:11.550397077+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-operators openshift-marketplace}" action="sync catalogsource" address="redhat-operators.openshift-marketplace.svc:50051" name=redhat-operators namespace=openshift-marketplace 2025-12-13T00:15:11.550534001+00:00 stderr F time="2025-12-13T00:15:11Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.5.52:50051: connect: connection refused\"" source="{redhat-operators openshift-marketplace}" 2025-12-13T00:15:12.500675644+00:00 stderr F W1213 00:15:12.500610 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "community-operators.openshift-marketplace.svc:50051", ServerName: "community-operators.openshift-marketplace.svc:50051", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused" 2025-12-13T00:15:16.270657654+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-12-13T00:15:16.270710325+00:00 stderr F time="2025-12-13T00:15:16Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused\"" source="{community-operators openshift-marketplace}" 2025-12-13T00:15:16.544752117+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-12-13T00:15:16.544752117+00:00 stderr F time="2025-12-13T00:15:16Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.65:50051: connect: connection refused\"" source="{redhat-marketplace openshift-marketplace}" 2025-12-13T00:15:20.088095195+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="updating PackageManifest based on CatalogSource changes: {certified-operators openshift-marketplace}" action="sync catalogsource" address="certified-operators.openshift-marketplace.svc:50051" name=certified-operators namespace=openshift-marketplace 2025-12-13T00:15:31.553537801+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="updating PackageManifest based on CatalogSource changes: 
{community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-12-13T00:15:31.553537801+00:00 stderr F time="2025-12-13T00:15:31Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused\"" source="{community-operators openshift-marketplace}" 2025-12-13T00:16:12.051890921+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-12-13T00:16:12.052007954+00:00 stderr F time="2025-12-13T00:16:12Z" level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused\"" source="{community-operators openshift-marketplace}" 2025-12-13T00:16:15.278495301+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="updating PackageManifest based on CatalogSource changes: {redhat-marketplace openshift-marketplace}" action="sync catalogsource" address="redhat-marketplace.openshift-marketplace.svc:50051" name=redhat-marketplace namespace=openshift-marketplace 2025-12-13T00:16:17.385089785+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="updating PackageManifest based on CatalogSource changes: {community-operators openshift-marketplace}" action="sync catalogsource" address="community-operators.openshift-marketplace.svc:50051" name=community-operators namespace=openshift-marketplace 2025-12-13T00:16:17.385089785+00:00 stderr F time="2025-12-13T00:16:17Z" 
level=warning msg="error getting bundle stream" action="refresh cache" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.217.4.229:50051: connect: connection refused\"" source="{community-operators openshift-marketplace}" 2025-12-13T00:20:53.506006590+00:00 stderr F E1213 00:20:53.505539 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.506006590+00:00 stderr F E1213 00:20:53.505992 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.470315272+00:00 stderr F E1213 00:20:54.470251 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.470315272+00:00 stderr F E1213 00:20:54.470294 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.486750935+00:00 stderr F E1213 00:20:54.486699 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.486775366+00:00 stderr F E1213 00:20:54.486745 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000025100000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29426400-tlv26_e98bd1f0-fe98-48b9-82cb-86e924ade65d/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000755000175000017500000000000015117130647033032 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29426400-tlv26_e98bd1f0-fe98-48b9-82cb-86e924ade65d/image-pruner/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000755000175000017500000000000015117130654033030 5ustar zuulzuul././@LongLink0000644000000000000000000000027300000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29426400-tlv26_e98bd1f0-fe98-48b9-82cb-86e924ade65d/image-pruner/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-regist0000644000175000017500000000263515117130647033042 0ustar zuulzuul2025-12-13T00:14:24.633056040+00:00 stderr F Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-12-13T00:14:24.896219965+00:00 stderr F error: failed to ping registry https://image-registry.openshift-image-registry.svc:5000: Get "https://image-registry.openshift-image-registry.svc:5000/": dial tcp 10.217.4.41:5000: connect: connection refused 2025-12-13T00:14:24.908560351+00:00 stderr F attempt #1 has failed (exit code 1), going to make another attempt... 
2025-12-13T00:14:55.475751877+00:00 stderr F Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-12-13T00:14:55.691289019+00:00 stderr F error: failed to ping registry https://image-registry.openshift-image-registry.svc:5000: Get "https://image-registry.openshift-image-registry.svc:5000/": dial tcp 10.217.4.41:5000: connect: connection refused 2025-12-13T00:14:55.706298202+00:00 stderr F attempt #2 has failed (exit code 1), going to make another attempt... 2025-12-13T00:15:56.026411603+00:00 stderr F Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-12-13T00:15:56.305063491+00:00 stderr F I1213 00:15:56.304990 40 prune.go:348] Creating image pruner with keepYoungerThan=1h0m0s, keepTagRevisions=3, pruneOverSizeLimit=, allImages=true 2025-12-13T00:15:56.481597956+00:00 stdout F Summary: deleted 0 objects ././@LongLink0000644000000000000000000000023700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_cons0000755000175000017500000000000015117130646033140 5ustar zuulzuul././@LongLink0000644000000000000000000000024700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_cons0000755000175000017500000000000015117130654033137 5ustar zuulzuul././@LongLink0000644000000000000000000000025400000000000011604 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_cons0000644000175000017500000000202215117130646033136 0ustar zuulzuul2025-08-13T20:05:30.075912298+00:00 stderr F W0813 20:05:30.075190 1 authoptions.go:112] Flag inactivity-timeout is set to less then 300 seconds and will be ignored! 2025-08-13T20:05:30.788889885+00:00 stderr F I0813 20:05:30.788210 1 main.go:605] Binding to [::]:8443... 2025-08-13T20:05:30.788889885+00:00 stderr F I0813 20:05:30.788264 1 main.go:607] using TLS 2025-08-13T20:05:33.795639346+00:00 stderr F I0813 20:05:33.789942 1 metrics.go:128] serverconfig.Metrics: Update ConsolePlugin metrics... 2025-08-13T20:05:33.904070491+00:00 stderr F I0813 20:05:33.895041 1 metrics.go:138] serverconfig.Metrics: Update ConsolePlugin metrics: &map[] (took 103.776412ms) 2025-08-13T20:05:35.792924520+00:00 stderr F I0813 20:05:35.791269 1 metrics.go:80] usage.Metrics: Count console users... 
2025-08-13T20:05:36.280627718+00:00 stderr F I0813 20:05:36.280048 1 metrics.go:156] usage.Metrics: Update console users metrics: 0 kubeadmin, 0 cluster-admins, 0 developers, 0 unknown/errors (took 488.51125ms) ././@LongLink0000644000000000000000000000025400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_cons0000644000175000017500000000000015117130646033130 0ustar zuulzuul././@LongLink0000644000000000000000000000025400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_cons0000644000175000017500000000202215117130646033136 0ustar zuulzuul2025-12-13T00:13:16.623641943+00:00 stderr F W1213 00:13:16.621854 1 authoptions.go:112] Flag inactivity-timeout is set to less then 300 seconds and will be ignored! 2025-12-13T00:13:27.107416521+00:00 stderr F I1213 00:13:27.106260 1 main.go:605] Binding to [::]:8443... 2025-12-13T00:13:27.107416521+00:00 stderr F I1213 00:13:27.106685 1 main.go:607] using TLS 2025-12-13T00:13:30.153880920+00:00 stderr F I1213 00:13:30.153822 1 metrics.go:128] serverconfig.Metrics: Update ConsolePlugin metrics... 2025-12-13T00:13:30.165605754+00:00 stderr F I1213 00:13:30.165263 1 metrics.go:138] serverconfig.Metrics: Update ConsolePlugin metrics: &map[] (took 11.343111ms) 2025-12-13T00:13:32.107629050+00:00 stderr F I1213 00:13:32.107114 1 metrics.go:80] usage.Metrics: Count console users... 
2025-12-13T00:13:32.554160785+00:00 stderr F I1213 00:13:32.554079 1 metrics.go:156] usage.Metrics: Update console users metrics: 0 kubeadmin, 0 cluster-admins, 0 developers, 0 unknown/errors (took 446.461162ms) ././@LongLink0000644000000000000000000000027000000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-vers0000755000175000017500000000000015117130646033112 5ustar zuulzuul././@LongLink0000644000000000000000000000032100000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-vers0000755000175000017500000000000015117130766033115 5ustar zuulzuul././@LongLink0000644000000000000000000000032600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-vers0000644000175000017500000731340515117130646033131 0ustar zuulzuul2025-12-13T00:11:03.231860188+00:00 stderr F I1213 00:11:03.231678 1 start.go:23] ClusterVersionOperator 4.16.0-202406131906.p0.g6f553e9.assembly.stream.el9-6f553e9 2025-12-13T00:11:03.244897611+00:00 stderr F I1213 00:11:03.244803 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}. 2025-12-13T00:11:03.285026313+00:00 stderr F I1213 00:11:03.284975 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:03.287440969+00:00 stderr F I1213 00:11:03.285246 1 upgradeable.go:446] ConfigMap openshift-config/admin-acks added. 2025-12-13T00:11:03.299972819+00:00 stderr F I1213 00:11:03.299893 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-12-13T00:11:03.301250953+00:00 stderr F I1213 00:11:03.301212 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-12-13T00:11:03.311138392+00:00 stderr F I1213 00:11:03.311079 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-12-13T00:11:03.311229674+00:00 stderr F I1213 00:11:03.311192 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:11:03.311411569+00:00 stderr F I1213 00:11:03.311384 1 upgradeable.go:446] ConfigMap openshift-config-managed/admin-gates added. 2025-12-13T00:11:03.334151507+00:00 stderr F I1213 00:11:03.333578 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-12-13T00:11:03.360960666+00:00 stderr F I1213 00:11:03.360894 1 start.go:295] Waiting on 1 outstanding goroutines. 2025-12-13T00:11:03.361036528+00:00 stderr F I1213 00:11:03.360978 1 leaderelection.go:250] attempting to acquire leader lease openshift-cluster-version/version... 
2025-12-13T00:16:05.190196161+00:00 stderr F I1213 00:16:05.188789 1 leaderelection.go:260] successfully acquired lease openshift-cluster-version/version 2025-12-13T00:16:05.190196161+00:00 stderr F I1213 00:16:05.189048 1 start.go:565] FeatureGate found in cluster, using its feature set "" at startup 2025-12-13T00:16:05.190196161+00:00 stderr F I1213 00:16:05.189534 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-cluster-version", Name:"version", UID:"4c78d446-a3f8-4879-b39d-248de5762283", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_48c13834-ce56-430e-a6d2-1a99817e2c76 became leader 2025-12-13T00:16:05.190369886+00:00 stderr F I1213 00:16:05.190331 1 payload.go:307] Loading updatepayload from "/" 2025-12-13T00:16:05.192037746+00:00 stderr F I1213 00:16:05.191998 1 payload.go:403] Architecture from release-metadata (4.16.0) retrieved from runtime: "amd64" 2025-12-13T00:16:05.193701156+00:00 stderr F I1213 00:16:05.192736 1 metrics.go:154] Metrics port listening for HTTPS on 0.0.0.0:9099 2025-12-13T00:16:05.217496880+00:00 stderr F I1213 00:16:05.217419 1 payload.go:210] excluding Filename: "0000_00_cluster-version-operator_01_clusterversion-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterversions.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:05.229840214+00:00 stderr F I1213 00:16:05.229761 1 payload.go:210] excluding Filename: "0000_00_cluster-version-operator_01_clusterversion-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterversions.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:05.253926698+00:00 stderr F I1213 00:16:05.253794 1 payload.go:210] excluding Filename: 
"0000_10_config-operator_01_authentications-Hypershift.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": include.release.openshift.io/self-managed-high-availability unset 2025-12-13T00:16:05.256533875+00:00 stderr F I1213 00:16:05.256439 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:05.259992157+00:00 stderr F I1213 00:16:05.259903 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:05.263378677+00:00 stderr F I1213 00:16:05.262524 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:05.263605215+00:00 stderr F I1213 00:16:05.263571 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_backups-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "backups.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:05.264465550+00:00 stderr F I1213 00:16:05.264434 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_backups-TechPreviewNoUpgrade.crd.yaml" Group: 
"apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "backups.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:05.269677924+00:00 stderr F I1213 00:16:05.269607 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:05.276396623+00:00 stderr F I1213 00:16:05.276335 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:05.282196044+00:00 stderr F I1213 00:16:05.281368 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:05.318400846+00:00 stderr F I1213 00:16:05.318315 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:05.347349333+00:00 stderr F I1213 00:16:05.347270 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" 
Name: "infrastructures.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:05.360316177+00:00 stderr F I1213 00:16:05.360257 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:05.372703333+00:00 stderr F I1213 00:16:05.371415 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:05.373518698+00:00 stderr F I1213 00:16:05.373477 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:05.374477516+00:00 stderr F I1213 00:16:05.374442 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:05.394058466+00:00 stderr F I1213 00:16:05.393995 1 payload.go:210] excluding Filename: "0000_10_etcd_01_etcdbackups-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: 
"etcdbackups.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:05.395614632+00:00 stderr F I1213 00:16:05.395581 1 payload.go:210] excluding Filename: "0000_10_etcd_01_etcdbackups-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcdbackups.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.402374392+00:00 stderr F I1213 00:16:05.401772 1 payload.go:210] excluding Filename: "0000_10_openshift_service-ca_00_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-service-ca": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:05.403415682+00:00 stderr F I1213 00:16:05.403382 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:05.404812384+00:00 stderr F I1213 00:16:05.404773 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2025-12-13T00:16:05.406305828+00:00 stderr F I1213 00:16:05.406273 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.408639527+00:00 stderr F I1213 00:16:05.408354 1 payload.go:210] excluding Filename: "0000_12_etcd_01_etcds-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcds.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:05.413816180+00:00 stderr F I1213 00:16:05.413780 1 payload.go:210] excluding Filename: "0000_12_etcd_01_etcds-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcds.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.485584445+00:00 stderr F I1213 00:16:05.485365 1 payload.go:210] excluding Filename: "0000_30_cluster-api-provider-openstack_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-openstack": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.570256131+00:00 stderr F I1213 00:16:05.570176 1 payload.go:210] excluding Filename: "0000_30_cluster-api-provider-openstack_04_infrastructure-components.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "openstack": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.572345653+00:00 stderr F I1213 00:16:05.572293 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-aws": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.572345653+00:00 stderr F I1213 00:16:05.572327 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-azure": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.572372714+00:00 stderr F I1213 00:16:05.572342 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-gcp": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.572372714+00:00 stderr F I1213 00:16:05.572356 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-powervs": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.572384514+00:00 stderr F I1213 00:16:05.572370 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-vsphere": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.572761025+00:00 stderr F I1213 00:16:05.572721 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-cluster-api": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.575083554+00:00 stderr F I1213 00:16:05.573221 1 payload.go:210] excluding Filename: "0000_30_cluster-api_01_images.configmap.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-images": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.575083554+00:00 stderr F I1213 00:16:05.573634 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_service_account.yaml" Group: "" Kind: "ServiceAccount" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.575083554+00:00 stderr F I1213 00:16:05.573653 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_service_account.yaml" Group: "" Kind: "Secret" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-secret": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.575083554+00:00 stderr F I1213 00:16:05.574014 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_webhook-service.yaml" Group: "" Kind: "Service" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-webhook-service": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.575083554+00:00 stderr F I1213 00:16:05.574439 1 payload.go:210] excluding Filename: "0000_30_cluster-api_03_rbac_roles.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.575083554+00:00 stderr F I1213 00:16:05.574460 1 payload.go:210] excluding Filename: "0000_30_cluster-api_03_rbac_roles.yaml" Group: "rbac.authorization.k8s.io" Kind: "Role" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.577020621+00:00 stderr F I1213 00:16:05.576939 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.core-cluster-api.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "cluster-api": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.630589957+00:00 stderr F I1213 00:16:05.630507 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-aws.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "aws": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.639440529+00:00 stderr F I1213 00:16:05.639397 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-gcp.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "gcp": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.648000652+00:00 stderr F I1213 00:16:05.647732 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-ibmcloud.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "ibmcloud": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.668330044+00:00 stderr F I1213 00:16:05.668267 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-vsphere.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "vsphere": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.732813922+00:00 stderr F I1213 00:16:05.732734 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterclasses.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.732813922+00:00 stderr F I1213 00:16:05.732775 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusters.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.732813922+00:00 stderr F I1213 00:16:05.732791 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machines.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.732813922+00:00 stderr F I1213 00:16:05.732807 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinesets.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.732881534+00:00 stderr F I1213 00:16:05.732822 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinedeployments.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.732881534+00:00 stderr F I1213 00:16:05.732837 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinepools.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.732881534+00:00 stderr F I1213 00:16:05.732852 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterresourcesets.addons.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.732881534+00:00 stderr F I1213 00:16:05.732865 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterresourcesetbindings.addons.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.732892225+00:00 stderr F I1213 00:16:05.732879 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinehealthchecks.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.732901155+00:00 stderr F I1213 00:16:05.732893 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "extensionconfigs.runtime.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:05.733804821+00:00 stderr F I1213 00:16:05.733766 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_rbac_bindings.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.733804821+00:00 stderr F I1213 00:16:05.733793 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_rbac_bindings.yaml" Group: "rbac.authorization.k8s.io" Kind: "RoleBinding" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.734729009+00:00 stderr F I1213 00:16:05.734689 1 payload.go:210] excluding Filename: "0000_30_cluster-api_10_webhooks.yaml" Group: "admissionregistration.k8s.io" Kind: "ValidatingWebhookConfiguration" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.734729009+00:00 stderr F I1213 00:16:05.734717 1 payload.go:210] excluding Filename: "0000_30_cluster-api_10_webhooks.yaml" Group: "admissionregistration.k8s.io" Kind: "ValidatingWebhookConfiguration" Name: "validating-webhook-configuration": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.735467321+00:00 stderr F I1213 00:16:05.735431 1 payload.go:210] excluding Filename: "0000_30_cluster-api_11_deployment.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.736012147+00:00 stderr F I1213 00:16:05.735979 1 payload.go:210] excluding Filename: "0000_30_cluster-api_12_clusteroperator.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "cluster-api": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.742377075+00:00 stderr F I1213 00:16:05.742329 1 payload.go:210] excluding Filename: "0000_30_machine-api-operator_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-machine-api-alibabacloud": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:05.775339081+00:00 stderr F I1213 00:16:05.775250 1 payload.go:210] excluding Filename: "0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml" Group: "" Kind: "Namespace" Name: "baremetal-operator-system": no annotations
2025-12-13T00:16:05.775339081+00:00 stderr F I1213 00:16:05.775327 1 payload.go:210] excluding Filename: "0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml" Group: "" Kind: "ConfigMap" Namespace: "baremetal-operator-system" Name: "ironic": no annotations
2025-12-13T00:16:05.808285996+00:00 stderr F I1213 00:16:05.808193 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-Default.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:05.808732149+00:00 stderr F I1213 00:16:05.808689 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-DevPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:05.809336547+00:00 stderr F I1213 00:16:05.809276 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-TechPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:05.809891274+00:00 stderr F I1213 00:16:05.809844 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-Default.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator
2025-12-13T00:16:05.811099340+00:00 stderr F I1213 00:16:05.810503 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator
2025-12-13T00:16:05.811099340+00:00 stderr F I1213 00:16:05.811093 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator
2025-12-13T00:16:05.813089609+00:00 stderr F I1213 00:16:05.813048 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_04_serviceaccount-hypershift.yaml" Group: "" Kind: "ServiceAccount" Name: "csi-snapshot-controller-operator": no annotations
2025-12-13T00:16:05.815969254+00:00 stderr F I1213 00:16:05.815907 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_05_operator_role-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "Role" Name: "csi-snapshot-controller-operator-role": no annotations
2025-12-13T00:16:05.818213091+00:00 stderr F I1213 00:16:05.818153 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_06_operator_rolebinding-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "RoleBinding" Name: "csi-snapshot-controller-operator-role": no annotations
2025-12-13T00:16:05.819143798+00:00 stderr F I1213 00:16:05.819095 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_07_deployment-hypershift.yaml" Group: "apps" Kind: "Deployment" Name: "csi-snapshot-controller-operator": no annotations
2025-12-13T00:16:05.819736235+00:00 stderr F I1213 00:16:05.819698 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_07_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-storage-operator" Name: "csi-snapshot-controller-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:05.832265597+00:00 stderr F I1213 00:16:05.832173 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:05.862995945+00:00 stderr F I1213 00:16:05.861937 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2025-12-13T00:16:05.879842025+00:00 stderr F I1213 00:16:05.879757 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.890832269+00:00 stderr F I1213 00:16:05.890763 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_07-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-image-registry" Name: "cluster-image-registry-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:05.906843573+00:00 stderr F I1213 00:16:05.906778 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-ingress-ibmcloud": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:05.906843573+00:00 stderr F I1213 00:16:05.906805 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-ingress-powervs": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:05.906843573+00:00 stderr F I1213 00:16:05.906815 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-ingress-operator" Name: "openshift-ingress-alibabacloud": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:05.909609315+00:00 stderr F I1213 00:16:05.909566 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_02-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-ingress-operator" Name: "ingress-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:05.913738728+00:00 stderr F I1213 00:16:05.913444 1 payload.go:210] excluding Filename: "0000_50_cluster-kube-storage-version-migrator-operator_07_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-kube-storage-version-migrator-operator" Name: "kube-storage-version-migrator-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:05.915188130+00:00 stderr F I1213 00:16:05.915149 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_01-rbac-capi.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "system:openshift:controller:machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.915188130+00:00 stderr F I1213 00:16:05.915170 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_01-rbac-capi.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "system:openshift:controller:machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:05.917785667+00:00 stderr F I1213 00:16:05.917748 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_04-deployment-capi.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-machine-approver" Name: "machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.165154569+00:00 stderr F I1213 00:16:06.165090 1 payload.go:210] excluding Filename: "0000_50_cluster-monitoring-operator_05-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-monitoring" Name: "cluster-monitoring-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.183450351+00:00 stderr F I1213 00:16:06.183395 1 payload.go:210] excluding Filename: "0000_50_cluster-node-tuning-operator_50-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-node-tuning-operator" Name: "cluster-node-tuning-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.200469055+00:00 stderr F I1213 00:16:06.200423 1 payload.go:210] excluding Filename: "0000_50_cluster-samples-operator_06-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-samples-operator" Name: "cluster-samples-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.212264734+00:00 stderr F I1213 00:16:06.212218 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_04_cluster_csi_driver_crd-CustomNoUpgrade.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clustercsidrivers.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:06.217459307+00:00 stderr F I1213 00:16:06.217424 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_04_cluster_csi_driver_crd-TechPreviewNoUpgrade.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clustercsidrivers.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.219126877+00:00 stderr F I1213 00:16:06.219090 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_06_operator_cr-hypershift.yaml" Group: "operator.openshift.io" Kind: "Storage" Name: "cluster": no annotations
2025-12-13T00:16:06.219716784+00:00 stderr F I1213 00:16:06.219668 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_07_service_account-hypershift.yaml" Group: "" Kind: "ServiceAccount" Name: "cluster-storage-operator": no annotations
2025-12-13T00:16:06.220483567+00:00 stderr F I1213 00:16:06.220454 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_08_operator_rbac-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-storage-operator-role": no annotations
2025-12-13T00:16:06.224886168+00:00 stderr F I1213 00:16:06.224863 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_10_deployment-hypershift.yaml" Group: "apps" Kind: "Deployment" Name: "cluster-storage-operator": no annotations
2025-12-13T00:16:06.226007400+00:00 stderr F I1213 00:16:06.225976 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_10_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-storage-operator" Name: "cluster-storage-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.227319850+00:00 stderr F I1213 00:16:06.227289 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_11_cluster_operator-hypershift.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "storage": no annotations
2025-12-13T00:16:06.228793533+00:00 stderr F I1213 00:16:06.228761 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "aws-ebs-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.228793533+00:00 stderr F I1213 00:16:06.228784 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "azure-disk-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.228806623+00:00 stderr F I1213 00:16:06.228796 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "azure-file-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.228813933+00:00 stderr F I1213 00:16:06.228806 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "ibm-vpc-block-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.228821134+00:00 stderr F I1213 00:16:06.228816 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "powervs-block-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.228844944+00:00 stderr F I1213 00:16:06.228827 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "manila-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.228852475+00:00 stderr F I1213 00:16:06.228841 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "ovirt-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.228865515+00:00 stderr F I1213 00:16:06.228853 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-vmware-vsphere-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.228872715+00:00 stderr F I1213 00:16:06.228863 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-vsphere-problem-detector": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.249225417+00:00 stderr F I1213 00:16:06.249170 1 payload.go:210] excluding Filename: "0000_50_console-operator_07-conversionwebhook-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-console-operator" Name: "console-conversion-webhook": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.250790914+00:00 stderr F I1213 00:16:06.250753 1 payload.go:210] excluding Filename: "0000_50_console-operator_07-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-console-operator" Name: "console-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.312124540+00:00 stderr F I1213 00:16:06.312065 1 payload.go:210] excluding Filename: "0000_50_insights-operator_03-insightsdatagather-config-crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "insightsdatagathers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.318249511+00:00 stderr F I1213 00:16:06.318212 1 payload.go:210] excluding Filename: "0000_50_insights-operator_04-datagather-insights-crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "datagathers.insights.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.318588091+00:00 stderr F I1213 00:16:06.318549 1 payload.go:210] excluding Filename: "0000_50_insights-operator_04-insightsdatagather-config-cr.yaml" Group: "config.openshift.io" Kind: "InsightsDataGather" Name: "cluster": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.321213378+00:00 stderr F I1213 00:16:06.321013 1 payload.go:210] excluding Filename: "0000_50_insights-operator_06-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-insights" Name: "insights-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.420157447+00:00 stderr F I1213 00:16:06.420106 1 payload.go:210] excluding Filename: "0000_50_olm_06-psm-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "package-server-manager": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.423415274+00:00 stderr F I1213 00:16:06.423360 1 payload.go:210] excluding Filename: "0000_50_olm_07-olm-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "olm-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.425198816+00:00 stderr F I1213 00:16:06.425145 1 payload.go:210] excluding Filename: "0000_50_olm_08-catalog-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "catalog-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.433564294+00:00 stderr F I1213 00:16:06.433528 1 payload.go:210] excluding Filename: "0000_50_operator-marketplace_09_operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-marketplace" Name: "marketplace-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.438156130+00:00 stderr F I1213 00:16:06.438111 1 payload.go:210] excluding Filename: "0000_50_service-ca-operator_05_deploy-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-service-ca-operator" Name: "service-ca-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.439266063+00:00 stderr F I1213 00:16:06.439232 1 payload.go:210] excluding Filename: "0000_50_tests_test-reporting.yaml" Group: "config.openshift.io" Kind: "TestReporting" Name: "cluster": no annotations
2025-12-13T00:16:06.439524370+00:00 stderr F I1213 00:16:06.439478 1 payload.go:210] excluding Filename: "0000_51_olm_00-olm-operator.yml" Group: "operator.openshift.io" Kind: "OLM" Name: "cluster": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.439737766+00:00 stderr F I1213 00:16:06.439708 1 payload.go:210] excluding Filename: "0000_51_olm_01_operator_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.440463709+00:00 stderr F I1213 00:16:06.440435 1 payload.go:210] excluding Filename: "0000_51_olm_02_operator_clusterrole.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.440918332+00:00 stderr F I1213 00:16:06.440646 1 payload.go:210] excluding Filename: "0000_51_olm_03_service_account.yaml" Group: "" Kind: "ServiceAccount" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.440957273+00:00 stderr F I1213 00:16:06.440932 1 payload.go:210] excluding Filename: "0000_51_olm_04_metrics_service.yaml" Group: "" Kind: "Service" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator-metrics": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.441228531+00:00 stderr F I1213 00:16:06.441194 1 payload.go:210] excluding Filename: "0000_51_olm_05_operator_clusterrolebinding.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-olm-operator-role": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.442028624+00:00 stderr F I1213 00:16:06.441998 1 payload.go:210] excluding Filename: "0000_51_olm_06_deployment.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.442272181+00:00 stderr F I1213 00:16:06.442243 1 payload.go:210] excluding Filename: "0000_51_olm_07_cluster_operator.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "olm": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.444890479+00:00 stderr F I1213 00:16:06.444699 1 payload.go:210] excluding Filename: "0000_70_cluster-network-operator_02_rbac.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "default-account-cluster-network-operator": unrecognized value include.release.openshift.io/self-managed-high-availability=false
2025-12-13T00:16:06.445702474+00:00 stderr F I1213 00:16:06.445665 1 payload.go:210] excluding Filename: "0000_70_cluster-network-operator_03_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-network-operator" Name: "network-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.453635678+00:00 stderr F I1213 00:16:06.452362 1 payload.go:210] excluding Filename: "0000_70_dns-operator_02-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-dns-operator" Name: "dns-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:06.454559596+00:00 stderr F I1213 00:16:06.454531 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:06.455733010+00:00 stderr F I1213 00:16:06.455693 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2025-12-13T00:16:06.456838542+00:00 stderr F I1213 00:16:06.456797 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.482369559+00:00 stderr F I1213 00:16:06.482291 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:06.505835583+00:00 stderr F I1213 00:16:06.505756 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2025-12-13T00:16:06.516808008+00:00 stderr F I1213 00:16:06.516763 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:06.520119136+00:00 stderr F I1213 00:16:06.520089 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:06.521946640+00:00 stderr F I1213 00:16:06.521913 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are:
CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:06.523745193+00:00 stderr F I1213 00:16:06.523715 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.526882466+00:00 stderr F I1213 00:16:06.526850 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.532360608+00:00 stderr F I1213 00:16:06.532332 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:06.535234603+00:00 stderr F I1213 00:16:06.535210 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.541480608+00:00 stderr F I1213 00:16:06.541435 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": 
"Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.548118105+00:00 stderr F I1213 00:16:06.548077 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:06.554795722+00:00 stderr F I1213 00:16:06.554729 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.557158012+00:00 stderr F I1213 00:16:06.557111 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.559269235+00:00 stderr F I1213 00:16:06.559222 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:06.562742647+00:00 stderr F I1213 00:16:06.562689 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" 
Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.566131087+00:00 stderr F I1213 00:16:06.566079 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.568976923+00:00 stderr F I1213 00:16:06.568924 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:06.571630740+00:00 stderr F I1213 00:16:06.571584 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.572965830+00:00 stderr F I1213 00:16:06.572920 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.574426683+00:00 stderr F I1213 00:16:06.574370 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: 
"CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:06.576232387+00:00 stderr F I1213 00:16:06.576176 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.584961365+00:00 stderr F I1213 00:16:06.584891 1 payload.go:210] excluding Filename: "0000_90_cluster-baremetal-operator_03_servicemonitor.yaml" Group: "monitoring.coreos.com" Kind: "ServiceMonitor" Namespace: "openshift-machine-api" Name: "cluster-baremetal-operator-servicemonitor": include.release.openshift.io/self-managed-high-availability unset 2025-12-13T00:16:06.761182241+00:00 stderr F I1213 00:16:06.761053 1 cvo.go:315] Verifying release authenticity: All release image digests must have GPG signatures from verifier-public-key-redhat (567E347AD0044ADE55BA8A5F199E2F91FD431D51: Red Hat, Inc. (release key 2) , B08B659EE86AF623BC90E8DB938A80CAF21541EB: Red Hat, Inc. 
(beta key 2) ) - will check for signatures in containers/image format at serial signature store wrapping config maps in openshift-config-managed with label "release.openshift.io/verification-signatures", serial signature store wrapping ClusterVersion signatureStores unset, falling back to default stores, parallel signature store wrapping containers/image signature store under https://storage.googleapis.com/openshift-release/official/signatures/openshift/release, containers/image signature store under https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release 2025-12-13T00:16:06.761182241+00:00 stderr F I1213 00:16:06.761114 1 start.go:590] CVO features for version 4.16.0 enabled at startup: {desiredVersion:4.16.0 unknownVersion:false reconciliationIssuesCondition:false} 2025-12-13T00:16:06.761236683+00:00 stderr F I1213 00:16:06.761177 1 featurechangestopper.go:123] Starting stop-on-features-change controller with startingRequiredFeatureSet="" startingCvoGates={desiredVersion:4.16.0 unknownVersion:false reconciliationIssuesCondition:false} 2025-12-13T00:16:06.761236683+00:00 stderr F I1213 00:16:06.761207 1 cvo.go:415] Starting ClusterVersionOperator with minimum reconcile period 2m52.516505533s 2025-12-13T00:16:06.761269464+00:00 stderr F I1213 00:16:06.761243 1 cvo.go:483] Waiting on 6 outstanding goroutines. 
2025-12-13T00:16:06.761333265+00:00 stderr F I1213 00:16:06.761307 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-12-13T00:16:06.761387767+00:00 stderr F I1213 00:16:06.761332 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:06.761397507+00:00 stderr F I1213 00:16:06.761389 1 availableupdates.go:61] First attempt to retrieve available updates 2025-12-13T00:16:06.761453879+00:00 stderr F I1213 00:16:06.761424 1 sync_worker.go:565] Start: starting sync worker 2025-12-13T00:16:06.761577303+00:00 stderr F I1213 00:16:06.761509 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:06.761705146+00:00 stderr F I1213 00:16:06.761640 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:06.761740517+00:00 stderr F I1213 00:16:06.761729 1 sync_worker.go:461] Initializing prior known value of enabled capabilities from ClusterVersion status. 
2025-12-13T00:16:06.761825100+00:00 stderr F I1213 00:16:06.761814 1 sync_worker.go:262] syncPayload: 4.16.0 (force=false) 2025-12-13T00:16:06.761883332+00:00 stderr F I1213 00:16:06.761873 1 payload.go:307] Loading updatepayload from "/" 2025-12-13T00:16:06.762017145+00:00 stderr F I1213 00:16:06.761968 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterVersion", Namespace:"openshift-cluster-version", Name:"version", UID:"", APIVersion:"config.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RetrievePayload' Retrieving and verifying payload version="4.16.0" image="quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69" 2025-12-13T00:16:06.762017145+00:00 stderr F I1213 00:16:06.762005 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterVersion", Namespace:"openshift-cluster-version", Name:"version", UID:"", APIVersion:"config.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LoadPayload' Loading payload version="4.16.0" image="quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69" 2025-12-13T00:16:06.762111628+00:00 stderr F I1213 00:16:06.762099 1 payload.go:403] Architecture from release-metadata (4.16.0) retrieved from runtime: "amd64" 2025-12-13T00:16:06.762705677+00:00 stderr F I1213 00:16:06.762671 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterVersionOverrides' reason='ClusterVersionOverridesSet' message='Disabling ownership via cluster version overrides prevents upgrades. 
Please remove overrides before continuing.') 2025-12-13T00:16:06.762705677+00:00 stderr F I1213 00:16:06.762685 1 upgradeable.go:123] Cluster current version=4.16.0 2025-12-13T00:16:06.762720227+00:00 stderr F I1213 00:16:06.762706 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterOperators' reason='PoolUpdating' message='Cluster operator machine-config should not be upgraded between minor versions: One or more machine config pools are updating, please see `oc get mcp` for further details') 2025-12-13T00:16:06.762728197+00:00 stderr F I1213 00:16:06.762718 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (1.412332ms) 2025-12-13T00:16:06.765481579+00:00 stderr F I1213 00:16:06.765429 1 cincinnati.go:114] Using a root CA pool with 0 root CA subjects to request updates from https://api.openshift.com/api/upgrades_info/v1/graph?arch=amd64&channel=stable-4.16&id=a84dabf3-edcf-4828-b6a1-f9d3a6f02304&version=4.16.0 2025-12-13T00:16:06.771420874+00:00 stderr F I1213 00:16:06.771390 1 payload.go:210] excluding Filename: "0000_00_cluster-version-operator_01_clusterversion-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterversions.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.777439622+00:00 stderr F I1213 00:16:06.777411 1 payload.go:210] excluding Filename: "0000_00_cluster-version-operator_01_clusterversion-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterversions.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.793256741+00:00 stderr F I1213 00:16:06.793213 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-Hypershift.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: 
"authentications.config.openshift.io": include.release.openshift.io/self-managed-high-availability unset 2025-12-13T00:16:06.795857447+00:00 stderr F I1213 00:16:06.795828 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.799175246+00:00 stderr F I1213 00:16:06.799154 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:06.802805343+00:00 stderr F I1213 00:16:06.802755 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_authentications-SelfManagedHA-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "authentications.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.805536034+00:00 stderr F I1213 00:16:06.805486 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_backups-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "backups.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.807118521+00:00 stderr F I1213 00:16:06.807075 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_backups-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "backups.config.openshift.io": "Default" is required, and 
release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.810181932+00:00 stderr F I1213 00:16:06.810145 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.812776869+00:00 stderr F I1213 00:16:06.812739 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:06.815571691+00:00 stderr F I1213 00:16:06.815508 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_clusterimagepolicies-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterimagepolicies.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.845808176+00:00 stderr F I1213 00:16:06.845727 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.872082994+00:00 stderr F I1213 00:16:06.872013 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in 
release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:06.884768849+00:00 stderr F I1213 00:16:06.884711 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_infrastructures-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "infrastructures.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.898136575+00:00 stderr F I1213 00:16:06.898077 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.899733673+00:00 stderr F I1213 00:16:06.899688 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:06.901500324+00:00 stderr F I1213 00:16:06.901456 1 payload.go:210] excluding Filename: "0000_10_config-operator_01_schedulers-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "schedulers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.908995956+00:00 stderr F I1213 00:16:06.908942 1 payload.go:210] excluding Filename: "0000_10_etcd_01_etcdbackups-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcdbackups.operator.openshift.io": "Default" is required, and 
release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.909735638+00:00 stderr F I1213 00:16:06.909707 1 payload.go:210] excluding Filename: "0000_10_etcd_01_etcdbackups-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcdbackups.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.911803299+00:00 stderr F I1213 00:16:06.911768 1 payload.go:210] excluding Filename: "0000_10_openshift_service-ca_00_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-service-ca": include.release.openshift.io/self-managed-high-availability unset 2025-12-13T00:16:06.912644614+00:00 stderr F I1213 00:16:06.912618 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.914875790+00:00 stderr F I1213 00:16:06.913509 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:06.914875790+00:00 stderr F I1213 00:16:06.914326 1 payload.go:210] excluding Filename: "0000_10_operator-lifecycle-manager_01_olms-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "olms.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.915791727+00:00 stderr F I1213 00:16:06.915754 1 payload.go:210] excluding Filename: "0000_12_etcd_01_etcds-CustomNoUpgrade.crd.yaml" 
Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcds.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:06.918250630+00:00 stderr F I1213 00:16:06.918211 1 payload.go:210] excluding Filename: "0000_12_etcd_01_etcds-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "etcds.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:06.982362688+00:00 stderr F I1213 00:16:06.981997 1 payload.go:210] excluding Filename: "0000_30_cluster-api-provider-openstack_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-openstack": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:07.021959750+00:00 stderr F I1213 00:16:07.021908 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:07.021959750+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:07.021959750+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:07.022068023+00:00 stderr F I1213 00:16:07.022037 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (260.723087ms) 2025-12-13T00:16:07.047231848+00:00 stderr F I1213 00:16:07.047151 1 payload.go:210] excluding Filename: "0000_30_cluster-api-provider-openstack_04_infrastructure-components.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "openstack": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:07.049553917+00:00 stderr F I1213 00:16:07.049511 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-aws": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:07.049569687+00:00 stderr F I1213 00:16:07.049553 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-azure": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:07.049593838+00:00 stderr F I1213 00:16:07.049576 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-gcp": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:07.049623209+00:00 stderr F I1213 00:16:07.049598 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: 
"CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-powervs": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.049640219+00:00 stderr F I1213 00:16:07.049624 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-cluster-api-vsphere": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.049902027+00:00 stderr F I1213 00:16:07.049870 1 payload.go:210] excluding Filename: "0000_30_cluster-api_00_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-cluster-api": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.050191015+00:00 stderr F I1213 00:16:07.050163 1 payload.go:210] excluding Filename: "0000_30_cluster-api_01_images.configmap.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-images": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.050448093+00:00 stderr F I1213 00:16:07.050415 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_service_account.yaml" Group: "" Kind: "ServiceAccount" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.050448093+00:00 stderr F I1213 00:16:07.050440 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_service_account.yaml" Group: "" Kind: "Secret" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-secret": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.050723591+00:00 stderr F I1213 00:16:07.050687 1 payload.go:210] excluding Filename: "0000_30_cluster-api_02_webhook-service.yaml" Group: "" Kind: "Service" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator-webhook-service": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.051080252+00:00 stderr F I1213 00:16:07.051050 1 payload.go:210] excluding Filename: "0000_30_cluster-api_03_rbac_roles.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.051092043+00:00 stderr F I1213 00:16:07.051076 1 payload.go:210] excluding Filename: "0000_30_cluster-api_03_rbac_roles.yaml" Group: "rbac.authorization.k8s.io" Kind: "Role" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.054124482+00:00 stderr F I1213 00:16:07.053280 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.core-cluster-api.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "cluster-api": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.108098990+00:00 stderr F I1213 00:16:07.107339 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-aws.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "aws": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.117212349+00:00 stderr F I1213 00:16:07.117152 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-gcp.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "gcp": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.125877195+00:00 stderr F I1213 00:16:07.124904 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-ibmcloud.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "ibmcloud": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.145661571+00:00 stderr F I1213 00:16:07.145617 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_cm.infrastructure-vsphere.yaml" Group: "" Kind: "ConfigMap" Namespace: "openshift-cluster-api" Name: "vsphere": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.199799214+00:00 stderr F I1213 00:16:07.199730 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterclasses.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.199799214+00:00 stderr F I1213 00:16:07.199762 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusters.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.199799214+00:00 stderr F I1213 00:16:07.199774 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machines.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.199799214+00:00 stderr F I1213 00:16:07.199785 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinesets.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.199843625+00:00 stderr F I1213 00:16:07.199797 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinedeployments.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.199843625+00:00 stderr F I1213 00:16:07.199808 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinepools.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.199843625+00:00 stderr F I1213 00:16:07.199818 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterresourcesets.addons.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.199843625+00:00 stderr F I1213 00:16:07.199828 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clusterresourcesetbindings.addons.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.199843625+00:00 stderr F I1213 00:16:07.199838 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machinehealthchecks.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.199899767+00:00 stderr F I1213 00:16:07.199872 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_crd.core-cluster-api.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "extensionconfigs.runtime.cluster.x-k8s.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade,TechPreviewNoUpgrade
2025-12-13T00:16:07.200242017+00:00 stderr F I1213 00:16:07.200210 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_rbac_bindings.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.200242017+00:00 stderr F I1213 00:16:07.200229 1 payload.go:210] excluding Filename: "0000_30_cluster-api_04_rbac_bindings.yaml" Group: "rbac.authorization.k8s.io" Kind: "RoleBinding" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.200821014+00:00 stderr F I1213 00:16:07.200780 1 payload.go:210] excluding Filename: "0000_30_cluster-api_10_webhooks.yaml" Group: "admissionregistration.k8s.io" Kind: "ValidatingWebhookConfiguration" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.200821014+00:00 stderr F I1213 00:16:07.200799 1 payload.go:210] excluding Filename: "0000_30_cluster-api_10_webhooks.yaml" Group: "admissionregistration.k8s.io" Kind: "ValidatingWebhookConfiguration" Name: "validating-webhook-configuration": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.202140974+00:00 stderr F I1213 00:16:07.202102 1 payload.go:210] excluding Filename: "0000_30_cluster-api_11_deployment.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-api" Name: "cluster-capi-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.202350480+00:00 stderr F I1213 00:16:07.202321 1 payload.go:210] excluding Filename: "0000_30_cluster-api_12_clusteroperator.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "cluster-api": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.205917885+00:00 stderr F I1213 00:16:07.205876 1 payload.go:210] excluding Filename: "0000_30_machine-api-operator_00_credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-machine-api-alibabacloud": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.248426613+00:00 stderr F I1213 00:16:07.248342 1 payload.go:210] excluding Filename: "0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml" Group: "" Kind: "Namespace" Name: "baremetal-operator-system": no annotations
2025-12-13T00:16:07.248468104+00:00 stderr F I1213 00:16:07.248420 1 payload.go:210] excluding Filename: "0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml" Group: "" Kind: "ConfigMap" Namespace: "baremetal-operator-system" Name: "ironic": no annotations
2025-12-13T00:16:07.284563253+00:00 stderr F I1213 00:16:07.284482 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-Default.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.285258183+00:00 stderr F I1213 00:16:07.285210 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-DevPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.285910712+00:00 stderr F I1213 00:16:07.285866 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-Hypershift-TechPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.286470540+00:00 stderr F I1213 00:16:07.286427 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-Default.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator
2025-12-13T00:16:07.287058297+00:00 stderr F I1213 00:16:07.286928 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator
2025-12-13T00:16:07.287497149+00:00 stderr F I1213 00:16:07.287456 1 payload.go:210] excluding Filename: "0000_50_cluster-config-api_featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml" Group: "config.openshift.io" Kind: "FeatureGate" Name: "cluster": unrecognized value include.release.openshift.io/self-managed-high-availability=false-except-for-the-config-operator
2025-12-13T00:16:07.292094436+00:00 stderr F I1213 00:16:07.291879 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_04_serviceaccount-hypershift.yaml" Group: "" Kind: "ServiceAccount" Name: "csi-snapshot-controller-operator": no annotations
2025-12-13T00:16:07.294659542+00:00 stderr F I1213 00:16:07.294630 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_05_operator_role-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "Role" Name: "csi-snapshot-controller-operator-role": no annotations
2025-12-13T00:16:07.297304129+00:00 stderr F I1213 00:16:07.296641 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_06_operator_rolebinding-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "RoleBinding" Name: "csi-snapshot-controller-operator-role": no annotations
2025-12-13T00:16:07.297998081+00:00 stderr F I1213 00:16:07.297485 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_07_deployment-hypershift.yaml" Group: "apps" Kind: "Deployment" Name: "csi-snapshot-controller-operator": no annotations
2025-12-13T00:16:07.298247348+00:00 stderr F I1213 00:16:07.298146 1 payload.go:210] excluding Filename: "0000_50_cluster-csi-snapshot-controller-operator_07_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-storage-operator" Name: "csi-snapshot-controller-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.314487789+00:00 stderr F I1213 00:16:07.314311 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:07.351507274+00:00 stderr F I1213 00:16:07.351418 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2025-12-13T00:16:07.372349951+00:00 stderr F I1213 00:16:07.372269 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_00_configs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "configs.imageregistry.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.390376595+00:00 stderr F I1213 00:16:07.390307 1 payload.go:210] excluding Filename: "0000_50_cluster-image-registry-operator_07-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-image-registry" Name: "cluster-image-registry-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.413601872+00:00 stderr F I1213 00:16:07.413510 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-ingress-ibmcloud": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.413601872+00:00 stderr F I1213 00:16:07.413556 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-ingress-powervs": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.413601872+00:00 stderr F I1213 00:16:07.413577 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-ingress-operator" Name: "openshift-ingress-alibabacloud": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.416993052+00:00 stderr F I1213 00:16:07.416146 1 payload.go:210] excluding Filename: "0000_50_cluster-ingress-operator_02-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-ingress-operator" Name: "ingress-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.429241665+00:00 stderr F I1213 00:16:07.429025 1 payload.go:210] excluding Filename: "0000_50_cluster-kube-storage-version-migrator-operator_07_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-kube-storage-version-migrator-operator" Name: "kube-storage-version-migrator-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.433056798+00:00 stderr F I1213 00:16:07.431586 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_01-rbac-capi.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "system:openshift:controller:machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.433056798+00:00 stderr F I1213 00:16:07.431627 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_01-rbac-capi.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "system:openshift:controller:machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.435956584+00:00 stderr F I1213 00:16:07.435908 1 payload.go:210] excluding Filename: "0000_50_cluster-machine-approver_04-deployment-capi.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-machine-approver" Name: "machine-approver-capi": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.675414201+00:00 stderr F I1213 00:16:07.675329 1 payload.go:210] excluding Filename: "0000_50_cluster-monitoring-operator_05-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-monitoring" Name: "cluster-monitoring-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.694635860+00:00 stderr F I1213 00:16:07.694578 1 payload.go:210] excluding Filename: "0000_50_cluster-node-tuning-operator_50-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-node-tuning-operator" Name: "cluster-node-tuning-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.702036329+00:00 stderr F I1213 00:16:07.701974 1 payload.go:210] excluding Filename: "0000_50_cluster-samples-operator_06-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-samples-operator" Name: "cluster-samples-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.707357577+00:00 stderr F I1213 00:16:07.707311 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_04_cluster_csi_driver_crd-CustomNoUpgrade.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clustercsidrivers.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:07.710702776+00:00 stderr F I1213 00:16:07.710636 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_04_cluster_csi_driver_crd-TechPreviewNoUpgrade.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "clustercsidrivers.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.711995475+00:00 stderr F I1213 00:16:07.711702 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_06_operator_cr-hypershift.yaml" Group: "operator.openshift.io" Kind: "Storage" Name: "cluster": no annotations
2025-12-13T00:16:07.711995475+00:00 stderr F I1213 00:16:07.711878 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_07_service_account-hypershift.yaml" Group: "" Kind: "ServiceAccount" Name: "cluster-storage-operator": no annotations
2025-12-13T00:16:07.712766007+00:00 stderr F I1213 00:16:07.712095 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_08_operator_rbac-hypershift.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-storage-operator-role": no annotations
2025-12-13T00:16:07.714810208+00:00 stderr F I1213 00:16:07.714750 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_10_deployment-hypershift.yaml" Group: "apps" Kind: "Deployment" Name: "cluster-storage-operator": no annotations
2025-12-13T00:16:07.715741755+00:00 stderr F I1213 00:16:07.715682 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_10_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-storage-operator" Name: "cluster-storage-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.716802966+00:00 stderr F I1213 00:16:07.716669 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_11_cluster_operator-hypershift.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "storage": no annotations
2025-12-13T00:16:07.717742915+00:00 stderr F I1213 00:16:07.717691 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "aws-ebs-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.717742915+00:00 stderr F I1213 00:16:07.717710 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "azure-disk-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.717742915+00:00 stderr F I1213 00:16:07.717722 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "azure-file-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.717742915+00:00 stderr F I1213 00:16:07.717732 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "ibm-vpc-block-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.717777496+00:00 stderr F I1213 00:16:07.717742 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "powervs-block-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.717777496+00:00 stderr F I1213 00:16:07.717752 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "manila-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.717777496+00:00 stderr F I1213 00:16:07.717761 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "ovirt-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.717777496+00:00 stderr F I1213 00:16:07.717770 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-vmware-vsphere-csi-driver-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.717790236+00:00 stderr F I1213 00:16:07.717780 1 payload.go:210] excluding Filename: "0000_50_cluster-storage-operator_ibm-cloud-managed-cleanup.yaml" Group: "cloudcredential.openshift.io" Kind: "CredentialsRequest" Namespace: "openshift-cloud-credential-operator" Name: "openshift-vsphere-problem-detector": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.731817211+00:00 stderr F I1213 00:16:07.731729 1 payload.go:210] excluding Filename: "0000_50_console-operator_07-conversionwebhook-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-console-operator" Name: "console-conversion-webhook": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.732926854+00:00 stderr F I1213 00:16:07.732874 1 payload.go:210] excluding Filename: "0000_50_console-operator_07-operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-console-operator" Name: "console-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.754585755+00:00 stderr F I1213 00:16:07.754514 1 payload.go:210] excluding Filename: "0000_50_insights-operator_03-insightsdatagather-config-crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "insightsdatagathers.config.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.758209492+00:00 stderr F I1213 00:16:07.758118 1 payload.go:210] excluding Filename: "0000_50_insights-operator_04-datagather-insights-crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "datagathers.insights.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.758284044+00:00 stderr F I1213 00:16:07.758251 1 payload.go:210] excluding Filename: "0000_50_insights-operator_04-insightsdatagather-config-cr.yaml" Group: "config.openshift.io" Kind: "InsightsDataGather" Name: "cluster": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.759519620+00:00 stderr F I1213 00:16:07.759473 1 payload.go:210] excluding Filename: "0000_50_insights-operator_06-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-insights" Name: "insights-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.839449287+00:00 stderr F I1213 00:16:07.839380 1 payload.go:210] excluding Filename: "0000_50_olm_06-psm-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "package-server-manager": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.841275641+00:00 stderr F I1213 00:16:07.841236 1 payload.go:210] excluding Filename: "0000_50_olm_07-olm-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "olm-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.842335952+00:00 stderr F I1213 00:16:07.842302 1 payload.go:210] excluding Filename: "0000_50_olm_08-catalog-operator.deployment.ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-operator-lifecycle-manager" Name: "catalog-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.846509516+00:00 stderr F I1213 00:16:07.846466 1 payload.go:210] excluding Filename: "0000_50_operator-marketplace_09_operator-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-marketplace" Name: "marketplace-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.849730701+00:00 stderr F I1213 00:16:07.849678 1 payload.go:210] excluding Filename: "0000_50_service-ca-operator_05_deploy-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-service-ca-operator" Name: "service-ca-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.850775172+00:00 stderr F I1213 00:16:07.850730 1 payload.go:210] excluding Filename: "0000_50_tests_test-reporting.yaml" Group: "config.openshift.io" Kind: "TestReporting" Name: "cluster": no annotations
2025-12-13T00:16:07.850896185+00:00 stderr F I1213 00:16:07.850859 1 payload.go:210] excluding Filename: "0000_51_olm_00-olm-operator.yml" Group: "operator.openshift.io" Kind: "OLM" Name: "cluster": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.851018809+00:00 stderr F I1213 00:16:07.850988 1 payload.go:210] excluding Filename: "0000_51_olm_01_operator_namespace.yaml" Group: "" Kind: "Namespace" Name: "openshift-cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.851634387+00:00 stderr F I1213 00:16:07.851599 1 payload.go:210] excluding Filename: "0000_51_olm_02_operator_clusterrole.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRole" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.851724280+00:00 stderr F I1213 00:16:07.851692 1 payload.go:210] excluding Filename: "0000_51_olm_03_service_account.yaml" Group: "" Kind: "ServiceAccount" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.851864584+00:00 stderr F I1213 00:16:07.851832 1 payload.go:210] excluding Filename: "0000_51_olm_04_metrics_service.yaml" Group: "" Kind: "Service" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator-metrics": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.852004839+00:00 stderr F I1213 00:16:07.851972 1 payload.go:210] excluding Filename: "0000_51_olm_05_operator_clusterrolebinding.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "cluster-olm-operator-role": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.852620967+00:00 stderr F I1213 00:16:07.852588 1 payload.go:210] excluding Filename: "0000_51_olm_06_deployment.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-cluster-olm-operator" Name: "cluster-olm-operator": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.852727370+00:00 stderr F I1213 00:16:07.852695 1 payload.go:210] excluding Filename: "0000_51_olm_07_cluster_operator.yaml" Group: "config.openshift.io" Kind: "ClusterOperator" Name: "olm": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.854650656+00:00 stderr F I1213 00:16:07.854614 1 payload.go:210] excluding Filename: "0000_70_cluster-network-operator_02_rbac.yaml" Group: "rbac.authorization.k8s.io" Kind: "ClusterRoleBinding" Name: "default-account-cluster-network-operator": unrecognized value include.release.openshift.io/self-managed-high-availability=false
2025-12-13T00:16:07.855398209+00:00 stderr F I1213 00:16:07.855362 1 payload.go:210] excluding Filename: "0000_70_cluster-network-operator_03_deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-network-operator" Name: "network-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.860956024+00:00 stderr F I1213 00:16:07.860892 1 payload.go:210] excluding Filename: "0000_70_dns-operator_02-deployment-ibm-cloud-managed.yaml" Group: "apps" Kind: "Deployment" Namespace: "openshift-dns-operator" Name: "dns-operator": include.release.openshift.io/self-managed-high-availability unset
2025-12-13T00:16:07.862617932+00:00 stderr F I1213 00:16:07.862578 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:07.863685964+00:00 stderr F I1213 00:16:07.863647 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2025-12-13T00:16:07.864748235+00:00 stderr F I1213 00:16:07.864711 1 payload.go:210] excluding Filename: "0000_70_dns_00_dnsnameresolvers-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "dnsnameresolvers.network.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.883870822+00:00 stderr F I1213 00:16:07.883803 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:07.914519369+00:00 stderr F I1213 00:16:07.914355 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2025-12-13T00:16:07.930032527+00:00 stderr F I1213 00:16:07.929216 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_controllerconfigs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "controllerconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.933916413+00:00 stderr F I1213 00:16:07.933546 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:07.935996884+00:00 stderr F I1213 00:16:07.935934 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2025-12-13T00:16:07.938267661+00:00 stderr F I1213 00:16:07.938223 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfignodes-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfignodes.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.943416534+00:00 stderr F I1213 00:16:07.942308 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:07.949616547+00:00 stderr F I1213 00:16:07.949552 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2025-12-13T00:16:07.954873143+00:00 stderr F I1213 00:16:07.954779 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigpools-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigpools.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.962905031+00:00 stderr F I1213 00:16:07.961667 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:07.967679442+00:00 stderr F I1213 00:16:07.967589 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2025-12-13T00:16:07.973207196+00:00 stderr F I1213 00:16:07.973131 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineconfigurations-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineconfigurations.operator.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.974601707+00:00 stderr F I1213 00:16:07.974561 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:07.976082681+00:00 stderr F I1213 00:16:07.976035 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade
2025-12-13T00:16:07.977484742+00:00 stderr F I1213 00:16:07.977447 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosbuilds-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosbuilds.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade
2025-12-13T00:16:07.979185313+00:00 stderr F I1213 00:16:07.979154 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade
2025-12-13T00:16:07.980914644+00:00 stderr F I1213 00:16:07.980880 1 payload.go:210] excluding Filename: 
"0000_80_machine-config_01_machineosconfigs-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:07.988484698+00:00 stderr F I1213 00:16:07.988425 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_machineosconfigs-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "machineosconfigs.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 2025-12-13T00:16:07.990228560+00:00 stderr F I1213 00:16:07.990193 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-CustomNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=CustomNoUpgrade 2025-12-13T00:16:07.992086375+00:00 stderr F I1213 00:16:07.991666 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-DevPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": unrecognized value "DevPreviewNoUpgrade" in release.openshift.io/feature-set=DevPreviewNoUpgrade; known values are: CustomNoUpgrade,Default,LatencySensitive,TechPreviewNoUpgrade 2025-12-13T00:16:07.992936100+00:00 stderr F I1213 00:16:07.992882 1 payload.go:210] excluding Filename: "0000_80_machine-config_01_pinnedimagesets-TechPreviewNoUpgrade.crd.yaml" Group: "apiextensions.k8s.io" Kind: "CustomResourceDefinition" Name: "pinnedimagesets.machineconfiguration.openshift.io": "Default" is required, and release.openshift.io/feature-set=TechPreviewNoUpgrade 
2025-12-13T00:16:08.000317408+00:00 stderr F I1213 00:16:08.000228 1 payload.go:210] excluding Filename: "0000_90_cluster-baremetal-operator_03_servicemonitor.yaml" Group: "monitoring.coreos.com" Kind: "ServiceMonitor" Namespace: "openshift-machine-api" Name: "cluster-baremetal-operator-servicemonitor": include.release.openshift.io/self-managed-high-availability unset 2025-12-13T00:16:08.022111763+00:00 stderr F I1213 00:16:08.022018 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:08.022571207+00:00 stderr F I1213 00:16:08.022529 1 cache.go:131] {"type":"PromQL","promql":{"promql":"(\n group by (type) (cluster_infrastructure_provider{_id=\"\",type=\"Azure\"})\n or\n 0 * group by (type) (cluster_infrastructure_provider{_id=\"\"})\n)\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated 2025-12-13T00:16:08.025458522+00:00 stderr F I1213 00:16:08.024601 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:08.025458522+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:08.025458522+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:08.025458522+00:00 stderr F I1213 00:16:08.024651 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (2.646928ms) 2025-12-13T00:16:08.140566560+00:00 stderr F I1213 00:16:08.140494 1 sync_worker.go:366] Skipping preconditions for a local operator image payload. 2025-12-13T00:16:08.140631852+00:00 stderr F I1213 00:16:08.140571 1 sync_worker.go:415] Payload loaded from quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69 with hash 6WUw5aCbcO4=, architecture amd64 2025-12-13T00:16:08.140643982+00:00 stderr F I1213 00:16:08.140589 1 sync_worker.go:527] Propagating initial target version { 4.16.0 quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69 false} to sync worker loop in state Reconciling. 2025-12-13T00:16:08.140701574+00:00 stderr F I1213 00:16:08.140680 1 sync_worker.go:551] Notify the sync worker that new work is available 2025-12-13T00:16:08.140784056+00:00 stderr F I1213 00:16:08.140740 1 event.go:364] Event(v1.ObjectReference{Kind:"ClusterVersion", Namespace:"openshift-cluster-version", Name:"version", UID:"", APIVersion:"config.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PayloadLoaded' Payload loaded version="4.16.0" image="quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69" architecture="amd64" 2025-12-13T00:16:08.140862949+00:00 stderr F I1213 00:16:08.140808 1 sync_worker.go:584] new work is available 2025-12-13T00:16:08.140904470+00:00 stderr F I1213 00:16:08.140861 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", 
URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:08.144413273+00:00 stderr F I1213 00:16:08.144329 1 sync_worker.go:807] Detected while calculating next work: version changed (from { false} to { 4.16.0 quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69 false}), overrides changed ([] to [{Deployment apps openshift-monitoring cluster-monitoring-operator true} {ClusterOperator config.openshift.io monitoring true} {Deployment apps openshift-cloud-credential-operator cloud-credential-operator true} {ClusterOperator config.openshift.io cloud-credential true} {Deployment apps openshift-machine-api cluster-autoscaler-operator true} {ClusterOperator config.openshift.io cluster-autoscaler true} {Deployment apps openshift-cloud-controller-manager-operator cluster-cloud-controller-manager-operator true} {ClusterOperator config.openshift.io cloud-controller-manager true}]), capabilities changed (enabled map[] not equal to map[Build:{} Console:{} DeploymentConfig:{} ImageRegistry:{} Ingress:{} MachineAPI:{} OperatorLifecycleManager:{} marketplace:{} openshift-samples:{}]) 2025-12-13T00:16:08.144455585+00:00 stderr F I1213 00:16:08.144358 1 sync_worker.go:632] Previous sync status: &cvo.SyncWorkerStatus{Generation:4, Failure:error(nil), Done:0, Total:0, Completed:0, Reconciling:true, Initial:false, VersionHash:"", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", 
URL:"", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:16:08.144455585+00:00 stderr F I1213 00:16:08.144443 1 sync_worker.go:883] apply: 4.16.0 on generation 4 in state Reconciling at attempt 0 2025-12-13T00:16:08.147225427+00:00 stderr F I1213 00:16:08.147184 1 task_graph.go:481] Running 0 on worker 1 2025-12-13T00:16:08.147225427+00:00 stderr F I1213 00:16:08.147203 1 task_graph.go:481] Running 2 on worker 1 2025-12-13T00:16:08.147792723+00:00 stderr F I1213 00:16:08.147748 1 task_graph.go:481] Running 1 on worker 0 2025-12-13T00:16:08.174047100+00:00 stderr F W1213 00:16:08.173548 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:08.176172013+00:00 stderr F I1213 00:16:08.176123 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (1.414625161s) 2025-12-13T00:16:08.176172013+00:00 stderr F I1213 00:16:08.176157 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:08.176190134+00:00 stderr F I1213 00:16:08.176173 1 cvo.go:668] Cluster version changed, waiting for newer event 2025-12-13T00:16:08.176190134+00:00 stderr F I1213 00:16:08.176179 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.94µs) 2025-12-13T00:16:08.185714356+00:00 stderr F I1213 00:16:08.185624 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-12-13T00:16:08.185714356+00:00 stderr F I1213 00:16:08.185698 1 upgradeable.go:69] Upgradeability last checked 1.422976069s ago, will not re-check until 2025-12-13T00:18:06Z 2025-12-13T00:16:08.185759038+00:00 stderr F I1213 00:16:08.185710 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (95.863µs) 2025-12-13T00:16:08.185759038+00:00 stderr F I1213 00:16:08.185724 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:08.185840300+00:00 stderr F I1213 00:16:08.185792 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:08.185882091+00:00 stderr F I1213 00:16:08.185858 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:08.186290793+00:00 stderr F I1213 00:16:08.186224 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} 
last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:08.187996373+00:00 stderr F I1213 00:16:08.187126 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:08.187996373+00:00 stderr F I1213 00:16:08.187607 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:08.187996373+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:08.187996373+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:08.187996373+00:00 stderr F I1213 00:16:08.187692 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (575.097µs) 2025-12-13T00:16:08.210994794+00:00 stderr F I1213 00:16:08.210123 1 task_graph.go:481] Running 3 on worker 0 2025-12-13T00:16:08.213785387+00:00 stderr F W1213 00:16:08.213738 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:08.219111835+00:00 stderr F I1213 00:16:08.219037 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.302756ms) 2025-12-13T00:16:08.219111835+00:00 stderr F I1213 00:16:08.219086 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:08.220246938+00:00 stderr F I1213 00:16:08.219299 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:08.220246938+00:00 stderr F I1213 00:16:08.219384 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:08.220271799+00:00 stderr F I1213 00:16:08.220215 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 
2025-12-13T00:16:08.247447753+00:00 stderr F W1213 00:16:08.247391 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:08.249698869+00:00 stderr F I1213 00:16:08.249672 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.581904ms) 2025-12-13T00:16:08.249875385+00:00 stderr F W1213 00:16:08.249835 1 helper.go:97] PrometheusRule "openshift-image-registry/image-registry-operator-alerts" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:08.249884416+00:00 stderr F I1213 00:16:08.249876 1 task_graph.go:481] Running 4 on worker 1 2025-12-13T00:16:08.300417071+00:00 stderr F I1213 00:16:08.300349 1 task_graph.go:481] Running 5 on worker 0 2025-12-13T00:16:08.300652418+00:00 stderr F I1213 00:16:08.300611 1 task_graph.go:481] Running 6 on worker 0 2025-12-13T00:16:08.352080810+00:00 stderr F I1213 00:16:08.352018 1 task_graph.go:481] Running 7 on worker 1 2025-12-13T00:16:09.001872753+00:00 stderr F I1213 00:16:09.001806 1 task_graph.go:481] Running 8 on worker 0 2025-12-13T00:16:09.001945435+00:00 stderr F I1213 00:16:09.001911 1 task_graph.go:481] Running 9 on worker 0 2025-12-13T00:16:09.025138962+00:00 stderr F I1213 00:16:09.025032 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:09.025427090+00:00 stderr F I1213 00:16:09.025371 1 cache.go:131] {"type":"PromQL","promql":{"promql":"topk(1,\n group by (label_node_openshift_io_os_id) (kube_node_labels{_id=\"\",label_node_openshift_io_os_id=\"rhel\"})\n or\n 0 * group by (label_node_openshift_io_os_id) (kube_node_labels{_id=\"\"})\n)\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has 
never been evaluated 2025-12-13T00:16:09.028475870+00:00 stderr F I1213 00:16:09.028407 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:09.028475870+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:09.028475870+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:09.028532772+00:00 stderr F I1213 00:16:09.028501 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (3.484003ms) 2025-12-13T00:16:09.028532772+00:00 stderr F I1213 00:16:09.028524 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:09.028646946+00:00 stderr F I1213 00:16:09.028597 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:09.028701418+00:00 stderr F I1213 00:16:09.028670 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:09.029367027+00:00 stderr F I1213 00:16:09.029289 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:09.060464378+00:00 stderr F W1213 00:16:09.060386 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:09.063367244+00:00 stderr F I1213 00:16:09.063130 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.600574ms) 2025-12-13T00:16:09.073109781+00:00 stderr F I1213 00:16:09.073061 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-12-13T00:16:09.073145582+00:00 stderr F I1213 00:16:09.073120 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:09.073145582+00:00 stderr F I1213 00:16:09.073125 1 upgradeable.go:69] Upgradeability last checked 2.310405005s ago, will not re-check until 2025-12-13T00:18:06Z 2025-12-13T00:16:09.073145582+00:00 stderr F I1213 00:16:09.073115 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:09.073145582+00:00 stderr F I1213 00:16:09.073134 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (79.382µs) 2025-12-13T00:16:09.073328998+00:00 stderr F I1213 00:16:09.073280 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:09.073396021+00:00 stderr F I1213 00:16:09.073366 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:09.073427842+00:00 stderr F I1213 00:16:09.073400 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 
2025-12-13T00:16:09.073427842+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:09.073427842+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:09.073475903+00:00 stderr F I1213 00:16:09.073457 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (341.501µs) 2025-12-13T00:16:09.073925336+00:00 stderr F I1213 00:16:09.073846 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:09.098916596+00:00 stderr F W1213 00:16:09.098839 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:09.103299145+00:00 stderr F I1213 00:16:09.103243 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.104341ms) 2025-12-13T00:16:09.103299145+00:00 stderr F I1213 00:16:09.103282 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:09.103423739+00:00 stderr F I1213 00:16:09.103379 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:09.103487151+00:00 stderr F I1213 00:16:09.103456 1 
sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:09.103934864+00:00 stderr F I1213 00:16:09.103875 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:09.130473680+00:00 stderr F W1213 00:16:09.130196 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:09.132387237+00:00 stderr F I1213 00:16:09.132341 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.05584ms) 2025-12-13T00:16:09.202716468+00:00 stderr F I1213 00:16:09.202624 1 task_graph.go:481] Running 10 on worker 0 2025-12-13T00:16:09.551154891+00:00 stderr F I1213 00:16:09.551100 1 task_graph.go:481] Running 11 on worker 1 2025-12-13T00:16:10.002390327+00:00 stderr F I1213 00:16:10.001851 1 task_graph.go:481] Running 12 on worker 0 2025-12-13T00:16:10.028958674+00:00 stderr F I1213 00:16:10.028833 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:10.029375337+00:00 stderr F I1213 00:16:10.029312 1 cache.go:131] {"type":"PromQL","promql":{"promql":"group(csv_succeeded{_id=\"\", name=~\"numaresources-operator[.].*\"})\nor\n0 * group(csv_count{_id=\"\"})\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by 
(_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated
2025-12-13T00:16:10.035793966+00:00 stderr F I1213 00:16:10.035464 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:16:10.035793966+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:16:10.035793966+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:16:10.035793966+00:00 stderr F I1213 00:16:10.035625 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (6.810901ms)
2025-12-13T00:16:10.035793966+00:00 stderr F I1213 00:16:10.035653 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:10.035793966+00:00 stderr F I1213 00:16:10.035744 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:10.035875719+00:00 stderr F I1213 00:16:10.035858 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:10.036520657+00:00 stderr F I1213 00:16:10.036414 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:10.084308632+00:00 stderr F W1213 00:16:10.083521 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:10.087718364+00:00 stderr F I1213 00:16:10.087653 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.994299ms)
2025-12-13T00:16:10.093310009+00:00 stderr F I1213 00:16:10.093255 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version"
2025-12-13T00:16:10.093331150+00:00 stderr F I1213 00:16:10.093310 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:10.093331150+00:00 stderr F I1213 00:16:10.093318 1 upgradeable.go:69] Upgradeability last checked 3.330592242s ago, will not re-check until 2025-12-13T00:18:06Z
2025-12-13T00:16:10.093363081+00:00 stderr F I1213 00:16:10.093336 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (87.463µs)
2025-12-13T00:16:10.093394792+00:00 stderr F I1213 00:16:10.093364 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:16:10.093483834+00:00 stderr F I1213 00:16:10.093418 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:10.093529696+00:00 stderr F I1213 00:16:10.093505 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:10.093966428+00:00 stderr F I1213 00:16:10.093916 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:16:10.093966428+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:16:10.093966428+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:16:10.094091392+00:00 stderr F I1213 00:16:10.094051 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (695.371µs)
2025-12-13T00:16:10.094091392+00:00 stderr F I1213 00:16:10.094037 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:10.103016346+00:00 stderr F I1213 00:16:10.102613 1 task_graph.go:481] Running 13 on worker 0
2025-12-13T00:16:10.121145693+00:00 stderr F W1213 00:16:10.121072 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:10.124256555+00:00 stderr F I1213 00:16:10.124213 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.920055ms)
2025-12-13T00:16:10.124256555+00:00 stderr F I1213 00:16:10.124251 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:10.124369778+00:00 stderr F I1213 00:16:10.124334 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:10.124430450+00:00 stderr F I1213 00:16:10.124407 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:10.124900184+00:00 stderr F I1213 00:16:10.124835 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:10.169141304+00:00 stderr F W1213 00:16:10.168936 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:10.171817143+00:00 stderr F I1213 00:16:10.171715 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.438244ms)
2025-12-13T00:16:10.211390364+00:00 stderr F I1213 00:16:10.211278 1 task_graph.go:481] Running 14 on worker 0
2025-12-13T00:16:10.301496841+00:00 stderr F I1213 00:16:10.301421 1 task_graph.go:481] Running 15 on worker 0
2025-12-13T00:16:10.801588753+00:00 stderr F I1213 00:16:10.801131 1 task_graph.go:481] Running 16 on worker 0
2025-12-13T00:16:10.801588753+00:00 stderr F I1213 00:16:10.801549 1 task_graph.go:481] Running 17 on worker 0
2025-12-13T00:16:11.036189837+00:00 stderr F I1213 00:16:11.036091 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:16:11.036351531+00:00 stderr F I1213 00:16:11.036274 1 cache.go:131] {"type":"PromQL","promql":{"promql":"group(csv_succeeded{_id=\"\", name=~\"sriov-network-operator[.].*\"})\nor\n0 * group(csv_count{_id=\"\"})\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated
2025-12-13T00:16:11.040992629+00:00 stderr F I1213 00:16:11.038665 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:16:11.040992629+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:16:11.040992629+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:16:11.040992629+00:00 stderr F I1213 00:16:11.038747 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (2.666399ms)
2025-12-13T00:16:11.040992629+00:00 stderr F I1213 00:16:11.038763 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:11.040992629+00:00 stderr F I1213 00:16:11.038823 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:11.040992629+00:00 stderr F I1213 00:16:11.038870 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:11.047295186+00:00 stderr F I1213 00:16:11.045489 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:11.072469701+00:00 stderr F W1213 00:16:11.072139 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:11.073860832+00:00 stderr F I1213 00:16:11.073830 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.063908ms)
2025-12-13T00:16:11.081987642+00:00 stderr F I1213 00:16:11.080786 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version"
2025-12-13T00:16:11.081987642+00:00 stderr F I1213 00:16:11.080854 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:11.081987642+00:00 stderr F I1213 00:16:11.080869 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:16:11.081987642+00:00 stderr F I1213 00:16:11.081117 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:16:11.081987642+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:16:11.081987642+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:16:11.081987642+00:00 stderr F I1213 00:16:11.081108 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:11.081987642+00:00 stderr F I1213 00:16:11.081160 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (293.429µs)
2025-12-13T00:16:11.081987642+00:00 stderr F I1213 00:16:11.081189 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:11.081987642+00:00 stderr F I1213 00:16:11.081211 1 upgradeable.go:69] Upgradeability last checked 4.318478642s ago, will not re-check until 2025-12-13T00:18:06Z
2025-12-13T00:16:11.081987642+00:00 stderr F I1213 00:16:11.081220 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (441.512µs)
2025-12-13T00:16:11.081987642+00:00 stderr F I1213 00:16:11.081521 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:11.104535270+00:00 stderr F W1213 00:16:11.104487 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:11.106608411+00:00 stderr F I1213 00:16:11.106566 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.70848ms)
2025-12-13T00:16:11.106608411+00:00 stderr F I1213 00:16:11.106598 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:11.106687863+00:00 stderr F I1213 00:16:11.106663 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:11.106724204+00:00 stderr F I1213 00:16:11.106712 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:11.107075045+00:00 stderr F I1213 00:16:11.107011 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:11.129577801+00:00 stderr F W1213 00:16:11.129518 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:11.131171698+00:00 stderr F I1213 00:16:11.131138 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.538186ms)
2025-12-13T00:16:12.038851005+00:00 stderr F I1213 00:16:12.038792 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:16:12.039081291+00:00 stderr F I1213 00:16:12.039055 1 cache.go:131] {"type":"PromQL","promql":{"promql":"group by (_id, invoker) (cluster_installer{_id=\"\",invoker=\"hypershift\"})\nor\n0 * group by (_id, invoker) (cluster_installer{_id=\"\"})\n"}} is stealing this cluster-condition match call for {"type":"PromQL","promql":{"promql":"topk by (_id) (1,\n group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type=~\"None|BareMetal\"})\n or\n 0 * group by (_id, type) (cluster_infrastructure_provider{_id=\"\",type!~\"None|BareMetal\"})\n)\n"}}, because it has never been evaluated
2025-12-13T00:16:12.042577025+00:00 stderr F I1213 00:16:12.042171 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:16:12.042577025+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:16:12.042577025+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:16:12.042577025+00:00 stderr F I1213 00:16:12.042247 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (3.466223ms)
2025-12-13T00:16:12.042577025+00:00 stderr F I1213 00:16:12.042262 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:12.042577025+00:00 stderr F I1213 00:16:12.042321 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:12.042577025+00:00 stderr F I1213 00:16:12.042368 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:12.043623206+00:00 stderr F I1213 00:16:12.042843 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:12.086080123+00:00 stderr F W1213 00:16:12.085987 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:12.087634868+00:00 stderr F I1213 00:16:12.087593 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (45.328702ms)
2025-12-13T00:16:12.089401861+00:00 stderr F I1213 00:16:12.089362 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version"
2025-12-13T00:16:12.089494244+00:00 stderr F I1213 00:16:12.089471 1 upgradeable.go:69] Upgradeability last checked 5.326747916s ago, will not re-check until 2025-12-13T00:18:06Z
2025-12-13T00:16:12.089554345+00:00 stderr F I1213 00:16:12.089521 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (170.765µs)
2025-12-13T00:16:12.089604637+00:00 stderr F I1213 00:16:12.089587 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:12.089716340+00:00 stderr F I1213 00:16:12.089693 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:12.089803393+00:00 stderr F I1213 00:16:12.089784 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:12.090154893+00:00 stderr F I1213 00:16:12.090088 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:16:12.090293857+00:00 stderr F I1213 00:16:12.090255 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:12.090613997+00:00 stderr F I1213 00:16:12.090575 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:16:12.090613997+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:16:12.090613997+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:16:12.090704180+00:00 stderr F I1213 00:16:12.090674 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (593.958µs)
2025-12-13T00:16:12.129225349+00:00 stderr F W1213 00:16:12.129181 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:12.130748765+00:00 stderr F I1213 00:16:12.130723 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.136428ms)
2025-12-13T00:16:12.130794356+00:00 stderr F I1213 00:16:12.130782 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:12.130871148+00:00 stderr F I1213 00:16:12.130855 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:12.130923420+00:00 stderr F I1213 00:16:12.130914 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:12.131282010+00:00 stderr F I1213 00:16:12.131242 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:12.164266467+00:00 stderr F W1213 00:16:12.164208 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:12.165737240+00:00 stderr F I1213 00:16:12.165675 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.891912ms)
2025-12-13T00:16:13.043364328+00:00 stderr F I1213 00:16:13.043259 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:16:13.043563843+00:00 stderr F I1213 00:16:13.043521 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:16:13.043563843+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:16:13.043563843+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:16:13.043581724+00:00 stderr F I1213 00:16:13.043564 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (316.449µs)
2025-12-13T00:16:13.043622455+00:00 stderr F I1213 00:16:13.043592 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:13.043727318+00:00 stderr F I1213 00:16:13.043691 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:13.043768549+00:00 stderr F I1213 00:16:13.043747 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:13.044046648+00:00 stderr F I1213 00:16:13.043998 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:13.072605683+00:00 stderr F W1213 00:16:13.072566 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:13.074089167+00:00 stderr F I1213 00:16:13.074066 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.484272ms)
2025-12-13T00:16:13.902779855+00:00 stderr F I1213 00:16:13.902685 1 task_graph.go:481] Running 18 on worker 0
2025-12-13T00:16:14.001896329+00:00 stderr F I1213 00:16:14.001816 1 task_graph.go:481] Running 19 on worker 0
2025-12-13T00:16:14.044165650+00:00 stderr F I1213 00:16:14.044099 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:16:14.044815969+00:00 stderr F I1213 00:16:14.044777 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:16:14.044815969+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:16:14.044815969+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:16:14.045073717+00:00 stderr F I1213 00:16:14.045030 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (945.137µs)
2025-12-13T00:16:14.045162819+00:00 stderr F I1213 00:16:14.045057 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:14.045335384+00:00 stderr F I1213 00:16:14.045289 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:14.045473098+00:00 stderr F I1213 00:16:14.045447 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:14.046275973+00:00 stderr F I1213 00:16:14.046199 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:14.091959765+00:00 stderr F W1213 00:16:14.091845 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:14.094689716+00:00 stderr F I1213 00:16:14.094636 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.579548ms)
2025-12-13T00:16:14.104951500+00:00 stderr F I1213 00:16:14.104893 1 task_graph.go:481] Running 20 on worker 0
2025-12-13T00:16:14.105223988+00:00 stderr F I1213 00:16:14.105176 1 task_graph.go:481] Running 21 on worker 0
2025-12-13T00:16:14.502261129+00:00 stderr F I1213 00:16:14.502146 1 task_graph.go:481] Running 22 on worker 0
2025-12-13T00:16:14.602587869+00:00 stderr F I1213 00:16:14.602541 1 task_graph.go:481] Running 23 on worker 0
2025-12-13T00:16:15.045714742+00:00 stderr F I1213 00:16:15.045508 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:16:15.045921318+00:00 stderr F I1213 00:16:15.045859 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:16:15.045921318+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:16:15.045921318+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:16:15.046017752+00:00 stderr F I1213 00:16:15.045968 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (471.524µs)
2025-12-13T00:16:15.046017752+00:00 stderr F I1213 00:16:15.045992 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:15.046090514+00:00 stderr F I1213 00:16:15.046050 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:15.046150415+00:00 stderr F I1213 00:16:15.046122 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:15.046545926+00:00 stderr F I1213 00:16:15.046445 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:15.076475223+00:00 stderr F W1213 00:16:15.075308 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:15.077854911+00:00 stderr F I1213 00:16:15.077795 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.798638ms)
2025-12-13T00:16:15.502195660+00:00 stderr F I1213 00:16:15.501554 1 task_graph.go:481] Running 24 on worker 0
2025-12-13T00:16:15.601540854+00:00 stderr F I1213 00:16:15.601380 1 task_graph.go:481] Running 25 on worker 0
2025-12-13T00:16:16.047832852+00:00 stderr F I1213 00:16:16.047109 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:16:16.047832852+00:00 stderr F I1213 00:16:16.047529 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:16:16.047832852+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:16:16.047832852+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:16:16.047832852+00:00 stderr F I1213 00:16:16.047610 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (514.724µs)
2025-12-13T00:16:16.047832852+00:00 stderr F I1213 00:16:16.047803 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:16.047924545+00:00 stderr F I1213 00:16:16.047874 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:16.048620694+00:00 stderr F I1213 00:16:16.047958 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:16.048620694+00:00 stderr F I1213 00:16:16.048243 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:16.076923377+00:00 stderr F W1213 00:16:16.076871 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:16.078482799+00:00 stderr F I1213 00:16:16.078460 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.656488ms)
2025-12-13T00:16:16.752312094+00:00 stderr F W1213 00:16:16.752251 1 helper.go:97] ConsoleQuickStart "ocs-install-tour" not found. It either has already been removed or it has never been installed on this cluster.
2025-12-13T00:16:16.804779426+00:00 stderr F I1213 00:16:16.804719 1 task_graph.go:481] Running 26 on worker 0
2025-12-13T00:16:17.049575232+00:00 stderr F I1213 00:16:17.048615 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:16:17.049575232+00:00 stderr F I1213 00:16:17.048950 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:16:17.049575232+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:16:17.049575232+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:16:17.049575232+00:00 stderr F I1213 00:16:17.049015 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (412.482µs)
2025-12-13T00:16:17.049575232+00:00 stderr F I1213 00:16:17.049030 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:16:17.049575232+00:00 stderr F I1213 00:16:17.049076 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:16:17.049575232+00:00 stderr F I1213 00:16:17.049139 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:16:17.049575232+00:00 stderr F I1213 00:16:17.049395 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:16:17.074795801+00:00 stderr F W1213 00:16:17.074736 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:16:17.076527218+00:00 stderr F I1213 00:16:17.076481 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.447629ms)
2025-12-13T00:16:17.352613249+00:00 stderr F I1213 00:16:17.352546 1 task_graph.go:481] Running 27 on worker 1
2025-12-13T00:16:17.457493533+00:00 stderr F I1213 00:16:17.455512 1 task_graph.go:481] Running 28 on worker 1
2025-12-13T00:16:17.804840069+00:00 stderr F I1213 00:16:17.804269 1 task_graph.go:481] Running 29 on worker 0
2025-12-13T00:16:17.852991095+00:00 stderr F I1213 00:16:17.852526 1 task_graph.go:481] Running 30 on worker 1
2025-12-13T00:16:18.052787691+00:00 stderr F I1213 00:16:18.052644 1 task_graph.go:481] Running 31 on worker 1
2025-12-13T00:16:18.052908435+00:00 stderr F I1213 00:16:18.052857 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:16:18.053378787+00:00 stderr F I1213 00:16:18.053337 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:16:18.053378787+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node
scaleups and re-deployment failures. 2025-12-13T00:16:18.053378787+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:18.053471010+00:00 stderr F I1213 00:16:18.053439 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (598.786µs) 2025-12-13T00:16:18.053482330+00:00 stderr F I1213 00:16:18.053466 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:18.053597723+00:00 stderr F I1213 00:16:18.053542 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:18.053672105+00:00 stderr F I1213 00:16:18.053644 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:18.054303292+00:00 stderr F I1213 00:16:18.054228 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:18.090543342+00:00 stderr F W1213 00:16:18.090466 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:18.095895328+00:00 stderr F I1213 00:16:18.095072 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.577575ms) 2025-12-13T00:16:18.101059140+00:00 stderr F I1213 00:16:18.101028 1 
task_graph.go:481] Running 32 on worker 0 2025-12-13T00:16:18.700399989+00:00 stderr F I1213 00:16:18.700361 1 task_graph.go:481] Running 33 on worker 0 2025-12-13T00:16:18.801028257+00:00 stderr F I1213 00:16:18.800651 1 task_graph.go:481] Running 34 on worker 0 2025-12-13T00:16:19.054203001+00:00 stderr F I1213 00:16:19.054131 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:19.054464848+00:00 stderr F I1213 00:16:19.054427 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:19.054464848+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:19.054464848+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:19.054513039+00:00 stderr F I1213 00:16:19.054486 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (364.51µs) 2025-12-13T00:16:19.054513039+00:00 stderr F I1213 00:16:19.054505 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:19.054594842+00:00 stderr F I1213 00:16:19.054554 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:19.054636143+00:00 stderr F I1213 00:16:19.054614 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:19.054972203+00:00 stderr F I1213 00:16:19.054912 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:19.098270735+00:00 stderr F W1213 00:16:19.098207 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:19.101647467+00:00 stderr F I1213 00:16:19.100516 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.007067ms) 2025-12-13T00:16:19.550722863+00:00 stderr F I1213 00:16:19.550639 1 task_graph.go:481] Running 35 on worker 1 2025-12-13T00:16:20.056812874+00:00 stderr F I1213 00:16:20.056725 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:20.057100862+00:00 stderr F I1213 00:16:20.057057 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:20.057100862+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:20.057100862+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:20.057159643+00:00 stderr F I1213 00:16:20.057116 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (401.871µs) 2025-12-13T00:16:20.057159643+00:00 stderr F I1213 00:16:20.057150 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:20.057229395+00:00 stderr F I1213 00:16:20.057196 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:20.057273237+00:00 stderr F I1213 00:16:20.057248 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:20.057563464+00:00 stderr F I1213 00:16:20.057518 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:20.081421616+00:00 stderr F W1213 00:16:20.081347 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:20.083637977+00:00 stderr F I1213 00:16:20.083428 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.274858ms) 2025-12-13T00:16:21.057887206+00:00 stderr F I1213 00:16:21.057801 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:16:21.058357268+00:00 stderr F I1213 00:16:21.058314 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:21.058357268+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:21.058357268+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:21.058393449+00:00 stderr F I1213 00:16:21.058380 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (589.246µs) 2025-12-13T00:16:21.058429200+00:00 stderr F I1213 00:16:21.058398 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:21.058514943+00:00 stderr F I1213 00:16:21.058475 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:21.058554924+00:00 stderr F I1213 00:16:21.058530 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:21.058875632+00:00 stderr F I1213 00:16:21.058825 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:21.095438371+00:00 stderr F W1213 00:16:21.095372 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:21.098126474+00:00 stderr F I1213 00:16:21.096835 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.435239ms) 2025-12-13T00:16:21.302984619+00:00 stderr F I1213 00:16:21.302468 1 core.go:138] Updating ConfigMap openshift-machine-config-operator/kube-rbac-proxy due to diff:   &v1.ConfigMap{ 2025-12-13T00:16:21.302984619+00:00 stderr F    TypeMeta: v1.TypeMeta{ 2025-12-13T00:16:21.302984619+00:00 stderr F -  Kind: "", 2025-12-13T00:16:21.302984619+00:00 stderr F +  Kind: "ConfigMap", 2025-12-13T00:16:21.302984619+00:00 stderr F -  APIVersion: "", 2025-12-13T00:16:21.302984619+00:00 stderr F +  APIVersion: "v1", 2025-12-13T00:16:21.302984619+00:00 stderr F    }, 2025-12-13T00:16:21.302984619+00:00 stderr F    ObjectMeta: v1.ObjectMeta{ 2025-12-13T00:16:21.302984619+00:00 stderr F    ... 
// 2 identical fields 2025-12-13T00:16:21.302984619+00:00 stderr F    Namespace: "openshift-machine-config-operator", 2025-12-13T00:16:21.302984619+00:00 stderr F    SelfLink: "", 2025-12-13T00:16:21.302984619+00:00 stderr F -  UID: "ba7edbb4-1ba2-49b6-98a7-d849069e9f80", 2025-12-13T00:16:21.302984619+00:00 stderr F +  UID: "", 2025-12-13T00:16:21.302984619+00:00 stderr F -  ResourceVersion: "37089", 2025-12-13T00:16:21.302984619+00:00 stderr F +  ResourceVersion: "", 2025-12-13T00:16:21.302984619+00:00 stderr F    Generation: 0, 2025-12-13T00:16:21.302984619+00:00 stderr F -  CreationTimestamp: v1.Time{Time: s"2024-06-26 12:39:23 +0000 UTC"}, 2025-12-13T00:16:21.302984619+00:00 stderr F +  CreationTimestamp: v1.Time{}, 2025-12-13T00:16:21.302984619+00:00 stderr F    DeletionTimestamp: nil, 2025-12-13T00:16:21.302984619+00:00 stderr F    DeletionGracePeriodSeconds: nil, 2025-12-13T00:16:21.302984619+00:00 stderr F    ... // 2 identical fields 2025-12-13T00:16:21.302984619+00:00 stderr F    OwnerReferences: {{APIVersion: "config.openshift.io/v1", Kind: "ClusterVersion", Name: "version", UID: "a73cbaa6-40d3-4694-9b98-c0a6eed45825", ...}}, 2025-12-13T00:16:21.302984619+00:00 stderr F    Finalizers: nil, 2025-12-13T00:16:21.302984619+00:00 stderr F -  ManagedFields: []v1.ManagedFieldsEntry{ 2025-12-13T00:16:21.302984619+00:00 stderr F -  { 2025-12-13T00:16:21.302984619+00:00 stderr F -  Manager: "cluster-version-operator", 2025-12-13T00:16:21.302984619+00:00 stderr F -  Operation: "Update", 2025-12-13T00:16:21.302984619+00:00 stderr F -  APIVersion: "v1", 2025-12-13T00:16:21.302984619+00:00 stderr F -  Time: s"2025-08-13 20:39:50 +0000 UTC", 2025-12-13T00:16:21.302984619+00:00 stderr F -  FieldsType: "FieldsV1", 2025-12-13T00:16:21.302984619+00:00 stderr F -  FieldsV1: s`{"f:data":{},"f:metadata":{"f:annotations":{".":{},"f:include.re`..., 2025-12-13T00:16:21.302984619+00:00 stderr F -  }, 2025-12-13T00:16:21.302984619+00:00 stderr F -  { 
2025-12-13T00:16:21.302984619+00:00 stderr F -  Manager: "machine-config-operator", 2025-12-13T00:16:21.302984619+00:00 stderr F -  Operation: "Update", 2025-12-13T00:16:21.302984619+00:00 stderr F -  APIVersion: "v1", 2025-12-13T00:16:21.302984619+00:00 stderr F -  Time: s"2025-08-13 20:39:50 +0000 UTC", 2025-12-13T00:16:21.302984619+00:00 stderr F -  FieldsType: "FieldsV1", 2025-12-13T00:16:21.302984619+00:00 stderr F -  FieldsV1: s`{"f:data":{"f:config-file.yaml":{}}}`, 2025-12-13T00:16:21.302984619+00:00 stderr F -  }, 2025-12-13T00:16:21.302984619+00:00 stderr F -  }, 2025-12-13T00:16:21.302984619+00:00 stderr F +  ManagedFields: nil, 2025-12-13T00:16:21.302984619+00:00 stderr F    }, 2025-12-13T00:16:21.302984619+00:00 stderr F    Immutable: nil, 2025-12-13T00:16:21.302984619+00:00 stderr F    Data: {"config-file.yaml": "authorization:\n resourceAttributes:\n apiVersion: v1\n reso"...}, 2025-12-13T00:16:21.302984619+00:00 stderr F    BinaryData: nil, 2025-12-13T00:16:21.302984619+00:00 stderr F   } 2025-12-13T00:16:21.502406446+00:00 stderr F I1213 00:16:21.501727 1 task_graph.go:481] Running 36 on worker 0 2025-12-13T00:16:21.602086108+00:00 stderr F I1213 00:16:21.602016 1 task_graph.go:481] Running 37 on worker 0 2025-12-13T00:16:21.800781235+00:00 stderr F W1213 00:16:21.800384 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-etcd" not found. It either has already been removed or it has never been installed on this cluster. 
2025-12-13T00:16:21.800853457+00:00 stderr F I1213 00:16:21.800843 1 task_graph.go:481] Running 38 on worker 0 2025-12-13T00:16:22.058888184+00:00 stderr F I1213 00:16:22.058815 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:22.059089710+00:00 stderr F I1213 00:16:22.059068 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:22.059089710+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:22.059089710+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:22.059136201+00:00 stderr F I1213 00:16:22.059118 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (315.958µs) 2025-12-13T00:16:22.059144061+00:00 stderr F I1213 00:16:22.059134 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:22.059204223+00:00 stderr F I1213 00:16:22.059174 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:22.059237344+00:00 stderr F I1213 00:16:22.059221 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:22.059471880+00:00 stderr F I1213 00:16:22.059433 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:22.086081647+00:00 stderr F W1213 00:16:22.085341 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:22.087292960+00:00 stderr F I1213 00:16:22.087251 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.114018ms) 2025-12-13T00:16:22.201865889+00:00 stderr F I1213 00:16:22.201653 1 task_graph.go:481] Running 39 on worker 0 2025-12-13T00:16:22.454359395+00:00 stderr F W1213 00:16:22.453723 1 helper.go:97] imagestream "openshift/hello-openshift" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:22.751077919+00:00 stderr F I1213 00:16:22.751006 1 task_graph.go:481] Running 40 on worker 1 2025-12-13T00:16:23.052083870+00:00 stderr F W1213 00:16:23.051500 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-cluster-total" not found. It either has already been removed or it has never been installed on this cluster. 
2025-12-13T00:16:23.059523883+00:00 stderr F I1213 00:16:23.059472 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:23.059807681+00:00 stderr F I1213 00:16:23.059778 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:23.059807681+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:23.059807681+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:23.059871852+00:00 stderr F I1213 00:16:23.059844 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (381.921µs) 2025-12-13T00:16:23.059871852+00:00 stderr F I1213 00:16:23.059867 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:23.059973725+00:00 stderr F I1213 00:16:23.059927 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:23.060021216+00:00 stderr F I1213 00:16:23.060002 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:23.060353985+00:00 stderr F I1213 00:16:23.060306 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 
59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:23.094475998+00:00 stderr F W1213 00:16:23.094418 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:23.096587215+00:00 stderr F I1213 00:16:23.096554 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.683202ms) 2025-12-13T00:16:23.100484012+00:00 stderr F I1213 00:16:23.100455 1 task_graph.go:481] Running 41 on worker 0 2025-12-13T00:16:23.141376728+00:00 stderr F I1213 00:16:23.141310 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:23.141443600+00:00 stderr F I1213 00:16:23.141416 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:23.141494442+00:00 stderr F I1213 00:16:23.141475 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:23.141876742+00:00 stderr F I1213 00:16:23.141815 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:23.178408071+00:00 
stderr F W1213 00:16:23.178349 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:23.179759017+00:00 stderr F I1213 00:16:23.179722 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.42191ms) 2025-12-13T00:16:23.252177725+00:00 stderr F W1213 00:16:23.252111 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-cluster" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:23.451679753+00:00 stderr F W1213 00:16:23.450945 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-namespace" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:23.652564510+00:00 stderr F W1213 00:16:23.652489 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-node" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:23.852319636+00:00 stderr F W1213 00:16:23.851396 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-pod" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:24.051329731+00:00 stderr F W1213 00:16:24.050842 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workload" not found. It either has already been removed or it has never been installed on this cluster. 
2025-12-13T00:16:24.059980717+00:00 stderr F I1213 00:16:24.059906 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:24.060302616+00:00 stderr F I1213 00:16:24.060251 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:24.060302616+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:24.060302616+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:24.060384908+00:00 stderr F I1213 00:16:24.060356 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (462.552µs) 2025-12-13T00:16:24.060394438+00:00 stderr F I1213 00:16:24.060383 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:24.060480660+00:00 stderr F I1213 00:16:24.060447 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:24.060533982+00:00 stderr F I1213 00:16:24.060514 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:24.061014886+00:00 stderr F I1213 00:16:24.060893 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 
59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:24.093463432+00:00 stderr F W1213 00:16:24.093379 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:24.095294502+00:00 stderr F I1213 00:16:24.095242 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.856662ms) 2025-12-13T00:16:24.251586471+00:00 stderr F W1213 00:16:24.251515 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workloads-namespace" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:24.450999547+00:00 stderr F W1213 00:16:24.450834 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-namespace-by-pod" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:24.501622239+00:00 stderr F I1213 00:16:24.501527 1 task_graph.go:481] Running 42 on worker 0 2025-12-13T00:16:24.651865062+00:00 stderr F W1213 00:16:24.651770 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-node-cluster-rsrc-use" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:24.803086092+00:00 stderr F I1213 00:16:24.803027 1 task_graph.go:481] Running 43 on worker 0 2025-12-13T00:16:24.849998454+00:00 stderr F W1213 00:16:24.849947 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-node-rsrc-use" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:25.051428205+00:00 stderr F W1213 00:16:25.051370 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-pod-total" not found. 
It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:25.060557974+00:00 stderr F I1213 00:16:25.060507 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:25.060783240+00:00 stderr F I1213 00:16:25.060755 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:25.060783240+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:25.060783240+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:25.060836562+00:00 stderr F I1213 00:16:25.060817 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (317.938µs) 2025-12-13T00:16:25.060860772+00:00 stderr F I1213 00:16:25.060846 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:25.060961726+00:00 stderr F I1213 00:16:25.060899 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:25.061660825+00:00 stderr F I1213 00:16:25.060981 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:25.061660825+00:00 stderr F I1213 00:16:25.061246 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} 
last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:25.086009669+00:00 stderr F W1213 00:16:25.085970 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:25.088170248+00:00 stderr F I1213 00:16:25.088148 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.309476ms) 2025-12-13T00:16:25.253046772+00:00 stderr F W1213 00:16:25.252984 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-prometheus" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:25.253046772+00:00 stderr F I1213 00:16:25.253018 1 task_graph.go:481] Running 44 on worker 1 2025-12-13T00:16:25.357824813+00:00 stderr F I1213 00:16:25.357218 1 task_graph.go:481] Running 45 on worker 1 2025-12-13T00:16:26.061340908+00:00 stderr F I1213 00:16:26.061265 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:26.061551764+00:00 stderr F I1213 00:16:26.061512 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:26.061551764+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:26.061551764+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:26.061585375+00:00 stderr F I1213 00:16:26.061562 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (306.398µs) 2025-12-13T00:16:26.061585375+00:00 stderr F I1213 00:16:26.061578 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:26.061639956+00:00 stderr F I1213 00:16:26.061613 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:26.061676877+00:00 stderr F I1213 00:16:26.061655 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:26.061954724+00:00 stderr F I1213 00:16:26.061887 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:26.109095422+00:00 stderr F W1213 00:16:26.107203 1 helper.go:97] PrometheusRule "openshift-kube-apiserver/kube-apiserver-recording-rules" not found. It either has already been removed or it has never been installed on this cluster. 
2025-12-13T00:16:26.117140802+00:00 stderr F W1213 00:16:26.115346 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:26.117140802+00:00 stderr F I1213 00:16:26.117063 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (55.483536ms) 2025-12-13T00:16:26.350452254+00:00 stderr F I1213 00:16:26.350403 1 task_graph.go:481] Running 46 on worker 1 2025-12-13T00:16:26.451664628+00:00 stderr F I1213 00:16:26.451631 1 task_graph.go:481] Running 47 on worker 1 2025-12-13T00:16:26.801349049+00:00 stderr F W1213 00:16:26.800951 1 helper.go:97] configmap "openshift-config-managed/grafana-dashboard-api-performance" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:27.000854817+00:00 stderr F I1213 00:16:27.000747 1 task_graph.go:481] Running 48 on worker 0 2025-12-13T00:16:27.062604094+00:00 stderr F I1213 00:16:27.062536 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:27.062860091+00:00 stderr F I1213 00:16:27.062817 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:27.062860091+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:27.062860091+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:27.062884751+00:00 stderr F I1213 00:16:27.062868 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (343.71µs) 2025-12-13T00:16:27.062900292+00:00 stderr F I1213 00:16:27.062884 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:27.063049396+00:00 stderr F I1213 00:16:27.062922 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:27.063076996+00:00 stderr F I1213 00:16:27.063054 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:27.063322383+00:00 stderr F I1213 00:16:27.063268 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:27.085570351+00:00 stderr F W1213 00:16:27.085433 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:27.088270204+00:00 stderr F I1213 00:16:27.088205 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.314721ms) 2025-12-13T00:16:27.505018857+00:00 stderr F I1213 00:16:27.504927 1 task_graph.go:481] Running 49 on worker 0 
2025-12-13T00:16:27.802866961+00:00 stderr F I1213 00:16:27.802778 1 task_graph.go:481] Running 50 on worker 0 2025-12-13T00:16:28.051688617+00:00 stderr F I1213 00:16:28.051621 1 task_graph.go:481] Running 51 on worker 1 2025-12-13T00:16:28.064901718+00:00 stderr F I1213 00:16:28.064148 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:28.064901718+00:00 stderr F I1213 00:16:28.064413 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:28.064901718+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:28.064901718+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:28.064901718+00:00 stderr F I1213 00:16:28.064452 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (312.728µs) 2025-12-13T00:16:28.064901718+00:00 stderr F I1213 00:16:28.064466 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:28.064901718+00:00 stderr F I1213 00:16:28.064504 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:28.064901718+00:00 stderr F I1213 00:16:28.064541 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:28.064901718+00:00 stderr F I1213 00:16:28.064761 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:28.097028395+00:00 stderr F W1213 00:16:28.096921 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:28.103477002+00:00 stderr F I1213 00:16:28.103427 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.952424ms) 2025-12-13T00:16:29.065352342+00:00 stderr F I1213 00:16:29.065238 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:29.065754423+00:00 stderr F I1213 00:16:29.065690 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:29.065754423+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:29.065754423+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:29.065851366+00:00 stderr F I1213 00:16:29.065794 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (566.795µs) 2025-12-13T00:16:29.065851366+00:00 stderr F I1213 00:16:29.065826 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:29.065974059+00:00 stderr F I1213 00:16:29.065904 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:29.066071151+00:00 stderr F I1213 00:16:29.066030 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:29.066740159+00:00 stderr F I1213 00:16:29.066609 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:29.097322435+00:00 stderr F W1213 00:16:29.097250 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:29.100628716+00:00 stderr F I1213 00:16:29.100528 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.699448ms) 2025-12-13T00:16:30.051260149+00:00 stderr F I1213 00:16:30.051141 1 task_graph.go:481] Running 52 on worker 0 
2025-12-13T00:16:30.067152093+00:00 stderr F I1213 00:16:30.067043 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:30.067571174+00:00 stderr F I1213 00:16:30.067517 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:30.067571174+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:30.067571174+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:30.067645536+00:00 stderr F I1213 00:16:30.067608 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (578.905µs) 2025-12-13T00:16:30.067645536+00:00 stderr F I1213 00:16:30.067638 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:30.067787190+00:00 stderr F I1213 00:16:30.067722 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:30.067843012+00:00 stderr F I1213 00:16:30.067802 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:30.068400887+00:00 stderr F I1213 00:16:30.068325 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 
59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:30.093187374+00:00 stderr F W1213 00:16:30.093085 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:30.096683419+00:00 stderr F I1213 00:16:30.096601 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.957121ms) 2025-12-13T00:16:31.052067502+00:00 stderr F W1213 00:16:31.051988 1 helper.go:97] PrometheusRule "openshift-authentication-operator/authentication-operator" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:31.052067502+00:00 stderr F I1213 00:16:31.052032 1 task_graph.go:481] Running 53 on worker 0 2025-12-13T00:16:31.052202956+00:00 stderr F I1213 00:16:31.052176 1 task_graph.go:481] Running 54 on worker 0 2025-12-13T00:16:31.068063739+00:00 stderr F I1213 00:16:31.067997 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:31.068275535+00:00 stderr F I1213 00:16:31.068243 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:31.068275535+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:31.068275535+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:31.068312006+00:00 stderr F I1213 00:16:31.068289 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (301.228µs) 2025-12-13T00:16:31.068312006+00:00 stderr F I1213 00:16:31.068306 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:31.068382918+00:00 stderr F I1213 00:16:31.068348 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:31.068396328+00:00 stderr F I1213 00:16:31.068390 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:31.068662656+00:00 stderr F I1213 00:16:31.068601 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:31.100785683+00:00 stderr F W1213 00:16:31.100684 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:31.103770385+00:00 stderr F I1213 00:16:31.103710 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.399237ms) 2025-12-13T00:16:31.452148399+00:00 stderr F I1213 00:16:31.452051 1 task_graph.go:481] Running 55 on worker 0 
2025-12-13T00:16:32.068626666+00:00 stderr F I1213 00:16:32.068536 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:32.068950415+00:00 stderr F I1213 00:16:32.068897 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:32.068950415+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:32.068950415+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:32.069037617+00:00 stderr F I1213 00:16:32.069000 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (474.523µs) 2025-12-13T00:16:32.069037617+00:00 stderr F I1213 00:16:32.069029 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:32.069135290+00:00 stderr F I1213 00:16:32.069097 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:32.069179021+00:00 stderr F I1213 00:16:32.069160 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:32.069542611+00:00 stderr F I1213 00:16:32.069486 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 
59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:32.098165513+00:00 stderr F W1213 00:16:32.098039 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:32.103352054+00:00 stderr F I1213 00:16:32.103260 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.225414ms) 2025-12-13T00:16:32.351984175+00:00 stderr F I1213 00:16:32.351881 1 task_graph.go:481] Running 56 on worker 0 2025-12-13T00:16:33.069990205+00:00 stderr F I1213 00:16:33.069917 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:33.070219512+00:00 stderr F I1213 00:16:33.070183 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:33.070219512+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:33.070219512+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:33.070234342+00:00 stderr F I1213 00:16:33.070224 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (315.91µs) 2025-12-13T00:16:33.070246343+00:00 stderr F I1213 00:16:33.070237 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:33.070309954+00:00 stderr F I1213 00:16:33.070276 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:33.070338215+00:00 stderr F I1213 00:16:33.070320 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:33.070566491+00:00 stderr F I1213 00:16:33.070523 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:33.107191531+00:00 stderr F W1213 00:16:33.107108 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:33.110426410+00:00 stderr F I1213 00:16:33.110336 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.091245ms) 2025-12-13T00:16:34.071906069+00:00 stderr F I1213 00:16:34.071351 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:16:34.072216547+00:00 stderr F I1213 00:16:34.072168 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:34.072216547+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:34.072216547+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:34.072261929+00:00 stderr F I1213 00:16:34.072236 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (897.534µs) 2025-12-13T00:16:34.072277309+00:00 stderr F I1213 00:16:34.072259 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:34.072357561+00:00 stderr F I1213 00:16:34.072311 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:34.072376892+00:00 stderr F I1213 00:16:34.072366 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:34.072724301+00:00 stderr F I1213 00:16:34.072664 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:34.103703227+00:00 stderr F W1213 00:16:34.103570 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:34.105474236+00:00 stderr F I1213 00:16:34.105339 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.077284ms) 2025-12-13T00:16:34.252647586+00:00 stderr F W1213 00:16:34.252532 1 helper.go:97] configmap "openshift-machine-api/cluster-autoscaler-operator-ca" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:34.353376387+00:00 stderr F W1213 00:16:34.351324 1 helper.go:97] PrometheusRule "openshift-machine-api/cluster-autoscaler-operator-rules" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:34.353376387+00:00 stderr F I1213 00:16:34.351383 1 task_graph.go:481] Running 57 on worker 0 2025-12-13T00:16:34.452164945+00:00 stderr F I1213 00:16:34.452084 1 task_graph.go:481] Running 58 on worker 0 2025-12-13T00:16:34.804785675+00:00 stderr F I1213 00:16:34.804159 1 task_graph.go:481] Running 59 on worker 1 2025-12-13T00:16:35.072570919+00:00 stderr F I1213 00:16:35.072492 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:35.074359838+00:00 stderr F I1213 00:16:35.072776 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:35.074359838+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:35.074359838+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:35.074359838+00:00 stderr F I1213 00:16:35.072824 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (344.619µs) 2025-12-13T00:16:35.074359838+00:00 stderr F I1213 00:16:35.072837 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:35.074359838+00:00 stderr F I1213 00:16:35.072874 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:35.074359838+00:00 stderr F I1213 00:16:35.072915 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:35.074359838+00:00 stderr F I1213 00:16:35.073163 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:35.095016742+00:00 stderr F W1213 00:16:35.094208 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:35.096182424+00:00 stderr F I1213 00:16:35.096051 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.209624ms) 2025-12-13T00:16:35.352544055+00:00 stderr F I1213 00:16:35.352471 1 task_graph.go:481] Running 60 on worker 0 
2025-12-13T00:16:35.401698818+00:00 stderr F I1213 00:16:35.401592 1 task_graph.go:481] Running 61 on worker 1 2025-12-13T00:16:35.451947961+00:00 stderr F I1213 00:16:35.451862 1 task_graph.go:481] Running 62 on worker 0 2025-12-13T00:16:35.804915211+00:00 stderr F I1213 00:16:35.804215 1 task_graph.go:481] Running 63 on worker 1 2025-12-13T00:16:36.073374353+00:00 stderr F I1213 00:16:36.073307 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:36.073699052+00:00 stderr F I1213 00:16:36.073671 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:36.073699052+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:36.073699052+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:36.073775564+00:00 stderr F I1213 00:16:36.073728 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (429.441µs) 2025-12-13T00:16:36.073775564+00:00 stderr F I1213 00:16:36.073748 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:36.073823455+00:00 stderr F I1213 00:16:36.073795 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:36.073868526+00:00 stderr F I1213 00:16:36.073845 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:36.074198355+00:00 stderr F I1213 00:16:36.074142 1 status.go:100] merge into existing history 
completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:36.143962821+00:00 stderr F W1213 00:16:36.143859 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:36.145369748+00:00 stderr F I1213 00:16:36.145308 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (71.558094ms) 2025-12-13T00:16:37.074399862+00:00 stderr F I1213 00:16:37.074344 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:37.075029959+00:00 stderr F I1213 00:16:37.075003 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:37.075029959+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:37.075029959+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:37.075173233+00:00 stderr F I1213 00:16:37.075143 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (810.792µs) 2025-12-13T00:16:37.075223494+00:00 stderr F I1213 00:16:37.075205 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:37.075311217+00:00 stderr F I1213 00:16:37.075288 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:37.075384409+00:00 stderr F I1213 00:16:37.075370 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:37.075753209+00:00 stderr F I1213 00:16:37.075717 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:37.106925741+00:00 stderr F W1213 00:16:37.106884 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:37.110935080+00:00 stderr F I1213 00:16:37.110841 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.596192ms) 2025-12-13T00:16:37.401492426+00:00 stderr F I1213 00:16:37.401408 1 task_graph.go:481] Running 64 on worker 1 
2025-12-13T00:16:38.075676138+00:00 stderr F I1213 00:16:38.075611 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:38.076017367+00:00 stderr F I1213 00:16:38.075995 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:38.076017367+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:38.076017367+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:38.076095509+00:00 stderr F I1213 00:16:38.076064 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (464.162µs) 2025-12-13T00:16:38.076095509+00:00 stderr F I1213 00:16:38.076079 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:38.076146971+00:00 stderr F I1213 00:16:38.076116 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:38.077638502+00:00 stderr F I1213 00:16:38.076160 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:38.077638502+00:00 stderr F I1213 00:16:38.076382 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 
59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:38.098416259+00:00 stderr F W1213 00:16:38.098369 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:38.100832425+00:00 stderr F I1213 00:16:38.100793 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.706085ms) 2025-12-13T00:16:38.141489176+00:00 stderr F I1213 00:16:38.141418 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:38.141573308+00:00 stderr F I1213 00:16:38.141531 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:38.141623190+00:00 stderr F I1213 00:16:38.141595 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:38.141922498+00:00 stderr F I1213 00:16:38.141870 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:38.164233027+00:00 stderr F W1213 00:16:38.164157 1 warnings.go:70] unknown field "spec.signatureStores" 
2025-12-13T00:16:38.168478383+00:00 stderr F I1213 00:16:38.166117 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.708135ms) 2025-12-13T00:16:38.400324975+00:00 stderr F I1213 00:16:38.400277 1 task_graph.go:481] Running 65 on worker 1 2025-12-13T00:16:38.851674592+00:00 stderr F W1213 00:16:38.851586 1 helper.go:97] configmap "openshift-operator-lifecycle-manager/olm-operators" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:38.955515968+00:00 stderr F W1213 00:16:38.955352 1 helper.go:97] CatalogSource "openshift-operator-lifecycle-manager/olm-operators" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:39.077113319+00:00 stderr F I1213 00:16:39.077029 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:39.077522331+00:00 stderr F I1213 00:16:39.077487 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:39.077522331+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:39.077522331+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:39.077619883+00:00 stderr F I1213 00:16:39.077589 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (657.918µs) 2025-12-13T00:16:39.077628433+00:00 stderr F I1213 00:16:39.077620 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:39.077725786+00:00 stderr F I1213 00:16:39.077690 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:39.077790538+00:00 stderr F I1213 00:16:39.077767 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:39.078274031+00:00 stderr F I1213 00:16:39.078218 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:39.119657401+00:00 stderr F W1213 00:16:39.119587 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:39.121686677+00:00 stderr F I1213 00:16:39.121640 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.017823ms) 2025-12-13T00:16:39.250572977+00:00 stderr F W1213 00:16:39.250506 1 helper.go:97] Subscription 
"openshift-operator-lifecycle-manager/packageserver" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:39.302332560+00:00 stderr F I1213 00:16:39.302262 1 task_graph.go:481] Running 66 on worker 1 2025-12-13T00:16:39.302332560+00:00 stderr F I1213 00:16:39.302305 1 task_graph.go:481] Running 67 on worker 1 2025-12-13T00:16:39.451965228+00:00 stderr F I1213 00:16:39.451911 1 task_graph.go:481] Running 68 on worker 0 2025-12-13T00:16:39.653135591+00:00 stderr F I1213 00:16:39.653031 1 task_graph.go:481] Running 69 on worker 0 2025-12-13T00:16:39.703052944+00:00 stderr F I1213 00:16:39.702268 1 task_graph.go:481] Running 70 on worker 1 2025-12-13T00:16:39.703052944+00:00 stderr F I1213 00:16:39.702340 1 task_graph.go:481] Running 71 on worker 1 2025-12-13T00:16:39.754500960+00:00 stderr F I1213 00:16:39.754113 1 task_graph.go:481] Running 72 on worker 0 2025-12-13T00:16:39.805591265+00:00 stderr F I1213 00:16:39.803849 1 task_graph.go:481] Running 73 on worker 1 2025-12-13T00:16:40.078454937+00:00 stderr F I1213 00:16:40.078395 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:40.078939670+00:00 stderr F I1213 00:16:40.078887 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:40.078939670+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:40.078939670+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:40.079119155+00:00 stderr F I1213 00:16:40.079089 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (726.219µs) 2025-12-13T00:16:40.079189398+00:00 stderr F I1213 00:16:40.079149 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:40.079277551+00:00 stderr F I1213 00:16:40.079244 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:40.079320092+00:00 stderr F I1213 00:16:40.079297 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:40.079584419+00:00 stderr F I1213 00:16:40.079529 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:40.103561993+00:00 stderr F W1213 00:16:40.103525 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:40.105405414+00:00 stderr F I1213 00:16:40.105376 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.234327ms) 2025-12-13T00:16:40.950485685+00:00 stderr F I1213 00:16:40.950435 1 task_graph.go:481] Running 74 on worker 0 
2025-12-13T00:16:41.079953920+00:00 stderr F I1213 00:16:41.079874 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:41.080279019+00:00 stderr F I1213 00:16:41.080241 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:41.080279019+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:41.080279019+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:41.080363791+00:00 stderr F I1213 00:16:41.080324 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (462.752µs) 2025-12-13T00:16:41.080363791+00:00 stderr F I1213 00:16:41.080348 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:41.080433903+00:00 stderr F I1213 00:16:41.080397 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:41.080475344+00:00 stderr F I1213 00:16:41.080456 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:41.080879135+00:00 stderr F I1213 00:16:41.080829 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 
59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:41.124533108+00:00 stderr F W1213 00:16:41.124124 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:41.126098370+00:00 stderr F I1213 00:16:41.126060 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (45.709158ms) 2025-12-13T00:16:42.082102000+00:00 stderr F I1213 00:16:42.081303 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:42.082605434+00:00 stderr F I1213 00:16:42.082555 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:42.082605434+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:42.082605434+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:42.082847941+00:00 stderr F I1213 00:16:42.082801 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.510541ms) 2025-12-13T00:16:42.082847941+00:00 stderr F I1213 00:16:42.082839 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:42.082960464+00:00 stderr F I1213 00:16:42.082915 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:42.083060287+00:00 stderr F I1213 00:16:42.083029 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:42.083530719+00:00 stderr F I1213 00:16:42.083471 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:42.105634323+00:00 stderr F W1213 00:16:42.105557 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:42.107016920+00:00 stderr F I1213 00:16:42.106968 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.129108ms) 2025-12-13T00:16:42.555936602+00:00 stderr F W1213 00:16:42.555864 1 warnings.go:70] 
flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+; use flowcontrol.apiserver.k8s.io/v1 FlowSchema 2025-12-13T00:16:42.556724593+00:00 stderr F I1213 00:16:42.556659 1 task_graph.go:481] Running 75 on worker 0 2025-12-13T00:16:43.051043543+00:00 stderr F I1213 00:16:43.050984 1 task_graph.go:481] Running 76 on worker 0 2025-12-13T00:16:43.083207462+00:00 stderr F I1213 00:16:43.083099 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:43.083460679+00:00 stderr F I1213 00:16:43.083427 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:43.083460679+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:43.083460679+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:43.083498750+00:00 stderr F I1213 00:16:43.083475 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (387.301µs) 2025-12-13T00:16:43.083498750+00:00 stderr F I1213 00:16:43.083493 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:43.083592283+00:00 stderr F I1213 00:16:43.083531 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:43.083592283+00:00 stderr F I1213 00:16:43.083575 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:43.083856250+00:00 stderr F I1213 00:16:43.083825 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:43.107954838+00:00 stderr F W1213 00:16:43.107884 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:43.109540771+00:00 stderr F I1213 00:16:43.109516 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.01906ms) 2025-12-13T00:16:43.153234785+00:00 stderr F I1213 00:16:43.152987 1 task_graph.go:481] Running 77 on worker 0 
2025-12-13T00:16:43.153234785+00:00 stderr F I1213 00:16:43.153180 1 task_graph.go:481] Running 78 on worker 0 2025-12-13T00:16:43.951099386+00:00 stderr F W1213 00:16:43.951009 1 helper.go:97] clusterrolebinding "default-account-openshift-machine-config-operator" not found. It either has already been removed or it has never been installed on this cluster. 2025-12-13T00:16:43.951099386+00:00 stderr F I1213 00:16:43.951058 1 task_graph.go:478] Canceled worker 1 while waiting for work 2025-12-13T00:16:43.951099386+00:00 stderr F I1213 00:16:43.951068 1 task_graph.go:478] Canceled worker 0 while waiting for work 2025-12-13T00:16:43.951099386+00:00 stderr F I1213 00:16:43.951079 1 task_graph.go:527] Workers finished 2025-12-13T00:16:43.951099386+00:00 stderr F I1213 00:16:43.951091 1 task_graph.go:550] Result of work: [] 2025-12-13T00:16:44.084488079+00:00 stderr F I1213 00:16:44.084385 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:44.085016273+00:00 stderr F I1213 00:16:44.084979 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:44.085016273+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:44.085016273+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:44.085116336+00:00 stderr F I1213 00:16:44.085081 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (710.349µs) 2025-12-13T00:16:44.085145697+00:00 stderr F I1213 00:16:44.085111 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:44.085212649+00:00 stderr F I1213 00:16:44.085180 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:44.085445955+00:00 stderr F I1213 00:16:44.085406 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:44.085890887+00:00 stderr F I1213 00:16:44.085831 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:44.133132098+00:00 stderr F W1213 00:16:44.133088 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:44.134645539+00:00 stderr F I1213 00:16:44.134624 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.510983ms) 2025-12-13T00:16:45.085275443+00:00 stderr F I1213 00:16:45.085186 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:16:45.085719105+00:00 stderr F I1213 00:16:45.085642 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:45.085719105+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:45.085719105+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:45.085789336+00:00 stderr F I1213 00:16:45.085743 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (569.896µs) 2025-12-13T00:16:45.085789336+00:00 stderr F I1213 00:16:45.085770 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:45.085884169+00:00 stderr F I1213 00:16:45.085833 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:45.086003552+00:00 stderr F I1213 00:16:45.085901 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:45.086375742+00:00 stderr F I1213 00:16:45.086282 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:45.152002494+00:00 stderr F W1213 00:16:45.151383 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:45.156049025+00:00 stderr F I1213 00:16:45.153174 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (67.401671ms) 2025-12-13T00:16:46.086160188+00:00 stderr F I1213 00:16:46.086083 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:46.086652891+00:00 stderr F I1213 00:16:46.086601 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:46.086652891+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:46.086652891+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:46.086757954+00:00 stderr F I1213 00:16:46.086715 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (667.158µs) 2025-12-13T00:16:46.086766664+00:00 stderr F I1213 00:16:46.086754 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:46.086912628+00:00 stderr F I1213 00:16:46.086842 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:46.086963929+00:00 stderr F I1213 00:16:46.086928 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:46.087614677+00:00 stderr F I1213 00:16:46.087515 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:46.140777219+00:00 stderr F W1213 00:16:46.140663 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:46.142991000+00:00 stderr F I1213 00:16:46.142855 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (56.099432ms) 2025-12-13T00:16:47.087755152+00:00 stderr F I1213 00:16:47.087654 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:16:47.088087121+00:00 stderr F I1213 00:16:47.088018 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:47.088087121+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:47.088087121+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:47.088129222+00:00 stderr F I1213 00:16:47.088080 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (439.352µs) 2025-12-13T00:16:47.088129222+00:00 stderr F I1213 00:16:47.088097 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:47.088212025+00:00 stderr F I1213 00:16:47.088152 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:47.088283728+00:00 stderr F I1213 00:16:47.088230 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:47.088606256+00:00 stderr F I1213 00:16:47.088512 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:47.137345727+00:00 stderr F W1213 00:16:47.137257 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:47.140733769+00:00 stderr F I1213 00:16:47.140659 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (52.553866ms) 2025-12-13T00:16:48.089215474+00:00 stderr F I1213 00:16:48.089135 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:48.089480042+00:00 stderr F I1213 00:16:48.089431 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:48.089480042+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:48.089480042+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:48.089504312+00:00 stderr F I1213 00:16:48.089482 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (359.32µs) 2025-12-13T00:16:48.089504312+00:00 stderr F I1213 00:16:48.089499 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:48.089591815+00:00 stderr F I1213 00:16:48.089543 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:48.089611375+00:00 stderr F I1213 00:16:48.089603 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:48.089892633+00:00 stderr F I1213 00:16:48.089839 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:48.115633395+00:00 stderr F W1213 00:16:48.115540 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:48.117538968+00:00 stderr F I1213 00:16:48.117484 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.980274ms) 2025-12-13T00:16:49.090327236+00:00 stderr F I1213 00:16:49.090235 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:16:49.090658915+00:00 stderr F I1213 00:16:49.090609 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:49.090658915+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:49.090658915+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:49.090730026+00:00 stderr F I1213 00:16:49.090683 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (463.442µs) 2025-12-13T00:16:49.090730026+00:00 stderr F I1213 00:16:49.090707 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:49.090801608+00:00 stderr F I1213 00:16:49.090755 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:49.090845150+00:00 stderr F I1213 00:16:49.090817 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:49.091325093+00:00 stderr F I1213 00:16:49.091243 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:49.134091941+00:00 stderr F W1213 00:16:49.134019 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:49.136091026+00:00 stderr F I1213 00:16:49.136047 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (45.335969ms) 2025-12-13T00:16:50.091201481+00:00 stderr F I1213 00:16:50.091126 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:50.091534751+00:00 stderr F I1213 00:16:50.091482 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:50.091534751+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:50.091534751+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:50.091571972+00:00 stderr F I1213 00:16:50.091538 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (423.653µs) 2025-12-13T00:16:50.091571972+00:00 stderr F I1213 00:16:50.091552 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:50.091627373+00:00 stderr F I1213 00:16:50.091594 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:50.091675984+00:00 stderr F I1213 00:16:50.091642 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:50.091925681+00:00 stderr F I1213 00:16:50.091862 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:50.130887255+00:00 stderr F W1213 00:16:50.130786 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:50.134325069+00:00 stderr F I1213 00:16:50.134253 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.692086ms) 2025-12-13T00:16:51.092075107+00:00 stderr F I1213 00:16:51.091969 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:16:51.092633902+00:00 stderr F I1213 00:16:51.092574 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:51.092633902+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:51.092633902+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:51.092710594+00:00 stderr F I1213 00:16:51.092678 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (781.781µs) 2025-12-13T00:16:51.092725904+00:00 stderr F I1213 00:16:51.092706 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:51.092857238+00:00 stderr F I1213 00:16:51.092775 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:51.092872118+00:00 stderr F I1213 00:16:51.092847 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:51.093329780+00:00 stderr F I1213 00:16:51.093263 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:51.116428131+00:00 stderr F W1213 00:16:51.116354 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:51.123237098+00:00 stderr F I1213 00:16:51.123179 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.464452ms) 2025-12-13T00:16:52.092970482+00:00 stderr F I1213 00:16:52.092858 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:52.093559907+00:00 stderr F I1213 00:16:52.093426 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:52.093559907+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:52.093559907+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:52.093739052+00:00 stderr F I1213 00:16:52.093717 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (882.624µs) 2025-12-13T00:16:52.093897477+00:00 stderr F I1213 00:16:52.093825 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:52.094136173+00:00 stderr F I1213 00:16:52.094077 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:52.094198095+00:00 stderr F I1213 00:16:52.094172 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:52.094780341+00:00 stderr F I1213 00:16:52.094712 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:52.123386792+00:00 stderr F W1213 00:16:52.123324 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:52.125289715+00:00 stderr F I1213 00:16:52.125232 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.4629ms) 2025-12-13T00:16:53.094595049+00:00 stderr F I1213 00:16:53.094514 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:16:53.095001390+00:00 stderr F I1213 00:16:53.094961 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:53.095001390+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:53.095001390+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:53.095057321+00:00 stderr F I1213 00:16:53.095026 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (553.165µs) 2025-12-13T00:16:53.095057321+00:00 stderr F I1213 00:16:53.095042 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:53.095132543+00:00 stderr F I1213 00:16:53.095098 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:53.095173244+00:00 stderr F I1213 00:16:53.095158 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:53.095526024+00:00 stderr F I1213 00:16:53.095478 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:53.140721958+00:00 stderr F W1213 00:16:53.140655 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:53.142867897+00:00 stderr F I1213 00:16:53.142802 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.756394ms) 2025-12-13T00:16:53.142867897+00:00 stderr F I1213 00:16:53.142835 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:53.143038571+00:00 stderr F I1213 00:16:53.142917 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:53.143038571+00:00 stderr F I1213 00:16:53.143016 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:53.143342830+00:00 stderr F I1213 00:16:53.143274 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:53.190323553+00:00 stderr F W1213 00:16:53.190287 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:53.191833334+00:00 stderr F I1213 00:16:53.191812 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (48.977127ms) 2025-12-13T00:16:54.096060460+00:00 stderr F I1213 00:16:54.095978 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:54.096462101+00:00 stderr F I1213 00:16:54.096412 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:54.096462101+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:54.096462101+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:54.096573574+00:00 stderr F I1213 00:16:54.096521 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (557.235µs) 2025-12-13T00:16:54.096573574+00:00 stderr F I1213 00:16:54.096555 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:54.096693378+00:00 stderr F I1213 00:16:54.096642 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:54.096779650+00:00 stderr F I1213 00:16:54.096748 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:54.097189801+00:00 stderr F I1213 00:16:54.097128 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:54.127403436+00:00 stderr F W1213 00:16:54.127364 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:54.129364750+00:00 stderr F I1213 00:16:54.129337 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.780785ms) 2025-12-13T00:16:55.243526360+00:00 stderr F I1213 00:16:55.233217 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:55.243526360+00:00 stderr F I1213 00:16:55.233535 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:55.243526360+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:55.243526360+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:55.243526360+00:00 stderr F I1213 00:16:55.233606 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (400.901µs) 2025-12-13T00:16:55.243526360+00:00 stderr F I1213 00:16:55.233625 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:55.243526360+00:00 stderr F I1213 00:16:55.233675 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:55.243526360+00:00 stderr F I1213 00:16:55.233723 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:55.243526360+00:00 stderr F I1213 00:16:55.234596 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:55.259042553+00:00 stderr F W1213 00:16:55.259006 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:55.260571975+00:00 stderr F I1213 00:16:55.260547 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.918716ms) 2025-12-13T00:16:56.234231727+00:00 stderr F I1213 00:16:56.234169 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:16:56.234743012+00:00 stderr F I1213 00:16:56.234711 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:56.234743012+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:56.234743012+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:56.234895936+00:00 stderr F I1213 00:16:56.234865 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (735.591µs) 2025-12-13T00:16:56.235014709+00:00 stderr F I1213 00:16:56.234969 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:56.235110182+00:00 stderr F I1213 00:16:56.235059 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:56.235137192+00:00 stderr F I1213 00:16:56.235127 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:56.235743769+00:00 stderr F I1213 00:16:56.235621 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:56.273959933+00:00 stderr F W1213 00:16:56.273847 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:56.277347695+00:00 stderr F I1213 00:16:56.277276 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.305435ms) 2025-12-13T00:16:57.235783682+00:00 stderr F I1213 00:16:57.235739 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:57.236145712+00:00 stderr F I1213 00:16:57.236127 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:57.236145712+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:57.236145712+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:57.236232434+00:00 stderr F I1213 00:16:57.236216 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (487.143µs) 2025-12-13T00:16:57.236265025+00:00 stderr F I1213 00:16:57.236254 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:57.236374658+00:00 stderr F I1213 00:16:57.236354 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:57.236434400+00:00 stderr F I1213 00:16:57.236424 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:57.236728878+00:00 stderr F I1213 00:16:57.236699 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:57.269056401+00:00 stderr F W1213 00:16:57.268994 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:57.270964002+00:00 stderr F I1213 00:16:57.270881 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.624775ms) 2025-12-13T00:16:58.237195892+00:00 stderr F I1213 00:16:58.237056 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:16:58.237735407+00:00 stderr F I1213 00:16:58.237663 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:58.237735407+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:16:58.237735407+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:58.237796029+00:00 stderr F I1213 00:16:58.237758 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (716.391µs) 2025-12-13T00:16:58.237826450+00:00 stderr F I1213 00:16:58.237805 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:58.237990864+00:00 stderr F I1213 00:16:58.237877 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:58.238026615+00:00 stderr F I1213 00:16:58.237992 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:58.238607381+00:00 stderr F I1213 00:16:58.238494 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:58.283378704+00:00 stderr F W1213 00:16:58.283309 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:58.288043810+00:00 stderr F I1213 00:16:58.287992 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.19667ms) 2025-12-13T00:16:59.238166201+00:00 stderr F I1213 00:16:59.238069 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:16:59.238501241+00:00 stderr F I1213 00:16:59.238441 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:16:59.238501241+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:16:59.238501241+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:16:59.238571342+00:00 stderr F I1213 00:16:59.238516 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (460.513µs) 2025-12-13T00:16:59.238571342+00:00 stderr F I1213 00:16:59.238539 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:16:59.238647564+00:00 stderr F I1213 00:16:59.238591 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:16:59.238709316+00:00 stderr F I1213 00:16:59.238667 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:16:59.239066476+00:00 stderr F I1213 00:16:59.238973 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:16:59.285522065+00:00 stderr F W1213 00:16:59.285450 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:16:59.288843365+00:00 stderr F I1213 00:16:59.288758 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.212932ms) 2025-12-13T00:17:00.239565321+00:00 stderr F I1213 00:17:00.239511 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:00.240010253+00:00 stderr F I1213 00:17:00.239988 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:00.240010253+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:00.240010253+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:00.240136907+00:00 stderr F I1213 00:17:00.240109 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (607.667µs) 2025-12-13T00:17:00.240202378+00:00 stderr F I1213 00:17:00.240166 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:00.240315881+00:00 stderr F I1213 00:17:00.240278 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:00.240385713+00:00 stderr F I1213 00:17:00.240357 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:00.240824446+00:00 stderr F I1213 00:17:00.240774 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:00.284734225+00:00 stderr F W1213 00:17:00.284617 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:00.287750367+00:00 stderr F I1213 00:17:00.287672 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.507398ms) 2025-12-13T00:17:01.241012053+00:00 stderr F I1213 00:17:01.240808 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:01.241228478+00:00 stderr F I1213 00:17:01.241156 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:01.241228478+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:01.241228478+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:01.241228478+00:00 stderr F I1213 00:17:01.241206 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (409.382µs) 2025-12-13T00:17:01.241228478+00:00 stderr F I1213 00:17:01.241221 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:01.241306991+00:00 stderr F I1213 00:17:01.241268 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:01.241329831+00:00 stderr F I1213 00:17:01.241314 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:01.241619279+00:00 stderr F I1213 00:17:01.241538 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:01.281573950+00:00 stderr F W1213 00:17:01.281497 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:01.284513140+00:00 stderr F I1213 00:17:01.284446 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.21876ms) 2025-12-13T00:17:02.241658782+00:00 stderr F I1213 00:17:02.241527 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:02.241895758+00:00 stderr F I1213 00:17:02.241851 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:02.241895758+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:02.241895758+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:02.241997541+00:00 stderr F I1213 00:17:02.241961 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (448.252µs) 2025-12-13T00:17:02.241997541+00:00 stderr F I1213 00:17:02.241985 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:02.242093954+00:00 stderr F I1213 00:17:02.242055 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:02.242165145+00:00 stderr F I1213 00:17:02.242111 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:02.242513965+00:00 stderr F I1213 00:17:02.242422 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:02.294619658+00:00 stderr F W1213 00:17:02.294525 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:02.296745026+00:00 stderr F I1213 00:17:02.296658 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (54.669452ms) 2025-12-13T00:17:03.242354732+00:00 stderr F I1213 00:17:03.242102 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:03.242652000+00:00 stderr F I1213 00:17:03.242581 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:03.242652000+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:03.242652000+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:03.242771934+00:00 stderr F I1213 00:17:03.242707 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (623.487µs) 2025-12-13T00:17:03.242771934+00:00 stderr F I1213 00:17:03.242740 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:03.242874156+00:00 stderr F I1213 00:17:03.242817 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:03.243011490+00:00 stderr F I1213 00:17:03.242914 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:03.243561355+00:00 stderr F I1213 00:17:03.243474 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:03.292523203+00:00 stderr F W1213 00:17:03.292399 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:03.294661571+00:00 stderr F I1213 00:17:03.294576 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.832906ms) 2025-12-13T00:17:04.243635579+00:00 stderr F I1213 00:17:04.243554 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:04.244030480+00:00 stderr F I1213 00:17:04.243984 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:04.244030480+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:04.244030480+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:04.244155454+00:00 stderr F I1213 00:17:04.244075 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (563.006µs) 2025-12-13T00:17:04.244155454+00:00 stderr F I1213 00:17:04.244141 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:04.244292247+00:00 stderr F I1213 00:17:04.244222 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:04.244360819+00:00 stderr F I1213 00:17:04.244317 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:04.244807461+00:00 stderr F I1213 00:17:04.244722 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:04.306914087+00:00 stderr F W1213 00:17:04.302821 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:04.306914087+00:00 stderr F I1213 00:17:04.305286 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (61.142769ms) 2025-12-13T00:17:05.244285209+00:00 stderr F I1213 00:17:05.244174 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:05.244742041+00:00 stderr F I1213 00:17:05.244674 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:05.244742041+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:05.244742041+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:05.244850784+00:00 stderr F I1213 00:17:05.244801 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (637.917µs) 2025-12-13T00:17:05.244850784+00:00 stderr F I1213 00:17:05.244838 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:05.245154392+00:00 stderr F I1213 00:17:05.244986 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:05.245154392+00:00 stderr F I1213 00:17:05.245085 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:05.245633505+00:00 stderr F I1213 00:17:05.245551 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:05.290690726+00:00 stderr F W1213 00:17:05.290580 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:05.293869742+00:00 stderr F I1213 00:17:05.293794 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.951567ms) 2025-12-13T00:17:06.245736310+00:00 stderr F I1213 00:17:06.245614 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:06.246264314+00:00 stderr F I1213 00:17:06.246194 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:06.246264314+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:06.246264314+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:06.246381097+00:00 stderr F I1213 00:17:06.246322 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (720.559µs) 2025-12-13T00:17:06.246381097+00:00 stderr F I1213 00:17:06.246360 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:06.246533191+00:00 stderr F I1213 00:17:06.246457 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:06.246705066+00:00 stderr F I1213 00:17:06.246557 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:06.247267812+00:00 stderr F I1213 00:17:06.247097 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:06.298212013+00:00 stderr F W1213 00:17:06.298131 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:06.301405370+00:00 stderr F I1213 00:17:06.301362 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (54.995853ms) 2025-12-13T00:17:07.247546411+00:00 stderr F I1213 00:17:07.247453 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:07.247960192+00:00 stderr F I1213 00:17:07.247885 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:07.247960192+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:07.247960192+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:07.248111206+00:00 stderr F I1213 00:17:07.248057 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (615.647µs) 2025-12-13T00:17:07.248111206+00:00 stderr F I1213 00:17:07.248103 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:07.248251150+00:00 stderr F I1213 00:17:07.248194 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:07.248335582+00:00 stderr F I1213 00:17:07.248291 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:07.248790025+00:00 stderr F I1213 00:17:07.248714 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:07.292894929+00:00 stderr F W1213 00:17:07.292848 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:07.295403978+00:00 stderr F I1213 00:17:07.295371 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.268761ms) 2025-12-13T00:17:08.141145617+00:00 stderr F I1213 00:17:08.141073 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:08.141452335+00:00 stderr F I1213 00:17:08.141348 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:08.141554388+00:00 stderr F I1213 00:17:08.141481 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:08.142211046+00:00 stderr F I1213 00:17:08.142105 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:08.184814799+00:00 stderr F W1213 00:17:08.184695 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:08.187702858+00:00 stderr F I1213 00:17:08.187624 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.576952ms) 2025-12-13T00:17:08.248270103+00:00 stderr F I1213 00:17:08.248194 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:08.248730985+00:00 stderr F I1213 00:17:08.248698 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:08.248730985+00:00 stderr 
F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:08.248730985+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:08.248883979+00:00 stderr F I1213 00:17:08.248854 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (673.498µs) 2025-12-13T00:17:08.249010623+00:00 stderr F I1213 00:17:08.248925 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:08.249190077+00:00 stderr F I1213 00:17:08.249146 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:08.249298620+00:00 stderr F I1213 00:17:08.249277 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:08.249798394+00:00 stderr F I1213 00:17:08.249746 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:08.290857036+00:00 stderr F W1213 00:17:08.290763 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:08.292347197+00:00 stderr F I1213 00:17:08.292254 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (43.327954ms) 2025-12-13T00:17:09.249890258+00:00 stderr F I1213 00:17:09.249834 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:09.250225738+00:00 stderr F I1213 00:17:09.250209 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:09.250225738+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:09.250225738+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:09.250301310+00:00 stderr F I1213 00:17:09.250286 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (465.014µs) 2025-12-13T00:17:09.250331401+00:00 stderr F I1213 00:17:09.250321 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:09.250414483+00:00 stderr F I1213 00:17:09.250395 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:09.250470585+00:00 stderr F I1213 00:17:09.250460 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:09.250700711+00:00 stderr F I1213 00:17:09.250675 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:09.274855030+00:00 stderr F W1213 00:17:09.274753 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:09.276350842+00:00 stderr F I1213 00:17:09.276270 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.945048ms) 2025-12-13T00:17:10.250792435+00:00 stderr F I1213 00:17:10.250738 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:10.251182806+00:00 stderr F I1213 00:17:10.251159 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:10.251182806+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:10.251182806+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:10.251291569+00:00 stderr F I1213 00:17:10.251270 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (556.025µs) 2025-12-13T00:17:10.251335830+00:00 stderr F I1213 00:17:10.251320 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:10.251430032+00:00 stderr F I1213 00:17:10.251405 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:10.251504234+00:00 stderr F I1213 00:17:10.251489 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:10.251874954+00:00 stderr F I1213 00:17:10.251835 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:10.280911978+00:00 stderr F W1213 00:17:10.280828 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:10.283102037+00:00 stderr F I1213 00:17:10.283050 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.726227ms) 2025-12-13T00:17:11.252170774+00:00 stderr F I1213 00:17:11.252064 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:11.252690558+00:00 stderr F I1213 00:17:11.252652 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:11.252690558+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:11.252690558+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:11.252855813+00:00 stderr F I1213 00:17:11.252824 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (776.271µs) 2025-12-13T00:17:11.253015378+00:00 stderr F I1213 00:17:11.252907 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:11.253177352+00:00 stderr F I1213 00:17:11.253139 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:11.253283815+00:00 stderr F I1213 00:17:11.253262 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:11.253742878+00:00 stderr F I1213 00:17:11.253691 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:11.284577920+00:00 stderr F W1213 00:17:11.284517 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:11.287505750+00:00 stderr F I1213 00:17:11.287438 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.531784ms) 2025-12-13T00:17:12.253779881+00:00 stderr F I1213 00:17:12.253704 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:12.254408007+00:00 stderr F I1213 00:17:12.254384 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:12.254408007+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:12.254408007+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:12.254517280+00:00 stderr F I1213 00:17:12.254496 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (808.322µs) 2025-12-13T00:17:12.254561462+00:00 stderr F I1213 00:17:12.254546 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:12.254654034+00:00 stderr F I1213 00:17:12.254630 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:12.254742236+00:00 stderr F I1213 00:17:12.254728 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:12.255088276+00:00 stderr F I1213 00:17:12.255051 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:12.290554485+00:00 stderr F W1213 00:17:12.290484 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:12.294574814+00:00 stderr F I1213 00:17:12.294510 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.953981ms) 2025-12-13T00:17:13.255421617+00:00 stderr F I1213 00:17:13.255298 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:13.255847268+00:00 stderr F I1213 00:17:13.255780 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:13.255847268+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:13.255847268+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:13.255925590+00:00 stderr F I1213 00:17:13.255873 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (616.257µs) 2025-12-13T00:17:13.255925590+00:00 stderr F I1213 00:17:13.255905 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:13.256101696+00:00 stderr F I1213 00:17:13.256043 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:13.256170328+00:00 stderr F I1213 00:17:13.256136 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:13.256671691+00:00 stderr F I1213 00:17:13.256577 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:13.303504240+00:00 stderr F W1213 00:17:13.303155 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:13.306429841+00:00 stderr F I1213 00:17:13.306356 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.443347ms) 2025-12-13T00:17:14.256233371+00:00 stderr F I1213 00:17:14.256117 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:14.256517349+00:00 stderr F I1213 00:17:14.256466 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:14.256517349+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:14.256517349+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:14.256546490+00:00 stderr F I1213 00:17:14.256534 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (444.502µs) 2025-12-13T00:17:14.256558370+00:00 stderr F I1213 00:17:14.256551 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:14.256640972+00:00 stderr F I1213 00:17:14.256609 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:14.256694144+00:00 stderr F I1213 00:17:14.256669 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:14.257072574+00:00 stderr F I1213 00:17:14.257015 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:14.299637346+00:00 stderr F W1213 00:17:14.299564 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:14.301716223+00:00 stderr F I1213 00:17:14.301663 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (45.081731ms) 2025-12-13T00:17:15.257069876+00:00 stderr F I1213 00:17:15.256924 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:15.257337603+00:00 stderr F I1213 00:17:15.257290 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:15.257337603+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:15.257337603+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:15.257432836+00:00 stderr F I1213 00:17:15.257369 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (471.073µs) 2025-12-13T00:17:15.257432836+00:00 stderr F I1213 00:17:15.257417 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:15.257534279+00:00 stderr F I1213 00:17:15.257494 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:15.257590410+00:00 stderr F I1213 00:17:15.257554 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:15.257891318+00:00 stderr F I1213 00:17:15.257833 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:15.285907913+00:00 stderr F W1213 00:17:15.285818 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:15.287834336+00:00 stderr F I1213 00:17:15.287768 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.349309ms) 2025-12-13T00:17:16.258074955+00:00 stderr F I1213 00:17:16.257980 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:16.258466465+00:00 stderr F I1213 00:17:16.258408 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:16.258466465+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:16.258466465+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:16.258553687+00:00 stderr F I1213 00:17:16.258503 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (595.886µs) 2025-12-13T00:17:16.258553687+00:00 stderr F I1213 00:17:16.258534 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:16.258664380+00:00 stderr F I1213 00:17:16.258610 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:16.258727252+00:00 stderr F I1213 00:17:16.258693 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:16.259207166+00:00 stderr F I1213 00:17:16.259139 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:16.311271477+00:00 stderr F W1213 00:17:16.311195 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:16.314328471+00:00 stderr F I1213 00:17:16.314270 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (55.731752ms) 2025-12-13T00:17:17.259645779+00:00 stderr F I1213 00:17:17.259457 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:17.259975958+00:00 stderr F I1213 00:17:17.259888 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:17.259975958+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:17.259975958+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:17.260018920+00:00 stderr F I1213 00:17:17.260000 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (587.756µs) 2025-12-13T00:17:17.260033170+00:00 stderr F I1213 00:17:17.260024 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:17.260127592+00:00 stderr F I1213 00:17:17.260090 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:17.260200634+00:00 stderr F I1213 00:17:17.260155 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:17.260594185+00:00 stderr F I1213 00:17:17.260534 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:17.343006455+00:00 stderr F W1213 00:17:17.342697 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:17.351046305+00:00 stderr F I1213 00:17:17.349810 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (89.774951ms) 2025-12-13T00:17:18.260386341+00:00 stderr F I1213 00:17:18.260255 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:18.260608457+00:00 stderr F I1213 00:17:18.260542 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:18.260608457+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:18.260608457+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:18.260646258+00:00 stderr F I1213 00:17:18.260601 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (356.99µs) 2025-12-13T00:17:18.260646258+00:00 stderr F I1213 00:17:18.260615 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:18.260690420+00:00 stderr F I1213 00:17:18.260663 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:18.260759751+00:00 stderr F I1213 00:17:18.260713 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:18.261073050+00:00 stderr F I1213 00:17:18.261002 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:18.295960543+00:00 stderr F W1213 00:17:18.295854 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:18.298217004+00:00 stderr F I1213 00:17:18.298172 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.551446ms) 2025-12-13T00:17:19.261519304+00:00 stderr F I1213 00:17:19.261379 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:19.261887573+00:00 stderr F I1213 00:17:19.261802 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:19.261887573+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:19.261887573+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:19.262010307+00:00 stderr F I1213 00:17:19.261917 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (569.415µs) 2025-12-13T00:17:19.262010307+00:00 stderr F I1213 00:17:19.261997 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:19.262134561+00:00 stderr F I1213 00:17:19.262081 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:19.262196443+00:00 stderr F I1213 00:17:19.262159 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:19.262669285+00:00 stderr F I1213 00:17:19.262563 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:19.298410241+00:00 stderr F W1213 00:17:19.298350 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:19.301554638+00:00 stderr F I1213 00:17:19.301474 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.474379ms) 2025-12-13T00:17:20.262159131+00:00 stderr F I1213 00:17:20.262071 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:20.262655145+00:00 stderr F I1213 00:17:20.262605 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:20.262655145+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:20.262655145+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:20.262789929+00:00 stderr F I1213 00:17:20.262741 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (681.339µs) 2025-12-13T00:17:20.262789929+00:00 stderr F I1213 00:17:20.262778 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:20.262907192+00:00 stderr F I1213 00:17:20.262864 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:20.263035605+00:00 stderr F I1213 00:17:20.262996 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:20.263574239+00:00 stderr F I1213 00:17:20.263503 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:20.287710621+00:00 stderr F W1213 00:17:20.287614 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:20.290617999+00:00 stderr F I1213 00:17:20.290557 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.779789ms) 2025-12-13T00:17:21.265102348+00:00 stderr F I1213 00:17:21.265040 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:21.265469609+00:00 stderr F I1213 00:17:21.265449 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:21.265469609+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:21.265469609+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:21.265571991+00:00 stderr F I1213 00:17:21.265549 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (520.445µs) 2025-12-13T00:17:21.265642443+00:00 stderr F I1213 00:17:21.265600 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:21.265731045+00:00 stderr F I1213 00:17:21.265707 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:21.265805417+00:00 stderr F I1213 00:17:21.265790 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:21.266164167+00:00 stderr F I1213 00:17:21.266126 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:21.311481681+00:00 stderr F W1213 00:17:21.311435 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:21.313494945+00:00 stderr F I1213 00:17:21.313466 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.862262ms) 2025-12-13T00:17:22.266165725+00:00 stderr F I1213 00:17:22.266048 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:22.266477173+00:00 stderr F I1213 00:17:22.266435 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:22.266477173+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:22.266477173+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:22.266560065+00:00 stderr F I1213 00:17:22.266523 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (490.313µs) 2025-12-13T00:17:22.266560065+00:00 stderr F I1213 00:17:22.266548 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:22.266632437+00:00 stderr F I1213 00:17:22.266593 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:22.266672728+00:00 stderr F I1213 00:17:22.266651 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:22.267046548+00:00 stderr F I1213 00:17:22.266993 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:22.294216541+00:00 stderr F W1213 00:17:22.294139 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:22.299061900+00:00 stderr F I1213 00:17:22.298972 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.369931ms) 2025-12-13T00:17:23.141113768+00:00 stderr F I1213 00:17:23.141003 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:23.141275762+00:00 stderr F I1213 00:17:23.141204 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:23.141313063+00:00 stderr F I1213 00:17:23.141289 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:23.142809723+00:00 stderr F I1213 00:17:23.142717 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:23.176285313+00:00 stderr F W1213 00:17:23.176138 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:23.177815874+00:00 stderr F I1213 00:17:23.177748 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.754087ms) 2025-12-13T00:17:23.266831030+00:00 stderr F I1213 00:17:23.266751 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:23.267052616+00:00 stderr F I1213 00:17:23.267018 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:23.267052616+00:00 stderr 
F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:23.267052616+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:23.267079077+00:00 stderr F I1213 00:17:23.267065 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (324.899µs) 2025-12-13T00:17:23.267095917+00:00 stderr F I1213 00:17:23.267077 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:23.267147449+00:00 stderr F I1213 00:17:23.267119 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:23.267190180+00:00 stderr F I1213 00:17:23.267164 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:23.267400255+00:00 stderr F I1213 00:17:23.267357 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:23.305260263+00:00 stderr F W1213 00:17:23.305168 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:23.307348108+00:00 stderr F I1213 00:17:23.307287 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (40.206069ms) 2025-12-13T00:17:24.267298411+00:00 stderr F I1213 00:17:24.267193 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:24.267613499+00:00 stderr F I1213 00:17:24.267547 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:24.267613499+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:24.267613499+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:24.267808964+00:00 stderr F I1213 00:17:24.267744 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (563.524µs) 2025-12-13T00:17:24.267808964+00:00 stderr F I1213 00:17:24.267776 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:24.267876446+00:00 stderr F I1213 00:17:24.267837 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:24.267962788+00:00 stderr F I1213 00:17:24.267909 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:24.268372890+00:00 stderr F I1213 00:17:24.268293 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:24.293425706+00:00 stderr F W1213 00:17:24.293350 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:24.295759528+00:00 stderr F I1213 00:17:24.295690 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.911283ms)
2025-12-13T00:17:25.268490712+00:00 stderr F I1213 00:17:25.268412 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:25.268747298+00:00 stderr F I1213 00:17:25.268674 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:25.268747298+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:25.268747298+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:25.268747298+00:00 stderr F I1213 00:17:25.268722 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (323.149µs)
2025-12-13T00:17:25.268747298+00:00 stderr F I1213 00:17:25.268735 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:25.268786239+00:00 stderr F I1213 00:17:25.268770 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:25.268838651+00:00 stderr F I1213 00:17:25.268811 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:25.269110088+00:00 stderr F I1213 00:17:25.269052 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:25.289449108+00:00 stderr F W1213 00:17:25.289392 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:25.291034031+00:00 stderr F I1213 00:17:25.290996 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.257662ms)
2025-12-13T00:17:26.269784664+00:00 stderr F I1213 00:17:26.269504 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:26.270109062+00:00 stderr F I1213 00:17:26.270065 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:26.270109062+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:26.270109062+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:26.270151443+00:00 stderr F I1213 00:17:26.270132 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (663.278µs)
2025-12-13T00:17:26.270159284+00:00 stderr F I1213 00:17:26.270152 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:26.270245476+00:00 stderr F I1213 00:17:26.270205 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:26.270295637+00:00 stderr F I1213 00:17:26.270267 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:26.270605685+00:00 stderr F I1213 00:17:26.270549 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:26.302628177+00:00 stderr F W1213 00:17:26.302543 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:26.304651170+00:00 stderr F I1213 00:17:26.304597 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.441036ms)
2025-12-13T00:17:27.271288612+00:00 stderr F I1213 00:17:27.271209 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:27.271503578+00:00 stderr F I1213 00:17:27.271476 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:27.271503578+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:27.271503578+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:27.271545159+00:00 stderr F I1213 00:17:27.271524 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (326.38µs)
2025-12-13T00:17:27.271545159+00:00 stderr F I1213 00:17:27.271541 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:27.271613901+00:00 stderr F I1213 00:17:27.271581 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:27.271638422+00:00 stderr F I1213 00:17:27.271625 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:27.271928929+00:00 stderr F I1213 00:17:27.271879 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:27.299103071+00:00 stderr F W1213 00:17:27.299022 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:27.300534130+00:00 stderr F I1213 00:17:27.300480 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.935049ms)
2025-12-13T00:17:28.272179124+00:00 stderr F I1213 00:17:28.272096 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:28.272762659+00:00 stderr F I1213 00:17:28.272710 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:28.272762659+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:28.272762659+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:28.272892092+00:00 stderr F I1213 00:17:28.272848 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (767.29µs)
2025-12-13T00:17:28.272900613+00:00 stderr F I1213 00:17:28.272888 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:28.273073387+00:00 stderr F I1213 00:17:28.273024 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:28.273117768+00:00 stderr F I1213 00:17:28.273103 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:28.273543170+00:00 stderr F I1213 00:17:28.273486 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:28.296791058+00:00 stderr F W1213 00:17:28.296725 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:28.298206375+00:00 stderr F I1213 00:17:28.298165 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.277002ms)
2025-12-13T00:17:29.273129167+00:00 stderr F I1213 00:17:29.273034 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:29.273380524+00:00 stderr F I1213 00:17:29.273341 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:29.273380524+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:29.273380524+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:29.273432565+00:00 stderr F I1213 00:17:29.273410 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (389.17µs)
2025-12-13T00:17:29.273432565+00:00 stderr F I1213 00:17:29.273427 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:29.273516278+00:00 stderr F I1213 00:17:29.273479 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:29.273555309+00:00 stderr F I1213 00:17:29.273535 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:29.273873217+00:00 stderr F I1213 00:17:29.273822 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:29.300393433+00:00 stderr F W1213 00:17:29.300311 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:29.301766009+00:00 stderr F I1213 00:17:29.301717 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.287372ms)
2025-12-13T00:17:30.274363678+00:00 stderr F I1213 00:17:30.274260 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:30.274583995+00:00 stderr F I1213 00:17:30.274540 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:30.274583995+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:30.274583995+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:30.274626916+00:00 stderr F I1213 00:17:30.274589 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (340.85µs)
2025-12-13T00:17:30.274626916+00:00 stderr F I1213 00:17:30.274603 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:30.274695138+00:00 stderr F I1213 00:17:30.274642 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:30.274695138+00:00 stderr F I1213 00:17:30.274689 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:30.275020847+00:00 stderr F I1213 00:17:30.274968 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:30.309575345+00:00 stderr F W1213 00:17:30.309450 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:30.311497997+00:00 stderr F I1213 00:17:30.311446 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.838729ms)
2025-12-13T00:17:31.275567829+00:00 stderr F I1213 00:17:31.275477 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:31.276150365+00:00 stderr F I1213 00:17:31.276099 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:31.276150365+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:31.276150365+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:31.276400211+00:00 stderr F I1213 00:17:31.276344 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (881.994µs)
2025-12-13T00:17:31.276400211+00:00 stderr F I1213 00:17:31.276382 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:31.276492284+00:00 stderr F I1213 00:17:31.276450 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:31.276561775+00:00 stderr F I1213 00:17:31.276527 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:31.277030968+00:00 stderr F I1213 00:17:31.276974 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:31.317331430+00:00 stderr F W1213 00:17:31.317274 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:31.321170182+00:00 stderr F I1213 00:17:31.321097 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.709098ms)
2025-12-13T00:17:32.277332845+00:00 stderr F I1213 00:17:32.276391 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:32.277332845+00:00 stderr F I1213 00:17:32.276823 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:32.277332845+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:32.277332845+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:32.277332845+00:00 stderr F I1213 00:17:32.276916 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (572.925µs)
2025-12-13T00:17:32.277332845+00:00 stderr F I1213 00:17:32.277009 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:32.277332845+00:00 stderr F I1213 00:17:32.277082 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:32.277332845+00:00 stderr F I1213 00:17:32.277153 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:32.277677735+00:00 stderr F I1213 00:17:32.277587 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:32.300754598+00:00 stderr F W1213 00:17:32.300674 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:32.303912212+00:00 stderr F I1213 00:17:32.303869 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.859684ms)
2025-12-13T00:17:33.277666043+00:00 stderr F I1213 00:17:33.277519 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:33.278145825+00:00 stderr F I1213 00:17:33.278042 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:33.278145825+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:33.278145825+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:33.278198347+00:00 stderr F I1213 00:17:33.278155 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (648.467µs)
2025-12-13T00:17:33.278198347+00:00 stderr F I1213 00:17:33.278185 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:33.278378871+00:00 stderr F I1213 00:17:33.278258 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:33.278526535+00:00 stderr F I1213 00:17:33.278502 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:33.279064029+00:00 stderr F I1213 00:17:33.279011 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:33.315923399+00:00 stderr F W1213 00:17:33.315544 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:33.317999595+00:00 stderr F I1213 00:17:33.317913 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.721416ms)
2025-12-13T00:17:34.279259843+00:00 stderr F I1213 00:17:34.279180 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:34.279982052+00:00 stderr F I1213 00:17:34.279874 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:34.279982052+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:34.279982052+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:34.280191018+00:00 stderr F I1213 00:17:34.280153 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (986.115µs)
2025-12-13T00:17:34.280265180+00:00 stderr F I1213 00:17:34.280242 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:34.280420525+00:00 stderr F I1213 00:17:34.280385 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:34.280533188+00:00 stderr F I1213 00:17:34.280511 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:34.281054971+00:00 stderr F I1213 00:17:34.280996 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:34.327589768+00:00 stderr F W1213 00:17:34.327540 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:34.329124639+00:00 stderr F I1213 00:17:34.329098 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.860089ms)
2025-12-13T00:17:35.281160232+00:00 stderr F I1213 00:17:35.281106 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:35.281631875+00:00 stderr F I1213 00:17:35.281603 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:35.281631875+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:35.281631875+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:35.281741688+00:00 stderr F I1213 00:17:35.281721 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (625.636µs)
2025-12-13T00:17:35.281777509+00:00 stderr F I1213 00:17:35.281749 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:35.281868011+00:00 stderr F I1213 00:17:35.281843 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:35.281968674+00:00 stderr F I1213 00:17:35.281928 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:35.282323063+00:00 stderr F I1213 00:17:35.282283 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:35.309315681+00:00 stderr F W1213 00:17:35.309268 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:35.311080157+00:00 stderr F I1213 00:17:35.311052 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.301288ms)
2025-12-13T00:17:36.282368333+00:00 stderr F I1213 00:17:36.282309 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:36.282745093+00:00 stderr F I1213 00:17:36.282726 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:36.282745093+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:36.282745093+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:36.282859906+00:00 stderr F I1213 00:17:36.282842 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (550.134µs)
2025-12-13T00:17:36.282895647+00:00 stderr F I1213 00:17:36.282883 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:36.283017120+00:00 stderr F I1213 00:17:36.282984 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:36.283091002+00:00 stderr F I1213 00:17:36.283079 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:36.283421421+00:00 stderr F I1213 00:17:36.283390 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:36.334445328+00:00 stderr F W1213 00:17:36.334374 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:36.336449331+00:00 stderr F I1213 00:17:36.336405 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (53.518743ms)
2025-12-13T00:17:37.283765329+00:00 stderr F I1213 00:17:37.283694 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:37.284489868+00:00 stderr F I1213 00:17:37.284399 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:37.284489868+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:37.284489868+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:37.284707964+00:00 stderr F I1213 00:17:37.284675 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (992.866µs)
2025-12-13T00:17:37.284773846+00:00 stderr F I1213 00:17:37.284751 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:37.284978721+00:00 stderr F I1213 00:17:37.284896 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:37.285137255+00:00 stderr F I1213 00:17:37.285114 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:37.285826403+00:00 stderr F I1213 00:17:37.285766 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:37.315904523+00:00 stderr F W1213 00:17:37.315813 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:37.317274270+00:00 stderr F I1213 00:17:37.317212 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.461633ms)
2025-12-13T00:17:38.141132555+00:00 stderr F I1213 00:17:38.141061 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:38.141395552+00:00 stderr F I1213 00:17:38.141351 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:38.141529575+00:00 stderr F I1213 00:17:38.141502 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:38.142310746+00:00 stderr F I1213 00:17:38.142236 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:38.189820699+00:00 stderr F W1213 00:17:38.189759 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:38.193014364+00:00 stderr F I1213 00:17:38.192963 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.881769ms)
2025-12-13T00:17:38.285051111+00:00 stderr F I1213 00:17:38.284992 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:38.285522934+00:00 stderr F I1213 00:17:38.285493 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:38.285522934+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:38.285522934+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:38.285716599+00:00 stderr F I1213 00:17:38.285677 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (695.668µs)
2025-12-13T00:17:38.285800421+00:00 stderr F I1213 00:17:38.285771 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:38.285984236+00:00 stderr F I1213 00:17:38.285910 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:38.286104689+00:00 stderr F I1213 00:17:38.286083 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:38.286651404+00:00 stderr F I1213 00:17:38.286596 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:17:38.342019255+00:00 stderr F W1213 00:17:38.341749 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:17:38.348188760+00:00 stderr F I1213 00:17:38.347210 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (61.435673ms)
2025-12-13T00:17:39.286301092+00:00 stderr F I1213 00:17:39.286164 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:17:39.286696454+00:00 stderr F I1213 00:17:39.286638 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:17:39.286696454+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:17:39.286696454+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:17:39.286800356+00:00 stderr F I1213 00:17:39.286738 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (593.277µs)
2025-12-13T00:17:39.286800356+00:00 stderr F I1213 00:17:39.286767 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:17:39.286887949+00:00 stderr F I1213 00:17:39.286835 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:17:39.286988171+00:00 stderr F I1213 00:17:39.286921 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:17:39.287507165+00:00 stderr F I1213 00:17:39.287449 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed",
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:39.321514349+00:00 stderr F W1213 00:17:39.321441 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:39.325249518+00:00 stderr F I1213 00:17:39.325192 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.417761ms) 2025-12-13T00:17:40.287029651+00:00 stderr F I1213 00:17:40.286891 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:40.287658167+00:00 stderr F I1213 00:17:40.287571 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:40.287658167+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:40.287658167+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:40.287801611+00:00 stderr F I1213 00:17:40.287715 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (838.692µs) 2025-12-13T00:17:40.287801611+00:00 stderr F I1213 00:17:40.287758 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:40.287973036+00:00 stderr F I1213 00:17:40.287860 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:40.288040877+00:00 stderr F I1213 00:17:40.288003 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:40.288622393+00:00 stderr F I1213 00:17:40.288521 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:40.336055504+00:00 stderr F W1213 00:17:40.335859 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:40.339164426+00:00 stderr F I1213 00:17:40.339092 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.332174ms) 2025-12-13T00:17:41.288299953+00:00 stderr F I1213 00:17:41.287776 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:41.288610021+00:00 stderr F I1213 00:17:41.288592 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:41.288610021+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:41.288610021+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:41.288701693+00:00 stderr F I1213 00:17:41.288685 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (921.105µs) 2025-12-13T00:17:41.288733424+00:00 stderr F I1213 00:17:41.288723 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:41.288798526+00:00 stderr F I1213 00:17:41.288780 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:41.288852587+00:00 stderr F I1213 00:17:41.288842 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:41.289102774+00:00 stderr F I1213 00:17:41.289077 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:41.318093875+00:00 stderr F W1213 00:17:41.318059 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:41.319550703+00:00 stderr F I1213 00:17:41.319526 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.800558ms) 2025-12-13T00:17:42.289262586+00:00 stderr F I1213 00:17:42.289199 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:42.289672948+00:00 stderr F I1213 00:17:42.289633 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:42.289672948+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:42.289672948+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:42.289758640+00:00 stderr F I1213 00:17:42.289723 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (556.685µs) 2025-12-13T00:17:42.289769761+00:00 stderr F I1213 00:17:42.289759 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:42.289869353+00:00 stderr F I1213 00:17:42.289827 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:42.289917734+00:00 stderr F I1213 00:17:42.289898 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:42.290390117+00:00 stderr F I1213 00:17:42.290326 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:42.321906015+00:00 stderr F W1213 00:17:42.321804 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:42.323961459+00:00 stderr F I1213 00:17:42.323877 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.115957ms) 2025-12-13T00:17:43.290759335+00:00 stderr F I1213 00:17:43.290641 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:43.291390841+00:00 stderr F I1213 00:17:43.291347 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:43.291390841+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:43.291390841+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:43.291604837+00:00 stderr F I1213 00:17:43.291568 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (937.455µs) 2025-12-13T00:17:43.292513032+00:00 stderr F I1213 00:17:43.291663 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:43.292513032+00:00 stderr F I1213 00:17:43.291814 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:43.292513032+00:00 stderr F I1213 00:17:43.291894 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:43.292513032+00:00 stderr F I1213 00:17:43.292251 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:43.322701044+00:00 stderr F W1213 00:17:43.322617 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:43.326029193+00:00 stderr F I1213 00:17:43.325972 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.282051ms) 2025-12-13T00:17:44.292815328+00:00 stderr F I1213 00:17:44.291744 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:44.294659196+00:00 stderr F I1213 00:17:44.294574 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:44.294659196+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:44.294659196+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:44.294881072+00:00 stderr F I1213 00:17:44.294849 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (3.130713ms) 2025-12-13T00:17:44.294974815+00:00 stderr F I1213 00:17:44.294926 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:44.295157509+00:00 stderr F I1213 00:17:44.295122 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:44.295265802+00:00 stderr F I1213 00:17:44.295245 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:44.295748646+00:00 stderr F I1213 00:17:44.295697 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:44.343825944+00:00 stderr F W1213 00:17:44.343698 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:44.345147559+00:00 stderr F I1213 00:17:44.345078 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.153374ms) 2025-12-13T00:17:45.295274941+00:00 stderr F I1213 00:17:45.295221 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:45.295742154+00:00 stderr F I1213 00:17:45.295695 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:45.295742154+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:45.295742154+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:45.295871707+00:00 stderr F I1213 00:17:45.295843 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (633.728µs) 2025-12-13T00:17:45.295923909+00:00 stderr F I1213 00:17:45.295873 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:45.296066933+00:00 stderr F I1213 00:17:45.296034 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:45.296165815+00:00 stderr F I1213 00:17:45.296148 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:45.296580076+00:00 stderr F I1213 00:17:45.296533 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:45.318282243+00:00 stderr F W1213 00:17:45.318224 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:45.320278116+00:00 stderr F I1213 00:17:45.320219 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.350147ms) 2025-12-13T00:17:46.296837511+00:00 stderr F I1213 00:17:46.296762 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:46.297624482+00:00 stderr F I1213 00:17:46.297577 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:46.297624482+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:46.297624482+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:46.297850078+00:00 stderr F I1213 00:17:46.297795 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.045427ms) 2025-12-13T00:17:46.297980501+00:00 stderr F I1213 00:17:46.297907 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:46.298164206+00:00 stderr F I1213 00:17:46.298115 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:46.298323051+00:00 stderr F I1213 00:17:46.298292 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:46.299109612+00:00 stderr F I1213 00:17:46.299033 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:46.353021885+00:00 stderr F W1213 00:17:46.352896 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:46.356293332+00:00 stderr F I1213 00:17:46.356240 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (58.329791ms) 2025-12-13T00:17:47.298066522+00:00 stderr F I1213 00:17:47.297913 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:47.298397801+00:00 stderr F I1213 00:17:47.298339 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:47.298397801+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:47.298397801+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:47.298449603+00:00 stderr F I1213 00:17:47.298412 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (511.324µs) 2025-12-13T00:17:47.298449603+00:00 stderr F I1213 00:17:47.298438 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:47.298540515+00:00 stderr F I1213 00:17:47.298499 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:47.298563336+00:00 stderr F I1213 00:17:47.298554 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:47.298838723+00:00 stderr F I1213 00:17:47.298778 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:47.333810572+00:00 stderr F W1213 00:17:47.333721 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:47.335150008+00:00 stderr F I1213 00:17:47.335093 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.655295ms) 2025-12-13T00:17:48.298783050+00:00 stderr F I1213 00:17:48.298719 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:48.299432567+00:00 stderr F I1213 00:17:48.299374 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:48.299432567+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:48.299432567+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:48.299603001+00:00 stderr F I1213 00:17:48.299571 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (865.482µs) 2025-12-13T00:17:48.299668703+00:00 stderr F I1213 00:17:48.299644 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:48.299835637+00:00 stderr F I1213 00:17:48.299799 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:48.299973151+00:00 stderr F I1213 00:17:48.299917 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:48.300484084+00:00 stderr F I1213 00:17:48.300432 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:48.332873936+00:00 stderr F W1213 00:17:48.332765 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:48.336294317+00:00 stderr F I1213 00:17:48.336199 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.548442ms) 2025-12-13T00:17:49.300691758+00:00 stderr F I1213 00:17:49.300623 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:49.301304664+00:00 stderr F I1213 00:17:49.301265 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:49.301304664+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:49.301304664+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:49.301499860+00:00 stderr F I1213 00:17:49.301466 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (854.593µs) 2025-12-13T00:17:49.301568402+00:00 stderr F I1213 00:17:49.301546 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:49.301693896+00:00 stderr F I1213 00:17:49.301661 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:49.301803578+00:00 stderr F I1213 00:17:49.301780 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:49.302375133+00:00 stderr F I1213 00:17:49.302321 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:49.342855019+00:00 stderr F W1213 00:17:49.342793 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:49.346212469+00:00 stderr F I1213 00:17:49.346161 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.609936ms) 2025-12-13T00:17:50.301982841+00:00 stderr F I1213 00:17:50.301754 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:50.302616218+00:00 stderr F I1213 00:17:50.302507 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:50.302616218+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:50.302616218+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:50.302754062+00:00 stderr F I1213 00:17:50.302699 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (960.115µs) 2025-12-13T00:17:50.302876025+00:00 stderr F I1213 00:17:50.302827 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:50.303124571+00:00 stderr F I1213 00:17:50.303082 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:50.303256265+00:00 stderr F I1213 00:17:50.303233 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:50.303765528+00:00 stderr F I1213 00:17:50.303715 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:50.338670107+00:00 stderr F W1213 00:17:50.338612 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:50.342098368+00:00 stderr F I1213 00:17:50.342055 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.245393ms) 2025-12-13T00:17:51.303309525+00:00 stderr F I1213 00:17:51.302886 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:51.303643153+00:00 stderr F I1213 00:17:51.303611 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:51.303643153+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:51.303643153+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:51.303714595+00:00 stderr F I1213 00:17:51.303699 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (822.312µs) 2025-12-13T00:17:51.303746166+00:00 stderr F I1213 00:17:51.303735 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:51.303823088+00:00 stderr F I1213 00:17:51.303805 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:51.303875849+00:00 stderr F I1213 00:17:51.303866 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:51.304176067+00:00 stderr F I1213 00:17:51.304142 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:51.330414306+00:00 stderr F W1213 00:17:51.330368 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:51.331803092+00:00 stderr F I1213 00:17:51.331783 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.046025ms) 2025-12-13T00:17:52.304397861+00:00 stderr F I1213 00:17:52.304302 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:52.304838474+00:00 stderr F I1213 00:17:52.304747 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:52.304838474+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:52.304838474+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:52.304890075+00:00 stderr F I1213 00:17:52.304853 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (575.556µs) 2025-12-13T00:17:52.304890075+00:00 stderr F I1213 00:17:52.304877 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:52.305048499+00:00 stderr F I1213 00:17:52.304982 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:52.305146952+00:00 stderr F I1213 00:17:52.305106 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:52.305638665+00:00 stderr F I1213 00:17:52.305559 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:52.340713947+00:00 stderr F W1213 00:17:52.340646 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:52.342186217+00:00 stderr F I1213 00:17:52.342101 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.223279ms) 2025-12-13T00:17:53.141202611+00:00 stderr F I1213 00:17:53.141131 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:53.141370115+00:00 stderr F I1213 00:17:53.141312 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:53.141434227+00:00 stderr F I1213 00:17:53.141403 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:53.141952360+00:00 stderr F I1213 00:17:53.141880 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:53.180392133+00:00 stderr F W1213 00:17:53.180326 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:53.183687280+00:00 stderr F I1213 00:17:53.183630 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.53908ms) 2025-12-13T00:17:53.305215522+00:00 stderr F I1213 00:17:53.305140 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:53.305658144+00:00 stderr F I1213 00:17:53.305614 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:53.305658144+00:00 stderr 
F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:53.305658144+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:53.305770407+00:00 stderr F I1213 00:17:53.305731 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (610.136µs) 2025-12-13T00:17:53.305770407+00:00 stderr F I1213 00:17:53.305763 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:53.305872539+00:00 stderr F I1213 00:17:53.305840 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:53.305983672+00:00 stderr F I1213 00:17:53.305928 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:53.306518246+00:00 stderr F I1213 00:17:53.306457 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:53.351956894+00:00 stderr F W1213 00:17:53.351841 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:53.353867975+00:00 stderr F I1213 00:17:53.353738 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (47.973816ms) 2025-12-13T00:17:54.305916859+00:00 stderr F I1213 00:17:54.305862 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:54.306367751+00:00 stderr F I1213 00:17:54.306333 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:54.306367751+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:54.306367751+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:54.306478304+00:00 stderr F I1213 00:17:54.306450 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (598.746µs) 2025-12-13T00:17:54.306517355+00:00 stderr F I1213 00:17:54.306490 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:54.306621177+00:00 stderr F I1213 00:17:54.306587 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:54.306678499+00:00 stderr F I1213 00:17:54.306664 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:54.307260364+00:00 stderr F I1213 00:17:54.307193 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:54.329599318+00:00 stderr F W1213 00:17:54.329533 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:54.332703542+00:00 stderr F I1213 00:17:54.332648 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.157915ms) 2025-12-13T00:17:55.307557551+00:00 stderr F I1213 00:17:55.307493 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:55.307849709+00:00 stderr F I1213 00:17:55.307805 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:55.307849709+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:55.307849709+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:55.307962862+00:00 stderr F I1213 00:17:55.307903 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (421.152µs) 2025-12-13T00:17:55.307962862+00:00 stderr F I1213 00:17:55.307923 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:55.308097946+00:00 stderr F I1213 00:17:55.308047 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:55.308153167+00:00 stderr F I1213 00:17:55.308110 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:55.308412434+00:00 stderr F I1213 00:17:55.308369 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:55.339994223+00:00 stderr F W1213 00:17:55.339866 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:55.343266780+00:00 stderr F I1213 00:17:55.343180 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.249756ms) 2025-12-13T00:17:56.308318970+00:00 stderr F I1213 00:17:56.308246 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:56.308715760+00:00 stderr F I1213 00:17:56.308689 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:56.308715760+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:56.308715760+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:56.308804952+00:00 stderr F I1213 00:17:56.308778 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (549.245µs) 2025-12-13T00:17:56.308814223+00:00 stderr F I1213 00:17:56.308805 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:56.308896005+00:00 stderr F I1213 00:17:56.308870 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:56.308999408+00:00 stderr F I1213 00:17:56.308970 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:56.309478240+00:00 stderr F I1213 00:17:56.309373 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:56.360357343+00:00 stderr F W1213 00:17:56.360290 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:56.363461155+00:00 stderr F I1213 00:17:56.363410 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (54.597401ms) 2025-12-13T00:17:57.309826937+00:00 stderr F I1213 00:17:57.309744 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:57.310084294+00:00 stderr F I1213 00:17:57.310044 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:57.310084294+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:57.310084294+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:57.310107015+00:00 stderr F I1213 00:17:57.310097 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (365.099µs) 2025-12-13T00:17:57.310117685+00:00 stderr F I1213 00:17:57.310109 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:57.310180767+00:00 stderr F I1213 00:17:57.310145 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:57.310215198+00:00 stderr F I1213 00:17:57.310193 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:57.310453444+00:00 stderr F I1213 00:17:57.310400 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:57.335241563+00:00 stderr F W1213 00:17:57.335165 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:57.338096629+00:00 stderr F I1213 00:17:57.338010 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.893361ms) 2025-12-13T00:17:58.311095510+00:00 stderr F I1213 00:17:58.311020 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:17:58.311383007+00:00 stderr F I1213 00:17:58.311333 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:58.311383007+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:17:58.311383007+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:58.311403058+00:00 stderr F I1213 00:17:58.311389 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (381.2µs) 2025-12-13T00:17:58.311413988+00:00 stderr F I1213 00:17:58.311406 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:58.311491050+00:00 stderr F I1213 00:17:58.311456 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:58.311542422+00:00 stderr F I1213 00:17:58.311513 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:58.311826029+00:00 stderr F I1213 00:17:58.311773 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:58.348168645+00:00 stderr F W1213 00:17:58.348112 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:58.350302312+00:00 stderr F I1213 00:17:58.350229 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.818462ms) 2025-12-13T00:17:59.312214328+00:00 stderr F I1213 00:17:59.312124 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:17:59.312590548+00:00 stderr F I1213 00:17:59.312529 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:17:59.312590548+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:17:59.312590548+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:17:59.312862475+00:00 stderr F I1213 00:17:59.312618 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (525.774µs) 2025-12-13T00:17:59.312862475+00:00 stderr F I1213 00:17:59.312648 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:17:59.312964707+00:00 stderr F I1213 00:17:59.312888 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:17:59.313081430+00:00 stderr F I1213 00:17:59.313039 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:17:59.313542784+00:00 stderr F I1213 00:17:59.313441 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:17:59.397290819+00:00 stderr F W1213 00:17:59.396471 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:17:59.400981358+00:00 stderr F I1213 00:17:59.398954 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (86.282474ms) 2025-12-13T00:18:00.314159808+00:00 stderr F I1213 00:18:00.313750 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:00.314381554+00:00 stderr F I1213 00:18:00.314319 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:00.314381554+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:00.314381554+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:00.314492597+00:00 stderr F I1213 00:18:00.314437 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (700.768µs) 2025-12-13T00:18:00.314492597+00:00 stderr F I1213 00:18:00.314467 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:00.314601080+00:00 stderr F I1213 00:18:00.314536 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:00.314670621+00:00 stderr F I1213 00:18:00.314620 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:00.315296888+00:00 stderr F I1213 00:18:00.315204 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:00.368295777+00:00 stderr F W1213 00:18:00.368207 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:00.370082765+00:00 stderr F I1213 00:18:00.370007 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (55.539287ms) 2025-12-13T00:18:01.315395019+00:00 stderr F I1213 00:18:01.315261 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:01.315705077+00:00 stderr F I1213 00:18:01.315619 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:01.315705077+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:01.315705077+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:01.315795429+00:00 stderr F I1213 00:18:01.315698 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (452.031µs) 2025-12-13T00:18:01.315795429+00:00 stderr F I1213 00:18:01.315718 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:01.315817710+00:00 stderr F I1213 00:18:01.315791 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:01.315916122+00:00 stderr F I1213 00:18:01.315846 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:01.316273392+00:00 stderr F I1213 00:18:01.316191 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:01.361445274+00:00 stderr F W1213 00:18:01.361361 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:01.363337103+00:00 stderr F I1213 00:18:01.363280 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.560915ms) 2025-12-13T00:18:02.316478725+00:00 stderr F I1213 00:18:02.316042 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:02.316680642+00:00 stderr F I1213 00:18:02.316645 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:02.316680642+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:02.316680642+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:02.316720663+00:00 stderr F I1213 00:18:02.316700 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (668.848µs) 2025-12-13T00:18:02.316731553+00:00 stderr F I1213 00:18:02.316719 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:02.316796495+00:00 stderr F I1213 00:18:02.316763 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:02.316833036+00:00 stderr F I1213 00:18:02.316816 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:02.317120353+00:00 stderr F I1213 00:18:02.317073 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:02.347635395+00:00 stderr F W1213 00:18:02.347555 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:02.350899491+00:00 stderr F I1213 00:18:02.350829 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.097866ms) 2025-12-13T00:18:03.317692806+00:00 stderr F I1213 00:18:03.317502 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:03.318304882+00:00 stderr F I1213 00:18:03.318238 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:03.318304882+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:03.318304882+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:03.318399195+00:00 stderr F I1213 00:18:03.318341 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (870.943µs) 2025-12-13T00:18:03.318399195+00:00 stderr F I1213 00:18:03.318376 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:03.318550639+00:00 stderr F I1213 00:18:03.318483 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:03.318663522+00:00 stderr F I1213 00:18:03.318603 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:03.319224666+00:00 stderr F I1213 00:18:03.319139 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:03.371668531+00:00 stderr F W1213 00:18:03.371594 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:03.373980103+00:00 stderr F I1213 00:18:03.373892 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (55.509266ms) 2025-12-13T00:18:04.318777673+00:00 stderr F I1213 00:18:04.318655 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:04.319187734+00:00 stderr F I1213 00:18:04.319145 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:04.319187734+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:04.319187734+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:04.319279456+00:00 stderr F I1213 00:18:04.319238 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (601.626µs) 2025-12-13T00:18:04.319279456+00:00 stderr F I1213 00:18:04.319265 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:04.319376159+00:00 stderr F I1213 00:18:04.319330 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:04.319469281+00:00 stderr F I1213 00:18:04.319426 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:04.319955785+00:00 stderr F I1213 00:18:04.319886 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:04.362589058+00:00 stderr F W1213 00:18:04.362509 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:04.365921797+00:00 stderr F I1213 00:18:04.365859 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.587819ms) 2025-12-13T00:18:05.319922593+00:00 stderr F I1213 00:18:05.319809 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:05.320361004+00:00 stderr F I1213 00:18:05.320303 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:05.320361004+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:05.320361004+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:05.320458267+00:00 stderr F I1213 00:18:05.320411 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (615.666µs) 2025-12-13T00:18:05.320458267+00:00 stderr F I1213 00:18:05.320441 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:05.320616211+00:00 stderr F I1213 00:18:05.320513 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:05.320616211+00:00 stderr F I1213 00:18:05.320593 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:05.321209927+00:00 stderr F I1213 00:18:05.321113 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:05.357607614+00:00 stderr F W1213 00:18:05.357485 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:05.360623354+00:00 stderr F I1213 00:18:05.360537 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.088966ms) 2025-12-13T00:18:06.321421750+00:00 stderr F I1213 00:18:06.321325 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:06.321889823+00:00 stderr F I1213 00:18:06.321828 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:06.321889823+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:06.321889823+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:06.322021646+00:00 stderr F I1213 00:18:06.321983 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (672.958µs) 2025-12-13T00:18:06.322021646+00:00 stderr F I1213 00:18:06.322014 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:06.322191950+00:00 stderr F I1213 00:18:06.322135 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:06.322259002+00:00 stderr F I1213 00:18:06.322212 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:06.322702935+00:00 stderr F I1213 00:18:06.322637 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:06.367796584+00:00 stderr F W1213 00:18:06.367690 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:06.372129868+00:00 stderr F I1213 00:18:06.372061 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.0373ms) 2025-12-13T00:18:07.323126864+00:00 stderr F I1213 00:18:07.323035 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:07.323569146+00:00 stderr F I1213 00:18:07.323516 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:07.323569146+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:07.323569146+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:07.323669039+00:00 stderr F I1213 00:18:07.323620 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (619.466µs) 2025-12-13T00:18:07.323669039+00:00 stderr F I1213 00:18:07.323648 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:07.323772501+00:00 stderr F I1213 00:18:07.323720 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:07.323828613+00:00 stderr F I1213 00:18:07.323794 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:07.324424388+00:00 stderr F I1213 00:18:07.324276 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:07.368134051+00:00 stderr F W1213 00:18:07.368040 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:07.371198062+00:00 stderr F I1213 00:18:07.371142 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.488173ms) 2025-12-13T00:18:08.141022121+00:00 stderr F I1213 00:18:08.140833 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:08.141090673+00:00 stderr F I1213 00:18:08.141033 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:08.141149394+00:00 stderr F I1213 00:18:08.141113 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:08.141662518+00:00 stderr F I1213 00:18:08.141557 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:08.185546535+00:00 stderr F W1213 00:18:08.185463 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:08.188996606+00:00 stderr F I1213 00:18:08.188852 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (48.027236ms) 2025-12-13T00:18:08.324152570+00:00 stderr F I1213 00:18:08.324059 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:08.324558970+00:00 stderr F I1213 00:18:08.324471 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:08.324558970+00:00 stderr 
F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:08.324558970+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:08.324591261+00:00 stderr F I1213 00:18:08.324563 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (559.074µs) 2025-12-13T00:18:08.324607512+00:00 stderr F I1213 00:18:08.324585 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:08.324721075+00:00 stderr F I1213 00:18:08.324655 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:08.324744035+00:00 stderr F I1213 00:18:08.324732 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:08.325227078+00:00 stderr F I1213 00:18:08.325157 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:08.374871118+00:00 stderr F W1213 00:18:08.374781 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:08.378029802+00:00 stderr F I1213 00:18:08.377907 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (53.316007ms) 2025-12-13T00:18:09.325209076+00:00 stderr F I1213 00:18:09.325104 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:09.325580415+00:00 stderr F I1213 00:18:09.325488 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:09.325580415+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:09.325580415+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:09.325614606+00:00 stderr F I1213 00:18:09.325578 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (484.433µs) 2025-12-13T00:18:09.325634707+00:00 stderr F I1213 00:18:09.325623 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:09.325737910+00:00 stderr F I1213 00:18:09.325688 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:09.325816332+00:00 stderr F I1213 00:18:09.325760 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:09.326274064+00:00 stderr F I1213 00:18:09.326188 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:09.371895347+00:00 stderr F W1213 00:18:09.371784 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:09.375722249+00:00 stderr F I1213 00:18:09.375634 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.01714ms) 2025-12-13T00:18:10.326854678+00:00 stderr F I1213 00:18:10.325838 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:10.327572667+00:00 stderr F I1213 00:18:10.327527 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:10.327572667+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:10.327572667+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:10.327804723+00:00 stderr F I1213 00:18:10.327750 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.92274ms) 2025-12-13T00:18:10.327975077+00:00 stderr F I1213 00:18:10.327905 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:10.328901963+00:00 stderr F I1213 00:18:10.328124 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:10.329107328+00:00 stderr F I1213 00:18:10.329068 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:10.329812887+00:00 stderr F I1213 00:18:10.329749 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:10.373425927+00:00 stderr F W1213 00:18:10.373329 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:10.375722847+00:00 stderr F I1213 00:18:10.375627 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.7588ms) 2025-12-13T00:18:11.328530761+00:00 stderr F I1213 00:18:11.328452 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:11.328909692+00:00 stderr F I1213 00:18:11.328857 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:11.328909692+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:11.328909692+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:11.329033085+00:00 stderr F I1213 00:18:11.328971 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (550.725µs) 2025-12-13T00:18:11.329033085+00:00 stderr F I1213 00:18:11.329001 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:11.329105857+00:00 stderr F I1213 00:18:11.329068 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:11.329169828+00:00 stderr F I1213 00:18:11.329142 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:11.329575319+00:00 stderr F I1213 00:18:11.329517 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:11.370315802+00:00 stderr F W1213 00:18:11.370243 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:11.372300105+00:00 stderr F I1213 00:18:11.372230 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.225889ms) 2025-12-13T00:18:12.329990439+00:00 stderr F I1213 00:18:12.329857 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:12.330446271+00:00 stderr F I1213 00:18:12.330385 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:12.330446271+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:12.330446271+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:12.330542303+00:00 stderr F I1213 00:18:12.330486 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (682.417µs) 2025-12-13T00:18:12.330542303+00:00 stderr F I1213 00:18:12.330515 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:12.330698377+00:00 stderr F I1213 00:18:12.330601 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:12.330698377+00:00 stderr F I1213 00:18:12.330691 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:12.331205120+00:00 stderr F I1213 00:18:12.331124 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:12.376444864+00:00 stderr F W1213 00:18:12.376327 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:12.379601358+00:00 stderr F I1213 00:18:12.379537 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.014464ms) 2025-12-13T00:18:13.331259640+00:00 stderr F I1213 00:18:13.331168 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:13.331654182+00:00 stderr F I1213 00:18:13.331600 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:13.331654182+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:13.331654182+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:13.331733684+00:00 stderr F I1213 00:18:13.331685 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (531.605µs) 2025-12-13T00:18:13.331733684+00:00 stderr F I1213 00:18:13.331713 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:13.331834407+00:00 stderr F I1213 00:18:13.331779 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:13.331875328+00:00 stderr F I1213 00:18:13.331852 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:13.332486304+00:00 stderr F I1213 00:18:13.332329 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:13.363047006+00:00 stderr F W1213 00:18:13.362909 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:13.364991887+00:00 stderr F I1213 00:18:13.364896 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.180702ms) 2025-12-13T00:18:14.333230442+00:00 stderr F I1213 00:18:14.333088 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:14.333781396+00:00 stderr F I1213 00:18:14.333712 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:14.333781396+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:14.333781396+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:14.333972461+00:00 stderr F I1213 00:18:14.333864 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (806.961µs)
2025-12-13T00:18:14.333972461+00:00 stderr F I1213 00:18:14.333915 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:14.334163486+00:00 stderr F I1213 00:18:14.334077 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:14.334240268+00:00 stderr F I1213 00:18:14.334203 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:14.334905417+00:00 stderr F I1213 00:18:14.334795 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:14.381708590+00:00 stderr F W1213 00:18:14.381612 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:14.384791382+00:00 stderr F I1213 00:18:14.384727 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.810831ms)
2025-12-13T00:18:15.334880274+00:00 stderr F I1213 00:18:15.334769 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:15.335230113+00:00 stderr F I1213 00:18:15.335160 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:15.335230113+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:15.335230113+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:15.335267074+00:00 stderr F I1213 00:18:15.335235 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (479.062µs)
2025-12-13T00:18:15.335267074+00:00 stderr F I1213 00:18:15.335252 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:15.335350226+00:00 stderr F I1213 00:18:15.335300 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:15.335376317+00:00 stderr F I1213 00:18:15.335362 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:15.335703726+00:00 stderr F I1213 00:18:15.335627 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:15.379436348+00:00 stderr F W1213 00:18:15.379342 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:15.382856310+00:00 stderr F I1213 00:18:15.382761 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.498953ms)
2025-12-13T00:18:16.335618132+00:00 stderr F I1213 00:18:16.335502 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:16.335918590+00:00 stderr F I1213 00:18:16.335870 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:16.335918590+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:16.335918590+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:16.335996582+00:00 stderr F I1213 00:18:16.335964 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (509.943µs)
2025-12-13T00:18:16.335996582+00:00 stderr F I1213 00:18:16.335988 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:16.336134016+00:00 stderr F I1213 00:18:16.336043 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:16.336134016+00:00 stderr F I1213 00:18:16.336111 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:16.336486025+00:00 stderr F I1213 00:18:16.336409 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:16.337112952+00:00 stderr F I1213 00:18:16.337055 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version"
2025-12-13T00:18:16.337223515+00:00 stderr F I1213 00:18:16.337139 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterVersionOverrides' reason='ClusterVersionOverridesSet' message='Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.')
2025-12-13T00:18:16.337223515+00:00 stderr F I1213 00:18:16.337198 1 upgradeable.go:123] Cluster current version=4.16.0
2025-12-13T00:18:16.337278296+00:00 stderr F I1213 00:18:16.337245 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterOperators' reason='PoolUpdating' message='Cluster operator machine-config should not be upgraded between minor versions: One or more machine config pools are updating, please see `oc get mcp` for further details')
2025-12-13T00:18:16.337354688+00:00 stderr F I1213 00:18:16.337292 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:16.337354688+00:00 stderr F I1213 00:18:16.337295 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (255.226µs)
2025-12-13T00:18:16.337512562+00:00 stderr F I1213 00:18:16.337459 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:16.337512562+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:16.337512562+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:16.337512562+00:00 stderr F I1213 00:18:16.337503 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (211.576µs)
2025-12-13T00:18:16.388661803+00:00 stderr F W1213 00:18:16.388155 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:16.390668435+00:00 stderr F I1213 00:18:16.390585 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (54.590621ms)
2025-12-13T00:18:16.390668435+00:00 stderr F I1213 00:18:16.390631 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:16.390742647+00:00 stderr F I1213 00:18:16.390709 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:16.390770059+00:00 stderr F I1213 00:18:16.390760 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:16.391141829+00:00 stderr F I1213 00:18:16.391057 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:16.420114749+00:00 stderr F W1213 00:18:16.418707 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:16.424481765+00:00 stderr F I1213 00:18:16.423726 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.08458ms)
2025-12-13T00:18:17.336902304+00:00 stderr F I1213 00:18:17.336295 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:17.337372637+00:00 stderr F I1213 00:18:17.337299 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:17.337372637+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:17.337372637+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:17.337433188+00:00 stderr F I1213 00:18:17.337400 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.13918ms)
2025-12-13T00:18:17.337490510+00:00 stderr F I1213 00:18:17.337432 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:17.337525691+00:00 stderr F I1213 00:18:17.337498 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:17.337630034+00:00 stderr F I1213 00:18:17.337566 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:17.338107877+00:00 stderr F I1213 00:18:17.338010 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:17.382000443+00:00 stderr F W1213 00:18:17.381868 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:17.385038254+00:00 stderr F I1213 00:18:17.384932 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.493563ms)
2025-12-13T00:18:18.338058434+00:00 stderr F I1213 00:18:18.337989 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:18.338347022+00:00 stderr F I1213 00:18:18.338320 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:18.338347022+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:18.338347022+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:18.338432744+00:00 stderr F I1213 00:18:18.338407 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (429.141µs)
2025-12-13T00:18:18.338432744+00:00 stderr F I1213 00:18:18.338428 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:18.338509946+00:00 stderr F I1213 00:18:18.338479 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:18.338562117+00:00 stderr F I1213 00:18:18.338538 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:18.338851155+00:00 stderr F I1213 00:18:18.338803 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:18.361692272+00:00 stderr F W1213 00:18:18.361624 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:18.364826375+00:00 stderr F I1213 00:18:18.364770 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.336859ms)
2025-12-13T00:18:19.339481840+00:00 stderr F I1213 00:18:19.339357 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:19.339715316+00:00 stderr F I1213 00:18:19.339674 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:19.339715316+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:19.339715316+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:19.339772578+00:00 stderr F I1213 00:18:19.339739 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (392.241µs)
2025-12-13T00:18:19.339772578+00:00 stderr F I1213 00:18:19.339760 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:19.339856820+00:00 stderr F I1213 00:18:19.339812 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:19.339881450+00:00 stderr F I1213 00:18:19.339866 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:19.340221699+00:00 stderr F I1213 00:18:19.340162 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:19.368473960+00:00 stderr F W1213 00:18:19.368428 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:19.370405102+00:00 stderr F I1213 00:18:19.370370 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.606564ms)
2025-12-13T00:18:20.340530116+00:00 stderr F I1213 00:18:20.340482 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:20.340774313+00:00 stderr F I1213 00:18:20.340753 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:20.340774313+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:20.340774313+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:20.340861935+00:00 stderr F I1213 00:18:20.340825 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (352.95µs)
2025-12-13T00:18:20.340861935+00:00 stderr F I1213 00:18:20.340850 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:20.340924887+00:00 stderr F I1213 00:18:20.340893 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:20.340971668+00:00 stderr F I1213 00:18:20.340958 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:20.341215885+00:00 stderr F I1213 00:18:20.341165 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:20.383413366+00:00 stderr F W1213 00:18:20.383357 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:20.385327427+00:00 stderr F I1213 00:18:20.385300 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.448511ms)
2025-12-13T00:18:21.342204869+00:00 stderr F I1213 00:18:21.342125 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:21.342699392+00:00 stderr F I1213 00:18:21.342645 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:21.342699392+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:21.342699392+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:21.342827205+00:00 stderr F I1213 00:18:21.342791 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (675.288µs)
2025-12-13T00:18:21.342865356+00:00 stderr F I1213 00:18:21.342833 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:21.343014980+00:00 stderr F I1213 00:18:21.342977 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:21.343085762+00:00 stderr F I1213 00:18:21.343060 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:21.343458702+00:00 stderr F I1213 00:18:21.343390 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:21.393981995+00:00 stderr F W1213 00:18:21.390408 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:21.393981995+00:00 stderr F I1213 00:18:21.393364 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.528244ms)
2025-12-13T00:18:22.343193663+00:00 stderr F I1213 00:18:22.343065 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:22.343555983+00:00 stderr F I1213 00:18:22.343499 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:22.343555983+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:22.343555983+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:22.343637035+00:00 stderr F I1213 00:18:22.343600 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (549.215µs)
2025-12-13T00:18:22.343637035+00:00 stderr F I1213 00:18:22.343630 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:22.343799630+00:00 stderr F I1213 00:18:22.343727 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:22.343862092+00:00 stderr F I1213 00:18:22.343826 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:22.344357775+00:00 stderr F I1213 00:18:22.344296 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:22.376687744+00:00 stderr F W1213 00:18:22.376559 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:22.381213724+00:00 stderr F I1213 00:18:22.381152 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.515248ms)
2025-12-13T00:18:23.142000732+00:00 stderr F I1213 00:18:23.141480 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:23.142143997+00:00 stderr F I1213 00:18:23.142098 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:23.142209749+00:00 stderr F I1213 00:18:23.142180 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:23.142647500+00:00 stderr F I1213 00:18:23.142580 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:23.175212125+00:00 stderr F W1213 00:18:23.175137 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:23.176609483+00:00 stderr F I1213 00:18:23.176531 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.060153ms)
2025-12-13T00:18:23.344434085+00:00 stderr F I1213 00:18:23.344355 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:23.344734193+00:00 stderr F I1213 00:18:23.344694 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:23.344734193+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:23.344734193+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:23.344822856+00:00 stderr F I1213 00:18:23.344776 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (433.251µs)
2025-12-13T00:18:23.344822856+00:00 stderr F I1213 00:18:23.344812 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:23.344910048+00:00 stderr F I1213 00:18:23.344867 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:23.344951539+00:00 stderr F I1213 00:18:23.344925 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:23.345306838+00:00 stderr F I1213 00:18:23.345245 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:23.376083957+00:00 stderr F W1213 00:18:23.375993 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:23.379610040+00:00 stderr F I1213 00:18:23.379529 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.709933ms)
2025-12-13T00:18:24.345261448+00:00 stderr F I1213 00:18:24.345122 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:24.345678079+00:00 stderr F I1213 00:18:24.345599 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:24.345678079+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:24.345678079+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:24.345801562+00:00 stderr F I1213 00:18:24.345732 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (623.057µs)
2025-12-13T00:18:24.345801562+00:00 stderr F I1213 00:18:24.345773 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:24.345987717+00:00 stderr F I1213 00:18:24.345865 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:24.346028308+00:00 stderr F I1213 00:18:24.346012 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:24.346547683+00:00 stderr F I1213 00:18:24.346448 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:24.394170366+00:00 stderr F W1213 00:18:24.394076 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:24.398215178+00:00 stderr F I1213 00:18:24.398149 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (52.372715ms)
2025-12-13T00:18:25.345811527+00:00 stderr F I1213 00:18:25.345721 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:25.346133696+00:00 stderr F I1213 00:18:25.346093 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:25.346133696+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:25.346133696+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:25.346210328+00:00 stderr F I1213 00:18:25.346178 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (473.893µs)
2025-12-13T00:18:25.346210328+00:00 stderr F I1213 00:18:25.346203 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:25.346316431+00:00 stderr F I1213 00:18:25.346263 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:25.346366132+00:00 stderr F I1213 00:18:25.346336 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:25.346734682+00:00 stderr F I1213 00:18:25.346679 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:25.383214339+00:00 stderr F W1213 00:18:25.383139 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:25.387215909+00:00 stderr F I1213 00:18:25.386927 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.712674ms)
2025-12-13T00:18:26.347331395+00:00 stderr F I1213 00:18:26.347219 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:26.347706615+00:00 stderr F I1213 00:18:26.347649 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:26.347706615+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:26.347706615+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:26.347838658+00:00 stderr F I1213 00:18:26.347783 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (577.266µs)
2025-12-13T00:18:26.347838658+00:00 stderr F I1213 00:18:26.347814 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:26.347991373+00:00 stderr F I1213 00:18:26.347884 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:26.348057964+00:00 stderr F I1213 00:18:26.348011 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:26.348563238+00:00 stderr F I1213 00:18:26.348460 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:26.407993727+00:00 stderr F W1213 00:18:26.407785 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:26.412198233+00:00 stderr F I1213 00:18:26.412136 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (64.314353ms)
2025-12-13T00:18:27.347918535+00:00 stderr F I1213 00:18:27.347815 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:18:27.348203413+00:00 stderr F I1213 00:18:27.348163 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:18:27.348203413+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:18:27.348203413+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:18:27.348240424+00:00 stderr F I1213 00:18:27.348215 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (413.411µs)
2025-12-13T00:18:27.348240424+00:00 stderr F I1213 00:18:27.348233 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:18:27.348297645+00:00 stderr F I1213 00:18:27.348270 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:18:27.348332126+00:00 stderr F I1213 00:18:27.348314 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:18:27.348568053+00:00 stderr F I1213 00:18:27.348519 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:18:27.377491480+00:00 stderr F W1213 00:18:27.377425 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:18:27.379731242+00:00 stderr F I1213 00:18:27.379696 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.456537ms)
2025-12-13T00:18:28.349358129+00:00 stderr F I1213 00:18:28.349300 1 cvo.go:726] Started syncing available updates
"openshift-cluster-version/version" 2025-12-13T00:18:28.349573805+00:00 stderr F I1213 00:18:28.349547 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:28.349573805+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:28.349573805+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:28.349627147+00:00 stderr F I1213 00:18:28.349595 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (305.039µs) 2025-12-13T00:18:28.349627147+00:00 stderr F I1213 00:18:28.349611 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:28.349670298+00:00 stderr F I1213 00:18:28.349647 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:28.349702789+00:00 stderr F I1213 00:18:28.349687 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:28.349951847+00:00 stderr F I1213 00:18:28.349893 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:28.370826022+00:00 stderr F W1213 00:18:28.370767 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:28.372149298+00:00 stderr F I1213 00:18:28.372119 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.506111ms) 2025-12-13T00:18:29.350555837+00:00 stderr F I1213 00:18:29.350457 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:29.350918707+00:00 stderr F I1213 00:18:29.350860 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:29.350918707+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:29.350918707+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:29.351058181+00:00 stderr F I1213 00:18:29.350996 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (551.725µs) 2025-12-13T00:18:29.351058181+00:00 stderr F I1213 00:18:29.351027 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:29.351136373+00:00 stderr F I1213 00:18:29.351093 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:29.351198145+00:00 stderr F I1213 00:18:29.351166 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:29.351654257+00:00 stderr F I1213 00:18:29.351576 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:29.404684950+00:00 stderr F W1213 00:18:29.404628 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:29.406160511+00:00 stderr F I1213 00:18:29.406107 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (55.079809ms) 2025-12-13T00:18:30.351100887+00:00 stderr F I1213 00:18:30.351005 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:30.351332453+00:00 stderr F I1213 00:18:30.351280 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:30.351332453+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:30.351332453+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:30.351358374+00:00 stderr F I1213 00:18:30.351328 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (334.479µs) 2025-12-13T00:18:30.351358374+00:00 stderr F I1213 00:18:30.351344 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:30.351413726+00:00 stderr F I1213 00:18:30.351379 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:30.351434716+00:00 stderr F I1213 00:18:30.351425 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:30.351753755+00:00 stderr F I1213 00:18:30.351645 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:30.396308243+00:00 stderr F W1213 00:18:30.396249 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:30.397654870+00:00 stderr F I1213 00:18:30.397593 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.247944ms) 2025-12-13T00:18:31.352310034+00:00 stderr F I1213 00:18:31.351504 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:31.352310034+00:00 stderr F I1213 00:18:31.352020 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:31.352310034+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:31.352310034+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:31.352310034+00:00 stderr F I1213 00:18:31.352123 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (664.758µs) 2025-12-13T00:18:31.352310034+00:00 stderr F I1213 00:18:31.352145 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:31.352310034+00:00 stderr F I1213 00:18:31.352218 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:31.352310034+00:00 stderr F I1213 00:18:31.352291 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:31.352794488+00:00 stderr F I1213 00:18:31.352692 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:31.389653935+00:00 stderr F W1213 00:18:31.389588 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:31.391736841+00:00 stderr F I1213 00:18:31.391637 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (39.487598ms) 2025-12-13T00:18:32.353358779+00:00 stderr F I1213 00:18:32.353079 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:32.353737669+00:00 stderr F I1213 00:18:32.353693 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:32.353737669+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:32.353737669+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:32.358720407+00:00 stderr F I1213 00:18:32.358533 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (5.46379ms) 2025-12-13T00:18:32.358741127+00:00 stderr F I1213 00:18:32.358727 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:32.358881541+00:00 stderr F I1213 00:18:32.358835 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:32.359005424+00:00 stderr F I1213 00:18:32.358975 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:32.359694853+00:00 stderr F I1213 00:18:32.359626 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:32.397412983+00:00 stderr F W1213 00:18:32.397338 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:32.400764236+00:00 stderr F I1213 00:18:32.400713 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.984288ms) 2025-12-13T00:18:33.357964960+00:00 stderr F I1213 00:18:33.357828 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:33.358145385+00:00 stderr F I1213 00:18:33.358090 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:33.358145385+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:33.358145385+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:33.358188426+00:00 stderr F I1213 00:18:33.358143 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (328.238µs) 2025-12-13T00:18:33.358188426+00:00 stderr F I1213 00:18:33.358157 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:33.358213756+00:00 stderr F I1213 00:18:33.358194 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:33.358270078+00:00 stderr F I1213 00:18:33.358239 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:33.358508875+00:00 stderr F I1213 00:18:33.358445 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:33.383636757+00:00 stderr F W1213 00:18:33.383529 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:33.386582659+00:00 stderr F I1213 00:18:33.386512 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.345591ms) 2025-12-13T00:18:34.359223320+00:00 stderr F I1213 00:18:34.358584 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:34.359476486+00:00 stderr F I1213 00:18:34.359423 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:34.359476486+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:34.359476486+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:34.359530098+00:00 stderr F I1213 00:18:34.359495 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (924.166µs) 2025-12-13T00:18:34.359530098+00:00 stderr F I1213 00:18:34.359513 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:34.359618340+00:00 stderr F I1213 00:18:34.359558 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:34.359635651+00:00 stderr F I1213 00:18:34.359614 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:34.359963690+00:00 stderr F I1213 00:18:34.359885 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:34.400324132+00:00 stderr F W1213 00:18:34.400244 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:34.403278164+00:00 stderr F I1213 00:18:34.403212 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.689494ms) 2025-12-13T00:18:35.360508130+00:00 stderr F I1213 00:18:35.360308 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:35.360830978+00:00 stderr F I1213 00:18:35.360755 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:35.360830978+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:35.360830978+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:35.361059625+00:00 stderr F I1213 00:18:35.360888 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (588.736µs) 2025-12-13T00:18:35.361059625+00:00 stderr F I1213 00:18:35.360972 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:35.361159727+00:00 stderr F I1213 00:18:35.361094 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:35.361189288+00:00 stderr F I1213 00:18:35.361178 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:35.361679372+00:00 stderr F I1213 00:18:35.361597 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:35.407302140+00:00 stderr F W1213 00:18:35.407204 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:35.410267352+00:00 stderr F I1213 00:18:35.410180 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.206677ms) 2025-12-13T00:18:36.361207803+00:00 stderr F I1213 00:18:36.361089 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:36.361630744+00:00 stderr F I1213 00:18:36.361544 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:36.361630744+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:36.361630744+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:36.361691926+00:00 stderr F I1213 00:18:36.361641 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (565.715µs) 2025-12-13T00:18:36.361691926+00:00 stderr F I1213 00:18:36.361663 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:36.361842511+00:00 stderr F I1213 00:18:36.361747 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:36.361894512+00:00 stderr F I1213 00:18:36.361862 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:36.362411487+00:00 stderr F I1213 00:18:36.362321 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:36.416449636+00:00 stderr F W1213 00:18:36.416368 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:36.419369897+00:00 stderr F I1213 00:18:36.419312 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (57.64485ms) 2025-12-13T00:18:37.362696249+00:00 stderr F I1213 00:18:37.362596 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:37.363079370+00:00 stderr F I1213 00:18:37.363002 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:37.363079370+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:37.363079370+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:37.363113181+00:00 stderr F I1213 00:18:37.363093 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (522.624µs) 2025-12-13T00:18:37.363128961+00:00 stderr F I1213 00:18:37.363116 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:37.363238934+00:00 stderr F I1213 00:18:37.363176 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:37.363260405+00:00 stderr F I1213 00:18:37.363248 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:37.363674346+00:00 stderr F I1213 00:18:37.363597 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:37.404704918+00:00 stderr F W1213 00:18:37.404591 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:37.408076510+00:00 stderr F I1213 00:18:37.407973 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.807285ms) 2025-12-13T00:18:38.141288029+00:00 stderr F I1213 00:18:38.141239 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:38.141449363+00:00 stderr F I1213 00:18:38.141424 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:38.141527815+00:00 stderr F I1213 00:18:38.141515 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:38.141864324+00:00 stderr F I1213 00:18:38.141825 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:38.163732838+00:00 stderr F W1213 00:18:38.163688 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:38.165852175+00:00 stderr F I1213 00:18:38.165809 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.591467ms) 2025-12-13T00:18:38.364228686+00:00 stderr F I1213 00:18:38.364151 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:38.364759680+00:00 stderr F I1213 00:18:38.364687 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:38.364759680+00:00 stderr 
F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:38.364759680+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:38.364927466+00:00 stderr F I1213 00:18:38.364801 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (664.408µs) 2025-12-13T00:18:38.364927466+00:00 stderr F I1213 00:18:38.364833 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:38.365007048+00:00 stderr F I1213 00:18:38.364978 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:38.365119811+00:00 stderr F I1213 00:18:38.365071 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:38.365606424+00:00 stderr F I1213 00:18:38.365503 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:38.398975434+00:00 stderr F W1213 00:18:38.398891 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:38.402146112+00:00 stderr F I1213 00:18:38.402104 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (37.265737ms) 2025-12-13T00:18:39.365920788+00:00 stderr F I1213 00:18:39.365828 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:39.366178985+00:00 stderr F I1213 00:18:39.366131 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:39.366178985+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:39.366178985+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:39.366201115+00:00 stderr F I1213 00:18:39.366178 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (361.42µs) 2025-12-13T00:18:39.366201115+00:00 stderr F I1213 00:18:39.366191 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:39.366284158+00:00 stderr F I1213 00:18:39.366250 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:39.366317289+00:00 stderr F I1213 00:18:39.366297 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:39.366576386+00:00 stderr F I1213 00:18:39.366529 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:39.391338099+00:00 stderr F W1213 00:18:39.391289 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:39.393012895+00:00 stderr F I1213 00:18:39.392974 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.779229ms) 2025-12-13T00:18:40.366468217+00:00 stderr F I1213 00:18:40.366405 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:40.366774835+00:00 stderr F I1213 00:18:40.366743 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:40.366774835+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:40.366774835+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:40.366892269+00:00 stderr F I1213 00:18:40.366847 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (475.063µs) 2025-12-13T00:18:40.366892269+00:00 stderr F I1213 00:18:40.366875 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:40.366988631+00:00 stderr F I1213 00:18:40.366928 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:40.367035023+00:00 stderr F I1213 00:18:40.367010 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:40.367353581+00:00 stderr F I1213 00:18:40.367296 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:40.409969167+00:00 stderr F W1213 00:18:40.409877 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:40.411966882+00:00 stderr F I1213 00:18:40.411902 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (45.025352ms) 2025-12-13T00:18:41.367338146+00:00 stderr F I1213 00:18:41.366903 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:41.367747677+00:00 stderr F I1213 00:18:41.367726 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:41.367747677+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:41.367747677+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:41.367827179+00:00 stderr F I1213 00:18:41.367811 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (919.835µs) 2025-12-13T00:18:41.367866170+00:00 stderr F I1213 00:18:41.367855 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:41.367963144+00:00 stderr F I1213 00:18:41.367917 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:41.368043496+00:00 stderr F I1213 00:18:41.368030 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:41.368309493+00:00 stderr F I1213 00:18:41.368279 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:41.395906213+00:00 stderr F W1213 00:18:41.395859 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:41.397467507+00:00 stderr F I1213 00:18:41.397439 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.581246ms) 2025-12-13T00:18:42.368169444+00:00 stderr F I1213 00:18:42.368079 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:42.368742640+00:00 stderr F I1213 00:18:42.368669 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:42.368742640+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:42.368742640+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:42.368866113+00:00 stderr F I1213 00:18:42.368800 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (736.69µs) 2025-12-13T00:18:42.368866113+00:00 stderr F I1213 00:18:42.368837 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:42.369088449+00:00 stderr F I1213 00:18:42.369006 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:42.369120750+00:00 stderr F I1213 00:18:42.369109 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:42.369709796+00:00 stderr F I1213 00:18:42.369626 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:42.416668301+00:00 stderr F W1213 00:18:42.416562 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:42.418853232+00:00 stderr F I1213 00:18:42.418718 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.879396ms) 2025-12-13T00:18:43.369372111+00:00 stderr F I1213 00:18:43.369261 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:43.369650949+00:00 stderr F I1213 00:18:43.369616 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:43.369650949+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:43.369650949+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:43.369699020+00:00 stderr F I1213 00:18:43.369675 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (426.412µs) 2025-12-13T00:18:43.369699020+00:00 stderr F I1213 00:18:43.369693 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:43.369787083+00:00 stderr F I1213 00:18:43.369751 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:43.369811173+00:00 stderr F I1213 00:18:43.369805 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:43.370158093+00:00 stderr F I1213 00:18:43.370113 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:43.393013963+00:00 stderr F W1213 00:18:43.392962 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:43.394476024+00:00 stderr F I1213 00:18:43.394427 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.731002ms) 2025-12-13T00:18:44.370218350+00:00 stderr F I1213 00:18:44.370142 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:44.370623281+00:00 stderr F I1213 00:18:44.370601 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:44.370623281+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:44.370623281+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:44.370754315+00:00 stderr F I1213 00:18:44.370721 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (589.466µs) 2025-12-13T00:18:44.370813086+00:00 stderr F I1213 00:18:44.370794 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:44.370919519+00:00 stderr F I1213 00:18:44.370893 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:44.371037273+00:00 stderr F I1213 00:18:44.371019 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:44.371431244+00:00 stderr F I1213 00:18:44.371394 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:44.417489004+00:00 stderr F W1213 00:18:44.417444 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:44.420501257+00:00 stderr F I1213 00:18:44.420456 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (49.658279ms) 2025-12-13T00:18:45.371113251+00:00 stderr F I1213 00:18:45.371048 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:45.371474101+00:00 stderr F I1213 00:18:45.371455 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:45.371474101+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:45.371474101+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:45.371590224+00:00 stderr F I1213 00:18:45.371569 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (529.036µs) 2025-12-13T00:18:45.371631036+00:00 stderr F I1213 00:18:45.371617 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:45.371718668+00:00 stderr F I1213 00:18:45.371696 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:45.371774009+00:00 stderr F I1213 00:18:45.371764 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:45.372076938+00:00 stderr F I1213 00:18:45.372049 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:45.395219066+00:00 stderr F W1213 00:18:45.395162 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:45.397605521+00:00 stderr F I1213 00:18:45.397569 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.947325ms) 2025-12-13T00:18:46.371810066+00:00 stderr F I1213 00:18:46.371674 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:46.372347540+00:00 stderr F I1213 00:18:46.372260 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:46.372347540+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:46.372347540+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:46.372378581+00:00 stderr F I1213 00:18:46.372361 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (702.569µs) 2025-12-13T00:18:46.372394522+00:00 stderr F I1213 00:18:46.372383 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:46.372525765+00:00 stderr F I1213 00:18:46.372457 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:46.372641168+00:00 stderr F I1213 00:18:46.372580 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:46.373118091+00:00 stderr F I1213 00:18:46.373032 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:46.415577702+00:00 stderr F W1213 00:18:46.415527 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:46.417096494+00:00 stderr F I1213 00:18:46.417073 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.688403ms) 2025-12-13T00:18:47.373471657+00:00 stderr F I1213 00:18:47.373374 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:47.373772115+00:00 stderr F I1213 00:18:47.373720 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:47.373772115+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:47.373772115+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:47.373826026+00:00 stderr F I1213 00:18:47.373790 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (428.631µs) 2025-12-13T00:18:47.373826026+00:00 stderr F I1213 00:18:47.373810 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:47.373923289+00:00 stderr F I1213 00:18:47.373873 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:47.373973490+00:00 stderr F I1213 00:18:47.373963 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:47.374334431+00:00 stderr F I1213 00:18:47.374265 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:47.417446840+00:00 stderr F W1213 00:18:47.417379 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:47.419437755+00:00 stderr F I1213 00:18:47.419386 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (45.571477ms) 2025-12-13T00:18:48.374780079+00:00 stderr F I1213 00:18:48.374681 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:48.375361165+00:00 stderr F I1213 00:18:48.375326 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:48.375361165+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:48.375361165+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:48.375538389+00:00 stderr F I1213 00:18:48.375507 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (840.663µs) 2025-12-13T00:18:48.375603131+00:00 stderr F I1213 00:18:48.375581 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:48.375771906+00:00 stderr F I1213 00:18:48.375714 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:48.375896389+00:00 stderr F I1213 00:18:48.375873 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:48.376573738+00:00 stderr F I1213 00:18:48.376504 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:48.406239646+00:00 stderr F W1213 00:18:48.406176 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:48.409858656+00:00 stderr F I1213 00:18:48.409812 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.226414ms) 2025-12-13T00:18:49.376066020+00:00 stderr F I1213 00:18:49.375835 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:49.376226714+00:00 stderr F I1213 00:18:49.376179 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:49.376226714+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:49.376226714+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:49.376274795+00:00 stderr F I1213 00:18:49.376247 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (426.122µs) 2025-12-13T00:18:49.376274795+00:00 stderr F I1213 00:18:49.376264 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:49.376377458+00:00 stderr F I1213 00:18:49.376331 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:49.376393829+00:00 stderr F I1213 00:18:49.376387 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:49.376738328+00:00 stderr F I1213 00:18:49.376682 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:49.444898058+00:00 stderr F W1213 00:18:49.404603 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:49.444898058+00:00 stderr F I1213 00:18:49.407602 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.331794ms) 2025-12-13T00:18:50.376950278+00:00 stderr F I1213 00:18:50.376856 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:50.377219777+00:00 stderr F I1213 00:18:50.377190 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:50.377219777+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:50.377219777+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:50.377304849+00:00 stderr F I1213 00:18:50.377263 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (417.123µs) 2025-12-13T00:18:50.377304849+00:00 stderr F I1213 00:18:50.377289 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:50.377383141+00:00 stderr F I1213 00:18:50.377340 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:50.377427862+00:00 stderr F I1213 00:18:50.377406 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:50.377736921+00:00 stderr F I1213 00:18:50.377672 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:50.398685328+00:00 stderr F W1213 00:18:50.398630 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:50.400102817+00:00 stderr F I1213 00:18:50.400068 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.777798ms) 2025-12-13T00:18:51.378279481+00:00 stderr F I1213 00:18:51.377368 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:51.378668751+00:00 stderr F I1213 00:18:51.378618 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:51.378668751+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:51.378668751+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:51.378765894+00:00 stderr F I1213 00:18:51.378711 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.376347ms) 2025-12-13T00:18:51.378765894+00:00 stderr F I1213 00:18:51.378739 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:51.378894347+00:00 stderr F I1213 00:18:51.378842 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:51.379009570+00:00 stderr F I1213 00:18:51.378963 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:51.379470323+00:00 stderr F I1213 00:18:51.379399 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:51.403254489+00:00 stderr F W1213 00:18:51.403188 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:51.406305563+00:00 stderr F I1213 00:18:51.406224 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.477368ms) 2025-12-13T00:18:52.379503939+00:00 stderr F I1213 00:18:52.379418 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:52.379783586+00:00 stderr F I1213 00:18:52.379737 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:52.379783586+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:52.379783586+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:52.379848748+00:00 stderr F I1213 00:18:52.379807 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (400.301µs) 2025-12-13T00:18:52.379848748+00:00 stderr F I1213 00:18:52.379828 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:52.379950971+00:00 stderr F I1213 00:18:52.379891 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:52.379991732+00:00 stderr F I1213 00:18:52.379966 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:52.380301701+00:00 stderr F I1213 00:18:52.380247 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:52.408813247+00:00 stderr F W1213 00:18:52.408717 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:52.410310768+00:00 stderr F I1213 00:18:52.410260 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.42932ms) 2025-12-13T00:18:53.141765827+00:00 stderr F I1213 00:18:53.141679 1 cvo.go:657] Started syncing cluster version 
"openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:53.141816319+00:00 stderr F I1213 00:18:53.141794 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:53.141884990+00:00 stderr F I1213 00:18:53.141855 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:53.142205050+00:00 stderr F I1213 00:18:53.142155 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:53.169100611+00:00 stderr F W1213 00:18:53.169040 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:53.170560782+00:00 stderr F I1213 00:18:53.170486 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.822296ms) 2025-12-13T00:18:53.380616025+00:00 stderr F I1213 00:18:53.380545 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:53.380855981+00:00 stderr F I1213 00:18:53.380828 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:53.380855981+00:00 stderr 
F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:53.380855981+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:53.380907863+00:00 stderr F I1213 00:18:53.380879 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (347.13µs) 2025-12-13T00:18:53.380907863+00:00 stderr F I1213 00:18:53.380897 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:53.381007705+00:00 stderr F I1213 00:18:53.380963 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:53.381043066+00:00 stderr F I1213 00:18:53.381028 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:53.381356825+00:00 stderr F I1213 00:18:53.381306 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:53.406593341+00:00 stderr F W1213 00:18:53.406515 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:53.407960608+00:00 stderr F I1213 00:18:53.407916 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (27.017085ms) 2025-12-13T00:18:54.382101590+00:00 stderr F I1213 00:18:54.382009 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:54.382661075+00:00 stderr F I1213 00:18:54.382571 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:54.382661075+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:54.382661075+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:54.382696776+00:00 stderr F I1213 00:18:54.382680 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (774.291µs) 2025-12-13T00:18:54.382716247+00:00 stderr F I1213 00:18:54.382704 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:54.382848940+00:00 stderr F I1213 00:18:54.382783 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:54.382910772+00:00 stderr F I1213 00:18:54.382874 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:54.383494699+00:00 stderr F I1213 00:18:54.383395 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:54.422208496+00:00 stderr F W1213 00:18:54.422119 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:54.424358605+00:00 stderr F I1213 00:18:54.424311 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (41.604148ms) 2025-12-13T00:18:55.383152623+00:00 stderr F I1213 00:18:55.383047 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:55.383434522+00:00 stderr F I1213 00:18:55.383374 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:55.383434522+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:55.383434522+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:55.383486593+00:00 stderr F I1213 00:18:55.383450 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (415.912µs) 2025-12-13T00:18:55.383486593+00:00 stderr F I1213 00:18:55.383471 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:55.383575456+00:00 stderr F I1213 00:18:55.383524 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:55.383605546+00:00 stderr F I1213 00:18:55.383598 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:55.383963076+00:00 stderr F I1213 00:18:55.383896 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:55.408308387+00:00 stderr F W1213 00:18:55.408239 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:55.409899361+00:00 stderr F I1213 00:18:55.409848 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.344846ms) 2025-12-13T00:18:56.384131205+00:00 stderr F I1213 00:18:56.384060 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:56.384396983+00:00 stderr F I1213 00:18:56.384364 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:56.384396983+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:56.384396983+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:56.384445544+00:00 stderr F I1213 00:18:56.384416 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (367.13µs) 2025-12-13T00:18:56.384445544+00:00 stderr F I1213 00:18:56.384436 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:56.384516276+00:00 stderr F I1213 00:18:56.384474 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:56.384531026+00:00 stderr F I1213 00:18:56.384521 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:56.384775323+00:00 stderr F I1213 00:18:56.384725 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:56.418057181+00:00 stderr F W1213 00:18:56.417982 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:56.421962918+00:00 stderr F I1213 00:18:56.419989 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.55006ms) 2025-12-13T00:18:57.385098876+00:00 stderr F I1213 00:18:57.385034 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:57.385359873+00:00 stderr F I1213 00:18:57.385329 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:57.385359873+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:57.385359873+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:57.385429505+00:00 stderr F I1213 00:18:57.385397 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (381.28µs) 2025-12-13T00:18:57.385429505+00:00 stderr F I1213 00:18:57.385422 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:57.385502867+00:00 stderr F I1213 00:18:57.385469 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:57.385543678+00:00 stderr F I1213 00:18:57.385526 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:57.385893648+00:00 stderr F I1213 00:18:57.385818 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:57.411595577+00:00 stderr F W1213 00:18:57.411526 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:57.414256461+00:00 stderr F I1213 00:18:57.414220 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.792803ms) 2025-12-13T00:18:58.386256713+00:00 stderr F I1213 00:18:58.385638 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:18:58.386612883+00:00 stderr F I1213 00:18:58.386572 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:58.386612883+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:18:58.386612883+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:58.386703836+00:00 stderr F I1213 00:18:58.386668 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.04596ms) 2025-12-13T00:18:58.386712306+00:00 stderr F I1213 00:18:58.386702 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:58.386854040+00:00 stderr F I1213 00:18:58.386797 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:58.386962333+00:00 stderr F I1213 00:18:58.386909 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:58.387393955+00:00 stderr F I1213 00:18:58.387338 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:58.419728596+00:00 stderr F W1213 00:18:58.419680 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:58.421976928+00:00 stderr F I1213 00:18:58.421857 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.153468ms) 2025-12-13T00:18:59.386781213+00:00 stderr F I1213 00:18:59.386707 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:18:59.387156393+00:00 stderr F I1213 00:18:59.387079 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:18:59.387156393+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:18:59.387156393+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:18:59.387194254+00:00 stderr F I1213 00:18:59.387146 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (451.133µs) 2025-12-13T00:18:59.387194254+00:00 stderr F I1213 00:18:59.387164 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:18:59.387248485+00:00 stderr F I1213 00:18:59.387212 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:18:59.387294267+00:00 stderr F I1213 00:18:59.387265 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:18:59.387625056+00:00 stderr F I1213 00:18:59.387534 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:18:59.409044427+00:00 stderr F W1213 00:18:59.408983 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:18:59.410566308+00:00 stderr F I1213 00:18:59.410522 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.354434ms) 2025-12-13T00:19:00.388102874+00:00 stderr F I1213 00:19:00.387985 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:00.388102874+00:00 stderr F I1213 00:19:00.388052 1 availableupdates.go:70] Retrieving available updates again, because more than 2m52.516505533s has elapsed since 2025-12-13T00:16:07Z 2025-12-13T00:19:00.390379617+00:00 stderr F I1213 00:19:00.390284 1 cincinnati.go:114] Using a root CA pool with 0 root CA subjects to request updates from https://api.openshift.com/api/upgrades_info/v1/graph?arch=amd64&channel=stable-4.16&id=a84dabf3-edcf-4828-b6a1-f9d3a6f02304&version=4.16.0 2025-12-13T00:19:00.623340141+00:00 stderr F I1213 00:19:00.623259 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:00.623340141+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:00.623340141+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:00.623505765+00:00 stderr F I1213 00:19:00.623468 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (235.491804ms) 2025-12-13T00:19:00.623505765+00:00 stderr F I1213 00:19:00.623496 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:00.623635399+00:00 stderr F I1213 00:19:00.623568 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:00.623655039+00:00 stderr F I1213 00:19:00.623638 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:00.624093991+00:00 stderr F I1213 00:19:00.624026 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:00.664285980+00:00 stderr F W1213 00:19:00.664082 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:00.665606416+00:00 stderr F I1213 00:19:00.665549 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.050919ms) 2025-12-13T00:19:01.623514790+00:00 stderr F I1213 00:19:01.623451 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:01.623905471+00:00 stderr F I1213 00:19:01.623874 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:01.623905471+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:01.623905471+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:01.624111036+00:00 stderr F I1213 00:19:01.624082 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (639.587µs) 2025-12-13T00:19:01.624187738+00:00 stderr F I1213 00:19:01.624167 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:01.624325592+00:00 stderr F I1213 00:19:01.624294 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:01.624450896+00:00 stderr F I1213 00:19:01.624414 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:01.624823407+00:00 stderr F I1213 00:19:01.624784 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:01.657428395+00:00 stderr F W1213 00:19:01.657373 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:01.659838092+00:00 stderr F I1213 00:19:01.659800 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.629693ms) 2025-12-13T00:19:02.624520022+00:00 stderr F I1213 00:19:02.624426 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:02.624923064+00:00 stderr F I1213 00:19:02.624859 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:02.624923064+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:02.624923064+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:02.624997776+00:00 stderr F I1213 00:19:02.624976 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (606.858µs) 2025-12-13T00:19:02.625014597+00:00 stderr F I1213 00:19:02.625004 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:02.625119060+00:00 stderr F I1213 00:19:02.625063 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:02.625153861+00:00 stderr F I1213 00:19:02.625135 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:02.625536351+00:00 stderr F I1213 00:19:02.625466 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:02.659173358+00:00 stderr F W1213 00:19:02.659054 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:02.660958497+00:00 stderr F I1213 00:19:02.660868 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.861778ms) 2025-12-13T00:19:03.625593997+00:00 stderr F I1213 00:19:03.625500 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:03.625829734+00:00 stderr F I1213 00:19:03.625779 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:03.625829734+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:03.625829734+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:03.625881865+00:00 stderr F I1213 00:19:03.625854 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (363.7µs) 2025-12-13T00:19:03.625894605+00:00 stderr F I1213 00:19:03.625875 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:03.626020399+00:00 stderr F I1213 00:19:03.625969 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:03.626041809+00:00 stderr F I1213 00:19:03.626030 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:03.626344098+00:00 stderr F I1213 00:19:03.626276 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:03.672974904+00:00 stderr F W1213 00:19:03.672869 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:03.677051836+00:00 stderr F I1213 00:19:03.676991 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (51.057337ms) 2025-12-13T00:19:04.626434074+00:00 stderr F I1213 00:19:04.625906 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:04.626684111+00:00 stderr F I1213 00:19:04.626644 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:04.626684111+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:04.626684111+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:04.626736753+00:00 stderr F I1213 00:19:04.626707 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (815.252µs) 2025-12-13T00:19:04.626745063+00:00 stderr F I1213 00:19:04.626732 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:04.626823165+00:00 stderr F I1213 00:19:04.626797 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:04.626886947+00:00 stderr F I1213 00:19:04.626858 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:04.627198915+00:00 stderr F I1213 00:19:04.627138 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:04.653079400+00:00 stderr F W1213 00:19:04.653010 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:04.655672510+00:00 stderr F I1213 00:19:04.655592 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.857176ms) 2025-12-13T00:19:05.628293640+00:00 stderr F I1213 00:19:05.627264 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:05.628845975+00:00 stderr F I1213 00:19:05.628771 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:05.628845975+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:05.628845975+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:05.629221086+00:00 stderr F I1213 00:19:05.629147 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.895162ms) 2025-12-13T00:19:05.629307248+00:00 stderr F I1213 00:19:05.629227 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:05.629435571+00:00 stderr F I1213 00:19:05.629412 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:05.629504813+00:00 stderr F I1213 00:19:05.629493 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:05.629870543+00:00 stderr F I1213 00:19:05.629760 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:05.653424493+00:00 stderr F W1213 00:19:05.653350 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:05.655663484+00:00 stderr F I1213 00:19:05.655610 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.384878ms) 2025-12-13T00:19:06.629966641+00:00 stderr F I1213 00:19:06.629867 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:06.630286789+00:00 stderr F I1213 00:19:06.630232 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:06.630286789+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:06.630286789+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:06.630306980+00:00 stderr F I1213 00:19:06.630295 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (440.472µs) 2025-12-13T00:19:06.630326991+00:00 stderr F I1213 00:19:06.630312 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:06.630397913+00:00 stderr F I1213 00:19:06.630362 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:06.630448164+00:00 stderr F I1213 00:19:06.630422 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:06.630760542+00:00 stderr F I1213 00:19:06.630700 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:06.662425446+00:00 stderr F W1213 00:19:06.660720 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:06.662425446+00:00 stderr F I1213 00:19:06.662310 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.996303ms) 2025-12-13T00:19:07.631191420+00:00 stderr F I1213 00:19:07.631076 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:07.631691493+00:00 stderr F I1213 00:19:07.631639 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:07.631691493+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:07.631691493+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:07.631791106+00:00 stderr F I1213 00:19:07.631750 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (699.529µs) 2025-12-13T00:19:07.631791106+00:00 stderr F I1213 00:19:07.631779 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:07.631920380+00:00 stderr F I1213 00:19:07.631876 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:07.632027273+00:00 stderr F I1213 00:19:07.631994 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:07.632488665+00:00 stderr F I1213 00:19:07.632430 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:07.663223253+00:00 stderr F W1213 00:19:07.663158 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:07.665282540+00:00 stderr F I1213 00:19:07.665209 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.427422ms) 2025-12-13T00:19:08.142089197+00:00 stderr F I1213 00:19:08.141974 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:08.142198000+00:00 stderr F I1213 00:19:08.142149 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:08.142290823+00:00 stderr F I1213 00:19:08.142238 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:08.142804887+00:00 stderr F I1213 00:19:08.142718 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:08.186385659+00:00 stderr F W1213 00:19:08.186294 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:08.188106056+00:00 stderr F I1213 00:19:08.188056 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (46.140751ms) 2025-12-13T00:19:08.632168361+00:00 stderr F I1213 00:19:08.632058 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:08.632443798+00:00 stderr F I1213 00:19:08.632402 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:08.632443798+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:08.632443798+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:08.632481580+00:00 stderr F I1213 00:19:08.632451 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (412.402µs) 2025-12-13T00:19:08.632481580+00:00 stderr F I1213 00:19:08.632475 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:08.632542411+00:00 stderr F I1213 00:19:08.632515 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:08.632582102+00:00 stderr F I1213 00:19:08.632562 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:08.632836539+00:00 stderr F I1213 00:19:08.632781 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:08.657048567+00:00 stderr F W1213 00:19:08.657000 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:08.658642491+00:00 stderr F I1213 00:19:08.658600 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.12205ms) 2025-12-13T00:19:09.632610618+00:00 stderr F I1213 00:19:09.632532 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:09.632860764+00:00 stderr F I1213 00:19:09.632815 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:09.632860764+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:09.632860764+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:09.632876895+00:00 stderr F I1213 00:19:09.632864 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (348.909µs) 2025-12-13T00:19:09.632888355+00:00 stderr F I1213 00:19:09.632876 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:09.633001818+00:00 stderr F I1213 00:19:09.632928 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:09.633001818+00:00 stderr F I1213 00:19:09.632989 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:09.633263995+00:00 stderr F I1213 00:19:09.633206 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:09.655449797+00:00 stderr F W1213 00:19:09.655381 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:09.656878107+00:00 stderr F I1213 00:19:09.656823 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.94131ms) 2025-12-13T00:19:10.638002372+00:00 stderr F I1213 00:19:10.633099 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:10.638002372+00:00 stderr F I1213 00:19:10.633537 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:10.638002372+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:10.638002372+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:10.638002372+00:00 stderr F I1213 00:19:10.633605 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (530.005µs) 2025-12-13T00:19:10.638002372+00:00 stderr F I1213 00:19:10.633630 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:10.638002372+00:00 stderr F I1213 00:19:10.633696 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:10.638002372+00:00 stderr F I1213 00:19:10.633758 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:10.638002372+00:00 stderr F I1213 00:19:10.634253 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:10.674971231+00:00 stderr F W1213 00:19:10.674501 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:10.676415291+00:00 stderr F I1213 00:19:10.676293 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.659168ms) 2025-12-13T00:19:11.634011976+00:00 stderr F I1213 00:19:11.633916 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:11.634296974+00:00 stderr F I1213 00:19:11.634269 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:11.634296974+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:11.634296974+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:11.634371786+00:00 stderr F I1213 00:19:11.634346 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (442.314µs) 2025-12-13T00:19:11.634381967+00:00 stderr F I1213 00:19:11.634369 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:11.634450018+00:00 stderr F I1213 00:19:11.634421 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:11.634513100+00:00 stderr F I1213 00:19:11.634493 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:11.634854609+00:00 stderr F I1213 00:19:11.634811 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:11.658209293+00:00 stderr F W1213 00:19:11.658159 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:11.659729925+00:00 stderr F I1213 00:19:11.659682 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.309928ms) 2025-12-13T00:19:12.635511492+00:00 stderr F I1213 00:19:12.635025 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:12.635781100+00:00 stderr F I1213 00:19:12.635734 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:12.635781100+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:12.635781100+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:12.635836821+00:00 stderr F I1213 00:19:12.635803 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (792.022µs) 2025-12-13T00:19:12.635836821+00:00 stderr F I1213 00:19:12.635825 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:12.635922873+00:00 stderr F I1213 00:19:12.635875 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:12.635960864+00:00 stderr F I1213 00:19:12.635953 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:12.636270953+00:00 stderr F I1213 00:19:12.636215 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:12.704064762+00:00 stderr F W1213 00:19:12.703338 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:12.704848594+00:00 stderr F I1213 00:19:12.704752 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (68.923401ms) 2025-12-13T00:19:13.635994910+00:00 stderr F I1213 00:19:13.635890 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:13.636285518+00:00 stderr F I1213 00:19:13.636243 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:13.636285518+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:13.636285518+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:13.636351440+00:00 stderr F I1213 00:19:13.636313 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (454.132µs) 2025-12-13T00:19:13.636351440+00:00 stderr F I1213 00:19:13.636336 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:13.636435922+00:00 stderr F I1213 00:19:13.636394 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:13.636476083+00:00 stderr F I1213 00:19:13.636454 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:13.636784161+00:00 stderr F I1213 00:19:13.636732 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:13.663049216+00:00 stderr F W1213 00:19:13.662980 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:13.665039800+00:00 stderr F I1213 00:19:13.664988 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.64825ms) 2025-12-13T00:19:13.736118411+00:00 stderr F I1213 00:19:13.736042 1 sync_worker.go:234] Notify the sync worker: Cluster 
operator machine-config changed Degraded from "False" to "True" 2025-12-13T00:19:13.736118411+00:00 stderr F I1213 00:19:13.736089 1 sync_worker.go:584] Cluster operator machine-config changed Degraded from "False" to "True" 2025-12-13T00:19:13.736118411+00:00 stderr F I1213 00:19:13.736102 1 sync_worker.go:592] No change, waiting 2025-12-13T00:19:14.637483016+00:00 stderr F I1213 00:19:14.637394 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:14.637893027+00:00 stderr F I1213 00:19:14.637788 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:14.637893027+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:14.637893027+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:14.637927628+00:00 stderr F I1213 00:19:14.637907 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (526.385µs) 2025-12-13T00:19:14.638003170+00:00 stderr F I1213 00:19:14.637972 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:14.638083953+00:00 stderr F I1213 00:19:14.638047 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:14.638120644+00:00 stderr F I1213 00:19:14.638107 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:14.638509744+00:00 stderr F I1213 00:19:14.638451 1 status.go:100] merge into existing history 
completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:14.665952901+00:00 stderr F W1213 00:19:14.665806 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:14.667288977+00:00 stderr F I1213 00:19:14.667227 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.298068ms) 2025-12-13T00:19:15.638846038+00:00 stderr F I1213 00:19:15.638754 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:15.639133556+00:00 stderr F I1213 00:19:15.639082 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:15.639133556+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:15.639133556+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:15.639155036+00:00 stderr F I1213 00:19:15.639139 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (396.761µs) 2025-12-13T00:19:15.639164856+00:00 stderr F I1213 00:19:15.639153 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:15.639241828+00:00 stderr F I1213 00:19:15.639195 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:15.639254119+00:00 stderr F I1213 00:19:15.639244 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:15.639532876+00:00 stderr F I1213 00:19:15.639471 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:15.666919832+00:00 stderr F W1213 00:19:15.666831 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:15.668417864+00:00 stderr F I1213 00:19:15.668364 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.206516ms) 2025-12-13T00:19:16.640074176+00:00 stderr F I1213 00:19:16.640016 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:16.640312993+00:00 stderr F I1213 00:19:16.640281 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:16.640312993+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:16.640312993+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:16.640376535+00:00 stderr F I1213 00:19:16.640351 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (345.42µs) 2025-12-13T00:19:16.640386045+00:00 stderr F I1213 00:19:16.640378 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:16.640457547+00:00 stderr F I1213 00:19:16.640426 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:16.640501139+00:00 stderr F I1213 00:19:16.640483 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:16.640735375+00:00 stderr F I1213 00:19:16.640697 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:16.669475277+00:00 stderr F W1213 00:19:16.669412 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:16.670880425+00:00 stderr F I1213 00:19:16.670841 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.461369ms) 2025-12-13T00:19:17.641963973+00:00 stderr F I1213 00:19:17.641569 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:17.642079986+00:00 stderr F I1213 00:19:17.642032 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:17.642079986+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:17.642079986+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:17.642139418+00:00 stderr F I1213 00:19:17.642107 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (550.925µs) 2025-12-13T00:19:17.642139418+00:00 stderr F I1213 00:19:17.642127 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:17.642220640+00:00 stderr F I1213 00:19:17.642180 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:17.642257831+00:00 stderr F I1213 00:19:17.642237 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:17.642534839+00:00 stderr F I1213 00:19:17.642488 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:17.669752070+00:00 stderr F W1213 00:19:17.669681 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:17.671511378+00:00 stderr F I1213 00:19:17.671472 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.342229ms) 2025-12-13T00:19:18.642641036+00:00 stderr F I1213 00:19:18.642567 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:18.642832252+00:00 stderr F I1213 00:19:18.642809 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:18.642832252+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:18.642832252+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:18.642903484+00:00 stderr F I1213 00:19:18.642883 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (326.299µs) 2025-12-13T00:19:18.642903484+00:00 stderr F I1213 00:19:18.642897 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:18.642980346+00:00 stderr F I1213 00:19:18.642936 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:18.643017667+00:00 stderr F I1213 00:19:18.642997 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:18.643259584+00:00 stderr F I1213 00:19:18.643218 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:18.695364251+00:00 stderr F W1213 00:19:18.695282 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:18.696734749+00:00 stderr F I1213 00:19:18.696680 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (53.780014ms) 2025-12-13T00:19:19.643490125+00:00 stderr F I1213 00:19:19.643414 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:19.643785573+00:00 stderr F I1213 00:19:19.643760 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:19.643785573+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:19.643785573+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:19.643865695+00:00 stderr F I1213 00:19:19.643842 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (439.162µs) 2025-12-13T00:19:19.643873976+00:00 stderr F I1213 00:19:19.643865 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:19.643950768+00:00 stderr F I1213 00:19:19.643919 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:19.644020070+00:00 stderr F I1213 00:19:19.644001 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:19.644317738+00:00 stderr F I1213 00:19:19.644279 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:19.667452325+00:00 stderr F W1213 00:19:19.667393 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:19.669186483+00:00 stderr F I1213 00:19:19.669148 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.280037ms) 2025-12-13T00:19:20.644006174+00:00 stderr F I1213 00:19:20.643893 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:20.644279231+00:00 stderr F I1213 00:19:20.644251 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:20.644279231+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:20.644279231+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:20.644347073+00:00 stderr F I1213 00:19:20.644322 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (444.582µs) 2025-12-13T00:19:20.644355753+00:00 stderr F I1213 00:19:20.644346 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:20.644421425+00:00 stderr F I1213 00:19:20.644399 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:20.644475126+00:00 stderr F I1213 00:19:20.644456 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:20.644795585+00:00 stderr F I1213 00:19:20.644730 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:20.668794567+00:00 stderr F W1213 00:19:20.668723 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:20.670542865+00:00 stderr F I1213 00:19:20.670506 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.15586ms) 2025-12-13T00:19:21.644971576+00:00 stderr F I1213 00:19:21.644855 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:21.645223893+00:00 stderr F I1213 00:19:21.645182 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:21.645223893+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:21.645223893+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:21.645351436+00:00 stderr F I1213 00:19:21.645258 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (416.192µs) 2025-12-13T00:19:21.645351436+00:00 stderr F I1213 00:19:21.645327 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:21.645443959+00:00 stderr F I1213 00:19:21.645413 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:21.645492010+00:00 stderr F I1213 00:19:21.645468 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:21.645789528+00:00 stderr F I1213 00:19:21.645731 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:21.671484647+00:00 stderr F W1213 00:19:21.671438 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:21.673182984+00:00 stderr F I1213 00:19:21.673159 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.829838ms) 2025-12-13T00:19:22.645688271+00:00 stderr F I1213 00:19:22.645631 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:22.646088651+00:00 stderr F I1213 00:19:22.646068 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:22.646088651+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:22.646088651+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:22.646182064+00:00 stderr F I1213 00:19:22.646163 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (541.874µs) 2025-12-13T00:19:22.646242966+00:00 stderr F I1213 00:19:22.646208 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:22.646326148+00:00 stderr F I1213 00:19:22.646304 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:22.646396781+00:00 stderr F I1213 00:19:22.646384 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:22.646691549+00:00 stderr F I1213 00:19:22.646661 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:22.687758791+00:00 stderr F W1213 00:19:22.687709 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:22.689753646+00:00 stderr F I1213 00:19:22.689729 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.51869ms) 2025-12-13T00:19:23.141763600+00:00 stderr F I1213 00:19:23.141716 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:23.141959095+00:00 stderr F I1213 00:19:23.141914 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:23.142039587+00:00 stderr F I1213 00:19:23.142019 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:23.142308756+00:00 stderr F I1213 00:19:23.142277 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:23.162643136+00:00 stderr F W1213 00:19:23.162553 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:23.164586319+00:00 stderr F I1213 00:19:23.164541 1 cvo.go:659] Finished syncing cluster version 
"openshift-cluster-version/version" (22.83196ms) 2025-12-13T00:19:23.646409946+00:00 stderr F I1213 00:19:23.646344 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:23.647050314+00:00 stderr F I1213 00:19:23.647004 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:23.647050314+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:23.647050314+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:23.647223249+00:00 stderr F I1213 00:19:23.647192 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (858.235µs) 2025-12-13T00:19:23.647283181+00:00 stderr F I1213 00:19:23.647263 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:23.647359063+00:00 stderr F I1213 00:19:23.647340 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:23.647423004+00:00 stderr F I1213 00:19:23.647411 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:23.647739863+00:00 stderr F I1213 00:19:23.647708 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:23.670918782+00:00 stderr F W1213 00:19:23.670884 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:23.672469415+00:00 stderr F I1213 00:19:23.672447 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.184694ms) 2025-12-13T00:19:24.648064767+00:00 stderr F I1213 00:19:24.648016 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:24.648340605+00:00 stderr F I1213 00:19:24.648324 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:24.648340605+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:24.648340605+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:24.648442108+00:00 stderr F I1213 00:19:24.648423 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (418.271µs) 2025-12-13T00:19:24.648481329+00:00 stderr F I1213 00:19:24.648469 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:24.648548550+00:00 stderr F I1213 00:19:24.648530 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:24.648611542+00:00 stderr F I1213 00:19:24.648599 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:24.648873499+00:00 stderr F I1213 00:19:24.648847 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:24.699013342+00:00 stderr F W1213 00:19:24.698923 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:24.702266511+00:00 stderr F I1213 00:19:24.702221 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (53.743902ms) 2025-12-13T00:19:25.649551944+00:00 stderr F I1213 00:19:25.649503 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:25.649879843+00:00 stderr F I1213 00:19:25.649862 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:25.649879843+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:25.649879843+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:25.649996046+00:00 stderr F I1213 00:19:25.649969 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (478.853µs) 2025-12-13T00:19:25.650086109+00:00 stderr F I1213 00:19:25.650032 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:25.650189612+00:00 stderr F I1213 00:19:25.650170 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:25.650252673+00:00 stderr F I1213 00:19:25.650241 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:25.650521321+00:00 stderr F I1213 00:19:25.650493 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:25.678632936+00:00 stderr F W1213 00:19:25.678584 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:25.680781495+00:00 stderr F I1213 00:19:25.680711 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.675166ms) 2025-12-13T00:19:26.650112075+00:00 stderr F I1213 00:19:26.650045 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:26.650484035+00:00 stderr F I1213 00:19:26.650460 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:26.650484035+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:26.650484035+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:26.650582718+00:00 stderr F I1213 00:19:26.650564 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (535.944µs) 2025-12-13T00:19:26.650627339+00:00 stderr F I1213 00:19:26.650612 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:26.650716032+00:00 stderr F I1213 00:19:26.650692 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:26.650819384+00:00 stderr F I1213 00:19:26.650803 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:26.651231206+00:00 stderr F I1213 00:19:26.651167 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:26.679316040+00:00 stderr F W1213 00:19:26.679249 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:26.681636804+00:00 stderr F I1213 00:19:26.681600 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.981234ms) 2025-12-13T00:19:27.651555980+00:00 stderr F I1213 00:19:27.651459 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:27.651963351+00:00 stderr F I1213 00:19:27.651903 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:27.651963351+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:27.651963351+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:27.652124275+00:00 stderr F I1213 00:19:27.652025 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (579.945µs) 2025-12-13T00:19:27.652124275+00:00 stderr F I1213 00:19:27.652056 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:27.652150146+00:00 stderr F I1213 00:19:27.652121 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:27.652247799+00:00 stderr F I1213 00:19:27.652187 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:27.652647541+00:00 stderr F I1213 00:19:27.652568 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:27.690474423+00:00 stderr F W1213 00:19:27.690426 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:27.692594142+00:00 stderr F I1213 00:19:27.692569 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.504288ms) 2025-12-13T00:19:28.653116179+00:00 stderr F I1213 00:19:28.653025 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:28.653415127+00:00 stderr F I1213 00:19:28.653359 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:28.653415127+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:28.653415127+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:28.653439117+00:00 stderr F I1213 00:19:28.653423 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (410.631µs) 2025-12-13T00:19:28.653455718+00:00 stderr F I1213 00:19:28.653437 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:28.653527250+00:00 stderr F I1213 00:19:28.653483 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:28.653547730+00:00 stderr F I1213 00:19:28.653537 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:28.653855239+00:00 stderr F I1213 00:19:28.653793 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:28.679437854+00:00 stderr F W1213 00:19:28.679394 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:28.680860844+00:00 stderr F I1213 00:19:28.680840 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.400346ms) 2025-12-13T00:19:29.653694069+00:00 stderr F I1213 00:19:29.653614 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:29.653955627+00:00 stderr F I1213 00:19:29.653917 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:29.653955627+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:29.653955627+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:29.654053099+00:00 stderr F I1213 00:19:29.654021 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (430.651µs) 2025-12-13T00:19:29.654053099+00:00 stderr F I1213 00:19:29.654040 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:29.654121061+00:00 stderr F I1213 00:19:29.654093 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:29.654163542+00:00 stderr F I1213 00:19:29.654146 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:29.654384908+00:00 stderr F I1213 00:19:29.654345 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:29.677431043+00:00 stderr F W1213 00:19:29.677375 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:29.678870444+00:00 stderr F I1213 00:19:29.678830 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.787333ms) 2025-12-13T00:19:30.654382984+00:00 stderr F I1213 00:19:30.654315 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:30.654633361+00:00 stderr F I1213 00:19:30.654601 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:30.654633361+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:30.654633361+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:30.654685392+00:00 stderr F I1213 00:19:30.654663 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (358.96µs) 2025-12-13T00:19:30.654694103+00:00 stderr F I1213 00:19:30.654683 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:30.654768055+00:00 stderr F I1213 00:19:30.654735 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:30.654837287+00:00 stderr F I1213 00:19:30.654794 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:30.655169166+00:00 stderr F I1213 00:19:30.655120 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:30.686749197+00:00 stderr F W1213 00:19:30.686663 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:30.690144640+00:00 stderr F I1213 00:19:30.690075 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.384466ms) 2025-12-13T00:19:31.655300834+00:00 stderr F I1213 00:19:31.654784 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:31.655505849+00:00 stderr F I1213 00:19:31.655463 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:31.655505849+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:31.655505849+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:31.655540850+00:00 stderr F I1213 00:19:31.655516 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (763.45µs) 2025-12-13T00:19:31.655540850+00:00 stderr F I1213 00:19:31.655533 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:31.655601993+00:00 stderr F I1213 00:19:31.655571 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:31.655641324+00:00 stderr F I1213 00:19:31.655618 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:31.655976803+00:00 stderr F I1213 00:19:31.655885 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:31.691033329+00:00 stderr F W1213 00:19:31.690490 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:31.693532938+00:00 stderr F I1213 00:19:31.693461 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.921006ms) 2025-12-13T00:19:32.655992668+00:00 stderr F I1213 00:19:32.655909 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:32.656245125+00:00 stderr F I1213 00:19:32.656205 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:32.656245125+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:32.656245125+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:32.656282976+00:00 stderr F I1213 00:19:32.656255 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (360.98µs) 2025-12-13T00:19:32.656282976+00:00 stderr F I1213 00:19:32.656272 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:32.656349448+00:00 stderr F I1213 00:19:32.656319 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:32.656400389+00:00 stderr F I1213 00:19:32.656371 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:32.656612515+00:00 stderr F I1213 00:19:32.656570 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:32.678430017+00:00 stderr F W1213 00:19:32.678354 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:32.680027280+00:00 stderr F I1213 00:19:32.679977 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.702293ms) 2025-12-13T00:19:33.657484113+00:00 stderr F I1213 00:19:33.657396 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:33.657710760+00:00 stderr F I1213 00:19:33.657671 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:33.657710760+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:33.657710760+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:33.657764751+00:00 stderr F I1213 00:19:33.657742 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (557.725µs) 2025-12-13T00:19:33.657764751+00:00 stderr F I1213 00:19:33.657761 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:33.657834293+00:00 stderr F I1213 00:19:33.657809 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:33.657873844+00:00 stderr F I1213 00:19:33.657853 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:33.658136561+00:00 stderr F I1213 00:19:33.658100 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:33.688899039+00:00 stderr F W1213 00:19:33.688833 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:33.690902525+00:00 stderr F I1213 00:19:33.690863 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.097783ms) 2025-12-13T00:19:34.657913919+00:00 stderr F I1213 00:19:34.657866 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:34.658148986+00:00 stderr F I1213 00:19:34.658117 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:34.658148986+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:34.658148986+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:34.658174826+00:00 stderr F I1213 00:19:34.658160 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (306.409µs) 2025-12-13T00:19:34.658183577+00:00 stderr F I1213 00:19:34.658175 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:34.658256999+00:00 stderr F I1213 00:19:34.658225 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:34.658327130+00:00 stderr F I1213 00:19:34.658315 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:34.658587587+00:00 stderr F I1213 00:19:34.658555 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:34.682736314+00:00 stderr F W1213 00:19:34.682680 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:34.684672057+00:00 stderr F I1213 00:19:34.684643 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.46301ms) 2025-12-13T00:19:35.658373506+00:00 stderr F I1213 00:19:35.658325 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:35.658666064+00:00 stderr F I1213 00:19:35.658649 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:35.658666064+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:35.658666064+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:35.658739096+00:00 stderr F I1213 00:19:35.658723 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (408.411µs) 2025-12-13T00:19:35.658769237+00:00 stderr F I1213 00:19:35.658759 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:35.658832560+00:00 stderr F I1213 00:19:35.658815 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:35.658886381+00:00 stderr F I1213 00:19:35.658876 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:35.659139028+00:00 stderr F I1213 00:19:35.659111 1 status.go:100] merge into existing history completed=true desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:35.702500913+00:00 stderr F W1213 00:19:35.702439 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:35.705449295+00:00 stderr F I1213 00:19:35.705403 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.633306ms) 2025-12-13T00:19:36.468684311+00:00 stderr F I1213 00:19:36.468632 1 sync_worker.go:582] Wait finished 2025-12-13T00:19:36.468860966+00:00 stderr F I1213 00:19:36.468773 1 sync_worker.go:632] Previous sync status: &cvo.SyncWorkerStatus{Generation:4, Failure:error(nil), Done:747, Total:955, Completed:1, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(2025, time.December, 13, 0, 16, 43, 951102126, time.Local), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", 
"openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:19:36.468915997+00:00 stderr F I1213 00:19:36.468897 1 sync_worker.go:883] apply: 4.16.0 on generation 4 in state Reconciling at attempt 0 2025-12-13T00:19:36.471118497+00:00 stderr F I1213 00:19:36.471102 1 task_graph.go:481] Running 0 on worker 1 2025-12-13T00:19:36.471182120+00:00 stderr F I1213 00:19:36.471171 1 task_graph.go:481] Running 2 on worker 1 2025-12-13T00:19:36.471237822+00:00 stderr F I1213 00:19:36.471207 1 task_graph.go:481] Running 1 on worker 0 2025-12-13T00:19:36.513550698+00:00 stderr F I1213 00:19:36.513432 1 task_graph.go:481] Running 3 on worker 0 2025-12-13T00:19:36.574327443+00:00 stderr F I1213 00:19:36.574275 1 task_graph.go:481] Running 4 on worker 1 2025-12-13T00:19:36.625323230+00:00 stderr F I1213 00:19:36.625273 1 task_graph.go:481] Running 5 on worker 0 2025-12-13T00:19:36.625562547+00:00 stderr F I1213 00:19:36.625547 1 task_graph.go:481] Running 6 on worker 0 2025-12-13T00:19:36.662317450+00:00 stderr F I1213 00:19:36.662243 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:36.662580957+00:00 stderr F I1213 00:19:36.662540 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:36.662580957+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:36.662580957+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:36.662624409+00:00 stderr F I1213 00:19:36.662594 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (362.46µs) 2025-12-13T00:19:36.662624409+00:00 stderr F I1213 00:19:36.662612 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:36.662687090+00:00 stderr F I1213 00:19:36.662652 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:36.662718301+00:00 stderr F I1213 00:19:36.662700 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:36.663002629+00:00 stderr F I1213 00:19:36.662926 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:36.675630617+00:00 stderr F I1213 00:19:36.675565 1 task_graph.go:481] Running 7 on worker 1 2025-12-13T00:19:36.690272781+00:00 stderr F W1213 00:19:36.690225 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:36.691705230+00:00 stderr F I1213 00:19:36.691664 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.048771ms) 
2025-12-13T00:19:37.326215757+00:00 stderr F I1213 00:19:37.326151 1 task_graph.go:481] Running 8 on worker 0 2025-12-13T00:19:37.326333340+00:00 stderr F I1213 00:19:37.326302 1 task_graph.go:481] Running 9 on worker 0 2025-12-13T00:19:37.526225442+00:00 stderr F I1213 00:19:37.526150 1 task_graph.go:481] Running 10 on worker 0 2025-12-13T00:19:37.663507257+00:00 stderr F I1213 00:19:37.663450 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:37.663913378+00:00 stderr F I1213 00:19:37.663892 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:37.663913378+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:37.663913378+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:37.664046612+00:00 stderr F I1213 00:19:37.664019 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (580.966µs) 2025-12-13T00:19:37.664120874+00:00 stderr F I1213 00:19:37.664080 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:37.664224557+00:00 stderr F I1213 00:19:37.664192 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:37.664274118+00:00 stderr F I1213 00:19:37.664256 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:37.664534895+00:00 stderr F I1213 00:19:37.664493 1 status.go:100] merge into existing history 
completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:37.687091148+00:00 stderr F W1213 00:19:37.687025 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:37.688487256+00:00 stderr F I1213 00:19:37.688436 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.366473ms) 2025-12-13T00:19:37.875774151+00:00 stderr F I1213 00:19:37.875718 1 task_graph.go:481] Running 11 on worker 1 2025-12-13T00:19:38.141602501+00:00 stderr F I1213 00:19:38.141519 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:38.141654012+00:00 stderr F I1213 00:19:38.141623 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:38.141714244+00:00 stderr F I1213 00:19:38.141671 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:38.141950830+00:00 stderr F I1213 00:19:38.141894 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:38.167548156+00:00 stderr F W1213 00:19:38.167503 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:38.169305025+00:00 stderr F I1213 00:19:38.169282 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.763506ms) 2025-12-13T00:19:38.326141179+00:00 stderr F I1213 00:19:38.326093 1 task_graph.go:481] Running 12 on worker 0 2025-12-13T00:19:38.426303670+00:00 stderr F I1213 00:19:38.426254 1 task_graph.go:481] Running 13 on worker 0 2025-12-13T00:19:38.527681357+00:00 stderr F I1213 00:19:38.527617 1 task_graph.go:481] Running 14 on worker 0 2025-12-13T00:19:38.624913407+00:00 stderr F I1213 00:19:38.624867 1 task_graph.go:481] Running 15 on worker 0 2025-12-13T00:19:38.664964212+00:00 stderr F I1213 00:19:38.664887 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:38.665203109+00:00 stderr F I1213 00:19:38.665181 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:38.665203109+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:38.665203109+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:38.665248980+00:00 stderr F I1213 00:19:38.665231 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (353.759µs) 2025-12-13T00:19:38.665261910+00:00 stderr F I1213 00:19:38.665246 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:38.665307521+00:00 stderr F I1213 00:19:38.665286 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:38.665342132+00:00 stderr F I1213 00:19:38.665328 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:38.665609960+00:00 stderr F I1213 00:19:38.665574 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:38.694584188+00:00 stderr F W1213 00:19:38.694523 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:38.695921266+00:00 stderr F I1213 00:19:38.695890 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.641595ms) 2025-12-13T00:19:39.126111817+00:00 stderr F I1213 00:19:39.126069 1 task_graph.go:481] Running 16 on worker 0 
2025-12-13T00:19:39.126216190+00:00 stderr F I1213 00:19:39.126205 1 task_graph.go:481] Running 17 on worker 0 2025-12-13T00:19:39.666749416+00:00 stderr F I1213 00:19:39.666291 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:39.667258779+00:00 stderr F I1213 00:19:39.667229 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:39.667258779+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:39.667258779+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:39.667396753+00:00 stderr F I1213 00:19:39.667369 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.10536ms) 2025-12-13T00:19:39.667465485+00:00 stderr F I1213 00:19:39.667446 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:39.667581969+00:00 stderr F I1213 00:19:39.667552 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:39.667675282+00:00 stderr F I1213 00:19:39.667657 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:39.668125344+00:00 stderr F I1213 00:19:39.668077 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:39.707662684+00:00 stderr F W1213 00:19:39.707613 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:39.709661170+00:00 stderr F I1213 00:19:39.709632 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.185265ms) 2025-12-13T00:19:40.668167400+00:00 stderr F I1213 00:19:40.668016 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:40.668495409+00:00 stderr F I1213 00:19:40.668453 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:40.668495409+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:40.668495409+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:40.668563991+00:00 stderr F I1213 00:19:40.668517 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (514.854µs) 2025-12-13T00:19:40.668563991+00:00 stderr F I1213 00:19:40.668548 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:40.668665314+00:00 stderr F I1213 00:19:40.668626 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:40.668719995+00:00 stderr F I1213 00:19:40.668682 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:40.669071535+00:00 stderr F I1213 00:19:40.668998 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:40.697970091+00:00 stderr F W1213 00:19:40.697874 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:40.699864524+00:00 stderr F I1213 00:19:40.699832 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.278533ms) 2025-12-13T00:19:41.669585003+00:00 stderr F I1213 00:19:41.669512 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:41.669952583+00:00 stderr F I1213 00:19:41.669885 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:41.669952583+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:41.669952583+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:41.670063526+00:00 stderr F I1213 00:19:41.670033 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (532.634µs) 2025-12-13T00:19:41.670063526+00:00 stderr F I1213 00:19:41.670052 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:41.670134258+00:00 stderr F I1213 00:19:41.670105 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:41.670171449+00:00 stderr F I1213 00:19:41.670158 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:41.670533399+00:00 stderr F I1213 00:19:41.670483 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:41.698091119+00:00 stderr F W1213 00:19:41.698006 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:41.701295347+00:00 stderr F I1213 00:19:41.701226 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.167949ms) 2025-12-13T00:19:42.225267367+00:00 stderr F I1213 00:19:42.225164 1 task_graph.go:481] Running 18 on worker 0 2025-12-13T00:19:42.325799448+00:00 stderr F I1213 00:19:42.325709 1 task_graph.go:481] Running 19 on worker 0 2025-12-13T00:19:42.431633946+00:00 stderr F I1213 00:19:42.431449 1 task_graph.go:481] Running 20 on worker 0 2025-12-13T00:19:42.431868283+00:00 stderr F I1213 00:19:42.431809 1 task_graph.go:481] Running 21 on worker 0 2025-12-13T00:19:42.670134313+00:00 stderr F I1213 00:19:42.670061 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:42.670491162+00:00 stderr F I1213 00:19:42.670457 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:42.670491162+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:42.670491162+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:42.670562474+00:00 stderr F I1213 00:19:42.670527 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (498.773µs) 2025-12-13T00:19:42.670562474+00:00 stderr F I1213 00:19:42.670547 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:42.670636126+00:00 stderr F I1213 00:19:42.670598 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:42.670684949+00:00 stderr F I1213 00:19:42.670664 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:42.671069269+00:00 stderr F I1213 00:19:42.671016 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:42.701108317+00:00 stderr F W1213 00:19:42.701034 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:42.702852155+00:00 stderr F I1213 00:19:42.702796 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.244949ms) 2025-12-13T00:19:42.825559979+00:00 stderr F I1213 00:19:42.825489 1 task_graph.go:481] Running 22 on worker 0 
2025-12-13T00:19:42.927136430+00:00 stderr F I1213 00:19:42.927052 1 task_graph.go:481] Running 23 on worker 0 2025-12-13T00:19:43.671395252+00:00 stderr F I1213 00:19:43.671324 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:43.671669530+00:00 stderr F I1213 00:19:43.671638 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:43.671669530+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:43.671669530+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:43.671738082+00:00 stderr F I1213 00:19:43.671702 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (390.431µs) 2025-12-13T00:19:43.671772843+00:00 stderr F I1213 00:19:43.671754 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:43.671851675+00:00 stderr F I1213 00:19:43.671814 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:43.677046318+00:00 stderr F I1213 00:19:43.671872 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:43.677046318+00:00 stderr F I1213 00:19:43.672167 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:43.700737141+00:00 stderr F W1213 00:19:43.700661 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:43.703468216+00:00 stderr F I1213 00:19:43.703421 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.664193ms) 2025-12-13T00:19:43.826692984+00:00 stderr F I1213 00:19:43.826575 1 task_graph.go:481] Running 24 on worker 0 2025-12-13T00:19:43.928381568+00:00 stderr F I1213 00:19:43.928296 1 task_graph.go:481] Running 25 on worker 0 2025-12-13T00:19:44.673257068+00:00 stderr F I1213 00:19:44.672092 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:44.673257068+00:00 stderr F I1213 00:19:44.673001 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:44.673257068+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:44.673257068+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:44.673257068+00:00 stderr F I1213 00:19:44.673071 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (991.907µs) 2025-12-13T00:19:44.673257068+00:00 stderr F I1213 00:19:44.673098 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:44.673257068+00:00 stderr F I1213 00:19:44.673161 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:44.673338140+00:00 stderr F I1213 00:19:44.673253 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:44.674260836+00:00 stderr F I1213 00:19:44.673597 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:44.707213964+00:00 stderr F W1213 00:19:44.707113 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:44.710149656+00:00 stderr F I1213 00:19:44.710086 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.984651ms) 2025-12-13T00:19:45.125958181+00:00 stderr F I1213 00:19:45.125389 1 task_graph.go:481] Running 26 on worker 0 
2025-12-13T00:19:45.673170980+00:00 stderr F I1213 00:19:45.673111 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:45.673554471+00:00 stderr F I1213 00:19:45.673532 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:45.673554471+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:45.673554471+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:45.673699395+00:00 stderr F I1213 00:19:45.673675 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (593.726µs) 2025-12-13T00:19:45.673757727+00:00 stderr F I1213 00:19:45.673737 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:45.674042495+00:00 stderr F I1213 00:19:45.674002 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:45.674151448+00:00 stderr F I1213 00:19:45.674131 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:45.674629451+00:00 stderr F I1213 00:19:45.674579 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 
59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:45.676089381+00:00 stderr F I1213 00:19:45.676041 1 task_graph.go:481] Running 27 on worker 1 2025-12-13T00:19:45.699851726+00:00 stderr F W1213 00:19:45.699784 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:45.701229954+00:00 stderr F I1213 00:19:45.701186 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.449456ms) 2025-12-13T00:19:45.775350708+00:00 stderr F I1213 00:19:45.775263 1 task_graph.go:481] Running 28 on worker 1 2025-12-13T00:19:46.125453782+00:00 stderr F I1213 00:19:46.125332 1 task_graph.go:481] Running 29 on worker 0 2025-12-13T00:19:46.176435838+00:00 stderr F I1213 00:19:46.176358 1 task_graph.go:481] Running 30 on worker 1 2025-12-13T00:19:46.377671677+00:00 stderr F I1213 00:19:46.377392 1 task_graph.go:481] Running 31 on worker 1 2025-12-13T00:19:46.426354399+00:00 stderr F I1213 00:19:46.426296 1 task_graph.go:481] Running 32 on worker 0 2025-12-13T00:19:46.674153043+00:00 stderr F I1213 00:19:46.674086 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:46.674581454+00:00 stderr F I1213 00:19:46.674550 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:46.674581454+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:46.674581454+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:46.674674717+00:00 stderr F I1213 00:19:46.674646 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (591.976µs) 2025-12-13T00:19:46.674684107+00:00 stderr F I1213 00:19:46.674672 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:46.674784280+00:00 stderr F I1213 00:19:46.674751 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:46.675052867+00:00 stderr F I1213 00:19:46.674836 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:46.675432327+00:00 stderr F I1213 00:19:46.675362 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:46.701105895+00:00 stderr F W1213 00:19:46.701038 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:46.702535185+00:00 stderr F I1213 00:19:46.702494 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.821267ms) 2025-12-13T00:19:47.025526341+00:00 stderr F I1213 00:19:47.025458 1 task_graph.go:481] Running 33 on worker 0 
2025-12-13T00:19:47.125027095+00:00 stderr F I1213 00:19:47.124107 1 task_graph.go:481] Running 34 on worker 0 2025-12-13T00:19:47.675470723+00:00 stderr F I1213 00:19:47.675401 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:47.675706040+00:00 stderr F I1213 00:19:47.675674 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:47.675706040+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:47.675706040+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:47.675773171+00:00 stderr F I1213 00:19:47.675741 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (336.989µs) 2025-12-13T00:19:47.675773171+00:00 stderr F I1213 00:19:47.675761 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:47.675828793+00:00 stderr F I1213 00:19:47.675802 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:47.675869344+00:00 stderr F I1213 00:19:47.675851 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:47.676151292+00:00 stderr F I1213 00:19:47.676098 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:47.720422723+00:00 stderr F W1213 00:19:47.720351 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:47.722449479+00:00 stderr F I1213 00:19:47.722388 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.623436ms) 2025-12-13T00:19:47.875323944+00:00 stderr F I1213 00:19:47.875218 1 task_graph.go:481] Running 35 on worker 1 2025-12-13T00:19:48.675821407+00:00 stderr F I1213 00:19:48.675764 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:48.676065734+00:00 stderr F I1213 00:19:48.676044 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:48.676065734+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:48.676065734+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:48.676112405+00:00 stderr F I1213 00:19:48.676095 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (342.39µs) 2025-12-13T00:19:48.676120215+00:00 stderr F I1213 00:19:48.676112 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:48.676178667+00:00 stderr F I1213 00:19:48.676154 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:48.676213268+00:00 stderr F I1213 00:19:48.676202 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:48.676505786+00:00 stderr F I1213 00:19:48.676452 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:48.701571387+00:00 stderr F W1213 00:19:48.701486 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:48.703796279+00:00 stderr F I1213 00:19:48.703742 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.625022ms) 2025-12-13T00:19:49.625284178+00:00 stderr F I1213 00:19:49.625233 1 core.go:138] Updating ConfigMap 
openshift-machine-config-operator/kube-rbac-proxy due to diff:   &v1.ConfigMap{ 2025-12-13T00:19:49.625284178+00:00 stderr F    TypeMeta: v1.TypeMeta{ 2025-12-13T00:19:49.625284178+00:00 stderr F -  Kind: "", 2025-12-13T00:19:49.625284178+00:00 stderr F +  Kind: "ConfigMap", 2025-12-13T00:19:49.625284178+00:00 stderr F -  APIVersion: "", 2025-12-13T00:19:49.625284178+00:00 stderr F +  APIVersion: "v1", 2025-12-13T00:19:49.625284178+00:00 stderr F    }, 2025-12-13T00:19:49.625284178+00:00 stderr F    ObjectMeta: v1.ObjectMeta{ 2025-12-13T00:19:49.625284178+00:00 stderr F    ... // 2 identical fields 2025-12-13T00:19:49.625284178+00:00 stderr F    Namespace: "openshift-machine-config-operator", 2025-12-13T00:19:49.625284178+00:00 stderr F    SelfLink: "", 2025-12-13T00:19:49.625284178+00:00 stderr F -  UID: "ba7edbb4-1ba2-49b6-98a7-d849069e9f80", 2025-12-13T00:19:49.625284178+00:00 stderr F +  UID: "", 2025-12-13T00:19:49.625284178+00:00 stderr F -  ResourceVersion: "42088", 2025-12-13T00:19:49.625284178+00:00 stderr F +  ResourceVersion: "", 2025-12-13T00:19:49.625284178+00:00 stderr F    Generation: 0, 2025-12-13T00:19:49.625284178+00:00 stderr F -  CreationTimestamp: v1.Time{Time: s"2024-06-26 12:39:23 +0000 UTC"}, 2025-12-13T00:19:49.625284178+00:00 stderr F +  CreationTimestamp: v1.Time{}, 2025-12-13T00:19:49.625284178+00:00 stderr F    DeletionTimestamp: nil, 2025-12-13T00:19:49.625284178+00:00 stderr F    DeletionGracePeriodSeconds: nil, 2025-12-13T00:19:49.625284178+00:00 stderr F    ... 
// 2 identical fields 2025-12-13T00:19:49.625284178+00:00 stderr F    OwnerReferences: {{APIVersion: "config.openshift.io/v1", Kind: "ClusterVersion", Name: "version", UID: "a73cbaa6-40d3-4694-9b98-c0a6eed45825", ...}}, 2025-12-13T00:19:49.625284178+00:00 stderr F    Finalizers: nil, 2025-12-13T00:19:49.625284178+00:00 stderr F -  ManagedFields: []v1.ManagedFieldsEntry{ 2025-12-13T00:19:49.625284178+00:00 stderr F -  { 2025-12-13T00:19:49.625284178+00:00 stderr F -  Manager: "cluster-version-operator", 2025-12-13T00:19:49.625284178+00:00 stderr F -  Operation: "Update", 2025-12-13T00:19:49.625284178+00:00 stderr F -  APIVersion: "v1", 2025-12-13T00:19:49.625284178+00:00 stderr F -  Time: s"2025-12-13 00:16:21 +0000 UTC", 2025-12-13T00:19:49.625284178+00:00 stderr F -  FieldsType: "FieldsV1", 2025-12-13T00:19:49.625284178+00:00 stderr F -  FieldsV1: s`{"f:data":{},"f:metadata":{"f:annotations":{".":{},"f:include.re`..., 2025-12-13T00:19:49.625284178+00:00 stderr F -  }, 2025-12-13T00:19:49.625284178+00:00 stderr F -  { 2025-12-13T00:19:49.625284178+00:00 stderr F -  Manager: "machine-config-operator", 2025-12-13T00:19:49.625284178+00:00 stderr F -  Operation: "Update", 2025-12-13T00:19:49.625284178+00:00 stderr F -  APIVersion: "v1", 2025-12-13T00:19:49.625284178+00:00 stderr F -  Time: s"2025-12-13 00:18:52 +0000 UTC", 2025-12-13T00:19:49.625284178+00:00 stderr F -  FieldsType: "FieldsV1", 2025-12-13T00:19:49.625284178+00:00 stderr F -  FieldsV1: s`{"f:data":{"f:config-file.yaml":{}}}`, 2025-12-13T00:19:49.625284178+00:00 stderr F -  }, 2025-12-13T00:19:49.625284178+00:00 stderr F -  }, 2025-12-13T00:19:49.625284178+00:00 stderr F +  ManagedFields: nil, 2025-12-13T00:19:49.625284178+00:00 stderr F    }, 2025-12-13T00:19:49.625284178+00:00 stderr F    Immutable: nil, 2025-12-13T00:19:49.625284178+00:00 stderr F    Data: {"config-file.yaml": "authorization:\n resourceAttributes:\n apiVersion: v1\n reso"...}, 2025-12-13T00:19:49.625284178+00:00 stderr F    BinaryData: 
nil, 2025-12-13T00:19:49.625284178+00:00 stderr F   } 2025-12-13T00:19:49.677157069+00:00 stderr F I1213 00:19:49.677094 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:49.677369395+00:00 stderr F I1213 00:19:49.677338 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:49.677369395+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:49.677369395+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:49.677407916+00:00 stderr F I1213 00:19:49.677389 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (302.238µs) 2025-12-13T00:19:49.677407916+00:00 stderr F I1213 00:19:49.677401 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:49.677468758+00:00 stderr F I1213 00:19:49.677435 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:49.677515039+00:00 stderr F I1213 00:19:49.677496 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:49.677778096+00:00 stderr F I1213 00:19:49.677734 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} 
last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:49.708968316+00:00 stderr F W1213 00:19:49.708887 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:49.710334703+00:00 stderr F I1213 00:19:49.710296 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.892446ms) 2025-12-13T00:19:49.825402587+00:00 stderr F E1213 00:19:49.825338 1 task.go:122] error running apply for clusteroperator "machine-config" (799 of 955): Cluster operator machine-config is degraded 2025-12-13T00:19:49.825402587+00:00 stderr F I1213 00:19:49.825370 1 task_graph.go:481] Running 36 on worker 0 2025-12-13T00:19:49.924462828+00:00 stderr F I1213 00:19:49.924411 1 task_graph.go:481] Running 37 on worker 0 2025-12-13T00:19:50.125129261+00:00 stderr F I1213 00:19:50.125051 1 task_graph.go:481] Running 38 on worker 0 2025-12-13T00:19:50.525593003+00:00 stderr F I1213 00:19:50.525522 1 task_graph.go:481] Running 39 on worker 0 2025-12-13T00:19:50.677855063+00:00 stderr F I1213 00:19:50.677764 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:50.678130940+00:00 stderr F I1213 00:19:50.678103 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:50.678130940+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:50.678130940+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:50.678199292+00:00 stderr F I1213 00:19:50.678167 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (414.421µs) 2025-12-13T00:19:50.678199292+00:00 stderr F I1213 00:19:50.678188 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:50.678289224+00:00 stderr F I1213 00:19:50.678235 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:50.678324085+00:00 stderr F I1213 00:19:50.678290 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:50.678612703+00:00 stderr F I1213 00:19:50.678548 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:50.708913599+00:00 stderr F W1213 00:19:50.708840 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:50.710437560+00:00 stderr F I1213 00:19:50.710394 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.202887ms) 2025-12-13T00:19:51.074397036+00:00 stderr F I1213 00:19:51.074306 1 task_graph.go:481] Running 40 on worker 1 
2025-12-13T00:19:51.432015328+00:00 stderr F I1213 00:19:51.431764 1 task_graph.go:481] Running 41 on worker 0 2025-12-13T00:19:51.679115611+00:00 stderr F I1213 00:19:51.679005 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:51.679393649+00:00 stderr F I1213 00:19:51.679340 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:51.679393649+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:51.679393649+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:51.679452911+00:00 stderr F I1213 00:19:51.679418 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (422.491µs) 2025-12-13T00:19:51.679452911+00:00 stderr F I1213 00:19:51.679439 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:51.679565574+00:00 stderr F I1213 00:19:51.679511 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:51.679585144+00:00 stderr F I1213 00:19:51.679568 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:51.679873922+00:00 stderr F I1213 00:19:51.679835 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:51.708896653+00:00 stderr F W1213 00:19:51.708822 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:51.710960870+00:00 stderr F I1213 00:19:51.710888 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.444018ms) 2025-12-13T00:19:52.680285148+00:00 stderr F I1213 00:19:52.680202 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:52.680514885+00:00 stderr F I1213 00:19:52.680467 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:52.680514885+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:52.680514885+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:52.680534775+00:00 stderr F I1213 00:19:52.680521 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (332.019µs) 2025-12-13T00:19:52.680545205+00:00 stderr F I1213 00:19:52.680538 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:52.680616737+00:00 stderr F I1213 00:19:52.680578 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:52.680649268+00:00 stderr F I1213 00:19:52.680629 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:52.681033519+00:00 stderr F I1213 00:19:52.680917 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:52.702903942+00:00 stderr F W1213 00:19:52.702815 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:52.704303090+00:00 stderr F I1213 00:19:52.704253 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.713194ms) 2025-12-13T00:19:52.826142730+00:00 stderr F I1213 00:19:52.826072 1 task_graph.go:481] Running 42 on worker 0 
2025-12-13T00:19:53.125813544+00:00 stderr F I1213 00:19:53.125750 1 task_graph.go:481] Running 43 on worker 0 2025-12-13T00:19:53.140857038+00:00 stderr F I1213 00:19:53.140822 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:53.141015323+00:00 stderr F I1213 00:19:53.140984 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:53.141066504+00:00 stderr F I1213 00:19:53.141048 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:53.141348332+00:00 stderr F I1213 00:19:53.141313 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:53.163797541+00:00 stderr F W1213 00:19:53.163708 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:53.165274622+00:00 stderr F I1213 00:19:53.165239 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.423314ms) 2025-12-13T00:19:53.574829945+00:00 stderr F I1213 00:19:53.574782 1 task_graph.go:481] Running 44 on worker 1 2025-12-13T00:19:53.678695759+00:00 stderr F I1213 00:19:53.678642 1 task_graph.go:481] Running 45 on worker 1 2025-12-13T00:19:53.680631073+00:00 stderr F I1213 00:19:53.680583 1 
cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:53.680879070+00:00 stderr F I1213 00:19:53.680841 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:53.680879070+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:53.680879070+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:53.680928781+00:00 stderr F I1213 00:19:53.680896 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (319.789µs) 2025-12-13T00:19:53.680928781+00:00 stderr F I1213 00:19:53.680922 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:53.681036834+00:00 stderr F I1213 00:19:53.680984 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:53.681066465+00:00 stderr F I1213 00:19:53.681033 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:53.681323142+00:00 stderr F I1213 00:19:53.681259 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 
12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:53.705966231+00:00 stderr F W1213 00:19:53.705911 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:53.707647907+00:00 stderr F I1213 00:19:53.707602 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.677595ms) 2025-12-13T00:19:54.675906077+00:00 stderr F I1213 00:19:54.675835 1 task_graph.go:481] Running 46 on worker 1 2025-12-13T00:19:54.681708686+00:00 stderr F I1213 00:19:54.681680 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:54.682108497+00:00 stderr F I1213 00:19:54.682084 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:54.682108497+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:54.682108497+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:54.682245581+00:00 stderr F I1213 00:19:54.682218 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (546.215µs) 2025-12-13T00:19:54.682301043+00:00 stderr F I1213 00:19:54.682258 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:54.682430336+00:00 stderr F I1213 00:19:54.682372 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:54.682502348+00:00 stderr F I1213 00:19:54.682460 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:54.682994093+00:00 stderr F I1213 00:19:54.682904 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:54.712598558+00:00 stderr F W1213 00:19:54.712531 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:54.714510491+00:00 stderr F I1213 00:19:54.714463 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.212299ms) 2025-12-13T00:19:54.775694879+00:00 stderr F I1213 00:19:54.775613 1 task_graph.go:481] Running 47 on worker 1 
2025-12-13T00:19:55.324978125+00:00 stderr F I1213 00:19:55.324913 1 task_graph.go:481] Running 48 on worker 0 2025-12-13T00:19:55.682640537+00:00 stderr F I1213 00:19:55.682480 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:55.682949915+00:00 stderr F I1213 00:19:55.682898 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:55.682949915+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:55.682949915+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:55.683037917+00:00 stderr F I1213 00:19:55.682983 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (523.174µs) 2025-12-13T00:19:55.683037917+00:00 stderr F I1213 00:19:55.683012 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:55.683152291+00:00 stderr F I1213 00:19:55.683074 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:55.683164282+00:00 stderr F I1213 00:19:55.683155 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:55.683503771+00:00 stderr F I1213 00:19:55.683418 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:55.710439743+00:00 stderr F W1213 00:19:55.710383 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:55.712184122+00:00 stderr F I1213 00:19:55.712154 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.140565ms) 2025-12-13T00:19:55.775913549+00:00 stderr F I1213 00:19:55.775831 1 task_graph.go:481] Running 49 on worker 0 2025-12-13T00:19:56.076665122+00:00 stderr F I1213 00:19:56.076594 1 task_graph.go:481] Running 50 on worker 0 2025-12-13T00:19:56.376464499+00:00 stderr F I1213 00:19:56.376375 1 task_graph.go:481] Running 51 on worker 1 2025-12-13T00:19:56.683425494+00:00 stderr F I1213 00:19:56.683337 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:56.683878216+00:00 stderr F I1213 00:19:56.683819 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:56.683878216+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:56.683878216+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:56.684023590+00:00 stderr F I1213 00:19:56.683979 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (618.697µs) 2025-12-13T00:19:56.684056031+00:00 stderr F I1213 00:19:56.684008 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:56.684111522+00:00 stderr F I1213 00:19:56.684078 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:56.684210575+00:00 stderr F I1213 00:19:56.684124 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:56.684427581+00:00 stderr F I1213 00:19:56.684373 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:56.713984785+00:00 stderr F W1213 00:19:56.713901 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:56.715699463+00:00 stderr F I1213 00:19:56.715651 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.650632ms) 2025-12-13T00:19:57.684805527+00:00 stderr F I1213 00:19:57.684748 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:19:57.685208898+00:00 stderr F I1213 00:19:57.685187 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:57.685208898+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:57.685208898+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:57.685330171+00:00 stderr F I1213 00:19:57.685310 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (575.525µs) 2025-12-13T00:19:57.685371352+00:00 stderr F I1213 00:19:57.685357 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:57.685453035+00:00 stderr F I1213 00:19:57.685430 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:57.685538587+00:00 stderr F I1213 00:19:57.685525 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:57.685882746+00:00 stderr F I1213 00:19:57.685849 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:57.709204650+00:00 stderr F W1213 00:19:57.709142 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:57.711161843+00:00 stderr F I1213 00:19:57.711125 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.7673ms) 2025-12-13T00:19:58.326459731+00:00 stderr F I1213 00:19:58.326400 1 task_graph.go:481] Running 52 on worker 0 2025-12-13T00:19:58.686251613+00:00 stderr F I1213 00:19:58.686180 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:58.686927742+00:00 stderr F I1213 00:19:58.686847 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:58.686927742+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:19:58.686927742+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:58.687343933+00:00 stderr F I1213 00:19:58.687270 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.126522ms) 2025-12-13T00:19:58.687452086+00:00 stderr F I1213 00:19:58.687428 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:58.687658231+00:00 stderr F I1213 00:19:58.687624 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:58.687842836+00:00 stderr F I1213 00:19:58.687820 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:58.688364771+00:00 stderr F I1213 00:19:58.688315 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:58.713371740+00:00 stderr F W1213 00:19:58.713295 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:58.715004396+00:00 stderr F I1213 00:19:58.714806 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.379355ms) 2025-12-13T00:19:59.325255423+00:00 stderr F I1213 00:19:59.325215 1 task_graph.go:481] Running 53 on worker 0 
2025-12-13T00:19:59.325416837+00:00 stderr F I1213 00:19:59.325404 1 task_graph.go:481] Running 54 on worker 0 2025-12-13T00:19:59.687666456+00:00 stderr F I1213 00:19:59.687618 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:19:59.687992995+00:00 stderr F I1213 00:19:59.687977 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:19:59.687992995+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:19:59.687992995+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:19:59.688071417+00:00 stderr F I1213 00:19:59.688058 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (447.942µs) 2025-12-13T00:19:59.688103368+00:00 stderr F I1213 00:19:59.688093 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:19:59.688170230+00:00 stderr F I1213 00:19:59.688153 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:19:59.688228391+00:00 stderr F I1213 00:19:59.688218 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:19:59.688470318+00:00 stderr F I1213 00:19:59.688443 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:19:59.710404693+00:00 stderr F W1213 00:19:59.710367 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:19:59.711904805+00:00 stderr F I1213 00:19:59.711882 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.786316ms) 2025-12-13T00:19:59.726956000+00:00 stderr F I1213 00:19:59.726882 1 task_graph.go:481] Running 55 on worker 0 2025-12-13T00:20:00.626438733+00:00 stderr F I1213 00:20:00.626367 1 task_graph.go:481] Running 56 on worker 0 2025-12-13T00:20:00.688807842+00:00 stderr F I1213 00:20:00.688747 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:00.689148232+00:00 stderr F I1213 00:20:00.689120 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:00.689148232+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:00.689148232+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:00.689205234+00:00 stderr F I1213 00:20:00.689187 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (451.594µs) 2025-12-13T00:20:00.689213154+00:00 stderr F I1213 00:20:00.689206 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:00.689281786+00:00 stderr F I1213 00:20:00.689253 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:00.689349778+00:00 stderr F I1213 00:20:00.689310 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:00.689671597+00:00 stderr F I1213 00:20:00.689619 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:00.716655020+00:00 stderr F W1213 00:20:00.716592 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:00.718532733+00:00 stderr F I1213 00:20:00.718487 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.278037ms) 2025-12-13T00:20:01.692232102+00:00 stderr F I1213 00:20:01.689720 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:20:01.692232102+00:00 stderr F I1213 00:20:01.690016 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:01.692232102+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:01.692232102+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:01.692232102+00:00 stderr F I1213 00:20:01.690063 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (357.29µs) 2025-12-13T00:20:01.692232102+00:00 stderr F I1213 00:20:01.690078 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:01.692232102+00:00 stderr F I1213 00:20:01.690123 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:01.692232102+00:00 stderr F I1213 00:20:01.690170 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:01.692232102+00:00 stderr F I1213 00:20:01.690398 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:01.749303115+00:00 stderr F W1213 00:20:01.749164 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:01.750825427+00:00 stderr F I1213 00:20:01.750671 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (60.59025ms) 2025-12-13T00:20:02.624990592+00:00 stderr F I1213 00:20:02.624925 1 task_graph.go:481] Running 57 on worker 0 2025-12-13T00:20:02.691075364+00:00 stderr F I1213 00:20:02.691023 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:02.691280210+00:00 stderr F I1213 00:20:02.691260 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:02.691280210+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:02.691280210+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:02.691320971+00:00 stderr F I1213 00:20:02.691307 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (294.078µs) 2025-12-13T00:20:02.691328151+00:00 stderr F I1213 00:20:02.691321 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:02.691373712+00:00 stderr F I1213 00:20:02.691357 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:02.691409973+00:00 stderr F I1213 00:20:02.691398 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:02.691627259+00:00 stderr F I1213 00:20:02.691593 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:02.713800770+00:00 stderr F W1213 00:20:02.713746 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:02.715511578+00:00 stderr F I1213 00:20:02.715473 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.148056ms) 2025-12-13T00:20:02.726221884+00:00 stderr F I1213 00:20:02.726160 1 task_graph.go:481] Running 58 on worker 0 
2025-12-13T00:20:03.175349208+00:00 stderr F I1213 00:20:03.175287 1 task_graph.go:481] Running 59 on worker 1 2025-12-13T00:20:03.625646495+00:00 stderr F I1213 00:20:03.625586 1 task_graph.go:481] Running 60 on worker 0 2025-12-13T00:20:03.693808865+00:00 stderr F I1213 00:20:03.693085 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:03.693808865+00:00 stderr F I1213 00:20:03.693472 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:03.693808865+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:03.693808865+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:03.693808865+00:00 stderr F I1213 00:20:03.693540 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (468.083µs) 2025-12-13T00:20:03.693808865+00:00 stderr F I1213 00:20:03.693560 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:03.693808865+00:00 stderr F I1213 00:20:03.693617 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:03.693808865+00:00 stderr F I1213 00:20:03.693676 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:03.698348760+00:00 stderr F I1213 00:20:03.694030 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:03.718689121+00:00 stderr F W1213 00:20:03.718213 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:03.719697178+00:00 stderr F I1213 00:20:03.719659 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.096299ms) 2025-12-13T00:20:03.724815459+00:00 stderr F I1213 00:20:03.724788 1 task_graph.go:481] Running 61 on worker 0 2025-12-13T00:20:03.774557631+00:00 stderr F I1213 00:20:03.774496 1 task_graph.go:481] Running 62 on worker 1 2025-12-13T00:20:04.125638592+00:00 stderr F I1213 00:20:04.125585 1 task_graph.go:481] Running 63 on worker 0 2025-12-13T00:20:04.694177179+00:00 stderr F I1213 00:20:04.694028 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:04.694586760+00:00 stderr F I1213 00:20:04.694506 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:04.694586760+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:04.694586760+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:04.694735424+00:00 stderr F I1213 00:20:04.694668 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (651.317µs) 2025-12-13T00:20:04.694735424+00:00 stderr F I1213 00:20:04.694705 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:04.694874088+00:00 stderr F I1213 00:20:04.694803 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:04.694915309+00:00 stderr F I1213 00:20:04.694898 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:04.695540607+00:00 stderr F I1213 00:20:04.695439 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:04.724829764+00:00 stderr F W1213 00:20:04.724751 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:04.726748768+00:00 stderr F I1213 00:20:04.726674 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.968482ms) 2025-12-13T00:20:05.695481220+00:00 stderr F I1213 00:20:05.695408 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:20:05.695743707+00:00 stderr F I1213 00:20:05.695702 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:05.695743707+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:05.695743707+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:05.695834310+00:00 stderr F I1213 00:20:05.695791 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (388.441µs) 2025-12-13T00:20:05.695834310+00:00 stderr F I1213 00:20:05.695821 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:05.695969184+00:00 stderr F I1213 00:20:05.695880 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:05.696003375+00:00 stderr F I1213 00:20:05.695972 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:05.696379795+00:00 stderr F I1213 00:20:05.696295 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:05.727600625+00:00 stderr F I1213 00:20:05.727527 1 task_graph.go:481] Running 64 on worker 0 2025-12-13T00:20:05.737086327+00:00 stderr F W1213 00:20:05.737043 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:05.740163392+00:00 stderr F I1213 00:20:05.740121 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (44.293372ms) 2025-12-13T00:20:06.696510733+00:00 stderr F I1213 00:20:06.696468 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:06.696814781+00:00 stderr F I1213 00:20:06.696797 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:06.696814781+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:06.696814781+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:06.696900594+00:00 stderr F I1213 00:20:06.696884 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (426.212µs) 2025-12-13T00:20:06.696946695+00:00 stderr F I1213 00:20:06.696921 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:06.697019837+00:00 stderr F I1213 00:20:06.697001 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:06.697076268+00:00 stderr F I1213 00:20:06.697066 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:06.697352006+00:00 stderr F I1213 00:20:06.697287 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:06.725340538+00:00 stderr F I1213 00:20:06.725299 1 task_graph.go:481] Running 65 on worker 0 2025-12-13T00:20:06.726425518+00:00 stderr F W1213 00:20:06.726410 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:06.727835587+00:00 stderr F I1213 00:20:06.727815 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.892222ms) 
2025-12-13T00:20:07.626192849+00:00 stderr F I1213 00:20:07.626133 1 task_graph.go:481] Running 66 on worker 0 2025-12-13T00:20:07.626192849+00:00 stderr F I1213 00:20:07.626173 1 task_graph.go:481] Running 67 on worker 0 2025-12-13T00:20:07.697442533+00:00 stderr F I1213 00:20:07.697359 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:07.697788733+00:00 stderr F I1213 00:20:07.697728 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:07.697788733+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:07.697788733+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:07.697819153+00:00 stderr F I1213 00:20:07.697803 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (455.843µs) 2025-12-13T00:20:07.697834824+00:00 stderr F I1213 00:20:07.697817 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:07.697927576+00:00 stderr F I1213 00:20:07.697872 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:07.697996338+00:00 stderr F I1213 00:20:07.697929 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:07.698312048+00:00 stderr F I1213 00:20:07.698249 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:07.732581983+00:00 stderr F W1213 00:20:07.732525 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:07.734042352+00:00 stderr F I1213 00:20:07.734002 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.181618ms) 2025-12-13T00:20:07.775129635+00:00 stderr F I1213 00:20:07.775073 1 task_graph.go:481] Running 68 on worker 1 2025-12-13T00:20:07.978857254+00:00 stderr F I1213 00:20:07.978753 1 task_graph.go:481] Running 69 on worker 1 2025-12-13T00:20:08.027438523+00:00 stderr F I1213 00:20:08.027363 1 task_graph.go:481] Running 70 on worker 0 2025-12-13T00:20:08.027438523+00:00 stderr F I1213 00:20:08.027414 1 task_graph.go:481] Running 71 on worker 0 2025-12-13T00:20:08.076344951+00:00 stderr F I1213 00:20:08.076300 1 task_graph.go:481] Running 72 on worker 1 2025-12-13T00:20:08.125199459+00:00 stderr F I1213 00:20:08.125125 1 task_graph.go:481] Running 73 on worker 0 2025-12-13T00:20:08.141120998+00:00 stderr F I1213 00:20:08.141059 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:08.141915030+00:00 stderr F I1213 00:20:08.141176 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:08.141915030+00:00 stderr F 
I1213 00:20:08.141242 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:08.141915030+00:00 stderr F I1213 00:20:08.141504 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:08.164505312+00:00 stderr F W1213 00:20:08.164215 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:08.166383255+00:00 stderr F I1213 00:20:08.166337 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.285547ms) 2025-12-13T00:20:08.698074745+00:00 stderr F I1213 00:20:08.697959 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:08.698369844+00:00 stderr F I1213 00:20:08.698315 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:08.698369844+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:08.698369844+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:08.698408675+00:00 stderr F I1213 00:20:08.698386 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (464.064µs) 2025-12-13T00:20:08.698408675+00:00 stderr F I1213 00:20:08.698404 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:08.698496378+00:00 stderr F I1213 00:20:08.698451 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:08.698536019+00:00 stderr F I1213 00:20:08.698511 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:08.698847077+00:00 stderr F I1213 00:20:08.698795 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:08.734985964+00:00 stderr F W1213 00:20:08.734873 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:08.737032800+00:00 stderr F I1213 00:20:08.736975 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.567303ms) 2025-12-13T00:20:09.479011639+00:00 stderr F I1213 00:20:09.475954 1 task_graph.go:481] Running 74 on worker 1 
2025-12-13T00:20:09.698657727+00:00 stderr F I1213 00:20:09.698598 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:09.698891033+00:00 stderr F I1213 00:20:09.698855 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:09.698891033+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:09.698891033+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:09.698955585+00:00 stderr F I1213 00:20:09.698915 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (331.709µs) 2025-12-13T00:20:09.698971805+00:00 stderr F I1213 00:20:09.698964 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:09.699048407+00:00 stderr F I1213 00:20:09.699012 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:09.699086708+00:00 stderr F I1213 00:20:09.699069 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:09.699349235+00:00 stderr F I1213 00:20:09.699300 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 
59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:09.724032256+00:00 stderr F W1213 00:20:09.723964 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:09.726748121+00:00 stderr F I1213 00:20:09.725442 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.482969ms) 2025-12-13T00:20:10.699229937+00:00 stderr F I1213 00:20:10.699135 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:10.699461003+00:00 stderr F I1213 00:20:10.699418 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:10.699461003+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:10.699461003+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:10.699478164+00:00 stderr F I1213 00:20:10.699468 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (344.049µs) 2025-12-13T00:20:10.699487154+00:00 stderr F I1213 00:20:10.699481 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:10.699551836+00:00 stderr F I1213 00:20:10.699520 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:10.699583387+00:00 stderr F I1213 00:20:10.699567 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:10.699831863+00:00 stderr F I1213 00:20:10.699788 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:10.721477270+00:00 stderr F W1213 00:20:10.721402 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:10.724163815+00:00 stderr F I1213 00:20:10.724078 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.591818ms) 2025-12-13T00:20:11.075262546+00:00 stderr F W1213 00:20:11.075181 1 warnings.go:70] 
flowcontrol.apiserver.k8s.io/v1beta3 FlowSchema is deprecated in v1.29+, unavailable in v1.32+; use flowcontrol.apiserver.k8s.io/v1 FlowSchema 2025-12-13T00:20:11.075640486+00:00 stderr F I1213 00:20:11.075602 1 task_graph.go:481] Running 75 on worker 1 2025-12-13T00:20:11.576197799+00:00 stderr F I1213 00:20:11.576126 1 task_graph.go:481] Running 76 on worker 1 2025-12-13T00:20:11.676565747+00:00 stderr F I1213 00:20:11.676491 1 task_graph.go:481] Running 77 on worker 1 2025-12-13T00:20:11.676926027+00:00 stderr F I1213 00:20:11.676898 1 task_graph.go:481] Running 78 on worker 1 2025-12-13T00:20:11.699713115+00:00 stderr F I1213 00:20:11.699664 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:11.700108596+00:00 stderr F I1213 00:20:11.700057 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:11.700108596+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:11.700108596+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:20:11.700201038+00:00 stderr F I1213 00:20:11.700184 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (530.155µs)
2025-12-13T00:20:11.700233989+00:00 stderr F I1213 00:20:11.700223 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:11.700304681+00:00 stderr F I1213 00:20:11.700286 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:11.700366823+00:00 stderr F I1213 00:20:11.700355 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:11.700616249+00:00 stderr F I1213 00:20:11.700588 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:11.734000810+00:00 stderr F W1213 00:20:11.733440 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:20:11.741989920+00:00 stderr F I1213 00:20:11.740335 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.103165ms)
2025-12-13T00:20:12.375376186+00:00 stderr F I1213 00:20:12.375306 1 task_graph.go:478] Canceled worker 0 while waiting for work
2025-12-13T00:20:12.375376186+00:00 stderr F I1213 00:20:12.375353 1 task_graph.go:478] Canceled worker 1 while waiting for work
2025-12-13T00:20:12.375435837+00:00 stderr F I1213 00:20:12.375390 1 task_graph.go:527] Workers finished
2025-12-13T00:20:12.375435837+00:00 stderr F I1213 00:20:12.375402 1 task_graph.go:550] Result of work: [Cluster operator machine-config is degraded]
2025-12-13T00:20:12.375453008+00:00 stderr F I1213 00:20:12.375441 1 sync_worker.go:1167] Summarizing 1 errors
2025-12-13T00:20:12.375468598+00:00 stderr F I1213 00:20:12.375447 1 sync_worker.go:1171] Update error 799 of 955: ClusterOperatorDegraded Cluster operator machine-config is degraded (*errors.errorString: cluster operator machine-config is Degraded=True: RequiredPoolsFailed, Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused)
2025-12-13T00:20:12.375520290+00:00 stderr F E1213 00:20:12.375480 1 sync_worker.go:649] unable to synchronize image (waiting 21.56456319s): Cluster operator machine-config is degraded
2025-12-13T00:20:12.700783279+00:00 stderr F I1213 00:20:12.700730 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:20:12.701188750+00:00 stderr F I1213 00:20:12.701167 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:20:12.701188750+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:20:12.701188750+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:20:12.701309343+00:00 stderr F I1213 00:20:12.701289 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (568.555µs)
2025-12-13T00:20:12.701348694+00:00 stderr F I1213 00:20:12.701335 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:12.701431716+00:00 stderr F I1213 00:20:12.701409 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:12.701500028+00:00 stderr F I1213 00:20:12.701488 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:12.701588322+00:00 stderr F I1213 00:20:12.701520 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:12.701896180+00:00 stderr F I1213 00:20:12.701870 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:12.730458687+00:00 stderr F W1213 00:20:12.730410 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:20:12.732743691+00:00 stderr F I1213 00:20:12.732709 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.370155ms)
2025-12-13T00:20:12.741060850+00:00 stderr F I1213 00:20:12.740093 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version"
2025-12-13T00:20:12.741060850+00:00 stderr F I1213 00:20:12.740151 1 upgradeable.go:69] Upgradeability last checked 1m56.402857148s ago, will not re-check until 2025-12-13T00:20:16Z
2025-12-13T00:20:12.741060850+00:00 stderr F I1213 00:20:12.740158 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (72.682µs)
2025-12-13T00:20:12.741060850+00:00 stderr F I1213 00:20:12.740168 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:12.741060850+00:00 stderr F I1213 00:20:12.740414 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:20:12.741060850+00:00 stderr F I1213 00:20:12.740722 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:12.741060850+00:00 stderr F I1213 00:20:12.740775 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:12.741060850+00:00 stderr F I1213 00:20:12.740781 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:12.741060850+00:00 stderr F I1213 00:20:12.741010 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:12.741849161+00:00 stderr F I1213 00:20:12.741754 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:20:12.741849161+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:20:12.741849161+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:20:12.741849161+00:00 stderr F I1213 00:20:12.741796 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.401779ms)
2025-12-13T00:20:12.769488414+00:00 stderr F W1213 00:20:12.769445 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:20:12.771696165+00:00 stderr F I1213 00:20:12.771663 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.490579ms)
2025-12-13T00:20:12.771757646+00:00 stderr F I1213 00:20:12.771740 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:12.771855489+00:00 stderr F I1213 00:20:12.771833 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:12.771929611+00:00 stderr F I1213 00:20:12.771915 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:12.772043264+00:00 stderr F I1213 00:20:12.771980 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:12.772351793+00:00 stderr F I1213 00:20:12.772320 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:12.799045828+00:00 stderr F W1213 00:20:12.798067 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:20:12.801297220+00:00 stderr F I1213 00:20:12.799706 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.964051ms)
2025-12-13T00:20:13.701792031+00:00 stderr F I1213 00:20:13.701729 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:20:13.702347876+00:00 stderr F I1213 00:20:13.702302 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:20:13.702347876+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:20:13.702347876+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:20:13.702551862+00:00 stderr F I1213 00:20:13.702515 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (798.042µs)
2025-12-13T00:20:13.702734547+00:00 stderr F I1213 00:20:13.702632 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:13.702846370+00:00 stderr F I1213 00:20:13.702785 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:13.702964473+00:00 stderr F I1213 00:20:13.702871 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:13.703041465+00:00 stderr F I1213 00:20:13.702891 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:13.703488537+00:00 stderr F I1213 00:20:13.703407 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:13.752205771+00:00 stderr F W1213 00:20:13.751518 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:20:13.755287826+00:00 stderr F I1213 00:20:13.755208 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (52.58933ms)
2025-12-13T00:20:14.703667477+00:00 stderr F I1213 00:20:14.703584 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:20:14.704145590+00:00 stderr F I1213 00:20:14.704077 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:20:14.704145590+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:20:14.704145590+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:20:14.704206742+00:00 stderr F I1213 00:20:14.704175 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (641.248µs)
2025-12-13T00:20:14.704223072+00:00 stderr F I1213 00:20:14.704206 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:14.704360656+00:00 stderr F I1213 00:20:14.704270 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:14.704413067+00:00 stderr F I1213 00:20:14.704393 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:14.704559812+00:00 stderr F I1213 00:20:14.704413 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:14.705582730+00:00 stderr F I1213 00:20:14.705408 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:14.750334784+00:00 stderr F W1213 00:20:14.749860 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:20:14.751859226+00:00 stderr F I1213 00:20:14.751806 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (47.598513ms)
2025-12-13T00:20:15.705249246+00:00 stderr F I1213 00:20:15.705161 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:20:15.705800701+00:00 stderr F I1213 00:20:15.705745 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:20:15.705800701+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:20:15.705800701+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:20:15.706277224+00:00 stderr F I1213 00:20:15.706228 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.10013ms)
2025-12-13T00:20:15.706352106+00:00 stderr F I1213 00:20:15.706307 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:15.706492640+00:00 stderr F I1213 00:20:15.706448 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:15.706622203+00:00 stderr F I1213 00:20:15.706573 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:15.706737966+00:00 stderr F I1213 00:20:15.706606 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:15.707450087+00:00 stderr F I1213 00:20:15.707376 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:15.737440013+00:00 stderr F W1213 00:20:15.737368 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:20:15.739149021+00:00 stderr F I1213 00:20:15.739096 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.803635ms)
2025-12-13T00:20:16.706135295+00:00 stderr F I1213 00:20:16.706065 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:20:16.706335640+00:00 stderr F I1213 00:20:16.706301 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:20:16.706335640+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:20:16.706335640+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:20:16.706358101+00:00 stderr F I1213 00:20:16.706347 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (295.018µs)
2025-12-13T00:20:16.706370741+00:00 stderr F I1213 00:20:16.706362 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:16.706425102+00:00 stderr F I1213 00:20:16.706399 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:16.706461813+00:00 stderr F I1213 00:20:16.706444 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:16.706510855+00:00 stderr F I1213 00:20:16.706453 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:16.706750761+00:00 stderr F I1213 00:20:16.706718 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:16.737373066+00:00 stderr F W1213 00:20:16.737283 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:20:16.740125782+00:00 stderr F I1213 00:20:16.740078 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.709659ms)
2025-12-13T00:20:17.707208498+00:00 stderr F I1213 00:20:17.707116 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:20:17.707552488+00:00 stderr F I1213 00:20:17.707509 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:20:17.707552488+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:20:17.707552488+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:20:17.707611280+00:00 stderr F I1213 00:20:17.707572 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (468.493µs)
2025-12-13T00:20:17.707611280+00:00 stderr F I1213 00:20:17.707594 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:17.707689832+00:00 stderr F I1213 00:20:17.707646 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:17.707728203+00:00 stderr F I1213 00:20:17.707706 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:17.707808186+00:00 stderr F I1213 00:20:17.707720 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:17.708115074+00:00 stderr F I1213 00:20:17.708050 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:17.732316551+00:00 stderr F W1213 00:20:17.732253 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:20:17.734180032+00:00 stderr F I1213 00:20:17.734134 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.536321ms)
2025-12-13T00:20:18.708549471+00:00 stderr F I1213 00:20:18.708452 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:20:18.709052114+00:00 stderr F I1213 00:20:18.708995 1
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:18.709052114+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:18.709052114+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:18.709166858+00:00 stderr F I1213 00:20:18.709117 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (679.298µs) 2025-12-13T00:20:18.709166858+00:00 stderr F I1213 00:20:18.709157 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:18.709330962+00:00 stderr F I1213 00:20:18.709273 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:18.709419264+00:00 stderr F I1213 00:20:18.709371 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:18.709539158+00:00 stderr F I1213 00:20:18.709405 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:18.710067762+00:00 stderr F I1213 00:20:18.710000 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:18.743312839+00:00 stderr F W1213 00:20:18.743236 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:18.745423287+00:00 stderr F I1213 00:20:18.745369 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.211019ms) 2025-12-13T00:20:19.709286245+00:00 stderr F I1213 00:20:19.709207 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:19.709582923+00:00 stderr F I1213 00:20:19.709547 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:19.709582923+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:19.709582923+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:19.709683626+00:00 stderr F I1213 00:20:19.709648 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (454.412µs) 2025-12-13T00:20:19.709683626+00:00 stderr F I1213 00:20:19.709674 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:19.709759578+00:00 stderr F I1213 00:20:19.709724 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:19.709803349+00:00 stderr F I1213 00:20:19.709783 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:19.709875091+00:00 stderr F I1213 00:20:19.709796 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), 
Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:19.710324433+00:00 stderr F I1213 00:20:19.710273 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:19.741734100+00:00 stderr F W1213 00:20:19.741668 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:19.746313156+00:00 stderr F I1213 00:20:19.746256 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.554398ms) 2025-12-13T00:20:20.710376340+00:00 stderr F I1213 00:20:20.710217 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:20.710853014+00:00 stderr F I1213 00:20:20.710813 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:20.710853014+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:20.710853014+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:20.711031759+00:00 stderr F I1213 00:20:20.710993 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (792.452µs) 2025-12-13T00:20:20.711055389+00:00 stderr F I1213 00:20:20.711031 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:20.711198603+00:00 stderr F I1213 00:20:20.711130 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:20.711289236+00:00 stderr F I1213 00:20:20.711264 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:20.711410069+00:00 stderr F I1213 00:20:20.711288 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:20.712101078+00:00 stderr F I1213 00:20:20.712023 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:20.737627202+00:00 stderr F W1213 00:20:20.737567 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:20.739235246+00:00 stderr F I1213 00:20:20.739182 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.152087ms) 2025-12-13T00:20:21.711649590+00:00 stderr F I1213 00:20:21.711557 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:21.711957358+00:00 stderr F I1213 00:20:21.711910 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:21.711957358+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:21.711957358+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:21.712084092+00:00 stderr F I1213 00:20:21.712037 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (488.203µs) 2025-12-13T00:20:21.712084092+00:00 stderr F I1213 00:20:21.712065 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:21.712172654+00:00 stderr F I1213 00:20:21.712132 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:21.712223946+00:00 stderr F I1213 00:20:21.712202 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:21.712313038+00:00 stderr F I1213 00:20:21.712217 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:21.712637597+00:00 stderr F I1213 00:20:21.712585 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:21.750573934+00:00 stderr F W1213 00:20:21.736010 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:21.750573934+00:00 stderr F I1213 00:20:21.738029 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.961636ms) 2025-12-13T00:20:22.712860298+00:00 stderr F I1213 00:20:22.712791 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:22.713780584+00:00 stderr F I1213 00:20:22.713112 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:22.713780584+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:22.713780584+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:22.713780584+00:00 stderr F I1213 00:20:22.713176 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (395.781µs) 2025-12-13T00:20:22.713780584+00:00 stderr F I1213 00:20:22.713191 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:22.713780584+00:00 stderr F I1213 00:20:22.713237 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:22.713780584+00:00 stderr F I1213 00:20:22.713284 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:22.713780584+00:00 stderr F I1213 00:20:22.713291 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), 
Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:22.713780584+00:00 stderr F I1213 00:20:22.713582 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:22.735582104+00:00 stderr F W1213 00:20:22.735528 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:22.737600220+00:00 stderr F I1213 00:20:22.737560 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.365112ms) 2025-12-13T00:20:23.141449506+00:00 stderr F I1213 00:20:23.141374 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:23.141507857+00:00 stderr F I1213 00:20:23.141481 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:23.141552739+00:00 stderr F I1213 00:20:23.141531 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:23.141635531+00:00 stderr F I1213 00:20:23.141542 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded 
version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:23.141860657+00:00 stderr F I1213 00:20:23.141820 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:23.162132117+00:00 stderr F W1213 00:20:23.162094 1 warnings.go:70] unknown field "spec.signatureStores" 
2025-12-13T00:20:23.163513734+00:00 stderr F I1213 00:20:23.163481 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.11387ms) 2025-12-13T00:20:23.714331463+00:00 stderr F I1213 00:20:23.714253 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:23.714615671+00:00 stderr F I1213 00:20:23.714570 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:23.714615671+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:23.714615671+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:23.714670382+00:00 stderr F I1213 00:20:23.714640 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (399.3µs) 2025-12-13T00:20:23.714670382+00:00 stderr F I1213 00:20:23.714662 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:23.714753395+00:00 stderr F I1213 00:20:23.714712 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:23.714791176+00:00 stderr F I1213 00:20:23.714768 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:23.714862128+00:00 stderr F I1213 00:20:23.714782 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, 
Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:23.719642050+00:00 stderr F I1213 00:20:23.719573 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:23.740416812+00:00 stderr F W1213 00:20:23.740379 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:23.742636984+00:00 stderr F I1213 00:20:23.742584 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.91871ms) 2025-12-13T00:20:24.715406687+00:00 stderr F I1213 00:20:24.715311 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:24.715740586+00:00 stderr F I1213 00:20:24.715674 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:24.715740586+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:24.715740586+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:24.715803648+00:00 stderr F I1213 00:20:24.715764 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (466.573µs) 2025-12-13T00:20:24.715803648+00:00 stderr F I1213 00:20:24.715787 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:24.715910971+00:00 stderr F I1213 00:20:24.715833 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:24.715985853+00:00 stderr F I1213 00:20:24.715918 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:24.716052945+00:00 stderr F I1213 00:20:24.715938 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:24.716427605+00:00 stderr F I1213 00:20:24.716329 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:24.756834740+00:00 stderr F W1213 00:20:24.756726 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:24.758561857+00:00 stderr F I1213 00:20:24.758456 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (42.666617ms) 2025-12-13T00:20:25.716143452+00:00 stderr F I1213 00:20:25.716053 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:25.716484991+00:00 stderr F I1213 00:20:25.716397 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:25.716484991+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:25.716484991+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:25.716484991+00:00 stderr F I1213 00:20:25.716470 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (429.832µs) 2025-12-13T00:20:25.716514312+00:00 stderr F I1213 00:20:25.716486 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:25.716570804+00:00 stderr F I1213 00:20:25.716536 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:25.716606715+00:00 stderr F I1213 00:20:25.716589 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:25.716677717+00:00 stderr F I1213 00:20:25.716600 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:25.716963365+00:00 stderr F I1213 00:20:25.716915 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:25.738154539+00:00 stderr F W1213 00:20:25.738078 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:25.739582079+00:00 stderr F I1213 00:20:25.739525 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.037616ms) 2025-12-13T00:20:26.717588247+00:00 stderr F I1213 00:20:26.717510 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:26.717847214+00:00 stderr F I1213 00:20:26.717817 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:26.717847214+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:26.717847214+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:26.717921766+00:00 stderr F I1213 00:20:26.717882 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (383.361µs) 2025-12-13T00:20:26.717921766+00:00 stderr F I1213 00:20:26.717902 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:26.718054899+00:00 stderr F I1213 00:20:26.718006 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:26.718094440+00:00 stderr F I1213 00:20:26.718071 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:26.718200273+00:00 stderr F I1213 00:20:26.718082 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), 
Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:26.718514722+00:00 stderr F I1213 00:20:26.718471 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:26.747104670+00:00 stderr F W1213 00:20:26.747044 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:26.749678341+00:00 stderr F I1213 00:20:26.749629 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.720555ms) 2025-12-13T00:20:27.718613239+00:00 stderr F I1213 00:20:27.718540 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:27.718821555+00:00 stderr F I1213 00:20:27.718787 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:27.718821555+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:27.718821555+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:27.718871276+00:00 stderr F I1213 00:20:27.718838 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (308.848µs) 2025-12-13T00:20:27.718871276+00:00 stderr F I1213 00:20:27.718854 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:27.718952748+00:00 stderr F I1213 00:20:27.718897 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:27.719041111+00:00 stderr F I1213 00:20:27.719002 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:27.719088922+00:00 stderr F I1213 00:20:27.719022 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:27.719314708+00:00 stderr F I1213 00:20:27.719274 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:27.750368074+00:00 stderr F W1213 00:20:27.750309 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:27.751960259+00:00 stderr F I1213 00:20:27.751921 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.064222ms) 2025-12-13T00:20:28.719977402+00:00 stderr F I1213 00:20:28.719878 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:28.720259600+00:00 stderr F I1213 00:20:28.720223 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:28.720259600+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:28.720259600+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:28.720402714+00:00 stderr F I1213 00:20:28.720288 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (423.252µs) 2025-12-13T00:20:28.720402714+00:00 stderr F I1213 00:20:28.720316 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:28.720402714+00:00 stderr F I1213 00:20:28.720369 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:28.720485406+00:00 stderr F I1213 00:20:28.720419 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:28.720485406+00:00 stderr F I1213 00:20:28.720427 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:28.720849126+00:00 stderr F I1213 00:20:28.720746 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:28.749274290+00:00 stderr F W1213 00:20:28.749216 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:28.750689279+00:00 stderr F I1213 00:20:28.750643 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.325066ms) 2025-12-13T00:20:29.721620471+00:00 stderr F I1213 00:20:29.721524 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:29.721952440+00:00 stderr F I1213 00:20:29.721905 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:29.721952440+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:29.721952440+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:29.722030622+00:00 stderr F I1213 00:20:29.721998 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (487.183µs) 2025-12-13T00:20:29.722030622+00:00 stderr F I1213 00:20:29.722019 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:29.722103884+00:00 stderr F I1213 00:20:29.722072 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:29.722142965+00:00 stderr F I1213 00:20:29.722128 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:29.722211527+00:00 stderr F I1213 00:20:29.722139 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), 
Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:29.722526056+00:00 stderr F I1213 00:20:29.722478 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:29.749535801+00:00 stderr F W1213 00:20:29.749466 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:29.751523936+00:00 stderr F I1213 00:20:29.751468 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.444402ms) 2025-12-13T00:20:30.723469477+00:00 stderr F I1213 00:20:30.723185 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:30.723519168+00:00 stderr F I1213 00:20:30.723504 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:30.723519168+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:30.723519168+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:30.723618181+00:00 stderr F I1213 00:20:30.723570 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (397.221µs) 2025-12-13T00:20:30.723618181+00:00 stderr F I1213 00:20:30.723597 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:30.723693583+00:00 stderr F I1213 00:20:30.723649 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:30.723735614+00:00 stderr F I1213 00:20:30.723710 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:30.723807936+00:00 stderr F I1213 00:20:30.723723 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:30.724147665+00:00 stderr F I1213 00:20:30.724084 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:30.757572297+00:00 stderr F W1213 00:20:30.757452 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:30.762170854+00:00 stderr F I1213 00:20:30.761892 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.287055ms) 2025-12-13T00:20:31.723856322+00:00 stderr F I1213 00:20:31.723771 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:31.724285594+00:00 stderr F I1213 00:20:31.724244 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:31.724285594+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:31.724285594+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:31.724375406+00:00 stderr F I1213 00:20:31.724340 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (582.656µs) 2025-12-13T00:20:31.724383917+00:00 stderr F I1213 00:20:31.724370 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:31.724478259+00:00 stderr F I1213 00:20:31.724438 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:31.724544441+00:00 stderr F I1213 00:20:31.724514 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:31.724621323+00:00 stderr F I1213 00:20:31.724533 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:31.725165568+00:00 stderr F I1213 00:20:31.725055 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:31.793990186+00:00 stderr F W1213 00:20:31.755213 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:31.793990186+00:00 stderr F I1213 00:20:31.758692 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.316926ms) 2025-12-13T00:20:32.725495851+00:00 stderr F I1213 00:20:32.725435 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:32.725910314+00:00 stderr F I1213 00:20:32.725869 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:32.725910314+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:32.725910314+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:32.726070758+00:00 stderr F I1213 00:20:32.726023 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (601.578µs) 2025-12-13T00:20:32.726070758+00:00 stderr F I1213 00:20:32.726055 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:32.726196451+00:00 stderr F I1213 00:20:32.726151 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:32.726267013+00:00 stderr F I1213 00:20:32.726242 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:32.726384427+00:00 stderr F I1213 00:20:32.726278 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), 
Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:32.726810048+00:00 stderr F I1213 00:20:32.726753 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:32.750220043+00:00 stderr F W1213 00:20:32.750154 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:32.753570756+00:00 stderr F I1213 00:20:32.753461 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.401125ms) 2025-12-13T00:20:33.726336828+00:00 stderr F I1213 00:20:33.726260 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:33.726793451+00:00 stderr F I1213 00:20:33.726746 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:33.726793451+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:33.726793451+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:33.727000116+00:00 stderr F I1213 00:20:33.726918 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (699.309µs) 2025-12-13T00:20:33.727062648+00:00 stderr F I1213 00:20:33.727033 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:33.727214942+00:00 stderr F I1213 00:20:33.727186 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:33.727296694+00:00 stderr F I1213 00:20:33.727282 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:33.727391867+00:00 stderr F I1213 00:20:33.727320 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:33.727785997+00:00 stderr F I1213 00:20:33.727741 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:33.755253508+00:00 stderr F W1213 00:20:33.755182 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:33.757166890+00:00 stderr F I1213 00:20:33.757124 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.144324ms) 2025-12-13T00:20:34.727808453+00:00 stderr F I1213 00:20:34.727726 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:34.728132132+00:00 stderr F I1213 00:20:34.728087 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:34.728132132+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:34.728132132+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:34.728194373+00:00 stderr F I1213 00:20:34.728150 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (439.142µs) 2025-12-13T00:20:34.728194373+00:00 stderr F I1213 00:20:34.728181 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:34.728277366+00:00 stderr F I1213 00:20:34.728241 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:34.728335657+00:00 stderr F I1213 00:20:34.728301 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:34.728403059+00:00 stderr F I1213 00:20:34.728316 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:34.728715587+00:00 stderr F I1213 00:20:34.728663 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:34.750704511+00:00 stderr F W1213 00:20:34.750647 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:34.752147240+00:00 stderr F I1213 00:20:34.752114 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.932016ms) 2025-12-13T00:20:35.728381894+00:00 stderr F I1213 00:20:35.728326 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:35.728874997+00:00 stderr F I1213 00:20:35.728833 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:35.728874997+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:35.728874997+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:35.728964050+00:00 stderr F I1213 00:20:35.728919 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (604.287µs) 2025-12-13T00:20:35.728997661+00:00 stderr F I1213 00:20:35.728978 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:35.729101954+00:00 stderr F I1213 00:20:35.729058 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:35.729170706+00:00 stderr F I1213 00:20:35.729133 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:35.729266578+00:00 stderr F I1213 00:20:35.729163 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), 
Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:35.729724390+00:00 stderr F I1213 00:20:35.729644 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:35.751639392+00:00 stderr F W1213 00:20:35.751437 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:35.753317547+00:00 stderr F I1213 00:20:35.753276 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.295085ms) 2025-12-13T00:20:36.472874635+00:00 stderr F I1213 00:20:36.472815 1 sync_worker.go:582] Wait finished 2025-12-13T00:20:36.473023569+00:00 stderr F I1213 00:20:36.472861 1 sync_worker.go:632] Previous sync status: &cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, 
CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:36.473034749+00:00 stderr F I1213 00:20:36.473016 1 sync_worker.go:883] apply: 4.16.0 on generation 4 in state Reconciling at attempt 1 2025-12-13T00:20:36.474963631+00:00 stderr F I1213 00:20:36.474932 1 task_graph.go:481] Running 0 on worker 1 2025-12-13T00:20:36.474963631+00:00 stderr F I1213 00:20:36.474957 1 task_graph.go:481] Running 1 on worker 1 2025-12-13T00:20:36.474990952+00:00 stderr F I1213 00:20:36.474979 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:36.475014543+00:00 stderr F I1213 00:20:36.474989 1 sync_worker.go:999] Running sync for role "openshift-apiserver-operator/prometheus-k8s" (936 of 955) 2025-12-13T00:20:36.475757672+00:00 stderr F I1213 00:20:36.475737 1 task_graph.go:481] Running 2 on worker 0 2025-12-13T00:20:36.477407517+00:00 stderr F E1213 00:20:36.476021 1 task.go:122] error running apply for role "openshift-apiserver-operator/prometheus-k8s" (936 of 955): Get "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-apiserver-operator/roles/prometheus-k8s": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:36.477407517+00:00 stderr F I1213 00:20:36.476095 1 sync_worker.go:986] Unable to precreate resource clusteroperator "console" (635 of 955): Post 
"https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:36.477407517+00:00 stderr F I1213 00:20:36.476123 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:36.477407517+00:00 stderr F I1213 00:20:36.476129 1 sync_worker.go:999] Running sync for customresourcedefinition "consoles.operator.openshift.io" (580 of 955) 2025-12-13T00:20:36.478191338+00:00 stderr F E1213 00:20:36.477763 1 task.go:122] error running apply for customresourcedefinition "consoles.operator.openshift.io" (580 of 955): Get "https://api-int.crc.testing:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/consoles.operator.openshift.io": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:36.729590292+00:00 stderr F I1213 00:20:36.729524 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:36.729902391+00:00 stderr F I1213 00:20:36.729871 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:36.729902391+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:36.729902391+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:36.729996333+00:00 stderr F I1213 00:20:36.729960 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (463.312µs) 2025-12-13T00:20:36.729996333+00:00 stderr F I1213 00:20:36.729985 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:36.730062955+00:00 stderr F I1213 00:20:36.730037 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:36.730108666+00:00 stderr F I1213 00:20:36.730092 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:36.730200219+00:00 stderr F I1213 00:20:36.730106 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:36.730515877+00:00 stderr F I1213 00:20:36.730475 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:36.731579725+00:00 stderr F I1213 00:20:36.731546 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.553901ms) 2025-12-13T00:20:36.731598746+00:00 stderr F I1213 00:20:36.731580 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:36.736828667+00:00 stderr F I1213 00:20:36.736806 1 
cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:36.736951680+00:00 stderr F I1213 00:20:36.736911 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:36.737016412+00:00 stderr F I1213 00:20:36.737005 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:36.737084864+00:00 stderr F I1213 00:20:36.737034 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:36.737356321+00:00 stderr F I1213 00:20:36.737330 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:36.738175414+00:00 stderr F I1213 00:20:36.738143 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.337557ms)
2025-12-13T00:20:36.738198915+00:00 stderr F I1213 00:20:36.738165 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused
2025-12-13T00:20:36.748437450+00:00 stderr F I1213 00:20:36.748412 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:36.748506292+00:00 stderr F I1213 00:20:36.748480 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:36.748532173+00:00 stderr F I1213 00:20:36.748517 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:36.748583904+00:00 stderr F I1213 00:20:36.748527 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:36.748804261+00:00 stderr F I1213 00:20:36.748769 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:36.749374216+00:00 stderr F I1213 00:20:36.749353 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (941.776µs)
2025-12-13T00:20:36.749415467+00:00 stderr F I1213 00:20:36.749399 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused
2025-12-13T00:20:36.769752586+00:00 stderr F I1213 00:20:36.769708 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:36.769871129+00:00 stderr F I1213 00:20:36.769837 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:36.769973002+00:00 stderr F I1213 00:20:36.769907 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:36.770059744+00:00 stderr F I1213 00:20:36.769923 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:36.770434474+00:00 stderr F I1213 00:20:36.770384 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:36.771493773+00:00 stderr F I1213 00:20:36.771467 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.772748ms)
2025-12-13T00:20:36.771493773+00:00 stderr F I1213 00:20:36.771483 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused
2025-12-13T00:20:36.811809491+00:00 stderr F I1213 00:20:36.811758 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:36.811972905+00:00 stderr F I1213 00:20:36.811951 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:36.812050667+00:00 stderr F I1213 00:20:36.812040 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:36.812121589+00:00 stderr F I1213 00:20:36.812069 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:36.812405416+00:00 stderr F I1213 00:20:36.812378 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:36.813115166+00:00 stderr F I1213 00:20:36.813096 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.349017ms)
2025-12-13T00:20:36.813152927+00:00 stderr F I1213 00:20:36.813137 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused
2025-12-13T00:20:36.893595667+00:00 stderr F I1213 00:20:36.893548 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:36.893767852+00:00 stderr F I1213 00:20:36.893743 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:36.893839994+00:00 stderr F I1213 00:20:36.893828 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:36.893924156+00:00 stderr F I1213 00:20:36.893859 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:36.894257095+00:00 stderr F I1213 00:20:36.894222 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:36.895128449+00:00 stderr F I1213 00:20:36.895103 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.566922ms)
2025-12-13T00:20:36.895152010+00:00 stderr F I1213 00:20:36.895119 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused
2025-12-13T00:20:37.055563178+00:00 stderr F I1213 00:20:37.055444 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:37.055643380+00:00 stderr F I1213 00:20:37.055599 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:37.055716112+00:00 stderr F I1213 00:20:37.055665 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:37.055784924+00:00 stderr F I1213 00:20:37.055682 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:37.056169154+00:00 stderr F I1213 00:20:37.056071 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:37.057402898+00:00 stderr F I1213 00:20:37.057261 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.81609ms)
2025-12-13T00:20:37.057529761+00:00 stderr F I1213 00:20:37.057478 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused
2025-12-13T00:20:37.378202944+00:00 stderr F I1213 00:20:37.378117 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:37.378274476+00:00 stderr F I1213 00:20:37.378239 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:37.378353708+00:00 stderr F I1213 00:20:37.378313 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:37.378430041+00:00 stderr F I1213 00:20:37.378337 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:37.378789490+00:00 stderr F I1213 00:20:37.378718 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:37.379979553+00:00 stderr F I1213 00:20:37.379899 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.784388ms)
2025-12-13T00:20:37.380078355+00:00 stderr F I1213 00:20:37.380038 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused
2025-12-13T00:20:37.730986974+00:00 stderr F I1213 00:20:37.730341 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:20:37.731235521+00:00 stderr F I1213 00:20:37.731193 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:20:37.731235521+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:20:37.731235521+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:20:37.731270082+00:00 stderr F I1213 00:20:37.731242 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (935.565µs)
2025-12-13T00:20:37.731270082+00:00 stderr F I1213 00:20:37.731256 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:37.731381615+00:00 stderr F I1213 00:20:37.731310 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:37.731381615+00:00 stderr F I1213 00:20:37.731360 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:37.731473917+00:00 stderr F I1213 00:20:37.731366 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:37.731712323+00:00 stderr F I1213 00:20:37.731668 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:37.732709281+00:00 stderr F I1213 00:20:37.732630 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.364457ms)
2025-12-13T00:20:37.732709281+00:00 stderr F I1213 00:20:37.732676 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused
2025-12-13T00:20:38.021055912+00:00 stderr F I1213 00:20:38.020980 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:38.021148574+00:00 stderr F I1213 00:20:38.021104 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:38.021190286+00:00 stderr F I1213 00:20:38.021166 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:38.021253157+00:00 stderr F I1213 00:20:38.021179 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:38.021500704+00:00 stderr F I1213 00:20:38.021440 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:38.022349876+00:00 stderr F I1213 00:20:38.022303 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.336966ms)
2025-12-13T00:20:38.022349876+00:00 stderr F I1213 00:20:38.022326 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused
2025-12-13T00:20:38.141722488+00:00 stderr F I1213 00:20:38.141634 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:38.141800700+00:00 stderr F I1213 00:20:38.141738 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:38.141800700+00:00 stderr F I1213 00:20:38.141794 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:38.141890983+00:00 stderr F I1213 00:20:38.141806 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:20:38.142292523+00:00 stderr F I1213 00:20:38.142240 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:20:38.143435764+00:00 stderr F I1213 00:20:38.143394 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.765347ms)
2025-12-13T00:20:38.143459294+00:00 stderr F I1213 00:20:38.143427 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused
2025-12-13T00:20:38.731971375+00:00 stderr F I1213 00:20:38.731871 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:20:38.732169971+00:00 stderr F I1213 00:20:38.732130 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:20:38.732169971+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:20:38.732169971+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:20:38.732218183+00:00 stderr F I1213 00:20:38.732190 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (332.47µs)
2025-12-13T00:20:38.732218183+00:00 stderr F I1213 00:20:38.732209 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:20:38.732276644+00:00 stderr F I1213 00:20:38.732250 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:20:38.732310315+00:00 stderr F I1213 00:20:38.732292 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:20:38.732365847+00:00 stderr F I1213 00:20:38.732302 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil),
Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:38.732666945+00:00 stderr F I1213 00:20:38.732578 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:38.733536598+00:00 stderr F I1213 00:20:38.733493 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.277554ms) 2025-12-13T00:20:38.733536598+00:00 stderr F I1213 00:20:38.733523 1 cvo.go:636] Error handling openshift-cluster-version/version: Put 
"https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:39.733386879+00:00 stderr F I1213 00:20:39.733241 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:39.733564683+00:00 stderr F I1213 00:20:39.733526 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:39.733564683+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:39.733564683+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:39.733603735+00:00 stderr F I1213 00:20:39.733576 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (352.13µs) 2025-12-13T00:20:39.733603735+00:00 stderr F I1213 00:20:39.733594 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:39.733666146+00:00 stderr F I1213 00:20:39.733630 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:39.733702427+00:00 stderr F I1213 00:20:39.733683 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:39.733749008+00:00 stderr F I1213 00:20:39.733694 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, 
VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:39.734061967+00:00 stderr F I1213 00:20:39.734008 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", 
StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:39.734761535+00:00 stderr F I1213 00:20:39.734730 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.12643ms) 2025-12-13T00:20:39.734816637+00:00 stderr F I1213 00:20:39.734792 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:40.582893702+00:00 stderr F I1213 00:20:40.582834 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:40.582963334+00:00 stderr F I1213 00:20:40.582922 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:40.583031756+00:00 stderr F I1213 00:20:40.582997 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:40.583089118+00:00 stderr F I1213 00:20:40.583015 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, 
Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:40.583345824+00:00 stderr F I1213 00:20:40.583303 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:40.585038800+00:00 stderr F I1213 
00:20:40.585013 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.18521ms) 2025-12-13T00:20:40.585132243+00:00 stderr F I1213 00:20:40.585109 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:40.734439382+00:00 stderr F I1213 00:20:40.734339 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:40.734675728+00:00 stderr F I1213 00:20:40.734623 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:40.734675728+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:40.734675728+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:40.734700749+00:00 stderr F I1213 00:20:40.734674 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (346.799µs) 2025-12-13T00:20:40.734700749+00:00 stderr F I1213 00:20:40.734686 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:40.734790421+00:00 stderr F I1213 00:20:40.734733 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:40.734811632+00:00 stderr F I1213 00:20:40.734787 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:40.734877063+00:00 stderr F I1213 00:20:40.734796 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:40.735183622+00:00 stderr F I1213 00:20:40.735117 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:40.736082036+00:00 stderr F I1213 00:20:40.735893 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.198313ms) 2025-12-13T00:20:40.736082036+00:00 stderr F I1213 00:20:40.735923 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:41.735131866+00:00 stderr F I1213 00:20:41.735056 1 
cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:41.735365092+00:00 stderr F I1213 00:20:41.735331 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:41.735365092+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:41.735365092+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:41.735453064+00:00 stderr F I1213 00:20:41.735412 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (366.501µs) 2025-12-13T00:20:41.735453064+00:00 stderr F I1213 00:20:41.735438 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:41.735554607+00:00 stderr F I1213 00:20:41.735502 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:41.735586748+00:00 stderr F I1213 00:20:41.735567 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:41.735649379+00:00 stderr F I1213 00:20:41.735577 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:41.736234175+00:00 stderr F I1213 00:20:41.736168 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:41.737458838+00:00 stderr F I1213 00:20:41.737254 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.807838ms) 2025-12-13T00:20:41.737519079+00:00 stderr F I1213 00:20:41.737456 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:41.738290061+00:00 stderr F E1213 00:20:41.738237 1 cvo.go:642] Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:41.738290061+00:00 stderr F I1213 00:20:41.738265 1 cvo.go:643] Dropping "openshift-cluster-version/version" out of the queue &{0xc00035e120 0xc0001a4078}: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:42.735507780+00:00 stderr F I1213 00:20:42.735446 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:42.735749227+00:00 stderr F I1213 00:20:42.735710 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine 
thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:42.735749227+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:42.735749227+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:42.735784078+00:00 stderr F I1213 00:20:42.735762 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (327.179µs) 2025-12-13T00:20:42.735784078+00:00 stderr F I1213 00:20:42.735778 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:42.735842239+00:00 stderr F I1213 00:20:42.735817 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:42.735883790+00:00 stderr F I1213 00:20:42.735864 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:42.735930432+00:00 stderr F I1213 00:20:42.735874 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" 
architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:42.736181538+00:00 stderr F I1213 00:20:42.736125 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:42.736971559+00:00 stderr F I1213 00:20:42.736917 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.131871ms) 2025-12-13T00:20:42.736988650+00:00 stderr F I1213 00:20:42.736958 1 cvo.go:636] Error handling 
openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:42.742214601+00:00 stderr F I1213 00:20:42.742181 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:42.742304203+00:00 stderr F I1213 00:20:42.742264 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:42.742340014+00:00 stderr F I1213 00:20:42.742326 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:42.742413696+00:00 stderr F I1213 00:20:42.742338 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, 
loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:42.742668863+00:00 stderr F I1213 00:20:42.742628 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:42.743364861+00:00 stderr F I1213 00:20:42.743323 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.14186ms) 2025-12-13T00:20:42.743364861+00:00 stderr F I1213 00:20:42.743338 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:42.753607588+00:00 stderr F I1213 00:20:42.753576 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, 
and payload 2025-12-13T00:20:42.753648639+00:00 stderr F I1213 00:20:42.753629 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:42.753678040+00:00 stderr F I1213 00:20:42.753663 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:42.753720871+00:00 stderr F I1213 00:20:42.753675 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, 
KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:42.753951337+00:00 stderr F I1213 00:20:42.753891 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:42.754524683+00:00 stderr F I1213 00:20:42.754497 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (922.605µs) 2025-12-13T00:20:42.754524683+00:00 stderr F I1213 00:20:42.754513 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:42.774834071+00:00 stderr F I1213 00:20:42.774770 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:42.774897703+00:00 stderr F I1213 00:20:42.774868 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 
2025-12-13T00:20:42.774967955+00:00 stderr F I1213 00:20:42.774948 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:42.775050817+00:00 stderr F I1213 00:20:42.774962 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, 
ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:42.775330055+00:00 stderr F I1213 00:20:42.775280 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:42.776074004+00:00 stderr F I1213 00:20:42.776031 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.263194ms) 2025-12-13T00:20:42.776074004+00:00 stderr F I1213 00:20:42.776058 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:42.816528767+00:00 stderr F I1213 00:20:42.816427 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:42.816585348+00:00 stderr F I1213 00:20:42.816511 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:42.816585348+00:00 stderr F I1213 00:20:42.816572 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:42.816660380+00:00 stderr F I1213 00:20:42.816583 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, 
Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:42.816961418+00:00 stderr F I1213 00:20:42.816864 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", 
URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:42.818040367+00:00 stderr F I1213 00:20:42.817923 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.49874ms) 2025-12-13T00:20:42.818040367+00:00 stderr F I1213 00:20:42.817970 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:42.899345091+00:00 stderr F I1213 00:20:42.898838 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:42.899430933+00:00 stderr F I1213 00:20:42.899389 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:42.899489275+00:00 stderr F I1213 00:20:42.899458 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:42.899568057+00:00 stderr F I1213 00:20:42.899478 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:42.899849464+00:00 stderr F I1213 00:20:42.899804 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:42.900900903+00:00 stderr F I1213 00:20:42.900834 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.996134ms) 2025-12-13T00:20:42.900900903+00:00 stderr F I1213 00:20:42.900872 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:43.061366483+00:00 stderr F I1213 00:20:43.061277 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:43.061423765+00:00 stderr F I1213 00:20:43.061361 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:43.061423765+00:00 stderr F I1213 00:20:43.061410 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:43.061506357+00:00 stderr F I1213 00:20:43.061418 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" 
image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:43.061751073+00:00 stderr F I1213 00:20:43.061702 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:43.062585686+00:00 stderr F I1213 00:20:43.062520 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.248054ms) 
2025-12-13T00:20:43.062585686+00:00 stderr F I1213 00:20:43.062552 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:43.383394623+00:00 stderr F I1213 00:20:43.383317 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:43.383473755+00:00 stderr F I1213 00:20:43.383439 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:43.383539657+00:00 stderr F I1213 00:20:43.383500 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:43.383606509+00:00 stderr F I1213 00:20:43.383516 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:43.383900647+00:00 stderr F I1213 00:20:43.383856 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:43.384971745+00:00 stderr F I1213 00:20:43.384898 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.578702ms) 2025-12-13T00:20:43.385001346+00:00 stderr F I1213 00:20:43.384978 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:43.736877912+00:00 stderr F I1213 00:20:43.736349 1 
cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:43.737216211+00:00 stderr F I1213 00:20:43.737170 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:43.737216211+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:43.737216211+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:43.737277062+00:00 stderr F I1213 00:20:43.737239 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (903.004µs) 2025-12-13T00:20:43.737277062+00:00 stderr F I1213 00:20:43.737264 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:43.737367055+00:00 stderr F I1213 00:20:43.737323 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:43.737409596+00:00 stderr F I1213 00:20:43.737387 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:43.737491148+00:00 stderr F I1213 00:20:43.737401 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:43.737771665+00:00 stderr F I1213 00:20:43.737726 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:43.739116752+00:00 stderr F I1213 00:20:43.738924 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.658315ms) 2025-12-13T00:20:43.739116752+00:00 stderr F I1213 00:20:43.738975 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:44.025611113+00:00 stderr F I1213 00:20:44.025526 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:44.025666205+00:00 stderr F I1213 00:20:44.025643 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:44.025739517+00:00 stderr F I1213 00:20:44.025698 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:44.025819699+00:00 stderr F I1213 00:20:44.025713 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" 
image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:44.026097116+00:00 stderr F I1213 00:20:44.026052 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:44.027104834+00:00 stderr F I1213 00:20:44.027050 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.527451ms) 
2025-12-13T00:20:44.027104834+00:00 stderr F I1213 00:20:44.027085 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:44.737894265+00:00 stderr F I1213 00:20:44.737771 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:44.738170833+00:00 stderr F I1213 00:20:44.738132 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:44.738170833+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:44.738170833+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:44.738189194+00:00 stderr F I1213 00:20:44.738181 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (423.782µs) 2025-12-13T00:20:44.738199874+00:00 stderr F I1213 00:20:44.738194 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:44.738266466+00:00 stderr F I1213 00:20:44.738232 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:44.738294766+00:00 stderr F I1213 00:20:44.738280 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:44.738356378+00:00 stderr F I1213 00:20:44.738287 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) 
status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:44.738612175+00:00 stderr F I1213 00:20:44.738566 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:44.739538060+00:00 stderr F I1213 00:20:44.739440 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.238123ms) 2025-12-13T00:20:44.739538060+00:00 stderr F I1213 00:20:44.739477 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:45.739381730+00:00 stderr F I1213 00:20:45.739176 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:45.739476032+00:00 stderr F I1213 00:20:45.739423 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:45.739476032+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:45.739476032+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:45.739476032+00:00 stderr F I1213 00:20:45.739467 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (306.008µs) 2025-12-13T00:20:45.739499703+00:00 stderr F I1213 00:20:45.739480 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:45.739538354+00:00 stderr F I1213 00:20:45.739519 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:45.739572935+00:00 stderr F I1213 00:20:45.739562 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:45.739749740+00:00 stderr F I1213 00:20:45.739568 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:45.739893603+00:00 stderr F I1213 00:20:45.739799 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:45.740654994+00:00 stderr F I1213 00:20:45.740602 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.11448ms) 2025-12-13T00:20:45.740654994+00:00 stderr F I1213 00:20:45.740634 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:46.478034682+00:00 stderr F E1213 00:20:46.477975 1 
task.go:122] error running apply for role "openshift-apiserver-operator/prometheus-k8s" (936 of 955): Get "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-apiserver-operator/roles/prometheus-k8s": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:46.479350367+00:00 stderr F E1213 00:20:46.479297 1 task.go:122] error running apply for customresourcedefinition "consoles.operator.openshift.io" (580 of 955): Get "https://api-int.crc.testing:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/consoles.operator.openshift.io": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:46.587899117+00:00 stderr F I1213 00:20:46.587405 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:46.587899117+00:00 stderr F I1213 00:20:46.587515 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:46.587899117+00:00 stderr F I1213 00:20:46.587566 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:46.587899117+00:00 stderr F I1213 00:20:46.587574 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded 
version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:46.587899117+00:00 stderr F I1213 00:20:46.587864 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:46.588739019+00:00 stderr F I1213 00:20:46.588688 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" 
(1.287395ms) 2025-12-13T00:20:46.588739019+00:00 stderr F I1213 00:20:46.588716 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:46.740500544+00:00 stderr F I1213 00:20:46.740429 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:46.740708720+00:00 stderr F I1213 00:20:46.740676 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:46.740708720+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:46.740708720+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:46.740747321+00:00 stderr F I1213 00:20:46.740728 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (313.358µs) 2025-12-13T00:20:46.740755271+00:00 stderr F I1213 00:20:46.740746 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:46.740816823+00:00 stderr F I1213 00:20:46.740787 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:46.740850574+00:00 stderr F I1213 00:20:46.740835 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:46.740905525+00:00 stderr F I1213 00:20:46.740846 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) 
status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:46.741151813+00:00 stderr F I1213 00:20:46.741115 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:46.741990665+00:00 stderr F I1213 00:20:46.741961 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.206643ms) 2025-12-13T00:20:46.742001365+00:00 stderr F I1213 00:20:46.741987 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:47.741822135+00:00 stderr F I1213 00:20:47.741706 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:47.742618107+00:00 stderr F I1213 00:20:47.742061 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:47.742618107+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:47.742618107+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:47.742808862+00:00 stderr F I1213 00:20:47.742748 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.043028ms) 2025-12-13T00:20:47.742808862+00:00 stderr F I1213 00:20:47.742790 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:47.742902864+00:00 stderr F I1213 00:20:47.742859 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:47.742951355+00:00 stderr F I1213 00:20:47.742915 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:47.743378417+00:00 stderr F I1213 00:20:47.742925 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:47.743378417+00:00 stderr F I1213 00:20:47.743200 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:47.744291692+00:00 stderr F I1213 00:20:47.744240 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.440049ms) 2025-12-13T00:20:47.744322703+00:00 stderr F I1213 00:20:47.744277 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:48.743115255+00:00 stderr F I1213 00:20:48.743045 1 
cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:48.743324730+00:00 stderr F I1213 00:20:48.743293 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:48.743324730+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:48.743324730+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:48.743366751+00:00 stderr F I1213 00:20:48.743341 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (307.569µs) 2025-12-13T00:20:48.743366751+00:00 stderr F I1213 00:20:48.743360 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:48.743481804+00:00 stderr F I1213 00:20:48.743404 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:48.743481804+00:00 stderr F I1213 00:20:48.743465 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:48.743548346+00:00 stderr F I1213 00:20:48.743477 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:48.743772852+00:00 stderr F I1213 00:20:48.743733 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:48.744516323+00:00 stderr F I1213 00:20:48.744474 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.113181ms) 2025-12-13T00:20:48.744581815+00:00 stderr F I1213 00:20:48.744548 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:48.745323134+00:00 stderr F E1213 00:20:48.745244 1 cvo.go:642] Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:48.745323134+00:00 stderr F I1213 00:20:48.745301 1 cvo.go:643] Dropping "openshift-cluster-version/version" out of the queue &{0xc00035e120 0xc0001a4078}: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:49.744240181+00:00 stderr F I1213 00:20:49.744155 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:49.744429316+00:00 stderr F I1213 00:20:49.744387 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine 
thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:49.744429316+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:49.744429316+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:49.744446756+00:00 stderr F I1213 00:20:49.744435 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (292.149µs) 2025-12-13T00:20:49.744457746+00:00 stderr F I1213 00:20:49.744450 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:49.744529968+00:00 stderr F I1213 00:20:49.744486 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:49.744541819+00:00 stderr F I1213 00:20:49.744534 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:49.744613421+00:00 stderr F I1213 00:20:49.744540 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" 
architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:49.744831966+00:00 stderr F I1213 00:20:49.744777 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:49.745668109+00:00 stderr F I1213 00:20:49.745608 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.150051ms) 2025-12-13T00:20:49.745668109+00:00 stderr F I1213 00:20:49.745645 1 cvo.go:636] Error handling 
openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:49.751134746+00:00 stderr F I1213 00:20:49.751059 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:49.751264780+00:00 stderr F I1213 00:20:49.751209 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:49.751328021+00:00 stderr F I1213 00:20:49.751291 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:49.751410744+00:00 stderr F I1213 00:20:49.751308 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, 
loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:49.751700051+00:00 stderr F I1213 00:20:49.751644 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:49.753034498+00:00 stderr F I1213 00:20:49.752986 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.940393ms) 2025-12-13T00:20:49.753051398+00:00 stderr F I1213 00:20:49.753022 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:49.763368396+00:00 stderr F I1213 00:20:49.763313 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, 
and payload 2025-12-13T00:20:49.763476519+00:00 stderr F I1213 00:20:49.763440 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:49.763527810+00:00 stderr F I1213 00:20:49.763504 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:49.763601492+00:00 stderr F I1213 00:20:49.763520 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, 
KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:49.763849490+00:00 stderr F I1213 00:20:49.763812 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:49.764887327+00:00 stderr F I1213 00:20:49.764851 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.532342ms) 2025-12-13T00:20:49.764887327+00:00 stderr F I1213 00:20:49.764872 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:49.785241246+00:00 stderr F I1213 00:20:49.785160 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:49.785440532+00:00 stderr F I1213 00:20:49.785390 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 
2025-12-13T00:20:49.785530384+00:00 stderr F I1213 00:20:49.785499 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:49.785630987+00:00 stderr F I1213 00:20:49.785522 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, 
ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:49.786275755+00:00 stderr F I1213 00:20:49.786214 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:49.787703283+00:00 stderr F I1213 00:20:49.787677 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.537618ms) 2025-12-13T00:20:49.787716913+00:00 stderr F I1213 00:20:49.787697 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:49.828063212+00:00 stderr F I1213 00:20:49.827993 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:49.828111753+00:00 stderr F I1213 00:20:49.828084 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:49.828190275+00:00 stderr F I1213 00:20:49.828131 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:49.828245797+00:00 stderr F I1213 00:20:49.828175 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, 
Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:49.828674379+00:00 stderr F I1213 00:20:49.828435 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", 
URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:49.829425449+00:00 stderr F I1213 00:20:49.829330 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.337947ms) 2025-12-13T00:20:49.829425449+00:00 stderr F I1213 00:20:49.829372 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:49.909859960+00:00 stderr F I1213 00:20:49.909749 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:49.909913781+00:00 stderr F I1213 00:20:49.909858 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:49.909924481+00:00 stderr F I1213 00:20:49.909917 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:49.910057015+00:00 stderr F I1213 00:20:49.909926 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:49.910357723+00:00 stderr F I1213 00:20:49.910311 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:49.911325969+00:00 stderr F I1213 00:20:49.911263 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.524951ms) 2025-12-13T00:20:49.911325969+00:00 stderr F I1213 00:20:49.911291 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:50.071818349+00:00 stderr F I1213 00:20:50.071739 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:50.071868981+00:00 stderr F I1213 00:20:50.071834 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:50.071899202+00:00 stderr F I1213 00:20:50.071882 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:50.072034216+00:00 stderr F I1213 00:20:50.071890 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" 
image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:50.072253152+00:00 stderr F I1213 00:20:50.072211 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:50.073055303+00:00 stderr F I1213 00:20:50.073016 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.288485ms) 
2025-12-13T00:20:50.073055303+00:00 stderr F I1213 00:20:50.073035 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:50.393670945+00:00 stderr F I1213 00:20:50.393600 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:50.393716866+00:00 stderr F I1213 00:20:50.393700 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:50.393801938+00:00 stderr F I1213 00:20:50.393770 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:50.393976923+00:00 stderr F I1213 00:20:50.393793 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:50.394311023+00:00 stderr F I1213 00:20:50.394242 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:50.395387811+00:00 stderr F I1213 00:20:50.395323 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.725487ms) 2025-12-13T00:20:50.395387811+00:00 stderr F I1213 00:20:50.395358 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:50.745576761+00:00 stderr F I1213 00:20:50.745495 1 
cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:50.745788897+00:00 stderr F I1213 00:20:50.745742 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:50.745788897+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:50.745788897+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:50.745804167+00:00 stderr F I1213 00:20:50.745793 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (310.878µs) 2025-12-13T00:20:50.745813197+00:00 stderr F I1213 00:20:50.745805 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:50.745902230+00:00 stderr F I1213 00:20:50.745851 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:50.745914350+00:00 stderr F I1213 00:20:50.745908 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:50.746031403+00:00 stderr F I1213 00:20:50.745915 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:50.746262319+00:00 stderr F I1213 00:20:50.746221 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:50.746948318+00:00 stderr F I1213 00:20:50.746902 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.096649ms) 2025-12-13T00:20:50.746963438+00:00 stderr F I1213 00:20:50.746947 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:51.036469681+00:00 stderr F I1213 00:20:51.036397 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:51.036514942+00:00 stderr F I1213 00:20:51.036482 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:51.036566154+00:00 stderr F I1213 00:20:51.036538 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:51.036634205+00:00 stderr F I1213 00:20:51.036552 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" 
image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:51.036952234+00:00 stderr F I1213 00:20:51.036894 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:51.037910629+00:00 stderr F I1213 00:20:51.037854 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (1.459989ms) 
2025-12-13T00:20:51.037910629+00:00 stderr F I1213 00:20:51.037883 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:51.746555922+00:00 stderr F I1213 00:20:51.746026 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:51.746794318+00:00 stderr F I1213 00:20:51.746764 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:51.746794318+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:51.746794318+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:51.746855920+00:00 stderr F I1213 00:20:51.746829 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (815.221µs) 2025-12-13T00:20:51.746855920+00:00 stderr F I1213 00:20:51.746849 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:51.746925182+00:00 stderr F I1213 00:20:51.746897 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:51.746985023+00:00 stderr F I1213 00:20:51.746970 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:51.747046905+00:00 stderr F I1213 00:20:51.746982 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) 
status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:51.747353834+00:00 stderr F I1213 00:20:51.747312 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:51.749910562+00:00 stderr F I1213 00:20:51.749846 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (2.99597ms) 2025-12-13T00:20:51.749910562+00:00 stderr F I1213 00:20:51.749866 1 cvo.go:636] Error handling openshift-cluster-version/version: Put "https://api-int.crc.testing:6443/apis/config.openshift.io/v1/clusterversions/version/status": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:52.747312798+00:00 stderr F I1213 00:20:52.747204 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:52.747561274+00:00 stderr F I1213 00:20:52.747502 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:52.747561274+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:52.747561274+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:52.747582695+00:00 stderr F I1213 00:20:52.747568 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (376.071µs) 2025-12-13T00:20:52.747592995+00:00 stderr F I1213 00:20:52.747582 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:52.747664297+00:00 stderr F I1213 00:20:52.747623 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:52.747690448+00:00 stderr F I1213 00:20:52.747675 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:52.747769740+00:00 stderr F I1213 00:20:52.747686 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:52.748020757+00:00 stderr F I1213 00:20:52.747974 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:53.752000508+00:00 stderr F I1213 00:20:53.751039 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:53.752000508+00:00 stderr F I1213 00:20:53.751325 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:53.752000508+00:00 stderr F 
NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:53.752000508+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:53.752000508+00:00 stderr F I1213 00:20:53.751385 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (358.45µs) 2025-12-13T00:20:54.752332042+00:00 stderr F I1213 00:20:54.751795 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:54.752572288+00:00 stderr F I1213 00:20:54.752512 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:54.752572288+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:54.752572288+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:54.752600359+00:00 stderr F I1213 00:20:54.752583 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (800.432µs) 2025-12-13T00:20:55.753639022+00:00 stderr F I1213 00:20:55.753192 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:55.753639022+00:00 stderr F I1213 00:20:55.753426 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:55.753639022+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:55.753639022+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:55.753639022+00:00 stderr F I1213 00:20:55.753472 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (293.449µs) 2025-12-13T00:20:56.753611536+00:00 stderr F I1213 00:20:56.753536 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:56.754045908+00:00 stderr F I1213 00:20:56.754006 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:56.754045908+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:56.754045908+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:56.754140490+00:00 stderr F I1213 00:20:56.754099 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (575.855µs) 2025-12-13T00:20:57.754807673+00:00 stderr F I1213 00:20:57.754691 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:57.755345627+00:00 stderr F I1213 00:20:57.755282 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:57.755345627+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:20:57.755345627+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:57.755452100+00:00 stderr F I1213 00:20:57.755406 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (727.89µs) 2025-12-13T00:20:58.477844204+00:00 stderr F W1213 00:20:58.477437 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:58.479465398+00:00 stderr F I1213 00:20:58.479431 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (5.731837422s) 2025-12-13T00:20:58.479594711+00:00 stderr F I1213 00:20:58.479507 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:58.479708525+00:00 stderr F I1213 00:20:58.479687 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 
2025-12-13T00:20:58.479779846+00:00 stderr F I1213 00:20:58.479765 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:58.479867479+00:00 stderr F I1213 00:20:58.479802 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, 
ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:58.480204528+00:00 stderr F I1213 00:20:58.480174 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:58.508123061+00:00 stderr F W1213 00:20:58.508064 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:58.510128836+00:00 stderr F I1213 00:20:58.510086 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.577075ms) 2025-12-13T00:20:58.757056719+00:00 stderr F I1213 00:20:58.756332 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:58.757056719+00:00 stderr F I1213 00:20:58.756903 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:58.757056719+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:58.757056719+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:58.757107950+00:00 stderr F I1213 00:20:58.757061 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (775.312µs) 2025-12-13T00:20:58.757107950+00:00 stderr F I1213 00:20:58.757091 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:58.757233453+00:00 stderr F I1213 00:20:58.757177 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:58.757311665+00:00 stderr F I1213 00:20:58.757279 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:58.757416138+00:00 stderr F I1213 00:20:58.757305 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:58.758101777+00:00 stderr F I1213 00:20:58.758033 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:58.783160503+00:00 stderr F W1213 00:20:58.783073 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:58.785404264+00:00 stderr F I1213 00:20:58.785276 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.15384ms) 2025-12-13T00:20:59.478661880+00:00 stderr F I1213 00:20:59.478601 1 task_graph.go:481] Running 3 on worker 1 2025-12-13T00:20:59.478661880+00:00 stderr F I1213 00:20:59.478634 1 sync_worker.go:703] Dropping status report from earlier 
in sync loop 2025-12-13T00:20:59.478661880+00:00 stderr F I1213 00:20:59.478643 1 sync_worker.go:999] Running sync for operatorhub "cluster" (21 of 955) 2025-12-13T00:20:59.479748760+00:00 stderr F I1213 00:20:59.479698 1 task_graph.go:481] Running 4 on worker 0 2025-12-13T00:20:59.479748760+00:00 stderr F I1213 00:20:59.479735 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.479769411+00:00 stderr F I1213 00:20:59.479745 1 sync_worker.go:999] Running sync for imagestream "openshift/driver-toolkit" (657 of 955) 2025-12-13T00:20:59.483755008+00:00 stderr F E1213 00:20:59.483722 1 task.go:122] error running apply for imagestream "openshift/driver-toolkit" (657 of 955): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/driver-toolkit\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io driver-toolkit) 2025-12-13T00:20:59.494471817+00:00 stderr F I1213 00:20:59.494413 1 sync_worker.go:1014] Done syncing for operatorhub "cluster" (21 of 955) 2025-12-13T00:20:59.494471817+00:00 stderr F I1213 00:20:59.494459 1 task_graph.go:481] Running 5 on worker 1 2025-12-13T00:20:59.494501548+00:00 stderr F I1213 00:20:59.494474 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.494501548+00:00 stderr F I1213 00:20:59.494483 1 sync_worker.go:999] Running sync for build "cluster" (68 of 955) 2025-12-13T00:20:59.524821117+00:00 stderr F I1213 00:20:59.524764 1 sync_worker.go:1014] Done syncing for build "cluster" (68 of 955) 2025-12-13T00:20:59.524890838+00:00 stderr F I1213 00:20:59.524875 1 task_graph.go:481] Running 6 on worker 1 2025-12-13T00:20:59.524968610+00:00 stderr F I1213 00:20:59.524940 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-12-13T00:20:59.525011172+00:00 stderr F I1213 00:20:59.524993 1 sync_worker.go:999] Running sync for namespace "openshift-cloud-network-config-controller" (466 of 955) 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530307 1 sync_worker.go:1014] Done syncing for namespace "openshift-cloud-network-config-controller" (466 of 955) 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530351 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530359 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-gcp" (467 of 955) 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530377 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-gcp" (467 of 955): disabled capabilities: CloudCredential 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530385 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530392 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-aws" (468 of 955) 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530405 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-aws" (468 of 955): disabled capabilities: CloudCredential 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530412 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530419 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-azure" (469 of 955) 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 
00:20:59.530432 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-azure" (469 of 955): disabled capabilities: CloudCredential 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530439 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530446 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-openstack" (470 of 955) 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530461 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-cloud-network-config-controller-openstack" (470 of 955): disabled capabilities: CloudCredential 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530468 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.531002233+00:00 stderr F I1213 00:20:59.530474 1 sync_worker.go:999] Running sync for service "openshift-network-operator/metrics" (471 of 955) 2025-12-13T00:20:59.538926627+00:00 stderr F I1213 00:20:59.538856 1 sync_worker.go:1014] Done syncing for service "openshift-network-operator/metrics" (471 of 955) 2025-12-13T00:20:59.538926627+00:00 stderr F I1213 00:20:59.538901 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.538926627+00:00 stderr F I1213 00:20:59.538912 1 sync_worker.go:999] Running sync for role "openshift-network-operator/prometheus-k8s" (472 of 955) 2025-12-13T00:20:59.541703652+00:00 stderr F I1213 00:20:59.541657 1 sync_worker.go:1014] Done syncing for role "openshift-network-operator/prometheus-k8s" (472 of 955) 2025-12-13T00:20:59.541703652+00:00 stderr F I1213 00:20:59.541689 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.541703652+00:00 stderr F I1213 00:20:59.541695 1 
sync_worker.go:999] Running sync for rolebinding "openshift-network-operator/prometheus-k8s" (473 of 955) 2025-12-13T00:20:59.545005831+00:00 stderr F I1213 00:20:59.544972 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-network-operator/prometheus-k8s" (473 of 955) 2025-12-13T00:20:59.545005831+00:00 stderr F I1213 00:20:59.544988 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.545005831+00:00 stderr F I1213 00:20:59.544993 1 sync_worker.go:999] Running sync for servicemonitor "openshift-network-operator/network-operator" (474 of 955) 2025-12-13T00:20:59.551148477+00:00 stderr F I1213 00:20:59.551078 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-network-operator/network-operator" (474 of 955) 2025-12-13T00:20:59.551179427+00:00 stderr F I1213 00:20:59.551165 1 task_graph.go:481] Running 7 on worker 1 2025-12-13T00:20:59.551188438+00:00 stderr F I1213 00:20:59.551182 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.551222818+00:00 stderr F I1213 00:20:59.551193 1 sync_worker.go:999] Running sync for namespace "openshift-cloud-credential-operator" (14 of 955) 2025-12-13T00:20:59.551238999+00:00 stderr F I1213 00:20:59.551221 1 sync_worker.go:1002] Skipping namespace "openshift-cloud-credential-operator" (14 of 955): disabled capabilities: CloudCredential 2025-12-13T00:20:59.551246469+00:00 stderr F I1213 00:20:59.551235 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.551254819+00:00 stderr F I1213 00:20:59.551244 1 sync_worker.go:999] Running sync for customresourcedefinition "credentialsrequests.cloudcredential.openshift.io" (15 of 955) 2025-12-13T00:20:59.551279920+00:00 stderr F I1213 00:20:59.551260 1 sync_worker.go:1002] Skipping customresourcedefinition "credentialsrequests.cloudcredential.openshift.io" (15 of 955): disabled capabilities: CloudCredential 2025-12-13T00:20:59.551279920+00:00 stderr F 
I1213 00:20:59.551273 1 task_graph.go:481] Running 8 on worker 1 2025-12-13T00:20:59.556463070+00:00 stderr F I1213 00:20:59.556417 1 sync_worker.go:989] Precreated resource clusteroperator "config-operator" (66 of 955) 2025-12-13T00:20:59.556463070+00:00 stderr F I1213 00:20:59.556448 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.556463070+00:00 stderr F I1213 00:20:59.556453 1 sync_worker.go:999] Running sync for customresourcedefinition "apiservers.config.openshift.io" (36 of 955) 2025-12-13T00:20:59.560502810+00:00 stderr F I1213 00:20:59.560455 1 sync_worker.go:1014] Done syncing for customresourcedefinition "apiservers.config.openshift.io" (36 of 955) 2025-12-13T00:20:59.560502810+00:00 stderr F I1213 00:20:59.560476 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.560502810+00:00 stderr F I1213 00:20:59.560482 1 sync_worker.go:999] Running sync for customresourcedefinition "authentications.config.openshift.io" (37 of 955) 2025-12-13T00:20:59.564165538+00:00 stderr F I1213 00:20:59.564063 1 sync_worker.go:1014] Done syncing for customresourcedefinition "authentications.config.openshift.io" (37 of 955) 2025-12-13T00:20:59.564165538+00:00 stderr F I1213 00:20:59.564084 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.564165538+00:00 stderr F I1213 00:20:59.564092 1 sync_worker.go:999] Running sync for customresourcedefinition "configs.operator.openshift.io" (38 of 955) 2025-12-13T00:20:59.566708707+00:00 stderr F I1213 00:20:59.566674 1 sync_worker.go:1014] Done syncing for customresourcedefinition "configs.operator.openshift.io" (38 of 955) 2025-12-13T00:20:59.566708707+00:00 stderr F I1213 00:20:59.566694 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.566708707+00:00 stderr F I1213 00:20:59.566699 1 sync_worker.go:999] Running sync for customresourcedefinition 
"consoles.config.openshift.io" (39 of 955) 2025-12-13T00:20:59.569127912+00:00 stderr F I1213 00:20:59.569090 1 sync_worker.go:1014] Done syncing for customresourcedefinition "consoles.config.openshift.io" (39 of 955) 2025-12-13T00:20:59.569127912+00:00 stderr F I1213 00:20:59.569108 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.569127912+00:00 stderr F I1213 00:20:59.569113 1 sync_worker.go:999] Running sync for customresourcedefinition "dnses.config.openshift.io" (40 of 955) 2025-12-13T00:20:59.571875036+00:00 stderr F I1213 00:20:59.571824 1 sync_worker.go:1014] Done syncing for customresourcedefinition "dnses.config.openshift.io" (40 of 955) 2025-12-13T00:20:59.571875036+00:00 stderr F I1213 00:20:59.571870 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.571892707+00:00 stderr F I1213 00:20:59.571881 1 sync_worker.go:999] Running sync for customresourcedefinition "featuregates.config.openshift.io" (41 of 955) 2025-12-13T00:20:59.575580646+00:00 stderr F I1213 00:20:59.575526 1 sync_worker.go:1014] Done syncing for customresourcedefinition "featuregates.config.openshift.io" (41 of 955) 2025-12-13T00:20:59.575580646+00:00 stderr F I1213 00:20:59.575558 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.575580646+00:00 stderr F I1213 00:20:59.575569 1 sync_worker.go:999] Running sync for customresourcedefinition "imagecontentpolicies.config.openshift.io" (42 of 955) 2025-12-13T00:20:59.578459804+00:00 stderr F I1213 00:20:59.578417 1 sync_worker.go:1014] Done syncing for customresourcedefinition "imagecontentpolicies.config.openshift.io" (42 of 955) 2025-12-13T00:20:59.578459804+00:00 stderr F I1213 00:20:59.578446 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.578478924+00:00 stderr F I1213 00:20:59.578456 1 sync_worker.go:999] Running sync for customresourcedefinition 
"imagecontentsourcepolicies.operator.openshift.io" (43 of 955) 2025-12-13T00:20:59.581311901+00:00 stderr F I1213 00:20:59.581271 1 sync_worker.go:1014] Done syncing for customresourcedefinition "imagecontentsourcepolicies.operator.openshift.io" (43 of 955) 2025-12-13T00:20:59.581311901+00:00 stderr F I1213 00:20:59.581300 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.581328401+00:00 stderr F I1213 00:20:59.581310 1 sync_worker.go:999] Running sync for customresourcedefinition "imagedigestmirrorsets.config.openshift.io" (44 of 955) 2025-12-13T00:20:59.585227666+00:00 stderr F I1213 00:20:59.584647 1 sync_worker.go:1014] Done syncing for customresourcedefinition "imagedigestmirrorsets.config.openshift.io" (44 of 955) 2025-12-13T00:20:59.585227666+00:00 stderr F I1213 00:20:59.584673 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.585227666+00:00 stderr F I1213 00:20:59.584683 1 sync_worker.go:999] Running sync for customresourcedefinition "images.config.openshift.io" (45 of 955) 2025-12-13T00:20:59.590085737+00:00 stderr F I1213 00:20:59.590033 1 sync_worker.go:1014] Done syncing for customresourcedefinition "images.config.openshift.io" (45 of 955) 2025-12-13T00:20:59.590085737+00:00 stderr F I1213 00:20:59.590068 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.590106738+00:00 stderr F I1213 00:20:59.590079 1 sync_worker.go:999] Running sync for customresourcedefinition "imagetagmirrorsets.config.openshift.io" (46 of 955) 2025-12-13T00:20:59.593337445+00:00 stderr F I1213 00:20:59.593282 1 sync_worker.go:1014] Done syncing for customresourcedefinition "imagetagmirrorsets.config.openshift.io" (46 of 955) 2025-12-13T00:20:59.593337445+00:00 stderr F I1213 00:20:59.593317 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.593337445+00:00 stderr F I1213 00:20:59.593322 1 sync_worker.go:999] Running 
sync for customresourcedefinition "infrastructures.config.openshift.io" (47 of 955) 2025-12-13T00:20:59.603159320+00:00 stderr F I1213 00:20:59.603090 1 sync_worker.go:1014] Done syncing for customresourcedefinition "infrastructures.config.openshift.io" (47 of 955) 2025-12-13T00:20:59.603159320+00:00 stderr F I1213 00:20:59.603124 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.603159320+00:00 stderr F I1213 00:20:59.603129 1 sync_worker.go:999] Running sync for customresourcedefinition "ingresses.config.openshift.io" (48 of 955) 2025-12-13T00:20:59.607478006+00:00 stderr F I1213 00:20:59.607437 1 sync_worker.go:1014] Done syncing for customresourcedefinition "ingresses.config.openshift.io" (48 of 955) 2025-12-13T00:20:59.607478006+00:00 stderr F I1213 00:20:59.607458 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.607478006+00:00 stderr F I1213 00:20:59.607464 1 sync_worker.go:999] Running sync for customresourcedefinition "networks.config.openshift.io" (49 of 955) 2025-12-13T00:20:59.611746802+00:00 stderr F I1213 00:20:59.611695 1 sync_worker.go:1014] Done syncing for customresourcedefinition "networks.config.openshift.io" (49 of 955) 2025-12-13T00:20:59.611746802+00:00 stderr F I1213 00:20:59.611715 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.611746802+00:00 stderr F I1213 00:20:59.611721 1 sync_worker.go:999] Running sync for customresourcedefinition "nodes.config.openshift.io" (50 of 955) 2025-12-13T00:20:59.614164347+00:00 stderr F I1213 00:20:59.614129 1 sync_worker.go:1014] Done syncing for customresourcedefinition "nodes.config.openshift.io" (50 of 955) 2025-12-13T00:20:59.614164347+00:00 stderr F I1213 00:20:59.614148 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.614164347+00:00 stderr F I1213 00:20:59.614153 1 sync_worker.go:999] Running sync for customresourcedefinition 
"oauths.config.openshift.io" (51 of 955) 2025-12-13T00:20:59.619361738+00:00 stderr F I1213 00:20:59.619324 1 sync_worker.go:1014] Done syncing for customresourcedefinition "oauths.config.openshift.io" (51 of 955) 2025-12-13T00:20:59.619408749+00:00 stderr F I1213 00:20:59.619398 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.619439150+00:00 stderr F I1213 00:20:59.619425 1 sync_worker.go:999] Running sync for namespace "openshift-config-managed" (52 of 955) 2025-12-13T00:20:59.624450124+00:00 stderr F I1213 00:20:59.624419 1 sync_worker.go:1014] Done syncing for namespace "openshift-config-managed" (52 of 955) 2025-12-13T00:20:59.624450124+00:00 stderr F I1213 00:20:59.624436 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.624450124+00:00 stderr F I1213 00:20:59.624442 1 sync_worker.go:999] Running sync for namespace "openshift-config" (53 of 955) 2025-12-13T00:20:59.626994443+00:00 stderr F I1213 00:20:59.626960 1 sync_worker.go:1014] Done syncing for namespace "openshift-config" (53 of 955) 2025-12-13T00:20:59.627046315+00:00 stderr F I1213 00:20:59.627034 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.627081405+00:00 stderr F I1213 00:20:59.627067 1 sync_worker.go:999] Running sync for config "cluster" (54 of 955) 2025-12-13T00:20:59.645204445+00:00 stderr F I1213 00:20:59.644470 1 sync_worker.go:1014] Done syncing for config "cluster" (54 of 955) 2025-12-13T00:20:59.645334199+00:00 stderr F I1213 00:20:59.645268 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.645334199+00:00 stderr F I1213 00:20:59.645298 1 sync_worker.go:999] Running sync for customresourcedefinition "projects.config.openshift.io" (55 of 955) 2025-12-13T00:20:59.648664928+00:00 stderr F I1213 00:20:59.648610 1 sync_worker.go:1014] Done syncing for customresourcedefinition "projects.config.openshift.io" (55 of 955) 
2025-12-13T00:20:59.648664928+00:00 stderr F I1213 00:20:59.648633 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.648664928+00:00 stderr F I1213 00:20:59.648639 1 sync_worker.go:999] Running sync for role "openshift-config-operator/prometheus-k8s" (56 of 955) 2025-12-13T00:20:59.650878828+00:00 stderr F I1213 00:20:59.650723 1 sync_worker.go:1014] Done syncing for role "openshift-config-operator/prometheus-k8s" (56 of 955) 2025-12-13T00:20:59.650927679+00:00 stderr F I1213 00:20:59.650916 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.650988771+00:00 stderr F I1213 00:20:59.650972 1 sync_worker.go:999] Running sync for customresourcedefinition "schedulers.config.openshift.io" (57 of 955) 2025-12-13T00:20:59.654269309+00:00 stderr F I1213 00:20:59.654245 1 sync_worker.go:1014] Done syncing for customresourcedefinition "schedulers.config.openshift.io" (57 of 955) 2025-12-13T00:20:59.654311040+00:00 stderr F I1213 00:20:59.654301 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.654339711+00:00 stderr F I1213 00:20:59.654327 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:cluster-config-operator:cluster-reader" (58 of 955) 2025-12-13T00:20:59.656315565+00:00 stderr F I1213 00:20:59.656294 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:cluster-config-operator:cluster-reader" (58 of 955) 2025-12-13T00:20:59.656352776+00:00 stderr F I1213 00:20:59.656343 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.656379607+00:00 stderr F I1213 00:20:59.656368 1 sync_worker.go:999] Running sync for node "cluster" (59 of 955) 2025-12-13T00:20:59.676397847+00:00 stderr F I1213 00:20:59.676282 1 sync_worker.go:1014] Done syncing for node "cluster" (59 of 955) 2025-12-13T00:20:59.676397847+00:00 stderr F I1213 00:20:59.676315 1 sync_worker.go:703] Dropping status report from 
earlier in sync loop 2025-12-13T00:20:59.676397847+00:00 stderr F I1213 00:20:59.676321 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-operator/prometheus-k8s" (60 of 955) 2025-12-13T00:20:59.678332519+00:00 stderr F I1213 00:20:59.678294 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config-operator/prometheus-k8s" (60 of 955) 2025-12-13T00:20:59.678332519+00:00 stderr F I1213 00:20:59.678313 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.678332519+00:00 stderr F I1213 00:20:59.678319 1 sync_worker.go:999] Running sync for service "openshift-config-operator/metrics" (61 of 955) 2025-12-13T00:20:59.681743031+00:00 stderr F I1213 00:20:59.681714 1 sync_worker.go:1014] Done syncing for service "openshift-config-operator/metrics" (61 of 955) 2025-12-13T00:20:59.681786072+00:00 stderr F I1213 00:20:59.681776 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.681814353+00:00 stderr F I1213 00:20:59.681802 1 sync_worker.go:999] Running sync for servicemonitor "openshift-config-operator/config-operator" (62 of 955) 2025-12-13T00:20:59.684642029+00:00 stderr F I1213 00:20:59.684587 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-config-operator/config-operator" (62 of 955) 2025-12-13T00:20:59.684642029+00:00 stderr F I1213 00:20:59.684627 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.684642029+00:00 stderr F I1213 00:20:59.684632 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:openshift-config-operator" (63 of 955) 2025-12-13T00:20:59.686750956+00:00 stderr F I1213 00:20:59.686729 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:openshift-config-operator" (63 of 955) 2025-12-13T00:20:59.686787397+00:00 stderr F I1213 00:20:59.686778 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-12-13T00:20:59.686818558+00:00 stderr F I1213 00:20:59.686803 1 sync_worker.go:999] Running sync for serviceaccount "openshift-config-operator/openshift-config-operator" (64 of 955) 2025-12-13T00:20:59.688959955+00:00 stderr F I1213 00:20:59.688912 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-config-operator/openshift-config-operator" (64 of 955) 2025-12-13T00:20:59.688959955+00:00 stderr F I1213 00:20:59.688951 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.688986096+00:00 stderr F I1213 00:20:59.688957 1 sync_worker.go:999] Running sync for deployment "openshift-config-operator/openshift-config-operator" (65 of 955) 2025-12-13T00:20:59.691662378+00:00 stderr F I1213 00:20:59.691638 1 sync_worker.go:1014] Done syncing for deployment "openshift-config-operator/openshift-config-operator" (65 of 955) 2025-12-13T00:20:59.691699839+00:00 stderr F I1213 00:20:59.691690 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.691727900+00:00 stderr F I1213 00:20:59.691715 1 sync_worker.go:999] Running sync for clusteroperator "config-operator" (66 of 955) 2025-12-13T00:20:59.691889344+00:00 stderr F I1213 00:20:59.691874 1 sync_worker.go:1014] Done syncing for clusteroperator "config-operator" (66 of 955) 2025-12-13T00:20:59.691930535+00:00 stderr F I1213 00:20:59.691918 1 task_graph.go:481] Running 9 on worker 1 2025-12-13T00:20:59.691981688+00:00 stderr F I1213 00:20:59.691969 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.692027569+00:00 stderr F I1213 00:20:59.692013 1 sync_worker.go:999] Running sync for namespace "openshift-cluster-storage-operator" (302 of 955) 2025-12-13T00:20:59.693984471+00:00 stderr F I1213 00:20:59.693963 1 sync_worker.go:1014] Done syncing for namespace "openshift-cluster-storage-operator" (302 of 955) 2025-12-13T00:20:59.694027092+00:00 stderr F I1213 00:20:59.694017 1 task_graph.go:481] Running 
10 on worker 1 2025-12-13T00:20:59.694052313+00:00 stderr F I1213 00:20:59.694043 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.694080683+00:00 stderr F I1213 00:20:59.694067 1 sync_worker.go:999] Running sync for customresourcedefinition "controlplanemachinesets.machine.openshift.io" (67 of 955) 2025-12-13T00:20:59.700294661+00:00 stderr F I1213 00:20:59.700237 1 sync_worker.go:1014] Done syncing for customresourcedefinition "controlplanemachinesets.machine.openshift.io" (67 of 955) 2025-12-13T00:20:59.700294661+00:00 stderr F I1213 00:20:59.700282 1 task_graph.go:481] Running 11 on worker 1 2025-12-13T00:20:59.700314271+00:00 stderr F I1213 00:20:59.700299 1 sync_worker.go:982] Skipping precreation of clusteroperator "baremetal" (300 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700321562+00:00 stderr F I1213 00:20:59.700314 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700330392+00:00 stderr F I1213 00:20:59.700322 1 sync_worker.go:999] Running sync for customresourcedefinition "baremetalhosts.metal3.io" (278 of 955) 2025-12-13T00:20:59.700357774+00:00 stderr F I1213 00:20:59.700336 1 sync_worker.go:1002] Skipping customresourcedefinition "baremetalhosts.metal3.io" (278 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700357774+00:00 stderr F I1213 00:20:59.700349 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700366314+00:00 stderr F I1213 00:20:59.700356 1 sync_worker.go:999] Running sync for customresourcedefinition "bmceventsubscriptions.metal3.io" (279 of 955) 2025-12-13T00:20:59.700395245+00:00 stderr F I1213 00:20:59.700371 1 sync_worker.go:1002] Skipping customresourcedefinition "bmceventsubscriptions.metal3.io" (279 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700395245+00:00 stderr F I1213 00:20:59.700385 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-12-13T00:20:59.700403485+00:00 stderr F I1213 00:20:59.700392 1 sync_worker.go:999] Running sync for customresourcedefinition "dataimages.metal3.io" (280 of 955) 2025-12-13T00:20:59.700412235+00:00 stderr F I1213 00:20:59.700404 1 sync_worker.go:1002] Skipping customresourcedefinition "dataimages.metal3.io" (280 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700425335+00:00 stderr F I1213 00:20:59.700412 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700425335+00:00 stderr F I1213 00:20:59.700419 1 sync_worker.go:999] Running sync for customresourcedefinition "firmwareschemas.metal3.io" (281 of 955) 2025-12-13T00:20:59.700451926+00:00 stderr F I1213 00:20:59.700431 1 sync_worker.go:1002] Skipping customresourcedefinition "firmwareschemas.metal3.io" (281 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700451926+00:00 stderr F I1213 00:20:59.700443 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700460126+00:00 stderr F I1213 00:20:59.700450 1 sync_worker.go:999] Running sync for customresourcedefinition "hardwaredata.metal3.io" (282 of 955) 2025-12-13T00:20:59.700482797+00:00 stderr F I1213 00:20:59.700462 1 sync_worker.go:1002] Skipping customresourcedefinition "hardwaredata.metal3.io" (282 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700482797+00:00 stderr F I1213 00:20:59.700474 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700626671+00:00 stderr F I1213 00:20:59.700594 1 sync_worker.go:999] Running sync for customresourcedefinition "hostfirmwarecomponents.metal3.io" (283 of 955) 2025-12-13T00:20:59.700626671+00:00 stderr F I1213 00:20:59.700618 1 sync_worker.go:1002] Skipping customresourcedefinition "hostfirmwarecomponents.metal3.io" (283 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700635491+00:00 stderr F I1213 00:20:59.700627 1 sync_worker.go:703] Dropping 
status report from earlier in sync loop 2025-12-13T00:20:59.700642661+00:00 stderr F I1213 00:20:59.700634 1 sync_worker.go:999] Running sync for customresourcedefinition "hostfirmwaresettings.metal3.io" (284 of 955) 2025-12-13T00:20:59.700664192+00:00 stderr F I1213 00:20:59.700647 1 sync_worker.go:1002] Skipping customresourcedefinition "hostfirmwaresettings.metal3.io" (284 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700664192+00:00 stderr F I1213 00:20:59.700659 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700687872+00:00 stderr F I1213 00:20:59.700666 1 sync_worker.go:999] Running sync for customresourcedefinition "preprovisioningimages.metal3.io" (285 of 955) 2025-12-13T00:20:59.700695333+00:00 stderr F I1213 00:20:59.700683 1 sync_worker.go:1002] Skipping customresourcedefinition "preprovisioningimages.metal3.io" (285 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700695333+00:00 stderr F I1213 00:20:59.700691 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700722463+00:00 stderr F I1213 00:20:59.700698 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/cluster-baremetal-operator-images" (286 of 955) 2025-12-13T00:20:59.700722463+00:00 stderr F I1213 00:20:59.700714 1 sync_worker.go:1002] Skipping configmap "openshift-machine-api/cluster-baremetal-operator-images" (286 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700732104+00:00 stderr F I1213 00:20:59.700723 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700740134+00:00 stderr F I1213 00:20:59.700731 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/cbo-trusted-ca" (287 of 955) 2025-12-13T00:20:59.700762374+00:00 stderr F I1213 00:20:59.700743 1 sync_worker.go:1002] Skipping configmap "openshift-machine-api/cbo-trusted-ca" (287 of 955): disabled capabilities: baremetal 
2025-12-13T00:20:59.700762374+00:00 stderr F I1213 00:20:59.700754 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700770015+00:00 stderr F I1213 00:20:59.700761 1 sync_worker.go:999] Running sync for customresourcedefinition "provisionings.metal3.io" (288 of 955) 2025-12-13T00:20:59.700782065+00:00 stderr F I1213 00:20:59.700774 1 sync_worker.go:1002] Skipping customresourcedefinition "provisionings.metal3.io" (288 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700789165+00:00 stderr F I1213 00:20:59.700781 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700796195+00:00 stderr F I1213 00:20:59.700788 1 sync_worker.go:999] Running sync for service "openshift-machine-api/cluster-baremetal-operator-service" (289 of 955) 2025-12-13T00:20:59.700817136+00:00 stderr F I1213 00:20:59.700800 1 sync_worker.go:1002] Skipping service "openshift-machine-api/cluster-baremetal-operator-service" (289 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700817136+00:00 stderr F I1213 00:20:59.700811 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700827196+00:00 stderr F I1213 00:20:59.700818 1 sync_worker.go:999] Running sync for service "openshift-machine-api/cluster-baremetal-webhook-service" (290 of 955) 2025-12-13T00:20:59.700841547+00:00 stderr F I1213 00:20:59.700830 1 sync_worker.go:1002] Skipping service "openshift-machine-api/cluster-baremetal-webhook-service" (290 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700841547+00:00 stderr F I1213 00:20:59.700837 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700854427+00:00 stderr F I1213 00:20:59.700844 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/cluster-baremetal-operator" (291 of 955) 2025-12-13T00:20:59.700865247+00:00 stderr F I1213 00:20:59.700856 1 sync_worker.go:1002] Skipping 
serviceaccount "openshift-machine-api/cluster-baremetal-operator" (291 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700873637+00:00 stderr F I1213 00:20:59.700863 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700882318+00:00 stderr F I1213 00:20:59.700870 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/baremetal-kube-rbac-proxy" (292 of 955) 2025-12-13T00:20:59.700890598+00:00 stderr F I1213 00:20:59.700882 1 sync_worker.go:1002] Skipping configmap "openshift-machine-api/baremetal-kube-rbac-proxy" (292 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700897668+00:00 stderr F I1213 00:20:59.700889 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700904678+00:00 stderr F I1213 00:20:59.700896 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/prometheus-k8s-cluster-baremetal-operator" (293 of 955) 2025-12-13T00:20:59.700936359+00:00 stderr F I1213 00:20:59.700908 1 sync_worker.go:1002] Skipping rolebinding "openshift-machine-api/prometheus-k8s-cluster-baremetal-operator" (293 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700936359+00:00 stderr F I1213 00:20:59.700919 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700973830+00:00 stderr F I1213 00:20:59.700926 1 sync_worker.go:999] Running sync for role "openshift-machine-api/prometheus-k8s-cluster-baremetal-operator" (294 of 955) 2025-12-13T00:20:59.700973830+00:00 stderr F I1213 00:20:59.700960 1 sync_worker.go:1002] Skipping role "openshift-machine-api/prometheus-k8s-cluster-baremetal-operator" (294 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.700973830+00:00 stderr F I1213 00:20:59.700967 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.700984140+00:00 stderr F I1213 00:20:59.700974 1 sync_worker.go:999] Running sync for role 
"openshift-machine-api/cluster-baremetal-operator" (295 of 955) 2025-12-13T00:20:59.701007951+00:00 stderr F I1213 00:20:59.700988 1 sync_worker.go:1002] Skipping role "openshift-machine-api/cluster-baremetal-operator" (295 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.701007951+00:00 stderr F I1213 00:20:59.700999 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.701020181+00:00 stderr F I1213 00:20:59.701006 1 sync_worker.go:999] Running sync for clusterrole "cluster-baremetal-operator" (296 of 955) 2025-12-13T00:20:59.701029002+00:00 stderr F I1213 00:20:59.701018 1 sync_worker.go:1002] Skipping clusterrole "cluster-baremetal-operator" (296 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.701029002+00:00 stderr F I1213 00:20:59.701026 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.701057082+00:00 stderr F I1213 00:20:59.701032 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/cluster-baremetal-operator" (297 of 955) 2025-12-13T00:20:59.701057082+00:00 stderr F I1213 00:20:59.701049 1 sync_worker.go:1002] Skipping rolebinding "openshift-machine-api/cluster-baremetal-operator" (297 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.701066853+00:00 stderr F I1213 00:20:59.701056 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.701075603+00:00 stderr F I1213 00:20:59.701062 1 sync_worker.go:999] Running sync for clusterrolebinding "openshift-machine-api/cluster-baremetal-operator" (298 of 955) 2025-12-13T00:20:59.701083603+00:00 stderr F I1213 00:20:59.701074 1 sync_worker.go:1002] Skipping clusterrolebinding "openshift-machine-api/cluster-baremetal-operator" (298 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.701090643+00:00 stderr F I1213 00:20:59.701081 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.701097623+00:00 
stderr F I1213 00:20:59.701089 1 sync_worker.go:999] Running sync for deployment "openshift-machine-api/cluster-baremetal-operator" (299 of 955) 2025-12-13T00:20:59.701120274+00:00 stderr F I1213 00:20:59.701101 1 sync_worker.go:1002] Skipping deployment "openshift-machine-api/cluster-baremetal-operator" (299 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.701120274+00:00 stderr F I1213 00:20:59.701112 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.701127734+00:00 stderr F I1213 00:20:59.701119 1 sync_worker.go:999] Running sync for clusteroperator "baremetal" (300 of 955) 2025-12-13T00:20:59.701159635+00:00 stderr F I1213 00:20:59.701130 1 sync_worker.go:1002] Skipping clusteroperator "baremetal" (300 of 955): disabled capabilities: baremetal 2025-12-13T00:20:59.701159635+00:00 stderr F I1213 00:20:59.701149 1 task_graph.go:481] Running 12 on worker 1 2025-12-13T00:20:59.701159635+00:00 stderr F I1213 00:20:59.701156 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.701172455+00:00 stderr F I1213 00:20:59.701163 1 sync_worker.go:999] Running sync for role "openshift-console-operator/prometheus-k8s" (855 of 955) 2025-12-13T00:20:59.703446157+00:00 stderr F I1213 00:20:59.703420 1 sync_worker.go:1014] Done syncing for role "openshift-console-operator/prometheus-k8s" (855 of 955) 2025-12-13T00:20:59.703446157+00:00 stderr F I1213 00:20:59.703441 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.703460937+00:00 stderr F I1213 00:20:59.703449 1 sync_worker.go:999] Running sync for rolebinding "openshift-console-operator/prometheus-k8s" (856 of 955) 2025-12-13T00:20:59.705706997+00:00 stderr F I1213 00:20:59.705663 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-console-operator/prometheus-k8s" (856 of 955) 2025-12-13T00:20:59.705706997+00:00 stderr F I1213 00:20:59.705696 1 sync_worker.go:703] Dropping status report 
from earlier in sync loop 2025-12-13T00:20:59.705706997+00:00 stderr F I1213 00:20:59.705701 1 sync_worker.go:999] Running sync for servicemonitor "openshift-console-operator/console-operator" (857 of 955) 2025-12-13T00:20:59.708175344+00:00 stderr F I1213 00:20:59.708136 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-console-operator/console-operator" (857 of 955) 2025-12-13T00:20:59.708175344+00:00 stderr F I1213 00:20:59.708159 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.708175344+00:00 stderr F I1213 00:20:59.708164 1 sync_worker.go:999] Running sync for storageversionmigration "console-plugin-storage-version-migration" (858 of 955) 2025-12-13T00:20:59.741822272+00:00 stderr F I1213 00:20:59.741743 1 sync_worker.go:1014] Done syncing for storageversionmigration "console-plugin-storage-version-migration" (858 of 955) 2025-12-13T00:20:59.741822272+00:00 stderr F I1213 00:20:59.741775 1 task_graph.go:481] Running 13 on worker 1 2025-12-13T00:20:59.741822272+00:00 stderr F I1213 00:20:59.741792 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.741822272+00:00 stderr F I1213 00:20:59.741799 1 sync_worker.go:999] Running sync for customresourcedefinition "etcds.operator.openshift.io" (70 of 955) 2025-12-13T00:20:59.757731841+00:00 stderr F I1213 00:20:59.757673 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:20:59.758087281+00:00 stderr F I1213 00:20:59.758052 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:20:59.758087281+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:20:59.758087281+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:20:59.758166573+00:00 stderr F I1213 00:20:59.758135 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (471.943µs) 2025-12-13T00:20:59.758174783+00:00 stderr F I1213 00:20:59.758163 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:20:59.758241845+00:00 stderr F I1213 00:20:59.758216 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:20:59.758287866+00:00 stderr F I1213 00:20:59.758271 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:20:59.758365068+00:00 stderr F I1213 00:20:59.758284 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:20:59.758734248+00:00 stderr F I1213 00:20:59.758686 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:20:59.783795384+00:00 stderr F I1213 00:20:59.783710 1 sync_worker.go:1014] Done syncing for customresourcedefinition "etcds.operator.openshift.io" (70 of 955) 2025-12-13T00:20:59.783795384+00:00 stderr F I1213 00:20:59.783751 1 task_graph.go:481] Running 14 on worker 1 2025-12-13T00:20:59.783795384+00:00 stderr F I1213 00:20:59.783765 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.783795384+00:00 stderr F I1213 00:20:59.783773 1 sync_worker.go:999] Running 
sync for role "openshift-controller-manager-operator/prometheus-k8s" (944 of 955) 2025-12-13T00:20:59.788580394+00:00 stderr F W1213 00:20:59.788523 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:20:59.790494516+00:00 stderr F I1213 00:20:59.790016 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.852ms) 2025-12-13T00:20:59.832068938+00:00 stderr F I1213 00:20:59.832012 1 sync_worker.go:1014] Done syncing for role "openshift-controller-manager-operator/prometheus-k8s" (944 of 955) 2025-12-13T00:20:59.832068938+00:00 stderr F I1213 00:20:59.832045 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.832068938+00:00 stderr F I1213 00:20:59.832051 1 sync_worker.go:999] Running sync for rolebinding "openshift-controller-manager-operator/prometheus-k8s" (945 of 955) 2025-12-13T00:20:59.882673453+00:00 stderr F I1213 00:20:59.882391 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-controller-manager-operator/prometheus-k8s" (945 of 955) 2025-12-13T00:20:59.882673453+00:00 stderr F I1213 00:20:59.882618 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.882673453+00:00 stderr F I1213 00:20:59.882624 1 sync_worker.go:999] Running sync for servicemonitor "openshift-controller-manager-operator/openshift-controller-manager-operator" (946 of 955) 2025-12-13T00:20:59.932027504+00:00 stderr F I1213 00:20:59.931922 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-controller-manager-operator/openshift-controller-manager-operator" (946 of 955) 2025-12-13T00:20:59.932027504+00:00 stderr F I1213 00:20:59.931988 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.932027504+00:00 stderr F I1213 00:20:59.931996 1 sync_worker.go:999] Running sync for role "openshift-controller-manager/prometheus-k8s" (947 of 955) 2025-12-13T00:20:59.981722696+00:00 stderr F I1213 00:20:59.981647 1 
sync_worker.go:1014] Done syncing for role "openshift-controller-manager/prometheus-k8s" (947 of 955) 2025-12-13T00:20:59.981722696+00:00 stderr F I1213 00:20:59.981684 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:20:59.981722696+00:00 stderr F I1213 00:20:59.981690 1 sync_worker.go:999] Running sync for rolebinding "openshift-controller-manager/prometheus-k8s" (948 of 955) 2025-12-13T00:21:00.032293820+00:00 stderr F I1213 00:21:00.032226 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-controller-manager/prometheus-k8s" (948 of 955) 2025-12-13T00:21:00.032293820+00:00 stderr F I1213 00:21:00.032260 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.032293820+00:00 stderr F I1213 00:21:00.032266 1 sync_worker.go:999] Running sync for servicemonitor "openshift-controller-manager/openshift-controller-manager" (949 of 955) 2025-12-13T00:21:00.083432480+00:00 stderr F I1213 00:21:00.083271 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-controller-manager/openshift-controller-manager" (949 of 955) 2025-12-13T00:21:00.083432480+00:00 stderr F I1213 00:21:00.083320 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.083432480+00:00 stderr F I1213 00:21:00.083328 1 sync_worker.go:999] Running sync for role "openshift-route-controller-manager/prometheus-k8s" (950 of 955) 2025-12-13T00:21:00.133005038+00:00 stderr F I1213 00:21:00.132871 1 sync_worker.go:1014] Done syncing for role "openshift-route-controller-manager/prometheus-k8s" (950 of 955) 2025-12-13T00:21:00.133005038+00:00 stderr F I1213 00:21:00.132912 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.133005038+00:00 stderr F I1213 00:21:00.132921 1 sync_worker.go:999] Running sync for rolebinding "openshift-route-controller-manager/prometheus-k8s" (951 of 955) 2025-12-13T00:21:00.182454413+00:00 stderr F I1213 
00:21:00.182387 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-route-controller-manager/prometheus-k8s" (951 of 955) 2025-12-13T00:21:00.182454413+00:00 stderr F I1213 00:21:00.182428 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.182454413+00:00 stderr F I1213 00:21:00.182436 1 sync_worker.go:999] Running sync for servicemonitor "openshift-route-controller-manager/openshift-route-controller-manager" (952 of 955) 2025-12-13T00:21:00.232682558+00:00 stderr F I1213 00:21:00.232619 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-route-controller-manager/openshift-route-controller-manager" (952 of 955) 2025-12-13T00:21:00.232682558+00:00 stderr F I1213 00:21:00.232656 1 task_graph.go:481] Running 15 on worker 1 2025-12-13T00:21:00.232682558+00:00 stderr F I1213 00:21:00.232669 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.232715519+00:00 stderr F I1213 00:21:00.232676 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/release-verification" (854 of 955) 2025-12-13T00:21:00.281429483+00:00 stderr F I1213 00:21:00.281356 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/release-verification" (854 of 955) 2025-12-13T00:21:00.281429483+00:00 stderr F I1213 00:21:00.281417 1 task_graph.go:481] Running 16 on worker 1 2025-12-13T00:21:00.333158328+00:00 stderr F I1213 00:21:00.333095 1 sync_worker.go:989] Precreated resource clusteroperator "machine-approver" (441 of 955) 2025-12-13T00:21:00.333158328+00:00 stderr F I1213 00:21:00.333131 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.333158328+00:00 stderr F I1213 00:21:00.333139 1 sync_worker.go:999] Running sync for namespace "openshift-cluster-machine-approver" (430 of 955) 2025-12-13T00:21:00.383276311+00:00 stderr F I1213 00:21:00.383193 1 sync_worker.go:1014] Done syncing for namespace 
"openshift-cluster-machine-approver" (430 of 955) 2025-12-13T00:21:00.383276311+00:00 stderr F I1213 00:21:00.383243 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.383276311+00:00 stderr F I1213 00:21:00.383251 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cluster-machine-approver/machine-approver-sa" (431 of 955) 2025-12-13T00:21:00.432044958+00:00 stderr F I1213 00:21:00.431916 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-cluster-machine-approver/machine-approver-sa" (431 of 955) 2025-12-13T00:21:00.432044958+00:00 stderr F I1213 00:21:00.432006 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.432044958+00:00 stderr F I1213 00:21:00.432018 1 sync_worker.go:999] Running sync for role "openshift-cluster-machine-approver/machine-approver" (432 of 955) 2025-12-13T00:21:00.482649843+00:00 stderr F I1213 00:21:00.482584 1 sync_worker.go:1014] Done syncing for role "openshift-cluster-machine-approver/machine-approver" (432 of 955) 2025-12-13T00:21:00.482649843+00:00 stderr F I1213 00:21:00.482619 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.482649843+00:00 stderr F I1213 00:21:00.482629 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-machine-approver/machine-approver" (433 of 955) 2025-12-13T00:21:00.531960303+00:00 stderr F I1213 00:21:00.531884 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-cluster-machine-approver/machine-approver" (433 of 955) 2025-12-13T00:21:00.531960303+00:00 stderr F I1213 00:21:00.531914 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.531960303+00:00 stderr F I1213 00:21:00.531920 1 sync_worker.go:999] Running sync for role "openshift-config-managed/machine-approver" (434 of 955) 2025-12-13T00:21:00.582478796+00:00 stderr F I1213 00:21:00.582387 1 sync_worker.go:1014] Done syncing for role 
"openshift-config-managed/machine-approver" (434 of 955) 2025-12-13T00:21:00.582478796+00:00 stderr F I1213 00:21:00.582427 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.582478796+00:00 stderr F I1213 00:21:00.582435 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-managed/machine-approver" (435 of 955) 2025-12-13T00:21:00.632673421+00:00 stderr F I1213 00:21:00.632529 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config-managed/machine-approver" (435 of 955) 2025-12-13T00:21:00.632673421+00:00 stderr F I1213 00:21:00.632593 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.632673421+00:00 stderr F I1213 00:21:00.632600 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:controller:machine-approver" (436 of 955) 2025-12-13T00:21:00.683560494+00:00 stderr F I1213 00:21:00.683439 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:controller:machine-approver" (436 of 955) 2025-12-13T00:21:00.683560494+00:00 stderr F I1213 00:21:00.683508 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.683560494+00:00 stderr F I1213 00:21:00.683525 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:controller:machine-approver" (437 of 955) 2025-12-13T00:21:00.732627359+00:00 stderr F I1213 00:21:00.732504 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:controller:machine-approver" (437 of 955) 2025-12-13T00:21:00.732627359+00:00 stderr F I1213 00:21:00.732564 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.732627359+00:00 stderr F I1213 00:21:00.732581 1 sync_worker.go:999] Running sync for configmap "openshift-cluster-machine-approver/kube-rbac-proxy" (438 of 955) 2025-12-13T00:21:00.758695842+00:00 stderr F I1213 00:21:00.758604 1 cvo.go:726] Started syncing available updates 
"openshift-cluster-version/version" 2025-12-13T00:21:00.758914358+00:00 stderr F I1213 00:21:00.758863 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:00.758914358+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:00.758914358+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:00.759044991+00:00 stderr F I1213 00:21:00.758913 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (321.408µs) 2025-12-13T00:21:00.759044991+00:00 stderr F I1213 00:21:00.758927 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:00.759044991+00:00 stderr F I1213 00:21:00.759025 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:00.759084402+00:00 stderr F I1213 00:21:00.759070 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:00.759151504+00:00 stderr F I1213 00:21:00.759077 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", 
URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:00.759420152+00:00 stderr F I1213 00:21:00.759346 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", 
Verified:false, AcceptedRisks:""} 2025-12-13T00:21:00.781489867+00:00 stderr F W1213 00:21:00.781399 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:00.782309449+00:00 stderr F I1213 00:21:00.782107 1 sync_worker.go:1014] Done syncing for configmap "openshift-cluster-machine-approver/kube-rbac-proxy" (438 of 955) 2025-12-13T00:21:00.782309449+00:00 stderr F I1213 00:21:00.782147 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.782309449+00:00 stderr F I1213 00:21:00.782161 1 sync_worker.go:999] Running sync for service "openshift-cluster-machine-approver/machine-approver" (439 of 955) 2025-12-13T00:21:00.786019179+00:00 stderr F I1213 00:21:00.785795 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.857805ms) 2025-12-13T00:21:00.832002750+00:00 stderr F I1213 00:21:00.831916 1 sync_worker.go:1014] Done syncing for service "openshift-cluster-machine-approver/machine-approver" (439 of 955) 2025-12-13T00:21:00.832002750+00:00 stderr F I1213 00:21:00.831963 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.832002750+00:00 stderr F I1213 00:21:00.831970 1 sync_worker.go:999] Running sync for deployment "openshift-cluster-machine-approver/machine-approver" (440 of 955) 2025-12-13T00:21:00.882999127+00:00 stderr F I1213 00:21:00.882812 1 sync_worker.go:1014] Done syncing for deployment "openshift-cluster-machine-approver/machine-approver" (440 of 955) 2025-12-13T00:21:00.882999127+00:00 stderr F I1213 00:21:00.882845 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.882999127+00:00 stderr F I1213 00:21:00.882851 1 sync_worker.go:999] Running sync for clusteroperator "machine-approver" (441 of 955) 2025-12-13T00:21:00.884010904+00:00 stderr F I1213 00:21:00.883054 1 sync_worker.go:1014] Done syncing for clusteroperator "machine-approver" (441 of 955) 2025-12-13T00:21:00.884010904+00:00 
stderr F I1213 00:21:00.883080 1 task_graph.go:481] Running 17 on worker 1 2025-12-13T00:21:00.936019287+00:00 stderr F I1213 00:21:00.935094 1 sync_worker.go:989] Precreated resource clusteroperator "dns" (772 of 955) 2025-12-13T00:21:00.936019287+00:00 stderr F I1213 00:21:00.935128 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.936019287+00:00 stderr F I1213 00:21:00.935134 1 sync_worker.go:999] Running sync for clusterrole "openshift-dns-operator" (763 of 955) 2025-12-13T00:21:00.981924376+00:00 stderr F I1213 00:21:00.981877 1 sync_worker.go:1014] Done syncing for clusterrole "openshift-dns-operator" (763 of 955) 2025-12-13T00:21:00.982026728+00:00 stderr F I1213 00:21:00.982015 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:00.982057009+00:00 stderr F I1213 00:21:00.982044 1 sync_worker.go:999] Running sync for namespace "openshift-dns-operator" (764 of 955) 2025-12-13T00:21:01.032541922+00:00 stderr F I1213 00:21:01.032491 1 sync_worker.go:1014] Done syncing for namespace "openshift-dns-operator" (764 of 955) 2025-12-13T00:21:01.032628834+00:00 stderr F I1213 00:21:01.032618 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.032658825+00:00 stderr F I1213 00:21:01.032645 1 sync_worker.go:999] Running sync for customresourcedefinition "dnses.operator.openshift.io" (765 of 955) 2025-12-13T00:21:01.083867506+00:00 stderr F I1213 00:21:01.083818 1 sync_worker.go:1014] Done syncing for customresourcedefinition "dnses.operator.openshift.io" (765 of 955) 2025-12-13T00:21:01.083956159+00:00 stderr F I1213 00:21:01.083923 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.083990320+00:00 stderr F I1213 00:21:01.083976 1 sync_worker.go:999] Running sync for clusterrolebinding "openshift-dns-operator" (766 of 955) 2025-12-13T00:21:01.133091235+00:00 stderr F I1213 00:21:01.132912 1 sync_worker.go:1014] Done 
syncing for clusterrolebinding "openshift-dns-operator" (766 of 955) 2025-12-13T00:21:01.133091235+00:00 stderr F I1213 00:21:01.133024 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.133091235+00:00 stderr F I1213 00:21:01.133041 1 sync_worker.go:999] Running sync for rolebinding "openshift-dns-operator/dns-operator" (767 of 955) 2025-12-13T00:21:01.182254381+00:00 stderr F I1213 00:21:01.182178 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-dns-operator/dns-operator" (767 of 955) 2025-12-13T00:21:01.182254381+00:00 stderr F I1213 00:21:01.182215 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.182254381+00:00 stderr F I1213 00:21:01.182226 1 sync_worker.go:999] Running sync for role "openshift-dns-operator/dns-operator" (768 of 955) 2025-12-13T00:21:01.231451279+00:00 stderr F I1213 00:21:01.231405 1 sync_worker.go:1014] Done syncing for role "openshift-dns-operator/dns-operator" (768 of 955) 2025-12-13T00:21:01.231516751+00:00 stderr F I1213 00:21:01.231506 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.231545842+00:00 stderr F I1213 00:21:01.231533 1 sync_worker.go:999] Running sync for serviceaccount "openshift-dns-operator/dns-operator" (769 of 955) 2025-12-13T00:21:01.281737946+00:00 stderr F I1213 00:21:01.281687 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-dns-operator/dns-operator" (769 of 955) 2025-12-13T00:21:01.281798688+00:00 stderr F I1213 00:21:01.281787 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.281828538+00:00 stderr F I1213 00:21:01.281815 1 sync_worker.go:999] Running sync for service "openshift-dns-operator/metrics" (770 of 955) 2025-12-13T00:21:01.333330308+00:00 stderr F I1213 00:21:01.332492 1 sync_worker.go:1014] Done syncing for service "openshift-dns-operator/metrics" (770 of 955) 2025-12-13T00:21:01.333489372+00:00 stderr F 
I1213 00:21:01.333458 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.333577055+00:00 stderr F I1213 00:21:01.333538 1 sync_worker.go:999] Running sync for deployment "openshift-dns-operator/dns-operator" (771 of 955) 2025-12-13T00:21:01.381345204+00:00 stderr F I1213 00:21:01.381253 1 sync_worker.go:1014] Done syncing for deployment "openshift-dns-operator/dns-operator" (771 of 955) 2025-12-13T00:21:01.381345204+00:00 stderr F I1213 00:21:01.381309 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.381345204+00:00 stderr F I1213 00:21:01.381316 1 sync_worker.go:999] Running sync for clusteroperator "dns" (772 of 955) 2025-12-13T00:21:01.381506009+00:00 stderr F I1213 00:21:01.381475 1 sync_worker.go:1014] Done syncing for clusteroperator "dns" (772 of 955) 2025-12-13T00:21:01.381506009+00:00 stderr F I1213 00:21:01.381494 1 task_graph.go:481] Running 18 on worker 1 2025-12-13T00:21:01.381506009+00:00 stderr F I1213 00:21:01.381500 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.381517839+00:00 stderr F I1213 00:21:01.381505 1 sync_worker.go:999] Running sync for customresourcedefinition "openshiftapiservers.operator.openshift.io" (277 of 955) 2025-12-13T00:21:01.432932956+00:00 stderr F I1213 00:21:01.432871 1 sync_worker.go:1014] Done syncing for customresourcedefinition "openshiftapiservers.operator.openshift.io" (277 of 955) 2025-12-13T00:21:01.433068920+00:00 stderr F I1213 00:21:01.433055 1 task_graph.go:481] Running 19 on worker 1 2025-12-13T00:21:01.433106101+00:00 stderr F I1213 00:21:01.433095 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.433140122+00:00 stderr F I1213 00:21:01.433124 1 sync_worker.go:999] Running sync for role "openshift-dns-operator/prometheus-k8s" (862 of 955) 2025-12-13T00:21:01.482443352+00:00 stderr F I1213 00:21:01.482391 1 sync_worker.go:1014] Done syncing for 
role "openshift-dns-operator/prometheus-k8s" (862 of 955) 2025-12-13T00:21:01.482538005+00:00 stderr F I1213 00:21:01.482524 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.482571796+00:00 stderr F I1213 00:21:01.482557 1 sync_worker.go:999] Running sync for rolebinding "openshift-dns-operator/prometheus-k8s" (863 of 955) 2025-12-13T00:21:01.532064581+00:00 stderr F I1213 00:21:01.532002 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-dns-operator/prometheus-k8s" (863 of 955) 2025-12-13T00:21:01.532150743+00:00 stderr F I1213 00:21:01.532137 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.532189354+00:00 stderr F I1213 00:21:01.532173 1 sync_worker.go:999] Running sync for servicemonitor "openshift-dns-operator/dns-operator" (864 of 955) 2025-12-13T00:21:01.583132080+00:00 stderr F I1213 00:21:01.583069 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-dns-operator/dns-operator" (864 of 955) 2025-12-13T00:21:01.583132080+00:00 stderr F I1213 00:21:01.583104 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.583132080+00:00 stderr F I1213 00:21:01.583111 1 sync_worker.go:999] Running sync for prometheusrule "openshift-dns-operator/dns" (865 of 955) 2025-12-13T00:21:01.649599712+00:00 stderr F I1213 00:21:01.649510 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-dns-operator/dns" (865 of 955) 2025-12-13T00:21:01.649599712+00:00 stderr F I1213 00:21:01.649575 1 task_graph.go:481] Running 20 on worker 1 2025-12-13T00:21:01.649649484+00:00 stderr F I1213 00:21:01.649605 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.649649484+00:00 stderr F I1213 00:21:01.649614 1 sync_worker.go:999] Running sync for role "openshift-console/prometheus-k8s" (859 of 955) 2025-12-13T00:21:01.681886354+00:00 stderr F I1213 00:21:01.681820 1 sync_worker.go:1014] Done syncing 
for role "openshift-console/prometheus-k8s" (859 of 955) 2025-12-13T00:21:01.681886354+00:00 stderr F I1213 00:21:01.681858 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.681886354+00:00 stderr F I1213 00:21:01.681865 1 sync_worker.go:999] Running sync for rolebinding "openshift-console/prometheus-k8s" (860 of 955) 2025-12-13T00:21:01.732060178+00:00 stderr F I1213 00:21:01.731992 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-console/prometheus-k8s" (860 of 955) 2025-12-13T00:21:01.732060178+00:00 stderr F I1213 00:21:01.732031 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.732060178+00:00 stderr F I1213 00:21:01.732039 1 sync_worker.go:999] Running sync for servicemonitor "openshift-console/console" (861 of 955) 2025-12-13T00:21:01.759078206+00:00 stderr F I1213 00:21:01.759004 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:01.759360544+00:00 stderr F I1213 00:21:01.759323 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:01.759360544+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:01.759360544+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:01.759428447+00:00 stderr F I1213 00:21:01.759393 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (403.432µs) 2025-12-13T00:21:01.759428447+00:00 stderr F I1213 00:21:01.759415 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:01.759502569+00:00 stderr F I1213 00:21:01.759464 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:01.759540440+00:00 stderr F I1213 00:21:01.759521 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:01.759619772+00:00 stderr F I1213 00:21:01.759532 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:01.759944030+00:00 stderr F I1213 00:21:01.759888 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:01.782143410+00:00 stderr F I1213 00:21:01.782049 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-console/console" (861 of 955) 2025-12-13T00:21:01.782143410+00:00 stderr F I1213 00:21:01.782089 1 task_graph.go:481] Running 21 on worker 1 2025-12-13T00:21:01.782143410+00:00 stderr F I1213 00:21:01.782104 1 sync_worker.go:982] Skipping precreation of clusteroperator "storage" (577 of 955): disabled capabilities: Storage 2025-12-13T00:21:01.782143410+00:00 stderr F I1213 
00:21:01.782122 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.782143410+00:00 stderr F I1213 00:21:01.782129 1 sync_worker.go:999] Running sync for namespace "openshift-cluster-csi-drivers" (549 of 955) 2025-12-13T00:21:01.782186011+00:00 stderr F I1213 00:21:01.782143 1 sync_worker.go:1002] Skipping namespace "openshift-cluster-csi-drivers" (549 of 955): disabled capabilities: Storage 2025-12-13T00:21:01.782186011+00:00 stderr F I1213 00:21:01.782151 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.782186011+00:00 stderr F I1213 00:21:01.782157 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/aws-ebs-csi-driver-operator" (550 of 955) 2025-12-13T00:21:01.782186011+00:00 stderr F I1213 00:21:01.782172 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/aws-ebs-csi-driver-operator" (550 of 955): disabled capabilities: Storage, CloudCredential 2025-12-13T00:21:01.782186011+00:00 stderr F I1213 00:21:01.782179 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.782206601+00:00 stderr F I1213 00:21:01.782185 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/azure-disk-csi-driver-operator" (551 of 955) 2025-12-13T00:21:01.782206601+00:00 stderr F I1213 00:21:01.782197 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/azure-disk-csi-driver-operator" (551 of 955): disabled capabilities: Storage, CloudCredential 2025-12-13T00:21:01.782215532+00:00 stderr F I1213 00:21:01.782204 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.782224082+00:00 stderr F I1213 00:21:01.782212 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/azure-file-csi-driver-operator" (552 of 955) 2025-12-13T00:21:01.782232522+00:00 
stderr F I1213 00:21:01.782225 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/azure-file-csi-driver-operator" (552 of 955): disabled capabilities: Storage, CloudCredential 2025-12-13T00:21:01.782241072+00:00 stderr F I1213 00:21:01.782233 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.782249632+00:00 stderr F I1213 00:21:01.782240 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-cluster-csi-drivers" (553 of 955) 2025-12-13T00:21:01.782283253+00:00 stderr F I1213 00:21:01.782254 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-cluster-csi-drivers" (553 of 955): disabled capabilities: Storage, CloudCredential 2025-12-13T00:21:01.782283253+00:00 stderr F I1213 00:21:01.782267 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.782283253+00:00 stderr F I1213 00:21:01.782273 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-gcp-pd-csi-driver-operator" (554 of 955) 2025-12-13T00:21:01.782293934+00:00 stderr F I1213 00:21:01.782285 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-gcp-pd-csi-driver-operator" (554 of 955): disabled capabilities: Storage, CloudCredential 2025-12-13T00:21:01.782302284+00:00 stderr F I1213 00:21:01.782292 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.782310584+00:00 stderr F I1213 00:21:01.782299 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/ibm-vpc-block-csi-driver-operator" (555 of 955) 2025-12-13T00:21:01.782319304+00:00 stderr F I1213 00:21:01.782311 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/ibm-vpc-block-csi-driver-operator" (555 of 955): disabled capabilities: Storage, 
CloudCredential 2025-12-13T00:21:01.782327305+00:00 stderr F I1213 00:21:01.782318 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.782335345+00:00 stderr F I1213 00:21:01.782324 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/manila-csi-driver-operator" (556 of 955) 2025-12-13T00:21:01.782362155+00:00 stderr F I1213 00:21:01.782340 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/manila-csi-driver-operator" (556 of 955): disabled capabilities: Storage, CloudCredential 2025-12-13T00:21:01.782362155+00:00 stderr F I1213 00:21:01.782352 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.782375176+00:00 stderr F I1213 00:21:01.782358 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/ovirt-csi-driver-operator" (557 of 955) 2025-12-13T00:21:01.782383526+00:00 stderr F I1213 00:21:01.782371 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/ovirt-csi-driver-operator" (557 of 955): disabled capabilities: Storage, CloudCredential 2025-12-13T00:21:01.782383526+00:00 stderr F I1213 00:21:01.782378 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.782394126+00:00 stderr F I1213 00:21:01.782385 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/ibm-powervs-block-csi-driver-operator" (558 of 955) 2025-12-13T00:21:01.782404317+00:00 stderr F I1213 00:21:01.782396 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/ibm-powervs-block-csi-driver-operator" (558 of 955): disabled capabilities: Storage, CloudCredential 2025-12-13T00:21:01.782412577+00:00 stderr F I1213 00:21:01.782403 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:01.782421237+00:00 stderr F I1213 00:21:01.782410 
1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-vmware-vsphere-csi-driver-operator" (559 of 955)
2025-12-13T00:21:01.782429787+00:00 stderr F I1213 00:21:01.782422 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-vmware-vsphere-csi-driver-operator" (559 of 955): disabled capabilities: Storage, CloudCredential
2025-12-13T00:21:01.782438277+00:00 stderr F I1213 00:21:01.782429 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.782446778+00:00 stderr F I1213 00:21:01.782435 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-vsphere-problem-detector" (560 of 955)
2025-12-13T00:21:01.782455118+00:00 stderr F I1213 00:21:01.782447 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-vsphere-problem-detector" (560 of 955): disabled capabilities: Storage, CloudCredential
2025-12-13T00:21:01.782463248+00:00 stderr F I1213 00:21:01.782453 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.782471668+00:00 stderr F I1213 00:21:01.782460 1 sync_worker.go:999] Running sync for customresourcedefinition "clustercsidrivers.operator.openshift.io" (561 of 955)
2025-12-13T00:21:01.786755933+00:00 stderr F W1213 00:21:01.786680 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:01.788138481+00:00 stderr F I1213 00:21:01.788096 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.679614ms)
2025-12-13T00:21:01.833110005+00:00 stderr F I1213 00:21:01.833026 1 sync_worker.go:1014] Done syncing for customresourcedefinition "clustercsidrivers.operator.openshift.io" (561 of 955)
2025-12-13T00:21:01.833110005+00:00 stderr F I1213 00:21:01.833058 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.833110005+00:00 stderr F I1213 00:21:01.833064 1 sync_worker.go:999] Running sync for customresourcedefinition "storages.operator.openshift.io" (562 of 955)
2025-12-13T00:21:01.883183006+00:00 stderr F I1213 00:21:01.883111 1 sync_worker.go:1014] Done syncing for customresourcedefinition "storages.operator.openshift.io" (562 of 955)
2025-12-13T00:21:01.883183006+00:00 stderr F I1213 00:21:01.883146 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883183006+00:00 stderr F I1213 00:21:01.883152 1 sync_worker.go:999] Running sync for storage "cluster" (563 of 955)
2025-12-13T00:21:01.883183006+00:00 stderr F I1213 00:21:01.883166 1 sync_worker.go:1002] Skipping storage "cluster" (563 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883183006+00:00 stderr F I1213 00:21:01.883172 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883224247+00:00 stderr F I1213 00:21:01.883176 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cluster-storage-operator/cluster-storage-operator" (564 of 955)
2025-12-13T00:21:01.883224247+00:00 stderr F I1213 00:21:01.883188 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cluster-storage-operator/cluster-storage-operator" (564 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883224247+00:00 stderr F I1213 00:21:01.883192 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883224247+00:00 stderr F I1213 00:21:01.883197 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-storage-operator-role" (565 of 955)
2025-12-13T00:21:01.883224247+00:00 stderr F I1213 00:21:01.883205 1 sync_worker.go:1002] Skipping clusterrolebinding "cluster-storage-operator-role" (565 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883224247+00:00 stderr F I1213 00:21:01.883210 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883224247+00:00 stderr F I1213 00:21:01.883214 1 sync_worker.go:999] Running sync for service "openshift-cluster-storage-operator/cluster-storage-operator-metrics" (566 of 955)
2025-12-13T00:21:01.883233897+00:00 stderr F I1213 00:21:01.883223 1 sync_worker.go:1002] Skipping service "openshift-cluster-storage-operator/cluster-storage-operator-metrics" (566 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883233897+00:00 stderr F I1213 00:21:01.883227 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883241077+00:00 stderr F I1213 00:21:01.883231 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-provisioner-configmap-and-secret-reader-role" (567 of 955)
2025-12-13T00:21:01.883247898+00:00 stderr F I1213 00:21:01.883240 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-provisioner-configmap-and-secret-reader-role" (567 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883247898+00:00 stderr F I1213 00:21:01.883245 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883256628+00:00 stderr F I1213 00:21:01.883249 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-provisioner-volumeattachment-reader-role" (568 of 955)
2025-12-13T00:21:01.883264988+00:00 stderr F I1213 00:21:01.883260 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-provisioner-volumeattachment-reader-role" (568 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883271688+00:00 stderr F I1213 00:21:01.883265 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883278308+00:00 stderr F I1213 00:21:01.883269 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-provisioner-volumesnapshot-reader-role" (569 of 955)
2025-12-13T00:21:01.883284869+00:00 stderr F I1213 00:21:01.883278 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-provisioner-volumesnapshot-reader-role" (569 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883291409+00:00 stderr F I1213 00:21:01.883284 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883297969+00:00 stderr F I1213 00:21:01.883289 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-resizer-infrastructure-reader-role" (570 of 955)
2025-12-13T00:21:01.883304549+00:00 stderr F I1213 00:21:01.883298 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-resizer-infrastructure-reader-role" (570 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883313569+00:00 stderr F I1213 00:21:01.883303 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883313569+00:00 stderr F I1213 00:21:01.883308 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-resizer-storageclass-reader-role" (571 of 955)
2025-12-13T00:21:01.883322050+00:00 stderr F I1213 00:21:01.883316 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-resizer-storageclass-reader-role" (571 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883328660+00:00 stderr F I1213 00:21:01.883321 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883335270+00:00 stderr F I1213 00:21:01.883325 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-main-attacher-role" (572 of 955)
2025-12-13T00:21:01.883341870+00:00 stderr F I1213 00:21:01.883334 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-main-attacher-role" (572 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883341870+00:00 stderr F I1213 00:21:01.883339 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883350310+00:00 stderr F I1213 00:21:01.883345 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-main-provisioner-role" (573 of 955)
2025-12-13T00:21:01.883358391+00:00 stderr F I1213 00:21:01.883353 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-main-provisioner-role" (573 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883364971+00:00 stderr F I1213 00:21:01.883358 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883371531+00:00 stderr F I1213 00:21:01.883362 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-main-resizer-role" (574 of 955)
2025-12-13T00:21:01.883378121+00:00 stderr F I1213 00:21:01.883371 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-main-resizer-role" (574 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883378121+00:00 stderr F I1213 00:21:01.883375 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883386451+00:00 stderr F I1213 00:21:01.883380 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-main-snapshotter-role" (575 of 955)
2025-12-13T00:21:01.883394552+00:00 stderr F I1213 00:21:01.883388 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-main-snapshotter-role" (575 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883401122+00:00 stderr F I1213 00:21:01.883393 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883407672+00:00 stderr F I1213 00:21:01.883398 1 sync_worker.go:999] Running sync for deployment "openshift-cluster-storage-operator/cluster-storage-operator" (576 of 955)
2025-12-13T00:21:01.883414252+00:00 stderr F I1213 00:21:01.883407 1 sync_worker.go:1002] Skipping deployment "openshift-cluster-storage-operator/cluster-storage-operator" (576 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883414252+00:00 stderr F I1213 00:21:01.883412 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883422622+00:00 stderr F I1213 00:21:01.883416 1 sync_worker.go:999] Running sync for clusteroperator "storage" (577 of 955)
2025-12-13T00:21:01.883430712+00:00 stderr F I1213 00:21:01.883424 1 sync_worker.go:1002] Skipping clusteroperator "storage" (577 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883437283+00:00 stderr F I1213 00:21:01.883429 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.883443833+00:00 stderr F I1213 00:21:01.883434 1 sync_worker.go:999] Running sync for prometheusrule "openshift-cluster-storage-operator/prometheus" (578 of 955)
2025-12-13T00:21:01.883450413+00:00 stderr F I1213 00:21:01.883442 1 sync_worker.go:1002] Skipping prometheusrule "openshift-cluster-storage-operator/prometheus" (578 of 955): disabled capabilities: Storage
2025-12-13T00:21:01.883470124+00:00 stderr F I1213 00:21:01.883457 1 task_graph.go:481] Running 22 on worker 1
2025-12-13T00:21:01.936248168+00:00 stderr F I1213 00:21:01.936167 1 sync_worker.go:989] Precreated resource clusteroperator "ingress" (419 of 955)
2025-12-13T00:21:01.936248168+00:00 stderr F I1213 00:21:01.936211 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.936248168+00:00 stderr F I1213 00:21:01.936221 1 sync_worker.go:999] Running sync for clusterrole "openshift-ingress-operator" (403 of 955)
2025-12-13T00:21:01.981890999+00:00 stderr F I1213 00:21:01.981803 1 sync_worker.go:1014] Done syncing for clusterrole "openshift-ingress-operator" (403 of 955)
2025-12-13T00:21:01.981890999+00:00 stderr F I1213 00:21:01.981854 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:01.981890999+00:00 stderr F I1213 00:21:01.981868 1 sync_worker.go:999] Running sync for customresourcedefinition "dnsrecords.ingress.operator.openshift.io" (404 of 955)
2025-12-13T00:21:02.033925434+00:00 stderr F I1213 00:21:02.033854 1 sync_worker.go:1014] Done syncing for customresourcedefinition "dnsrecords.ingress.operator.openshift.io" (404 of 955)
2025-12-13T00:21:02.033925434+00:00 stderr F I1213 00:21:02.033894 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.033925434+00:00 stderr F I1213 00:21:02.033902 1 sync_worker.go:999] Running sync for customresourcedefinition "ingresscontrollers.operator.openshift.io" (405 of 955)
2025-12-13T00:21:02.095242327+00:00 stderr F I1213 00:21:02.095169 1 sync_worker.go:1014] Done syncing for customresourcedefinition "ingresscontrollers.operator.openshift.io" (405 of 955)
2025-12-13T00:21:02.095242327+00:00 stderr F I1213 00:21:02.095208 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.095242327+00:00 stderr F I1213 00:21:02.095214 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-ingress" (406 of 955)
2025-12-13T00:21:02.095242327+00:00 stderr F I1213 00:21:02.095229 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-ingress" (406 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:02.095242327+00:00 stderr F I1213 00:21:02.095236 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.095292299+00:00 stderr F I1213 00:21:02.095241 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-ingress-azure" (407 of 955)
2025-12-13T00:21:02.095292299+00:00 stderr F I1213 00:21:02.095252 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-ingress-azure" (407 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:02.095292299+00:00 stderr F I1213 00:21:02.095257 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.095292299+00:00 stderr F I1213 00:21:02.095262 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-ingress-gcp" (408 of 955)
2025-12-13T00:21:02.095292299+00:00 stderr F I1213 00:21:02.095271 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-ingress-gcp" (408 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:02.095292299+00:00 stderr F I1213 00:21:02.095275 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.095292299+00:00 stderr F I1213 00:21:02.095279 1 sync_worker.go:999] Running sync for namespace "openshift-ingress-operator" (409 of 955)
2025-12-13T00:21:02.132920835+00:00 stderr F I1213 00:21:02.132849 1 sync_worker.go:1014] Done syncing for namespace "openshift-ingress-operator" (409 of 955)
2025-12-13T00:21:02.132920835+00:00 stderr F I1213 00:21:02.132892 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.132920835+00:00 stderr F I1213 00:21:02.132900 1 sync_worker.go:999] Running sync for clusterrolebinding "openshift-ingress-operator" (410 of 955)
2025-12-13T00:21:02.193524129+00:00 stderr F I1213 00:21:02.193110 1 sync_worker.go:1014] Done syncing for clusterrolebinding "openshift-ingress-operator" (410 of 955)
2025-12-13T00:21:02.193524129+00:00 stderr F I1213 00:21:02.193502 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.193524129+00:00 stderr F I1213 00:21:02.193508 1 sync_worker.go:999] Running sync for rolebinding "openshift-ingress-operator/ingress-operator" (411 of 955)
2025-12-13T00:21:02.232130421+00:00 stderr F I1213 00:21:02.232065 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-ingress-operator/ingress-operator" (411 of 955)
2025-12-13T00:21:02.232130421+00:00 stderr F I1213 00:21:02.232106 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.232130421+00:00 stderr F I1213 00:21:02.232115 1 sync_worker.go:999] Running sync for rolebinding "openshift-config/ingress-operator" (412 of 955)
2025-12-13T00:21:02.283614371+00:00 stderr F I1213 00:21:02.283479 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config/ingress-operator" (412 of 955)
2025-12-13T00:21:02.283614371+00:00 stderr F I1213 00:21:02.283564 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.283614371+00:00 stderr F I1213 00:21:02.283571 1 sync_worker.go:999] Running sync for role "openshift-ingress-operator/ingress-operator" (413 of 955)
2025-12-13T00:21:02.332195182+00:00 stderr F I1213 00:21:02.332129 1 sync_worker.go:1014] Done syncing for role "openshift-ingress-operator/ingress-operator" (413 of 955)
2025-12-13T00:21:02.332195182+00:00 stderr F I1213 00:21:02.332170 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.332195182+00:00 stderr F I1213 00:21:02.332176 1 sync_worker.go:999] Running sync for role "openshift-config/ingress-operator" (414 of 955)
2025-12-13T00:21:02.382871309+00:00 stderr F I1213 00:21:02.382814 1 sync_worker.go:1014] Done syncing for role "openshift-config/ingress-operator" (414 of 955)
2025-12-13T00:21:02.382871309+00:00 stderr F I1213 00:21:02.382853 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.382871309+00:00 stderr F I1213 00:21:02.382861 1 sync_worker.go:999] Running sync for serviceaccount "openshift-ingress-operator/ingress-operator" (415 of 955)
2025-12-13T00:21:02.432705454+00:00 stderr F I1213 00:21:02.432634 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-ingress-operator/ingress-operator" (415 of 955)
2025-12-13T00:21:02.432705454+00:00 stderr F I1213 00:21:02.432670 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.432705454+00:00 stderr F I1213 00:21:02.432678 1 sync_worker.go:999] Running sync for service "openshift-ingress-operator/metrics" (416 of 955)
2025-12-13T00:21:02.482660002+00:00 stderr F I1213 00:21:02.482594 1 sync_worker.go:1014] Done syncing for service "openshift-ingress-operator/metrics" (416 of 955)
2025-12-13T00:21:02.482660002+00:00 stderr F I1213 00:21:02.482623 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.482660002+00:00 stderr F I1213 00:21:02.482630 1 sync_worker.go:999] Running sync for configmap "openshift-ingress-operator/trusted-ca" (417 of 955)
2025-12-13T00:21:02.543469223+00:00 stderr F I1213 00:21:02.543322 1 sync_worker.go:1014] Done syncing for configmap "openshift-ingress-operator/trusted-ca" (417 of 955)
2025-12-13T00:21:02.543469223+00:00 stderr F I1213 00:21:02.543368 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.543469223+00:00 stderr F I1213 00:21:02.543374 1 sync_worker.go:999] Running sync for deployment "openshift-ingress-operator/ingress-operator" (418 of 955)
2025-12-13T00:21:02.632601988+00:00 stderr F I1213 00:21:02.632503 1 sync_worker.go:1014] Done syncing for deployment "openshift-ingress-operator/ingress-operator" (418 of 955)
2025-12-13T00:21:02.632601988+00:00 stderr F I1213 00:21:02.632551 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.632601988+00:00 stderr F I1213 00:21:02.632564 1 sync_worker.go:999] Running sync for clusteroperator "ingress" (419 of 955)
2025-12-13T00:21:02.633028399+00:00 stderr F I1213 00:21:02.632926 1 sync_worker.go:1014] Done syncing for clusteroperator "ingress" (419 of 955)
2025-12-13T00:21:02.633043590+00:00 stderr F I1213 00:21:02.633026 1 task_graph.go:481] Running 23 on worker 1
2025-12-13T00:21:02.633090911+00:00 stderr F I1213 00:21:02.633051 1 sync_worker.go:982] Skipping precreation of clusteroperator "csi-snapshot-controller" (377 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.633090911+00:00 stderr F I1213 00:21:02.633082 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.633126502+00:00 stderr F I1213 00:21:02.633097 1 sync_worker.go:999] Running sync for customresourcedefinition "csisnapshotcontrollers.operator.openshift.io" (357 of 955)
2025-12-13T00:21:02.684254852+00:00 stderr F I1213 00:21:02.684138 1 sync_worker.go:1014] Done syncing for customresourcedefinition "csisnapshotcontrollers.operator.openshift.io" (357 of 955)
2025-12-13T00:21:02.684254852+00:00 stderr F I1213 00:21:02.684188 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.684254852+00:00 stderr F I1213 00:21:02.684200 1 sync_worker.go:999] Running sync for csisnapshotcontroller "cluster" (358 of 955)
2025-12-13T00:21:02.684254852+00:00 stderr F I1213 00:21:02.684225 1 sync_worker.go:1002] Skipping csisnapshotcontroller "cluster" (358 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.684254852+00:00 stderr F I1213 00:21:02.684237 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.684327394+00:00 stderr F I1213 00:21:02.684247 1 sync_worker.go:999] Running sync for configmap "openshift-cluster-storage-operator/csi-snapshot-controller-operator-config" (359 of 955)
2025-12-13T00:21:02.684327394+00:00 stderr F I1213 00:21:02.684269 1 sync_worker.go:1002] Skipping configmap "openshift-cluster-storage-operator/csi-snapshot-controller-operator-config" (359 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.684327394+00:00 stderr F I1213 00:21:02.684281 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.684327394+00:00 stderr F I1213 00:21:02.684291 1 sync_worker.go:999] Running sync for service "openshift-cluster-storage-operator/csi-snapshot-controller-operator-metrics" (360 of 955)
2025-12-13T00:21:02.684327394+00:00 stderr F I1213 00:21:02.684308 1 sync_worker.go:1002] Skipping service "openshift-cluster-storage-operator/csi-snapshot-controller-operator-metrics" (360 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.684327394+00:00 stderr F I1213 00:21:02.684318 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.684345575+00:00 stderr F I1213 00:21:02.684328 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cluster-storage-operator/csi-snapshot-controller-operator" (361 of 955)
2025-12-13T00:21:02.684357915+00:00 stderr F I1213 00:21:02.684347 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cluster-storage-operator/csi-snapshot-controller-operator" (361 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.684370515+00:00 stderr F I1213 00:21:02.684357 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.684402326+00:00 stderr F I1213 00:21:02.684367 1 sync_worker.go:999] Running sync for clusterrole "openshift-csi-snapshot-controller-runner" (362 of 955)
2025-12-13T00:21:02.684402326+00:00 stderr F I1213 00:21:02.684386 1 sync_worker.go:1002] Skipping clusterrole "openshift-csi-snapshot-controller-runner" (362 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.684402326+00:00 stderr F I1213 00:21:02.684397 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.684421287+00:00 stderr F I1213 00:21:02.684407 1 sync_worker.go:999] Running sync for clusterrolebinding "openshift-csi-snapshot-controller-role" (363 of 955)
2025-12-13T00:21:02.684437457+00:00 stderr F I1213 00:21:02.684425 1 sync_worker.go:1002] Skipping clusterrolebinding "openshift-csi-snapshot-controller-role" (363 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.684455088+00:00 stderr F I1213 00:21:02.684436 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.684468238+00:00 stderr F I1213 00:21:02.684446 1 sync_worker.go:999] Running sync for role "openshift-cluster-storage-operator/csi-snapshot-controller-leaderelection" (364 of 955)
2025-12-13T00:21:02.684481368+00:00 stderr F I1213 00:21:02.684465 1 sync_worker.go:1002] Skipping role "openshift-cluster-storage-operator/csi-snapshot-controller-leaderelection" (364 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.684481368+00:00 stderr F I1213 00:21:02.684475 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.684497919+00:00 stderr F I1213 00:21:02.684485 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-storage-operator/csi-snapshot-controller-leaderelection" (365 of 955)
2025-12-13T00:21:02.684513769+00:00 stderr F I1213 00:21:02.684502 1 sync_worker.go:1002] Skipping rolebinding "openshift-cluster-storage-operator/csi-snapshot-controller-leaderelection" (365 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.684526480+00:00 stderr F I1213 00:21:02.684512 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.684539300+00:00 stderr F I1213 00:21:02.684522 1 sync_worker.go:999] Running sync for rolebinding "kube-system/csi-snapshot-controller-operator-authentication-reader" (366 of 955)
2025-12-13T00:21:02.732355890+00:00 stderr F I1213 00:21:02.732266 1 sync_worker.go:1014] Done syncing for rolebinding "kube-system/csi-snapshot-controller-operator-authentication-reader" (366 of 955)
2025-12-13T00:21:02.732355890+00:00 stderr F I1213 00:21:02.732306 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.732355890+00:00 stderr F I1213 00:21:02.732316 1 sync_worker.go:999] Running sync for clusterrole "csi-snapshot-controller-operator-clusterrole" (367 of 955)
2025-12-13T00:21:02.760386027+00:00 stderr F I1213 00:21:02.760298 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:21:02.760894000+00:00 stderr F I1213 00:21:02.760844 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:21:02.760894000+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:21:02.760894000+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:21:02.761016413+00:00 stderr F I1213 00:21:02.760969 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (683.328µs)
2025-12-13T00:21:02.761016413+00:00 stderr F I1213 00:21:02.761001 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:02.761102946+00:00 stderr F I1213 00:21:02.761059 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:02.761142307+00:00 stderr F I1213 00:21:02.761117 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:02.761241429+00:00 stderr F I1213 00:21:02.761132 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:02.761715922+00:00 stderr F I1213 00:21:02.761615 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:02.783335116+00:00 stderr F I1213 00:21:02.783255 1 sync_worker.go:1014] Done syncing for clusterrole "csi-snapshot-controller-operator-clusterrole" (367 of 955)
2025-12-13T00:21:02.783335116+00:00 stderr F I1213 00:21:02.783298 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.783335116+00:00 stderr F I1213 00:21:02.783307 1 sync_worker.go:999] Running sync for role "openshift-cluster-storage-operator/csi-snapshot-controller-operator-role" (368 of 955)
2025-12-13T00:21:02.796546972+00:00 stderr F W1213 00:21:02.796447 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:02.798138695+00:00 stderr F I1213 00:21:02.798070 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.06721ms)
2025-12-13T00:21:02.832432591+00:00 stderr F I1213 00:21:02.832356 1 sync_worker.go:1014] Done syncing for role "openshift-cluster-storage-operator/csi-snapshot-controller-operator-role" (368 of 955)
2025-12-13T00:21:02.832432591+00:00 stderr F I1213 00:21:02.832395 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.832432591+00:00 stderr F I1213 00:21:02.832404 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:aggregate-snapshots-to-admin" (369 of 955)
2025-12-13T00:21:02.832432591+00:00 stderr F I1213 00:21:02.832423 1 sync_worker.go:1002] Skipping clusterrole "system:openshift:aggregate-snapshots-to-admin" (369 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.832512963+00:00 stderr F I1213 00:21:02.832431 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.832512963+00:00 stderr F I1213 00:21:02.832439 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:aggregate-snapshots-to-view" (370 of 955)
2025-12-13T00:21:02.832512963+00:00 stderr F I1213 00:21:02.832452 1 sync_worker.go:1002] Skipping clusterrole "system:openshift:aggregate-snapshots-to-view" (370 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.832512963+00:00 stderr F I1213 00:21:02.832459 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.832512963+00:00 stderr F I1213 00:21:02.832465 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:aggregate-snapshots-to-basic-user" (371 of 955)
2025-12-13T00:21:02.832512963+00:00 stderr F I1213 00:21:02.832487 1 sync_worker.go:1002] Skipping clusterrole "system:openshift:aggregate-snapshots-to-basic-user" (371 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.832512963+00:00 stderr F I1213 00:21:02.832495 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.832512963+00:00 stderr F I1213 00:21:02.832501 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:aggregate-snapshots-to-storage-admin" (372 of 955)
2025-12-13T00:21:02.832562624+00:00 stderr F I1213 00:21:02.832514 1 sync_worker.go:1002] Skipping clusterrole "system:openshift:aggregate-snapshots-to-storage-admin" (372 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.832562624+00:00 stderr F I1213 00:21:02.832522 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.832562624+00:00 stderr F I1213 00:21:02.832528 1 sync_worker.go:999] Running sync for clusterrolebinding "csi-snapshot-controller-operator-clusterrole" (373 of 955)
2025-12-13T00:21:02.881771632+00:00 stderr F I1213 00:21:02.881637 1 sync_worker.go:1014] Done syncing for clusterrolebinding "csi-snapshot-controller-operator-clusterrole" (373 of 955)
2025-12-13T00:21:02.881771632+00:00 stderr F I1213 00:21:02.881683 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.881771632+00:00 stderr F I1213 00:21:02.881691 1 sync_worker.go:999] Running sync for clusterrolebinding "csi-snapshot-controller-runner-operator" (374 of 955)
2025-12-13T00:21:02.881771632+00:00 stderr F I1213 00:21:02.881709 1 sync_worker.go:1002] Skipping clusterrolebinding "csi-snapshot-controller-runner-operator" (374 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.881771632+00:00 stderr F I1213 00:21:02.881717 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.881771632+00:00 stderr F I1213 00:21:02.881723 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-storage-operator/csi-snapshot-controller-operator-role" (375 of 955)
2025-12-13T00:21:02.932981253+00:00 stderr F I1213 00:21:02.932861 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-cluster-storage-operator/csi-snapshot-controller-operator-role" (375 of 955)
2025-12-13T00:21:02.932981253+00:00 stderr F I1213 00:21:02.932892 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.932981253+00:00 stderr F I1213 00:21:02.932898 1 sync_worker.go:999] Running sync for deployment "openshift-cluster-storage-operator/csi-snapshot-controller-operator" (376 of 955)
2025-12-13T00:21:02.932981253+00:00 stderr F I1213 00:21:02.932916 1 sync_worker.go:1002] Skipping deployment "openshift-cluster-storage-operator/csi-snapshot-controller-operator" (376 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.932981253+00:00 stderr F I1213 00:21:02.932922 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.932981253+00:00 stderr F I1213 00:21:02.932927 1 sync_worker.go:999] Running sync for clusteroperator "csi-snapshot-controller" (377 of 955)
2025-12-13T00:21:02.932981253+00:00 stderr F I1213 00:21:02.932959 1 sync_worker.go:1002] Skipping clusteroperator "csi-snapshot-controller" (377 of 955): disabled capabilities: CSISnapshot
2025-12-13T00:21:02.933085596+00:00 stderr F I1213 00:21:02.932974 1 task_graph.go:481] Running 24 on worker 1
2025-12-13T00:21:02.933085596+00:00 stderr F I1213 00:21:02.932983 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:02.933085596+00:00 stderr F I1213 00:21:02.932987 1 sync_worker.go:999] Running sync for namespace "openshift-config-operator" (13 of 955)
2025-12-13T00:21:02.991635316+00:00 stderr F I1213 00:21:02.991551 1 sync_worker.go:1014] Done syncing for namespace "openshift-config-operator" (13 of 955)
2025-12-13T00:21:02.991635316+00:00 stderr F I1213 00:21:02.991585 1 task_graph.go:481] Running 25 on worker 1
2025-12-13T00:21:03.035716276+00:00 stderr F I1213 00:21:03.035583 1 sync_worker.go:989] Precreated resource clusteroperator "machine-config" (799 of 955)
2025-12-13T00:21:03.035716276+00:00 stderr F I1213 00:21:03.035627 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:03.035716276+00:00 stderr F I1213 00:21:03.035633 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:machine-config-operator:cluster-reader" (774 of 955)
2025-12-13T00:21:03.083001112+00:00 stderr F I1213 00:21:03.082857 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:machine-config-operator:cluster-reader" (774 of 955)
2025-12-13T00:21:03.083001112+00:00 stderr F I1213 00:21:03.082900 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:03.083001112+00:00 stderr F I1213 00:21:03.082909 1 sync_worker.go:999] Running sync for namespace "openshift-machine-config-operator" (775 of 955)
2025-12-13T00:21:03.132719794+00:00 stderr F I1213 00:21:03.132584 1 sync_worker.go:1014] Done syncing for namespace "openshift-machine-config-operator" (775 of 955)
2025-12-13T00:21:03.132719794+00:00 stderr F I1213 00:21:03.132654 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:03.132719794+00:00 stderr F I1213 00:21:03.132663 1 sync_worker.go:999] Running sync for namespace "openshift-openstack-infra" (776 of 955)
2025-12-13T00:21:03.182540969+00:00 stderr F I1213 00:21:03.182431 1 sync_worker.go:1014] Done syncing for namespace "openshift-openstack-infra" (776 of 955)
2025-12-13T00:21:03.182540969+00:00 stderr F I1213 00:21:03.182476 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:03.182540969+00:00 stderr F I1213 00:21:03.182486 1 sync_worker.go:999] Running sync for namespace "openshift-kni-infra" (777 of 955)
2025-12-13T00:21:03.232014223+00:00 stderr F I1213 00:21:03.231902 1 sync_worker.go:1014] Done syncing for namespace "openshift-kni-infra" (777 of 955)
2025-12-13T00:21:03.232014223+00:00 stderr F I1213 00:21:03.231963 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:03.232014223+00:00 stderr F I1213 00:21:03.231971 1 sync_worker.go:999] Running sync for namespace "openshift-ovirt-infra" (778 of 955)
2025-12-13T00:21:03.281969891+00:00 stderr F I1213 00:21:03.281878 1 sync_worker.go:1014] Done syncing for namespace "openshift-ovirt-infra" (778 of 955)
2025-12-13T00:21:03.282027103+00:00 stderr F I1213 00:21:03.281977 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:03.282027103+00:00 stderr F I1213 00:21:03.281996 1 sync_worker.go:999] Running sync for namespace "openshift-vsphere-infra" (779 of 955)
2025-12-13T00:21:03.332131135+00:00 stderr F I1213 00:21:03.332008 1 sync_worker.go:1014] Done syncing for namespace "openshift-vsphere-infra" (779 of 955)
2025-12-13T00:21:03.332131135+00:00 stderr F I1213 00:21:03.332046 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:03.332131135+00:00 stderr F I1213 00:21:03.332054 1 sync_worker.go:999] Running sync for namespace "openshift-nutanix-infra" (780 of 955)
2025-12-13T00:21:03.383046908+00:00 stderr F I1213 00:21:03.382916 1 sync_worker.go:1014] Done syncing for namespace "openshift-nutanix-infra" (780 of 955)
2025-12-13T00:21:03.383046908+00:00 stderr F I1213 00:21:03.382972 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:03.383046908+00:00 stderr F I1213 00:21:03.382978 1 sync_worker.go:999] Running sync for namespace "openshift-cloud-platform-infra" (781 of 955)
2025-12-13T00:21:03.433683375+00:00 stderr F I1213 00:21:03.433537 1 sync_worker.go:1014] Done syncing for namespace "openshift-cloud-platform-infra" (781 of 955)
2025-12-13T00:21:03.433683375+00:00 stderr F I1213 00:21:03.433613 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:03.433683375+00:00 stderr F I1213 00:21:03.433624 1 sync_worker.go:999] Running sync for service "openshift-machine-config-operator/machine-config-operator" (782 of 955)
2025-12-13T00:21:03.493386866+00:00 stderr F I1213 00:21:03.493292 1 sync_worker.go:1014] Done syncing for service "openshift-machine-config-operator/machine-config-operator" (782 of 955)
2025-12-13T00:21:03.493386866+00:00 stderr F I1213 00:21:03.493340 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:03.493386866+00:00 stderr F I1213 00:21:03.493351 1 sync_worker.go:999] Running sync for service "openshift-machine-config-operator/machine-config-controller" (783 of 955)
2025-12-13T00:21:03.532117151+00:00 stderr F I1213 00:21:03.532033 1 sync_worker.go:1014] Done syncing for service "openshift-machine-config-operator/machine-config-controller" (783 of 955)
2025-12-13T00:21:03.532117151+00:00 stderr F I1213 00:21:03.532081 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:03.532117151+00:00 stderr F I1213 00:21:03.532093 1 sync_worker.go:999] Running sync for service "openshift-machine-config-operator/machine-config-daemon" (784 of 955)
2025-12-13T00:21:03.583075867+00:00 stderr F I1213 00:21:03.582995 1 sync_worker.go:1014] Done syncing for service "openshift-machine-config-operator/machine-config-daemon" (784 of 955)
2025-12-13T00:21:03.583075867+00:00 stderr F I1213 00:21:03.583039 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:03.583075867+00:00 stderr F I1213 00:21:03.583049 1 sync_worker.go:999] Running sync for customresourcedefinition "containerruntimeconfigs.machineconfiguration.openshift.io" (785 of 955)
2025-12-13T00:21:03.633538948+00:00 stderr F I1213 00:21:03.633440 1 sync_worker.go:1014] Done syncing for customresourcedefinition
"containerruntimeconfigs.machineconfiguration.openshift.io" (785 of 955) 2025-12-13T00:21:03.633538948+00:00 stderr F I1213 00:21:03.633494 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:03.633538948+00:00 stderr F I1213 00:21:03.633503 1 sync_worker.go:999] Running sync for customresourcedefinition "controllerconfigs.machineconfiguration.openshift.io" (786 of 955) 2025-12-13T00:21:03.689401945+00:00 stderr F I1213 00:21:03.689321 1 sync_worker.go:1014] Done syncing for customresourcedefinition "controllerconfigs.machineconfiguration.openshift.io" (786 of 955) 2025-12-13T00:21:03.689401945+00:00 stderr F I1213 00:21:03.689364 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:03.689401945+00:00 stderr F I1213 00:21:03.689370 1 sync_worker.go:999] Running sync for customresourcedefinition "kubeletconfigs.machineconfiguration.openshift.io" (787 of 955) 2025-12-13T00:21:03.733435423+00:00 stderr F I1213 00:21:03.733355 1 sync_worker.go:1014] Done syncing for customresourcedefinition "kubeletconfigs.machineconfiguration.openshift.io" (787 of 955) 2025-12-13T00:21:03.733435423+00:00 stderr F I1213 00:21:03.733398 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:03.733435423+00:00 stderr F I1213 00:21:03.733405 1 sync_worker.go:999] Running sync for customresourcedefinition "machineconfigpools.machineconfiguration.openshift.io" (788 of 955) 2025-12-13T00:21:03.761086400+00:00 stderr F I1213 00:21:03.761029 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:03.761542322+00:00 stderr F I1213 00:21:03.761517 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:03.761542322+00:00 stderr F 
NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:03.761542322+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:03.761649565+00:00 stderr F I1213 00:21:03.761628 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (610.966µs) 2025-12-13T00:21:03.761694216+00:00 stderr F I1213 00:21:03.761679 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:03.761793199+00:00 stderr F I1213 00:21:03.761763 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:03.761877841+00:00 stderr F I1213 00:21:03.761862 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:03.762025475+00:00 stderr F I1213 00:21:03.761903 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", 
Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:03.762372045+00:00 stderr F I1213 00:21:03.762336 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:03.784482151+00:00 stderr F I1213 00:21:03.784411 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machineconfigpools.machineconfiguration.openshift.io" (788 of 955) 2025-12-13T00:21:03.784482151+00:00 stderr F I1213 00:21:03.784453 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:03.784482151+00:00 stderr 
F I1213 00:21:03.784461 1 sync_worker.go:999] Running sync for customresourcedefinition "machineconfigs.machineconfiguration.openshift.io" (789 of 955) 2025-12-13T00:21:03.788001156+00:00 stderr F W1213 00:21:03.787970 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:03.789724772+00:00 stderr F I1213 00:21:03.789686 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.005736ms) 2025-12-13T00:21:03.833077783+00:00 stderr F I1213 00:21:03.832564 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machineconfigs.machineconfiguration.openshift.io" (789 of 955) 2025-12-13T00:21:03.833077783+00:00 stderr F I1213 00:21:03.832612 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:03.833077783+00:00 stderr F I1213 00:21:03.832620 1 sync_worker.go:999] Running sync for customresourcedefinition "machineconfigurations.operator.openshift.io" (790 of 955) 2025-12-13T00:21:03.883424161+00:00 stderr F I1213 00:21:03.883349 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machineconfigurations.operator.openshift.io" (790 of 955) 2025-12-13T00:21:03.883424161+00:00 stderr F I1213 00:21:03.883386 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:03.883424161+00:00 stderr F I1213 00:21:03.883394 1 sync_worker.go:999] Running sync for configmap "openshift-machine-config-operator/machine-config-operator-images" (791 of 955) 2025-12-13T00:21:03.932739561+00:00 stderr F I1213 00:21:03.932642 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-config-operator/machine-config-operator-images" (791 of 955) 2025-12-13T00:21:03.932739561+00:00 stderr F I1213 00:21:03.932689 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:03.932739561+00:00 stderr F I1213 00:21:03.932697 1 sync_worker.go:999] Running sync for serviceaccount 
"openshift-machine-config-operator/machine-config-operator" (792 of 955) 2025-12-13T00:21:03.982390352+00:00 stderr F I1213 00:21:03.982305 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-config-operator/machine-config-operator" (792 of 955) 2025-12-13T00:21:03.982390352+00:00 stderr F I1213 00:21:03.982360 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:03.982390352+00:00 stderr F I1213 00:21:03.982371 1 sync_worker.go:999] Running sync for clusterrolebinding "custom-account-openshift-machine-config-operator" (793 of 955) 2025-12-13T00:21:04.032288248+00:00 stderr F I1213 00:21:04.032213 1 sync_worker.go:1014] Done syncing for clusterrolebinding "custom-account-openshift-machine-config-operator" (793 of 955) 2025-12-13T00:21:04.032288248+00:00 stderr F I1213 00:21:04.032256 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.032288248+00:00 stderr F I1213 00:21:04.032265 1 sync_worker.go:999] Running sync for role "openshift-machine-config-operator/prometheus-k8s" (794 of 955) 2025-12-13T00:21:04.081739963+00:00 stderr F I1213 00:21:04.081670 1 sync_worker.go:1014] Done syncing for role "openshift-machine-config-operator/prometheus-k8s" (794 of 955) 2025-12-13T00:21:04.081739963+00:00 stderr F I1213 00:21:04.081700 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.081739963+00:00 stderr F I1213 00:21:04.081706 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-config-operator/prometheus-k8s" (795 of 955) 2025-12-13T00:21:04.132056950+00:00 stderr F I1213 00:21:04.131965 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-config-operator/prometheus-k8s" (795 of 955) 2025-12-13T00:21:04.132056950+00:00 stderr F I1213 00:21:04.132005 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.132056950+00:00 stderr F I1213 00:21:04.132017 1 
sync_worker.go:999] Running sync for deployment "openshift-machine-config-operator/machine-config-operator" (796 of 955) 2025-12-13T00:21:04.182645086+00:00 stderr F I1213 00:21:04.182204 1 sync_worker.go:1014] Done syncing for deployment "openshift-machine-config-operator/machine-config-operator" (796 of 955) 2025-12-13T00:21:04.182645086+00:00 stderr F I1213 00:21:04.182264 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.182645086+00:00 stderr F I1213 00:21:04.182272 1 sync_worker.go:999] Running sync for configmap "openshift-machine-config-operator/kube-rbac-proxy" (797 of 955) 2025-12-13T00:21:04.232330016+00:00 stderr F I1213 00:21:04.232281 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-config-operator/kube-rbac-proxy" (797 of 955) 2025-12-13T00:21:04.232412658+00:00 stderr F I1213 00:21:04.232401 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.232442649+00:00 stderr F I1213 00:21:04.232430 1 sync_worker.go:999] Running sync for configmap "openshift-machine-config-operator/machine-config-osimageurl" (798 of 955) 2025-12-13T00:21:04.282305395+00:00 stderr F I1213 00:21:04.282243 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-config-operator/machine-config-osimageurl" (798 of 955) 2025-12-13T00:21:04.282305395+00:00 stderr F I1213 00:21:04.282273 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.282305395+00:00 stderr F I1213 00:21:04.282279 1 sync_worker.go:999] Running sync for clusteroperator "machine-config" (799 of 955) 2025-12-13T00:21:04.282489800+00:00 stderr F E1213 00:21:04.282459 1 task.go:122] error running apply for clusteroperator "machine-config" (799 of 955): Cluster operator machine-config is degraded 2025-12-13T00:21:04.282489800+00:00 stderr F I1213 00:21:04.282480 1 task_graph.go:481] Running 26 on worker 1 2025-12-13T00:21:04.334597996+00:00 stderr F I1213 00:21:04.334548 
1 sync_worker.go:989] Precreated resource clusteroperator "openshift-samples" (536 of 955) 2025-12-13T00:21:04.334673418+00:00 stderr F I1213 00:21:04.334663 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.334705369+00:00 stderr F I1213 00:21:04.334690 1 sync_worker.go:999] Running sync for customresourcedefinition "configs.samples.operator.openshift.io" (517 of 955) 2025-12-13T00:21:04.383534576+00:00 stderr F I1213 00:21:04.383476 1 sync_worker.go:1014] Done syncing for customresourcedefinition "configs.samples.operator.openshift.io" (517 of 955) 2025-12-13T00:21:04.383534576+00:00 stderr F I1213 00:21:04.383514 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.383534576+00:00 stderr F I1213 00:21:04.383521 1 sync_worker.go:999] Running sync for namespace "openshift-cluster-samples-operator" (518 of 955) 2025-12-13T00:21:04.432890128+00:00 stderr F I1213 00:21:04.432835 1 sync_worker.go:1014] Done syncing for namespace "openshift-cluster-samples-operator" (518 of 955) 2025-12-13T00:21:04.432983571+00:00 stderr F I1213 00:21:04.432967 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.433046672+00:00 stderr F I1213 00:21:04.433012 1 sync_worker.go:999] Running sync for prometheusrule "openshift-cluster-samples-operator/samples-operator-alerts" (519 of 955) 2025-12-13T00:21:04.483786282+00:00 stderr F I1213 00:21:04.483671 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-cluster-samples-operator/samples-operator-alerts" (519 of 955) 2025-12-13T00:21:04.483786282+00:00 stderr F I1213 00:21:04.483711 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.483786282+00:00 stderr F I1213 00:21:04.483722 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cluster-samples-operator/cluster-samples-operator" (520 of 955) 2025-12-13T00:21:04.533386610+00:00 stderr F I1213 00:21:04.533317 
1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-cluster-samples-operator/cluster-samples-operator" (520 of 955) 2025-12-13T00:21:04.533386610+00:00 stderr F I1213 00:21:04.533348 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.533386610+00:00 stderr F I1213 00:21:04.533354 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-samples-operator-imageconfig-reader" (521 of 955) 2025-12-13T00:21:04.581677583+00:00 stderr F I1213 00:21:04.581629 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-samples-operator-imageconfig-reader" (521 of 955) 2025-12-13T00:21:04.581754605+00:00 stderr F I1213 00:21:04.581736 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.581786466+00:00 stderr F I1213 00:21:04.581771 1 sync_worker.go:999] Running sync for clusterrole "cluster-samples-operator-imageconfig-reader" (522 of 955) 2025-12-13T00:21:04.632807723+00:00 stderr F I1213 00:21:04.632710 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-samples-operator-imageconfig-reader" (522 of 955) 2025-12-13T00:21:04.632807723+00:00 stderr F I1213 00:21:04.632754 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.632807723+00:00 stderr F I1213 00:21:04.632762 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-samples-operator-proxy-reader" (523 of 955) 2025-12-13T00:21:04.681579419+00:00 stderr F I1213 00:21:04.681527 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-samples-operator-proxy-reader" (523 of 955) 2025-12-13T00:21:04.681579419+00:00 stderr F I1213 00:21:04.681560 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.681579419+00:00 stderr F I1213 00:21:04.681567 1 sync_worker.go:999] Running sync for clusterrole "cluster-samples-operator-proxy-reader" (524 of 955) 2025-12-13T00:21:04.732421661+00:00 stderr F I1213 
00:21:04.732352 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-samples-operator-proxy-reader" (524 of 955) 2025-12-13T00:21:04.732421661+00:00 stderr F I1213 00:21:04.732384 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.732421661+00:00 stderr F I1213 00:21:04.732390 1 sync_worker.go:999] Running sync for role "openshift-cluster-samples-operator/cluster-samples-operator" (525 of 955) 2025-12-13T00:21:04.762589116+00:00 stderr F I1213 00:21:04.762538 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:04.762953705+00:00 stderr F I1213 00:21:04.762872 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:04.762953705+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:04.762953705+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:04.762999447+00:00 stderr F I1213 00:21:04.762974 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (471.373µs) 2025-12-13T00:21:04.763051178+00:00 stderr F I1213 00:21:04.763030 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:04.763179711+00:00 stderr F I1213 00:21:04.763146 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:04.763246683+00:00 stderr F I1213 00:21:04.763236 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:04.763317105+00:00 stderr F I1213 00:21:04.763264 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:04.763610943+00:00 stderr F I1213 00:21:04.763560 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:04.782013209+00:00 stderr F I1213 00:21:04.781973 1 sync_worker.go:1014] Done syncing for role "openshift-cluster-samples-operator/cluster-samples-operator" (525 of 955) 2025-12-13T00:21:04.782067071+00:00 stderr F I1213 00:21:04.782056 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.782103122+00:00 stderr F I1213 00:21:04.782087 1 sync_worker.go:999] Running sync for clusterrole "cluster-samples-operator" (526 of 955) 2025-12-13T00:21:04.797757044+00:00 
stderr F W1213 00:21:04.797701 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:04.799256265+00:00 stderr F I1213 00:21:04.799211 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.188127ms) 2025-12-13T00:21:04.832287236+00:00 stderr F I1213 00:21:04.832239 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-samples-operator" (526 of 955) 2025-12-13T00:21:04.832352447+00:00 stderr F I1213 00:21:04.832341 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.832382698+00:00 stderr F I1213 00:21:04.832369 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:cluster-samples-operator:cluster-reader" (527 of 955) 2025-12-13T00:21:04.881540995+00:00 stderr F I1213 00:21:04.881487 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:cluster-samples-operator:cluster-reader" (527 of 955) 2025-12-13T00:21:04.881540995+00:00 stderr F I1213 00:21:04.881522 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.881540995+00:00 stderr F I1213 00:21:04.881527 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-samples-operator/cluster-samples-operator" (528 of 955) 2025-12-13T00:21:04.932021347+00:00 stderr F I1213 00:21:04.931958 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-cluster-samples-operator/cluster-samples-operator" (528 of 955) 2025-12-13T00:21:04.932021347+00:00 stderr F I1213 00:21:04.931997 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.932068668+00:00 stderr F I1213 00:21:04.932016 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-samples-operator" (529 of 955) 2025-12-13T00:21:04.982652353+00:00 stderr F I1213 00:21:04.982558 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-samples-operator" (529 of 955) 2025-12-13T00:21:04.982741126+00:00 stderr F I1213 00:21:04.982721 1 
sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:04.982796317+00:00 stderr F I1213 00:21:04.982773 1 sync_worker.go:999] Running sync for rolebinding "openshift/cluster-samples-operator-openshift-edit" (530 of 955) 2025-12-13T00:21:05.032791516+00:00 stderr F I1213 00:21:05.032738 1 sync_worker.go:1014] Done syncing for rolebinding "openshift/cluster-samples-operator-openshift-edit" (530 of 955) 2025-12-13T00:21:05.032889119+00:00 stderr F I1213 00:21:05.032875 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:05.032926710+00:00 stderr F I1213 00:21:05.032911 1 sync_worker.go:999] Running sync for role "openshift-config/coreos-pull-secret-reader" (531 of 955) 2025-12-13T00:21:05.082495488+00:00 stderr F I1213 00:21:05.082420 1 sync_worker.go:1014] Done syncing for role "openshift-config/coreos-pull-secret-reader" (531 of 955) 2025-12-13T00:21:05.082495488+00:00 stderr F I1213 00:21:05.082454 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:05.082495488+00:00 stderr F I1213 00:21:05.082462 1 sync_worker.go:999] Running sync for rolebinding "openshift-config/cluster-samples-operator-openshift-config-secret-reader" (532 of 955) 2025-12-13T00:21:05.131769167+00:00 stderr F I1213 00:21:05.131693 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config/cluster-samples-operator-openshift-config-secret-reader" (532 of 955) 2025-12-13T00:21:05.131769167+00:00 stderr F I1213 00:21:05.131732 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:05.131769167+00:00 stderr F I1213 00:21:05.131740 1 sync_worker.go:999] Running sync for service "openshift-cluster-samples-operator/metrics" (533 of 955) 2025-12-13T00:21:05.182070665+00:00 stderr F I1213 00:21:05.182019 1 sync_worker.go:1014] Done syncing for service "openshift-cluster-samples-operator/metrics" (533 of 955) 2025-12-13T00:21:05.182130066+00:00 stderr F I1213 
00:21:05.182119 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:05.182160757+00:00 stderr F I1213 00:21:05.182147 1 sync_worker.go:999] Running sync for deployment "openshift-cluster-samples-operator/cluster-samples-operator" (534 of 955) 2025-12-13T00:21:05.232411613+00:00 stderr F I1213 00:21:05.232364 1 sync_worker.go:1014] Done syncing for deployment "openshift-cluster-samples-operator/cluster-samples-operator" (534 of 955) 2025-12-13T00:21:05.232468025+00:00 stderr F I1213 00:21:05.232457 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:05.232497415+00:00 stderr F I1213 00:21:05.232484 1 sync_worker.go:999] Running sync for servicemonitor "openshift-cluster-samples-operator/cluster-samples-operator" (535 of 955) 2025-12-13T00:21:05.281810536+00:00 stderr F I1213 00:21:05.281740 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-cluster-samples-operator/cluster-samples-operator" (535 of 955) 2025-12-13T00:21:05.281810536+00:00 stderr F I1213 00:21:05.281776 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:05.281810536+00:00 stderr F I1213 00:21:05.281782 1 sync_worker.go:999] Running sync for clusteroperator "openshift-samples" (536 of 955) 2025-12-13T00:21:05.282022632+00:00 stderr F I1213 00:21:05.281992 1 sync_worker.go:1014] Done syncing for clusteroperator "openshift-samples" (536 of 955) 2025-12-13T00:21:05.282022632+00:00 stderr F I1213 00:21:05.282006 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:05.282022632+00:00 stderr F I1213 00:21:05.282011 1 sync_worker.go:999] Running sync for imagestream "openshift/cli" (537 of 955) 2025-12-13T00:21:05.333980255+00:00 stderr F E1213 00:21:05.333900 1 task.go:122] error running apply for imagestream "openshift/cli" (537 of 955): an error on the server ("Internal Server Error: 
\"/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/cli\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io cli) 2025-12-13T00:21:05.763515965+00:00 stderr F I1213 00:21:05.763447 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:05.763805993+00:00 stderr F I1213 00:21:05.763776 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:05.763805993+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:05.763805993+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:05.763869565+00:00 stderr F I1213 00:21:05.763844 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (408.991µs) 2025-12-13T00:21:05.763869565+00:00 stderr F I1213 00:21:05.763863 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:05.763961257+00:00 stderr F I1213 00:21:05.763914 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:05.764008268+00:00 stderr F I1213 00:21:05.763992 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:05.764081500+00:00 stderr F I1213 00:21:05.764009 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) 
status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:05.764403859+00:00 stderr F I1213 00:21:05.764363 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:05.819544417+00:00 stderr F W1213 00:21:05.819494 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:05.821508440+00:00 stderr F I1213 00:21:05.821481 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (57.613295ms) 2025-12-13T00:21:06.764230119+00:00 stderr F I1213 00:21:06.764177 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:06.764460815+00:00 stderr F I1213 00:21:06.764427 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:06.764460815+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:06.764460815+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:06.764525807+00:00 stderr F I1213 00:21:06.764497 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (330.769µs) 2025-12-13T00:21:06.764525807+00:00 stderr F I1213 00:21:06.764515 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:06.764587898+00:00 stderr F I1213 00:21:06.764560 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:06.764624839+00:00 stderr F I1213 00:21:06.764605 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:06.764670821+00:00 stderr F I1213 00:21:06.764616 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:06.764900147+00:00 stderr F I1213 00:21:06.764864 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:06.787584869+00:00 stderr F W1213 00:21:06.787528 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:06.789042389+00:00 stderr F I1213 00:21:06.789008 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.490191ms) 2025-12-13T00:21:07.765267011+00:00 stderr F I1213 00:21:07.764792 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:07.765550270+00:00 stderr F I1213 00:21:07.765503 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:07.765550270+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:07.765550270+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:07.765586901+00:00 stderr F I1213 00:21:07.765566 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (784.712µs) 2025-12-13T00:21:07.765586901+00:00 stderr F I1213 00:21:07.765579 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:07.765663303+00:00 stderr F I1213 00:21:07.765619 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:07.765680873+00:00 stderr F I1213 00:21:07.765665 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:07.765736405+00:00 stderr F I1213 00:21:07.765672 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:07.765987432+00:00 stderr F I1213 00:21:07.765910 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:07.790227005+00:00 stderr F W1213 00:21:07.790143 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:07.793832083+00:00 stderr F I1213 00:21:07.793770 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.18302ms) 2025-12-13T00:21:08.141608457+00:00 stderr F I1213 00:21:08.141532 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:08.141691510+00:00 stderr F I1213 00:21:08.141648 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:08.141711980+00:00 stderr F I1213 00:21:08.141700 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:08.141801753+00:00 stderr F I1213 00:21:08.141710 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, 
LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:08.142207323+00:00 stderr F I1213 00:21:08.142082 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:08.183216030+00:00 stderr F W1213 00:21:08.183145 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:08.185570443+00:00 stderr F I1213 00:21:08.185476 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (43.953295ms) 2025-12-13T00:21:08.766148610+00:00 stderr F I1213 00:21:08.766082 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:08.766369116+00:00 stderr F I1213 00:21:08.766341 1 availableupdates.go:145] Requeue 
available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:08.766369116+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:08.766369116+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:08.766423038+00:00 stderr F I1213 00:21:08.766393 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (321.369µs) 2025-12-13T00:21:08.766423038+00:00 stderr F I1213 00:21:08.766410 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:08.766514490+00:00 stderr F I1213 00:21:08.766475 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:08.766547341+00:00 stderr F I1213 00:21:08.766532 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:08.766605673+00:00 stderr F I1213 00:21:08.766542 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:08.766865749+00:00 stderr F I1213 00:21:08.766823 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:08.789689745+00:00 stderr F W1213 00:21:08.789666 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:08.791159555+00:00 stderr F I1213 00:21:08.791120 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.707107ms) 2025-12-13T00:21:09.490851436+00:00 stderr F E1213 00:21:09.490679 1 task.go:122] error running apply for imagestream "openshift/driver-toolkit" (657 of 955): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/driver-toolkit\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io driver-toolkit) 2025-12-13T00:21:09.766571557+00:00 stderr F I1213 00:21:09.766502 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:09.766779153+00:00 stderr F I1213 00:21:09.766751 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:09.766779153+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:09.766779153+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:09.766830094+00:00 stderr F I1213 00:21:09.766803 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (311.958µs) 2025-12-13T00:21:09.766830094+00:00 stderr F I1213 00:21:09.766820 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:09.766896976+00:00 stderr F I1213 00:21:09.766866 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:09.766942957+00:00 stderr F I1213 00:21:09.766913 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:09.766996238+00:00 stderr F I1213 00:21:09.766923 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:09.767200834+00:00 stderr F I1213 00:21:09.767166 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:09.792502426+00:00 stderr F W1213 00:21:09.792435 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:09.795375904+00:00 stderr F I1213 00:21:09.795332 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.506659ms) 2025-12-13T00:21:10.767051174+00:00 stderr F I1213 00:21:10.767007 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:10.767283391+00:00 stderr F I1213 00:21:10.767262 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:10.767283391+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:10.767283391+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:10.767326012+00:00 stderr F I1213 00:21:10.767309 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (312.018µs) 2025-12-13T00:21:10.767333202+00:00 stderr F I1213 00:21:10.767327 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:10.767392103+00:00 stderr F I1213 00:21:10.767371 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:10.767436975+00:00 stderr F I1213 00:21:10.767424 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:10.767494866+00:00 stderr F I1213 00:21:10.767436 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:10.767800854+00:00 stderr F I1213 00:21:10.767759 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:10.790911568+00:00 stderr F W1213 00:21:10.790143 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:10.791785752+00:00 stderr F I1213 00:21:10.791556 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.226774ms) 2025-12-13T00:21:11.768275372+00:00 stderr F I1213 00:21:11.768187 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:11.768557539+00:00 stderr F I1213 00:21:11.768504 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:11.768557539+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:11.768557539+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:11.768578300+00:00 stderr F I1213 00:21:11.768566 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (390.02µs) 2025-12-13T00:21:11.768615211+00:00 stderr F I1213 00:21:11.768581 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:11.768669482+00:00 stderr F I1213 00:21:11.768624 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:11.768702443+00:00 stderr F I1213 00:21:11.768676 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:11.768774846+00:00 stderr F I1213 00:21:11.768694 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), 
Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:11.769065934+00:00 stderr F I1213 00:21:11.769022 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:11.792771233+00:00 stderr F W1213 00:21:11.792555 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:11.795572388+00:00 stderr F I1213 00:21:11.795520 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.932347ms) 2025-12-13T00:21:12.220124805+00:00 stderr F I1213 00:21:12.219979 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-12-13T00:21:12.220531176+00:00 stderr F I1213 00:21:12.220140 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-12-13T00:21:12.220531176+00:00 stderr F I1213 00:21:12.220189 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:12.220531176+00:00 stderr F I1213 00:21:12.220269 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:12.220531176+00:00 stderr F I1213 00:21:12.220346 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:12.220531176+00:00 stderr F I1213 00:21:12.220407 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:12.220692840+00:00 stderr F I1213 00:21:12.220571 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), 
Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:12.220827414+00:00 stderr F I1213 00:21:12.220772 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 
2025-12-13T00:21:12.220827414+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:12.220827414+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:12.220854155+00:00 stderr F I1213 00:21:12.220844 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (645.267µs) 2025-12-13T00:21:12.220912066+00:00 stderr F I1213 00:21:12.220889 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterVersionOverrides' reason='ClusterVersionOverridesSet' message='Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.') 2025-12-13T00:21:12.220912066+00:00 stderr F I1213 00:21:12.220906 1 upgradeable.go:123] Cluster current version=4.16.0 2025-12-13T00:21:12.220978538+00:00 stderr F I1213 00:21:12.220954 1 upgradeable.go:92] Upgradeability condition failed (type='UpgradeableClusterOperators' reason='DegradedPool' message='Cluster operator machine-config should not be upgraded between minor versions: One or more machine config pools are degraded, please see `oc get mcp` for further details and resolve before upgrading') 2025-12-13T00:21:12.220993938+00:00 stderr F I1213 00:21:12.220971 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 
2025-12-13T00:21:12.221052840+00:00 stderr F I1213 00:21:12.221027 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (900.554µs) 2025-12-13T00:21:12.241870071+00:00 stderr F W1213 00:21:12.241805 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:12.243350032+00:00 stderr F I1213 00:21:12.243304 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.117473ms) 2025-12-13T00:21:12.243350032+00:00 stderr F I1213 00:21:12.243331 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:12.243445724+00:00 stderr F I1213 00:21:12.243403 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:12.243454645+00:00 stderr F I1213 00:21:12.243448 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:12.243796634+00:00 stderr F I1213 00:21:12.243454 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), 
Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:12.243796634+00:00 stderr F I1213 00:21:12.243685 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:12.266625060+00:00 stderr F W1213 00:21:12.266556 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:12.268058729+00:00 stderr F I1213 00:21:12.267982 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.647986ms) 2025-12-13T00:21:12.270789042+00:00 stderr F I1213 
00:21:12.270750 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-12-13T00:21:12.270809303+00:00 stderr F I1213 00:21:12.270784 1 upgradeable.go:69] Upgradeability last checked 49.758223ms ago, will not re-check until 2025-12-13T00:23:12Z 2025-12-13T00:21:12.270809303+00:00 stderr F I1213 00:21:12.270790 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (44.161µs) 2025-12-13T00:21:12.270809303+00:00 stderr F I1213 00:21:12.270797 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:12.270841574+00:00 stderr F I1213 00:21:12.270818 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:12.270841574+00:00 stderr F I1213 00:21:12.270833 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:12.270901585+00:00 stderr F I1213 00:21:12.270867 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:12.270962617+00:00 stderr F I1213 00:21:12.270878 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" 
image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:12.271041229+00:00 stderr F I1213 00:21:12.271001 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:12.271041229+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:12.271041229+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:12.271072850+00:00 stderr F I1213 00:21:12.271049 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (233.396µs) 2025-12-13T00:21:12.271195253+00:00 stderr F I1213 00:21:12.271140 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:12.299484236+00:00 stderr F W1213 00:21:12.299412 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:12.300987538+00:00 stderr F I1213 00:21:12.300957 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.156003ms) 2025-12-13T00:21:12.300987538+00:00 stderr F I1213 00:21:12.300981 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:12.301053589+00:00 stderr F I1213 00:21:12.301029 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:12.301087240+00:00 stderr F I1213 00:21:12.301069 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:12.301129911+00:00 stderr F I1213 00:21:12.301079 1 status.go:185] Synchronizing status 
errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:12.301356537+00:00 stderr F I1213 00:21:12.301316 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:12.328911460+00:00 stderr F W1213 00:21:12.328856 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:12.330845603+00:00 stderr F I1213 00:21:12.330797 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.808934ms) 2025-12-13T00:21:12.770181498+00:00 stderr F I1213 00:21:12.769570 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:12.770475436+00:00 stderr F I1213 00:21:12.770443 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:12.770475436+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:12.770475436+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:12.770515107+00:00 stderr F I1213 00:21:12.770489 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (931.884µs) 2025-12-13T00:21:12.770515107+00:00 stderr F I1213 00:21:12.770506 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:12.770596309+00:00 stderr F I1213 00:21:12.770554 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:12.770669731+00:00 stderr F I1213 00:21:12.770634 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:12.770719333+00:00 stderr F I1213 00:21:12.770646 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:12.771039011+00:00 stderr F I1213 00:21:12.770971 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:12.792164912+00:00 stderr F W1213 00:21:12.792084 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:12.794196797+00:00 stderr F I1213 00:21:12.794123 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.613018ms) 2025-12-13T00:21:13.770819540+00:00 stderr F I1213 00:21:13.770746 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:13.771051286+00:00 stderr F I1213 00:21:13.771021 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:13.771051286+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:13.771051286+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:13.771074786+00:00 stderr F I1213 00:21:13.771062 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (328.069µs) 2025-12-13T00:21:13.771082047+00:00 stderr F I1213 00:21:13.771075 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:13.771145958+00:00 stderr F I1213 00:21:13.771113 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:13.771179149+00:00 stderr F I1213 00:21:13.771161 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:13.771227431+00:00 stderr F I1213 00:21:13.771171 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:13.771449056+00:00 stderr F I1213 00:21:13.771413 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:13.800016108+00:00 stderr F W1213 00:21:13.799965 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:13.801437996+00:00 stderr F I1213 00:21:13.801403 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.325079ms) 2025-12-13T00:21:14.771238145+00:00 stderr F I1213 00:21:14.771157 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:14.771528463+00:00 stderr F I1213 00:21:14.771500 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:14.771528463+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:14.771528463+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:14.771573554+00:00 stderr F I1213 00:21:14.771557 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (411.291µs) 2025-12-13T00:21:14.771580694+00:00 stderr F I1213 00:21:14.771574 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:14.771638216+00:00 stderr F I1213 00:21:14.771615 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:14.771670027+00:00 stderr F I1213 00:21:14.771659 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:14.771725098+00:00 stderr F I1213 00:21:14.771669 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), 
Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:14.772248353+00:00 stderr F I1213 00:21:14.772212 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:14.794831253+00:00 stderr F W1213 00:21:14.794774 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:14.796406795+00:00 stderr F I1213 00:21:14.796375 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.796049ms) 2025-12-13T00:21:15.340599341+00:00 stderr F E1213 00:21:15.340540 1 task.go:122] error running apply for imagestream "openshift/cli" (537 of 955): an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/cli\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io cli) 2025-12-13T00:21:15.772047623+00:00 stderr F I1213 00:21:15.771966 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:15.772259459+00:00 stderr F I1213 00:21:15.772225 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:15.772259459+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:15.772259459+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:15.772310820+00:00 stderr F I1213 00:21:15.772284 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (331.309µs) 2025-12-13T00:21:15.772310820+00:00 stderr F I1213 00:21:15.772303 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:15.772402742+00:00 stderr F I1213 00:21:15.772372 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:15.772434323+00:00 stderr F I1213 00:21:15.772418 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:15.772486925+00:00 stderr F I1213 00:21:15.772429 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:15.772717841+00:00 stderr F I1213 00:21:15.772675 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:15.794288912+00:00 stderr F W1213 00:21:15.794223 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:15.796629026+00:00 stderr F I1213 00:21:15.796578 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.272334ms) 2025-12-13T00:21:16.772799488+00:00 stderr F I1213 00:21:16.772733 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:16.773009943+00:00 stderr F I1213 00:21:16.772986 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:16.773009943+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:16.773009943+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:16.773113926+00:00 stderr F I1213 00:21:16.773048 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (325.549µs) 2025-12-13T00:21:16.773113926+00:00 stderr F I1213 00:21:16.773077 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:16.773159417+00:00 stderr F I1213 00:21:16.773134 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:16.773213039+00:00 stderr F I1213 00:21:16.773192 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:16.773267280+00:00 stderr F I1213 00:21:16.773204 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:16.773525277+00:00 stderr F I1213 00:21:16.773482 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:16.794582516+00:00 stderr F W1213 00:21:16.794522 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:16.796072945+00:00 stderr F I1213 00:21:16.796035 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.956369ms) 2025-12-13T00:21:16.839434026+00:00 stderr F I1213 00:21:16.839368 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 2025-12-13T00:21:17.774139958+00:00 stderr F I1213 00:21:17.774091 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:17.774406375+00:00 stderr F I1213 00:21:17.774379 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:17.774406375+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:17.774406375+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:17.774474107+00:00 stderr F I1213 00:21:17.774452 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (370.86µs) 2025-12-13T00:21:17.774481487+00:00 stderr F I1213 00:21:17.774471 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:17.774543689+00:00 stderr F I1213 00:21:17.774517 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:17.774589700+00:00 stderr F I1213 00:21:17.774572 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:17.774655432+00:00 stderr F I1213 00:21:17.774585 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:17.775449904+00:00 stderr F I1213 00:21:17.774966 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:17.795588917+00:00 stderr F W1213 00:21:17.795547 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:17.797446608+00:00 stderr F I1213 00:21:17.797405 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (22.929669ms) 2025-12-13T00:21:18.242027554+00:00 stderr F I1213 00:21:18.241971 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:116 
2025-12-13T00:21:18.774973765+00:00 stderr F I1213 00:21:18.774876 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:18.775237323+00:00 stderr F I1213 00:21:18.775195 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:18.775237323+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:18.775237323+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:18.775311255+00:00 stderr F I1213 00:21:18.775262 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (412.892µs) 2025-12-13T00:21:18.775311255+00:00 stderr F I1213 00:21:18.775285 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:18.775381946+00:00 stderr F I1213 00:21:18.775338 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:18.775422718+00:00 stderr F I1213 00:21:18.775398 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:18.775481859+00:00 stderr F I1213 00:21:18.775411 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:18.775716915+00:00 stderr F I1213 00:21:18.775669 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:18.798688935+00:00 stderr F W1213 00:21:18.798592 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:18.800109933+00:00 stderr F I1213 00:21:18.800071 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.781018ms) 2025-12-13T00:21:19.775572807+00:00 stderr F I1213 00:21:19.775485 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:19.775834594+00:00 stderr F I1213 00:21:19.775778 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:19.775834594+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:19.775834594+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:19.775871285+00:00 stderr F I1213 00:21:19.775848 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (386.68µs) 2025-12-13T00:21:19.775871285+00:00 stderr F I1213 00:21:19.775865 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:19.775957137+00:00 stderr F I1213 00:21:19.775914 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:19.776023239+00:00 stderr F I1213 00:21:19.775991 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:19.776075050+00:00 stderr F I1213 00:21:19.776006 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:19.776357648+00:00 stderr F I1213 00:21:19.776313 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:19.799016879+00:00 stderr F W1213 00:21:19.798970 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:19.800689214+00:00 stderr F I1213 00:21:19.800651 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.783949ms) 2025-12-13T00:21:20.776201618+00:00 stderr F I1213 00:21:20.776090 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:20.776504756+00:00 stderr F I1213 00:21:20.776448 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:20.776504756+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:20.776504756+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:20.776560148+00:00 stderr F I1213 00:21:20.776517 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (451.472µs) 2025-12-13T00:21:20.776560148+00:00 stderr F I1213 00:21:20.776545 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:20.776689881+00:00 stderr F I1213 00:21:20.776604 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:20.776720102+00:00 stderr F I1213 00:21:20.776698 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:20.776816315+00:00 stderr F I1213 00:21:20.776709 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:20.777131013+00:00 stderr F I1213 00:21:20.777063 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:20.810546325+00:00 stderr F W1213 00:21:20.810489 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:20.812036956+00:00 stderr F I1213 00:21:20.811926 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.380646ms) 2025-12-13T00:21:21.776853481+00:00 stderr F I1213 00:21:21.776772 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:21.777212941+00:00 stderr F I1213 00:21:21.777106 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:21.777212941+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:21.777212941+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:21.777241901+00:00 stderr F I1213 00:21:21.777221 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (460.203µs) 2025-12-13T00:21:21.777259012+00:00 stderr F I1213 00:21:21.777239 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:21.777324544+00:00 stderr F I1213 00:21:21.777286 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:21.777351344+00:00 stderr F I1213 00:21:21.777337 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:21.777418446+00:00 stderr F I1213 00:21:21.777347 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), 
Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:21.777694313+00:00 stderr F I1213 00:21:21.777636 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:21.805785991+00:00 stderr F W1213 00:21:21.801527 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:21.805785991+00:00 stderr F I1213 00:21:21.803037 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.796066ms)
2025-12-13T00:21:22.491887876+00:00 stderr F I1213 00:21:22.491777 1 task_graph.go:481] Running 27 on worker 0
2025-12-13T00:21:22.491887876+00:00 stderr F I1213 00:21:22.491832 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.491887876+00:00 stderr F I1213 00:21:22.491842 1 sync_worker.go:999] Running sync for priorityclass "openshift-user-critical" (579 of 955)
2025-12-13T00:21:22.501564778+00:00 stderr F I1213 00:21:22.501497 1 sync_worker.go:1014] Done syncing for priorityclass "openshift-user-critical" (579 of 955)
2025-12-13T00:21:22.501564778+00:00 stderr F I1213 00:21:22.501536 1 task_graph.go:481] Running 28 on worker 0
2025-12-13T00:21:22.501564778+00:00 stderr F I1213 00:21:22.501548 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.501614699+00:00 stderr F I1213 00:21:22.501556 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/etcd-dashboard" (814 of 955)
2025-12-13T00:21:22.505535895+00:00 stderr F I1213 00:21:22.505473 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/etcd-dashboard" (814 of 955)
2025-12-13T00:21:22.505535895+00:00 stderr F I1213 00:21:22.505505 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.505535895+00:00 stderr F I1213 00:21:22.505511 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-etcd" (815 of 955)
2025-12-13T00:21:22.507960050+00:00 stderr F I1213 00:21:22.507871 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-etcd" (815 of 955)
2025-12-13T00:21:22.507960050+00:00 stderr F I1213 00:21:22.507902 1 task_graph.go:481] Running 29 on worker 0
2025-12-13T00:21:22.507960050+00:00 stderr F I1213 00:21:22.507912 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.507960050+00:00 stderr F I1213 00:21:22.507917 1 sync_worker.go:999] Running sync for role "openshift-etcd-operator/prometheus-k8s" (866 of 955)
2025-12-13T00:21:22.510131759+00:00 stderr F I1213 00:21:22.510057 1 sync_worker.go:1014] Done syncing for role "openshift-etcd-operator/prometheus-k8s" (866 of 955)
2025-12-13T00:21:22.510131759+00:00 stderr F I1213 00:21:22.510078 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.510131759+00:00 stderr F I1213 00:21:22.510083 1 sync_worker.go:999] Running sync for rolebinding "openshift-etcd-operator/prometheus-k8s" (867 of 955)
2025-12-13T00:21:22.512594486+00:00 stderr F I1213 00:21:22.512527 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-etcd-operator/prometheus-k8s" (867 of 955)
2025-12-13T00:21:22.512594486+00:00 stderr F I1213 00:21:22.512552 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.512594486+00:00 stderr F I1213 00:21:22.512561 1 sync_worker.go:999] Running sync for prometheusrule "openshift-etcd-operator/etcd-prometheus-rules" (868 of 955)
2025-12-13T00:21:22.517955770+00:00 stderr F I1213 00:21:22.517896 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-etcd-operator/etcd-prometheus-rules" (868 of 955)
2025-12-13T00:21:22.517955770+00:00 stderr F I1213 00:21:22.517919 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.517955770+00:00 stderr F I1213 00:21:22.517924 1 sync_worker.go:999] Running sync for servicemonitor "openshift-etcd-operator/etcd-operator" (869 of 955)
2025-12-13T00:21:22.520602892+00:00 stderr F I1213 00:21:22.520555 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-etcd-operator/etcd-operator" (869 of 955)
2025-12-13T00:21:22.520602892+00:00 stderr F I1213 00:21:22.520582 1 task_graph.go:481] Running 30 on worker 0
2025-12-13T00:21:22.520602892+00:00 stderr F I1213 00:21:22.520590 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.520602892+00:00 stderr F I1213 00:21:22.520595 1 sync_worker.go:999] Running sync for namespace "openshift-cluster-version" (1 of 955)
2025-12-13T00:21:22.523466549+00:00 stderr F I1213 00:21:22.523384 1 sync_worker.go:1014] Done syncing for namespace "openshift-cluster-version" (1 of 955)
2025-12-13T00:21:22.523466549+00:00 stderr F I1213 00:21:22.523409 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.523466549+00:00 stderr F I1213 00:21:22.523417 1 sync_worker.go:999] Running sync for configmap "openshift-config/admin-acks" (2 of 955)
2025-12-13T00:21:22.525787691+00:00 stderr F I1213 00:21:22.525720 1 sync_worker.go:1014] Done syncing for configmap "openshift-config/admin-acks" (2 of 955)
2025-12-13T00:21:22.525787691+00:00 stderr F I1213 00:21:22.525755 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.525787691+00:00 stderr F I1213 00:21:22.525763 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/admin-gates" (3 of 955)
2025-12-13T00:21:22.528905516+00:00 stderr F I1213 00:21:22.528846 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/admin-gates" (3 of 955)
2025-12-13T00:21:22.528905516+00:00 stderr F I1213 00:21:22.528874 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.528905516+00:00 stderr F I1213 00:21:22.528882 1 sync_worker.go:999] Running sync for customresourcedefinition "clusteroperators.config.openshift.io" (4 of 955)
2025-12-13T00:21:22.532600345+00:00 stderr F I1213 00:21:22.532469 1 sync_worker.go:1014] Done syncing for customresourcedefinition "clusteroperators.config.openshift.io" (4 of 955)
2025-12-13T00:21:22.532600345+00:00 stderr F I1213 00:21:22.532499 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.532600345+00:00 stderr F I1213 00:21:22.532506 1 sync_worker.go:999] Running sync for customresourcedefinition "clusterversions.config.openshift.io" (5 of 955)
2025-12-13T00:21:22.541456204+00:00 stderr F I1213 00:21:22.541382 1 sync_worker.go:1014] Done syncing for customresourcedefinition "clusterversions.config.openshift.io" (5 of 955)
2025-12-13T00:21:22.541456204+00:00 stderr F I1213 00:21:22.541436 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.541478225+00:00 stderr F I1213 00:21:22.541448 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-version-operator" (6 of 955)
2025-12-13T00:21:22.544500706+00:00 stderr F I1213 00:21:22.544465 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-version-operator" (6 of 955)
2025-12-13T00:21:22.544500706+00:00 stderr F I1213 00:21:22.544487 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.544500706+00:00 stderr F I1213 00:21:22.544493 1 sync_worker.go:999] Running sync for deployment "openshift-cluster-version/cluster-version-operator" (7 of 955)
2025-12-13T00:21:22.550392395+00:00 stderr F I1213 00:21:22.550331 1 sync_worker.go:1014] Done syncing for deployment "openshift-cluster-version/cluster-version-operator" (7 of 955)
2025-12-13T00:21:22.550392395+00:00 stderr F I1213 00:21:22.550369 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.550392395+00:00 stderr F I1213 00:21:22.550375 1 sync_worker.go:999] Running sync for service "openshift-cluster-version/cluster-version-operator" (8 of 955)
2025-12-13T00:21:22.552965754+00:00 stderr F I1213 00:21:22.552902 1 sync_worker.go:1014] Done syncing for service "openshift-cluster-version/cluster-version-operator" (8 of 955)
2025-12-13T00:21:22.552965754+00:00 stderr F I1213 00:21:22.552952 1 task_graph.go:481] Running 31 on worker 0
2025-12-13T00:21:22.552985855+00:00 stderr F I1213 00:21:22.552964 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.552985855+00:00 stderr F I1213 00:21:22.552972 1 sync_worker.go:999] Running sync for operatorgroup "openshift-monitoring/openshift-cluster-monitoring" (826 of 955)
2025-12-13T00:21:22.556266143+00:00 stderr F I1213 00:21:22.556230 1 sync_worker.go:1014] Done syncing for operatorgroup "openshift-monitoring/openshift-cluster-monitoring" (826 of 955)
2025-12-13T00:21:22.556266143+00:00 stderr F I1213 00:21:22.556252 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.556266143+00:00 stderr F I1213 00:21:22.556259 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-cluster-total" (827 of 955)
2025-12-13T00:21:22.563600912+00:00 stderr F I1213 00:21:22.562866 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-cluster-total" (827 of 955)
2025-12-13T00:21:22.563600912+00:00 stderr F I1213 00:21:22.562910 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.563600912+00:00 stderr F I1213 00:21:22.562916 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-cluster-total" (828 of 955)
2025-12-13T00:21:22.565157813+00:00 stderr F I1213 00:21:22.565129 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-cluster-total" (828 of 955)
2025-12-13T00:21:22.565157813+00:00 stderr F I1213 00:21:22.565148 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.565174554+00:00 stderr F I1213 00:21:22.565155 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-k8s-resources-cluster" (829 of 955)
2025-12-13T00:21:22.570643461+00:00 stderr F I1213 00:21:22.570596 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-k8s-resources-cluster" (829 of 955)
2025-12-13T00:21:22.570643461+00:00 stderr F I1213 00:21:22.570612 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.570643461+00:00 stderr F I1213 00:21:22.570617 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-cluster" (830 of 955)
2025-12-13T00:21:22.572630505+00:00 stderr F I1213 00:21:22.572595 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-cluster" (830 of 955)
2025-12-13T00:21:22.572630505+00:00 stderr F I1213 00:21:22.572611 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.572630505+00:00 stderr F I1213 00:21:22.572618 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-k8s-resources-namespace" (831 of 955)
2025-12-13T00:21:22.577839456+00:00 stderr F I1213 00:21:22.577784 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-k8s-resources-namespace" (831 of 955)
2025-12-13T00:21:22.577839456+00:00 stderr F I1213 00:21:22.577812 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.577839456+00:00 stderr F I1213 00:21:22.577818 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-namespace" (832 of 955)
2025-12-13T00:21:22.579825730+00:00 stderr F I1213 00:21:22.579760 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-namespace" (832 of 955)
2025-12-13T00:21:22.579825730+00:00 stderr F I1213 00:21:22.579802 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.579825730+00:00 stderr F I1213 00:21:22.579809 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-k8s-resources-node" (833 of 955)
2025-12-13T00:21:22.583564190+00:00 stderr F I1213 00:21:22.583508 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-k8s-resources-node" (833 of 955)
2025-12-13T00:21:22.583564190+00:00 stderr F I1213 00:21:22.583532 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.583564190+00:00 stderr F I1213 00:21:22.583538 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-node" (834 of 955)
2025-12-13T00:21:22.585309438+00:00 stderr F I1213 00:21:22.585261 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-node" (834 of 955)
2025-12-13T00:21:22.585309438+00:00 stderr F I1213 00:21:22.585280 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.585309438+00:00 stderr F I1213 00:21:22.585288 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-k8s-resources-pod" (835 of 955)
2025-12-13T00:21:22.590638082+00:00 stderr F I1213 00:21:22.590569 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-k8s-resources-pod" (835 of 955)
2025-12-13T00:21:22.590638082+00:00 stderr F I1213 00:21:22.590602 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.590638082+00:00 stderr F I1213 00:21:22.590609 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-pod" (836 of 955)
2025-12-13T00:21:22.593262771+00:00 stderr F I1213 00:21:22.593213 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-pod" (836 of 955)
2025-12-13T00:21:22.593262771+00:00 stderr F I1213 00:21:22.593230 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.593262771+00:00 stderr F I1213 00:21:22.593236 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-k8s-resources-workload" (837 of 955)
2025-12-13T00:21:22.598214875+00:00 stderr F I1213 00:21:22.598157 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-k8s-resources-workload" (837 of 955)
2025-12-13T00:21:22.598214875+00:00 stderr F I1213 00:21:22.598179 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.598214875+00:00 stderr F I1213 00:21:22.598184 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workload" (838 of 955)
2025-12-13T00:21:22.601098693+00:00 stderr F I1213 00:21:22.601045 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workload" (838 of 955)
2025-12-13T00:21:22.601098693+00:00 stderr F I1213 00:21:22.601080 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.601098693+00:00 stderr F I1213 00:21:22.601086 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-k8s-resources-workloads-namespace" (839 of 955)
2025-12-13T00:21:22.606220892+00:00 stderr F I1213 00:21:22.606163 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-k8s-resources-workloads-namespace" (839 of 955)
2025-12-13T00:21:22.606220892+00:00 stderr F I1213 00:21:22.606191 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.606220892+00:00 stderr F I1213 00:21:22.606198 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workloads-namespace" (840 of 955)
2025-12-13T00:21:22.609426508+00:00 stderr F I1213 00:21:22.609344 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-k8s-resources-workloads-namespace" (840 of 955)
2025-12-13T00:21:22.609426508+00:00 stderr F I1213 00:21:22.609409 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.609426508+00:00 stderr F I1213 00:21:22.609417 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-namespace-by-pod" (841 of 955)
2025-12-13T00:21:22.613359255+00:00 stderr F I1213 00:21:22.613285 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-namespace-by-pod" (841 of 955)
2025-12-13T00:21:22.613359255+00:00 stderr F I1213 00:21:22.613328 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.613359255+00:00 stderr F I1213 00:21:22.613334 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-namespace-by-pod" (842 of 955)
2025-12-13T00:21:22.615734369+00:00 stderr F I1213 00:21:22.615681 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-namespace-by-pod" (842 of 955)
2025-12-13T00:21:22.615734369+00:00 stderr F I1213 00:21:22.615700 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.615734369+00:00 stderr F I1213 00:21:22.615706 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-node-cluster-rsrc-use" (843 of 955)
2025-12-13T00:21:22.618888164+00:00 stderr F I1213 00:21:22.618819 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-node-cluster-rsrc-use" (843 of 955)
2025-12-13T00:21:22.618908614+00:00 stderr F I1213 00:21:22.618882 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.618908614+00:00 stderr F I1213 00:21:22.618898 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-node-cluster-rsrc-use" (844 of 955)
2025-12-13T00:21:22.621706680+00:00 stderr F I1213 00:21:22.621603 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-node-cluster-rsrc-use" (844 of 955)
2025-12-13T00:21:22.621706680+00:00 stderr F I1213 00:21:22.621647 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.621706680+00:00 stderr F I1213 00:21:22.621656 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-node-rsrc-use" (845 of 955)
2025-12-13T00:21:22.622781218+00:00 stderr F I1213 00:21:22.622751 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:116
2025-12-13T00:21:22.622874971+00:00 stderr F I1213 00:21:22.622847 1 sync_worker.go:234] Notify the sync worker: Cluster operator authentication changed Available from "True" to "False"
2025-12-13T00:21:22.625107101+00:00 stderr F I1213 00:21:22.625034 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-node-rsrc-use" (845 of 955)
2025-12-13T00:21:22.625107101+00:00 stderr F I1213 00:21:22.625063 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.625107101+00:00 stderr F I1213 00:21:22.625071 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-node-rsrc-use" (846 of 955)
2025-12-13T00:21:22.627320021+00:00 stderr F I1213 00:21:22.627288 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-node-rsrc-use" (846 of 955)
2025-12-13T00:21:22.627365603+00:00 stderr F I1213 00:21:22.627355 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.627394363+00:00 stderr F I1213 00:21:22.627382 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-pod-total" (847 of 955)
2025-12-13T00:21:22.631131744+00:00 stderr F I1213 00:21:22.630365 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-pod-total" (847 of 955)
2025-12-13T00:21:22.631131744+00:00 stderr F I1213 00:21:22.630399 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.631131744+00:00 stderr F I1213 00:21:22.630406 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-pod-total" (848 of 955)
2025-12-13T00:21:22.632207093+00:00 stderr F I1213 00:21:22.632167 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-pod-total" (848 of 955)
2025-12-13T00:21:22.632207093+00:00 stderr F I1213 00:21:22.632187 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.632207093+00:00 stderr F I1213 00:21:22.632193 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/dashboard-prometheus" (849 of 955)
2025-12-13T00:21:22.635292187+00:00 stderr F I1213 00:21:22.635255 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/dashboard-prometheus" (849 of 955)
2025-12-13T00:21:22.635292187+00:00 stderr F I1213 00:21:22.635288 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.635309227+00:00 stderr F I1213 00:21:22.635295 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-prometheus" (850 of 955)
2025-12-13T00:21:22.643891689+00:00 stderr F I1213 00:21:22.643838 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-prometheus" (850 of 955)
2025-12-13T00:21:22.643891689+00:00 stderr F I1213 00:21:22.643876 1 task_graph.go:481] Running 32 on worker 0
2025-12-13T00:21:22.643912829+00:00 stderr F I1213 00:21:22.643891 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.643912829+00:00 stderr F I1213 00:21:22.643898 1 sync_worker.go:999] Running sync for role "openshift-kube-controller-manager-operator/prometheus-k8s" (896 of 955)
2025-12-13T00:21:22.694806702+00:00 stderr F I1213 00:21:22.694749 1 sync_worker.go:1014] Done syncing for role "openshift-kube-controller-manager-operator/prometheus-k8s" (896 of 955)
2025-12-13T00:21:22.694806702+00:00 stderr F I1213 00:21:22.694779 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.694806702+00:00 stderr F I1213 00:21:22.694785 1 sync_worker.go:999] Running sync for role "openshift-kube-controller-manager-operator/prometheus-k8s" (897 of 955)
2025-12-13T00:21:22.745181252+00:00 stderr F I1213 00:21:22.745111 1 sync_worker.go:1014] Done syncing for role "openshift-kube-controller-manager-operator/prometheus-k8s" (897 of 955)
2025-12-13T00:21:22.745181252+00:00 stderr F I1213 00:21:22.745144 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.745181252+00:00 stderr F I1213 00:21:22.745149 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-controller-manager-operator/prometheus-k8s" (898 of 955)
2025-12-13T00:21:22.778311316+00:00 stderr F I1213 00:21:22.778219 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:21:22.778595043+00:00 stderr F I1213 00:21:22.778569 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:21:22.778595043+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:21:22.778595043+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:21:22.778659445+00:00 stderr F I1213 00:21:22.778635 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (429.611µs)
2025-12-13T00:21:22.778659445+00:00 stderr F I1213 00:21:22.778654 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:22.778729957+00:00 stderr F I1213 00:21:22.778702 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:22.778770578+00:00 stderr F I1213 00:21:22.778755 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:22.778835800+00:00 stderr F I1213 00:21:22.778766 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:22.779166119+00:00 stderr F I1213 00:21:22.779131 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:22.795128970+00:00 stderr F I1213 00:21:22.795003 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-controller-manager-operator/prometheus-k8s" (898 of 955)
2025-12-13T00:21:22.795128970+00:00 stderr F I1213 00:21:22.795036 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.795128970+00:00 stderr F I1213 00:21:22.795042 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-controller-manager-operator/prometheus-k8s" (899 of 955)
2025-12-13T00:21:22.800343450+00:00 stderr F W1213 00:21:22.800289 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:22.802381585+00:00 stderr F I1213 00:21:22.802316 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.658188ms)
2025-12-13T00:21:22.844703487+00:00 stderr F I1213 00:21:22.844615 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-controller-manager-operator/prometheus-k8s" (899 of 955)
2025-12-13T00:21:22.844703487+00:00 stderr F I1213 00:21:22.844665 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.844703487+00:00 stderr F I1213 00:21:22.844674 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (900 of 955)
2025-12-13T00:21:22.897054880+00:00 stderr F I1213 00:21:22.896911 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (900 of 955)
2025-12-13T00:21:22.897054880+00:00 stderr F I1213 00:21:22.896992 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.897054880+00:00 stderr F I1213 00:21:22.897009 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (901 of 955)
2025-12-13T00:21:22.946380551+00:00 stderr F I1213 00:21:22.946297 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (901 of 955)
2025-12-13T00:21:22.946380551+00:00 stderr F I1213 00:21:22.946345 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:22.946380551+00:00 stderr F I1213 00:21:22.946359 1 sync_worker.go:999] Running sync for role "openshift-kube-controller-manager/prometheus-k8s" (902 of 955)
2025-12-13T00:21:23.001036335+00:00 stderr F I1213 00:21:23.000363 1 sync_worker.go:1014] Done syncing for role "openshift-kube-controller-manager/prometheus-k8s" (902 of 955)
2025-12-13T00:21:23.001036335+00:00 stderr F I1213 00:21:23.000423 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:23.001036335+00:00 stderr F I1213 00:21:23.000439 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-controller-manager/prometheus-k8s" (903 of 955)
2025-12-13T00:21:23.049039711+00:00 stderr F I1213 00:21:23.047241 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-controller-manager/prometheus-k8s" (903 of 955)
2025-12-13T00:21:23.049039711+00:00 stderr F I1213 00:21:23.047287 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:23.049039711+00:00 stderr F I1213 00:21:23.047298 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-controller-manager/kube-controller-manager" (904 of 955)
2025-12-13T00:21:23.095315200+00:00 stderr F I1213 00:21:23.095229 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-controller-manager/kube-controller-manager" (904 of 955)
2025-12-13T00:21:23.095315200+00:00 stderr F I1213 00:21:23.095270 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:23.095315200+00:00 stderr F I1213 00:21:23.095278 1 sync_worker.go:999] Running sync for role "openshift-kube-controller-manager/prometheus-k8s" (905 of 955)
2025-12-13T00:21:23.140855068+00:00 stderr F I1213 00:21:23.140791 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:23.140917320+00:00 stderr F I1213 00:21:23.140892 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:23.141011333+00:00 stderr F I1213 00:21:23.140985 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:23.141067034+00:00 stderr F I1213 00:21:23.140997 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:23.141304720+00:00 stderr F I1213 00:21:23.141257 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:23.144357803+00:00 stderr F I1213 00:21:23.144306 1 sync_worker.go:1014] Done syncing for role "openshift-kube-controller-manager/prometheus-k8s" (905 of 955)
2025-12-13T00:21:23.144357803+00:00 stderr F I1213 00:21:23.144335 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:23.144357803+00:00 stderr F I1213 00:21:23.144341 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-controller-manager/prometheus-k8s" (906 of 955)
2025-12-13T00:21:23.168276319+00:00 stderr F W1213 00:21:23.168196 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:23.170223850+00:00 stderr F I1213 00:21:23.169805 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.019033ms)
2025-12-13T00:21:23.194914507+00:00 stderr F I1213 00:21:23.194845 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-controller-manager/prometheus-k8s" (906 of 955)
2025-12-13T00:21:23.194914507+00:00 stderr F I1213 00:21:23.194877 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:23.194914507+00:00 stderr F I1213 00:21:23.194882 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-controller-manager/kube-controller-manager" (907 of 955)
2025-12-13T00:21:23.246404047+00:00 stderr F I1213 00:21:23.246318 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-controller-manager/kube-controller-manager" (907 of 955)
2025-12-13T00:21:23.246404047+00:00 stderr F I1213 00:21:23.246366 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:23.246404047+00:00 stderr F I1213 00:21:23.246372 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (908 of 955)
2025-12-13T00:21:23.298621275+00:00 stderr F I1213 00:21:23.298559 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (908 of 955)
2025-12-13T00:21:23.298621275+00:00 stderr F I1213 00:21:23.298595 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:23.298621275+00:00 stderr F I1213 00:21:23.298603 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (909 of 955)
2025-12-13T00:21:23.345534872+00:00 stderr F I1213 00:21:23.345449 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (909 of 955)
2025-12-13T00:21:23.345534872+00:00 stderr F I1213 00:21:23.345488 1 task_graph.go:481] Running 33 on worker 0
2025-12-13T00:21:23.345534872+00:00 stderr F I1213 00:21:23.345499 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:23.345534872+00:00 stderr F I1213 00:21:23.345505 1 sync_worker.go:999] Running sync for role "openshift-service-ca-operator/prometheus-k8s" (953 of 955)
2025-12-13T00:21:23.394701519+00:00 stderr F I1213 00:21:23.394626 1 sync_worker.go:1014] Done syncing for role "openshift-service-ca-operator/prometheus-k8s" (953 of 955)
2025-12-13T00:21:23.394701519+00:00 stderr F I1213 00:21:23.394663 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:23.394701519+00:00 stderr F I1213 00:21:23.394668 1 sync_worker.go:999] Running sync for rolebinding "openshift-service-ca-operator/prometheus-k8s" (954 of 955)
2025-12-13T00:21:23.445875389+00:00 stderr F I1213 00:21:23.445802 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-service-ca-operator/prometheus-k8s" (954 of 955)
2025-12-13T00:21:23.445875389+00:00 stderr F I1213 00:21:23.445833 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:23.445875389+00:00 stderr F I1213 00:21:23.445839 1 sync_worker.go:999] Running sync for servicemonitor "openshift-service-ca-operator/service-ca-operator" (955 of 955)
2025-12-13T00:21:23.495601542+00:00 stderr F I1213 00:21:23.495519 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-service-ca-operator/service-ca-operator" (955 of 955)
2025-12-13T00:21:23.495601542+00:00 stderr F I1213 00:21:23.495553 1 task_graph.go:481] Running 34 on worker 0
2025-12-13T00:21:23.495601542+00:00 stderr F I1213 00:21:23.495567 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:23.495601542+00:00 stderr F I1213 00:21:23.495572 1 sync_worker.go:999] Running sync for role "openshift-kube-apiserver-operator/prometheus-k8s" (874 of 955)
2025-12-13T00:21:23.544864320+00:00 stderr F I1213 00:21:23.544817 1 sync_worker.go:1014] Done syncing for role "openshift-kube-apiserver-operator/prometheus-k8s" (874 of 955)
2025-12-13T00:21:23.544924752+00:00 stderr F I1213 00:21:23.544914 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:23.544974073+00:00 stderr F I1213 00:21:23.544960 1 sync_worker.go:999] Running sync for role "openshift-kube-apiserver-operator/prometheus-k8s" (875 of 955)
2025-12-13T00:21:23.595557468+00:00 stderr F I1213 00:21:23.595444 1 sync_worker.go:1014] Done syncing for role "openshift-kube-apiserver-operator/prometheus-k8s" (875 of 955)
2025-12-13T00:21:23.595557468+00:00 stderr F I1213 00:21:23.595495 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:23.595557468+00:00 stderr F I1213 00:21:23.595508 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-apiserver-operator/prometheus-k8s" (876 of 955) 2025-12-13T00:21:23.645537757+00:00 stderr F I1213 00:21:23.645489 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-apiserver-operator/prometheus-k8s" (876 of 955) 2025-12-13T00:21:23.645662010+00:00 stderr F I1213 00:21:23.645650 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:23.645705302+00:00 stderr F I1213 00:21:23.645681 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-apiserver-operator/prometheus-k8s" (877 of 955) 2025-12-13T00:21:23.662075953+00:00 stderr F I1213 00:21:23.660988 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:23.694820338+00:00 stderr F I1213 00:21:23.694685 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-apiserver-operator/prometheus-k8s" (877 of 955) 2025-12-13T00:21:23.694820338+00:00 stderr F I1213 00:21:23.694726 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:23.694820338+00:00 stderr F I1213 00:21:23.694734 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-apiserver-operator/kube-apiserver-operator" (878 of 955) 2025-12-13T00:21:23.745585717+00:00 stderr F I1213 00:21:23.745476 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-apiserver-operator/kube-apiserver-operator" (878 of 955) 2025-12-13T00:21:23.745585717+00:00 stderr F I1213 00:21:23.745522 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:23.745585717+00:00 stderr F I1213 00:21:23.745531 1 sync_worker.go:999] Running sync for prometheusrule 
"openshift-kube-apiserver-operator/kube-apiserver-operator" (879 of 955) 2025-12-13T00:21:23.779370719+00:00 stderr F I1213 00:21:23.779188 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:23.779846671+00:00 stderr F I1213 00:21:23.779762 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:23.779846671+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:23.779846671+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:23.780014796+00:00 stderr F I1213 00:21:23.779883 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (705.558µs) 2025-12-13T00:21:23.780014796+00:00 stderr F I1213 00:21:23.779920 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:23.780137009+00:00 stderr F I1213 00:21:23.780079 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:23.780214951+00:00 stderr F I1213 00:21:23.780172 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:23.780388026+00:00 stderr F I1213 00:21:23.780196 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:23.780877599+00:00 stderr F I1213 00:21:23.780790 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), 
CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:23.794959829+00:00 stderr F I1213 00:21:23.794871 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-apiserver-operator/kube-apiserver-operator" (879 of 955) 2025-12-13T00:21:23.794959829+00:00 stderr F I1213 00:21:23.794910 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:23.794959829+00:00 stderr F I1213 00:21:23.794919 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-apiserver-operator/kube-apiserver-operator" (880 of 955) 2025-12-13T00:21:23.833481389+00:00 stderr F W1213 00:21:23.833441 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:23.835053252+00:00 stderr F I1213 00:21:23.835027 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (55.106008ms) 2025-12-13T00:21:23.845599556+00:00 stderr F I1213 00:21:23.845558 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-apiserver-operator/kube-apiserver-operator" (880 of 955) 2025-12-13T00:21:23.845649848+00:00 stderr F I1213 00:21:23.845639 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:23.845678718+00:00 stderr F I1213 00:21:23.845666 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-apiserver-operator/kube-apiserver-operator" (881 of 955) 2025-12-13T00:21:23.895335048+00:00 stderr F I1213 00:21:23.895271 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-apiserver-operator/kube-apiserver-operator" (881 of 955) 2025-12-13T00:21:23.895462011+00:00 stderr F I1213 00:21:23.895447 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:23.895501062+00:00 stderr F I1213 00:21:23.895484 1 sync_worker.go:999] 
Running sync for role "openshift-kube-apiserver/prometheus-k8s" (882 of 955) 2025-12-13T00:21:23.945636845+00:00 stderr F I1213 00:21:23.945507 1 sync_worker.go:1014] Done syncing for role "openshift-kube-apiserver/prometheus-k8s" (882 of 955) 2025-12-13T00:21:23.945636845+00:00 stderr F I1213 00:21:23.945552 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:23.945636845+00:00 stderr F I1213 00:21:23.945563 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-apiserver/prometheus-k8s" (883 of 955) 2025-12-13T00:21:23.996576830+00:00 stderr F I1213 00:21:23.996485 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-apiserver/prometheus-k8s" (883 of 955) 2025-12-13T00:21:23.996576830+00:00 stderr F I1213 00:21:23.996525 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:23.996576830+00:00 stderr F I1213 00:21:23.996531 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-apiserver/kube-apiserver" (884 of 955) 2025-12-13T00:21:24.046344413+00:00 stderr F I1213 00:21:24.046239 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-apiserver/kube-apiserver" (884 of 955) 2025-12-13T00:21:24.046344413+00:00 stderr F I1213 00:21:24.046285 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.046344413+00:00 stderr F I1213 00:21:24.046292 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-apiserver/kube-apiserver-performance-recording-rules" (885 of 955) 2025-12-13T00:21:24.111023628+00:00 stderr F I1213 00:21:24.108512 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-apiserver/kube-apiserver-performance-recording-rules" (885 of 955) 2025-12-13T00:21:24.111023628+00:00 stderr F I1213 00:21:24.108566 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.111023628+00:00 stderr F I1213 00:21:24.108577 1 sync_worker.go:999] 
Running sync for prometheusrule "openshift-kube-apiserver/kube-apiserver-recording-rules" (886 of 955) 2025-12-13T00:21:24.145092137+00:00 stderr F I1213 00:21:24.145005 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-apiserver/kube-apiserver-recording-rules" (886 of 955) 2025-12-13T00:21:24.145092137+00:00 stderr F I1213 00:21:24.145037 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.145092137+00:00 stderr F I1213 00:21:24.145043 1 sync_worker.go:999] Running sync for role "openshift-kube-apiserver/prometheus-k8s" (887 of 955) 2025-12-13T00:21:24.195464367+00:00 stderr F I1213 00:21:24.195370 1 sync_worker.go:1014] Done syncing for role "openshift-kube-apiserver/prometheus-k8s" (887 of 955) 2025-12-13T00:21:24.195464367+00:00 stderr F I1213 00:21:24.195408 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.195464367+00:00 stderr F I1213 00:21:24.195416 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-apiserver/prometheus-k8s" (888 of 955) 2025-12-13T00:21:24.231005896+00:00 stderr F I1213 00:21:24.230871 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:24.246284089+00:00 stderr F I1213 00:21:24.246186 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-apiserver/prometheus-k8s" (888 of 955) 2025-12-13T00:21:24.246284089+00:00 stderr F I1213 00:21:24.246256 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.246284089+00:00 stderr F I1213 00:21:24.246265 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-apiserver/kube-apiserver" (889 of 955) 2025-12-13T00:21:24.298923089+00:00 stderr F I1213 00:21:24.298829 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-apiserver/kube-apiserver" (889 of 955) 2025-12-13T00:21:24.298923089+00:00 stderr F I1213 00:21:24.298885 1 
sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.298923089+00:00 stderr F I1213 00:21:24.298892 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-apiserver/kube-apiserver-performance-recording-rules" (890 of 955) 2025-12-13T00:21:24.350459700+00:00 stderr F I1213 00:21:24.350349 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-apiserver/kube-apiserver-performance-recording-rules" (890 of 955) 2025-12-13T00:21:24.350459700+00:00 stderr F I1213 00:21:24.350401 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.350459700+00:00 stderr F I1213 00:21:24.350414 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-apiserver/kube-apiserver-recording-rules" (891 of 955) 2025-12-13T00:21:24.395510746+00:00 stderr F I1213 00:21:24.395396 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-apiserver/kube-apiserver-recording-rules" (891 of 955) 2025-12-13T00:21:24.395510746+00:00 stderr F I1213 00:21:24.395430 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.395510746+00:00 stderr F I1213 00:21:24.395438 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-apiserver-performance" (892 of 955) 2025-12-13T00:21:24.447273262+00:00 stderr F I1213 00:21:24.447158 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-apiserver-performance" (892 of 955) 2025-12-13T00:21:24.447273262+00:00 stderr F I1213 00:21:24.447211 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.447273262+00:00 stderr F I1213 00:21:24.447223 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-api-performance" (893 of 955) 2025-12-13T00:21:24.496005187+00:00 stderr F I1213 00:21:24.495855 1 sync_worker.go:1014] Done syncing for configmap 
"openshift-config-managed/grafana-dashboard-api-performance" (893 of 955) 2025-12-13T00:21:24.496005187+00:00 stderr F I1213 00:21:24.495921 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.496042688+00:00 stderr F I1213 00:21:24.496007 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-apiserver-performance" (894 of 955) 2025-12-13T00:21:24.545908054+00:00 stderr F I1213 00:21:24.545817 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-apiserver-performance" (894 of 955) 2025-12-13T00:21:24.545908054+00:00 stderr F I1213 00:21:24.545859 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.545908054+00:00 stderr F I1213 00:21:24.545869 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/grafana-dashboard-api-performance" (895 of 955) 2025-12-13T00:21:24.594590457+00:00 stderr F I1213 00:21:24.594516 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/grafana-dashboard-api-performance" (895 of 955) 2025-12-13T00:21:24.594590457+00:00 stderr F I1213 00:21:24.594556 1 task_graph.go:481] Running 35 on worker 0 2025-12-13T00:21:24.594590457+00:00 stderr F I1213 00:21:24.594573 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.594624648+00:00 stderr F I1213 00:21:24.594579 1 sync_worker.go:999] Running sync for customresourcedefinition "networks.operator.openshift.io" (773 of 955) 2025-12-13T00:21:24.651924264+00:00 stderr F I1213 00:21:24.651773 1 sync_worker.go:1014] Done syncing for customresourcedefinition "networks.operator.openshift.io" (773 of 955) 2025-12-13T00:21:24.651924264+00:00 stderr F I1213 00:21:24.651834 1 task_graph.go:481] Running 36 on worker 0 2025-12-13T00:21:24.697504905+00:00 stderr F I1213 00:21:24.697421 1 sync_worker.go:989] Precreated resource clusteroperator "kube-storage-version-migrator" 
(429 of 955) 2025-12-13T00:21:24.697504905+00:00 stderr F I1213 00:21:24.697461 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.697504905+00:00 stderr F I1213 00:21:24.697469 1 sync_worker.go:999] Running sync for customresourcedefinition "storageversionmigrations.migration.k8s.io" (420 of 955) 2025-12-13T00:21:24.746294961+00:00 stderr F I1213 00:21:24.746211 1 sync_worker.go:1014] Done syncing for customresourcedefinition "storageversionmigrations.migration.k8s.io" (420 of 955) 2025-12-13T00:21:24.746294961+00:00 stderr F I1213 00:21:24.746254 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.746294961+00:00 stderr F I1213 00:21:24.746262 1 sync_worker.go:999] Running sync for customresourcedefinition "storagestates.migration.k8s.io" (421 of 955) 2025-12-13T00:21:24.780207556+00:00 stderr F I1213 00:21:24.780112 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:24.780510554+00:00 stderr F I1213 00:21:24.780472 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:24.780510554+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:24.780510554+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:24.780574706+00:00 stderr F I1213 00:21:24.780549 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (448.372µs) 2025-12-13T00:21:24.780574706+00:00 stderr F I1213 00:21:24.780571 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:24.780657848+00:00 stderr F I1213 00:21:24.780622 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:24.780694169+00:00 stderr F I1213 00:21:24.780676 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:24.780765021+00:00 stderr F I1213 00:21:24.780688 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:24.781089650+00:00 stderr F I1213 00:21:24.781034 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:24.796994459+00:00 stderr F I1213 00:21:24.796914 1 sync_worker.go:1014] Done syncing for customresourcedefinition "storagestates.migration.k8s.io" (421 of 955) 2025-12-13T00:21:24.796994459+00:00 stderr F I1213 00:21:24.796965 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.796994459+00:00 stderr F I1213 00:21:24.796971 1 sync_worker.go:999] Running sync for namespace "openshift-kube-storage-version-migrator-operator" (422 of 955) 
2025-12-13T00:21:24.814886252+00:00 stderr F W1213 00:21:24.814830 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:24.816350691+00:00 stderr F I1213 00:21:24.816328 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (35.755555ms) 2025-12-13T00:21:24.845972311+00:00 stderr F I1213 00:21:24.845881 1 sync_worker.go:1014] Done syncing for namespace "openshift-kube-storage-version-migrator-operator" (422 of 955) 2025-12-13T00:21:24.845972311+00:00 stderr F I1213 00:21:24.845921 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.845972311+00:00 stderr F I1213 00:21:24.845941 1 sync_worker.go:999] Running sync for configmap "openshift-kube-storage-version-migrator-operator/config" (423 of 955) 2025-12-13T00:21:24.895241550+00:00 stderr F I1213 00:21:24.895145 1 sync_worker.go:1014] Done syncing for configmap "openshift-kube-storage-version-migrator-operator/config" (423 of 955) 2025-12-13T00:21:24.895241550+00:00 stderr F I1213 00:21:24.895201 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.895241550+00:00 stderr F I1213 00:21:24.895207 1 sync_worker.go:999] Running sync for serviceaccount "openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator" (424 of 955) 2025-12-13T00:21:24.944846549+00:00 stderr F I1213 00:21:24.944748 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator" (424 of 955) 2025-12-13T00:21:24.944846549+00:00 stderr F I1213 00:21:24.944781 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.944846549+00:00 stderr F I1213 00:21:24.944786 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:kube-storage-version-migrator-operator" (425 of 955) 2025-12-13T00:21:24.996501753+00:00 stderr F I1213 00:21:24.996389 1 sync_worker.go:1014] 
Done syncing for clusterrolebinding "system:openshift:operator:kube-storage-version-migrator-operator" (425 of 955) 2025-12-13T00:21:24.996501753+00:00 stderr F I1213 00:21:24.996428 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:24.996501753+00:00 stderr F I1213 00:21:24.996434 1 sync_worker.go:999] Running sync for kubestorageversionmigrator "cluster" (426 of 955) 2025-12-13T00:21:25.055622448+00:00 stderr F I1213 00:21:25.055538 1 sync_worker.go:1014] Done syncing for kubestorageversionmigrator "cluster" (426 of 955) 2025-12-13T00:21:25.055622448+00:00 stderr F I1213 00:21:25.055587 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.055622448+00:00 stderr F I1213 00:21:25.055595 1 sync_worker.go:999] Running sync for deployment "openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator" (427 of 955) 2025-12-13T00:21:25.094712403+00:00 stderr F I1213 00:21:25.094625 1 sync_worker.go:1014] Done syncing for deployment "openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator" (427 of 955) 2025-12-13T00:21:25.094712403+00:00 stderr F I1213 00:21:25.094667 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.094712403+00:00 stderr F I1213 00:21:25.094675 1 sync_worker.go:999] Running sync for service "openshift-kube-storage-version-migrator-operator/metrics" (428 of 955) 2025-12-13T00:21:25.145278157+00:00 stderr F I1213 00:21:25.145160 1 sync_worker.go:1014] Done syncing for service "openshift-kube-storage-version-migrator-operator/metrics" (428 of 955) 2025-12-13T00:21:25.145278157+00:00 stderr F I1213 00:21:25.145224 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.145278157+00:00 stderr F I1213 00:21:25.145238 1 sync_worker.go:999] Running sync for clusteroperator "kube-storage-version-migrator" (429 of 955) 2025-12-13T00:21:25.145584556+00:00 
stderr F I1213 00:21:25.145525 1 sync_worker.go:1014] Done syncing for clusteroperator "kube-storage-version-migrator" (429 of 955) 2025-12-13T00:21:25.145584556+00:00 stderr F I1213 00:21:25.145563 1 task_graph.go:481] Running 37 on worker 0 2025-12-13T00:21:25.145584556+00:00 stderr F I1213 00:21:25.145575 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.145605307+00:00 stderr F I1213 00:21:25.145586 1 sync_worker.go:999] Running sync for customresourcedefinition "operatorhubs.config.openshift.io" (22 of 955) 2025-12-13T00:21:25.195856273+00:00 stderr F I1213 00:21:25.195744 1 sync_worker.go:1014] Done syncing for customresourcedefinition "operatorhubs.config.openshift.io" (22 of 955) 2025-12-13T00:21:25.195856273+00:00 stderr F I1213 00:21:25.195786 1 task_graph.go:481] Running 38 on worker 0 2025-12-13T00:21:25.247332801+00:00 stderr F I1213 00:21:25.247191 1 sync_worker.go:989] Precreated resource clusteroperator "marketplace" (741 of 955) 2025-12-13T00:21:25.247332801+00:00 stderr F I1213 00:21:25.247264 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.247332801+00:00 stderr F I1213 00:21:25.247281 1 sync_worker.go:999] Running sync for namespace "openshift-marketplace" (731 of 955) 2025-12-13T00:21:25.295230334+00:00 stderr F I1213 00:21:25.295130 1 sync_worker.go:1014] Done syncing for namespace "openshift-marketplace" (731 of 955) 2025-12-13T00:21:25.295230334+00:00 stderr F I1213 00:21:25.295166 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.295230334+00:00 stderr F I1213 00:21:25.295174 1 sync_worker.go:999] Running sync for serviceaccount "openshift-marketplace/marketplace-operator" (732 of 955) 2025-12-13T00:21:25.345918532+00:00 stderr F I1213 00:21:25.345836 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-marketplace/marketplace-operator" (732 of 955) 2025-12-13T00:21:25.345918532+00:00 stderr F I1213 
00:21:25.345869 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.345918532+00:00 stderr F I1213 00:21:25.345875 1 sync_worker.go:999] Running sync for clusterrole "marketplace-operator" (733 of 955) 2025-12-13T00:21:25.395333825+00:00 stderr F I1213 00:21:25.395188 1 sync_worker.go:1014] Done syncing for clusterrole "marketplace-operator" (733 of 955) 2025-12-13T00:21:25.395333825+00:00 stderr F I1213 00:21:25.395240 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.395333825+00:00 stderr F I1213 00:21:25.395253 1 sync_worker.go:999] Running sync for role "openshift-marketplace/marketplace-operator" (734 of 955) 2025-12-13T00:21:25.445450488+00:00 stderr F I1213 00:21:25.444914 1 sync_worker.go:1014] Done syncing for role "openshift-marketplace/marketplace-operator" (734 of 955) 2025-12-13T00:21:25.445450488+00:00 stderr F I1213 00:21:25.444969 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.445450488+00:00 stderr F I1213 00:21:25.444976 1 sync_worker.go:999] Running sync for clusterrole "operatorhub-config-reader" (735 of 955) 2025-12-13T00:21:25.495622732+00:00 stderr F I1213 00:21:25.495517 1 sync_worker.go:1014] Done syncing for clusterrole "operatorhub-config-reader" (735 of 955) 2025-12-13T00:21:25.495622732+00:00 stderr F I1213 00:21:25.495555 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.495622732+00:00 stderr F I1213 00:21:25.495561 1 sync_worker.go:999] Running sync for clusterrolebinding "marketplace-operator" (736 of 955) 2025-12-13T00:21:25.545229130+00:00 stderr F I1213 00:21:25.545122 1 sync_worker.go:1014] Done syncing for clusterrolebinding "marketplace-operator" (736 of 955) 2025-12-13T00:21:25.545229130+00:00 stderr F I1213 00:21:25.545199 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.545229130+00:00 stderr F I1213 00:21:25.545211 1 
sync_worker.go:999] Running sync for rolebinding "openshift-marketplace/marketplace-operator" (737 of 955) 2025-12-13T00:21:25.594280994+00:00 stderr F I1213 00:21:25.594214 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-marketplace/marketplace-operator" (737 of 955) 2025-12-13T00:21:25.594280994+00:00 stderr F I1213 00:21:25.594252 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.594280994+00:00 stderr F I1213 00:21:25.594261 1 sync_worker.go:999] Running sync for configmap "openshift-marketplace/marketplace-trusted-ca" (738 of 955) 2025-12-13T00:21:25.649687179+00:00 stderr F I1213 00:21:25.649613 1 sync_worker.go:1014] Done syncing for configmap "openshift-marketplace/marketplace-trusted-ca" (738 of 955) 2025-12-13T00:21:25.649687179+00:00 stderr F I1213 00:21:25.649657 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.649687179+00:00 stderr F I1213 00:21:25.649665 1 sync_worker.go:999] Running sync for service "openshift-marketplace/marketplace-operator-metrics" (739 of 955) 2025-12-13T00:21:25.695234328+00:00 stderr F I1213 00:21:25.695152 1 sync_worker.go:1014] Done syncing for service "openshift-marketplace/marketplace-operator-metrics" (739 of 955) 2025-12-13T00:21:25.695234328+00:00 stderr F I1213 00:21:25.695191 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.695234328+00:00 stderr F I1213 00:21:25.695200 1 sync_worker.go:999] Running sync for deployment "openshift-marketplace/marketplace-operator" (740 of 955) 2025-12-13T00:21:25.780957281+00:00 stderr F I1213 00:21:25.780890 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:25.781193597+00:00 stderr F I1213 00:21:25.781169 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure 
(failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:25.781193597+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:25.781193597+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:25.781238059+00:00 stderr F I1213 00:21:25.781217 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (337.099µs) 2025-12-13T00:21:25.781238059+00:00 stderr F I1213 00:21:25.781234 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:25.781306791+00:00 stderr F I1213 00:21:25.781282 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:25.781340992+00:00 stderr F I1213 00:21:25.781329 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:25.781387323+00:00 stderr F I1213 00:21:25.781338 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" 
image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:25.781604409+00:00 stderr F I1213 00:21:25.781571 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:25.795189615+00:00 stderr F I1213 00:21:25.795019 1 sync_worker.go:1014] Done syncing for deployment "openshift-marketplace/marketplace-operator" (740 of 
955) 2025-12-13T00:21:25.795189615+00:00 stderr F I1213 00:21:25.795070 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.795189615+00:00 stderr F I1213 00:21:25.795078 1 sync_worker.go:999] Running sync for clusteroperator "marketplace" (741 of 955) 2025-12-13T00:21:25.795302389+00:00 stderr F I1213 00:21:25.795268 1 sync_worker.go:1014] Done syncing for clusteroperator "marketplace" (741 of 955) 2025-12-13T00:21:25.795302389+00:00 stderr F I1213 00:21:25.795286 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.795302389+00:00 stderr F I1213 00:21:25.795292 1 sync_worker.go:999] Running sync for servicemonitor "openshift-marketplace/marketplace-operator" (742 of 955) 2025-12-13T00:21:25.816897951+00:00 stderr F W1213 00:21:25.816764 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:25.819489121+00:00 stderr F I1213 00:21:25.819391 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.15078ms) 2025-12-13T00:21:25.846419028+00:00 stderr F I1213 00:21:25.846323 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-marketplace/marketplace-operator" (742 of 955) 2025-12-13T00:21:25.846419028+00:00 stderr F I1213 00:21:25.846356 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.846419028+00:00 stderr F I1213 00:21:25.846362 1 sync_worker.go:999] Running sync for rolebinding "openshift-marketplace/openshift-marketplace-metrics" (743 of 955) 2025-12-13T00:21:25.896012076+00:00 stderr F I1213 00:21:25.895267 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-marketplace/openshift-marketplace-metrics" (743 of 955) 2025-12-13T00:21:25.896012076+00:00 stderr F I1213 00:21:25.895312 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.896012076+00:00 stderr F I1213 00:21:25.895320 1 sync_worker.go:999] Running sync for role 
"openshift-marketplace/openshift-marketplace-metrics" (744 of 955) 2025-12-13T00:21:25.945795029+00:00 stderr F I1213 00:21:25.945707 1 sync_worker.go:1014] Done syncing for role "openshift-marketplace/openshift-marketplace-metrics" (744 of 955) 2025-12-13T00:21:25.945795029+00:00 stderr F I1213 00:21:25.945746 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.945795029+00:00 stderr F I1213 00:21:25.945752 1 sync_worker.go:999] Running sync for prometheusrule "openshift-marketplace/marketplace-alert-rules" (745 of 955) 2025-12-13T00:21:25.996475397+00:00 stderr F I1213 00:21:25.996343 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-marketplace/marketplace-alert-rules" (745 of 955) 2025-12-13T00:21:25.996475397+00:00 stderr F I1213 00:21:25.996397 1 task_graph.go:481] Running 39 on worker 0 2025-12-13T00:21:25.996475397+00:00 stderr F I1213 00:21:25.996413 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:25.996475397+00:00 stderr F I1213 00:21:25.996421 1 sync_worker.go:999] Running sync for role "openshift-operator-lifecycle-manager/operator-lifecycle-manager-metrics" (931 of 955) 2025-12-13T00:21:26.045424628+00:00 stderr F I1213 00:21:26.044960 1 sync_worker.go:1014] Done syncing for role "openshift-operator-lifecycle-manager/operator-lifecycle-manager-metrics" (931 of 955) 2025-12-13T00:21:26.045424628+00:00 stderr F I1213 00:21:26.044997 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.045424628+00:00 stderr F I1213 00:21:26.045004 1 sync_worker.go:999] Running sync for rolebinding "openshift-operator-lifecycle-manager/operator-lifecycle-manager-metrics" (932 of 955) 2025-12-13T00:21:26.095282284+00:00 stderr F I1213 00:21:26.095190 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-operator-lifecycle-manager/operator-lifecycle-manager-metrics" (932 of 955) 2025-12-13T00:21:26.095282284+00:00 stderr F I1213 
00:21:26.095250 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.095282284+00:00 stderr F I1213 00:21:26.095256 1 sync_worker.go:999] Running sync for servicemonitor "openshift-operator-lifecycle-manager/olm-operator" (933 of 955) 2025-12-13T00:21:26.146557737+00:00 stderr F I1213 00:21:26.146151 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-operator-lifecycle-manager/olm-operator" (933 of 955) 2025-12-13T00:21:26.146557737+00:00 stderr F I1213 00:21:26.146188 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.146557737+00:00 stderr F I1213 00:21:26.146194 1 sync_worker.go:999] Running sync for servicemonitor "openshift-operator-lifecycle-manager/catalog-operator" (934 of 955) 2025-12-13T00:21:26.195548068+00:00 stderr F I1213 00:21:26.195461 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-operator-lifecycle-manager/catalog-operator" (934 of 955) 2025-12-13T00:21:26.195548068+00:00 stderr F I1213 00:21:26.195497 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.195548068+00:00 stderr F I1213 00:21:26.195503 1 sync_worker.go:999] Running sync for prometheusrule "openshift-operator-lifecycle-manager/olm-alert-rules" (935 of 955) 2025-12-13T00:21:26.247126881+00:00 stderr F I1213 00:21:26.247051 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-operator-lifecycle-manager/olm-alert-rules" (935 of 955) 2025-12-13T00:21:26.247126881+00:00 stderr F I1213 00:21:26.247086 1 task_graph.go:481] Running 40 on worker 0 2025-12-13T00:21:26.247126881+00:00 stderr F I1213 00:21:26.247098 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.247126881+00:00 stderr F I1213 00:21:26.247103 1 sync_worker.go:999] Running sync for servicemonitor "openshift-machine-api/machine-api-operator" (919 of 955) 2025-12-13T00:21:26.296383219+00:00 stderr F I1213 00:21:26.296270 1 
sync_worker.go:1014] Done syncing for servicemonitor "openshift-machine-api/machine-api-operator" (919 of 955) 2025-12-13T00:21:26.296383219+00:00 stderr F I1213 00:21:26.296326 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.296383219+00:00 stderr F I1213 00:21:26.296340 1 sync_worker.go:999] Running sync for servicemonitor "openshift-machine-api/machine-api-controllers" (920 of 955) 2025-12-13T00:21:26.347981912+00:00 stderr F I1213 00:21:26.347879 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-machine-api/machine-api-controllers" (920 of 955) 2025-12-13T00:21:26.347981912+00:00 stderr F I1213 00:21:26.347929 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.347981912+00:00 stderr F I1213 00:21:26.347961 1 sync_worker.go:999] Running sync for prometheusrule "openshift-machine-api/machine-api-operator-prometheus-rules" (921 of 955) 2025-12-13T00:21:26.397598641+00:00 stderr F I1213 00:21:26.397527 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-machine-api/machine-api-operator-prometheus-rules" (921 of 955) 2025-12-13T00:21:26.397598641+00:00 stderr F I1213 00:21:26.397568 1 task_graph.go:481] Running 41 on worker 0 2025-12-13T00:21:26.397598641+00:00 stderr F I1213 00:21:26.397586 1 sync_worker.go:982] Skipping precreation of clusteroperator "monitoring" (465 of 955): overridden 2025-12-13T00:21:26.397642643+00:00 stderr F I1213 00:21:26.397602 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.397642643+00:00 stderr F I1213 00:21:26.397609 1 sync_worker.go:999] Running sync for customresourcedefinition "alertingrules.monitoring.openshift.io" (442 of 955) 2025-12-13T00:21:26.447510138+00:00 stderr F I1213 00:21:26.447411 1 sync_worker.go:1014] Done syncing for customresourcedefinition "alertingrules.monitoring.openshift.io" (442 of 955) 2025-12-13T00:21:26.447510138+00:00 stderr F I1213 00:21:26.447460 1 
sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.447510138+00:00 stderr F I1213 00:21:26.447468 1 sync_worker.go:999] Running sync for customresourcedefinition "alertmanagerconfigs.monitoring.coreos.com" (443 of 955) 2025-12-13T00:21:26.540404854+00:00 stderr F I1213 00:21:26.540321 1 sync_worker.go:1014] Done syncing for customresourcedefinition "alertmanagerconfigs.monitoring.coreos.com" (443 of 955) 2025-12-13T00:21:26.540404854+00:00 stderr F I1213 00:21:26.540368 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.540404854+00:00 stderr F I1213 00:21:26.540374 1 sync_worker.go:999] Running sync for customresourcedefinition "alertmanagers.monitoring.coreos.com" (444 of 955) 2025-12-13T00:21:26.579407747+00:00 stderr F I1213 00:21:26.579314 1 sync_worker.go:1014] Done syncing for customresourcedefinition "alertmanagers.monitoring.coreos.com" (444 of 955) 2025-12-13T00:21:26.579407747+00:00 stderr F I1213 00:21:26.579359 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.579407747+00:00 stderr F I1213 00:21:26.579365 1 sync_worker.go:999] Running sync for customresourcedefinition "alertrelabelconfigs.monitoring.openshift.io" (445 of 955) 2025-12-13T00:21:26.596196410+00:00 stderr F I1213 00:21:26.596126 1 sync_worker.go:1014] Done syncing for customresourcedefinition "alertrelabelconfigs.monitoring.openshift.io" (445 of 955) 2025-12-13T00:21:26.596196410+00:00 stderr F I1213 00:21:26.596165 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.596196410+00:00 stderr F I1213 00:21:26.596172 1 sync_worker.go:999] Running sync for customresourcedefinition "podmonitors.monitoring.coreos.com" (446 of 955) 2025-12-13T00:21:26.648678916+00:00 stderr F I1213 00:21:26.648576 1 sync_worker.go:1014] Done syncing for customresourcedefinition "podmonitors.monitoring.coreos.com" (446 of 955) 
2025-12-13T00:21:26.648678916+00:00 stderr F I1213 00:21:26.648622 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.648678916+00:00 stderr F I1213 00:21:26.648630 1 sync_worker.go:999] Running sync for customresourcedefinition "probes.monitoring.coreos.com" (447 of 955) 2025-12-13T00:21:26.697498304+00:00 stderr F I1213 00:21:26.697318 1 sync_worker.go:1014] Done syncing for customresourcedefinition "probes.monitoring.coreos.com" (447 of 955) 2025-12-13T00:21:26.697498304+00:00 stderr F I1213 00:21:26.697375 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.697498304+00:00 stderr F I1213 00:21:26.697384 1 sync_worker.go:999] Running sync for customresourcedefinition "prometheuses.monitoring.coreos.com" (448 of 955) 2025-12-13T00:21:26.779182028+00:00 stderr F I1213 00:21:26.779081 1 sync_worker.go:1014] Done syncing for customresourcedefinition "prometheuses.monitoring.coreos.com" (448 of 955) 2025-12-13T00:21:26.779182028+00:00 stderr F I1213 00:21:26.779136 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.779182028+00:00 stderr F I1213 00:21:26.779143 1 sync_worker.go:999] Running sync for customresourcedefinition "prometheusrules.monitoring.coreos.com" (449 of 955) 2025-12-13T00:21:26.782334673+00:00 stderr F I1213 00:21:26.782283 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:26.782613531+00:00 stderr F I1213 00:21:26.782579 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:26.782613531+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:26.782613531+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:26.782644731+00:00 stderr F I1213 00:21:26.782628 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (354.64µs) 2025-12-13T00:21:26.782652212+00:00 stderr F I1213 00:21:26.782643 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:26.782715333+00:00 stderr F I1213 00:21:26.782688 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:26.782757134+00:00 stderr F I1213 00:21:26.782737 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:26.782822226+00:00 stderr F I1213 00:21:26.782749 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:26.783141885+00:00 stderr F I1213 00:21:26.783104 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:26.796316730+00:00 stderr F I1213 00:21:26.796232 1 sync_worker.go:1014] Done syncing for customresourcedefinition "prometheusrules.monitoring.coreos.com" (449 of 955) 2025-12-13T00:21:26.796316730+00:00 stderr F I1213 00:21:26.796266 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.796316730+00:00 stderr F I1213 00:21:26.796272 1 sync_worker.go:999] Running sync for customresourcedefinition "servicemonitors.monitoring.coreos.com" (450 of 955) 
2025-12-13T00:21:26.809968289+00:00 stderr F W1213 00:21:26.809890 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:26.811719176+00:00 stderr F I1213 00:21:26.811612 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.965172ms) 2025-12-13T00:21:26.849149496+00:00 stderr F I1213 00:21:26.849063 1 sync_worker.go:1014] Done syncing for customresourcedefinition "servicemonitors.monitoring.coreos.com" (450 of 955) 2025-12-13T00:21:26.849149496+00:00 stderr F I1213 00:21:26.849119 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.849149496+00:00 stderr F I1213 00:21:26.849130 1 sync_worker.go:999] Running sync for customresourcedefinition "thanosrulers.monitoring.coreos.com" (451 of 955) 2025-12-13T00:21:26.923994695+00:00 stderr F I1213 00:21:26.923895 1 sync_worker.go:1014] Done syncing for customresourcedefinition "thanosrulers.monitoring.coreos.com" (451 of 955) 2025-12-13T00:21:26.923994695+00:00 stderr F I1213 00:21:26.923988 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.924040766+00:00 stderr F I1213 00:21:26.923995 1 sync_worker.go:999] Running sync for namespace "openshift-monitoring" (452 of 955) 2025-12-13T00:21:26.945171567+00:00 stderr F I1213 00:21:26.945100 1 sync_worker.go:1014] Done syncing for namespace "openshift-monitoring" (452 of 955) 2025-12-13T00:21:26.945171567+00:00 stderr F I1213 00:21:26.945139 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:26.945171567+00:00 stderr F I1213 00:21:26.945145 1 sync_worker.go:999] Running sync for namespace "openshift-user-workload-monitoring" (453 of 955) 2025-12-13T00:21:26.995781883+00:00 stderr F I1213 00:21:26.995685 1 sync_worker.go:1014] Done syncing for namespace "openshift-user-workload-monitoring" (453 of 955) 2025-12-13T00:21:26.995781883+00:00 stderr F I1213 00:21:26.995730 1 sync_worker.go:703] Dropping status report 
from earlier in sync loop 2025-12-13T00:21:26.995781883+00:00 stderr F I1213 00:21:26.995737 1 sync_worker.go:999] Running sync for role "openshift-monitoring/cluster-monitoring-operator-alert-customization" (454 of 955) 2025-12-13T00:21:27.044794296+00:00 stderr F I1213 00:21:27.044714 1 sync_worker.go:1014] Done syncing for role "openshift-monitoring/cluster-monitoring-operator-alert-customization" (454 of 955) 2025-12-13T00:21:27.044794296+00:00 stderr F I1213 00:21:27.044763 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:27.044794296+00:00 stderr F I1213 00:21:27.044774 1 sync_worker.go:999] Running sync for clusterrole "cluster-monitoring-operator-namespaced" (455 of 955) 2025-12-13T00:21:27.095147295+00:00 stderr F I1213 00:21:27.095031 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-monitoring-operator-namespaced" (455 of 955) 2025-12-13T00:21:27.095147295+00:00 stderr F I1213 00:21:27.095082 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:27.095147295+00:00 stderr F I1213 00:21:27.095089 1 sync_worker.go:999] Running sync for clusterrole "cluster-monitoring-operator" (456 of 955) 2025-12-13T00:21:27.145485842+00:00 stderr F I1213 00:21:27.145414 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-monitoring-operator" (456 of 955) 2025-12-13T00:21:27.145485842+00:00 stderr F I1213 00:21:27.145454 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:27.145485842+00:00 stderr F I1213 00:21:27.145460 1 sync_worker.go:999] Running sync for serviceaccount "openshift-monitoring/cluster-monitoring-operator" (457 of 955) 2025-12-13T00:21:27.194148756+00:00 stderr F I1213 00:21:27.194076 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-monitoring/cluster-monitoring-operator" (457 of 955) 2025-12-13T00:21:27.194148756+00:00 stderr F I1213 00:21:27.194112 1 sync_worker.go:703] Dropping status report from earlier in 
sync loop 2025-12-13T00:21:27.194148756+00:00 stderr F I1213 00:21:27.194118 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-monitoring-operator" (458 of 955) 2025-12-13T00:21:27.245576933+00:00 stderr F I1213 00:21:27.245464 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-monitoring-operator" (458 of 955) 2025-12-13T00:21:27.245576933+00:00 stderr F I1213 00:21:27.245537 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:27.245576933+00:00 stderr F I1213 00:21:27.245545 1 sync_worker.go:999] Running sync for rolebinding "openshift-monitoring/cluster-monitoring-operator" (459 of 955) 2025-12-13T00:21:27.295215953+00:00 stderr F I1213 00:21:27.295149 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-monitoring/cluster-monitoring-operator" (459 of 955) 2025-12-13T00:21:27.295215953+00:00 stderr F I1213 00:21:27.295184 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:27.295215953+00:00 stderr F I1213 00:21:27.295190 1 sync_worker.go:999] Running sync for rolebinding "openshift-user-workload-monitoring/cluster-monitoring-operator" (460 of 955) 2025-12-13T00:21:27.346012533+00:00 stderr F I1213 00:21:27.345906 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-user-workload-monitoring/cluster-monitoring-operator" (460 of 955) 2025-12-13T00:21:27.346012533+00:00 stderr F I1213 00:21:27.345973 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:27.346012533+00:00 stderr F I1213 00:21:27.345981 1 sync_worker.go:999] Running sync for rolebinding "openshift-monitoring/cluster-monitoring-operator-alert-customization" (461 of 955) 2025-12-13T00:21:27.395377206+00:00 stderr F I1213 00:21:27.395304 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-monitoring/cluster-monitoring-operator-alert-customization" (461 of 955) 2025-12-13T00:21:27.395377206+00:00 stderr F I1213 00:21:27.395341 1 
sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:27.395377206+00:00 stderr F I1213 00:21:27.395347 1 sync_worker.go:999] Running sync for service "openshift-monitoring/cluster-monitoring-operator" (462 of 955) 2025-12-13T00:21:27.444678886+00:00 stderr F I1213 00:21:27.444602 1 sync_worker.go:1014] Done syncing for service "openshift-monitoring/cluster-monitoring-operator" (462 of 955) 2025-12-13T00:21:27.444678886+00:00 stderr F I1213 00:21:27.444636 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:27.444678886+00:00 stderr F I1213 00:21:27.444641 1 sync_worker.go:999] Running sync for configmap "openshift-monitoring/telemetry-config" (463 of 955) 2025-12-13T00:21:27.496642228+00:00 stderr F I1213 00:21:27.496255 1 sync_worker.go:1014] Done syncing for configmap "openshift-monitoring/telemetry-config" (463 of 955) 2025-12-13T00:21:27.496642228+00:00 stderr F I1213 00:21:27.496576 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:27.496642228+00:00 stderr F I1213 00:21:27.496582 1 sync_worker.go:999] Running sync for deployment "openshift-monitoring/cluster-monitoring-operator" (464 of 955) 2025-12-13T00:21:27.496642228+00:00 stderr F I1213 00:21:27.496625 1 sync_worker.go:1002] Skipping deployment "openshift-monitoring/cluster-monitoring-operator" (464 of 955): overridden 2025-12-13T00:21:27.496642228+00:00 stderr F I1213 00:21:27.496630 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:27.496642228+00:00 stderr F I1213 00:21:27.496635 1 sync_worker.go:999] Running sync for clusteroperator "monitoring" (465 of 955) 2025-12-13T00:21:27.496680939+00:00 stderr F I1213 00:21:27.496647 1 sync_worker.go:1002] Skipping clusteroperator "monitoring" (465 of 955): overridden 2025-12-13T00:21:27.496680939+00:00 stderr F I1213 00:21:27.496661 1 task_graph.go:481] Running 42 on worker 0 2025-12-13T00:21:27.548500737+00:00 
stderr F I1213 00:21:27.548413 1 sync_worker.go:989] Precreated resource clusteroperator "kube-apiserver" (143 of 955)
2025-12-13T00:21:27.600267074+00:00 stderr F I1213 00:21:27.600163 1 sync_worker.go:989] Precreated resource clusteroperator "kube-apiserver" (144 of 955)
2025-12-13T00:21:27.600267074+00:00 stderr F I1213 00:21:27.600198 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:27.600267074+00:00 stderr F I1213 00:21:27.600204 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:anyuid" (83 of 955)
2025-12-13T00:21:27.646113361+00:00 stderr F I1213 00:21:27.646041 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:anyuid" (83 of 955)
2025-12-13T00:21:27.646113361+00:00 stderr F I1213 00:21:27.646075 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:27.646113361+00:00 stderr F I1213 00:21:27.646081 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:anyuid" (84 of 955)
2025-12-13T00:21:27.695631458+00:00 stderr F I1213 00:21:27.695540 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:anyuid" (84 of 955)
2025-12-13T00:21:27.695631458+00:00 stderr F I1213 00:21:27.695587 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:27.695631458+00:00 stderr F I1213 00:21:27.695598 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostaccess" (85 of 955)
2025-12-13T00:21:27.747181818+00:00 stderr F I1213 00:21:27.746692 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostaccess" (85 of 955)
2025-12-13T00:21:27.747181818+00:00 stderr F I1213 00:21:27.747133 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:27.747181818+00:00 stderr F I1213 00:21:27.747144 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostaccess" (86 of 955)
2025-12-13T00:21:27.782848591+00:00 stderr F I1213 00:21:27.782740 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:21:27.783310424+00:00 stderr F I1213 00:21:27.783256 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:21:27.783310424+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:21:27.783310424+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:21:27.783399336+00:00 stderr F I1213 00:21:27.783350 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (626.967µs)
2025-12-13T00:21:27.783399336+00:00 stderr F I1213 00:21:27.783382 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:27.783501219+00:00 stderr F I1213 00:21:27.783453 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:27.783588641+00:00 stderr F I1213 00:21:27.783550 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:27.783660973+00:00 stderr F I1213 00:21:27.783572 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:27.784115195+00:00 stderr F I1213 00:21:27.784050 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:27.796036097+00:00 stderr F I1213 00:21:27.795202 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostaccess" (86 of 955)
2025-12-13T00:21:27.796036097+00:00 stderr F I1213 00:21:27.795258 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:27.796036097+00:00 stderr F I1213 00:21:27.795273 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostmount-anyuid" (87 of 955)
2025-12-13T00:21:27.807535498+00:00 stderr F W1213 00:21:27.807461 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:27.809079349+00:00 stderr F I1213 00:21:27.809038 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.654112ms)
2025-12-13T00:21:27.845983095+00:00 stderr F I1213 00:21:27.845567 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostmount-anyuid" (87 of 955)
2025-12-13T00:21:27.845983095+00:00 stderr F I1213 00:21:27.845607 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:27.845983095+00:00 stderr F I1213 00:21:27.845614 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostmount-anyuid" (88 of 955)
2025-12-13T00:21:27.894520515+00:00 stderr F I1213 00:21:27.894438 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostmount-anyuid" (88 of 955)
2025-12-13T00:21:27.894520515+00:00 stderr F I1213 00:21:27.894491 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:27.894520515+00:00 stderr F I1213 00:21:27.894502 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostmount" (89 of 955)
2025-12-13T00:21:27.946235080+00:00 stderr F I1213 00:21:27.946149 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostmount" (89 of 955)
2025-12-13T00:21:27.946235080+00:00 stderr F I1213 00:21:27.946194 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:27.946235080+00:00 stderr F I1213 00:21:27.946202 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostmount" (90 of 955)
2025-12-13T00:21:27.995472359+00:00 stderr F I1213 00:21:27.995367 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostmount" (90 of 955)
2025-12-13T00:21:27.995472359+00:00 stderr F I1213 00:21:27.995431 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:27.995472359+00:00 stderr F I1213 00:21:27.995445 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostnetwork-v2" (91 of 955)
2025-12-13T00:21:28.045269783+00:00 stderr F I1213 00:21:28.045183 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostnetwork-v2" (91 of 955)
2025-12-13T00:21:28.045269783+00:00 stderr F I1213 00:21:28.045221 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.045269783+00:00 stderr F I1213 00:21:28.045229 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostnetwork-v2" (92 of 955)
2025-12-13T00:21:28.095335354+00:00 stderr F I1213 00:21:28.095233 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostnetwork-v2" (92 of 955)
2025-12-13T00:21:28.095335354+00:00 stderr F I1213 00:21:28.095282 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.095335354+00:00 stderr F I1213 00:21:28.095293 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostnetwork" (93 of 955)
2025-12-13T00:21:28.145113166+00:00 stderr F I1213 00:21:28.145041 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostnetwork" (93 of 955)
2025-12-13T00:21:28.145113166+00:00 stderr F I1213 00:21:28.145080 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.145113166+00:00 stderr F I1213 00:21:28.145086 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:hostnetwork" (94 of 955)
2025-12-13T00:21:28.196427422+00:00 stderr F I1213 00:21:28.196326 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:hostnetwork" (94 of 955)
2025-12-13T00:21:28.196427422+00:00 stderr F I1213 00:21:28.196392 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.196427422+00:00 stderr F I1213 00:21:28.196409 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:nonroot-v2" (95 of 955)
2025-12-13T00:21:28.246034160+00:00 stderr F I1213 00:21:28.245921 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:nonroot-v2" (95 of 955)
2025-12-13T00:21:28.246034160+00:00 stderr F I1213 00:21:28.245999 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.246034160+00:00 stderr F I1213 00:21:28.246009 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:nonroot-v2" (96 of 955)
2025-12-13T00:21:28.295363391+00:00 stderr F I1213 00:21:28.295301 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:nonroot-v2" (96 of 955)
2025-12-13T00:21:28.295363391+00:00 stderr F I1213 00:21:28.295334 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.295363391+00:00 stderr F I1213 00:21:28.295341 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:nonroot" (97 of 955)
2025-12-13T00:21:28.341505276+00:00 stderr F I1213 00:21:28.341384 1 task_graph.go:481] Running 43 on worker 1
2025-12-13T00:21:28.341505276+00:00 stderr F I1213 00:21:28.341416 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.341505276+00:00 stderr F I1213 00:21:28.341423 1 sync_worker.go:999] Running sync for role "openshift-authentication-operator/prometheus-k8s" (804 of 955)
2025-12-13T00:21:28.344588250+00:00 stderr F I1213 00:21:28.344497 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:nonroot" (97 of 955)
2025-12-13T00:21:28.344588250+00:00 stderr F I1213 00:21:28.344517 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.344588250+00:00 stderr F I1213 00:21:28.344523 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:nonroot" (98 of 955)
2025-12-13T00:21:28.395102943+00:00 stderr F I1213 00:21:28.394968 1 sync_worker.go:1014] Done syncing for role "openshift-authentication-operator/prometheus-k8s" (804 of 955)
2025-12-13T00:21:28.395102943+00:00 stderr F I1213 00:21:28.395028 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.395102943+00:00 stderr F I1213 00:21:28.395044 1 sync_worker.go:999] Running sync for role "openshift-authentication/prometheus-k8s" (805 of 955)
2025-12-13T00:21:28.446119860+00:00 stderr F I1213 00:21:28.445992 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:nonroot" (98 of 955)
2025-12-13T00:21:28.446119860+00:00 stderr F I1213 00:21:28.446032 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.446119860+00:00 stderr F I1213 00:21:28.446038 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:privileged" (99 of 955)
2025-12-13T00:21:28.495551713+00:00 stderr F I1213 00:21:28.495470 1 sync_worker.go:1014] Done syncing for role "openshift-authentication/prometheus-k8s" (805 of 955)
2025-12-13T00:21:28.495551713+00:00 stderr F I1213 00:21:28.495506 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.495551713+00:00 stderr F I1213 00:21:28.495512 1 sync_worker.go:999] Running sync for role "openshift-oauth-apiserver/prometheus-k8s" (806 of 955)
2025-12-13T00:21:28.545552302+00:00 stderr F I1213 00:21:28.545430 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:privileged" (99 of 955)
2025-12-13T00:21:28.545552302+00:00 stderr F I1213 00:21:28.545476 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.545552302+00:00 stderr F I1213 00:21:28.545482 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:privileged" (100 of 955)
2025-12-13T00:21:28.595087069+00:00 stderr F I1213 00:21:28.595013 1 sync_worker.go:1014] Done syncing for role "openshift-oauth-apiserver/prometheus-k8s" (806 of 955)
2025-12-13T00:21:28.595087069+00:00 stderr F I1213 00:21:28.595056 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.595087069+00:00 stderr F I1213 00:21:28.595063 1 sync_worker.go:999] Running sync for rolebinding "openshift-authentication-operator/prometheus-k8s" (807 of 955)
2025-12-13T00:21:28.645624783+00:00 stderr F I1213 00:21:28.645512 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:privileged" (100 of 955)
2025-12-13T00:21:28.645624783+00:00 stderr F I1213 00:21:28.645555 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.645624783+00:00 stderr F I1213 00:21:28.645561 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:restricted-v2" (101 of 955)
2025-12-13T00:21:28.703867355+00:00 stderr F I1213 00:21:28.703751 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-authentication-operator/prometheus-k8s" (807 of 955)
2025-12-13T00:21:28.703909136+00:00 stderr F I1213 00:21:28.703895 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.703950647+00:00 stderr F I1213 00:21:28.703903 1 sync_worker.go:999] Running sync for rolebinding "openshift-authentication/prometheus-k8s" (808 of 955)
2025-12-13T00:21:28.746378242+00:00 stderr F I1213 00:21:28.746267 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:restricted-v2" (101 of 955)
2025-12-13T00:21:28.746378242+00:00 stderr F I1213 00:21:28.746329 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.746378242+00:00 stderr F I1213 00:21:28.746341 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:restricted-v2" (102 of 955)
2025-12-13T00:21:28.784480441+00:00 stderr F I1213 00:21:28.784387 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:21:28.784838520+00:00 stderr F I1213 00:21:28.784783 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:21:28.784838520+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:21:28.784838520+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:21:28.784873811+00:00 stderr F I1213 00:21:28.784848 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (475.362µs)
2025-12-13T00:21:28.784884371+00:00 stderr F I1213 00:21:28.784877 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:28.785003515+00:00 stderr F I1213 00:21:28.784955 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:28.785051326+00:00 stderr F I1213 00:21:28.785028 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:28.785130458+00:00 stderr F I1213 00:21:28.785046 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:28.785423646+00:00 stderr F I1213 00:21:28.785377 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:28.795219299+00:00 stderr F I1213 00:21:28.795119 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-authentication/prometheus-k8s" (808 of 955)
2025-12-13T00:21:28.795219299+00:00 stderr F I1213 00:21:28.795159 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.795219299+00:00 stderr F I1213 00:21:28.795166 1 sync_worker.go:999] Running sync for rolebinding "openshift-oauth-apiserver/prometheus-k8s" (809 of 955)
2025-12-13T00:21:28.815681002+00:00 stderr F W1213 00:21:28.815610 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:28.818001005+00:00 stderr F I1213 00:21:28.817923 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (33.039381ms)
2025-12-13T00:21:28.847852250+00:00 stderr F I1213 00:21:28.847759 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:restricted-v2" (102 of 955)
2025-12-13T00:21:28.847852250+00:00 stderr F I1213 00:21:28.847807 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.847852250+00:00 stderr F I1213 00:21:28.847816 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:restricted" (103 of 955)
2025-12-13T00:21:28.895410483+00:00 stderr F I1213 00:21:28.895324 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-oauth-apiserver/prometheus-k8s" (809 of 955)
2025-12-13T00:21:28.895410483+00:00 stderr F I1213 00:21:28.895362 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.895410483+00:00 stderr F I1213 00:21:28.895368 1 sync_worker.go:999] Running sync for servicemonitor "openshift-authentication-operator/authentication-operator" (810 of 955)
2025-12-13T00:21:28.945072254+00:00 stderr F I1213 00:21:28.944977 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:restricted" (103 of 955)
2025-12-13T00:21:28.945072254+00:00 stderr F I1213 00:21:28.945019 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.945072254+00:00 stderr F I1213 00:21:28.945025 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:scc:restricted" (104 of 955)
2025-12-13T00:21:28.995424643+00:00 stderr F I1213 00:21:28.995334 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-authentication-operator/authentication-operator" (810 of 955)
2025-12-13T00:21:28.995424643+00:00 stderr F I1213 00:21:28.995383 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:28.995424643+00:00 stderr F I1213 00:21:28.995394 1 sync_worker.go:999] Running sync for servicemonitor "openshift-authentication/oauth-openshift" (811 of 955)
2025-12-13T00:21:29.044514817+00:00 stderr F I1213 00:21:29.044438 1 sync_worker.go:1014] Done syncing for clusterrole "system:openshift:scc:restricted" (104 of 955)
2025-12-13T00:21:29.044514817+00:00 stderr F I1213 00:21:29.044476 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.044514817+00:00 stderr F I1213 00:21:29.044482 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:scc:restricted-v2" (105 of 955)
2025-12-13T00:21:29.095496733+00:00 stderr F I1213 00:21:29.095423 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-authentication/oauth-openshift" (811 of 955)
2025-12-13T00:21:29.095496733+00:00 stderr F I1213 00:21:29.095474 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.095496733+00:00 stderr F I1213 00:21:29.095482 1 sync_worker.go:999] Running sync for servicemonitor "openshift-oauth-apiserver/openshift-oauth-apiserver" (812 of 955)
2025-12-13T00:21:29.145291276+00:00 stderr F I1213 00:21:29.145209 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:scc:restricted-v2" (105 of 955)
2025-12-13T00:21:29.145291276+00:00 stderr F I1213 00:21:29.145238 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.145291276+00:00 stderr F I1213 00:21:29.145249 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:scc:restricted-v2" (106 of 955)
2025-12-13T00:21:29.195911582+00:00 stderr F I1213 00:21:29.195806 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-oauth-apiserver/openshift-oauth-apiserver" (812 of 955)
2025-12-13T00:21:29.195911582+00:00 stderr F I1213 00:21:29.195841 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.195911582+00:00 stderr F I1213 00:21:29.195849 1 sync_worker.go:999] Running sync for prometheusrule "openshift-authentication-operator/authentication-operator" (813 of 955)
2025-12-13T00:21:29.246006824+00:00 stderr F I1213 00:21:29.245902 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:scc:restricted-v2" (106 of 955)
2025-12-13T00:21:29.246006824+00:00 stderr F I1213 00:21:29.245988 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.246063776+00:00 stderr F I1213 00:21:29.246001 1 sync_worker.go:999] Running sync for namespace "openshift-kube-apiserver-operator" (107 of 955)
2025-12-13T00:21:29.296363873+00:00 stderr F I1213 00:21:29.296243 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-authentication-operator/authentication-operator" (813 of 955)
2025-12-13T00:21:29.296363873+00:00 stderr F I1213 00:21:29.296303 1 task_graph.go:481] Running 44 on worker 1
2025-12-13T00:21:29.296363873+00:00 stderr F I1213 00:21:29.296322 1 sync_worker.go:982] Skipping precreation of clusteroperator "node-tuning" (494 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296363873+00:00 stderr F I1213 00:21:29.296344 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296429225+00:00 stderr F I1213 00:21:29.296355 1 sync_worker.go:999] Running sync for namespace "openshift-cluster-node-tuning-operator" (475 of 955)
2025-12-13T00:21:29.296429225+00:00 stderr F I1213 00:21:29.296376 1 sync_worker.go:1002] Skipping namespace "openshift-cluster-node-tuning-operator" (475 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296429225+00:00 stderr F I1213 00:21:29.296388 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296429225+00:00 stderr F I1213 00:21:29.296396 1 sync_worker.go:999] Running sync for customresourcedefinition "performanceprofiles.performance.openshift.io" (476 of 955)
2025-12-13T00:21:29.296429225+00:00 stderr F I1213 00:21:29.296414 1 sync_worker.go:1002] Skipping customresourcedefinition "performanceprofiles.performance.openshift.io" (476 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296429225+00:00 stderr F I1213 00:21:29.296425 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296450065+00:00 stderr F I1213 00:21:29.296433 1 sync_worker.go:999] Running sync for customresourcedefinition "profiles.tuned.openshift.io" (477 of 955)
2025-12-13T00:21:29.296466806+00:00 stderr F I1213 00:21:29.296452 1 sync_worker.go:1002] Skipping customresourcedefinition "profiles.tuned.openshift.io" (477 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296466806+00:00 stderr F I1213 00:21:29.296462 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296486826+00:00 stderr F I1213 00:21:29.296472 1 sync_worker.go:999] Running sync for customresourcedefinition "tuneds.tuned.openshift.io" (478 of 955)
2025-12-13T00:21:29.296502277+00:00 stderr F I1213 00:21:29.296489 1 sync_worker.go:1002] Skipping customresourcedefinition "tuneds.tuned.openshift.io" (478 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296517667+00:00 stderr F I1213 00:21:29.296499 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296517667+00:00 stderr F I1213 00:21:29.296509 1 sync_worker.go:999] Running sync for configmap "openshift-cluster-node-tuning-operator/trusted-ca" (479 of 955)
2025-12-13T00:21:29.296537718+00:00 stderr F I1213 00:21:29.296526 1 sync_worker.go:1002] Skipping configmap "openshift-cluster-node-tuning-operator/trusted-ca" (479 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296552968+00:00 stderr F I1213 00:21:29.296537 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296567919+00:00 stderr F I1213 00:21:29.296547 1 sync_worker.go:999] Running sync for service "openshift-cluster-node-tuning-operator/node-tuning-operator" (480 of 955)
2025-12-13T00:21:29.296582849+00:00 stderr F I1213 00:21:29.296564 1 sync_worker.go:1002] Skipping service "openshift-cluster-node-tuning-operator/node-tuning-operator" (480 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296582849+00:00 stderr F I1213 00:21:29.296573 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296598379+00:00 stderr F I1213 00:21:29.296581 1 sync_worker.go:999] Running sync for role "openshift-cluster-node-tuning-operator/prometheus-k8s" (481 of 955)
2025-12-13T00:21:29.296627790+00:00 stderr F I1213 00:21:29.296597 1 sync_worker.go:1002] Skipping role "openshift-cluster-node-tuning-operator/prometheus-k8s" (481 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296627790+00:00 stderr F I1213 00:21:29.296607 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296627790+00:00 stderr F I1213 00:21:29.296617 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-node-tuning-operator/prometheus-k8s" (482 of 955)
2025-12-13T00:21:29.296645051+00:00 stderr F I1213 00:21:29.296634 1 sync_worker.go:1002] Skipping rolebinding "openshift-cluster-node-tuning-operator/prometheus-k8s" (482 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296659991+00:00 stderr F I1213 00:21:29.296650 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296674921+00:00 stderr F I1213 00:21:29.296659 1 sync_worker.go:999] Running sync for servicemonitor "openshift-cluster-node-tuning-operator/node-tuning-operator" (483 of 955)
2025-12-13T00:21:29.296689942+00:00 stderr F I1213 00:21:29.296675 1 sync_worker.go:1002] Skipping servicemonitor "openshift-cluster-node-tuning-operator/node-tuning-operator" (483 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296689942+00:00 stderr F I1213 00:21:29.296685 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296705612+00:00 stderr F I1213 00:21:29.296693 1 sync_worker.go:999] Running sync for prometheusrule "openshift-cluster-node-tuning-operator/node-tuning-operator" (484 of 955)
2025-12-13T00:21:29.296720743+00:00 stderr F I1213 00:21:29.296710 1 sync_worker.go:1002] Skipping prometheusrule "openshift-cluster-node-tuning-operator/node-tuning-operator" (484 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296735913+00:00 stderr F I1213 00:21:29.296719 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296735913+00:00 stderr F I1213 00:21:29.296728 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (485 of 955)
2025-12-13T00:21:29.296756114+00:00 stderr F I1213 00:21:29.296745 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (485 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296771434+00:00 stderr F I1213 00:21:29.296755 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296786404+00:00 stderr F I1213 00:21:29.296764 1 sync_worker.go:999] Running sync for clusterrole "cluster-node-tuning-operator" (486 of 955)
2025-12-13T00:21:29.296801435+00:00 stderr F I1213 00:21:29.296781 1 sync_worker.go:1002] Skipping clusterrole "cluster-node-tuning-operator" (486 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296801435+00:00 stderr F I1213 00:21:29.296791 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296817085+00:00 stderr F I1213 00:21:29.296800 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-node-tuning-operator" (487 of 955)
2025-12-13T00:21:29.296832416+00:00 stderr F I1213 00:21:29.296815 1 sync_worker.go:1002] Skipping clusterrolebinding "cluster-node-tuning-operator" (487 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296832416+00:00 stderr F I1213 00:21:29.296825 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296847996+00:00 stderr F I1213 00:21:29.296834 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cluster-node-tuning-operator/tuned" (488 of 955)
2025-12-13T00:21:29.296863186+00:00 stderr F I1213 00:21:29.296850 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cluster-node-tuning-operator/tuned" (488 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296863186+00:00 stderr F I1213 00:21:29.296859 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296884827+00:00 stderr F I1213 00:21:29.296868 1 sync_worker.go:999] Running sync for clusterrole "cluster-node-tuning:tuned" (489 of 955)
2025-12-13T00:21:29.296899917+00:00 stderr F I1213 00:21:29.296884 1 sync_worker.go:1002] Skipping clusterrole "cluster-node-tuning:tuned" (489 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296899917+00:00 stderr F I1213 00:21:29.296894 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296915468+00:00 stderr F I1213 00:21:29.296902 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-node-tuning:tuned" (490 of 955)
2025-12-13T00:21:29.296930348+00:00 stderr F I1213 00:21:29.296918 1 sync_worker.go:1002] Skipping clusterrolebinding "cluster-node-tuning:tuned" (490 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.296981961+00:00 stderr F I1213 00:21:29.296959 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.296981961+00:00 stderr F I1213 00:21:29.296970 1 sync_worker.go:999] Running sync for service "openshift-cluster-node-tuning-operator/performance-addon-operator-service" (491 of 955)
2025-12-13T00:21:29.297001421+00:00 stderr F I1213 00:21:29.296988 1 sync_worker.go:1002] Skipping service "openshift-cluster-node-tuning-operator/performance-addon-operator-service" (491 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.297016331+00:00 stderr F I1213 00:21:29.296999 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.297102474+00:00 stderr F I1213 00:21:29.297008 1 sync_worker.go:999] Running sync for validatingwebhookconfiguration "performance-addon-operator" (492 of 955)
2025-12-13T00:21:29.297102474+00:00 stderr F I1213 00:21:29.297069 1 sync_worker.go:1002] Skipping validatingwebhookconfiguration "performance-addon-operator" (492 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.297102474+00:00 stderr F I1213 00:21:29.297080 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.297102474+00:00 stderr F I1213 00:21:29.297088 1 sync_worker.go:999] Running sync for deployment "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (493 of 955)
2025-12-13T00:21:29.297122164+00:00 stderr F I1213 00:21:29.297106 1 sync_worker.go:1002] Skipping deployment "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (493 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.297122164+00:00 stderr F I1213 00:21:29.297116 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.297138235+00:00 stderr F I1213 00:21:29.297124 1 sync_worker.go:999] Running sync for clusteroperator "node-tuning" (494 of 955)
2025-12-13T00:21:29.297153465+00:00 stderr F I1213 00:21:29.297139 1 sync_worker.go:1002] Skipping clusteroperator "node-tuning" (494 of 955): disabled capabilities: NodeTuning
2025-12-13T00:21:29.297197156+00:00 stderr F I1213 00:21:29.297165 1 task_graph.go:481] Running 45 on worker 1
2025-12-13T00:21:29.297197156+00:00 stderr F I1213 00:21:29.297180 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.297213137+00:00 stderr F I1213 00:21:29.297190 1 sync_worker.go:999] Running sync for role "openshift-ingress-operator/prometheus-k8s" (870 of 955)
2025-12-13T00:21:29.346473995+00:00 stderr F I1213 00:21:29.346350 1 sync_worker.go:1014] Done syncing for namespace "openshift-kube-apiserver-operator" (107 of 955)
2025-12-13T00:21:29.346473995+00:00 stderr F I1213 00:21:29.346394 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.346473995+00:00 stderr F I1213 00:21:29.346402 1 sync_worker.go:999] Running sync for namespace "openshift-kube-apiserver-operator" (108 of 955)
2025-12-13T00:21:29.395673543+00:00 stderr F I1213 00:21:29.395543 1 sync_worker.go:1014] Done syncing for role "openshift-ingress-operator/prometheus-k8s" (870 of 955)
2025-12-13T00:21:29.395673543+00:00 stderr F I1213 00:21:29.395597 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.395673543+00:00 stderr F I1213 00:21:29.395609 1 sync_worker.go:999] Running sync for rolebinding "openshift-ingress-operator/prometheus-k8s" (871 of 955)
2025-12-13T00:21:29.446627998+00:00 stderr F I1213 00:21:29.446529 1 sync_worker.go:1014] Done syncing for namespace "openshift-kube-apiserver-operator" (108 of 955)
2025-12-13T00:21:29.446627998+00:00 stderr F I1213 00:21:29.446577 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.446627998+00:00 stderr F I1213 00:21:29.446586 1 sync_worker.go:999] Running sync for securitycontextconstraints "anyuid" (109 of 955)
2025-12-13T00:21:29.497240764+00:00 stderr F I1213 00:21:29.497102 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-ingress-operator/prometheus-k8s" (871 of 955)
2025-12-13T00:21:29.497240764+00:00 stderr F I1213 00:21:29.497168 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.497240764+00:00 stderr F I1213 00:21:29.497177 1 sync_worker.go:999] Running sync for servicemonitor "openshift-ingress-operator/ingress-operator" (872 of 955)
2025-12-13T00:21:29.546075141+00:00 stderr F I1213 00:21:29.545980 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "anyuid" (109 of 955)
2025-12-13T00:21:29.546075141+00:00 stderr F I1213 00:21:29.546016 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.546075141+00:00 stderr F I1213 00:21:29.546022 1 sync_worker.go:999] Running sync for securitycontextconstraints "anyuid" (110 of 955)
2025-12-13T00:21:29.601212899+00:00 stderr F I1213 00:21:29.601068 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-ingress-operator/ingress-operator" (872 of 955)
2025-12-13T00:21:29.601212899+00:00 stderr F I1213 00:21:29.601126 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.601212899+00:00 stderr F I1213 00:21:29.601132 1 sync_worker.go:999] Running sync for prometheusrule "openshift-ingress-operator/ingress-operator" (873 of 955)
2025-12-13T00:21:29.646553213+00:00 stderr F I1213 00:21:29.646455 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "anyuid" (110 of 955)
2025-12-13T00:21:29.646553213+00:00 stderr F I1213 00:21:29.646494 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.646553213+00:00 stderr F I1213 00:21:29.646501 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostaccess" (111 of 955)
2025-12-13T00:21:29.696562132+00:00 stderr F I1213 00:21:29.695951 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-ingress-operator/ingress-operator" (873 of 955)
2025-12-13T00:21:29.696562132+00:00 stderr F I1213 00:21:29.695990 1 task_graph.go:481] Running 46 on worker 1
2025-12-13T00:21:29.696562132+00:00 stderr F I1213 00:21:29.696002 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:29.696562132+00:00
stderr F I1213 00:21:29.696008 1 sync_worker.go:999] Running sync for role "openshift-kube-scheduler-operator/prometheus-k8s" (910 of 955) 2025-12-13T00:21:29.746114500+00:00 stderr F I1213 00:21:29.745919 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostaccess" (111 of 955) 2025-12-13T00:21:29.746114500+00:00 stderr F I1213 00:21:29.745994 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:29.746114500+00:00 stderr F I1213 00:21:29.746005 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostaccess" (112 of 955) 2025-12-13T00:21:29.786207781+00:00 stderr F I1213 00:21:29.786129 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:29.786645893+00:00 stderr F I1213 00:21:29.786606 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:29.786645893+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:29.786645893+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:29.786750546+00:00 stderr F I1213 00:21:29.786709 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (607.127µs) 2025-12-13T00:21:29.786750546+00:00 stderr F I1213 00:21:29.786739 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:29.786849128+00:00 stderr F I1213 00:21:29.786808 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:29.786913150+00:00 stderr F I1213 00:21:29.786884 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:29.787059354+00:00 stderr F I1213 00:21:29.786902 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:29.787510857+00:00 stderr F I1213 00:21:29.787441 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:29.794689800+00:00 stderr F I1213 00:21:29.794603 1 sync_worker.go:1014] Done syncing for role "openshift-kube-scheduler-operator/prometheus-k8s" (910 of 955) 2025-12-13T00:21:29.794689800+00:00 stderr F I1213 00:21:29.794646 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:29.794689800+00:00 stderr F I1213 00:21:29.794658 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-scheduler-operator/prometheus-k8s" (911 of 955) 
2025-12-13T00:21:29.809889111+00:00 stderr F W1213 00:21:29.809822 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:29.811645858+00:00 stderr F I1213 00:21:29.811594 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.853421ms) 2025-12-13T00:21:29.846713425+00:00 stderr F I1213 00:21:29.846631 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostaccess" (112 of 955) 2025-12-13T00:21:29.846713425+00:00 stderr F I1213 00:21:29.846669 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:29.846713425+00:00 stderr F I1213 00:21:29.846691 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostmount-anyuid" (113 of 955) 2025-12-13T00:21:29.896828597+00:00 stderr F I1213 00:21:29.896305 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-scheduler-operator/prometheus-k8s" (911 of 955) 2025-12-13T00:21:29.896828597+00:00 stderr F I1213 00:21:29.896788 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:29.896828597+00:00 stderr F I1213 00:21:29.896794 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-scheduler-operator/kube-scheduler-operator" (912 of 955) 2025-12-13T00:21:29.945638724+00:00 stderr F I1213 00:21:29.945563 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostmount-anyuid" (113 of 955) 2025-12-13T00:21:29.945638724+00:00 stderr F I1213 00:21:29.945599 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:29.945638724+00:00 stderr F I1213 00:21:29.945605 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostmount-anyuid" (114 of 955) 2025-12-13T00:21:29.996149147+00:00 stderr F I1213 00:21:29.996075 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-scheduler-operator/kube-scheduler-operator" (912 of 955) 2025-12-13T00:21:29.996149147+00:00 stderr F I1213 00:21:29.996111 1 
sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:29.996149147+00:00 stderr F I1213 00:21:29.996118 1 sync_worker.go:999] Running sync for prometheusrule "openshift-kube-scheduler-operator/kube-scheduler-operator" (913 of 955) 2025-12-13T00:21:30.046982488+00:00 stderr F I1213 00:21:30.046710 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostmount-anyuid" (114 of 955) 2025-12-13T00:21:30.046982488+00:00 stderr F I1213 00:21:30.046741 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.046982488+00:00 stderr F I1213 00:21:30.046746 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostnetwork-v2" (115 of 955) 2025-12-13T00:21:30.096712211+00:00 stderr F I1213 00:21:30.096541 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-kube-scheduler-operator/kube-scheduler-operator" (913 of 955) 2025-12-13T00:21:30.096712211+00:00 stderr F I1213 00:21:30.096573 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.096712211+00:00 stderr F I1213 00:21:30.096580 1 sync_worker.go:999] Running sync for clusterrole "prometheus-k8s-scheduler-resources" (914 of 955) 2025-12-13T00:21:30.146656508+00:00 stderr F I1213 00:21:30.146556 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostnetwork-v2" (115 of 955) 2025-12-13T00:21:30.146656508+00:00 stderr F I1213 00:21:30.146597 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.146656508+00:00 stderr F I1213 00:21:30.146604 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostnetwork-v2" (116 of 955) 2025-12-13T00:21:30.195800124+00:00 stderr F I1213 00:21:30.195727 1 sync_worker.go:1014] Done syncing for clusterrole "prometheus-k8s-scheduler-resources" (914 of 955) 2025-12-13T00:21:30.195889186+00:00 stderr F I1213 00:21:30.195869 1 sync_worker.go:703] Dropping status report from earlier in sync 
loop 2025-12-13T00:21:30.195969779+00:00 stderr F I1213 00:21:30.195916 1 sync_worker.go:999] Running sync for role "openshift-kube-scheduler/prometheus-k8s" (915 of 955) 2025-12-13T00:21:30.246423590+00:00 stderr F I1213 00:21:30.246295 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostnetwork-v2" (116 of 955) 2025-12-13T00:21:30.246546033+00:00 stderr F I1213 00:21:30.246522 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.246641866+00:00 stderr F I1213 00:21:30.246611 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostnetwork" (117 of 955) 2025-12-13T00:21:30.295218457+00:00 stderr F I1213 00:21:30.295102 1 sync_worker.go:1014] Done syncing for role "openshift-kube-scheduler/prometheus-k8s" (915 of 955) 2025-12-13T00:21:30.295404492+00:00 stderr F I1213 00:21:30.295370 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.295525305+00:00 stderr F I1213 00:21:30.295487 1 sync_worker.go:999] Running sync for rolebinding "openshift-kube-scheduler/prometheus-k8s" (916 of 955) 2025-12-13T00:21:30.354479396+00:00 stderr F I1213 00:21:30.354388 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostnetwork" (117 of 955) 2025-12-13T00:21:30.354479396+00:00 stderr F I1213 00:21:30.354440 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.354479396+00:00 stderr F I1213 00:21:30.354453 1 sync_worker.go:999] Running sync for securitycontextconstraints "hostnetwork" (118 of 955) 2025-12-13T00:21:30.394649110+00:00 stderr F I1213 00:21:30.394527 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-kube-scheduler/prometheus-k8s" (916 of 955) 2025-12-13T00:21:30.394649110+00:00 stderr F I1213 00:21:30.394568 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.394649110+00:00 stderr F I1213 00:21:30.394580 1 sync_worker.go:999] Running sync for 
clusterrolebinding "prometheus-k8s-scheduler-resources" (917 of 955) 2025-12-13T00:21:30.446870689+00:00 stderr F I1213 00:21:30.446817 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "hostnetwork" (118 of 955) 2025-12-13T00:21:30.447006723+00:00 stderr F I1213 00:21:30.446983 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.447076795+00:00 stderr F I1213 00:21:30.447048 1 sync_worker.go:999] Running sync for securitycontextconstraints "nonroot-v2" (119 of 955) 2025-12-13T00:21:30.495657526+00:00 stderr F I1213 00:21:30.495574 1 sync_worker.go:1014] Done syncing for clusterrolebinding "prometheus-k8s-scheduler-resources" (917 of 955) 2025-12-13T00:21:30.495873411+00:00 stderr F I1213 00:21:30.495851 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.495991655+00:00 stderr F I1213 00:21:30.495912 1 sync_worker.go:999] Running sync for servicemonitor "openshift-kube-scheduler/kube-scheduler" (918 of 955) 2025-12-13T00:21:30.549027406+00:00 stderr F I1213 00:21:30.548920 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "nonroot-v2" (119 of 955) 2025-12-13T00:21:30.549027406+00:00 stderr F I1213 00:21:30.548972 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.549027406+00:00 stderr F I1213 00:21:30.548977 1 sync_worker.go:999] Running sync for securitycontextconstraints "nonroot-v2" (120 of 955) 2025-12-13T00:21:30.596001633+00:00 stderr F I1213 00:21:30.595951 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-kube-scheduler/kube-scheduler" (918 of 955) 2025-12-13T00:21:30.596077325+00:00 stderr F I1213 00:21:30.596066 1 task_graph.go:481] Running 47 on worker 1 2025-12-13T00:21:30.596118877+00:00 stderr F I1213 00:21:30.596104 1 sync_worker.go:982] Skipping precreation of clusteroperator "cluster-autoscaler" (353 of 955): overridden 2025-12-13T00:21:30.596148717+00:00 stderr F I1213 
00:21:30.596139 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.596185968+00:00 stderr F I1213 00:21:30.596163 1 sync_worker.go:999] Running sync for customresourcedefinition "clusterautoscalers.autoscaling.openshift.io" (335 of 955) 2025-12-13T00:21:30.646481086+00:00 stderr F I1213 00:21:30.646415 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "nonroot-v2" (120 of 955) 2025-12-13T00:21:30.646481086+00:00 stderr F I1213 00:21:30.646448 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.646481086+00:00 stderr F I1213 00:21:30.646455 1 sync_worker.go:999] Running sync for securitycontextconstraints "nonroot" (121 of 955) 2025-12-13T00:21:30.696667680+00:00 stderr F I1213 00:21:30.696598 1 sync_worker.go:1014] Done syncing for customresourcedefinition "clusterautoscalers.autoscaling.openshift.io" (335 of 955) 2025-12-13T00:21:30.696750612+00:00 stderr F I1213 00:21:30.696735 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.696793313+00:00 stderr F I1213 00:21:30.696775 1 sync_worker.go:999] Running sync for customresourcedefinition "machineautoscalers.autoscaling.openshift.io" (336 of 955) 2025-12-13T00:21:30.751241613+00:00 stderr F I1213 00:21:30.751161 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "nonroot" (121 of 955) 2025-12-13T00:21:30.751346336+00:00 stderr F I1213 00:21:30.751327 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.751394327+00:00 stderr F I1213 00:21:30.751371 1 sync_worker.go:999] Running sync for securitycontextconstraints "nonroot" (122 of 955) 2025-12-13T00:21:30.787194462+00:00 stderr F I1213 00:21:30.787101 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:30.787554713+00:00 stderr F I1213 00:21:30.787468 1 availableupdates.go:145] Requeue available-update evaluation, because 
"4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:30.787554713+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:30.787554713+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:30.787589944+00:00 stderr F I1213 00:21:30.787561 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (469.603µs) 2025-12-13T00:21:30.787598974+00:00 stderr F I1213 00:21:30.787587 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:30.787702677+00:00 stderr F I1213 00:21:30.787664 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:30.787789309+00:00 stderr F I1213 00:21:30.787726 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:30.787862171+00:00 stderr F I1213 00:21:30.787779 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded 
version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:30.788215021+00:00 stderr F I1213 00:21:30.788167 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:30.795610930+00:00 stderr F I1213 00:21:30.795532 1 sync_worker.go:1014] Done syncing for customresourcedefinition 
"machineautoscalers.autoscaling.openshift.io" (336 of 955) 2025-12-13T00:21:30.795610930+00:00 stderr F I1213 00:21:30.795575 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.795610930+00:00 stderr F I1213 00:21:30.795590 1 sync_worker.go:999] Running sync for clusterrole "cluster-autoscaler-operator" (337 of 955) 2025-12-13T00:21:30.816633617+00:00 stderr F W1213 00:21:30.816584 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:30.818232711+00:00 stderr F I1213 00:21:30.818195 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.607426ms) 2025-12-13T00:21:30.851178649+00:00 stderr F I1213 00:21:30.849604 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "nonroot" (122 of 955) 2025-12-13T00:21:30.851178649+00:00 stderr F I1213 00:21:30.849637 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.851178649+00:00 stderr F I1213 00:21:30.849643 1 sync_worker.go:999] Running sync for securitycontextconstraints "privileged" (123 of 955) 2025-12-13T00:21:30.894412566+00:00 stderr F I1213 00:21:30.894357 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-autoscaler-operator" (337 of 955) 2025-12-13T00:21:30.894504529+00:00 stderr F I1213 00:21:30.894491 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.894550410+00:00 stderr F I1213 00:21:30.894535 1 sync_worker.go:999] Running sync for role "openshift-machine-api/cluster-autoscaler-operator" (338 of 955) 2025-12-13T00:21:30.946114261+00:00 stderr F I1213 00:21:30.946062 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "privileged" (123 of 955) 2025-12-13T00:21:30.946195483+00:00 stderr F I1213 00:21:30.946182 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.946232164+00:00 stderr F I1213 00:21:30.946216 1 sync_worker.go:999] Running sync for 
securitycontextconstraints "privileged" (124 of 955) 2025-12-13T00:21:30.995657448+00:00 stderr F I1213 00:21:30.995584 1 sync_worker.go:1014] Done syncing for role "openshift-machine-api/cluster-autoscaler-operator" (338 of 955) 2025-12-13T00:21:30.995779632+00:00 stderr F I1213 00:21:30.995750 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:30.995888495+00:00 stderr F I1213 00:21:30.995851 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/cluster-autoscaler-operator" (339 of 955) 2025-12-13T00:21:31.046290115+00:00 stderr F I1213 00:21:31.046197 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "privileged" (124 of 955) 2025-12-13T00:21:31.046290115+00:00 stderr F I1213 00:21:31.046245 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.046290115+00:00 stderr F I1213 00:21:31.046254 1 sync_worker.go:999] Running sync for securitycontextconstraints "restricted-v2" (125 of 955) 2025-12-13T00:21:31.095792821+00:00 stderr F I1213 00:21:31.095696 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/cluster-autoscaler-operator" (339 of 955) 2025-12-13T00:21:31.095792821+00:00 stderr F I1213 00:21:31.095732 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.095792821+00:00 stderr F I1213 00:21:31.095738 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-autoscaler-operator" (340 of 955) 2025-12-13T00:21:31.148040920+00:00 stderr F I1213 00:21:31.147880 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "restricted-v2" (125 of 955) 2025-12-13T00:21:31.148040920+00:00 stderr F I1213 00:21:31.148002 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.148040920+00:00 stderr F I1213 00:21:31.148016 1 sync_worker.go:999] Running sync for securitycontextconstraints "restricted-v2" (126 of 955) 2025-12-13T00:21:31.195305955+00:00 
stderr F I1213 00:21:31.195202 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-autoscaler-operator" (340 of 955) 2025-12-13T00:21:31.195305955+00:00 stderr F I1213 00:21:31.195252 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.195305955+00:00 stderr F I1213 00:21:31.195265 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/cluster-autoscaler-operator" (341 of 955) 2025-12-13T00:21:31.247218007+00:00 stderr F I1213 00:21:31.247088 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "restricted-v2" (126 of 955) 2025-12-13T00:21:31.247218007+00:00 stderr F I1213 00:21:31.247149 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.247218007+00:00 stderr F I1213 00:21:31.247156 1 sync_worker.go:999] Running sync for securitycontextconstraints "restricted" (127 of 955) 2025-12-13T00:21:31.299963239+00:00 stderr F I1213 00:21:31.297566 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-api/cluster-autoscaler-operator" (341 of 955) 2025-12-13T00:21:31.299963239+00:00 stderr F I1213 00:21:31.297634 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.299963239+00:00 stderr F I1213 00:21:31.297643 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/cluster-autoscaler" (342 of 955) 2025-12-13T00:21:31.350528805+00:00 stderr F I1213 00:21:31.348520 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "restricted" (127 of 955) 2025-12-13T00:21:31.350528805+00:00 stderr F I1213 00:21:31.348601 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.350528805+00:00 stderr F I1213 00:21:31.348651 1 sync_worker.go:999] Running sync for securitycontextconstraints "restricted" (128 of 955) 2025-12-13T00:21:31.395535019+00:00 stderr F I1213 00:21:31.395417 1 sync_worker.go:1014] Done syncing for serviceaccount 
"openshift-machine-api/cluster-autoscaler" (342 of 955) 2025-12-13T00:21:31.395535019+00:00 stderr F I1213 00:21:31.395485 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.395535019+00:00 stderr F I1213 00:21:31.395502 1 sync_worker.go:999] Running sync for clusterrole "cluster-autoscaler" (343 of 955) 2025-12-13T00:21:31.446527885+00:00 stderr F I1213 00:21:31.446437 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "restricted" (128 of 955) 2025-12-13T00:21:31.446527885+00:00 stderr F I1213 00:21:31.446486 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.446527885+00:00 stderr F I1213 00:21:31.446501 1 sync_worker.go:999] Running sync for customresourcedefinition "kubeapiservers.operator.openshift.io" (129 of 955) 2025-12-13T00:21:31.496444172+00:00 stderr F I1213 00:21:31.496339 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-autoscaler" (343 of 955) 2025-12-13T00:21:31.496444172+00:00 stderr F I1213 00:21:31.496391 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.496444172+00:00 stderr F I1213 00:21:31.496404 1 sync_worker.go:999] Running sync for role "openshift-machine-api/cluster-autoscaler" (344 of 955) 2025-12-13T00:21:31.548792064+00:00 stderr F I1213 00:21:31.548684 1 sync_worker.go:1014] Done syncing for customresourcedefinition "kubeapiservers.operator.openshift.io" (129 of 955) 2025-12-13T00:21:31.548792064+00:00 stderr F I1213 00:21:31.548752 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.548792064+00:00 stderr F I1213 00:21:31.548768 1 sync_worker.go:999] Running sync for customresourcedefinition "kubeapiservers.operator.openshift.io" (130 of 955) 2025-12-13T00:21:31.595659469+00:00 stderr F I1213 00:21:31.595591 1 sync_worker.go:1014] Done syncing for role "openshift-machine-api/cluster-autoscaler" (344 of 955) 2025-12-13T00:21:31.595659469+00:00 
stderr F I1213 00:21:31.595628 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.595659469+00:00 stderr F I1213 00:21:31.595634 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-autoscaler" (345 of 955) 2025-12-13T00:21:31.646580853+00:00 stderr F I1213 00:21:31.646483 1 sync_worker.go:1014] Done syncing for customresourcedefinition "kubeapiservers.operator.openshift.io" (130 of 955) 2025-12-13T00:21:31.646580853+00:00 stderr F I1213 00:21:31.646519 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.646580853+00:00 stderr F I1213 00:21:31.646525 1 sync_worker.go:999] Running sync for kubeapiserver "cluster" (131 of 955) 2025-12-13T00:21:31.694795394+00:00 stderr F I1213 00:21:31.694697 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-autoscaler" (345 of 955) 2025-12-13T00:21:31.694795394+00:00 stderr F I1213 00:21:31.694737 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.694795394+00:00 stderr F I1213 00:21:31.694746 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/cluster-autoscaler" (346 of 955) 2025-12-13T00:21:31.750462966+00:00 stderr F I1213 00:21:31.750328 1 sync_worker.go:1014] Done syncing for kubeapiserver "cluster" (131 of 955) 2025-12-13T00:21:31.750462966+00:00 stderr F I1213 00:21:31.750371 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.750462966+00:00 stderr F I1213 00:21:31.750377 1 sync_worker.go:999] Running sync for kubeapiserver "cluster" (132 of 955) 2025-12-13T00:21:31.789028697+00:00 stderr F I1213 00:21:31.787657 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:31.789028697+00:00 stderr F I1213 00:21:31.788079 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to 
update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:31.789028697+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:31.789028697+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:31.789028697+00:00 stderr F I1213 00:21:31.788165 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (541.235µs) 2025-12-13T00:21:31.789028697+00:00 stderr F I1213 00:21:31.788186 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:31.789028697+00:00 stderr F I1213 00:21:31.788257 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:31.789028697+00:00 stderr F I1213 00:21:31.788323 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:31.789028697+00:00 stderr F I1213 00:21:31.788334 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" 
image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:31.789028697+00:00 stderr F I1213 00:21:31.788802 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:31.795671546+00:00 stderr F I1213 00:21:31.795111 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/cluster-autoscaler" (346 of 955) 
2025-12-13T00:21:31.795671546+00:00 stderr F I1213 00:21:31.795156 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.795671546+00:00 stderr F I1213 00:21:31.795162 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/prometheus-k8s-cluster-autoscaler-operator" (347 of 955) 2025-12-13T00:21:31.817118785+00:00 stderr F W1213 00:21:31.816057 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:31.819943311+00:00 stderr F I1213 00:21:31.819884 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.695475ms) 2025-12-13T00:21:31.846565810+00:00 stderr F I1213 00:21:31.846492 1 sync_worker.go:1014] Done syncing for kubeapiserver "cluster" (132 of 955) 2025-12-13T00:21:31.846565810+00:00 stderr F I1213 00:21:31.846526 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.846565810+00:00 stderr F I1213 00:21:31.846532 1 sync_worker.go:999] Running sync for service "openshift-kube-apiserver-operator/metrics" (133 of 955) 2025-12-13T00:21:31.894586306+00:00 stderr F I1213 00:21:31.894487 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/prometheus-k8s-cluster-autoscaler-operator" (347 of 955) 2025-12-13T00:21:31.894586306+00:00 stderr F I1213 00:21:31.894525 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.894586306+00:00 stderr F I1213 00:21:31.894532 1 sync_worker.go:999] Running sync for role "openshift-machine-api/prometheus-k8s-cluster-autoscaler-operator" (348 of 955) 2025-12-13T00:21:31.945722575+00:00 stderr F I1213 00:21:31.945590 1 sync_worker.go:1014] Done syncing for service "openshift-kube-apiserver-operator/metrics" (133 of 955) 2025-12-13T00:21:31.945722575+00:00 stderr F I1213 00:21:31.945638 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.945722575+00:00 stderr F I1213 00:21:31.945645 1 
sync_worker.go:999] Running sync for service "openshift-kube-apiserver-operator/metrics" (134 of 955) 2025-12-13T00:21:31.996153816+00:00 stderr F I1213 00:21:31.996063 1 sync_worker.go:1014] Done syncing for role "openshift-machine-api/prometheus-k8s-cluster-autoscaler-operator" (348 of 955) 2025-12-13T00:21:31.996153816+00:00 stderr F I1213 00:21:31.996119 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:31.996153816+00:00 stderr F I1213 00:21:31.996132 1 sync_worker.go:999] Running sync for service "openshift-machine-api/cluster-autoscaler-operator" (349 of 955) 2025-12-13T00:21:32.045492818+00:00 stderr F I1213 00:21:32.045392 1 sync_worker.go:1014] Done syncing for service "openshift-kube-apiserver-operator/metrics" (134 of 955) 2025-12-13T00:21:32.045492818+00:00 stderr F I1213 00:21:32.045448 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.045492818+00:00 stderr F I1213 00:21:32.045465 1 sync_worker.go:999] Running sync for configmap "openshift-kube-apiserver-operator/kube-apiserver-operator-config" (135 of 955) 2025-12-13T00:21:32.095033534+00:00 stderr F I1213 00:21:32.094927 1 sync_worker.go:1014] Done syncing for service "openshift-machine-api/cluster-autoscaler-operator" (349 of 955) 2025-12-13T00:21:32.095033534+00:00 stderr F I1213 00:21:32.094985 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.095033534+00:00 stderr F I1213 00:21:32.094991 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator" (350 of 955) 2025-12-13T00:21:32.145030433+00:00 stderr F I1213 00:21:32.144435 1 sync_worker.go:1014] Done syncing for configmap "openshift-kube-apiserver-operator/kube-apiserver-operator-config" (135 of 955) 2025-12-13T00:21:32.145030433+00:00 stderr F I1213 00:21:32.144476 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-12-13T00:21:32.145030433+00:00 stderr F I1213 00:21:32.144484 1 sync_worker.go:999] Running sync for configmap "openshift-kube-apiserver-operator/kube-apiserver-operator-config" (136 of 955) 2025-12-13T00:21:32.197735436+00:00 stderr F I1213 00:21:32.197612 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator" (350 of 955) 2025-12-13T00:21:32.197735436+00:00 stderr F I1213 00:21:32.197644 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.197735436+00:00 stderr F I1213 00:21:32.197650 1 sync_worker.go:999] Running sync for servicemonitor "openshift-machine-api/cluster-autoscaler-operator" (351 of 955) 2025-12-13T00:21:32.245979027+00:00 stderr F I1213 00:21:32.245302 1 sync_worker.go:1014] Done syncing for configmap "openshift-kube-apiserver-operator/kube-apiserver-operator-config" (136 of 955) 2025-12-13T00:21:32.245979027+00:00 stderr F I1213 00:21:32.245433 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.245979027+00:00 stderr F I1213 00:21:32.245441 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:kube-apiserver-operator" (137 of 955) 2025-12-13T00:21:32.297023844+00:00 stderr F I1213 00:21:32.296891 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-machine-api/cluster-autoscaler-operator" (351 of 955) 2025-12-13T00:21:32.297023844+00:00 stderr F I1213 00:21:32.296987 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.297023844+00:00 stderr F I1213 00:21:32.297007 1 sync_worker.go:999] Running sync for deployment "openshift-machine-api/cluster-autoscaler-operator" (352 of 955) 2025-12-13T00:21:32.297096756+00:00 stderr F I1213 00:21:32.297045 1 sync_worker.go:1002] Skipping deployment "openshift-machine-api/cluster-autoscaler-operator" (352 of 955): overridden 2025-12-13T00:21:32.297096756+00:00 stderr F I1213 
00:21:32.297065 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.297096756+00:00 stderr F I1213 00:21:32.297080 1 sync_worker.go:999] Running sync for clusteroperator "cluster-autoscaler" (353 of 955) 2025-12-13T00:21:32.297122627+00:00 stderr F I1213 00:21:32.297107 1 sync_worker.go:1002] Skipping clusteroperator "cluster-autoscaler" (353 of 955): overridden 2025-12-13T00:21:32.297141128+00:00 stderr F I1213 00:21:32.297119 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.297141128+00:00 stderr F I1213 00:21:32.297131 1 sync_worker.go:999] Running sync for clusterrole "cluster-autoscaler-operator:cluster-reader" (354 of 955) 2025-12-13T00:21:32.352333837+00:00 stderr F I1213 00:21:32.349410 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:kube-apiserver-operator" (137 of 955) 2025-12-13T00:21:32.352333837+00:00 stderr F I1213 00:21:32.350051 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.352333837+00:00 stderr F I1213 00:21:32.350072 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:kube-apiserver-operator" (138 of 955) 2025-12-13T00:21:32.396078238+00:00 stderr F I1213 00:21:32.395909 1 sync_worker.go:1014] Done syncing for clusterrole "cluster-autoscaler-operator:cluster-reader" (354 of 955) 2025-12-13T00:21:32.396078238+00:00 stderr F I1213 00:21:32.396039 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.396078238+00:00 stderr F I1213 00:21:32.396056 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/cluster-autoscaler-operator-ca" (355 of 955) 2025-12-13T00:21:32.445782460+00:00 stderr F I1213 00:21:32.445697 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:kube-apiserver-operator" (138 of 955) 2025-12-13T00:21:32.445782460+00:00 stderr F I1213 00:21:32.445746 1 
sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.445782460+00:00 stderr F I1213 00:21:32.445754 1 sync_worker.go:999] Running sync for serviceaccount "openshift-kube-apiserver-operator/kube-apiserver-operator" (139 of 955) 2025-12-13T00:21:32.496005714+00:00 stderr F I1213 00:21:32.495908 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-api/cluster-autoscaler-operator-ca" (355 of 955) 2025-12-13T00:21:32.496005714+00:00 stderr F I1213 00:21:32.495964 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.496005714+00:00 stderr F I1213 00:21:32.495973 1 sync_worker.go:999] Running sync for prometheusrule "openshift-machine-api/cluster-autoscaler-operator-rules" (356 of 955) 2025-12-13T00:21:32.545581412+00:00 stderr F I1213 00:21:32.545493 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-kube-apiserver-operator/kube-apiserver-operator" (139 of 955) 2025-12-13T00:21:32.545581412+00:00 stderr F I1213 00:21:32.545549 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.545581412+00:00 stderr F I1213 00:21:32.545562 1 sync_worker.go:999] Running sync for serviceaccount "openshift-kube-apiserver-operator/kube-apiserver-operator" (140 of 955) 2025-12-13T00:21:32.595554861+00:00 stderr F I1213 00:21:32.595478 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-machine-api/cluster-autoscaler-operator-rules" (356 of 955) 2025-12-13T00:21:32.595554861+00:00 stderr F I1213 00:21:32.595526 1 task_graph.go:481] Running 48 on worker 1 2025-12-13T00:21:32.595554861+00:00 stderr F I1213 00:21:32.595544 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.595605662+00:00 stderr F I1213 00:21:32.595556 1 sync_worker.go:999] Running sync for customresourcedefinition "kubestorageversionmigrators.operator.openshift.io" (301 of 955) 2025-12-13T00:21:32.645653753+00:00 stderr F I1213 
00:21:32.645578 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-kube-apiserver-operator/kube-apiserver-operator" (140 of 955) 2025-12-13T00:21:32.645653753+00:00 stderr F I1213 00:21:32.645612 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.645653753+00:00 stderr F I1213 00:21:32.645621 1 sync_worker.go:999] Running sync for deployment "openshift-kube-apiserver-operator/kube-apiserver-operator" (141 of 955) 2025-12-13T00:21:32.696567526+00:00 stderr F I1213 00:21:32.696472 1 sync_worker.go:1014] Done syncing for customresourcedefinition "kubestorageversionmigrators.operator.openshift.io" (301 of 955) 2025-12-13T00:21:32.696567526+00:00 stderr F I1213 00:21:32.696516 1 task_graph.go:481] Running 49 on worker 1 2025-12-13T00:21:32.745653802+00:00 stderr F I1213 00:21:32.745571 1 sync_worker.go:1014] Done syncing for deployment "openshift-kube-apiserver-operator/kube-apiserver-operator" (141 of 955) 2025-12-13T00:21:32.745653802+00:00 stderr F I1213 00:21:32.745628 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.745653802+00:00 stderr F I1213 00:21:32.745635 1 sync_worker.go:999] Running sync for deployment "openshift-kube-apiserver-operator/kube-apiserver-operator" (142 of 955) 2025-12-13T00:21:32.788744824+00:00 stderr F I1213 00:21:32.788618 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:32.789223857+00:00 stderr F I1213 00:21:32.789155 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:32.789223857+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:32.789223857+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:32.789357210+00:00 stderr F I1213 00:21:32.789296 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (712.358µs) 2025-12-13T00:21:32.789357210+00:00 stderr F I1213 00:21:32.789334 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:32.789600807+00:00 stderr F I1213 00:21:32.789457 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:32.789600807+00:00 stderr F I1213 00:21:32.789554 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:32.789736920+00:00 stderr F I1213 00:21:32.789571 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:32.790367358+00:00 stderr F I1213 00:21:32.790263 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:32.797985533+00:00 stderr F I1213 00:21:32.797907 1 sync_worker.go:989] Precreated resource clusteroperator "control-plane-machine-set" (226 of 955) 2025-12-13T00:21:32.798032484+00:00 stderr F I1213 00:21:32.797996 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.798032484+00:00 stderr F I1213 00:21:32.798012 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/control-plane-machine-set-operator" (219 of 955) 
2025-12-13T00:21:32.816988696+00:00 stderr F W1213 00:21:32.816882 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:32.820307745+00:00 stderr F I1213 00:21:32.820200 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.861053ms) 2025-12-13T00:21:32.846588705+00:00 stderr F I1213 00:21:32.846485 1 sync_worker.go:1014] Done syncing for deployment "openshift-kube-apiserver-operator/kube-apiserver-operator" (142 of 955) 2025-12-13T00:21:32.846588705+00:00 stderr F I1213 00:21:32.846558 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.846639837+00:00 stderr F I1213 00:21:32.846576 1 sync_worker.go:999] Running sync for clusteroperator "kube-apiserver" (143 of 955) 2025-12-13T00:21:32.847059948+00:00 stderr F I1213 00:21:32.847011 1 sync_worker.go:1014] Done syncing for clusteroperator "kube-apiserver" (143 of 955) 2025-12-13T00:21:32.847059948+00:00 stderr F I1213 00:21:32.847040 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.847077818+00:00 stderr F I1213 00:21:32.847052 1 sync_worker.go:999] Running sync for clusteroperator "kube-apiserver" (144 of 955) 2025-12-13T00:21:32.847332115+00:00 stderr F I1213 00:21:32.847294 1 sync_worker.go:1014] Done syncing for clusteroperator "kube-apiserver" (144 of 955) 2025-12-13T00:21:32.847332115+00:00 stderr F I1213 00:21:32.847318 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.847343295+00:00 stderr F I1213 00:21:32.847330 1 sync_worker.go:999] Running sync for prioritylevelconfiguration "openshift-control-plane-operators" (145 of 955) 2025-12-13T00:21:32.896348987+00:00 stderr F I1213 00:21:32.896269 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-api/control-plane-machine-set-operator" (219 of 955) 2025-12-13T00:21:32.896348987+00:00 stderr F I1213 00:21:32.896302 1 sync_worker.go:703] Dropping status report from 
earlier in sync loop 2025-12-13T00:21:32.896348987+00:00 stderr F I1213 00:21:32.896309 1 sync_worker.go:999] Running sync for role "openshift-machine-api/control-plane-machine-set-operator" (220 of 955) 2025-12-13T00:21:32.946408998+00:00 stderr F I1213 00:21:32.946343 1 sync_worker.go:1014] Done syncing for prioritylevelconfiguration "openshift-control-plane-operators" (145 of 955) 2025-12-13T00:21:32.946408998+00:00 stderr F I1213 00:21:32.946382 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.946408998+00:00 stderr F I1213 00:21:32.946393 1 sync_worker.go:999] Running sync for flowschema "openshift-monitoring-metrics" (146 of 955) 2025-12-13T00:21:32.995989047+00:00 stderr F I1213 00:21:32.995862 1 sync_worker.go:1014] Done syncing for role "openshift-machine-api/control-plane-machine-set-operator" (220 of 955) 2025-12-13T00:21:32.995989047+00:00 stderr F I1213 00:21:32.995908 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:32.995989047+00:00 stderr F I1213 00:21:32.995916 1 sync_worker.go:999] Running sync for clusterrole "control-plane-machine-set-operator" (221 of 955) 2025-12-13T00:21:33.048221126+00:00 stderr F I1213 00:21:33.047902 1 sync_worker.go:1014] Done syncing for flowschema "openshift-monitoring-metrics" (146 of 955) 2025-12-13T00:21:33.048221126+00:00 stderr F I1213 00:21:33.048000 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.048221126+00:00 stderr F I1213 00:21:33.048013 1 sync_worker.go:999] Running sync for flowschema "openshift-kube-apiserver-operator" (147 of 955) 2025-12-13T00:21:33.094616318+00:00 stderr F I1213 00:21:33.094537 1 sync_worker.go:1014] Done syncing for clusterrole "control-plane-machine-set-operator" (221 of 955) 2025-12-13T00:21:33.094616318+00:00 stderr F I1213 00:21:33.094577 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.094616318+00:00 stderr F I1213 
00:21:33.094586 1 sync_worker.go:999] Running sync for clusterrolebinding "control-plane-machine-set-operator" (222 of 955) 2025-12-13T00:21:33.146334094+00:00 stderr F I1213 00:21:33.146253 1 sync_worker.go:1014] Done syncing for flowschema "openshift-kube-apiserver-operator" (147 of 955) 2025-12-13T00:21:33.146334094+00:00 stderr F I1213 00:21:33.146303 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.146334094+00:00 stderr F I1213 00:21:33.146316 1 sync_worker.go:999] Running sync for prioritylevelconfiguration "openshift-control-plane-operators" (148 of 955) 2025-12-13T00:21:33.195380257+00:00 stderr F I1213 00:21:33.195305 1 sync_worker.go:1014] Done syncing for clusterrolebinding "control-plane-machine-set-operator" (222 of 955) 2025-12-13T00:21:33.195380257+00:00 stderr F I1213 00:21:33.195341 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.195380257+00:00 stderr F I1213 00:21:33.195349 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/control-plane-machine-set-operator" (223 of 955) 2025-12-13T00:21:33.245381246+00:00 stderr F I1213 00:21:33.245312 1 sync_worker.go:1014] Done syncing for prioritylevelconfiguration "openshift-control-plane-operators" (148 of 955) 2025-12-13T00:21:33.245381246+00:00 stderr F I1213 00:21:33.245354 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.245381246+00:00 stderr F I1213 00:21:33.245362 1 sync_worker.go:999] Running sync for flowschema "openshift-monitoring-metrics" (149 of 955) 2025-12-13T00:21:33.294804520+00:00 stderr F I1213 00:21:33.294737 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/control-plane-machine-set-operator" (223 of 955) 2025-12-13T00:21:33.294804520+00:00 stderr F I1213 00:21:33.294775 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.294804520+00:00 stderr F I1213 00:21:33.294781 1 
sync_worker.go:999] Running sync for service "openshift-machine-api/control-plane-machine-set-operator" (224 of 955) 2025-12-13T00:21:33.347860612+00:00 stderr F I1213 00:21:33.347736 1 sync_worker.go:1014] Done syncing for flowschema "openshift-monitoring-metrics" (149 of 955) 2025-12-13T00:21:33.347860612+00:00 stderr F I1213 00:21:33.347779 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.347860612+00:00 stderr F I1213 00:21:33.347787 1 sync_worker.go:999] Running sync for flowschema "openshift-kube-apiserver-operator" (150 of 955) 2025-12-13T00:21:33.395247730+00:00 stderr F I1213 00:21:33.395176 1 sync_worker.go:1014] Done syncing for service "openshift-machine-api/control-plane-machine-set-operator" (224 of 955) 2025-12-13T00:21:33.395247730+00:00 stderr F I1213 00:21:33.395216 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.395247730+00:00 stderr F I1213 00:21:33.395223 1 sync_worker.go:999] Running sync for deployment "openshift-machine-api/control-plane-machine-set-operator" (225 of 955) 2025-12-13T00:21:33.446881624+00:00 stderr F I1213 00:21:33.446806 1 sync_worker.go:1014] Done syncing for flowschema "openshift-kube-apiserver-operator" (150 of 955) 2025-12-13T00:21:33.446881624+00:00 stderr F I1213 00:21:33.446847 1 task_graph.go:481] Running 50 on worker 0 2025-12-13T00:21:33.446881624+00:00 stderr F I1213 00:21:33.446870 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.446958786+00:00 stderr F I1213 00:21:33.446876 1 sync_worker.go:999] Running sync for clusterrole "registry-monitoring" (816 of 955) 2025-12-13T00:21:33.495616378+00:00 stderr F I1213 00:21:33.495541 1 sync_worker.go:1014] Done syncing for deployment "openshift-machine-api/control-plane-machine-set-operator" (225 of 955) 2025-12-13T00:21:33.495616378+00:00 stderr F I1213 00:21:33.495574 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-12-13T00:21:33.495616378+00:00 stderr F I1213 00:21:33.495580 1 sync_worker.go:999] Running sync for clusteroperator "control-plane-machine-set" (226 of 955) 2025-12-13T00:21:33.495814704+00:00 stderr F I1213 00:21:33.495770 1 sync_worker.go:1014] Done syncing for clusteroperator "control-plane-machine-set" (226 of 955) 2025-12-13T00:21:33.495814704+00:00 stderr F I1213 00:21:33.495783 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.495814704+00:00 stderr F I1213 00:21:33.495788 1 sync_worker.go:999] Running sync for validatingwebhookconfiguration "controlplanemachineset.machine.openshift.io" (227 of 955) 2025-12-13T00:21:33.546537922+00:00 stderr F I1213 00:21:33.546450 1 sync_worker.go:1014] Done syncing for clusterrole "registry-monitoring" (816 of 955) 2025-12-13T00:21:33.546537922+00:00 stderr F I1213 00:21:33.546497 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.546537922+00:00 stderr F I1213 00:21:33.546503 1 sync_worker.go:999] Running sync for clusterrolebinding "registry-monitoring" (817 of 955) 2025-12-13T00:21:33.595958646+00:00 stderr F I1213 00:21:33.595828 1 sync_worker.go:1014] Done syncing for validatingwebhookconfiguration "controlplanemachineset.machine.openshift.io" (227 of 955) 2025-12-13T00:21:33.595958646+00:00 stderr F I1213 00:21:33.595899 1 task_graph.go:481] Running 51 on worker 1 2025-12-13T00:21:33.595958646+00:00 stderr F I1213 00:21:33.595918 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.596028738+00:00 stderr F I1213 00:21:33.596007 1 sync_worker.go:999] Running sync for configmap "openshift-machine-config-operator/coreos-bootimages" (689 of 955) 2025-12-13T00:21:33.645769990+00:00 stderr F I1213 00:21:33.645674 1 sync_worker.go:1014] Done syncing for clusterrolebinding "registry-monitoring" (817 of 955) 2025-12-13T00:21:33.645769990+00:00 stderr F I1213 00:21:33.645718 1 sync_worker.go:703] 
Dropping status report from earlier in sync loop 2025-12-13T00:21:33.645769990+00:00 stderr F I1213 00:21:33.645726 1 sync_worker.go:999] Running sync for role "openshift-image-registry/prometheus-k8s" (818 of 955) 2025-12-13T00:21:33.695896423+00:00 stderr F I1213 00:21:33.695773 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-config-operator/coreos-bootimages" (689 of 955) 2025-12-13T00:21:33.695896423+00:00 stderr F I1213 00:21:33.695818 1 task_graph.go:481] Running 52 on worker 1 2025-12-13T00:21:33.695896423+00:00 stderr F I1213 00:21:33.695835 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.695896423+00:00 stderr F I1213 00:21:33.695845 1 sync_worker.go:999] Running sync for role "openshift-cluster-version/prometheus-k8s" (9 of 955) 2025-12-13T00:21:33.746533799+00:00 stderr F I1213 00:21:33.746464 1 sync_worker.go:1014] Done syncing for role "openshift-image-registry/prometheus-k8s" (818 of 955) 2025-12-13T00:21:33.746533799+00:00 stderr F I1213 00:21:33.746508 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:33.746533799+00:00 stderr F I1213 00:21:33.746517 1 sync_worker.go:999] Running sync for rolebinding "openshift-image-registry/prometheus-k8s" (819 of 955) 2025-12-13T00:21:33.789664043+00:00 stderr F I1213 00:21:33.789570 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:33.790118815+00:00 stderr F I1213 00:21:33.790074 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:33.790118815+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:33.790118815+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:21:33.790215628+00:00 stderr F I1213 00:21:33.790174 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (618.396µs)
2025-12-13T00:21:33.790215628+00:00 stderr F I1213 00:21:33.790205 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:33.790357611+00:00 stderr F I1213 00:21:33.790310 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:33.790421144+00:00 stderr F I1213 00:21:33.790394 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:33.790520527+00:00 stderr F I1213 00:21:33.790414 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:33.790944288+00:00 stderr F I1213 00:21:33.790886 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:33.799169660+00:00 stderr F I1213 00:21:33.798771 1 sync_worker.go:1014] Done syncing for role "openshift-cluster-version/prometheus-k8s" (9 of 955)
2025-12-13T00:21:33.799169660+00:00 stderr F I1213 00:21:33.798811 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:33.799169660+00:00 stderr F I1213 00:21:33.798820 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-version/prometheus-k8s" (10 of 955)
2025-12-13T00:21:33.820856855+00:00 stderr F W1213 00:21:33.820784 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:33.824321169+00:00 stderr F I1213 00:21:33.824256 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.04401ms)
2025-12-13T00:21:33.846055465+00:00 stderr F I1213 00:21:33.845968 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-image-registry/prometheus-k8s" (819 of 955)
2025-12-13T00:21:33.846055465+00:00 stderr F I1213 00:21:33.846008 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:33.846055465+00:00 stderr F I1213 00:21:33.846019 1 sync_worker.go:999] Running sync for servicemonitor "openshift-image-registry/image-registry" (820 of 955)
2025-12-13T00:21:33.894929774+00:00 stderr F I1213 00:21:33.894855 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-cluster-version/prometheus-k8s" (10 of 955)
2025-12-13T00:21:33.894929774+00:00 stderr F I1213 00:21:33.894890 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:33.894929774+00:00 stderr F I1213 00:21:33.894899 1 sync_worker.go:999] Running sync for servicemonitor "openshift-cluster-version/cluster-version-operator" (11 of 955)
2025-12-13T00:21:33.949015853+00:00 stderr F I1213 00:21:33.948899 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-image-registry/image-registry" (820 of 955)
2025-12-13T00:21:33.949015853+00:00 stderr F I1213 00:21:33.948978 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:33.949015853+00:00 stderr F I1213 00:21:33.948991 1 sync_worker.go:999] Running sync for servicemonitor "openshift-image-registry/image-registry-operator" (821 of 955)
2025-12-13T00:21:33.996321360+00:00 stderr F I1213 00:21:33.996244 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-cluster-version/cluster-version-operator" (11 of 955)
2025-12-13T00:21:33.996321360+00:00 stderr F I1213 00:21:33.996296 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:33.996352310+00:00 stderr F I1213 00:21:33.996317 1 sync_worker.go:999] Running sync for prometheusrule "openshift-cluster-version/cluster-version-operator" (12 of 955)
2025-12-13T00:21:34.047306505+00:00 stderr F I1213 00:21:34.047192 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-image-registry/image-registry-operator" (821 of 955)
2025-12-13T00:21:34.047306505+00:00 stderr F I1213 00:21:34.047235 1 task_graph.go:481] Running 53 on worker 0
2025-12-13T00:21:34.098280440+00:00 stderr F I1213 00:21:34.098182 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-cluster-version/cluster-version-operator" (12 of 955)
2025-12-13T00:21:34.098280440+00:00 stderr F I1213 00:21:34.098212 1 task_graph.go:481] Running 54 on worker 1
2025-12-13T00:21:34.148542437+00:00 stderr F I1213 00:21:34.148430 1 sync_worker.go:989] Precreated resource clusteroperator "operator-lifecycle-manager" (727 of 955)
2025-12-13T00:21:34.198956847+00:00 stderr F I1213 00:21:34.198864 1 sync_worker.go:989] Precreated resource clusteroperator "kube-controller-manager" (165 of 955)
2025-12-13T00:21:34.249659685+00:00 stderr F I1213 00:21:34.249564 1 sync_worker.go:989] Precreated resource clusteroperator "operator-lifecycle-manager-catalog" (728 of 955)
2025-12-13T00:21:34.301548946+00:00 stderr F I1213 00:21:34.301428 1 sync_worker.go:989] Precreated resource clusteroperator "kube-controller-manager" (166 of 955)
2025-12-13T00:21:34.301548946+00:00 stderr F I1213 00:21:34.301472 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.301548946+00:00 stderr F I1213 00:21:34.301480 1 sync_worker.go:999] Running sync for namespace "openshift-kube-controller-manager-operator" (151 of 955)
2025-12-13T00:21:34.351979147+00:00 stderr F I1213 00:21:34.351108 1 sync_worker.go:989] Precreated resource clusteroperator "operator-lifecycle-manager-packageserver" (729 of 955)
2025-12-13T00:21:34.351979147+00:00 stderr F I1213 00:21:34.351162 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.351979147+00:00 stderr F I1213 00:21:34.351176 1 sync_worker.go:999] Running sync for customresourcedefinition "catalogsources.operators.coreos.com" (690 of 955)
2025-12-13T00:21:34.395519512+00:00 stderr F I1213 00:21:34.395440 1 sync_worker.go:1014] Done syncing for namespace "openshift-kube-controller-manager-operator" (151 of 955)
2025-12-13T00:21:34.395519512+00:00 stderr F I1213 00:21:34.395504 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.395570883+00:00 stderr F I1213 00:21:34.395521 1 sync_worker.go:999] Running sync for namespace "openshift-kube-controller-manager-operator" (152 of 955)
2025-12-13T00:21:34.466973080+00:00 stderr F I1213 00:21:34.464723 1 sync_worker.go:1014] Done syncing for customresourcedefinition "catalogsources.operators.coreos.com" (690 of 955)
2025-12-13T00:21:34.466973080+00:00 stderr F I1213 00:21:34.464783 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.466973080+00:00 stderr F I1213 00:21:34.464796 1 sync_worker.go:999] Running sync for customresourcedefinition "clusterserviceversions.operators.coreos.com" (691 of 955)
2025-12-13T00:21:34.499525648+00:00 stderr F I1213 00:21:34.498343 1 sync_worker.go:1014] Done syncing for namespace "openshift-kube-controller-manager-operator" (152 of 955)
2025-12-13T00:21:34.499525648+00:00 stderr F I1213 00:21:34.498388 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.499525648+00:00 stderr F I1213 00:21:34.498396 1 sync_worker.go:999] Running sync for kubecontrollermanager "cluster" (153 of 955)
2025-12-13T00:21:34.547186075+00:00 stderr F I1213 00:21:34.547113 1 sync_worker.go:1014] Done syncing for kubecontrollermanager "cluster" (153 of 955)
2025-12-13T00:21:34.547186075+00:00 stderr F I1213 00:21:34.547149 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.547186075+00:00 stderr F I1213 00:21:34.547155 1 sync_worker.go:999] Running sync for kubecontrollermanager "cluster" (154 of 955)
2025-12-13T00:21:34.634143431+00:00 stderr F I1213 00:21:34.634058 1 sync_worker.go:1014] Done syncing for customresourcedefinition "clusterserviceversions.operators.coreos.com" (691 of 955)
2025-12-13T00:21:34.634143431+00:00 stderr F I1213 00:21:34.634115 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.634143431+00:00 stderr F I1213 00:21:34.634121 1 sync_worker.go:999] Running sync for customresourcedefinition "installplans.operators.coreos.com" (692 of 955)
2025-12-13T00:21:34.646742491+00:00 stderr F I1213 00:21:34.646674 1 sync_worker.go:1014] Done syncing for kubecontrollermanager "cluster" (154 of 955)
2025-12-13T00:21:34.646742491+00:00 stderr F I1213 00:21:34.646712 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.646742491+00:00 stderr F I1213 00:21:34.646719 1 sync_worker.go:999] Running sync for service "openshift-kube-controller-manager-operator/metrics" (155 of 955)
2025-12-13T00:21:34.696307588+00:00 stderr F I1213 00:21:34.696228 1 sync_worker.go:1014] Done syncing for customresourcedefinition "installplans.operators.coreos.com" (692 of 955)
2025-12-13T00:21:34.696307588+00:00 stderr F I1213 00:21:34.696272 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.696307588+00:00 stderr F I1213 00:21:34.696280 1 sync_worker.go:999] Running sync for namespace "openshift-operator-lifecycle-manager" (693 of 955)
2025-12-13T00:21:34.745345311+00:00 stderr F I1213 00:21:34.745289 1 sync_worker.go:1014] Done syncing for service "openshift-kube-controller-manager-operator/metrics" (155 of 955)
2025-12-13T00:21:34.745345311+00:00 stderr F I1213 00:21:34.745312 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.745345311+00:00 stderr F I1213 00:21:34.745317 1 sync_worker.go:999] Running sync for service "openshift-kube-controller-manager-operator/metrics" (156 of 955)
2025-12-13T00:21:34.790786508+00:00 stderr F I1213 00:21:34.790722 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:21:34.791177678+00:00 stderr F I1213 00:21:34.791146 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:21:34.791177678+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:21:34.791177678+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:21:34.791258931+00:00 stderr F I1213 00:21:34.791229 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (519.034µs)
2025-12-13T00:21:34.791268761+00:00 stderr F I1213 00:21:34.791255 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:34.791361943+00:00 stderr F I1213 00:21:34.791327 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:34.791410455+00:00 stderr F I1213 00:21:34.791395 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:34.791487847+00:00 stderr F I1213 00:21:34.791408 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:34.791825956+00:00 stderr F I1213 00:21:34.791775 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:34.795338290+00:00 stderr F I1213 00:21:34.795281 1 sync_worker.go:1014] Done syncing for namespace "openshift-operator-lifecycle-manager" (693 of 955)
2025-12-13T00:21:34.795338290+00:00 stderr F I1213 00:21:34.795326 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.795375531+00:00 stderr F I1213 00:21:34.795333 1 sync_worker.go:999] Running sync for namespace "openshift-operators" (694 of 955)
2025-12-13T00:21:34.836148482+00:00 stderr F W1213 00:21:34.836095 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:34.839069611+00:00 stderr F I1213 00:21:34.838232 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (46.971228ms)
2025-12-13T00:21:34.844906968+00:00 stderr F I1213 00:21:34.844857 1 sync_worker.go:1014] Done syncing for service "openshift-kube-controller-manager-operator/metrics" (156 of 955)
2025-12-13T00:21:34.844906968+00:00 stderr F I1213 00:21:34.844891 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.844958629+00:00 stderr F I1213 00:21:34.844900 1 sync_worker.go:999] Running sync for configmap "openshift-kube-controller-manager-operator/kube-controller-manager-operator-config" (157 of 955)
2025-12-13T00:21:34.895419301+00:00 stderr F I1213 00:21:34.895341 1 sync_worker.go:1014] Done syncing for namespace "openshift-operators" (694 of 955)
2025-12-13T00:21:34.895419301+00:00 stderr F I1213 00:21:34.895392 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.895419301+00:00 stderr F I1213 00:21:34.895401 1 sync_worker.go:999] Running sync for customresourcedefinition "olmconfigs.operators.coreos.com" (695 of 955)
2025-12-13T00:21:34.945154213+00:00 stderr F I1213 00:21:34.945101 1 sync_worker.go:1014] Done syncing for configmap "openshift-kube-controller-manager-operator/kube-controller-manager-operator-config" (157 of 955)
2025-12-13T00:21:34.945154213+00:00 stderr F I1213 00:21:34.945146 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.945184984+00:00 stderr F I1213 00:21:34.945154 1 sync_worker.go:999] Running sync for configmap "openshift-kube-controller-manager-operator/kube-controller-manager-operator-config" (158 of 955)
2025-12-13T00:21:34.995732178+00:00 stderr F I1213 00:21:34.995627 1 sync_worker.go:1014] Done syncing for customresourcedefinition "olmconfigs.operators.coreos.com" (695 of 955)
2025-12-13T00:21:34.995732178+00:00 stderr F I1213 00:21:34.995668 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:34.995732178+00:00 stderr F I1213 00:21:34.995684 1 sync_worker.go:999] Running sync for customresourcedefinition "operatorconditions.operators.coreos.com" (696 of 955)
2025-12-13T00:21:35.045415698+00:00 stderr F I1213 00:21:35.045179 1 sync_worker.go:1014] Done syncing for configmap "openshift-kube-controller-manager-operator/kube-controller-manager-operator-config" (158 of 955)
2025-12-13T00:21:35.045415698+00:00 stderr F I1213 00:21:35.045220 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.045415698+00:00 stderr F I1213 00:21:35.045229 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:kube-controller-manager-operator" (159 of 955)
2025-12-13T00:21:35.097215027+00:00 stderr F I1213 00:21:35.097127 1 sync_worker.go:1014] Done syncing for customresourcedefinition "operatorconditions.operators.coreos.com" (696 of 955)
2025-12-13T00:21:35.097215027+00:00 stderr F I1213 00:21:35.097180 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.097215027+00:00 stderr F I1213 00:21:35.097188 1 sync_worker.go:999] Running sync for customresourcedefinition "operatorgroups.operators.coreos.com" (697 of 955)
2025-12-13T00:21:35.146260810+00:00 stderr F I1213 00:21:35.146189 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:kube-controller-manager-operator" (159 of 955)
2025-12-13T00:21:35.146260810+00:00 stderr F I1213 00:21:35.146230 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.146260810+00:00 stderr F I1213 00:21:35.146238 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:kube-controller-manager-operator" (160 of 955)
2025-12-13T00:21:35.196579238+00:00 stderr F I1213 00:21:35.196504 1 sync_worker.go:1014] Done syncing for customresourcedefinition "operatorgroups.operators.coreos.com" (697 of 955)
2025-12-13T00:21:35.196579238+00:00 stderr F I1213 00:21:35.196550 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.196579238+00:00 stderr F I1213 00:21:35.196557 1 sync_worker.go:999] Running sync for customresourcedefinition "operators.operators.coreos.com" (698 of 955)
2025-12-13T00:21:35.244987195+00:00 stderr F I1213 00:21:35.244912 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:kube-controller-manager-operator" (160 of 955)
2025-12-13T00:21:35.244987195+00:00 stderr F I1213 00:21:35.244967 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.244987195+00:00 stderr F I1213 00:21:35.244973 1 sync_worker.go:999] Running sync for serviceaccount "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (161 of 955)
2025-12-13T00:21:35.297569224+00:00 stderr F I1213 00:21:35.297494 1 sync_worker.go:1014] Done syncing for customresourcedefinition "operators.operators.coreos.com" (698 of 955)
2025-12-13T00:21:35.297569224+00:00 stderr F I1213 00:21:35.297527 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.297569224+00:00 stderr F I1213 00:21:35.297533 1 sync_worker.go:999] Running sync for poddisruptionbudget "openshift-operator-lifecycle-manager/packageserver-pdb" (699 of 955)
2025-12-13T00:21:35.345064004+00:00 stderr F I1213 00:21:35.344946 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (161 of 955)
2025-12-13T00:21:35.345064004+00:00 stderr F I1213 00:21:35.344982 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.345064004+00:00 stderr F I1213 00:21:35.344989 1 sync_worker.go:999] Running sync for serviceaccount "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (162 of 955)
2025-12-13T00:21:35.394744925+00:00 stderr F I1213 00:21:35.394641 1 sync_worker.go:1014] Done syncing for poddisruptionbudget "openshift-operator-lifecycle-manager/packageserver-pdb" (699 of 955)
2025-12-13T00:21:35.394744925+00:00 stderr F I1213 00:21:35.394678 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.394744925+00:00 stderr F I1213 00:21:35.394685 1 sync_worker.go:999] Running sync for configmap "openshift-operator-lifecycle-manager/collect-profiles-config" (700 of 955)
2025-12-13T00:21:35.445483154+00:00 stderr F I1213 00:21:35.445375 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (162 of 955)
2025-12-13T00:21:35.445483154+00:00 stderr F I1213 00:21:35.445409 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.445483154+00:00 stderr F I1213 00:21:35.445415 1 sync_worker.go:999] Running sync for deployment "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (163 of 955)
2025-12-13T00:21:35.495171835+00:00 stderr F I1213 00:21:35.495065 1 sync_worker.go:1014] Done syncing for configmap "openshift-operator-lifecycle-manager/collect-profiles-config" (700 of 955)
2025-12-13T00:21:35.495171835+00:00 stderr F I1213 00:21:35.495100 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.495171835+00:00 stderr F I1213 00:21:35.495106 1 sync_worker.go:999] Running sync for role "openshift-operator-lifecycle-manager/collect-profiles" (701 of 955)
2025-12-13T00:21:35.545681388+00:00 stderr F I1213 00:21:35.545581 1 sync_worker.go:1014] Done syncing for deployment "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (163 of 955)
2025-12-13T00:21:35.545681388+00:00 stderr F I1213 00:21:35.545620 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.545681388+00:00 stderr F I1213 00:21:35.545630 1 sync_worker.go:999] Running sync for deployment "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (164 of 955)
2025-12-13T00:21:35.595431701+00:00 stderr F I1213 00:21:35.595354 1 sync_worker.go:1014] Done syncing for role "openshift-operator-lifecycle-manager/collect-profiles" (701 of 955)
2025-12-13T00:21:35.595431701+00:00 stderr F I1213 00:21:35.595406 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.595431701+00:00 stderr F I1213 00:21:35.595414 1 sync_worker.go:999] Running sync for rolebinding "openshift-operator-lifecycle-manager/collect-profiles" (702 of 955)
2025-12-13T00:21:35.647351111+00:00 stderr F I1213 00:21:35.647280 1 sync_worker.go:1014] Done syncing for deployment "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (164 of 955)
2025-12-13T00:21:35.647351111+00:00 stderr F I1213 00:21:35.647327 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.647351111+00:00 stderr F I1213 00:21:35.647334 1 sync_worker.go:999] Running sync for clusteroperator "kube-controller-manager" (165 of 955)
2025-12-13T00:21:35.647593358+00:00 stderr F I1213 00:21:35.647574 1 sync_worker.go:1014] Done syncing for clusteroperator "kube-controller-manager" (165 of 955)
2025-12-13T00:21:35.647593358+00:00 stderr F I1213 00:21:35.647587 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.647605868+00:00 stderr F I1213 00:21:35.647593 1 sync_worker.go:999] Running sync for clusteroperator "kube-controller-manager" (166 of 955)
2025-12-13T00:21:35.647765864+00:00 stderr F I1213 00:21:35.647745 1 sync_worker.go:1014] Done syncing for clusteroperator "kube-controller-manager" (166 of 955)
2025-12-13T00:21:35.647777124+00:00 stderr F I1213 00:21:35.647771 1 task_graph.go:481] Running 55 on worker 1
2025-12-13T00:21:35.694969626+00:00 stderr F I1213 00:21:35.694875 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-operator-lifecycle-manager/collect-profiles" (702 of 955)
2025-12-13T00:21:35.694969626+00:00 stderr F I1213 00:21:35.694919 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.694969626+00:00 stderr F I1213 00:21:35.694946 1 sync_worker.go:999] Running sync for serviceaccount "openshift-operator-lifecycle-manager/collect-profiles" (703 of 955)
2025-12-13T00:21:35.746868557+00:00 stderr F I1213 00:21:35.746779 1 sync_worker.go:989] Precreated resource clusteroperator "openshift-controller-manager" (516 of 955)
2025-12-13T00:21:35.746868557+00:00 stderr F I1213 00:21:35.746809 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.746868557+00:00 stderr F I1213 00:21:35.746814 1 sync_worker.go:999] Running sync for namespace "openshift-controller-manager-operator" (507 of 955)
2025-12-13T00:21:35.792034836+00:00 stderr F I1213 00:21:35.791988 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:21:35.792363335+00:00 stderr F I1213 00:21:35.792337 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:21:35.792363335+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:21:35.792363335+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:21:35.792430107+00:00 stderr F I1213 00:21:35.792407 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (434.992µs)
2025-12-13T00:21:35.792439657+00:00 stderr F I1213 00:21:35.792429 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:35.792509359+00:00 stderr F I1213 00:21:35.792481 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:35.792551590+00:00 stderr F I1213 00:21:35.792540 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:35.792627322+00:00 stderr F I1213 00:21:35.792560 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:35.792915560+00:00 stderr F I1213 00:21:35.792881 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:35.794663197+00:00 stderr F I1213 00:21:35.794639 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-operator-lifecycle-manager/collect-profiles" (703 of 955)
2025-12-13T00:21:35.794704268+00:00 stderr F I1213 00:21:35.794694 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.794731859+00:00 stderr F I1213 00:21:35.794720 1 sync_worker.go:999] Running sync for secret "openshift-operator-lifecycle-manager/pprof-cert" (704 of 955)
2025-12-13T00:21:35.829073425+00:00 stderr F W1213 00:21:35.829027 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:35.830850984+00:00 stderr F I1213 00:21:35.830819 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (38.387677ms)
2025-12-13T00:21:35.844515903+00:00 stderr F I1213 00:21:35.844435 1 sync_worker.go:1014] Done syncing for namespace "openshift-controller-manager-operator" (507 of 955)
2025-12-13T00:21:35.844515903+00:00 stderr F I1213 00:21:35.844483 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.844515903+00:00 stderr F I1213 00:21:35.844494 1 sync_worker.go:999] Running sync for openshiftcontrollermanager "cluster" (508 of 955)
2025-12-13T00:21:35.897175214+00:00 stderr F I1213 00:21:35.897103 1 sync_worker.go:1014] Done syncing for secret "openshift-operator-lifecycle-manager/pprof-cert" (704 of 955)
2025-12-13T00:21:35.897175214+00:00 stderr F I1213 00:21:35.897138 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.897175214+00:00 stderr F I1213 00:21:35.897145 1 sync_worker.go:999] Running sync for customresourcedefinition "subscriptions.operators.coreos.com" (705 of 955)
2025-12-13T00:21:35.957744628+00:00 stderr F I1213 00:21:35.957682 1 sync_worker.go:1014] Done syncing for openshiftcontrollermanager "cluster" (508 of 955)
2025-12-13T00:21:35.957744628+00:00 stderr F I1213 00:21:35.957727 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:35.957768358+00:00 stderr F I1213 00:21:35.957738 1 sync_worker.go:999] Running sync for configmap "openshift-controller-manager-operator/openshift-controller-manager-operator-config" (509 of 955)
2025-12-13T00:21:36.012012073+00:00 stderr F I1213 00:21:36.011905 1 sync_worker.go:1014] Done syncing for customresourcedefinition "subscriptions.operators.coreos.com" (705 of 955)
2025-12-13T00:21:36.012012073+00:00 stderr F I1213 00:21:36.011985 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:36.012012073+00:00 stderr F I1213 00:21:36.011998 1 sync_worker.go:999] Running sync for serviceaccount "openshift-operator-lifecycle-manager/olm-operator-serviceaccount" (706 of 955)
2025-12-13T00:21:36.047077548+00:00 stderr F I1213 00:21:36.046295 1 sync_worker.go:1014] Done syncing for configmap "openshift-controller-manager-operator/openshift-controller-manager-operator-config" (509 of 955)
2025-12-13T00:21:36.047077548+00:00 stderr F I1213 00:21:36.046341 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:36.047077548+00:00 stderr F I1213 00:21:36.046353 1 sync_worker.go:999] Running sync for service "openshift-controller-manager-operator/metrics" (510 of 955)
2025-12-13T00:21:36.095304670+00:00 stderr F I1213 00:21:36.095221 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-operator-lifecycle-manager/olm-operator-serviceaccount" (706 of 955)
2025-12-13T00:21:36.095304670+00:00 stderr F I1213 00:21:36.095264 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:36.095304670+00:00 stderr F I1213 00:21:36.095271 1 sync_worker.go:999] Running sync for clusterrole "system:controller:operator-lifecycle-manager" (707 of 955)
2025-12-13T00:21:36.144763145+00:00 stderr F I1213 00:21:36.144268 1 sync_worker.go:1014] Done syncing for service "openshift-controller-manager-operator/metrics" (510 of 955) 2025-12-13T00:21:36.144763145+00:00 stderr F I1213 00:21:36.144304 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.144763145+00:00 stderr F I1213 00:21:36.144310 1 sync_worker.go:999] Running sync for configmap "openshift-controller-manager-operator/openshift-controller-manager-images" (511 of 955) 2025-12-13T00:21:36.195235927+00:00 stderr F I1213 00:21:36.195164 1 sync_worker.go:1014] Done syncing for clusterrole "system:controller:operator-lifecycle-manager" (707 of 955) 2025-12-13T00:21:36.195235927+00:00 stderr F I1213 00:21:36.195200 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.195235927+00:00 stderr F I1213 00:21:36.195206 1 sync_worker.go:999] Running sync for clusterrolebinding "olm-operator-binding-openshift-operator-lifecycle-manager" (708 of 955) 2025-12-13T00:21:36.245590696+00:00 stderr F I1213 00:21:36.245473 1 sync_worker.go:1014] Done syncing for configmap "openshift-controller-manager-operator/openshift-controller-manager-images" (511 of 955) 2025-12-13T00:21:36.245590696+00:00 stderr F I1213 00:21:36.245508 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.245590696+00:00 stderr F I1213 00:21:36.245514 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:openshift-controller-manager-operator" (512 of 955) 2025-12-13T00:21:36.296982242+00:00 stderr F I1213 00:21:36.296827 1 sync_worker.go:1014] Done syncing for clusterrolebinding "olm-operator-binding-openshift-operator-lifecycle-manager" (708 of 955) 2025-12-13T00:21:36.296982242+00:00 stderr F I1213 00:21:36.296896 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.296982242+00:00 stderr F I1213 00:21:36.296903 1 
sync_worker.go:999] Running sync for olmconfig "cluster" (709 of 955) 2025-12-13T00:21:36.346649063+00:00 stderr F I1213 00:21:36.346512 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:openshift-controller-manager-operator" (512 of 955) 2025-12-13T00:21:36.346649063+00:00 stderr F I1213 00:21:36.346624 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.346693054+00:00 stderr F I1213 00:21:36.346649 1 sync_worker.go:999] Running sync for serviceaccount "openshift-controller-manager-operator/openshift-controller-manager-operator" (513 of 955) 2025-12-13T00:21:36.399953181+00:00 stderr F I1213 00:21:36.399878 1 sync_worker.go:1014] Done syncing for olmconfig "cluster" (709 of 955) 2025-12-13T00:21:36.399953181+00:00 stderr F I1213 00:21:36.399916 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.399953181+00:00 stderr F I1213 00:21:36.399924 1 sync_worker.go:999] Running sync for service "openshift-operator-lifecycle-manager/olm-operator-metrics" (710 of 955) 2025-12-13T00:21:36.453327771+00:00 stderr F I1213 00:21:36.453242 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-controller-manager-operator/openshift-controller-manager-operator" (513 of 955) 2025-12-13T00:21:36.453327771+00:00 stderr F I1213 00:21:36.453292 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.453327771+00:00 stderr F I1213 00:21:36.453303 1 sync_worker.go:999] Running sync for deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (514 of 955) 2025-12-13T00:21:36.495310384+00:00 stderr F I1213 00:21:36.495235 1 sync_worker.go:1014] Done syncing for service "openshift-operator-lifecycle-manager/olm-operator-metrics" (710 of 955) 2025-12-13T00:21:36.495310384+00:00 stderr F I1213 00:21:36.495275 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-12-13T00:21:36.495310384+00:00 stderr F I1213 00:21:36.495283 1 sync_worker.go:999] Running sync for service "openshift-operator-lifecycle-manager/catalog-operator-metrics" (711 of 955) 2025-12-13T00:21:36.545668983+00:00 stderr F I1213 00:21:36.545550 1 sync_worker.go:1014] Done syncing for deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (514 of 955) 2025-12-13T00:21:36.545668983+00:00 stderr F I1213 00:21:36.545616 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.545668983+00:00 stderr F I1213 00:21:36.545628 1 sync_worker.go:999] Running sync for flowschema "openshift-controller-manager" (515 of 955) 2025-12-13T00:21:36.595794556+00:00 stderr F I1213 00:21:36.595694 1 sync_worker.go:1014] Done syncing for service "openshift-operator-lifecycle-manager/catalog-operator-metrics" (711 of 955) 2025-12-13T00:21:36.595794556+00:00 stderr F I1213 00:21:36.595745 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.595794556+00:00 stderr F I1213 00:21:36.595758 1 sync_worker.go:999] Running sync for deployment "openshift-operator-lifecycle-manager/package-server-manager" (712 of 955) 2025-12-13T00:21:36.646266298+00:00 stderr F I1213 00:21:36.646198 1 sync_worker.go:1014] Done syncing for flowschema "openshift-controller-manager" (515 of 955) 2025-12-13T00:21:36.646266298+00:00 stderr F I1213 00:21:36.646238 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.646266298+00:00 stderr F I1213 00:21:36.646245 1 sync_worker.go:999] Running sync for clusteroperator "openshift-controller-manager" (516 of 955) 2025-12-13T00:21:36.646460403+00:00 stderr F I1213 00:21:36.646438 1 sync_worker.go:1014] Done syncing for clusteroperator "openshift-controller-manager" (516 of 955) 2025-12-13T00:21:36.646473384+00:00 stderr F I1213 00:21:36.646463 1 task_graph.go:481] Running 56 on worker 1 
2025-12-13T00:21:36.696706659+00:00 stderr F I1213 00:21:36.696643 1 sync_worker.go:1014] Done syncing for deployment "openshift-operator-lifecycle-manager/package-server-manager" (712 of 955) 2025-12-13T00:21:36.696706659+00:00 stderr F I1213 00:21:36.696679 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.696706659+00:00 stderr F I1213 00:21:36.696687 1 sync_worker.go:999] Running sync for service "openshift-operator-lifecycle-manager/package-server-manager-metrics" (713 of 955) 2025-12-13T00:21:36.747974082+00:00 stderr F I1213 00:21:36.747862 1 sync_worker.go:989] Precreated resource clusteroperator "kube-scheduler" (177 of 955) 2025-12-13T00:21:36.747974082+00:00 stderr F I1213 00:21:36.747901 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.747974082+00:00 stderr F I1213 00:21:36.747907 1 sync_worker.go:999] Running sync for namespace "openshift-kube-scheduler-operator" (169 of 955) 2025-12-13T00:21:36.793077119+00:00 stderr F I1213 00:21:36.792532 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:36.793077119+00:00 stderr F I1213 00:21:36.793025 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:36.793077119+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:36.793077119+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:36.793139501+00:00 stderr F I1213 00:21:36.793117 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (600.636µs) 2025-12-13T00:21:36.793148531+00:00 stderr F I1213 00:21:36.793141 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:36.793250284+00:00 stderr F I1213 00:21:36.793206 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:36.793302365+00:00 stderr F I1213 00:21:36.793284 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:36.793411208+00:00 stderr F I1213 00:21:36.793300 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:36.794019985+00:00 stderr F I1213 00:21:36.793758 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:36.794633672+00:00 stderr F I1213 00:21:36.794591 1 sync_worker.go:1014] Done syncing for service "openshift-operator-lifecycle-manager/package-server-manager-metrics" (713 of 955) 2025-12-13T00:21:36.794633672+00:00 stderr F I1213 00:21:36.794616 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.794633672+00:00 stderr F I1213 00:21:36.794621 1 sync_worker.go:999] Running sync for servicemonitor 
"openshift-operator-lifecycle-manager/package-server-manager-metrics" (714 of 955) 2025-12-13T00:21:36.824471817+00:00 stderr F W1213 00:21:36.824369 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:36.826054729+00:00 stderr F I1213 00:21:36.825855 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (32.712623ms) 2025-12-13T00:21:36.845222307+00:00 stderr F I1213 00:21:36.845155 1 sync_worker.go:1014] Done syncing for namespace "openshift-kube-scheduler-operator" (169 of 955) 2025-12-13T00:21:36.845222307+00:00 stderr F I1213 00:21:36.845206 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.845282358+00:00 stderr F I1213 00:21:36.845213 1 sync_worker.go:999] Running sync for customresourcedefinition "kubeschedulers.operator.openshift.io" (170 of 955) 2025-12-13T00:21:36.895311928+00:00 stderr F I1213 00:21:36.895243 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-operator-lifecycle-manager/package-server-manager-metrics" (714 of 955) 2025-12-13T00:21:36.895311928+00:00 stderr F I1213 00:21:36.895281 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.895311928+00:00 stderr F I1213 00:21:36.895288 1 sync_worker.go:999] Running sync for cronjob "openshift-operator-lifecycle-manager/collect-profiles" (715 of 955) 2025-12-13T00:21:36.945624896+00:00 stderr F I1213 00:21:36.945542 1 sync_worker.go:1014] Done syncing for customresourcedefinition "kubeschedulers.operator.openshift.io" (170 of 955) 2025-12-13T00:21:36.945624896+00:00 stderr F I1213 00:21:36.945575 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.945624896+00:00 stderr F I1213 00:21:36.945581 1 sync_worker.go:999] Running sync for kubescheduler "cluster" (171 of 955) 2025-12-13T00:21:36.997594253+00:00 stderr F I1213 00:21:36.997519 1 sync_worker.go:1014] Done syncing for cronjob 
"openshift-operator-lifecycle-manager/collect-profiles" (715 of 955) 2025-12-13T00:21:36.997594253+00:00 stderr F I1213 00:21:36.997559 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:36.997594253+00:00 stderr F I1213 00:21:36.997567 1 sync_worker.go:999] Running sync for deployment "openshift-operator-lifecycle-manager/olm-operator" (716 of 955) 2025-12-13T00:21:37.046603083+00:00 stderr F I1213 00:21:37.046537 1 sync_worker.go:1014] Done syncing for kubescheduler "cluster" (171 of 955) 2025-12-13T00:21:37.046603083+00:00 stderr F I1213 00:21:37.046583 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.046659004+00:00 stderr F I1213 00:21:37.046595 1 sync_worker.go:999] Running sync for service "openshift-kube-scheduler-operator/metrics" (172 of 955) 2025-12-13T00:21:37.095525319+00:00 stderr F I1213 00:21:37.095457 1 sync_worker.go:1014] Done syncing for deployment "openshift-operator-lifecycle-manager/olm-operator" (716 of 955) 2025-12-13T00:21:37.095525319+00:00 stderr F I1213 00:21:37.095490 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.095525319+00:00 stderr F I1213 00:21:37.095496 1 sync_worker.go:999] Running sync for deployment "openshift-operator-lifecycle-manager/catalog-operator" (717 of 955) 2025-12-13T00:21:37.146042635+00:00 stderr F I1213 00:21:37.145915 1 sync_worker.go:1014] Done syncing for service "openshift-kube-scheduler-operator/metrics" (172 of 955) 2025-12-13T00:21:37.146042635+00:00 stderr F I1213 00:21:37.146005 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.146042635+00:00 stderr F I1213 00:21:37.146012 1 sync_worker.go:999] Running sync for configmap "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config" (173 of 955) 2025-12-13T00:21:37.196828717+00:00 stderr F I1213 00:21:37.196187 1 sync_worker.go:1014] Done syncing for deployment 
"openshift-operator-lifecycle-manager/catalog-operator" (717 of 955) 2025-12-13T00:21:37.196828717+00:00 stderr F I1213 00:21:37.196797 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.196828717+00:00 stderr F I1213 00:21:37.196806 1 sync_worker.go:999] Running sync for clusterrole "aggregate-olm-edit" (718 of 955) 2025-12-13T00:21:37.246278418+00:00 stderr F I1213 00:21:37.246224 1 sync_worker.go:1014] Done syncing for configmap "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config" (173 of 955) 2025-12-13T00:21:37.246278418+00:00 stderr F I1213 00:21:37.246259 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.246278418+00:00 stderr F I1213 00:21:37.246265 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:cluster-kube-scheduler-operator" (174 of 955) 2025-12-13T00:21:37.296170468+00:00 stderr F I1213 00:21:37.296086 1 sync_worker.go:1014] Done syncing for clusterrole "aggregate-olm-edit" (718 of 955) 2025-12-13T00:21:37.296170468+00:00 stderr F I1213 00:21:37.296152 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.296170468+00:00 stderr F I1213 00:21:37.296160 1 sync_worker.go:999] Running sync for clusterrole "aggregate-olm-view" (719 of 955) 2025-12-13T00:21:37.345171696+00:00 stderr F I1213 00:21:37.345081 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:cluster-kube-scheduler-operator" (174 of 955) 2025-12-13T00:21:37.345171696+00:00 stderr F I1213 00:21:37.345116 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.345171696+00:00 stderr F I1213 00:21:37.345122 1 sync_worker.go:999] Running sync for serviceaccount "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator" (175 of 955) 2025-12-13T00:21:37.395522118+00:00 stderr F I1213 00:21:37.395445 1 sync_worker.go:1014] Done syncing for 
clusterrole "aggregate-olm-view" (719 of 955) 2025-12-13T00:21:37.395522118+00:00 stderr F I1213 00:21:37.395477 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.395522118+00:00 stderr F I1213 00:21:37.395483 1 sync_worker.go:999] Running sync for configmap "openshift-operator-lifecycle-manager/olm-operators" (720 of 955) 2025-12-13T00:21:37.445866460+00:00 stderr F I1213 00:21:37.445765 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator" (175 of 955) 2025-12-13T00:21:37.445866460+00:00 stderr F I1213 00:21:37.445803 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.445866460+00:00 stderr F I1213 00:21:37.445811 1 sync_worker.go:999] Running sync for deployment "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator" (176 of 955) 2025-12-13T00:21:37.496305684+00:00 stderr F I1213 00:21:37.495993 1 sync_worker.go:1014] Done syncing for configmap "openshift-operator-lifecycle-manager/olm-operators" (720 of 955) 2025-12-13T00:21:37.496305684+00:00 stderr F I1213 00:21:37.496038 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.496305684+00:00 stderr F I1213 00:21:37.496048 1 sync_worker.go:999] Running sync for catalogsource "openshift-operator-lifecycle-manager/olm-operators" (721 of 955) 2025-12-13T00:21:37.545548619+00:00 stderr F I1213 00:21:37.545456 1 sync_worker.go:1014] Done syncing for deployment "openshift-kube-scheduler-operator/openshift-kube-scheduler-operator" (176 of 955) 2025-12-13T00:21:37.545548619+00:00 stderr F I1213 00:21:37.545503 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.545548619+00:00 stderr F I1213 00:21:37.545514 1 sync_worker.go:999] Running sync for clusteroperator "kube-scheduler" (177 of 955) 2025-12-13T00:21:37.545852767+00:00 stderr F I1213 00:21:37.545811 1 sync_worker.go:1014] 
Done syncing for clusteroperator "kube-scheduler" (177 of 955) 2025-12-13T00:21:37.545946069+00:00 stderr F I1213 00:21:37.545899 1 task_graph.go:481] Running 57 on worker 1 2025-12-13T00:21:37.545946069+00:00 stderr F I1213 00:21:37.545915 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.545975120+00:00 stderr F I1213 00:21:37.545924 1 sync_worker.go:999] Running sync for role "openshift-cloud-credential-operator/prometheus-k8s" (800 of 955) 2025-12-13T00:21:37.545988400+00:00 stderr F I1213 00:21:37.545968 1 sync_worker.go:1002] Skipping role "openshift-cloud-credential-operator/prometheus-k8s" (800 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:37.545988400+00:00 stderr F I1213 00:21:37.545980 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.546001540+00:00 stderr F I1213 00:21:37.545991 1 sync_worker.go:999] Running sync for rolebinding "openshift-cloud-credential-operator/prometheus-k8s" (801 of 955) 2025-12-13T00:21:37.546034441+00:00 stderr F I1213 00:21:37.546009 1 sync_worker.go:1002] Skipping rolebinding "openshift-cloud-credential-operator/prometheus-k8s" (801 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:37.546034441+00:00 stderr F I1213 00:21:37.546023 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.546045151+00:00 stderr F I1213 00:21:37.546032 1 sync_worker.go:999] Running sync for servicemonitor "openshift-cloud-credential-operator/cloud-credential-operator" (802 of 955) 2025-12-13T00:21:37.546057482+00:00 stderr F I1213 00:21:37.546049 1 sync_worker.go:1002] Skipping servicemonitor "openshift-cloud-credential-operator/cloud-credential-operator" (802 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:37.546067462+00:00 stderr F I1213 00:21:37.546059 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.546078212+00:00 stderr F I1213 
00:21:37.546068 1 sync_worker.go:999] Running sync for prometheusrule "openshift-cloud-credential-operator/cloud-credential-operator-alerts" (803 of 955) 2025-12-13T00:21:37.546114563+00:00 stderr F I1213 00:21:37.546087 1 sync_worker.go:1002] Skipping prometheusrule "openshift-cloud-credential-operator/cloud-credential-operator-alerts" (803 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:37.546114563+00:00 stderr F I1213 00:21:37.546106 1 task_graph.go:481] Running 58 on worker 1 2025-12-13T00:21:37.546128813+00:00 stderr F I1213 00:21:37.546114 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.546141254+00:00 stderr F I1213 00:21:37.546123 1 sync_worker.go:999] Running sync for role "openshift-cluster-machine-approver/prometheus-k8s" (822 of 955) 2025-12-13T00:21:37.595669836+00:00 stderr F I1213 00:21:37.595584 1 sync_worker.go:1014] Done syncing for catalogsource "openshift-operator-lifecycle-manager/olm-operators" (721 of 955) 2025-12-13T00:21:37.595669836+00:00 stderr F I1213 00:21:37.595624 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.595669836+00:00 stderr F I1213 00:21:37.595631 1 sync_worker.go:999] Running sync for operatorgroup "openshift-operators/global-operators" (722 of 955) 2025-12-13T00:21:37.645240089+00:00 stderr F I1213 00:21:37.645146 1 sync_worker.go:1014] Done syncing for role "openshift-cluster-machine-approver/prometheus-k8s" (822 of 955) 2025-12-13T00:21:37.645240089+00:00 stderr F I1213 00:21:37.645185 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.645240089+00:00 stderr F I1213 00:21:37.645191 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-machine-approver/prometheus-k8s" (823 of 955) 2025-12-13T00:21:37.695221180+00:00 stderr F I1213 00:21:37.695140 1 sync_worker.go:1014] Done syncing for operatorgroup "openshift-operators/global-operators" (722 of 955) 
2025-12-13T00:21:37.695221180+00:00 stderr F I1213 00:21:37.695180 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.695221180+00:00 stderr F I1213 00:21:37.695187 1 sync_worker.go:999] Running sync for operatorgroup "openshift-operator-lifecycle-manager/olm-operators" (723 of 955) 2025-12-13T00:21:37.744745932+00:00 stderr F I1213 00:21:37.744573 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-cluster-machine-approver/prometheus-k8s" (823 of 955) 2025-12-13T00:21:37.744745932+00:00 stderr F I1213 00:21:37.744659 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.744745932+00:00 stderr F I1213 00:21:37.744667 1 sync_worker.go:999] Running sync for servicemonitor "openshift-cluster-machine-approver/cluster-machine-approver" (824 of 955) 2025-12-13T00:21:37.793776912+00:00 stderr F I1213 00:21:37.793689 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:37.794127021+00:00 stderr F I1213 00:21:37.794088 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:37.794127021+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:37.794127021+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:37.794211403+00:00 stderr F I1213 00:21:37.794178 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (499.002µs) 2025-12-13T00:21:37.794211403+00:00 stderr F I1213 00:21:37.794204 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:37.794386277+00:00 stderr F I1213 00:21:37.794290 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:37.794386277+00:00 stderr F I1213 00:21:37.794366 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:37.794477559+00:00 stderr F I1213 00:21:37.794377 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:37.794902040+00:00 stderr F I1213 00:21:37.794826 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:37.796419037+00:00 stderr F I1213 00:21:37.796264 1 sync_worker.go:1014] Done syncing for operatorgroup "openshift-operator-lifecycle-manager/olm-operators" (723 of 955) 2025-12-13T00:21:37.796419037+00:00 stderr F I1213 00:21:37.796301 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.796419037+00:00 stderr F I1213 00:21:37.796313 1 sync_worker.go:999] Running sync for subscription "openshift-operator-lifecycle-manager/packageserver" (724 of 955) 
2025-12-13T00:21:37.817439025+00:00 stderr F W1213 00:21:37.817375 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:37.818820260+00:00 stderr F I1213 00:21:37.818793 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (24.588076ms) 2025-12-13T00:21:37.845960889+00:00 stderr F I1213 00:21:37.845912 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-cluster-machine-approver/cluster-machine-approver" (824 of 955) 2025-12-13T00:21:37.845980079+00:00 stderr F I1213 00:21:37.845958 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.845980079+00:00 stderr F I1213 00:21:37.845964 1 sync_worker.go:999] Running sync for prometheusrule "openshift-cluster-machine-approver/machineapprover-rules" (825 of 955) 2025-12-13T00:21:37.898180897+00:00 stderr F I1213 00:21:37.898112 1 sync_worker.go:1014] Done syncing for subscription "openshift-operator-lifecycle-manager/packageserver" (724 of 955) 2025-12-13T00:21:37.898282759+00:00 stderr F I1213 00:21:37.898245 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.898282759+00:00 stderr F I1213 00:21:37.898267 1 sync_worker.go:999] Running sync for role "openshift/copied-csv-viewer" (725 of 955) 2025-12-13T00:21:37.945625607+00:00 stderr F I1213 00:21:37.945571 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-cluster-machine-approver/machineapprover-rules" (825 of 955) 2025-12-13T00:21:37.945625607+00:00 stderr F I1213 00:21:37.945611 1 task_graph.go:481] Running 59 on worker 1 2025-12-13T00:21:37.945668688+00:00 stderr F I1213 00:21:37.945624 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.945668688+00:00 stderr F I1213 00:21:37.945630 1 sync_worker.go:999] Running sync for customresourcedefinition "ipaddresses.ipam.cluster.x-k8s.io" (217 of 955) 2025-12-13T00:21:37.995859396+00:00 stderr F I1213 00:21:37.995784 1 
sync_worker.go:1014] Done syncing for role "openshift/copied-csv-viewer" (725 of 955) 2025-12-13T00:21:37.995859396+00:00 stderr F I1213 00:21:37.995814 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:37.995859396+00:00 stderr F I1213 00:21:37.995820 1 sync_worker.go:999] Running sync for rolebinding "openshift/copied-csv-viewers" (726 of 955) 2025-12-13T00:21:38.047926830+00:00 stderr F I1213 00:21:38.047833 1 sync_worker.go:1014] Done syncing for customresourcedefinition "ipaddresses.ipam.cluster.x-k8s.io" (217 of 955) 2025-12-13T00:21:38.047926830+00:00 stderr F I1213 00:21:38.047903 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.048000972+00:00 stderr F I1213 00:21:38.047920 1 sync_worker.go:999] Running sync for customresourcedefinition "ipaddressclaims.ipam.cluster.x-k8s.io" (218 of 955) 2025-12-13T00:21:38.096501239+00:00 stderr F I1213 00:21:38.096384 1 sync_worker.go:1014] Done syncing for rolebinding "openshift/copied-csv-viewers" (726 of 955) 2025-12-13T00:21:38.096501239+00:00 stderr F I1213 00:21:38.096428 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.096501239+00:00 stderr F I1213 00:21:38.096436 1 sync_worker.go:999] Running sync for clusteroperator "operator-lifecycle-manager" (727 of 955) 2025-12-13T00:21:38.096697033+00:00 stderr F I1213 00:21:38.096635 1 sync_worker.go:1014] Done syncing for clusteroperator "operator-lifecycle-manager" (727 of 955) 2025-12-13T00:21:38.096697033+00:00 stderr F I1213 00:21:38.096651 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.096697033+00:00 stderr F I1213 00:21:38.096659 1 sync_worker.go:999] Running sync for clusteroperator "operator-lifecycle-manager-catalog" (728 of 955) 2025-12-13T00:21:38.096833857+00:00 stderr F I1213 00:21:38.096767 1 sync_worker.go:1014] Done syncing for clusteroperator "operator-lifecycle-manager-catalog" (728 of 
955) 2025-12-13T00:21:38.096833857+00:00 stderr F I1213 00:21:38.096782 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.096873788+00:00 stderr F I1213 00:21:38.096795 1 sync_worker.go:999] Running sync for clusteroperator "operator-lifecycle-manager-packageserver" (729 of 955) 2025-12-13T00:21:38.097029532+00:00 stderr F I1213 00:21:38.096964 1 sync_worker.go:1014] Done syncing for clusteroperator "operator-lifecycle-manager-packageserver" (729 of 955) 2025-12-13T00:21:38.097029532+00:00 stderr F I1213 00:21:38.096993 1 task_graph.go:481] Running 60 on worker 0 2025-12-13T00:21:38.097029532+00:00 stderr F I1213 00:21:38.097001 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.097029532+00:00 stderr F I1213 00:21:38.097007 1 sync_worker.go:999] Running sync for customresourcedefinition "builds.config.openshift.io" (69 of 955) 2025-12-13T00:21:38.141648402+00:00 stderr F I1213 00:21:38.141514 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:38.141648402+00:00 stderr F I1213 00:21:38.141628 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:38.141720604+00:00 stderr F I1213 00:21:38.141687 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:38.141790326+00:00 stderr F I1213 00:21:38.141696 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:38.142538394+00:00 stderr F I1213 00:21:38.142024 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:38.147536227+00:00 stderr F I1213 00:21:38.147406 1 sync_worker.go:1014] Done syncing for customresourcedefinition "ipaddressclaims.ipam.cluster.x-k8s.io" (218 of 955) 2025-12-13T00:21:38.147536227+00:00 stderr F I1213 00:21:38.147481 1 task_graph.go:481] Running 61 on worker 1 2025-12-13T00:21:38.147536227+00:00 stderr F I1213 00:21:38.147506 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.147578478+00:00 stderr F I1213 00:21:38.147523 1 sync_worker.go:999] Running sync for role "openshift-cluster-storage-operator/prometheus" (851 of 955) 2025-12-13T00:21:38.147578478+00:00 stderr F I1213 00:21:38.147557 1 sync_worker.go:1002] Skipping role "openshift-cluster-storage-operator/prometheus" (851 of 955): disabled capabilities: Storage 2025-12-13T00:21:38.147578478+00:00 stderr F I1213 00:21:38.147573 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.147626269+00:00 stderr F I1213 00:21:38.147589 1 sync_worker.go:999] Running sync for rolebinding "openshift-cluster-storage-operator/prometheus" (852 of 955) 2025-12-13T00:21:38.147638040+00:00 stderr F I1213 00:21:38.147624 1 sync_worker.go:1002] Skipping rolebinding "openshift-cluster-storage-operator/prometheus" (852 of 955): disabled capabilities: Storage 2025-12-13T00:21:38.147651130+00:00 stderr F I1213 00:21:38.147642 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.147686791+00:00 stderr F I1213 00:21:38.147657 1 sync_worker.go:999] Running sync for servicemonitor "openshift-cluster-storage-operator/cluster-storage-operator" (853 of 955) 2025-12-13T00:21:38.147698221+00:00 stderr F I1213 00:21:38.147684 1 sync_worker.go:1002] Skipping servicemonitor "openshift-cluster-storage-operator/cluster-storage-operator" (853 of 955): disabled 
capabilities: Storage 2025-12-13T00:21:38.147765963+00:00 stderr F I1213 00:21:38.147714 1 task_graph.go:481] Running 62 on worker 1 2025-12-13T00:21:38.147779973+00:00 stderr F I1213 00:21:38.147770 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.147824434+00:00 stderr F I1213 00:21:38.147786 1 sync_worker.go:999] Running sync for configmap "openshift-machine-config-operator/coreos-bootimages" (688 of 955) 2025-12-13T00:21:38.167659573+00:00 stderr F W1213 00:21:38.166909 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:38.169667023+00:00 stderr F I1213 00:21:38.169629 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.126944ms) 2025-12-13T00:21:38.195907120+00:00 stderr F I1213 00:21:38.195830 1 sync_worker.go:1014] Done syncing for customresourcedefinition "builds.config.openshift.io" (69 of 955) 2025-12-13T00:21:38.195907120+00:00 stderr F I1213 00:21:38.195875 1 task_graph.go:481] Running 63 on worker 0 2025-12-13T00:21:38.246667552+00:00 stderr F I1213 00:21:38.246586 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-config-operator/coreos-bootimages" (688 of 955) 2025-12-13T00:21:38.246667552+00:00 stderr F I1213 00:21:38.246633 1 task_graph.go:481] Running 64 on worker 1 2025-12-13T00:21:38.298425499+00:00 stderr F I1213 00:21:38.298365 1 sync_worker.go:989] Precreated resource clusteroperator "etcd" (81 of 955) 2025-12-13T00:21:38.298425499+00:00 stderr F I1213 00:21:38.298402 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.298425499+00:00 stderr F I1213 00:21:38.298411 1 sync_worker.go:999] Running sync for namespace "openshift-etcd-operator" (71 of 955) 2025-12-13T00:21:38.350308218+00:00 stderr F I1213 00:21:38.350223 1 sync_worker.go:989] Precreated resource clusteroperator "machine-api" (275 of 955) 2025-12-13T00:21:38.350308218+00:00 stderr F I1213 00:21:38.350273 1 sync_worker.go:703] 
Dropping status report from earlier in sync loop 2025-12-13T00:21:38.350308218+00:00 stderr F I1213 00:21:38.350286 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-aws" (228 of 955) 2025-12-13T00:21:38.350342659+00:00 stderr F I1213 00:21:38.350318 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-aws" (228 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:38.350342659+00:00 stderr F I1213 00:21:38.350329 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.350350239+00:00 stderr F I1213 00:21:38.350338 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-azure" (229 of 955) 2025-12-13T00:21:38.350367790+00:00 stderr F I1213 00:21:38.350356 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-azure" (229 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:38.350374650+00:00 stderr F I1213 00:21:38.350366 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.350382950+00:00 stderr F I1213 00:21:38.350375 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-openstack" (230 of 955) 2025-12-13T00:21:38.350421321+00:00 stderr F I1213 00:21:38.350390 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-openstack" (230 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:38.350421321+00:00 stderr F I1213 00:21:38.350411 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.350430581+00:00 stderr F I1213 00:21:38.350420 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-gcp" (231 of 955) 
2025-12-13T00:21:38.350461852+00:00 stderr F I1213 00:21:38.350436 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-gcp" (231 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:38.350461852+00:00 stderr F I1213 00:21:38.350449 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.350469732+00:00 stderr F I1213 00:21:38.350458 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-ovirt" (232 of 955) 2025-12-13T00:21:38.350482532+00:00 stderr F I1213 00:21:38.350474 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-ovirt" (232 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:38.350489343+00:00 stderr F I1213 00:21:38.350483 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.350509613+00:00 stderr F I1213 00:21:38.350492 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-vsphere" (233 of 955) 2025-12-13T00:21:38.350534974+00:00 stderr F I1213 00:21:38.350512 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-vsphere" (233 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:38.350534974+00:00 stderr F I1213 00:21:38.350525 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.350543764+00:00 stderr F I1213 00:21:38.350534 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-ibmcloud" (234 of 955) 2025-12-13T00:21:38.350569605+00:00 stderr F I1213 00:21:38.350550 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-ibmcloud" (234 of 955): disabled capabilities: CloudCredential 
2025-12-13T00:21:38.350569605+00:00 stderr F I1213 00:21:38.350565 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.350589385+00:00 stderr F I1213 00:21:38.350574 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-powervs" (235 of 955) 2025-12-13T00:21:38.350609176+00:00 stderr F I1213 00:21:38.350593 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-powervs" (235 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:38.350609176+00:00 stderr F I1213 00:21:38.350603 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.350628566+00:00 stderr F I1213 00:21:38.350612 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-nutanix" (236 of 955) 2025-12-13T00:21:38.350635466+00:00 stderr F I1213 00:21:38.350628 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-machine-api-nutanix" (236 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:38.350642116+00:00 stderr F I1213 00:21:38.350637 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.350661137+00:00 stderr F I1213 00:21:38.350646 1 sync_worker.go:999] Running sync for namespace "openshift-machine-api" (237 of 955) 2025-12-13T00:21:38.396151370+00:00 stderr F I1213 00:21:38.396015 1 sync_worker.go:1014] Done syncing for namespace "openshift-etcd-operator" (71 of 955) 2025-12-13T00:21:38.396151370+00:00 stderr F I1213 00:21:38.396066 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.396151370+00:00 stderr F I1213 00:21:38.396079 1 sync_worker.go:999] Running sync for etcd "cluster" (72 of 955) 2025-12-13T00:21:38.444838800+00:00 stderr F I1213 00:21:38.444711 1 sync_worker.go:1014] Done syncing for namespace 
"openshift-machine-api" (237 of 955) 2025-12-13T00:21:38.444838800+00:00 stderr F I1213 00:21:38.444740 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.444838800+00:00 stderr F I1213 00:21:38.444745 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/machine-api-operator-images" (238 of 955) 2025-12-13T00:21:38.521124021+00:00 stderr F I1213 00:21:38.521033 1 sync_worker.go:1014] Done syncing for etcd "cluster" (72 of 955) 2025-12-13T00:21:38.521124021+00:00 stderr F I1213 00:21:38.521087 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.521124021+00:00 stderr F I1213 00:21:38.521093 1 sync_worker.go:999] Running sync for service "openshift-etcd-operator/metrics" (73 of 955) 2025-12-13T00:21:38.546534388+00:00 stderr F I1213 00:21:38.546404 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-api/machine-api-operator-images" (238 of 955) 2025-12-13T00:21:38.546534388+00:00 stderr F I1213 00:21:38.546452 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.546534388+00:00 stderr F I1213 00:21:38.546479 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/mao-trusted-ca" (239 of 955) 2025-12-13T00:21:38.595189829+00:00 stderr F I1213 00:21:38.595091 1 sync_worker.go:1014] Done syncing for service "openshift-etcd-operator/metrics" (73 of 955) 2025-12-13T00:21:38.595189829+00:00 stderr F I1213 00:21:38.595146 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.595189829+00:00 stderr F I1213 00:21:38.595153 1 sync_worker.go:999] Running sync for configmap "openshift-etcd-operator/etcd-operator-config" (74 of 955) 2025-12-13T00:21:38.654827209+00:00 stderr F I1213 00:21:38.654734 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-api/mao-trusted-ca" (239 of 955) 2025-12-13T00:21:38.654827209+00:00 stderr F I1213 00:21:38.654807 1 
sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.654874781+00:00 stderr F I1213 00:21:38.654822 1 sync_worker.go:999] Running sync for customresourcedefinition "machines.machine.openshift.io" (240 of 955) 2025-12-13T00:21:38.695337918+00:00 stderr F I1213 00:21:38.695275 1 sync_worker.go:1014] Done syncing for configmap "openshift-etcd-operator/etcd-operator-config" (74 of 955) 2025-12-13T00:21:38.695337918+00:00 stderr F I1213 00:21:38.695310 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.695337918+00:00 stderr F I1213 00:21:38.695317 1 sync_worker.go:999] Running sync for configmap "openshift-etcd-operator/etcd-ca-bundle" (75 of 955) 2025-12-13T00:21:38.748456498+00:00 stderr F I1213 00:21:38.748362 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machines.machine.openshift.io" (240 of 955) 2025-12-13T00:21:38.748456498+00:00 stderr F I1213 00:21:38.748406 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.748456498+00:00 stderr F I1213 00:21:38.748412 1 sync_worker.go:999] Running sync for customresourcedefinition "machinesets.machine.openshift.io" (241 of 955) 2025-12-13T00:21:38.795111089+00:00 stderr F I1213 00:21:38.795047 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:38.795201022+00:00 stderr F I1213 00:21:38.795155 1 sync_worker.go:1014] Done syncing for configmap "openshift-etcd-operator/etcd-ca-bundle" (75 of 955) 2025-12-13T00:21:38.795201022+00:00 stderr F I1213 00:21:38.795188 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.795213572+00:00 stderr F I1213 00:21:38.795196 1 sync_worker.go:999] Running sync for configmap "openshift-etcd-operator/etcd-service-ca-bundle" (76 of 955) 2025-12-13T00:21:38.795446638+00:00 stderr F I1213 00:21:38.795369 1 availableupdates.go:145] Requeue available-update evaluation, because 
"4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:38.795446638+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:38.795446638+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:38.795480378+00:00 stderr F I1213 00:21:38.795449 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (410.17µs) 2025-12-13T00:21:38.795610702+00:00 stderr F I1213 00:21:38.795566 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:38.795752705+00:00 stderr F I1213 00:21:38.795667 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:38.795752705+00:00 stderr F I1213 00:21:38.795729 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:38.795799436+00:00 stderr F I1213 00:21:38.795739 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded 
version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:38.796199876+00:00 stderr F I1213 00:21:38.796132 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:38.820251650+00:00 stderr F W1213 00:21:38.820193 1 warnings.go:70] unknown field "spec.signatureStores" 
2025-12-13T00:21:38.821686975+00:00 stderr F I1213 00:21:38.821642 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.075383ms) 2025-12-13T00:21:38.846409925+00:00 stderr F I1213 00:21:38.846338 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machinesets.machine.openshift.io" (241 of 955) 2025-12-13T00:21:38.846409925+00:00 stderr F I1213 00:21:38.846375 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.846409925+00:00 stderr F I1213 00:21:38.846381 1 sync_worker.go:999] Running sync for customresourcedefinition "machinehealthchecks.machine.openshift.io" (242 of 955) 2025-12-13T00:21:38.895917816+00:00 stderr F I1213 00:21:38.895765 1 sync_worker.go:1014] Done syncing for configmap "openshift-etcd-operator/etcd-service-ca-bundle" (76 of 955) 2025-12-13T00:21:38.895917816+00:00 stderr F I1213 00:21:38.895835 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.895917816+00:00 stderr F I1213 00:21:38.895843 1 sync_worker.go:999] Running sync for secret "openshift-etcd-operator/etcd-client" (77 of 955) 2025-12-13T00:21:38.947220261+00:00 stderr F I1213 00:21:38.947151 1 sync_worker.go:1014] Done syncing for customresourcedefinition "machinehealthchecks.machine.openshift.io" (242 of 955) 2025-12-13T00:21:38.947220261+00:00 stderr F I1213 00:21:38.947193 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:38.947220261+00:00 stderr F I1213 00:21:38.947202 1 sync_worker.go:999] Running sync for customresourcedefinition "metal3remediations.infrastructure.cluster.x-k8s.io" (243 of 955) 2025-12-13T00:21:38.997261515+00:00 stderr F I1213 00:21:38.997193 1 sync_worker.go:1014] Done syncing for secret "openshift-etcd-operator/etcd-client" (77 of 955) 2025-12-13T00:21:38.997261515+00:00 stderr F I1213 00:21:38.997232 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-12-13T00:21:38.997261515+00:00 stderr F I1213 00:21:38.997241 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:etcd-operator" (78 of 955) 2025-12-13T00:21:39.045327391+00:00 stderr F I1213 00:21:39.045235 1 sync_worker.go:1014] Done syncing for customresourcedefinition "metal3remediations.infrastructure.cluster.x-k8s.io" (243 of 955) 2025-12-13T00:21:39.045327391+00:00 stderr F I1213 00:21:39.045276 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.045327391+00:00 stderr F I1213 00:21:39.045284 1 sync_worker.go:999] Running sync for customresourcedefinition "metal3remediationtemplates.infrastructure.cluster.x-k8s.io" (244 of 955) 2025-12-13T00:21:39.094880103+00:00 stderr F I1213 00:21:39.094808 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:etcd-operator" (78 of 955) 2025-12-13T00:21:39.094880103+00:00 stderr F I1213 00:21:39.094850 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.094880103+00:00 stderr F I1213 00:21:39.094856 1 sync_worker.go:999] Running sync for serviceaccount "openshift-etcd-operator/etcd-operator" (79 of 955) 2025-12-13T00:21:39.148497796+00:00 stderr F I1213 00:21:39.148391 1 sync_worker.go:1014] Done syncing for customresourcedefinition "metal3remediationtemplates.infrastructure.cluster.x-k8s.io" (244 of 955) 2025-12-13T00:21:39.148497796+00:00 stderr F I1213 00:21:39.148449 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.148497796+00:00 stderr F I1213 00:21:39.148455 1 sync_worker.go:999] Running sync for service "openshift-machine-api/machine-api-operator-machine-webhook" (245 of 955) 2025-12-13T00:21:39.194988113+00:00 stderr F I1213 00:21:39.194877 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-etcd-operator/etcd-operator" (79 of 955) 2025-12-13T00:21:39.194988113+00:00 stderr F I1213 00:21:39.194924 1 
sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.194988113+00:00 stderr F I1213 00:21:39.194949 1 sync_worker.go:999] Running sync for deployment "openshift-etcd-operator/etcd-operator" (80 of 955) 2025-12-13T00:21:39.246360940+00:00 stderr F I1213 00:21:39.246234 1 sync_worker.go:1014] Done syncing for service "openshift-machine-api/machine-api-operator-machine-webhook" (245 of 955) 2025-12-13T00:21:39.246360940+00:00 stderr F I1213 00:21:39.246270 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.246360940+00:00 stderr F I1213 00:21:39.246276 1 sync_worker.go:999] Running sync for service "openshift-machine-api/machine-api-operator-webhook" (246 of 955) 2025-12-13T00:21:39.295465711+00:00 stderr F I1213 00:21:39.295384 1 sync_worker.go:1014] Done syncing for deployment "openshift-etcd-operator/etcd-operator" (80 of 955) 2025-12-13T00:21:39.295465711+00:00 stderr F I1213 00:21:39.295434 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.295465711+00:00 stderr F I1213 00:21:39.295440 1 sync_worker.go:999] Running sync for clusteroperator "etcd" (81 of 955) 2025-12-13T00:21:39.295699857+00:00 stderr F I1213 00:21:39.295671 1 sync_worker.go:1014] Done syncing for clusteroperator "etcd" (81 of 955) 2025-12-13T00:21:39.295699857+00:00 stderr F I1213 00:21:39.295686 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.295699857+00:00 stderr F I1213 00:21:39.295691 1 sync_worker.go:999] Running sync for flowschema "openshift-etcd-operator" (82 of 955) 2025-12-13T00:21:39.344737646+00:00 stderr F I1213 00:21:39.344666 1 sync_worker.go:1014] Done syncing for service "openshift-machine-api/machine-api-operator-webhook" (246 of 955) 2025-12-13T00:21:39.344737646+00:00 stderr F I1213 00:21:39.344699 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.344737646+00:00 stderr F I1213 
00:21:39.344706 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/machine-api-operator" (247 of 955) 2025-12-13T00:21:39.396031492+00:00 stderr F I1213 00:21:39.395913 1 sync_worker.go:1014] Done syncing for flowschema "openshift-etcd-operator" (82 of 955) 2025-12-13T00:21:39.396031492+00:00 stderr F I1213 00:21:39.396002 1 task_graph.go:481] Running 65 on worker 0 2025-12-13T00:21:39.445710516+00:00 stderr F I1213 00:21:39.445587 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-api/machine-api-operator" (247 of 955) 2025-12-13T00:21:39.445710516+00:00 stderr F I1213 00:21:39.445636 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.445710516+00:00 stderr F I1213 00:21:39.445648 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/machine-api-controllers" (248 of 955) 2025-12-13T00:21:39.499731479+00:00 stderr F I1213 00:21:39.499587 1 sync_worker.go:989] Precreated resource clusteroperator "authentication" (330 of 955) 2025-12-13T00:21:39.499731479+00:00 stderr F I1213 00:21:39.499675 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.499731479+00:00 stderr F I1213 00:21:39.499684 1 sync_worker.go:999] Running sync for namespace "openshift-authentication-operator" (320 of 955) 2025-12-13T00:21:39.546219595+00:00 stderr F I1213 00:21:39.546130 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-api/machine-api-controllers" (248 of 955) 2025-12-13T00:21:39.546219595+00:00 stderr F I1213 00:21:39.546168 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.546219595+00:00 stderr F I1213 00:21:39.546173 1 sync_worker.go:999] Running sync for serviceaccount "openshift-machine-api/machine-api-termination-handler" (249 of 955) 2025-12-13T00:21:39.595413719+00:00 stderr F I1213 00:21:39.595341 1 sync_worker.go:1014] Done syncing for namespace 
"openshift-authentication-operator" (320 of 955) 2025-12-13T00:21:39.595413719+00:00 stderr F I1213 00:21:39.595384 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.595413719+00:00 stderr F I1213 00:21:39.595391 1 sync_worker.go:999] Running sync for customresourcedefinition "authentications.operator.openshift.io" (321 of 955) 2025-12-13T00:21:39.645555486+00:00 stderr F I1213 00:21:39.645485 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-machine-api/machine-api-termination-handler" (249 of 955) 2025-12-13T00:21:39.645555486+00:00 stderr F I1213 00:21:39.645525 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.645555486+00:00 stderr F I1213 00:21:39.645532 1 sync_worker.go:999] Running sync for role "openshift-config/machine-api-controllers" (250 of 955) 2025-12-13T00:21:39.696755899+00:00 stderr F I1213 00:21:39.696625 1 sync_worker.go:1014] Done syncing for customresourcedefinition "authentications.operator.openshift.io" (321 of 955) 2025-12-13T00:21:39.696755899+00:00 stderr F I1213 00:21:39.696685 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.696755899+00:00 stderr F I1213 00:21:39.696699 1 sync_worker.go:999] Running sync for authentication "cluster" (322 of 955) 2025-12-13T00:21:39.744849675+00:00 stderr F I1213 00:21:39.744729 1 sync_worker.go:1014] Done syncing for role "openshift-config/machine-api-controllers" (250 of 955) 2025-12-13T00:21:39.744849675+00:00 stderr F I1213 00:21:39.744773 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:39.744849675+00:00 stderr F I1213 00:21:39.744786 1 sync_worker.go:999] Running sync for role "openshift-config-managed/machine-api-controllers" (251 of 955) 2025-12-13T00:21:39.796244862+00:00 stderr F I1213 00:21:39.796129 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 
2025-12-13T00:21:39.796568711+00:00 stderr F I1213 00:21:39.796495 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:21:39.796568711+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:21:39.796568711+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:21:39.796598242+00:00 stderr F I1213 00:21:39.796561 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (444.371µs)
2025-12-13T00:21:39.796598242+00:00 stderr F I1213 00:21:39.796577 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:39.796677424+00:00 stderr F I1213 00:21:39.796628 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:39.796702084+00:00 stderr F I1213 00:21:39.796689 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:39.796778106+00:00 stderr F I1213 00:21:39.796698 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:39.797050243+00:00 stderr F I1213 00:21:39.796972 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:39.802765974+00:00 stderr F I1213 00:21:39.802672 1 sync_worker.go:1014] Done syncing for authentication "cluster" (322 of 955)
2025-12-13T00:21:39.802765974+00:00 stderr F I1213 00:21:39.802728 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:39.802765974+00:00 stderr F I1213 00:21:39.802744 1 sync_worker.go:999] Running sync for service "openshift-authentication-operator/metrics" (323 of 955)
2025-12-13T00:21:39.824894780+00:00 stderr F W1213 00:21:39.824784 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:39.826794626+00:00 stderr F I1213 00:21:39.826711 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.129713ms)
2025-12-13T00:21:39.845560709+00:00 stderr F I1213 00:21:39.845464 1 sync_worker.go:1014] Done syncing for role "openshift-config-managed/machine-api-controllers" (251 of 955)
2025-12-13T00:21:39.845560709+00:00 stderr F I1213 00:21:39.845504 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:39.845560709+00:00 stderr F I1213 00:21:39.845513 1 sync_worker.go:999] Running sync for role "openshift-machine-api/machine-api-controllers" (252 of 955)
2025-12-13T00:21:39.895558762+00:00 stderr F I1213 00:21:39.895447 1 sync_worker.go:1014] Done syncing for service "openshift-authentication-operator/metrics" (323 of 955)
2025-12-13T00:21:39.895558762+00:00 stderr F I1213 00:21:39.895509 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:39.895558762+00:00 stderr F I1213 00:21:39.895528 1 sync_worker.go:999] Running sync for configmap "openshift-authentication-operator/authentication-operator-config" (324 of 955)
2025-12-13T00:21:39.946363435+00:00 stderr F I1213 00:21:39.946264 1 sync_worker.go:1014] Done syncing for role "openshift-machine-api/machine-api-controllers" (252 of 955)
2025-12-13T00:21:39.946363435+00:00 stderr F I1213 00:21:39.946316 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:39.946363435+00:00 stderr F I1213 00:21:39.946329 1 sync_worker.go:999] Running sync for clusterrole "machine-api-controllers" (253 of 955)
2025-12-13T00:21:39.995244710+00:00 stderr F I1213 00:21:39.994920 1 sync_worker.go:1014] Done syncing for configmap "openshift-authentication-operator/authentication-operator-config" (324 of 955)
2025-12-13T00:21:39.995244710+00:00 stderr F I1213 00:21:39.995000 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:39.995244710+00:00 stderr F I1213 00:21:39.995012 1 sync_worker.go:999] Running sync for configmap "openshift-authentication-operator/service-ca-bundle" (325 of 955)
2025-12-13T00:21:40.045972873+00:00 stderr F I1213 00:21:40.045675 1 sync_worker.go:1014] Done syncing for clusterrole "machine-api-controllers" (253 of 955)
2025-12-13T00:21:40.045972873+00:00 stderr F I1213 00:21:40.045730 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.045972873+00:00 stderr F I1213 00:21:40.045740 1 sync_worker.go:999] Running sync for role "openshift-machine-api/machine-api-operator" (254 of 955)
2025-12-13T00:21:40.094827817+00:00 stderr F I1213 00:21:40.094722 1 sync_worker.go:1014] Done syncing for configmap "openshift-authentication-operator/service-ca-bundle" (325 of 955)
2025-12-13T00:21:40.094827817+00:00 stderr F I1213 00:21:40.094781 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.094827817+00:00 stderr F I1213 00:21:40.094799 1 sync_worker.go:999] Running sync for configmap "openshift-authentication-operator/trusted-ca-bundle" (326 of 955)
2025-12-13T00:21:40.146298806+00:00 stderr F I1213 00:21:40.146187 1 sync_worker.go:1014] Done syncing for role "openshift-machine-api/machine-api-operator" (254 of 955)
2025-12-13T00:21:40.146298806+00:00 stderr F I1213 00:21:40.146229 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.146298806+00:00 stderr F I1213 00:21:40.146237 1 sync_worker.go:999] Running sync for clusterrole "machine-api-operator" (255 of 955)
2025-12-13T00:21:40.202143334+00:00 stderr F I1213 00:21:40.202045 1 sync_worker.go:1014] Done syncing for configmap "openshift-authentication-operator/trusted-ca-bundle" (326 of 955)
2025-12-13T00:21:40.202143334+00:00 stderr F I1213 00:21:40.202089 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.202143334+00:00 stderr F I1213 00:21:40.202097 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:authentication" (327 of 955)
2025-12-13T00:21:40.246531809+00:00 stderr F I1213 00:21:40.246446 1 sync_worker.go:1014] Done syncing for clusterrole "machine-api-operator" (255 of 955)
2025-12-13T00:21:40.246531809+00:00 stderr F I1213 00:21:40.246477 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.246531809+00:00 stderr F I1213 00:21:40.246483 1 sync_worker.go:999] Running sync for clusterrole "machine-api-operator-ext-remediation" (256 of 955)
2025-12-13T00:21:40.296779758+00:00 stderr F I1213 00:21:40.296716 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:authentication" (327 of 955)
2025-12-13T00:21:40.296779758+00:00 stderr F I1213 00:21:40.296751 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.296779758+00:00 stderr F I1213 00:21:40.296757 1 sync_worker.go:999] Running sync for serviceaccount "openshift-authentication-operator/authentication-operator" (328 of 955)
2025-12-13T00:21:40.346361452+00:00 stderr F I1213 00:21:40.346295 1 sync_worker.go:1014] Done syncing for clusterrole "machine-api-operator-ext-remediation" (256 of 955)
2025-12-13T00:21:40.346361452+00:00 stderr F I1213 00:21:40.346327 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.346361452+00:00 stderr F I1213 00:21:40.346333 1 sync_worker.go:999] Running sync for clusterrolebinding "machine-api-operator-ext-remediation" (257 of 955)
2025-12-13T00:21:40.394826437+00:00 stderr F I1213 00:21:40.394745 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-authentication-operator/authentication-operator" (328 of 955)
2025-12-13T00:21:40.394826437+00:00 stderr F I1213 00:21:40.394789 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.394826437+00:00 stderr F I1213 00:21:40.394794 1 sync_worker.go:999] Running sync for deployment "openshift-authentication-operator/authentication-operator" (329 of 955)
2025-12-13T00:21:40.446727967+00:00 stderr F I1213 00:21:40.446632 1 sync_worker.go:1014] Done syncing for clusterrolebinding "machine-api-operator-ext-remediation" (257 of 955)
2025-12-13T00:21:40.446727967+00:00 stderr F I1213 00:21:40.446675 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.446727967+00:00 stderr F I1213 00:21:40.446683 1 sync_worker.go:999] Running sync for clusterrolebinding "machine-api-controllers" (258 of 955)
2025-12-13T00:21:40.545871452+00:00 stderr F I1213 00:21:40.545789 1 sync_worker.go:1014] Done syncing for clusterrolebinding "machine-api-controllers" (258 of 955)
2025-12-13T00:21:40.545871452+00:00 stderr F I1213 00:21:40.545832 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.545871452+00:00 stderr F I1213 00:21:40.545839 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/machine-api-controllers" (259 of 955)
2025-12-13T00:21:40.596174233+00:00 stderr F I1213 00:21:40.596103 1 sync_worker.go:1014] Done syncing for deployment "openshift-authentication-operator/authentication-operator" (329 of 955)
2025-12-13T00:21:40.596174233+00:00 stderr F I1213 00:21:40.596137 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.596174233+00:00 stderr F I1213 00:21:40.596143 1 sync_worker.go:999] Running sync for clusteroperator "authentication" (330 of 955)
2025-12-13T00:21:40.596364808+00:00 stderr F E1213 00:21:40.596334 1 task.go:122] error running apply for clusteroperator "authentication" (330 of 955): Cluster operator authentication is not available
2025-12-13T00:21:40.596364808+00:00 stderr F I1213 00:21:40.596358 1 task_graph.go:481] Running 66 on worker 0
2025-12-13T00:21:40.596373588+00:00 stderr F I1213 00:21:40.596366 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.596382328+00:00 stderr F I1213 00:21:40.596372 1 sync_worker.go:999] Running sync for customresourcedefinition "clusterresourcequotas.quota.openshift.io" (16 of 955)
2025-12-13T00:21:40.644198468+00:00 stderr F I1213 00:21:40.644130 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/machine-api-controllers" (259 of 955)
2025-12-13T00:21:40.644198468+00:00 stderr F I1213 00:21:40.644163 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.644198468+00:00 stderr F I1213 00:21:40.644169 1 sync_worker.go:999] Running sync for rolebinding "openshift-config/machine-api-controllers" (260 of 955)
2025-12-13T00:21:40.695854622+00:00 stderr F I1213 00:21:40.695772 1 sync_worker.go:1014] Done syncing for customresourcedefinition "clusterresourcequotas.quota.openshift.io" (16 of 955)
2025-12-13T00:21:40.695854622+00:00 stderr F I1213 00:21:40.695809 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.695854622+00:00 stderr F I1213 00:21:40.695815 1 sync_worker.go:999] Running sync for customresourcedefinition "proxies.config.openshift.io" (17 of 955)
2025-12-13T00:21:40.745505476+00:00 stderr F I1213 00:21:40.745434 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config/machine-api-controllers" (260 of 955)
2025-12-13T00:21:40.745505476+00:00 stderr F I1213 00:21:40.745472 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.745505476+00:00 stderr F I1213 00:21:40.745482 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-managed/machine-api-controllers" (261 of 955)
2025-12-13T00:21:40.799458178+00:00 stderr F I1213 00:21:40.796873 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:21:40.799458178+00:00 stderr F I1213 00:21:40.797176 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:21:40.799458178+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:21:40.799458178+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:21:40.799458178+00:00 stderr F I1213 00:21:40.797225 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (362.219µs)
2025-12-13T00:21:40.799458178+00:00 stderr F I1213 00:21:40.797240 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:40.799458178+00:00 stderr F I1213 00:21:40.797286 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:40.799458178+00:00 stderr F I1213 00:21:40.797328 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:40.799458178+00:00 stderr F I1213 00:21:40.797335 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:40.799458178+00:00 stderr F I1213 00:21:40.797607 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:40.799458178+00:00 stderr F I1213 00:21:40.799059 1 sync_worker.go:1014] Done syncing for customresourcedefinition "proxies.config.openshift.io" (17 of 955)
2025-12-13T00:21:40.799458178+00:00 stderr F I1213 00:21:40.799174 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.799458178+00:00 stderr F I1213 00:21:40.799182 1 sync_worker.go:999] Running sync for customresourcedefinition "rolebindingrestrictions.authorization.openshift.io" (18 of 955)
2025-12-13T00:21:40.835806383+00:00 stderr F W1213 00:21:40.835732 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:40.837860234+00:00 stderr F I1213 00:21:40.837818 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (40.57213ms)
2025-12-13T00:21:40.845202776+00:00 stderr F I1213 00:21:40.845149 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-config-managed/machine-api-controllers" (261 of 955)
2025-12-13T00:21:40.845202776+00:00 stderr F I1213 00:21:40.845185 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.845202776+00:00 stderr F I1213 00:21:40.845194 1 sync_worker.go:999] Running sync for clusterrolebinding "machine-api-operator" (262 of 955)
2025-12-13T00:21:40.896207203+00:00 stderr F I1213 00:21:40.895993 1 sync_worker.go:1014] Done syncing for customresourcedefinition "rolebindingrestrictions.authorization.openshift.io" (18 of 955)
2025-12-13T00:21:40.896207203+00:00 stderr F I1213 00:21:40.896031 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.896207203+00:00 stderr F I1213 00:21:40.896091 1 sync_worker.go:999] Running sync for customresourcedefinition "securitycontextconstraints.security.openshift.io" (19 of 955)
2025-12-13T00:21:40.944203137+00:00 stderr F I1213 00:21:40.944129 1 sync_worker.go:1014] Done syncing for clusterrolebinding "machine-api-operator" (262 of 955)
2025-12-13T00:21:40.944203137+00:00 stderr F I1213 00:21:40.944164 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.944203137+00:00 stderr F I1213 00:21:40.944170 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/machine-api-operator" (263 of 955)
2025-12-13T00:21:40.996657981+00:00 stderr F I1213 00:21:40.996586 1 sync_worker.go:1014] Done syncing for customresourcedefinition "securitycontextconstraints.security.openshift.io" (19 of 955)
2025-12-13T00:21:40.996657981+00:00 stderr F I1213 00:21:40.996625 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:40.996657981+00:00 stderr F I1213 00:21:40.996634 1 sync_worker.go:999] Running sync for customresourcedefinition "rangeallocations.security.internal.openshift.io" (20 of 955)
2025-12-13T00:21:41.046266715+00:00 stderr F I1213 00:21:41.046201 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/machine-api-operator" (263 of 955)
2025-12-13T00:21:41.046266715+00:00 stderr F I1213 00:21:41.046233 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.046266715+00:00 stderr F I1213 00:21:41.046239 1 sync_worker.go:999] Running sync for rolebinding "openshift-machine-api/prometheus-k8s-machine-api-operator" (264 of 955)
2025-12-13T00:21:41.094866813+00:00 stderr F I1213 00:21:41.094794 1 sync_worker.go:1014] Done syncing for customresourcedefinition "rangeallocations.security.internal.openshift.io" (20 of 955)
2025-12-13T00:21:41.094866813+00:00 stderr F I1213 00:21:41.094833 1 task_graph.go:481] Running 67 on worker 0
2025-12-13T00:21:41.094866813+00:00 stderr F I1213 00:21:41.094846 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.094866813+00:00 stderr F I1213 00:21:41.094851 1 sync_worker.go:999] Running sync for customresourcedefinition "openshiftcontrollermanagers.operator.openshift.io" (730 of 955)
2025-12-13T00:21:41.145561934+00:00 stderr F I1213 00:21:41.145489 1 sync_worker.go:1014] Done syncing for rolebinding "openshift-machine-api/prometheus-k8s-machine-api-operator" (264 of 955)
2025-12-13T00:21:41.145561934+00:00 stderr F I1213 00:21:41.145529 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.145561934+00:00 stderr F I1213 00:21:41.145535 1 sync_worker.go:999] Running sync for role "openshift-machine-api/prometheus-k8s-machine-api-operator" (265 of 955)
2025-12-13T00:21:41.194594694+00:00 stderr F I1213 00:21:41.194523 1 sync_worker.go:1014] Done syncing for customresourcedefinition "openshiftcontrollermanagers.operator.openshift.io" (730 of 955)
2025-12-13T00:21:41.194629804+00:00 stderr F I1213 00:21:41.194616 1 task_graph.go:481] Running 68 on worker 0
2025-12-13T00:21:41.194670625+00:00 stderr F I1213 00:21:41.194633 1 sync_worker.go:982] Skipping precreation of clusteroperator "cloud-controller-manager" (209 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194670625+00:00 stderr F I1213 00:21:41.194662 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194682806+00:00 stderr F I1213 00:21:41.194672 1 sync_worker.go:999] Running sync for namespace "openshift-cloud-controller-manager-operator" (178 of 955)
2025-12-13T00:21:41.194703836+00:00 stderr F I1213 00:21:41.194688 1 sync_worker.go:1002] Skipping namespace "openshift-cloud-controller-manager-operator" (178 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194703836+00:00 stderr F I1213 00:21:41.194698 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194712906+00:00 stderr F I1213 00:21:41.194705 1 sync_worker.go:999] Running sync for namespace "openshift-cloud-controller-manager" (179 of 955)
2025-12-13T00:21:41.194751027+00:00 stderr F I1213 00:21:41.194717 1 sync_worker.go:1002] Skipping namespace "openshift-cloud-controller-manager" (179 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194751027+00:00 stderr F I1213 00:21:41.194730 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194751027+00:00 stderr F I1213 00:21:41.194735 1 sync_worker.go:999] Running sync for configmap "openshift-cloud-controller-manager-operator/cloud-controller-manager-images" (180 of 955)
2025-12-13T00:21:41.194751027+00:00 stderr F I1213 00:21:41.194745 1 sync_worker.go:1002] Skipping configmap "openshift-cloud-controller-manager-operator/cloud-controller-manager-images" (180 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194763308+00:00 stderr F I1213 00:21:41.194750 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194763308+00:00 stderr F I1213 00:21:41.194755 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (181 of 955)
2025-12-13T00:21:41.194772448+00:00 stderr F I1213 00:21:41.194765 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (181 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194781948+00:00 stderr F I1213 00:21:41.194771 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194781948+00:00 stderr F I1213 00:21:41.194776 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:operator:cloud-controller-manager" (182 of 955)
2025-12-13T00:21:41.194793208+00:00 stderr F I1213 00:21:41.194785 1 sync_worker.go:1002] Skipping clusterrole "system:openshift:operator:cloud-controller-manager" (182 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194801859+00:00 stderr F I1213 00:21:41.194791 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194801859+00:00 stderr F I1213 00:21:41.194796 1 sync_worker.go:999] Running sync for role "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (183 of 955)
2025-12-13T00:21:41.194816659+00:00 stderr F I1213 00:21:41.194805 1 sync_worker.go:1002] Skipping role "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (183 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194816659+00:00 stderr F I1213 00:21:41.194810 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194824159+00:00 stderr F I1213 00:21:41.194814 1 sync_worker.go:999] Running sync for role "openshift-config/cluster-cloud-controller-manager" (184 of 955)
2025-12-13T00:21:41.194831199+00:00 stderr F I1213 00:21:41.194823 1 sync_worker.go:1002] Skipping role "openshift-config/cluster-cloud-controller-manager" (184 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194831199+00:00 stderr F I1213 00:21:41.194827 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194838439+00:00 stderr F I1213 00:21:41.194832 1 sync_worker.go:999] Running sync for rolebinding "openshift-config/cluster-cloud-controller-manager" (185 of 955)
2025-12-13T00:21:41.194847210+00:00 stderr F I1213 00:21:41.194840 1 sync_worker.go:1002] Skipping rolebinding "openshift-config/cluster-cloud-controller-manager" (185 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194854040+00:00 stderr F I1213 00:21:41.194845 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194860840+00:00 stderr F I1213 00:21:41.194850 1 sync_worker.go:999] Running sync for role "openshift-config-managed/cluster-cloud-controller-manager" (186 of 955)
2025-12-13T00:21:41.194868000+00:00 stderr F I1213 00:21:41.194859 1 sync_worker.go:1002] Skipping role "openshift-config-managed/cluster-cloud-controller-manager" (186 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194868000+00:00 stderr F I1213 00:21:41.194864 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194875070+00:00 stderr F I1213 00:21:41.194868 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-managed/cluster-cloud-controller-manager" (187 of 955)
2025-12-13T00:21:41.194883651+00:00 stderr F I1213 00:21:41.194877 1 sync_worker.go:1002] Skipping rolebinding "openshift-config-managed/cluster-cloud-controller-manager" (187 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194891551+00:00 stderr F I1213 00:21:41.194881 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194900201+00:00 stderr F I1213 00:21:41.194887 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:cloud-controller-manager" (188 of 955)
2025-12-13T00:21:41.194908891+00:00 stderr F I1213 00:21:41.194897 1 sync_worker.go:1002] Skipping clusterrolebinding "system:openshift:operator:cloud-controller-manager" (188 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194908891+00:00 stderr F I1213 00:21:41.194902 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194918431+00:00 stderr F I1213 00:21:41.194906 1 sync_worker.go:999] Running sync for rolebinding "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (189 of 955)
2025-12-13T00:21:41.194918431+00:00 stderr F I1213 00:21:41.194914 1 sync_worker.go:1002] Skipping rolebinding "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (189 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194927062+00:00 stderr F I1213 00:21:41.194919 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194961122+00:00 stderr F I1213 00:21:41.194944 1 sync_worker.go:999] Running sync for rolebinding "openshift-cloud-controller-manager/cluster-cloud-controller-manager" (190 of 955)
2025-12-13T00:21:41.194961122+00:00 stderr F I1213 00:21:41.194953 1 sync_worker.go:1002] Skipping rolebinding "openshift-cloud-controller-manager/cluster-cloud-controller-manager" (190 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194961122+00:00 stderr F I1213 00:21:41.194958 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194975793+00:00 stderr F I1213 00:21:41.194962 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cloud-controller-manager/cloud-controller-manager" (191 of 955)
2025-12-13T00:21:41.194975793+00:00 stderr F I1213 00:21:41.194971 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cloud-controller-manager/cloud-controller-manager" (191 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.194984663+00:00 stderr F I1213 00:21:41.194975 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.194993413+00:00 stderr F I1213 00:21:41.194980 1 sync_worker.go:999] Running sync for rolebinding "openshift-cloud-controller-manager/cloud-controller-manager" (192 of 955)
2025-12-13T00:21:41.195002153+00:00 stderr F I1213 00:21:41.194992 1 sync_worker.go:1002] Skipping rolebinding "openshift-cloud-controller-manager/cloud-controller-manager" (192 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195002153+00:00 stderr F I1213 00:21:41.194998 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195013024+00:00 stderr F I1213 00:21:41.195004 1 sync_worker.go:999] Running sync for role "openshift-cloud-controller-manager/cloud-controller-manager" (193 of 955)
2025-12-13T00:21:41.195023634+00:00 stderr F I1213 00:21:41.195015 1 sync_worker.go:1002] Skipping role "openshift-cloud-controller-manager/cloud-controller-manager" (193 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195032874+00:00 stderr F I1213 00:21:41.195022 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195032874+00:00 stderr F I1213 00:21:41.195028 1 sync_worker.go:999] Running sync for rolebinding "kube-system/cloud-controller-manager" (194 of 955)
2025-12-13T00:21:41.195044364+00:00 stderr F I1213 00:21:41.195037 1 sync_worker.go:1002] Skipping rolebinding "kube-system/cloud-controller-manager" (194 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195044364+00:00 stderr F I1213 00:21:41.195041 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195075785+00:00 stderr F I1213 00:21:41.195046 1 sync_worker.go:999] Running sync for role "kube-system/cloud-controller-manager" (195 of 955)
2025-12-13T00:21:41.195087145+00:00 stderr F I1213 00:21:41.195080 1 sync_worker.go:1002] Skipping role "kube-system/cloud-controller-manager" (195 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195095856+00:00 stderr F I1213 00:21:41.195086 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195095856+00:00 stderr F I1213 00:21:41.195091 1 sync_worker.go:999] Running sync for clusterrole "cloud-controller-manager" (196 of 955)
2025-12-13T00:21:41.195106896+00:00 stderr F I1213 00:21:41.195099 1 sync_worker.go:1002] Skipping clusterrole "cloud-controller-manager" (196 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195115496+00:00 stderr F I1213 00:21:41.195105 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195115496+00:00 stderr F I1213 00:21:41.195110 1 sync_worker.go:999] Running sync for clusterrolebinding "cloud-controller-manager" (197 of 955)
2025-12-13T00:21:41.195126266+00:00 stderr F I1213 00:21:41.195119 1 sync_worker.go:1002] Skipping clusterrolebinding "cloud-controller-manager" (197 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195139427+00:00 stderr F I1213 00:21:41.195125 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195139427+00:00 stderr F I1213 00:21:41.195130 1 sync_worker.go:999] Running sync for rolebinding "kube-system/cloud-controller-manager:apiserver-authentication-reader" (198 of 955)
2025-12-13T00:21:41.195148017+00:00 stderr F I1213 00:21:41.195138 1 sync_worker.go:1002] Skipping rolebinding "kube-system/cloud-controller-manager:apiserver-authentication-reader" (198 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195148017+00:00 stderr F I1213 00:21:41.195142 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195156557+00:00 stderr F I1213 00:21:41.195147 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cloud-controller-manager/cloud-node-manager" (199 of 955)
2025-12-13T00:21:41.195164757+00:00 stderr F I1213 00:21:41.195155 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cloud-controller-manager/cloud-node-manager" (199 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195164757+00:00 stderr F I1213 00:21:41.195160 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195172898+00:00 stderr F I1213 00:21:41.195164 1 sync_worker.go:999] Running sync for clusterrole "cloud-node-manager" (200 of 955)
2025-12-13T00:21:41.195181238+00:00 stderr F I1213 00:21:41.195173 1 sync_worker.go:1002] Skipping clusterrole "cloud-node-manager" (200 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195181238+00:00 stderr F I1213 00:21:41.195178 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195189978+00:00 stderr F I1213 00:21:41.195182 1 sync_worker.go:999] Running sync for clusterrolebinding "cloud-node-manager" (201 of 955)
2025-12-13T00:21:41.195198518+00:00 stderr F I1213 00:21:41.195190 1 sync_worker.go:1002] Skipping clusterrolebinding "cloud-node-manager" (201 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195198518+00:00 stderr F I1213 00:21:41.195195 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195209398+00:00 stderr F I1213 00:21:41.195201 1 sync_worker.go:999] Running sync for serviceaccount "kube-system/cloud-controller-manager" (202 of 955)
2025-12-13T00:21:41.195219629+00:00 stderr F I1213 00:21:41.195211 1 sync_worker.go:1002] Skipping serviceaccount "kube-system/cloud-controller-manager" (202 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195227729+00:00 stderr F I1213 00:21:41.195218 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195235569+00:00 stderr F I1213 00:21:41.195224 1 sync_worker.go:999] Running sync for clusterrole "openstack-cloud-controller-manager" (203 of 955)
2025-12-13T00:21:41.195245679+00:00 stderr F I1213 00:21:41.195237 1 sync_worker.go:1002] Skipping clusterrole "openstack-cloud-controller-manager" (203 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195253490+00:00 stderr F I1213 00:21:41.195243 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195261280+00:00 stderr F I1213 00:21:41.195250 1 sync_worker.go:999] Running sync for clusterrolebinding "openstack-cloud-controller-manager" (204 of 955)
2025-12-13T00:21:41.195269220+00:00 stderr F I1213 00:21:41.195262 1 sync_worker.go:1002] Skipping clusterrolebinding "openstack-cloud-controller-manager" (204 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195277050+00:00 stderr F I1213 00:21:41.195268 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195285120+00:00 stderr F I1213 00:21:41.195274 1 sync_worker.go:999] Running sync for service "openshift-cloud-controller-manager-operator/cloud-controller-manager-operator" (205 of 955)
2025-12-13T00:21:41.195296561+00:00 stderr F I1213 00:21:41.195286 1 sync_worker.go:1002] Skipping service "openshift-cloud-controller-manager-operator/cloud-controller-manager-operator" (205 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195296561+00:00 stderr F I1213 00:21:41.195293 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195306691+00:00 stderr F I1213 00:21:41.195299 1 sync_worker.go:999] Running sync for configmap "openshift-cloud-controller-manager-operator/kube-rbac-proxy" (206 of 955)
2025-12-13T00:21:41.195341562+00:00 stderr F I1213 00:21:41.195310 1 sync_worker.go:1002] Skipping configmap "openshift-cloud-controller-manager-operator/kube-rbac-proxy" (206 of 955): disabled capabilities: CloudControllerManager
2025-12-13T00:21:41.195341562+00:00 stderr F I1213 00:21:41.195325 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:41.195341562+00:00 stderr F I1213 00:21:41.195331 1 sync_worker.go:999] Running sync for deployment "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (207 of 955)
2025-12-13T00:21:41.195352272+00:00 stderr F I1213 00:21:41.195343 1 sync_worker.go:1002] Skipping deployment
"openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager" (207 of 955): disabled capabilities: CloudControllerManager 2025-12-13T00:21:41.195352272+00:00 stderr F I1213 00:21:41.195349 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.195363172+00:00 stderr F I1213 00:21:41.195353 1 sync_worker.go:999] Running sync for deployment "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator" (208 of 955) 2025-12-13T00:21:41.195399093+00:00 stderr F I1213 00:21:41.195366 1 sync_worker.go:1002] Skipping deployment "openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator" (208 of 955): disabled capabilities: CloudControllerManager 2025-12-13T00:21:41.195399093+00:00 stderr F I1213 00:21:41.195381 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.195399093+00:00 stderr F I1213 00:21:41.195387 1 sync_worker.go:999] Running sync for clusteroperator "cloud-controller-manager" (209 of 955) 2025-12-13T00:21:41.195409143+00:00 stderr F I1213 00:21:41.195398 1 sync_worker.go:1002] Skipping clusteroperator "cloud-controller-manager" (209 of 955): disabled capabilities: CloudControllerManager 2025-12-13T00:21:41.195409143+00:00 stderr F I1213 00:21:41.195405 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.195419754+00:00 stderr F I1213 00:21:41.195411 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-openstack-cloud-controller-manager" (210 of 955) 2025-12-13T00:21:41.195430354+00:00 stderr F I1213 00:21:41.195423 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-openstack-cloud-controller-manager" (210 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-12-13T00:21:41.195438904+00:00 stderr F I1213 00:21:41.195430 1 sync_worker.go:703] Dropping status 
report from earlier in sync loop 2025-12-13T00:21:41.195447434+00:00 stderr F I1213 00:21:41.195437 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-azure-cloud-controller-manager" (211 of 955) 2025-12-13T00:21:41.195457965+00:00 stderr F I1213 00:21:41.195450 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-azure-cloud-controller-manager" (211 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-12-13T00:21:41.195466415+00:00 stderr F I1213 00:21:41.195457 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.195479165+00:00 stderr F I1213 00:21:41.195463 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-ibm-cloud-controller-manager" (212 of 955) 2025-12-13T00:21:41.195487745+00:00 stderr F I1213 00:21:41.195475 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-ibm-cloud-controller-manager" (212 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-12-13T00:21:41.195487745+00:00 stderr F I1213 00:21:41.195482 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.195498455+00:00 stderr F I1213 00:21:41.195488 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-powervs-cloud-controller-manager" (213 of 955) 2025-12-13T00:21:41.195509026+00:00 stderr F I1213 00:21:41.195501 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-powervs-cloud-controller-manager" (213 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-12-13T00:21:41.195517416+00:00 stderr F I1213 00:21:41.195508 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.195525866+00:00 stderr F I1213 00:21:41.195515 1 
sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-gcp-ccm" (214 of 955) 2025-12-13T00:21:41.195534616+00:00 stderr F I1213 00:21:41.195525 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-gcp-ccm" (214 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-12-13T00:21:41.195534616+00:00 stderr F I1213 00:21:41.195532 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.195546187+00:00 stderr F I1213 00:21:41.195538 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-vsphere-cloud-controller-manager" (215 of 955) 2025-12-13T00:21:41.195580027+00:00 stderr F I1213 00:21:41.195551 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-vsphere-cloud-controller-manager" (215 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-12-13T00:21:41.195580027+00:00 stderr F I1213 00:21:41.195564 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.195580027+00:00 stderr F I1213 00:21:41.195571 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-nutanix-cloud-controller-manager" (216 of 955) 2025-12-13T00:21:41.195592058+00:00 stderr F I1213 00:21:41.195584 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-nutanix-cloud-controller-manager" (216 of 955): disabled capabilities: CloudCredential, CloudControllerManager 2025-12-13T00:21:41.195629779+00:00 stderr F I1213 00:21:41.195599 1 task_graph.go:481] Running 69 on worker 0 2025-12-13T00:21:41.195629779+00:00 stderr F I1213 00:21:41.195609 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.195629779+00:00 stderr F I1213 00:21:41.195617 1 sync_worker.go:999] Running sync for 
servicemonitor "openshift-machine-config-operator/machine-config-operator" (922 of 955) 2025-12-13T00:21:41.244733829+00:00 stderr F I1213 00:21:41.244656 1 sync_worker.go:1014] Done syncing for role "openshift-machine-api/prometheus-k8s-machine-api-operator" (265 of 955) 2025-12-13T00:21:41.244733829+00:00 stderr F I1213 00:21:41.244693 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.244733829+00:00 stderr F I1213 00:21:41.244700 1 sync_worker.go:999] Running sync for clusterrole "machine-api-operator:cluster-reader" (266 of 955) 2025-12-13T00:21:41.295721108+00:00 stderr F I1213 00:21:41.295634 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-machine-config-operator/machine-config-operator" (922 of 955) 2025-12-13T00:21:41.295721108+00:00 stderr F I1213 00:21:41.295672 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.295721108+00:00 stderr F I1213 00:21:41.295680 1 sync_worker.go:999] Running sync for servicemonitor "openshift-machine-config-operator/machine-config-controller" (923 of 955) 2025-12-13T00:21:41.345029653+00:00 stderr F I1213 00:21:41.344917 1 sync_worker.go:1014] Done syncing for clusterrole "machine-api-operator:cluster-reader" (266 of 955) 2025-12-13T00:21:41.345029653+00:00 stderr F I1213 00:21:41.344987 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.345029653+00:00 stderr F I1213 00:21:41.345003 1 sync_worker.go:999] Running sync for securitycontextconstraints "machine-api-termination-handler" (267 of 955) 2025-12-13T00:21:41.395990140+00:00 stderr F I1213 00:21:41.395869 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-machine-config-operator/machine-config-controller" (923 of 955) 2025-12-13T00:21:41.395990140+00:00 stderr F I1213 00:21:41.395916 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.395990140+00:00 stderr F I1213 00:21:41.395926 
1 sync_worker.go:999] Running sync for servicemonitor "openshift-machine-config-operator/machine-config-daemon" (924 of 955) 2025-12-13T00:21:41.446849555+00:00 stderr F I1213 00:21:41.446734 1 sync_worker.go:1014] Done syncing for securitycontextconstraints "machine-api-termination-handler" (267 of 955) 2025-12-13T00:21:41.446849555+00:00 stderr F I1213 00:21:41.446787 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.446849555+00:00 stderr F I1213 00:21:41.446797 1 sync_worker.go:999] Running sync for configmap "openshift-machine-api/kube-rbac-proxy" (268 of 955) 2025-12-13T00:21:41.496442168+00:00 stderr F I1213 00:21:41.496348 1 sync_worker.go:1014] Done syncing for servicemonitor "openshift-machine-config-operator/machine-config-daemon" (924 of 955) 2025-12-13T00:21:41.496442168+00:00 stderr F I1213 00:21:41.496389 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.496442168+00:00 stderr F I1213 00:21:41.496397 1 sync_worker.go:999] Running sync for storageversionmigration "machineconfiguration-controllerconfig-storage-version-migration" (925 of 955) 2025-12-13T00:21:41.545759805+00:00 stderr F I1213 00:21:41.545651 1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-api/kube-rbac-proxy" (268 of 955) 2025-12-13T00:21:41.545759805+00:00 stderr F I1213 00:21:41.545692 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.545759805+00:00 stderr F I1213 00:21:41.545698 1 sync_worker.go:999] Running sync for clusterrole "machine-api-controllers-metal3-remediation-aggregation" (269 of 955) 2025-12-13T00:21:41.596208439+00:00 stderr F I1213 00:21:41.596110 1 sync_worker.go:1014] Done syncing for storageversionmigration "machineconfiguration-controllerconfig-storage-version-migration" (925 of 955) 2025-12-13T00:21:41.596208439+00:00 stderr F I1213 00:21:41.596153 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-12-13T00:21:41.596208439+00:00 stderr F I1213 00:21:41.596159 1 sync_worker.go:999] Running sync for storageversionmigration "machineconfiguration-machineconfigpool-storage-version-migration" (926 of 955) 2025-12-13T00:21:41.645466083+00:00 stderr F I1213 00:21:41.645391 1 sync_worker.go:1014] Done syncing for clusterrole "machine-api-controllers-metal3-remediation-aggregation" (269 of 955) 2025-12-13T00:21:41.645466083+00:00 stderr F I1213 00:21:41.645428 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.645466083+00:00 stderr F I1213 00:21:41.645435 1 sync_worker.go:999] Running sync for clusterrole "machine-api-controllers-metal3-remediation" (270 of 955) 2025-12-13T00:21:41.695139379+00:00 stderr F I1213 00:21:41.695065 1 sync_worker.go:1014] Done syncing for storageversionmigration "machineconfiguration-machineconfigpool-storage-version-migration" (926 of 955) 2025-12-13T00:21:41.695139379+00:00 stderr F I1213 00:21:41.695100 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.695139379+00:00 stderr F I1213 00:21:41.695106 1 sync_worker.go:999] Running sync for configmap "openshift-config-managed/node-cluster" (927 of 955) 2025-12-13T00:21:41.745562193+00:00 stderr F I1213 00:21:41.745466 1 sync_worker.go:1014] Done syncing for clusterrole "machine-api-controllers-metal3-remediation" (270 of 955) 2025-12-13T00:21:41.745562193+00:00 stderr F I1213 00:21:41.745503 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.745562193+00:00 stderr F I1213 00:21:41.745509 1 sync_worker.go:999] Running sync for clusterrolebinding "machine-api-controllers-baremetal" (271 of 955) 2025-12-13T00:21:41.796653373+00:00 stderr F I1213 00:21:41.796511 1 sync_worker.go:1014] Done syncing for configmap "openshift-config-managed/node-cluster" (927 of 955) 2025-12-13T00:21:41.796653373+00:00 stderr F I1213 00:21:41.796574 1 sync_worker.go:703] Dropping status 
report from earlier in sync loop 2025-12-13T00:21:41.796653373+00:00 stderr F I1213 00:21:41.796581 1 sync_worker.go:999] Running sync for prometheusrule "openshift-machine-config-operator/machine-config-controller" (928 of 955) 2025-12-13T00:21:41.798223322+00:00 stderr F I1213 00:21:41.798163 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:41.799083473+00:00 stderr F I1213 00:21:41.799038 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:41.799083473+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:41.799083473+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:41.799394540+00:00 stderr F I1213 00:21:41.799281 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (1.128937ms) 2025-12-13T00:21:41.799410191+00:00 stderr F I1213 00:21:41.799389 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:41.799560045+00:00 stderr F I1213 00:21:41.799498 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:41.799606576+00:00 stderr F I1213 00:21:41.799571 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:41.799695749+00:00 stderr F I1213 00:21:41.799581 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, 
Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:41.800007136+00:00 stderr F I1213 00:21:41.799927 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", 
URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:41.826699934+00:00 stderr F W1213 00:21:41.826627 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:41.829370220+00:00 stderr F I1213 00:21:41.829310 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (29.929258ms) 2025-12-13T00:21:41.851136557+00:00 stderr F I1213 00:21:41.851029 1 sync_worker.go:1014] Done syncing for clusterrolebinding "machine-api-controllers-baremetal" (271 of 955) 2025-12-13T00:21:41.851136557+00:00 stderr F I1213 00:21:41.851105 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.851182228+00:00 stderr F I1213 00:21:41.851122 1 sync_worker.go:999] Running sync for service "openshift-machine-api/machine-api-operator" (272 of 955) 2025-12-13T00:21:41.895321237+00:00 stderr F I1213 00:21:41.895257 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-machine-config-operator/machine-config-controller" (928 of 955) 2025-12-13T00:21:41.895321237+00:00 stderr F I1213 00:21:41.895294 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.895321237+00:00 stderr F I1213 00:21:41.895300 1 sync_worker.go:999] Running sync for prometheusrule "openshift-machine-config-operator/machine-config-daemon" (929 of 955) 2025-12-13T00:21:41.946076769+00:00 stderr F I1213 00:21:41.945995 1 sync_worker.go:1014] Done syncing for service "openshift-machine-api/machine-api-operator" (272 of 955) 2025-12-13T00:21:41.946076769+00:00 stderr F I1213 00:21:41.946044 1 sync_worker.go:703] Dropping 
status report from earlier in sync loop 2025-12-13T00:21:41.946076769+00:00 stderr F I1213 00:21:41.946056 1 sync_worker.go:999] Running sync for service "openshift-machine-api/machine-api-controllers" (273 of 955) 2025-12-13T00:21:41.997577669+00:00 stderr F I1213 00:21:41.997505 1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-machine-config-operator/machine-config-daemon" (929 of 955) 2025-12-13T00:21:41.997577669+00:00 stderr F I1213 00:21:41.997556 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:41.997626120+00:00 stderr F I1213 00:21:41.997570 1 sync_worker.go:999] Running sync for clusterrolebinding "default-account-openshift-machine-config-operator" (930 of 955) 2025-12-13T00:21:42.047802177+00:00 stderr F I1213 00:21:42.047689 1 sync_worker.go:1014] Done syncing for service "openshift-machine-api/machine-api-controllers" (273 of 955) 2025-12-13T00:21:42.047802177+00:00 stderr F I1213 00:21:42.047781 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.047840898+00:00 stderr F I1213 00:21:42.047800 1 sync_worker.go:999] Running sync for deployment "openshift-machine-api/machine-api-operator" (274 of 955) 2025-12-13T00:21:42.125154956+00:00 stderr F I1213 00:21:42.095043 1 sync_worker.go:1014] Done syncing for clusterrolebinding "default-account-openshift-machine-config-operator" (930 of 955) 2025-12-13T00:21:42.125154956+00:00 stderr F I1213 00:21:42.095095 1 task_graph.go:481] Running 70 on worker 0 2025-12-13T00:21:42.145824585+00:00 stderr F I1213 00:21:42.145737 1 sync_worker.go:1014] Done syncing for deployment "openshift-machine-api/machine-api-operator" (274 of 955) 2025-12-13T00:21:42.145824585+00:00 stderr F I1213 00:21:42.145770 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.145824585+00:00 stderr F I1213 00:21:42.145778 1 sync_worker.go:999] Running sync for clusteroperator "machine-api" (275 of 955) 
2025-12-13T00:21:42.146285166+00:00 stderr F I1213 00:21:42.146235 1 sync_worker.go:1014] Done syncing for clusteroperator "machine-api" (275 of 955) 2025-12-13T00:21:42.146285166+00:00 stderr F I1213 00:21:42.146259 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.146285166+00:00 stderr F I1213 00:21:42.146270 1 sync_worker.go:999] Running sync for machinehealthcheck "openshift-machine-api/machine-api-termination-handler" (276 of 955) 2025-12-13T00:21:42.198907664+00:00 stderr F I1213 00:21:42.198789 1 sync_worker.go:989] Precreated resource clusteroperator "service-ca" (754 of 955) 2025-12-13T00:21:42.198907664+00:00 stderr F I1213 00:21:42.198823 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.198907664+00:00 stderr F I1213 00:21:42.198831 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:service-ca-operator" (746 of 955) 2025-12-13T00:21:42.247070702+00:00 stderr F I1213 00:21:42.246558 1 sync_worker.go:1014] Done syncing for machinehealthcheck "openshift-machine-api/machine-api-termination-handler" (276 of 955) 2025-12-13T00:21:42.247070702+00:00 stderr F I1213 00:21:42.247030 1 task_graph.go:481] Running 71 on worker 1 2025-12-13T00:21:42.295981599+00:00 stderr F I1213 00:21:42.295831 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:service-ca-operator" (746 of 955) 2025-12-13T00:21:42.295981599+00:00 stderr F I1213 00:21:42.295899 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.295981599+00:00 stderr F I1213 00:21:42.295916 1 sync_worker.go:999] Running sync for namespace "openshift-service-ca-operator" (747 of 955) 2025-12-13T00:21:42.348580096+00:00 stderr F I1213 00:21:42.348483 1 sync_worker.go:989] Precreated resource clusteroperator "image-registry" (399 of 955) 2025-12-13T00:21:42.348580096+00:00 stderr F I1213 00:21:42.348525 1 sync_worker.go:703] 
Dropping status report from earlier in sync loop 2025-12-13T00:21:42.348580096+00:00 stderr F I1213 00:21:42.348533 1 sync_worker.go:999] Running sync for customresourcedefinition "configs.imageregistry.operator.openshift.io" (378 of 955) 2025-12-13T00:21:42.395216906+00:00 stderr F I1213 00:21:42.395141 1 sync_worker.go:1014] Done syncing for namespace "openshift-service-ca-operator" (747 of 955) 2025-12-13T00:21:42.395216906+00:00 stderr F I1213 00:21:42.395188 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.395216906+00:00 stderr F I1213 00:21:42.395197 1 sync_worker.go:999] Running sync for customresourcedefinition "servicecas.operator.openshift.io" (748 of 955) 2025-12-13T00:21:42.455000431+00:00 stderr F I1213 00:21:42.454921 1 sync_worker.go:1014] Done syncing for customresourcedefinition "configs.imageregistry.operator.openshift.io" (378 of 955) 2025-12-13T00:21:42.455000431+00:00 stderr F I1213 00:21:42.454971 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.455000431+00:00 stderr F I1213 00:21:42.454978 1 sync_worker.go:999] Running sync for namespace "openshift-image-registry" (379 of 955) 2025-12-13T00:21:42.497379537+00:00 stderr F I1213 00:21:42.497305 1 sync_worker.go:1014] Done syncing for customresourcedefinition "servicecas.operator.openshift.io" (748 of 955) 2025-12-13T00:21:42.497379537+00:00 stderr F I1213 00:21:42.497343 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.497379537+00:00 stderr F I1213 00:21:42.497352 1 sync_worker.go:999] Running sync for service "openshift-service-ca-operator/metrics" (749 of 955) 2025-12-13T00:21:42.545721819+00:00 stderr F I1213 00:21:42.545655 1 sync_worker.go:1014] Done syncing for namespace "openshift-image-registry" (379 of 955) 2025-12-13T00:21:42.545721819+00:00 stderr F I1213 00:21:42.545692 1 sync_worker.go:703] Dropping status report from earlier in sync loop 
2025-12-13T00:21:42.545721819+00:00 stderr F I1213 00:21:42.545699 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry-alibaba" (380 of 955) 2025-12-13T00:21:42.545764750+00:00 stderr F I1213 00:21:42.545718 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry-alibaba" (380 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:42.545764750+00:00 stderr F I1213 00:21:42.545724 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.545764750+00:00 stderr F I1213 00:21:42.545729 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry-azure" (381 of 955) 2025-12-13T00:21:42.545764750+00:00 stderr F I1213 00:21:42.545740 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry-azure" (381 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:42.545764750+00:00 stderr F I1213 00:21:42.545746 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.545764750+00:00 stderr F I1213 00:21:42.545751 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry-gcs" (382 of 955) 2025-12-13T00:21:42.545776760+00:00 stderr F I1213 00:21:42.545763 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry-gcs" (382 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:42.545776760+00:00 stderr F I1213 00:21:42.545769 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.545785600+00:00 stderr F I1213 00:21:42.545775 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry-ibmcos" (383 of 955) 2025-12-13T00:21:42.545796441+00:00 
stderr F I1213 00:21:42.545789 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry-ibmcos" (383 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:42.545804981+00:00 stderr F I1213 00:21:42.545795 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.545813551+00:00 stderr F I1213 00:21:42.545800 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry-openstack" (384 of 955) 2025-12-13T00:21:42.545822271+00:00 stderr F I1213 00:21:42.545813 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry-openstack" (384 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:42.545822271+00:00 stderr F I1213 00:21:42.545819 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.545841212+00:00 stderr F I1213 00:21:42.545826 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry-ibmcos-powervs" (385 of 955) 2025-12-13T00:21:42.545841212+00:00 stderr F I1213 00:21:42.545837 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry-ibmcos-powervs" (385 of 955): disabled capabilities: CloudCredential 2025-12-13T00:21:42.545850422+00:00 stderr F I1213 00:21:42.545843 1 sync_worker.go:703] Dropping status report from earlier in sync loop 2025-12-13T00:21:42.545858682+00:00 stderr F I1213 00:21:42.545849 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry" (386 of 955) 2025-12-13T00:21:42.545891113+00:00 stderr F I1213 00:21:42.545864 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-image-registry" (386 of 955): disabled capabilities: CloudCredential 
2025-12-13T00:21:42.545891113+00:00 stderr F I1213 00:21:42.545880       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:42.545900933+00:00 stderr F I1213 00:21:42.545886       1 sync_worker.go:999] Running sync for customresourcedefinition "imagepruners.imageregistry.operator.openshift.io" (387 of 955)
2025-12-13T00:21:42.598265375+00:00 stderr F I1213 00:21:42.597997       1 sync_worker.go:1014] Done syncing for service "openshift-service-ca-operator/metrics" (749 of 955)
2025-12-13T00:21:42.598265375+00:00 stderr F I1213 00:21:42.598229       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:42.598265375+00:00 stderr F I1213 00:21:42.598236       1 sync_worker.go:999] Running sync for configmap "openshift-service-ca-operator/service-ca-operator-config" (750 of 955)
2025-12-13T00:21:42.648875423+00:00 stderr F I1213 00:21:42.648781       1 sync_worker.go:1014] Done syncing for customresourcedefinition "imagepruners.imageregistry.operator.openshift.io" (387 of 955)
2025-12-13T00:21:42.648875423+00:00 stderr F I1213 00:21:42.648817       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:42.648875423+00:00 stderr F I1213 00:21:42.648823       1 sync_worker.go:999] Running sync for clusterrole "cluster-image-registry-operator" (388 of 955)
2025-12-13T00:21:42.696265762+00:00 stderr F I1213 00:21:42.696165       1 sync_worker.go:1014] Done syncing for configmap "openshift-service-ca-operator/service-ca-operator-config" (750 of 955)
2025-12-13T00:21:42.696265762+00:00 stderr F I1213 00:21:42.696214       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:42.696265762+00:00 stderr F I1213 00:21:42.696226       1 sync_worker.go:999] Running sync for serviceca "cluster" (751 of 955)
2025-12-13T00:21:42.745784543+00:00 stderr F I1213 00:21:42.745694       1 sync_worker.go:1014] Done syncing for clusterrole "cluster-image-registry-operator" (388 of 955)
2025-12-13T00:21:42.745784543+00:00 stderr F I1213 00:21:42.745743       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:42.745784543+00:00 stderr F I1213 00:21:42.745756       1 sync_worker.go:999] Running sync for role "openshift-image-registry/cluster-image-registry-operator" (389 of 955)
2025-12-13T00:21:42.799876198+00:00 stderr F I1213 00:21:42.799754       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:21:42.800136984+00:00 stderr F I1213 00:21:42.800035       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:21:42.800136984+00:00 stderr F   NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:21:42.800136984+00:00 stderr F   NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:21:42.800136984+00:00 stderr F I1213 00:21:42.800114       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (373.299µs)
2025-12-13T00:21:42.800136984+00:00 stderr F I1213 00:21:42.800129       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:42.800189696+00:00 stderr F I1213 00:21:42.800165       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:42.800257237+00:00 stderr F I1213 00:21:42.800207       1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:42.800274168+00:00 stderr F I1213 00:21:42.800220       1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:42.800570485+00:00 stderr F I1213 00:21:42.800504       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:42.806123672+00:00 stderr F I1213 00:21:42.806072       1 sync_worker.go:1014] Done syncing for serviceca "cluster" (751 of 955)
2025-12-13T00:21:42.806123672+00:00 stderr F I1213 00:21:42.806101       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:42.806123672+00:00 stderr F I1213 00:21:42.806106       1 sync_worker.go:999] Running sync for serviceaccount "openshift-service-ca-operator/service-ca-operator" (752 of 955)
2025-12-13T00:21:42.824220898+00:00 stderr F W1213 00:21:42.824165       1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:42.825569852+00:00 stderr F I1213 00:21:42.825526       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (25.394886ms)
2025-12-13T00:21:42.845569095+00:00 stderr F I1213 00:21:42.845503       1 sync_worker.go:1014] Done syncing for role "openshift-image-registry/cluster-image-registry-operator" (389 of 955)
2025-12-13T00:21:42.845569095+00:00 stderr F I1213 00:21:42.845553       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:42.845587635+00:00 stderr F I1213 00:21:42.845566       1 sync_worker.go:999] Running sync for clusterrolebinding "default-account-cluster-image-registry-operator" (390 of 955)
2025-12-13T00:21:42.894776908+00:00 stderr F I1213 00:21:42.894717       1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-service-ca-operator/service-ca-operator" (752 of 955)
2025-12-13T00:21:42.894776908+00:00 stderr F I1213 00:21:42.894754       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:42.894776908+00:00 stderr F I1213 00:21:42.894763       1 sync_worker.go:999] Running sync for deployment "openshift-service-ca-operator/service-ca-operator" (753 of 955)
2025-12-13T00:21:42.945397067+00:00 stderr F I1213 00:21:42.945298       1 sync_worker.go:1014] Done syncing for clusterrolebinding "default-account-cluster-image-registry-operator" (390 of 955)
2025-12-13T00:21:42.945397067+00:00 stderr F I1213 00:21:42.945336       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:42.945397067+00:00 stderr F I1213 00:21:42.945342       1 sync_worker.go:999] Running sync for rolebinding "openshift-image-registry/cluster-image-registry-operator" (391 of 955)
2025-12-13T00:21:42.996279152+00:00 stderr F I1213 00:21:42.996199       1 sync_worker.go:1014] Done syncing for deployment "openshift-service-ca-operator/service-ca-operator" (753 of 955)
2025-12-13T00:21:42.996279152+00:00 stderr F I1213 00:21:42.996246       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:42.996279152+00:00 stderr F I1213 00:21:42.996254       1 sync_worker.go:999] Running sync for clusteroperator "service-ca" (754 of 955)
2025-12-13T00:21:42.996520688+00:00 stderr F I1213 00:21:42.996486       1 sync_worker.go:1014] Done syncing for clusteroperator "service-ca" (754 of 955)
2025-12-13T00:21:42.996529298+00:00 stderr F I1213 00:21:42.996518       1 task_graph.go:481] Running 72 on worker 0
2025-12-13T00:21:42.996536368+00:00 stderr F I1213 00:21:42.996528       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:42.996545959+00:00 stderr F I1213 00:21:42.996537       1 sync_worker.go:999] Running sync for apiserver "cluster" (23 of 955)
2025-12-13T00:21:43.048362486+00:00 stderr F I1213 00:21:43.048015       1 sync_worker.go:1014] Done syncing for rolebinding "openshift-image-registry/cluster-image-registry-operator" (391 of 955)
2025-12-13T00:21:43.048362486+00:00 stderr F I1213 00:21:43.048057       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.048362486+00:00 stderr F I1213 00:21:43.048066       1 sync_worker.go:999] Running sync for serviceaccount "openshift-image-registry/cluster-image-registry-operator" (392 of 955)
2025-12-13T00:21:43.096647978+00:00 stderr F I1213 00:21:43.096564       1 sync_worker.go:1014] Done syncing for apiserver "cluster" (23 of 955)
2025-12-13T00:21:43.096647978+00:00 stderr F I1213 00:21:43.096605       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.096647978+00:00 stderr F I1213 00:21:43.096614       1 sync_worker.go:999] Running sync for authentication "cluster" (24 of 955)
2025-12-13T00:21:43.145863101+00:00 stderr F I1213 00:21:43.145731       1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-image-registry/cluster-image-registry-operator" (392 of 955)
2025-12-13T00:21:43.145863101+00:00 stderr F I1213 00:21:43.145766       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.145863101+00:00 stderr F I1213 00:21:43.145772       1 sync_worker.go:999] Running sync for configmap "openshift-image-registry/trusted-ca" (393 of 955)
2025-12-13T00:21:43.196586142+00:00 stderr F I1213 00:21:43.196486       1 sync_worker.go:1014] Done syncing for authentication "cluster" (24 of 955)
2025-12-13T00:21:43.196586142+00:00 stderr F I1213 00:21:43.196528       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.196586142+00:00 stderr F I1213 00:21:43.196536       1 sync_worker.go:999] Running sync for console "cluster" (25 of 955)
2025-12-13T00:21:43.265198965+00:00 stderr F I1213 00:21:43.265114       1 sync_worker.go:1014] Done syncing for configmap "openshift-image-registry/trusted-ca" (393 of 955)
2025-12-13T00:21:43.265198965+00:00 stderr F I1213 00:21:43.265156       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.265198965+00:00 stderr F I1213 00:21:43.265162       1 sync_worker.go:999] Running sync for role "openshift-image-registry/node-ca" (394 of 955)
2025-12-13T00:21:43.295876942+00:00 stderr F I1213 00:21:43.295812       1 sync_worker.go:1014] Done syncing for console "cluster" (25 of 955)
2025-12-13T00:21:43.295876942+00:00 stderr F I1213 00:21:43.295857       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.295876942+00:00 stderr F I1213 00:21:43.295865       1 sync_worker.go:999] Running sync for dns "cluster" (26 of 955)
2025-12-13T00:21:43.346244014+00:00 stderr F I1213 00:21:43.346175       1 sync_worker.go:1014] Done syncing for role "openshift-image-registry/node-ca" (394 of 955)
2025-12-13T00:21:43.346244014+00:00 stderr F I1213 00:21:43.346218       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.346244014+00:00 stderr F I1213 00:21:43.346227       1 sync_worker.go:999] Running sync for rolebinding "openshift-image-registry/node-ca" (395 of 955)
2025-12-13T00:21:43.395210871+00:00 stderr F I1213 00:21:43.395121       1 sync_worker.go:1014] Done syncing for dns "cluster" (26 of 955)
2025-12-13T00:21:43.395210871+00:00 stderr F I1213 00:21:43.395170       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.395210871+00:00 stderr F I1213 00:21:43.395178       1 sync_worker.go:999] Running sync for featuregate "cluster" (27 of 955)
2025-12-13T00:21:43.444844026+00:00 stderr F I1213 00:21:43.444774       1 sync_worker.go:1014] Done syncing for rolebinding "openshift-image-registry/node-ca" (395 of 955)
2025-12-13T00:21:43.444844026+00:00 stderr F I1213 00:21:43.444814       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.444844026+00:00 stderr F I1213 00:21:43.444824       1 sync_worker.go:999] Running sync for serviceaccount "openshift-image-registry/node-ca" (396 of 955)
2025-12-13T00:21:43.496270024+00:00 stderr F I1213 00:21:43.496166       1 sync_worker.go:1014] Done syncing for featuregate "cluster" (27 of 955)
2025-12-13T00:21:43.496270024+00:00 stderr F I1213 00:21:43.496203       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.496270024+00:00 stderr F I1213 00:21:43.496209       1 sync_worker.go:999] Running sync for image "cluster" (28 of 955)
2025-12-13T00:21:43.545310614+00:00 stderr F I1213 00:21:43.545203       1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-image-registry/node-ca" (396 of 955)
2025-12-13T00:21:43.545310614+00:00 stderr F I1213 00:21:43.545260       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.545310614+00:00 stderr F I1213 00:21:43.545268       1 sync_worker.go:999] Running sync for service "openshift-image-registry/image-registry-operator" (397 of 955)
2025-12-13T00:21:43.595357559+00:00 stderr F I1213 00:21:43.595305       1 sync_worker.go:1014] Done syncing for image "cluster" (28 of 955)
2025-12-13T00:21:43.595357559+00:00 stderr F I1213 00:21:43.595340       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.595357559+00:00 stderr F I1213 00:21:43.595346       1 sync_worker.go:999] Running sync for infrastructure "cluster" (29 of 955)
2025-12-13T00:21:43.645853694+00:00 stderr F I1213 00:21:43.645794       1 sync_worker.go:1014] Done syncing for service "openshift-image-registry/image-registry-operator" (397 of 955)
2025-12-13T00:21:43.645853694+00:00 stderr F I1213 00:21:43.645826       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.645853694+00:00 stderr F I1213 00:21:43.645833       1 sync_worker.go:999] Running sync for deployment "openshift-image-registry/cluster-image-registry-operator" (398 of 955)
2025-12-13T00:21:43.694671328+00:00 stderr F I1213 00:21:43.694603       1 sync_worker.go:1014] Done syncing for infrastructure "cluster" (29 of 955)
2025-12-13T00:21:43.694671328+00:00 stderr F I1213 00:21:43.694631       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.694671328+00:00 stderr F I1213 00:21:43.694637       1 sync_worker.go:999] Running sync for ingress "cluster" (30 of 955)
2025-12-13T00:21:43.795296780+00:00 stderr F I1213 00:21:43.795234       1 sync_worker.go:1014] Done syncing for ingress "cluster" (30 of 955)
2025-12-13T00:21:43.795296780+00:00 stderr F I1213 00:21:43.795264       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.795296780+00:00 stderr F I1213 00:21:43.795271       1 sync_worker.go:999] Running sync for network "cluster" (31 of 955)
2025-12-13T00:21:43.800767405+00:00 stderr F I1213 00:21:43.800713       1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:21:43.801119303+00:00 stderr F I1213 00:21:43.801068       1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:21:43.801119303+00:00 stderr F   NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:21:43.801119303+00:00 stderr F   NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:21:43.801186655+00:00 stderr F I1213 00:21:43.801158       1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (449.251µs)
2025-12-13T00:21:43.801201425+00:00 stderr F I1213 00:21:43.801184       1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:43.801291347+00:00 stderr F I1213 00:21:43.801251       1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:43.801335959+00:00 stderr F I1213 00:21:43.801315       1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:43.801430181+00:00 stderr F I1213 00:21:43.801329       1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:43.801760879+00:00 stderr F I1213 00:21:43.801706       1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:43.835737007+00:00 stderr F W1213 00:21:43.835656       1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:43.837664685+00:00 stderr F I1213 00:21:43.837622       1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (36.434568ms)
2025-12-13T00:21:43.847746674+00:00 stderr F I1213 00:21:43.847657       1 sync_worker.go:1014] Done syncing for deployment "openshift-image-registry/cluster-image-registry-operator" (398 of 955)
2025-12-13T00:21:43.847746674+00:00 stderr F I1213 00:21:43.847710       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.847746674+00:00 stderr F I1213 00:21:43.847721       1 sync_worker.go:999] Running sync for clusteroperator "image-registry" (399 of 955)
2025-12-13T00:21:43.848074242+00:00 stderr F I1213 00:21:43.848037       1 sync_worker.go:1014] Done syncing for clusteroperator "image-registry" (399 of 955)
2025-12-13T00:21:43.848074242+00:00 stderr F I1213 00:21:43.848056       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.848074242+00:00 stderr F I1213 00:21:43.848064       1 sync_worker.go:999] Running sync for prometheusrule "openshift-image-registry/imagestreams-rules" (400 of 955)
2025-12-13T00:21:43.895539032+00:00 stderr F I1213 00:21:43.895493       1 sync_worker.go:1014] Done syncing for network "cluster" (31 of 955)
2025-12-13T00:21:43.895599304+00:00 stderr F I1213 00:21:43.895588       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.895647665+00:00 stderr F I1213 00:21:43.895627       1 sync_worker.go:999] Running sync for oauth "cluster" (32 of 955)
2025-12-13T00:21:43.945186226+00:00 stderr F I1213 00:21:43.945138       1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-image-registry/imagestreams-rules" (400 of 955)
2025-12-13T00:21:43.945241188+00:00 stderr F I1213 00:21:43.945230       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.945291789+00:00 stderr F I1213 00:21:43.945279       1 sync_worker.go:999] Running sync for prometheusrule "openshift-image-registry/image-registry-rules" (401 of 955)
2025-12-13T00:21:43.996376060+00:00 stderr F I1213 00:21:43.996304       1 sync_worker.go:1014] Done syncing for oauth "cluster" (32 of 955)
2025-12-13T00:21:43.996376060+00:00 stderr F I1213 00:21:43.996350       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:43.996376060+00:00 stderr F I1213 00:21:43.996358       1 sync_worker.go:999] Running sync for project "cluster" (33 of 955)
2025-12-13T00:21:44.045889760+00:00 stderr F I1213 00:21:44.045767       1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-image-registry/image-registry-rules" (401 of 955)
2025-12-13T00:21:44.045889760+00:00 stderr F I1213 00:21:44.045800       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.045889760+00:00 stderr F I1213 00:21:44.045807       1 sync_worker.go:999] Running sync for prometheusrule "openshift-image-registry/image-registry-operator-alerts" (402 of 955)
2025-12-13T00:21:44.104788373+00:00 stderr F I1213 00:21:44.104705       1 sync_worker.go:1014] Done syncing for project "cluster" (33 of 955)
2025-12-13T00:21:44.104788373+00:00 stderr F I1213 00:21:44.104740       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.104788373+00:00 stderr F I1213 00:21:44.104746       1 sync_worker.go:999] Running sync for proxy "cluster" (34 of 955)
2025-12-13T00:21:44.145612360+00:00 stderr F I1213 00:21:44.145518       1 sync_worker.go:1014] Done syncing for prometheusrule "openshift-image-registry/image-registry-operator-alerts" (402 of 955)
2025-12-13T00:21:44.145612360+00:00 stderr F I1213 00:21:44.145557       1 task_graph.go:481] Running 73 on worker 1
2025-12-13T00:21:44.145612360+00:00 stderr F I1213 00:21:44.145575       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.145612360+00:00 stderr F I1213 00:21:44.145582       1 sync_worker.go:999] Running sync for configmap "openshift-machine-config-operator/coreos-bootimages" (687 of 955)
2025-12-13T00:21:44.196054144+00:00 stderr F I1213 00:21:44.195916       1 sync_worker.go:1014] Done syncing for proxy "cluster" (34 of 955)
2025-12-13T00:21:44.196054144+00:00 stderr F I1213 00:21:44.196003       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.196054144+00:00 stderr F I1213 00:21:44.196016       1 sync_worker.go:999] Running sync for scheduler "cluster" (35 of 955)
2025-12-13T00:21:44.245436633+00:00 stderr F I1213 00:21:44.245361       1 sync_worker.go:1014] Done syncing for configmap "openshift-machine-config-operator/coreos-bootimages" (687 of 955)
2025-12-13T00:21:44.245436633+00:00 stderr F I1213 00:21:44.245393       1 task_graph.go:481] Running 74 on worker 1
2025-12-13T00:21:44.245436633+00:00 stderr F I1213 00:21:44.245405       1 sync_worker.go:982] Skipping precreation of clusteroperator "insights" (685 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245436633+00:00 stderr F I1213 00:21:44.245418       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245436633+00:00 stderr F I1213 00:21:44.245425       1 sync_worker.go:999] Running sync for namespace "openshift-insights" (658 of 955)
2025-12-13T00:21:44.245479814+00:00 stderr F I1213 00:21:44.245435       1 sync_worker.go:1002] Skipping namespace "openshift-insights" (658 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245479814+00:00 stderr F I1213 00:21:44.245440       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245479814+00:00 stderr F I1213 00:21:44.245444       1 sync_worker.go:999] Running sync for clusterrolebinding "insights-operator-auth" (659 of 955)
2025-12-13T00:21:44.245479814+00:00 stderr F I1213 00:21:44.245455       1 sync_worker.go:1002] Skipping clusterrolebinding "insights-operator-auth" (659 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245479814+00:00 stderr F I1213 00:21:44.245459       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245479814+00:00 stderr F I1213 00:21:44.245464       1 sync_worker.go:999] Running sync for rolebinding "kube-system/insights-operator-auth" (660 of 955)
2025-12-13T00:21:44.245479814+00:00 stderr F I1213 00:21:44.245472       1 sync_worker.go:1002] Skipping rolebinding "kube-system/insights-operator-auth" (660 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245479814+00:00 stderr F I1213 00:21:44.245477       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245504594+00:00 stderr F I1213 00:21:44.245481       1 sync_worker.go:999] Running sync for clusterrole "insights-operator" (661 of 955)
2025-12-13T00:21:44.245504594+00:00 stderr F I1213 00:21:44.245491       1 sync_worker.go:1002] Skipping clusterrole "insights-operator" (661 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245504594+00:00 stderr F I1213 00:21:44.245496       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245504594+00:00 stderr F I1213 00:21:44.245500       1 sync_worker.go:999] Running sync for rolebinding "openshift-monitoring/insights-operator-alertmanager" (662 of 955)
2025-12-13T00:21:44.245516875+00:00 stderr F I1213 00:21:44.245509       1 sync_worker.go:1002] Skipping rolebinding "openshift-monitoring/insights-operator-alertmanager" (662 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245516875+00:00 stderr F I1213 00:21:44.245514       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245527385+00:00 stderr F I1213 00:21:44.245518       1 sync_worker.go:999] Running sync for clusterrolebinding "insights-operator" (663 of 955)
2025-12-13T00:21:44.245537425+00:00 stderr F I1213 00:21:44.245527       1 sync_worker.go:1002] Skipping clusterrolebinding "insights-operator" (663 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245537425+00:00 stderr F I1213 00:21:44.245531       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245547845+00:00 stderr F I1213 00:21:44.245536       1 sync_worker.go:999] Running sync for clusterrole "insights-operator-gather" (664 of 955)
2025-12-13T00:21:44.245559596+00:00 stderr F I1213 00:21:44.245545       1 sync_worker.go:1002] Skipping clusterrole "insights-operator-gather" (664 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245559596+00:00 stderr F I1213 00:21:44.245550       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245559596+00:00 stderr F I1213 00:21:44.245554       1 sync_worker.go:999] Running sync for clusterrolebinding "insights-operator-gather" (665 of 955)
2025-12-13T00:21:44.245573996+00:00 stderr F I1213 00:21:44.245563       1 sync_worker.go:1002] Skipping clusterrolebinding "insights-operator-gather" (665 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245573996+00:00 stderr F I1213 00:21:44.245568       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245587336+00:00 stderr F I1213 00:21:44.245572       1 sync_worker.go:999] Running sync for clusterrolebinding "insights-operator-gather-reader" (666 of 955)
2025-12-13T00:21:44.245587336+00:00 stderr F I1213 00:21:44.245581       1 sync_worker.go:1002] Skipping clusterrolebinding "insights-operator-gather-reader" (666 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245599987+00:00 stderr F I1213 00:21:44.245586       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245599987+00:00 stderr F I1213 00:21:44.245590       1 sync_worker.go:999] Running sync for role "openshift-config/insights-operator" (667 of 955)
2025-12-13T00:21:44.245613177+00:00 stderr F I1213 00:21:44.245599       1 sync_worker.go:1002] Skipping role "openshift-config/insights-operator" (667 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245613177+00:00 stderr F I1213 00:21:44.245603       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245613177+00:00 stderr F I1213 00:21:44.245608       1 sync_worker.go:999] Running sync for rolebinding "openshift-config/insights-operator" (668 of 955)
2025-12-13T00:21:44.245634448+00:00 stderr F I1213 00:21:44.245615       1 sync_worker.go:1002] Skipping rolebinding "openshift-config/insights-operator" (668 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245634448+00:00 stderr F I1213 00:21:44.245620       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245634448+00:00 stderr F I1213 00:21:44.245624       1 sync_worker.go:999] Running sync for role "openshift-insights/insights-operator" (669 of 955)
2025-12-13T00:21:44.245645898+00:00 stderr F I1213 00:21:44.245634       1 sync_worker.go:1002] Skipping role "openshift-insights/insights-operator" (669 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245645898+00:00 stderr F I1213 00:21:44.245639       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245656248+00:00 stderr F I1213 00:21:44.245644       1 sync_worker.go:999] Running sync for rolebinding "openshift-insights/insights-operator" (670 of 955)
2025-12-13T00:21:44.245666348+00:00 stderr F I1213 00:21:44.245653       1 sync_worker.go:1002] Skipping rolebinding "openshift-insights/insights-operator" (670 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245666348+00:00 stderr F I1213 00:21:44.245658       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245676739+00:00 stderr F I1213 00:21:44.245663       1 sync_worker.go:999] Running sync for role "openshift-insights/insights-operator-obfuscation-secret" (671 of 955)
2025-12-13T00:21:44.245676739+00:00 stderr F I1213 00:21:44.245671       1 sync_worker.go:1002] Skipping role "openshift-insights/insights-operator-obfuscation-secret" (671 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245687089+00:00 stderr F I1213 00:21:44.245675       1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245687089+00:00 stderr F I1213 00:21:44.245680       1 sync_worker.go:999] Running sync for rolebinding "openshift-insights/insights-operator-obfuscation-secret" (672 of 955)
2025-12-13T00:21:44.245697429+00:00 stderr F I1213 00:21:44.245688       1 sync_worker.go:1002] Skipping rolebinding "openshift-insights/insights-operator-obfuscation-secret" (672 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245697429+00:00 stderr F I1213 00:21:44.245692 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245707729+00:00 stderr F I1213 00:21:44.245696 1 sync_worker.go:999] Running sync for role "openshift-config-managed/insights-operator-etc-pki-entitlement" (673 of 955)
2025-12-13T00:21:44.245718710+00:00 stderr F I1213 00:21:44.245706 1 sync_worker.go:1002] Skipping role "openshift-config-managed/insights-operator-etc-pki-entitlement" (673 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245718710+00:00 stderr F I1213 00:21:44.245710 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245718710+00:00 stderr F I1213 00:21:44.245714 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-managed/insights-operator-etc-pki-entitlement" (674 of 955)
2025-12-13T00:21:44.245732590+00:00 stderr F I1213 00:21:44.245723 1 sync_worker.go:1002] Skipping rolebinding "openshift-config-managed/insights-operator-etc-pki-entitlement" (674 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245732590+00:00 stderr F I1213 00:21:44.245727 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245745820+00:00 stderr F I1213 00:21:44.245732 1 sync_worker.go:999] Running sync for customresourcedefinition "insightsoperators.operator.openshift.io" (675 of 955)
2025-12-13T00:21:44.245745820+00:00 stderr F I1213 00:21:44.245741 1 sync_worker.go:1002] Skipping customresourcedefinition "insightsoperators.operator.openshift.io" (675 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245760141+00:00 stderr F I1213 00:21:44.245746 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245760141+00:00 stderr F I1213 00:21:44.245750 1 sync_worker.go:999] Running sync for serviceaccount "openshift-insights/operator" (676 of 955)
2025-12-13T00:21:44.245770481+00:00 stderr F I1213 00:21:44.245759 1 sync_worker.go:1002] Skipping serviceaccount "openshift-insights/operator" (676 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245770481+00:00 stderr F I1213 00:21:44.245763 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245780761+00:00 stderr F I1213 00:21:44.245767 1 sync_worker.go:999] Running sync for serviceaccount "openshift-insights/gather" (677 of 955)
2025-12-13T00:21:44.245780761+00:00 stderr F I1213 00:21:44.245776 1 sync_worker.go:1002] Skipping serviceaccount "openshift-insights/gather" (677 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245791081+00:00 stderr F I1213 00:21:44.245781 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245791081+00:00 stderr F I1213 00:21:44.245785 1 sync_worker.go:999] Running sync for insightsoperator "cluster" (678 of 955)
2025-12-13T00:21:44.245801342+00:00 stderr F I1213 00:21:44.245794 1 sync_worker.go:1002] Skipping insightsoperator "cluster" (678 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245801342+00:00 stderr F I1213 00:21:44.245799 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245811602+00:00 stderr F I1213 00:21:44.245803 1 sync_worker.go:999] Running sync for rolebinding "openshift-insights/prometheus-k8s" (679 of 955)
2025-12-13T00:21:44.245821492+00:00 stderr F I1213 00:21:44.245811 1 sync_worker.go:1002] Skipping rolebinding "openshift-insights/prometheus-k8s" (679 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245821492+00:00 stderr F I1213 00:21:44.245815 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245831792+00:00 stderr F I1213 00:21:44.245819 1 sync_worker.go:999] Running sync for configmap "openshift-insights/trusted-ca-bundle" (680 of 955)
2025-12-13T00:21:44.245831792+00:00 stderr F I1213 00:21:44.245828 1 sync_worker.go:1002] Skipping configmap "openshift-insights/trusted-ca-bundle" (680 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245842033+00:00 stderr F I1213 00:21:44.245832 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245842033+00:00 stderr F I1213 00:21:44.245837 1 sync_worker.go:999] Running sync for configmap "openshift-insights/service-ca-bundle" (681 of 955)
2025-12-13T00:21:44.245852293+00:00 stderr F I1213 00:21:44.245845 1 sync_worker.go:1002] Skipping configmap "openshift-insights/service-ca-bundle" (681 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245852293+00:00 stderr F I1213 00:21:44.245849 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245862483+00:00 stderr F I1213 00:21:44.245854 1 sync_worker.go:999] Running sync for role "openshift-insights/prometheus-k8s" (682 of 955)
2025-12-13T00:21:44.245872383+00:00 stderr F I1213 00:21:44.245862 1 sync_worker.go:1002] Skipping role "openshift-insights/prometheus-k8s" (682 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245872383+00:00 stderr F I1213 00:21:44.245866 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245882634+00:00 stderr F I1213 00:21:44.245870 1 sync_worker.go:999] Running sync for deployment "openshift-insights/insights-operator" (683 of 955)
2025-12-13T00:21:44.245882634+00:00 stderr F I1213 00:21:44.245879 1 sync_worker.go:1002] Skipping deployment "openshift-insights/insights-operator" (683 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245903024+00:00 stderr F I1213 00:21:44.245883 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245903024+00:00 stderr F I1213 00:21:44.245887 1 sync_worker.go:999] Running sync for service "openshift-insights/metrics" (684 of 955)
2025-12-13T00:21:44.245903024+00:00 stderr F I1213 00:21:44.245896 1 sync_worker.go:1002] Skipping service "openshift-insights/metrics" (684 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245903024+00:00 stderr F I1213 00:21:44.245901 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245915144+00:00 stderr F I1213 00:21:44.245905 1 sync_worker.go:999] Running sync for clusteroperator "insights" (685 of 955)
2025-12-13T00:21:44.245927665+00:00 stderr F I1213 00:21:44.245913 1 sync_worker.go:1002] Skipping clusteroperator "insights" (685 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245927665+00:00 stderr F I1213 00:21:44.245918 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.245927665+00:00 stderr F I1213 00:21:44.245922 1 sync_worker.go:999] Running sync for servicemonitor "openshift-insights/insights-operator" (686 of 955)
2025-12-13T00:21:44.245971146+00:00 stderr F I1213 00:21:44.245951 1 sync_worker.go:1002] Skipping servicemonitor "openshift-insights/insights-operator" (686 of 955): disabled capabilities: Insights
2025-12-13T00:21:44.245971146+00:00 stderr F I1213 00:21:44.245966 1 task_graph.go:481] Running 75 on worker 1
2025-12-13T00:21:44.295914258+00:00 stderr F I1213 00:21:44.295837 1 sync_worker.go:1014] Done syncing for scheduler "cluster" (35 of 955)
2025-12-13T00:21:44.295914258+00:00 stderr F I1213 00:21:44.295875 1 task_graph.go:481] Running 76 on worker 0
2025-12-13T00:21:44.351134609+00:00 stderr F I1213 00:21:44.351058 1 sync_worker.go:989] Precreated resource clusteroperator "network" (762 of 955)
2025-12-13T00:21:44.351134609+00:00 stderr F I1213 00:21:44.351096 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.351134609+00:00 stderr F I1213 00:21:44.351104 1 sync_worker.go:999] Running sync for namespace "openshift-network-operator" (755 of 955)
2025-12-13T00:21:44.399198494+00:00 stderr F I1213 00:21:44.399133 1 sync_worker.go:989] Precreated resource clusteroperator "openshift-apiserver" (503 of 955)
2025-12-13T00:21:44.399198494+00:00 stderr F I1213 00:21:44.399166 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.399198494+00:00 stderr F I1213 00:21:44.399172 1 sync_worker.go:999] Running sync for namespace "openshift-apiserver-operator" (495 of 955)
2025-12-13T00:21:44.444874512+00:00 stderr F I1213 00:21:44.444800 1 sync_worker.go:1014] Done syncing for namespace "openshift-network-operator" (755 of 955)
2025-12-13T00:21:44.444874512+00:00 stderr F I1213 00:21:44.444843 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.444874512+00:00 stderr F I1213 00:21:44.444851 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/openshift-network" (756 of 955)
2025-12-13T00:21:44.444926713+00:00 stderr F I1213 00:21:44.444868 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/openshift-network" (756 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.444926713+00:00 stderr F I1213 00:21:44.444876 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.444926713+00:00 stderr F I1213 00:21:44.444883 1 sync_worker.go:999] Running sync for customresourcedefinition "egressrouters.network.operator.openshift.io" (757 of 955)
2025-12-13T00:21:44.495677585+00:00 stderr F I1213 00:21:44.495608 1 sync_worker.go:1014] Done syncing for namespace "openshift-apiserver-operator" (495 of 955)
2025-12-13T00:21:44.495677585+00:00 stderr F I1213 00:21:44.495650 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.495677585+00:00 stderr F I1213 00:21:44.495658 1 sync_worker.go:999] Running sync for openshiftapiserver "cluster" (496 of 955)
2025-12-13T00:21:44.547608615+00:00 stderr F I1213 00:21:44.547537 1 sync_worker.go:1014] Done syncing for customresourcedefinition "egressrouters.network.operator.openshift.io" (757 of 955)
2025-12-13T00:21:44.547608615+00:00 stderr F I1213 00:21:44.547579 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.547608615+00:00 stderr F I1213 00:21:44.547587 1 sync_worker.go:999] Running sync for customresourcedefinition "operatorpkis.network.operator.openshift.io" (758 of 955)
2025-12-13T00:21:44.597230559+00:00 stderr F I1213 00:21:44.597149 1 sync_worker.go:1014] Done syncing for openshiftapiserver "cluster" (496 of 955)
2025-12-13T00:21:44.597230559+00:00 stderr F I1213 00:21:44.597183 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.597230559+00:00 stderr F I1213 00:21:44.597191 1 sync_worker.go:999] Running sync for configmap "openshift-apiserver-operator/openshift-apiserver-operator-config" (497 of 955)
2025-12-13T00:21:44.646569236+00:00 stderr F I1213 00:21:44.646476 1 sync_worker.go:1014] Done syncing for customresourcedefinition "operatorpkis.network.operator.openshift.io" (758 of 955)
2025-12-13T00:21:44.646569236+00:00 stderr F I1213 00:21:44.646521 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.646569236+00:00 stderr F I1213 00:21:44.646530 1 sync_worker.go:999] Running sync for serviceaccount "openshift-network-operator/cluster-network-operator" (759 of 955)
2025-12-13T00:21:44.696237972+00:00 stderr F I1213 00:21:44.696157 1 sync_worker.go:1014] Done syncing for configmap "openshift-apiserver-operator/openshift-apiserver-operator-config" (497 of 955)
2025-12-13T00:21:44.696237972+00:00 stderr F I1213 00:21:44.696210 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.696237972+00:00 stderr F I1213 00:21:44.696225 1 sync_worker.go:999] Running sync for configmap "openshift-apiserver-operator/trusted-ca-bundle" (498 of 955)
2025-12-13T00:21:44.745452135+00:00 stderr F I1213 00:21:44.745376 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-network-operator/cluster-network-operator" (759 of 955)
2025-12-13T00:21:44.745452135+00:00 stderr F I1213 00:21:44.745417 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.745452135+00:00 stderr F I1213 00:21:44.745424 1 sync_worker.go:999] Running sync for clusterrolebinding "cluster-network-operator" (760 of 955)
2025-12-13T00:21:44.801313573+00:00 stderr F I1213 00:21:44.801254 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:21:44.801522198+00:00 stderr F I1213 00:21:44.801490 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found)
2025-12-13T00:21:44.801522198+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures.
2025-12-13T00:21:44.801522198+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419
2025-12-13T00:21:44.801575479+00:00 stderr F I1213 00:21:44.801543 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (300.798µs)
2025-12-13T00:21:44.801575479+00:00 stderr F I1213 00:21:44.801563 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload
2025-12-13T00:21:44.801653201+00:00 stderr F I1213 00:21:44.801618 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}
2025-12-13T00:21:44.801691982+00:00 stderr F I1213 00:21:44.801665 1 sync_worker.go:479] Update work is equal to current target; no change required
2025-12-13T00:21:44.801740123+00:00 stderr F I1213 00:21:44.801677 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc00021fb00), Done:746, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}}
2025-12-13T00:21:44.802010630+00:00 stderr F I1213 00:21:44.801925 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""}
2025-12-13T00:21:44.806667025+00:00 stderr F I1213 00:21:44.806624 1 sync_worker.go:1014] Done syncing for configmap "openshift-apiserver-operator/trusted-ca-bundle" (498 of 955)
2025-12-13T00:21:44.806687836+00:00 stderr F I1213 00:21:44.806666 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.806687836+00:00 stderr F I1213 00:21:44.806676 1 sync_worker.go:999] Running sync for clusterrolebinding "system:openshift:operator:openshift-apiserver-operator" (499 of 955)
2025-12-13T00:21:44.825435458+00:00 stderr F W1213 00:21:44.825366 1 warnings.go:70] unknown field "spec.signatureStores"
2025-12-13T00:21:44.827701133+00:00 stderr F I1213 00:21:44.827653 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.082763ms)
2025-12-13T00:21:44.844772915+00:00 stderr F I1213 00:21:44.844725 1 sync_worker.go:1014] Done syncing for clusterrolebinding "cluster-network-operator" (760 of 955)
2025-12-13T00:21:44.844772915+00:00 stderr F I1213 00:21:44.844755 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.844772915+00:00 stderr F I1213 00:21:44.844763 1 sync_worker.go:999] Running sync for deployment "openshift-network-operator/network-operator" (761 of 955)
2025-12-13T00:21:44.895297692+00:00 stderr F I1213 00:21:44.895244 1 sync_worker.go:1014] Done syncing for clusterrolebinding "system:openshift:operator:openshift-apiserver-operator" (499 of 955)
2025-12-13T00:21:44.895297692+00:00 stderr F I1213 00:21:44.895279 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.895297692+00:00 stderr F I1213 00:21:44.895285 1 sync_worker.go:999] Running sync for serviceaccount "openshift-apiserver-operator/openshift-apiserver-operator" (500 of 955)
2025-12-13T00:21:44.945380216+00:00 stderr F I1213 00:21:44.945307 1 sync_worker.go:1014] Done syncing for deployment "openshift-network-operator/network-operator" (761 of 955)
2025-12-13T00:21:44.945380216+00:00 stderr F I1213 00:21:44.945343 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945380216+00:00 stderr F I1213 00:21:44.945349 1 sync_worker.go:999] Running sync for clusteroperator "network" (762 of 955)
2025-12-13T00:21:44.945543031+00:00 stderr F I1213 00:21:44.945505 1 sync_worker.go:1014] Done syncing for clusteroperator "network" (762 of 955)
2025-12-13T00:21:44.945543031+00:00 stderr F I1213 00:21:44.945527 1 task_graph.go:481] Running 77 on worker 1
2025-12-13T00:21:44.945543031+00:00 stderr F I1213 00:21:44.945537 1 sync_worker.go:982] Skipping precreation of clusteroperator "cloud-credential" (317 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945555411+00:00 stderr F I1213 00:21:44.945544 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945555411+00:00 stderr F I1213 00:21:44.945549 1 sync_worker.go:999] Running sync for clusterrole "system:openshift:cloud-credential-operator:cluster-reader" (303 of 955)
2025-12-13T00:21:44.945567211+00:00 stderr F I1213 00:21:44.945559 1 sync_worker.go:1002] Skipping clusterrole "system:openshift:cloud-credential-operator:cluster-reader" (303 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945574902+00:00 stderr F I1213 00:21:44.945565 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945582072+00:00 stderr F I1213 00:21:44.945571 1 sync_worker.go:999] Running sync for customresourcedefinition "cloudcredentials.operator.openshift.io" (304 of 955)
2025-12-13T00:21:44.945589692+00:00 stderr F I1213 00:21:44.945580 1 sync_worker.go:1002] Skipping customresourcedefinition "cloudcredentials.operator.openshift.io" (304 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945596852+00:00 stderr F I1213 00:21:44.945587 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945604052+00:00 stderr F I1213 00:21:44.945593 1 sync_worker.go:999] Running sync for clusterrolebinding "cloud-credential-operator-rolebinding" (305 of 955)
2025-12-13T00:21:44.945611143+00:00 stderr F I1213 00:21:44.945602 1 sync_worker.go:1002] Skipping clusterrolebinding "cloud-credential-operator-rolebinding" (305 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945611143+00:00 stderr F I1213 00:21:44.945607 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945618553+00:00 stderr F I1213 00:21:44.945611 1 sync_worker.go:999] Running sync for clusterrole "cloud-credential-operator-role" (306 of 955)
2025-12-13T00:21:44.945627013+00:00 stderr F I1213 00:21:44.945621 1 sync_worker.go:1002] Skipping clusterrole "cloud-credential-operator-role" (306 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945641903+00:00 stderr F I1213 00:21:44.945626 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945641903+00:00 stderr F I1213 00:21:44.945631 1 sync_worker.go:999] Running sync for rolebinding "openshift-config-managed/cloud-credential-operator" (307 of 955)
2025-12-13T00:21:44.945650063+00:00 stderr F I1213 00:21:44.945642 1 sync_worker.go:1002] Skipping rolebinding "openshift-config-managed/cloud-credential-operator" (307 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945663184+00:00 stderr F I1213 00:21:44.945649 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945663184+00:00 stderr F I1213 00:21:44.945655 1 sync_worker.go:999] Running sync for role "openshift-config-managed/cloud-credential-operator-role" (308 of 955)
2025-12-13T00:21:44.945675314+00:00 stderr F I1213 00:21:44.945666 1 sync_worker.go:1002] Skipping role "openshift-config-managed/cloud-credential-operator-role" (308 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945675314+00:00 stderr F I1213 00:21:44.945672 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945686654+00:00 stderr F I1213 00:21:44.945678 1 sync_worker.go:999] Running sync for cloudcredential "cluster" (309 of 955)
2025-12-13T00:21:44.945695395+00:00 stderr F I1213 00:21:44.945688 1 sync_worker.go:1002] Skipping cloudcredential "cluster" (309 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945703585+00:00 stderr F I1213 00:21:44.945694 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945703585+00:00 stderr F I1213 00:21:44.945698 1 sync_worker.go:999] Running sync for rolebinding "openshift-cloud-credential-operator/cloud-credential-operator" (310 of 955)
2025-12-13T00:21:44.945737086+00:00 stderr F I1213 00:21:44.945707 1 sync_worker.go:1002] Skipping rolebinding "openshift-cloud-credential-operator/cloud-credential-operator" (310 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945737086+00:00 stderr F I1213 00:21:44.945719 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945737086+00:00 stderr F I1213 00:21:44.945723 1 sync_worker.go:999] Running sync for role "openshift-cloud-credential-operator/cloud-credential-operator-role" (311 of 955)
2025-12-13T00:21:44.945737086+00:00 stderr F I1213 00:21:44.945731 1 sync_worker.go:1002] Skipping role "openshift-cloud-credential-operator/cloud-credential-operator-role" (311 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945747456+00:00 stderr F I1213 00:21:44.945738 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945747456+00:00 stderr F I1213 00:21:44.945743 1 sync_worker.go:999] Running sync for service "openshift-cloud-credential-operator/controller-manager-service" (312 of 955)
2025-12-13T00:21:44.945756906+00:00 stderr F I1213 00:21:44.945751 1 sync_worker.go:1002] Skipping service "openshift-cloud-credential-operator/controller-manager-service" (312 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945763936+00:00 stderr F I1213 00:21:44.945756 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945770976+00:00 stderr F I1213 00:21:44.945761 1 sync_worker.go:999] Running sync for service "openshift-cloud-credential-operator/cco-metrics" (313 of 955)
2025-12-13T00:21:44.945778077+00:00 stderr F I1213 00:21:44.945770 1 sync_worker.go:1002] Skipping service "openshift-cloud-credential-operator/cco-metrics" (313 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945778077+00:00 stderr F I1213 00:21:44.945775 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945812697+00:00 stderr F I1213 00:21:44.945781 1 sync_worker.go:999] Running sync for configmap "openshift-cloud-credential-operator/cco-trusted-ca" (314 of 955)
2025-12-13T00:21:44.945812697+00:00 stderr F I1213 00:21:44.945799 1 sync_worker.go:1002] Skipping configmap "openshift-cloud-credential-operator/cco-trusted-ca" (314 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945812697+00:00 stderr F I1213 00:21:44.945806 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945825138+00:00 stderr F I1213 00:21:44.945812 1 sync_worker.go:999] Running sync for serviceaccount "openshift-cloud-credential-operator/cloud-credential-operator" (315 of 955)
2025-12-13T00:21:44.945861129+00:00 stderr F I1213 00:21:44.945834 1 sync_worker.go:1002] Skipping serviceaccount "openshift-cloud-credential-operator/cloud-credential-operator" (315 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945861129+00:00 stderr F I1213 00:21:44.945849 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945861129+00:00 stderr F I1213 00:21:44.945855 1 sync_worker.go:999] Running sync for deployment "openshift-cloud-credential-operator/cloud-credential-operator" (316 of 955)
2025-12-13T00:21:44.945870779+00:00 stderr F I1213 00:21:44.945864 1 sync_worker.go:1002] Skipping deployment "openshift-cloud-credential-operator/cloud-credential-operator" (316 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945877949+00:00 stderr F I1213 00:21:44.945870 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945885009+00:00 stderr F I1213 00:21:44.945875 1 sync_worker.go:999] Running sync for clusteroperator "cloud-credential" (317 of 955)
2025-12-13T00:21:44.945892129+00:00 stderr F I1213 00:21:44.945883 1 sync_worker.go:1002] Skipping clusteroperator "cloud-credential" (317 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945892129+00:00 stderr F I1213 00:21:44.945888 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945901040+00:00 stderr F I1213 00:21:44.945893 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/cloud-credential-operator-gcp-ro-creds" (318 of 955)
2025-12-13T00:21:44.945908100+00:00 stderr F I1213 00:21:44.945902 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/cloud-credential-operator-gcp-ro-creds" (318 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945915140+00:00 stderr F I1213 00:21:44.945907 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945922210+00:00 stderr F I1213 00:21:44.945911 1 sync_worker.go:999] Running sync for credentialsrequest "openshift-cloud-credential-operator/cloud-credential-operator-iam-ro" (319 of 955)
2025-12-13T00:21:44.945945641+00:00 stderr F I1213 00:21:44.945921 1 sync_worker.go:1002] Skipping credentialsrequest "openshift-cloud-credential-operator/cloud-credential-operator-iam-ro" (319 of 955): disabled capabilities: CloudCredential
2025-12-13T00:21:44.945957431+00:00 stderr F I1213 00:21:44.945949 1 task_graph.go:481] Running 78 on worker 1
2025-12-13T00:21:44.945964511+00:00 stderr F I1213 00:21:44.945956 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:44.945997872+00:00 stderr F I1213 00:21:44.945971 1 sync_worker.go:999] Running sync for customresourcedefinition "kubecontrollermanagers.operator.openshift.io" (167 of 955)
2025-12-13T00:21:45.002005603+00:00 stderr F I1213 00:21:45.001274 1 sync_worker.go:1014] Done syncing for serviceaccount "openshift-apiserver-operator/openshift-apiserver-operator" (500 of 955)
2025-12-13T00:21:45.002005603+00:00 stderr F I1213 00:21:45.001325 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:45.002005603+00:00 stderr F I1213 00:21:45.001336 1 sync_worker.go:999] Running sync for service "openshift-apiserver-operator/metrics" (501 of 955)
2025-12-13T00:21:45.047184398+00:00 stderr F I1213 00:21:45.046670 1 sync_worker.go:1014] Done syncing for customresourcedefinition "kubecontrollermanagers.operator.openshift.io" (167 of 955)
2025-12-13T00:21:45.047184398+00:00 stderr F I1213 00:21:45.046708 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:45.047184398+00:00 stderr F I1213 00:21:45.046713 1 sync_worker.go:999] Running sync for customresourcedefinition "kubecontrollermanagers.operator.openshift.io" (168 of 955)
2025-12-13T00:21:45.095541900+00:00 stderr F I1213 00:21:45.095456 1 sync_worker.go:1014] Done syncing for service "openshift-apiserver-operator/metrics" (501 of 955)
2025-12-13T00:21:45.095541900+00:00 stderr F I1213 00:21:45.095496 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:45.095541900+00:00 stderr F I1213 00:21:45.095502 1 sync_worker.go:999] Running sync for deployment "openshift-apiserver-operator/openshift-apiserver-operator" (502 of 955)
2025-12-13T00:21:45.145973734+00:00 stderr F I1213 00:21:45.145896 1 sync_worker.go:1014] Done syncing for customresourcedefinition "kubecontrollermanagers.operator.openshift.io" (168 of 955)
2025-12-13T00:21:45.195181368+00:00 stderr F I1213 00:21:45.195100 1 sync_worker.go:1014] Done syncing for deployment "openshift-apiserver-operator/openshift-apiserver-operator" (502 of 955)
2025-12-13T00:21:45.195181368+00:00 stderr F I1213 00:21:45.195130 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:45.195181368+00:00 stderr F I1213 00:21:45.195136 1 sync_worker.go:999] Running sync for clusteroperator "openshift-apiserver" (503 of 955)
2025-12-13T00:21:45.195325682+00:00 stderr F I1213 00:21:45.195284 1 sync_worker.go:1014] Done syncing for clusteroperator "openshift-apiserver" (503 of 955)
2025-12-13T00:21:45.195325682+00:00 stderr F I1213 00:21:45.195298 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:45.195325682+00:00 stderr F I1213 00:21:45.195303 1 sync_worker.go:999] Running sync for flowschema "openshift-apiserver-sar" (504 of 955)
2025-12-13T00:21:45.248362160+00:00 stderr F I1213 00:21:45.248239 1 sync_worker.go:1014] Done syncing for flowschema "openshift-apiserver-sar" (504 of 955)
2025-12-13T00:21:45.248362160+00:00 stderr F I1213 00:21:45.248274 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:45.248362160+00:00 stderr F I1213 00:21:45.248280 1 sync_worker.go:999] Running sync for flowschema "openshift-apiserver" (505 of 955)
2025-12-13T00:21:45.295578423+00:00 stderr F I1213 00:21:45.295441 1 sync_worker.go:1014] Done syncing for flowschema "openshift-apiserver" (505 of 955)
2025-12-13T00:21:45.295578423+00:00 stderr F I1213 00:21:45.295507 1 sync_worker.go:703] Dropping status report from earlier in sync loop
2025-12-13T00:21:45.295578423+00:00 stderr F I1213 00:21:45.295514 1 sync_worker.go:999] Running sync for flowschema "openshift-apiserver-operator" (506 of 955)
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347198 1 sync_worker.go:1014] Done syncing for flowschema "openshift-apiserver-operator" (506 of 955)
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347265 1 task_graph.go:478] Canceled worker 1 while waiting for work
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347282 1 task_graph.go:478] Canceled worker 0 while waiting for work
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347349 1 task_graph.go:527] Workers finished
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347371 1 task_graph.go:550] Result of work: [Could not update role "openshift-apiserver-operator/prometheus-k8s" (936 of 955) Could not update customresourcedefinition "consoles.operator.openshift.io" (580 of 955) Could not update imagestream "openshift/driver-toolkit" (657 of 955): the server is down or not responding Cluster operator machine-config is degraded Could not update imagestream "openshift/cli" (537 of 955): the server is down or not responding Cluster operator authentication is not available]
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347398 1 sync_worker.go:1167] Summarizing 6 errors
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347412 1 sync_worker.go:1171] Update error 936 of 955: UpdatePayloadFailed Could not update role "openshift-apiserver-operator/prometheus-k8s" (936 of 955) (*url.Error: Get "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-apiserver-operator/roles/prometheus-k8s": dial tcp 38.102.83.51:6443: connect: connection refused)
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347438 1 sync_worker.go:1171] Update error 580 of 955: UpdatePayloadFailed Could not update customresourcedefinition "consoles.operator.openshift.io" (580 of 955) (*url.Error: Get "https://api-int.crc.testing:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/consoles.operator.openshift.io": dial tcp 38.102.83.51:6443: connect: connection refused)
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347454 1 sync_worker.go:1171] Update error 657 of 955: UpdatePayloadClusterDown Could not update imagestream "openshift/driver-toolkit" (657 of 955): the server is down or not responding (*errors.StatusError: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/driver-toolkit\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io driver-toolkit))
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347466 1 sync_worker.go:1171] Update error 799 of 955: ClusterOperatorDegraded Cluster operator machine-config is degraded (*errors.errorString: cluster operator machine-config is Degraded=True: RequiredPoolsFailed, Failed to resync 4.16.0 because: error required MachineConfigPool master is paused and cannot sync until it is unpaused)
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347478 1 sync_worker.go:1171] Update error 537 of 955: UpdatePayloadClusterDown Could not update imagestream "openshift/cli" (537 of 955): the server is down or not responding (*errors.StatusError: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/cli\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io cli))
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347488 1 sync_worker.go:1171] Update error 330 of 955: ClusterOperatorNotAvailable Cluster operator authentication is not available (*errors.errorString: cluster operator authentication is Available=False: WellKnown_NotReady: WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly))
2025-12-13T00:21:45.348008107+00:00 stderr F E1213 00:21:45.347538 1 sync_worker.go:649] unable to synchronize image (waiting 43.12912638s): Multiple errors are preventing progress:
2025-12-13T00:21:45.348008107+00:00 stderr F * Cluster operator authentication is not available
2025-12-13T00:21:45.348008107+00:00 stderr F * Cluster operator machine-config is degraded
2025-12-13T00:21:45.348008107+00:00 stderr F * Could not update customresourcedefinition "consoles.operator.openshift.io" (580 of 955)
2025-12-13T00:21:45.348008107+00:00 stderr F * Could not update imagestream "openshift/cli" (537 of 955): the server is down or not responding
2025-12-13T00:21:45.348008107+00:00 stderr F * Could not update imagestream "openshift/driver-toolkit" (657 of 955): the server is down or not responding
2025-12-13T00:21:45.348008107+00:00 stderr F * Could not update role "openshift-apiserver-operator/prometheus-k8s" (936 of 955)
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347551 1 sync_worker.go:584] Cluster operator authentication changed Available from "True" to "False"
2025-12-13T00:21:45.348008107+00:00 stderr F I1213 00:21:45.347570 1 sync_worker.go:592] No change, waiting
2025-12-13T00:21:45.802847976+00:00 stderr F I1213 00:21:45.801783 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version"
2025-12-13T00:21:45.802847976+00:00 stderr F I1213 00:21:45.802514 1 availableupdates.go:145] Requeue
available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:45.802847976+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:45.802847976+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:45.802847976+00:00 stderr F I1213 00:21:45.802574 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (803.91µs) 2025-12-13T00:21:45.802847976+00:00 stderr F I1213 00:21:45.802591 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:45.802847976+00:00 stderr F I1213 00:21:45.802638 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:45.802847976+00:00 stderr F I1213 00:21:45.802687 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:45.802847976+00:00 stderr F I1213 00:21:45.802695 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:45.803155844+00:00 stderr F I1213 00:21:45.803038 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:45.824909780+00:00 stderr F W1213 00:21:45.824850 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:45.826536440+00:00 stderr F I1213 00:21:45.826491 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (23.89727ms) 2025-12-13T00:21:45.833655176+00:00 stderr F I1213 00:21:45.833618 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:45.833655176+00:00 stderr F I1213 00:21:45.833630 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:45.833758989+00:00 stderr F I1213 00:21:45.833618 1 cvo.go:745] Started syncing upgradeable "openshift-cluster-version/version" 2025-12-13T00:21:45.833799040+00:00 stderr F I1213 00:21:45.833783 1 upgradeable.go:69] Upgradeability last checked 33.61275462s ago, will not re-check until 2025-12-13T00:23:12Z 2025-12-13T00:21:45.833799040+00:00 stderr F I1213 00:21:45.833792 1 cvo.go:747] Finished syncing upgradeable "openshift-cluster-version/version" (190.194µs) 2025-12-13T00:21:45.833893752+00:00 stderr F I1213 00:21:45.833876 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:45.833893752+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:45.833893752+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:45.833965714+00:00 stderr F I1213 00:21:45.833914 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:45.833977044+00:00 stderr F I1213 00:21:45.833965 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (334.538µs) 2025-12-13T00:21:45.834014045+00:00 stderr F I1213 00:21:45.834000 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:45.834073346+00:00 stderr F I1213 00:21:45.834012 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, 
CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:45.834627600+00:00 stderr F I1213 00:21:45.834305 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:45.859442102+00:00 stderr F W1213 00:21:45.858632 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:45.860160620+00:00 stderr F I1213 00:21:45.860118 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.506163ms) 2025-12-13T00:21:45.860160620+00:00 stderr F I1213 00:21:45.860141 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:45.860227081+00:00 stderr F I1213 00:21:45.860190 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", 
Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:45.860238111+00:00 stderr F I1213 00:21:45.860230 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:45.860299443+00:00 stderr F I1213 00:21:45.860236 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", 
"OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:45.860542749+00:00 stderr F I1213 00:21:45.860494 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:45.883520645+00:00 stderr F W1213 00:21:45.883457 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:45.886657033+00:00 stderr F I1213 00:21:45.886599 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.451912ms) 2025-12-13T00:21:46.803881147+00:00 stderr F I1213 00:21:46.803184 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:46.803881147+00:00 stderr F I1213 00:21:46.803583 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:46.803881147+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:46.803881147+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:46.803881147+00:00 stderr F I1213 00:21:46.803648 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (478.472µs) 2025-12-13T00:21:46.803881147+00:00 stderr F I1213 00:21:46.803663 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:46.803881147+00:00 stderr F I1213 00:21:46.803717 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:46.803881147+00:00 stderr F I1213 00:21:46.803771 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:46.803881147+00:00 stderr F I1213 00:21:46.803780 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:46.804346188+00:00 stderr F I1213 00:21:46.804155 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:46.829273223+00:00 stderr F W1213 00:21:46.829217 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:46.832485202+00:00 stderr F I1213 00:21:46.832273 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (28.604176ms) 2025-12-13T00:21:47.804458726+00:00 stderr F I1213 00:21:47.804375 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:47.804748733+00:00 stderr F I1213 00:21:47.804701 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:47.804748733+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:47.804748733+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:47.804851245+00:00 stderr F I1213 00:21:47.804791 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (429.341µs) 2025-12-13T00:21:47.804851245+00:00 stderr F I1213 00:21:47.804817 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:47.804917737+00:00 stderr F I1213 00:21:47.804877 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:47.805034590+00:00 stderr F I1213 00:21:47.804968 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:47.805116302+00:00 stderr F I1213 00:21:47.805014 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:47.805478090+00:00 stderr F I1213 00:21:47.805367 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:47.837289946+00:00 stderr F W1213 00:21:47.837225 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:47.839106910+00:00 stderr F I1213 00:21:47.839009 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (34.191803ms) 2025-12-13T00:21:48.805323972+00:00 stderr F I1213 00:21:48.805205 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:48.805706551+00:00 stderr F I1213 00:21:48.805642 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:48.805706551+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:48.805706551+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:48.805862506+00:00 stderr F I1213 00:21:48.805774 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (604.325µs) 2025-12-13T00:21:48.805862506+00:00 stderr F I1213 00:21:48.805822 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:48.806027850+00:00 stderr F I1213 00:21:48.805915 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:48.806099472+00:00 stderr F I1213 00:21:48.806068 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:48.806216124+00:00 stderr F I1213 00:21:48.806086 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), 
Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:48.806637285+00:00 stderr F I1213 00:21:48.806567 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:48.853060029+00:00 stderr F W1213 00:21:48.852919 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:48.856225318+00:00 stderr F I1213 00:21:48.856158 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (50.33205ms) 2025-12-13T00:21:49.806256631+00:00 stderr F I1213 00:21:49.806121 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:49.806421325+00:00 stderr F I1213 00:21:49.806370 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:49.806421325+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:49.806421325+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:49.806465366+00:00 stderr F I1213 00:21:49.806436 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (335.878µs) 2025-12-13T00:21:49.806465366+00:00 stderr F I1213 00:21:49.806453 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:49.806551648+00:00 stderr F I1213 00:21:49.806501 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:49.806565018+00:00 stderr F I1213 00:21:49.806558 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:49.806647870+00:00 stderr F I1213 00:21:49.806565 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:49.806857485+00:00 stderr F I1213 00:21:49.806803 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:49.830745244+00:00 stderr F W1213 00:21:49.830677 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:49.832700752+00:00 stderr F I1213 00:21:49.832653 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (26.196166ms) 2025-12-13T00:21:50.807272230+00:00 stderr F I1213 00:21:50.807206 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:50.807721321+00:00 stderr F I1213 00:21:50.807472 1 
availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:50.807721321+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:50.807721321+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:50.807721321+00:00 stderr F I1213 00:21:50.807525 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (339.678µs) 2025-12-13T00:21:50.807721321+00:00 stderr F I1213 00:21:50.807539 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:50.807721321+00:00 stderr F I1213 00:21:50.807585 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:50.807721321+00:00 stderr F I1213 00:21:50.807633 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:50.807721321+00:00 stderr F I1213 00:21:50.807644 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:50.807983817+00:00 stderr F I1213 00:21:50.807928 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:50.832798610+00:00 stderr F W1213 00:21:50.832737 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:50.834760748+00:00 stderr F I1213 00:21:50.834727 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (27.18307ms) 2025-12-13T00:21:51.807980722+00:00 stderr F I1213 00:21:51.807908 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:51.808185167+00:00 stderr F I1213 00:21:51.808150 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:51.808185167+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:51.808185167+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:51.808231698+00:00 stderr F I1213 00:21:51.808208 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (306.487µs) 2025-12-13T00:21:51.808285209+00:00 stderr F I1213 00:21:51.808261 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:51.808328530+00:00 stderr F I1213 00:21:51.808302 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:51.808360231+00:00 stderr F I1213 00:21:51.808342 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:51.808407762+00:00 stderr F I1213 00:21:51.808351 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), 
Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:51.808622528+00:00 stderr F I1213 00:21:51.808585 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:51.836659989+00:00 stderr F W1213 00:21:51.836607 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:51.839697115+00:00 stderr F I1213 00:21:51.839653 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.386014ms) 2025-12-13T00:21:52.808485319+00:00 stderr F I1213 00:21:52.808379 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:52.808923840+00:00 stderr F I1213 00:21:52.808831 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:52.808923840+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:52.808923840+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:52.809007833+00:00 stderr F I1213 00:21:52.808918 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (570.574µs) 2025-12-13T00:21:52.809007833+00:00 stderr F I1213 00:21:52.808971 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:52.809163766+00:00 stderr F I1213 00:21:52.809093 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:52.809186777+00:00 stderr F I1213 00:21:52.809171 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:52.809285199+00:00 stderr F I1213 00:21:52.809181 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:52.809616547+00:00 stderr F I1213 00:21:52.809564 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:52.837288140+00:00 stderr F W1213 00:21:52.837219 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:52.839219547+00:00 stderr F I1213 00:21:52.839182 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.210575ms) 2025-12-13T00:21:53.141330619+00:00 stderr F I1213 00:21:53.141271 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:53.141375520+00:00 stderr F I1213 
00:21:53.141352 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:53.141677177+00:00 stderr F I1213 00:21:53.141416 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:53.141677177+00:00 stderr F I1213 00:21:53.141427 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", 
"CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:53.141726819+00:00 stderr F I1213 00:21:53.141690 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:53.193023424+00:00 stderr F W1213 00:21:53.192447 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:53.196972001+00:00 stderr F I1213 00:21:53.194624 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (53.357906ms) 2025-12-13T00:21:53.809199743+00:00 stderr F I1213 00:21:53.809106 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:53.809258224+00:00 stderr F I1213 00:21:53.809216 1 availableupdates.go:70] Retrieving available updates again, because more than 2m52.516505533s has elapsed since 2025-12-13T00:19:00Z 2025-12-13T00:21:53.811660372+00:00 stderr F I1213 00:21:53.811544 1 cincinnati.go:114] Using a root CA pool with 0 root CA subjects to request updates from https://api.openshift.com/api/upgrades_info/v1/graph?arch=amd64&channel=stable-4.16&id=a84dabf3-edcf-4828-b6a1-f9d3a6f02304&version=4.16.0 2025-12-13T00:21:54.395659267+00:00 stderr F I1213 00:21:54.395588 1 availableupdates.go:145] 
Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:54.395659267+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:54.395659267+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:54.395809010+00:00 stderr F I1213 00:21:54.395775 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (586.67989ms) 2025-12-13T00:21:54.395809010+00:00 stderr F I1213 00:21:54.395793 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:54.395879662+00:00 stderr F I1213 00:21:54.395845 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:54.395909953+00:00 stderr F I1213 00:21:54.395893 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:54.396015815+00:00 stderr F I1213 00:21:54.395902 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, 
loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:54.396236561+00:00 stderr F I1213 00:21:54.396193 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:54.425244316+00:00 stderr F W1213 00:21:54.425173 1 
warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:54.426645921+00:00 stderr F I1213 00:21:54.426601 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (30.80597ms) 2025-12-13T00:21:55.397492007+00:00 stderr F I1213 00:21:55.396982 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:55.397856466+00:00 stderr F I1213 00:21:55.397797 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:55.397856466+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 2025-12-13T00:21:55.397856466+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:55.397963459+00:00 stderr F I1213 00:21:55.397891 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (921.514µs) 2025-12-13T00:21:55.397963459+00:00 stderr F I1213 00:21:55.397919 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:55.398060791+00:00 stderr F I1213 00:21:55.398003 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:55.398116642+00:00 stderr F I1213 00:21:55.398076 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:55.398187134+00:00 stderr F I1213 00:21:55.398092 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), 
Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:55.398599134+00:00 stderr F I1213 00:21:55.398509 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", 
Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:55.426772069+00:00 stderr F W1213 00:21:55.426702 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:55.429392304+00:00 stderr F I1213 00:21:55.429328 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (31.406874ms) 2025-12-13T00:21:56.553070149+00:00 stderr F I1213 00:21:56.552830 1 cvo.go:726] Started syncing available updates "openshift-cluster-version/version" 2025-12-13T00:21:56.553184962+00:00 stderr F I1213 00:21:56.553117 1 availableupdates.go:145] Requeue available-update evaluation, because "4.16.46" is Recommended=Unknown: EvaluationFailed: Could not evaluate exposure to update risk NMStateServiceFailure (failure determine thanos IP: services "thanos-querier" not found) 2025-12-13T00:21:56.553184962+00:00 stderr F NMStateServiceFailure description: The NMState service can fail on baremetal cluster nodes, causing node scaleups and re-deployment failures. 
2025-12-13T00:21:56.553184962+00:00 stderr F NMStateServiceFailure URL: https://issues.redhat.com/browse/CORENET-6419 2025-12-13T00:21:56.553203382+00:00 stderr F I1213 00:21:56.553181 1 cvo.go:728] Finished syncing available updates "openshift-cluster-version/version" (362.479µs) 2025-12-13T00:21:56.553203382+00:00 stderr F I1213 00:21:56.553198 1 cvo.go:657] Started syncing cluster version "openshift-cluster-version/version", spec changes, status, and payload 2025-12-13T00:21:56.554596587+00:00 stderr F I1213 00:21:56.553249 1 cvo.go:689] Desired version from operator is v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false} 2025-12-13T00:21:56.554596587+00:00 stderr F I1213 00:21:56.553304 1 sync_worker.go:479] Update work is equal to current target; no change required 2025-12-13T00:21:56.554596587+00:00 stderr F I1213 00:21:56.553312 1 status.go:185] Synchronizing status errs=field.ErrorList(nil) status=&cvo.SyncWorkerStatus{Generation:4, Failure:(*payload.UpdateError)(0xc0034c2a20), Done:643, Total:955, Completed:0, Reconciling:true, Initial:false, VersionHash:"6WUw5aCbcO4=", Architecture:"amd64", LastProgress:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Actual:v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)}, Verified:false, loadPayloadStatus:cvo.LoadPayloadStatus{Step:"PayloadLoaded", Message:"Payload loaded version=\"4.16.0\" image=\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\" architecture=\"amd64\"", AcceptedRisks:"", Failure:error(nil), Update:v1.Update{Architecture:"", Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Force:false}, 
Verified:false, Local:true, LastTransitionTime:time.Time{wall:0xc24749f20860ad12, ext:304990671332, loc:(*time.Location)(0x31714e0)}}, CapabilitiesStatus:cvo.CapabilityStatus{Status:v1.ClusterVersionCapabilitiesStatus{EnabledCapabilities:[]v1.ClusterVersionCapability{"Build", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "MachineAPI", "OperatorLifecycleManager", "marketplace", "openshift-samples"}, KnownCapabilities:[]v1.ClusterVersionCapability{"Build", "CSISnapshot", "CloudControllerManager", "CloudCredential", "Console", "DeploymentConfig", "ImageRegistry", "Ingress", "Insights", "MachineAPI", "NodeTuning", "OperatorLifecycleManager", "Storage", "baremetal", "marketplace", "openshift-samples"}}, ImplicitlyEnabledCaps:[]v1.ClusterVersionCapability(nil)}} 2025-12-13T00:21:56.554596587+00:00 stderr F I1213 00:21:56.553644 1 status.go:100] merge into existing history completed=false desired=v1.Release{Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", URL:"https://access.redhat.com/errata/RHSA-2024:0041", Channels:[]string(nil)} last=&v1.UpdateHistory{State:"Completed", StartedTime:time.Date(2024, time.June, 26, 12, 38, 59, 0, time.Local), CompletionTime:time.Date(2024, time.June, 26, 12, 58, 10, 0, time.Local), Version:"4.16.0", Image:"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69", Verified:false, AcceptedRisks:""} 2025-12-13T00:21:56.590069642+00:00 stderr F W1213 00:21:56.587921 1 warnings.go:70] unknown field "spec.signatureStores" 2025-12-13T00:21:56.592025780+00:00 stderr F I1213 00:21:56.591091 1 cvo.go:659] Finished syncing cluster version "openshift-cluster-version/version" (37.884695ms)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator/0.log.gz
1]ü‡pxêà…x<ˆà †CÚ©iÉÊÌ´¯¦qyOÐÝö×AJ¥];l '{êsÕ«±‡õyõÛ¸OÂç ƒŽÀÒÉ_.zä?¸ÏÛ_/ÄçQ¤CÎÇUAN Ú£¦‡÷yÃt·GhBÚ¤/ÒçQ)H?}®iþÀnoYê“å=²ïµg!'y&PO£ÍŠüàÏIÿ…ø9.ì+ˆ‘ÓÏqFJëûïêzíbÑ%0åÔÄ~]ë +Æ‹8‰ómËòN#ž˜Ø7ë¸vt@”¶f/òú¶GÑ!N¤(¨Ù©c~X¾°½t<¤o¬‹=Bš!ôÿä]koÛH²ý+„¿$Dv¿ÆÍÅ&q2`çN¹3û)0(ŠŽy#‰ZQv6;Øÿ~«ùi=ÜÅnI60ÇR™ÝnãœcÛF ȳ·>ºO èó/ÁŸñ<¹ùa@/»˜b“A'‚¶/ƒ7ãñò›Üô‚QòÕܹ &ðöÁmx?ú9È’¯ÓÜt²àfžN‚{óÜÐ™Ý ÇI4øÿ€é݆‹à%ò=eòÍBŒ½¹zÏùÛ7ê ÿ€µ~O>hüáŠQ|Åñeð¨é—pñ*ø8΃—UGàaù)ø¯,ŽîæÐÍ¿O6zýïWÁ[¤Þºíý{%Þ|„¾}§Ñ{uõVSõF¡wo>súüþíÚ㇠ÄGŸýS0¾'€HtGß‚›tÞ|oàI‚̳‹,™À+Ã`I8®å“7)¾ÏÃÙÌÀ_ ã`Îò'=ÈcòC”öí/n°xœm´¦ù(§ƒAݽ³WÖ.”ááézj9“¡Þ/·ÃWÁM8éa,Ò Œ/–½‚a=‰øñW}ˆÒªäÝì8¸],fÙåÅE™` L6ý ï;Kò|Q#TÂp‘ÞÀ{Ã^Ôo]K]”R¯vm~’Ìç6Mã`ÕܳG›Üwì2V“f¶ç 3z¹Ê|÷çoAIÙYnŽeª¥€cqŽ`ÑáŒÆØŸùCÇâÌÌ¥š/KÑ»i>]WŸæQùñ¦ÐPnM³ fòwét”OL…̶¼FÙïäÀÛiZo_–ï݆SC2é Õ¼,&ôKð»yc?æ‹wõæƒB>¨“ŠQ’•ð¹œ÷>ò¿ÃÜvv¶üöÝ}ú3¸Ùk_À` ÐØÑHªUÊf†²—8<¯Õm˜âÝ'à=Lî&ËŽÆ|¤£€N¨4õ²€5¡Ù¶NR.ˆ5lÑÈ52ûUT ¬„IÞG°E¤w @;¿p|MaZ†¯âì|cۦدÈzÚæ;œeÛ’ˆ °úì¼94YΦ€ÁÝldT½é*V9r.ÊŸg{uÎÈ!±Ú9ÁåzçÊö—#·E×€Égq”£ä•±äÅ]Vxu¥Sµ­÷”+{”¢©£Œù´Œ¯÷þ®p sp÷•I*…´ƒJñšÆ…Ò_‚«bä.AÌ]Ê3®ï=>ÿ#×ý_oæàÉ™lÀ—g€e5°ÏŠñŸ|4³ÂåÙ?ïÂfBLœãâ"fÕLó·ì6$\€Bã›(bØs¤ÁìMòAãéhn£XŒTŒiL",9 C¡(±8Úú‚³ÿaÀ[ Іµž’‘k¤Ž7 º6Qç2¿€—“˜ÄˆÉ¿æfÀóæ-Ho–óEÎ ï3©ü¹Uß 7¹m#3dή¬=ƔԛëËQZÒ\¹ÐëÉ<3Ô"žÌÆá˜›»1økãzkw¨´Ç`›î4 i6,ÛôD“/ÁÕG…_U|.QbP{¹ø1‹_¿ø£þˆóœ'0˼€w³túúŘ¢^°ÉÀ_¿¸J2xXΦߧyƒYpŸ„kÔ‘VÊ|¦à¤—o‘Ÿ ×–* ÖÆ7Æç1&žLïÌ"éÅOÛpä0H­7 ÞRoÃ1ŸÉË·ÀÞC7«x]ŒÀm­KD5ÏÈ ¶uj4Ÿ}*hò²r”^Þ˜qXÜEÜúúJJøÏÞ@iÓ"—" ÿ¿Ã/q‹-¿-†ØÙÅÙ¾- G\ùMË-rP–»0¥Ç¿¬?äOºž\4¿Ë./¯ÀxÍËfmÿ¤1*ƒYRz™3ÈnÓ»ñ(˜¦ 0äê•GðïÅ÷8ž/¨vгË`s§ï{Àâ7(2‘™y£´Ð±Y…çLüÓîÈüë6ʪÏÁËQÞû`žwü<ÿñ`U,Ó¡æMïhbÛ óÑñQÕvÕõ­ô@¦µÏSmN]ÇPCsz.†E9çšD ¡I›¼,ÆìOˉ ?ó; °¢‚Âd$ØÙÞÝ”| i&¤i’Ývw‚‚—ä<¦Õ$ÛŠ%6•½X7s+ÙqQ9l—ËÅN.2[›¦J´˜- Ƶóé&'9Á¾ïÍo/Áçúmø ÊÏñM ÆÅå{ng'=p‚ê=·íð•b—gõï|¼Ê=¸7Ÿ>.¸ i.î1U«KIã|%ñxô)\ÜÂoÿù †Ðßeðâ̦͸¢:øàsihå¬ñ"(?ÈÓQ±Ù•o¢U„^Íh•SYì½öáSî­ÖÆaûÓV«™‹—*­&æãÕ BüX;q Õ8ÌZ®‹Õâê´0^š¨3L8ÝJ_+ޏ5K!ÈIÒåÜÕçUnñú4`Yø«úÊùÕÓ͹ÈÝæ°S‚slJ’ZÝs‚D§+ˆ»îç&¢sä[ŸI+Ä¥m;W2¤:åAÆyîysÐD}yXS†ËþR—¸ë앦¶€íæ K2†8§ÌŽ®Iêí Ýß¡ÿ¿Ýÿås^i¸s^a æ/iMImä:%á齘Švð?( É f‚Ù 
«"_Jö2…¸d;(j¬}§—SðœzS‚9ǽ›¹+Û9q&$·g0r*™<®[v‡ð6¬GRX'j.u§ò%›a.Á|‰6KÇI”xÈÁ°©ÑCY·@ZÀBÒ:Iƒm3æv/¹fÚCÿD&hA%%ØZ·äÀMv«`ŸK;׃«½”F”ib=ùWJwŠqܬ€dz3³Åü.*"]\sÚJ{¢3…9b [ËŽ™ÀßN)âÚíƒÉÚþ4HLƘ’ÖƒL£ÈÝòÂ_îR{Ø)çBck}c=nždÑm<ºûHTZ7u(¢Bdd…Xt«)e…ØWYa~"4ebÑ0ÂöñbR`¹P¦†r:hÚƒ 3‚É}i»£rBv\—Ç‹hd6?}ív4Ú:X®Ÿ‰òµÞ<9ÞuU¸ [/æëà,X›8FnGY£®Ü¿ìe£˜9Û«ÒQý(gç6Šc¢°Ï‹c­:c³¥>_^jGq~"Χ„hn²9,3mz¡pgÃf´WY«¸¹NÃ0y0Cúñ;ÉEæ˜Ù½9ü<½ù‡¶TÍ•V˜X÷N4¬Kx­Ã{¬ÁlžÞ'#S…Êt}ŽVofNKÃq6(CˆWË6™¢.µØ–úMõc>OÙ^(ËÔˆ©Ÿ·^åe=`ÍtÙk_úõp¤5kÈÑNù[臭lu ¢t2K§æ"È6ÿð]¾òû5œÙ*•5/¾\­…fLZÁÕJ‰½Á=)K¿gG¨Mñ3SÿÎcTãÏK_£Ù1ª?3Zš¥ßãù½?j€—´—³ËU¦ž—*î³Ùm쑜„¹D­——LÀ ígTôRðÛx8½®^í£BX—M GyÖ‹¯‹4RåÎÿ$œõê VŸE¥+^ì6Mz4CÁ²l€ºL‰›šàV\P< ¤¸?GþxjN”5“>Èm¹P¾3žßãámš~ôScÝèJ{þÅXƒ3@¬ÈbLú@–æå˜¯MÑólŸ¢ÌÇ2ÐÛÃ'é¡á«q; %’kdçP"xвëh’g=lbád®?˜û¤ÎJkd¹½’Ëa-úBue¯-üÞï&Ûþû:Ýñ$Œj¬-VšË‘Ná—mð„u|¯xî¿/ÐOÊ‘PÄr{ÑÈ1µ#<“á$_@ö jõP?ü qŘ&–ÓÑ\ŽhäÉr=Ý+ý¬Ñ;[(´ÀÉD¶‰Ò¸'lS/7¢q˜ef¥[üzþ¯Aù €Z²“‚ú4Aî‹'\ƒ\¦ >Eõ–ŒOÇ ²É¯~Š8cvR8âÙ8ý1ÉO¨On"N ç)MÕIÍ„óR"ç‘p4Ê3 œ ìŸ*ìÃ$O7}Òè zRìr‡ãÅm^á¨ðÆB1K¾õJ/§÷òO«ZWeæËci­D Щî‡Ùóäå ßcþmñŒcØ/Þ HFÈ5~SÀ£ð!–ܶ©rõ°XĨ:ˆ[Å7M’Ì ìyü5ÉóÿŽ“Qh’¹þ£xÊ»f¨Óƒ”ÑSô~)¶<Ù|P|Ë#ºÂ”…Ávt9í]|]/ÔÖf²Y¶Äñj)uJƒ_3ŽX KU}ÄðaRýº¼N°4¶á.îÊ4õÛjÜWgp;àÆ$ÁÄr!—¨£ 6Š®5nJyä]³ÓÀÃæÌòˆÄÅK@X"L„”VA"ÔÍ‚ñ£Õ÷ÐõòÓÛ4Û”wô‘0¼Mµó~@S€kš‚ÍNÓE‘™oÛ»ÃÔZ%7wãfR rëŠ\«n‹•1[¿F{ìøiŒ—ÜŠ¬°™kü6Þ(È Õq '™fУEóªGá"ü.nëû‰}Z¯·Tö9Ȇ` QD­±Fõ‡84^~ß7ÖMŒ«F…2‘œ lG™t=¤Þˆòcv½Ã¦äÇò1Wð˜ŸóÇl8>ò„¤Ù°)9ÖŸ½ §±ŸEskážo%5é”(µ~£]oô¥ã‰Áo–M–ïsÞÈPÕ3˜ÛË¥×Kú0úfHs–˜l_»ÃlÑÄ^Y1´ë½cp ÆÇ‚q³'‘•BÁ¢ƒ[‘•¢ëáZެD@|ãôë‘ »Úw3"8ÒÖ=(ã]/-ßwοŋÙ¹FúÚ•—Ûh¦±«´ų̈#,¨Õl,ª;îc”q`·rVÛ7¢«-7|çXI"Ên­u=QXÄÙ"Ëÿ?˜Ç³tnLîàHý/üáçêﺅà@ï)ç cû[ÙÉ—Ä롇óÉö!F¿ýý×Cù‡ðÖLQûšÜÈa¹:¸Â饢íÊŒé ,E RDYÁRÓ=À"Àµ2û{:Ž{­¾x ü%\؇¢¢²9VøQ7åj`v$€JŒµu‹ä¸`{Ê®'ñbžDÙu?åU[#9(Ûõ‰(ÃÔšËÀÈuÛ™/å‡x™îÇE\âT˘DOj‰ÍE+¤Z´¤¢÷LÎG8àµf¼/¬ßÇ<åZx^1yðdo@qÅ%ö©†+†»x}²q¹(^|OçßšWŸó@ã>Gñ¨ÀlProkÛQš¾-ÃÌÃì±z Gµìcê0Xw ÕJ àv´6Ri¬ARpD7lGS_×õ›-Õ&=ÍŽ G)ºÚ(¼ˆÉ??̦NÓØ©:OöµÚàyi¾ïZr Ø’X1Všð~1ö‘Ó«-ÎO#›—”H˜Rë®DH“~µé%u—aÓmMåÜzÑäîâЩºÔDá²™­˜:£ÇÄ„[}uZ¶Õßv\âø …A© áÙ£t<|•×ðíÓa?*n§ Ð¾{™Ín’ñw`’ £ïÊ©÷ÜμïÖgbÙ¡''{§žt¾¼ vVfᓎ†}Ç5®©[´Fwab\3îMm‘K§çÎI»S݈²nÄãn$M7"åÛçhê¡#®I``hKZ¢d³¥ÇÀ>º¥Ã¾•Ð0SàÏÙ(Ÿãôo`nb¥´_=ó¸V÷kmÖ€Ë}=³¥õ@]|¼¯.ž 
æcgÌ&à ÛÕ¯ ÙÆ¼ èÀšù;huinÞÈÁèáÍ`³L¯e×~»ÀÞüv·<–áq•hÛXQËÅWQ1›ewk§Ù¢-ª<ŠjM–?kFßX 1j~³K_Ö[ñ÷¨æ>WIæE–ƒ8ÔêaîØ‹õv‚9Sm`]Tr©sȵê‡ü Žm´æ¹9$Sçïê¢Qؘat“LAê&YA‡ Ïì£ÿ”~ÿîh+[?.‰?±=ÁgV᜕WWÛîSüøÂ øømAÆóE­›®£ø®V”¿Ö긯àáÅaƒ:L‚#GýêĘFoOq†Î0¤Ù§,6x }<=³f¥=Ë&iöë Â7üPYüRóbû8P ¹û½P¦¥jÿ¢çÕ¼’~é•t£ç`»À‡w_º°"ÉdžÌnÁxXÛQýg_¾y}rˆJ»€Qï]ƽÙZoZôKïÑMìùúS°çé´ßYq;Í `œ%®Á}[ráíÂ}ûlx»æ„¾uãýÖÊÁš‘Ý.|ΕéWNÍݪøtÛuø1ƒ°Áà–!µîM9-PÂ_Ï›¼«;Løéîc2†©õÇ€Ñ+i$å—T¦—øXÿ>'BpAb! A1ô¾¯¼'lìð€üÁ©¦"%‡Ÿ '5=Y¥êñm%Iö0w5êÚüýÝNÉÖ߀@7ÿòö®c/°¨S*^ü¸®xñيⵜâÅKŋߚ/¾«WÈê®zrŠ·|ùæZ?„óÓóI2ͯ³Â~ESuRYªZ7Ü›ÊêïIþ¢„å&+^fÃôµ½­ÒǘÅÕið±JÂ…1ìN¥éx4ãÛ¥x¸æÔ[“Ë,$„™ÚxmœÍ/ð¦/ÐißöÞ.xÅï*Cñ­I–(W§(Ò›i‘†îi0­±Ž½ÝÓ¦Ö½"Éßÿþn–L¯m>.¦ÑÙ|‚cÙž•ɺ7N‡àd8c1µðCÌâ :i ç6Þš†SCâhé×:N Çj]}LfˆÀüâýùÄ-˜\av&êäÓtЃ92±yuÜ&üšiCÛ÷%[8UK£ÞD·Jš‹èÇÑd”_ƒMEy¬Å-U`Öœ :^$¦¢ÚãÐÅ gùaSWc Œ¿«º¶q·ê*”e€¿OO»2·t³ónéFºe‘2­ÝØ{C‰"½Wj½÷ZÈÍÞÏݺ * žoï–¤4æ1Õ¾nIBèÆøk¦6»Uå· ƒè›äÒ9µTUJüV9Âè?ÝèØÇt–F3pg& Êá8¢é\ä²|Ðö9æ‚AsÞp5ã†ñJOwŒ›m^'øbCütíÜ]À౺¼g¿¬.N®âÝb¸%WˆIC¶Tâ\a‡ƒcr7vdX²t0¡ç¾¸Ù ç¡Kê¯`*€sT"E³rKS¹oi‚‹¡Ä•n»âûÑdu~y ­žØVíæ¥ƒè6öI­S­€J\c4ƒn¦zWO÷ü …È„Á'tމ[Ùaá¨P!à˜¯Wà×¶•öZÀI;¢Œå¢WZSѶ=¾‚#ê/”“XPmQÄø¹c( œD‰QÍ<ö›ä±ú–“ø–“øzr Á„y%Lì`Y«Juÿ9 cXÒx[NBõ)ï)%9hHÅÛ:êà˜ s/[j×xÑ0õð!¸Zõø6¤Ò‹b/b`”©&з0JUR¼í¢í¤}Õ¹ÀõXíEj(ˆ!m ¢ 8­ÂÆH%‰5‹T”$¡ÆäÃp&¼”2NT 1‚T`ôÉýH õ#å±àXzÙ F“å,ó#zò¸ÝùspB%„÷ ¯9…Ví”Z8–%e¢-´·1 ±3Dzœ$a”J/RÁˆŒ!l7¤'TØŸF’Ðñ´…1Ž ±gNh8ÉÁi šs@ãÍù´÷üt³ç&Œ]ÚÏ.­…ÒŠhR­¹ ›w±G0dObÑr¦<‚aá Ên1ã¥4fLbe3îAŠp”±1¤˜\çÜ‹T3b”T±ô Õv¹!©Ï»Ó=ð'!\¢}L-œæAó”ût©Ý;/°PN;¥öÌ™T_Wžb§ÞsAô_*OᨆÉ/¤à΃íX`T0§)õ÷LÆä[žâ[žâkÊS8ÁÔbbá`eäÝ; ±&ìfkžÂô8.À ·›O 'µ 2*>_Ù€'Ig±f¤à ›8Ì’i?RSð˜5muÐ-áâ^–]ã®6ðYkdÛ‡Xi!ç†5ï#_ÀñÚ}Ù×éxê4¼#îÎÞ$Ó±³5FŽah®àãÑ0ɯ/³d6ö¢Ó"JG@Ú k`DɨÞF—iŠÅ€n² îÈξž¤È&ûr4Ý>ÆuæâYçD¾×D2D€Êc÷\Xþ‚Ç^Ó¸5•·Þ@UÁÑ è‚{¢ Fz§{dRÇIPÒD/RÌlÒœý6‰rp¬fžïC¢š®À£•.Z@“PFP£7wH¬Ñ.°‚.¿WÚßÇùÑb}2_pâ1x ÞŒiˆòðàè=ÿ*&Ëמ â±ØÄ^.\=ãó¸ …Z“7ˆ®¾ |ÿ ˜fÃÇ ?VÆhÅ6Skôœ6_R ²Æø1˜`”Q‚K-=L8Y[súrLxT­@{Ds zO·ÛF '8½W~Tt]Þ>ΜÀ]:”H¢”òO5ðþ~É5Xù³|68šç" ô‚Q’0ŽñûçÁcÒ.¸âoXõÐ*Ãèû5Š ñåRt÷â˜j|tƒj¨eï…îYv“…óü«"üt“põϪ-c=ÌÓ`å—ÖPËÁÑÀÓ Ì‹”3.¤‰ÛrÛj.<+ Œƒ™„Ç’´›W„ã¦ÑÕ~UIÈÙ|õXâûùeºÜ#~¼úõkvΰtêÑlŽõ \¼80“ÑXƒëà¡ájÛ´ïc^Ž ‘+,_8y g‚c¢ ¢{Ö¾|^‘0‘^‘1‹Á­õÌh 
ÇÃÖ™„ô"5š2Mø¦ùÃ*R€£kEv_>·bž»*‹œéÑÍè}GrYÛñªÖ_,¹·Q{ºAmýLo‹=YW&pÙ•KªHkNÚÁ±°ãVB{‘ƌŊ+Á>ÍÓ-{î™é ,ž*oÝ¢ãàûÓ¬0—qŽ9ñ$‰- [o‘w‰“Ó±ŠŠÖHÐÂç"vö\ð¨VŒrõ¼ªú‰Ç ÏÜz·qàt‚ˆ UÉ?Ÿí›HyØ&)¼H…Æ{›9'¤xòSJ/Rè¤D*¥õURå¯RµhõÇå;ù£µb¦Ç-ÅLÉÃË2íaIiÅ·lC^£Û AR;ÏæPâ'i£ÿtƒ~Mƒ„M{…Íh{­d>¤xŽ>̺{œ'Î –L*Ó¾3ÓÁѰmØÒx‘jqL¥ñ ¸À]|Šø‘•R‚vð!U± ;#¯¨)žé$ºµzÝNÄaH}¶Ã@¸Á k§ÔÂÕ«| »øvë½’ì/µ‹ÏQµ%Ú·o/¸?Ø.>‹‘ƒÒŒIÀ¸ÅêÛ}ßvñ}U»øv`³/¸‹OH<)È·Ÿ6äÞÅ1^ìÜš%upJï´âTy^ãE¡ÜE‘ùãl|S½~ŸLô4Åcž[N®Q pš7-=ŸÀgïÎÝ©ÿÊcMxLÔ7ÞxÓwíøã*åçóË|0M‹51l!|š ÞÛ{â\š‡'ü0,÷nâD8…‡ë÷¯¼ÓÀˆ£5FèÖŒqhCÉcÇ*$m_‡Zð*hË®ò¥dOk´Ñºý쫃S<(*P¾´‚²‰¢ o]°pÜHuou–6/ÔÀžJ(¸´å6Ž€Qu´&L{ØåàtPŽNI/ƒñ|’RFQR»ÄêûDªyPÞFi/Rp1!0}èA N²1A‹ÊJëa\iÌö·"µp§_•ñ#Õœp¥ñ!ÕL‡EµšÜ'R–uÔÔ‹”2BcÉÛ×! +¯™©fäÈ´wqp<ìd¥öi_c3±i?ïà˜ZÐÒí+Hc®…kÕjkK¸µýì»kßÕ´=–¸ÄS®úv<@ú~2coMá•p:èЦ–^&J0ÖÐ|ûbº°[¥MXºG+/Rð afxâ GŒ Ò°Z‡ eƒêEJYØ™XíѰwQ‚aRJ´î¶,áš· ;é«^Ž&öªàÎÐ]p|1;8¸Ö)^KU—ûGmI°žÀ5ÎÚ×¹\‹Ï cŒ‚&ƒñ.¸]Þ«÷1Ù„ ÎY|µo7h,véùBÝ`²I3Û«qÝu_ytUï{£kœÙ¾^à]ª j¨íÜ­…¿íZÑ÷ó €F'Ù|<´ÏÜuAåòjË×Ã,ÍËúûåïNf{²»®vT]NŽWË–?Î7±ßCíÂ!^t‚»l§òiæfsPmŒày×ôC{±ÛF¢{ +÷Œ_v6ÇM"ʘL Y[R\K\Sªl.ãæ&™þYðµnG¾7ÖX7cÕ´Ê—;'gY“»òW—L½ÌЧÍÌóMîƒï9=ûϹÍ÷«ï(`AÆ PÊÚÚ5Ðôçöíò·ß_f¯²a¿_¾ Òü÷Wh:@=¢Ú"ÎÐOÚúãÚ¨L²¨ÂÝÛÖŸi6¬]¥®T2¹µ‚ÓÛmÄj=]‘ŠÍ>ï/ Tî" •Eî—/Ê;,ײ¿¾¦/FyníΣšùJ®ðÞ¤óìÒ”ÜÝ8„Øa8–ÉÚp”Sñ:èïTOòâÕÂj¯œSåI„§ów0Ûº‚¨éFRu#Áà/x †¯Xøö®®Ç‘[¹þa;Øó³HN` ŽïM`Ä~¹v °z4½3Êj$YÒìÞ½ ÿ÷»[šÞ‘º‹TS\ÙnÀ€g¥žéÃÃC²ŠdU}ƒo@DÐX’Ø߯͆`-‚¯Œï_~úñÇm¹ºyÕ|æ¿®N~hÜüÝÕŸ{žG¾þ¹!íçWõ‚ŒœŸ5|YÑR ¾³’ËÏ~²zò¬òÊ$¬XzÝ ô›ßŤwØ¥•”›æÈOë ƒOÃyüOßóù cÁßh§¹¼åº¼õÚ7’¡‰¯˜UÚ1/“)Îû‘âÿ˜Ÿ$úÿªd_þ†0¾-VÅ-Žîí¬Ü´¤´ÿøÃ^IÕwHnc=4\þþÇ¿V6þ]ût÷o~øøêßžfs¯ÌWß6–þøì#[]ßòŸUÝõ·]qmÿÁ¢šEü?Ô—½ÐŽòÿÚí^|¿3©h®‡ãwΞÛÕܧ|Ý^ã7…Ÿø6¯—ÿ\ yZ~üîÇE±Ú<,·Õ?ç˧»gG®£þ§ïróÍÿÕrÿ°=B…?¬ÿéÉ_Ó£ˆi¶ˆý·èk<–ÛMØo~(ùÓüÙvþáYkõôÖi ÷Í­¡×Ùö…ì1"qŒH¼€ˆDå®™¬j÷)Ó{ßµzZ™Þk?T+ æwLÞú©hRÙ…“WÞٸƑ¾(üªæ,¿Ž÷iµíϹÑ<§Ïy-=úhaƒ C—UqC#Õü R¤QúБ{ZUlµÙr(,hÕ›k½º"²rAšªY¯ˆµÿˆù#Gz||íäfRÜ/ÑëÍçÍîÁU½Oð„KÄ|ò¥AçCçìQs¤ï#)|ðµâx$l$Hqaþ{zý'óßãØqý÷(d¦4úï£ÿ>úï£ÿ>úï£ÿ>úï¤ÿµÎÚV`ãè¿þû¥øïÁvüœþ³pâšY›Âr‡žˆpèG=_/ª¿79œh׊{ÝhÞï‘Ztø¾9ö¾\{‹Z_B^—8bÊwåÝëú¶#Nn›ÉÔD>úH ¤èh—sœÞ|“g‹iùÒ³ÖÞ³nüp¡5/:{ãØŠ¨Ý³oÈä$ú'_¿V꨿mn˜O}…†Œ¤ÍsBéKò·cÑ;©ÿDþv$;Rê\þv,2ÅøèoþöèoþöèoþöèoúÛ±ë¬ke»ýíÑßþìþöNÀ˜ë+ 
µ{N(}F›_;…Þ<ó£8»æLJ篧ô­Ÿºob‹_ªËÎÔE»?¢œ–ÌÒ/“Ý¡u …/éP¾›-Ñn®Ê÷Ôÿf2ÚÈ£<ÚÈ£<ÚÈ£ü‡¶‘ëER3£”£Se:wɬ•¿ÔùQo&õ ÷¥÷ûs"}ä½__ËöJ7)¶øß¶|\m3d6¯5Àq­§ £EPÒÞlÀÑï «¿(‚Þ©¸êwUšçDP^ÞvbÖõN Œ6c¡³ë îÿËz¹Zµ¶!×åj¹ÞÖ»Ée±žÏ| ⢶²æËåªå\Îh<:cjÑ }&¡z¡OÞá³£~Ro_%x_zï¹õÕ[»A£×É}Îôãx9v,›ÿK¼¸èÄËW¿TâÞD¿ÀÝÉvxñ¾O‡K£é§Q¢ïŠ7a›úýóãZgèþ<ÝóÏñîßÔù—èD}Zá“}_V¼Z‚W©]€ Ûað©Îvøã8Ÿi¢ÎÎøœWî Ùøs¨5H ™Í1ÃGàq‘=>E––;ÖîJ¿…P׆iøÚ\ï(ü$Å–X·¬ŸF¥­?B%m$¥ ÄøËÀm»çׄÝŽÇ1¸û§ó™Ïƒà}ðÍuóÙ!¡¼ŸP-ïÁÞ=Ç$D !딥-%9ñ9i3(C+ÁÑå74ÎM"ÓïØ ê(Ú”Ö¦¯vÑ®ÿ…c©'‚4 VšcŸjšh•ÅæGi&8N_G~U69&ëI”X= 0eÑ»£¡—ÜH$Z`F(N]øœÎaÇ8 À&ñ4ðëÓlúvã¯*õðIì±Y#• XÇ,(p§MÉäkqÍE€œr±M6džâÑÎàq.‹ ðœSü½’¤×q ý@ èó¦žÒhYKj¦ûC ‘EÚhk] ©÷‹ëC÷n.‰Ã6'@‚aŠÂî„Ö,q0HÜáíi§~é9¡–ô uÄ;yŽmJ|/K®ðˆHKe×5O·(ÁúÌÇG:¶œ³¾6(½qê”É ’ÁC$¼f"‡KG¦6E>s’Ïþ^Á¸ôÕd)SŸ2öœê4}Æ@²A÷Z9kļ³ÛªH'+ÇàŠÆ#Yä*vBvðŒšk®DO>ûÝsìE\²Zv‚ æg,÷ÕA‡e MºËŒáÉ·y’Lá –c Dàq©¦Ö‡rþ8}@‹ØcÞxSi†”úOùì?0>㸕’òšñ9eÒü}–h™C1x’ bµ^þ_9ÝÆÐêZ ©É{›Bø´“Étñ9Ûá˜Ì!p<\ÄÉãµmÊ}U}¿©ªY0‚8ǸFÂ:‘HŽ]iÐ$RÇL–.vþ†$¹ùãŸã±ÎYšä8$¯84‚eI-¤ˆ³‚Bª¥œ½²ì‘VWé¢ÎŶÿ< .óàÄE%A©Q¿04zmÌŸ* JÝjÂËlýœÊ—¥~£³€}B#³m uL‚2xŽžc€çà9xŽžT€§_?Q¦hÐÖü$JmL‚2&AùüIPja¢[&¹¥,Õ4L–Åiv­…9žLRxGU(§Ýá튶÷X?'ZwÙÿ{è¾™Z:àÆIŠ-Uë¦ñïÙ™àc—@iiz¯3ÕÏ©Vaà3n‰…-mFn‘Öç[Â$(Ü ]èsn1Îï5vŒýêåF‚s2¤4µ‡‰^ÿ¹ÉƱcºc”’ï¡ToÔœi!È8Œ{(ãʸ‡2{(ãʸ‡¼‡·Îºv¬Ú¸‡2î¡|þ=”(kÖŠT;CáSî rð?*(ð|…[¢Á¹ßYá–}Lÿ‘ç-Ü¢pt‰HéïIðÞhÒú9Á‚òêþ{Åqï”ìÜWè"ñðÈÚ[oA´ïyCÏõBÿi9JG0 ‹4bãÎÆ2©CÄ´8"Þ©mqDàÈÛU‹â±Ü¬Šé‹½3´ ¯åÖÿöÕ´²ýÿ[Ói€þnóI̵ì¿#\?'!öŽíçǬ8ÏÐíxDä½ÉéÞ=Øì ¸|~æX®*‚ï«ûéªâÜœ;°V´±;‡šä…·³šôÏи›ÉÝlSy'“iÛ¹›¼pÆ“pöt‘x,¿4áï7UßXJxMŸþ„•Ís}€2 ï qC…‡ 8BgY„ŽÇ]œðþñ´.«Þq¤ô”dý·%›ç¤¼é6o°øŒ¯D@ƒYÄ g!x.N|Õ“ÛbúÖ÷a”C[ âr澎&a02ÏÒk¸ŒàQ|pþœÙûÞxôûÓj1œ‘E«S*ÚH°’éá¹R2AÕY ûp<À‡eH= í Ùœ2¢Ÿ=üj"ä¹~Ž·Ââz0‡pŠ >®‚c,NSJ$„DÙO"k” ½&gÓdÍŠÚdYg#ððdeDx|ùAÅ$±ê‚”,$r){" c ‡x÷¨…ÚÝŽ¨sx:øgŒ ÁÉVÒ}ú´* 2ørÙ‘‹Îú,±´@­²é2Á¥Qh0t­‚’¾9ú$-â&‡ùÇB;¾>ßÔ7 Úý%±,áPJr4Âh‘æ-XÛºNÑWóŽÑê—¢yGûÑ\ŽÕ+Oô]_¢)¿C¶š‹ò±¾ ·)Ѿn~~9ÁQrFI°Àè6 ƒ“.[ÖàyYç\ôg­ŸSy¤ @0ÙgÐà2Ùêö„ÆúB­´ÒǨ¤à$·’ž#qíæé2ßfÇ9nnDà1)+b uݵÆ$q4jýt©9–¬H\!å<ÑV„•Ó RøK¥ì^l{SeÕxž#lð#ð|¡Ú¯Òô©ÈþùÁ-P.ÊÀ“®¤Ìž£‡¥ß©ª~”µìü0M¤;i‡o¬…!Í Ç.K .§å;\™6O·›éz¶ª—§O™v©4¡$®‰Àç—†Ò⢤ÑÕ(»b[ÔáõŸÒkY*=h1|Ž Qž_Zñ‹Aü 
ŽNzßÌÖï‘§Íôí–—ódJ0r°¡fƒ½¬9A»dE©ŠõçØº‹TZƒµ‚óüB¡.J “íËx‹l7à|H zB‹—&š•É$¡ÌpID!Î Ž³W®Ägz¹>~ô%Ñ*™4œK* ïù…aøey†›tYÍËÕzùnæCçðWŽÚm:•2, ·)¢Ÿ_Ö^–¡im¬¡YmÞ=«öÉL³æ«?}µ;¥ÙSö\ ª&¿¢Hhë,§àxðÂ’ù±„¶M˜ûïðàªÉ,S@]Wœ9É„0L¼Á˜9a©­=éD¼Te¸ƒG‹Á;Z»”ïu ÉÂ)Eõ(Æ€ÑÀut1«LC&´ÀU%€FÛ'Ð瘂êo÷¬v k% ^•²Î‘.¼Ž©á{_)%­4WÆŠä9í=_· >e·ûøÖB ŸšàÓXÃ¥²$~c$;Ë‘JÒá ,ÂÆcÓ%e©x½›Ý—›íãÌ—ñ®²ˆu̽ÄH‚3f4Ù!˜–ÉvÍJ;¢6‡"@Ôàq:áZ'¥AOULÁ4¬\sÒ2¬J¹ÁvIM3"‡-GBj[ôØ ìßÀ”’;‡æiiH¦¦TM9†·@g±A#ðˆ´np'ƒÄD,¥LH £ë«“[›§ˆ6pŽ” øÀ“,ãg22ÑHìXJ´lœÐä‚'­žvìŸ,Õ`È=ESö|8‘ÖŠÜ÷ôÔI,žè»û€Z·Îq~.òáFàÖ9TÜù,Ç^«ªâÕ¼J¥$Ò"W8áé3Ú‰¢¸É!p< ³ ×˧m6Ì4£X4Rp²â->×›Já”õ!Éì ¤Í"‚`<*Y†àÙâͺØl×OSÜÔI%ap)ôR‚´i¸h%œSÀ¡° Ë8æßcŒ¦7G=î4vb ™Äq£L1Ôý"©¹µ"±ðYÐg‰!ŒÁ{fÕ½ÒvòI?RqLœi G›&Û¤NWêbemPF_<’:ax©Ë±«ާ§hdD^ަ‚\ÝMw_­ËyYlª{9³7M–êJ±5~› ¹â‚*újÇYq+齟€íkU&ëÓКi@lÇãs<áatû¤SÒÒ³)z.‡½ïOÅ‘ Mãq&Ý9ôâ?î¹ÉHxÕ~‡œ+E^£p†÷Ü ®ùÕçóëU“±¢®<9ƒ>48CC”9’wû÷X0füf¿JX²ïw’0†eÚ§© ±º„Wk‹2µE{*G[f—6N–·¿©¨ÙÉaÿnˆbœ Æ$5áœå8¤,Öx‚L¬@~%Ár‘#C®¯ˆåð€H•»®‹À¯^~pµ)*R9Aªö5ßÉ3)|NEÇÒ]b+´T9¤Ž êYàQràYo(•‚à´fœ17èèx«óMbBKA“ À²Ì Fi2U¡N' ©],ï:WRb3LqtP•!Bñ9#øÐSÞ”råÚeÀÑÀ]Ž@;Åù;UäðÁçX¢ÓÞ(:%A'`'²ú«‡ÏÓ]s¤ÜpÄ9j7ÅàqÉ‚\–¾m'ƒÄÊ)}ùa ƒÂüsÜ%:èM©Ùüa?,Qƒoyµ·”Ž²×¿¤¬‘\!^ ­ËÓy~ƒ Ù“cÈGà±"…£ÿI¯{ÂúI-%óæe£:AÄÀ@IFÀ5<ÃqL ‘ìIbÿe(m¥>%!ÚÊžÀ(¿>V•!ƒÑƒ'öŠwaž¨þ$Ú 80©-ð¹ë\²›XèQAgëõÏ帇…ïÁÙÂ1OtúøvæÓÍüùñfÏåÍóÑâÍQûoª€_Å!²÷­áJ§Èº G&t†í×(<'7Û§µõäô/bF «¬µ”Q€ÏA¼'uF½Å7¦åfž¾[zØ Í/“·ÅÚŸ–íРž4‰(ŽùUÍW_5ÿõz²Y•ÓÉô¡XÜ—›×M‡¼ž‹»Éªø0_w]è…Ђ<;õ­l™¶;ôþ¸ï/åf¶Fô;´Uïïúf›É;~ý_«»b[~üfônËê°÷æ‚þïúwn^©k× ?ùÎßÿ»yõëSñ}…¯¦kß;_-§««æ8ý_7…Ѐ̕o¦Su[jÍœ²þ~%6£,™›Þ–wo„.§%ÜÙ’ËR2q‹.»VEVê;Uàð]ÿ¾D§åæM1ß”¿u²ã¤bšfGt_"TYª¨á='%6p>Ù.уZ¯}ÂS”À}¹ý—ÉbÙt"¦_Ÿ<±]Èü9 ™vÌ#kÛV•,ª‰Çj}X/³´1þÍ×ofåüîú¯þæ÷³Íö‹ÅlþeóÀ×ÿä{ßÿêÿTMü±úô㔋²N—r#‘Úb6÷üÅ?7êk$PýÅ/¿`Ÿ2&î8€d_¾®æ?ìPþzòÓr[Ìop°½ž|»|\ÍK7ìõäo>u:›#Λíú©D¡xOóÿ¹»º¹näü*]  ñ¿ŠøÂÑn`'vÖ°¼ÈEì‹£ž¶¦£žt÷HÖ~¬¼À>ÙÏé™nÍôa‘çðPXÄQs¾*~¬*‹EÛNß#“¾i6W/Üýõ£m^¿ÿÅ|Eóü9íšëKgè§ß5›íëÛö€òb»¼^œÿ‰þsÐö˳öÏ_ß¿£ÕöòL’@Ò¿<³îå™Qô?0aã©Æ}w;oV$Æ×ôBD|ÿ±ãêï_–ßýñ»‹WÛíÝæâÕ«f>'AωPWÍ–öåׯh¢›móêÇoÞ|=#© ñÈHúg¯‰7‹Õæâ¿Ùlפñvöÿhu¼$b\>è<ÌëÝôv$¸ÌøîéO³]Ü]¼Øý,ü5í^ž}OpZ]ìþ¢ýu{;òÕÏ;¥ýüâ¬- 
¦L¯5úXsÀúfK—Ÿƒ±úš4xG|üq¹y¿iÍ×˃‡Y·Zz¹[èÿ'ŒÞó)m©¼[aaü´nn6mJç'bz·LÂýþ±Y­.h+ù+dR¿•vñvNÿjñÛöB cÈl¢±^šÌÉ<®”ðË‚áºYˆùƒ`¼>èã{@¥ÇzdRûw¤Ü×#Üéòù¿ÿýÏ7m‹àÿ!B÷ÿËO¿¿ø×ûå*0óÅë®1TøÏ?=ö¤~ÝEÍô³vº~\¼#øþÔþ +sÿù}A|ý÷áOÙ¹Áï–¿.æŸæ«Å÷»Òס†¬çönEÁyøãÞÇoš`ø6/H/ÿqsûñf˜o¾}sÓÜm®n·íW·÷—¯ÝßY ÉžŽÿ[bË»«íUüçíåâ§ûžqŠyÓõá ÿ¹ï¤°?ÂRº ­ ¶«O{0ZëÌ[o`Cߘ„èÎV ìý¬¿œ]/ȹS$Lþ~ñ©-xÛ«eè7öélþàæ¾j—àÙeN}õÿÔ‹œ­Èª|õOæñ›N íÊ&‰>ý†]X¼7<þÙìüî¿ßßÁQä•%yeä¡ÉK‹'^y÷+?3añßdé7Iñü7}ÙyxjªŸº£^;Šõ¤á |Øüùq{B¡ë¿-o–›«qû“kçNÛ¿ÿïæØ>Êœ @!¥aê)Â8ðÙ§T“ï;üa`Yü(`ò GíwÂ1•q<)K¼¦Ø{ä÷üyBí N˜P Çë…õ0¨t;9Ó“…ÆM_‚‡§è{ÝÿCAletHV©: ãP!)Æ/IÇdðÚÚHÇãÆ?½œw–ï9UZ­$ ÝÐÅN£G“7´•ªÊü'ãr^y䜨¿ÑS´Q:aðÎÈP|ÆaõdПNFVÂŽ!&á±WÃ¥{×^P”<+T‰÷¢²´í•PXÏõméÆe7³+O[ «4?õ¡‘\©o´à—8é‘…ObšéIƒÇÙ‘Ïe_Ðj¿k¥åÞiÇi3p W  ‘i “¹ZiÎégNyÇ.Ýðo#í?’oÖ]Þû ¢ªÈA5}õFçFToìM)ê°Ù í¨8yŠhb¹Vñcmpl/u•‰NÆ£TÑ2kx<þœVÖe—ˆo•V4‰^±h‰ÖŽ©üÎÕtˆ^V_²ðø¥wÏôOñ($tÂ).ùHãtþõØ:¬LÁLW6Oî©pä³^|§µTÊyæ¶V;N*_¢Â.“~9}…@3®ì#r½oa¸x’†|Š¢ýxsx &×=ç. T®rð ”,pï߀»øÞËhÚ`9.[`B?¡Òo å`hW#a”ƒ ä‹ôÏ `<ÝbÐ*AžŸ‹hœ´¢hüp†¢mû )³Â ‹<à1BjÍãѹ­§%ZŸ¨t·^Î[½Å7áV;Þ;g§dƒÞÁ¬$J #Tú ç>Oˆ©€ÇCw3~¨±WÏÔßyZD°Ä6V}ˆÎ—Hýf$¢ÇpvÏõªÂ¯E ½׋©'K>æú™úžü9¨1žRr´ßEÔŠKO†‹:Ëñ1ü$><#x¼ÞVðÖŽ6ÉU?ò%³¸Þâ;MGÍ%bq’³_²QôH~’ãóÙ¨0*˜súޱÎhäñ-&¸øò˜kèUrPj<ÓZ °|äÆI5òTo k!¼£'Àh¨Ô0ýì‡ïã“&k-×`|W áé4·‹ ã²wØõK«‰è*/…Š9Âw¬— ð˜r-ÆžÓ¨BãÉ+òø ¼eV¸åm¡X{±1Lm7 jTô… É+iǃmA¯êöQ2@|Ãëµ1DTÏ­§½Ï‡ 5[øÆÕ8û ï ƒ‚ǃ®HÕNti€d´‡4yÐ:(Ø,°œ=òáÁ"^±ЬØóVdícÀƒeª¶r”ÏhzD)c÷H4NU¤‚g{1d÷isÂÕ5ïô¥SæžÆI=²b'­ðƒ>EÛ2û|yüù$s`ÅC L;•JýrÖÞw<[ï5ww«O‰.Îv·°ö7ÄiL¸}s»=k>4ËU¸µÕ‡Ü¥ä“Â8‘ÖŒÙêúe¡ë´@þ£ k0Ê+0ÒñÖÍ+çt±ös«³¹[#ªQúIÄÒ.ö\P KÆ‚ñ‡Ý8 eŠKÄýŸgÚÿuã¦?fï¾CþMÀ‘戶§-ÚØt~|EàxN(!˜Ænݸƒsî˜EñEö~Ôj®©y7Ψ ráÖ SUÚ]0íøìÀ £ ¼5Ž6n¬ûeNYHùèô÷—ºï€ÒL¢û÷èû+á.û²Ek8ì¹Ô  {q¥'Ñ”Llf¯ ‚ yÁCU^O?¿QÒIV®ü9åꨙ Zº0€iœQnüÕ”aŒ (½QÞð(íô·ßqZx´<'m©ª¢ÝE‹5‹Þƒ’þò]Kžƒ'71½_ëûø1_«¸ø¶ŒvdV(£8ovn*×[)¤æ %‹%»çÈùh¤rÕ/goÞ/;^Ýí.#…™~¶ÛìÚø„*°Ïÿ\.7m_ž³ùa[£³]ÓŸÑè¡‚)ÌÁƒ¶àÃ_óÍrv¹^†HÛ¶ÂsLsá=™ÓGÎkÅá¬Ã7tÚSñ[a«L{2™[¯ñØ€kú>.6O.µÞÞ_Îöcö6§ù¸™-Þn4èn¬¥b•NdÈä¢pdzyö¤§ÙhÙµ<-vi¨Ä®¿Ý¯3ÒöûÞY‘Å«ë.C° iêslú:ƒ<lî®ëþ<›¼˜$¢Æj;\ÙÊñ/UÆTáŸ7Ä?™rÚˆÅÚU +}p‚HVq«aœ¨pz¾²e,aœ•o··êÛýÿVo’Ó›²ZH‘@:+ª™Ý $:Ð꺞].ÂCÝòã ¡y+ÁO€2¶žiI’¨ŒAÉQ“UÖ„BÉs­Ã]ªª—?$#¾Ëö–µZv­€‚_ÓÎ9(Ù 
{Q'›V«Øú['<:‹È4¸¹buË%§×ûpYÝñxaj,Oð”[.»‚–˜ã'U€¡žX²u#4Nc9ÔÄm\…#¦<¹áÜ)ë³fL¥fïýiÇ zš œVàIM£•¦XçƒÝ¡ÙNÁýdzþ¢Õ¼)ŀȹúSL%ÂääQ…<ÎH4>ß<§Oµ³uxX¦ì3oúò8 C‰.ˆ?)"8a‹=†”¼öfûΖ¹Ž˜#ŠÄ<32 SÓÇ)}ZôQ¾È»!iž¬X†—;_7w³æær¶ UÕÛ]Ûïý¢Ž9 C )ŒÌ·.åà%M²xVU!M:œ„4nW÷׋f»mæWáºû3c1Â@a¤CŸœ,€§EÔ’e³{ú™¾}1ªx;UxàS„<)¢€€bDY/6Ë¿‘V—7¿®›Ív}ß¾TÿT× J‘T{’ zr‚¨:y°Àµ ÜáVÅ ²‹ ç«f³y¦iÉÒ#yú=€R‡’#¼ÞÀ&u[‘\r X«\WÎÍ\7Ë›YçËuË&S±¢+`-8ƒ à@ŠPÇ}¤ãÁ²8ôÙêe³¦^e î„ÈœCi.Gp3t“õ x|vu<£Þ³û¨Z6mêµp*jzqÕ`Œƒ§?U†Jù o…ó˜‚Ç—þ‡8}{¨^6-š ×”²qœƒiàzŸ"G¬h2kK>u(lBÔ£!z$H‘œ¥˜þ`Ò$‹WÅu¨Ps•„ÍÈ^‚GÛCôÂ’Z+ÁÂ"ø™Ñâí*²ðBiU¡ð%OnÚþÖ(½ jÿOZ]#;÷:<“€]¥‰=” :ôqM ’ÙRdÁµ€Éø¨Œd—²ZÀû˜¼›ßÞlÉÛ®ÚZ°Æ^¿ùöÍî–HVÙ§ãÉ~Ù¸¿Êk³|Pñ^Ãýµ>ÚrÆÖxÐ|u. öýÆ6ýµr’’þ2¬*#¹<´%bzˇŠV ©1Ý6ôav¼YµR—Ùoï”uƒßf£°R†>} JÚˆr¯–fe†¶Æ•§<Ù­€Ž©î°L3w–rÀ™ä®>¨[ôäVŸÖ4»Üi~86OÛ…÷¤û î×µóà‹±“wÜÓ S…TþÄl‡‡ –¸YØýhæFI†HNŠ) —ÉRƒR‘B­/B©ìCÊP™ÙhgAc”VÔf‰R…PæÄ•›>Œdgi:¼2ÓÌ-…ÌPàŒ9lRÀ‰‘mÉ×ä{ÕÝ%ïµ+¶#j¯‡ ®Aæ¤èÂ|Õ!×^¯ÚÚÅj1ûÓvL)Î`ò%ÙIŨA$´pZDrºÌ»&Â㸠>M-MZ¡:-Z!Œ Õûû·‹Y÷v1¿1iî·Wá¢ú¼Í¢ïjÔÚpŸ”ô 5+•Ú Œy|ú1¦­ã¤’ñdgtz³¡—7›õb~»¾ÜœïtØ› 5‚ñKR{iù1Þf?jñhœ!N"º<(Fl|øØ~t«G(£Gò à‹¥ù¿± d®ŸYx\)K±ÓbÊ™‰LÄ¡•×ʲÊá‘óXsÞC”ær:pÄL ¢ ß$…0Þ¤à)qÓÝÁ<\Æ2¾sH–TÀ†4Ndïm§ô»é¸¥Ã 3'w¦ûß–£¹>®<UžÒšŒƒ6œ}¡q";R-ÃKú´¶ÖIä!êƒ> 1óaYó¡Úvñ¹ÍŸ /‚ˆb/E Ì¥B6i¡¢-žË-Õö(5L©~7ÎÀ¸Š»4§˜…ÞxÍ9 çúÓ›¦\7ó«åÍbO¼gj!•4Žè±Â üb%dŒâ%#¦g=}Ç[çðxüÀŽÒúYi Ž«Ô{íÈ6kNª_¥é‡ò{i6ÍõÝjѾ¤]1ŒÓ?™…G‹bM¯ºÙÝœïôÖÛÈF÷p®´QÚ É‘V…BA]Í d±6G†ŒÈÀ#¥¬e.›ÅuÛ¨ÐÄÕ‰´Û¦ÜqðQûü6™Ó3:¾ª‚[ÈÁ£]‘]Ï®ì¼Säg¯/2¤&œ†¦Õ0XiœUÍ$S7¿Õá<¹EyÑ•´mH}ëõýÍvy½xXY;­v¼_·mGŸ®.@×0†» Bs¶¢F…²ÌÞx8“Á:+L:dàQ¢ø«’Oøêéf ùÛmƒ1;NT‘ß<­½ñ%]ÄdÄ&‘Œ$—¦Y‘Œözz®Ðw€bl£x<lIÓÑÅ]yº‡ì¡n Œg^wmÇEòT=t©KütIh^*°Mø”Ò xœ(þŠÑSÅÆlµQÍZk(fVÀåiœÊ>è¬Møa ‚1ÉÁƒ²”1 ÝÐW‹mžbãÁ½ÅÐ/X+M㜕ÅjÑ&!|º( +Ī9x´(Ð5¢Gu³åuónÇ’}½¶Uñ£§5‚4lDB㬷ÅÌÇÏĹ û×T ›I†\B'#6UêÇ2ðd ÷>Q•Ú‰%à¹ÝÛ­ailjìÿšdþuWÓÛX®cÿJЫÙ$%‰”DÕö½7ÀlÌl{árÜ]A'qÆq= æ×u¤œ“—¶$ç- H%´ïá¹EQiRŇî¦aþUÂûˆÃk%ȼÖkvˆÊ"i’³¯-†Øø|(Ž0 žì:DgIäÌ×.}è¢S0²œýêHëµ(Ò³Ô„G¨thXaNAÚå×—³ )åeB½ºV4jù‡Ø#j˜cÀóAÆþ·ýlx µ¬‰tˆÄ÷•fbÈ"§¡6Z¢’4ÇUËÁQ“ud{ã5èàÓ`Á“O,†vÈ»þ:—pùíB-”I¤à´R%œ¦ŠrXš–MjhÍó•ÈýïÚðÄØ¶Ô¹)Ú"‘V€”Éiao=˜‰§†‹Í,Ú€:ºÜß,x|c0p)ÇÛµô­Äi• ]ãRç--y¾%° žûmNLÔ‘:^Ü,¼üU âT`¶ 4³_ ø!‚õXÃÇ7=Y†Éá8²Mb,5(䈷 æï|¬Tä xŠkÒñâ ×Ü 
ÁW×7ÛéœÉŒSÜÊ”+àYΙÔŽ²^ƒ> ð<;î@ÔCe—›o‹å.?Q)GÝÕŽ‘²še9ἩifhiÍóÑCÿ†'6<ÖþŠê—.×›ÕúñòáÇííåc½8»Ý[rœ¢Oàëy¥Xû$—cî¹Û`7ÜTK3J¯bÎ H(ñs¼¹à <8ròqÚ±ù±¹H•CìDrÖÚ¢W¹èе)À{’SŽ)æU¼ÅÐ# œ8€Fš‡šv“˜áSŸ â=ÇrÆ>dï´Ú;“ÚÏ,Œ²lƒ5QÕÝR,x ŸXèÄØÁ >µ^úITtȱð©õNŽ£O®èðpïb´Tö$+MFLèFXÆŒgá!ë•Лõæfûsy»x|Üšµ7öå’ÿt³\Ôa³ò¦jû!Œ¤š9¯• 5íÌÒÑ#Ö^ ;Ñ?øÇ_^VJ'"å-ƒBŠ/A3àÚìÞ¿´àùj„4`KÜ‚çÈ*wëû›7ÞŽcV2råÀeÈAÑd’‹©‡o8Ò´'D’Pö:rÚ;@'%E Jê—yR‚¨>”)é>Aíž“’ói¤Óò¬“?¼{I)Éï‹ÃŒ©¶•ó‘eòF šùªÄþÓ“ OŒ'æPì'ëƒ_"t€d@#,>Žx¥óñc›ÂÝÍ…-/óï^ïFÝLDe‘(_·1)å{&9éÄÌø‰.§b­…ƒŽÕÓoÍÏ P;sêx‚õÎæÊ ò¶sáÉ3¾¯¼ø7ˆsÉ{ºu’^s¿_ÔŒï ðâááö§É ¿^0âÉÃWŽ/–?6Ž9ù[~ÜׯîÛõÅ÷Åýuý‰7«ÿýÁ_{ño®¶ûÏx¼šþóê’ûŽ›àx‰ÉS«òªXr£¶â'¾+d„.Ì‚±U©¾×$NÿÛc²Jï¸$™ËZŒƒCÒ°$—›•[ CžXðäÒjµô—{!¥"rX=5e­Ïò$²k­ { ê„%Q=v¢,‰,ÅÒßÚ,x¬Á°X]•§§Ç›©Æ®—ß舳jïô–õ_{ L#^©On—~û²¸¾»¹¿\,ÿª^7ˆ”a ’x)^ˆ,ÚåQza,æžä }VuGÝŸ‹í4«‚Ì›{zÕ«°œ#×!OÖ ¬ï¿CoÛÕÎ|“x¼Ú1ú¶ÊŠTF¨M0¨h6ÊrBe0{MÕÁØ úþf`Á]c3xšû1)'ü"a¤ì¢–H`9 Í­`ô2 bÀÍN_¸oñDâ~8-'øä" –Te9Ì-ë\öDû7{¶á±Þ÷Ÿ¨û"P)çP2ðŒXrÒ"Í ®¤Ôt¿t vÆ—û›OHÍNcÍàQNŸdò9AÍC±\Š¡Ý¹™‘À³Ÿu´ž:W–ÿ–‡ÂçcÁƒÆùåùMü¹Yÿx}P{—Éaá/ïßë¯?Ö7d_^ aÖ3úÚ†.suöO…ÜF1â¡ökÌ_ÇžéÜ®·Ó¡K ²;'ò¥me9¢ÒaÅ9zÁ;<±Œ8¾õžQÙµžK}ÔæR–s0æ˜NOh@xbÁSbOÏð=^>/ÿŸy•Óý…xžs)iÛÉeªÜÕCŒPD%B÷VbF<1Ž÷¯¹ "·¾Ö–Ï(_ãÞÉ!Å3øNÊ$çûŠ=ÈúzG'Èt²@HÁivîë)é<Úãï~.Ј§ø3øgZQ¤5𼘉!+j„©µÍ9\Fk=j1ÜþæaÁƒã‡õtOC:,_QQHÅ'93ÉùÒ%?¸0 ưà3ÄϬ&‘U¨Ë§Œr Ir8GtÑ\ ¡bz;ã°à!ç)êwß®;J³L)D, …Fõ=ŽŒ-úè€Ý›¦ñø3øŒWÔ’H-†RœóQ³p–K]²âçÑEتjg&<Ç{·k¾"³;•5(ÞiÚÔ¾Þù ޤ›:‚R¤äSÎ!ÍÀƒå|>å Ë(§£ÏÉ— zJ–‹‡æüùß/þs½½ùãçîîRUl÷ç¯Ûå=_6c/îWÛú·‹å÷Å=kvñ÷ëvÍ?L/ä·ÿÙü`äÛõÅoÿ¾¸}\ýö1n^:Du‰Àr9ÒÝ¢ñuÌW‹òiÁC]Wb/,^~ûù¬¢œ-‚Çà5÷Îr¾Mýºs¡Ï4À xˆFø¿I•ó¢ \-7&7ÝÉù<&šê«E€^‚§o¾¦&¼^6ê7›e-ý51*'HkG L˜´É½6%±¯«è¯B)#Œb6t=.»|0Ò2+§GëyÍâØxMêÙ8&ÍÛ_/t™lg$<Þu÷¯˜”Ó¤»Ã°I.m°“‹n€Çè=8^`ÁÓ£ÐàáõŠQ9-Z·ÁkmÅrÒ8ÑGˆ¦ žÔuǘc²_gþPN„Õ½>’¯ìä¼ë{B­ìi} ²?üšM9ÍYÀSZ8Ìr˜Æä&:Á~ÀÂÓ‚'”®žà¥4ÉD£œ¿,uâJŸÑdìë :áF°—cÁƒiˆ/xEg”2z~*DXäža“œêšµu½ðÓ¬RØõèštÇÊôÐÚs£· ÚðX¯½-75Äáw¸å?ÜÖÛHÓ{ÜHu²¤¦Y9d*_OÝÉ…“ž>tïþhÄ“ÓxcÈ"›¹øÚÁŠ‚>G Î` Íà—0À3Xð@› ‡ÇJ ©]IèT%b0·ÿŒZ@ÿÃñ6<ÚHCª †+8¨ö$÷ÙÚÉyJç3fZXZ™† Oò­ ª¼jÌtqNa:‡”zU³”JiVFðs©–]÷Þ£F

—y™éšY”ËïäàØ‚Ÿ_5ìŸ\±á ¥K0ûÞS=¹Uˇ°ô‰a»¢pˆ§oèú— s‰ÎåZÑDÅ^J«jÕãÁ×ñ‰öŽF9Ù‰WTx…T‚‹NÕƒýôž6)’G„«<ä†MQ!1å@y†U§d>Òó)Pw¯^hÓ=I$™Ëè€bÔ'¸èÅá“D3ð€#¢ž˜Î9Id…WÌõLxTõÀÂY'‰vŠ”8Â@fãÉ.µê~ÑfÍF2ÓÉ%“3jš%ͼþeTK¡ 0¢ä²÷.¦x€>—))È”}&u"MIh1p&#RUË®Ú~ºFÚ"e‹ØöÐ݈ܙ/Mbã™<*ËË À+~¯ÎÔµ q£¸x8ð4Äæãi×÷#*?øÝħ²ØÌ‰B=5¥âOÙS³X¸©$§+P'±93ê3YI™üŒ‡Y™ðDjY<ÜìšJ;²YÊÄpj¤£f¹ÐäLR#Ô¥8LQG]ˆ5y^ýs,¬âa¦Ò°W¯ ZJsÑ#%Š0òÕ7C-”lùêçã)Ô.17K%·EŇœº"öÚØ01× |Ý6Ó‰/>X2S žCt§ã .Œ6%‡UÏìóÈ {h¸!´?Ä#ð”¶Úè<ðû‰W9'äY›Ô‚óÅœ¶ÿœŠìåXð$hš\3ò*ç›ØÝa­¨Iš¾—˜K¥M‘û¼<„gô $ŸÔ Pï\CÒþáñêåÇ«¿èqgvuëHbÔC¨uç0(°«Úl¸œIß¿¶‘ õ±Êh¾:@£—i¤èy½ç@ƒM˜´¶„1¸û‡ú&N|’Ègä5K$ˆÚªŒå—|Ê€pÖ€'»ã"ŽvyÃÞ׫ïÀþã-Xïàuruzí!ü~Q£…ÍÅæ ëâááöçÉ`¿^üm÷é‹—‹Ÿ¾¸y¼¸_o/ÿ\ÜÜ.¾Ý®lÚþÇ{m÷º¥K©ä¬¦’--4Â>½G†„:ï°Uîù5~ÿñíñj·T}» d2"ˆõF†›B¥Y® pLLfÑC˜·iFªÕ%Ì%fLÚCYNŠufo`Ü-ø3Û‡ÛÅ´˜QNMÐQÁþc€§XËu|˜ß{C(tÕƒÚH‹1XΛ÷"Æâ #6,x¬ ‡S-{œ}Ùûy?¸Í »ŠZÛa£æ›38 Ø,I4yÁ x ëfÃ^Žä•ã{Í—œ„ÍÓ1pÚlÆrÞûã“8#7`LçZ Ì3ðä/±Ïr”y>×¢>ª‰ÇS ¹ÆK¡ÿë¶à)pÂøÝ‹ïžOLq°s½Ú‘&g-‰¢À3’P(:9gA]’·àÉîô”üÁ!g/ {6ç´laa´7Œ?Np8 ËjÁÛ¡˜åÿä´b!^\úHšQ Õ¡ " XHYðXo }¸9~ˆÃí¦°ëËåbbQJ!â•C1“r2e’ ¡ÉžþpØ¥{ÌmÂîôT¸6|.å3™ÉZ± ƒÒáb’ Åž?tèÓdž'†›lsÜ(:‘D_'%䨀f9gä>ja ÙîÕ[ð„pâ·%á8==:thpl¤>[ÿ*b6<%·ª0k<‘H^.È´ù‡å¼o×jr0ò\ú›€OqÍcø»?'"ÈtÖiºhuð&9ÎãƒùøÄæ`ÀƒÐ.÷¦óˆ"5·9`+ îzæ¡an ðà\0á)­ @r¬‹ÛÕfûR/!c”¹¤àcÕ—Õ#æ.Ÿ¼÷nÖÞqQöŽm  ÀÓÜÏË ìöæÕòçòvõÒôðŸöFv9ØCð%yPcd–£D'º¤þÊÔù ”&ßÉõ7”úvR¹ÌÀ“Û4;b92ä±[€RÒlŸå²oÊô×)Ô‚¥!©:…ëÝs˜då†ñN.7[ðH,ßÞ½j¸r¸!ää|Ò³ڹ©´[÷4U `ñjxQå`€AÔç0¥{ÏNÎêGŽâs¹Ø.n×¾æT!#;d«Ð|3Ë‘#Œâ(%‚KµƒµªDp8 ‰5ÂpAõÆUÎzNìp(© ´×e9¢LuÛ-guÎNu ´+—×Z=§ê~¯ ¨×L«×/ã™Í…¥ÙÖT[Ä®UÇ|h—¦[~¿¹Ÿ*Ø}ÙûùÕ öòŽE¢Z|T;É3ÉA †iºÈ‡ tä± pG‰Ø$ Õx·Ê¥î&ð«_ù4†ƒ¼yÁN=ºBêüÄrP\w+èã€W žÚågäu¹W*ùŵyI™yèxL ­ót a¶æ¬ªà^Goi–òê,eyèáz…Âïÿý×ÍÎ2ž.Ôû¬ðûí®Ý0cÑz‰'Å_—xÖÿä‡ß\_¯îO›ì][ðku5fŠ}Xv²««_d¾½f‚Ê&R=2ɸ’¶¬¦=Q³ 2g€`ÀäjÁƒ±©1¬¯Ÿ(”†X’™ä¡ã1jÈkDZvþa0ôh€ð´+Ëk”o+‰Ä,“Hˆ%ÅLhBhèÆ¡Æþ'àmx̵ª%Ÿ–§"•$RYþ…”ˆw’ æ~jŸ{0 XðäÐÞ Þ­ÆÞ°YD6CÆcYA2Ä\:XÂ0ø ƒ6+¹ñ: *°Ì&ñ|_[ÌjèÙ£QnWÿv<|¡qXCc0à±V!;Ìæöûâ~= +‘J/R 1•˜]Ñæ6–Ë  £±—ú›´¨†ü:KƒÌV*ÙEŸ4#e¹X¨IÅ‘Nð Ó<ִЇlq¬´¹¬Ÿ¼]/®ßr2wäyÙ¡]Њ»ná¹Å«¶ô¿»nÃsrŠ¢¾œq:º`áuà¹N-¤ÓZ”¹MùcÞi<’ï1iûè½åpxÞL¯¯Æ~®ÛfåFpƒNEQbµ{ÍÙÖÂäå—dÓxŒ.g\¢Hà 2¥¿Î 7¦ÕŒ›ÀãTnã^Ì}8l—þ U„rذ cI*•ÖÙ 
\U—'šÆÃyY#ë¨rœ9þAн—|çs¡:f*´Á)<–•0ò¥_¶:¹« M\?Á×V2ŠWp¡LS–5Ju«ap×Ïú?ŠÅk¹1¬ŠÁ³[[Ãà)<å î…sQáBPƒCÔ•‘¼œ¥ó’:^ÞÄ <’±’&v,.œåRø J(PË”5qFRc*˜8Çš&ž¶+ÿE'ŸyˆÔUO Р$ÙuÄtRË"v.„«Dc§ðÈ Ææqõ­’ŒSÓ8ûo_¬Ž[~¿jU¿E¥S‚¡SRÛðB:ÁYIKçe-.=‰G²ÒfŽÏ6)ëÏE[Ò#1/[ ó²ÚfNàqÛÌÓÙt1›­Öõ`|ºI þÜŒ$H1+0°JDåLc£H£Bþ¶ç‘8ˆ64Oä¾å\Yaî+…GñrÆÒÅæ“Ì©ŸÑÂòJÜ‘âÓÅö g°t?VÀê’ØX×¥+ß×îÞ£$öûzðlß›–ÉÌ2.åNƒ!îò é¬+nfŠÕïJSÖ’¬‚•?©Ùñh+‰kAºt2ñܞU¢ŠŠX^±ÀZÊà˜Î—átn&jŽ•‘$®ðêÒ•ßí™Æ#]5Ó븈þœ–   ýQ]ÏôÙ¨]ùÅÊ4žÔÅÊ'ÃÝß÷¾y˜‰JÇýõAÜj •cÀtŽÐKuXU…žÂ£ua3Û¸t¤ó¢(Tü-9LÞó±VؘƓ:¥²l‡W‹ñêÚßõÓþ¾Â°ækÆÓÕrpt»»˜‹ &„ +‰ãš!]únƒ\„BHg MˆýÝò&õïÁQ–¶=x”*aRˆÍ7ÙS†ƒÅœŠWBºHx™½LJ¥±!§KW¾Íõïñ·þrÑC1 .ŸIïít€¸hÜ߉- Pþ(‰ÉhÖR”¦ü 'iØÃ âÒ ¦€1«)TÁ¤P™ œÎª±>4ŽfUNV03òXÆh-Ea3˨tÒ_nî#ª¨>]ÖÎÔn¬}4Í*Ë’ï±Îqæh#!¯™ïíwÅ^QT8ÃÁ⊪¬ g2ùä3‘Æödä3q /kbNX­%ûƒÆ±eMœÔUèn%ð(¦‹˜8¨f¢ªa=# v÷¨ºÏ§Ûˆ=ü<ótZ™÷ƒw«fáãIÜXv=?9øÜ.–!ÊÿÝlÆMÄ—õGgëŽËy; /šé§vy¼ÖýxÐLGƒysíËm£Wa+$M¯6jîzãÛv9^ ý m0òm(ŒñrðN™šUûõÕbx1^µÃÕÕ¢=?BèïþæüHž‚>eøä‡Ëæ~ö÷«æút<;úkWg³áüdÑNÚfÙþÓò¢AfT®ý8Ê­ò÷ †å ®EÛ27üÐŽ>rÕ[=²-ˆó÷{mJ6¶BdÛh‡ïúól1lÏ?6“eûÇ6u´6hzZ½ýØ›4¨R§@£â5i1ƒ“Áj6@7\ø?èŸÚÕ?¦³µ±ÌüýÊ »Ìj‰Cód›‘êƒ[„ Å*t<½X̦ãÿÚ(«ø=Ë—ÇídtúÝb1[¼/W/¦ãÉ7ë/ÿä­ïÿôo!‹ïÂÓ¯ÿÒN½É½5JÛŒ'ÞÈ/þaí}kßøÍ öû1>¨°oŽC½†…ãÁϳU39Gůg—óI‹åâœÞ¶èÃñ9ÏW‹«Å_iƒùn=éûfyq~¤ÿöËÕ¼þ0ü«|‰v¾ïvÍåHK|ú¦Y®~ZÌ>-pÄr¾_¶§ß"à ¯öñ üþêê–¶ã`†À”>HŽÿi|¨Ž›tofÃf‚Ùx…o@"ô÷·¯~}^ÿþåí›ó£‹Õj¾]¬žâßf£öç+ß £„y‡Ïñgÿã‡fáƒvX´`ø¢4ŸŒ‡ãÕäúÎÕºêm[ÀŸù²GÀmΦݵ³Œ½\¶Ø¸c‡Ûûöw”Í·¶øïlq=Þ4s/CŒºîÔËÿ§­È`‚µÊË?ÝVßw2„’9ºmôñÖÝ⻊ç¦}–ëv÷/WS¬p8¶Ê€­²°ØBc+Í´Ê민W…Å¿Iá7{üMÏk‡‡UõÃæh«ƒS}:ŠX«ÇÃìºþy<//öŸøsÞì”1ý?ÿ½|jåN[ + è¿Ûõé„ÞØIì†rÆùûAh‡‹õ`¯™Ï'×a´úè…ms¹{6œŒÌ¿®¸ºhÝJ±ï¥¯;çø-W¡2ñvtë‘ÿé¢ë¡ã×^`Ï}óËÓð˽ëšQ´ñÓP§ˆ!˜OÇ”Se&Èqn¦ÖÅgìÓx ”›°qÕ´ß2¨%¦Ë>‹›„)üVИ‚‰ ÆÅ÷øÛhU•q9æÞ¡HÍùË/PF@:¦!ùÜÈsPJåM›Â¢”i9‹‹æã-IÉ-ɪ˜ióQJ¡+˜6G±ì¦ ŠEv(v µs ¢‹j!“‚å·k^DÇK·³I< ´(cTWÌJä2¾NÐ¥3Éò!:˜FtÅCa„÷6œ*Þ¾®Óåܤtÿ¬#qѬ1ûnŽ‚´F —Ñ®©”Öb ‘4¥Ý~¦0£i­Á:XEó`©.fZͯ˜á`ÐR˜”*gÚl”¼øÖÂD™±WüèWQÝ Ãžbñ%Ñ.]zXöÌ Re Z~ßJ÷%°V=x´-iàÈN§ …õ»Žâœ!ɺ§!TKS_“ dñœÄÃsni¸v‘›¸h¾è")i]Å„.cݼ”’ë ¦MઘimT4έPÎǵJ—3-M©±h3’ÒªâS‰<"5ÈE¸£ïæ&™Qûq< Ë­Ý2·ç––§7;jîÍ*1]TLᬳ â÷A„tÒoY8 zÎeyWHá¼¶+SbÿÑQ—ºtBðê®^–ŸIãIíß“ìöÖè DERNö¾ã7q…t"ýûTš»ò¦KáI z‰GEÒÌæg½ 
(L§Ëgº|TªB:…Gë½ï„ÝrzøÌß´9†“NBÄä8’Ä©Ý.´lÿË`«+Y¡û•£ ŠÉ%! Q`%ã$°ÙeËÿ3k^ÃäýyR÷Ÿ¼Hw›„[žßݱ Bŕլ#Wã"ÇÀ‘•ò‹i¡†9J3A:º\?«ƒäˈ`5$ƒmŽnäì«)šú `ŒÌƒIïs`&ÔÖ&ð¤Fp˧i|BÍí0_¨KÇ“Ç*˜ Qc_@ Ò;uFoOäÝ—-¨%(µ %zÐ¥oÄ«Šg˜®bL#4·}x €1%¡–?}Ž‘t–3UÀ˜ùð´­aÌþ<©WMÍã™ß01i?·“n†åª ¡pïH]wj÷d>i¦ím%fue|‰.Ô %¦ÉQþ>²U?£#$ðÈĶûãdöe9¼h/›Mí.gÓ1Š…)O6fò¥&dó153$¦–É6N5Zà®r˜7Ö{1„vÎp%™&YJÙ?'¬ªbèþ<ÚÔ®ÐãSeÆJnY æ,1–¶z…ž Þð £ô! VèñÙ/ã(ÉÉéHc­‘¼d…žÓª½ðžÔrž\G*Fh'üÈŽSû‘Œóç¶JWèa¥¾¨á??-šùEˆ‰fáNW?¾›ºŒ `ï—ªÂXytwÝ™ª¹Z]øà<]褻é9·øÑÕòä7단eÄØ ?Á"!ɱŒS2ycÿ!P+&j˜¾?@NÓ?¥]|e“ˆë»t‘ûv²øN°†ƒ#—ê­Î\¡Œ§ðØ= =óâÝÕÇO‰OagÇ%ÜP;¦0pzOKפ•5: )<»Tç· è;Õñ’p«©~,¦c;•ñC·Îw¤ð¤î)ë¥åSZB°A–ÜçÏ0%Ç*9fWÅì½y8Ëcöuh|Àe™ÑÔÇj°&“ÝkB»‡x’xÜN{9Ö£é>Õç–ç^Wˆʬã‚kMžÈÆt`ôn{9-#ÜÕpÇ¥Än ¡yDúÅ0ýt=ëŠÝíÇAF dĪ ÇC@bc— ùCYn%jŒð=ÎhÁ{ðÈlæXÍný$(ßêà˜ñW7;JQÇ”2<›' Ì´*;ã¾±b~ÒŒF›‘NÖ_lÙŲBóï©9©áMö°ÚîÛ‘ÏÍd"!¢SRi $´‹\v˜vö°*µb²†éûó€ÎµÈûHLJ‚’<®¤H¤ˆàù!„|Ç k£—_©MãI½êfû2g?%¡¤1ÂOÙ“äF+–íøP&t«±ããht#úÅ0ÒD7#í¥åƒ^¤ñ¤^¥ö°Ýá¦|ªZs©ÃÍ·ÔÊ—CAKSÜž½ç †ò›Óxv1ýÍw‚–ñV»’#ŽùtB€ËÓ÷xx[¡HáqÙö™m“sËó +'tÅ~¿0Ìù°Rˆl’ç͈s5¤7‚lÉvÑ5ÞâKâ¾Zª³‚éÌ¡S2#ÖAyIàq©¡sî­‰Þ¯k7Ä»9ÐÔÕ¿2®ž³Ü ¦4Eël$n#}cV}\^>\N"ÍÙe|ª©UQíI'X•¿o²ö‹Â %ÊÚóHEféx2Ä£%Ô„„NøËˆ‰ì¸c%ú…E™#K79ÍÞŸG±ý7bô¬$M\D°B;¦ ±+CnAÌE-•bESKY¡jÇ÷(·ºìÁcm™‘ÀƒnQPÑÆUÄ~§SŒQs ›-é õû‹bË^ulþÉóO)/™Öܬ 7W«ÙrØtêøìvpö!Æ£Q;ݺüõi<©aå·ŸÏ_Ëx§âòôægü†‡'ö… :’;n¢Ö”d,yöþ3ak4 <É·‘oÕô²^Œ§moM‰îˆuþx‘¢û ó™+î3¡e®tʱ¥ùNÇÍy!âý RJÐä:´!…Ûgd\S± 3¨)<©BŽ2×ÅO<£TŒw@´à8àäšêíùÒÛspü<تÂìh Oê1Û-CÎ1!¦¶¸¥áµ™ÌÏBo¡Š+ôçQ<߆-êÉø‚¤Ö~½°ÔèÓE$ÆÌÕpyc'ðìè‰Ca …G¡§ü_ê®f»­G¿J^À ’ mŸÞÏ9½î…,«\š²ceWuÍÓx¥8rl„//¥YU³|?|IÄSÀ€*þÈ.vký~b¡íxQˆ‰FPh,-‘½úü+ã*…Ù:ëAgÜFL¿wµñ'¶êO†˜KV=:ÒÐeÇfco,¼8 ,Ó‚'¥%-ú‰¼¨¡Ü,œ 6z‹òÑÆç¸uÛn³‚&Ò’BZv‘´Z„e2ÇElµ®(³ó~ÄÔðäW³‰CT8d¹i¡×wFÀ7²Î ‡ÜÉ x°ÿÄÿò8¨™¹Š³‹BðŽYÝŸ(xýâTÂ@¡(<Éõ;ç¤Ö=bB†Œ  !ã’OÏÿ“IL@ žf#°Ü‹Y!2BRSŠdf÷¹ ÝÉ‘çF¨@;ž˜:Ô?¦r (x¼Z­/d™ýçomu¥º÷,{O‚QMÔ‘qÙ›ŸõÎW–4ÀÌ4áéÖ@аì’WøLrÌQÅŸ ¸n4§`DF…SÚ.Aå218]™Sµ]xC%‰OV£…,?FšñÐÌZ¬Õ‡³\Z&ülˆÙa`˜˜ÆL,• ¥O†e‚:¾ýøÙãfu}`4*“Mä½£ ¢¦ KÅx|6‹¡­“MC&¿úΦa…Ù}7–ꎺ ¾ôBÊÚÛ¥Œ«­ù¬58~ñlÁ¹W‚\+¥åÿÛM¬¢Âj`vŽÕÛŽŒè–wJ1Žüùµ¸jRãª-Å!ÙŽ'w Q.×<ùÙãêf#GùNþín{3ÍÞîòÇ<¾ åtŠm<ù¨žú@¸[<ê©Eá¶ÂV¬kfN˜²L²öÑœjÝ«I4á¿þ›ë/?({ÇÌ»ÿþ$çÿÅÃíJõŽ۔ÎßÊÒmÇŠn„énÀcM|nŒŒøˆÍãÛQ 
wªÒJž!š±)ã2æe"÷㼟ûô黸€"Av:ÐüÙì nHK©ì:Râ1S,×BAÇÃÑ÷í;qàq¢ñÀ¢xyøã¯+€\g]¢”YUVt¬ÀùŠÂ©Å÷V‚Tª¾7ÓG+Å;j¦Ì©Óö¸¹Ùîžÿ¾8„}ì{W²Üª’gb ^þÄD|T:åÐŒÿ2.¸Œˆ::ÊDê-ãbœýˆ5eåQâÌÈ:Ê”}îpñ\怚Š&<暊o:*Ü­n6?x{§¨~µ”‘``={ÌVAšÃÜöÀ&Ç:ØÊ¾ãD³l¶Þéx’ë“« SÈ …!¥èªCÅ+clý0s1ííxB·ªË¿°øú¯…Ū߽ É®„«„ì\Á÷kú06¹“ߎ‡aÈä¿j 8ä“+‰H½T—qŸ-p.ø14]¼z°|4¸ XðPèÁ¶ÿáQϸûÇÍýîâêþþišÒ²›#±2ƒè0øTð‰Éõ `ë„Þ |uù)sSéÇ­ê_ûG‰Fèzηàá™­G~Ä•ByÞ)Ò[Ÿ¯àK.zÒì[Ì~b¥¥<ÑêÿÀTTôuk!•-Ðßrd\¥·Í¼ˆ9ºÓnÀ㻕þ•Å_þ~l#øº‰€²31kL©þ–úu7;‰Ñ¸æZðôŠ‘¶ðYD@Š ¨úÊ ã<ö‹Žî*@t˜#ê4õ;+¹;ŠÑ‘“+QYcMÆÉÄÏ>}‰¿Ýþ¶Yÿ½¾-IßÅ,+eå Q$1~4°DXñ§v{±^=­nïo&Ðuwe;¢S-a* Z¢ Vë?Š,»ÛoÖ –&W:tŠ,Ò’ðä927౦mVJbï§úÐ5à%¸|w¹¿YÈîÊ•‚ëç?Çj^Dç¹_ÙæáàaÄÕØ‚‡¨s÷ƒqpØà+œÖÏ.å¹9&ƒŒKfŸü #.3<̽cû]0ßN/á&ë§?“Ü®‚Gíôg \7u }Äî`ÁCưÿï«»ÍNÎÑW6`Õ¨‘´Â^¼t!&Îë‡ì4.’ÕwqR¸•g‘^“mÃsÇÉžüˆëdx 8gî¾Ü8JJºÛ^ †S)x¢’¯¿R¯Íûþönï2®ì©Ê£‹j¸evܺÇ_¾,‹ Õã\áñ@žy=ýS…O¬óI¥‹zVBB¦qØÑ¤;‰9Øä;rsf¥ßÙ¹^¿ÊçÍãýóC…Ë\å²4àྲྀ»Blïbz6ࣃ§¼Oǣᅼi¤:Á’vþƒè9cw€»4D0ý×÷··›õS©¦øÛöv³;D%Ÿ«suVÉË6‡NÛéB‰uö='N,†²ƒXðàÌ`¡“¾Êd ¥F°¦Ö2®TŸÛp2èqù.6ædBTBVû)†7W_?¶›f…Ä‹õæñib±îÒKäå ¤íp2.9{ø÷‚°e¨ö}ÏÓ€É<Ç<ýž‘vÏW»õãö¡êÂÉ®îÓØ’,sÖ¨”qûù Fci€½`Áƒ0àp¸¿½ûùã׿a¢·îÝCÇÉgÍ.½8]qNŒ“gÄÂÓŒìÝÝßòç»o‡ìÛÍã·zX•«;û²˜Å19Õ¹&ãâ¼´ÞÏŽ Êóó}_-ŸÃ¿^4¾ó–¢;Un ÄæÍœ5Yœ2Ôç&Lôv 0î/Ï‚/±>Au_prbíT꣒Õn\‡´ü”YðXsùß)¿ÔzhÞmž·ëé‚åë9*§Ǭ=®R9-ÃüR§” € <>/©‡(Ï÷I­û踜E O¬2.¤´¨ZŒ‘"ºï•<>õ(.¦?î\^wŽŽM_wÖ±lz>§ù@dœ7—!=O1Òå0à±¾EØVÜû”¾Zw5¯]ºt"I AÉQžÆ9p‹îCeÉ‹«‰ u+@1áXg¸¸ï|Rü)Ó¸Ê}Íœ³w&"¥”Æ€ÇZÛnýxÿý¿ï¯f¼øT%ÓƒG%mWiËöÑóŒÐÄåUÁ‚'/mƒ›ü—Xç²ô ZcÏi\4{±Ï|Z¾¤© µˆ™Ë_ü‰Ï\åÄ âIÃ¥ÞH^Z– RοŸBXðø9ñV77››ÕÓæ¢,¡ÍõvrëWë¿ÉWåhâDCGµ¨‘wôÒðÒò…lx¬…Í+lý¹Ýü5±ÅU¶BðÙ9ŒšªÉ¸d®Ì2Þò5wlxÌÎÆà¿ãSo²ª5ÜR ˲†½(l\8rqAð˜ÒE0à1¿%焚O_å³”œ‡RuTÁKùMñÌX¾ © /v¯rFÞÍ&üzs{µºý…A¨3HÅ~ˆÊsø4Λ#™OYî&Ý€ÇÚ¸^e°iÕ]jÅ}…¥ÓŽ‚¿4ú6ïç&@p•³à±v5>áúLÚÈDgÝ[&×Kì“æ’qÎÞèð¬ð{?À\´àyY _×÷ÛÍõÅz÷çd\øª»ÄP´Uì`_©)67wai€wi¿ µàé“¡ð–¶ý¾Xwq¡,PF­2Tœ°K"ÂÒ@ƒó#&Ø‚‡g¶šQlmB䪣Ô?ßó³û ƒaÈ$7ãIn±In.R7!IÒ©d1.7Ù Aæ1+»Ïr+ÛXäo“câ·ž†¾ÁH¯ÊNNsðï/Å'ûööêááö¿}ùÇþ½Øü_ZÑ—íîË÷û§/«?WÛÛÕÕíæCÐ9ÕÛ2qå›JoF¥ôfº,¥,#D¯) ÅÌ¡Cã÷?ž¯¦êè‡Ðë£ýÕcÝJ" X k‘2.ù¸,Rì…ý¯Mùc­ÄÕ©> îUÚäs’ÛKÐ1ËZÈ=ŠV MÕl/¼tåå9dª»ø¦q‘:ÄçŸzŠ‹¿©Úð¤0V °Î$‘c§u,.ã2# V‚^ÐÉPžÐ#ÂBåRð³èRÉN¨qíCi •þ·eœcߥvÔ™ ‡yyE²àÉg§HTçºD‘³GÖdûL‘ÿOÂE H<©CÖø! 
ÿ…û—üoM×1®’++zçÀ+Âȸˆ=ÒÆÏEš¨ŠOÄS« º:¹ä":U*ý—O®*ý¤A稊|9hÁC½j×ÌÞÆ'¦}•é0å&‡Šd¡Ô*ÝÊØœ™hŒay%²à±¦-Í4Ô™&O)ˆ¡¨IVâ}éÌ”¨›hÙP"ð=Òú°ª,—8[Tký”qsŸ¤¤ó+V®Çý”g {Ùç<‘ÎGybåÒ"yÒL~W ó:òt Gì<<Ö××nÏëåódësꚣèçß Á)kvÄô¶ã °àô¢F[HìX×ÂÙ/9½:ÎÉ5à GÕ.kA0I ‚ÁËT^ÉÑû¨|TÆÕܱ–î¨?÷¾ÇQòuŽ,p9.¿ x¼Ãaõ"¹º7•9 ¨­X=÷ˆ0‡7 °†,x¬Y/œU~“«;WKhsbP¢ §qhÎ÷qÄ6àÉ.vxH¨YM£~úÚ“«;P‘@,sVhœz<$œ§púžÚðÍ~ßn{N¾î7Í¥s‰ÊS@Ë8nþÓöpÔÉÃòSoÁ'ÝC¶w%6{¢¸î ÍTÚ<&Õ´‘qžà¤[Ç2A¡6<ÖˆÛ L³ÅæëÎÏ’™Jx "‘Œ‡‹>*B$pÌXðdîõ(0û\Ÿ˜®;?‰°§¬YbD2t{83Ñ" ˆž±à‰®‡_·Ëu(‹Ê'ÊAóVs)›ºøuÏM,àø3àg|Qúíöþ¯Ýú÷ÍݪÁYT÷—2Ë$»àµÁúP}"œ<ÀËaÀÎϤœãä¸e+«fOîÇUƒûtöd#Ö÷3&?üŸ[²$+²¾D×ä FÅAœ/])]éXñJOã ’0oË=, ëúùvï6Ï5¡ `^<Ùˆ‡{ež½0öú½ëÔQt>'T ”c·„³¥±2¤ÓlÀB¯u…¿ú~öy»|aør{?­W%ä¥ÑˆJÀ£ëÖ¤®úH9g¥gè„>·¼*ˆ19k­*§qî3yj/”½Ê˾NR(Ò{å߃ð‰ü´Ï¢JÉe@U aÀÔ•r^.¢Óñ`à>)Io·¿ãtž ¢ÒÎ{Pgp®S&Ò0Ä#¦¼O†^y#ïP¨Ž8J©È¡Î1 uÌJ«Ši\dè–/r6B ȱáIiIgíat͚ʱÊm`¦ ÀÚ>,ã0úEÝ´c…ᦡO%ùúSQÛŸY‘éÊptÁ“‹N;&eœ37£<{‘|ÈË+Oän¶ŸfëìO1j&zœz,õ °]Nœà|@¯‹4BYO ­Íü~œ5¤¾G&£:°k` #w ¸4ÂËAÌ  Ã;·¨¹Ñ²êF+¿ }løèò‘lÓwˆÑ)qýûq9Ϊ\{±¾½¾¾X?n®eÉnJ‚}Ì›§ß7Ï»‹?¨\&È9mîX.†Ä:l•´!ÿþò¯?¶{ûàýöåz»+¾Ûë/ëÕÃêj{»}ÚnvÅ,¿î/¿íC¹Xnë®A®àó£LÌÖݬSª‡ª4IЭ{ôB¢ˆîºº(•¶ŽJB¥ïHâØ€Çª$5cáfÿðãïåܯ½ºO…U,U_1d ³5øû,å‡m «Œå£4àÔ„‡{¸lV××òƒÝfw)¾»ŠœK<{iàÕóöözw¹Uø5>M‰TÌR.j‡y.IâÐMëA>®]Ó4¯kšá£8`¯Ë¥*NÄœðX½O]}e2W7›÷œÆÅNI^›¶fÄÄŸ~ 1BýÐwô¯ýo˜- ãY):èzƒ×ˆU'2çà@sß>Œˆ?­­¡£ D-x¬û„~ ~ÃõG?˜˜š®--’°ëðØ1K„Ï*O»ˆÜt²AËÉ&&eL¤4ƒ¢±íxz”Jÿqßÿðh¢ï7÷»‹«ûû§—œ}¤º,~€Èj®B)Í“»Ä¶ž=„6ý ªþ±˜2ªáA%ϯ²C5çlmžÖ×e­Ö­ùš` œtT9ÆåW…|‡%r:ê’¨Uh:ÞÛª¹téÄìVLã’9†®Àr0; ââOnÓw2ä¬äGîñX;€jŽ]ªIò‰Pk]5Kæ~®ËaÁL“Oгsn^)õQÞJ5û€.½ .—ç9¥wÍEAO3%\~r-x°‡©ñš·Wû™jRM (`²ÙhQ]üœ=LŒ±¨³[¾]Ÿ ,4õëÕÅÕó÷ëÛM¡/ÕéËÎ%ÊNÕTäʶ8sÒ—Á›]0Ý<~¡é>죯iÄ:bñq@ 6‰1 Mû²¸i„±%ß)µ(04àÉæb‡rçxÒ–ŽàüþTˆËUâ"Å2_q¥¬²³8<Òå]6…à„#ú<;‡°SA¤<üì±S1m(4f]0ú.eøÞèyÝó”²KœÈ«ëRÆëR€ÏްèYæ„m© ¥EOÕ=O—“)*-ãz”Tû¹-QGÊ=³èŠób^¨è¼£ ùø¨¥"îq³Õ }IdÙ=nþçy³{jK¦yûV$Ü^¬þ*;Òø "è³ï¡V«åÕ£ë‚ÌHÚ œC¨ Q¨(ç/5(x°7çáŸ7ÓL°¦R ¢lºDÉåÓ¨Ô[Qæ*:±Z Ad£TH1džU餻Ôô¯O«õe6‚Ó‹ ¹Ô°Tr:b} Î\åbOZÔó~œƒÊÎg¹Š7à±V©ë;7ë‡i¼¢V%oB4K•ª™D ªÕAf*”˜0%hLØC¢PÁ•¦G:0·+黼ÿÜ>>M3šJ…ŒZÕ~\âÓìToE™«T1øÿ£îj¶ÛÊqô«äÆ ðì}/f1çÌ ÌB±U±¦Û-Ë©T?ýW²-Ùè’”&½¨>ûüH ðv 
xñ<}ÇCäj>mÇi;ž·‰_O·Ë­9‚ÈÑ*˜ƒÈDç¡Õ§ÂÌ%Vd¶U-A£)±¢ÏÞ¦à¸tÖãoõýçô+Ód8ŽYÉGB³H\£f„43©…&ÿÏ#+5;dÏ*%’yç¤ÖãÃ_Ëõ¯É/G.€ƒ6F‘—É{f}*Ì\b/wÒ¼Ð0æ0Ĭ]>ªàIáœÄºÞ,îW¿§¹±\^•V°x‘ÎC¬O…™K,ç“h‡p. !–φ»!² ²–ßE12‰e‚‘_‡ÑÔêÉ“–ûá C<19žè>ÍÈêúö.޲M?|}ÚȲ½)m$x§63¡åÃŒ&ØóöŒìÕYgÌxãŠOîx<°9 ®6ëriT'í1ñæàmžTÞŠËpÕÙí]àB¶¶SàáÂèKðyoØ*y4+ª°SáÓ×ÝÿyÿD &g]ô|’\¥0€úYû@ØÖÒ€ÉWàØzò—›Š"ëAÛXº¤;®uË4lûùƒÜÑ hð¤Ô˜·ËÅÝæöúvyýgE¡õ`ëô8>€å6°Rá"Ak* –Àû©¶<Á4£Är³¸Ãõòçòfµ(?zúººÿc½xÊÇëõæy½ü´j«‡L©”²Î¢pá*•Îî}dAkˆÀ°² ñ,‡ü¬Ø(ÀCÐ&›åÏÇ»Åf)Sq=JD Š5ΉD úÑåL2¹4 €®ÀãOlH£rÓ^~ø×òûíÃÃtsïê1ÍÒÙØù˜8âçqÆ5xÙGÆ„À‹n)RižõBÊ8oû“b_“õ8d"{јÇ„dè}Dr¥OjÖªŠSè¤ÈZ/}Í›YÌvF¬ßQNã ¯º.y›B‡G[âQ¯È·ÞØSÏQ]—T®)²^ÆA‚ÐjðòyYðÙA„üâ^$ž+U+t™M®Ÿ«ûÉ4»º]”7²ÛÅ•ª:µY„¼W%N§¶,.èNˆîBD;€<ÚÕ‡jƒSô·¶ª¼©+/…ü9°åD›ÝÁ` ÚØ?ÕF‡'`‹©Îkç~ñcySU¢­*Še]ª2 K/¨ «*+¶|¶O9¯äÛ žþ˜F²ý§U…'õYËû.‘wuõfC&xÏÁ%Ô;ôgÆhÀt+ðDßhèËWõåJ ´Ä½§™Æ™`[-á>\ëð@h<¡WËß›«½Èø¤¿P×ßä_;Ï™¥m…ÖܰÃç°‹óKíˆÕXeyÌË!:àBeœ: w!¸ÝÛ[ƒÇcÛéoéÔƒV¥šC9m8Ë"«”.9qÆ»A¥0À(Óà‰¶IK©ÓRG•.P>Ó“cg(Öd£†€Ã‘(´£Ä“Út•`CT¡¢ ey„ä9Úæq•vÊvAWª<6œxt '^¨ õ@UôÙ¹°d8c43ε$ÀXðÙDéO °} ½Imõ8U$GS„œƒI®ÚEf–a×gpÆ+ðDcºžñZ¬‡¯²c )ij Ù÷€ï̯]ƒÇÆæ“Ø*ùê¨Rë±-"ˆ¥Š·á„È®¼‹í¹0\ ‡¶} gÚd¥Ú¬‡ÇR1ì‘;~a jð 6Žîýã¥ÙÖz¹¸Ù¦#„z¬,•v‰Øèr‡Æ·îõÅ8}™­3N­’è+í§Ò[çGj<Úb kXÌxGt¼eà„ϹÒâÊÕªìÆùÊã’#s~¡‚¤½ÛÈZ—_ëÒ2ýcÞz›Õr5^ÆU’Vå]ZÏ›ÛR‡ðúÅÃA¬{8 €•Fgí–¶¨ø§•ÿ5¶p 0ªC£ÉóÆAE(µ¨R8k„»£6+Us¨¿§¯/ |_ø¡îz”Ȱ[Iôí_À/=¡Lí}Þë¸8 ªáK53“¢çñXí•÷¡ÎöÞ#Ô½Á€!¸À Êã¼:n1 Vô&ÇÊ8 2‘¿“¢u^ ¦äæ;©G¶À½^÷uW/˜äÐE0ÙD Ô&ˆÁZÏn7¡ỖSžñP¬¶Mz§~ï÷ÙµÎ1ùù.ñoÒlÝ÷‹Ó%º%޼yœ×—£¼TQBçd5m²ŠJ³ŸµI/Q«ÊÌCv¸="+nGG^tFH¨ À£~—¤Qæ[Áß}eÖÝFòàóÃÏ㜉=©Ð}Á Á]Ïnð=e¨'ƒ“Ï×ÀmhDž’ïÚ¾'öxù<1´ª¤<{'½Ö3FJEÁÈ^Æ'̶d»JAçL`=¨ð¤÷§è´–9bKB[f/…z(kç|hrrV!|ÿ8çfÞŽ(£©å«h²×í>xFÿü€ŽöêF•ó4×ÿó¥œë/ë¸ÅããÝßzt¥-Í4üËëøw!‡ÕÓ—û‡Í—ůÅê®´µÑÉóQÛù«¢˜y`b溆!”“ãѦÞö½X»¿ú×óCÖê×é?ïã}6ÔII˜|ærø©Ýo­<‹ÉŒ „u­Q²XW˧¯[Ÿáƒ cU…ÙÑ‹–ÌœÆ9GÍ80 ³·¦ÿ´«ðP«ißs ò_—á6ø_¶ó‡õêßÓnþA³T×,9— °‡h—enƆN¢ørÍbxQ\ÿì žÔŒ$ÇŸA|}ù«z­Û}€>;àÕ‚¾/ã°EÎ*ˆèO(½@BàIͬ‰õâþÇrqw÷ðrwøªÍ¬Ûåú~q÷¡ùIÝÒ…r'‘“ 63ηÛB:È."^ܻʭY¸‘µpó?æ²»„†ÿ¨36-(Ë"àÑp?>—¯õ–ª¹-Õ°>~µlzød,óBj‡¶]#s‹BFÄLb™©ùhåø¢ý欻ĥ¢©,ð?vמõUs»‚SN—IÂþ«¯¿ý_Û_ž+³ hцõ˜ª÷#B6ÇKoB€FiÏñh1D,œÃ! 
‘ãI2©:5#F­‚öéÄ89šý/NtxÔ±¨Ï.#Ù5øíèˆ×>º–L3âÈ[‚f4ÃF8°<ɵºš“L̶‚Cmí“eyŒ@4_«AqÀ³q2µb›TÖa[Fã%xæp„£OÒ¯)ìFœ‘K™.Š3•®]Âö"mÖ0¶¢Q°R‹º·$ƒ™À\³´=Õtý)Ý5£žLŸSàŽæˆ»0Ž¸Ð²ú®H÷¾U¼WP¥êÑŒ —eㄚ¶Ã)?4£LŒ³v—aæ áeq†¨G©gÑÄVÔ©Ô±Õì6'¢Ì hÂE1(Zè™i]6ÖY#@ï`5CߊAbai ƒ¤xÛìAmœ”ÔŒTgnKjÄ3²6z/‘Œ?/Æã}Wž‰f#±Ñl±8Áõ¡—RŽÑ¬Šcü4 ’ ÒIÐ=Žý­:lغ¼ƒ!‰$µ‡è3ÃÖBZqI*qcKQ* „<½w¨Ê,°áêRöÁJÄH¾ÛÎ$Á߈ERq“H@†’Oè·®«Ÿ S‹ñÛØ8L-ÞŠ7R9GdŒ–ï”"ûF¼H§l3z ­x!•˹1¼–¼‘à9©?³XÏlÔ9Eë¨OªC°¤…1y„‰,$ žhgÜ¢WUÌÆ‘S².H0Ê“|ôàZÍ¿X–!‘gL1bxRƒÎ¯UMs1aglv|y¨E¤=Æ6l‹dÍ6#Â:Û ÿX<®Š¯™òîúk+›ˆ¥ zò §Vmož=¡¤¯œG+‘Ý^yç¶¢ÓÉáˆû‡›åÁÒN,—‚sA"ƒ‡Þ±”ãà[ñG,ë S*FP€'¸ÙVÊ;å‚1,3²mï$èÄ÷Ü*X­æ<‘K)Ò˜‡³ÑZÇG…5¦•eòAÇ\€UÒÑl›ä8ºF  ÆX#ÙG¦’ÏÍ;5„Þ`†ÉñP~vVìªÌÇÚˆrÑÆœ b<ÉÏ8¦3y³¸þóªªllE °'å{kaæD¥ãðY8àçŸB;–‚!hL3Î-ÚV ÅH7&cÀADŒ<žfלžñjl¿ñ,ƒ¨Ü¼ „ŠJפ«4­&~ ÃБ‘ÄP€âw–Ÿô35\ÈU,¢éû®²Š¾ƒÄÂŽy¹í0X°<Þ´¨`Ù$× ¿uùç'ƒt—#O#–‰Å“å¤À.ƒe‡Ç5£Ñ™évD°Ñ¼Kþ¢xç Ì,Ã*+3§U^7§‚ËÇ—G‰<0æ”Ë€R”àAíÍózyS®YwOë忞—OŸ-××1oËômŒÌW²l`8+Ü'ÑI³æÆËưïõ[ÿ¡¥c( <ºñ0†Ž|”àÑ6]œ?e‹?¯—õébÕ®ÔÓ“ˆq5ru£a²ÎbÞŽÆXeÞ”»R ž8š†«ï?ë“ÅFǽI1 „óÆ#¡\ª^ô6Y/ÒʘÀº‡ÒZV€G]Xeöd=>üµ\ÿzªO‹÷Π‘ˆãh¨“¬] Q¤™1±4À£ žïÝp*þ¸~¼º¾þ9Í â÷¥t’DOÃ÷©Ýˆ%UÀ˜bxΓžˆ£‰õëéñvÉÙEì•€OIpÏáiãt’õ¢b0™Œͤ1T Ö£äú/˜áT¼Þ,îW¿ëÆÞ%ÈÆªD@ñãŸÑ’u£"å™3Í ˆê4'AfùGmÅb<'6tûùp¿š¢mo³ÿsq}»º_¾ÿxk­ûùÏ30Î 9ô¥p'9‹'6%HFèø+{ª]?5$ˆO°ƒ ò¶oLš­;nJË6 º´Nñ8š"½D¡8àV!ã)}®Áñx’wƒIr³(¿ë‹íîÇ£ ÒC QŽFƒµGLVT>à-×O«‡ûŸ«|.o;ì”·ÕÝóö§{kl§Óݯ_í~ÿêõ˜Ô\·ñ=úgotò8m—Èÿr9UèKßg‚àÑ6¡Ö©ùà‡wŒžë†¹§©¬gâ2§ò8ëLWþœM0tî–4x<´H ;,i7=Ø]=Oê«'ê„MCËz¤¡ÜÐ7ia?o°_hðhût=f$ËÍíòùiý|Xþâ4+¯žCòÊ7!gå…r] õi.U¬ò´G0nsòé³ *Àc•w;¯e"÷Oz`VV2Îà'%[yM+ŽN½¬¯ÖË«§ÍºÔbÀ”šLŒG0¾Uü`kk?}Ô÷¢½£9Fªê´p+ÃCŽ„eM¨íÍSX¨dG¸<Úw#‡%\”Tß?KAçà“å<.ß,Ô˜z !hÀ~¬Á£­÷iõãë C =k-– ÆZq¦Ïë©dT â jð ©=O»2ª¯ÚñuíPÞ8­‹žC“½˜H-‚ózZ‰!æß6&PƒgüÃåõ^-îVßߋыdš©Ù|²,Gúde¾Ü4„^Ùr ¦'Û:#¢ùt”ªÓdXŽ\b¡b8¹>J3—Zb©É^µèÜ;×é š+Åsëƒ,£he ^­¬¡3ÓjõýçõÃv6°³,˜s1ë3q†‘ ürÌ›¸nsfr½Ö›š&ıü’ æà\ü:"ÑlЉ%§Ë¢˜Ç‹Ø¿^J¨L³Âzx6Fo$Ò:ï>ö¹X³É&?º1dK%©G€‡Î¼ŸMÓ8vA¦†‘ˆ“ÎuJÊ1—NryÃ:ås8Hð€ºÆèÑHë¤ÞÇõóýr-·R¬É#”Æ1ÌËÕi„žb)ãb 8˜D ž˜€€Çã´6öÍýÓfê+ÞSp¢c5C\hÜïA9¹$haÀ+ðÀ¼¥ÛÍm½—4‡R'º¦º€>8Áåp‡¨ õëø¨€âhÀJUàñV¿ûc™qåùZl÷²:Ï˼hBâL ò¶Ñ›Ó^4!–+ÛÐàA˜×þPkß$ʬߜÄ|V–r\úh,·çÚŒÜÓ¸¨€da@~”¶`Æ4qº©s?’Çà­å2cÉÊsÊéêO69zH-´^– 
4šê7r„béUÀH0=ïÒòAKN?"±Q'Xe:Äêþõ"ÏÖóuÙá”T_DÞxB6‡È…/N.B‘ö¢Á“æ4êž³5¦úígBt1V†<’ÖZ>™¬ T8ÂpVàqÆjWôìT¸X_‰ =>ò8§ÎaF<…a@´Jƒ'úVcNPkíÒ¿š’¼OÖÖïȦqÞ¨f%U§Ï$Š>&Nêÿp¹|'ûf12/^¶ã´… î—›2ä@;¶®‰EÏÔoÜŽ3ÔìAS{Ò@DCŽ—|0ËhR¶­x’kð`õFó‰—5z³}Ñ «Ê³e%ZÏ”ûŸÆ9õ›"=' œlׄÄÃñ#VlþN@kx<Á(¬‡’¹{ ›:Ïm²]b.ì¦qƧψg+Ù¼ ¯¹<κ3™¿ÈðxÛ4^µ­e·Uš«* Ê‹¡è·¡¼tÒÚVZ¾å@iCD<› ÿ <åê)ñx‚º\Úÿ.¯7Ú©óÈÆ¬hÏ¢¡<ƶI)%Gú×íPâI=œÖ}ÝùªîŠÏ]ž<kmжSN‡,õŸJ ¯¯^øûïÝÔiŽÅäò.p~–\–>>©‚Wr°ü€‰TàÑ6»ûê Uõ9,  Ç;W*m8ý²TO`ÀþªÂ£­‹t}»¼y¾;|Šu®; -÷6¯Œó)b/SA. Ö`€Àâ &˜ÏŒÇîÍ2Üì:WïôVóÉcU>Oz )póžÇ‘úÅã)DTJ{–u­r2• t ,hžÔ 2q¬HïõÃzùðtõýáa3q¬äµb(• iòÎqîHgTQk·²-KàÑðÀCà çïÄÒx2ÈŠt‘~¾=5–À¯Íöùg… .p¦™sÁÇ6·úq·æ‰ÃÔ <ñÆÓ4Ê”×\k…JC‘#Òx ®÷ç˜?©çn¦q6-† Öã”™N‚ë¿ÇÙ0?7;DÄO/I¶ÜÙñ¾PEX›ÿÆöýïÛ9Œ½Ç5Ñ ì¯ÿ¯ÖI<)D,\4æÒµb{ ÁI¨­Ùàß–aVxÀ¯eˆ¹ñ4>ûuxÀ#_V¼9=¯:W…ÄšúÀ䤨_û~÷m€fÞ&¥˜ž¦ûá–u­xk}’IPòhuGÀfkø¶C6 ¼ñÐŽ£+Ð É:Wÿ¤@ܹElʵÿ¾Û÷½Ð’ø4A chʱÏLT“ ´Ù)µZKçúŸf !¼šÊn:Àhk&h.±¦ù@LS/<²/-pµ¼{¿IóÏW©Ùsˬ° „ZJï]¢àŠÁ º=C4ÆÈ«Ur†øâQ¬g]Q%s×"EsLkÑ—úpboË߯Q„YG4ñ9\yg Ü,„Wq•¯ÊNNˆ<ÉÆW«³+×+›'æÜ†•9uò…2®|š@d'£ öö|a”QîÑ8ʆá '„ûàa8 7ÒBôÌI½îÓïS ½5¡={„V^ìxöH%‘ðÁ£‚EÁ+D´•vuÞY!¹“3 ª÷Áï´!,ðöLÑJzµl˜#eÀv `k-`ÃØXUä‡7.òX–Ïë ©œ=-M"R\ÂwgÓPûžUÈq¶h¦g•æÚ ë™®iCf“^}Töt¨t­bÚ̒¬&|Ð:”­9@`õâÀ0{Xær6'ȇ4ÝÃ:2`+Û}°€«»Íb2Oì!pä$ãÙ-Ö¾Ö^˜íY@%àðh‘ð€ ¤|ðP’¥\Á%ñb' 8S{ e" 0Û³@P÷iÛ^: $¡Â@Ö²U¬r.FeBay ”·³êàµïuðXñÁ?Ð ¾òÁ£xÛ^µ;Dʹ¶H‘d^âÓ¾I9ºálÍcX{œm2K/ƒðÀž›¤x0í~“x+\^ëÙ8ZÎõAøŸV> oæÛùCkßÛŒHæƒcûSŽ¥Ï=ˆ¼cvûÓ×ü*q 3¾=p1Ýì¼Z_÷ýóal9P*”ûàí"çüèJç‚UTx‘Ñ;Çew¬­ùÀÀ¶gë¶å»Vu—w¹óò® ó˵r^Rr¨q˜îl7hà _MðpÞUE¥]ÒÚ%-N;Õðí:&è#šÙ®&0VÌ^0Í pñj5hŽÎd²Å£]ùÇÅ£Y-–YŠgs3œšµçXÚåäuÃF”† :])LÉÔ¤ÛqUÊ«‰†£ ‹_WÝAYl7ŽŽåXÄ»Jǵ­&ÈÔ^B<¡·Ù Q•ˆäÜ!3Cvì<ϤlzÄî½¹¯ffº±ËÄÑ`(åj®EôìNXå¾eŽ˜hÊÄ>Á;ù§Àƒi'*€b˶žÁÓ —”!¡äZ“±"xªÄ“ è#±YC ~R$ØéĆq_/7™ÉXWH¹ò …äõ;šb –q¸Ñv;1„æh¥àÀÔh€‚&xšf‰Ù‰j'©ò>+s„àÓŠ©3D,”CˆK£˜•Š2^/q7‚ªÖþ N9¡n<ŒâP#}+½ÕÇ™HëýG­©B('c5‘%c~üii  .arÈâµ±ù¶Œ(HCÓ-àÝ‘û»ÅO7Å¿—QºJÆÑx/>$éeÑ)—Q¼˜D«øa¾Œ']Ñ«R„Ú-zc´¾JÒÌ/;´–{§,>áëŸV°j¿¼X§³,±‰oFú¿ògFì‹kß|gœ¸ÑÅoâÿÍx=^.²›åxŽÝ<‰Óä?ÒiL¸É%ïÇcv—pŽ´Ù–Šš$Hï’É{“q"&*Á4¡ˆÜ!,9‹c¡(Ÿ°$êúÈ1zSqòkWéèê­)&AJ¹¬_hdS<²%°t½6K@Iöoà²óÀ¶;²’£fia‘â@<:]/³–2¼'}þ~–Ì'×ßOý{p§Ÿ-fóoŠÏgzß<ú7ÛÄ[ûí—?& Óå¦7)ˆœnÓÉÏ~_°¯ 
€}ã7ÏÐ/cP’˜æ)úæÒêCèP|½]fñ|ƒí2z¹¼_Í#t½I€c°jFÙz“QÌh†²¶ûvLúSœNGâo?}æñË»ñ_ÙsèçÇ´‹ï'‚Á·ßÇiöz½´úu”Íî“ëWð™‘öedÿ~±ù£í2ÂÐ ¬/#..#FàÿæžYì,Ê}¿ÇshÆ ¨ßßä\ýòuùýÓ›ïGÓ,[¥£››x<††^¡¦qv=^Þß@GÇY|óæO·/® Õ xÄ0<öø·HæéèïïÒÌ„U±½ÿ«•ñ ˆ1ÙÊÜôëë¼{sŒ 3¾?üöËm–¬FÅwæçdÕüp¬,Šìëözäù? ¡ýã"_Ù/ú—T—ØuZºüÃ(« ÁðñÍ,ý˜Zõµe¹]ѲRº,úèÿ…Ò;îRKåb„™òv/R;m¾¦çÃÄ|úò9žÏG0Œ ~Ï5Çôóän O%¿d#ŠCC1ð7 MÆ v#żÌhcø²}ó+ÀxYòYJTÚ}ý°c’ý „[¬þ²<~þË· ë•BW?ùðåâ?7³¹aæÅË<ý¸ùøj· ýÒÕ0ßÙîz³]–4_ä) ÌÇò€œ/^gþúk1 ~?{ŸŒÆóä‡"üvƒöÌVs0ãÍŸû9>âK/@.Y,?/Ú5áö»ÛE¼J§ËÌþiÝ@`Õz9Ÿ'댱có·{Ç¢øq9IÞnŒ‰æÌ-|ŸÍÇ»xÜ'YË-°_ÍPZÍgãY6ØÀ!µ\½uµ(Æ'çY„ÞE÷ Lî` Ã|Ÿüb3³íþ]®¢ñvš{n‡`4ÉÍ©ç¿ÑY$šƒVyþ»züS.;²¡E»IÞP˜Å{ųŸY1ïþy³…C`VÆ0+S34ÌÒè`V.^ùH…Õ¿‰Ã›0:~Ó×í‡CU}8u'pÉãÛ¹'`ºþ§tÚÍ?ÿZˆk&ØÿþOzÊb6¹œ”ʑև٤x÷»¸þM Õ], å57ÃCU€s“ÇKK7u?×ܬde½dMލ]-QqlQ%à@£6`ª`G àizáâô¡½#5zRЏVŠæ&×®Tä¦ÖèñÞ¸Û¤%šöÏ<¡`—¬š\öˆSý83YÕŽ““¦È•9„ØÞ°MÚÛèРúM´Õic7,.d»ñß3.QJ`Tw‚T:^7«´&D£¬I‡g“lÖtàR³æ04ÈðÇ£:„X?šc*€¶È.°1­=ñÙ¬B¶‰Áý`Ô‘òiQ§iܽê-+øÒÑÂÊÅ,Š˜ƒ4f “9×L9¤:6jƒ»#]Ì%bä¸Ê_ÈA Ao<˜òNxŽ*³òæ.žxã¨C€÷€‡"ÈÑRâѦ¢zA[ ‹P”¨‰éQ7ß48 æOŠ„Ð>‚-VŠÞJ]#F×0Ðí`F>Y¤æ {à©NÁÒ*’«ÇUNª(­¼°K&„kЉbõjœÄT51ñÁÓ$“19lñ†í< ÞÁè¡ñÓ¢‡q²^àûÃ!VæÎEVaÎ{Þ‚à.‘$ÛîLïÆ ¤G|ñÐ`äÛÚ‘ÎUW˜‡˜ö0é…wªŠÑ™HRR¯ÆŠa¬¡”+ k›„86ÓVþÎuZ¡•ö1êMT².¡,;7 3|:È9² SâG"Üñ`Eýn}%>‚‘¼$&mƒ özŒ€JŠ4õiÀ@+0’‘Â|ð §Úý`n--íO¹áè\ª•ö2“Gãé9:N›Vu¦šoëZ©QÜäýôÁóÕ©6‹ï¡Slw8ׂ•äÒyÓÉ”#_d§ÚÓ•^š͵G»K[u‡Ê”Ç¡2ïJ5ÆàÒÐý v”æãæ.Ù/¸ë-ÕS°¨ŸTeŠ3„\î”3™^B]\ º¶AC$€MððîÃOt¿ÈÂQý4SŒƒjP.í åjÓC5¼æšÊ Z1„=ÝAä«é‡z3Š›CJ9ò¹ÛrŒ¶ÔÁxÜì h€‡£ .y½Z¿½À•¹j%±ËÄr5‰wû× õ4Ó“KÆ…k,B9¦H»á»ÂØÏTi犓)×X¡„ãËòt(ËùÛCˆFÐão;!žÂ}âŸHÜ1ãS·ðÍv\•É+Õ»èe¼'ÆîÞZ»ÑçélžDŸãY¶¥”ù© — +áCË&0PO0x)aïcœÈwQ¤-ÞQ<ªªÄïS]iøTÇ?I7óÌÄ´mýýår3ŸØþßä!ð–ñ&›ŽAÁ›á<.B ­ti<›&QAžÉ2Éé³}õa1ûg¿ÎÒ]€ìËhÎà•©UÁg‹Itñ×ù¥…|aìÎ}h#Ó–GÖÐÍ'|Q99K‚ÀNsw‡DMºc¯Œº£`ð+£ Tu;™ç±ölPÏÃG‹À¦`[ll 6ÓÓØÄM°·+oÏL•¦×öGCþ¦»ôkÎK_J²“~!ðmøº—Ù‹jurrØÉÎþskC£v!µŸÝ½ýùLÔ¯QôêÇÛŸ»·Ù¿£ 0ìc˜u/Ì›ý¯×Õ‚ëûªæøÁ ­AÒ´í847g !9ÆÎóÞ¶?šÈÙ$…iEȆ¶€'Uo="ñ_à‰"ÊcSøÏyÀI[Qu©{þÿ­dMïƒUvïš-«¢m‡´7®<û¦IGÝTÀ½ÍW‘Š>€ÝnÆã$™$“Š"%;ªºþ›Çõ/ï쇉™|WÓ8Mlp]ã=Á´#2N—‹‘å¸\@ÿ´˜&ñ<›>\F÷EaðdgŽ\X³EšÅ`cNªÙ ¢>làMØÀwçš³¡}ìÏ€Ú—õúÏ?._/'iiŒ•>‚ñ´‚K0 »ãÅü0©™o”qA낦µ´Ë“ðíñkŽ„ –ê­%ø^7åòŠŸ ]6‰ŸÌü•åìMOF#k”æ¿1 ¾]LVËÙ"{aƒ°Î 
š}Dߟ·?–­ƒòŠº¿ãTbç+]—ƽ‘MÉ‹ª!M3Δ¤VBqJ~ÍÞÛDÖ¥ŠÿÂ.8nÞè½u¯VéÕx=¾Îó÷&×Yÿ¼0{Ã&üÿx52‹Û7«Èõ¬F×Ëkvшӑъ“h3Ym@צR |õï§ÊŽÌ!ÑE2¶l]'ï7iõrSJ|ª’ÓC#Æè ±DEvÈO5¡»Ö´Žn9Ù»–#<µÎñkŽ´ÎË7‰ ƒ—ÆÈÈà~ùâUlr“þ¸Üý°-ùHÍ¿&åU÷‡>?%%ÄëâÙfPrX§ý,õÌꆫq¸n°ùõz³xÄÎü‹èåz¹øóòjJ£»$YDE2¾ÎS5±v ì­í‡ê嫿fãwàÚM-_uR©¤‚6« ½+£Qåþ¾`ú]É!MwÉY 9z¶Ý:àæXƒœk•‚_üøþ³Õ¼àˆd˜hNŸŒmb;INšaÿýÁÒ‹çÒ¬R\(¤Ëzðò³.k­ÔK/Æ´Ük«©²Éý®Nô§ûš^gdC.úV€åX"êáÏqT»–+öîX³áàŠÔL“èUl6Éjìy…‹·ë ´Æì…ÕŠ]ñáê \“w0ƒH.·»‰Ç5becú â¸8m˥ɓJM–£7ŠëØ­,]ÜùWHM–·&3޵[:´úØHðÔdyœSÅ<ú–ãY>¥Ôd‰øSKM†·ÙJâÅ&63-¶9JvÿÙzûòœŠìœŠìœŠìœŠìœŠì7“Š,Ÿ/%X†¹çUQÞ½:§";§"ûú©È,1™±¬êÏçÖŒô˜Š _cÉ`^¸?•ŠŒš\hšPÁëWöòr¸t¹ï)øM9*†¯÷3órTükùMy«¹¢{ômMùà~S^£âBIêF&°8ûMg¿éì7ý¦³ßtö›Î~S¥ßDÙ5ÌJH ­ký¦¢\9>óÙo:ûM_ßoʉI¸9·ê&0)Ý-éÅo‚Y““~³ûHD*G´Ž¼"Ok¿©zrýoÙoÊ[ÍSLº¥ÃÌoÊkàñ2äFÆË7èÏ~ÓÙo:ûMg¿éì7ý¦³ßtj^…q¤4ò˜W%Ü=ûMg¿éëûM91ç’S')Òº_¿‰›*OøMz„(xUQ\kÁå0«¾(ÊM6Ws²2ÔGñLQÌYÑÕá…Ê^¯“O³%ØÉ6ÊM>ðGÑÙ&>ÛÄg›ølŸmâ³Mü²‰NžìÑýãÉS)ú.»6ŠòAoî.}ØÍµ¡YXžÙ¢8ƒÿ²ä~•Eä4aƒQ]Ôà¦uR©|êÄ!ë”Ü«¤T'êX'G¨ßÌÑMñ`¤;§ù4`hqP“lšlÒ«ÊDU¼2sIÓã¼: ‹*Çñ^­{AÓFœx¾p¯-L’Gw†¾]ùŽ-¸çt$ñH*úíNFÓeš¥×ö#ÍC„©Q@ÖäÉõN-2!Ÿˆ¤ÁÈp?¶—LÓÍ]:^ÏVæÛCIëPœ(ŸUl͉&€û§•ôIQƒª`Q²ÁÕ‰ó»ªÅ«P(>0M:óÁ²p„Ÿ 8¡Hð~¶¾ÿ rJÇÓä>>”1Å„š|ƒÞLð„Ú?Ä£ƒ¨ŽWДÓx=126£î@À$`ºíÌœýAÒA쥥 fî‡K±b,²í€o#ºŽ9ê „'p*j²ÕúS¢âÖäðo§Oˆ€§iöo?Q§If6 ÍBQC(”.¼ýzÿIC6]z¨4¸û«õòÓ̬ÖÁ#'í6Š:€uÙpïÔèiQC"Œ¤e¾ÏW•M,¡›íìNd¥ mVøVä"G$eºKnæÈû' ­Vè_…,@—Ðd¹;H€M” Å ÆT0fÔÀ쟌‹'E&H¸éd¯šµ² E.BL#@û§‚ÀúIQAÑ9©Ÿ§ .ž´ס¨¡1o– =ðþ©¢«3œ|ªh†{¦Êçänº\~, \£`LѺ/¦Ôâî( ¡'¥Sª“µD9‘€Õo„Zã@DQ5a O%îþ‰‚{RDÁÁ}—½ŒmøÔõ]<¾‚™ÿ—+jŠ"Ø-âÈ!ÅÓ"GÓm4sÜân–§¯©öãWŽ‘jíTQék¿ö׆þ šóIˆjÔí¼N{©‡ZZý?ö®e¹±¹þŠVODQ ,Gïl‡^øfÁ–ØUŒÖË$Uýø/ÿ€¿Ì‰KJ¤$‰äÅEÑ1Ñ£Aó$2@>’AÁœÆà§§Œ§Ë2HÁ4¤Œ æV—¬)‚kÀ‘Z´Ó“"bC“ 6Å<þ¼&Å[i r®QkQÆTˆN{ï’œïÜe¯Ÿ€³¾Ë²C0Îù <šû‚´£D†jì!¶s8jAŸO‘ÚiA—¨ žp¶f'ô$1… ñ9¼b §_(N«FØÏ'Lõì(u!L-4Ê›‘ƒîAç ›ZCíEk;ÌÓOŸÌ¾AbÚL¿ô<ÚðŽÝÆý#ïñ¬ª r|ÿ‡A’Q$9ƒ>Jž4 Z«]ÿ‰ô”óI,˜͋⛮ÇSŸ £s"ßõ©+l]ÂòÅjŽ´1!… ÏYi¼‘ùȱ‡ÒQàñÐÊÞhäY¶ßèÀ²ûç¥ t|Ð׺ÓjMtèqe¦Á£½2Ûµ·Ü'Ù6·tø‹Üžîž¿,N´ÁÜ–J¾|‚Àè­ä,ó88ËóœœËõ@×A7hð@K‡ä¤H«F ’.»~ˆüÇœgFZ‡¥ ×ÑÊ•¤  ª:{±¢ƒâ›è:\Çið@j²ýãŽo·?íŠ åõ[þºkñ;, KC°¼³DØÁ•îlõ«Ã¶ªžØ. 
©Çãbkÿö˜<‚àˆMkŒ^O…¯Sç¨ )­ÀªÎRAÖF1°L^4Nц`›ÎXܘâÐZAœ0uÈeÅ|·è €Œ'´»iªcÙòñiÅäF¾×¼ñ±e™ËÏÿ×® ÎâöµöÇKíýsFN,Xd5˜fm ž%4x´YËóûÅúiþ6ˆúåÙç˜ì |-ã]î‡AÄê($íÉþ;‚M±CÌœ6èòtô)þôþ³õ|ª°{ØŽø\8NšðþöSãÎPL ÜÔ!ðÉCn¼ÁFRƵE+4nfÑ»äZ~ºöÁ»dÐJ5=û”'ô—;N¤PЫ!AxŽóŽWÏ›EÝ6ó‚Îå3_ÊŠND}†ÜOÉ)fÑÃÔࡱ/µ¢äXNåð)‘Ec“„;Å‚[{ ø>Àcô@ lô '4g9~¶Š •Y=lçè’XPh„)èTÊ5;ø„-çc¯µÇ–Và÷=Ž <Á¾=;¼‡:*=¤—RŠìñŠhù(†ãïƒ:ÂÅ€÷­nƒD!–Ì‚É>žó’L,†5jv|;}¥@ï{dÊhðxlu`<)OyÈ ã²›,Q2¼á¥9Y‚Ôî2a,¯ëQ#ô`†ºìáa’ÒúOþçûϯ²ü¼_ôÏGåX~ €ù¢+Io‡˜vÁ!SZ1©TÕ@ ŠkŠohy ÛÅÌjð¨kDÉÛQ‹³ü†•¼ñÌO’.˜¦Pª(ñàðÇ!¿ýüjE®îX«üüO¯êñ?¶bv6ÏèÕÈó/ìÜà½âìqþÆ‹I~~`…ãÛd÷é Ò§+O{›<¨²¿½ú oTXù—<ÿ’5éû®Ã{UýÞ$0—<’LàÃÂõ¯ÇvUÿ}ù°\wásD–šýßÿY;7±Ds·<¶¼Ó†qÁc‹ dÕÁN0NŸh£Ããšu¸a¡- ùÿÁ 2äÛ}/böà¼Ç¡iîjTØ1Ùë_G[¬õx•)•4±…å¯s¸@Ê2Ûqêˆó.ð Èðaú¼Ìíw(²Ã]G{+6>='×ã·ãôYvRp*\"^x“N7üf~£üMt®‰¬7 \¬Àl+‹òºAçOˬÂÙc8i_\ŒåUŒÖÄL4qà£kVgŒQTÓ—'Qâ¡–5:?xd¥@ƒpíbrÆD¯y\4¥RßÊÕnGXÕìAÚbAí vtßœþ¹V‡Ç̉9dØ‘êeÁµZ-hŠÑck@¼¬5.ĪB]ß,õñ•·%Dr€„\ˆ‰b£âz}ð’Ë­Áƒ¾á–þ ·T”»ùäø¿‚€!…8¶pæ8Ý£€šBš~‰5xbj¿£ŠäÓZ>ƒ$Ççßr­•¤TØa¥5xB³æde RQ‚1Y>Ç¡ˆ‹`ÀM ¾ÕädÄ·‰52Œ³ÓÇT ß!Ï›;ÈxÔ—‹§ƒ¸KkžŠ×³á:AÌYõ ­yÊZm³¸ó‘°ýr°8ý¢çïä: XÇ+}¿#VÏeU9Η—¯¾=¯£4bTûá‘z> géí¸ƒj¯…ó5â/ÿ±kj+>:}‰þá;6X° ã±.NjÌo×Ç*S—må•Bvh¼=¼ƒ65ÔÏB9ô–D™ÇFÙniù;ÎÆ$T3Þâ ¡eéôc6¶tg¯1aŽ‹ríxœu¬6šVL?,™ˆÞ {8OʃŸÜ!ð'TRÆ¥D ¥¿Túäf‹ró¼ÀÉA×dœ¼‘S³R%ç{«*ÈÿºÝR+𤀣ãé߈Ž7Éjy³M:)oòóëBeð˜Ç?%O“|‡ÅÕà Mú)·F´m¤ 9h]i\¡v4@cÕ󡵂Ÿ§û(~¯K‡ †žviaù¥ƒ'üáiuW@™Eâ£ÜœnÊ7½56\ÐÎ`<ÚRìÅ"ˆ7ëåìvµÌïHY¬H˜†… ÌïÚJ‡oâeG¼(…Èx¢rÙo^Ãj×9{c±~òø|;ÛÙkÉùïëÙâ—õ䕦÷¦C]½‹T=÷pQJ}ÄNìúëyµ˜±´;¹*¶ÁÒäȘ;Ç‚µűàlOŽýº¼[œ\׊cbŽÕͦ?Çü…q̧>=Í(–`bŠ)&ÓŸaDŰBû©öåæiöt{r÷c+¢%纭vNÝù–\ßRt}ø¶üå~öíéföËÝãÍiwÆ7#ù‰É¦œPw¦Ñ…ùgd;ÙÎûùÃòn~rEB+ŠÚ‰)V;“þÜÂ˲šä{YÍoËÕæä‚ÄfÔšÞ'«›Hf‘»$fycB?ûøôøûbõm-˜”ÔˆeÞ8è`#•“êÍ8o.‹qýOß³o맯‹Õé{jF¼Ðï$ [þ%sYüÓ†84X£Ýâ<­Yð÷³ÛE®)³]™ÐêAÀKò¶f]ÕŒºsÍÚxQ\³Î·JµÜ]-ñþÞ],Ì[óÂ{w È6AOŠBMÒ·ÇÄóÎ’3^€› *@š~ù5x´MãÎ*âʟ˜’Å(®({ŸÔ.u¶%k'@.’ëð°¬Á£n•|Rž»øŒ’KÛ‚SÈ}Š&•k lÇ…º'TiO×àÀEð(}43n|Eæå®ÖÖÕ?£¥âB‚Ë•*“#Æé-òwØ´©DÅ7ù&U™wÂ:´l¶ÊRCŒÆ[¡°Ç0. 
´©È{LÖÈ0ÉÄ‹‹¬5rF`ž#â=ó¸º]_ïdxRí¡Á²@ƒÇ® ²3xP‡“všA`Ýg£“gà§?N xØ@8D<¡ «¥ÄNŠû–%ú²@£CVÛˆÒ";i¶%F[pÆ!EÑæqÓ—•¾C.x!·q‹'(#ìvÂzõ#ñª3‚a/EV1w®< —qœ(9àäÍŒ¸xœé±Ÿ#ðnRÙ¶xÈ·nt½ðþxêÿ$Xªl¶Ã• ˜j ç‘'% ÈS ¡‡ ùJSēӞÔ:%ÚÙjÏûùämSQgñ!Þbí•Ù8 çOëèB¬˜YêA°Þ/U§ V,€¬Þ{³}j/ÖØ ’EAžJ ¨S#ÌáLúÁ”dúÅ.zÝPŸCăÎ@“Cd~ Ée—¹mŽË{i¯3›?ÜÎÖù̾™ñ¹ì7m˜“s’·ò,Œ^»´ƒ&i0²ô=ÉÓóÐÁsñÎ2‹½“<>gõƒz/õowÏ÷‹ùf3¿ùš/¶?H< ã ¬E Š3p!`[ÂÔC?‹, 9z¶rIžZ‡¦ìù;À.HH20ÞLH–õ®|ÿy—¯ýB²1÷–t »}ÅËçSE~Qó=®¼4xÔi§å½Z¬—±T—¿®æëÍêyh,ò^ÖŬdöú¾v ''Èôu³tx<4'ÈÎ+¼¹›¯×$m[Ñ#µ£Gä©ÉÑ!(X‰'6#Çý|ù0ÛÚòCÙºVt ú\¬óANNÄË"Àˆ‘£²=´Ù¯â…fðÔˆeœ“Ó`úaždÛÒàEí¾Š›Q€ Ncœ|ù‰.iùÁßvù_üôÍ¡x}# €±¾Ê8'¦g/‹д—‹êQ"†fìÀZvLrÒxwY¤ñqdáѺ?£aEåÙ²}"Þtaº@Jsº¶éÉ µÿWY§Vko)T®}CÐSÄùº® ÔöÒ}ôto]m¯lq_/ïö4,B¨ØcGGÏ(]Ôý8‚f!øë勈kb”À—•-™6¹$=mä6 .6‹Qje¥˜õsÑà±Ð; è üׄfrÅØ-éçèNë1µG7Q/‹@Ú ›Íä?J «ÐLÁ{§¢Ñ±it#¥K"“7¿™¾Ü< kQÉ›ºSéÃ$:É×áY›,çÙë®À¶AuÓ£I å§kBBK¢Ìræ´h=-Hð=4„O„–Ý Ï–ßÉç¨j) ”Çyòí’ ;Ýõ³ˆ¶‡Ëê)W3õ±¶rñQц¹ƒ´¾\½ª@uöUƒ¼Gº=ïKT`Ý%ëñ‹‚}êöç†u VìHõmÓO¦©RŒEª”`Âxliö1ÍL3NQš".û¬¹ô ÅËÒSê'ís²l„ÕVÁ6bc&LRM¥¡°Púë{ ¯ÎêGòc’Î5 ¶†„©ÁìÄLÕ‚íBŠé«…)ñ4íRRÜÛØòO5À½‘Zvâ.ôð¦3‚ÙsP«¯ï†ÐÖÅÝâ&ŸO‡Àfœ‰µuã&F"]–ƒŒ&¹6,„oƧj¹ÇlzÐÊš‹r’ÑZ3‚V¿=ÿ²˜­ÿä%¸—&óçÍ×|©}3 ìBð‡Sp&Bo—æä­5/m:!uˆôÒà¡1AÂò]È~ô ÇÒE)]Ë w^&9åüÆó‹)µ¸œV¡˜ÜòèðxÛÄò¼—ÚOGÅhËb Ëq5yœC4c*hµfk=p®Ãú+ðœ³þmî²^…›Ê€¹ou”&ys¹6Í—›°¹7ÚÔ <`[5Üþuùå¸ð\QxÁX뀰]À0¶§úÔVÌ&¢ž <£2Õ·îÖçWQža*|™o¯r]Ï6³ùíýrë¯S3A†pÎÝØÄgž0T³JÓ÷³Õáq¾=¾-¿±Ä¦;RHS²ã#à.äˆé²È‘¨9~™¯—7³çõ.Üܶ¢™I)r v¢õEr®Q^Ìû«êŽÍ|?%W È»Ð/ËèПd_>Oy,f¹Óµ3ÑbFò®AMƒ ‹ï‡ßBí¡Áƒ8¶/¡oÞÉ‹Éî:äaÌ+ÝYÏÖªiÅË"ˆ6zµÕ{ćsb1Iž®Á±)4„ÒœÀ² ¤6WçŸÑ«Ñò ÛáÖKƒÇÅq·^uDe•‹ƒ*MÎIx“³>5Zí©)\;)4#MO ¯MƒÑ¨ˆÁ*ÖIÐͦºúÚ¤Ó8ÛÞh¦êñ²ˆìÈš UùÔ£aFn …)ó½UsýJió(¤Íë>êÝEñ-xjvµu·±d¿òÃ…“Šõò¸à!¦‘wôM n=`ßÃ…Õà‰#–’›W>ÏxÇK”ƒN”Þ†qìSL˜‰:¼±(ðiË^·MÊ.\ö·&%ÉöPÝÏïb ¾’@'ŒïYðA„ût>”—%—\ Þ&‘®1Yµ2o•\p¢ÚÌãL5ž¿Ròh·ãÔ7—ÇÒ>Èn³ÊþÙíìf>ˆ¯ì!FŒ{«JœäqƪûÔM‚7Dç„îÛq¦_nþNÌí•*ä´ÝŒŽÅê4c9!ȦÔÉ£.÷ÓÆŸT@´€Un¿Ýþ„¬n„^syœÑwï›p9ëaw¹fÑࡱm‰k;™ª`EïÎ{•™×A¥ÃÕƒÐp–,Š>€øQWè€SÝpvÏøõüþén1TÔƒ²mgWÉĉ£ ”=ÇZb³-ã±Æ4+õ6¨ÎõõNn§û9 ¦“<uE÷³œé¦/šñàS‡+kB‹+™—;Ø­ ß”¤-k®MðÆ•[yã0`jbﻀ¾,ж,JEMÒwüéýfó»Åj3¨Ó’Éc¤œ•d¤‰Dë¨AñûùÍ×åÃb_ø"R3€°ÃÊ+ð¶ Zz+º×µþüŠ×¤XŠ?`h¼’”‘ƒ0›‹zÅ1Y—1‡É/ótxPëU¦'$9ˆÎ—EG>¢‹å¶y\ 
^íò~?¬aò%­Þ?*ºáŸ6ó›ß¶ý­‘…¢Èr$P€èrª™o±¼“bÄÉ/è”xl ¿í·‡å°bQXÀt&„ €›RÒ.hWg£~&dzxq*<Ô¼šÐ{Á–ücgŠ’Íå<(DðÒLø|¯¾¤Í`º;lw žÔâåüñÛrµ9Wù°ƒùîØ;/i#4)‚m¢Á'ÂGÆt0ÊG…Éø¡Çy¿“+—Èû‰?©a„ÍfÎâ[­ø¾¼_¼„¬ì¤ºýŸÏ«¡úãû°•˜Ê¯Z)A6Äâ2ƒ¦¤ì-‰Ðxå–«Îg'›ì9»)Û¹|¸øÃˆ[¬P¤–C%Ѻò߯3’(‰*»•ª•©çéFÓèÆ–qm4+]÷°{Ž Ž[9%‹˜Ê®Äº0±þŽt6¦ˆdÀu3ɧ‰3Âñ¸&ÎèLDr|”„¨É¢œàEL¤crû ãX˜Sg€+0&¦ƒ£Œì2Ö‘C Œ÷d¸–`§lñ‹#m5ð8™ÜqjÓÛÄ— í‰bã™åAf¹2B(ãTD”ãÒå¸<\Rtç‚™¸*ÀzðôÕv;à‘&Wºâ#^C9äU(+¤c‘ä”.µÊ–;SDrè<6 (–—7Âc¥‹.w$§D†sï­O:®OvJøw¦ÔqE‰µ ¢X®²þÙ6ì^í@Eƒ'’+ÿŽÕ?Çh©‹ãÁ@©À©çL9á -bÙ@ÿÇ”óðü†› o–†G²=›nó¬·— N~±íŠ+ƒ$Rv••‘ è­\°wÜãÏ;w°Ö\÷;¦à1Y›1l#ñáu>%L˜S#9H+b^SRD¦²^I<Œ¶ן‚'5Ã.V=6iE Ÿó(Å.M*†+Š sÜÃè!xæ‘‚G̯ðÔ¹<Ô®îf[ÍP—oꔈG±,þ ËpX­p§¥M,œÑèÊ\Éä묱 …á´Œb!{ð9Rk¥ãÜA kbʸou“k¡v=™®|½!SjAG‰QðR9•Çûg´Z4Y)ñÙÉõ±Äç.8S;àq:GeºÇº„Ão£qͤˆqj4(a²DFsN@¯¥,o)xd©0<^,ëEsrq9›4T2fµbÐG/~¹äQ¸Öhž¥_ ÞY ºØ,ùòÁ7à^žVã6ëÈSÞrYª˜Ê‹A·–=¢ ô“ÁÒ•7ƒ<ªÏwPcs¹œyRC›,A­…Å7ü†•ä¤wˆ~‚ZÈâ­×ñ(µgñÎÄaIèÌc[Á÷Ï®Á •ò4᪵‰-^þ5U©ó¦‹åtu5žUM³>5©wæÉ5W4í” Æ×ÁÈ _’“ O#ƒÇGC)xS= r ír6ß!@ïüðCŽE˜ct ¬.Ê)-“«/å²ÜÝQj³[Cõ1)u=¬R xLê+¯KgRçÕWöÉ‹z5žœLªæìtQ-'8j6Ô‘+­R6XË’ëÂB¬±B 1Êñ†œžãdÜãœáù‡çÎ'üñ–âԩ‚DÒ‘¯ÕBdžÞ ÛC¿r˶jäf-{'äj\ÔÕàCq£xô¡VïqÞáQÒ)ˆãØóZ‰Ÿ¤ÛÏ ­Öáñ’8aµ¶q¨Tà kËÐr«2Pùe¦YÌÕ£ïcéÁçH†›8H½f´áÊïÖד·=rpY 2ˆKSÚÉØ46hæÉuL2›ðîXU uv¾ÑNÁcó¼H‹2hò0ht 0}šè2Ø'4èˆGª\%ßï’èÿµÆ$I·\Ú,\Z!YrÛè§^>!C <©÷|:_¾˜OïuHÙÀåÚ+:8 qè讎ÅLEIY®ð a·M-UBŒDš)u=œ©$à‘2¶ùµ^6S_0$tC#q¦U¸ÛY+' ÏÙG¤BY¼?\"ïœâE59ŸÎOªñ/äuE„2ë ÊQˆÜæhö¸w@›ÙqÕÇ(ïŽGäí+Ÿ’´¡9ÄxÚ1ªhÔÂË Ç²¼p?° ‹Gž|1Äã˜UAf9ú2#\俢—ù²v÷fI¡ü©eáòŸZ¶Ô}®V>þ†0wE»tØÊq‘-r,bª¸µ•†;·rJ±Ì€3#rÇ£Ùó­zr¯Eÿ`¡A)µS”c{¿XB`\9!â]ùr!·Üì0¤ÀSÓéÚ{Ë«e]ßÍMœz¢L(a×F=ì'úõ=`š¯÷ü¢½”C!>(Cz9Xv¸ª‹‹ÙU°Ñ7þ~ΕÍ`|¹\âÚßr9¯Ngõ`µœUó ý„‚Ëú_—øµƒgŸëÕú3š¡ÿÇ+ÿhMÓÍŠãÖ‰qîÂ-½ 4|ßãEKгM«]*aÙJšÜó Ͱeô~‡pħ‘‹@׸ª|…³2aG¯k Í;¹òGŸþ9T9ÚÚ8Í]f3èvùÛ˜ /ùtVÇ$ØXôb¥â–å¶‚¾ KÖCК‚GÈ|×Å:×½pL€Gw&Ð¥e2^*ˆTˆògIx€g=²è¨{ 2ߨ¡íèfKº—ÓŠg=§È‚ݹƒÑÊ•On ç*‹¹%ÔÊ¥^ œýíÀ£ òˆÑ-h€H´‚¸ÑÎÎwhÙ'pÍøN/:xäEGÚC”·º×Ë@è!“ø)C”Aä$Çgvœ{AçÌ*)£Ð³ÅW¢±–[­œ}¤w>dÔåŽk©´0 8z-{ÉóLWA`VCLG}OÊ>áX©ãx„%=Ã/¶9¹Þþ5×¼zF]Q…À¸â›fŠs§yQQ^(Ù•Ï(RðHÙ‡§ØÊlð†8"ÄåÕà–žÅ41ܺ~2ÃË«"dùN2ix’¯1î1ín²\<§¡dC …dÆEÞÇ‘3Ìöè; )á\ñ.)x8c¶ïq—[äV²¶~Šè"}ÑëøBÊziå3”<¸èу,&-¤S|´r‘» 
$ÇÑšîIáÊ–7‡<ÚÀo\Ó*ƒ´:FQš…˜û³tL©á2òêá-Ë›G Óc¼q±ðý‚é2v(”¦|(öu$'µë3ÒÈß•ÏúMãUÿ¾âšÕÐ1¡2&@º˜ÎéõTºñ¯ B¯•ù8x¿ª–«zrcÝ~bЛ‡Nد3¾ÍE=ŒÏªùçº9îÆåxPÍ'ƒ‹êj¶¨&[Ð[&á£9’SL?Do‘û7u3]"úk´Þn’˜¦ÍàW>üñbR­ê/¯–ã³éª¯.—õèAÿoû7£#ʳ2üä;Êdý벺N/ÆÔnkõb1¾À™ÕUSÿWsV¡å sõ§ñXžÖŠ.뛎óèS5kê?öeGmwWÒ K-þqR£‚3JïéR~hŸëÕ_óE7ˆ>Û‡ˆÝŒLZBkC& è5WäÍÂ{!«ÐðèÙr1Ÿþ{m.ã÷4/?MëÙdø5å7½6«góéìy'ðò+}úÓŸ¼Šïý§_þVÏë¶ ÉÚj:£A~ö—Îú:ðßøüû}ÌUŸA³z~ì!(?| #¸N¶ãÁëå¼á¼±ãÁ;ês2žÎçhµ¼¬ÑP(µeýðÝXÒ·8—GGú§SÕëÓñ?åKç»fWO´ÄOßVÍê‡åâ3NøfD= ‡oà3áÿñ÷j~Y-¯Žø[ûÏÿòǯö+üFD€öý®µÍ/‡µçß½­VÍèÅ‹jŒ>¬¢U«áxqþ¶ZU/Þ}ûþÕ Ú”D»‘ÿì5ÚÛ¼ž5£ÿûجèåˆí?<§S4„É5Ç4Ž?´ÃÙúˆ,áíýO¿¼_Õ££î3úu=ÁÇ|p<Ý/ü×Ýú—?w¤ý|Ôæ¶áåYÇUkÖÏôæñ39§WÈàÚß»ióKãÝÕµUû¼?ÏÒq7±G '÷pHß.ÆhÁ팢 ñaYÍŸ³óí¼ôӗߪÙl„ÓVðOÊ)§\Õ§cü«ú÷Õ¨üdV*ÇÈLÆ8ïýŸÒ×Ó—ÑŒn¸¬Ùó?Æëê¢:ÅÙ¼šÖÍš)Ý||ucIþwHîëváë¸|ø÷_¾ö““õß AoÿË«/Gÿ}9‘e½^̛Ŭ¦ßÜd6¼nËYág~¸ÞÕŸÑ.¯üsï5èÇïÛâW¯~øŽþõÏnÙ{;ýT¯Æ³ú{k,éwçzËÕÅŒöùÇ›’{‘—ÿ™/~›?N…÷ß½ŸWÍÙbåÿ9[\N^ßô¬\ƒÑþݪˆþsOõ¿Ckù|¶Ú@Å?p‡òá’¢²1ïñsü™~<­–õy½jaÅ ûƒ¦ÒÅl:ž®fW·a­uo[V|E/R ÒŒÖU'×ûn×UÆ>Îk\Ì1Æõ½þi£Õõ ÿ±¼ÂÈ´[Ö^ú)8˜´áÓËÿÐUd0C¯òò«÷ømKƒŸÙ¨ÑÍ"ßÐ…Á·ŽÇ¯ÇôŒë%ùrŽGh\“Åñìñ@¹Û5Ù»²ç7qÃþ&…ßÄÙÃo:ì8ÜwÕ÷—£ÍÌ­Û¡¤¹—[ u³ÁPõ›é|Úœí·<ㆸà6þ¼Ù¼sB ÎÆæÊáÒr€C–Ý6€‚®„€ˆäe´r¬ø;ö9F°H®t+'xg,ôÝòyJMRãŒQÚª˜e#¹è󠥄Ni¥Ê›E Ãú?m¹Cm ÿB0:8Ò¾iH•VŽóC8Ž2ºúüd2“D<Éí3̼ûoË\]ë”TV„¯œ{9PLÀ‘SGkSÞXRð}8ŸreHÒ|(¥âFŠp\âå@Y8 {)¦–†Òw+ñÈ¢ù¬7,žœ^]}’É´Š ix¸æ#ÉI+ ;–Âè /o xë%6ÙHª’êÀIkÍ#JЖH¹^ÜFY-´îÁ4Rð(ê%“úf¿»l–cªIí ¤h1Ô¸}•܆1½œr¶löû*p_35ªÜÅ—Žö9&V\´“Óýø‹­Ì4ÐFé3afINëg+Ó‡*ųâÓðHΊ{Ž;Lª0“N$Eî„3®¼ÇHƒ.¸4\oåÊ—!îðXd âx„Ô½yŠ;Œê £Ü#(ŠŒh€rZA"AÁ¸Õ Ãmw[9£xy£ Ûh¸ä…ÛQvrªè}ŒÉnoIdQ‚Å0(Î"*ædÙ;2¥`ïˆÇõrƒî.›6ȦÁ`Ø*ÜGÐ e–†½áÓ[8i¢ð)b.o „Ç€”.ŽGSÔÜ”ö4º Ž£u‚f1v\€-ì á– Ê ÙË>â.*t¢'‡±K§„ •0órÂZþŸR 2À¸V<ò’儱ºŸ˜(iàvÇoùŽu¼E¸¼EÚCy„RK©Ýo·á{‘ŽoÒ¡ºq\†š1„ZÖ 5äÔÄšEª1´rfïn!%àkÚ…¯Yùö9ΚpAµNNCÿÆ`‚l‚dL®ÔBràœÝj Ðþ±XM?]µN•h=tÉl·‰õ^‹f|VO.Q‰.i|2xS£ƒ ´ZÏÿÑ7”;tDNøèÃò²>ÚŸÚ×*#å@ÚØr˜ýøRéò¶œ‚'õ^Ë–Î2!5©kœhWÏX¤NrZi»Ý¤õÇÁ‡»¶<¨fÔÕê ÂfP P¯Íœ Ú¦ šÝ„˜×hÈh»Ó9ê~>¨N—«GOoÿ~&´“b£ÚBá>Ô¸ˆÚm¦¶@ùoW-€r²‹/õixROá2’ꂤRöC€±¡qÚ0{8ÓȦ…*Þ;(Ò¹êþGy ÓŽ…‚|3d”IÅ#½L[9%òÕ‰ïK5k¸`qÕ´)¾÷Ï1 T8k¿“ã𴌈™TÚÉ “Žh&¬2ɽ*ÿ,ª9Îdy#JÁÃM‘-ÈCw(%£:Ô,6%8®E™Ð½(jQ¼Òf"à%ã“M\BKË®&êû,3N‰¢aIIð¸Y²å 
!4…’mNÖóÊ۸ÐYtg,’5…rRBòÕšžÑÌ1ÃbŠ(\‹¿:ÆçP«wájnÓß"¡‚$rôCÉh_ÌKõ·HäC­Yù¡çF3Ü„0³ ×÷"¡ƒ\z5ª™/"ØAK–Ü ðÒ %žä´+oøF8Ç£øA äUrISLÄ ”c ºHdS„3QÞ@Rð¤fnï-gÏfƒLSk Žñ‘Žh¦ƒäJÞÕt‘F óČȅ™‰aŸ‰ÜöðrF=5#Š«æh«æÖ F…^ìCäÅ>}™¢6ÍVDª˜èa}Äçpn¤•;àÑ{ÆÈÝ2s2®gâTÍ.8j†:‚ˆM5”Sjßø8b«¤2‘ÖS\+>GY'•ˆãQ©‰T[ÂËy o/-WN‹H/y/GM_óÄÅ}çÒ”7€<©7r¶‡’›¨Üð™ç3´Ùt¸¯gZZné@r¨¦tÙbáƒ(`Ûiå‘‘•'í¡åSÊÒðè§3ÕÅ´M} ½‘528r¸1gŒ3`ÐÂi,ÇéL.Ô8¨¢¨©Qù¡ÇçøÛä*އ ÑÛЇ'­Äõ[rÌ©þ†>j^>ß* –ùævãR¹TVZ ‘Žç^Npž3©¤oðÊB Ëû6äRƒkˆ­L ‰±˜!d¯¬*o)xœÊzB»Î-Ÿ{^gRÀ†è>9h~/ÝÊvÒ;¡Í­ˆʆÛBtr¼ô~µ{ަüÖð¤Öž‰œ@%òꂼ]eÔ&œ¥ë夞÷(íPŠ%ËH ÃèAl SøQÓPT$¸$z9k¸;¤‰+‚[;NèäŠwcížã¬¾sHbyWá« ˆµAæ$£"Å[Ç'áÉÔÂ[» ñ›Aî4½jB!Áª…J|@°œ©ò‚‡—Ø~n¢‚š6GEÇlÓp!+±Ùxf ‚ÕÚÅ®£%”ö<†ÚcÜs—žEdÑ­[ÊÆP;a¹µ…⢰…êaðSðhQfõ¿¿VâRI·äªù¸>YÖãÅ’fà }G;½‡z †Üâbe,¾Qlå¤BqÀÁTÒ®xȘ„Ç0ÖÑlbUYL‚ËdD ¦`û2”¢j(éÊG U<\4aî4å·1P1¬·žP<\Ì6p‘9ã@'àyLù£B/¤P2NH-"ÁÁU_áb>ÌÖ”ö<®ÇpÑYTtJŠDP£œ–}†‹QØNRSÎ8l'DùÁ§ç8&9ßyá¢cA†5®Šb欨+{áb\%e gvrå÷þ9ÍXˆ8#ÔÁÂEÇì‚qœî&Ç´†;y¸p1Ÿ¸5ìÁ8ðˆ^ªXnœ†žYdÖZÎÕþŒhb hè§–[ª/~œ†Ç0Õ“‘<à4tìCÆQ©ul_ä˜VZ÷eå”0²øIDÅë=dY§t™$¨ É1#ܽG6Uô`$IxÜ¡¼‡ sê0ª Û10’][&oºaš×]I÷i†y×8©QÁUŠëŠÙÐ>׫¿RIºv}["v_d°¹‰°Uhxôl‰›ã¯Ídüžæå§i=› ¿¦Ê½o§ÍÊ7Úí^~E£Oú“W±kŽý·zNCN£ ·mÌŸý¥³¾Îü7R«î1YžÀ˜€=?ön”>P•ïN¶ÿgïêwÛ8’ü«úcm$¦?ª¿¸Ðgs .{1b#rlK‘:’²ã5ö±îîÉ®ªgH‘’ØÍ&»ÇJVÓd‘óëªêêªêîªu“Ýz4`§½ŸÑÆÉ1Aœm+ñ0¤#mÛº¶Õ¤ïq&Nô/o?©êåÅð'8§.ë[j绯Ÿ4½È_Íg¾ÏôF _¾êß[MoªùçÓ÷]{×ü‡oß¼|Aír—(ÅÁ´­òýöÀ$ÇW87:ªÿx÷Ý/¯—õõà¤}>ö½—ÿŠpMÂë^O«ëÅåléÿ9™ÝŒ^®/oÀh>A³€CDûyäð@mùp¹|€ÿ5ÕonÈ'‹1æ5¾¯éåE5¯¯êe+ΰÒTºžŒ‡ãåäó­D¸Ö˜·£W|½y¾ÿv]eì×ÞU‹9º¿¸¾×¿!Ûhu½lºÒ£_Ú.kç~ öFûtþ]Ez´*çZ›Çï6ø™#Z/òø ­|kxüzLÏX-É7S48Bãš,N{Òžö”»]“½){±ö¶LXø—þg÷éëÊᮩ¾»íR`κô"®ÀNßw躪ߧãÅåqñHï9U:sàäÕâŃ@1ð£äs¸äÑqmø×˺„‚¿¤APúwŸ+±*|%6í¡®üM‡D<©Q<ª×ìjÕ—xT¿O½wÒ;™ÖKúò¢¿ 5·{”ÐaÓÈqO'’¯C=Ü ÷ëì¡£:§-š°ÊEª-8ؽ@ù‚Wóz8¯+Jx¬ø³²0ë¬ÁISo½q VÆåìjüauT„ 2i¼J¿6ÞâQ.RθÅÙæHËÀ– ûðÅ¢¿~Ùÿ»]4jG§pCµÜàê)¿àé4O®÷øè† ËßJãmf¥ àõ6ò0i*f冀wr9Ø¢¸ˆ_¸òtº¿2’I06ŽG¤ÓZ·å¾·c½Ë¨n^„!œ4äÙ±põ¹†NÚÔ=šӕmâеë`a <Úˆ¸ @:ȱ˾'¿i¾ëvGÐEæ*VÓ¼¡K/0ÑÕ„VZ±øD'vÁ)ÐÖjÇ©*шð~v3]ÀÕ„‰A®;Ω尌ªDú°sc˜ P˜¦’º¼rÑs¬±Lí'õÖqëÅo·]|Æ·®k V¼$°8„ãZÄТó¨&uº}½1qn˜‰Š ÇÞÅ2Ex4“VÇñðÔzbÚCn>Å-û=Çt˜c–ºûh–¥—üÌ‘3ëˆØÒé7øî§ïÇ%Æ£õFBY뺙¡úŒnj‚ФN8îRC•ßÝmyK‘„Ç1w÷r»¯êå|<\x~Ú ?¹PŽ)T?Ò ÇóKËdyw6 
’y¹ò†°¹v3<Ü«ßõ1`w¶¤Üñm:c4-{ÕÇj<¡}ÑcE³yÁ<”J6‘TrÚCËßœIÄ“-5»ãåÍŢ߄ªw7‚™ Õ ú~‘r5DG5K³¥áºnÌ^Zg£Z'`ʱÈuzE5dÀ×Ù{cóðBï¹dI:Åeù9‘‚'õbЃù½;ì’av)F zÉœ8`9ò¥ð)ÆYyq¦à:WFiƒgßl¼ÞtnÑì©-cJI3Ú8u¨3öÕ‘;¡Ê«@ iÎûl¾m~É ¿Œ¢FŒ†ÅVaã/ÔžÄé  Ì¡|MÁ#õqÕöšdŸãB3,:TS~lênñvP 0=bþnøw«SèìŒê†i*Ì4Ãq`bnÒ°3üPjÎ;mÁOÉì¥îsÉ(!9ëéTrI–¯ƒS—ϤáIÍì¨l²—ý3A. «œn·ã逛LUã»…-dÂOÁ9îGïâárNlt6¬<m‹T)Á2‰ˆG—eO¿cØšïX”ˆGé£Sá±ésv›C6Ò…9I5&†\ôã“à_ºÝ$àI­ãøðÔ>fX˜‰Nk& ª¹NsYöͺEh²—Qô xR»Æ•pL‚f“ûvˆÍ°G%Fjg“©®Ú^óAi%f]¤FÑ ay¶ÒjÝ"—¦7.uÙ}øßÝ\AÙ‰Ö†q ANqߙπߢ "Žß”錯šÁŒÖ6ŽÇÒ‹ôp>†’Z¦/8F¼ÜDN•¨›Z¡7·±óÆ}h ½‘Ox)x$'¼µÿ:YÝ{^÷¢ßýц@]p£Éôq'È|GÆ‚tŒÛØñ1 ÆÊòŠ’‚Çåéºp<‹E˜ÅtUƒvˆ I°ãË*ç´\ÈHɬ7Üv ‚&”‰4=jð¤ºö»KNîò¬ªI=_® MPA^*a©­„4ì ;‘Ç(­Ø cR–•×$<6WÌâòäj«×sp·qYEy|[$‘Ž%7õ*®à à7ŠdÜÉ"Œ’Êy«=Çjz<‚å‹Z} *¥ðÍÆë-Õã.(=íÂÆM«Æ˜FÈŒQkäè/¨H÷†Nw¦hª9(cq<’éÒ*0\W#ò=xWÛöQ|Ôj;²qâé„eÙ´ —M€„âaO%:X‘†Õ²šÌ>ló‚<åt Œf‘1pEEºPвƒ0å›Ë¥ááÙÚLF&Úö‚ï‚':,u¾Lré"C”ßÊ×0è«a£CÈiá‘ÜLÚCw—wVüÚ{ý÷q£‹×íñh:-޾¿ÐL0* ×{ZÝ^q˜}ćG£zz,X]ÞÃJÓzªv÷ù{¯UHëõªËÌ»‡ð!˜bGôZs-Œ‹¡ \¸ânVšw^(Ñ*$àÉ™»¬g-Öf(X$ÅöµVÒÍld$H'D¾2?¹´ZS/ gyÌË!:Þ×…Ï‘B…=ð(“ÕF´KQsÜf‹§ÃÙ¼žÑå+ÏѰkèÐ8pÚ¡¹[PCõûõÍo®qÁÕt-mŸZ‚GÂr›Ç±7í+¨2wû–/JÇ“F ßWã ¢~¾ü|]Ÿ?{{ûÍí¢?5«V½x†*R-fÓóg;^×Ëg½«¦ òù³oÇ ü1âÇìÓ)/Ç×½ãê^µÅÙêë´²~¬§ËÅj‹~:‰¾B² Ò‹5¤¦Á଺!¡?;V¼Žkµ‹\à„YÝP\•4_vnªgûôÍêœR\U¦Ý_S÷¥Ÿ_?Ïn–5U;ÿÛwþ—¾­©^éÆg‹Áà[” U³^ìû• 5¸w¯sØ<½·¸œÝLFþçE½ò_/?Õõ´w5žÎÖJ²ô½1ÞAW‚^Ó7‚®‡¶bš|îÝøÊ¹Tþ²šŽèÕeS~¾^,{ÏG}oî÷ý_[Vyø¬±G‰j¶‹u]­ž½‚~üÑ÷kª0ÔTM´½ç¼/´B?\O5$•ŽT«Eú Cð/ч"‰;|·»›¿E²MÇ÷©ÅSЧ>O}(žúP<õ¡xêCqÔººUò©ÅSŠGЇ"I +Ù‡B+Ýçàþ¸Ü }?nZ]Â(tmqx–‹BJÇ£›¶FQ!–ŒÇpò'–†bÔ~û["r(Çö-STìYóÈC-›%¿×lå×ä„ñÉt­þø°Œ«ÕxzÚ»¨‡Õ ÚÓ+Ê¿-Ñõä•4}ƒ(„tB.z—7\HeÆÓaÝ»]a¨ö߇SÃul<œátÞ¥B¢Ò¾¹l´aÅÀjBè?{tUUÁá›Î–8ùç £ùÅ…Óã’2žR§”^u1»Y>PŸ¬^Gm 9Âà¼M_ù¨üä;2%'ô#'oÐç{Xù­5Ì ©×NtÚlôå]ÏLû€ 9hjöžs‰vW«ì®ôKXÇäcK¿¤ Wî_-ýb7¸‰ZÚj°]¦_IùÔô)ýò”~yJ¿<¥_žÒ/Oé—`ú%a]«ŸÒ/Oé—G–~IPàÍ™ùÓ/²¯­QF=7¹¾ Ò\Fly:ÜÆMk,X]›ãÚÏ?¹¾4(9Î{ Ä¸.PÏþ€h‘ƒÒ:hAëâçÉ“ðÎŽ¨Tx¯ Ëb8TÓÏ7ãµn ‹2eb퇈NiŽ)UØLguyá¦àq¶œpÃSBk&ÓJEP¢†n6˜Ò˜òÂMÁã ·p/gˆÝûÄ9äœ)¥Á¥#‚Ô m”Ì.à4¨Ž6llý îTΖ2⃈âqÌÚ²Bò†Îpe,/&p|CáëAu¥¯Ÿ§âq%„|E5ÕÏn-¡‰ðÏp%#5R:¦ËˆºàÀ활7Ü¢ƒ¼]Üù'$Uì×a+äép=â<`á¼ÀSðˆr3œçÂŒ£…ŠÇ€:P¦àÔΊÔêŒ8>Ça(¾TÖⱪ¤ˆ 2Nj¦0VD€"reEGj´6âH t`¶¥æŒÇãLglZ/é‡Î> 
_à"È=Å”âL³Z@ÏC–1Ú¥àvá”%àQŒ•¶sÏ Iw$c.¤²Üš.„…‹sÖ…km7tªx¢¤yŽáRDg6áv¼ °}é ë´’‚:çÊTtÄ$ˆ¼W~s\RMè¼³<J\û ñ‚¼´¸H[£y,”°¸pqUrâÇU7«-ÞK2‚³>"fk9¥b“z×[[ZÌÙ°*c;s«s‹y:›Îg³e»,ª 眖L¢üb®¹CG´€k^*W²¼Sð¤6nHå\(ûÈûÔñËmë§3Ür *FL„³á °Å0ÿ Ó‡ÇŠ2±ÍÙÆ9·Z‹ðÎ Ñ1ØIô™qLqÉÃæè,¶HÆûÐülxÁŠ'ÃÒð‘5×=¯'ÕE=y`÷à^ÙPä)ç`h##cà(ó;›IÊ›€5íÉ'ÿ<’ß¾|Ï52,p`è~„;Êy:¦•ËÐ̼kÔVç=püÏÅB3ÂãdzZª£¹Ÿ0PúQ1 ËeO¯g£–…¡•I‡9i8Zxà°zºÎ,A4ÚH:¼.xí锵ùv¿sð;ºU^sSðä+v}=Ÿ]Ô!&š0µÕŒ›ÈQiOÇén%йP #:}žÔ³ŠÓêª^\WÛã·¯Émö_A7#È:+•²tA-•è’Ï7äÇê€EZL´t¢1Ós,¢å{à1¼°˜C{’ P2Ò½³¡“F”s«<Ò-°¥cÅÅìŸc97lÞŽíê5¼™—ŸéÊtýÛ’Š‡/çÕ˜ÊÚŸ¬ãsæ‚ ãÄ-‡¾p Ò)Pùíc–Æ(É·ÖZ:éÊ‹Ÿã˜–‚ÇñXË2®×mí¸ +Ã3Fн-Â݉Ž[éíá2¨hBÁxya§àá¬Äœæ,È0‰â‡¡’wP (/ÒãRЅŰ #†qZÞï|k~]u >¦ñ-º;×õ°uT§­hN{ÕtÔ»nß‹ÞlLÀzê2ûmÓIzÖëÁÚ±/z¿‹õÇrÇî>ܹÔpÀû¯Ä“8!ÇÝ·9Í=TõòÏä²6BÄiõ?7ÄØ£‘=Ü0Ü*T<z9Ç©ð錿³8?®'£þ_æsê¾Xú¦Ú-ÁùŸHúôÕ_ü_ûw¿üG=%‘“4%²eòókµ¯Uÿ‹/ž³ß†Œ1Œ¯™b/N½õCòÓÞ›Ù²š p²­j×£;íý\£ ÇÄ9X¢«ŠBÖiÛ6Õ­&}_-.'ú—·ŸTõòbøœ£œ·Õ®ºiÀw¬ËWó™ï)¿Ñ®›¯zuWÓ›jþù´Ç}‡îõÿáÛ7/_Pkì%Jqðm¡~¿8ÉñU#ÎFèÒ„ï¾ûåõ²¾œ´ïÑǾÏú_ŽçEûÿ¹[»qþ®eÚ»“Þ˜˜†o”ç>¬ÚÐ|¦Wwdœîv>¿Õjª°2÷\:m'öàwaäî‹Ô÷œogMˆ7ójºow±§W_>U“ɧ­àï†fò‚«úbˆß¨s €ÄX ”c¤&Cœ÷ë–öôc4ãÑæP³ÿD/«ëêgór\/6Tiýöÿswu»‘å¸ùUŒ¹`Ý#Š¢$60Én‚ ’M™Ü.‚»¦ÛÿÁöÌ®±ØÇÊ äÉBž*Û埥::²“‹µ«ÔÖ§OERuÿ(IÓwBîÖcßrùúßÿå/Õ1?ÝýFzÿ¿¼ÿË7ÿðËÙ¹Jæ7¿·Ylýñwëëó«û QÉ¿*}égÓtýÇú‹(›ûéƒËIkè¿_Él_®ÿþßëoÿ¾Ýöþõì§õÉýÉùú÷›ê‹úÝÅJ´åÝõ¹ØtúëÓž~»REwûðò/—Wºe¨`3µ¾+󌲣o¶S>‘E’ 9Ð3g€Ò7Á›Kq uò«G’Ô­Y~º›ðÌy¼ Dåcˆêtb 1ê#qé½Ì1‡AOBÕãÞÒ@—V±#cìÒ.çÖÓìƒW§wÁ%MO2Qi–Äò2-ýLåÓržÖi’|‘¤à•$-P²Ÿ÷V·•ÖÓ]Õ‚'ûetÕD[,Ó3G¢lÂŒÉ#-4Ó}qâòI'mx 9OnÊ'D똴ïö©H§ÖGLl½Ð¾iç Ÿnöª9›ºYÛ-ÿ6åÔOò‘1WàáÃ&¹¢²é·kMh˜êR! Œ˜ÊNa¹A¦ö›Ã‹‹eþø-xšKm¿Q´Úb/ÙKúZ.°gmé€ôÁäµñ€úÚmx2 YðÜ'±sä˹Úsû‚ï,¡mp ×o:Íw#ž€íÇñ´¤+¹Ì%'´~´… ¸L½„°Ö#öDf¿OlÕîÓ9ÂÅ꺆Â=ŸoþÈÄ,™õIKBù1“©]‚Hÿ*KØ5e8˜tö ü&ÒÔ.úf+véõÛ¾àAô[ -x|ê¢ ÷%vYÌFWd6xt¡|0µÓpbë¦8ta7 ey©OsT£3³±Èl.êÅc$Q35sݱ¼7 Å//$-xдªöƒD(óËzËŒÊUÂ7í /½Ò'Ñ ;±yƒ=”Ìn€¨ P¾è·m‡´À[f~.<+ƒ©ÈiNŽ#‰ƒeŒAÚEG#­å½a@É ’<íR^9: zÙyä@h”ò™ÚZâͳÒÝ0°¼0´ài¾!ÕÓR>‰˜R²»&íb©ús½ëÞI¢›€ÇÅOañ´j†77æ&ïÖçš´wsÿ´û†ˆ}"»Á[p9ïßM!ds 8{°x)æM?ˆ²³f‚ïu×h¡M¼r‘W'–dK–É1çÔíìf‘heDk ÅW¾û ˆöÚ©]'Á; HpE^#xtËÏBlÚAæ÷{ 1—OY¶í–÷W7ý$ôÎUài­|ú˜Y>—S(ršÅ=G*X‹NÚÉX…cà "‰*ö ¢°µH? 
\06xò» FÙdc'ÿ¡O§ì23õ^̳‘Xß÷€6^ˆËË€öÃ²ç¦ <)ök=XTÒTyφN¾wþG?ü|¶á²æÏGW"Ë7g§§ë˽ø£¾ Q„¥)ýDHì¡Oιž§dß<¿æɚ騶£·‘Å€•3] ©f"\6¼Ý€ÒÝêöçÿúr³ºþ:Ý'ÎðÄ›æ•\>QÂÞNI+"=YŸ¯Xúû¬«ÿÏk’=m#OE³ÎzdŸ¡B2ë¹é`söÔú¶Ö"úQ‹<­5Ý_-Úç´Mla/¶¸ù`l,¼ä>Ôdr«}PÃV¯õ‹ŽyÉì\úH“‰ÐKº¾9»Òšeçë_×çsø—Måˆg7 7——¯ÏW—ëG%7eW…’ à?9HÑA.¿Zµi‡.ͼvÖiƒiC½xÆo#žÖX@™Ä·¸3f<ˆŸ$“™-¬ˆ[ÓŸ×à}L©ü<â¶Ž˜øàQ7´P‡'þ§ó«?Ýž|]_¬Þö™Žwr(C,Ò.k©*²`Šº¤æ#„wÂÉ~ùémÀö˜Þ’½š:q—]¦®š¼] 5€…åÛð@˜§À¯”¼§9~‹q¿ O £Íô\äÒcd1ŠÑ²{¤¾O¶X7©m8@ZðZ@¼E¡1ÿI”#OælÚÉà†í&xòÄ 6x0ÿI8'_'¤v.ÓÆ8„`Ù$Ò9,±æÛåT.8Z˜¥]L–:‚ÖAs©‹;\8]Iåä±ÝñŦôÛŒªú¼ãŒÉœLž ³±,ýˆ¨¯i'ËL“¹EÀ%]Ö~8óó¶û…]ÖÒÃtŠ!»ræ]!K¸MÃÍÌÐy@ª–ç¾]5ÆážÏ•W°„!GÀÌÎG.]ïåÛP7°»ñ‚ÒÙtØ9›v³;-̻ܵÇõ×7ëÙÇ&cOüêpæòêt}|·)Mzô·b<ߟžÝNEDNvk°=Õ2=ˆý^XÏåS'ùïi

–嫚´[ö½ w Æí"pÎ’@ËkÄCCv•)ä9±Ý- K–Ý>ÞÀqmFY«W۽ώêAyÿ±¤Ãçõ f…»‰ ò¬Q†1BŽBøXrÔšÉ_Ÿû9ZMj‰|ÔjØ;‡L¥L^ªÉä­î4ÁÇ«VSè幬|y³¾½-Ý'Leƒ8àåZiçVÒtHÊ»ôúvÙñ#¢%Ú9*?\¸m‡´ÌU‚o74¿žh,Û½!9/6ƒY$DNanåƒâ›9Ÿ³u—SÚ…¹iŠ'‘³ñï“'PCbyÎ §êl‰ª´Ãæ²ÆK¢¦à£9õÚ΃h?‰!» <{-øW<¾ü`bÒLÆ2‡d"— 4,´ «ôzv4ËœoÚ‘!Ég²‹"h;ê¦ò_^ÂÛûÍĨ! Ù—}°´?¥ºIB!n€žì-xrì•ÃQÇ$vb’±£¼3zúHÂÀˆÐK1² ÑÐ2Ï>›% HÝRj.ñ!w.v•<ÕhÔCnë”Òšðäyžêt=Xo Ÿþr¾[Sø•ÙdžŸ‘œ|ŒVÉÅè³o~faYÅÚ}]x$™á‘–N zš.:?Äe}^ÝžèxÂV–+oÄtšðt»–¸¥ñ‰ÅÛO?Ë_xyÓ ‘,é .EÓ?‰ú´››éÙ÷\î!bô`£#l|é‡Xuþ7xrŸzW \–ý%ñòRäÄ&viç\?¿#xY‚É*Þ?ç‘ÅCB“¹œ¤]¿øÞ>:÷|>ñZ6•PòNGûnà!¼–­­‘³õx›´KbÿÄn×»$§Y-xØÍÈ–{®kwÈ{0:6ú·|$™X·¾dÖ=“v¾ù``” ”ôñèìA„qé‡|ó´EñÄn¥ .V'_Ï.×Õœ–“î3EÀäÌÄ,v\€n‚Ñ}úX YÊZÛ¨¤¤Ù‡}p6ô©C&í»*±œmŸ9 ‚–ñ íbóÉAÖ70Zð´Þ-{doÙßeÕÉQŸásÞÂ*ípÎËÉsE²'˜ã<…#•º9ÞêD •k±˜ ½ ìLÔÞÍ~Êf¾hzp„¦O®`Ù˜râf;²ñÂ1‚·(,ï,¦¸³æaº(,nöÄœÅu5eTÚÅö¼ôÀÞÆÎÏ¿Q¹%–Ô%êËD(¯í|Ø_¿{LQçmÈý±~óïÖ_nV§òCuùf‡X«]ÖxÕ×™í¥›`g¿xô« Oìm!“ËdF"}¡,ðjëóá¦Lç•Z: <Ñ-} šX,/$1²F1œZ44A\(ܹ,ì½ÒÑa6Ϋ›:Ë?,Õ†§õmÈý+ùîfõeýëúæV>»8û²y¯ãÓƒÒynpPžNŸ9`2Îeí°¾¯ÔÇÜê§?kÑk;°Wã5W¡ßÔ=ìg°Ç(>!ÛhÅÒ~çg4N®.oÕ¦™cqAÔ„û, È“‹vÛ™Ó“¢Ïâ™Â-ípÀ6+ý '1·lš2; ž‹¹Ç6Ó‚>‘€z<­ï1>ßõêØ+{ºÁe½)Å–…!í¢›• Õ .Gm¸2 6médzN¤§p³«A“בXömBöÈCÿM»ý׆=м)w~öÓúäþä|ýð$òñõêägùasÁvÎÉÓH£.°)Ö.ªßë:ÍcB°2q7í X ÒOÔ§}*ðôßûê8-{â>{Ê-ÃVÛ5Gþ5s9Ûƒ@àÆj?]â <1Í®´yŸeÿœøŒÁRÚÎÞXzz‹’Ð@áMõ€Í¸†¶£.å ´ìx  slÉ´´k?#8†²{ ”è é'’(Š`ã‰gV7œsX 8{Hhe:0´Õ iÐ̶) ˆ).ŠëCxBê[&wËãDã–E!ñÓöÇ—'«Þ'ƒY1cRˆöHJÏþî‘áCAÄ@ÉÊîÛ“…CbMX0‰å [æà†(Ÿ,J.T¬”0ÏðÛoÖ_Înïnî·9w›R¦Œ…ìsöÑT9>Çæ#–>ø2ëµO_N¬H¾Äcû<ß3ßÇš±òĈ:’)tÒ.äŽçÁ0!§hÄ…ö#ª(§ iÙ¦%g^Ï>8„aß 4Ј‰oÀ8•®'¶¶d=ô&²°Y˜xQ}ô¬¬Á¤•¸töjž‘û½;»ÑâÛÒôåRo?m΃åÿ.ô ˜Ë1†DɬÍàcj5ÏR-i"ZÀD§—½—Ÿi­4ħõ6žä©ûŠ.ÇßYìo½=•1Qó‹Ëa5øœü€XA žà;¿%½ÕmÝç%C(|r!úÊ~ÛÔÎ5_Ø)®S¯žT«Øè<.~ájÓOu“+ðtH¡¶ìo/Öw7g'S¾gDä“õbô¦ïP¬£/tHÒq°¡òâ’[é¼<­'Gs™Œe&£f!'#…zjGéK\íöŒ Â'p™BÌ Æø¥]!Áh¡EPž:ˆè‚‘³içhùE ý°·jƒmÚµjÂ7OL.­ۜȉëdp-Cp™ƒ96-³ùî•àÈû×ÊËêoÓ ™ÖÊfÙ¼Í €FÅÆ©o~_íݧX+©$Îh3XN‚'f²ñ„æ?‹sË\ç@@ì,#NÚ…öGÚ>Àನ ¶WØR; RÑk>Ot<¿ÀÇí½|tñù‘ûϹaUÁ:.’‹.zï"X––´“eÑ¡ÈÒ£ ²9Ø£Áå¯I?Àz'%›ì‚8ºôÞ¢]™\â˜ù`)ti—ö_ù¦mÕnzÕXã'â»/s6 €‘=p\>í‡bP¹×³3³·±‰érü' †Ddº„ÒÎS¿—hÆ -èÓŸÄæÐ‚sl¼ÿeîêÖ¹qì«ø›ëµCHÎíîì ÌÜæB¶•nmÜ-lw’yúKn·Ü–ˆ‚‹¤5?_›é:A€ùäòŽSÇ.ž‰ 
®é„¥¡gV%K²ðÝ™‘¨hÈ€ÇÚIáøóž6Zƪ–£å=Ÿ*Uô06){fbQ@ ž·<õK‚Òn’KõIM*ÃçDU,æœ|ÐÅb‚ä)g¢™æ¨Ù\ǾYÆ\ù<…L¬Â¤àóÒ"Ä‹p²#R£ÁÒ4¥Ó+–!i½«öã|ê8½¬¨-E™7çT˜‰Cì9½*ÎÒ‘,é«W€ÎK]&% 7\%.}W´%šJÊDh’¯sòðYtD¾®£$^Dr „Û£¢k„Wß $ ¯«¯„-jÚI¬œær4Í?(BäL¬­“2.Žà@)ø..éxÌ•OëtóU0ßM…Ë+š¬G*8 4ä¹d6ÉÃ]F_^¹ÿÌ[ð°qõ¿è¬š SÞ°œÖ]•+דҕlÇÞf‡ÈÝ'Õ†ÇÜ øØÕS-(˜5êÇÕ¹\Õ5¤rýŒ aeœÁL‡ž¶j‚޳Rªb?Îcš”êû$®XÐñk¥—£¦³ú|‰3VµÇÎqI-B-匑[\¨dµA¸©?5,x¬7óGR†æe QõÕ=]eç}Α´õ%qT4Ÿ§ê4bó0àÉL¹y¼Ôt£êë{¾*µ #¥"]Wüí5]dêOžäzÞ«ÏvÕ«/Šù*`ö©W$*ã|îz¹ÜP$‰ôYñ@¦qbÒÈwB©rM:ž`=~;}ƒ¶x_Ÿ4uM äè³áOã\‹¸tse‚ú7õ±á±Ö@­jo*šCUY¥é´-OcÌå—‡¢ëÿ2̈‡[-mï¾ì7æJ´FŠCH•\ï2(·Ëaœ ¢q ™‡šÜжÑrm!Æ+ Yi]ÆQå*Ó|R±”ù&à±Q ùˆ¤Ñc§Ý‹ÃïÊeÞN?ªè“«ú,'ë]Ðô¢‹›\Õà÷|±<:îO 0žzþv·ýãáæóúËjÆý_}u•B•âò%Ͳ­ý.w T: ·›w Ï­ÍçÝöé¾¢ËX×%rJH!iØ‘É\+¿%içãäþO’mx¬ñAÛ ÿ‚!¹#¼ ºÿñk¯“"¦Éøõ¢œì.vÏPW÷÷w-Åj}V·y(Ùÿ«ï¯êl²¾™—x˜Â^Ka%$^ÅŒ2GÍ”¢FÜÞ¢.³ð„¬©OH­­iE©ªÆ\þ¡¶ù—d!ת¸]1®·Oûbð>ÆFi@ÁžL­jo½hìuNhír,]9 9;1U¨2.”4ÍJnõÆJý³»lx[-碿õýXÓW/¾Úl§5ãÚ(“›öw#îìψ12ºßÞÞnvO÷Ew×O·ŸÖóN¥yUÜìòþözÒl}yùä@Rêé˸ƒ¹úÞùŠ2‚$<äZ4Z¨è󦸞7¥½Ûo›»õÃDŠèê&$º§Ò>M‘ʵS;_°‘Á3 gOý9aÁóžªn/*{U+úª’0FQ#*ÆZ,j¹ÍïÂTƒ±«'ÈËz¾:iR!Dšzš(®lWŠ!¾£šÛûh:UiÀüðPlSÀë­ }Xü)BU¦†›Q,ãœ9Ó¿/# Ð= XÛ<œ›´ý±*TáBr>&GšJ-qlÓèÃ$ð! „q«Ô«NQÑiihRÔd(íM¸ÙýÇ Ñ¿@Ž;gn?|³›T%^Þ¬w“CU‹ä8pª÷BÂxïÍTè›bv²àAƒœçN‚(Ç'¥ä‹ŒC‡ÕŠ\ä%”éœZ^hˆE2ë¦ÓñÞÃÓõÃÍns_½HŽêÐs(åö“¶+Ë8àg…<蛕ã;2ûꈃJu±n]%äTzm)2•qŽš•áë.”ir^*Fן8ò’AÇ“²ï™ûý½Tå?Ö-²ãR07iö$ú’Þ5ë{¤0Á¸°à!lZ ê=+rÒp݆GÀr‘& 3´-ÕO$‰µÜëý87 R)߉¨ÕÃܳF*•B7ïÖ.×µ‹I|¹¤IƒÈ˜ÛÕíùxqÊ¡I²ðøÐ¢ Ëü$€òÕèXB¬¢+õê›Ôqé/TR©eïD%{ÇôÑ1æÆ€ÇÇeçà7wÛ§ÛK .oeÉnVw?Öæ›ŽÓI¹N°À®ÝŠ8øõ⟿oözlƒ÷ï·›‡’2v{q³º_]oî$ŠZ?”4ùãþûåO[,ׯ€‡ZuP7hÝ7c KšÁF–HçE–w6M9Ú¶ü”âOþfšhDX}DýŠ:…D¤Ò£çœˆ$xr»~íæ9¸,=¯÷k›ñ©ÖiãŸúË2ŒV5+5t°¢´ƒ¢KÁñ 4'ÁÍÝjó¥¦Íú •ìS.¨ Ùƒ jF„&iìDÌã€LºíÝ—¿~ý'Lê­Ýèä+h;GOuqʸҸ½XÄl | ƒ£²C‘Sv(ÛGûSÒ†‡›=â½~ÚÜÝ–U\ÒX~~úR}7)H}.åh5Ä\мŒHå\¸ŠæË“û7º°àAçàÝ):/é,? 
ü½Z˜.Vë¥ žHžKFª†;†ìr3k´ˆÊ³!“ÃY5ÈëÈðÑþÏËxR›C>™ÌÐêØ=S mÈ·š6ïÞj„zò¸ùŸû?a±(ýŸÚð@ÛC?MÑÐŒµçÆó®Kˆ»Sù¼¨|ëCÀ7º>õ‹IóØŒ+µr³ïG‰Ð<4«½Sa³¾³A Vz_=ƒ C;O„Yòßïö¿<˜è)h¾¼Þn_ª†sªGGèbJÊÝ[äšt¥ÿ(ôé5þ¡Ê?,­•Ž„eœpË+ ­onËZ­{+TH±ÿª°àanPN¨¨éжEE_(Æ¡Ü$høÐ‡Ø¤fs?€#&Ô€·ôE5‡OécÝ ÃÒu œÏ*”Ò~Ð8wý°TÚÏ5œ&ž¼üUÿ+R¼Ò‹uß)‹q %Iß 3K˜žÄM<֓壛õk½½ú§ƒÂ&¡®Äì\pä@2ÆÔÂÅŒ:Äçu<®ÏÔ߬.¯Ÿ¾ÞÞ­‹ú¨ª¾Ò‹:æ¬8Ë8„ÜiÒ;áaÆ-x2õ™îg;úZõSî©à¢C¥ó4ÎùØiÚ;ãÎ#¦>ïÒÒš Ç–ŽàÜßkÔãƒl4!³ÓÜ2Ó‚žHÍï À+ÏzθµÚÖ±WƯn³žz9³yPtIÓmŽÔc!×ÞXDß%³4¨Hƒ7‚óñ ö|8ÿ&Z­O99A]z„(ÀÉQ¥zá•|Oþ„`,xи¼t‡yóºú¦x˜²7±zîF^¬•õr¾3¢ìb0e<)-Ãç¦-D¾Õ.¤,DF@³D2Ž8/лH½Gu 88@+xˆ(%OÄГ7«ÇÕÝöÓq¥ÖO%‰E£‰³vAì<»®´#„ç:<ÖRœÇK@赫/Ÿ‹Wl›¾~J1„€Ú»²išõ†Š!ÓábÐÅ AÁCå囎‡ßù,f‘J_­;TT+þmˆèUQ2w­¥dõoc¦R7LŸ„FœQŠ|ö*žrlÖ(·r°ZT¨¸h³!SN}·ŽåDž-‹ìƒtNt(;Z«¬™e®Ÿ¿S⪑0eÈàš=¡;‘Äø 8l°à‰F¯ãf·ýúÛëUr}ýÀž%prâàkÎ{Šöfìg„>ô§‚Bgô0Þ›tY?»/í–ÅÈf-Ô.Ø÷v;ÍàÅ ;„¬ƒÏ#‚Ô‚§”×p*v!w&ÂÏÞ¤Ïú.gô™ôc ÎßQö¹‹c$˜¡ØV \ö…΄|G ~ Iǃ޵à@]}©~ÎKѲŒ¸ÑÅdîË>‚³òˆÇb<Œ ž­>}Ú­?­×—Åj®o7Óòž/dö>ªè¢ Knß ]’_° Ýiå5œL@ÔSÐ2Îåf“ùm³þcÒVÝôE Äòi-Í¡ŒsÐdí.³5˜‘u¼F,UœrÅ8Ov «›ÎKDŽX‰dÝz EZ\Òt6&( ©˜’‡!Ó&Áºð>ÎÀ}‹ÎôoxšÂ(D7 clhdçšàDa´®?ÆáS,ÕÉ9&ÕŒÉ8ïû¶J< @¦P“F:ù¬c'"צ㼕Œ„g½òê+(û’ØpÝdPiqÞ¹© }Ngƒ0 1à1§µìîç×3ïÕ§²$—忤E•Qv4ó#f±—^Ý”EPòaÀc 1¾ëåÓnûtt=üòén{}P×s¯Áº§F\ý•_Ì¥âüòW_?*tmJ‘%}Ù€.¤¢O¶¾‰|)©ú°[ÿûiýð8¯¬ëÛ'ˆ¢ÛËÕeöKW#ý’Ç™oy; ² 0­Mà³~,¡þó´[O3‘›Q*Ä¡Ô[Q†‘ŠÒy‘Ê|ÐÕt&¦Ÿ>®n~/³®±bøbg¹FdhXð|ìøéæ~šߊVì>†VoE(öî¬Åž>ÔZ}ÛLÍN›ùÔ ô1–ê­(ÃH…þ¼H…ü‘¤úöpÿy½wG›ÑŠ>ÆY?*Ì0bñ™Y+þPkµ¹þ2ý+Ód„fÌŠã³—fµµÌ—^M'ã~ûÇz÷mŠË±ÙNtîC˜uT˜QÄ*IûçD¬èÓGëëÓãêëæÏi.¸±ðcˆuT˜aÄÂó:hˆ!4(†õÓé1F…#åy ©ÙŸÉq&hQ «¾<$´Ÿ‹Gl56¬ø'êúå§s”ý ÑKżòþ·ªJCÚ]jòÀµ¬÷×;øÒþEÃÁ(i\:R±†¨ã Íi°½|Ü•K£RçfÒžrÞL¥ÅšSS­dœý³Ó›@NÕÄØ2.ŽX÷”8 AÇÃÖJŸªgÝ×ã쀑œš]jµ€oM±°°¯[ðX[C¶F±½PÚ?PšÔYg³ìæXRãø9‰:¬Éò焟¦•ïDê3î ·_Tãþ—›íý¦˜þ‡oSÒܳ¾Ž™PÀÈÿ®œ£â¹à }½çRûÍÞŽu#Ät2•úÕ¸Ž%m¿Ã²Žƒ×ñ6k¢ò¬Â‡«ç¿ù¹ù„8tu-‚,_È5Ô ;íÂ>¥ïcå|€!à€i6à!hÑœà­Úö›!×õ†Þ%¯.t9·ëݶ˜³a 40Ý<ÍWõú±¢H_Wdð!9wª*Ô«q>4i>ûN¢_j'èÛŽŒ£ qÇ©ðÒÀuà‡O—kO3èài†;þÑäË멨.”ä0µw|–lÙÉ‹c¼j”d0’òU,!´Ž=6ö}>¯WwŸo>¯o~¯(…ŠÇ1ê`¬t‹š•¿zzü\îÄDŠ(ѵ˜Rè?ãÞúaÖ(ã}ëµÓbÊó§îÒ^s[–àØ`´OÄnŒ~\¹¿{±Q4iCPü§TÚ0D¯FI6Ì…e»Ps¶Á0ó<>ö_쇚¬_6@i M¨^ç@HöTg]ðõ'µàéRŠšB'EÖï"З~)Îi!¹¸ú‘r³"C‘'?b/°à Л?zM颡~9!1r’ÿiØ%¸!¿Ü,ßÈ 
ˆ+ÍÔξ9ò;–.tJ…'~þ£™'BÝ™&9«a‹Œ£Ø¤/aOQ"1é¢D?!ß™ò]Ó <:’äXÛGTr(%LY³oÄÉß}§°8xïÜ&XðäÞ›…__6_÷ù¼*|ØoÀõ MÆÈ.ë¹*\J$SOCñ:— Ño©5Kõ“øH½ü_§^yËRg •p=cÌYCËž`ÀŨOh2ÕÏi,·U%ÖÏÛSŒ¨4EG9÷´ÿï1YôqD1xÇMÐPÀ]Ô‚Çó²…4Ó3¥ú‘¶|‹CÈjÞªð0ûгñÛ„j«¿òžåÇ ¤ú’° ][ÔõÄŽÜýÙ†'ÆV1òžôšêz)£¸A‘CÆEXê´XJŽ\J*d‘Ì  ByÇUÞYèx*9%fgº’“1DUä¡¿bA<,‡ÚqЄ'/èïQã"VU&Þ&ç¬-ªT:ä_/þõy?óß„Ww»õêö¯‹Ï«‡‹Õ…LûdS¿n7¿=Ïí]=–b"“„/—/¨s¿Ù§A_ÜH¸%ÅÅÿ¬?íV·ò7Ó ýí_»'ñÉäüÛÿ®îÖ[(;{{›ºÑóÃ>Äþ”¶à±ÖU˜kmïl(ÔÕÇbõYÛådY#Ž·°yUËU8B¸<£"G_)P³p‡3è|>^ðûSÖ‚'a#+üJ_TÕWé¸")IÓ2NœœìZ™¡>i„ ²àa×xB/×>^¤•MúãFúK•Òe¦FZí­EˆÀçD‚iéCQãÓ’‚.e ü6[ÿ?¡gõ [¿¼½šæà׋ÂB1ÐÏàV÷÷wÙÑq£~ʆÛ<Ÿìbõmµ¹+5ÝŽËC'òš¶C>ŒkoÉXyKfúhð4€róñØûœLdýn‘žõï§­hõjúËÏ Ë¾n‡”>Nó’lZo4ï0œ ™|ô!k€±Öš¨ä;¥އRƒ£³Ùj¬=eŸFÍ’q•G †c³v¸™ÑAÖqXÿœó9ó <ÙÜ5@QãÏQcý $‹gÍu¤Åo÷ ·©Êå9UŒHªÁõO·œ¾ƒ$êx<5{gs¿Ûþ¹Y?\íoOÞ¨PY=˜Å aR9€Y¢£Æ«ÞH×àˆ³:Û¥•ã;/߉åñuÖñ˜ëS/Þ2ûd9+Š Þ YŽÙ\OªOC+4êʸ!K=„ò^K=D.ã\³–´_—¤ÔýÛ´â¬ow›ÿL{æÍ*ËŠ™\`ÒÎêd\@nR„¨Y­¬ß”q~ÀA¾|§ôàÎ^Ç51zÊ ;E‡)䜎9nçt!²Ò T"‹AÎîNå;%ãŽÇž:wR³ë›§Ýæñ¯Bq¦å/¢ÞÕæëãÃÕ÷_½Ñkm™Á•s™€“’:«¼¶ˆ…¼žÀÄR{8ê ™ºïÓwRN g(1EhhædÓ°¯ë9¥@NÅŽÔl‡øðìc@žìúœLjƒºÚ¢óÌéÌ Ù¾ÐÁzÍ$%0ÿ<¹Ùñà®Ü¯îîþŸ»«mnãHÎ…ÅWVŠ/óÚ3Í”ª¢Èw±9ç’ä܇Ø–ÀŠD<´ÍSù¿§gØžf‡Lê\uÔb€}ºû™žîyé™>T3Xi“t[Ï&ÕøIÆ¿Z"dÅ vJô4[Ãc‹R†«;xœÞ™æ·ád:W԰ź̘lohQwk1L¯R~΢¦„Øê|QbvÖ:Nà3û0›v~Íùv-P8f"ü˜¢· ä•§¤,áœR¤ç• ”Í–º® õ8|/Kî.­ât·9¥’!8•ÈI‚h]Ï©kBoŠ…-¥ë¨ó"÷Ù}Ò-­×õÝ<”V9Ý©TÃ)U ã¬e…kòU ê‰Ö–WÇŠ*ŽÆ-£zÖKÑ‘Þ9þ¥:úõKÆn—ƒ¦Hà'KÒËkÆVš[+VLþ2ò±Õ·h¿¼S:¯Ñc„tNérÄ#§ãÞ¼„:}»T½ÞŤó/ eh ‘Ñå@›‡!AÕN*V8ê¶CeV`<'ÜÚCU#C e ¸6+£TÄØ™‰7|0¾ˆÇPÖ€dî´hÛ)›á .ÛÏw¶XÝ.qcˆCùÓ&B(KœÒdbP<ÅçiÛ)S‚aZÈP—"d«c˜&lèìû^2<³Ö!F8¨®ó _ñ¬œLyØZ9Jly€.Â6)½å#2°^°=w÷é¼NåwO+…À[RüÆFËÚi}A}NØY8Ó²+?¤ ´f.¤¡¡MPúÆâ1ʻϣçéÃÌL€±Z£öì NíºÎmШoI²0‹bÃÖ¥jÚImòÏÜdšs j+²›¨;!û¿<¥yƒp @”Àc ZlxGí¬Ëw ,Öº¶ÛºÞxo)™åµiÐd)tÕNz+ТfG&j'ä¾.e¸Yü†· (ˆ‘¼tánÊþîÉA…S<Y·üDéž#8E»Ê)äÁÛ¤Ñ'êLŒSk^H0¢cÊòxœÔYkêD)8Ê î|" æ]ö„‡3@i‡@¾Wƒ€㺇PáÔñZél›‡¢làꀥßAkÈámöDŸ‰A<Á EÆ)EYZƒ,BŸå|:mÀ¬?xl×gX!Ph—žtgCŸ‡A¨ÑiÃÎë•XðæÆøXžÚ©<‹Ä™WäHŠÿù=ƒ;4Ê,P&ž¡’ÊEØÕC ž…Zå4Jðù ¢‡^ye dV8PP~HÐ9q¨~è•(GVa¸ƒÝ Ë‹­¼(Â*îDOÇí¹Ö6Î; !9B¹p¸Øñ’€ÇÞ–2"EÈÄ%GQ’ð¼ÄÎ`.a¸]Oóxü>óBi=»Ã ÝKJP„ŽÎ³bçÏY0(Ï윢xË‚Ó\¦và ¬g©`n x<èr׺xðð¡–Z» ºyeŒ”ÚX.!P ƒ½K1ìÍáC¢ÅUZÊQN 
ËKÁ,cÙ"Àkq³_|¯´Õʾryèj• ÈÂÞ0nK™5¡êA3 ^– Xaã…a=qd’kß%Œ£vŠÑx´á•¶öO-¨]\m0cÓA¤kEúŠî§n&Z¨ZÈLûâyxê39êk¹ÎFYF¨ƒwfˆÃ…ANÄŽFå•l¼ÄÓqZµÆëNÇãdÝJÚ“±·¬ÄÄQ†h†ä"Ý^˜Ïk Dó4^)ª§ñÎOgFœ„¨I¦˜!B Dßž‚úÜݶd Û¿ü<\׫Á9¿¼/?½oßßí¶·‡é»áUTç;™hYqߦ-VÅx?¯Ùã¾Ê7>»Žî„zE%õ ¦ëˆÆ#Y¸lOf! ÷ë'Õxy½}¿ûó­_Ìî‡ãÏž+”p§ø9P] ýÜ:,PdØéóQ³…S,qDaxÞ n NJÇ£™È¾¿9¼›«‡wóþ¶œ43'´$êõƒœµPÛßTúPœœX‘OrF­ñŽÑ÷(¤çA9•=Ïã6ÐõîãõööSD©Ës©~TäŒr‘æå _b©é”LjŠá¯v¢HÆ í¯‰ÍÃc2ÈöfóqÜîÇÍÍŒÎßßo½ë?ìvן¶ŸáQ&þ¢„á连7îøÍ/0 nfë—¤Íç`¼8Ã%ìÙ×=æË‡ûûñö€|¸ YöÃîìjãÛbÁÇž0ß}óáöCø?ß|Hgߎ»¤«®0Hÿ$,Ë¿4y5‡> €5Âþ <Õ½,¡I"ÒÆHßSŸB®–²¶=¬Ý0·‚dàÚ¹ÛÛÝaûa{j"•q:É,ãd† 0È7¼¶1¬<ÉÄF‘G±ÊFñ§‡íå§½ß&èSEõÉ™#©Ã˜^Nƒ«nëN€ho9xrgÒ•ÍÍÏ«,êRÇu Ê:NÆ9ÃlõtÐjà­Ñ+(çïj’ x\í˜áËææšÔg<°æ\€?èÎ)ü(ÇTmcXyz ÏÀFŠÉ<¦U\×7—Wèa=æ½OÀmQ¥þ§Ïõiãú¾³9p*F9Á굌øu&àÄ ‘ŽGææç–õyw¿ûŸñò£VW«”L“⽜°¦ž£ø5ç¹å¡¢ydàÉ$žkíËœ­rŒPœÓÀŒr$P'mnuȯ…4’ЪùŠÓñØÌÎ »ÍÃájªáŸWL)O|rmÑ)axGäÓPŽÏ¶“òi4¤‹³ÍíñÏñŸ¥Ò¾{óÓí”[:ûÃô“ßx©‹³?¾9÷;FçáÙß|òçŸá|6âþü8âß\œý~·?àŸ>–ãÃ0Ì r€ )Åô@ÿ—èLþÖ«Ã'»?>rÿðÎûÍåå¸ßߟ·ãŸ÷wØÞŒ»‡ÃÀöþñïýµ‡Ë»³o|á›èÞâ?ý‡÷d÷㇇ýøþÍ÷gW›ýÙÝ}(ß“ fµÀ1G_AsòÍù£·ÙJCÊÁʽ^àhëÖ œœ]þzÀb¡r>¢ÑþX”€¯sÀ"‚ OºóQMwxÀ¢oñ‹0¸ò½Ñí¥oÛëU23³Ôýs‚ ­1)Pg½»8`‘‡^‹V,¦çÇEÛò€…¸]~ÛV+e0.£E”“’¿äLèQÿúXd¿Ò¹PƒªÕÌHê›ór¾'ÿªaKǾy ÁéyÕ×°e"í?l)_'l‰ È“îóÑãÇZ'¹y˜ùÊí§vL£Ýs{ø*ÜC'Ý7·Ç5Ý·á-åöipôôŠÚK £—È—Ü1oŸ Õõ•“ÏC/Y«u{&ŽS·_ÔìÕÈÅÂË?¼µÔÕö“œVrUj÷ƒú>&9¯m˜IŸÑhÿÔ^¾µGäIwNíQMwHí%x‹©=ÏK¹ÆÔ®ÔÀdÌÛ8®â·G=NÉöEíyè›-Û3q´^¶+¹´lç8¾à ÀDÛ+9.­]•ÛsÀ ¦ì+·SN;¢Ñþ¹½|n È“îœÛ£šîÛKðs{–—Ñ6%¯5ÜR×ýL¨º¯™è›•Òeâ°-—í\,oÀ€ðã+ãûFFq9 «–Ò…A%G5*A‚“ _O÷‘N;¢Ñþ¹½|n È“îœÛ£šîÛKðs{—.~èìòTê³"·[åSò*âíýEŒÈÒNP]_Ûíyè´º>éø|Á-¤àà-×í\ fq»]âøZ;fã`Žr|Ýœ¼ÔøKÇâ÷Qå„|åvÊiG4Ú?·—€¯ÃíyÒs{TÓr{ ÞbnƒsFÚKÍ/£o²n—|ÖF¼=Âîhooc}q{@%Xü<½k¶Ý>áPÌÄ»ûå„h™“bpzi FyÊ9Æã@ƒœ´«¶å ƒZ\¨p¨îWr§¼vD£ý“{ ø:äA'Ý9¹G5Ý!¹—à-&÷,/å“»â¸p×*âíÓ¡šÎ’òyèm«¶<¹8·å¶´p÷÷=8n˜Ð.Ö,q’ƒYËÐÿ——à4¹åVpŒPÊaþÚ;²“Þ‘øFЄÑAÇ3eAĪM˜ü È ‡•8í %£ˆFûRKÀ× R#ò¤;R£šî0H-Á[¤†Á•Eì9éDëF Fhy³_|­¾M U4Rå:‹Q³Ðëç=:kŨ™8\ÃU²ÁÈ¥Ó=ÇËÁÆÛMrð|+±)µûA9×À™%Áù[1^©òÙöOí%àëP{AžtçÔÕt‡Ô^‚·˜ÚÃà’IE¤#‚œ0¼5µ+ç„»Yôö\#)iGC•RõÅíyèkÅíY8Ô©-ºz=˜Ø Ü·[æ c5‹/ÛƒWëøƒj_~#hpʾöN&vD£ýs{ 
ø:ÜA'Ý9·G5Ý!·—à-æö0¸cHí@{)ÃÛr»4fJļ½óyTÝYŸ†,ô®]QhŽçÅ–5÷–4—‹¯Ã<Ilü¾ÈI˜ëíu{ôRY¢Øz’k—¦ÉÃñ|g£r²K¯Û{³ÆŠx*$ÈY¹j?Í0¨ämÑà$篽²I’Žh´ÿX®|X.‚ OºóX.ªéc¹¼Å±\ÿ瘣½”m¯ïRÌ—žÆ¼½Ä0GEÀÇ)ñ¾È=½•­È=<_2IÜ üˆ·e0‡A,5æàÌo0̉ˆšä´Zµ8 ª8F™‚‘àØ×Ɣӎi´{n/_…Ûcò¤ûæö¸¦ûãö"¼¥Ü> .¥æñ–ÄG9Ö6Oc´ÄRV~‚ ˜åÎ&@íìŒ#z\(éqûô|  p4åvÀPG\xݾÖF «™F“|Ur÷ƒZ°L9 Á©×…;íµ#íŸÜKÀ×!÷‚<éÎÉ=ªéɽo1¹gy) îNNšˆ·O‡Êûꨙ‰^´êº•‰C6ÍÊËA-uæà|–1 Îj{9cN´FoÊí9àì¼WÝ+·/8íˆFûçöðu¸=‚ OºsnjºCn/Á[Ìíy^ʶåvãì —ºegBí¬ëVzÎZuËÎļ-·Û%j~x'˜‰—WLrZ¯ºßu`X Á9öÚ—ƒöÙöOí%àëP{AžtçÔÕt‡Ô^‚·˜Ú³¼Ô‰ûêÖNZ>8##Þ>ªì«v2½jU;™‰C³¶a0½ð²%F Üq .âÔä _w»Ý*8j Npýz,‚ôÙöOí%àëP{AžtçÔÕt‡Ô^‚·˜ÚÃàÒïh/Õ|»ÝÚA,‹8Bp¸¤…¨¦³Œ|@¥$C&¢Ñ+ތڧç[§™HÀaÚn·‹/¾n5çN€Š÷]rÀfçHîÇ×ãåawïá ¨6ô¨hƒw»»‡ëà³½š~ø Ãïïw?™Ú&}ܮޅ¯æk ®©'ÔÛ;ßêÃöãùöÿôgp>þ<5šzôùç6~Ì/­AŸœg¾ZDå"^Îÿó’  ä?ޝ¡Ç™ñ÷‘ùèæ—_æÃÝÇûÍû1x‘—E`–pLÇojä¸é«o@e˜?«¢iô'.‚Ñ?Ô÷[äƒ'´Á ñýæ° ãÐfþ=¸è¿þÃ=†´3äã‹7úhDS$Éð'?ùfqoþô°ùâÛœ]Þ£íÎw—woïÇëq³ÿ~µáJ£æÆ——òݨsÒ Ð#ÜqdîòÝøþWãå¨ßÛÄ(ÇÀ(¹Ùht8ïå¸ÑÇúíîþr¼ø°¹Þÿ[ª5ã-ÿÿ{"u¯%iPK“Õ{`˜½ ìªw†&ðq<ü-2üñ%†Žk^±ÅÈàë¢d2‹ð•Yÿ= ЫûÝíö/!~¿ôMïö?~ØŽ×ï‡ÐÙûÃw·Ûëï?þûþOÿ3Lñá§ýÝxë_¹›U;]ßýp´¾£ „'~ÿûù’±ÿcïj{¹‘ô_æÃa+$‹,’ò!—ÜewƒLîdY3Æ–IÞì È¿"%Ë=¶›ÕT³;Lâ‘iña±X/‹$ÀÍLüõõä«õjqi…|=ùa½ŸÝ^zc^Sx·¹¥˜íúR¼ŽÙ-Žª^î·÷ R”¹#j§ï¤I_Ïv7—¯ðÇÿþÅ̾¼šÿCNóü±ÚÍî®QÓ§ßÎv{2Îï¶‹Ýî2\›7ýŠþ%Hûõ$þû‹ûw´Ú^O$ HúOÚ×pV)­Å±Õ·k ¡i_Ð÷Òöïšúëï«Ý!n;Ý+x¸$°·Ñ4Ïö³Ï¾ÿúÍ4fMZD™‰´oµ¸Ý]þïÏ»};ãÜÿ%¼$µ¸~x˜Õï“{PË ß>ýô×7ûÅæòÕñ³ðëÅ5uó7‚eqüEüºG+òùOG¡ýôêp™$}нԨ³YEW¨Ï¨,?SõIpCÚø=å»h¼t<Þ¥ôú¸Ì/ÿ&ïù”FU>®¯°<~ØÎV»eXÓ?¦Iøé×_f··—´ˆ•|k¼‘p%ÍâjNEŽý„ÖÇ;m¼j2'+pZ)áËÂúi¥^ˆ¿þF0¾œmfW´¶÷ËÅ®¢J§?œ4)þîY òüïýxEéuõ7¤Ðõùá×Wÿ~¿¼ šùêËã} ôãW‹ÍíúÃè/c>‹Óõ=¥ð´<>ÄVц„yõß}þõ£üvùv1ÿ0¿¥´{E» ¿»›‘íÜongóØÑ£‡ßÍ‚ÙÛ½"¹üW QΛoÞ¬f›ÝÍzÿy»¾¿¦ì·ëÛÛŶãð2 4D²¦-‡ÿ iË»›ý ¢ø;%®?܇[X9Á¼¡ÏéçðãÕl»¸[ì°xý–Òæv9_îo?<*#µƒykíÿ½p/yY!~žPÄKî|¹"o¿ø‰-øÚú?Å»“ùƒ“û<.Á@ŸÿùŸÔ‹Lnɪ|þo'óøõA qeÓˆN.Ÿ¾á?žï¬~÷?ïWdp’SVÁ)¿ž}ôG^ùø•™°ô7ú&)žÓï;OMõSwT£Àµ¡ W³ ìz~ç1VN6ŒÜu )™èð¹ë6àËp× y­Î]'%=@îº ÞÖÜu–•²ØmÅ N=Ör™Í‘†„NH+$Xät"Yqü(FHÑiOæ7‹ù{ƒ¼Srª4 o¥U»ÉìÝúõaâƒÖm±áäžâÉÛɘx)àZcN[m_r&»7ù‹’ËEuW#_°އ»gäc+±r?Þ/³mˆ{£Ÿ²ÞÌ$R,“W·›Rؼš…1D¿˾Ü? 
µå˜Fp d§¤ºµS¡ÑÊZI5Gj@ÍAª«M”L]¾[6ìÔÚpZi+…cqêJ4;2!ÐÞzσ3v<²Ø z¨•è!È<|© ³A^ëÁ™ I2È<o “:'  x+…®ÛÚG#aJ¢HY{‹Ú¦_¦8¶3bx¾];£eôB;p]úv핱ÐÀgzá;­}tS ºvºQ p­`€"%Þô¶ëž *§ÏÉuHe-z©e©\'»6å:ÆO•%¦ŒÚ*Ï@¯_¸qýO^ÕÐ\:^TòȪrUÊׯª†±ªa¬j«ƪ†±ªa¬j`«2¼¬ª¼‹3V5ŒU èjÀPl¹¢ýÐNbÏ„3uêA;‡<8/ÅH8óLb­Dÿ„óùàKεòZžpNHz„óùx Ψ¤0 %P;åº%œ•›Rj\CEáT‚pÎLo»cØ¢T•M…¶Ýýó´ ˜ºÜv7ÂMƒ—}ù¹Æ<¤Ê lÛ= ½~ᦟ?5i—%S9¨Ø9i—‡l$íFÒn$íFÒn$íFÒn$íš“vä=µ'K/p^–\B¥(q$íFÒn¤N•’žtP&£vÕ›[‹¥qJY Œ¦*¥Œíò¥VO}8cÑÕ¥qà0@`Þ*‰í¤è—Þ Z¥ œ`ÁY©Æ÷½XÞ*!ÑáÓ›mÀ—¡7òZœÞLJz€ôf¼­éÍØ¹1’ o¥Àc§ôfxv¢ÞÒ7†i„aG¨œFdQ‹í¬ŸaÎ aV:NT%é°£µth› ³b$ìFÂn$ìFÂn$ìFÂn$ìšvà%EGÒóY¼¯>„=v#a7ÂΆ°Ý £tºÊ.¶3ªß'm²À9=VÙ±üBB¢Ã§¡Ú€/CC%äµ8 •”ôi¨6x[ÓP‡Î)PÈ[©Nù–}ÒFÈ©­}fÞ…GVÂ!WÉí™ØDñËm\ƒ¢4é €Ü‘têp{FM… d¤š#õv`ïúå wR~b¤]–t òUç¤]2=’v#i7’v#i7’v#i7’vI»/ëÁø‘´I»A‘v¤˜^(îãØÓ¼¹^I;7ÕRÄCR8M2z$í86&!Ñá“vmÀ—!íòZœ´KJz€¤]¼­I»,+eT·~K?ÕÞj[sâÓM ýR¡e6c;©à¬×þbø·Ùæ@ó¼w»èùNOû=¾éW}ÂÏøá¢%¡©ÞSyB2¢ Pžd¤þ­` #®ÐNuùâ´r8µà¨×º™%NÁ:òuhOŒTºÂ‚vF~j,c†t<¨>YÆdZ,ãÈ2Ž,ãÈ2Ž,ãÈ2Ž,cs–-Jé)nc½¬‘•ÇnF–qd‡Á2:PÔ•e®d íDõI±<·Ò:%˜Dí”°ßžï5‚®ÉãüTj)¥¬7 5´ªrù~¶{ÿï¶³ÍM õ¥èß߯‚ô&‚w¤aDëN+,ð“à  4l»Þl*aüv±Yo÷‡lìæ[®âŸ’O^oZãÁÚ„Ç{ÿ(…Øa )®‚sš¼:ÎÍ ºÉúí„bý—'„RoÆ2/@„vZVÎe¥&DW&D¶íTÕ+7!9xŒÎ›Õìn±Û?ýx Q@p±ZìÃ__Ì£Ûÿ;†¡@ 19oÆ“‘cNŇvDí­õRHýsL O ˜IÙºŸö<.sÎOÑÞ.P ‹Ýþ¹,Û\<09Ÿ5–÷Å»ù&ÊÜ–’yõLí3=êçÉ›÷˃ЇÁ]N®—»lNæÕX}ò$¶n-„Ê ¹CP<£§x³_vqn\1Åó~0Š÷lp}) 7(ÅC©§xaû3ÎŽ/¥z¨ôpTïùðzS>凥|ÞMùbËýlþ>Ìeq…ÐJ9¬b_Jhå°, U§„ÇMüê´<û4w‹ýv9>ÆÊ´y–riÃ!õJá禉ԉ6Zh˃Ñॉ†ÍÛ×yMß× Óúø¼ îxe5óÚÕ‡‘Ùiâ ‹:p-š›0c­²Vlj‚Z€lgD¼¢ðˆ¼TƒÁÔ{ޢЀ5s™Í¡Êõd´"×wÛÅn}¿“TÞ.Wqoˆë3°›Fá?þbzš–ér 9NIGêuƒq`ã½›´U + WérhW)µL™/dÍ—–Âç']j§M¿Å§±S§œÖ ÀÙŠ}‹Okª ~ñiðeŠOòZ¼ø4)韶ÁÛºø4vî 8ey+åAt{b܉©¨91~€à`ž°>´s»¼0 ";oŒ–,zrjúÓªPŒ£–àÑ*^:RaбG%µf›€¬æŠc…âX¡8V(ŽŠc…âX¡˜®P ÞÓy |D/K?V(ŽŠƒªPôSƒZH– ¦vƚ̽„²Û<à¡R¤Ø=×ú1B‚i€2·ÌÉ §Å¶!¨‹ýÍâ~wñÞ¤§ÒÒóBTšËTÐYLlQ„|î»íâFz s2ÒSV÷급X¡ü™° €Õ£pÝMo¥¨üÙfV Y’éA‚OyoHKÑz ¥ævƨ2"s=—×J‚¡É{ÉÃ×CýUÀ(Ø|>´S"M_-W×áÃL!BRˆŽRxig¿ÌQØÁ°ãÇNëCéœ`Má±”9grØ&ÚÈ”ê„Q‚™œ0I11„Û¬R4…˜ô–þ¼‘v9V»r:Uº{íÊÁSiaÒ¸¹­w®o·Øï¦ÇŸŸZ8´ÜD’MFÃA;/‹¹¸–v9 ©‹æQ{Û‹&¡%soɱ]QçãÚÁ§eè,XëS‡hc;-Ð>DKß«(Þ§NfJ ®?“sŽæµZh£m÷Š—ƒÇ`)Å»^íj¨EZ€Ií¬àð²£(6ëí{Š ¤á;߃­ xÓ„v¦˜­y»ˆ ú]¬A¬‘£LËÑ:A ‰àeeþÚ¾Ü% 
"ì$«¾ÖU+äSqgãžÐ©dãŒÐÎÔW?»jÑðæH6Ñ€ŸñÌ[€!䦜ÚוUé=†c;ÙÇ¢~#¯à1¢Ô:Éèf*±ãpP;ê(­w”ÄКw’l¤íòæ@‡z*­±_¼qâUIJ‘ƒê¨´®qo&Ôsõ–Æb…ËEÕƒÞ:J%4°$JÀã]1½½›/þ¹Xíw÷W§‚ا’öœú'”…–Í—B;­õGy¾ôÞ6°uÆè^” <ùíØ+ã‹“z»ÜÞQZ¸ØÍo(Äz*cÉi¢²EÖ¡!ÔóÕ!T¨ÁÆ ìE¬•Ì1¾c;],U¸™m¯ƒŒÃª{"`Åé‚×Òbíõ²}äÐçùŠàYc×`$Vv¯ Œ3J"ç¨]â||¶"PDö°àB½%m«§!šƒ¤J€@ ¿ððê!³U" ñ™ÊAc’è¤Rì˜(²}(‚ð‚ Ø©]ð˜(%äûÈ)~,èôÂÁ" ›â4+Šª‡÷\Å .Ü+¨ùì!š ¾héÉãI0_¹Š±!¬Ûõ?—!á¤?y1n3ŒfúرYµ“ ZkFà³UÃ(gùÝóЮÝ|êGKô¼CQ|®C‰ìâÝlSe$ŽtÝÅl³üì8‰ì´syqþá…NG(W"Yñ0¦±ŽF~¾²„Ë.¬[ƒ®e¡hÈ@Yç—|²"¿Z_ì·Aì×óY”¯e4e } ‹¥ðÅ4#ól5@…7æ¢t/j€Fx¼{Cp®œ;y4ÍO­²ãS Ô–ša7Òèùª`é0Þ" í£r"Ü7JAïÎ4Ø\þý…Ó Mðñ/£ÄÓ|&áF³±§Öªñ]‹…Ÿ«*Zp‚š6Ø‹ª„´€ÇƒBv¬*¿,®nÖë÷U{Ái )J#ΔCWš’Ä}¾¢xŠ3Tƒ‰ñЋM1!èüš¤6æ,E™ÍçëûÕ>…FKFQHK¤~Ý™êò¥î³Åh Feƒ‰”½(Š¡Õ&ø%iŒ´…ÃÑG¿¿¿Z\l¯fó òüÿúE­8¡xNK~Å4Å"Ó†ˆÏUV*Ås(Œî¡b˜4PZðlu „ûñ|‘“JUa|Øã‚Y™ wŠ´“ì%µSØ”;ín g+Ð𥡰½(qÖóŸÚ¡hw|õ|©kNs »­Iílã˜ÀŸ¯2ý´üð¼îÃ!¡’ ϫ°¹Ï"¤¤Îˆ™!Y­G§œå %,WÎ×¾‘R›PÁ'NH졦‡úÑE{ÇãUlû%Ò¡6r±ÚoÖáîŽzy¦£-Ðë°ýÆãÖ0MÕöLëÐx<ž’ÒJžÜ·rŽ¢ý J2]êƒZxš½qÁ«JMLù‹é”RSB‰N×]L‡h(»UìÕ£ÔÀ”5b…[Æ|¾4OiëU+Át4‚´ž³ì%¤è-((«=@v=Tåàñªè¤ïgïø%”¶üáб@Þ¡²”É¡/ö¢C S›ݹb—<¹×ß$8š÷»å»ÃþÄéúî‹Ííý»åêôtÁ±áÅ©e”gºöІ OHöºEidîäC6ºŒ8<Èåtêz¨“ÎÂSìÊÃÅ~~]Ÿ„XÁÌ o{˜E­tálª„ m„ïCšã‘Å4`¹z»íöÛû¸?T+Ê´Iw@ ¤•ܽÐÓ„VÚ›ƒ¹’«¤¬Žb­NóN­èã”g ­Ë»?w‡Í–Dæë“)„œ Þ2P.}»µ”lÏþ>È»ó%Ï9§6_&>räiÒòô­5 9üÞ) Ř›ßgÝ_²š‡Ç•$qjEÚ¨U”4&%-e¸†É1)_l‡ÝT›…\G÷Z’…Ç—‹VâN½#D @ Í‚ÖÒ©’Ysº­‘{ãóЮû[òb?Z‚`. 
9àñªtdñ’N¯/-µ‹A1xC9cåV¿>ŽLdkÇ#\-|B¢Ã?2Ñ|™# y­~d")é™hƒ·õ‘‰,+¥u·G&ÀR|Zwb"iå²Aœ˜ÈBo¤ù´NLäI§þähùYÈPºñÄÄxbb<11ž˜OLŒ'&ÆMOLïi]¸oy/ëOLŒ'&ubBN i°Ês$%µ“¾üÖV>u–Xõ±£•ƒÇaG;Z/‰Ñ1bD$ïËnÄ…ZE¡ îN´¢Ô3PkÙËä7Ç£ Ô¾Äj±\+Cž |®° K‹ ï5„EœË¤vP9âT¾øY‚™Zïú»æÆPfìÈ I*J¬¦ínï‹;w›¼9xºÜäMîE¹2KÇ ÿ,9NÅ„ã…åjÐZ®÷æ ¥ïa¯.’Å®nŽDxH ÉHØðN ˜Úør³Þ bÓÇúÏÁ㊽‰¹žÝïoj%(ÓôÖ‘ž²Ž½Q :Ü»ok²râUªÐä¾Lõ4ÝØ®ï÷‹f•YPáµÞsõÔNzh™%… á‰/ÏÃNÔ-–›üШE<¦ÌõÜ9ÂL›«ÁiO>‰¯A‰r'áÚX1«]ÐZ¶lÊjõœÿæx¤Í,:_Íî»Ílþ”p ©çü:L¥f¤£‚ŸckL™€¿ †6FO–¬‡ì8Oî)“zZ+O¾I”qZC-¤\†ó_ÔNërGá:”+Ù½âÐN6;Ì鄸e¹K¾SjÕ½¶R?ž–Êí„)ðÂÝÇÆé³##ˆŠåÛåüñ<[ºÂŸd#AZÃ…&ŽRð컊Õp… RþÏ V èe¢3ðhWÌ AAZP}lØnȦ:TÁæ¸Meå¤Ì‡æÍ%ÛÆà;‘}&¹í46E&¡‡¨+Öç¶Ó;CA8i]¡´N £-·]B©üö®u7Ž[I¿ÊÀ¿œ n“E‹,‚lrp°Øœ ÉÁ»1ERbØ– ‘’õ¾×¾À>ÙÙsiIÓdsšÝêѹØÖpš_ɺ|,5ǧA8C€”ƒÇ@ÁÛÞÿqqÖ{²ã'ŠØ•d ÍK;…™”4ý^žOüöæb›ž¼Åùø¾‰ýM#þò÷0ÊZËRAÒ Ãœ°„Úl‚’ŠqÐ)Mpw¡ÒåÂÞ1s2 t¤ÆJ©ÏÃ#õèó)Þ¦÷Œ–yb †@Xce +r#UÄ°Ž˜ÃÁFnŠ/8Ðxrxÿ½ä}|ýð¯Ö§A¨2.T"KN'n)õíÀÏ 7]3Àœadà¡b&|}ööâüŽ£þ^…Iq)r Ú&\xßNèìêóMÜáormÆ<*Ê GˆÊ‘­3h’æJú:У«¥> ðª¬äá‘¶LrX–8U\œ X06”°¢˜ï7V™eÀŽLß‚³€ñJmÃm𸗯?òŸßŸì¤u² Œìîçl·q²K”,²_£’o@$Ž*€2õDÎÀO3Ä9x,‰¶ßAéé¨ô@*Ÿiœ L};ÿO®˜e+b!ÐÉä+èYâÈk]i•ÄKËèaüÞ.æâ¢!‰Â(‚’ùJ~‚É8n¤"}Á‘ÌÀƒ¶TþwRˆq~D±¿) ÊT¢üŽLö¦QÞüËÀ2‹µÎÂãJh lâ,ˆRÎZ:ëëÛ‘²®XžòØI8µž#rÏÁ“›’°r»Mû“ƒrŒ3 Z¢Î&ê ùvº»{?G1™p~K£“IU ‰HtùÅdÆ€/SL&‚ ¯õ‹ÉD%½Àb2cðŽ.&“§¥HL\LF4®§–LÐÎiÃEÔ’iÑ£2‰ä¹¶èçUK&¼5hŸ39@:$æ«%zÔ,˜uÐ-YkÉÔZ2µ–L­%SkÉÔZ2µ–L¼–L°ž¬ê”i+k:•öj-™ZKf µdüÄ4R¡IN`#….JCM2åŒéEgÑô$$÷ƒ,D—vµ ŠãJ%ÄÄDz»¹< r‹§Ú ±µ©±FC˜~?%[šÜè¶rð <šuÞ-lqÆs-Тc ©žÛÁ±çáÊLØá@•°3Œ{žÜÔ h{â{ðw/Æx†•1ƱŽOÂ6F)ǯû©ækþ92ArðX]âÔa_ZÍow?_¼ºùùôìÿà¿?YÆ“*ˆ|u0Ln}Î.4áÎÁmg˜9xN`vi5½BöB§XYË!¶PÉ kÑ`™Ã‹åfr/øÿxÞto9.TÍßÁl$øÌê„ðøH̰žƒíx/4+Õ/žNâKž‡‹æ’ÐÑn Ub5å¼…£9&Ä`<$¨Ø¹–š>*ÐxÚ˜#‰ S—Î8G¢€ŸZrJÅNBÀ ®k•9Îw$àÑÒÔñˆß9 Àê>e¨œŽÜ„õ'Yu6–,‘<\Rùã€Rg! 
ó󺦖´‡Ñ&\‚ö5íäó6 8ÊÏqL9 fŸ¼,Ðß{†$@d *ô€‰Bü0¯E”Nl‹s»PÂnüùòóžM ‚¸(2P9=ÃÜŽÇ‘¼¸W,¨®¦¨K£ü!7ö° b !9eE¹#†¥|ˆ¬7ÐjŽ1ÏÀ£M;œ#LŒ “=-š’Öi6”cäãgìpˆfú㤙x\²ä_l¾„“Ö‰>¶ö‘RøyîJ¹ 9=YÍ0Ú9x´é­–)%-=Ö³ÂÃNÚyÌS1OEŒîTÙEÍ ÔåÊ.o—àéõ¥7Ëþïvà䢸0j¶\ lÊá”Ú£ËÕ\}šW0¬8¥gbF§z]–ƒÇ;ïÚ5ZÓ¬‰ûõ[O‹©Ìç1S‘ê7¹Fv8@2r†¡ŽÇJ,¥dXhýk’R¦Þá ‰Ù Õvä‘בóq0TV0ÃpgàQPätdJ€q'^ZbÛ@É ‰Û Ke’’fC,§¯Ú‡ÇéR™ ÷dø@GÚ„Žô…[ÙIOÔ`òí@C¹Ë­Æ)¥Ðfv!HLŠXD“ ¶@J'j%M±4„qt8d sŒõp<.s¬÷úïæ.b[ÚÕ¦ °·Á›I¥e‰ô£ãg%h¡¬26‰T ˜ÁOóý9#à1²¼á~$¾xØ 8•Œ]ÁçÐåªð²S2ig“.àÙd€—Ñ)M_è,Ï1…Îp~þÈÈÇ*Zi„;×’”SIG†Û'&ps×€––-IZ¨ÜNÏàr?NƒÖ2Ç.Å?’›MÈ Œ“2Qð¼mî´=r6¢B‚$JTzŽ@žû1ìî9“ƃKս鑉gqê©ÛVÖ¸²ÜI“#ÁZ!­P*©vÈJ3 ´•hÁˆ$+€\ÁÌó ¾ÍïAn2!7´Îß¶’”J%Yš\}ÃK Òz‘Ûé9¼yîÇ‘K²­¾Ý‘|`£=.Á¸.TžNEE)]¨$ÛoS2“Vœ^ÞnAúFÃ0&†œ õÁ@ 7«¶¦ÚzõËæ@Ëèî:sôAw(¼ ²¾{wëGÚ¿ÛÉê¿ö¶`£ùâ[0—mñŸî‚÷»<›™³zô´¾«FÖmÕŸÎâõå«õ¦„Q÷ ÝmƒW7mx¨öLÀãÞb—¥vKÕÿˆxá‹í‰d1|"é£wÿº}îºÙÔ8êÝÉFIý¿¾¹¼º¹¼ýxöît}Ïùe·òæ›ÇÛKöKÂñ'9à¶Ò‹Û³óWç§ë·?_Þø{h­ÔÃçBøv,IÃ쟵9ݵÓZ÷ =¸nÙ;̾Û} ðèëþFw_«­O6à ûþv ‰¾õðÃÍñgl‡¼ùÜSèv&ò„ý¥‹Õþçþðr¨à~ëÈ\_µ+ðÑ“C¼õ-Ï·÷‘¾>¿¹ôW5Ý^]½ûíò6d\S^Woúu<éRü·ëÞÊ>EuV‰ðéÖ®ïÈžV¤œÅdÖ„ogô² ´TÆm1;ªþYhå·ÖÚÿ—”ŽÖ f,К‡ÌÊZ µh­ZkÖZ µh­Z‡hõÖ“»¢Æ¥­ì=®«h­ZP •'¦/i)ØSLM`2"R U¥Ü½ç5غŠüÝ`õÖ£{ÅH¯$w~sèmµç~N6l¼‡–ò^•e—^~zwó®õOV¹à˜z·®/ù˼nÎŽ$ýŠ>1lôüë×¾YãCñ+–V0xÍovígÈïòõcDë×D'«óK(œ]¯¤žx¶‘J4R†þNÇ={>t‹öÇ!ÿ£9~lHQèŒ^ºõëaèžtàúïï<4p» ‹wÔVÈËO[Wªùãòö-{½g¿„œî€#†_/n÷¥«n?^_<üгì»ß}õý«÷§í6µßRüËÍÕÝõÆsý7ö7^n®ZçÏ÷CÊ‘&ôù·ô_zôŸdÌ•§R²Cºß1ºÇîKMcζ]ïà­¢xÒ1é¿ÁúИìvÞŽô3vöæó ïƒjmÝhO9D òLØ~ˆ6DÊ–Ãùæêö‹!™C6Nwª0üö} NvÛaƒºÃóù×>X>YýôÍÕ·WçëÝÏOV?~¸â u×.õÍÃVÛ‡uò‘®ù¨<§>†$®fü`È¿P)qÜ`¤¶¡Šÿñc üo_p›ïÃÞïžClGáä¤óáw^ð_}8¿¾â¥÷E`—.ù1{ªò§í‡ÿ°Ãyï ›Ì¶ìg†xZ„ýôW{祳+_‘¹‘75rDÓÁzO‘=DÍÞÀúUG«½~{qúîöíÿ°úùêo_o€àûJÑðŒot£Å‰×°{@{}x¿ÉĺPç¸(ÔX]IêÈV÷žõh!~ñí_û–áæ#Ÿ1û“¯“ê2¼Xù%¾ãëãÁ/ß_ »¾›CxŠ­ŠÒNÿ‡0Ÿ ˜C†ì3kx.›±ÿWŸó2]šÎn®´›^¯æôà ¼ó˜—/þú!ùœõvPW›ö?¾h]œë^¨æ}›ØõöØÔ¼R¿½ZßòW-iy² {Ü¥õÝÏÿàÜn&Ü\ü~yñÇú_<ÕÎïs)Ö?T2©^|²z{ºæ ìâwFë#¤·më½6+e}Ç}^xÙŸ(&ÇÚçEåL”ɲ®zfÎý¯n2SÞ}\Ý…½4p¾=ýpîÿÔìK„vò(û¿Ü›H«û?É”þW¤¯ ·›ÑîMñz—ⲑçêå6åV½WÔ6”å#Þ¿{w{yýn3j¬¼n.¶ÓÉáz›-’ÿÓG¦`ýÀ‡úì>ÿ,îçzl÷LG6¨ééÃ’²÷oÆw>2ô $8ÍdÚh4²ì=†Ñ=–2á£Ã8ŽîtÊ»âà†“C»FÒ’½(*´’ë….ŠòÏÕÊ)¥SéÅÜ:…ºÊ_^"¥i|)mû2¡SZ8€T¶ŒsàœYZ&ôpôª[„ñ™dBgH'BM‘ 
ƒÌÐÌ™Ðz`&´þµÞeB³YB&´Ü¦G~¸;½ùøÙJ†¤¨Ý¿áÿÿðeÍ}®¹Ï5÷¹æ>×Üç?Sî³s¤ ¦+øû«è¤¬¹Ï5÷yQ¹ÏºþŒHˆ_ñÚAçÒÊB‘?Ù/ÕÒÅ#7ß¿ýß ®TðEs{7Ré&JIªsíÕö¸­¯5ô0p»»þõ†}šÀÂW)–Ûïàuº‚ !ÇæGA녅ȱÅÅÙoŒX¾÷)H­W§¿^}Öni©šW¡!OƒÛËw«0͉Pÿ9»î±HéÀàgÊ”Cv¡žÛÿ÷¿ãǽ{û"ö<ôÏ-`Ï’ö_]X>`ÏCÖ½ª¨ì5`¯{ ØkÀ^ö°°« V³Ö6 àüÅJ$ÁqC¬µMSE+#]~mÓ1àËÔ6 Èk½ðÚ¦QI/°¶é¼£k›fi)%Ô¤uÇQƒBöÇîŽ]T)¨¤5A¥g+=»0zVI%%à㜞ô¬¨;ÙžÅèY_¸ž|M¾Äò…ëÑNHÏ‚Ö ²>ÑÐÇχêgÎÒx:F%%1þè{n<¿5ÖÅ"-‰jNžÎ##Ò ›½Ä`åé*OWyºÊÓUž®òtÿd<ÛKYÀ»zog®Fn5r[Dä¦9pÃäÖ ÍM438e­H”ì í ^¢5„Aì•è?Ñ|<øRDs/‚¼Ö‹'š#’^$Ñ|<ÞD3w®’i-¥¬œ”hÖ e¡—hFÌ‹K¥Q¢R`\yžÎŸ“ºÑÍíXÚS€³²1$e?O—õÓõÄLÓ^–W±¢xÚŒ K`Vš.€#Ú¹$8K‚y•¦ëá_"]>M7|š.‚ ¯õÂiº¨¤HÓÁ;š¦ ;Tà0­¥Ü¡tº§ä¾<*'P‚°Iô žž÷ÞZjÃtÀØ*;)ë¬nØ ¶…j‰4¤RÎYO>ô†ØÕJ#YÓé*MWiºJÓUš®Òt•¦‹ÐtÁ^¢íX|5®ÒtË¢éxbZc´Õ.~Xh§eqšÎ4ȳÁh6‰„þ Ï”4éÆ("{êÉ“_Â1•9Ú)7/M:å‚ü48ëj6]’‰Htù4Ýðehº‚¼Ö §é¢’^ M7ïhšŽ;Wá¾ËDyǶ†I©c ±ÆõP9‚”ÂÕѲŽEHT§kÛõ¼ÅðÖào6uié@gNNÓ…•$9™JÓUš®Òt•¦«4]¥é*M×OÓ{É.™t"mW¨Õé*M·,šÎÍ«¬°qš.´Så¯}äç‚°žªKEnÜNé)i:'~Ciz²élî§CMw¡C;kç=ôê;•Ú°2IpR«ZÇ/É¿D$º|šn ø24]A^ë…ÓtQI/¦ƒw4M:7À¥U¨4ïø-{èÕˆFQ_¿‚%‰0*©eÑt[u)(žÜÖñ§¦éÂ[;‰µHKÇuÒ"'§é|àïáN”Dl‘¬4]¥é*MWiºJÓUš®Òtý4]°«*Ü –´«ìªJÓUšnQ4mŒ•hs¥é|;Ñ­ÓRˆ¦óÏušŒ›X@Æ"j1éõª±ìS’:ÌÓ9^ÃÊ1ŒD‚Eh‡fÞû6|§ÁÃ'H‚óqzåéRLD¢Ëç鯀/ÃÓEäµ^8O•ôyº1xGót¾s¶o(yL-H5í}l`p}étyPI.‹§ËB¯ úÏÍÓeIGËyº8Õcº¡ôLßu’ï`ë|¬½~œéödIèçÚ†ŸÉ$Dg:g:g:g:g:g:gºs¦+ý¥%Ž¤Û¯jÄäLçL·Óq™¥f!¨j“éJ…`w3]ãúÿ½ŒóèáÉE¯¶Å$ªpÌtRnaÃÖNZêÄ^eº‘pú¹ë“3݉¿4Zt}¦› Ó5ŒU/ÎtÍ–^éfòN3ÝÐS Åže:.[Bã ÓEµÅ JO?7hø»™n¬u>Þ9gº±dæ{Ó9Ó9Ó9Ó9Ó9Ó9Ó5˜n¨_Ug:gº¥˜Nʽò˜%5™®ÖÑÇâž›˜®|nÒp¡w¥”›äI¦‹$Ñx2›N‹Èæ›¶È×:~•éêE5boíp­“­”œéNü¥Ñ¢ë3ÝLø{˜®‘`¬zq¦k¶ô‚L7“wšéêÅ“•·þþSJn™v/ÓoΘn(êç.K0]Meù­¡ŸÞàËfÓåo-¡¬!ng™ë‹{Ó%‹~–¯333333]ƒéjIdLÜïW1¨33ÝRL§…ß‚$MÖd:­G=ÈíGH”ÏULݳÍö:à™tc!8Qº”ïà(d"í;½Ôæw'Ó …KÉOèòK£E×Wº™ð÷(]#ÁXõâJ×lé•n&ï´Ò<¥âçÌíG”NÃñL鯢Âb½Ž¥ÿ¹¸øïVº¡Ö‰ðâdº±däk^]é\é\é\é\é\éJ7Ô¯R®t®tK)]*kNµšJWën_óZ>7YRîÌÛëXŸ=èÕ@,œ0m!ÿp1¿c'© ÑZ·¡ôɾlOñ¡Ö±¹ˆÜÆ’‰ýç7¸ùÀÍn>pó[cà6Ò¯¨O¯ðÛZ·üVƒ¤ù'ܸ•ºÄzûÑùsS ¶o R—¿ó“« ’lŒˆ’În$1B'i®K»¯¿1¿¢\"ì†`öù½?œ7Ztýù3áï™_ÑH0V½øüŠfK/8¿b&ïôüŠ¡§?:¿‚o)ñÉüб¨²Ó•ô”{Vê§§ŸYþv¦Ëß:ÿÃjgNôÞ:œÞdº’,ˆ\Hö±›—3333333Ýq¿šb$¹ð>¤G!;Ó9Ó­ÁtdRâÓåºà~¦£Ü9 ÆÞ ”ë¶U¸q~lÊì˜é8ÔWh‹)5oõ½îðܪ瘮^ËSˆB?œ}lâLwì/­]žé¦ÂßÂt­cÕk3]»¥×cº©¼³L7ô” 
ð(ÓI[a3Ý`Ô£³.þ ÓýJ¯Âûé#×2¨ý[ƒ†ÎÎ~¿êâ{GÿíWDáÔ>¢èW2ò=ÅéœéœéœéœéœéΙnï/M˧~¿ªÇË‹éœéþÓå&‡(Z6¨l1Ý^>öN½‡éêç YÒÎ ”ë>yô…Í(lð˜éb¾…©ˆ¢¶oõX‡>–^eº¡p|·¢®¿4Zt}¦› Ó5ŒU/ÎtÍ–^éfòN3ÝØS*=Ëtùµm‹O˜n(ª\‹éÆÒ Ó µŽÂ{{Š&;þ«¿3333333Ýx¿JQ|·"gºµ˜.n ³µ™®ÖÞÎtåsËy{Q¬så:>ØþôF¦£ÍÊŸØOfÓA¹…ÁÊùͤ¥S´W™n$ÅQ¥3݉¿4Zt}¦› Ó5ŒU/ÎtÍ–^éfòN3ÝØSJéÙMÅnÀpÂtcQm­E¯céáç´Å¿›éê·FÆÎ!¾¿êÂ{›Šÿº¢\J}6333333]ƒéJÉù…Qûýª¨33ÝZL—˜å,ô¨ÐÜ›n¯Jw3]ùÜ”¿ŠAìÝ@–Ÿ<úi#Ôtröc¾…r;YûV¯u‘ã«LW/JÉRÂ~8úX;ìLwâ/]ŸéfÂßÃtcÕ‹3]³¥dº™¼ÓLW/΀ÖÞ"äWÝÃgÿ¡Æ él6Ý/D…´ÓÕTBA-]H¯_Æt{ëp‚x¡uäc‹ÄÇ™®^Q9‚Å ÉØgÓ9Ó9Ó9Ó9Ó9Ó9Ó5˜®ô—S$è5°!áL·ÓáÆAóoSƒ6™®Ö!†»™®|n2ãÔ}1-uHOΦƒ óð5ò1ÓQ¾…[‡éj]Lô*Ó勿‘1§~8Cgºž¿4Zt}¦› Ó5ŒU/ÎtÍ–^éfòN3ÝÐSŠåÙÙt1?탊>í¯G•µ˜n(½%ü.¦h\/îMW®Mb r!úÎtÎtÎtÎtÎtÎt ¦éW#%?B™n-¦£-?š1¿¬‡ölºRŸw¿‰éÊç&€ÎtcÉ|6333333]‹éJ™»¡ªÝ~•)Eg:gº¥˜N7.¿LæŸ/†ÿüûœëBâ»™®|nÒ˜ÿwèÜ@¹ŽøI¦SÚ 17ü1Ó•A%Ë·zû/ݵ.¼ÊtCáÐgÓõý¥Ñ¢ë3ÝLø{˜®‘`¬zq¦k¶ô‚L7“wšéÆžRž=Bpó$Æ’ÊbJ7”žÂ—)ÝXëЋk^Ç’}Ì6u¥s¥s¥s¥s¥s¥s¥›ëW%‰++ÝRJ—ÊZV1‰ÚVºR`ñn¥k\ÿÿþ{}d“•lÈýÑÉÖtVou.§Ý6“Ö: ïN¦+e0¶Î2ž½NÌ•®Ç/]_éfÂߣtcÕ‹+]³¥Tº™¼ÓJW/Nò{ÿ)…éY¥‹ÛÙÆt{вK?÷ƒâZHWS±¡õÓ3Ù1¯õ[KîáBW)Ÿ§4›®ë/]ŸéfÂßÃtcÕ‹3]³¥dº™¼ÓL·_œSJÖJ1Ò³L'´1Ÿ¬yKúó¤‹?«t5•p¢Ò'û.¥Û[G£\ia~OéêËùNF’}LØw¥s¥s¥s¥s¥s¥s¥;ìW%"AgÉ^­ â§¼ºÒ­¥t¸åGs¢¢5•®ÖéÇñ 7)]ù\Ö,ô^™s]î3žL—’’3¥3ƒѬw«çºpptÞ¸åT -õÓGÁo¸•Ö±”ÿáú­‹ñ^¸å+bJ‘ñB²ãÙÚ>pó›Ü|àæ7¸ùÀí£¿Ôh&,á´/.' L×ã/ ®éfÄŸƒé ƬÇtMO/ˆéfôNcº}pgÉIû«”CºÓlÆGÙtU$Κü ©ß›Òþ\L·«æ^õµ‡øgaºýWcK¸ï|z /ÇtûˆT†|cøº†N`ºÀtéÓ¦ L˜î)^fÎäýÝèÓ5ŽÀtéVÀtåŬ N›˜n·K|zmºú\Ê©Ì._JWÖ¦Û»ÉælG˜ÎLÊŸ¦·¹/vŠËÜܹv¤íª¯…Ó?íàV½Sææ¾w0ó·2¢r™{ï(³(*·8¸ÅÁ-nqp‹ƒ[óàæ.ˆ54u㪀jÜâà¶ÔÁÍ·DI•MÛùÅ-?Ø<éàÖÿO?Œ_ì’_™_‘·T6È~_᛺vŽ>ÕÜó­ù#â³F~EïÃyãëçẄ?'¿¢¡`ÌzñüЦ§̯˜Ñ;_1´J=ßò¸"¿‚L6ñ£üŠ1©kaº!õL–_1æË÷aº1e˜.0]`ºÀtéÓ¦k`º¡¸š%ŠŠ¦[ Ó ›gÒLW¯K¥|>¦;ÿÇ ÄÙ^”?=ÓÉÆBjúÓa*S˜kÖvÿnGŽz'¦{ˆ#Kî}qŒ× zü¥åÑå1Ý”øS0]KÁ˜õÚ˜®íéõ0Ý”ÞYL7¶JQ†k‹ŠKýôC¯1ÝC³›ÛRÍ—Âtcê™>+›îëWײâþ†w²Ý†é#ª´—üö Ó¦ L˜.0]`ºÀtǘn—‚š­M¾ìRdÓ¦[ ÓÕ“…IËÿ¶0ÝÃŽÔNÆtûss¢,ý Tì’]ÛûOUù5¦ƒ:… È´}ôÙíàÞÞAÙ3´›=}ÙQL×ã/ ®éfÄŸƒé ƬÇtMO/ˆéfôNcº}pÉà¤ýUJÒµ½ÿHaãH7&t±’âUÊ%^¾©$ógAº!ïèS¦áån1—MÂÊØÒ¤ H. 
]@º€tÇ®ÆËÈ)£wãªr@º€tKA:¨¹l†bMH·Û•×ülHWž+e_‡å_zˆÝ€¯„t¼!C¢×W^÷)\ ~¶¿sïvpï•×Ç YAÚ9½;H×£/ ®éfÄŸé Ƭ‡tMO/éfôNCº}p$ƒþ*e~í•×r¶Ø˜®¼ŽIu²µ0]U•Sù;»tÕçöY˜nÿÕPBtÒ7¼ã÷•ÿRV„±õ•DIñÀtéÓ¦ L˜®éöxéÀÞnó°3ˆ’âéÖÂtX1YY±‰év;¦Ó1]}.’Ø‹\¾'P‘èéÚ+¯TvÞŽ¯1Õ)\¶œEmSénÇH·bº:¨¥ša}qöÔÚ 0Ýixt}L7#þL×P0f½8¦kzzAL7£wÓ ­Rž¯Åtb´±åL7&ÕÃt»z ¢”ºê-}Xeº1ï@ºñÊë>"sçÊëCäÀtéÓ¦ L˜.0Ý1¦Û㥂s»×ïî¦ L·¦+/¦Yr0ç&¦{Ø¥Ó1]}®WJ×¹\ô°Ã+³é2l‚ôuç?ä2…=•ã·§zµ+ÞÄ[1Ý.NÔ©Ý îaÇÎézü¥áÑõ1ÝŒøs0]CÁ˜õ☮éé1ÝŒÞiL·^³¼ú«”^|åUsÞ2ø¦“ʼ¦ÛUYÑ…o„ƒ,ù³0]ùÕ˜Ê&4u0Ýî'½Ó=”¨ãʔӦ L˜.0]`ºÀtǘn«,šˆzqå˜.0ÝR˜®¼˜ZvvåiíK¯»>]³8 ÓÕç’)jo)Ó…˜Žic¯5_^c:©S¸uê\ÏÝíPäVLW…r(sË}q˜®Ç_]Ó͈?Ó5ŒY/Žéšž^ÓÍèÆt#«$¸¶D‰Ÿ›õy”Jº¦ROFŸ…éê¯.Q9k×;åÿ}cmº}D4Ð o( L˜.0]`ºÀtéÓµ0]—%ö u>Ìívœ(0]`º¥0]y1Õ™Ò‹ï·¿þð«q’³1Ý>¾;çN5–‡\Šél³Œ™ü5¦Óº5VÈ$í£Oµ…{³éFÄÃèóÚå/ ®éfÄŸƒé ƬÇtMO/ˆéfôNcº¡UŠ(_‹éH7Ò#L7&Uq-L7¤žíò醼#tã¥×1eÑB"0]`ºÀtéÓ¦kaº/ÅБ½WÙž?̦ L·¦ÓZóÍŒrÓívæz6¦+ÏU)Óµ»1UK×fÓaÒŒù5¦Ë[-(îàn1»ˆÝŠéê œUT¹+Ž1ú¼öùKãëcºñç`º†‚1ëÅ1]ÓÓ bº½Ó˜nh•¢$—b:(Kññjÿ¾T¤µ0ݘúüa-$†¼Ã(÷aº1e-$Ó¦ L˜.0]`º¦«ñRxêd©ïqõ¹wY`ºÀt+`º\³Ô„Y›˜®Úe0?Ó•çæECênLsJ¯Ät²e&?(Mgu;p IM¡»šÞJéê Fœ±+N î¼öñKãëSºñçPº†‚1ëÅ)]ÓÓ Rº½Ó”nl•Rº”Ò•ÝØÎÉt»L™:m½¿~ÒbÉt»*Gr}Cý«o_Ó”nÈ;êp¥R–SÜy J”.(]Pº tAé”®ÆKe÷Ü«J Aé‚Ò-EéÊ‹)FÙɨIéª]NΦtûø&Ù“ö&˜~ϯ8³Ñ+mŒ‰ý ™ÎëÖ¸0S§ ÍnGßkŠ_ŠéöAs½¡Ã}qÉt}þÒðèú˜nFü9˜®¡`ÌzqL×ôô‚˜nFï4¦{ .”\ú«TÆkKÓQÅt‡w^Ǥ¾h•úS1Ý®ÊÍT¡¯Þ> Ó yÇéÆÒtuDMfèo„qgL˜.0]`ºÀtéÓcº=® iêÔ’Øí50]`º¥0]y1¥l v0]µW;ÓÕçfÍ–Ä{¨œšX¯Ät°Yqk‚—˜ŽÒ>…©Ö o)}ØИî1¨”…ß÷\‹(0ÝkþÒòèò˜nJü)˜®¥`ÌzmL×öôz˜nJï,¦{ ž©Vè¯Rš®Í¦“D›èA6Ý T²¥0ÝC•q-ÑÐWŸQ> ÓyÇ0߆é#:#g~C¦ L˜.0]`ºÀtéŽ1Ý/Ë6ÉûÇã Ñè50ÝZ˜®¾˜YSYž1µ0Ýn'F|2¦«ÏµZÜ1SîM ƒ|)¦ÙÊ\LúúÒ+Á&D^sÑÚSj³˜âÑ[1Ý8qL×ã/ ®éfÄŸƒé ƬÇtMO/ˆéfôNcº¡UJåZL§`@>ÀtcRób˜nH}¦ÏʦóŽ ß‡éÆ”y\z L˜.0]`ºÀté˜n ®–éfQš.0ÝZ˜6+¯oò²71ÝnGrv‰ý¹ ŠÞ™@æRÎ>WfÓ©l®èN¯9–9 b"Ü&ŠÕ®ìUïM§GÑéµ`]ŸÓ͈?‡Ó5ŒY/Îéšž^ÓÍèætC«Óµ-$¸†hÇN7&Ud-N7¤^‰?‹Óyç©mÖåœnHYNœ.8]pºàtÁé‚Ó§;æt{\Uõ±»q4Eqºàtkq:ܔ˙J-5;½îvõz÷Ùœ®,Ÿ®þjS“wþ¶~ãµ×!e€œ.8]pºàtÁé‚Ó§;ætCq•%§ N·§+/¦£¦œ¿Wáùõ‡Ø0ŸÞD¢5þÈ8yºòÚ+n% !Ò¦s×T$´›Òîv¢ëÜÔgöO;¸ xÇÌï<¸ (ó§‚^qp‹ƒ[Üâà·8¸ÅÁíe\-/©%Ôn\UŽƒ[Ü–:¸ñ–“)~ïrô»ƒ[µKxöÁ­>·ÌvlvYÚí2ë¥õŠ`3Í`¯»ÿB’ê‘Ì:J…ØåÞ{P»¸ò^¤,]q‚÷ ºÎ]?¿bFü9ù 
cÖ‹çW4=½`~ÅŒÞéüŠÇàhYßX¥(]›_AZW¬ƒîƒR‘×Ât»ªòã4Ùêó‡åW쿺îxûÞ‘œîÃtûˆ9 ´{!ýˆ{PéÓ¦ L˜.0]ÓÕx©ÊäLݸª¨‘_˜n5L§)q§ûßÃNýL§©È4w.èW;$¹ò”æ­ QÆxé¤LጠŠíTªÝîî²âû ¾×Në‹3ˆî]þÒðèú˜nFü9˜®¡`ÌzqL×ôô‚˜nFï4¦Z¥übLge½g“L7&i-LWUYâ25å õ–> ÓíÞp„ÜõŽ%…û0ÝC™“tn…?”ydÓ¦ L˜.0]`ºÀt L·ÇKA³~\e´Àté–Ât²im¾ç/àÓ¯¿‹ã阮>×Ô!u&¢¾<†éÌêU’„ðÓé&\æ*¢´•V»ºU½Ó ‰#ˆjE]þÒðèú˜nFü9˜®¡`ÌzqL×ôô‚˜nFï4¦[¥Ä®Í¦cÝœýÓIͺ¦RÏ(Ÿ…鯼“oÄtƒÊ,0]`ºÀtéÓ¦ LwŒé†âª=5/ L˜nLW^L4M–š˜®ÚqæÓ«Õç™zo™æte6]9¶¹½Æt¹La“DÖ¹ß^í8ÝŠé†Ä[`ºixt}L7#þL×P0f½8¦kzzAL7£wÓ­Ræ×b:£MåèÒëT†Åšÿ©Wû,L7ä›ÿ)øô˜.0]`ºÀtéÓ50ÝH\•µéÓ-†éÊ‹éœ@ÚÙtÕ婨ôI˜®1þÈÉ¿'Xœ‰é|+¾G~Mé¬Ìàì’´…Ú>Ó³ÜJé†Ä1E2]¿4<º>¥›¥k(³^œÒ5=½ ¥›Ñ;MéÆV©|íWFßXñ€Ò IX¬4ݘzý°;¯CÞÉOßÛ.§tcÊ0’é‚Ò¥ J”.(]Pº¥‰« ”.(ÝZ”ÎvúVNUÒN¦«v€éôdºú\–DØi}°Û‘^šLg›–iúÓU†I„:=Û½‚{JùVL7$N)’éºü¥áÑõ1ÝŒøs0]CÁ˜õ☮éé1ÝŒÞiL7¶Jå«“éòæv”L7&Õ»ó:¤¾ì3> Ó yÇùÆ;¯cÊžvCéÓ¦ L˜.0]`º©¸Z+Ö¦ L·¦+/f­“Êš˜®Ú‘ºœé|S0@ñÌ ¤ žøÊ¶ ¥×ÙtœÊTG«g²f³˜Ý.±ÞÚAbH<3ÄÀt¯ùKË£Ëcº)ñ§`º–‚1ëµ1]ÛÓëaº)½³˜nl•’‹;HÀà9ãüj_¢†/…éÆÔ—ÄGaº1ï˜Êm˜nL™sdÓ¦ L˜.0]`ºÀtǘn—bÐîþøUÓ¦[ ÓÕÓÔÌSò¦Û턟*eŸƒéöçº qoY.¿öBLGº»Z~}pƒ2…k¸bi+­vD‰nÅt#âJàK¯]þÒðèú˜nFü9˜®¡`ÌzqL×ôô‚˜nFï4¦Z¥^468Ó‘³ÒüjÏÌk•¦{¨Rsbí«— Ÿ…醼£t_iºÇˆõjå7”±¦ L˜.0]`ºÀtéŽ1]—R^SiwxØý®µ{`ºÀt `:Ø4•! `ÓívøT å$LWž ™“ªHg;ð+KÓiÚ¸¸žò+p+G.3½}¿½Ú¥O-qØÜ#ñ?ýáÏÿù׿üÛÿüá¯ÿñ¯û?Ïÿþ7šóÿ§¡þïbSÖÝGTüÚéÿK9Býûí{¡ýìôË?–püK=üòÇú§ÿeZ¾Þô+òW‘iÇ2«—¿”ü]y]ÿRÕ—#–#¡dé¸în­9¸*he$]qò<=¿pµ†G×ǯ3âÏÁ¯ cÖ‹ãצ§į3z§ñë>8I9¯i•BãKñ«m¬ùõeæ1©ôª±üÏį»*Æú]õ õ? 
¿>¼S¿ù¦¾w˜oįûˆbå¿ô†2‰,ÉÀ¯_¿~ üøµ_k¼Ì)—HôÆnÎ_ÇÕÀ¯_~ÅM‘’dckâ×j‡ª§ã×ú\Ïœ¡{ (v˜àBüšiËZCÒküJe —sbSén§~o–dÔ’~ÿ3~—<0]¿4<º>¦›¦k(³^Ó5=½ ¦›Ñ;éÆV©ï…üÎÅtš6?ÀtCR Öª98¨^>«æà˜wøé¾Ãå˜nL@`ºÀtéÓ¦ L˜îÓ ÅÕ,˜.0ÝR˜Ž7Q,¯o§nµÏpkËqš$§€O=ªÐðèúðiFü9ð©¡`ÌzqøÔôô‚ðiFï4|Z¥Téâ+ºì˜ñàŠ.oY œ•Í´ùQf·Sdz?ÊÈ–¸<¶D¸ö§ŽjGùÒFPÅQVþf˜_ºjHªóZ-|‡Ôs’Ïjá;æ|§Sfœ.8]pºàtÁé‚Ó§;ætCqõ¹)]pºàt+p:)‡G@/Gï¼Àåé~ï]Î]i ]qÑñâÓðèúœnFü9œ®¡`ÌzqN×ôô‚œnFï4§Ûgv~c•²tmcZ·­Hj<*ìÅö²³úÏ)Ép¹N’ùš åI’5Q»xpUƘô|ëhåDÇÒÛ;d½Àšñ†æ9Àê]W­}©”e5ë” QJÔWÏÆŸF`¼#ÏÍU®'°#Ê$ 6lØ °A`[v ®:rØ °KXÝDÙÙ ÛgÅjGIîí9<$Ž<l­5<º>m(³^œÀ6=½ Ñ;M`‡V)ñ|)±MElÞ¥zJÖn+Qí('¿uµWN«}o7<ºþj?#þœÕ¾¡`ÌzñÕ¾ééWû½Ó«ýÐ*¥|míÔrPØÊË{°ÚÛ&E‘f¶öW™Ý.=}o;é«L}® då¶«v;Ö+[:â&ÉÊÙŠ]õ¶TI‹}•ÙU)£1¼¡þ{Iô¿í¯2ïx6¢¾wôμø}Ä,¦o÷eõ+â«L|•‰¯2ñU&¾ÊÄW™ÆW™/!%'éÇÕbG_eâ«ÌR_eÊ‹©ÙjσÔ{U]øVN7".#ÆW™.€ixt}N7#þN×P0f½8§kzzAN7£wšÓ ­R×r:ʸéQíTßÀ¹– ‡ÔÄtÕNÇtñÿôãøäreMq5Þ˜Uú“š+3;¢žæÏÂtcÞÑ1ݘ²å+Ó¦ L˜.0]`º¦Š«ÉÓéÃt^ÎŽFIÔ©ó;H÷bº:¨j9ótÅyb L×ã/ ®éfÄŸƒé ƬÇtMO/ˆéfôNcº}pN Aú«bº¸Ì,`.¡ã>¹X‚”)w8³ËSG»Ó8Ýáø?¸ªœ@®,r%o%†ªù°«^Hý^”ägsºõÈøiœnÄ;FwrºeétÁé‚Ó§ Nœ.8]“Ó ÄUE Nœn%N'i“š:‘5»¶?ì€ÏîÚ^Ÿ‹„ÌnMN¸Û\yJ2lÌågòË“[•o‰ö¤ª»ßZâ!Ž™º~Ì 3Òì°ª–G—GšSâOAš-cÖk#Ͷ§×CšSzg‘æ×à%¼÷W)Êz}E^>¨ÈûÀŒîú†Ôï¥+~*§{¨sï«gû,N7æyª‰{5§{Œ¨ Ø.¬ÿÛ/ÐàtÁé‚Ó§ Nœ.8Ý!§Ûã% vã*ü/{÷²+»øU×…ë’ГŒx”A€Ü<=ÉÛ‡dÃe‰,Bg/ôÈöêâ/î’(}E‘ˆNN·˜Ó‰SÖÚÞÀé¤üËõN§dóèÉ­Ô%»Óé@t“r‰ºïtPÎas&ÜC×:Hœuº©påá3œn0]ßé΄¿Æé: æªwºnO/ètgòžvº©«‘Ýët›§ƒW„'£æµ¶mŸKÏ߯:ûc;Ý\ïän.™Ä{¯átátátátátát§›W•â½×pºµœ6AðL®ÜuºV§èW;]ùÜ2  1çÁ TêPìÖ‹Êc[ý™}Ÿé°žÂâ‰úo«Ãg§ÓÕFËßGÝuáí©2˜îÀ_:=º>Ó  ÓuÌU/ÎtÝž^éÎä=Ít­q,·üÙÇW)ä{ßâ-‰0Ý+‚›Zú ª­µ<Ý+ åqzÊôµ˜®u&fù`°Ìo#ùíL×Z”¤åÈ?HÓé‚é‚é‚é‚é‚é‚ézLWÇKJ b0W}w¦`º`º?é¨}ÍÄûÓÙ^uøèòt¯FKˆÛ8¿Í¾|:P…N®OgÂ_ƒOsÕ‹ãS·§ħ3yOãSk\2¹Àø*%è÷âøÆz„O´‰”ƒÀlýEj8\þ£Lù\e0-rðª#½sòô ŒÜŽºJ3zÇöoudk9]KEõ¹ËÆéwæ-þØN׎š À`Ü;ï“ owºÖbÎÂÀãdï{ ‡Ó…Ó…Ó…Ó…Ó…Ó…Óí«ž¬|ÇãªåpºpºµœŽ·œYÁe°RK©cÃtøÓ2ÿó_þñ¿ÿöË¿ÿß_þöŸÿÖþó·ô¿ÿ•s~»ÇÿíÌü—ÿù¥>—ÿøº‰ý×òtðÿÛ†ùöXðÓ?•‘æ§zßûÓ?Ô£úéô1ðáîmÙÊ1,•Uû;;Öþþ–äïʃô/õ¢±ß¢zù×ÉaÔ¢’=*±3á8Å4À1±uzt}‰=þ‰í$˜«^\b»=½ ÄžÉ{Zb§®RàV‰å [¹?8ØÁ¥®«×•ØZ'ô¶†êEÛiÿ]ejxçF!´eª;·ü×lWíDe_KbKªr‘¡ ‹£ôulj/¶áTï ós;—,kHlHlHlHlHlHlHì±Äò&)!r¬ÏßêyHlHìbk‰²Y&<¹iI|ý“›!Vœ@†¥ 
o|rC¢4¹Áþ“[.Od–T”ûïCÕ:õ¤’fmÔë_yÎÐb¡Uuzt}Ò<þÒì$˜«^œ4»=½ ižÉ{š4§®R¤÷’fNi+Ï!¤9Õ×ÚÐw.=Ó{³y®wìA§›Kö¶ãv8]8]8]8]8]8]8ݹqÕS̘ §[Ëéò&,J°³ÏÏ¿ÿ —ïùåïºåú®Z’Á $’vžÜ.t:…ÍÅS¦}¦“-;'ð„}¦«u¿[Ãû ¦› Wž¡ƒéFþÒéÑõ™îLøk˜®“`®zq¦ëöô‚Lw&ïi¦›»JI¾wŸ2¦X>Pº™¤¤i±í|§ÒûWÛ&d®wòƒï5¿’‰)ùÉ$fÓ…Ò…Ò…Ò…Ò…Ò…Òu”®«J`jÃq•óþ,õPºPº?MédÓäåA›»J×ê”íj¥+Ÿ h9©ðàRä;߃Ð-“ÊÁkPZoÍͽ¿ U«ñG•NÛÛ2Dl8 'ò¶nQ(Ý¿tzt}¥;þ¥ë$˜«^\éº=½ ÒÉ{Z馮R 7+Ý–‰õøjÿyTZl›©ôFüµ˜nªwé9¦«-jbAÍ$c¦ ¦ ¦ ¦ ¦ ¦ ¦;f:Ý$ŽF|Ià,ÁtÁtK1nfÊŠžûL×êHõj¦+Ÿë˜´®Œ=8Ì!§;'Ó¡ó¦\Z9xéÕê=´eåÁOÝ­NåÙí|k£†T¾9iÎàím™pº€éôèúNw&ü5N×I0W½¸Óu{zA§;“÷´ÓµÆ)g\í_u7晴¶$z0îÁëÈøAT£µœ®¥be!§ç¯¶o;ê\ž@Óx$·LnçÛZL¥Ñ’IlçNNNNNN×qº6^:‹ÙãªC¼ôN·–Óùu t“®Óµ:•Ë_z-Ÿ+Vn˜ÕúµNÒÞãÐeN—E7©'óÁ ߨÕÜÓ ª”o8?»8ÝT8}{y8œî`:=º¾Ó ÓuÌU/îtÝž^ÐéÎä=ítSW)Û[8àÊý6¶”öÛ˜Šê)¯åtsé9-§›ëÇçœn&YypHátátátátátátátÇN75®"Åk¯át«9p¹)¯/ œ®.b—ÓõN'Ù‘ÈÄG'P¹ÝÛyAçB§³ ÁÊùÓiIVNa$ÄÜ›N÷k؃ÛâþÚ¨db·q¸÷i‰Át{þÒïÑÅ™îdø ˜®Ÿ`®ze¦õôjLw2ï9¦›¾J‰ÝËtu Ã=¦›Žj+íõ:›^¾ÓýzÔ–2dü wÔbºÙdšb:]0]0]0]0]0]0ÝÓ}/ ¤ÌÃqµœoLL·Ó½¾˜L&¢)3Ý·:´·÷c®`ºoŸ›M€²ŽN ΔóLǸ©€Šì3ÔS˜R¦nÒZ‡.éQ¦kሜЇá!Óü¥Ó£ë3Ý™ð×0]'Á\õâL×íé™îLÞÓL7w•ÒtïêtHeL9Pº¹¤«)Ý+}ý]-ÓÓ—Zœî×£æœÌñƒÞ1zNéZ‹ЉÇÉ0”.”.”.”.”.”.”îXéÚxé î:WÅb2](ÝZJW¾˜¤™Ì»J×êÊ“ÓÕJW?×Ë1Ûè"Ä•Žòf¨”u_鰜 TƬ¾Òµº´·úJ×åò÷Q‡#Œ=$†üÒéÑõ•îLøk”®“`®zq¥ëöô‚Jw&ïi¥k `J2¾Jå ÷®MǺQÊL×"(¤4ŽZ†¦µ˜®¥òL>èhû~ÖâÍt¯ÞQJ¢ãÞñÌÏ1]m1'eÕ¾uþ¶r0]0]0]0]0]0]0Ýþ¸ZwP‡ñˆŸ‘b‰`ºµ˜Žê“Ðú_`zÝÐ?;El&\×À§‘*tzt}|:þ|ê$˜«^Ÿº=½ >É{Ÿ¦®RÈ÷â9o˜áŸxË A”¡?uš7v/£ÒÕ?ÊtÚÿëïÛÏ)åïgÓ]¹±÷fžê>ç³]µÕu-§›J/¿–ÓÍõŽÃsN7™LÃéÂéÂéÂéÂéÂéÂéŽnf\…NN·”Óñ&è`BƒÕPJ]ýÇgW\+RqÁÑÙUâý±2œî`:=º¾Ó ÓuÌU/îtÝž^ÐéÎä=ítSW)Êt«Ó!lHŠt°ÀfÉÀÀZ·Ê@áõPwØþûŠTTî„:Ò-)Qžî©ï“jJ«9ÝLú,_Íé&zÇØŸtº™dátátátátátátát=§û|\5ÈñÚk8ÝjN§I¹|‰]n¥Ž”¯pÓº¶#øàµ×R è7>¸i59E†ý·\Náúæ- ’Ö:t~væáL¸òø¢9¤ªN®/šgÂ_#šsÕ‹‹f·§Í3yO‹æÜUÊåÞ=$Ä7U:˜y8•ÉÖbº¹ôúÅ^{êü6XÞÎtsÉ(¦ÓÓÓÓÓÓÓu˜®Ž— „èy<®z0]0ÝbL—·òD%€;›Âýüû/°¦äoËš]Ätås¡¾„ƒ÷Æ[ò/BYÞÐAí`~…”S8'a,Wëeº©pL7ö—N®ÏtgÂ_ÃtsÕ‹3]·§dº3yO3ÝÜUJý^¦CÞ²mõ:U¬Åtséå‹Í¦›ê[÷v¦›KF±:]0]0]0]0]0]0]‡éÚ¸*B&>W3ÓÓ-Åtå‹éåÞûLWëÌÓå³é¤ò_®oÞŽN RWŒ™N*ER9Æ}¦Óö"Q?i­ËšùQ¦«ÖŽt±a8ÑäÁt#éôèúLw&ü5L×I0W½8Óu{zA¦;“÷4ÓM]¥œïM&ÉйÚ+$ èú¾œÉL×R±@Τ§¬_‹é¦zç}¦äíL×ZÌZžŠùƒd9^z ¦ ¦ ¦ ¦ ¦ ¦ë0]/-å<¾ñTG ¦ ¦[Šél³rWNƒ—^[¼Íf»ˆéÊçºBVÔþ 
TëüN¦ËÙË_Í pÿÉÍÚ3cùÛ ¢Ö'<‘G®6j‚$ãpåëN7˜N®ïtgÂ_ãtsÕ‹;]·§tº3yO;ÝÔUª<Üêtâ¸ùÁdº© ´–ÒÍ¥7øZJ7Ó;åÞ^ŸSº¹d,¡t¡t¡t¡t¡t¡t¡tÇJ75®y(](ÝbJ§PCIÀ¥SHùm ‰Ë”N!Krœ@¥Ž,ß¹4n9)!—SØ]T°ÿz{«ÛY¬èV¤+ÖÑ´Nþ…ã”0n¨/]é΄¿é: æªGºnO/ˆtgòžFº©« ÞŠtš`>Pº¹¤y1¥›Jé‹­L7×;òà+¯sÉÞ–Õ¥ ¥ ¥ ¥ ¥ ¥ ¥;7®ŠæPºPº¥”®|1­-®H¹«tµ.§·—,.Rºò¹åXit™Þ¹óËæâ¥;v•R=…€¼›ôUgèO*]k13Ãb ¥ðK¯G—WºSá/Qº^‚¹êµ•®ßÓë)Ý©¼g•îÕ8å ýu|¿ÕQºw*ûFbûL÷Š`í—ÕqT–¼Ó½R¹òx¬* üRL׎˽Lù{ŸœL÷jØv ÙIf±D0]0]0]0]0]0Ý1Óµñ’Uáƒqµ ¿ÁtÁt+1]ýbd—”¼Çt¯ºWo Ñ>‘Æ7¦† w¾òÊióúRëþZEíÖûóf[쬡w+ÓM…ÓL7ô—N®ÏtgÂ_ÃtsÕ‹3]·§dº3yO3ÝÔUÊäæ}^µŒ)x°Ïë\T_‹éæÒËךM7Õ;X>ÿ9¦k-B9n÷’a0]0]0]0]0]0]0]‡éÚxÉHIu<®¢ÓÓ­Åtå‹©È#¦kudWo Ñ>—Õ¿l·:1¾‘é6ÝW:lgz¹¥¾'âë§úG×¥ûÎIr‡ã·I½¡tüÒéÑõ•îLøk”®“`®zq¥ëöô‚Jw&ïi¥k«É—P±›'Ó‘oˆG“馢–\k)]ME 2R§ß3ÆZéfz‡ÒÛ¾·+]k »ÛÉ Þy ¥ ¥ ¥ ¥ ¥ ¥ë(]/ \Çãjù_(](ÝRJW¾˜fXnù»J×ê ]ýÎký\GVB¥Ñ äH†wN¦ó­<¼R¢}¦£vkì¬Ö¿…~Õ¥ü(ÓµF™ðƒp ±ÍëÐ_:=º>Ó  ÓuÌU/ÎtÝž^éÎä=Ít­ñ,å:ã«T¦{W¦Ë,[¦|Àt-‚$ΊDu\‹é^éÍGõªû~ÖâÍtí¨5g%÷ŽòsKÓ½Z,5É|ël¡ë`º`º`º`º`º`º`ºßÆKõH&±Ñk8]8]8]8]8]8]ÇéÚxiXËq<®jL§ §[Ìéx3ð”,§¾ÓÕ:+wYW;]ù\4RB=•:ô;Ô·ò\–ØöŸÜrmWÿ6ÞÈÌõ\wçG®…Ë.2˜@ñª“xíu0]ßé΄¿Æé: æªwºnO/ètgòžvºÖ¸&!µñUên§Ó”¶Óé^I˃Ë`•°W/6®¦‚TÎLÓqz·/öÖkë,mõ7—ÿV÷ä[¯­E*·9Y>H–c:]0]0]0]0]0]0]‡éÚx©`ãû!(§A0]0ÝRL—7­‹ë²t™®ÕAº|mºú¹ VîíFn¥î}o³ë™Îi£º&ÒÁN¯ROáòì“¥¯t­.ѳ³éZ£™A?ÿê€CéFüÒéÑõ•îLøk”®“`®zq¥ëöô‚Jw&ïi¥“×ݤÁ†[­N,ß»8¦Ý˜®Ep7DGµÌk1]M…X% ‡é1qúZL7Õ; n!ÑZdg·ôA2 ¦ ¦ ¦ ¦ ¦ ¦ ¦ë0]/µí!1W…‚é‚éÖb:Ù\-È÷oþüû/°«:áÕLW>×´ük¢Ñ ä–3Ó[HmRÑfÓi9‡ LG³éZ]ùæ<êt­ÑòÍ‘Á‹<­Ž-V§L§G×wº3á¯qºN‚¹êÅ®ÛÓ :Ý™¼§®5.Ž&y|•±{.—!:9]‹`l¢\P5ÓZN×R¹(Ó;}±étõ¨9{ 9ძӵxô³ÿ+YŽ­^ÃéÂéÂéÂéÂéÂé:NׯKr“n<ë²¾átátK9ÖÍ!²£@î:]­S Ë§Ó•Ïṵhu)ç;N7t3ÜW:ÛD°þ ?€°R—…¾Å[•®†#…4z§Õ½o˜JwÀ/]_é΄¿Fé: æªWºnO/¨tgòžVºÖ¸3ð`²Ð«äÞw^·lŽ~“©8q*üaTN(k)]Kuðòä«ng£ÚZéÚQ#xù{Óƒ³é^-:+¥’qlõJJJJJJ×Qº:^–gr¶Áb+¯º¤¡t¡tK)·uKýzoONâS WÒ•Áb®ÜÝÆÆCUèôèúøt&ü5øÔI0W½8>u{zA|:“÷4>M]¥(ß‹O¤¶¡ÌóÍ˸BfÖ_á Õe»ü'™ú¹D lØS¥³Ý¹ié%ǤL~ØUĥܯQiµ­^[*ÓÁÎL­no£ÚšéÚQ«pLY{Õ¥·¨-r}·Á?8AR0]0]0]0]0]0]0Ý1Óµq•F/¼ê’ÓÓ­Ät˜¶„ZE ÷äöªc¹z ‰ö¹æFÒ_ìúU§rëÚty#EÁý½ÿJ‘òËì(i©³'AóÕ¨° Ø8œ¼½Æ ¹/U½]4O…¿4{ æª×Í~O¯š§òžÍWã†Hã«”ú½ ™¶2¢í‹f‹ )¥üÉÕÜ—bºWzÀúw¦×ä_ë××Q£¦Á\ÃouÉcºW‹ÌÆÉ!˜.˜.˜.˜.˜.˜.˜îéêx©`Šƒ­Ë¾ÕQ ¦ 
¦[Šé`KˆX›å.Ó•:pa»šé:íÿõí#Áï¼fÞRÎû{H”"Nl©¿ªx«3cyÔé ^`²3nðk]’·)=átÓéÑõîLøkœ®“`®zq§ëöô‚Nw&ïi§k—æIyx•Ù›»}¡Óáfu"9^íKTÁl2Žªlk9]Kå’ ‡púZ{H´£ÆDJDÃÞÁ÷UfowºÖ"Ëß:L±Õk8]8]8]8]8]8]ÏéÚx©\÷Ë«„ñÖk8ÝjN§r[…gàt¥Þ^,½Ìé4‹+YöÁ TêÈåF§3ÞÚŠ/ºÿà†å.=_þ2ý›ûZ‡ln!1NßVÎ ¦;ð—N®ÏtgÂ_ÃtsÕ‹3]·§dº3yO3ÝÔUjgÿÔkß6ÚŒá`:Ý\ÔÕ¦ÓM¥÷™~h¦›ëî­×W²ò@PnÇ>ILLLLLLL×aº6®2ÖÕi†ã*¡z0]0ÝRL‡›HúöήW“¹ã_刋…H¡q½¹\H(Z-Š”›$ZíÞDŠVsf90#f`Å·ÝÍa:œ§ícºûÁ᩽Y`jÚÿ®§ýR?—ËR½êu±#<Ó•çF H(¿)žY¯(â„ĉ7,(÷aåÜCý(Q±#‚ë{íÉÓ隦âÑñ9ÝñÇpºŠ‚>ëÁ9]ÕÓrº=zwsº¾QÊνꕌ'³-N×%UAÇât}êŸLü}sº.ïØ*1þtN×§L’s:çtÎéœÓ9§sNçœn›ÓõÌ«LàÇ^ÓÅéh²H”¥ZW¼ØiŒépN—Ÿk¤$”¬Ñ²];óØ+Ø”’np:Î}ØD•R½¯;Ž|Õû6úÄ™_öÚ0ŽÏéöˆ?†ÓUôYÎ骞ÓíÑ»›ÓõŒRèÜ|:Z®ïÞàt}R…Æât]êà¶8]Ÿw䊜®OYdçtÎéœÓ9§sNçœÎ9Ý6§Ëó¥I´Ö¼š{#{>sº±8O‰ ¦*§Ëv ëË^ât¼äó•kõ(»Dã™Ç^ó‚|Ò@‰é2§“)ÎH36ΤHYCÓÓ /Nåt]â’çÓµLÅ£ãsº=âát}Öƒsºª§ät{ôîæt=£”åñò\N7•”=“ý£½ië¶×>õ†é¶8]ŸwVi§sºe)¿íÕ9s:çtÎéœÓ9§«qºy¾dÄ ÚžWI<ŸÎ9ÝXœN&B@ÌõÛ^‹G;ü¶×ò\ )A#!u±:³¦üf”uJàSÓé`¢lã —Ð !Õ‰ülÇO늟ŠéJ£Gm\Æ1‹KÑÓµøKÅ£ãcº=âÁt}Öƒcºª§Ät{ôîÆt]£”E97›.Á$ȧ^û¤Z Óõ¨‡@7–M×çÕ‚Ó1]i1¿R[YÓ9¦sLç˜Î1c:ÇtÛ˜®k^5óË^Ó…étRŒyýTÇtÙMèðêtù¹œ×uŠ!5:RJt&¦-7Z¤Ë˜.å., ¥¾Ó]ì0J¼*¦ëG¨à˜®Å_*Óí ¦«(è³ÓU== ¦Û£w7¦ë¥èäË^pJ›—HôIU Óu©g·…麼³Þ<Óõ)uLç˜Î1c:ÇtŽéÓmcºžy•’c:ÇtCaº4%L&AR½8]š³éøpLWiÿß~Ù¾Æ ÅÙtÌF /s:Ë}8¢Q¤ú&üløºœnn4EŽmqÂ蜮`*ŸÓí §«(賜ÓU== §Û£w7§ë¥âÉÅéb±0mpº>©lcqº>õOS¿ßœ®¼µrÒ•6½£€Wät}ÊÔ/{uNçœÎ9s:çtÎé*œ®Ì—¥º~ ϘW£ø©Wçtcq:›S޵ëétÅŽíøË^­¤óQPÃÐè@ÙœÈé˜'&cà‹˜ŽBéÂB¬Z };ºtQáy˜nn4Í3*4Å¥üÔk‹¿Ô<:<¦Û%þLWSÐg=6¦«{zï]¯8ÝÒbÀz­ëŸ”±c:ÇtŽéÓ9¦sLç˜nÓÍó¥ä™©½š³°¾ÝÝ1cºßÓå3/DËάI Ó-v*Gß!1?7¢jé(Ûžz×kÔÉ0‹¹|ûAéÃdêuô;´«^"Qµ¼@–„Úg«ó2Îé6LÅ£ãsº=âát}Öƒsºª§ät{ôîæt}£T:—Óiɶ0]‡Òl0¦›UaŒV¿A}±»´ùõ»Æt]ÞA¾"¦›[dÎ_”=CÙåpÂ1c:ÇtŽéÓ9¦sL÷~¾$ÁƦël—ÿ¶c:ÇtCa:œ&ˆ«˜n¶ƒÕiɃ0]y®%PâúÂt¶SÖ1QIÙCݸý/Kˆ†@ ­%5¯¡_Óõˆ䘮Å_*Óí ¦«(è³ÓU== ¦Û£w7¦ë¥ÏÅt :á§ë“ǺëµO=¹-N×çõ˜³9]Ÿ²è—H8§sNçœÎ9s:çtN×5¯FHÎéœÓ Æé”B^ð)IƒÓ)®. 
=ŒÓ)%Eh¥ÓÍv(væ©×8b(—ïz%Ê]˜U'Rf»@pUL77š¥H[­ªd:¦Ûà/Žéöˆ?ÓUôYŽéªžÓíÑ»ÓõRIOÅt¤XŠ‘n`º.©ƺD¢Sýj÷í&0]—w$\1®Oˆc:ÇtŽéÓ9¦sLç˜nÓ•ùRÊåèøŒyÕÀÓ9¦ ÓѤDb1rõ‰ÅŽíðtºòÜ$šß¨ØA8·8]°¼¦ä-Lg¥ k”–R3äÄ£nêáÖ·ï\5pëQ†^®È7ÜëÁó+ªž0¿bÞÝù]£T<÷”„)2mäWôIű.ÿëS¯qý®1]wò'wEL×§ÌÓ9¦sLç˜Î1c:ÇtL×5¯JTÇtŽéÃtËuè´é²:Ó©& ’ê·RÏv9ôÁ1ÚdÀ6ªIéÂóJ³ÞÕ‹]´ÕI£k`ºYE‰Ëÿf;D/*Þä/Žéöˆ?ÓUôYŽéªžÓíÑ»ÓuR„'c:À’“´éú¤^8Hô›bº¢*¯I“ð3Ôó­UïóÎ bžŽéú”©9¦sLç˜Î1c:ÇtŽé¶1]×¼š08¦sL7¦“Ió@lŒ±^U¼ØAÒÃ1]~®Ä)¿q£iž+â©Ç Òl+›.NÄ‚¨…ªÒbGº½¸Gâÿ¾û÷×ï^ýx÷î«ûùŠÏ?}¤9÷Â_¼y(áwþÃe¹ÿòî?ǧsõÁ¿–Ÿýƒ#|ð—ŠÝ#þ[QÐg=8Š­zz@»Gïn[·@¬!´G)£s+R‰æUÅ Û'U+ߣ>OŽ7–19¿uþD!ò3¼#z=»(KŒöe‰Å:Šuë(ÖQ¬£XG±Û(vž/-¨Š´çUõÂñŽbC±qRÁ@´ž1™íȪÏA(¶´ùý¹Q`Ö©g¢X˜4;Ý6 Ç뼄cëú„«b:-™ÛA8Rh‹6Çt-þRñèø˜nøc0]EAŸõà˜®êé1ݽ»1]×(ÉNÅtQp²”60]ŸTì`s—z}:Wý¾1]ŸwøŠ˜®O™DÇtŽéÓ9¦sLç˜Î1Ý6¦›çU$n¥ØÌv°¾ÑÙ1cº0N "ZãNÁžÂ§‚t~Ž|ÿæïßå™z~-vêEWgbâ"°3±6ìØôèÿ&öëÅÅÄ6ôYÏÄ*ž’‰ýz½0±ŽQJƒœ{Š¡rŠ8K°ˆÊAZ«¨d¬«SÄ«‰¦H.™Ñ}üOóÊ-/ž¾}9G'ó›_ä£w?¾¹ÿìÿ¾ÿ›ÿwÕ÷Yôwù“ø0/Þæ0óà ƒÜ?Ì ·9FýìÃÏÞæ‡å!8[~õðæî‡‡OÐÜëÇ¿~÷æ»ûò²ùíã[¼îþs^䆿Éf+Ó/î_½Î}©¬¾ý>72}xÙI"bÃÙ.˜lùÞçÃ?ò¦Ç¨{YÚl´!Æb³õR8åW| Ö?߬ü)ÿùëoþüúûw÷Eýmé{Ÿß—ÅäêÏÞ~úéçù,¨áísÿÊê3xrŠàË¥õ»g=Ž­_Ü?¾òËüÏïþqÿmM¾}ý~š)ö%Ñ«÷ýt9n±KÈ Ÿ~¦¯¼û~kÊ •Ç֗埾ZØàýÛww½œÕß}7 Ÿæÿ›~îÁy‰ú(øã/ç¦ÿ©ï‡ÞrÝ5„¿|lûQúfQ‹yÈKí¯Ô”Ÿ.xõ°عâ½ûBw™7ªè¤ Eó‚«»çÅ®œò9z÷¼<7¿)j£`×lGL§–×)–ÔWÛvÕó¥Ú`yϳªñ基­ž£ÞÖ†ÊüÖR¢ØöŽ`¸Þ†JiÑ ”ÄÏP&ž÷ì*¾¡â*¾¡â*¾¡RÙPÉóeÌ#Cj®æbê*¾¡2Ö†Jþ0 S4nTŠí€ù„È­ì90hu c£tbä–ch! 
l¾ÊŸ½ùñòÅP‡oÀAßfÅ´þ×»rĕí?­L'@„ñVÆf_WÃY úÐ0áÕýO¿^®ŠÅðló™ùºÃk~8V›/ìãîôÆóß7BûœC#4ø {©ÁËò;à–¿åô$¸þ½Ÿ¡´êjËj³‚XZ)o&öðÿ„’{8¤¯æàÍmf”™¿.óYiïÝþx^M 󯯟óétÓ–à÷\sL¯0/®Fð«âËjHc”!ŸF†&#˜÷ö§æñæafÆS,1+ÐÓ?ÆËZÈGJ»owL²ßp7~ñF–ÿõû™&©„>þËÛ¯gÿºžL 3Ï^‚{ KBóÏïvék/íRÛ|f‡ë ¸à0=ní3«5Ì?7~ñ‹×?™ÿý²1{¯&ï‹ÑíhZl¢UÌw79hËÕbšì‹îlz™EWž\þÝ,‚NëÂåO—³|Q^ÏWö¿ƒfî¾ÙÅ×´ìþOÀ–׫¢ø8ž¿®ÍòÛ'˜Ëj³Åüó*7«îUË/°?ÍTZL'£Éjz{GÔ*õvÄâs‚Á€z[ÀþâúñÆ]Eè]vS€1‡5Ø÷â ˆÍX×ë‰ÙTºÍF[³öÜNAãÜ+ÿüÿ©ɦ Užÿe§¬Ä`g6ôhgäá 7øNñX{lÞ±5Éë("À&“óŒªóŒë;›lUÙÓß°§ÂÜOâð$Œ>éÛŽÃ}U}ß!°TšRòСÿíÍ1ÞÝ®êç|i¦jYmå½ËÖÕ~LfýÀìÌ,.0Óg¹±jfÂÂô {ÿý $%¯ÝǶ[é ·ÖCÙ¢ˆ“sslá&5åaìEª‰dîóGÞ€nÞ1JRlèà¡QÊ}ì_WfKÂmˆbÅç>>3’Rí;ùƒvˆ¡XQ7 Cº y÷ ÀƒK²¥o¤(‘[ŠR ¤¸ô¢–’¹ ’5?ÈiIÜæx9J:„'°Tò]°Ýr}8$/,Ë}DƉ`,¸Ï¨A;Št•ЀÀ!°5îžx¢íkÏæÝ*ÃÈ»e¨”bJ{·w¡`*p賸yW$N`Bð„&ê´–l-Ëmk%S.¹§Ö𮢲:´êþ¶•0<¡·8¨ÒW˜8§-‰[‚š™”]ዃàc3’Z/4goónP”ÀD„àÁJZ‹÷ùzººØT¾ðJÚÈôÙ»–å8rû+7fu7’I€$€þŠ»›µ,W·5¶,‡1s?l~`¾lÈ,=RVAd2éŠèê•ÃfW‡$âõc—Ðû Þw]JnY ig6  *µà±^ œm¯“ëûD%  ‚ÆÝ<ƒïplBÞöI„ÙU­’B™ïÈõÊxÉyÉcÖ¼®SÛš°ÝбéÛ€z@Srk£¢ÃUýšvžz8{¶%@R£òó8ªôß+ÿºß½„?Ýïîžî³Ò?T“{fÃÅõUG±8N,f Š[¶ã\ÙêžIâäI4‹?#g½LmFQ ê4À¯bÂ#+ËCÎd6‰**¢*­FyZÉYvŽÀ6ÂbÂcTã³d¾î®¾?~½þº»þÖ²&ޤ›²¨I8xÍÍ—ÇR³§û®Ô>Žvyá|È©O2¶íøqu»{øyõÞ:&¶š»9]:tù( PÏÆU:[Ñû8Û&2ë P3yýÌäu‡?JYI¸ŸÆUªÔ‘^O-ç³&äÊé8l¾8ößaÑòa÷㬋ã:ï5w·/¶c¾]Ýü¸Ù¿Ú>ËîúêáòEv—¯’½¼¹›ÉUAzÄÑaP€çq!ÂzCwŠ¡»¸ ÛùÇ?±Ú½oúpòe‹ÖÆíKŠíñä‹SdOÅónÕôþÆüp9‰ïEzG•ޤ(=/ðè>vÉ\Ž`[º¶#'Ø`Àë=Ä6ËO·»Çû›ë‡Ixõ¥XL6& l©›Úûr7ƒC-‹m?Žìù;!ŸÀ :žJZ³Á,ú¸ƒ’¢v!f¯ùþ&@ëÝX+Ij@Ki€~ x̵IE[ß¿{v¢Nî"W•(b>‹PRTf€¥ôPêaÓ™)j€a€9gÁ‡×”/Þ‹«ï7Ÿ¯>Ç!r7&Tc {—ñn˜OkaÆÕófZôbúÝô*y¶“2|7r ÿ6r}œÍ(j¡;­ËþRÝ[MA¶6{ Áÿ.b}˜Ë0Zm,=}'8¡<~3­n>ß^ßíµ³šgU«¦´-³Mg-¹š§ýi‘+þîãðµÓ¤Ð_é·‰Gf4ŒbÇ-ÍßC1v'±½´™´»ñLÜïÝÇOkÙŽ[ž}ÉN€u<áwÛ`“’Ê.tºÛpš6þ&v½ŸÇj:…¼7ÍWÆÐ©9Qí¨×vïÏû§»ûf×-×íÊKSôâ´·¦<.Ì^4G +E€’r¢‚CŸÎÈÔÊR‰ž~²5àû « °>ñdUIŸ`²5xW +S õ,ˆ‘"o[€ì²Ô?–È>A$Ð¡ŠƒÓª@VP%_^“Š>9Н dÓ¬J@˜.ð<®ÙôÅl¬zŠ:²l/+++++++¯@6—\²#ì¡’¾t®@v®@vJÈ21Ùªó)ìQ6§ »ä|Ãå]ä±ìȉÓo7ó¢à×ë«üÇ[©!ŠJŠ`L”ªUæÊ8˜§¨tª2W~7DÕ‡Éwø¸a•9äËà%ìr‚OkHó¸èC·Pà œÊ–™¤ñ¢<´¦¨Ø‹ö?¾æ¾$Ñ ×C©J¨Ý•{¡…!l ¸Q9³(Ó€—5|®’°àà˜äYÊ»Qéz¨>Ýåq(qM…°õl4` aÀ+®5ø×†­ï¥ö©E˜õ€êHòç´,‰<H–-ím™Û>Ķ޵ñáYÖÃÜR^Ý)²hØSž 9zs*[Ðó€Þ‚GbÿZP •ú="±‡Œ5hwù<Ιù°9™ à‡ÐÁ€GÂʲ†21Ó\Ô‰šg imÕ‹­ 
¡÷M¹í ”sÚÿFï›>*C˜2'ÔœñIXÆäêçMQH†1³¾¡¾²¨äÓ'P3­ó8{žØ˜Í0À*¶à±&‹¿åšóWêw2*u˧Íͦä"ž¶ 8à’hÁc5 ¯ž¿–÷ˆëç:ý3!ÕwXÎÌ‹>±vs-N}v l¾ÔËÐRÌ&iÐ'‘·×4O=`½O xXz¹ˆµ~·bb—o㬄LÑêXÎØvTèÜø,xÀÞâaî6)Ò©‡øK)?[¬ÍT] º]è»sÏ0‘ŠkÁ#=ºÀÿ"ÑÇû"Ñ/{³¥~òI)O—Íœvôð¶rÒgˆ. x¬ºüòãádj)té0bÞ”"zÓ¸|N÷xl\N+ZÜÞ ²á }=¯ûÞ&{¡…ºÐØAyQNHOVÙÖ ПÕsþçnz…ý«ÄŒÎ%DU •ŠyÉ'§ ò@ÎÜ)¥¥,(Ó=ðøè¶¸½Îeë²c_Ú!kX9Cµú4’Îiû½F<ÆÛˤ·w²áªl PË'¥Îd—­YØäzÚÎ-Xr¸½"-x¼ßè:_ýx‚|P94¸ì<[7_3ñÚÁò]ðc탛Þ_ee=M!×ï„$U!!Hr¬ ¡VÍtíe³a¼vZ^]ò¹Õ.çõ cļù{ÍpÃÊbd‚® \šedtd^V$zú ºkÀ÷IЭ °>ñݪ¤O0Aw ÞÕ º¦]ŠfÝw6IÐEÊ;ÖÿýïÃêÍžO+?ׄž=þ½òsmÒ9ž)Ð??׆LÂ9?÷œŸ{ÎÏ=ççžósÏù¹çüÜãù¹4Õ³`¥ËtþÂÜåpÎÏ=ççž@~n&&±“äHõ™QÞ­kË=¡í¨`ûòÞ6<>YÝÅÓî8—Nµ+oþJéèêù¶e\çzçÛV¾ÿ«4˜qæ<ß ß/“ÏÃÊ÷pàˆêSj(Éz°ts €èÄnÃc}û:œº² ®þUJ1xÐŽê<<™w:ë"¶Àa acC¥ÝãäȘKÇW¥I|òÑkïsyœ=Wj[82`9ðxg ¹+¡ÄïdUÙ¤)d4ÿ–‚%‹±Oçî{ƒaÉÉöúµàñ¼2»ÍØrúªÄŒÐëèØY0žŒˆ5(ßIÁû<Öx®ŸÉîñëîéáþé»jŠä?í®n.ÊØR >¸ú•ØS*¥O5äì*¡TGmÞgÚÁ$? \Ö‚ЬÖÿÚ]?¾“N=˜˜@1±v"äqh-Ë Ðƒ` YðÀ¦ëõí,{“eÝP¡©bm¯Éã½VVàð€{¼Ø×ìÿüûlêaÇùâFY´ý£¤²z·éеóÌ~DÌ´Å1köµÒÜÕ÷Ýýã^ªõ#ާ¢ F%æÍ©2‹ã”°ñ³ ÅÛ²O²…ºþºû’•û.ëáÉIb>ÛPÁ“ÇE”Aë×Î9Ë4f9Sµš8« áÖ~4¹‘£&<Ô!ñvÿöõ\„ò­8ËõÝýîîá¢Iím©¬ÁÄu_¡dˆÉ!¨àÙC4ç¨.Yí€pÖð¼F© ”±}´’½ÆónI?ïwS€dqHç ¼š¾¼f%éMµ^?á˜ïü©`*YðX *î^=fd`­ØBÍ•û®5–ÔæÀc4] <Ø¡¢Ö‹,g‡ÔÓã×I²¢ª;¡ó-@kÜûNh .Ö} U‡^™‚ŒÑ}+žiEŽá·§Ï»‹‡gQß~:.Þä4Ä|%C×€6´6ô\s9 ˜‰ZæA4„Éq€<WÍý ÒIš^Sz38ûªWQ-Öq+ìäÂç3}ß'›øöîÇM–fþÛCk©\LnŸ£¹Š¼Aeˆî±™Æù%K5îåNcXÀœm«<äVV{{ÌɪÞ?Ê7“ ®ÿV„Ëuß:£{AÂKŸ .`ÒüàQòºPñ‚Õ{@+Ð¥L/¥ö>?æØŽ‡`«]àâîóŸOûÔÏ‹‡òæö8 Û«¬hÅÎÒ}cPA/fxï#7ÌJÒ†´âïÇd®ù!ÏÁqËù {ÐóŒ¡ÇMeͳ§}Ž<†=9¶àñÖ6PÇû©>‹èEÚGÛ¨–0M3¥ ~ ~hdé |9SBˆ 3Ã0†)É5áé×DäÅxw‰H*'"¶ ­Ož‹!.×~óÆX%HS ž”ºkÿíIHÕ=KHØ€“¤Ÿî\®yáH-37Dó![Ç)¶à1Wú}¿ÎcdËš¢éP2ˆp…Øjd6Z¬ÙfÄiÌÍ4䓼B°ïÍô->ÿâOÑ¡$ªº…¼okrV,E¹˜Ñ¥Ô41/š§ø&Ot¡G¡Ù¾µz¹øüôãË÷Ý®ú1#”À´>¶Z{k`.gAë<ÀaAJÐð¶æhÖºxgÍtgâUý–ÍhƒïÂær4ÏCN‹1vòeÜbYuFF’Ð⯋äÖz­kð–k™›¤LƒN€É7Ô€ÇÃ|¤úHõ…ˆUßbr”B \i}ÊZ‡s1’Ïæ`ÃD’c &H)5ø“ÇõÍ^„›—×ýÍõ~i©þÁ„Yf-!ØîvíЖk»;Žñ§è7ᑵr¦ùÅ• ×àÏKÍ·ºó!‘Ђ8 Ò,;ŠMx–ùéžãü {¥ê°KMNèÄh[Ø+°.åº÷Õ0™y'…ZònT’w¹´”ÌŸ„PwMã|ô=wöKÀ.¨ò¥X)>1ó³Í±&²¤¤Ðçc!QÞͦq;RJG†=§9g¢¼<|Üsž‹æeqP5eÇ„,Áæ– öÈd~Õ|YRŒU™yL™DA‰eÙ;^ï³]›o`¯~Þ”­frF‡ÊÀ“ÓQrÛk6'•Ö¡AÇ“¬ejöUXsÝQtKÅí ¤ê–Øûd­eÙ‹€I¨Ö€‡¬9^«Ù?”¦)»‡Ç÷å†ïž¾\¼y;>x’kê&WiN`ØfG 
†_z¬(l_±Øˆ'õŠ ØMe€ïïžJO¶Ëg)}d§X7,]@ ¨íoyœ]vŽEœ&ãöª·à±†½ŠêURó7¶ õÝ!ø˜¯ C@ f;w+Û'Xé<z­ôéýüvÓ"R®‹KuuˆšaU*t›Ë]¯¤,–âøN¼m@ç^žÐ£T×Ç-ñSíŸo”“d•uå¿V·©Dåauçf1Ó綯GúyV$zúm_×€ïÓöµ‚À6úÄÛ¾V%}‚m_×à]ÝöÕ´Ky‡›¶}M‚—Ž´}Ý#±éìñç$Ú¾N¨ Táô”þ^m_ˬÙâºt„Ò¸¶¯²¼>$ê&»yrйíë¹íë¹íë¹íë¹íë¹íë¹íëÁsµœI‰ôs5D9·}=·}=©¶¯™˜%&¢F`ö®›#º¯ÓÔ0‡í3™mx¬ÏÇS¸~}ë}í†wðX‰–Êàëe§qn¶­uj,[~WR¾"õšÉgÉ7h, —Á£Ü¹ÎÈ€J)Eiç{,m憶L…„ëXðÈFþCY3AêÛQ(âÄÚ];öËz o¼þ Sˆ8`Gµà ~}½Êšèª%êù2ïG1y4Öæq`£ a m3 KoVx ^sËw$²êÒ-ãÌ?t¾/úÿÇ«\ÿx‘æjTD}1ÅLÞHÁioºyšßt{³—R€èU¬àiÀ`ÀS©ØdIêù°”¢¯J1Çä¢R*kÌíR‡r×0“èDôXð€ïeZM®)€¶¾ÎòŽUbŒQÛló8}zew!6eS+_nTñçqЦÆNj¸L~eöO{à{þ\(!F b Þ¯í‹mÃ%è¹A\aVö½–Ÿ@jJG*6ëQYeHŸ~l¿F¾–›œ§.á­¦€W3TÊ8â!K ¡dƒSÇ“üšæ Θ£áż ¯%ÓQÉ'(àXòe¾a„Kº;tB¿–:Tž>HŸ%yBƆ,Ž‚Çš'Üõ7 ~¸~Ôí—E¥‘Fœó±acaâÕ%Ó–à^KaL©aeK²ÓˆËKÁë‹4«…{\¬ЋٰIÞQá‰8ÎÀ©÷’Ö1k¯$HÞ÷Qœnfˆw0„ >ä4àYÑ9Jô$á¤QÂç­O\ÒæÚ‹!®%äÿHß³Õâ‡bÒ þíÇY]um5PŠ~’:iÄqä°=q×Ò­°×’½`j˜Þ€ª­ûïÄÌÞЀ¹kè†ÊUS‚†ã°ÒÙVÚz-QBi ›ôÉ7ä¢/ÙnFn8Ã5uûÃî]s'òÕ–cà  çH žµê¾<›é¸ã Ë!†Ør‘ˆ£É±ÕØk~=)nüg‡ÄD+ϵ3XK¡ä ‹f*cŽ’@¨áhK°ö(±Kþuƒöš‡UQhpçHéJ¹ðTé‚-}È%Š §g’4„>¤† ®®VÑÀkæó©ñüÿOÒ×¼"ŒÐbßQ‚å•ÅÖÃ_Gv.ß0Ôp€2ÎØ{òwÈAˆ x"nIžë»‹·boè¯+;ïuÎgØ6aË1¼kéá1îðÉ㎯å®ôð“¥Ø€'ö¨M\ø[pÈ$ó qÊ1/ Ø%­©Q» ôZ¢@Œ¤›yÈ¢ïƒîÑ.Íκ5'XjGF8ù òj.\çauƒÕ“XK$Œ16L6„!DBfô $Z+âŽ.X*ÿ¤‘(”®f kp²ªHîê ¬%P ÖóHʸà†(zr-GV´6û3¾ÖÅ1qñƒ[Z^uË0‚ ¬E4=‘`ˆÂ“OZ½Úý8çÇ”Y;~ý«dÐÜMÿ´7YcKŠù\m8‹ÒƵזÌj-ÕŠõ ‡SJcnKäÅņɬÅû+åæê6+eR‡h$£”€†³ªÒM}s’šÏZzqŸŽ ØTÆzP;qÎ…†ýŸy §…MX*l£O¼ KUÒ'Ø„e ÞÕMXòÇ“ IIÆ1»M›°ÀeÊF^ˆ‡Ë¶î¡–LÖ¡ ŸX–‚Êaôº =þ{ua™f1ÆÝzD×…eúbÑ7 ›ª>wa9wa9wa9wa9wa9wa9wa9t®B^j¬ºQó¸0>wa9wa9.,™˜ ”^9dË¿ó3êg¨.xF³àßýõôÓáÇÈÒ¸½*E‰¥0åÇĉÿüõTq¹w“•ò»äc§JMòµeÓ&+ÿÏÞ•%Gr#Ù«Èæo>ÈÂâðEטﱱ,’Rqª¸4it°¹Àœl‘\²ÈL8€ÒºSÖ&«fAŒàÏ=û”øÐn¢gÔùh]è¸BM¹æ¼ÓÅgÊŠGÔ9`´q¨à™Ÿ²Þ“‰'üÔ´ËÑüÅëKåiÞ?<ž¿üì³AËSr5‚„`|ÀT¢ÿEôÌ™YNh^óçq#Òô9à!øPGb?=Ê—‹oWßÏ®n/ïï4ÎÚ.5¡hº”SO…Pu\ Ø{ê÷à¬"ƒÀVÕçí8ŸÖÿøù9¨lä <ÐÛ è¾óêáVw¥w¿ÿ¸¾ý^0ªA R¿ª®ÀšQiÔt”¢Ì¡0é›]Š6Vˆ2€ÄÉÆTGÂhi§[ÙóÆ¢y1æàVñ¥i\aQ›ë!zºå-8®O’<­K„iTÃ’Ø“#o]:. ‰ñ»þ}p§¶L)©Ér¥¶‡â€d¡&<Õ_¿Í·ç¯»µVCyY§À¼nL D:.EîívÍèëwñiÃþ³¯¹½{ºþíz{à]°hY4ÛEïÐ ¨‰D¨• 3ùY )×$¬sÁt-¬<ÜêL×]OW››uàËåÃuvøOww?¾_?åü½T–"0ä³Xô–kãȺÇ/]èßð œ/-x:;”<__|̧â{–S:YòÝŠE[ÎZah¤D7ÞÖƒ¤•aÉÁDÓÁ4<”Àþ<­ué¿æ[Â]eqƒ! 
ÚÅb½Žó®»#é@û–ÀÇd-xÈõÎdÞ^—¶eY !$.E+¦’Üdªõ¤´™™$^¢Á£†¬‹EÀtúËBLöÉaÄ"ù9ìc—<®µÛÚíææêñ~³¯ìòk—Ï—N"ïu4r÷WÅUuíx ·.â¯ÍÍÓžT¶'¡€”úݼSš7’a-×cæàp OTú°ÚÞ¹>ådón<áÕë .x¹™EÛ°ný±vm.rg÷Ü=Î" {5oJáhxà“°Ú<†„õz\îà^žlójì=%3|‘D~ÚÆ…‚<é}œ§ÞÛ½[–Rt.Ú/Ö­Ýôþ–(P‡ºC»úqsñmóð”1?æ{j;çù§ŸíYލ<;H"hÚ3特åY6K¸›B€OhCMë&Ù¼?GtÞSZØŽð“Ù>%t…¢õB!LæÂ­ãÈ÷K°íÇÔ÷Á±õùC1]¥ßçÏx0²ñøØ-ÉN¿û_]<µ˜µ×fbm©Bn\µ´¶M×Ã%Á‚j<!t)qScÄX4bŒàP—QËsÅœlßï©;wÞ#º›‘<¾±¹Ûg«ýµKŠü×gÓßO•Ó¥¼—ˆ¨Û€À`…1Énoµ+ἂóY #–£Ï¾v¦Ô©Ξ'†E»ÎRðË+áÚFq%ÓÒGV g)ÞE•pš½Ônš5*á§sqðÿûøï‹¡î¤Èýí•pšÑÃNßúJ8íÖ9œþØ·N;²]Åî©ΩΩΩΩΩΩ΢uõ§|äS%œS%œ¿¹Î 1•¦€­|‘˜f`ŸJ8ÎL«Q‹ž[ðÄnïOvüøƒÉ’P´$8u`J§x°(Îë8ÇØµ(ÎëïMžå@ÁÅ}¥UûÅpκårs`ãAP¼/´@~ µ2F×å¤2 #£Oáfà€˜xóüôíBqä^‚ïEüusX6{ôâÁº£C}£h½ËŒ­‡N;I%qG*ˆ;Ú:"Ó¬Ok—®Bnüžì«ÜµÛc‰•¹b @.Êöjx‰pTDŽ*O™zç“ñßÿâS—ÜÔ‹$‚µ$Y醈êz¡é¾èö ÅL Ðqíù63— }–`ð^lLfRe3¹ <ÜšB³µÊ®„1I9·$‰.„â‚u+®ãS—@9YvÇcZ§ÜzKwkH6À!ž0IÒ­uŒ\‡»%?¾•Ü™’HÕ‚ÝF,[Q7MèI‚c ¡9®v-xh€Š±tQ±î8°ò&9çö‘·—ǹÔ-qi1ãrMKôfÖµŽó|aÅ$ØxB nóV׬¼JïT"ÙgÊrþ7å„;l¹@WèÒÜ$Q­fg ¶”ÖÿÌ-xZk¤”LõS?G)ïTˆÑ%Ñ:ŒÉ ±_­>\¬Çp€ô¸…Žòs5Ýá ʱœÖÍY7,ÐÉwœÐ-,m¹r/év<­YÇE›=?æ.ëWOù¾h{.XÞc2³’ÍÚqè8ê[Ö{'ëQóGÞ‚§{±«ƒÎ±œ›-ÑiA¦&NG{ÎëV޶ Å7;-x¨µVâvW²Õe¼,¸½2pÏtÏzù²i)‡´2-6¥>Ìoã¹½þyœ¬=¤^L À^óúòöñ ¡tÍãÏ]t,.AyÝÑq¨½sn6¡Ô…tíÏ܆'r‡ ýSdCe[åöôൎ#Á„ÝæñÖ#¦õ/âÛðÄn‹óoWÓ=÷ï“Òå€}ÑŽ^w‡Ä'·WV6ÞKiÙ®p²Øï#·à\>…'qÙDœµfÞ9 R.ĺÍÜÄ«î£ðm[ðp¯ <%çܬ«Û§û»œ xØž¡hÏIÙ­aøC`I®ÃD® g=&qn€“nÁãÂùûž63¥Õœýq}õçKÐ"e‹©#Ã|cl!$Iد NGÖ¿†.»OLkLëíß¾YõÐaÄ¢]£N[‚äƒñ:®½(cî6@d/ëú<Í5w-ö¢h?û`¹)™îìêòúåVו#ÚˆQaXnÓ8ع¡ÆÎÕͧn@ɇÁŸÔئ̶`ÑãWc/ßG]@Ð6úÈÕØEK¡{ ÞÅjì&/¥žvU5v8çýþ¤î6¨Á—» ý-ù?µ»É:pø,¨¿» ÙnÒêI}RcŸÔØ'5öI}RcŸÔØ‹ÖU á¤Æ>©±J­Ä‡«“yø”›M­sîÜéœTœn¼¼yE§ãè·xXg=Ùxc×óçËëÜþïæ:GWOô¡hPpSdEö4ÎíÜ+uÒiçß .¸Rw¯×qN¬©ÓŽxNÁ“ÄC[zú¡‚uƒ¨ã’,ºësà¬8rw£iÂv„õ§ äÛõ|{aãaðË*i¿Üa~Ù“”€®|q ¬‹50Z®XÃÛÐ×UvšÒ o iÀ‡¯Ç“\_yЂ©hÁ:q\$ëjIÇAH Ëe/àj NÁõ¿tžä`ñϺʛiÇp¹ßzå»ì\YØ;«¸Ê4.Fßw¦Ï$¦BI^ÿ—lÈC¦vƃ>1ÚxR¿òø“Ÿ6¿Û޲œBŠ,Þ#¼é8×\!}¾ææÃºm®n||–´³ñïV™í÷Ï_u?1Ù.Z¶£¨ dcÅÔy®÷aj-|ònȧ¯ÇÓoÞßþö°y|zxžNQš²œtªÓ=ê†ÇY“^ÇdºË&}5mÀîV±Zí»·àI´Ê”ßþøfsÿ.@Wˆ‰an9άb9ܾ©}òw¡k=vòB¼<¡_0¿=[>lÄr¢*‡mê•EÛ¬¸Œ+Íù¼å,o÷vÎr~½ùñx2SØò¸Ð%¼ß&•cdf—3þÄ’jäq®ã~!EPÀìÊ<©[¾üKÙºƒ6,g¨J>"ÌÝ ÌÔ`—i>‡  0 ˜Ö-xv9˜ûY¾ûeïOÏ6—7×·“=ËÁ±°n‹Ø›—:œë6árµ4X×[ðP¿ùžs0•Ž·Ã¹ 
uûiœ*æqœûœÑu¤mÓ `Xý|¾ Oô]šÜÍ3+•ÍJì%—O·^CÇ…~mï–0™8aq@@â˜ÑÆ]· 3SµÃƒôE úè€õÍuœîðú_ƒ»¹a gE“ýˆi}&èsHݧq»½ç^Ù½T5Ö1ûc*.[ŽƒÎE“:é;¶¼^DÚzÈJÛ»OëÎþP)—r6X(L·æ%å'u\^¸â/£cÔBHÚïÛ6á‘>!þ[-ò½ö“²ýt gI1Yx5jm>±ëCFÎY+$d" .á€/¬³×‰7ªGmñHè6{³¡bÑP1‹¿i(a+Ó âô%ÜkLá<ÍõwßÓŽÞBœ}&óå6¢¾nˆlBD”½Ømë.j ãÀGqg¦ž´­D‹‹¿¶u ø>ÚÖ‚¶ÑG®m-Zúµ­Kð.Ö¶N„Hh{)Ø9^¥Ó°xuZ‡: O’ƒTUüqi[·è‰ˆ>í\aÿKh[§·žÖè`[§P)®¿¶uz"AâT±Œãn·›“¶õ¤m=i[OÚÖ“¶õ¤m=i[÷­«ì1J°Oö»utOÚÖ“¶õ´­JLŽÀ̳âÈš³ø—}ŠË²ñdn”XDü€“ÄZ<¢ §æNlj“Ÿ:bAÑ8º×U‚°yfÑÅùrÊ%çš-Gò·àÁŽGÁûê³£/_o%(Eaqçvëï¥0š›¶"Á¯(,ö.ž3øøæÀ f¹UbH±½W]ãĬG }ß-3íEƒ}ðn?m˜bb$ý æ §s›3SSd¦,Ñ2Á"D^ÿƒ·à¡.¼ÎrRÇÙ¥†7¿¿­åËà”õÀl"fêW^c!GëA§@>{žÖRÎ4o×ÄjIÝ„}»z~<ûÎS;¡rî&æ¦"^½ºUÇQ{‹ÊU8ªP’°!³ ˆe2a}”GˆºÍòÉRå°õ›‰FZVäÇ…N²àùL$ò„`›QÇ­ß×jzNðºSé¶Z?^h˜öüãpïÖTN¼$ÝÆDf¢£Ž‹í®±Q™‚÷æ}U礣äçèŽÌ¹ <€Ëçí¾zõ“ÙÊQ,SˆÞþ¢Ä=[-&"±þ8ذxi}޳Ù<dA‘ýÇ¿ôÏ7¿¾Yë×× Õùöó÷FaåTÌÜS§³¼¹ŽKÒc©žIØœ… ý¾|é…ï lÊqì”0¯±£5Ó™HpQ‹«õ8Zý ìü€Yß‚§Uè;åçìžF”³1%(M(tŒ´›yØ‚”|Ä<âzê¹öfÝúr+ìt‚Æ`¹ɪíÖõº•s XFÄÔ-x¨›2¯¼-ÁÒÉyÌ].õy`Ä Ó8lvʽÙ×· têõÅñHW-f1uÞSÙŒ$1%2êlLãBè÷Õ—1µ3Ë€O_g·ÉIÕ§I-™Om÷p}1ÙÍíæc¾LdˆN”ȱ¯ær G[pKXÿ{WãÉ ÃcOç^*Eç¹lÄ,YMÐľùãwei=ÐBXÑñk7ài½Ìz±Û‹r±d¾ÿ?›1ÍOV©Ìi\ØÕ±/`iêiýß‚'ÁšSý½—² Ù9Œè½Yã•Èó&ú*„mÀ-qÀ§¯Å£ã\X~šòé¨âílâ ‘³QcѨ1‹GUÿkçx]/PMàÌ> Xî[ð uô{h(ÇÆYs1—Kq« Cuù¡y»ËæRªàd§bÑI×y@°W°èñë:—€ï£ë, h}äºÎ¢¥P×¹ïb]çöáƒé¥dXU×9œ'çè:Û F9.]ç„*"ª@Ïî_K×Ùdø·¿®szbH|‚—7à“®ó¤ë<é:OºÎ“®ó¤ë<é:ë:§õÙ{w†Ý+Ä“®ó¤ë<]§3_‹[æÍZƙֿEqÑ‚'ÅYP:Óîžu#P8Ï/ŠçDw‘‹ºÂiGè­+Ì¿7äJšd0s°fÃR8×m“?Ô°TDï'¶®› (#\Ïãú9gÍ heÄDiÁÓé²Î¨9ÊWÝÀ!ÄÌ™ ¹ÎïÌ;ù•§vÃЀTœ<ÜxWsùžÎ6f*3E§œ u®¢ãœë4û—¸qpgÛ‚—¤]~þzu¶>öÛ­|×uKˆÑÌÓq…®«¾ôú<ÍJÁÙèÝ€¯N¹‘€¤PGæé›ÞŽ”ÿmµÐ°–úC 6ó0S¨¶ž¸„Ñ„'n'8{Ú<~ÿ¯ß6÷ߦSföï¶Ëz©Û×k"è¡> ™·Ôy\À ŸÓr©º¥¸_bÄÇíñÃî÷Š®ì;Ôm¸àÁü`¨3bYçB'×݀ؠOðkærík,Ê›RÌòAS2£À]J&- h=XÞɘ)ù6}Šþ2IÌN*Ê4€n xšW¥ƒjÖ‹í±ëýÍíÕÍöLtêaüòçÚV,g^Sîg§~ÑZéuœgX5}lƜɉ­ú.Øè£æçÃ+ð¤ÎÑéd®ò^Nƒ(J„æ1‰Ž‹ÒOÝ™±B!™2œ<êÂ1]Nt ̃û<îp_ áðŸ¿üÇ÷ë-Ñ„ªl#}áOÞûmEöÇo5¿þryý8ÝEür±{•óËÛøƒo ŽLÇÑ€œK}Ž@®—ó|³Ñ·»ÇÌÀüǸ¥] 6x—¼†1ÉU.52u»XŠt>òA¨óö»]É0‘íÇu\êG†›‹«?týøüõ-ÅâDtž)Vàf^ΉÀó©£Çd»×ƒBˆÀ.UàéGËÍÓfJù`^v ¯íö› -÷6Êù$H¼T8g¡ÔÈx"!TøÞÔ¯Áýo×7ªrý˜›ÍG{‹ ‰R„ ›JZëJ&TBO]øbªpÏI`0r€ ·‹­¥0ÛøÛæá2Û8Ϻ(FñÁ#vàB ÎùD Ì­ö›P¢!D`Îíl<…ÊLÍDЈìuÂå|Ý Ý~ 
Ñ8Z”]ÓBƒõw,§DâÙä@—ë¢Ù‘ºõòsRÅ´Sçkãµ×ðC[›tô>B  Ø"]©aáO ú/;*BŸ†,ô‹WD7è¹cñâ«û‡»?®s¦ˆþ'{ã¶d1#€ÚŽ)Pww‹™Ñx>5r€ªWÂ1ÔˆN<‡ <sªñßlîºßžm¼žÇ¿™ì½²ìÖø“ÉÑâˆnƒ›ý1ÆZŽtF>Ÿ,QHªx7Â!dL 4ïJL“½;{zÈf¿<»ØLöµÎM„|ânÌ(ÀœOƒsûERgäè2º <ä:ÖÂsͽ²u‰º>¨il¼Åê}ÕËHÐùTP.¸w­SpP÷#SD ‹ëDUºà—ÿr²¸už‰¹@oMO¾6Âè |>Uto±bñ ATa]ÎjÜ1ƒ[™*^}ývw÷}×àbt¢@t7ÈBk1¥ˆ{>Q„Õ;Vø¡!Ë 9/V«ùí8—z‰êfèdpë ”\JÀPBQ:ážMò)—´ßÌÇ!—häu®®ÀC©s8únã)5ááëæâLWþÿùk2µu>J!ä{¸NË^‘i%âùäùާæ`È Šƒº <Ô§¢á®±.zfÌLëì”bÖØKihXI—õÞa>4FwËErK‘!Ô,C‘ܲN@ó­n­xŽP±K Ìé ~>e@·î!.Ar;C .V$Mô¤Œafë5·H®XESà©E;ŸICÅB”Æšƺ…=.h^bØÕ:F%$¬ð¾ˆÕ!Éxó?;9ÒlŠȘÏN¹ðAÅòA»Ç†µ­“Sâ\q‘HTM†Ž çS„‘±â2Ÿ8¡ˆäÀ®ÂSqóµÜžÒ³Œn­RsEh*ín£öÙ„aQwæÛýT6oE°.l±"û’ÆRàc[­ì5Ä v¨ÆÞ×Fý0Ï'HÐðºbÓ›Y !Hv(ÎŽ58º¥²Ñéú¥Üµ¤4ú‰õ ^ƒŠŠÙ¶+)è.rSwÁ€1’½ccÝú aTòºSà <â÷Ãý¹±Æ§.*œÊ{ŽÞ%[¬ù¨ìÍq;è‘Zðဥõ¸Ð7{§¼ˆâ&[ßš(&h1SÇyîÐ w)5ëáp¼Õ€'¶ö¨mœ³Çˆå]ª„ÜàM­†Žcho®MØÜ))[&¹Ç΀듌'Ÿ,{OóõÉþî9?[r§}N0>{γï,®ê8è¨Ù]JÖ¼5böl£fàìuSà’Äh~oCÇ®?f·,NÿÏÞµ4·‘#é¿¢˜‹g#–l ñJ8bN½—>íltìe#ö@K´Åµ\Rr÷×o¢Š”Ê ‰¬BÕpÆœ9t· ©¾|ÈLä#ˆÑBÓŒCMìtg>“žàåè­ ñgë .àÎÌ=Á%ȼ»ô¿ô¿ô¿ô¿ô¿ô¿ôÏõ§ûÒ{@6û)­ëö„¼ô¿ô?ƒžàvI4:oµ ÙæÏÍ:èÄh+5N¿•…ÀÄdÛu*V™{;Ìso h‹ŠÙëí:˜¼a^ûDËøÀíº ëÇ<ùÛ çBžsFiãrêûê§ìðí—Z§Üéß„1øTÏ!EãjNgXAs¨g9Ó¿³{Ô³5‚½ß|i=ɦ[Y;ÍôîùËæá¥9âaáâeeÃOÌòSC@£ç¶;­ ÊOp2•ï/TÔ3ˆ^‚Ǹ)žbžèzM'û÷nH›‘wJLAϧ:Ñ&–÷ [N@7Ó)÷Æ«™7^ÑGÑâ ª'ÀãB­zÎõÓõÍ~yÔ¾·=ƒÊÊ €À`™ÆŽi]ˆòÆŽSmBíLˆL¾r»núVcÍw¼7PÀEôÒt‘ì¬öâöóœL³½-WdÕ¬“_4Sé,’ãJ‡wä1ÇÎàòÜYìY“>ØÄ›vŸCåPÓ ™´Øvò£óI±™CëÝ]æ5/µNÍI¶‰€³Çió¶æ`“Ñ›…  ŠÝæ‰Ä9¬Ü„ÇEË´`n×Y[ùØéfë˜ü†5>bʤdùæ#@œuð¼œëÔ#_Ï÷LÏpôüÏ_gð|lõ™žÏrú ÏÁ;zð¼è”ò'(ýs%™È¸3g’‰ Ùeðü%Éä’drI2¹$™\’L.I&¹$º/ƒ2Æ(dc A©nÊø%Éä’drI&†ÔÔ0l¼5x_ÞT1RšR>¼â‘Ûé‹Üšï¸4¤Lóxœ6µ*$ütY~ZåŒM°™t#Z§;‰«¥¥ª£¸/ͺî÷ 2Qp ŠlÞLkÓô&m¹ô Kÿs¾vŒ^Y¦ÏGV³,µã ¯2 OL-ëy<Ñc¥ã§ß5èŸÇ‡üN2‡Ñy~¦T7Íá¨Åϳìñ€€–VñÀô=8›ïð€žÇcT¨X+ÖËÒ¢U §ó‰Î¦©ÏÚr·£­¾ÖœòŠº.! 
„éUE‚½8q)1êúnsCWWløÓíúî~qüÏÅÝæák“ˆj Ï5$ËLq÷Ò:©™¤8½F“¦3£:+*„tXo¸ž:{ó™€tmåä#‡—T Ès‰*i°fôfz1Kð;©ÍÐÍ…0&ÏET©p–Bu­tu·j€v ÷WéW»NåMbc>õ.eGpl ­ËÌêk!N‹{ÃP‚õt†aööløšÏ‰Jæ¶²Îq „¹Ìÿ:f àøÀ6~uà‘ÖÓdç«'Ƶùßûåá?ßU7ù8Fk•wŠóqÈ·ÖOhŽÖåbB¬Ñ3X<Ç€»Çç§uÙIëò1™Hf ±‡-À uMµ”â:º,ïŸ^$xìØF„ݸ[>Ì’Z+¥d‹XQ.ãp¥ÖP3ÄExpOÿ¿Ï›ë¯ûôRtõ§ÕÍÍâv½º{º½¾]_¼¡¼Ó›š&Ũ<êM­|GÛñlWÇz–€ÏEžÜR0t2)iò& s”G©… bÆ-_ùµ{fºß0×?ýšE‡ÛéÇ[bBž˜¦õ^`†=7ëÀTŠjWÔ–¤w7½¶–§Ð‹Ÿ_ÒÐYšt*¶N… M"hi©óìú%¡fÀü¤·Û™Ì›§ÕÝÝâz÷ý1]N BÌ#Dc›(!‡Áz&úÄJD®ü§¶ë”-ª3LXûË‚÷J|ÔÅ:åÅô‡¶ÜürH«H¯º›Ï›ë×ú\›å@¤][ŽW´T…+g€ŽJ †ñ¶ÒúoÛ»ÇÝz±Ýl×w›‡ÃFyˆ©òÐFf€¬k*ÅÆÒ”Ò'Ü1j·&sµh§Xv§ÓÌ1ÜGÓ:¨ rCJßÚhçxŒ~ÀÔî·oi/,6ßÖ)å+á³*/(M>°øÒG˜µr®ù¨‰ZàÁ™NÔKå\OIT†£ç_97|ʹ Ùê3¯œËrú +çÆà]9×|Ü›týñ§”³nÒÊ9§üÒÛS9'ƒÚ‰ªEå\ƒ*ò@ ®ïÕÏU9'âNèÖ¯œK_ÔÚzú?‹LÓ=~Éð¿døŸU†?)&zPJEÖHF¯úGbÅtýu·>Ú=Ç7¦÷ϯs±¶»ÇoM¨ÂÚ| O1ŸqV÷ëývõ¦¡Û1°u«ÉûV;“î•M“Oë¬é´k¯”&O¿´&W¸x§ÕQÇ÷>Oò$ÞÞ‚Ï[òboÖÍ=ÔÛÀêŽë|º!¤óûðGMñUòoÚ7ÌÏ!tõç§ïÛõ_>üçëOþX¸õï„s·¡Cô)éjÿøð—= ÈÆþ@'sSfú—ÿÖ ïJü ¹ÐÊÛÍöêÛfõÎFx<þøÕv·N¾êþHÅ~yõ×fÏÓ‡ïiYgé§5éâ:CGÿ3}dùá_Fó±sݼá£óßGçåâ;ζgר¯ÛNìE¹Â kP¨]W¦Ï/mô½v $ø™½~8ß9.^¯;×Ëѯ8øZ^/Ùê³÷ú3œ>K¯8Þ ^¿à” &Nêõ KòÛ2§=©6â^”ˆ¤ˆq²@§Sé#({?)tû¥ŸI|B‚þD_¢òø„€;ZÅ9ãdÞ]â—øÄ¹Å'¬M‘3 \.•µN;7þ!TüÊh‰Êb`/KÔèWP2äw‹Í=›ä»Ÿ°=A±•Œ–‹J¸Ð]X/*჋Î.6ŸÖuŠñêßÒÖ-„þ[Ú‡Ú ðHý Óg²ø‰VÄ>¿ßÇwñ“tš]5‰„Øâ’”µ»ß_­¾<þkkÜ&Ëz·^4‹HžÈÔ}èÓÿ5wPq¢ˆj" úMHÐÓÎÎ0ó¨œõ¶€ÃÑøŸÎ0p§?wÃŒ¨E²ÈRS‰‹av1ÌÎÍ0óèRy†ãØ;wœÕ§I) ù}A]²«Jh}ý‡ˆ³_-ÎÚ‡@¶úüã¬ýœ>Ï8ë`¼5⬂SÊúi㬗*ôÛÈΠ²¬Œ`¢4¹ú|9X§cµÉ=€yû‹ýªmŸ%‚îVº6¸DZGw~}—Ýi­½ š³OÉê8ñPÓe‡%©î›ü(B}87ÿ­=ÙûFÿlþ›„;ý]‘¦ðßÈ~è9qñß.þÛYøoÎÐïÓ†õßœ+Ö® ÂjÅ¡©7»§Ô·`ñÐCY`úˆÁ¼/À{{_g݉Î*å‰Wåv‹‚ëß×€K°ÞÇØ{_#éx*iàbJú®×·²}†²šIÔº±ýK¡7¸ózêJ é5—°îS–X¯–uÀ–@ º^eøýóÝÓæÈòõÃbpÖäÁ¦—35·ÿhóuR…(N¶LVz"øy¾“LÈøâü¦ÙVÔçaZ"A+í rìãÍçÅðÓãsÓÛÍæ«SGSð.A뜂J- kꃈ£û¾vëËO¢ÍsSë"«‘» m…2ï !€˜´¿Ö)œ.òõî. 
†I]cÁ¤iÌÆ4òÕï§:Vê›–'‡£­Õ½”çq„€ˆÜÑEë”õõZ‘"@jG+ëÿ|zÜïëÕ6,þØ<Ý6FNƒ2ß*’ Ø[ Œªµ«Å¨£Ò0¨›^»Õößéßï?¾`ÿøÚàæãIÜ9n“;Ng§&‘ä°fU0^q)… ¦ŸGÖ*Bn‘Lòvw…Z/vŸVשáÛß¾7Øó¼Oá9ô^;õ $VI?5ŒwÎÒ£u{CìaÓÀ‹yxd Ú - Œáþ BÔ?«¢üËÁ£ö&;Ü£ÌóݬuÌT¤f¯a?HõÂP´gÀðøÄc³N¹×ûw.d¶ß¬OÍ/@{­‡0ЇYSÅZpÞ™ˆ,8òÂ%UŒËÊpôüSÅÆ€¯“*–A [}æ©bYNŸaªØ¼£SÅD§8=iª˜ÅTÌÔ׈«…¢‡XÕãyåã4¨Œ÷¨5Þœ(Óý§ÎÇi¨¶: 7Ü 3ÖS4_ôÑF6|’Öu#—|œK>Îäã$ÅŒ>gxŽ^ã°´ˆÉ\¤1hÔŽÇŽ¦Îä^ z›wO¬2È;2ÙÔ´L¬žj›~¯ÇTË]˜–ök„)G[Ã2Òénzªc A*u'¼ø·œÄN’Óíún{1]î¿¶>å$Ÿò÷>ŸòÅv\^ýF,yÃDÌíjµº#‹þæ;yPë‡C#­›«t­>5ý°&rÚ¿<üÂô×d°Òs |)£í·w´Å…æüh Q€h âxOŸËÝ×çý¯ÙyôiT¼gÚZ7ëhÓÕ™hWó((‡oŽœ%n…(”;:jb€~#¹fùŽi–ï—Ž¶DÔ ¸ëǥѹr€†l ÆÌÛÙ°½{ˆ#f'‚З1 ÝƒŠ»†’|ÃøN 7M`ÕPižåOޝ ëNà Þd’§«´óšsÍ=IÖ¸ÊÚW&Z ĨG+ßîöæöx·5¡ËÅ·Íêu£0ì ÁVàNZgƒóÈ<\âåÑÒ¾Ãg³ø‚²Æcäb´ÎÙ Ye£ä-ÛDTW7»ÃoÃÜŒgŸƒh?U‘~9@P®Ö4\¹Þ/0ßN Þe1#ùÒ`¸Œ¸fv“i¬@ 0妓™yÀtöŽÖÚö;bw¯ƒ‹¼c˜J6¶'‹—SZâ¢ÝÉ4!î§mŸª“Ùfùœ¢&ÕÐ{¦If³N» ùƒT€>n F$­¯¦Ûf à– ÃÅ”¸K† §£´Nc¬“3\Þt™“×¢4ÔI§Ñ üåÜ\²PXª”`Ðäá6ë¬7ÔS(zBgÊæªùŽ«¨F¡Öåø"ÒÕvÓæ3÷Á†”!…Œ‡Ö¡ÁJÉã±o¦0ª¬ #³85Ýçƒã4ƒÖ9ÐÕnËšª!!¡ÈÎí¨Àî(ÁG½®)ßw³€ÑCž=!º¨#>mÖ³#«DF©b9Pª50É#ŽYÄM)¬ LvZç£Ø~¯«åPƒšâ?Øä§S/ݬóâøÒDêP9HkJO¦×uy CJ‡ã¢£6…|•|ÈQ›FWÜŸjHXúT >½ÚðbôcSkcŠªS• ´#“å(û¨w£;[¥¾*—ü9“º¨Ð_3 i ´Š¡êÎ 5×€œ #ûX"ùh?ãˆ@ŽÒ×Ív›>»=$[6Ó§?¿Wéd»‘”fT¾¾}¼ºiZÓår½Ú¶Ó 6ëýÇ«ßÛÕ£ÑKÛ¨d›À_ï7‹›Ý&Ýi‰ [M»cªßé êry¾©Ùlµô&¯€î‹ ™§É§ZïߤÒ?>ß,^×¼î©ÕûÅúÓ¾Ci÷ZqNU“`¡D&'…¹-é#¿¾|c4íÒ÷ïÁ,HaÌQ÷µ— ºš@sO'u* f~™Š›âÂçÍݺ— PM¦8L˨™_¦Ò¦ŒC¹ÀÞ4ÎÔ©Svb‘ ˆ™]¢N›¹%úåz»ØÞôj·­&X0³ ¶”¦ùåk`ùn>Ý/¾m¯Ÿî¯û¯£jž‰³SïZ!AóKÖÍ´sïW›»U/|5‘Nn/•R2¿,ƒžéþ¶Ù=õ2 T%ê©Þ2Bæ—$ÆùÎÛíãëÝ·=sDa5©Æ8Ù+$jv {涘¾Ýÿ±"§àÛ~{»ÞõûÕ‚C`6ËIJÛüò6zvy˜±Ý=¡÷‹”ƒt}à„¯pòVÍ'å"Šæ—­ÅZ•.׎ô÷àØõ38Ï„˜BÄÔ 8rø›^sãg: |ï ”ØõwòïÁÊl ÚÂô§Üû=±33vB\áRO%Š 0 ­¥Ò‡÷“ì|`Í£Âæ]•ƒÊ‹ß׫+I9ØàB­ó#ÜÃ?œy]HYf@{«‘¡uæ>M§ ÜR#úîeË5pL-8¾ø­,cЛ˜gý™Î‚^¯ô¾¿h(µǃUƇJL©i2 C…UL¢xPÁ‰šî>R»ö,qL*€op¥Öâ8 “3Mµ.ûźµxM6tÞW“I´²m2 S‹+(¨ÒÛ*9–©™Û&%¦'˜ã܇Åêáf±O™$O‹ÔaqÝQÊPKRAƒ|÷Ôƒ?¹@O"¤owÏ÷ëÕÓÓêú6eû½£« Èèº*‡>T8ä S¤Ù)…³Xm÷·ï鋜hŠñ;;…hxà“ f„ ð–>2Ù7ÿ—æŒ?|Þ­öO»ç¦¼æ-mLž{Ân-׉ù@c/)èÁqÖeUunÑë»Õ~ÿŽ2ÖÓ+G^Qe‡ ƒö3èo& j ã~µyX´gq—Öa-ÅU…ë95ã…º ïž¹/ä˜jä¨~}‚3L:‹„¤¦xFDË"ŒÆOŸ'ÁPßP­kI0º)ó%D¤Ì"@_ïEëe;úùö‚`rš$˜Ñ䔂.$,βIp áXsE/y­¿c˜t®bài‘É2ñ(q´PB¨‘O¤ûÿ®yZß­¯39 “&¡¦ø9~R2F Î)Ïv 
K묪“]pÇÊϧú—¢œD~SS3JŒÁ–QìLjñëó§õbÿH¾ç §ÕóÓmªh‡á^hŠóîiÌk@nÿÑ:#Oƒ¯æn `Òy_qŒ×n}ý¸»Ù/˜{=m«òç\LéV±¥´Î˜0&í}µ)'Çê1 ¼Ï÷ººÁ xàR¥hvz´Ìâ¦uµ×5§€ÕQ$`ª ´; .‰9‘é›' D]d í:åÆœ8µu'D-DÇ·!Ìlx½Mž6Ù4H\j vdˆ¡u¾bùSE=Ò•Á[–Ts—kvâÝÊsth­<ó×Òç«Îñ¢¦Þ‘>Á¢¦mð¶.j%¥gÝ5%v¨;RÔ4ª8±¢¦Qè5¡WQÓ¸Ñ9~8“¾¨)¾Q€¹c "¤êz},júXÔôŠš"c!ˆ0*ÈÀ†›(ëeçæ•6’Z<”Âv„v{Õ»F<ºVAjlZ¤5µ3«tJFc[HPŒ‹à)<¶#m¯׺Œy¦²`™×€)iÓëà]^uØ4D *Áó¾;·,pç¶x©Ɔ7r£¹NQšÐ]¼ª²)õÿ b¬ó[yëc;-YòzÄÐ/…§†ß¥L«.ëc†iL@yyDáð~c™ YôÝ‘fçHq,[ –*8páa°~ç‘0’MƒA•ÐŽD&Z1õ!RÁk $’i¦…¥! í¨°É’u7œÅ´Œ¶¬ \o‰ø§Ti¨’ÁHX‰ ‡hÿ²8ÕÚ^“ÕÖ¦(%MéÞ…ºÔÑZnÕuÀ[£˜åRÒê S´ehQ ¦Pè*e4cí„a­ []jaÛ‰ZÔ Á¾2†· âî5êÓ}¹ô Ì?‚ÜAõ[Z½nLÛdƒvD›öå?rEL[ƒðø%åz«.0ÀÎ*ZuZk-&R3Cm¬ Ÿ€oÝ_«lü¾HÊê úmiÃ\h2@°ÍR"uÄ ‰c­†è| UÂämAƒ¬ ã\­Fž@)0&ÆP‚@”!†WÙ4!* í¥ú@UlºØ‡êGüh/<þ5üÈ´Þ/õã.ÐŽÑä±iÐ/•À\,x^€g®•„Ħ),”b?¶•cìÃr„!¤ VOW!‘Ý °\žl†ï©7uQÆýÀw*:ÎÜ„ÖÂ(I‚C­5&Mií«ê`YÁ 2K¬S ¬|(¹°ð« ÷’ a¸x$zëlܘc"Àj!’§ºøÅýƒ Æ|å·ç§Ò@,Xˆq¤¶’˜cý8Ή A“‡bþI†…¥|á…¯@—1ZÒ|hGUl¶‰e¦J'ÉKBÜ'Ä·¾™ÿü\ÂÁÖ D»ºvÌ—*ñ²¨ÏSøMÊ3ëUp ˜"° ÖºDIEññfáNTïëÚH/EX¶JqÐtA;Am²¨§.˜+‚¬Zß>=˨ƒ)f%km";%ó› Ú¥ˆ ûÐNª¤ziw¼UŸ$ETÊÕRìqq´ø½UÌ©TpåC;Œ–N»()wÅÀ7m¢âcaûõoK—þ€m ·ÑÊëC³P}b‘:ÕrÀÔú³|Gˆßµ g0ÍLÈ®ƒvĘVÙÐÓrS}Ü´E ùðíñÛ*n¿"n Á 2¤ÿY'o“­‚.8§>%Z$ÓŸv__]Íêã1V)qÕ ˆ’þbíE;Êe—[B ?€´d†‘0pÍD›Á`álˆ0„’àHƒz®Ó.íŠoêS#:Zu ±^B(Ã"XÒØ!áh'û@$#E „%*‡eâH?~¬ãDàÿCøµUÖv³ñN}Jt:eèòãgñÚ/ ü»339›>!;E`XVèü1øÃZ©B²’vÃòí9ˆ°ZÛà@C»èJ‡nGDºB }zé(¡^J8Ó06¨;@;#y2¿Ïp[§´‚AÂ’¦Ï?zÿ¦¯dÚOƒ¡ÍÚßc;ÆSܨéŽg0…Ó†Ia±‡Çx- ѹ£L)áóë^Êð3'apV?f súFôäc~[OóëC×ú´c~ý#}z1¿­ð¶ù-_n•¡ÁÚ Ûi̯åð¢g- PÀ*ë@µ§•5¶@Å,èI&ŒžVò¢~ 1¿ÕÊrE óëÞÈ !¼Î¼óóûó{R1¿À˜‚*:Üz³$íhÅé›&Õõk¨À® Hà jÛe$ªZ¥…¦—G6Ö¾ÒD†4vŒXÕ&i>ªtfjšÛ´%X¢NÂügò ÊØl,´ÙC;JI²0˜ô6j !:AA¨âmPÒ6¨•+ÊúÝS’seji€hïVx@žŠ¡CÙîÜͪÿ4^"”¦6´Š¡çI*suÀ6õi„´óí[·Û°ÐÁ{SùR stƒ@;JE‡>äLƒ5V´^–GW_…aòÉtå² øOàÌ„ÄÜ !ðXyµm¹ÞtìQµ–*ÉîÝï\Ò\JËŒ?Ö£h'¬I"ÆòKz)Ó¤4ƒR{µ\ßÌfƒ%&ÝXUÔ0î?b×XåÕР٨1y¢ŒÛé8¥>x«CÎ(ÔDïò»œÐ7LðÞíi²¶a˜¼Š§Lu^c–ÿO“_÷5 "PÀâ#hÇHšÔT0RJtè*Æð¿Áâ,A6ºß´ÌÛ±àní8×I3¯wÇTD 
¢[ß‚:jˆlón‚½àGl‰’=ÐŽVT”&nü¹)O5ë+7j_ËDsÌÚ/1™žŸzÿüÇ?ÿ§p›ïâÀãïŸ|w{•¯²ž=¯?Dv4€~žü8iT’Þï?ëá?÷zØÛ•óòÍëõI_C0;=üFcûpcû¡É¾9ûß|¼ÚLÉöc“Ù}2{/^ôXo:9;g³âüoÙ¤³¯×wS׳TùW#§u“Ñxò6Ÿ9¿?ðÊø™u¼ñýëÏuVôx–é|rv&ô,caÏÔÀšL&F@§6·ç†4â'ï[ÑñmcµaÓj®£õvÛlÿšÏó"ÞkHÒ_¡ï¾ô,W ×+ý¢ƒÇwÎSÐ[öÑE0 jÀT²·#Æ{Ð)ôùýw¯úš Q×7ZûO>Ë·nŽû=ϧ³V}þu+÷M¾˜^MÞbÞd9jÜgR±òÍïÀßæçù"Ÿ±šÖÝÝŽô-äÈN€à ”ȽbÃè— ?lü»(§Pjo¸Å×Ï4Ÿe™2ᡬØ3kc’©<Ÿi˜„Æ@Û‡f³÷ÅtžÍ¦ÂkGM§Ì±÷WÙ<{—O¾p#:êýô 0ÕγÏç«Åm“MúI£•€Žªþ=ÿLcù^öýÍõV6ô ‡bÞZnÙe/…Ü(þU¤‡îQ>âbDUEz4{C1‡¨kÎâôU_?PD¼üõ®>‚Ìú#?ð7h¸Yù>eóùÕª»ÅÃu«é¦t’ù¯Àûp|xþ/bÆDºÆ¿ /–=ô¨É{;Ù ó"üUÎÏùt–o³ËòÞ‡¿>³5Úèï ↻e£ÍäõååÍ C°FwþÏ2´!Šwýý€ñfuqµ˜þYðüÏóÞ&!ñËØÆg7+ØŽñq¯—]O7 ý-žaã>n—MÐý6ËÅm±‡¢%èpb¤µÜ{/ÛŠLTû„­â +•´‚nŒs– ?Æ2®©?Ï~ÑŽXÝex}ì9[öx››Kw%àf1sD/”k¨Ò*@Åžä°¶ >Š Ãòfw.¯æÓ{YÖ›Q"ý”hc…%¢]Hö!/œD±U}* k[&:®8¥{«e”2í(m]î¡SxÑî«ú ýõZ#_{;èz1au;že˰ðX[ƒ1|…§¸8´d`SÖÊ È…vœðfûô­óRŒi¹B¢ C/ÖÄJâímI×c7àâVÝj‘g—»‘ S̯€ÊEq7¢!`îöutλT‹©>JZÉòè3*(fb^ªd¥©Ü½. «ìE¾O“lyqv•-&0J†ú5 έ  Dv0ç†GWúkÅeÈ¡©Ø-VÓól\¤Lä~5¯’J!t"†Õ%©G×zºë#f”¤çOXXçðçóIpÀe8“Ó:³j¤ÔCÜŠu/S•ñð¨2«Øã­ØÐuGψžþ­Ø6àÓÜŠõ ˆk}â·b½#}‚·bÛàm}+_Ž)I™A)Å)SÞŠ•’SGnÅ:ŒR믒]B­”t8‰[±'FsFÏ û¸nÅ:ªÇŒ÷áÑǃˆÓߊÅ7‚"LTäíä…¼ûx+önÅÒ!ÚKÒ2î¿ëÚ±J ÏD·b±_£Œ¢$´´•Zò.oŲ¡”R©Ï‚„ä”H:èi£ÉÌÞ(ë2£McOç ÌÊ ’ûÏW˜ˆÖ*’–_ÚñÛØ–Œ@Îh½h<Šy©hyqÐ ¿)c”ÿàÓG3II*´cÑÇÐ 8/ŸaiÆî ÄÔ]9‚õŠiËë:m'½>V#X’«:AÄþBÅ5UD¢ÐŽ“˜M£ø ilAï#@Á+¡Q?4L¤HÐÉ€5IKs›¦í¼G@¶6Uù¦]ÐîS9¶.°û÷tÍ¥ÕŒÐÐI¡æ‚‘†gë²B6Jq)¦»vИX\ë… Ít²*K‰æ¼>xÅ’oÀ^qßåßbÑö–`M„Ί á‰dk]fˆÀFy½ø Á» æÝé'»f@åJ°€2X÷,¿åŒ×-I=¥Ø•bìÌrKE—r–°ÎäÖTôËDË–L´\¨è˦I˜-aìnxàÆÈnâ¥ð»­ÆÂ›¡5®4$eiÈÚ“\¡4™SôE6¹œÎÙø=J=ŸÌcCÂ9^éè<®•2 ÆLsBFe>¼Èg³«íÝkRúAjL-¤iÛ°¶„¥;³ˆšêŒ†¦wÑPß¡3OÒ¼X)Âr³o4þ¸‹Úq³>0q×%LýKíÙÝø‡½×«^>EٻȖ½l†÷ìo{gy>ï-`kü-ŸômVîëy‚ ørm¾MpgZ]L—ëãÅay¯÷È"ÑAHM4£A\ëfôŽô F3¶ÁÛ:šÑ½/[–Rš›N£)誒u8Ø$ªT§͈¨—\ÖЄ­ÜÃø(¢ãFG뇋fŒA&YÕæ~Œf|Œf§—£Áº5Šö~Žâ"×¥;ûÿƒ_ÚÒg=!T¡Ytsy™-¦â²ŠùÅòðK±Þl¢´ÆK=7µ©¦=ݽ­'ézË­Çç›rx¾¾Z½<>vqCÿô“‚¸¡ûÏ[§•Ž6ÚDíN7x>ý5±Qï¾¾zs5Ynžz•?ÁÐX\Ý©ÛÎzëÎ*"ÿ:¨@FÈæ·îXaø¬õdH1|Sn#r2Œ\wø÷»ÙðóÙìï¨Âȯ¾E·uåYeì¿»éÏ…òšÏ'×W`…­¡Ýæx#w¶Ò Œ4°{/ß¼vÛ@{ýf9 Øß‚à™ô0æ®[·Y÷ýåÔQs}µXzhæ|^~±¼ÛdD¯&Dw©fÑD¯$v]þy¯’®{7[·KBÛÇ÷/æ9ºbŸ÷Ê<ÈyñU™¨¶H¨ÝŸäç0´§Ö¯ûeæÚ>ˆ‘Û 
œ3Kâ<ËVŒõ€)–Qk3:˜¸ö~ææ>‡æzû^|¶Ò~öâšVß7µöóÿ˜ÿ½ýâ@Òá§ŸÀ0+ñÌšÏ{_fgh~^f×?hi‡Þõ×sµœá©Üv$Ñ„[¾Ÿ^.§È¡ýQµ¸Éq2^n3œîwX¾ï~¶a—]w÷aÙ²’Î÷§Ý>vóBËÏ{øé:}©c°fÁ™0MÛ죛ܣ;ìærïDzœ“—7ï`‰ÂtÁ<1˜(˜H³?c•„Ÿ•|Ÿ½MîΧŸìõ§gOÉcBXn3;&ðë·7gk²ÿ‡îÑ2_•£°^EÅÃò(áéø"¿ïíÒ‹‡È+ŒZ)„ë‹%,`íÌnŸE ÒÏ÷©9j(a©Èüöbq5‡Y.\þ½§k5‰_r=Ä#_Æ-ãËg£=Iº¼'Ÿû·¤l‘ïJÚdrwB…e/ü'Y®­\zo¾TQ®-%ApŠV*wsÓõ-èä°‚a„žnÍJaØ\òa W”:r™¯#¥TÎÎ?b» þäþü¿ûc9Ó´ ó¯vT‹ÕL˜¡,HÉ2C7G¶äÛj«^›ˆ_×-ŠmKµÆš©ñ~ùÍïþð«+T¦ñÇßþíNÿ\6ˆÿòFøuâ:.åã{E8ÓþÔ'ùI‡~ùȼUîÜw£D/üÑ=­½õ¹v,×±‹³è§Eïõ ê/üm‘ô­Ñ{½Q–ÐÆÊ)¢÷ˆÞ#zè=¢÷ˆÞ#zŸ‰ÞËwì˜ô…óQv¢7ÿOT&&FFE]Ž./üÄzèÑÿ ¿Äÿõâ¯ú%þPÁœõò¿Äw<½ä/ñ½Þ ~‰ŸX¥ëdÝñKYòhlÜj'. ­ ¹b—è\Èqc‡A…øf'O:z^ÈoˆœR~Èц%†ã²gõ[ð5»lïí“=%Ž:w¡?@¯®Oèψ¿†ÐwÌY/N軞^ПÑ{šÐ·ÁUÇ«Ôc;;=Tº~m7'Uë“=¥^3~¶›ó޼1ÛŽ*JIÙò Ê4°]`»ÀvíÛ¶ l7í&öY/ŸzTpl·¶£„MÒ“ë4_þ;ÂËïÊÕç*pΤƒ Tí€nÄvª³ÒslW³3²óà7úfÇêoÅvuP€ZhL†â K`»!éxt}lwFü5Ø®£`Îzql×õô‚ØîŒÞÓØ® NIIòx•B’[±l˜ñÛU ȵa ¥²ØZØnJ½bú,l7å3z¶›RöØÈ'°]`»ÀvíÛ¶ l7Äv3û,[`»ÀvKa;®··1;u±]³ƒ‡*8a»ú\pt¢ +vFwb;N[9 ãQã5©SXRf—ߊí¦ÄÑCIÛÀv<¦ãÑõ±Ýñ×`»Ž‚9ëű]×Ó b»3zOc»¹UÊèÞl»Äp>ÀvSR9-vIvNý“+¾ÿ¯±Ýœwì}œç”=Ü}lØ.°]`»ÀvíÛ ±]Ý?kÁ%Eî³”ST¦l·¶“€1)kÛ5;H—gÛÕçzΈjM¦|oµ#Όٞc;­S¸V¦4{jv ïÅvuPFñ' ~Ç ØnÄc:]Û ¶ë(˜³^Ûu=½ ¶;£÷4¶kƒ‚%¯Rø,aùÊl;Êè¶›’J°XKŠªJÉÁQÇê>¬¡ä”wŒü}ØnN™q`»ÀvíÛ¶ lØîul7³Ï B)l·¶Ó–ÅVŽ€?ö.ûú󏨉ÐÕØN[CJG¼©Ø¡ÞYÛŽl+ŽWÄçØÎê6L’úe,›ë{kÛÕA5³¨ÚPœæ‡¶’íxLÇ£ëc»3â¯ÁvsÖ‹c»®§ÄvgôžÆvmpbM/¬Rj·b;aÛ,û¶›“꺶kªPнRw;àÏÂví­91Ñ ›%å7¶¤˜S†Ø.°]`»ÀvíÛ¶{ÛÕýÓ“¦”q¸ÏZÙËÛ¶[ ÛÙVïž–³{ê×¶kv(—·¤(ÏÅœ0y…GTÎT¤wfÛÁ¦ÄÙìÛ¹c.ÿ”z­;“V äŠz0@NCõX|ði\yëò\”(Ýí½3+#%uzAYŽ"åÈE \rÈE 7È•ýÓÌ}¼ÏÚC5Èä"[!ó­¼£AQп6ÕìHøê@®>WIˆ­`T;6²{9gÅ|Ð$Þ+‹!V(EWq{kþEÔ‰Á‡âL"ÿbôÃzÇ£ëç_œMþEGÁœõâù]O/˜qFïéü‹©UÊž¥°]˜‘7@—ãÕþu©«å_L©wû°Þ‚3ÞñôpÇøvl7§Ì!°]`»ÀvíÛ¶ l÷:¶›Úg)çÀvíÃv%táLè:ÀvµÚÒõØ®ì¢F8š@„XæÐØŽ¶zÒýi i£L’û™"»Ý“ºLwb»9q„Ñ[pÄcz]Û ¶ë)˜³^Ûõ=½¶;¥÷,¶›\¥ôæ"å"›ÓAoÁI©æKa»9õŒŸ…í&½£ï+R¾¨) â Ê,Š”¶ lØ.°]`»Àv¯c»¶zb Óá>Ëå?¶ l·¶«&¨KÊ=l÷ÍîF]ƒíÚs¹y¢y4ˆEñNl—·¬†ò¼Úä2…ÕLúS½Ú±Xz+¶›'ÙÛ yLÇ£ëc»3â¯ÁvsÖ‹c»®§ÄvgôžÆvS«Þ[í(ã–²`»9©š×ÂvSêô³°ÝœwsÚîÆvSÊ\â’l`»ÀvíÛ¶ l7íföYÞ‚íÖÂvåÃ$K)»v‹”7;‘|uµ£}ü27¨_…g·C¾ù’l²dÛA%ô57ºmD›ù³~ö7b»&ŽrÙUÓPœ#FoÁ!éxt}lwFü5Ø®£`Îzql×õô‚ØîŒÞÓØnÜSÙÀÆ«TÙoîͶSÚ$Ó¶›“j°¶kª$%íWæÛí?«Hù7ïHò~¹‹¿xñ}½¿¨å;EYöÀvíÛ¶ 
lØ.°ÝëØ®ìŸPØÆ'H9I`»ÀvKa»òa–C"€æ~¶]³+GÝ«±]}®BÙÜFH5óEÊY7‘ÈdÛa=*{uU· ân§,oÅvX×IÏJ~/‹¶ñ˜ŽG×ÇvgÄ_ƒí: æ¬Çv]O/ˆíÎè=íÚàFf”Ç«”e¾¹¶]Jõ÷•ÃÕ’3ë+ ªùb—d«ªœÔu°«îvôa—dÛ[gcÂñg˜³¼ÛµÁÒø«Ëã’l`»ÀvíÛ¶ l7íÚþ)T¾ï³l‘mØn-lW>L6Ä—ÿúÝÌúø»üEØ®ã¨E8,ŽýHˆöxÞJíê jœ¸ßât‰µᘎG×§vgÄ_Cí: 欧v]O/HíÎè=Mí¦V©ÇžwP;Ýü¨²ÝœRÀµ Ýœzù0h7åÊö>h7§ s@»€víÚ´ hÐîuh7µÏŠD®]@»µ ]ù0Õ“ ríªI¾º!E}®åæ6daZÎTéÎ\;Û¤`)?§v\¦pùC1xª7;â÷^‘mƒj9rf‹Ó¤AíF8¦ãÑõ©Ýñ×P»Ž‚9ëÅ©]×Ó R»3zOS»6x9ó§AÓÝî͵C˧|€íªÌeg‚T£Åríšzð²]÷*„䟅íÚ[cRUzÁ;žß‡íve^^ZÇʱv`»ÀvíÛ¶ lØnˆíÚþYoÈú '€ƒÆOíÛýbØ®|˜ l‰»Ø®ÙÑÃU ‹°]}.qΜ}4˜üÞ†¬’˜Ÿc;)S¸V×Sì+mvéÇPèVl7%s\‘ò˜ŽG×ÇvgÄ_ƒí: æ¬Çv]O/ˆíÎè=íæV)¹9ÙNÓÆGý(æ”þXÆô—¥vSêéÓ Ûµ·–”Qaì¦7ö£Ø•ÕŸ÷ñee~ƒÚµ jÔ.¨]P» vCjW÷O6K¬/œÊ¦Ô.¨ÝRÔNj²›`êS;©ÉnÌ—·‘íŒÿý²LYî-l—@àˆÚiê^þ4Ò?ì7;Jï-l7%Î9úQ qLÇ£ëS»3⯡vsÖ‹S»®§¤vgôž¦vupIb>¨'²ÛÁ½ý(„hS:ÂvsRy±;²M%/ï8VŸ1}¶›òÎÏÊÇÝíꈚÊ䜆veý(Û¶ lØ.°]`» l7µÏjÜ‘ l·¶+&»±ÓÛ5»Çü‡‹°Ö‚u”“.w6;¸·,oÙMÒAY«S˜Y,õ;µ6;Ò÷V¶kƒZ.û¬ŽÅiŠÊvCÓñèúØîŒøk°]GÁœõâØ®ëé±Ý½§±µÓ§²­ŽW){VôÊd;Ï›ãÑÙ9©&ka»)õîôYØ®y3—/`腜އíÚˆdå­åeý(Û¶ lØ.°]`» l×öO%ŸTº`¶ l·¶³ÖgÂŒÀ»Ø®Ù]~G¶>×)e“ášM1Ý{GÖ(©´‘õvØwíwÎhv`ï½#Û•$O*þ(ŽÛxLÇ£ëc»3â¯ÁvsÖ‹c»®§ÄvgôžÆvmpD{a µ[±¥Të `»&ÁÀÊb9–ªy±ÒvM•›x‚±zû´ŽSÞqá÷a»:b½ºcƒ<À]™¶ lØ.°]`»Àví&°]ÛgKŒî:— ˆÛ¶[ Ûy-^þ‡t±]³ËWc»ú\fÉŽ8š@ÅîÄv µ,§ç}d1µ)\¶$ér»]r|'¶Û%.:x,-¶ð˜žG—Çv§Ä_‚íz æ¬×Æv}O¯‡íNé=‹íöÁ9»ºŽW)’{³íJü±•=ö9¶›”êkeÛíª(ÕóÃvð ØnkÍ9Œ½#ô>l7©Œ£#E`»ÀvíÛ¶ l÷:¶kû§'.Ñü û¬?ÄòíÛ-€íê‡)R^“zخٕµš/Æví¹ ^ží£ Tìò­—dmÓ\bçÔ.×lÙFq\³“‡V­ï v¹-C™­Ÿ?°‹ó‡ã~P»ÓñèúÔîŒøk¨]GÁœõâÔ®ëé©Ý½§©]“äá*å%î¿—Ú9oht@íšÎ9Óxµwd^‹Ú5UF˜²ŒÕ³ØgQ»)¯´Ý7eúÚW§9’í‚Úµ jÔ.¨]P» jWöOLŒJ<<”O"Ù.¨ÝZÔ.Wǵå\îR»fG·2.¢võ¹JH<š@Å.ɽԮD”ðÛAÂÀNýŽ·»]æ·–¶Ûe-q­Å=æ,¶;à1®íΈ¿ÛuÌY/Žíºž^ÛÑ{ÛµÁEú5žÙÁ½}dÙò¢Ø®I¨‰Õ^*º¶kªŒUÇÒý-> ÛÕ·Î){Ê/|†åDð>l7¥Ì)’íÛ¶ lØ.°]`» l×öYäò!÷Ù\N«íÛ-…í ––Ëõ’¬v±]³+ƒ^í vš(§THà $ÉñVl—6@#8¸#‹e *ã UÛÑÛó[±]§²ŒÅÁ㵟Àv<¦ãÑõ±Ýñ×`»Ž‚9ëű]×Ó b»3zOc»6xëNãUªDû·b»¼‰™uV{CAà±TƒÅîÈ6Uî¬à/¨Wÿ,l7åWz¶«#b2€”™¶ lØ.°]`»Àví^ÇvmŸÅr•4ÜgËFÙvíÖÂvåÔò<òÛ£~ýî0¹º#E{.¡çdÃ#´ÔãÞŽäÅxȹC‘p¤´Ø¡,ÈU ™5Õ“¥O äÊ[ 22½Ã’ßÈ•-¹¼òÕ±G \rÈE \rSœ{ùLk[åá>‹é¡YNrÈ­ÈÑ– ò~ï·Üíäúü‹òÜ2/jùñþýþfW¢¶{¯M[Öƒ_䨲˜r¬#°ÒúÛÓ[ó/fÄ¡æÈ¿þ°ÞñèúùgÄ_“ÑQ0g½xþE×Ó æ_œÑ{:ÿbj•²¬·æ_ûfÎצæ¤þ˜*òËb»ªŠ8;:ÕSú´jGSÞÉøÆåsÊ«§¶ 
lØ.°]`»ÀvíFØnjŸ¥íÛ-†íH QÂlG’oÀv$åjà0š@‚(·¶ô3«\›â:…Á€S06»Dï­vÔ%._ ŒÅ!``»éxt}lwFü5Ø®£`Îzql×õô‚ØîŒÞÓØnn•²{±²m.ÔnJéÓ"|¿$µÛU) á êI?‹Úµ·gÜšÚ½ão¼55¥ŒŸGAí‚Úµ jÔ.¨]P»çÔ®íŸõžÛxŸ5‰Î‚AíÖ¢våÃÔ$å1Þ¯QÞìÓÕÔ®>² ê_4»"àÎd;Ý„@å µ ´©^óûS½Ù¼·µ`´–þ¥AßÃÝî!Ê jw€c:]ŸÚ µë(˜³^œÚu=½ µ;£÷4µkƒ£§4^¥0ý5ʉ6Ô#l7'•ÃvM‰ø Âún—óga»öÖeÄÁ™c÷޽1Ù®( 9¿°“[`»ÀvíÛ¶ lØîulW÷OIà ãXžz¶ l÷‹a;©-ûLüIåí¯ß}À¬x5¶ëŒÿý%ä;±lI!\‘Õ:ÓÑŒ Ç5;p{+µ«ƒjR‹ÓÇDÀ v8¦ãÑõ©Ýñ×P»Ž‚9ëÅ©]×Ó R»3zOS»6xözïp¼Jeñ{¯ÈZY´èèŠl“PNl£s]³µ¨]SE :há¸Û¥ë,ØÞšë, ±wøá÷·Û©]±œ…ÒàÇ·]µ jÔ.¨]P» vAí&¨]Ý?Ëù_ —MZ¸µ j·µ+¦—£–öK”7;$»šÚiMâC75M  ú­qKìØÎêQÙœqpï¨Ù=)ò}+¶«ƒ€¢ÀP\ùW l7â1®íΈ¿ÛuÌY/Žíºž^ÛÑ{ÛµÁ‘ƒŒW)à{“íÔdÇl7%ÖÂvMÕøÉ^P¯òYØ®½5kcïü,¥ínl×F,_¹’Ž•qT¶ lØ.°]`»Àvíf°]Ý?=«%LÃ}ÖSÜ‘ l·¶³ŠãŒ°Ÿl×ìÄ.¯lg펮‹a˜&V¿7Ù®–­s>Âvîh‚%ä(-vp\þùË?üáO¿ÿ×ÿúò§ûmûßßâõ_ÿ…îüotô›?›²îî»â·“ÿ¿|ù»ÿ U[0õÓß׿øO5\øéËÖüÓYýø3§Ÿ­è_F§UìPýüMʯÊûûº¬>Ñëâ,õëêØìèÍ­FÊ ”r"¤ý4»„9€ìˆ´u<º>=#þ ÛQ0g½8ízzA {Fïi Û‡”ÀÒx•ÊpoÍÂr(Ú˜ýÈÎI5\ È6U˜™ŒÆêËqæ³€l{kJæ"cï ¿±Cð® ×veÏŲdÈ @6€ìs ÛöO'†ñ>k@6€ìZ@¶|˜ZÓ-ìÇ«@_¿û€53ðÕ@¶>MItx„V¦{ó(ëMlz^´R›êb–ºØn·3ÌïÄvmÐ Ù±ÿÃÐn—9ŠŽxLÏ£Ëc»Sâ/Áv=sÖkc»¾§×Ãv§ôžÅvß÷ØÃx•º;²ì±› =Çv»¬]éñ©?vEùE±Ý7õ–Ëé`¬á³ò(÷·®¶~¶â7ïøûŠî#2>VF‘GØ.°]`»ÀvíÛM`»}ÿtW÷NwæÑ!8°ÝRØ®|˜œÈ8Kâ¶kv€|uÑÂú\ *aB¡²ÝšGie uÃçÔ.— I ¥ å¶"ð[[ìâP=õsw;x¨žÔîÇt<º>µ;#þj×Q0g½8µëzzAjwFïij7µJ¡Ø½ÔÊ¢•à€ÚíÜR?]í›ñZÔ®©’ÚÎ>Õ}VÑÂ9ïðC_®Û©Ýœ²,Aí‚Úµ jÔ.¨]P»×©]Ý?Y^Øg9¨]P»¥¨]®wYT’w©]³£|u‡àöÜJÄe4Êñníœ7Åœì ÙêTOõ xÛA›êüÖ^#»8jt(=°ÝˆÇt<º>¶;#þl×Q0g½8¶ëzzAlwFïil7·J9Ý‹íH·–`»)©”ÃvM#”Ýìõ‚Ÿ…íÚ[KJe€±w˜Þˆíve^NZø‚2ËíÛ¶ lØ.°]`»×±]Ý?)[úoöÎ¥W–ܶã_e0k» >Ez—M–Y @A2qŒÄpàLùö‘ÔgìνÝRiêq•9„wcÞÒ¿Ø¥‡"ÅvÉ¢×H`»µ°]ù0M8Ë;S?|ñ›øÙØ®>ב"&åäWvfÚ´8•ÞdÛQ”Ê1¾Oè›°ÜŠíê R¢GFŠ„ȶò˜ŽG×ÇvGÄŸƒí: æ¬Çv]O/ˆíŽè=ŒíÚàb(nãUêM:ÛÁFXâ{¿ÚKM8^íE}1lWUi*âwìUâ™?¶kÞÁrê‘<ôŽÂsƒÀ«±]‘UÁÇÛ¸bòÀvíÛ¶ lØ.°Ý~lW÷ÏŒe ÜœkvÉ5°]`»¥°]ù0k‡Cל»Ø®ÙA:»Ep{®”—¡MðÚl;ßkÉ×·P¨LöÁ‘š[ —níH1%NS´ó˜ŽG×ÇvGÄŸƒí: æ¬Çv]O/ˆíŽè=ŒíÚàà$ý:¦v —b;AÞÊ·û&ÛnN*ùZØ®ªÊ I3Õ#äÏ…íæ¼“o,m7§Ìâ’l`»ÀvíÛ¶ l7í¦öYÕȶ l·¶+¦—ðô³íªóS$u¶kã“”Y4œ@®(W–¶“´dT}í¤NarIÜ/§.¿åã­Ø® *Ê\¶Ù4—U#ÛnÈc:]Û¶ë(˜³^Ûu=½ ¶;¢÷0¶›Z¥Œõâl»G>ÚÛÕ~BªéZØnF½=ÿçS`»)ïÞxIvNÇ%ÙÀvíÛ¶ 
lØnÛMí³Š)°]`»¥°ÔŽr·‘ìÃNÎÆvõ¹J&&#Vížjë]€ílc$tyÈé#ÄðþT¯vÙsºÛ5qLåÃÁ¡8ã¸$;æ1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛµÁExt7áa÷ª÷‰ØN7`{“m÷à‚¤;¤®…íš*5NìcõªŸ¬%E{ë\oÃØ;™ø>l×FtUмC™æÀvíÛ¶ lØ.°Ý~lW÷O§dîã}Ö1S`»ÀvKa»òaZuìb»f—äôl»ú\¤$£NÌÕØõÊK²¸yÙ‘ˆ_c»\§°B‰„ú\nGï|o¶ÝŒ8Gˆ–CÓñèúØîˆøs°]GÁœõâØ®ëé±Ý½‡±ÝÜ*er)¶#ÔÄß`»)©ôâšé7ÅvSêùëÛÈ¿nl7ç ÷a»9e–Û¶ lØ.°]`»Àvû±]Ù?%¥2“²÷Y3lØn)l—+6c¯ T»Ø®Ùœ^Û®>—‘3¹&PÍ»²%ñfDÉßt’µ:Õ•sä_4;pºÛÕA¡÷Ë~:WÌ(°ÝˆÇt<º>¶;"þl×Q0g½8¶ëzzAlwDïal×G@T¯R ùRlç™6Éï²í¦¤–¾¶{¨r'’ê>¶koM¢yPO÷a‡ù>l×FäòÍÙŽmœ$°]`»ÀvíÛ¶ l7íÚþY¾bï³êQÛ.°ÝZØÎ*++1k?Û®ÙáS£…“°]}®¸yS±Kt-¶SÏòî’¬m% JÄ‚£Ã¾;fÖÕ¹ªÞU“Õ—ïK?[ WÞÀTÒïìèÚÞ‚œuË™^_›š”*kU;z¨¢²Ž գЧÂvÞÉ8¨•ÿa‡v¶{ŒX¯ ‚íP&Ø.°]`»ÀvíÛ¶ÛíûgF‚´ãPv…ÀvíVÂvåì=ëm éa»‡] ¥NÆví¹ eƒ†GÅ.½ª*q^oAØŒÜì ¶ƒ:…ËdwëÞ|ØÑÍØ® šµì˜y,NsT;ò˜ŽG×ÇvGÄŸƒí: æ¬Çv]O/ˆíŽè=ŒíÚàæà¸c 5ök±ÛF(o°ÝœÔ¼Ö%Ù¦J¹£ŒÕ{þ\—d?¼£¹üoèêÅû°]Õm‡²§*UíÛ¶ lØ.°]`»!¶kû'£1î8 ¶ l·¶ƒÚ³D»Ùv»ôÄOÂvõ¹˜U9¥Ñ*þH—fÛå-£½®v$X§°g”Ô-[û°Ë|koÁ6¨‚þLÿ°K‘m7ä1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛµÁ „Ç;V)çk±ÒFü.Ûî!l‡T„Ű]SEÊ€´C½}®"åsÞù?¥À¯ÆvmDó~#”eOý‹Û¶ lØ.°]`»ÀvCl×öÏLfý²;åÀvíÖÂvذ9ö/É>ìÂÙØŽ#Ò~ŸŸ‡\‹ítCÓ²3½ÆvT¦p½XÚäš]Êz+¶kƒ–û­°vÏwÛ½á1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛµÁk’Sæñ*•.Åvª°øl7'Ua-lWU•#©ïØüÅß_5¶kÞDãßÖR¾ÛµËQŒÒøŒQÞ z ¶ lØ.°]`»ÀvØ®íŸÂ2¨!û°ãÀvíÃvåÃ4“ò°¯?à¾ø€ÍÈήmWŸë‰¬LŽáAÕú¥½q«•€^c;.SØ•\³#—[±Ý”¸L)°ÝˆÇt<º>¶;"þl×Q0g½8¶ëzzAlwDïal×wa†<^¥ŒíRl'`[z{IvNêj—dêËÞ¨6VïdŸ Û•·Ö¤bˆÃßV+„ºÛÍ)£¨mØ.°]`»ÀvíÛM`»©}–s lØn)lÇ›$¨°QÛ5;9ÛÕç‚”WL b|%¶cÚ°†ókj'ug&·þm^yœ¨om$ÛL€j×Ä9Gi»!Žéxt}jwDü9Ô®£`Îzqj×õô‚ÔîˆÞÃÔnf•‚„zq#ÙT7{¿Úï—Ê‹Q»)õþ¹¨Ýœwžo¢^MíÚˆD)XñC™Ei» vAí‚Úµ jÔn‚Úµý3KY}¼Ïj² vAí–¢våÃôzyÛ1u©]³ÎgS»ú\cG"M b‡|e²]Þ(“:¾ä´La„Zr©ï¨Ù%¦[±]TRx‡8& l7â1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛÕÁ)¥lƒBÀMdÖk±]&ÞÄàM²Ý”T—Ű]SåÈ9øÓã-ù“%Ûµ·FJ&ãÍ’Àò}Øî¡¬¼ï ðÃîéy`»ÀvíÛ¶ lØnˆíÚþ)ecÚË?ŸÛ¶[Û•Ó©ÄHÐÅvÍŽèô;²õ¹h†åᣠä%’“+KÛùfÉ<½éH‘Ë椚°Ÿ˜ÛaïͶkâ2¨ ¢Ìf'O•˜Û½á1®íŽˆ?ÛuÌY/Žíºž^ÛÑ{ÛM­R*WgÛ “+wVûÝR³­…íêÙÊÆ>VŸá“5’mom.’aìñ]Ñ9åL;”=]}lØ.°]`»ÀvíÛ ±]Ý?Ëjã@™Ø.°ÝRØ®|˜åãõ\žÓÅvÕÎÜðll—7)Ç`¯×7Φvõ¹9»¤¯©á—¨Ø ^Ií`kÍÙô5µó2…ÍË™ÔúS½Úe{ïÈN‰ÍAíF8¦ãÑõ©ÝñçP»Ž‚9ëÅ©]×Ó R»#zS»©UJ.nH!eÑz×bNªâZØnJ}†ô¹°ÝœwôÆd»)efØ.°]`»ÀvíÛ¶ÛíföYC‹d»Àvka;ßëÍŒ~ÙjGüTLå$lWŸ+”([L ¢ÓàÊd;Î$GGíj$)Î>ZìÈùÝY‰ÿñ»¿ûÓOø×ÿùî§û±ýßáúï~†; 
®Y¹ü¹Š‰ÁÛ@6.nY³Ã%2CŠ9r)«¥èúœ)óñ¶ÀZ\¢ÿº»ù‰ù~úPõæáú]¹Š¨\Æö¡’´õËóN<½ù³Ÿ®>Þܹz\¾±ˆéðê†ëòŠA2„žcdãˆdý*þx,q[äÔ ÷ì¾jK!}t‡l6[Í‚'Ùb‚‡»ó¾Û÷Å•½¹hKþ|z`²½¸ú£Äû¡öd‡Ó§œýùH¢Æ«Ç'rzúên"AëYÈûqa¡Lú)i„%T k/+¡¥4”ò‰Á‘†ˆ/C©§SYI* 9Ħœt©ÔÌë OïcÒ±+±üôáêú·²}}¸útûç²Ù#Ûaˆ <¿ÐaøìdÖ‹)`C8Ty’Æ*d¹áÜà^b=Üþ)zŒ\ç!•tYWù>,n{‹=$øN˜KA¦àÌ›WÀÚ}ÇŽCŒ±Oo¬"š7ÿ¶¿XЉÔESrÈKÁƒ"™)X»Nn•\ø¨¾×Zæ1CÇwàI00¹ÄÄõËO1°]=™ïI€õKHbKij{ ±v×µê¤]™p}؃GO«"rTtäºg b†Jöô˜#–‘I/Ø×Ž=#MXñ<½Ý§Dùùâá®ì3t®éÕo{°TUâœ=÷ÜÆqÊëWžHИ•þšã¹OÑ­ò±Üý×¾}\¤ÀFKÞ³<‹ãwy'5{àò„݃GÒ¨º{Þ_îÿåçJHõ[·„²t®õì0jÅ Ú}UÛQÇ8!jØ…G·Ùï×Wo¿~z÷¡Dgg7IC6°î#4—†%û®æj;l îTž„£wüÍCEu¿´tíð€—Ú÷7Úô|íÁ›'\iöàá¶ûÞxz,ÆzLD(™…îÅ m\Šaø¶_CØäyÂY߃‡Gïü÷7WÞ_¿¿¹þ­"к\ô¸ÆŠË i€(܃;O0ô{ðpw»Øë»›O‹ÎO妟ëñ,ó99Úë1ÖÆQâÑ{eÛgãЃ§·@QE %ûKлÛ%uìþòöÓ?ï®îÍwZRêž­‹˜êî3#GT o*ŒY¹ßËHãv¤Ò„ý߃'‡-[Ë?¹š¨‡ÅXÙ¬%OqÙ8 §6¡oû\pF¬¾O÷£ïvÑQ¾|XŠÓ´ˆ¸îpK©EONá Ý8Äm{Îw½¹Np;ð¤@£ÌW‚.E„õH›+6F7‡ÈÆa÷]ÝKº}N &Dzð`\}äÞ¯|ûá7oßþ¼Ó„û¶><ØyÓòÏŸÿ¸¿~óñª–;*PXé`‹Ùɽ]Æ…„îV:n: +NXëv<1À¨+wO ‹ ¹*ȈÐ~ p»œ‡°´¡HSC©\ïÁUþ0aÉà®±tl‰ÃnšG,h;rŒiûу6ß×ßµ%±3IU–%u9897eW*·W¶½úúð¾8‹;W¾TPÆ0 àæÍVûðhÕKìpŸ 8¢+’K£Aj)œ6ßáÄì/L.<ºõ&ß].îa¾ò»7;Å©U™¢ßD¦9ظœ†”ú8¼X9Ðöë߃'Ò¨ç¨åwùM€??û7¼.MÉB9§µJ‚EÀÍUÁ‰ôí˜M°ˆ{ðdXÙ„}‰ Öô)Õ÷SB–ŒêÆRR¹÷T}{àç koßQ`¯Aß7vševX—¥˜5)QL1Av ¡”q%Ý¢s‘‡S²m ‘·_Ó< #ö³)Èrsñ®*ÄúÆ *Ý*ƒx¯](q„^7n;`ÉŒµíÀ£ºúEè»ç  V˜—zkLž½iã€yÈVÀFC“RRŠ>ê&¸æå;’C†<ÖméF¿‡ê''{FEN¬X`jâfYž<¢ÈÛ1ùùú,©.Y‘M ¸»IX#¬ÔƒèkKúäB–Àô¾}'"“'v»m?2îµlz"²úþadˆ™¢§£J–l#ªlFØ2•’˜ü©OÚØw0 ùx0¤ UÁsE¨.Ì–‹ìš,æ æÓ³EWq·c%ç}à‚wàÑÎûÛ ªï[·ò>HÐkã’è–[ÿ‚v §4A×÷ࡼánÿQïP˜õë^15e–³^ˆÃVg}u;ðÒK¿O·ÕwLY>’W}³h*•Ø#x§¼ Ê[nõSØÙ>æ¸ýj÷àáM+†<½Ð„úíµ GeEϱ×ò®CFë=Lmg¨õ¸t°ÉÎÉÈG²L+}ýÉÈkÀIF® èýÊ“‘«’~…ÉÈkð®NFÞ}ƒ©B_KQÄmû K¼”|¬ŸpÁLÒ P‘_W2rúÓ_+¹K:|¼›Ãødä>d‰ÏÉÈçdäs2ò9ùœŒ|NF>'#·'#—ó#QTtÏY¡s2ò9ùU%#19LâØ XÊÃÓ㣪]S¡°yX²O”QñêûsLk¦ÙYkÿ0ñ^™n¹¿üö«'rÕº\…0`=_}wø¤wP¾ºý¹ cpê×—qæìƹɬ—ZÚFIN.˜ z‚ʸÌ1ŒNN>!ÝŽW&ì˜<z³éí‹ÇÒ<Ò-sU’1•šöž`ã2ë05:~³wL„1oO‰ $uZoïÆÍ¿•ï0Ƥ x(ðÜ=øËÑËæýâôN:N€Ä j9…V2c6ƒ–²Ao8Ò¤3‰BqEðHUü±eavqÀÚÞ—èñŒˆ³$jTK }ijysÄ6#[‚†ó„dÛ– á†3#ÚnjÇ˵V…¡Ì•Üp:ä[×H؃8cæ0Gi˜¥Î±‰¥á dX™43h£G£’F *Wj•Ñhë™ b–ÆDР5Àf)«æm¤½E ž¬Ç®¾O“ГC I}-$!ÆSés Ü1óÙ%HÃìæØG¥ñø3ÇÏà[œ¬IöäQ%‚(ú–v­Kù3T…zcb.%Ì&¹}¤ÝwT¡à­ò½¦P“ð³GóÍüÃI`•v9ö Î@¹ÝH 
³$šÂ™RÁrM[KkZö¨S:e熃 S¡mND?ˆA)m¸yœc%——È ª>¡lÙ¹¸º^¬Y(L ª“¢ö;XÃÐbÙ®oˆ= m_†uùNµ3ÒÇ“ƒŽÑAcœ/N-Y 6XùÒîµOšÐ žqyÐÐ`Q󜘶˜£ŽÚ 9™pSž5­†zÑl‘¥â…?áèÕ9A¬ÒTÔúÓV˜¢½4 öw¹†^íÕóý¥º^ØZƒ‹4Ì$§ÍÂÖSÃ%¦ n¹ÌŸÃ¥HBhÀ“âÖª² ^¸Z!0Hl˜†Èfš©ÿ …–°ªy·S,r5_ 45àá¼Mܺ.~/L­å­Qƒ=¨xz$`ðA¼ÁÌÐà{+Ò”ˆ£&;¢Þ ”bR+´Ïò€qç ÕÅíÅ£µ´j9¯œdì:ˆI"pé”ò”‘0·¨s rŠ>i–³u6ò†Ô`ÛA]Š£á &ä`2m8YH§Ü[iN94Ü£iYq‹^±GÖ,Òpzd>鮼 Ü õ73Jµá„àˆSÖ_46h&&Zë«JÚ‹ «ùe±árFEŠˆ^ÆAl°ƒŒCƒz3pÌZÀ˜ÕÝi6nU÷åûëþD?5?øžÐÕ §‘’›r\ÆÅÍ쉾‰ áUéhk[æM2…WöcN¡ȤˆJ©õöhk«Ç%À’üãÏ`óXÊqðƒøY’ïl㈧ðƒÄØ€Gxµ•ò“p!åÃ~üÒÆ5{-]°­9fåÝ4ç,JeÍs‘Q–ÉGoéfòï©màj›ä8ºA H¥ ]Ëd(Ma˜uÚ€gí{òFoÐ`z| ÿgïZ—Û8–ó« ô㔜"©¹_RUÙ'v"»$;®JìK`%", ËQù±òy²tÏ.À…lÏ‹!“CW¹D‚ƒÝoº¿éî™ééÁ;¬"Vú@'‘޵'vh¼’9Âÿi›Ç'à]a,ÂGu\Ûá‚O^£ßÏ;…-)bm9ð`FÉC|D*Ìž8aœ±*Âó«³pÂrçM„³¬‡Ùl¤ÌE «aÐD8:«ø>$mO qGÌÀ7Yâ¼—&Â@»ÔÜÆN÷tj¬}Óƒ¼Ç»0éNùèS9zÓÃðZOç_B»WÙ‰¾'qVÁEtÖŠ, !?Æ#x¢o ”öžk&˜¥8sdîyD·¢yçéOO,“Ls'"ºŸ%Ëð(à3í4¸L= w*­l»jí—Kk]Œ_ѹO™;Öï pt"F>˼+ 8]œډîxN-³'N%"\ ²‰ÛM÷RÁFq-%g^Nç‰Ä¹‘<Ê–ʃM}î ¯…(«]ÃuÓæv˜¶.®š+qja˜+¼0ùFGŸAÈÞ7‚}›w%Ò¸$"D¸ñy‚.+µöfÒ&—8Ze^*Õ­.j¡"a"¸¾òöëT4ÄÛÑu„Ut:UôÌsºBàñ,7 '—×ÝÊ¢Vǹw†É“¿N‘³W'¢ `ZDÞ‚v2Ë"*Ø~\T‰Àãxn ÞÌ?”‹÷U·Â¨µx˜6kúNGl'M6¦õìTTäÎzº* ´³Y¬¡˜œF[g!RO⯰w£›óÑè:è…Zâx ‡>§í2zß8±$÷šN8†v,ƒI¿ˆÌ…L.£~´^ÞW7W%Q[çó>©HŸÏÆ¥õìTTTZú÷¦d–ý¡ÁÂùo¤lv7[-‹Ùäc·Â¨½¡-32ˆk-³Q1­g§¢¢‘&&-R˜V‚Ž:ÍXšªÓ ³¸ñRòðK+Gä ãÙA;²×óÙ$¬¶Ý­¾ÞÿØðcÏç@/ˆÉ Þà =½ïç@v¦¯Ënú-žÐ…,a)x\–¢ò(E۽꧰Ð.n*¨¡èªr½çn“|ŒNéH:¤àq¹-Å-m‚d»g  ï"6œ,„í„•ylE ³ãaËåÑRðv|º×>aÞ^»‰2ì^ˆÕ˜Å%9™! 
íŒÉn’X¬1—Š92’Æv9V 4¦71CžÄv\f¶ã¿¤Ú½V€éÃS¨^`šñ—æõËjg¤²œLÇv9 &à{žpŽÀcŽOókßå³ë* Áng qŽÄd‰lÇLn»Í^€Þד"°]Žjˆ+(§=G±ÄÅ쪾x¹¹ózS²úNŸFxµìVõ§­¡ÕÈ´ùúúbÍóÍ‚˜»—w FÆŠ)RÌÐŽñÃØÒ7Á!–QÒOC9ª<Á{$n@ó<–ŸÌD´,nb·ËµÆp^ä®9ÊÚ$[‰ûãsB¿ Ï@Ž<’Òll}x3ŸO 9w¯Åa>œÖ†\‡‡vR¸ÓÙDÖ' v–lð$;•݆Qö·Û3;¼ÂÄ;ÒþB;™>ñ¸GN'tLå`G|ig ývãpH´ ôƒøºÓ²³>”£àâ‚©&átlŽGírT¼MÁcí‘Y­-™QiBTXK[©_kìðÃ&”°I®r¨1Oj`#™«²˜.¯FWåè÷˜1±ç’jÛm‹ü“¥VƼÄZD}\g{œŠÇë9ˤž‚‡'nÊÞ’ryU®ªÅjZ¿H×} w–™£K6ûPƒ7‘ Y)БÖfP×f)om–ò/UŒ £´ wB;ÍníÓèý<ðQ˜ßo–ÅbYŽ7r+Þ“)î$V7ãbYVÛ×õ9‘&zÖüûd8É<ΖIp’µêŽl4ïúUä‚é«ò¶ÁyÝbP¾/¦« ¯³Áe9*VÐV]ps¡ø“Á¤‚oŽæ×àåÇåøùϳßgó³áà›Í·þ Ï+ÇÃÁËùj:ÌæËõáÑoæÕjQ–óFƒÅ¤ú}ð—ùì?æ³bú„¿¯Éó¦\¾SâÓ·ðHüÚ¸¬Ù*Í«Áw?×lýá9ôk1A–á»ßÎW³ñWiP€Æe5ZLn°·ÃAh3xYë²Bk3ªÿ ¿æèÞ??ø0™NØ‹A±TÍSnŸ[ ÂÅÇÀøöÚl5l¨.zÿóëWÃÁÕryS Ÿ=›TÕ H±(ÇWÅòôúìr1ÿP•Ï~xùãËW?üüõ¹ðV§!Hk@“×ß~óªf×ëòr>_~è¡Goš™h~š€¶h‚m׿ýe›ÐÏExÀ`¼Z³š¬;?^Qƒëúùë¾\ôŒ7Žß¿üáœ[&ö½œ[å‰È¸nת|±1¡î·ÁŸ'³Iuu¬ <5N]Hïÿç¿«½zåÎqµÄ7P¶w­}f=tcpž ª›r4'ß•ÕYÇœ ÀSnŠOÓy1Þ‡/P´¯’B±;èÄ*_—ÕÔ»A‚¦Ml†þ=¿ø9Hüó‹X‘e9ZÂð>Ðÿ^gX;Ÿ|‡÷ÞŸüuU|º˜ÌŸp¡pùl>º9_”Ó²¨Êª® ˆµAråÛÑH]–Z3¯ Ò7NY–Ì.Ëñ[¡ËQiÆ®ä²Ovɸժ(Œ“z¬ÊÂxxןç‹Q9|[L«ò}ÒQR["í©–¢Ú;ÓP¤TK Œ\” xÍ" ØÑj±À™.Pà]¹üG°˜¢†‚Ý‹LsÍ"ÜøÖ.v EpÞœ÷zµ˜Ï&kžðœêùÛI9_|³X̯&Õòél2ýªiðüO¨}üê/¡‹o§Ÿÿ¥œ•õòËPhk:|ú û „'~õ”}1Æô˜1Ͼ: 1âÄ|6øi¾,¦CˆçÎÀ¢_ßLÁ‡ì, 08Ðó —‹U D™M0w®V߆IßÕÕð‰ùåçºxy9úA==oÓ®¸Ÿ¾*ªå‹ù»EYUÃå亼øb$xyøýÅꌶ³—g Qøÿg±®ð³¦Ý«9øDèÆ x ¾¿®¹úù~ù†øÉÚ#ðZ[†],‹g¯¿}óâz­€Gšà³rZ ÿó·j‰~$hÿ ã c¼–9êõÇZ½5 †ÈŒW_~úùͲ¼>i>Ã?—cxÍ÷'È¢ùCxÜ­yþk#´_!ZD¡Á§—¼¬h±Þèò+« Áàãk"ª`¾Ö,ÇÍñEÒY3Їÿ'ŒÞ]•*7# ÈO‹bVMpTÿL¯‡ þôùC1a þV{Íå%×åå¾U~\%S*ñÑž!MF`6#†Ü!W%ûê€ñ²•Û¢ÒæãO&…¿p›@¹‘åÝïþf2oÛBïÿæ§ÏOþy5™"3Ÿ¼„xy>-ñǯ7‹‘/ÃD? 
êz 19 OáƒY°"øc(¿øñ;üí‡Æ ¾š¼-GŸFÓ²ÉìÅ¿]`=—7Ób^tëã« _õäòo8+:¬ o¾{3+nª«ù2üº3Áøö/›\ä#»ÿ°åÝÕr‡(þ‘èO+œCS‚yS¯ëã·é$Qû‡ÒÍt2š,§Ÿn @H­6o{#¼ê8ÚYÉwùYÆ~\—àÜaRþ¾übCo{5Áý‹OƒÑÚÍ=C£}ôúÏÿŸz‘Á¬Êó?mÌã·µÂȆmœ><¡ ‹o ÏÚ?«ÆïþëjG˜Ú+Kw6Ðþlð¥Wn¹eºŸ¤]íë¿|ÒýêáKSý¥;ÚC`Ťg^“Ó¬TœºŸsU2©+òô×w§áI-|~´d[ÉWžªpÚÃlâË^üòe/D»ØÞ‡bˆ«Z¤¿ Võ Þ LOp6z®`V`؃ìóž÷+ƒWÏjRŠJ·vO6óg¿cªÀzðTò ˜í í®÷Íô=f+âØhבmçqÎüã¢\/Õ`æ+ÉîîŠ5Ë3ç£Ð=èVc8ŸcpÄã‘}TÕÜ£ê­Û{ÅV™Nyr¼À1E îp^’嶛ѣ;ºŠ1#OO‹<¶ZŒË·Åjºµ¾ðm~½Ž>€j“Ù¤N¿kd7*ª‹µì.6’…iNdw¨.­ÇÛ¬%e楅ØS*…ÂóÅzM‚9ïûh¤=½¦Sð(Õ—¦kóQ]ñ­¥·Wéðîn™:§œWвÐîûxRºÆ#W,ƒCLÁsà¡4ÂX>ƒøt1UAxÝCIã9^¨B€UFi¥zS{¿ÜMè„âä<’÷ݵ –P»óÁ«S{£ “WÕñ§Ž$iOHÇöµãÄ{³XÍÊEôÒ­ëÞ´ÓÎZι{ í´Õ}[î--¾ FfØHÁ“Zé|³]ƒ÷X­z[ݽ `Œ–a™í—½í1œ€­ =‘&Ã3¼G)ωÀ“\I¼]Ttxw7L\ïÎK×½GƒGn¸ã¤4a×ü°‡DZ& ²<ƒ2ðXìvs¡|²9 òì^“±Zsk=£ò Ó☷dz1+7ÂCxÀò•,MÔ}ë²øAÒ³avïÔ€Atxƒ e­3ÖøÃ†öi™ßç2X‚<©™C»ë3¥È²{ýÌI¯ Þ®L`‡v*™ '§rz-20!dýgÜFHÔwGÇÎ3+…$ó.¡á¼—bc=’9¼Éh–‚çÀìO08Ï™ç¤hdÞ‹cKŽg‘I³’,#„ƒéžÓ/Õ*sâñ˜Ä”¤âf‚ )íè[t,,=ŤtÔâ«W‚õRó¼Ë–Ò…Y¨)x”L!7¢<Æÿúî9¼]9£È»Þy™|¥ÎA9Ï2ã(ï‡e§¿2 °=÷ÿB¢09‰ŽëÝnée·ðœƒÐ F6³„H¨Nåd<§D]&àI=ô3žU[’1’YAm׆vÂó>2ߡި þßJ­æþôzÄ÷X+´ŽÀ£]¯ “õEgµÐºÝ•pV i5§@:«TêXLd w&¸54ã2ŒCÄÓLãh<6õ|çÛ2bïpù¹-!Û)!)ABÖk*Ä•á¶ó~ã)%±†cD2SÝN±ÓëÞ㙦N Ôxœ;EÒI[vݾl-Nj8J@ªS5| éœç\pMªóæxu£9×4žšCô¶%×)ô6ÂqOQ Ú ­N2‘ç€P€Vy¬â&ì ¯f^ñ<–hÂÞ_·{R I[KŠ&R&50™xñ`2TžJãs¾&³·‹”µ »5[BòBÒÒ…“g–í$÷§šlÆ3,¯bg ù…ܺ&çÝKcÕJã<¸A;îR­òálŒG%”ΠÝ<Ú§Ù°ùÔ–NgÙh}a€C†NMÞ Lž‡œ’l˜°®ajdhàZfP+¼Çóö<ž÷q0ë€ÅµnçfÀ™±R]g"mòøM¦fo2h8OrQ«Y¹ ÙPmétoÁœ ø/Åh§LêÀ=Ž FÃ1:ÃÂ,¼Çc`!,⤬9nRlɦ{ÑÚIc@’"´ÓLõrέwÛÒa{%1GwK¬BÞêÀ_A£ëXЉ=òvRxZgQf4ÓÛ…[»Bø©,®«sl‹5§0Þï”$„²L3Òì¹ÎrÏ{lg¢Á8¦eµ&à1*Y­ÿUŽ–[ÒéÞ¦ðÒ¬YN-{@;.Lo?õÁ²è6ƒsMÁ“zÒ,M’·¾ìV–ÝŠÇ\eL§°;Ëì÷&¦²2Ö’ð¸dÕ~ü´%›® sÁ¸õ04ˆãyØNËVõ¬g“À™Öx<;¸çPX‡DþÙÁcÀ÷sv°AZë~v°SÒðìà1x>;˜d¥Ú{é'9;Èå…–bÏÙÁ4¨­Àâì`z'ýß×ÙÁ4éìO†ìÿì`2ß¾û÷ñìàãÙÁdzƒgÏ>ž|<;HLñ³f+7àñìàãÙÁpvP{%ãÒP–Þ'oŽŸvÑ3€—˜-A‚Ç+sNž‚Þ¥µxœÊ³€¼©î_€]_ÖR„T|ÇáŸ_¾ì…1Ì÷}dŸë`Öîˆ3u;®OxdPŠ k¸Q×û¦öÎk'¤'Õí¼2"}줭*ǃÑÜf xDâfJ5ÝÂ(Ø*k(u§|`Xr#…§X¦P*“ÉÐ¥Δn´Šu•vU­Ò®ìØ—*“ÁΦ౲‡ã‰×u¨ÚÜr[cyVl^ã"g½ 47jAÛŸáú¬ð¡ˆ?}zž_„*)2HN‹6 ^2Ñg ° 
î`K¯l„eðÒf‰¼ÒVqNãQÒ÷iýÏßË«f)JãZzÁ#F‰æ¾'°ÜÁ×ÞÅøW¯Î¢qƒ%‹"ÆìK¶Ô‚Çò&>Bñ®«rÁ1ÿ.Œë†Ž`ú÷"Lj÷Œy°Ÿ¤‚vÖžNÿ狲7"6Ý4¯Î…‚FÌý)h°êl€¾xã¦û"NEbx4Œºä¸n§ÄqõŸêuÎg»ýª¥( 71¤UB$D€éøÖ»fLÑg²è];ࡤñè£æþ‘âu”ú¶LEŒ#ýq€8˜³À*±žŠ÷æýg lêÁÐ;¥ºÁì‘,±úç9ãüÙ»Ö%7nåü*¬ýqJJí÷ SªŠ"Û±ùX%ÙqUbÿ˜%gµŒ¸äiYVù±òy²4ÀËrwI40BLκÎ)I$8ó¡ñ¡Ñh »EŒÚ –E ÎÿX„mÇžÂBàcO‰-áÛ<¬\#QÊYžŒcAk‚Q@€EJñ%€ ͺé€X ­™à–\…Û3Tš2LP°à»~hÇÄ¡´ÀÙôòjÑ,ojž5îÌmî…M1Vh.”Q8vÍHvÅ€‚nÍ[ÁuCy†Àn@’](ìIWì“9âC´ð)1Bà]°FJ} ØÛò…+¤Àgˆ „Ñ2˜–’ãüe¬ç)»Š³¥Wv¸Cäõ|pvûatæ®Ïǵ»VìeÎ1¾0A-'xXÇGì­ù©»‡wŽ9Ÿ€÷Ê"&'ãŠgÜ´½Àhãûh|? ½e9v*]ºÐš=°'$·dYàÄ0+{¤Õ”Ñ<©>ŒÁ¢™Oofu3]Ìõ°¾MFËb kq¯¥Ýœ¯ÿv¾¢óÑÔ \bœQŠ™ˆ½8SÑ'Û™·fŠvEpË’iVf]2œh1;µ¦Ù²¹®Í{›ÌçÉ,¡œD¬ ÆÄú<[Cl;úœJ$¾Ïa¶ŒUÂ]ÎÁˆ}'Âfý»#9;Áö»ׯ°ˆ“|c¿ `ë‘gVbY“—í´-2ò\ƒ$A»Ôâ¡:ݾ#«4æ×ä®V²‰`¤`±×["µYIÑÌKĶÈÚÏ¥11álºp9ÕÏWRÞ{È®eذ` Á,¶Æ»J¥ÚæÐ­œp>x°€ S“[÷XÊ™À“êÞˆj#©í36aÃÚ@C…5#¥ËžcRsFc¥1ÌíÞ Vâ{4’˜<‚äšékéÝ~ň4| Æ2,=Æ‚vʦ©ëFÙhZˆaLÁ#U†³žÇ*ñ"ôåjGé%žGÒÕ>Ú¢=1 5I6e—¸ }°΀ðP’­f÷C{{SGp§!¾( \m*%Ñ{jÐŽ$׫,Kð„®PZ@u¤àaR»Îã… O2Ö€0–cÆÊh£U¶:áYYß…@VàŒTHÀ£3ä> ‰I~¥ìËe˜yí¤âRmh›¼ˆ·(—ÝÇ|™N¬¿‘k-Í>ºß O&W‹—J^¿Ñ.éœmW55{ã±J[` HÀ£RSží>†¤FwJ1|eÈpm,¬\jh§Í0âãnBOŠ„¤àá"—Á˜hry×|xžë*GPô Ò¸`Nž¥vvbÇÃÜ…ËH‡\}íMîÕª?;[ýÞKŸecO—˜Åîð‹‘GÑã"2‡$Ï`zvFÊ)ÏÅKøAزo)zXÊŠ6¹ŽÇ®!aß]ñ2Ù8Ây—ìí@#Jk<žTÚþÓí¶v¤ÌF+;§Fë܉BD‚•Æ¡„’×fÚÊ_e"/'w #'ÇE ÞõbEø´¾3>Ñ:qÃA¯Äw@ªãðÔ¼Îmã÷oFßnaCê¿ZŽ&[ ?pTg›^£š¥ÇE5k¾4ÕFÕ Š›‹d4úè¨LJÑ‹F^*³—Êâ_*ÊûHijy¦ýWR\ì;‡ËÊß²ÿb EJ·YnM*íϸ’ÿvmJGlç<4õL|‡¿e‡@ïœ,’„ÐÖh%”D´ ùbžsS9¡%Šø¥à±ò‹é‡¥Ï UV(†ÄòúvFÜM«ÁoS/L¦`a›W3—êfMŠê·j4v+Ooq;¬æusy[^Ä&7П‹ÕŸ'{Ài¦9Ñ88M·v,«÷{ò bvËí¢¾kp¶lÑ««Æ õø´â­ L>qNÕ¹ '½Q¿Lo`{9¬‡/~š|˜L?Nú½¯7¿úžWam.ÆÃÞd:_?ýûí´YÌêÞ|º’Go6j>ôþ:üÇtR_þ_~¿L…õ®ž¿Tc'ÄgWðH÷³a âº/{óëj2mzß½é¯Hvùáôk6rÇÉîÝW ‡ÏÓ$Úà ëf0ݺÞö{¾ X~,7U&ÓÉÙîç0—ÞC›¦÷q4÷\/zռ׬žr÷ܦ·N‘¿^ϹšóÌàzûºß»žÏo›þÅŨiÀè: 躚ŸÃ¸^\Φ›úâ‡Wo^½þ᧯ΘÕ2 AZëš¼ýöë×Kv½­]F´ï=ôèÝj…(OÐ=š¸¶k›o2…¯¶Hýœùô†‹™W¾ÉºóÃ…EÕ»Y>Ý—óÌxã˜ñý«Ψ&¬«–b[޹ 5¿ö¾õÞ\wÕ¡½gB°s«øÿüwó¼3Ô­ëÕ+¨°Ö=Öö+0멃ó´×ÜÖƒÞ8ù¾nNW‹ði¯š {·Õ§ñ´îEË%‹A¿•‹gÞefûªnF0¼´~Åß8‡@ÑÿFÏòÿürZd^æ0½ú'úß—¿é/Ÿ|çÒöOþ¶¨>Áj{1˜¹…ùbê¶üõ¸®šúŸšëŠI’«¯qYK Ʃᰘ2ÅëšØÁe=¼b²ÔjhjÊkNØ%t@ŠªR†Ë¡¨+eá]ßLÁè_Á¦±þsŸt\n[¤Ò¹oR¤´”€Ÿ¹N&°jV~³™s±Þ×ó„ »ÄžÛÈ:ÁîAfàJD 
»w£ÊÓÂ/Þïwôz6ŒþØ2Üà9Í‹«Q=ž=›Mg¯GÍüÙd4~¾jðâ/nôÝOö]|ç?ýü/õÄ ¹M¢]êÐþ³X±oEÿÄçÏÈïbbÉóSo÷µ §½§ójÜÓê4úÍíTð°ON½±“í<ýùlQQœõmýðm˜ômÕ\÷OÔÏ?}”Õ«ËÁâŒó}ÚU7C%àÓ×U33›ú$Býùè¦>ÿ >sÒ>íù¿\¼‡ÙvÚ£ü$ ºÿ³Óž‚)£Œvùv¯§°&B7^Âðýí’«Ÿ¿,¿">Y+bØQBG·1 t5¯.Þ~ûîåôZÀ4Áÿ&õ¸éÿç¯ÍÜ­#~ôÿô21†k™»q}³Þ% úޝ~úùݼ¾íŸ¬>s_×CxÍ÷ÇËbõ…ÜyñËJh¿€µè„^jð²j‹-ðNO—_œ²z ¼>¾#¢ñêkÍrw›}æ¥tºšèýÿJïñz*¯f˜› ?ΪIã·I?Ó—ÓÄýíóÇj<îÃ4fô vb”_RY_àWõïó>'BpAŒ–8š @lfŠ{˜Óv¢&Ïÿ¯¶Üi[TÚ|üiÃ$ÿwe(¯dùø÷Ÿ¿žxOÝö7@èý¿üôù䟣±cæÉ+°—Ý=.øëW›S°WË1ð™®·`“Ãôøä?˜x-âþº2”_¾ùÎýë‡Õ2øztU> Æ`GûÍ¥ûî¦í9¿Wÿ¢»5¾©œâkN@.ÿævEíºðî»w“ê¶¹žÎý?½‡r³ÇÝ‚ñÀwÙ±ûë$ÌEñW°D\¸-9&˜wð9üÝýõ²šÕ7õ| ØŸn*ÝŽGƒÑ|ü鎈ԖêmŸ`,0I¬àÛ™íJ`9 Oj­Db^Z2,-c(‘–af3ÚXÑb"Šˆñ¸­%F9!$[õ³t Çò°mÍáăº@;"MÞ A×xTö<©e¬1uxÏn ¯~ÜXK C*ÀûvJÐv.’2ôŒïˆ>|xMÊ3íÂviK›à–jK A`B;’|•/#OpRÁ£vö)ȃ¥M¬ ®o§¤Ìe˶ÄxœZð@À{,SÚšœÕ“áít4™/hدS„Áî í¸¹§~Î&ô@È›Ù<ŠdVõï`>Mªñxú~<š|¡…ÿGìê„0›ñ6@ ÇcÕ%¼)x$/blU/Û!Þð±Žá.]Xw ‡Ï­!r:¡¶Ä‘‚GçÞ ’Dø`X¸GnK>´úüŽïß:Œ yKz"å¥Ö c<ALrÍ`?6׋ËíÜ«,¼¬[n”¡Ja*­;’¹ÕN§’]Ѻ&ã™uÍd:]–·¹ ŸFZæÏJOÌ@·¿@p~&@²q×§8ª6â_J ) 6R𤞿ûXŽf>««›­uàb89…?ŸNÇFsç Öë6ËÈ&CŒ¾Ýv~‰(kãó Ç.0xpîPç)Ê Ÿ Hôø£¬»€Ïe@ÖúÈ£¬ƒ’>Â(ë.x;GY§h)C¶*"ÊZR~®ˆÜeUÙ㊲ö¨`2‚âè·ëAþ]DY§Igÿ…ÎüQÖþÐ[!y²íS³§(ë§(ë§(ë§(ë§(ë§(ë§(k,ÊÚ¯ŸRh¡#,±]2ó)Êú)Êú¢¬˜‚*¢•ešäöjgðÅ&uA²ƒ_ºIÃÃUfï6ì 73ò”ˆ<|nÂq×ËvD去vÏÕŠX-*?-·*’ä»fö\HrºÙ³·wµ4eR#H¿ær7ǃt@¢Îr.öRf?'Sð¤ÎÉKgSmÔãÜá\kNˆÁû [”„0QÀ”‚Gf “½®Ç7ƒëj6w˜^sçÜ}úXža‹Š-ago°SHîâÍx÷à¸.܇*i‰¡OÀÃX·L/Äö(»€Ð,(=ÁÁ.!ŠcD\§8”`jJlsx<° 6¹f>ŒûÕƒyŠXÃv­0JSÐ`Ø‘´“ºk®› 4އ«û˜†GäÉÚ#D¢tÉh©A5´£LgÓÙ¹›Òþß<6ñ¨ð±Ô>m“Â}}æ¿÷9çmx/V–¦ôÔ-q&Ofœ ´GHË•q¸ð¤&åÝ<üHŽ?ð’ ß{Q\»Y.äÐŽÙÔñÏÊÔ¤,.ùÆ<N4ú¦Õb~=.iù]¶0iÃv±2ÆZF(va_¹ 6[{Øfr‚Þ­p툎:5V話{˜±¸áæÚiV€R>]£§†®]jàiàÐm‡[ÇUZ@ŒY)‹ÀJ©vÏ Õd'7€ä®Î>q,§¶¸sÙ‹<ù| ]€Í¹þÝÊqHŒ$ì 4JÙµ#&ÖSy˜t%°ÎBŠè¨ŽS_U_ZRÉ¥˜tµäüí– xX8Œ“’ViW¿v.œF౩‡ðK©lß’6¼i5°Cä–)lÅ1TÚýû”¤L£Î ¿¥1‘K1ÍE*F hB÷X)‹À#³]ËÛ¤àñÞiàCµÁ)J¡(µ8j)¤HNóÒ‚vRhâb9q<Ê”UïB¸”8ÍDCgK…·–lcîâ¨1h.G\>ïhgÆ%À¶LÙ<šdËÎãÖ,·Joƒìeø`É *µ¶#'´Bd¹ûÍÎl’¸ã–‚‡Ú|ù^âxÞ©XïtÁr£ûvRå;éÌÃÅxìŠðk¦àa9Ã’@tû3* CX7 F rµß·ã‚Í•ä_ªŒÐˆiÙnK¢O¹’ö$Á Hôøs%uŸ'WRAZë#Ï•”ôæJê‚·s®$ÿrø t„–2ê ¹’¸ç|_ª¤%RØðÓ¤Û†çQ¤J²ËBêŒ"î“%ú­kÍ©’–ÒÑD"åèWRTåR%¹7Z­ ¬3J?¥JzJ•ô”*é)UÒSª¤§TIO©’âS%ùu–kξ[¶TɧTIO©’Ž*UÒÿ²wmËÝ:öWRç]n’ 
A0_1Uópž¦¦Ü¶ÒVµoãKrò÷nɶÚ-„Èͨ’}^Òíƒö^IÄ…‰ ‘™c” ˆê>SÈ8ÿ# ¥Ž^æ×çÜZ|ý’wˆmŒZ(+ŽÙáç–¯ÿþ”å¬é]õ(ÿÞ†5•$Ey¾dÒœUð"X{¤æ?ÇK¢5â6N1wËë˜|ÔäG¦˜qû$£&ï,‹Üd•¬Œ'™Ô½ç‘TÔ¡ è¢8ó,çÕ=/º®e R ó϶OÔ¾¨nò6IãMa¹IÕýäªæÓ“¯‚Û“ËJãkxˆ^Ü€²ó-úþ=7Oã$EO!Ø ƒöÌtî ¢Œ'ØnùD×÷ÏGXìW›.„0Õѳÿsí¬÷â&=òÁ+>²[ƒóO3‡ådd×Ùq}º¢>= ãë9JÇ]–3Ôa!W“¿Åd7é,w¼‚j¿9Îß¡h+ðDÓ°~?RݦT¸Õï›õ;£¥l›zâ §G”b–3¾ßAÜ‘†Š¤[¶UoÙUúÜþ¿ïZ=æŒ(6xMØð2¥í2äֿز¼Oæn˜JÇ„EˆÉÚ&Yþr©O †Õ¾Ko_}ÒÜ”»Z_ov™¦lÑf M$c‡åŒ ³,ýNTU $ ð¤hð ïº\or9ö»M~ ~^¿ÝSËþyd;)Š^–Ã6½“xc°hð i«×³3޾¸í )[Ä8Õ %1è³_ÚõÝ:µ~É °÷4x ïáTƒe/t®É%é e9ëScQž®*p:3àö¦ÂCÍK<'YÞM!ׇµW6’#%—¼t׌¹Zô]é§S9 8Ôxbg»þåò›¼Q–}ÓÙ1ŠèY΂m_æ|ÕÀ¥—: m¹¥Jí=¾~e+yÒ]Ù¦\äoÊÅ@:ßè{0U?Ñ€©¯Ç¦[FÚæþ·§Ëç—§×)Œô¨*ËÞìys¤ Açû;z3Ï¢¯¦­ìˆý^G`\©»íï.?Þ Ñ” â”Å`¢d€²\„~õöúЕ4D°$cO#Ü9<ú­ü)¸ö¸ p090Õ¾ƒŸ·rû=Íä¢n?êѹ²ƒl'·gC-¹¨‡“ K=û\Ô&ð]rQKtÒç‹ZÖôùå¢6ámÍEÝ~<ø <ÚíäöfçÈEõ .ùÃɨ;9¼¼b·žÎ*u‹ s<±—Ñ£qÿ¨dÔí¨£®"µÓ¦aɨJdû¡PK2ê’Œº$£.ɨK2ê’Œº$£ ɨùüäºc'³¹$wZ’Q—dÔsJFÍÄÌÆS ÑX7ZëDUÂŽs¿>)ñP—·æÃov¡¬µÜù:ÂÏñÀÿþ„ö¢ßúä¦N¿7’IdD-aÜoÑÝ?7ÕÓ……'§f¹øºKPÊÙ—}ômŽåzÔvÀ¢PàñÝ: ï:OÕa*êÐzp‰Ð€™å,õy…?e%+`ºãÑ"ý¦ZƒG;ÕGB”~Ìæýrð§«Ëë»Íý¤O,ë3™œ/”Dü)Û{Ý|#W i xR· ÚÉË}DÁè z2$À.÷²}"ëºÒV1op~hðØ> ðNSk,«5·ò°¥GP–sê -ó0¹187€ <Ðía~jXrTƒ¶¨A€\ÝÔ‹ï4 ÌÄ>]ðæà®bäüüLÐàØf쓱Ìa›ªl>Câ«u(!M&:ì¶ê›H›¬o¼!3¬0`²ù;Sû˜ <èúTiÚó(d…¹¢Â|ÎÖ`%…±¨£¬ûÒ‘!„œ:çe¨ÁØÒów"[n¡O >&þ{;Áƒú+[ǹG±3A´EXÎC¯2C:2bÈÅd„4`†ù;ÉF‚ <1t[½YQPTTà3×S i*³\'e3÷BÎ.KÖ;0á€C˜¿“œ%2žÔP¤âÝÄ9¤2[¶`s_\2N\®YÎt\®ì££1`ed1 8dù;DâÁå0©ûÙ±N~è+VvÄO­mƒI³XΆpzj µr›ÞE£iþ äïï“è,ËrÎö[‡*f¡-[˜Ñ¢1Ì™X'v+¿¸ËÈ>zg(»Ž·i‘BþÁVÎ8ì¸bO¢_^’È`Ù®0áÀ§E1®.oŠDýÒU¾,®®×·¹ÌÒnó+™ž¼ ¢ÙÎrûÝjÛ8ªýËXƒ§Cµ€ý~­_X“wë—›õëóê;MUKË>aò!˜ÜFÂêùr3œÈ'p”!'›Dó4ËxÍçï £`¼Œ-t[哦ʶ(Q´d<ŠDÌÉ:% ŸÎļQ‚¯mY. 
°»r?ÒìKxR7§ôóÕÍúúõöx[×Pvè¦àõ˜Ó<õ[ÏlLy§q(:0²pQñw¦RÌ^Æc­o_·‡ÊbMj+[±‰ooùIV2"XÎt,êÒLÄzØ<6jðøØPËëùOþóݯïÚúõ-°èÓùþózÄ%¯½0ÎW.´º•s6Í>ŠÎÛrñÈ­\Ø+R°d‚Iñ+hôü3A[À÷É- ÐIŸy&hQÓg˜ Ú‚·9túxÁ-ÈC{f‚BºÀG2AUPÉŸW[Ò ˜<ÍVDæ@SÕ¿u&è4jkÉ Og[ííŸ ºE–0 .Ó-²ý ¼%tÉ]2A—LÐ%tÉ]2A¥LÐéü >ùXqÎú`—LÐ%ô¬2AmNM$ïÈŠ†¨/EØÍ{ªÂçÐᱟ„¼²…²ârÍs2‹ÙŸ“\@èý™onDNT·÷=Cö§»ŽmŠtwì¦NѸ„("%Vhl)®?Ÿk¹~ F¬mëÊÉuµD”Šª±¼ ’³A‚Âr¨NñìºbH£·óO¢O0=Ó»áZ,k/GŠÆ ”²œä¼ú­\˹z,ü€™TàñÐk&ËÑhŠ*th]îÛ$9 Dc\×­Sا iþ×àñØ55³IocQà[WN1˜äÐc·Yobªspö*ðDm}ôÝËPI}¬»§ÍÕ¤7[Ö°:ƒDQ€RßÞÆÑjÜhüˆùVà Øss/õ“°eƒøjbmr^;i{"tei=Ð4ŒÖ²Þém—ÈXRß§¿g5º¢}CGçP€Ír&Ù®{K¨­`Çið˜s©t‘°eSØóò%f¢ô Îr@æ´…> aëq{p¦^Ç›v¿ÓO®ŠwßÄQ%g¥BQ©!§8/U,çý¼»@55˜ãË^ƒ‡z÷7PW¶§ŸÑ8,!ùΗyh«E`(ð¨ó{w[êåÕÕÃëýKiK-*´ìÔÏi¤SØ“0€œFŠ]÷SH¬@ëæÏJÓáÑö´<¢<¡¶…+ÛÐ9&&$›s§îm€™ «Áˆ;¾Óî×ïOû'+³ìçŽ J¶/– ÐÉÉÓF` â4À3¯Àƒ¦Å3ÿýõëzµ=Së­lDóÝÝ b l®\@Úd¦ùyZ>šï1<á´Bïá¹ÿ: -”´ øä—å,œfê ‚>®ò/—Ïßÿ÷ÛÓåãͱKöCw¹<ïý[Ƚmþ(†! ªÆm’?»wìçmèÖþ|)ï17{/¼” P5›œ(ÚN"þN dSªÀƒfN'Ñ¡Jï®|Õ¢\(9’N6– ¡O] ŠÖƒÅ½„¾Òž’Ä=EóQ°§hðøniÖWÛÕÇÛËûõÝ6žtêp¼ûóç\W,?é$@-dN³œ nV¿Ô kFƒ> ðP)ð8ûZ§“ºÊw¹” òÊéúÌrÁôkÇÝ™±ŠA„ª­'—¶ÍG æ¹ÿùå¿¿o¶D{|ZOÉ©YG<àŸ ½H_;j~ýåzó<ÅqÿrµÿË»|óhÀ=^ƒ§_ŠwÝ<)Z¼¤GoIŒtÛ¯+5$¼ Ĩ!FO5xúùíøºÿøôðû&çæñ?9h·‘èëôÚ­KàÓ©A@D©bHèGPÚ)1¤ö™èÝ¿ùÃëüÖ¶º|Ü|yóÇ¿«ì£ÒìVù“ÊQàHõ’«¶5:#?™,õc#sVdIÚxDYå_V/OYí׫«ËI¿±3ucFæì4`°gµgXcMÇÚøï[óç]™D"x°¯³Ž‘   T¨JC…Z<àšÐ*·àÝ¿œ4žDj„*Lxg­íÙxUª‡6ˆ*<Õà sSåõ×›‡‡ïû OFdJôr Tl€×Ê”"î¢TlQˆ¬ÀsbÚÛâº:)ÜŠDI±>)‰Ò wQªG6äZkÙÀ _ƒGý¢*Ùy:žBž¾^^­øäÿÏŸ“ª%ÿ¨µyò+€[ãzY¦•ˆO'Gý˜Æì"Ö±•+ðX×'sbOÙ?VX +Dº€uX3WK—ùÆÐ@ êQ"7>Ôàhlo|²Ö½ÈŸÀÕŒÁk˜Ó|e„j†—ÆP£15x‚ïGAÍAäHniV±GjÑ6¢v<ÑŽ!ù«ð´43ô*ºQm ©%U»ÚOwú´;“{U ¹!Ó^'v·?m‹žSgƒÁ ì…¨^½ÁQ º"R0ò0†"ÕxR{‡“”.ºV„`j†àH¿mtÂÞ@Ïÿ«Œq¼×ãI=RÊ­. 
ÁèÇMµOD釹 È¥­UDj<©1môH¸þQ`9$¢*÷‰ù¬#þ55˜cUi°yŠŽ¹Òð= «ÀC¾¹?îû~*ÏHÅîÂîÂ8o&á8Êröb¹G´Ý›À9ø%‚ón/“i»w¤ŸZA£çßv¯|Ÿ¶{:é3o»WÔô¶ÝkÁÛÜvoú8± V±Kí…¾ÏÑv/G»î©ÂÏ=VÿÚ®{*Ï“ tÝÛ¢?Ð3áoÝuoup|eô²vüñ$¨þ]÷tÈÐ.]÷–®{K×½¥ëÞÒuo麷tÝ«ïº7ŸÄ­­8gÙŠZºî-]÷ΪëžË]ᜱNx÷䬉‹š‰µqtøÂìžF¤Ñ|bƒ]¦I—‚ß‘¦Í‰Š ø&9ÜÛ¬:5àË¿—u’ß’336àCw’½;vgO" |{b9{J¥¿Îþd\Äk$FG&Và!3O«§JÄ¢- _…Ñ€šå@_~æ•­ïM˜Ÿ<6öèøô£&÷šé8aÚ =ú%®²ø0SÏ'=Y¨ç/ÿ¬ÂãµåŸ‹-UÄÞY|ä5é|® ­³Ñtiût*UP“uóOºóýù.ËhÒœ0É”xõF!wh’ó1ôlùÓÎÏzèÁ˜tu~Üöb¿³Óî6ß¶þâ¶·¯ß6÷ï…qw‚«wÉIŸTÔ'PNƉ á2H8Ãz¯g­jœ?lG‡ÃWŸ¾4çýòÏ}ËH˜ïùJí¢ôüÆrÔKaëàövýR|â{t3€z<Ý Í­_®®Ÿ/ÞØ÷¹n4Åyó-àd‚tÙe93Ë}â„å¢A7 <ÞøŽ¥ä¦¬.•5Idù$]Ê}nÓéºunãl5fÚ/AQÚk@Ük4¿ •o%ܽ¸ìÊ®?.…É—]¥È;dJAŒÜc9K©gS«æÅ¢€î€óF…'uÞvö½ã`½Ù€†‚“qbìà¨ìIÖjäÑÒˆ¯Çãb/Ÿ¥FŸAÐ'y­ˆŸ ØÞ‹_EÙj Þàˆ‰¯Ç£uUmvxÃäÿ¾Ýön‰`Ëú$dex’‚ÉY.P?fGæÖý€k®Oð}™GUZ%5iºì@Œ@€à’Du–sz5¿ëÈuÍhU4x’Þ#’uu»¹~øã~ €û×Íúönõö×Õíæþûä7WÔ…|w–/Ï,¾kÃóÙ­Zðî¥À£¾®ÊæØ¡½7 úCD`“@Ä‹!F½“¢ƒ«a¢ƒÓ\ü¬6Ã=ÃKz §“A´Ö•QOr…ÝêäË‚–*Ààg7uxB˜éŽpH$¨Ñ“ ”4œäÂÏa7³¦NÍMx„^•ÛAì=*,é†GòÈ =ÿtÃð}Ò tÒgžnXÔô¦¶àmN7Ü~œ¢êáoåöj Ì’ntþHº¡éω‘mºaFÅç°FDÌ^çâDº¡N;80ÝP‡l?ÀfI7\Ò —tÃ%ÝpI7\Ò —tC)ÝPu΂¡%ÝpI7<«tC&¦G¾w@@‰À,gí\¯•'øÐ¸çoˆ¦ÃÝ|”Å—œI¯IÒkòÆ’ÇAÑÍû$©q¥×ÃNÆ ¡C5ž~½®vŠÛ9?_ìþú9æ0¿€ulÌ›t(f¥Nr¸×>±SVjþ½¼ÅÅhE¯~p ãŒY©.^¤”ßïŽøv,+*²®$ߎµ˜"ÍøÛºêë„|êðxj{¶}zx}Y×IÁ”µ‘-Hï:,ǪêaÜgÕkÀϟׯãÎëÿ\v?Z.j"@~r´SÒvއà²ÖŽ˜>:)çÿ^7Wߟó%à—]^_¯nÖ—·/7W7ë«] –5FR"![r’+dsU¾¾wÜdêaS&Zû¼Ák”i‹ÊtÀ¨È'‰¥,çžnÓHSÄ,le$þÇü=½ 59Þ¬¶yÄ¿fµ7ûùŸo•[hqLIIÁ,çœïsûíÈ^ú8àÒ£ÁCÝŠÕ§,2éØuœKņ$Y¢,gÔéÉÃù®açkðxl¶Ø<~¹¼½]]=ýùaO+û™€øhs$/Pu@æ_FêúAù½®¥´B/¤ê>Š‚5x"ôÉžçî¦éúËΜý›ß6Wéçå{š÷&yKbð Ë¡‡v“å„5£€Í€™ÖàÑfÐØú?·OëÕãæq}»¹ßmÌeŒç»n4Ft³œW' ÍÉÆzÜ딃¸“ˆÖ¡„,·”tò8eB5qÀ5Mƒ'¶Ÿ´7¼W¬6÷¿¯ïóë]Ö—/_kCÊö“wãØÀ<þ~’Îþëiýæúæ.ûÙ³òÑí‰Í¨ß§SÓ{Û "°­iðxå¶vy·~~¼üT:çÍ= ;(O/w¡Òä$çm_²jþiÆ–±µŸY¯lŠ®6w¼<žîß6òadMiOa¹Ü´W7©ÝXFøQ4xÐt«hpD_>ÿ`õ|9)µÌ¾óð‘â¬ôÌÉrüƒöe}M íˆ}[ƒÇ‡n—ÞœéwuÃ?Ê W÷“ãË÷ÄÈÜ3|E—¬;–³§Öf@ÒúQ8æ_çøNT™“Z«ÊIe6šÄ†¼€›å‚Oýnž'7CMÑŠK,ËÑ€è Êî!ˆ1Éxb‡×°7Íݽ޾lÞ(°¾ÿÆ>)¯KCÙ+¶qžä¼o}ëJP¾Œš`•‡ã6SÇYÏ2‚wNƃŽú¤§ªÔYæAò69gŒäÎOÞÜùúõ~khØw*<©yÉ?\ÿ¶zSà ßy'•}79-× z !數W¡šžü¬Æ͈—p l¯5±ï½;¨=aö™à­“¼,gΠü«ó3ú€`+Ð;‡ÿ´¼D5ð­1B…vŽœ#/‘¿È·^°AFû…C–¼Ä%/qÉK\ò—¼Ä%/qÉK”óùü$Š$Ÿ³?ÔL\ò—¼Ä3ÈKô†Ív 
ñç÷¯’&¹ýh¡NIHù÷BJSù‚1ÉQš³5Ø‹HÝ‘Þxä[§Œ4¤DC«‰MàxbÃS—/sK51©LTA£ç_M¬|Ÿjb:é3¯&VÔôVkÁÛ\Mlú¸çP(¼•³0k51î"98RNl‚Ÿ‹¾*áy¹í&Th\4AFCû[»í¦Qÿ?{W³ÝF®£_%Ë»¹ù‚@žbΙÅ]ÍB±u;šØ±G²“îyú!K²­ØAT±Ø:·k•¤›v}?’ ˆŸè‚‡%úŽn»á‹ì=PżÅËœ,n»Åm·¸í·Ýâ¶[Üv‹Ûîc·]>?£ÀBñíaÜ/‹ÛnqÛ]†Û.]MBZPã‚sÓc†Ôa/:ˆqö°‚f±ù6¾ÙÝæ4ë,”uBöº(ycC°ÁÏà œ!X~@s–„â+Æ´ÉòYo,—ØkÄÙdðަ÷ô˜¤‚ &vX <–Zuô•X^¹Ö³ÞKø\Òˆ¨] ¤j +@2ØùgZƒgDúõ[ýÏ—ûý~µY?ÄÕÏíã×!¨|Жµ–óVÑXÉ3˜Æãfå1§Rš|‡¹Vàa7ª–×akÜÿ™þ~÷ùE—Ÿ_Ë#|þPåÙwiÁ:HÂÆ!7ȵIRÌ8UžU= ePÖ*ã=솱¬2ŠäÙYÉ¡™ÆEÇ•q›‡’ àó—éÓá‰fôÒ~Ù&Õê,ó –ÉMáH€ŸËû@ƒÆ1\Õ`仹5núò>TÇ9º]ÒÜ*£²Ê+ÙZ’^Éò“£:µ¡-ëñSì°ÊÓwÐb¨ÀCÔ¢Vι<‘oO_6«Ý—õu.õÇŸƒ.Ë\€œãÀI†&ä‚u¡ÁÁWF´–¸›¾ÄsØÀÁâÙ»í ..«+‘ŒÀ[é¡4‹šTËiÈGø€æZG[Àóò1>¬ÌƒàÙƒ B&u—. ìs-Oø¢3óOµ·Ós¯÷Åê¶éˉc¥²'Ã8#+ªÏÆÆzìÁuØÄ5x¼oÒY£M(3s!*o£de¢G†Çö²ÖCŒ BéðÀô*ÿû´Þ}{Úÿêª(–¬MŸN:@é3³Î´i/Ò’” ø f¼3~b‘|uÉ´á»5^Æ6L­–?3À ¢T¿…ú… ûä"ÚNºÝåBA§omcª£wn~¦kðxÓvo;{Å¢ºpE\&H»ogCƒ †7C%œÛ¡âžgâ„§€× )ê´•TD_ÖX2~ñî›kEÛxuVR­btÞò4x,ÎÝ×›¯Ï¶Úß·ú±]¿nlÂô²5ÒeVÂʆ¦xþÇ3°"Ûò<}§¯rû`@²‚Ò8“ø§ ¦ÃÁªÁcýLk÷´Ó_&›È¥Y ’ƒÓÖ›Õ²± ïáÐàQ;}Î6%KJÛ쯞Õö¶Yv$ž×aÈñ­¸†|ð]sZ5àâéû×’Óz&Y± ÑËÏi¾MNknô…ç´5}9­SðNÎi>ž Y#Ê»Ôé³ç9­äÂU°çrZUPñ¤wáEä´¨²kBèUz@ùï•Ó:HMhÑU–]¿œÖüEÊ•è°bÞØ/¥è–œÖ%§uÉi]rZ—œÖ%§U‘Ó:œ³‰ÄR ¨aœGZrZ—œÖ‹ÊiMÄ„dh ¥àÒ8ÏÆÎæ^®v‘êÇÙÃÎtxhúóÐa/ËQ·¯ 1ÇÀÎz¶åzƒÃ8sâ%m”áš~/Úœà*ßFÒ ØÏ™ášÆ9°öL½Á„€Ø£CoëÄNßf.‡r=èNyï§§Í^3°¨¸D¶à ù=Ã8Ï zmŽZ¼ 0'œžÒ¿úÿ@ Ì*F”:âæqÎÛ ?f<ÿꑺùßÂuxõô9ÕØoïÔ‹ês9”;Z”Üo.XçDÈë©èÒÍÀ`š6åqÆž4~-EàÅ“<3õ£Î»ùY¥Á¾Õí ÅÖÛC-ó¬‹ÑØ¥Å,Ù/Ž=h“(?iaÔ㠦þ¢ÁcqZó ²Þ¨¨7.7qÒþ—Æ›fÖ_Kª*D¡®g9‰;Žæ£=îd*<Ôo ææñëæi¿úF9‚•Е§‹˜ˆØHGXgG•àh¶4@#v˜b ­: —5ÈE ‚i”ê„ ãl4-§ZÍJTç:X<~»ôú²sv6 4.Û¬·ð$z* ÷°Uxxræî/ªK“½Û^2LQk¹ 58Ç’e“Æ‘·3X†jjjÇÎ" r£æùX´ v¥–=У‰d¼tôF8=w,5«aFc;œØ<.´ÈØ-n‡Ñµ‡=åcG@‹ÙÓÉã&¹1/} ×àÑNøë¶·{*¹ƒâÊVL@Ü‚Ò8ˆÜ$ow5ëáßá¶¥ÂÃótk‡O‘5A4Ò¸¨ÎÄž€)™§!V`:ÙQJ7s²qU¥]¸Bbžm¤³÷)”MáèÓÙj¢dÔÕšî\¤\å_F#z{Å‚?—Üú¯oÛ§ŽIÁÙ–~·ÄöǘŸOÿ¯°ÏŸn¶û!ˆçÓõi Ô§c„Ðdôó×ÔáÑV(ú0ãëùuüz¿]Ýì¶ÙÛ˜Õ ÍÖÆBœˆ9õqøfŸvò—5í¤=4_¢õö9H|³S2äþéfõ:æuÏYÿܯ6_ö'š?5KB0"C1W2dvQ$2ýǧ7Se&ôaW-k:±+牭’¶¿ÛŠ`±Ô~¡ ÁÒô瘣Ëâ˜:ÊgÒ¬ük{»9;+®Ç Çê¤éÏ1ˆ—űàúpL´T‚—(FŽÐÅ ™pnŠ)„iưzéû0¬ÙØ›a¿_?¬nή~‰,8ª­ô6јhµ2µã[µúœšÕx Ó©¹ýr·úñp½úr{}Þœ=„€¾F®RˆT²)jÇ´Z  éô\ •kðP¦Ý­¿oo×ggEŠQä*ª]sKÒŽ[µ¢wò^° Œ¦O'nÝÿØîÏNH”¨U-ÓÜGe 
͘Å]¨œ ôaV-ž^ž‹|œ<ÜÿÜì~ì…#…D–¹@±J6îpF*…jÇ8¢«Ñ‚ëĸZ<Þô¾ü¸û¹ÞmV?ö_7»ó~ÑyÏd<¹¹ÛM@+[;þÕ*úøj™s5ä <±û ôyrv÷Iñw«\ôêú83(>T FYW%Q+®‘1JƒªºÜ>ÉØ\UPÄ“pC«Tœ£k)­ï£célvC@á €rlÊ·çöÓÐŽŒ¨GÉ®ÇMPƒ§YßÙ3º6 ÊY¬ƒ´bÒ8×0§EøÐaÉkð ·ZòÇx’Ë1Dl‡¦Llb£n®Ðœ´ °ÔÁ¤ÐàáØ*¦úY}Ç?½•× ç°$ÏâóN›-öÉäTàî’+¡ÁÊ*ÈGe½l‘ƒz|3õ”ä1üÊ8FÅip¢¿¬iDn•ÓöüðwTð«É|î š‡f ˆµ¦ÿl"ÌNž{>çxE VàὊϩvµ;ìýA]UàEãko‡Ó€Ž%B½ |QD@ ÍZ^V¯½Õk Sê£hDq^·Ì ÃìôqFMzeÇ_nj¶ÍUºrÊs“ÚÕúûÍjŸ#ÃW»Í:y]´±sô»K;ø£IƒÞP¬/Ø>¤©Æg!ÍûÛ§»Íúñq}ý5g[½Ó8I„Iw  g JÙùcS},YªEcèB–Z<ÑØɲ?À}§onE•hÝT‘ÏM”h鲈âÚE»Í~ûI«ÛïÿÚ­÷»§¡4÷[] Ù×ì¾Á¤=;A ßƒr×oWƒ'6'ÈÑ*¼¾]ï÷ï4-z¾ª‘‡ØŽuÇ’£Z¤NÞ”j<±Ý1s·Þ~_ÎòSݺVt ßÀú@Ž&@`«„ˆ}P‹|[œžÙ/꽦ÕpƒoÄ2ÎÑ4ˆ­¯‘£Ï…¶B[o*Õx¼XÙæÃgaÅh\…s“Ai-ÎQr#ã`UxcŸ›A5mɹóå]Î.¨×t-ù))Í[š‘ ì\k%6=’ d|ö‹B‘9yç)•±±b¦(8–ŠŠæqÖ6«ð6"JFôÄËQÒŽ Gi>z>›MWä'Û#/®Íë—fd¹oKÅtÒÑl²D±Çõ”‚ ò}/ãÆf±¸ûí³Š_5|>ʧpè€ÁÅx耊]{™ãÐ5 eƒ‡q§apK/ó3Mª ½ü^æSÀ·ée^@ }á½Ì‹š¾À^æSðNîe>|܃uå]Êd ÌÑË0÷³;×Ë\/¬—¹=Óß«—ùA;¹0g…vüù´¸ö½Ì‡/‚$Ô—=J°ô2_z™/½Ì—^æK/ó¥—ùÒË\ÑË|8?sÞ/’|Î"àÒË|ée~Q½ÌJd;øì%§q…ìölèÿçÑU™Îèg/Ú{åö¸¤‹“å²›Llþxm%nò(~TÖés!_ÖZt ¾¯ôÏ7(£;*oÔ¶<ÿ^0¹ú¶¨¥è™íŒmË!\%UD¦myB3ü}´â©Xq@Ÿ5ߨ™¬bþ@u žhŒ²4ÿ‡ª;MnõÅbR‡BÙ¢g!j¨F>ÄèÏŸ=­ÃFÝÐü TZññØkÚÙá× óÀ";Òêl…L¡:Ga~a&‘ªZ`êCªZ<8g¦4 ¯™Œ¾XåB'Rœ%s”,](¹¥˜ Öài×òvôl ³`EFÕJÄ0c™•(Se {¡âqœéB(k‰Blû‘¦wéçF±ôÊ‹#à̶A|l-ØI¤¨Èõ±†¬'o| ž^G¢B݇0)^dDŒ5ÀÁL((1ñ$z„(…¯¥Â>ô¨Å£íÈñnýi÷ëÛ!¡ms»¹~<\œ}±NÍe2êC4Õ½8fc‘"3B¨}¬KìL žˆmŠ6˜ˆ ò‰)pPÕ%O{H3…V.Y¿*$æØ…VΦ­§30Vßž¾lVû?ÓÜÉ“õÓã×üætx<&Þ3€E>Ù¡`›=Œ³¥ºô¥>¶ µ ˜.t 6 ­½ù¾ßm®ïw7û«£ÏzBÁ”Ï%KÒ’‘ ôÁL©~Ù‡Æõâ€éðü¡Ác§¤„Ê>°×уËrç‘Ùú 9¤Ó8 íÜü ‰­€msVƒÇ5«Ž{ÔbÍ› ˜²ÅáÒ9ÒtKÎk—ïÃ~Ê ÑšË àØaOÐà‰¦óÅæmÍBåÇ!Áˆâ“FgL»Âº y­‘`þœA‚Þ]N" ¶Òk!F¢yS†8)[/èµ—E ëÌ_D ÕÐrv˜…ØŒFÞõ¦ÑGbt#“§Ë"üU»Qîô9ÌxV9oP|döÝ©ôNˆ©Dª–»©ö™úûún³X_ËÞ…r,ƒ§\Ý7ˆ·ù4§6}™Ý4«—&¸v¯7îÂ'ÃçU^ÿž¨ðûúqó¢×ýêñ~µ¾¹Û¼“Íh`T ½ ˆGûS5RuÙ x0t ÇíæçPÀ4cG¤9ÙñprÌßs@‡‡{ì_Öûíõêi,a[Q“"ç`÷ šxQDAë;åùxÙº£¹ÂLh*p0'W È'Ñ¥V:oúÐ¥OhÐËDéÚŒåç0¼óBû­a­›¾•U­ÙãÍNƒ±eÜÑGʳeëò+3+Å!û0Þë>C«ñG ü<ç[ð‡È¾_â>#4ÓdÄ)mŒÆ@}"hÄ"Y!þk¢…Þybù‰#¤UŒìÄK㊎¯êMdÒ–§@ëL‡‹©mÓÔö­Ö~ûPåKG rÞ9œÓ¸B¸¯²¯éìLV;ÜK5xÈÏ[ì·"µ b3íò\EUbŒ>v¢:sYDrvjàªò—“a:7¶ ðœå9ØØ9c+d8 ,/U9õB•SÝG{¡¨ð`ƒç˜Cæé2¶å›¦)ŠeÂ0mίç®7u0_4x¸ùr˜ë•Wö~áàa–h‰ù©Å·phèyYVm nqh{V ÷Oã‚qÍšÔœE Ú 
Äiwu[H™b‘}.vÍRjAÏ7_¨//sw¨)õš'™%€¹¬ ¡ 0 Òa‚x0¶ôúÿªº—Iþülß¾D®GÁ½C™£†Å™|ù‘¹*ö I5€{äã(𠡉aµ…­Q˜fbƒàIÚŠˆÀM‰¢ž‘ªõ2pìZ¯Á£m¼ô¡ÍpF“ƒêÊþ‹´ÿ€7l%_Rç8L0ÍRLfXƒ'ÆV)êvIaÂÓy“£ ¥£Ã³¸IuÐqö¼žh'Õ~§Â×úÒ`dÈâûǾA|8q07j]‘/PºT‚¸kˆ'…-Û·®pá*çuàݹË{.ž¼·PBV?°¶r'WcŒÆA‡e Àã[$íû¾=QV,*Ëú€Ñ£“Gi\@7n»k´HPÑÒüóªÁã°A#ŠwºK׎´ŽoV×ëA}ÂLSîPzȰùÁ¨ÉÒUñ°Íí¤ÃƒÔâÙìÇv÷x¢.*ªËA΃wBõìaxu2J{òið2Î?½ <Áp‹”ƒ:O˜õ´4Héò;x›ÓWÇJ¾ù#ÚtxÐ7XÄ?ö_7»Í‰Â¸¨0Oé‚#Lã<Ø:'Bî`L)ð€i1¥ßŸ×ß·¼*ŒLQaàcšk¥r‰Rmù£ù¶– ›ØÛ ÂNÆl‡ :}Ǧ ¿Aµfb*OmgÈás @žÝèÇEŽÄ•¡búà$عONâé͇ ÞFRÓ0}“ÝK»ÓUÖÏ.ÊÙu82žhe9M††Òy·ëÇ4äîDo¶¬·t€dÓI«/ƒóž¤úØù÷®éOÿ%Œ˜ØçHƈ¶ƒ·+‘#WàØª6õÁVß_õv¶€o ôÃz'é}<£HMr†FR“ØDãE \èÔ×pÚ‰­K+däïDÏÁVà ¾ÉL?(òtêƒ`ôa®“,CÉjÀÜõ2NwþÍ@^JxäqÞw˜ýôbt\'º^»Á/Ùß‘|Q¥1™ˆ!8ÑfJãœk“=:žÁ°=®-<Ú¦Ÿ Éæñëæi¿{º­RàooÿÃj8ð8XWÂ…"æ{¶3è$AÒ½?ºn[ƒŽÉ„!í²>È2„ÖþY­ê<»™ 7ëÍÝðö Šê$°ÉÒa#´iœ±ZÏFWrk$ÁÄÐà‰Ð¼Uð[Å–öcW¾pÓзƣt§qÆôÛ6ê9®ÀOì v /•뤾Ý.Qf{·y¶ÕZ=üóé¼üÖ^TR²í‰ƒt_Jã ¶‹¯™ƒì Q¬åùÉ¢ÁÓ¢ð9Õ­†ðããÒ{IˆÎåñ‹ÚÌ5M@B/t×ßCçãz½H:œ3<Zî&›M§Û’û®Œ7@2¦(š™®I£8—¾+t ÆÙoÚ’4z&° ÑËO¾MÒhnô…'5}I£SðNN=|<8¤Š]ʇ›%ié í¹¤Ñ |Ô““ë"’FT„®Ûôï ü{'´ƒ‘ ÊÚñú%_t ôÁx–`I]’F—¤Ñ%itI]’F—¤Ñú¤ÑáüL;½7öžÖ_’F—¤Ñ HÍÄ HÁFÙP Ás‹C ]¯:ø4{ O¨D«U£+«#{“îÊÅ,ÒaÜiÝFY¤ù÷rºf&†IjCvfÌ"Mß`Bˆtwî:ÏÑz h$¤Œ±©¯Otµ0Ñ@Õ£Àš½~{ú’6ãGbcQ±6wRŠ£ ˆõÈDSÊü¶Ýêq§ ˆóBƒG›Ë¨:œv÷üyªG_Ö#E"¶VòO¦qÚ½ÎÁäzI:”z×áQwo<«Ø_4ùp[«\**×yLDzd-¤qÖû9/%¿5À‰æg…Óf×;BYl¬4’Á㈠¶ÕÄ\<®—æ'Ñáq4Ë&Q«X.*ÖçÜFŒdå8+´³ØJb+³…ù© Áã`b[ˆQ‘Z¡¬OŠ" Í †q^ÝÆ±/—’pKBLœe“8jò| Qäÿgïêv#Ùqóõ¼…á«I÷臒(ß%{’`‘lv±¹ Ó¶Ûç8glÚž fó.û,ûd!«ºíöØ-«$¹7§v\õ‰bQÅTI™2)Ö»\õ¥nœUzj?‰ ê-™@ýN2<¡†#ÑÉ,}jî݇ÚäbÎÀMG+ùÓ5wø$Èik°ð<J" eýyþ¾“l:$ê,×ò!wfæÂqSº=OÒaÈ l•h[¨‡O¹D#H§1ßóäåZEG£Ì²#y/„r_sjj‹ârê*@ímà PÄ[È2­‹ŽsQç¶«ÈGLWÄ (¨¿ôœ$x\(Ò±úân³¾»_|üôáÃâž«Û=ì»lúLÎdTð1{©‘N÷¶ŒXNsà[\XKðÄ"UœÇœ³R1¼¸TVëŒI§btã´5MyÌÝK!ð¥^è¹ùm– šèñ󘧀/ÃcN >rsRÒGÈcž‚w2¹{¹#_tÞJ9mªò˜^£­=Àc–Auá¸xÌ*(È£÷~^<ænÖèµÊä]÷Ò9\À¦<¹{c4>WޤŸÁþ½ÜÌcžyÌ3yæ1Ï<æ™Ç<ó˜s.”_Og?zô™±'ôPzæ~ÿ‹Ó¯cD4äõ£¾ýwß­ŸN²ß>™Ž×“žù¯úrC‡Ì»Ëßq´ýòþlô3‹š•_ÿ/©Ço×WëÍúöbM¨¾|yf}{;òŒh÷ž-òI¿aœ>ûÐÏÙN±Õ~üA÷ñ®‚½8§C÷Ô¥]€°ˆçjå×ëKph ¦¹}·zthY}¸þ3½ölì’uêÝÇÊ.ÿ¥“èÙIÜzö³¾}Ø|³I¿õK[@›nV§ß·FÛ÷í³ûHaoNû˜ÑÈ'Mܲ·OéíFÿ_g=¶Oè¼<úŸÞ³ãÞЯ!ûZ„³ÿËõ¤gýQ3âûᅵ^‘W§gdèÏZÝþþ¶º½½{èé«üƒånÔõ--éåz¹YOº? 
Ç×x#e,äküê¢ÿÛÐEúÓv}®®?¬—ŸW7X÷¾~ý¾²Úè¿1Ä#wËQ›É/on>=ðÇÙèÿ»Ÿ!úÿ¾œ¾\Ú?=üx·¹þs¯óÿu{òØðùèl|þé¶cþñÉÉêãõ£BÿI÷?ãÁ§¼]ŽA÷O´Yn>÷ÇMðõ€¸áv÷2}x×^œÞϬ@xa8ä L¨ ‘à‘ö<=(:Šóf!)U«´v:ú\˜‰»É‹ŒÕCJ°×oj'Ã#nj7…aߥŽ|Ú|脊i¡Zo£÷ÙL>§Åuêõði˜úEÇdx̸7w·×œl8Y².-Ù­c¦Zn&1jð- IˆÔ\0‹èèÇp¬îŸeŒ"Û³¸ âè^)?}O! ’ÆÙ%µ‘L¡÷!Á#.ýí'ú2íŒæ3Ó†1£²µ¶“Hm„Z,ép<‡û¼¾¤}µ¥‡ÍzuóœP{Ý *íyǽMr„ àjZÜ2½”±Žö꣦¶…Ý$/ Ž©nr>ˆáþP.UV¤z]Ká¨m#l°°äy(Ý<˧º÷¢û¯-ØyKÊÎÒç¨"bnÿ· c©ìv¹GÉ¿]…%xl™|˜Ûõÿæâé´çÓ"Cr6è›Èm–“1C…w‘ë W,Á#æ vŸw¥w.×W×·]Ä÷±ÖÚîs¿|¥÷°JŠ€Ù³`sªIã”-ê¦#6ð˜%xâDBÁA ÷‹dP!-E Áx:€æPs]qÉ›ÊZ:»7 |1 « ð9C¯K2½5:ë½Òä¡eÓ80µ%D1åÀv¦Å—à‘îè‡JÁ „‰iaÒV¥0¨œöÒ89¥²ú  û¾»Op£ªOë‰Æ0…šÓXzVÒ ¨½Ø«šüR4õuM‚' 7ÝÒü°¹ûô¼UF§Áï_.ìÓ?rüÜd4½Eˆ>äàsL«ƒRè[ ŒŒUEK.îîv¢LïÛÁ.góE¤ñ‚J+ÀocƒÈ®Ï~÷b±¢'úNœœØI2­ ȱ ¥²Ö ·ò m ‡§ t-jc¼”hz?GäîâÎ`n´Ø,T6iìÆ«Ú Àt±ŽIä˜Ö&Ž9δÉà¦qNc“ªR5–LÁ78•Jð[sw ·~±;ÞïäÚI4dc G›Sf§œ/W {º6®Í J+ô­æ²C$/…Qœíêf_<æÌnup“b˧Z%‚^* H',H»Ù<ŽƦÌP 8Ð{©®33ôå/!Ñãg†N_†š@ }äÌФ¤:ïdf¨ÈJYc«r+ÜÒj8À­”AÝ«ç~$Vú—{Õÿof¨H:p¸ž`yf¨ Ù~™:3CgfèÌ ™¡33tf†æ˜¡¼:­ƒÍźqʇ™:3CŠÊŠÉ9†LV£Ö±êuÙ¸X®l õ‰NB<±ÅµÙAÉZ•–¬çniB’iÚS{§¿BLS~nðÖ•=ÅøàL¬É4µKHZææÐ)Ÿ/Ñ{Ì!ÅT÷ÌéŽBÒÃák£|B<±}€?$¥©ùÚ+‚‡ z64¹qk$SÁX_1$x"¶»y|¤wv2Õi™"Bäµ¹9 û¶"ƒØ žX$EzŒP1)TZhk17 —ða*¸aÅLÂh__3$x ´wÄžËÖ¤e‹´ò;hÒ8 åUWPóá³°ÐBCx¼™íHŸ8+ ¢¶9/™{$û·p7$z.˜ Ôg`ÉðX×ÐḻìÅiÓâDω¼™:*Ý8/æQ·Ópô\H7Ý~g;N…JÁLg4 À#%f.+”•knI:ª´¿OçeÐÙÜÁ ¬G„–~È0}àÐÀ|Hð zd'VH‹÷p59OŠy¨ª\_é&/˜Zh`[$x¤U:jK:}i.¸œƒå8$GfZt©©h<“àquÒ¦_ú~Q¥Þï2$¤nœóª¡G2L½ð½i°½HðØ7¸œÚI5}_Á­Vm &æfAø(–yUŒšÖ»AôC‚BÍèÇk²L;úÁrÓun ÖãÞÀ™¦È‚i¸'[ íÛùülÎÿëDš¾ŸuTs1½@»¡Çª!j=òÉ:¹¦}vR ÞúÜr2wRÒGHæž‚w2™»{¹§´j€•ÚKmªAæKtæ™»‡ŠœŸ‡ê÷Ú™›Qi ͣ礑Ÿ™[$<ìË•'sË¡ÉÜ3™{&sÏdî™Ì=“¹g2÷p2w·Ï:úÊ‚Ëî³öYp3™{&s™›ùFFaVѪ·¸å’DW;˜ µ2jÀtbõËÐW8àô‡ð¹3¤ *-eRNöñI’7ã^»¥IÞü\ç逓“*ÛËä(Oò6¸D>ú›C§ 6³ûÓºyfA&P-ïcƒI€'à[f„¤\µuÞ)ks6—Æ9‹oxY(³ ‚iyÝÀöJðSóÞðQŠ‹óÏ»,/Ðia2/ ÉÈG¼izA(6‘ú ˆex¤ ˆŸ,ìæ“S?(w“’6ÞX ÙéÌÌhœg¸VWxúÄNYNO$xLÖ«B5i¡bNÊÝæn(µ"o¬í‚©ah /<Ò2#µ%>SYæ»™™¥É‹™|Í>É,°}‘à‰u½“»Ëõãgs¿¹X|ºïý=›–h×P²gmË]ÓÌ‘—Xjjv`Ë “i™!{)¨*Á¦%bn=ÒÅÅ*Áù.­`(r‡s€iœU¡®4ò›’L!48CIð`“z8‡% iÉ¢³!xŸ ð8­&æ]–QgtåÒ!ZØzSÞÛ8–©|3PŽé•wœ›‚6æN|Ü89ª6þËH%&ˆQ‘ûhòSIôY-§Œ'Z—ÇÁUwbžIÒ¥%‰<ç¾äÓ8qÏǪڌœånœÏw¶Á¹‡ßƒhT€GÚØípœê5Q¾ò³NžiEðL_Œ6ë‡sŸ[oê; 
@wê«‚iç,<“húÆÃcp`ÈÈÍ É¥G] j³`8¬¹Í\$/ L’SP€A¶úxÍË•.XÒv™Å†6È€¦q´mµóR_` ¡…Ã*ÂSµòïǻ˧έ¾0 LY›½Ü\Q®A¬ˆú Pcƒk žšÙ€´õFú¶-Ęóíh@ÝRzÀvÊÖ_{ ÝÄy.ÍôÅ’Å"_ç¬3^·ûþ])Ô¶Åå¿Ñå¢Ãd™ÞG£ ¢6¹€E¤Sµiã Hx8|ÚÀ¨‚©ÊÒ„NŒé;¦8s&hNƒc²ÁƈԱ"*,8U« ð`hmRÛ«í¸\ÞG›Uñ8ï4å‹‹ÀíWÏùâˆÀ ‰?_| ø2|ñÙè#ç‹'%}„|ñ)x'óÅ%V*¨½ÈD¾xðKã5ÿî hðκP½9.¾¸½V?³æßݬi|$ïÛñÅeÈö;"Í|ñ™/>óÅg¾øÌŸùâ3_<Çí³Ïø 3_|æ‹_œ3:e¼6YŽ}Ý+ªáÑTnÀêwT= ÚØxÚTB~.N§Òâ V¡r/³iÿóøÁĽl…B p~.Ðs]¦'E7ÑÔl󖨽vîæÐy£C‹¾a7.‘S-ŒŸ =ß ° 3EைóÀÏ;¹bR®Ú‡Êg®#ºqàÛ¤$‹>~·×l=•f{ifjòKëw‚•áqÒÆdoînvŒüËõÕõmwj:9½]?ð/ß/wê¶|\drhíB°éµC$GÅkƒMJ ¦,³¼ìG$˜Hl`œxö«¢”`€ åšv• (£b.uª§¢¸$P=ÅàÞ®¦Œ’Ë%ƒAóãhœ2…IeW]0‘úM-exпáVé­’Ù1Ž“ 3ó qèÎ7#¿Ù¬w²»hç´>ÞœvHïûÇÎ_]Ü\ÿ°Ã &Ã]}=à's6F[nå׋¯»õ~üã’<èÞ.qa¿¤D‘\V>ó±L£ño¹áb¹‰xh æÝ×õÔrJ4m~†1ƦLöœ‰Ñd:Ÿ÷ãöÌüÌd?@QNHôø™ìSÀ—a²'ÈF9“=)é#d²OÁ;™ÉÞ½Üj:¨V*ºªLvqé1`²‹ Ú½xæQ0Ù;T èçfz§~^Lö^:Ü7+¿““¡“½coÜdû§µ™É>3Ùg&ûÌdŸ™ì3“}f²ç˜ìÝþ‰ƒü>û¬ÿÚÌdŸ™ìGÀd'Åô¸îwº‰u7Îîq[ Q˜ù¹Q™à1{<òˆkR˜ÕÒYcà…™ ùuà!ëRÓ8¥«§#fbŒ°Duxœk”ŽˆIjëuü?ö®nÇŽG¿Š¯‚¹XW$‘”ÈÁb1o°7f1Xtb;ñ®31lgæõ—Rµ=åîSâQ—ªZ;§_$6QúŠGEò“ø“,Ó®r@­—xcœD7¼b%Ë®ß.iÁž–©ôuë¶ùøîçOEŸÆ–áèœ4ñ3¥æ¤½¶ùõ˜™°-xäÀ¬E©j1ä®ÃJú-Ô!ïð'fŸíºu^€€öß-xŸ”>Ô«Z4ƒP«½ –=ŧå{tEÔØhÃÂJÕ*Ä£Q!^’Ø¿'0Ø|^G#߯ÀCÝJã¾üˆ¿þñÓ§iN(~Xß_­4P9S[âÀÉ©œÖhx¥ýËÚð°{¶TNñU­b̉F‰,מåÖ§:]߸byˆùâO MtH^—¶ÑáQ[^G4ÆÁ+ð´–š\¬à{ .0ÔÅ)…ƒ ¯ÔI?_F¦˜¯Áì’é¼²ð¥ë:‚Î'gãÏGtq¼hS‹f뾈 *óT{d¼‰Êùæ¦O6p¾@DÑMx¤WiÙBgß/þ{yò•Àø¡õý„,ÃIÌŒtHëÁ'ïÙë_EÜ֠σÆ.jö‘N¡ªÓ¨1)’óVl„ˆâºÕŒuØØ È Ø -xš§Ïþ³æ›Àé[}¿4+—Ö­h…*çØeZ·ëõ/áÃ?zx怫šMyÞ=€`«ÄðôJ®'ï߀¸ÿ˜É6øøæîõ›YiÆœgHC ¤:4Æg º½-FÔNQyqŠê·.z@»6<Þos>_vÜË÷_RïÏ߯ÿÓË^ÈŠAõRî-ÖÁ£@LQ¶D[¾”¹Yâî; „í]'Vít½èVÔ&ªÓ3ËjòÀfýÊî;õú—<À ´à!ì’k¶]Å5VO“CŸÌSݰe9Ç‹¦’G” pA=㢧Èy:‡_›µ Ž_2¼|Ÿ’á ‚6éÁK†«š°dx ÞÍ%ÃMV*,.©ö(tÅ•Šá6¤ÆªžQIÎo½}âÛª.oÓ]ØväÖ3œûW —Ñ'6&LÌÈ– qgÅðY1|V ŸÃgÅðY1|V [ÃÙñ>Ù~6úsöõY1š¨“ qcë‘–zß²dÈÞ½ -ÜÚh¤“\-ù^ÑØU¿wäæÉ·}°ÂÈ2±Wºyà>È3"¶‘J÷#¥‹»ôXç‰09+ô¡Ü<.ögÃðSôÖ¬ŒY¸_­‹»­åZ»|ãl™åRÝØ³ª¡ðsCxÄÔ+¬[ûP¿õ… ë¾0æ¬T4•Î »ùóÃÁ/«ÌT$þë‹?çz†÷o^-,xñ_ß)äܽûüåmò?m†±ðÌWÀp{Á€µZ é¯/æ|—O/ÞÞQo^naå,G.gÖ~úãýçüKçwûáÅýõ‚1_= °5< È¡ãC³ó¢U­äLpAýú™ml¥‘V4:~¶ñð}²+Ú¤Ï6®jzÀlã-x7g—Å£rÈ+Lh@†]³}nªŒÎ_¾·œ¡æËÛhCU–4VºqA%‘ƒ>ùt[éÆMÚaŒÇ¥çAw¾Ô9¿ìð”4¨œ|xfAYPý² ¢u ü«o÷onη8»è”“Ÿ+ùô!Yß×0~×¹ 
<3"çþYþfG,—¤Ý&.ÓÌñN;Ÿ&öˆ?†&: ¶YNÝHH{ôM½” Ÿ¼7¡7£•½‰Ró`4±I½Óke ÜNxÝAÚÅ#™q¬ utš41M`«š“I7ÇÇ›]:|o¢>9iN}šhv”ùÜü㬘3?¦ ¬-ØD4PZìTíRš¨NŠþPœÍ½‰pšØ‰èø4±Gü14ÑQ°ÍzpšèFz@šØ£w7M,ÎÉŸè¥ðä“N¢vÈ+4±Hæ~]™_^i0šhªÈÊ4cõD¯•ãcyk–Ìýüãovp]mÔÅ£qys‰•iž'M ET÷&Êk¢B—&¨íMØÑ9>Ús%&è÷ÚÕŽÍùäüãLå%ÓÝjmfK(-v‰¯¥‰æTˆ!q,Nî¶x&M¬L;Ÿ&öˆ?†&: ¶YNÝHH{ô漖E‹{)=›&Ln–×h¢IÈ9¥±Ôü¨ÊÞפ‰¦ÊÁ‰$VoþZI>Ú[çÄ9§«;L×ÑDóHèÒ¯¿¾ØáÌ8ib,šàºçà) ‰fgztþñö\¤òE½v³Ã|æI'”›0çÇ9>ŠÏÎîAõÑfg˜/-ZZJe`΋-vLì Ltl³&º‘&öèÝ Í9‚&ñ¸—‚|îµ ¸9(ˆ¬ööEjá.†X*Ê`0ÑT —ÙËf‚ׂ‰öÖÊ`úÄo{Ÿ¸òt˜hÝ™ceŽóÚÄ„‰¡`Bn H¡L’»;ÑÃ:Õç ¡‚öü›>(âv൉âƒ+²<_¤´àò·Îš¥nð@é©4ÑÄ1§Ü¯ä·ØÝŸ›4±2MìDt|šØ#þšè(Øf=8Mt#= MìÑ»›&šsõZû1ðdš 2¡ÌÖéíͳÄRïKÞAM•ƒ¸?¡Þòk•FmoM)Aêç‡}‹âÝ¥’Ói¢)#%xBažùÇ'M EE%$B)3õ.M4;¤Ãi¢>]sÿ2B³;ùÚ„å”×V«´µ`ÆÄ(-vpñ%ìæÔ@À 7/a?1MìDt|šØ#þšè(Øf=8Mt#= MìÑ»›&ªsQöRœ>›8”&à¤+6I-‹&š*²€&š¾X‚Øå­Ë Éâ蔹ýu4Ñ<º³Z<Ç`“I“&†¢‰\×üYT?æìûMäeoàèjí¹µi—éw·ý4;ÂSK£–Ít%¥S®-X²b ´öUri¹‰Å©íL±¸ûd)“&V¦‰ˆŽO{ÄCÛ¬§‰n¤¤‰=zwÓDu.IjJÕ¸—rÉç^›@-ÖZ‚Ø-R%Ñ`{Mdð`h±{±ÚuË[cy0Àѹ°vÝâQ Ò3ÊXç%ìICÑ„•YzV¬÷Ⱥ4Qí4mšp Î5;zT$è8š [‰,òÊI'+-XK;O¢ÒÒÒ¯¥‰&®D‘ƒ”N‹ä‰hšØ‰èø4±Gü14ÑQ°ÍzpšèFz@šØ£w7M4ç"êÙã^ªÌãN¥ MrËj+4±Mêh4ÑT)7õöb)–è`å S:U¥é@P wQf“&&MŒD~+ïŠeÝ¥‰jGbt4MÔç–N;LÖÖìØäLš(m”•œÓ„·LT¥Å.#]JÕiFV$ Åe™ 6œ&v":>Mì Mtl³œ&º‘&öèÝM‹óœ«ýØK¡ÀÉÅëèF¾vÒi›Ô cÑDSE9H»ØÁ‹ÑÄòÖÎbÏD'_xo¢y,“!Í)V&6oaOš†&°~¯uÏA«­ïM¼Û J¿<×Êî¬üÍñÜRØV{˜{‹‚ÚÒ¤”;¾Œ&ÞštŠÌ¾Ûå»üâ“&LƒˆŽM{Å柳@Á6ëi"Œô`4±Wï.šxwîLOt¡²[¼ÎÓMœÐÄf©NåtúE•¥"Ë1ToéÁ­ß-M¼¿5(•A:ŽàE'Þ= I¯øÖ»ù,^7ib(š€F –“i—&šß•ü<ˆ&Ês1‘ÕU nûivø¨×>²6ºÑcš€Ú‚Y™¥Å/,^÷îÔ ƒ>!î~ãdÒÄÊ4±Ññibøch¢£`›õà4Ñô€4±GïnšhÎ]ÄIã^ÊìÜ{ÊrsòšØ$ÕIÆ¢‰ªÊSi™)…ê=½R)ì÷·†úlŠ£l×ÑDó(™SâXÙýII“&  ,³t`N¢ýï·Ù=(EëÌ󷽿_~üÿí‡ÿûfÇÍXùã±,¾«©wçºÊs¬íýí¯>ÿïçŸkkúÓ>ÿܺÇ6p~óO?ÿß?|ûéßÿö/ߦ~o¿æ¿?}.­îSé¾ÿòç?}ûiÅ ÌÉ>•–üåKùÍ¿ýô/Ÿ¿”‡Õx”ùt±üŸÏ?~ó×ÏßSþüË?ÿæÇŸ~øké¾üò_nßü[ûHŠã?³;Óÿüá¿  Õ—)]Å_Š“Û§ÞG¹£ŒßÄþã—ñ{Gùöß.ûšw­Ä0ö®òàãÊf¿®Z•0ÝleÞМgpŠEfH—RbuŠ ÂG“Î:"áô¿Ññ)qøc(±£`›õà”Øô€”¸GïnJì8ÿîC/EÇ/I.þ9QŠ»pääçRªçKg°aï•1y·#Éç&ÄgpH+‹§‹R·'”2ÓX8ÝT‰—Y´Äêå•jÁ¿¿µ’g×8:š.Üœ«%œÎ+sž‡&N†Ó¢PÆ¡¬ÑH(¥a¥FBÉ–™ã–]ìNNC€&º:¾¸9å”1jéÅ/Æ.÷T Y˜bqY&v=1Ÿ^è?výýâ®UÛ¬‡Ç®N¤‡Ä®¿_ïØUœßÑžQ³Ë¢'\ʼÚÛ{òº‚Œ±TO0M¸×|9˜4TéÁúß9M”·MHG®JCðî±f£Ó'”Ý•š41ibš 
¶^Æd‚+ïv÷×-¢‰ú\rDãþæ`³3ÓsúÕÒfyåâ--œ,PZìÈèRšhN9ãt³»Ï†2ibešØ‰èø4±Gü14ÑQ°ÍzpšèFz@šØ£w7MTç˜ à Æ½ÔÙ)’µ¸aÌ+»([¤bJ:M4UeÒ€&õöb4±DÇË44Çѽ&šGIÌÏün÷;b“&&M @¼\ït¹þW4Ñì’Nõ¹®–¢ƒVÍNüLšÀ|«‰ÓÄ¢Ô=§'”jì(ySU^Î-Çê¤dû}/›¢cW^Lååb]ùM¯®Ljh%ŸãËhã çz3U1_Š]¢Æö2t‰CÔ~ØÜìÌñÅoh™iu|q'*–#¥e&™.]­jâ²*pÕìÔfš›p¢ÑñW«öˆ?fµª£`›õà«UÝH¸ZµGïîժ꜒²„½úhSô¢ŠÐ!Ès¶¨'?yç>™’ÀúXõ´Ôböj4QÞšëYÔG‡T®¤‰âÑPÒÃ8gÈ“&&MŒDrKDI³¨ti¢Ú¡ß]ˆ?ˆ&êsÙ²kê¯4;Õ3SðsºÕ$ÿ¼BR[°’R ´öA.—ÒDu* @ê,vw¥r&M¬L;Ÿ&öˆ?†&: ¶YNÝHH{ô漌^D÷R˜Ï=Ik^:-Ö•½ïE*K–K%±À§©â"?y¬¾pÛkÑD{k2Ç'¢ã~M4Æœé‰ayuŸ„”;½})cŽÅR…;IÛT)f .V,véÅRð··Î‰58O´DÇñ:š¨-)Dé<;œ'&M E¹R¸ªq—&š]òÃO:Õ纕á-õ§ÁÍN™Ï½—¥‹‘üx|É­o1° HÊb‡×fù¨N ÊGáгt—ûtÒÄÊ4±Ññibøch¢£`›õà4Ñô€4±Gïnšhα€L°t´Øœ~//ÑÚ½¼&R‹ÓRm0šXÔ×ÿ㱪V¼-šhoÍõÒ¢ÄÑá|á½¼æÑij=¡¬´I“&F¢ «'êä÷c©_ÑD³Ó»å¢ƒh¢>7× ‹ÖŸ7;˧–ö›f[9‹j­gÉ¢Òb§xí½‰êÔkˆÅ9ðÌN;Ÿ&öˆ?†&: ¶YNÝHH{ôÅ9×jôq/…xrªt¤[éíWhb‘PwRì ©2ØI§¦ªŒ—d)VOøb{í­™ (`¾Û¹9&šÇò«apšºÙ©Mš˜41MørÒH5(ìuo «MÞîmEeÑ«šá¹ÈëåD]¡ o-]X‚]”f§.áGn)%©‰ºÃÇbÇó¤S8MìEtxšØ%þšè)Øf=6Mô#=MìÒ»—&ç™øÁ`ó±—R>û¤SB4||Òi‘`ý™5ËXÈ›*¨‘–'möZõŒÞ¢ã–ûw5ßììº{‹G–zH-VF:sNš‰&êw‰F$‰º·°;¼Ë„w M´çz{¨Ÿ|±Ó|f=#,ÄžðñI'€Ú‚¥æŒë÷ÐÍ®üD—ÒDuŠåï=‡so"œ&v":>Mì Mtl³œ&º‘&öèÝMÍyÞÈ%î¥î×<Ρ DT‘õÞ³ÛRÆÊ»¨22Ê«W“×¢‰úÖDÖßXì_—Óiñ˜“ÒÃ8É]†ËI“& ‰ò]RéùúÈ»û úÑD}®%,صʎzrõm•Ò}<_°ö¼9eó~Kov"—Ö3jNËAR,ÎïNMšX™&v":>Mì Mtl³œ&º‘&öèÝM[z)JzîI'q»AZ¹7±Qê£Ìå_“&õîÚ¯–ñf÷qý÷Mí­Ñ)8³üf'×Ý›X<*¹ÁÊdîMLš‹&ÊwÉí-vëM,vbG×3jÏ-VˆÛ3Ø©b ±ÔS½+4Aµ[ÑÝ“N‹â¥b›SOª±8NwÝФ‰•ib'¢ãÓÄñÇÐDGÁ6ëÁi¢éibÞÝ4±©—ásiÂ꘲FÛ¤æ±na/ªH•‚}ôf‡üb{›¢Sò:šhµ€4r¬LxÞ›˜41MÔ¼ ÎÀÝ{ovx8M´¼™)Eí§Ø{oB ×›^Z0y?‡÷bGW«N¥‰æÔ@žg2O:…ÓÄNDǧ‰=â¡‰Ž‚mÖƒÓD7ÒÒĽ»ibqî ã^ÊñdšðtcY£‰mRy°{U•€çï›]’üZ4ÑÞ”ÅÑl×ÑDóXf9™â¯NÄg†ØICÑתp5ýxêætjv¤tt½‰ö\áZ:4l?Tæú§žtJ7R[¡ )-8'ËÚ¿…Ýì4óµ·°›8.ÿi Ååû$“&V¦‰ˆŽO{ÄCÛ¬§‰n¤¤‰=zwÓDs.5 7Žs:•&2ÂMVh¢IPÜO?õöJ¥~]šhª2rRÕëÇS¹¿ošhoíBØ/ÉþE¹ðvõhDÆu†2O:MšŠ&ÊwÉ©L/åãùï~óýrÂÃka·çb ¢p̈çV¯Ëž¯œtÒÚ‚™)A¿‡nv ¹žHÍiQ§±¸|—{qÒÄÊ4±Ññibøch¢£`›õà4Ñô€4±GïnšhÎݵÌÒã^ÊìÜ[ØZú{¦• ±Û¤:vo¢ªò$–ûeßìàÅN:µ·&":z!M4å]ŸPF6kaOšŠ&ŠJ+=·ÛÇ*=ßýæûÍî|øÞDõ_«<…Ó`cËgîMd¹9c^ٚȷõ¶8`{š]Òt)L4§’Qä ql&ÂYb'¢ãÃÄñÇÀDGÁ6ëÁa¢éabÞÝ0Ñœ«` ö¡‘vnñ:ºè Ll’ª8ØÖDSÕJÂæ'Ôg{-˜X¢#P†ú8:FtLT@T¾»ø«¤ymbÂÄP0Q¾KÖzé§_ »Ù‰ 
õ¹Æ\wØ~JPôÔrù–KC†•äVZ0‚“ô«M7;¸Ô4ÑÄ•_‘TCqH4Ka‡ÓÄNDǧ‰=â¡‰Ž‚mÖƒÓD7ÒÒĽ»ib[/åv*M°áŒWhb“Ô¹l¿.MlS¯/VnbSt.¤‰ê‘’Öʱ²|—ç}ÒĤ‰hÂê"E#ìÓD³+3í£iÂZJ§œ9H”ÔìÀíLš€[®ecVi“˜›JÝáÁ ¯=¾õeäH¡úò–ðjãK-½ ” }‹"_:¾ÔâÁY£ƒ€‹]š«Us|j|ñ6s«¦ßk6;¹¸H·É[‘–s,Îy& áºÑñ×`öˆ?f ¦£`›õàk0ÝH¸³Gïî5˜æ4ÈU½ˆT:wG7ËMa-uE-q]H‚ûš]™$Mõ¹œ<ØQnvšó™4·¬õªùW#ÅVÓ²I¬4Ë`i›ªzñ,¸ô±ØÑ‹ÑD}kIª1þ åÒrFÍ#%q‹7¹¿¬3ibÒÄ4!œÑÃñEJÿN'Œ/¢ǃ ‹⹫U¬å-¯Vaª“©Ù»ëj‹§K“|,N´Œ~±¸Ì6¹+˜P÷":ÚΨ:•`1ǶJv·‰ˆÎO#⯡‰†‚sÖ“ÓD3ÒÒĈÞaš(Îs·Æþ,¥ÁnÎËKkŠîÑD•ÓŽHÕ¹ WUiKJb}õñËn:Õ·ö´åèGÇá¹æ¨Å£$Têÿê„Þ¾X.šX41M@Éw³Èš(v¨W·3ÊÏM,=¶/T» |'MÈFfÈð™&0à|òÙùnPì@½V&Ò1ö¾8~«¼¸hbg›Øˆèü41"þšh(8g=9M4#=!MŒè¦‰êÜÈáÀ,%ê÷æån kvh¢HÐ|× Hµ¹n:UU%ÉœûêÓFã»h¢¼µ“†#‹eÔçn: iOÓ.{ÿ²ƒU2pÑÄT4%/Ñ¥ÙΨÚI¸ºd`y."lßa,vô1Ûí2š@Ûˆ‚Ég˜ 4€#{g Êv*…‰"Î’¾vŸj§q• ìî&FÄ_ ç¬'‡‰f¤'„‰½Ã0Q3‚r–2¼·›lùHÞ[³½9±É©2ÙÑDVe! ·“!«zGÿ.˜(ѼJC7:ödÚDõ¨ìÀ±¯ì¶L,˜˜&(C‚E¥Ÿ·ÿüóûeŒpuÚDyn *Ý=:ç¶Gw„nd¹ußçõ…óN{'mwT«v ö(M§’¶ F}qWÚDw›Øˆèü41"þšh(8g=9M4#=!MŒè¦‰S³”ð½È9ÐFq§7êI©*sÑDQ¥šVlí«×ð]½Që[GDB;{ð¢Söè!ýá<ö•¹®’‹&¦¢‰ô»´4§Fþ¹GþóÏï7§î…«i"=×C@JroüäVurg7#ØÈ)~®ñ²Ä”bûžS¶K¡âGaâŒ8Ј &z»ÄFD燉ñ×ÀDCÁ9ëÉa¢é abDï0LdçÄÊieëÎR„7ÃnlˆögûÃR)Lv4qN½|Yö©è0>x4Q<:tÄŠÚ:šX01LH®Ô$1²B&Š]WÃD~®JâÿvŸ²—Ý­GD’ÅøyyѼ £FµöWƒb÷^¢ú ˜(N9­a}qïýLìì&FÄ_ ç¬'‡‰f¤'„‰½Ã0Q3±[–R¸9iÂ|KRvN&ª¡Hý9Ûÿ.LU‘ÐÚM ^êýËî9•·6Î×–ûщo¨u;Lh;€Þ=§b—ö* &LÌI¥@®ê í{NÙ.¼ß¯¼&òsÑE¹7~’ʽ0F{´±`GçØ¾kYì={Ï©8epVï‹#])ØÝmb#¢óÓĈøkh¢¡àœõä4ÑŒô„41¢w˜&ªsOKÛ)”ùfš°™ÓM ’K¢ã©¦sÑDQ¥Ò^¸¯>½ãwÑDyk <ð·U|®u]ñÈ!½o°¾2Y4±hb&šH¿K11Wm¶F­vìr5M¤ç¦Yœœ{ãGÃÍÍ&t‹”t|† Ë”EÚQ‹Ú³÷œŠÓHi&‰}q1¬ê°Ý]b#¢óÃĈøk`¢¡àœõä0ÑŒô„01¢w&¬6JSn7 ®vïMÁÖà›ƒíÀ„Õ®e ÔŸí%üÌÿ]˜(ªr;Eµ¾zñ»`¢¼5–û~¢cñ9˜(¾„€}e uL,˜˜ &,oÒM)zûh¢Ø1_ݵÅ<€›µ[;ñŸimwÂDÇœ&îŠSÆ•‚ÝÛ%¶":=L ‰¿&Z ÎYÏ íHÏCzGa¢:‰J±?K ѽë6Öø&^Ü-ú©“Ýsªªb N‡Àj§¾ &^щ@GËø¾e¿&ŠÇ€dý=F|oŽ»`bÁÄïÃDþ]*†4iB³ÕD±KÄquÖDy.‰#ºöÆO²Ó›ï9y„´Ò}¦ H#ØÑ©‰=ÕôÑÆuÕ©©`ûhâeG+k¢»MlDt~š M4œ³žœ&š‘ž&FôÓDržïpxh-}Ù…{i„7õì—„(Ðn¨ú²û´.ý&MU ¹Ïj_=Ðw5®«o˜/¨õ£ƒïíáâQ”H ¯LÞÎîM,š˜€&`p Ö¦‰b§~u¯‰ü\2êÍÚÙNîÌÁvØÐŒ‚¦ Ì#XQ°}ô]íØýQš(NE4ôÅ™¯´‰î6±ÑùibDü54ÑPpÎzršhFzBšÑ;LÙ9@èU¿|‰üÔ®çÊìÈ9;c‡&ÎH…çªèôR/Þc¡jß•ƒ]ßÉ 
ý@tü¹ìêQÓOŸúËx®›³hbÑÄL4ùl‚Å56ËÃV»4¯¦‰ü\2Ë)Q½ñ“ìHï<›À-jnÅð™&¨ÎAÛ}°«г4Qœ&„±v†øËŽ×M§î6±ÑùibDü54ÑPpÎzršhFzBšÑ;LŹPú¿±?K¥eðæ´ Úh/mâ¤T´¹h¢¨R4µp@}ü2š(oÓ¶»s6Q£HÏ¥MT.nGöæ¼hbÑÄL4‘~—Ê`š4Aå¦_~Ó)?WØœÚ Q/»`÷6›ˆ8ÈM¸£"ëÍAÉŽâtëKR¥@,ÒW/¾m}ÉÑ1•îÞ!Ûqxr}q§Ä°ÄÔU–·5k}YëËLë §}Ožv‚´÷GÙŽžÍ6;#Žñ­ÓÒú³׈Îÿ fDü5ß` ÎYOþ ¦é ¿ÁŒèþsj–"Ô{Ot‘6QÜùù%¥×Àf}Øb—{f^M ÿÿ†JÍÜ™„­°aP;7Î)ÕÉòòN©Wø®"'£#ž}ŸRfëì{ÑÄl4®Á£…ÐY_ÀÿúZtÙúIÕÚM=«ÆxãúL[,b?/0’v‚ªÚC½Ø=ܵ:ÕBD}q¬«f`wG݈èüà5"þðj(8g=9x5#=!xè¯S³”(ß ^„›í‚×9©ªîý*NUÑ;Ëj}ËÇÿ×8QÞÚÐ"„~t"=×µxT0ï+Óð~,¿pbáÄïã„äCåÜÌÛ8Qì \ž˜—Ÿ1mpÙzãG5ꭇ߰ALS+|¦ Í#ÙÛ#½Ø>ÚϨ:M?ï‹ã·\ƒE;ÛÄFD秉ñ×ÐDCÁ9ëÉi¢é ibDï0Mçjª›U¤Ñ͉y°ÅÝ2§¤j˜ìpâœzù²«´å­ ò1@?:ø9šÈ# ]eIغJ»hb*šÐ\ 0_Î÷ؤ‰b—7G-ÏÍyçˆÖ?Š®por’<š?ÓDL#8E›¥S—/ÛE‰ò(Mœgé·³h¢·MlDt~š M4œ³žœ&š‘ž&FôÓĹYÊî-ó!h›ŠìÐÄ)©&Kœ8§žý»hâ\tÞö<·ÓDñ˜Pß:·&‹è*¸hb*šH¿Ku‘œÓÓ¤‰bÇÂWÓDznÌE½“0Vín-H´!¹ïܤµ4€!ýmÚŠ`x&ŠSv°vÛÁ—¯ äÝ]b#¢óÃĈøk`¢¡àœõä0ÑŒô„01¢w&ŠóÜ^ÚL¡Â÷6GM{ÉÍíÀD•`¹òÞ©1ÌEUT°ÎUžb§ôeyå­Í#ÀŸaÔ«|$Ð=Z÷WÇÞ3:L,˜ø}˜H¿Ë˜·éiŠoÂD±º&òs1½¨wªV»ï­¨˜›Ÿ~¦ ¯#°³V»ðlÍÀâT¢[ľ8¡U¼»MlDt~š M4œ³žœ&š‘ž&FôÓDqAÒrÓŸ¥ônš`ÛÜöhâœÔ8Y?£¢Ê©sý¾¾å·ÑDykO‹¥9>GÙc®KÌ]eðÞ3qÑÄ¢‰ hÂó'‡h±}ѩإ]òÕ4‘ŸËܵ;ïEr€»&ðsÅ@iü" ¡ùÕ Øã£µ¯ª8éÞ­v¤²X¢³IlEtz– K´œ³ž›%Ú‘ž%†ô޲ĹYŠf_~Í ö®9”jsL¼Ô[pò¾z¡ïJÁ®o­¸Ý—ðY¢zôô—“7[,±Xb*–H¿KÈå¦ÅÁ[,QíÈ®®?^ž«®iÚñ¹@ô,a¼‘3~¦ H#˜È>õ(ÿKi±C~ôd¢:Ueî,Åî½}Å¢‰mb#¢óÓĈøkh¢¡àœõä4ÑŒô„41¢w˜&NÍR*áÞ“ à e§ ÓK‚qè¬K¯Wš+i¢ª2оÉSíâ‡óþÿ5M”·v ôwëGÇ"W¶zL íò–ÕŽÞî×-šX41M¤ß¥sÓMš(vñú³‰ü\ ’ëµõơحÍ&tÓ Øgšà4‚иbûl"Û©>[¶Š“ä¶Ý³ãeÇël¢»MlDt~š M4œ³žœ&š‘ž&FôÓDqž–,ï|ð¯vzïM'ØB@Û;‰." 
Ô—g£‰¢Ê¢w:Â¾ìø»r°ë[§u\Ûm£^vlÏÑDöh˜¶Ø_Æs¬³‰ESÑDú]Æ4ïè‡:EþùýÆ4kÆ«i"?7íG2)ôÆO²ÃpoE§ÂN¶ä,E¼MòïôMq-BW\š†xÑDo›Øˆèü41"þšh(8g=9M4#=!MŒè¦‰âœ8·6:0K¹ÞJ¤´…½£‰SJ çê\WU1Pÿ³áøÿ&jtRx:ÇNÕãs0QÄælbÚ3t±‹Ï6›(NƒH§I±#]º»ÄFD燉ñ×ÀDCÁ9ëÉa¢é abDï0L碦P7Û& 0íÎö0—ô}©î>Mõä.´‰j§_VÒ©¼µBúïÐÎ{#‘Ûi"{„ôuÒ&Š]X%MÌEI¥±’˜¶&ŠDºš&òs5ýĸ7~rÈ4A²¥ù7„£ïXÖ JžöbùÙ‹N±,~c_\^$Wëºî6±ÑùibDü54ÑPpÎzršhFzBšÑ;LÅ9¥uÍ¥?KнiæPÜ9›8'Ue.š(ªX-v>.U;ü²³‰òÖ‚Á:çNÕ.l&ÜÀ޲&šcÁX*½ë±÷'a¢©bÊ|¢Û퀾 &Ú[—u\ƒú°{tüÁŠNÍ£;0@ʶmÁÄ‚‰™`"oPÛx&¡þÑD³Ë/u©/‚‰ú\õT&¾hü;ËwÞsRܤ–:9¨ke •Ù% ‰f÷Úãï š¨N5¹iPÎp·ã•5n;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4ќ׎pžãY S¾·>¬Ë–Øhb—PFµ¤fŸ‹&š*N¯UJY¿‹&öèXm%G‡Už£‰ê1'ÂL+sY}°MLEV³Ì‘»4±ÛÁå4QŸ›3¨%›Þ{Ñ 6N@|pÑÉëHGJ9¸ÒØìÞÔ¿•&šSv Ú‰ïv´Î&âmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šs¡ÄÈñ,Å~/M°ÂæÄ4qJªü~Œògi¢©Ê`(«Wø²ìsѱ+:U (Be–hÕ‡]41MxÝ¥ƒ`ú=à¯ý~k% Ë+:ÕçzM ‹Æ{2¿—&D-¾¡ (Êö‘Ž;ß{~Ù½^&º&~9•Zƒbq¯©ô‹&Þmûœ&Å_@}ç¬g¦‰(Ò³ÑÄ Þ1šøå¼ü'ëµæúe§zïM'%Ü8½í]÷·T!c‹¥fš©¢Ó/UìöÁr`öMg?oí‰kß“0:^~­ÑÄ/$e|J¬Œ^vÑÄ¢‰?Míw ¸l r§¢Ó/;Âk»Mü<—’BÊŒHø®.ö¥½ë‹{OPG0¦ÜíŸùË ?JÍi™ r,Ž|õ® ·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎ…j~u›¨NÌØcqðzijhâ`›Ø‰èü41"þšè(8g=9Mt#=!MŒè¦‰æ\³¥x–ò›;a£˜€Ïöå-\>j:MTUe» ª/ÿwþ.šhoXâèÀKNÌí4Ñ<—ßþ7¥u6±hb*šÀ­æÄ!¿9|þ럿ßb§¬WÓDy.Ö»VѬ]ìà÷l„+Ï&|÷2L߯/TG0%GìŸ}7;`x”&ªS¢¬bq„ ‹&¢mb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šsAò^ÝÒ_v,|3M%Îv<Û“0&§Xª ÎEMUæzé V¯”¾‹&öèhù :ùeÏs;MTÌœHãeœ M,š˜Š&¨P !¦þÙD³£—jiÑD}n–œ %?ÅŽ5ÝI´9ºÁÑòâåÏ’=C´¼x­d˳-/'Ôë›kÀÿñååLtÌž\^>WÆ)­´¼µ¼Lµ¼ð–X4Kÿ"m³c¼|y©Ïµòè(Ÿ¬Úe×;?V of–ù_Š‚\ï̳{¤4—í&=ú±ê”8̸>VE_!:ÿcÕˆøk>Vuœ³žücU7Ò~¬Ñ;ü±êÔ,E ·~¬B×ÍÍ.Òž“:[ZÞ9õoºzü§iâTt˜è9š8¥L_ÒŠM,š˜‚&€Ì…³A@@/îÚ÷ÿ¿û7Iw¦åiý£™ ¿§ i#¸Þçç®Òf'éÙ‹´Õ©¤Z‚„bq¶h"Þ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâÔ,å7}³ÒfÎ4qFª¤”梉]•%ù@½|Y‘öÖŠäqtØà9š¨UÀ4Vfyµ3Z41MÈVïî§‚ÃÚ¥‰f‡/í¸.¢‰úÜòžÂ)Ú£;‘;Ï&8o"%¶iyZGpAÖ—f§éÙ"Õ©‚ú›꿉Óä+-/Ü&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9GË9¸ß¹ÛÝ[2qËzT2°I .+1| Õ`.šhª (ÄkUY|é»hâTt¤‰æÑ̈r¬,¯vF‹&æ¢ Ý ¸ª©¾©KÍ®üȯ¦‰ú\JboJþïßþÑýÎvF¢[™Eñ ë;—\?ëç ÎP³#ðGa¢9Í|mivb++/Ü%v":?LŒˆ¿&: ÎYOÝHO#z‡a¢9w”è“G³3O·ÂD™7¥|U‚Xt4Ñì’M–•×TaÌq  Ó—¥M´·&¯gqtʦã9˜¨=`¢ÙÑK.肉ÀDÞ pª‹xù_ Õ?‚jQŒ(frgVÓ&Šæòž&¬Í¼"jýC”f—\¥‰æT¸ü;ÆâxõF·‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎ5ÓS¨ÝKõºÈáE§]*”…ø©å•梉]•dw;Æï¢‰öÖY34±Û½\î¾&ªGçäM4;X½QMÌE¶A²\@ý÷ –ýó÷[ìÄ/ïfTž 
©ö2ÂhÖ.vIùÞÞ¨,e’9¸èäun)“Ÿÿ}ŸƒðQš¨NK‘CqeZgá6±ÑùibDü54ÑQpÎzršèFzBšÑ;L§f)¼—&0o9ËMœ“J“¥MøþíÚ<óêó—u3Ú£S/h×ûÛiÂÎ&„$V&ºÒ&MLE¾Õß.hæ~ýñf~y7£ú\UFõhü;¡;“°ËêR"+ü¾d ¤:‚•Púiw;V{’&ªS+Ã*aùØÅÙK Eï·‰½ˆNOCâ/¡‰ž‚sÖsÓD?ÒóÑÄÞQš83K»›K:‰Â†ªïiâG‚zêW²ýû•¦¢‰]º¦ÔûwMœ‹¦çêïµüIŒbe²Ò&MLEåw a7mb·ÃË Ä¶çZíÄŒŸÚÑOï¤ ²Íá}3#€6€•$u ìvœ½è´;5çœ?—_¢¸`â`—؉èü01"þ˜è(8g=9Lt#=!LŒè†‰êR-¥á,oÊ$]œ6‘7„ƒ´‰ ŒÂøÔ7ÍEÿ(L4U`e"×Ôçïªèt.: ÂDó(‚eû+ãÕuÁÄ\0õÈÁغõaw;õ«s°Ûs3Õ´@ ÆO±Ã7µ'.<šÐ ó{–À2~1+iê/„ÍôY–¨N‰ÈËß'G¨+i"Ü$v":?KŒˆ¿†%: ÎYOÎÝHOÈ#z‡Y¢9çâ»wèGd†›£bÙ¶‰ÏöKe˜ì`âœzû²ƒ‰öÖ’ÁãèÈ 'ÞÎÍ£Yú@™½4__,±Xb–À €Úyn7i¢Ù¡ñÕIí¹,T9%?ÅŽsº7i"INô¾sPÁEiùóôi¢Ù1<Úkbwš™,W˜lÑD´MìDt~š Mtœ³žœ&º‘ž&FôÓDsîPKÑij”é½',¶¤`7 R[†÷«þÿØ¡ÏEM`N/_VÐéç­’§¢cé9šh9òwãE‹&æ¢ ª”5—¹µKÍNíê‚Ní¹Âåú…3v;ò[i"m…Ph‚÷™WHûŸÿy__탽;­'GÁ=ÐÝŽW v¸MìDt~š Mtœ³žœ&º‘ž&FôÓDs^›Ü¯ZúcGùVšÈÈ[V8 ‰*AI4–ªÉm.šhªÊBE™bõD_v6ÑÞZ,i¿åÉÝ“÷œªÇ à9I¨,'^I‹&¦¢ ®gµä)÷ï95;–Ëï9Õçæ¢€(š÷Š]ºµs]¡ q88û–6Ò‰€ûÜSíÔŸm6ÑœbÒ,¡8K¾Î&Âmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šsA5‰§P£œo¾éÄêr”#×$d¦,)–ªïîßþIš¨ª¼V¡x9ðo냽¿5—?C²i¢,N\þ$þêj Y[4±hb&ššÚ,Tüõi¢Ù1]]Щ>˸HîÑ‹ªß{Ó)gd>8›Ð¶¾”(`ÿ²Ù‰=[ÐIÛ²P(&¸0¶Û½ô¼Y4q°MìDt~š Mtœ³žœ&º‘ž&FôÓDsÉ Ò³”ß\Ð ­ìÉÊÃþHU…à^ën2M4UÈÀÁ7º]½}ÙM§öÖAÙ—=ŠùÁ›NÍ£–Ÿ]¿Xþn'´Î&MLEÚò ½™wþúçï·ØÑK¢‹h¢>WsPv·•{Ëæ §÷4‘ë÷4aéÏÐÍ¥‰æTÑØ?'«¢S¼MìDt~š Mtœ³žœ&º‘ž&FôÓÄî\ë>.ž¥”î͆Íà¨<ì9©2Wëº]UV,{áX}Nö]4ÑÞºv!ˆ£c ÏÑDõµl-a¨ ð…aM,š˜€&Zƒk(S·w›M4;Óë³°[ƒë2—¡ŒŸjpwÞf88›¨_²€Sýët•6;Ôgi¢9ÍPÏ?cq «¦S¸MìDt~š Mtœ³žœ&º‘ž&FôÓĹYÊíæ,ì2ßÐÄ.•3|2Ûg˜, »ªÂz”cõ˜€¾‹&ÎEǤ‰æ‘ËN!¨oÙìhµ®[41MØT–Àï'ßýó÷[í®?›¨ÏE•ò¾ÑSì$ßyÓ‰Óf¢i‘IÀ¥Å.å4ÛúRÕ»SPè}·ËümëKykf“8:ÿ˜Åï__ŠÇ,î̱2}ÉkZëËZ_&X_|+lÞúU>ª•yóêõ¥>W X0kïvpëMZ®íW‹Â÷ë‹·½k‰TðżÚÕ¾K~­jâjG(ÃXì`}‡Ÿ!:ÿkÕˆøk¾Vuœ³žükU7Ò~­Ñ;üµª9'©iËñ,E‰ïýZE°™áÁתsRy²*M—µ1H€ÜíÞÜþOÓD{k©½Fäƒè¨é¤ms1éßÉjv*ÏÞtÒ6 aùM`(à¥I󢉃mb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&NÍR÷Öt’ŒñAwÔ ììë~ì&ËÂnªHŒ‚sô]½}MìÑÉàA½®ÝîÁÇ\^X,V¶Î&MLFåwY6éfœ¸KÍ.½|Ÿˆ&êsY­¬á¹,wö3¢Êæg¹Œ`$(3t}ivÉž½éÔœZΙ<W€hÑD´MìDt~š Mtœ³žœ&º‘ž&FôÓDs^›Ä§ÏRæpsÞ„mFGgU¥š)øÁlÿæ$úÏÒDSTöÂñr@ðm5Ú[$Sˆ£ƒüàM§æ1§ÿêHÓªé´hb*š(¿KÎÅ%©ti¢ÚIÁŽ«i¢ãÿßã‡UÝ雷²È–—|OÖÖɹßǵÙᛪ{·ÒDWëÒš†âèµ´Ü¢‰ƒmb'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&vç"Q!‹Ýåæ ±e…†£ ±ç¤NÖ½nW¥µÞÆê_w¥_Aí­ JD9ê¶ïí¼éT=ry!' 
•q2[4±hb&š°zƒˆ(1õ³°m¿‘DWÓD}®dÊl9?5¿Âî¼é„[2AÎïiÂë&!–þ‡µf‡ÓDsªlI-'+o"Þ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâÔ,¥è÷VˆÛ2ÕtÚ%dàäHåÉò&š*«‡Pøzÿ²›Ní­½faæ8:öºg¿›&ªG!eü`€ÒÊ›X41M”ߥp™3IúYØÍ.Ñågõ¹’¥LÛá¬-¢zgÞ„À–’%yÓ‰RÁ€DýÂfÇ®ð$Mìâj–<[(NÊokÑD°MìEtzš Môœ³ž›&ú‘ž&†ôŽÒÄî¼Ùî7çú±Kp3MЖè€&v ‚Éû57~¤²NEçÔË—Mìo­e‘fù :¯Ù 7ÓDó¨©N©9VæiUˆ]41MÔß%i™³x&v;ò«k:Õçrb6ÏáÃõ‚ê½bDßÓÔ‘N–ƒ>{»Ò£y»ÓÌ.ø¸×òè‹&¶‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎÝûu¬w;ËvïM'ÅÍÐhâ”T§¹n:5UÊZÖï[°Û%ÎßEí­‘P ÅÑ{. {÷X@¿à~¬¬`Ç¢‰E3ÑDù]jm$áh]š¨vètu/ìöÜzèúu½w;þ=·ùʳ Û„@ð}M'Â6óêÁþHǶ¾€>JM£QŽÅeâÕo"Ü&v":?MŒˆ¿†&: ÎYONÝHOH#z‡ibwn…4ž¥˜o®éİ©Ôt:)Uçºé´«Ê b«LßE§¢£éÁ³‰ê±^P³~Ú]™ùÊ›X41M`͇ȉ½_Ói·¸º¦S{®b¶$R ¿¹¶²‚¿§ j#XSöþúÒìrz´ßDsj„Y‚06»ÿ³wfIŽãºÞ‘‚ÄŒ•ôþwrI*«Ã§Ò"¬ ¤f\ó¥2PÂ/X>À¢‰pšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰Ý9—Á&ޢܻ7A¾ÜÂþ‘à¢ýz?v2ÙI§¦ŠD 0VOÙ¾‹&ö·6 ¢£ÏÝ›Ø={î×òÝí”ÖÞÄ¢‰©h¢|—TpÂ%÷i¢ÚÁk†ã‹h¢>·Ð„²†-›$ù­µ°uÓÌ%îG4áÎR~‹&ìÅ™g_¼üÆ «7Éß6¾|´ùÉñ¥xt,C>ÇÊþ'ÿÈ_Öøòß/´%pb&êæ Üí/_­*ÏÅ\Ó‡·Ýšæ[ÇÞJçð>ËGQëã¹sz·CÇGW«šS+p?¡án§ë$m¼ щèü«U#â¯Y­ê(8g=ùjU7Ò®Vè^­jÎ$Ë»H‡{÷¾3lx”üGj•‚“N?¯DsÑDQU«`«&û@½ðwÑÄÉžÃèÔâ±üM4˜Ê/±2 µ÷½hb2šÈ('$i@ÅN…®§‰Œf)‡í§Øå›÷¾k± :ØûæÒ‚Kß›3öÏürëÄ¥‰&NDË/ŠËðRjÑÄÁ4±ÑùibDü54ÑQpÎzršèFzBšÑ;L§z)4¼•&òVKþä^oÿ©TJ“ÝË;§žå»hâTt8?¸÷]=BR”à^^Sf/{w‹&ML@\÷´IR¦þÞw³C¸:g`{.‘3öËoÿØe¹so·”²¾_¤õA„ìMÈÞ=›3PZ7äªÁ½¼&ÎlÕ3 §‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞaš8ÓKA’{OÒ²é–Qö&ÎIµÉh¢©‚2/íWcÚí²|Ù½¼=:ÌàGðAš¨ÑØ‚ꎻ2΋&MLEå»,_¥eÆ~ÎÀjW:ÍËsÖç‚‚PˆÚÃÛ*t×UGM1e?8é¤{ vMý½‰f÷š]ñ šhNMjŽ«Xœf]4M;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4±;wàìvJ÷f ÷´¡tjJ”Ä>‘ê“eù¨ª°žÀJ«wú®zF§¢ƒéÉ{Í#‚§XÐÊ@¾hb*šÐšÙ;y™Ð÷ïMh;i”.?éTýçì¤I¢öC9éÍ'²¿§ «-XJK—þŠF³#²Gi¢9õZObqʸh"š&v":?MŒˆ¿†&: ÎYONÝHOH#z‡ibwN²Æ½”ÜL¸ÉὉ]gû@*å¹h¢ª*Ãe™/C¨ž~ÙI§=:‚É(ŽNÁîçh¢y¤Ò¥¹eš¾|u‹&ML@VO©Hò~òfÇ~9MXÝóÀ$I†¤;÷&êþG™h+½§ ßê&/Áï¯hT;ry6g`G„Š)ÇÈ´h"š&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâT/E ÷Ò„Á–ì¨:ê.Ák—ÿTÖ¹h¢©ªÅ÷‚Ú{?oéßE§¢Ãòà-ìê±L…( Û”­ ä‹&&£‰ò]–·Ì~­Ò©Ú¡¼ìú]Dõ¹šËä›+Å”><ÛwÑÄ™èHBŽ&šG¢zÂ.V¾®M,š˜Š&ÊwÉ¢hÎÝJØ»¼ôšÑD}®b™§Gí‡K@èÞ;ØÜ¶@ßÓ„Õ\ë¾{ßÍŽ¥‰æÔ”>WÌMDÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0M4çŽBˆq/å‰o¯„ vP û¤T ¹h¢ªÒZ¤4ÈlÒìRÎßE§¢“ŸÌÛKg÷>M4»×Ò‹ÑDy®–ïÊàµqG¿ù6;É»{P”µ¾“agù·Cw€çhâ8.Ð+„ýÇŽltêOûœ&Å_@}ç¬g¦‰(Ò³ÑÄ Þ1šøã܈‰5î¥Äï-7¡ÆêÛ{$”q˜{…°ÿµƒ™hâGU­Îf 9û7ÑÄŸ·. 
ȆqtPŸ*7ñÇ£–/?Q¬Ló:é´hbšØ¿K-·çßçhÿùëû-v/+îWÐÄÏs)›‚‡½v±ƒ[ aã–]Eì=Mä½›[î*mv9ó£4qJ,šˆ¦‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞaš8×K½ËŠwå½ È¼?étVªd˜‹&Ω·/£‰öÖž³ÄÑQLÏÑDõ å •AÖµ7±hb*š(ߥZãëÒD³Ò«i¢<×Ê»¤µŸbG·Þ¶ò{š¨e*¡îŽJÝ Ù!>»7Q"0ˆÇâòº7N;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4Ñœ×Ì¥Iã^ o.…]F”D ḷGNæ’c©$4M4UR~æÞ­ßòÛö&Ú[k¡ øà3&ªÇÂÑÌ’Be”Ó*…½hb*š(ߥ‡’º4QíÈÓå4QŸ[0M1j?¦f|g)l*óáú–ïǬ-8)bÐU;t¤Gi¢:evÁ Úípeˆ §‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhνÀLþ —2½·ÜDQ\ื*ÏJ•$>M4UÚª*ÇêÅòwÑD}k-(©GG“Às4Q=ZÎ"±2ñuÒiÑÄT4•& ¯Uýó×÷kL)_Mõ¹Vº^ µ3¢;s:lX\¤|4¾ÔÛ’rŽh¢ØeÖÙÆ—¢ ËËõÊýû–.ß6¾”·.s {ñEÕ'Ç—â±Ì¢rJ±2Uu/S/´¥Òí¸öW«v»ëÇ—ú\2©‰—ºí§Ù©ð½9Dß/TgöX†bÏÒbþìjíxž!§i­V…ˈοZ5"þšÕªŽ‚sÖ“¯Vu#=ájÕˆÞáÕªæÜ©ô?÷RžåÖÕ*AÝÊÐsp’–ÚŠ†Ö"š¡Ôò²“¤mªÊtÓ]cõ¾lµª½5*±¦8:(î}7R~ˆJ^4±hb2š(]&g( ÐD±£—z\—ÑD®å ’j8G/vor¥^˜3P ñûÁÎ7×Ui-½KW'·vŽÏÞÊkâj5<Å™¤•ã#œ$v":?KŒˆ¿†%: ÎYOÎÝHOÈ#z‡Y¢9·2²1Ľ”ýNètñ9ZÈJÎÞÞ™(Èè´Û½É¹÷Ÿ²DUå9gv&š]ü.–Ø£ch)ÇÑ)Ÿòs,Q<²ˆ GÊXÒb‰Å3±Dù.Ý” ¥+¯Ù‰ÛÕ,Á¤ ¦N)h?Õ.§Y‚aƒZ4û`gBZÏËuŬ«TÚø’Ò£4ÑÄQé…²…âWþñxšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰æœ ÍK»]º7ÿ¸ä´íLìÜ,8 óóJ“e lªÊ¨.U.ɾ‹&NEGéÁsNÅ#§2ÑrðHY±[4±hb.š(ߥºPBïŸsjvœ/§‰ò\«¹ÃÆ2›Ý\IÞÓ„Ö Ê\|kv™äQšhNKx€<W¾®EÑ4±ÑùibDü54ÑQpÎzršèFzBšÑ;LÍy™ÅGý.RáÞjFè›9ÐÄ9©>ÙÞÄ®^Êcõ‚_vk¢½u=f¬ îÑy,o§‰ê1£x¶øwË ëœÓ¢‰©hBë,Ý„š&J‹”‹—‹­ &MLš‰&ô»LÂBø<ÁÇëϾß$Ä»'tÒç x'fÿpÉ[ÌÈiONñ*M€»ä*ÂÍ­ïjGˆgÒDi4p.¿MqÕÞ&M\Ÿ&¶<:JB±Õ#ÝÙ½¼5%¶‡ÇDpâ=·˜33E¹AYJsÁNчš¢ëwœ‹)¶ì<ÙœºJñ6^#{—0EŽÍ”NÕÎ-·îD9MŸäØp•Z¥C÷&/97ÐuOùɧªž[µ6u’€Ñx¦t2§‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnš(çô—ì(¥Ìsð%ìtñ诟tÚ&5!>E•ètñõé¾îM”·eZJ7 –Âç%ˆ­ÊsÆ~S™x?K£Nð |ô»L!$¾RDáõg߯Ú-n$íDù¹ cŒÞì?j‡pì½ ¿– 0£zÐŽÐÅðÔ±¥Ñ\Ÿ™Ú×îªûYºÎœ&6<:>Môˆß‡& ¶YNMOH=z»i"7îc@hجvDt(Mø‹s a=Úß,•i¬ÒuÛÔ_ b·y'.îÄN¹E„6•©ü4ibÒÄH4¡ß¥ø|½ZÚ{Ùtê¾7Mäç¢'ôÁœ£‹2{8ò¤“»tÉ_/gù'õèÚ9X«¤SÄÖFu—älqae“&V¦‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnš('&gç¯"½…ÜE‡´•½‰*5&"¶¥&ìÞDV….H°Õ ÝW!ì'ï”*ЦwÐÅónaוô…¢­ŒÝL;ib(š L 1×Íl–›¨v÷¾…]ž‹NDМ£«Ý¡9”& æôVk4<ç‘Ð'CiYWs£/ÔãóÑ1fÂø\ý¯ïÿöóÛ·e$ÿ£´A–Âܽ {¼³œ}0ìmQ¶(j¼ød²2ñYØÓ_=~ÿøKí~|óøKùËÄöÅ_~ùß÷o_½ü¿ÿŸOhöôÛý«þî??ê¨øR?|øéÇW/W ”™^êHûáƒþ¯^þËã}XþŽ5n©åwï_üöøð¬Oþôññþç·¿©«>||‹—ÿV> mø5[˜~ûö†¼ü2:”ÿª\^v•´øþ?ó#xü¯Äúû/úô¯ê§ÝÙ:?ßn¥x%ønŒ /þ¢ïuѹ:¯ß ržþ˜3¬áfX[æ "'¯Bª8Ñoè\ffù–—V=úÿaò‹ßkrUÁ6ëáW!žròëÝarC”Šž¾9äÑEYÝs Êœ®dûŒ§Õü<b^.f2]¥và¼9—˜Ž 
ÈÙÙBÇÃéBÌõblõáù)›/ž[£:'DÛ;ÏåÖ!pb4•EG&&L r|ý4á%Ûì?¼D(ÙÑÚîÈvtäðBþâˆ[IšÉú“: ¡ñ­Ø-뀟]¥Ñà‘½ÊbÇ‹04±ke>ÝðèøØÕ#~ìj(Øf=8v5== võèíÆ®Ü8ƒ#ïÅŒRìü± ÄÉ…píðÇ6©i°£äE•pÆaéú–|g4Q½Sk{Çûi¢´¨¿ˆ´KU;^Å41ibš`A{L:'hÓDµƒ½ z•çbD°zv¶sxtÒLÑ©öuš¥§+÷øvO/v¼(rMäFƒS„¶8™4Óœ&6<:>Môˆß‡& ¶YNMOH=z»i¢6ÉÊâQíðXšÐX~qa&¶IåÁh¢¨*Ùõã êåÎÎÔ•·ö¹8Û;è<š(-²äŸ¶2’¹71ib(š%~¸–¦éõg߯Ú9¿7MäçFç8×=³]vÇ%O:ÑN+inbîÁ) Æö&}±‹x.MäF£zÒ7ZŠÝr‹éMäŸóßßj—|ûÛu{x\°ÄùÐlžƒ½À0^"i FñøáÅw_¿xÿ!¿òã:#ùT:¤ÀôŸ×ß'‚/G ¬÷‰ ïßý˜«_Ýc ­@ï?Ô ØÏy:öÏÿôâýO?}¯Sµ_¾{á~ÿ»¿~û?:~¨7H~øå÷·-£èïáíýãå÷_¨„7ý›o>-þïÇßýôÍoðþëûïþñAÙW?¼ ôÕ_k°{¥a?“YŽ_=¾yõèÍ÷ïðë·oþúîkJ>}ýmx€¯ßÉ|ïà}e Ž”ÄÈɨcQì–é¯'K®@Bãã³dø}X²¡`›õà,Ùôô€,Ù£·›%kã¢ÓegG)æcz„KÂ\zhm´ß"6Ʊh²¨R Ñ(Tí ÞM–·ŽÀH­X½³H†r8MæÅEcoª(“E!¾I““& Éx!”䟟Ô|ýé÷«v$»Ó¤>—”Û…ß‹IÇžtCAmôúø’JO—°ÝÓSéé|n †"=¨BSœxšiŽÌ‰bããóDø}x¢¡`›õà<Ñôô€<Ñ£·›'6E)tǦ9"p9mêÊÞT•ÀDF*œ¯4MUäuT¡ªwV‚¡zG\2R“V»3ïÍ”õ+£ö\± 8“¦NšŠ&Ò…‚K¼´¯e»eVìh"?W笩FÿÉ·æÝ› )ï¿E”5šÐ>. ÉŠAj½m|Ña0Rð¦zt÷6¾lð,vhO_´E•%le(qŽ/s|i|‘‹óúÊ•t÷ŸŒ/Å.î=¾äçzÊg ÚÛ¶ÕäØ?Ñ…éúø"Êg4œŽA†RµsáÜ{™¥QaHlqâgÁPs¢áÑñW«zÄï³ZÕP°ÍzðÕª¦§\­êÑÛ½Z•½˜Q Ü•\žû¦ÃÑ0­#íz´HQ_Ä– <ØjUQå“Ñ«À?«¾lš(oùÀg´½ƒg®V•€Ð ¿'˜41ib,š "íYú4¡veš 9‘¾±ÇPìŽ,ñC|Áàh%Ë‹wÚƒ½sÖ„½Ø)á™4QÅa ¶88ÏÒZÓÄ–G‡§‰.ñ»ÐDKÁ6ë±i¢íéñh¢Ko/MÔÆ)qrbG)í|/.¼v/³JÈ›8í<)Ov†¢‰ª*0cûœÖ“Þ×½ÌúÖDü ƒetç¤--¢cA°•éËûI“&¢‰ü]b Š©¹7Qíh5÷¡‰ü\rŠ(`õµstl–Çùbÿuš€[tp!C)ÔH~ê½ÌÒ(ê™LqŠnMôˆß‡& ¶YNMOH=z»ibS”‚ƒsFRrð¸BU‚N9 ðyz%‹&Š*OÀ7¨O÷u’¶¾5æ’›h{GÛ>&J‹Á‰qÒ©ÚQœC'M Eú]¢wQ§Í,/Åw§‰ü\Œ¬5 b€ÍòB1¯ \§ ¯=˜P[i±û$ñÈ 4‘Õ‡<³)Ž)ν sšØðèø4Ñ#~šh(Øf=8M4== Môèí¦‰Üx'1Ù!4¸DŸtª×£}ÐX?o[*ðXõ¬ª*ÔiE[½r_4Q½“’´sF>ÙQ:&J‹“—d+cž{“&†¢ ý.‘¼†éçõ¤^öý"î¾<—C®×hFmdf:öÞ‰[¡ ,‘7¯÷´'ìÅÎI8•&J£…¨l¨Ú-S×NšX™&6<:>Môˆß‡& ¶YNMOH=z»i¢4ò¸B(§c³|°sò´²7Q¥‚÷á©Áu »ªŠ:­h'‘}RÏw¶7Q½CˆtƒwâbßépšÈ-F`!¦²è¼71ib(šÀrÒI’÷í“NÕŽÜÞ4ù“c0ûÚùxäÞ„¶áóªÝuš Üƒ1ijsO±Ó ù©4QM¬(älqˤª“&V¦‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnšÈ'DÀvIÀ*Râ±÷&’À%E^¡‰-R““±naoSñ¾r:móŽ_œY>œ&J‹)éTËÆS³žÕ¤‰¡hB¿Ëœ‹3^I³ñú³ï7/ÛǽiBŸËམ=[ýøØ ä)„šàyõŸv&ˆjçžç <”&J£ Ñ[ÛÅŽå'M¬LŸ&zÄïC Û¬§‰¦§¤‰½Ý4Q×a8ÙQ*R8”&"ò…Vh"Ký[GdKŒ&ŠzP¢íhqáÎnaoòÎ2_×á4QZ$h×d®v˜fòICÑ—:EJ Ò¬gTípqoa'šàRÏ(§;sVÿ!5;²žú `¢t=¹¹³Ž0©=a/vú”Si"Oåsº’%Níf}T{šØðèø4Ñ#~šh(Øf=8M4== 
Môèí¦‰Ò8†ŒËÁÕ½7‘"äŠ+4Q$„ÈÞ9[*EEU 9°­>ò}eˆ­o-¹ Ê ƒ¥,~ÛÃi"·rÂ)S Ìꨓ&†¢‰)!ç¸{>›ýÙ÷‹9µßÞ4ò &Ÿ]°ú9 t콉P²ý]§‰˜{0iœ5na;Lp*MäF½k)λy Ûž&6<:>Môˆß‡& ¶YNMOH=z»i¢4®ÿH ;JÁóRØûÒ„Æ{åšš(Åë×õ•Â`4QTQÄÐλþô–B÷Eå­™‘ÚÞáåœýhš(- åW²•ÅyÒiÒÄX4óI#í3ÞÈéËI#¿{N§ü\Ïù2B´úÚ=/¿½çI'wñ9CìÊ-ìT"´$0²·¥:›Ó©4= q† Ø…0iœ&6<:>Môˆß‡& ¶YNMOH=z»i"7Ž9Cµv”JÍ›#Vô+4±Iª¸Áhb›zö÷EÕ;L.ÚÞÑ&N¼…]Z̉èñeË\\“&&M @©œ` Dܾ7Qì<î~o"å=AFb—èÈêuJ,ù,¢»^µd|Bq±MÅŽR:•&J:*ŸóÞ‰)NåÁ¤ kšØðèø4Ñ#~šh(Øf=8M4== Môèí¦‰Òx.7áÑŽR>ÄCiB4Þ¯U¯Û$a°êuE“Dºa8 +Õ2¾hš(oPg¡öHNœN¬^—[ä’‡Le@h¦t*v.-R/ïDù¹ž¼6û{ý±{)X£ /LÎY4¡véZ™½?w|AGž0˜êÑ9¸·ñEß0qL7xgY4èøñE[¤R°ÖV†a&ù˜ãËPã ]œG%Î`¬V;Xô¬Æ—ü\v!ãÌO¶#!8vµÊ%k÷òHgˆ@18C©ÚùH§®V•FÕ‰F½·jâÜû6—!µªGü>«U Û¬_­jzzÀÕª½Ý«U›¢”ŽoÇ®V ^¢¬$ ¯RdÇ`KM~0šÈª¼s¢‘üõrg÷òªw”© Öªvg®V•sþq޶2t“&&M FÄœ³ö=ŸÍFj‡ûŸ¤ÍÏÕ¡'x»gsk'–ö£ ¾$uþušà܃!ˆ¸ö B®±ÊJ¥QFtälq´§'M¬LŸ&zÄïC Û¬§‰¦§¤‰½Ý4Q„.¢¥8â¡4Á!]"ÄšØ$5€ŒEEU®R7 ñŠú/š&6y'˜2°´ˆú展¡B줉I#Ñç=eH,®}/¯Ú¹½—çFrhö&ŸÞûœPV²H…܃9¦Ð.ƒ\íˆO-ŽZÕ? íêÑOv0³|˜ÓĆGǧ‰ñûÐDCÁ6ëÁi¢ééi¢Go7M”Æ1oG)$98ˇ QÇÙõhObtv´Ï7ÖÇ¢‰¬ŠÑ%7 wGÅ;¤XíÂyÅQk‹)±sl+‹4÷&&M Eú]jèÐ>ó<ÓëϾß"ì¾7‘ Ô…ˆÞŒ{1„CO:±¿X)ŽŠ1÷à\I.´wQŠãssæFƒýdÄ'‹¤ª“&V¦‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnš(C B`F©Ïkí3P£=âz´¿]*vÒ©¨ÂèÐÈQìüG­oMIÈ9Û;„'æ Ì-æ„g¶²è³¡I“&  ý.ƒ8 ÍrFÅ.9Þ&ô¹Q'è.y½‹Æþ#3§‹¯×FÅ”;0å+kí]ì<ñ©0‘MÎ'Ïd‹‹3¹=Klxt|˜è¿L4l³&šž&zôvÃĦ(%pìA'Í.:Ò®tªRƒCc‘¸J½6,ý™0QÕGÉVŸû‚‰òÖÀ ˜ÕîL˜(-rƱ•Ñâw›01ab˜ÐïR<$‡Ø>è”í\XÌ‘w‚‰¤µ¡ä­þÃλtäÖDÔ ½[£ É=XÇ3o$ù(vàÎ=è”Õ¯FYˆLq‚nG5§‰ ŽO=â÷¡‰†‚mÖƒÓDÓÓÒDÞnšØ¥„Þš@Ñþf©Dƒ]›(ª¦Èb«gO÷EÕ;œÞààO¼„­-¢CJ‰Leúî3eउ±hB¿Ë˜"ø”Ú—°‹/ríDúÜä"±Øý'¹€|äA§\ÎHÖ¶¾ÉiεÒ(5Ïü;‡rê%ìmâ–Ki“&®O[ž&ºÄïB-۬Ǧ‰¶§Ç£‰.½½4±)Jà±4£ŽÐ¼’€|£Ôà†¢‰mêýó„!_4MlóúóÄnSƳ8꤉¡h"—‘}ÎÏŠ-š(viošÈÏM}4ÊU;#÷&H..ïÒÈuš€ÜƒóBŽoîMT»O½6Qõ$îÿØ;·å¸$ _ë-:x±1³k•uÎŽ˜ =ãUŒ=VX>\¬ë&Ù’{ÍSð`[£Ð»Ì³Ì“mf¡Iµ¨F%ÐÚå!x¡»“•?uúP…,“ënm7ít’§‰™ˆ–O}ÄCݬ §‰l¤ ¤‰>z{ÓDrŽ …Ûµ]Œqì—°µ ˜éíŠ4;Q*Ǿ,šHª4È¿B^ÛÁ#;uô@ŽŽÞØ1:M$>𙊲2o§ÃQ'š(Š&¨^íÀQ³ÉÒD²³ÃÓ—kçó¤öCvzTš€J£Âíg£ZM ØI–]ú®í”1{…‰ä”ixÅù'˜f‰™ˆ–}Äݬ ‡‰l¤ „‰>z{ÃDrŽÞ(å^ ͸¯M ØÊ£oXš` 
œI#h¹Cuʶ4‘TsÖÈc•óÈ–&ÒUkBàåèh³¿NµÇàU‹ZG¨8ÁÄ%ÁÕKšÇ£qÖfa‚í\PC¿6‘ÊEšß‚[vä\#gtÒ6Ú&˜püJ8­¡d§7FÂ]‹úûÙ·‹ÍPÖª›œ¹uŒ²3ë_ó†œ=»Zþ¼º¸½N_®ûùì?8€Ïé“o“õóôñëO—çSmùŽ'3{¾íU¹_æ¿‹IÑû·4UåuãöñÕÕâüzõî(Æÿ{ýËâôt®~=ÖðÂ!ÍÀ-Žé¯–¿Þ̲üð#Z‡Š«ÉñüÿyßR¸°?þAýj €]ª?¾!/.GÔ¸oVËëªtÿñ«û𔾣à®AzË÷ÿþõ_ΙN6¿¡ Ýü—¯^üùvuÊ5“¦ç×§Kþï'ËËÓ‹Wg4…£_¬^ògév}IÌNÍãUúà±ÜÚZµO”}2êyÖ>k;½×w¿k§A;g•,.¨énñYw&¢å/‰ô?Ì’HFA7ë—D²‘.pI¤ÞÞK"É9B$ÿr/qä¸C¬š†%–à”´Æœì¶åbúM—D’*m¬VVïŸMøï½$’®šÊu&ÈÑÑaû«’GïÀ‚—•¹¾‡\¤Mk"ÓšÈpk"T1=Ÿ jm6/mmg7Ö$Zár-‘AD±ÛöV™8âšˆŽ•â' ;¬LåƒR'Dv›Ùšr”È0葸ɉa†hÅÁ ڈʀ°>Ÿì‚kw¥Z¸R›zRDï¼ä”ì î•ÖØi¢(.(;eê§á™ˆ–Ok}ÄCkݬ §µl¤ ¤µ>z{ÓZí<†?YúNä¸ïÖÓd—Ƥ¦ lݤ†Âh-©ÒA)aT®íô#£µúª‰Ã­i`öGkÉ£×ZÇ÷Í›hm¢µ¢hÍò)"F£vyZc;½šÖ2þ6 raÜÎPÜ·ŸIH œV¼K;‚ ”ìL»Eh0"Ãx‘>&ðù'íV„ÁŠN]WAIÝ-Ùiô­œJk¥®Ò|¸!=Û¹¸±!çÔé¡]xƒè”Ær©­œ’]Œ~¯\ÊN©¾¡S²¸8ei#Ñò¹´øa¸4£ ›uá\št\ÚGoo.íÔKi?îá–¡ ¸´›ÔhËâÒNêv‹K»E'øýqi'e~ãô׉K'.-K©búM à²\šì|œK¹\ŒˆAY©‘…‘¹Ô)hÊÓ๩; JÌ/s±]tï'Ò'È©¥-ø ‰³ 6rM8Ñ0OÌD´|œè#~œÈ(èf]8Nd#] NôÑÛ'’sëœÑZÁ‘“¾EÞøh{{’(NF–j·K¿%N$UÁÓìdõÞ>®i꫎‘ßÿ‘£³ùÅè8ÁÁhï¬<Œƒ†8áÄ„Eá„gLà)&ä5$;³ñ  œàr©a{4Jj@¼Ê¦ÇÝ”¨éLJíL¨›º¶ÂŒ=ÙÛ﮹äÔi>Mg§]sò<1Ñòq¢øap"£ ›uá8‘t8ÑGooœ¨[¯ƒ‘{)7rÚ7«T¥±éDš$Áó«V¾…T§Ë‰Nê½ 'ê«ö}›è8½?œ`šAZµ˜c Ø '&œ( '¨bo©Í˜ì—ÉÎm&ê'¸Üïv–P>ø1W'|ôãV'bů¯8íðIv›g=çvXI¹!"÷/Ôá;áåšd§ö|ªfrJå[ˆ£:61Œ49ÍD´|†é#~†É(èf]8Ãd#] ÃôÑÛ›a’s`„ $ÉÎ|Žs¡ ¶éÍŸZªóZXª¯íla “T zY}xloþ¤«Få£),ÙÅöÇ0ì1­ˆxY™Ñ:L 31LQ yç?=Ñbö¨Z¾“òÌÜc:² ³àTÛ©Á“CP¹œ,;:#^4ÙiwƤž¡ œaˆ (¼•ìœiNZ‰·‡Ò4•ìÔ{låD§HãšâS™§ø]VNµì5'3¯”ì¬õ¥MH•‡èäÑU:÷Ø& ¢ãAïsÒ€Heœ°{')C5%wš& eMøüi˜1¼Ÿ7íñ;ÙEüÀ‹tþµ‹ b¾%;;êYܚèN¼ AKQTJí†RéuadØðÚJ‰•“Ýf:‚}<øLNÑ)/< Nv1âôàSz¢•‰hù>ûˆæÁgFA7ëÂ|f#]àƒÏ>z{?ødçi3¸7b/eiX9呯¨OnxðYKpªTWØ^ð¤J+…=ÅÉà‘mÞ¨£Cá1QŽŽV{Ly”?¯¥Ú²rò¬ÕëˆÐB=]壧úªi ù÷×ÖÑñû;Ù£öˆš7ƒÈÊ¢›vŒLàT8qÅ ¢1!»ë=ÙYoôÀà”óÿ°‡fÔœ<ä#ZcÀ *ãq:ûd®¶‹-÷QÉL©0KQw|~†à”Ÿß˜½žNX; ª8¿qtâÄ0 “ÓLDËg˜>â‡a˜Œ‚nÖ…3L6Ò2L½½¦vn8Õ¹ÜK­Æee*g]ÃÔX‹-¤ZUÃ$U¨ƒ^¯¶‹æ‘1 _5ol1XbóqìÃ3LRfͯœ¨,è)¯èÄ0…1 0hEbvñ§¶ƒ×mb.×óIéN"²s£æÕXñîr«›&jA)Ù©ÐêPw-œ$O…ñ~e¯Há!;ëÚ9‚S] à ÆüPšì”i·â„²S^[ã¼®’Ó[wa„ýýTuËh|É)Ùmyo|TDd§Q§NH‡~:À^œûg"Z>"ö? "ft³.³‘.ûè툵s~ÝØ‹½TTN{ôD •· GO¬%áÛH eí¬Ui‚q%U”\ˆØ-:~/F×-BŒQVöΡ"NˆX"jNšä´ ù#k;ëGD.7¢Vô¯Ô€5²8î2u­vû+Ê$ÀeQEaEm§Ú! 
ˆã5Áh”`íT«óë`ÍTšÈÏòäØN¡i…¥ÆˆN fµ6Ô$»à[í¾4Vtʽ7è¼à”{ïý¦ÿ­"W%”Å…‰å©&¢åbñÃbFA7ë 1é ±ÞÞ„ÈΑ;na¤¶3ãNVДþ·–QGq\c;]ØFÈ¤Š†*/ìJvÚ¨ÇEˆut8õ¦–£c6¦›“Gï¼5-îÛfj›‰'B, “ ‘1Ù)?ôd©\äíëùk;e͸‹ˆHÇwÛÑò›aÑÌgÁ¨í|h‡ˆÒË\¶² ‘ˆCÁ©u Ì^ÓÿvgÃt"¢89ÍD´|†é#~†É(èf]8Ãd#] ÃôÑÛ›a:õRÎŒ»Êe!V^71L7©Þ•Å0Ô{ÀÇÅ0Ý¢³ÇÖ;)e¦L~ÔÅ0T1cDPò!“]~#$•‹Ê‡ùüÙµs£fÁ †‰‘B¿a7áèÐ;õ’v{Mª×MœÛX¯™p¢až˜‰hù8ÑGü08‘QÐͺpœÈFº@œè£·7NtꥼwIÄUYÝ´i®›Ô-çqü¦8Áª´¢ §°|ŸÔo!þ(p¢ŽŽ÷ ¼ÞWÛm<'’Gc,ù¾im§'œ( '¨bFe›=Ø£¶3vèS¹4øÝ“;ò’Hô¨BNDM‡rJ JÉÎm*¥kO ÝÄuïÃ?¾8±zùùâröVÛY=·;û×?ÿõOn-÷¾î&žÿþÉW¯.—Ÿ/os#î~éXÐ!•óäo«sB…ú‡Æ•n%ü×{%Ü_T×¢’˜ž=½k‘;Šy§„Ÿ¡k)¶ovù£/ŽþFçû[òö×]îªf~8Ó³Õ q+ê¬êvz½Ka_œ-¯/4`ÏÞû9+óN!|ò|yúâ³ÕùO[\v¿³©n|ýô“m…Õ%-ÂòäèÈÂÑBZ<ò‡áð$òÃ\⋨vªOY¯;]Ç—Ëë‹[š=ÝWÓÍ5Áì$4[ìn÷ðÓå9W‡¥¥µË¥ÌŒ½µi>uv¹Q.wcôñë4¢Ï®x(?TþP{ÀççDT(•ùõW¼Ù%DmÜïÔöŸ|²|;yX2Í‘z•ùéµÜg4S¸8y¾¤¶yr=ß¹ÌA»•/~¡êñåòÅ’fá4…œÏ^¿~§÷­û‘꾃áé÷ȳzÀ8X?°ùæžÃ¸Ÿâ^ûþƒÔøÁÑÌéЪsÈÇ•âÆÃcµðËå‰uQ;2¦k{³ÛÝ£ÙÈâtõr;ßõ–¥êýùâœæŽ'MÏþç{ªTï|ö—óšïPò“þh-è*]ÕÁƒyÔÎýûºì/.ßö õÄÇ’zÙëRê~£þI½ÇºžåAœ[½Ñ{ìæ¡¾‡<×"õ/ß@¯²¾V|ýÃëƒs àâ`NýýÿŒ¦ ëßé7‚®‹›êkþ º³ZÓ-=Y¹ý@u'o>ø*ã@sßi]4¿ºHÿ[ߟ«ÓeõjqvÊuïÍ›öWÙvètÄ;Ž–; &OÏÎnoøQé|ç‘ÿ“3Dýóúàý;@#âíÍW«Ôuþ»óÙìj=ûûèææjut{CÃ1<›-.W÷úg¨?cã.wQ÷g,¯^Õw»ÀmO5|¥ÁÍçŸ*°]Ä–¯`yá,_VÖ¸è•à”ìŒnu¦ ¢Sèco§dçB»+¢S~š‰Æƒä”íŒÙëÊjrŠ^!SOmg¦•UqÉ,ÑòWVûˆfe5£ ›uá+«ÙH¸²ÚGoï•UvnZ’Š×vaܚљʡmXYMÀð ã²Tк¬•Õ¤J«èj¡>>®ãÊÖÑá´ ZŽŽö{ÌX™<òAëdenã]™ieuZY-aeÕsš>JÈX™ì „¡WV©ÜHc›Qœ“„qWV©gµqû‘ËŽÏÊ´  ']´c)c¨´Ç ƒ”0#Ùyß*‰•ò8A¸4Ùù°ßãÊØ©Õ&_Qœ´8I3âLD˧>⇧Œ‚nÖ…ƒS6Ò‚S½½Á)9'ߘ?çymgÇ=®Ì_ÃpJÔ KµË§¤Ê+m„CDë« ì ·::4c±ZŽŽß8)htpbNEºl+*sJMà4SYàøˆ.òå•Ë‚S²S8x–.—Ú,D¹iGš¥ûqó8¢qÎ4€Sà\‰ÖhƒQPÊ9µoÅ0 €SLý=ÒÐwšìlØïâO¬;¿à½Å¹Í /Ã4LN3-Ÿaúˆ†a2 ºYÎ0ÙHÈ0}ôöf˜ä\§ Kr/ÆÍ4¨u¥cÓkuµTGd©dTÃ$UV.‚¬Þ÷¸¦ŽN4ÚY9:vŸ “<«‚jqß‚š2 N SÃPÅŒÔ÷{oó Ãv>†Á³tDξ¡lP(΃#3j–[©+¥é˨…õy¶S¸‘¥#Ç0R^xäÔë1êApÊvÐΩZç zù§l§÷»ø“œò»MÂ!^É.†iñGœg"Z>8õ? 
8et³.œ²‘.œúèí Nì<(OL„b/ÔÈé ƒó•vMé “ú¬‘¥‚+쯤ÊÂîˆÚNÁã§tÕV[ݦZµÇíÉct¨„ÔúÉ.l4 œ&p*œÄ;gߟ$û GtˆCƒ•‹QçÁdc¦h7¦Bo 6qS NÖó´ ”ìbhõ¶‘µÂþ5¬"ï^Ó^ÀÊd[ÂZ6-¼!'ÜWyÐÙíwvÎÚýqÓÚi$2Uèeq¸‘Åxâ¦mâ|D 禞â়‚nÖ%s“éÒ¸©§Þ~Ütç\G…`Ä^ЬÆå&çuLØÆMwŒ ro!–”þN•µˆ:Êê-<¦·î®ÚQM´ GÇ™}-8Ýy䧦…²¸y,óÄM7ýÖÜTWLôÖ¼¿×øÛÙÙ Ü´.—š,µ ±ÛÆh·=¬ªtðà¬a€‰Fa‡V £§Kj·i΋ ƒŒQÆK˜@vZ·:OËÁ)Pg…ÊØ ó÷„í▤£‚Sç `Qº|Í85̈3-œúˆœ2 ºYNÙHN}ôö§Ú¹ƒ˜ÛO|o·mÍ~È4 !T$¦œj bl#Õ™²À)©ŠŽföQVì#'ºj¯q“Gr²3và”<òûÀÆÉÊŒvêMàT8A:ÏJ„<8%;âÐàÄå ‹bBC]äÈ N†ZéYÃøâ´â·Y4a'; ºÂd3ÍQaºRšj8á¹L²sªUj‹¢SÞ¼³^pj´ÕíòQ8%:¥Òt¹}wvNïw•‹¾F´(‹#Z›`Mš…g"Z>¬õ? ¬et³.Ö²‘.Öúèí k]z)PG^å‚ Ñ7ÀZ7©Ë‚µNêÿŸ½3Ù¡åÆÍð«xŸ  Š“˜,È: /‚^8= ƒ´ÓÒ¯Ju»sì{J¬B |Û›{éâ_<¥áÓ@§írŠŽO•žƒµê‘J½YµCÑ,~GÔ¤œ‚3ÈÕŽñáÒÕi©§ùCqjyfä ":½êŒøkÖ«: ŽY¾^Õô€ëUgôž^¯jαæä)a/UÞ$йt½*/Fô¾ÌÐß$ˆú„\c©¯Y[†À‰ªÊy°Žë±zø,œð·V¨‡s4çp;}©H{;NTY˜Ã9†þàãĉ‰cà'6¨+øN¸B¹'|~^KTjÔ´«]ºõ€tYJJÉlk€©Ééê˨©×œÅ¸ë’'çàÜ0{¿‘©¸Ž™äQ†iâKaŽÄiBÖÉ0Ñä´ÑñæŒøk¦£à˜õà Óô€ sFïi†Y[Ó¸—"¹7;, ÒÆ±®&¡ÔÊá;¤²å±æz¥üY ÓÞÚ2ú#ŽNy©T~;ÃTPÏ=J •¿Àd˜É0C1 ׬3‰ëá÷.ÃT;µ”¯f˜ú\#%“pl†%ÝY‹Ûk[Þ`Ø1¡Ôm÷@©ÛÁ¾´¢Œ!ÃøÃPügαS¿Ë)…NMýc85àiW.SæÀ©´R©?6;&¥µæ”êèœcq(ó„t8 ïDt|Z;#þZë(8f=8­u#= ­Ñ{šÖVç…1ï-Ž¢‹ÁÆécJu°ŒÂœ Ât³aº‘aÎè=0͹"øø÷R"éÞKž–TÚ`˜CR5éX ÓTIL9V_>-QM}kL8¸ ¶F§<È0MYv(2—7;xIŠ8f2Ì £ '‡ˆÜOT³Úå˦>s–œRЀÜŒî<4— ï]ß3L©MX…RP1§ÙIy6ïeuê?a’à‚P³›E Âyb'¢ããÄñ×àDGÁ1ëÁq¢éqâŒÞÓ8Ñœ#9cV;¸7ï¥0.m'V Å8X­ ­u³œÖº‘ÖÎè=MkÍ9€QP)uù.Iò•U 2.™·JÊ­R¹ˆr,õ]žÿŸ”Öš*Ÿ*@²X}þ´lí­ý¹­­véÁÍŸæQJÝ›Š•IšØ&­EkV7UøMª–oø» ^MkÖ21 óbÔñ¹ÝkòÞ[6XEÒÆm#[ŠÖª–Ì(u;È»ÀIRNfB˜Jà´Ú%ÜåBp2ÍP tª9¥]÷ª$È i¡¬õCèÞXí°ìJH!ØÓú0(–‰"§*IÓ“ˆ¸:%QÍ9‡e&¤ˆæþ½ˆˆ§Ä_‚ˆ=ǬÇFÄ~¤ÇCÄSzÏ"âêœÛû¸—"»7!‘,˜7Š@|‘ZO‡íÊP†BÄUUMÏVÒõ峒ꋎèsE šÇz:¨E¼Ú¥tw-NÔšòN¬R3eÙ!•ËÑýEU¡¢´C=}ÖŽÓúÖ‚û»‰_ì^ÝŽÕcñé1÷o ¯ÊìeqvâÄĉpïÞÙŠ|½eúí?`·#¸'êsÍê5Ë4 ·S¹3¿æ%“ùø±…%×\;ÆÑSr"ÚujN(Øɵ!ÉØOɲÚ!=zjnuê¿I[í„e2L49íDt|†9#þ†é(8f=8Ãt#= ÜÑ{šaõRŠ73LÊ ÃÃ4 …¹Ëæ_^i¬Ss«*ŸºøûÅê‹|ÃŠŽ½f¾›aªGCŸú(†Êlæèž 3ÃÔ.µ°fÎÝ;N«ÙÕu†êsëý]rŒ Ûq¹³.69}@öh|¿1À¨™ï¸‚|µÝw¬‹†ÁÚo˜9=õGµfWøÑ´oÕiIÙ?Õ°š¼$Ÿ ³19íDt|†9#þ†é(8f=8Ãt#= ÜÑ{šaõRï=Ö…¥,À©«Jå±ê ­ª9[Ú¡Þø³¦½5qâàDÅEyašÇ’³ûgÍNM†™ 3Ã`]òNÓ¾Î[øí?`·Ë×ïÃÔçÐ~N€ÕŽ î܇¡Å‡0Ô÷©«]AMˆ4`ÜŽyßÍ 
æ€SÉûÀI§´$EòÞª_vµCÞUÜH‚äþ0çGðÿ'¸„Óìì»ãdá›Öoº&ääÀ©Û)ïºã¤éJ§Ev…W!vª˜sÃÍ.Ѿ7ÍáoZ]’E_o³“}MF1|Sòž³ÞÒÓÀiýØ ?ŠýÕ)›& ›–Û½.Mìßà¹NDÇÇþ3â¯ÁþŽ‚cÖƒc7Òbÿ½§±ÿP/%póÖå5ÛÅÆaûcRÉÆÂþCêÒgaÿ¡è¸€ç°¿zà”Be:·.'ö…ý´°{·d"]ìov„W—nÏ­õL0h@n*÷n]r®Àð~€áµl^í7u^+LÚ.† *ýÖ‡Õ~ƒr?ËH³ãbÏ2L—-e±PœdÈ“a¢Éi'¢ã3Ìñ×0LGÁ1ëÁ¦éæŒÞÓ ÓœûìQû™ÏW;‚tsÒ–¹jcëò˜T¦±æzæòY ÓÞZ)ip{£ÙÉ“Ç/«G.Ú¯Xµ*³yür2Ì` Ãíø#)]#áÛ~À Æ×¿¬þ¡ži n‡6;,7o]š‡|£ên.óˆú )ÕN­XuH\I‡Ï4 ®—&ïÌHA´ ˜ÁÆž».§LÖ¯j‡FÛ¥‚ó»ßÿµ~üýï–ïþð_uì÷îé¿ËŸêò¿ðŸ¿þówøÍ¿ºÑ¿ÿò·¿þþ»ÚIýê×ø£ÏÜ¿L(½ix´þÑCþÿ#éúǘÿ៿©óûÀÇËã7ÞT©f é¯ÞT;F]›?lþhíIkr’À©Ûå$ÒZsjÙ §/ç-&­mLÃ;ŸÖΈ¿†Ö: ŽYNkÝHHkgôž¦5wn)© qÔK¹Ñ­´VÈaÚ µ&¡.¢ÆR!vo­©Ê>Ž•¯vY>‹ÖÚ[# #ÆÑÁü ­5" ˜ce¯÷J&­MZÖ´LS(ojöüˆAÜŽ¯¿·VŸ[RV°¨ã㬜nͨ *RÙ¤5ÖLþ«ÄJ5gÝw€MB†…ĤNE“Á®{kª{œRʱSÀGõ¢{ke¡zŽ2½)tý§Õ®m–oVŸqüËwÿóKo]¿úûõ¦oþú[Ÿ$ù7û_­gý‡õ¯NÊðyD:"#Ý%c3Þf~ñÍ·ÍûŸ¼e®­ä´»ÍÚÙÞuUýÓ_~÷ço~ÿ›önÿôÍüâ½CŸÓªu³W;gK…5§*E cq2«IÇÐÕ‰èøl~Fü5lÞQpÌzp6ïFz@6?£÷4›ê¥ôæœ2\¬^/Û`ócRy°¼˜M•O9À4V_?‹ÍÛ[“ÅÕîÉÜþÕcÎõZGü»e8»“úç?þe¢ùDó Ѽ8òRª¥Âúi1«¾æ]¼Íës5·*€Aû©vxgZL¬·ÂÄì}1iW`V_EJÝ.im|qUþ#kN±úüõ"ûÏ}|9„ôäøâ¥Ô½ãX¿ljÏñeŽ/Œ/¶¤œ,—þñ“jçóZ¾z|©ÏE,ýùY³ËVî_ ¦,sÎz?¾X›IzK·(u»ôõÕ¬[W«šSöyBºÙÍkÄá2D'¢ã¯VÍjUGÁ1ëÁW«º‘pµêŒÞÓ«U͹$–¬q/ÅŒ7_#ÖŽl¬V“úîéOI«ú:N±z¡K…ÔÞº@=°GG󃩪GôÙœ›]âI“&£‰ša¨ž¿¢€&Ü.‘]Oœ‹÷Ú\$h?nÇYî]­ò¦L™rª-˜ÛYÅžÒÕÎcú$M¬NUü£(±8š4L{ž&N‰¿„&z ŽYMýHG§ôž¥‰Õy±”KŽ{©×Œ–÷Ü"Ö…·n”*cÝ"nª(ù<¡?ª®ê ì£hâKtTsÿÎÁ»¬ÑÄê)kÿ@ßj—iÒĤ‰‘h¿KÆ VrÑM|±ƒ«sµç¢hÿñjÇ…î¤ [jÙú÷4µ×bý¼ªÍMqºŠ#Vë×n^í<Þ“&¢ib'¢ãÓÄñ×ÐDGÁ1ëÁi¢éiâŒÞÓ4Ñœ3ZÜ…’÷Ò„ä¥Ù ‰CRƺ庪jw‹h‡zù¬[®ë[« æqtôå·½&ªG(%TÆé¥ÒÖ¤‰IÐD«L öi¢ÚÁÕUÚs‘´`8 v;Ÿ¤ßH¹,E'6h"{ Î\úš[>z/ï8¦Mœ Mt³œ&º‘&Îè=MÇz©Rî¥ ÒóM’Êi¬{yÕógt:y¹½s;MQ&ø²b9ibÒÄ4‘ë½¼bZ‚½‰f§/y¡/¢‰ú\ v›]ηÖKH‹:ïËû”9½+î¯VU;VÂGi¢:-©fÖ‹ÅÍz {¦‰ˆŽOgÄ_CǬ§‰n¤¤‰3zOÓDs^/DǽTNv+Mä–}2Ávo_ЇÆ`\ZíEM!Ø1Pú°½‰öÖ †”ãèpz.çêѤhÙÑ@^O¨Mš˜41M ÏÒM hêÞÂ^í˜.ß›ðç"äZô,êµÝT契ºó¶ª¯eZ8%$‘Òß=¦6¾”giâˆ8Ó4i"œ&v":>Mœ Mt³œ&º‘&Îè=MÇz©›óùÂ%+ïìMG»6qH}áÏ*¾v,:–ܚدÌí²¦ &F‚ ª×j Ã.L4»‚W_ÂnÏ.‚˫ݽ)eña¡ÈÆb·–…µ›bµy4¥ÓêÔ¿ ýš¼Ô;™0±1KìDt|˜8#þ˜è(8f=8Lt#= LœÑ{&õRšÒÍxAÜHétPê›kÌ?)M4UÅ-±úBŸ•€¼½5$R‘#¹=˜ vUF™¸_€tµC˜×&&M Eþ]rÊ V¬Knç W'ˆmþkvXã¨ý0¼KewMpZüY6®Mˆ·àœÌÈú-½ÚÁkñª'h¢‰ãZ݉Cq™^ lÒÄÆ4±ÑñiâŒøkh¢£à˜õà4Ñô€4qFïišXs!Õ¸—â|oJ'Á¼d² 
ibDï0Mì QÀö(e)^[5ÀÆz°7ñ!ÁXëõ&>¿ÌEY:4¦zÔÝ.È{åtÚß:²À “¥'÷Õ›Ø[L¤Ì¡­ÌqxÑÄ¢‰™h"g¶ f–¯o›}úâûEJϦ‰ü\Î×ÀÛ£6&ÕxåÞDÜ,Zäç4½ç RP?éTìèæ ±¥Qbª_êØ_¬-š8+Ÿ&FÄŸC}Ö“ÓDÕÓÒĈÞašè¥\œÓ rN'<í_—ç:éô¡*ü ^ÞëÞÄþÖ˜8„öLÎeÎ/§‰Ò"KJð‚2§ŽE‹&f¢ ÿ.=”G¬Ö›Øí$ž]µ<—£øàË­þƒü$ÍÇ™4¶îãóù½[@€ÆI§l'Š÷ÒD8凗‹&ÂÄŠG秉ñçÐDEAŸõä4Qõô„41¢w˜&¼qËGÖ¡=JiÒ«i"ÐñhÿªTðÀn®[Ø}ê…ÞëÞDŸwôá†ýå4Ñ£ píM,š˜‹&0×qP…DV¥‰bÇv:Mäç2&Ávÿ!ŽWÒDÚ’¢Ô› Êópdc®+-vÀ·Ö›Øå|Þ2¶Å=?]4q&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4QW‰XOþ!RÒµ·°£lHœtê’*0Mô©ç7;éTÞÚBhÛ½¨7Þ› Ì/Qc£Xün÷c,šX41Møw‰…£Uoaïvð°sMP®ŠÐCjõ ¬píÞ„1ðó”N”¼GãXßD)vùV˜(*˜G mqqÁD+J¬xt~˜LTôYOUOO#z‡a¢4n€P¯Øó!’¯… æ¸y8y»TQOŠôù•d.˜ÈªÐ1à…¹Ê¾ÎÄûÛ†‰âPàLìvwt*-rhnM»L,˜˜ &ü»d˃ÿ×¥Ü?}ñý²9;Al~®ññ¸q/¯Øñ×W›O„‰”—ªÐ2:ñ–/LPLXšíði× a¢G‘­ü°Í(±âÑùabDü90QQÐg=9LT==!LŒè†‰Ò¸öÆÒ¥RºxgÂtS>¨]×'•Ñæ‚‰¬*¡š@[}Ьïå­ $m{2½_¹ECÄÐTÆaíL,˜˜ &ü»LQ9W3ªÂD±#<=£S~nR&KÐê?‰äÒ[¶ fáàœ“äÌ&V¯y_ìHYo¥‰q)®jí0±âÑùibDü94QQÐg=9MT==!MŒè¦‰¾QÊðâü°aƒ€4Ñ%Ÿå-ÿ5i¢Oý“<‰¿išèòÎ/ò&]M]ÊØÖ­‰ESÑ„— 1Z¨W›Øíàü­‰ü\ï1"´úOr¡ébšš`B½ *IãœÓn÷žs*jŠ­üÙÅNxÁD3J¬xt~˜LTôYOUOO#z‡a"7®jA𣔆ÄÂÆ‹MäR©ÀÉuh[*¤É¶&Šz5$K-õn÷f…°÷·¶ìm{ÇÒ]]¹Åh„’š1†ÛZ0±`b&˜ðï’Ù»EJR… ·Kšrlœ•ö¿ì?ê•07ïÆÞÆM˜åt ¬­žîva¶„E•»Sh«‡Þm~éðN|ø o˜_,ßiJ¡q©¨Ø‘­Ò¨k~™j~±m´ŠïvœîM\a9(KQ©V;‚°Ö`Zp]ñèük0#âÏYƒ©(賞| ¦êé ×`Fô¯Áì›ס=J¥xíñPÜœÂí( žy-Ê"P?èTìR:ý S~®Yþ¡°á*·£tåÖªl”ØPŽ]õºT›ìZ^—úôõ­ß6Nty‡ã×ò¼EÄÀ,Ä–2¥Çš¶ 'NLþU¦ÀA[ŒÛùmˆ÷òú¤òd'iwUë[÷êõ½hbkUS}a² ·ÑDi1ÇcHmevÑÄ¢‰_Ÿ&òwÉM£A&v»„gg ,ÏÍC6(µúƒ‘^º\µ9+¸Äç4¹'#©WËÛí"†[i"7êÒ¨SìÂÃþEabÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L¹q¦ro¢=J‘àÅÕŒ|Ú…ƒJØ}RÌu”¶S½Ù{ÑD~kI"Œ©é!»ï¨ÓÞ¢IhÔvÜíTÓ¢‰E3Ñähž0ú×Y¥‰b‡§‰N¢ (GišãžÛ…‹³|¨ä[+Ïç—˜G^–\ ¯ª´Ø‘¥[i"7ÊA¤µqRÄÙÃ=’EabÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L¥qð©Ë´9Jå5®ksªX‡{»TÈ ›ÚRý•梉¬Jb$ áõé½²|ôy'Þ˜å£OYµ7±hb*šˆ[F Ìzãûu;ˆ__œÈ‘ç—£æ_üÏ?ÿî÷ß—þïŽ 1WjlYø°4ôÐt–g1÷÷úá¿øKîMüý)Ãc™8¿ù‡¿üÏß÷í¿þß_~„~¿æ?»Î?ÿà½î[~÷ÓŸþøÝ·“}ë=ù§Ÿü7ÿîÛúá'Xö‡ÇÓnù_?üøÍß~øÝWsÊŸ>ÿù7?þùû¿ùôÓç·øiûæ_ÊGâ ÿÁÍLÿýûÿp@Ë/ãCÅ_½‘íÛö#É‘ýÿö9"þy üxïö}´uù…äIôÐùuå\ôyôt7ôˆŒt3%¾.­2U/„ÿ‡ýÿ@‰¿ø³(ñPAŸõô”Xñô””ø÷ë=sãdF/ŒRõÚ\7iG{ŸTüŸÁ¸ºzšíØB<{õÔŸ› ņ«$ÇÙ+ ®Ä¸©¹;ŽVO{”j˜§;Ô ‡wãéïXwò´$D„šÊ€ÓJt³xz2žVÐÄ1ÖÝìvø°tÚü¢Ñ0x'âFÿѨ‘®¼9$âÔ*ö|~Á¼r†‘\GUi¶Ë¹{nå®"ŽM´Åù‡C‹»ZuÅ£ós׈øs¸«¢ 
ÏzrîªzzBîÑ;Ì]]£\|Ö/zPYí_–Ês¥5ÛU™r¬§ÛíÞì¬_—wôγ~¹E¥‚„¦2 ‹&ML@˜ÏÐp£¢×nÍœDù¹J­Õò»^œƒŸ•ñà╎®15;Ô{Ó”F]Ê â$ႉV”Xñèü01"þ˜¨(賞&ªžž&FôÃDiÜØc{”zܹ& o>–lâôI¥¹’šUÌ(¦¦z÷ó{•Þß:æ5Lk{îƒ o!÷M£¶²L,˜˜ &ü»ääñ¥G˜U˜Èv¿(z{LäçªY‚F‚‘ÝîÒ¤fh9ùL$ïè!§kh졤ÒÑáÞ{C]âôAÜ‚‰ƒ(±âÑùabDü90QQÐg=9LT==!LŒè†‰®QÊL®=f¸‘¦˜èŠ!̶3‘UAЄ‹zxržì7 Å;s.ñ¦wÜ97ÞÚ•á+ÊVuàSÁD*¹ÂLÃ×KUŸ¾ø~ÝNß™ÈÏõ1/A½ZÁ‡_ i 1>/è•8¼ ¾ÜSì€îÍiV-µ-ŽâÚšh†‰ÎO#âÏ¡‰Š‚>ëÉi¢êé ibDï0MôR|íÖ l$p@}Re²œf»ú¼ÚêSä÷¢‰òÖìSycÛi÷Îcâ«i"·è‘8¶•)¯œf‹&¦¢ .9Í×oå;´³ z•碩F‚Vÿq;¼´ n”ö&$÷ôHlPß„”}EãÞ½‰Ò(Y@„¶8ÔEÍ0±âÑùibDü94QQÐg=9MT==!MŒè¦‰®QŠÒÅ4ás °DŸP™,£YQ%ù’±µÕ'¤÷b‰.ï0ßxÌ)·ƒúg÷‚2{¬³Xb±Ä¯Ïþ]J®bB_gýýôÅ÷+ pvíÆò\Tc­×nÜí8]š9n ÐÜÓ½«®¯U» |+K”FEÒ‡ëv;H¬xt~–KTôYOÎUOOÈ#z‡Y¢k”z¬0{ K¨n|¸3Ñ'•'£‰.õ)¼Y~äòÖœ2¾à»¯|iƒwžF•œ¢ÌhsZ41Mè¦IéëëHŸ~ùýº¦ÓiŸ0ßoôáéU·óòúôB ñ(a yfÔœ¿*5Û%T¼'ºÄñJ¤ÛŽ+'FÄŸƒ}Ö“ãDÕÓâĈÞaœè¥”®Å H›Ïµ8Ñ%UÍ…}êŸ\úøMãDw8È r‹ •^PWñÆ…sá„—ò®CªoN»@§oÌÏ…”¹'ŠÝ“êÀç^›FŸžÐ„¹²ÜƒXH•ÖŸíðÎô°ŸUVEl‹“°JÁ×ÃĺG'§‰Añ'ÐD]AŸõÌ4Ñòôl41¨wŒ&zG)%¾¶x#Äéé%ìŸ%””N/H}6/ýj4ñ¡Êe“¶z{«ŒN?{ǘj~¶ã»6'>·ÈìOj+£°ŠM,š˜‡&öïR0QÎzLŸíÀøTšØŸ«òJT3FõÊ£N jHÄç4el‘”jee>ì\è½4QÄQR ©)N×µ‰f˜Xñèü41"þš¨(賞œ&ªžž&FôÓDi<åiR ®=ꔈ6Ÿxhb—àbÅ^ú,9ȯI°G¥J@[=?)áô›¦‰òÖ‚!l{G@ܢ‰IÛÊ4ࢉESÑä=‡„J‰«4QìèáüÎI4‘ŸA]@³ÿ¸]¸4¥m h"æ±E£Xco"îcÝJ¹QÿIšâ#,šh…‰ÎO#âÏ¡‰Š‚>ëÉi¢êé ibDï0M”ÆÉÔÚ£ñµ{‡%Ã?KHì‘6¶¥&”¹h¢¨â\ÜŒÛê9Á{ÑDykõ.@/|†’Â}4‘[´˜Sg¶c YÕ&MLE1GéYk'>ìòuí³i"?ÉdH­þãvA¯¥ es‰Ïi÷ù%¡ÕW«ðcʼ•&r£„¬e»­½‰f˜Xñèü41"þš¨(賞œ&ªžž&FôÓDi”ˆ¸=J_[Û#æMô&Š‚ˆHµœ?ÛÁd0QTa®QÚê1à{ÁDyk%ü±/xÇâ}0QZŠJ±­LÂÊé´`b*˜ðïRCJ \ßšØíB8&òscŠ!p³ÿh$¹ô “·‘„™Ž`Â,¦œJJÝn®,ŸU Fn«rÿñ·>¿ø[kNøÂo«"wÎ/f*ª¡© á!¥æš_Öü2ÁüB[SE¡úAÚbáôƒ´þÜHA êTí@0\8¿Ä|õ/_ž}>¿ä"Èæœ§™°ØQä[«ºÄ±®­ïæ*DÅ£ó/Vˆ?g±ª¢ ÏzòŪª§'\¬Ñ;¼XÕ5J áÅùÇqÓ£[y}Jy²[y]êUå½`b÷ŽŸ¥ÛÞ1äû`Â[¤@þ?HKŒ &LL‘S dÑ0áv!Àù0óa%KØìÙÌ(—ÂoIcÀƒÅªä=’½åžlRº÷m—8åµóÝŒ+&FÄŸ}Ö“ÃDÕÓÂĈÞa˜è¥ ÈÅ·òP7Á#šè“j<Mt©y³s´]Þ‰3ùå4Ñ¥,ÁÚú^41M¤rB„õs´Ù.ÚÃmà“h"?W-€5²÷d;zerä¼½ôàV—̌׷G‹¥{s|”F-/ˆÓ¸Ê5ÃÄŠG秉ñçÐDEAŸõä4Qõô„41¢w˜&öÆsýƒØ¥ Â¥4AÌD< ‰>©Ï¶ÌMšÈª|&óŸ™_PÏö^4Ñã7ºñ SiHcפØEY9>MLE\ŠŽú_göÿôÅ÷›3 ÚÙ4‘Ÿ›‚¿i#%m¶#»öV^ÜL@âAþqñLSlìBf»œ®éVš(â07.7;‡M´ÂÄŠG秉ñçÐDEAŸõä4Qõô„41¢w˜&öÆQÚC(!^[Í(iØè0c`ŸÔ¤sÑDQE 
ŠÖVOðf'Ê[§háïØ4QZTÍAF[™àÊñ±hb*š’;¿^qÿôÅ÷[î6œMR(A’nõ·‹éJš[^‹b{Nš{°c6zz±Kf·ÒDn4Ïj@Mq Â:éÔ +Ÿ&FÄŸC}Ö“ÓDÕÓÒĈÞaš(G(Ú¥"¦‹s|èFtEú@ÙH?¾+µÉÒUÄô…Ù€ðÍŠ•·N|ªo{'Åa¢´¨ÉBcMp·[0±`b&˜Ð|€)_ˆh$ ,vIO?蔟K¥JP3Dw»p%LDÝ,˜xVÆÖÔÈæSì(Æ[aÂöa¬‘€¤ØÉÚšhG‰Î#âÏ‰Š‚>ëÉa¢êé abDï0LäÆ9Hl­÷‘&צ'ƒÍðˆ&z¤r˜mk¢¨4Qk«‡Þ‹&Ê[G&"¼à;:•IÁ£š¶2|Ü4Y4±hâ×§‰œÅñ“ùÓ߯å|zÆÀüÜjÔ(8Zì‚áµNé)M@Ø{::{Õ”»dxë%ì]œDïX±)Žùá¼Ú¢‰çabÍ£ÓÓÄøSh¢¦ Ïznš¨{z>šÒ;J{ãjIYÚ£”¢\Jq”ç4±K°|¥à©FSÑDQ%DÚêß«4ê‡wü/¨=YJH÷¥tÚ[ÄDB[YÔµ7±hb&šÈߥåKÃ"Õ½‰b*g_›(Ïå}‚kÆèFö¤ˆÛ‰ùÇu žç‡Øçôߥªö©ðÖbF¥Qñ¹94Å <œ[4q&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4QG¢Ô¥b¸–&\ÇÆ 4Ñ'Uæ:éô¡Þ$·Õ£¤÷¢‰òÖ>—Çz¥ª»‡Éòrš(- ˆê _?$P^4±hbšðïRL”ª bw;y8ÉzMøs€$iõ…ðl è¼k¸©Oaé€&¢÷`”WÖªJ‹ÝãŠÆ4Q%qm؇²R:5ÃÄŠG秉ñçÐDEAŸõä4Qõô„41¢w˜&ºF)J§t º1Æš(|&6¢¤Ê\µë>Ôç”êÿËÞÙôZrgø¯håÚ¬O²AU¶ÙÈ&G–=m)–íüýÙ7ÂßÓä%ØÝ"|(fTh¾]§ùñ¬ª¨x­›NooK~`&—‡[l—ÓDiÑ?¹ˆP–xa/š˜Š&p# *BUšp;0³³ƒ°Kû¹ÜEkŸ Ûñ³3Ùóh"úz8úT÷<È{°!i¬W¯Þí‚É­4Qå|‰ ¶Å1‡E­ebÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L¥ñèÓ[àö(¥ïËKŸK‰7Ÿxh¢HH¬žøÿMjš«ÜĮʂOUÖVŸâ‹ÑÄîñ>mïØc¬óÕ4‘«³$`lþn€uÑÄ¢‰™h‚rt³€šÕJØ»r<›&òsEc’öÝíðÒâu²‘7óœ&8÷àˆ$›´Å.X¸•&ºÄ-šøÀ2±âÑùibDü94QQÐg=9MT==!MŒè¦‰®QJ//7ñ ÜD§Ô'~QšèR¯ðZå&ö·6b_ݵ½áƸ‰Ü"Hbl*~ÈI³hbÑÄ4áße"eF©–›ØíÐÎŽÂ.ÏU#‘zÎÀÝN1][¼.°iÐç4!e J­¾£Qì¢Ð­4‘ ͨbµ0M,+Ÿ&FÄŸC}Ö“ÓDÕÓÒĈÞaš(S’ íQ ÓµgÂa³£r}R)Ĺh¢¨b XÏþ¦^éµhb÷ŽF%i{‡ùÆ›N¥E#ÀÚʒࢉE3ÑD)E!‡Wi¢ØÑÙ4‘Ÿk|M‚­þÍyæÚ ±ÞEõyí:ÿ9Qƒ…F>’b‡|oviTY¡q… Ø=–ÂX0q°J¬xt~˜LTôYOUOO#z‡a¢4nìÓµG)Õk«M0Úáèh¢OªÍ• ¶O}¤;šèòÎcâ™Ëa"·è/œß¼©Œ ¬± &¦‚ ÿ.gNHu˜ÈvH¨gÃDiŸ©ã»]¸2¥Á¦îy<ÂŽ¹3"5°§ØÞ{4QÕ$ÐÈ,·ÛÁ¢‰æ2±âÑùibDü94QQÐg=9MT==!MŒè¦‰Òx¢óbáÚ l2óñþ(¥SŸÔÉÊM¼©×„T»¿V%ìý­M§Ôöέa¹EöFª—LÜíp•›X41Mä Ó)XªVÂ.vÑôô±þÜÀA¡^¼n· réÑ„ üœ&Réé>øZ}~Ie Â[+aw‰c„•Ò©¹L¬xt~šMTôYONUOOH#z‡i¢o”J׆Mê&xD]R)„¹h¢¨bãøõübú¼óPâêrš(-æÅXcDzةð¢‰E3ÑDÊ«ô\ŽëgÅNñìRØå¹˜|ýªÍþ“02_{щɑ%Ñ„戵FE‡lç•Ì6¿¸zDÐ(Mõ¯6¿txß9¿x‹šµ•iX)×ü2Õüb[NQ“·WêayÅ.Èé Èós9†œÁ¿ÚŠò¥g߸åRhéùôb¾@Ÿd[BÝîq?èŽÍªÜ¨úÓ*µTÄÉÃ-ßµYu° Qñèü›U#âÏÙ¬ª(賞|³ªêé 7«FôoVuR*çø`Û$]¤í’ê“õ\0Ñ¥Þ^-c`w4[iªÇ^Œ&Ê[C¢¨ðï<Þ'ºš&J‹® ¥½Æ@ ´hbÑÄL4áߥ%@ŽZ§‰l§ˆgW3òçúÛ˜5úÛ%Ó+Ï&ÂÆˆž§ D*=8k¨A´U·ÖFÝÕhüq%fM,+Ÿ&FÄŸC}Ö“ÓDÕÓÒĈÞaš(G 飔]\ÍÈçÿvh¢Kê³Þ¿(MU š(vI^+ù›wU?à»±6ji‘ÈqBBS¯ ìESÑ„—ƬA¬7±Û¡œ2°q¤zÅ Ý.¤{ã&J£1²¶Å=Æ7.š8X&V<:?MŒˆ?‡&* 
ú¬'§‰ª§'¤‰½Ã4QO¬^&áM$]…­*>§ØMìRU áRu®Ú¨»*Ÿ/¹ÁBû[Ú‹ÑD~k1ĶwìÆ”»2"NÐ^cøjlÝtZ41MðF a5y± ¦g×F-Ï¥ H”ý'ŸaÈ•gd$Ðç4!{ŽØƒŠ€ÞJ¥Q ð$üå½8¦…Ý\&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4±7Nì#£”¥‹o:ñfáè¦S—T_–ÎEEUŽòÃÌU¢¯•ÓéÍ;I¡^‚ôÍo<›È-æ\3C[™=ÜÔX4±hbšÈû3ÁŸBªUš(vN§‰ü\ Îh·lFéÊœNÑ×ÃJr7¡ÞƒÕq#5â&²O/·Gí§AM´–‰ÎO#âÏ¡‰Š‚>ëÉi¢êé ibDï0MtR1\{ÓIƒl)ƚ蓊<Mt©OŒ¯E]Þ1¹1n¢(㘨qÅ ØQZ9MLEºQ.±€QªåŒv;±³3Ä–çFD!iõl·ƒhWÆMÀÆî{;ˆ›ˆÞƒ-‰Öw4²]»—&zÄù'#‹&ZËÄŠG秉ñçÐDEAŸõä4Qõô„41¢w˜&ºF)»”&`CŠ!ÉøhŸX'‹›èR/øbQØ}Þy¸v9Mô(³HaÑÄ¢‰™h"æÌ¯Â> ÖÏ&Šų‹£–ç:¤môÊÈ/-^ç}4‚Dx>¿¤ÜƒCLV/ãZì’ɽ4Qı‘(7ÅãºéÔ\&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4Ñ7J¥‹£°%ÜÖƒ³‰"A@Rú€T i.š(ªòJXìêõÅêMtyGï¬7á-FòÕXäÐR ºé´hb*šHŠ©½¯—òí—߯۱žÓ)?—}7ŠØ•õ&”7ÍqÏa¼çB?ÖHR]ìˆî…‰Ü($M­³ÛbÇa%ˆm®+&FÄŸ}Ö“ÃDÕÓÂĈÞa˜è¥œ8®>š`‰¬Ç£ý‡¥êl0ѧ>¾LtyÇ×õ÷ÁDn1ˆR[™›­ ìSÁ„mœs¿¾¿hôí—߯ÛA’³a"?WA@-4úHº4A,mî ÏS:QØÇ–ëR»Þ{Ñ©4Š˜R#¥Ó›Ý*…Ý\&Ö<:=M ‰?…&j ú¬ç¦‰º§ç£‰!½£4±7N‘¤^é÷ÍNäÚ°‰È›Ä`ï „0µ•òûäS¿(L쪔AÒf…×Êè´¿u$sh{Ç;Ëm0QZ$ (¡½Ä dž &&‚ ÿ.‰}b@Õ¨‰b¤z2L”çúÛj£L®S /„‰è}T"‹=‡ (g1XªÏ„Ùo­]Wú @S®£‰æ*±âÑùabDü90QQÐg=9LT==!LŒè†‰¾QÊÂÅG!%Wr<ÚT*šëh¢O=½MôyçÆ{N¹ÅT‚ÃU[Êèºç´hb.š€“æýê*Md»€étš¨´ÿoß¾÷í+3:qÚ¢¿$Ì/˜Ç AêÕ&v;xvJ!M”FUØa¨-NV~Øö2±âÑùibDü94QQÐg=9MT==!MŒè¦‰®Qê±Âæ4Á 6N|p6Ñ'•Ã\4QT9i=³É›ú«6±¿µ¡Äˆm︀ûh"·(H`Me M,š˜‰&r…ify_NèÛ/¿ßƒ­g_t*Ï•èZ£v¶£K«M@ÎËÏ:yΑ[VOà½ÛQ¼µt]Ÿ¸H¼`¢µJ¬xt~˜LTôYOUOO#z‡a¢k”Jñâl -ÁA!ìN©6WzØ¢*‹©±ù¾¿¥Åׂ‰âˆØöN„pãE§Ò¢æW’¶2æu4±`b*˜ ò©Ÿ#8Vab·8&òs}ÐN>l7úO¶»´tà–=zDfsö«ÐPj†Ši¶ù¥C}|µù¥Ã;IàÎùåãÊÖü²æ—¹æÞBÎðAOŠ1¿»tzT^yn´¨±±YµÛ]›0Pòñ¾ÙÁüÂ&×TSC©¯$í}±¼Kw«zÄ¥`õôãùçü÷ßz—üíßžküô»OŸöªþøÃŸwðÿŠþHq‹¬ŠdH?}õûO>¾þáÓ?åWþü''Þ¿“.ÿ$òï“SÁû—Ñzïê{Õÿõ¯îsZ}¼ÿiü?gÜÿ×ùêÇ~øÃWÿûù/¿ÿ*üüw?ýõ7ÿí“áOûÆÀÿ8‡ÿåç·-³èÏÃÛŸ·Ÿ¡2¼ùßüú¯?þîÏŸ|\þÏÏúþ‡_ÿ ~íüñ÷ÿüÉ'Ùo>ýñ;å_ý×>Ø}ãÃ~ÞùËãï>÷ͧÄß}úÍ÷ôõo¿û¯ï¿öE^úú7ú ¾þÞ¾£Oú}@ ü«·ßý›}Pyê-IÌ Xm-vÎhaíU¶6¡*¯rDü9{•}Ö“ïUV==á^åˆÞá½Ê®QÊP¯-Œ|E!áð¢[ŸXž«4n—z úb©»¼á¾ôó}ÊèáÐqÑä¢É)hRÒÔH±A“ ‰ϧIeuöÀFÿQf‡Þ+Fâ–¯¤ƒ»’{0ÔCËĦ2÷Êʹhb*šMÉŸ•À Jn‡‰ñìÒ¸¥} I¥Ñ\§É¥¥qã!±ЄæBR¬Ï/ÙNCÃï  ÍÃPŽ!BiŠ3â–Ù\&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4‘ìó[k”2ä¯Í¹¡p:ë?.4¥ÉN&zÔ±¾Kôx‡ƒÜ˜âE7 r€[+ –K,–˜‰%4¯åÙüÔF;´ÓïQçç*Fhõb¾ôd‚a“œ\þ 
û|,œHŠõ]ƒl4Þ{Ï­GœU‹%š‹ÄŠGçg‰ñç°DEAŸõä,Qõô„,1¢w˜%ºF©«O&4¯É¢2û¤ÒdQ™}ê_¬–UŸwî«eÕ§,®¨™EsÑD.8Rb}OÃ_ЄÛ!„DgÓDn‚ õ3Ç¢ÓäÒôód[>ùô'RÙÈAUI ©n‡ÏÒ]ˆ¥QÍTöqb+ÿ|sXñèü81"þœ¨(賞'ªžž'FôãD×(¥rmi\fÝTñ'v 9Ë‹}@jÔ¹p¢¨²ï0µÕGx1œèòNB¹'r‹ìtêÓTÆáá2õ‰…Sà„"ŠÔHò’írEÅóqB‘‰c¢Ðè?n‡|åE' ›¨ T³²|@j>ô6*ê»(÷NäF£€ÿ‡šâRZ4ÑZ&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4Ñ5J¡^†-ˆ›¯Èh¢K*¡ÍE]ê^ìªSŸwBÔ/§‰.e*«6©hÂrvâ¼c]¥‰b‡kä“h"?×!%‘µúÛE WÒn”œjž‡åqÈ—“h¬_jt; ž_I}â„Wvk™Xóèô41$þš¨)賞›&êžž&†ôŽÒDç(•äÚ¤N6²ƒ«N}R5àT4Ñ©þI-®dšèôÎÃÖÛÕ4ѧÌt¥tZ41Mäï××,Uib·<›&òs $×Çþ“ƒ°ñÒjV´DŽÏórX…ùÏŒªÇô»O§·Òø0P£·ÅñCÔࢉƒebÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L]£”0^KI¶à€&ú¤êd4Ñ¥^_ì¦S§wâ}嬺”Yˆ+¥Ó¢‰©h¿ßd “¼O–öí—߯CºÄÓiŸkÁÀ—Ú¡ÑÜN€.¤ ƒÍ˜½Íç4¹§sÌHSUй§{Ÿ¾•&ºÄ¯¸‰æ2±âÑùibDü94QQÐg=9MT==!MŒè¦‰®QŠÃµaØ>ÃnÂGg}Ri®›NêÓk%ˆíóŽÐ}7ú”=Þ¦^4±hbšÀ’ø5xsuš(vhgÇM”çZ Ôê?ålä⸉”IžÓ•}ïÄž^ì€o-7±7*ŒîǶ8 ‹&šËÄŠG秉ñçÐDEAŸõä4Qõô„41¢w˜&JãŠBõJµo"Ÿïž…l#àšè“ja.šØÕ§ëÉÞìèµâ&ö·N ÐöN|ÈS|9MäCĦ2x¬A¼hbÑÄ4A%E,s&ŠÉÙIÊs}¹ÕÜåÚ›N‘¼3>Âf.=]žyê ¥¼÷t½•&r£H1‰A[ÓJÛ\&V<:?MŒˆ?‡&* ú¬'§‰ª§'¤‰½Ã4Ñ7JÙµQØ >ÞÛAN§>©såtêT¯ñµh¢Ë;Š7žMt){,À¾hbÑÄ4Á¾‚6ó/•ëqÙ.Éù4áÏeL¤ª­5zdP»òl8lãQ¶äžžÔ˜ê=½Ø=ŽAwÐDn”rº¿¦8àE­ebÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L¹qJ!i{”2»¶xnöB‰,PSª@œ,n"«ŠˆÈp´Ä‡bÏ/AþÖÝ5’Ú“e”tãÙD‡2ˆÞ‰M,š˜‰&dKÔôïÇo¿ü~Ýîñ*ëI4áÏ•(¢X£ÿ$V WætB†˜°ջp9Gl}‚É•*Ò½‡E\Ie-qn÷pÏwáÄÁ:±âÑùqbDü98QQÐg=9NT==!NŒèƉÒ8IT…ö(EáÚ‚F¸%ˆ‡}Rg œ(ª˜0¶ç*Wÿd“î'vï¤ê'Þìô¾úu{‹I$Æ(‹a%uZ81NøwÉ€êHQ¿ê”«Q[äÓq¢Òþß÷·C¼6p"Eóë9MDïÁ yªŒ» ÷ÒD—8 +Els™Xñèü41"þš¨(賞œ&ªžž&FôÓD×(Å"_u ÂÑU§>©OêIÿ¢4QTå°zâõÿË£‰>ï<&b½š&r‹HI˜¥­ÌÂ*8±hb*šðïÒWò)DHUš(vòÀé'ÑD~.…ÄL±ÕÍ. 
œ€MlHŸÓDòÌ)™ÖKË;[i¢Gƒ¬«NÍebÅ£óÓĈøsh¢¢ Ïzrš¨zzBšÑ;L]£^œ"¶ƒŒöŒ:Mt©'Â×¢‰.ïø?÷ÑD—²(¸hbÑÄL4áߥ„\çC•&ŠÛégù¹1_1jõÁ —ÒDÜbÂãÑübF¦n` ¥fŸ)ýeç—õÆújóËǽCAn_:”áJ¸æ—¹æÛ‚!óû}à/æ—b÷x?ô¤ù%?—HƒÕþ“íPã•ó Úf˜ô RÃ|˜SH% ¡nÏHëÂͪÜèÿ±w®É²œ¸Qz£‘xþ3¹@û–}*adf+NÑÑín|¥< ÉбlaÇâd?¤ŸBt<ÿ°jEü5‡UsÖÁ«ºžxXµ¢wù°jj–ÒßÄ^zXUhé(é“«ï9©ì!íœzç)ïØ[¹ŽÛa¢)Sä‚CeFiVm˜*¥Ÿ²IÁD±“ëãòj» >xˆ^íØóõŒÈBóüù2DRÁš”º#½Ùɇ:®wÒÄK¥š­e(NQöCÚÑ6±çÑð4±$þšè)˜³ŽM}OÇ£‰%½«4ñêœËò/:ž¥î­gÄ ¤“‡´“R‰BÑÄ/õ5¡S«gH_E¯_-–‰ñÞ±çrÖ1yK ?Vfº³|lšˆDÔ²½¢Ð½ú~Ù•Ùübš¨í*”þ¥_yáeg7Ò„°dµ˜Àgœ€2„sqÁ ‚°Ù™Ð£IçÄù[†'ûÄŽGããÄŠøkp¢£`Î:8Nt='Vô.ãÄÜ,%÷fù`’ÃàÄœTãX81£>'þ®ò¨“Þy/t7NL)#Ü/6Nà eusàD±£·ÛçËpB¹Ìgªƒƒ‚jW¿ù¥SYìóS'©Z©±÷•;pÆü(ML‰3ÕM£mbÇ£ñibEü54ÑQ0gœ&ºžH+z—ibj–Ê~o\[>0ŸM¬ˆ¿†&: 欃ÓD×ÓibEï2MÌÍR.7ç GYŸíAcÑÄœúüeØSÞ!zðrbJ™ð¾œØ4Š&ÊwiLŒ–ú—ÅNÜ®¦‰Nÿÿ?èw¦ub9’æâ×Ïë‹–lJÙ#½ÙÁÃO¦Ä 컉á6±ãÑø4±"þšè(˜³N]O¤‰½Ë417K݇M–Îpr7Ñ$XYP~ 5Ǫh4§þC6Þ?›&æ¼cúMÔ39*²¼”¹ï»‰M¡h¢|—jÙkéÞ.M4;“Ëã°k»Ž „y4~ÔAðNš€CÌøŒ&¬ŽàŒãÍ®ˆ}”&j§Ñ}0 5;ÐÕi¸Mìx4>M¬ˆ¿†&: 欃ÓD×ÓibEï2M´Î ÈÔÆ³ʽqØâv嚘“êÁh¢©bHe]«§ßkyÿÙ4ñòŽ&{‡é¹’¯Ëær+SßY6M„¢‰ò]fÀòiæÜ¥‰bgYáòÀ‰Nÿÿ?¹øåΗN5W3&úL¹ÞBZM©×?X«vùÃiÕ­41%Îy—œn;O+⯡‰Ž‚9ëà4Ñõt@šXÑ»Ls³”ß[ÀŽ3e =¡‰©ž¢…aO©/Ðñ]41çü`ŽØ—²wÉce–vV§M¡h¢|—™Êw)¿ßúýõŸï·b\~7QÛ§\þ1?e‹.ùNšÐ£ŸåˆõC)<ˆjvšìQš¨–^…†âx'un;O+⯡‰Ž‚9ëà4Ñõt@šXÑ»LS³²ÝJ…Vp?¡‰&ˆIåR bÑDS%,e£5VO®ßESÞað¥Sí‘QuTq¢){¯s²ibÓDšð·Pÿ“û9b›Zºš&J»9³ö Ý¿ìÈîLê$vÔ«aøP›ª²2‚‘Ê¿ïŒôì ?…ýO§ŠÄÇâ„Ó¦‰Þ6qàÑØ4±*~& æ¬ÓÄÐÓÁhbUïMüç™=ÿ` UÒ{_:¥| Éš˜—ªî&þQ•%«óX½ÁÝMüó«Ý´üqÇÞñ·››[iâ´¡²²Žï¸‰Mah¢~—p$Ê)!Ùyößv–õÒ—N¿ÚeÒvwüT;@NwÆM”åEÊ(uúŒPêÑà@jMÁð(NL‰³·'É'Nö‰ÆÇ‰ñ×àDGÁœupœèz: N¬è]Ɖ©YêC‡‹Ëa×I‹NpbNªy,œ˜Rï߆sÞqx'f”«ýÔiãD0œÈ’йPø'Š]rº'²HÙÄ{çÚño;º3©ba~–âõÏ8e‹c2§®ÔjWþ€ò(N̈ÐýÖi¸Oìx4>N¬ˆ¿': 欃ãD×ÓqbEï2NLÍRtoV'ÇNpbN*A,œ˜Soð]81åy'¦”I’'"á¶·Nª¤Ðʼnj‡tí[§Aÿÿ?FöûýÊvù(ÄÄhŸi‚ŽšlÁ[_iµ«5§¥‰)qB¾ib´Mìx4>M¬ˆ¿†&: 欃ÓD×ÓibEï2MÌÍR~oV'Ê5‹ßMLIUFsê¿©~Ý´wL¼œ˜QfðVÿkÓĦ‰4A5[SÖ$F]š¨v¦×æˆýÕn®ïxQh4~2"é½'¼ŒÒ3špWbM”J‹¤m}™P ¾m}™ñŽè“ë‹»±x2+ƽ¾ìõ%ÒúÂGB¯_¯Hw}ivüV²î¢õ¥´Kœ¥ü뎟fÇ„÷žV1ä²Ò}^_¸ìÝËHœ«U;KöìiUé”'ÌÀ#q\&QÞ§U£cˆŽGãŸV­ˆ¿æ´ª£`Î:øiU×ÓO«Vô.ŸV½:G°ü“YÊïÌ£ƒDõt¶çå¡$c© ÁžÒ6U’’ÿ@½ÙiÕ”w x?GµGHÊD4V–wE£MÑh¢æö–dìš(vêézšÐ¬¬()ÆOV¢tïi•—5ìäê[ÊFײ÷zµLú(L̈+ÚtÃÄh—Øñh|˜X 
LtÌY‡‰®§ÂÄŠÞe˜˜š¥àS,•I™;¹úž“* &¦Ôcæï‚‰)ï=SÊD÷CÚ ¡`B¢L½‚FÿØá[bí‹`¢¶+Rþ+>?ÅŽ>=Xº &ŒDgayzÔ#ƒ²ÈQÿj¢Ùò£4Ñ:5¶4xÏßìø-cÓÄÉ6±ãÑø4±"þšè(˜³N]O¤‰½Ë415K‰ÝœåCý0æš(ÑÌYX›T÷`I'ÔK²,ôGÓÄŒwÀýÁ°¼Ú£¦úˆÊ$[Þ4±i"M”ïÒ³ôÒ6;âËi¢¶[ k'ãÑøq4¾õ!m.«‹ƒž<¤µZE¨ßHSâÐwyÔá6±ãÑø4±"þšè(˜³N]O¤‰½Ë415K‘ÞK¤rˆŸÀÄ”RSêßsE|LÌyçio‡‰ÚcÙ  `¢){Ïà¼abÃD˜°š»Ã‘¬ ͮ쵯†‰Ú®3 épd›Sº&ˆ#,~ÿ ¹T`Y%—(Õ.}HI{+L̈«ùÞ7LŒv‰Ƈ‰ñ×ÀDGÁœup˜èz: L¬è]†‰©Y ÓÍOª£ÎKÅ`9>š*B…Á5úK½ç—w´&K{‡øÁzF­GÃäöeòv´»ibÓDšÈu—Ž)Áï#ë¯ÿ|¿–oÈñ‘k4jŽ6;ÊùÎä^þhe–9¡ ¯×£@NÚÏáíz4?ûЩ‰£òݨ Å!òÎ8Ü&v<Ÿ&VÄ_CsÖÁi¢ëé€4±¢w™&¦f)Ât/M9ŸÀÄœR ‚=§þ÷BÞ6L´_-)—Õ|ìÆüL´­xðwSÙ 7L„‚ ¯¡Ír-÷C°›Ý{Ž‹`¢¶kF5'öhüdÓ[£&HÔuõ& •œ³ê ¾ãe§)= µSMâ¬8çÙ7LŒv‰=†‡‰%ñ—ÀDOÁœul˜è{:L,é]…‰™YJ¤{c°áP@;í'¤Z¬ô°sê ¿ëjbÎ;,Ï]M´\´_*þe—Ô7MlšDõ»¬‰—Q{4ñ²Câ‹i¢´k)‘$"ŒŸb‡zçÕ§ˆ øóúRVŸšwÊÁº‘[Í>ˆ¸•&¦Äù~è4Þ&v<Ÿ&VÄ_CsÖÁi¢ëé€4±¢w™&ff)Lw?t>?_ML*•XïœæÔÇ‹•?&漓Ÿ‹šh=’`r++ ¸abÃD(˜€²I—$µÐV&ªg»ºÖDm—R.,c4?ÅÎnÍ«¹“‘}®5XF0ƒçÜÏFò²KþìÕDëÔFd³Ó·šƒ&Nv‰Ƈ‰ñ×ÀDGÁœup˜èz: L¬è]†‰Ú¹$ÔÌR~szØlxˆÈ M4©P·Ú0”ZŸàÇ¢‰¦ªl!’ËX=Ú—]M´_-¹¦~{‡!=GµÇòGƒÌ6T¦)íÊu›&BÑÖ+N¹[¹îe‡zuÔDk×3û UÛË îÁ†‚ ˆŸi‚ÊΩ̳ޚhvæøh öKœ”ïHi(.cÞ41Ü&v<Ÿ&VÄ_CsÖÁi¢ëé€4±¢w™&ZçZ#ôdM¬ˆ¿†&: 欃ÓD×ÓibEï2MLÍRY²˜EâÉÝÄœT³X41¥Þ/êñgÓÄœwôÁ»‰eµÆÉ¦‰M‘h­UR?j¢Ù)\þЩ¶›kV Ÿb‡éλ ¦¥L­'/jí>(›ñþ“¬fW¾ðGi¢uÊVþ÷q¼ï&ÆÛÄŽGãÓÄŠøkh¢£`Î:8Mt=&Vô.ÓDë\rN@ãYJ”ïÁ?Hô„&š„즃›èf§)M4Uîe_Ácõ®_ƒ]µ&Ï:È›ô²ËÆ`·IÇÊès6MlšøßÓDnw ĹKÕŽE®.]÷«ÁM¬ˆ¿†&: 欃ÓD×ÓibEï2MÌÍR.÷¦G;XÏ‚&¦¤2H,š˜So_FõWkÊ’szG ¼›hÊ `B?ðÿ—îôã›&BцDIÀûÏœªjºü™Sm—“•–Gã§ètº5h©”>3B©sKVïßM4;ägC°k§ŠÆâ²lšm;O+⯡‰Ž‚9ëà4Ñõt@šXÑ»L­sB~åº_"=ß\5éYº—в °_v‚±X¢©bœp¿ì¾íf¢ýê²@“ëØ;ðK´Ë_$Û”™ì›‰Í¡XBjš$fõ~)£—\žÎ©µK(lýäÊ/;`¸“%ô°Ì…>¯/ZFpæìŒýíúËŸ Àn–oÁìâLw)£á&±ãÑø,±"þ–è(˜³Î]Od‰½Ë,Ñ:wlÑ_vxoÌ„l'¥ŒšO 0ã¤æ`7M=”Ýò Òýeð]4Ñ~5BùHó¼ó`©‰W\KÅËX¿•Ù4±i"M”4ˆf¡AÔD³¿º0jk7×ãª~‡fW»‘&0èÊ|5aGE.b«5»dÏFM´N™ÜÈÇâtÓÄh›Øñh|šX MtÌY§‰®§ÒÄŠÞešh " 2â½ìÒ½4Áµ´Ò M4 õ)Éàtÿ%UƒÅ`7U–D²Õ—ÆwÑDûÕÝÓVr³ç £¶¡ì1’—qH¾“ÃnšEÖî³c—&Š]¡t·«i¢ö€BÑm.7Ç`SüLùhu(È]rËÒÄŒ8(ä¶ib´Mìx4>M¬ˆ¿†&: 欃ÓD×ÓibEï2M´Î %õër½ìTïM«Žœ…`Ï)õ &ª*LR60Vo_V{Î;Y|èÔ”•?Èÿ±w.=³ÛÈþ+Þ9+u%k1»ì³ ¼60Fr€`ư=ä߇¤ÚpO«Ø %™pÓKŸúT¯ªÅËÃK•ùUâvOk‚ &LL©Læ% þÖÄn÷ôýžå¹å¶] VË.v®­4¡Rê\¿† +-˜”•ü–nµ²{“ÃöˆCÂu»9Kt":?LŒˆ?&}֓Äé 
abDï0LTçÌfßè¥ôâ+Ø 7 |@}Rg;è´«×<áLmõyøý,š¨o-¥Þ ¼»±ÒDõh€&o(‹O3M,š˜€&lË%‘0‹KÕãé×&ÊsSÙõàÖ48ÛåNîBšÝÐ Ókš [pY4`t·¾«é×[ßWÒD—8FZéa[ÓD/¢ÓÓÄøShÂSÐg=7Mø‘ž&†ôŽÒDg/•ìÚ­ ’ ä n]ŸT‚¹ÒÃvª×øQ4ÑF½&ú”ŧ¼Œ‹&Müñ4‘¿ËHQK…v7=ìÃΦ‰úܲÙÚ(Œú°»´Øó–B@L¯ir –P*£º{ßÕŽ“ÜšvGÙE¤¦8¡°h¢9Mt":?MŒˆ?‡&}Ö“Ó„é ibDï0MTçLI@Þè¥ìZš¥Üß\Âî“Ê8Wì]•DPIo¨OôY4Ñ©5}5M ˜ò„¨©Lƒ¬*Ø‹&¦¢ ØrMœ7¥Ón‡pöµ‰ò\"HÉbh´ŸbG—VÁŽ[ €¥ëÊ9(SEfò[zµ¾õÚDŸ8~Ú8Y4q0Mt":?MŒˆ?‡&}Ö“Ó„é ibDï0MtõRצ‡¥”;-:¸7Ñ)Uçº7±«ŠÍ/6ñxKú¬{}щr_J§ê±ÔHþ¹ï‡-šX41M`I¼‚ú{ÅÓé'êsKRV¤Ôh?ÙðÊ{·{å×—°‰r N!‡ ú-}· ·¦tÚRɹemqD«ØDsšèDt~šM8 ú¬'§ 7ÒÒĈÞaš¨ÎË-9y£ e ×Þ›(s2:Héô ü[Ø;Õ¹h¢ª’<ãôoó>ìX?‹&ê[+IF…vtï+]·{4 loün)­“N‹&¦¢ Ú"•Ì«ÝÒuÕ.ÀS…“h"?—³¡ØšG*u]®<é6„@¬G4aVªMQC©ÚlãK‡z~Q"ðO>¾ôDÇðÎñ¥C™Æ•2p/S/¼åw´’>Ç_ŠQ<;ËG}®Y0ñïvµåŒ0­VqI& B ¥uÆyk9£êÔ(rnŠ3z:‚³V«–!œˆÎ¿Z5"þœÕ*GAŸõä«Un¤'\­Ñ;¼ZUçÙ’øE×v_/{’VrEG'i¹œƒ )j÷ö&¯vQþHš¨ê!I m¨/vòYY>ö·FÉó‰ÔŽÒ{ßÕ£@hd)Øí˜WqÔE“ÑD ’¢FM¤ÀúTRú4šHA#B«×Î: Ó…4!/A ¤ùÜ„sæ×qÍvy€É#æ­8QÅQÈS»Ø—íN4ç‰NDçljñçà„£ Ïzrœp#=!NŒèƉݹöËRþ*òÚÍo*8h¢Oél4QU1FÅð†ú¯3½ÿ¹i¢+: ÷•3Ú=Æ ÞøêÄÖIÚESÑ„”ûvI™8º4Qì¢Êé{ßRöÔU“´Ú&3¹¶8ªF”£{yZ× [g~«Û½{Õiþ`5šw»(«œQs–èDt~˜L8 ú¬'‡ 7ÒÂĈÞa˜¨Î-•î±ÝK¥¯kvž{’6ÐïåuI5šì$í®Ê¬5_Þí¢|M”·F€RÒ¤ü×7žtª‰"iûwÃ窄‹&ML@ù»$¤’Eß¿—WìÀðô{y幜›vhä ¬vrm=£Ôüßkš(9@Pò×8éTs…˜Þ{/¯Š+5Ô#6Åa’´h¢5Mt":?MŒˆ?‡&}Ö“Ó„é ibDï0MçBžI»—2Ž—ÒlmD÷IµÉ2Wõ@Кê)Ä;é´G'?¹ÁZ»Ë}4Q=r~íÛʈiÑÄ¢‰™h"–½.ƒ »4Qí0¥³i¢<!F0nµŸl®¬g„¶B|=¾¤Ò‚c)„ì_Q¯v‚x+MT§óGcmq)­½‰æ4щèü41"þšpôYONn¤'¤‰½Ã4±;7 ñ^ÊâÅ{J[Ôƒê¨Uç‰J á ©6WuÔ]=hžJpS=MìÑI((Þ˜¼zƒ7”‘-šX41M¤2K×<#`Ÿ&v;<=yy.k°V6Ðj'p%MPØRˆõŒ,·`Aai,ÿW;t+MT§IBh,FU»Öµ‰æ4щèü41"þšpôYONn¤'¤‰½Ã4Q[ž¢7ÊMì"¿>€s*M¤`›æ ,4²¹-õÅ%¹?–&ªzdÊãPS½|XòúÖTïk¶£ƒ‰ï£‰êÑRIöØV–xåtZ41MX¡I)™ŸÓ©Úqà³i¢<—c0°Ôj?™&D¯½7!†^çtâ[p ‰Ð_­ªvjtk=£])ä/§).b\È[ÓD/¢ÓÓÄøShÂSÐg=7Mø‘ž&†ôŽÒÄÃy*·äÚ½]œÓ‰7‚šØ%0aŠá ©i® ±»*%?úã-õ³2?¢Ãå¨P;:B÷e ¯‚¶·ô\+~ÑÄ¢‰?ž&Êw™{Sõhb·c:ûv}®…r'#¶Ú'»´:*Ã&Œ ¯ïMäÍ}K¦žÐ_ªò­·°‹SHd~1¿*.­½‰æ4щèü41"þšpôYONn¤'¤‰½Ã4QSîýÔ‡]¸–&hÓó¸wØÛwH•¹naïª$µÕ3~ÖI§GtrlÂÑÉQ¼&ªÇ„ ý½‰ÝN-.šX41Mäï’ÉRb7§Ón—âé4Qž+ üœh»Ýó½ N:Ñ–;˜ÆÌ-ýšj»]¾•&ªS2k$ª~Øáº7Ñœ&:Ÿ&FÄŸCŽ‚>ëÉiÂô„41¢w˜&ªsN)5vPw;¼:§“lQñ`o¢O*ó\4QUI’H¡­^ð³ª×ío­ÌI±çW—ÓDõhÂQ¡­,ÉªŽºhb*šÈß%EDz±óÝï¾_Šξ7QŸk5ùóv;‰WÒÆ-GÁ^'ˆeÊ 
¸ì?ùnéÖKØ»S1 ~žÝ‡ÝÓ0½`â`–èDt~˜L8 ú¬'‡ 7ÒÂĈÞa˜¨ÎËqþ¨í^J/Þšà=±ÇL숽#纄½«*«`o¨Ÿumbk+´Êí褧Ìb—ÃDñÈÊi§¦2†çM“ &þx˜Èߥ°”¤èÂDµã§”J'ÁDy®%EÔæ4Xò¼®<è¤@Ê}M\Zp(—´}¥\û*¹÷ÚDGÉ'Aw;X×&šÓD'¢óÓĈøshÂQÐg=9M¸‘ž&FôÓDuÎbÜ ‰ÝîU‹3iBtKztm¢O*OvЩª’hÚ ‰]½ÅÏ¢‰úÖ*ÂÚѹuk¢x” úÆ0n¼ÊM,š˜Š&òwÉ!¨rƒ&¸l $;&ŠÿRŽýô Õèëÿ3·&´¤ø`9؛҂K͋Ƶ‰j§z/M§‚ ´)Nð)É¢‰ƒi¢ÑùibDü94á(賞œ&ÜHOH#z‡i¢:ç²ÞßîB…âµ{aK•°w¹G‰m¥‚4LTUÙ¯Œô°cú,˜(o­TñßÖô¾ŒN»2!m+Ó™ &f‚ )“y‹ÊæÃDµKr:L”çj(—À¡Õ~8·ì‹óÃr¦…tpkB븑DJ«ë½ªÓD±•ѩڭJØöÎ#âÏ GAŸõä0áFzB˜Ñ; »óÜÝ“µ{©ÄéÚ;ؖʘv@»„¸qïàñJså‡ÝU™dlöz –ÑIwš(»2oD'Þ¸5Q•e€mÔ8yØ,šX41MhÙšˆX2\š¨v’N§‰òÜRÂD[í‡1¿øµ[˜ôàœSÌ ¸ìs#å^±S»ùÖDG%YSjŠËCòÚ™hΈÎ#âÏ GAŸõä0áFzB˜Ñ; ]½Ô‹,I'ÃDÜÔŽ`¢J`È{CjœìœSU•ë<°¶Õg^ú,˜è‹ŽÝxk¢zL¨­,†µ5±`b*˜ÈßeÙ –ü+ØÕŽèìbõ¹Y-4[6‹º¶ØD¹‹Nðš&RiÁ†d~9 ÝN¿.ft)M§©äiiìlW»ðÔ -š8˜&:Ÿ&FÄŸCŽ‚>ëÉiÂô„41¢w˜&ºz)€k·&8ò‹MôI¥É¶&úÔ¿8=ü§¦‰®è ݸ5Q= GjTc©v,‹&MLEu³W9¥¯ï#}÷»ï·l!œž6UJ  ¡Ù~˜‘äÚ­‰yÒ#˜0Cˆ±…=Õ.€Î6¼dUHy8–¶zú´á¥DGÊIÚvtð©Ü0¼dÊ‹¶2¦•}| /S /¶ĨÁ¾^lý—á¥ÚéÓ¥Ò“†—ò\†’2ÐßP®v.=G‹^Bz=ºX™¦v­¹j§"·.U§Vú `Mqžò´¬¥ªƒ5'¢ó/Uˆ?g©ÊQÐg=ùR•é —ªFô/UUç`‰Ú½_»Te%-,UõImã»ªÊ ‘_ ­?m㻾5%%Líèà},Q=F%“7†qI+÷øb‰ÉX¢ì:gO)4X"ÛÅó+•ç&L°Ù~²ò•KU™WX@^£•°E ¥¤ ¿ÚSìÀ,Ø4±‹# Ø—ÿX×¼Ö4Ñ‹èô41$þšðôYÏM~¤ç£‰!½£4ñp®¨úF/Eh'§ ñàm§T SÑÄ®ŠE Èêí³v&ÑÑäè°Þ·ñ½{ŒÊÜV¦kã{ÑÄT4Q¾Kæº^íføØíÎÞ™¨ÏÕTÚÓ`K—£•M1%}½ó-PZ°‘@ô[zµKpkºÀê´Ôpƒt·]{Íi¢ÑùibDü94á(賞œ&ÜHOH#z‡i¢«—Bº˜&RØ ìM<$D3|Gª¤¹h¢ª"õoº?ìÂgíM<Þ:rLïDGå>š(1@Ò(meö”¥rÑÄ¢‰ h"—µ”‘{Îi·£§N'ÑDy.&KàŸÚíâ«’çs‚-Ç]í`o÷ ê§äÛížwqî ‰â1k·¶8ħt³‹&¦‰ND秉ñçЄ£ Ïzršp#=!MŒè¦‰êœr_ ÝKÑ«{Í'Ò„”|x}|—À„ÖØ›Øí&»5±«)ÂÚê?+_àþÖªÜ8ü°c»&ŠÇ wä eÏåM,š˜€&òw™]”Ч‰jϦ‰ò\“Áš½v¶»öR^Ü td¯i‚j߸q+o·ã{V§$Ë‹Mq¹»‚E­i¢ÑùibDü94á(賞œ&ÜHOH#z‡i¢:Ç ¬Ðî¥ ^[5Ú(ÀÄ®´Ô릶Rä¹Jíª(SI[=ágÕEÝߺ$~g¬d¸ñ SõXN6ø—Ãw;M¼`bÁÄL0Aûµ Põa¢Ú_ʨ>7a~ÕÆÖÄn'|íA§”á&8·`,I ]¥ÅŽ,…[a¢Š#%h¬iT;Œ²`¢5Kt":?LŒˆ?&}֓Äé abDï0Lì΂ŸrõaGrmöq-9œá€&ú¤Ê\ùwU¥ˆ÷cU~Kù,šèŠË[ÕcÌ«o|uª«”Ñ¢‰©h‚Ë’¿•› âÒDµ»`k‚Ë*TKÐlÙøÚ|©îÿ¦ É-XŒg~«¼*~!MT§$Ëo‹c\4Ñœ&:Ÿ&FÄŸCŽ‚>ëÉiÂô„41¢w˜&vç¥r½ÑKÙµˆh‹G眺” È\0QU•ÓŒo¨—;çÔÅoMZ ¥7¦ÉVvØSÁ„ÔI:Ê‹Ó;ßýîû•ŸÎWžå¹5/v«ýÚµ07Öó×,¡¥¡GÕÄþªAµSº7¡Sqó7Mq17¿Å­I¢ÑùYbDü9,á(賞œ%ÜHOÈ#z‡Y¢«—¼ö˜“ ohr» ªïHež &ª*ÄRç õVÉhkBÀ íè<Ÿ†¸&ªGËßf[Çu{ÁÄT0‘¿KaQ{qé»ß}¿Â,§ïL”犥$Øœ£‹$µkë¢F„ bþ×Då$”¯´Úa 
[i¢:Uƒ fmq ë vsšèDt~šM8 ú¬'§ 7ÒÒĈÞaš¨ÎKê×@Øí"õ4!´AÔš¨,÷öÛROv»¨²€yºüÆp`ôaçœjt@©•6éÅi¢z¤üÖoünF´®`/š˜Š&òw)Š"û[ÕN/\WŸÉ'«ÐÅ4‘{Ѩ Ríƒ8Dõ·¾«Ý‹{—ÒDuJ‚©‘]¼Ú¡é¢‰Ö4щèü41"þšpôYONn¤'¤‰½Ã4±;Oepk÷Rùò[rt»*à˜ÞQj0LTU’± ±­^>-;l_tôÆKÕ£åÆÓX¬vQ×9§SÁD*[)’¨¿5QíâÓéГ`¢}LìÑ‘Ra«}*Ö{9L"emmeö´´»`bÁÄ0‘¿ËRRR2&ªž\ûñ\1ˆ‰›³`-Wůܚ  X#Òk˜ÀÒÒ™»J«]þ‰n…‰ê´” 4j‹SŽ &Z³D'¢óÃĈøs`ÂQÐg=9L¸‘ž&FôÃDW/)\ ̲%¦š¨R„<ξ!Up.šØÕçÿä õIé³h¢¾µ)F°vtì)aÉå4Q<2‚·7^[‹&¦¢ ,”rç¯âÒD±+½ëÙ4Qž+yòíæ0øÕñRš€-IJ¼¦ *-Øò>MT;r+Mt‰£§²z‹&¦‰ND秉ñçЄ£ Ïzršp#=!MŒè¦‰®^ŠåZš rïhk¢Oiœ &ºÔ ëgÁD_tžò•\Å£„”ÿ!¶•™®ô° &¦‚‰R¼G„}žõ»ß}¿1¿x6L”çÆ²ŸØX­ªv˜®Lèĸ%Â#–0ÂV”íx¶á¥C}Dø´á%¿u¶'ÒvtÒmù 9æwn*+%[Ö𲆗™†Þ˜%Ýá¥Ø%ˆñìá%?˾{ž|¹íg·Ãtm)#ËmôõøÂy‚¨!¦Ö}±¸u­ªKœ®µªÖ"„ÑùתFÄŸ³Vå(賞|­Êô„kU#z‡×ªºz©xñZ•€lyH9X¬ê’šH碉>õ/Γý©i¢+:&rM”êó`ÈH¡¥Œ?¥©\4±hb šˆ‰k-®Md;Né|šˆI³ Íö“T_„>&$l)*Æš-ä‰8æ‘Ã§¯7Cb™£ÿ~|ùÇÿõÓ÷ý¡Ž…ÿ?Žè‘%Ì¿­ö=¹.ò KÏøø__þçË/¥ßùÛ_¿üR’:Åøæß~ùßøË·ÿùÛ_>&Éïþ?²ÎŸ¾äþéÛÜe~ÿóßÿö—o òìõÛÜçýüsnùöß¿üœVâ‘¿ lùß_~üæŸ_¾ÿêgüû¯þÍ?ýðÏÜWÿüì]Û’E’}æ/Êúavé$®ámÆ3Œl`!@ ¶Tw—D/}³¾™þe¾e¾lÝ#K­T«2<³2³ˆ¡S¦‡®,¯ðž~ââñº×ÕâqêN¤øŒÄ¢‡«gÔø¸2äToIIõŸ ¶c#IÊ=;jcxÍu>®ÝÂPí~Cj{6tƒž­kñ>õÛ–ö H…EXýLÕʰjëDG¡¥lr’»‹°XcŒ|RdͽYs„5GXDXÜ.‘ ¥¬Ü~µÚí,dDaŠà9æYȖ饌EËŸ…~œYÈ ‚~Ò…ÏBf-]à,伃g!“ò¨ÁÛÁKá´‰†]å‘‘v–|tZ½{®úé=¨Á=:Ÿærƒ6A—L‚ jÚÍÊ)2D»¥:#ÕÚÇ&B0ÚSÓê€àÁ± ²ŽÕÔkeëãwÊ&¢JGÄ€' 6.@šÙÄÌ&Š`ZExû’0¾°œ¶ã/ZkTÚ*ÉkS :LyRÉTÀ@BÛø,x‡•€”ä¬ß5ï"¥H°12¸ çÝêV‹þ;ð®íÁÅ»Zô“.žwe,]$ïÚK‡—ŠÖN»ûÃÅÊ:h]›èÕ‡ÒØDô¸ázš?8›ècp»dÁòÔÞhY °if3›(‹Mr†Žw& l‚äœ㳉”Jç1¤þˆ1LœEÍ@P-YÔ€{0¯âø¼JrÖÙ² Vƒ!Ep¼ÛffR˜˜±hùlbøqØDA?éÂÙDÖÒ²‰!x³‰¤-(ÁÙ×rjÚU]ùh øvoJ‰§k¨¨Êb ½›.Zx=j÷Àö’§Zgo»–k$ó›œMïW:an5! 
g61³‰’Ø0KМïÜfÙD’SÆf\.º`<Š1: ~Úë"©‹’“Ù<¾îÁȤókIŽœÂNÙD`7Ä÷@'#¹ymB3-ŸM ?›È è']8›ÈZº@61ï`6‘”Š_!tðRQO»'ÌCE‘tËÚD/¨F¶6‘PYã¢SÐ?´ë"këÐÿÐÁ:v—l"iåƒ22oæ4j3›(ŠMP»¤0W)ôù´IÎ5v²ŽÄ&¸\¯#§€’úOpÍ«I&`®räYMËåó±>[.îiLrv›“™•j‹Áij[3›ÂÄŒEËgCÀÃ&2úIÎ&²–.M Á;˜M$åÎk¯‚쥜‰×&¯Ñèvo¯€w] ºX›H¨ìð¾HÖh´æ536ÓEÏlbf¿?› v4X0.ÃK’ó0ú¹ .×j¥E¯Í9™ãÄ7¼80¾em“‚0Ï&’œ»Ýé””¢åŒÊ28TóÚ„&f,Z>›~6‘AÐOºp6‘µtlbÞÁl‚•óÓ`;¸PŒŸW×±rÖ´¬MÔP©Âöu• »}>¡Ò6!^®Ñãc©ÖÆ ¦Þj+6.¢˜œM$C´VFæÜ¼61³‰¢ØµË5{ž|ÖL– ØØa9› r9‹ñFô{QyÐS²‰Xy«œÛœåC«J#z³ãK-ç\k&Í‹§Ë QÖ°Û”÷&ŸÄn Jë6e`IÙã«Õ/'·×é˵ã9Xü[ð =yš¤Ÿ¤Ç/?_ópÁ˜šþ_êøë€4_\¥îùáⳋógiüpñÍÅÍòô€lr×…¨íѯ˜NÐ{àØô€=õ‚ó“›’]7Ìu7ùëòú§ƒ=xúí¯~ùéáÑWîcjÄoL˳cpôô ò ¯.žS+º~»ƒúuüäö9½Xê––º)õKC-uUG-K[Í=÷­®ú i DP'ú®3à¸ãq~Ô¯ú€ßÿ÷Ÿ’›]]ì­Ÿñ×ÉÓ~Y'8½û"÷&ÒøøûµÑ¾'>ÉF£Ó[”-­…t¦æò=‡3÷}ßæ¶\0ÿaÑ»¯45åuãòÍÕòüúäíqŒÿzùëòôô@ývdô3OÞÂj¿:<¢_­~»9`—gŠÎ£âfrtðþÝõ.ìƒ÷ÕoVíVêƒWãÓåeW÷duÝhJw_ܵ¤ô÷íÔ¹ïþþåŸÏ™ 7¿¡ÝþË/÷þt{rÊ-“‚óë‹ÓÿùÙêòôâÅqôðÙÉs~–^××ÄÚ©{¼HΓá?×Tú“ÇøÓWë@ù‹“g«£G§Ä´Ïé·WüÝÙ’|äÍåéò()z3œ\/Ùñ]ï‘]þÆãßvUxòèÉùòòú§‹›ôñôâö˜jpsuqzººjÀ¨¿!·@U$:°ú¨µ<ÿéfƒ)þN\õ›[ß%ÃðÈNóŸ‡Ë«ÕÙꦆ%ìw¥ËÓ“£“›Óo€`µÚ½½:d‚j2c´?,–——§/u§_Pñün\\8žÓáQtÕéxhys³:»¼Y¨ð õð ´ñ<Ãs«oÞÍòúçÿ}~µ¼ü)1˜¨X|}{Îïf‘­g t›NêØFYgÔª‹NÝE'_„æUééŒNÓÐÙbÛè´Q6¤5ÉÅŽ]tZ±žÖ¡2Ú…(è´.’çï¢ÓÉ:=‚±Þ[I'ùùЩ ù.:‰xÓdÍyêŒNu‚ ¼ $$P;£3ˆmˆz¯ :IÜN“,%¥6(åƒÁYߨÁ5/lžÎY´øÅ‚AàGY,È!è']öbAÞÒå- Â;t± V@Œ²— 6L¼õH«Z’Û®¡b´Jöö¶™Š¿„Å‚„Êií•ò"z§Þ½cã½X°¶X£@¶Ž6¸³Å‚Z£ƒàc‡÷æôÀ”­É@ójÁ¼Z0Új7ÌàT …Ùœ­IÎb{ïQ*×å©®R ‚žvïQÔà”o`¢GçW HÎ6nÓËP˜(Ò&*‹ü^Ô ë„ntºâ”½#IëG¹)%* |ƒú()%¹°c®ÆJ½ÑZ8mSËi3ƒðŒEËçjCÀÃÕ2úIÎÕ²–.« Á;˜«ÕÊ#¤NöRfâc"]…¶å:äžPËâj •&äÏã¬åÞÍ<üÇæj©Ö ŒUÞ­Ó»ÛØUk àuþâu ‚¹ÚÌÕŠâjš¹‡Èùûk¹Ø¸o$®Æå£dÇÇé£&æjVÜ|½ˆ**ba€ÐÈ«RVµ–›¨0ò. 
J†ät[š5"oBOf ’O#¹è»)•JM¥€¨Ÿq1¿ œäÈ"”:Q©ö‘¸7 I®yÁoN©•( ´–jJr:-•jS©kÌç”Q)'Š0ÆåóÓ%9»Í!ÝÂ|¨È¬2-Ÿ€?Ï è']8ÏZº@>ï`ÞÇKÅÕ“p( ³º…€÷ƒªËÊÓн~`7Òô´Ž‡Ýð^Èló¼ÅLÀg^§†-ÅÕ¤2KÀ“œ5£/–r¹Þj4Nb$g NIÀ±2Ê[×ÂÀmêê ¢@“܆t×“Ò V´EoepA+œé„'f,Z>~:‘AÐOºp:‘µttbÞÁt")7ÑÓÐ%{)ã§]Ï‹ÑW¨ÚÖóËË9 Cµï.=þ¾t"¡rœ&eôÍ«A'R­½5^ØöTËéÝ]p™4FÁæ5Ôrôë™NÌt¢(:A 3‚ÁM»ûžÞkÀ”}=Ë CbƒÓSÒ §*…*ØÍ7\jÇ]Ø‘±B~›5Ë´;½“¦uDaÇC’ƒ8¯NˆqbÆ¢åÓ‰!àÇ¡ý¤ §YKH'†àL'’r ¡ì¥âÄwÒ Ð­b 衬.*Ê¢f¹k9õ°î¤©k­1Â÷Z®=§Ãøt"i¤b}dÖè™NÌt¢(:᪨}¤09K'’œo\å5àr1R…SšIΫ)éDP•wÚ†–Äo>uõ(å@­å¬ÇÒ‰¤4ãRe$¹sf1NÌX´|:1ü8t"ƒ Ÿtát"kééļƒé)÷J+g…DT $Úié„GäÜ-t¢ÔPVé>èyä}`›jë8åL”­£Íî.¥©5:ŽÈ: ³qÎ 1Ó‰²è7L´SOrº¢‘ø´ÃÊ*¥³WáÔrF¼:• 1¢Øk#è W€¸Ã *£¢j¹jDµ2Q˜fb¹€!tz=(¾ï”B!åe’ÓØé<ŒQc*µ®[þI)é%TÖs ù†ähôè¤ÔˆJÇH¿±’R>æâ:½ScE¥Á×—?IJi ÚõÚZgcTEpœ|&ÃËÉX´|2<ü8d8ƒ Ÿtád8kéÉð¼ƒÉpRî£Ö²—rÆOK†µ®À¸2\CˆN….P!”E†*‚öDÖrÆ=,2œj œ0_Ëí2õk4Zóé\™Q¥™ Ïd¸2 Ì7U€ >½×€£A=úÉ.—ú¬'R%u âÁLÉKmÅü\·œü ‰8yB²¼$ç:r)9D BÉð^’ÄN¹•’³‚@!‡TS’ó;ÎYÈJ©hV8tj&NRDœ±hùÄiøqˆSA?é‰SÖÒ§!x§Zy4NÈ5[Ë–8Y¨PC qª! E×®ÔhÊ"N •¦!Ò€Œ^›vÆ)ÕÚ(ž'ï`T»#NI£ œa^FfãLœfâTq¢†y¾iC2³§÷0}?g!—kƒ‹^‰ˆªjã”ÄIUÎEÔ-ùåcEC05Ý`ót‚å8Ë`'#åš‹ì7y|a½3ÉywÊaX)5œˆBÇaλ.§‹–Ïa†€‡Ãdô“.œÃd-] ‡‚w0‡é㥜ŠqZº¢ñ®…Ãô‚º)qÚïÊa*Ã'‡Lô^?,S[Ç¢”\­–kœþ˜œÃ$Ä­”ê€ÌÁÌafS‡‰¼¨b‚ÏÓäTcSàH†Ëå„rÖ‰Az TÛi<Ý[9 ïõW&x )¢AJ`}´ lã¨åt|h ÕÚ€ Â]µÜ.%^ƒ.(Or®Áñçf`J`°R&šè!ä'É’é{€¡r­Ò¼?'¡±~Äiw½‡J9à–+úÎ7 åë°Ó¡#]$O…å<€ÖKJÁ‚ê¶ë]: UPœ —­°\ÄŽÓ(Ö”tÊÑ'5’t;d¥Þðí^çõœgIžçÉX´üéÀ!àÇ™Ì è']øt`ÖÒNÁ;x:0)·HC¢’½” Óni FR…¶K oh,:ÑË:ÞÀîèDÒˆuþØa-üö3Ó‰²è5L¤‡}öXU-×¼kt$:Áåjª²7’œ‹nJ:a*ÅK?f3°•#@A›ü°ZN5ójSÝSG·qí}â§çÏNž¹¼\¼ÁvVÇvûGé»ý×®û£ŸoWûW‡Ë£ýË«‹ß^¼ŽŽOž=;XüëŸÿú'÷–»_öϿ«/W7Ë#^èYÐ>•óÞßNΉ*Ôÿh\éW¿SÂ]¥ú•À|òøÑë¹%˜·JøE÷-#ÙöÕ6?úêðÿht¾{%o>nóvß«ªjñÑG ³89¦!UÝO¯·)ìï˳Õõå’ìÅ;ÿöÄÆ¼• ß{²:}öÅÉùÏTö³©m|ûè³M…Õ%.ÃêøðÐíëÃ¥Ùwxû—aÿ8:T€+|ÕVí)«u«z|½º¾¸¥èé®™6K´žbÅ­€f‹Ýî~¾:ç&p¿´ôOmSõO™c¯GmЧÎ.å²£Ç/Óˆ¾¸Þã¡|_Á>‡ÚX¤ÒT(•ùí7Ÿî½ÚÆD]ÔoÕ÷ßûlõ&¹_2ÅHƒÊüüŠzîcŠ.ŽŸ¬¨o_l]æ¨nå«_©y|½z¶¢(œBȃÅË—oyßÚTw†Ã)öÈ‹zÀØ[OØ|wÇÃØO±×¾{:ßÞ2Ø£CŠœö:¶û|Bw1î©%¬VÇÎGãI˜êöj»·GÑÈòôä¤ö`ÛW–š÷—ËsŠÿ’,z°øŸ¨Q½õìÏç7oQò{[ýh 
è*Õjï^µµ_—ýÕåß°Wþ[–4pÈ^—Rûú_òë8Ê3á@aÃ{l§¡~‡kÎúÃwzPYßiF|ýã˽gdÀåÞùúûŒÂ†õgúD¤ëâ&™úšT¯¥NÎ镯ˆ¹ýHm+¯>üãH±ÆÜ§l‹ô×úý<;9]U/–g§Üö^½úqwm«þž#Þr´Üj0ytvv{ÃS¥[üŸ-™CÔÿ^î½ûhD¼½ùéâêäu›ÿþ|±¸ZGŸÜÜ\ÞÞÐpÌ‹ååÉ]ƒþE×ÏXx‡ËmÐý‰Ë«5Æí*øjãO@­‚qJàê@á¼étÁŠaˤ­‚‚J sI Óœ6ÈJA;>Ÿ´¾–Ó¸Ó›)“R‹lÐÐ8†9/r¶¬^e,Zþ"çðã,rfô“.|‘3ké9‡à¼ÈÙËKÓ&'v¦Ò±m‘³TWVîÈ~èãCÛ3ÙË:͘eòEÎ>ÈŒ1z^äœ9‹Z䤆‰àÌpKrƾg’ËåÓmZؿ䜚ôfJ¨Àk®m‘“ÏšEEh¤$Ýn¦´Qà0®² -’ËÏû—$gb·t$(* žü¨2®'9ïù<3J¸9ö¢r^RJív{À-)Eçµ°³7ÉÅ8_‡)†á‹–ÏÖ†€‡­eô“.œ­e-] [‚w0[«•‹B¶…ZÎOÌÖVh} [cVÑ ;@º,¶–ÐÓcÌ粯崇‡ÅÖjëxŠä‘Ü»C¶–4Rïð]Þ[´nfk3[+Š­¹*2·ˆFØ’šäÌèWÉp¹ZG'¢Ä ¢V&N™<Ò¢<ÊZÜÌÖ|J•hÒ‘$9§;­89-'Ï~ÃjÔùû¨k9£vzf­”où(ƒkfÎ9LKpš±hùføq8LA?éÂ9LÖÒr˜!xs˜Z¹G JöRÞL{¬Î²ÓBaú!õ®, “Pc´vÐã;U×Ë:jw†5:zª±CˆÔ03…™)L †&ÅÛäüó—•%9Âè§ê¸Ü€1h9 ÆqÒËÊ\ÅÙvâÿ³w.=´ã¶ÿ*³OaH|Il— tÝÅ]]Ló@‚¤M0“4_¿”tÚúÎ=íúaŽ€™ÅÜË1ÿæ±)ý,‰Ü(Ò!K©$9@€©v´¯=yUá¥ä6MÂT;yaŠS6†¤¾8ÍsÓœ;7íDt|„9#þ„é(8f=8Ât#= œÑ{aªsaÈÍRlYèV„IYA6æ˜Tű¦©O9ñõða=¼Ú]£A€S¦¸ÙÁƒ…«ÇdïfÜ¡LÖ D“a&à À0RØ$¤.Ã;H«Ýº1L¹.iÄ$â½@JIâ½UáíÍ [ £ÁBe€ÂŽR³[ŸÁé1ŒW>-PÊâfp §T;عöCÎþµ´ QSì·nvœwU…'vZ†TpÚT»㣴Vœ–î8Á'““Ö¼ix'¢ãÓÚñ×ÐZGÁ1ëÁi­éiíŒÞÓ´Vœ'(E¶ØÍRv…{7ÍÅ˦²ü~0mRm4uv!W»8Z¯ªŠX½ÍWÕ?mÓ\½kfÎyÇoKúàŠSñ˜Q9¹Ê2…YÇqÒÚX´–{J#”*¼]Z+v ruÇåzݤÁ^o/ñ™Ý›F»Ò‡År 2¾`ò‚F&–hûÍÕ›]H»:.“8 “bû¡‚ÕŽÒ®sU”\§%Ýæ ŽÓbŸ=mTª#ùâ4ÍÓFÑñÁéŒøkÀ©£à˜õààÔô€àtFïip*Îs$ œÝ,eëÍí´¢.*[Í›„lù^wHMƒÀ¯ª@€É«2Pø,pªwXŽEøÑÁÈÏSõXŠîÇè+cžË\œÆ§\–¹²sî‚SµK«ž³S¹.Çd™½Hù«²ò×/sÉRú;ÓF|]bH'tÊÀT;^U±è1ŒWB‰1ˆç m± š÷-s©ë4‰Fe8N-©åo{ÉÜ NUœÚå,ü5;žÃÜq'¢ãƒÓñ×€SGÁ1ëÁÁ©éÁéŒÞÓàdÎS@B7K•mòt÷Š“BJ[Ú) Ä ÑWú®“ïß”›ª*)Ÿ}ÅW/ðaGœê]çRüèdÂ縩xŒ6µ‰J®²È³JÃä¦Á¸I—˜¤l½írSµÃÕÁ‹¸©\×’(x4Rìnå&´ù0áæSj%@ ŽR³ØuĉCa0,–í¡Tí솧ÙEä]NcÁÉ.V™(!8N͎ã NÕiÓÔßšÑÄMnò'ĽˆÏM§Ä_ÂM=Ç¬Çæ¦~¤Çã¦SzÏrSsQ‘’›¥â:oß²àÄq!Ñ÷ N¥VÞ®©B )D_ý§-8‹Âs;õªG´¹àeëšœ&8 Nö`–]e"ñÛWëË×°Ä«œzþÿù§þ…ßm°¾œx!âø~+8F{…«‚1^Wiµ‹ãDu*1çþ†ÇfÇ8÷¯¹óÄNDÇlj3â¯Á‰Ž‚cÖƒãD7Ò⼧qâP–’÷VË^$nTË~I `s•Ra0œ¨ªr)Çv¨ÏôY8q(:ós8Q<"ØqbâÄ8 &”}³ßV«þòõ\Ök.¯–]¯k³t†þéfï]‡¡…T4¿ß¿†PÚ1%”þ÷ªjGiWñvJÍÙÅÊç P§˜x³Ãg«e7§‰ÈÙ¼Ûì$Í%wrÚ‰èø sFü5 ÓQpÌzp†éFz@†9£÷4ÃTç™cîo•y‰ät+ÃȈ sHjŽ<Ã4UIr+úÿÜåg•škw­ÌNã©—]xî NõHÀ™P\eÓ<ƒ3f,†Ê&Z6 w¦ÚÁª.ÈE uIcDoœË ¼“apÉHŒï‹ØßbÌbÿU/v¬û 
°±SjîS‰ûjt3¹N“=‚ÁrZvœ¦wõgeö&B*Í‚=§–qó³´VÚ°«¬¾¸´ª-4imcÞ‰èø´vFü5´ÖQpÌzpZëFz@Z;£÷4­ç,ýô;ˆ¿ìâÍ…Á1­Å Z«" ö‹~½ì¢ŒEkU•ÝœSRèeø³h­Þ5F¥aGtôÁ§êQÙùèZí8Ï l“ÖÆ¢5#ˆ–¾ÝSðåëØìb¾º0x½.dƒEõÈÀì„øÞþ¬ ¥°ó­©"°BöÀI æÑ˜ê†>m€9f~r€Ù¯ CÂ9ÀÌf¨†–€1uk™6»„—,×%ŽÉ95ÞìˆîÜ!M°)¿`¨tÅ‹bÓûè(•r¼}WYQç#ÙÄT"EDÏ©M`UžÝ–]ÅaÊŒÙ' ³¸ûõ£Ññ?’ÍG²Ž‚cÖƒ$ëFzÀdgôžþHÖœk–=Y ó½Û²‰dI°µ¥¡J ²4¶C*…±Êã4U¥¸Cý§òlÑ üèðª³Òí S=j§œ|³K³¬èd˜á&G©O=†1»u⹌ar̆'@ì¼@¥oCJw2L\XTÃð½Âe)(ô›q7;ágq¢8Í$)»â2§¹æîÎ;'Έ¿': ŽYŽÝHˆgôžÆ‰êÜž ;²”Þ]l³TÓÔíl¯±l¢õ³½†0V3î¦ !Dð]šâ~NÔ»&é÷pxÙÁƒK"æ1Û–"E_™ê\™81N𒃦˜ÞSþòõlvœ.Ç »nŒlÃ[&ç2;Ô;›q—ó™…&âûFÊR‡ 0Þ—ƒb—`_—NÎ’ˆ]Œ3 g·Qµ£}͸9;Õ6¥ì|18%„«]ÊûZ#°ºwj²ôâfïNKŽÏú(­IIó€()xâ,pi.þ¸ÓðNDǧµ3⯡µŽ‚cÖƒÓZ7ÒÒÚ½§i­:§Àèì®vˆ÷6ãΖ$´±øÓ¤FMºC*‹Öª*.mÂw ôí*ÛÏ›Öê]Û”%øÑa}®7Bõ"¨«Ìf¥6>¦CÙ%®¹§Ø•¥ê= #^o„´]ÛF`§dQ±+M÷v9®SæÄYS¿…çË.ìs "¦šI#‰óµµØ}˜ÖŠS@@q–(«Ð\[s§áˆŽOkgÄ_CkǬ§µn¤¤µ3zOÓZuN)§¨~–¢›;€Ç…ˆÆ­ÁôˆT¬úPUÅ”G_=»ˆùó¦µz×% îˆÎª¾ìí´V=ÚÜÁ)_™Â¤µIkcÑZj´ròÒ¦ÙÁê#WofŽ.˜~Ænû¼f'ùò½rÝij{Ó)Ü[¶UDCÜ8D›-o šØ~ðfñY†©Ns,öÅÙS=Æ›œv":>Ü Ãt³œaº‘aÎè=Í0‡²”À½T9„NˆcRy°.ÇÔ뇕L8ôdÉ„â±VPq•Q̳÷d˜±&/‚j'ªÝº·ÁE8Q® “8Í•«â­Tó"I¢ntÐYÙxCûJ›]صkN¼b¦Ú2Ÿ¥—à8-ã‚ð£ SœiiÏãŠ#Z­ÁM†Ù˜œv":>Ü Ãt³œaº‘aÎè=Í0Õ¹–î`àg©Dró:LȪ!n 1)s,µïÄ—ªy°º¢U=¤³2Íô³¦Þ5æ N…ðf·ÚZs;ÃTöPªìøÝÖ‡'ÃL†atÉÔrü·m¾|ý×Ý¡z5Ôë’ݯ ÷j—%™pg]QÐE@DÞ3 …²§N’’t?¤4;Ö}+VNo»XRKÄýZsÕÎFÇð$Ã4qöOþ4»¨³“79íEtx†9%þ†é)8f=6Ãô#=ÜÒ{–a^΃ìÈRâ½eßû÷ý:ÌK–.u;¤ÂX'š*BL;Ô§øQ óгMcüèЃݸ›ÇÄÌ‚¾2YUÞ 3f†±³œf }†©vÁf3L½®Ý§*y#L.gøï,û†q1)Ió{†±¿år®Cµ»­«Ù¥¸aœÒÕ/§¨_ºúe‡ûá$ש„ÄÙ.‡ŽÓb·êÑsê‡(FEè÷r}ÙÁ¾ðª{§F›AÅÙ€^í²îãÒ.tj`ºÏiô*@ÍÞojv–{…áêT˜¢îÇ«†7(§ÑñaøŒøk`¸£à˜õà0Üô€0|Fïi>”¥äæFÄi‘­èMB å$í©˜Æ‚á¦^ÊÆ_}ú°Fí®³Dõ£“W…šo‡áâ!Н¬´c›08 8u³œº‘œÎè= NÕy9ë|{ªv)Þ»Šhü¶¨È8“Êa,pªªL?ñŽá ¿é¡ò³§FÆÑQzn'dõH`ÀWFqu¶~‚Ó§À –í)'µº Sí@®>ÍU® A;E¾›Ç;WAJyÚ¨ïë.!äÒÇÑYÐkvûNs%r-o¹…LÉqjvHñQ†)NÔ˜[]q2wBv":>Ü Ãt³œaº‘aÎè=Í0Õ9³Æì§PAÅÛOsAÎqkˆ1 "$ΖšjGy0†)ªŒIuU}âÏ:ÍõŠNýëF'ÅðàâOõ˜ì±‹à+ãÕÂÉ0“aF`\ÌU2‚nEŠfwýi®z]$`p'éf÷f„¹x&€†‰@Loªµ©bòjIä§‘úÍþø×òüðÇ?,ßÿéweì·ôôûücy@þ+þû¯ÿü=~÷Ofô/¿üí¯ÿãû’¤~õë?ý`3÷ׄÒ^ Ð_ü…üÿFÒöÇ¿ø‡ïÊüÞñ±ºüÆ¢äDNºmv«jy=Zc‡Ö¨Žß©Ð¬ã´LIàÑÚÕi.û-›kš´æMÃ;ŸÖΈ¿†Ö: ŽYNkÝHHkgôž¦µâ\UÊî07K•­Ø·Ò,¬•¶ÓR×½ƒ† 
'ã`m¢ÚyŠe£ …µŠE-%Tþ’ï6îˆN¶[‡B÷H>¾`Ž•=…s|™ãËã .)[-Œôe’ùÆ—j§–O_ü¹þ¢Ñlïj—/®X0‘m¬}cÍ3²R”zÒk7o¤mâÀ3ráP•‡…ù9[µ1 Ñ‰èø³UGÄŸ3[ÕQ°ÏzðÙªn¤œ­:¢÷ðlUs^ïe‹{)ÀkïF¡¥l\º* ’‹Æ=á³+»K˜XÕKZ‹ÕÐ{ÁD{k.XúÕº>¢£÷•_=ZÎ _øÝœ·'LL˜ &´ˆÕàÀ„Û)¦óaB‹e±¤aËv»k'«`aKbKßÔú A vÌ7;Êr+L4§ê?c²Xœ< ~&6²ÄNDLJ‰#âÏ‰Ž‚}ÖƒÃD7ÒÂĽ‡a¢97(ÌǬ"5] ¼lÉÛ%Ô¾ìë[–¨ªØÈ/ŒTFú^,±FÇŽ£Ãþ×÷±Dóˆþ»)ÅÊà¡jüd‰É°µí©êÿ£_á£ÙéÃõ“X¢>—K'Ûp¼rmZ !åòœ%¸µà”$_š]æ{&šSB–àì|³Ãy•Qœ$v":>KKt쳜%º‘%Žè=ÌÍ9—z,0õâzyÝ¢‰]R9u—ѪJ|2xA½”÷¢‰öÖZ-qt„n\™¨ý#§'ÙØÊäñÔ꤉IÐ× Þf´t/FmvÙèô õ¹À Â4€Ê•‡ò /VôòsšÚ‚±niìÏh4;§[i¢9UÕl‹ã$“&¢4±Ññiâˆøsh¢£`Ÿõà4Ñô€4qDïašØÕKù#®]™P]¬lÚ'µä±hbŸz~³ê㻢£|c½ÀêQ áX™ÿ7÷9MšŠ&¤Ò;(¤î]F«åÓK|Ôç¢?—‚µ½f‡×Ò„$*g&´5`“œúÓ=ÕN<ú&š8@HÁUFÍ®ð,>f‰ˆŽGÄŸû¬‡‰n¤„‰#zÃDsމ0¸ nIt-L,–óLì“*ƒmtZÕ›•„±z¤7ƒ‰öÖTÀ˜âèPJ÷ÁDó¨X”^øÝØæÒÄ„‰¡`¿KPâ/š}ÿ«ï¼w=&êsM…58”×ì„èÚz XŸÓD½ÚXÅ»è ÖB³#»wi¢:µÄ˜bq¦óbÔ0MìDt|š8"þšè(Øg=8Mt#= MÑ{˜&šóœé•.ÔE^{—Rpƒ&öIÕÁJ6U%s hb}Kz³zNkt|,®2ZíðÆ¥‰æ‘@¡XÚ¬ç4ib(šðïA€…úÇ&š—Óë9Õç’÷Úbµ¬™ü•4A‹ù—¹QÏ ëÕÆfjÜßl¹Ú©Þz»:Íþ±²õwa­vùað›4ñ¦l¬Mì”:Ø]F«*„ h±zxrñk¦‰õ­‰$÷Ë]}D‘î»Ëhõ(äŸÆÊøaElÒĤ‰ßž&êw (»V;~¨ÎzM´çrb†¶ ¾úbToÌù9MdoÁ|€é¯®v%ߺÓiuJBÒ/Ÿ½Ú9¾MšˆÒÄNDǧ‰#âÏ¡‰Ž‚}ÖƒÓD7ÒÒĽ‡ibW/EtuA§²ˆmÀDSÀ9k¿NççñX0±ª÷¡¼0Tñ“zT_5L¬om¹ô+©ØÝX¶yôÌ!©Z¨¬$œK&†‚ ÿ.1%b°îUFÕÌèìŠN͆Zz"Gí½ùë•0a 'y~KmÁõNjìÏþ7»x+L4§$¢)ÅâhžÁ޳ÄNDLJ‰#âÏ‰Ž‚}ÖƒÃD7ÒÂĽ‡a¢9ç„Ò¿ÀáC¤^[Ñ X°­¥‰Ujfì_ƒýùJƒÑDSÕî,xa8x·ŠNû¢#x_}Øæ±`e •A’I“&†¢‰ºØK–Aâ^Ëã~ûϾfž¿î5ÿüÓ¿ÿü»ßÿÐzø¿9;~Y䇲Ë®«<+µ½üÓÿùãŸjkúÃïüSëÛÀùÍßýé¿úá»oÿùÿò#õûø5ÿÑuþü£·ºo½#øÝ/üÃwßnxNö­·ä_~ñßü»oÿáÇ_üa5žO»åüøÓ7ùñw_Œ)üüóo~úù‡¿xôËç[ü²|óOí#qÇÿåf¦ÿúÿ9 Õ—ñ®âÏîdùöïÇñGç¨ù̈ÿÚQ~¼ÀwëÇ~Ø;êÝöº¸’Ÿ|ùò$µÙùé×ëxËâùÕvRóºÈÇ5ùI‰ÛéÿVDÿ_Pâß,þ4JÜR°Ïz|JÜŽô˜”ø7ë=ƒ_ï¥ç².9“lñNu»CeUa¦nËÕ¸œ={êÏ5PòÔ.œ“´ ΞŠ.R¹Å6gOw(¬ Á>õž|¼O­<ýº2y˜Å™<=yzž6ÿ~År0¾¸˜?¾×SûI9h?FJùÊ2,K]¹Ú(j†PgÄÀDûG;W»¢åV8lNð ¼ Že êNDÇç®#âÏᮎ‚}ÖƒsW7Òr×½‡¹«:ÇÄDq/e_–>•»(¹¥ îÚ#S†±hbU¥Æ/ þ–o¶:×Þº¤d"qtr¹¯¨ÙêH^øÝ€'MLšŠ& ïÏJ¢¥KÕ®¦ÒgÓ„?— äZË#j?TJºr¶ tÉŽüyãà¶ž×{ þýºÍÎ#uo‚&$‚PB±IQšØ‰èø4qDü94ÑQ°ÏzpšèFz@š8¢÷0M¬ÎK=Ò÷R˜óÅ·7êâCÏM¬Àà%©EÇ¢‰¦ŠÀ»zyA½¼W‰äè Iy!:vM4–j¢+“2ï[™41M`=9dˆœµKÍŽn=‰&°-ÿg€\¢öƒ€WŸª5Í@žÓy &ÿÿ"ý„½Ù»÷äPsÊì=dŠÅ‘Í“CašØ‰èø4qDü94ÑQ°ÏzpšèFz@š8¢÷0M4ç’D(ǽ ^|á 
.²yrhŸTlm¢©ÒÆm±z÷*‘üfê¬vw5«¹€Z@Í.Á¤‰ICÑÕóýµ,ôwÒ6»”OßI[Ÿ Iê5IQû©'—Ê•4!KQH°Q‡€k æÊTýŠ#Õ®–U¸•&š8SŽ0®vy®M„ib'¢ãÓÄñçÐDGÁ>ëÁi¢éiâˆÞÃ4QK&Ä`ê¨Ù¥rí “¼äZʾl÷ö¯KíÜDSU<- VVÖ·”7£‰öÖ­ôï‚ÿˆâwÁ¯Ù½0Œ =0줉IЄ—îLP§S³{lY'Ñ„?—RÊZš¨þÍžTsM4;t6MÔçŠ÷za¯R¯Ý鄉 ˜ÐEKrèî7tm]P¹÷ØÄq …'LDYb'¢ãÃÄñçÀDGÁ>ëÁa¢éaâˆÞÃ0±¯—â‹‹_, [Ç&öIÕÁ`b—zz/˜Ø;K:UePЦHY)´=abÂÄ0áß%)¥Ü/é´Ú=¤¡'ÁD}®¤\8¸Ð«Úñ“ÍçÂ&óv\¶h ²„{‰š]Â4򿉻 ˜æ«ÏïV‚WtÊÃ&¶Æ«UlôÂW÷x¬iŽ/s|`|±ÅÓ6÷'Ü_šÂé%ësÅ®”ïYíÊ•iÛdnŒ/Ö2D" 6'¹]6å{—¾wˆ+‰öCÏÙªiˆNDÇŸ­:"þœÙªŽ‚}ÖƒÏVu#=àlÕ½‡g«öõR|í±<Í´(ãÆlÕ>©:MìR/åÍhbWt«ï£‰=Êò¼ÐkÒÄp4¡H9׃¥M¸]ât>M(J&bШý $KW.}{>œê½Oi‚Rméɧ‰\[p½¡EÄj÷î¤]*$îoZíä¡ÖФ‰4±Ññiâˆøsh¢£`Ÿõà4Ñô€4qDïašXK¶,q/¥|ñuFH oÐÄ*AY’¾ UÆ*@¾ª2FéWý°+ïu9j{kLÀÀå…èØ};Veà äP–2‹|LšŠ&rËÒ©v ¯v,gïtªÏåŒÞkcز9æ+iB–zòp&Jë‚<èýÛŒš]íÈo…‰&ý›ÈŠCx˜Ó˜0±‘%v":>LLtì³&º‘&Žè= Í9圇ú©z1L”EòL4 >ÜiʱTìnÔU•¤¬±zµ÷:–×ÞÚ¿BDˆGr×pß±¼Õ#¥XÒÜè4ab(˜(uë¨d#â.L4»óåµç’¡w_a¯Í¤—ÖǼ1x~j‚ 6`© ô…6;€r+Lì7/3z!KìDt|˜8"þ˜è(Øg=8Lt#= LÑ{&võRL_f²@Ù`‰}Je°mNUgÆÄªç¤ò^,±FG=>GÇÝßÇÕ£§ ’úwP­Ê`VŸ,1K@«ê­*_7øþWß/zÚpú„?—²£¸wiQû¡ éÒC´pËö&°õ-*ê·ôjG–o=‚½ŠƒRŠj(Ž ÍmNa–؉èø0qDü90ÑQ°Ïzp˜èFz@˜8¢÷0L4ç˜(•÷RP.®>^l)¸uhbŸT´±h¢©òÜ ‘Åê‘Þ«úø¾èMMt쳜&º‘&Žè=LÍ9¥zgÜK¡^KeÉÙh«àÆ*Í‚í·«ŒU~|UÅžpR Pœ(ÝGÍ#PÝn+ƒy3꤉Ñh A134QíÎ?5QŸËÎfM¸]yrZì<šÈL ËMHmÁl¢ÁìÕ.Ù­4Q–$už,WÁ¤‰(MìDt|š8"þšè(Øg=8Mt#= MÑ{˜&šó`q/•õÚ«QÉò(4Ñ$Õºå±Ô2ØÕ¨«*TI ±z|R&ñ«¦‰öÖĆù…Ïп€ûh¢yT±”[íæN§IcÑ„.)‘0ë—‡mv̧Ÿ›¨ÏÈ J¿ßkvEìJštdI ô'´u.U…Rë@tïÝuÍ)ä\Á,WW'&NDyb'¢ããÄñçàDGÁ>ëÁq¢éqâˆÞÃ8ÑœSM)\{Û„•^I§&궬KÀ±p¢©¢T«¦Äê±¼ÙâÄC ¨vµ»s«Sóh ‚/4Ë󶉉ƒá„rà¬à„Ûåó·:i;8‘ Çš^ZÒ Òây4ØÆ1lk-X‰±¿ø½Úݼ8Q"æÍ¥5;ž4¥‰ˆŽOGÄŸCû¬§‰n¤¤‰#zÓDsNÙûXŽ{)4¸xq‚M[‹«TR2‰¥ÒX4ÑTI X¨Ùñ»ÑD{kÅZÇ2ŽŽ–o›¨)gÒãNº³¨Ó¤‰¡h¿Ëz•{.Ú?8±Úw]}.ªç‡W;¸òî:JKQiŸÑ„º²Ú‚¤ÓCÚ=™­ºŽ&>j²Ô;Õñi'yu꧉ýˆNÅŸ@}û¬G¦‰(Ò£ÑÄA½ÇhâÃ9'1æ÷R¦×œ`æ…Š=£‰R9ÑHw×}ª*ZË Æê³ÙÑÄç[û{wk–üÕ.—›hâÓ£Šçñ0Î'‹&MLšø­ibý.ˆr纉O;x(—vM|<—±h·üħØ•w×aZ²ËsšÈÞ‚%©‹ê/¹µt²[i¢‰Sï`ÔivŒó¾‰0MìDt|š8"þšè(Øg=8Mt#= MÑ{˜&ªsM‚Y4î¥Ì®-ê”—Âf$Û½½C—f±Pª i,šhªˆ¦x8¨×ܽM슘ÞGÕ£¡@Âø«3H:ibÒÄH4Qo˜¶’“•>M4»œÊÙ4áÏ•ÄíȨý¸]ºÓ©,$ËÆÚD©}‹CJŸ{š]b¾•&Jë†êyz Å褉0MìDt|š8"þšè(Øg=8Mt#= MÑ{˜&šsŠ–wW»LÓ2—íÞþu©CuúTÅžofŠÕ3¼M´·6Ì9CÉ|M¸GLÝg¨ÌQ&MLšŠ&j™¼\rAÃ.M4»ü@Ã'ÑD}nÌ–Â4Øíò•××,™R6~>¾Ô 
üR"¢ÞñÕÎû*¦[i¢‰¨uH"qÞ ÍsqšØ‰èø4qDü94ÑQ°ÏzpšèFz@š8¢÷0M¬ÎÍÇŽ{)ørûй%b.bec§S“€I å©–Ç¢‰U½Ôë‘bõð^4ÑÞZ@_ø ùÎNÕcö/?½2ŒФ‰I#Ñ„—¬E”º4Ñì²òÙ4QŸkäÃiÔ~¸†åÊ˰mIõVpÛ¢ ³’ELs Ô,›Êhã‹«/ÈJªw¸|»ñ¥FGü×Ç8:å¡´Ù ã‹{ô!F$þê<(s'í_†_pñ‚DûkßÍ.Ñé;iý¹=­ÖÜo?ÍÓ¥×£Ú‚.D6öVaÍ$9aÑH©Ûá×£~8-ÉBqþ׳y8 Ñ‰èø³UGÄŸ3[ÕQ°ÏzðÙªn¤œ­:¢÷ðlUsÎRo¦‹{©' Ê'¯}'õ¡6o÷ö/Kå4ع¼U•šà+ê¿,ÎøuÓD{keD•8:é>š¨±­r¨¬NGNš˜41M¨Ô,!4ávðp-ói4¡…4DíG;ôښĂic|!oÁIMû-½Ù%¼wí»9åzÁvŠÅÎ*ašØ‰èø4qDü94ÑQ°ÏzpšèFz@š8¢÷0M¬Îë=;9î¥øâëQÉpI¦kßû¤"EM•‚%xa¬’"ïEktü÷§¢£pã¹¼ê3Õó¢¡2L4/4š41MP]Ó.Zð˵µïõýºÝùçò¨­}×+•$j?n—ðʵo÷AEhc'-×,Þƒ³UÍŽìÞµ‰ê”³P,ÎÒ¤‰0MìDt|š8"þšè(Øg=8Mt#= MÑ{˜&öõRzm•̶HÞÚI»G*¥šh) (ÆÊ0Ï´“&†¢ ^Ú•Ì`¹OÍ.³œMõ¹™sÁ$Qû‘|í}FÅóaö`lÔ ”ÖÒ±0÷ç š¨ÞJÍi­‹cžçòÂ4±Ññiâˆøsh¢£`Ÿõà4Ñô€4qDïašØÕK ]»6áÛ¢9oÐÄ*UüE^èíu´ äUû°n’bõö¤âáWM-:9p~¼JüršhýÓÇò‚2x¨Î2ibÒÄ4!-›/Y³ti¢ÙQ‚³i¢>ôØ;ÓÆu\î( 8ŠKx+¸ûßÉÓ ¤º1†l·ÐQ¡~XægÆŽ1qŠÚ‘Á¥4aGÈ ïi"·–žêõÄ¡ÒÜzh¢[i¢‰Óv,WB¶i"œ&"º>M̈?‡& ŽY/NÃH/H3z§i¢;¯gE=î¥Þç?•&Lä¡òio¢IÈDdokvF²MtõBš¿P_ðî·h¢½µcZý":ùÆ“NÕ£a”8Tf‰Ò¦‰M+ÑDn9Ë|>oa7»„§ŸtªÏEu)<µŸb—üâ“N>œtòÚ‚).šÝkZ¾;h¢9U(>$'Ì›&¢iâ ¢ëÓÄŒøshb à˜õâ41Œô‚41£wš&ºs­y½ã^JñÚœ¤VúûO{]BFÃo¤*­EMUFÆàÖG³3J¿E=:d¼êv|#MT9y¸7Ñí^Î}ošØ4±M”ïÒSfÎ<¾…Ýìálš¨Ï%ölÁ¢z·»²:ª<Œ”)¿¥‰µ×bi㬮ÝN™ï¤‰æ´¦^L˜BqN/[Û›&ÞOG]ž&¦ÄŸB#Ǭצ‰q¤×£‰)½³4ÑkŒ{)‘«ëAb2ÿÜÛ/Õq)šèª²RˆÇ*Wý­zFõ­Œâèøëí„‹i¢+“òºãÙP·#ß·°7M¬Dõ»th¥¶†4ÑíÔϦ‰ö\"r·°×vB¼ro‚页´ÒšH­3øBs·Ã¿×{.¥‰æT!{0|4;aÜ4M]Ÿ&fÄŸCǬ§‰a¤¤‰½Ó4Ñ—6AÜK]Ó‰Áð©žÑS‚PÖô…T^«ÞDW•“¦qn §zÏ¿Eõ­PöôÅHîvßÞDWFÄã’PhÓĦ‰•h¢|—¹<χÕQ›–ùüÙ4QŸË5•R²¨ýd6¸´:*?€‘ì}õ턵³kÞt;Êv+M4§9'§¯}Úí ±ñ4qÑõibFü941PpÌzqšFzAš˜Ñ;MÕ9‚RvP›Hw¸”&4ñCó‡ ±]jJ);…R|­[Ø]RM^«O¿Fí­ Œ9ÇÑAçûh¢y4¨ŸT¬ìõ|ݦ‰M Ð>êu$e‚á-ìf'.élšøÿ¿ÿôoHWîM˜<Ù§{‰J fa·`|ivl·ætjNë*SÁ˜Pœì{á4qÑõibFü941PpÌzqšFzAš˜Ñ;MÍ9Š$丗B¼ú¶>Ê0û&šJ¥+”/¤úZ÷&º*V r:=í~ìÞDëòëâøvÂÓîµâôÕ4Ñ<º×䯱2çÓiÓÄR4Aõæ±aJ˜‡4ÑìôôZØí¹HŠèaû)v—žtB{ñûJ؉KûÕT“éŽûgnýóÍ;Mç¬Á9§fG°Ï9…“ÄAD×g‰ñç°Ä@Á1ëÅYbéYbFï4Kꥮ­6áT{{ûÀǤ¾™ÿWYâ˜ú«„ÝßZÔ²¥8:ò’÷år–h³gÊ_(3Ø•°7K,ÅÜX"¥7™+þùï7¦ÓÏ9Õç¢B’qE¸n'ì×V›H‰\ßW›Hµ²)ñxg¢Ù±ß{k¢9õD)¥XœÙ¦‰pš8ˆèú41#þš(8f½8M #½ MÌ覉î¼*Ÿ—z½üzÉ­ ö‡} ‰cRËÛT•áÚYù õÆ¿EG¢Sìî«6Ñ=’3áÊ0ï;Ø›&–¢‰ò]º‘’¤ñÎD³C9ýv}nVTÎaûñ,˜¯¼ƒ ÉYÞ›(?g—ºs<>Õì0ßZ»:Uwg©îâ÷¥‰p–8ˆèú01#þ˜(8f½8L #½ 
LÌ膉桦؎z©*Ò.…‰ÜøLt©D>N[þï+-¶5ÑT±#³…?ÕÿÚ¥‰CÑaºqk¢z,˜_X‚beN{kbÃÄR0¡}’NüwQÈþãûõÍr6LÔç–«Dí§&}ºrk‚ü!IÒ{š°Ú‚-%—ÏìvJr+MXï†D&kvwéºpš8ˆèú41#þš(8f½8M #½ MÌ覉æ͒丗B¹öÒ„Rz`út»I ÄDK%ðµh¢©b oƪDùÇ:õè8|1X²Ü˜Ð©y40 Òˆ5;})hµibÓÄ4QË÷¸ƒå ¡S³» =¬Õ"$",Qûñ„DWtòZÛáÃA§\Z0–1.·šè½šS«—Ê<÷zÞrÓćiâ ¢ëÓÄŒøshb à˜õâ41Œô‚41£wš&šóš™–8ÁÅ{øÀüio¢Ip6ÉKuZì vUEÀ„žCõe6¿Eí­Bvˆ£“€î£‰æ‘‰K#‰•‘ï+Ø›&–¢‰\géœ1ºž¾7QŸKe ï0ÅŽ¯,]GðHà¥þD^sJÕ:•Òb§„«/õ€“(ÄêMíׯ—2É‚G'ë­ã‹;!'I*#x™€íñe/ Œ/þ¬¥Ýd\õiG§¯V•çR»$fêvéʃ´”\¸ãS±<¯3Dõ„lÒbWºñ[W«šSÏÀa«Ý^­Š—!]µjFü9«UǬ_­FzÁÕª½Ó«UÕ9dLúE/•/.fÄð ýT̨K­»±ÔòJ‹]ËëªL1ÈÛí˜~‹&Ú[×üÁön7¦o™ô‹ßdŸ¤Ý4±MÔ½g"Wh¢î=OÙ JP‚p\׊øÊ½ïü°DÞ¯V!”\ë–©×¥›»Þš~¼‹#Ã4üžv›&Âiâ(¢ËÓÄ”øShb¤à˜õÚ41Žôz41¥w–&ºsæ ãIîÓÒÅ4!þTõ)Á]“!•y)šèªjé½q‰Íßò·’|<£ã9ÚÉ}4Ñ=zrç/”e–M›&¢‰ú]JÈIˆF4ñ´ƒ³S¶ç¢–'× º¼ÛQ>5ýx]x/SiÁ¹ AA:Ÿf§–#âL}ÓD8MDt}š˜M ³^œ&†‘^&fôNÓÄ¡^*˵Y>Ò#  Ó|ooÙÖ*ÚT9—)ÄøÄLWïù·NÒöèTªU £SÜw/¯z,Ó„2>c¨Ì@eßËÛ4±M¤‡"ƒÙð$m·K§ŸtjÏ%&£ ý;ú»öö™'ä!5ÆlãKï¢t»×åÿ;h¢9ÕTÄ{,Nvòxš8ˆèú41#þš(8f½8M #½ MÌ覉îœDÇÇùÿym–B}ȇjF•âZI>ºªÌŠY¿PŸ«2ê±èß—2°yt(/D+óäcÃÄZ05_ùxA‡ÕŒºÑÙÊsÀÊ—£–íõ0á¥)ñRbËïa‚jKG{WCö¥Í2ß Í©x‰ãâxÃDG'û}—°«G+s‘qjônGy_›Ø4±MÐÃSÍŸ£0¼6ÑíÒ˦ßI4QŸ[³§ ý»¤W¦ d°dQ}O\[0jÀ=Íð^šhNÙˇı8N´i"š&"º>M̈?‡& ŽY/NÃH/H3z§i‚ûd)±Ë½ÔÅåŒíÁø‰&I•´Ö%ì®JAx\7ê©^~ìÚD{k/ÿ/1’«ß¸7Q=¦òÞåÅCe$Žß4±ibšà2KGdïM4»ô’Vû$š¨Ï%ÕZT!h?5O¹ë•4A0FÄ÷4!µ›˜×òšÝë‘°;h¢:E®U”r(®Ѧ‰hš8ˆèú41#þš(8f½8M #½ MÌ覉æ\‰h\¹ە©üÅ×&*îí¿—ªºM4Uê ¹X½ÚÑDŽg ®:w»Ë5e6TÞ)þêþH¹ibÓÄ4!õ:Ãßå„þùóû-vtþÞD}®°‰K Ú×-@¸ö¤“A–×&´¶`4M8¾àÑìÞì¢\JÍ©²§`o¢Û½ä‘ß4ñaš8ˆèú41#þš(8f½8M #½ MÌ覉æÜ¨°ŒÆ½”]|mB2?ì㽉&!'K±ÔŒ‹ÑDSåá›áÀåÇN:Õ·.#Ž?Cº1¥SóH˜ù‹ßÑ÷ÞĦ‰¥hB%”iúßÅ}ÿùóû­—µÕϦ‰ú\Æ”08)Øì’éµåŒˆ%eyOÖZºiNc¥Ö[ú­ÅQ»Ó̃“fg¼S:…ÓÄAD×§‰ñçÐÄ@Á1ëÅibéibFï4M4ç®- w;òKiB•‚þ&ªÊ _ôö¾ZJ§¦>9¸ÆÃ¤+ŽÚß³–ÇÑÁ—Ò~—ÓDó¨žS¬LdGÝ4±MXM•¤š² ‹×u»×“F'ÑD}®hs ÚO±”kiBó‡ü°¹6`*2ƒá¥Ù%º&šS)C¸@,Nvµ‰x–8ˆèú01#þ˜(8f½8L #½ LÌ膉æ¼^~Ë_ôRŠ×t*¨ð@’0qLª¥µ`âz#ù-˜8×4Å—ÃDõ¨Èü2M¸·&6L,¹BB²Ìç7þçÏï×3~Ð)÷­ Ô&šd½¶v'Bx_ ½¶`F"oB6»×Mœ;h¢:µòÏܺkv%›&¢iâ ¢ëÓÄŒøshb à˜õâ41Œô‚41£wš&šsfFˆ»P#Í_›À¬Àö¹·7!æ "A¥¼ØA§¦*§T‚«·ô[•°û[;x’/ˬ7¦tªsN(‰BeYm_›Ø4±Mx]ò‡\w'†4ÑíÒé bësU9'‹zíb'œ¯¤ €ÔíÏ·ã 
AmÁ‰Çueº]Ñz'Mt§†–Æ;¤ÝN9m𦉣ˆ.OSâO¡‰‘‚cÖkÓÄ8ÒëÑÄ”ÞYšhÎë-=E{)'º”&ÊL±ôXö&ºÔ2÷qâÿç+åµ*awU”\Æ©MºæüS4ñŒŽg`ˆ£Cù¾kÝc ~Pýi'»ö¦‰•h¢|—Ò±V ÒD·ƒ|öµ‰úÜ^OZE-»Øe¹2A,¥‡åš_ê=MÔê“îJÀÃngéÖÚuÕi®§`|½±Û•ïkÓD4MDt}š˜M ³^œ&†‘^&fôNÓÄ¡^ \|Ò)?ŒùM“J°M4UÄ–!¡Þð·hâPtˆîKéÔ=j™…øÊ$ïk›&–¢‰ZaºÞ4RÀ!M4;V<›&R¥„æ”9h?…fØ/Þ›@Eâ÷)K .RYÓ˜&ª]Á®[O:uqhå£áP\BØå&Âiâ ¢ëÓÄŒøshb à˜õâ41Œô‚41£wš&ºóº$#_ôRY®­„íùÁ€h¢I ÄAq¡§¬Un¢«bFfÿB=ÛoÑÄ¡èðK)‘Ëi¢yÌ\ºÔ+SÜ'6M,EXËHHù“‡÷&ºŸž ¶=—É”‚öSììʽ z€ka¦÷4A­¥—Qn\µÛ©ÜZn¢9EÎ(¡¸?2smšø0MDt}š˜M ³^œ&†‘^&fôNÓDsÎå‹ àt‘×:•&Lá‘ìMt©-Ø3ïvºV)ì®JLHã± …õ·h¢½µQy¶ÅÑy½Ur9MT”³Æ „^OSošØ4±MÐÃ1•‰:ñ°x]·ËrúI§ú\bõ¨×.väW–›àzšÊA?Á„{Á®"•"¡Ž,yµá¥¨R%_OyÚüÚðRÞÚQü‹è˜Ý9¼¸S‚Z TFö𲇗¥†~@MQSÐ`¼õÝìÈäìá¥>W¡–gOϪ¼II{îb;ˆ¿Ï?^” ¢h ´N8=ߺXUrv)ŠcÓ½õ®B "ºþbÕŒøs« ŽY/¾X5Œô‚‹U3z§«õRÙìâ$êîŸ{{®œ€é ©¾V’§úŒ‚«wþ­üãí­¨ˆÃèÔ£Ô÷ÑDóH @ñï&å×Ý4±ib-špÈ14Qì,åóiÂH˜j'éʃ´òÐ2€Ø‡ñEj v«KÓC¥ÍîõøÍ4Q*•QcqŠº“|„ÓÄAD×§‰ñçÐÄ@Á1ëÅibéibFï4MtçŠÈ÷RD×Ò„;=àS5£ƒRÅÖ¢‰¦Š¹¼€|¡Þlo¢½u™Œ›[¾1y÷h¬f+Ó—œš›&6M,@RÈ–ÑÅ`œä£Ù¡œ2°=—JwLæAû)vÂWV3B{¤\bûá ­>²'/ZÐÒ‹]=¬zo’Câ$írFá4qÑõibFü941PpÌzqšFzAš˜Ñ;MÇz)Çk¯åYéï=} ‰CR5áZ4qL½üXÊÀCÑ1¸‘&(«å˜öµ¼MKфֲR¾Þ¿‚ÿóç÷[ìHNß›¨ÏÅ 9í§ØÙ»^û¼“N\¯å~Hòa¥oqK¢4æžftïI§â´øÄ ;z³ß4N]Ÿ&fÄŸCǬ§‰a¤¤‰½Ó4q¨—Bå«Ë1hÒ½}‘Š&AJ§ç+ÑZ4ÑT1ª$øBý¯]ËëÑ¡Òq}~©Ñ{9M4†FA2Ãf§û¤Ó¦‰µhÂÊ,]ÊÜ×Ò˜&ª»Nõ¹*€AqánÇvir-?š | ‰\×0¡ã˜{š¤{i¢:E`• ;zG ›&¢iâ ¢ëÓÄŒøshb à˜õâ41Œô‚41£wš&õRzqr|ðÇäǤâb'Ž©·ß*Žz,:$滚&º2cË9V–ó>é´ib)šÈuÏAASö!M4»×â¤'ÑD}nyŸ”ƒu‚jW&.~m9#fÉö¾ø6ymé¢*ÁÕ„f‡)ÝJ‡Äí“N_L]Ÿ&fÄŸCǬ§‰a¤¤‰½Ó4Ñœ+‘ 2O‘|ñI'x}€‰cJób0ÑÕs=‚«Wú1˜homÌl;õèä·&ªGF‰•ùKÞø &€ ÆedÕqÆÀfGÌgÃD}®yÔï;¾öÚ®¢÷—°jKRow;„[/a7§Œeô Åqz©:¸aâý,qÑåabJü)01RpÌzm˜Gz=˜˜Ò; Ýyé0¥¸—rÈt²ÄƒÞ^ʬ!îíÿHû³MSŸòoÕF=|I#y5Mtõq|‰´Û½¦%Þ4±iâ¿O5çvùtÉK³ÑD·+cÐÉ4ÑžëR^X£–í˜=_\ÍÈÝüýlNµœˆ„6»lz+LT§õæKPaöiǸa"š%"º>L̈?& ŽY/ÃH/3z§a¢91r{)¡«aBMùCþ¾§„òóR3¯M•AO±zµß*ÚߺÕ9ÿâ3Ìz_~Øæ1'+xãE¿o˜Ø0±L¤‡·Ý\áaiÔn÷Zî$˜¨Ï•âêAûqb·+‹‘=¨5å÷ã –ìÈÀ0Ü„ìvÉøVšhNˆÆ¥0ºäMá4qÑõibFü941PpÌzqšFzAš˜Ñ;MÝy® Áã^JõÚŒN‚5縼?èÔ%‚‚ÇR Öª]×Uå27`ŽÕgáߢ‰öÖuÓ¾,ýå†úå4 ¨)ÜeÅ®kÓĦ‰•h¢|—ÅŸ Ž:U;v~ù~O¢‰ê?•—aŽ˜ªSôBššãƒ2¾¿5ÁTZp5×jv€ùÖ;Ø]\ׂŨfg"›&¢iâ ¢ëÓÄŒøshb à˜õâ41Œô‚41£wš&ªóò¯IÇEºH—tùÞDöϽý÷R=­E]}ÍGd¡zJ¿Eí­9o:?£ã7tjÙ€ð 
eô²b¹ibÓÄ4Q¾K®>®„ÝíèüƒNõ¹9“3…½6«ò•˜*eû@ÜZ°—_e¼nÐì –ÜJÍiù"ðÿÙ;³dÉm\ ïHAÄ´„»ï'—¤ŽÝiWŠH%£’õЧaáR>@ú@Üëªß¢‰ƒib'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šsG€q/er/MÙj{§¤zš«vSEI¼L>Po_¶7Ñ¢šƒ,¬?Q|’&šG¥z?3V&yU›X41M”ï’J’,wóÃîvö’ßø"šÈí¤“'Ö°×&~-“pÃI'Ü  a~pÒ‰k 6CÆþ½‰f÷Z¥ò š¨Nsý}Bq^î•-š8˜&v":?MŒˆ¿†&: ÎYONÝHOH#z‡i¢9ÏÍ8î¥Ê°t/M(o¥·? ‰sR}²{M•C¿Ž÷Ï[¾9?üGÓÄ©èÈ“'ªGΞ58éÔìÖI§E“ÑDù.IQ·ön—üò½‰òÜÒg—GgŒÚO™Ü[m­hy_Í(Kí[¸"ME£ÙeNÒDsj¢Ä‹Ó´R:…ÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0Mœê¥ì×׿‡uØ@*aŸ”š'»7ÑTÕ$¨î¨÷/£‰úÖ\ÕãèøË­’Ûi¢)#È¥ý„ʉM,š˜‰&ÊwIIëe³þI§fÇruµ‰ö\$@…°ß£=po‚X*ÄÑ-lmë¤A ïÝŽÈ¥‰æTÁ¼_mb·+#Ý¢‰hšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰æÜ4yþD2ÝJYl+õMœ’j4ÙI§¦Ê…‰ýõþe÷&ê[KR¢0:R¦€ÏÑDóHªÑùºÝVN§ESÑDù. 'ÕÔ?éTíÐírš¨Ï­{œlvB|ç½ Û´^Î8¸7a¥+C ªvâðl¹‰]œ‘“„â4ûª„N;Ÿ&FÄ_Cç¬'§‰n¤'¤‰½Ã4Ñœ—Iýë½bÅi;<è´+EN¤±R™íÚDSeXFöx¨*S×/ƒ‰úÖ&‚Èü¶þ²€z;LœQf9­k &¦‚ «)•È4×&š_ õ¹„š1‡ý^»³væµæ€=„ ¯£ © ›]ÆùÆG3£ ½m³Ó7÷úøRwè ¯¦0:%‚ôèøâ™ B¨,g^‹Uk|™j|ñ­üIË—)ý­ïf'¢W/õ¹Z«®½v³KFw¤Í›”Çì|{ –¡8§o¼‚£kUMœ‹J¿Bóݪf/Bt":ÿZÕˆøkÖª: ÎYO¾VÕô„kU#z‡×ªªsKä5ÝínΈùøÖD“€9Q€=»â\0ÑTQ­]cõ”¿l±ª½u™ðPpÊx·£«šGSOÏ1Ü^È‚‰SÀD­R„t!€‰bG€×ÃDéÍÊ«”&j?Åî]ÕˆëÎÑòVâ®ÈoÇNµ ¤éNØw;ÖGw¾«ÓZ ˜0WìM„ÓÄ^D§§‰!ñ—ÐDOÁ9ë¹i¢éùhbHï(MìÎíƒ^êÍñ΋i‚(ÚaoÆÖ¯²·Û%æ©hbWE¬Ú/•½Ûaú®[yç¢Cé¹s´»Ç2ÁPü@ûÚú^41MÔï’S"èÒÄn÷ºŠq M´çb],˵Ÿ:ݹ5‘aS$0z?¾@7¤_o®Ù%§Gs|ìâÈÙû{?v°r|„ÓÄND秉ñ×ÐDGÁ9ëÉi¢é ibDï0Mœë¥äÞŒ¹¸‘tp+o—Q¹Ÿ»íÇn²ƒ´»ªv`Æ>P/_F{tê´Çãèðƒ·òv&åßÊttZ41M”ï2“±÷i¢Ùi¾zo¢=W"Eí'ókþþ[:aÎ`ïoå1–ŒY”‚ñ¥Ù‘*sÝÂÞU‰£q×ëµ`{í\K“Þ¹5‘6Pc|Ö† CJý=Hk#á¯o…‰&®üŽ&Šs´Um"œ%v":?LŒˆ¿&: ÎYOÝHO#z‡aâT/•Yï=ç$´©ÈL4 œ˜Ì?ª6Lìê]‚"?vôe[í­@‚›Î?Q|&šÇZæ„S¬Ì^òk.˜X01LX;¿TSôóÃ6»‚WÃD}n\¬<;j?ÅîÞÚu¾¬:Øúö­.Uµä]¥Í.Ù³šS&÷~ ÀÝ.¿íX4q0MìDt~š Mtœ³žœ&º‘ž&FôÓÄ©^ªLåo®6!ØQµ‰sRq²ü°M•¨Xu}Wïü]4q*:ÂðMTD ØšhÊœW~ØESÑDù.™’ÖÇ]š¨vè×ÓD}.—Q9lÙÌì·æ‡å- 9Ú𢬶`’^Å›¿íDý9šøqZ8È wÐé»—+ƒ‹&ÞMûœ&Å_@}ç¬g¦‰(Ò³ÑÄ Þ1šøÛ9¦Ò;ZÜKÜ\m"ÓÆü–&NKÍ4Mü£^IsŠÕ#~ÓÞÄ?oí"½TÿØÙS{{ä‚9Ï1 ¿”Š_4±hâwÓÄþ]rb@ÑÎì¿írºöÚÄÏs±g0‰Ú#²Ý[m W‰¼§ (-S¶ò¿]¥ÕHøQšhâ“% Å!¬“Nñ4±ÑùibDü54ÑQpÎzršèFzBšÑ;L»s5f{)”{¯M8à–²ÐÄ.Á“À'R 碉¦*'âƪ2úòwÑÄ2Š£“SzŽ&šGÓ”\be‚‹&MLEå»äŒÉI¼KÍ.]MÐöj·Nƒ¹ ½“&h"p~OXZ0e×~ÝìR~voâ”8â•6œ&v":?MŒˆ¿†&: ÎYONÝHOH#z‡iâ\/å÷ÒDFÜXð€&v©†¹wæä¯4ÙÞÄ®ªž þD½ÁwÑD{keMŸ|† 
òMT¹&Ù‚øwˉ×%ìESÑDù.%I=!š»4Ñì8ÛÕ4QŸ‹Ä扣öSÓbߺ7a›i-ã÷ž&¨(`ʹ¨ê*­vD¤ÒÄqYlÑD8MìDt~š Mtœ³žœ&º‘ž&FôÓÄ©^JÉoN[¸Æ®èí³²ÎE§ÔÛ·ÑÄ©è8ás4qFƒ®±‹&¦¢‰ò]zCþõnñ_ÿù~='º|o¢>·@J•µg¸7¥“¤BVx8¾|®ôõLØ,ãË õoFÇ?}|ñòðz1ïƒè¸=:¾8k}‹•‰¯jFk|™j|É[bw7ø5ËÆ¿Æ—j§œÒÕãKyn™Ÿ‘£õW{«]‰ Þ8¾ˆW~¡#~Éu†ˆõ·Ñ@i±K"®VU§‚¨®Š«%›ÖjU´ щèü«U#â¯Y­ê(8g=ùjU7Ò®Vè^­jÎÉ(3ĽݼZ…Y(Éqoÿ±TâÉö¾›*NªÅ.ÿZ,ãϦ‰öÖåµ9¸s¹G1ççh¢zÔz1s¨L“­œ‹&¦¢ ÞRMžoœ¤mvâp5MÔçfJ¹4înûiv¨·ÖFÕz£ÎÞ/\[pNdÒÚ#س÷òªSƒ†ò8×Eá4±ÑùibDü54ÑQpÎzršèFzBšÑ;L»sw5 {)´[i˜7Á|p’¶I 2Ò¢ÆR߯ø4ÑT©Ô]ÔX}Nò]4q*:ÌüMTN¨‰$TVó,šX41MÔÎÝD,¢‰b§ty–¥àš OãÍîmºëh7Ö|°Z%µ'5 rV»Bô(M4qHh”CqµÜÒ¢‰hšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰Ýy]»â¸—BÆ[i‚·÷µQÿQP+˜Öý¼Ñd[M§2O€X=ýš›ñφ‰sÑÑ·&šÇ" ùƒQœ_“.˜X0ñûaBË$½€°@&š^ŸäCÛÖH"R3U;`ö{a‹ÇƒsNºâጒ0êõû³çœNˆ«ùˆl±D4IìDt~– Ktœ³žœ%º‘ž%Fô³Ä©^ ò½·ò2ùFr”üœÔ7õ€~+LœRôeçœÎEÇô9˜h%Qû¼ä_0±`b ˜¨·ÝJ"ÀD±K×Ä{‚ºµŸzáÎ ”ÊL›í=MXiÁPB ÖÇžfG†ÒDsjœ N|U3 §‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašØGEv~DÞK ¸ñM4 ž n|"Õ'Ë?^U!€:H¬ÞíËòïÑ¡ò»iZªê9šheDš]ÆukbÑÄT4á[ªÇX‰šØí¯¦‰ò\,MÑ­£j"ùFšÈ¾©°¥üž&¼¶àš#Ã"¥µ‡ÎÏÒD‡å÷! Å!ÊÚ›§‰ˆÎO#⯡‰Ž‚sÖ“ÓD7ÒÒĈÞašhÎ3×Jòq/u{þqÁMíèÖÄ)©m.šhª8ƒ  Þ¾lob޳÷ëw;~&šG·üÁïöš‘fÑÄ¢‰)h¢¦s¢êÐD±K˜®§ wÏjoöFþÛ~ÜÉò½{ÎȦoiRiÁ”Ô»4ÑìÊëÑ“NÍi½HŸÜBqõ·^4L{ž&†Ä_B=ç¬ç¦‰~¤ç£‰!½£4Ñœ3b …w; {ï`ã†zq£I͹ˆ¥¾.ÏÌ@çÔ3ØWÑÄÉè¼@íÝ4qN™Ë¢‰E3Ñ@½¶þîZÜ+Mìvé¥ÀÄ54ÑžK¥OS›õÿþëŸ^ó+ß@¶er}Щ( =QÆþ.}³cOÏÂDÇ€Ö¯½ÛQ¢Ñ,±ÑùabDü50ÑQpÎzr˜èFzB˜Ñ; çz)»ùÚ„Û†zpÐéœÔ 0LœSÏßuÐé':\÷âèpzîÚÄîÑŒs¬ì_E[L,˜øý0e’nYðÍñÁD³#¾º4j}.$T Ô_­jv忾ó6m’ÌÑßÓÖ,e”맨þ±ƒü(MT§š¼üBŠÓ¤‹&Âib'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&šsÂ,¦q/…~/M(ë–h¢IÈ^þa,•|®ô°»*)ãúc•²|Wéºý­­Ð*~ðÛ*Âs4Q=–7NÜÏ#¶ÛA^4±hb*š 2K'ID]šhvürBæ"š¨ÏeMf©OãÍŽáVš°­„Bø=LPë ‘RÿDÖn—éÙ­ j½ ‘ô3Þív L„³ÄND燉ñ×ÀDGÁ9ëÉa¢é abDï0L4ç˜Øˆâ^ rº&@78€‰&¡ Þ¯HüóJ:L4U…RÆXý›ë)6L´·®ÅLHãè0=W»ytr-*sÌëœÓ‚‰©`"×-L¥ã´.L4»„—ÃD}nm?ô{Í]ï„ Ø–©öû;ØEAiÁà–-RZìÊ'þ(M4§ÙÜ3Äâè¥à¢‰ƒib'¢óÓĈøkh¢£àœõä4Ñô„41¢w˜&NõR9ãÍw°Ó–ù ?ìI©2ÙA§¦JÊt³_Ôc·cø2š8—,–·ÓDóè¢,­6;“•vÑÄT4Á[‚òP±KÍ.ÁÕ•ëÚsk%í§"ÚíDýÞƒNHéà6o5Ñ‚«¸J+qHz”&ªS&ªWÆBq¼2:}0MìDt~š Mtœ³žœ&º‘ž&FôÓDsž“²AÜKýùa“l2:”:Y~ØõÆ–)VÿŽ…þhšhoÍ™U8Ž¿^N¸›&j}`Nõè¹FÊü¥Žï¢‰EÐDQ‰õ„%cŸ&v»ËK×µçÖLMÖ/]÷cÇtg~XÚœÍáà¶”¾E$SZzµ+•=JgÄ ¤U;œ&v":?MŒˆ¿†&: 
ÎYONÝHOH#z‡iâ\/uw~XÒÍühoâ”T„Éhâœú_‹þÙ4q.:Oft:¥ì_×ÃM,šøý4¡uͳyê_Ânv¯®6ÑžKB”ƒ]ÇfÇzçI'ÔØñè¶–¬’”!RZìRæGi¢9uCg‰Å9¯½‰pšØ‰èü41"þšè(8g=9Mt#=!MŒè¦‰êÜ ôG˜Â^Ê’Þ{o¶DrÜÙ˜Zp{·ËsÂÞU¡Z 2ívü]Å&ö·¦\þ“x¬4¢çŠMìË<çƒß}]›X01LX›¤3‘÷ï`7»l—gt²v·šSÒþ†r³Ë~7Ld:€ ÛÜSB÷{Š]‚øÑÒu»¸Z§;‡â<‰¬KØá,±ÑùabDü50ÑQpÎzr˜èFzB˜Ñ; Õ9$ñ,q/¯•î Uw‚ãÞJêRß-rýNšhªrÒ š~=”ûgÓD{kV à8Än÷’ÿvš¨‘ˆ T†å‹&MÌDµÀuM{êÖ?èÔìøåöîE4Ñ l r2Ù5;F½¹Ø‹,Vª“Õ2jª¬0ŽçX½¦/»•÷Så%ŽÎë˜Û‡—Zÿ=!z ¦Í^êñ®áe /s /¹ÌÛ¼8Œ†—ü¯Å¢ë†—¬Ëÿ‘¢ö“•î]¬²TúV8_ʱvÐÁ­¼f—üÙÍ)'I ±¸2ˆ®Åªh¢Ñù«FÄ_³XÕQpÎzòŪn¤'\¬Ñ;¼Xµ;ׂ ÷R|seÔ̶Q>:G»KpBþ ·gͳÑDQ%Â,zìvðu4q&:ìOÒD]>õ¤üÁ0î’M,š˜ˆ&0µÅªlh]šØíøò­ïö\GÊÒ?‡¾ÛË4!+ÃÁѪ¢ÀëõÁrÙí¥‰æ4cÁ½>MüØQZ4L{ž&†Ä_B=ç¬ç¦‰~¤ç£‰!½£4±;ϨA•„ÝŽÞ]E¸&¤¯#{O?RµŒKKÍ2Wþñ]×_9«2~Mìo-Y5ŽŽ¤çr|ü?{gÛ#ÇqÜñ¯BèE`Æ¢»ºªð Å΃;,z#!8Q'ŠÈñä1!è»§ºg%­È®Ï̦ÃŽ{uSÿ©~øMwWU !9%m»HûÒ÷N]ÑD¬¹;8n.}vé$9ÄJ4Q®›a Íw@ÕN³n¹6tà¨QÏW32¥¥—bFì(µ>(çxQš¨âX ÀÇ”`§ ošØˆhÿ4±Dü:4ÑP0ϺsšhFºCšX¢w1MTç)”GÆï¥Lä¶4ÒuŒ&©FÎ[®ã-å¾h¢ªR1Âõz]ÇòŽÑÉ÷‚ÕN_Ž&ŠG{r„‚¯,áIíà&všè€&ÀféP“ñI“&Š]YœX›&ÊuÉ:ur^WU;Ò¼íFZ¤çij ®õY¥fñ²4Q&´/h‚8†=LJ;MlD´šX"~šh(˜gÝ9M4#Ý!M,Ñ»˜&fõR)¦w:åít:J`‰“¤b_ç&UŠ |õÂW¶6Qï:CIFàGGOŽ\nNÅ£P æÔU&1îwšèŠ&°ÌÒ5!çfmÔÁN0­Måº rÖÜî÷»@[ÒD>0 ÊM`iÁÖÎs{ÓoµKY/š|‡)BH®8²'ùp§‰ˆöOKįC ó¬;§‰f¤;¤‰%zÓĬ^ aÛüãHùd&æ)¥¾Òª(¦Ì2A½^W1£ct’A`ö£c¼y9˜¨%‘Ý”¯,íK;Lôd“t jœÞ^š(viu˜(×%ÌœíCÕ¶=6QŽþÉȱ ²¬1Hj'ù¨vrZ¶ì0QÅ!“vÅ)œìÂÚabd–؈hÿ0±Dü:0ÑP0Ϻs˜hFºC˜X¢w1LTçÄHºPżñÒDJ‡GJ£Î“Jˆ}ÑDUÅÖÏK˜ ^¯+cà1:ìÆýè0].¥Óà±”)¢ ÊvšØi¢+šàC({{XÞO–ö+š(vNêK¬D ÿŸ¼ëŸBØò6ò­å‘c\ûë‚Ú);¢‹æœ–”о¸ŒûÒ„;MlD´šX"~šh(˜gÝ9M4#Ý!M,Ñ»˜&Šó2ØDÞí¥rH[ÂN‡˜ÇŽMT SŽêKÐMTU²·}x°£+[›¨wØ9œ0Øá7:U)°ä ßÛiþæ&všè€&L%Ú:Bn§tªvœW§‰r] jÿµÛO±KÖ‚¶¤ 9hRá‘NröàPBGi.{óEi¢Š³nˆ©=øU;>93¸ÓÄÈ4±Ñþib‰øuh¢¡`žuç4ÑŒt‡4±Dïbš˜×KIÞ”&Tð8ÐÄ<©¹³cU•„Hí¨Ç»Ät]41D'jð£#ñ‚kÅcŒ9x;5ª]8É̸ÓÄNЄ )": b«]¢Õi¢\7›·ý˜]¤ma§GÖ&´´`¶¨§öžÆjGç²nHÕ©dbu;Üa»ÓÄFDû§‰%âס‰†‚yÖÓD3ÒÒĽ‹i¢8‡2}m¥Dæ°ñN'ãØN§yRSì‹&õ9 GW=œ)BûaÓD½ëR‘=ûƒ%ÄKÒDõhS!š¢ u/7±ÓDW4¡5Ak¶ÏC“&´&hMk×F-×ŲˆS'¡Úñ¦;ÌG‰½âyšÈ¥לOíõÑj—Þ¯½½)M§e•‰]qOjaì412MlD´šX"~šh(˜gÝ9M4#Ý!M,Ñ»˜&ªsŠ˜ý^ hÛµ ›K$ŽÂž%#÷Eƒ*Ê0LP/×U¼n¸k6P$G’ &ˆ­Ë~j§FµSÝS:í4ÑMä2KÇrÚ¬} »Ú­ž ¶\7J²nÛm?ÙxmÂFYkÌgiá<·“mƒ]-^78%Ü®Ü:Øá~nÂ&¶"Ú=M,¿ M´̳î›&Ú‘î&é]Józ) Û–›°¾ü çiâ(5à©ÐM 
ªˆ1NPŸäªhb^t8^îÜÄàQíó)s ýÜÄN=ÑDy.Á WE°Eƒ]BZ™&Êu1DŽÌnûÁ2Þ’&ø"åó4K Îs;Iõ`—䢧°«Óå¥ctÅ¥H²Ó„7MlD´šX"~šh(˜gÝ9M4#Ý!M,Ñ»˜&ç6Š(ø½Ô™ÃÁ«ÒD<+€ðxoŸ@RÊS¤¦¾2ĪPKªD_=¾Æïæ‰zׄxJtô‚4Q=ÚýÚûÊRØ×&všèŠ&b9]Ë)èܤ‰j§'™©W¢‰r]%!ð§Á(ªºå)ìx(-ÒùSØÖ‚0ÇØæžj$]”&ªSFç­_µ#Ùw:¹ÓÄFDû§‰%âס‰†‚yÖÓD3ÒÒĽ‹i¢:OˆØ®Ð|©Û–›`scP5²61Kj:sòàÿ”&U9h êåºJaw]*(¶ËÐíNó°nMÅ£`ÎílSƒ²Óœ4;Mì4ÑMØs‰È’ì!lÒDµ#\&Êu)—÷Uî4IyKš±ÑGßVaiéš$æf–Á.…‹žÂ®N3é´Uí`ßéäNíŸ&–ˆ_‡& æYwNÍHwHKô.¦‰ê¼LÐÛ üŽv6¥ ‰z Ghb@ÉÉÕùÓ-I_4QUQ²Ïy‚ú|]¥°ÑQûBtè¤rÔæ4Q=ªÍÆT|e‚;Mì4ÑMØsi½ Mš¨vÖ>7Q®[ÖÉm?T÷¶]›À¨pŒ&r‹•:ËôÕ$ö6¾”ÄâÉ@‚|õ˜ãµ/3¢C|ÑñňËh¬Ã®8P‚&¼ib#¢ýÓÄñëÐDCÁ<ëÎi¢éib‰ÞÅ418g†)½TÆmååšóM $9…1~º¥Î–&Š*ŒÈ9WQí\Ù±¼!:™)€›µ_Ž&ªGöðD_EÜib§‰žhÂTr0N÷6Ò¦2›²vqÔêJ©2t[6Û}Ë¥ Ì5u8˓҂S mš¨v„ù¢4Qœ–ÆÞë–*.ë¾%Ì&6"Ú?M,¿M4̳îœ&š‘î&–è]LÕ9XÛKQ„åQ:0Œ­MÌ“JÚMTU(‘$ùêáÚÖ&fEOÒ$oNÕ£FJÎÑÿjg_ÒN;MôDö\R9 ¬ÜÞèTíˆóÚ4Q®›¬%gƒgµ $ÛËÓÌ9å1š (6žIfG©Ù¥4úFƒ¾zôÅíS›¢e8SBàLu4«QBsöé«»ÿyúâÍëúËcÇsóèJ?³O¾¨ÖŸÕø×»çe¸(±=úÿ2Ì¿nÌó‹WµyþîÑ_<¿»’ß=úüÅÃíýMfþ¹ Ù³g-¤à„}enzSz,kÏŸ><5Ûãƒyl&ÿvûú»›Òÿù7¾ýÃ×ÿB¿·‡ø×Óí³oÙ§²>áÓW/žØSôú× ”Mðã7O싵f‰ÖL­]"=IˆœQÞmª›Sôv¢ïweÞñé0ý¾ê›òýÿéÝO­›½{yóÑñ³òëÚÓþÙäÔXQ/÷ËLã÷_ƒö¥ñd š}°}ÔÌÙíÉÓb>ëãòe™Î¼Û÷–‡ææÿÅ´èý¯´>ÊÇVÈç¯nŸ¿~úëq¬üôÃßnïïoÂ÷!~Ë™#~ùîëÇöWwß?Ü uìHA­så1y|ó›ü¹¥”‹ýö7á{Œ6ùº ¿ýÑdüáöåí×ÖºžÞ½>y”~þøíÏORý÷ˆÒÇX¾ÿ÷?üóóBßœþÆèñ¿|ûÃGÿôæé}y2mBðüõ‹û»òãï^Þ¿xûÌ&qöá·OŸ”Ïê×õW£vkoëÏk/R~<¢ôÇŸ~Rþõ—ãDùOO¿½{üöñ½‘ösûÛWåwÏn­|xyû¸:úe8y}[:¾×Y\þ½Œß-|öÉgÏo_¾þîÅCýçý‹7ߨ<¼zq÷êDÆðëì­?]xûŸØÓò仇3¡øcÕÏß”ñÝ LÙíçòã×·¯îžÝ= ²ü€ýXšÒËû§Ÿ>Ü¿ýåp¢6to?Ž ™l*©;dªMÅdžLUüêÑíË—÷oo þ‘Í žü<.>¢òN§Œ¢w§#]ytûðp÷ìåãp^Äš|ß{ßWìâ/›@n_ÿ÷=yuûò»J0¿zô×7ÏËwó¨*;¾ˆc>ˆU}ŸãŸqŠO͈ȾO aŠO8ñ9Ûœ1²F7¶Ùfš8Å'º÷Ébw  ÞËe³ƒi>É÷©$ÁɈWí8Oú>yŠÏ,QBò}ê´ûL®ÏSðbkv!Lj+âÜù°+>´³¤ÌRy¿´óï<À†gyõ³)åºHɼ” é–IÞ©|iÆ’¼gƒ0I$!{8av|’à«Å0à€S> (äDÎúá`G“Öò"ºNm(GÁä,W»À“ó"¹N¹„í\µwœš™Mrê,•R(ÝrPNØtZí úõ’ˆ8ˆ#fl×6ìðäØØŽˆççþ­ˆvˆ‹Ä¯‚ˆ-ó¬ûFÄv¤ûCÄEz—"âàœ%h{X<ÚÅ‹ ƒTGŠ Ï”Ê¹+DTIŒÎRÔ`—ÞùA#â1: N žŸ¢¨CÄâBùÖÄÆmòÉaGÄ{BÄò`r)l[ˆ8ØÁêU…ëuËI"ˆn·mØ6ÍÜO‡LFç3÷¢¥2¹8!’O:¡Ã$Ÿaf8•iˆ(ŽÓX:+Ì1¦f*£Ýek÷ NsöÅɾ¶æÏˆíœ–ˆ_œ æYwNÍHwNKô.§â<êöR6Œ›'j pªÊ[Ívé£]À¾À©ªª¯3'áºê[wM¥†]˜É—§â¢!=F_gÞÁi§®À)–¼kRr*6ó¾ v§ÅLV§r]"UtV7ª m Nxˆjq'°&ŒE±=À;ºhéyâwœp版öKįƒ 
ó¬;ljf¤;ĉ%zãļ^J6®I“¬Ç‘¼o3¥æ¾²HÏS¦øÏ‡å®9”’=äF‡ètCÜÖ81K™MÙvœØq¢+œ°“5SMœ(vét³ÒJ8Q®›í† V¼Ä9¤Mq†—D)Ÿ/JCXºèh´ÁíwÅ2\43Ä ŽDsòÅái!ì'F扈öKįƒ ó¬;ljf¤;ĉ%zãDuž08É’;¦°)N$Œ‡9¿séÎ¥=p©Ö͆Öj Í¥Õ.…Õ3›”ëJ)ªÜÆÆÀ¸)—}Ø£ #ÛTL$‰%¹‰§ÔðuZB $—a”RvjvÈÓhDÔƒG)ŸH›˜ö7”æŒÁ D_}æxmCéôè`ˆù’C©yÄI&(C }(݇Ү†Ò\ ŸC©ü“›Ciµ Êk¥y(f+êT'ªvœ`ÛW¼"ÃÈ+^S€!$"gÇÈ`.{Z¸:ET…싃´ïq_¹4"Úÿ›¹%â×y3×P0Ϻó7sÍHwøfn‰ÞÅoæfõR¼m “˜Ær8kš"µ/œ¨ªMÿ„± ® 'ê]3äœÈÀÖæ8Q=¦\2RúÊNS¦î8±ãD'8ÁÉ`˜þ—½ó[®ã6òðµÞ‚¥í–Ã1€þ@[¾ÈÚ›”+ëµ+Ž­‹ÝT™6\‹‹¤œuTz—œ ƒóÁ[E©Øqg¢ÿ¹\{~Ñ!\÷>Iâ—¯_½8ýïoŽÎÞk;ks»Ãçù¿ÞtÝŸÿüæxsxq|ôüðüâõÿþz“?œœ¾xñôàÿÇßÓÛrÛàÛqâÓß?úÓ¯ç›o6WGOÓqó‘ J;þpúJP¡ý‘qe\ ŸÝkáö¢Æ6•Åüö»¯oÞÈÅ|ÐÂ/vl9¶ïvù£oÿGFçÛ[òþŸ»ÜÝGMÓ|þù;8=‘!îT:«ö=½Ü¥±ÿ8:Û\žÉ€}pïç±ú0ïÂGßo^¾ø÷ÓW?oq9þÎæg㇯¿ÚÖXÛâñ‘ßœã¡=>r‡ù0†#x0Ž›ø"˜ž§¢×®ã›Ë×o${º}L»-ÑNB‹Íîv¿y•»­å³Ë¥™ûzÔ–|êì¼ÓnêÆä×oóˆ~pù8 准ËžZwp J›?üéËÇïv Ñ÷;½û¾Ú¼OGî¶,9Ò¤6!oîw’)¼>ù~#ïæÉåÓÛœµ[ùö¯òxüqób#Y¸¤OÞ¾ý ÷mû‘涃IéTê‘Úãñõ›o9,õS©×¾ýE~ùyx~,™Ó!š8L‹èãq ‡ÏÍo6'(i޼2rmïv»{’½<ý›¸}ºë-Ë÷7G¯$w<ù]ŽèÓƒÿü³Ð¿ØöwÉøq.wQ÷¯2X^üÚjÜí·ÕH•½"V;7¬à(¥½¤±è=9°ê·@±ƒÈƒœzũ͟FQ~ÊN³¹½ÖDÎN±ËuÇZqÑÙu’S›½*D´þIÎ)âç™ä,(g]ù$g1ÒNrNÑ;y’³uŽ©…ÚK9c`áÝÌ2ÜQÏnæ‘R+«‰Üª²iõ þ­™l¯Àꃥsa“œÙc*Õ]®‰ÜÚ1­»™×Iκ&9mžÕT6㤻óûö³ÌDk©]´"íòhì’KõÀ7âÐö¬Ô£‘8’W6þd;º?5æSŽ~w|ys.ls²Écá®1FV€Ú†=Qå´•έqmØÎGÊÎc•¢]õ®uúòô*)¯NN¯r´rúxðOW¿žo¾xòÃû¿ü°0Ú·Ý‹SzžÈpxtùúÕOz „LžÈxvy)qüâÉW§—ÒXz¨^§Bmòürztï{}óçç›_d¾¼¹ŠËæà»xq|&fÓãÍ éWÒÅÈ€ùFœ4Oþyr©7ŽÖÁŸo¸ð6]¸¾€/Ú¨Ï{ oÉðîÍ>“ñÙ;e:»µ3vÏHFˆ qKë· #;Ü4(`ã|/s ²ó,wMÕÈάÙßšýÕ–ýy€!®_!õÏK½ýÿðrwñs}…ìU0κú¯…HWùrw½3|…çN9qz©¸ì†aˆØHºÑŸ¤¢ƒ„÷¿ÝÁi±ãN5žÙpƒÐ¼ÑV}d»@K³Ü8I‘{yz¸ÒhBuàœIòÝõŽ\GÇ™Î^¿} Op¬® :Sîkƾfìudì$ 8¿ÜÛFd¿ÀCÖˆ4b;o—Ü!àb㣷û÷CªrŠR±c?¨&²ºæ$¸`Xüj£šXD7l[‚Wœrš\J÷Ä–¯4Û1ó^¿ã$§ÞxGÑ©â|w=㊘=ìPˆhýˆ9Eü<ˆYP0κrÄ,FºBÄœ¢w2b¶Î#ƒ½” ŸXê%ýì;±4K°⩱2pʪÀm¾·½Ê-ê?ipèTü\œ²GNGF]ùµ&Õ Nu“¨ iÛPùË\¶ÃÎ7‰™À)µËÆ‹{Ò^ @iÉm ¡ h˜ÂYÏ“ÊÛ’ð‰†bçaÐ)(¨óå´Ÿ;` QëùÄŽÂ0pÒŠùú´»ËÆÊ)²ÙnK°EÁ);%¹º8ê”ÙXÁ©'#.D´~pš"~p*(g]98#]!8MÑ;œ²ót\¬ z/Åà–ÝÏMÔ°åpʼ‘Æ +[*™UIf`ìkí¶œáòIƒS¾êT/ưèöNÉctd½×‡ñèì N+8ÕNò`zëÙnYaþìÎì­u³Ï8¥vÑa:­P{<\ò „0¨gÆÉ§}Ú1§¿‘íB¤! 
CFeé èâºZ­ ³%9U"Z7ÃL?aã¬+f5Ò•1ÌT½“æÖ94dõ^Š]Xx-™oÐl[K6^ê}Üúx s«*+†¿µóô€†¿¹jJ­=:$OÀ~æÖcoYÏ1ˆ;…PV†Yæc3ÌõƒéÀ½‹ý skdæd˜›vÑ2ˆ/í;GËV/¾•qËZ²VAÚ€#U`˜[»®ÒÀÊ0 ˆ˜=ƒê48ƒœ¢ê4‚nI»R±snÐ$DŠS›{HŠ¥íŒ·väq¯´–œ²5` õ®níL'¹Xi­' /D´~Z›"~Z+(g]9­#]!­MÑ;™Ö²s¨w¡,ƒþ²§ÑG“B¬e’¨{ku¥PSô[U$©‚']=YzX°–¯:–ÈAÎÞ£¿ñè%5óR al\am…µª`Ͷ9èî—{vçlg™ Ö¤ÝèL €ê«-vÞ,9áD &òYÏÓV}sÖ+JƒóØÁÚ°‚0.õhMpeXËvÞÙ½"Lrê-I@œ*ÎwïÝŠ0=¹i!¢õ#Ìñó LAÁ8ëʦé fŠÞÉ“;&P>=µvvY„!kc©‡aÆI%ª‹a²*€X*‹ú^}|` ÓF'+$ÝÚìa²GŽMЕQgÍÏÊ0+ÃÔÀ0.oÈ.ðýÃàŸÝy€åÍb˜›aR»dLª‰¨½@/yƒó [cû伇òG²l·Ï-ý7N£s‚NN­]÷à¨yb!¢õãÄñóàDAÁ8ëÊq¢é qbŠÞÉ8Ñ:OÇ‚ÞK‰ÈEqÂ6Ö:×ßÛGGhÌ©XÙ”HVé% ~Ë„Î'£¢#lj쑅N•5òÙîƒb+N¬8QN@Ú[C’a²/âD¶ÃÎ13á„´Š+œ×yã?FKN‰˜ÆÛ´Û¥o€IÜ$Ô^u± 8¬,™W¦D -3dµy±s1œ‡ ºScãiN£ó–jJ“zŸ>òèê…‡6”ÊUGAÛ`õèD ûJcˆä-¨Ê ›k®Cé:”Ö0”bcœw(Ïj,¥ÉŽ»û±gJS»L$.¥ÙÎY·lYQI‹z†RQ@œ8$M©`¯ÝïÎÒì”<±²V¹µ[wÓ?¹"Zÿ—¹)âçù2WP0κò/sÅHWøenŠÞÉ_æ’sËlL€½TðËÛô¾!´=ý£¤rmýI•K‡²:«ªwÝÉtÕN<'}°”(îq±rV(¿×ï8Xqbʼnêp"I¢² ÛQö½)Êàß4æ-€Ó&xG¸Ã„X|V/:xc®ð‰€Î÷1 û˜>‰aÄÎÓ ¥lÔÛãƒE µ¡TìÀúÈVuœ˜³ ]i²3ƒ¶³²V|ˆRÒÆ;_¾Òlç,ï³Sö^n„.ŽBXQËý ­§ˆŸ ÆYWŽˆÅHWˆˆSôNFÄì\ÆHƽóÂ'€35ÎA"Ž“b]ˆ˜UEI‡¬ÓÕ{⇅ˆ£¢y3NÉ£sÁ¡±ª2çÌŠˆ+"Ö…ˆò`Êc*ôçL‘Ö²<èsÓZj×¥ù&e-x¶ ÛÎÖlZ—NܳœëÓa­F`ÄŽ`ÐaÜ 8qê7(DãÊ»`²Ò~&;•Kݲðç¾8Á¿•a´ä´ÑúfŠøy¦ `œuå SŒt… 3Eïd†ÉΣ$ÈÁè½Tt°,ÃÛ0cÃŒ“ºeÝÙGeÎ9À¬Lä´vöPÍWÆ8 :ݓۖf˜ì%c1¨+\X¦.†•>Á”kòd;îF6Ãpš> ¥Ó¦oí¶í¼™u?+$á¬g€á`“.ZËØÙGëM2* ãS¿áze&ÛQØïR½ä [‹N;T·2LOrZˆhý 3Eü< SP0κr†)FºB†™¢w2ô΃ôÉz*"ia†‰ ø>†'5غ&«r z]ý¶ZèŸ4ÃŒŠN·*ùâ “=’ô¨JYÄl'™ÈÊ0+ÃTÅ0ò`JHÄåUsÙíìó0Òn´dµj,`@‹ž‹íK‚=ë`Ø'ÔŠA{ÕÅý C ˜† ¹t~^Ñi¶söÊ0É©t|òÀ¢*.¹+ÃhÉi!¢õ3Ìñó0LAÁ8ëʦé fŠÞÉ “³5N) 툖.$9¬§ž!fœÔXÃdU½¶Øº½JÀ‡Å0mtbºqztÂ>×’%ìpÛgì{ÊÒz™•aV†©ŠaäÁŒ\4¶\(ÛÑü “ÚõÖ‚5ŽÞøEã¶t`¸o€!"Hç«uBb' †a•aÈG‹ÈhU§ÑtæyKN½ê4:(}šQœŠ¸ÇI«>$ù´ÝH¿Ò”˜aKõ´}o1 r©Ìe§ÙŽß+"ÆÞè™ù`§Ñð êÞêN%X£Ã" g;„Ùa8µ‹) šñ§Y2\rQbhûÙˆuÞB 4ŠR±ë®/Ý­8„4&ˆHÁ­';‚a¢²›+e•j±ŒÅÉNØëé„cıéÖ~\q{î_Šhõˆ8Iü,ˆXR0κnD,Gº>Dœ¤w*"Žì¥â²E 1o^æíåÆIgªBÄ‘êýêH1.:û[ Ùzôì±\þìú :É+"V€ˆéÁŒÎ ³6”À)ÛÙˆsŸN˜ÛM3sQµ£Xñ’+!AèÃqྜG·ríŒÖN¤b,ƒ“µÒoØtênymLk°×i®Öi:t¢<ÛØÚ\+R¨Éi!¢õ3Ìñó0LAÁ8ëʦé fŠÞÉ “œ;ÄŒz/ò»¹8ÚÆÇí³\­R‹†Ë§æ]_Q¨« E«ÊE2¸-`?m„ÉW „®¼óùÚÎÐþ&{do±|êrk·°¾"LmcÓCpȾXT¯µ³4÷‰ˆ¹]a(†@Ú ãÖn{>„á&ÆhCÏ72›—ý¹ËgG¶v¶{/^¾þkz .^¿lŽÎOÓÐ/ÝÓÏá2= 
¿ØãÍÕüNŒ¾þ—ÍÙQê¤N6ç’¸_ç“òj¸øÙo$äïÒö×à>û—ƒ”Þ+>:Í÷]©Oev´îVì8 Z}éI5—{H„X^ÉÐÚ!íÖ²Ó€Û– ßç× '= /D´~X›"~X+(g]9¬#]!¬MÑ;ÖZçžèBºeOÉŠ®Ap=´6N*Õµ&±UÓ†4£«[Îmù¤i-]5˜4aãDgå[e#cT•…•ÖVZ«‹Ö\:ˆŠÈm©±ýìÎ|w.w&ZsùÀ* ëX{Bà°èJ=jBú2Ig= º„0l´WÓ„Ó }UžU†!é6L:PLq*vÎÚAæý§ääù‹ºS„AEF¼²mÍ‚ô(DN¡ÜÇ';ˆH{¥µ,$opF‡®Sds¥µž4¼ÑúimŠøyh­ `œuå´VŒt…´6EïdZËÎÑs`Ð{)ðËA4‹}´ÖJ2°¡.¹2Z˪(–÷T\Û=°B‰íU§»V®Qvmgö¸<ðÿØ;»Émä ßÊùÌ2Y¿äFìäÄ@‚ Û;ëYø{sû)R£íi±Z ¤%¦ ì‰g ªWõ5‹|D²ªzÌžø»¥»3C“Ö&­@k¸ÄH-É7 %V;¼¯ôy­•ç²ÄìT]í˜ÏÜ[ãhD(¤øÃÆcƒ\BÂvAŠÕŽò&èM‰}ûã7öçÿö÷ÛMoþ÷;›Å-¨ïëÐgùºü_½2ì·´GFP6vú¸Ž`HÑ¡Üb'–È/¥‰*ŽÊ%½èŠS‚Iî2±Ññi¢Gü14ÑP°ÏzpšhFz@šèÑÛMÕ¹pi·èg)‰çnE,’6`¢*ÐZù ¥y°BUUJ2øêÐü´a¢¾u¹dÒnp³c¹&ŠG{cPö•%Pœ01ab$˜`[¤GQhv^íè®ÀÞA0QžËd ‚7~bíÛ}nÇ.A [0!e‡²ÇÔž_¤æ º¶ì]‡I‚øâÊn΄ o•؈èø0Ñ#þ˜h(Øg=8L4#= Lôè톉êœrJÎvòjwò¹6&\r š¨lbRg¯wµƒÁjFTU@í^Š7»W»…TßÚ&úôÄLž$_×<«zÌ!Ú{±¯,ëÜš˜41MH)A¥šh›&ªÒÑåÂës3SfçMôˆ?†& öYNÍHH=z»i¢:å•zYÊìÂÉ[€KÝ ‰UEÔô„T„±h¢ªÂ˜Ø¹»Zí :^‹&vEïÚ4žNÕ£ÄÒ/ÓWÆ1Nš˜41MhÙsPÌ í’ÕNôðtå¹Hˆ1xYÛì0ó¹5´í‘9?¦‰TG0€wÑ¿ÚáÇÕ¾O¥‰ê4QŽü„8å¹7á.Ÿ&zÄC û¬§‰f¤¤‰½Ý4QœÇ hIÖÏRùä iÌiIykobŸÔÄcÑDUʹÝÒïö–9½MìŠÈ…õ¬«GQV§v[µã”'MLš‰&Ò‰Cúø¤ÑWüýš!ÇÑ4QžË˜žÙÅ.Æ3¯ÜÓ‚¼ÑÊÔ”û%¼‹mÕN96¿äŒ`ÜÑU‘ùÕæ{kÄ€üèÀ•{ßÕ£„hë+cIs~™óËHóK^BÌ9Ú›óKµ3Î9z~±çZÖ+ÇßÛ#»Ú‰Ê¹­²3`ôx~)Ê!%VO©Bàk÷¾«SŽAžGi6_s?C4":þתñÇ|­j(Øg=ø×ªf¤üZÕ£·ûkUu.€’ƒŸ¥8Ÿ[䃳.6õl|­Z¥Š-«Á—*ŒcÑDU¥„IÈWo³ÚkÑD}ëÌÌòÄÏиã:š(±üò1»ÊŒ…aÒĤ‰±h"Ë:JAš0;Èt±'5i¢ÚE£i¢<·VæO茳“pê½¼¸h¹þ¸f @ý–•XC{~©v”Ò¥4Qœ ƒ“ «]äIî2±Ññi¢Gü14ÑP°ÏzpšhFz@šèÑÛMÕ¹%Prhbµ<{oÂfZØNöµ•ª¯Ôðh,˜¨ª20>gå×êfTßZCʨÁN©¹qLT6:sòÿnÊw]¨&LL˜&ÀéP6Ö¨ ÕN# P:e â-Ñ‹]8õÚD^PmÔÇó .õx@J¡ƒªÂ¥×&V§)Åè0YµÓ&¼Ub#¢ãÃDøc`¢¡`Ÿõà0ÑŒô€0Ñ£·&ŠóÌ¥O¸Y*S>÷ “Í(9âv¶ZꃱÿZš¨ª4br>|U;×jgTÞ:…òûj·þ[£“/,X•ª-ÈÜ_ýW˜%'M E¸”3´ö ؤ‰j'áðƒNå¹¹¥MÞ=†dq9—&(ÐMå–Ä©Tao*­v”¯½6QœfËDÙù”¶Úñ,@î.Ÿ&zÄC û¬§‰f¤¤‰½Ý4Q#ŠŸBmAuîA'%\LæÆA§*cÀ'¤æÁ:UU¤ ùê _ìÚD}kLÎ¥’ÕŽè:š •_Ê‘n_Y’I“&†¢  ¥=ÇöÞDµ»/ MØs1Ù£U³3~"²Ê™ØÖÛ4Áu+çØé«]¼–&¸|nI!€'®ÔÅ›{î2±Ññi¢Gü14ÑP°ÏzpšhFz@šèÑÛMÕ9¥RÂÏRôñœcÛI\"ç š¨ƒ7/­vQÇ¢‰ªJêþÄêÓkˆ½EGí†~täÊ“NÅ£,(ú7[7ñ¤‰I#Ñ/1[§§v;£Õî¾ÊÆA4QžKÿmçÚÑjwnsÔX÷Idƒ&¤Œà2Ô©=V»ðqã¥Si¢:eIÒ®yw³»»ý2ibc™Øˆèø4Ñ#þšh(Øg=8M4#= Môèí¦‰êÜ|S@?K‰œ{m‚SZ7 Ä® 
ûR5ÊX4QUeÈÞ—¯j—ò‹íM”·†Êö„œ¯kg´*CŸí樫ܕ˟41ibš[¥3“&lÓDµÃtø%ìòÜÒs47² t žI+Y…{yZs‹Rv>ÿW»$×ÒDq Š‚íúâ«c˜4á-Ÿ&zÄC û¬§‰f¤¤‰½Ý4±+KÉ£«gGžt] otÚ'•ÆjŽZU!¢Q[tÕc¤»…½+:pwìtš¨Y îûÊ'MLšŠ&t)ò(°CÕ."Mö\ûG[}ƒ—µÍ.œKº.Ù4ð˜&RÍ-¼RÐÕ.†ki¢:¥²qB¾8š·°ýeb#¢ãÓDøch¢¡`Ÿõà4ÑŒô€4Ñ£·›&ªs¡Ã)”O¾7AåëÑ&Mì“*a,šØ¥^Ô6ù¤i¢¾µýB³Ckt®,[ë±Q²éñP²Ko/JîËR‘Îݘ‚zt‰òÖl¿G¬ÐP4¹ªÂr6úêáÁªêS¦ÉÑI×ÑäêQ 1øÊæÖԤɑh²ü.)bÆøq±¼¯þôû¥Rüë`š¬ÏÅcÎîø!ȉÎ-\êdäù%–LX*6•Æ5WÁ¥4Yj”Ðþ¹ÚÉ,éå/Ÿ'zÄà û¬ç‰f¤ä‰½Ý<±:GLøD–Ò˜Ï=èFy±Dýø Û*!A™ŸŠc5/¼©Ï)eõÕ'~1š¨om9Ѿœr³ƒ i¢x´%(EWÁÝí×I“& ‰XiÊÆO“&ªÝ}øƒh"®” Šâ³>—&ÊÍ y| ¡Œà@ŒÚž ¡æ ”Ki¢Š³q»NùjÌ“&¼eb#¢ãÓDøch¢¡`Ÿõà4ÑŒô€4Ñ£·›&V碟¥Â¹4\æ” šX%è“R9EUi$_=}\ÌþÓ¦‰}ѹŽ&ªG¶õ˜¯Ldžt›41MØï’l™N‚Ík3«]=š&ÊsY -Þø!~P3òØk3¶42dyLh#˜¡œæn+­v1à¥4Q’‚ÓYñffóBw™Øˆèø4Ñ#þšh(Øg=8M4#= Môèí¦‰ÕyŽÌøD–R:™&`¡Mš¨8DNÏHÍcÝ›¹©Ï6­>1WñƒÒ6Ÿ4MÔ·–²< ×µY=æ¤ì+K”'MLš‰&°|Ÿ‰R6þš4Qí] ¸>…YÐ?„¬ù̽‰R(# æ-šÈÙ°‹TÉQšËVem~Ù¡>‡ðjóKÎhi)£<”®œ_L &ð•!ÄY2rÎ/CÍ/´ó”“!Ds~©v.YŸ›˜!¤ög–jGtf‘àE²¦Ï/´”û¬âœù­vxmsÜÕ©1(·‹¬vrw=|~­Úø шèø_«zÄ󵪡`Ÿõà_«š‘ðkUÞî¯UÕyéΟH¡ zò×*]‚l¤Ý'•ÆjŽ[Uiàl*|õ^«ý®èhÀ ÷¾«G*ÿó§qÅ8¿VMšŒ&°žW‡&ŠÄãi¢Ü´N¥*ˆ7~ˆl±&Mè’‰ä1MpÁI‚WH¥Ú^»÷½KœMÔ“&¼eb#¢ãÓDøch¢¡`Ÿõà4ÑŒô€4Ñ£·›&ve)ç6Ç%ÆÂM쓊c _Ue oç~}Kz­æ¸û¢“ïN±NÅcùÑyW«Ý}±¶I“& ‰r7AB,wîš4Qìø~dDå¹¶öΖ¿¼ñc‰]äÜ{yT@ =¦ )#8D`nïÒKÍA¤—ÒDW*dqÅ%Ô8iÂ[&6":>Môˆ?†& öYNÍHH=z»i¢:gÚî²´Ú™ÂSi‚SXXtƒ&ªAÍ}©œÛ›¨ªâÓ䣉úÖ‰)Ñ?C{îu4Q<æˆäô=¸Ù˜41ib$šzß-ríšÕŽùð*å¹ZzÉfwdu¤pîIZI1ÂãvV¥unÌõØoû,jµ‹/¥‰ê”sT_á¬Aî.Ÿ&zÄC û¬§‰f¤¤‰½Ý4Q³bp’ýjwrs\L²(Ç š¨Œ&¢sÑy•*ƒÑĪ^" úê^Œ&ê[+e"~":é“Nælu\hßW–%Nš˜41MØï’rùiÆvÍÀjwÍi¢ {.—vêgm*ñÜ*1‡÷òRÍA‘’´Gzµ“‹÷&RICFBØgé*Í*î2±Ññi¢Gü14ÑP°ÏzpšhFz@šèÑÛMÕ9’*ƒŸ¥ÎmŽ+¹tµØ€‰ª€±tm÷•ÒhÈ«*IDù‰8sæ×‚‰]ÑÕë`¢xŒ¥jñ³x q¶3š01LØï’…€RntªvòÑ0Qž›")¢;²93aBK1@×&²`K.€±}6¯!\ U—^L슂 î*±Ñña¢Gü10ÑP°Ïzp˜hFz@˜èÑÛ »²Ÿ|m".˜s#Ù³­³IžPJ2LTUÂQSzBý«sªo­±4÷£#raE§âC´ :ûÊò]¥Ç &€ û]R2‡QÛ0Qì4çÃÏ9•çfN Á_(S>µb ,øLÏ9QX3´åÞæH_í8_Úµ:5Ô“Ønƒp³“YÑÉ[%¶":Môˆ?†& öYNÍHH=z»i¢:§@ÕÏRx2MÀ¢ ±‘ì…8…öGâ› UUé»Gè«ç/ø}Ú0QßZ4'µV» [£VÈþ"â*ÓfA§ CÁ„ý.ÍaŠQÚ0±ÚÁÑÍ&ês%Cùå–têl !y\Љà6‚S»=sµ+ot)LTq†[}q¥ûë„ o•؈èø0Ñ#þ˜h(Øg=8L4#= Lôèí†‰êœ `»ìÜM¤ž\Vu±ù}ckb—TŠq,š¨ª˜0·KÝÔLmŸ6M¬ÑÉ‘%úÑa¦ëh¢zÌœíÅ|e‰çÖĤ‰¡hÊ*KáSiÒDµ#<ú v}n*×òBöÆkÆt.MpÙ>| 
²þá#&tÎF€;3|¶ßÍê«NOµòQ)B®‹Í½kÞÇK"X§æìñq}óîú¯*ÉõÙM³¾N'ÏŸýÃÿj&\õl±././@LongLink0000644000000000000000000000024200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000755000175000017500000000000015117130647033171 5ustar zuulzuul././@LongLink0000644000000000000000000000026700000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000755000175000017500000000000015117130654033167 5ustar zuulzuul././@LongLink0000644000000000000000000000027400000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000017701315117130647033204 0ustar zuulzuul2025-08-13T19:59:46.011585192+00:00 stderr F I0813 19:59:45.901523 1 plugin.go:44] Starting Prometheus metrics endpoint server 2025-08-13T19:59:46.064059488+00:00 stderr F I0813 19:59:46.054048 1 plugin.go:47] Starting new HostPathDriver, config: {kubevirt.io.hostpath-provisioner unix:///csi/csi.sock crc map[] latest } 2025-08-13T19:59:54.824698355+00:00 stderr F I0813 19:59:54.823425 1 mount_linux.go:174] Cannot run systemd-run, assuming non-systemd OS 2025-08-13T19:59:54.842475592+00:00 stderr F I0813 19:59:54.836695 1 hostpath.go:88] name: local, dataDir: /csi-data-dir 2025-08-13T19:59:54.842475592+00:00 stderr F I0813 19:59:54.837369 1 hostpath.go:107] Driver: 
kubevirt.io.hostpath-provisioner, version: latest 2025-08-13T19:59:54.842475592+00:00 stderr F I0813 19:59:54.839745 1 server.go:194] Starting domain socket: unix///csi/csi.sock 2025-08-13T19:59:54.846271330+00:00 stderr F I0813 19:59:54.843974 1 server.go:89] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"} 2025-08-13T19:59:56.223891950+00:00 stderr F I0813 19:59:56.210090 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T19:59:56.498880609+00:00 stderr F I0813 19:59:56.484339 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-08-13T19:59:58.205910298+00:00 stderr F I0813 19:59:58.122081 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-08-13T19:59:58.239965509+00:00 stderr F I0813 19:59:58.206037 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetInfo 2025-08-13T20:00:06.991021332+00:00 stderr F I0813 20:00:06.970088 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo 2025-08-13T20:00:07.168862783+00:00 stderr F I0813 20:00:07.147043 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginCapabilities 2025-08-13T20:00:07.353057946+00:00 stderr F I0813 20:00:07.320948 1 server.go:104] GRPC call: /csi.v1.Controller/ControllerGetCapabilities 2025-08-13T20:00:07.589521437+00:00 stderr F I0813 20:00:07.589477 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetInfo 2025-08-13T20:00:10.866698493+00:00 stderr F I0813 20:00:10.847387 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:00:11.139147861+00:00 stderr F I0813 20:00:11.038993 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:00:26.553247529+00:00 stderr F I0813 20:00:26.543271 1 server.go:104] GRPC call: /csi.v1.Node/NodeUnpublishVolume 2025-08-13T20:00:26.553247529+00:00 stderr F I0813 20:00:26.543509 1 nodeserver.go:199] Node Unpublish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 
TargetPath:/var/lib/kubelet/pods/c5bb4cdd-21b9-49ed-84ae-a405b60a0306/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:00:26.553247529+00:00 stderr F I0813 20:00:26.543770 1 nodeserver.go:206] Unmounting path: /var/lib/kubelet/pods/c5bb4cdd-21b9-49ed-84ae-a405b60a0306/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount 2025-08-13T20:00:26.845955495+00:00 stderr F I0813 20:00:26.834162 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:26.990584009+00:00 stderr F I0813 20:00:26.989183 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.045191487+00:00 stderr F I0813 20:00:27.044369 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.073488273+00:00 stderr F I0813 20:00:27.073376 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.110229281+00:00 stderr F I0813 20:00:27.109569 1 server.go:104] GRPC call: /csi.v1.Node/NodePublishVolume 2025-08-13T20:00:27.132535977+00:00 stderr F I0813 20:00:27.110290 1 nodeserver.go:82] Node Publish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 PublishContext:map[] StagingTargetPath: TargetPath:/var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount VolumeCapability:mount:<> access_mode: Readonly:false Secrets:map[] VolumeContext:map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:image-registry-7cbd5666ff-bbfrf csi.storage.k8s.io/pod.namespace:openshift-image-registry csi.storage.k8s.io/pod.uid:42b6a393-6194-4620-bf8f-7e4b6cbe5679 csi.storage.k8s.io/pv/name:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 csi.storage.k8s.io/pvc/name:crc-image-registry-storage csi.storage.k8s.io/pvc/namespace:openshift-image-registry csi.storage.k8s.io/serviceAccount.name:registry 
storage.kubernetes.io/csiProvisionerIdentity:1719494501704-84-kubevirt.io.hostpath-provisioner-crc storagePool:local] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:00:27.701925913+00:00 stderr F I0813 20:00:27.692296 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.744057214+00:00 stderr F I0813 20:00:27.729731 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.744057214+00:00 stderr F I0813 20:00:27.738204 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:27.813944467+00:00 stderr F I0813 20:00:27.813337 1 server.go:104] GRPC call: /csi.v1.Node/NodePublishVolume 2025-08-13T20:00:27.813944467+00:00 stderr F I0813 20:00:27.813370 1 nodeserver.go:82] Node Publish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 PublishContext:map[] StagingTargetPath: TargetPath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount VolumeCapability:mount:<> access_mode: Readonly:false Secrets:map[] VolumeContext:map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:image-registry-75779c45fd-v2j2v csi.storage.k8s.io/pod.namespace:openshift-image-registry csi.storage.k8s.io/pod.uid:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 csi.storage.k8s.io/pv/name:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 csi.storage.k8s.io/pvc/name:crc-image-registry-storage csi.storage.k8s.io/pvc/namespace:openshift-image-registry csi.storage.k8s.io/serviceAccount.name:registry storage.kubernetes.io/csiProvisionerIdentity:1719494501704-84-kubevirt.io.hostpath-provisioner-crc storagePool:local] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:00:54.807024101+00:00 stderr F I0813 20:00:54.798046 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:00:54.807024101+00:00 stderr F I0813 20:00:54.805210 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 
2025-08-13T20:00:54.824138009+00:00 stderr F I0813 20:00:54.807706 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:00:54.900730423+00:00 stderr F I0813 20:00:54.899397 1 healthcheck.go:84] fs available: 62292721664, total capacity: 85294297088, percentage available: 73.03, number of free inodes: 41533206 2025-08-13T20:00:54.900730423+00:00 stderr F I0813 20:00:54.899696 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:00:54.900730423+00:00 stderr F I0813 20:00:54.899712 1 nodeserver.go:330] Capacity: 85294297088 Used: 23001575424 Available: 62292721664 Inodes: 41680368 Free inodes: 41533206 Used inodes: 147162 2025-08-13T20:00:56.656209940+00:00 stderr F I0813 20:00:56.597000 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:01:10.384473536+00:00 stderr F I0813 20:01:10.384126 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:01:10.516433778+00:00 stderr F I0813 20:01:10.490297 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:01:10.516433778+00:00 stderr F I0813 20:01:10.490338 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:01:10.725402807+00:00 stderr F I0813 20:01:10.539162 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:01:10.742578247+00:00 stderr F I0813 20:01:10.725588 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: 
XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:01:11.012727550+00:00 stderr F I0813 20:01:11.012392 1 healthcheck.go:84] fs available: 61630750720, total capacity: 85294297088, percentage available: 72.26, number of free inodes: 41533186 2025-08-13T20:01:11.012727550+00:00 stderr F I0813 20:01:11.012581 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:01:11.012727550+00:00 stderr F I0813 20:01:11.012604 1 nodeserver.go:330] Capacity: 85294297088 Used: 23663546368 Available: 61630750720 Inodes: 41680368 Free inodes: 41533186 Used inodes: 147182 2025-08-13T20:01:56.740528286+00:00 stderr F I0813 20:01:56.740322 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:02:10.485513699+00:00 stderr F I0813 20:02:10.485234 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:02:10.485513699+00:00 stderr F I0813 20:02:10.485341 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:02:31.814900262+00:00 stderr F I0813 20:02:31.814574 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:02:31.816711174+00:00 stderr F I0813 20:02:31.816638 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:02:31.816934930+00:00 stderr F I0813 20:02:31.816735 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:02:31.825060512+00:00 stderr F I0813 20:02:31.825011 1 healthcheck.go:84] fs available: 57135521792, total capacity: 85294297088, percentage available: 66.99, number of free inodes: 41516178 2025-08-13T20:02:31.825060512+00:00 stderr F I0813 20:02:31.825038 1 
nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:02:31.825060512+00:00 stderr F I0813 20:02:31.825050 1 nodeserver.go:330] Capacity: 85294297088 Used: 28158775296 Available: 57135521792 Inodes: 41680368 Free inodes: 41516178 Used inodes: 164190 2025-08-13T20:02:40.226898781+00:00 stderr F I0813 20:02:40.225329 1 server.go:104] GRPC call: /csi.v1.Node/NodeUnpublishVolume 2025-08-13T20:02:40.226898781+00:00 stderr F I0813 20:02:40.225397 1 nodeserver.go:199] Node Unpublish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 TargetPath:/var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:02:40.226898781+00:00 stderr F I0813 20:02:40.225433 1 nodeserver.go:206] Unmounting path: /var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount 2025-08-13T20:02:56.756340071+00:00 stderr F I0813 20:02:56.756230 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:03:10.486762909+00:00 stderr F I0813 20:03:10.486679 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:10.487170981+00:00 stderr F I0813 20:03:10.487152 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:11.493869339+00:00 stderr F I0813 20:03:11.493707 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:11.493869339+00:00 stderr F I0813 20:03:11.493746 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:13.499529645+00:00 stderr F I0813 20:03:13.499420 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:13.499529645+00:00 stderr F I0813 20:03:13.499457 1 controllerserver.go:230] Checking capacity for storage pool local 
2025-08-13T20:03:17.502945632+00:00 stderr F I0813 20:03:17.502827 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:17.502945632+00:00 stderr F I0813 20:03:17.502899 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:25.509244321+00:00 stderr F I0813 20:03:25.508881 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:25.509244321+00:00 stderr F I0813 20:03:25.508945 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:41.517643887+00:00 stderr F I0813 20:03:41.517458 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:41.517746540+00:00 stderr F I0813 20:03:41.517732 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:03:56.915135236+00:00 stderr F I0813 20:03:56.915006 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:04:01.000095908+00:00 stderr F I0813 20:04:00.999928 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:04:01.005914114+00:00 stderr F I0813 20:04:01.003058 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:04:01.005914114+00:00 stderr F I0813 20:04:01.003149 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:04:01.026741118+00:00 stderr F I0813 20:04:01.025128 1 healthcheck.go:84] fs available: 54261784576, total capacity: 85294297088, percentage available: 63.62, number of free inodes: 41488434 2025-08-13T20:04:01.026741118+00:00 stderr F I0813 20:04:01.025264 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true 2025-08-13T20:04:01.026741118+00:00 
stderr F I0813 20:04:01.025369 1 nodeserver.go:330] Capacity: 85294297088 Used: 31032512512 Available: 54261784576 Inodes: 41680368 Free inodes: 41488434 Used inodes: 191934 2025-08-13T20:04:10.486988651+00:00 stderr F I0813 20:04:10.486697 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:04:10.487205307+00:00 stderr F I0813 20:04:10.487185 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:04:13.523738710+00:00 stderr F I0813 20:04:13.523676 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:04:13.524015298+00:00 stderr F I0813 20:04:13.523997 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:04:56.964622012+00:00 stderr F I0813 20:04:56.959874 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure 2025-08-13T20:05:10.491516851+00:00 stderr F I0813 20:05:10.491238 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:05:10.491516851+00:00 stderr F I0813 20:05:10.491302 1 controllerserver.go:230] Checking capacity for storage pool local 2025-08-13T20:05:46.708170333+00:00 stderr F I0813 20:05:46.708026 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities 2025-08-13T20:05:46.714759392+00:00 stderr F I0813 20:05:46.714656 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats 2025-08-13T20:05:46.714759392+00:00 stderr F I0813 20:05:46.714693 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} 2025-08-13T20:05:46.728663180+00:00 stderr F I0813 20:05:46.728583 1 healthcheck.go:84] fs available: 47486980096, total capacity: 85294297088, percentage available: 55.67, number of free inodes: 41469070 
2025-08-13T20:05:46.728663180+00:00 stderr F I0813 20:05:46.728625       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:05:46.728663180+00:00 stderr F I0813 20:05:46.728641       1 nodeserver.go:330] Capacity: 85294297088 Used: 37807316992 Available: 47486980096 Inodes: 41680368 Free inodes: 41469070 Used inodes: 211298
2025-08-13T20:05:56.984080993+00:00 stderr F I0813 20:05:56.983475       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:06:06.802203704+00:00 stderr F I0813 20:06:06.798708       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:06:06.802203704+00:00 stderr F I0813 20:06:06.798750       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:06:10.487223827+00:00 stderr F I0813 20:06:10.487127       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:06:10.487332100+00:00 stderr F I0813 20:06:10.487318       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:06:21.529625288+00:00 stderr F I0813 20:06:21.529548       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:06:21.529924837+00:00 stderr F I0813 20:06:21.529905       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:06:57.096820633+00:00 stderr F I0813 20:06:57.092522       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:07:10.508947399+00:00 stderr F I0813 20:07:10.508715       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:07:10.513067847+00:00 stderr F I0813 20:07:10.510553       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:07:20.657193328+00:00 stderr F I0813 20:07:20.657106       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:07:20.695849096+00:00 stderr F I0813 20:07:20.685970       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:07:20.695849096+00:00 stderr F I0813 20:07:20.686005       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:07:20.749953126+00:00 stderr F I0813 20:07:20.733739       1 healthcheck.go:84] fs available: 49462222848, total capacity: 85294297088, percentage available: 57.99, number of free inodes: 41476143
2025-08-13T20:07:20.749953126+00:00 stderr F I0813 20:07:20.738849       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:07:20.749953126+00:00 stderr F I0813 20:07:20.738935       1 nodeserver.go:330] Capacity: 85294297088 Used: 35832098816 Available: 49462198272 Inodes: 41680368 Free inodes: 41476150 Used inodes: 204218
2025-08-13T20:07:57.145481707+00:00 stderr F I0813 20:07:57.144413       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:08:10.502928778+00:00 stderr F I0813 20:08:10.499463       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:08:10.502928778+00:00 stderr F I0813 20:08:10.499507       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:08:36.367695340+00:00 stderr F I0813 20:08:36.367492       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:08:36.369372678+00:00 stderr F I0813 20:08:36.369306       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:08:36.369396839+00:00 stderr F I0813 20:08:36.369339       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:08:36.380687823+00:00 stderr F I0813 20:08:36.380557       1 healthcheck.go:84] fs available: 51706716160, total capacity: 85294297088, percentage available: 60.62, number of free inodes: 41485057
2025-08-13T20:08:36.380687823+00:00 stderr F I0813 20:08:36.380626       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:08:36.380687823+00:00 stderr F I0813 20:08:36.380647       1 nodeserver.go:330] Capacity: 85294297088 Used: 33587580928 Available: 51706716160 Inodes: 41680368 Free inodes: 41485057 Used inodes: 195311
2025-08-13T20:08:57.162177496+00:00 stderr F I0813 20:08:57.160999       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:09:10.507345642+00:00 stderr F I0813 20:09:10.507012       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:09:10.507345642+00:00 stderr F I0813 20:09:10.507250       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:09:39.085725629+00:00 stderr F I0813 20:09:39.085497       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:09:39.090844026+00:00 stderr F I0813 20:09:39.090722       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:09:39.090984640+00:00 stderr F I0813 20:09:39.090767       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:09:39.105516197+00:00 stderr F I0813 20:09:39.105406       1 healthcheck.go:84] fs available: 51679031296, total capacity: 85294297088, percentage available: 60.59, number of free inodes: 41484961
2025-08-13T20:09:39.105516197+00:00 stderr F I0813 20:09:39.105490       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:09:39.105516197+00:00 stderr F I0813 20:09:39.105509       1 nodeserver.go:330] Capacity: 85294297088 Used: 33615265792 Available: 51679031296 Inodes: 41680368 Free inodes: 41484961 Used inodes: 195407
2025-08-13T20:09:57.199103275+00:00 stderr F I0813 20:09:57.198850       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:10:10.518927925+00:00 stderr F I0813 20:10:10.505588       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:10:10.518927925+00:00 stderr F I0813 20:10:10.505631       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:10:36.529453897+00:00 stderr F I0813 20:10:36.528695       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:10:36.529453897+00:00 stderr F I0813 20:10:36.528761       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:10:40.251553782+00:00 stderr F I0813 20:10:40.251303       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:10:40.253756016+00:00 stderr F I0813 20:10:40.253705       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:10:40.253836788+00:00 stderr F I0813 20:10:40.253736       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:10:40.267351885+00:00 stderr F I0813 20:10:40.267295       1 healthcheck.go:84] fs available: 51647049728, total capacity: 85294297088, percentage available: 60.55, number of free inodes: 41484928
2025-08-13T20:10:40.267418467+00:00 stderr F I0813 20:10:40.267405       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:10:40.278090563+00:00 stderr F I0813 20:10:40.276155       1 nodeserver.go:330] Capacity: 85294297088 Used: 33647247360 Available: 51647049728 Inodes: 41680368 Free inodes: 41484928 Used inodes: 195440
2025-08-13T20:10:57.213663048+00:00 stderr F I0813 20:10:57.212333       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:11:10.503087476+00:00 stderr F I0813 20:11:10.502811       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:11:10.503087476+00:00 stderr F I0813 20:11:10.502957       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:11:57.227966364+00:00 stderr F I0813 20:11:57.227715       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:12:10.505112391+00:00 stderr F I0813 20:12:10.504953       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:12:10.505112391+00:00 stderr F I0813 20:12:10.505061       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:12:36.137386269+00:00 stderr F I0813 20:12:36.136412       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:12:36.141004743+00:00 stderr F I0813 20:12:36.140064       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:12:36.141004743+00:00 stderr F I0813 20:12:36.140092       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:12:36.150283989+00:00 stderr F I0813 20:12:36.150177       1 healthcheck.go:84] fs available: 51640684544, total capacity: 85294297088, percentage available: 60.54, number of free inodes: 41485041
2025-08-13T20:12:36.150283989+00:00 stderr F I0813 20:12:36.150248       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:12:36.150283989+00:00 stderr F I0813 20:12:36.150271       1 nodeserver.go:330] Capacity: 85294297088 Used: 33653612544 Available: 51640684544 Inodes: 41680368 Free inodes: 41485041 Used inodes: 195327
2025-08-13T20:12:57.251129275+00:00 stderr F I0813 20:12:57.251039       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:13:10.503850112+00:00 stderr F I0813 20:13:10.503729       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:13:10.503917984+00:00 stderr F I0813 20:13:10.503904       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:13:57.263200051+00:00 stderr F I0813 20:13:57.262977       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:14:04.803178800+00:00 stderr F I0813 20:14:04.802515       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:14:04.804445436+00:00 stderr F I0813 20:14:04.804387       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:14:04.805386423+00:00 stderr F I0813 20:14:04.804429       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:14:04.811721495+00:00 stderr F I0813 20:14:04.811560       1 healthcheck.go:84] fs available: 51640279040, total capacity: 85294297088, percentage available: 60.54, number of free inodes: 41485153
2025-08-13T20:14:04.811721495+00:00 stderr F I0813 20:14:04.811598       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:14:04.811721495+00:00 stderr F I0813 20:14:04.811614       1 nodeserver.go:330] Capacity: 85294297088 Used: 33654018048 Available: 51640279040 Inodes: 41680368 Free inodes: 41485153 Used inodes: 195215
2025-08-13T20:14:10.505047437+00:00 stderr F I0813 20:14:10.504870       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:14:10.505047437+00:00 stderr F I0813 20:14:10.504991       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:14:57.277850577+00:00 stderr F I0813 20:14:57.277638       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:15:10.509881435+00:00 stderr F I0813 20:15:10.505916       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:15:10.509881435+00:00 stderr F I0813 20:15:10.506155       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:15:51.769155450+00:00 stderr F I0813 20:15:51.769000       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:15:51.772506196+00:00 stderr F I0813 20:15:51.772372       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:15:51.772761773+00:00 stderr F I0813 20:15:51.772515       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:15:51.781601066+00:00 stderr F I0813 20:15:51.781436       1 healthcheck.go:84] fs available: 51640188928, total capacity: 85294297088, percentage available: 60.54, number of free inodes: 41485105
2025-08-13T20:15:51.781601066+00:00 stderr F I0813 20:15:51.781516       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:15:51.781601066+00:00 stderr F I0813 20:15:51.781541       1 nodeserver.go:330] Capacity: 85294297088 Used: 33654108160 Available: 51640188928 Inodes: 41680368 Free inodes: 41485105 Used inodes: 195263
2025-08-13T20:15:57.291996787+00:00 stderr F I0813 20:15:57.291842       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:16:10.506534710+00:00 stderr F I0813 20:16:10.506390       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:16:10.506534710+00:00 stderr F I0813 20:16:10.506435       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:16:57.304684138+00:00 stderr F I0813 20:16:57.304610       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:17:10.519147126+00:00 stderr F I0813 20:17:10.518943       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:17:10.523121470+00:00 stderr F I0813 20:17:10.519870       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:17:36.432045642+00:00 stderr F I0813 20:17:36.431604       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:17:36.438031932+00:00 stderr F I0813 20:17:36.435931       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:17:36.438031932+00:00 stderr F I0813 20:17:36.436033       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:17:36.466379512+00:00 stderr F I0813 20:17:36.466277       1 healthcheck.go:84] fs available: 50719629312, total capacity: 85294297088, percentage available: 59.46, number of free inodes: 41481141
2025-08-13T20:17:36.466379512+00:00 stderr F I0813 20:17:36.466319       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:17:36.466379512+00:00 stderr F I0813 20:17:36.466336       1 nodeserver.go:330] Capacity: 85294297088 Used: 34574667776 Available: 50719629312 Inodes: 41680368 Free inodes: 41481141 Used inodes: 199227
2025-08-13T20:17:57.326616570+00:00 stderr F I0813 20:17:57.326474       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:18:10.526943734+00:00 stderr F I0813 20:18:10.524915       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:18:10.526943734+00:00 stderr F I0813 20:18:10.524956       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:18:57.355476292+00:00 stderr F I0813 20:18:57.355306       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:19:10.522212718+00:00 stderr F I0813 20:19:10.521996       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:19:10.522212718+00:00 stderr F I0813 20:19:10.522137       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:19:29.886707247+00:00 stderr F I0813 20:19:29.886609       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:19:29.890321280+00:00 stderr F I0813 20:19:29.890184       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:19:29.890591578+00:00 stderr F I0813 20:19:29.890222       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:19:29.898231086+00:00 stderr F I0813 20:19:29.898142       1 healthcheck.go:84] fs available: 49281728512, total capacity: 85294297088, percentage available: 57.78, number of free inodes: 41478728
2025-08-13T20:19:29.898231086+00:00 stderr F I0813 20:19:29.898180       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:19:29.898231086+00:00 stderr F I0813 20:19:29.898196       1 nodeserver.go:330] Capacity: 85294297088 Used: 36012568576 Available: 49281728512 Inodes: 41680368 Free inodes: 41478728 Used inodes: 201640
2025-08-13T20:19:57.368016391+00:00 stderr F I0813 20:19:57.367718       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:20:10.519551602+00:00 stderr F I0813 20:20:10.519431       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:20:10.519551602+00:00 stderr F I0813 20:20:10.519479       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:20:57.389371162+00:00 stderr F I0813 20:20:57.389152       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:21:10.523616582+00:00 stderr F I0813 20:21:10.523457       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:21:10.523616582+00:00 stderr F I0813 20:21:10.523543       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:21:19.817840363+00:00 stderr F I0813 20:21:19.817595       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:21:19.821330213+00:00 stderr F I0813 20:21:19.820144       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:21:19.821330213+00:00 stderr F I0813 20:21:19.820181       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:21:19.841292043+00:00 stderr F I0813 20:21:19.841120       1 healthcheck.go:84] fs available: 51637252096, total capacity: 85294297088, percentage available: 60.54, number of free inodes: 41485141
2025-08-13T20:21:19.841292043+00:00 stderr F I0813 20:21:19.841205       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:21:19.841292043+00:00 stderr F I0813 20:21:19.841221       1 nodeserver.go:330] Capacity: 85294297088 Used: 33657044992 Available: 51637252096 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:21:57.403416385+00:00 stderr F I0813 20:21:57.403188       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:22:10.524463732+00:00 stderr F I0813 20:22:10.524337       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:22:10.524463732+00:00 stderr F I0813 20:22:10.524388       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:22:56.284454137+00:00 stderr F I0813 20:22:56.283874       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:22:56.287110313+00:00 stderr F I0813 20:22:56.287065       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:22:56.287344410+00:00 stderr F I0813 20:22:56.287175       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:22:56.297120549+00:00 stderr F I0813 20:22:56.296459       1 healthcheck.go:84] fs available: 51642810368, total capacity: 85294297088, percentage available: 60.55, number of free inodes: 41485141
2025-08-13T20:22:56.297120549+00:00 stderr F I0813 20:22:56.296511       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:22:56.297120549+00:00 stderr F I0813 20:22:56.296527       1 nodeserver.go:330] Capacity: 85294297088 Used: 33651486720 Available: 51642810368 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:22:57.418258791+00:00 stderr F I0813 20:22:57.416314       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:23:10.527744748+00:00 stderr F I0813 20:23:10.527566       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:23:10.527744748+00:00 stderr F I0813 20:23:10.527615       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:23:57.431277872+00:00 stderr F I0813 20:23:57.431156       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:24:10.527718182+00:00 stderr F I0813 20:24:10.527537       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:24:10.527718182+00:00 stderr F I0813 20:24:10.527594       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:24:18.305041165+00:00 stderr F I0813 20:24:18.303307       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:24:18.306838946+00:00 stderr F I0813 20:24:18.306669       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:24:18.307206067+00:00 stderr F I0813 20:24:18.306756       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:24:18.315068762+00:00 stderr F I0813 20:24:18.314975       1 healthcheck.go:84] fs available: 51577217024, total capacity: 85294297088, percentage available: 60.47, number of free inodes: 41485141
2025-08-13T20:24:18.315068762+00:00 stderr F I0813 20:24:18.315035       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:24:18.315068762+00:00 stderr F I0813 20:24:18.315051       1 nodeserver.go:330] Capacity: 85294297088 Used: 33717080064 Available: 51577217024 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:24:57.451153962+00:00 stderr F I0813 20:24:57.450706       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:25:10.527768953+00:00 stderr F I0813 20:25:10.527627       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:25:10.527768953+00:00 stderr F I0813 20:25:10.527694       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:25:54.737841501+00:00 stderr F I0813 20:25:54.737523       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:25:54.740075855+00:00 stderr F I0813 20:25:54.739979       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:25:54.740249760+00:00 stderr F I0813 20:25:54.740049       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:25:54.764451082+00:00 stderr F I0813 20:25:54.764195       1 healthcheck.go:84] fs available: 51575119872, total capacity: 85294297088, percentage available: 60.47, number of free inodes: 41485141
2025-08-13T20:25:54.765378308+00:00 stderr F I0813 20:25:54.765255       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:25:54.765662736+00:00 stderr F I0813 20:25:54.765639       1 nodeserver.go:330] Capacity: 85294297088 Used: 33719177216 Available: 51575119872 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:25:57.469624523+00:00 stderr F I0813 20:25:57.469476       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:26:10.527753664+00:00 stderr F I0813 20:26:10.527598       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:26:10.527753664+00:00 stderr F I0813 20:26:10.527645       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:26:57.484049366+00:00 stderr F I0813 20:26:57.483879       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:27:10.562146652+00:00 stderr F I0813 20:27:10.561015       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:27:10.562146652+00:00 stderr F I0813 20:27:10.561118       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:27:16.955541565+00:00 stderr F I0813 20:27:16.955467       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:27:16.960923459+00:00 stderr F I0813 20:27:16.957556       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:27:16.960923459+00:00 stderr F I0813 20:27:16.957592       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:27:16.987278892+00:00 stderr F I0813 20:27:16.986431       1 healthcheck.go:84] fs available: 50553176064, total capacity: 85294297088, percentage available: 59.27, number of free inodes: 41482383
2025-08-13T20:27:16.987278892+00:00 stderr F I0813 20:27:16.986493       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:27:16.987278892+00:00 stderr F I0813 20:27:16.986513       1 nodeserver.go:330] Capacity: 85294297088 Used: 34741121024 Available: 50553176064 Inodes: 41680368 Free inodes: 41482383 Used inodes: 197985
2025-08-13T20:27:57.502633244+00:00 stderr F I0813 20:27:57.502378       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:28:10.531412180+00:00 stderr F I0813 20:28:10.531096       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:28:10.531412180+00:00 stderr F I0813 20:28:10.531169       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:28:33.112173790+00:00 stderr F I0813 20:28:33.111824       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:28:33.115109274+00:00 stderr F I0813 20:28:33.114985       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:28:33.115441193+00:00 stderr F I0813 20:28:33.115092       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:28:33.124039291+00:00 stderr F I0813 20:28:33.123446       1 healthcheck.go:84] fs available: 51568517120, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141
2025-08-13T20:28:33.124039291+00:00 stderr F I0813 20:28:33.123954       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:28:33.124039291+00:00 stderr F I0813 20:28:33.123971       1 nodeserver.go:330] Capacity: 85294297088 Used: 33725779968 Available: 51568517120 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:28:57.520944432+00:00 stderr F I0813 20:28:57.520680       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:29:10.533174696+00:00 stderr F I0813 20:29:10.532999       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:29:10.533311740+00:00 stderr F I0813 20:29:10.533291       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:29:44.340528574+00:00 stderr F I0813 20:29:44.340329       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:29:44.341679667+00:00 stderr F I0813 20:29:44.341619       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:29:44.341708008+00:00 stderr F I0813 20:29:44.341651       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:29:44.365910144+00:00 stderr F I0813 20:29:44.365397       1 healthcheck.go:84] fs available: 50792988672, total capacity: 85294297088, percentage available: 59.55, number of free inodes: 41482040
2025-08-13T20:29:44.366058328+00:00 stderr F I0813 20:29:44.365988       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:29:44.367612433+00:00 stderr F I0813 20:29:44.366118       1 nodeserver.go:330] Capacity: 85294297088 Used: 34501308416 Available: 50792988672 Inodes: 41680368 Free inodes: 41482040 Used inodes: 198328
2025-08-13T20:29:57.699870673+00:00 stderr F I0813 20:29:57.699698       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:30:10.532263930+00:00 stderr F I0813 20:30:10.532146       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:30:10.532459155+00:00 stderr F I0813 20:30:10.532436       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:30:57.725149969+00:00 stderr F I0813 20:30:57.725009       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:31:07.875860146+00:00 stderr F I0813 20:31:07.875731       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:31:07.878296166+00:00 stderr F I0813 20:31:07.878232       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:31:07.878572504+00:00 stderr F I0813 20:31:07.878335       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:31:07.887215242+00:00 stderr F I0813 20:31:07.887145       1 healthcheck.go:84] fs available: 51566333952, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141
2025-08-13T20:31:07.887215242+00:00 stderr F I0813 20:31:07.887179       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:31:07.887215242+00:00 stderr F I0813 20:31:07.887194       1 nodeserver.go:330] Capacity: 85294297088 Used: 33727963136 Available: 51566333952 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:31:10.532145442+00:00 stderr F I0813 20:31:10.532100       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:31:10.532199013+00:00 stderr F I0813 20:31:10.532186       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:31:57.750300037+00:00 stderr F I0813 20:31:57.749893       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:32:10.533952959+00:00 stderr F I0813 20:32:10.533818       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:32:10.533986770+00:00 stderr F I0813 20:32:10.533975       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:32:21.647876845+00:00 stderr F I0813 20:32:21.647682       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:32:21.650811060+00:00 stderr F I0813 20:32:21.650714       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:32:21.651427357+00:00 stderr F I0813 20:32:21.650756       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:32:21.660228610+00:00 stderr F I0813 20:32:21.660146       1 healthcheck.go:84] fs available: 51570003968, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141
2025-08-13T20:32:21.660228610+00:00 stderr F I0813 20:32:21.660184       1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:32:21.660228610+00:00 stderr F I0813 20:32:21.660201       1 nodeserver.go:330] Capacity: 85294297088 Used: 33724293120 Available: 51570003968 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:32:57.765045545+00:00 stderr F I0813 20:32:57.764764       1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:33:10.536131259+00:00 stderr F I0813 20:33:10.535921       1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:33:10.536131259+00:00 stderr F I0813 20:33:10.535980       1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:33:22.980927831+00:00 stderr F I0813 20:33:22.980505       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:33:22.984701520+00:00 stderr F I0813 20:33:22.984616       1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:33:22.984751711+00:00 stderr F I0813 20:33:22.984661       1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:33:22.993114092+00:00 stderr F I0813 20:33:22.993024 1 healthcheck.go:84] fs available: 51570139136, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141
2025-08-13T20:33:22.993114092+00:00 stderr F I0813 20:33:22.993106 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:33:22.993139062+00:00 stderr F I0813 20:33:22.993127 1 nodeserver.go:330] Capacity: 85294297088 Used: 33724157952 Available: 51570139136 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195227
2025-08-13T20:33:57.792135620+00:00 stderr F I0813 20:33:57.791627 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:34:10.537730761+00:00 stderr F I0813 20:34:10.537651 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:34:10.537912446+00:00 stderr F I0813 20:34:10.537894 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:34:57.805666005+00:00 stderr F I0813 20:34:57.805502 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:35:01.408516971+00:00 stderr F I0813 20:35:01.408392 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:35:01.410161969+00:00 stderr F I0813 20:35:01.410032 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:35:01.410270932+00:00 stderr F I0813 20:35:01.410066 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:35:01.416319106+00:00 stderr F I0813 20:35:01.416222 1 healthcheck.go:84] fs available: 51570311168, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485141
2025-08-13T20:35:01.416319106+00:00 stderr F I0813 20:35:01.416257 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:35:01.416319106+00:00 stderr F I0813 20:35:01.416277 1 nodeserver.go:330] Capacity: 85294297088 Used: 33723985920 Available: 51570311168 Inodes: 41680368 Free inodes: 41485141 Used inodes: 195228
2025-08-13T20:35:10.537402616+00:00 stderr F I0813 20:35:10.537200 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:35:10.537402616+00:00 stderr F I0813 20:35:10.537241 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:35:57.828048567+00:00 stderr F I0813 20:35:57.827894 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:36:10.540230706+00:00 stderr F I0813 20:36:10.540159 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:36:10.540230706+00:00 stderr F I0813 20:36:10.540203 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:36:45.003879095+00:00 stderr F I0813 20:36:44.997700 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:36:45.007661814+00:00 stderr F I0813 20:36:45.006888 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:36:45.007661814+00:00 stderr F I0813 20:36:45.006921 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:36:45.017881339+00:00 stderr F I0813 20:36:45.017730 1 healthcheck.go:84] fs available: 51568951296, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485140
2025-08-13T20:36:45.017881339+00:00 stderr F I0813 20:36:45.017765 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:36:45.017881339+00:00 stderr F I0813 20:36:45.017833 1 nodeserver.go:330] Capacity: 85294297088 Used: 33725345792 Available: 51568951296 Inodes: 41680368 Free inodes: 41485140 Used inodes: 195228
2025-08-13T20:36:57.844487042+00:00 stderr F I0813 20:36:57.844282 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:37:10.538597061+00:00 stderr F I0813 20:37:10.538482 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:37:10.538597061+00:00 stderr F I0813 20:37:10.538537 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:37:57.863257984+00:00 stderr F I0813 20:37:57.862999 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:38:10.540288405+00:00 stderr F I0813 20:38:10.540127 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:38:10.540288405+00:00 stderr F I0813 20:38:10.540205 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:38:36.464710306+00:00 stderr F I0813 20:38:36.464263 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:38:36.467944479+00:00 stderr F I0813 20:38:36.466838 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:38:36.467944479+00:00 stderr F I0813 20:38:36.466865 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:38:36.478705029+00:00 stderr F I0813 20:38:36.477635 1 healthcheck.go:84] fs available: 51565973504, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485098
2025-08-13T20:38:36.478705029+00:00 stderr F I0813 20:38:36.477678 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:38:36.478705029+00:00 stderr F I0813 20:38:36.477694 1 nodeserver.go:330] Capacity: 85294297088 Used: 33728323584 Available: 51565973504 Inodes: 41680368 Free inodes: 41485098 Used inodes: 195270
2025-08-13T20:38:57.880440552+00:00 stderr F I0813 20:38:57.880262 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:39:10.541717859+00:00 stderr F I0813 20:39:10.541462 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:39:10.541717859+00:00 stderr F I0813 20:39:10.541506 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:39:57.907047914+00:00 stderr F I0813 20:39:57.905077 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:40:10.541471925+00:00 stderr F I0813 20:40:10.541375 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:40:10.541471925+00:00 stderr F I0813 20:40:10.541406 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:40:35.536596687+00:00 stderr F I0813 20:40:35.536482 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:40:35.538166962+00:00 stderr F I0813 20:40:35.537987 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:40:35.538166962+00:00 stderr F I0813 20:40:35.538023 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:40:35.547501511+00:00 stderr F I0813 20:40:35.546307 1 healthcheck.go:84] fs available: 51564703744, total capacity: 85294297088, percentage available: 60.46, number of free inodes: 41485140
2025-08-13T20:40:35.547501511+00:00 stderr F I0813 20:40:35.546353 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:40:35.547501511+00:00 stderr F I0813 20:40:35.546369 1 nodeserver.go:330] Capacity: 85294297088 Used: 33729593344 Available: 51564703744 Inodes: 41680368 Free inodes: 41485140 Used inodes: 195228
2025-08-13T20:40:57.920707454+00:00 stderr F I0813 20:40:57.920519 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:41:10.545660471+00:00 stderr F I0813 20:41:10.545536 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:41:10.545660471+00:00 stderr F I0813 20:41:10.545577 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:41:57.937944436+00:00 stderr F I0813 20:41:57.937728 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-08-13T20:42:10.552582979+00:00 stderr F I0813 20:42:10.552449 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:42:10.552739754+00:00 stderr F I0813 20:42:10.552697 1 controllerserver.go:230] Checking capacity for storage pool local
2025-08-13T20:42:16.573332707+00:00 stderr F I0813 20:42:16.572654 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-08-13T20:42:16.586949320+00:00 stderr F I0813 20:42:16.577589 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-08-13T20:42:16.586949320+00:00 stderr F I0813 20:42:16.577636 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-08-13T20:42:16.589730510+00:00 stderr F I0813 20:42:16.589379 1 healthcheck.go:84] fs available: 51561603072, total capacity: 85294297088, percentage available: 60.45, number of free inodes: 41485107
2025-08-13T20:42:16.589730510+00:00 stderr F I0813 20:42:16.589439 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-08-13T20:42:16.589730510+00:00 stderr F I0813 20:42:16.589487 1 nodeserver.go:330] Capacity: 85294297088 Used: 33732689920 Available: 51561607168 Inodes: 41680368 Free inodes: 41485108 Used inodes: 195260
[log file: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner/1.log]
2025-12-13T00:13:14.904501956+00:00 stderr F I1213 00:13:14.904213 1 plugin.go:44] Starting Prometheus metrics endpoint server
2025-12-13T00:13:14.904501956+00:00 stderr F I1213 00:13:14.904466 1 plugin.go:47] Starting new HostPathDriver, config: {kubevirt.io.hostpath-provisioner unix:///csi/csi.sock crc map[] latest }
2025-12-13T00:13:14.959631769+00:00 stderr F I1213 00:13:14.959553 1 mount_linux.go:174] Cannot run systemd-run, assuming non-systemd OS
2025-12-13T00:13:14.959631769+00:00 stderr F I1213 00:13:14.959610 1 hostpath.go:88] name: local, dataDir: /csi-data-dir
2025-12-13T00:13:14.959709761+00:00 stderr F I1213 00:13:14.959689 1 hostpath.go:107] Driver: kubevirt.io.hostpath-provisioner, version: latest
2025-12-13T00:13:14.960082914+00:00 stderr F I1213 00:13:14.960051 1 server.go:194] Starting domain socket: unix///csi/csi.sock
2025-12-13T00:13:14.960274540+00:00 stderr F I1213 00:13:14.960221 1 server.go:89] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
2025-12-13T00:13:14.972153200+00:00 stderr F I1213 00:13:14.972012 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-12-13T00:13:19.943882281+00:00 stderr F I1213 00:13:19.941814 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo
2025-12-13T00:13:20.783396120+00:00 stderr F I1213 00:13:20.783339 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetInfo
2025-12-13T00:13:21.490984647+00:00 stderr F I1213 00:13:21.489379 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo
2025-12-13T00:14:15.034378138+00:00 stderr F I1213 00:14:15.034273 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-12-13T00:14:22.115573239+00:00 stderr F I1213 00:14:22.115484 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginInfo
2025-12-13T00:14:22.117542223+00:00 stderr F I1213 00:14:22.117503 1 server.go:104] GRPC call: /csi.v1.Identity/GetPluginCapabilities
2025-12-13T00:14:22.123656829+00:00 stderr F I1213 00:14:22.120385 1 server.go:104] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
2025-12-13T00:14:22.156848787+00:00 stderr F I1213 00:14:22.156806 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetInfo
2025-12-13T00:14:22.416843170+00:00 stderr F I1213 00:14:22.416366 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-12-13T00:14:22.416843170+00:00 stderr F I1213 00:14:22.416386 1 controllerserver.go:230] Checking capacity for storage pool local
2025-12-13T00:15:12.667634587+00:00 stderr F I1213 00:15:12.667529 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-12-13T00:15:12.674924982+00:00 stderr F I1213 00:15:12.674864 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-12-13T00:15:12.675751026+00:00 stderr F I1213 00:15:12.675720 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-12-13T00:15:12.694965025+00:00 stderr F I1213 00:15:12.694874 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-12-13T00:15:12.696485400+00:00 stderr F I1213 00:15:12.696430 1 server.go:104] GRPC call: /csi.v1.Node/NodePublishVolume
2025-12-13T00:15:12.696653005+00:00 stderr F I1213 00:15:12.696450 1 nodeserver.go:82] Node Publish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 PublishContext:map[] StagingTargetPath: TargetPath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount VolumeCapability:mount:<> access_mode: Readonly:false Secrets:map[] VolumeContext:map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:image-registry-75779c45fd-v2j2v csi.storage.k8s.io/pod.namespace:openshift-image-registry csi.storage.k8s.io/pod.uid:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 csi.storage.k8s.io/pv/name:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 csi.storage.k8s.io/pvc/name:crc-image-registry-storage csi.storage.k8s.io/pvc/namespace:openshift-image-registry csi.storage.k8s.io/serviceAccount.name:registry storage.kubernetes.io/csiProvisionerIdentity:1719494501704-84-kubevirt.io.hostpath-provisioner-crc storagePool:local] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-12-13T00:15:15.049278132+00:00 stderr F I1213 00:15:15.048856 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-12-13T00:15:22.415349430+00:00 stderr F I1213 00:15:22.415276 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-12-13T00:15:22.415349430+00:00 stderr F I1213 00:15:22.415297 1 controllerserver.go:230] Checking capacity for storage pool local
2025-12-13T00:16:15.064257639+00:00 stderr F I1213 00:16:15.064127 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-12-13T00:16:22.421948539+00:00 stderr F I1213 00:16:22.419150 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-12-13T00:16:22.421948539+00:00 stderr F I1213 00:16:22.419172 1 controllerserver.go:230] Checking capacity for storage pool local
2025-12-13T00:16:36.490355090+00:00 stderr F I1213 00:16:36.490295 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-12-13T00:16:36.492297164+00:00 stderr F I1213 00:16:36.492263 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-12-13T00:16:36.492317534+00:00 stderr F I1213 00:16:36.492282 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-12-13T00:16:36.503410937+00:00 stderr F I1213 00:16:36.503063 1 healthcheck.go:84] fs available: 45269401600, total capacity: 85294297088, percentage available: 53.07, number of free inodes: 41456701
2025-12-13T00:16:36.503410937+00:00 stderr F I1213 00:16:36.503087 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-12-13T00:16:36.503410937+00:00 stderr F I1213 00:16:36.503100 1 nodeserver.go:330] Capacity: 85294297088 Used: 40024895488 Available: 45269401600 Inodes: 41680320 Free inodes: 41456701 Used inodes: 223619
2025-12-13T00:17:15.081853810+00:00 stderr F I1213 00:17:15.081768 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-12-13T00:17:22.418436293+00:00 stderr F I1213 00:17:22.418251 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-12-13T00:17:22.418436293+00:00 stderr F I1213 00:17:22.418284 1 controllerserver.go:230] Checking capacity for storage pool local
2025-12-13T00:18:15.103871921+00:00 stderr F I1213 00:18:15.103791 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-12-13T00:18:22.421384882+00:00 stderr F I1213 00:18:22.421292 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-12-13T00:18:22.421384882+00:00 stderr F I1213 00:18:22.421314 1 controllerserver.go:230] Checking capacity for storage pool local
2025-12-13T00:18:30.245375971+00:00 stderr F I1213 00:18:30.245259 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-12-13T00:18:30.245966288+00:00 stderr F I1213 00:18:30.245884 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-12-13T00:18:30.245966288+00:00 stderr F I1213 00:18:30.245896 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-12-13T00:18:30.251926192+00:00 stderr F I1213 00:18:30.251871 1 healthcheck.go:84] fs available: 45238943744, total capacity: 85294297088, percentage available: 53.04, number of free inodes: 41456714
2025-12-13T00:18:30.251926192+00:00 stderr F I1213 00:18:30.251902 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-12-13T00:18:30.251926192+00:00 stderr F I1213 00:18:30.251918 1 nodeserver.go:330] Capacity: 85294297088 Used: 40055353344 Available: 45238943744 Inodes: 41680320 Free inodes: 41456714 Used inodes: 223606
2025-12-13T00:18:32.483988050+00:00 stderr F I1213 00:18:32.483731 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-12-13T00:18:32.485026839+00:00 stderr F I1213 00:18:32.484987 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-12-13T00:18:32.486400667+00:00 stderr F I1213 00:18:32.486340 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-12-13T00:18:32.487261790+00:00 stderr F I1213 00:18:32.487204 1 server.go:104] GRPC call: /csi.v1.Node/NodePublishVolume
2025-12-13T00:18:32.487287891+00:00 stderr F I1213 00:18:32.487217 1 nodeserver.go:82] Node Publish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 PublishContext:map[] StagingTargetPath: TargetPath:/var/lib/kubelet/pods/d73c4e63-30ef-4915-925d-f44201c612ec/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount VolumeCapability:mount:<> access_mode: Readonly:false Secrets:map[] VolumeContext:map[csi.storage.k8s.io/ephemeral:false csi.storage.k8s.io/pod.name:image-registry-75b7bb6564-rnjvj csi.storage.k8s.io/pod.namespace:openshift-image-registry csi.storage.k8s.io/pod.uid:d73c4e63-30ef-4915-925d-f44201c612ec csi.storage.k8s.io/pv/name:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 csi.storage.k8s.io/pvc/name:crc-image-registry-storage csi.storage.k8s.io/pvc/namespace:openshift-image-registry csi.storage.k8s.io/serviceAccount.name:registry storage.kubernetes.io/csiProvisionerIdentity:1719494501704-84-kubevirt.io.hostpath-provisioner-crc storagePool:local] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-12-13T00:18:43.383234004+00:00 stderr F I1213 00:18:43.383200 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-12-13T00:18:43.384228521+00:00 stderr F I1213 00:18:43.384194 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-12-13T00:18:43.384312563+00:00 stderr F I1213 00:18:43.384263 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/d73c4e63-30ef-4915-925d-f44201c612ec/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-12-13T00:18:43.389331072+00:00 stderr F I1213 00:18:43.389307 1 healthcheck.go:84] fs available: 45237125120, total capacity: 85294297088, percentage available: 53.04, number of free inodes: 41456611
2025-12-13T00:18:43.389385073+00:00 stderr F I1213 00:18:43.389374 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-12-13T00:18:43.389415954+00:00 stderr F I1213 00:18:43.389406 1 nodeserver.go:330] Capacity: 85294297088 Used: 40057171968 Available: 45237125120 Inodes: 41680320 Free inodes: 41456611 Used inodes: 223709
2025-12-13T00:19:15.111439665+00:00 stderr F I1213 00:19:15.111395 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-12-13T00:19:18.286686381+00:00 stderr F I1213 00:19:18.284434 1 server.go:104] GRPC call: /csi.v1.Node/NodeUnpublishVolume
2025-12-13T00:19:18.286686381+00:00 stderr F I1213 00:19:18.284451 1 nodeserver.go:199] Node Unpublish Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 TargetPath:/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-12-13T00:19:18.286686381+00:00 stderr F I1213 00:19:18.284476 1 nodeserver.go:206] Unmounting path: /var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount
2025-12-13T00:19:22.422976479+00:00 stderr F I1213 00:19:22.422877 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-12-13T00:19:22.422976479+00:00 stderr F I1213 00:19:22.422917 1 controllerserver.go:230] Checking capacity for storage pool local
2025-12-13T00:19:48.975565283+00:00 stderr F I1213 00:19:48.975498 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-12-13T00:19:48.976767576+00:00 stderr F I1213 00:19:48.976718 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-12-13T00:19:48.976767576+00:00 stderr F I1213 00:19:48.976736 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/d73c4e63-30ef-4915-925d-f44201c612ec/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-12-13T00:19:48.986681579+00:00 stderr F I1213 00:19:48.986442 1 healthcheck.go:84] fs available: 45237280768, total capacity: 85294297088, percentage available: 53.04, number of free inodes: 41456719
2025-12-13T00:19:48.986681579+00:00 stderr F I1213 00:19:48.986466 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-12-13T00:19:48.986681579+00:00 stderr F I1213 00:19:48.986477 1 nodeserver.go:330] Capacity: 85294297088 Used: 40057016320 Available: 45237280768 Inodes: 41680320 Free inodes: 41456719 Used inodes: 223601
2025-12-13T00:20:15.122050664+00:00 stderr F I1213 00:20:15.121970 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-12-13T00:20:22.425227067+00:00 stderr F I1213 00:20:22.425177 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-12-13T00:20:22.425227067+00:00 stderr F I1213 00:20:22.425202 1 controllerserver.go:230] Checking capacity for storage pool local
2025-12-13T00:20:49.778597238+00:00 stderr F I1213 00:20:49.778543 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetCapabilities
2025-12-13T00:20:49.779824230+00:00 stderr F I1213 00:20:49.779796 1 server.go:104] GRPC call: /csi.v1.Node/NodeGetVolumeStats
2025-12-13T00:20:49.779824230+00:00 stderr F I1213 00:20:49.779809 1 nodeserver.go:314] Node Get Volume Stats Request: {VolumeId:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 VolumePath:/var/lib/kubelet/pods/d73c4e63-30ef-4915-925d-f44201c612ec/volumes/kubernetes.io~csi/pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2025-12-13T00:20:49.787654522+00:00 stderr F I1213 00:20:49.787593 1 healthcheck.go:84] fs available: 45252554752, total capacity: 85294297088, percentage available: 53.05, number of free inodes: 41456868
2025-12-13T00:20:49.787654522+00:00 stderr F I1213 00:20:49.787621 1 nodeserver.go:321] Healthy state: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 Volume: true
2025-12-13T00:20:49.787654522+00:00 stderr F I1213 00:20:49.787634 1 nodeserver.go:330] Capacity: 85294297088 Used: 40041742336 Available: 45252554752 Inodes: 41680320 Free inodes: 41456868 Used inodes: 223452
2025-12-13T00:21:15.131389185+00:00 stderr F I1213 00:21:15.131321 1 utils.go:221] pool (local, /csi-data-dir), shares path with OS which can lead to node disk pressure
2025-12-13T00:21:22.424833777+00:00 stderr F I1213 00:21:22.424739 1 server.go:104] GRPC call: /csi.v1.Controller/GetCapacity
2025-12-13T00:21:22.424833777+00:00 stderr F I1213 00:21:22.424758 1 controllerserver.go:230] Checking capacity for storage pool local
[log file: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/1.log]
2025-12-13T00:13:19.932130996+00:00 stderr F I1213 00:13:19.929080 1 main.go:135] Version: v4.15.0-202406180807.p0.g9005584.assembly.stream.el8-0-g69b18c7-dirty
2025-12-13T00:13:19.932130996+00:00 stderr F I1213 00:13:19.929495 1 main.go:136] Running node-driver-registrar in mode=
2025-12-13T00:13:19.932130996+00:00 stderr F I1213 00:13:19.929502 1 main.go:157] Attempting to open a gRPC connection with: "/csi/csi.sock"
2025-12-13T00:13:19.940010041+00:00 stderr F I1213 00:13:19.939980 1 main.go:164] Calling CSI driver to discover driver name
2025-12-13T00:13:19.946011282+00:00 stderr F I1213 00:13:19.945559 1 main.go:173] CSI driver name: "kubevirt.io.hostpath-provisioner"
2025-12-13T00:13:19.946011282+00:00 stderr F I1213 00:13:19.945608 1 node_register.go:55] Starting Registration Server at: /registration/kubevirt.io.hostpath-provisioner-reg.sock
2025-12-13T00:13:19.946477618+00:00 stderr F I1213 00:13:19.946440 1 node_register.go:64] Registration Server started at: /registration/kubevirt.io.hostpath-provisioner-reg.sock
2025-12-13T00:13:19.947993899+00:00 stderr F I1213 00:13:19.946738 1 node_register.go:88] Skipping HTTP server because endpoint is set to: ""
2025-12-13T00:13:20.779205939+00:00 stderr F I1213 00:13:20.779151 1 main.go:90] Received GetInfo call: &InfoRequest{}
2025-12-13T00:13:20.829832420+00:00 stderr F I1213 00:13:20.829776 1 main.go:101] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}
[log file: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar/0.log]
2025-08-13T19:59:54.983940954+00:00 stderr F I0813 19:59:54.882907 1 main.go:135] Version: v4.15.0-202406180807.p0.g9005584.assembly.stream.el8-0-g69b18c7-dirty
2025-08-13T19:59:55.140220789+00:00 stderr F I0813 19:59:55.102275 1 main.go:136] Running node-driver-registrar in mode=
2025-08-13T19:59:55.336952618+00:00 stderr F I0813 19:59:55.271313 1 main.go:157] Attempting to open a gRPC connection with: "/csi/csi.sock"
2025-08-13T19:59:56.036679094+00:00 stderr F I0813 19:59:55.934459 1 main.go:164] Calling CSI driver to discover driver name
2025-08-13T19:59:56.994891017+00:00 stderr F I0813 19:59:56.988897 1 main.go:173] CSI driver name: "kubevirt.io.hostpath-provisioner"
2025-08-13T19:59:56.994891017+00:00 stderr F I0813 19:59:56.991755 1 node_register.go:55] Starting Registration Server at: /registration/kubevirt.io.hostpath-provisioner-reg.sock
2025-08-13T19:59:57.003131672+00:00 stderr F I0813 19:59:56.996236 1 node_register.go:64] Registration Server started at: /registration/kubevirt.io.hostpath-provisioner-reg.sock
2025-08-13T19:59:57.043204285+00:00 stderr F I0813 19:59:57.030985 1 node_register.go:88] Skipping HTTP server because endpoint is set to: ""
2025-08-13T19:59:58.136694005+00:00 stderr F I0813 19:59:58.083078 1 main.go:90] Received GetInfo call: &InfoRequest{}
2025-08-13T19:59:58.490232413+00:00 stderr F I0813 19:59:58.490135 1 main.go:101] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}
[log file: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/1.log]
2025-12-13T00:14:22.070451028+00:00 stderr F W1213 00:14:22.069884 1 feature_gate.go:241] Setting GA feature gate Topology=true. It will be removed in a future release.
2025-12-13T00:14:22.070451028+00:00 stderr F I1213 00:14:22.070399 1 feature_gate.go:249] feature gates: &{map[Topology:true]}
2025-12-13T00:14:22.080562624+00:00 stderr F I1213 00:14:22.080507 1 csi-provisioner.go:154] Version: v4.15.0-202406180807.p0.gce5a1a3.assembly.stream.el8-0-g9363464-dirty
2025-12-13T00:14:22.080562624+00:00 stderr F I1213 00:14:22.080533 1 csi-provisioner.go:177] Building kube configs for running in cluster...
2025-12-13T00:14:22.085975708+00:00 stderr F I1213 00:14:22.084156 1 connection.go:215] Connecting to unix:///csi/csi.sock
2025-12-13T00:14:22.091199976+00:00 stderr F I1213 00:14:22.089706 1 common.go:138] Probing CSI driver for readiness
2025-12-13T00:14:22.091199976+00:00 stderr F I1213 00:14:22.089766 1 connection.go:244] GRPC call: /csi.v1.Identity/Probe
2025-12-13T00:14:22.095179514+00:00 stderr F I1213 00:14:22.089770 1 connection.go:245] GRPC request: {}
2025-12-13T00:14:22.115290210+00:00 stderr F I1213 00:14:22.115230 1 connection.go:251] GRPC response: {}
2025-12-13T00:14:22.115290210+00:00 stderr F I1213 00:14:22.115255 1 connection.go:252] GRPC error:
2025-12-13T00:14:22.115290210+00:00 stderr F I1213 00:14:22.115267 1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo
2025-12-13T00:14:22.115290210+00:00 stderr F I1213 00:14:22.115271 1 connection.go:245] GRPC request: {}
2025-12-13T00:14:22.117066048+00:00 stderr F I1213 00:14:22.117017 1 connection.go:251] GRPC response: {"name":"kubevirt.io.hostpath-provisioner","vendor_version":"latest"}
2025-12-13T00:14:22.117066048+00:00 stderr F I1213 00:14:22.117036 1 connection.go:252] GRPC error:
2025-12-13T00:14:22.117066048+00:00 stderr F I1213 00:14:22.117049 1 csi-provisioner.go:230] Detected CSI driver kubevirt.io.hostpath-provisioner
2025-12-13T00:14:22.117066048+00:00 stderr F I1213 00:14:22.117061 1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginCapabilities
2025-12-13T00:14:22.117091659+00:00 stderr F I1213 00:14:22.117065 1 connection.go:245] GRPC request: {}
2025-12-13T00:14:22.120128546+00:00 stderr F I1213 00:14:22.120055 1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"Service":{"type":2}}}]}
2025-12-13T00:14:22.120128546+00:00 stderr F I1213 00:14:22.120081 1 connection.go:252] GRPC error:
2025-12-13T00:14:22.120128546+00:00 stderr F I1213 00:14:22.120121 1 connection.go:244] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
2025-12-13T00:14:22.120152157+00:00 stderr F I1213 00:14:22.120124 1 connection.go:245] GRPC request: {}
2025-12-13T00:14:22.140867033+00:00 stderr F I1213 00:14:22.140789 1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":11}}}]}
2025-12-13T00:14:22.140867033+00:00 stderr F I1213 00:14:22.140806 1 connection.go:252] GRPC error:
2025-12-13T00:14:22.156304440+00:00 stderr F I1213 00:14:22.156221 1 csi-provisioner.go:302] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
2025-12-13T00:14:22.156304440+00:00 stderr F I1213 00:14:22.156265 1 connection.go:244] GRPC call: /csi.v1.Node/NodeGetInfo
2025-12-13T00:14:22.156341301+00:00 stderr F I1213 00:14:22.156270 1 connection.go:245] GRPC request: {}
2025-12-13T00:14:22.157213639+00:00 stderr F I1213 00:14:22.157187 1 connection.go:251] GRPC response: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"node_id":"crc"}
2025-12-13T00:14:22.157213639+00:00 stderr F I1213 00:14:22.157202 1 connection.go:252] GRPC error:
2025-12-13T00:14:22.158097107+00:00 stderr F I1213 00:14:22.157253 1 csi-provisioner.go:351] using local topology with Node = &Node{ObjectMeta:{crc 0 0001-01-01 00:00:00 +0000 UTC map[topology.hostpath.csi/node:crc] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} and CSINode = &CSINode{ObjectMeta:{crc 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:CSINodeSpec{Drivers:[]CSINodeDriver{CSINodeDriver{Name:kubevirt.io.hostpath-provisioner,NodeID:crc,TopologyKeys:[topology.hostpath.csi/node],Allocatable:nil,},},},}
2025-12-13T00:14:22.210017377+00:00 stderr F I1213 00:14:22.208968 1 csi-provisioner.go:464] using apps/v1/DaemonSet csi-hostpathplugin as owner of CSIStorageCapacity objects
2025-12-13T00:14:22.210017377+00:00 stderr F I1213 00:14:22.209033 1 csi-provisioner.go:483] producing CSIStorageCapacity objects with fixed topology segment [topology.hostpath.csi/node: crc]
2025-12-13T00:14:22.313971311+00:00 stderr F I1213 00:14:22.313209 1 csi-provisioner.go:529] using the CSIStorageCapacity v1 API
2025-12-13T00:14:22.313971311+00:00 stderr F I1213 00:14:22.313556 1 capacity.go:339] Capacity Controller: topology changed: added [0xc0004e4648 = topology.hostpath.csi/node: crc], removed []
2025-12-13T00:14:22.313971311+00:00 stderr F I1213 00:14:22.313962 1 controller.go:732] Using saving PVs to API server in background
2025-12-13T00:14:22.354961179+00:00 stderr F I1213 00:14:22.344031 1 reflector.go:289] Starting reflector *v1.CSIStorageCapacity (1h0m0s) from k8s.io/client-go/informers/factory.go:150
2025-12-13T00:14:22.354961179+00:00 stderr F I1213 00:14:22.344031 1 reflector.go:289] Starting reflector *v1.StorageClass (1h0m0s)
from k8s.io/client-go/informers/factory.go:150 2025-12-13T00:14:22.354961179+00:00 stderr F I1213 00:14:22.344061 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150 2025-12-13T00:14:22.354961179+00:00 stderr F I1213 00:14:22.344067 1 reflector.go:325] Listing and watching *v1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:150 2025-12-13T00:14:22.354961179+00:00 stderr F I1213 00:14:22.348087 1 reflector.go:289] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:150 2025-12-13T00:14:22.354961179+00:00 stderr F I1213 00:14:22.348107 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150 2025-12-13T00:14:22.354961179+00:00 stderr F I1213 00:14:22.348487 1 capacity.go:373] Capacity Controller: storage class crc-csi-hostpath-provisioner was updated or added 2025-12-13T00:14:22.354961179+00:00 stderr F I1213 00:14:22.348503 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414232 1 shared_informer.go:341] caches populated 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414264 1 shared_informer.go:341] caches populated 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414280 1 controller.go:811] Starting provisioner controller kubevirt.io.hostpath-provisioner_csi-hostpathplugin-hvm8g_c7da79fa-8900-4241-9a69-56c395212eb0! 
2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414316 1 capacity.go:243] Starting Capacity Controller 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414330 1 shared_informer.go:341] caches populated 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414335 1 capacity.go:339] Capacity Controller: topology changed: added [0xc0004e4648 = topology.hostpath.csi/node: crc], removed [] 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414375 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414389 1 capacity.go:279] Initial number of topology segments 1, storage classes 1, potential CSIStorageCapacity objects 1 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414395 1 capacity.go:290] Checking for existing CSIStorageCapacity objects 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414465 1 capacity.go:725] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37403 matches {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414475 1 capacity.go:255] Started Capacity Controller 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414496 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414512 1 volume_store.go:97] Starting save volume queue 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414701 1 reflector.go:289] Starting reflector *v1.PersistentVolume (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2025-12-13T00:14:22.414984450+00:00 stderr F I1213 00:14:22.414710 1 reflector.go:325] Listing and watching *v1.PersistentVolume from 
sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845 2025-12-13T00:14:22.416562771+00:00 stderr F I1213 00:14:22.415100 1 reflector.go:289] Starting reflector *v1.StorageClass (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 2025-12-13T00:14:22.416562771+00:00 stderr F I1213 00:14:22.415114 1 reflector.go:325] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848 2025-12-13T00:14:22.416562771+00:00 stderr F I1213 00:14:22.415177 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37403 is already known to match {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:14:22.416562771+00:00 stderr F I1213 00:14:22.415205 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:14:22.416562771+00:00 stderr F I1213 00:14:22.415255 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-12-13T00:14:22.416562771+00:00 stderr F I1213 00:14:22.415260 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-12-13T00:14:22.416813269+00:00 stderr F I1213 00:14:22.416781 1 connection.go:251] GRPC response: {"available_capacity":45736325120,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-12-13T00:14:22.416813269+00:00 stderr F I1213 00:14:22.416793 1 connection.go:252] GRPC error: 2025-12-13T00:14:22.416823629+00:00 stderr F I1213 00:14:22.416806 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44664380Ki, new maximumVolumeSize 83295212Ki 2025-12-13T00:14:22.421799169+00:00 stderr F I1213 
00:14:22.421746 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41090 is already known to match {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:14:22.421976455+00:00 stderr F I1213 00:14:22.421921 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41090 for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} with capacity 44664380Ki 2025-12-13T00:14:22.519393447+00:00 stderr F I1213 00:14:22.515007 1 shared_informer.go:341] caches populated 2025-12-13T00:14:22.519393447+00:00 stderr F I1213 00:14:22.515895 1 controller.go:860] Started provisioner controller kubevirt.io.hostpath-provisioner_csi-hostpathplugin-hvm8g_c7da79fa-8900-4241-9a69-56c395212eb0! 2025-12-13T00:14:22.519393447+00:00 stderr F I1213 00:14:22.516801 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: 
local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},} 2025-12-13T00:14:22.519393447+00:00 stderr F I1213 00:14:22.517462 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2025-12-13T00:14:22.519393447+00:00 stderr F I1213 00:14:22.517470 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released 2025-12-13T00:15:22.415092423+00:00 stderr F I1213 00:15:22.414630 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-12-13T00:15:22.415092423+00:00 stderr F I1213 00:15:22.414741 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:15:22.415092423+00:00 stderr F I1213 00:15:22.414781 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-12-13T00:15:22.415092423+00:00 stderr F I1213 00:15:22.414788 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-12-13T00:15:22.415792163+00:00 stderr F I1213 00:15:22.415745 1 connection.go:251] GRPC response: {"available_capacity":44284309504,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-12-13T00:15:22.415792163+00:00 stderr F I1213 00:15:22.415761 1 connection.go:252] GRPC error: 2025-12-13T00:15:22.415828264+00:00 stderr F I1213 00:15:22.415794 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner}, new capacity 43246396Ki, new maximumVolumeSize 83295212Ki 2025-12-13T00:15:22.425011747+00:00 stderr F I1213 00:15:22.424914 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41494 for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} with capacity 43246396Ki 2025-12-13T00:15:22.425035037+00:00 stderr F I1213 00:15:22.425012 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41494 is already known to match {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:16:22.415871454+00:00 stderr F I1213 00:16:22.415787 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-12-13T00:16:22.415957026+00:00 stderr F I1213 00:16:22.415869 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:16:22.415957026+00:00 stderr F I1213 00:16:22.415918 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-12-13T00:16:22.416177552+00:00 stderr F I1213 00:16:22.415923 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-12-13T00:16:22.419719149+00:00 stderr F I1213 00:16:22.419693 1 connection.go:251] GRPC response: {"available_capacity":45269446656,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-12-13T00:16:22.419719149+00:00 stderr F I1213 00:16:22.419709 1 connection.go:252] GRPC error: 2025-12-13T00:16:22.419763451+00:00 stderr F I1213 00:16:22.419739 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44208444Ki, new maximumVolumeSize 83295212Ki 2025-12-13T00:16:22.429024283+00:00 stderr F I1213 00:16:22.428553 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41662 is already known to match {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:16:22.429024283+00:00 stderr F I1213 00:16:22.428822 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41662 for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} with capacity 44208444Ki 2025-12-13T00:17:22.417118439+00:00 stderr F I1213 00:17:22.416892 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-12-13T00:17:22.417118439+00:00 stderr F I1213 00:17:22.417079 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:17:22.417218971+00:00 stderr F I1213 00:17:22.417194 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-12-13T00:17:22.417665663+00:00 stderr F I1213 00:17:22.417206 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-12-13T00:17:22.419175643+00:00 stderr F I1213 00:17:22.419039 1 connection.go:251] GRPC response: {"available_capacity":45256196096,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-12-13T00:17:22.419175643+00:00 stderr F I1213 00:17:22.419069 1 connection.go:252] GRPC error: 2025-12-13T00:17:22.419175643+00:00 stderr F I1213 00:17:22.419106 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44195504Ki, new maximumVolumeSize 83295212Ki 2025-12-13T00:17:22.436049751+00:00 stderr F I1213 00:17:22.435224 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41812 for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} with capacity 44195504Ki 2025-12-13T00:17:22.436049751+00:00 stderr F I1213 00:17:22.435244 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41812 is already known to match {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:18:22.420496329+00:00 stderr F I1213 00:18:22.420332 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-12-13T00:18:22.420550260+00:00 stderr F I1213 00:18:22.420497 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:18:22.420600832+00:00 stderr F I1213 00:18:22.420569 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-12-13T00:18:22.420830018+00:00 stderr F I1213 00:18:22.420582 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-12-13T00:18:22.421857135+00:00 stderr F I1213 00:18:22.421794 1 connection.go:251] GRPC response: {"available_capacity":45238341632,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-12-13T00:18:22.421857135+00:00 stderr F I1213 00:18:22.421810 1 connection.go:252] GRPC error: 2025-12-13T00:18:22.421889186+00:00 stderr F I1213 00:18:22.421835 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44178068Ki, new maximumVolumeSize 83295212Ki 2025-12-13T00:18:22.434191583+00:00 stderr F I1213 00:18:22.434058 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 41988 is already known to match {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:18:22.434695806+00:00 stderr F I1213 00:18:22.434572 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 41988 for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} with capacity 44178068Ki 2025-12-13T00:19:22.421766735+00:00 stderr F I1213 00:19:22.421553 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-12-13T00:19:22.421866968+00:00 stderr F I1213 00:19:22.421838 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:19:22.422095144+00:00 stderr F I1213 00:19:22.422012 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-12-13T00:19:22.422298191+00:00 stderr F I1213 00:19:22.422037 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-12-13T00:19:22.423860053+00:00 stderr F I1213 00:19:22.423801 1 connection.go:251] GRPC response: {"available_capacity":45238501376,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-12-13T00:19:22.423860053+00:00 stderr F I1213 00:19:22.423831 1 connection.go:252] GRPC error: 2025-12-13T00:19:22.424021008+00:00 stderr F I1213 00:19:22.423887 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44178224Ki, new maximumVolumeSize 83295212Ki 2025-12-13T00:19:22.441175131+00:00 stderr F I1213 00:19:22.441097 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 42221 for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} with capacity 44178224Ki 2025-12-13T00:19:22.441175131+00:00 stderr F I1213 00:19:22.441114 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 42221 is already known to match {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:20:22.423789437+00:00 stderr F I1213 00:20:22.423618 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-12-13T00:20:22.423844008+00:00 stderr F I1213 00:20:22.423809 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:20:22.424267210+00:00 stderr F I1213 00:20:22.424211 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-12-13T00:20:22.424630270+00:00 stderr F I1213 00:20:22.424231 1 connection.go:245] GRPC request: 
{"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-12-13T00:20:22.425848344+00:00 stderr F I1213 00:20:22.425816 1 connection.go:251] GRPC response: {"available_capacity":45242007552,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-12-13T00:20:22.425848344+00:00 stderr F I1213 00:20:22.425833 1 connection.go:252] GRPC error: 2025-12-13T00:20:22.425902045+00:00 stderr F I1213 00:20:22.425871 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44181648Ki, new maximumVolumeSize 83295212Ki 2025-12-13T00:20:22.438611585+00:00 stderr F I1213 00:20:22.438560 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 42857 is already known to match {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} 2025-12-13T00:20:22.438782880+00:00 stderr F I1213 00:20:22.438738 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 42857 for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} with capacity 44181648Ki 2025-12-13T00:20:38.470858619+00:00 stderr F I1213 00:20:38.469031 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 6 items received 2025-12-13T00:20:38.470858619+00:00 stderr F I1213 00:20:38.469960 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 6 items received 2025-12-13T00:20:38.470858619+00:00 stderr F I1213 00:20:38.470120 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 13 items received 2025-12-13T00:20:38.475801643+00:00 stderr F I1213 00:20:38.475402 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: 
watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42847&timeout=5m18s&timeoutSeconds=318&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-12-13T00:20:38.475801643+00:00 stderr F I1213 00:20:38.475515 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=42857&timeout=5m48s&timeoutSeconds=348&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-12-13T00:20:38.475801643+00:00 stderr F I1213 00:20:38.475571 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42847&timeout=6m33s&timeoutSeconds=393&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-12-13T00:20:38.501135646+00:00 stderr F I1213 00:20:38.484043 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 6 items received 2025-12-13T00:20:38.501135646+00:00 stderr F I1213 00:20:38.499271 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=42833&timeout=7m56s&timeoutSeconds=476&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-12-13T00:20:38.501135646+00:00 stderr F I1213 00:20:38.499484 1 reflector.go:790] 
sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 6 items received 2025-12-13T00:20:38.508427624+00:00 stderr F I1213 00:20:38.504723 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=42847&timeout=8m50s&timeoutSeconds=530&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-12-13T00:20:39.328221175+00:00 stderr F I1213 00:20:39.328092 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=42857&timeout=5m22s&timeoutSeconds=322&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-12-13T00:20:39.420877785+00:00 stderr F I1213 00:20:39.420573 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=42833&timeout=7m53s&timeoutSeconds=473&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-12-13T00:20:39.617152702+00:00 stderr F I1213 00:20:39.617065 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42847&timeout=9m9s&timeoutSeconds=549&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-12-13T00:20:39.721986911+00:00 
stderr F I1213 00:20:39.721901 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=42847&timeout=6m27s&timeoutSeconds=387&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:39.788760133+00:00 stderr F I1213 00:20:39.788684 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42847&timeout=6m35s&timeoutSeconds=395&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:41.171287390+00:00 stderr F I1213 00:20:41.171204 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=42857&timeout=9m4s&timeoutSeconds=544&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:41.281578107+00:00 stderr F I1213 00:20:41.281482 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=42833&timeout=5m56s&timeoutSeconds=356&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:41.657901371+00:00 stderr F I1213 00:20:41.657836 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=42847&timeout=6m52s&timeoutSeconds=412&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:41.727034717+00:00 stderr F I1213 00:20:41.726973 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42847&timeout=6m54s&timeoutSeconds=414&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:42.571562036+00:00 stderr F I1213 00:20:42.571479 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42847&timeout=8m23s&timeoutSeconds=503&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:46.002775877+00:00 stderr F I1213 00:20:46.002706 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42847&timeout=5m37s&timeoutSeconds=337&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:46.711824171+00:00 stderr F I1213 00:20:46.711744 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=42857&timeout=8m49s&timeoutSeconds=529&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:46.723767183+00:00 stderr F I1213 00:20:46.723702 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=42847&timeout=5m16s&timeoutSeconds=316&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:46.974510430+00:00 stderr F I1213 00:20:46.974288 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42847&timeout=9m36s&timeoutSeconds=576&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:47.624574431+00:00 stderr F I1213 00:20:47.624504 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=42833&timeout=9m55s&timeoutSeconds=595&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:53.311826720+00:00 stderr F I1213 00:20:53.311704 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42847&timeout=8m9s&timeoutSeconds=489&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:53.766563371+00:00 stderr F I1213 00:20:53.765653 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=42857&timeout=7m30s&timeoutSeconds=450&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:53.968064479+00:00 stderr F I1213 00:20:53.967545 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42847&timeout=6m43s&timeoutSeconds=403&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:55.529442733+00:00 stderr F I1213 00:20:55.529366 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=42847&timeout=9m22s&timeoutSeconds=562&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:20:58.765608789+00:00 stderr F I1213 00:20:58.765535 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=42833&timeout=7m12s&timeoutSeconds=432&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:21:06.750691023+00:00 stderr F I1213 00:21:06.750605 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42847&timeout=6m52s&timeoutSeconds=412&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:21:14.180994698+00:00 stderr F I1213 00:21:14.176450 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=42833&timeout=7m23s&timeoutSeconds=443&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:21:14.479926984+00:00 stderr F I1213 00:21:14.479844 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=42847&timeout=9m59s&timeoutSeconds=599&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:21:16.976711010+00:00 stderr F I1213 00:21:16.976599 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=42857&timeout=7m50s&timeoutSeconds=470&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:21:18.766497386+00:00 stderr F I1213 00:21:18.766160 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=42847&timeout=9m38s&timeoutSeconds=578&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-12-13T00:21:22.424154079+00:00 stderr F I1213 00:21:22.424077 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-12-13T00:21:22.424154079+00:00 stderr F I1213 00:21:22.424145 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner}
2025-12-13T00:21:22.424224220+00:00 stderr F I1213 00:21:22.424197 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-12-13T00:21:22.424386085+00:00 stderr F I1213 00:21:22.424206 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-12-13T00:21:22.425206297+00:00 stderr F I1213 00:21:22.425175 1 connection.go:251] GRPC response: {"available_capacity":45232758784,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-12-13T00:21:22.425206297+00:00 stderr F I1213 00:21:22.425195 1 connection.go:252] GRPC error:
2025-12-13T00:21:22.425271019+00:00 stderr F I1213 00:21:22.425239 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner}, new capacity 44172616Ki, new maximumVolumeSize 83295212Ki
2025-12-13T00:21:22.439401210+00:00 stderr F I1213 00:21:22.439346 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 43039 for {segment:0xc0004e4648 storageClassName:crc-csi-hostpath-provisioner} with capacity 44172616Ki
2025-12-13T00:21:46.006688323+00:00 stderr F I1213 00:21:46.006614 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim closed with: too old resource version: 42833 (42935)
2025-12-13T00:21:50.922904562+00:00 stderr F I1213 00:21:50.922829 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume closed with: too old resource version: 42847 (42935)
2025-12-13T00:21:51.064876874+00:00 stderr F I1213 00:21:51.064790 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass closed with: too old resource version: 42847 (42935)
2025-12-13T00:21:51.703977027+00:00 stderr F I1213 00:21:51.703896 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass closed with: too old resource version: 42847 (42935)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner/0.log
2025-08-13T20:00:06.445084695+00:00 stderr F W0813 20:00:06.443324 1 feature_gate.go:241] Setting GA feature gate Topology=true. It will be removed in a future release.
2025-08-13T20:00:06.445484987+00:00 stderr F I0813 20:00:06.445445 1 feature_gate.go:249] feature gates: &{map[Topology:true]}
2025-08-13T20:00:06.445541259+00:00 stderr F I0813 20:00:06.445526 1 csi-provisioner.go:154] Version: v4.15.0-202406180807.p0.gce5a1a3.assembly.stream.el8-0-g9363464-dirty
2025-08-13T20:00:06.445584310+00:00 stderr F I0813 20:00:06.445567 1 csi-provisioner.go:177] Building kube configs for running in cluster...
2025-08-13T20:00:06.695957019+00:00 stderr F I0813 20:00:06.658088 1 connection.go:215] Connecting to unix:///csi/csi.sock
2025-08-13T20:00:06.730316519+00:00 stderr F I0813 20:00:06.730187 1 common.go:138] Probing CSI driver for readiness
2025-08-13T20:00:06.730426782+00:00 stderr F I0813 20:00:06.730385 1 connection.go:244] GRPC call: /csi.v1.Identity/Probe
2025-08-13T20:00:06.763211887+00:00 stderr F I0813 20:00:06.730446 1 connection.go:245] GRPC request: {}
2025-08-13T20:00:06.929922880+00:00 stderr F I0813 20:00:06.925372 1 connection.go:251] GRPC response: {}
2025-08-13T20:00:06.929922880+00:00 stderr F I0813 20:00:06.925421 1 connection.go:252] GRPC error:
2025-08-13T20:00:06.929922880+00:00 stderr F I0813 20:00:06.925443 1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginInfo
2025-08-13T20:00:06.929922880+00:00 stderr F I0813 20:00:06.925449 1 connection.go:245] GRPC request: {}
2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034174 1 connection.go:251] GRPC response: {"name":"kubevirt.io.hostpath-provisioner","vendor_version":"latest"}
2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034258 1 connection.go:252] GRPC error:
2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034275 1 csi-provisioner.go:230] Detected CSI driver kubevirt.io.hostpath-provisioner
2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034292 1 connection.go:244] GRPC call: /csi.v1.Identity/GetPluginCapabilities
2025-08-13T20:00:07.049764737+00:00 stderr F I0813 20:00:07.034297 1 connection.go:245] GRPC request: {}
2025-08-13T20:00:07.272894790+00:00 stderr F I0813 20:00:07.242737 1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"Service":{"type":2}}}]}
2025-08-13T20:00:07.293117656+00:00 stderr F I0813 20:00:07.273106 1 connection.go:252] GRPC error:
2025-08-13T20:00:07.293289511+00:00 stderr F I0813 20:00:07.293266 1 connection.go:244] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
2025-08-13T20:00:07.293398524+00:00 stderr F I0813 20:00:07.293311 1 connection.go:245] GRPC request: {}
2025-08-13T20:00:07.558471632+00:00 stderr F I0813 20:00:07.557998 1 connection.go:251] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":11}}}]}
2025-08-13T20:00:07.558471632+00:00 stderr F I0813 20:00:07.558063 1 connection.go:252] GRPC error:
2025-08-13T20:00:07.589250729+00:00 stderr F I0813 20:00:07.578185 1 csi-provisioner.go:302] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
2025-08-13T20:00:07.589250729+00:00 stderr F I0813 20:00:07.578240 1 connection.go:244] GRPC call: /csi.v1.Node/NodeGetInfo
2025-08-13T20:00:07.589250729+00:00 stderr F I0813 20:00:07.578246 1 connection.go:245] GRPC request: {}
2025-08-13T20:00:07.691395962+00:00 stderr F I0813 20:00:07.676466 1 connection.go:251] GRPC response: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"node_id":"crc"}
2025-08-13T20:00:07.691395962+00:00 stderr F I0813 20:00:07.676565 1 connection.go:252] GRPC error:
2025-08-13T20:00:07.929995405+00:00 stderr F I0813 20:00:07.676661 1 csi-provisioner.go:351] using local topology with Node = &Node{ObjectMeta:{crc 0 0001-01-01 00:00:00 +0000 UTC map[topology.hostpath.csi/node:crc] map[] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} and CSINode = &CSINode{ObjectMeta:{crc 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:CSINodeSpec{Drivers:[]CSINodeDriver{CSINodeDriver{Name:kubevirt.io.hostpath-provisioner,NodeID:crc,TopologyKeys:[topology.hostpath.csi/node],Allocatable:nil,},},},}
2025-08-13T20:00:08.534874933+00:00 stderr F I0813 20:00:08.532659 1 csi-provisioner.go:464] using apps/v1/DaemonSet csi-hostpathplugin as owner of CSIStorageCapacity objects
2025-08-13T20:00:08.534874933+00:00 stderr F I0813 20:00:08.532718 1 csi-provisioner.go:483] producing CSIStorageCapacity objects with fixed topology segment [topology.hostpath.csi/node: crc]
2025-08-13T20:00:08.555891592+00:00 stderr F I0813 20:00:08.555419 1 csi-provisioner.go:529] using the CSIStorageCapacity v1 API
2025-08-13T20:00:08.558949119+00:00 stderr F I0813 20:00:08.556337 1 capacity.go:339] Capacity Controller: topology changed: added [0xc0001296f8 = topology.hostpath.csi/node: crc], removed []
2025-08-13T20:00:08.558949119+00:00 stderr F I0813 20:00:08.557178 1 controller.go:732] Using saving PVs to API server in background
2025-08-13T20:00:08.584915160+00:00 stderr F I0813 20:00:08.576298 1 reflector.go:289] Starting reflector *v1.CSIStorageCapacity (1h0m0s) from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:00:08.584915160+00:00 stderr F I0813 20:00:08.576380 1 reflector.go:325] Listing and watching *v1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:00:08.601974936+00:00 stderr F I0813 20:00:08.591735 1 reflector.go:289] Starting reflector *v1.StorageClass (1h0m0s) from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:00:08.601974936+00:00 stderr F I0813 20:00:08.591826 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:00:08.617532940+00:00 stderr F I0813 20:00:08.612115 1 reflector.go:289] Starting reflector *v1.PersistentVolumeClaim (15m0s) from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:00:08.617532940+00:00 stderr F I0813 20:00:08.612165 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:00:10.156014488+00:00 stderr F I0813 20:00:10.115770 1 capacity.go:373] Capacity Controller: storage class crc-csi-hostpath-provisioner was updated or added
2025-08-13T20:00:10.156148102+00:00 stderr F I0813 20:00:10.156111 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:00:10.175935736+00:00 stderr F I0813 20:00:10.168114 1 shared_informer.go:341] caches populated
2025-08-13T20:00:10.175935736+00:00 stderr F I0813 20:00:10.168191 1 shared_informer.go:341] caches populated
2025-08-13T20:00:10.175935736+00:00 stderr F I0813 20:00:10.168371 1 controller.go:811] Starting provisioner controller kubevirt.io.hostpath-provisioner_csi-hostpathplugin-hvm8g_9c0cb162-9831-443d-a7c1-5af5632fc687!
2025-08-13T20:00:10.200235409+00:00 stderr F I0813 20:00:10.191948 1 capacity.go:243] Starting Capacity Controller
2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380224 1 shared_informer.go:341] caches populated
2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380278 1 capacity.go:339] Capacity Controller: topology changed: added [0xc0001296f8 = topology.hostpath.csi/node: crc], removed []
2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380614 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380631 1 capacity.go:279] Initial number of topology segments 1, storage classes 1, potential CSIStorageCapacity objects 1
2025-08-13T20:00:10.380888950+00:00 stderr F I0813 20:00:10.380815 1 capacity.go:290] Checking for existing CSIStorageCapacity objects
2025-08-13T20:00:10.380956782+00:00 stderr F I0813 20:00:10.380903 1 capacity.go:725] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 24362 matches {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:00:10.380956782+00:00 stderr F I0813 20:00:10.380917 1 capacity.go:255] Started Capacity Controller
2025-08-13T20:00:10.380956782+00:00 stderr F I0813 20:00:10.380933 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:00:10.380968402+00:00 stderr F I0813 20:00:10.380954 1 reflector.go:289] Starting reflector *v1.PersistentVolume (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845
2025-08-13T20:00:10.380968402+00:00 stderr F I0813 20:00:10.380961 1 reflector.go:325] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845
2025-08-13T20:00:10.385743418+00:00 stderr F I0813 20:00:10.384352 1 volume_store.go:97] Starting save volume queue
2025-08-13T20:00:10.385743418+00:00 stderr F I0813 20:00:10.385212 1 reflector.go:289] Starting reflector *v1.StorageClass (15m0s) from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848
2025-08-13T20:00:10.385743418+00:00 stderr F I0813 20:00:10.385224 1 reflector.go:325] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848
2025-08-13T20:00:10.643051176+00:00 stderr F I0813 20:00:10.609281 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 24362 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:00:10.643051176+00:00 stderr F I0813 20:00:10.635463 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:00:10.643051176+00:00 stderr F I0813 20:00:10.636202 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:00:10.643051176+00:00 stderr F I0813 20:00:10.636211 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:00:11.139235764+00:00 stderr F I0813 20:00:11.119364 1 connection.go:251] GRPC response: {"available_capacity":63507808256,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:00:11.361356596+00:00 stderr F I0813 20:00:11.358355 1 connection.go:252] GRPC error:
2025-08-13T20:00:11.361356596+00:00 stderr F I0813 20:00:11.358538 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 62019344Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:00:11.361356596+00:00 stderr F I0813 20:00:11.178084 1 shared_informer.go:341] caches populated
2025-08-13T20:00:11.361356596+00:00 stderr F I0813 20:00:11.359593 1 controller.go:860] Started provisioner controller kubevirt.io.hostpath-provisioner_csi-hostpathplugin-hvm8g_9c0cb162-9831-443d-a7c1-5af5632fc687!
2025-08-13T20:00:11.627920227+00:00 stderr F I0813 20:00:11.359641 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},}
2025-08-13T20:00:11.627920227+00:00 stderr F I0813 20:00:11.620275 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97"
2025-08-13T20:00:11.627920227+00:00 stderr F I0813 20:00:11.620441 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released
2025-08-13T20:00:11.648860944+00:00 stderr F I0813 20:00:11.646741 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 29050 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:00:11.648860944+00:00 stderr F I0813 20:00:11.646990 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 29050 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 62019344Ki
2025-08-13T20:01:10.489766068+00:00 stderr F I0813 20:01:10.440600 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:01:10.489766068+00:00 stderr F I0813 20:01:10.441262 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:01:10.489766068+00:00 stderr F I0813 20:01:10.442478 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:01:10.489766068+00:00 stderr F I0813 20:01:10.442489 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:01:10.613188907+00:00 stderr F I0813 20:01:10.566852 1 connection.go:251] GRPC response: {"available_capacity":61631873024,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:01:10.613188907+00:00 stderr F I0813 20:01:10.567008 1 connection.go:252] GRPC error:
2025-08-13T20:01:10.613188907+00:00 stderr F I0813 20:01:10.567315 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 60187376Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:01:14.381652061+00:00 stderr F I0813 20:01:14.376669 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 30349 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:01:14.446122080+00:00 stderr F I0813 20:01:14.427122 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 30349 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 60187376Ki
2025-08-13T20:02:10.481966068+00:00 stderr F I0813 20:02:10.481609 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:02:10.481966068+00:00 stderr F I0813 20:02:10.481897 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:02:10.482290887+00:00 stderr F I0813 20:02:10.482127 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:02:10.482566955+00:00 stderr F I0813 20:02:10.482150 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:02:10.486612800+00:00 stderr F I0813 20:02:10.486560 1 connection.go:251] GRPC response: {"available_capacity":57153896448,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:02:10.486612800+00:00 stderr F I0813 20:02:10.486584 1 connection.go:252] GRPC error:
2025-08-13T20:02:10.486756145+00:00 stderr F I0813 20:02:10.486636 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 55814352Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:02:13.039704301+00:00 stderr F I0813 20:02:13.038706 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 30628 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:02:13.052607229+00:00 stderr F I0813 20:02:13.052485 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 30628 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 55814352Ki
2025-08-13T20:02:29.596263111+00:00 stderr F I0813 20:02:29.591484 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 2 items received
2025-08-13T20:02:29.643445047+00:00 stderr F I0813 20:02:29.643044 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 2 items received
2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.676729 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 5 items received
2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.677578 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 2 items received
2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.679991 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 2 items received
2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.682654 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=9m8s&timeoutSeconds=548&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:29.687225776+00:00 stderr F I0813 20:02:29.683529 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=8m32s&timeoutSeconds=512&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:29.701086371+00:00 stderr F I0813 20:02:29.690134 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m10s&timeoutSeconds=310&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:29.708374689+00:00 stderr F I0813 20:02:29.704397 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m6s&timeoutSeconds=366&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:29.708374689+00:00 stderr F I0813 20:02:29.707113 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m40s&timeoutSeconds=400&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:30.540496978+00:00 stderr F I0813 20:02:30.540338 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=8m21s&timeoutSeconds=501&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:30.583592947+00:00 stderr F I0813 20:02:30.583343 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=9m20s&timeoutSeconds=560&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:31.109182150+00:00 stderr F I0813 20:02:31.109009 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=7m44s&timeoutSeconds=464&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:31.124899568+00:00 stderr F I0813 20:02:31.124727 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=7m38s&timeoutSeconds=458&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:31.156989043+00:00 stderr F I0813 20:02:31.156761 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=5m2s&timeoutSeconds=302&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:32.744557062+00:00 stderr F I0813 20:02:32.744349 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=9m9s&timeoutSeconds=549&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:32.838680177+00:00 stderr F I0813 20:02:32.838465 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=8m13s&timeoutSeconds=493&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:33.163339149+00:00 stderr F I0813 20:02:33.163234 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=9m8s&timeoutSeconds=548&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:33.547760206+00:00 stderr F I0813 20:02:33.547306 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m10s&timeoutSeconds=370&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:02:33.892723506+00:00 stderr F I0813 20:02:33.892171 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845:
watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m50s&timeoutSeconds=410&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:36.877874593+00:00 stderr F I0813 20:02:36.877703 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=7m23s&timeoutSeconds=443&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:38.534622675+00:00 stderr F I0813 20:02:38.534406 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m52s&timeoutSeconds=532&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:39.060985840+00:00 stderr F I0813 20:02:39.060316 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m12s&timeoutSeconds=372&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:39.439426896+00:00 stderr F I0813 20:02:39.439295 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=9m11s&timeoutSeconds=551&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:39.521892639+00:00 stderr F I0813 20:02:39.521633 1 reflector.go:421] 
k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=9m47s&timeoutSeconds=587&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:44.532424188+00:00 stderr F I0813 20:02:44.530300 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=5m1s&timeoutSeconds=301&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:47.765023075+00:00 stderr F I0813 20:02:47.764585 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m18s&timeoutSeconds=498&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:48.193273602+00:00 stderr F I0813 20:02:48.193156 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:49.033706956+00:00 stderr F I0813 20:02:49.033579 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:51.421538544+00:00 stderr F I0813 20:02:51.421281 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=6m19s&timeoutSeconds=379&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:02:57.470707981+00:00 stderr F I0813 20:02:57.470627 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=6m30s&timeoutSeconds=390&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:07.083578567+00:00 stderr F I0813 20:03:07.083080 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=6m15s&timeoutSeconds=375&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:09.932190890+00:00 stderr F I0813 20:03:09.932042 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 10.217.4.1:443: 
connect: connection refused - backing off 2025-08-13T20:03:10.483036413+00:00 stderr F I0813 20:03:10.482545 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:03:10.483876137+00:00 stderr F I0813 20:03:10.483736 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:10.484152475+00:00 stderr F I0813 20:03:10.484091 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:10.484967098+00:00 stderr F I0813 20:03:10.484128 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:10.488412296+00:00 stderr F I0813 20:03:10.488337 1 connection.go:251] GRPC response: {"available_capacity":55023792128,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:10.488412296+00:00 stderr F I0813 20:03:10.488355 1 connection.go:252] GRPC error: 2025-08-13T20:03:10.488824748+00:00 stderr F I0813 20:03:10.488495 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 53734172Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:10.490741163+00:00 stderr F E0813 20:03:10.490570 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:10.490741163+00:00 stderr F W0813 20:03:10.490611 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 0 failures 
2025-08-13T20:03:10.693011203+00:00 stderr F I0813 20:03:10.692499 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=9m4s&timeoutSeconds=544&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:11.492386517+00:00 stderr F I0813 20:03:11.492204 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:11.492601553+00:00 stderr F I0813 20:03:11.492514 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:11.493017445+00:00 stderr F I0813 20:03:11.492540 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:11.494756084+00:00 stderr F I0813 20:03:11.494698 1 connection.go:251] GRPC response: {"available_capacity":54914170880,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:11.494756084+00:00 stderr F I0813 20:03:11.494734 1 connection.go:252] GRPC error: 2025-08-13T20:03:11.496875165+00:00 stderr F I0813 20:03:11.494947 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 53627120Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:11.497473372+00:00 stderr F E0813 20:03:11.497368 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:11.497473372+00:00 stderr F W0813 20:03:11.497402 1 
capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 1 failures 2025-08-13T20:03:13.498706382+00:00 stderr F I0813 20:03:13.498530 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:13.498706382+00:00 stderr F I0813 20:03:13.498619 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:13.498881047+00:00 stderr F I0813 20:03:13.498628 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:13.499975518+00:00 stderr F I0813 20:03:13.499936 1 connection.go:251] GRPC response: {"available_capacity":54619185152,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:13.499975518+00:00 stderr F I0813 20:03:13.499953 1 connection.go:252] GRPC error: 2025-08-13T20:03:13.500033699+00:00 stderr F I0813 20:03:13.499987 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 53339048Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:13.501631665+00:00 stderr F E0813 20:03:13.501561 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:13.501631665+00:00 stderr F W0813 20:03:13.501614 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 2 failures 2025-08-13T20:03:16.036603780+00:00 stderr F I0813 20:03:16.036401 1 reflector.go:421] 
k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=5m11s&timeoutSeconds=311&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:17.501995015+00:00 stderr F I0813 20:03:17.501871 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:17.501995015+00:00 stderr F I0813 20:03:17.501973 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:17.502119369+00:00 stderr F I0813 20:03:17.501980 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:17.503699514+00:00 stderr F I0813 20:03:17.503642 1 connection.go:251] GRPC response: {"available_capacity":53907591168,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:17.503699514+00:00 stderr F I0813 20:03:17.503676 1 connection.go:252] GRPC error: 2025-08-13T20:03:17.503722575+00:00 stderr F I0813 20:03:17.503701 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 52644132Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:17.505298440+00:00 stderr F E0813 20:03:17.505246 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:03:17.505298440+00:00 stderr F W0813 20:03:17.505284 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 3 failures 2025-08-13T20:03:25.507222693+00:00 stderr F I0813 20:03:25.506678 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:25.507309735+00:00 stderr F I0813 20:03:25.507277 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:25.507887532+00:00 stderr F I0813 20:03:25.507288 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:25.509641382+00:00 stderr F I0813 20:03:25.509551 1 connection.go:251] GRPC response: {"available_capacity":52684136448,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:25.509641382+00:00 stderr F I0813 20:03:25.509589 1 connection.go:252] GRPC error: 2025-08-13T20:03:25.509764366+00:00 stderr F I0813 20:03:25.509677 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 51449352Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:25.511732552+00:00 stderr F E0813 20:03:25.511669 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:25.511732552+00:00 stderr F W0813 20:03:25.511709 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 4 failures 
2025-08-13T20:03:41.513143198+00:00 stderr F I0813 20:03:41.512502 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:03:41.513143198+00:00 stderr F I0813 20:03:41.512892 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:03:41.513257972+00:00 stderr F I0813 20:03:41.512904 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:03:41.518480131+00:00 stderr F I0813 20:03:41.518403 1 connection.go:251] GRPC response: {"available_capacity":51869433856,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:03:41.518480131+00:00 stderr F I0813 20:03:41.518435 1 connection.go:252] GRPC error: 2025-08-13T20:03:41.518547163+00:00 stderr F I0813 20:03:41.518467 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50653744Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:03:41.520934581+00:00 stderr F E0813 20:03:41.520881 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:41.520934581+00:00 stderr F W0813 20:03:41.520922 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 5 failures 2025-08-13T20:03:42.890655574+00:00 stderr F I0813 20:03:42.890433 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m10s&timeoutSeconds=490&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:47.364616962+00:00 stderr F I0813 20:03:47.364496 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m45s&timeoutSeconds=405&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:48.306607314+00:00 stderr F I0813 20:03:48.306279 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=7m14s&timeoutSeconds=434&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:50.851697078+00:00 stderr F I0813 20:03:50.851564 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=8m11s&timeoutSeconds=491&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:03:52.214568860+00:00 stderr F I0813 20:03:52.214453 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing 
off 2025-08-13T20:04:10.484515361+00:00 stderr F I0813 20:04:10.484202 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:04:10.484515361+00:00 stderr F I0813 20:04:10.484433 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:04:10.484827289+00:00 stderr F I0813 20:04:10.484734 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:04:10.485284332+00:00 stderr F I0813 20:04:10.484765 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:04:10.488669139+00:00 stderr F I0813 20:04:10.488550 1 connection.go:251] GRPC response: {"available_capacity":53007642624,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:04:10.488948477+00:00 stderr F I0813 20:04:10.488876 1 connection.go:252] GRPC error: 2025-08-13T20:04:10.489021569+00:00 stderr F I0813 20:04:10.488938 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 51765276Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:04:10.492210750+00:00 stderr F E0813 20:04:10.492117 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:10.492210750+00:00 stderr F W0813 20:04:10.492154 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 6 failures 2025-08-13T20:04:13.522250948+00:00 stderr 
F I0813 20:04:13.521990 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:04:13.522464694+00:00 stderr F I0813 20:04:13.522448 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:04:13.522983819+00:00 stderr F I0813 20:04:13.522486 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:04:13.524461261+00:00 stderr F I0813 20:04:13.524434 1 connection.go:251] GRPC response: {"available_capacity":52794683392,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:04:13.524668987+00:00 stderr F I0813 20:04:13.524619 1 connection.go:252] GRPC error: 2025-08-13T20:04:13.524953595+00:00 stderr F I0813 20:04:13.524744 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 51557308Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:04:13.526714495+00:00 stderr F E0813 20:04:13.526603 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:13.526714495+00:00 stderr F W0813 20:04:13.526646 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 7 failures 2025-08-13T20:04:18.700132527+00:00 stderr F I0813 20:04:18.694650 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m24s&timeoutSeconds=504&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:30.683585878+00:00 stderr F I0813 20:04:30.683216 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=30628&timeout=5m58s&timeoutSeconds=358&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:34.170544641+00:00 stderr F I0813 20:04:34.170234 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30619&timeout=6m13s&timeoutSeconds=373&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:42.112098456+00:00 stderr F I0813 20:04:42.111886 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30625&timeout=9m26s&timeoutSeconds=566&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:04:49.089169411+00:00 stderr F I0813 20:04:49.086947 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=5m27s&timeoutSeconds=327&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing 
off 2025-08-13T20:05:02.568697791+00:00 stderr F I0813 20:05:02.568367 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30620&timeout=8m4s&timeoutSeconds=484&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:05:10.486039784+00:00 stderr F I0813 20:05:10.485702 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:05:10.486039784+00:00 stderr F I0813 20:05:10.486018 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:05:10.486309361+00:00 stderr F I0813 20:05:10.486220 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:05:10.486683242+00:00 stderr F I0813 20:05:10.486245 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:05:10.492159129+00:00 stderr F I0813 20:05:10.492067 1 connection.go:251] GRPC response: {"available_capacity":48768638976,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:05:10.492159129+00:00 stderr F I0813 20:05:10.492101 1 connection.go:252] GRPC error: 2025-08-13T20:05:10.492188450+00:00 stderr F I0813 20:05:10.492134 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 47625624Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:05:10.495313319+00:00 stderr F E0813 20:05:10.494837 1 capacity.go:551] update CSIStorageCapacity for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}: Put 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities/csisc-f2s8x": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:10.495313319+00:00 stderr F W0813 20:05:10.494884 1 capacity.go:552] Retrying capacity.workItem{segment:(*topology.Segment)(0xc0001296f8), storageClassName:"crc-csi-hostpath-provisioner"} after 8 failures 2025-08-13T20:05:27.772514028+00:00 stderr F I0813 20:05:27.772064 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity closed with: too old resource version: 30628 (30718) 2025-08-13T20:05:29.777518023+00:00 stderr F I0813 20:05:29.777231 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume closed with: too old resource version: 30619 (30718) 2025-08-13T20:05:32.165918598+00:00 stderr F I0813 20:05:32.165240 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass closed with: too old resource version: 30620 (30718) 2025-08-13T20:05:34.082920372+00:00 stderr F I0813 20:05:34.081109 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim closed with: too old resource version: 30625 (30718) 2025-08-13T20:05:51.792116716+00:00 stderr F I0813 20:05:51.791632 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass closed with: too old resource version: 30620 (30718) 2025-08-13T20:05:59.738435116+00:00 stderr F I0813 20:05:59.736072 1 reflector.go:325] Listing and watching *v1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:150 2025-08-13T20:05:59.787175102+00:00 stderr F I0813 20:05:59.786630 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 30628 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 
2025-08-13T20:06:06.772732740+00:00 stderr F I0813 20:06:06.772544 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:06:06.795659576+00:00 stderr F I0813 20:06:06.795480 1 capacity.go:373] Capacity Controller: storage class crc-csi-hostpath-provisioner was updated or added
2025-08-13T20:06:06.795721518+00:00 stderr F I0813 20:06:06.795633 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:06:06.796370387+00:00 stderr F I0813 20:06:06.796282 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:06:06.797268303+00:00 stderr F I0813 20:06:06.796407 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:06:06.798042965+00:00 stderr F I0813 20:06:06.796415 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:06:06.799757244+00:00 stderr F I0813 20:06:06.799632 1 connection.go:251] GRPC response: {"available_capacity":47480958976,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:06:06.799974140+00:00 stderr F I0813 20:06:06.799949 1 connection.go:252] GRPC error:
2025-08-13T20:06:06.800095603+00:00 stderr F I0813 20:06:06.800050 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 46368124Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:06:06.820370124+00:00 stderr F I0813 20:06:06.819646 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 31987 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 46368124Ki
2025-08-13T20:06:06.826149130+00:00 stderr F I0813 20:06:06.826035 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 31987 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:06:10.486524997+00:00 stderr F I0813 20:06:10.486104 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:06:10.486524997+00:00 stderr F I0813 20:06:10.486227 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:06:10.486524997+00:00 stderr F I0813 20:06:10.486337 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:06:10.486524997+00:00 stderr F I0813 20:06:10.486344 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:06:10.516318230+00:00 stderr F I0813 20:06:10.516209 1 connection.go:251] GRPC response: {"available_capacity":47480971264,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:06:10.516318230+00:00 stderr F I0813 20:06:10.516250 1 connection.go:252] GRPC error:
2025-08-13T20:06:10.516318230+00:00 stderr F I0813 20:06:10.516275 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 46368136Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:06:10.580150658+00:00 stderr F I0813 20:06:10.579169 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 32004 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:06:10.583094132+00:00 stderr F I0813 20:06:10.582981 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 32004 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 46368136Ki
2025-08-13T20:06:18.840921105+00:00 stderr F I0813 20:06:18.839541 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:06:21.527915969+00:00 stderr F I0813 20:06:21.527730 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:06:21.528032313+00:00 stderr F I0813 20:06:21.527996 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:06:21.528336031+00:00 stderr F I0813 20:06:21.528020 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:06:21.535661441+00:00 stderr F I0813 20:06:21.535546 1 connection.go:251] GRPC response: {"available_capacity":47480754176,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:06:21.535661441+00:00 stderr F I0813 20:06:21.535579 1 connection.go:252] GRPC error:
2025-08-13T20:06:21.535744454+00:00 stderr F I0813 20:06:21.535615 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 46367924Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:06:21.550683701+00:00 stderr F I0813 20:06:21.550618 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 32037 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 46367924Ki
2025-08-13T20:06:21.551119974+00:00 stderr F I0813 20:06:21.551008 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 32037 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:06:21.866016731+00:00 stderr F I0813 20:06:21.865906 1 reflector.go:325] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848
2025-08-13T20:06:28.050184360+00:00 stderr F I0813 20:06:28.044093 1 reflector.go:325] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845
2025-08-13T20:06:28.062771960+00:00 stderr F I0813 20:06:28.055746 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},}
2025-08-13T20:06:28.062771960+00:00 stderr F I0813 20:06:28.062736 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97"
2025-08-13T20:06:28.062945775+00:00 stderr F I0813 20:06:28.062823 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released
2025-08-13T20:07:10.490577272+00:00 stderr F I0813 20:07:10.490248 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:07:10.490577272+00:00 stderr F I0813 20:07:10.490483 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:07:10.491729745+00:00 stderr F I0813 20:07:10.490719 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:07:10.491729745+00:00 stderr F I0813 20:07:10.490752 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:07:10.513115619+00:00 stderr F I0813 20:07:10.512331 1 connection.go:251] GRPC response: {"available_capacity":49674403840,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:07:10.514217710+00:00 stderr F I0813 20:07:10.514163 1 connection.go:252] GRPC error:
2025-08-13T20:07:10.514511549+00:00 stderr F I0813 20:07:10.514232 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48510160Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:07:10.571925205+00:00 stderr F I0813 20:07:10.571615 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 32389 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:07:10.573383717+00:00 stderr F I0813 20:07:10.571987 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 32389 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48510160Ki
2025-08-13T20:08:10.496212475+00:00 stderr F I0813 20:08:10.494950 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:08:10.496212475+00:00 stderr F I0813 20:08:10.495758 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:08:10.497057509+00:00 stderr F I0813 20:08:10.496985 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:08:10.497678547+00:00 stderr F I0813 20:08:10.497017 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:08:10.505908763+00:00 stderr F I0813 20:08:10.505524 1 connection.go:251] GRPC response: {"available_capacity":51680923648,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:08:10.505908763+00:00 stderr F I0813 20:08:10.505565 1 connection.go:252] GRPC error:
2025-08-13T20:08:10.505908763+00:00 stderr F I0813 20:08:10.505623 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50469652Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:08:10.529526140+00:00 stderr F I0813 20:08:10.529455 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 32843 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50469652Ki
2025-08-13T20:08:10.529643024+00:00 stderr F I0813 20:08:10.529465 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 32843 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:08:26.211449194+00:00 stderr F I0813 20:08:26.209243 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 1 items received
2025-08-13T20:08:26.232885748+00:00 stderr F I0813 20:08:26.226685 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 2 items received
2025-08-13T20:08:26.245728977+00:00 stderr F I0813 20:08:26.245648 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:26.245971094+00:00 stderr F I0813 20:08:26.245940 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=5m29s&timeoutSeconds=329&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:26.329062316+00:00 stderr F I0813 20:08:26.329006 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 7 items received
2025-08-13T20:08:26.338875157+00:00 stderr F I0813 20:08:26.337266 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 2 items received
2025-08-13T20:08:26.339112204+00:00 stderr F I0813 20:08:26.339083 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 2 items received
2025-08-13T20:08:26.339557827+00:00 stderr F I0813 20:08:26.339500 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=7m55s&timeoutSeconds=475&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:26.348191124+00:00 stderr F I0813 20:08:26.348106 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=9m15s&timeoutSeconds=555&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:26.354041382+00:00 stderr F I0813 20:08:26.353987 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=6m32s&timeoutSeconds=392&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:27.271516707+00:00 stderr F I0813 20:08:27.271446 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=8m44s&timeoutSeconds=524&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:27.443477858+00:00 stderr F I0813 20:08:27.441995 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=5m47s&timeoutSeconds=347&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:27.459096985+00:00 stderr F I0813 20:08:27.458965 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=8m54s&timeoutSeconds=534&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:27.719648486+00:00 stderr F I0813 20:08:27.715522 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=5m57s&timeoutSeconds=357&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:27.915456340+00:00 stderr F I0813 20:08:27.915341 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=8m52s&timeoutSeconds=532&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:28.964473995+00:00 stderr F I0813 20:08:28.964297 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=7m31s&timeoutSeconds=451&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:29.650917816+00:00 stderr F I0813 20:08:29.650675 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:30.207930386+00:00 stderr F I0813 20:08:30.198136 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=9m44s&timeoutSeconds=584&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:30.226399856+00:00 stderr F I0813 20:08:30.219066 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=6m28s&timeoutSeconds=388&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:31.042481194+00:00 stderr F I0813 20:08:31.042351 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=6m39s&timeoutSeconds=399&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:33.424378073+00:00 stderr F I0813 20:08:33.424253 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:34.154934419+00:00 stderr F I0813 20:08:34.153059 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:34.169285610+00:00 stderr F I0813 20:08:34.169180 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=9m36s&timeoutSeconds=576&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:34.971095929+00:00 stderr F I0813 20:08:34.970921 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=7m47s&timeoutSeconds=467&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:36.935505080+00:00 stderr F I0813 20:08:36.935337 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=5m59s&timeoutSeconds=359&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:41.031694191+00:00 stderr F I0813 20:08:41.026092 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=5m30s&timeoutSeconds=330&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:42.459972361+00:00 stderr F I0813 20:08:42.453283 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:43.610085645+00:00 stderr F I0813 20:08:43.608699 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=5m50s&timeoutSeconds=350&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:44.909074679+00:00 stderr F I0813 20:08:44.907690 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32890&timeout=8m34s&timeoutSeconds=514&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:48.575034265+00:00 stderr F I0813 20:08:48.574857 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=32843&timeout=7m29s&timeoutSeconds=449&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:57.009273752+00:00 stderr F I0813 20:08:57.009001 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32566&timeout=9m52s&timeoutSeconds=592&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:08:58.719116385+00:00 stderr F I0813 20:08:58.718969 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32890&timeout=5m50s&timeoutSeconds=350&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:09:00.268765025+00:00 stderr F I0813 20:09:00.265320 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32800&timeout=8m22s&timeoutSeconds=502&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off
2025-08-13T20:09:03.686105372+00:00 stderr F I0813 20:09:03.685709 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass closed with: too old resource version: 32890 (32913)
2025-08-13T20:09:03.686105372+00:00 stderr F I0813 20:09:03.685724 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity closed with: too old resource version: 32843 (32913)
2025-08-13T20:09:10.505736656+00:00 stderr F I0813 20:09:10.498204 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:09:10.505736656+00:00 stderr F I0813 20:09:10.498512 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:09:10.505736656+00:00 stderr F I0813 20:09:10.498673 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:09:10.505736656+00:00 stderr F I0813 20:09:10.498682 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:09:10.516154315+00:00 stderr F I0813 20:09:10.509149 1 connection.go:251] GRPC response: {"available_capacity":51679956992,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:09:10.516154315+00:00 stderr F I0813 20:09:10.509163 1 connection.go:252] GRPC error:
2025-08-13T20:09:10.516154315+00:00 stderr F I0813 20:09:10.509258 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50468708Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:09:10.536886979+00:00 stderr F I0813 20:09:10.536467 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33017 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50468708Ki
2025-08-13T20:09:31.302131848+00:00 stderr F I0813 20:09:31.300214 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim closed with: too old resource version: 32890 (32913)
2025-08-13T20:09:32.674389312+00:00 stderr F I0813 20:09:32.674302 1 reflector.go:445] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume closed with: too old resource version: 32566 (32913)
2025-08-13T20:09:34.049559569+00:00 stderr F I0813 20:09:34.049452 1 reflector.go:325] Listing and watching *v1.CSIStorageCapacity from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:09:34.054219852+00:00 stderr F I0813 20:09:34.054183 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33017 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:09:34.387063665+00:00 stderr F I0813 20:09:34.386506 1 reflector.go:325] Listing and watching *v1.StorageClass from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848
2025-08-13T20:09:42.139423570+00:00 stderr F I0813 20:09:42.139143 1 reflector.go:445] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass closed with: too old resource version: 32800 (32913)
2025-08-13T20:10:10.500175528+00:00 stderr F I0813 20:10:10.499530 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:10:10.500175528+00:00 stderr F I0813 20:10:10.499996 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:10:10.500258670+00:00 stderr F I0813 20:10:10.500203 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:10:10.505378987+00:00 stderr F I0813 20:10:10.500259 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:10:10.508842676+00:00 stderr F I0813 20:10:10.506508 1 connection.go:251] GRPC response: {"available_capacity":51678912512,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:10:10.508842676+00:00 stderr F I0813 20:10:10.506547 1 connection.go:252] GRPC error:
2025-08-13T20:10:10.508842676+00:00 stderr F I0813 20:10:10.506612 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50467688Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:10:10.525308928+00:00 stderr F I0813 20:10:10.525178 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33141 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:10:10.525308928+00:00 stderr F I0813 20:10:10.525256 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33141 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50467688Ki
2025-08-13T20:10:11.134723951+00:00 stderr F I0813 20:10:11.134630 1 reflector.go:325] Listing and watching *v1.PersistentVolume from sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845
2025-08-13T20:10:11.141076763+00:00 stderr F I0813 20:10:11.139712 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},}
2025-08-13T20:10:11.141076763+00:00 stderr F I0813 20:10:11.140144 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97"
2025-08-13T20:10:11.141076763+00:00 stderr F I0813 20:10:11.140154 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released
2025-08-13T20:10:11.329857436+00:00 stderr F I0813 20:10:11.325844 1 reflector.go:325] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:10:36.518379150+00:00 stderr F I0813 20:10:36.518068 1 reflector.go:325] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:150
2025-08-13T20:10:36.527029198+00:00 stderr F I0813 20:10:36.526895 1 capacity.go:373] Capacity Controller: storage class crc-csi-hostpath-provisioner was updated or added
2025-08-13T20:10:36.527029198+00:00 stderr F I0813 20:10:36.527007 1 capacity.go:480] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:10:36.527070579+00:00 stderr F I0813 20:10:36.527043 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:10:36.527211473+00:00 stderr F I0813 20:10:36.527164 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:10:36.527713867+00:00 stderr F I0813 20:10:36.527203 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:10:36.529408726+00:00 stderr F I0813 20:10:36.529332 1 connection.go:251] GRPC response: {"available_capacity":51647598592,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:10:36.529408726+00:00 stderr F I0813 20:10:36.529359 1 connection.go:252] GRPC error:
2025-08-13T20:10:36.529526919+00:00 stderr F I0813 20:10:36.529439 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50437108Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:10:36.549439450+00:00 stderr F I0813 20:10:36.549272 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33214 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50437108Ki
2025-08-13T20:10:36.549439450+00:00 stderr F I0813 20:10:36.549307 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33214 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:11:10.500524073+00:00 stderr F I0813 20:11:10.500223 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:11:10.500600195+00:00 stderr F I0813 20:11:10.500577 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:11:10.500900653+00:00 stderr F I0813 20:11:10.500838 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:11:10.501486520+00:00 stderr F I0813 20:11:10.500871 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:11:10.504303841+00:00 stderr F I0813 20:11:10.504273 1 connection.go:251] GRPC response: {"available_capacity":51642920960,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:11:10.504357362+00:00 stderr F I0813 20:11:10.504341 1 connection.go:252] GRPC error:
2025-08-13T20:11:10.504544088+00:00 stderr F I0813 20:11:10.504452 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50432540Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:11:10.524081448+00:00 stderr F I0813 20:11:10.524021 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33386 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50432540Ki
2025-08-13T20:11:10.527750263+00:00 stderr F I0813 20:11:10.525990 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33386 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:12:10.502114835+00:00 stderr F I0813 20:12:10.501553 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:12:10.502316631+00:00 stderr F I0813 20:12:10.502191 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:12:10.502569629+00:00 stderr F I0813 20:12:10.502518 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:12:10.503161966+00:00 stderr F I0813 20:12:10.502541 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:12:10.505836932+00:00 stderr F I0813 20:12:10.505727 1 connection.go:251] GRPC response: {"available_capacity":51640741888,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:12:10.505978606+00:00 stderr F I0813 20:12:10.505891 1 connection.go:252] GRPC error:
2025-08-13T20:12:10.506156291+00:00 stderr F I0813 20:12:10.506014 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50430412Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:12:10.524858878+00:00 stderr F I0813 20:12:10.524703 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33496 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50430412Ki
2025-08-13T20:12:10.525074764+00:00 stderr F I0813 20:12:10.524990 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33496 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:13:10.502418991+00:00 stderr F I0813 20:13:10.502124 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:13:10.502418991+00:00 stderr F I0813 20:13:10.502280 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:13:10.502514664+00:00 stderr F I0813 20:13:10.502454 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:13:10.502838033+00:00 stderr F I0813 20:13:10.502495 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:13:10.504610113+00:00 stderr F I0813 20:13:10.504575 1 connection.go:251] GRPC response: {"available_capacity":51640733696,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:13:10.504610113+00:00 stderr F I0813 20:13:10.504588 1 connection.go:252] GRPC error:
2025-08-13T20:13:10.504770328+00:00 stderr F I0813 20:13:10.504630 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50430404Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:13:10.527331795+00:00 stderr F I0813 20:13:10.527240 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33585 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:13:10.527486770+00:00 stderr F I0813 20:13:10.527375 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33585 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50430404Ki
2025-08-13T20:14:10.503254435+00:00 stderr F I0813 20:14:10.502892 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:14:10.503254435+00:00 stderr F I0813 20:14:10.503054 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:14:10.503254435+00:00 stderr F I0813 20:14:10.503103 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:14:10.503470731+00:00 stderr F I0813 20:14:10.503113 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:14:10.505520070+00:00 stderr F I0813 20:14:10.505426 1 connection.go:251] GRPC response: {"available_capacity":51640299520,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:14:10.505520070+00:00 stderr F I0813 20:14:10.505470 1 connection.go:252] GRPC error:
2025-08-13T20:14:10.505621053+00:00 stderr F I0813 20:14:10.505495 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50429980Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:14:10.526839471+00:00 stderr F I0813 20:14:10.526667 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33694 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50429980Ki
2025-08-13T20:14:10.527307325+00:00 stderr F I0813 20:14:10.527213 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33694 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:15:10.504389959+00:00 stderr F I0813 20:15:10.504159 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:15:10.504389959+00:00 stderr F I0813 20:15:10.504294 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:15:10.504389959+00:00 stderr F I0813 20:15:10.504337 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:15:10.504606455+00:00 stderr F I0813 20:15:10.504343 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:15:10.507081435+00:00 stderr F I0813 20:15:10.507023 1 connection.go:251] GRPC response: {"available_capacity":51639975936,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:15:10.507081435+00:00 stderr F I0813 20:15:10.507056 1 connection.go:252] GRPC error:
2025-08-13T20:15:10.507177058+00:00 stderr F I0813 20:15:10.507079 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50429664Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:15:10.556052854+00:00 stderr F I0813 20:15:10.555982 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33862 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50429664Ki
2025-08-13T20:15:10.556361263+00:00 stderr F I0813 20:15:10.556289 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33862 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:16:03.530489450+00:00 stderr F I0813 20:16:03.530193 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 6 items received
2025-08-13T20:16:10.505406177+00:00 stderr F I0813 20:16:10.505244 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:16:10.505517601+00:00 stderr F I0813 20:16:10.505459 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:16:10.505533831+00:00 stderr F I0813 20:16:10.505515 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:16:10.505725797+00:00 stderr F I0813 20:16:10.505523 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:16:10.507020333+00:00 stderr F I0813 20:16:10.506933 1 connection.go:251] GRPC response: {"available_capacity":51641487360,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:16:10.507020333+00:00 stderr F I0813 20:16:10.506989 1 connection.go:252] GRPC error:
2025-08-13T20:16:10.507103596+00:00 stderr F I0813 20:16:10.507039 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50431140Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:16:10.535623750+00:00 stderr F I0813 20:16:10.535510 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 33987 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:16:10.536485935+00:00 stderr F I0813 20:16:10.536359 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 33987 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50431140Ki
2025-08-13T20:17:10.515615905+00:00 stderr F I0813 20:17:10.512414 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:17:10.515615905+00:00 stderr F I0813 20:17:10.512837 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:17:10.515615905+00:00 stderr F I0813 20:17:10.513068 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:17:10.515615905+00:00 stderr F I0813 20:17:10.513080 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:17:10.521410071+00:00 stderr F I0813 20:17:10.521268 1 connection.go:251] GRPC response: {"available_capacity":51120128000,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:17:10.521410071+00:00 stderr F I0813 20:17:10.521396 1 connection.go:252] GRPC error:
2025-08-13T20:17:10.521606207+00:00 stderr F I0813 20:17:10.521489 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 49922000Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:17:10.625718320+00:00 stderr F I0813 20:17:10.625536 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34153 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 49922000Ki
2025-08-13T20:17:10.627687856+00:00 stderr F I0813 20:17:10.627461 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34153 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:17:25.400881884+00:00 stderr F I0813 20:17:25.400545 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 9 items received
2025-08-13T20:17:36.059047411+00:00 stderr F I0813 20:17:36.055432 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 18 items received
2025-08-13T20:17:53.142821593+00:00 stderr F I0813 20:17:53.142547 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 9 items received
2025-08-13T20:18:10.516188997+00:00 stderr F I0813 20:18:10.515307 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:18:10.516438474+00:00 stderr F I0813 20:18:10.516409 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:18:10.517535065+00:00 stderr F I0813 20:18:10.516641 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:18:10.518358489+00:00 stderr F I0813 20:18:10.517614 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:18:10.532150993+00:00 stderr F I0813 20:18:10.531300 1 connection.go:251] GRPC response: {"available_capacity":49833275392,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:18:10.532150993+00:00 stderr F I0813 20:18:10.531345 1 connection.go:252] GRPC error:
2025-08-13T20:18:10.532150993+00:00 stderr F I0813 20:18:10.531403 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48665308Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:18:11.155853054+00:00 stderr F I0813 20:18:11.155701 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34319 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:18:11.156017789+00:00 stderr F I0813 20:18:11.155957 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34319 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48665308Ki
2025-08-13T20:18:17.333243462+00:00 stderr F I0813 20:18:17.333025 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 9 items received
2025-08-13T20:19:10.517415961+00:00 stderr F I0813 20:19:10.517177 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:19:10.517690579+00:00 stderr F I0813 20:19:10.517663 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:19:10.517998237+00:00 stderr F I0813 20:19:10.517936 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:19:10.518746609+00:00 stderr F I0813 20:19:10.518045 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:19:10.524400020+00:00 stderr F I0813 20:19:10.523373 1 connection.go:251] GRPC response: {"available_capacity":49260212224,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:19:10.524400020+00:00 stderr F I0813 20:19:10.523409 1 connection.go:252] GRPC error:
2025-08-13T20:19:10.524400020+00:00 stderr F I0813 20:19:10.523467 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48105676Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:19:10.548734825+00:00 stderr F I0813 20:19:10.547481 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34447 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48105676Ki
2025-08-13T20:19:10.548734825+00:00 stderr F I0813 20:19:10.547860 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34447 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:20:10.518768340+00:00 stderr F I0813 20:20:10.517892 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:20:10.518768340+00:00 stderr F I0813 20:20:10.518171 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:20:10.518768340+00:00 stderr F I0813 20:20:10.518324 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:20:10.518768340+00:00 stderr F I0813 20:20:10.518330 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:20:10.520223671+00:00 stderr F I0813 20:20:10.520172 1 connection.go:251] GRPC response: {"available_capacity":51637207040,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:20:10.520462378+00:00 stderr F I0813 20:20:10.520441 1 connection.go:252] GRPC error:
2025-08-13T20:20:10.520596282+00:00 stderr F I0813 20:20:10.520548 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50426960Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:20:10.562236352+00:00 stderr F I0813 20:20:10.561759 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34570 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:20:10.562236352+00:00 stderr F I0813 20:20:10.562004 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34570 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50426960Ki
2025-08-13T20:21:10.520579165+00:00 stderr F I0813 20:21:10.520199 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:21:10.520690438+00:00 stderr F I0813 20:21:10.520562 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:21:10.520998747+00:00 stderr F I0813 20:21:10.520895 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:21:10.521716037+00:00 stderr F I0813 20:21:10.520905 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:21:10.524537428+00:00 stderr F I0813 20:21:10.524468 1 connection.go:251] GRPC response: {"available_capacity":51637256192,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:21:10.524537428+00:00 stderr F I0813 20:21:10.524501 1 connection.go:252] GRPC error:
2025-08-13T20:21:10.525048933+00:00 stderr F I0813 20:21:10.524594 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50427008Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:21:10.583213255+00:00 stderr F I0813 20:21:10.583011 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34707 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:21:10.583606476+00:00 stderr F I0813 20:21:10.583515 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34707 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50427008Ki
2025-08-13T20:22:10.521492087+00:00 stderr F I0813 20:22:10.520940 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:22:10.521492087+00:00 stderr F I0813 20:22:10.521372 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:22:10.522094324+00:00 stderr F I0813 20:22:10.521708 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:22:10.522720792+00:00 stderr F I0813 20:22:10.521741 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:22:10.525556303+00:00 stderr F I0813 20:22:10.525450 1 connection.go:251] GRPC response: {"available_capacity":51641798656,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:22:10.525556303+00:00 stderr F I0813 20:22:10.525489 1 connection.go:252] GRPC error:
2025-08-13T20:22:10.525664026+00:00 stderr F I0813 20:22:10.525569 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50431444Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:22:10.541904360+00:00 stderr F I0813 20:22:10.541672 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34825 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:22:10.544800883+00:00 stderr F I0813 20:22:10.544685 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34825 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50431444Ki
2025-08-13T20:23:10.527052118+00:00 stderr F I0813 20:23:10.523137 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:23:10.527052118+00:00 stderr F I0813 20:23:10.523664 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:23:10.527052118+00:00 stderr F I0813 20:23:10.524175 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:23:10.527052118+00:00 stderr F I0813 20:23:10.524187 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:23:10.530033843+00:00 stderr F I0813 20:23:10.528753 1 connection.go:251] GRPC response: {"available_capacity":51645571072,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:23:10.530033843+00:00 stderr F I0813 20:23:10.528970 1 connection.go:252] GRPC error:
2025-08-13T20:23:10.530033843+00:00 stderr F I0813 20:23:10.529061 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50435128Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:23:10.544002692+00:00 stderr F I0813 20:23:10.543885 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 34939 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50435128Ki
2025-08-13T20:23:10.544418204+00:00 stderr F I0813 20:23:10.544368 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 34939 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:23:46.064547252+00:00 stderr F I0813 20:23:46.063669 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 13 items received
2025-08-13T20:24:10.525005884+00:00 stderr F I0813 20:24:10.524701 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:24:10.525238161+00:00 stderr F I0813 20:24:10.525152 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:24:10.525447487+00:00 stderr F I0813 20:24:10.525375 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:24:10.526138997+00:00 stderr F I0813 20:24:10.525406 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:24:10.528528825+00:00 stderr F I0813 20:24:10.528481 1 connection.go:251] GRPC response: {"available_capacity":51577241600,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:24:10.528583917+00:00 stderr F I0813 20:24:10.528567 1 connection.go:252] GRPC error:
2025-08-13T20:24:10.528880635+00:00 stderr F I0813 20:24:10.528721 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50368400Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:24:10.544500702+00:00 stderr F I0813 20:24:10.544404 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35044 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50368400Ki
2025-08-13T20:24:10.544734719+00:00 stderr F I0813 20:24:10.544673 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35044 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:24:21.337206815+00:00 stderr F I0813 20:24:21.336881 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 7 items received
2025-08-13T20:24:27.539097402+00:00 stderr F I0813 20:24:27.537890 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 9 items received
2025-08-13T20:24:48.149301469+00:00 stderr F I0813 20:24:48.149070 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 8 items received
2025-08-13T20:25:10.525298092+00:00 stderr F I0813 20:25:10.525174 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update
2025-08-13T20:25:10.525392985+00:00 stderr F I0813 20:25:10.525296 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:25:10.525691593+00:00 stderr F I0813 20:25:10.525633 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity
2025-08-13T20:25:10.526530837+00:00 stderr F I0813 20:25:10.525670 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]}
2025-08-13T20:25:10.528561555+00:00 stderr F I0813 20:25:10.528481 1 connection.go:251] GRPC response: {"available_capacity":51577327616,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}}
2025-08-13T20:25:10.528561555+00:00 stderr F I0813 20:25:10.528519 1 connection.go:252] GRPC error:
2025-08-13T20:25:10.528626707+00:00 stderr F I0813 20:25:10.528560 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50368484Ki, new maximumVolumeSize 83295212Ki
2025-08-13T20:25:10.566050317+00:00 stderr F I0813 20:25:10.565076 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35166 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50368484Ki
2025-08-13T20:25:10.566050317+00:00 stderr F I0813 20:25:10.565288 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35166 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}
2025-08-13T20:25:11.140600555+00:00 stderr F I0813 20:25:11.140487 1 reflector.go:378] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: forcing resync
2025-08-13T20:25:11.141747908+00:00 stderr F I0813 20:25:11.140929 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},}
2025-08-13T20:25:11.141747908+00:00 stderr F I0813 20:25:11.141596 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97"
2025-08-13T20:25:11.141747908+00:00 stderr F I0813 20:25:11.141725 1 controller.go:1260] shouldDelete
volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released 2025-08-13T20:25:11.332425400+00:00 stderr F I0813 20:25:11.330959 1 reflector.go:378] k8s.io/client-go/informers/factory.go:150: forcing resync 2025-08-13T20:26:07.408562415+00:00 stderr F I0813 20:26:07.408258 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 10 items received 2025-08-13T20:26:10.525908641+00:00 stderr F I0813 20:26:10.525712 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:26:10.526061135+00:00 stderr F I0813 20:26:10.525979 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:26:10.526190229+00:00 stderr F I0813 20:26:10.526140 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:26:10.526542699+00:00 stderr F I0813 20:26:10.526164 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:26:10.528577897+00:00 stderr F I0813 20:26:10.528486 1 connection.go:251] GRPC response: {"available_capacity":51575119872,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:26:10.528577897+00:00 stderr F I0813 20:26:10.528540 1 connection.go:252] GRPC error: 2025-08-13T20:26:10.528706281+00:00 stderr F I0813 20:26:10.528596 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50366328Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:26:10.540600391+00:00 stderr F I0813 20:26:10.540490 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource 
version 35280 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:26:10.540904880+00:00 stderr F I0813 20:26:10.540752 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35280 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50366328Ki 2025-08-13T20:27:10.531099845+00:00 stderr F I0813 20:27:10.527220 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:27:10.531099845+00:00 stderr F I0813 20:27:10.527530 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:27:10.531099845+00:00 stderr F I0813 20:27:10.527858 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:27:10.531099845+00:00 stderr F I0813 20:27:10.527870 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:27:10.562296477+00:00 stderr F I0813 20:27:10.562169 1 connection.go:251] GRPC response: {"available_capacity":51174940672,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:27:10.562374879+00:00 stderr F I0813 20:27:10.562202 1 connection.go:252] GRPC error: 2025-08-13T20:27:10.562616176+00:00 stderr F I0813 20:27:10.562443 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 49975528Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:27:10.621948322+00:00 stderr F I0813 20:27:10.621763 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35442 is already known to match {segment:0xc0001296f8 
storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:27:10.622171959+00:00 stderr F I0813 20:27:10.622064 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35442 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 49975528Ki 2025-08-13T20:28:10.528586639+00:00 stderr F I0813 20:28:10.528237 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:28:10.528586639+00:00 stderr F I0813 20:28:10.528479 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:28:10.529034552+00:00 stderr F I0813 20:28:10.528970 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:28:10.529561847+00:00 stderr F I0813 20:28:10.529026 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:28:10.532628515+00:00 stderr F I0813 20:28:10.532243 1 connection.go:251] GRPC response: {"available_capacity":51567644672,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:28:10.532681407+00:00 stderr F I0813 20:28:10.532664 1 connection.go:252] GRPC error: 2025-08-13T20:28:10.532953675+00:00 stderr F I0813 20:28:10.532871 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50359028Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:28:10.559875859+00:00 stderr F I0813 20:28:10.556836 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35577 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:28:10.559875859+00:00 stderr F I0813 
20:28:10.557475 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35577 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50359028Ki 2025-08-13T20:29:10.529715136+00:00 stderr F I0813 20:29:10.529278 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:29:10.529715136+00:00 stderr F I0813 20:29:10.529606 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:29:10.530144499+00:00 stderr F I0813 20:29:10.530084 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:29:10.530790507+00:00 stderr F I0813 20:29:10.530110 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:29:10.535148743+00:00 stderr F I0813 20:29:10.534987 1 connection.go:251] GRPC response: {"available_capacity":51567120384,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:29:10.535148743+00:00 stderr F I0813 20:29:10.535126 1 connection.go:252] GRPC error: 2025-08-13T20:29:10.535486382+00:00 stderr F I0813 20:29:10.535285 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50358516Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:29:10.549869346+00:00 stderr F I0813 20:29:10.549666 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35709 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:29:10.549869346+00:00 stderr F I0813 20:29:10.549727 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35709 
for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50358516Ki 2025-08-13T20:30:10.530567451+00:00 stderr F I0813 20:30:10.530088 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:30:10.530567451+00:00 stderr F I0813 20:30:10.530249 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:30:10.530567451+00:00 stderr F I0813 20:30:10.530382 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:30:10.530663024+00:00 stderr F I0813 20:30:10.530389 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:30:10.535724309+00:00 stderr F I0813 20:30:10.535645 1 connection.go:251] GRPC response: {"available_capacity":49228713984,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:30:10.535724309+00:00 stderr F I0813 20:30:10.535675 1 connection.go:252] GRPC error: 2025-08-13T20:30:10.535972716+00:00 stderr F I0813 20:30:10.535726 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48074916Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:30:10.550323769+00:00 stderr F I0813 20:30:10.550147 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 35890 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48074916Ki 2025-08-13T20:30:10.550751311+00:00 stderr F I0813 20:30:10.550690 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 35890 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 
2025-08-13T20:30:27.073297549+00:00 stderr F I0813 20:30:27.073049 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 14 items received 2025-08-13T20:30:43.344585819+00:00 stderr F I0813 20:30:43.344328 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 7 items received 2025-08-13T20:30:44.158134804+00:00 stderr F I0813 20:30:44.158056 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 6 items received 2025-08-13T20:31:10.530875095+00:00 stderr F I0813 20:31:10.530527 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:31:10.530875095+00:00 stderr F I0813 20:31:10.530696 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:31:10.530951338+00:00 stderr F I0813 20:31:10.530912 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:31:10.531405681+00:00 stderr F I0813 20:31:10.530922 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:31:10.532597835+00:00 stderr F I0813 20:31:10.532491 1 connection.go:251] GRPC response: {"available_capacity":51566354432,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:31:10.532597835+00:00 stderr F I0813 20:31:10.532523 1 connection.go:252] GRPC error: 2025-08-13T20:31:10.532734639+00:00 stderr F I0813 20:31:10.532566 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50357768Ki, new maximumVolumeSize 83295212Ki 
2025-08-13T20:31:10.544753214+00:00 stderr F I0813 20:31:10.544528 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36027 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:31:10.545745613+00:00 stderr F I0813 20:31:10.545608 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36027 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50357768Ki 2025-08-13T20:32:10.531836828+00:00 stderr F I0813 20:32:10.531466 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:32:10.531836828+00:00 stderr F I0813 20:32:10.531705 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:32:10.532556459+00:00 stderr F I0813 20:32:10.532005 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:32:10.532820336+00:00 stderr F I0813 20:32:10.532035 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:32:10.534975388+00:00 stderr F I0813 20:32:10.534850 1 connection.go:251] GRPC response: {"available_capacity":51568164864,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:32:10.534975388+00:00 stderr F I0813 20:32:10.534875 1 connection.go:252] GRPC error: 2025-08-13T20:32:10.535175014+00:00 stderr F I0813 20:32:10.534946 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50359536Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:32:10.553530152+00:00 stderr F I0813 20:32:10.553268 1 capacity.go:715] Capacity Controller: 
CSIStorageCapacity csisc-f2s8x with resource version 36146 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:32:10.554085678+00:00 stderr F I0813 20:32:10.553945 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36146 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50359536Ki 2025-08-13T20:32:19.542178946+00:00 stderr F I0813 20:32:19.541133 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 9 items received 2025-08-13T20:32:39.416373564+00:00 stderr F I0813 20:32:39.416148 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 7 items received 2025-08-13T20:33:10.533275907+00:00 stderr F I0813 20:33:10.532719 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:33:10.533340859+00:00 stderr F I0813 20:33:10.533270 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:33:10.533504244+00:00 stderr F I0813 20:33:10.533448 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:33:10.534404340+00:00 stderr F I0813 20:33:10.533472 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:33:10.537336664+00:00 stderr F I0813 20:33:10.537200 1 connection.go:251] GRPC response: {"available_capacity":51570118656,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:33:10.537336664+00:00 stderr F I0813 20:33:10.537232 1 connection.go:252] GRPC error: 2025-08-13T20:33:10.537362225+00:00 stderr F I0813 20:33:10.537289 
1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50361444Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:33:10.550672547+00:00 stderr F I0813 20:33:10.550436 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36259 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50361444Ki 2025-08-13T20:33:10.551315246+00:00 stderr F I0813 20:33:10.551053 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36259 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:34:10.533967613+00:00 stderr F I0813 20:34:10.533496 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:34:10.533967613+00:00 stderr F I0813 20:34:10.533736 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:34:10.534398095+00:00 stderr F I0813 20:34:10.534321 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:34:10.536031192+00:00 stderr F I0813 20:34:10.534354 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:34:10.539150562+00:00 stderr F I0813 20:34:10.538875 1 connection.go:251] GRPC response: {"available_capacity":51570245632,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:34:10.539150562+00:00 stderr F I0813 20:34:10.539064 1 connection.go:252] GRPC error: 2025-08-13T20:34:10.539240014+00:00 stderr F I0813 20:34:10.539147 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 
storageClassName:crc-csi-hostpath-provisioner}, new capacity 50361568Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:34:10.554901905+00:00 stderr F I0813 20:34:10.554593 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36359 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:34:10.554901905+00:00 stderr F I0813 20:34:10.554608 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36359 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50361568Ki 2025-08-13T20:35:10.534873013+00:00 stderr F I0813 20:35:10.534360 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:35:10.534985776+00:00 stderr F I0813 20:35:10.534872 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:35:10.535289855+00:00 stderr F I0813 20:35:10.535233 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:35:10.536180410+00:00 stderr F I0813 20:35:10.535261 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:35:10.538021483+00:00 stderr F I0813 20:35:10.537966 1 connection.go:251] GRPC response: {"available_capacity":51569307648,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:35:10.538021483+00:00 stderr F I0813 20:35:10.537992 1 connection.go:252] GRPC error: 2025-08-13T20:35:10.538226819+00:00 stderr F I0813 20:35:10.538058 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50360652Ki, new maximumVolumeSize 83295212Ki 
2025-08-13T20:35:10.552433408+00:00 stderr F I0813 20:35:10.552356 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36494 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50360652Ki 2025-08-13T20:35:10.552599143+00:00 stderr F I0813 20:35:10.552576 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36494 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:36:10.537165098+00:00 stderr F I0813 20:36:10.535607 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:36:10.537165098+00:00 stderr F I0813 20:36:10.535982 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:36:10.537165098+00:00 stderr F I0813 20:36:10.536269 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:36:10.537165098+00:00 stderr F I0813 20:36:10.536277 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:36:10.541271176+00:00 stderr F I0813 20:36:10.541238 1 connection.go:251] GRPC response: {"available_capacity":51568975872,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:36:10.541353548+00:00 stderr F I0813 20:36:10.541337 1 connection.go:252] GRPC error: 2025-08-13T20:36:10.541559594+00:00 stderr F I0813 20:36:10.541489 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50360328Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:36:10.562043683+00:00 stderr F I0813 20:36:10.561909 1 capacity.go:715] Capacity Controller: 
CSIStorageCapacity csisc-f2s8x with resource version 36628 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:36:10.562558298+00:00 stderr F I0813 20:36:10.562329 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36628 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50360328Ki 2025-08-13T20:36:15.162456315+00:00 stderr F I0813 20:36:15.162055 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 6 items received 2025-08-13T20:37:02.083170034+00:00 stderr F I0813 20:37:02.082585 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 13 items received 2025-08-13T20:37:10.537019775+00:00 stderr F I0813 20:37:10.536721 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:37:10.537075267+00:00 stderr F I0813 20:37:10.537017 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:37:10.537088937+00:00 stderr F I0813 20:37:10.537082 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:37:10.537475688+00:00 stderr F I0813 20:37:10.537088 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:37:10.539417454+00:00 stderr F I0813 20:37:10.539325 1 connection.go:251] GRPC response: {"available_capacity":51568967680,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:37:10.539417454+00:00 stderr F I0813 20:37:10.539367 1 connection.go:252] GRPC error: 2025-08-13T20:37:10.539441095+00:00 stderr F I0813 
20:37:10.539415 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 49180Mi, new maximumVolumeSize 83295212Ki 2025-08-13T20:37:10.547160107+00:00 stderr F I0813 20:37:10.547063 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36750 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 49180Mi 2025-08-13T20:37:10.547378274+00:00 stderr F I0813 20:37:10.547148 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36750 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:37:31.353038325+00:00 stderr F I0813 20:37:31.352700 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 7 items received 2025-08-13T20:38:10.537857165+00:00 stderr F I0813 20:38:10.537355 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:38:10.537937767+00:00 stderr F I0813 20:38:10.537858 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:38:10.539092880+00:00 stderr F I0813 20:38:10.539018 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:38:10.539377699+00:00 stderr F I0813 20:38:10.539057 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:38:10.540839291+00:00 stderr F I0813 20:38:10.540716 1 connection.go:251] GRPC response: {"available_capacity":51566211072,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:38:10.540839291+00:00 stderr F I0813 20:38:10.540743 1 
connection.go:252] GRPC error: 2025-08-13T20:38:10.540984755+00:00 stderr F I0813 20:38:10.540877 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50357628Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:38:10.575113129+00:00 stderr F I0813 20:38:10.574998 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 36879 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:38:10.575113129+00:00 stderr F I0813 20:38:10.575017 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 36879 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50357628Ki 2025-08-13T20:39:10.539029861+00:00 stderr F I0813 20:39:10.538567 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:39:10.539123284+00:00 stderr F I0813 20:39:10.539028 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:39:10.539391742+00:00 stderr F I0813 20:39:10.539323 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:39:10.540115593+00:00 stderr F I0813 20:39:10.539350 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:39:10.542366978+00:00 stderr F I0813 20:39:10.542210 1 connection.go:251] GRPC response: {"available_capacity":51565502464,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:39:10.542366978+00:00 stderr F I0813 20:39:10.542237 1 connection.go:252] GRPC error: 2025-08-13T20:39:10.542366978+00:00 stderr F I0813 20:39:10.542316 1 
capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50356936Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:39:10.555006812+00:00 stderr F I0813 20:39:10.554849 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 37007 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50356936Ki 2025-08-13T20:39:10.555279120+00:00 stderr F I0813 20:39:10.555226 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37007 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:40:10.540322852+00:00 stderr F I0813 20:40:10.540041 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:40:10.540322852+00:00 stderr F I0813 20:40:10.540237 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:40:10.540322852+00:00 stderr F I0813 20:40:10.540309 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:40:10.540570459+00:00 stderr F I0813 20:40:10.540314 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:40:10.542059762+00:00 stderr F I0813 20:40:10.541991 1 connection.go:251] GRPC response: {"available_capacity":51564032000,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:40:10.542059762+00:00 stderr F I0813 20:40:10.542021 1 connection.go:252] GRPC error: 2025-08-13T20:40:10.542082482+00:00 stderr F I0813 20:40:10.542041 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 
storageClassName:crc-csi-hostpath-provisioner}, new capacity 50355500Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:40:10.555260212+00:00 stderr F I0813 20:40:10.555106 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 37130 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50355500Ki 2025-08-13T20:40:10.555260212+00:00 stderr F I0813 20:40:10.555237 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37130 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:40:11.141967927+00:00 stderr F I0813 20:40:11.141843 1 reflector.go:378] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: forcing resync 2025-08-13T20:40:11.142890584+00:00 stderr F I0813 20:40:11.142131 1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 4bd4486a-d347-4705-822c-1c402df66985 20233 0 2024-06-27 13:21:41 +0000 UTC map[] map[pv.kubernetes.io/provisioned-by:kubevirt.io.hostpath-provisioner volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection] [{csi-provisioner Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:volumeAttributes":{".":{},"f:csi.storage.k8s.io/pv/name":{},"f:csi.storage.k8s.io/pvc/name":{},"f:csi.storage.k8s.io/pvc/namespace":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{},"f:storagePool":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {kube-controller-manager Update v1 2024-06-27 13:21:41 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{32212254720 0} {} 30Gi BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:kubevirt.io.hostpath-provisioner,VolumeHandle:pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,ReadOnly:false,FSType:,VolumeAttributes:map[string]string{csi.storage.k8s.io/pv/name: pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97,csi.storage.k8s.io/pvc/name: crc-image-registry-storage,csi.storage.k8s.io/pvc/namespace: openshift-image-registry,storage.kubernetes.io/csiProvisionerIdentity: 1719494501704-84-kubevirt.io.hostpath-provisioner-crc,storagePool: 
local,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteMany],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:openshift-image-registry,Name:crc-image-registry-storage,UID:f5d86efc-9248-4b55-9b8b-23cf63fe9e97,APIVersion:v1,ResourceVersion:17977,FieldPath:,},PersistentVolumeReclaimPolicy:Retain,StorageClassName:crc-csi-hostpath-provisioner,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.hostpath.csi/node,Operator:In,Values:[crc],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Bound,Message:,Reason:,LastPhaseTransitionTime:2024-06-27 13:21:41 +0000 UTC,},} 2025-08-13T20:40:11.142890584+00:00 stderr F I0813 20:40:11.142763 1 controller.go:1239] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" 2025-08-13T20:40:11.142914225+00:00 stderr F I0813 20:40:11.142894 1 controller.go:1260] shouldDelete volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" is false: PersistentVolumePhase is not Released 2025-08-13T20:40:11.331528343+00:00 stderr F I0813 20:40:11.331348 1 reflector.go:378] k8s.io/client-go/informers/factory.go:150: forcing resync 2025-08-13T20:41:10.541671996+00:00 stderr F I0813 20:41:10.541162 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:41:10.541847171+00:00 stderr F I0813 20:41:10.541674 1 capacity.go:574] Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:41:10.542047867+00:00 stderr F I0813 20:41:10.541965 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:41:10.544402995+00:00 stderr F I0813 20:41:10.542001 1 
connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:41:10.546830845+00:00 stderr F I0813 20:41:10.546706 1 connection.go:251] GRPC response: {"available_capacity":51564699648,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:41:10.546830845+00:00 stderr F I0813 20:41:10.546734 1 connection.go:252] GRPC error: 2025-08-13T20:41:10.546981129+00:00 stderr F I0813 20:41:10.546873 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 50356152Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:41:10.566679587+00:00 stderr F I0813 20:41:10.566585 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37251 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:41:10.566918594+00:00 stderr F I0813 20:41:10.566893 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 37251 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 50356152Ki 2025-08-13T20:41:43.426306873+00:00 stderr F I0813 20:41:43.424528 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 9 items received 2025-08-13T20:41:52.548745175+00:00 stderr F I0813 20:41:52.546570 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 10 items received 2025-08-13T20:42:10.544697742+00:00 stderr F I0813 20:42:10.542362 1 capacity.go:518] Capacity Controller: enqueuing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} for periodic update 2025-08-13T20:42:10.544697742+00:00 stderr F I0813 20:42:10.542512 1 capacity.go:574] 
Capacity Controller: refreshing {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:42:10.544697742+00:00 stderr F I0813 20:42:10.542703 1 connection.go:244] GRPC call: /csi.v1.Controller/GetCapacity 2025-08-13T20:42:10.547095161+00:00 stderr F I0813 20:42:10.542711 1 connection.go:245] GRPC request: {"accessible_topology":{"segments":{"topology.hostpath.csi/node":"crc"}},"parameters":{"storagePool":"local"},"volume_capabilities":[{"AccessType":{"Mount":null},"access_mode":{}}]} 2025-08-13T20:42:10.556140592+00:00 stderr F I0813 20:42:10.555855 1 connection.go:251] GRPC response: {"available_capacity":49223258112,"maximum_volume_size":{"value":85294297088},"minimum_volume_size":{}} 2025-08-13T20:42:10.556140592+00:00 stderr F I0813 20:42:10.555892 1 connection.go:252] GRPC error: 2025-08-13T20:42:10.556140592+00:00 stderr F I0813 20:42:10.555940 1 capacity.go:667] Capacity Controller: updating csisc-f2s8x for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner}, new capacity 48069588Ki, new maximumVolumeSize 83295212Ki 2025-08-13T20:42:10.582284396+00:00 stderr F I0813 20:42:10.582197 1 capacity.go:672] Capacity Controller: updated csisc-f2s8x with new resource version 37403 for {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} with capacity 48069588Ki 2025-08-13T20:42:10.584191091+00:00 stderr F I0813 20:42:10.584074 1 capacity.go:715] Capacity Controller: CSIStorageCapacity csisc-f2s8x with resource version 37403 is already known to match {segment:0xc0001296f8 storageClassName:crc-csi-hostpath-provisioner} 2025-08-13T20:42:36.366462995+00:00 stderr F I0813 20:42:36.366327 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.366722473+00:00 stderr F I0813 20:42:36.366690 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: Watch close - *v1.StorageClass total 0 items received 
2025-08-13T20:42:36.384868956+00:00 stderr F I0813 20:42:36.372885 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.384868956+00:00 stderr F I0813 20:42:36.373023 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.CSIStorageCapacity total 11 items received 2025-08-13T20:42:36.384868956+00:00 stderr F I0813 20:42:36.378923 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.384868956+00:00 stderr F I0813 20:42:36.378981 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.PersistentVolumeClaim total 5 items received 2025-08-13T20:42:36.417150636+00:00 stderr F I0813 20:42:36.406967 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.417150636+00:00 stderr F I0813 20:42:36.407104 1 reflector.go:790] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: Watch close - *v1.PersistentVolume total 6 items received 2025-08-13T20:42:36.481197803+00:00 stderr F I0813 20:42:36.479176 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.492491459+00:00 stderr F I0813 20:42:36.490471 1 reflector.go:790] k8s.io/client-go/informers/factory.go:150: Watch close - *v1.StorageClass total 0 items received 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.604957 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=37526&timeout=7m4s&timeoutSeconds=424&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.605095 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37354&timeout=5m12s&timeoutSeconds=312&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.605145 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=37403&timeout=8m35s&timeoutSeconds=515&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.605333 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37332&timeout=6m59s&timeoutSeconds=419&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:36.607854644+00:00 stderr F I0813 20:42:36.606900 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=37407&timeout=9m54s&timeoutSeconds=594&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:37.849300036+00:00 stderr F I0813 20:42:37.849155 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get 
"https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=37407&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:37.870661042+00:00 stderr F I0813 20:42:37.870079 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37332&timeout=5m8s&timeoutSeconds=308&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:38.091716185+00:00 stderr F I0813 20:42:38.089119 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=37526&timeout=5m45s&timeoutSeconds=345&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:38.150465799+00:00 stderr F I0813 20:42:38.139752 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=37403&timeout=5m25s&timeoutSeconds=325&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:38.205443034+00:00 stderr F I0813 20:42:38.205106 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37354&timeout=6m40s&timeoutSeconds=400&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 
2025-08-13T20:42:40.059831485+00:00 stderr F I0813 20:42:40.059687 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:848: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37332&timeout=6m21s&timeoutSeconds=381&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:40.085673111+00:00 stderr F I0813 20:42:40.085582 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.StorageClass returned Get "https://10.217.4.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=37354&timeout=7m46s&timeoutSeconds=466&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:40.347258672+00:00 stderr F I0813 20:42:40.347129 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://10.217.4.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=37526&timeout=5m7s&timeoutSeconds=307&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:40.721875683+00:00 stderr F I0813 20:42:40.721701 1 reflector.go:421] sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/controller.go:845: watch of *v1.PersistentVolume returned Get "https://10.217.4.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=37407&timeout=7m56s&timeoutSeconds=476&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off 2025-08-13T20:42:41.306913760+00:00 stderr F I0813 20:42:41.306567 1 reflector.go:421] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIStorageCapacity returned Get 
"https://10.217.4.1:443/apis/storage.k8s.io/v1/namespaces/hostpath-provisioner/csistoragecapacities?allowWatchBookmarks=true&labelSelector=csi.storage.k8s.io%2Fdrivername%3Dkubevirt.io.hostpath-provisioner%2Ccsi.storage.k8s.io%2Fmanaged-by%3Dexternal-provisioner-crc&resourceVersion=37403&timeout=5m13s&timeoutSeconds=313&watch=true": dial tcp 10.217.4.1:443: connect: connection refused - backing off ././@LongLink0000644000000000000000000000026100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000755000175000017500000000000015117130654033167 5ustar zuulzuul././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000000061415117130647033174 0ustar zuulzuul2025-12-13T00:13:21.487779809+00:00 stderr F I1213 00:13:21.487714 1 main.go:149] calling CSI driver to discover driver name 2025-12-13T00:13:21.507456720+00:00 stderr F I1213 00:13:21.500167 1 main.go:155] CSI driver name: "kubevirt.io.hostpath-provisioner" 2025-12-13T00:13:21.507456720+00:00 stderr F I1213 00:13:21.500190 1 main.go:183] ServeMux listening at "0.0.0.0:9898" ././@LongLink0000644000000000000000000000026600000000000011607 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_c0000644000175000017500000000061415117130647033174 0ustar zuulzuul2025-08-13T19:59:57.766335818+00:00 stderr F I0813 19:59:57.747274 1 main.go:149] calling CSI driver to discover driver name 2025-08-13T19:59:58.592328403+00:00 stderr F I0813 19:59:58.579963 1 main.go:155] CSI driver name: "kubevirt.io.hostpath-provisioner" 2025-08-13T19:59:58.592328403+00:00 stderr F I0813 19:59:58.580031 1 main.go:183] ServeMux listening at "0.0.0.0:9898" ././@LongLink0000644000000000000000000000025500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015117130646033060 5ustar zuulzuul././@LongLink0000644000000000000000000000026400000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015117130653033056 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000000367015117130646033070 0ustar 
zuulzuul2025-08-13T20:07:25.650906231+00:00 stderr F I0813 20:07:25.647078 1 cmd.go:40] &{ true {false} prune true map[cert-dir:0xc00086f720 max-eligible-revision:0xc00086f4a0 protected-revisions:0xc00086f540 resource-dir:0xc00086f5e0 static-pod-name:0xc00086f680 v:0xc00086fe00] [0xc00086fe00 0xc00086f4a0 0xc00086f540 0xc00086f5e0 0xc00086f720 0xc00086f680] [] map[cert-dir:0xc00086f720 help:0xc0008821e0 log-flush-frequency:0xc00086fd60 max-eligible-revision:0xc00086f4a0 protected-revisions:0xc00086f540 resource-dir:0xc00086f5e0 static-pod-name:0xc00086f680 v:0xc00086fe00 vmodule:0xc00086fea0] [0xc00086f4a0 0xc00086f540 0xc00086f5e0 0xc00086f680 0xc00086f720 0xc00086fd60 0xc00086fe00 0xc00086fea0 0xc0008821e0] [0xc00086f720 0xc0008821e0 0xc00086fd60 0xc00086f4a0 0xc00086f540 0xc00086f5e0 0xc00086f680 0xc00086fe00 0xc00086fea0] map[104:0xc0008821e0 118:0xc00086fe00] [] -1 0 0xc00085e870 true 0x73b100 []} 2025-08-13T20:07:25.650906231+00:00 stderr F I0813 20:07:25.648509 1 cmd.go:41] (*prune.PruneOptions)(0xc00085d8b0)({ 2025-08-13T20:07:25.650906231+00:00 stderr F MaxEligibleRevision: (int) 11, 2025-08-13T20:07:25.650906231+00:00 stderr F ProtectedRevisions: ([]int) (len=6 cap=6) { 2025-08-13T20:07:25.650906231+00:00 stderr F (int) 6, 2025-08-13T20:07:25.650906231+00:00 stderr F (int) 7, 2025-08-13T20:07:25.650906231+00:00 stderr F (int) 8, 2025-08-13T20:07:25.650906231+00:00 stderr F (int) 9, 2025-08-13T20:07:25.650906231+00:00 stderr F (int) 10, 2025-08-13T20:07:25.650906231+00:00 stderr F (int) 11 2025-08-13T20:07:25.650906231+00:00 stderr F }, 2025-08-13T20:07:25.650906231+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:07:25.650906231+00:00 stderr F CertDir: (string) (len=29) "kube-controller-manager-certs", 2025-08-13T20:07:25.650906231+00:00 stderr F StaticPodName: (string) (len=27) "kube-controller-manager-pod" 2025-08-13T20:07:25.650906231+00:00 stderr F }) 
././@LongLink0000644000000000000000000000025600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015117130646033060 5ustar zuulzuul././@LongLink0000644000000000000000000000033200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015117130654033057 5ustar zuulzuul././@LongLink0000644000000000000000000000033700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000003306015117130646033064 0ustar zuulzuul2025-12-13T00:10:45.246860416+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 9443 \))" ]; do sleep 1; done' 2025-12-13T00:10:45.250750414+00:00 stderr F ++ ss -Htanop '(' sport = 9443 ')' 2025-12-13T00:10:45.256035270+00:00 stderr F + '[' -n '' ']' 2025-12-13T00:10:45.256826839+00:00 stderr F + exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager --listen=0.0.0.0:9443 -v=2 2025-12-13T00:10:45.322105798+00:00 stderr F W1213 
00:10:45.321960 1 cmd.go:244] Using insecure, self-signed certificates 2025-12-13T00:10:45.322345691+00:00 stderr F I1213 00:10:45.322264 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1765584645 cert, and key in /tmp/serving-cert-2039865499/serving-signer.crt, /tmp/serving-cert-2039865499/serving-signer.key 2025-12-13T00:10:45.684729257+00:00 stderr F I1213 00:10:45.683808 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:10:45.686082344+00:00 stderr F I1213 00:10:45.685823 1 observer_polling.go:159] Starting file observer 2025-12-13T00:10:55.686375336+00:00 stderr F W1213 00:10:55.686285 1 builder.go:266] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/pods": net/http: TLS handshake timeout 2025-12-13T00:10:55.687019444+00:00 stderr F I1213 00:10:55.686983 1 builder.go:298] cert-recovery-controller version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-12-13T00:11:01.904875465+00:00 stderr F I1213 00:11:01.904720 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2025-12-13T00:11:01.905184203+00:00 stderr F I1213 00:11:01.905152 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cert-recovery-controller-lock... 
2025-12-13T00:16:30.838360556+00:00 stderr F I1213 00:16:30.838285 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock 2025-12-13T00:16:30.842701364+00:00 stderr F I1213 00:16:30.842634 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"fea77749-6e8a-4e4e-9933-ff0da4b5904e", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_f15d452b-3695-4299-a2a0-0630f74dcdb6 became leader 2025-12-13T00:16:30.848525903+00:00 stderr F I1213 00:16:30.848483 1 csrcontroller.go:102] Starting CSR controller 2025-12-13T00:16:30.848545263+00:00 stderr F I1213 00:16:30.848530 1 shared_informer.go:311] Waiting for caches to sync for CSRController 2025-12-13T00:16:30.850207209+00:00 stderr F I1213 00:16:30.850171 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-12-13T00:16:30.854657681+00:00 stderr F I1213 00:16:30.854613 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:16:30.855335659+00:00 stderr F I1213 00:16:30.855303 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:16:30.858101145+00:00 stderr F I1213 00:16:30.858067 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:16:30.859572765+00:00 stderr F I1213 00:16:30.859541 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:16:30.870141274+00:00 stderr F I1213 00:16:30.870071 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:16:30.877727161+00:00 stderr F 
I1213 00:16:30.877683 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:16:30.882633655+00:00 stderr F I1213 00:16:30.882582 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:16:30.886314825+00:00 stderr F I1213 00:16:30.886279 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:16:30.893525193+00:00 stderr F I1213 00:16:30.893484 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:16:30.949695197+00:00 stderr F I1213 00:16:30.949630 1 shared_informer.go:318] Caches are synced for CSRController 2025-12-13T00:16:30.949695197+00:00 stderr F I1213 00:16:30.949688 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-12-13T00:16:30.949745828+00:00 stderr F I1213 00:16:30.949695 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-12-13T00:16:30.949745828+00:00 stderr F I1213 00:16:30.949702 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-12-13T00:16:30.950906129+00:00 stderr F I1213 00:16:30.950862 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:16:30.950922560+00:00 stderr F I1213 00:16:30.950905 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-12-13T00:19:36.741365350+00:00 stderr F I1213 00:19:36.740853 1 core.go:359] ConfigMap "openshift-config-managed/csr-controller-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIcR/OLLSLArAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMjEzMDAxOTI3WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjU1ODUx\nNjgwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6WTXoQu2VKIz4bNU1\nroi5aLdtMLOHOZPTZWP8rbDAlFcmkwxZoTqRBJhbye3WwmRfz4nUIyeSNPh+GNYm\nM3yR3+98frga5YLx+Y7E6pK1+n2PrVK8iBx8xjvJfyGPBZeBcyKRYMicZIy3DxOU\n0c3K3fA49fWMYgj/44tPa+SaVGstoA6rEpnn5L+Ao8GVbAIlf6rlECazCWeC0O8p\n6iGgwvob77fI+rKk5iVt+u3zLjhaUh8Dosowvj2hHqOBPaBWcIWRSYHUqrCyFEuW\n1wwiy0U3s167Pj8TTyzuCGIVzF8pVVV5fT2Rw1KnQet2cad7AWtraQA1Zb2FVnYU\nmn0rAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBQm7WH/CnvohybNOmxrZVgtqfwGzjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAFzM2Noe+Dr8FGBxbWBZ6\nu08l4PUUri2k/PGpYsn6GFPeuXXgApMaRZFsysJxY2/cLrh1ssXiRXqeAdYQ5IKP\nS6pcrmNlrqjClRmHKK4uPJMFin7z9CysKZ3Z2GFpUCa3+Fek9rFLxSnYMLiRYEB3\nYm1ZM0G1VlsfnpW9huEl6i2edw9bCLUiojOxDIDadH9sCEFaruOmn1sIuee8V6yQ\n7xjq2IN6urfuk+L7G2c/8rMdhiGN0n6+z5vPugZWk2WXXbz7ZAEe6VUNTfP6u+zh\nCrEazlXXfIKU0XA1yVMCiicgJZ9QVWhkFJQS5nEMhoWk+tfY1tsvaN3lglbdcX6C\nSQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:13Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-controller-manager-operator","operation":"Update","time":"2025-12-13T00:19:36Z"}],"resourceVersion":null,"uid":"4aabbce1-72f4-478a-b382-9ed7c988ad76"}} 2025-12-13T00:19:36.744378883+00:00 stderr F I1213 00:19:36.741671 1 event.go:364] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 
'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/csr-controller-ca -n openshift-config-managed: 2025-12-13T00:19:36.744378883+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-12-13T00:21:09.230435809+00:00 stderr F I1213 00:21:09.230380 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:10.008425413+00:00 stderr F I1213 00:21:10.008295 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:12.820356332+00:00 stderr F I1213 00:21:12.820267 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:13.702637450+00:00 stderr F I1213 00:21:13.700445 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:15.600336779+00:00 stderr F I1213 00:21:15.600289 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:18.144430800+00:00 stderr F I1213 00:21:18.144317 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:20.471972328+00:00 stderr F I1213 00:21:20.471881 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:20.927873331+00:00 stderr F I1213 00:21:20.927800 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:24.559367497+00:00 stderr F I1213 00:21:24.559288 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 ././@LongLink0000644000000000000000000000033700000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000002613215117130646033066 0ustar zuulzuul2025-08-13T20:08:14.197956647+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 9443 \))" ]; do sleep 1; done' 2025-08-13T20:08:14.202063545+00:00 stderr F ++ ss -Htanop '(' sport = 9443 ')' 2025-08-13T20:08:14.221748139+00:00 stderr F + '[' -n '' ']' 2025-08-13T20:08:14.225107916+00:00 stderr F + exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager --listen=0.0.0.0:9443 -v=2 2025-08-13T20:08:14.457003233+00:00 stderr F W0813 20:08:14.455918 1 cmd.go:244] Using insecure, self-signed certificates 2025-08-13T20:08:14.457003233+00:00 stderr F I0813 20:08:14.456485 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1755115694 cert, and key in /tmp/serving-cert-54853747/serving-signer.crt, /tmp/serving-cert-54853747/serving-signer.key 2025-08-13T20:08:15.002454652+00:00 stderr F I0813 20:08:15.002044 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T20:08:15.006327353+00:00 stderr F I0813 20:08:15.006199 1 observer_polling.go:159] Starting file observer 2025-08-13T20:08:15.025571455+00:00 stderr F I0813 20:08:15.025458 1 builder.go:298] cert-recovery-controller version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-08-13T20:08:15.032073211+00:00 stderr F I0813 20:08:15.032017 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2025-08-13T20:08:15.034477840+00:00 stderr F I0813 20:08:15.032351 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cert-recovery-controller-lock... 2025-08-13T20:08:15.047637117+00:00 stderr F I0813 20:08:15.046947 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock 2025-08-13T20:08:15.050152229+00:00 stderr F I0813 20:08:15.047956 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"fea77749-6e8a-4e4e-9933-ff0da4b5904e", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"32883", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_4dcd504f-4413-4e50-8836-0f9844860e38 became leader 2025-08-13T20:08:15.073932201+00:00 stderr F I0813 20:08:15.072722 1 csrcontroller.go:102] Starting CSR controller 2025-08-13T20:08:15.073932201+00:00 stderr F I0813 20:08:15.072814 1 shared_informer.go:311] Waiting for caches to sync for CSRController 2025-08-13T20:08:15.073932201+00:00 stderr F I0813 20:08:15.073619 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:08:15.110874860+00:00 stderr F I0813 20:08:15.110647 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.111544390+00:00 stderr F I0813 20:08:15.111477 1 
reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.125419677+00:00 stderr F I0813 20:08:15.125251 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.136199657+00:00 stderr F I0813 20:08:15.136093 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.140250753+00:00 stderr F I0813 20:08:15.140161 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.149215380+00:00 stderr F I0813 20:08:15.149097 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.157014833+00:00 stderr F I0813 20:08:15.153563 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.157014833+00:00 stderr F I0813 20:08:15.155864 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.163627113+00:00 stderr F I0813 20:08:15.163577 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:15.174963438+00:00 stderr F I0813 20:08:15.174910 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:08:15.175033700+00:00 stderr F I0813 20:08:15.175018 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-08-13T20:08:15.175122503+00:00 stderr F I0813 20:08:15.175107 1 shared_informer.go:318] Caches are synced for CSRController 2025-08-13T20:08:15.175204425+00:00 stderr F I0813 20:08:15.175189 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:08:15.175239226+00:00 stderr F I0813 20:08:15.175227 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:08:15.175270617+00:00 stderr F I0813 20:08:15.175259 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:08:25.185664033+00:00 stderr F E0813 20:08:25.185445 1 csrcontroller.go:146] key failed with : Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca": dial tcp [::1]:6443: connect: connection refused 2025-08-13T20:08:35.194931186+00:00 stderr F E0813 20:08:35.194614 1 csrcontroller.go:146] key failed with : Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca": dial tcp [::1]:6443: connect: connection refused 2025-08-13T20:09:00.927591932+00:00 stderr F I0813 20:09:00.927364 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:03.126142517+00:00 stderr F I0813 20:09:03.125981 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubecontrollermanagers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:03.735864809+00:00 stderr F I0813 20:09:03.733930 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:07.052092650+00:00 stderr F I0813 20:09:07.050311 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:08.250621750+00:00 stderr F I0813 20:09:08.250516 1 reflector.go:351] Caches populated for *v1.Secret from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:08.999427729+00:00 stderr F I0813 20:09:08.999355 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:09.482069647+00:00 stderr F I0813 20:09:09.481699 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:10.710151277+00:00 stderr F I0813 20:09:10.710080 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:12.143121140+00:00 stderr F I0813 20:09:12.141232 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.401701 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.376416 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.473142 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.483130 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.483496 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.483591 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.483732 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.485449 1 streamwatcher.go:111] Unexpected EOF during watch stream 
event decoding: unexpected EOF 2025-08-13T20:42:36.491548501+00:00 stderr F I0813 20:42:36.485893 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.839045900+00:00 stderr F I0813 20:42:36.838854 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:36.844614870+00:00 stderr F E0813 20:42:36.844372 1 leaderelection.go:308] Failed to release lock: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cert-recovery-controller-lock?timeout=4m0s": dial tcp [::1]:6443: connect: connection refused 2025-08-13T20:42:36.848355598+00:00 stderr F I0813 20:42:36.848208 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:36.848494642+00:00 stderr F I0813 20:42:36.848420 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:36.848494642+00:00 stderr F I0813 20:42:36.848449 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:36.848494642+00:00 stderr F I0813 20:42:36.848475 1 csrcontroller.go:104] Shutting down CSR controller 2025-08-13T20:42:36.848494642+00:00 stderr F I0813 20:42:36.848485 1 csrcontroller.go:106] CSR controller shut down 2025-08-13T20:42:36.849512342+00:00 stderr F I0813 20:42:36.848300 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:42:36.849512342+00:00 stderr F I0813 20:42:36.849486 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:42:36.849651526+00:00 stderr F I0813 20:42:36.848500 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:42:36.849651526+00:00 stderr F I0813 20:42:36.849609 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:42:36.851173319+00:00 stderr F W0813 20:42:36.850995 1 leaderelection.go:84] leader election lost ././@LongLink0000644000000000000000000000033700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000000503615117130646033066 0ustar zuulzuul2025-12-13T00:06:37.187122180+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 9443 \))" ]; do sleep 1; done' 2025-12-13T00:06:37.193037157+00:00 stderr F ++ ss -Htanop '(' sport = 9443 ')' 2025-12-13T00:06:37.197323817+00:00 stderr F + '[' -n '' ']' 2025-12-13T00:06:37.198146000+00:00 stderr F + exec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager --listen=0.0.0.0:9443 -v=2 2025-12-13T00:06:37.271125213+00:00 stderr F W1213 00:06:37.270977 1 cmd.go:244] Using insecure, self-signed certificates 2025-12-13T00:06:37.271446242+00:00 stderr F I1213 00:06:37.271424 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1765584397 cert, and key in /tmp/serving-cert-949451254/serving-signer.crt, /tmp/serving-cert-949451254/serving-signer.key 2025-12-13T00:06:37.643593421+00:00 stderr F I1213 00:06:37.643512 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}. 2025-12-13T00:06:37.644454815+00:00 stderr F I1213 00:06:37.644352 1 observer_polling.go:159] Starting file observer 2025-12-13T00:06:47.646601661+00:00 stderr F W1213 00:06:47.646492 1 builder.go:266] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/pods": net/http: TLS handshake timeout 2025-12-13T00:06:47.646668723+00:00 stderr F I1213 00:06:47.646635 1 builder.go:298] cert-recovery-controller version 4.16.0-202406131906.p0.g0338b3b.assembly.stream.el9-0338b3b-0338b3be6912024d03def2c26f0fa10218fc2c25 2025-12-13T00:06:54.274465163+00:00 stderr F I1213 00:06:54.274213 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaTopology 2025-12-13T00:06:54.274496874+00:00 stderr F I1213 00:06:54.274467 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cert-recovery-controller-lock... 2025-12-13T00:09:51.155321372+00:00 stderr F I1213 00:09:51.155214 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 
2025-12-13T00:09:51.155321372+00:00 stderr F W1213 00:09:51.155299 1 leaderelection.go:84] leader election lost ././@LongLink0000644000000000000000000000032200000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015117130654033057 5ustar zuulzuul././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000001510615117130646033065 0ustar zuulzuul2025-08-13T20:08:13.912570675+00:00 stderr F I0813 20:08:13.912110 1 observer_polling.go:159] Starting file observer 2025-08-13T20:08:13.912570675+00:00 stderr F I0813 20:08:13.912498 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-08-13T20:08:14.014008973+00:00 stderr F I0813 20:08:14.012990 1 base_controller.go:73] Caches are synced for CertSyncController 2025-08-13T20:08:14.014008973+00:00 stderr F I0813 20:08:14.013087 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 
2025-08-13T20:08:14.014008973+00:00 stderr F I0813 20:08:14.013232 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:08:14.014008973+00:00 stderr F I0813 20:08:14.013695 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:09:00.642329395+00:00 stderr F I0813 20:09:00.642073 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:09:00.642836380+00:00 stderr F I0813 20:09:00.642748 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:09:06.039403674+00:00 stderr F I0813 20:09:06.039168 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:09:06.040066213+00:00 stderr F I0813 20:09:06.040023 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:09:06.040328071+00:00 stderr F I0813 20:09:06.040299 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:09:06.040866436+00:00 stderr F I0813 20:09:06.040697 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:09:06.041170375+00:00 stderr F I0813 20:09:06.041082 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:09:06.041673049+00:00 stderr F I0813 20:09:06.041618 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:19:00.657764897+00:00 stderr F I0813 20:19:00.657557 1 certsync_controller.go:65] Syncing configmaps: 
[{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:19:00.669646716+00:00 stderr F I0813 20:19:00.669516 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:19:00.676291486+00:00 stderr F I0813 20:19:00.676168 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:19:00.684225793+00:00 stderr F I0813 20:19:00.684075 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:19:06.041870472+00:00 stderr F I0813 20:19:06.041707 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:19:06.042281974+00:00 stderr F I0813 20:19:06.042201 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:29:00.643715058+00:00 stderr F I0813 20:29:00.643482 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:29:00.644704676+00:00 stderr F I0813 20:29:00.644581 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:29:00.644983084+00:00 stderr F I0813 20:29:00.644921 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:29:00.646332343+00:00 stderr F I0813 20:29:00.645357 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:29:06.042539590+00:00 stderr F I0813 20:29:06.042335 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:29:06.043743395+00:00 stderr F I0813 
20:29:06.043351 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:29:06.043743395+00:00 stderr F I0813 20:29:06.043572 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:29:06.044232079+00:00 stderr F I0813 20:29:06.044104 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:39:00.644997435+00:00 stderr F I0813 20:39:00.644245 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:39:00.645629583+00:00 stderr F I0813 20:39:00.645535 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:39:00.645942712+00:00 stderr F I0813 20:39:00.645848 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:39:00.646226851+00:00 stderr F I0813 20:39:00.646102 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:39:06.043658059+00:00 stderr F I0813 20:39:06.043487 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:39:06.049015103+00:00 stderr F I0813 20:39:06.048915 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 2025-08-13T20:39:06.049189868+00:00 stderr F I0813 20:39:06.049110 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}] 2025-08-13T20:39:06.049439836+00:00 stderr F I0813 20:39:06.049391 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}] 
././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000002436715117130654033075 0ustar zuulzuul2025-12-13T00:10:45.159636747+00:00 stderr F I1213 00:10:45.159449 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-12-13T00:10:45.159636747+00:00 stderr F I1213 00:10:45.159476 1 observer_polling.go:159] Starting file observer 2025-12-13T00:10:55.167217158+00:00 stderr F W1213 00:10:55.166980 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-12-13T00:10:55.167217158+00:00 stderr F W1213 00:10:55.167048 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-12-13T00:10:55.167558687+00:00 stderr F I1213 00:10:55.167485 1 trace.go:236] Trace[1230528725]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Dec-2025 00:10:45.159) (total time: 10007ms): 2025-12-13T00:10:55.167558687+00:00 stderr F Trace[1230528725]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10007ms (00:10:55.166) 2025-12-13T00:10:55.167558687+00:00 stderr F Trace[1230528725]: [10.007472038s] [10.007472038s] END 
2025-12-13T00:10:55.167558687+00:00 stderr F I1213 00:10:55.167484 1 trace.go:236] Trace[416937124]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Dec-2025 00:10:45.159) (total time: 10007ms): 2025-12-13T00:10:55.167558687+00:00 stderr F Trace[416937124]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10007ms (00:10:55.167) 2025-12-13T00:10:55.167558687+00:00 stderr F Trace[416937124]: [10.007511629s] [10.007511629s] END 2025-12-13T00:10:55.167558687+00:00 stderr F E1213 00:10:55.167515 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-12-13T00:10:55.167558687+00:00 stderr F E1213 00:10:55.167533 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-12-13T00:11:01.860734965+00:00 stderr F I1213 00:11:01.860613 1 base_controller.go:73] Caches are synced for CertSyncController 2025-12-13T00:11:01.860734965+00:00 stderr F I1213 00:11:01.860647 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 
2025-12-13T00:11:01.861054983+00:00 stderr F I1213 00:11:01.860988 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:11:01.861513296+00:00 stderr F I1213 00:11:01.861476 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:13:20.769374139+00:00 stderr F I1213 00:13:20.769291 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:13:20.769879576+00:00 stderr F I1213 00:13:20.769853 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:13:20.789231916+00:00 stderr F I1213 00:13:20.789159 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:13:20.789496115+00:00 stderr F I1213 00:13:20.789470 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:13:20.815050944+00:00 stderr F I1213 00:13:20.814589 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:13:20.815050944+00:00 stderr F I1213 00:13:20.814847 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:13:20.870515048+00:00 stderr F I1213 00:13:20.869899 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:13:20.870515048+00:00 stderr F I1213 00:13:20.870206 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:13:20.900709363+00:00 stderr F I1213 00:13:20.900620 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:13:20.900884208+00:00 stderr F I1213 00:13:20.900853 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:13:25.727980940+00:00 stderr F I1213 00:13:25.727876 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:13:25.728146745+00:00 stderr F I1213 00:13:25.728120 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:13:25.747842567+00:00 stderr F I1213 00:13:25.747702 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:13:25.748160597+00:00 stderr F I1213 00:13:25.748123 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:13:25.765769019+00:00 stderr F I1213 00:13:25.765710 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:13:25.766002878+00:00 stderr F I1213 00:13:25.765979 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:13:25.773872591+00:00 stderr F I1213 00:13:25.773802 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:13:25.774368339+00:00 stderr F I1213 00:13:25.774338 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:13:25.787851221+00:00 stderr F I1213 00:13:25.787788 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:13:25.788122540+00:00 stderr F I1213 00:13:25.788084 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:13:25.795164247+00:00 stderr F I1213 00:13:25.795122 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:13:25.795326092+00:00 stderr F I1213 00:13:25.795296 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:19:39.940994228+00:00 stderr F I1213 00:19:39.940673 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:19:39.945990126+00:00 stderr F I1213 00:19:39.943848 1 certsync_controller.go:146] Creating directory "/etc/kubernetes/static-pod-certs/configmaps/client-ca" ...
2025-12-13T00:19:39.945990126+00:00 stderr F I1213 00:19:39.943907 1 certsync_controller.go:159] Writing configmap manifest "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" ...
2025-12-13T00:19:39.950734057+00:00 stderr F I1213 00:19:39.949107 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:19:39.950734057+00:00 stderr F I1213 00:19:39.949252 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CertificateUpdated' Wrote updated configmap: openshift-kube-controller-manager/client-ca
2025-12-13T00:21:12.862985993+00:00 stderr F I1213 00:21:12.862861 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:21:12.863516527+00:00 stderr F I1213 00:21:12.863454 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:21:12.863624260+00:00 stderr F I1213 00:21:12.863588 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:21:12.863817735+00:00 stderr F I1213 00:21:12.863783 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:21:21.315405728+00:00 stderr F I1213 00:21:21.315290 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:21:21.315570383+00:00 stderr F I1213 00:21:21.315536 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]
2025-12-13T00:21:21.315651155+00:00 stderr F I1213 00:21:21.315618 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:21:21.315775058+00:00 stderr F I1213 00:21:21.315744 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer/1.log

2025-12-13T00:06:37.055068855+00:00 stderr F I1213 00:06:37.054745 1 observer_polling.go:159] Starting file observer
2025-12-13T00:06:37.055301532+00:00 stderr F I1213 00:06:37.055183 1 base_controller.go:67] Waiting for caches to sync for CertSyncController
2025-12-13T00:06:37.059096378+00:00 stderr F W1213 00:06:37.058993 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused
2025-12-13T00:06:37.059134419+00:00 stderr F E1213 00:06:37.059102 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused
2025-12-13T00:06:37.059134419+00:00 stderr F W1213 00:06:37.059063 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused
2025-12-13T00:06:37.059222952+00:00 stderr F E1213 00:06:37.059194 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": dial tcp [::1]:6443: connect: connection refused
2025-12-13T00:06:48.246823497+00:00 stderr F W1213 00:06:48.246719 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout
2025-12-13T00:06:48.247156656+00:00 stderr F I1213 00:06:48.247122 1 trace.go:236] Trace[223160759]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Dec-2025 00:06:38.244) (total time: 10002ms):
2025-12-13T00:06:48.247156656+00:00 stderr F Trace[223160759]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:06:48.246)
2025-12-13T00:06:48.247156656+00:00 stderr F Trace[223160759]: [10.002191467s] [10.002191467s] END
2025-12-13T00:06:48.247156656+00:00 stderr F E1213 00:06:48.247147 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout
2025-12-13T00:06:48.634054190+00:00 stderr F W1213 00:06:48.633941 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
2025-12-13T00:06:48.634078191+00:00 stderr F I1213 00:06:48.634053 1 trace.go:236] Trace[986800151]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Dec-2025 00:06:38.632) (total time: 10001ms):
2025-12-13T00:06:48.634078191+00:00 stderr F Trace[986800151]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:48.633)
2025-12-13T00:06:48.634078191+00:00 stderr F Trace[986800151]: [10.001816356s] [10.001816356s] END
2025-12-13T00:06:48.634078191+00:00 stderr F E1213 00:06:48.634071 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
2025-12-13T00:06:54.955595414+00:00 stderr F I1213 00:06:54.955520 1 base_controller.go:73] Caches are synced for CertSyncController
2025-12-13T00:06:54.955595414+00:00 stderr F I1213 00:06:54.955556 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...
2025-12-13T00:06:54.955934715+00:00 stderr F I1213 00:06:54.955879 1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
2025-12-13T00:06:54.956261084+00:00 stderr F I1213 00:06:54.956217 1 certsync_controller.go:169] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log

2025-12-13T00:08:32.168802800+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done'
2025-12-13T00:08:32.173126196+00:00 stderr F ++ ss -Htanop '(' sport = 10357 ')'
2025-12-13T00:08:32.178327928+00:00 stderr F + '[' -n '' ']'
2025-12-13T00:08:32.179022748+00:00 stderr F + exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2
2025-12-13T00:08:32.241600722+00:00 stderr F I1213 00:08:32.241352 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-12-13T00:08:32.242686874+00:00 stderr F I1213 00:08:32.242632 1 observer_polling.go:159] Starting file observer
2025-12-13T00:08:32.244260649+00:00 stderr F I1213 00:08:32.244204 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88
2025-12-13T00:08:32.245643430+00:00 stderr F I1213 00:08:32.245596 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-12-13T00:09:01.836669888+00:00 stderr F I1213 00:09:01.836555 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller.
2025-12-13T00:09:01.836755290+00:00 stderr F F1213 00:09:01.836737 1 cmd.go:170] failed checking apiserver connectivity: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/0.log

2025-08-13T20:08:13.208515559+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done'
2025-08-13T20:08:13.220701708+00:00 stderr F ++ ss -Htanop '(' sport = 10357 ')'
2025-08-13T20:08:13.241290128+00:00 stderr F + '[' -n '' ']'
2025-08-13T20:08:13.250386589+00:00 stderr F + exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2
2025-08-13T20:08:13.608173907+00:00 stderr F I0813 20:08:13.607874 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:08:13.609914107+00:00 stderr F I0813 20:08:13.609835 1 observer_polling.go:159] Starting file observer
2025-08-13T20:08:13.624952348+00:00 stderr F I0813 20:08:13.624655 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88
2025-08-13T20:08:13.626584115+00:00 stderr F I0813 20:08:13.626491 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:08:14.664548094+00:00 stderr F I0813 20:08:14.662425 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T20:08:14.679061880+00:00 stderr F I0813 20:08:14.676873 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400
2025-08-13T20:08:14.679061880+00:00 stderr F I0813 20:08:14.678838 1 maxinflight.go:145] "Initialized mutatingChan" len=200
2025-08-13T20:08:14.679061880+00:00 stderr F I0813 20:08:14.678872 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400
2025-08-13T20:08:14.679061880+00:00 stderr F I0813 20:08:14.678879 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200
2025-08-13T20:08:14.693006890+00:00 stderr F I0813 20:08:14.691169 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T20:08:14.693006890+00:00 stderr F I0813 20:08:14.692424 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
2025-08-13T20:08:14.711465439+00:00 stderr F W0813 20:08:14.711350 1 builder.go:358] unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
2025-08-13T20:08:14.712233371+00:00 stderr F I0813 20:08:14.712027 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cluster-policy-controller-lock...
2025-08-13T20:08:14.713337843+00:00 stderr F I0813 20:08:14.713274 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T20:08:14.713531348+00:00 stderr F I0813 20:08:14.713444 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T20:08:14.713655132+00:00 stderr F I0813 20:08:14.713580 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:08:14.713655132+00:00 stderr F I0813 20:08:14.713635 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:08:14.713698223+00:00 stderr F I0813 20:08:14.713576 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ControlPlaneTopology' unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
2025-08-13T20:08:14.714049543+00:00 stderr F I0813 20:08:14.714017 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:08:14.716051860+00:00 stderr F I0813 20:08:14.714734 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:08:14.716264537+00:00 stderr F I0813 20:08:14.716236 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:08:14.717059509+00:00 stderr F I0813 20:08:14.716760 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-08-13 20:08:14.71671728 +0000 UTC))"
2025-08-13T20:08:14.718251374+00:00 stderr F I0813 20:08:14.717945 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115694\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115693\" (2025-08-13 19:08:13 +0000 UTC to 2026-08-13 19:08:13 +0000 UTC (now=2025-08-13 20:08:14.717855672 +0000 UTC))"
2025-08-13T20:08:14.718251374+00:00 stderr F I0813 20:08:14.718004 1 secure_serving.go:213] Serving securely on [::]:10357
2025-08-13T20:08:14.718251374+00:00 stderr F I0813 20:08:14.718032 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated
2025-08-13T20:08:14.718251374+00:00 stderr F I0813 20:08:14.718048 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T20:08:14.720447926+00:00 stderr F I0813 20:08:14.720412 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:14.724976626+00:00 stderr F I0813 20:08:14.724881 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:14.729246629+00:00 stderr F I0813 20:08:14.729159 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:14.729419954+00:00 stderr F I0813 20:08:14.729391 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager/cluster-policy-controller-lock
2025-08-13T20:08:14.730022001+00:00 stderr F I0813 20:08:14.729949 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager", Name:"cluster-policy-controller-lock", UID:"bb093f33-8655-47de-8ab9-7ce6fce91fc7", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"32879", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_2c95bd88-c329-4928-a119-89e7d6436f66 became leader
2025-08-13T20:08:14.738369680+00:00 stderr F I0813 20:08:14.738315 1 policy_controller.go:78] Starting "openshift.io/cluster-quota-reconciliation"
2025-08-13T20:08:14.814328098+00:00 stderr F I0813 20:08:14.814216 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:08:14.814499683+00:00 stderr F I0813 20:08:14.814217 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T20:08:14.814741960+00:00 stderr F I0813 20:08:14.814693 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.814646487 +0000 UTC))"
2025-08-13T20:08:14.815283746+00:00 stderr F I0813 20:08:14.815214 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-08-13 20:08:14.815180533 +0000 UTC))"
2025-08-13T20:08:14.816965214+00:00 stderr F I0813 20:08:14.816924 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:08:14.825132808+00:00 stderr F I0813 20:08:14.825025 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115694\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115693\" (2025-08-13 19:08:13 +0000 UTC to 2026-08-13 19:08:13 +0000 UTC (now=2025-08-13 20:08:14.824993454 +0000 UTC))"
2025-08-13T20:08:14.825274092+00:00 stderr F I0813 20:08:14.825210 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:08:14.82519215 +0000 UTC))"
2025-08-13T20:08:14.825289022+00:00 stderr F I0813 20:08:14.825274 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:08:14.825240061 +0000 UTC))"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825400 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressrouters.network.operator.openshift.io"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825458 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="installplans.operators.coreos.com"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825487 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825506 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertrelabelconfigs.monitoring.openshift.io"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825538 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825557 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
2025-08-13T20:08:14.825588811+00:00 stderr F I0813 20:08:14.825576 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressservices.k8s.ovn.org"
2025-08-13T20:08:14.825623412+00:00 stderr F I0813 20:08:14.825592 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="servicemonitors.monitoring.coreos.com"
2025-08-13T20:08:14.825623412+00:00 stderr F I0813 20:08:14.825609 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorpkis.network.operator.openshift.io"
2025-08-13T20:08:14.825634042+00:00 stderr F I0813 20:08:14.825626 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorgroups.operators.coreos.com"
2025-08-13T20:08:14.825697544+00:00 stderr F I0813 20:08:14.825642 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="clusterserviceversions.operators.coreos.com"
2025-08-13T20:08:14.825697544+00:00 stderr F I0813 20:08:14.825692 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinehealthchecks.machine.openshift.io"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825709 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podmonitors.monitoring.coreos.com"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825834 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:14.825282982 +0000 UTC))"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825883 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:14.825864139 +0000 UTC))"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825955 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.825940541 +0000 UTC))"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825974 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.825963172 +0000 UTC))"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.825989 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.825979022 +0000 UTC))"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.826007 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.825994963 +0000 UTC))"
2025-08-13T20:08:14.826039484+00:00 stderr F I0813 20:08:14.826023 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:08:14.826012713 +0000 UTC))"
2025-08-13T20:08:14.826067965+00:00 stderr F I0813 20:08:14.826049 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:08:14.826030704 +0000 UTC))"
2025-08-13T20:08:14.826133267+00:00 stderr F I0813 20:08:14.826072 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:14.826060315 +0000 UTC))"
2025-08-13T20:08:14.826567179+00:00 stderr F I0813 20:08:14.826496 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-08-13 20:08:14.826462946 +0000 UTC))"
2025-08-13T20:08:14.826931140+00:00 stderr F I0813 20:08:14.826822 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115694\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115693\" (2025-08-13 19:08:13 +0000 UTC to 2026-08-13 19:08:13 +0000 UTC (now=2025-08-13 20:08:14.826762155 +0000 UTC))"
2025-08-13T20:08:14.830272085+00:00 stderr F I0813 20:08:14.830158 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagers.monitoring.coreos.com"
2025-08-13T20:08:14.830272085+00:00 stderr F I0813 20:08:14.830227 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="catalogsources.operators.coreos.com"
2025-08-13T20:08:14.830272085+00:00 stderr F I0813 20:08:14.830250 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templateinstances.template.openshift.io"
2025-08-13T20:08:14.830307426+00:00 stderr F I0813 20:08:14.830267 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheuses.monitoring.coreos.com"
2025-08-13T20:08:14.830307426+00:00 stderr F I0813 20:08:14.830286 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorconditions.operators.coreos.com"
2025-08-13T20:08:14.830307426+00:00
stderr F I0813 20:08:14.830302 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="projecthelmchartrepositories.helm.openshift.io" 2025-08-13T20:08:14.830329687+00:00 stderr F I0813 20:08:14.830320 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" 2025-08-13T20:08:14.831114599+00:00 stderr F I0813 20:08:14.830389 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="subscriptions.operators.coreos.com" 2025-08-13T20:08:14.831114599+00:00 stderr F I0813 20:08:14.830482 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831415 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831478 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831498 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831521 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="network-attachment-definitions.k8s.cni.cncf.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831550 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="routes.route.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831566 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressqoses.k8s.ovn.org" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831597 1 resource_quota_monitor.go:224] "QuotaMonitor created object count 
evaluator" resource="machines.machine.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831616 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831714 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831735 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindingrestrictions.authorization.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831753 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresscontrollers.operator.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831768 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831948 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddressclaims.ipam.cluster.x-k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831969 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheusrules.monitoring.coreos.com" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.831987 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="thanosrulers.monitoring.coreos.com" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832034 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ippools.whereabouts.cni.cncf.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832076 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps" 2025-08-13T20:08:14.832867600+00:00 stderr 
F I0813 20:08:14.832113 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="dnsrecords.ingress.operator.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832154 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediations.infrastructure.cluster.x-k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832179 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="buildconfigs.build.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832198 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832218 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressfirewalls.k8s.ovn.org" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832234 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832250 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deploymentconfigs.apps.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832268 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="builds.build.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832292 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832331 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832350 1 resource_quota_monitor.go:224] "QuotaMonitor created 
object count evaluator" resource="machineautoscalers.autoscaling.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832390 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinesets.machine.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832407 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832432 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832447 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832463 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controlplanemachinesets.machine.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832480 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="probes.monitoring.coreos.com" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832496 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertingrules.monitoring.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832511 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832546 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832583 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy" 2025-08-13T20:08:14.832867600+00:00 
stderr F I0813 20:08:14.832599 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddresses.ipam.cluster.x-k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832615 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagerconfigs.monitoring.coreos.com" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832641 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832657 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templates.template.openshift.io" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832713 1 policy_controller.go:88] Started "openshift.io/cluster-quota-reconciliation" 2025-08-13T20:08:14.832867600+00:00 stderr F I0813 20:08:14.832723 1 policy_controller.go:78] Starting "openshift.io/cluster-csr-approver" 2025-08-13T20:08:14.833540459+00:00 stderr F I0813 20:08:14.833019 1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller 2025-08-13T20:08:14.833540459+00:00 stderr F I0813 20:08:14.833093 1 reconciliation_controller.go:140] Starting the cluster quota reconciliation controller 2025-08-13T20:08:14.833540459+00:00 stderr F I0813 20:08:14.833272 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2025-08-13T20:08:14.876513471+00:00 stderr F I0813 20:08:14.876383 1 policy_controller.go:88] Started "openshift.io/cluster-csr-approver" 2025-08-13T20:08:14.876513471+00:00 stderr F I0813 20:08:14.876440 1 policy_controller.go:78] Starting "openshift.io/podsecurity-admission-label-syncer" 2025-08-13T20:08:14.877069787+00:00 stderr F I0813 20:08:14.877029 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_csr-approver-controller 2025-08-13T20:08:14.887001612+00:00 stderr F I0813 20:08:14.883489 1 
reconciliation_controller.go:207] syncing resource quota controller with updated resources from discovery: added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=imagestreams infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets monitoring.coreos.com/v1, 
Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies operator.openshift.io/v1, Resource=ingresscontrollers operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes storage.k8s.io/v1, Resource=csistoragecapacities template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: [] 2025-08-13T20:08:14.887001612+00:00 stderr F I0813 20:08:14.885290 1 policy_controller.go:88] Started "openshift.io/podsecurity-admission-label-syncer" 2025-08-13T20:08:14.887001612+00:00 stderr F I0813 20:08:14.885311 1 policy_controller.go:78] Starting "openshift.io/privileged-namespaces-psa-label-syncer" 2025-08-13T20:08:14.887001612+00:00 stderr F I0813 20:08:14.885539 1 base_controller.go:67] Waiting for caches to sync for 
pod-security-admission-label-synchronization-controller 2025-08-13T20:08:14.889977587+00:00 stderr F I0813 20:08:14.889608 1 policy_controller.go:88] Started "openshift.io/privileged-namespaces-psa-label-syncer" 2025-08-13T20:08:14.889977587+00:00 stderr F I0813 20:08:14.889639 1 policy_controller.go:78] Starting "openshift.io/namespace-security-allocation" 2025-08-13T20:08:14.890300166+00:00 stderr F I0813 20:08:14.890259 1 privileged_namespaces_controller.go:75] "Starting" controller="privileged-namespaces-psa-label-syncer" 2025-08-13T20:08:14.890355978+00:00 stderr F I0813 20:08:14.890339 1 shared_informer.go:311] Waiting for caches to sync for privileged-namespaces-psa-label-syncer 2025-08-13T20:08:14.904998308+00:00 stderr F I0813 20:08:14.904851 1 policy_controller.go:88] Started "openshift.io/namespace-security-allocation" 2025-08-13T20:08:14.904998308+00:00 stderr F I0813 20:08:14.904911 1 policy_controller.go:78] Starting "openshift.io/resourcequota" 2025-08-13T20:08:14.904998308+00:00 stderr F I0813 20:08:14.904982 1 base_controller.go:67] Waiting for caches to sync for namespace-security-allocation-controller 2025-08-13T20:08:15.162401148+00:00 stderr F I0813 20:08:15.161553 1 policy_controller.go:88] Started "openshift.io/resourcequota" 2025-08-13T20:08:15.162401148+00:00 stderr F I0813 20:08:15.161594 1 policy_controller.go:91] Started Origin Controllers 2025-08-13T20:08:15.169223483+00:00 stderr F I0813 20:08:15.162368 1 resource_quota_controller.go:294] "Starting resource quota controller" 2025-08-13T20:08:15.169223483+00:00 stderr F I0813 20:08:15.169155 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-08-13T20:08:15.169337147+00:00 stderr F I0813 20:08:15.169293 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2025-08-13T20:08:15.180857717+00:00 stderr F I0813 20:08:15.176319 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 
2025-08-13T20:08:15.180857717+00:00 stderr F I0813 20:08:15.176639 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.180857717+00:00 stderr F I0813 20:08:15.176918 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.180857717+00:00 stderr F I0813 20:08:15.177210 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.216483758+00:00 stderr F I0813 20:08:15.211262 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.231770197+00:00 stderr F I0813 20:08:15.231706 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.234820014+00:00 stderr F I0813 20:08:15.234745 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.237585623+00:00 stderr F I0813 20:08:15.237558 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.244701137+00:00 stderr F I0813 20:08:15.237756 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.294877586+00:00 stderr F I0813 20:08:15.242155 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.294877586+00:00 stderr F I0813 20:08:15.288151 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295008890+00:00 stderr F I0813 20:08:15.288479 1 
reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295342139+00:00 stderr F I0813 20:08:15.295285 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295398661+00:00 stderr F I0813 20:08:15.295372 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295510714+00:00 stderr F I0813 20:08:15.294031 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.295674509+00:00 stderr F I0813 20:08:15.295656 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.298247583+00:00 stderr F I0813 20:08:15.298219 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.298883051+00:00 stderr F I0813 20:08:15.298858 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.345075025+00:00 stderr F I0813 20:08:15.345008 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.352814607+00:00 stderr F I0813 20:08:15.352734 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.377367631+00:00 stderr F I0813 20:08:15.377297 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.380231 1 reflector.go:351] Caches populated for *v1.ControllerRevision from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.383691 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.387926 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.389594 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_csr-approver-controller 2025-08-13T20:08:15.436101725+00:00 stderr F I0813 20:08:15.427446 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ... 2025-08-13T20:08:15.445339760+00:00 stderr F I0813 20:08:15.445274 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.484563625+00:00 stderr F I0813 20:08:15.484393 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.492000598+00:00 stderr F I0813 20:08:15.489766 1 resource_quota_controller.go:470] "syncing resource quota controller with updated resources from discovery" diff="added: [image.openshift.io/v1, Resource=imagestreams], removed: []" 2025-08-13T20:08:15.492000598+00:00 stderr F I0813 20:08:15.489952 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-08-13T20:08:15.492000598+00:00 stderr F I0813 20:08:15.488683 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.492547094+00:00 stderr F I0813 20:08:15.492488 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.494281263+00:00 stderr F I0813 20:08:15.494227 1 reflector.go:351] 
Caches populated for *v1.Job from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.494741616+00:00 stderr F I0813 20:08:15.494715 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "basic-user" not found 2025-08-13T20:08:15.496173787+00:00 stderr F I0813 20:08:15.496128 1 trace.go:236] Trace[1789160028]: "DeltaFIFO Pop Process" ID:cluster-autoscaler,Depth:186,Reason:slow event handlers blocking the queue (13-Aug-2025 20:08:15.383) (total time: 112ms): 2025-08-13T20:08:15.496173787+00:00 stderr F Trace[1789160028]: [112.348001ms] [112.348001ms] END 2025-08-13T20:08:15.502290473+00:00 stderr F I0813 20:08:15.502258 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-08-13T20:08:15.506173914+00:00 stderr F I0813 20:08:15.506106 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-08-13T20:08:15.506173914+00:00 stderr F I0813 20:08:15.506151 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-provisioner-cfg" not found 2025-08-13T20:08:15.506173914+00:00 stderr F I0813 20:08:15.506159 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-provisioner-cfg" not found 2025-08-13T20:08:15.506199455+00:00 stderr F I0813 20:08:15.506179 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:bootstrap-signer" not found 
2025-08-13T20:08:15.506199455+00:00 stderr F I0813 20:08:15.506191 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506223216+00:00 stderr F I0813 20:08:15.506198 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506223216+00:00 stderr F I0813 20:08:15.506207 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506223216+00:00 stderr F I0813 20:08:15.506214 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506236806+00:00 stderr F I0813 20:08:15.506222 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506236806+00:00 stderr F I0813 20:08:15.506229 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506249946+00:00 stderr F I0813 20:08:15.506238 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506249946+00:00 stderr F I0813 20:08:15.506244 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role 
from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-controller-manager" not found 2025-08-13T20:08:15.506262537+00:00 stderr F I0813 20:08:15.506252 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found 2025-08-13T20:08:15.506262537+00:00 stderr F I0813 20:08:15.506259 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:bootstrap-signer" not found 2025-08-13T20:08:15.506274767+00:00 stderr F I0813 20:08:15.506267 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:cloud-provider" not found 2025-08-13T20:08:15.506285057+00:00 stderr F I0813 20:08:15.506276 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:controller:token-cleaner" not found 2025-08-13T20:08:15.506297818+00:00 stderr F I0813 20:08:15.506291 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found 2025-08-13T20:08:15.506307568+00:00 stderr F I0813 20:08:15.506300 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-controller-manager" not found 2025-08-13T20:08:15.506319168+00:00 stderr F I0813 20:08:15.506306 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found 2025-08-13T20:08:15.506319168+00:00 stderr F I0813 20:08:15.506315 1 sccrolecache.go:466] 
failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-locking-openshift-controller-manager" not found 2025-08-13T20:08:15.506339989+00:00 stderr F I0813 20:08:15.506321 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506339989+00:00 stderr F I0813 20:08:15.506335 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506355549+00:00 stderr F I0813 20:08:15.506346 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506367130+00:00 stderr F I0813 20:08:15.506359 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506434522+00:00 stderr F I0813 20:08:15.506380 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-approver" not found 2025-08-13T20:08:15.506434522+00:00 stderr F I0813 20:08:15.506408 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506434522+00:00 stderr F I0813 20:08:15.506423 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-samples-operator" not found 2025-08-13T20:08:15.506434522+00:00 stderr F I0813 20:08:15.506429 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't 
retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506452792+00:00 stderr F I0813 20:08:15.506444 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "csi-snapshot-controller-operator-role" not found 2025-08-13T20:08:15.506467723+00:00 stderr F I0813 20:08:15.506458 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506478923+00:00 stderr F I0813 20:08:15.506471 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-configmap-reader" not found 2025-08-13T20:08:15.506488833+00:00 stderr F I0813 20:08:15.506479 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.506488833+00:00 stderr F I0813 20:08:15.506485 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-public" not found 2025-08-13T20:08:15.506501614+00:00 stderr F I0813 20:08:15.506493 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-api-controllers" not found 2025-08-13T20:08:15.506513854+00:00 stderr F I0813 20:08:15.506500 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-approver" not found 2025-08-13T20:08:15.506513854+00:00 stderr F I0813 20:08:15.506509 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io 
"openshift-network-public-role" not found 2025-08-13T20:08:15.506532774+00:00 stderr F I0813 20:08:15.506521 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:oauth-servercert-trust" not found 2025-08-13T20:08:15.506532774+00:00 stderr F I0813 20:08:15.506528 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506548415+00:00 stderr F I0813 20:08:15.506540 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "coreos-pull-secret-reader" not found 2025-08-13T20:08:15.506560575+00:00 stderr F I0813 20:08:15.506549 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.506560575+00:00 stderr F I0813 20:08:15.506555 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "ingress-operator" not found 2025-08-13T20:08:15.506572646+00:00 stderr F I0813 20:08:15.506565 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-api-controllers" not found 2025-08-13T20:08:15.506588206+00:00 stderr F I0813 20:08:15.506578 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.506600286+00:00 stderr F I0813 20:08:15.506585 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 
2025-08-13T20:08:15.506612357+00:00 stderr F I0813 20:08:15.506599 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-user-settings-admin" not found 2025-08-13T20:08:15.506624287+00:00 stderr F I0813 20:08:15.506611 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.506624287+00:00 stderr F I0813 20:08:15.506620 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506639688+00:00 stderr F I0813 20:08:15.506632 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506651758+00:00 stderr F I0813 20:08:15.506644 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506667328+00:00 stderr F I0813 20:08:15.506658 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-locking-openshift-controller-manager" not found 2025-08-13T20:08:15.506679559+00:00 stderr F I0813 20:08:15.506664 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "dns-operator" not found 2025-08-13T20:08:15.506679559+00:00 stderr F I0813 20:08:15.506674 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506747431+00:00 stderr F I0813 
20:08:15.506699 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506747431+00:00 stderr F I0813 20:08:15.506733 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506768191+00:00 stderr F I0813 20:08:15.506746 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.506768191+00:00 stderr F I0813 20:08:15.506762 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-image-registry-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506770 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "node-ca" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506827 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506852 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:sa-creating-openshift-controller-manager" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506861 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:sa-creating-route-controller-manager" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506873 1 
sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "ingress-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506880 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506921 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506939 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506950 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506965 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506978 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506991 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-election-lock-cluster-policy-controller" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.506998 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding 
ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507009 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507022 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:sa-listing-configmaps" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507039 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-autoscaler" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507045 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "cluster-autoscaler-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507053 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "control-plane-machine-set-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507058 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-api-controllers" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507066 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-api-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507075 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: 
role.rbac.authorization.k8s.io "prometheus-k8s-cluster-autoscaler-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507083 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s-machine-api-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507095 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "machine-config-daemon" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507107 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "mcc-prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507115 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "mcd-prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507122 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507134 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "marketplace-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507141 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "openshift-marketplace-metrics" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507153 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io 
"cluster-monitoring-operator-alert-customization" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507161 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "console-operator" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507173 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "whereabouts-cni" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507179 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507190 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "network-diagnostics" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507197 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507209 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "network-node-identity-leases" not found 2025-08-13T20:08:15.509924612+00:00 stderr F E0813 20:08:15.507232 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-node-identity" should be enqueued: namespace "openshift-network-node-identity" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507241 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 
2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507300 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:node-config-reader" not found 2025-08-13T20:08:15.509924612+00:00 stderr F I0813 20:08:15.507312 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509924612+00:00 stderr P I0813 20:08:15.507330 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "col 2025-08-13T20:08:15.509993354+00:00 stderr F lect-profiles" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507338 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "operator-lifecycle-manager-metrics" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507344 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "packageserver" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507354 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "packageserver-service-cert" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507374 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "openshift-ovn-kubernetes-control-plane-limited" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507382 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io 
"openshift-ovn-kubernetes-node-limited" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507389 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507401 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507437 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:leader-locking-openshift-route-controller-manager" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507453 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "prometheus-k8s" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507471 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:controller:service-ca" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507490 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "copied-csv-viewer" not found 2025-08-13T20:08:15.509993354+00:00 stderr F I0813 20:08:15.507498 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "shared-resource-viewer" not found 2025-08-13T20:08:15.511094655+00:00 stderr F I0813 20:08:15.510971 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.512133275+00:00 
stderr F E0813 20:08:15.511975 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-08-13T20:08:15.514202474+00:00 stderr F E0813 20:08:15.514156 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found 2025-08-13T20:08:15.524208091+00:00 stderr F I0813 20:08:15.524124 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.532220851+00:00 stderr F I0813 20:08:15.532160 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.545559513+00:00 stderr F I0813 20:08:15.545469 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.556140527+00:00 stderr F I0813 20:08:15.556044 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.583228713+00:00 stderr F I0813 20:08:15.583171 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.585195870+00:00 stderr F I0813 20:08:15.585160 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.596064472+00:00 stderr F I0813 20:08:15.595457 1 shared_informer.go:318] Caches are synced for resource quota 2025-08-13T20:08:15.596064472+00:00 stderr F I0813 20:08:15.595523 1 resource_quota_controller.go:496] "synced quota controller" 2025-08-13T20:08:15.752113946+00:00 stderr F I0813 20:08:15.752055 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 
2025-08-13T20:08:15.766668683+00:00 stderr F I0813 20:08:15.766543 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.791997529+00:00 stderr F I0813 20:08:15.791017 1 shared_informer.go:318] Caches are synced for privileged-namespaces-psa-label-syncer 2025-08-13T20:08:15.805262569+00:00 stderr F I0813 20:08:15.805174 1 base_controller.go:73] Caches are synced for namespace-security-allocation-controller 2025-08-13T20:08:15.805262569+00:00 stderr F I0813 20:08:15.805219 1 base_controller.go:110] Starting #1 worker of namespace-security-allocation-controller controller ... 2025-08-13T20:08:15.805317911+00:00 stderr F I0813 20:08:15.805294 1 namespace_scc_allocation_controller.go:111] Repairing SCC UID Allocations 2025-08-13T20:08:15.949999609+00:00 stderr F I0813 20:08:15.949577 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:15.970305261+00:00 stderr F I0813 20:08:15.968108 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.167957588+00:00 stderr F I0813 20:08:16.167440 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.343548593+00:00 stderr F I0813 20:08:16.343039 1 request.go:697] Waited for 1.173256768s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/resourcequotas?limit=500&resourceVersion=0 2025-08-13T20:08:16.346233830+00:00 stderr F I0813 20:08:16.346152 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.367920662+00:00 stderr F I0813 20:08:16.366461 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.370045532+00:00 stderr F I0813 20:08:16.369989 1 shared_informer.go:318] Caches are synced for resource quota 2025-08-13T20:08:16.508977916+00:00 stderr F I0813 20:08:16.508543 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.549842197+00:00 stderr F I0813 20:08:16.549722 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.568407570+00:00 stderr F I0813 20:08:16.568294 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.766668494+00:00 stderr F I0813 20:08:16.766564 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.945633085+00:00 stderr F I0813 20:08:16.945525 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:16.967295166+00:00 stderr F I0813 20:08:16.967193 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.086702110+00:00 stderr F I0813 20:08:17.085952 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.158487008+00:00 stderr F I0813 20:08:17.157991 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.172618403+00:00 stderr F W0813 20:08:17.172277 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:08:17.172618403+00:00 stderr F I0813 20:08:17.172391 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.181821517+00:00 stderr F W0813 20:08:17.181359 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-08-13T20:08:17.349080902+00:00 stderr F I0813 20:08:17.346494 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.363956099+00:00 stderr F I0813 20:08:17.363068 1 request.go:697] Waited for 2.191471622s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/monitoring.coreos.com/v1/servicemonitors?limit=500&resourceVersion=0 2025-08-13T20:08:17.369678783+00:00 stderr F I0813 20:08:17.369557 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.567909807+00:00 stderr F I0813 20:08:17.567537 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.577176012+00:00 stderr F I0813 20:08:17.577067 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.586982193+00:00 stderr F I0813 20:08:17.586130 1 base_controller.go:73] Caches are synced for pod-security-admission-label-synchronization-controller 2025-08-13T20:08:17.586982193+00:00 stderr F I0813 20:08:17.586172 1 base_controller.go:110] Starting #1 worker of pod-security-admission-label-synchronization-controller controller ... 
2025-08-13T20:08:17.768841898+00:00 stderr F I0813 20:08:17.768711 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.973583317+00:00 stderr F I0813 20:08:17.973528 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:17.980504715+00:00 stderr F I0813 20:08:17.980440 1 namespace_scc_allocation_controller.go:116] Repair complete 2025-08-13T20:08:18.168683820+00:00 stderr F I0813 20:08:18.168557 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:18.365096742+00:00 stderr F I0813 20:08:18.363927 1 request.go:697] Waited for 3.191556604s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/operators.coreos.com/v2/operatorconditions?limit=500&resourceVersion=0 2025-08-13T20:08:18.368047266+00:00 stderr F I0813 20:08:18.367957 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:18.567371701+00:00 stderr F I0813 20:08:18.567211 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:18.766389487+00:00 stderr F I0813 20:08:18.766128 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:18.966073862+00:00 stderr F I0813 20:08:18.965964 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.180138490+00:00 stderr F I0813 20:08:19.179751 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.364296150+00:00 
stderr F I0813 20:08:19.364147 1 request.go:697] Waited for 4.191066971s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0 2025-08-13T20:08:19.379091444+00:00 stderr F I0813 20:08:19.379038 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.568422642+00:00 stderr F I0813 20:08:19.568366 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.766052129+00:00 stderr F I0813 20:08:19.765998 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:19.967672679+00:00 stderr F I0813 20:08:19.967043 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.168539889+00:00 stderr F I0813 20:08:20.168358 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.366753172+00:00 stderr F I0813 20:08:20.366609 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.563195074+00:00 stderr F I0813 20:08:20.563064 1 request.go:697] Waited for 5.389329947s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/operators.coreos.com/v1/operatorgroups?limit=500&resourceVersion=0 2025-08-13T20:08:20.566910400+00:00 stderr F I0813 20:08:20.566494 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:08:20.767176112+00:00 stderr F I0813 20:08:20.767083 1 reflector.go:351] Caches populated for 
*v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:20.970139491+00:00 stderr F I0813 20:08:20.970051 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:21.166165592+00:00 stderr F I0813 20:08:21.166100 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:21.366337351+00:00 stderr F I0813 20:08:21.366152 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:21.564432569+00:00 stderr F I0813 20:08:21.564369 1 request.go:697] Waited for 6.389277006s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/monitoring.coreos.com/v1beta1/alertmanagerconfigs?limit=500&resourceVersion=0
2025-08-13T20:08:21.575270540+00:00 stderr F I0813 20:08:21.575083 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:21.766876404+00:00 stderr F I0813 20:08:21.766682 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:21.966526208+00:00 stderr F I0813 20:08:21.965911 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:21.970859722+00:00 stderr F I0813 20:08:21.970506 1 reconciliation_controller.go:224] synced cluster resource quota controller
2025-08-13T20:08:22.034338812+00:00 stderr F I0813 20:08:22.034242 1 reconciliation_controller.go:149] Caches are synced
2025-08-13T20:08:44.520618251+00:00 stderr F E0813 20:08:44.520526 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?allowWatchBookmarks=true&resourceVersion=32866&timeout=5m7s&timeoutSeconds=307&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding
2025-08-13T20:08:45.257388125+00:00 stderr F E0813 20:08:45.257309 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?allowWatchBookmarks=true&resourceVersion=32866&timeout=5m53s&timeoutSeconds=353&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding
2025-08-13T20:08:45.783930832+00:00 stderr F E0813 20:08:45.783822 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=32890&timeout=6m30s&timeoutSeconds=390&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding
2025-08-13T20:08:47.641227111+00:00 stderr F E0813 20:08:47.639719 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?allowWatchBookmarks=true&resourceVersion=32882&timeout=5m56s&timeoutSeconds=356&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding
2025-08-13T20:08:47.889473979+00:00 stderr F E0813 20:08:47.889166 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?allowWatchBookmarks=true&resourceVersion=32866&timeout=8m34s&timeoutSeconds=514&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:48.277980608+00:00 stderr F E0813 20:08:48.277870 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?allowWatchBookmarks=true&resourceVersion=32887&timeout=6m7s&timeoutSeconds=367&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding
2025-08-13T20:08:48.487870096+00:00 stderr F E0813 20:08:48.485466 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=32881&timeout=9m9s&timeoutSeconds=549&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding
2025-08-13T20:08:58.353763679+00:00 stderr F I0813 20:08:58.353644 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:58.468278553+00:00 stderr F I0813 20:08:58.468098 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:58.701085768+00:00 stderr F I0813 20:08:58.700666 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:58.937601149+00:00 stderr F I0813 20:08:58.937479 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:59.881007247+00:00 stderr F I0813 20:08:59.880767 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:59.987227123+00:00 stderr F I0813 20:08:59.987057 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:00.003355905+00:00 stderr F I0813 20:09:00.003258 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:00.150216806+00:00 stderr F I0813 20:09:00.150103 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:00.425437117+00:00 stderr F I0813 20:09:00.425289 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:00.492074917+00:00 stderr F I0813 20:09:00.491969 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:00.705735963+00:00 stderr F I0813 20:09:00.705607 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:00.908423513+00:00 stderr F I0813 20:09:00.908259 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:01.307012931+00:00 stderr F I0813 20:09:01.306948 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:01.332152642+00:00 stderr F I0813 20:09:01.332074 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:01.943401017+00:00 stderr F I0813 20:09:01.943318 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:02.165926027+00:00 stderr F I0813 20:09:02.165756 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:02.170353624+00:00 stderr F I0813 20:09:02.169458 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:02.200222970+00:00 stderr F I0813 20:09:02.198436 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:02.243060378+00:00 stderr F I0813 20:09:02.243000 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:02.276498097+00:00 stderr F I0813 20:09:02.276377 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:02.413029791+00:00 stderr F I0813 20:09:02.412912 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:02.474952837+00:00 stderr F I0813 20:09:02.471915 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:02.480103634+00:00 stderr F I0813 20:09:02.479473 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:02.714094713+00:00 stderr F I0813 20:09:02.713322 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:02.723976247+00:00 stderr F I0813 20:09:02.723848 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:03.241760852+00:00 stderr F I0813 20:09:03.241565 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:03.284594570+00:00 stderr F I0813 20:09:03.284437 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:03.503286530+00:00 stderr F I0813 20:09:03.503146 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:03.604591535+00:00 stderr F I0813 20:09:03.604525 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:03.722984019+00:00 stderr F I0813 20:09:03.722920 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:03.850325700+00:00 stderr F I0813 20:09:03.850225 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:03.850922437+00:00 stderr F I0813 20:09:03.850753 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found
2025-08-13T20:09:03.851303858+00:00 stderr F I0813 20:09:03.851165 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found
2025-08-13T20:09:04.098252009+00:00 stderr F I0813 20:09:04.097733 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:04.152472923+00:00 stderr F I0813 20:09:04.152204 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:04.225751164+00:00 stderr F I0813 20:09:04.225694 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:04.817556751+00:00 stderr F I0813 20:09:04.817408 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:04.831556452+00:00 stderr F W0813 20:09:04.831450 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:09:04.831616414+00:00 stderr F I0813 20:09:04.831588 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:04.840324364+00:00 stderr F W0813 20:09:04.840265 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:09:04.859071671+00:00 stderr F I0813 20:09:04.858997 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:04.894092585+00:00 stderr F I0813 20:09:04.893961 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:05.441834341+00:00 stderr F I0813 20:09:05.441682 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:05.474877109+00:00 stderr F I0813 20:09:05.474730 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:05.562097250+00:00 stderr F I0813 20:09:05.562031 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:05.640134787+00:00 stderr F I0813 20:09:05.640072 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:05.838480114+00:00 stderr F I0813 20:09:05.838389 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:05.990681308+00:00 stderr F I0813 20:09:05.990602 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:06.433181395+00:00 stderr F I0813 20:09:06.433114 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:06.474216631+00:00 stderr F I0813 20:09:06.474129 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:06.505855978+00:00 stderr F I0813 20:09:06.505680 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:06.513476227+00:00 stderr F I0813 20:09:06.513410 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:06.558408355+00:00 stderr F I0813 20:09:06.558346 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:06.571862281+00:00 stderr F I0813 20:09:06.571677 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:06.699625664+00:00 stderr F I0813 20:09:06.699451 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:06.951436104+00:00 stderr F I0813 20:09:06.951223 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:07.370529009+00:00 stderr F I0813 20:09:07.370423 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:07.459069147+00:00 stderr F I0813 20:09:07.458924 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:07.471986478+00:00 stderr F I0813 20:09:07.471871 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:07.600928745+00:00 stderr F I0813 20:09:07.600544 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:08.149093319+00:00 stderr F I0813 20:09:08.149020 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:08.218213131+00:00 stderr F I0813 20:09:08.218130 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:08.324632872+00:00 stderr F I0813 20:09:08.324517 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:08.624968103+00:00 stderr F I0813 20:09:08.624869 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:08.710337260+00:00 stderr F I0813 20:09:08.710168 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:08.856873891+00:00 stderr F I0813 20:09:08.856737 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:08.951410252+00:00 stderr F I0813 20:09:08.951241 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:08.958402552+00:00 stderr F I0813 20:09:08.958285 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:09.342542026+00:00 stderr F I0813 20:09:09.342458 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:09.516463073+00:00 stderr F I0813 20:09:09.516330 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:09.586271414+00:00 stderr F I0813 20:09:09.586068 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:10.136190901+00:00 stderr F I0813 20:09:10.136083 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:10.477340542+00:00 stderr F I0813 20:09:10.476685 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:10.552882668+00:00 stderr F I0813 20:09:10.552464 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:11.762035674+00:00 stderr F I0813 20:09:11.747513 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:11.815543008+00:00 stderr F I0813 20:09:11.813580 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:11.818041440+00:00 stderr F I0813 20:09:11.818002 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found
2025-08-13T20:09:11.818113032+00:00 stderr F I0813 20:09:11.818093 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found
2025-08-13T20:09:12.066879514+00:00 stderr F I0813 20:09:12.065303 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:12.315147813+00:00 stderr F I0813 20:09:12.303247 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:12.524755562+00:00 stderr F I0813 20:09:12.524662 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:12.720248007+00:00 stderr F I0813 20:09:12.720183 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:13.244186329+00:00 stderr F I0813 20:09:13.244048 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:13.257643605+00:00 stderr F I0813 20:09:13.257585 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:13.616384901+00:00 stderr F I0813 20:09:13.616285 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:13.735556497+00:00 stderr F I0813 20:09:13.735460 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:13.984755492+00:00 stderr F I0813 20:09:13.984631 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:17:31.923443489+00:00 stderr F W0813 20:17:31.922743 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:19:03.872750719+00:00 stderr F I0813 20:19:03.872574 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found
2025-08-13T20:19:03.872750719+00:00 stderr F I0813 20:19:03.872691 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found
2025-08-13T20:19:11.823569241+00:00 stderr F I0813 20:19:11.823055 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found
2025-08-13T20:19:11.823691854+00:00 stderr F I0813 20:19:11.823534 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found
2025-08-13T20:23:32.941069167+00:00 stderr F W0813 20:23:32.940701 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:29:03.858920750+00:00 stderr F I0813 20:29:03.858613 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found
2025-08-13T20:29:03.858920750+00:00 stderr F I0813 20:29:03.858685 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found
2025-08-13T20:29:11.824287689+00:00 stderr F I0813 20:29:11.824144 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found
2025-08-13T20:29:11.824287689+00:00 stderr F I0813 20:29:11.824213 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found
2025-08-13T20:30:08.952423486+00:00 stderr F W0813 20:30:08.951886 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:38:14.959442937+00:00 stderr F W0813 20:38:14.959259 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-08-13T20:39:03.857190733+00:00 stderr F I0813 20:39:03.856948 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found
2025-08-13T20:39:03.857190733+00:00 stderr F I0813 20:39:03.857044 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found
2025-08-13T20:39:11.824009178+00:00 stderr F I0813 20:39:11.823497 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found
2025-08-13T20:39:11.824009178+00:00 stderr F I0813 20:39:11.823576 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found
2025-08-13T20:42:36.401697921+00:00 stderr F I0813 20:42:36.401556 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.402032941+00:00 stderr F I0813 20:42:36.401959 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.452903 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453302 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453415 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453526 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453623 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.453837 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454083 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454201 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454384 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454477 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454593 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454704 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454871 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.454963 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.455028 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455103101+00:00 stderr F I0813 20:42:36.455092 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455254165+00:00 stderr F I0813 20:42:36.455171 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.455598315+00:00 stderr F I0813 20:42:36.455311 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.508540511+00:00 stderr F I0813 20:42:36.476285 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.509114978+00:00 stderr F I0813 20:42:36.476311 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.509639023+00:00 stderr F I0813 20:42:36.476325 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.510044045+00:00 stderr F I0813 20:42:36.476337 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.510289942+00:00 stderr F I0813 20:42:36.476350 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.510379984+00:00 stderr F I0813 20:42:36.476374 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.510477887+00:00 stderr F I0813 20:42:36.476387 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.510663292+00:00 stderr F I0813 20:42:36.476398 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.510663292+00:00 stderr F I0813 20:42:36.476411 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.510860668+00:00 stderr F I0813 20:42:36.476423 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.511060774+00:00 stderr F I0813 20:42:36.476435 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.511280550+00:00 stderr F I0813 20:42:36.476445 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.511485186+00:00 stderr F I0813 20:42:36.476728 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.511736753+00:00 stderr F I0813 20:42:36.476742 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.531584186+00:00 stderr F I0813 20:42:36.476757 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.531934476+00:00 stderr F I0813 20:42:36.476768 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.532140292+00:00 stderr F I0813 20:42:36.476838 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.532311397+00:00 stderr F I0813 20:42:36.476850 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.532883403+00:00 stderr F I0813 20:42:36.476862 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.537875467+00:00 stderr F I0813 20:42:36.476872 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.538035062+00:00 stderr F I0813 20:42:36.476884 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.538194606+00:00 stderr F I0813 20:42:36.476894 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.538423403+00:00 stderr F I0813 20:42:36.476905 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.538639449+00:00 stderr F I0813 20:42:36.476915 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.538853175+00:00 stderr F I0813 20:42:36.476927 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.539185355+00:00 stderr F I0813 20:42:36.476950 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.539359230+00:00 stderr F I0813 20:42:36.476961 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.540034479+00:00 stderr F I0813 20:42:36.476973 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.540474832+00:00 stderr F I0813 20:42:36.476984 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.542741847+00:00 stderr F I0813 20:42:36.476995 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.542931643+00:00 stderr F I0813 20:42:36.477110 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.543525590+00:00 stderr F I0813 20:42:36.477126 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.543853859+00:00 stderr F I0813 20:42:36.484817 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.549611285+00:00 stderr F I0813 20:42:36.484846 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.549737449+00:00 stderr F I0813 20:42:36.484864 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.550030577+00:00 stderr F I0813 20:42:36.484875 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.550115900+00:00 stderr F I0813 20:42:36.484887 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.550366187+00:00 stderr F I0813 20:42:36.484898 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.550463350+00:00 stderr F I0813 20:42:36.484911 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.554275140+00:00 stderr F I0813 20:42:36.484922 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.554705332+00:00 stderr F I0813 20:42:36.484934 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.554705332+00:00 stderr F I0813 20:42:36.484945 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.554825256+00:00 stderr F I0813 20:42:36.484958 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.555266918+00:00 stderr F I0813 20:42:36.485013 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.555976549+00:00 stderr F I0813 20:42:36.485027 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.556657328+00:00 stderr F I0813 20:42:36.485046 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.556854264+00:00 stderr F I0813 20:42:36.485063 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.557073040+00:00 stderr F I0813 20:42:36.485074 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.557320978+00:00 stderr F I0813 20:42:36.485085 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.557464872+00:00 stderr F I0813 20:42:36.485095 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.557752930+00:00 stderr F I0813 20:42:36.485108 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.557977447+00:00 stderr F I0813 20:42:36.485119 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.558130701+00:00 stderr F I0813 20:42:36.485138 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.558316766+00:00 stderr F I0813 20:42:36.485148 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.558438440+00:00 stderr F I0813 20:42:36.485159 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.558550093+00:00 stderr F I0813 20:42:36.485173 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568607083+00:00 stderr F I0813 20:42:36.485183 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.570719554+00:00 stderr F I0813 20:42:36.485201 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575013368+00:00 stderr F I0813 20:42:36.485212 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575123871+00:00 stderr F I0813 20:42:36.485257 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575261375+00:00 stderr F I0813 20:42:36.485276 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.043289618+00:00 stderr F I0813 20:42:37.042098 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 
2025-08-13T20:42:37.044261266+00:00 stderr F I0813 20:42:37.044132 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:37.045152852+00:00 stderr F I0813 20:42:37.044963 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:37.046348896+00:00 stderr F I0813 20:42:37.046268 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:37.046475130+00:00 stderr F I0813 20:42:37.046323 1 genericapiserver.go:603] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:37.046685336+00:00 stderr F I0813 20:42:37.046626 1 reconciliation_controller.go:159] Shutting down ClusterQuotaReconcilationController 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047620 1 secure_serving.go:258] Stopped listening on [::]:10357 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047658 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047672 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047765 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:37.048059266+00:00 stderr F I0813 20:42:37.047186 1 base_controller.go:172] Shutting down pod-security-admission-label-synchronization-controller ... 
2025-08-13T20:42:37.048577981+00:00 stderr F I0813 20:42:37.048472 1 clusterquotamapping.go:142] Shutting down ClusterQuotaMappingController controller 2025-08-13T20:42:37.048577981+00:00 stderr F I0813 20:42:37.048504 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:37.048577981+00:00 stderr F I0813 20:42:37.048523 1 base_controller.go:172] Shutting down namespace-security-allocation-controller ... 2025-08-13T20:42:37.048654573+00:00 stderr F I0813 20:42:37.048574 1 privileged_namespaces_controller.go:85] "Shutting down" controller="privileged-namespaces-psa-label-syncer" 2025-08-13T20:42:37.048654573+00:00 stderr F I0813 20:42:37.048640 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:37.048713745+00:00 stderr F I0813 20:42:37.048670 1 base_controller.go:172] Shutting down WebhookAuthenticatorCertApprover_csr-approver-controller ... 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.048937 1 base_controller.go:114] Shutting down worker of namespace-security-allocation-controller controller ... 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.048968 1 base_controller.go:104] All namespace-security-allocation-controller workers have been terminated 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.048544 1 genericapiserver.go:628] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.048996 1 genericapiserver.go:669] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.049010 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ... 
2025-08-13T20:42:37.049026484+00:00 stderr F I0813 20:42:37.049017 1 base_controller.go:104] All WebhookAuthenticatorCertApprover_csr-approver-controller workers have been terminated 2025-08-13T20:42:37.049174998+00:00 stderr F I0813 20:42:37.049126 1 resource_quota_monitor.go:339] "QuotaMonitor stopped monitors" stopped=72 total=72 2025-08-13T20:42:37.049174998+00:00 stderr F I0813 20:42:37.049155 1 resource_quota_monitor.go:340] "QuotaMonitor stopping" 2025-08-13T20:42:37.049209279+00:00 stderr F I0813 20:42:37.049130 1 resource_quota_monitor.go:339] "QuotaMonitor stopped monitors" stopped=1 total=1 2025-08-13T20:42:37.049356803+00:00 stderr F I0813 20:42:37.049337 1 resource_quota_monitor.go:340] "QuotaMonitor stopping" 2025-08-13T20:42:37.049442716+00:00 stderr F I0813 20:42:37.049427 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:37.049846687+00:00 stderr F I0813 20:42:37.049757 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:42:37.050170747+00:00 stderr F I0813 20:42:37.050063 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:42:37.050614049+00:00 stderr F I0813 20:42:37.050553 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050680411+00:00 stderr F I0813 20:42:37.050637 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050680411+00:00 stderr F I0813 20:42:37.050666 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050680411+00:00 stderr F I0813 20:42:37.050675 1 reconciliation_controller.go:367] resource quota controller worker shutting down 
2025-08-13T20:42:37.050695842+00:00 stderr F I0813 20:42:37.050683 1 reconciliation_controller.go:367] resource quota controller worker shutting down 2025-08-13T20:42:37.050711832+00:00 stderr F I0813 20:42:37.050702 1 base_controller.go:114] Shutting down worker of pod-security-admission-label-synchronization-controller controller ... 2025-08-13T20:42:37.050723803+00:00 stderr F I0813 20:42:37.050714 1 base_controller.go:104] All pod-security-admission-label-synchronization-controller workers have been terminated 2025-08-13T20:42:37.051066102+00:00 stderr F I0813 20:42:37.050975 1 resource_quota_controller.go:317] "Shutting down resource quota controller" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051005 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051076 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051096 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051102 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051106 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051116 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051119 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051131 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051142215+00:00 stderr F I0813 20:42:37.051132 1 resource_quota_controller.go:279] "resource quota 
controller worker shutting down" 2025-08-13T20:42:37.051166095+00:00 stderr F I0813 20:42:37.051141 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051166095+00:00 stderr F I0813 20:42:37.051146 1 resource_quota_controller.go:279] "resource quota controller worker shutting down" 2025-08-13T20:42:37.051279829+00:00 stderr F I0813 20:42:37.051203 1 builder.go:330] server exited 2025-08-13T20:42:37.058025303+00:00 stderr F E0813 20:42:37.056294 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock?timeout=1m47s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:42:37.058025303+00:00 stderr F W0813 20:42:37.056374 1 leaderelection.go:85] leader election lost 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/6.log
2025-12-13T00:10:44.718257212+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done' 2025-12-13T00:10:44.722395951+00:00 stderr F ++ ss -Htanop '(' sport = 10357 ')' 2025-12-13T00:10:44.728778687+00:00 stderr F + '[' -n '' ']' 2025-12-13T00:10:44.730418144+00:00 stderr F + exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2 
2025-12-13T00:10:44.879410935+00:00 stderr F I1213 00:10:44.879113 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:10:44.880724933+00:00 stderr F I1213 00:10:44.880702 1 observer_polling.go:159] Starting file observer 2025-12-13T00:10:44.888727595+00:00 stderr F I1213 00:10:44.888478 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88 2025-12-13T00:10:44.890447244+00:00 stderr F I1213 00:10:44.890409 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:11:01.783475166+00:00 stderr F I1213 00:11:01.783359 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-12-13T00:11:01.790958738+00:00 stderr F I1213 00:11:01.790212 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-12-13T00:11:01.790958738+00:00 stderr F I1213 00:11:01.790250 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-12-13T00:11:01.790958738+00:00 stderr F I1213 00:11:01.790268 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-12-13T00:11:01.790958738+00:00 stderr F I1213 00:11:01.790273 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-12-13T00:11:01.806720557+00:00 stderr F I1213 00:11:01.806635 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:11:01.807485768+00:00 stderr F I1213 00:11:01.807012 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-12-13T00:11:01.817000987+00:00 stderr F W1213 00:11:01.810679 1 
builder.go:358] unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.811203 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-controller-manager/cluster-policy-controller-lock... 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.812055 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ControlPlaneTopology' unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.814850 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.814874 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.814895 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.814926 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.814961 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.814895 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.815510 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.815880 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-12-13 00:11:01.815827544 +0000 UTC))" 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.816397 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584645\" (2025-12-12 23:10:44 +0000 UTC to 2026-12-12 23:10:44 +0000 UTC (now=2025-12-13 00:11:01.816370389 +0000 UTC))" 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.816423 1 secure_serving.go:213] Serving securely on [::]:10357 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 00:11:01.816447 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-12-13T00:11:01.817000987+00:00 stderr F I1213 
00:11:01.816463 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:11:01.820881722+00:00 stderr F I1213 00:11:01.820162 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:11:01.820881722+00:00 stderr F I1213 00:11:01.820173 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:11:01.820881722+00:00 stderr F I1213 00:11:01.820461 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:11:01.916105450+00:00 stderr F I1213 00:11:01.916028 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:11:01.916105450+00:00 stderr F I1213 00:11:01.916063 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:11:01.916138731+00:00 stderr F I1213 00:11:01.916104 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:11:01.916362287+00:00 stderr F I1213 00:11:01.916332 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:11:01.916306205 +0000 UTC))" 2025-12-13T00:11:01.916652184+00:00 stderr F I1213 00:11:01.916636 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 
certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-12-13 00:11:01.916612403 +0000 UTC))" 2025-12-13T00:11:01.917007854+00:00 stderr F I1213 00:11:01.916977 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584645\" (2025-12-12 23:10:44 +0000 UTC to 2026-12-12 23:10:44 +0000 UTC (now=2025-12-13 00:11:01.916960793 +0000 UTC))" 2025-12-13T00:11:01.917169838+00:00 stderr F I1213 00:11:01.917152 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:11:01.917104537 +0000 UTC))" 2025-12-13T00:11:01.917177078+00:00 stderr F I1213 00:11:01.917172 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:11:01.917162318 +0000 UTC))" 2025-12-13T00:11:01.917216010+00:00 stderr F I1213 00:11:01.917196 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:11:01.917178839 +0000 UTC))" 2025-12-13T00:11:01.917244280+00:00 stderr F I1213 00:11:01.917228 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:11:01.917210279 +0000 UTC))" 2025-12-13T00:11:01.917265621+00:00 stderr F I1213 00:11:01.917251 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:11:01.91723594 +0000 UTC))" 2025-12-13T00:11:01.917272441+00:00 stderr F I1213 00:11:01.917266 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:11:01.917256421 +0000 UTC))" 2025-12-13T00:11:01.917285171+00:00 stderr F I1213 00:11:01.917280 
1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:11:01.917270441 +0000 UTC))" 2025-12-13T00:11:01.917307712+00:00 stderr F I1213 00:11:01.917295 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:11:01.917284471 +0000 UTC))" 2025-12-13T00:11:01.917314562+00:00 stderr F I1213 00:11:01.917310 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:11:01.917299852 +0000 UTC))" 2025-12-13T00:11:01.917340803+00:00 stderr F I1213 00:11:01.917324 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:11:01.917315742 +0000 UTC))" 2025-12-13T00:11:01.917348183+00:00 stderr F I1213 00:11:01.917343 1 tlsconfig.go:178] "Loaded 
client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:11:01.917333093 +0000 UTC))" 2025-12-13T00:11:01.917639242+00:00 stderr F I1213 00:11:01.917603 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-12-13 00:11:01.917589001 +0000 UTC))" 2025-12-13T00:11:01.917879348+00:00 stderr F I1213 00:11:01.917854 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584645\" (2025-12-12 23:10:44 +0000 UTC to 2026-12-12 23:10:44 +0000 UTC (now=2025-12-13 00:11:01.917843087 +0000 UTC))" 2025-12-13T00:13:20.377212322+00:00 stderr F I1213 00:13:20.376849 1 leaderelection.go:260] successfully acquired lease openshift-kube-controller-manager/cluster-policy-controller-lock 2025-12-13T00:13:20.377760050+00:00 stderr F I1213 00:13:20.377697 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-controller-manager", Name:"cluster-policy-controller-lock", UID:"bb093f33-8655-47de-8ab9-7ce6fce91fc7", APIVersion:"coordination.k8s.io/v1", 
ResourceVersion:"40462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_0ef79bab-1eb8-4f19-b8c6-12186a71e73f became leader 2025-12-13T00:13:20.386622468+00:00 stderr F I1213 00:13:20.386554 1 policy_controller.go:78] Starting "openshift.io/cluster-quota-reconciliation" 2025-12-13T00:13:20.456318140+00:00 stderr F E1213 00:13:20.456258 1 reconciliation_controller.go:121] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: apps.openshift.io/v1: stale GroupVersion discovery: apps.openshift.io/v1, authorization.openshift.io/v1: stale GroupVersion discovery: authorization.openshift.io/v1, build.openshift.io/v1: stale GroupVersion discovery: build.openshift.io/v1, image.openshift.io/v1: stale GroupVersion discovery: image.openshift.io/v1, oauth.openshift.io/v1: stale GroupVersion discovery: oauth.openshift.io/v1, project.openshift.io/v1: stale GroupVersion discovery: project.openshift.io/v1, quota.openshift.io/v1: stale GroupVersion discovery: quota.openshift.io/v1, route.openshift.io/v1: stale GroupVersion discovery: route.openshift.io/v1, security.openshift.io/v1: stale GroupVersion discovery: security.openshift.io/v1, template.openshift.io/v1: stale GroupVersion discovery: template.openshift.io/v1, user.openshift.io/v1: stale GroupVersion discovery: user.openshift.io/v1 2025-12-13T00:13:20.456671331+00:00 stderr F I1213 00:13:20.456632 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps" 2025-12-13T00:13:20.456686092+00:00 stderr F I1213 00:13:20.456675 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagers.monitoring.coreos.com" 2025-12-13T00:13:20.456709653+00:00 stderr F I1213 00:13:20.456693 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io" 
2025-12-13T00:13:20.456718813+00:00 stderr F I1213 00:13:20.456709 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="thanosrulers.monitoring.coreos.com" 2025-12-13T00:13:20.456729563+00:00 stderr F I1213 00:13:20.456723 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinesets.machine.openshift.io" 2025-12-13T00:13:20.456758134+00:00 stderr F I1213 00:13:20.456738 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorconditions.operators.coreos.com" 2025-12-13T00:13:20.456767425+00:00 stderr F I1213 00:13:20.456758 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorgroups.operators.coreos.com" 2025-12-13T00:13:20.456777495+00:00 stderr F I1213 00:13:20.456772 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps" 2025-12-13T00:13:20.456813996+00:00 stderr F I1213 00:13:20.456792 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io" 2025-12-13T00:13:20.456824696+00:00 stderr F I1213 00:13:20.456812 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddresses.ipam.cluster.x-k8s.io" 2025-12-13T00:13:20.456863708+00:00 stderr F I1213 00:13:20.456839 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io" 2025-12-13T00:13:20.456863708+00:00 stderr F I1213 00:13:20.456857 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates" 2025-12-13T00:13:20.456906749+00:00 stderr F I1213 00:13:20.456873 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagerconfigs.monitoring.coreos.com" 2025-12-13T00:13:20.456906749+00:00 stderr F I1213 00:13:20.456890 1 resource_quota_monitor.go:224] 
"QuotaMonitor created object count evaluator" resource="endpoints" 2025-12-13T00:13:20.456925400+00:00 stderr F I1213 00:13:20.456909 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges" 2025-12-13T00:13:20.456972551+00:00 stderr F I1213 00:13:20.456949 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch" 2025-12-13T00:13:20.457008313+00:00 stderr F I1213 00:13:20.456988 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podmonitors.monitoring.coreos.com" 2025-12-13T00:13:20.457018033+00:00 stderr F I1213 00:13:20.457010 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="subscriptions.operators.coreos.com" 2025-12-13T00:13:20.457051554+00:00 stderr F I1213 00:13:20.457030 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io" 2025-12-13T00:13:20.457061094+00:00 stderr F I1213 00:13:20.457054 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps" 2025-12-13T00:13:20.457106186+00:00 stderr F I1213 00:13:20.457083 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="projecthelmchartrepositories.helm.openshift.io" 2025-12-13T00:13:20.457116266+00:00 stderr F I1213 00:13:20.457111 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddressclaims.ipam.cluster.x-k8s.io" 2025-12-13T00:13:20.457143117+00:00 stderr F I1213 00:13:20.457124 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machineautoscalers.autoscaling.openshift.io" 2025-12-13T00:13:20.457154197+00:00 stderr F I1213 00:13:20.457147 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controlplanemachinesets.machine.openshift.io" 
2025-12-13T00:13:20.457184328+00:00 stderr F I1213 00:13:20.457165 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediations.infrastructure.cluster.x-k8s.io"
2025-12-13T00:13:20.457193949+00:00 stderr F I1213 00:13:20.457183 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="servicemonitors.monitoring.coreos.com"
2025-12-13T00:13:20.457204719+00:00 stderr F I1213 00:13:20.457199 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheuses.monitoring.coreos.com"
2025-12-13T00:13:20.457261392+00:00 stderr F I1213 00:13:20.457226 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertingrules.monitoring.openshift.io"
2025-12-13T00:13:20.457261392+00:00 stderr F I1213 00:13:20.457254 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresscontrollers.operator.openshift.io"
2025-12-13T00:13:20.457578803+00:00 stderr F I1213 00:13:20.457271 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
2025-12-13T00:13:20.457578803+00:00 stderr F I1213 00:13:20.457333 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machines.machine.openshift.io"
2025-12-13T00:13:20.457578803+00:00 stderr F I1213 00:13:20.457356 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinehealthchecks.machine.openshift.io"
2025-12-13T00:13:20.457578803+00:00 stderr F I1213 00:13:20.457390 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
2025-12-13T00:13:20.457578803+00:00 stderr F I1213 00:13:20.457422 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
2025-12-13T00:13:20.457578803+00:00 stderr F I1213 00:13:20.457447 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
2025-12-13T00:13:20.457578803+00:00 stderr F I1213 00:13:20.457474 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressfirewalls.k8s.ovn.org"
2025-12-13T00:13:20.457578803+00:00 stderr F I1213 00:13:20.457495 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertrelabelconfigs.monitoring.openshift.io"
2025-12-13T00:13:20.457578803+00:00 stderr F I1213 00:13:20.457523 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="installplans.operators.coreos.com"
2025-12-13T00:13:20.457578803+00:00 stderr F I1213 00:13:20.457550 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
2025-12-13T00:13:20.457578803+00:00 stderr F I1213 00:13:20.457572 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
2025-12-13T00:13:20.457617434+00:00 stderr F I1213 00:13:20.457593 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
2025-12-13T00:13:20.457626534+00:00 stderr F I1213 00:13:20.457616 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
2025-12-13T00:13:20.457671456+00:00 stderr F I1213 00:13:20.457640 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressqoses.k8s.ovn.org"
2025-12-13T00:13:20.457681356+00:00 stderr F I1213 00:13:20.457670 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="clusterserviceversions.operators.coreos.com"
2025-12-13T00:13:20.457772179+00:00 stderr F I1213 00:13:20.457749 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ippools.whereabouts.cni.cncf.io"
2025-12-13T00:13:20.457810650+00:00 stderr F I1213 00:13:20.457783 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="dnsrecords.ingress.operator.openshift.io"
2025-12-13T00:13:20.457810650+00:00 stderr F I1213 00:13:20.457807 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="network-attachment-definitions.k8s.cni.cncf.io"
2025-12-13T00:13:20.457848502+00:00 stderr F I1213 00:13:20.457827 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorpkis.network.operator.openshift.io"
2025-12-13T00:13:20.457858962+00:00 stderr F I1213 00:13:20.457848 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressservices.k8s.ovn.org"
2025-12-13T00:13:20.457888253+00:00 stderr F I1213 00:13:20.457869 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheusrules.monitoring.coreos.com"
2025-12-13T00:13:20.457899453+00:00 stderr F I1213 00:13:20.457894 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="catalogsources.operators.coreos.com"
2025-12-13T00:13:20.458025547+00:00 stderr F I1213 00:13:20.457958 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
2025-12-13T00:13:20.458025547+00:00 stderr F I1213 00:13:20.458006 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
2025-12-13T00:13:20.458041088+00:00 stderr F I1213 00:13:20.458034 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressrouters.network.operator.openshift.io"
2025-12-13T00:13:20.458079979+00:00 stderr F I1213 00:13:20.458059 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
2025-12-13T00:13:20.458138391+00:00 stderr F I1213 00:13:20.458118 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io"
2025-12-13T00:13:20.458168932+00:00 stderr F I1213 00:13:20.458151 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="probes.monitoring.coreos.com"
2025-12-13T00:13:20.458194683+00:00 stderr F I1213 00:13:20.458176 1 policy_controller.go:88] Started "openshift.io/cluster-quota-reconciliation"
2025-12-13T00:13:20.458194683+00:00 stderr F I1213 00:13:20.458187 1 policy_controller.go:78] Starting "openshift.io/cluster-csr-approver"
2025-12-13T00:13:20.458409770+00:00 stderr F I1213 00:13:20.458386 1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller
2025-12-13T00:13:20.458444691+00:00 stderr F I1213 00:13:20.458423 1 reconciliation_controller.go:140] Starting the cluster quota reconciliation controller
2025-12-13T00:13:20.458595896+00:00 stderr F I1213 00:13:20.458573 1 resource_quota_monitor.go:305] "QuotaMonitor running"
2025-12-13T00:13:20.464030169+00:00 stderr F I1213 00:13:20.463994 1 policy_controller.go:88] Started "openshift.io/cluster-csr-approver"
2025-12-13T00:13:20.464030169+00:00 stderr F I1213 00:13:20.464009 1 policy_controller.go:78] Starting "openshift.io/podsecurity-admission-label-syncer"
2025-12-13T00:13:20.464134462+00:00 stderr F I1213 00:13:20.464111 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_csr-approver-controller
2025-12-13T00:13:20.475497644+00:00 stderr F I1213 00:13:20.469385 1 reconciliation_controller.go:171] error occurred GetQuotableResources err=unable to retrieve the complete list of server APIs: apps.openshift.io/v1: stale GroupVersion discovery: apps.openshift.io/v1, authorization.openshift.io/v1: stale GroupVersion discovery: authorization.openshift.io/v1, build.openshift.io/v1: stale GroupVersion discovery: build.openshift.io/v1, image.openshift.io/v1: stale GroupVersion discovery: image.openshift.io/v1, oauth.openshift.io/v1: stale GroupVersion discovery: oauth.openshift.io/v1, project.openshift.io/v1: stale GroupVersion discovery: project.openshift.io/v1, quota.openshift.io/v1: stale GroupVersion discovery: quota.openshift.io/v1, route.openshift.io/v1: stale GroupVersion discovery: route.openshift.io/v1, security.openshift.io/v1: stale GroupVersion discovery: security.openshift.io/v1, template.openshift.io/v1: stale GroupVersion discovery: template.openshift.io/v1, user.openshift.io/v1: stale GroupVersion discovery: user.openshift.io/v1
2025-12-13T00:13:20.475533555+00:00 stderr F E1213 00:13:20.475466 1 reconciliation_controller.go:172] unable to retrieve the complete list of server APIs: apps.openshift.io/v1: stale GroupVersion discovery: apps.openshift.io/v1, authorization.openshift.io/v1: stale GroupVersion discovery: authorization.openshift.io/v1, build.openshift.io/v1: stale GroupVersion discovery: build.openshift.io/v1, image.openshift.io/v1: stale GroupVersion discovery: image.openshift.io/v1, oauth.openshift.io/v1: stale GroupVersion discovery: oauth.openshift.io/v1, project.openshift.io/v1: stale GroupVersion discovery: project.openshift.io/v1, quota.openshift.io/v1: stale GroupVersion discovery: quota.openshift.io/v1, route.openshift.io/v1: stale GroupVersion discovery: route.openshift.io/v1, security.openshift.io/v1: stale GroupVersion discovery: security.openshift.io/v1, template.openshift.io/v1: stale GroupVersion discovery: template.openshift.io/v1, user.openshift.io/v1: stale GroupVersion discovery: user.openshift.io/v1
2025-12-13T00:13:20.475794224+00:00 stderr F I1213 00:13:20.475766 1 reconciliation_controller.go:207] syncing resource quota controller with updated resources from discovery: added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies operator.openshift.io/v1, Resource=ingresscontrollers operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles storage.k8s.io/v1, Resource=csistoragecapacities whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []
2025-12-13T00:13:20.481634530+00:00 stderr F I1213 00:13:20.481588 1 policy_controller.go:88] Started "openshift.io/podsecurity-admission-label-syncer"
2025-12-13T00:13:20.481634530+00:00 stderr F I1213 00:13:20.481624 1 policy_controller.go:78] Starting "openshift.io/privileged-namespaces-psa-label-syncer"
2025-12-13T00:13:20.481662481+00:00 stderr F I1213 00:13:20.481635 1 base_controller.go:67] Waiting for caches to sync for pod-security-admission-label-synchronization-controller
2025-12-13T00:13:20.486616518+00:00 stderr F I1213 00:13:20.486546 1 policy_controller.go:88] Started "openshift.io/privileged-namespaces-psa-label-syncer"
2025-12-13T00:13:20.486616518+00:00 stderr F I1213 00:13:20.486587 1 policy_controller.go:78] Starting "openshift.io/namespace-security-allocation"
2025-12-13T00:13:20.486675560+00:00 stderr F I1213 00:13:20.486656 1 privileged_namespaces_controller.go:75] "Starting" controller="privileged-namespaces-psa-label-syncer"
2025-12-13T00:13:20.486675560+00:00 stderr F I1213 00:13:20.486669 1 shared_informer.go:311] Waiting for caches to sync for privileged-namespaces-psa-label-syncer
2025-12-13T00:13:20.500456323+00:00 stderr F I1213 00:13:20.500025 1 policy_controller.go:88] Started "openshift.io/namespace-security-allocation"
2025-12-13T00:13:20.500520445+00:00 stderr F I1213 00:13:20.500505 1 policy_controller.go:78] Starting "openshift.io/resourcequota"
2025-12-13T00:13:20.500602358+00:00 stderr F I1213 00:13:20.500587 1 base_controller.go:67] Waiting for caches to sync for namespace-security-allocation-controller
2025-12-13T00:13:20.802477002+00:00 stderr F I1213 00:13:20.802428 1 policy_controller.go:88] Started "openshift.io/resourcequota"
2025-12-13T00:13:20.802477002+00:00 stderr F I1213 00:13:20.802448 1 policy_controller.go:91] Started Origin Controllers
2025-12-13T00:13:20.802543754+00:00 stderr F I1213 00:13:20.802522 1 resource_quota_controller.go:294] "Starting resource quota controller"
2025-12-13T00:13:20.802595186+00:00 stderr F I1213 00:13:20.802575 1 shared_informer.go:311] Waiting for caches to sync for resource quota
2025-12-13T00:13:20.802732200+00:00 stderr F I1213 00:13:20.802699 1 resource_quota_monitor.go:305] "QuotaMonitor running"
2025-12-13T00:13:20.847915798+00:00 stderr F I1213 00:13:20.847852 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.858963460+00:00 stderr F I1213 00:13:20.858881 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.861174824+00:00 stderr F I1213 00:13:20.861139 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.861910889+00:00 stderr F I1213 00:13:20.861889 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.867994113+00:00 stderr F I1213 00:13:20.867961 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.868135628+00:00 stderr F I1213 00:13:20.868097 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.868412397+00:00 stderr F I1213 00:13:20.868383 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.868457568+00:00 stderr F I1213 00:13:20.868445 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.870505117+00:00 stderr F I1213 00:13:20.870452 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.870524188+00:00 stderr F I1213 00:13:20.870513 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.870717044+00:00 stderr F I1213 00:13:20.870683 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.870728225+00:00 stderr F I1213 00:13:20.870702 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.870911111+00:00 stderr F I1213 00:13:20.870845 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.871002114+00:00 stderr F I1213 00:13:20.870974 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.871119198+00:00 stderr F W1213 00:13:20.871093 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-12-13T00:13:20.871136658+00:00 stderr F E1213 00:13:20.871120 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-12-13T00:13:20.871192490+00:00 stderr F I1213 00:13:20.871166 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.871349516+00:00 stderr F I1213 00:13:20.871309 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.871488710+00:00 stderr F I1213 00:13:20.871459 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.872177484+00:00 stderr F I1213 00:13:20.872145 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.873751336+00:00 stderr F I1213 00:13:20.873678 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.875723823+00:00 stderr F I1213 00:13:20.875700 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.877881826+00:00 stderr F I1213 00:13:20.877859 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.883456033+00:00 stderr F I1213 00:13:20.883403 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.885016315+00:00 stderr F I1213 00:13:20.884980 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:20.886091332+00:00 stderr F I1213 00:13:20.886058 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "basic-user" not found
2025-12-13T00:13:20.886106172+00:00 stderr F I1213 00:13:20.886089 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2025-12-13T00:13:20.886146753+00:00 stderr F I1213 00:13:20.886132 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2025-12-13T00:13:20.886154044+00:00 stderr F I1213 00:13:20.886145 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-autoscaler" not found
2025-12-13T00:13:20.886160644+00:00 stderr F I1213 00:13:20.886155 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-autoscaler-operator" not found
2025-12-13T00:13:20.886193755+00:00 stderr F I1213 00:13:20.886181 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-monitoring-operator" not found
2025-12-13T00:13:20.886203265+00:00 stderr F I1213 00:13:20.886199 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2025-12-13T00:13:20.886219066+00:00 stderr F I1213 00:13:20.886206 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-reader" not found
2025-12-13T00:13:20.886219066+00:00 stderr F I1213 00:13:20.886216 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-samples-operator" not found
2025-12-13T00:13:20.886241607+00:00 stderr F I1213 00:13:20.886228 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-samples-operator-imageconfig-reader" not found
2025-12-13T00:13:20.886248817+00:00 stderr F I1213 00:13:20.886240 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-samples-operator-proxy-reader" not found
2025-12-13T00:13:20.886255407+00:00 stderr F I1213 00:13:20.886248 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-status" not found
2025-12-13T00:13:20.886261947+00:00 stderr F I1213 00:13:20.886256 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2025-12-13T00:13:20.886280538+00:00 stderr F I1213 00:13:20.886267 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "console" not found
2025-12-13T00:13:20.886287948+00:00 stderr F I1213 00:13:20.886280 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found
2025-12-13T00:13:20.886294838+00:00 stderr F I1213 00:13:20.886289 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found
2025-12-13T00:13:20.886313199+00:00 stderr F I1213 00:13:20.886301 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "console-operator" not found
2025-12-13T00:13:20.886320289+00:00 stderr F I1213 00:13:20.886311 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found
2025-12-13T00:13:20.886350770+00:00 stderr F I1213 00:13:20.886338 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "control-plane-machine-set-operator" not found
2025-12-13T00:13:20.886845777+00:00 stderr F I1213 00:13:20.886349 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found
2025-12-13T00:13:20.886870907+00:00 stderr F I1213 00:13:20.886853 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-provisioner-runner" not found
2025-12-13T00:13:20.886870907+00:00 stderr F I1213 00:13:20.886865 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-provisioner-runner" not found
2025-12-13T00:13:20.886880468+00:00 stderr F I1213 00:13:20.886874 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "csi-snapshot-controller-operator-clusterrole" not found
2025-12-13T00:13:20.886893248+00:00 stderr F I1213 00:13:20.886887 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2025-12-13T00:13:20.886899968+00:00 stderr F I1213 00:13:20.886895 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-image-registry-operator" not found
2025-12-13T00:13:20.886908159+00:00 stderr F I1213 00:13:20.886903 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "dns-monitoring" not found
2025-12-13T00:13:20.886949380+00:00 stderr F I1213 00:13:20.886911 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "helm-chartrepos-viewer" not found
2025-12-13T00:13:20.886949380+00:00 stderr F I1213 00:13:20.886924 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "kube-apiserver" not found
2025-12-13T00:13:20.886960520+00:00 stderr F I1213 00:13:20.886948 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2025-12-13T00:13:20.886960520+00:00 stderr F I1213 00:13:20.886956 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-api-controllers" not found
2025-12-13T00:13:20.886982791+00:00 stderr F I1213 00:13:20.886969 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-api-controllers-metal3-remediation" not found
2025-12-13T00:13:20.886982791+00:00 stderr F I1213 00:13:20.886979 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-api-operator" not found
2025-12-13T00:13:20.887003182+00:00 stderr F I1213 00:13:20.886989 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-api-operator-ext-remediation" not found
2025-12-13T00:13:20.887003182+00:00 stderr F I1213 00:13:20.886998 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-controller" not found
2025-12-13T00:13:20.887024473+00:00 stderr F I1213 00:13:20.887009 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-daemon" not found
2025-12-13T00:13:20.887024473+00:00 stderr F I1213 00:13:20.887019 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-config-server" not found
2025-12-13T00:13:20.887033773+00:00 stderr F I1213 00:13:20.887027 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "machine-os-builder" not found
2025-12-13T00:13:20.887043813+00:00 stderr F I1213 00:13:20.887039 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:scc:anyuid" not found
2025-12-13T00:13:20.887060354+00:00 stderr F I1213 00:13:20.887048 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "marketplace-operator" not found
2025-12-13T00:13:20.887060354+00:00 stderr F I1213 00:13:20.887056 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "metrics-daemon-role" not found
2025-12-13T00:13:20.887086665+00:00 stderr F I1213 00:13:20.887070 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus-admission-controller-webhook" not found
2025-12-13T00:13:20.887086665+00:00 stderr F I1213 00:13:20.887081 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus-ancillary-tools" not found
2025-12-13T00:13:20.887095545+00:00 stderr F I1213 00:13:20.887089 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus-ancillary-tools" not found
2025-12-13T00:13:20.887102205+00:00 stderr F I1213 00:13:20.887096 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus" not found
2025-12-13T00:13:20.887124086+00:00 stderr F I1213 00:13:20.887111 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "multus-ancillary-tools" not found
2025-12-13T00:13:20.887131246+00:00 stderr F I1213 00:13:20.887123 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "whereabouts-cni" not found
2025-12-13T00:13:20.887137786+00:00 stderr F I1213 00:13:20.887131 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "network-diagnostics" not found
2025-12-13T00:13:20.887144337+00:00 stderr F I1213 00:13:20.887139 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "network-node-identity" not found
2025-12-13T00:13:20.887164947+00:00 stderr F I1213 00:13:20.887152 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:operator-lifecycle-manager" not found
2025-12-13T00:13:20.887164947+00:00 stderr F I1213 00:13:20.887162 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-dns" not found
2025-12-13T00:13:20.887185648+00:00 stderr F I1213 00:13:20.887172 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-dns-operator" not found
2025-12-13T00:13:20.887192768+00:00 stderr F I1213 00:13:20.887186 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-pruner" not found
2025-12-13T00:13:20.887200858+00:00 stderr F I1213 00:13:20.887196 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ingress-operator" not found
2025-12-13T00:13:20.887208949+00:00 stderr F I1213 00:13:20.887205 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ingress-router" not found
2025-12-13T00:13:20.887232710+00:00 stderr F I1213 00:13:20.887218 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-iptables-alerter" not found
2025-12-13T00:13:20.887232710+00:00 stderr F I1213 00:13:20.887229 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ovn-kubernetes-control-plane-limited" not found
2025-12-13T00:13:20.887252770+00:00 stderr F I1213 00:13:20.887239 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ovn-kubernetes-node-limited" not found
2025-12-13T00:13:20.887252770+00:00 stderr F I1213 00:13:20.887249 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "openshift-ovn-kubernetes-kube-rbac-proxy" not found
2025-12-13T00:13:20.887274331+00:00 stderr F I1213 00:13:20.887262 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found
2025-12-13T00:13:20.887643333+00:00 stderr F I1213 00:13:20.887612 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "prometheus-k8s-scheduler-resources" not found
2025-12-13T00:13:20.888181961+00:00 stderr F I1213 00:13:20.888156 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "registry-monitoring" not found
2025-12-13T00:13:20.888181961+00:00 stderr F I1213 00:13:20.888171 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:registry" not found
2025-12-13T00:13:20.888198932+00:00 stderr F I1213 00:13:20.888179 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "router-monitoring" not found
2025-12-13T00:13:20.888198932+00:00 stderr F I1213 00:13:20.888191 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found
2025-12-13T00:13:20.888206232+00:00 stderr F I1213 00:13:20.888199 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "self-provisioner" not found
2025-12-13T00:13:20.888212902+00:00 stderr F I1213 00:13:20.888207 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
2025-12-13T00:13:20.888235113+00:00 stderr F I1213 00:13:20.888218 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-bootstrapper" not found
2025-12-13T00:13:20.888235113+00:00 stderr F I1213 00:13:20.888229 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found
2025-12-13T00:13:20.888244293+00:00 stderr F I1213 00:13:20.888237 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io
"system:basic-user" not found 2025-12-13T00:13:20.888256964+00:00 stderr F I1213 00:13:20.888247 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found 2025-12-13T00:13:20.888263614+00:00 stderr F I1213 00:13:20.888259 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found 2025-12-13T00:13:20.888271764+00:00 stderr F I1213 00:13:20.888266 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found 2025-12-13T00:13:20.888279904+00:00 stderr F I1213 00:13:20.888275 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:attachdetach-controller" not found 2025-12-13T00:13:20.888287955+00:00 stderr F I1213 00:13:20.888283 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:certificate-controller" not found 2025-12-13T00:13:20.888308285+00:00 stderr F I1213 00:13:20.888295 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:clusterrole-aggregation-controller" not found 2025-12-13T00:13:20.888308285+00:00 stderr F I1213 00:13:20.888305 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:cronjob-controller" not found 2025-12-13T00:13:20.888332236+00:00 stderr F I1213 
00:13:20.888315 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:daemon-set-controller" not found 2025-12-13T00:13:20.888341176+00:00 stderr F I1213 00:13:20.888334 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:deployment-controller" not found 2025-12-13T00:13:20.888349407+00:00 stderr F I1213 00:13:20.888342 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:disruption-controller" not found 2025-12-13T00:13:20.888357537+00:00 stderr F I1213 00:13:20.888350 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpoint-controller" not found 2025-12-13T00:13:20.888381458+00:00 stderr F I1213 00:13:20.888363 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpointslice-controller" not found 2025-12-13T00:13:20.888381458+00:00 stderr F I1213 00:13:20.888376 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpointslicemirroring-controller" not found 2025-12-13T00:13:20.888390458+00:00 stderr F I1213 00:13:20.888385 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ephemeral-volume-controller" not found 2025-12-13T00:13:20.888398518+00:00 stderr F I1213 00:13:20.888392 1 sccrolecache.go:466] failed to retrieve a 
role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:expand-controller" not found 2025-12-13T00:13:20.888416059+00:00 stderr F I1213 00:13:20.888404 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:generic-garbage-collector" not found 2025-12-13T00:13:20.888416059+00:00 stderr F I1213 00:13:20.888412 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:horizontal-pod-autoscaler" not found 2025-12-13T00:13:20.888425699+00:00 stderr F I1213 00:13:20.888420 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:job-controller" not found 2025-12-13T00:13:20.888447720+00:00 stderr F I1213 00:13:20.888428 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:legacy-service-account-token-cleaner" not found 2025-12-13T00:13:20.888447720+00:00 stderr F I1213 00:13:20.888442 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:namespace-controller" not found 2025-12-13T00:13:20.888455260+00:00 stderr F I1213 00:13:20.888450 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:node-controller" not found 2025-12-13T00:13:20.888463390+00:00 stderr F I1213 00:13:20.888458 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: 
clusterrole.rbac.authorization.k8s.io "system:controller:persistent-volume-binder" not found 2025-12-13T00:13:20.888481631+00:00 stderr F I1213 00:13:20.888469 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pod-garbage-collector" not found 2025-12-13T00:13:20.888488781+00:00 stderr F I1213 00:13:20.888480 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pv-protection-controller" not found 2025-12-13T00:13:20.888495372+00:00 stderr F I1213 00:13:20.888488 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pvc-protection-controller" not found 2025-12-13T00:13:20.888504252+00:00 stderr F I1213 00:13:20.888500 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:replicaset-controller" not found 2025-12-13T00:13:20.978274928+00:00 stderr F I1213 00:13:20.888656 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:replication-controller" not found 2025-12-13T00:13:20.978307269+00:00 stderr F I1213 00:13:20.978271 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:resourcequota-controller" not found 2025-12-13T00:13:20.978307269+00:00 stderr F I1213 00:13:20.978287 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io 
"system:controller:root-ca-cert-publisher" not found 2025-12-13T00:13:20.978307269+00:00 stderr F I1213 00:13:20.978296 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:route-controller" not found 2025-12-13T00:13:20.978323251+00:00 stderr F I1213 00:13:20.978306 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-account-controller" not found 2025-12-13T00:13:20.978323251+00:00 stderr F I1213 00:13:20.978315 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-ca-cert-publisher" not found 2025-12-13T00:13:20.978332421+00:00 stderr F I1213 00:13:20.978325 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-controller" not found 2025-12-13T00:13:20.978339381+00:00 stderr F I1213 00:13:20.978334 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:statefulset-controller" not found 2025-12-13T00:13:20.978348222+00:00 stderr F I1213 00:13:20.978344 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ttl-after-finished-controller" not found 2025-12-13T00:13:20.978357792+00:00 stderr F I1213 00:13:20.978352 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ttl-controller" not found 
2025-12-13T00:13:20.978385783+00:00 stderr F I1213 00:13:20.978362 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:deployer" not found 2025-12-13T00:13:20.978385783+00:00 stderr F I1213 00:13:20.978378 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:discovery" not found 2025-12-13T00:13:20.978393763+00:00 stderr F I1213 00:13:20.978387 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-builder" not found 2025-12-13T00:13:20.978400803+00:00 stderr F I1213 00:13:20.978394 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found 2025-12-13T00:13:20.978409394+00:00 stderr F I1213 00:13:20.978404 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found 2025-12-13T00:13:20.978466016+00:00 stderr F I1213 00:13:20.978449 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-dns" not found 2025-12-13T00:13:20.978466016+00:00 stderr F I1213 00:13:20.978463 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found 2025-12-13T00:13:20.978477016+00:00 stderr F I1213 00:13:20.978471 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: 
clusterrole.rbac.authorization.k8s.io "system:master" not found 2025-12-13T00:13:20.978486646+00:00 stderr F I1213 00:13:20.978481 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:monitoring" not found 2025-12-13T00:13:20.978493687+00:00 stderr F I1213 00:13:20.978488 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node" not found 2025-12-13T00:13:20.978502217+00:00 stderr F I1213 00:13:20.978496 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-admin" not found 2025-12-13T00:13:20.978509277+00:00 stderr F I1213 00:13:20.978504 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-admin" not found 2025-12-13T00:13:20.978517907+00:00 stderr F I1213 00:13:20.978512 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-bootstrapper" not found 2025-12-13T00:13:20.978524668+00:00 stderr F I1213 00:13:20.978519 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found 2025-12-13T00:13:20.978549108+00:00 stderr F I1213 00:13:20.978529 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found 2025-12-13T00:13:20.978549108+00:00 stderr F I1213 00:13:20.978541 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve 
clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found 2025-12-13T00:13:20.978558969+00:00 stderr F I1213 00:13:20.978551 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:build-config-change-controller" not found 2025-12-13T00:13:20.978567449+00:00 stderr F I1213 00:13:20.978559 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:build-controller" not found 2025-12-13T00:13:20.978611450+00:00 stderr F I1213 00:13:20.978568 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:cluster-csr-approver-controller" not found 2025-12-13T00:13:20.978611450+00:00 stderr F I1213 00:13:20.978605 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:cluster-quota-reconciliation-controller" not found 2025-12-13T00:13:20.978657892+00:00 stderr F I1213 00:13:20.978635 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:default-rolebindings-controller" not found 2025-12-13T00:13:20.978657892+00:00 stderr F I1213 00:13:20.978653 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:deployer-controller" not found 2025-12-13T00:13:20.978668022+00:00 stderr F I1213 00:13:20.978662 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't 
retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:deploymentconfig-controller" not found 2025-12-13T00:13:20.978678153+00:00 stderr F I1213 00:13:20.978671 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:horizontal-pod-autoscaler" not found 2025-12-13T00:13:20.978685103+00:00 stderr F I1213 00:13:20.978680 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:image-import-controller" not found 2025-12-13T00:13:20.978692083+00:00 stderr F I1213 00:13:20.978687 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:image-trigger-controller" not found 2025-12-13T00:13:20.978714284+00:00 stderr F I1213 00:13:20.978697 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found 2025-12-13T00:13:20.978737065+00:00 stderr F I1213 00:13:20.978720 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:check-endpoints-crd-reader" not found 2025-12-13T00:13:20.978737065+00:00 stderr F I1213 00:13:20.978734 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:check-endpoints-node-reader" not found 2025-12-13T00:13:20.978746565+00:00 stderr F I1213 00:13:20.978741 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve 
clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:machine-approver" not found 2025-12-13T00:13:20.978755625+00:00 stderr F I1213 00:13:20.978750 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:namespace-security-allocation-controller" not found 2025-12-13T00:13:20.978764386+00:00 stderr F I1213 00:13:20.978759 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:origin-namespace-controller" not found 2025-12-13T00:13:20.978790516+00:00 stderr F I1213 00:13:20.978774 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:podsecurity-admission-label-syncer-controller" not found 2025-12-13T00:13:20.978798077+00:00 stderr F I1213 00:13:20.978787 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:privileged-namespaces-psa-label-syncer" not found 2025-12-13T00:13:20.978805057+00:00 stderr F I1213 00:13:20.978797 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:pv-recycler-controller" not found 2025-12-13T00:13:20.978811987+00:00 stderr F I1213 00:13:20.978805 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:resourcequota-controller" not found 2025-12-13T00:13:20.978818887+00:00 stderr F I1213 00:13:20.978814 1 sccrolecache.go:466] failed to 
retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:service-ca" not found 2025-12-13T00:13:20.978828058+00:00 stderr F I1213 00:13:20.978822 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:service-ingress-ip-controller" not found 2025-12-13T00:13:20.978836578+00:00 stderr F I1213 00:13:20.978832 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:serviceaccount-controller" not found 2025-12-13T00:13:20.978845528+00:00 stderr F I1213 00:13:20.978840 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:serviceaccount-pull-secrets-controller" not found 2025-12-13T00:13:20.978868779+00:00 stderr F I1213 00:13:20.978850 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:template-instance-controller" not found 2025-12-13T00:13:20.978868779+00:00 stderr F I1213 00:13:20.978862 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "admin" not found 2025-12-13T00:13:20.978876629+00:00 stderr F I1213 00:13:20.978871 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:template-instance-finalizer-controller" not found 2025-12-13T00:13:20.978885080+00:00 stderr F I1213 00:13:20.978878 1 sccrolecache.go:466] failed to retrieve a 
role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "admin" not found 2025-12-13T00:13:20.978909210+00:00 stderr F I1213 00:13:20.978879 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_csr-approver-controller 2025-12-13T00:13:20.978916811+00:00 stderr F I1213 00:13:20.978905 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ... 2025-12-13T00:13:20.978990063+00:00 stderr F I1213 00:13:20.978889 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:template-service-broker" not found 2025-12-13T00:13:20.979003313+00:00 stderr F I1213 00:13:20.978996 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:unidling-controller" not found 2025-12-13T00:13:20.979011924+00:00 stderr F I1213 00:13:20.979007 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found 2025-12-13T00:13:20.979018884+00:00 stderr F I1213 00:13:20.979012 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979026004+00:00 stderr F I1213 00:13:20.979018 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979034665+00:00 stderr F I1213 00:13:20.979024 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: 
clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979034665+00:00 stderr F I1213 00:13:20.979031 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-controller-manager" not found 2025-12-13T00:13:20.979052845+00:00 stderr F I1213 00:13:20.979039 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-controller-manager:image-trigger-controller" not found 2025-12-13T00:13:20.979063345+00:00 stderr F I1213 00:13:20.979051 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-controller-manager:ingress-to-route-controller" not found 2025-12-13T00:13:20.979063345+00:00 stderr F I1213 00:13:20.979057 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-controller-manager:update-buildconfig-status" not found 2025-12-13T00:13:20.979072156+00:00 stderr F I1213 00:13:20.979064 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:openshift-route-controller-manager" not found 2025-12-13T00:13:20.979079956+00:00 stderr F I1213 00:13:20.979071 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979086866+00:00 stderr F I1213 00:13:20.979078 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: 
clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979086866+00:00 stderr F I1213 00:13:20.979084 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:operator:etcd-backup-role" not found 2025-12-13T00:13:20.979094056+00:00 stderr F I1213 00:13:20.979090 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979100847+00:00 stderr F I1213 00:13:20.979095 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979107767+00:00 stderr F I1213 00:13:20.979100 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979107767+00:00 stderr F I1213 00:13:20.979105 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979114857+00:00 stderr F I1213 00:13:20.979110 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979121607+00:00 stderr F I1213 00:13:20.979115 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979128508+00:00 stderr F I1213 00:13:20.979121 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole 
from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found 2025-12-13T00:13:20.979128508+00:00 stderr F I1213 00:13:20.979125 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979138268+00:00 stderr F I1213 00:13:20.979131 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979138268+00:00 stderr F I1213 00:13:20.979135 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979145488+00:00 stderr F I1213 00:13:20.979141 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979152228+00:00 stderr F I1213 00:13:20.979146 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979159099+00:00 stderr F I1213 00:13:20.979151 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979159099+00:00 stderr F I1213 00:13:20.979156 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979166619+00:00 stderr F I1213 00:13:20.979161 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from 
role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979177749+00:00 stderr F I1213 00:13:20.979166 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found 2025-12-13T00:13:20.979177749+00:00 stderr F I1213 00:13:20.979171 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found 2025-12-13T00:13:20.979191600+00:00 stderr F I1213 00:13:20.979176 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:scc:restricted-v2" not found 2025-12-13T00:13:20.979191600+00:00 stderr F I1213 00:13:20.979182 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:tokenreview-openshift-controller-manager" not found 2025-12-13T00:13:20.979191600+00:00 stderr F I1213 00:13:20.979187 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:tokenreview-openshift-route-controller-manager" not found 2025-12-13T00:13:20.979203260+00:00 stderr F I1213 00:13:20.979195 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:useroauthaccesstoken-manager" not found 2025-12-13T00:13:20.979211700+00:00 stderr F I1213 00:13:20.979203 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found 
2025-12-13T00:13:20.979211700+00:00 stderr F I1213 00:13:20.979208 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found 2025-12-13T00:13:20.979225111+00:00 stderr F I1213 00:13:20.979213 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:sdn-reader" not found 2025-12-13T00:13:20.979225111+00:00 stderr F I1213 00:13:20.979219 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found 2025-12-13T00:13:20.979233511+00:00 stderr F I1213 00:13:20.979223 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found 2025-12-13T00:13:20.979233511+00:00 stderr F I1213 00:13:20.979229 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:webhook" not found 2025-12-13T00:13:20.980348588+00:00 stderr F I1213 00:13:20.979900 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:20.980681939+00:00 stderr F I1213 00:13:20.980657 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:20.982725748+00:00 stderr F I1213 00:13:20.982700 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:20.983311758+00:00 stderr F E1213 00:13:20.983282 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace 
"openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-12-13T00:13:20.983763943+00:00 stderr F E1213 00:13:20.983739 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found 2025-12-13T00:13:20.987102036+00:00 stderr F I1213 00:13:20.987058 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:20.988853124+00:00 stderr F I1213 00:13:20.988793 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:20.992015590+00:00 stderr F I1213 00:13:20.991970 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:20.992144754+00:00 stderr F I1213 00:13:20.992116 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-12-13T00:13:20.993525501+00:00 stderr F E1213 00:13:20.993498 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found 2025-12-13T00:13:20.994291307+00:00 stderr F E1213 00:13:20.994020 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-node-identity" should be enqueued: namespace "openshift-network-node-identity" not found 2025-12-13T00:13:20.999018126+00:00 stderr F I1213 00:13:20.994855 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:20.999018126+00:00 stderr F E1213 00:13:20.994994 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be 
enqueued: namespace "openshift-service-ca" not found 2025-12-13T00:13:21.002963718+00:00 stderr F I1213 00:13:21.002896 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:21.011042210+00:00 stderr F I1213 00:13:21.010993 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:21.011814235+00:00 stderr F I1213 00:13:21.011755 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:21.198629813+00:00 stderr F I1213 00:13:21.198570 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:21.211060721+00:00 stderr F I1213 00:13:21.210069 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:21.413833925+00:00 stderr F I1213 00:13:21.413773 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:21.419150953+00:00 stderr F I1213 00:13:21.419105 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419307 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "default" should be enqueued: namespace "default" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419336 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "default" should be enqueued: namespace "default" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419346 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "default" should be enqueued: namespace "default" not 
found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419383 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "hostpath-provisioner" should be enqueued: namespace "hostpath-provisioner" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419419 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "hostpath-provisioner" should be enqueued: namespace "hostpath-provisioner" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419456 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "hostpath-provisioner" should be enqueued: namespace "hostpath-provisioner" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419466 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "hostpath-provisioner" should be enqueued: namespace "hostpath-provisioner" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419474 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "hostpath-provisioner" should be enqueued: namespace "hostpath-provisioner" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419493 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-node-lease" should be enqueued: namespace "kube-node-lease" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419500 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-node-lease" should be enqueued: namespace "kube-node-lease" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419522 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-node-lease" should be enqueued: namespace "kube-node-lease" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419529 1 podsecurity_label_sync_controller.go:420] failed to determine whether 
namespace "kube-public" should be enqueued: namespace "kube-public" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419535 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-public" should be enqueued: namespace "kube-public" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419540 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-public" should be enqueued: namespace "kube-public" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419546 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419550 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419554 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419580 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419602 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419610 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419617 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: 
namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419624 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419630 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419646 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419653 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419665 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419672 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419679 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419684 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419688 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 
2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419693 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419701 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419716 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419721 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419726 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419732 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419738 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419753 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419760 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F 
E1213 00:13:21.419765 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419769 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419774 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419778 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419789 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr F E1213 00:13:21.419804 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420277781+00:00 stderr P E1213 00:13:21.419809 1 podsecurity_label_sync_controller.go:420] failed to determine 2025-12-13T00:13:21.420330743+00:00 stderr F whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419817 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419828 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found 2025-12-13T00:13:21.420330743+00:00 stderr F 
E1213 00:13:21.419837 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver-operator" should be enqueued: namespace "openshift-apiserver-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419859 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver-operator" should be enqueued: namespace "openshift-apiserver-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419865 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver-operator" should be enqueued: namespace "openshift-apiserver-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419881 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver-operator" should be enqueued: namespace "openshift-apiserver-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419891 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver" should be enqueued: namespace "openshift-apiserver" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419898 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver" should be enqueued: namespace "openshift-apiserver" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419905 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver" should be enqueued: namespace "openshift-apiserver" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419912 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-apiserver" should be enqueued: namespace "openshift-apiserver" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419920 1 
podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication-operator" should be enqueued: namespace "openshift-authentication-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419956 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication-operator" should be enqueued: namespace "openshift-authentication-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419963 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication-operator" should be enqueued: namespace "openshift-authentication-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419976 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication-operator" should be enqueued: namespace "openshift-authentication-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419981 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication" should be enqueued: namespace "openshift-authentication" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419986 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication" should be enqueued: namespace "openshift-authentication" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.419996 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication" should be enqueued: namespace "openshift-authentication" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420000 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-authentication" should be enqueued: namespace "openshift-authentication" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 
00:13:21.420018 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cloud-network-config-controller" should be enqueued: namespace "openshift-cloud-network-config-controller" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420029 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cloud-network-config-controller" should be enqueued: namespace "openshift-cloud-network-config-controller" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420033 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cloud-network-config-controller" should be enqueued: namespace "openshift-cloud-network-config-controller" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420068 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cloud-platform-infra" should be enqueued: namespace "openshift-cloud-platform-infra" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420083 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cloud-platform-infra" should be enqueued: namespace "openshift-cloud-platform-infra" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420093 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cloud-platform-infra" should be enqueued: namespace "openshift-cloud-platform-infra" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420099 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-machine-approver" should be enqueued: namespace "openshift-cluster-machine-approver" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420112 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-machine-approver" 
should be enqueued: namespace "openshift-cluster-machine-approver" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420119 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-machine-approver" should be enqueued: namespace "openshift-cluster-machine-approver" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420126 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-machine-approver" should be enqueued: namespace "openshift-cluster-machine-approver" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420131 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-samples-operator" should be enqueued: namespace "openshift-cluster-samples-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420137 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-samples-operator" should be enqueued: namespace "openshift-cluster-samples-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420142 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-samples-operator" should be enqueued: namespace "openshift-cluster-samples-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420149 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-samples-operator" should be enqueued: namespace "openshift-cluster-samples-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420183 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-storage-operator" should be enqueued: namespace "openshift-cluster-storage-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420195 1 
podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-storage-operator" should be enqueued: namespace "openshift-cluster-storage-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420203 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-storage-operator" should be enqueued: namespace "openshift-cluster-storage-operator" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420209 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-version" should be enqueued: namespace "openshift-cluster-version" not found 2025-12-13T00:13:21.420330743+00:00 stderr F E1213 00:13:21.420216 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-version" should be enqueued: namespace "openshift-cluster-version" not found 2025-12-13T00:13:21.420330743+00:00 stderr P E1213 00:13:21.420225 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-cluster-version" should be enqueued: namespace "openshift-clust 2025-12-13T00:13:21.420358573+00:00 stderr F er-version" not found 2025-12-13T00:13:21.420358573+00:00 stderr F E1213 00:13:21.420234 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-managed" should be enqueued: namespace "openshift-config-managed" not found 2025-12-13T00:13:21.420358573+00:00 stderr F E1213 00:13:21.420240 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-managed" should be enqueued: namespace "openshift-config-managed" not found 2025-12-13T00:13:21.420358573+00:00 stderr F E1213 00:13:21.420262 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-managed" should be enqueued: namespace "openshift-config-managed" not found 
2025-12-13T00:13:21.420358573+00:00 stderr F E1213 00:13:21.420270 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-operator" should be enqueued: namespace "openshift-config-operator" not found 2025-12-13T00:13:21.420358573+00:00 stderr F E1213 00:13:21.420275 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-operator" should be enqueued: namespace "openshift-config-operator" not found 2025-12-13T00:13:21.420358573+00:00 stderr F E1213 00:13:21.420290 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-operator" should be enqueued: namespace "openshift-config-operator" not found 2025-12-13T00:13:21.420358573+00:00 stderr F E1213 00:13:21.420295 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config-operator" should be enqueued: namespace "openshift-config-operator" not found 2025-12-13T00:13:21.420358573+00:00 stderr F E1213 00:13:21.420299 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config" should be enqueued: namespace "openshift-config" not found 2025-12-13T00:13:21.420358573+00:00 stderr F E1213 00:13:21.420304 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config" should be enqueued: namespace "openshift-config" not found 2025-12-13T00:13:21.420358573+00:00 stderr F E1213 00:13:21.420317 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-config" should be enqueued: namespace "openshift-config" not found 2025-12-13T00:13:21.420358573+00:00 stderr F E1213 00:13:21.420333 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console-operator" should be enqueued: namespace "openshift-console-operator" not found 2025-12-13T00:13:21.420358573+00:00 stderr F E1213 00:13:21.420352 1 
podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console-operator" should be enqueued: namespace "openshift-console-operator" not found
2025-12-13T00:13:21.420392885+00:00 stderr F E1213 00:13:21.420386 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console-user-settings" should be enqueued: namespace "openshift-console-user-settings" not found
2025-12-13T00:13:21.420442646+00:00 stderr F E1213 00:13:21.420412 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-console" should be enqueued: namespace "openshift-console" not found
2025-12-13T00:13:21.420472707+00:00 stderr F E1213 00:13:21.420457 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-controller-manager-operator" should be enqueued: namespace "openshift-controller-manager-operator" not found
2025-12-13T00:13:21.420500518+00:00 stderr F E1213 00:13:21.420495 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-controller-manager" should be enqueued: namespace "openshift-controller-manager" not found
2025-12-13T00:13:21.420544360+00:00 stderr F E1213 00:13:21.420536 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-dns-operator" should be enqueued: namespace "openshift-dns-operator" not found
2025-12-13T00:13:21.420571571+00:00 stderr F E1213 00:13:21.420564 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-dns" should be enqueued: namespace "openshift-dns" not found
2025-12-13T00:13:21.420635893+00:00 stderr F E1213 00:13:21.420629 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-etcd-operator" should be enqueued: namespace "openshift-etcd-operator" not found
2025-12-13T00:13:21.420672154+00:00 stderr F E1213 00:13:21.420664 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-etcd" should be enqueued: namespace "openshift-etcd" not found
2025-12-13T00:13:21.420706535+00:00 stderr F E1213 00:13:21.420701 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-host-network" should be enqueued: namespace "openshift-host-network" not found
2025-12-13T00:13:21.420729516+00:00 stderr F E1213 00:13:21.420724 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-image-registry" should be enqueued: namespace "openshift-image-registry" not found
2025-12-13T00:13:21.420792318+00:00 stderr F E1213 00:13:21.420782 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found
2025-12-13T00:13:21.421046677+00:00 stderr F E1213 00:13:21.421042 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress-canary" should be enqueued: namespace "openshift-ingress-canary" not found
2025-12-13T00:13:21.421060728+00:00 stderr F E1213 00:13:21.421057 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress-operator" should be enqueued: namespace "openshift-ingress-operator" not found
2025-12-13T00:13:21.421125830+00:00 stderr F E1213 00:13:21.421111 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ingress" should be enqueued: namespace "openshift-ingress" not found
2025-12-13T00:13:21.421174352+00:00 stderr F E1213 00:13:21.421168 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kni-infra" should be enqueued: namespace "openshift-kni-infra" not found
2025-12-13T00:13:21.421228583+00:00 stderr F E1213 00:13:21.421216 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-apiserver-operator" should be enqueued: namespace "openshift-kube-apiserver-operator" not found
2025-12-13T00:13:21.421239624+00:00 stderr F E1213 00:13:21.421236 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-apiserver" should be enqueued: namespace "openshift-kube-apiserver" not found
2025-12-13T00:13:21.421269855+00:00 stderr F E1213 00:13:21.421263 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-controller-manager-operator" should be enqueued: namespace "openshift-kube-controller-manager-operator" not found
2025-12-13T00:13:21.421356888+00:00 stderr F E1213 00:13:21.421343 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-controller-manager" should be enqueued: namespace "openshift-kube-controller-manager" not found
2025-12-13T00:13:21.421413040+00:00 stderr F E1213 00:13:21.421404 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-scheduler-operator" should be enqueued: namespace "openshift-kube-scheduler-operator" not found
2025-12-13T00:13:21.421505493+00:00 stderr F E1213 00:13:21.421473 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-scheduler" should be enqueued: namespace "openshift-kube-scheduler" not found
2025-12-13T00:13:21.421562835+00:00 stderr F E1213 00:13:21.421539 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-storage-version-migrator-operator" should be enqueued: namespace "openshift-kube-storage-version-migrator-operator" not found
2025-12-13T00:13:21.421609636+00:00 stderr F E1213 00:13:21.421599 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-kube-storage-version-migrator" should be enqueued: namespace "openshift-kube-storage-version-migrator" not found
2025-12-13T00:13:21.421670058+00:00 stderr F E1213 00:13:21.421664 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-api" should be enqueued: namespace "openshift-machine-api" not found
2025-12-13T00:13:21.421816593+00:00 stderr F E1213 00:13:21.421805 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-machine-config-operator" should be enqueued: namespace "openshift-machine-config-operator" not found
2025-12-13T00:13:21.421991569+00:00 stderr F E1213 00:13:21.421956 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-marketplace" should be enqueued: namespace "openshift-marketplace" not found
2025-12-13T00:13:21.422051631+00:00 stderr F E1213 00:13:21.422047 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-monitoring" should be enqueued: namespace "openshift-monitoring" not found
2025-12-13T00:13:21.422113293+00:00 stderr F E1213 00:13:21.422099 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-multus" should be enqueued: namespace "openshift-multus" not found
2025-12-13T00:13:21.422187685+00:00 stderr F E1213 00:13:21.422182 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-diagnostics" should be enqueued: namespace "openshift-network-diagnostics" not found
2025-12-13T00:13:21.422240927+00:00 stderr F E1213 00:13:21.422210 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-diagnostics" should be enqueued:
namespace "openshift-network-diagnostics" not found 2025-12-13T00:13:21.422248057+00:00 stderr F E1213 00:13:21.422239 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-diagnostics" should be enqueued: namespace "openshift-network-diagnostics" not found 2025-12-13T00:13:21.422291609+00:00 stderr F E1213 00:13:21.422257 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-node-identity" should be enqueued: namespace "openshift-network-node-identity" not found 2025-12-13T00:13:21.422291609+00:00 stderr F E1213 00:13:21.422266 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-node-identity" should be enqueued: namespace "openshift-network-node-identity" not found 2025-12-13T00:13:21.422335440+00:00 stderr F E1213 00:13:21.422301 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-node-identity" should be enqueued: namespace "openshift-network-node-identity" not found 2025-12-13T00:13:21.422335440+00:00 stderr F E1213 00:13:21.422319 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-node-identity" should be enqueued: namespace "openshift-network-node-identity" not found 2025-12-13T00:13:21.422335440+00:00 stderr F E1213 00:13:21.422326 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-operator" should be enqueued: namespace "openshift-network-operator" not found 2025-12-13T00:13:21.422344410+00:00 stderr F E1213 00:13:21.422334 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-operator" should be enqueued: namespace "openshift-network-operator" not found 2025-12-13T00:13:21.422344410+00:00 stderr F E1213 00:13:21.422341 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace 
"openshift-network-operator" should be enqueued: namespace "openshift-network-operator" not found 2025-12-13T00:13:21.422371701+00:00 stderr F E1213 00:13:21.422349 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-operator" should be enqueued: namespace "openshift-network-operator" not found 2025-12-13T00:13:21.422371701+00:00 stderr F E1213 00:13:21.422360 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-network-operator" should be enqueued: namespace "openshift-network-operator" not found 2025-12-13T00:13:21.422371701+00:00 stderr F E1213 00:13:21.422368 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-node" should be enqueued: namespace "openshift-node" not found 2025-12-13T00:13:21.422379732+00:00 stderr F E1213 00:13:21.422374 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-node" should be enqueued: namespace "openshift-node" not found 2025-12-13T00:13:21.422386352+00:00 stderr F E1213 00:13:21.422379 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-node" should be enqueued: namespace "openshift-node" not found 2025-12-13T00:13:21.422394492+00:00 stderr F E1213 00:13:21.422389 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-nutanix-infra" should be enqueued: namespace "openshift-nutanix-infra" not found 2025-12-13T00:13:21.422401082+00:00 stderr F E1213 00:13:21.422395 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-nutanix-infra" should be enqueued: namespace "openshift-nutanix-infra" not found 2025-12-13T00:13:21.422407663+00:00 stderr F E1213 00:13:21.422401 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-nutanix-infra" should be enqueued: namespace "openshift-nutanix-infra" not found 
2025-12-13T00:13:21.422426773+00:00 stderr F E1213 00:13:21.422418 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-oauth-apiserver" should be enqueued: namespace "openshift-oauth-apiserver" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422437 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-oauth-apiserver" should be enqueued: namespace "openshift-oauth-apiserver" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422444 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-oauth-apiserver" should be enqueued: namespace "openshift-oauth-apiserver" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422450 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-oauth-apiserver" should be enqueued: namespace "openshift-oauth-apiserver" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422476 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-openstack-infra" should be enqueued: namespace "openshift-openstack-infra" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422483 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-openstack-infra" should be enqueued: namespace "openshift-openstack-infra" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422488 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-openstack-infra" should be enqueued: namespace "openshift-openstack-infra" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422520 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operator-lifecycle-manager" should be enqueued: namespace "openshift-operator-lifecycle-manager" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422529 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operator-lifecycle-manager" should be enqueued: namespace "openshift-operator-lifecycle-manager" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422536 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operator-lifecycle-manager" should be enqueued: namespace "openshift-operator-lifecycle-manager" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422556 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operator-lifecycle-manager" should be enqueued: namespace "openshift-operator-lifecycle-manager" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422562 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operator-lifecycle-manager" should be enqueued: namespace "openshift-operator-lifecycle-manager" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422569 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operators" should be enqueued: namespace "openshift-operators" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422577 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operators" should be enqueued: namespace "openshift-operators" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422583 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-operators" should be enqueued: namespace "openshift-operators" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422588 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovirt-infra" should be enqueued: namespace "openshift-ovirt-infra" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422603 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovirt-infra" should be enqueued: namespace "openshift-ovirt-infra" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422608 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovirt-infra" should be enqueued: namespace "openshift-ovirt-infra" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422613 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422627 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422632 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422638 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422652 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-ovn-kubernetes" should be enqueued: namespace "openshift-ovn-kubernetes" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422656 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-route-controller-manager" should be enqueued: namespace "openshift-route-controller-manager" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422661 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-route-controller-manager" should be enqueued: namespace "openshift-route-controller-manager" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422675 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-route-controller-manager" should be enqueued: namespace "openshift-route-controller-manager" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422683 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-route-controller-manager" should be enqueued: namespace "openshift-route-controller-manager" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422700 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca-operator" should be enqueued: namespace "openshift-service-ca-operator" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422706 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca-operator" should be enqueued: namespace "openshift-service-ca-operator" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422727 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca-operator" should be enqueued: namespace "openshift-service-ca-operator" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422734 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca-operator" should be enqueued: namespace "openshift-service-ca-operator" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422740 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422746 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422769 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422789 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-service-ca" should be enqueued: namespace "openshift-service-ca" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422795 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-user-workload-monitoring" should be enqueued: namespace "openshift-user-workload-monitoring" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422810 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-user-workload-monitoring" should be enqueued: namespace "openshift-user-workload-monitoring" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422827 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-user-workload-monitoring" should be enqueued: namespace "openshift-user-workload-monitoring" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422835 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-vsphere-infra" should be enqueued: namespace "openshift-vsphere-infra" not found
2025-12-13T00:13:21.423036724+00:00 stderr F E1213 00:13:21.422847 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-vsphere-infra" should be enqueued: namespace "openshift-vsphere-infra" not found
2025-12-13T00:13:21.423036724+00:00 stderr P E1213 00
2025-12-13T00:13:21.423087065+00:00 stderr F :13:21.422886 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift-vsphere-infra" should be enqueued: namespace "openshift-vsphere-infra" not found
2025-12-13T00:13:21.423087065+00:00 stderr F E1213 00:13:21.422894 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift" should be enqueued: namespace "openshift" not found
2025-12-13T00:13:21.423087065+00:00 stderr F E1213 00:13:21.422907 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift" should be enqueued: namespace "openshift" not found
2025-12-13T00:13:21.423087065+00:00 stderr F E1213 00:13:21.422917 1 podsecurity_label_sync_controller.go:420] failed to determine whether namespace "openshift" should be enqueued: namespace "openshift" not found
2025-12-13T00:13:21.612453359+00:00 stderr F I1213 00:13:21.612403 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:21.646100359+00:00 stderr F I1213 00:13:21.646051 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:21.797001370+00:00 stderr F I1213 00:13:21.795629 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:21.813041638+00:00 stderr F I1213 00:13:21.812239 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:21.994483895+00:00 stderr F I1213 00:13:21.993359 1 request.go:697] Waited for 1.186666325s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0
2025-12-13T00:13:22.002084281+00:00 stderr F I1213 00:13:22.001108 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:22.009521311+00:00 stderr F I1213 00:13:22.009445 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:22.082352289+00:00 stderr F I1213 00:13:22.082279 1 base_controller.go:73] Caches are synced for pod-security-admission-label-synchronization-controller
2025-12-13T00:13:22.082352289+00:00 stderr F I1213 00:13:22.082308 1 base_controller.go:110] Starting #1 worker of pod-security-admission-label-synchronization-controller controller ...
2025-12-13T00:13:22.088966260+00:00 stderr F I1213 00:13:22.087749 1 shared_informer.go:318] Caches are synced for privileged-namespaces-psa-label-syncer
2025-12-13T00:13:22.102031130+00:00 stderr F I1213 00:13:22.101030 1 base_controller.go:73] Caches are synced for namespace-security-allocation-controller
2025-12-13T00:13:22.102031130+00:00 stderr F I1213 00:13:22.101053 1 base_controller.go:110] Starting #1 worker of namespace-security-allocation-controller controller ...
2025-12-13T00:13:22.102031130+00:00 stderr F I1213 00:13:22.101085 1 namespace_scc_allocation_controller.go:111] Repairing SCC UID Allocations
2025-12-13T00:13:22.160437292+00:00 stderr F W1213 00:13:22.160374 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-12-13T00:13:22.160437292+00:00 stderr F E1213 00:13:22.160404 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-12-13T00:13:22.198458880+00:00 stderr F I1213 00:13:22.198401 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:22.210225685+00:00 stderr F I1213 00:13:22.210162 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:22.395969407+00:00 stderr F I1213 00:13:22.395240 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:22.410988061+00:00 stderr F I1213 00:13:22.410775 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:22.610509025+00:00 stderr F I1213 00:13:22.610443 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:22.814364946+00:00 stderr F I1213 00:13:22.814302 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:22.824063511+00:00 stderr F I1213 00:13:22.822887 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:22.994127616+00:00 stderr F I1213 00:13:22.994011 1 request.go:697] Waited for 2.186571194s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0
2025-12-13T00:13:23.009915667+00:00 stderr F I1213 00:13:23.009853 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:23.012024607+00:00 stderr F I1213 00:13:23.011981 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:23.069965284+00:00 stderr F I1213 00:13:23.069851 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:23.195468341+00:00 stderr F I1213 00:13:23.195407 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:23.210174635+00:00 stderr F I1213 00:13:23.210115 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:23.410041862+00:00 stderr F I1213 00:13:23.409975 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:23.612627739+00:00 stderr F I1213 00:13:23.612560 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:23.619355595+00:00 stderr F I1213 00:13:23.619279 1 namespace_scc_allocation_controller.go:116] Repair complete
2025-12-13T00:13:23.810283210+00:00 stderr F I1213 00:13:23.810235 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:24.008945496+00:00 stderr F I1213 00:13:24.008881 1 request.go:697] Waited for 3.162505527s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/k8s.ovn.org/v1/egressfirewalls?limit=500&resourceVersion=0
2025-12-13T00:13:24.011132039+00:00 stderr F I1213 00:13:24.011051 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:24.210340123+00:00 stderr F I1213 00:13:24.210289 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:24.410126227+00:00 stderr F I1213 00:13:24.410063 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:24.612575730+00:00 stderr F I1213 00:13:24.612459 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:24.810246421+00:00 stderr F I1213 00:13:24.810191 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:25.013013685+00:00 stderr F W1213 00:13:25.010198 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-12-13T00:13:25.013013685+00:00 stderr F E1213 00:13:25.010234 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)
2025-12-13T00:13:25.013013685+00:00 stderr F I1213 00:13:25.010255 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:25.208261535+00:00 stderr F I1213 00:13:25.208161 1 request.go:697] Waited for 4.347655662s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/whereabouts.cni.cncf.io/v1alpha1/ippools?limit=500&resourceVersion=0
2025-12-13T00:13:25.210147159+00:00 stderr F I1213 00:13:25.209952 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:25.413372598+00:00 stderr F I1213 00:13:25.413151 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:25.613036687+00:00 stderr F I1213 00:13:25.611676 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:25.852292456+00:00 stderr F I1213 00:13:25.852228 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:26.010667639+00:00 stderr F I1213 00:13:26.010563 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:26.208615420+00:00 stderr F I1213 00:13:26.208564 1 request.go:697] Waited for 5.347096714s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/apis/monitoring.coreos.com/v1/alertmanagers?limit=500&resourceVersion=0
2025-12-13T00:13:26.210777932+00:00 stderr F I1213 00:13:26.210391 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:26.241284107+00:00 stderr F I1213 00:13:26.241192 1 reconciliation_controller.go:224] synced cluster resource quota controller
2025-12-13T00:13:26.261957293+00:00 stderr F I1213 00:13:26.261871 1 reconciliation_controller.go:149] Caches are synced
2025-12-13T00:13:31.163570977+00:00 stderr F I1213 00:13:31.163514 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:50.854118024+00:00 stderr F I1213 00:13:50.853641 1 resource_quota_controller.go:470] "syncing resource quota controller with updated resources from discovery" diff="added: [image.openshift.io/v1, Resource=imagestreams], removed: []"
2025-12-13T00:13:50.854176905+00:00 stderr F W1213 00:13:50.854130 1 shared_informer.go:591] resyncPeriod 9m48.498907629s is smaller than resyncCheckPeriod 10m0s and the informer has already started. Changing it to 10m0s
2025-12-13T00:13:50.854273199+00:00 stderr F I1213 00:13:50.854224 1 shared_informer.go:311] Waiting for caches to sync for resource quota
2025-12-13T00:13:50.854273199+00:00 stderr F I1213 00:13:50.854238 1 shared_informer.go:318] Caches are synced for resource quota
2025-12-13T00:13:50.854273199+00:00 stderr F I1213 00:13:50.854245 1 resource_quota_controller.go:496] "synced quota controller"
2025-12-13T00:13:50.903313807+00:00 stderr F I1213 00:13:50.903262 1 shared_informer.go:318] Caches are synced for resource quota
2025-12-13T00:13:56.251820078+00:00 stderr F I1213 00:13:56.251712 1 reconciliation_controller.go:207] syncing resource quota controller with updated resources from discovery: added: [apps.openshift.io/v1, Resource=deploymentconfigs authorization.openshift.io/v1, Resource=rolebindingrestrictions build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds image.openshift.io/v1, Resource=imagestreams route.openshift.io/v1, Resource=routes template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates], removed: []
2025-12-13T00:13:56.251894241+00:00 stderr F I1213 00:13:56.251850 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templates.template.openshift.io"
2025-12-13T00:13:56.251910791+00:00 stderr F I1213 00:13:56.251897 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deploymentconfigs.apps.openshift.io"
2025-12-13T00:13:56.251998824+00:00 stderr F I1213 00:13:56.251952 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="routes.route.openshift.io"
2025-12-13T00:13:56.252260073+00:00 stderr F I1213 00:13:56.252187 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindingrestrictions.authorization.openshift.io"
2025-12-13T00:13:56.252260073+00:00 stderr F I1213 00:13:56.252246 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templateinstances.template.openshift.io"
2025-12-13T00:13:56.252391167+00:00 stderr F I1213 00:13:56.252339 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="buildconfigs.build.openshift.io"
2025-12-13T00:13:56.252391167+00:00 stderr F I1213 00:13:56.252381 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="builds.build.openshift.io"
2025-12-13T00:13:56.254578791+00:00 stderr F I1213 00:13:56.254516 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:56.259761925+00:00 stderr F I1213 00:13:56.259699 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:56.259761925+00:00 stderr F I1213 00:13:56.259731 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:56.259808007+00:00 stderr F W1213 00:13:56.259657 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-12-13T00:13:56.259915260+00:00 stderr F I1213 00:13:56.259865 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:56.259960462+00:00 stderr F I1213 00:13:56.259918 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:56.260028114+00:00 stderr F I1213 00:13:56.259935 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:56.261130861+00:00 stderr F I1213 00:13:56.261067 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:13:56.266782980+00:00 stderr F W1213 00:13:56.266718 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+
2025-12-13T00:13:56.353748833+00:00 stderr F I1213 00:13:56.353615 1 reconciliation_controller.go:224] synced cluster resource quota controller
2025-12-13T00:19:37.576594801+00:00 stderr F I1213 00:19:37.576543 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.576502598 +0000 UTC))"
2025-12-13T00:19:37.576594801+00:00 stderr F I1213 00:19:37.576587 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.57656371 +0000 UTC))"
2025-12-13T00:19:37.576626962+00:00 stderr F I1213 00:19:37.576610 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.576593271 +0000 UTC))"
2025-12-13T00:19:37.576638402+00:00 stderr F I1213 00:19:37.576631 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.576615731 +0000 UTC))"
2025-12-13T00:19:37.576675893+00:00 stderr F I1213 00:19:37.576655 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.576638022 +0000 UTC))"
2025-12-13T00:19:37.576686623+00:00 stderr F I1213 00:19:37.576681 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.576663913 +0000 UTC))"
2025-12-13T00:19:37.576708204+00:00 stderr F I1213 00:19:37.576701 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.576685853 +0000 UTC))"
2025-12-13T00:19:37.576770536+00:00 stderr F I1213 00:19:37.576720 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.576705684 +0000 UTC))"
2025-12-13T00:19:37.576770536+00:00 stderr F I1213 00:19:37.576744 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.576728414 +0000 UTC))"
2025-12-13T00:19:37.576782426+00:00 stderr F I1213 00:19:37.576767 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.576753385 +0000 UTC))"
2025-12-13T00:19:37.576807967+00:00 stderr F I1213 00:19:37.576790 1
tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.576773916 +0000 UTC))" 2025-12-13T00:19:37.576829787+00:00 stderr F I1213 00:19:37.576814 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.576798916 +0000 UTC))" 2025-12-13T00:19:37.577297700+00:00 stderr F I1213 00:19:37.577274 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-12-13 00:19:37.577249319 +0000 UTC))" 2025-12-13T00:19:37.577675090+00:00 stderr F I1213 00:19:37.577652 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584645\" (2025-12-12 23:10:44 +0000 UTC to 2026-12-12 
23:10:44 +0000 UTC (now=2025-12-13 00:19:37.577633419 +0000 UTC))" 2025-12-13T00:19:40.262999167+00:00 stderr F I1213 00:19:40.259461 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for openstack namespace 2025-12-13T00:19:40.262999167+00:00 stderr F I1213 00:19:40.259555 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "openstack" NS 2025-12-13T00:19:41.012891936+00:00 stderr F I1213 00:19:41.012821 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for openstack-operators namespace 2025-12-13T00:20:42.535524223+00:00 stderr F E1213 00:20:42.535432 1 leaderelection.go:332] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller-lock: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock?timeout=1m47s": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:51.003104271+00:00 stderr F E1213 00:20:51.003040 1 resource_quota_controller.go:440] failed to discover resources: Get "https://api-int.crc.testing:6443/api": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:20:54.932531545+00:00 stderr F E1213 00:20:54.932466 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=42886&timeout=6m46s&timeoutSeconds=406&watch=true\": Post 
\"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:20:57.338956602+00:00 stderr F E1213 00:20:57.338878 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?allowWatchBookmarks=true&resourceVersion=42873&timeout=6m15s&timeoutSeconds=375&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-12-13T00:20:57.481264191+00:00 stderr F E1213 00:20:57.481171 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?allowWatchBookmarks=true&resourceVersion=42674&timeout=7m49s&timeoutSeconds=469&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:20:58.171738034+00:00 stderr F E1213 00:20:58.171651 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=42671&timeout=9m43s&timeoutSeconds=583&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:20:58.460253039+00:00 stderr F E1213 00:20:58.459228 1 reflector.go:147] 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?allowWatchBookmarks=true&resourceVersion=42659&timeout=9m24s&timeoutSeconds=564&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:20:59.010957010+00:00 stderr F E1213 00:20:59.010880 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?allowWatchBookmarks=true&resourceVersion=42638&timeout=8m28s&timeoutSeconds=508&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:20:59.162184360+00:00 stderr F E1213 00:20:59.162140 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?allowWatchBookmarks=true&resourceVersion=42663&timeout=5m25s&timeoutSeconds=325&watch=true\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:21:08.083441748+00:00 stderr F I1213 00:21:08.083336 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:09.633493105+00:00 stderr F I1213 00:21:09.633420 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 
2025-12-13T00:21:09.830004948+00:00 stderr F I1213 00:21:09.829922 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:09.892130275+00:00 stderr F I1213 00:21:09.892085 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:10.333764992+00:00 stderr F I1213 00:21:10.333696 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:10.665008771+00:00 stderr F I1213 00:21:10.664598 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:10.774882746+00:00 stderr F I1213 00:21:10.774798 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:10.790766544+00:00 stderr F I1213 00:21:10.790641 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:10.794927027+00:00 stderr F I1213 00:21:10.793328 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:11.676523977+00:00 stderr F I1213 00:21:11.676444 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:11.696102865+00:00 stderr F I1213 00:21:11.696041 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:11.859505224+00:00 stderr F I1213 00:21:11.859416 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:12.572109213+00:00 stderr F I1213 00:21:12.572054 1 reflector.go:351] Caches populated for 
*v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:12.594635651+00:00 stderr F I1213 00:21:12.594563 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:12.601384984+00:00 stderr F I1213 00:21:12.601336 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:12.680228421+00:00 stderr F I1213 00:21:12.680118 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:12.693103728+00:00 stderr F I1213 00:21:12.692989 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:12.784393362+00:00 stderr F I1213 00:21:12.784319 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:12.891519642+00:00 stderr F I1213 00:21:12.891459 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:13.188965738+00:00 stderr F I1213 00:21:13.188882 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:13.626260200+00:00 stderr F I1213 00:21:13.626099 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:13.785567387+00:00 stderr F I1213 00:21:13.785463 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:13.883026288+00:00 stderr F I1213 00:21:13.882835 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:14.102959533+00:00 stderr F 
I1213 00:21:14.102856 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:14.478357773+00:00 stderr F I1213 00:21:14.476139 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:15.030433690+00:00 stderr F I1213 00:21:15.030367 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:15.095734552+00:00 stderr F I1213 00:21:15.095653 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:15.822847794+00:00 stderr F I1213 00:21:15.822735 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:15.951094563+00:00 stderr F I1213 00:21:15.951021 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:16.053988160+00:00 stderr F W1213 00:21:16.053911 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?resourceVersion=42638\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:21:16.053988160+00:00 stderr F E1213 00:21:16.053962 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?resourceVersion=42638\": Post 
\"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:21:16.497810397+00:00 stderr F I1213 00:21:16.497727 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:16.538865405+00:00 stderr F I1213 00:21:16.538807 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:17.120222513+00:00 stderr F W1213 00:21:17.120155 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?resourceVersion=42659\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:21:17.120222513+00:00 stderr F E1213 00:21:17.120200 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?resourceVersion=42659\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:21:17.309140980+00:00 stderr F I1213 00:21:17.309077 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:17.385413258+00:00 stderr F I1213 00:21:17.385342 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:17.536512096+00:00 
stderr F I1213 00:21:17.536444 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:17.555856918+00:00 stderr F I1213 00:21:17.553695 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:17.664827568+00:00 stderr F I1213 00:21:17.664757 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:17.734362935+00:00 stderr F I1213 00:21:17.734274 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:17.740350726+00:00 stderr F I1213 00:21:17.740273 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:17.867704314+00:00 stderr F W1213 00:21:17.867624 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?resourceVersion=42886\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:21:17.867704314+00:00 stderr F E1213 00:21:17.867656 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?resourceVersion=42886\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:21:17.995885852+00:00 stderr F I1213 
00:21:17.995815 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:18.228505449+00:00 stderr F I1213 00:21:18.228392 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:18.549828130+00:00 stderr F W1213 00:21:18.549767 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?resourceVersion=42663\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:21:18.549828130+00:00 stderr F E1213 00:21:18.549807 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?resourceVersion=42663\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding 2025-12-13T00:21:18.665515261+00:00 stderr F I1213 00:21:18.665403 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:19.038207588+00:00 stderr F I1213 00:21:19.038126 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:19.113063488+00:00 stderr F I1213 00:21:19.113004 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:19.202345678+00:00 stderr F I1213 00:21:19.202280 1 
reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:19.245029689+00:00 stderr F I1213 00:21:19.239926 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:19.303060516+00:00 stderr F I1213 00:21:19.302985 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:19.379705473+00:00 stderr F I1213 00:21:19.379655 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:19.514097910+00:00 stderr F I1213 00:21:19.514059 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:19.514347047+00:00 stderr F I1213 00:21:19.514265 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-12-13T00:21:19.514347047+00:00 stderr F I1213 00:21:19.514297 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "external-health-monitor-controller-cfg" not found 2025-12-13T00:21:19.557655486+00:00 stderr F I1213 00:21:19.557600 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:19.955615975+00:00 stderr F I1213 00:21:19.955550 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:20.026001164+00:00 stderr F I1213 00:21:20.025918 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 
2025-12-13T00:21:20.045735956+00:00 stderr F I1213 00:21:20.045666 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:20.120311439+00:00 stderr F I1213 00:21:20.120254 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:20.198521129+00:00 stderr F I1213 00:21:20.198441 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:20.482827081+00:00 stderr F I1213 00:21:20.482748 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:20.568699819+00:00 stderr F I1213 00:21:20.568609 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:20.639463768+00:00 stderr F I1213 00:21:20.639375 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:20.745996853+00:00 stderr F I1213 00:21:20.745873 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:20.746878457+00:00 stderr F I1213 00:21:20.746720 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-12-13T00:21:20.746878457+00:00 stderr F I1213 00:21:20.746744 1 sccrolecache.go:466] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "crc-hostpath-external-health-monitor-controller-runner" not found 2025-12-13T00:21:20.931251252+00:00 stderr F I1213 00:21:20.931187 1 reflector.go:351] Caches 
populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:20.931954241+00:00 stderr F I1213 00:21:20.931906 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:21.087766746+00:00 stderr F W1213 00:21:21.087702 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-12-13T00:21:21.087796466+00:00 stderr F I1213 00:21:21.087763 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:21.092119424+00:00 stderr F W1213 00:21:21.092077 1 warnings.go:70] apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ 2025-12-13T00:21:21.100204692+00:00 stderr F I1213 00:21:21.100163 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:21.211031832+00:00 stderr F I1213 00:21:21.210912 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:21.561683004+00:00 stderr F I1213 00:21:21.561605 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:21.575537558+00:00 stderr F I1213 00:21:21.575479 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:21.725671460+00:00 stderr F I1213 00:21:21.725603 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:21.749577295+00:00 stderr F I1213 00:21:21.749524 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS 2025-12-13T00:21:21.750006456+00:00 stderr F I1213 00:21:21.749978 1 
event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for service-telemetry namespace 2025-12-13T00:21:21.762702009+00:00 stderr F I1213 00:21:21.762620 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:21.821516846+00:00 stderr F I1213 00:21:21.821443 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:22.082256603+00:00 stderr F I1213 00:21:22.082201 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:22.209108225+00:00 stderr F I1213 00:21:22.209005 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:22.590738094+00:00 stderr F I1213 00:21:22.590328 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:23.511968702+00:00 stderr F I1213 00:21:23.511908 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:23.670106130+00:00 stderr F I1213 00:21:23.670010 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:24.303311318+00:00 stderr F I1213 00:21:24.303248 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:24.656698953+00:00 stderr F I1213 00:21:24.656641 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from 
k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:21:26.718398728+00:00 stderr F I1213 00:21:26.718338 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:21:41.913733830+00:00 stderr F I1213 00:21:41.913688 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:21:48.723684828+00:00 stderr F I1213 00:21:48.723638 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:21:49.729133748+00:00 stderr F I1213 00:21:49.727160 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:21:50.365738800+00:00 stderr F I1213 00:21:50.365686 1 podsecurity_label_sync_controller.go:304] no service accounts were found in the "service-telemetry" NS
2025-12-13T00:21:50.371833071+00:00 stderr F I1213 00:21:50.371499 1 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-controller-manager", Name:"kube-controller-manager-crc", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CreatedSCCRanges' created SCC ranges for openshift-must-gather-zffxd namespace
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/0.log
2025-08-13T20:08:12.802058725+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done'
2025-08-13T20:08:12.814964715+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')'
2025-08-13T20:08:12.823497100+00:00 stderr F + '[' -n '' ']'
2025-08-13T20:08:12.826307440+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ']'
2025-08-13T20:08:12.826307440+00:00 stderr F + echo 'Copying system trust bundle'
2025-08-13T20:08:12.826307440+00:00 stderr F + cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
2025-08-13T20:08:12.826418804+00:00 stdout F Copying system trust bundle
2025-08-13T20:08:12.833987271+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ']'
2025-08-13T20:08:12.834642410+00:00 stderr F + exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2
--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key '--controllers=*' --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false 
--feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true 
--pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.217037 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.218038 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.218143 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.218936 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219007 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219115 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219181 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-08-13T20:08:13.219440442+00:00 stderr F W0813 20:08:13.219238 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-08-13T20:08:13.219440442+00:00 stderr 
F W0813 20:08:13.219374 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-08-13T20:08:13.219493834+00:00 stderr F W0813 20:08:13.219435 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-08-13T20:08:13.219503624+00:00 stderr F W0813 20:08:13.219496 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-08-13T20:08:13.219624757+00:00 stderr F W0813 20:08:13.219574 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-08-13T20:08:13.219694409+00:00 stderr F W0813 20:08:13.219655 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-08-13T20:08:13.219957897+00:00 stderr F W0813 20:08:13.219834 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-08-13T20:08:13.219980567+00:00 stderr F W0813 20:08:13.219954 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-08-13T20:08:13.220114111+00:00 stderr F W0813 20:08:13.220028 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-08-13T20:08:13.220130522+00:00 stderr F W0813 20:08:13.220122 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-08-13T20:08:13.220338958+00:00 stderr F W0813 20:08:13.220244 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-08-13T20:08:13.220510753+00:00 stderr F W0813 20:08:13.220445 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-08-13T20:08:13.220660927+00:00 stderr F W0813 20:08:13.220583 1 feature_gate.go:227] unrecognized feature gate: Example 2025-08-13T20:08:13.220759810+00:00 stderr F W0813 20:08:13.220672 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-08-13T20:08:13.220759810+00:00 stderr F W0813 20:08:13.220746 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-08-13T20:08:13.220980366+00:00 stderr F W0813 20:08:13.220930 1 feature_gate.go:227] 
unrecognized feature gate: ExternalCloudProviderGCP 2025-08-13T20:08:13.221078669+00:00 stderr F W0813 20:08:13.221034 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-08-13T20:08:13.221226703+00:00 stderr F W0813 20:08:13.221167 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-08-13T20:08:13.221321016+00:00 stderr F W0813 20:08:13.221259 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-08-13T20:08:13.221435869+00:00 stderr F W0813 20:08:13.221357 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-08-13T20:08:13.221449420+00:00 stderr F W0813 20:08:13.221439 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-08-13T20:08:13.221647945+00:00 stderr F W0813 20:08:13.221535 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-08-13T20:08:13.221647945+00:00 stderr F W0813 20:08:13.221628 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-08-13T20:08:13.221755538+00:00 stderr F W0813 20:08:13.221707 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-08-13T20:08:13.222715416+00:00 stderr F W0813 20:08:13.222675 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-08-13T20:08:13.222878071+00:00 stderr F W0813 20:08:13.222845 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-08-13T20:08:13.223023605+00:00 stderr F W0813 20:08:13.222982 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-08-13T20:08:13.223200670+00:00 stderr F W0813 20:08:13.223143 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
2025-08-13T20:08:13.223277612+00:00 stderr F W0813 20:08:13.223248 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-08-13T20:08:13.223453957+00:00 stderr F W0813 20:08:13.223362 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-08-13T20:08:13.223503038+00:00 stderr F W0813 20:08:13.223473 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-08-13T20:08:13.223605581+00:00 stderr F W0813 20:08:13.223570 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-08-13T20:08:13.223913790+00:00 stderr F W0813 20:08:13.223758 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-08-13T20:08:13.224109056+00:00 stderr F W0813 20:08:13.223957 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-08-13T20:08:13.224109056+00:00 stderr F W0813 20:08:13.224079 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-08-13T20:08:13.224210499+00:00 stderr F W0813 20:08:13.224173 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-08-13T20:08:13.224326502+00:00 stderr F W0813 20:08:13.224289 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-08-13T20:08:13.224450766+00:00 stderr F W0813 20:08:13.224413 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-08-13T20:08:13.224622561+00:00 stderr F W0813 20:08:13.224535 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-08-13T20:08:13.224868638+00:00 stderr F W0813 20:08:13.224731 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-08-13T20:08:13.224955920+00:00 stderr F W0813 20:08:13.224940 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-08-13T20:08:13.225143595+00:00 stderr F W0813 20:08:13.225056 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-08-13T20:08:13.225218398+00:00 stderr F W0813 
20:08:13.225163 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-08-13T20:08:13.225303580+00:00 stderr F W0813 20:08:13.225268 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-08-13T20:08:13.226198776+00:00 stderr F W0813 20:08:13.225924 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-08-13T20:08:13.226198776+00:00 stderr F W0813 20:08:13.226080 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-08-13T20:08:13.226355810+00:00 stderr F W0813 20:08:13.226243 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.231603 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.231843 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.232047 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.232157 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-08-13T20:08:13.234484243+00:00 stderr F W0813 20:08:13.232381 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232672 1 flags.go:64] FLAG: --allocate-node-cidrs="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232693 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232707 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232716 1 flags.go:64] FLAG: --allow-untagged-cloud="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232721 1 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 
20:08:13.232727 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232735 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232744 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232749 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232753 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232827 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232841 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232846 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232854 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232865 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232869 1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232873 1 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232882 1 flags.go:64] FLAG: --cloud-config="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232915 1 flags.go:64] FLAG: --cloud-provider="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 
20:08:13.232920 1 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232931 1 flags.go:64] FLAG: --cluster-cidr="10.217.0.0/22" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232935 1 flags.go:64] FLAG: --cluster-name="crc-d8rkd" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232942 1 flags.go:64] FLAG: --cluster-signing-cert-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232947 1 flags.go:64] FLAG: --cluster-signing-duration="8760h0m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232951 1 flags.go:64] FLAG: --cluster-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232961 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232965 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-key-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232969 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232973 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232977 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232981 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-key-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232985 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.232988 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 
20:08:13.232994 1 flags.go:64] FLAG: --concurrent-cron-job-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233000 1 flags.go:64] FLAG: --concurrent-deployment-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233004 1 flags.go:64] FLAG: --concurrent-endpoint-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233008 1 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233013 1 flags.go:64] FLAG: --concurrent-gc-syncs="20" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233018 1 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233022 1 flags.go:64] FLAG: --concurrent-job-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233026 1 flags.go:64] FLAG: --concurrent-namespace-syncs="10" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233029 1 flags.go:64] FLAG: --concurrent-rc-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233033 1 flags.go:64] FLAG: --concurrent-replicaset-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233062 1 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233067 1 flags.go:64] FLAG: --concurrent-service-endpoint-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233071 1 flags.go:64] FLAG: --concurrent-service-syncs="1" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233075 1 flags.go:64] FLAG: --concurrent-serviceaccount-token-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233079 1 flags.go:64] FLAG: --concurrent-statefulset-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233082 1 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233086 1 
flags.go:64] FLAG: --concurrent-validating-admission-policy-status-syncs="5" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233090 1 flags.go:64] FLAG: --configure-cloud-routes="true" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233094 1 flags.go:64] FLAG: --contention-profiling="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233098 1 flags.go:64] FLAG: --controller-start-interval="0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233106 1 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233137 1 flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233143 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233155 1 flags.go:64] FLAG: --enable-dynamic-provisioning="true" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233160 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233165 1 flags.go:64] FLAG: --enable-hostpath-provisioner="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233169 1 flags.go:64] FLAG: --enable-leader-migration="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233174 1 flags.go:64] FLAG: --endpoint-updates-batch-period="0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233196 1 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233202 1 flags.go:64] FLAG: --external-cloud-volume-plugin="" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233206 1 flags.go:64] FLAG: 
--feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233253 1 flags.go:64] FLAG: --flex-volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233259 1 flags.go:64] FLAG: --help="false" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233264 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233268 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233272 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233276 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233279 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233284 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233294 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233298 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233303 1 flags.go:64] FLAG: --kube-api-burst="300" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233308 1 flags.go:64] FLAG: 
--kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-08-13T20:08:13.234484243+00:00 stderr F I0813 20:08:13.233312 1 flags.go:64] FLAG: --kube-api-qps="150" 2025-08-13T20:08:13.234484243+00:00 stderr P I0813 20:08:13.233319 1 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/static-pod-resources/configmaps 2025-08-13T20:08:13.234855374+00:00 stderr F /controller-manager-kubeconfig/kubeconfig" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233326 1 flags.go:64] FLAG: --large-cluster-size-threshold="50" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233334 1 flags.go:64] FLAG: --leader-elect="true" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233339 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233343 1 flags.go:64] FLAG: --leader-elect-renew-deadline="12s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233350 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233354 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233358 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233362 1 flags.go:64] FLAG: --leader-elect-retry-period="3s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233366 1 flags.go:64] FLAG: --leader-migration-config="" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233370 1 flags.go:64] FLAG: --legacy-service-account-token-clean-up-period="8760h0m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233374 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233378 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233386 1 flags.go:64] 
FLAG: --log-json-split-stream="false" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233390 1 flags.go:64] FLAG: --logging-format="text" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233396 1 flags.go:64] FLAG: --master="" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233400 1 flags.go:64] FLAG: --max-endpoints-per-slice="100" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233404 1 flags.go:64] FLAG: --min-resync-period="12h0m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233408 1 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233412 1 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233420 1 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233425 1 flags.go:64] FLAG: --namespace-sync-period="5m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233429 1 flags.go:64] FLAG: --node-cidr-mask-size="0" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233433 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233436 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233442 1 flags.go:64] FLAG: --node-eviction-rate="0.1" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233447 1 flags.go:64] FLAG: --node-monitor-grace-period="40s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233451 1 flags.go:64] FLAG: --node-monitor-period="5s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233454 1 flags.go:64] FLAG: --node-startup-grace-period="1m0s" 2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233458 1 flags.go:64] FLAG: --node-sync-period="0s" 2025-08-13T20:08:13.234855374+00:00 stderr 
F I0813 20:08:13.233462 1 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233467 1 flags.go:64] FLAG: --permit-address-sharing="false"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233471 1 flags.go:64] FLAG: --permit-port-sharing="false"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233474 1 flags.go:64] FLAG: --profiling="true"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233478 1 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233482 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233488 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233492 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233501 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233507 1 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233511 1 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233515 1 flags.go:64] FLAG: --requestheader-allowed-names="[]"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233524 1 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233529 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233539 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233550 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233564 1 flags.go:64] FLAG: --resource-quota-sync-period="5m0s"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233568 1 flags.go:64] FLAG: --root-ca-file="/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233573 1 flags.go:64] FLAG: --route-reconciliation-period="10s"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233577 1 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233581 1 flags.go:64] FLAG: --secure-port="10257"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233586 1 flags.go:64] FLAG: --service-account-private-key-file="/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233590 1 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233594 1 flags.go:64] FLAG: --show-hidden-metrics-for-version=""
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233598 1 flags.go:64] FLAG: --terminated-pod-gc-threshold="12500"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233602 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233610 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233635 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233723 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233729 1 flags.go:64] FLAG: --tls-sni-cert-key="[]"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233737 1 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233743 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233748 1 flags.go:64] FLAG: --use-service-account-credentials="true"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233754 1 flags.go:64] FLAG: --v="2"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233761 1 flags.go:64] FLAG: --version="false"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233769 1 flags.go:64] FLAG: --vmodule=""
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233833 1 flags.go:64] FLAG: --volume-host-allow-local-loopback="true"
2025-08-13T20:08:13.234855374+00:00 stderr F I0813 20:08:13.233840 1 flags.go:64] FLAG: --volume-host-cidr-denylist="[]"
2025-08-13T20:08:13.247379713+00:00 stderr F I0813 20:08:13.246721 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:08:13.820033282+00:00 stderr F I0813 20:08:13.816639 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
2025-08-13T20:08:13.820033282+00:00 stderr F I0813 20:08:13.816752 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
2025-08-13T20:08:13.827928258+00:00 stderr F I0813 20:08:13.827555 1 controllermanager.go:203] "Starting" version="v1.29.5+29c95f3"
2025-08-13T20:08:13.827928258+00:00 stderr F I0813 20:08:13.827621 1 controllermanager.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2025-08-13T20:08:13.839544281+00:00 stderr F I0813 20:08:13.839462 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:08:13.839409377 +0000 UTC))"
2025-08-13T20:08:13.839544281+00:00 stderr F I0813 20:08:13.839535 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:13.83951253 +0000 UTC))"
2025-08-13T20:08:13.839617973+00:00 stderr F I0813 20:08:13.839555 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:13.839542801 +0000 UTC))"
2025-08-13T20:08:13.839617973+00:00 stderr F I0813 20:08:13.839572 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:08:13.839560951 +0000 UTC))"
2025-08-13T20:08:13.839617973+00:00 stderr F I0813 20:08:13.839587 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839576672 +0000 UTC))"
2025-08-13T20:08:13.839617973+00:00 stderr F I0813 20:08:13.839604 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839593662 +0000 UTC))"
2025-08-13T20:08:13.839631154+00:00 stderr F I0813 20:08:13.839620 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839609253 +0000 UTC))"
2025-08-13T20:08:13.839643864+00:00 stderr F I0813 20:08:13.839635 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:08:13.839624983 +0000 UTC))"
2025-08-13T20:08:13.839692735+00:00 stderr F I0813 20:08:13.839651 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839640424 +0000 UTC))"
2025-08-13T20:08:13.839711196+00:00 stderr F I0813 20:08:13.839702 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:13.839680665 +0000 UTC))"
2025-08-13T20:08:13.840731445+00:00 stderr F I0813 20:08:13.840547 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-08-13 20:08:13.840525139 +0000 UTC))"
2025-08-13T20:08:13.841108246+00:00 stderr F I0813 20:08:13.841055 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115693\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115693\" (2025-08-13 19:08:13 +0000 UTC to 2026-08-13 19:08:13 +0000 UTC (now=2025-08-13 20:08:13.841028684 +0000 UTC))"
2025-08-13T20:08:13.841128376+00:00 stderr F I0813 20:08:13.841103 1 secure_serving.go:213] Serving securely on [::]:10257
2025-08-13T20:08:13.841605360+00:00 stderr F I0813 20:08:13.841538 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T20:08:13.841704753+00:00 stderr F I0813 20:08:13.841650 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:08:13.841704753+00:00 stderr F I0813 20:08:13.841657 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
2025-08-13T20:08:13.842010682+00:00 stderr F I0813 20:08:13.841932 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
2025-08-13T20:08:13.844290897+00:00 stderr F I0813 20:08:13.844225 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager...
2025-08-13T20:08:24.708357459+00:00 stderr F E0813 20:08:24.708183 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 192.168.130.11:6443: connect: connection refused
2025-08-13T20:08:28.619865836+00:00 stderr F E0813 20:08:28.618751 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 192.168.130.11:6443: connect: connection refused
2025-08-13T20:08:33.289069024+00:00 stderr F E0813 20:08:33.288206 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 192.168.130.11:6443: connect: connection refused
2025-08-13T20:08:37.779061605+00:00 stderr F E0813 20:08:37.778956 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp 192.168.130.11:6443: connect: connection refused
2025-08-13T20:08:47.305208117+00:00 stderr F I0813 20:08:47.304304 1 leaderelection.go:260] successfully acquired lease kube-system/kube-controller-manager
2025-08-13T20:08:47.307865224+00:00 stderr F I0813 20:08:47.306050 1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="crc_02346a15-e302-4734-adfc-ae3167a2c006 became leader"
2025-08-13T20:08:47.314085082+00:00 stderr F I0813 20:08:47.313970 1 controllermanager.go:756] "Starting controller" controller="serviceaccount-token-controller"
2025-08-13T20:08:47.319254970+00:00 stderr F I0813 20:08:47.319158 1 controllermanager.go:787] "Started controller" controller="serviceaccount-token-controller"
2025-08-13T20:08:47.319254970+00:00 stderr F I0813 20:08:47.319196 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-protection-controller"
2025-08-13T20:08:47.319562919+00:00 stderr F I0813 20:08:47.319326 1 shared_informer.go:311] Waiting for caches to sync for tokens
2025-08-13T20:08:47.332049047+00:00 stderr F I0813 20:08:47.330725 1 controllermanager.go:787] "Started controller" controller="persistentvolume-protection-controller"
2025-08-13T20:08:47.332049047+00:00 stderr F I0813 20:08:47.330769 1 controllermanager.go:756] "Starting controller" controller="cronjob-controller"
2025-08-13T20:08:47.332049047+00:00 stderr F I0813 20:08:47.331075 1 pv_protection_controller.go:78] "Starting PV protection controller"
2025-08-13T20:08:47.332049047+00:00 stderr F I0813 20:08:47.331085 1 shared_informer.go:311] Waiting for caches to sync for PV protection
2025-08-13T20:08:47.345728149+00:00 stderr F I0813 20:08:47.345645 1 controllermanager.go:787] "Started controller" controller="cronjob-controller"
2025-08-13T20:08:47.345728149+00:00 stderr F I0813 20:08:47.345692 1 controllermanager.go:756] "Starting controller" controller="node-lifecycle-controller"
2025-08-13T20:08:47.348831908+00:00 stderr F I0813 20:08:47.346065 1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
2025-08-13T20:08:47.348831908+00:00 stderr F I0813 20:08:47.346100 1 shared_informer.go:311] Waiting for caches to sync for cronjob
2025-08-13T20:08:47.351836084+00:00 stderr F I0813 20:08:47.349161 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:47.359707530+00:00 stderr F I0813 20:08:47.359642 1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
2025-08-13T20:08:47.362833019+00:00 stderr F I0813 20:08:47.360031 1 controllermanager.go:787] "Started controller" controller="node-lifecycle-controller"
2025-08-13T20:08:47.362833019+00:00 stderr F I0813 20:08:47.360095 1 controllermanager.go:756] "Starting controller" controller="service-lb-controller"
2025-08-13T20:08:47.381045792+00:00 stderr F I0813 20:08:47.380943 1 node_lifecycle_controller.go:459] "Sending events to api server"
2025-08-13T20:08:47.381045792+00:00 stderr F I0813 20:08:47.380996 1 node_lifecycle_controller.go:470] "Starting node controller"
2025-08-13T20:08:47.381045792+00:00 stderr F I0813 20:08:47.381006 1 shared_informer.go:311] Waiting for caches to sync for taint
2025-08-13T20:08:47.400273773+00:00 stderr F E0813 20:08:47.400179 1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
2025-08-13T20:08:47.400273773+00:00 stderr F I0813 20:08:47.400230 1 controllermanager.go:765] "Warning: skipping controller" controller="service-lb-controller"
2025-08-13T20:08:47.400273773+00:00 stderr F I0813 20:08:47.400244 1 controllermanager.go:756] "Starting controller" controller="persistentvolumeclaim-protection-controller"
2025-08-13T20:08:47.409490347+00:00 stderr F I0813 20:08:47.409417 1 controllermanager.go:787] "Started controller" controller="persistentvolumeclaim-protection-controller"
2025-08-13T20:08:47.409490347+00:00 stderr F I0813 20:08:47.409466 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-expander-controller"
2025-08-13T20:08:47.412744711+00:00 stderr F I0813 20:08:47.409730 1 pvc_protection_controller.go:102] "Starting PVC protection controller"
2025-08-13T20:08:47.412744711+00:00 stderr F I0813 20:08:47.409761 1 shared_informer.go:311] Waiting for caches to sync for PVC protection
2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.413923 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.413966 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.413977 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.413985 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc"
2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.414091 1 controllermanager.go:787] "Started controller" controller="persistentvolume-expander-controller"
2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.414103 1 controllermanager.go:756] "Starting controller" controller="namespace-controller"
2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.414324 1 expand_controller.go:328] "Starting expand controller"
2025-08-13T20:08:47.417077905+00:00 stderr F I0813 20:08:47.414337 1 shared_informer.go:311] Waiting for caches to sync for expand
2025-08-13T20:08:47.484767685+00:00 stderr F I0813 20:08:47.484662 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.511982 1 controllermanager.go:787] "Started controller" controller="namespace-controller"
2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.512070 1 controllermanager.go:750] "Warning: controller is disabled" controller="bootstrap-signer-controller"
2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.512082 1 controllermanager.go:756] "Starting controller" controller="cloud-node-lifecycle-controller"
2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.512320 1 namespace_controller.go:197] "Starting namespace controller"
2025-08-13T20:08:47.513165790+00:00 stderr F I0813 20:08:47.512331 1 shared_informer.go:311] Waiting for caches to sync for namespace
2025-08-13T20:08:47.518661417+00:00 stderr F E0813 20:08:47.518410 1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
2025-08-13T20:08:47.518661417+00:00 stderr F I0813 20:08:47.518457 1 controllermanager.go:765] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
2025-08-13T20:08:47.518661417+00:00 stderr F I0813 20:08:47.518501 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
2025-08-13T20:08:47.518661417+00:00 stderr F I0813 20:08:47.518510 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
2025-08-13T20:08:47.518661417+00:00 stderr F I0813 20:08:47.518523 1 controllermanager.go:756] "Starting controller" controller="taint-eviction-controller"
2025-08-13T20:08:47.521696374+00:00 stderr F I0813 20:08:47.521606 1 shared_informer.go:318] Caches are synced for tokens
2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528370 1 controllermanager.go:787] "Started controller" controller="taint-eviction-controller"
2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528423 1 controllermanager.go:756] "Starting controller" controller="pod-garbage-collector-controller"
2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528595 1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528649 1 taint_eviction.go:291] "Sending events to api server"
2025-08-13T20:08:47.528885020+00:00 stderr F I0813 20:08:47.528669 1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
2025-08-13T20:08:47.533961206+00:00 stderr F I0813 20:08:47.533595 1 controllermanager.go:787] "Started controller" controller="pod-garbage-collector-controller"
2025-08-13T20:08:47.533961206+00:00 stderr F I0813 20:08:47.533691 1 controllermanager.go:756] "Starting controller" controller="job-controller"
2025-08-13T20:08:47.542092599+00:00 stderr F I0813 20:08:47.538074 1 gc_controller.go:101] "Starting GC controller"
2025-08-13T20:08:47.542092599+00:00 stderr F I0813 20:08:47.538152 1 shared_informer.go:311] Waiting for caches to sync for GC
2025-08-13T20:08:47.544197089+00:00 stderr F I0813 20:08:47.543498 1 controllermanager.go:787] "Started controller" controller="job-controller"
2025-08-13T20:08:47.544197089+00:00 stderr F I0813 20:08:47.543543 1 controllermanager.go:756] "Starting controller" controller="deployment-controller"
2025-08-13T20:08:47.544197089+00:00 stderr F I0813 20:08:47.543719 1 job_controller.go:224] "Starting job controller"
2025-08-13T20:08:47.544197089+00:00 stderr F I0813 20:08:47.543748 1 shared_informer.go:311] Waiting for caches to sync for job
2025-08-13T20:08:47.556115701+00:00 stderr F I0813 20:08:47.555921 1 controllermanager.go:787] "Started controller" controller="deployment-controller"
2025-08-13T20:08:47.559824737+00:00 stderr F I0813 20:08:47.559712 1 deployment_controller.go:168] "Starting controller" controller="deployment"
2025-08-13T20:08:47.559824737+00:00 stderr F I0813 20:08:47.559810 1 shared_informer.go:311] Waiting for caches to sync for deployment
2025-08-13T20:08:47.565910802+00:00 stderr F I0813 20:08:47.555967 1 controllermanager.go:756] "Starting controller" controller="service-ca-certificate-publisher-controller"
2025-08-13T20:08:47.578665598+00:00 stderr F I0813 20:08:47.578540 1 controllermanager.go:787] "Started controller" controller="service-ca-certificate-publisher-controller"
2025-08-13T20:08:47.578665598+00:00 stderr F I0813 20:08:47.578585 1 controllermanager.go:756] "Starting controller" controller="ephemeral-volume-controller"
2025-08-13T20:08:47.578839603+00:00 stderr F I0813 20:08:47.578749 1 publisher.go:80] Starting service CA certificate configmap publisher
2025-08-13T20:08:47.578839603+00:00 stderr F I0813 20:08:47.578821 1 shared_informer.go:311] Waiting for caches to sync for crt configmap
2025-08-13T20:08:47.593589226+00:00 stderr F I0813 20:08:47.593446 1 controllermanager.go:787] "Started controller" controller="ephemeral-volume-controller"
2025-08-13T20:08:47.593589226+00:00 stderr F I0813 20:08:47.593506 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
2025-08-13T20:08:47.593589226+00:00 stderr F I0813 20:08:47.593563 1 controllermanager.go:756] "Starting controller" controller="resourcequota-controller"
2025-08-13T20:08:47.594115581+00:00 stderr F I0813 20:08:47.593956 1 controller.go:169] "Starting ephemeral volume controller"
2025-08-13T20:08:47.594115581+00:00 stderr F I0813 20:08:47.593990 1 shared_informer.go:311] Waiting for caches to sync for ephemeral
2025-08-13T20:08:47.663499480+00:00 stderr F I0813 20:08:47.663405 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressservices.k8s.ovn.org"
2025-08-13T20:08:47.663592503+00:00 stderr F I0813 20:08:47.663576 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorpkis.network.operator.openshift.io"
2025-08-13T20:08:47.663757647+00:00 stderr F I0813 20:08:47.663737 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
2025-08-13T20:08:47.663855670+00:00 stderr F I0813 20:08:47.663838 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
2025-08-13T20:08:47.664090207+00:00 stderr F I0813 20:08:47.664064 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
2025-08-13T20:08:47.664204100+00:00 stderr F I0813 20:08:47.664180 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templateinstances.template.openshift.io"
2025-08-13T20:08:47.664274882+00:00 stderr F I0813 20:08:47.664256 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="subscriptions.operators.coreos.com"
2025-08-13T20:08:47.665563459+00:00 stderr F I0813 20:08:47.665538 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ippools.whereabouts.cni.cncf.io"
2025-08-13T20:08:47.672934471+00:00 stderr F I0813 20:08:47.665683 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinesets.machine.openshift.io"
2025-08-13T20:08:47.681052003+00:00 stderr F I0813 20:08:47.680988 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheuses.monitoring.coreos.com"
2025-08-13T20:08:47.681210638+00:00 stderr F I0813 20:08:47.681193 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="overlappingrangeipreservations.whereabouts.cni.cncf.io"
2025-08-13T20:08:47.681275660+00:00 stderr F I0813 20:08:47.681261 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
2025-08-13T20:08:47.681527937+00:00 stderr F I0813 20:08:47.681514 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
2025-08-13T20:08:47.681640190+00:00 stderr F I0813 20:08:47.681622 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
2025-08-13T20:08:47.681755253+00:00 stderr F I0813 20:08:47.681735 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
2025-08-13T20:08:47.681918398+00:00 stderr F I0813 20:08:47.681874 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="builds.build.openshift.io"
2025-08-13T20:08:47.682005181+00:00 stderr F I0813 20:08:47.681990 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="routes.route.openshift.io"
2025-08-13T20:08:47.682088773+00:00 stderr F I0813 20:08:47.682074 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddresses.ipam.cluster.x-k8s.io"
2025-08-13T20:08:47.682201246+00:00 stderr F I0813 20:08:47.682146 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="probes.monitoring.coreos.com"
2025-08-13T20:08:47.682254948+00:00 stderr F I0813 20:08:47.682241 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
2025-08-13T20:08:47.689962349+00:00 stderr F I0813 20:08:47.682335 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
2025-08-13T20:08:47.690404331+00:00 stderr F I0813 20:08:47.690360 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindingrestrictions.authorization.openshift.io"
2025-08-13T20:08:47.690549246+00:00 stderr F I0813 20:08:47.690490 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="imagestreams.image.openshift.io"
2025-08-13T20:08:47.690656209+00:00 stderr F I0813 20:08:47.690633 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podnetworkconnectivitychecks.controlplane.operator.openshift.io"
2025-08-13T20:08:47.690863845+00:00 stderr F I0813 20:08:47.690813 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresscontrollers.operator.openshift.io"
2025-08-13T20:08:47.690983548+00:00 stderr F I0813 20:08:47.690962 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
2025-08-13T20:08:47.691075101+00:00 stderr F I0813 20:08:47.691034 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
2025-08-13T20:08:47.691162583+00:00 stderr F I0813 20:08:47.691142 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediationtemplates.infrastructure.cluster.x-k8s.io"
2025-08-13T20:08:47.691261316+00:00 stderr F I0813 20:08:47.691239 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machinehealthchecks.machine.openshift.io"
2025-08-13T20:08:47.691387000+00:00 stderr F I0813 20:08:47.691364 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagerconfigs.monitoring.coreos.com"
2025-08-13T20:08:47.691459882+00:00 stderr F I0813 20:08:47.691440 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
2025-08-13T20:08:47.691533464+00:00 stderr F I0813 20:08:47.691516 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="buildconfigs.build.openshift.io"
2025-08-13T20:08:47.691663838+00:00 stderr F I0813 20:08:47.691614 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machineautoscalers.autoscaling.openshift.io"
2025-08-13T20:08:47.691750050+00:00 stderr F I0813 20:08:47.691730 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="machines.machine.openshift.io"
2025-08-13T20:08:47.693725277+00:00 stderr F I0813 20:08:47.691869 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorgroups.operators.coreos.com"
2025-08-13T20:08:47.693972164+00:00 stderr F I0813 20:08:47.693888 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="thanosrulers.monitoring.coreos.com"
2025-08-13T20:08:47.694075027+00:00 stderr F I0813 20:08:47.694053 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="catalogsources.operators.coreos.com"
2025-08-13T20:08:47.694980603+00:00 stderr F I0813 20:08:47.694954 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
2025-08-13T20:08:47.695050235+00:00 stderr F I0813 20:08:47.695036 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
2025-08-13T20:08:47.695113096+00:00 stderr F I0813 20:08:47.695099 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressfirewalls.k8s.ovn.org"
2025-08-13T20:08:47.695176268+00:00 stderr F I0813 20:08:47.695162 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podmonitors.monitoring.coreos.com"
2025-08-13T20:08:47.695549889+00:00 stderr F I0813 20:08:47.695534 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
2025-08-13T20:08:47.695709674+00:00 stderr F I0813 20:08:47.695690 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deploymentconfigs.apps.openshift.io"
2025-08-13T20:08:47.695829957+00:00 stderr F I0813 20:08:47.695765 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="templates.template.openshift.io"
2025-08-13T20:08:47.695972371+00:00 stderr F I0813 20:08:47.695949 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="dnsrecords.ingress.operator.openshift.io"
2025-08-13T20:08:47.696053863+00:00 stderr F I0813 20:08:47.696037 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertrelabelconfigs.monitoring.openshift.io"
2025-08-13T20:08:47.697097493+00:00 stderr F I0813 20:08:47.696100 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
2025-08-13T20:08:47.697188466+00:00 stderr F I0813 20:08:47.697170 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="clusterserviceversions.operators.coreos.com"
2025-08-13T20:08:47.697251688+00:00 stderr F I0813 20:08:47.697237 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="prometheusrules.monitoring.coreos.com"
2025-08-13T20:08:47.697311319+00:00 stderr F I0813 20:08:47.697298 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="installplans.operators.coreos.com"
2025-08-13T20:08:47.697363921+00:00 stderr F I0813 20:08:47.697350 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
2025-08-13T20:08:47.697460114+00:00 stderr F I0813 20:08:47.697441 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
2025-08-13T20:08:47.697514945+00:00 stderr F I0813 20:08:47.697502 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
2025-08-13T20:08:47.699990416+00:00 stderr F I0813 20:08:47.699964 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controlplanemachinesets.machine.openshift.io"
2025-08-13T20:08:47.700063808+00:00 stderr F I0813 20:08:47.700049 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="servicemonitors.monitoring.coreos.com"
2025-08-13T20:08:47.700124190+00:00 stderr F I0813 20:08:47.700110 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="projecthelmchartrepositories.helm.openshift.io"
2025-08-13T20:08:47.700181062+00:00 stderr F I0813 20:08:47.700167 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="metal3remediations.infrastructure.cluster.x-k8s.io"
2025-08-13T20:08:47.700241273+00:00 stderr F I0813 20:08:47.700227 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="operatorconditions.operators.coreos.com"
2025-08-13T20:08:47.700302275+00:00 stderr F I0813 20:08:47.700289 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
2025-08-13T20:08:47.700361927+00:00 stderr F I0813 20:08:47.700348 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="network-attachment-definitions.k8s.cni.cncf.io"
2025-08-13T20:08:47.700418678+00:00 stderr F I0813 20:08:47.700404 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertingrules.monitoring.openshift.io"
2025-08-13T20:08:47.700488130+00:00 stderr F I0813 20:08:47.700473 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ipaddressclaims.ipam.cluster.x-k8s.io"
2025-08-13T20:08:47.700552982+00:00 stderr F I0813 20:08:47.700539 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressqoses.k8s.ovn.org"
2025-08-13T20:08:47.700609754+00:00 stderr F I0813 20:08:47.700595 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="alertmanagers.monitoring.coreos.com"
2025-08-13T20:08:47.708060808+00:00 stderr F I0813 20:08:47.708031 1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="egressrouters.network.operator.openshift.io"
2025-08-13T20:08:47.708192761+00:00 stderr F I0813 20:08:47.708170 1 controllermanager.go:787] "Started controller" controller="resourcequota-controller"
2025-08-13T20:08:47.708234283+00:00 stderr F 
I0813 20:08:47.708222 1 controllermanager.go:756] "Starting controller" controller="statefulset-controller" 2025-08-13T20:08:47.708613683+00:00 stderr F I0813 20:08:47.708589 1 resource_quota_controller.go:294] "Starting resource quota controller" 2025-08-13T20:08:47.708658455+00:00 stderr F I0813 20:08:47.708645 1 shared_informer.go:311] Waiting for caches to sync for resource quota 2025-08-13T20:08:47.708983164+00:00 stderr F I0813 20:08:47.708956 1 resource_quota_monitor.go:305] "QuotaMonitor running" 2025-08-13T20:08:47.717629892+00:00 stderr F I0813 20:08:47.717537 1 controllermanager.go:787] "Started controller" controller="statefulset-controller" 2025-08-13T20:08:47.717629892+00:00 stderr F I0813 20:08:47.717590 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-approving-controller" 2025-08-13T20:08:47.717967062+00:00 stderr F I0813 20:08:47.717887 1 stateful_set.go:161] "Starting stateful set controller" 2025-08-13T20:08:47.717967062+00:00 stderr F I0813 20:08:47.717946 1 shared_informer.go:311] Waiting for caches to sync for stateful set 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.725686 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-approving-controller" 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.725735 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-attach-detach-controller" 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.725962 1 resource_quota_controller.go:470] "syncing resource quota controller with updated resources from discovery" diff="added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps.openshift.io/v1, 
Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=imagestreams infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, Resource=machinesets monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, 
Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies operator.openshift.io/v1, Resource=ingresscontrollers operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy/v1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes storage.k8s.io/v1, Resource=csistoragecapacities template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []" 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.726044 1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving" 2025-08-13T20:08:47.727717951+00:00 stderr F I0813 20:08:47.726061 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving 2025-08-13T20:08:47.731523350+00:00 stderr F W0813 20:08:47.731379 1 probe.go:268] Flexvolume plugin directory at /etc/kubernetes/kubelet-plugins/volume/exec does not exist. Recreating. 
2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733142 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733180 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733191 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc" 2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733199 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" 2025-08-13T20:08:47.733283421+00:00 stderr F I0813 20:08:47.733236 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi" 2025-08-13T20:08:47.733397194+00:00 stderr F I0813 20:08:47.733310 1 controllermanager.go:787] "Started controller" controller="persistentvolume-attach-detach-controller" 2025-08-13T20:08:47.733397194+00:00 stderr F I0813 20:08:47.733323 1 controllermanager.go:756] "Starting controller" controller="serviceaccount-controller" 2025-08-13T20:08:47.734055003+00:00 stderr F I0813 20:08:47.733559 1 attach_detach_controller.go:337] "Starting attach detach controller" 2025-08-13T20:08:47.734055003+00:00 stderr F I0813 20:08:47.733597 1 shared_informer.go:311] Waiting for caches to sync for attach detach 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.742516 1 controllermanager.go:787] "Started controller" controller="serviceaccount-controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.742608 1 controllermanager.go:756] "Starting controller" controller="node-ipam-controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.742622 1 controllermanager.go:765] "Warning: skipping controller" controller="node-ipam-controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.742629 1 controllermanager.go:756] "Starting controller" controller="root-ca-certificate-publisher-controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 
20:08:47.743502 1 serviceaccounts_controller.go:111] "Starting service account controller" 2025-08-13T20:08:47.743914816+00:00 stderr F I0813 20:08:47.743515 1 shared_informer.go:311] Waiting for caches to sync for service account 2025-08-13T20:08:47.749229948+00:00 stderr F I0813 20:08:47.748654 1 controllermanager.go:787] "Started controller" controller="root-ca-certificate-publisher-controller" 2025-08-13T20:08:47.749229948+00:00 stderr F I0813 20:08:47.748695 1 controllermanager.go:750] "Warning: controller is disabled" controller="token-cleaner-controller" 2025-08-13T20:08:47.749229948+00:00 stderr F I0813 20:08:47.748706 1 controllermanager.go:756] "Starting controller" controller="persistentvolume-binder-controller" 2025-08-13T20:08:47.751055660+00:00 stderr F I0813 20:08:47.750996 1 publisher.go:102] "Starting root CA cert publisher controller" 2025-08-13T20:08:47.751055660+00:00 stderr F I0813 20:08:47.751038 1 shared_informer.go:311] Waiting for caches to sync for crt configmap 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769926 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769963 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769979 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769987 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.769994 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770003 1 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770016 1 plugins.go:642] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770071 1 controllermanager.go:787] "Started controller" controller="persistentvolume-binder-controller" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770094 1 controllermanager.go:739] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"] 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770106 1 controllermanager.go:756] "Starting controller" controller="endpoints-controller" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770573 1 pv_controller_base.go:319] "Starting persistent volume controller" 2025-08-13T20:08:47.771337832+00:00 stderr F I0813 20:08:47.770661 1 shared_informer.go:311] Waiting for caches to sync for persistent volume 2025-08-13T20:08:47.785079096+00:00 stderr F I0813 20:08:47.783877 1 controllermanager.go:787] "Started controller" controller="endpoints-controller" 2025-08-13T20:08:47.785079096+00:00 stderr F I0813 20:08:47.783943 1 controllermanager.go:756] "Starting controller" controller="garbage-collector-controller" 2025-08-13T20:08:47.785079096+00:00 stderr F I0813 20:08:47.784181 1 endpoints_controller.go:174] "Starting endpoint controller" 2025-08-13T20:08:47.785079096+00:00 stderr F I0813 20:08:47.784193 1 shared_informer.go:311] Waiting for caches to sync for endpoint 2025-08-13T20:08:47.820883362+00:00 stderr F I0813 20:08:47.820581 1 controllermanager.go:787] "Started controller" controller="garbage-collector-controller" 2025-08-13T20:08:47.820883362+00:00 stderr F I0813 20:08:47.820627 1 controllermanager.go:756] "Starting controller" controller="horizontal-pod-autoscaler-controller" 2025-08-13T20:08:47.830451327+00:00 stderr F I0813 20:08:47.830373 1 garbagecollector.go:155] "Starting controller" controller="garbagecollector" 2025-08-13T20:08:47.830451327+00:00 stderr F I0813 20:08:47.830427 1 
shared_informer.go:311] Waiting for caches to sync for garbage collector 2025-08-13T20:08:47.830523069+00:00 stderr F I0813 20:08:47.830478 1 graph_builder.go:302] "Running" component="GraphBuilder" 2025-08-13T20:08:47.877555977+00:00 stderr F I0813 20:08:47.877463 1 controllermanager.go:787] "Started controller" controller="horizontal-pod-autoscaler-controller" 2025-08-13T20:08:47.877555977+00:00 stderr F I0813 20:08:47.877513 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-cleaner-controller" 2025-08-13T20:08:47.877706862+00:00 stderr F I0813 20:08:47.877652 1 horizontal.go:200] "Starting HPA controller" 2025-08-13T20:08:47.877706862+00:00 stderr F I0813 20:08:47.877683 1 shared_informer.go:311] Waiting for caches to sync for HPA 2025-08-13T20:08:47.894572865+00:00 stderr F I0813 20:08:47.894436 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-cleaner-controller" 2025-08-13T20:08:47.894572865+00:00 stderr F I0813 20:08:47.894485 1 controllermanager.go:756] "Starting controller" controller="certificatesigningrequest-signing-controller" 2025-08-13T20:08:47.894677348+00:00 stderr F I0813 20:08:47.894627 1 cleaner.go:83] "Starting CSR cleaner controller" 2025-08-13T20:08:47.906217069+00:00 stderr F I0813 20:08:47.906136 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.907055493+00:00 stderr F I0813 20:08:47.907018 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.907687251+00:00 stderr F I0813 20:08:47.907664 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" 
name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.908426862+00:00 stderr F I0813 20:08:47.908405 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.908849514+00:00 stderr F I0813 20:08:47.908826 1 controllermanager.go:787] "Started controller" controller="certificatesigningrequest-signing-controller" 2025-08-13T20:08:47.908945487+00:00 stderr F I0813 20:08:47.908923 1 controllermanager.go:756] "Starting controller" controller="node-route-controller" 2025-08-13T20:08:47.909004379+00:00 stderr F I0813 20:08:47.908989 1 core.go:290] "Will not configure cloud provider routes for allocate-node-cidrs" CIDRs=false routes=true 2025-08-13T20:08:47.909040140+00:00 stderr F I0813 20:08:47.909028 1 controllermanager.go:765] "Warning: skipping controller" controller="node-route-controller" 2025-08-13T20:08:47.909075031+00:00 stderr F I0813 20:08:47.909063 1 controllermanager.go:756] "Starting controller" controller="clusterrole-aggregation-controller" 2025-08-13T20:08:47.909285347+00:00 stderr F I0813 20:08:47.909267 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving" 2025-08-13T20:08:47.909325288+00:00 stderr F I0813 20:08:47.909313 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving 2025-08-13T20:08:47.909398970+00:00 stderr F I0813 20:08:47.909385 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client" 2025-08-13T20:08:47.909430071+00:00 stderr F I0813 20:08:47.909419 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client 2025-08-13T20:08:47.909487813+00:00 stderr F I0813 20:08:47.909474 1 
certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client" 2025-08-13T20:08:47.909518374+00:00 stderr F I0813 20:08:47.909507 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client 2025-08-13T20:08:47.909576075+00:00 stderr F I0813 20:08:47.909562 1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown" 2025-08-13T20:08:47.909615386+00:00 stderr F I0813 20:08:47.909600 1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown 2025-08-13T20:08:47.909668118+00:00 stderr F I0813 20:08:47.909653 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.909880414+00:00 stderr F I0813 20:08:47.909831 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.910015758+00:00 stderr F I0813 20:08:47.909999 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.910103910+00:00 stderr F I0813 20:08:47.910089 1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-08-13T20:08:47.925983626+00:00 stderr P I0813 20:08:47.925923 1 garbagecollector.go:241] "syncing garbage collector with updated resources from discovery" attempt=1 diff="added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes 
/v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apiserver.openshift.io/v1, Resource=apirequestcounts apps.openshift.io/v1, Resource=deploymentconfigs apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets authorization.openshift.io/v1, Resource=rolebindingrestrictions autoscaling.openshift.io/v1, Resource=clusterautoscalers autoscaling.openshift.io/v1beta1, Resource=machineautoscalers autoscaling/v2, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs build.openshift.io/v1, Resource=buildconfigs build.openshift.io/v1, Resource=builds certificates.k8s.io/v1, Resource=certificatesigningrequests config.openshift.io/v1, Resource=apiservers config.openshift.io/v1, Resource=authentications config.openshift.io/v1, Resource=builds config.openshift.io/v1, Resource=clusteroperators config.openshift.io/v1, Resource=clusterversions config.openshift.io/v1, Resource=consoles config.openshift.io/v1, Resource=dnses config.openshift.io/v1, Resource=featuregates config.openshift.io/v1, Resource=imagecontentpolicies config.openshift.io/v1, Resource=imagedigestmirrorsets config.openshift.io/v1, Resource=images config.openshift.io/v1, Resource=imagetagmirrorsets config.openshift.io/v1, Resource=infrastructures config.openshift.io/v1, Resource=ingresses config.openshift.io/v1, Resource=networks config.openshift.io/v1, Resource=nodes config.openshift.io/v1, Resource=oauths config.openshift.io/v1, 
Resource=operatorhubs config.openshift.io/v1, Resource=projects config.openshift.io/v1, Resource=proxies config.openshift.io/v1, Resource=schedulers console.openshift.io/v1, Resource=consoleclidownloads console.openshift.io/v1, Resource=consoleexternalloglinks console.openshift.io/v1, Resource=consolelinks console.openshift.io/v1, Resource=consolenotifications console.openshift.io/v1, Resource=consoleplugins console.openshift.io/v1, Resource=consolequickstarts console.openshift.io/v1, Resource=consolesamples console.openshift.io/v1, Resource=consoleyamlsamples controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1, Resource=prioritylevelconfigurations helm.openshift.io/v1beta1, Resource=helmchartrepositories helm.openshift.io/v1beta1, Resource=projecthelmchartrepositories image.openshift.io/v1, Resource=images image.openshift.io/v1, Resource=imagestreams imageregistry.operator.openshift.io/v1, Resource=configs imageregistry.operator.openshift.io/v1, Resource=imagepruners infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediations infrastructure.cluster.x-k8s.io/v1beta1, Resource=metal3remediationtemplates ingress.operator.openshift.io/v1, Resource=dnsrecords ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddressclaims ipam.cluster.x-k8s.io/v1beta1, Resource=ipaddresses k8s.cni.cncf.io/v1, Resource=network-attachment-definitions k8s.ovn.org/v1, Resource=adminpolicybasedexternalroutes k8s.ovn.org/v1, Resource=egressfirewalls k8s.ovn.org/v1, Resource=egressips k8s.ovn.org/v1, Resource=egressqoses k8s.ovn.org/v1, Resource=egressservices machine.openshift.io/v1, Resource=controlplanemachinesets machine.openshift.io/v1beta1, Resource=machinehealthchecks machine.openshift.io/v1beta1, Resource=machines machine.openshift.io/v1beta1, 
Resource=machinesets machineconfiguration.openshift.io/v1, Resource=containerruntimeconfigs machineconfiguration.openshift.io/v1, Resource=controllerconfigs machineconfiguration.openshift.io/v1, Resource=kubeletconfigs machineconfiguration.openshift.io/v1, Resource=machineconfigpools machineconfiguration.openshift.io/v1, Resource=machineconfigs migration.k8s.io/v1alpha1, Resource=storagestates migration.k8s.io/v1alpha1, Resource=storageversionmigrations monitoring.coreos.com/v1, Resource=alertmanagers monitoring.coreos.com/v1, Resource=podmonitors monitoring.coreos.com/v1, Resource=probes monitoring.coreos.com/v1, Resource=prometheuses monitoring.coreos.com/v1, Resource=prometheusrules monitoring.coreos.com/v1, Resource=servicemonitors monitoring.coreos.com/v1, Resource=thanosrulers monitoring.coreos.com/v1beta1, Resource=alertmanagerconfigs monitoring.openshift.io/v1, Resource=alertingrules monitoring.openshift.io/v1, Resource=alertrelabelconfigs network.operator.openshift.io/v1, Resource=egressrouters network.operator.openshift.io/v1, Resource=operatorpkis networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses oauth.openshift.io/v1, Resource=oauthaccesstokens oauth.openshift.io/v1, Resource=oauthauthorizetokens oauth.openshift.io/v1, Resource=oauthclientauthorizations oauth.openshift.io/v1, Resource=oauthclients oauth.openshift.io/v1, Resource=useroauthaccesstokens operator.openshift.io/v1, Resource=authentications operator.openshift.io/v1, Resource=clustercsidrivers operator.openshift.io/v1, Resource=configs operator.openshift.io/v1, Resource=consoles operator.openshift.io/v1, Resource=csisnapshotcontrollers operator.openshift.io/v1, Resource=dnses operator.openshift.io/v1, Resource=etcds operator.openshift.io/v1, Resource=ingresscontrollers operator.openshift.io/v1, Resource=kubeapiservers operator.openshift.io/v1, 
Resource=kubecontrollermanagers operator.openshift.io/v1, Resource=kubeschedulers operator.openshift.io/v1, Resource=kubestorageversionmigrators operator.openshift.io/v1, Resource=machineconfigurations operator.openshift.io/v1, Resource=networks operator.openshift.io/v1, Resource=openshiftapiservers operator.openshift.io/v1, Resource=openshiftcontrollermanagers operator.openshift.io/v1, Resource=servicecas operator.openshift.io/v1, Resource=storages operator.openshift.io/v1alpha1, Resource=imagecontentsourcepolicies operators.coreos.com/v1, Resource=olmconfigs operators.coreos.com/v1, Resource=operatorgroups operators.coreos.com/v1, Resource=operators operators.coreos.com/v1alpha1, Resource=catalogsources operators.coreos.com/v1alpha1, Resource=clusterserviceversions operators.coreos.com/v1alpha1, Resource=installplans operators.coreos.com/v1alpha1, Resource=subscriptions operators.coreos.com/v2, Resource=operatorconditions policy.networking.k8s.io/v1alpha1, Resource=adminnetworkpolicies policy.networking.k8s.io/v1alpha1, Resource=baselineadminnetworkpolicies policy/v1, Resource=poddisruptionbudgets project.openshift.io/v1, Resource=projects quota.openshift.io/v1, Resource=clusterresourcequotas rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles route.openshift.io/v1, Resource=routes samples.operator.openshift.io/v1, Resource=configs scheduling.k8s.io/v1, Resource=priorityclasses security.internal.openshift.io/v1, Resource=rangeallocations security.openshift.io/v1, Resource=rangeallocations security.openshift.io/v1, Resource=securitycontextconstraints storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=csistoragecapacities storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments t 2025-08-13T20:08:47.926037307+00:00 stderr F 
emplate.openshift.io/v1, Resource=brokertemplateinstances template.openshift.io/v1, Resource=templateinstances template.openshift.io/v1, Resource=templates user.openshift.io/v1, Resource=groups user.openshift.io/v1, Resource=identities user.openshift.io/v1, Resource=users whereabouts.cni.cncf.io/v1alpha1, Resource=ippools whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations], removed: []" 2025-08-13T20:08:47.930503085+00:00 stderr F I0813 20:08:47.930474 1 controllermanager.go:787] "Started controller" controller="clusterrole-aggregation-controller" 2025-08-13T20:08:47.930555957+00:00 stderr F I0813 20:08:47.930542 1 controllermanager.go:756] "Starting controller" controller="ttl-after-finished-controller" 2025-08-13T20:08:47.930855115+00:00 stderr F I0813 20:08:47.930766 1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" 2025-08-13T20:08:47.930944148+00:00 stderr F I0813 20:08:47.930920 1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator 2025-08-13T20:08:47.942238932+00:00 stderr F I0813 20:08:47.942157 1 controllermanager.go:787] "Started controller" controller="ttl-after-finished-controller" 2025-08-13T20:08:47.942238932+00:00 stderr F I0813 20:08:47.942206 1 controllermanager.go:756] "Starting controller" controller="endpointslice-controller" 2025-08-13T20:08:47.942862410+00:00 stderr F I0813 20:08:47.942411 1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" 2025-08-13T20:08:47.942862410+00:00 stderr F I0813 20:08:47.942443 1 shared_informer.go:311] Waiting for caches to sync for TTL after finished 2025-08-13T20:08:47.951827457+00:00 stderr F I0813 20:08:47.951738 1 controllermanager.go:787] "Started controller" controller="endpointslice-controller" 2025-08-13T20:08:47.951940510+00:00 stderr F I0813 20:08:47.951885 1 controllermanager.go:756] "Starting controller" controller="endpointslice-mirroring-controller" 2025-08-13T20:08:47.952293440+00:00 
stderr F I0813 20:08:47.952270 1 endpointslice_controller.go:264] "Starting endpoint slice controller" 2025-08-13T20:08:47.952336421+00:00 stderr F I0813 20:08:47.952324 1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice 2025-08-13T20:08:47.965276902+00:00 stderr F I0813 20:08:47.965247 1 controllermanager.go:787] "Started controller" controller="endpointslice-mirroring-controller" 2025-08-13T20:08:47.965335574+00:00 stderr F I0813 20:08:47.965322 1 controllermanager.go:756] "Starting controller" controller="daemonset-controller" 2025-08-13T20:08:47.965650653+00:00 stderr F I0813 20:08:47.965587 1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" 2025-08-13T20:08:47.965947521+00:00 stderr F I0813 20:08:47.965884 1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring 2025-08-13T20:08:47.984336619+00:00 stderr F I0813 20:08:47.984295 1 controllermanager.go:787] "Started controller" controller="daemonset-controller" 2025-08-13T20:08:47.984405141+00:00 stderr F I0813 20:08:47.984391 1 controllermanager.go:756] "Starting controller" controller="disruption-controller" 2025-08-13T20:08:47.984636617+00:00 stderr F I0813 20:08:47.984617 1 daemon_controller.go:297] "Starting daemon sets controller" 2025-08-13T20:08:47.984675948+00:00 stderr F I0813 20:08:47.984663 1 shared_informer.go:311] Waiting for caches to sync for daemon sets 2025-08-13T20:08:48.022675888+00:00 stderr F I0813 20:08:48.022550 1 controllermanager.go:787] "Started controller" controller="disruption-controller" 2025-08-13T20:08:48.022675888+00:00 stderr F I0813 20:08:48.022596 1 controllermanager.go:756] "Starting controller" controller="replicationcontroller-controller" 2025-08-13T20:08:48.022881444+00:00 stderr F I0813 20:08:48.022826 1 disruption.go:433] "Sending events to api server." 
2025-08-13T20:08:48.022881444+00:00 stderr F I0813 20:08:48.022874 1 disruption.go:444] "Starting disruption controller"
2025-08-13T20:08:48.022925815+00:00 stderr F I0813 20:08:48.022885 1 shared_informer.go:311] Waiting for caches to sync for disruption
2025-08-13T20:08:48.046176452+00:00 stderr F I0813 20:08:48.045213 1 controllermanager.go:787] "Started controller" controller="replicationcontroller-controller"
2025-08-13T20:08:48.046176452+00:00 stderr F I0813 20:08:48.045261 1 controllermanager.go:756] "Starting controller" controller="replicaset-controller"
2025-08-13T20:08:48.046176452+00:00 stderr F I0813 20:08:48.045467 1 replica_set.go:214] "Starting controller" name="replicationcontroller"
2025-08-13T20:08:48.046176452+00:00 stderr F I0813 20:08:48.045477 1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
2025-08-13T20:08:48.056757625+00:00 stderr F I0813 20:08:48.056717 1 controllermanager.go:787] "Started controller" controller="replicaset-controller"
2025-08-13T20:08:48.056887219+00:00 stderr F I0813 20:08:48.056870 1 controllermanager.go:750] "Warning: controller is disabled" controller="ttl-controller"
2025-08-13T20:08:48.056970581+00:00 stderr F I0813 20:08:48.056953 1 controllermanager.go:756] "Starting controller" controller="legacy-serviceaccount-token-cleaner-controller"
2025-08-13T20:08:48.057249199+00:00 stderr F I0813 20:08:48.057228 1 replica_set.go:214] "Starting controller" name="replicaset"
2025-08-13T20:08:48.057290110+00:00 stderr F I0813 20:08:48.057277 1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
2025-08-13T20:08:48.074530845+00:00 stderr F I0813 20:08:48.074487 1 controllermanager.go:787] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
2025-08-13T20:08:48.077342405+00:00 stderr F I0813 20:08:48.076871 1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
2025-08-13T20:08:48.077342405+00:00 stderr F I0813 20:08:48.076923 1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
2025-08-13T20:08:48.086612211+00:00 stderr F I0813 20:08:48.085824 1 shared_informer.go:311] Waiting for caches to sync for resource quota
2025-08-13T20:08:48.130402997+00:00 stderr F I0813 20:08:48.130305 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.138358525+00:00 stderr F I0813 20:08:48.138217 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.170315431+00:00 stderr F I0813 20:08:48.170265 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.171449313+00:00 stderr F I0813 20:08:48.171427 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.172733700+00:00 stderr F I0813 20:08:48.172666 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.174917463+00:00 stderr F I0813 20:08:48.174283 1 reflector.go:351] Caches populated for *v1.ControllerRevision from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.175025206+00:00 stderr F I0813 20:08:48.174973 1 reflector.go:351] Caches populated for *v1.Lease from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.198988943+00:00 stderr F W0813 20:08:48.198926 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:48.199176618+00:00 stderr F E0813 20:08:48.199153 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:48.207232879+00:00 stderr F I0813 20:08:48.207003 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.208400983+00:00 stderr F I0813 20:08:48.208321 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.216556407+00:00 stderr F I0813 20:08:48.215182 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.216579717+00:00 stderr F I0813 20:08:48.216549 1 reflector.go:351] Caches populated for *v1.CronJob from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.216974529+00:00 stderr F I0813 20:08:48.216922 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.217110883+00:00 stderr F I0813 20:08:48.216938 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.217265997+00:00 stderr F I0813 20:08:48.217211 1 reflector.go:351] Caches populated for *v2.HorizontalPodAutoscaler from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.217400161+00:00 stderr F I0813 20:08:48.217342 1 reflector.go:351] Caches populated for *v1.ResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.217544645+00:00 stderr F I0813 20:08:48.217463 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.217544645+00:00 stderr F I0813 20:08:48.217534 1 reflector.go:351] Caches populated for *v1.NetworkPolicy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.217710190+00:00 stderr F I0813 20:08:48.217663 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.217933916+00:00 stderr F I0813 20:08:48.217865 1 reflector.go:351] Caches populated for *v1.LimitRange from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.218073750+00:00 stderr F I0813 20:08:48.218022 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.218409630+00:00 stderr F I0813 20:08:48.218346 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.218512183+00:00 stderr F I0813 20:08:48.218456 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.218690978+00:00 stderr F I0813 20:08:48.218631 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.220661534+00:00 stderr F I0813 20:08:48.220472 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.223578098+00:00 stderr F I0813 20:08:48.220985 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.224551166+00:00 stderr F I0813 20:08:48.221162 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"crc\" does not exist"
2025-08-13T20:08:48.224595457+00:00 stderr F I0813 20:08:48.221204 1 topologycache.go:217] "Ignoring node because it has an excluded label" node="crc"
2025-08-13T20:08:48.224743342+00:00 stderr F I0813 20:08:48.224717 1 topologycache.go:253] "Insufficient node info for topology hints" totalZones=0 totalCPU="0" sufficientNodeInfo=true
2025-08-13T20:08:48.224842944+00:00 stderr F I0813 20:08:48.221292 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.225055050+00:00 stderr F I0813 20:08:48.221631 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.225223615+00:00 stderr F I0813 20:08:48.221716 1 reflector.go:351] Caches populated for *v1.Job from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.225371769+00:00 stderr F I0813 20:08:48.221719 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.225526074+00:00 stderr F W0813 20:08:48.221748 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:48.225670288+00:00 stderr F E0813 20:08:48.225648 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:48.225749350+00:00 stderr F I0813 20:08:48.221862 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-28658250"
2025-08-13T20:08:48.225880434+00:00 stderr F I0813 20:08:48.225847 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251905"
2025-08-13T20:08:48.225964516+00:00 stderr F I0813 20:08:48.225950 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251920"
2025-08-13T20:08:48.225994367+00:00 stderr F I0813 20:08:48.221888 1 reflector.go:351] Caches populated for *v1.PodTemplate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.226186543+00:00 stderr F I0813 20:08:48.221990 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.226397989+00:00 stderr F I0813 20:08:48.222054 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.226537903+00:00 stderr F I0813 20:08:48.222075 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.226714268+00:00 stderr F I0813 20:08:48.222116 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.226981866+00:00 stderr F I0813 20:08:48.217472 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.228448028+00:00 stderr F I0813 20:08:48.228393 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.230164247+00:00 stderr F I0813 20:08:48.230139 1 shared_informer.go:318] Caches are synced for taint-eviction-controller
2025-08-13T20:08:48.234610694+00:00 stderr F I0813 20:08:48.231951 1 shared_informer.go:318] Caches are synced for PV protection
2025-08-13T20:08:48.234610694+00:00 stderr F I0813 20:08:48.232593 1 reflector.go:351] Caches populated for *v1.EndpointSlice from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.238498946+00:00 stderr F I0813 20:08:48.238401 1 shared_informer.go:318] Caches are synced for GC
2025-08-13T20:08:48.242876641+00:00 stderr F I0813 20:08:48.242709 1 shared_informer.go:318] Caches are synced for TTL after finished
2025-08-13T20:08:48.244119587+00:00 stderr F I0813 20:08:48.244009 1 shared_informer.go:318] Caches are synced for job
2025-08-13T20:08:48.244119587+00:00 stderr F I0813 20:08:48.244088 1 shared_informer.go:318] Caches are synced for service account
2025-08-13T20:08:48.250977604+00:00 stderr F I0813 20:08:48.250638 1 shared_informer.go:318] Caches are synced for ReplicationController
2025-08-13T20:08:48.250977604+00:00 stderr F I0813 20:08:48.250684 1 shared_informer.go:318] Caches are synced for cronjob
2025-08-13T20:08:48.278232365+00:00 stderr F I0813 20:08:48.278135 1 shared_informer.go:318] Caches are synced for HPA
2025-08-13T20:08:48.279346197+00:00 stderr F I0813 20:08:48.279293 1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
2025-08-13T20:08:48.298048733+00:00 stderr F I0813 20:08:48.297946 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.317593214+00:00 stderr F I0813 20:08:48.317526 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.318342615+00:00 stderr F I0813 20:08:48.318317 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.325951913+00:00 stderr F I0813 20:08:48.325858 1 shared_informer.go:318] Caches are synced for namespace
2025-08-13T20:08:48.331644416+00:00 stderr F I0813 20:08:48.331605 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.333685375+00:00 stderr F I0813 20:08:48.333658 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.338846663+00:00 stderr F I0813 20:08:48.338697 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.348167540+00:00 stderr F I0813 20:08:48.348048 1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
2025-08-13T20:08:48.350661502+00:00 stderr F I0813 20:08:48.350587 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.350875298+00:00 stderr F W0813 20:08:48.350769 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:48.352023321+00:00 stderr F E0813 20:08:48.351990 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:48.352316749+00:00 stderr F I0813 20:08:48.352292 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.352696890+00:00 stderr F I0813 20:08:48.352673 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.353075921+00:00 stderr F W0813 20:08:48.353051 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:48.354002427+00:00 stderr F E0813 20:08:48.353935 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:48.354090910+00:00 stderr F W0813 20:08:48.353565 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io)
2025-08-13T20:08:48.354207563+00:00 stderr F E0813 20:08:48.354182 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io)
2025-08-13T20:08:48.354292706+00:00 stderr F I0813 20:08:48.353672 1 reflector.go:351] Caches populated for *v1.VolumeAttachment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.354933774+00:00 stderr F W0813 20:08:48.353699 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:08:48.358727313+00:00 stderr F I0813 20:08:48.358592 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.359022251+00:00 stderr F I0813 20:08:48.358977 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.359144925+00:00 stderr F I0813 20:08:48.353742 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.359256398+00:00 stderr F I0813 20:08:48.359208 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.359385592+00:00 stderr F W0813 20:08:48.359333 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:48.359401552+00:00 stderr F E0813 20:08:48.359384 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:48.359477644+00:00 stderr F I0813 20:08:48.356161 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.359677240+00:00 stderr F I0813 20:08:48.359138 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.360197125+00:00 stderr F I0813 20:08:48.360133 1 reflector.go:351] Caches populated for *v1.RoleBindingRestriction from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.361672467+00:00 stderr F E0813 20:08:48.361440 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Template: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:08:48.375195585+00:00 stderr F I0813 20:08:48.375086 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.379486658+00:00 stderr F I0813 20:08:48.379428 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.379542450+00:00 stderr F I0813 20:08:48.379435 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.379935441+00:00 stderr F I0813 20:08:48.379854 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.381218408+00:00 stderr F I0813 20:08:48.381024 1 shared_informer.go:318] Caches are synced for persistent volume
2025-08-13T20:08:48.387830607+00:00 stderr F I0813 20:08:48.387722 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.388398514+00:00 stderr F I0813 20:08:48.388370 1 shared_informer.go:318] Caches are synced for daemon sets
2025-08-13T20:08:48.388496096+00:00 stderr F I0813 20:08:48.388480 1 shared_informer.go:311] Waiting for caches to sync for daemon sets
2025-08-13T20:08:48.388531187+00:00 stderr F I0813 20:08:48.388519 1 shared_informer.go:318] Caches are synced for daemon sets
2025-08-13T20:08:48.388627580+00:00 stderr F I0813 20:08:48.388610 1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
2025-08-13T20:08:48.388703572+00:00 stderr F I0813 20:08:48.388683 1 endpointslicemirroring_controller.go:230] "Starting worker threads" total=5
2025-08-13T20:08:48.389498105+00:00 stderr F I0813 20:08:48.389447 1 shared_informer.go:318] Caches are synced for taint
2025-08-13T20:08:48.389560367+00:00 stderr F I0813 20:08:48.389516 1 node_lifecycle_controller.go:676] "Controller observed a new Node" node="crc"
2025-08-13T20:08:48.389620629+00:00 stderr F I0813 20:08:48.389566 1 controller_utils.go:173] "Recording event message for node" event="Registered Node crc in Controller" node="crc"
2025-08-13T20:08:48.389633729+00:00 stderr F I0813 20:08:48.389620 1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
2025-08-13T20:08:48.389888996+00:00 stderr F I0813 20:08:48.389735 1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="crc"
2025-08-13T20:08:48.391201484+00:00 stderr F I0813 20:08:48.391145 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.391743690+00:00 stderr F I0813 20:08:48.391691 1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
2025-08-13T20:08:48.393003736+00:00 stderr F I0813 20:08:48.392924 1 event.go:376] "Event occurred" object="crc" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node crc event: Registered Node crc in Controller"
2025-08-13T20:08:48.393402597+00:00 stderr F I0813 20:08:48.388428 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.395087505+00:00 stderr F I0813 20:08:48.395035 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.397462214+00:00 stderr F I0813 20:08:48.397421 1 shared_informer.go:318] Caches are synced for ephemeral
2025-08-13T20:08:48.400224593+00:00 stderr F I0813 20:08:48.400144 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.401080097+00:00 stderr F I0813 20:08:48.400979 1 shared_informer.go:318] Caches are synced for endpoint
2025-08-13T20:08:48.410372524+00:00 stderr F I0813 20:08:48.410296 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.410976841+00:00 stderr F I0813 20:08:48.410950 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.411347542+00:00 stderr F I0813 20:08:48.411308 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.411610529+00:00 stderr F I0813 20:08:48.411587 1 shared_informer.go:318] Caches are synced for PVC protection
2025-08-13T20:08:48.411818525+00:00 stderr F I0813 20:08:48.411718 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.412553236+00:00 stderr F I0813 20:08:48.412524 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.417402485+00:00 stderr F I0813 20:08:48.417332 1 shared_informer.go:318] Caches are synced for expand
2025-08-13T20:08:48.417578380+00:00 stderr F I0813 20:08:48.417554 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
2025-08-13T20:08:48.417712944+00:00 stderr F I0813 20:08:48.417698 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
2025-08-13T20:08:48.417850498+00:00 stderr F I0813 20:08:48.417732 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.418007713+00:00 stderr F I0813 20:08:48.417959 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
2025-08-13T20:08:48.418052554+00:00 stderr F I0813 20:08:48.418037 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
2025-08-13T20:08:48.418422424+00:00 stderr F I0813 20:08:48.418400 1 shared_informer.go:318] Caches are synced for stateful set
2025-08-13T20:08:48.431114778+00:00 stderr F I0813 20:08:48.423134 1 shared_informer.go:318] Caches are synced for disruption
2025-08-13T20:08:48.435616157+00:00 stderr F I0813 20:08:48.435579 1 shared_informer.go:318] Caches are synced for certificate-csrapproving
2025-08-13T20:08:48.438044067+00:00 stderr F I0813 20:08:48.438016 1 shared_informer.go:318] Caches are synced for attach detach
2025-08-13T20:08:48.454388596+00:00 stderr F I0813 20:08:48.454249 1 shared_informer.go:318] Caches are synced for endpoint_slice
2025-08-13T20:08:48.454388596+00:00 stderr F I0813 20:08:48.454349 1 endpointslice_controller.go:271] "Starting worker threads" total=5
2025-08-13T20:08:48.460518801+00:00 stderr F I0813 20:08:48.460429 1 shared_informer.go:318] Caches are synced for deployment
2025-08-13T20:08:48.478241410+00:00 stderr F I0813 20:08:48.478170 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:48.503509074+00:00 stderr F I0813 20:08:48.503079 1 shared_informer.go:318] Caches are synced for ReplicaSet
2025-08-13T20:08:48.508104666+00:00 stderr F I0813 20:08:48.508060 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="131.204µs"
2025-08-13T20:08:48.508303131+00:00 stderr F I0813 20:08:48.508281 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="117.054µs"
2025-08-13T20:08:48.508427425+00:00 stderr F I0813 20:08:48.508411 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="74.312µs"
2025-08-13T20:08:48.508714653+00:00 stderr F I0813 20:08:48.508693 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="233.996µs"
2025-08-13T20:08:48.508920109+00:00 stderr F I0813 20:08:48.508873 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="117.344µs"
2025-08-13T20:08:48.509201057+00:00 stderr F I0813 20:08:48.509124 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="150.545µs"
2025-08-13T20:08:48.509321921+00:00 stderr F I0813 20:08:48.509304 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="63.222µs"
2025-08-13T20:08:48.509433834+00:00 stderr F I0813 20:08:48.509416 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="64.282µs"
2025-08-13T20:08:48.509532907+00:00 stderr F I0813 20:08:48.509517 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="53.111µs"
2025-08-13T20:08:48.510116973+00:00 stderr F I0813 20:08:48.509758 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="193.206µs"
2025-08-13T20:08:48.510340060+00:00 stderr F I0813 20:08:48.510316 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="66.902µs"
2025-08-13T20:08:48.510444173+00:00 stderr F I0813 20:08:48.510428 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="52.671µs"
2025-08-13T20:08:48.510617848+00:00 stderr F I0813 20:08:48.510548 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="47.331µs"
2025-08-13T20:08:48.510865445+00:00 stderr F I0813 20:08:48.510842 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="184.695µs"
2025-08-13T20:08:48.511003849+00:00 stderr F I0813 20:08:48.510980 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="46.281µs"
2025-08-13T20:08:48.511213555+00:00 stderr F I0813 20:08:48.511193 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="67.052µs"
2025-08-13T20:08:48.511316458+00:00 stderr F I0813 20:08:48.511300 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="50.271µs"
2025-08-13T20:08:48.511487413+00:00 stderr F I0813 20:08:48.511471 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="123.113µs"
2025-08-13T20:08:48.511580455+00:00 stderr F I0813 20:08:48.511565 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="46.872µs"
2025-08-13T20:08:48.511685768+00:00 stderr F I0813 20:08:48.511670 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="46.642µs"
2025-08-13T20:08:48.511837123+00:00 stderr F I0813 20:08:48.511762 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="44.761µs"
2025-08-13T20:08:48.512094350+00:00 stderr F I0813 20:08:48.512072 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="190.645µs"
2025-08-13T20:08:48.512197953+00:00 stderr F I0813 20:08:48.512179 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="51.971µs"
2025-08-13T20:08:48.512302406+00:00 stderr F I0813 20:08:48.512286 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="48.421µs"
2025-08-13T20:08:48.512395189+00:00 stderr F I0813 20:08:48.512380 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="48.661µs"
2025-08-13T20:08:48.515739085+00:00 stderr F I0813 20:08:48.515698 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="3.261564ms"
2025-08-13T20:08:48.515985072+00:00 stderr F I0813 20:08:48.515957 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="99.363µs"
2025-08-13T20:08:48.516149796+00:00 stderr F I0813 20:08:48.516126 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="93.453µs"
2025-08-13T20:08:48.516332402+00:00 stderr F I0813 20:08:48.516313 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="94.853µs"
2025-08-13T20:08:48.516480176+00:00 stderr F I0813 20:08:48.516457 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="61.871µs"
2025-08-13T20:08:48.516847846+00:00 stderr F I0813 20:08:48.516823 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="304.438µs"
2025-08-13T20:08:48.517000331+00:00 stderr F I0813 20:08:48.516976 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="83.823µs"
2025-08-13T20:08:48.517119064+00:00 stderr F I0813 20:08:48.517101 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="60.702µs"
2025-08-13T20:08:48.517214557+00:00 stderr F I0813 20:08:48.517199 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-75cfd5db5d" duration="48.281µs"
2025-08-13T20:08:48.517416273+00:00 stderr F I0813 20:08:48.517399 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="149.344µs"
2025-08-13T20:08:48.517542806+00:00 stderr F I0813 20:08:48.517495 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="47.162µs"
2025-08-13T20:08:48.517640709+00:00 stderr F I0813 20:08:48.517625 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="47.821µs"
2025-08-13T20:08:48.517733182+00:00 stderr F I0813 20:08:48.517718 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="45.691µs"
2025-08-13T20:08:48.518174034+00:00 stderr F I0813 20:08:48.518154 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="147.475µs"
2025-08-13T20:08:48.518278607+00:00 stderr F I0813 20:08:48.518262 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="48.322µs"
2025-08-13T20:08:48.518372390+00:00 stderr F I0813 20:08:48.518357 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="47.482µs"
2025-08-13T20:08:48.518461153+00:00 stderr F I0813 20:08:48.518445 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="43.021µs"
2025-08-13T20:08:48.518642008+00:00 stderr F I0813 20:08:48.518625 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-585546dd8b" duration="133.144µs"
2025-08-13T20:08:48.518733881+00:00 stderr F I0813 20:08:48.518718 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="45.801µs"
2025-08-13T20:08:48.518917706+00:00 stderr F
I0813 20:08:48.518871 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="104.853µs" 2025-08-13T20:08:48.519125312+00:00 stderr F I0813 20:08:48.519013 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="51.771µs" 2025-08-13T20:08:48.519227035+00:00 stderr F I0813 20:08:48.519212 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="52.222µs" 2025-08-13T20:08:48.519330148+00:00 stderr F I0813 20:08:48.519314 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="57.292µs" 2025-08-13T20:08:48.519425300+00:00 stderr F I0813 20:08:48.519409 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="45.112µs" 2025-08-13T20:08:48.519514883+00:00 stderr F I0813 20:08:48.519499 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" duration="44.561µs" 2025-08-13T20:08:48.519698918+00:00 stderr F I0813 20:08:48.519682 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="136.063µs" 2025-08-13T20:08:48.519873833+00:00 stderr F I0813 20:08:48.519845 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="106.423µs" 2025-08-13T20:08:48.520035408+00:00 stderr F I0813 20:08:48.520011 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="47.032µs" 2025-08-13T20:08:48.520123260+00:00 stderr F I0813 20:08:48.520108 1 
replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="38.331µs" 2025-08-13T20:08:48.520224123+00:00 stderr F I0813 20:08:48.520205 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="43.781µs" 2025-08-13T20:08:48.520428139+00:00 stderr F I0813 20:08:48.520411 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="156.254µs" 2025-08-13T20:08:48.520514642+00:00 stderr F I0813 20:08:48.520499 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="39.891µs" 2025-08-13T20:08:48.520614754+00:00 stderr F I0813 20:08:48.520599 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="44.941µs" 2025-08-13T20:08:48.520707577+00:00 stderr F I0813 20:08:48.520691 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="46.591µs" 2025-08-13T20:08:48.521066787+00:00 stderr F I0813 20:08:48.521045 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="304.829µs" 2025-08-13T20:08:48.521168150+00:00 stderr F I0813 20:08:48.521151 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="44.222µs" 2025-08-13T20:08:48.521297464+00:00 stderr F I0813 20:08:48.521282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="42.281µs" 2025-08-13T20:08:48.521387367+00:00 stderr F I0813 20:08:48.521369 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-network-operator/network-operator-767c585db5" duration="40.101µs" 2025-08-13T20:08:48.521618653+00:00 stderr F I0813 20:08:48.521596 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="175.855µs" 2025-08-13T20:08:48.521724346+00:00 stderr F I0813 20:08:48.521709 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="56.512µs" 2025-08-13T20:08:48.521877411+00:00 stderr F I0813 20:08:48.521856 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="97.333µs" 2025-08-13T20:08:48.522025955+00:00 stderr F I0813 20:08:48.522009 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="60.672µs" 2025-08-13T20:08:48.522209420+00:00 stderr F I0813 20:08:48.522190 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" duration="130.343µs" 2025-08-13T20:08:48.522306923+00:00 stderr F I0813 20:08:48.522291 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="47.221µs" 2025-08-13T20:08:48.522396456+00:00 stderr F I0813 20:08:48.522378 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="40.661µs" 2025-08-13T20:08:48.522477938+00:00 stderr F I0813 20:08:48.522462 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="35.511µs" 2025-08-13T20:08:48.522561200+00:00 stderr F I0813 20:08:48.522546 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" 
duration="38.981µs" 2025-08-13T20:08:48.522763826+00:00 stderr F I0813 20:08:48.522744 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="123.553µs" 2025-08-13T20:08:48.523657262+00:00 stderr F I0813 20:08:48.523633 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="58.202µs" 2025-08-13T20:08:48.523737074+00:00 stderr F I0813 20:08:48.523680 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="100.573µs" 2025-08-13T20:08:48.523935630+00:00 stderr F I0813 20:08:48.523817 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="44.211µs" 2025-08-13T20:08:48.523935630+00:00 stderr F I0813 20:08:48.523918 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="41.351µs" 2025-08-13T20:08:48.524028362+00:00 stderr F I0813 20:08:48.524010 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="306.618µs" 2025-08-13T20:08:48.524132005+00:00 stderr F I0813 20:08:48.524115 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="49.082µs" 2025-08-13T20:08:48.524378432+00:00 stderr F I0813 20:08:48.524361 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="198.806µs" 2025-08-13T20:08:48.528627834+00:00 stderr F I0813 20:08:48.528586 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.529736296+00:00 
stderr F I0813 20:08:48.524011 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="66.492µs" 2025-08-13T20:08:48.529835749+00:00 stderr F I0813 20:08:48.523496 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="44.411µs" 2025-08-13T20:08:48.529881410+00:00 stderr F I0813 20:08:48.523422 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-66f66f94cf" duration="76.642µs" 2025-08-13T20:08:48.532723372+00:00 stderr F I0813 20:08:48.532694 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.540652669+00:00 stderr F I0813 20:08:48.540449 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[v1/Node, namespace: openshift-network-diagnostics, name: crc, uid: c83c88d3-f34d-4083-a59d-1c50f90f89b8]" observed="[v1/Node, namespace: , name: crc, uid: c83c88d3-f34d-4083-a59d-1c50f90f89b8]" 2025-08-13T20:08:48.562388312+00:00 stderr F I0813 20:08:48.562313 1 shared_informer.go:311] Waiting for caches to sync for garbage collector 2025-08-13T20:08:48.566424568+00:00 stderr F I0813 20:08:48.566367 1 reflector.go:351] Caches populated for *v1.PriorityLevelConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.566628734+00:00 stderr F I0813 20:08:48.566556 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.566831220+00:00 stderr F I0813 20:08:48.566750 1 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.567184670+00:00 stderr F I0813 20:08:48.567158 1 reflector.go:351] Caches populated for 
*v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.567446997+00:00 stderr F I0813 20:08:48.566866 1 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.569103285+00:00 stderr F I0813 20:08:48.567739 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.569210628+00:00 stderr F I0813 20:08:48.569149 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.570571407+00:00 stderr F I0813 20:08:48.570495 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.573558342+00:00 stderr F I0813 20:08:48.572326 1 reflector.go:351] Caches populated for *v1.FlowSchema from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.573985525+00:00 stderr F I0813 20:08:48.573960 1 reflector.go:351] Caches populated for *v1.PriorityClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.574140849+00:00 stderr F W0813 20:08:48.574121 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io) 2025-08-13T20:08:48.574193961+00:00 stderr F E0813 20:08:48.574176 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RangeAllocation: failed to list *v1.RangeAllocation: an error on 
the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io) 2025-08-13T20:08:48.576663381+00:00 stderr F I0813 20:08:48.576638 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.577426713+00:00 stderr F I0813 20:08:48.577400 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.579201814+00:00 stderr F I0813 20:08:48.579139 1 shared_informer.go:318] Caches are synced for crt configmap 2025-08-13T20:08:48.586915115+00:00 stderr F W0813 20:08:48.585066 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io) 2025-08-13T20:08:48.586915115+00:00 stderr F E0813 20:08:48.585116 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.UserOAuthAccessToken: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get 
useroauthaccesstokens.oauth.openshift.io) 2025-08-13T20:08:48.591644731+00:00 stderr F I0813 20:08:48.591578 1 reflector.go:351] Caches populated for *v1.ClusterResourceQuota from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.592452454+00:00 stderr F W0813 20:08:48.592423 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io) 2025-08-13T20:08:48.592565347+00:00 stderr F I0813 20:08:48.592512 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.592703811+00:00 stderr F I0813 20:08:48.592654 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.592880516+00:00 stderr F I0813 20:08:48.592831 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.593031731+00:00 stderr F I0813 20:08:48.592980 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.593171065+00:00 stderr F E0813 20:08:48.592512 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BrokerTemplateInstance: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post 
\"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io) 2025-08-13T20:08:48.593312349+00:00 stderr F I0813 20:08:48.592871 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.593403331+00:00 stderr F I0813 20:08:48.593253 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.593720100+00:00 stderr F I0813 20:08:48.593307 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.599483286+00:00 stderr F I0813 20:08:48.599421 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.602196293+00:00 stderr F I0813 20:08:48.600468 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.602296876+00:00 stderr F I0813 20:08:48.600671 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.602373079+00:00 stderr F I0813 20:08:48.601187 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.602494552+00:00 stderr F I0813 20:08:48.601738 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[config.openshift.io/v1/ClusterVersion, namespace: openshift-monitoring, name: version, uid: a73cbaa6-40d3-4694-9b98-c0a6eed45825]" observed="[config.openshift.io/v1/ClusterVersion, namespace: , name: version, uid: a73cbaa6-40d3-4694-9b98-c0a6eed45825]" 
2025-08-13T20:08:48.608744571+00:00 stderr F I0813 20:08:48.608609 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.615807384+00:00 stderr F I0813 20:08:48.615638 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.622410963+00:00 stderr F I0813 20:08:48.622293 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.623100253+00:00 stderr F I0813 20:08:48.623058 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.623356120+00:00 stderr F I0813 20:08:48.623293 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.623675739+00:00 stderr F I0813 20:08:48.623549 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.628728374+00:00 stderr F I0813 20:08:48.628596 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.652296110+00:00 stderr F I0813 20:08:48.652191 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.658929790+00:00 stderr F I0813 20:08:48.658113 1 shared_informer.go:318] Caches are synced for crt configmap 2025-08-13T20:08:48.676236946+00:00 stderr F I0813 20:08:48.675092 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.708386508+00:00 stderr F I0813 20:08:48.707751 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.741002513+00:00 stderr F I0813 20:08:48.738120 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.741002513+00:00 stderr F I0813 20:08:48.738431 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/Console, namespace: openshift-console, name: cluster, uid: 5f9f95ea-d66e-45cc-9aa2-ed289b62d92e]" observed="[operator.openshift.io/v1/Console, namespace: , name: cluster, uid: 5f9f95ea-d66e-45cc-9aa2-ed289b62d92e]" 2025-08-13T20:08:48.841039621+00:00 stderr F I0813 20:08:48.838733 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.844560452+00:00 stderr F I0813 20:08:48.844037 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.862288021+00:00 stderr F I0813 20:08:48.862136 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.874573103+00:00 stderr F I0813 20:08:48.874469 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.893934288+00:00 stderr F I0813 20:08:48.891728 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.916631109+00:00 stderr F I0813 20:08:48.916556 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.927503220+00:00 stderr F I0813 20:08:48.927264 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:08:48.941566404+00:00 stderr F I0813 20:08:48.941452 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.958976503+00:00 stderr F I0813 20:08:48.958634 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.977092122+00:00 stderr F I0813 20:08:48.976871 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:48.990464056+00:00 stderr F I0813 20:08:48.988703 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.012039984+00:00 stderr F I0813 20:08:49.010217 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.014452403+00:00 stderr F W0813 20:08:49.014420 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:49.014524605+00:00 stderr F E0813 20:08:49.014506 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request 
from succeeding (get imagestreams.image.openshift.io) 2025-08-13T20:08:49.022316009+00:00 stderr F I0813 20:08:49.020263 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.037744521+00:00 stderr F I0813 20:08:49.037694 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.048128789+00:00 stderr F I0813 20:08:49.048016 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.064155498+00:00 stderr F I0813 20:08:49.062308 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.078476709+00:00 stderr F I0813 20:08:49.078311 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.091086090+00:00 stderr F I0813 20:08:49.090366 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.105036000+00:00 stderr F I0813 20:08:49.104982 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.120080202+00:00 stderr F I0813 20:08:49.120025 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.130395058+00:00 stderr F I0813 20:08:49.130346 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:08:49.147741075+00:00 stderr F I0813 20:08:49.147222 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-08-13T20:08:49.162212570+00:00 stderr F I0813 20:08:49.162098 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.176187670+00:00 stderr F I0813 20:08:49.176131 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.190469110+00:00 stderr F I0813 20:08:49.190333 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.208186768+00:00 stderr F I0813 20:08:49.208129 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.208536468+00:00 stderr F I0813 20:08:49.208504 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: openshift-machine-api, name: master, uid: 8efb5656-7de8-467a-9f2a-237a011a4783]" observed="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: , name: master, uid: 8efb5656-7de8-467a-9f2a-237a011a4783]"
2025-08-13T20:08:49.208618340+00:00 stderr F I0813 20:08:49.208600 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: openshift-machine-api, name: worker, uid: 87ae8215-5559-4b8a-a6cc-81c3c83b8a6e]" observed="[machineconfiguration.openshift.io/v1/MachineConfigPool, namespace: , name: worker, uid: 87ae8215-5559-4b8a-a6cc-81c3c83b8a6e]"
2025-08-13T20:08:49.220694246+00:00 stderr F I0813 20:08:49.220638 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.236635423+00:00 stderr F I0813 20:08:49.236452 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.254709032+00:00 stderr F I0813 20:08:49.254553 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.275917170+00:00 stderr F I0813 20:08:49.275764 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.290558850+00:00 stderr F I0813 20:08:49.290497 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.300722161+00:00 stderr F I0813 20:08:49.300641 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.312479628+00:00 stderr F I0813 20:08:49.312393 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.328198769+00:00 stderr F I0813 20:08:49.328091 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.328658202+00:00 stderr F W0813 20:08:49.328589 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:49.328658202+00:00 stderr F E0813 20:08:49.328644 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:49.344517027+00:00 stderr F I0813 20:08:49.344372 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.361311858+00:00 stderr F I0813 20:08:49.360169 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.375651149+00:00 stderr F I0813 20:08:49.375555 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.376022760+00:00 stderr F I0813 20:08:49.375736 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/DNS, namespace: openshift-dns, name: default, uid: 8e7b8280-016f-4ceb-a792-fc5be2494468]" observed="[operator.openshift.io/v1/DNS, namespace: , name: default, uid: 8e7b8280-016f-4ceb-a792-fc5be2494468]"
2025-08-13T20:08:49.398016871+00:00 stderr F I0813 20:08:49.397925 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.416319915+00:00 stderr F I0813 20:08:49.416217 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.428527415+00:00 stderr F I0813 20:08:49.428372 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.442392953+00:00 stderr F I0813 20:08:49.442298 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.455387295+00:00 stderr F I0813 20:08:49.455267 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.466575146+00:00 stderr F I0813 20:08:49.466459 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.515061396+00:00 stderr F I0813 20:08:49.514868 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:49.515827588+00:00 stderr F I0813 20:08:49.515264 1 graph_builder.go:683] "replacing virtual item with observed item" virtual="[operator.openshift.io/v1/Network, namespace: openshift-host-network, name: cluster, uid: 5ca11404-f665-4aa0-85cf-da2f3e9c86ad]" observed="[operator.openshift.io/v1/Network, namespace: , name: cluster, uid: 5ca11404-f665-4aa0-85cf-da2f3e9c86ad]"
2025-08-13T20:08:49.675247759+00:00 stderr F W0813 20:08:49.675128 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:08:49.675247759+00:00 stderr F E0813 20:08:49.675207 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Template: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:08:49.725230932+00:00 stderr F W0813 20:08:49.725119 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:49.725230932+00:00 stderr F E0813 20:08:49.725191 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:49.822870962+00:00 stderr F W0813 20:08:49.822769 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:49.822989725+00:00 stderr F E0813 20:08:49.822972 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:49.825884788+00:00 stderr F W0813 20:08:49.825859 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io)
2025-08-13T20:08:49.826011682+00:00 stderr F E0813 20:08:49.825995 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io)
2025-08-13T20:08:49.888533904+00:00 stderr F W0813 20:08:49.888484 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:49.888609186+00:00 stderr F E0813 20:08:49.888590 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.UserOAuthAccessToken: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:49.960108826+00:00 stderr F W0813 20:08:49.958192 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:49.960108826+00:00 stderr F E0813 20:08:49.958250 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:49.976144286+00:00 stderr F I0813 20:08:49.976056 1 reflector.go:351] Caches populated for *v1.PartialObjectMetadata from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:50.069486842+00:00 stderr F W0813 20:08:50.069337 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:50.069486842+00:00 stderr F E0813 20:08:50.069406 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BrokerTemplateInstance: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:50.140110166+00:00 stderr F W0813 20:08:50.139872 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:50.140110166+00:00 stderr F E0813 20:08:50.139952 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RangeAllocation: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:51.308362571+00:00 stderr F W0813 20:08:51.307680 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:51.308362571+00:00 stderr F E0813 20:08:51.307763 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:51.773955860+00:00 stderr F W0813 20:08:51.773871 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:51.774141596+00:00 stderr F E0813 20:08:51.774083 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:51.806047811+00:00 stderr F W0813 20:08:51.805161 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:51.806047811+00:00 stderr F E0813 20:08:51.805217 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:52.179231950+00:00 stderr F W0813 20:08:52.179055 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:52.179231950+00:00 stderr F E0813 20:08:52.179115 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RangeAllocation: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:52.265486693+00:00 stderr F W0813 20:08:52.265263 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:52.265486693+00:00 stderr F E0813 20:08:52.265331 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.UserOAuthAccessToken: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:52.491827473+00:00 stderr F W0813 20:08:52.491636 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io)
2025-08-13T20:08:52.491827473+00:00 stderr F E0813 20:08:52.491694 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io)
2025-08-13T20:08:52.500160531+00:00 stderr F W0813 20:08:52.500089 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:08:52.500160531+00:00 stderr F E0813 20:08:52.500138 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Template: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:08:52.812302311+00:00 stderr F W0813 20:08:52.812161 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:52.812302311+00:00 stderr F E0813 20:08:52.812225 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:52.815945425+00:00 stderr F W0813 20:08:52.815757 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:52.815976186+00:00 stderr F E0813 20:08:52.815945 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:53.031703841+00:00 stderr F W0813 20:08:53.031583 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:53.031703841+00:00 stderr F E0813 20:08:53.031646 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BrokerTemplateInstance: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:56.011840344+00:00 stderr F W0813 20:08:56.011526 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:56.011840344+00:00 stderr F E0813 20:08:56.011692 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: an error on the server ("Internal Server Error: \"/apis/image.openshift.io/v1/imagestreams?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get imagestreams.image.openshift.io)
2025-08-13T20:08:56.601994895+00:00 stderr F W0813 20:08:56.601862 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:56.601994895+00:00 stderr F E0813 20:08:56.601951 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RangeAllocation: failed to list *v1.RangeAllocation: an error on the server ("Internal Server Error: \"/apis/security.openshift.io/v1/rangeallocations?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get rangeallocations.security.openshift.io)
2025-08-13T20:08:56.803401380+00:00 stderr F W0813 20:08:56.803277 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:56.803401380+00:00 stderr F E0813 20:08:56.803351 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.UserOAuthAccessToken: failed to list *v1.UserOAuthAccessToken: an error on the server ("Internal Server Error: \"/apis/oauth.openshift.io/v1/useroauthaccesstokens?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get useroauthaccesstokens.oauth.openshift.io)
2025-08-13T20:08:57.305835734+00:00 stderr F W0813 20:08:57.305720 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:57.305835734+00:00 stderr F E0813 20:08:57.305815 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Build: failed to list *v1.Build: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/builds?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get builds.build.openshift.io)
2025-08-13T20:08:57.338952673+00:00 stderr F W0813 20:08:57.338861 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:57.339030125+00:00 stderr F E0813 20:08:57.339013 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: an error on the server ("Internal Server Error: \"/apis/route.openshift.io/v1/routes?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get routes.route.openshift.io)
2025-08-13T20:08:57.612987720+00:00 stderr F W0813 20:08:57.612859 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io)
2025-08-13T20:08:57.612987720+00:00 stderr F E0813 20:08:57.612941 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.TemplateInstance: failed to list *v1.TemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templateinstances.template.openshift.io)
2025-08-13T20:08:57.796716768+00:00 stderr F W0813 20:08:57.796587 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:57.796716768+00:00 stderr F E0813 20:08:57.796653 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.DeploymentConfig: failed to list *v1.DeploymentConfig: an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1/deploymentconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
2025-08-13T20:08:58.662998036+00:00 stderr F W0813 20:08:58.662024 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:58.662998036+00:00 stderr F E0813 20:08:58.662124 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BuildConfig: failed to list *v1.BuildConfig: an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1/buildconfigs?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get buildconfigs.build.openshift.io)
2025-08-13T20:08:58.715656035+00:00 stderr F W0813 20:08:58.715505 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:58.715656035+00:00 stderr F E0813 20:08:58.715607 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.BrokerTemplateInstance: failed to list *v1.BrokerTemplateInstance: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/brokertemplateinstances?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get brokertemplateinstances.template.openshift.io)
2025-08-13T20:08:58.744535883+00:00 stderr F W0813 20:08:58.744405 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:08:58.744535883+00:00 stderr F E0813 20:08:58.744479 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Template: failed to list *v1.Template: an error on the server ("Internal Server Error: \"/apis/template.openshift.io/v1/templates?limit=500&resourceVersion=0\": Post \"https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 10.217.4.1:443: connect: connection refused") has prevented the request from succeeding (get templates.template.openshift.io)
2025-08-13T20:09:03.688126690+00:00 stderr F I0813 20:09:03.688005 1 reflector.go:351] Caches populated for *v1.RangeAllocation from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:04.809332475+00:00 stderr F I0813 20:09:04.809170 1 reflector.go:351] Caches populated for *v1.ImageStream from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:05.427590013+00:00 stderr F I0813 20:09:05.427367 1 reflector.go:351] Caches populated for *v1.TemplateInstance from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:05.487728298+00:00 stderr F I0813 20:09:05.487572 1 reflector.go:351] Caches populated for *v1.UserOAuthAccessToken from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:05.573574079+00:00 stderr F I0813 20:09:05.573457 1 reflector.go:351] Caches populated for *v1.DeploymentConfig from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:07.227612332+00:00 stderr F I0813 20:09:07.227471 1 reflector.go:351] Caches populated for *v1.BrokerTemplateInstance from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:07.547623066+00:00 stderr F I0813 20:09:07.545003 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:07.547623066+00:00 stderr F I0813 20:09:07.545470 1 graph_builder.go:407] "item references an owner with coordinates that do not match the observed identity" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" owner="[apps/v1/DaemonSet, namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]"
2025-08-13T20:09:07.868350962+00:00 stderr F I0813 20:09:07.868204 1 reflector.go:351] Caches populated for *v1.Build from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:10.443062899+00:00 stderr F I0813 20:09:10.442950 1 reflector.go:351] Caches populated for *v1.BuildConfig from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:11.364711734+00:00 stderr F I0813 20:09:11.364560 1 reflector.go:351] Caches populated for *v1.Template from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:11.388139495+00:00 stderr F I0813 20:09:11.387977 1 shared_informer.go:318] Caches are synced for resource quota
2025-08-13T20:09:11.388139495+00:00 stderr F I0813 20:09:11.388056 1 resource_quota_controller.go:496] "synced quota controller"
2025-08-13T20:09:11.410004902+00:00 stderr F I0813 20:09:11.409687 1 shared_informer.go:318] Caches are synced for resource quota
2025-08-13T20:09:11.430987904+00:00 stderr F I0813 20:09:11.430874 1 shared_informer.go:318] Caches are synced for garbage collector
2025-08-13T20:09:11.430987904+00:00 stderr F I0813 20:09:11.430946 1 garbagecollector.go:166] "All resource monitors have synced.
Proceeding to collect garbage" 2025-08-13T20:09:11.463425614+00:00 stderr F I0813 20:09:11.463303 1 shared_informer.go:318] Caches are synced for garbage collector 2025-08-13T20:09:11.463425614+00:00 stderr F I0813 20:09:11.463340 1 garbagecollector.go:290] "synced garbage collector" 2025-08-13T20:09:11.463425614+00:00 stderr F I0813 20:09:11.463419 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-machine-config-operator, uid: 74ba47b5-3bcc-42c0-9deb-e12d8da952ac]" virtual=false 2025-08-13T20:09:11.463690351+00:00 stderr F I0813 20:09:11.463664 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-console-user-settings, uid: a8cf2c15-bc3c-4cf4-9841-5e4727e35f10]" virtual=false 2025-08-13T20:09:11.463939508+00:00 stderr F I0813 20:09:11.463865 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cloud-platform-infra, uid: 2d3b34a1-0340-4ea4-b15a-1a0164234aa8]" virtual=false 2025-08-13T20:09:11.464029561+00:00 stderr F I0813 20:09:11.463976 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-scheduler-operator, uid: 6f0ce1c2-37a2-421e-91e4-7be791e0ea85]" virtual=false 2025-08-13T20:09:11.464126364+00:00 stderr F I0813 20:09:11.464058 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-multus, uid: 117f4806-93cc-4b78-ae9b-74d16192e441]" virtual=false 2025-08-13T20:09:11.464126364+00:00 stderr F I0813 20:09:11.464107 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-console, uid: eb0e1992-df9a-419d-b7f4-4d9b50e766e7]" virtual=false 2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464183 1 garbagecollector.go:549] "Processing item" item="[batch/v1/CronJob, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 946673ee-e5bd-418a-934e-c38198674faa]" virtual=false 
2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464264 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-network-operator, uid: 3796492a-40b7-473e-aa60-d1e803b2692a]" virtual=false 2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464287 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cloud-network-config-controller, uid: 54f58fbf-f373-4947-8956-e1108a6bd97e]" virtual=false 2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464302 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-config-managed, uid: c4538c2f-b9cd-4ad4-8d0e-02315cca7510]" virtual=false 2025-08-13T20:09:11.464325160+00:00 stderr F I0813 20:09:11.464303 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operator-lifecycle-manager, name: olm-operators, uid: b9143910-b01b-4a5d-b64e-0612b2b7b21d]" virtual=false 2025-08-13T20:09:11.464452453+00:00 stderr F I0813 20:09:11.464379 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-version, uid: 2a354368-bd77-40cc-ae1a-68449ece8efb]" virtual=false 2025-08-13T20:09:11.464533586+00:00 stderr F I0813 20:09:11.464477 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operators, name: global-operators, uid: 5a05d65b-a6fc-48c8-8588-06b4ec3a70e9]" virtual=false 2025-08-13T20:09:11.464533586+00:00 stderr F I0813 20:09:11.463860 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-monitoring, uid: 98779a1e-aec5-4b04-aba2-1e5d3ba2ec08]" virtual=false 2025-08-13T20:09:11.464701300+00:00 stderr F I0813 20:09:11.464219 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-machine-approver, uid: 9bfd5222-f6af-4843-b92c-1a5e0e0faafe]" virtual=false 
2025-08-13T20:09:11.464822804+00:00 stderr F I0813 20:09:11.464225 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-network-node-identity, uid: c5e796ee-668d-4610-a134-4468bf0cbdf2]" virtual=false 2025-08-13T20:09:11.465949756+00:00 stderr F I0813 20:09:11.464241 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-controller-manager-operator, uid: 73d77f82-bd47-45ec-bb06-d29938e02cfd]" virtual=false 2025-08-13T20:09:11.465949756+00:00 stderr F I0813 20:09:11.464252 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-machine-api, uid: df6705c6-ec2d-4f74-a803-3f769fd28210]" virtual=false 2025-08-13T20:09:11.465949756+00:00 stderr F I0813 20:09:11.464266 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-nutanix-infra, uid: 3e99d67f-8f0c-4b0e-b68c-e85c2944daf4]" virtual=false 2025-08-13T20:09:11.465949756+00:00 stderr F I0813 20:09:11.464267 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-controller-manager-operator, uid: d21d09aa-3262-40bb-bb1c-9a70b88b2f48]" virtual=false 2025-08-13T20:09:11.485689722+00:00 stderr F I0813 20:09:11.483124 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-controller-manager-operator, uid: d21d09aa-3262-40bb-bb1c-9a70b88b2f48]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.485689722+00:00 stderr F I0813 20:09:11.483194 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-etcd-operator, uid: 7413916b-b5eb-4718-9e41-632b89d445af]" virtual=false 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.486306 1 garbagecollector.go:615] "item has at least one 
existing owner, will not garbage collect" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operators, name: global-operators, uid: 5a05d65b-a6fc-48c8-8588-06b4ec3a70e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.486367 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-marketplace, uid: ca0ec70d-cbfe-4901-ae7a-150a9dfe5920]" virtual=false 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.488112 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-multus, uid: 117f4806-93cc-4b78-ae9b-74d16192e441]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.488147 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-openstack-infra, uid: ea73e21f-aeeb-4b3d-9983-95bf78a60eea]" virtual=false 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.488431 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-network-operator, uid: 3796492a-40b7-473e-aa60-d1e803b2692a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.488878523+00:00 stderr F I0813 20:09:11.488457 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-operators, uid: 7920babe-d7ab-4197-9546-c2d10b1040a1]" virtual=false 2025-08-13T20:09:11.489874192+00:00 stderr F I0813 20:09:11.489157 1 garbagecollector.go:615] "item 
has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-monitoring, uid: 98779a1e-aec5-4b04-aba2-1e5d3ba2ec08]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.489874192+00:00 stderr F I0813 20:09:11.489254 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-ovirt-infra, uid: 2d9c4601-cca2-4813-b9e6-2830b7097736]" virtual=false 2025-08-13T20:09:11.493095934+00:00 stderr F I0813 20:09:11.491170 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[batch/v1/CronJob, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 946673ee-e5bd-418a-934e-c38198674faa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.493095934+00:00 stderr F I0813 20:09:11.491207 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-config-operator, uid: 45fc724c-871f-4abe-8f95-988ef4974157]" virtual=false 2025-08-13T20:09:11.496080280+00:00 stderr F I0813 20:09:11.495106 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-controller-manager-operator, uid: 73d77f82-bd47-45ec-bb06-d29938e02cfd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.496080280+00:00 stderr F I0813 20:09:11.495170 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-storage-operator, uid: 01a4f482-d92f-4128-87f6-1f1177ad4f4a]" virtual=false 2025-08-13T20:09:11.496080280+00:00 stderr F I0813 20:09:11.495421 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-config-managed, uid: c4538c2f-b9cd-4ad4-8d0e-02315cca7510]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.496080280+00:00 stderr F I0813 20:09:11.495447 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kni-infra, uid: a8db5ce0-f088-4b83-8f52-bf8a94f40b94]" virtual=false 2025-08-13T20:09:11.498893421+00:00 stderr F I0813 20:09:11.498853 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operators.coreos.com/v1/OperatorGroup, namespace: openshift-operator-lifecycle-manager, name: olm-operators, uid: b9143910-b01b-4a5d-b64e-0612b2b7b21d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.498959612+00:00 stderr F I0813 20:09:11.498929 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-service-ca-operator, uid: f62224e7-87fc-4c57-b23f-4581de499fa2]" virtual=false 2025-08-13T20:09:11.499281652+00:00 stderr F I0813 20:09:11.499145 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-network-node-identity, uid: c5e796ee-668d-4610-a134-4468bf0cbdf2]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.499281652+00:00 stderr F I0813 20:09:11.499196 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-apiserver-operator, uid: 52965c25-9895-4c80-8901-655c30931a31]" virtual=false 
2025-08-13T20:09:11.530719793+00:00 stderr F I0813 20:09:11.530619 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-console, uid: eb0e1992-df9a-419d-b7f4-4d9b50e766e7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.530719793+00:00 stderr F I0813 20:09:11.530670 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-ovn-kubernetes, uid: 393084dd-9b88-439d-9c82-e5d9e9fc7290]" virtual=false 2025-08-13T20:09:11.532400661+00:00 stderr F I0813 20:09:11.531008 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-operators, uid: 7920babe-d7ab-4197-9546-c2d10b1040a1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.532400661+00:00 stderr F I0813 20:09:11.531054 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-console-operator, uid: 744177a1-0b8c-479f-9660-aa2ce2d1003b]" virtual=false 2025-08-13T20:09:11.540053941+00:00 stderr F I0813 20:09:11.539760 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-nutanix-infra, uid: 3e99d67f-8f0c-4b0e-b68c-e85c2944daf4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.540053941+00:00 stderr F I0813 20:09:11.539925 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-storage-version-migrator-operator, uid: 56366865-01d3-431a-9703-d295f389a658]" virtual=false 
2025-08-13T20:09:11.559864139+00:00 stderr F I0813 20:09:11.559719 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-console-user-settings, uid: a8cf2c15-bc3c-4cf4-9841-5e4727e35f10]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.559864139+00:00 stderr F I0813 20:09:11.559764 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-user-workload-monitoring, uid: 1d85aaa3-25a8-4cbd-88ce-765463dbeca9]" virtual=false 2025-08-13T20:09:11.562980258+00:00 stderr F I0813 20:09:11.561154 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cloud-network-config-controller, uid: 54f58fbf-f373-4947-8956-e1108a6bd97e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.562980258+00:00 stderr F I0813 20:09:11.561206 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-authentication-operator, uid: a94f410e-4561-4410-90f7-263645a79260]" virtual=false 2025-08-13T20:09:11.565924383+00:00 stderr F I0813 20:09:11.563281 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-scheduler-operator, uid: 6f0ce1c2-37a2-421e-91e4-7be791e0ea85]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.565924383+00:00 stderr F I0813 20:09:11.563318 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-cluster-samples-operator, uid: 
9b74ff82-99c7-48fa-852e-a3960e84fedd]" virtual=false 2025-08-13T20:09:11.577965448+00:00 stderr F I0813 20:09:11.575135 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-machine-approver, uid: 9bfd5222-f6af-4843-b92c-1a5e0e0faafe]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.577965448+00:00 stderr F I0813 20:09:11.575179 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-host-network, uid: b448caf8-673f-4b75-9f4c-85cbabc7ec6c]" virtual=false 2025-08-13T20:09:11.591330640+00:00 stderr F I0813 20:09:11.590238 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-machine-config-operator, uid: 74ba47b5-3bcc-42c0-9deb-e12d8da952ac]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.591330640+00:00 stderr F I0813 20:09:11.590311 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-image-registry, uid: b700e982-db6f-41d5-bf38-c2a82966abe8]" virtual=false 2025-08-13T20:09:11.607001829+00:00 stderr F I0813 20:09:11.605842 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-version, uid: 2a354368-bd77-40cc-ae1a-68449ece8efb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.607001829+00:00 stderr F I0813 20:09:11.605964 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-ingress-operator, uid: 
c9d5dc69-e308-44cd-ad51-e2b7cce2619e]" virtual=false 2025-08-13T20:09:11.640845670+00:00 stderr F I0813 20:09:11.640673 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-machine-api, uid: df6705c6-ec2d-4f74-a803-3f769fd28210]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.640887131+00:00 stderr F I0813 20:09:11.640845 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-operator-lifecycle-manager, uid: 45ab0ea7-17dd-4464-9f7e-9913e01318e2]" virtual=false 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641360 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-marketplace, uid: ca0ec70d-cbfe-4901-ae7a-150a9dfe5920]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641410 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-vsphere-infra, uid: 91f4e064-f5b2-4ace-91e6-e1732990ada8]" virtual=false 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641460 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-etcd-operator, uid: 7413916b-b5eb-4718-9e41-632b89d445af]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641539 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: 
openshift-apiserver-operator, uid: 52965c25-9895-4c80-8901-655c30931a31]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641560 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-network-diagnostics, uid: 267bb0cc-5a49-450c-a61e-a94080c18cf9]" virtual=false 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641695 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-service-ca-operator, uid: f62224e7-87fc-4c57-b23f-4581de499fa2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641712 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-config, uid: c38b4142-4b5d-445b-9db1-5d43b8323db9]" virtual=false 2025-08-13T20:09:11.642844667+00:00 stderr F I0813 20:09:11.641780 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-dns-operator, uid: b99894aa-45e2-4109-8ecc-66c171f7a6d8]" virtual=false 2025-08-13T20:09:11.664780876+00:00 stderr F I0813 20:09:11.664673 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-openstack-infra, uid: ea73e21f-aeeb-4b3d-9983-95bf78a60eea]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.664780876+00:00 stderr F I0813 20:09:11.664757 1 garbagecollector.go:549] "Processing item" item="[v1/Namespace, namespace: , name: openshift-kube-apiserver-operator, uid: 
778598fb-710e-443a-a27d-e077f62db555]" virtual=false 2025-08-13T20:09:11.671300193+00:00 stderr F I0813 20:09:11.670364 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cloud-platform-infra, uid: 2d3b34a1-0340-4ea4-b15a-1a0164234aa8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.671300193+00:00 stderr F I0813 20:09:11.670410 1 garbagecollector.go:549] "Processing item" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: ovn, uid: 5ca00511-43a3-491e-8d30-2c9b23d72bf1]" virtual=false 2025-08-13T20:09:11.671300193+00:00 stderr F I0813 20:09:11.670641 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-ovirt-infra, uid: 2d9c4601-cca2-4813-b9e6-2830b7097736]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.671300193+00:00 stderr F I0813 20:09:11.670670 1 garbagecollector.go:549] "Processing item" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: signer, uid: 17184c6c-10e6-40d8-9367-33d18445ef3e]" virtual=false 2025-08-13T20:09:11.673116295+00:00 stderr F I0813 20:09:11.671985 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-storage-operator, uid: 01a4f482-d92f-4128-87f6-1f1177ad4f4a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.673116295+00:00 stderr F I0813 20:09:11.672040 1 garbagecollector.go:549] "Processing item" 
item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-network-node-identity, name: network-node-identity, uid: c35a9635-b45c-47ac-af09-3ee2daf91305]" virtual=false 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699000 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-user-workload-monitoring, uid: 1d85aaa3-25a8-4cbd-88ce-765463dbeca9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699066 1 garbagecollector.go:549] "Processing item" item="[machine.openshift.io/v1beta1/MachineHealthCheck, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: da0c8169-ed84-4c23-a003-b6883fd55935]" virtual=false 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699325 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-host-network, uid: b448caf8-673f-4b75-9f4c-85cbabc7ec6c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699343 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-config-operator, name: openshift-config-operator, uid: 46cebc51-d29e-4081-9edb-d9f437810b86]" virtual=false 2025-08-13T20:09:11.699675736+00:00 stderr F I0813 20:09:11.699650 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kni-infra, uid: a8db5ce0-f088-4b83-8f52-bf8a94f40b94]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.699716667+00:00 stderr F I0813 20:09:11.699668 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-controller-manager, name: controller-manager, uid: b42ae171-8338-4274-922f-79cfacb9cfe9]" virtual=false
2025-08-13T20:09:11.702848717+00:00 stderr F I0813 20:09:11.700316 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-ovn-kubernetes, uid: 393084dd-9b88-439d-9c82-e5d9e9fc7290]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.702848717+00:00 stderr F I0813 20:09:11.700379 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-multus, name: multus-admission-controller, uid: 3d4e04fb-152e-4b67-b110-7b7edfa1a90a]" virtual=false
2025-08-13T20:09:11.705897565+00:00 stderr F I0813 20:09:11.705756 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-authentication-operator, uid: a94f410e-4561-4410-90f7-263645a79260]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.705897565+00:00 stderr F I0813 20:09:11.705863 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-ovn-kubernetes, name: ovnkube-control-plane, uid: 346798bd-68de-4941-ab69-8a5a56dd55f7]" virtual=false
2025-08-13T20:09:11.723875370+00:00 stderr F I0813 20:09:11.722541 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-console-operator, uid: 744177a1-0b8c-479f-9660-aa2ce2d1003b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.723875370+00:00 stderr F I0813 20:09:11.723168 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-route-controller-manager, name: route-controller-manager, uid: db9b5a0e-2471-4a94-bbe5-01c34efec882]" virtual=false
2025-08-13T20:09:11.762132647+00:00 stderr F I0813 20:09:11.762062 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-controller-manager, name: controller-manager, uid: b42ae171-8338-4274-922f-79cfacb9cfe9]"
2025-08-13T20:09:11.762162448+00:00 stderr F I0813 20:09:11.762129 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-authentication, name: oauth-openshift, uid: 5c77e036-b030-4587-8bd4-079bc5e84c22]" virtual=false
2025-08-13T20:09:11.771640870+00:00 stderr F I0813 20:09:11.768206 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-network-node-identity, name: network-node-identity, uid: c35a9635-b45c-47ac-af09-3ee2daf91305]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.771640870+00:00 stderr F I0813 20:09:11.768272 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 7ee99584-ec5a-490c-bc55-11ed3e60244a]" virtual=false
2025-08-13T20:09:11.771640870+00:00 stderr F I0813 20:09:11.771153 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-cluster-samples-operator, uid: 9b74ff82-99c7-48fa-852e-a3960e84fedd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.771640870+00:00 stderr F I0813 20:09:11.771186 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b5151a8e-7df7-4f3b-9ada-e1cfd0badda9]" virtual=false
2025-08-13T20:09:11.787451463+00:00 stderr F I0813 20:09:11.784705 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: ovn, uid: 5ca00511-43a3-491e-8d30-2c9b23d72bf1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.787451463+00:00 stderr F I0813 20:09:11.784743 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 4e10c137-983b-49c4-b977-7d19390e427b]" virtual=false
2025-08-13T20:09:11.787451463+00:00 stderr F I0813 20:09:11.785041 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-route-controller-manager, name: route-controller-manager, uid: db9b5a0e-2471-4a94-bbe5-01c34efec882]"
2025-08-13T20:09:11.787451463+00:00 stderr F I0813 20:09:11.785059 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-service-ca, name: service-ca, uid: 054eb633-29d2-4eec-90a7-1a83a0e386c1]" virtual=false
2025-08-13T20:09:11.800075395+00:00 stderr F I0813 20:09:11.798363 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-storage-version-migrator-operator, uid: 56366865-01d3-431a-9703-d295f389a658]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.800075395+00:00 stderr F I0813 20:09:11.798441 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: bf9e0c20-07cb-4537-b7f9-efae9f964f5e]" virtual=false
2025-08-13T20:09:11.800075395+00:00 stderr F I0813 20:09:11.798695 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-config-operator, uid: 45fc724c-871f-4abe-8f95-988ef4974157]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.800075395+00:00 stderr F I0813 20:09:11.798719 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-apiserver, name: apiserver, uid: a424780c-5ff8-49aa-b616-57c2d7958f81]" virtual=false
2025-08-13T20:09:11.802963598+00:00 stderr F I0813 20:09:11.800572 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-ingress-operator, uid: c9d5dc69-e308-44cd-ad51-e2b7cce2619e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.802963598+00:00 stderr F I0813 20:09:11.800629 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: image-registry, uid: ff5a6fbd-d479-457d-86ba-428162a82d5c]" virtual=false
2025-08-13T20:09:11.813057857+00:00 stderr F I0813 20:09:11.811035 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-network-diagnostics, uid: 267bb0cc-5a49-450c-a61e-a94080c18cf9]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.813057857+00:00 stderr F I0813 20:09:11.811086 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-ingress-operator, name: ingress-operator, uid: a575f0c7-77ce-41f4-a832-11dcd8374f9e]" virtual=false
2025-08-13T20:09:11.813057857+00:00 stderr F I0813 20:09:11.812320 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[network.operator.openshift.io/v1/OperatorPKI, namespace: openshift-ovn-kubernetes, name: signer, uid: 17184c6c-10e6-40d8-9367-33d18445ef3e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.813057857+00:00 stderr F I0813 20:09:11.812348 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-marketplace, name: marketplace-operator, uid: 0b54327c-0c40-46f3-a17b-0f07f095ccb7]" virtual=false
2025-08-13T20:09:11.836568521+00:00 stderr F I0813 20:09:11.836450 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-authentication, name: oauth-openshift, uid: 5c77e036-b030-4587-8bd4-079bc5e84c22]"
2025-08-13T20:09:11.836568521+00:00 stderr F I0813 20:09:11.836541 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e81203d-c202-48ae-b652-35b68d7e5586]" virtual=false
2025-08-13T20:09:11.836568521+00:00 stderr F I0813 20:09:11.836481 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-multus, name: multus-admission-controller, uid: 3d4e04fb-152e-4b67-b110-7b7edfa1a90a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.836703275+00:00 stderr F I0813 20:09:11.836599 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-conversion-webhook, uid: 4dae11c2-6acd-446b-b52c-67345d4c21ea]" virtual=false
2025-08-13T20:09:11.842381268+00:00 stderr F I0813 20:09:11.842199 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-service-ca, name: service-ca, uid: 054eb633-29d2-4eec-90a7-1a83a0e386c1]"
2025-08-13T20:09:11.842381268+00:00 stderr F I0813 20:09:11.842249 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-ingress, name: router-default, uid: 9ae4d312-7fc4-4344-ab7a-669da95f56bf]" virtual=false
2025-08-13T20:09:11.845851897+00:00 stderr F I0813 20:09:11.845357 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-operator-lifecycle-manager, uid: 45ab0ea7-17dd-4464-9f7e-9913e01318e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.845851897+00:00 stderr F I0813 20:09:11.845410 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: e8b2ce3d-9cd4-43a5-a8aa-e724fcbf369d]" virtual=false
2025-08-13T20:09:11.847981208+00:00 stderr F I0813 20:09:11.845932 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-config, uid: c38b4142-4b5d-445b-9db1-5d43b8323db9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.847981208+00:00 stderr F I0813 20:09:11.845999 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 8cb0f5f7-4dca-477c-8627-e6db485e4cb2]" virtual=false
2025-08-13T20:09:11.850861941+00:00 stderr F I0813 20:09:11.848977 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-vsphere-infra, uid: 91f4e064-f5b2-4ace-91e6-e1732990ada8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.850861941+00:00 stderr F I0813 20:09:11.849009 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: package-server-manager, uid: 3368f5bb-29da-4770-b432-4e5d1a8491a9]" virtual=false
2025-08-13T20:09:11.854377312+00:00 stderr F I0813 20:09:11.854323 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-ingress, name: router-default, uid: 9ae4d312-7fc4-4344-ab7a-669da95f56bf]"
2025-08-13T20:09:11.854398752+00:00 stderr F I0813 20:09:11.854378 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console, name: console, uid: acc4559a-2586-4482-947a-aae611d8d9f6]" virtual=false
2025-08-13T20:09:11.860222669+00:00 stderr F I0813 20:09:11.860191 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-apiserver, name: apiserver, uid: a424780c-5ff8-49aa-b616-57c2d7958f81]"
2025-08-13T20:09:11.860301602+00:00 stderr F I0813 20:09:11.860286 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: 59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0]" virtual=false
2025-08-13T20:09:11.860632361+00:00 stderr F I0813 20:09:11.860607 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-dns-operator, uid: b99894aa-45e2-4109-8ecc-66c171f7a6d8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.860687503+00:00 stderr F I0813 20:09:11.860673 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator, name: migrator, uid: 88da59ff-e1b0-4998-b48b-d9b2e9bee2ae]" virtual=false
2025-08-13T20:09:11.860985341+00:00 stderr F I0813 20:09:11.860959 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-ovn-kubernetes, name: ovnkube-control-plane, uid: 346798bd-68de-4941-ab69-8a5a56dd55f7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.861048233+00:00 stderr F I0813 20:09:11.861033 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 0b1d22b6-78ae-4d00-94ad-381755e08383]" virtual=false
2025-08-13T20:09:11.889986623+00:00 stderr F I0813 20:09:11.888111 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-kube-apiserver-operator, uid: 778598fb-710e-443a-a27d-e077f62db555]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.889986623+00:00 stderr F I0813 20:09:11.888234 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 485aecbc-d986-4290-a12b-2be6eccbc76c]" virtual=false
2025-08-13T20:09:11.889986623+00:00 stderr F I0813 20:09:11.888673 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator, name: migrator, uid: 88da59ff-e1b0-4998-b48b-d9b2e9bee2ae]"
2025-08-13T20:09:11.889986623+00:00 stderr F I0813 20:09:11.888694 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-oauth-apiserver, name: apiserver, uid: 8ac71ab9-8c3d-4c89-9962-205eed0149d5]" virtual=false
2025-08-13T20:09:11.891468375+00:00 stderr F I0813 20:09:11.891401 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 0b1d22b6-78ae-4d00-94ad-381755e08383]"
2025-08-13T20:09:11.891832736+00:00 stderr F I0813 20:09:11.891737 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: machine-api-operator, uid: 7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa]" virtual=false
2025-08-13T20:09:11.892158405+00:00 stderr F I0813 20:09:11.892118 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-config-operator, name: openshift-config-operator, uid: 46cebc51-d29e-4081-9edb-d9f437810b86]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.892280449+00:00 stderr F I0813 20:09:11.892258 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 4a08358e-6f9b-492b-9df8-8e54f40e2fb4]" virtual=false
2025-08-13T20:09:11.903685206+00:00 stderr F I0813 20:09:11.902848 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: bf9e0c20-07cb-4537-b7f9-efae9f964f5e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.903685206+00:00 stderr F I0813 20:09:11.902960 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: d75ff515-88a9-4644-8711-c99a391dcc77]" virtual=false
2025-08-13T20:09:11.903685206+00:00 stderr F I0813 20:09:11.903236 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machine.openshift.io/v1beta1/MachineHealthCheck, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: da0c8169-ed84-4c23-a003-b6883fd55935]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.903685206+00:00 stderr F I0813 20:09:11.903260 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console, name: downloads, uid: 03b2baf0-d10c-4001-94a6-800af015de08]" virtual=false
2025-08-13T20:09:11.903731527+00:00 stderr F I0813 20:09:11.903685 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: image-registry, uid: ff5a6fbd-d479-457d-86ba-428162a82d5c]"
2025-08-13T20:09:11.903731527+00:00 stderr F I0813 20:09:11.903710 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-dns-operator, name: dns-operator, uid: d7110071-d620-4ed4-b7e1-05c1c458b7f0]" virtual=false
2025-08-13T20:09:11.909882383+00:00 stderr F I0813 20:09:11.909254 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 4e10c137-983b-49c4-b977-7d19390e427b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.909882383+00:00 stderr F I0813 20:09:11.909310 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: f13e36c5-b283-4235-867d-e2ae26d7fa2a]" virtual=false
2025-08-13T20:09:11.922423143+00:00 stderr F I0813 20:09:11.922356 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: 7ee99584-ec5a-490c-bc55-11ed3e60244a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.922534976+00:00 stderr F I0813 20:09:11.922515 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-operator, uid: e977212b-5bb5-4096-9f11-353076a2ebeb]" virtual=false
2025-08-13T20:09:11.923117003+00:00 stderr F I0813 20:09:11.923036 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-marketplace, name: marketplace-operator, uid: 0b54327c-0c40-46f3-a17b-0f07f095ccb7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.923183885+00:00 stderr F I0813 20:09:11.923168 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 1685682f-c45b-43b7-8431-b19c7e8a7d30]" virtual=false
2025-08-13T20:09:11.923394271+00:00 stderr F I0813 20:09:11.923374 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/Deployment, namespace: openshift-oauth-apiserver, name: apiserver, uid: 8ac71ab9-8c3d-4c89-9962-205eed0149d5]"
2025-08-13T20:09:11.923445722+00:00 stderr F I0813 20:09:11.923432 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: e9ed1986-ebb6-4ce1-af63-63b3f002df9e]" virtual=false
2025-08-13T20:09:11.929533177+00:00 stderr F I0813 20:09:11.929488 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console, name: console, uid: acc4559a-2586-4482-947a-aae611d8d9f6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Console","name":"cluster","uid":"5f9f95ea-d66e-45cc-9aa2-ed289b62d92e","controller":true}]
2025-08-13T20:09:11.929614339+00:00 stderr F I0813 20:09:11.929599 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 8a9ccf98-e60f-4580-94d2-1560cf66cd74]" virtual=false
2025-08-13T20:09:11.942948991+00:00 stderr F I0813 20:09:11.941430 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b5151a8e-7df7-4f3b-9ada-e1cfd0badda9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.942948991+00:00 stderr F I0813 20:09:11.941514 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-network-diagnostics, name: network-check-source, uid: 5694fe8b-b5a5-4c14-bc2c-e30718ec8465]" virtual=false
2025-08-13T20:09:11.943400094+00:00 stderr F I0813 20:09:11.943370 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-conversion-webhook, uid: 4dae11c2-6acd-446b-b52c-67345d4c21ea]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.943472496+00:00 stderr F I0813 20:09:11.943458 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-network-operator, name: network-operator, uid: d09aa085-6368-4540-a8c1-4e4c3e9e7344]" virtual=false
2025-08-13T20:09:11.971524541+00:00 stderr F I0813 20:09:11.971460 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: package-server-manager, uid: 3368f5bb-29da-4770-b432-4e5d1a8491a9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.971697176+00:00 stderr F I0813 20:09:11.971675 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 945d64e1-c873-4e9d-b5ff-47904d2b347f]" virtual=false
2025-08-13T20:09:11.994898121+00:00 stderr F I0813 20:09:11.992669 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console, name: downloads, uid: 03b2baf0-d10c-4001-94a6-800af015de08]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Console","name":"cluster","uid":"5f9f95ea-d66e-45cc-9aa2-ed289b62d92e","controller":true}]
2025-08-13T20:09:11.994898121+00:00 stderr F I0813 20:09:11.992737 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-etcd-operator, name: etcd-operator, uid: fb798e33-6d4c-4082-a60c-594a9db7124a]" virtual=false
2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.995441 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 8cb0f5f7-4dca-477c-8627-e6db485e4cb2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.995501 1 garbagecollector.go:549] "Processing item" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: c7e0a213-d3b0-4220-bc12-3e9beb007a7b]" virtual=false
2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.995674 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-network-diagnostics, name: network-check-source, uid: 5694fe8b-b5a5-4c14-bc2c-e30718ec8465]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.995697 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: hostpath-provisioner, name: csi-hostpathplugin, uid: f3d8e73a-8b83-44c2-ac21-da847137bc76]" virtual=false
2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.996054 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e81203d-c202-48ae-b652-35b68d7e5586]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:11.997935278+00:00 stderr F I0813 20:09:11.996080 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ingress-operator, name: ingress-operator, uid: e3ef40dc-9c44-44df-ad9c-5a0bb6e10f9d]" virtual=false
2025-08-13T20:09:12.017887330+00:00 stderr F I0813 20:09:12.015311 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 4a08358e-6f9b-492b-9df8-8e54f40e2fb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.017887330+00:00 stderr F I0813 20:09:12.015397 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 4d34cc47-ddd7-4071-9c9d-e6b189052eff]" virtual=false
2025-08-13T20:09:12.041160677+00:00 stderr F I0813 20:09:12.041085 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: f13e36c5-b283-4235-867d-e2ae26d7fa2a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.041262830+00:00 stderr F I0813 20:09:12.041247 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-machine-approver, name: machineapprover-rules, uid: 5c4e26c1-e7a2-400a-8d52-1a4e61d81615]" virtual=false
2025-08-13T20:09:12.049357052+00:00 stderr F I0813 20:09:12.049267 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-ingress-operator, name: ingress-operator, uid: a575f0c7-77ce-41f4-a832-11dcd8374f9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.049535217+00:00 stderr F I0813 20:09:12.049515 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: 59f9d1a9-dda1-4c2c-8c2d-b99e720cbed0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.049640290+00:00 stderr F I0813 20:09:12.049621 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 8b301c8a-95e8-4010-b1dd-add84cec904e]" virtual=false
2025-08-13T20:09:12.049960139+00:00 stderr F I0813 20:09:12.049932 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: e8b2ce3d-9cd4-43a5-a8aa-e724fcbf369d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.050025021+00:00 stderr F I0813 20:09:12.050011 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: imagestreams-rules, uid: 3f104833-4f1e-4f16-8ada-d5643f802363]" virtual=false
2025-08-13T20:09:12.050276048+00:00 stderr F I0813 20:09:12.050206 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-machine-api, name: machine-api-operator, uid: 7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.050363321+00:00 stderr F I0813 20:09:12.050349 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 38913059-2644-4931-bd5e-a039fa76b712]" virtual=false
2025-08-13T20:09:12.050620128+00:00 stderr F I0813 20:09:12.050598 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 1685682f-c45b-43b7-8431-b19c7e8a7d30]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.050672680+00:00 stderr F I0813 20:09:12.050658 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-api, name: machine-api-operator-prometheus-rules, uid: d6bb2e2c-e1cd-49b4-96df-decb63e7b0fd]" virtual=false
2025-08-13T20:09:12.053485991+00:00 stderr F I0813 20:09:12.052006 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-multus, name: prometheus-k8s-rules, uid: bc5d3e92-6eee-46da-a973-4b25555734ea]" virtual=false
2025-08-13T20:09:12.054286883+00:00 stderr F I0813 20:09:12.054263 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 485aecbc-d986-4290-a12b-2be6eccbc76c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.054364546+00:00 stderr F I0813 20:09:12.054349 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-network-operator, name: openshift-network-operator-ipsec-rules, uid: 8250bdbe-c4d3-4856-aa6a-373951b82216]" virtual=false
2025-08-13T20:09:12.054559731+00:00 stderr F I0813 20:09:12.054539 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: e9ed1986-ebb6-4ce1-af63-63b3f002df9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.054612673+00:00 stderr F I0813 20:09:12.054599 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: networking-rules, uid: fc335c36-0c0a-48e7-a47f-5e72d0e62a18]" virtual=false
2025-08-13T20:09:12.054930162+00:00 stderr F I0813 20:09:12.054880 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-dns-operator, name: dns-operator, uid: d7110071-d620-4ed4-b7e1-05c1c458b7f0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.054994774+00:00 stderr F I0813 20:09:12.054979 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-samples-operator, name: samples-operator-alerts, uid: 5f30e0db-f607-4d3c-966c-6a44f8597ed1]" virtual=false
2025-08-13T20:09:12.060510762+00:00 stderr F I0813 20:09:12.060461 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 945d64e1-c873-4e9d-b5ff-47904d2b347f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.060623125+00:00 stderr F I0813 20:09:12.060605 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-operator-lifecycle-manager, name: olm-alert-rules, uid: 78f48107-8ef2-45e0-a5cb-4f3174faa9d9]" virtual=false
2025-08-13T20:09:12.061691196+00:00 stderr F I0813 20:09:12.061225 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: hostpath-provisioner, name: csi-hostpathplugin, uid: f3d8e73a-8b83-44c2-ac21-da847137bc76]"
2025-08-13T20:09:12.061756458+00:00 stderr F I0813 20:09:12.061741 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-etcd-operator, name: etcd-prometheus-rules, uid: bbe9d208-cdc7-420f-a03a-1d216ca0abb4]" virtual=false
2025-08-13T20:09:12.062305663+00:00 stderr F I0813 20:09:12.061474 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-console-operator, name: console-operator, uid: e977212b-5bb5-4096-9f11-353076a2ebeb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.062326874+00:00 stderr F I0813 20:09:12.062307 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-marketplace, name: marketplace-alert-rules, uid: 325f23ba-096f-41ad-8964-6af44a8de605]" virtual=false
2025-08-13T20:09:12.071225569+00:00 stderr F I0813 20:09:12.068469 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: d75ff515-88a9-4644-8711-c99a391dcc77]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.071225569+00:00 stderr F I0813 20:09:12.068538 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-console-operator, name: cluster-monitoring-prometheus-rules, uid: 8e077079-ee5d-41a1-abbe-a4efe5295b9a]" virtual=false
2025-08-13T20:09:12.071225569+00:00 stderr F I0813 20:09:12.070876 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 8a9ccf98-e60f-4580-94d2-1560cf66cd74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.071225569+00:00 stderr F I0813 20:09:12.070959 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: image-registry-rules, uid: e3d343bb-85c3-48e6-83f3-e0bff03e2610]" virtual=false
2025-08-13T20:09:12.071640041+00:00 stderr F I0813 20:09:12.071612 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ingress-operator, name: ingress-operator, uid: e3ef40dc-9c44-44df-ad9c-5a0bb6e10f9d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.072191877+00:00 stderr F I0813 20:09:12.072166 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 585c690d-0c74-4dd0-b081-e5dc02f16e88]" virtual=false
2025-08-13T20:09:12.076067098+00:00 stderr F I0813 20:09:12.076018 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 4d34cc47-ddd7-4071-9c9d-e6b189052eff]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.078967051+00:00 stderr F I0813 20:09:12.078882 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 54a74216-e0ff-4fdf-8ef9-dfd95ace8442]" virtual=false
2025-08-13T20:09:12.079216158+00:00 stderr F I0813 20:09:12.079084 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-etcd-operator, name: etcd-operator, uid: fb798e33-6d4c-4082-a60c-594a9db7124a]"
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.080136485+00:00 stderr F I0813 20:09:12.080010 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-operator-lifecycle-manager, name: packageserver, uid: c7e0a213-d3b0-4220-bc12-3e9beb007a7b]" owner=[{"apiVersion":"operators.coreos.com/v1alpha1","kind":"ClusterServiceVersion","name":"packageserver","uid":"0beab272-7637-4d44-b3aa-502dcafbc929","controller":false,"blockOwnerDeletion":false}] 2025-08-13T20:09:12.083709447+00:00 stderr F I0813 20:09:12.083590 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: networking-rules, uid: fc335c36-0c0a-48e7-a47f-5e72d0e62a18]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.097817612+00:00 stderr F I0813 20:09:12.097632 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver, name: kube-apiserver-performance-recording-rules, uid: 237c1b6b-f59d-4a3a-b960-8c415a10471e]" virtual=false 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.098549 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-multus, name: prometheus-k8s-rules, uid: bc5d3e92-6eee-46da-a973-4b25555734ea]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.098618 1 
garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: master-rules, uid: 999d77e7-76f0-4f53-8849-6b1b62585ead]" virtual=false 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.098877 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: imagestreams-rules, uid: 3f104833-4f1e-4f16-8ada-d5643f802363]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.098990 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-dns-operator, name: dns, uid: 43fd1807-ae6c-4bfd-9007-d4537c06cf0a]" virtual=false 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.099138 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 8b301c8a-95e8-4010-b1dd-add84cec904e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.099244372+00:00 stderr F I0813 20:09:12.099174 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 017f2b8e-38d4-4d07-b8aa-4bcdb3e002ed]" virtual=false 2025-08-13T20:09:12.099485639+00:00 stderr F I0813 20:09:12.099424 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-9, uid: 6ffaec76-c3c8-4997-ba38-69fd80011f84]" virtual=false 2025-08-13T20:09:12.099543011+00:00 
stderr F I0813 20:09:12.099509 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Namespace, namespace: , name: openshift-image-registry, uid: b700e982-db6f-41d5-bf38-c2a82966abe8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.099672715+00:00 stderr F I0813 20:09:12.099598 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-ca-bundle, uid: 974bd056-bb37-4a0d-a539-16df96c14ed2]" virtual=false 2025-08-13T20:09:12.099819039+00:00 stderr F I0813 20:09:12.099759 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-5, uid: fb832e88-85bd-49f2-bece-a38d2f50681d]" virtual=false 2025-08-13T20:09:12.102401363+00:00 stderr F I0813 20:09:12.102314 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-operator-lifecycle-manager, name: olm-alert-rules, uid: 78f48107-8ef2-45e0-a5cb-4f3174faa9d9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.102401363+00:00 stderr F I0813 20:09:12.102368 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-11, uid: e5243530-5a5f-4545-adfa-642dfa650103]" virtual=false 2025-08-13T20:09:12.102957679+00:00 stderr F I0813 20:09:12.102870 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 38913059-2644-4931-bd5e-a039fa76b712]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.102978670+00:00 stderr F I0813 20:09:12.102968 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-6, uid: ed3f3154-72c5-4095-b56f-856b65764f6d]" virtual=false 2025-08-13T20:09:12.107458998+00:00 stderr F I0813 20:09:12.107389 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/Deployment, namespace: openshift-network-operator, name: network-operator, uid: d09aa085-6368-4540-a8c1-4e4c3e9e7344]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.107492199+00:00 stderr F I0813 20:09:12.107454 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-8, uid: 8d893e05-24ef-4d69-8b21-c40f710bda8c]" virtual=false 2025-08-13T20:09:12.108032264+00:00 stderr F I0813 20:09:12.107973 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-api, name: machine-api-operator-prometheus-rules, uid: d6bb2e2c-e1cd-49b4-96df-decb63e7b0fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.108065565+00:00 stderr F I0813 20:09:12.108031 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-9, uid: 6ffaec76-c3c8-4997-ba38-69fd80011f84]" 2025-08-13T20:09:12.108065565+00:00 stderr F I0813 20:09:12.108050 1 garbagecollector.go:549] "Processing item" 
item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-1, uid: 04732c30-44b2-491a-9d6a-453885c75b2f]" virtual=false 2025-08-13T20:09:12.108075166+00:00 stderr F I0813 20:09:12.108066 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-3, uid: c8feb0fe-7b0f-4f04-8d98-24bf9b6e5cd2]" virtual=false 2025-08-13T20:09:12.108594791+00:00 stderr F I0813 20:09:12.108569 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-machine-approver, name: machineapprover-rules, uid: 5c4e26c1-e7a2-400a-8d52-1a4e61d81615]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.108654192+00:00 stderr F I0813 20:09:12.108639 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-7, uid: 4e7e679a-7f59-4947-b9e5-f995fc817b7a]" virtual=false 2025-08-13T20:09:12.109243969+00:00 stderr F I0813 20:09:12.109217 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-cluster-samples-operator, name: samples-operator-alerts, uid: 5f30e0db-f607-4d3c-966c-6a44f8597ed1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.109349772+00:00 stderr F I0813 20:09:12.108590 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-network-operator, name: openshift-network-operator-ipsec-rules, uid: 8250bdbe-c4d3-4856-aa6a-373951b82216]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.109379303+00:00 stderr F I0813 20:09:12.109293 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-10, uid: 79d2f65f-9a16-40bd-b8af-596e51945995]" virtual=false 2025-08-13T20:09:12.109548158+00:00 stderr F I0813 20:09:12.109372 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 17da81ae-ac8b-4941-aff1-3d2bf3f00608]" virtual=false 2025-08-13T20:09:12.116526798+00:00 stderr F I0813 20:09:12.116423 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-5, uid: fb832e88-85bd-49f2-bece-a38d2f50681d]" 2025-08-13T20:09:12.116526798+00:00 stderr F I0813 20:09:12.116503 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-6, uid: 75cd3034-678b-4464-b011-f96e3a76bfeb]" virtual=false 2025-08-13T20:09:12.116969681+00:00 stderr F I0813 20:09:12.116897 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-3, uid: c8feb0fe-7b0f-4f04-8d98-24bf9b6e5cd2]" 2025-08-13T20:09:12.116989171+00:00 stderr F I0813 20:09:12.116967 1 garbagecollector.go:549] "Processing item" item="[batch/v1/Job, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251905, uid: 7f4930dc-9c3e-449f-86b0-c6f776dc6141]" virtual=false 2025-08-13T20:09:12.117066763+00:00 stderr F I0813 20:09:12.117043 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: 
revision-status-8, uid: 8d893e05-24ef-4d69-8b21-c40f710bda8c]" 2025-08-13T20:09:12.117175677+00:00 stderr F I0813 20:09:12.117156 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-server, uid: acfc19b1-6ba7-451e-a76d-1490cb8ae35e]" virtual=false 2025-08-13T20:09:12.119927885+00:00 stderr F I0813 20:09:12.118382 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-10, uid: 79d2f65f-9a16-40bd-b8af-596e51945995]" 2025-08-13T20:09:12.119927885+00:00 stderr F I0813 20:09:12.118434 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus-additional-cni-plugins, uid: df2f05a5-7dea-496f-a19f-fd0927dffc2f]" virtual=false 2025-08-13T20:09:12.120191133+00:00 stderr F I0813 20:09:12.120167 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-1, uid: 04732c30-44b2-491a-9d6a-453885c75b2f]" 2025-08-13T20:09:12.120282096+00:00 stderr F I0813 20:09:12.120267 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-ovn-kubernetes, name: ovnkube-node, uid: 1482e709-7589-42fd-976a-3e8042ee895b]" virtual=false 2025-08-13T20:09:12.120491802+00:00 stderr F I0813 20:09:12.120469 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-11, uid: e5243530-5a5f-4545-adfa-642dfa650103]" 2025-08-13T20:09:12.120602845+00:00 stderr F I0813 20:09:12.120585 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: certified-operators, uid: 16d5fe82-aef0-4700-8b13-e78e71d2a10d]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F 
I0813 20:09:12.124429 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-6, uid: ed3f3154-72c5-4095-b56f-856b65764f6d]" 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.124502 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus, uid: caa46963-1770-45a0-9a3d-2e0c6249b258]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.124879 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-7, uid: 4e7e679a-7f59-4947-b9e5-f995fc817b7a]" 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.124929 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-11, uid: 9b60c938-b391-41f2-ba27-263b409f84ac]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125156 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-ovn-kubernetes, name: master-rules, uid: 999d77e7-76f0-4f53-8849-6b1b62585ead]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125180 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: node-resolver, uid: b85ec2e5-ac1c-43ad-9c87-35eb71cbd95f]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125318 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 
17da81ae-ac8b-4941-aff1-3d2bf3f00608]" 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125336 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125581 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-etcd-operator, name: etcd-prometheus-rules, uid: bbe9d208-cdc7-420f-a03a-1d216ca0abb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125599 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-8, uid: 160e9f84-5bea-4df5-a596-f02e78e90bcc]" virtual=false 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125743 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver, name: kube-apiserver-performance-recording-rules, uid: 237c1b6b-f59d-4a3a-b960-8c415a10471e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.126195175+00:00 stderr F I0813 20:09:12.125761 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-marketplace, uid: 6f259421-4edb-49d8-a6ce-aa41dfc64264]" virtual=false 2025-08-13T20:09:12.127499803+00:00 stderr F I0813 20:09:12.127446 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: 
revision-status-6, uid: 75cd3034-678b-4464-b011-f96e3a76bfeb]" 2025-08-13T20:09:12.127499803+00:00 stderr F I0813 20:09:12.127492 1 garbagecollector.go:549] "Processing item" item="[batch/v1/Job, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251920, uid: 11f6ecfd-32e5-4666-a477-30fd544386df]" virtual=false 2025-08-13T20:09:12.128876522+00:00 stderr F I0813 20:09:12.127733 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-console-operator, name: cluster-monitoring-prometheus-rules, uid: 8e077079-ee5d-41a1-abbe-a4efe5295b9a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.128876522+00:00 stderr F I0813 20:09:12.127820 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: dns-default, uid: bc037602-dc89-4ac4-9d39-64c4b2735a9f]" virtual=false 2025-08-13T20:09:12.129457119+00:00 stderr F I0813 20:09:12.129427 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-machine-config-operator, name: machine-config-server, uid: acfc19b1-6ba7-451e-a76d-1490cb8ae35e]" 2025-08-13T20:09:12.129578222+00:00 stderr F I0813 20:09:12.129523 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-dns-operator, name: dns, uid: 43fd1807-ae6c-4bfd-9007-d4537c06cf0a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.129631784+00:00 stderr F I0813 20:09:12.129591 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-diagnostics, name: 
network-check-target, uid: 91f9db4f-6cfc-4a79-8ef0-8cd59cb0f235]" virtual=false 2025-08-13T20:09:12.130128788+00:00 stderr F I0813 20:09:12.129973 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-marketplace, name: marketplace-alert-rules, uid: 325f23ba-096f-41ad-8964-6af44a8de605]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.130128788+00:00 stderr F I0813 20:09:12.130021 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-operator, name: iptables-alerter, uid: 65b9447d-47aa-4195-a11f-950ea16aeb71]" virtual=false 2025-08-13T20:09:12.130150349+00:00 stderr F I0813 20:09:12.129513 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: community-operators, uid: e583c58d-4569-4cab-9192-62c813516208]" virtual=false 2025-08-13T20:09:12.130355574+00:00 stderr F I0813 20:09:12.130273 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-ca-bundle, uid: 974bd056-bb37-4a0d-a539-16df96c14ed2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:12.130567431+00:00 stderr F I0813 20:09:12.130470 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-9, uid: 794bafc6-76b1-49e3-b649-3c4435ab156a]" virtual=false 2025-08-13T20:09:12.131152237+00:00 stderr F I0813 20:09:12.131126 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: 
openshift-machine-config-operator, name: machine-config-daemon, uid: 017f2b8e-38d4-4d07-b8aa-4bcdb3e002ed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.131214269+00:00 stderr F I0813 20:09:12.131199 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-network-node-identity, name: network-node-identity, uid: b28beb6b-9f0f-4fa2-90a3-324fe77364f6]" virtual=false 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.131665 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: certified-operators, uid: 16d5fe82-aef0-4700-8b13-e78e71d2a10d]" 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.131710 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: network-metrics-daemon, uid: dde018de-6a0a-400a-b753-bd5cd908ad9c]" virtual=false 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.132331 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 54a74216-e0ff-4fdf-8ef9-dfd95ace8442]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.132357 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-12, uid: a1e0f8b2-4421-4716-96ad-e0da033e5a6a]" virtual=false 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.132490 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, 
namespace: openshift-ingress-canary, name: ingress-canary, uid: b5512a08-cd29-46f9-9661-4c860338b2ca]" 2025-08-13T20:09:12.135594365+00:00 stderr F I0813 20:09:12.132510 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-7, uid: 280618b3-e4f5-445a-afe7-23ea06b14201]" virtual=false 2025-08-13T20:09:12.140217257+00:00 stderr F I0813 20:09:12.140124 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager, name: revision-status-11, uid: 9b60c938-b391-41f2-ba27-263b409f84ac]" 2025-08-13T20:09:12.140217257+00:00 stderr F I0813 20:09:12.140202 1 garbagecollector.go:549] "Processing item" item="[apps/v1/DaemonSet, namespace: openshift-image-registry, name: node-ca, uid: e04c9af2-e9b1-4c40-b757-270f8e53c17d]" virtual=false 2025-08-13T20:09:12.140333231+00:00 stderr F I0813 20:09:12.140272 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-image-registry, name: image-registry-rules, uid: e3d343bb-85c3-48e6-83f3-e0bff03e2610]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.140368022+00:00 stderr F I0813 20:09:12.140342 1 garbagecollector.go:549] "Processing item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-operators, uid: 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7]" virtual=false 2025-08-13T20:09:12.140540736+00:00 stderr F I0813 20:09:12.140471 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/PrometheusRule, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 585c690d-0c74-4dd0-b081-e5dc02f16e88]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.142763940+00:00 stderr F I0813 20:09:12.141077 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-operator-config, uid: 4f6ec328-7fca-45fe-9f6b-45e4903ea3e8]" virtual=false
2025-08-13T20:09:12.143075239+00:00 stderr F I0813 20:09:12.141132 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: community-operators, uid: e583c58d-4569-4cab-9192-62c813516208]"
2025-08-13T20:09:12.143192553+00:00 stderr F I0813 20:09:12.143172 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-1, uid: 7bc85b09-0648-4f4e-bbcd-9571d2655676]" virtual=false
2025-08-13T20:09:12.143303126+00:00 stderr F I0813 20:09:12.140679 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-9, uid: 794bafc6-76b1-49e3-b649-3c4435ab156a]"
2025-08-13T20:09:12.143333507+00:00 stderr F I0813 20:09:12.142878 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-ovn-kubernetes, name: ovnkube-node, uid: 1482e709-7589-42fd-976a-3e8042ee895b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.143430199+00:00 stderr F I0813 20:09:12.143372 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-8, uid: 5eec5212-9944-4d33-9069-6d85c7ad2c1a]" virtual=false
2025-08-13T20:09:12.143712207+00:00 stderr F I0813 20:09:12.143690 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-2, uid: a0e0a799-c6ef-45ab-9c7d-bceba4db7d81]" virtual=false
2025-08-13T20:09:12.143986035+00:00 stderr F I0813 20:09:12.140599 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-8, uid: 160e9f84-5bea-4df5-a596-f02e78e90bcc]"
2025-08-13T20:09:12.144078658+00:00 stderr F I0813 20:09:12.144061 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 62dbb159-afde-42ff-ae4d-f010b4e53152]" virtual=false
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148210 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-operators, uid: 9ba0c63a-ccef-4143-b195-48b1ad0b0bb7]"
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148284 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver, name: kube-apiserver, uid: f37115af-5343-4070-9655-42dcef4f4439]" virtual=false
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148522 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: dns-default, uid: bc037602-dc89-4ac4-9d39-64c4b2735a9f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"DNS","name":"default","uid":"8e7b8280-016f-4ceb-a792-fc5be2494468","controller":true}]
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148549 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: d26b8449-3866-4f5c-880c-7dab11423e72]" virtual=false
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148763 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-12, uid: a1e0f8b2-4421-4716-96ad-e0da033e5a6a]"
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148837 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator-ext-remediation, uid: b645a3b5-011d-4a2c-ac12-008057781b22]" virtual=false
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.148938 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-7, uid: 280618b3-e4f5-445a-afe7-23ea06b14201]"
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.149025 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: 2f481e44-8799-4869-ac92-2893c3d079ef]" virtual=false
2025-08-13T20:09:12.149299678+00:00 stderr F I0813 20:09:12.149252 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus-additional-cni-plugins, uid: df2f05a5-7dea-496f-a19f-fd0927dffc2f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.152116508+00:00 stderr F I0813 20:09:12.149954 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[batch/v1/Job, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251905, uid: 7f4930dc-9c3e-449f-86b0-c6f776dc6141]" owner=[{"apiVersion":"batch/v1","kind":"CronJob","name":"collect-profiles","uid":"946673ee-e5bd-418a-934e-c38198674faa","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.152259572+00:00 stderr F I0813 20:09:12.152238 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console-operator, name: console-operator, uid: 9a33b179-4950-456a-93cf-ba4741c91841]" virtual=false
2025-08-13T20:09:12.152353735+00:00 stderr F I0813 20:09:12.150233 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-operator, uid: e428423f-fee5-4fca-bdc8-f46a317d9cf7]" virtual=false
2025-08-13T20:09:12.152383816+00:00 stderr F I0813 20:09:12.150263 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[operators.coreos.com/v1alpha1/CatalogSource, namespace: openshift-marketplace, name: redhat-marketplace, uid: 6f259421-4edb-49d8-a6ce-aa41dfc64264]"
2025-08-13T20:09:12.152410617+00:00 stderr F I0813 20:09:12.150432 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apps/v1/DaemonSet, namespace: openshift-image-registry, name: node-ca, uid: e04c9af2-e9b1-4c40-b757-270f8e53c17d]"
2025-08-13T20:09:12.152436268+00:00 stderr F I0813 20:09:12.150476 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-dns, name: node-resolver, uid: b85ec2e5-ac1c-43ad-9c87-35eb71cbd95f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"DNS","name":"default","uid":"8e7b8280-016f-4ceb-a792-fc5be2494468","controller":true}]
2025-08-13T20:09:12.152518170+00:00 stderr F I0813 20:09:12.150517 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[batch/v1/Job, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251920, uid: 11f6ecfd-32e5-4666-a477-30fd544386df]" owner=[{"apiVersion":"batch/v1","kind":"CronJob","name":"collect-profiles","uid":"946673ee-e5bd-418a-934e-c38198674faa","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.152577312+00:00 stderr F I0813 20:09:12.150685 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-diagnostics, name: network-check-target, uid: 91f9db4f-6cfc-4a79-8ef0-8cd59cb0f235]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.152605922+00:00 stderr F I0813 20:09:12.150722 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-operator, name: iptables-alerter, uid: 65b9447d-47aa-4195-a11f-950ea16aeb71]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.152632433+00:00 stderr F I0813 20:09:12.152010 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: multus, uid: caa46963-1770-45a0-9a3d-2e0c6249b258]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.152751747+00:00 stderr F I0813 20:09:12.152675 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 43311039-ddd0-4dd0-b1ee-3a9fed17eab5]" virtual=false
2025-08-13T20:09:12.153314163+00:00 stderr F I0813 20:09:12.153198 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-multus, name: network-metrics-daemon, uid: dde018de-6a0a-400a-b753-bd5cd908ad9c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.153314163+00:00 stderr F I0813 20:09:12.153248 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-operator, name: network-operator, uid: 43b09388-8965-42ba-b082-58877deb0311]" virtual=false
2025-08-13T20:09:12.153599511+00:00 stderr F I0813 20:09:12.153500 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: 1723c4aa-59d2-43e4-8879-a34e49b01f7b]" virtual=false
2025-08-13T20:09:12.155172536+00:00 stderr F I0813 20:09:12.155128 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 4fe81432-467e-413e-ab32-82832525f054]" virtual=false
2025-08-13T20:09:12.157216495+00:00 stderr F I0813 20:09:12.157163 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry-operator, uid: ef1c491a-0721-49cc-98d6-1ed2478c49b0]" virtual=false
2025-08-13T20:09:12.159301594+00:00 stderr F I0813 20:09:12.159273 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ingress-operator, name: ingress-operator, uid: c53f85af-6f9e-4e5e-9242-aad83f5ea8c4]" virtual=false
2025-08-13T20:09:12.159539891+00:00 stderr F I0813 20:09:12.159480 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-2, uid: a0e0a799-c6ef-45ab-9c7d-bceba4db7d81]"
2025-08-13T20:09:12.159560312+00:00 stderr F I0813 20:09:12.159551 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-network, uid: 4d4aca4a-3397-4686-8f20-a7a4f752076f]" virtual=false
2025-08-13T20:09:12.159893161+00:00 stderr F I0813 20:09:12.159779 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-1, uid: 7bc85b09-0648-4f4e-bbcd-9571d2655676]"
2025-08-13T20:09:12.159893161+00:00 stderr F I0813 20:09:12.159872 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-oauth-apiserver, name: openshift-oauth-apiserver, uid: 041dc6d1-ce80-46ef-bbf1-6bcd4b2dd746]" virtual=false
2025-08-13T20:09:12.160164019+00:00 stderr F I0813 20:09:12.160109 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: 9e0ffff9-0f79-4998-9692-74f94eb0549f]" virtual=false
2025-08-13T20:09:12.160282903+00:00 stderr F I0813 20:09:12.160235 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: da943855-7a79-47eb-a200-afae1932cb74]" virtual=false
2025-08-13T20:09:12.160880920+00:00 stderr F I0813 20:09:12.160856 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-8, uid: 5eec5212-9944-4d33-9069-6d85c7ad2c1a]"
2025-08-13T20:09:12.161076945+00:00 stderr F I0813 20:09:12.161057 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-etcd-operator, name: etcd-operator, uid: 0e2435e0-96e3-49f2-beae-6c0c00ee7502]" virtual=false
2025-08-13T20:09:12.161314362+00:00 stderr F I0813 20:09:12.160926 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apps/v1/DaemonSet, namespace: openshift-network-node-identity, name: network-node-identity, uid: b28beb6b-9f0f-4fa2-90a3-324fe77364f6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.161412945+00:00 stderr F I0813 20:09:12.161393 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: fa0bd541-42b6-4bb8-9a92-d48d39d85d53]" virtual=false
2025-08-13T20:09:12.176955860+00:00 stderr F I0813 20:09:12.176871 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: 2f481e44-8799-4869-ac92-2893c3d079ef]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.177926438+00:00 stderr F I0813 20:09:12.177854 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-operator-config, uid: 4f6ec328-7fca-45fe-9f6b-45e4903ea3e8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.178037242+00:00 stderr F I0813 20:09:12.177152 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator-ext-remediation, uid: b645a3b5-011d-4a2c-ac12-008057781b22]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.178037242+00:00 stderr F I0813 20:09:12.177168 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: d26b8449-3866-4f5c-880c-7dab11423e72]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.178037242+00:00 stderr F I0813 20:09:12.177350 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: 1723c4aa-59d2-43e4-8879-a34e49b01f7b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.178037242+00:00 stderr F I0813 20:09:12.177379 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 4fe81432-467e-413e-ab32-82832525f054]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.178125084+00:00 stderr F I0813 20:09:12.177707 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver, name: kube-apiserver, uid: f37115af-5343-4070-9655-42dcef4f4439]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.178340290+00:00 stderr F I0813 20:09:12.178315 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-multus-admission-controller, uid: 02fdf30f-73db-47db-b112-d48ffcd81df7]" virtual=false
2025-08-13T20:09:12.178518585+00:00 stderr F I0813 20:09:12.178500 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-control-plane-metrics, uid: 3461a67f-3c88-4a19-b593-448a4dadfbeb]" virtual=false
2025-08-13T20:09:12.181700677+00:00 stderr F I0813 20:09:12.181587 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-node, uid: d8379b36-ded1-48dc-a90a-ce7085a877fa]" virtual=false
2025-08-13T20:09:12.181861941+00:00 stderr F I0813 20:09:12.181764 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-route-controller-manager, name: openshift-route-controller-manager, uid: b0f070d7-8193-440c-87af-d7fa06cb4cfb]" virtual=false
2025-08-13T20:09:12.182004805+00:00 stderr F I0813 20:09:12.181957 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 056bc944-7929-41ba-9874-afcf52028178]" virtual=false
2025-08-13T20:09:12.182129669+00:00 stderr F I0813 20:09:12.182066 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-controllers, uid: 5b2318f0-140f-4213-ac5a-43d99820c804]" virtual=false
2025-08-13T20:09:12.182449348+00:00 stderr F I0813 20:09:12.182425 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-marketplace, name: marketplace-operator, uid: 6fe0d0d5-6410-460e-b01a-be75fdd6daa0]" virtual=false
2025-08-13T20:09:12.192205018+00:00 stderr F I0813 20:09:12.192096 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-operator, uid: e428423f-fee5-4fca-bdc8-f46a317d9cf7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.192205018+00:00 stderr F I0813 20:09:12.192160 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: a28811b4-8593-4ba1-b2f8-ed5c450909cb]" virtual=false
2025-08-13T20:09:12.192485266+00:00 stderr F I0813 20:09:12.192441 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 62dbb159-afde-42ff-ae4d-f010b4e53152]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.192499226+00:00 stderr F I0813 20:09:12.192484 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver, uid: ffd6543e-3a51-45c8-85e5-0b2b8492c009]" virtual=false
2025-08-13T20:09:12.192868117+00:00 stderr F I0813 20:09:12.192841 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-network, uid: 4d4aca4a-3397-4686-8f20-a7a4f752076f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.193029261+00:00 stderr F I0813 20:09:12.192970 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e7e7195-0445-4e7a-b3aa-598d0e9c8ba2]" virtual=false
2025-08-13T20:09:12.193188676+00:00 stderr F I0813 20:09:12.192860 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ingress-operator, name: ingress-operator, uid: c53f85af-6f9e-4e5e-9242-aad83f5ea8c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.193203056+00:00 stderr F I0813 20:09:12.193192 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-diagnostics, name: network-check-source, uid: b9f78ce7-8b27-49d9-bffc-1525c0c249e4]" virtual=false
2025-08-13T20:09:12.193548976+00:00 stderr F I0813 20:09:12.193526 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry-operator, uid: ef1c491a-0721-49cc-98d6-1ed2478c49b0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.193615328+00:00 stderr F I0813 20:09:12.193599 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 0198e73b-3fef-466e-9a33-d8c461aa6d9b]" virtual=false
2025-08-13T20:09:12.193941588+00:00 stderr F I0813 20:09:12.193854 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: fa0bd541-42b6-4bb8-9a92-d48d39d85d53]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.193963218+00:00 stderr F I0813 20:09:12.193951 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 5b800138-aea7-4ac6-9bc1-cc0d305347d5]" virtual=false
2025-08-13T20:09:12.194042570+00:00 stderr F I0813 20:09:12.194024 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-oauth-apiserver, name: openshift-oauth-apiserver, uid: 041dc6d1-ce80-46ef-bbf1-6bcd4b2dd746]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.194093642+00:00 stderr F I0813 20:09:12.194079 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 4a80685f-4ae9-45b6-beda-e35c99fcc78c]" virtual=false
2025-08-13T20:09:12.194156014+00:00 stderr F I0813 20:09:12.194104 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: olm-operator, uid: 9e0ffff9-0f79-4998-9692-74f94eb0549f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.194156014+00:00 stderr F I0813 20:09:12.194126 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console-operator, name: console-operator, uid: 9a33b179-4950-456a-93cf-ba4741c91841]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.194169124+00:00 stderr F I0813 20:09:12.194156 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console, name: console, uid: c8428fe0-e84e-4be1-b578-d568de860a64]" virtual=false
2025-08-13T20:09:12.194290668+00:00 stderr F I0813 20:09:12.194243 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry, uid: d6eb5a24-ee3d-4c45-b434-d20993cdc039]" virtual=false
2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.194252 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-operator, name: network-operator, uid: 43b09388-8965-42ba-b082-58877deb0311]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.194378 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager, name: kube-controller-manager, uid: 55df2ef1-071b-46e2-a7a1-8773d1ba333e]" virtual=false
2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.194489 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-etcd-operator, name: etcd-operator, uid: 0e2435e0-96e3-49f2-beae-6c0c00ee7502]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.194509 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 3860e86d-4fc6-4c08-ba64-6157374888a3]" virtual=false
2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.195007 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 43311039-ddd0-4dd0-b1ee-3a9fed17eab5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.195496942+00:00 stderr F I0813 20:09:12.195045 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver-operator-check-endpoints, uid: 7c47d56a-8607-4a94-9c71-801ae5f904e6]" virtual=false
2025-08-13T20:09:12.196044708+00:00 stderr F I0813 20:09:12.196019 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: da943855-7a79-47eb-a200-afae1932cb74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.196108950+00:00 stderr F I0813 20:09:12.196094 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication, name: oauth-openshift, uid: e8391c3f-2734-4209-b27d-c640acd205de]" virtual=false
2025-08-13T20:09:12.208664920+00:00 stderr F I0813 20:09:12.208603 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-node, uid: d8379b36-ded1-48dc-a90a-ce7085a877fa]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.210105881+00:00 stderr F I0813 20:09:12.208749 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-machine-approver, name: cluster-machine-approver, uid: 2c495d08-0505-4d86-933e-6b3b35d469e8]" virtual=false
2025-08-13T20:09:12.210307237+00:00 stderr F I0813 20:09:12.210283 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-multus, name: monitor-multus-admission-controller, uid: 02fdf30f-73db-47db-b112-d48ffcd81df7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.210369729+00:00 stderr F I0813 20:09:12.210355 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-config-operator, name: config-operator, uid: ebb46164-d146-4396-be7d-4f239cfde7b4]" virtual=false
2025-08-13T20:09:12.216191035+00:00 stderr F I0813 20:09:12.216074 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-marketplace, name: marketplace-operator, uid: 6fe0d0d5-6410-460e-b01a-be75fdd6daa0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.216191035+00:00 stderr F I0813 20:09:12.216092 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-ovn-kubernetes, name: monitor-ovn-control-plane-metrics, uid: 3461a67f-3c88-4a19-b593-448a4dadfbeb]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.216191035+00:00 stderr F I0813 20:09:12.216132 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager, name: openshift-controller-manager, uid: c725187b-e573-4a6f-9e33-9bf5a61d60ba]" virtual=false
2025-08-13T20:09:12.216191035+00:00 stderr F I0813 20:09:12.216171 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-dns-operator, name: dns-operator, uid: 0e225770-846d-4267-827d-96a0e29db21c]" virtual=false
2025-08-13T20:09:12.216833614+00:00 stderr F I0813 20:09:12.216740 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-operator-lifecycle-manager, name: catalog-operator, uid: 0198e73b-3fef-466e-9a33-d8c461aa6d9b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.216946657+00:00 stderr F I0813 20:09:12.216893 1 garbagecollector.go:549] "Processing item" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler, name: kube-scheduler, uid: 71af2bbe-5bdb-4271-8e06-98a04e980e6f]" virtual=false
2025-08-13T20:09:12.217211575+00:00 stderr F I0813 20:09:12.217190 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: 056bc944-7929-41ba-9874-afcf52028178]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.217284797+00:00 stderr F I0813 20:09:12.217230 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-version, name: cluster-version-operator, uid: 4a80685f-4ae9-45b6-beda-e35c99fcc78c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.217297887+00:00 stderr F I0813 20:09:12.217282 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator, uid: e156be76-80be-4eff-8005-7c24938303ae]" virtual=false
2025-08-13T20:09:12.217334648+00:00 stderr F I0813 20:09:12.217320 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver, name: revision-status-1, uid: 7df87c7c-0eaf-4109-8b79-031081b1501b]" virtual=false
2025-08-13T20:09:12.217563315+00:00 stderr F I0813 20:09:12.217197 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: a28811b4-8593-4ba1-b2f8-ed5c450909cb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.217633817+00:00 stderr F I0813 20:09:12.217617 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-samples-operator, name: metrics, uid: 9d303729-12a5-43d3-a67f-d19bf704fa88]" virtual=false
2025-08-13T20:09:12.217772131+00:00 stderr F I0813 20:09:12.217549 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-network-diagnostics, name: network-check-source, uid: b9f78ce7-8b27-49d9-bffc-1525c0c249e4]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.218021748+00:00 stderr F I0813 20:09:12.217975 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-image-registry, name: image-registry, uid: d6eb5a24-ee3d-4c45-b434-d20993cdc039]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.218057489+00:00 stderr F I0813 20:09:12.218019 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ingress-operator, name: metrics, uid: 1e390522-c38e-4189-86b8-ad75c61e3844]" virtual=false
2025-08-13T20:09:12.218309466+00:00 stderr F I0813 20:09:12.218252 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ingress-canary, name: ingress-canary, uid: cd641ce4-6a02-4a0c-9222-6ab30b234450]" virtual=false
2025-08-13T20:09:12.218436530+00:00 stderr F I0813 20:09:12.217761 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-console, name: console, uid: c8428fe0-e84e-4be1-b578-d568de860a64]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.218487231+00:00 stderr F I0813 20:09:12.218473 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-machine-webhook, uid: 7dd2300f-f67e-4eb3-a3fa-1f22c230305a]" virtual=false
2025-08-13T20:09:12.220758446+00:00 stderr F I0813 20:09:12.219119 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-route-controller-manager, name: openshift-route-controller-manager, uid: b0f070d7-8193-440c-87af-d7fa06cb4cfb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.220758446+00:00 stderr F I0813 20:09:12.219254 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-service-ca-operator, name: metrics, uid: 030283b3-acfe-40ed-811c-d9f7f79607f6]" virtual=false
2025-08-13T20:09:12.223625759+00:00 stderr F I0813 20:09:12.223593 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: 5b800138-aea7-4ac6-9bc1-cc0d305347d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.223704391+00:00 stderr F I0813 20:09:12.223690 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-apiserver-operator, name: metrics, uid: ed79a864-3d59-456e-8a6c-724ec68e6d1b]" virtual=false
2025-08-13T20:09:12.223897986+00:00 stderr F I0813 20:09:12.223767 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver-operator-check-endpoints, uid: 7c47d56a-8607-4a94-9c71-801ae5f904e6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.223897986+00:00 stderr F I0813 20:09:12.223890 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-controller-manager-operator, name: metrics, uid: 136038f9-f376-4b0b-8c75-a42240d176cc]" virtual=false
2025-08-13T20:09:12.224086352+00:00 stderr F I0813 20:09:12.223641 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication-operator, name: authentication-operator, uid: 5e7e7195-0445-4e7a-b3aa-598d0e9c8ba2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.224147064+00:00 stderr F I0813 20:09:12.224131 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062]" virtual=false
2025-08-13T20:09:12.224251647+00:00 stderr F I0813 20:09:12.223716 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-controller-manager, name: kube-controller-manager, uid: 55df2ef1-071b-46e2-a7a1-8773d1ba333e]"
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.224328929+00:00 stderr F I0813 20:09:12.224286 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 422d22df-8c75-41f2-a2b3-636f0480766d]" virtual=false 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.225486 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-apiserver, name: openshift-apiserver, uid: ffd6543e-3a51-45c8-85e5-0b2b8492c009]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.225542 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-source, uid: e72217e4-ba2c-4242-af4f-6a9201f08001]" virtual=false 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.226890 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-apiserver, name: revision-status-1, uid: 7df87c7c-0eaf-4109-8b79-031081b1501b]" 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.226954 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-target, uid: 151fdab6-cca2-4880-a96c-48e605cc8d3d]" virtual=false 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.227110 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-authentication, name: oauth-openshift, uid: e8391c3f-2734-4209-b27d-c640acd205de]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.227991054+00:00 stderr F I0813 20:09:12.227128 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 7c42fd7c-0955-49c7-819c-4685e0681272]" virtual=false 2025-08-13T20:09:12.228750075+00:00 stderr F I0813 20:09:12.215968 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-machine-api, name: machine-api-controllers, uid: 5b2318f0-140f-4213-ac5a-43d99820c804]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.228766906+00:00 stderr F I0813 20:09:12.228754 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-console-operator, name: webhook, uid: 0bec6a60-3529-4fdb-81de-718ea6c4dae4]" virtual=false 2025-08-13T20:09:12.229110386+00:00 stderr F I0813 20:09:12.229047 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler-operator, name: kube-scheduler-operator, uid: 3860e86d-4fc6-4c08-ba64-6157374888a3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.229110386+00:00 stderr F I0813 20:09:12.229090 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b85c5397-4189-4029-b181-4e339da207b7]" virtual=false 2025-08-13T20:09:12.240537414+00:00 stderr F I0813 20:09:12.240469 1 garbagecollector.go:615] "item has at least one existing owner, 
will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-cluster-machine-approver, name: cluster-machine-approver, uid: 2c495d08-0505-4d86-933e-6b3b35d469e8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.240687398+00:00 stderr F I0813 20:09:12.240667 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-image-registry, name: image-registry-operator, uid: 1a925351-3238-40d4-a375-ce0c5a2b70a6]" virtual=false 2025-08-13T20:09:12.241118140+00:00 stderr F I0813 20:09:12.241091 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-controller-manager, name: openshift-controller-manager, uid: c725187b-e573-4a6f-9e33-9bf5a61d60ba]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.241193892+00:00 stderr F I0813 20:09:12.241176 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-controller-manager-operator, name: metrics, uid: 2f6bb711-85a4-408c-913a-54f006dcf2e9]" virtual=false 2025-08-13T20:09:12.241581703+00:00 stderr F I0813 20:09:12.241555 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-samples-operator, name: metrics, uid: 9d303729-12a5-43d3-a67f-d19bf704fa88]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.241669886+00:00 stderr F I0813 20:09:12.241651 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-network-operator, name: metrics, uid: 
f8e1888b-9575-400b-83eb-6574023cf8d5]" virtual=false 2025-08-13T20:09:12.242099178+00:00 stderr F I0813 20:09:12.242070 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator, uid: e156be76-80be-4eff-8005-7c24938303ae]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.242211792+00:00 stderr F I0813 20:09:12.242191 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-apiserver-operator, name: metrics, uid: 4c2fba48-c67e-4420-9529-0bb456da4341]" virtual=false 2025-08-13T20:09:12.242457019+00:00 stderr F I0813 20:09:12.240588 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-dns-operator, name: dns-operator, uid: 0e225770-846d-4267-827d-96a0e29db21c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.242753617+00:00 stderr F I0813 20:09:12.242730 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-scheduler-operator, name: metrics, uid: 080e1aaf-7269-495b-ab74-593efe4192ec]" virtual=false 2025-08-13T20:09:12.248399919+00:00 stderr F I0813 20:09:12.248316 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-config-operator, name: config-operator, uid: ebb46164-d146-4396-be7d-4f239cfde7b4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.248445300+00:00 stderr F I0813 20:09:12.248396 1 
garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-controllers, uid: 6a75af62-23dd-4080-8ef6-00c8bb47e103]" virtual=false 2025-08-13T20:09:12.249083479+00:00 stderr F I0813 20:09:12.249057 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-source, uid: e72217e4-ba2c-4242-af4f-6a9201f08001]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.249158521+00:00 stderr F I0813 20:09:12.249141 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator, uid: ef047d6e-c72f-4309-a95e-08fb0ed08662]" virtual=false 2025-08-13T20:09:12.249591473+00:00 stderr F I0813 20:09:12.249464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-diagnostics, name: network-check-target, uid: 151fdab6-cca2-4880-a96c-48e605cc8d3d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.249591473+00:00 stderr F I0813 20:09:12.249531 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-authentication-operator, name: metrics, uid: 20ebd9ba-71d4-4753-8707-d87939791a19]" virtual=false 2025-08-13T20:09:12.249687706+00:00 stderr F I0813 20:09:12.249665 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[monitoring.coreos.com/v1/ServiceMonitor, namespace: openshift-kube-scheduler, name: kube-scheduler, uid: 71af2bbe-5bdb-4271-8e06-98a04e980e6f]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.249750918+00:00 stderr F I0813 20:09:12.249734 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-dns-operator, name: metrics, uid: c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2]" virtual=false 2025-08-13T20:09:12.250150769+00:00 stderr F I0813 20:09:12.250125 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-apiserver-operator, name: metrics, uid: ed79a864-3d59-456e-8a6c-724ec68e6d1b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.250220171+00:00 stderr F I0813 20:09:12.250203 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-console-operator, name: metrics, uid: 793d323e-de30-470a-af76-520af7b2dad8]" virtual=false 2025-08-13T20:09:12.250331984+00:00 stderr F I0813 20:09:12.250219 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ingress-operator, name: metrics, uid: 1e390522-c38e-4189-86b8-ad75c61e3844]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.250386216+00:00 stderr F I0813 20:09:12.250371 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 3988dee0-bf3f-4018-92f4-25a33c418712]" virtual=false 2025-08-13T20:09:12.250673734+00:00 stderr F I0813 20:09:12.250614 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-service-ca-operator, name: metrics, uid: 
030283b3-acfe-40ed-811c-d9f7f79607f6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.250673734+00:00 stderr F I0813 20:09:12.250666 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: f7c88755-f4e4-4193-91c1-94a9f4727169]" virtual=false 2025-08-13T20:09:12.250761387+00:00 stderr F I0813 20:09:12.250741 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 7c42fd7c-0955-49c7-819c-4685e0681272]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.250876190+00:00 stderr F I0813 20:09:12.250854 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-multus, name: multus-admission-controller, uid: 35568373-18ec-4ba2-8d18-12de10aa5a3f]" virtual=false 2025-08-13T20:09:12.251012814+00:00 stderr F I0813 20:09:12.250955 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 422d22df-8c75-41f2-a2b3-636f0480766d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.251098046+00:00 stderr F I0813 20:09:12.250874 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: c1b7d52c-5c8b-4bd1-8ac9-b4a3af1e9062]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.251133857+00:00 stderr F I0813 20:09:12.250766 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ingress-canary, name: ingress-canary, uid: cd641ce4-6a02-4a0c-9222-6ab30b234450]" owner=[{"apiVersion":"apps/v1","kind":"daemonset","name":"ingress-canary","uid":"b5512a08-cd29-46f9-9661-4c860338b2ca","controller":true}] 2025-08-13T20:09:12.251257711+00:00 stderr F I0813 20:09:12.251203 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-metrics, uid: 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72]" virtual=false 2025-08-13T20:09:12.251440246+00:00 stderr F I0813 20:09:12.251394 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-config-operator, name: metrics, uid: f04ada1b-55ad-45a3-9231-6d1ff7242fa0]" virtual=false 2025-08-13T20:09:12.251581170+00:00 stderr F I0813 20:09:12.251502 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-etcd-operator, name: metrics, uid: 470dd1a6-5645-4282-97e4-ebd3fef4caae]" virtual=false 2025-08-13T20:09:12.251702794+00:00 stderr F I0813 20:09:12.251649 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-controller-manager-operator, name: metrics, uid: 136038f9-f376-4b0b-8c75-a42240d176cc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.251718564+00:00 stderr F I0813 20:09:12.251700 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-kube-storage-version-migrator-operator, name: metrics, uid: 
3e887cd0-b481-460c-b943-d944dc64df2f]" virtual=false 2025-08-13T20:09:12.252037803+00:00 stderr F I0813 20:09:12.251978 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-version, name: cluster-version-operator, uid: b85c5397-4189-4029-b181-4e339da207b7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.252037803+00:00 stderr F I0813 20:09:12.252008 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-multus, name: network-metrics-service, uid: d573fe5e-beee-404b-9f93-51f361ad8682]" virtual=false 2025-08-13T20:09:12.266836177+00:00 stderr F I0813 20:09:12.266683 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-image-registry, name: image-registry-operator, uid: 1a925351-3238-40d4-a375-ce0c5a2b70a6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.266836177+00:00 stderr F I0813 20:09:12.266752 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-apiserver, name: check-endpoints, uid: 435aa879-8965-459a-9b2a-dfd8f8924b3a]" virtual=false 2025-08-13T20:09:12.269862194+00:00 stderr F I0813 20:09:12.267722 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-machine-webhook, uid: 7dd2300f-f67e-4eb3-a3fa-1f22c230305a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.269862194+00:00 stderr F I0813 20:09:12.267777 1 garbagecollector.go:549] "Processing item" 
item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 3ff83f1a-4058-4b9e-a4fd-83f51836c82e]" virtual=false 2025-08-13T20:09:12.269862194+00:00 stderr F I0813 20:09:12.269128 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-network-operator, name: metrics, uid: f8e1888b-9575-400b-83eb-6574023cf8d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.269862194+00:00 stderr F I0813 20:09:12.269153 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: bddcb8c2-0f2d-4efa-a0ec-3e0648c24386]" virtual=false 2025-08-13T20:09:12.275197617+00:00 stderr F I0813 20:09:12.275109 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-multus, name: multus-admission-controller, uid: 35568373-18ec-4ba2-8d18-12de10aa5a3f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.275197617+00:00 stderr F I0813 20:09:12.275171 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-marketplace, name: marketplace-operator-metrics, uid: 1bfd7637-f88e-403e-8d75-c71b380fc127]" virtual=false 2025-08-13T20:09:12.275375472+00:00 stderr F I0813 20:09:12.275237 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-console-operator, name: webhook, uid: 0bec6a60-3529-4fdb-81de-718ea6c4dae4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-08-13T20:09:12.275375472+00:00 stderr F I0813 20:09:12.275290 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-webhook, uid: 128263d4-d278-44f6-9ae4-9e9ecc572513]" virtual=false 2025-08-13T20:09:12.275645460+00:00 stderr F I0813 20:09:12.275588 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-storage-version-migrator-operator, name: metrics, uid: 3e887cd0-b481-460c-b943-d944dc64df2f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.275725782+00:00 stderr F I0813 20:09:12.275686 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 355a1056-7d77-4a52-a1f5-8eb39c13574e]" virtual=false 2025-08-13T20:09:12.276086313+00:00 stderr F I0813 20:09:12.276002 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-controller-manager-operator, name: metrics, uid: 2f6bb711-85a4-408c-913a-54f006dcf2e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.276086313+00:00 stderr F I0813 20:09:12.276046 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: olm-operator-metrics, uid: f54a9b6f-c334-4276-9ca3-b290325fd276]" virtual=false 2025-08-13T20:09:12.276288369+00:00 stderr F I0813 20:09:12.276222 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-apiserver-operator, name: metrics, uid: 4c2fba48-c67e-4420-9529-0bb456da4341]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.276288369+00:00 stderr F I0813 20:09:12.276276 1 garbagecollector.go:549] "Processing item" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: ae547e8e-2a0a-43b3-8358-80f1e40dfde9]" virtual=false 2025-08-13T20:09:12.284424682+00:00 stderr F I0813 20:09:12.284287 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-multus, name: network-metrics-service, uid: d573fe5e-beee-404b-9f93-51f361ad8682]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.284424682+00:00 stderr F I0813 20:09:12.284361 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-iptables-alerter, uid: 827450c1-83c3-45d0-9aa5-fbb86a2ae6a5]" virtual=false 2025-08-13T20:09:12.284479383+00:00 stderr F I0813 20:09:12.284447 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-authentication-operator, name: metrics, uid: 20ebd9ba-71d4-4753-8707-d87939791a19]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.284479383+00:00 stderr F I0813 20:09:12.284471 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-node-limited, uid: f845abac-4b6d-49f7-ad14-ea5802490663]" virtual=false 2025-08-13T20:09:12.284707390+00:00 stderr F I0813 20:09:12.284607 1 garbagecollector.go:615] "item has at least one existing owner, will 
not garbage collect" item="[v1/Service, namespace: openshift-dns-operator, name: metrics, uid: c5ee1e81-63ee-4ea0-9d8f-d24cd624c3f2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.284707390+00:00 stderr F I0813 20:09:12.284662 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount-anyuid, uid: a16549a7-7bfd-45d7-b114-aea9597226a2]" virtual=false 2025-08-13T20:09:12.284960887+00:00 stderr F I0813 20:09:12.284851 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-kube-scheduler-operator, name: metrics, uid: 080e1aaf-7269-495b-ab74-593efe4192ec]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.284960887+00:00 stderr F I0813 20:09:12.284925 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-view, uid: 764fb58b-036b-45ae-97f9-281416353daf]" virtual=false 2025-08-13T20:09:12.285163303+00:00 stderr F I0813 20:09:12.285098 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: catalog-operator-metrics, uid: 6766edb6-ebfb-4434-a0f6-d2bb95b7aa72]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.285163303+00:00 stderr F I0813 20:09:12.285148 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-ancillary-tools, uid: 6b5b3d0e-daa2-4431-9b3a-17f2ad98c6cd]" virtual=false 
2025-08-13T20:09:12.285247625+00:00 stderr F I0813 20:09:12.284660 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-etcd-operator, name: metrics, uid: 470dd1a6-5645-4282-97e4-ebd3fef4caae]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.285326448+00:00 stderr F I0813 20:09:12.285252 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator-namespaced, uid: 64176e20-57cd-450a-82b8-c734edcf2055]" virtual=false
2025-08-13T20:09:12.285343518+00:00 stderr F I0813 20:09:12.285329 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator, uid: ef047d6e-c72f-4309-a95e-08fb0ed08662]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.285381289+00:00 stderr F I0813 20:09:12.285350 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-edit, uid: 945e6f8b-1604-4556-a7bf-195fa62d6c14]" virtual=false
2025-08-13T20:09:12.285605626+00:00 stderr F I0813 20:09:12.285545 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-controllers, uid: 6a75af62-23dd-4080-8ef6-00c8bb47e103]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.285605626+00:00 stderr F I0813 20:09:12.285594 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator:cluster-reader, uid: 0632bc2e-8683-447f-866f-dc2c3f20dbaa]" virtual=false
2025-08-13T20:09:12.285620386+00:00 stderr F I0813 20:09:12.285602 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-config-operator, name: metrics, uid: f04ada1b-55ad-45a3-9231-6d1ff7242fa0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.285629906+00:00 stderr F I0813 20:09:12.285622 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-operator, uid: dc1b4ca4-3dce-4e4f-b6d5-d095e346f78d]" virtual=false
2025-08-13T20:09:12.286560013+00:00 stderr F I0813 20:09:12.286478 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-apiserver, name: check-endpoints, uid: 435aa879-8965-459a-9b2a-dfd8f8924b3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.286560013+00:00 stderr F I0813 20:09:12.286527 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-proxy-reader, uid: 4f462e59-f589-4aa7-8140-854302a78457]" virtual=false
2025-08-13T20:09:12.286871272+00:00 stderr F I0813 20:09:12.286741 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 3988dee0-bf3f-4018-92f4-25a33c418712]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.287074038+00:00 stderr F I0813 20:09:12.286935 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-admission-controller-webhook, uid: 8512c398-7b2e-4b14-9cdd-dfe72f813153]" virtual=false
2025-08-13T20:09:12.287955653+00:00 stderr F I0813 20:09:12.287194 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-controller, uid: 3ff83f1a-4058-4b9e-a4fd-83f51836c82e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.287955653+00:00 stderr F I0813 20:09:12.287244 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator:cluster-reader, uid: b6fdcbdf-86e5-40c5-9a3a-5dc05d389718]" virtual=false
2025-08-13T20:09:12.293719958+00:00 stderr F I0813 20:09:12.293056 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: f7c88755-f4e4-4193-91c1-94a9f4727169]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.293719958+00:00 stderr F I0813 20:09:12.293155 1 garbagecollector.go:549] "Processing item" item="[apiregistration.k8s.io/v1/APIService, namespace: , name: v1.packages.operators.coreos.com, uid: 16956e05-669a-486b-95ff-66e13a972b59]" virtual=false
2025-08-13T20:09:12.301522992+00:00 stderr F I0813 20:09:12.297237 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-console-operator, name: metrics, uid: 793d323e-de30-470a-af76-520af7b2dad8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.301522992+00:00 stderr F I0813 20:09:12.297307 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: whereabouts-cni, uid: 88054557-80c4-48d2-a55d-ab10752a9270]" virtual=false
2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.310588 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-admission-controller-webhook, uid: 8512c398-7b2e-4b14-9cdd-dfe72f813153]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.310661 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot-v2, uid: 6efcc9e9-c5e4-4315-940a-636bac274a19]" virtual=false
2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.311587 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-daemon, uid: bddcb8c2-0f2d-4efa-a0ec-3e0648c24386]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.311615 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: control-plane-machine-set-operator, uid: a39c14db-78b4-4b58-8651-479169195296]" virtual=false
2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.311965 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-node-limited, uid: f845abac-4b6d-49f7-ad14-ea5802490663]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.312244339+00:00 stderr F I0813 20:09:12.311994 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostaccess, uid: ae10afa1-d594-42ec-9bf7-626463cb1630]" virtual=false
2025-08-13T20:09:12.312497737+00:00 stderr F I0813 20:09:12.312423 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus-ancillary-tools, uid: 6b5b3d0e-daa2-4431-9b3a-17f2ad98c6cd]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.312497737+00:00 stderr F I0813 20:09:12.312482 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: fab3e514-6318-4108-8ef6-378758fbbc7e]" virtual=false
2025-08-13T20:09:12.312830996+00:00 stderr F I0813 20:09:12.312725 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-marketplace, name: marketplace-operator-metrics, uid: 1bfd7637-f88e-403e-8d75-c71b380fc127]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.312961070+00:00 stderr F I0813 20:09:12.312835 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted-v2, uid: 32267ecd-433d-41f4-a3f0-a0b5b8f32162]" virtual=false
2025-08-13T20:09:12.313132905+00:00 stderr F I0813 20:09:12.313098 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-monitoring-operator-namespaced, uid: 64176e20-57cd-450a-82b8-c734edcf2055]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.313132905+00:00 stderr F I0813 20:09:12.313125 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator, uid: 71adaeb8-9332-4dc7-b8b2-52415e589919]" virtual=false
2025-08-13T20:09:12.313529186+00:00 stderr F I0813 20:09:12.313444 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: olm-operator-metrics, uid: f54a9b6f-c334-4276-9ca3-b290325fd276]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.313680291+00:00 stderr F I0813 20:09:12.313662 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-config-operator:cluster-reader, uid: 3162d1be-8f00-4957-8db6-a2b1361aaedf]" virtual=false
2025-08-13T20:09:12.313748242+00:00 stderr F I0813 20:09:12.313512 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 355a1056-7d77-4a52-a1f5-8eb39c13574e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.313867126+00:00 stderr F I0813 20:09:12.313760 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation-aggregation, uid: 726262c4-2098-4379-a985-90474531ff53]" virtual=false
2025-08-13T20:09:12.314473653+00:00 stderr F I0813 20:09:12.313558 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[apiregistration.k8s.io/v1/APIService, namespace: , name: v1.packages.operators.coreos.com, uid: 16956e05-669a-486b-95ff-66e13a972b59]"
2025-08-13T20:09:12.314575056+00:00 stderr F I0813 20:09:12.314488 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler, uid: 329dfd1b-be44-4c8b-b72d-d32f8fab8705]" virtual=false
2025-08-13T20:09:12.320078794+00:00 stderr F I0813 20:09:12.320053 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount-anyuid, uid: a16549a7-7bfd-45d7-b114-aea9597226a2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.320142526+00:00 stderr F I0813 20:09:12.320128 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:controller:operator-lifecycle-manager, uid: 2b5865fd-ac96-41b7-96bc-48cce5469705]" virtual=false
2025-08-13T20:09:12.331265975+00:00 stderr F I0813 20:09:12.331199 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator:cluster-reader, uid: 0632bc2e-8683-447f-866f-dc2c3f20dbaa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.331377248+00:00 stderr F I0813 20:09:12.331361 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator, uid: 99f02060-bf1c-484c-8c17-2b1243086e3f]" virtual=false
2025-08-13T20:09:12.332702606+00:00 stderr F I0813 20:09:12.332677 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-operator-lifecycle-manager, name: package-server-manager-metrics, uid: ae547e8e-2a0a-43b3-8358-80f1e40dfde9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.332766778+00:00 stderr F I0813 20:09:12.332752 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork, uid: 7fa3d83a-0d76-405a-a7b9-190741931396]" virtual=false
2025-08-13T20:09:12.342028193+00:00 stderr F I0813 20:09:12.341973 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-proxy-reader, uid: 4f462e59-f589-4aa7-8140-854302a78457]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.342183748+00:00 stderr F I0813 20:09:12.342168 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork-v2, uid: ad07fee7-e5d1-4a09-8086-698133feb11a]" virtual=false
2025-08-13T20:09:12.349950290+00:00 stderr F I0813 20:09:12.349734 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Service, namespace: openshift-machine-api, name: machine-api-operator-webhook, uid: 128263d4-d278-44f6-9ae4-9e9ecc572513]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.349950290+00:00 stderr F I0813 20:09:12.349851 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: net-attach-def-project, uid: 7c380190-b6b7-4bba-8d3d-6bda6c81bc8e]" virtual=false
2025-08-13T20:09:12.350963259+00:00 stderr F I0813 20:09:12.350931 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: whereabouts-cni, uid: 88054557-80c4-48d2-a55d-ab10752a9270]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.351649619+00:00 stderr F I0813 20:09:12.351627 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-diagnostics, uid: 6e739f62-fe8b-4f65-8df1-a50711c9b496]" virtual=false
2025-08-13T20:09:12.351864935+00:00 stderr F I0813 20:09:12.351084 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-iptables-alerter, uid: 827450c1-83c3-45d0-9aa5-fbb86a2ae6a5]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.351967908+00:00 stderr F I0813 20:09:12.351945 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-dns-operator, uid: d1456a44-d6b7-4418-911a-f4bbaf6427c4]" virtual=false
2025-08-13T20:09:12.352274527+00:00 stderr F I0813 20:09:12.351370 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-operator, uid: dc1b4ca4-3dce-4e4f-b6d5-d095e346f78d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.352330339+00:00 stderr F I0813 20:09:12.352316 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console, uid: 8c434f71-b240-42c1-88a4-a6fbc903b388]" virtual=false
2025-08-13T20:09:12.352467823+00:00 stderr F I0813 20:09:12.351407 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-view, uid: 764fb58b-036b-45ae-97f9-281416353daf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.352522534+00:00 stderr F I0813 20:09:12.352506 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: helm-chartrepos-viewer, uid: d3d3a0df-2feb-4db3-b436-5b73b4c151eb]" virtual=false
2025-08-13T20:09:12.356563010+00:00 stderr F I0813 20:09:12.356532 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: aggregate-olm-edit, uid: 945e6f8b-1604-4556-a7bf-195fa62d6c14]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.356750355+00:00 stderr F I0813 20:09:12.356732 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator, uid: dbe391a0-135a-4375-82db-699a0d88ce8e]" virtual=false
2025-08-13T20:09:12.357100215+00:00 stderr F I0813 20:09:12.357078 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator:cluster-reader, uid: b6fdcbdf-86e5-40c5-9a3a-5dc05d389718]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.357158247+00:00 stderr F I0813 20:09:12.357144 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: operatorhub-config-reader, uid: e2651db3-cf40-4b1c-a7da-ae197e936593]" virtual=false
2025-08-13T20:09:12.364557999+00:00 stderr F I0813 20:09:12.364447 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: fab3e514-6318-4108-8ef6-378758fbbc7e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.364761565+00:00 stderr F I0813 20:09:12.364698 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:controller:machine-approver, uid: 672519e3-9ef9-4c9e-a091-7c4b0d3de1c4]" virtual=false
2025-08-13T20:09:12.372323552+00:00 stderr F I0813 20:09:12.372201 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-operator, uid: 71adaeb8-9332-4dc7-b8b2-52415e589919]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.372548008+00:00 stderr F I0813 20:09:12.372528 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot, uid: b76bf753-d87c-45a9-91db-eea6f389f4d5]" virtual=false
2025-08-13T20:09:12.374115013+00:00 stderr F I0813 20:09:12.374089 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler, uid: 329dfd1b-be44-4c8b-b72d-d32f8fab8705]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.374261918+00:00 stderr F I0813 20:09:12.374244 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:anyuid, uid: 74815ef0-3c58-4271-9c93-5c834b5a10e5]" virtual=false
2025-08-13T20:09:12.375837833+00:00 stderr F I0813 20:09:12.375616 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation-aggregation, uid: 726262c4-2098-4379-a985-90474531ff53]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.375837833+00:00 stderr F I0813 20:09:12.375706 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-kube-rbac-proxy, uid: 5402260d-a88f-4905-b91a-c0ec390d8675]" virtual=false
2025-08-13T20:09:12.376448560+00:00 stderr F I0813 20:09:12.376404 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted-v2, uid: 32267ecd-433d-41f4-a3f0-a0b5b8f32162]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.376448560+00:00 stderr F I0813 20:09:12.376439 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: metrics-daemon-role, uid: 199383b0-9cef-4533-81a7-f22f011a69a5]" virtual=false
2025-08-13T20:09:12.378645203+00:00 stderr F I0813 20:09:12.378560 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork, uid: 7fa3d83a-0d76-405a-a7b9-190741931396]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.378838279+00:00 stderr F I0813 20:09:12.378746 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-node-identity, uid: c11a15d3-53e9-40f6-b868-c05634ec19ff]" virtual=false
2025-08-13T20:09:12.381070403+00:00 stderr F I0813 20:09:12.381048 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot-v2, uid: 6efcc9e9-c5e4-4315-940a-636bac274a19]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.381133485+00:00 stderr F I0813 20:09:12.381119 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers, uid: 9206794b-b0af-49c1-a4fb-787d45f3f1f8]" virtual=false
2025-08-13T20:09:12.396499335+00:00 stderr F I0813 20:09:12.396407 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: control-plane-machine-set-operator, uid: a39c14db-78b4-4b58-8651-479169195296]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.396701841+00:00 stderr F I0813 20:09:12.396623 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-image-registry-operator, uid: 6edd083b-28fe-4808-aa2e-67cd5ba297ab]" virtual=false
2025-08-13T20:09:12.397560596+00:00 stderr F I0813 20:09:12.397538 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-diagnostics, uid: 6e739f62-fe8b-4f65-8df1-a50711c9b496]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.397691069+00:00 stderr F I0813 20:09:12.397673 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: ff646139-2b95-4409-9ed8-321d5912f92e]" virtual=false
2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.403536 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:controller:operator-lifecycle-manager, uid: 2b5865fd-ac96-41b7-96bc-48cce5469705]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.403659 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: marketplace-operator, uid: f411d32e-1a09-441c-99dd-7b75e5b87298]" virtual=false
2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.404050 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostaccess, uid: ae10afa1-d594-42ec-9bf7-626463cb1630]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.404076 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ingress-operator, uid: e3a75357-5a06-4934-a482-0d77f1fbb9b2]" virtual=false
2025-08-13T20:09:12.404298939+00:00 stderr F I0813 20:09:12.404267 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-config-operator:cluster-reader, uid: 3162d1be-8f00-4957-8db6-a2b1361aaedf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.404432743+00:00 stderr F I0813 20:09:12.404291 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus, uid: 6e9e0a19-df2a-431c-a2d8-e73b63d4f45c]" virtual=false
2025-08-13T20:09:12.406146462+00:00 stderr F I0813 20:09:12.406090 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: helm-chartrepos-viewer, uid: d3d3a0df-2feb-4db3-b436-5b73b4c151eb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.406170052+00:00 stderr F I0813 20:09:12.406140 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation, uid: 4f414358-8053-4a3d-b8f5-95980f16dda0]" virtual=false
2025-08-13T20:09:12.406757639+00:00 stderr F I0813 20:09:12.406726 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console, uid: 8c434f71-b240-42c1-88a4-a6fbc903b388]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.406927144+00:00 stderr F I0813 20:09:12.406882 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount, uid: 43c793ea-8f8d-4996-bb74-5e312be44b1a]" virtual=false
2025-08-13T20:09:12.408725406+00:00 stderr F I0813 20:09:12.408614 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostnetwork-v2, uid: ad07fee7-e5d1-4a09-8086-698133feb11a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.410531157+00:00 stderr F I0813 20:09:12.410405 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:machine-config-operator:cluster-reader, uid: 1fda91fe-aa05-455a-8865-3034f4e4cff8]" virtual=false
2025-08-13T20:09:12.412443682+00:00 stderr F I0813 20:09:12.412409 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-autoscaler-operator, uid: 99f02060-bf1c-484c-8c17-2b1243086e3f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.412522745+00:00 stderr F I0813 20:09:12.412508 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-samples-operator:cluster-reader, uid: b1a93bf7-4bf4-4d81-a69b-76fb07155d62]" virtual=false
2025-08-13T20:09:12.416593851+00:00 stderr F I0813 20:09:12.416547 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-samples-operator, uid: dbe391a0-135a-4375-82db-699a0d88ce8e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.416688384+00:00 stderr F I0813 20:09:12.416673 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-cluster-reader, uid: 2d3a058f-4b36-405a-8d7a-38d4fac9bbdf]" virtual=false
2025-08-13T20:09:12.417007953+00:00 stderr F I0813 20:09:12.416983 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-dns-operator, uid: d1456a44-d6b7-4418-911a-f4bbaf6427c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.417067905+00:00 stderr F I0813 20:09:12.417054 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: project-helm-chartrepository-editor, uid: 86c81b4a-a328-46e4-a36e-736a9454eb6d]" virtual=false
2025-08-13T20:09:12.417556739+00:00 stderr F I0813 20:09:12.417534 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: metrics-daemon-role, uid: 199383b0-9cef-4533-81a7-f22f011a69a5]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.417611870+00:00 stderr F I0813 20:09:12.417598 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: registry-monitoring, uid: df05d985-dbd7-487d-b14e-5f77e9a774d1]" virtual=false
2025-08-13T20:09:12.419447023+00:00 stderr F I0813 20:09:12.418165 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:nonroot, uid: b76bf753-d87c-45a9-91db-eea6f389f4d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.419447023+00:00 stderr F I0813 20:09:12.418219 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted, uid: 75ded65f-c56d-483a-aa78-9a3dfb682f3a]" virtual=false
2025-08-13T20:09:12.422225643+00:00 stderr F I0813 20:09:12.422075 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: net-attach-def-project, uid: 7c380190-b6b7-4bba-8d3d-6bda6c81bc8e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.422225643+00:00 stderr F I0813 20:09:12.422215 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: prometheus-k8s-scheduler-resources, uid: c2078daf-5c4f-48e6-b914-b4ca03df3cb9]" virtual=false
2025-08-13T20:09:12.423494039+00:00 stderr F I0813 20:09:12.423389 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: operatorhub-config-reader, uid: e2651db3-cf40-4b1c-a7da-ae197e936593]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.423494039+00:00 stderr F I0813 20:09:12.423460 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 8ee51cdb-bf4d-4e61-9535-b23c9dd08843]" virtual=false
2025-08-13T20:09:12.428961936+00:00 stderr F I0813 20:09:12.428863 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: ff646139-2b95-4409-9ed8-321d5912f92e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.429135791+00:00 stderr F I0813 20:09:12.429078 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-extensions-reader, uid: ecabf9a2-18e2-45e0-9591-e2a9b4363684]" virtual=false
2025-08-13T20:09:12.435999688+00:00 stderr F I0813 20:09:12.435876 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:controller:machine-approver, uid: 672519e3-9ef9-4c9e-a091-7c4b0d3de1c4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.436231434+00:00 stderr F I0813 20:09:12.436211 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:privileged, uid: 301bd3c5-2517-462f-bb39-9758c8065aa4]" virtual=false
2025-08-13T20:09:12.438677634+00:00 stderr F I0813 20:09:12.438616 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: cluster-image-registry-operator, uid: 6edd083b-28fe-4808-aa2e-67cd5ba297ab]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.438760677+00:00 stderr F I0813 20:09:12.438742 1 garbagecollector.go:549] "Processing item" item="[policy/v1/PodDisruptionBudget, namespace: openshift-operator-lifecycle-manager, name: packageserver-pdb, uid: 7faaf7ff-09b4-4ea8-95d0-99384dbe0390]" virtual=false
2025-08-13T20:09:12.441166536+00:00 stderr F I0813 20:09:12.441086 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: marketplace-operator, uid: f411d32e-1a09-441c-99dd-7b75e5b87298]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.441266709+00:00 stderr F I0813 20:09:12.441249 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-namespace, uid: fa51c0bf-8455-4908-a5b9-5047521669d7]" virtual=false
2025-08-13T20:09:12.450292927+00:00 stderr F I0813 20:09:12.450201 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-kube-rbac-proxy, uid: 5402260d-a88f-4905-b91a-c0ec390d8675]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.451266095+00:00 stderr F I0813 20:09:12.451242 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: network-node-identity, uid: c11a15d3-53e9-40f6-b868-c05634ec19ff]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.451402359+00:00 stderr F I0813 20:09:12.451385 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: authentication-operator-config, uid: b02c5a6c-aa5e-45ae-9058-5573f870c452]" virtual=false
2025-08-13T20:09:12.452016007+00:00 stderr F I0813 20:09:12.451992 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-config, uid: b7bf0a70-f77f-40ce-8903-84d4dba4ea3a]" virtual=false
2025-08-13T20:09:12.454326373+00:00 stderr F I0813 20:09:12.454300 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers, uid: 9206794b-b0af-49c1-a4fb-787d45f3f1f8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.454392435+00:00 stderr F I0813 20:09:12.454377 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-3, uid: e5c106c5-9215-42bf-9f83-ad893e5dfc9f]" virtual=false 2025-08-13T20:09:12.454642582+00:00 stderr F I0813 20:09:12.454594 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:anyuid, uid: 74815ef0-3c58-4271-9c93-5c834b5a10e5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.454700504+00:00 stderr F I0813 20:09:12.454686 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workloads-namespace, uid: eadeeb62-a305-4855-95f4-6d8dc0433482]" virtual=false 2025-08-13T20:09:12.455150337+00:00 stderr F I0813 20:09:12.454987 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: multus, uid: 6e9e0a19-df2a-431c-a2d8-e73b63d4f45c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.455653451+00:00 stderr F I0813 20:09:12.455633 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-2, uid: 
6002c8b9-0d97-445d-a699-b98f9b3b0a7e]" virtual=false 2025-08-13T20:09:12.455999951+00:00 stderr F I0813 20:09:12.455969 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: machine-api-controllers-metal3-remediation, uid: 4f414358-8053-4a3d-b8f5-95980f16dda0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.456096404+00:00 stderr F I0813 20:09:12.456078 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: node-cluster, uid: 13ea4157-64e4-4040-bd37-d252de132aff]" virtual=false 2025-08-13T20:09:12.457175995+00:00 stderr F I0813 20:09:12.457152 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:machine-config-operator:cluster-reader, uid: 1fda91fe-aa05-455a-8865-3034f4e4cff8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.457244887+00:00 stderr F I0813 20:09:12.457226 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-10, uid: 84db8ecd-30ab-46ed-a310-4e96d94d3fd1]" virtual=false 2025-08-13T20:09:12.477003863+00:00 stderr F I0813 20:09:12.476842 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:hostmount, uid: 43c793ea-8f8d-4996-bb74-5e312be44b1a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.477003863+00:00 stderr 
F I0813 20:09:12.476946 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-config, uid: c29eb760-4269-415b-a4e7-ce7850749f0e]" virtual=false 2025-08-13T20:09:12.477135457+00:00 stderr F I0813 20:09:12.477107 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-cluster-reader, uid: 2d3a058f-4b36-405a-8d7a-38d4fac9bbdf]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.477344733+00:00 stderr F I0813 20:09:12.477269 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-3, uid: e5c106c5-9215-42bf-9f83-ad893e5dfc9f]" 2025-08-13T20:09:12.477344733+00:00 stderr F I0813 20:09:12.477322 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: d937764d-a2f2-4f97-8faa-ee73604e87e3]" virtual=false 2025-08-13T20:09:12.477401165+00:00 stderr F I0813 20:09:12.477384 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-ingress-operator, name: trusted-ca, uid: 1f6546f8-a303-43d3-8110-ebd844c52acc]" virtual=false 2025-08-13T20:09:12.477833677+00:00 stderr F I0813 20:09:12.477752 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-etcd, name: revision-status-2, uid: 6002c8b9-0d97-445d-a699-b98f9b3b0a7e]" 2025-08-13T20:09:12.477854728+00:00 stderr F I0813 20:09:12.477844 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: 
openshift-route-controller-manager, name: prometheus-k8s, uid: 2247b757-2dac-4d86-bfbd-8dff6864b9e9]" virtual=false 2025-08-13T20:09:12.478324681+00:00 stderr F I0813 20:09:12.478265 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ingress-operator, uid: e3a75357-5a06-4934-a482-0d77f1fbb9b2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.478461955+00:00 stderr F I0813 20:09:12.478443 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-approver, uid: ff6a6f67-ddd9-476d-bf91-e242616a03c7]" virtual=false 2025-08-13T20:09:12.479538086+00:00 stderr F I0813 20:09:12.479505 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: console-extensions-reader, uid: ecabf9a2-18e2-45e0-9591-e2a9b4363684]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.479538086+00:00 stderr F I0813 20:09:12.479532 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 147efe47-06b1-489e-a050-0509fe1494a0]" virtual=false 2025-08-13T20:09:12.493480686+00:00 stderr F I0813 20:09:12.493361 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-namespace, uid: fa51c0bf-8455-4908-a5b9-5047521669d7]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.493740073+00:00 stderr F I0813 20:09:12.493643 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-operator, uid: 644d299e-7c04-454d-a20a-2641713ce520]" virtual=false 2025-08-13T20:09:12.498699055+00:00 stderr F I0813 20:09:12.498093 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: project-helm-chartrepository-editor, uid: 86c81b4a-a328-46e4-a36e-736a9454eb6d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.498699055+00:00 stderr F I0813 20:09:12.498165 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: bdfbbe53-9083-4628-919f-d92be5217116]" virtual=false 2025-08-13T20:09:12.501325531+00:00 stderr F I0813 20:09:12.501260 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[policy/v1/PodDisruptionBudget, namespace: openshift-operator-lifecycle-manager, name: packageserver-pdb, uid: 7faaf7ff-09b4-4ea8-95d0-99384dbe0390]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.501400523+00:00 stderr F I0813 20:09:12.501324 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 4e0e5d70-4367-4096-a0d7-165ab4597f6f]" virtual=false 2025-08-13T20:09:12.508439585+00:00 stderr F I0813 20:09:12.508356 1 garbagecollector.go:615] 
"item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 8ee51cdb-bf4d-4e61-9535-b23c9dd08843]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.508491956+00:00 stderr F I0813 20:09:12.508443 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-operator, name: prometheus-k8s, uid: d3b6e7ef-13fc-4616-8132-30ed8b95b8b6]" virtual=false 2025-08-13T20:09:12.508763754+00:00 stderr F I0813 20:09:12.508721 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:cluster-samples-operator:cluster-reader, uid: b1a93bf7-4bf4-4d81-a69b-76fb07155d62]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.508817975+00:00 stderr F I0813 20:09:12.508768 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: 007819f0-5856-454e-a23f-645d2e97ddc9]" virtual=false 2025-08-13T20:09:12.522491778+00:00 stderr F I0813 20:09:12.522362 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:restricted, uid: 75ded65f-c56d-483a-aa78-9a3dfb682f3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.522491778+00:00 stderr F I0813 20:09:12.522440 1 garbagecollector.go:549] "Processing item" 
item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-node-limited, uid: 991c2e92-733c-4b46-b14a-a76232a62c05]" virtual=false 2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.526882 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver, name: revision-status-10, uid: 84db8ecd-30ab-46ed-a310-4e96d94d3fd1]" 2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.526958 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-pod, uid: 950b403d-a1ab-4def-9716-54d82ee220cf]" virtual=false 2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.528365 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: system:openshift:scc:privileged, uid: 301bd3c5-2517-462f-bb39-9758c8065aa4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.528393 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: 64c7926d-fb43-4663-b39f-8b3dc932ced2]" virtual=false 2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.529008 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: registry-monitoring, uid: df05d985-dbd7-487d-b14e-5f77e9a774d1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.531540917+00:00 stderr F I0813 20:09:12.529033 1 
garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 25e48399-1d46-4030-b302-f0e0a40b15ac]" virtual=false 2025-08-13T20:09:12.550622954+00:00 stderr F I0813 20:09:12.550550 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-node-limited, uid: 991c2e92-733c-4b46-b14a-a76232a62c05]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.550622954+00:00 stderr F I0813 20:09:12.550598 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: trusted-ca-bundle, uid: fec29a7e-ed54-4cd2-a16a-9f72be2c61f3]" virtual=false 2025-08-13T20:09:12.551651474+00:00 stderr F I0813 20:09:12.551566 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workloads-namespace, uid: eadeeb62-a305-4855-95f4-6d8dc0433482]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.551942492+00:00 stderr F I0813 20:09:12.551755 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 56b96a63-85f5-47bb-88cb-be23eec085bd]" virtual=false 2025-08-13T20:09:12.552365464+00:00 stderr F I0813 20:09:12.552307 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: authentication-operator-config, uid: 
b02c5a6c-aa5e-45ae-9058-5573f870c452]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.552365464+00:00 stderr F I0813 20:09:12.552355 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: whereabouts-cni, uid: 32ee943f-953b-467b-83ff-09220026838e]" virtual=false 2025-08-13T20:09:12.553244169+00:00 stderr F I0813 20:09:12.553152 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator-config, uid: b7bf0a70-f77f-40ce-8903-84d4dba4ea3a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.553244169+00:00 stderr F I0813 20:09:12.553197 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: b5fbb36a-21eb-4ffc-98a3-b0ccc0c0751c]" virtual=false 2025-08-13T20:09:12.553442745+00:00 stderr F I0813 20:09:12.553420 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRole, namespace: , name: prometheus-k8s-scheduler-resources, uid: c2078daf-5c4f-48e6-b914-b4ca03df3cb9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.553509177+00:00 stderr F I0813 20:09:12.553491 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-cluster-total, uid: c38274d9-e6e8-4d0b-b766-d07f00c60697]" virtual=false 2025-08-13T20:09:12.553583769+00:00 stderr F I0813 20:09:12.553532 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 147efe47-06b1-489e-a050-0509fe1494a0]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.558109069+00:00 stderr F I0813 20:09:12.557092 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: node-cluster, uid: 13ea4157-64e4-4040-bd37-d252de132aff]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.558109069+00:00 stderr F I0813 20:09:12.557146 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: marketplace-operator, uid: af983aa6-e063-42fe-945c-3f6bb1f3e446]" virtual=false 2025-08-13T20:09:12.559074326+00:00 stderr F I0813 20:09:12.553580 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: console-operator, uid: 00c6aa74-29ed-484b-adf5-26bb1faa032f]" virtual=false 2025-08-13T20:09:12.559685664+00:00 stderr F I0813 20:09:12.559656 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator-config, uid: c29eb760-4269-415b-a4e7-ce7850749f0e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.559850299+00:00 stderr F I0813 20:09:12.559764 1 garbagecollector.go:549] "Processing item" 
item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: 05961b4d-e50c-47f7-80b9-a2fc6dea707c]" virtual=false 2025-08-13T20:09:12.575375254+00:00 stderr F I0813 20:09:12.575307 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-operator, name: prometheus-k8s, uid: d3b6e7ef-13fc-4616-8132-30ed8b95b8b6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.575933170+00:00 stderr F I0813 20:09:12.575871 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: af8e9559-dd0c-4362-9ecb-cdae5324c3fd]" virtual=false 2025-08-13T20:09:12.576147906+00:00 stderr F I0813 20:09:12.575346 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-route-controller-manager, name: prometheus-k8s, uid: 2247b757-2dac-4d86-bfbd-8dff6864b9e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.576231668+00:00 stderr F I0813 20:09:12.576213 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: olm-operator-serviceaccount, uid: 4e27b169-814e-4c39-bb1c-ed8a71de6e00]" virtual=false 2025-08-13T20:09:12.576455695+00:00 stderr F I0813 20:09:12.575368 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-approver, uid: ff6a6f67-ddd9-476d-bf91-e242616a03c7]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.576455695+00:00 stderr F I0813 20:09:12.576446 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 543709bc-20a4-4aa5-9c2d-22d28d836926]" virtual=false 2025-08-13T20:09:12.577232417+00:00 stderr F I0813 20:09:12.575465 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: d937764d-a2f2-4f97-8faa-ee73604e87e3]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.577383341+00:00 stderr F I0813 20:09:12.577361 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: ea595b36-3d8a-484e-8397-2d1568e5dbf5]" virtual=false 2025-08-13T20:09:12.577482354+00:00 stderr F I0813 20:09:12.575524 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-config-operator, name: prometheus-k8s, uid: 4e0e5d70-4367-4096-a0d7-165ab4597f6f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.577520175+00:00 stderr F I0813 20:09:12.575563 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-pod, uid: 950b403d-a1ab-4def-9716-54d82ee220cf]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.577753982+00:00 stderr F I0813 20:09:12.577675 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: 0bd8dea8-4527-46af-9f03-68d1df279d8b]" virtual=false 2025-08-13T20:09:12.578699689+00:00 stderr F I0813 20:09:12.577633 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-operator, uid: dff291f1-42cd-4626-8331-8b0fef4d088c]" virtual=false 2025-08-13T20:09:12.582263241+00:00 stderr F I0813 20:09:12.581183 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager, name: prometheus-k8s, uid: 25e48399-1d46-4030-b302-f0e0a40b15ac]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.582263241+00:00 stderr F I0813 20:09:12.581249 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 9876773e-9d26-40a6-bfd6-e2f1512249d0]" virtual=false 2025-08-13T20:09:12.588976464+00:00 stderr F I0813 20:09:12.588896 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-user-settings, name: console-user-settings-admin, uid: 64c7926d-fb43-4663-b39f-8b3dc932ced2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:12.589136238+00:00 stderr F I0813 20:09:12.589075 1 
garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-config, uid: a31ddc0b-aec5-455b-8a66-7121efaad6c3]" virtual=false 2025-08-13T20:09:12.599089704+00:00 stderr F I0813 20:09:12.598960 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: b5fbb36a-21eb-4ffc-98a3-b0ccc0c0751c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:12.599089704+00:00 stderr F I0813 20:09:12.599058 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-api-controllers, uid: 7c112e2e-5dbd-404e-b2c5-a73aa30d28a1]" virtual=false 2025-08-13T20:09:12.605089976+00:00 stderr F I0813 20:09:12.605023 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ingress-operator, name: trusted-ca, uid: 1f6546f8-a303-43d3-8110-ebd844c52acc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:12.605274441+00:00 stderr F I0813 20:09:12.605249 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: node-ca, uid: 06d86d33-0cb8-49e8-baff-236e595eef4d]" virtual=false 2025-08-13T20:09:12.608372920+00:00 stderr F I0813 20:09:12.605634 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-operator, uid: 644d299e-7c04-454d-a20a-2641713ce520]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.608757351+00:00 stderr F I0813 20:09:12.608729 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: console-operator, uid: 00c6aa74-29ed-484b-adf5-26bb1faa032f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.608943616+00:00 stderr F I0813 20:09:12.608858 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-cluster-rsrc-use, uid: 54d1c16d-4e01-45e7-a710-5f70910784d6]" virtual=false
2025-08-13T20:09:12.609003888+00:00 stderr F I0813 20:09:12.608983 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-operator, uid: 60e015be-05fd-4865-8047-01a0e067f6f0]" virtual=false
2025-08-13T20:09:12.609266195+00:00 stderr F I0813 20:09:12.608610 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-apiserver-operator, name: trusted-ca-bundle, uid: fec29a7e-ed54-4cd2-a16a-9f72be2c61f3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2025-08-13T20:09:12.609369928+00:00 stderr F I0813 20:09:12.609314 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: e1dd6a03-7458-44ac-a6c7-edfefd391324]" virtual=false
2025-08-13T20:09:12.609510002+00:00 stderr F I0813 20:09:12.608682 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: whereabouts-cni, uid: 32ee943f-953b-467b-83ff-09220026838e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.609510002+00:00 stderr F I0813 20:09:12.609497 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift, name: copied-csv-viewer, uid: c333d29d-0f91-4c19-a9fa-625f1576a12a]" virtual=false
2025-08-13T20:09:12.609528213+00:00 stderr F I0813 20:09:12.609510 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler-operator, name: prometheus-k8s, uid: 007819f0-5856-454e-a23f-645d2e97ddc9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.609539903+00:00 stderr F I0813 20:09:12.609532 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver, name: prometheus-k8s, uid: 2284de94-810e-4240-8f67-14ef839c6064]" virtual=false
2025-08-13T20:09:12.609635346+00:00 stderr F I0813 20:09:12.609081 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-etcd-operator, name: prometheus-k8s, uid: bdfbbe53-9083-4628-919f-d92be5217116]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.609703858+00:00 stderr F I0813 20:09:12.609650 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: admin-gates, uid: ca7c55e0-b2d1-4635-a43a-3c60b3ca968b]" virtual=false
2025-08-13T20:09:12.614504206+00:00 stderr F I0813 20:09:12.614402 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-cluster-total, uid: c38274d9-e6e8-4d0b-b766-d07f00c60697]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.614616709+00:00 stderr F I0813 20:09:12.614597 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: coreos-pull-secret-reader, uid: cdaab605-1dd2-4c71-902b-e715152f501c]" virtual=false
2025-08-13T20:09:12.619052696+00:00 stderr F I0813 20:09:12.617203 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager-operator, name: prometheus-k8s, uid: 56b96a63-85f5-47bb-88cb-be23eec085bd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.619052696+00:00 stderr F I0813 20:09:12.617280 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: console-operator, uid: 819a8d31-510a-43dc-9b44-d2d194694335]" virtual=false
2025-08-13T20:09:12.636666471+00:00 stderr F I0813 20:09:12.636562 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: prometheus-k8s, uid: ea595b36-3d8a-484e-8397-2d1568e5dbf5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.636666471+00:00 stderr F I0813 20:09:12.636637 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b43cd4d0-8394-41bd-9c23-389c33ffc953]" virtual=false
2025-08-13T20:09:12.652250608+00:00 stderr F I0813 20:09:12.645874 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-scheduler, name: prometheus-k8s, uid: af8e9559-dd0c-4362-9ecb-cdae5324c3fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.652250608+00:00 stderr F I0813 20:09:12.645968 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 2a1a6443-92bf-42c9-af0e-799ad6d8e75f]" virtual=false
2025-08-13T20:09:12.667004601+00:00 stderr F I0813 20:09:12.660727 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-operator, uid: dff291f1-42cd-4626-8331-8b0fef4d088c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.667004601+00:00 stderr F I0813 20:09:12.661014 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: node-ca, uid: 0bd6243f-f605-4cae-ae6f-9274fc0fab04]" virtual=false
2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.670689 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: marketplace-operator, uid: af983aa6-e063-42fe-945c-3f6bb1f3e446]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.670818 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: ingress-operator, uid: 66faf8af-a62a-4380-adf8-70a5fd528d66]" virtual=false
2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.671406 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 9876773e-9d26-40a6-bfd6-e2f1512249d0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.671445 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 5de31099-cb98-4349-a07d-5b47004d4e10]" virtual=false
2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.671686 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver-operator, name: prometheus-k8s, uid: 543709bc-20a4-4aa5-9c2d-22d28d836926]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.672155409+00:00 stderr F I0813 20:09:12.671706 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 353f2c71-579c-4479-8c03-fec2a64cdccb]" virtual=false
2025-08-13T20:09:12.752097651+00:00 stderr F I0813 20:09:12.749502 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-operator, uid: 60e015be-05fd-4865-8047-01a0e067f6f0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.752097651+00:00 stderr F I0813 20:09:12.749572 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: iptables-alerter, uid: a04dfba9-1e5c-429c-af4e-a16926a1a922]" virtual=false
2025-08-13T20:09:12.766936966+00:00 stderr F I0813 20:09:12.764520 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-service-ca-operator, name: prometheus-k8s, uid: 0bd8dea8-4527-46af-9f03-68d1df279d8b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.766936966+00:00 stderr F I0813 20:09:12.764614 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 940e7205-a7c2-4202-bda2-b8d6e6cc323e]" virtual=false
2025-08-13T20:09:12.769938302+00:00 stderr F I0813 20:09:12.769769 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver, name: prometheus-k8s, uid: 2284de94-810e-4240-8f67-14ef839c6064]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.770103907+00:00 stderr F I0813 20:09:12.769978 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: console-operator, uid: 5d735f56-5d70-4872-9ba1-eeb14212f370]" virtual=false
2025-08-13T20:09:12.770444817+00:00 stderr F I0813 20:09:12.770423 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-controller-manager, name: prometheus-k8s, uid: 05961b4d-e50c-47f7-80b9-a2fc6dea707c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.770582391+00:00 stderr F I0813 20:09:12.770564 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: efae6cd6-358c-4e88-a33d-efa2c6b9d0e2]" virtual=false
2025-08-13T20:09:12.783942574+00:00 stderr F I0813 20:09:12.781338 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: node-ca, uid: 06d86d33-0cb8-49e8-baff-236e595eef4d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.783942574+00:00 stderr F I0813 20:09:12.781442 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: 651212c1-c815-4a28-8ec6-1c6280dbdbec]" virtual=false
2025-08-13T20:09:12.783942574+00:00 stderr F I0813 20:09:12.781986 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-cluster-rsrc-use, uid: 54d1c16d-4e01-45e7-a710-5f70910784d6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.783942574+00:00 stderr F I0813 20:09:12.782077 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 906775a7-f83d-4a63-b5ed-bad9d086c7aa]" virtual=false
2025-08-13T20:09:12.796726050+00:00 stderr F I0813 20:09:12.795367 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: machine-api-controllers, uid: 7c112e2e-5dbd-404e-b2c5-a73aa30d28a1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.796726050+00:00 stderr F I0813 20:09:12.795585 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: marketplace-operator, uid: 621c721c-fedc-415a-95e9-429e192d9990]" virtual=false
2025-08-13T20:09:12.807507729+00:00 stderr F I0813 20:09:12.807422 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: console-operator, uid: 819a8d31-510a-43dc-9b44-d2d194694335]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.810893156+00:00 stderr F I0813 20:09:12.809240 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: cluster-network-operator, uid: e6388388-ee10-4833-b042-1e47aaa130a7]" virtual=false
2025-08-13T20:09:12.810893156+00:00 stderr F I0813 20:09:12.810134 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: coreos-pull-secret-reader, uid: cdaab605-1dd2-4c71-902b-e715152f501c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.810893156+00:00 stderr F I0813 20:09:12.810160 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 92320b12-3d07-45bf-80cb-218961388b77]" virtual=false
2025-08-13T20:09:12.832015512+00:00 stderr F I0813 20:09:12.829524 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: b43cd4d0-8394-41bd-9c23-389c33ffc953]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.832015512+00:00 stderr F I0813 20:09:12.829567 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: dae999a5-24f7-4e94-a4b1-471fa177fa34]" virtual=false
2025-08-13T20:09:12.865013238+00:00 stderr F I0813 20:09:12.864864 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-machine-api-operator, uid: e1dd6a03-7458-44ac-a6c7-edfefd391324]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.865013238+00:00 stderr F I0813 20:09:12.864977 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ac, uid: f0debeff-4753-4149-9bd6-028dddb9b67d]" virtual=false
2025-08-13T20:09:12.873874792+00:00 stderr F I0813 20:09:12.871746 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: node-ca, uid: 0bd6243f-f605-4cae-ae6f-9274fc0fab04]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.873874792+00:00 stderr F I0813 20:09:12.871993 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: machine-approver-sa, uid: b07a4cd6-c640-4266-9ce2-600406fbd64f]" virtual=false
2025-08-13T20:09:12.892882537+00:00 stderr F I0813 20:09:12.890509 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-config-operator, name: machine-config-operator, uid: 2a1a6443-92bf-42c9-af0e-799ad6d8e75f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.892882537+00:00 stderr F I0813 20:09:12.890661 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: network-node-identity, uid: 7e561373-e820-499a-95a4-ce248a0f5525]" virtual=false
2025-08-13T20:09:12.892882537+00:00 stderr F I0813 20:09:12.892010 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ingress-operator, name: ingress-operator, uid: 66faf8af-a62a-4380-adf8-70a5fd528d66]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.892882537+00:00 stderr F I0813 20:09:12.892114 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 77b47efb-3f01-4bfa-9558-359ab057875f]" virtual=false
2025-08-13T20:09:12.895847312+00:00 stderr F I0813 20:09:12.893068 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-node, uid: 940e7205-a7c2-4202-bda2-b8d6e6cc323e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.895847312+00:00 stderr F I0813 20:09:12.893184 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: openshift-config-operator, uid: 97fb8215-a3a4-428f-b059-ff59344586ab]" virtual=false
2025-08-13T20:09:12.898930050+00:00 stderr F I0813 20:09:12.896708 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 353f2c71-579c-4479-8c03-fec2a64cdccb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.898930050+00:00 stderr F I0813 20:09:12.896988 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 46c2de5e-5b41-4f38-9bf5-780cad1e517d]" virtual=false
2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.906484 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: olm-operator-serviceaccount, uid: 4e27b169-814e-4c39-bb1c-ed8a71de6e00]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.906555 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: dns-operator, uid: fb958c91-4fd0-44ff-87ab-ea516f2eb6cf]" virtual=false
2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.906652 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift, name: copied-csv-viewer, uid: c333d29d-0f91-4c19-a9fa-625f1576a12a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.906750 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus, uid: 30dd8554-cd5a-476c-b346-91c68439eed7]" virtual=false
2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.907040 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-apiserver-operator, name: openshift-apiserver-operator, uid: 651212c1-c815-4a28-8ec6-1c6280dbdbec]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.908932277+00:00 stderr F I0813 20:09:12.907070 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 7216ec91-40a8-4309-9d6f-2620c82247e2]" virtual=false
2025-08-13T20:09:12.937872787+00:00 stderr F I0813 20:09:12.935885 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: iptables-alerter, uid: a04dfba9-1e5c-429c-af4e-a16926a1a922]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:12.937872787+00:00 stderr F I0813 20:09:12.936095 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: etcd-operator, uid: 87565fdf-8fb7-4af2-bb82-2bbd1c71cec9]" virtual=false
2025-08-13T20:09:12.937872787+00:00 stderr F I0813 20:09:12.936108 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: admin-gates, uid: ca7c55e0-b2d1-4635-a43a-3c60b3ca968b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.937872787+00:00 stderr F I0813 20:09:12.936135 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 10dd43d4-0c6c-4543-8d29-d868fdae181d]" virtual=false
2025-08-13T20:09:12.953929447+00:00 stderr F I0813 20:09:12.953257 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-marketplace, name: marketplace-operator, uid: 621c721c-fedc-415a-95e9-429e192d9990]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.953929447+00:00 stderr F I0813 20:09:12.953468 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: 38e87230-6b06-40a0-af2a-442a47fe9507]" virtual=false
2025-08-13T20:09:12.965881760+00:00 stderr F I0813 20:09:12.965646 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-console-operator, name: console-operator, uid: 5d735f56-5d70-4872-9ba1-eeb14212f370]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.965881760+00:00 stderr F I0813 20:09:12.965692 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: 9ae8aac8-5e71-4dc5-acdc-bd42630689b5]" virtual=false
2025-08-13T20:09:12.983866316+00:00 stderr F I0813 20:09:12.983705 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-operator, name: cluster-network-operator, uid: e6388388-ee10-4833-b042-1e47aaa130a7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.983866316+00:00 stderr F I0813 20:09:12.983775 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 70bd2dd3-4e78-4176-92de-09c3ccb93594]" virtual=false
2025-08-13T20:09:12.995885660+00:00 stderr F I0813 20:09:12.995107 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-service-ca-operator, name: service-ca-operator, uid: 5de31099-cb98-4349-a07d-5b47004d4e10]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.995885660+00:00 stderr F I0813 20:09:12.995169 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: c607908b-16ad-4fb3-95dd-a2d9df939986]" virtual=false
2025-08-13T20:09:12.998082183+00:00 stderr F I0813 20:09:12.996410 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator, uid: efae6cd6-358c-4e88-a33d-efa2c6b9d0e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.998082183+00:00 stderr F I0813 20:09:12.996455 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-console, name: console, uid: 970aab5a-86f2-41fd-87c5-c7415f735449]" virtual=false
2025-08-13T20:09:12.998882086+00:00 stderr F I0813 20:09:12.998833 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 906775a7-f83d-4a63-b5ed-bad9d086c7aa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:12.998934438+00:00 stderr F I0813 20:09:12.998876 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: metrics-daemon-sa, uid: 3052cc53-1784-4f82-91bc-b467a202b3b1]" virtual=false
2025-08-13T20:09:13.002491110+00:00 stderr F I0813 20:09:13.002453 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-node-identity, name: network-node-identity, uid: 7e561373-e820-499a-95a4-ce248a0f5525]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.002587312+00:00 stderr F I0813 20:09:13.002566 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ancillary-tools, uid: 456ced67-1537-4b8d-8397-a06fbaaa6bc4]" virtual=false
2025-08-13T20:09:13.012008912+00:00 stderr F I0813 20:09:13.011947 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-config-operator, name: openshift-config-operator, uid: 97fb8215-a3a4-428f-b059-ff59344586ab]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.012109495+00:00 stderr F I0813 20:09:13.012094 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-controllers, uid: 8577a786-447c-40a7-bbfa-d3af148d8584]" virtual=false
2025-08-13T20:09:13.012169307+00:00 stderr F I0813 20:09:13.011976 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ac, uid: f0debeff-4753-4149-9bd6-028dddb9b67d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.012261210+00:00 stderr F I0813 20:09:13.012221 1 garbagecollector.go:549] "Processing item" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: authentication-operator, uid: 0ab5e157-3b4f-4841-8064-7b4519d31987]" virtual=false
2025-08-13T20:09:13.015548044+00:00 stderr F I0813 20:09:13.014925 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-ovn-kubernetes, name: ovn-kubernetes-control-plane, uid: 92320b12-3d07-45bf-80cb-218961388b77]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.015548044+00:00 stderr F I0813 20:09:13.014968 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-node, uid: 39c6a6e2-d1ca-48b1-b647-64eb92ab22d8]" virtual=false
2025-08-13T20:09:13.015548044+00:00 stderr F I0813 20:09:13.015272 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-cluster-machine-approver, name: machine-approver-sa, uid: b07a4cd6-c640-4266-9ce2-600406fbd64f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.015548044+00:00 stderr F I0813 20:09:13.015311 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-images, uid: 8cb4b490-1625-48bb-919c-ccf0098eecd7]" virtual=false
2025-08-13T20:09:13.031552413+00:00 stderr F I0813 20:09:13.031318 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: dae999a5-24f7-4e94-a4b1-471fa177fa34]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.031552413+00:00 stderr F I0813 20:09:13.031407 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-cluster, uid: 0d4712c5-d5b0-41d5-a647-d63817261699]" virtual=false
2025-08-13T20:09:13.043345101+00:00 stderr F I0813 20:09:13.042453 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-termination-handler, uid: 38e87230-6b06-40a0-af2a-442a47fe9507]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.043345101+00:00 stderr F I0813 20:09:13.042538 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-rsrc-use, uid: b8637e8a-1405-492e-bb16-9110c9aa6848]" virtual=false
2025-08-13T20:09:13.046583434+00:00 stderr F I0813 20:09:13.046029 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-cluster-samples-operator, name: cluster-samples-operator, uid: 77b47efb-3f01-4bfa-9558-359ab057875f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.046583434+00:00 stderr F I0813 20:09:13.046094 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: etcd-dashboard, uid: c6f28bae-aa57-468c-9400-cc06ce489bf2]" virtual=false
2025-08-13T20:09:13.060381259+00:00 stderr F I0813 20:09:13.060293 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus, uid: 30dd8554-cd5a-476c-b346-91c68439eed7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.060588545+00:00 stderr F I0813 20:09:13.060566 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-config, uid: 1ab68a4b-dd84-4ca5-8df6-3f02a224a116]" virtual=false
2025-08-13T20:09:13.066276918+00:00 stderr F I0813 20:09:13.066136 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: metrics-daemon-sa, uid: 3052cc53-1784-4f82-91bc-b467a202b3b1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.066276918+00:00 stderr F I0813 20:09:13.066241 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: grafana-dashboard-apiserver-performance, uid: 08a66e39-2f66-4f16-8e3f-1584a5baa4d7]" virtual=false
2025-08-13T20:09:13.089862925+00:00 stderr F I0813 20:09:13.089744 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-controller-manager-operator, name: kube-controller-manager-operator, uid: 7216ec91-40a8-4309-9d6f-2620c82247e2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.090052650+00:00 stderr F I0813 20:09:13.090028 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-namespace-by-pod, uid: 77e50462-2587-42e1-85b6-b8d283d88f5d]" virtual=false
2025-08-13T20:09:13.090622636+00:00 stderr F I0813 20:09:13.090564 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-multus, name: multus-ancillary-tools, uid: 456ced67-1537-4b8d-8397-a06fbaaa6bc4]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.090684488+00:00 stderr F I0813 20:09:13.090668 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workload, uid: 134b842b-df65-4691-b059-613958e332a5]" virtual=false
2025-08-13T20:09:13.091029158+00:00 stderr F I0813 20:09:13.091001 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator, uid: 46c2de5e-5b41-4f38-9bf5-780cad1e517d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.091160512+00:00 stderr F I0813 20:09:13.091143 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-prometheus, uid: 5985f141-3c54-4417-9255-f53e7838729d]" virtual=false
2025-08-13T20:09:13.095020513+00:00 stderr F I0813 20:09:13.094981 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: 10dd43d4-0c6c-4543-8d29-d868fdae181d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.095108125+00:00 stderr F I0813 20:09:13.095090 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-console-operator, name: telemetry-config, uid: 44471669-105d-4c00-b23a-d7c3e0f0cede]" virtual=false
2025-08-13T20:09:13.095408744+00:00 stderr F I0813 20:09:13.095384 1 garbagecollector.go:615] "item has at least
one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-console, name: console, uid: 970aab5a-86f2-41fd-87c5-c7415f735449]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.095636160+00:00 stderr F I0813 20:09:13.095549 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-cluster-machine-approver, name: kube-rbac-proxy, uid: 83f36bd0-03ab-465f-8813-55e992938c92]" virtual=false 2025-08-13T20:09:13.100217561+00:00 stderr F I0813 20:09:13.100180 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator, uid: 9ae8aac8-5e71-4dc5-acdc-bd42630689b5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.100395677+00:00 stderr F I0813 20:09:13.100329 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: service-ca-bundle, uid: af98ab2a-94fb-4365-a506-7ec3b3dad927]" virtual=false 2025-08-13T20:09:13.137387087+00:00 stderr F I0813 20:09:13.137293 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-etcd-operator, name: etcd-operator, uid: 87565fdf-8fb7-4af2-bb82-2bbd1c71cec9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.137387087+00:00 stderr F I0813 20:09:13.137372 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: trusted-ca-bundle, uid: 3a5e7c7a-1b75-49bd-b1ad-2610fcb12e76]" 
virtual=false 2025-08-13T20:09:13.137681486+00:00 stderr F I0813 20:09:13.137622 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-dns-operator, name: dns-operator, uid: fb958c91-4fd0-44ff-87ab-ea516f2eb6cf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.137681486+00:00 stderr F I0813 20:09:13.137669 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-service-ca-bundle, uid: fe11618d-f458-43e0-9d36-031e3c99e7b7]" virtual=false 2025-08-13T20:09:13.146855939+00:00 stderr F I0813 20:09:13.146467 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-machine-api, name: machine-api-controllers, uid: 8577a786-447c-40a7-bbfa-d3af148d8584]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.146855939+00:00 stderr F I0813 20:09:13.146753 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-pod-total, uid: c9b01aa2-5b5e-42b8-b64d-61b1d6ebd241]" virtual=false 2025-08-13T20:09:13.165397000+00:00 stderr F I0813 20:09:13.163994 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-kube-storage-version-migrator-operator, name: kube-storage-version-migrator-operator, uid: c607908b-16ad-4fb3-95dd-a2d9df939986]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.178827755+00:00 stderr F I0813 20:09:13.167036 1 garbagecollector.go:549] 
"Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 79c4826b-23cb-475e-9f0c-eab8eb98f835]" virtual=false 2025-08-13T20:09:13.179596237+00:00 stderr F I0813 20:09:13.179360 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 70bd2dd3-4e78-4176-92de-09c3ccb93594]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.179881066+00:00 stderr F I0813 20:09:13.179857 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler, uid: 90140550-0bf0-45b6-bc21-2756500fa74e]" virtual=false 2025-08-13T20:09:13.188229685+00:00 stderr F I0813 20:09:13.187076 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: etcd-dashboard, uid: c6f28bae-aa57-468c-9400-cc06ce489bf2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.188229685+00:00 stderr F I0813 20:09:13.187157 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler-operator, uid: 122f7969-6bbd-4171-b0d2-3382822b14bc]" virtual=false 2025-08-13T20:09:13.249248164+00:00 stderr F I0813 20:09:13.248861 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-node, uid: 39c6a6e2-d1ca-48b1-b647-64eb92ab22d8]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.249248164+00:00 stderr F I0813 20:09:13.248947 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-monitoring-operator, uid: f9b8a349-3d3e-43d3-8552-f17fd46dfe4d]" virtual=false 2025-08-13T20:09:13.251207321+00:00 stderr F I0813 20:09:13.251121 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: grafana-dashboard-apiserver-performance, uid: 08a66e39-2f66-4f16-8e3f-1584a5baa4d7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.254506875+00:00 stderr F I0813 20:09:13.254470 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 65ec419a-7cc1-4ad7-851d-a3e1eb8bf158]" virtual=false 2025-08-13T20:09:13.254625179+00:00 stderr F I0813 20:09:13.251225 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ServiceAccount, namespace: openshift-authentication-operator, name: authentication-operator, uid: 0ab5e157-3b4f-4841-8064-7b4519d31987]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255062961+00:00 stderr F I0813 20:09:13.251340 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-operator-config, uid: 1ab68a4b-dd84-4ca5-8df6-3f02a224a116]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255243496+00:00 stderr F I0813 20:09:13.251382 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-controller-manager-operator, name: openshift-controller-manager-images, uid: 8cb4b490-1625-48bb-919c-ccf0098eecd7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255411801+00:00 stderr F I0813 20:09:13.251464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-node-rsrc-use, uid: b8637e8a-1405-492e-bb16-9110c9aa6848]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255592556+00:00 stderr F I0813 20:09:13.254317 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-cluster, uid: 0d4712c5-d5b0-41d5-a647-d63817261699]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.255737890+00:00 stderr F I0813 20:09:13.255167 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-network-operator, uid: 741fb11c-22e2-4896-b41e-4bc6506dadb4]" virtual=false 2025-08-13T20:09:13.256109371+00:00 stderr F I0813 20:09:13.255343 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: 
cluster-samples-operator, uid: 2613967c-5e5d-427c-802b-3219c1314d9f]" virtual=false 2025-08-13T20:09:13.256625366+00:00 stderr F I0813 20:09:13.255515 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: 6e8698c1-8ae7-4cf2-9591-7d7aaf70f1e4]" virtual=false 2025-08-13T20:09:13.257069259+00:00 stderr F I0813 20:09:13.255697 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-proxy-reader, uid: 0fc0e835-2d7d-4935-a386-83f3cdcc2356]" virtual=false 2025-08-13T20:09:13.258028156+00:00 stderr F I0813 20:09:13.255882 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-version-operator, uid: d1772a5b-7528-4218-b1a5-b8e37adaddd0]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.263495 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: service-ca-bundle, uid: af98ab2a-94fb-4365-a506-7ec3b3dad927]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.263563 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console, uid: b060a198-fcb6-4114-b32b-339ffffe6077]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.263784 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-apiserver-operator, name: kube-apiserver-operator-config, uid: a31ddc0b-aec5-455b-8a66-7121efaad6c3]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.263864 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-auth-delegator, uid: 5b5cc72d-8cfc-47d3-8393-7b66b592f99e]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264093 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler-operator, uid: 122f7969-6bbd-4171-b0d2-3382822b14bc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264113 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-extensions-reader, uid: 402476b7-93bc-4cfa-b038-c3108d7ea260]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264260 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-prometheus, uid: 5985f141-3c54-4417-9255-f53e7838729d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264277 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator, uid: cd350fa5-6dc3-49dd-a977-ebe3ffe5edaa]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264417 1 garbagecollector.go:615] "item has at least one existing owner, will not 
garbage collect" item="[v1/ConfigMap, namespace: openshift-cluster-machine-approver, name: kube-rbac-proxy, uid: 83f36bd0-03ab-465f-8813-55e992938c92]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264434 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator-auth-delegator, uid: 6aee10d5-48ad-432d-ae71-69853ed0161c]" virtual=false 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264580 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-namespace-by-pod, uid: 77e50462-2587-42e1-85b6-b8d283d88f5d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.266318724+00:00 stderr F I0813 20:09:13.264598 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: control-plane-machine-set-operator, uid: a398c8c9-26b5-430a-94f5-42d3f40459d5]" virtual=false 2025-08-13T20:09:13.279019018+00:00 stderr F I0813 20:09:13.278873 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-k8s-resources-workload, uid: 134b842b-df65-4691-b059-613958e332a5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.279211213+00:00 stderr F I0813 20:09:13.279142 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: csi-snapshot-controller-operator-clusterrole, 
uid: 3c762506-422e-4421-915e-5872f5c48dbd]" virtual=false 2025-08-13T20:09:13.279737128+00:00 stderr F I0813 20:09:13.279714 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-console-operator, name: telemetry-config, uid: 44471669-105d-4c00-b23a-d7c3e0f0cede]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.279859402+00:00 stderr F I0813 20:09:13.279839 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: custom-account-openshift-machine-config-operator, uid: 7d9e4776-ed62-42b1-b845-cc4c6cc67c88]" virtual=false 2025-08-13T20:09:13.280452989+00:00 stderr F I0813 20:09:13.280430 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-authentication-operator, name: trusted-ca-bundle, uid: 3a5e7c7a-1b75-49bd-b1ad-2610fcb12e76]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.280717027+00:00 stderr F I0813 20:09:13.280694 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: default-account-cluster-image-registry-operator, uid: cda37d84-e0e5-4e44-9a71-02182975026d]" virtual=false 2025-08-13T20:09:13.292366881+00:00 stderr F I0813 20:09:13.292254 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler, uid: 79c4826b-23cb-475e-9f0c-eab8eb98f835]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-08-13T20:09:13.292366881+00:00 stderr F I0813 20:09:13.292319 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: helm-chartrepos-view, uid: 22aa2287-b511-4fb0-a162-d3c7fc093bc5]" virtual=false 2025-08-13T20:09:13.292857535+00:00 stderr F I0813 20:09:13.292771 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-etcd-operator, name: etcd-service-ca-bundle, uid: fe11618d-f458-43e0-9d36-031e3c99e7b7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.292878845+00:00 stderr F I0813 20:09:13.292857 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers, uid: 9d6d5d27-a1df-40a9-b87c-44c9c7277ec0]" virtual=false 2025-08-13T20:09:13.293312008+00:00 stderr F I0813 20:09:13.293287 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: dashboard-pod-total, uid: c9b01aa2-5b5e-42b8-b64d-61b1d6ebd241]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.293394050+00:00 stderr F I0813 20:09:13.293354 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers-baremetal, uid: f4e67882-ed45-4d9c-b562-81ffe3d1cb30]" virtual=false 2025-08-13T20:09:13.298474026+00:00 stderr F I0813 20:09:13.293694 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-autoscaler, uid: 90140550-0bf0-45b6-bc21-2756500fa74e]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.298474026+00:00 stderr F I0813 20:09:13.293747 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator, uid: 2b488f4f-97f2-434d-a62e-6ce4db9636a2]" virtual=false 2025-08-13T20:09:13.319708115+00:00 stderr F I0813 20:09:13.319464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-monitoring-operator, uid: f9b8a349-3d3e-43d3-8552-f17fd46dfe4d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.320034494+00:00 stderr F I0813 20:09:13.320012 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator-ext-remediation, uid: 06cb8f99-d6bc-46a7-bcf1-c610b6fc190a]" virtual=false 2025-08-13T20:09:13.320962761+00:00 stderr F I0813 20:09:13.320839 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-extensions-reader, uid: 402476b7-93bc-4cfa-b038-c3108d7ea260]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.320962761+00:00 stderr F I0813 20:09:13.320946 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: marketplace-operator, uid: 62bf9f3d-b2d7-4e1a-bd45-41b46e18211a]" virtual=false 2025-08-13T20:09:13.321174957+00:00 stderr F I0813 20:09:13.321097 1 garbagecollector.go:615] "item has at least 
one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-version-operator, uid: d1772a5b-7528-4218-b1a5-b8e37adaddd0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.321378752+00:00 stderr F I0813 20:09:13.321353 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: e117a220-adbb-4f6c-89f8-b4507726d12f]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.333665 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-auth-delegator, uid: 5b5cc72d-8cfc-47d3-8393-7b66b592f99e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.333743 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: metrics-daemon-sa-rolebinding, uid: 6dc834f4-1d8d-4397-b483-82d00bc808ca]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334156 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers-baremetal, uid: f4e67882-ed45-4d9c-b562-81ffe3d1cb30]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334185 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: 
, name: multus-admission-controller-webhook, uid: 048da2fc-4827-4d11-943b-079ac5e15768]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334226 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-proxy-reader, uid: 0fc0e835-2d7d-4935-a386-83f3cdcc2356]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334292 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-public, uid: 57bec53f-bb5e-4dc9-8957-267dc3c81e5c]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334559 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-network-operator, uid: 741fb11c-22e2-4896-b41e-4bc6506dadb4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334588 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: console-operator, uid: 8943421a-94fe-4e82-ba2b-ff3e7994ada6]" virtual=false 2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334637 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator, uid: 2613967c-5e5d-427c-802b-3219c1314d9f]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334659       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: ingress-operator, uid: ab127597-b672-4c7c-8321-931d286b7455]" virtual=false
2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.334973       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console, uid: b060a198-fcb6-4114-b32b-339ffffe6077]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.335000       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 0c4210ed-76ef-4fcc-b765-edadb0b0c238]" virtual=false
2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.335232       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: collect-profiles, uid: 65ec419a-7cc1-4ad7-851d-a3e1eb8bf158]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.335278341+00:00 stderr F I0813 20:09:13.335265       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-ancillary-tools, uid: 7b8f2404-6a99-452b-99e6-2ee5aee1a907]" virtual=false
2025-08-13T20:09:13.335495837+00:00 stderr F I0813 20:09:13.335439       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: helm-chartrepos-view, uid: 22aa2287-b511-4fb0-a162-d3c7fc093bc5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2025-08-13T20:09:13.335512978+00:00 stderr F I0813 20:09:13.335495       1 garbagecollector.go:549] "Processing item" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: controlplanemachineset.machine.openshift.io, uid: c0896a42-9644-4ae1-b9d2-64b9d8d72a93]" virtual=false
2025-08-13T20:09:13.336057523+00:00 stderr F I0813 20:09:13.336029       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: custom-account-openshift-machine-config-operator, uid: 7d9e4776-ed62-42b1-b845-cc4c6cc67c88]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.336242219+00:00 stderr F I0813 20:09:13.336219       1 garbagecollector.go:549] "Processing item" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: multus.openshift.io, uid: 8e8dfece-6a87-43e4-aef0-9bae8de3390b]" virtual=false
2025-08-13T20:09:13.336650830+00:00 stderr F I0813 20:09:13.336624       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: default-account-cluster-image-registry-operator, uid: cda37d84-e0e5-4e44-9a71-02182975026d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.336814855+00:00 stderr F I0813 20:09:13.336737       1 garbagecollector.go:549] "Processing item" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: network-node-identity.openshift.io, uid: 6dec73bc-003d-45c2-b80b-6abdd589c12e]" virtual=false
2025-08-13T20:09:13.337409332+00:00 stderr F I0813 20:09:13.337334       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: cluster-samples-operator-imageconfig-reader, uid: 6e8698c1-8ae7-4cf2-9591-7d7aaf70f1e4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.338635057+00:00 stderr F I0813 20:09:13.337484       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/PriorityLevelConfiguration, namespace: , name: openshift-control-plane-operators, uid: e58c2fe6-ebe9-4808-9cea-7443d2a56c5c]" virtual=false
2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.366693       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator-auth-delegator, uid: 6aee10d5-48ad-432d-ae71-69853ed0161c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.366768       1 garbagecollector.go:549] "Processing item" item="[scheduling.k8s.io/v1/PriorityClass, namespace: , name: openshift-user-critical, uid: 53eb906b-da85-4299-867d-b35bdfc9d7dd]" virtual=false
2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367313       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-controllers, uid: 9d6d5d27-a1df-40a9-b87c-44c9c7277ec0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367335       1 garbagecollector.go:549] "Processing item" item="[operator.openshift.io/v1/KubeControllerManager, namespace: , name: cluster, uid: 466fedf7-9ce3-473e-9b71-9bf08b103d4a]" virtual=false
2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367564       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: control-plane-machine-set-operator, uid: a398c8c9-26b5-430a-94f5-42d3f40459d5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367583       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: prometheus-k8s, uid: 6c37d9c1-9f73-4181-99c5-8e303e842810]" virtual=false
2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367627       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: marketplace-operator, uid: 62bf9f3d-b2d7-4e1a-bd45-41b46e18211a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.367936587+00:00 stderr F I0813 20:09:13.367678       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: ingress-operator, uid: 47b053c2-86d6-43b7-8698-8ddc264337d8]" virtual=false
2025-08-13T20:09:13.374689641+00:00 stderr F I0813 20:09:13.374596       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: csi-snapshot-controller-operator-clusterrole, uid: 3c762506-422e-4421-915e-5872f5c48dbd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.375007130+00:00 stderr F I0813 20:09:13.374863       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: machine-api-controllers, uid: c9a01375-7b9c-4ef6-9359-758fb218dce6]" virtual=false
2025-08-13T20:09:13.376608086+00:00 stderr F I0813 20:09:13.376539       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: machine-approver, uid: e117a220-adbb-4f6c-89f8-b4507726d12f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.376703549+00:00 stderr F I0813 20:09:13.376686       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: ffc884e1-7792-4e09-a959-3dd895bdc7be]" virtual=false
2025-08-13T20:09:13.377532662+00:00 stderr F I0813 20:09:13.377507       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: multus.openshift.io, uid: 8e8dfece-6a87-43e4-aef0-9bae8de3390b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.377627915+00:00 stderr F I0813 20:09:13.377610       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: prometheus-k8s, uid: 82076a4f-26ca-40e7-93f2-a97aeb21fd34]" virtual=false
2025-08-13T20:09:13.378016906+00:00 stderr F I0813 20:09:13.377993       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator-ext-remediation, uid: 06cb8f99-d6bc-46a7-bcf1-c610b6fc190a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.378108909+00:00 stderr F I0813 20:09:13.378091       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: 1e11e685-d90b-49c4-a86c-7a1bfec99a48]" virtual=false
2025-08-13T20:09:13.380415715+00:00 stderr F I0813 20:09:13.380389       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: console-operator, uid: cd350fa5-6dc3-49dd-a977-ebe3ffe5edaa]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.380584200+00:00 stderr F I0813 20:09:13.380480       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: console-operator, uid: 8943421a-94fe-4e82-ba2b-ff3e7994ada6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.380584200+00:00 stderr F I0813 20:09:13.380551       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: 23bfecb0-7911-4670-91df-97738b1eff9e]" virtual=false
2025-08-13T20:09:13.380759865+00:00 stderr F I0813 20:09:13.380502       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: d15aeb28-0bc5-4c54-abc8-682b1d2fe551]" virtual=false
2025-08-13T20:09:13.380884209+00:00 stderr F I0813 20:09:13.380865       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: machine-api-operator, uid: 2b488f4f-97f2-434d-a62e-6ce4db9636a2]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.381071064+00:00 stderr F I0813 20:09:13.381013       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: prometheus-k8s, uid: eba18ce2-3b68-4a35-8e42-62f924dd6873]" virtual=false
2025-08-13T20:09:13.406487093+00:00 stderr F I0813 20:09:13.406365       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-public, uid: 57bec53f-bb5e-4dc9-8957-267dc3c81e5c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.406487093+00:00 stderr F I0813 20:09:13.406462       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: dns-operator, uid: 57d9aa60-9c70-4a04-875e-7d6d78ca3521]" virtual=false
2025-08-13T20:09:13.407113531+00:00 stderr F I0813 20:09:13.407055       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: metrics-daemon-sa-rolebinding, uid: 6dc834f4-1d8d-4397-b483-82d00bc808ca]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.407135321+00:00 stderr F I0813 20:09:13.407113       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: bc19cbb1-cf64-4d32-9037-aae84228edd0]" virtual=false
2025-08-13T20:09:13.407333367+00:00 stderr F I0813 20:09:13.407269       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: prometheus-k8s, uid: 0c4210ed-76ef-4fcc-b765-edadb0b0c238]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.407333367+00:00 stderr F I0813 20:09:13.407306       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[operator.openshift.io/v1/KubeControllerManager, namespace: , name: cluster, uid: 466fedf7-9ce3-473e-9b71-9bf08b103d4a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}]
2025-08-13T20:09:13.407440120+00:00 stderr F I0813 20:09:13.407343       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-version, name: prometheus-k8s, uid: fada1de4-8b21-4108-922b-4896582712b1]" virtual=false
2025-08-13T20:09:13.407496982+00:00 stderr F I0813 20:09:13.407290       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: network-node-identity.openshift.io, uid: 6dec73bc-003d-45c2-b80b-6abdd589c12e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.407560733+00:00 stderr F I0813 20:09:13.407546       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: openshift-network-public-role, uid: df54377a-e4ab-45dd-b1aa-937aea1ce027]" virtual=false
2025-08-13T20:09:13.407860632+00:00 stderr F I0813 20:09:13.407695       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/PriorityLevelConfiguration, namespace: , name: openshift-control-plane-operators, uid: e58c2fe6-ebe9-4808-9cea-7443d2a56c5c]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.412673       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication-operator, name: prometheus-k8s, uid: f07b75b1-a4a5-4e43-897d-6a260cd48bda]" virtual=false
2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408221       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-ancillary-tools, uid: 7b8f2404-6a99-452b-99e6-2ee5aee1a907]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.412981       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: 84939b38-04f1-4a93-a3ca-c9043e356d14]" virtual=false
2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408256       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-admission-controller-webhook, uid: 048da2fc-4827-4d11-943b-079ac5e15768]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.413217       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-controllers, uid: fb58d9da-83d6-4a5c-8e72-5cd11c3d11bd]" virtual=false
2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408416       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration, namespace: , name: controlplanemachineset.machine.openshift.io, uid: c0896a42-9644-4ae1-b9d2-64b9d8d72a93]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.413340       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6953f399-e3c8-4b06-943f-747a47e7d1dd]" virtual=false
2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408470       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: ingress-operator, uid: ab127597-b672-4c7c-8321-931d286b7455]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.413531       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: c9d3067d-afa1-4cb6-8ad4-50b15b22f883]" virtual=false
2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.408544       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ingress-operator, name: ingress-operator, uid: 47b053c2-86d6-43b7-8698-8ddc264337d8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.413684849+00:00 stderr F I0813 20:09:13.413632       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: cd505f93-0531-428a-87d0-81f5b7f889c3]" virtual=false
2025-08-13T20:09:13.413750331+00:00 stderr F I0813 20:09:13.409166       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[scheduling.k8s.io/v1/PriorityClass, namespace: , name: openshift-user-critical, uid: 53eb906b-da85-4299-867d-b35bdfc9d7dd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.413763121+00:00 stderr F I0813 20:09:13.413753       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: 92ba624a-dcbc-4477-85d4-f2df67ae5db7]" virtual=false
2025-08-13T20:09:13.414027469+00:00 stderr F I0813 20:09:13.411010       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: f89858c6-0219-4025-8fcf-4fd198d46157]" virtual=false
2025-08-13T20:09:13.424537780+00:00 stderr F I0813 20:09:13.424475       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console, name: prometheus-k8s, uid: 82076a4f-26ca-40e7-93f2-a97aeb21fd34]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.424715915+00:00 stderr F I0813 20:09:13.424696       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: 7b72705a-23d0-4665-9d24-8e04a8aae0ef]" virtual=false
2025-08-13T20:09:13.425479357+00:00 stderr F I0813 20:09:13.425452       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-operator-lifecycle-manager, name: operator-lifecycle-manager-metrics, uid: ffc884e1-7792-4e09-a959-3dd895bdc7be]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.425969221+00:00 stderr F I0813 20:09:13.425946       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: 7187d3b6-e6d6-4ff7-b923-65dff4f99eed]" virtual=false
2025-08-13T20:09:13.426696432+00:00 stderr F I0813 20:09:13.426624       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: prometheus-k8s, uid: 6c37d9c1-9f73-4181-99c5-8e303e842810]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.427005881+00:00 stderr F I0813 20:09:13.426982       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: e28ff59d-f034-4be4-a92c-c9dc921cc933]" virtual=false
2025-08-13T20:09:13.427273909+00:00 stderr F I0813 20:09:13.427250       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-samples-operator, name: prometheus-k8s, uid: 23bfecb0-7911-4670-91df-97738b1eff9e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.427334890+00:00 stderr F I0813 20:09:13.427318       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: prometheus-k8s, uid: f8cc27cf-7bee-48e2-88b3-8be26f38260a]" virtual=false
2025-08-13T20:09:13.435208336+00:00 stderr F I0813 20:09:13.435142       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-image-registry, name: cluster-image-registry-operator, uid: d15aeb28-0bc5-4c54-abc8-682b1d2fe551]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.435349080+00:00 stderr F I0813 20:09:13.435330       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 2e6f30f3-f11e-4248-ae14-d64b23438f44]" virtual=false
2025-08-13T20:09:13.435880395+00:00 stderr F I0813 20:09:13.435715       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-marketplace, name: openshift-marketplace-metrics, uid: 1e11e685-d90b-49c4-a86c-7a1bfec99a48]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.436124982+00:00 stderr F I0813 20:09:13.436095       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication, name: prometheus-k8s, uid: f1df4a56-c0f4-4813-bb2c-0b0a559bf7eb]" virtual=false
2025-08-13T20:09:13.436457652+00:00 stderr F I0813 20:09:13.436435       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-dns-operator, name: dns-operator, uid: 57d9aa60-9c70-4a04-875e-7d6d78ca3521]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.436545814+00:00 stderr F I0813 20:09:13.436531       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-configmap-reader, uid: 92fde0d1-fb5a-4510-8310-6ae35d89235f]" virtual=false
2025-08-13T20:09:13.441559038+00:00 stderr F I0813 20:09:13.441526       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: openshift-network-public-role, uid: df54377a-e4ab-45dd-b1aa-937aea1ce027]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.441675811+00:00 stderr F I0813 20:09:13.441660       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-operator, name: prometheus-k8s, uid: 96b52d1f-be7e-421c-89e5-37d718c9e5d3]" virtual=false
2025-08-13T20:09:13.442136135+00:00 stderr F I0813 20:09:13.442085       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config, name: machine-api-controllers, uid: c9a01375-7b9c-4ef6-9359-758fb218dce6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.442270429+00:00 stderr F I0813 20:09:13.442250       1 garbagecollector.go:549] "Processing item" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: hostnetwork-v2, uid: 456c5d8f-a352-4d46-a237-b5198b2b47bf]" virtual=false
2025-08-13T20:09:13.442769403+00:00 stderr F I0813 20:09:13.442745       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: prometheus-k8s-cluster-autoscaler-operator, uid: c9d3067d-afa1-4cb6-8ad4-50b15b22f883]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.442956278+00:00 stderr F I0813 20:09:13.442934       1 garbagecollector.go:549] "Processing item" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: machine-api-termination-handler, uid: 0d71aa13-f5f7-4516-9777-54ea4b30344d]" virtual=false
2025-08-13T20:09:13.443288498+00:00 stderr F I0813 20:09:13.443186       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: machine-api-controllers, uid: fb58d9da-83d6-4a5c-8e72-5cd11c3d11bd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.443348979+00:00 stderr F I0813 20:09:13.443334       1 garbagecollector.go:549] "Processing item" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: nonroot-v2, uid: 414f5c6e-4eb7-4571-b5b7-7dc6cb18bba6]" virtual=false
2025-08-13T20:09:13.443516034+00:00 stderr F I0813 20:09:13.443497       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-console-operator, name: prometheus-k8s, uid: eba18ce2-3b68-4a35-8e42-62f924dd6873]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.443564586+00:00 stderr F I0813 20:09:13.443551       1 garbagecollector.go:549] "Processing item" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: restricted-v2, uid: ccc1f43b-6b7f-41ce-a399-bc1d0b2e9f0d]" virtual=false
2025-08-13T20:09:13.443891595+00:00 stderr F I0813 20:09:13.443838       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-version, name: prometheus-k8s, uid: fada1de4-8b21-4108-922b-4896582712b1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.444072160+00:00 stderr F I0813 20:09:13.444056       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver, uid: 08401087-e2b5-41ab-b2ae-1b49b4b21090]" virtual=false
2025-08-13T20:09:13.444386469+00:00 stderr F I0813 20:09:13.444355       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: f89858c6-0219-4025-8fcf-4fd198d46157]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.444536564+00:00 stderr F I0813 20:09:13.444491       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-operator, uid: 435f63da-cd39-4cca-8a62-b69e4f06c2e9]" virtual=false
2025-08-13T20:09:13.444979366+00:00 stderr F I0813 20:09:13.444955       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-oauth-apiserver, name: prometheus-k8s, uid: cd505f93-0531-428a-87d0-81f5b7f889c3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.445090699+00:00 stderr F I0813 20:09:13.445075       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-sar, uid: 939310fe-0d2e-4a8a-b1b3-0a5805456306]" virtual=false
2025-08-13T20:09:13.445266624+00:00 stderr F I0813 20:09:13.445246       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication-operator, name: prometheus-k8s, uid: f07b75b1-a4a5-4e43-897d-6a260cd48bda]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.445316236+00:00 stderr F I0813 20:09:13.445303       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-authentication-operator, uid: e5927a4e-7a4c-4a11-a3d1-4186ce46adf3]" virtual=false
2025-08-13T20:09:13.451400100+00:00 stderr F I0813 20:09:13.451347       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-apiserver-operator, name: prometheus-k8s, uid: 92ba624a-dcbc-4477-85d4-f2df67ae5db7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.451501153+00:00 stderr F I0813 20:09:13.451485       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-controller-manager, uid: 68cc303b-a0cf-4b58-b7ad-75a4bff72246]" virtual=false
2025-08-13T20:09:13.451657248+00:00 stderr F I0813 20:09:13.451637       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-multus, name: prometheus-k8s, uid: f8cc27cf-7bee-48e2-88b3-8be26f38260a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}]
2025-08-13T20:09:13.451708439+00:00 stderr F I0813 20:09:13.451694       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-etcd-operator, uid: 9a9602ea-26a7-4c24-9fda-572140113932]" virtual=false
2025-08-13T20:09:13.452104661+00:00 stderr F I0813 20:09:13.452081       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-machine-approver, name: prometheus-k8s, uid: 84939b38-04f1-4a93-a3ca-c9043e356d14]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.452163852+00:00 stderr F I0813 20:09:13.452150       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-kube-apiserver-operator, uid: 29bd4a1f-f8f3-4c5b-8b56-4b5faf744ff9]" virtual=false
2025-08-13T20:09:13.452397689+00:00 stderr F I0813 20:09:13.452374       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: control-plane-machine-set-operator, uid: 6953f399-e3c8-4b06-943f-747a47e7d1dd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.452450420+00:00 stderr F I0813 20:09:13.452437       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-monitoring-metrics, uid: 9bbe637c-a1dc-48d4-bf30-3aee0abd189a]" virtual=false
2025-08-13T20:09:13.455024574+00:00 stderr F I0813 20:09:13.454984       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-controller-manager-operator, name: prometheus-k8s, uid: 7b72705a-23d0-4665-9d24-8e04a8aae0ef]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.455118307+00:00 stderr F I0813 20:09:13.455076       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver, uid: b67a06ba-eaf2-4045-a4f3-80f32bd3af74]" virtual=false
2025-08-13T20:09:13.455267011+00:00 stderr F I0813 20:09:13.455247       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-machine-api, name: cluster-autoscaler-operator, uid: e28ff59d-f034-4be4-a92c-c9dc921cc933]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.455321923+00:00 stderr F I0813 20:09:13.455307       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver-sar, uid: d58e2800-eb4f-4b97-a79d-1d5375725f25]" virtual=false
2025-08-13T20:09:13.455840988+00:00 stderr F I0813 20:09:13.455766       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-cluster-storage-operator, name: csi-snapshot-controller-operator-role, uid: bc19cbb1-cf64-4d32-9037-aae84228edd0]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.455956611+00:00 stderr F I0813 20:09:13.455934       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-server, uid: 1cc90dfb-3877-4794-b4fb-a55f9f9fd21f]" virtual=false
2025-08-13T20:09:13.461531121+00:00 stderr F I0813 20:09:13.461492       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-kube-apiserver, name: prometheus-k8s, uid: 7187d3b6-e6d6-4ff7-b923-65dff4f99eed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.461651464+00:00 stderr F I0813 20:09:13.461635       1 garbagecollector.go:549] "Processing item" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-ovn-kubernetes, uid: e774665f-b1c0-416c-adde-718d512e91f9]" virtual=false
2025-08-13T20:09:13.468554532+00:00 stderr F I0813 20:09:13.468521       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-managed, name: console-configmap-reader, uid: 92fde0d1-fb5a-4510-8310-6ae35d89235f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.468869991+00:00 stderr F I0813 20:09:13.468849       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-cluster-readers, uid: 11e86910-6d61-4703-8234-c26167470b26]" virtual=false
2025-08-13T20:09:13.473698810+00:00 stderr F I0813 20:09:13.473638       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-authentication, name: prometheus-k8s, uid: f1df4a56-c0f4-4813-bb2c-0b0a559bf7eb]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}]
2025-08-13T20:09:13.474087021+00:00 stderr F I0813 20:09:13.474021       1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-group, uid: 7560b26d-8f8c-4cda-ac4b-0a2f586da492]" virtual=false
2025-08-13T20:09:13.474732509+00:00 stderr F I0813 20:09:13.474675       1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 2e6f30f3-f11e-4248-ae14-d64b23438f44]"
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.474993347+00:00 stderr F I0813 20:09:13.474880 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-transient, uid: 9ca91046-71b1-4755-8430-f290694fb843]" virtual=false 2025-08-13T20:09:13.484234522+00:00 stderr F I0813 20:09:13.483741 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/Role, namespace: openshift-config-operator, name: prometheus-k8s, uid: 96b52d1f-be7e-421c-89e5-37d718c9e5d3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.484234522+00:00 stderr F I0813 20:09:13.483859 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-whereabouts, uid: be8f44a2-0530-4fc0-a743-77944a2d6cfe]" virtual=false 2025-08-13T20:09:13.499940822+00:00 stderr F I0813 20:09:13.499769 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: hostnetwork-v2, uid: 456c5d8f-a352-4d46-a237-b5198b2b47bf]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.500235710+00:00 stderr F I0813 20:09:13.500146 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-diagnostics, uid: 79dad4ee-c1a9-4879-a1dd-69ea8ca307a1]" virtual=false 2025-08-13T20:09:13.504300597+00:00 stderr F I0813 20:09:13.504168 1 garbagecollector.go:615] "item has at least one existing 
owner, will not garbage collect" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: machine-api-termination-handler, uid: 0d71aa13-f5f7-4516-9777-54ea4b30344d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.504300597+00:00 stderr F I0813 20:09:13.504250 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-node-identity, uid: 2fd4e670-d0fc-405b-9bb0-978652cd7871]" virtual=false 2025-08-13T20:09:13.512250795+00:00 stderr F I0813 20:09:13.511857 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver, uid: 08401087-e2b5-41ab-b2ae-1b49b4b21090]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.512250795+00:00 stderr F I0813 20:09:13.511992 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: olm-operator-binding-openshift-operator-lifecycle-manager, uid: 9ccd11bb-8a90-4da3-bd98-a6b01e412542]" virtual=false 2025-08-13T20:09:13.512566694+00:00 stderr F I0813 20:09:13.512531 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: nonroot-v2, uid: 414f5c6e-4eb7-4571-b5b7-7dc6cb18bba6]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.512652596+00:00 stderr F I0813 20:09:13.512631 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: 
openshift-dns-operator, uid: 97ca47d3-e623-4331-b837-ffc3e3dac836]" virtual=false 2025-08-13T20:09:13.519372129+00:00 stderr F I0813 20:09:13.519325 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[security.openshift.io/v1/SecurityContextConstraints, namespace: , name: restricted-v2, uid: ccc1f43b-6b7f-41ce-a399-bc1d0b2e9f0d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.519467732+00:00 stderr F I0813 20:09:13.519452 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ingress-operator, uid: 26eba0ad-34cf-49ed-9698-bb35cae97907]" virtual=false 2025-08-13T20:09:13.533261697+00:00 stderr F I0813 20:09:13.533170 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-operator, uid: 435f63da-cd39-4cca-8a62-b69e4f06c2e9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.533261697+00:00 stderr F I0813 20:09:13.533239 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-iptables-alerter, uid: 848eff75-354e-4076-906d-a4b9cc19945b]" virtual=false 2025-08-13T20:09:13.533665949+00:00 stderr F I0813 20:09:13.533638 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-authentication-operator, uid: e5927a4e-7a4c-4a11-a3d1-4186ce46adf3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 
2025-08-13T20:09:13.533735521+00:00 stderr F I0813 20:09:13.533719 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 7f642a86-d7eb-4509-8666-f07259f6a62f]" virtual=false 2025-08-13T20:09:13.536348366+00:00 stderr F I0813 20:09:13.536318 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-controller-manager, uid: 68cc303b-a0cf-4b58-b7ad-75a4bff72246]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.536468969+00:00 stderr F I0813 20:09:13.536423 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-identity-limited, uid: 52425be5-e912-46fb-8a48-8e773a98d5c1]" virtual=false 2025-08-13T20:09:13.536743237+00:00 stderr F I0813 20:09:13.536672 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-apiserver-sar, uid: 939310fe-0d2e-4a8a-b1b3-0a5805456306]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.536759008+00:00 stderr F I0813 20:09:13.536738 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-kube-rbac-proxy, uid: 245c37d5-9ba8-4086-aa11-840b6e8a724e]" virtual=false 2025-08-13T20:09:13.567349115+00:00 stderr F I0813 20:09:13.566833 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, 
namespace: , name: openshift-etcd-operator, uid: 9a9602ea-26a7-4c24-9fda-572140113932]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.568242800+00:00 stderr F I0813 20:09:13.568208 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-monitoring-metrics, uid: 9bbe637c-a1dc-48d4-bf30-3aee0abd189a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.568333043+00:00 stderr F I0813 20:09:13.568313 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: registry-monitoring, uid: f277fb99-3fcb-44ac-b911-507aa165a19e]" virtual=false 2025-08-13T20:09:13.569097585+00:00 stderr F I0813 20:09:13.568416 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver, uid: b67a06ba-eaf2-4045-a4f3-80f32bd3af74]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.569097585+00:00 stderr F I0813 20:09:13.568479 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-console-operator, name: console-operator-config, uid: 412419b7-04a8-4c46-913c-1f7d4e7ef828]" virtual=false 2025-08-13T20:09:13.569097585+00:00 stderr F I0813 20:09:13.568585 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: prometheus-k8s-scheduler-resources, uid: 83861e61-c21e-4d21-bf6a-21620ffd522d]" virtual=false 2025-08-13T20:09:13.575425206+00:00 stderr F 
I0813 20:09:13.572339 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-kube-apiserver-operator, uid: 29bd4a1f-f8f3-4c5b-8b56-4b5faf744ff9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.575425206+00:00 stderr F I0813 20:09:13.572424 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: release-verification, uid: 10773088-74cb-4fd7-a888-64eb35390cc8]" virtual=false 2025-08-13T20:09:13.575425206+00:00 stderr F I0813 20:09:13.575071 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-server, uid: 1cc90dfb-3877-4794-b4fb-a55f9f9fd21f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.575425206+00:00 stderr F I0813 20:09:13.575139 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-config-managed, name: openshift-network-features, uid: e0bb8616-5510-480e-98cd-00a3fc1de91b]" virtual=false 2025-08-13T20:09:13.577416103+00:00 stderr F I0813 20:09:13.576873 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-oauth-apiserver-sar, uid: d58e2800-eb4f-4b97-a79d-1d5375725f25]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.577416103+00:00 stderr F I0813 20:09:13.576958 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: 
openshift-image-registry, name: trusted-ca, uid: 817847ce-1358-44f5-9e39-d680eb81f478]" virtual=false 2025-08-13T20:09:13.587123532+00:00 stderr F I0813 20:09:13.586674 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[flowcontrol.apiserver.k8s.io/v1/FlowSchema, namespace: , name: openshift-ovn-kubernetes, uid: e774665f-b1c0-416c-adde-718d512e91f9]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.587123532+00:00 stderr F I0813 20:09:13.586762 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:controller:machine-approver, uid: 7b6449e0-b191-4f94-ad74-226c264035e7]" virtual=false 2025-08-13T20:09:13.587123532+00:00 stderr F I0813 20:09:13.587023 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-cluster-readers, uid: 11e86910-6d61-4703-8234-c26167470b26]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.587123532+00:00 stderr F I0813 20:09:13.587114 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:authentication, uid: 32dcecb0-39fb-4417-a02d-801980f312a5]" virtual=false 2025-08-13T20:09:13.597370555+00:00 stderr F I0813 20:09:13.597273 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-group, uid: 7560b26d-8f8c-4cda-ac4b-0a2f586da492]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.597370555+00:00 stderr F I0813 20:09:13.597352 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:cluster-kube-scheduler-operator, uid: cac5ab58-ac29-4b01-978f-1d735a5c5af1]" virtual=false 2025-08-13T20:09:13.608836714+00:00 stderr F I0813 20:09:13.608728 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-transient, uid: 9ca91046-71b1-4755-8430-f290694fb843]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.608896796+00:00 stderr F I0813 20:09:13.608867 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:etcd-operator, uid: fe8cbe89-e1d8-491c-abcd-6405df195395]" virtual=false 2025-08-13T20:09:13.611452139+00:00 stderr F I0813 20:09:13.611353 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-diagnostics, uid: 79dad4ee-c1a9-4879-a1dd-69ea8ca307a1]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.611491640+00:00 stderr F I0813 20:09:13.611409 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-apiserver-operator, uid: 6d428360-3d92-42b3-a692-62fcef7dc28f]" virtual=false 
2025-08-13T20:09:13.615759543+00:00 stderr F I0813 20:09:13.615705 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: multus-whereabouts, uid: be8f44a2-0530-4fc0-a743-77944a2d6cfe]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.615830865+00:00 stderr F I0813 20:09:13.615766 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-controller-manager-operator, uid: 8fa65b16-3f3f-4a00-9f7f-b6347622e728]" virtual=false 2025-08-13T20:09:13.626977234+00:00 stderr F I0813 20:09:13.626765 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: network-node-identity, uid: 2fd4e670-d0fc-405b-9bb0-978652cd7871]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.626977234+00:00 stderr F I0813 20:09:13.626949 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-storage-version-migrator-operator, uid: 7c611553-0af9-459b-9675-c9d6e321b535]" virtual=false 2025-08-13T20:09:13.637646600+00:00 stderr F I0813 20:09:13.637605 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: olm-operator-binding-openshift-operator-lifecycle-manager, uid: 9ccd11bb-8a90-4da3-bd98-a6b01e412542]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.637739093+00:00 stderr F I0813 20:09:13.637723 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-apiserver-operator, uid: 3b1d0d06-e85b-43d3-a710-b1842b977bda]" virtual=false 2025-08-13T20:09:13.646385521+00:00 stderr F I0813 20:09:13.646352 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-dns-operator, uid: 97ca47d3-e623-4331-b837-ffc3e3dac836]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.646500994+00:00 stderr F I0813 20:09:13.646465 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-config-operator, uid: 88bb6e9f-f058-4bf1-91ec-fac61efb59fd]" virtual=false 2025-08-13T20:09:13.649411438+00:00 stderr F I0813 20:09:13.649386 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ingress-operator, uid: 26eba0ad-34cf-49ed-9698-bb35cae97907]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.649478569+00:00 stderr F I0813 20:09:13.649464 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-controller-manager-operator, uid: 6da2b488-b18a-4be7-b541-3d8d33584c6e]" virtual=false 
2025-08-13T20:09:13.657377166+00:00 stderr F I0813 20:09:13.657225 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-iptables-alerter, uid: 848eff75-354e-4076-906d-a4b9cc19945b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.657377166+00:00 stderr F I0813 20:09:13.657299 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:service-ca-operator, uid: 90ce341e-6032-46e0-b9d2-7a01b6b8a68b]" virtual=false 2025-08-13T20:09:13.661150164+00:00 stderr F I0813 20:09:13.659981 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-control-plane-limited, uid: 7f642a86-d7eb-4509-8666-f07259f6a62f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.661150164+00:00 stderr F I0813 20:09:13.660034 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:scc:restricted-v2, uid: cb821e01-a19c-4b4c-8409-e70b79a0669a]" virtual=false 2025-08-13T20:09:13.666638971+00:00 stderr F I0813 20:09:13.666528 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-identity-limited, uid: 52425be5-e912-46fb-8a48-8e773a98d5c1]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.666638971+00:00 stderr F I0813 20:09:13.666605 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-config, uid: c6ebf7f7-9838-428d-bacd-d9377dca684d]" virtual=false 2025-08-13T20:09:13.676642828+00:00 stderr F I0813 20:09:13.676136 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: openshift-ovn-kubernetes-node-kube-rbac-proxy, uid: 245c37d5-9ba8-4086-aa11-840b6e8a724e]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.676642828+00:00 stderr F I0813 20:09:13.676211 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-1, uid: 7d5f234a-24b6-4ac1-954c-7cc8434206a4]" virtual=false 2025-08-13T20:09:13.689615640+00:00 stderr F I0813 20:09:13.688406 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: registry-monitoring, uid: f277fb99-3fcb-44ac-b911-507aa165a19e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.689615640+00:00 stderr F I0813 20:09:13.688528 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-2, uid: 899b36ec-d94b-42d9-8be5-92ce893bdf60]" virtual=false 2025-08-13T20:09:13.690349261+00:00 stderr F I0813 20:09:13.690264 1 
garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-console-operator, name: console-operator-config, uid: 412419b7-04a8-4c46-913c-1f7d4e7ef828]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.690349261+00:00 stderr F I0813 20:09:13.690337 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-3, uid: 1543dfc3-df4c-45db-8acd-845d6e66194a]" virtual=false 2025-08-13T20:09:13.697501996+00:00 stderr F I0813 20:09:13.697274 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: prometheus-k8s-scheduler-resources, uid: 83861e61-c21e-4d21-bf6a-21620ffd522d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.697501996+00:00 stderr F I0813 20:09:13.697347 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-4, uid: 86dedbd4-e2d0-4b4f-8c78-ace31808403c]" virtual=false 2025-08-13T20:09:13.700591095+00:00 stderr F I0813 20:09:13.700486 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: release-verification, uid: 10773088-74cb-4fd7-a888-64eb35390cc8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.700591095+00:00 stderr F I0813 20:09:13.700564 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-5, uid: 
9924a3a2-9116-4daa-916b-0afdeb883e44]" virtual=false 2025-08-13T20:09:13.708161272+00:00 stderr F I0813 20:09:13.707979 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-config-managed, name: openshift-network-features, uid: e0bb8616-5510-480e-98cd-00a3fc1de91b]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.708161272+00:00 stderr F I0813 20:09:13.708069 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-6, uid: 46686543-d20d-4e62-906e-aaf6c4de8edf]" virtual=false 2025-08-13T20:09:13.712588149+00:00 stderr F I0813 20:09:13.712498 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-image-registry, name: trusted-ca, uid: 817847ce-1358-44f5-9e39-d680eb81f478]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.712836536+00:00 stderr F I0813 20:09:13.712813 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-kube-storage-version-migrator-operator, name: config, uid: 1bb5d3ff-abf0-42d2-868b-f27622788fc1]" virtual=false 2025-08-13T20:09:13.719177678+00:00 stderr F I0813 20:09:13.719008 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:controller:machine-approver, uid: 7b6449e0-b191-4f94-ad74-226c264035e7]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.719177678+00:00 stderr F I0813 20:09:13.719076 1 
garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy, uid: d55be534-ee47-42be-a177-eeff870f3f9c]" virtual=false 2025-08-13T20:09:13.723093280+00:00 stderr F I0813 20:09:13.722922 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:authentication, uid: 32dcecb0-39fb-4417-a02d-801980f312a5]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.723093280+00:00 stderr F I0813 20:09:13.722970 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy-cluster-autoscaler-operator, uid: 21319178-4025-4471-9dc1-a15898759ed3]" virtual=false 2025-08-13T20:09:13.737514924+00:00 stderr F I0813 20:09:13.737391 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:cluster-kube-scheduler-operator, uid: cac5ab58-ac29-4b01-978f-1d735a5c5af1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.737514924+00:00 stderr F I0813 20:09:13.737480 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: machine-api-operator-images, uid: 9037023c-6158-4d9d-8f71-bae268832ce9]" virtual=false 2025-08-13T20:09:13.744522544+00:00 stderr F I0813 20:09:13.744425 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-1, uid: 7d5f234a-24b6-4ac1-954c-7cc8434206a4]" 2025-08-13T20:09:13.744588796+00:00 stderr F I0813 
20:09:13.744524 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-api, name: mao-trusted-ca, uid: 9a2cfbbf-b8be-4334-82e9-91abbed9baee]" virtual=false 2025-08-13T20:09:13.748353244+00:00 stderr F I0813 20:09:13.748283 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:etcd-operator, uid: fe8cbe89-e1d8-491c-abcd-6405df195395]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.748559260+00:00 stderr F I0813 20:09:13.748531 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: coreos-bootimages, uid: 32557db9-6675-42cd-8433-382f972fd9ed]" virtual=false 2025-08-13T20:09:13.749489707+00:00 stderr F I0813 20:09:13.749461 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-2, uid: 899b36ec-d94b-42d9-8be5-92ce893bdf60]" 2025-08-13T20:09:13.749588160+00:00 stderr F I0813 20:09:13.749568 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: kube-rbac-proxy, uid: ba7edbb4-1ba2-49b6-98a7-d849069e9f80]" virtual=false 2025-08-13T20:09:13.750179797+00:00 stderr F I0813 20:09:13.750150 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-controller-manager-operator, uid: 8fa65b16-3f3f-4a00-9f7f-b6347622e728]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.750269479+00:00 stderr F 
I0813 20:09:13.750240 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-operator-images, uid: 1a9b80cc-4335-4f9b-b76a-0fd268339500]" virtual=false 2025-08-13T20:09:13.754986244+00:00 stderr F I0813 20:09:13.754877 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-3, uid: 1543dfc3-df4c-45db-8acd-845d6e66194a]" 2025-08-13T20:09:13.755054466+00:00 stderr F I0813 20:09:13.754996 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-osimageurl, uid: 6dc55689-0211-4518-ae6b-72770afc52d4]" virtual=false 2025-08-13T20:09:13.755302883+00:00 stderr F I0813 20:09:13.755184 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-apiserver-operator, uid: 6d428360-3d92-42b3-a692-62fcef7dc28f]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.755302883+00:00 stderr F I0813 20:09:13.755231 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-trusted-ca, uid: d0f4a635-2baa-4cf8-ab26-6cb4ffda662d]" virtual=false 2025-08-13T20:09:13.758261748+00:00 stderr F I0813 20:09:13.758150 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-4, uid: 86dedbd4-e2d0-4b4f-8c78-ace31808403c]" 2025-08-13T20:09:13.758261748+00:00 stderr F I0813 20:09:13.758198 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: cni-copy-resources, uid: 
3807440c-7cb7-402f-9b6c-985e141f073d]" virtual=false 2025-08-13T20:09:13.759644848+00:00 stderr F I0813 20:09:13.759589 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:kube-storage-version-migrator-operator, uid: 7c611553-0af9-459b-9675-c9d6e321b535]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.759644848+00:00 stderr F I0813 20:09:13.759615 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: default-cni-sysctl-allowlist, uid: 4b4aac2c-265b-4c8a-86d2-81b7b6b58bd6]" virtual=false 2025-08-13T20:09:13.764300921+00:00 stderr F I0813 20:09:13.764230 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-5, uid: 9924a3a2-9116-4daa-916b-0afdeb883e44]" 2025-08-13T20:09:13.764445256+00:00 stderr F I0813 20:09:13.764423 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-multus, name: multus-daemon-config, uid: 0040ed70-dcc9-471d-b68f-333eb3d3a64c]" virtual=false 2025-08-13T20:09:13.765971299+00:00 stderr F I0813 20:09:13.765741 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-kube-scheduler, name: revision-status-6, uid: 46686543-d20d-4e62-906e-aaf6c4de8edf]" 2025-08-13T20:09:13.766123394+00:00 stderr F I0813 20:09:13.766055 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-node-identity, name: ovnkube-identity-cm, uid: 08920289-81e3-422d-8580-e3f323d2bc00]" virtual=false 2025-08-13T20:09:13.775135382+00:00 stderr F I0813 20:09:13.775063 1 garbagecollector.go:615] "item has at least one existing owner, will 
not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-apiserver-operator, uid: 3b1d0d06-e85b-43d3-a710-b1842b977bda]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.775284306+00:00 stderr F I0813 20:09:13.775260 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-operator, name: applied-cluster, uid: 0f792eb6-67bb-459c-8f9b-ddc568eb49b6]" virtual=false 2025-08-13T20:09:13.780334191+00:00 stderr F I0813 20:09:13.780199 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-config-operator, uid: 88bb6e9f-f058-4bf1-91ec-fac61efb59fd]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.780334191+00:00 stderr F I0813 20:09:13.780271 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-network-operator, name: iptables-alerter-script, uid: 3fcfe917-6069-4709-8a13-9f7bb418d1e6]" virtual=false 2025-08-13T20:09:13.781297219+00:00 stderr F I0813 20:09:13.781215 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:openshift-controller-manager-operator, uid: 6da2b488-b18a-4be7-b541-3d8d33584c6e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.781297219+00:00 stderr F I0813 20:09:13.781263 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: 
openshift-oauth-apiserver, name: revision-status-1, uid: 68bff2ff-9a16-46c3-aeb1-4de75aa0760b]" virtual=false 2025-08-13T20:09:13.789884805+00:00 stderr F I0813 20:09:13.789723 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:operator:service-ca-operator, uid: 90ce341e-6032-46e0-b9d2-7a01b6b8a68b]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.789884805+00:00 stderr F I0813 20:09:13.789863 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-config, uid: e9bee127-1b3d-47ee-af83-2bfcb79f8702]" virtual=false 2025-08-13T20:09:13.796895156+00:00 stderr F I0813 20:09:13.796698 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/ClusterRoleBinding, namespace: , name: system:openshift:scc:restricted-v2, uid: cb821e01-a19c-4b4c-8409-e70b79a0669a]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.796895156+00:00 stderr F I0813 20:09:13.796842 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: machine-api-controllers, uid: a777ad98-1b3e-45c1-a882-fa02dc5e0e45]" virtual=false 2025-08-13T20:09:13.804105813+00:00 stderr F I0813 20:09:13.803146 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-scheduler-operator, name: openshift-kube-scheduler-operator-config, uid: c6ebf7f7-9838-428d-bacd-d9377dca684d]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.804105813+00:00 stderr F I0813 20:09:13.803218 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-public, uid: 8d38d3d4-2560-4d42-8625-65533a47724e]" virtual=false 2025-08-13T20:09:13.842101862+00:00 stderr F I0813 20:09:13.841989 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-kube-storage-version-migrator-operator, name: config, uid: 1bb5d3ff-abf0-42d2-868b-f27622788fc1]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.842101862+00:00 stderr F I0813 20:09:13.842053 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-nodes-identity-limited, uid: c4aa8346-c5c7-4f58-9efa-e1f81fc14f12]" virtual=false 2025-08-13T20:09:13.845007075+00:00 stderr F I0813 20:09:13.844888 1 garbagecollector.go:596] "item doesn't have an owner, continue on next item" item="[v1/ConfigMap, namespace: openshift-oauth-apiserver, name: revision-status-1, uid: 68bff2ff-9a16-46c3-aeb1-4de75aa0760b]" 2025-08-13T20:09:13.845007075+00:00 stderr F I0813 20:09:13.844962 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-config, uid: d30c2b3b-e0c9-4cd0-9fc5-e48c94f3f693]" virtual=false 2025-08-13T20:09:13.854466177+00:00 stderr F I0813 20:09:13.854348 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy, uid: d55be534-ee47-42be-a177-eeff870f3f9c]" 
owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.854466177+00:00 stderr F I0813 20:09:13.854424 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-script-lib, uid: 2276c91e-6ff8-4fa7-a126-0ab84fde00d5]" virtual=false 2025-08-13T20:09:13.856594638+00:00 stderr F I0813 20:09:13.856528 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: kube-rbac-proxy-cluster-autoscaler-operator, uid: 21319178-4025-4471-9dc1-a15898759ed3]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.856594638+00:00 stderr F I0813 20:09:13.856557 1 garbagecollector.go:549] "Processing item" item="[v1/ConfigMap, namespace: openshift-service-ca-operator, name: service-ca-operator-config, uid: 3d3e7ab7-58a1-43a1-974e-5528e582f464]" virtual=false 2025-08-13T20:09:13.865252486+00:00 stderr F I0813 20:09:13.865155 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-api, name: machine-api-operator-images, uid: 9037023c-6158-4d9d-8f71-bae268832ce9]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.865297697+00:00 stderr F I0813 20:09:13.865245 1 garbagecollector.go:549] "Processing item" item="[v1/Secret, namespace: openshift-operator-lifecycle-manager, name: pprof-cert, uid: c9632a1a-4f13-4685-bf8a-cb629542c724]" virtual=false 2025-08-13T20:09:13.872710760+00:00 stderr F I0813 20:09:13.871890 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" 
item="[v1/ConfigMap, namespace: openshift-machine-api, name: mao-trusted-ca, uid: 9a2cfbbf-b8be-4334-82e9-91abbed9baee]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.872710760+00:00 stderr F I0813 20:09:13.871974 1 garbagecollector.go:549] "Processing item" item="[v1/Secret, namespace: openshift-etcd-operator, name: etcd-client, uid: e8eb8415-8608-4aeb-990e-0c587edd97cc]" virtual=false 2025-08-13T20:09:13.875620983+00:00 stderr F I0813 20:09:13.874997 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: coreos-bootimages, uid: 32557db9-6675-42cd-8433-382f972fd9ed]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.875620983+00:00 stderr F I0813 20:09:13.875119 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: system:openshift:scc:hostnetwork-v2, uid: 6a315570-1774-46a4-8c1e-a8dec4ae42dd]" virtual=false 2025-08-13T20:09:13.878346631+00:00 stderr F I0813 20:09:13.878266 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: kube-rbac-proxy, uid: ba7edbb4-1ba2-49b6-98a7-d849069e9f80]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.878346631+00:00 stderr F I0813 20:09:13.878325 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: network-diagnostics, uid: 42b5b6e6-9a80-4513-a033-38d0e838dc93]" virtual=false 
2025-08-13T20:09:13.883757186+00:00 stderr F I0813 20:09:13.883066 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-operator-images, uid: 1a9b80cc-4335-4f9b-b76a-0fd268339500]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.883757186+00:00 stderr F I0813 20:09:13.883148 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: bab757f6-b841-475a-892d-a531bdd67547]" virtual=false 2025-08-13T20:09:13.886694131+00:00 stderr F I0813 20:09:13.886582 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-machine-config-operator, name: machine-config-osimageurl, uid: 6dc55689-0211-4518-ae6b-72770afc52d4]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.886730652+00:00 stderr F I0813 20:09:13.886696 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 82531bdb-b490-4bb6-9435-a96aa861af59]" virtual=false 2025-08-13T20:09:13.890283644+00:00 stderr F I0813 20:09:13.889455 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-trusted-ca, uid: d0f4a635-2baa-4cf8-ab26-6cb4ffda662d]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.890283644+00:00 stderr F I0813 
20:09:13.889526 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: multus-whereabouts, uid: d9f001d9-3949-4f2b-b322-bdc495b6b073]" virtual=false 2025-08-13T20:09:13.890283644+00:00 stderr F I0813 20:09:13.890261 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: cni-copy-resources, uid: 3807440c-7cb7-402f-9b6c-985e141f073d]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.890325895+00:00 stderr F I0813 20:09:13.890313 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 16831aec-8154-47dc-bd17-bf35f40f27a2]" virtual=false 2025-08-13T20:09:13.893479375+00:00 stderr F I0813 20:09:13.893039 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: default-cni-sysctl-allowlist, uid: 4b4aac2c-265b-4c8a-86d2-81b7b6b58bd6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.893479375+00:00 stderr F I0813 20:09:13.893098 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: openshift-network-public-role-binding, uid: 3df77a31-3595-4c7d-8f42-fe6d72f5e8fc]" virtual=false 2025-08-13T20:09:13.897031317+00:00 stderr F I0813 20:09:13.896866 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-multus, name: multus-daemon-config, uid: 0040ed70-dcc9-471d-b68f-333eb3d3a64c]" 
owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.897031317+00:00 stderr F I0813 20:09:13.896947 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: prometheus-k8s, uid: 76ffecc8-c07a-4b4b-a533-f0349dddd305]" virtual=false 2025-08-13T20:09:13.900166967+00:00 stderr F I0813 20:09:13.900055 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-node-identity, name: ovnkube-identity-cm, uid: 08920289-81e3-422d-8580-e3f323d2bc00]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.900166967+00:00 stderr F I0813 20:09:13.900142 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: d04be21e-6483-4e06-947a-bfe10bccef5f]" virtual=false 2025-08-13T20:09:13.903428440+00:00 stderr F I0813 20:09:13.903277 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-operator, name: applied-cluster, uid: 0f792eb6-67bb-459c-8f9b-ddc568eb49b6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.903428440+00:00 stderr F I0813 20:09:13.903327 1 garbagecollector.go:549] "Processing item" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: 323976da-f51f-48d6-91ee-874a014db6f7]" virtual=false 2025-08-13T20:09:13.906514819+00:00 stderr 
F I0813 20:09:13.906406 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-network-operator, name: iptables-alerter-script, uid: 3fcfe917-6069-4709-8a13-9f7bb418d1e6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.906514819+00:00 stderr F I0813 20:09:13.906470 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminnetworkpolicies.policy.networking.k8s.io, uid: 3b2c3d0f-1154-4221-88d8-96b15373169a]" virtual=false 2025-08-13T20:09:13.924139574+00:00 stderr F I0813 20:09:13.923878 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-config, uid: e9bee127-1b3d-47ee-af83-2bfcb79f8702]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:13.924139574+00:00 stderr F I0813 20:09:13.923972 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminpolicybasedexternalroutes.k8s.ovn.org, uid: 8e2ba691-6cd2-489e-929b-7aaef51d1387]" virtual=false 2025-08-13T20:09:13.928879240+00:00 stderr F I0813 20:09:13.928268 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config, name: machine-api-controllers, uid: a777ad98-1b3e-45c1-a882-fa02dc5e0e45]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.928879240+00:00 stderr F I0813 20:09:13.928347 1 
garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: baselineadminnetworkpolicies.policy.networking.k8s.io, uid: d059be2d-29f9-4a7a-9bc5-e75a784d63e4]" virtual=false 2025-08-13T20:09:13.935465829+00:00 stderr F I0813 20:09:13.935276 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: console-public, uid: 8d38d3d4-2560-4d42-8625-65533a47724e]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.935465829+00:00 stderr F I0813 20:09:13.935346 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressfirewalls.k8s.ovn.org, uid: a49e9150-d58b-4ccf-9fa5-c6abaac414b2]" virtual=false 2025-08-13T20:09:13.977415072+00:00 stderr F I0813 20:09:13.977322 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-nodes-identity-limited, uid: c4aa8346-c5c7-4f58-9efa-e1f81fc14f12]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.977415072+00:00 stderr F I0813 20:09:13.977387 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressips.k8s.ovn.org, uid: 9f3a9522-f1f8-4d28-bf47-0823bd97a03c]" virtual=false 2025-08-13T20:09:13.979169852+00:00 stderr F I0813 20:09:13.979111 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-config, uid: 
d30c2b3b-e0c9-4cd0-9fc5-e48c94f3f693]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.979265325+00:00 stderr F I0813 20:09:13.979166 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressqoses.k8s.ovn.org, uid: c31943bc-e8ef-4bee-9147-31e90cad6638]" virtual=false 2025-08-13T20:09:13.988566541+00:00 stderr F I0813 20:09:13.988453 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-ovn-kubernetes, name: ovnkube-script-lib, uid: 2276c91e-6ff8-4fa7-a126-0ab84fde00d5]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:13.988566541+00:00 stderr F I0813 20:09:13.988525 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressservices.k8s.ovn.org, uid: 1b3e9680-0044-44dc-b306-5f8b2922b184]" virtual=false 2025-08-13T20:09:13.994508452+00:00 stderr F I0813 20:09:13.994420 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/ConfigMap, namespace: openshift-service-ca-operator, name: service-ca-operator-config, uid: 3d3e7ab7-58a1-43a1-974e-5528e582f464]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825","controller":true}] 2025-08-13T20:09:13.994508452+00:00 stderr F I0813 20:09:13.994475 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: ippools.whereabouts.cni.cncf.io, uid: 6327ea74-f505-4fab-a6a5-2f489c34a8c6]" virtual=false 2025-08-13T20:09:14.001161803+00:00 stderr F I0813 
20:09:14.000597 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Secret, namespace: openshift-operator-lifecycle-manager, name: pprof-cert, uid: c9632a1a-4f13-4685-bf8a-cb629542c724]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:14.001161803+00:00 stderr F I0813 20:09:14.000767 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: network-attachment-definitions.k8s.cni.cncf.io, uid: a99eded9-1e26-4704-8381-dc109f03cc1f]" virtual=false 2025-08-13T20:09:14.006212027+00:00 stderr F I0813 20:09:14.006105 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[v1/Secret, namespace: openshift-etcd-operator, name: etcd-client, uid: e8eb8415-8608-4aeb-990e-0c587edd97cc]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:14.006212027+00:00 stderr F I0813 20:09:14.006157 1 garbagecollector.go:549] "Processing item" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: overlappingrangeipreservations.whereabouts.cni.cncf.io, uid: ae18c5bd-30f0-4e7f-bcc1-2d4dcbc94239]" virtual=false 2025-08-13T20:09:14.007450383+00:00 stderr F I0813 20:09:14.006640 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: system:openshift:scc:hostnetwork-v2, uid: 6a315570-1774-46a4-8c1e-a8dec4ae42dd]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.007450383+00:00 stderr F I0813 20:09:14.006713 1 garbagecollector.go:549] "Processing 
item" item="[config.openshift.io/v1/Image, namespace: , name: cluster, uid: f9e57099-766b-4ef5-9883-f7bf72d3acd8]" virtual=false 2025-08-13T20:09:14.009835451+00:00 stderr F I0813 20:09:14.009730 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: kube-system, name: network-diagnostics, uid: 42b5b6e6-9a80-4513-a033-38d0e838dc93]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.009863642+00:00 stderr F I0813 20:09:14.009847 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-11405dc064e9fc83a779a06d1cd665b3, uid: 6c39c9b1-6ae4-4da9-bc28-1744aa4c8a1d]" virtual=false 2025-08-13T20:09:14.013264310+00:00 stderr F I0813 20:09:14.013186 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: openshift-ovn-kubernetes-control-plane-limited, uid: bab757f6-b841-475a-892d-a531bdd67547]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.013264310+00:00 stderr F I0813 20:09:14.013233 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5, uid: c414de32-5f1e-433d-8ddc-6ef8e86afda7]" virtual=false 2025-08-13T20:09:14.017742038+00:00 stderr F I0813 20:09:14.017635 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: prometheus-k8s, uid: 
82531bdb-b490-4bb6-9435-a96aa861af59]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.017742038+00:00 stderr F I0813 20:09:14.017719 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, uid: 8980258c-3f45-4fdf-85aa-5d7816ef57b0]" virtual=false 2025-08-13T20:09:14.019994403+00:00 stderr F I0813 20:09:14.019883 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: multus-whereabouts, uid: d9f001d9-3949-4f2b-b322-bdc495b6b073]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.020016583+00:00 stderr F I0813 20:09:14.019999 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-worker-83accf81260e29bcce65a184dd980479, uid: 064c58c6-b7e3-4279-8d84-dee3da8cc701]" virtual=false 2025-08-13T20:09:14.025669655+00:00 stderr F I0813 20:09:14.025112 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-ovn-kubernetes, name: prometheus-k8s, uid: 16831aec-8154-47dc-bd17-bf35f40f27a2]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.025669655+00:00 stderr F I0813 20:09:14.025162 1 garbagecollector.go:549] "Processing item" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: 
rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff, uid: 3ef54a1f-2601-44d6-aa6d-d19e1277d0b9]" virtual=false 2025-08-13T20:09:14.027746875+00:00 stderr F I0813 20:09:14.027600 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-config-managed, name: openshift-network-public-role-binding, uid: 3df77a31-3595-4c7d-8f42-fe6d72f5e8fc]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.027746875+00:00 stderr F I0813 20:09:14.027686 1 garbagecollector.go:549] "Processing item" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" virtual=false 2025-08-13T20:09:14.029419133+00:00 stderr F I0813 20:09:14.029059 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-multus, name: prometheus-k8s, uid: 76ffecc8-c07a-4b4b-a533-f0349dddd305]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.034613732+00:00 stderr F I0813 20:09:14.034479 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-diagnostics, name: network-diagnostics, uid: d04be21e-6483-4e06-947a-bfe10bccef5f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.038099632+00:00 stderr F I0813 20:09:14.037756 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage 
collect" item="[rbac.authorization.k8s.io/v1/RoleBinding, namespace: openshift-network-node-identity, name: network-node-identity-leases, uid: 323976da-f51f-48d6-91ee-874a014db6f7]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.046549154+00:00 stderr F I0813 20:09:14.046371 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminnetworkpolicies.policy.networking.k8s.io, uid: 3b2c3d0f-1154-4221-88d8-96b15373169a]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.065247960+00:00 stderr F I0813 20:09:14.065136 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: baselineadminnetworkpolicies.policy.networking.k8s.io, uid: d059be2d-29f9-4a7a-9bc5-e75a784d63e4]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.065678152+00:00 stderr F I0813 20:09:14.065612 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: adminpolicybasedexternalroutes.k8s.ovn.org, uid: 8e2ba691-6cd2-489e-929b-7aaef51d1387]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.066061833+00:00 stderr F I0813 20:09:14.066024 1 garbagecollector.go:615] "item has at least one existing owner, will not 
garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressfirewalls.k8s.ovn.org, uid: a49e9150-d58b-4ccf-9fa5-c6abaac414b2]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.097353211+00:00 stderr F I0813 20:09:14.096713 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressips.k8s.ovn.org, uid: 9f3a9522-f1f8-4d28-bf47-0823bd97a03c]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.100254584+00:00 stderr F I0813 20:09:14.100156 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressqoses.k8s.ovn.org, uid: c31943bc-e8ef-4bee-9147-31e90cad6638]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.104339911+00:00 stderr F I0813 20:09:14.103727 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: egressservices.k8s.ovn.org, uid: 1b3e9680-0044-44dc-b306-5f8b2922b184]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.107078699+00:00 stderr F I0813 20:09:14.106627 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , 
name: ippools.whereabouts.cni.cncf.io, uid: 6327ea74-f505-4fab-a6a5-2f489c34a8c6]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.114501082+00:00 stderr F I0813 20:09:14.113938 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: overlappingrangeipreservations.whereabouts.cni.cncf.io, uid: ae18c5bd-30f0-4e7f-bcc1-2d4dcbc94239]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.117990042+00:00 stderr F I0813 20:09:14.116225 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[apiextensions.k8s.io/v1/CustomResourceDefinition, namespace: , name: network-attachment-definitions.k8s.cni.cncf.io, uid: a99eded9-1e26-4704-8381-dc109f03cc1f]" owner=[{"apiVersion":"operator.openshift.io/v1","kind":"Network","name":"cluster","uid":"5ca11404-f665-4aa0-85cf-da2f3e9c86ad","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.120355630+00:00 stderr F I0813 20:09:14.119464 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[config.openshift.io/v1/Image, namespace: , name: cluster, uid: f9e57099-766b-4ef5-9883-f7bf72d3acd8]" owner=[{"apiVersion":"config.openshift.io/v1","kind":"ClusterVersion","name":"version","uid":"a73cbaa6-40d3-4694-9b98-c0a6eed45825"}] 2025-08-13T20:09:14.121110752+00:00 stderr F I0813 20:09:14.121034 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-11405dc064e9fc83a779a06d1cd665b3, uid: 6c39c9b1-6ae4-4da9-bc28-1744aa4c8a1d]" 
owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"master","uid":"8efb5656-7de8-467a-9f2a-237a011a4783","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.123210882+00:00 stderr F I0813 20:09:14.123012 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-24d41a0eb2da076d6c5b713d7a1eb8d5, uid: c414de32-5f1e-433d-8ddc-6ef8e86afda7]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"master","uid":"8efb5656-7de8-467a-9f2a-237a011a4783","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.125406005+00:00 stderr F I0813 20:09:14.125289 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a, uid: 8980258c-3f45-4fdf-85aa-5d7816ef57b0]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"master","uid":"8efb5656-7de8-467a-9f2a-237a011a4783","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.130221963+00:00 stderr F I0813 20:09:14.130187 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , name: rendered-worker-83accf81260e29bcce65a184dd980479, uid: 064c58c6-b7e3-4279-8d84-dee3da8cc701]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"worker","uid":"87ae8215-5559-4b8a-a6cc-81c3c83b8a6e","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.132879569+00:00 stderr F I0813 20:09:14.132824 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[machineconfiguration.openshift.io/v1/MachineConfig, namespace: , 
name: rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff, uid: 3ef54a1f-2601-44d6-aa6d-d19e1277d0b9]" owner=[{"apiVersion":"machineconfiguration.openshift.io/v1","kind":"MachineConfigPool","name":"worker","uid":"87ae8215-5559-4b8a-a6cc-81c3c83b8a6e","controller":true,"blockOwnerDeletion":true}] 2025-08-13T20:09:14.135660319+00:00 stderr F I0813 20:09:14.135627 1 garbagecollector.go:615] "item has at least one existing owner, will not garbage collect" item="[route.openshift.io/v1/Route, namespace: openshift-ingress-canary, name: canary, uid: d099e691-9f65-4b04-8fcd-6df8ad5c5015]" owner=[{"apiVersion":"apps/v1","kind":"daemonset","name":"ingress-canary","uid":"b5512a08-cd29-46f9-9661-4c860338b2ca","controller":true}] 2025-08-13T20:10:15.242365070+00:00 stderr F I0813 20:10:15.233259 1 event.go:376] "Event occurred" object="openshift-multus/cni-sysctl-allowlist-ds" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cni-sysctl-allowlist-ds-jx5m8" 2025-08-13T20:10:18.196245188+00:00 stderr F I0813 20:10:18.196096 1 garbagecollector.go:549] "Processing item" item="[apps/v1/ControllerRevision, namespace: openshift-multus, name: cni-sysctl-allowlist-ds-56444b9596, uid: 296d6144-09f5-4dc7-9ab3-000f2dd8cf46]" virtual=false 2025-08-13T20:10:18.196610098+00:00 stderr F I0813 20:10:18.196559 1 garbagecollector.go:549] "Processing item" item="[v1/Pod, namespace: openshift-multus, name: cni-sysctl-allowlist-ds-jx5m8, uid: b78e72e3-8ece-4d66-aa9c-25445bacdc99]" virtual=false 2025-08-13T20:10:18.219466944+00:00 stderr F I0813 20:10:18.219401 1 garbagecollector.go:688] "Deleting item" item="[apps/v1/ControllerRevision, namespace: openshift-multus, name: cni-sysctl-allowlist-ds-56444b9596, uid: 296d6144-09f5-4dc7-9ab3-000f2dd8cf46]" propagationPolicy="Background" 2025-08-13T20:10:18.220159104+00:00 stderr F I0813 20:10:18.219455 1 garbagecollector.go:688] "Deleting item" item="[v1/Pod, namespace: openshift-multus, name: 
cni-sysctl-allowlist-ds-jx5m8, uid: b78e72e3-8ece-4d66-aa9c-25445bacdc99]" propagationPolicy="Background" 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.683724 1 replica_set.go:621] "Too many replicas" replicaSet="openshift-route-controller-manager/route-controller-manager-6884dcf749" need=0 deleting=1 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.684108 1 replica_set.go:248] "Found related ReplicaSets" replicaSet="openshift-route-controller-manager/route-controller-manager-6884dcf749" relatedReplicaSets=["openshift-route-controller-manager/route-controller-manager-776b8b7477","openshift-route-controller-manager/route-controller-manager-5446f98575","openshift-route-controller-manager/route-controller-manager-5b77f9fd48","openshift-route-controller-manager/route-controller-manager-5c4dbb8899","openshift-route-controller-manager/route-controller-manager-6884dcf749","openshift-route-controller-manager/route-controller-manager-777dbbb7bb","openshift-route-controller-manager/route-controller-manager-7d967d98df","openshift-route-controller-manager/route-controller-manager-846977c6bc","openshift-route-controller-manager/route-controller-manager-66f66f94cf","openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc","openshift-route-controller-manager/route-controller-manager-7f79969969","openshift-route-controller-manager/route-controller-manager-868695ccb4"] 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.684171 1 event.go:376] "Event occurred" object="openshift-route-controller-manager/route-controller-manager" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set route-controller-manager-6884dcf749 to 0 from 1" 2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.684410 1 controller_utils.go:609] "Deleting pod" controller="route-controller-manager-6884dcf749" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" 
2025-08-13T20:10:59.686088783+00:00 stderr F I0813 20:10:59.684749 1 event.go:376] "Event occurred" object="openshift-controller-manager/controller-manager" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set controller-manager-598fc85fd4 to 0 from 1" 2025-08-13T20:10:59.687644158+00:00 stderr F I0813 20:10:59.687007 1 replica_set.go:621] "Too many replicas" replicaSet="openshift-controller-manager/controller-manager-598fc85fd4" need=0 deleting=1 2025-08-13T20:10:59.687644158+00:00 stderr F I0813 20:10:59.687101 1 replica_set.go:248] "Found related ReplicaSets" replicaSet="openshift-controller-manager/controller-manager-598fc85fd4" relatedReplicaSets=["openshift-controller-manager/controller-manager-659898b96d","openshift-controller-manager/controller-manager-6ff78978b4","openshift-controller-manager/controller-manager-75cfd5db5d","openshift-controller-manager/controller-manager-7bbb4b7f4c","openshift-controller-manager/controller-manager-99c8765d7","openshift-controller-manager/controller-manager-b69786f4f","openshift-controller-manager/controller-manager-5797bcd546","openshift-controller-manager/controller-manager-67685c4459","openshift-controller-manager/controller-manager-78589965b8","openshift-controller-manager/controller-manager-c4dd57946","openshift-controller-manager/controller-manager-778975cc4f","openshift-controller-manager/controller-manager-598fc85fd4"] 2025-08-13T20:10:59.687644158+00:00 stderr F I0813 20:10:59.687333 1 controller_utils.go:609] "Deleting pod" controller="controller-manager-598fc85fd4" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" 2025-08-13T20:10:59.695564135+00:00 stderr F I0813 20:10:59.695433 1 deployment_controller.go:507] "Error syncing deployment" deployment="openshift-controller-manager/controller-manager" err="Operation cannot be fulfilled on deployments.apps \"controller-manager\": the object has been modified; please apply 
your changes to the latest version and try again" 2025-08-13T20:10:59.702011570+00:00 stderr F I0813 20:10:59.701910 1 deployment_controller.go:507] "Error syncing deployment" deployment="openshift-route-controller-manager/route-controller-manager" err="Operation cannot be fulfilled on deployments.apps \"route-controller-manager\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:10:59.721870629+00:00 stderr F I0813 20:10:59.721668 1 event.go:376] "Event occurred" object="openshift-route-controller-manager/route-controller-manager" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set route-controller-manager-776b8b7477 to 1 from 0" 2025-08-13T20:10:59.731856846+00:00 stderr F I0813 20:10:59.728994 1 event.go:376] "Event occurred" object="openshift-controller-manager/controller-manager" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set controller-manager-778975cc4f to 1 from 0" 2025-08-13T20:10:59.748226745+00:00 stderr F I0813 20:10:59.747647 1 event.go:376] "Event occurred" object="openshift-route-controller-manager/route-controller-manager-6884dcf749" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: route-controller-manager-6884dcf749-n4qpx" 2025-08-13T20:10:59.800091072+00:00 stderr F I0813 20:10:59.799961 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="116.454919ms" 2025-08-13T20:10:59.807908986+00:00 stderr F I0813 20:10:59.800945 1 event.go:376] "Event occurred" object="openshift-controller-manager/controller-manager-598fc85fd4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: controller-manager-598fc85fd4-8wlsm" 
2025-08-13T20:10:59.809548343+00:00 stderr F I0813 20:10:59.809388 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="161.014787ms" 2025-08-13T20:10:59.809574074+00:00 stderr F I0813 20:10:59.809537 1 replica_set.go:585] "Too few replicas" replicaSet="openshift-controller-manager/controller-manager-778975cc4f" need=1 creating=1 2025-08-13T20:10:59.840894912+00:00 stderr F I0813 20:10:59.840650 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="192.344795ms" 2025-08-13T20:10:59.841197671+00:00 stderr F I0813 20:10:59.841143 1 replica_set.go:585] "Too few replicas" replicaSet="openshift-route-controller-manager/route-controller-manager-776b8b7477" need=1 creating=1 2025-08-13T20:10:59.842972571+00:00 stderr F I0813 20:10:59.841585 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="154.722446ms" 2025-08-13T20:10:59.846082041+00:00 stderr F I0813 20:10:59.843119 1 deployment_controller.go:507] "Error syncing deployment" deployment="openshift-route-controller-manager/route-controller-manager" err="Operation cannot be fulfilled on deployments.apps \"route-controller-manager\": the object has been modified; please apply your changes to the latest version and try again" 2025-08-13T20:10:59.847259364+00:00 stderr F I0813 20:10:59.847173 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="47.132021ms" 2025-08-13T20:10:59.850840847+00:00 stderr F I0813 20:10:59.849446 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="121.073µs" 2025-08-13T20:10:59.895562309+00:00 stderr F I0813 20:10:59.893518 1 event.go:376] "Event occurred" 
object="openshift-controller-manager/controller-manager-778975cc4f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: controller-manager-778975cc4f-x5vcf" 2025-08-13T20:10:59.916033096+00:00 stderr F I0813 20:10:59.913261 1 event.go:376] "Event occurred" object="openshift-route-controller-manager/route-controller-manager-776b8b7477" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: route-controller-manager-776b8b7477-sfpvs" 2025-08-13T20:10:59.956079364+00:00 stderr F I0813 20:10:59.955984 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="114.325958ms" 2025-08-13T20:10:59.956286910+00:00 stderr F I0813 20:10:59.956255 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="93.453µs" 2025-08-13T20:10:59.970990092+00:00 stderr F I0813 20:10:59.970868 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="161.369486ms" 2025-08-13T20:10:59.994078604+00:00 stderr F I0813 20:10:59.993975 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="153.216473ms" 2025-08-13T20:11:00.002982189+00:00 stderr F I0813 20:11:00.002852 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="31.625627ms" 2025-08-13T20:11:00.003505364+00:00 stderr F I0813 20:11:00.003416 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="217.526µs" 2025-08-13T20:11:00.006635064+00:00 stderr F I0813 20:11:00.006602 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-controller-manager/controller-manager-778975cc4f" duration="38.101µs" 2025-08-13T20:11:00.022237361+00:00 stderr F I0813 20:11:00.020429 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="26.091848ms" 2025-08-13T20:11:00.022237361+00:00 stderr F I0813 20:11:00.021165 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="131.754µs" 2025-08-13T20:11:00.022237361+00:00 stderr F I0813 20:11:00.021224 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="18.911µs" 2025-08-13T20:11:00.051909872+00:00 stderr F I0813 20:11:00.051612 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="62.532µs" 2025-08-13T20:11:00.399700743+00:00 stderr F I0813 20:11:00.399640 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="77.293µs" 2025-08-13T20:11:00.509064199+00:00 stderr F I0813 20:11:00.508908 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="66.812µs" 2025-08-13T20:11:00.666413040+00:00 stderr F I0813 20:11:00.666271 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="78.033µs" 2025-08-13T20:11:00.744231472+00:00 stderr F I0813 20:11:00.744045 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="304.548µs" 2025-08-13T20:11:00.772302806+00:00 stderr F I0813 20:11:00.772210 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-controller-manager/controller-manager-598fc85fd4" duration="83.382µs" 2025-08-13T20:11:00.782855379+00:00 stderr F I0813 20:11:00.780878 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="96.213µs" 2025-08-13T20:11:00.820830668+00:00 stderr F I0813 20:11:00.815070 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="59.832µs" 2025-08-13T20:11:00.848323746+00:00 stderr F I0813 20:11:00.845189 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="57.481µs" 2025-08-13T20:11:01.521766564+00:00 stderr F I0813 20:11:01.521384 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="407.931µs" 2025-08-13T20:11:01.553975188+00:00 stderr F I0813 20:11:01.552726 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="95.063µs" 2025-08-13T20:11:01.573428735+00:00 stderr F I0813 20:11:01.573000 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="215.366µs" 2025-08-13T20:11:01.597047563+00:00 stderr F I0813 20:11:01.596684 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="84.792µs" 2025-08-13T20:11:01.614690128+00:00 stderr F I0813 20:11:01.612124 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="61.432µs" 2025-08-13T20:11:01.643016831+00:00 stderr F I0813 20:11:01.642674 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="85.323µs" 2025-08-13T20:11:02.189949232+00:00 stderr F I0813 20:11:02.189586 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="118.123µs" 2025-08-13T20:11:02.284109911+00:00 stderr F I0813 20:11:02.282662 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="160.454µs" 2025-08-13T20:11:02.706573933+00:00 stderr F I0813 20:11:02.706232 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="3.430578ms" 2025-08-13T20:11:02.747130546+00:00 stderr F I0813 20:11:02.742157 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="59.321µs" 2025-08-13T20:11:03.756875606+00:00 stderr F I0813 20:11:03.756699 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="27.958882ms" 2025-08-13T20:11:03.757122423+00:00 stderr F I0813 20:11:03.757071 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="118.803µs" 2025-08-13T20:11:03.831419043+00:00 stderr F I0813 20:11:03.831349 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="48.773649ms" 2025-08-13T20:11:03.831652280+00:00 stderr F I0813 20:11:03.831627 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="92.093µs" 2025-08-13T20:11:03.840501514+00:00 stderr F I0813 20:11:03.837482 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-route-controller-manager/route-controller-manager-66f66f94cf" duration="9.26µs" 2025-08-13T20:11:03.878395910+00:00 stderr F I0813 20:11:03.875526 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-75cfd5db5d" duration="11.16µs" 2025-08-13T20:15:00.203593711+00:00 stderr F I0813 20:15:00.203243 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job collect-profiles-29251935" 2025-08-13T20:15:00.210739586+00:00 stderr F I0813 20:15:00.210633 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.344609754+00:00 stderr F I0813 20:15:00.339868 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.344609754+00:00 stderr F I0813 20:15:00.340109 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles-29251935" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: collect-profiles-29251935-d7x6j" 2025-08-13T20:15:00.374665496+00:00 stderr F I0813 20:15:00.374329 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.382615054+00:00 stderr F I0813 20:15:00.379152 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.411165522+00:00 stderr F I0813 20:15:00.409102 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:00.466745326+00:00 stderr F I0813 20:15:00.466507 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 
2025-08-13T20:15:01.020671647+00:00 stderr F I0813 20:15:01.020543 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:02.374771730+00:00 stderr F I0813 20:15:02.374665 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:03.385932521+00:00 stderr F I0813 20:15:03.385873 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:03.395491475+00:00 stderr F I0813 20:15:03.395432 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:04.448076683+00:00 stderr F I0813 20:15:04.446505 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:04.689482515+00:00 stderr F I0813 20:15:04.689325 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.382243267+00:00 stderr F I0813 20:15:05.381489 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.546215138+00:00 stderr F I0813 20:15:05.545919 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.593678289+00:00 stderr F I0813 20:15:05.593530 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.629989270+00:00 stderr F I0813 20:15:05.629819 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles-29251935" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" 2025-08-13T20:15:05.629989270+00:00 stderr F I0813 20:15:05.629766 1 job_controller.go:554] "enqueueing job" 
key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:15:05.675380811+00:00 stderr F I0813 20:15:05.675228 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-28658250" 2025-08-13T20:15:05.675952768+00:00 stderr F I0813 20:15:05.675839 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted job collect-profiles-28658250" 2025-08-13T20:15:05.675952768+00:00 stderr F I0813 20:15:05.675883 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SawCompletedJob" message="Saw completed job: collect-profiles-29251935, status: Complete" 2025-08-13T20:18:48.575668025+00:00 stderr F I0813 20:18:48.575532 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251905" 2025-08-13T20:18:48.576946972+00:00 stderr F I0813 20:18:48.576877 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="501.145µs" 2025-08-13T20:18:48.579160035+00:00 stderr F I0813 20:18:48.578014 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="252.247µs" 2025-08-13T20:18:48.579160035+00:00 stderr F I0813 20:18:48.578835 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251920" 2025-08-13T20:18:48.579160035+00:00 stderr F I0813 20:18:48.578877 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:18:48.579479144+00:00 stderr F I0813 20:18:48.579402 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-image-registry/image-registry-585546dd8b" duration="1.280157ms" 2025-08-13T20:18:48.579602698+00:00 stderr F I0813 20:18:48.579552 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="62.932µs" 2025-08-13T20:18:48.580147903+00:00 stderr F I0813 20:18:48.580037 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="397.592µs" 2025-08-13T20:18:48.580406671+00:00 stderr F I0813 20:18:48.580310 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="204.405µs" 2025-08-13T20:18:48.580451922+00:00 stderr F I0813 20:18:48.580415 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="49.102µs" 2025-08-13T20:18:48.580581386+00:00 stderr F I0813 20:18:48.580511 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="37.091µs" 2025-08-13T20:18:48.580660798+00:00 stderr F I0813 20:18:48.580609 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="48.081µs" 2025-08-13T20:18:48.580824103+00:00 stderr F I0813 20:18:48.580726 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="39.251µs" 2025-08-13T20:18:48.581135021+00:00 stderr F I0813 20:18:48.581062 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="175.775µs" 2025-08-13T20:18:48.581301466+00:00 stderr F I0813 20:18:48.581206 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" 
duration="45.061µs" 2025-08-13T20:18:48.581301466+00:00 stderr F I0813 20:18:48.581288 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="37.992µs" 2025-08-13T20:18:48.581422490+00:00 stderr F I0813 20:18:48.581384 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="62.401µs" 2025-08-13T20:18:48.581541263+00:00 stderr F I0813 20:18:48.581504 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="46.972µs" 2025-08-13T20:18:48.581645876+00:00 stderr F I0813 20:18:48.581610 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="31.421µs" 2025-08-13T20:18:48.581941554+00:00 stderr F I0813 20:18:48.581832 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="120.523µs" 2025-08-13T20:18:48.581961025+00:00 stderr F I0813 20:18:48.581947 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="37.771µs" 2025-08-13T20:18:48.582184921+00:00 stderr F I0813 20:18:48.582075 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="27.691µs" 2025-08-13T20:18:48.582184921+00:00 stderr F I0813 20:18:48.582163 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="41.281µs" 2025-08-13T20:18:48.582299475+00:00 stderr F I0813 20:18:48.582249 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="37.531µs" 2025-08-13T20:18:48.582888912+00:00 stderr F I0813 
20:18:48.582409 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="56.532µs" 2025-08-13T20:18:48.582888912+00:00 stderr F I0813 20:18:48.582562 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="52.932µs" 2025-08-13T20:18:48.582888912+00:00 stderr F I0813 20:18:48.582634 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="38.631µs" 2025-08-13T20:18:48.582888912+00:00 stderr F I0813 20:18:48.582744 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="81.872µs" 2025-08-13T20:18:48.583560411+00:00 stderr F I0813 20:18:48.583189 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="102.563µs" 2025-08-13T20:18:48.583560411+00:00 stderr F I0813 20:18:48.583290 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="42.012µs" 2025-08-13T20:18:48.583560411+00:00 stderr F I0813 20:18:48.583454 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="44.371µs" 2025-08-13T20:18:48.583560411+00:00 stderr F I0813 20:18:48.583506 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="30.511µs" 2025-08-13T20:18:48.583678534+00:00 stderr F I0813 20:18:48.583610 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="68.892µs" 2025-08-13T20:18:48.583689784+00:00 stderr F I0813 20:18:48.583675 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="41.401µs" 
2025-08-13T20:18:48.583878770+00:00 stderr F I0813 20:18:48.583763 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="44.461µs" 2025-08-13T20:18:48.584012814+00:00 stderr F I0813 20:18:48.583939 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="44.741µs" 2025-08-13T20:18:48.584209389+00:00 stderr F I0813 20:18:48.584158 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="120.563µs" 2025-08-13T20:18:48.584343763+00:00 stderr F I0813 20:18:48.584270 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="37.201µs" 2025-08-13T20:18:48.584480837+00:00 stderr F I0813 20:18:48.584360 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="40.231µs" 2025-08-13T20:18:48.584703693+00:00 stderr F I0813 20:18:48.584654 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="49.902µs" 2025-08-13T20:18:48.585082864+00:00 stderr F I0813 20:18:48.584838 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="122.894µs" 2025-08-13T20:18:48.585082864+00:00 stderr F I0813 20:18:48.585011 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="88.173µs" 2025-08-13T20:18:48.585104995+00:00 stderr F I0813 20:18:48.585079 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="41.281µs" 2025-08-13T20:18:48.585226448+00:00 stderr F I0813 20:18:48.585142 1 replica_set.go:676] 
"Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="31.771µs" 2025-08-13T20:18:48.585293140+00:00 stderr F I0813 20:18:48.585242 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="57.141µs" 2025-08-13T20:18:48.585418334+00:00 stderr F I0813 20:18:48.585364 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="39.571µs" 2025-08-13T20:18:48.585596599+00:00 stderr F I0813 20:18:48.585510 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="45.891µs" 2025-08-13T20:18:48.585695662+00:00 stderr F I0813 20:18:48.585636 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="57.112µs" 2025-08-13T20:18:48.585842386+00:00 stderr F I0813 20:18:48.585746 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="55.842µs" 2025-08-13T20:18:48.586018651+00:00 stderr F I0813 20:18:48.585930 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="50.101µs" 2025-08-13T20:18:48.586438363+00:00 stderr F I0813 20:18:48.586348 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="59.982µs" 2025-08-13T20:18:48.586755572+00:00 stderr F I0813 20:18:48.586671 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="245.357µs" 2025-08-13T20:18:48.587424411+00:00 stderr F I0813 20:18:48.587357 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" 
duration="36.031µs" 2025-08-13T20:18:48.594681488+00:00 stderr F I0813 20:18:48.594599 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="37.611µs" 2025-08-13T20:18:48.594706819+00:00 stderr F I0813 20:18:48.594697 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="32.811µs" 2025-08-13T20:18:48.594801982+00:00 stderr F I0813 20:18:48.594751 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="24.561µs" 2025-08-13T20:18:48.594943216+00:00 stderr F I0813 20:18:48.594887 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="30.461µs" 2025-08-13T20:18:48.595144622+00:00 stderr F I0813 20:18:48.595051 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="35.071µs" 2025-08-13T20:18:48.595160232+00:00 stderr F I0813 20:18:48.595142 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="37.141µs" 2025-08-13T20:18:48.595255485+00:00 stderr F I0813 20:18:48.595197 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="23.461µs" 2025-08-13T20:18:48.595335777+00:00 stderr F I0813 20:18:48.595282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="30.451µs" 2025-08-13T20:18:48.595410959+00:00 stderr F I0813 20:18:48.595359 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="27.43µs" 
2025-08-13T20:18:48.595473371+00:00 stderr F I0813 20:18:48.595435 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="25.651µs" 2025-08-13T20:18:48.595512062+00:00 stderr F I0813 20:18:48.595498 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="34.711µs" 2025-08-13T20:18:48.596365926+00:00 stderr F I0813 20:18:48.596286 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="34.821µs" 2025-08-13T20:18:48.596385377+00:00 stderr F I0813 20:18:48.596367 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="36.311µs" 2025-08-13T20:18:48.596489480+00:00 stderr F I0813 20:18:48.596420 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="23.921µs" 2025-08-13T20:18:48.596502440+00:00 stderr F I0813 20:18:48.596485 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="29.291µs" 2025-08-13T20:18:48.596592103+00:00 stderr F I0813 20:18:48.596541 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="25.54µs" 2025-08-13T20:18:48.596678375+00:00 stderr F I0813 20:18:48.596627 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="32.831µs" 2025-08-13T20:18:48.596820069+00:00 stderr F I0813 20:18:48.596731 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="34.631µs" 2025-08-13T20:18:48.597425287+00:00 stderr F I0813 20:18:48.597366 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="592.737µs" 2025-08-13T20:18:48.597515979+00:00 stderr F I0813 20:18:48.597470 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="40.571µs" 2025-08-13T20:18:48.597581281+00:00 stderr F I0813 20:18:48.597543 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="36.811µs" 2025-08-13T20:18:48.597709255+00:00 stderr F I0813 20:18:48.597629 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="30.711µs" 2025-08-13T20:18:48.597852239+00:00 stderr F I0813 20:18:48.597758 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="22.721µs" 2025-08-13T20:18:48.597909680+00:00 stderr F I0813 20:18:48.597875 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="41.331µs" 2025-08-13T20:18:48.597958332+00:00 stderr F I0813 20:18:48.597927 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="20.131µs" 2025-08-13T20:18:48.598063635+00:00 stderr F I0813 20:18:48.598017 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="73.102µs" 2025-08-13T20:18:48.598143787+00:00 stderr F I0813 20:18:48.598101 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="31.091µs" 2025-08-13T20:18:48.598239250+00:00 stderr F I0813 20:18:48.598197 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="32.871µs" 
2025-08-13T20:18:48.598292361+00:00 stderr F I0813 20:18:48.598251 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="21.781µs" 2025-08-13T20:18:48.598344053+00:00 stderr F I0813 20:18:48.598310 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="16.211µs" 2025-08-13T20:18:48.598395504+00:00 stderr F I0813 20:18:48.598359 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="27.161µs" 2025-08-13T20:18:48.598407275+00:00 stderr F I0813 20:18:48.598393 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="21.41µs" 2025-08-13T20:18:48.598491587+00:00 stderr F I0813 20:18:48.598449 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="34.551µs" 2025-08-13T20:28:48.562112435+00:00 stderr F I0813 20:28:48.561715 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251905" 2025-08-13T20:28:48.562112435+00:00 stderr F I0813 20:28:48.561942 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251920" 2025-08-13T20:28:48.562112435+00:00 stderr F I0813 20:28:48.561975 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935" 2025-08-13T20:28:48.572416611+00:00 stderr F I0813 20:28:48.572297 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="85.463µs" 2025-08-13T20:28:48.572499904+00:00 stderr F I0813 20:28:48.572463 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="117.013µs" 
2025-08-13T20:28:48.572616307+00:00 stderr F I0813 20:28:48.572551 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="34.691µs" 2025-08-13T20:28:48.572616307+00:00 stderr F I0813 20:28:48.572558 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="33.061µs" 2025-08-13T20:28:48.572616307+00:00 stderr F I0813 20:28:48.572483 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="76.202µs" 2025-08-13T20:28:48.572757081+00:00 stderr F I0813 20:28:48.572662 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="75.922µs" 2025-08-13T20:28:48.572873254+00:00 stderr F I0813 20:28:48.572822 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="41.641µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.572909 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="47.451µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573449 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="47.121µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573518 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="38.131µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573685 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="45.511µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573745 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="43.461µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573854 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="78.192µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573899 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="27.591µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573961 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="23.76µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.573998 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="24.511µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574109 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="39.352µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574180 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="33.721µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="55.232µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574505 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="205.626µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574598 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="60.452µs" 2025-08-13T20:28:48.574711027+00:00 
stderr F I0813 20:28:48.574636 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="24.411µs" 2025-08-13T20:28:48.574711027+00:00 stderr F I0813 20:28:48.574697 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="29.201µs" 2025-08-13T20:28:48.574873782+00:00 stderr F I0813 20:28:48.574763 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="30.641µs" 2025-08-13T20:28:48.574894543+00:00 stderr F I0813 20:28:48.574884 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="26.231µs" 2025-08-13T20:28:48.574996966+00:00 stderr F I0813 20:28:48.574952 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="37.291µs" 2025-08-13T20:28:48.575148860+00:00 stderr F I0813 20:28:48.575105 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" duration="61.651µs" 2025-08-13T20:28:48.575220462+00:00 stderr F I0813 20:28:48.575180 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="33.781µs" 2025-08-13T20:28:48.575297414+00:00 stderr F I0813 20:28:48.575258 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="27.651µs" 2025-08-13T20:28:48.575443028+00:00 stderr F I0813 20:28:48.575400 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="54.142µs" 2025-08-13T20:28:48.575522701+00:00 stderr F I0813 20:28:48.575478 1 replica_set.go:676] "Finished syncing" 
kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="40.361µs" 2025-08-13T20:28:48.575628464+00:00 stderr F I0813 20:28:48.575586 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="44.871µs" 2025-08-13T20:28:48.575731537+00:00 stderr F I0813 20:28:48.575692 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="32.401µs" 2025-08-13T20:28:48.575953403+00:00 stderr F I0813 20:28:48.575902 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="130.744µs" 2025-08-13T20:28:48.576073166+00:00 stderr F I0813 20:28:48.576033 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="50.821µs" 2025-08-13T20:28:48.576170999+00:00 stderr F I0813 20:28:48.576127 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="57.532µs" 2025-08-13T20:28:48.582190482+00:00 stderr F I0813 20:28:48.582120 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="29.911µs" 2025-08-13T20:28:48.582343637+00:00 stderr F I0813 20:28:48.582293 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="129.864µs" 2025-08-13T20:28:48.582499811+00:00 stderr F I0813 20:28:48.582449 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="45.631µs" 2025-08-13T20:28:48.583335125+00:00 stderr F I0813 20:28:48.583281 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="113.853µs" 2025-08-13T20:28:48.583938443+00:00 stderr F I0813 20:28:48.583890 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="45.131µs" 2025-08-13T20:28:48.584046436+00:00 stderr F I0813 20:28:48.583990 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="33.051µs" 2025-08-13T20:28:48.584188930+00:00 stderr F I0813 20:28:48.584142 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="99.673µs" 2025-08-13T20:28:48.585535798+00:00 stderr F I0813 20:28:48.584312 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="44.421µs" 2025-08-13T20:28:48.586182467+00:00 stderr F I0813 20:28:48.586133 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="913.937µs" 2025-08-13T20:28:48.586293080+00:00 stderr F I0813 20:28:48.586249 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="59.842µs" 2025-08-13T20:28:48.586364702+00:00 stderr F I0813 20:28:48.586323 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="39.511µs" 2025-08-13T20:28:48.586441614+00:00 stderr F I0813 20:28:48.586400 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="34.391µs" 2025-08-13T20:28:48.586544867+00:00 stderr F I0813 20:28:48.586500 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="43.001µs" 2025-08-13T20:28:48.586659321+00:00 stderr F I0813 
20:28:48.586616 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="59.341µs" 2025-08-13T20:28:48.586722923+00:00 stderr F I0813 20:28:48.586682 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="30.361µs" 2025-08-13T20:28:48.586953449+00:00 stderr F I0813 20:28:48.586879 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="114.814µs" 2025-08-13T20:28:48.587034661+00:00 stderr F I0813 20:28:48.586968 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="51.511µs" 2025-08-13T20:28:48.587130644+00:00 stderr F I0813 20:28:48.587086 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="41.431µs" 2025-08-13T20:28:48.587262428+00:00 stderr F I0813 20:28:48.587178 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="34.911µs" 2025-08-13T20:28:48.587368671+00:00 stderr F I0813 20:28:48.587326 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="75.113µs" 2025-08-13T20:28:48.587453754+00:00 stderr F I0813 20:28:48.587412 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="30.911µs" 2025-08-13T20:28:48.587557306+00:00 stderr F I0813 20:28:48.587514 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="38.641µs" 2025-08-13T20:28:48.587613678+00:00 stderr F I0813 20:28:48.587576 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" 
key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="23.861µs" 2025-08-13T20:28:48.587703781+00:00 stderr F I0813 20:28:48.587656 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="28.301µs" 2025-08-13T20:28:48.587851305+00:00 stderr F I0813 20:28:48.587761 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="45.171µs" 2025-08-13T20:28:48.587911167+00:00 stderr F I0813 20:28:48.587869 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="29.431µs" 2025-08-13T20:28:48.588074651+00:00 stderr F I0813 20:28:48.587997 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="42.771µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588170 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="74.632µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588259 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-585546dd8b" duration="37.031µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588304 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="30.261µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588366 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="34.341µs" 2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588419 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="21.48µs" 
2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588496 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="23.58µs"
2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588565 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="39.001µs"
2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588586 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="91.553µs"
2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588606 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="15.021µs"
2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588676 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="29.521µs"
2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588744 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" duration="54.632µs"
2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588868 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="99.053µs"
2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.588957 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="39.531µs"
2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.589076 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="98.843µs"
2025-08-13T20:28:48.589212284+00:00 stderr F I0813 20:28:48.589133 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="24.191µs"
2025-08-13T20:28:48.589284106+00:00 stderr F I0813 20:28:48.589212 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="49.501µs"
2025-08-13T20:28:48.589284106+00:00 stderr F I0813 20:28:48.589245 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="18.491µs"
2025-08-13T20:28:48.589508483+00:00 stderr F I0813 20:28:48.589442 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="166.574µs"
2025-08-13T20:28:48.589623106+00:00 stderr F I0813 20:28:48.589571 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="61.752µs"
2025-08-13T20:28:48.589721529+00:00 stderr F I0813 20:28:48.589668 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="40.511µs"
2025-08-13T20:30:01.235727093+00:00 stderr F I0813 20:30:01.235606 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job collect-profiles-29251950"
2025-08-13T20:30:01.241730696+00:00 stderr F I0813 20:30:01.241647 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:01.844719909+00:00 stderr F I0813 20:30:01.844619 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:01.844986387+00:00 stderr F I0813 20:30:01.844929 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles-29251950" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: collect-profiles-29251950-x8jjd"
2025-08-13T20:30:01.967460807+00:00 stderr F I0813 20:30:01.966767 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:01.986882936+00:00 stderr F I0813 20:30:01.986767 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:02.037764628+00:00 stderr F I0813 20:30:02.037702 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:02.065930468+00:00 stderr F I0813 20:30:02.065451 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:02.813309422+00:00 stderr F I0813 20:30:02.811373 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:03.324810106+00:00 stderr F I0813 20:30:03.324606 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:04.354378292+00:00 stderr F I0813 20:30:04.354319 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:05.704293336+00:00 stderr F I0813 20:30:05.704129 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:06.998551512+00:00 stderr F I0813 20:30:06.997918 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:07.101296225+00:00 stderr F I0813 20:30:07.100365 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:07.348520052+00:00 stderr F I0813 20:30:07.348345 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:08.078659269+00:00 stderr F I0813 20:30:08.075462 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:08.095253666+00:00 stderr F I0813 20:30:08.095157 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:08.107656023+00:00 stderr F I0813 20:30:08.107555 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles-29251950" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
2025-08-13T20:30:08.108126586+00:00 stderr F I0813 20:30:08.108002 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:30:08.125248199+00:00 stderr F I0813 20:30:08.125193 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted job collect-profiles-29251905"
2025-08-13T20:30:08.125360992+00:00 stderr F I0813 20:30:08.125312 1 event.go:376] "Event occurred" object="openshift-operator-lifecycle-manager/collect-profiles" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SawCompletedJob" message="Saw completed job: collect-profiles-29251950, status: Complete"
2025-08-13T20:30:08.127420121+00:00 stderr F I0813 20:30:08.127298 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251905"
2025-08-13T20:30:08.131391925+00:00 stderr F I0813 20:30:08.131365 1 garbagecollector.go:549] "Processing item" item="[v1/Pod, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251905-zmjv9, uid: 8500d7bd-50fb-4ca6-af41-b7a24cae43cd]" virtual=false
2025-08-13T20:30:08.156610490+00:00 stderr F I0813 20:30:08.156406 1 garbagecollector.go:688] "Deleting item" item="[v1/Pod, namespace: openshift-operator-lifecycle-manager, name: collect-profiles-29251905-zmjv9, uid: 8500d7bd-50fb-4ca6-af41-b7a24cae43cd]" propagationPolicy="Background"
2025-08-13T20:38:48.563688441+00:00 stderr F I0813 20:38:48.563520 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251950"
2025-08-13T20:38:48.563921368+00:00 stderr F I0813 20:38:48.563891 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251920"
2025-08-13T20:38:48.563995810+00:00 stderr F I0813 20:38:48.563976 1 job_controller.go:554] "enqueueing job" key="openshift-operator-lifecycle-manager/collect-profiles-29251935"
2025-08-13T20:38:48.579185338+00:00 stderr F I0813 20:38:48.579044 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-6d5d9649f6" duration="96.223µs"
2025-08-13T20:38:48.579382724+00:00 stderr F I0813 20:38:48.579325 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-diagnostics/network-check-source-5c5478f8c" duration="64.582µs"
2025-08-13T20:38:48.579478777+00:00 stderr F I0813 20:38:48.579417 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6884dcf749" duration="70.062µs"
2025-08-13T20:38:48.579595150+00:00 stderr F I0813 20:38:48.579542 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67f74899b5" duration="128.404µs"
2025-08-13T20:38:48.579691783+00:00 stderr F I0813 20:38:48.579644 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-network-operator/network-operator-767c585db5" duration="38.571µs"
2025-08-13T20:38:48.579874068+00:00 stderr F I0813 20:38:48.579751 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-868695ccb4" duration="23.39µs"
2025-08-13T20:38:48.579874068+00:00 stderr F I0813 20:38:48.579855 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5c4dbb8899" duration="53.032µs"
2025-08-13T20:38:48.579893449+00:00 stderr F I0813 20:38:48.579555 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6" duration="41.712µs"
2025-08-13T20:38:48.579893449+00:00 stderr F I0813 20:38:48.579881 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-78589965b8" duration="32.901µs"
2025-08-13T20:38:48.580004002+00:00 stderr F I0813 20:38:48.579957 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5446f98575" duration="33.591µs"
2025-08-13T20:38:48.580082494+00:00 stderr F I0813 20:38:48.580029 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc" duration="35.401µs"
2025-08-13T20:38:48.580273420+00:00 stderr F I0813 20:38:48.580135 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-etcd-operator/etcd-operator-768d5b5d86" duration="29.191µs"
2025-08-13T20:38:48.580355612+00:00 stderr F I0813 20:38:48.580309 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-7cbd5666ff" duration="38.441µs"
2025-08-13T20:38:48.580447345+00:00 stderr F I0813 20:38:48.580402 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/packageserver-8464bcc55b" duration="35.661µs"
2025-08-13T20:38:48.580511216+00:00 stderr F I0813 20:38:48.580468 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-778975cc4f" duration="29.241µs"
2025-08-13T20:38:48.580644020+00:00 stderr F I0813 20:38:48.580575 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-67cbf64bc9" duration="46.271µs"
2025-08-13T20:38:48.580763494+00:00 stderr F I0813 20:38:48.580716 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-6d9765f7fd" duration="42.351µs"
2025-08-13T20:38:48.581030471+00:00 stderr F I0813 20:38:48.580978 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-6b869cb98c" duration="71.373µs"
2025-08-13T20:38:48.581085763+00:00 stderr F I0813 20:38:48.581044 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-99c8765d7" duration="26.761µs"
2025-08-13T20:38:48.581194846+00:00 stderr F I0813 20:38:48.581123 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-b69786f4f" duration="28.801µs"
2025-08-13T20:38:48.581346230+00:00 stderr F I0813 20:38:48.581230 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-7955f778" duration="34.051µs"
2025-08-13T20:38:48.581507155+00:00 stderr F I0813 20:38:48.581457 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58" duration="34.011µs"
2025-08-13T20:38:48.581600998+00:00 stderr F I0813 20:38:48.581520 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7d967d98df" duration="23.7µs"
2025-08-13T20:38:48.581851845+00:00 stderr F I0813 20:38:48.581745 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-74fc7c67cc" duration="91.943µs"
2025-08-13T20:38:48.582722360+00:00 stderr F I0813 20:38:48.582669 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-7874c8775" duration="79.962µs"
2025-08-13T20:38:48.582849384+00:00 stderr F I0813 20:38:48.582757 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-config-operator/openshift-config-operator-77658b5b66" duration="27.321µs"
2025-08-13T20:38:48.583107141+00:00 stderr F I0813 20:38:48.583052 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-67685c4459" duration="197.715µs"
2025-08-13T20:38:48.583309887+00:00 stderr F I0813 20:38:48.583255 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca-operator/service-ca-operator-546b4f8984" duration="117.753µs"
2025-08-13T20:38:48.583549744+00:00 stderr F I0813 20:38:48.583504 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-service-ca/service-ca-666f99b6f" duration="205.326µs"
2025-08-13T20:38:48.583708129+00:00 stderr F I0813 20:38:48.583662 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865" duration="28.671µs"
2025-08-13T20:38:48.583830742+00:00 stderr F I0813 20:38:48.583742 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-7fc54b8dd7" duration="41.281µs"
2025-08-13T20:38:48.583936325+00:00 stderr F I0813 20:38:48.583891 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-version/cluster-version-operator-64d5db54b5" duration="26.831µs"
2025-08-13T20:38:48.584010307+00:00 stderr F I0813 20:38:48.583968 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-846977c6bc" duration="37.761µs"
2025-08-13T20:38:48.584276475+00:00 stderr F I0813 20:38:48.584227 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-585546dd8b" duration="196.265µs"
2025-08-13T20:38:48.584376748+00:00 stderr F I0813 20:38:48.584333 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress-operator/ingress-operator-7d46d5bb6d" duration="32.111µs"
2025-08-13T20:38:48.584441130+00:00 stderr F I0813 20:38:48.584401 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-marketplace/marketplace-operator-8b455464d" duration="31.011µs"
2025-08-13T20:38:48.584523642+00:00 stderr F I0813 20:38:48.584483 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-777dbbb7bb" duration="27.481µs"
2025-08-13T20:38:48.584710087+00:00 stderr F I0813 20:38:48.584664 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication-operator/authentication-operator-7cc7ff75d5" duration="38.741µs"
2025-08-13T20:38:48.584883842+00:00 stderr F I0813 20:38:48.584758 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-conversion-webhook-595f9969b" duration="26.791µs"
2025-08-13T20:38:48.584902043+00:00 stderr F I0813 20:38:48.584887 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-5797bcd546" duration="52.341µs"
2025-08-13T20:38:48.584992956+00:00 stderr F I0813 20:38:48.584949 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-7bbb4b7f4c" duration="27.581µs"
2025-08-13T20:38:48.585127759+00:00 stderr F I0813 20:38:48.585070 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-operator-76788bff89" duration="53.972µs"
2025-08-13T20:38:48.585459759+00:00 stderr F I0813 20:38:48.585410 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/package-server-manager-84d578d794" duration="33.531µs"
2025-08-13T20:38:48.585527751+00:00 stderr F I0813 20:38:48.585485 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-7f79969969" duration="34.571µs"
2025-08-13T20:38:48.585616714+00:00 stderr F I0813 20:38:48.585565 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-765b47f944" duration="31.711µs"
2025-08-13T20:38:48.589687711+00:00 stderr F I0813 20:38:48.589631 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-84fccc7b6" duration="4.005145ms"
2025-08-13T20:38:48.589843235+00:00 stderr F I0813 20:38:48.589742 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-ingress/router-default-5c9bf7bc58" duration="66.022µs"
2025-08-13T20:38:48.590003700+00:00 stderr F I0813 20:38:48.589957 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/control-plane-machine-set-operator-649bd778b4" duration="50.462µs"
2025-08-13T20:38:48.590130664+00:00 stderr F I0813 20:38:48.590062 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c" duration="33.121µs"
2025-08-13T20:38:48.590369011+00:00 stderr F I0813 20:38:48.590321 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-storage-version-migrator/migrator-f7c6d88df" duration="223.846µs"
2025-08-13T20:38:48.590464003+00:00 stderr F I0813 20:38:48.590420 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-config-operator/machine-config-controller-6df6df6b6b" duration="36.831µs"
2025-08-13T20:38:48.590549516+00:00 stderr F I0813 20:38:48.590505 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-5b6c864f95" duration="28.981µs"
2025-08-13T20:38:48.590647599+00:00 stderr F I0813 20:38:48.590601 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-57b5589fc8" duration="38.001µs"
2025-08-13T20:38:48.590744671+00:00 stderr F I0813 20:38:48.590701 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-apiserver/apiserver-8457997b6b" duration="60.032µs"
2025-08-13T20:38:48.590946127+00:00 stderr F I0813 20:38:48.590847 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-machine-approver/machine-approver-5596ddcb44" duration="88.072µs"
2025-08-13T20:38:48.590946127+00:00 stderr F I0813 20:38:48.590939 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-75779c45fd" duration="30.511µs"
2025-08-13T20:38:48.591036330+00:00 stderr F I0813 20:38:48.590992 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-5d9678894c" duration="39.751µs"
2025-08-13T20:38:48.591137643+00:00 stderr F I0813 20:38:48.591096 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-659898b96d" duration="48.141µs"
2025-08-13T20:38:48.591320338+00:00 stderr F I0813 20:38:48.591264 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d" duration="39.011µs"
2025-08-13T20:38:48.591434191+00:00 stderr F I0813 20:38:48.591365 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-598fc85fd4" duration="42.922µs"
2025-08-13T20:38:48.591563155+00:00 stderr F I0813 20:38:48.591507 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-6ff78978b4" duration="34.601µs"
2025-08-13T20:38:48.591712439+00:00 stderr F I0813 20:38:48.591665 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-64b8d4c75" duration="63.352µs"
2025-08-13T20:38:48.591858964+00:00 stderr F I0813 20:38:48.591761 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-oauth-apiserver/apiserver-69c565c9b6" duration="55.722µs"
2025-08-13T20:38:48.592280436+00:00 stderr F I0813 20:38:48.592216 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-776b8b7477" duration="112.373µs"
2025-08-13T20:38:48.592280436+00:00 stderr F I0813 20:38:48.592241 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-7b5ff7b59b" duration="54.821µs"
2025-08-13T20:38:48.592337147+00:00 stderr F I0813 20:38:48.592292 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-8545fbf5fd" duration="28.831µs"
2025-08-13T20:38:48.592410689+00:00 stderr F I0813 20:38:48.592361 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-644bb77b49" duration="55.082µs"
2025-08-13T20:38:48.592423270+00:00 stderr F I0813 20:38:48.592407 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-multus/multus-admission-controller-6c7c885997" duration="48.321µs"
2025-08-13T20:38:48.592544483+00:00 stderr F I0813 20:38:48.592501 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-route-controller-manager/route-controller-manager-5b77f9fd48" duration="46.191µs"
2025-08-13T20:38:48.592544483+00:00 stderr F I0813 20:38:48.592500 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/catalog-operator-857456c46" duration="73.452µs"
2025-08-13T20:38:48.592645506+00:00 stderr F I0813 20:38:48.592602 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/downloads-65476884b9" duration="35.051µs"
2025-08-13T20:38:48.592645506+00:00 stderr F I0813 20:38:48.592607 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6" duration="74.652µs"
2025-08-13T20:38:48.592742669+00:00 stderr F I0813 20:38:48.592700 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-controller-manager/controller-manager-c4dd57946" duration="37.121µs"
2025-08-13T20:38:48.592858822+00:00 stderr F I0813 20:38:48.592814 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-machine-api/machine-api-operator-788b7c6b6c" duration="32.851µs"
2025-08-13T20:38:48.593329486+00:00 stderr F I0813 20:38:48.593282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-authentication/oauth-openshift-5b69888cbb" duration="36.211µs"
2025-08-13T20:38:48.593329486+00:00 stderr F I0813 20:38:48.593317 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b" duration="71.042µs"
2025-08-13T20:38:48.593463430+00:00 stderr F I0813 20:38:48.593410 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console-operator/console-operator-5dbbc74dc9" duration="59.982µs"
2025-08-13T20:38:48.593463430+00:00 stderr F I0813 20:38:48.593430 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-console/console-b6485f8c7" duration="57.031µs"
2025-08-13T20:38:48.593569523+00:00 stderr F I0813 20:38:48.593515 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4" duration="33.301µs"
2025-08-13T20:38:48.593720277+00:00 stderr F I0813 20:38:48.593581 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f" duration="57.002µs"
2025-08-13T20:38:48.593720277+00:00 stderr F I0813 20:38:48.593520 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-dns-operator/dns-operator-75f687757b" duration="108.993µs"
2025-08-13T20:38:48.593720277+00:00 stderr F I0813 20:38:48.593656 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958" duration="93.573µs"
2025-08-13T20:38:48.593840021+00:00 stderr F I0813 20:38:48.593757 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="openshift-image-registry/image-registry-5b87ddc766" duration="761.432µs"
2025-08-13T20:42:36.319033048+00:00 stderr F I0813 20:42:36.318589 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.319139171+00:00 stderr F I0813 20:42:36.319088 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.319178332+00:00 stderr F I0813 20:42:36.317754 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.319271204+00:00 stderr F I0813 20:42:36.319077 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.319420109+00:00 stderr F I0813 20:42:36.317882 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.328448 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.330937 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.330939 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331078 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331166 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331255 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331304 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331362 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331380 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331430 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331436 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331495 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331521 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331638 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331714 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331767 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331857 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331892 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331953 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.331966 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332074 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332104 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332160 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332176 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332270 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332294 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332370 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332392 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332495 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332518 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332543 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332600 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332633 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332724 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332746 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.332906 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.333041 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.333101 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.333156 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334203 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334319 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334389 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334458 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334513 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334576 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334639 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334689 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334822553+00:00 stderr F I0813 20:42:36.334739 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334892535+00:00 stderr F I0813 20:42:36.334863 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.334973467+00:00 stderr F I0813 20:42:36.334926 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.335049099+00:00 stderr F I0813 20:42:36.335007 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.335274136+00:00 stderr F I0813 20:42:36.335166 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.335292186+00:00 stderr F I0813 20:42:36.335281 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.335344998+00:00 stderr F I0813 20:42:36.335317 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.335406310+00:00 stderr F I0813 20:42:36.335360 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.335515213+00:00 stderr F I0813 20:42:36.335444 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.335531863+00:00 stderr F I0813 20:42:36.335523 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.335630546+00:00 stderr F I0813 20:42:36.335586 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.335706258+00:00 stderr F I0813 20:42:36.335663 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.335871703+00:00 stderr F I0813 20:42:36.335753 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.335890124+00:00 stderr F I0813 20:42:36.335880 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.336009607+00:00 stderr F I0813 20:42:36.335967 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.346184240+00:00 stderr F I0813 20:42:36.346144 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.346439938+00:00 stderr F I0813 20:42:36.346417 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.346598742+00:00 stderr F I0813 20:42:36.346578 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.346698135+00:00 stderr F I0813 20:42:36.346682 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.346955483+00:00 stderr F I0813 20:42:36.346934 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.347066456+00:00 stderr F I0813 20:42:36.347050 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.347323543+00:00 stderr F I0813 20:42:36.347301 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.347448877+00:00 stderr F I0813 20:42:36.347432 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.347572440+00:00 stderr F I0813 20:42:36.347555 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.347707654+00:00 stderr F I0813 20:42:36.347691 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.347887830+00:00 stderr F I0813 20:42:36.347866 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.354572862+00:00 stderr F I0813 20:42:36.347987 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.354670485+00:00 stderr F I0813 20:42:36.335346 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.354752457+00:00 stderr F I0813 20:42:36.347994 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.354893622+00:00 stderr F I0813 20:42:36.348005 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.355175890+00:00 stderr F I0813 20:42:36.348010 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.355309814+00:00 stderr F I0813 20:42:36.335334 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.355389816+00:00 stderr F I0813 20:42:36.348124 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.355461678+00:00 stderr F I0813 20:42:36.348139 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355558661+00:00 stderr F I0813 20:42:36.348153 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.355760807+00:00 stderr F I0813 20:42:36.348167 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353021 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353054 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353068 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353083 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353097 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353131 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353146 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353163 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353175 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353188 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353201 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353212 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353256 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.399051745+00:00 stderr F I0813 20:42:36.353275 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.507074939+00:00 stderr F I0813 20:42:36.353287 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353300 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353313 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353324 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353337 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353349 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.531252906+00:00 stderr F I0813 20:42:36.353361 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353373 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353385 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353404 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353417 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353429 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353441 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353452 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353469 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.537572638+00:00 stderr F I0813 20:42:36.353480 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539522884+00:00 stderr F I0813 20:42:36.353493 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539643678+00:00 stderr F I0813 20:42:36.353505 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539880275+00:00 stderr F I0813 20:42:36.353532 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.539911286+00:00 stderr F I0813 20:42:36.353546 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.540170323+00:00 stderr F I0813 20:42:36.353558 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.540352968+00:00 stderr F I0813 20:42:36.353570 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.541644816+00:00 stderr F I0813 20:42:36.353582 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.541845051+00:00 stderr F I0813 20:42:36.353593 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.541925174+00:00 stderr F I0813 20:42:36.353685 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542021067+00:00 stderr F I0813 20:42:36.353711 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542147120+00:00 stderr F I0813 20:42:36.353723 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542277294+00:00 stderr F I0813 20:42:36.353733 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542345226+00:00 stderr F I0813 20:42:36.353745 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542485010+00:00 stderr F I0813 20:42:36.353824 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.542703936+00:00 stderr F I0813 20:42:36.353846 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543057326+00:00 stderr F I0813 20:42:36.353862 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543371405+00:00 stderr F I0813 20:42:36.353884 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543665844+00:00 stderr F I0813 20:42:36.353900 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.543995933+00:00 stderr F I0813 20:42:36.353910 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.544116567+00:00 stderr F I0813 20:42:36.353924 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.544263771+00:00 stderr F I0813 20:42:36.353941 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.550576843+00:00 stderr F I0813 20:42:36.353952 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554663011+00:00 stderr F I0813 20:42:36.353961 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.554996211+00:00 stderr F I0813 20:42:36.353972 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555433513+00:00 stderr F I0813 20:42:36.353983 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555596668+00:00 stderr F I0813 20:42:36.353998 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.555743802+00:00 stderr F I0813 20:42:36.354008 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.556133413+00:00 stderr F I0813 20:42:36.354020 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.556312429+00:00 stderr F I0813 20:42:36.354034 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354045 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354055 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354070 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354081 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354170 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354182 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354187 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354203 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354221 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354280 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354291 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354307 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354322 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354337 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354353 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.568504670+00:00 stderr F I0813 20:42:36.354924 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.167199781+00:00 stderr F I0813 20:42:37.167091 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-08-13T20:42:37.167384846+00:00 stderr F I0813 20:42:37.167363 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:37.169842477+00:00 stderr F I0813 20:42:37.168712 1 secure_serving.go:258] Stopped listening on [::]:10257 2025-08-13T20:42:37.172441962+00:00 stderr F I0813 20:42:37.172372 1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-08-13T20:42:37.177007703+00:00 stderr F I0813 20:42:37.176268 1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-08-13T20:42:37.179891917+00:00 stderr F I0813 20:42:37.179850 1 publisher.go:114] "Shutting down root CA cert publisher controller" 2025-08-13T20:42:37.181820452+00:00 stderr F I0813 20:42:37.180049 1 publisher.go:92] Shutting down service CA certificate configmap publisher 2025-08-13T20:42:37.185348774+00:00 stderr F I0813 20:42:37.185285 1 garbagecollector.go:175] "Shutting down controller" controller="garbagecollector" 2025-08-13T20:42:37.185440086+00:00 stderr F I0813 20:42:37.185376 1 job_controller.go:238] "Shutting down job controller" ././@LongLink0000644000000000000000000000031300000000000011600 Lustar 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/3.log
2025-12-13T00:20:51.951054920+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done'
2025-12-13T00:20:51.956227050+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')'
2025-12-13T00:20:51.961447221+00:00 stderr F + '[' -n 'TIME-WAIT 0 0 [::ffff:192.168.126.11]:10257 [::ffff:192.168.126.11]:41316 timer:(timewait,45sec,0)' ']'
2025-12-13T00:20:51.961447221+00:00 stderr F + sleep 1
2025-12-13T00:21:37.436813197+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')'
2025-12-13T00:21:37.443261246+00:00 stderr F + '[' -n '' ']'
2025-12-13T00:21:37.445047960+00:00 stdout F Copying system trust bundle
2025-12-13T00:21:37.445077081+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ']'
2025-12-13T00:21:37.445077081+00:00 stderr F + echo 'Copying system trust bundle'
2025-12-13T00:21:37.445077081+00:00 stderr F + cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
2025-12-13T00:21:37.450185487+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ']'
2025-12-13T00:21:37.450525555+00:00 stderr F + exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key '--controllers=*' --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true
--feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false 
--feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 2025-12-13T00:21:37.526479939+00:00 stderr F W1213 00:21:37.526313 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-12-13T00:21:37.526479939+00:00 stderr F W1213 00:21:37.526468 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-12-13T00:21:37.526519590+00:00 stderr F W1213 00:21:37.526507 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-12-13T00:21:37.526554301+00:00 stderr F W1213 00:21:37.526539 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-12-13T00:21:37.526602072+00:00 stderr F W1213 00:21:37.526582 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-12-13T00:21:37.526634573+00:00 stderr F W1213 00:21:37.526621 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-12-13T00:21:37.526676694+00:00 stderr F W1213 00:21:37.526656 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-12-13T00:21:37.526705174+00:00 stderr F W1213 00:21:37.526692 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-12-13T00:21:37.526764986+00:00 stderr F W1213 00:21:37.526747 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-12-13T00:21:37.526793566+00:00 stderr F W1213 00:21:37.526780 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-12-13T00:21:37.526822597+00:00 stderr F W1213 00:21:37.526809 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-12-13T00:21:37.526855498+00:00 stderr F W1213 00:21:37.526842 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-12-13T00:21:37.526880779+00:00 stderr F W1213 
00:21:37.526868 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-12-13T00:21:37.526905959+00:00 stderr F W1213 00:21:37.526893 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-12-13T00:21:37.526941840+00:00 stderr F W1213 00:21:37.526918 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-12-13T00:21:37.526980181+00:00 stderr F W1213 00:21:37.526966 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-12-13T00:21:37.527007842+00:00 stderr F W1213 00:21:37.526995 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-12-13T00:21:37.527033132+00:00 stderr F W1213 00:21:37.527020 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-12-13T00:21:37.527105984+00:00 stderr F W1213 00:21:37.527087 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-12-13T00:21:37.527184016+00:00 stderr F W1213 00:21:37.527170 1 feature_gate.go:227] unrecognized feature gate: Example 2025-12-13T00:21:37.527210507+00:00 stderr F W1213 00:21:37.527197 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-12-13T00:21:37.527236407+00:00 stderr F W1213 00:21:37.527223 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-12-13T00:21:37.527260918+00:00 stderr F W1213 00:21:37.527248 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-12-13T00:21:37.527286368+00:00 stderr F W1213 00:21:37.527273 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-12-13T00:21:37.527311379+00:00 stderr F W1213 00:21:37.527298 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-12-13T00:21:37.527337930+00:00 stderr F W1213 00:21:37.527325 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-12-13T00:21:37.527362520+00:00 stderr F W1213 00:21:37.527350 1 feature_gate.go:227] unrecognized 
feature gate: GCPLabelsTags 2025-12-13T00:21:37.527388541+00:00 stderr F W1213 00:21:37.527376 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-12-13T00:21:37.527415601+00:00 stderr F W1213 00:21:37.527403 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-12-13T00:21:37.527440532+00:00 stderr F W1213 00:21:37.527427 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-12-13T00:21:37.527465424+00:00 stderr F W1213 00:21:37.527452 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-12-13T00:21:37.527491774+00:00 stderr F W1213 00:21:37.527478 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-12-13T00:21:37.527524265+00:00 stderr F W1213 00:21:37.527504 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-12-13T00:21:37.527536875+00:00 stderr F W1213 00:21:37.527531 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-12-13T00:21:37.527573086+00:00 stderr F W1213 00:21:37.527555 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
2025-12-13T00:21:37.527596127+00:00 stderr F W1213 00:21:37.527581 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-12-13T00:21:37.527621007+00:00 stderr F W1213 00:21:37.527608 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-12-13T00:21:37.527648318+00:00 stderr F W1213 00:21:37.527635 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-12-13T00:21:37.527673589+00:00 stderr F W1213 00:21:37.527661 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-12-13T00:21:37.527720280+00:00 stderr F W1213 00:21:37.527707 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-12-13T00:21:37.527753281+00:00 stderr F W1213 00:21:37.527740 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-12-13T00:21:37.527784771+00:00 stderr F W1213 00:21:37.527771 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-12-13T00:21:37.527816912+00:00 stderr F W1213 00:21:37.527804 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-12-13T00:21:37.527849683+00:00 stderr F W1213 00:21:37.527836 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-12-13T00:21:37.527874794+00:00 stderr F W1213 00:21:37.527862 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-12-13T00:21:37.527899414+00:00 stderr F W1213 00:21:37.527886 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-12-13T00:21:37.527963866+00:00 stderr F W1213 00:21:37.527950 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-12-13T00:21:37.527994136+00:00 stderr F W1213 00:21:37.527980 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-12-13T00:21:37.528024127+00:00 stderr F W1213 00:21:37.528011 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-12-13T00:21:37.528048968+00:00 stderr F W1213 
00:21:37.528036 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-12-13T00:21:37.528079679+00:00 stderr F W1213 00:21:37.528066 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-12-13T00:21:37.528229822+00:00 stderr F W1213 00:21:37.528206 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-12-13T00:21:37.528257653+00:00 stderr F W1213 00:21:37.528244 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-12-13T00:21:37.528338955+00:00 stderr F W1213 00:21:37.528325 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-12-13T00:21:37.528375116+00:00 stderr F W1213 00:21:37.528360 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-12-13T00:21:37.528412007+00:00 stderr F W1213 00:21:37.528395 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-12-13T00:21:37.528447928+00:00 stderr F W1213 00:21:37.528430 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-12-13T00:21:37.528479588+00:00 stderr F W1213 00:21:37.528466 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-12-13T00:21:37.528548290+00:00 stderr F W1213 00:21:37.528535 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-12-13T00:21:37.528658863+00:00 stderr F I1213 00:21:37.528638 1 flags.go:64] FLAG: --allocate-node-cidrs="false" 2025-12-13T00:21:37.528658863+00:00 stderr F I1213 00:21:37.528651 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-12-13T00:21:37.528666293+00:00 stderr F I1213 00:21:37.528656 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-12-13T00:21:37.528666293+00:00 stderr F I1213 00:21:37.528660 1 flags.go:64] FLAG: --allow-untagged-cloud="false" 2025-12-13T00:21:37.528673303+00:00 stderr F I1213 00:21:37.528664 1 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s" 2025-12-13T00:21:37.528673303+00:00 stderr F I1213 
00:21:37.528668 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-12-13T00:21:37.528680223+00:00 stderr F I1213 00:21:37.528672 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-12-13T00:21:37.528680223+00:00 stderr F I1213 00:21:37.528675 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-12-13T00:21:37.528680223+00:00 stderr F I1213 00:21:37.528678 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-12-13T00:21:37.528707394+00:00 stderr F I1213 00:21:37.528681 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-12-13T00:21:37.528707394+00:00 stderr F I1213 00:21:37.528699 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-12-13T00:21:37.528707394+00:00 stderr F I1213 00:21:37.528703 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-12-13T00:21:37.528715054+00:00 stderr F I1213 00:21:37.528706 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-12-13T00:21:37.528715054+00:00 stderr F I1213 00:21:37.528709 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-12-13T00:21:37.528721974+00:00 stderr F I1213 00:21:37.528713 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-12-13T00:21:37.528721974+00:00 stderr F I1213 00:21:37.528717 1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator" 2025-12-13T00:21:37.528728824+00:00 stderr F I1213 00:21:37.528719 1 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-12-13T00:21:37.528728824+00:00 stderr F I1213 00:21:37.528726 1 flags.go:64] FLAG: --cloud-config="" 2025-12-13T00:21:37.528735645+00:00 stderr F I1213 00:21:37.528729 1 flags.go:64] FLAG: --cloud-provider="" 2025-12-13T00:21:37.528742275+00:00 stderr F I1213 
00:21:37.528732 1 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2025-12-13T00:21:37.528742275+00:00 stderr F I1213 00:21:37.528739 1 flags.go:64] FLAG: --cluster-cidr="10.217.0.0/22" 2025-12-13T00:21:37.528749145+00:00 stderr F I1213 00:21:37.528742 1 flags.go:64] FLAG: --cluster-name="crc-d8rkd" 2025-12-13T00:21:37.528749145+00:00 stderr F I1213 00:21:37.528745 1 flags.go:64] FLAG: --cluster-signing-cert-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt" 2025-12-13T00:21:37.528755995+00:00 stderr F I1213 00:21:37.528748 1 flags.go:64] FLAG: --cluster-signing-duration="8760h0m0s" 2025-12-13T00:21:37.528755995+00:00 stderr F I1213 00:21:37.528751 1 flags.go:64] FLAG: --cluster-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-12-13T00:21:37.528762875+00:00 stderr F I1213 00:21:37.528755 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file="" 2025-12-13T00:21:37.528762875+00:00 stderr F I1213 00:21:37.528757 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-key-file="" 2025-12-13T00:21:37.528762875+00:00 stderr F I1213 00:21:37.528760 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file="" 2025-12-13T00:21:37.528770295+00:00 stderr F I1213 00:21:37.528763 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file="" 2025-12-13T00:21:37.528770295+00:00 stderr F I1213 00:21:37.528766 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file="" 2025-12-13T00:21:37.528770295+00:00 stderr F I1213 00:21:37.528768 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-key-file="" 2025-12-13T00:21:37.528781286+00:00 stderr F I1213 00:21:37.528771 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file="" 2025-12-13T00:21:37.528781286+00:00 stderr F I1213 00:21:37.528774 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file="" 2025-12-13T00:21:37.528781286+00:00 stderr F I1213 
00:21:37.528776 1 flags.go:64] FLAG: --concurrent-cron-job-syncs="5" 2025-12-13T00:21:37.528788646+00:00 stderr F I1213 00:21:37.528780 1 flags.go:64] FLAG: --concurrent-deployment-syncs="5" 2025-12-13T00:21:37.528788646+00:00 stderr F I1213 00:21:37.528785 1 flags.go:64] FLAG: --concurrent-endpoint-syncs="5" 2025-12-13T00:21:37.528795466+00:00 stderr F I1213 00:21:37.528788 1 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5" 2025-12-13T00:21:37.528795466+00:00 stderr F I1213 00:21:37.528791 1 flags.go:64] FLAG: --concurrent-gc-syncs="20" 2025-12-13T00:21:37.528802316+00:00 stderr F I1213 00:21:37.528794 1 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5" 2025-12-13T00:21:37.528802316+00:00 stderr F I1213 00:21:37.528796 1 flags.go:64] FLAG: --concurrent-job-syncs="5" 2025-12-13T00:21:37.528802316+00:00 stderr F I1213 00:21:37.528799 1 flags.go:64] FLAG: --concurrent-namespace-syncs="10" 2025-12-13T00:21:37.528809596+00:00 stderr F I1213 00:21:37.528802 1 flags.go:64] FLAG: --concurrent-rc-syncs="5" 2025-12-13T00:21:37.528809596+00:00 stderr F I1213 00:21:37.528805 1 flags.go:64] FLAG: --concurrent-replicaset-syncs="5" 2025-12-13T00:21:37.528809596+00:00 stderr F I1213 00:21:37.528807 1 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5" 2025-12-13T00:21:37.528816886+00:00 stderr F I1213 00:21:37.528810 1 flags.go:64] FLAG: --concurrent-service-endpoint-syncs="5" 2025-12-13T00:21:37.528816886+00:00 stderr F I1213 00:21:37.528813 1 flags.go:64] FLAG: --concurrent-service-syncs="1" 2025-12-13T00:21:37.528823757+00:00 stderr F I1213 00:21:37.528816 1 flags.go:64] FLAG: --concurrent-serviceaccount-token-syncs="5" 2025-12-13T00:21:37.528823757+00:00 stderr F I1213 00:21:37.528818 1 flags.go:64] FLAG: --concurrent-statefulset-syncs="5" 2025-12-13T00:21:37.528823757+00:00 stderr F I1213 00:21:37.528821 1 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5" 2025-12-13T00:21:37.528831047+00:00 stderr F I1213 00:21:37.528824 1 
flags.go:64] FLAG: --concurrent-validating-admission-policy-status-syncs="5" 2025-12-13T00:21:37.528831047+00:00 stderr F I1213 00:21:37.528827 1 flags.go:64] FLAG: --configure-cloud-routes="true" 2025-12-13T00:21:37.528837877+00:00 stderr F I1213 00:21:37.528830 1 flags.go:64] FLAG: --contention-profiling="false" 2025-12-13T00:21:37.528837877+00:00 stderr F I1213 00:21:37.528833 1 flags.go:64] FLAG: --controller-start-interval="0s" 2025-12-13T00:21:37.528846307+00:00 stderr F I1213 00:21:37.528838 1 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]" 2025-12-13T00:21:37.528852827+00:00 stderr F I1213 00:21:37.528845 1 flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false" 2025-12-13T00:21:37.528859408+00:00 stderr F I1213 00:21:37.528848 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-12-13T00:21:37.528859408+00:00 stderr F I1213 00:21:37.528854 1 flags.go:64] FLAG: --enable-dynamic-provisioning="true" 2025-12-13T00:21:37.528859408+00:00 stderr F I1213 00:21:37.528857 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-12-13T00:21:37.528866678+00:00 stderr F I1213 00:21:37.528859 1 flags.go:64] FLAG: --enable-hostpath-provisioner="false" 2025-12-13T00:21:37.528866678+00:00 stderr F I1213 00:21:37.528862 1 flags.go:64] FLAG: --enable-leader-migration="false" 2025-12-13T00:21:37.528882298+00:00 stderr F I1213 00:21:37.528865 1 flags.go:64] FLAG: --endpoint-updates-batch-period="0s" 2025-12-13T00:21:37.528882298+00:00 stderr F I1213 00:21:37.528868 1 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s" 2025-12-13T00:21:37.528882298+00:00 stderr F I1213 00:21:37.528870 1 flags.go:64] FLAG: --external-cloud-volume-plugin="" 2025-12-13T00:21:37.528912309+00:00 stderr F I1213 00:21:37.528873 1 flags.go:64] FLAG: 
--feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-12-13T00:21:37.528912309+00:00 stderr F I1213 00:21:37.528896 1 flags.go:64] FLAG: --flex-volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" 2025-12-13T00:21:37.528912309+00:00 stderr F I1213 00:21:37.528900 1 flags.go:64] FLAG: --help="false" 2025-12-13T00:21:37.528912309+00:00 stderr F I1213 00:21:37.528903 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s" 2025-12-13T00:21:37.528912309+00:00 stderr F I1213 00:21:37.528907 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s" 2025-12-13T00:21:37.528912309+00:00 stderr F I1213 00:21:37.528909 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s" 2025-12-13T00:21:37.528921419+00:00 stderr F I1213 00:21:37.528912 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s" 2025-12-13T00:21:37.528921419+00:00 stderr F I1213 00:21:37.528915 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s" 2025-12-13T00:21:37.528928269+00:00 stderr F I1213 00:21:37.528920 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1" 2025-12-13T00:21:37.528928269+00:00 stderr F I1213 00:21:37.528925 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s" 2025-12-13T00:21:37.528973210+00:00 stderr F I1213 00:21:37.528958 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-12-13T00:21:37.528973210+00:00 stderr F I1213 00:21:37.528964 1 flags.go:64] FLAG: --kube-api-burst="300" 2025-12-13T00:21:37.528973210+00:00 stderr F I1213 00:21:37.528967 1 flags.go:64] FLAG: 
--kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-12-13T00:21:37.528985121+00:00 stderr F I1213 00:21:37.528970 1 flags.go:64] FLAG: --kube-api-qps="150" 2025-12-13T00:21:37.528985121+00:00 stderr F I1213 00:21:37.528974 1 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-12-13T00:21:37.528985121+00:00 stderr F I1213 00:21:37.528978 1 flags.go:64] FLAG: --large-cluster-size-threshold="50" 2025-12-13T00:21:37.528985121+00:00 stderr F I1213 00:21:37.528982 1 flags.go:64] FLAG: --leader-elect="true" 2025-12-13T00:21:37.528992851+00:00 stderr F I1213 00:21:37.528984 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-12-13T00:21:37.528992851+00:00 stderr F I1213 00:21:37.528987 1 flags.go:64] FLAG: --leader-elect-renew-deadline="12s" 2025-12-13T00:21:37.528992851+00:00 stderr F I1213 00:21:37.528990 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-12-13T00:21:37.529000101+00:00 stderr F I1213 00:21:37.528993 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager" 2025-12-13T00:21:37.529000101+00:00 stderr F I1213 00:21:37.528996 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-12-13T00:21:37.529006941+00:00 stderr F I1213 00:21:37.528998 1 flags.go:64] FLAG: --leader-elect-retry-period="3s" 2025-12-13T00:21:37.529006941+00:00 stderr F I1213 00:21:37.529001 1 flags.go:64] FLAG: --leader-migration-config="" 2025-12-13T00:21:37.529006941+00:00 stderr F I1213 00:21:37.529004 1 flags.go:64] FLAG: --legacy-service-account-token-clean-up-period="8760h0m0s" 2025-12-13T00:21:37.529018471+00:00 stderr F I1213 00:21:37.529007 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-12-13T00:21:37.529025102+00:00 stderr F I1213 00:21:37.529014 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-12-13T00:21:37.529025102+00:00 stderr F I1213 00:21:37.529020 1 flags.go:64] FLAG: --log-json-split-stream="false" 
2025-12-13T00:21:37.529025102+00:00 stderr F I1213 00:21:37.529022 1 flags.go:64] FLAG: --logging-format="text" 2025-12-13T00:21:37.529032392+00:00 stderr F I1213 00:21:37.529025 1 flags.go:64] FLAG: --master="" 2025-12-13T00:21:37.529032392+00:00 stderr F I1213 00:21:37.529028 1 flags.go:64] FLAG: --max-endpoints-per-slice="100" 2025-12-13T00:21:37.529039242+00:00 stderr F I1213 00:21:37.529031 1 flags.go:64] FLAG: --min-resync-period="12h0m0s" 2025-12-13T00:21:37.529039242+00:00 stderr F I1213 00:21:37.529034 1 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5" 2025-12-13T00:21:37.529039242+00:00 stderr F I1213 00:21:37.529036 1 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s" 2025-12-13T00:21:37.529046462+00:00 stderr F I1213 00:21:37.529039 1 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000" 2025-12-13T00:21:37.529046462+00:00 stderr F I1213 00:21:37.529042 1 flags.go:64] FLAG: --namespace-sync-period="5m0s" 2025-12-13T00:21:37.529053282+00:00 stderr F I1213 00:21:37.529045 1 flags.go:64] FLAG: --node-cidr-mask-size="0" 2025-12-13T00:21:37.529053282+00:00 stderr F I1213 00:21:37.529047 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0" 2025-12-13T00:21:37.529053282+00:00 stderr F I1213 00:21:37.529050 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0" 2025-12-13T00:21:37.529060502+00:00 stderr F I1213 00:21:37.529053 1 flags.go:64] FLAG: --node-eviction-rate="0.1" 2025-12-13T00:21:37.529060502+00:00 stderr F I1213 00:21:37.529056 1 flags.go:64] FLAG: --node-monitor-grace-period="40s" 2025-12-13T00:21:37.529067323+00:00 stderr F I1213 00:21:37.529059 1 flags.go:64] FLAG: --node-monitor-period="5s" 2025-12-13T00:21:37.529067323+00:00 stderr F I1213 00:21:37.529062 1 flags.go:64] FLAG: --node-startup-grace-period="1m0s" 2025-12-13T00:21:37.529067323+00:00 stderr F I1213 00:21:37.529064 1 flags.go:64] FLAG: --node-sync-period="0s" 2025-12-13T00:21:37.529089043+00:00 stderr F I1213 00:21:37.529072 1 flags.go:64] 
FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-12-13T00:21:37.529089043+00:00 stderr F I1213 00:21:37.529078 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-12-13T00:21:37.529089043+00:00 stderr F I1213 00:21:37.529081 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-12-13T00:21:37.529089043+00:00 stderr F I1213 00:21:37.529084 1 flags.go:64] FLAG: --profiling="true" 2025-12-13T00:21:37.529096813+00:00 stderr F I1213 00:21:37.529088 1 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30" 2025-12-13T00:21:37.529096813+00:00 stderr F I1213 00:21:37.529091 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60" 2025-12-13T00:21:37.529096813+00:00 stderr F I1213 00:21:37.529093 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300" 2025-12-13T00:21:37.529104143+00:00 stderr F I1213 00:21:37.529096 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-12-13T00:21:37.529104143+00:00 stderr F I1213 00:21:37.529100 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-12-13T00:21:37.529110984+00:00 stderr F I1213 00:21:37.529104 1 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30" 2025-12-13T00:21:37.529110984+00:00 stderr F I1213 00:21:37.529106 1 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s" 2025-12-13T00:21:37.529121444+00:00 stderr F I1213 00:21:37.529109 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-12-13T00:21:37.529121444+00:00 stderr F I1213 00:21:37.529115 1 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-12-13T00:21:37.529128264+00:00 stderr F I1213 00:21:37.529119 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 
2025-12-13T00:21:37.529134794+00:00 stderr F I1213 00:21:37.529126 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-12-13T00:21:37.529141344+00:00 stderr F I1213 00:21:37.529131 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-12-13T00:21:37.529141344+00:00 stderr F I1213 00:21:37.529138 1 flags.go:64] FLAG: --resource-quota-sync-period="5m0s" 2025-12-13T00:21:37.529148105+00:00 stderr F I1213 00:21:37.529141 1 flags.go:64] FLAG: --root-ca-file="/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt" 2025-12-13T00:21:37.529154655+00:00 stderr F I1213 00:21:37.529147 1 flags.go:64] FLAG: --route-reconciliation-period="10s" 2025-12-13T00:21:37.529154655+00:00 stderr F I1213 00:21:37.529150 1 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01" 2025-12-13T00:21:37.529161475+00:00 stderr F I1213 00:21:37.529153 1 flags.go:64] FLAG: --secure-port="10257" 2025-12-13T00:21:37.529161475+00:00 stderr F I1213 00:21:37.529156 1 flags.go:64] FLAG: --service-account-private-key-file="/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key" 2025-12-13T00:21:37.529168305+00:00 stderr F I1213 00:21:37.529160 1 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23" 2025-12-13T00:21:37.529168305+00:00 stderr F I1213 00:21:37.529163 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-12-13T00:21:37.529168305+00:00 stderr F I1213 00:21:37.529165 1 flags.go:64] FLAG: --terminated-pod-gc-threshold="12500" 2025-12-13T00:21:37.529175555+00:00 stderr F I1213 00:21:37.529168 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-12-13T00:21:37.529206576+00:00 stderr F I1213 00:21:37.529172 1 flags.go:64] FLAG: 
--tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-12-13T00:21:37.529206576+00:00 stderr F I1213 00:21:37.529195 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-12-13T00:21:37.529206576+00:00 stderr F I1213 00:21:37.529199 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:21:37.529206576+00:00 stderr F I1213 00:21:37.529202 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-12-13T00:21:37.529215636+00:00 stderr F I1213 00:21:37.529207 1 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55" 2025-12-13T00:21:37.529215636+00:00 stderr F I1213 00:21:37.529210 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-12-13T00:21:37.529215636+00:00 stderr F I1213 00:21:37.529213 1 flags.go:64] FLAG: --use-service-account-credentials="true" 2025-12-13T00:21:37.529222976+00:00 stderr F I1213 00:21:37.529216 1 flags.go:64] FLAG: --v="2" 2025-12-13T00:21:37.529222976+00:00 stderr F I1213 00:21:37.529219 1 flags.go:64] FLAG: --version="false" 2025-12-13T00:21:37.529229767+00:00 stderr F I1213 00:21:37.529223 1 flags.go:64] FLAG: --vmodule="" 2025-12-13T00:21:37.529236297+00:00 stderr F I1213 00:21:37.529229 1 flags.go:64] FLAG: --volume-host-allow-local-loopback="true" 2025-12-13T00:21:37.529246637+00:00 stderr F I1213 00:21:37.529232 1 flags.go:64] FLAG: --volume-host-cidr-denylist="[]" 2025-12-13T00:21:37.531586475+00:00 stderr F I1213 00:21:37.531553 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:21:37.799054442+00:00 stderr F I1213 00:21:37.797466 1 
dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-12-13T00:21:37.799054442+00:00 stderr F I1213 00:21:37.797664 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-12-13T00:21:37.800847966+00:00 stderr F I1213 00:21:37.800790 1 controllermanager.go:203] "Starting" version="v1.29.5+29c95f3" 2025-12-13T00:21:37.800847966+00:00 stderr F I1213 00:21:37.800817 1 controllermanager.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2025-12-13T00:21:37.802743643+00:00 stderr F I1213 00:21:37.802651 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-12-13T00:21:37.802743643+00:00 stderr F I1213 00:21:37.802699 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-12-13T00:21:37.803315717+00:00 stderr F I1213 00:21:37.803271 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:21:37.803424380+00:00 stderr F I1213 00:21:37.803318 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:21:37.803285536 +0000 UTC))" 2025-12-13T00:21:37.803443170+00:00 stderr F I1213 00:21:37.803429 1 tlsconfig.go:178] "Loaded client CA" index=1 
certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:21:37.803405749 +0000 UTC))" 2025-12-13T00:21:37.803490751+00:00 stderr F I1213 00:21:37.803455 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:21:37.80343751 +0000 UTC))" 2025-12-13T00:21:37.803490751+00:00 stderr F I1213 00:21:37.803483 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:21:37.803466021 +0000 UTC))" 2025-12-13T00:21:37.803525512+00:00 stderr F I1213 00:21:37.803505 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 
00:21:37.803489561 +0000 UTC))" 2025-12-13T00:21:37.803541143+00:00 stderr F I1213 00:21:37.803526 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:21:37.803511272 +0000 UTC))" 2025-12-13T00:21:37.803560213+00:00 stderr F I1213 00:21:37.803548 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:21:37.803533802 +0000 UTC))" 2025-12-13T00:21:37.803606054+00:00 stderr F I1213 00:21:37.803574 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:21:37.803559223 +0000 UTC))" 2025-12-13T00:21:37.803606054+00:00 stderr F I1213 00:21:37.803597 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 
UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:21:37.803584104 +0000 UTC))" 2025-12-13T00:21:37.803653435+00:00 stderr F I1213 00:21:37.803620 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:21:37.803605244 +0000 UTC))" 2025-12-13T00:21:37.803673526+00:00 stderr F I1213 00:21:37.803662 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:21:37.803636475 +0000 UTC))" 2025-12-13T00:21:37.804223769+00:00 stderr F I1213 00:21:37.804168 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-12-13 00:21:37.804125147 +0000 UTC))" 2025-12-13T00:21:37.804642099+00:00 stderr F I1213 00:21:37.804584 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" 
certDetail="\"apiserver-loopback-client@1765585297\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765585297\" (2025-12-12 23:21:37 +0000 UTC to 2026-12-12 23:21:37 +0000 UTC (now=2025-12-13 00:21:37.804553767 +0000 UTC))" 2025-12-13T00:21:37.804642099+00:00 stderr F I1213 00:21:37.804625 1 secure_serving.go:213] Serving securely on [::]:10257 2025-12-13T00:21:37.804683520+00:00 stderr F I1213 00:21:37.804656 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:21:37.804990639+00:00 stderr F I1213 00:21:37.804927 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager... ././@LongLink0000644000000000000000000000031300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000016573715117130646033105 0ustar zuulzuul2025-12-13T00:06:36.205602718+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done' 2025-12-13T00:06:36.211045631+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')' 2025-12-13T00:06:36.218077899+00:00 stderr F + '[' -n '' ']' 2025-12-13T00:06:36.218826960+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ']' 2025-12-13T00:06:36.219069267+00:00 stderr F + echo 'Copying system trust bundle' 2025-12-13T00:06:36.219123828+00:00 stdout F Copying system trust bundle 2025-12-13T00:06:36.219178290+00:00 stderr F + cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem 2025-12-13T00:06:36.234060108+00:00 stderr F + '[' -f 
/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ']' 2025-12-13T00:06:36.234625175+00:00 stderr P + exec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key '--controllers=*' --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false 
--feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false 
--feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false 2025-12-13T00:06:36.234680777+00:00 stderr F --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 2025-12-13T00:06:36.412112068+00:00 stderr F W1213 00:06:36.411789 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-12-13T00:06:36.412169399+00:00 stderr F W1213 00:06:36.412145 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-12-13T00:06:36.412575000+00:00 stderr F W1213 00:06:36.412290 1 
feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-12-13T00:06:36.412640082+00:00 stderr F W1213 00:06:36.412617 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-12-13T00:06:36.412700974+00:00 stderr F W1213 00:06:36.412670 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-12-13T00:06:36.412742935+00:00 stderr F W1213 00:06:36.412721 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-12-13T00:06:36.412793016+00:00 stderr F W1213 00:06:36.412772 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-12-13T00:06:36.412842498+00:00 stderr F W1213 00:06:36.412823 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-12-13T00:06:36.412944071+00:00 stderr F W1213 00:06:36.412920 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-12-13T00:06:36.412991042+00:00 stderr F W1213 00:06:36.412972 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-12-13T00:06:36.413047414+00:00 stderr F W1213 00:06:36.413027 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-12-13T00:06:36.413094435+00:00 stderr F W1213 00:06:36.413075 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-12-13T00:06:36.413144676+00:00 stderr F W1213 00:06:36.413125 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-12-13T00:06:36.413194298+00:00 stderr F W1213 00:06:36.413175 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-12-13T00:06:36.413239589+00:00 stderr F W1213 00:06:36.413221 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-12-13T00:06:36.413285830+00:00 stderr F W1213 00:06:36.413261 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-12-13T00:06:36.413320031+00:00 stderr F W1213 00:06:36.413298 1 feature_gate.go:227] unrecognized feature gate: 
ClusterAPIInstallVSphere 2025-12-13T00:06:36.413360532+00:00 stderr F W1213 00:06:36.413338 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-12-13T00:06:36.413519278+00:00 stderr F W1213 00:06:36.413482 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-12-13T00:06:36.413583209+00:00 stderr F W1213 00:06:36.413560 1 feature_gate.go:227] unrecognized feature gate: Example 2025-12-13T00:06:36.413632401+00:00 stderr F W1213 00:06:36.413610 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-12-13T00:06:36.413678072+00:00 stderr F W1213 00:06:36.413656 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-12-13T00:06:36.413720603+00:00 stderr F W1213 00:06:36.413699 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-12-13T00:06:36.413761334+00:00 stderr F W1213 00:06:36.413739 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-12-13T00:06:36.413801045+00:00 stderr F W1213 00:06:36.413780 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-12-13T00:06:36.413841237+00:00 stderr F W1213 00:06:36.413819 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-12-13T00:06:36.413884268+00:00 stderr F W1213 00:06:36.413861 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-12-13T00:06:36.413927239+00:00 stderr F W1213 00:06:36.413902 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-12-13T00:06:36.413963550+00:00 stderr F W1213 00:06:36.413940 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-12-13T00:06:36.414006621+00:00 stderr F W1213 00:06:36.413981 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-12-13T00:06:36.414045432+00:00 stderr F W1213 00:06:36.414020 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-12-13T00:06:36.414085363+00:00 stderr F W1213 00:06:36.414059 1 feature_gate.go:227] 
unrecognized feature gate: InsightsConfigAPI 2025-12-13T00:06:36.414135195+00:00 stderr F W1213 00:06:36.414112 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-12-13T00:06:36.414179376+00:00 stderr F W1213 00:06:36.414158 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-12-13T00:06:36.414233668+00:00 stderr F W1213 00:06:36.414207 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2025-12-13T00:06:36.414283579+00:00 stderr F W1213 00:06:36.414261 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-12-13T00:06:36.414330360+00:00 stderr F W1213 00:06:36.414307 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-12-13T00:06:36.414373581+00:00 stderr F W1213 00:06:36.414351 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-12-13T00:06:36.414464464+00:00 stderr F W1213 00:06:36.414428 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-12-13T00:06:36.414536016+00:00 stderr F W1213 00:06:36.414514 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-12-13T00:06:36.414577917+00:00 stderr F W1213 00:06:36.414557 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-12-13T00:06:36.414620898+00:00 stderr F W1213 00:06:36.414599 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-12-13T00:06:36.414662769+00:00 stderr F W1213 00:06:36.414642 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-12-13T00:06:36.414702641+00:00 stderr F W1213 00:06:36.414682 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-12-13T00:06:36.414744232+00:00 stderr F W1213 00:06:36.414724 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-12-13T00:06:36.414784743+00:00 stderr F W1213 00:06:36.414764 1 
feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-12-13T00:06:36.414855795+00:00 stderr F W1213 00:06:36.414834 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-12-13T00:06:36.414899126+00:00 stderr F W1213 00:06:36.414874 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-12-13T00:06:36.414935607+00:00 stderr F W1213 00:06:36.414915 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-12-13T00:06:36.414976578+00:00 stderr F W1213 00:06:36.414956 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-12-13T00:06:36.415014109+00:00 stderr F W1213 00:06:36.414993 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-12-13T00:06:36.415229915+00:00 stderr F W1213 00:06:36.415189 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-12-13T00:06:36.415277767+00:00 stderr F W1213 00:06:36.415256 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-12-13T00:06:36.416183022+00:00 stderr F W1213 00:06:36.416068 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-12-13T00:06:36.416183022+00:00 stderr F W1213 00:06:36.416121 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-12-13T00:06:36.416183022+00:00 stderr F W1213 00:06:36.416162 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-12-13T00:06:36.416249914+00:00 stderr F W1213 00:06:36.416202 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-12-13T00:06:36.416266614+00:00 stderr F W1213 00:06:36.416253 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-12-13T00:06:36.416418919+00:00 stderr F W1213 00:06:36.416335 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-12-13T00:06:36.416564473+00:00 stderr F I1213 00:06:36.416506 1 flags.go:64] FLAG: --allocate-node-cidrs="false" 
2025-12-13T00:06:36.416564473+00:00 stderr F I1213 00:06:36.416527 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-12-13T00:06:36.416564473+00:00 stderr F I1213 00:06:36.416536 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-12-13T00:06:36.416564473+00:00 stderr F I1213 00:06:36.416542 1 flags.go:64] FLAG: --allow-untagged-cloud="false" 2025-12-13T00:06:36.416564473+00:00 stderr F I1213 00:06:36.416547 1 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s" 2025-12-13T00:06:36.416564473+00:00 stderr F I1213 00:06:36.416553 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-12-13T00:06:36.416592734+00:00 stderr F I1213 00:06:36.416561 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-12-13T00:06:36.416592734+00:00 stderr F I1213 00:06:36.416568 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-12-13T00:06:36.416592734+00:00 stderr F I1213 00:06:36.416574 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-12-13T00:06:36.416592734+00:00 stderr F I1213 00:06:36.416580 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-12-13T00:06:36.416636935+00:00 stderr F I1213 00:06:36.416593 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-12-13T00:06:36.416636935+00:00 stderr F I1213 00:06:36.416601 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-12-13T00:06:36.416636935+00:00 stderr F I1213 00:06:36.416607 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-12-13T00:06:36.416636935+00:00 stderr F I1213 00:06:36.416613 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-12-13T00:06:36.416636935+00:00 stderr F I1213 00:06:36.416621 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-12-13T00:06:36.416636935+00:00 
stderr F I1213 00:06:36.416627 1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator" 2025-12-13T00:06:36.416667666+00:00 stderr F I1213 00:06:36.416633 1 flags.go:64] FLAG: --client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-12-13T00:06:36.416667666+00:00 stderr F I1213 00:06:36.416642 1 flags.go:64] FLAG: --cloud-config="" 2025-12-13T00:06:36.416667666+00:00 stderr F I1213 00:06:36.416648 1 flags.go:64] FLAG: --cloud-provider="" 2025-12-13T00:06:36.416667666+00:00 stderr F I1213 00:06:36.416654 1 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2025-12-13T00:06:36.416694157+00:00 stderr F I1213 00:06:36.416668 1 flags.go:64] FLAG: --cluster-cidr="10.217.0.0/22" 2025-12-13T00:06:36.416694157+00:00 stderr F I1213 00:06:36.416677 1 flags.go:64] FLAG: --cluster-name="crc-d8rkd" 2025-12-13T00:06:36.416694157+00:00 stderr F I1213 00:06:36.416683 1 flags.go:64] FLAG: --cluster-signing-cert-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt" 2025-12-13T00:06:36.416715097+00:00 stderr F I1213 00:06:36.416690 1 flags.go:64] FLAG: --cluster-signing-duration="8760h0m0s" 2025-12-13T00:06:36.416715097+00:00 stderr F I1213 00:06:36.416697 1 flags.go:64] FLAG: --cluster-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-12-13T00:06:36.416715097+00:00 stderr F I1213 00:06:36.416704 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file="" 2025-12-13T00:06:36.416715097+00:00 stderr F I1213 00:06:36.416710 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-key-file="" 2025-12-13T00:06:36.416734378+00:00 stderr F I1213 00:06:36.416716 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file="" 2025-12-13T00:06:36.416734378+00:00 stderr F I1213 00:06:36.416722 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file="" 2025-12-13T00:06:36.416734378+00:00 stderr F I1213 00:06:36.416727 1 
flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file="" 2025-12-13T00:06:36.416752968+00:00 stderr F I1213 00:06:36.416733 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-key-file="" 2025-12-13T00:06:36.416752968+00:00 stderr F I1213 00:06:36.416739 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file="" 2025-12-13T00:06:36.416752968+00:00 stderr F I1213 00:06:36.416744 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file="" 2025-12-13T00:06:36.416771819+00:00 stderr F I1213 00:06:36.416750 1 flags.go:64] FLAG: --concurrent-cron-job-syncs="5" 2025-12-13T00:06:36.416771819+00:00 stderr F I1213 00:06:36.416757 1 flags.go:64] FLAG: --concurrent-deployment-syncs="5" 2025-12-13T00:06:36.416771819+00:00 stderr F I1213 00:06:36.416763 1 flags.go:64] FLAG: --concurrent-endpoint-syncs="5" 2025-12-13T00:06:36.416806710+00:00 stderr F I1213 00:06:36.416768 1 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5" 2025-12-13T00:06:36.416806710+00:00 stderr F I1213 00:06:36.416775 1 flags.go:64] FLAG: --concurrent-gc-syncs="20" 2025-12-13T00:06:36.416806710+00:00 stderr F I1213 00:06:36.416780 1 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5" 2025-12-13T00:06:36.416806710+00:00 stderr F I1213 00:06:36.416786 1 flags.go:64] FLAG: --concurrent-job-syncs="5" 2025-12-13T00:06:36.416806710+00:00 stderr F I1213 00:06:36.416790 1 flags.go:64] FLAG: --concurrent-namespace-syncs="10" 2025-12-13T00:06:36.416806710+00:00 stderr F I1213 00:06:36.416794 1 flags.go:64] FLAG: --concurrent-rc-syncs="5" 2025-12-13T00:06:36.416806710+00:00 stderr F I1213 00:06:36.416798 1 flags.go:64] FLAG: --concurrent-replicaset-syncs="5" 2025-12-13T00:06:36.416837751+00:00 stderr F I1213 00:06:36.416803 1 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5" 2025-12-13T00:06:36.416837751+00:00 stderr F I1213 00:06:36.416809 1 flags.go:64] FLAG: --concurrent-service-endpoint-syncs="5" 2025-12-13T00:06:36.416837751+00:00 stderr F I1213 
00:06:36.416813 1 flags.go:64] FLAG: --concurrent-service-syncs="1" 2025-12-13T00:06:36.416837751+00:00 stderr F I1213 00:06:36.416817 1 flags.go:64] FLAG: --concurrent-serviceaccount-token-syncs="5" 2025-12-13T00:06:36.416837751+00:00 stderr F I1213 00:06:36.416821 1 flags.go:64] FLAG: --concurrent-statefulset-syncs="5" 2025-12-13T00:06:36.416837751+00:00 stderr F I1213 00:06:36.416825 1 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5" 2025-12-13T00:06:36.416837751+00:00 stderr F I1213 00:06:36.416829 1 flags.go:64] FLAG: --concurrent-validating-admission-policy-status-syncs="5" 2025-12-13T00:06:36.416837751+00:00 stderr F I1213 00:06:36.416833 1 flags.go:64] FLAG: --configure-cloud-routes="true" 2025-12-13T00:06:36.416859951+00:00 stderr F I1213 00:06:36.416837 1 flags.go:64] FLAG: --contention-profiling="false" 2025-12-13T00:06:36.416859951+00:00 stderr F I1213 00:06:36.416842 1 flags.go:64] FLAG: --controller-start-interval="0s" 2025-12-13T00:06:36.416859951+00:00 stderr F I1213 00:06:36.416846 1 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]" 2025-12-13T00:06:36.416859951+00:00 stderr F I1213 00:06:36.416852 1 flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false" 2025-12-13T00:06:36.416878922+00:00 stderr F I1213 00:06:36.416857 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-12-13T00:06:36.416878922+00:00 stderr F I1213 00:06:36.416864 1 flags.go:64] FLAG: --enable-dynamic-provisioning="true" 2025-12-13T00:06:36.416878922+00:00 stderr F I1213 00:06:36.416869 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-12-13T00:06:36.416878922+00:00 stderr F I1213 00:06:36.416873 1 flags.go:64] FLAG: --enable-hostpath-provisioner="false" 2025-12-13T00:06:36.416898123+00:00 stderr F I1213 00:06:36.416877 1 flags.go:64] FLAG: --enable-leader-migration="false" 2025-12-13T00:06:36.416898123+00:00 stderr F I1213 00:06:36.416882 1 flags.go:64] FLAG: --endpoint-updates-batch-period="0s" 
2025-12-13T00:06:36.416898123+00:00 stderr F I1213 00:06:36.416886 1 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s" 2025-12-13T00:06:36.416898123+00:00 stderr F I1213 00:06:36.416890 1 flags.go:64] FLAG: --external-cloud-volume-plugin="" 2025-12-13T00:06:36.416960264+00:00 stderr F I1213 00:06:36.416894 1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-12-13T00:06:36.416960264+00:00 stderr F I1213 00:06:36.416933 1 flags.go:64] FLAG: --flex-volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" 2025-12-13T00:06:36.416960264+00:00 stderr F I1213 00:06:36.416943 1 flags.go:64] FLAG: --help="false" 2025-12-13T00:06:36.416960264+00:00 stderr F I1213 00:06:36.416949 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s" 2025-12-13T00:06:36.416960264+00:00 stderr F I1213 00:06:36.416955 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s" 2025-12-13T00:06:36.416996385+00:00 stderr F I1213 00:06:36.416960 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s" 2025-12-13T00:06:36.416996385+00:00 stderr F I1213 00:06:36.416966 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s" 2025-12-13T00:06:36.416996385+00:00 stderr F I1213 00:06:36.416972 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s" 2025-12-13T00:06:36.416996385+00:00 stderr F I1213 00:06:36.416977 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1" 2025-12-13T00:06:36.416996385+00:00 stderr F I1213 00:06:36.416986 1 flags.go:64] FLAG: 
--horizontal-pod-autoscaler-upscale-delay="3m0s" 2025-12-13T00:06:36.417021826+00:00 stderr F I1213 00:06:36.416992 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-12-13T00:06:36.417021826+00:00 stderr F I1213 00:06:36.416999 1 flags.go:64] FLAG: --kube-api-burst="300" 2025-12-13T00:06:36.417021826+00:00 stderr F I1213 00:06:36.417005 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-12-13T00:06:36.417021826+00:00 stderr F I1213 00:06:36.417011 1 flags.go:64] FLAG: --kube-api-qps="150" 2025-12-13T00:06:36.417045577+00:00 stderr F I1213 00:06:36.417019 1 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-12-13T00:06:36.417045577+00:00 stderr F I1213 00:06:36.417027 1 flags.go:64] FLAG: --large-cluster-size-threshold="50" 2025-12-13T00:06:36.417045577+00:00 stderr F I1213 00:06:36.417033 1 flags.go:64] FLAG: --leader-elect="true" 2025-12-13T00:06:36.417045577+00:00 stderr F I1213 00:06:36.417038 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-12-13T00:06:36.417069017+00:00 stderr F I1213 00:06:36.417044 1 flags.go:64] FLAG: --leader-elect-renew-deadline="12s" 2025-12-13T00:06:36.417069017+00:00 stderr F I1213 00:06:36.417049 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-12-13T00:06:36.417069017+00:00 stderr F I1213 00:06:36.417055 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager" 2025-12-13T00:06:36.417069017+00:00 stderr F I1213 00:06:36.417061 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-12-13T00:06:36.417092238+00:00 stderr F I1213 00:06:36.417067 1 flags.go:64] FLAG: --leader-elect-retry-period="3s" 2025-12-13T00:06:36.417092238+00:00 stderr F I1213 00:06:36.417073 1 flags.go:64] FLAG: --leader-migration-config="" 2025-12-13T00:06:36.417092238+00:00 stderr F I1213 00:06:36.417078 1 flags.go:64] FLAG: 
--legacy-service-account-token-clean-up-period="8760h0m0s" 2025-12-13T00:06:36.417092238+00:00 stderr F I1213 00:06:36.417084 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-12-13T00:06:36.417115999+00:00 stderr F I1213 00:06:36.417089 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-12-13T00:06:36.417115999+00:00 stderr F I1213 00:06:36.417100 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-12-13T00:06:36.417115999+00:00 stderr F I1213 00:06:36.417106 1 flags.go:64] FLAG: --logging-format="text" 2025-12-13T00:06:36.417115999+00:00 stderr F I1213 00:06:36.417111 1 flags.go:64] FLAG: --master="" 2025-12-13T00:06:36.417139049+00:00 stderr F I1213 00:06:36.417119 1 flags.go:64] FLAG: --max-endpoints-per-slice="100" 2025-12-13T00:06:36.417139049+00:00 stderr F I1213 00:06:36.417125 1 flags.go:64] FLAG: --min-resync-period="12h0m0s" 2025-12-13T00:06:36.417139049+00:00 stderr F I1213 00:06:36.417131 1 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5" 2025-12-13T00:06:36.417159020+00:00 stderr F I1213 00:06:36.417136 1 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s" 2025-12-13T00:06:36.417159020+00:00 stderr F I1213 00:06:36.417142 1 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000" 2025-12-13T00:06:36.417159020+00:00 stderr F I1213 00:06:36.417148 1 flags.go:64] FLAG: --namespace-sync-period="5m0s" 2025-12-13T00:06:36.417159020+00:00 stderr F I1213 00:06:36.417154 1 flags.go:64] FLAG: --node-cidr-mask-size="0" 2025-12-13T00:06:36.417187381+00:00 stderr F I1213 00:06:36.417159 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0" 2025-12-13T00:06:36.417187381+00:00 stderr F I1213 00:06:36.417165 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0" 2025-12-13T00:06:36.417187381+00:00 stderr F I1213 00:06:36.417170 1 flags.go:64] FLAG: --node-eviction-rate="0.1" 2025-12-13T00:06:36.417187381+00:00 stderr F I1213 00:06:36.417177 1 flags.go:64] FLAG: --node-monitor-grace-period="40s" 
2025-12-13T00:06:36.417187381+00:00 stderr F I1213 00:06:36.417182 1 flags.go:64] FLAG: --node-monitor-period="5s" 2025-12-13T00:06:36.417207331+00:00 stderr F I1213 00:06:36.417189 1 flags.go:64] FLAG: --node-startup-grace-period="1m0s" 2025-12-13T00:06:36.417207331+00:00 stderr F I1213 00:06:36.417195 1 flags.go:64] FLAG: --node-sync-period="0s" 2025-12-13T00:06:36.417207331+00:00 stderr F I1213 00:06:36.417200 1 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-12-13T00:06:36.417225852+00:00 stderr F I1213 00:06:36.417207 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-12-13T00:06:36.417225852+00:00 stderr F I1213 00:06:36.417213 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-12-13T00:06:36.417225852+00:00 stderr F I1213 00:06:36.417219 1 flags.go:64] FLAG: --profiling="true" 2025-12-13T00:06:36.417244302+00:00 stderr F I1213 00:06:36.417225 1 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30" 2025-12-13T00:06:36.417244302+00:00 stderr F I1213 00:06:36.417231 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60" 2025-12-13T00:06:36.417244302+00:00 stderr F I1213 00:06:36.417237 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300" 2025-12-13T00:06:36.417264063+00:00 stderr F I1213 00:06:36.417243 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-12-13T00:06:36.417264063+00:00 stderr F I1213 00:06:36.417251 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-12-13T00:06:36.417264063+00:00 stderr F I1213 00:06:36.417259 1 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30" 2025-12-13T00:06:36.417282713+00:00 stderr F I1213 00:06:36.417265 1 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s" 2025-12-13T00:06:36.417282713+00:00 stderr F I1213 
00:06:36.417271 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-12-13T00:06:36.417300184+00:00 stderr F I1213 00:06:36.417279 1 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-12-13T00:06:36.417300184+00:00 stderr F I1213 00:06:36.417288 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-12-13T00:06:36.417317904+00:00 stderr F I1213 00:06:36.417299 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-12-13T00:06:36.417317904+00:00 stderr F I1213 00:06:36.417307 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-12-13T00:06:36.417335345+00:00 stderr F I1213 00:06:36.417314 1 flags.go:64] FLAG: --resource-quota-sync-period="5m0s" 2025-12-13T00:06:36.417335345+00:00 stderr F I1213 00:06:36.417320 1 flags.go:64] FLAG: --root-ca-file="/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt" 2025-12-13T00:06:36.417335345+00:00 stderr F I1213 00:06:36.417325 1 flags.go:64] FLAG: --route-reconciliation-period="10s" 2025-12-13T00:06:36.417335345+00:00 stderr F I1213 00:06:36.417330 1 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01" 2025-12-13T00:06:36.417362115+00:00 stderr F I1213 00:06:36.417335 1 flags.go:64] FLAG: --secure-port="10257" 2025-12-13T00:06:36.417362115+00:00 stderr F I1213 00:06:36.417340 1 flags.go:64] FLAG: --service-account-private-key-file="/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key" 2025-12-13T00:06:36.417362115+00:00 stderr F I1213 00:06:36.417346 1 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23" 2025-12-13T00:06:36.417362115+00:00 stderr F I1213 00:06:36.417351 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-12-13T00:06:36.417362115+00:00 stderr F I1213 00:06:36.417355 1 flags.go:64] FLAG: --terminated-pod-gc-threshold="12500" 2025-12-13T00:06:36.417382026+00:00 
stderr F I1213 00:06:36.417360 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-12-13T00:06:36.417382026+00:00 stderr F I1213 00:06:36.417365 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-12-13T00:06:36.417382026+00:00 stderr F I1213 00:06:36.417375 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-12-13T00:06:36.417452178+00:00 stderr F I1213 00:06:36.417381 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:06:36.417452178+00:00 stderr F I1213 00:06:36.417387 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-12-13T00:06:36.417452178+00:00 stderr F I1213 00:06:36.417421 1 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55" 2025-12-13T00:06:36.417452178+00:00 stderr F I1213 00:06:36.417427 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-12-13T00:06:36.417452178+00:00 stderr F I1213 00:06:36.417432 1 flags.go:64] FLAG: --use-service-account-credentials="true" 2025-12-13T00:06:36.417452178+00:00 stderr F I1213 00:06:36.417436 1 flags.go:64] FLAG: --v="2" 2025-12-13T00:06:36.417452178+00:00 stderr F I1213 00:06:36.417441 1 flags.go:64] FLAG: --version="false" 2025-12-13T00:06:36.417479799+00:00 stderr F I1213 00:06:36.417448 1 flags.go:64] FLAG: --vmodule="" 2025-12-13T00:06:36.417479799+00:00 stderr F I1213 00:06:36.417454 1 flags.go:64] FLAG: --volume-host-allow-local-loopback="true" 2025-12-13T00:06:36.417479799+00:00 stderr F I1213 00:06:36.417458 1 flags.go:64] FLAG: --volume-host-cidr-denylist="[]" 2025-12-13T00:06:36.426696528+00:00 stderr F I1213 00:06:36.426600 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" 
name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:06:36.614615044+00:00 stderr F I1213 00:06:36.614528 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-12-13T00:06:36.614798799+00:00 stderr F I1213 00:06:36.614757 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-12-13T00:06:36.618727980+00:00 stderr F I1213 00:06:36.618641 1 controllermanager.go:203] "Starting" version="v1.29.5+29c95f3" 2025-12-13T00:06:36.618727980+00:00 stderr F I1213 00:06:36.618681 1 controllermanager.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2025-12-13T00:06:36.622224018+00:00 stderr F I1213 00:06:36.622157 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-12-13T00:06:36.622320261+00:00 stderr F I1213 00:06:36.622164 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-12-13T00:06:36.622856056+00:00 stderr F I1213 00:06:36.622817 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:06:36.623078962+00:00 stderr F I1213 00:06:36.623034 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" 
(2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:06:36.62298888 +0000 UTC))" 2025-12-13T00:06:36.623098163+00:00 stderr F I1213 00:06:36.623081 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:06:36.623064862 +0000 UTC))" 2025-12-13T00:06:36.623114093+00:00 stderr F I1213 00:06:36.623103 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:06:36.623090273 +0000 UTC))" 2025-12-13T00:06:36.623129904+00:00 stderr F I1213 00:06:36.623118 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:06:36.623108023 +0000 UTC))" 2025-12-13T00:06:36.623146274+00:00 stderr F I1213 00:06:36.623134 1 tlsconfig.go:178] "Loaded client CA" index=4 
certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:06:36.623124093 +0000 UTC))" 2025-12-13T00:06:36.623162034+00:00 stderr F I1213 00:06:36.623151 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:06:36.623140864 +0000 UTC))" 2025-12-13T00:06:36.623177705+00:00 stderr F I1213 00:06:36.623166 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:06:36.623155884 +0000 UTC))" 2025-12-13T00:06:36.623205116+00:00 stderr F I1213 00:06:36.623182 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:06:36.623171555 +0000 UTC))" 2025-12-13T00:06:36.623205116+00:00 stderr F I1213 
00:06:36.623197 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:06:36.623187095 +0000 UTC))" 2025-12-13T00:06:36.623225646+00:00 stderr F I1213 00:06:36.623213 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:06:36.623203306 +0000 UTC))" 2025-12-13T00:06:36.623572026+00:00 stderr F I1213 00:06:36.623532 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-12-13 00:06:36.623516494 +0000 UTC))" 2025-12-13T00:06:36.623912486+00:00 stderr F I1213 00:06:36.623881 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584396\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584396\" (2025-12-12 23:06:36 
+0000 UTC to 2026-12-12 23:06:36 +0000 UTC (now=2025-12-13 00:06:36.623775053 +0000 UTC))" 2025-12-13T00:06:36.623929947+00:00 stderr F I1213 00:06:36.623916 1 secure_serving.go:213] Serving securely on [::]:10257 2025-12-13T00:06:36.624024729+00:00 stderr F I1213 00:06:36.623978 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:06:36.624588875+00:00 stderr F I1213 00:06:36.624532 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager... 2025-12-13T00:06:36.629029249+00:00 stderr F E1213 00:06:36.628916 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:40.098022168+00:00 stderr F E1213 00:06:40.097863 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:44.492904663+00:00 stderr F E1213 00:06:44.492813 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:50.897626268+00:00 stderr F E1213 00:06:50.897545 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 
2025-12-13T00:06:56.920683746+00:00 stderr F E1213 00:06:56.920615 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:00.675826193+00:00 stderr F E1213 00:07:00.675703 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:06.308543181+00:00 stderr F E1213 00:07:06.308314 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:09.716096210+00:00 stderr F E1213 00:07:09.716008 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:14.343581558+00:00 stderr F E1213 00:07:14.343491 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:19.768182631+00:00 stderr F E1213 00:07:19.768029 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get 
"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:23.542272782+00:00 stderr F E1213 00:07:23.542150 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:27.992320628+00:00 stderr F E1213 00:07:27.991673 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:33.016633261+00:00 stderr F E1213 00:07:33.016532 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:36.860731501+00:00 stderr F E1213 00:07:36.860570 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:42.250432112+00:00 stderr F E1213 00:07:42.250345 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 
199.204.44.24:53: no such host 2025-12-13T00:07:46.067609773+00:00 stderr F E1213 00:07:46.067521 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:49.403138771+00:00 stderr F E1213 00:07:49.403084 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:53.954582157+00:00 stderr F E1213 00:07:53.954486 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:58.501789031+00:00 stderr F E1213 00:07:58.501639 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:08:04.644267087+00:00 stderr F E1213 00:08:04.643738 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:08:07.798290098+00:00 stderr F E1213 00:08:07.798229 1 leaderelection.go:332] error retrieving resource lock 
kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:08:11.860578641+00:00 stderr F E1213 00:08:11.860461 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:08:15.492757989+00:00 stderr F E1213 00:08:15.492553 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:08:21.604090549+00:00 stderr F E1213 00:08:21.604005 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:08:25.367857252+00:00 stderr F E1213 00:08:25.367710 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:08:29.374977218+00:00 stderr F E1213 00:08:29.374871 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: 
lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:08:33.441348712+00:00 stderr F E1213 00:08:33.441233 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:08:38.849485613+00:00 stderr F E1213 00:08:38.849351 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:08:44.400515976+00:00 stderr F E1213 00:08:44.400384 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:08:50.899572011+00:00 stderr F E1213 00:08:50.899452 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:08:56.703832289+00:00 stderr F E1213 00:08:56.703678 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:03.250864297+00:00 stderr F E1213 00:09:03.250762 1 leaderelection.go:332] error retrieving resource 
lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:07.558199014+00:00 stderr F E1213 00:09:07.558026 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:11.076661966+00:00 stderr F E1213 00:09:11.076450 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:14.637904544+00:00 stderr F E1213 00:09:14.637757 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:18.394804279+00:00 stderr F E1213 00:09:18.394714 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable 2025-12-13T00:09:24.319701390+00:00 stderr F E1213 00:09:24.319565 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get 
"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:27.988260987+00:00 stderr F E1213 00:09:27.988091 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:33.521844602+00:00 stderr F E1213 00:09:33.521297 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:38.044488894+00:00 stderr F E1213 00:09:38.044346 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:44.337010714+00:00 stderr F E1213 00:09:44.335925 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:49.911883423+00:00 stderr F E1213 00:09:49.911787 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-controller-manager: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=6s": dial tcp: lookup api-int.crc.testing on 
199.204.44.24:53: no such host
2025-12-13T00:09:51.150915004+00:00 stderr F I1213 00:09:51.148610 1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
2025-12-13T00:09:51.150915004+00:00 stderr F I1213 00:09:51.149712 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2025-12-13T00:09:51.150915004+00:00 stderr F I1213 00:09:51.150538 1 controllermanager.go:332] Requested to terminate. Exiting.

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/4.log:

2025-12-13T00:21:46.273746861+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10257 \))" ]; do sleep 1; done'
2025-12-13T00:21:46.280488727+00:00 stderr F ++ ss -Htanop '(' sport = 10257 ')'
2025-12-13T00:21:46.285802208+00:00 stderr F + '[' -n '' ']'
2025-12-13T00:21:46.286673549+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ']'
2025-12-13T00:21:46.286688160+00:00 stderr F + echo 'Copying system trust bundle'
2025-12-13T00:21:46.286696040+00:00 stdout F Copying system trust bundle
2025-12-13T00:21:46.286703200+00:00 stderr F + cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
2025-12-13T00:21:46.290659478+00:00 stderr F + '[' -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ']'
2025-12-13T00:21:46.291118469+00:00 stderr F + exec hyperkube kube-controller-manager
--openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key '--controllers=*' --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false 
--feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false 
--feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 2025-12-13T00:21:46.366800586+00:00 stderr F W1213 00:21:46.366612 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-12-13T00:21:46.366899428+00:00 stderr F W1213 00:21:46.366796 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-12-13T00:21:46.366899428+00:00 stderr F W1213 00:21:46.366840 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-12-13T00:21:46.366899428+00:00 stderr F W1213 00:21:46.366872 1 feature_gate.go:227] unrecognized feature gate: 
AzureWorkloadIdentity 2025-12-13T00:21:46.366919209+00:00 stderr F W1213 00:21:46.366903 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-12-13T00:21:46.366993751+00:00 stderr F W1213 00:21:46.366966 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-12-13T00:21:46.367013971+00:00 stderr F W1213 00:21:46.367000 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-12-13T00:21:46.367063602+00:00 stderr F W1213 00:21:46.367036 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-12-13T00:21:46.367152495+00:00 stderr F W1213 00:21:46.367113 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-12-13T00:21:46.367171115+00:00 stderr F W1213 00:21:46.367153 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-12-13T00:21:46.367247207+00:00 stderr F W1213 00:21:46.367188 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-12-13T00:21:46.367247207+00:00 stderr F W1213 00:21:46.367230 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-12-13T00:21:46.367334479+00:00 stderr F W1213 00:21:46.367263 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-12-13T00:21:46.367334479+00:00 stderr F W1213 00:21:46.367310 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-12-13T00:21:46.367371180+00:00 stderr F W1213 00:21:46.367348 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-12-13T00:21:46.367465092+00:00 stderr F W1213 00:21:46.367405 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-12-13T00:21:46.367465092+00:00 stderr F W1213 00:21:46.367457 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-12-13T00:21:46.367567055+00:00 stderr F W1213 00:21:46.367503 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-12-13T00:21:46.367658017+00:00 
stderr F W1213 00:21:46.367621 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-12-13T00:21:46.367766549+00:00 stderr F W1213 00:21:46.367732 1 feature_gate.go:227] unrecognized feature gate: Example 2025-12-13T00:21:46.367783420+00:00 stderr F W1213 00:21:46.367776 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-12-13T00:21:46.367842251+00:00 stderr F W1213 00:21:46.367808 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-12-13T00:21:46.367888612+00:00 stderr F W1213 00:21:46.367860 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-12-13T00:21:46.367971184+00:00 stderr F W1213 00:21:46.367912 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-12-13T00:21:46.367994145+00:00 stderr F W1213 00:21:46.367983 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-12-13T00:21:46.368043736+00:00 stderr F W1213 00:21:46.368019 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-12-13T00:21:46.368061747+00:00 stderr F W1213 00:21:46.368053 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-12-13T00:21:46.368104478+00:00 stderr F W1213 00:21:46.368082 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-12-13T00:21:46.368122788+00:00 stderr F W1213 00:21:46.368116 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-12-13T00:21:46.368166299+00:00 stderr F W1213 00:21:46.368143 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-12-13T00:21:46.368181090+00:00 stderr F W1213 00:21:46.368171 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-12-13T00:21:46.368223151+00:00 stderr F W1213 00:21:46.368201 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-12-13T00:21:46.368240871+00:00 stderr F W1213 00:21:46.368232 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 
2025-12-13T00:21:46.368281632+00:00 stderr F W1213 00:21:46.368260 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-12-13T00:21:46.368320253+00:00 stderr F W1213 00:21:46.368293 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2025-12-13T00:21:46.368334523+00:00 stderr F W1213 00:21:46.368324 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-12-13T00:21:46.368373564+00:00 stderr F W1213 00:21:46.368351 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-12-13T00:21:46.368410945+00:00 stderr F W1213 00:21:46.368388 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-12-13T00:21:46.368453546+00:00 stderr F W1213 00:21:46.368430 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-12-13T00:21:46.368525048+00:00 stderr F W1213 00:21:46.368500 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-12-13T00:21:46.368539898+00:00 stderr F W1213 00:21:46.368530 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-12-13T00:21:46.368598701+00:00 stderr F W1213 00:21:46.368572 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-12-13T00:21:46.368613731+00:00 stderr F W1213 00:21:46.368606 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-12-13T00:21:46.368652522+00:00 stderr F W1213 00:21:46.368631 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-12-13T00:21:46.368667142+00:00 stderr F W1213 00:21:46.368658 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-12-13T00:21:46.368705353+00:00 stderr F W1213 00:21:46.368683 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-12-13T00:21:46.368768385+00:00 stderr F W1213 00:21:46.368744 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 
2025-12-13T00:21:46.368783945+00:00 stderr F W1213 00:21:46.368773 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-12-13T00:21:46.368828086+00:00 stderr F W1213 00:21:46.368797 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-12-13T00:21:46.368828086+00:00 stderr F W1213 00:21:46.368825 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-12-13T00:21:46.368873647+00:00 stderr F W1213 00:21:46.368851 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-12-13T00:21:46.369042621+00:00 stderr F W1213 00:21:46.369003 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-12-13T00:21:46.369042621+00:00 stderr F W1213 00:21:46.369038 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-12-13T00:21:46.369121723+00:00 stderr F W1213 00:21:46.369091 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-12-13T00:21:46.369137984+00:00 stderr F W1213 00:21:46.369120 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-12-13T00:21:46.369154324+00:00 stderr F W1213 00:21:46.369146 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-12-13T00:21:46.369196225+00:00 stderr F W1213 00:21:46.369171 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-12-13T00:21:46.369210526+00:00 stderr F W1213 00:21:46.369199 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-12-13T00:21:46.369290097+00:00 stderr F W1213 00:21:46.369258 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369373 1 flags.go:64] FLAG: --allocate-node-cidrs="false" 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369384 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369391 1 flags.go:64] FLAG: 
--allow-metric-labels-manifest="" 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369395 1 flags.go:64] FLAG: --allow-untagged-cloud="false" 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369398 1 flags.go:64] FLAG: --attach-detach-reconcile-sync-period="1m0s" 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369402 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369406 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369409 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369412 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369416 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369421 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369426 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-12-13T00:21:46.369433191+00:00 stderr F I1213 00:21:46.369429 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-12-13T00:21:46.369470142+00:00 stderr F I1213 00:21:46.369432 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-12-13T00:21:46.369470142+00:00 stderr F I1213 00:21:46.369437 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-12-13T00:21:46.369470142+00:00 stderr F I1213 00:21:46.369440 1 flags.go:64] FLAG: --cidr-allocator-type="RangeAllocator" 2025-12-13T00:21:46.369470142+00:00 stderr F I1213 00:21:46.369443 1 flags.go:64] FLAG: 
--client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-12-13T00:21:46.369470142+00:00 stderr F I1213 00:21:46.369446 1 flags.go:64] FLAG: --cloud-config="" 2025-12-13T00:21:46.369470142+00:00 stderr F I1213 00:21:46.369449 1 flags.go:64] FLAG: --cloud-provider="" 2025-12-13T00:21:46.369470142+00:00 stderr F I1213 00:21:46.369452 1 flags.go:64] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" 2025-12-13T00:21:46.369470142+00:00 stderr F I1213 00:21:46.369457 1 flags.go:64] FLAG: --cluster-cidr="10.217.0.0/22" 2025-12-13T00:21:46.369470142+00:00 stderr F I1213 00:21:46.369460 1 flags.go:64] FLAG: --cluster-name="crc-d8rkd" 2025-12-13T00:21:46.369470142+00:00 stderr F I1213 00:21:46.369463 1 flags.go:64] FLAG: --cluster-signing-cert-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt" 2025-12-13T00:21:46.369470142+00:00 stderr F I1213 00:21:46.369466 1 flags.go:64] FLAG: --cluster-signing-duration="8760h0m0s" 2025-12-13T00:21:46.369490032+00:00 stderr F I1213 00:21:46.369470 1 flags.go:64] FLAG: --cluster-signing-key-file="/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key" 2025-12-13T00:21:46.369490032+00:00 stderr F I1213 00:21:46.369473 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-cert-file="" 2025-12-13T00:21:46.369490032+00:00 stderr F I1213 00:21:46.369476 1 flags.go:64] FLAG: --cluster-signing-kube-apiserver-client-key-file="" 2025-12-13T00:21:46.369490032+00:00 stderr F I1213 00:21:46.369479 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-cert-file="" 2025-12-13T00:21:46.369490032+00:00 stderr F I1213 00:21:46.369481 1 flags.go:64] FLAG: --cluster-signing-kubelet-client-key-file="" 2025-12-13T00:21:46.369490032+00:00 stderr F I1213 00:21:46.369484 1 flags.go:64] FLAG: --cluster-signing-kubelet-serving-cert-file="" 2025-12-13T00:21:46.369490032+00:00 stderr F I1213 00:21:46.369487 1 flags.go:64] FLAG: 
--cluster-signing-kubelet-serving-key-file="" 2025-12-13T00:21:46.369507223+00:00 stderr F I1213 00:21:46.369489 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-cert-file="" 2025-12-13T00:21:46.369507223+00:00 stderr F I1213 00:21:46.369492 1 flags.go:64] FLAG: --cluster-signing-legacy-unknown-key-file="" 2025-12-13T00:21:46.369507223+00:00 stderr F I1213 00:21:46.369496 1 flags.go:64] FLAG: --concurrent-cron-job-syncs="5" 2025-12-13T00:21:46.369507223+00:00 stderr F I1213 00:21:46.369500 1 flags.go:64] FLAG: --concurrent-deployment-syncs="5" 2025-12-13T00:21:46.369507223+00:00 stderr F I1213 00:21:46.369503 1 flags.go:64] FLAG: --concurrent-endpoint-syncs="5" 2025-12-13T00:21:46.369523953+00:00 stderr F I1213 00:21:46.369506 1 flags.go:64] FLAG: --concurrent-ephemeralvolume-syncs="5" 2025-12-13T00:21:46.369523953+00:00 stderr F I1213 00:21:46.369509 1 flags.go:64] FLAG: --concurrent-gc-syncs="20" 2025-12-13T00:21:46.369523953+00:00 stderr F I1213 00:21:46.369511 1 flags.go:64] FLAG: --concurrent-horizontal-pod-autoscaler-syncs="5" 2025-12-13T00:21:46.369523953+00:00 stderr F I1213 00:21:46.369514 1 flags.go:64] FLAG: --concurrent-job-syncs="5" 2025-12-13T00:21:46.369523953+00:00 stderr F I1213 00:21:46.369517 1 flags.go:64] FLAG: --concurrent-namespace-syncs="10" 2025-12-13T00:21:46.369523953+00:00 stderr F I1213 00:21:46.369519 1 flags.go:64] FLAG: --concurrent-rc-syncs="5" 2025-12-13T00:21:46.369547564+00:00 stderr F I1213 00:21:46.369522 1 flags.go:64] FLAG: --concurrent-replicaset-syncs="5" 2025-12-13T00:21:46.369547564+00:00 stderr F I1213 00:21:46.369526 1 flags.go:64] FLAG: --concurrent-resource-quota-syncs="5" 2025-12-13T00:21:46.369547564+00:00 stderr F I1213 00:21:46.369529 1 flags.go:64] FLAG: --concurrent-service-endpoint-syncs="5" 2025-12-13T00:21:46.369547564+00:00 stderr F I1213 00:21:46.369533 1 flags.go:64] FLAG: --concurrent-service-syncs="1" 2025-12-13T00:21:46.369547564+00:00 stderr F I1213 00:21:46.369537 1 flags.go:64] FLAG: 
--concurrent-serviceaccount-token-syncs="5" 2025-12-13T00:21:46.369563504+00:00 stderr F I1213 00:21:46.369546 1 flags.go:64] FLAG: --concurrent-statefulset-syncs="5" 2025-12-13T00:21:46.369563504+00:00 stderr F I1213 00:21:46.369550 1 flags.go:64] FLAG: --concurrent-ttl-after-finished-syncs="5" 2025-12-13T00:21:46.369563504+00:00 stderr F I1213 00:21:46.369554 1 flags.go:64] FLAG: --concurrent-validating-admission-policy-status-syncs="5" 2025-12-13T00:21:46.369563504+00:00 stderr F I1213 00:21:46.369558 1 flags.go:64] FLAG: --configure-cloud-routes="true" 2025-12-13T00:21:46.369579145+00:00 stderr F I1213 00:21:46.369562 1 flags.go:64] FLAG: --contention-profiling="false" 2025-12-13T00:21:46.369579145+00:00 stderr F I1213 00:21:46.369566 1 flags.go:64] FLAG: --controller-start-interval="0s" 2025-12-13T00:21:46.369579145+00:00 stderr F I1213 00:21:46.369569 1 flags.go:64] FLAG: --controllers="[*,-bootstrapsigner,-tokencleaner,-ttl]" 2025-12-13T00:21:46.369579145+00:00 stderr F I1213 00:21:46.369575 1 flags.go:64] FLAG: --disable-attach-detach-reconcile-sync="false" 2025-12-13T00:21:46.369594585+00:00 stderr F I1213 00:21:46.369578 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-12-13T00:21:46.369594585+00:00 stderr F I1213 00:21:46.369582 1 flags.go:64] FLAG: --enable-dynamic-provisioning="true" 2025-12-13T00:21:46.369594585+00:00 stderr F I1213 00:21:46.369585 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-12-13T00:21:46.369594585+00:00 stderr F I1213 00:21:46.369587 1 flags.go:64] FLAG: --enable-hostpath-provisioner="false" 2025-12-13T00:21:46.369594585+00:00 stderr F I1213 00:21:46.369590 1 flags.go:64] FLAG: --enable-leader-migration="false" 2025-12-13T00:21:46.369610325+00:00 stderr F I1213 00:21:46.369593 1 flags.go:64] FLAG: --endpoint-updates-batch-period="0s" 2025-12-13T00:21:46.369610325+00:00 stderr F I1213 00:21:46.369596 1 flags.go:64] FLAG: --endpointslice-updates-batch-period="0s" 2025-12-13T00:21:46.369610325+00:00 stderr F I1213 
00:21:46.369599 1 flags.go:64] FLAG: --external-cloud-volume-plugin="" 2025-12-13T00:21:46.369625926+00:00 stderr F I1213 00:21:46.369602 1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-12-13T00:21:46.369625926+00:00 stderr F I1213 00:21:46.369621 1 flags.go:64] FLAG: --flex-volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" 2025-12-13T00:21:46.369640616+00:00 stderr F I1213 00:21:46.369625 1 flags.go:64] FLAG: --help="false" 2025-12-13T00:21:46.369640616+00:00 stderr F I1213 00:21:46.369629 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-cpu-initialization-period="5m0s" 2025-12-13T00:21:46.369640616+00:00 stderr F I1213 00:21:46.369632 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-delay="5m0s" 2025-12-13T00:21:46.369640616+00:00 stderr F I1213 00:21:46.369634 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-downscale-stabilization="5m0s" 2025-12-13T00:21:46.369663437+00:00 stderr F I1213 00:21:46.369638 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-initial-readiness-delay="30s" 2025-12-13T00:21:46.369663437+00:00 stderr F I1213 00:21:46.369642 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-sync-period="15s" 2025-12-13T00:21:46.369663437+00:00 stderr F I1213 00:21:46.369645 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-tolerance="0.1" 2025-12-13T00:21:46.369663437+00:00 stderr F I1213 00:21:46.369649 1 flags.go:64] FLAG: --horizontal-pod-autoscaler-upscale-delay="3m0s" 2025-12-13T00:21:46.369663437+00:00 stderr F I1213 00:21:46.369652 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-12-13T00:21:46.369663437+00:00 stderr 
F I1213 00:21:46.369655 1 flags.go:64] FLAG: --kube-api-burst="300" 2025-12-13T00:21:46.369663437+00:00 stderr F I1213 00:21:46.369658 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-12-13T00:21:46.369680807+00:00 stderr F I1213 00:21:46.369662 1 flags.go:64] FLAG: --kube-api-qps="150" 2025-12-13T00:21:46.369680807+00:00 stderr F I1213 00:21:46.369666 1 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig" 2025-12-13T00:21:46.369680807+00:00 stderr F I1213 00:21:46.369670 1 flags.go:64] FLAG: --large-cluster-size-threshold="50" 2025-12-13T00:21:46.369680807+00:00 stderr F I1213 00:21:46.369673 1 flags.go:64] FLAG: --leader-elect="true" 2025-12-13T00:21:46.369680807+00:00 stderr F I1213 00:21:46.369676 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-12-13T00:21:46.369696787+00:00 stderr F I1213 00:21:46.369679 1 flags.go:64] FLAG: --leader-elect-renew-deadline="12s" 2025-12-13T00:21:46.369696787+00:00 stderr F I1213 00:21:46.369682 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-12-13T00:21:46.369696787+00:00 stderr F I1213 00:21:46.369685 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-controller-manager" 2025-12-13T00:21:46.369696787+00:00 stderr F I1213 00:21:46.369688 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-12-13T00:21:46.369696787+00:00 stderr F I1213 00:21:46.369691 1 flags.go:64] FLAG: --leader-elect-retry-period="3s" 2025-12-13T00:21:46.369696787+00:00 stderr F I1213 00:21:46.369694 1 flags.go:64] FLAG: --leader-migration-config="" 2025-12-13T00:21:46.369712818+00:00 stderr F I1213 00:21:46.369696 1 flags.go:64] FLAG: --legacy-service-account-token-clean-up-period="8760h0m0s" 2025-12-13T00:21:46.369712818+00:00 stderr F I1213 00:21:46.369700 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-12-13T00:21:46.369712818+00:00 stderr F I1213 00:21:46.369702 1 flags.go:64] FLAG: 
--log-json-info-buffer-size="0" 2025-12-13T00:21:46.369712818+00:00 stderr F I1213 00:21:46.369708 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-12-13T00:21:46.369727938+00:00 stderr F I1213 00:21:46.369711 1 flags.go:64] FLAG: --logging-format="text" 2025-12-13T00:21:46.369727938+00:00 stderr F I1213 00:21:46.369714 1 flags.go:64] FLAG: --master="" 2025-12-13T00:21:46.369727938+00:00 stderr F I1213 00:21:46.369717 1 flags.go:64] FLAG: --max-endpoints-per-slice="100" 2025-12-13T00:21:46.369727938+00:00 stderr F I1213 00:21:46.369720 1 flags.go:64] FLAG: --min-resync-period="12h0m0s" 2025-12-13T00:21:46.369727938+00:00 stderr F I1213 00:21:46.369722 1 flags.go:64] FLAG: --mirroring-concurrent-service-endpoint-syncs="5" 2025-12-13T00:21:46.369743149+00:00 stderr F I1213 00:21:46.369725 1 flags.go:64] FLAG: --mirroring-endpointslice-updates-batch-period="0s" 2025-12-13T00:21:46.369743149+00:00 stderr F I1213 00:21:46.369729 1 flags.go:64] FLAG: --mirroring-max-endpoints-per-subset="1000" 2025-12-13T00:21:46.369743149+00:00 stderr F I1213 00:21:46.369732 1 flags.go:64] FLAG: --namespace-sync-period="5m0s" 2025-12-13T00:21:46.369743149+00:00 stderr F I1213 00:21:46.369734 1 flags.go:64] FLAG: --node-cidr-mask-size="0" 2025-12-13T00:21:46.369743149+00:00 stderr F I1213 00:21:46.369737 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv4="0" 2025-12-13T00:21:46.369743149+00:00 stderr F I1213 00:21:46.369740 1 flags.go:64] FLAG: --node-cidr-mask-size-ipv6="0" 2025-12-13T00:21:46.369767149+00:00 stderr F I1213 00:21:46.369742 1 flags.go:64] FLAG: --node-eviction-rate="0.1" 2025-12-13T00:21:46.369767149+00:00 stderr F I1213 00:21:46.369746 1 flags.go:64] FLAG: --node-monitor-grace-period="40s" 2025-12-13T00:21:46.369767149+00:00 stderr F I1213 00:21:46.369749 1 flags.go:64] FLAG: --node-monitor-period="5s" 2025-12-13T00:21:46.369767149+00:00 stderr F I1213 00:21:46.369752 1 flags.go:64] FLAG: --node-startup-grace-period="1m0s" 2025-12-13T00:21:46.369767149+00:00 
stderr F I1213 00:21:46.369755 1 flags.go:64] FLAG: --node-sync-period="0s" 2025-12-13T00:21:46.369767149+00:00 stderr F I1213 00:21:46.369758 1 flags.go:64] FLAG: --openshift-config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-12-13T00:21:46.369767149+00:00 stderr F I1213 00:21:46.369761 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-12-13T00:21:46.369784009+00:00 stderr F I1213 00:21:46.369764 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-12-13T00:21:46.369784009+00:00 stderr F I1213 00:21:46.369768 1 flags.go:64] FLAG: --profiling="true" 2025-12-13T00:21:46.369784009+00:00 stderr F I1213 00:21:46.369771 1 flags.go:64] FLAG: --pv-recycler-increment-timeout-nfs="30" 2025-12-13T00:21:46.369784009+00:00 stderr F I1213 00:21:46.369774 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-hostpath="60" 2025-12-13T00:21:46.369784009+00:00 stderr F I1213 00:21:46.369776 1 flags.go:64] FLAG: --pv-recycler-minimum-timeout-nfs="300" 2025-12-13T00:21:46.369784009+00:00 stderr F I1213 00:21:46.369779 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-hostpath="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-12-13T00:21:46.369800500+00:00 stderr F I1213 00:21:46.369783 1 flags.go:64] FLAG: --pv-recycler-pod-template-filepath-nfs="/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml" 2025-12-13T00:21:46.369800500+00:00 stderr F I1213 00:21:46.369787 1 flags.go:64] FLAG: --pv-recycler-timeout-increment-hostpath="30" 2025-12-13T00:21:46.369800500+00:00 stderr F I1213 00:21:46.369789 1 flags.go:64] FLAG: --pvclaimbinder-sync-period="15s" 2025-12-13T00:21:46.369800500+00:00 stderr F I1213 00:21:46.369792 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-12-13T00:21:46.369816130+00:00 stderr F I1213 00:21:46.369796 1 flags.go:64] FLAG: --requestheader-client-ca-file="/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 
2025-12-13T00:21:46.369816130+00:00 stderr F I1213 00:21:46.369802 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-12-13T00:21:46.369816130+00:00 stderr F I1213 00:21:46.369805 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-12-13T00:21:46.369816130+00:00 stderr F I1213 00:21:46.369810 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-12-13T00:21:46.369831301+00:00 stderr F I1213 00:21:46.369814 1 flags.go:64] FLAG: --resource-quota-sync-period="5m0s" 2025-12-13T00:21:46.369831301+00:00 stderr F I1213 00:21:46.369818 1 flags.go:64] FLAG: --root-ca-file="/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt" 2025-12-13T00:21:46.369831301+00:00 stderr F I1213 00:21:46.369821 1 flags.go:64] FLAG: --route-reconciliation-period="10s" 2025-12-13T00:21:46.369831301+00:00 stderr F I1213 00:21:46.369824 1 flags.go:64] FLAG: --secondary-node-eviction-rate="0.01" 2025-12-13T00:21:46.369831301+00:00 stderr F I1213 00:21:46.369827 1 flags.go:64] FLAG: --secure-port="10257" 2025-12-13T00:21:46.369853691+00:00 stderr F I1213 00:21:46.369830 1 flags.go:64] FLAG: --service-account-private-key-file="/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key" 2025-12-13T00:21:46.369853691+00:00 stderr F I1213 00:21:46.369834 1 flags.go:64] FLAG: --service-cluster-ip-range="10.217.4.0/23" 2025-12-13T00:21:46.369853691+00:00 stderr F I1213 00:21:46.369837 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-12-13T00:21:46.369853691+00:00 stderr F I1213 00:21:46.369840 1 flags.go:64] FLAG: --terminated-pod-gc-threshold="12500" 2025-12-13T00:21:46.369853691+00:00 stderr F I1213 00:21:46.369843 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-12-13T00:21:46.369869222+00:00 stderr F I1213 00:21:46.369847 1 flags.go:64] FLAG: 
--tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-12-13T00:21:46.369869222+00:00 stderr F I1213 00:21:46.369856 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-12-13T00:21:46.369869222+00:00 stderr F I1213 00:21:46.369859 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:21:46.369869222+00:00 stderr F I1213 00:21:46.369863 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-12-13T00:21:46.369884102+00:00 stderr F I1213 00:21:46.369869 1 flags.go:64] FLAG: --unhealthy-zone-threshold="0.55" 2025-12-13T00:21:46.369884102+00:00 stderr F I1213 00:21:46.369873 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-12-13T00:21:46.369884102+00:00 stderr F I1213 00:21:46.369876 1 flags.go:64] FLAG: --use-service-account-credentials="true" 2025-12-13T00:21:46.369884102+00:00 stderr F I1213 00:21:46.369879 1 flags.go:64] FLAG: --v="2" 2025-12-13T00:21:46.369898892+00:00 stderr F I1213 00:21:46.369889 1 flags.go:64] FLAG: --version="false" 2025-12-13T00:21:46.369898892+00:00 stderr F I1213 00:21:46.369894 1 flags.go:64] FLAG: --vmodule="" 2025-12-13T00:21:46.369912393+00:00 stderr F I1213 00:21:46.369898 1 flags.go:64] FLAG: --volume-host-allow-local-loopback="true" 2025-12-13T00:21:46.369912393+00:00 stderr F I1213 00:21:46.369901 1 flags.go:64] FLAG: --volume-host-cidr-denylist="[]" 2025-12-13T00:21:46.372339533+00:00 stderr F I1213 00:21:46.372271 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:21:46.560144475+00:00 stderr F I1213 00:21:46.560072 1 
dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-12-13T00:21:46.560191656+00:00 stderr F I1213 00:21:46.560175 1 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-12-13T00:21:46.562355930+00:00 stderr F I1213 00:21:46.562300 1 controllermanager.go:203] "Starting" version="v1.29.5+29c95f3" 2025-12-13T00:21:46.562355930+00:00 stderr F I1213 00:21:46.562340 1 controllermanager.go:205] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2025-12-13T00:21:46.564100582+00:00 stderr F I1213 00:21:46.564049 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" 2025-12-13T00:21:46.564131213+00:00 stderr F I1213 00:21:46.564096 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt" 2025-12-13T00:21:46.564556613+00:00 stderr F I1213 00:21:46.564476 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:21:46.564571493+00:00 stderr F I1213 00:21:46.564510 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:21:46.564469391 +0000 UTC))" 2025-12-13T00:21:46.564622135+00:00 stderr F I1213 00:21:46.564587 1 tlsconfig.go:178] "Loaded client CA" index=1 
certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:21:46.564571203 +0000 UTC))" 2025-12-13T00:21:46.564622135+00:00 stderr F I1213 00:21:46.564608 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:21:46.564597244 +0000 UTC))" 2025-12-13T00:21:46.564632725+00:00 stderr F I1213 00:21:46.564625 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:21:46.564613214 +0000 UTC))" 2025-12-13T00:21:46.564662536+00:00 stderr F I1213 00:21:46.564641 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 
00:21:46.564629865 +0000 UTC))" 2025-12-13T00:21:46.564662536+00:00 stderr F I1213 00:21:46.564658 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:21:46.564648715 +0000 UTC))" 2025-12-13T00:21:46.564713868+00:00 stderr F I1213 00:21:46.564686 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:21:46.564675356 +0000 UTC))" 2025-12-13T00:21:46.564727818+00:00 stderr F I1213 00:21:46.564706 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:21:46.564694467 +0000 UTC))" 2025-12-13T00:21:46.564735138+00:00 stderr F I1213 00:21:46.564727 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 
UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:21:46.564717658 +0000 UTC))" 2025-12-13T00:21:46.564765069+00:00 stderr F I1213 00:21:46.564746 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:21:46.564732368 +0000 UTC))" 2025-12-13T00:21:46.564805760+00:00 stderr F I1213 00:21:46.564786 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:21:46.564766899 +0000 UTC))" 2025-12-13T00:21:46.565128858+00:00 stderr F I1213 00:21:46.565093 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"kube-controller-manager.openshift-kube-controller-manager.svc\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:24 +0000 UTC to 2027-08-13 20:00:25 +0000 UTC (now=2025-12-13 00:21:46.565076047 +0000 UTC))" 2025-12-13T00:21:46.565386094+00:00 stderr F I1213 00:21:46.565357 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" 
certDetail="\"apiserver-loopback-client@1765585306\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765585306\" (2025-12-12 23:21:46 +0000 UTC to 2026-12-12 23:21:46 +0000 UTC (now=2025-12-13 00:21:46.565343473 +0000 UTC))" 2025-12-13T00:21:46.565394194+00:00 stderr F I1213 00:21:46.565384 1 secure_serving.go:213] Serving securely on [::]:10257 2025-12-13T00:21:46.565444946+00:00 stderr F I1213 00:21:46.565415 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:21:46.565717792+00:00 stderr F I1213 00:21:46.565683 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager...

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/1.log <==
2025-12-13T00:14:27.000810895+00:00 stdout F /etc/hosts.tmp /etc/hosts differ: char 159,
line 3

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver/0.log <==

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/1.log <==
2025-12-13T00:11:03.520588604+00:00 stderr F I1213 00:11:03.520444 1 flags.go:64] FLAG: --add-dir-header="false" 2025-12-13T00:11:03.520588604+00:00 stderr F I1213 00:11:03.520547 1 flags.go:64] FLAG: --allow-paths="[]" 2025-12-13T00:11:03.520588604+00:00 stderr
F I1213 00:11:03.520553 1 flags.go:64] FLAG: --alsologtostderr="false" 2025-12-13T00:11:03.520588604+00:00 stderr F I1213 00:11:03.520556 1 flags.go:64] FLAG: --auth-header-fields-enabled="false" 2025-12-13T00:11:03.520588604+00:00 stderr F I1213 00:11:03.520559 1 flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups" 2025-12-13T00:11:03.520588604+00:00 stderr F I1213 00:11:03.520564 1 flags.go:64] FLAG: --auth-header-groups-field-separator="|" 2025-12-13T00:11:03.520588604+00:00 stderr F I1213 00:11:03.520567 1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user" 2025-12-13T00:11:03.520588604+00:00 stderr F I1213 00:11:03.520569 1 flags.go:64] FLAG: --auth-token-audiences="[]" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520575 1 flags.go:64] FLAG: --client-ca-file="" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520578 1 flags.go:64] FLAG: --config-file="/etc/kube-rbac-proxy/config-file.yaml" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520581 1 flags.go:64] FLAG: --help="false" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520584 1 flags.go:64] FLAG: --http2-disable="false" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520587 1 flags.go:64] FLAG: --http2-max-concurrent-streams="100" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520591 1 flags.go:64] FLAG: --http2-max-size="262144" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520594 1 flags.go:64] FLAG: --ignore-paths="[]" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520597 1 flags.go:64] FLAG: --insecure-listen-address="" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520600 1 flags.go:64] FLAG: --kube-api-burst="0" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520604 1 flags.go:64] FLAG: --kube-api-qps="0" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520608 1 flags.go:64] FLAG: --kubeconfig="" 2025-12-13T00:11:03.520690317+00:00 
stderr F I1213 00:11:03.520611 1 flags.go:64] FLAG: --log-backtrace-at="" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520614 1 flags.go:64] FLAG: --log-dir="" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520616 1 flags.go:64] FLAG: --log-file="" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520619 1 flags.go:64] FLAG: --log-file-max-size="0" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520628 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520635 1 flags.go:64] FLAG: --logtostderr="true" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520637 1 flags.go:64] FLAG: --oidc-ca-file="" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520640 1 flags.go:64] FLAG: --oidc-clientID="" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520643 1 flags.go:64] FLAG: --oidc-groups-claim="groups" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520646 1 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520648 1 flags.go:64] FLAG: --oidc-issuer="" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520651 1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520657 1 flags.go:64] FLAG: --oidc-username-claim="email" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520659 1 flags.go:64] FLAG: --one-output="false" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520662 1 flags.go:64] FLAG: --proxy-endpoints-port="0" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520665 1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:9192" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520668 1 flags.go:64] FLAG: --skip-headers="false" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520671 1 flags.go:64] FLAG: --skip-log-headers="false" 
2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520674 1 flags.go:64] FLAG: --stderrthreshold="" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520676 1 flags.go:64] FLAG: --tls-cert-file="/etc/tls/private/tls.crt" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520679 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305]" 2025-12-13T00:11:03.520690317+00:00 stderr F I1213 00:11:03.520687 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-12-13T00:11:03.520718818+00:00 stderr F I1213 00:11:03.520690 1 flags.go:64] FLAG: --tls-private-key-file="/etc/tls/private/tls.key" 2025-12-13T00:11:03.520718818+00:00 stderr F I1213 00:11:03.520694 1 flags.go:64] FLAG: --tls-reload-interval="1m0s" 2025-12-13T00:11:03.520718818+00:00 stderr F I1213 00:11:03.520698 1 flags.go:64] FLAG: --upstream="http://127.0.0.1:9191/" 2025-12-13T00:11:03.520718818+00:00 stderr F I1213 00:11:03.520701 1 flags.go:64] FLAG: --upstream-ca-file="" 2025-12-13T00:11:03.520718818+00:00 stderr F I1213 00:11:03.520704 1 flags.go:64] FLAG: --upstream-client-cert-file="" 2025-12-13T00:11:03.520718818+00:00 stderr F I1213 00:11:03.520706 1 flags.go:64] FLAG: --upstream-client-key-file="" 2025-12-13T00:11:03.520718818+00:00 stderr F I1213 00:11:03.520709 1 flags.go:64] FLAG: --upstream-force-h2c="false" 2025-12-13T00:11:03.520718818+00:00 stderr F I1213 00:11:03.520712 1 flags.go:64] FLAG: --v="3" 2025-12-13T00:11:03.520718818+00:00 stderr F I1213 00:11:03.520715 1 flags.go:64] FLAG: --version="false" 2025-12-13T00:11:03.520729928+00:00 stderr F I1213 00:11:03.520719 1 flags.go:64] FLAG: --vmodule="" 2025-12-13T00:11:03.520729928+00:00 stderr F W1213 00:11:03.520726 1 deprecated.go:66] 2025-12-13T00:11:03.520729928+00:00 stderr F ==== Removed 
Flag Warning ====================== 2025-12-13T00:11:03.520729928+00:00 stderr F 2025-12-13T00:11:03.520729928+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-12-13T00:11:03.520729928+00:00 stderr F 2025-12-13T00:11:03.520729928+00:00 stderr F =============================================== 2025-12-13T00:11:03.520729928+00:00 stderr F 2025-12-13T00:11:03.520740108+00:00 stderr F I1213 00:11:03.520734 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-12-13T00:11:03.521391926+00:00 stderr F I1213 00:11:03.521374 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:11:03.521421487+00:00 stderr F I1213 00:11:03.521412 1 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:11:03.521980992+00:00 stderr F I1213 00:11:03.521964 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9192 2025-12-13T00:11:03.522277150+00:00 stderr F I1213 00:11:03.522246 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9192

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy/0.log <==
2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312609 1 flags.go:64] FLAG: --add-dir-header="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312952 1 flags.go:64] FLAG: --allow-paths="[]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312962 1 flags.go:64] FLAG: --alsologtostderr="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312973 1 flags.go:64] FLAG: --auth-header-fields-enabled="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312977 1
flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312983 1 flags.go:64] FLAG: --auth-header-groups-field-separator="|" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312990 1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312994 1 flags.go:64] FLAG: --auth-token-audiences="[]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.312999 1 flags.go:64] FLAG: --client-ca-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313003 1 flags.go:64] FLAG: --config-file="/etc/kube-rbac-proxy/config-file.yaml" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313007 1 flags.go:64] FLAG: --help="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313011 1 flags.go:64] FLAG: --http2-disable="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313015 1 flags.go:64] FLAG: --http2-max-concurrent-streams="100" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313020 1 flags.go:64] FLAG: --http2-max-size="262144" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313024 1 flags.go:64] FLAG: --ignore-paths="[]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313030 1 flags.go:64] FLAG: --insecure-listen-address="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313037 1 flags.go:64] FLAG: --kube-api-burst="0" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313042 1 flags.go:64] FLAG: --kube-api-qps="0" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313054 1 flags.go:64] FLAG: --kubeconfig="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313621 1 flags.go:64] FLAG: --log-backtrace-at="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313717 1 flags.go:64] FLAG: --log-dir="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313723 1 flags.go:64] FLAG: 
--log-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313728 1 flags.go:64] FLAG: --log-file-max-size="0" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313735 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313753 1 flags.go:64] FLAG: --logtostderr="true" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313759 1 flags.go:64] FLAG: --oidc-ca-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313765 1 flags.go:64] FLAG: --oidc-clientID="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313770 1 flags.go:64] FLAG: --oidc-groups-claim="groups" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313861 1 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313867 1 flags.go:64] FLAG: --oidc-issuer="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313871 1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313879 1 flags.go:64] FLAG: --oidc-username-claim="email" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313883 1 flags.go:64] FLAG: --one-output="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313887 1 flags.go:64] FLAG: --proxy-endpoints-port="0" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313892 1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:9192" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313906 1 flags.go:64] FLAG: --skip-headers="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313912 1 flags.go:64] FLAG: --skip-log-headers="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313917 1 flags.go:64] FLAG: --stderrthreshold="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313921 1 flags.go:64] FLAG: --tls-cert-file="/etc/tls/private/tls.crt" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 
19:50:43.313925 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305]" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313936 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313941 1 flags.go:64] FLAG: --tls-private-key-file="/etc/tls/private/tls.key" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313945 1 flags.go:64] FLAG: --tls-reload-interval="1m0s" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313951 1 flags.go:64] FLAG: --upstream="http://127.0.0.1:9191/" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313956 1 flags.go:64] FLAG: --upstream-ca-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313962 1 flags.go:64] FLAG: --upstream-client-cert-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313966 1 flags.go:64] FLAG: --upstream-client-key-file="" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313970 1 flags.go:64] FLAG: --upstream-force-h2c="false" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313974 1 flags.go:64] FLAG: --v="3" 2025-08-13T19:50:43.315215609+00:00 stderr F I0813 19:50:43.313979 1 flags.go:64] FLAG: --version="false" 2025-08-13T19:50:43.318225195+00:00 stderr F I0813 19:50:43.313985 1 flags.go:64] FLAG: --vmodule="" 2025-08-13T19:50:43.318225195+00:00 stderr F W0813 19:50:43.316320 1 deprecated.go:66] 2025-08-13T19:50:43.318225195+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:50:43.318225195+00:00 stderr F 2025-08-13T19:50:43.318225195+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-08-13T19:50:43.318225195+00:00 stderr F 2025-08-13T19:50:43.318225195+00:00 stderr F =============================================== 2025-08-13T19:50:43.318225195+00:00 stderr F 2025-08-13T19:50:43.318225195+00:00 stderr F I0813 19:50:43.316363 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-08-13T19:50:43.323724493+00:00 stderr F I0813 19:50:43.321089 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:50:43.323724493+00:00 stderr F I0813 19:50:43.321223 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:50:43.324886206+00:00 stderr F I0813 19:50:43.324349 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9192 2025-08-13T19:50:43.333905294+00:00 stderr F I0813 19:50:43.331692 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9192 2025-08-13T20:42:46.941041162+00:00 stderr F I0813 20:42:46.940758 1 kube-rbac-proxy.go:493] received interrupt, shutting down

==> home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/2.log <==
2025-12-13T00:11:04.463262253+00:00 stderr F W1213 00:11:04.461645 1 client_config.go:618] Neither --kubeconfig nor --master was
specified. Using the inClusterConfig. This might not work. 2025-12-13T00:11:04.463262253+00:00 stderr F W1213 00:11:04.462541 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-12-13T00:11:04.463262253+00:00 stderr F I1213 00:11:04.463027 1 main.go:150] setting up manager 2025-12-13T00:11:04.463846958+00:00 stderr F I1213 00:11:04.463808 1 main.go:168] registering components 2025-12-13T00:11:04.463846958+00:00 stderr F I1213 00:11:04.463828 1 main.go:170] setting up scheme 2025-12-13T00:11:04.464949729+00:00 stderr F I1213 00:11:04.464889 1 main.go:208] setting up controllers 2025-12-13T00:11:04.464969839+00:00 stderr F I1213 00:11:04.464929 1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory 2025-12-13T00:11:04.465003950+00:00 stderr F I1213 00:11:04.464968 1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}} 2025-12-13T00:11:04.465154874+00:00 stderr F I1213 00:11:04.465125 1 main.go:233] starting the cmd 2025-12-13T00:11:04.467130718+00:00 stderr F I1213 00:11:04.467084 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-12-13T00:11:04.469695858+00:00 stderr F I1213 00:11:04.469655 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress="127.0.0.1:9191" secure=false 2025-12-13T00:11:04.471132117+00:00 stderr F I1213 00:11:04.471076 1 leaderelection.go:250] attempting to acquire leader lease openshift-cluster-machine-approver/cluster-machine-approver-leader... 
2025-12-13T00:11:34.473908134+00:00 stderr F E1213 00:11:34.473795 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:45.081514294+00:00 stderr F E1213 00:12:45.081379 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:16:35.659217731+00:00 stderr F I1213 00:16:35.659156 1 leaderelection.go:260] successfully acquired lease openshift-cluster-machine-approver/cluster-machine-approver-leader 2025-12-13T00:16:35.659402546+00:00 stderr F I1213 00:16:35.659378 1 recorder.go:104] "crc_b6d04488-d277-4b68-9394-4a2d1fa2af69 became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-cluster-machine-approver","name":"cluster-machine-approver-leader","uid":"396b5b52-acf2-4d11-8e98-69ecff2f52d0","apiVersion":"coordination.k8s.io/v1","resourceVersion":"41688"} reason="LeaderElection" 2025-12-13T00:16:35.659681324+00:00 stderr F I1213 00:16:35.659631 1 status.go:97] Starting cluster operator status controller 2025-12-13T00:16:35.661854503+00:00 stderr F I1213 00:16:35.661815 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-12-13T00:16:35.662163962+00:00 stderr F I1213 00:16:35.662135 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-12-13T00:16:35.662163962+00:00 stderr F 
I1213 00:16:35.662158 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.ConfigMap" 2025-12-13T00:16:35.662207483+00:00 stderr F I1213 00:16:35.662168 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-12-13T00:16:35.688276765+00:00 stderr F I1213 00:16:35.688171 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:16:35.821687509+00:00 stderr F I1213 00:16:35.821609 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:16:35.887353162+00:00 stderr F I1213 00:16:35.887269 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=10 2025-12-13T00:21:25.856226972+00:00 stderr F I1213 00:21:25.856139 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-12-13T00:21:26.058041778+00:00 stderr F I1213 00:21:26.057971 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:21:26.537176067+00:00 stderr F I1213 00:21:26.537097 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/1.log
2025-08-13T20:05:12.855041932+00:00 stderr F W0813 20:05:12.854614 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T20:05:12.856080652+00:00 stderr F W0813 20:05:12.855207 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T20:05:12.856080652+00:00 stderr F I0813 20:05:12.855407 1 main.go:150] setting up manager 2025-08-13T20:05:12.856517694+00:00 stderr F I0813 20:05:12.856386 1 main.go:168] registering components 2025-08-13T20:05:12.856517694+00:00 stderr F I0813 20:05:12.856465 1 main.go:170] setting up scheme 2025-08-13T20:05:12.857273606+00:00 stderr F I0813 20:05:12.857204 1 main.go:208] setting up controllers 2025-08-13T20:05:12.857273606+00:00 stderr F I0813 20:05:12.857261 1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory 2025-08-13T20:05:12.857315647+00:00 stderr F I0813 20:05:12.857279 1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}} 2025-08-13T20:05:12.860378595+00:00 stderr F I0813 20:05:12.860230 1 main.go:233] starting the cmd 2025-08-13T20:05:12.860902380+00:00 stderr F I0813 20:05:12.860686 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-08-13T20:05:12.861960160+00:00 stderr F I0813 20:05:12.861896 1 leaderelection.go:250] attempting to acquire leader lease openshift-cluster-machine-approver/cluster-machine-approver-leader...
2025-08-13T20:05:12.863479814+00:00 stderr F I0813 20:05:12.863313 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress="127.0.0.1:9191" secure=false 2025-08-13T20:08:02.991035955+00:00 stderr F I0813 20:08:02.989962 1 leaderelection.go:260] successfully acquired lease openshift-cluster-machine-approver/cluster-machine-approver-leader 2025-08-13T20:08:02.991690854+00:00 stderr F I0813 20:08:02.990989 1 recorder.go:104] "crc_38dd04bb-211c-4052-882f-1b12e44fa6dd became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-cluster-machine-approver","name":"cluster-machine-approver-leader","uid":"396b5b52-acf2-4d11-8e98-69ecff2f52d0","apiVersion":"coordination.k8s.io/v1","resourceVersion":"32804"} reason="LeaderElection" 2025-08-13T20:08:03.000627140+00:00 stderr F I0813 20:08:03.000537 1 status.go:97] Starting cluster operator status controller 2025-08-13T20:08:03.018414770+00:00 stderr F I0813 20:08:03.017937 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T20:08:03.021056156+00:00 stderr F I0813 20:08:03.020547 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.ConfigMap" 2025-08-13T20:08:03.021056156+00:00 stderr F I0813 20:08:03.020594 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:08:03.078039859+00:00 stderr F I0813 20:08:03.075934 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:03.078039859+00:00 stderr F I0813 20:08:03.077053 1 reflector.go:351] 
Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-08-13T20:08:03.349822692+00:00 stderr F I0813 20:08:03.349675 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:03.368563049+00:00 stderr F I0813 20:08:03.368427 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=10 2025-08-13T20:08:59.007661338+00:00 stderr F E0813 20:08:59.007399 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:09:05.293722175+00:00 stderr F I0813 20:09:05.293507 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:05.536332881+00:00 stderr F I0813 20:09:05.536106 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-08-13T20:09:05.790060166+00:00 stderr F I0813 20:09:05.789989 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:42:36.491959513+00:00 stderr F I0813 20:42:36.480040 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514396170+00:00 stderr F I0813 20:42:36.479177 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514396170+00:00 stderr F I0813 20:42:36.489175 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:39.634924855+00:00 stderr F I0813 20:42:39.633727 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:42:39.635499292+00:00 stderr F I0813 20:42:39.634967 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:42:39.639672802+00:00 stderr F I0813 20:42:39.639594 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:42:39.639672802+00:00 stderr F I0813 20:42:39.639655 1 controller.go:242] "All workers finished" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T20:42:39.639736334+00:00 stderr F I0813 20:42:39.639706 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:42:39.644435049+00:00 stderr F I0813 20:42:39.644388 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:42:39.644719988+00:00 stderr F I0813 20:42:39.644686 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:42:39.647757065+00:00 stderr F I0813 20:42:39.647042 1 server.go:231] "Shutting down metrics server with timeout of 1 minute" logger="controller-runtime.metrics" 2025-08-13T20:42:39.647757065+00:00 stderr F I0813 20:42:39.647249 1 internal.go:537] "Wait completed, proceeding to shutdown the manager" 2025-08-13T20:42:39.651514124+00:00 stderr F E0813 20:42:39.651384 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log
2025-08-13T19:50:52.226135278+00:00 stderr F W0813 19:50:52.191052 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:50:52.238309136+00:00 stderr F W0813 19:50:52.238020 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:50:52.241364754+00:00 stderr F I0813 19:50:52.239659 1 main.go:150] setting up manager 2025-08-13T19:50:52.290188489+00:00 stderr F I0813 19:50:52.289306 1 main.go:168] registering components 2025-08-13T19:50:52.290188489+00:00 stderr F I0813 19:50:52.289358 1 main.go:170] setting up scheme 2025-08-13T19:50:52.291735653+00:00 stderr F I0813 19:50:52.291002 1 main.go:208] setting up controllers 2025-08-13T19:50:52.292664000+00:00 stderr F I0813 19:50:52.291879 1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory 2025-08-13T19:50:52.292664000+00:00 stderr F I0813 19:50:52.292278 1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}} 2025-08-13T19:50:52.295243134+00:00 stderr F I0813 19:50:52.295107 1 main.go:233] starting the cmd 2025-08-13T19:50:52.308932555+00:00 stderr F I0813 19:50:52.307024 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-08-13T19:50:52.327417733+00:00 stderr F I0813 19:50:52.326253 1 leaderelection.go:250] attempting to acquire leader lease openshift-cluster-machine-approver/cluster-machine-approver-leader...
2025-08-13T19:50:52.343960816+00:00 stderr F I0813 19:50:52.343359 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress="127.0.0.1:9191" secure=false 2025-08-13T19:51:22.437513386+00:00 stderr F E0813 19:51:22.437168 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:38.535367297+00:00 stderr F E0813 19:52:38.535240 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:54.958972291+00:00 stderr F E0813 19:53:54.958581 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:58.672042812+00:00 stderr F E0813 19:54:58.671715 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:15.692884729+00:00 stderr F E0813 19:56:15.692681 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get 
"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:11.955337238+00:00 stderr F E0813 19:57:11.955019 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:58:04.802508691+00:00 stderr F I0813 19:58:04.802165 1 leaderelection.go:260] successfully acquired lease openshift-cluster-machine-approver/cluster-machine-approver-leader 2025-08-13T19:58:04.803114468+00:00 stderr F I0813 19:58:04.803034 1 status.go:97] Starting cluster operator status controller 2025-08-13T19:58:04.805735363+00:00 stderr F I0813 19:58:04.804317 1 recorder.go:104] "crc_998ad275-6fd6-49e7-a1d3-0d4cd7031028 became leader" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-cluster-machine-approver","name":"cluster-machine-approver-leader","uid":"396b5b52-acf2-4d11-8e98-69ecff2f52d0","apiVersion":"coordination.k8s.io/v1","resourceVersion":"27679"} reason="LeaderElection" 2025-08-13T19:58:04.806422283+00:00 stderr F I0813 19:58:04.806346 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T19:58:04.809971334+00:00 stderr F I0813 19:58:04.806616 1 controller.go:178] "Starting EventSource" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" source="kind source: *v1.ConfigMap" 2025-08-13T19:58:04.810082087+00:00 stderr F I0813 19:58:04.810029 1 controller.go:186] "Starting Controller" controller="certificatesigningrequest" 
controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" 2025-08-13T19:58:04.814620396+00:00 stderr F I0813 19:58:04.813702 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/cluster-machine-approver/status.go:99 2025-08-13T19:58:04.819118554+00:00 stderr F I0813 19:58:04.818976 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:58:04.999867317+00:00 stderr F I0813 19:58:04.999733 1 reflector.go:351] Caches populated for *v1.ConfigMap from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T19:58:05.021482623+00:00 stderr F I0813 19:58:05.021318 1 controller.go:220] "Starting workers" controller="certificatesigningrequest" controllerGroup="certificates.k8s.io" controllerKind="CertificateSigningRequest" worker count=10 2025-08-13T19:58:05.021741430+00:00 stderr F I0813 19:58:05.021656 1 controller.go:120] Reconciling CSR: csr-fxkbs 2025-08-13T19:58:05.068715569+00:00 stderr F I0813 19:58:05.068648 1 controller.go:213] csr-fxkbs: CSR is already approved 2025-08-13T19:59:54.738971221+00:00 stderr F I0813 19:59:54.736169 1 controller.go:120] Reconciling CSR: system:openshift:openshift-authenticator-dk965 2025-08-13T19:59:55.783770514+00:00 stderr F I0813 19:59:55.783131 1 csr_check.go:163] system:openshift:openshift-authenticator-dk965: CSR does not appear to be client csr 2025-08-13T19:59:55.783770514+00:00 stderr F I0813 19:59:55.783176 1 csr_check.go:59] system:openshift:openshift-authenticator-dk965: CSR does not appear to be a node serving cert 2025-08-13T19:59:55.783770514+00:00 stderr F I0813 19:59:55.783237 1 controller.go:232] system:openshift:openshift-authenticator-dk965: CSR not authorized 2025-08-13T19:59:56.015861990+00:00 stderr F I0813 19:59:56.015717 1 controller.go:120] Reconciling CSR: system:openshift:openshift-authenticator-dk965 
2025-08-13T19:59:57.081251229+00:00 stderr F I0813 19:59:57.073260 1 controller.go:213] system:openshift:openshift-authenticator-dk965: CSR is already approved 2025-08-13T20:00:01.347367856+00:00 stderr F I0813 20:00:01.346761 1 controller.go:120] Reconciling CSR: system:openshift:openshift-authenticator-dk965 2025-08-13T20:00:01.631136935+00:00 stderr F I0813 20:00:01.631069 1 controller.go:213] system:openshift:openshift-authenticator-dk965: CSR is already approved 2025-08-13T20:03:21.934357110+00:00 stderr F E0813 20:03:21.934093 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:04:17.937495002+00:00 stderr F E0813 20:04:17.937199 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:04:38.942610615+00:00 stderr F I0813 20:04:38.936003 1 leaderelection.go:285] failed to renew lease openshift-cluster-machine-approver/cluster-machine-approver-leader: timed out waiting for the condition 2025-08-13T20:05:08.957482381+00:00 stderr F E0813 20:05:08.957257 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T20:05:08.990523638+00:00 stderr F F0813 20:05:08.990431 1 main.go:235] unable to run the manager: leader election lost 2025-08-13T20:05:09.028667000+00:00 stderr F I0813 20:05:09.028498 1 internal.go:516] "Stopping and waiting for non leader election 
runnables" 2025-08-13T20:05:09.028667000+00:00 stderr F I0813 20:05:09.028591 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:05:09.028667000+00:00 stderr F I0813 20:05:09.028608 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:05:09.028667000+00:00 stderr F I0813 20:05:09.028585 1 recorder.go:104] "crc_998ad275-6fd6-49e7-a1d3-0d4cd7031028 stopped leading" logger="events" type="Normal" object={"kind":"Lease","namespace":"openshift-cluster-machine-approver","name":"cluster-machine-approver-leader","uid":"396b5b52-acf2-4d11-8e98-69ecff2f52d0","apiVersion":"coordination.k8s.io/v1","resourceVersion":"30699"} reason="LeaderElection" 2025-08-13T20:05:09.028896756+00:00 stderr F I0813 20:05:09.028819 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:05:09.028896756+00:00 stderr F I0813 20:05:09.028849 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:05:09.028896756+00:00 stderr F I0813 20:05:09.028884 1 internal.go:537] "Wait completed, proceeding to shutdown the manager"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner/0.log
2025-08-13T20:00:19.608298628+00:00 stderr F I0813 20:00:19.605121 1 cmd.go:40] &{ true {false} prune true map[cert-dir:0xc0008f6be0 max-eligible-revision:0xc0008f6960 protected-revisions:0xc0008f6a00 resource-dir:0xc0008f6aa0 static-pod-name:0xc0008f6b40 v:0xc0008f72c0] [0xc0008f72c0 0xc0008f6960 0xc0008f6a00 0xc0008f6aa0 0xc0008f6be0 0xc0008f6b40] [] map[cert-dir:0xc0008f6be0 help:0xc0008f7680 log-flush-frequency:0xc0008f7220 max-eligible-revision:0xc0008f6960 protected-revisions:0xc0008f6a00 resource-dir:0xc0008f6aa0 static-pod-name:0xc0008f6b40 v:0xc0008f72c0 vmodule:0xc0008f7360] [0xc0008f6960 0xc0008f6a00 0xc0008f6aa0 0xc0008f6b40 0xc0008f6be0 0xc0008f7220 0xc0008f72c0 0xc0008f7360 0xc0008f7680] [0xc0008f6be0 0xc0008f7680 0xc0008f7220 0xc0008f6960 0xc0008f6a00 0xc0008f6aa0 0xc0008f6b40 0xc0008f72c0 0xc0008f7360] map[104:0xc0008f7680 118:0xc0008f72c0] [] -1 0 0xc0008adc80 true 0x73b100 []} 2025-08-13T20:00:19.608298628+00:00 stderr F I0813 20:00:19.606611 1 cmd.go:41] (*prune.PruneOptions)(0xc0008e31d0)({ 2025-08-13T20:00:19.608298628+00:00 stderr F MaxEligibleRevision: (int) 9, 2025-08-13T20:00:19.608298628+00:00 stderr F ProtectedRevisions: ([]int) (len=6 cap=6) { 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 4, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 5, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 6, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 7, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 8, 2025-08-13T20:00:19.608298628+00:00 stderr F (int) 9 2025-08-13T20:00:19.608298628+00:00 stderr F }, 2025-08-13T20:00:19.608298628+00:00 stderr F ResourceDir: (string) (len=36)
"/etc/kubernetes/static-pod-resources", 2025-08-13T20:00:19.608298628+00:00 stderr F CertDir: (string) (len=29) "kube-controller-manager-certs", 2025-08-13T20:00:19.608298628+00:00 stderr F StaticPodName: (string) (len=27) "kube-controller-manager-pod" 2025-08-13T20:00:19.608298628+00:00 stderr F })
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/1.log
2025-08-13T20:05:30.576738510+00:00 stderr F I0813 20:05:30.564427 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:05:30.576738510+00:00 stderr F I0813 20:05:30.572354 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew.
The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:30.585268894+00:00 stderr F I0813 20:05:30.583436 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:30.862237025+00:00 stderr F I0813 20:05:30.862163 1 builder.go:299] openshift-cluster-kube-scheduler-operator version 4.16.0-202406131906.p0.g630f63b.assembly.stream.el9-630f63b-630f63bc7a30d2662bbb5115233144079de6eef6 2025-08-13T20:05:31.764230725+00:00 stderr F I0813 20:05:31.762474 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:31.764230725+00:00 stderr F W0813 20:05:31.762888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:31.764230725+00:00 stderr F W0813 20:05:31.762898 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:31.771968887+00:00 stderr F I0813 20:05:31.771303 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:31.771968887+00:00 stderr F I0813 20:05:31.771722 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock... 
2025-08-13T20:05:31.842474786+00:00 stderr F I0813 20:05:31.841086 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:05:31.844878425+00:00 stderr F I0813 20:05:31.844819 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:31.845188073+00:00 stderr F I0813 20:05:31.844915 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:31.846711397+00:00 stderr F I0813 20:05:31.846567 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:31.850923628+00:00 stderr F I0813 20:05:31.846618 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:31.851387891+00:00 stderr F I0813 20:05:31.846641 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:31.851501344+00:00 stderr F I0813 20:05:31.851481 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:31.856524428+00:00 stderr F I0813 20:05:31.856485 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:31.856905299+00:00 stderr F I0813 20:05:31.856602 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:31.873882465+00:00 stderr F I0813 20:05:31.873732 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock 2025-08-13T20:05:31.875052119+00:00 stderr F I0813 20:05:31.875001 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler-operator", 
Name:"openshift-cluster-kube-scheduler-operator-lock", UID:"71f97dfe-5375-4cc7-b12a-e96cf43bdae0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31332", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_e7b654d9-77cd-448f-885c-1fad32cba9ad became leader 2025-08-13T20:05:31.893305981+00:00 stderr F I0813 20:05:31.893203 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:05:31.945610709+00:00 stderr F I0813 20:05:31.945524 1 starter.go:80] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests 
UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:05:31.958186119+00:00 stderr F I0813 20:05:31.958120 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:31.962905474+00:00 stderr F I0813 20:05:31.954502 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", 
"MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:05:31.962905474+00:00 stderr F I0813 20:05:31.962226 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:05:31.962905474+00:00 stderr F I0813 20:05:31.962346 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:32.186166178+00:00 stderr F I0813 20:05:32.181264 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:05:32.207337004+00:00 stderr F I0813 20:05:32.206298 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:05:32.209975339+00:00 stderr F I0813 20:05:32.208241 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T20:05:32.225986218+00:00 stderr F I0813 20:05:32.224997 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.237841 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238049 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238171 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 
2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238193 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238208 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238241 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238265 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238281 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:32.239433453+00:00 stderr F I0813 20:05:32.238297 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T20:05:32.260341982+00:00 stderr F I0813 20:05:32.260284 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-scheduler 2025-08-13T20:05:32.260746383+00:00 stderr F I0813 20:05:32.260726 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2025-08-13T20:05:32.261051902+00:00 stderr F I0813 20:05:32.261025 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T20:05:32.261267278+00:00 stderr F I0813 20:05:32.261248 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.541429 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.541471 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 
2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.541501 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.541506 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.542500 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:05:32.543214912+00:00 stderr F I0813 20:05:32.542511 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:05:32.550244423+00:00 stderr F I0813 20:05:32.550159 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:32.550244423+00:00 stderr F I0813 20:05:32.550199 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:05:32.550411688+00:00 stderr F I0813 20:05:32.550356 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:05:32.550952684+00:00 stderr F I0813 20:05:32.550889 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:32.582093436+00:00 stderr F I0813 20:05:32.581994 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:05:32.582234650+00:00 stderr F I0813 20:05:32.582216 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:05:32.609923302+00:00 stderr F I0813 20:05:32.603422 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:32.641180047+00:00 stderr F I0813 20:05:32.639060 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:05:32.641180047+00:00 stderr F I0813 20:05:32.639217 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 
2025-08-13T20:05:32.648563229+00:00 stderr F I0813 20:05:32.648487 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:32.649318871+00:00 stderr F I0813 20:05:32.649295 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T20:05:32.649368642+00:00 stderr F I0813 20:05:32.649354 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T20:05:32.649417263+00:00 stderr F I0813 20:05:32.649404 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T20:05:32.649448994+00:00 stderr F I0813 20:05:32.649437 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.650958 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.679140 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.695711 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.679216 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-scheduler 2025-08-13T20:05:32.697144040+00:00 stderr F I0813 20:05:32.696984 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-scheduler controller ... 2025-08-13T20:05:32.738932587+00:00 stderr F I0813 20:05:32.738728 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T20:05:32.738932587+00:00 stderr F I0813 20:05:32.738891 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 
2025-08-13T20:05:32.751229279+00:00 stderr F I0813 20:05:32.750144 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T20:05:32.751229279+00:00 stderr F I0813 20:05:32.750230 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T20:05:32.814209622+00:00 stderr F I0813 20:05:32.814019 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:32.995206675+00:00 stderr F I0813 20:05:32.986392 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:33.056409658+00:00 stderr F I0813 20:05:33.056318 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:33.072895720+00:00 stderr F I0813 20:05:33.072733 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T20:05:33.072895720+00:00 stderr F I0813 20:05:33.072839 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T20:05:33.119484604+00:00 stderr F I0813 20:05:33.119347 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:05:33.119484604+00:00 stderr F I0813 20:05:33.119392 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T20:05:33.124444846+00:00 stderr F I0813 20:05:33.120767 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T20:05:33.124444846+00:00 stderr F I0813 20:05:33.123645 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-08-13T20:05:33.127741881+00:00 stderr F I0813 20:05:33.127651 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:05:33.127984328+00:00 stderr F I0813 20:05:33.127958 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
2025-08-13T20:05:33.193213226+00:00 stderr F I0813 20:05:33.193157 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:05:33.261045287+00:00 stderr F I0813 20:05:33.260983 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2025-08-13T20:05:33.261153770+00:00 stderr F I0813 20:05:33.261137 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 2025-08-13T20:05:33.780178083+00:00 stderr F I0813 20:05:33.779760 1 request.go:697] Waited for 1.082746184s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc 2025-08-13T20:05:35.270288154+00:00 stderr F I0813 20:05:35.269500 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:05:35.270288154+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:05:35.270288154+00:00 stderr F CurrentRevision: (int32) 7, 2025-08-13T20:05:35.270288154+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedRevision: (int32) 5, 2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0016d1d70)(2024-06-26 12:52:00 +0000 UTC), 2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:05:35.270288154+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:05:35.270288154+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:05:35.270288154+00:00 stderr F (string) (len=2059) "installer: duler-pod\",\n SecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=31) \"localhost-recovery-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=5 
cap=8) {\n (string) (len=18) \"kube-scheduler-pod\",\n (string) (len=6) \"config\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=20) \"scheduler-kubeconfig\",\n (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=16) \"policy-configmap\"\n },\n CertSecretNames: ([]string) (len=1 cap=1) {\n (string) (len=30) \"kube-scheduler-client-cert-key\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) ,\n OptionalCertConfigMapNamePrefixes: ([]string) ,\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:10.196140 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:10.287595 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:10.292603 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:20.300456 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:30.295953 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:50:00.296621 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:14.299317 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:05:35.270288154+00:00 stderr F } 2025-08-13T20:05:35.270288154+00:00 stderr F } 2025-08-13T20:05:35.270288154+00:00 stderr F because static pod is ready 2025-08-13T20:05:35.360014093+00:00 stderr F 
I0813 20:05:35.359946 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:05:35.361982280+00:00 stderr F I0813 20:05:35.361950 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 6 to 7 because static pod is ready 2025-08-13T20:05:35.372351637+00:00 stderr F I0813 20:05:35.369665 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:05:35Z","message":"NodeInstallerProgressing: 1 node is at revision 7","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:05:35.451418841+00:00 stderr F I0813 20:05:35.451335 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 7"),Available message changed from "StaticPodsAvailable: 1 nodes 
are active; 1 node is at revision 6; 0 nodes have achieved new revision 7" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7" 2025-08-13T20:07:15.704486660+00:00 stderr F I0813 20:07:15.701599 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 8 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:15.783097454+00:00 stderr F I0813 20:07:15.780987 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-scheduler-pod-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:15.815624836+00:00 stderr F I0813 20:07:15.815563 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:16.977272212+00:00 stderr F I0813 20:07:16.976932 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:17.180451136+00:00 stderr F I0813 
20:07:17.173849 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/scheduler-kubeconfig-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:17.505357562+00:00 stderr F I0813 20:07:17.505279 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:18.719316847+00:00 stderr F I0813 20:07:18.718327 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:19.941996393+00:00 stderr F I0813 20:07:19.938574 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-8 -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:19.981566097+00:00 stderr F I0813 20:07:19.979750 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", 
UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 8 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:20.101976720+00:00 stderr F I0813 20:07:20.100453 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:07:20.114002764+00:00 stderr F I0813 20:07:20.112277 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 8 created because required secret/localhost-recovery-client-token has changed 2025-08-13T20:07:21.320572926+00:00 stderr F I0813 20:07:21.318488 1 installer_controller.go:524] node crc with revision 7 is the oldest and needs new revision 8 2025-08-13T20:07:21.320572926+00:00 stderr F I0813 20:07:21.319194 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:07:21.320572926+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:07:21.320572926+00:00 stderr F CurrentRevision: (int32) 7, 2025-08-13T20:07:21.320572926+00:00 stderr F TargetRevision: (int32) 8, 2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedRevision: (int32) 5, 2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedTime: (*v1.Time)(0xc001c32198)(2024-06-26 12:52:00 +0000 UTC), 2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:07:21.320572926+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:07:21.320572926+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:07:21.320572926+00:00 stderr F (string) (len=2059) "installer: duler-pod\",\n SecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) 
(len=31) \"localhost-recovery-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\n (string) (len=18) \"kube-scheduler-pod\",\n (string) (len=6) \"config\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=20) \"scheduler-kubeconfig\",\n (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=16) \"policy-configmap\"\n },\n CertSecretNames: ([]string) (len=1 cap=1) {\n (string) (len=30) \"kube-scheduler-client-cert-key\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) ,\n OptionalCertConfigMapNamePrefixes: ([]string) ,\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:10.196140 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:10.287595 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:10.292603 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:20.300456 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:30.295953 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:50:00.296621 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:14.299317 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 
2025-08-13T20:07:21.320572926+00:00 stderr F } 2025-08-13T20:07:21.320572926+00:00 stderr F } 2025-08-13T20:07:21.364919068+00:00 stderr F I0813 20:07:21.362636 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 7 to 8 because node crc with revision 7 is the oldest 2025-08-13T20:07:21.401748604+00:00 stderr F I0813 20:07:21.400658 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:07:21Z","message":"NodeInstallerProgressing: 1 node is at revision 7; 0 nodes have achieved new revision 8","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7; 0 nodes have achieved new revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:21.405192112+00:00 stderr F I0813 20:07:21.405162 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:07:21.431219419+00:00 stderr F I0813 20:07:21.429589 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' 
reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 7; 0 nodes have achieved new revision 8"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7; 0 nodes have achieved new revision 8" 2025-08-13T20:07:23.084966174+00:00 stderr F I0813 20:07:23.081293 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-8-crc -n openshift-kube-scheduler because it was missing 2025-08-13T20:07:23.920610573+00:00 stderr F I0813 20:07:23.915736 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:26.094867570+00:00 stderr F I0813 20:07:26.093717 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:27.701233176+00:00 stderr F I0813 20:07:27.691110 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:58.161748364+00:00 stderr F I0813 20:07:58.161265 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:59.618478561+00:00 stderr F I0813 20:07:59.618418 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because waiting for static pod of revision 8, found 7 2025-08-13T20:08:00.260228759+00:00 stderr F I0813 20:08:00.257628 1 installer_controller.go:512] "crc" 
is in transition to 8, but has not made progress because waiting for static pod of revision 8, found 7 2025-08-13T20:08:08.302031825+00:00 stderr F I0813 20:08:08.299863 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because static pod is pending 2025-08-13T20:08:08.669468880+00:00 stderr F I0813 20:08:08.668167 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because static pod is pending 2025-08-13T20:08:09.661708029+00:00 stderr F I0813 20:08:09.660459 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because static pod is pending 2025-08-13T20:08:11.298218219+00:00 stderr F I0813 20:08:11.296588 1 installer_controller.go:512] "crc" is in transition to 8, but has not made progress because static pod is pending 2025-08-13T20:08:32.423503917+00:00 stderr F E0813 20:08:32.421764 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler-operator/leases/openshift-cluster-kube-scheduler-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.657075264+00:00 stderr F E0813 20:08:32.656916 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.659262606+00:00 stderr F E0813 20:08:32.659211 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.665237858+00:00 stderr F E0813 20:08:32.664273 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.671484387+00:00 stderr F E0813 20:08:32.670156 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.676878851+00:00 stderr F E0813 20:08:32.676854 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.684176151+00:00 stderr F E0813 20:08:32.684095 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.700397636+00:00 stderr F E0813 20:08:32.700290 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.709248700+00:00 stderr F E0813 20:08:32.709170 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.742218485+00:00 stderr F E0813 20:08:32.742106 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.754565889+00:00 stderr F E0813 20:08:32.754514 1 
base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:32.857328175+00:00 stderr F E0813 20:08:32.857281 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.058487083+00:00 stderr F E0813 20:08:33.058429 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.256689975+00:00 stderr F E0813 20:08:33.256581 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.460638473+00:00 stderr F E0813 20:08:33.460541 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:33.859451947+00:00 stderr F E0813 20:08:33.859324 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:34.259980410+00:00 stderr F E0813 20:08:34.259926 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:34.459226493+00:00 stderr F E0813 20:08:34.458756 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.065538257+00:00 stderr F E0813 20:08:35.065047 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:35.260978180+00:00 stderr F E0813 20:08:35.259879 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:35.857700678+00:00 stderr F E0813 20:08:35.857534 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.062962473+00:00 stderr F E0813 20:08:36.061522 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:36.460504571+00:00 stderr F E0813 20:08:36.460427 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:36.657974753+00:00 stderr F E0813 20:08:36.657874 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:36.863096394+00:00 stderr F E0813 20:08:36.862953 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:37.060307248+00:00 stderr F E0813 20:08:37.060175 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.465059463+00:00 stderr F E0813 20:08:37.464972 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:37.860871651+00:00 stderr F E0813 20:08:37.858753 1 base_controller.go:268] 
InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.260733035+00:00 stderr F E0813 20:08:38.260682 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:38.663861994+00:00 stderr F E0813 20:08:38.662819 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.262952820+00:00 stderr F E0813 20:08:39.260829 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:08:39.462171091+00:00 stderr F E0813 20:08:39.462085 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:40.063432860+00:00 stderr F E0813 20:08:40.059750 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.486148519+00:00 stderr F E0813 20:08:40.481360 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:41.059009174+00:00 stderr F E0813 20:08:41.058291 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:41.466963720+00:00 stderr F E0813 20:08:41.465427 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:41.659714477+00:00 stderr F E0813 20:08:41.659572 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.265556167+00:00 stderr F E0813 20:08:42.265423 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:42.861740440+00:00 stderr F E0813 20:08:42.861671 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.469264278+00:00 stderr F E0813 20:08:43.463841 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:44.859357553+00:00 stderr F E0813 20:08:44.859243 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:45.105574872+00:00 stderr F E0813 20:08:45.105237 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:45.260272658+00:00 stderr F E0813 20:08:45.260163 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.657439815+00:00 stderr F E0813 20:08:46.657116 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.060711047+00:00 stderr F E0813 20:08:47.060353 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: 
["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:47.264939003+00:00 stderr F E0813 20:08:47.262274 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.466948065+00:00 stderr F E0813 20:08:47.464918 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.860878800+00:00 stderr F E0813 20:08:48.860045 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:49.261750484+00:00 stderr F E0813 20:08:49.261652 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.664740178+00:00 stderr F E0813 20:08:50.664349 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused] 2025-08-13T20:08:50.865964007+00:00 stderr F E0813 20:08:50.864296 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:51.260165459+00:00 stderr F E0813 20:08:51.260058 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.462859232+00:00 stderr F E0813 20:08:52.462645 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:53.061524716+00:00 stderr F E0813 20:08:53.061168 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.261631214+00:00 stderr F E0813 20:08:54.261472 1 base_controller.go:268] TargetConfigController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.864943431+00:00 stderr F E0813 20:08:54.861968 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:55.861874535+00:00 stderr F E0813 20:08:55.861383 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.900825473+00:00 stderr F E0813 20:08:56.900663 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.440125344+00:00 stderr F E0813 20:08:57.440012 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:57.507969159+00:00 stderr F E0813 20:08:57.507842 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.434416582+00:00 stderr F E0813 20:08:58.433978 1 base_controller.go:268] TargetConfigController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:28.101218495+00:00 stderr F I0813 20:09:28.100409 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:29.927306530+00:00 stderr F I0813 20:09:29.926836 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:29.930692147+00:00 stderr F I0813 20:09:29.930523 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:29.952076110+00:00 stderr F I0813 20:09:29.951739 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:09:29.952076110+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:09:29.952076110+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:09:29.952076110+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedRevision: (int32) 5, 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedTime: (*v1.Time)(0xc00042e9d8)(2024-06-26 12:52:00 +0000 UTC), 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:09:29.952076110+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:09:29.952076110+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:09:29.952076110+00:00 stderr F (string) (len=2059) "installer: duler-pod\",\n SecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=31) \"localhost-recovery-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\n (string) (len=18) \"kube-scheduler-pod\",\n (string) (len=6) \"config\",\n (string) (len=17) 
\"serviceaccount-ca\",\n (string) (len=20) \"scheduler-kubeconfig\",\n (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=16) \"policy-configmap\"\n },\n CertSecretNames: ([]string) (len=1 cap=1) {\n (string) (len=30) \"kube-scheduler-client-cert-key\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) ,\n OptionalCertConfigMapNamePrefixes: ([]string) ,\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:10.196140 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:10.287595 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:10.292603 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:20.300456 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:30.295953 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:50:00.296621 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:14.299317 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:09:29.952076110+00:00 stderr F } 2025-08-13T20:09:29.952076110+00:00 stderr F } 2025-08-13T20:09:29.952076110+00:00 stderr F because static pod is ready 2025-08-13T20:09:29.974254396+00:00 stderr F I0813 20:09:29.974145 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 7 to 8 because static pod is ready 2025-08-13T20:09:29.975662596+00:00 stderr F I0813 20:09:29.975627 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:29Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:29.976646144+00:00 stderr F I0813 20:09:29.976583 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:29.995650949+00:00 stderr F I0813 20:09:29.995595 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 8"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7; 0 nodes have achieved new revision 8" to 
"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8" 2025-08-13T20:09:30.537031651+00:00 stderr F I0813 20:09:30.536506 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.543282740+00:00 stderr F E0813 20:09:30.543234 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.544034722+00:00 stderr F I0813 20:09:30.544004 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 
8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.550363613+00:00 stderr F E0813 20:09:30.550266 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.551084374+00:00 stderr F I0813 20:09:30.550996 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.556504790+00:00 stderr F E0813 20:09:30.556470 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.561870313+00:00 stderr F I0813 20:09:30.561683 1 status_controller.go:218] 
clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.568154224+00:00 stderr F E0813 20:09:30.568032 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.609735966+00:00 stderr F I0813 20:09:30.609603 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.616439068+00:00 stderr F E0813 20:09:30.616301 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.699459548+00:00 stderr F I0813 20:09:30.697607 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.707994923+00:00 stderr F E0813 20:09:30.707681 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:30.869500534+00:00 stderr F I0813 20:09:30.869428 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: 
All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:30Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:30.880470038+00:00 stderr F E0813 20:09:30.880354 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:31.203732086+00:00 stderr F I0813 20:09:31.203624 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:31Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:31.213663921+00:00 stderr F E0813 
20:09:31.213471 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:31.855883135+00:00 stderr F I0813 20:09:31.855749 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:31Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:31.862935347+00:00 stderr F E0813 20:09:31.862764 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:32.274533878+00:00 stderr F I0813 20:09:32.274336 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:32.348989672+00:00 stderr F I0813 20:09:32.347245 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:32.702465977+00:00 stderr F I0813 20:09:32.702301 1 status_controller.go:218] 
clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:32.709356515+00:00 stderr F E0813 20:09:32.709256 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:33.136957823+00:00 stderr F I0813 20:09:33.136156 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:33Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:33.147025742+00:00 stderr F E0813 20:09:33.146737 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:33.148276978+00:00 stderr F I0813 20:09:33.148199 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:33Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:33.157408120+00:00 stderr F E0813 20:09:33.157256 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:34.113335487+00:00 stderr F I0813 20:09:34.113272 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:09:34.534085630+00:00 stderr F I0813 
20:09:34.534017 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:34.774416151+00:00 stderr F I0813 20:09:34.774049 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:35.933337538+00:00 stderr F I0813 20:09:35.932834 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:36.134601698+00:00 stderr F I0813 20:09:36.134500 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:36.873630366+00:00 stderr F I0813 20:09:36.873530 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:37.075117703+00:00 stderr F I0813 20:09:37.074269 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:37.130367177+00:00 stderr F I0813 20:09:37.130258 1 request.go:697] Waited for 1.19496227s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc
2025-08-13T20:09:37.136475272+00:00 stderr F I0813 20:09:37.136238 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:37Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:37.142530496+00:00 stderr F E0813 20:09:37.142432 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:37.143245286+00:00 stderr F I0813 20:09:37.143168 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:37Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:37.149396453+00:00 stderr F E0813 20:09:37.149312 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:37.334031207+00:00 stderr F I0813 20:09:37.333836 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:38.268554420+00:00 stderr F I0813 20:09:38.268410 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:38.274959354+00:00 stderr F E0813 20:09:38.274735 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:38.330433744+00:00 stderr F I0813 20:09:38.330277 1 request.go:697] Waited for 1.195071584s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc
2025-08-13T20:09:38.340314518+00:00 stderr F I0813 20:09:38.340212 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:38.346515885+00:00 stderr F E0813 20:09:38.346338 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:38.347238156+00:00 stderr F I0813 20:09:38.347124 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:38.357465189+00:00 stderr F E0813 20:09:38.357309 1 base_controller.go:268] StatusSyncer_kube-scheduler reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-scheduler": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:38.533927949+00:00 stderr F I0813 20:09:38.533022 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:41.658292706+00:00 stderr F I0813 20:09:41.658105 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:41.798998570+00:00 stderr F I0813 20:09:41.798438 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:43.796436379+00:00 stderr F I0813 20:09:43.795019 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:44.200848973+00:00 stderr F I0813 20:09:44.198357 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:49.203368260+00:00 stderr F I0813 20:09:49.202253 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:51.434943531+00:00 stderr F I0813 20:09:51.431532 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:52.016265048+00:00 stderr F I0813 20:09:52.016174 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:10:12.320140828+00:00 stderr F I0813 20:10:12.317306 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:10:14.400878154+00:00 stderr F I0813 20:10:14.399461 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:10:18.062113022+00:00 stderr F I0813 20:10:18.058972 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:10:19.598567013+00:00 stderr F I0813 20:10:19.598491 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:10:26.215610350+00:00 stderr F I0813 20:10:26.214704 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:10:27.797339089+00:00 stderr F I0813 20:10:27.797234 1 reflector.go:351] Caches populated for *v1.Scheduler from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:10:28.405947179+00:00 stderr F I0813 20:10:28.405471 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:10:31.021413336+00:00 stderr F I0813 20:10:31.021171 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:10:40.552074109+00:00 stderr F I0813 20:10:40.543346 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:10:41.705436466+00:00 stderr F I0813 20:10:41.705315 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:42:36.355339714+00:00 stderr F I0813 20:42:36.341575 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.373365804+00:00 stderr F I0813 20:42:36.350385 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.373365804+00:00 stderr F I0813 20:42:36.350426 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.373365804+00:00 stderr F I0813 20:42:36.350439 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.373365804+00:00 stderr F I0813 20:42:36.350460 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.373428316+00:00 stderr F I0813 20:42:36.350472 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350487 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350501 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350513 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350524 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350535 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350561 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350575 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350587 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.320254 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350642 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350656 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350666 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350677 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350687 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350723 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350738 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350754 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350866 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350883 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350892 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350908 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.376687750+00:00 stderr F I0813 20:42:36.350917 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.393459673+00:00 stderr F I0813 20:42:36.350935 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.412702238+00:00 stderr F I0813 20:42:36.412551 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:39.402538706+00:00 stderr F I0813 20:42:39.401878 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller.
2025-08-13T20:42:39.403501624+00:00 stderr F I0813 20:42:39.402611 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:42:39.403579226+00:00 stderr F E0813 20:42:39.403535 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler-operator/leases/openshift-cluster-kube-scheduler-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.403682989+00:00 stderr F W0813 20:42:39.403643 1 leaderelection.go:85] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/0.log

2025-08-13T19:59:06.750257233+00:00 stderr F I0813 19:59:06.742937 1 cmd.go:241] Using service-serving-cert provided certificates
2025-08-13T19:59:06.786971529+00:00 stderr F I0813 19:59:06.782877 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:06.942901614+00:00 stderr F I0813 19:59:06.940217 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:07.926525343+00:00 stderr F I0813 19:59:07.924691 1 builder.go:299] openshift-cluster-kube-scheduler-operator version 4.16.0-202406131906.p0.g630f63b.assembly.stream.el9-630f63b-630f63bc7a30d2662bbb5115233144079de6eef6
2025-08-13T19:59:12.720179267+00:00 stderr F I0813 19:59:12.719111 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T19:59:12.820604490+00:00 stderr F I0813 19:59:12.820420 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:59:12.820604490+00:00 stderr F W0813 19:59:12.820512 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:12.820604490+00:00 stderr F W0813 19:59:12.820521 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:12.823878544+00:00 stderr F I0813 19:59:12.822329 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock...
2025-08-13T19:59:14.074160714+00:00 stderr F I0813 19:59:14.073686 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock
2025-08-13T19:59:14.086618029+00:00 stderr F I0813 19:59:14.082412 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-cluster-kube-scheduler-operator-lock", UID:"71f97dfe-5375-4cc7-b12a-e96cf43bdae0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"27949", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_b7a0aeae-1f73-4439-9109-a6d072941331 became leader
2025-08-13T19:59:14.277518981+00:00 stderr F I0813 19:59:14.276568 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:59:14.277518981+00:00 stderr F I0813 19:59:14.277119 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:59:15.568504613+00:00 stderr F I0813 19:59:15.563198 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T19:59:15.568504613+00:00 stderr F I0813 19:59:15.563643 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T19:59:15.568504613+00:00 stderr F I0813 19:59:15.563239 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:15.587516315+00:00 stderr F I0813 19:59:15.582508 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:59:15.607910957+00:00 stderr F I0813 19:59:15.605047 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:15.607910957+00:00 stderr F I0813 19:59:15.605131 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:15.620592328+00:00 stderr F I0813 19:59:15.620545 1 secure_serving.go:213] Serving securely on [::]:8443
2025-08-13T19:59:15.620755023+00:00 stderr F I0813 19:59:15.620734 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T19:59:15.845312024+00:00 stderr F I0813 19:59:15.821569 1 starter.go:80] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T19:59:15.845858660+00:00 stderr F I0813 19:59:15.845737 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T19:59:15.846489558+00:00 stderr F I0813 19:59:15.846218 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:15.847163457+00:00 stderr F I0813 19:59:15.846455 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:16.319895263+00:00 stderr F I0813 19:59:16.318321 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:16.319895263+00:00 stderr F E0813 19:59:16.318472 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.319895263+00:00 stderr F E0813 19:59:16.318510 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.334965852+00:00 stderr F E0813 19:59:16.326704 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.360658714+00:00 stderr F E0813 19:59:16.360212 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.360658714+00:00 stderr F E0813 19:59:16.360280 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.381641183+00:00 stderr F E0813 19:59:16.378296 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.445417691+00:00 stderr F E0813 19:59:16.436610 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.488232291+00:00 stderr F E0813 19:59:16.483222 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.568202701+00:00 stderr F E0813 19:59:16.558084 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.654130840+00:00 stderr F E0813 19:59:16.651388 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.654130840+00:00 stderr F E0813 19:59:16.651483 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.861331357+00:00 stderr F E0813 19:59:16.813732 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.861331357+00:00 stderr F E0813 19:59:16.827618 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:17.177242403+00:00 stderr F E0813 19:59:17.034141 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:17.177242403+00:00 stderr F E0813 19:59:17.176599 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:17.177242403+00:00 stderr F I0813 19:59:17.088712 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController
2025-08-13T19:59:17.177242403+00:00 stderr F I0813 19:59:17.118470 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController
2025-08-13T19:59:17.177242403+00:00 stderr F I0813 19:59:17.161507 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-08-13T19:59:17.177242403+00:00 stderr F I0813 19:59:17.175011 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController
2025-08-13T19:59:19.941138489+00:00 stderr F I0813 19:59:19.923556 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources
2025-08-13T19:59:19.984238058+00:00 stderr F I0813 19:59:19.938257 1 base_controller.go:67] Waiting for caches to sync for RevisionController
2025-08-13T19:59:19.989219850+00:00 stderr F I0813 19:59:19.988407 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-08-13T19:59:19.989219850+00:00 stderr F E0813 19:59:19.988689 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.154621 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController
2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.154758 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController
2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.154911 1 base_controller.go:67] Waiting for caches to sync for PruneController
2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.154937 1 base_controller.go:67] Waiting for caches to sync for NodeController
2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.155286 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.155309 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.155322 1 base_controller.go:67] Waiting for caches to sync for GuardController
2025-08-13T19:59:20.168042778+00:00 stderr F I0813 19:59:20.155766 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-scheduler
2025-08-13T19:59:20.217902409+00:00 stderr F I0813 19:59:20.217548 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController
2025-08-13T19:59:20.238744153+00:00 stderr F I0813 19:59:20.238127 1 base_controller.go:67] Waiting for caches to sync for InstallerController
2025-08-13T19:59:20.571913630+00:00 stderr F E0813 19:59:20.571743 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.075747208+00:00 stderr F I0813 19:59:24.027082 1 base_controller.go:73] Caches are synced for InstallerStateController
2025-08-13T19:59:24.075747208+00:00 stderr F I0813 19:59:24.027502 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ...
2025-08-13T19:59:24.075747208+00:00 stderr F I0813 19:59:24.027548 1 base_controller.go:73] Caches are synced for GuardController
2025-08-13T19:59:24.075747208+00:00 stderr F I0813 19:59:24.027556 1 base_controller.go:110] Starting #1 worker of GuardController controller ...
2025-08-13T19:59:24.075747208+00:00 stderr F E0813 19:59:24.031037 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.075747208+00:00 stderr F E0813 19:59:24.031073 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.174209355+00:00 stderr F I0813 19:59:24.173715 1 base_controller.go:73] Caches are synced for NodeController
2025-08-13T19:59:24.174209355+00:00 stderr F I0813 19:59:24.173875 1 base_controller.go:110] Starting #1 worker of NodeController controller ...
2025-08-13T19:59:24.174209355+00:00 stderr F I0813 19:59:24.173900 1 base_controller.go:73] Caches are synced for PruneController
2025-08-13T19:59:24.180871845+00:00 stderr F I0813 19:59:24.178934 1 base_controller.go:110] Starting #1 worker of PruneController controller ...
2025-08-13T19:59:24.197711115+00:00 stderr F I0813 19:59:24.197643 1 base_controller.go:73] Caches are synced for MissingStaticPodController
2025-08-13T19:59:24.252630560+00:00 stderr F I0813 19:59:24.252208 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ...
2025-08-13T19:59:24.285859177+00:00 stderr F I0813 19:59:24.282293 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-08-13T19:59:24.298903559+00:00 stderr F I0813 19:59:24.286007 1 base_controller.go:73] Caches are synced for RevisionController
2025-08-13T19:59:24.308657957+00:00 stderr F I0813 19:59:24.308498 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T19:59:24.308755230+00:00 stderr F I0813 19:59:24.308735 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T19:59:24.309183862+00:00 stderr F I0813 19:59:24.309119 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-08-13T19:59:24.309183862+00:00 stderr F I0813 19:59:24.309171 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-08-13T19:59:24.309538402+00:00 stderr F I0813 19:59:24.308546 1 base_controller.go:73] Caches are synced for StaticPodStateController
2025-08-13T19:59:24.309596684+00:00 stderr F I0813 19:59:24.309577 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ...
2025-08-13T19:59:24.313347991+00:00 stderr F I0813 19:59:24.286034 1 base_controller.go:73] Caches are synced for InstallerController
2025-08-13T19:59:24.313418613+00:00 stderr F I0813 19:59:24.313402 1 base_controller.go:110] Starting #1 worker of InstallerController controller ...
2025-08-13T19:59:24.337597902+00:00 stderr F I0813 19:59:24.337309 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:24.338316453+00:00 stderr F I0813 19:59:24.338235 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:24.339141446+00:00 stderr F I0813 19:59:24.338647 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:24.345083176+00:00 stderr F I0813 19:59:24.345016 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T19:59:24.356059319+00:00 stderr F I0813 19:59:24.355992 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T19:59:24.357731286+00:00 stderr F I0813 19:59:24.357700 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-scheduler 2025-08-13T19:59:24.360130935+00:00 stderr F I0813 19:59:24.360021 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-scheduler controller ... 
2025-08-13T19:59:24.362277496+00:00 stderr F I0813 19:59:24.362238 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:24Z","message":"NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)","reason":"NodeController_MasterNodesReady","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:58:01Z","message":"NodeInstallerProgressing: 1 node is at revision 6","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:24.371413966+00:00 stderr F I0813 19:59:24.364198 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T19:59:24.419431825+00:00 stderr F I0813 19:59:24.419309 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T19:59:24.419431825+00:00 stderr F I0813 19:59:24.419354 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T19:59:24.461939797+00:00 stderr F I0813 19:59:24.458873 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2025-08-13T19:59:24.461939797+00:00 stderr F I0813 19:59:24.458941 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 
2025-08-13T19:59:25.315461666+00:00 stderr F E0813 19:59:25.312038 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:25.516356702+00:00 stderr F I0813 19:59:25.504740 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)") 2025-08-13T19:59:25.820550084+00:00 stderr F I0813 19:59:25.820094 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-26T12:58:01Z","message":"NodeInstallerProgressing: 1 node is at revision 6","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:26.105266230+00:00 stderr F I0813 19:59:26.105188 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:26.237973862+00:00 stderr F I0813 19:59:26.192630 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") 2025-08-13T19:59:26.593444855+00:00 stderr F E0813 19:59:26.593093 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:26.892963843+00:00 stderr F I0813 19:59:26.892221 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T19:59:26.980165039+00:00 stderr F I0813 19:59:26.979948 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:26.980165039+00:00 stderr F I0813 19:59:26.979994 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T19:59:26.980165039+00:00 stderr F I0813 19:59:26.980052 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T19:59:26.980165039+00:00 stderr F I0813 19:59:26.980059 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 2025-08-13T19:59:27.875125470+00:00 stderr F E0813 19:59:27.872973 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:27.919694851+00:00 stderr F I0813 19:59:27.919459 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:27.919999479+00:00 stderr F I0813 19:59:27.919974 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T19:59:31.716486518+00:00 stderr F E0813 19:59:31.716107 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:32.995353672+00:00 stderr F E0813 19:59:32.994615 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:41.965062365+00:00 stderr F E0813 19:59:41.962572 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:43.238992878+00:00 stderr F E0813 19:59:43.238112 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:51.822111606+00:00 stderr F I0813 19:59:51.815170 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:51.915942131+00:00 stderr F I0813 19:59:51.915673 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.915501758 +0000 UTC))" 2025-08-13T19:59:51.915942131+00:00 stderr F I0813 19:59:51.915890 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.915755946 +0000 UTC))" 2025-08-13T19:59:51.915942131+00:00 stderr F I0813 19:59:51.915924 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.91590089 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.915949 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.915930981 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.915971 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.915958592 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.915988 1 tlsconfig.go:178] "Loaded client CA" index=5 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.915976512 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.916011 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.915993482 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.916029 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.916018183 +0000 UTC))" 2025-08-13T19:59:51.916148947+00:00 stderr F I0813 19:59:51.916055 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.916034914 +0000 UTC))" 2025-08-13T19:59:51.917876096+00:00 stderr F I0813 19:59:51.916576 1 tlsconfig.go:200] "Loaded serving 
cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 19:59:51.916552738 +0000 UTC))" 2025-08-13T19:59:52.021908452+00:00 stderr F I0813 19:59:52.021159 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115151\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115149\" (2025-08-13 18:59:08 +0000 UTC to 2026-08-13 18:59:08 +0000 UTC (now=2025-08-13 19:59:52.021103789 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.728508 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.728468532 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.729386 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.729306736 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.729417 1 tlsconfig.go:178] "Loaded client CA" index=2 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.729396818 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.729494 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.72946132 +0000 UTC))" 2025-08-13T20:00:05.730978713+00:00 stderr F I0813 20:00:05.729511 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.729500051 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.729770 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.729519012 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.799009 
1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.7988868 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.799112 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.799092906 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.799206 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.799118016 +0000 UTC))" 2025-08-13T20:00:05.801291348+00:00 stderr F I0813 20:00:05.799338 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.799217719 +0000 UTC))" 2025-08-13T20:00:05.825056146+00:00 stderr F I0813 
20:00:05.822722 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 20:00:05.822678308 +0000 UTC))" 2025-08-13T20:00:05.825056146+00:00 stderr F I0813 20:00:05.823980 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115151\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115149\" (2025-08-13 18:59:08 +0000 UTC to 2026-08-13 18:59:08 +0000 UTC (now=2025-08-13 20:00:05.823609605 +0000 UTC))" 2025-08-13T20:00:30.817609083+00:00 stderr F I0813 20:00:30.815856 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 7 triggered by "optional secret/serving-cert has changed" 2025-08-13T20:00:30.880127836+00:00 stderr F I0813 20:00:30.874647 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-scheduler-pod-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:30.901642529+00:00 stderr F I0813 20:00:30.900057 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:31.038110090+00:00 stderr F I0813 20:00:31.035252 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/serviceaccount-ca-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:31.628087483+00:00 stderr F I0813 20:00:31.626200 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/scheduler-kubeconfig-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:33.047671160+00:00 stderr F I0813 20:00:33.034887 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:33.234393285+00:00 stderr F I0813 20:00:33.230541 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): 
type: 'Normal' reason: 'SecretCreated' Created Secret/serving-cert-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:33.384646049+00:00 stderr F I0813 20:00:33.383876 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-7 -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:33.794477925+00:00 stderr F I0813 20:00:33.794386 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 7 triggered by "optional secret/serving-cert has changed" 2025-08-13T20:00:33.994328463+00:00 stderr F I0813 20:00:33.993397 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 7 created because optional secret/serving-cert has changed 2025-08-13T20:00:33.997412101+00:00 stderr F I0813 20:00:33.997345 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:00:35.638695201+00:00 stderr F I0813 20:00:35.635725 1 installer_controller.go:524] node crc with revision 6 is the oldest and needs new revision 7 2025-08-13T20:00:35.638695201+00:00 stderr F I0813 20:00:35.636690 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:00:35.638695201+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:00:35.638695201+00:00 stderr F CurrentRevision: (int32) 6, 
2025-08-13T20:00:35.638695201+00:00 stderr F TargetRevision: (int32) 7, 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedRevision: (int32) 5, 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0017426d8)(2024-06-26 12:52:00 +0000 UTC), 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:00:35.638695201+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:00:35.638695201+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:00:35.638695201+00:00 stderr F (string) (len=2059) "installer: duler-pod\",\n SecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=31) \"localhost-recovery-client-token\"\n },\n OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=12) \"serving-cert\"\n },\n ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\n (string) (len=18) \"kube-scheduler-pod\",\n (string) (len=6) \"config\",\n (string) (len=17) \"serviceaccount-ca\",\n (string) (len=20) \"scheduler-kubeconfig\",\n (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\n },\n OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=16) \"policy-configmap\"\n },\n CertSecretNames: ([]string) (len=1 cap=1) {\n (string) (len=30) \"kube-scheduler-client-cert-key\"\n },\n OptionalCertSecretNamePrefixes: ([]string) ,\n CertConfigMapNamePrefixes: ([]string) ,\n OptionalCertConfigMapNamePrefixes: ([]string) ,\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:10.196140 1 cmd.go:410] Getting controller 
reference for node crc\nI0626 12:49:10.287595 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:10.292603 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:20.300456 1 cmd.go:503] Pod container: installer state for node crc is not terminated, waiting\nI0626 12:49:30.295953 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:50:00.296621 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:14.299317 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:00:35.638695201+00:00 stderr F } 2025-08-13T20:00:35.638695201+00:00 stderr F } 2025-08-13T20:00:35.744594801+00:00 stderr F I0813 20:00:35.744447 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 6 to 7 because node crc with revision 6 is the oldest 2025-08-13T20:00:35.765922529+00:00 stderr F I0813 20:00:35.762651 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:35Z","message":"NodeInstallerProgressing: 1 node is at revision 6; 0 nodes have achieved new revision 7","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6; 0 nodes have achieved new revision 
7","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:35.843115420+00:00 stderr F I0813 20:00:35.842474 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:00:35.907574728+00:00 stderr F I0813 20:00:35.905035 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 6; 0 nodes have achieved new revision 7"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6; 0 nodes have achieved new revision 7" 2025-08-13T20:00:37.153230506+00:00 stderr F I0813 20:00:37.136051 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-7-crc -n openshift-kube-scheduler because it was missing 2025-08-13T20:00:38.492115353+00:00 stderr F I0813 20:00:38.485670 1 installer_controller.go:512] "crc" is in transition to 7, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:00:39.595945127+00:00 stderr F I0813 20:00:39.578461 1 installer_controller.go:512] "crc" is in transition to 7, but has not made progress because installer is not 
finished, but in Pending phase 2025-08-13T20:00:46.881302911+00:00 stderr F I0813 20:00:46.866452 1 installer_controller.go:512] "crc" is in transition to 7, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.981038 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.980942085 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998578 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.994291525 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998644 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.998610639 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998677 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.99865393 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998719 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.998701701 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.998744 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.998726352 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999094 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.998750653 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999209 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.999111613 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999251 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.999224796 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999302 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.999281778 +0000 UTC))" 2025-08-13T20:01:00.002010316+00:00 stderr F I0813 20:00:59.999330 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.999312609 +0000 UTC))" 2025-08-13T20:01:00.028106330+00:00 stderr F I0813 20:01:00.016271 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:10 +0000 UTC to 2026-06-26 12:47:11 +0000 UTC (now=2025-08-13 20:01:00.016232431 +0000 UTC))" 2025-08-13T20:01:00.028106330+00:00 stderr F I0813 20:01:00.020201 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115151\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115149\" (2025-08-13 18:59:08 +0000 UTC to 2026-08-13 18:59:08 +0000 UTC (now=2025-08-13 20:01:00.020135532 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.375177 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.375957 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.376642 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382332 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:20.382291382 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382367 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:20.382353574 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382388 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:20.382372944 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382413 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:20.382396385 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382431 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.382419856 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382454 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.382441626 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382474 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.382461357 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382505 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.382480117 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382523 1 tlsconfig.go:178] "Loaded client CA" index=8 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:20.382512488 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382541 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:20.382530409 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382567 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:20.38255572 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.382990 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:26 +0000 UTC to 2027-08-13 20:00:27 +0000 UTC (now=2025-08-13 20:01:20.382970791 +0000 UTC))" 2025-08-13T20:01:20.385126573+00:00 stderr F I0813 20:01:20.383322 1 
named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115151\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115149\" (2025-08-13 18:59:08 +0000 UTC to 2026-08-13 18:59:08 +0000 UTC (now=2025-08-13 20:01:20.383296541 +0000 UTC))" 2025-08-13T20:01:21.951018043+00:00 stderr F I0813 20:01:21.950025 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="bb1e896cccd42fdf3b040bf941474d8b98c2e1008b067302988ef61fbb74af9d", new="f2ef63f4fe25e3a28410fbdc21c93cf27decbc59c0810ae7fae1df548bef3156") 2025-08-13T20:01:21.951018043+00:00 stderr F W0813 20:01:21.950997 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.key was modified 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.951354 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="af4bed0830029e9d25ab9fe2cabce4be2989fe96db922a2b2f78449a73557f43", new="fd988104fafb68af302e48a9bb235ee14d5ce3d51ab6bad4c4b27339f3fa7c47") 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.951708 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.951904 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.951966 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952066 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952097 1 base_controller.go:172] Shutting down NodeController ... 
2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952166 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952182 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952302 1 base_controller.go:114] Shutting down worker of PruneController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952312 1 base_controller.go:104] All PruneController workers have been terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952406 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952419 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952426 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952430 1 controller_manager.go:54] NodeController controller terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952437 1 base_controller.go:114] Shutting down worker of GuardController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952448 1 base_controller.go:104] All GuardController workers have been terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952452 1 controller_manager.go:54] GuardController controller terminated 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952461 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952508 1 base_controller.go:172] Shutting down StatusSyncer_kube-scheduler ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952533 1 base_controller.go:172] Shutting down RevisionController ... 
2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952555 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952570 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952584 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952597 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952609 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952624 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952641 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952694 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952829 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952872 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:21.959634619+00:00 stderr F I0813 20:01:21.952884 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.953768 1 base_controller.go:150] All StatusSyncer_kube-scheduler post start hooks have been terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962390 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962408 1 
controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962418 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962424 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962495 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962552 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962565 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962602 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:21.962742357+00:00 stderr F I0813 20:01:21.962665 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:21.962742357+00:00 stderr F E0813 20:01:21.962704 1 request.go:1116] Unexpected error when reading response body: context canceled 2025-08-13T20:01:21.962967624+00:00 stderr F I0813 20:01:21.962524 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:01:21.963154879+00:00 stderr F I0813 20:01:21.963093 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:01:21.963154879+00:00 stderr F I0813 20:01:21.963127 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:21.963154879+00:00 stderr F I0813 20:01:21.963139 1 controller_manager.go:54] LoggingSyncer controller terminated 2025-08-13T20:01:21.963154879+00:00 stderr F I0813 20:01:21.963148 1 base_controller.go:114] Shutting down worker of InstallerController controller ... 2025-08-13T20:01:21.963176590+00:00 stderr F I0813 20:01:21.963155 1 base_controller.go:104] All InstallerController workers have been terminated 2025-08-13T20:01:21.963176590+00:00 stderr F I0813 20:01:21.963160 1 controller_manager.go:54] InstallerController controller terminated 2025-08-13T20:01:21.963189860+00:00 stderr F I0813 20:01:21.963180 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963187 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963193 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963201 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963209 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963085 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 
2025-08-13T20:01:21.963236351+00:00 stderr F I0813 20:01:21.963231 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:01:21.963270052+00:00 stderr F I0813 20:01:21.963236 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:01:21.963270052+00:00 stderr F I0813 20:01:21.963248 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-scheduler controller ... 2025-08-13T20:01:21.963270052+00:00 stderr F I0813 20:01:21.963255 1 base_controller.go:104] All StatusSyncer_kube-scheduler workers have been terminated 2025-08-13T20:01:21.963270052+00:00 stderr F I0813 20:01:21.963265 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:01:21.963284713+00:00 stderr F I0813 20:01:21.963272 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:01:21.963284713+00:00 stderr F I0813 20:01:21.963277 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:01:21.963441377+00:00 stderr F I0813 20:01:21.963377 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:21.963531170+00:00 stderr F I0813 20:01:21.963478 1 builder.go:330] server exited 2025-08-13T20:01:21.963705225+00:00 stderr F I0813 20:01:21.963681 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:01:21.969730416+00:00 stderr F I0813 20:01:21.964094 1 base_controller.go:172] Shutting down KubeControllerManagerStaticResources ... 2025-08-13T20:01:21.969887911+00:00 stderr F I0813 20:01:21.964110 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:01:21.969938282+00:00 stderr F I0813 20:01:21.964128 1 base_controller.go:114] Shutting down worker of KubeControllerManagerStaticResources controller ... 
2025-08-13T20:01:21.969994814+00:00 stderr F I0813 20:01:21.969976 1 base_controller.go:104] All KubeControllerManagerStaticResources workers have been terminated 2025-08-13T20:01:21.970029175+00:00 stderr F I0813 20:01:21.964139 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 2025-08-13T20:01:21.970070696+00:00 stderr F I0813 20:01:21.970054 1 base_controller.go:104] All BackingResourceController workers have been terminated 2025-08-13T20:01:21.970116817+00:00 stderr F I0813 20:01:21.970100 1 controller_manager.go:54] BackingResourceController controller terminated 2025-08-13T20:01:21.970154508+00:00 stderr F I0813 20:01:21.964419 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:01:21.970195970+00:00 stderr F I0813 20:01:21.970180 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:01:21.970282332+00:00 stderr F I0813 20:01:21.964467 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:01:21.970337104+00:00 stderr F I0813 20:01:21.964538 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:21.970382325+00:00 stderr F I0813 20:01:21.970365 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:01:21.997038065+00:00 stderr F E0813 20:01:21.972707 1 base_controller.go:268] TargetConfigController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:21.997038065+00:00 stderr F I0813 20:01:21.972864 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 
2025-08-13T20:01:21.997038065+00:00 stderr F I0813 20:01:21.972905 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:01:22.298082319+00:00 stderr F W0813 20:01:22.298025 1 leaderelection.go:85] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container/2.log

2025-12-13T00:13:15.441003094+00:00 stderr F I1213 00:13:15.440597 1 cmd.go:241] Using service-serving-cert provided certificates 2025-12-13T00:13:15.441003094+00:00 stderr F I1213 00:13:15.440924 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:13:15.442236735+00:00 stderr F I1213 00:13:15.442166 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:15.488667696+00:00 stderr F I1213 00:13:15.487327 1 builder.go:299] openshift-cluster-kube-scheduler-operator version 4.16.0-202406131906.p0.g630f63b.assembly.stream.el9-630f63b-630f63bc7a30d2662bbb5115233144079de6eef6 2025-12-13T00:13:16.068789719+00:00 stderr F I1213 00:13:16.065705 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:16.068789719+00:00 stderr F W1213 00:13:16.066102 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:16.068789719+00:00 stderr F W1213 00:13:16.066108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-12-13T00:13:16.085138528+00:00 stderr F I1213 00:13:16.084635 1 secure_serving.go:213] Serving securely on [::]:8443 2025-12-13T00:13:16.085138528+00:00 stderr F I1213 00:13:16.084733 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:16.085138528+00:00 stderr F I1213 00:13:16.084736 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:16.086762262+00:00 stderr F I1213 00:13:16.086321 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:16.086762262+00:00 stderr F I1213 00:13:16.086365 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:16.086762262+00:00 stderr F I1213 00:13:16.086684 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:16.086762262+00:00 stderr F I1213 00:13:16.086707 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:16.086762262+00:00 stderr F I1213 00:13:16.086728 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:16.086762262+00:00 stderr F I1213 00:13:16.086733 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:16.089030149+00:00 stderr F I1213 00:13:16.088974 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-12-13T00:13:16.089328729+00:00 stderr F I1213 00:13:16.089226 1 leaderelection.go:250] attempting to acquire leader lease 
openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock... 2025-12-13T00:13:16.189312079+00:00 stderr F I1213 00:13:16.187626 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:16.189312079+00:00 stderr F I1213 00:13:16.188011 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:13:16.189312079+00:00 stderr F I1213 00:13:16.188058 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:18:08.501746011+00:00 stderr F I1213 00:18:08.500901 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock 2025-12-13T00:18:08.501746011+00:00 stderr F I1213 00:18:08.501104 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-cluster-kube-scheduler-operator-lock", UID:"71f97dfe-5375-4cc7-b12a-e96cf43bdae0", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41935", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_d3c76e4c-9b1f-4865-a883-77f9a36cf62d became leader 2025-12-13T00:18:08.503533499+00:00 stderr F I1213 00:18:08.503469 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:18:08.506201130+00:00 stderr F I1213 00:18:08.506090 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", 
"BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-12-13T00:18:08.506363904+00:00 stderr F I1213 00:18:08.506257 1 starter.go:80] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource 
ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-12-13T00:18:08.516108144+00:00 stderr F I1213 00:18:08.516051 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-12-13T00:18:08.516896555+00:00 stderr F I1213 00:18:08.516805 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-12-13T00:18:08.517011088+00:00 stderr F I1213 00:18:08.516972 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-12-13T00:18:08.517078679+00:00 stderr F I1213 00:18:08.517039 1 base_controller.go:67] Waiting for caches to sync for 
TargetConfigController 2025-12-13T00:18:08.523650964+00:00 stderr F I1213 00:18:08.523601 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-12-13T00:18:08.523650964+00:00 stderr F I1213 00:18:08.523627 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-12-13T00:18:08.523675384+00:00 stderr F I1213 00:18:08.523641 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-12-13T00:18:08.523774757+00:00 stderr F I1213 00:18:08.523685 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-12-13T00:18:08.523774757+00:00 stderr F I1213 00:18:08.523730 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-12-13T00:18:08.523774757+00:00 stderr F I1213 00:18:08.523751 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-12-13T00:18:08.523774757+00:00 stderr F I1213 00:18:08.523762 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-12-13T00:18:08.523789997+00:00 stderr F I1213 00:18:08.523778 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-12-13T00:18:08.523822938+00:00 stderr F I1213 00:18:08.523799 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-12-13T00:18:08.524156027+00:00 stderr F I1213 00:18:08.524125 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-12-13T00:18:08.524254080+00:00 stderr F I1213 00:18:08.524226 1 base_controller.go:67] Waiting for caches to sync for KubeControllerManagerStaticResources 2025-12-13T00:18:08.524400013+00:00 stderr F I1213 00:18:08.524376 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-12-13T00:18:08.524543988+00:00 stderr F I1213 00:18:08.524520 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-scheduler 2025-12-13T00:18:08.617082639+00:00 stderr F I1213 00:18:08.616975 1 
base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-12-13T00:18:08.617082639+00:00 stderr F I1213 00:18:08.617026 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-12-13T00:18:08.624442034+00:00 stderr F I1213 00:18:08.624380 1 base_controller.go:73] Caches are synced for NodeController 2025-12-13T00:18:08.624442034+00:00 stderr F I1213 00:18:08.624403 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-12-13T00:18:08.624551086+00:00 stderr F I1213 00:18:08.624490 1 base_controller.go:73] Caches are synced for PruneController 2025-12-13T00:18:08.624551086+00:00 stderr F I1213 00:18:08.624528 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-12-13T00:18:08.625405940+00:00 stderr F I1213 00:18:08.625341 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-scheduler 2025-12-13T00:18:08.625405940+00:00 stderr F I1213 00:18:08.625306 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-12-13T00:18:08.625405940+00:00 stderr F I1213 00:18:08.625378 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-scheduler controller ... 2025-12-13T00:18:08.625405940+00:00 stderr F I1213 00:18:08.625391 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-12-13T00:18:08.625469462+00:00 stderr F I1213 00:18:08.625324 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-12-13T00:18:08.625469462+00:00 stderr F I1213 00:18:08.625448 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-12-13T00:18:08.626569400+00:00 stderr F I1213 00:18:08.626500 1 prune_controller.go:269] Nothing to prune 2025-12-13T00:18:08.719071680+00:00 stderr F I1213 00:18:08.718916 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:18:08.724514055+00:00 stderr F I1213 00:18:08.724410 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-12-13T00:18:08.724514055+00:00 stderr F I1213 00:18:08.724462 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-12-13T00:18:08.921887922+00:00 stderr F I1213 00:18:08.921798 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:18:08.924811180+00:00 stderr F I1213 00:18:08.924633 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-12-13T00:18:08.924811180+00:00 stderr F I1213 00:18:08.924666 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-12-13T00:18:08.924811180+00:00 stderr F I1213 00:18:08.924662 1 base_controller.go:73] Caches are synced for InstallerController 2025-12-13T00:18:08.924811180+00:00 stderr F I1213 00:18:08.924714 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-12-13T00:18:08.924811180+00:00 stderr F I1213 00:18:08.924717 1 base_controller.go:73] Caches are synced for GuardController 2025-12-13T00:18:08.924811180+00:00 stderr F I1213 00:18:08.924732 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2025-12-13T00:18:08.925153320+00:00 stderr F I1213 00:18:08.925025 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-12-13T00:18:08.925153320+00:00 stderr F I1213 00:18:08.925068 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 
2025-12-13T00:18:08.926090004+00:00 stderr F I1213 00:18:08.925920 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8 2025-12-13T00:18:08.934995061+00:00 stderr F E1213 00:18:08.934253 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8] 2025-12-13T00:18:08.936157061+00:00 stderr F I1213 00:18:08.935308 1 prune_controller.go:269] Nothing to prune 2025-12-13T00:18:08.936157061+00:00 stderr F E1213 00:18:08.935384 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8] 2025-12-13T00:18:08.936157061+00:00 stderr F I1213 00:18:08.935523 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, 
secrets: localhost-recovery-client-token-8 2025-12-13T00:18:08.936157061+00:00 stderr F I1213 00:18:08.935645 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8]\nNodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:29Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:18:08.939953853+00:00 stderr F I1213 00:18:08.939869 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8 2025-12-13T00:18:08.940039345+00:00 stderr F E1213 00:18:08.940003 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: 
kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8] 2025-12-13T00:18:08.944805582+00:00 stderr F I1213 00:18:08.944742 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8]\nNodeControllerDegraded: All master nodes are ready" 2025-12-13T00:18:08.962784190+00:00 stderr F E1213 00:18:08.962714 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8] 2025-12-13T00:18:08.962784190+00:00 stderr F I1213 00:18:08.962766 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8 
2025-12-13T00:18:09.003870292+00:00 stderr F E1213 00:18:09.003794 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8] 2025-12-13T00:18:09.004436037+00:00 stderr F I1213 00:18:09.004377 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8 2025-12-13T00:18:09.084897617+00:00 stderr F I1213 00:18:09.084800 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8 2025-12-13T00:18:09.085034920+00:00 stderr F E1213 00:18:09.084986 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: localhost-recovery-client-token-8] 2025-12-13T00:18:09.122761403+00:00 stderr F I1213 00:18:09.122597 1 
reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:18:09.123867372+00:00 stderr F I1213 00:18:09.123788 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-12-13T00:18:09.123867372+00:00 stderr F I1213 00:18:09.123823 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-12-13T00:18:09.217262486+00:00 stderr F I1213 00:18:09.217211 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-12-13T00:18:09.217346368+00:00 stderr F I1213 00:18:09.217331 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-12-13T00:18:09.246661847+00:00 stderr F I1213 00:18:09.246601 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8 2025-12-13T00:18:09.257458124+00:00 stderr F E1213 00:18:09.257341 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2025-12-13T00:18:09.258631646+00:00 stderr F I1213 00:18:09.258577 1 prune_controller.go:269] Nothing to prune 2025-12-13T00:18:09.259189721+00:00 stderr F I1213 00:18:09.259106 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8 
2025-12-13T00:18:09.259380476+00:00 stderr F E1213 00:18:09.259323 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8] 2025-12-13T00:18:09.260403233+00:00 stderr F I1213 00:18:09.260355 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8]\nNodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:29Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:18:09.270845050+00:00 stderr F I1213 00:18:09.270749 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-8,kube-scheduler-cert-syncer-kubeconfig-8,kube-scheduler-pod-8,scheduler-kubeconfig-8,serviceaccount-ca-8, secrets: 
localhost-recovery-client-token-8]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8]\nNodeControllerDegraded: All master nodes are ready" 2025-12-13T00:18:09.319031061+00:00 stderr F I1213 00:18:09.318907 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:18:09.523275982+00:00 stderr F I1213 00:18:09.523174 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:18:09.523886818+00:00 stderr F I1213 00:18:09.523843 1 base_controller.go:73] Caches are synced for RevisionController 2025-12-13T00:18:09.523960240+00:00 stderr F I1213 00:18:09.523921 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-12-13T00:18:09.524716451+00:00 stderr F I1213 00:18:09.524688 1 base_controller.go:73] Caches are synced for KubeControllerManagerStaticResources 2025-12-13T00:18:09.524762462+00:00 stderr F I1213 00:18:09.524747 1 base_controller.go:110] Starting #1 worker of KubeControllerManagerStaticResources controller ... 2025-12-13T00:18:09.617111218+00:00 stderr F I1213 00:18:09.616999 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-12-13T00:18:09.617111218+00:00 stderr F I1213 00:18:09.617061 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-12-13T00:18:09.617221740+00:00 stderr F I1213 00:18:09.617207 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-12-13T00:18:09.617252081+00:00 stderr F I1213 00:18:09.617242 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 
2025-12-13T00:18:10.117233395+00:00 stderr F I1213 00:18:10.117122 1 request.go:697] Waited for 1.191717526s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller 2025-12-13T00:18:11.531026385+00:00 stderr F I1213 00:18:11.530511 1 prune_controller.go:269] Nothing to prune 2025-12-13T00:18:11.533957293+00:00 stderr F I1213 00:18:11.533879 1 status_controller.go:218] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:29Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:19Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:03Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:03Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:18:11.540793445+00:00 stderr F I1213 00:18:11.540265 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"f13e36c5-b283-4235-867d-e2ae26d7fa2a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-8]\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master 
nodes are ready" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.571746 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.571708786 +0000 UTC))" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.571893 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.571882841 +0000 UTC))" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.571914 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.571902121 +0000 UTC))" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.571945 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.571917591 +0000 UTC))" 
2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.571965 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.571949762 +0000 UTC))" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.571982 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.571971623 +0000 UTC))" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.571996 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.571986423 +0000 UTC))" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.572009 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 
00:19:37.572000034 +0000 UTC))" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.572030 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.572020284 +0000 UTC))" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.572048 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.572035825 +0000 UTC))" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.572063 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.572052875 +0000 UTC))" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.572089 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC 
(now=2025-12-13 00:19:37.572078486 +0000 UTC))" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.572400 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-scheduler-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-scheduler-operator.svc,metrics.openshift-kube-scheduler-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:26 +0000 UTC to 2027-08-13 20:00:27 +0000 UTC (now=2025-12-13 00:19:37.572384285 +0000 UTC))" 2025-12-13T00:19:37.572737515+00:00 stderr F I1213 00:19:37.572640 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584795\" (2025-12-12 23:13:15 +0000 UTC to 2026-12-12 23:13:15 +0000 UTC (now=2025-12-13 00:19:37.572628502 +0000 UTC))" 2025-12-13T00:21:08.533064210+00:00 stderr F E1213 00:21:08.532535 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler-operator/openshift-cluster-kube-scheduler-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler-operator/leases/openshift-cluster-kube-scheduler-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:08.730702754+00:00 stderr F E1213 00:21:08.730636 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:08.743926381+00:00 stderr F E1213 00:21:08.743853 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:08.759013727+00:00 stderr F E1213 00:21:08.758909 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:08.783840898+00:00 stderr F E1213 00:21:08.783783 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): 
Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:08.827695041+00:00 stderr F E1213 00:21:08.827631 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:08.913966919+00:00 stderr F E1213 00:21:08.913794 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:08.928151342+00:00 stderr F E1213 00:21:08.928079 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:08.930228218+00:00 stderr F E1213 00:21:08.930151 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:08.936926519+00:00 stderr F E1213 00:21:08.936873 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:08.938697986+00:00 stderr F E1213 00:21:08.938648 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:08.948873081+00:00 stderr F E1213 00:21:08.948803 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:09.129507665+00:00 stderr F E1213 00:21:09.129420 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:09.329224895+00:00 stderr F E1213 00:21:09.329172 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:09.531262887+00:00 stderr F E1213 00:21:09.530966 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:09.731304965+00:00 stderr F E1213 00:21:09.731009 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:09.928629220+00:00 stderr F E1213 00:21:09.928547 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:10.529532655+00:00 stderr F E1213 00:21:10.529492 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:10.730506328+00:00 stderr F E1213 00:21:10.730463 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:10.928607454+00:00 stderr F E1213 00:21:10.928540 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:11.529336414+00:00 stderr F E1213 00:21:11.529293 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:11.729254919+00:00 stderr F E1213 00:21:11.729205 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:12.333765881+00:00 stderr F E1213 00:21:12.333722 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 
10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:12.530645364+00:00 stderr F E1213 00:21:12.530582 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:12.731120174+00:00 stderr F E1213 00:21:12.731054 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:12.928871661+00:00 stderr F E1213 00:21:12.928821 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:13.530262978+00:00 stderr F E1213 00:21:13.530211 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:13.730029400+00:00 stderr F E1213 00:21:13.729965 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:14.128226304+00:00 stderr F E1213 00:21:14.128161 1 base_controller.go:268] InstallerStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:14.330323668+00:00 stderr F E1213 00:21:14.330270 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:14.931226473+00:00 stderr F E1213 00:21:14.930591 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:15.331038772+00:00 stderr F E1213 00:21:15.330973 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: 
connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:15.928996808+00:00 stderr F E1213 00:21:15.928668 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:16.529441001+00:00 stderr F E1213 00:21:16.529371 1 base_controller.go:268] 
TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:16.730669761+00:00 stderr F E1213 00:21:16.730603 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:17.330074926+00:00 stderr F E1213 00:21:17.330006 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-scheduler-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:17.531281595+00:00 stderr F E1213 00:21:17.531203 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:18.929203137+00:00 stderr F E1213 00:21:18.929076 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": 
dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:19.129745969+00:00 stderr F E1213 00:21:19.129638 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeschedulers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:19.351527823+00:00 stderr F E1213 00:21:19.351455 1 base_controller.go:268] KubeControllerManagerStaticResources reconciliation failed: ["assets/kube-scheduler/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/leader-election-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/scheduler-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/policyconfigmap-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-scheduler/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:45.975219177+00:00 stderr F I1213 00:21:45.974758 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:50.549900832+00:00 stderr F I1213 00:21:50.549403 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:52.101893661+00:00 stderr F I1213 00:21:52.101832 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:53.927994652+00:00 stderr F I1213 00:21:53.927339 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:21:54.983047645+00:00 stderr F I1213 00:21:54.982959 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log
2025-08-13T20:01:08.774138269+00:00 stderr F I0813 20:01:08.766484 1 cmd.go:91] &{ true {false} installer true map[cert-configmaps:0xc0005f2640 cert-dir:0xc0005f2820 cert-secrets:0xc0005f25a0 configmaps:0xc0005f2140 namespace:0xc0002d9f40 optional-cert-configmaps:0xc0005f2780 optional-configmaps:0xc0005f2280 optional-secrets:0xc0005f21e0 pod:0xc0005f2000 pod-manifest-dir:0xc0005f23c0 resource-dir:0xc0005f2320 revision:0xc0002d9ea0 secrets:0xc0005f20a0 v:0xc0005f3220] [0xc0005f3220 0xc0002d9ea0 0xc0002d9f40 0xc0005f2000 0xc0005f2320 0xc0005f23c0 0xc0005f2140 0xc0005f2280 0xc0005f20a0 0xc0005f21e0 0xc0005f2820 0xc0005f2640 0xc0005f2780 0xc0005f25a0] [] map[cert-configmaps:0xc0005f2640 cert-dir:0xc0005f2820 cert-secrets:0xc0005f25a0 configmaps:0xc0005f2140 help:0xc0005f35e0 kubeconfig:0xc0002d9e00 log-flush-frequency:0xc0005f3180 namespace:0xc0002d9f40 optional-cert-configmaps:0xc0005f2780 optional-cert-secrets:0xc0005f26e0 optional-configmaps:0xc0005f2280 optional-secrets:0xc0005f21e0 pod:0xc0005f2000 pod-manifest-dir:0xc0005f23c0 pod-manifests-lock-file:0xc0005f2500 resource-dir:0xc0005f2320 revision:0xc0002d9ea0 secrets:0xc0005f20a0 timeout-duration:0xc0005f2460 v:0xc0005f3220 vmodule:0xc0005f32c0]
[0xc0002d9e00 0xc0002d9ea0 0xc0002d9f40 0xc0005f2000 0xc0005f20a0 0xc0005f2140 0xc0005f21e0 0xc0005f2280 0xc0005f2320 0xc0005f23c0 0xc0005f2460 0xc0005f2500 0xc0005f25a0 0xc0005f2640 0xc0005f26e0 0xc0005f2780 0xc0005f2820 0xc0005f3180 0xc0005f3220 0xc0005f32c0 0xc0005f35e0] [0xc0005f2640 0xc0005f2820 0xc0005f25a0 0xc0005f2140 0xc0005f35e0 0xc0002d9e00 0xc0005f3180 0xc0002d9f40 0xc0005f2780 0xc0005f26e0 0xc0005f2280 0xc0005f21e0 0xc0005f2000 0xc0005f23c0 0xc0005f2500 0xc0005f2320 0xc0002d9ea0 0xc0005f20a0 0xc0005f2460 0xc0005f3220 0xc0005f32c0] map[104:0xc0005f35e0 118:0xc0005f3220] [] -1 0 0xc0009ffb30 true 0x73b100 []} 2025-08-13T20:01:08.774138269+00:00 stderr F I0813 20:01:08.770143 1 cmd.go:92] (*installerpod.InstallOptions)(0xc0000dc820)({ 2025-08-13T20:01:08.774138269+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:01:08.774138269+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:01:08.774138269+00:00 stderr F Revision: (string) (len=2) "10", 2025-08-13T20:01:08.774138269+00:00 stderr F NodeName: (string) "", 2025-08-13T20:01:08.774138269+00:00 stderr F Namespace: (string) (len=33) "openshift-kube-controller-manager", 2025-08-13T20:01:08.774138269+00:00 stderr F PodConfigMapNamePrefix: (string) (len=27) "kube-controller-manager-pod", 2025-08-13T20:01:08.774138269+00:00 stderr F SecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=27) "service-account-private-key", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=12) "serving-cert" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=27) 
"kube-controller-manager-pod", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=32) "cluster-policy-controller-config", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=29) "controller-manager-kubeconfig", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=38) "kube-controller-cert-syncer-kubeconfig", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=17) "serviceaccount-ca", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=10) "service-ca", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=15) "recycler-config" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=12) "cloud-config" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F CertSecretNames: ([]string) (len=2 cap=2) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=39) "kube-controller-manager-client-cert-key", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=10) "csr-signer" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) , 2025-08-13T20:01:08.774138269+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=9) "client-ca" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.774138269+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2025-08-13T20:01:08.774138269+00:00 stderr F }, 2025-08-13T20:01:08.774138269+00:00 stderr F CertDir: (string) (len=66) 
"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", 2025-08-13T20:01:08.774138269+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:01:08.774138269+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:01:08.774138269+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:01:08.774138269+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-08-13T20:01:08.774138269+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:01:08.774138269+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:01:08.774138269+00:00 stderr F }) 2025-08-13T20:01:08.830058183+00:00 stderr F I0813 20:01:08.828941 1 cmd.go:409] Getting controller reference for node crc 2025-08-13T20:01:09.501531129+00:00 stderr F I0813 20:01:09.497182 1 cmd.go:422] Waiting for installer revisions to settle for node crc 2025-08-13T20:01:09.701953794+00:00 stderr F I0813 20:01:09.687538 1 cmd.go:502] Pod container: installer state for node crc is not terminated, waiting 2025-08-13T20:01:20.207197869+00:00 stderr F I0813 20:01:20.203486 1 cmd.go:502] Pod container: installer state for node crc is not terminated, waiting 2025-08-13T20:01:31.558920221+00:00 stderr F I0813 20:01:31.543103 1 cmd.go:514] Waiting additional period after revisions have settled for node crc 2025-08-13T20:02:01.544989289+00:00 stderr F I0813 20:02:01.544661 1 cmd.go:520] Getting installer pods for node crc 2025-08-13T20:02:03.699434270+00:00 stderr F I0813 20:02:03.699248 1 cmd.go:538] Latest installer revision for node crc is: 10 2025-08-13T20:02:03.699434270+00:00 stderr F I0813 20:02:03.699332 1 cmd.go:427] Querying kubelet version for node crc 2025-08-13T20:02:05.745352876+00:00 stderr F I0813 20:02:05.744093 1 cmd.go:440] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:02:05.745352876+00:00 stderr F I0813 20:02:05.744255 1 cmd.go:289] Creating target resource directory 
"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10" ... 2025-08-13T20:02:05.745352876+00:00 stderr F I0813 20:02:05.744967 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10" ... 2025-08-13T20:02:05.745352876+00:00 stderr F I0813 20:02:05.744985 1 cmd.go:225] Getting secrets ... 2025-08-13T20:02:11.197718897+00:00 stderr F I0813 20:02:11.192487 1 copy.go:32] Got secret openshift-kube-controller-manager/localhost-recovery-client-token-10 2025-08-13T20:02:15.107310794+00:00 stderr F I0813 20:02:15.095719 1 copy.go:32] Got secret openshift-kube-controller-manager/service-account-private-key-10 2025-08-13T20:02:17.830040326+00:00 stderr F I0813 20:02:17.813908 1 copy.go:32] Got secret openshift-kube-controller-manager/serving-cert-10 2025-08-13T20:02:17.830040326+00:00 stderr F I0813 20:02:17.818969 1 cmd.go:238] Getting config maps ... 2025-08-13T20:02:20.454584586+00:00 stderr F I0813 20:02:20.454385 1 copy.go:60] Got configMap openshift-kube-controller-manager/cluster-policy-controller-config-10 2025-08-13T20:02:21.108877962+00:00 stderr F I0813 20:02:21.107944 1 copy.go:60] Got configMap openshift-kube-controller-manager/config-10 2025-08-13T20:02:21.667184459+00:00 stderr F I0813 20:02:21.667055 1 copy.go:60] Got configMap openshift-kube-controller-manager/controller-manager-kubeconfig-10 2025-08-13T20:02:23.567610131+00:00 stderr F I0813 20:02:23.567409 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-cert-syncer-kubeconfig-10 2025-08-13T20:02:24.259894000+00:00 stderr F I0813 20:02:24.250233 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-manager-pod-10 2025-08-13T20:02:25.669894114+00:00 stderr F I0813 20:02:25.666308 1 copy.go:60] Got configMap openshift-kube-controller-manager/recycler-config-10 2025-08-13T20:02:25.939952998+00:00 stderr F I0813 20:02:25.939182 1 copy.go:60] Got configMap 
openshift-kube-controller-manager/service-ca-10 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.164324 1 copy.go:60] Got configMap openshift-kube-controller-manager/serviceaccount-ca-10 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.174281 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/cloud-config-10: configmaps "cloud-config-10" not found 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.174311 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.174970 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/ca.crt" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.175600 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177183 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177519 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177704 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177760 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key/service-account.pub" ... 
2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.177939 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key/service-account.key" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178068 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178119 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert/tls.crt" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178198 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert/tls.key" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178285 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/cluster-policy-controller-config" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178454 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/cluster-policy-controller-config/config.yaml" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178607 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/config" ... 2025-08-13T20:02:26.178887774+00:00 stderr F I0813 20:02:26.178729 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/config/config.yaml" ... 2025-08-13T20:02:26.187282033+00:00 stderr F I0813 20:02:26.187037 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/controller-manager-kubeconfig" ... 
2025-08-13T20:02:26.187282033+00:00 stderr F I0813 20:02:26.187129 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/controller-manager-kubeconfig/kubeconfig" ... 2025-08-13T20:02:26.187282033+00:00 stderr F I0813 20:02:26.187265 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-cert-syncer-kubeconfig" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.187323 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.187513 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.187632 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/forceRedeploymentReason" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.187875 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/pod.yaml" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188023 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/version" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188169 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/recycler-config" ... 
2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188321 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/recycler-config/recycler-pod.yaml" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188411 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/service-ca" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188474 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/service-ca/ca-bundle.crt" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188555 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/serviceaccount-ca" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188662 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/serviceaccount-ca/ca-bundle.crt" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188758 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs" ... 2025-08-13T20:02:26.201932781+00:00 stderr F I0813 20:02:26.188770 1 cmd.go:225] Getting secrets ... 
2025-08-13T20:02:26.880541580+00:00 stderr F I0813 20:02:26.880477 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer 2025-08-13T20:02:31.905193168+00:00 stderr F I0813 20:02:31.903646 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.920573976+00:00 stderr F I0813 20:02:31.917909 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.972818077+00:00 stderr F I0813 20:02:31.972687 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.242241153+00:00 stderr F I0813 20:02:32.242086 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.244102906+00:00 stderr F W0813 20:02:32.243835 1 recorder.go:217] Error creating event &Event{ObjectMeta:{installer-10-crc.185b6c1d263b4cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-10-crc,UID:79050916-d488-4806-b556-1b0078b31e53,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 10: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,LastTimestamp:2025-08-13 20:02:32.242212062 +0000 UTC m=+87.200792358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/events?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.244195099+00:00 stderr F F0813 20:02:32.244176 1 cmd.go:105] failed to copy: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/kube-controller-manager-client-cert-key?timeout=14s": dial tcp 10.217.4.1:443: connect: connection refused

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/0.log

2025-08-13T19:59:13.861232435+00:00 stderr F W0813 19:59:13.860364 1 deprecated.go:66]
2025-08-13T19:59:13.861232435+00:00 stderr F ==== Removed Flag Warning ======================
2025-08-13T19:59:13.861232435+00:00 stderr F
2025-08-13T19:59:13.861232435+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:59:13.861232435+00:00 stderr F
2025-08-13T19:59:13.861232435+00:00 stderr F ===============================================
2025-08-13T19:59:13.861232435+00:00 stderr F
2025-08-13T19:59:13.862362848+00:00 stderr F I0813 19:59:13.862339 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml
2025-08-13T19:59:14.022979775+00:00 stderr F I0813 19:59:14.022157 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-08-13T19:59:14.022979775+00:00 stderr F I0813 19:59:14.022382 1 kube-rbac-proxy.go:347] Reading certificate files
2025-08-13T19:59:14.025228869+00:00 stderr F I0813 19:59:14.025138 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001
2025-08-13T19:59:14.133310880+00:00 stderr F I0813 19:59:14.131998 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001
2025-08-13T20:42:43.317911807+00:00 stderr F I0813 20:42:43.317675 1 kube-rbac-proxy.go:493] received interrupt, shutting down

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy/1.log

2025-12-13T00:13:16.492831937+00:00 stderr F W1213 00:13:16.491709 1 deprecated.go:66]
2025-12-13T00:13:16.492831937+00:00 stderr F ==== Removed Flag Warning ======================
2025-12-13T00:13:16.492831937+00:00 stderr F
2025-12-13T00:13:16.492831937+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-12-13T00:13:16.492831937+00:00 stderr F
2025-12-13T00:13:16.492831937+00:00 stderr F ===============================================
2025-12-13T00:13:16.492831937+00:00 stderr F
2025-12-13T00:13:16.492831937+00:00 stderr F I1213 00:13:16.492008 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml
2025-12-13T00:13:16.492831937+00:00 stderr F I1213 00:13:16.492758 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-12-13T00:13:16.492831937+00:00 stderr F I1213 00:13:16.492804 1 kube-rbac-proxy.go:347] Reading certificate files
2025-12-13T00:13:16.495612771+00:00 stderr F I1213 00:13:16.493890 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:9001
2025-12-13T00:13:16.495612771+00:00 stderr F I1213 00:13:16.494337 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:9001

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/1.log

2025-12-13T00:13:15.642463554+00:00 stderr F I1213 00:13:15.638308 1 start.go:62] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a)
2025-12-13T00:13:15.642463554+00:00 stderr F I1213
00:13:15.639350 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:13:16.006256708+00:00 stderr F I1213 00:13:16.005713 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-config-operator/machine-config-controller... 2025-12-13T00:19:10.331095728+00:00 stderr F I1213 00:19:10.330502 1 leaderelection.go:260] successfully acquired lease openshift-machine-config-operator/machine-config-controller 2025-12-13T00:19:10.388851741+00:00 stderr F I1213 00:19:10.388785 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:19:10.388943844+00:00 stderr F I1213 00:19:10.388890 1 metrics.go:100] Registering Prometheus metrics 2025-12-13T00:19:10.389020306+00:00 stderr F I1213 00:19:10.388987 1 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-12-13T00:19:10.438985884+00:00 stderr F I1213 00:19:10.436310 1 reflector.go:351] Caches populated for *v1.ContainerRuntimeConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-12-13T00:19:10.439978971+00:00 stderr F I1213 00:19:10.439810 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:19:10.447130089+00:00 stderr F W1213 00:19:10.444313 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:19:10.447130089+00:00 stderr F E1213 00:19:10.444362 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: 
failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:19:10.447130089+00:00 stderr F I1213 00:19:10.445977 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:19:10.447130089+00:00 stderr F I1213 00:19:10.446348 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:19:10.447130089+00:00 stderr F I1213 00:19:10.446853 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:19:10.447364025+00:00 stderr F I1213 00:19:10.447269 1 machine_set_boot_image_controller.go:151] "FeatureGates changed" enabled=["AdminNetworkPolicy","AlibabaPlatform","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CloudDualStackNodeIPs","ClusterAPIInstallAWS","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallVSphere","DisableKubeletCloudCredentialProviders","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","HardwareSpeed","KMSv1","MetricsServer","NetworkDiagnosticsConfig","NetworkLiveMigration","PrivateHostedZoneAWS","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereStaticIPs"] 
disabled=["AutomatedEtcdBackup","CSIDriverSharedResource","ChunkSizeMiB","ClusterAPIInstall","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallPowerVS","DNSNameResolver","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MixedCPUsAllocation","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereMultiVCenters","ValidatingAdmissionPolicy","VolumeGroupSnapshot"] 2025-12-13T00:19:10.447405806+00:00 stderr F I1213 00:19:10.447374 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:19:10.447563440+00:00 stderr F I1213 00:19:10.447446 1 start.go:107] FeatureGates initialized: enabled=[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration 
VSphereStaticIPs] disabled=[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-12-13T00:19:10.448504706+00:00 stderr F I1213 00:19:10.447675 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-12-13T00:19:10.448504706+00:00 stderr F I1213 00:19:10.447871 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:19:10.448504706+00:00 stderr F I1213 00:19:10.448292 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-12-13T00:19:10.448504706+00:00 stderr F I1213 00:19:10.448383 1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:19:10.450992105+00:00 stderr F I1213 00:19:10.449989 1 
reflector.go:351] Caches populated for *v1.ControllerConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-12-13T00:19:10.451327974+00:00 stderr F I1213 00:19:10.451281 1 reflector.go:351] Caches populated for *v1.KubeletConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-12-13T00:19:10.452222579+00:00 stderr F I1213 00:19:10.452164 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:10.452804625+00:00 stderr F I1213 00:19:10.452743 1 reflector.go:351] Caches populated for *v1.Node from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:19:10.453504864+00:00 stderr F I1213 00:19:10.453418 1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-12-13T00:19:10.453811812+00:00 stderr F I1213 00:19:10.453752 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:19:10.456201358+00:00 stderr F I1213 00:19:10.456150 1 reflector.go:351] Caches populated for *v1.MachineConfigPool from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-12-13T00:19:10.456489366+00:00 stderr F E1213 00:19:10.456464 1 node_controller.go:505] getting scheduler config failed: cluster scheduler couldn't be found 2025-12-13T00:19:10.457699430+00:00 stderr F I1213 00:19:10.457288 1 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", 
"AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-12-13T00:19:10.469590318+00:00 stderr F I1213 00:19:10.469515 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:10.487577643+00:00 
stderr F I1213 00:19:10.487498 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:10.487815110+00:00 stderr F I1213 00:19:10.487767 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change 2025-12-13T00:19:10.502641249+00:00 stderr F I1213 00:19:10.502583 1 reflector.go:351] Caches populated for *v1.MachineConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-12-13T00:19:10.527367250+00:00 stderr F I1213 00:19:10.527302 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:19:10.551115036+00:00 stderr F I1213 00:19:10.551048 1 container_runtime_config_controller.go:242] Starting MachineConfigController-ContainerRuntimeConfigController 2025-12-13T00:19:10.552350459+00:00 stderr F I1213 00:19:10.552312 1 node_controller.go:247] Starting MachineConfigController-NodeController 2025-12-13T00:19:10.552609556+00:00 stderr F I1213 00:19:10.552593 1 render_controller.go:127] Starting MachineConfigController-RenderController 2025-12-13T00:19:10.556112044+00:00 stderr F I1213 00:19:10.555893 1 drain_controller.go:168] Starting MachineConfigController-DrainController 2025-12-13T00:19:10.556749281+00:00 stderr F I1213 00:19:10.556708 1 kubelet_config_controller.go:193] Starting MachineConfigController-KubeletConfigController 2025-12-13T00:19:10.557071610+00:00 stderr F I1213 00:19:10.557024 1 template_controller.go:227] Starting MachineConfigController-TemplateController 2025-12-13T00:19:10.557203583+00:00 stderr F I1213 00:19:10.557153 1 machine_set_boot_image_controller.go:181] Starting MachineConfigController-MachineSetBootImageController 2025-12-13T00:19:10.777684743+00:00 stderr F I1213 00:19:10.773184 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master 2025-12-13T00:19:10.792511812+00:00 stderr F I1213 00:19:10.792445 1 
kubelet_config_nodes.go:156] Applied Node configuration 97-master-generated-kubelet on MachineConfigPool master 2025-12-13T00:19:10.962686665+00:00 stderr F I1213 00:19:10.962621 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker 2025-12-13T00:19:11.165707812+00:00 stderr F I1213 00:19:11.165608 1 kubelet_config_nodes.go:156] Applied Node configuration 97-worker-generated-kubelet on MachineConfigPool worker 2025-12-13T00:19:11.964185150+00:00 stderr F I1213 00:19:11.964112 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master 2025-12-13T00:19:11.981971221+00:00 stderr F W1213 00:19:11.981897 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:19:11.982001392+00:00 stderr F E1213 00:19:11.981977 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:19:12.563173927+00:00 stderr F I1213 00:19:12.563089 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker 2025-12-13T00:19:13.935593891+00:00 stderr F W1213 00:19:13.935546 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:19:13.935708845+00:00 stderr F E1213 00:19:13.935695 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch 
*v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:19:15.459036190+00:00 stderr F E1213 00:19:15.458636 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.460070478+00:00 stderr F I1213 00:19:15.460027 1 node_controller.go:988] Pool master is paused and will not update. 2025-12-13T00:19:15.461463697+00:00 stderr F I1213 00:19:15.461407 1 status.go:266] Degraded Machine: crc and Degraded Reason: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:19:15.461463697+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.466797544+00:00 stderr F I1213 00:19:15.466716 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.469077776+00:00 stderr F E1213 00:19:15.468144 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:15.472843411+00:00 stderr F E1213 00:19:15.472789 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered 
MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.476197533+00:00 stderr F I1213 00:19:15.476133 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:15.479338689+00:00 stderr F I1213 00:19:15.479268 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.482397794+00:00 stderr F E1213 00:19:15.482358 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:15.490460886+00:00 stderr F E1213 00:19:15.490381 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.490955770+00:00 stderr F I1213 00:19:15.490898 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered 
MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:15.498519608+00:00 stderr F I1213 00:19:15.498451 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.501681906+00:00 stderr F E1213 00:19:15.501408 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:15.508965527+00:00 stderr F I1213 00:19:15.508861 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:15.519182758+00:00 stderr F E1213 00:19:15.518823 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.526287014+00:00 stderr F I1213 00:19:15.526216 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered 
MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.529671597+00:00 stderr F E1213 00:19:15.529618 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:15.535737504+00:00 stderr F I1213 00:19:15.535659 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:15.567182211+00:00 stderr F E1213 00:19:15.567102 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.574179855+00:00 stderr F I1213 00:19:15.574099 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.576602412+00:00 stderr F E1213 00:19:15.576478 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: 
could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:15.584735056+00:00 stderr F I1213 00:19:15.584578 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:15.654959212+00:00 stderr F E1213 00:19:15.654849 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.665715509+00:00 stderr F E1213 00:19:15.665658 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:15.667306022+00:00 stderr F I1213 00:19:15.667269 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.827655264+00:00 stderr F E1213 00:19:15.827592 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get 
current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:15.870404153+00:00 stderr F I1213 00:19:15.870323 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:16.030912739+00:00 stderr F E1213 00:19:16.030830 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:16.066912232+00:00 stderr F I1213 00:19:16.066753 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:16.268879331+00:00 stderr F I1213 00:19:16.268792 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:16.387406379+00:00 stderr F E1213 00:19:16.387325 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current 
MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:16.467738734+00:00 stderr F I1213 00:19:16.467675 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:16.589191873+00:00 stderr F E1213 00:19:16.589107 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:16.669821737+00:00 stderr F I1213 00:19:16.669737 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:17.108315738+00:00 stderr F E1213 00:19:17.108257 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:17.117828700+00:00 stderr F I1213 00:19:17.117587 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current 
MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:17.310452312+00:00 stderr F E1213 00:19:17.310409 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:17.318077012+00:00 stderr F I1213 00:19:17.318033 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:18.399005768+00:00 stderr F E1213 00:19:18.398890 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:18.405007484+00:00 stderr F I1213 00:19:18.404728 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:18.598703356+00:00 stderr F E1213 00:19:18.598613 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig 
rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:18.602804178+00:00 stderr F W1213 00:19:18.602774 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:19:18.602878340+00:00 stderr F E1213 00:19:18.602859 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:19:18.606032647+00:00 stderr F I1213 00:19:18.606002 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:20.474600292+00:00 stderr F I1213 00:19:20.474314 1 status.go:266] Degraded Machine: crc and Degraded Reason: missing MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-12-13T00:19:20.474600292+00:00 stderr F machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:20.966335592+00:00 stderr F E1213 00:19:20.966259 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io 
"rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:20.973884540+00:00 stderr F I1213 00:19:20.973816 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:21.166647745+00:00 stderr F E1213 00:19:21.166561 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:21.174826901+00:00 stderr F I1213 00:19:21.174770 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:26.094560535+00:00 stderr F E1213 00:19:26.094487 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:26.102953756+00:00 stderr F I1213 00:19:26.102836 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io 
"rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:26.295405093+00:00 stderr F E1213 00:19:26.295342 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:26.302018185+00:00 stderr F I1213 00:19:26.301973 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:30.165236875+00:00 stderr F W1213 00:19:30.165166 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:19:30.165236875+00:00 stderr F E1213 00:19:30.165201 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:19:36.343238252+00:00 stderr F E1213 00:19:36.343184 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:36.360068926+00:00 stderr F I1213 
00:19:36.359986 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:36.542432794+00:00 stderr F E1213 00:19:36.542373 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:36.548011518+00:00 stderr F I1213 00:19:36.547967 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:53.002782151+00:00 stderr F W1213 00:19:53.002669 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:19:53.002782151+00:00 stderr F E1213 00:19:53.002732 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:19:56.840478684+00:00 stderr F E1213 00:19:56.840373 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered 
MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:56.847993191+00:00 stderr F I1213 00:19:56.847880 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:19:57.029004582+00:00 stderr F E1213 00:19:57.028947 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:19:57.036065426+00:00 stderr F I1213 00:19:57.036008 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:20:36.802852829+00:00 stderr F W1213 00:20:36.802019 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:36.802852829+00:00 stderr F E1213 00:20:36.802280 1 reflector.go:147] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:37.809304937+00:00 stderr F E1213 00:20:37.809240 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:20:37.810654674+00:00 stderr F E1213 00:20:37.810578 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:37.810654674+00:00 stderr F I1213 00:20:37.810603 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-12-13T00:20:37.997454304+00:00 stderr F E1213 00:20:37.997349 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:20:37.998719549+00:00 stderr F E1213 00:20:37.998631 1 render_controller.go:465] Error updating MachineConfigPool worker: Put 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:37.998719549+00:00 stderr F I1213 00:20:37.998676 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found 2025-12-13T00:21:10.349288571+00:00 stderr F E1213 00:21:10.348343 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:23.525579890+00:00 stderr F W1213 00:21:23.525491 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:21:23.525579890+00:00 stderr F E1213 00:21:23.525530 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-12-13T00:21:47.825222827+00:00 stderr F I1213 00:21:47.824726 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:21:49.061588773+00:00 stderr F I1213 00:21:49.061458 1 reflector.go:351] Caches populated for *v1.FeatureGate from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:21:55.701609948+00:00 stderr F I1213 00:21:55.701315 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller/0.log
2025-08-13T19:59:09.332601413+00:00 stderr F I0813 19:59:09.332217 1 start.go:62] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:59:09.334155298+00:00 stderr F I0813 19:59:09.333384 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T19:59:09.904644890+00:00 stderr F I0813 19:59:09.904226 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-config-operator/machine-config-controller...
2025-08-13T19:59:09.976994272+00:00 stderr F I0813 19:59:09.954145 1 leaderelection.go:260] successfully acquired lease openshift-machine-config-operator/machine-config-controller 2025-08-13T19:59:10.501277606+00:00 stderr F I0813 19:59:10.501175 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:10.506563697+00:00 stderr F I0813 19:59:10.502810 1 metrics.go:100] Registering Prometheus metrics 2025-08-13T19:59:10.506684680+00:00 stderr F I0813 19:59:10.506662 1 metrics.go:107] Starting metrics listener on 127.0.0.1:8797 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.495076 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.542454 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.563449 1 reflector.go:351] Caches populated for *v1.ContainerRuntimeConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.565377 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.565897 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F E0813 19:59:11.578688 1 node_controller.go:505] getting scheduler config failed: cluster scheduler couldn't be found 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.579413 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 
2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.607228 1 reflector.go:351] Caches populated for *v1.KubeletConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.625149 1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.626272 1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:11.711210236+00:00 stderr F I0813 19:59:11.626618 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.205253159+00:00 stderr F I0813 19:59:12.202935 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.206409072+00:00 stderr F I0813 19:59:12.205694 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:12.359926668+00:00 stderr F W0813 19:59:12.353915 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T19:59:12.359926668+00:00 stderr F E0813 19:59:12.354497 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 
2025-08-13T19:59:12.360216916+00:00 stderr F I0813 19:59:12.360107 1 event.go:364] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-machine-config-operator", Name:"crc", UID:"c83c88d3-f34d-4083-a59d-1c50f90f89b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T19:59:12.361595616+00:00 stderr F I0813 19:59:12.361563 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T19:59:12.421095472+00:00 stderr F I0813 19:59:12.305966 1 machine_set_boot_image_controller.go:151] "FeatureGates changed" enabled=["AdminNetworkPolicy","AlibabaPlatform","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CloudDualStackNodeIPs","ClusterAPIInstallAWS","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallVSphere","DisableKubeletCloudCredentialProviders","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","HardwareSpeed","KMSv1","MetricsServer","NetworkDiagnosticsConfig","NetworkLiveMigration","PrivateHostedZoneAWS","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereStaticIPs"] disabled=["AutomatedEtcdBackup","CSIDriverSharedResource","ChunkSizeMiB","ClusterAPIInstall","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallPowerVS","DNSNameResolver","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MixedCPUsAllocation","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereMultiVCenters","ValidatingAdmissionPolicy","VolumeGroupSnapshot"]
2025-08-13T19:59:12.767226169+00:00 stderr F I0813 19:59:12.765698 1 start.go:107] FeatureGates initialized: enabled=[AdminNetworkPolicy AlibabaPlatform AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CloudDualStackNodeIPs ClusterAPIInstallAWS ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallVSphere DisableKubeletCloudCredentialProviders ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP HardwareSpeed KMSv1 MetricsServer NetworkDiagnosticsConfig NetworkLiveMigration PrivateHostedZoneAWS VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereStaticIPs] disabled=[AutomatedEtcdBackup CSIDriverSharedResource ChunkSizeMiB ClusterAPIInstall ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallPowerVS DNSNameResolver DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MixedCPUsAllocation NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereMultiVCenters ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T19:59:12.767226169+00:00 stderr F I0813 19:59:12.765975 1 drain_controller.go:168] Starting MachineConfigController-DrainController
2025-08-13T19:59:12.768964998+00:00 stderr F I0813 19:59:12.767517 1 reflector.go:351] Caches populated for *v1.ControllerConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125
2025-08-13T19:59:12.776213485+00:00 stderr F I0813 19:59:12.776110 1 reflector.go:351] Caches populated for *v1.Node from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T19:59:12.779706024+00:00 stderr F I0813 19:59:12.779669 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T19:59:12.782413791+00:00 stderr F I0813 19:59:12.781746 1 reflector.go:351] Caches populated for *v1.MachineConfigPool from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125
2025-08-13T19:59:12.903315078+00:00 stderr F I0813 19:59:12.897736 1 kubelet_config_controller.go:193] Starting MachineConfigController-KubeletConfigController
2025-08-13T19:59:12.903315078+00:00 stderr F I0813 19:59:12.900105 1 container_runtime_config_controller.go:242] Starting MachineConfigController-ContainerRuntimeConfigController
2025-08-13T19:59:12.965361687+00:00 stderr F I0813 19:59:12.965178 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:12.973180490+00:00 stderr F I0813 19:59:12.973073 1 machine_set_boot_image_controller.go:181] Starting MachineConfigController-MachineSetBootImageController
2025-08-13T19:59:13.861069870+00:00 stderr F W0813 19:59:13.858186 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:13.861069870+00:00 stderr F E0813 19:59:13.858311 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:14.076055558+00:00 stderr F I0813 19:59:14.074825 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:14.076055558+00:00 stderr F I0813 19:59:14.075481 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change
2025-08-13T19:59:14.105357153+00:00 stderr F I0813 19:59:14.083405 1 reflector.go:351] Caches populated for *v1.MachineConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125
2025-08-13T19:59:14.138707644+00:00 stderr F I0813 19:59:14.137033 1 template_controller.go:227] Starting MachineConfigController-TemplateController
2025-08-13T19:59:14.160541337+00:00 stderr F I0813 19:59:14.160424 1 node_controller.go:247] Starting MachineConfigController-NodeController
2025-08-13T19:59:14.160688981+00:00 stderr F I0813 19:59:14.160594 1 render_controller.go:127] Starting MachineConfigController-RenderController
2025-08-13T19:59:16.807137172+00:00 stderr F W0813 19:59:16.784255 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:16.807137172+00:00 stderr F E0813 19:59:16.784949 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:18.197085633+00:00 stderr F I0813 19:59:18.195574 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:20.562090150+00:00 stderr F I0813 19:59:20.538752 1 trace.go:236] Trace[1112042174]: "DeltaFIFO Pop Process" ID:openshift-ingress-canary/ingress-canary-2vhcn,Depth:47,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:20.437) (total time: 100ms):
2025-08-13T19:59:20.562090150+00:00 stderr F Trace[1112042174]: [100.102354ms] [100.102354ms] END
2025-08-13T19:59:21.122969158+00:00 stderr F W0813 19:59:21.122736 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:21.123692448+00:00 stderr F E0813 19:59:21.123566 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:26.419192658+00:00 stderr F I0813 19:59:26.399961 1 reconcile.go:175] user data to be verified before ssh update: { [] core [ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBACrKovYP/jwO+a53sdlihFLUWOCBZJORwFiQNgBoHgse9pb7UsuVllby/9PMvDGujPs69Sme2dqr+ruV4PQyRs6BAD87myXikco4bl4QHlNCxCiMl4UGh+qGe3xP1pvMotXd+Cl6yzdvgyhr9MuMLVjrj2IZWM5hjJC3ZAAd98IO0E4xQ==] }
2025-08-13T19:59:26.424919171+00:00 stderr F I0813 19:59:26.422230 1 reconcile.go:151] SSH Keys reconcilable
2025-08-13T19:59:27.283901637+00:00 stderr F I0813 19:59:27.271959 1 render_controller.go:530] Generated machineconfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a from 9 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 } {MachineConfig 99-node-sizing-for-crc machineconfiguration.openshift.io/v1 } {MachineConfig 99-openshift-machineconfig-master-dummy-networks machineconfiguration.openshift.io/v1 }]
2025-08-13T19:59:27.601928073+00:00 stderr F I0813 19:59:27.599531 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23974", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-ef556ead28ddfad01c34ac56c7adfb5a successfully generated (release version: 4.16.0, controller version: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a)
2025-08-13T19:59:27.752237497+00:00 stderr F E0813 19:59:27.750714 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:28.048144682+00:00 stderr F E0813 19:59:28.047461 1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:28.055639386+00:00 stderr F I0813 19:59:28.050728 1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:28.466367593+00:00 stderr F I0813 19:59:28.465479 1 kubelet_config_nodes.go:156] Applied Node configuration 97-master-generated-kubelet on MachineConfigPool master
2025-08-13T19:59:28.846212220+00:00 stderr F I0813 19:59:28.843613 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master
2025-08-13T19:59:31.536257221+00:00 stderr F W0813 19:59:31.497234 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:31.559062731+00:00 stderr F E0813 19:59:31.558862 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:33.466238635+00:00 stderr F I0813 19:59:33.448482 1 reconcile.go:175] user data to be verified before ssh update: { [] core [ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBACrKovYP/jwO+a53sdlihFLUWOCBZJORwFiQNgBoHgse9pb7UsuVllby/9PMvDGujPs69Sme2dqr+ruV4PQyRs6BAD87myXikco4bl4QHlNCxCiMl4UGh+qGe3xP1pvMotXd+Cl6yzdvgyhr9MuMLVjrj2IZWM5hjJC3ZAAd98IO0E4xQ==] }
2025-08-13T19:59:33.466238635+00:00 stderr F I0813 19:59:33.449191 1 reconcile.go:151] SSH Keys reconcilable
2025-08-13T19:59:33.714022388+00:00 stderr F I0813 19:59:33.549553 1 render_controller.go:556] Pool master: now targeting: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T19:59:33.720678308+00:00 stderr F I0813 19:59:33.720638 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:33.727381299+00:00 stderr F I0813 19:59:33.727341 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:33.889329125+00:00 stderr F I0813 19:59:33.888263 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:33.910104118+00:00 stderr F I0813 19:59:33.908677 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:33.952057924+00:00 stderr F I0813 19:59:33.950705 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:34.036304985+00:00 stderr F I0813 19:59:34.032993 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:34.193674731+00:00 stderr F I0813 19:59:34.193315 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:34.553558660+00:00 stderr F I0813 19:59:34.542021 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:35.184505775+00:00 stderr F I0813 19:59:35.184412 1 render_controller.go:380] Error syncing machineconfigpool master: status for ControllerConfig machine-config-controller is being reported for 6, expecting it for 7
2025-08-13T19:59:36.486676493+00:00 stderr F I0813 19:59:36.474371 1 kubelet_config_nodes.go:156] Applied Node configuration 97-worker-generated-kubelet on MachineConfigPool worker
2025-08-13T19:59:36.732022357+00:00 stderr F I0813 19:59:36.731222 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:37.038352279+00:00 stderr F I0813 19:59:37.038159 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker
2025-08-13T19:59:39.230871006+00:00 stderr F I0813 19:59:39.230169 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master
2025-08-13T19:59:39.312551285+00:00 stderr F I0813 19:59:39.292203 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:39.312551285+00:00 stderr F I0813 19:59:39.296699 1 node_controller.go:566] Pool master: 1 candidate nodes in 0 zones for update, capacity: 1
2025-08-13T19:59:39.312551285+00:00 stderr F I0813 19:59:39.296717 1 node_controller.go:566] Pool master: filtered to 1 candidate nodes for update, capacity: 1
2025-08-13T19:59:39.522953082+00:00 stderr F I0813 19:59:39.521362 1 node_controller.go:576] Pool master: node crc: changed taints
2025-08-13T19:59:39.774453491+00:00 stderr F I0813 19:59:39.774353 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28168", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node crc to %!s(*string=0xc0007ca988)
2025-08-13T19:59:39.829324796+00:00 stderr F I0813 19:59:39.827563 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:39.929983155+00:00 stderr F I0813 19:59:39.929679 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:40.030141060+00:00 stderr F I0813 19:59:40.025318 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:40.059067385+00:00 stderr F I0813 19:59:40.057122 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:40.108063251+00:00 stderr F I0813 19:59:40.105291 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:40.289229585+00:00 stderr F I0813 19:59:40.288704 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:40.317947454+00:00 stderr F I0813 19:59:40.317581 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T19:59:40.320859347+00:00 stderr F I0813 19:59:40.318019 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28255", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T19:59:40.469715670+00:00 stderr F I0813 19:59:40.467188 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:40.837558036+00:00 stderr F I0813 19:59:40.835718 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
2025-08-13T19:59:43.090480505+00:00 stderr F I0813 19:59:43.087270 1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker
2025-08-13T19:59:43.295528870+00:00 stderr F I0813 19:59:43.294255 1 render_controller.go:530] Generated machineconfig rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff from 7 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 }]
2025-08-13T19:59:43.296176938+00:00 stderr F I0813 19:59:43.296135 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"87ae8215-5559-4b8a-a6cc-81c3c83b8a6e", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23055", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff successfully generated (release version: 4.16.0, controller version: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a)
2025-08-13T19:59:43.792422874+00:00 stderr F I0813 19:59:43.733757 1 render_controller.go:556] Pool worker: now targeting: rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff
2025-08-13T19:59:46.371377217+00:00 stderr F I0813 19:59:46.370946 1 node_controller.go:1210] No nodes available for updates
2025-08-13T19:59:46.804124423+00:00 stderr F I0813 19:59:46.801706 1 node_controller.go:576] Pool master: node crc: changed taints
2025-08-13T19:59:47.150328001+00:00 stderr F I0813 19:59:47.150101 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/state = Working
2025-08-13T19:59:47.151249728+00:00 stderr F I0813 19:59:47.150912 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28384", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/state=Working
2025-08-13T19:59:47.225183345+00:00 stderr F I0813 19:59:47.224970 1 render_controller.go:530] Generated machineconfig rendered-master-11405dc064e9fc83a779a06d1cd665b3 from 9 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 } {MachineConfig 99-node-sizing-for-crc machineconfiguration.openshift.io/v1 } {MachineConfig 99-openshift-machineconfig-master-dummy-networks machineconfiguration.openshift.io/v1 }]
2025-08-13T19:59:47.252541415+00:00 stderr F I0813 19:59:47.252418 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28255", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-11405dc064e9fc83a779a06d1cd665b3 successfully generated (release version: 4.16.0, controller version: 9e4a1f5f4c7ef58082021ca40556c67f99062d0a)
2025-08-13T19:59:47.534896974+00:00 stderr F E0813 19:59:47.531644 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:47.930438080+00:00 stderr F E0813 19:59:47.923294 1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:47.930438080+00:00 stderr F I0813 19:59:47.923353 1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:49.297673365+00:00 stderr F I0813 19:59:49.248434 1 status.go:249] Pool worker: All nodes are updated with MachineConfig rendered-worker-cd34d74bc72d90aa53a9e6409d8a13ff
2025-08-13T19:59:50.209966579+00:00 stderr F E0813 19:59:50.197069 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:50.613584565+00:00 stderr F E0813 19:59:50.613296 1 render_controller.go:465] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:50.613675228+00:00 stderr F I0813 19:59:50.613659 1 render_controller.go:380] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:51.892659457+00:00 stderr F I0813 19:59:51.890142 1 node_controller.go:1210] No nodes available for updates
2025-08-13T19:59:52.768347259+00:00 stderr F W0813 19:59:52.767605 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:52.768347259+00:00 stderr F E0813 19:59:52.768293 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T19:59:53.254425664+00:00 stderr F I0813 19:59:53.253448 1 render_controller.go:556] Pool master: now targeting: rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T19:59:58.604698936+00:00 stderr F I0813 19:59:58.547385 1 node_controller.go:576] Pool master: node crc: changed taints
2025-08-13T19:59:58.641913876+00:00 stderr F I0813 19:59:58.565737 1 node_controller.go:1210] No nodes available for updates
2025-08-13T19:59:59.211441021+00:00 stderr F E0813 19:59:59.210702 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:59.249989550+00:00 stderr F E0813 19:59:59.249943 1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:59:59.250047982+00:00 stderr F I0813 19:59:59.250034 1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:00:03.749065482+00:00 stderr F I0813 20:00:03.745946 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:00:15.506909632+00:00 stderr F I0813 20:00:15.505103 1 drain_controller.go:182] node crc: uncordoning
2025-08-13T20:00:15.506909632+00:00 stderr F I0813 20:00:15.505950 1 drain_controller.go:182] node crc: initiating uncordon (currently schedulable: true)
2025-08-13T20:00:16.676519712+00:00 stderr F I0813 20:00:16.673529 1 drain_controller.go:182] node crc: uncordon succeeded (currently schedulable: true)
2025-08-13T20:00:16.676519712+00:00 stderr F I0813 20:00:16.674017 1 drain_controller.go:182] node crc: operation successful; applying completion annotation
2025-08-13T20:00:22.034286821+00:00 stderr F I0813 20:00:22.033417 1 node_controller.go:576] Pool master: node crc: Completed update to rendered-master-ef556ead28ddfad01c34ac56c7adfb5a
2025-08-13T20:00:23.939924262+00:00 stderr F W0813 20:00:23.936455 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:00:23.939924262+00:00 stderr F E0813 20:00:23.937099 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:00:27.063247291+00:00 stderr F I0813 20:00:27.050625 1 node_controller.go:566] Pool master: 1 candidate nodes in 0 zones for update, capacity: 1
2025-08-13T20:00:27.063247291+00:00 stderr F I0813 20:00:27.061253 1 node_controller.go:566] Pool master: filtered to 1 candidate nodes for update, capacity: 1
2025-08-13T20:00:27.236879842+00:00 stderr F I0813 20:00:27.236013 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28780", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node crc to %!s(*string=0xc000ee6f08)
2025-08-13T20:00:27.281201456+00:00 stderr F I0813 20:00:27.280951 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:00:27.281765232+00:00 stderr F I0813 20:00:27.281728 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28780", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/desiredConfig=rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:00:29.407765243+00:00 stderr F I0813 20:00:29.407432 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/state = Working
2025-08-13T20:00:29.423561903+00:00 stderr F I0813 20:00:29.423510 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"28780", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/state=Working
2025-08-13T20:00:33.181499996+00:00 stderr F I0813 20:00:33.165414 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:00:33.516675434+00:00 stderr F I0813 20:00:33.511690 1 node_controller.go:576] Pool master: node crc: changed taints
2025-08-13T20:00:38.620300518+00:00 stderr F I0813 20:00:38.619649 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:00:54.455970291+00:00 stderr F W0813 20:00:54.454636 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:00:54.462198839+00:00 stderr F E0813 20:00:54.455773 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:01:11.763012024+00:00 stderr F I0813 20:01:11.761647 1 drain_controller.go:182] node crc: uncordoning
2025-08-13T20:01:11.763012024+00:00 stderr F I0813 20:01:11.762824 1 drain_controller.go:182] node crc: initiating uncordon (currently schedulable: true)
2025-08-13T20:01:14.404614056+00:00 stderr F I0813 20:01:14.404272 1 drain_controller.go:182] node crc: uncordon succeeded (currently schedulable: true)
2025-08-13T20:01:14.404702519+00:00 stderr F I0813 20:01:14.404687 1 drain_controller.go:182] node crc: operation successful; applying completion annotation
2025-08-13T20:01:38.032189301+00:00 stderr F W0813 20:01:38.031293 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:01:38.032189301+00:00 stderr F E0813 20:01:38.032036 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io)
2025-08-13T20:01:51.239914643+00:00 stderr F I0813 20:01:51.236916 1 node_controller.go:576] Pool master: node crc: Completed update to rendered-master-11405dc064e9fc83a779a06d1cd665b3
2025-08-13T20:01:56.264214234+00:00 stderr F I0813 20:01:56.261042 1 
status.go:249] Pool master: All nodes are updated with MachineConfig rendered-master-11405dc064e9fc83a779a06d1cd665b3 2025-08-13T20:02:19.147200121+00:00 stderr F W0813 20:02:19.146314 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:02:19.147482209+00:00 stderr F E0813 20:02:19.147454 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:03:13.621956248+00:00 stderr F W0813 20:03:13.621082 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:13.621956248+00:00 stderr F E0813 20:03:13.621837 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:21.110874138+00:00 stderr F E0813 20:03:21.110202 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-08-13T20:04:02.755041202+00:00 stderr F W0813 20:04:02.748980 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.755041202+00:00 stderr F E0813 20:04:02.749582 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:21.134592986+00:00 stderr F E0813 20:04:21.133978 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:45.331437244+00:00 stderr F W0813 20:04:45.323019 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:45.331437244+00:00 stderr F E0813 20:04:45.330911 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:21.114757787+00:00 stderr F E0813 20:05:21.113708 1 leaderelection.go:332] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:39.906343176+00:00 stderr F W0813 20:05:39.905388 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:05:39.906343176+00:00 stderr F E0813 20:05:39.906129 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:06:06.591223742+00:00 stderr F I0813 20:06:06.589573 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:14.707209002+00:00 stderr F I0813 20:06:14.706548 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:14.866933636+00:00 stderr F I0813 20:06:14.866097 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:15.643188785+00:00 stderr F W0813 20:06:15.642535 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:06:15.643350520+00:00 stderr F E0813 20:06:15.643214 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:06:18.021051777+00:00 stderr F I0813 20:06:18.019988 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.164475020+00:00 stderr F I0813 20:06:19.164291 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.611652556+00:00 stderr F I0813 20:06:19.611552 1 reflector.go:351] Caches populated for *v1.ContainerRuntimeConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:06:23.554076800+00:00 stderr F I0813 20:06:23.550996 1 reflector.go:351] Caches populated for *v1.KubeletConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:06:25.825147654+00:00 stderr F I0813 20:06:25.824990 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:27.136463934+00:00 stderr F I0813 20:06:27.136396 1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:29.797014992+00:00 stderr F I0813 20:06:29.796187 1 reflector.go:351] Caches populated for *v1.ClusterVersion from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:30.531452663+00:00 stderr F I0813 20:06:30.531307 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:30.535016035+00:00 stderr F I0813 20:06:30.534117 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change 2025-08-13T20:06:35.440386944+00:00 stderr F I0813 20:06:35.439627 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:36.198413388+00:00 stderr F I0813 20:06:36.197860 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:36.616589747+00:00 stderr F I0813 20:06:36.615422 1 reflector.go:351] Caches populated for *v1.MachineConfigPool from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:06:39.422978139+00:00 stderr F I0813 20:06:39.422199 1 reflector.go:351] Caches populated for *v1.Node from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:42.778065952+00:00 stderr F I0813 20:06:42.776328 1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:44.194632006+00:00 stderr F I0813 20:06:44.194259 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:06:44.650823986+00:00 stderr F I0813 20:06:44.650712 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:44.911028646+00:00 stderr F I0813 20:06:44.910966 1 
reflector.go:351] Caches populated for *v1.ControllerConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:06:50.084892825+00:00 stderr F I0813 20:06:50.084114 1 reflector.go:351] Caches populated for *v1.MachineConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:06:54.079645208+00:00 stderr F I0813 20:06:54.078126 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:10.877700932+00:00 stderr F W0813 20:07:10.876997 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:07:10.877700932+00:00 stderr F E0813 20:07:10.877546 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:07:42.503931311+00:00 stderr F W0813 20:07:42.500600 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:07:42.503931311+00:00 stderr F E0813 20:07:42.501278 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:08:33.357863726+00:00 stderr F W0813 
20:08:33.356766 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:33.358155465+00:00 stderr F E0813 20:08:33.358117 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1alpha1/machineosbuilds?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:29.553640806+00:00 stderr F W0813 20:09:29.552854 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:09:29.553640806+00:00 stderr F E0813 20:09:29.553366 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:09:33.937122765+00:00 stderr F I0813 20:09:33.935964 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:34.571283297+00:00 stderr F I0813 20:09:34.568451 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:35.089614588+00:00 stderr F I0813 20:09:35.089312 1 reflector.go:351] Caches populated for *v1.MachineConfigPool from 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:09:35.660374652+00:00 stderr F I0813 20:09:35.660267 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:36.614850148+00:00 stderr F I0813 20:09:36.612327 1 reflector.go:351] Caches populated for *v1.ImageTagMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:37.349693196+00:00 stderr F I0813 20:09:37.349540 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:37.349889271+00:00 stderr F I0813 20:09:37.349752 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change 2025-08-13T20:09:38.306224350+00:00 stderr F I0813 20:09:38.306099 1 reflector.go:351] Caches populated for *v1.ControllerConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:09:40.359632383+00:00 stderr F E0813 20:09:40.356692 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.368343192+00:00 stderr F E0813 20:09:40.367830 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.372648726+00:00 stderr F E0813 20:09:40.371734 1 render_controller.go:465] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 
2025-08-13T20:09:40.372648726+00:00 stderr F I0813 20:09:40.372136 1 render_controller.go:380] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.380648125+00:00 stderr F E0813 20:09:40.380522 1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:40.380648125+00:00 stderr F I0813 20:09:40.380577 1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:45.551874839+00:00 stderr F I0813 20:09:45.549011 1 reflector.go:351] Caches populated for *v1.MachineConfiguration from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:46.279887832+00:00 stderr F I0813 20:09:46.277766 1 reflector.go:351] Caches populated for *v1.MachineConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:09:46.575994511+00:00 stderr F I0813 20:09:46.575528 1 reflector.go:351] Caches populated for *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:47.411594738+00:00 stderr F I0813 20:09:47.410700 1 reflector.go:351] Caches populated for *v1.ImageDigestMirrorSet from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:47.414546083+00:00 stderr F I0813 20:09:47.412227 1 reflector.go:351] Caches populated for *v1.Node from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:50.334154551+00:00 stderr F I0813 20:09:50.333387 1 reflector.go:351] Caches populated for *v1.KubeletConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:09:52.122596887+00:00 stderr F I0813 20:09:52.122510 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:03.968061577+00:00 stderr F W0813 20:10:03.967024 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:10:03.968061577+00:00 stderr F E0813 20:10:03.967728 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:10:07.599688058+00:00 stderr F I0813 20:10:07.599277 1 reflector.go:351] Caches populated for *v1alpha1.ImageContentSourcePolicy from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:10:08.776015395+00:00 stderr F I0813 20:10:08.775892 1 reflector.go:351] Caches populated for *v1.Scheduler from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:09.470276829+00:00 stderr F I0813 20:10:09.468847 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:10.215148165+00:00 stderr F I0813 20:10:10.214593 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from 
github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:10:19.692741613+00:00 stderr F I0813 20:10:19.692077 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:23.228125756+00:00 stderr F I0813 20:10:23.227339 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:27.929125358+00:00 stderr F I0813 20:10:27.928636 1 reflector.go:351] Caches populated for *v1.ContainerRuntimeConfig from github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 2025-08-13T20:11:02.478100283+00:00 stderr F W0813 20:11:02.477126 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:11:02.478100283+00:00 stderr F E0813 20:11:02.477947 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:11:33.421838594+00:00 stderr F W0813 20:11:33.420285 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:11:33.421838594+00:00 stderr F E0813 20:11:33.421743 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to 
list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:12:27.036588905+00:00 stderr F W0813 20:12:27.036047 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:12:27.036588905+00:00 stderr F E0813 20:12:27.036497 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:12:58.297409024+00:00 stderr F W0813 20:12:58.296634 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:12:58.297409024+00:00 stderr F E0813 20:12:58.297348 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:13:28.588911106+00:00 stderr F W0813 20:13:28.588395 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:13:28.589057570+00:00 stderr F E0813 20:13:28.589040 1 reflector.go:147] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:14:20.404079860+00:00 stderr F W0813 20:14:20.402852 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:14:20.404079860+00:00 stderr F E0813 20:14:20.403448 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:15:15.591592395+00:00 stderr F W0813 20:15:15.590886 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:15:15.591592395+00:00 stderr F E0813 20:15:15.591532 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:16:10.081905945+00:00 stderr F W0813 20:16:10.081372 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get 
machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:16:10.081905945+00:00 stderr F E0813 20:16:10.081875 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:17:04.217098347+00:00 stderr F W0813 20:17:04.216341 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:17:04.217098347+00:00 stderr F E0813 20:17:04.217026 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:17:56.486311514+00:00 stderr F W0813 20:17:56.485625 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:17:56.486311514+00:00 stderr F E0813 20:17:56.486247 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:18:54.917419329+00:00 stderr F W0813 20:18:54.916815 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: 
failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:18:54.917419329+00:00 stderr F E0813 20:18:54.917299 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:19:48.919583120+00:00 stderr F W0813 20:19:48.918738 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:19:48.919583120+00:00 stderr F E0813 20:19:48.919471 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:20:36.034141950+00:00 stderr F W0813 20:20:36.033271 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:20:36.034141950+00:00 stderr F E0813 20:20:36.034025 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:21:14.926291207+00:00 stderr F W0813 20:21:14.925592 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:21:14.926291207+00:00 stderr F E0813 20:21:14.926203 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:21:47.846378734+00:00 stderr F W0813 20:21:47.845509 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:21:47.846378734+00:00 stderr F E0813 20:21:47.846321 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:22:35.519597564+00:00 stderr F W0813 20:22:35.518565 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:22:35.519597564+00:00 stderr F E0813 20:22:35.519565 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get 
machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:23:17.253591317+00:00 stderr F W0813 20:23:17.252247 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:23:17.253839864+00:00 stderr F E0813 20:23:17.253555 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:24:12.130840552+00:00 stderr F W0813 20:24:12.130174 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:24:12.130947525+00:00 stderr F E0813 20:24:12.130717 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:24:46.986507470+00:00 stderr F W0813 20:24:46.985612 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:24:46.986507470+00:00 stderr F E0813 20:24:46.986375 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: 
failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:25:38.363311548+00:00 stderr F W0813 20:25:38.362318 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:25:38.363311548+00:00 stderr F E0813 20:25:38.363059 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:26:32.842179333+00:00 stderr F W0813 20:26:32.841247 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:26:32.842179333+00:00 stderr F E0813 20:26:32.842122 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:27:26.684468834+00:00 stderr F W0813 20:27:26.683694 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:27:26.684468834+00:00 stderr F E0813 20:27:26.684369 1 reflector.go:147] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:28:04.854152355+00:00 stderr F W0813 20:28:04.853294 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:28:04.854152355+00:00 stderr F E0813 20:28:04.854021 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:29:00.352371193+00:00 stderr F W0813 20:29:00.351581 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:29:00.352371193+00:00 stderr F E0813 20:29:00.352355 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:29:39.887393446+00:00 stderr F W0813 20:29:39.886355 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get 
machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:29:39.887393446+00:00 stderr F E0813 20:29:39.887065 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:30:30.924838473+00:00 stderr F W0813 20:30:30.924209 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:30:30.924838473+00:00 stderr F E0813 20:30:30.924735 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:31:07.703564453+00:00 stderr F W0813 20:31:07.702500 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:31:07.703702457+00:00 stderr F E0813 20:31:07.703684 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:31:41.915769202+00:00 stderr F W0813 20:31:41.915124 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: 
failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:31:41.915769202+00:00 stderr F E0813 20:31:41.915654 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:32:17.032996847+00:00 stderr F W0813 20:32:17.032355 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:32:17.032996847+00:00 stderr F E0813 20:32:17.032943 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:33:01.622681426+00:00 stderr F W0813 20:33:01.622034 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:33:01.622930893+00:00 stderr F E0813 20:33:01.622891 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:33:47.072377624+00:00 stderr F W0813 20:33:47.071605 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:33:47.072377624+00:00 stderr F E0813 20:33:47.072320 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:34:23.300561987+00:00 stderr F W0813 20:34:23.299956 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:34:23.300664670+00:00 stderr F E0813 20:34:23.300551 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:34:59.364951638+00:00 stderr F W0813 20:34:59.364285 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:34:59.364951638+00:00 stderr F E0813 20:34:59.364742 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get 
machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:35:31.472926985+00:00 stderr F W0813 20:35:31.472191 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:35:31.472926985+00:00 stderr F E0813 20:35:31.472873 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:36:30.556270120+00:00 stderr F W0813 20:36:30.555625 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:36:30.556270120+00:00 stderr F E0813 20:36:30.556227 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:37:08.151012647+00:00 stderr F W0813 20:37:08.150338 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:37:08.151169502+00:00 stderr F E0813 20:37:08.151108 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: 
failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:37:48.345633999+00:00 stderr F W0813 20:37:48.344753 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:37:48.345633999+00:00 stderr F E0813 20:37:48.345434 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:38:23.450721151+00:00 stderr F W0813 20:38:23.449862 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:38:23.450721151+00:00 stderr F E0813 20:38:23.450450 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:39:00.580320141+00:00 stderr F W0813 20:39:00.579684 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:39:00.580320141+00:00 stderr F E0813 20:39:00.580237 1 reflector.go:147] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:39:40.650243669+00:00 stderr F W0813 20:39:40.649332 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:39:40.650328611+00:00 stderr F E0813 20:39:40.650253 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:40:40.538996386+00:00 stderr F W0813 20:40:40.538258 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:40:40.538996386+00:00 stderr F E0813 20:40:40.538909 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:41:14.181708488+00:00 stderr F W0813 20:41:14.180081 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get 
machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:41:14.181708488+00:00 stderr F E0813 20:41:14.181138 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:42:10.626970184+00:00 stderr F I0813 20:42:10.622903 1 template_controller.go:126] Re-syncing ControllerConfig due to secret pull-secret change 2025-08-13T20:42:12.654457226+00:00 stderr F W0813 20:42:12.654097 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:42:12.654563459+00:00 stderr F E0813 20:42:12.654543 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1alpha1.MachineOSBuild: failed to list *v1alpha1.MachineOSBuild: the server could not find the requested resource (get machineosbuilds.machineconfiguration.openshift.io) 2025-08-13T20:42:16.008754910+00:00 stderr F I0813 20:42:16.007899 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.014437714+00:00 stderr F I0813 20:42:16.014402 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.025124922+00:00 stderr F I0813 20:42:16.025012 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.045668374+00:00 stderr F I0813 20:42:16.045548 1 
render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.060137731+00:00 stderr F I0813 20:42:16.060088 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.066087593+00:00 stderr F I0813 20:42:16.065977 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.076672748+00:00 stderr F I0813 20:42:16.076547 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.086157632+00:00 stderr F I0813 20:42:16.086122 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.098042974+00:00 stderr F I0813 20:42:16.097929 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.138759688+00:00 stderr F I0813 20:42:16.138645 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.168434424+00:00 stderr F I0813 20:42:16.167938 1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.219057253+00:00 stderr F I0813 20:42:16.218971 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.328636572+00:00 stderr F I0813 20:42:16.328586 1 render_controller.go:380] Error syncing 
machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.379500419+00:00 stderr F I0813 20:42:16.379362 1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false) 2025-08-13T20:42:16.822158331+00:00 stderr F I0813 20:42:16.821429 1 render_controller.go:556] Pool master: now targeting: rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T20:42:16.827337360+00:00 stderr F I0813 20:42:16.827259 1 render_controller.go:556] Pool worker: now targeting: rendered-worker-83accf81260e29bcce65a184dd980479 2025-08-13T20:42:21.836593786+00:00 stderr F I0813 20:42:21.835700 1 status.go:249] Pool worker: All nodes are updated with MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 2025-08-13T20:42:21.863504552+00:00 stderr F I0813 20:42:21.863159 1 node_controller.go:566] Pool master: 1 candidate nodes in 0 zones for update, capacity: 1 2025-08-13T20:42:21.863504552+00:00 stderr F I0813 20:42:21.863202 1 node_controller.go:566] Pool master: filtered to 1 candidate nodes for update, capacity: 1 2025-08-13T20:42:21.894567958+00:00 stderr F I0813 20:42:21.894471 1 node_controller.go:576] Pool master: node crc: changed taints 2025-08-13T20:42:21.955022110+00:00 stderr F I0813 20:42:21.950945 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"37435", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node crc to %!s(*string=0xc000f1f1c8) 2025-08-13T20:42:21.990988767+00:00 stderr F I0813 20:42:21.990888 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T20:42:21.993939322+00:00 
stderr F I0813 20:42:21.991069 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"37453", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ef556ead28ddfad01c34ac56c7adfb5a 2025-08-13T20:42:22.105384245+00:00 stderr F E0813 20:42:22.105190 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:42:22.106994052+00:00 stderr F E0813 20:42:22.106943 1 render_controller.go:443] Error syncing Generated MCFG: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:42:22.112701226+00:00 stderr F E0813 20:42:22.112589 1 render_controller.go:465] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:42:22.112701226+00:00 stderr F I0813 20:42:22.112625 1 render_controller.go:380] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:42:22.112978044+00:00 stderr F E0813 20:42:22.112887 1 render_controller.go:465] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply 
your changes to the latest version and try again 2025-08-13T20:42:22.112978044+00:00 stderr F I0813 20:42:22.112922 1 render_controller.go:380] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:42:24.125361272+00:00 stderr F I0813 20:42:24.124848 1 node_controller.go:576] Pool master: node crc: changed annotation machineconfiguration.openshift.io/state = Working 2025-08-13T20:42:24.127582136+00:00 stderr F I0813 20:42:24.125590 1 event.go:364] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"8efb5656-7de8-467a-9f2a-237a011a4783", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"37453", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node crc now has machineconfiguration.openshift.io/state=Working 2025-08-13T20:42:26.958697318+00:00 stderr F I0813 20:42:26.957735 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:27.009661437+00:00 stderr F I0813 20:42:27.009377 1 node_controller.go:576] Pool master: node crc: changed taints 2025-08-13T20:42:31.999703171+00:00 stderr F I0813 20:42:31.999076 1 node_controller.go:1210] No nodes available for updates 2025-08-13T20:42:31.999847395+00:00 stderr F E0813 20:42:31.999080 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found 2025-08-13T20:42:32.014363134+00:00 stderr F I0813 20:42:32.014258 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig 
rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.020685466+00:00 stderr F E0813 20:42:32.020594 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.032041254+00:00 stderr F I0813 20:42:32.030072 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.040847727+00:00 stderr F E0813 20:42:32.040665 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.050491705+00:00 stderr F I0813 20:42:32.050420 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.071996205+00:00 stderr F E0813 20:42:32.071826 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.080745538+00:00 stderr F I0813 20:42:32.080664 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.121400500+00:00 stderr F E0813 20:42:32.121311 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.131085009+00:00 stderr F I0813 20:42:32.130997 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.211753795+00:00 stderr F E0813 20:42:32.211476 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.232496443+00:00 stderr F I0813 20:42:32.232403 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.394884324+00:00 stderr F E0813 20:42:32.393039 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.404468440+00:00 stderr F I0813 20:42:32.404355 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.725402322+00:00 stderr F E0813 20:42:32.725263 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:32.734254078+00:00 stderr F I0813 20:42:32.734154 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:33.201087597+00:00 stderr F E0813 20:42:33.200596 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig
rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.216832791+00:00 stderr F I0813 20:42:33.216170 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.222384611+00:00 stderr F E0813 20:42:33.222356 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.233886922+00:00 stderr F I0813 20:42:33.233775 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.244505438+00:00 stderr F E0813 20:42:33.244416 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.256767982+00:00 stderr F I0813 20:42:33.256685 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.279054565+00:00 stderr F E0813 20:42:33.277531 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.293007427+00:00 stderr F I0813 20:42:33.292724 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.335658456+00:00 stderr F E0813 20:42:33.335315 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.351373300+00:00 stderr F I0813 20:42:33.351312 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.376716570+00:00 stderr F E0813 20:42:33.376532 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:33.402590646+00:00 stderr F I0813 20:42:33.401612 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:33.432120678+00:00 stderr F E0813 20:42:33.432061 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.444271948+00:00 stderr F I0813 20:42:33.444176 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.606433643+00:00 stderr F E0813 20:42:33.605536 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.615295959+00:00 stderr F I0813 20:42:33.615145 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.936213291+00:00 stderr F E0813 20:42:33.936081 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:33.947201958+00:00 stderr F I0813 20:42:33.946321 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:34.588471676+00:00 stderr F E0813 20:42:34.588375 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:34.605863277+00:00 stderr F I0813 20:42:34.597025 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:34.682711883+00:00 stderr F E0813 20:42:34.682433 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig
rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:34.705688565+00:00 stderr F I0813 20:42:34.704968 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:35.886755436+00:00 stderr F E0813 20:42:35.886347 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:35.896417474+00:00 stderr F I0813 20:42:35.896291 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:36.323923739+00:00 stderr F I0813 20:42:36.323531 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.324596938+00:00 stderr F I0813 20:42:36.324510 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.324995299+00:00 stderr F I0813 20:42:36.324940 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.323763 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.345954 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346108 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346192 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346305 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346364 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346501 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346557 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.323714 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.324345 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.324439 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.324456 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346847 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346907 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.346965 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.347027 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382438916+00:00 stderr F I0813 20:42:36.347181 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.382925350+00:00 stderr F I0813 20:42:36.382895 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:37.016337511+00:00 stderr F I0813 20:42:37.015940 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:37.017895636+00:00 stderr F I0813 20:42:37.017753 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.023579580+00:00 stderr F I0813 20:42:37.023551 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:37.024699342+00:00 stderr F I0813 20:42:37.024673 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.035967567+00:00 stderr F I0813 20:42:37.035916 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:37.037004967+00:00 stderr F I0813 20:42:37.036970 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.058903918+00:00 stderr F I0813 20:42:37.058743 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:37.059901537+00:00 stderr F I0813 20:42:37.059873 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.101688622+00:00 stderr F I0813 20:42:37.101598 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:37.104434691+00:00 stderr F I0813 20:42:37.104407 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.217067408+00:00 stderr F I0813 20:42:37.217009 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:37.267748759+00:00 stderr F E0813 20:42:37.267687 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:37.268837191+00:00 stderr F E0813 20:42:37.268747 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.268893922+00:00 stderr F I0813 20:42:37.268879 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:37.421141582+00:00 stderr F I0813 20:42:37.421048 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:37.620343355+00:00 stderr F I0813 20:42:37.618982 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:37.834513760+00:00 stderr F I0813 20:42:37.827334 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:38.149293695+00:00 stderr F I0813 20:42:38.149172 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:38.222857066+00:00 stderr F I0813 20:42:38.217434 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:38.459902060+00:00 stderr F E0813 20:42:38.457113 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:38.459902060+00:00 stderr F E0813 20:42:38.457837 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:38.459902060+00:00 stderr F I0813 20:42:38.457859 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:38.620535611+00:00 stderr F I0813 20:42:38.620441 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.018395911+00:00 stderr F I0813 20:42:39.018295 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.217161502+00:00 stderr F I0813 20:42:39.216939 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:39.617432801+00:00 stderr F I0813 20:42:39.617015 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:39.816411778+00:00 stderr F I0813 20:42:39.816312 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.216590945+00:00 stderr F I0813 20:42:40.216482 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.616889276+00:00 stderr F I0813 20:42:40.616739 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:41.018724301+00:00 stderr F I0813 20:42:41.018146 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:41.217121801+00:00 stderr F I0813 20:42:41.216984 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:41.616830045+00:00 stderr F I0813 20:42:41.616691 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:41.818520409+00:00 stderr F I0813 20:42:41.818366 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:42.217537563+00:00
stderr F I0813 20:42:42.217150 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:42.389906863+00:00 stderr F E0813 20:42:42.389755 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:42.390755547+00:00 stderr F E0813 20:42:42.390682 1 render_controller.go:465] Error updating MachineConfigPool master: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:42.390837650+00:00 stderr F I0813 20:42:42.390765 1 render_controller.go:380] Error syncing machineconfigpool master: could not generate rendered MachineConfig: could not get current MachineConfig rendered-master-ef556ead28ddfad01c34ac56c7adfb5a for MachineConfigPool master: machineconfig.machineconfiguration.openshift.io "rendered-master-ef556ead28ddfad01c34ac56c7adfb5a" not found
2025-08-13T20:42:42.861080747+00:00 stderr F I0813 20:42:42.860980 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:43.579242811+00:00 stderr F E0813 20:42:43.579116 1 render_controller.go:443] Error syncing Generated MCFG: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:43.579945061+00:00 stderr F E0813 20:42:43.579869 1 render_controller.go:465] Error updating MachineConfigPool worker: Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:43.579945061+00:00 stderr F I0813 20:42:43.579916 1 render_controller.go:380] Error syncing machineconfigpool worker: could not generate rendered MachineConfig: could not get current MachineConfig rendered-worker-83accf81260e29bcce65a184dd980479 for MachineConfigPool worker: machineconfig.machineconfiguration.openshift.io "rendered-worker-83accf81260e29bcce65a184dd980479" not found
2025-08-13T20:42:44.146971519+00:00 stderr F I0813 20:42:44.146875 1 node_controller.go:848] Error syncing machineconfigpool worker: could not update MachineConfigPool "worker": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:44.178582740+00:00 stderr F I0813 20:42:44.178448 1 node_controller.go:1210] No nodes available for updates
2025-08-13T20:42:44.180025342+00:00 stderr F I0813 20:42:44.179930 1 node_controller.go:848] Error syncing machineconfigpool master: could not update MachineConfigPool "master": Put "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:44.527609583+00:00 stderr F I0813 20:42:44.527249 1 helpers.go:93] Shutting down due to: terminated
2025-08-13T20:42:44.527643104+00:00 stderr F I0813 20:42:44.527618 1 helpers.go:96] Context cancelled
2025-08-13T20:42:44.527938452+00:00 stderr F I0813 20:42:44.527875 1 machine_set_boot_image_controller.go:189] Shutting down MachineConfigController-MachineSetBootImageController
2025-08-13T20:42:44.529565509+00:00 stderr F I0813 20:42:44.529496 1 render_controller.go:135] Shutting down MachineConfigController-RenderController
2025-08-13T20:42:44.529694943+00:00 stderr F I0813 20:42:44.529633 1 node_controller.go:255] Shutting down MachineConfigController-NodeController
2025-08-13T20:42:44.529712833+00:00 stderr F I0813 20:42:44.529701 1 template_controller.go:235] Shutting down MachineConfigController-TemplateController
2025-08-13T20:42:44.529954020+00:00 stderr F E0813 20:42:44.529858 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:44.529954020+00:00 stderr F I0813 20:42:44.529904 1 start.go:146] Stopped leading. Terminating.
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/1.log
2025-12-13T00:13:16.133636698+00:00 stderr F I1213 00:13:16.129427 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-12-13T00:13:16.164832196+00:00 stderr F I1213 00:13:16.158515 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-dns-operator", Name:"dns-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-12-13T00:13:16.166844544+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="FeatureGates initializedknownFeatures[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]"
2025-12-13T00:13:16.169061058+00:00 stderr F 2025-12-13T00:13:16Z INFO controller-runtime.metrics Starting metrics server
2025-12-13T00:13:16.173418624+00:00 stderr F 2025-12-13T00:13:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"}
2025-12-13T00:13:16.173475416+00:00 stderr F 2025-12-13T00:13:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DaemonSet"}
2025-12-13T00:13:16.173515298+00:00 stderr F 2025-12-13T00:13:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Service"}
2025-12-13T00:13:16.173539688+00:00 stderr F 2025-12-13T00:13:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"}
2025-12-13T00:13:16.173563549+00:00 stderr F 2025-12-13T00:13:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"}
2025-12-13T00:13:16.173590370+00:00 stderr F 2025-12-13T00:13:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Node"}
2025-12-13T00:13:16.173612821+00:00 stderr F 2025-12-13T00:13:16Z INFO Starting Controller {"controller": "dns_controller"}
2025-12-13T00:13:16.174581984+00:00 stderr F 2025-12-13T00:13:16Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DNS"}
2025-12-13T00:13:16.174653956+00:00 stderr F 2025-12-13T00:13:16Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DaemonSet"}
2025-12-13T00:13:16.174682947+00:00 stderr
F 2025-12-13T00:13:16Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-12-13T00:13:16.174705258+00:00 stderr F 2025-12-13T00:13:16Z INFO Starting Controller {"controller": "status_controller"} 2025-12-13T00:13:16.192512016+00:00 stderr F 2025-12-13T00:13:16Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": "127.0.0.1:60000", "secure": false} 2025-12-13T00:13:16.625522307+00:00 stderr F 2025-12-13T00:13:16Z INFO Starting workers {"controller": "dns_controller", "worker count": 1} 2025-12-13T00:13:16.625522307+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="reconciling request: /default" 2025-12-13T00:13:16.625522307+00:00 stderr F 2025-12-13T00:13:16Z INFO Starting workers {"controller": "status_controller", "worker count": 1} 2025-12-13T00:13:17.092049882+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"False\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"All DNS and node-resolver pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), 
Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2025, time.December, 13, 0, 13, 17, 0, time.Local), Reason:\"NoDNSPodsAvailable\", Message:\"No DNS pods are available.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2025, time.December, 13, 0, 13, 17, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available DNS pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2025, time.December, 13, 0, 13, 17, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}" 2025-12-13T00:13:17.103541238+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="reconciling request: /default" 2025-12-13T00:13:26.007187912+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="reconciling request: /default" 2025-12-13T00:13:34.971051627+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="reconciling request: /default" 2025-12-13T00:13:35.024232594+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2025, time.December, 13, 0, 13, 17, 0, time.Local), Reason:\"NoDNSPodsAvailable\", Message:\"No DNS pods are available.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2025, time.December, 13, 0, 13, 17, 0, time.Local), Reason:\"Reconciling\", 
Message:\"Have 0 available DNS pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2025, time.December, 13, 0, 13, 17, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2025, time.December, 13, 0, 13, 35, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"False\", LastTransitionTime:time.Date(2025, time.December, 13, 0, 13, 35, 0, time.Local), Reason:\"AsExpected\", Message:\"All DNS and node-resolver pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2025, time.December, 13, 0, 13, 35, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}" 2025-12-13T00:13:35.027107250+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="reconciling request: /default" 2025-12-13T00:21:16.285759425+00:00 stderr F time="2025-12-13T00:21:16Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused" 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator/0.log
2025-08-13T19:59:13.692541717+00:00 stderr F I0813 19:59:13.684769 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T19:59:15.984962045+00:00 stderr F time="2025-08-13T19:59:15Z" level=info msg="FeatureGates initializedknownFeatures[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation
ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]" 2025-08-13T19:59:15.992337545+00:00 stderr F 2025-08-13T19:59:15Z INFO controller-runtime.metrics Starting metrics server 2025-08-13T19:59:16.542397145+00:00 stderr F 2025-08-13T19:59:16Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": "127.0.0.1:60000", "secure": false} 2025-08-13T19:59:16.562715444+00:00 stderr F I0813 19:59:16.552380 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-dns-operator", Name:"dns-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", 
"InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:16.717207318+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DNS"} 2025-08-13T19:59:16.717207318+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DaemonSet"} 2025-08-13T19:59:16.717207318+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-08-13T19:59:16.717207318+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting Controller {"controller": "status_controller"} 2025-08-13T19:59:16.776197580+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-08-13T19:59:16.789697915+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DaemonSet"} 2025-08-13T19:59:16.789933952+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Service"} 2025-08-13T19:59:16.789988983+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", 
"source": "kind source: *v1.ConfigMap"} 2025-08-13T19:59:16.790040805+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T19:59:16.790110607+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Node"} 2025-08-13T19:59:16.790152098+00:00 stderr F 2025-08-13T19:59:16Z INFO Starting Controller {"controller": "dns_controller"} 2025-08-13T19:59:26.606910839+00:00 stderr F 2025-08-13T19:59:26Z INFO Starting workers {"controller": "dns_controller", "worker count": 1} 2025-08-13T19:59:26.606910839+00:00 stderr F 2025-08-13T19:59:26Z INFO Starting workers {"controller": "status_controller", "worker count": 1} 2025-08-13T19:59:26.711345946+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="reconciling request: /default" 2025-08-13T19:59:31.665157205+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDNSPodsDesired\", Message:\"No DNS pods are desired; this could mean all nodes are tainted or unschedulable.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 17, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available node-resolver pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS 
Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDNSPodsAvailable\", Message:\"No DNS pods are available.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 17, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available DNS pods, want 1.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}" 2025-08-13T19:59:31.941140121+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="reconciling request: /default" 2025-08-13T19:59:32.963913386+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="reconciling request: /default" 2025-08-13T19:59:41.020328726+00:00 stderr F time="2025-08-13T19:59:41Z" level=info msg="reconciling request: /default" 2025-08-13T19:59:43.077599078+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDNSPodsAvailable\", Message:\"No DNS pods are available.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 17, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 0 available DNS pods, want 1.\"}, 
v1.OperatorCondition{Type:\"Available\", Status:\"False\", LastTransitionTime:time.Date(2024, time.June, 27, 13, 34, 15, 0, time.Local), Reason:\"NoDaemonSetPods\", Message:\"The DNS daemonset has no pods available.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"10.217.4.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"False\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"All DNS and node-resolver pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2025, time.August, 13, 19, 59, 42, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2024, time.June, 26, 12, 47, 19, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}" 2025-08-13T19:59:43.159758230+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="reconciling request: /default" 2025-08-13T20:00:14.997050813+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="reconciling request: /default" 2025-08-13T20:00:19.742230217+00:00 stderr F time="2025-08-13T20:00:19Z" level=info msg="reconciling request: /default" 2025-08-13T20:00:47.670975966+00:00 stderr F time="2025-08-13T20:00:47Z" level=info msg="reconciling request: 
/default" 2025-08-13T20:02:29.340606718+00:00 stderr F time="2025-08-13T20:02:29Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:03:29.346119497+00:00 stderr F time="2025-08-13T20:03:29Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:04:29.348626550+00:00 stderr F time="2025-08-13T20:04:29Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:06:11.240719884+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="reconciling request: /default" 2025-08-13T20:06:26.959054664+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="reconciling request: /default" 2025-08-13T20:06:27.922351049+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="reconciling request: /default" 2025-08-13T20:06:28.002841484+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="reconciling request: /default" 2025-08-13T20:06:54.997534705+00:00 stderr F time="2025-08-13T20:06:54Z" level=info msg="reconciling request: /default" 2025-08-13T20:06:55.533701098+00:00 stderr F time="2025-08-13T20:06:55Z" level=info msg="reconciling request: /default" 2025-08-13T20:08:29.702753933+00:00 stderr F time="2025-08-13T20:08:29Z" level=error msg="failed to ensure default dns Get \"https://10.217.4.1:443/apis/operator.openshift.io/v1/dnses/default\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:09:29.337380577+00:00 stderr F time="2025-08-13T20:09:29Z" level=info msg="reconciling request: /default" 2025-08-13T20:09:38.974292084+00:00 stderr F time="2025-08-13T20:09:38Z" level=info msg="reconciling request: /default" 
2025-08-13T20:09:41.903293700+00:00 stderr F time="2025-08-13T20:09:41Z" level=info msg="reconciling request: /default"
2025-08-13T20:09:41.984477328+00:00 stderr F time="2025-08-13T20:09:41Z" level=info msg="reconciling request: /default"
2025-08-13T20:09:44.310670152+00:00 stderr F time="2025-08-13T20:09:44Z" level=info msg="reconciling request: /default"
2025-08-13T20:09:52.517276043+00:00 stderr F time="2025-08-13T20:09:52Z" level=info msg="reconciling request: /default"

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/1.log
2025-12-13T00:13:17.491898999+00:00 stderr F W1213 00:13:17.491781 1 deprecated.go:66]
2025-12-13T00:13:17.491898999+00:00 stderr F ==== Removed Flag Warning ======================
2025-12-13T00:13:17.491898999+00:00 stderr F
2025-12-13T00:13:17.491898999+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-12-13T00:13:17.491898999+00:00 stderr F
2025-12-13T00:13:17.491898999+00:00 stderr F ===============================================
2025-12-13T00:13:17.491898999+00:00 stderr F
2025-12-13T00:13:17.492912812+00:00 stderr F I1213 00:13:17.492896 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-12-13T00:13:17.492995845+00:00 stderr F I1213 00:13:17.492983 1 kube-rbac-proxy.go:347] Reading certificate files
2025-12-13T00:13:17.497217377+00:00 stderr F I1213 00:13:17.497089 1 kube-rbac-proxy.go:395] Starting TCP socket on :9393
2025-12-13T00:13:17.497498227+00:00 stderr F I1213 00:13:17.497466 1 kube-rbac-proxy.go:402] Listening securely on :9393

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy/0.log
2025-08-13T19:59:32.028488101+00:00 stderr F W0813 19:59:32.022135 1 deprecated.go:66]
2025-08-13T19:59:32.028488101+00:00 stderr F ==== Removed Flag Warning ======================
2025-08-13T19:59:32.028488101+00:00 stderr F
2025-08-13T19:59:32.028488101+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more.
2025-08-13T19:59:32.028488101+00:00 stderr F
2025-08-13T19:59:32.028488101+00:00 stderr F ===============================================
2025-08-13T19:59:32.028488101+00:00 stderr F
2025-08-13T19:59:32.085928918+00:00 stderr F I0813 19:59:32.083706 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-08-13T19:59:32.085928918+00:00 stderr F I0813 19:59:32.083991 1 kube-rbac-proxy.go:347] Reading certificate files
2025-08-13T19:59:32.147723120+00:00 stderr F I0813 19:59:32.147658 1 kube-rbac-proxy.go:395] Starting TCP socket on :9393
2025-08-13T19:59:32.152233128+00:00 stderr F I0813 19:59:32.152208 1 kube-rbac-proxy.go:402] Listening securely on :9393
2025-08-13T20:42:42.635290547+00:00 stderr F I0813 20:42:42.634461 1 kube-rbac-proxy.go:493] received interrupt, shutting down

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/1.log
2025-08-13T20:05:42.847593472+00:00 stderr F I0813 20:05:42.847360 1 cmd.go:241] Using service-serving-cert provided certificates
2025-08-13T20:05:42.849079444+00:00 stderr F I0813 20:05:42.848832 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:05:42.850582657+00:00 stderr F I0813 20:05:42.850357 1 observer_polling.go:159] Starting file observer
2025-08-13T20:05:42.959341362+00:00 stderr F I0813 20:05:42.959227 1 builder.go:299] network-operator version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98
2025-08-13T20:05:43.835638706+00:00 stderr F I0813 20:05:43.832980 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T20:05:43.835638706+00:00 stderr F W0813 20:05:43.833036 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T20:05:43.835638706+00:00 stderr F W0813 20:05:43.833046 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T20:05:43.839256959+00:00 stderr F I0813 20:05:43.837451 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:43.842104231+00:00 stderr F I0813 20:05:43.841213 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:43.842104231+00:00 stderr F I0813 20:05:43.841744 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:43.842415920+00:00 stderr F I0813 20:05:43.842362 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:43.842824481+00:00 stderr F I0813 20:05:43.842538 1 secure_serving.go:213] Serving securely on [::]:9104 2025-08-13T20:05:43.842953695+00:00 stderr F I0813 20:05:43.842935 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:05:43.843316465+00:00 stderr F I0813 20:05:43.843296 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:43.847064563+00:00 stderr F I0813 20:05:43.846944 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-operator/network-operator-lock... 
2025-08-13T20:05:43.849444071+00:00 stderr F I0813 20:05:43.847922 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:05:43.849444071+00:00 stderr F I0813 20:05:43.848425 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T20:05:43.849493042+00:00 stderr F I0813 20:05:43.849475 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:05:43.950929837+00:00 stderr F I0813 20:05:43.950757 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:05:43.950983549+00:00 stderr F I0813 20:05:43.950918 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T20:05:43.952076690+00:00 stderr F I0813 20:05:43.951937 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:10:14.618367979+00:00 stderr F I0813 20:10:14.615016 1 leaderelection.go:260] successfully acquired lease openshift-network-operator/network-operator-lock
2025-08-13T20:10:14.619163142+00:00 stderr F I0813 20:10:14.619073 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"22664f33-4062-41bd-9ac9-dc79ccf9e70c", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"33148", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_c4643de4-0a66-40c8-abff-4239e04f61ab became leader
2025-08-13T20:10:14.783630177+00:00 stderr F I0813 20:10:14.783197 1 operator.go:97] Creating status manager for stand-alone cluster
2025-08-13T20:10:14.784895684+00:00 stderr F I0813 20:10:14.784846 1 operator.go:102] Adding controller-runtime controllers
2025-08-13T20:10:14.806739700+00:00 stderr F I0813 20:10:14.805093 1 operconfig_controller.go:102] Waiting for feature gates initialization...
2025-08-13T20:10:14.813119033+00:00 stderr F I0813 20:10:14.812083 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T20:10:14.823420598+00:00 stderr F I0813 20:10:14.823371 1 operconfig_controller.go:109] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T20:10:14.825390405+00:00 stderr F I0813 20:10:14.825331 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-network-operator", Name:"network-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T20:10:14.838390267+00:00 stderr F I0813 20:10:14.838340 1 client.go:239] Starting informers...
2025-08-13T20:10:14.839898711+00:00 stderr F I0813 20:10:14.839877 1 client.go:250] Waiting for informers to sync...
2025-08-13T20:10:15.043609921+00:00 stderr F I0813 20:10:15.041276 1 client.go:271] Informers started and synced
2025-08-13T20:10:15.043609921+00:00 stderr F I0813 20:10:15.041345 1 operator.go:126] Starting controller-manager
2025-08-13T20:10:15.043609921+00:00 stderr F I0813 20:10:15.042155 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics"
2025-08-13T20:10:15.043980672+00:00 stderr F I0813 20:10:15.043893 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T20:10:15.044043254+00:00 stderr F I0813 20:10:15.044026 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T20:10:15.044104816+00:00 stderr F I0813 20:10:15.044071 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T20:10:15.045016342+00:00 stderr F I0813 20:10:15.044954 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress=":8080" secure=false
2025-08-13T20:10:15.045158616+00:00 stderr F I0813 20:10:15.045136 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController
2025-08-13T20:10:15.045206317+00:00 stderr F I0813 20:10:15.045189 1 base_controller.go:73] Caches are synced for ManagementStateController
2025-08-13T20:10:15.045246928+00:00 stderr F I0813 20:10:15.045232 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ...
2025-08-13T20:10:15.046874225+00:00 stderr F I0813 20:10:15.046648 1 controller.go:178] "Starting EventSource" controller="pki-controller" source="kind source: *v1.OperatorPKI"
2025-08-13T20:10:15.046874225+00:00 stderr F I0813 20:10:15.046696 1 controller.go:186] "Starting Controller" controller="pki-controller"
2025-08-13T20:10:15.046874225+00:00 stderr F I0813 20:10:15.046726 1 controller.go:178] "Starting EventSource" controller="clusterconfig-controller" source="kind source: *v1.Network"
2025-08-13T20:10:15.046874225+00:00 stderr F I0813 20:10:15.046768 1 controller.go:186] "Starting Controller" controller="clusterconfig-controller"
2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047049 1 controller.go:178] "Starting EventSource" controller="egress-router-controller" source="kind source: *v1.EgressRouter"
2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047177 1 controller.go:186] "Starting Controller" controller="egress-router-controller"
2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047487 1 controller.go:178] "Starting EventSource" controller="signer-controller" source="kind source: *v1.CertificateSigningRequest"
2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047536 1 controller.go:186] "Starting Controller" controller="signer-controller"
2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.047670 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc000b7bb80"
2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048008 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc000b7bc30"
2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048136 1 controller.go:186] "Starting Controller" controller="configmap-trust-bundle-injector-controller"
2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048165 1 controller.go:220] "Starting workers" controller="configmap-trust-bundle-injector-controller" worker count=1
2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048265 1 controller.go:178] "Starting EventSource" controller="infrastructureconfig-controller" source="kind source: *v1.Infrastructure"
2025-08-13T20:10:15.048727278+00:00 stderr F I0813 20:10:15.048279 1 controller.go:186] "Starting Controller" controller="infrastructureconfig-controller"
2025-08-13T20:10:15.052039263+00:00 stderr F I0813 20:10:15.049571 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController
2025-08-13T20:10:15.052097065+00:00 stderr F I0813 20:10:15.047085 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network"
2025-08-13T20:10:15.052127916+00:00 stderr F I0813 20:10:15.050615 1 controller.go:178] "Starting EventSource" controller="ingress-config-controller" source="kind source: *v1.IngressController"
2025-08-13T20:10:15.052158247+00:00 stderr F I0813 20:10:15.050697 1 controller.go:178] "Starting EventSource" controller="dashboard-controller" source="informer source: 0xc000b7bd90"
2025-08-13T20:10:15.052283460+00:00 stderr F I0813 20:10:15.052266 1 controller.go:186] "Starting Controller" controller="dashboard-controller"
2025-08-13T20:10:15.052378773+00:00 stderr F I0813 20:10:15.052362 1 controller.go:220] "Starting workers" controller="dashboard-controller" worker count=1
2025-08-13T20:10:15.055895624+00:00 stderr F I0813 20:10:15.052956 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network"
2025-08-13T20:10:15.064980024+00:00 stderr F I0813 20:10:15.056021 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="informer source: 0xc000b7bad0"
2025-08-13T20:10:15.065230501+00:00 stderr F I0813 20:10:15.065197 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Node"
2025-08-13T20:10:15.065286083+00:00 stderr F I0813 20:10:15.065268 1 controller.go:186] "Starting Controller" controller="operconfig-controller"
2025-08-13T20:10:15.065560431+00:00 stderr F I0813 20:10:15.056196 1 dashboard_controller.go:113] Reconcile dashboards
2025-08-13T20:10:15.066300372+00:00 stderr F I0813 20:10:15.066210 1 reflector.go:351] Caches populated for *v1.EgressRouter from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.069116893+00:00 stderr F I0813 20:10:15.068249 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.055259 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.055426 1 controller.go:178] "Starting EventSource" controller="allowlist-controller" source="informer source: 0xc000b7bce0"
2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074287 1 controller.go:186] "Starting Controller" controller="allowlist-controller"
2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074304 1 controller.go:220] "Starting workers" controller="allowlist-controller" worker count=1
2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.055632 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc000b7be40"
2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074346 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc000b7bef0"
2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074372 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc0002ae790"
2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074386 1 controller.go:186] "Starting Controller" controller="pod-watcher"
2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.074392 1 controller.go:220] "Starting workers" controller="pod-watcher" worker count=1
2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.055668 1 controller.go:186] "Starting Controller" controller="ingress-config-controller"
2025-08-13T20:10:15.077937286+00:00 stderr F I0813 20:10:15.058618 1 reflector.go:351] Caches populated for *v1.OperatorPKI from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.080641983+00:00 stderr F I0813 20:10:15.079665 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.080641983+00:00 stderr F I0813 20:10:15.080391 1 dashboard_controller.go:139] Applying dashboards manifests
2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081068 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status
2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081116 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status
2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081450 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status
2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081473 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status
2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081485 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081490 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status
2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081497 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status
2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.081502 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status
2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.058942 1 log.go:245] Reconciling configmap from openshift-apiserver-operator/trusted-ca-bundle
2025-08-13T20:10:15.082739593+00:00 stderr F I0813 20:10:15.082569 1 reflector.go:351] Caches populated for *v1.IngressController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.084970727+00:00 stderr F I0813 20:10:15.083428 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.084970727+00:00 stderr F I0813 20:10:15.084043 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.085413390+00:00 stderr F I0813 20:10:15.085380 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.085639496+00:00 stderr F I0813 20:10:15.059371 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.085742399+00:00 stderr F I0813 20:10:15.059579 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.086010897+00:00 stderr F I0813 20:10:15.054200 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="informer source: 0xc000b7b6b0"
2025-08-13T20:10:15.086122110+00:00 stderr F I0813 20:10:15.086106 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="kind source: *v1.Proxy"
2025-08-13T20:10:15.086163021+00:00 stderr F I0813 20:10:15.086150 1 controller.go:186] "Starting Controller" controller="proxyconfig-controller"
2025-08-13T20:10:15.088455547+00:00 stderr F I0813 20:10:15.086260 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.088631592+00:00 stderr F I0813 20:10:15.059959 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.088903120+00:00 stderr F I0813 20:10:15.088853 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.088969962+00:00 stderr F I0813 20:10:15.060065 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.089088075+00:00 stderr F I0813 20:10:15.060137 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.089189328+00:00 stderr F I0813 20:10:15.060191 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.089282461+00:00 stderr F I0813 20:10:15.060315 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.089355763+00:00 stderr F I0813 20:10:15.061244 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.089459266+00:00 stderr F I0813 20:10:15.061712 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.089474106+00:00 stderr F I0813 20:10:15.059849 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.089570569+00:00 stderr F I0813 20:10:15.086609 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.089879838+00:00 stderr F I0813 20:10:15.089859 1 log.go:245] openshift-network-operator/iptables-alerter-script changed, triggering operconf reconciliation
2025-08-13T20:10:15.089982301+00:00 stderr F I0813 20:10:15.089963 1 log.go:245] openshift-network-operator/kube-root-ca.crt changed, triggering operconf reconciliation
2025-08-13T20:10:15.090061853+00:00 stderr F I0813 20:10:15.090013 1 log.go:245] openshift-network-operator/mtu changed, triggering operconf reconciliation
2025-08-13T20:10:15.090105684+00:00 stderr F I0813 20:10:15.090093 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation
2025-08-13T20:10:15.100008418+00:00 stderr F I0813 20:10:15.099132 1 allowlist_controller.go:103] Reconcile allowlist for openshift-multus/cni-sysctl-allowlist
2025-08-13T20:10:15.141355304+00:00 stderr F I0813 20:10:15.137225 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:15.145938415+00:00 stderr F I0813 20:10:15.144502 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health
2025-08-13T20:10:15.151855365+00:00 stderr F I0813 20:10:15.150383 1 controller.go:220] "Starting workers" controller="signer-controller" worker count=1
2025-08-13T20:10:15.151855365+00:00 stderr F I0813 20:10:15.150424 1 controller.go:220] "Starting workers" controller="egress-router-controller" worker count=1
2025-08-13T20:10:15.151855365+00:00 stderr F I0813 20:10:15.150547 1 controller.go:220] "Starting workers" controller="pki-controller" worker count=1
2025-08-13T20:10:15.151855365+00:00 stderr F I0813 20:10:15.150641 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-08-13T20:10:15.158031662+00:00 stderr F I0813 20:10:15.157991 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.158970089+00:00 stderr F I0813 20:10:15.158871 1 controller.go:220] "Starting workers" controller="clusterconfig-controller" worker count=1
2025-08-13T20:10:15.159072092+00:00 stderr F I0813 20:10:15.159011 1 log.go:245] Reconciling Network.config.openshift.io cluster
2025-08-13T20:10:15.159352200+00:00 stderr F I0813 20:10:15.159283 1 controller.go:220] "Starting workers" controller="infrastructureconfig-controller" worker count=1
2025-08-13T20:10:15.159451862+00:00 stderr F I0813 20:10:15.159384 1 controller.go:220] "Starting workers" controller="ingress-config-controller" worker count=1
2025-08-13T20:10:15.159468423+00:00 stderr F I0813 20:10:15.159449 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.159886 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].configMap.namespace\"" logger="KubeAPIWarningLogger"
2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.159941 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].defaultMode\"" logger="KubeAPIWarningLogger"
2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.166186 1 log.go:245] /crc changed, triggering operconf reconciliation
2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.171500 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.175656397+00:00 stderr F I0813 20:10:15.172980 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.188693941+00:00 stderr F I0813 20:10:15.188566 1 controller.go:220] "Starting workers" controller="proxyconfig-controller" worker count=1
2025-08-13T20:10:15.191884182+00:00 stderr F I0813 20:10:15.188865 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-metric-serving-ca'
2025-08-13T20:10:15.197997997+00:00 stderr F I0813 20:10:15.193331 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful
2025-08-13T20:10:15.197997997+00:00 stderr F I0813 20:10:15.193401 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats
2025-08-13T20:10:15.214512021+00:00 stderr F I0813 20:10:15.214316 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful
2025-08-13T20:10:15.224471057+00:00 stderr F I0813 20:10:15.222161 1 log.go:245] configmap 'openshift-config/etcd-metric-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:10:15.224471057+00:00 stderr F I0813 20:10:15.222263 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/initial-kube-apiserver-server-ca'
2025-08-13T20:10:15.229861391+00:00 stderr F I0813 20:10:15.228541 1 log.go:245] Reconciling configmap from openshift-apiserver/trusted-ca-bundle
2025-08-13T20:10:15.240978060+00:00 stderr F I0813 20:10:15.233518 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:15.265250526+00:00 stderr F I0813 20:10:15.260705 1 log.go:245] successful reconciliation
2025-08-13T20:10:15.269867948+00:00 stderr F I0813 20:10:15.267717 1 controller.go:220] "Starting workers" controller="operconfig-controller" worker count=1
2025-08-13T20:10:15.269867948+00:00 stderr F I0813 20:10:15.267902 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:10:15.284082236+00:00 stderr F I0813 20:10:15.276520 1 log.go:245] configmap 'openshift-config/initial-kube-apiserver-server-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:10:15.284082236+00:00 stderr F I0813 20:10:15.276604 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/kube-root-ca.crt'
2025-08-13T20:10:15.284082236+00:00 stderr F I0813 20:10:15.278257 1 log.go:245] Reconciling configmap from openshift-authentication/v4-0-config-system-trusted-ca-bundle
2025-08-13T20:10:15.308118515+00:00 stderr F I0813 20:10:15.305285 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T20:10:15.308468075+00:00 stderr F I0813 20:10:15.308386 1 log.go:245] configmap 'openshift-config/kube-root-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:10:15.308486515+00:00 stderr F I0813 20:10:15.308478 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt'
2025-08-13T20:10:15.338965739+00:00 stderr F I0813 20:10:15.338564 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:10:15.338965739+00:00 stderr F I0813 20:10:15.338664 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca'
2025-08-13T20:10:15.353518877+00:00 stderr F I0813 20:10:15.353420 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:10:15.353565208+00:00 stderr F I0813 20:10:15.353520 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-ca-bundle'
2025-08-13T20:10:15.360472336+00:00 stderr F I0813 20:10:15.360382 1 log.go:245] configmap 'openshift-config/etcd-ca-bundle' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:10:15.360514647+00:00 stderr F I0813 20:10:15.360482 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-install-manifests'
2025-08-13T20:10:15.381515149+00:00 stderr F I0813 20:10:15.379059 1 log.go:245] configmap 'openshift-config/openshift-install-manifests' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:10:15.381515149+00:00 stderr F I0813 20:10:15.379154 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/registry-certs'
2025-08-13T20:10:15.387460300+00:00 stderr F I0813 20:10:15.387420 1 log.go:245] configmap 'openshift-config/registry-certs' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:10:15.387566783+00:00 stderr F I0813 20:10:15.387553 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-acks'
2025-08-13T20:10:15.520354019+00:00 stderr F I0813 20:10:15.520245 1 log.go:245] configmap 'openshift-config/admin-acks' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:10:15.520690129+00:00 stderr F I0813 20:10:15.520661 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-serving-ca'
2025-08-13T20:10:15.551942095+00:00 stderr F I0813 20:10:15.551594 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.553274773+00:00 stderr F I0813 20:10:15.553210 1 base_controller.go:73] Caches are synced for ConnectivityCheckController
2025-08-13T20:10:15.553274773+00:00 stderr F I0813 20:10:15.553242 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ...
2025-08-13T20:10:15.600361053+00:00 stderr F I0813 20:10:15.600133 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:10:15.625530425+00:00 stderr F I0813 20:10:15.624979 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:10:15.635443619+00:00 stderr F I0813 20:10:15.634847 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:10:15.657638876+00:00 stderr F I0813 20:10:15.657545 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:10:15.696231862+00:00 stderr F I0813 20:10:15.695244 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:10:15.710192442+00:00 stderr F I0813 20:10:15.707050 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:10:15.713141237+00:00 stderr F I0813 20:10:15.713054 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:10:15.713343193+00:00 stderr F I0813 20:10:15.713319 1 log.go:245] configmap 'openshift-config/etcd-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:10:15.713431175+00:00 stderr F I0813 20:10:15.713417 1 log.go:245] Reconciling proxy 'cluster'
2025-08-13T20:10:15.727638152+00:00 stderr F I0813 20:10:15.727582 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:10:15.745508955+00:00 stderr F I0813 20:10:15.740998 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T20:10:15.746159484+00:00 stderr F I0813 20:10:15.746132 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:10:15.746209645+00:00 stderr F I0813 20:10:15.746197 1 log.go:245] Successfully updated Operator config from Cluster config
2025-08-13T20:10:15.747837812+00:00 stderr F I0813 20:10:15.746759 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:10:15.756361906+00:00 stderr F I0813 20:10:15.755365 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:10:15.787690624+00:00 stderr F I0813 20:10:15.787166 1 log.go:245] httpProxy, httpsProxy and noProxy not defined for proxy 'cluster'; validation will be skipped
2025-08-13T20:10:15.905854422+00:00 stderr F I0813 20:10:15.905622 1 log.go:245] Reconciling proxy 'cluster' complete
2025-08-13T20:10:16.493622982+00:00 stderr F I0813 20:10:16.492608 1 dashboard_controller.go:113] Reconcile dashboards
2025-08-13T20:10:16.504737851+00:00 stderr F I0813 20:10:16.504547 1 dashboard_controller.go:139] Applying dashboards manifests
2025-08-13T20:10:16.521899913+00:00 stderr F I0813 20:10:16.521531 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health
2025-08-13T20:10:16.537375427+00:00 stderr F I0813 20:10:16.537311 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful
2025-08-13T20:10:16.537504620+00:00 stderr F I0813 20:10:16.537487 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats
2025-08-13T20:10:16.551889763+00:00 stderr F I0813 20:10:16.551720 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful
2025-08-13T20:10:16.692362681+00:00 stderr F I0813 20:10:16.692297 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2025-08-13T20:10:16.700454323+00:00 stderr F I0813 20:10:16.700272 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:16.702027648+00:00 stderr F I0813 20:10:16.700874 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:16.795859278+00:00 stderr F I0813 20:10:16.795186 1 log.go:245] successful reconciliation
2025-08-13T20:10:17.097212428+00:00 stderr F I0813 20:10:17.096540 1 log.go:245] Reconciling configmap from openshift-machine-api/mao-trusted-ca
2025-08-13T20:10:17.100598595+00:00 stderr F I0813 20:10:17.100378 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T20:10:17.300866087+00:00 stderr F I0813 20:10:17.300728 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:10:17.304838281+00:00 stderr F I0813 20:10:17.304748 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:10:17.310176314+00:00 stderr F I0813 20:10:17.310118 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:10:17.310201994+00:00 stderr F I0813 20:10:17.310159 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000940380 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:10:17.318161843+00:00 stderr F I0813 20:10:17.317414 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:10:17.318235395+00:00 stderr F I0813 20:10:17.318219 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:10:17.318275836+00:00 stderr F I0813 20:10:17.318262 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:10:17.322330792+00:00 stderr F I0813 20:10:17.322283 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:10:17.322399284+00:00 stderr F I0813 20:10:17.322386 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:10:17.322429815+00:00 stderr F I0813 20:10:17.322418 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:10:17.322458676+00:00 stderr F I0813 20:10:17.322448 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:10:17.322533848+00:00 stderr F I0813 20:10:17.322507 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:10:17.491410320+00:00 stderr F I0813 20:10:17.491325 1 log.go:245] Reconciling Network.config.openshift.io cluster
2025-08-13T20:10:17.703263844+00:00 stderr F I0813 20:10:17.703141 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:10:17.719277103+00:00 stderr F I0813 20:10:17.719118 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:10:17.719277103+00:00 stderr F E0813 20:10:17.719231 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="1f50cad5-60d2-4f3d-b8ab-18987764a41f"
2025-08-13T20:10:17.725032788+00:00 stderr F I0813 20:10:17.724614 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:10:17.897303557+00:00 stderr F I0813 20:10:17.896154 1 dashboard_controller.go:113] Reconcile dashboards
2025-08-13T20:10:17.902614249+00:00 stderr F I0813 20:10:17.902576 1 dashboard_controller.go:139] Applying dashboards manifests
2025-08-13T20:10:17.923173799+00:00 stderr F I0813 20:10:17.922722 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health
2025-08-13T20:10:17.943853801+00:00 stderr F I0813 20:10:17.943707 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful
2025-08-13T20:10:17.943853801+00:00 stderr F I0813 20:10:17.943755 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats
2025-08-13T20:10:17.957862963+00:00 stderr F I0813 20:10:17.957742 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful
2025-08-13T20:10:18.093558154+00:00 stderr F I0813 20:10:18.093248 1 log.go:245]
Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T20:10:18.099429212+00:00 stderr F I0813 20:10:18.099385 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.109217933+00:00 stderr F I0813 20:10:18.108590 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.178884190+00:00 stderr F I0813 20:10:18.178330 1 allowlist_controller.go:146] Successfully updated sysctl allowlist 2025-08-13T20:10:18.200862150+00:00 stderr F I0813 20:10:18.198317 1 log.go:245] successful reconciliation 2025-08-13T20:10:18.289874792+00:00 stderr F I0813 20:10:18.289321 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:10:18.313854600+00:00 stderr F I0813 20:10:18.313708 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:10:18.313854600+00:00 stderr F I0813 20:10:18.313768 1 log.go:245] Successfully updated Operator config from Cluster config 2025-08-13T20:10:18.315971261+00:00 stderr F I0813 20:10:18.315882 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T20:10:18.492054889+00:00 stderr F I0813 20:10:18.491952 1 log.go:245] Reconciling configmap from openshift-authentication-operator/trusted-ca-bundle 2025-08-13T20:10:18.495456387+00:00 stderr F I0813 20:10:18.495370 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:19.498520906+00:00 stderr F I0813 20:10:19.498423 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:10:19.503122328+00:00 stderr F I0813 20:10:19.503082 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:10:19.507103102+00:00 stderr F I0813 
20:10:19.507068 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:10:19.507187084+00:00 stderr F I0813 20:10:19.507146 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003acdb80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:10:19.517054227+00:00 stderr F I0813 20:10:19.517007 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:10:19.517129689+00:00 stderr F I0813 20:10:19.517115 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:10:19.517163580+00:00 stderr F I0813 20:10:19.517152 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:10:19.521718241+00:00 stderr F I0813 20:10:19.521682 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:10:19.521859855+00:00 stderr F I0813 20:10:19.521764 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:10:19.521903126+00:00 stderr F I0813 20:10:19.521888 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:10:19.522002509+00:00 stderr F I0813 20:10:19.521986 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:10:19.522061621+00:00 stderr F I0813 20:10:19.522049 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 
2025-08-13T20:10:19.703470441+00:00 stderr F I0813 20:10:19.703361 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:10:19.720904881+00:00 stderr F I0813 20:10:19.720475 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:10:19.720904881+00:00 stderr F E0813 20:10:19.720610 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="046702fa-95f0-4511-9599-1da40328714e" 2025-08-13T20:10:19.731743692+00:00 stderr F I0813 20:10:19.731668 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:10:19.892101659+00:00 stderr F I0813 20:10:19.890303 1 log.go:245] Reconciling configmap from openshift-console/trusted-ca-bundle 2025-08-13T20:10:19.893996923+00:00 stderr F I0813 20:10:19.893943 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:20.091372732+00:00 stderr F I0813 20:10:20.091233 1 log.go:245] Reconciling configmap from openshift-controller-manager/openshift-global-ca 2025-08-13T20:10:20.093704069+00:00 stderr F I0813 20:10:20.093616 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:20.493945505+00:00 stderr F I0813 20:10:20.493604 1 log.go:245] Reconciling configmap from openshift-image-registry/trusted-ca 
2025-08-13T20:10:20.498337961+00:00 stderr F I0813 20:10:20.498289 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:20.891459272+00:00 stderr F I0813 20:10:20.891337 1 log.go:245] Reconciling configmap from openshift-ingress-operator/trusted-ca 2025-08-13T20:10:20.905319379+00:00 stderr F I0813 20:10:20.905262 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.103277445+00:00 stderr F I0813 20:10:21.103186 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:10:21.106639701+00:00 stderr F I0813 20:10:21.106580 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:10:21.108664629+00:00 stderr F I0813 20:10:21.108585 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:10:21.108664629+00:00 stderr F I0813 20:10:21.108619 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc0010c5800 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:10:21.114252159+00:00 stderr F I0813 20:10:21.114159 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:10:21.114252159+00:00 stderr F I0813 20:10:21.114206 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:10:21.114252159+00:00 stderr F I0813 20:10:21.114224 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:10:21.118045648+00:00 stderr F I0813 
20:10:21.117969 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:10:21.118045648+00:00 stderr F I0813 20:10:21.118007 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:10:21.118045648+00:00 stderr F I0813 20:10:21.118026 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:10:21.118045648+00:00 stderr F I0813 20:10:21.118033 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:10:21.118318786+00:00 stderr F I0813 20:10:21.118274 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:10:21.295458115+00:00 stderr F I0813 20:10:21.293016 1 log.go:245] Reconciling configmap from openshift-kube-apiserver/trusted-ca-bundle 2025-08-13T20:10:21.297510904+00:00 stderr F I0813 20:10:21.297171 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.491060843+00:00 stderr F I0813 20:10:21.490953 1 log.go:245] Reconciling configmap from openshift-kube-controller-manager/trusted-ca-bundle 2025-08-13T20:10:21.494009628+00:00 stderr F I0813 20:10:21.493448 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.504048745+00:00 stderr F I0813 20:10:21.503017 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:10:21.523490663+00:00 stderr F I0813 20:10:21.523122 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:10:21.523490663+00:00 stderr F I0813 20:10:21.523183 1 log.go:245] Starting render phase 2025-08-13T20:10:21.561189264+00:00 stderr F I0813 20:10:21.559944 1 
ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:10:21.602377435+00:00 stderr F I0813 20:10:21.601509 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:10:21.602377435+00:00 stderr F I0813 20:10:21.601549 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:10:21.602377435+00:00 stderr F I0813 20:10:21.601586 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:10:21.602377435+00:00 stderr F I0813 20:10:21.601624 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:10:21.691747777+00:00 stderr F I0813 20:10:21.691660 1 log.go:245] Reconciling configmap from openshift-marketplace/marketplace-trusted-ca 2025-08-13T20:10:21.695859975+00:00 stderr F I0813 20:10:21.694041 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.707107557+00:00 stderr F I0813 20:10:21.707020 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:10:21.707107557+00:00 stderr F I0813 20:10:21.707053 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:10:21.894252693+00:00 stderr F I0813 20:10:21.893607 1 log.go:245] Reconciling configmap from openshift-config-managed/trusted-ca-bundle 2025-08-13T20:10:21.897754513+00:00 stderr F I0813 20:10:21.897684 1 log.go:245] trusted-ca-bundle changed, updating 12 configMaps 2025-08-13T20:10:21.897838216+00:00 stderr F I0813 20:10:21.897816 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 
2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.897857 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.897952 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.897985 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898029 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898082 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898122 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898151 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898173 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898203 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898244 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.898652789+00:00 stderr F I0813 20:10:21.898265 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T20:10:21.939293424+00:00 stderr F I0813 20:10:21.938402 1 log.go:245] Render phase done, rendered 112 objects 
2025-08-13T20:10:22.312176035+00:00 stderr F I0813 20:10:22.310021 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:10:22.325019283+00:00 stderr F I0813 20:10:22.323766 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:10:22.325019283+00:00 stderr F I0813 20:10:22.324015 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:10:22.336750280+00:00 stderr F I0813 20:10:22.336655 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:10:22.336750280+00:00 stderr F I0813 20:10:22.336710 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:10:22.346664414+00:00 stderr F I0813 20:10:22.346573 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:10:22.346753177+00:00 stderr F I0813 20:10:22.346663 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:10:22.362561200+00:00 stderr F I0813 20:10:22.362508 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:10:22.362671523+00:00 stderr F I0813 20:10:22.362656 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:10:22.370158458+00:00 stderr F I0813 20:10:22.369302 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:10:22.370158458+00:00 stderr F I0813 20:10:22.369394 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRole) /multus 2025-08-13T20:10:22.382879762+00:00 stderr F I0813 20:10:22.382733 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:10:22.382879762+00:00 stderr F I0813 20:10:22.382865 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:10:22.397029888+00:00 stderr F I0813 20:10:22.396511 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:10:22.397029888+00:00 stderr F I0813 20:10:22.396601 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:10:22.409078684+00:00 stderr F I0813 20:10:22.408017 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:10:22.409078684+00:00 stderr F I0813 20:10:22.408086 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:10:22.421082108+00:00 stderr F I0813 20:10:22.419645 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:10:22.421082108+00:00 stderr F I0813 20:10:22.419731 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:10:22.428624814+00:00 stderr F I0813 20:10:22.427127 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:10:22.428624814+00:00 stderr F I0813 20:10:22.427193 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:10:22.520892639+00:00 stderr F I0813 20:10:22.520085 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:10:22.520892639+00:00 stderr F I0813 20:10:22.520153 1 log.go:245] reconciling 
(rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:10:22.721411529+00:00 stderr F I0813 20:10:22.721251 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:10:22.721411529+00:00 stderr F I0813 20:10:22.721321 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:10:22.927994422+00:00 stderr F I0813 20:10:22.927876 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:10:22.927994422+00:00 stderr F I0813 20:10:22.927981 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:10:23.123548518+00:00 stderr F I0813 20:10:23.121703 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:10:23.123548518+00:00 stderr F I0813 20:10:23.121845 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:10:23.323863840+00:00 stderr F I0813 20:10:23.322137 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:10:23.323863840+00:00 stderr F I0813 20:10:23.322283 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:10:23.524312658+00:00 stderr F I0813 20:10:23.521103 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:10:23.524312658+00:00 stderr F I0813 20:10:23.521193 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:10:23.721742038+00:00 stderr F I0813 20:10:23.721581 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:10:23.721742038+00:00 stderr F I0813 20:10:23.721722 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:10:23.924133671+00:00 stderr F I0813 20:10:23.922598 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:10:23.924133671+00:00 stderr F I0813 20:10:23.922687 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:10:24.119658897+00:00 stderr F I0813 20:10:24.119602 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:10:24.119879823+00:00 stderr F I0813 20:10:24.119855 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:10:24.319565018+00:00 stderr F I0813 20:10:24.319507 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:10:24.319760864+00:00 stderr F I0813 20:10:24.319741 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:10:24.522948220+00:00 stderr F I0813 20:10:24.522836 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:10:24.522948220+00:00 stderr F I0813 20:10:24.522904 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:10:24.799190120+00:00 stderr F I0813 20:10:24.799067 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:10:24.799372835+00:00 stderr F I0813 20:10:24.799350 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:10:24.931274677+00:00 stderr F I0813 20:10:24.931220 1 log.go:245] Apply / 
Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:10:24.931429361+00:00 stderr F I0813 20:10:24.931414 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:10:25.124266860+00:00 stderr F I0813 20:10:25.123539 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:10:25.124382073+00:00 stderr F I0813 20:10:25.124367 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:10:25.320061254+00:00 stderr F I0813 20:10:25.320004 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:10:25.320175797+00:00 stderr F I0813 20:10:25.320161 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:10:25.519275435+00:00 stderr F I0813 20:10:25.518652 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:10:25.519275435+00:00 stderr F I0813 20:10:25.518731 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:10:25.736134673+00:00 stderr F I0813 20:10:25.736036 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:10:25.736134673+00:00 stderr F I0813 20:10:25.736108 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:10:25.925590275+00:00 stderr F I0813 20:10:25.925203 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:10:25.925590275+00:00 stderr F I0813 20:10:25.925275 1 log.go:245] reconciling (/v1, Kind=Service) 
openshift-multus/network-metrics-service 2025-08-13T20:10:26.123879040+00:00 stderr F I0813 20:10:26.123028 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:10:26.123879040+00:00 stderr F I0813 20:10:26.123099 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:10:26.323700779+00:00 stderr F I0813 20:10:26.323533 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:10:26.323700779+00:00 stderr F I0813 20:10:26.323636 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:10:26.527292577+00:00 stderr F I0813 20:10:26.527233 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:10:26.527426310+00:00 stderr F I0813 20:10:26.527402 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:10:26.724639985+00:00 stderr F I0813 20:10:26.724538 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:10:26.724699056+00:00 stderr F I0813 20:10:26.724643 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:10:26.923624379+00:00 stderr F I0813 20:10:26.923318 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:10:26.923887836+00:00 stderr F I0813 20:10:26.923867 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:10:27.128076081+00:00 stderr F I0813 20:10:27.127842 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 
2025-08-13T20:10:27.128076081+00:00 stderr F I0813 20:10:27.128023 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:10:27.325535142+00:00 stderr F I0813 20:10:27.325407 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:10:27.325535142+00:00 stderr F I0813 20:10:27.325510 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:10:27.529757257+00:00 stderr F I0813 20:10:27.529598 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:10:27.529757257+00:00 stderr F I0813 20:10:27.529682 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:10:27.724759538+00:00 stderr F I0813 20:10:27.724633 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:10:27.724759538+00:00 stderr F I0813 20:10:27.724711 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:10:27.920503770+00:00 stderr F I0813 20:10:27.920391 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:10:27.920503770+00:00 stderr F I0813 20:10:27.920468 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:10:28.124461178+00:00 stderr F I0813 20:10:28.123842 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:10:28.124461178+00:00 stderr F I0813 20:10:28.124025 1 log.go:245] reconciling 
(rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:10:28.319762888+00:00 stderr F I0813 20:10:28.319650 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:10:28.319762888+00:00 stderr F I0813 20:10:28.319733 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:10:28.522318335+00:00 stderr F I0813 20:10:28.522195 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:10:28.522318335+00:00 stderr F I0813 20:10:28.522260 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:10:28.721852396+00:00 stderr F I0813 20:10:28.721622 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:10:28.721852396+00:00 stderr F I0813 20:10:28.721715 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:10:28.927618676+00:00 stderr F I0813 20:10:28.927482 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:10:28.927618676+00:00 stderr F I0813 20:10:28.927562 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:10:29.136455113+00:00 stderr F I0813 20:10:29.136266 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:10:29.136455113+00:00 stderr F I0813 20:10:29.136352 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:10:29.326397239+00:00 stderr F I0813 20:10:29.326310 1 log.go:245] Apply / Create of 
(apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T20:10:29.326440441+00:00 stderr F I0813 20:10:29.326399       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-08-13T20:10:29.529527483+00:00 stderr F I0813 20:10:29.529420       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-08-13T20:10:29.529527483+00:00 stderr F I0813 20:10:29.529505       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-08-13T20:10:29.724242336+00:00 stderr F I0813 20:10:29.724035       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:10:29.724242336+00:00 stderr F I0813 20:10:29.724102       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:10:29.952650554+00:00 stderr F I0813 20:10:29.952593       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:10:29.952941233+00:00 stderr F I0813 20:10:29.952896       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:10:30.156097907+00:00 stderr F I0813 20:10:30.155900       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:10:30.156097907+00:00 stderr F I0813 20:10:30.156025       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:10:30.319976035+00:00 stderr F I0813 20:10:30.319680       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:10:30.319976035+00:00 stderr F I0813 20:10:30.319769       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:10:30.520501744+00:00 stderr F I0813 20:10:30.520394       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:10:30.520563216+00:00 stderr F I0813 20:10:30.520555       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:10:30.734934362+00:00 stderr F I0813 20:10:30.734182       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:10:30.734934362+00:00 stderr F I0813 20:10:30.734256       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:10:30.921047819+00:00 stderr F I0813 20:10:30.920898       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:10:30.921177432+00:00 stderr F I0813 20:10:30.921161       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:10:31.119574791+00:00 stderr F I0813 20:10:31.119426       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:10:31.119574791+00:00 stderr F I0813 20:10:31.119505       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:10:31.322412276+00:00 stderr F I0813 20:10:31.322271       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:10:31.322412276+00:00 stderr F I0813 20:10:31.322363       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:10:31.521363790+00:00 stderr F I0813 20:10:31.521222       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:10:31.521363790+00:00 stderr F I0813 20:10:31.521297       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:10:31.723192067+00:00 stderr F I0813 20:10:31.723043       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:10:31.723192067+00:00 stderr F I0813 20:10:31.723129       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:10:31.920969337+00:00 stderr F I0813 20:10:31.920680       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:10:31.920969337+00:00 stderr F I0813 20:10:31.920767       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:10:32.121627650+00:00 stderr F I0813 20:10:32.121471       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:10:32.121627650+00:00 stderr F I0813 20:10:32.121564       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:10:32.324740024+00:00 stderr F I0813 20:10:32.324619       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:10:32.324740024+00:00 stderr F I0813 20:10:32.324722       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:10:32.518459638+00:00 stderr F I0813 20:10:32.518327       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:10:32.518459638+00:00 stderr F I0813 20:10:32.518405       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:10:32.718275557+00:00 stderr F I0813 20:10:32.718118       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:10:32.718275557+00:00 stderr F I0813 20:10:32.718222       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:10:32.922672908+00:00 stderr F I0813 20:10:32.922513       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:10:32.922672908+00:00 stderr F I0813 20:10:32.922624       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:10:33.121058765+00:00 stderr F I0813 20:10:33.120958       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:10:33.121058765+00:00 stderr F I0813 20:10:33.121044       1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:10:33.322137191+00:00 stderr F I0813 20:10:33.321998       1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:10:33.322137191+00:00 stderr F I0813 20:10:33.322076       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:10:33.521951420+00:00 stderr F I0813 20:10:33.521861       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:10:33.522072753+00:00 stderr F I0813 20:10:33.522056       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:10:33.721470860+00:00 stderr F I0813 20:10:33.721344       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:10:33.721470860+00:00 stderr F I0813 20:10:33.721428       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:10:33.924454139+00:00 stderr F I0813 20:10:33.924308       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:10:33.924660925+00:00 stderr F I0813 20:10:33.924645       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:10:34.122531938+00:00 stderr F I0813 20:10:34.122431       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:10:34.122590230+00:00 stderr F I0813 20:10:34.122528       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:10:34.318450145+00:00 stderr F I0813 20:10:34.318305       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:10:34.318450145+00:00 stderr F I0813 20:10:34.318389       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:10:34.527260152+00:00 stderr F I0813 20:10:34.527157       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:10:34.528939790+00:00 stderr F I0813 20:10:34.528564       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:10:34.725633460+00:00 stderr F I0813 20:10:34.725198       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:10:34.725633460+00:00 stderr F I0813 20:10:34.725356       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:10:34.925560072+00:00 stderr F I0813 20:10:34.925500       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:10:34.925682115+00:00 stderr F I0813 20:10:34.925667       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:10:35.119538073+00:00 stderr F I0813 20:10:35.119417       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:10:35.119578044+00:00 stderr F I0813 20:10:35.119548       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:10:35.318718504+00:00 stderr F I0813 20:10:35.318551       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:10:35.319024753+00:00 stderr F I0813 20:10:35.319001       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:10:35.519099909+00:00 stderr F I0813 20:10:35.518651       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:10:35.519099909+00:00 stderr F I0813 20:10:35.518748       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:10:35.722389858+00:00 stderr F I0813 20:10:35.722324       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:10:35.722567473+00:00 stderr F I0813 20:10:35.722546       1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:10:35.921644231+00:00 stderr F I0813 20:10:35.921586       1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:10:35.922010341+00:00 stderr F I0813 20:10:35.921740       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:10:36.129045217+00:00 stderr F I0813 20:10:36.128878       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:10:36.129045217+00:00 stderr F I0813 20:10:36.128967       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:10:36.342385344+00:00 stderr F I0813 20:10:36.342256       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:10:36.342385344+00:00 stderr F I0813 20:10:36.342348       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:10:36.519449990+00:00 stderr F I0813 20:10:36.519340       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:10:36.519449990+00:00 stderr F I0813 20:10:36.519424       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:10:36.719878037+00:00 stderr F I0813 20:10:36.719706       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:10:36.719948579+00:00 stderr F I0813 20:10:36.719876       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:10:36.919298924+00:00 stderr F I0813 20:10:36.919195       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:10:36.919298924+00:00 stderr F I0813 20:10:36.919269       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:10:37.118161506+00:00 stderr F I0813 20:10:37.117957       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:10:37.118161506+00:00 stderr F I0813 20:10:37.118039       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:10:37.322144624+00:00 stderr F I0813 20:10:37.322025       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:10:37.322144624+00:00 stderr F I0813 20:10:37.322107       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:10:37.519550883+00:00 stderr F I0813 20:10:37.519429       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:10:37.519550883+00:00 stderr F I0813 20:10:37.519499       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:10:37.718666492+00:00 stderr F I0813 20:10:37.718559       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:10:37.718666492+00:00 stderr F I0813 20:10:37.718648       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:10:37.940935225+00:00 stderr F I0813 20:10:37.938980       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:10:37.940935225+00:00 stderr F I0813 20:10:37.939317       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:10:38.123136988+00:00 stderr F I0813 20:10:38.123011       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:10:38.123136988+00:00 stderr F I0813 20:10:38.123099       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:10:38.327121127+00:00 stderr F I0813 20:10:38.326997       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:10:38.327121127+00:00 stderr F I0813 20:10:38.327105       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:10:38.518325349+00:00 stderr F I0813 20:10:38.518189       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:10:38.518325349+00:00 stderr F I0813 20:10:38.518302       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:10:38.720081113+00:00 stderr F I0813 20:10:38.719936       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:10:38.720081113+00:00 stderr F I0813 20:10:38.720022       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:10:38.924877685+00:00 stderr F I0813 20:10:38.924686       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:10:38.924877685+00:00 stderr F I0813 20:10:38.924830       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:10:39.119301820+00:00 stderr F I0813 20:10:39.119180       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:10:39.119301820+00:00 stderr F I0813 20:10:39.119262       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:10:39.322502676+00:00 stderr F I0813 20:10:39.322362       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:10:39.322502676+00:00 stderr F I0813 20:10:39.322449       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:10:39.522842260+00:00 stderr F I0813 20:10:39.522711       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:10:39.522905202+00:00 stderr F I0813 20:10:39.522864       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:10:39.720545158+00:00 stderr F I0813 20:10:39.720445       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:10:39.720545158+00:00 stderr F I0813 20:10:39.720510       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:10:39.925620748+00:00 stderr F I0813 20:10:39.923488       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:10:39.925620748+00:00 stderr F I0813 20:10:39.923561       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:10:40.118724484+00:00 stderr F I0813 20:10:40.118611       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:10:40.118724484+00:00 stderr F I0813 20:10:40.118691       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:10:40.320312704+00:00 stderr F I0813 20:10:40.320183       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:10:40.320312704+00:00 stderr F I0813 20:10:40.320273       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:10:40.530647374+00:00 stderr F I0813 20:10:40.530503       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:10:40.530647374+00:00 stderr F I0813 20:10:40.530601       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:10:40.719393536+00:00 stderr F I0813 20:10:40.719282       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:10:40.719446957+00:00 stderr F I0813 20:10:40.719386       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:10:40.919968957+00:00 stderr F I0813 20:10:40.919588       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:10:40.919968957+00:00 stderr F I0813 20:10:40.919670       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:10:41.120396652+00:00 stderr F I0813 20:10:41.120317       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:10:41.120438443+00:00 stderr F I0813 20:10:41.120409       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:10:41.327983544+00:00 stderr F I0813 20:10:41.327640       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:10:41.328054846+00:00 stderr F I0813 20:10:41.327997       1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:10:41.522964074+00:00 stderr F I0813 20:10:41.522755       1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:10:41.523020465+00:00 stderr F I0813 20:10:41.522971       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:10:41.725399998+00:00 stderr F I0813 20:10:41.725267       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:10:41.725399998+00:00 stderr F I0813 20:10:41.725342       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:10:41.919321038+00:00 stderr F I0813 20:10:41.919192       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:10:41.919472992+00:00 stderr F I0813 20:10:41.919407       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:10:42.120738593+00:00 stderr F I0813 20:10:42.120473       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:10:42.120738593+00:00 stderr F I0813 20:10:42.120550       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T20:10:42.321723816+00:00 stderr F I0813 20:10:42.321611       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:10:42.321841499+00:00 stderr F I0813 20:10:42.321711       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:10:42.526454565+00:00 stderr F I0813 20:10:42.526331       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T20:10:42.526454565+00:00 stderr F I0813 20:10:42.526424       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T20:10:42.719186791+00:00 stderr F I0813 20:10:42.719072       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T20:10:42.719252073+00:00 stderr F I0813 20:10:42.719180       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T20:10:42.922417198+00:00 stderr F I0813 20:10:42.922270       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:10:42.939641362+00:00 stderr F I0813 20:10:42.939553       1 log.go:245] Operconfig Controller complete
2025-08-13T20:13:15.311294346+00:00 stderr F I0813 20:13:15.310918       1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T20:13:42.940568520+00:00 stderr F I0813 20:13:42.940483       1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:13:43.374828721+00:00 stderr F I0813 20:13:43.374716       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:13:43.377434225+00:00 stderr F I0813 20:13:43.377324       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:13:43.380157543+00:00 stderr F I0813 20:13:43.380092       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:13:43.380179564+00:00 stderr F I0813 20:13:43.380129       1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc00074a600 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:13:43.385634841+00:00 stderr F I0813 20:13:43.385539       1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:13:43.385634841+00:00 stderr F I0813 20:13:43.385589       1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:13:43.385634841+00:00 stderr F I0813 20:13:43.385600       1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389128       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389169       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389192       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389200       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:13:43.389670876+00:00 stderr F I0813 20:13:43.389330       1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:13:43.403831402+00:00 stderr F I0813 20:13:43.403740       1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:13:43.418676848+00:00 stderr F I0813 20:13:43.418573       1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:13:43.418676848+00:00 stderr F I0813 20:13:43.418656       1 log.go:245] Starting render phase
2025-08-13T20:13:43.432573196+00:00 stderr F I0813 20:13:43.432482       1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2025-08-13T20:13:43.469153245+00:00 stderr F I0813 20:13:43.469024       1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-08-13T20:13:43.469153245+00:00 stderr F I0813 20:13:43.469065       1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-08-13T20:13:43.469153245+00:00 stderr F I0813 20:13:43.469136       1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-08-13T20:13:43.469218897+00:00 stderr F I0813 20:13:43.469166       1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-08-13T20:13:43.487704557+00:00 stderr F I0813 20:13:43.487597       1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-08-13T20:13:43.487704557+00:00 stderr F I0813 20:13:43.487653       1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-08-13T20:13:43.503889401+00:00 stderr F I0813 20:13:43.503672       1 log.go:245] Render phase done, rendered 112 objects
2025-08-13T20:13:43.520217339+00:00 stderr F I0813 20:13:43.520092       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-08-13T20:13:43.527586020+00:00 stderr F I0813 20:13:43.527422       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-08-13T20:13:43.527586020+00:00 stderr F I0813 20:13:43.527494       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-08-13T20:13:43.536987710+00:00 stderr F I0813 20:13:43.536676       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-08-13T20:13:43.536987710+00:00 stderr F I0813 20:13:43.536759       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-08-13T20:13:43.545637138+00:00 stderr F I0813 20:13:43.545533       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-08-13T20:13:43.545637138+00:00 stderr F I0813 20:13:43.545579       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-08-13T20:13:43.554432040+00:00 stderr F I0813 20:13:43.554328       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-08-13T20:13:43.554432040+00:00 stderr F I0813 20:13:43.554370       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-08-13T20:13:43.563117879+00:00 stderr F I0813 20:13:43.562860       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-08-13T20:13:43.563117879+00:00 stderr F I0813 20:13:43.562923       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-08-13T20:13:43.570672736+00:00 stderr F I0813 20:13:43.570530       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-08-13T20:13:43.570672736+00:00 stderr F I0813 20:13:43.570618       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-08-13T20:13:43.578617994+00:00 stderr F I0813 20:13:43.578499       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-08-13T20:13:43.578617994+00:00 stderr F I0813 20:13:43.578605       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-08-13T20:13:43.584363808+00:00 stderr F I0813 20:13:43.584218       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-08-13T20:13:43.584363808+00:00 stderr F I0813 20:13:43.584292       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-08-13T20:13:43.590532865+00:00 stderr F I0813 20:13:43.590429       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-08-13T20:13:43.590532865+00:00 stderr F I0813 20:13:43.590502       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-08-13T20:13:43.615068418+00:00 stderr F I0813 20:13:43.614748       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-08-13T20:13:43.615068418+00:00 stderr F I0813 20:13:43.615055       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-08-13T20:13:43.816850564+00:00 stderr F I0813 20:13:43.816620       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-08-13T20:13:43.816850564+00:00 stderr F I0813 20:13:43.816732       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-08-13T20:13:44.045211281+00:00 stderr F I0813 20:13:44.045101       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-08-13T20:13:44.045211281+00:00 stderr F I0813 20:13:44.045173       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-08-13T20:13:44.214676170+00:00 stderr F I0813 20:13:44.214560       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-08-13T20:13:44.214676170+00:00 stderr F I0813 20:13:44.214633       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-08-13T20:13:44.414017305+00:00 stderr F I0813 20:13:44.413907       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-08-13T20:13:44.414017305+00:00 stderr F I0813 20:13:44.414004       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-08-13T20:13:44.616854340+00:00 stderr F I0813 20:13:44.616651       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-08-13T20:13:44.617054796+00:00 stderr F I0813 20:13:44.616892       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-08-13T20:13:44.816605287+00:00 stderr F I0813 20:13:44.815293       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-08-13T20:13:44.816605287+00:00 stderr F I0813 20:13:44.816217       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-08-13T20:13:45.018822195+00:00 stderr F I0813 20:13:45.018280       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-08-13T20:13:45.018822195+00:00 stderr F I0813 20:13:45.018359       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-08-13T20:13:45.216616916+00:00 stderr F I0813 20:13:45.216222       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-08-13T20:13:45.216616916+00:00 stderr F I0813 20:13:45.216293       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-08-13T20:13:45.417322111+00:00 stderr F I0813 20:13:45.417262       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-08-13T20:13:45.417534628+00:00 stderr F I0813 20:13:45.417515       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-08-13T20:13:45.615292018+00:00 stderr F I0813 20:13:45.615154       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-08-13T20:13:45.615292018+00:00 stderr F I0813 20:13:45.615220       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-08-13T20:13:45.814613313+00:00 stderr F I0813 20:13:45.814560       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-08-13T20:13:45.814888881+00:00 stderr F I0813 20:13:45.814867       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-08-13T20:13:46.025141219+00:00 stderr F I0813 20:13:46.024971       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-08-13T20:13:46.025141219+00:00 stderr F I0813 20:13:46.025052       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-08-13T20:13:46.225747440+00:00 stderr F I0813 20:13:46.225614       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-08-13T20:13:46.225747440+00:00 stderr F I0813 20:13:46.225690       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-08-13T20:13:46.414999516+00:00 stderr F I0813 20:13:46.414885       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T20:13:46.415107769+00:00 stderr F I0813 20:13:46.415093       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-08-13T20:13:46.614743393+00:00 stderr F I0813 20:13:46.614643       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-08-13T20:13:46.614913388+00:00 stderr F I0813 20:13:46.614882 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:13:46.815699004+00:00 stderr F I0813 20:13:46.815560 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:13:46.815699004+00:00 stderr F I0813 20:13:46.815642 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:13:47.022632887+00:00 stderr F I0813 20:13:47.022324 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:13:47.022632887+00:00 stderr F I0813 20:13:47.022434 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:13:47.218154962+00:00 stderr F I0813 20:13:47.217983 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:13:47.218154962+00:00 stderr F I0813 20:13:47.218104 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:13:47.420447001+00:00 stderr F I0813 20:13:47.420287 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:13:47.420447001+00:00 stderr F I0813 20:13:47.420392 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:13:47.615309948+00:00 stderr F I0813 20:13:47.615199 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:13:47.615309948+00:00 stderr F I0813 20:13:47.615294 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:13:47.815000913+00:00 stderr F I0813 
20:13:47.814648 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:13:47.815000913+00:00 stderr F I0813 20:13:47.814727 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:13:48.018637922+00:00 stderr F I0813 20:13:48.018534 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:13:48.018637922+00:00 stderr F I0813 20:13:48.018579 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:13:48.217180944+00:00 stderr F I0813 20:13:48.216324 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:13:48.217180944+00:00 stderr F I0813 20:13:48.216413 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:13:48.414719087+00:00 stderr F I0813 20:13:48.414622 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:13:48.414719087+00:00 stderr F I0813 20:13:48.414691 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:13:48.614108514+00:00 stderr F I0813 20:13:48.613413 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:13:48.614149025+00:00 stderr F I0813 20:13:48.614131 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:13:48.817015652+00:00 stderr F I0813 20:13:48.816916 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:13:48.817055383+00:00 
stderr F I0813 20:13:48.817009 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:13:49.019493637+00:00 stderr F I0813 20:13:49.019344 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:13:49.019493637+00:00 stderr F I0813 20:13:49.019461 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:13:49.217746911+00:00 stderr F I0813 20:13:49.216564 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:13:49.217746911+00:00 stderr F I0813 20:13:49.216637 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:13:49.443239016+00:00 stderr F I0813 20:13:49.442371 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:13:49.443239016+00:00 stderr F I0813 20:13:49.443043 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:13:49.618431979+00:00 stderr F I0813 20:13:49.617657 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:13:49.618431979+00:00 stderr F I0813 20:13:49.618405 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:13:49.816567380+00:00 stderr F I0813 20:13:49.816398 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:13:49.816667963+00:00 stderr F I0813 20:13:49.816646 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 
2025-08-13T20:13:50.014527865+00:00 stderr F I0813 20:13:50.014388 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:13:50.014527865+00:00 stderr F I0813 20:13:50.014460 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:13:50.225897456+00:00 stderr F I0813 20:13:50.225706 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:13:50.225897456+00:00 stderr F I0813 20:13:50.225863 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:13:50.420505806+00:00 stderr F I0813 20:13:50.420379 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:13:50.420505806+00:00 stderr F I0813 20:13:50.420455 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:13:50.625005169+00:00 stderr F I0813 20:13:50.624863 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:13:50.625005169+00:00 stderr F I0813 20:13:50.624933 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:13:50.824855698+00:00 stderr F I0813 20:13:50.824663 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:13:50.824855698+00:00 stderr F I0813 20:13:50.824758 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:13:51.017863611+00:00 stderr F I0813 20:13:51.017673 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, 
Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:13:51.017863611+00:00 stderr F I0813 20:13:51.017749 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:13:51.289760177+00:00 stderr F I0813 20:13:51.287445 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:13:51.289760177+00:00 stderr F I0813 20:13:51.287519 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:13:51.453939414+00:00 stderr F I0813 20:13:51.453768 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:13:51.453939414+00:00 stderr F I0813 20:13:51.453894 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:13:51.615114005+00:00 stderr F I0813 20:13:51.615004 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:13:51.615114005+00:00 stderr F I0813 20:13:51.615074 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:13:51.817841177+00:00 stderr F I0813 20:13:51.817663 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:13:51.817841177+00:00 stderr F I0813 20:13:51.817744 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:13:52.016338579+00:00 stderr F I0813 20:13:52.016238 1 log.go:245] Apply / Create 
of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:13:52.016338579+00:00 stderr F I0813 20:13:52.016312 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:13:52.216276261+00:00 stderr F I0813 20:13:52.216221 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:13:52.216375604+00:00 stderr F I0813 20:13:52.216361 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:13:52.414331500+00:00 stderr F I0813 20:13:52.414205 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:13:52.414331500+00:00 stderr F I0813 20:13:52.414278 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:13:52.615858418+00:00 stderr F I0813 20:13:52.615651 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:13:52.615858418+00:00 stderr F I0813 20:13:52.615720 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:13:52.816060237+00:00 stderr F I0813 20:13:52.815899 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:13:52.816060237+00:00 stderr F I0813 20:13:52.816013 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:13:53.019017056+00:00 stderr F I0813 20:13:53.018889 1 log.go:245] Apply / Create of (/v1, 
Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:13:53.019066518+00:00 stderr F I0813 20:13:53.019021 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:13:53.216182769+00:00 stderr F I0813 20:13:53.216074 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:13:53.216182769+00:00 stderr F I0813 20:13:53.216144 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:13:53.418753757+00:00 stderr F I0813 20:13:53.418643 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:13:53.418753757+00:00 stderr F I0813 20:13:53.418731 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:13:53.625225817+00:00 stderr F I0813 20:13:53.625005 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:13:53.625225817+00:00 stderr F I0813 20:13:53.625073 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:13:53.815445371+00:00 stderr F I0813 20:13:53.815317 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:13:53.815445371+00:00 stderr F I0813 20:13:53.815387 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:13:54.014515318+00:00 stderr F I0813 20:13:54.014401 1 log.go:245] Apply / 
Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:13:54.014515318+00:00 stderr F I0813 20:13:54.014468 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:13:54.231292933+00:00 stderr F I0813 20:13:54.231165 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:13:54.231500299+00:00 stderr F I0813 20:13:54.231279 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:13:54.418158130+00:00 stderr F I0813 20:13:54.417727 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:13:54.418158130+00:00 stderr F I0813 20:13:54.417841 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:13:54.618489853+00:00 stderr F I0813 20:13:54.618262 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:13:54.618489853+00:00 stderr F I0813 20:13:54.618344 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:13:54.815864362+00:00 stderr F I0813 20:13:54.815675 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:13:54.815864362+00:00 stderr F I0813 20:13:54.815760 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:13:55.017643068+00:00 stderr F I0813 20:13:55.017533 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 
2025-08-13T20:13:55.017643068+00:00 stderr F I0813 20:13:55.017618 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:13:55.221649977+00:00 stderr F I0813 20:13:55.221007 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:13:55.221649977+00:00 stderr F I0813 20:13:55.221077 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:13:55.420225810+00:00 stderr F I0813 20:13:55.419610 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:13:55.420225810+00:00 stderr F I0813 20:13:55.419684 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:13:55.616715154+00:00 stderr F I0813 20:13:55.616622 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:13:55.616715154+00:00 stderr F I0813 20:13:55.616674 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:13:55.819831198+00:00 stderr F I0813 20:13:55.819646 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:13:55.820014403+00:00 stderr F I0813 20:13:55.819929 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:13:56.020237654+00:00 stderr F I0813 20:13:56.020119 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:13:56.020237654+00:00 stderr F I0813 20:13:56.020193 1 log.go:245] reconciling 
(monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:13:56.218649183+00:00 stderr F I0813 20:13:56.217976 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:13:56.218649183+00:00 stderr F I0813 20:13:56.218612 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:13:56.443436028+00:00 stderr F I0813 20:13:56.442659 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:13:56.443436028+00:00 stderr F I0813 20:13:56.442737 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:13:56.620633618+00:00 stderr F I0813 20:13:56.620546 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:13:56.620681910+00:00 stderr F I0813 20:13:56.620650 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:13:56.817469731+00:00 stderr F I0813 20:13:56.817347 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:13:56.817704398+00:00 stderr F I0813 20:13:56.817688 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:13:57.016028065+00:00 stderr F I0813 20:13:57.015855 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:13:57.016028065+00:00 stderr F I0813 20:13:57.015967 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:13:57.220501447+00:00 stderr F I0813 20:13:57.219443 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) 
openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:13:57.220501447+00:00 stderr F I0813 20:13:57.220307 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:13:57.424032872+00:00 stderr F I0813 20:13:57.423901 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:13:57.424093534+00:00 stderr F I0813 20:13:57.424028 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:13:57.637005778+00:00 stderr F I0813 20:13:57.636883 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:13:57.637005778+00:00 stderr F I0813 20:13:57.636933 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:13:57.815596899+00:00 stderr F I0813 20:13:57.815465 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:13:57.815596899+00:00 stderr F I0813 20:13:57.815547 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:13:58.015258352+00:00 stderr F I0813 20:13:58.015162 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:13:58.015258352+00:00 stderr F I0813 20:13:58.015233 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:13:58.221014882+00:00 stderr F I0813 20:13:58.220477 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:13:58.221014882+00:00 stderr F I0813 20:13:58.220645 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-network-diagnostics/network-diagnostics 2025-08-13T20:13:58.416857407+00:00 stderr F I0813 20:13:58.416681 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:13:58.416857407+00:00 stderr F I0813 20:13:58.416762 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:13:58.614262667+00:00 stderr F I0813 20:13:58.614088 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:13:58.614262667+00:00 stderr F I0813 20:13:58.614173 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:13:58.816146785+00:00 stderr F I0813 20:13:58.816089 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:13:58.816263938+00:00 stderr F I0813 20:13:58.816250 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:13:59.015859581+00:00 stderr F I0813 20:13:59.015758 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:13:59.015904282+00:00 stderr F I0813 20:13:59.015867 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:13:59.226871771+00:00 stderr F I0813 20:13:59.226666 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:13:59.226871771+00:00 stderr F I0813 20:13:59.226742 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:13:59.419289098+00:00 stderr F I0813 20:13:59.419152 1 log.go:245] Apply / Create of (/v1, Kind=Service) 
openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:13:59.419289098+00:00 stderr F I0813 20:13:59.419230 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:13:59.619827387+00:00 stderr F I0813 20:13:59.619650 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:13:59.619827387+00:00 stderr F I0813 20:13:59.619724 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:13:59.815385904+00:00 stderr F I0813 20:13:59.815249 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:13:59.815385904+00:00 stderr F I0813 20:13:59.815325 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:14:00.015931724+00:00 stderr F I0813 20:14:00.015717 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:14:00.015931724+00:00 stderr F I0813 20:14:00.015885 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:14:00.234377297+00:00 stderr F I0813 20:14:00.234281 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:14:00.234377297+00:00 stderr F I0813 20:14:00.234351 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:14:00.414505691+00:00 stderr F I0813 20:14:00.414410 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:14:00.414505691+00:00 stderr 
F I0813 20:14:00.414484 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:14:00.615417922+00:00 stderr F I0813 20:14:00.615262 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:14:00.615417922+00:00 stderr F I0813 20:14:00.615346 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:14:00.815018555+00:00 stderr F I0813 20:14:00.814875 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:14:00.815018555+00:00 stderr F I0813 20:14:00.814972 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:14:01.019400795+00:00 stderr F I0813 20:14:01.018477 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:14:01.019612631+00:00 stderr F I0813 20:14:01.019521 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:14:01.217608247+00:00 stderr F I0813 20:14:01.216765 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:14:01.217608247+00:00 stderr F I0813 20:14:01.216920 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:14:01.415330817+00:00 stderr F I0813 20:14:01.415212 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:14:01.415330817+00:00 stderr F I0813 20:14:01.415284 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) 
/network-node-identity 2025-08-13T20:14:01.617152582+00:00 stderr F I0813 20:14:01.617097 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:14:01.617282986+00:00 stderr F I0813 20:14:01.617268 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:14:01.814648095+00:00 stderr F I0813 20:14:01.814586 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:14:01.814864021+00:00 stderr F I0813 20:14:01.814760 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:14:02.014336540+00:00 stderr F I0813 20:14:02.014160 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:14:02.015155753+00:00 stderr F I0813 20:14:02.015102 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:14:02.215230920+00:00 stderr F I0813 20:14:02.214761 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:14:02.215286621+00:00 stderr F I0813 20:14:02.215233 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:14:02.415911973+00:00 stderr F I0813 20:14:02.415713 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:14:02.416126300+00:00 stderr F I0813 20:14:02.416052 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) 
openshift-network-node-identity/network-node-identity
2025-08-13T20:14:02.615518257+00:00 stderr F I0813 20:14:02.615394 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:14:02.615518257+00:00 stderr F I0813 20:14:02.615476 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:14:02.838552571+00:00 stderr F I0813 20:14:02.838390 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:14:02.838617163+00:00 stderr F I0813 20:14:02.838584 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:14:03.027283372+00:00 stderr F I0813 20:14:03.027133 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:14:03.027283372+00:00 stderr F I0813 20:14:03.027242 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:14:03.222765997+00:00 stderr F I0813 20:14:03.222560 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:14:03.222765997+00:00 stderr F I0813 20:14:03.222681 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:14:03.420601300+00:00 stderr F I0813 20:14:03.420453 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:14:03.421113364+00:00 stderr F I0813 20:14:03.421057 1 log.go:245] reconciling (/v1, Kind=ServiceAccount)
openshift-network-operator/iptables-alerter
2025-08-13T20:14:03.617066523+00:00 stderr F I0813 20:14:03.616967 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:14:03.617121434+00:00 stderr F I0813 20:14:03.617064 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:14:03.818043865+00:00 stderr F I0813 20:14:03.817875 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T20:14:03.818043865+00:00 stderr F I0813 20:14:03.817982 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T20:14:04.021085856+00:00 stderr F I0813 20:14:04.020926 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T20:14:04.021085856+00:00 stderr F I0813 20:14:04.021019 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T20:14:04.222729668+00:00 stderr F I0813 20:14:04.222669 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:14:04.245133170+00:00 stderr F I0813 20:14:04.245074 1 log.go:245] Operconfig Controller complete
2025-08-13T20:14:46.226385533+00:00 stderr F I0813 20:14:46.226016 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.236012789+00:00 stderr F I0813 20:14:46.235859 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.243878934+00:00 stderr F I0813 20:14:46.243702 1 log.go:245] The check
PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.255624171+00:00 stderr F I0813 20:14:46.255576 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.263317642+00:00 stderr F I0813 20:14:46.263194 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.277946161+00:00 stderr F I0813 20:14:46.276878 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.290137251+00:00 stderr F I0813 20:14:46.289856 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.301030933+00:00 stderr F I0813 20:14:46.300886 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:14:46.311577085+00:00 stderr F I0813 20:14:46.311445 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:15:16.693640856+00:00 stderr F I0813 20:15:16.693425 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-08-13T20:15:16.694565642+00:00 stderr F I0813 20:15:16.694470 1 log.go:245] successful reconciliation
2025-08-13T20:15:18.098319579+00:00 stderr F I0813 20:15:18.093704 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2025-08-13T20:15:18.099662928+00:00 stderr F I0813 20:15:18.099572 1 log.go:245]
successful reconciliation
2025-08-13T20:15:19.293654066+00:00 stderr F I0813 20:15:19.291996 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2025-08-13T20:15:19.293654066+00:00 stderr F I0813 20:15:19.292752 1 log.go:245] successful reconciliation
2025-08-13T20:16:15.330726733+00:00 stderr F I0813 20:16:15.330619 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T20:17:04.248934446+00:00 stderr F I0813 20:17:04.247105 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:17:04.923761548+00:00 stderr F I0813 20:17:04.923308 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:17:04.928896624+00:00 stderr F I0813 20:17:04.927616 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:17:04.933402173+00:00 stderr F I0813 20:17:04.933366 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:17:04.933532467+00:00 stderr F I0813 20:17:04.933442 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc0038ed480 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:17:04.948513865+00:00 stderr F I0813 20:17:04.948374 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:17:04.948513865+00:00 stderr F I0813 20:17:04.948451 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:17:04.948513865+00:00 stderr F I0813 20:17:04.948460 1 ovn_kubernetes.go:1134]
ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:17:04.953758454+00:00 stderr F I0813 20:17:04.953665 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:17:04.953864467+00:00 stderr F I0813 20:17:04.953849 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:17:04.953902188+00:00 stderr F I0813 20:17:04.953887 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:17:04.953931809+00:00 stderr F I0813 20:17:04.953920 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:17:04.954125055+00:00 stderr F I0813 20:17:04.954104 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:17:04.975956108+00:00 stderr F I0813 20:17:04.975644 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:17:05.003896586+00:00 stderr F I0813 20:17:05.002820 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:17:05.003896586+00:00 stderr F I0813 20:17:05.002885 1 log.go:245] Starting render phase
2025-08-13T20:17:05.018018340+00:00 stderr F I0813 20:17:05.016832 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined.
Using: 9107
2025-08-13T20:17:05.095545084+00:00 stderr F I0813 20:17:05.095477 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-08-13T20:17:05.095627966+00:00 stderr F I0813 20:17:05.095611 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-08-13T20:17:05.095709958+00:00 stderr F I0813 20:17:05.095682 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-08-13T20:17:05.095826582+00:00 stderr F I0813 20:17:05.095761 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-08-13T20:17:05.114408032+00:00 stderr F I0813 20:17:05.114126 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-08-13T20:17:05.114408032+00:00 stderr F I0813 20:17:05.114169 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-08-13T20:17:05.128207176+00:00 stderr F I0813 20:17:05.126949 1 log.go:245] Render phase done, rendered 112 objects
2025-08-13T20:17:05.148047803+00:00 stderr F I0813 20:17:05.146815 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-08-13T20:17:05.155462774+00:00 stderr F I0813 20:17:05.155385 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-08-13T20:17:05.155462774+00:00 stderr F I0813 20:17:05.155434 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-08-13T20:17:05.165527762+00:00 stderr F I0813 20:17:05.165441 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-08-13T20:17:05.165630375+00:00 stderr F I0813 20:17:05.165615 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-08-13T20:17:05.189557328+00:00 stderr F I0813 20:17:05.187764 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-08-13T20:17:05.189557328+00:00 stderr F I0813 20:17:05.187892 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-08-13T20:17:05.199769360+00:00 stderr F I0813 20:17:05.198740 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-08-13T20:17:05.200000276+00:00 stderr F I0813 20:17:05.199944 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-08-13T20:17:05.215723615+00:00 stderr F I0813 20:17:05.215615 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-08-13T20:17:05.216057955+00:00 stderr F I0813 20:17:05.216006 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-08-13T20:17:05.229190230+00:00 stderr F I0813 20:17:05.229001 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-08-13T20:17:05.231911968+00:00 stderr F I0813 20:17:05.231476 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-08-13T20:17:05.249879071+00:00 stderr F I0813 20:17:05.247699 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-08-13T20:17:05.249879071+00:00 stderr F I0813 20:17:05.247765 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-08-13T20:17:05.262497681+00:00 stderr F I0813 20:17:05.262441 1
log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-08-13T20:17:05.262631155+00:00 stderr F I0813 20:17:05.262611 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-08-13T20:17:05.269325766+00:00 stderr F I0813 20:17:05.269269 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-08-13T20:17:05.269427789+00:00 stderr F I0813 20:17:05.269414 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-08-13T20:17:05.278072156+00:00 stderr F I0813 20:17:05.276175 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-08-13T20:17:05.278072156+00:00 stderr F I0813 20:17:05.276242 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-08-13T20:17:05.396580740+00:00 stderr F I0813 20:17:05.396502 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-08-13T20:17:05.396580740+00:00 stderr F I0813 20:17:05.396569 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-08-13T20:17:05.600000119+00:00 stderr F I0813 20:17:05.599714 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-08-13T20:17:05.600124203+00:00 stderr F I0813 20:17:05.600107 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-08-13T20:17:05.800443824+00:00 stderr F I0813 20:17:05.799029 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-08-13T20:17:05.800443824+00:00 stderr F I0813 20:17:05.799429 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1,
Kind=ClusterRoleBinding) /multus-whereabouts
2025-08-13T20:17:05.994423033+00:00 stderr F I0813 20:17:05.994317 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-08-13T20:17:05.995766151+00:00 stderr F I0813 20:17:05.994432 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-08-13T20:17:06.200320373+00:00 stderr F I0813 20:17:06.199748 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-08-13T20:17:06.200430726+00:00 stderr F I0813 20:17:06.200412 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-08-13T20:17:06.397920626+00:00 stderr F I0813 20:17:06.396137 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-08-13T20:17:06.397920626+00:00 stderr F I0813 20:17:06.396208 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-08-13T20:17:06.597172416+00:00 stderr F I0813 20:17:06.595172 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-08-13T20:17:06.597172416+00:00 stderr F I0813 20:17:06.595253 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-08-13T20:17:06.961050417+00:00 stderr F I0813 20:17:06.960676 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-08-13T20:17:06.961753458+00:00 stderr F I0813 20:17:06.961479 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-08-13T20:17:07.025874209+00:00 stderr F I0813 20:17:07.025442 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was
successful
2025-08-13T20:17:07.025874209+00:00 stderr F I0813 20:17:07.025856 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-08-13T20:17:07.600142918+00:00 stderr F I0813 20:17:07.596664 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-08-13T20:17:07.600142918+00:00 stderr F I0813 20:17:07.596734 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-08-13T20:17:08.303768101+00:00 stderr F I0813 20:17:08.303595 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-08-13T20:17:08.303768101+00:00 stderr F I0813 20:17:08.303664 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-08-13T20:17:09.114260336+00:00 stderr F I0813 20:17:09.114146 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-08-13T20:17:09.114260336+00:00 stderr F I0813 20:17:09.114227 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-08-13T20:17:09.141484114+00:00 stderr F I0813 20:17:09.141308 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-08-13T20:17:09.141484114+00:00 stderr F I0813 20:17:09.141358 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-08-13T20:17:09.151986274+00:00 stderr F I0813 20:17:09.149873 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T20:17:09.151986274+00:00 stderr F I0813 20:17:09.150046 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-08-13T20:17:09.436568531+00:00 stderr F I0813 20:17:09.436517 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-08-13T20:17:09.436715335+00:00 stderr F I0813 20:17:09.436697 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-08-13T20:17:09.450451017+00:00 stderr F I0813 20:17:09.450399 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-08-13T20:17:09.450556220+00:00 stderr F I0813 20:17:09.450539 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-08-13T20:17:09.467747911+00:00 stderr F I0813 20:17:09.467593 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-08-13T20:17:09.467747911+00:00 stderr F I0813 20:17:09.467664 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-08-13T20:17:09.491358585+00:00 stderr F I0813 20:17:09.490353 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-08-13T20:17:09.492730374+00:00 stderr F I0813 20:17:09.492588 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-08-13T20:17:09.513332593+00:00 stderr F I0813 20:17:09.513225 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-08-13T20:17:09.513332593+00:00 stderr F I0813 20:17:09.513291 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:17:09.521854906+00:00 stderr F I0813 20:17:09.519131 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:17:09.521854906+00:00 stderr F I0813 20:17:09.519202 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:17:09.525744807+00:00 stderr F I0813
20:17:09.525647 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:17:09.525744807+00:00 stderr F I0813 20:17:09.525719 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2025-08-13T20:17:09.623143019+00:00 stderr F I0813 20:17:09.622999 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2025-08-13T20:17:09.623143019+00:00 stderr F I0813 20:17:09.623069 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2025-08-13T20:17:09.809079268+00:00 stderr F I0813 20:17:09.808507 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2025-08-13T20:17:09.809079268+00:00 stderr F I0813 20:17:09.808594 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2025-08-13T20:17:10.025402256+00:00 stderr F I0813 20:17:10.025194 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2025-08-13T20:17:10.025402256+00:00 stderr F I0813 20:17:10.025275 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2025-08-13T20:17:10.209152154+00:00 stderr F I0813 20:17:10.209003 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2025-08-13T20:17:10.209152154+00:00 stderr F I0813 20:17:10.209075 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2025-08-13T20:17:10.399028096+00:00 stderr F I0813 20:17:10.398950 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2025-08-13T20:17:10.399139459+00:00
stderr F I0813 20:17:10.399124 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2025-08-13T20:17:10.624912547+00:00 stderr F I0813 20:17:10.624513 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2025-08-13T20:17:10.625088892+00:00 stderr F I0813 20:17:10.625068 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2025-08-13T20:17:10.819764451+00:00 stderr F I0813 20:17:10.819627 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2025-08-13T20:17:10.819845573+00:00 stderr F I0813 20:17:10.819766 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:17:11.011765324+00:00 stderr F I0813 20:17:11.011626 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:17:11.011765324+00:00 stderr F I0813 20:17:11.011698 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:17:11.206567386+00:00 stderr F I0813 20:17:11.206413 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:17:11.206567386+00:00 stderr F I0813 20:17:11.206509 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-08-13T20:17:11.409652566+00:00 stderr F I0813 20:17:11.409552 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-08-13T20:17:11.409652566+00:00 stderr F I0813 20:17:11.409636 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-08-13T20:17:11.599700713+00:00 stderr F I0813 20:17:11.599627 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-08-13T20:17:11.599743204+00:00 stderr F I0813 20:17:11.599697 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-08-13T20:17:11.828678432+00:00 stderr F I0813 20:17:11.828561 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-08-13T20:17:11.828678432+00:00 stderr F I0813 20:17:11.828666 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-08-13T20:17:12.007482418+00:00 stderr F I0813 20:17:12.007375 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-08-13T20:17:12.007482418+00:00 stderr F I0813 20:17:12.007460 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-08-13T20:17:12.246413281+00:00 stderr F I0813 20:17:12.245524 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T20:17:12.246413281+00:00 stderr F I0813 20:17:12.245593 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-08-13T20:17:12.421217743+00:00 stderr F I0813 20:17:12.421129 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-08-13T20:17:12.421217743+00:00 stderr F I0813 20:17:12.421201 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-08-13T20:17:12.615136101+00:00 stderr F I0813 20:17:12.615078 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1,
Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:17:12.615335817+00:00 stderr F I0813 20:17:12.615316 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:17:12.841163046+00:00 stderr F I0813 20:17:12.841047 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:17:12.841163046+00:00 stderr F I0813 20:17:12.841126 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:17:13.041407154+00:00 stderr F I0813 20:17:13.041305 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:17:13.041407154+00:00 stderr F I0813 20:17:13.041386 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:17:13.195388202+00:00 stderr F I0813 20:17:13.195274 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:17:13.195388202+00:00 stderr F I0813 20:17:13.195345 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:17:13.397167614+00:00 stderr F I0813 20:17:13.395151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:17:13.397167614+00:00 stderr F I0813 20:17:13.395395 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:17:14.550767769+00:00 stderr F I0813 20:17:14.546523 1 log.go:245] Apply / Create
of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:17:14.550767769+00:00 stderr F I0813 20:17:14.546616 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:17:14.751016676+00:00 stderr F I0813 20:17:14.750921 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:17:14.751136970+00:00 stderr F I0813 20:17:14.751116 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:17:15.457592522+00:00 stderr F I0813 20:17:15.457543 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:17:15.458085836+00:00 stderr F I0813 20:17:15.457691 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:17:16.081516709+00:00 stderr F I0813 20:17:16.080115 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:17:16.081516709+00:00 stderr F I0813 20:17:16.080196 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:17:18.510213935+00:00 stderr F I0813 20:17:18.509540 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:17:18.510213935+00:00 stderr F I0813 20:17:18.509656 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:17:20.689240763+00:00 stderr F I0813 20:17:20.687479 1 log.go:245] Apply / Create of (/v1,
Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:17:20.689240763+00:00 stderr F I0813 20:17:20.687544 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:17:20.829185339+00:00 stderr F I0813 20:17:20.815747 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:17:20.829185339+00:00 stderr F I0813 20:17:20.815890 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:17:20.829185339+00:00 stderr F I0813 20:17:20.827718 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:17:20.829185339+00:00 stderr F I0813 20:17:20.828089 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:17:21.110686038+00:00 stderr F I0813 20:17:21.110587 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:17:21.110732109+00:00 stderr F I0813 20:17:21.110719 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:17:21.455195706+00:00 stderr F I0813 20:17:21.455105 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:17:21.455195706+00:00 stderr F I0813 20:17:21.455173 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:17:21.468591539+00:00 stderr F I0813 20:17:21.468122 1 log.go:245] Apply /
Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:17:21.468591539+00:00 stderr F I0813 20:17:21.468188 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:17:21.479685966+00:00 stderr F I0813 20:17:21.477249 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:17:21.479685966+00:00 stderr F I0813 20:17:21.477320 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:17:21.609719289+00:00 stderr F I0813 20:17:21.609615 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:17:21.609759620+00:00 stderr F I0813 20:17:21.609712 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:17:21.643704690+00:00 stderr F I0813 20:17:21.642515 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:17:21.643704690+00:00 stderr F I0813 20:17:21.642611 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:17:22.156243025+00:00 stderr F I0813 20:17:22.155518 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:17:22.163195764+00:00 stderr F I0813 20:17:22.162545 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:17:22.175694211+00:00 stderr F I0813 20:17:22.174350 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 
2025-08-13T20:17:22.175940878+00:00 stderr F I0813 20:17:22.175917 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:17:24.354742039+00:00 stderr F I0813 20:17:24.354686 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:17:24.354908564+00:00 stderr F I0813 20:17:24.354893 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:17:25.198849754+00:00 stderr F I0813 20:17:25.196380 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:17:25.198849754+00:00 stderr F I0813 20:17:25.196470 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:17:25.367419628+00:00 stderr F I0813 20:17:25.367017 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:17:25.367419628+00:00 stderr F I0813 20:17:25.367087 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:17:25.611864598+00:00 stderr F I0813 20:17:25.610414 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:17:25.611864598+00:00 stderr F I0813 20:17:25.610533 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:17:25.865057108+00:00 stderr F I0813 20:17:25.864496 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:17:25.865057108+00:00 stderr F I0813 20:17:25.864581 1 log.go:245] reconciling 
(monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:17:25.964360494+00:00 stderr F I0813 20:17:25.964059 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:17:25.964360494+00:00 stderr F I0813 20:17:25.964140 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:17:25.976997645+00:00 stderr F I0813 20:17:25.976861 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:17:25.976997645+00:00 stderr F I0813 20:17:25.976925 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:17:25.993494326+00:00 stderr F I0813 20:17:25.993267 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:17:25.993494326+00:00 stderr F I0813 20:17:25.993333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:17:26.007510616+00:00 stderr F I0813 20:17:26.006831 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:17:26.007510616+00:00 stderr F I0813 20:17:26.006899 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:17:26.015620538+00:00 stderr F I0813 20:17:26.015577 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:17:26.015652549+00:00 stderr F I0813 20:17:26.015626 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:17:26.028443164+00:00 stderr F I0813 20:17:26.028273 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) 
openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:17:26.028443164+00:00 stderr F I0813 20:17:26.028365 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:17:26.156008707+00:00 stderr F I0813 20:17:26.155864 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:17:26.156008707+00:00 stderr F I0813 20:17:26.155938 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:17:26.192566731+00:00 stderr F I0813 20:17:26.192221 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:17:26.192566731+00:00 stderr F I0813 20:17:26.192307 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:17:26.202242798+00:00 stderr F I0813 20:17:26.202193 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:17:26.202305549+00:00 stderr F I0813 20:17:26.202262 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:17:26.206278563+00:00 stderr F I0813 20:17:26.206218 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:17:26.206525590+00:00 stderr F I0813 20:17:26.206505 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:17:26.210906375+00:00 stderr F I0813 20:17:26.210881 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:17:26.211096360+00:00 stderr F I0813 20:17:26.211077 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-network-diagnostics/network-diagnostics 2025-08-13T20:17:26.272601667+00:00 stderr F I0813 20:17:26.272547 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:17:26.272877395+00:00 stderr F I0813 20:17:26.272858 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:17:26.497129599+00:00 stderr F I0813 20:17:26.497077 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:17:26.497238062+00:00 stderr F I0813 20:17:26.497223 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:17:26.671165119+00:00 stderr F I0813 20:17:26.671060 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:17:26.671215510+00:00 stderr F I0813 20:17:26.671180 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:17:26.878282874+00:00 stderr F I0813 20:17:26.876824 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:17:26.878282874+00:00 stderr F I0813 20:17:26.876943 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:17:27.078192682+00:00 stderr F I0813 20:17:27.077948 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:17:27.078192682+00:00 stderr F I0813 20:17:27.078042 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:17:27.281010014+00:00 stderr F I0813 20:17:27.280909 1 log.go:245] Apply / Create of (/v1, Kind=Service) 
openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:17:27.281151648+00:00 stderr F I0813 20:17:27.281132 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:17:27.475283332+00:00 stderr F I0813 20:17:27.475194 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:17:27.475283332+00:00 stderr F I0813 20:17:27.475264 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:17:27.673828522+00:00 stderr F I0813 20:17:27.673727 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:17:27.673934355+00:00 stderr F I0813 20:17:27.673919 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:17:27.882406239+00:00 stderr F I0813 20:17:27.881724 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:17:27.882453420+00:00 stderr F I0813 20:17:27.882413 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:17:28.089025749+00:00 stderr F I0813 20:17:28.088921 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:17:28.089025749+00:00 stderr F I0813 20:17:28.089011 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:17:28.316000211+00:00 stderr F I0813 20:17:28.312312 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:17:28.316000211+00:00 stderr 
F I0813 20:17:28.312405 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:17:28.474872858+00:00 stderr F I0813 20:17:28.474171 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:17:28.474872858+00:00 stderr F I0813 20:17:28.474241 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:17:28.677523755+00:00 stderr F I0813 20:17:28.677094 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:17:28.677523755+00:00 stderr F I0813 20:17:28.677162 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:17:28.878186476+00:00 stderr F I0813 20:17:28.877952 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:17:28.878186476+00:00 stderr F I0813 20:17:28.878077 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:17:29.074164322+00:00 stderr F I0813 20:17:29.072985 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:17:29.074345078+00:00 stderr F I0813 20:17:29.074323 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:17:29.810163930+00:00 stderr F I0813 20:17:29.808755 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:17:29.810163930+00:00 stderr F I0813 20:17:29.808885 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) 
/network-node-identity 2025-08-13T20:17:29.979875406+00:00 stderr F I0813 20:17:29.979530 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:17:29.980053091+00:00 stderr F I0813 20:17:29.980026 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:17:29.987354000+00:00 stderr F I0813 20:17:29.987307 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:17:29.987504534+00:00 stderr F I0813 20:17:29.987452 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:17:30.003726477+00:00 stderr F I0813 20:17:30.003608 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:17:30.003848001+00:00 stderr F I0813 20:17:30.003737 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:17:30.081601471+00:00 stderr F I0813 20:17:30.081307 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:17:30.081601471+00:00 stderr F I0813 20:17:30.081480 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:17:30.285579766+00:00 stderr F I0813 20:17:30.285454 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:17:30.285700750+00:00 stderr F I0813 20:17:30.285616 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) 
openshift-network-node-identity/network-node-identity 2025-08-13T20:17:30.476206270+00:00 stderr F I0813 20:17:30.474574 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:17:30.476206270+00:00 stderr F I0813 20:17:30.474644 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:17:30.681186174+00:00 stderr F I0813 20:17:30.679527 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:17:30.681186174+00:00 stderr F I0813 20:17:30.679616 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:17:30.899633072+00:00 stderr F I0813 20:17:30.898764 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:17:30.899633072+00:00 stderr F I0813 20:17:30.898896 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:17:31.086475278+00:00 stderr F I0813 20:17:31.086416 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:17:31.086651643+00:00 stderr F I0813 20:17:31.086601 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:17:31.278907913+00:00 stderr F I0813 20:17:31.276175 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:17:31.278907913+00:00 stderr F I0813 20:17:31.276245 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) 
openshift-network-operator/iptables-alerter 2025-08-13T20:17:31.489349833+00:00 stderr F I0813 20:17:31.488684 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:17:31.489349833+00:00 stderr F I0813 20:17:31.489295 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:17:31.673177362+00:00 stderr F I0813 20:17:31.672490 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:17:31.673177362+00:00 stderr F I0813 20:17:31.672744 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:17:32.992174549+00:00 stderr F I0813 20:17:32.991855 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:17:32.992174549+00:00 stderr F I0813 20:17:32.991920 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:17:34.146403610+00:00 stderr F I0813 20:17:34.141673 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:17:34.418636985+00:00 stderr F I0813 20:17:34.418051 1 log.go:245] Operconfig Controller complete 2025-08-13T20:19:15.390012368+00:00 stderr F I0813 20:19:15.389744 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:20:15.101084387+00:00 stderr F I0813 20:20:15.100748 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.114815029+00:00 stderr F I0813 20:20:15.114695 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n 
openshift-network-diagnostics is applied 2025-08-13T20:20:15.125764452+00:00 stderr F I0813 20:20:15.125638 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.137224420+00:00 stderr F I0813 20:20:15.137120 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.152676501+00:00 stderr F I0813 20:20:15.152593 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.164924782+00:00 stderr F I0813 20:20:15.164672 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.176847732+00:00 stderr F I0813 20:20:15.176734 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.187516697+00:00 stderr F I0813 20:20:15.187416 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.198853051+00:00 stderr F I0813 20:20:15.198680 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.220993194+00:00 stderr F I0813 20:20:15.220128 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.300436784+00:00 stderr F I0813 20:20:15.300324 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.503327633+00:00 stderr F I0813 20:20:15.503190 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.707100297+00:00 stderr F I0813 20:20:15.706874 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:15.899922868+00:00 stderr F I0813 20:20:15.899841 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:16.109863998+00:00 stderr F I0813 20:20:16.109035 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:20:16.300858926+00:00 stderr F I0813 20:20:16.300648 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:20:16.502383996+00:00 stderr F I0813 20:20:16.502267 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:16.700403735+00:00 stderr F I0813 20:20:16.700085 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:16.709526806+00:00 stderr F I0813 20:20:16.709440 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T20:20:16.722279010+00:00 stderr F I0813 20:20:16.722178 1 log.go:245] successful reconciliation 
2025-08-13T20:20:16.903428678+00:00 stderr F I0813 20:20:16.903340 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:20:17.101131458+00:00 stderr F I0813 20:20:17.101067 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:17.302029259+00:00 stderr F I0813 20:20:17.301849 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:17.506245175+00:00 stderr F I0813 20:20:17.506137 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:17.698525250+00:00 stderr F I0813 20:20:17.698399 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:17.901593583+00:00 stderr F I0813 20:20:17.901476 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:20:18.098873941+00:00 stderr F I0813 20:20:18.098756 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:20:18.112693666+00:00 stderr F I0813 20:20:18.112663 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T20:20:18.113472659+00:00 stderr F I0813 20:20:18.113449 1 log.go:245] successful reconciliation 2025-08-13T20:20:18.300057921+00:00 stderr F I0813 20:20:18.299926 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:20:18.499598024+00:00 stderr F I0813 20:20:18.499481 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:20:19.309851151+00:00 stderr F I0813 20:20:19.309652 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T20:20:19.318428196+00:00 stderr F I0813 20:20:19.318244 1 log.go:245] successful reconciliation 2025-08-13T20:20:34.428447821+00:00 stderr F I0813 20:20:34.419477 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:20:34.851948604+00:00 stderr F I0813 20:20:34.851346 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:20:34.855412533+00:00 stderr F I0813 20:20:34.855289 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:20:34.858759729+00:00 stderr F I0813 20:20:34.858709 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:20:34.858873532+00:00 stderr F I0813 20:20:34.858750 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000ff3900 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:20:34.866739777+00:00 stderr F I0813 20:20:34.866657 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:20:34.866739777+00:00 stderr F I0813 20:20:34.866693 1 ovn_kubernetes.go:1666] deployment 
openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:20:34.866739777+00:00 stderr F I0813 20:20:34.866702 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:20:34.870574977+00:00 stderr F I0813 20:20:34.870536 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:20:34.870599937+00:00 stderr F I0813 20:20:34.870556 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:20:34.870599937+00:00 stderr F I0813 20:20:34.870577 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:20:34.870599937+00:00 stderr F I0813 20:20:34.870582 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:20:34.873440519+00:00 stderr F I0813 20:20:34.873276 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:20:34.900480641+00:00 stderr F I0813 20:20:34.900355 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:20:34.919302469+00:00 stderr F I0813 20:20:34.919179 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:20:34.919409652+00:00 stderr F I0813 20:20:34.919351 1 log.go:245] Starting render phase 2025-08-13T20:20:34.938492368+00:00 stderr F I0813 20:20:34.938343 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107
2025-08-13T20:20:34.990484114+00:00 stderr F I0813 20:20:34.990139       1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-08-13T20:20:34.990484114+00:00 stderr F I0813 20:20:34.990184       1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-08-13T20:20:34.990484114+00:00 stderr F I0813 20:20:34.990234       1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-08-13T20:20:34.990484114+00:00 stderr F I0813 20:20:34.990288       1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-08-13T20:20:35.004262808+00:00 stderr F I0813 20:20:35.004169       1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-08-13T20:20:35.004262808+00:00 stderr F I0813 20:20:35.004218       1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-08-13T20:20:35.017341791+00:00 stderr F I0813 20:20:35.017274       1 log.go:245] Render phase done, rendered 112 objects
2025-08-13T20:20:35.051393225+00:00 stderr F I0813 20:20:35.051231       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-08-13T20:20:35.058551829+00:00 stderr F I0813 20:20:35.058446       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-08-13T20:20:35.059034963+00:00 stderr F I0813 20:20:35.058900       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-08-13T20:20:35.079694123+00:00 stderr F I0813 20:20:35.079607       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-08-13T20:20:35.079694123+00:00 stderr F I0813 20:20:35.079679       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-08-13T20:20:35.090843102+00:00 stderr F I0813 20:20:35.090591       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-08-13T20:20:35.090989686+00:00 stderr F I0813 20:20:35.090906       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-08-13T20:20:35.108679192+00:00 stderr F I0813 20:20:35.108582       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-08-13T20:20:35.108679192+00:00 stderr F I0813 20:20:35.108661       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-08-13T20:20:35.115992141+00:00 stderr F I0813 20:20:35.115879       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-08-13T20:20:35.115992141+00:00 stderr F I0813 20:20:35.115945       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-08-13T20:20:35.123428763+00:00 stderr F I0813 20:20:35.123263       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-08-13T20:20:35.123428763+00:00 stderr F I0813 20:20:35.123331       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-08-13T20:20:35.129516987+00:00 stderr F I0813 20:20:35.129375       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-08-13T20:20:35.129516987+00:00 stderr F I0813 20:20:35.129442       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-08-13T20:20:35.135237861+00:00 stderr F I0813 20:20:35.135154       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-08-13T20:20:35.135276982+00:00 stderr F I0813 20:20:35.135234       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-08-13T20:20:35.140937714+00:00 stderr F I0813 20:20:35.140874       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-08-13T20:20:35.140998655+00:00 stderr F I0813 20:20:35.140939       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-08-13T20:20:35.146605056+00:00 stderr F I0813 20:20:35.146536       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-08-13T20:20:35.146636316+00:00 stderr F I0813 20:20:35.146606       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-08-13T20:20:35.316872931+00:00 stderr F I0813 20:20:35.316759       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-08-13T20:20:35.316911002+00:00 stderr F I0813 20:20:35.316867       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-08-13T20:20:35.515310052+00:00 stderr F I0813 20:20:35.515201       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-08-13T20:20:35.515310052+00:00 stderr F I0813 20:20:35.515289       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-08-13T20:20:35.716497682+00:00 stderr F I0813 20:20:35.714520       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-08-13T20:20:35.716497682+00:00 stderr F I0813 20:20:35.714574       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-08-13T20:20:35.914878012+00:00 stderr F I0813 20:20:35.914749       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-08-13T20:20:35.914878012+00:00 stderr F I0813 20:20:35.914866       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-08-13T20:20:36.115593748+00:00 stderr F I0813 20:20:36.115430       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-08-13T20:20:36.115593748+00:00 stderr F I0813 20:20:36.115478       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-08-13T20:20:36.314676358+00:00 stderr F I0813 20:20:36.314602       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-08-13T20:20:36.314676358+00:00 stderr F I0813 20:20:36.314651       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-08-13T20:20:36.516143806+00:00 stderr F I0813 20:20:36.516030       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-08-13T20:20:36.516143806+00:00 stderr F I0813 20:20:36.516127       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-08-13T20:20:36.714144305+00:00 stderr F I0813 20:20:36.714085       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-08-13T20:20:36.714343080+00:00 stderr F I0813 20:20:36.714319       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-08-13T20:20:36.914930013+00:00 stderr F I0813 20:20:36.914717       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-08-13T20:20:36.915104228+00:00 stderr F I0813 20:20:36.915080       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-08-13T20:20:37.115703191+00:00 stderr F I0813 20:20:37.115575       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-08-13T20:20:37.115703191+00:00 stderr F I0813 20:20:37.115659       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-08-13T20:20:37.318935769+00:00 stderr F I0813 20:20:37.318860       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-08-13T20:20:37.319066933+00:00 stderr F I0813 20:20:37.319052       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-08-13T20:20:37.525262466+00:00 stderr F I0813 20:20:37.525170       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-08-13T20:20:37.525304217+00:00 stderr F I0813 20:20:37.525259       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-08-13T20:20:37.732249852+00:00 stderr F I0813 20:20:37.732083       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-08-13T20:20:37.732249852+00:00 stderr F I0813 20:20:37.732159       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-08-13T20:20:37.914341566+00:00 stderr F I0813 20:20:37.914221       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T20:20:37.914404717+00:00 stderr F I0813 20:20:37.914348       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-08-13T20:20:38.118032347+00:00 stderr F I0813 20:20:38.117908       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-08-13T20:20:38.118032347+00:00 stderr F I0813 20:20:38.118019       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-08-13T20:20:38.318535957+00:00 stderr F I0813 20:20:38.318407       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-08-13T20:20:38.318535957+00:00 stderr F I0813 20:20:38.318516       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-08-13T20:20:38.522085155+00:00 stderr F I0813 20:20:38.522006       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-08-13T20:20:38.522085155+00:00 stderr F I0813 20:20:38.522072       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-08-13T20:20:38.729408640+00:00 stderr F I0813 20:20:38.729323       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-08-13T20:20:38.729408640+00:00 stderr F I0813 20:20:38.729392       1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-08-13T20:20:38.919334157+00:00 stderr F I0813 20:20:38.919168       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-08-13T20:20:38.919334157+00:00 stderr F I0813 20:20:38.919238       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:20:39.114197166+00:00 stderr F I0813 20:20:39.113946       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:20:39.114197166+00:00 stderr F I0813 20:20:39.114030       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:20:39.314688925+00:00 stderr F I0813 20:20:39.314561       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:20:39.314688925+00:00 stderr F I0813 20:20:39.314633       1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2025-08-13T20:20:39.516921546+00:00 stderr F I0813 20:20:39.516837       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2025-08-13T20:20:39.516921546+00:00 stderr F I0813 20:20:39.516910       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2025-08-13T20:20:39.713355890+00:00 stderr F I0813 20:20:39.713214       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2025-08-13T20:20:39.713355890+00:00 stderr F I0813 20:20:39.713285       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2025-08-13T20:20:39.913859030+00:00 stderr F I0813 20:20:39.913665       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2025-08-13T20:20:39.913859030+00:00 stderr F I0813 20:20:39.913753       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2025-08-13T20:20:40.119089585+00:00 stderr F I0813 20:20:40.118954       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2025-08-13T20:20:40.119089585+00:00 stderr F I0813 20:20:40.119072       1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2025-08-13T20:20:40.317057773+00:00 stderr F I0813 20:20:40.316939       1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2025-08-13T20:20:40.317107275+00:00 stderr F I0813 20:20:40.317073       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2025-08-13T20:20:40.522411752+00:00 stderr F I0813 20:20:40.522288       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2025-08-13T20:20:40.522411752+00:00 stderr F I0813 20:20:40.522356       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2025-08-13T20:20:40.717497637+00:00 stderr F I0813 20:20:40.717372       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2025-08-13T20:20:40.717497637+00:00 stderr F I0813 20:20:40.717442       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:20:40.915160267+00:00 stderr F I0813 20:20:40.915061       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:20:40.915160267+00:00 stderr F I0813 20:20:40.915130       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:20:41.114289977+00:00 stderr F I0813 20:20:41.114186       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:20:41.114289977+00:00 stderr F I0813 20:20:41.114261       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-08-13T20:20:41.316610860+00:00 stderr F I0813 20:20:41.316514       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-08-13T20:20:41.316610860+00:00 stderr F I0813 20:20:41.316597       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-08-13T20:20:41.514700131+00:00 stderr F I0813 20:20:41.514583       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-08-13T20:20:41.514700131+00:00 stderr F I0813 20:20:41.514658       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-08-13T20:20:41.746578498+00:00 stderr F I0813 20:20:41.746453       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-08-13T20:20:41.746578498+00:00 stderr F I0813 20:20:41.746523       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-08-13T20:20:41.920291183+00:00 stderr F I0813 20:20:41.920172       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-08-13T20:20:41.920291183+00:00 stderr F I0813 20:20:41.920282       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-08-13T20:20:42.123346296+00:00 stderr F I0813 20:20:42.123249       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T20:20:42.123346296+00:00 stderr F I0813 20:20:42.123323       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-08-13T20:20:42.360555405+00:00 stderr F I0813 20:20:42.359752       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-08-13T20:20:42.360555405+00:00 stderr F I0813 20:20:42.359862       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-08-13T20:20:42.521421362+00:00 stderr F I0813 20:20:42.521174       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:20:42.521421362+00:00 stderr F I0813 20:20:42.521269       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:20:42.764649163+00:00 stderr F I0813 20:20:42.764515       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:20:42.764649163+00:00 stderr F I0813 20:20:42.764587       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:20:42.947993663+00:00 stderr F I0813 20:20:42.947726       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:20:42.947993663+00:00 stderr F I0813 20:20:42.947869       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:20:43.115195341+00:00 stderr F I0813 20:20:43.115053       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:20:43.115195341+00:00 stderr F I0813 20:20:43.115161       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:20:43.314743734+00:00 stderr F I0813 20:20:43.314552       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:20:43.314743734+00:00 stderr F I0813 20:20:43.314640       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:20:43.538945162+00:00 stderr F I0813 20:20:43.536506       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:20:43.538945162+00:00 stderr F I0813 20:20:43.536611       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:20:43.714613733+00:00 stderr F I0813 20:20:43.714499       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:20:43.714613733+00:00 stderr F I0813 20:20:43.714569       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:20:43.916191274+00:00 stderr F I0813 20:20:43.916093       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:20:43.916425280+00:00 stderr F I0813 20:20:43.916375       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:20:44.112591947+00:00 stderr F I0813 20:20:44.112526       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:20:44.112694870+00:00 stderr F I0813 20:20:44.112680       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:20:44.313680614+00:00 stderr F I0813 20:20:44.313552       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:20:44.313680614+00:00 stderr F I0813 20:20:44.313627       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:20:44.521593585+00:00 stderr F I0813 20:20:44.521003       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:20:44.521593585+00:00 stderr F I0813 20:20:44.521564       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:20:44.714654432+00:00 stderr F I0813 20:20:44.714600       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:20:44.714764915+00:00 stderr F I0813 20:20:44.714751       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:20:44.915892243+00:00 stderr F I0813 20:20:44.915660       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:20:44.915892243+00:00 stderr F I0813 20:20:44.915760       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:20:45.115664543+00:00 stderr F I0813 20:20:45.115534       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:20:45.115664543+00:00 stderr F I0813 20:20:45.115620       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:20:45.341162817+00:00 stderr F I0813 20:20:45.341077       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:20:45.341162817+00:00 stderr F I0813 20:20:45.341146       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:20:45.516009704+00:00 stderr F I0813 20:20:45.515460       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:20:45.516009704+00:00 stderr F I0813 20:20:45.515531       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:20:45.718480460+00:00 stderr F I0813 20:20:45.718322       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:20:45.718480460+00:00 stderr F I0813 20:20:45.718406       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:20:45.917883789+00:00 stderr F I0813 20:20:45.917660       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:20:45.917883789+00:00 stderr F I0813 20:20:45.917741       1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:20:46.115838446+00:00 stderr F I0813 20:20:46.115696       1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:20:46.115838446+00:00 stderr F I0813 20:20:46.115764       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:20:46.317691464+00:00 stderr F I0813 20:20:46.317581       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:20:46.317691464+00:00 stderr F I0813 20:20:46.317634       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:20:46.523676001+00:00 stderr F I0813 20:20:46.523581       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:20:46.523720803+00:00 stderr F I0813 20:20:46.523669       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:20:46.719160048+00:00 stderr F I0813 20:20:46.718504       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:20:46.719160048+00:00 stderr F I0813 20:20:46.718650       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:20:46.918250678+00:00 stderr F I0813 20:20:46.918163       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:20:46.918250678+00:00 stderr F I0813 20:20:46.918244       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:20:47.114304512+00:00 stderr F I0813 20:20:47.114169       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:20:47.114304512+00:00 stderr F I0813 20:20:47.114274       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:20:47.317070466+00:00 stderr F I0813 20:20:47.316924       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:20:47.317070466+00:00 stderr F I0813 20:20:47.317018       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:20:47.524885645+00:00 stderr F I0813 20:20:47.524665       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:20:47.524885645+00:00 stderr F I0813 20:20:47.524762       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:20:47.719041623+00:00 stderr F I0813 20:20:47.718885       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:20:47.719041623+00:00 stderr F I0813 20:20:47.719023       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:20:47.916272160+00:00 stderr F I0813 20:20:47.916161       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:20:47.916272160+00:00 stderr F I0813 20:20:47.916232       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:20:48.114940447+00:00 stderr F I0813 20:20:48.114849       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:20:48.114940447+00:00 stderr F I0813 20:20:48.114925       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:20:48.318125074+00:00 stderr F I0813 20:20:48.318029       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:20:48.318125074+00:00 stderr F I0813 20:20:48.318095       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:20:48.515077522+00:00 stderr F I0813 20:20:48.514919       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:20:48.515077522+00:00 stderr F I0813 20:20:48.515033       1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:20:48.716356645+00:00 stderr F I0813 20:20:48.716255       1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:20:48.716400156+00:00 stderr F I0813 20:20:48.716355       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:20:48.943259860+00:00 stderr F I0813 20:20:48.943095       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:20:48.943259860+00:00 stderr F I0813 20:20:48.943245       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:20:49.169943599+00:00 stderr F I0813 20:20:49.168227       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:20:49.169943599+00:00 stderr F I0813 20:20:49.168305       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:20:49.317727692+00:00 stderr F I0813 20:20:49.317655       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:20:49.317727692+00:00 stderr F I0813 20:20:49.317717       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:20:49.518447409+00:00 stderr F I0813 20:20:49.518139       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:20:49.518447409+00:00 stderr F I0813 20:20:49.518411       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:20:49.715203211+00:00 stderr F I0813 20:20:49.715070       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:20:49.715203211+00:00 stderr F I0813 20:20:49.715160       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:20:49.923103433+00:00 stderr F I0813 20:20:49.922956       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:20:49.923204656+00:00 stderr F I0813 20:20:49.923166       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:20:50.116761217+00:00 stderr F I0813 20:20:50.116517       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:20:50.116761217+00:00 stderr F I0813 20:20:50.116673       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:20:50.314908870+00:00 stderr F I0813 20:20:50.314731       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:20:50.315132137+00:00 stderr F I0813 20:20:50.315102       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:20:50.514398462+00:00 stderr F I0813 20:20:50.514290       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:20:50.514398462+00:00 stderr F I0813 20:20:50.514382       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:20:50.726556925+00:00 stderr F I0813 20:20:50.726420       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:20:50.726556925+00:00 stderr F I0813 20:20:50.726535       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:20:50.919371475+00:00 stderr F I0813 20:20:50.919234       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:20:50.919371475+00:00 stderr F I0813 20:20:50.919331       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:20:51.116153669+00:00 stderr F I0813 20:20:51.116044       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:20:51.116153669+00:00 stderr F I0813 20:20:51.116120       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:20:51.316356891+00:00 stderr F I0813 20:20:51.316310       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:20:51.316471514+00:00 stderr F I0813 20:20:51.316456       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:20:51.516163331+00:00 stderr F I0813 20:20:51.516066       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:20:51.516163331+00:00 stderr F I0813 20:20:51.516133       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:20:51.719646336+00:00 stderr F I0813 20:20:51.719546       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:20:51.719770710+00:00 stderr F I0813 20:20:51.719756       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:20:51.916366349+00:00 stderr F I0813 20:20:51.915645       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:20:51.918048257+00:00 stderr F I0813 20:20:51.918017       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:20:52.117860137+00:00 stderr F I0813 20:20:52.117626       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:20:52.117860137+00:00 stderr F I0813 20:20:52.117750       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:20:52.314295181+00:00 stderr F I0813 20:20:52.314191       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:20:52.314295181+00:00 stderr F I0813 20:20:52.314259       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:20:52.516091738+00:00 stderr F I0813 20:20:52.516039       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:20:52.516239083+00:00 stderr F I0813 20:20:52.516222       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:20:52.719680517+00:00 stderr F I0813 20:20:52.719550       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:20:52.719680517+00:00 stderr F I0813 20:20:52.719668       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:20:52.914934307+00:00 stderr F I0813 20:20:52.914744       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:20:52.915017569+00:00 stderr F I0813 20:20:52.914998       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:20:53.121269393+00:00 stderr F I0813 20:20:53.121151       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:20:53.121269393+00:00 stderr F I0813 20:20:53.121221       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:20:53.318071618+00:00 stderr F I0813 20:20:53.317222       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:20:53.318071618+00:00 stderr F I0813 20:20:53.317285       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:20:53.515249083+00:00 stderr F I0813 20:20:53.515093       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:20:53.515249083+00:00 stderr F I0813 20:20:53.515188       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:20:53.715506386+00:00 stderr F I0813 20:20:53.715429       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:20:53.715561408+00:00 stderr F I0813 20:20:53.715502       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:20:53.928710710+00:00 stderr F I0813 20:20:53.928579       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:20:53.928710710+00:00 stderr F I0813 20:20:53.928647       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:20:54.115359034+00:00 stderr F I0813 20:20:54.115214       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:20:54.115431756+00:00 stderr F I0813 20:20:54.115353       1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:20:54.329936146+00:00 stderr F I0813 20:20:54.329755       1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:20:54.329936146+00:00 stderr F I0813 20:20:54.329912       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:20:54.530932861+00:00 stderr F I0813 20:20:54.529003       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:20:54.530932861+00:00 stderr F I0813 20:20:54.530847       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:20:54.721034324+00:00 stderr F I0813 20:20:54.720904       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:20:54.721113416+00:00 stderr F I0813 20:20:54.721035       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:20:54.915171662+00:00 stderr F I0813 20:20:54.915090       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:20:54.915229324+00:00 stderr F I0813 20:20:54.915182       1 log.go:245] reconciling (/v1, Kind=ServiceAccount)
openshift-network-operator/iptables-alerter 2025-08-13T20:20:55.115471167+00:00 stderr F I0813 20:20:55.115377 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:20:55.115552259+00:00 stderr F I0813 20:20:55.115490 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:20:55.315049650+00:00 stderr F I0813 20:20:55.314920 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:20:55.315049650+00:00 stderr F I0813 20:20:55.315022 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:20:55.515115428+00:00 stderr F I0813 20:20:55.514999 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:20:55.515172839+00:00 stderr F I0813 20:20:55.515132 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:20:55.721548628+00:00 stderr F I0813 20:20:55.721333 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:20:55.742750874+00:00 stderr F I0813 20:20:55.742651 1 log.go:245] Operconfig Controller complete 2025-08-13T20:22:15.414660491+00:00 stderr F I0813 20:22:15.414332 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:23:55.745175459+00:00 stderr F I0813 20:23:55.744924 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:23:56.056716478+00:00 stderr F I0813 20:23:56.056606 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:23:56.062176634+00:00 stderr F I0813 20:23:56.062079 1 ovn_kubernetes.go:753] For Label 
network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:23:56.066000683+00:00 stderr F I0813 20:23:56.065853 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:23:56.066000683+00:00 stderr F I0813 20:23:56.065883 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003b9ea80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:23:56.072064717+00:00 stderr F I0813 20:23:56.071972 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:23:56.072064717+00:00 stderr F I0813 20:23:56.072031 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:23:56.072064717+00:00 stderr F I0813 20:23:56.072040 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.077868 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.077895 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.077901 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:23:56.078091609+00:00 stderr F I0813 20:23:56.077907 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:23:56.078091609+00:00 stderr F 
I0813 20:23:56.078022 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:23:56.091823722+00:00 stderr F I0813 20:23:56.091671 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:23:56.106619705+00:00 stderr F I0813 20:23:56.106510 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:23:56.106619705+00:00 stderr F I0813 20:23:56.106587 1 log.go:245] Starting render phase 2025-08-13T20:23:56.124137446+00:00 stderr F I0813 20:23:56.124053 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:23:56.161680999+00:00 stderr F I0813 20:23:56.161549 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:23:56.161680999+00:00 stderr F I0813 20:23:56.161596 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:23:56.161680999+00:00 stderr F I0813 20:23:56.161653 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:23:56.161763471+00:00 stderr F I0813 20:23:56.161690 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:23:56.175322409+00:00 stderr F I0813 20:23:56.175221 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:23:56.175322409+00:00 stderr F I0813 20:23:56.175269 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:23:56.189743842+00:00 stderr F I0813 20:23:56.189626 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:23:56.223138826+00:00 stderr F I0813 20:23:56.223025 1 log.go:245] reconciling (/v1, 
Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:23:56.229318543+00:00 stderr F I0813 20:23:56.229226 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:23:56.229318543+00:00 stderr F I0813 20:23:56.229284 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:23:56.238065373+00:00 stderr F I0813 20:23:56.238029 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:23:56.238157806+00:00 stderr F I0813 20:23:56.238144 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:23:56.248264415+00:00 stderr F I0813 20:23:56.248236 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:23:56.248351667+00:00 stderr F I0813 20:23:56.248335 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:23:56.256419778+00:00 stderr F I0813 20:23:56.256333 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:23:56.256419778+00:00 stderr F I0813 20:23:56.256408 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:23:56.263159241+00:00 stderr F I0813 20:23:56.263114 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:23:56.263267974+00:00 stderr F I0813 20:23:56.263249 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:23:56.271216531+00:00 stderr F I0813 20:23:56.271161 1 log.go:245] Apply 
/ Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:23:56.271329714+00:00 stderr F I0813 20:23:56.271314 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:23:56.279186849+00:00 stderr F I0813 20:23:56.279141 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:23:56.279288662+00:00 stderr F I0813 20:23:56.279274 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:23:56.285667304+00:00 stderr F I0813 20:23:56.285512 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:23:56.285667304+00:00 stderr F I0813 20:23:56.285593 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:23:56.292482569+00:00 stderr F I0813 20:23:56.292447 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:23:56.292577362+00:00 stderr F I0813 20:23:56.292562 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:23:56.301649901+00:00 stderr F I0813 20:23:56.301509 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:23:56.301649901+00:00 stderr F I0813 20:23:56.301564 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:23:56.506046986+00:00 stderr F I0813 20:23:56.505885 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:23:56.506090177+00:00 stderr F I0813 20:23:56.506040 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:23:56.703672107+00:00 
stderr F I0813 20:23:56.703336 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:23:56.703672107+00:00 stderr F I0813 20:23:56.703429 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:23:56.903017297+00:00 stderr F I0813 20:23:56.902868 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:23:56.903017297+00:00 stderr F I0813 20:23:56.902937 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:23:57.105524158+00:00 stderr F I0813 20:23:57.105419 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:23:57.105524158+00:00 stderr F I0813 20:23:57.105497 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:23:57.304444006+00:00 stderr F I0813 20:23:57.304332 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:23:57.304444006+00:00 stderr F I0813 20:23:57.304407 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:23:57.503685003+00:00 stderr F I0813 20:23:57.503636 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:23:57.503957881+00:00 stderr F I0813 20:23:57.503938 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:23:57.704355091+00:00 stderr F I0813 20:23:57.704301 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:23:57.704573877+00:00 stderr 
F I0813 20:23:57.704552 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:23:57.905446501+00:00 stderr F I0813 20:23:57.902869 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:23:57.905587625+00:00 stderr F I0813 20:23:57.905570 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:23:58.103037451+00:00 stderr F I0813 20:23:58.102875 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:23:58.103037451+00:00 stderr F I0813 20:23:58.102951 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:23:58.305037147+00:00 stderr F I0813 20:23:58.304951 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:23:58.305097379+00:00 stderr F I0813 20:23:58.305086 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:23:58.504179291+00:00 stderr F I0813 20:23:58.504046 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:23:58.504179291+00:00 stderr F I0813 20:23:58.504118 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:23:58.716846142+00:00 stderr F I0813 20:23:58.716743 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:23:58.716968416+00:00 stderr F I0813 20:23:58.716953 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:23:58.915933825+00:00 stderr F I0813 20:23:58.915878 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:23:58.916067789+00:00 
stderr F I0813 20:23:58.916051 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:23:59.107436621+00:00 stderr F I0813 20:23:59.107367 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:23:59.107547914+00:00 stderr F I0813 20:23:59.107531 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:23:59.303224768+00:00 stderr F I0813 20:23:59.303168 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:23:59.303363672+00:00 stderr F I0813 20:23:59.303344 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:23:59.504193185+00:00 stderr F I0813 20:23:59.504044 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:23:59.504193185+00:00 stderr F I0813 20:23:59.504108 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:23:59.707910500+00:00 stderr F I0813 20:23:59.707758 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:23:59.708084205+00:00 stderr F I0813 20:23:59.708067 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:23:59.905622053+00:00 stderr F I0813 20:23:59.905511 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:23:59.905622053+00:00 stderr F I0813 20:23:59.905586 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:24:00.111376367+00:00 stderr F I0813 20:24:00.111228 1 log.go:245] Apply / Create of (/v1, 
Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:24:00.111376367+00:00 stderr F I0813 20:24:00.111333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:24:00.309879493+00:00 stderr F I0813 20:24:00.307164 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:24:00.309879493+00:00 stderr F I0813 20:24:00.307251 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:24:00.502397528+00:00 stderr F I0813 20:24:00.502198 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:24:00.502397528+00:00 stderr F I0813 20:24:00.502279 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:24:00.703889160+00:00 stderr F I0813 20:24:00.703701 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:24:00.703889160+00:00 stderr F I0813 20:24:00.703847 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:24:00.932875887+00:00 stderr F I0813 20:24:00.932711 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:24:00.932875887+00:00 stderr F I0813 20:24:00.932857 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:24:01.109561740+00:00 stderr F I0813 20:24:01.107418 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:24:01.109561740+00:00 stderr F I0813 20:24:01.107519 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) 
/multus-admission-controller-webhook 2025-08-13T20:24:01.303446083+00:00 stderr F I0813 20:24:01.303328 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:24:01.303446083+00:00 stderr F I0813 20:24:01.303412 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:24:01.505444009+00:00 stderr F I0813 20:24:01.505299 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:24:01.505444009+00:00 stderr F I0813 20:24:01.505379 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:24:01.707515097+00:00 stderr F I0813 20:24:01.707435 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:24:01.707628561+00:00 stderr F I0813 20:24:01.707610 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:24:01.903363987+00:00 stderr F I0813 20:24:01.903289 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:24:01.903363987+00:00 stderr F I0813 20:24:01.903342 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:24:02.104367165+00:00 stderr F I0813 20:24:02.104131 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:24:02.104571011+00:00 stderr F I0813 20:24:02.104553 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:24:02.304961631+00:00 stderr F I0813 20:24:02.304722 1 
log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:24:02.304961631+00:00 stderr F I0813 20:24:02.304918 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:24:02.504882538+00:00 stderr F I0813 20:24:02.504097 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:24:02.504882538+00:00 stderr F I0813 20:24:02.504765 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:24:02.704090464+00:00 stderr F I0813 20:24:02.703946 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:24:02.704171946+00:00 stderr F I0813 20:24:02.704101 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:24:02.910926357+00:00 stderr F I0813 20:24:02.910395 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:24:02.910926357+00:00 stderr F I0813 20:24:02.910473 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:24:03.109578138+00:00 stderr F I0813 20:24:03.109479 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:24:03.109650070+00:00 stderr F I0813 20:24:03.109604 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:24:03.311612165+00:00 stderr F I0813 20:24:03.311452 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:24:03.311612165+00:00 stderr F I0813 20:24:03.311527 1 
log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:24:03.518483780+00:00 stderr F I0813 20:24:03.518204 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:24:03.518483780+00:00 stderr F I0813 20:24:03.518275 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:24:03.709574234+00:00 stderr F I0813 20:24:03.709513 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:24:03.709700518+00:00 stderr F I0813 20:24:03.709686 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:24:03.941181797+00:00 stderr F I0813 20:24:03.941042 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:24:03.941181797+00:00 stderr F I0813 20:24:03.941116 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:24:04.148030891+00:00 stderr F I0813 20:24:04.147255 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:24:04.148075283+00:00 stderr F I0813 20:24:04.148031 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:24:04.303652371+00:00 stderr F I0813 20:24:04.303504 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:24:04.303652371+00:00 stderr F I0813 20:24:04.303550 1 log.go:245] 
reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:24:04.506547913+00:00 stderr F I0813 20:24:04.506449 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:24:04.506547913+00:00 stderr F I0813 20:24:04.506515 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:24:04.706940153+00:00 stderr F I0813 20:24:04.706715 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:24:04.706940153+00:00 stderr F I0813 20:24:04.706865 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:24:04.906138199+00:00 stderr F I0813 20:24:04.905277 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:24:04.906138199+00:00 stderr F I0813 20:24:04.905349 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:24:05.102190895+00:00 stderr F I0813 20:24:05.102072 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:24:05.102190895+00:00 stderr F I0813 20:24:05.102163 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:24:05.303721187+00:00 stderr F I0813 20:24:05.303579 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 
2025-08-13T20:24:05.303721187+00:00 stderr F I0813 20:24:05.303651       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:24:05.504195840+00:00 stderr F I0813 20:24:05.504088       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:24:05.504195840+00:00 stderr F I0813 20:24:05.504182       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:24:05.703151339+00:00 stderr F I0813 20:24:05.703030       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:24:05.703295753+00:00 stderr F I0813 20:24:05.703218       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:24:05.905630218+00:00 stderr F I0813 20:24:05.905487       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:24:05.905630218+00:00 stderr F I0813 20:24:05.905566       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:24:06.104202557+00:00 stderr F I0813 20:24:06.104087       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:24:06.104202557+00:00 stderr F I0813 20:24:06.104178       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:24:06.304850084+00:00 stderr F I0813 20:24:06.304742       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:24:06.304896585+00:00 stderr F I0813 20:24:06.304870       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:24:06.502601377+00:00 stderr F I0813 20:24:06.502497       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:24:06.502601377+00:00 stderr F I0813 20:24:06.502574       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:24:06.707914818+00:00 stderr F I0813 20:24:06.707659       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:24:06.707914818+00:00 stderr F I0813 20:24:06.707874       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:24:06.905040635+00:00 stderr F I0813 20:24:06.904866       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:24:06.905040635+00:00 stderr F I0813 20:24:06.904934       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:24:07.104293682+00:00 stderr F I0813 20:24:07.104200       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:24:07.104293682+00:00 stderr F I0813 20:24:07.104283       1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:24:07.305269639+00:00 stderr F I0813 20:24:07.305107       1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:24:07.305269639+00:00 stderr F I0813 20:24:07.305193       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:24:07.505268098+00:00 stderr F I0813 20:24:07.505131       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:24:07.505268098+00:00 stderr F I0813 20:24:07.505260       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:24:07.707370547+00:00 stderr F I0813 20:24:07.706715       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:24:07.707370547+00:00 stderr F I0813 20:24:07.707020       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:24:07.908327433+00:00 stderr F I0813 20:24:07.908202       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:24:07.908327433+00:00 stderr F I0813 20:24:07.908274       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:24:08.107377655+00:00 stderr F I0813 20:24:08.107259       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:24:08.107377655+00:00 stderr F I0813 20:24:08.107332       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:24:08.304240254+00:00 stderr F I0813 20:24:08.304146       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:24:08.304240254+00:00 stderr F I0813 20:24:08.304217       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:24:08.506155518+00:00 stderr F I0813 20:24:08.506100       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:24:08.506360984+00:00 stderr F I0813 20:24:08.506343       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:24:08.708078002+00:00 stderr F I0813 20:24:08.707897       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:24:08.708078002+00:00 stderr F I0813 20:24:08.708005       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:24:08.905852117+00:00 stderr F I0813 20:24:08.905680       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:24:08.905852117+00:00 stderr F I0813 20:24:08.905748       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:24:09.105362862+00:00 stderr F I0813 20:24:09.105229       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:24:09.105362862+00:00 stderr F I0813 20:24:09.105302       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:24:09.308393187+00:00 stderr F I0813 20:24:09.308281       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:24:09.308393187+00:00 stderr F I0813 20:24:09.308351       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:24:09.503841136+00:00 stderr F I0813 20:24:09.503616       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:24:09.503841136+00:00 stderr F I0813 20:24:09.503689       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:24:09.710423843+00:00 stderr F I0813 20:24:09.710291       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:24:09.710423843+00:00 stderr F I0813 20:24:09.710370       1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:24:09.904903784+00:00 stderr F I0813 20:24:09.904143       1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:24:09.905225733+00:00 stderr F I0813 20:24:09.905103       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:24:10.117640306+00:00 stderr F I0813 20:24:10.117359       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:24:10.117640306+00:00 stderr F I0813 20:24:10.117462       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:24:10.334751444+00:00 stderr F I0813 20:24:10.334449       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:24:10.334751444+00:00 stderr F I0813 20:24:10.334553       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:24:10.505470656+00:00 stderr F I0813 20:24:10.505341       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:24:10.505470656+00:00 stderr F I0813 20:24:10.505420       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:24:10.705679281+00:00 stderr F I0813 20:24:10.705551       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:24:10.705679281+00:00 stderr F I0813 20:24:10.705638       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:24:10.902584351+00:00 stderr F I0813 20:24:10.902460       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:24:10.902584351+00:00 stderr F I0813 20:24:10.902548       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:24:11.104597097+00:00 stderr F I0813 20:24:11.104095       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:24:11.104635988+00:00 stderr F I0813 20:24:11.104593       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:24:11.303297819+00:00 stderr F I0813 20:24:11.303215       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:24:11.303340690+00:00 stderr F I0813 20:24:11.303301       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:24:11.507098977+00:00 stderr F I0813 20:24:11.506921       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:24:11.507098977+00:00 stderr F I0813 20:24:11.507039       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:24:11.704892512+00:00 stderr F I0813 20:24:11.704748       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:24:11.704962174+00:00 stderr F I0813 20:24:11.704935       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:24:11.908494334+00:00 stderr F I0813 20:24:11.908373       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:24:11.908494334+00:00 stderr F I0813 20:24:11.908476       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:24:12.107716141+00:00 stderr F I0813 20:24:12.107620       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:24:12.107922847+00:00 stderr F I0813 20:24:12.107719       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:24:12.306970778+00:00 stderr F I0813 20:24:12.306848       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:24:12.306970778+00:00 stderr F I0813 20:24:12.306943       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:24:12.510876339+00:00 stderr F I0813 20:24:12.510737       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:24:12.510937191+00:00 stderr F I0813 20:24:12.510900       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:24:12.703612550+00:00 stderr F I0813 20:24:12.703499       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:24:12.703612550+00:00 stderr F I0813 20:24:12.703577       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:24:12.908060046+00:00 stderr F I0813 20:24:12.907951       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:24:12.908060046+00:00 stderr F I0813 20:24:12.908041       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:24:13.106532681+00:00 stderr F I0813 20:24:13.106404       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:24:13.106532681+00:00 stderr F I0813 20:24:13.106494       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:24:13.304753239+00:00 stderr F I0813 20:24:13.304631       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:24:13.304753239+00:00 stderr F I0813 20:24:13.304735       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:24:13.505024716+00:00 stderr F I0813 20:24:13.504860       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:24:13.505024716+00:00 stderr F I0813 20:24:13.505011       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:24:13.705484167+00:00 stderr F I0813 20:24:13.705360       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:24:13.705484167+00:00 stderr F I0813 20:24:13.705464       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:24:13.903625263+00:00 stderr F I0813 20:24:13.903529       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:24:13.903625263+00:00 stderr F I0813 20:24:13.903602       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:24:14.104092195+00:00 stderr F I0813 20:24:14.103966       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:24:14.104133356+00:00 stderr F I0813 20:24:14.104092       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:24:14.303117126+00:00 stderr F I0813 20:24:14.302972       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:24:14.303117126+00:00 stderr F I0813 20:24:14.303097       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:24:14.508547180+00:00 stderr F I0813 20:24:14.508316       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:24:14.508547180+00:00 stderr F I0813 20:24:14.508405       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:24:14.706561422+00:00 stderr F I0813 20:24:14.706437       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:24:14.706561422+00:00 stderr F I0813 20:24:14.706527       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:24:14.904234084+00:00 stderr F I0813 20:24:14.904096       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:24:14.904234084+00:00 stderr F I0813 20:24:14.904220       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:24:15.102945636+00:00 stderr F I0813 20:24:15.102856       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:24:15.102945636+00:00 stderr F I0813 20:24:15.102920       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:24:15.304903511+00:00 stderr F I0813 20:24:15.304838       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:24:15.305076096+00:00 stderr F I0813 20:24:15.305056       1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:24:15.507388510+00:00 stderr F I0813 20:24:15.507226       1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:24:15.507492543+00:00 stderr F I0813 20:24:15.507399       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:24:15.720216476+00:00 stderr F I0813 20:24:15.719621       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:24:15.720216476+00:00 stderr F I0813 20:24:15.719699       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:24:15.904202717+00:00 stderr F I0813 20:24:15.904044       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:24:15.904202717+00:00 stderr F I0813 20:24:15.904115       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:24:16.105078811+00:00 stderr F I0813 20:24:16.104583       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:24:16.105078811+00:00 stderr F I0813 20:24:16.104659       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T20:24:16.303380021+00:00 stderr F I0813 20:24:16.303289       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:24:16.303380021+00:00 stderr F I0813 20:24:16.303362       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:24:16.504512272+00:00 stderr F I0813 20:24:16.504309       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T20:24:16.504512272+00:00 stderr F I0813 20:24:16.504421       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T20:24:16.704225613+00:00 stderr F I0813 20:24:16.704172       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T20:24:16.704360707+00:00 stderr F I0813 20:24:16.704345       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T20:24:16.911404137+00:00 stderr F I0813 20:24:16.911266       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:24:16.937133472+00:00 stderr F I0813 20:24:16.936845       1 log.go:245] Operconfig Controller complete
2025-08-13T20:25:15.429967135+00:00 stderr F I0813 20:25:15.429662       1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T20:25:16.740137398+00:00 stderr F I0813 20:25:16.739949       1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-08-13T20:25:16.742323171+00:00 stderr F I0813 20:25:16.742244       1 log.go:245] successful reconciliation
2025-08-13T20:25:18.126867019+00:00 stderr F I0813 20:25:18.126717       1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2025-08-13T20:25:18.127921029+00:00 stderr F I0813 20:25:18.127892       1 log.go:245] successful reconciliation
2025-08-13T20:25:19.346569016+00:00 stderr F I0813 20:25:19.345537       1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2025-08-13T20:25:19.346569016+00:00 stderr F I0813 20:25:19.346393       1 log.go:245] successful reconciliation
2025-08-13T20:27:16.938669622+00:00 stderr F I0813 20:27:16.938467       1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:27:19.812921138+00:00 stderr F I0813 20:27:19.812057       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:27:19.816840280+00:00 stderr F I0813 20:27:19.814837       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:27:19.819847356+00:00 stderr F I0813 20:27:19.817458       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:27:19.819847356+00:00 stderr F I0813 20:27:19.817517       1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002a8ed00 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:27:19.828855064+00:00 stderr F I0813 20:27:19.824289       1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:27:19.828855064+00:00 stderr F I0813 20:27:19.824340       1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:27:19.828855064+00:00 stderr F I0813 20:27:19.824379       1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:27:19.829031209+00:00 stderr F I0813 20:27:19.828891       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:27:19.829031209+00:00 stderr F I0813 20:27:19.828939       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:27:19.829031209+00:00 stderr F I0813 20:27:19.828948       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:27:19.829031209+00:00 stderr F I0813 20:27:19.828956       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:27:19.829950725+00:00 stderr F I0813 20:27:19.829083       1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:27:19.861508177+00:00 stderr F I0813 20:27:19.860090       1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:27:19.882350973+00:00 stderr F I0813 20:27:19.879935       1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:27:19.882350973+00:00 stderr F I0813 20:27:19.880025       1 log.go:245] Starting render phase
2025-08-13T20:27:19.896873369+00:00 stderr F I0813 20:27:19.896220       1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2025-08-13T20:27:19.931929931+00:00 stderr F I0813 20:27:19.931254       1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-08-13T20:27:19.931929931+00:00 stderr F I0813 20:27:19.931296       1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-08-13T20:27:19.931929931+00:00 stderr F I0813 20:27:19.931322       1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-08-13T20:27:19.931929931+00:00 stderr F I0813 20:27:19.931349       1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-08-13T20:27:19.944909902+00:00 stderr F I0813 20:27:19.943601       1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-08-13T20:27:19.944909902+00:00 stderr F I0813 20:27:19.943649       1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-08-13T20:27:19.957856962+00:00 stderr F I0813 20:27:19.955095       1 log.go:245] Render phase done, rendered 112 objects
2025-08-13T20:27:19.990336141+00:00 stderr F I0813 20:27:19.982749       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-08-13T20:27:19.999886924+00:00 stderr F I0813 20:27:19.995326       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-08-13T20:27:19.999886924+00:00 stderr F I0813 20:27:19.995387       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-08-13T20:27:20.006681439+00:00 stderr F I0813 20:27:20.004174       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-08-13T20:27:20.006681439+00:00 stderr F I0813 20:27:20.004253       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-08-13T20:27:20.019916377+00:00 stderr F I0813 20:27:20.019718       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-08-13T20:27:20.019916377+00:00 stderr F I0813 20:27:20.019844       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-08-13T20:27:20.032081685+00:00 stderr F I0813 20:27:20.030848       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-08-13T20:27:20.032081685+00:00 stderr F I0813 20:27:20.030919       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-08-13T20:27:20.039156817+00:00 stderr F I0813 20:27:20.038392       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-08-13T20:27:20.039156817+00:00 stderr F I0813 20:27:20.038461       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-08-13T20:27:20.048527175+00:00 stderr F I0813 20:27:20.046105       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-08-13T20:27:20.048527175+00:00 stderr F I0813 20:27:20.046172       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-08-13T20:27:20.052883210+00:00 stderr F I0813 20:27:20.051561       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-08-13T20:27:20.052883210+00:00 stderr F I0813 20:27:20.051642       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-08-13T20:27:20.058851640+00:00 stderr F I0813 20:27:20.057424       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-08-13T20:27:20.058851640+00:00 stderr F I0813 20:27:20.057470       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-08-13T20:27:20.065037407+00:00 stderr F I0813 20:27:20.062875       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-08-13T20:27:20.065037407+00:00 stderr F I0813 20:27:20.062920       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-08-13T20:27:20.072289915+00:00 stderr F I0813 20:27:20.072255       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-08-13T20:27:20.072368487+00:00 stderr F I0813 20:27:20.072355       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-08-13T20:27:20.277151312+00:00 stderr F I0813 20:27:20.277043       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-08-13T20:27:20.277151312+00:00 stderr F I0813 20:27:20.277109       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-08-13T20:27:20.479363295+00:00 stderr F I0813 20:27:20.479249       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-08-13T20:27:20.479363295+00:00 stderr F I0813 20:27:20.479316       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-08-13T20:27:20.674626708+00:00 stderr F I0813 20:27:20.674501       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-08-13T20:27:20.674626708+00:00 stderr F I0813 20:27:20.674573       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-08-13T20:27:20.877436387+00:00 stderr F I0813 20:27:20.877296       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-08-13T20:27:20.877436387+00:00 stderr F I0813 20:27:20.877385       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-08-13T20:27:21.073222746+00:00 stderr F I0813 20:27:21.073047       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-08-13T20:27:21.073222746+00:00 stderr F I0813 20:27:21.073108       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-08-13T20:27:21.275581292+00:00 stderr F I0813 20:27:21.275420       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-08-13T20:27:21.275581292+00:00 stderr F I0813 20:27:21.275493       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-08-13T20:27:21.475888709+00:00 stderr F I0813 20:27:21.475668       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-08-13T20:27:21.476053114+00:00 stderr F I0813 20:27:21.475950       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-08-13T20:27:21.674091187+00:00 stderr F I0813 20:27:21.673969       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-08-13T20:27:21.674091187+00:00 stderr F I0813 20:27:21.674079       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-08-13T20:27:21.877266086+00:00 stderr F I0813 20:27:21.877110       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-08-13T20:27:21.877266086+00:00 stderr F I0813 20:27:21.877249       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-08-13T20:27:22.075151775+00:00 stderr F I0813 20:27:22.075092       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-08-13T20:27:22.075265938+00:00 stderr F I0813 20:27:22.075247       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-08-13T20:27:22.275664958+00:00 stderr F I0813 20:27:22.275608       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-08-13T20:27:22.275879375+00:00 stderr F I0813 20:27:22.275859       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-08-13T20:27:22.492454097+00:00 stderr F I0813 20:27:22.492318       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-08-13T20:27:22.492579351+00:00 stderr F I0813 20:27:22.492565       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-08-13T20:27:22.686749513+00:00 stderr F I0813 20:27:22.686621       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-08-13T20:27:22.686749513+00:00 stderr F I0813 20:27:22.686691       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-08-13T20:27:22.873680618+00:00 stderr F I0813 20:27:22.873593       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T20:27:22.873680618+00:00 stderr F I0813 20:27:22.873661       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-08-13T20:27:23.072481833+00:00 stderr F I0813 20:27:23.072305       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-08-13T20:27:23.072481833+00:00 stderr F I0813 20:27:23.072374       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-08-13T20:27:23.276458574+00:00 stderr F I0813 20:27:23.276323       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-08-13T20:27:23.276458574+00:00 stderr F I0813 20:27:23.276436       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-08-13T20:27:23.484912135+00:00 stderr F I0813 20:27:23.483658       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-08-13T20:27:23.484912135+00:00 stderr F I0813 20:27:23.484887       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-08-13T20:27:23.677401369+00:00 stderr F I0813 20:27:23.677220       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-08-13T20:27:23.677401369+00:00 stderr F I0813 20:27:23.677313       1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-08-13T20:27:23.880580669+00:00 stderr F I0813 20:27:23.880454       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-08-13T20:27:23.880580669+00:00 stderr F I0813 20:27:23.880539       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:27:24.074331689+00:00 stderr F I0813 20:27:24.074174       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:27:24.074331689+00:00 stderr F I0813 20:27:24.074250       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:27:24.275878012+00:00 stderr F I0813 20:27:24.275651       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:27:24.275878012+00:00 stderr F I0813 20:27:24.275734       1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2025-08-13T20:27:24.490976322+00:00 stderr F I0813 20:27:24.490877       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2025-08-13T20:27:24.490976322+00:00 stderr F I0813 20:27:24.490949       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2025-08-13T20:27:24.678108233+00:00 stderr F I0813 20:27:24.677687       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2025-08-13T20:27:24.678108233+00:00 stderr F I0813 20:27:24.677785       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2025-08-13T20:27:24.879197503+00:00 stderr F I0813 20:27:24.879127       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2025-08-13T20:27:24.879323967+00:00 stderr F I0813 20:27:24.879309       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2025-08-13T20:27:25.074599921+00:00 stderr F I0813 20:27:25.074447       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2025-08-13T20:27:25.074599921+00:00 stderr F I0813 20:27:25.074543       1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2025-08-13T20:27:25.278032578+00:00 stderr F I0813 20:27:25.277904       1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2025-08-13T20:27:25.278091839+00:00 stderr F I0813 20:27:25.278028       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2025-08-13T20:27:25.482462533+00:00 stderr F I0813 20:27:25.481667       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2025-08-13T20:27:25.482462533+00:00 stderr F I0813 20:27:25.481758       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2025-08-13T20:27:25.675832623+00:00 stderr F I0813 20:27:25.675602       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2025-08-13T20:27:25.675832623+00:00 stderr F I0813 20:27:25.675686       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:27:25.875290516+00:00 stderr F I0813 20:27:25.875144       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:27:25.875290516+00:00 stderr F I0813 20:27:25.875220       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:27:26.074473591+00:00 stderr F I0813 20:27:26.074356       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:27:26.074473591+00:00 stderr F I0813 20:27:26.074429       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-08-13T20:27:26.275488309+00:00 stderr F I0813 20:27:26.275383       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-08-13T20:27:26.275488309+00:00 stderr F I0813 20:27:26.275470 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:27:26.475374105+00:00 stderr F I0813 20:27:26.475280 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:27:26.475374105+00:00 stderr F I0813 20:27:26.475356 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:27:26.682038284+00:00 stderr F I0813 20:27:26.681942 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:27:26.682249990+00:00 stderr F I0813 20:27:26.682232 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:27:26.880741565+00:00 stderr F I0813 20:27:26.880534 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:27:26.880741565+00:00 stderr F I0813 20:27:26.880624 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:27:27.085580382+00:00 stderr F I0813 20:27:27.085487 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:27:27.085580382+00:00 stderr F I0813 20:27:27.085559 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:27:27.294074424+00:00 stderr F I0813 20:27:27.293871 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:27:27.294074424+00:00 stderr F I0813 20:27:27.293955 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) 
/egressservices.k8s.ovn.org 2025-08-13T20:27:27.479526467+00:00 stderr F I0813 20:27:27.479420 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:27:27.479526467+00:00 stderr F I0813 20:27:27.479508 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:27:27.713295301+00:00 stderr F I0813 20:27:27.713142 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:27:27.713479096+00:00 stderr F I0813 20:27:27.713458 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:27:27.905872938+00:00 stderr F I0813 20:27:27.905709 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:27:27.905872938+00:00 stderr F I0813 20:27:27.905783 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:27:28.076200558+00:00 stderr F I0813 20:27:28.075234 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:27:28.076200558+00:00 stderr F I0813 20:27:28.075333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:27:28.272687606+00:00 stderr F I0813 20:27:28.272579 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:27:28.272983305+00:00 stderr F I0813 20:27:28.272963 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:27:28.475242958+00:00 stderr F I0813 20:27:28.475165 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:27:28.475400733+00:00 stderr F I0813 20:27:28.475385 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:27:28.676608776+00:00 stderr F I0813 20:27:28.676464 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:27:28.676866624+00:00 stderr F I0813 20:27:28.676840 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:27:28.874520685+00:00 stderr F I0813 20:27:28.874390 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:27:28.874520685+00:00 stderr F I0813 20:27:28.874488 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:27:29.078941491+00:00 stderr F I0813 20:27:29.078704 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:27:29.078941491+00:00 stderr F I0813 20:27:29.078885 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:27:29.274753910+00:00 stderr F I0813 20:27:29.274270 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:27:29.274753910+00:00 stderr F I0813 20:27:29.274357 1 log.go:245] 
reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:27:29.478494395+00:00 stderr F I0813 20:27:29.476646 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:27:29.478494395+00:00 stderr F I0813 20:27:29.476729 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:27:29.673460880+00:00 stderr F I0813 20:27:29.673334 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:27:29.673460880+00:00 stderr F I0813 20:27:29.673448 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:27:29.903861329+00:00 stderr F I0813 20:27:29.903288 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:27:29.903924220+00:00 stderr F I0813 20:27:29.903904 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:27:30.075440425+00:00 stderr F I0813 20:27:30.075353 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:27:30.075577639+00:00 stderr F I0813 20:27:30.075562 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:27:30.275364651+00:00 stderr F I0813 20:27:30.275309 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:27:30.275477125+00:00 stderr F I0813 20:27:30.275462 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:27:30.475919065+00:00 stderr F I0813 20:27:30.475812 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:27:30.475919065+00:00 stderr F I0813 20:27:30.475891 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:27:30.676348546+00:00 stderr F I0813 20:27:30.676239 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:27:30.676403658+00:00 stderr F I0813 20:27:30.676357 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:27:30.875751948+00:00 stderr F I0813 20:27:30.875595 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:27:30.875751948+00:00 stderr F I0813 20:27:30.875706 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:27:31.077562149+00:00 stderr F I0813 20:27:31.077426 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:27:31.077562149+00:00 stderr F I0813 20:27:31.077503 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:27:31.275134298+00:00 stderr F I0813 20:27:31.274718 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:27:31.275134298+00:00 stderr F I0813 20:27:31.274879 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 
2025-08-13T20:27:31.478038670+00:00 stderr F I0813 20:27:31.477907 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:27:31.478038670+00:00 stderr F I0813 20:27:31.477991 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:27:31.677351119+00:00 stderr F I0813 20:27:31.677256 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:27:31.677411741+00:00 stderr F I0813 20:27:31.677367 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:27:31.902942360+00:00 stderr F I0813 20:27:31.902022 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:27:31.902942360+00:00 stderr F I0813 20:27:31.902168 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:27:32.074235438+00:00 stderr F I0813 20:27:32.074163 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:27:32.074288709+00:00 stderr F I0813 20:27:32.074270 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:27:32.275389269+00:00 stderr F I0813 20:27:32.274892 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:27:32.275389269+00:00 stderr F I0813 20:27:32.274967 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:27:32.478609180+00:00 stderr F I0813 20:27:32.478481 1 log.go:245] Apply / Create of (/v1, 
Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:27:32.478609180+00:00 stderr F I0813 20:27:32.478551 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:27:32.676428627+00:00 stderr F I0813 20:27:32.676267 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:27:32.676428627+00:00 stderr F I0813 20:27:32.676390 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:27:32.875700675+00:00 stderr F I0813 20:27:32.875605 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:27:32.875700675+00:00 stderr F I0813 20:27:32.875675 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:27:33.074303934+00:00 stderr F I0813 20:27:33.074195 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:27:33.074303934+00:00 stderr F I0813 20:27:33.074294 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:27:33.275156507+00:00 stderr F I0813 20:27:33.275042 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:27:33.275156507+00:00 stderr F I0813 20:27:33.275119 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:27:33.486121099+00:00 stderr F I0813 20:27:33.485938 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:27:33.486121099+00:00 stderr F I0813 20:27:33.486059 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) 
openshift-host-network/host-network-namespace-quotas 2025-08-13T20:27:33.676598736+00:00 stderr F I0813 20:27:33.676496 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:27:33.676658168+00:00 stderr F I0813 20:27:33.676624 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:27:33.884867211+00:00 stderr F I0813 20:27:33.884746 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:27:33.884867211+00:00 stderr F I0813 20:27:33.884851 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:27:34.117624456+00:00 stderr F I0813 20:27:34.117501 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:27:34.117624456+00:00 stderr F I0813 20:27:34.117581 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:27:34.274883432+00:00 stderr F I0813 20:27:34.274724 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:27:34.274883432+00:00 stderr F I0813 20:27:34.274855 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:27:34.475435797+00:00 stderr F I0813 20:27:34.474983 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:27:34.475435797+00:00 stderr F I0813 20:27:34.475083 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:27:34.674946292+00:00 stderr F I0813 20:27:34.674724 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was 
successful 2025-08-13T20:27:34.674946292+00:00 stderr F I0813 20:27:34.674915 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:27:34.877277537+00:00 stderr F I0813 20:27:34.877183 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:27:34.877430331+00:00 stderr F I0813 20:27:34.877415 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:27:35.074172647+00:00 stderr F I0813 20:27:35.074074 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:27:35.074302281+00:00 stderr F I0813 20:27:35.074286 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:27:35.324443254+00:00 stderr F I0813 20:27:35.323637 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:27:35.324562067+00:00 stderr F I0813 20:27:35.324546 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:27:35.476392218+00:00 stderr F I0813 20:27:35.476338 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:27:35.476532242+00:00 stderr F I0813 20:27:35.476514 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:27:35.679270359+00:00 stderr F I0813 20:27:35.679214 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:27:35.679430624+00:00 stderr F I0813 20:27:35.679415 1 log.go:245] reconciling (/v1, Kind=Service) 
openshift-network-diagnostics/network-check-source 2025-08-13T20:27:35.878030423+00:00 stderr F I0813 20:27:35.877335 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:27:35.878030423+00:00 stderr F I0813 20:27:35.877973 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:27:36.078351921+00:00 stderr F I0813 20:27:36.078295 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:27:36.078520286+00:00 stderr F I0813 20:27:36.078497 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:27:36.277145725+00:00 stderr F I0813 20:27:36.276983 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:27:36.277145725+00:00 stderr F I0813 20:27:36.277125 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:27:36.473252673+00:00 stderr F I0813 20:27:36.473140 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:27:36.473252673+00:00 stderr F I0813 20:27:36.473234 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:27:36.682613589+00:00 stderr F I0813 20:27:36.682452 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:27:36.682848456+00:00 stderr F I0813 20:27:36.682747 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:27:36.875639039+00:00 stderr F I0813 
20:27:36.875564 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:27:36.875764662+00:00 stderr F I0813 20:27:36.875750 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:27:37.077621424+00:00 stderr F I0813 20:27:37.077553 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:27:37.077683456+00:00 stderr F I0813 20:27:37.077622 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:27:37.275223275+00:00 stderr F I0813 20:27:37.275132 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:27:37.275264046+00:00 stderr F I0813 20:27:37.275234 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:27:37.476175620+00:00 stderr F I0813 20:27:37.475378 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:27:37.476175620+00:00 stderr F I0813 20:27:37.475454 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:27:37.675347645+00:00 stderr F I0813 20:27:37.675220 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:27:37.675347645+00:00 stderr F I0813 20:27:37.675305 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:27:37.876045223+00:00 stderr F I0813 20:27:37.875901 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) 
/network-node-identity was successful 2025-08-13T20:27:37.876045223+00:00 stderr F I0813 20:27:37.876018 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:27:38.075255540+00:00 stderr F I0813 20:27:38.075132 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:27:38.075255540+00:00 stderr F I0813 20:27:38.075206 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:27:38.275369412+00:00 stderr F I0813 20:27:38.275255 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:27:38.275369412+00:00 stderr F I0813 20:27:38.275327 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:27:38.476051670+00:00 stderr F I0813 20:27:38.475905 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:27:38.476093981+00:00 stderr F I0813 20:27:38.476044 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:27:38.673141516+00:00 stderr F I0813 20:27:38.672967 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:27:38.673141516+00:00 stderr F I0813 20:27:38.673065 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:27:38.874939536+00:00 stderr F I0813 20:27:38.874841 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) 
openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:27:38.874983707+00:00 stderr F I0813 20:27:38.874942 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:27:39.080548935+00:00 stderr F I0813 20:27:39.080478 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:27:39.080742101+00:00 stderr F I0813 20:27:39.080726 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:27:39.275239472+00:00 stderr F I0813 20:27:39.275107 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:27:39.275320675+00:00 stderr F I0813 20:27:39.275260 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:27:39.479622147+00:00 stderr F I0813 20:27:39.479412 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:27:39.479622147+00:00 stderr F I0813 20:27:39.479482 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:27:39.678976277+00:00 stderr F I0813 20:27:39.678915 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:27:39.679137262+00:00 stderr F I0813 20:27:39.679119 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:27:39.874529669+00:00 stderr F I0813 20:27:39.874368 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:27:39.874529669+00:00 stderr F I0813 20:27:39.874466 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:27:40.074724013+00:00 stderr F I0813 20:27:40.074608 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:27:40.074724013+00:00 stderr F I0813 20:27:40.074684 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:27:40.277202343+00:00 stderr F I0813 20:27:40.277103 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:27:40.277240424+00:00 stderr F I0813 20:27:40.277203 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:27:40.475867144+00:00 stderr F I0813 20:27:40.475539 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:27:40.476030788+00:00 stderr F I0813 20:27:40.475980 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:27:40.684934542+00:00 stderr F I0813 20:27:40.684749 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:27:40.717508413+00:00 stderr F I0813 20:27:40.717296 1 log.go:245] Operconfig Controller complete 2025-08-13T20:28:15.445211430+00:00 stderr F I0813 20:28:15.444915 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:30:15.103488261+00:00 stderr F I0813 20:30:15.102262 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n 
openshift-network-diagnostics is applied 2025-08-13T20:30:15.120589893+00:00 stderr F I0813 20:30:15.120430 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.131220018+00:00 stderr F I0813 20:30:15.131144 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.144720936+00:00 stderr F I0813 20:30:15.144172 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.162873468+00:00 stderr F I0813 20:30:15.160154 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.176998854+00:00 stderr F I0813 20:30:15.175634 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.238151032+00:00 stderr F I0813 20:30:15.235906 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.253899975+00:00 stderr F I0813 20:30:15.253546 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.271076388+00:00 stderr F I0813 20:30:15.270870 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.319889072+00:00 stderr F I0813 20:30:15.319191 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.333916245+00:00 stderr F I0813 20:30:15.332979 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.501238095+00:00 stderr F I0813 20:30:15.501167 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.701316776+00:00 stderr F I0813 20:30:15.700247 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:15.899667878+00:00 stderr F I0813 20:30:15.899606 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.105764482+00:00 stderr F I0813 20:30:16.105626 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.308067017+00:00 stderr F I0813 20:30:16.307675 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.500222871+00:00 stderr F I0813 20:30:16.500105 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.707581772+00:00 stderr F I0813 20:30:16.707413 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:16.765385643+00:00 stderr F I0813 20:30:16.765326 1 
log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T20:30:16.766400673+00:00 stderr F I0813 20:30:16.766196 1 log.go:245] successful reconciliation 2025-08-13T20:30:16.903174684+00:00 stderr F I0813 20:30:16.903121 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:30:17.102322299+00:00 stderr F I0813 20:30:17.101879 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:17.308739373+00:00 stderr F I0813 20:30:17.307947 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:17.504629714+00:00 stderr F I0813 20:30:17.503536 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:17.700326829+00:00 stderr F I0813 20:30:17.700165 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:17.902967744+00:00 stderr F I0813 20:30:17.902587 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:30:18.101906923+00:00 stderr F I0813 20:30:18.101174 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:30:18.146436613+00:00 stderr F I0813 20:30:18.146369 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T20:30:18.147537174+00:00 stderr F I0813 
20:30:18.147506 1 log.go:245] successful reconciliation 2025-08-13T20:30:18.301902702+00:00 stderr F I0813 20:30:18.301414 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:30:18.497639668+00:00 stderr F I0813 20:30:18.497531 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:30:19.369631973+00:00 stderr F I0813 20:30:19.368156 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T20:30:19.369722486+00:00 stderr F I0813 20:30:19.369627 1 log.go:245] successful reconciliation 2025-08-13T20:30:40.720631459+00:00 stderr F I0813 20:30:40.720502 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:30:41.019113419+00:00 stderr F I0813 20:30:41.018948 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:30:41.023017072+00:00 stderr F I0813 20:30:41.022967 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:30:41.027077108+00:00 stderr F I0813 20:30:41.026934 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:30:41.027077108+00:00 stderr F I0813 20:30:41.026973 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002a58f80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:30:41.033230705+00:00 stderr F I0813 20:30:41.033178 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 
1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:30:41.033311127+00:00 stderr F I0813 20:30:41.033296 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:30:41.033345358+00:00 stderr F I0813 20:30:41.033333 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:30:41.037094376+00:00 stderr F I0813 20:30:41.037018 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:30:41.037164898+00:00 stderr F I0813 20:30:41.037147 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:30:41.037224280+00:00 stderr F I0813 20:30:41.037206 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:30:41.037263221+00:00 stderr F I0813 20:30:41.037248 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:30:41.037358094+00:00 stderr F I0813 20:30:41.037340 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:30:41.053752475+00:00 stderr F I0813 20:30:41.053611 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:30:41.072225106+00:00 stderr F I0813 20:30:41.072124 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T20:30:41.072312749+00:00 stderr F I0813 20:30:41.072299 1 log.go:245] Starting render phase 2025-08-13T20:30:41.085439336+00:00 stderr F I0813 20:30:41.085383 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107 2025-08-13T20:30:41.132865619+00:00 stderr F I0813 20:30:41.132699 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:30:41.132865619+00:00 stderr F I0813 20:30:41.132756 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:30:41.132968252+00:00 stderr F I0813 20:30:41.132901 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:30:41.132968252+00:00 stderr F I0813 20:30:41.132950 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:30:41.151236047+00:00 stderr F I0813 20:30:41.151108 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:30:41.151236047+00:00 stderr F I0813 20:30:41.151155 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:30:41.164714875+00:00 stderr F I0813 20:30:41.164602 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:30:41.181887598+00:00 stderr F I0813 20:30:41.181699 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:30:41.189108226+00:00 stderr F I0813 20:30:41.188930 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:30:41.189108226+00:00 stderr F I0813 20:30:41.189004 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:30:41.203751137+00:00 stderr F I0813 20:30:41.203609 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 
2025-08-13T20:30:41.203751137+00:00 stderr F I0813 20:30:41.203682 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:30:41.213886538+00:00 stderr F I0813 20:30:41.213712 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:30:41.213886538+00:00 stderr F I0813 20:30:41.213845 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:30:41.224961757+00:00 stderr F I0813 20:30:41.224729 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:30:41.224961757+00:00 stderr F I0813 20:30:41.224884 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:30:41.231178145+00:00 stderr F I0813 20:30:41.231089 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:30:41.231178145+00:00 stderr F I0813 20:30:41.231141 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:30:41.236415726+00:00 stderr F I0813 20:30:41.236325 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:30:41.236415726+00:00 stderr F I0813 20:30:41.236383 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:30:41.241243505+00:00 stderr F I0813 20:30:41.241148 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:30:41.241243505+00:00 stderr F I0813 20:30:41.241199 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:30:41.246169636+00:00 stderr F I0813 20:30:41.246013 1 
log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:30:41.246169636+00:00 stderr F I0813 20:30:41.246093 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:30:41.251924512+00:00 stderr F I0813 20:30:41.251822 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:30:41.251924512+00:00 stderr F I0813 20:30:41.251873 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:30:41.265869343+00:00 stderr F I0813 20:30:41.265736 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:30:41.265869343+00:00 stderr F I0813 20:30:41.265849 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:30:41.466570622+00:00 stderr F I0813 20:30:41.466472 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:30:41.466570622+00:00 stderr F I0813 20:30:41.466545 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:30:41.669875076+00:00 stderr F I0813 20:30:41.668975 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:30:41.669875076+00:00 stderr F I0813 20:30:41.669075 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:30:41.871059470+00:00 stderr F I0813 20:30:41.870961 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:30:41.871245545+00:00 stderr F I0813 20:30:41.871225 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:30:42.066910000+00:00 stderr F I0813 20:30:42.066720 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:30:42.066982132+00:00 stderr F I0813 20:30:42.066927 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:30:42.265595481+00:00 stderr F I0813 20:30:42.265493 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:30:42.265595481+00:00 stderr F I0813 20:30:42.265566 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:30:42.467688321+00:00 stderr F I0813 20:30:42.467458 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:30:42.467688321+00:00 stderr F I0813 20:30:42.467506 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:30:42.664977302+00:00 stderr F I0813 20:30:42.664915 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:30:42.664977302+00:00 stderr F I0813 20:30:42.664963 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:30:42.865076634+00:00 stderr F I0813 20:30:42.864938 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:30:42.865076634+00:00 stderr F I0813 20:30:42.865013 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:30:43.064592409+00:00 stderr F I0813 20:30:43.064424 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was 
successful 2025-08-13T20:30:43.064592409+00:00 stderr F I0813 20:30:43.064503 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:30:43.266429742+00:00 stderr F I0813 20:30:43.266315 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:30:43.266429742+00:00 stderr F I0813 20:30:43.266384 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:30:43.467720158+00:00 stderr F I0813 20:30:43.467559 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:30:43.467720158+00:00 stderr F I0813 20:30:43.467643 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:30:43.682762459+00:00 stderr F I0813 20:30:43.682707 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:30:43.682942494+00:00 stderr F I0813 20:30:43.682926 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:30:43.901669821+00:00 stderr F I0813 20:30:43.899014 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:30:43.901669821+00:00 stderr F I0813 20:30:43.899096 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:30:44.067638902+00:00 stderr F I0813 20:30:44.067574 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:30:44.067746905+00:00 stderr F I0813 20:30:44.067732 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:30:44.268109565+00:00 stderr F I0813 20:30:44.268021 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 
2025-08-13T20:30:44.268392203+00:00 stderr F I0813 20:30:44.268377 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:30:44.467892528+00:00 stderr F I0813 20:30:44.466314 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:30:44.467892528+00:00 stderr F I0813 20:30:44.466385 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:30:44.672502490+00:00 stderr F I0813 20:30:44.672371 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:30:44.672502490+00:00 stderr F I0813 20:30:44.672441 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:30:44.866728623+00:00 stderr F I0813 20:30:44.866626 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:30:44.866728623+00:00 stderr F I0813 20:30:44.866699 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:30:45.068913005+00:00 stderr F I0813 20:30:45.068734 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:30:45.068913005+00:00 stderr F I0813 20:30:45.068884 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:30:45.266746562+00:00 stderr F I0813 20:30:45.266691 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:30:45.267020740+00:00 stderr F I0813 20:30:45.266993 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:30:45.467309668+00:00 stderr F I0813 
20:30:45.467206 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:30:45.467309668+00:00 stderr F I0813 20:30:45.467294 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:30:45.668306165+00:00 stderr F I0813 20:30:45.667615 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:30:45.668306165+00:00 stderr F I0813 20:30:45.668254 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:30:45.868098279+00:00 stderr F I0813 20:30:45.867943 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:30:45.868159231+00:00 stderr F I0813 20:30:45.868026 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:30:46.066620786+00:00 stderr F I0813 20:30:46.066456 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:30:46.066620786+00:00 stderr F I0813 20:30:46.066532 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:30:46.266411099+00:00 stderr F I0813 20:30:46.266159 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:30:46.266411099+00:00 stderr F I0813 20:30:46.266255 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:30:46.476275982+00:00 stderr F I0813 20:30:46.476164 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:30:46.476275982+00:00 
stderr F I0813 20:30:46.476242 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:30:46.669208458+00:00 stderr F I0813 20:30:46.669152 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:30:46.669312791+00:00 stderr F I0813 20:30:46.669298 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:30:46.866456978+00:00 stderr F I0813 20:30:46.865708 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:30:46.866456978+00:00 stderr F I0813 20:30:46.866417 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:30:47.065006916+00:00 stderr F I0813 20:30:47.064927 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:30:47.065006916+00:00 stderr F I0813 20:30:47.064995 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:30:47.266742004+00:00 stderr F I0813 20:30:47.266681 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:30:47.266862928+00:00 stderr F I0813 20:30:47.266745 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:30:47.470235984+00:00 stderr F I0813 20:30:47.470126 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:30:47.470235984+00:00 stderr F I0813 20:30:47.470209 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 
2025-08-13T20:30:47.666509226+00:00 stderr F I0813 20:30:47.666430 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:30:47.666509226+00:00 stderr F I0813 20:30:47.666477 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:30:47.875247236+00:00 stderr F I0813 20:30:47.875142 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:30:47.875247236+00:00 stderr F I0813 20:30:47.875215 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:30:48.071547479+00:00 stderr F I0813 20:30:48.071454 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:30:48.071547479+00:00 stderr F I0813 20:30:48.071525 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:30:48.272230598+00:00 stderr F I0813 20:30:48.272099 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:30:48.272230598+00:00 stderr F I0813 20:30:48.272166 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:30:48.476084348+00:00 stderr F I0813 20:30:48.475878 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:30:48.476084348+00:00 stderr F I0813 20:30:48.475947 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:30:48.670865127+00:00 stderr F I0813 20:30:48.670753 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, 
Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:30:48.670932099+00:00 stderr F I0813 20:30:48.670882 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:30:48.907618253+00:00 stderr F I0813 20:30:48.907289 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:30:48.907618253+00:00 stderr F I0813 20:30:48.907398 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:30:49.098693795+00:00 stderr F I0813 20:30:49.098640 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:30:49.098886131+00:00 stderr F I0813 20:30:49.098866 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:30:49.265969404+00:00 stderr F I0813 20:30:49.265883 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:30:49.265969404+00:00 stderr F I0813 20:30:49.265948 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:30:49.472286255+00:00 stderr F I0813 20:30:49.470468 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:30:49.472286255+00:00 stderr F I0813 20:30:49.470607 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:30:49.667573459+00:00 stderr F I0813 20:30:49.667472 1 log.go:245] Apply / Create 
of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:30:49.667630500+00:00 stderr F I0813 20:30:49.667601 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:30:49.866920589+00:00 stderr F I0813 20:30:49.866592 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:30:49.866920589+00:00 stderr F I0813 20:30:49.866668 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:30:50.067448213+00:00 stderr F I0813 20:30:50.067332 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:30:50.067448213+00:00 stderr F I0813 20:30:50.067404 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:30:50.270901692+00:00 stderr F I0813 20:30:50.269555 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:30:50.270901692+00:00 stderr F I0813 20:30:50.269634 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:30:50.465650800+00:00 stderr F I0813 20:30:50.465403 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:30:50.465650800+00:00 stderr F I0813 20:30:50.465480 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:30:50.664696432+00:00 stderr F I0813 20:30:50.664595 1 log.go:245] Apply / Create of (/v1, 
Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:30:50.664696432+00:00 stderr F I0813 20:30:50.664675 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:30:50.864598677+00:00 stderr F I0813 20:30:50.864493 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:30:50.864598677+00:00 stderr F I0813 20:30:50.864558 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:30:51.068003334+00:00 stderr F I0813 20:30:51.067922 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:30:51.068003334+00:00 stderr F I0813 20:30:51.067976 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:30:51.268356824+00:00 stderr F I0813 20:30:51.268193 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:30:51.268356824+00:00 stderr F I0813 20:30:51.268267 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:30:51.465676606+00:00 stderr F I0813 20:30:51.465523 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:30:51.465676606+00:00 stderr F I0813 20:30:51.465609 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:30:51.667382744+00:00 stderr F I0813 20:30:51.667285 1 log.go:245] Apply / 
Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:30:51.667382744+00:00 stderr F I0813 20:30:51.667357 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:30:51.872449269+00:00 stderr F I0813 20:30:51.872009 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:30:51.872449269+00:00 stderr F I0813 20:30:51.872198 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:30:52.067623169+00:00 stderr F I0813 20:30:52.067490 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:30:52.067623169+00:00 stderr F I0813 20:30:52.067578 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:30:52.266402163+00:00 stderr F I0813 20:30:52.266348 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:30:52.266521707+00:00 stderr F I0813 20:30:52.266506 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:30:52.465656291+00:00 stderr F I0813 20:30:52.465489 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:30:52.465873007+00:00 stderr F I0813 20:30:52.465847 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:30:52.667109222+00:00 stderr F I0813 20:30:52.667051 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 
2025-08-13T20:30:52.667232086+00:00 stderr F I0813 20:30:52.667218 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:30:52.870098207+00:00 stderr F I0813 20:30:52.869973 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:30:52.870146849+00:00 stderr F I0813 20:30:52.870093 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:30:53.069201471+00:00 stderr F I0813 20:30:53.069143 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:30:53.069335705+00:00 stderr F I0813 20:30:53.069321 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:30:53.269534440+00:00 stderr F I0813 20:30:53.269479 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:30:53.269729705+00:00 stderr F I0813 20:30:53.269682 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:30:53.465182654+00:00 stderr F I0813 20:30:53.465126 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:30:53.465304477+00:00 stderr F I0813 20:30:53.465290 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:30:53.668848778+00:00 stderr F I0813 20:30:53.668682 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:30:53.668913010+00:00 stderr F I0813 20:30:53.668852 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:30:53.868876008+00:00 stderr F I0813 20:30:53.868747 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:30:53.868934110+00:00 stderr F I0813 20:30:53.868885 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:30:54.068198808+00:00 stderr F I0813 20:30:54.068087 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:30:54.068198808+00:00 stderr F I0813 20:30:54.068153 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:30:54.268524407+00:00 stderr F I0813 20:30:54.268393 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:30:54.268524407+00:00 stderr F I0813 20:30:54.268475 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:30:54.465879089+00:00 stderr F I0813 20:30:54.465370 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:30:54.465879089+00:00 stderr F I0813 20:30:54.465444 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:30:54.670215923+00:00 stderr F I0813 20:30:54.670087 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:30:54.670215923+00:00 stderr F I0813 20:30:54.670201 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:30:54.867530015+00:00 stderr F I0813 20:30:54.867414 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:30:54.867530015+00:00 stderr F I0813 20:30:54.867497 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:30:55.074336970+00:00 stderr F I0813 20:30:55.074217 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:30:55.074336970+00:00 stderr F I0813 20:30:55.074299 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:30:55.287739444+00:00 stderr F I0813 20:30:55.287549 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:30:55.287739444+00:00 stderr F I0813 20:30:55.287648 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:30:55.466752630+00:00 stderr F I0813 20:30:55.466509 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:30:55.467058379+00:00 stderr F I0813 20:30:55.466932 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:30:55.667846611+00:00 stderr F I0813 20:30:55.667238 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:30:55.668121898+00:00 stderr F I0813 20:30:55.668065 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:30:55.865593585+00:00 stderr F I0813 20:30:55.865535 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:30:55.865732009+00:00 stderr F I0813 20:30:55.865716 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:30:56.065572483+00:00 stderr F I0813 20:30:56.065470 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:30:56.065572483+00:00 stderr F I0813 20:30:56.065543 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:30:56.265318505+00:00 stderr F I0813 20:30:56.265244 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:30:56.265318505+00:00 stderr F I0813 20:30:56.265297 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:30:56.464912203+00:00 stderr F I0813 20:30:56.464767 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:30:56.465016976+00:00 stderr F I0813 20:30:56.465001 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:30:56.667616009+00:00 stderr F I0813 20:30:56.667003 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:30:56.667877337+00:00 stderr F I0813 20:30:56.667855 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:30:56.872565761+00:00 stderr F I0813 20:30:56.872507 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:30:56.872705995+00:00 stderr F I0813 20:30:56.872686 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:30:57.069824261+00:00 stderr F I0813 20:30:57.069557 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:30:57.069824261+00:00 stderr F I0813 20:30:57.069631 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:30:57.267319578+00:00 stderr F I0813 20:30:57.267193 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:30:57.267319578+00:00 stderr F I0813 20:30:57.267269 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:30:57.467141612+00:00 stderr F I0813 20:30:57.466963 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:30:57.467141612+00:00 stderr F I0813 20:30:57.467073 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:30:57.665717791+00:00 stderr F I0813 20:30:57.665665 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:30:57.665977288+00:00 stderr F I0813 20:30:57.665939 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:30:57.871306161+00:00 stderr F I0813 20:30:57.871173 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:30:57.871306161+00:00 stderr F I0813 20:30:57.871241 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:30:58.065934614+00:00 stderr F I0813 20:30:58.065885 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:30:58.066085039+00:00 stderr F I0813 20:30:58.066066 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:30:58.264672847+00:00 stderr F I0813 20:30:58.264573 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:30:58.264672847+00:00 stderr F I0813 20:30:58.264642 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:30:58.465591573+00:00 stderr F I0813 20:30:58.465490 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:30:58.465735327+00:00 stderr F I0813 20:30:58.465714 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:30:58.667376133+00:00 stderr F I0813 20:30:58.667275 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:30:58.667376133+00:00 stderr F I0813 20:30:58.667355 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:30:58.865918911+00:00 stderr F I0813 20:30:58.865764 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:30:58.865918911+00:00 stderr F I0813 20:30:58.865873 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:30:59.065382604+00:00 stderr F I0813 20:30:59.065328 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:30:59.065550759+00:00 stderr F I0813 20:30:59.065531 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:30:59.266353811+00:00 stderr F I0813 20:30:59.266260 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:30:59.266526286+00:00 stderr F I0813 20:30:59.266505 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:30:59.467545325+00:00 stderr F I0813 20:30:59.467453 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:30:59.467727930+00:00 stderr F I0813 20:30:59.467713 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:30:59.668898103+00:00 stderr F I0813 20:30:59.668676 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:30:59.668898103+00:00 stderr F I0813 20:30:59.668761 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:30:59.867370678+00:00 stderr F I0813 20:30:59.866937 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:30:59.867370678+00:00 stderr F I0813 20:30:59.867013 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:31:00.067346366+00:00 stderr F I0813 20:31:00.067238 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:31:00.067346366+00:00 stderr F I0813 20:31:00.067294 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:31:00.267185481+00:00 stderr F I0813 20:31:00.266536 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:31:00.267185481+00:00 stderr F I0813 20:31:00.266619 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:31:00.468525989+00:00 stderr F I0813 20:31:00.468416 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:31:00.468525989+00:00 stderr F I0813 20:31:00.468500 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:31:00.679869654+00:00 stderr F I0813 20:31:00.677152 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:31:00.679869654+00:00 stderr F I0813 20:31:00.677235 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:31:00.866958832+00:00 stderr F I0813 20:31:00.866765 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:31:00.867020224+00:00 stderr F I0813 20:31:00.866957 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:31:01.066267221+00:00 stderr F I0813 20:31:01.066207 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:31:01.066368744+00:00 stderr F I0813 20:31:01.066354 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T20:31:01.266406504+00:00 stderr F I0813 20:31:01.266312 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:31:01.266406504+00:00 stderr F I0813 20:31:01.266382 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:31:01.468601207+00:00 stderr F I0813 20:31:01.468511 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T20:31:01.468601207+00:00 stderr F I0813 20:31:01.468561 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T20:31:01.668015968+00:00 stderr F I0813 20:31:01.667907 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T20:31:01.668015968+00:00 stderr F I0813 20:31:01.667978 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T20:31:01.874729560+00:00 stderr F I0813 20:31:01.874545 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:31:01.897344430+00:00 stderr F I0813 20:31:01.897256 1 log.go:245] Operconfig Controller complete
2025-08-13T20:31:15.463724055+00:00 stderr F I0813 20:31:15.463641 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T20:34:01.898846109+00:00 stderr F I0813 20:34:01.898266 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:34:02.347061513+00:00 stderr F I0813 20:34:02.346995 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:34:02.349605686+00:00 stderr F I0813 20:34:02.349583 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:34:02.353003844+00:00 stderr F I0813 20:34:02.352974 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:34:02.353157229+00:00 stderr F I0813 20:34:02.353045 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002c81300 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:34:02.358610565+00:00 stderr F I0813 20:34:02.358578 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:34:02.358666277+00:00 stderr F I0813 20:34:02.358653 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:34:02.358710538+00:00 stderr F I0813 20:34:02.358698 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:34:02.362291441+00:00 stderr F I0813 20:34:02.362252 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:34:02.362351813+00:00 stderr F I0813 20:34:02.362339 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:34:02.362383404+00:00 stderr F I0813 20:34:02.362371 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:34:02.362413865+00:00 stderr F I0813 20:34:02.362401 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:34:02.362584459+00:00 stderr F I0813 20:34:02.362561 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:34:02.376896681+00:00 stderr F I0813 20:34:02.376855 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:34:02.394217089+00:00 stderr F I0813 20:34:02.394175 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:34:02.394371933+00:00 stderr F I0813 20:34:02.394355 1 log.go:245] Starting render phase
2025-08-13T20:34:02.408479919+00:00 stderr F I0813 20:34:02.408389 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2025-08-13T20:34:02.459436904+00:00 stderr F I0813 20:34:02.459339 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-08-13T20:34:02.459436904+00:00 stderr F I0813 20:34:02.459359 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-08-13T20:34:02.459436904+00:00 stderr F I0813 20:34:02.459390 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-08-13T20:34:02.459436904+00:00 stderr F I0813 20:34:02.459415 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-08-13T20:34:02.481868178+00:00 stderr F I0813 20:34:02.477364 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-08-13T20:34:02.481868178+00:00 stderr F I0813 20:34:02.477405 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-08-13T20:34:02.490105455+00:00 stderr F I0813 20:34:02.490028 1 log.go:245] Render phase done, rendered 112 objects
2025-08-13T20:34:02.516704930+00:00 stderr F I0813 20:34:02.515292 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-08-13T20:34:02.525922575+00:00 stderr F I0813 20:34:02.525857 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-08-13T20:34:02.526061659+00:00 stderr F I0813 20:34:02.526039 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-08-13T20:34:02.538826236+00:00 stderr F I0813 20:34:02.538702 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-08-13T20:34:02.538885037+00:00 stderr F I0813 20:34:02.538825 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-08-13T20:34:02.549844182+00:00 stderr F I0813 20:34:02.549367 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-08-13T20:34:02.549844182+00:00 stderr F I0813 20:34:02.549439 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-08-13T20:34:02.557690168+00:00 stderr F I0813 20:34:02.557571 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-08-13T20:34:02.557690168+00:00 stderr F I0813 20:34:02.557644 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-08-13T20:34:02.564064961+00:00 stderr F I0813 20:34:02.563968 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-08-13T20:34:02.564064961+00:00 stderr F I0813 20:34:02.564018 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-08-13T20:34:02.571366011+00:00 stderr F I0813 20:34:02.571235 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-08-13T20:34:02.571528926+00:00 stderr F I0813 20:34:02.571361 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-08-13T20:34:02.580674579+00:00 stderr F I0813 20:34:02.580567 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-08-13T20:34:02.580925406+00:00 stderr F I0813 20:34:02.580869 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-08-13T20:34:02.590179692+00:00 stderr F I0813 20:34:02.590113 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-08-13T20:34:02.590179692+00:00 stderr F I0813 20:34:02.590160 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-08-13T20:34:02.596106172+00:00 stderr F I0813 20:34:02.595989 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-08-13T20:34:02.596217205+00:00 stderr F I0813 20:34:02.596144 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-08-13T20:34:02.605350648+00:00 stderr F I0813 20:34:02.605284 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-08-13T20:34:02.605471461+00:00 stderr F I0813 20:34:02.605412 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-08-13T20:34:02.791759386+00:00 stderr F I0813 20:34:02.791696 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-08-13T20:34:02.791857339+00:00 stderr F I0813 20:34:02.791826 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-08-13T20:34:02.992460636+00:00 stderr F I0813 20:34:02.992338 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-08-13T20:34:02.992460636+00:00 stderr F I0813 20:34:02.992427 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-08-13T20:34:03.190330584+00:00 stderr F I0813 20:34:03.190203 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-08-13T20:34:03.190330584+00:00 stderr F I0813 20:34:03.190270 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-08-13T20:34:03.392275479+00:00 stderr F I0813 20:34:03.391893 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-08-13T20:34:03.392275479+00:00 stderr F I0813 20:34:03.391985 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-08-13T20:34:03.590293521+00:00 stderr F I0813 20:34:03.590185 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-08-13T20:34:03.590293521+00:00 stderr F I0813 20:34:03.590276 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-08-13T20:34:03.793694477+00:00 stderr F I0813 20:34:03.793546 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-08-13T20:34:03.793694477+00:00 stderr F I0813 20:34:03.793631 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-08-13T20:34:03.991153244+00:00 stderr F I0813 20:34:03.990960 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-08-13T20:34:03.991420951+00:00 stderr F I0813 20:34:03.991281 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-08-13T20:34:04.193105408+00:00 stderr F I0813 20:34:04.192850 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-08-13T20:34:04.194444156+00:00 stderr F I0813 20:34:04.193990 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-08-13T20:34:04.390930445+00:00 stderr F I0813 20:34:04.390837 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-08-13T20:34:04.390930445+00:00 stderr F I0813 20:34:04.390908 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-08-13T20:34:04.592739116+00:00 stderr F I0813 20:34:04.592578 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-08-13T20:34:04.592739116+00:00 stderr F I0813 20:34:04.592648 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-08-13T20:34:04.791824868+00:00 stderr F I0813 20:34:04.791398 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-08-13T20:34:04.791824868+00:00 stderr F I0813 20:34:04.791478 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-08-13T20:34:05.006237832+00:00 stderr F I0813 20:34:05.006111 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-08-13T20:34:05.006237832+00:00 stderr F I0813 20:34:05.006213 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-08-13T20:34:05.205585162+00:00 stderr F I0813 20:34:05.205455 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-08-13T20:34:05.205585162+00:00 stderr F I0813 20:34:05.205563 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-08-13T20:34:05.391462976+00:00 stderr F I0813 20:34:05.391375 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T20:34:05.391462976+00:00 stderr F I0813 20:34:05.391447 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-08-13T20:34:05.592371751+00:00 stderr F I0813 20:34:05.592168 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-08-13T20:34:05.592371751+00:00 stderr F I0813 20:34:05.592327 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-08-13T20:34:05.792461253+00:00 stderr F I0813 20:34:05.792350 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-08-13T20:34:05.792461253+00:00 stderr F I0813 20:34:05.792418 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-08-13T20:34:05.997855747+00:00 stderr F I0813 20:34:05.997661 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-08-13T20:34:05.997855747+00:00 stderr F I0813 20:34:05.997731 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-08-13T20:34:06.192750000+00:00 stderr F I0813 20:34:06.192023 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-08-13T20:34:06.192750000+00:00 stderr F I0813 20:34:06.192131 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-08-13T20:34:06.405499526+00:00 stderr F I0813 20:34:06.405402 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-08-13T20:34:06.405499526+00:00 stderr F I0813 20:34:06.405474 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:34:06.591590125+00:00 stderr F I0813 20:34:06.591475 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:34:06.591590125+00:00 stderr F I0813 20:34:06.591543 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:34:06.791561694+00:00 stderr F I0813 20:34:06.791434 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:34:06.791599175+00:00 stderr F I0813 20:34:06.791554 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2025-08-13T20:34:06.990274817+00:00 stderr F I0813 20:34:06.990146 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2025-08-13T20:34:06.990274817+00:00 stderr F I0813 20:34:06.990229 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2025-08-13T20:34:07.193212091+00:00 stderr F I0813 20:34:07.193041 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2025-08-13T20:34:07.193212091+00:00 stderr F I0813 20:34:07.193178 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2025-08-13T20:34:07.390697817+00:00 stderr F I0813 20:34:07.390573 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2025-08-13T20:34:07.390697817+00:00 stderr F I0813 20:34:07.390674 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2025-08-13T20:34:07.590741787+00:00 stderr F I0813 20:34:07.590633 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2025-08-13T20:34:07.590741787+00:00 stderr F I0813 20:34:07.590711 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2025-08-13T20:34:07.793677420+00:00 stderr F I0813 20:34:07.793387 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2025-08-13T20:34:07.793677420+00:00 stderr F I0813 20:34:07.793517 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2025-08-13T20:34:07.997853869+00:00 stderr F I0813 20:34:07.997702 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2025-08-13T20:34:07.997900471+00:00 stderr F I0813 20:34:07.997890 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2025-08-13T20:34:08.192646179+00:00 stderr F I0813 20:34:08.192560 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2025-08-13T20:34:08.192688900+00:00 stderr F I0813 20:34:08.192641 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:34:08.391741752+00:00 stderr F I0813 20:34:08.391636 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:34:08.391741752+00:00 stderr F I0813 20:34:08.391731 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:34:08.590589948+00:00 stderr F I0813 20:34:08.590476 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:34:08.590589948+00:00 stderr F I0813 20:34:08.590547 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-08-13T20:34:08.792295437+00:00 stderr F I0813 20:34:08.792186 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-08-13T20:34:08.792295437+00:00 stderr F I0813 20:34:08.792256 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-08-13T20:34:08.994584142+00:00 stderr F I0813 20:34:08.994441 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-08-13T20:34:08.994584142+00:00 stderr F I0813 20:34:08.994536 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-08-13T20:34:09.200691657+00:00 stderr F I0813 20:34:09.200536 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-08-13T20:34:09.200691657+00:00 stderr F I0813 20:34:09.200660 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-08-13T20:34:09.401433477+00:00 stderr F I0813 20:34:09.401329 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-08-13T20:34:09.401433477+00:00 stderr F I0813 20:34:09.401420 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-08-13T20:34:09.603680051+00:00 stderr F I0813 20:34:09.602910 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T20:34:09.603680051+00:00 stderr F I0813 20:34:09.603043 1
log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:34:09.803002570+00:00 stderr F I0813 20:34:09.802890 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:34:09.803002570+00:00 stderr F I0813 20:34:09.802975 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:34:09.996217015+00:00 stderr F I0813 20:34:09.995977 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:34:09.996217015+00:00 stderr F I0813 20:34:09.996101 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:34:10.230448068+00:00 stderr F I0813 20:34:10.229725 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:34:10.230448068+00:00 stderr F I0813 20:34:10.230408 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:34:10.428630135+00:00 stderr F I0813 20:34:10.428498 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:34:10.428630135+00:00 stderr F I0813 20:34:10.428569 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:34:10.589824859+00:00 stderr F I0813 20:34:10.589701 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:34:10.589869290+00:00 stderr F I0813 20:34:10.589847 1 log.go:245] 
reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:34:10.792166955+00:00 stderr F I0813 20:34:10.792015 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:34:10.792166955+00:00 stderr F I0813 20:34:10.792135 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:34:10.990958880+00:00 stderr F I0813 20:34:10.990854 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:34:10.990958880+00:00 stderr F I0813 20:34:10.990926 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:34:11.194214743+00:00 stderr F I0813 20:34:11.194059 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:34:11.194214743+00:00 stderr F I0813 20:34:11.194150 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:34:11.391465992+00:00 stderr F I0813 20:34:11.391317 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:34:11.391465992+00:00 stderr F I0813 20:34:11.391386 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:34:11.590546464+00:00 stderr F I0813 20:34:11.590394 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 
2025-08-13T20:34:11.590546464+00:00 stderr F I0813 20:34:11.590443 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:34:11.792278223+00:00 stderr F I0813 20:34:11.792183 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-08-13T20:34:11.792278223+00:00 stderr F I0813 20:34:11.792257 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:34:11.994766304+00:00 stderr F I0813 20:34:11.994636 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:34:11.994878057+00:00 stderr F I0813 20:34:11.994733 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:34:12.189735708+00:00 stderr F I0813 20:34:12.189641 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:34:12.189735708+00:00 stderr F I0813 20:34:12.189727 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:34:12.393122385+00:00 stderr F I0813 20:34:12.392978 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:34:12.393122385+00:00 stderr F I0813 20:34:12.393100 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:34:12.590111908+00:00 stderr F I0813 20:34:12.589937 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:34:12.590111908+00:00 stderr F I0813 20:34:12.590006 1 
log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:34:12.790832658+00:00 stderr F I0813 20:34:12.790703 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:34:12.790873779+00:00 stderr F I0813 20:34:12.790835 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:34:12.990068935+00:00 stderr F I0813 20:34:12.989969 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:34:12.990068935+00:00 stderr F I0813 20:34:12.990038 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:34:13.193573535+00:00 stderr F I0813 20:34:13.193472 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:34:13.193573535+00:00 stderr F I0813 20:34:13.193543 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:34:13.392244116+00:00 stderr F I0813 20:34:13.392125 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:34:13.392244116+00:00 stderr F I0813 20:34:13.392208 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:34:13.592920595+00:00 stderr F I0813 20:34:13.592819 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:34:13.592920595+00:00 stderr F I0813 20:34:13.592891 1 
log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:34:13.791756610+00:00 stderr F I0813 20:34:13.791647 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:34:13.791756610+00:00 stderr F I0813 20:34:13.791744 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:34:13.994705574+00:00 stderr F I0813 20:34:13.994533 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:34:13.994705574+00:00 stderr F I0813 20:34:13.994646 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:34:14.198914114+00:00 stderr F I0813 20:34:14.198527 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:34:14.199246014+00:00 stderr F I0813 20:34:14.199219 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:34:14.396138834+00:00 stderr F I0813 20:34:14.395985 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:34:14.396138834+00:00 stderr F I0813 20:34:14.396098 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:34:14.591892783+00:00 stderr F I0813 20:34:14.591738 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:34:14.591939165+00:00 stderr F I0813 20:34:14.591899 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 
2025-08-13T20:34:14.791929321+00:00 stderr F I0813 20:34:14.791768 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:34:14.791992423+00:00 stderr F I0813 20:34:14.791889 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:34:14.999191288+00:00 stderr F I0813 20:34:14.999125 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:34:14.999324092+00:00 stderr F I0813 20:34:14.999307 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:34:15.193444792+00:00 stderr F I0813 20:34:15.193317 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:34:15.193444792+00:00 stderr F I0813 20:34:15.193407 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:34:15.391907047+00:00 stderr F I0813 20:34:15.391685 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:34:15.391907047+00:00 stderr F I0813 20:34:15.391845 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:34:15.472661219+00:00 stderr F I0813 20:34:15.472518 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:34:15.590637810+00:00 stderr F I0813 20:34:15.590511 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:34:15.590637810+00:00 stderr F I0813 20:34:15.590611 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:34:15.792261286+00:00 stderr F I0813 20:34:15.792159 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:34:15.792261286+00:00 stderr F I0813 20:34:15.792251 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:34:15.991879234+00:00 stderr F I0813 20:34:15.991680 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-08-13T20:34:15.991879234+00:00 stderr F I0813 20:34:15.991757 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:34:16.192925793+00:00 stderr F I0813 20:34:16.192759 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:34:16.193001135+00:00 stderr F I0813 20:34:16.192915 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:34:16.398388169+00:00 stderr F I0813 20:34:16.398285 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:34:16.398388169+00:00 stderr F I0813 20:34:16.398371 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:34:16.626511537+00:00 stderr F I0813 20:34:16.626381 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:34:16.626511537+00:00 stderr F I0813 20:34:16.626463 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:34:16.790697576+00:00 stderr F I0813 20:34:16.790576 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:34:16.790697576+00:00 stderr F I0813 20:34:16.790651 1 log.go:245] 
reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:34:16.990958463+00:00 stderr F I0813 20:34:16.990747 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:34:16.990958463+00:00 stderr F I0813 20:34:16.990950 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:34:17.190279383+00:00 stderr F I0813 20:34:17.190166 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:34:17.190279383+00:00 stderr F I0813 20:34:17.190238 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:34:17.391147327+00:00 stderr F I0813 20:34:17.391009 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:34:17.391212799+00:00 stderr F I0813 20:34:17.391144 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:34:17.593873105+00:00 stderr F I0813 20:34:17.593713 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:34:17.593957077+00:00 stderr F I0813 20:34:17.593935 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:34:17.790564209+00:00 stderr F I0813 20:34:17.790444 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:34:17.790564209+00:00 stderr F I0813 20:34:17.790539 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:34:17.990932369+00:00 
stderr F I0813 20:34:17.990765 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:34:17.990932369+00:00 stderr F I0813 20:34:17.990915 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:34:18.197471136+00:00 stderr F I0813 20:34:18.197310 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:34:18.197471136+00:00 stderr F I0813 20:34:18.197417 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-08-13T20:34:18.395934641+00:00 stderr F I0813 20:34:18.395759 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:34:18.395934641+00:00 stderr F I0813 20:34:18.395897 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-08-13T20:34:18.595005712+00:00 stderr F I0813 20:34:18.594883 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-08-13T20:34:18.595005712+00:00 stderr F I0813 20:34:18.594963 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:34:18.792035076+00:00 stderr F I0813 20:34:18.791938 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:34:18.792035076+00:00 stderr F I0813 20:34:18.792008 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-08-13T20:34:18.991953233+00:00 stderr F I0813 20:34:18.991677 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-network-diagnostics/prometheus-k8s was successful 2025-08-13T20:34:18.992025365+00:00 stderr F I0813 20:34:18.991976 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-08-13T20:34:19.199985023+00:00 stderr F I0813 20:34:19.199520 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:34:19.199985023+00:00 stderr F I0813 20:34:19.199645 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-08-13T20:34:19.392242189+00:00 stderr F I0813 20:34:19.392145 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-08-13T20:34:19.392242189+00:00 stderr F I0813 20:34:19.392214 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-08-13T20:34:19.590447337+00:00 stderr F I0813 20:34:19.590324 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-08-13T20:34:19.590447337+00:00 stderr F I0813 20:34:19.590411 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-08-13T20:34:19.790043175+00:00 stderr F I0813 20:34:19.789893 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-08-13T20:34:19.790303302+00:00 stderr F I0813 20:34:19.790283 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-08-13T20:34:19.991140495+00:00 stderr F I0813 20:34:19.990982 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-08-13T20:34:19.991140495+00:00 stderr F I0813 20:34:19.991069 1 log.go:245] 
reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-08-13T20:34:20.190366642+00:00 stderr F I0813 20:34:20.190237 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:34:20.190366642+00:00 stderr F I0813 20:34:20.190307 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-08-13T20:34:20.391485854+00:00 stderr F I0813 20:34:20.391364 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-08-13T20:34:20.391527305+00:00 stderr F I0813 20:34:20.391494 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-08-13T20:34:20.590484144+00:00 stderr F I0813 20:34:20.590359 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-08-13T20:34:20.590484144+00:00 stderr F I0813 20:34:20.590468 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:34:20.792546193+00:00 stderr F I0813 20:34:20.792493 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:34:20.792848241+00:00 stderr F I0813 20:34:20.792768 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-08-13T20:34:20.993187900+00:00 stderr F I0813 20:34:20.992968 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-08-13T20:34:20.993187900+00:00 stderr F I0813 20:34:20.993038 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-08-13T20:34:21.190684068+00:00 stderr F I0813 20:34:21.190623 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-08-13T20:34:21.190863913+00:00 stderr F I0813 20:34:21.190838 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-08-13T20:34:21.391707346+00:00 stderr F I0813 20:34:21.391595 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-08-13T20:34:21.391707346+00:00 stderr F I0813 20:34:21.391685 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-08-13T20:34:21.600415226+00:00 stderr F I0813 20:34:21.600355 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:34:21.600548830+00:00 stderr F I0813 20:34:21.600533 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-08-13T20:34:21.794062293+00:00 stderr F I0813 20:34:21.793997 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-08-13T20:34:21.794323100+00:00 stderr F I0813 20:34:21.794298 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-08-13T20:34:21.999035905+00:00 stderr F I0813 20:34:21.998981 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-08-13T20:34:21.999185979+00:00 stderr F I0813 20:34:21.999170 1 log.go:245] reconciling (monitoring.coreos.com/v1, 
Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-08-13T20:34:22.195112490+00:00 stderr F I0813 20:34:22.194997 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-08-13T20:34:22.195171272+00:00 stderr F I0813 20:34:22.195105 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-08-13T20:34:22.391537307+00:00 stderr F I0813 20:34:22.391375 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-08-13T20:34:22.391537307+00:00 stderr F I0813 20:34:22.391449 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-08-13T20:34:22.590581448+00:00 stderr F I0813 20:34:22.590471 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:34:22.590581448+00:00 stderr F I0813 20:34:22.590564 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-08-13T20:34:22.791332809+00:00 stderr F I0813 20:34:22.791203 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-08-13T20:34:22.791332809+00:00 stderr F I0813 20:34:22.791295 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:34:22.994927962+00:00 stderr F I0813 20:34:22.994657 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-08-13T20:34:22.994927962+00:00 stderr F I0813 20:34:22.994740 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:34:23.196711822+00:00 stderr F I0813 20:34:23.196646 1 log.go:245] 
Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-08-13T20:34:23.217697855+00:00 stderr F I0813 20:34:23.217574 1 log.go:245] Operconfig Controller complete 2025-08-13T20:35:16.785492872+00:00 stderr F I0813 20:35:16.785143 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T20:35:16.787115989+00:00 stderr F I0813 20:35:16.787010 1 log.go:245] successful reconciliation 2025-08-13T20:35:18.170311261+00:00 stderr F I0813 20:35:18.170171 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-08-13T20:35:18.174400889+00:00 stderr F I0813 20:35:18.174311 1 log.go:245] successful reconciliation 2025-08-13T20:35:19.385155273+00:00 stderr F I0813 20:35:19.385001 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T20:35:19.385899974+00:00 stderr F I0813 20:35:19.385759 1 log.go:245] successful reconciliation 2025-08-13T20:37:15.481051441+00:00 stderr F I0813 20:37:15.480749 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T20:37:23.219393589+00:00 stderr F I0813 20:37:23.219216 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T20:37:23.607865939+00:00 stderr F I0813 20:37:23.605470 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T20:37:23.614544421+00:00 stderr F I0813 20:37:23.613518 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T20:37:23.629073360+00:00 stderr F I0813 20:37:23.627334 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T20:37:23.629073360+00:00 stderr F I0813 20:37:23.627379 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003acd900 
DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T20:37:23.638643286+00:00 stderr F I0813 20:37:23.636944 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-08-13T20:37:23.638643286+00:00 stderr F I0813 20:37:23.637162 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-08-13T20:37:23.638643286+00:00 stderr F I0813 20:37:23.637178 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642346 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642392 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642416 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642423 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-08-13T20:37:23.643098785+00:00 stderr F I0813 20:37:23.642612 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T20:37:23.695010751+00:00 stderr F I0813 20:37:23.694666 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T20:37:23.717315084+00:00 stderr F I0813 20:37:23.717177 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was 
successful 2025-08-13T20:37:23.717315084+00:00 stderr F I0813 20:37:23.717248 1 log.go:245] Starting render phase 2025-08-13T20:37:23.737492456+00:00 stderr F I0813 20:37:23.737362 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:37:23.784281595+00:00 stderr F I0813 20:37:23.784170 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:37:23.784281595+00:00 stderr F I0813 20:37:23.784204 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:37:23.784281595+00:00 stderr F I0813 20:37:23.784244 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:37:23.784358457+00:00 stderr F I0813 20:37:23.784270 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:37:23.800890364+00:00 stderr F I0813 20:37:23.799868 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:37:23.800890364+00:00 stderr F I0813 20:37:23.799906 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:37:23.815894706+00:00 stderr F I0813 20:37:23.814725 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:37:23.842947236+00:00 stderr F I0813 20:37:23.842823 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:37:23.848839156+00:00 stderr F I0813 20:37:23.848721 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:37:23.848839156+00:00 stderr F I0813 20:37:23.848759 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) 
/network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:37:23.858370741+00:00 stderr F I0813 20:37:23.858253 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:37:23.858370741+00:00 stderr F I0813 20:37:23.858323 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:37:23.874865737+00:00 stderr F I0813 20:37:23.874692 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:37:23.875210397+00:00 stderr F I0813 20:37:23.874977 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:37:23.887039078+00:00 stderr F I0813 20:37:23.886972 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:37:23.887209623+00:00 stderr F I0813 20:37:23.887190 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:37:23.894961726+00:00 stderr F I0813 20:37:23.894886 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:37:23.895193193+00:00 stderr F I0813 20:37:23.895165 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:37:23.902042160+00:00 stderr F I0813 20:37:23.901964 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:37:23.902236726+00:00 stderr F I0813 20:37:23.902211 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:37:23.910201475+00:00 stderr F I0813 20:37:23.910042 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, 
Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:37:23.910201475+00:00 stderr F I0813 20:37:23.910184 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:37:23.916724563+00:00 stderr F I0813 20:37:23.916579 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:37:23.916724563+00:00 stderr F I0813 20:37:23.916673 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:37:23.924086736+00:00 stderr F I0813 20:37:23.923638 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:37:23.924270911+00:00 stderr F I0813 20:37:23.924248 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:37:23.931562381+00:00 stderr F I0813 20:37:23.931519 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:37:23.931682785+00:00 stderr F I0813 20:37:23.931662 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:37:24.107742351+00:00 stderr F I0813 20:37:24.107614 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:37:24.107742351+00:00 stderr F I0813 20:37:24.107698 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:37:24.307965673+00:00 stderr F I0813 20:37:24.307867 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:37:24.308059906+00:00 stderr F I0813 20:37:24.307958 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:37:24.507947379+00:00 stderr F I0813 
20:37:24.507853 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:37:24.507947379+00:00 stderr F I0813 20:37:24.507926 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:37:24.710617201+00:00 stderr F I0813 20:37:24.710550 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:37:24.710695853+00:00 stderr F I0813 20:37:24.710630 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:37:24.908331961+00:00 stderr F I0813 20:37:24.908164 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:37:24.908331961+00:00 stderr F I0813 20:37:24.908232 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:37:25.107497343+00:00 stderr F I0813 20:37:25.107416 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:37:25.107497343+00:00 stderr F I0813 20:37:25.107465 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:37:25.307848999+00:00 stderr F I0813 20:37:25.307732 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:37:25.307909041+00:00 stderr F I0813 20:37:25.307859 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:37:25.511641345+00:00 stderr F I0813 20:37:25.511530 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:37:25.511641345+00:00 stderr F I0813 20:37:25.511627 1 
log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:37:25.707709358+00:00 stderr F I0813 20:37:25.707622 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:37:25.707759669+00:00 stderr F I0813 20:37:25.707702 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:37:25.912432070+00:00 stderr F I0813 20:37:25.912270 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:37:25.912432070+00:00 stderr F I0813 20:37:25.912342 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:37:26.107642188+00:00 stderr F I0813 20:37:26.107510 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:37:26.107642188+00:00 stderr F I0813 20:37:26.107596 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:37:26.316556871+00:00 stderr F I0813 20:37:26.316473 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:37:26.316599892+00:00 stderr F I0813 20:37:26.316557 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:37:26.519348457+00:00 stderr F I0813 20:37:26.519242 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:37:26.519348457+00:00 stderr F I0813 20:37:26.519323 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:37:26.707399369+00:00 stderr F I0813 20:37:26.707282 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:37:26.707399369+00:00 stderr F I0813 20:37:26.707350 1 log.go:245] 
reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:37:26.908764955+00:00 stderr F I0813 20:37:26.908643 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:37:26.908880428+00:00 stderr F I0813 20:37:26.908761 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:37:27.108116552+00:00 stderr F I0813 20:37:27.107985 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:37:27.108116552+00:00 stderr F I0813 20:37:27.108061 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:37:27.316842560+00:00 stderr F I0813 20:37:27.316700 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:37:27.316842560+00:00 stderr F I0813 20:37:27.316832 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:37:27.510084171+00:00 stderr F I0813 20:37:27.509947 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:37:27.510084171+00:00 stderr F I0813 20:37:27.510036 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:37:27.713203738+00:00 stderr F I0813 20:37:27.713073 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:37:27.713203738+00:00 stderr F I0813 20:37:27.713185 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:37:27.913630256+00:00 stderr F I0813 20:37:27.913477 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) 
openshift-multus/prometheus-k8s was successful 2025-08-13T20:37:27.913680007+00:00 stderr F I0813 20:37:27.913631 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:37:28.107087573+00:00 stderr F I0813 20:37:28.106960 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:37:28.107087573+00:00 stderr F I0813 20:37:28.107035 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:37:28.309265712+00:00 stderr F I0813 20:37:28.309159 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:37:28.309265712+00:00 stderr F I0813 20:37:28.309246 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:37:28.508677781+00:00 stderr F I0813 20:37:28.508560 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:37:28.508677781+00:00 stderr F I0813 20:37:28.508632 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:37:28.707441951+00:00 stderr F I0813 20:37:28.707314 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:37:28.707441951+00:00 stderr F I0813 20:37:28.707387 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:37:28.910568438+00:00 stderr F I0813 20:37:28.910450 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:37:28.910568438+00:00 stderr F I0813 20:37:28.910523 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, 
Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:37:29.114977161+00:00 stderr F I0813 20:37:29.113892 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:37:29.114977161+00:00 stderr F I0813 20:37:29.113979 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:37:29.316423349+00:00 stderr F I0813 20:37:29.316314 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:37:29.316423349+00:00 stderr F I0813 20:37:29.316386 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:37:29.510629207+00:00 stderr F I0813 20:37:29.510455 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:37:29.510629207+00:00 stderr F I0813 20:37:29.510544 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:37:29.708267586+00:00 stderr F I0813 20:37:29.708112 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:37:29.708267586+00:00 stderr F I0813 20:37:29.708211 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:37:29.909850897+00:00 stderr F I0813 20:37:29.909753 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:37:29.909956990+00:00 stderr F I0813 20:37:29.909879 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:37:30.109516274+00:00 stderr F I0813 20:37:30.109431 1 
log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:37:30.109516274+00:00 stderr F I0813 20:37:30.109502 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:37:30.307823971+00:00 stderr F I0813 20:37:30.307667 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:37:30.307823971+00:00 stderr F I0813 20:37:30.307760 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:37:30.514521560+00:00 stderr F I0813 20:37:30.514407 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:37:30.514582472+00:00 stderr F I0813 20:37:30.514548 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:37:30.717672507+00:00 stderr F I0813 20:37:30.717559 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:37:30.717672507+00:00 stderr F I0813 20:37:30.717643 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:37:30.917608311+00:00 stderr F I0813 20:37:30.917472 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:37:30.917608311+00:00 stderr F I0813 20:37:30.917582 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:37:31.118597046+00:00 stderr F I0813 20:37:31.118382 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:37:31.118597046+00:00 
stderr F I0813 20:37:31.118471 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:37:31.313304599+00:00 stderr F I0813 20:37:31.313188 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:37:31.313304599+00:00 stderr F I0813 20:37:31.313259 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:37:31.549347115+00:00 stderr F I0813 20:37:31.549115 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:37:31.549347115+00:00 stderr F I0813 20:37:31.549216 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:37:31.742304547+00:00 stderr F I0813 20:37:31.742197 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:37:31.742304547+00:00 stderr F I0813 20:37:31.742290 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:37:31.908476348+00:00 stderr F I0813 20:37:31.908341 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:37:31.908476348+00:00 stderr F I0813 20:37:31.908430 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:37:32.110072410+00:00 stderr F I0813 20:37:32.109951 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:37:32.110072410+00:00 stderr F 
I0813 20:37:32.110045 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:37:32.311913049+00:00 stderr F I0813 20:37:32.311826 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-08-13T20:37:32.311963980+00:00 stderr F I0813 20:37:32.311911 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-08-13T20:37:32.508538458+00:00 stderr F I0813 20:37:32.508483 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:37:32.508677642+00:00 stderr F I0813 20:37:32.508661 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-08-13T20:37:32.707224186+00:00 stderr F I0813 20:37:32.707114 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-08-13T20:37:32.707224186+00:00 stderr F I0813 20:37:32.707206 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-08-13T20:37:32.907333975+00:00 stderr F I0813 20:37:32.907219 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-08-13T20:37:32.907333975+00:00 stderr F I0813 20:37:32.907320 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-08-13T20:37:33.107524727+00:00 stderr F I0813 20:37:33.107397 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was 
successful 2025-08-13T20:37:33.107524727+00:00 stderr F I0813 20:37:33.107482 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-08-13T20:37:33.308833780+00:00 stderr F I0813 20:37:33.308683 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-08-13T20:37:33.308883852+00:00 stderr F I0813 20:37:33.308767 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-08-13T20:37:33.508307632+00:00 stderr F I0813 20:37:33.508186 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:37:33.508307632+00:00 stderr F I0813 20:37:33.508259 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:37:33.711968483+00:00 stderr F I0813 20:37:33.711904 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:37:33.712018365+00:00 stderr F I0813 20:37:33.711971 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:37:33.909602181+00:00 stderr F I0813 20:37:33.909480 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:37:33.909602181+00:00 stderr F I0813 20:37:33.909548 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:37:34.108732982+00:00 stderr F I0813 20:37:34.108526 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:37:34.108732982+00:00 
stderr F I0813 20:37:34.108624 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-08-13T20:37:34.308305936+00:00 stderr F I0813 20:37:34.308054 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-08-13T20:37:34.308305936+00:00 stderr F I0813 20:37:34.308156 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-08-13T20:37:34.510729862+00:00 stderr F I0813 20:37:34.510621 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-08-13T20:37:34.510729862+00:00 stderr F I0813 20:37:34.510707 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-08-13T20:37:34.708272368+00:00 stderr F I0813 20:37:34.708107 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-08-13T20:37:34.708272368+00:00 stderr F I0813 20:37:34.708211 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-08-13T20:37:34.908726827+00:00 stderr F I0813 20:37:34.908629 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-08-13T20:37:34.908726827+00:00 stderr F I0813 20:37:34.908697 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-08-13T20:37:35.106672564+00:00 stderr F I0813 20:37:35.106564 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-08-13T20:37:35.106672564+00:00 stderr F I0813 20:37:35.106637 1 log.go:245] 
reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-08-13T20:37:35.311843897+00:00 stderr F I0813 20:37:35.311735 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-08-13T20:37:35.311906899+00:00 stderr F I0813 20:37:35.311858 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-08-13T20:37:35.512505492+00:00 stderr F I0813 20:37:35.512411 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-08-13T20:37:35.512505492+00:00 stderr F I0813 20:37:35.512488 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-08-13T20:37:35.722896327+00:00 stderr F I0813 20:37:35.722061 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-08-13T20:37:35.722896327+00:00 stderr F I0813 20:37:35.722160 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-08-13T20:37:35.908469097+00:00 stderr F I0813 20:37:35.908325 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-08-13T20:37:35.908469097+00:00 stderr F I0813 20:37:35.908411 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-08-13T20:37:36.109765210+00:00 stderr F I0813 20:37:36.109588 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-08-13T20:37:36.109765210+00:00 stderr F I0813 20:37:36.109734 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 
2025-08-13T20:37:36.311869647+00:00 stderr F I0813 20:37:36.311711 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-08-13T20:37:36.311936499+00:00 stderr F I0813 20:37:36.311867 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-08-13T20:37:36.511670837+00:00 stderr F I0813 20:37:36.511531 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-08-13T20:37:36.511670837+00:00 stderr F I0813 20:37:36.511653 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:37:36.712617920+00:00 stderr F I0813 20:37:36.711973 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:37:36.712617920+00:00 stderr F I0813 20:37:36.712579 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:37:36.913230494+00:00 stderr F I0813 20:37:36.913096 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:37:36.913230494+00:00 stderr F I0813 20:37:36.913195 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-08-13T20:37:37.108671609+00:00 stderr F I0813 20:37:37.108545 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-08-13T20:37:37.108671609+00:00 stderr F I0813 20:37:37.108643 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-08-13T20:37:37.311879127+00:00 stderr F I0813 20:37:37.311227 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 
2025-08-13T20:37:37.311879127+00:00 stderr F I0813 20:37:37.311295 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-08-13T20:37:37.511670577+00:00 stderr F I0813 20:37:37.511561 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-08-13T20:37:37.511716399+00:00 stderr F I0813 20:37:37.511668 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-08-13T20:37:37.718119269+00:00 stderr F I0813 20:37:37.717978 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-08-13T20:37:37.718119269+00:00 stderr F I0813 20:37:37.718057 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-08-13T20:37:37.931684726+00:00 stderr F I0813 20:37:37.931575 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-08-13T20:37:37.931684726+00:00 stderr F I0813 20:37:37.931669 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-08-13T20:37:38.110214683+00:00 stderr F I0813 20:37:38.109307 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-08-13T20:37:38.110278385+00:00 stderr F I0813 20:37:38.110221 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:37:38.310597560+00:00 stderr F I0813 20:37:38.310485 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:37:38.310597560+00:00 stderr F I0813 20:37:38.310559 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:37:38.517357531+00:00 stderr F I0813 20:37:38.516962 1 log.go:245] 
Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:37:38.517881746+00:00 stderr F I0813 20:37:38.517768 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-08-13T20:37:38.713851576+00:00 stderr F I0813 20:37:38.713639 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-08-13T20:37:38.713851576+00:00 stderr F I0813 20:37:38.713761 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-08-13T20:37:38.913720767+00:00 stderr F I0813 20:37:38.913474 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-08-13T20:37:38.913720767+00:00 stderr F I0813 20:37:38.913587 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-08-13T20:37:39.111837119+00:00 stderr F I0813 20:37:39.111638 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-08-13T20:37:39.111965403+00:00 stderr F I0813 20:37:39.111890 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-08-13T20:37:39.308543710+00:00 stderr F I0813 20:37:39.308471 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-08-13T20:37:39.308610902+00:00 stderr F I0813 20:37:39.308549 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-08-13T20:37:39.516062593+00:00 stderr F I0813 20:37:39.515940 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 
2025-08-13T20:37:39.516062593+00:00 stderr F I0813 20:37:39.516029 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:37:39.720623170+00:00 stderr F I0813 20:37:39.720563 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:37:39.721215527+00:00 stderr F I0813 20:37:39.721163 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:37:39.908000432+00:00 stderr F I0813 20:37:39.907906 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:37:39.908000432+00:00 stderr F I0813 20:37:39.907979 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:37:40.107714410+00:00 stderr F I0813 20:37:40.107612 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:37:40.107714410+00:00 stderr F I0813 20:37:40.107682 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:37:40.310193258+00:00 stderr F I0813 20:37:40.310015 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:37:40.311905397+00:00 stderr F I0813 20:37:40.310253 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:37:40.513283703+00:00 stderr F I0813 20:37:40.513228 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:37:40.513401556+00:00 stderr F I0813 20:37:40.513388 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:37:40.708574643+00:00 stderr F I0813 20:37:40.708517 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:37:40.708689806+00:00 stderr F I0813 20:37:40.708674 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:37:40.909344761+00:00 stderr F I0813 20:37:40.909216 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:37:40.909344761+00:00 stderr F I0813 20:37:40.909309 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:37:41.107342230+00:00 stderr F I0813 20:37:41.107235 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:37:41.107509094+00:00 stderr F I0813 20:37:41.107328 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:37:41.309322663+00:00 stderr F I0813 20:37:41.309263 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:37:41.309322663+00:00 stderr F I0813 20:37:41.309315 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:37:41.508740131+00:00 stderr F I0813 20:37:41.508433 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:37:41.508740131+00:00 stderr F I0813 20:37:41.508505 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:37:41.709441598+00:00 stderr F I0813 20:37:41.709284 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:37:41.709441598+00:00 stderr F I0813 20:37:41.709355 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:37:41.908863267+00:00 stderr F I0813 20:37:41.908758 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:37:41.908921099+00:00 stderr F I0813 20:37:41.908905 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:37:42.112349844+00:00 stderr F I0813 20:37:42.112233 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:37:42.112349844+00:00 stderr F I0813 20:37:42.112308 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:37:42.308936191+00:00 stderr F I0813 20:37:42.308859 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:37:42.309073755+00:00 stderr F I0813 20:37:42.309013 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:37:42.520145489+00:00 stderr F I0813 20:37:42.511742 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:37:42.520145489+00:00 stderr F I0813 20:37:42.511854 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:37:42.708728676+00:00 stderr F I0813 20:37:42.708617 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:37:42.708728676+00:00 stderr F I0813 20:37:42.708693 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:37:42.916895448+00:00 stderr F I0813 20:37:42.915329 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:37:42.916895448+00:00 stderr F I0813 20:37:42.915397 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:37:43.108367398+00:00 stderr F I0813 20:37:43.108240 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:37:43.108367398+00:00 stderr F I0813 20:37:43.108322 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:37:43.314610504+00:00 stderr F I0813 20:37:43.314521 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:37:43.314610504+00:00 stderr F I0813 20:37:43.314597 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:37:43.510358067+00:00 stderr F I0813 20:37:43.510242 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:37:43.510358067+00:00 stderr F I0813 20:37:43.510319 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:37:43.707886952+00:00 stderr F I0813 20:37:43.707754 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:37:43.708051337+00:00 stderr F I0813 20:37:43.707980 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T20:37:43.909417972+00:00 stderr F I0813 20:37:43.909287 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:37:43.909460144+00:00 stderr F I0813 20:37:43.909423 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:37:44.108268745+00:00 stderr F I0813 20:37:44.107990 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T20:37:44.108268745+00:00 stderr F I0813 20:37:44.108067 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T20:37:44.310035742+00:00 stderr F I0813 20:37:44.309955 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T20:37:44.310080864+00:00 stderr F I0813 20:37:44.310047 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T20:37:44.522115036+00:00 stderr F I0813 20:37:44.520425 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:37:44.546810448+00:00 stderr F I0813 20:37:44.546650 1 log.go:245] Operconfig Controller complete
2025-08-13T20:40:15.097755834+00:00 stderr F I0813 20:40:15.096603 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.106656070+00:00 stderr F I0813 20:40:15.106593 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.116356380+00:00 stderr F I0813 20:40:15.116120 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.126956326+00:00 stderr F I0813 20:40:15.126523 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.135594905+00:00 stderr F I0813 20:40:15.135452 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.145745757+00:00 stderr F I0813 20:40:15.145707 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.155562220+00:00 stderr F I0813 20:40:15.155518 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.164362444+00:00 stderr F I0813 20:40:15.164243 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.172441127+00:00 stderr F I0813 20:40:15.172395 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.187869062+00:00 stderr F I0813 20:40:15.187749 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.297835262+00:00 stderr F I0813 20:40:15.297718 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.494120081+00:00 stderr F I0813 20:40:15.494020 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T20:40:15.495948704+00:00 stderr F I0813 20:40:15.495463 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.699396089+00:00 stderr F I0813 20:40:15.699217 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:40:15.895930785+00:00 stderr F I0813 20:40:15.895768 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:40:16.098507356+00:00 stderr F I0813 20:40:16.098373 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:40:16.297440331+00:00 stderr F I0813 20:40:16.297311 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:40:16.494455340+00:00 stderr F I0813 20:40:16.494324 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:40:16.698251816+00:00 stderr F I0813 20:40:16.698096 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:40:16.803980614+00:00 stderr F I0813 20:40:16.803839 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-08-13T20:40:16.805148378+00:00 stderr F I0813 20:40:16.805001 1 log.go:245] successful reconciliation
2025-08-13T20:40:16.901600799+00:00 stderr F I0813 20:40:16.901504 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:40:17.097096905+00:00 stderr F I0813 20:40:17.097019 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:40:17.297375289+00:00 stderr F I0813 20:40:17.297223 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:40:17.499045973+00:00 stderr F I0813 20:40:17.498826 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:40:17.696763303+00:00 stderr F I0813 20:40:17.696577 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:40:17.896643316+00:00 stderr F I0813 20:40:17.896577 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:40:18.097033654+00:00 stderr F I0813 20:40:18.096890 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:40:18.191251460+00:00 stderr F I0813 20:40:18.191147 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2025-08-13T20:40:18.192369292+00:00 stderr F I0813 20:40:18.192299 1 log.go:245] successful reconciliation
2025-08-13T20:40:18.297909155+00:00 stderr F I0813 20:40:18.297078 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:40:18.495371918+00:00 stderr F I0813 20:40:18.495316 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:40:19.402151561+00:00 stderr F I0813 20:40:19.402033 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2025-08-13T20:40:19.402839470+00:00 stderr F I0813 20:40:19.402737 1 log.go:245] successful reconciliation
2025-08-13T20:40:44.550953322+00:00 stderr F I0813 20:40:44.547924 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:40:44.880082911+00:00 stderr F I0813 20:40:44.879976 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:40:44.885301581+00:00 stderr F I0813 20:40:44.885179 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:40:44.888364619+00:00 stderr F I0813 20:40:44.888287 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:40:44.888387960+00:00 stderr F I0813 20:40:44.888319 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc002a58f80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:40:44.893609900+00:00 stderr F I0813 20:40:44.893531 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:40:44.893609900+00:00 stderr F I0813 20:40:44.893575 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:40:44.893609900+00:00 stderr F I0813 20:40:44.893586 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900643 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900709 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900726 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900732 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:40:44.901888579+00:00 stderr F I0813 20:40:44.900864 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:40:44.922083981+00:00 stderr F I0813 20:40:44.921997 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:40:44.939979497+00:00 stderr F I0813 20:40:44.938052 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:40:44.939979497+00:00 stderr F I0813 20:40:44.938126 1 log.go:245] Starting render phase
2025-08-13T20:40:44.959439848+00:00 stderr F I0813 20:40:44.959229 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2025-08-13T20:40:45.000872032+00:00 stderr F I0813 20:40:45.000752 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-08-13T20:40:45.000995475+00:00 stderr F I0813 20:40:45.000950 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-08-13T20:40:45.001087238+00:00 stderr F I0813 20:40:45.001035 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-08-13T20:40:45.001255463+00:00 stderr F I0813 20:40:45.001101 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-08-13T20:40:45.021410054+00:00 stderr F I0813 20:40:45.021335 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-08-13T20:40:45.021410054+00:00 stderr F I0813 20:40:45.021374 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-08-13T20:40:45.031547496+00:00 stderr F I0813 20:40:45.031435 1 log.go:245] Render phase done, rendered 112 objects
2025-08-13T20:40:45.047656371+00:00 stderr F I0813 20:40:45.047569 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-08-13T20:40:45.054137838+00:00 stderr F I0813 20:40:45.054082 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-08-13T20:40:45.054293752+00:00 stderr F I0813 20:40:45.054245 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-08-13T20:40:45.066159814+00:00 stderr F I0813 20:40:45.066090 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-08-13T20:40:45.066159814+00:00 stderr F I0813 20:40:45.066154 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-08-13T20:40:45.076235345+00:00 stderr F I0813 20:40:45.075669 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-08-13T20:40:45.076235345+00:00 stderr F I0813 20:40:45.075868 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-08-13T20:40:45.083839714+00:00 stderr F I0813 20:40:45.083750 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-08-13T20:40:45.083937557+00:00 stderr F I0813 20:40:45.083899 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-08-13T20:40:45.090948699+00:00 stderr F I0813 20:40:45.090864 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-08-13T20:40:45.090948699+00:00 stderr F I0813 20:40:45.090934 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-08-13T20:40:45.096176520+00:00 stderr F I0813 20:40:45.096122 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-08-13T20:40:45.096229091+00:00 stderr F I0813 20:40:45.096178 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-08-13T20:40:45.103018037+00:00 stderr F I0813 20:40:45.102976 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-08-13T20:40:45.103082479+00:00 stderr F I0813 20:40:45.103024 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-08-13T20:40:45.108667400+00:00 stderr F I0813 20:40:45.108546 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-08-13T20:40:45.108667400+00:00 stderr F I0813 20:40:45.108620 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-08-13T20:40:45.113478218+00:00 stderr F I0813 20:40:45.113383 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-08-13T20:40:45.113478218+00:00 stderr F I0813 20:40:45.113442 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-08-13T20:40:45.132231419+00:00 stderr F I0813 20:40:45.132133 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-08-13T20:40:45.132263690+00:00 stderr F I0813 20:40:45.132233 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-08-13T20:40:45.334941113+00:00 stderr F I0813 20:40:45.333055 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-08-13T20:40:45.334941113+00:00 stderr F I0813 20:40:45.333120 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-08-13T20:40:45.534044994+00:00 stderr F I0813 20:40:45.533935 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-08-13T20:40:45.534044994+00:00 stderr F I0813 20:40:45.534020 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-08-13T20:40:45.734093761+00:00 stderr F I0813 20:40:45.733991 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-08-13T20:40:45.734093761+00:00 stderr F I0813 20:40:45.734056 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-08-13T20:40:45.934989923+00:00 stderr F I0813 20:40:45.934862 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-08-13T20:40:45.934989923+00:00 stderr F I0813 20:40:45.934937 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-08-13T20:40:46.136050940+00:00 stderr F I0813 20:40:46.135386 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-08-13T20:40:46.136261176+00:00 stderr F I0813 20:40:46.136241 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-08-13T20:40:46.334704357+00:00 stderr F I0813 20:40:46.334389 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-08-13T20:40:46.334704357+00:00 stderr F I0813 20:40:46.334492 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-08-13T20:40:46.534071955+00:00 stderr F I0813 20:40:46.533976 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-08-13T20:40:46.534071955+00:00 stderr F I0813 20:40:46.534046 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-08-13T20:40:46.734065191+00:00 stderr F I0813 20:40:46.733923 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-08-13T20:40:46.734065191+00:00 stderr F I0813 20:40:46.734018 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-08-13T20:40:46.935165579+00:00 stderr F I0813 20:40:46.935051 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-08-13T20:40:46.935165579+00:00 stderr F I0813 20:40:46.935123 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-08-13T20:40:47.133985861+00:00 stderr F I0813 20:40:47.133850 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-08-13T20:40:47.133985861+00:00 stderr F I0813 20:40:47.133914 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-08-13T20:40:47.336224892+00:00 stderr F I0813 20:40:47.336084 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-08-13T20:40:47.337163039+00:00 stderr F I0813 20:40:47.336992 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-08-13T20:40:47.556753560+00:00 stderr F I0813 20:40:47.556557 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-08-13T20:40:47.556753560+00:00 stderr F I0813 20:40:47.556645 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-08-13T20:40:47.744716309+00:00 stderr F I0813 20:40:47.744594 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-08-13T20:40:47.744716309+00:00 stderr F I0813 20:40:47.744664 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-08-13T20:40:47.933715097+00:00 stderr F I0813 20:40:47.933590 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-08-13T20:40:47.933715097+00:00 stderr F I0813 20:40:47.933657 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-08-13T20:40:48.133974231+00:00 stderr F I0813 20:40:48.133292 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-08-13T20:40:48.133974231+00:00 stderr F I0813 20:40:48.133946 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-08-13T20:40:48.333302858+00:00 stderr F I0813 20:40:48.333182 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-08-13T20:40:48.333302858+00:00 stderr F I0813 20:40:48.333279 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-08-13T20:40:48.539219144+00:00 stderr F I0813 20:40:48.539086 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-08-13T20:40:48.539219144+00:00 stderr F I0813 20:40:48.539181 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-08-13T20:40:48.735148462+00:00 stderr F I0813 20:40:48.735023 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-08-13T20:40:48.735148462+00:00 stderr F I0813 20:40:48.735103 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-08-13T20:40:48.942234152+00:00 stderr F I0813 20:40:48.942114 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-08-13T20:40:48.942272123+00:00 stderr F I0813 20:40:48.942231 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:40:49.133289281+00:00 stderr F I0813 20:40:49.133155 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:40:49.133289281+00:00 stderr F I0813 20:40:49.133256 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:40:49.338485556+00:00 stderr F I0813 20:40:49.338022 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:40:49.338485556+00:00 stderr F I0813 20:40:49.338108 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2025-08-13T20:40:49.536014531+00:00 stderr F I0813 20:40:49.535911 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2025-08-13T20:40:49.536014531+00:00 stderr F I0813 20:40:49.535998 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2025-08-13T20:40:49.732605459+00:00 stderr F I0813 20:40:49.732506 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2025-08-13T20:40:49.732605459+00:00 stderr F I0813 20:40:49.732579 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2025-08-13T20:40:49.934277163+00:00 stderr F I0813 20:40:49.934067 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2025-08-13T20:40:49.934277163+00:00 stderr F I0813 20:40:49.934237 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2025-08-13T20:40:50.133117616+00:00 stderr F I0813 20:40:50.132972 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2025-08-13T20:40:50.133117616+00:00 stderr F I0813 20:40:50.133083 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2025-08-13T20:40:50.336270013+00:00 stderr F I0813 20:40:50.336095 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2025-08-13T20:40:50.336270013+00:00 stderr F I0813 20:40:50.336180 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2025-08-13T20:40:50.541051407+00:00 stderr F I0813 20:40:50.540908 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2025-08-13T20:40:50.541051407+00:00 stderr F I0813 20:40:50.540998 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2025-08-13T20:40:50.738020606+00:00 stderr F I0813 20:40:50.737874 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2025-08-13T20:40:50.738020606+00:00 stderr F I0813 20:40:50.737961 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-08-13T20:40:50.933706277+00:00 stderr F I0813 20:40:50.933583 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-08-13T20:40:50.933767719+00:00 stderr F I0813 20:40:50.933716 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-08-13T20:40:51.134281030+00:00 stderr F I0813 20:40:51.134102 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:40:51.134281030+00:00 stderr F I0813 20:40:51.134177 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-08-13T20:40:51.337694324+00:00 stderr F I0813 20:40:51.337469 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-08-13T20:40:51.338152948+00:00 stderr F I0813 20:40:51.337707 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-08-13T20:40:51.535758445+00:00 stderr F I0813 20:40:51.535658 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-08-13T20:40:51.535758445+00:00 stderr F I0813 20:40:51.535737 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-08-13T20:40:51.741660291+00:00 stderr F I0813 20:40:51.741520 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-08-13T20:40:51.741905518+00:00 stderr F I0813 20:40:51.741876 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-08-13T20:40:51.942284875+00:00 stderr F I0813 20:40:51.942181 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-08-13T20:40:51.942435259+00:00 stderr F I0813 20:40:51.942409 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-08-13T20:40:52.141424596+00:00 stderr F I0813 20:40:52.141239 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T20:40:52.141424596+00:00 stderr F I0813 20:40:52.141311 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-08-13T20:40:52.344283844+00:00 stderr F I0813 20:40:52.344100 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-08-13T20:40:52.344283844+00:00 stderr F I0813 20:40:52.344176 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-08-13T20:40:52.543569269+00:00 stderr F I0813 20:40:52.543477 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:40:52.543725784+00:00 stderr F I0813 20:40:52.543709 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:40:52.775044693+00:00 stderr F I0813 20:40:52.774900 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:40:52.775302360+00:00 stderr F I0813 20:40:52.775289 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:40:52.965434652+00:00 stderr F I0813 20:40:52.965380 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:40:52.965575536+00:00 stderr F I0813 20:40:52.965557 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:40:53.133901969+00:00 stderr F I0813 20:40:53.133753 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:40:53.134093365+00:00 stderr F I0813 20:40:53.134070 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:40:53.333476783+00:00 stderr F I0813 20:40:53.333361 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:40:53.333476783+00:00 stderr F I0813 20:40:53.333426 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:40:53.532567543+00:00 stderr F I0813 20:40:53.532463 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:40:53.532567543+00:00 stderr F I0813 20:40:53.532532 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:40:53.733328031+00:00 stderr F I0813 20:40:53.733183 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:40:53.733328031+00:00 stderr F I0813 20:40:53.733301 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:40:53.934893202+00:00 stderr F I0813 20:40:53.934723 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:40:53.934893202+00:00 stderr F I0813 20:40:53.934877 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:40:54.140545551+00:00 stderr F I0813 20:40:54.140415 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:40:54.140545551+00:00 stderr F I0813 20:40:54.140522 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:40:54.334310287+00:00 stderr F I0813 20:40:54.334151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:40:54.334310287+00:00 stderr F I0813 20:40:54.334278 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:40:54.540096500+00:00 stderr F I0813 20:40:54.539963 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:40:54.540096500+00:00 stderr F I0813 20:40:54.540064 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:40:54.737360797+00:00 stderr F I0813 20:40:54.737277 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:40:54.737448070+00:00 stderr F I0813 20:40:54.737359 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:40:54.936895240+00:00 stderr F I0813 20:40:54.936435 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:40:54.936895240+00:00 stderr F I0813 20:40:54.936568 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:40:55.135324501+00:00 stderr F I0813 20:40:55.135174 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:40:55.135324501+00:00 stderr F I0813 20:40:55.135311 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:40:55.333496524+00:00 stderr F I0813 20:40:55.333401 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:40:55.333496524+00:00 stderr F I0813 20:40:55.333474 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:40:55.534464098+00:00 stderr F I0813 20:40:55.533916 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:40:55.534464098+00:00 stderr F I0813 20:40:55.534436 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:40:55.733257638+00:00 stderr F I0813 20:40:55.733132 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:40:55.733315220+00:00 stderr F I0813 20:40:55.733259 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:40:55.935170400+00:00 stderr F I0813 20:40:55.935001 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:40:55.935170400+00:00 stderr F I0813 20:40:55.935125 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:40:56.137363189+00:00 stderr F I0813 20:40:56.137224 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:40:56.137363189+00:00 stderr F I0813 20:40:56.137345 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:40:56.333316829+00:00 stderr F I0813 20:40:56.333179 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:40:56.333379720+00:00 stderr F I0813 20:40:56.333317 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:40:56.537067023+00:00 stderr F I0813 20:40:56.536396 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:40:56.537067023+00:00 stderr F I0813 20:40:56.537048 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:40:56.739909091+00:00 stderr F I0813 20:40:56.739827 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:40:56.739955312+00:00 stderr F I0813 20:40:56.739917 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:40:56.939033072+00:00 stderr F I0813 20:40:56.938924 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:40:56.939033072+00:00 stderr F I0813 20:40:56.939011 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:40:57.134730513+00:00 stderr F I0813 20:40:57.134621 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:40:57.134730513+00:00 stderr F I0813 20:40:57.134702 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:40:57.335601675+00:00 stderr F I0813 20:40:57.335497 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:40:57.335601675+00:00 stderr F I0813 20:40:57.335564 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:40:57.536922069+00:00 stderr F I0813 20:40:57.536749 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:40:57.537086954+00:00 stderr F I0813 20:40:57.537025 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:40:57.735013100+00:00 stderr F I0813 20:40:57.734861 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:40:57.735013100+00:00 stderr F I0813 20:40:57.734952 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:40:57.935669555+00:00 stderr F I0813 20:40:57.935559 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:40:57.935669555+00:00 stderr F I0813 20:40:57.935638 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:40:58.134578770+00:00 stderr F I0813 20:40:58.134440 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:40:58.134578770+00:00 stderr F I0813 20:40:58.134514 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:40:58.334404561+00:00 stderr F I0813 20:40:58.334294 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:40:58.334404561+00:00 stderr F I0813 20:40:58.334365 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:40:58.534003205+00:00 stderr F I0813 20:40:58.533921 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:40:58.534003205+00:00 stderr F I0813 20:40:58.533985 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:40:58.735044331+00:00 stderr F I0813 20:40:58.734959 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:40:58.735145574+00:00 stderr F I0813 20:40:58.735044 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:40:58.940868225+00:00 stderr F I0813 20:40:58.940706 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:40:58.940868225+00:00 stderr F I0813 20:40:58.940847 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:40:59.169112596+00:00 stderr F I0813 20:40:59.168295 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:40:59.169112596+00:00 stderr F I0813 20:40:59.168393 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:40:59.333872705+00:00 stderr F I0813 20:40:59.333697 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:40:59.333930886+00:00 stderr F I0813 20:40:59.333875 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:40:59.533867841+00:00 stderr F I0813 20:40:59.533268 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:40:59.533867841+00:00 stderr F I0813 20:40:59.533351 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:40:59.733832806+00:00 stderr F I0813 20:40:59.733672 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:40:59.733832806+00:00 stderr F I0813 20:40:59.733746 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:40:59.933404709+00:00 stderr F I0813 20:40:59.933267 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:40:59.933404709+00:00 stderr F I0813 20:40:59.933356 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:41:00.135516177+00:00 stderr F I0813 20:41:00.135412 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:41:00.135516177+00:00 stderr F I0813 20:41:00.135481 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:41:00.337475089+00:00 stderr F I0813 20:41:00.337316 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:41:00.337475089+00:00 stderr F I0813 20:41:00.337384 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:41:00.532647036+00:00 stderr F I0813 20:41:00.532587 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:41:00.532765110+00:00 stderr F I0813 20:41:00.532749 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:41:00.753741351+00:00 stderr F I0813 20:41:00.752926 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:41:00.753741351+00:00 stderr F I0813 20:41:00.753011 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:41:00.939638220+00:00 stderr F I0813 20:41:00.939578 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:41:00.939826206+00:00 stderr F I0813 20:41:00.939761 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:41:01.134297502+00:00 stderr F I0813 20:41:01.134230 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:41:01.134439496+00:00 stderr F I0813 20:41:01.134425 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:41:01.335296097+00:00 stderr F I0813 20:41:01.335059 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:41:01.335296097+00:00 stderr F I0813 20:41:01.335145 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:41:01.534527091+00:00 stderr F I0813 20:41:01.534407 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:41:01.534527091+00:00 stderr F I0813 20:41:01.534504 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:41:01.740891471+00:00 stderr F I0813 20:41:01.739968 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:41:01.740891471+00:00 stderr F I0813 20:41:01.740139 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:41:01.935963905+00:00 stderr F I0813 20:41:01.935834 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:41:01.935963905+00:00 stderr F I0813 20:41:01.935913 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:41:02.133495580+00:00 stderr F I0813 20:41:02.133434 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:41:02.133631094+00:00 stderr F I0813 20:41:02.133612 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:41:02.334090013+00:00 stderr F I0813 20:41:02.333947 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:41:02.334090013+00:00 stderr F I0813 20:41:02.334034 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:41:02.540864845+00:00 stderr F I0813 20:41:02.540755 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:41:02.541058460+00:00 stderr F I0813 20:41:02.541038 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:41:02.733239701+00:00 stderr F I0813 20:41:02.733155 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:41:02.733383725+00:00 stderr F I0813 20:41:02.733345 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:41:02.934099401+00:00 stderr F I0813 20:41:02.933992 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:41:02.934099401+00:00 stderr F I0813 20:41:02.934069 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:41:03.138715190+00:00 stderr F I0813 20:41:03.138572 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:41:03.138715190+00:00 stderr F I0813 20:41:03.138639 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:41:03.333914997+00:00 stderr F I0813 20:41:03.332724 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:41:03.333914997+00:00 stderr F I0813 20:41:03.332971 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:41:03.532882163+00:00 stderr F I0813 20:41:03.532038 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:41:03.532882163+00:00 stderr F I0813 20:41:03.532120 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:41:03.735113184+00:00 stderr F I0813 20:41:03.734248 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:41:03.735113184+00:00 stderr F I0813 20:41:03.734328 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:41:03.934974305+00:00 stderr F I0813 20:41:03.933942 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:41:03.934974305+00:00 stderr F I0813 20:41:03.934050 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:41:04.137248687+00:00 stderr F I0813 20:41:04.136951 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:41:04.137248687+00:00 stderr F I0813 20:41:04.137087 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:41:04.334109462+00:00 stderr F I0813 20:41:04.333452 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:41:04.334325979+00:00 stderr F I0813 20:41:04.334306 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:41:04.556272377+00:00 stderr F I0813 20:41:04.555674 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:41:04.556420642+00:00 stderr F I0813 20:41:04.556405 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:41:04.749264212+00:00 stderr F I0813 20:41:04.745995 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:41:04.749264212+00:00 stderr F I0813 20:41:04.746078 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:41:04.939856926+00:00 stderr F I0813 20:41:04.939308 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:41:04.939856926+00:00 stderr F I0813 20:41:04.939425 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T20:41:05.134035275+00:00 stderr F I0813 20:41:05.133413 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:41:05.134089146+00:00 stderr F I0813 20:41:05.134032 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:41:05.333707351+00:00 stderr F I0813 20:41:05.332665 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T20:41:05.333707351+00:00 stderr F I0813 20:41:05.332833 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T20:41:05.536474187+00:00 stderr F I0813 20:41:05.536400 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T20:41:05.536603541+00:00 stderr F I0813 20:41:05.536587 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T20:41:05.737175183+00:00 stderr F I0813 20:41:05.737070 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:41:05.758628642+00:00 stderr F I0813 20:41:05.758569 1 log.go:245] Operconfig Controller complete
2025-08-13T20:42:36.391312212+00:00 stderr F I0813 20:42:36.391113 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.391312212+00:00 stderr F I0813 20:42:36.389176 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.391453956+00:00 stderr F I0813 20:42:36.389413 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.440209101+00:00 stderr F I0813 20:42:36.392183 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.440209101+00:00 stderr F I0813 20:42:36.389336 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.440209101+00:00 stderr F I0813 20:42:36.395504 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.440454488+00:00 stderr F I0813 20:42:36.440420 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.460945159+00:00 stderr F I0813 20:42:36.460876 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.512163326+00:00 stderr F I0813 20:42:36.512108 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.512654420+00:00 stderr F I0813 20:42:36.512632 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.512876606+00:00 stderr F I0813 20:42:36.512854 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.513069152+00:00 stderr F I0813 20:42:36.513050 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.513195025+00:00 stderr F I0813 20:42:36.513178 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.513426082+00:00 stderr F I0813 20:42:36.513406 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.513558686+00:00 stderr F I0813 20:42:36.513541 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.513675699+00:00 stderr F I0813 20:42:36.513659 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.513892706+00:00 stderr F I0813 20:42:36.513871 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.514085401+00:00 stderr F I0813 20:42:36.514061 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.514206105+00:00 stderr F I0813 20:42:36.514189 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.514409870+00:00 stderr F I0813 20:42:36.514388 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.514547994+00:00 stderr F I0813 20:42:36.514529 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.514919475+00:00 stderr F I0813 20:42:36.514897 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515101870+00:00 stderr F I0813 20:42:36.515083 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515254945+00:00 stderr F I0813 20:42:36.515205 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515385399+00:00 stderr F I0813 20:42:36.515367 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515515932+00:00 stderr F I0813 20:42:36.515499 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515645166+00:00 stderr F I0813 20:42:36.515624 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515849292+00:00 stderr F I0813 20:42:36.515828 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.515985746+00:00 stderr F I0813 20:42:36.515967 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.516156521+00:00 stderr F I0813 20:42:36.516139 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516309315+00:00 stderr F I0813 20:42:36.516288 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.516422969+00:00 stderr F I0813 20:42:36.516406 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.522322549+00:00 stderr F I0813 20:42:36.520212 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.522322549+00:00 stderr F I0813 20:42:36.520650 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.522705880+00:00 stderr F I0813 20:42:36.522676 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.522979318+00:00 stderr F I0813 20:42:36.522958 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.523127952+00:00 stderr F I0813 20:42:36.523111 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.523295327+00:00 stderr F I0813 20:42:36.523274 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.523473222+00:00 stderr F I0813 20:42:36.523454 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.523632686+00:00 stderr F I0813 20:42:36.523613 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.523770170+00:00 stderr F I0813 20:42:36.523753 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.524150731+00:00 stderr F I0813 20:42:36.524127 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.524397088+00:00 stderr F I0813 20:42:36.524376 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.534004525+00:00 stderr F I0813 20:42:36.524513 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.556483373+00:00 stderr F I0813 20:42:36.555278 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:40.495414124+00:00 stderr F I0813 20:42:40.494156 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:40.498079220+00:00 stderr F I0813 20:42:40.498000 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.499157992+00:00 stderr F I0813 20:42:40.499087 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.502500498+00:00 stderr F I0813 20:42:40.502429 1 base_controller.go:172] Shutting down ManagementStateController ... 2025-08-13T20:42:40.502583810+00:00 stderr F I0813 20:42:40.502556 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.502687953+00:00 stderr F I0813 20:42:40.502645 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 2025-08-13T20:42:40.503464056+00:00 stderr F I0813 20:42:40.503402 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ... 2025-08-13T20:42:40.503464056+00:00 stderr F I0813 20:42:40.503449 1 base_controller.go:104] All ManagementStateController workers have been terminated 2025-08-13T20:42:40.503516897+00:00 stderr F I0813 20:42:40.503468 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 
2025-08-13T20:42:40.503516897+00:00 stderr F I0813 20:42:40.503477 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:42:40.505134334+00:00 stderr F I0813 20:42:40.504123 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:42:40.505327259+00:00 stderr F I0813 20:42:40.505277 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ... 2025-08-13T20:42:40.505347790+00:00 stderr F I0813 20:42:40.505336 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated 2025-08-13T20:42:40.505361020+00:00 stderr F I0813 20:42:40.505342 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:40.508271484+00:00 stderr F E0813 20:42:40.507176 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-operator/leases/network-operator-lock?timeout=4m0s": dial tcp 192.168.130.11:6443: connect: connection refused 2025-08-13T20:42:40.508370927+00:00 stderr F I0813 20:42:40.508323 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:42:40.509313034+00:00 stderr F I0813 20:42:40.509205 1 secure_serving.go:258] Stopped listening on [::]:9104 2025-08-13T20:42:40.509313034+00:00 stderr F I0813 20:42:40.509260 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:40.509313034+00:00 stderr F I0813 20:42:40.509264 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:40.509313034+00:00 stderr F I0813 20:42:40.509298 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:42:40.509335995+00:00 stderr F I0813 20:42:40.509314 1 builder.go:330] server exited 
2025-08-13T20:42:40.509335995+00:00 stderr F I0813 20:42:40.509321 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:40.509346375+00:00 stderr F I0813 20:42:40.509338 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:40.510411026+00:00 stderr F I0813 20:42:40.510349 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:40.510652763+00:00 stderr F I0813 20:42:40.510631 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:42:40.511647552+00:00 stderr F I0813 20:42:40.511625 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="operconfig-controller" 2025-08-13T20:42:40.511714894+00:00 stderr F I0813 20:42:40.511700 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="proxyconfig-controller" 2025-08-13T20:42:40.511752765+00:00 stderr F I0813 20:42:40.511741 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="ingress-config-controller" 2025-08-13T20:42:40.511843317+00:00 stderr F I0813 20:42:40.511825 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="infrastructureconfig-controller" 2025-08-13T20:42:40.511911469+00:00 stderr F I0813 20:42:40.511898 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="clusterconfig-controller" 2025-08-13T20:42:40.511946980+00:00 stderr F I0813 20:42:40.511935 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="pki-controller" 2025-08-13T20:42:40.511981081+00:00 stderr F I0813 20:42:40.511969 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" 
controller="egress-router-controller" 2025-08-13T20:42:40.512019012+00:00 stderr F I0813 20:42:40.512007 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="signer-controller" 2025-08-13T20:42:40.512057463+00:00 stderr F I0813 20:42:40.512046 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="pod-watcher" 2025-08-13T20:42:40.512091064+00:00 stderr F I0813 20:42:40.512079 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="allowlist-controller" 2025-08-13T20:42:40.512144096+00:00 stderr F I0813 20:42:40.512132 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="dashboard-controller" 2025-08-13T20:42:40.512217028+00:00 stderr F I0813 20:42:40.512204 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:42:40.512325451+00:00 stderr F I0813 20:42:40.512311 1 controller.go:242] "All workers finished" controller="pod-watcher" 2025-08-13T20:42:40.512364382+00:00 stderr F I0813 20:42:40.512352 1 controller.go:242] "All workers finished" controller="allowlist-controller" 2025-08-13T20:42:40.512399143+00:00 stderr F I0813 20:42:40.512388 1 controller.go:242] "All workers finished" controller="proxyconfig-controller" 2025-08-13T20:42:40.512432934+00:00 stderr F I0813 20:42:40.512421 1 controller.go:242] "All workers finished" controller="infrastructureconfig-controller" 2025-08-13T20:42:40.512473035+00:00 stderr F I0813 20:42:40.512458 1 controller.go:242] "All workers finished" controller="dashboard-controller" 2025-08-13T20:42:40.512513917+00:00 stderr F I0813 20:42:40.512501 1 controller.go:242] "All workers finished" controller="ingress-config-controller" 2025-08-13T20:42:40.512548758+00:00 stderr F I0813 20:42:40.512537 1 controller.go:242] "All workers finished" controller="clusterconfig-controller" 
2025-08-13T20:42:40.512694282+00:00 stderr F I0813 20:42:40.512597 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:40.512694282+00:00 stderr F I0813 20:42:40.512608 1 controller.go:242] "All workers finished" controller="operconfig-controller" 2025-08-13T20:42:40.512694282+00:00 stderr F I0813 20:42:40.512684 1 controller.go:242] "All workers finished" controller="signer-controller" 2025-08-13T20:42:40.512694282+00:00 stderr F I0813 20:42:40.512685 1 controller.go:242] "All workers finished" controller="egress-router-controller" 2025-08-13T20:42:40.512716922+00:00 stderr F I0813 20:42:40.512699 1 controller.go:242] "All workers finished" controller="pki-controller" 2025-08-13T20:42:40.512754053+00:00 stderr F I0813 20:42:40.512739 1 controller.go:242] "All workers finished" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:42:40.512845306+00:00 stderr F I0813 20:42:40.512827 1 internal.go:526] "Stopping and waiting for caches" 2025-08-13T20:42:40.513871366+00:00 stderr F I0813 20:42:40.513704 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"22664f33-4062-41bd-9ac9-dc79ccf9e70c", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"37442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_c4643de4-0a66-40c8-abff-4239e04f61ab stopped leading 2025-08-13T20:42:40.514043371+00:00 stderr F I0813 20:42:40.513966 1 internal.go:530] "Stopping and waiting for webhooks" 2025-08-13T20:42:40.514043371+00:00 stderr F I0813 20:42:40.514017 1 internal.go:533] "Stopping and waiting for HTTP servers" 2025-08-13T20:42:40.516137041+00:00 stderr F W0813 20:42:40.516017 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/2.log
2025-12-13T00:11:03.818971313+00:00 stderr F I1213 00:11:03.818684 1 cmd.go:241] Using service-serving-cert provided certificates 2025-12-13T00:11:03.820219207+00:00 stderr F I1213 00:11:03.819196 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:11:03.820219207+00:00 stderr F I1213 00:11:03.819623 1 observer_polling.go:159] Starting file observer 2025-12-13T00:11:03.842339169+00:00 stderr F I1213 00:11:03.842271 1 builder.go:299] network-operator version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98 2025-12-13T00:11:04.251345844+00:00 stderr F I1213 00:11:04.250466 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:11:04.251345844+00:00 stderr F W1213 00:11:04.251300 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:11:04.251345844+00:00 stderr F W1213 00:11:04.251308 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:11:04.252379212+00:00 stderr F I1213 00:11:04.252333 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-12-13T00:11:04.253581344+00:00 stderr F I1213 00:11:04.253537 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-operator/network-operator-lock... 
2025-12-13T00:11:04.257325647+00:00 stderr F I1213 00:11:04.257277 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:11:04.257612104+00:00 stderr F I1213 00:11:04.257560 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:11:04.257799289+00:00 stderr F I1213 00:11:04.257753 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:11:04.258086727+00:00 stderr F I1213 00:11:04.258053 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:11:04.258421296+00:00 stderr F I1213 00:11:04.258079 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:11:04.259024992+00:00 stderr F I1213 00:11:04.258957 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:11:04.260001649+00:00 stderr F I1213 00:11:04.259924 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:11:04.260921104+00:00 stderr F I1213 00:11:04.260877 1 secure_serving.go:213] Serving securely on [::]:9104 2025-12-13T00:11:04.260921104+00:00 stderr F I1213 00:11:04.260916 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:11:04.358694901+00:00 stderr F I1213 00:11:04.358549 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:11:04.361339093+00:00 stderr F I1213 00:11:04.359706 1 shared_informer.go:318] Caches are synced for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:11:04.361339093+00:00 stderr F I1213 00:11:04.359780 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:16:37.229587640+00:00 stderr F I1213 00:16:37.229498 1 leaderelection.go:260] successfully acquired lease openshift-network-operator/network-operator-lock 2025-12-13T00:16:37.229841907+00:00 stderr F I1213 00:16:37.229748 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"22664f33-4062-41bd-9ac9-dc79ccf9e70c", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_bba8ea88-7904-4bed-9f81-4a8fed9b3547 became leader 2025-12-13T00:16:37.245560417+00:00 stderr F I1213 00:16:37.245518 1 operator.go:97] Creating status manager for stand-alone cluster 2025-12-13T00:16:37.245620438+00:00 stderr F I1213 00:16:37.245598 1 operator.go:102] Adding controller-runtime controllers 2025-12-13T00:16:37.247112969+00:00 stderr F I1213 00:16:37.247084 1 operconfig_controller.go:102] Waiting for feature gates initialization... 
2025-12-13T00:16:37.249069382+00:00 stderr F I1213 00:16:37.249037 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:16:37.250985355+00:00 stderr F I1213 00:16:37.250921 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-network-operator", Name:"network-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", 
"PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-12-13T00:16:37.251140499+00:00 stderr F I1213 00:16:37.251078 1 operconfig_controller.go:109] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters 
VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-12-13T00:16:37.256745832+00:00 stderr F I1213 00:16:37.256708 1 client.go:239] Starting informers... 2025-12-13T00:16:37.258141770+00:00 stderr F I1213 00:16:37.258100 1 client.go:250] Waiting for informers to sync... 2025-12-13T00:16:37.358742527+00:00 stderr F I1213 00:16:37.358633 1 client.go:271] Informers started and synced 2025-12-13T00:16:37.358742527+00:00 stderr F I1213 00:16:37.358683 1 operator.go:126] Starting controller-manager 2025-12-13T00:16:37.359332564+00:00 stderr F I1213 00:16:37.359275 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-12-13T00:16:37.359520479+00:00 stderr F I1213 00:16:37.359465 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress=":8080" secure=false 2025-12-13T00:16:37.359617642+00:00 stderr F I1213 00:16:37.359566 1 controller.go:178] "Starting EventSource" controller="pki-controller" source="kind source: *v1.OperatorPKI" 2025-12-13T00:16:37.359617642+00:00 stderr F I1213 00:16:37.359593 1 controller.go:186] "Starting Controller" controller="pki-controller" 2025-12-13T00:16:37.359713915+00:00 stderr F I1213 00:16:37.359657 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="informer source: 0xc000b95080" 2025-12-13T00:16:37.359729465+00:00 stderr F I1213 00:16:37.359710 1 controller.go:178] "Starting EventSource" controller="signer-controller" source="kind source: *v1.CertificateSigningRequest" 2025-12-13T00:16:37.359744595+00:00 stderr F I1213 00:16:37.359735 1 controller.go:186] "Starting Controller" controller="signer-controller" 2025-12-13T00:16:37.359756046+00:00 stderr F I1213 00:16:37.359742 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="kind source: *v1.Proxy" 2025-12-13T00:16:37.359767536+00:00 stderr F I1213 00:16:37.359760 1 controller.go:186] "Starting Controller" controller="proxyconfig-controller" 
2025-12-13T00:16:37.359804757+00:00 stderr F I1213 00:16:37.359764 1 controller.go:178] "Starting EventSource" controller="clusterconfig-controller" source="kind source: *v1.Network" 2025-12-13T00:16:37.359817797+00:00 stderr F I1213 00:16:37.359804 1 controller.go:186] "Starting Controller" controller="clusterconfig-controller" 2025-12-13T00:16:37.359830018+00:00 stderr F I1213 00:16:37.359806 1 controller.go:178] "Starting EventSource" controller="egress-router-controller" source="kind source: *v1.EgressRouter" 2025-12-13T00:16:37.359841488+00:00 stderr F I1213 00:16:37.359832 1 controller.go:186] "Starting Controller" controller="egress-router-controller" 2025-12-13T00:16:37.359962841+00:00 stderr F I1213 00:16:37.359852 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc00077a0b0" 2025-12-13T00:16:37.359962841+00:00 stderr F I1213 00:16:37.359930 1 controller.go:178] "Starting EventSource" controller="dashboard-controller" source="informer source: 0xc00077a2c0" 2025-12-13T00:16:37.359985452+00:00 stderr F I1213 00:16:37.359970 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc00077a160" 2025-12-13T00:16:37.360061384+00:00 stderr F I1213 00:16:37.360025 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 2025-12-13T00:16:37.360061384+00:00 stderr F I1213 00:16:37.360053 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 2025-12-13T00:16:37.360092225+00:00 stderr F I1213 00:16:37.360030 1 controller.go:186] "Starting Controller" controller="dashboard-controller" 2025-12-13T00:16:37.360092225+00:00 stderr F I1213 00:16:37.360075 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="informer source: 0xc00077a000" 2025-12-13T00:16:37.360092225+00:00 stderr F 
I1213 00:16:37.360084 1 controller.go:178] "Starting EventSource" controller="ingress-config-controller" source="kind source: *v1.IngressController" 2025-12-13T00:16:37.360108245+00:00 stderr F I1213 00:16:37.360095 1 controller.go:220] "Starting workers" controller="dashboard-controller" worker count=1 2025-12-13T00:16:37.360119956+00:00 stderr F I1213 00:16:37.360107 1 controller.go:186] "Starting Controller" controller="ingress-config-controller" 2025-12-13T00:16:37.360131546+00:00 stderr F I1213 00:16:37.360120 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Node" 2025-12-13T00:16:37.360146916+00:00 stderr F I1213 00:16:37.360136 1 controller.go:186] "Starting Controller" controller="operconfig-controller" 2025-12-13T00:16:37.360158927+00:00 stderr F I1213 00:16:37.360034 1 controller.go:186] "Starting Controller" controller="configmap-trust-bundle-injector-controller" 2025-12-13T00:16:37.360204208+00:00 stderr F I1213 00:16:37.360172 1 controller.go:220] "Starting workers" controller="configmap-trust-bundle-injector-controller" worker count=1 2025-12-13T00:16:37.360218268+00:00 stderr F I1213 00:16:37.360199 1 controller.go:178] "Starting EventSource" controller="allowlist-controller" source="informer source: 0xc00077a210" 2025-12-13T00:16:37.360274740+00:00 stderr F I1213 00:16:37.360242 1 controller.go:186] "Starting Controller" controller="allowlist-controller" 2025-12-13T00:16:37.360274740+00:00 stderr F I1213 00:16:37.360264 1 controller.go:220] "Starting workers" controller="allowlist-controller" worker count=1 2025-12-13T00:16:37.360340622+00:00 stderr F I1213 00:16:37.360311 1 log.go:245] Reconciling configmap from openshift-ingress-operator/trusted-ca 2025-12-13T00:16:37.360609479+00:00 stderr F I1213 00:16:37.360553 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc00077a370" 2025-12-13T00:16:37.360656290+00:00 stderr F I1213 00:16:37.360626 1 
controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc00077a420" 2025-12-13T00:16:37.360669340+00:00 stderr F I1213 00:16:37.360657 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc00077a4d0" 2025-12-13T00:16:37.360680991+00:00 stderr F I1213 00:16:37.360670 1 controller.go:186] "Starting Controller" controller="pod-watcher" 2025-12-13T00:16:37.360692721+00:00 stderr F I1213 00:16:37.360679 1 controller.go:220] "Starting workers" controller="pod-watcher" worker count=1 2025-12-13T00:16:37.360925307+00:00 stderr F I1213 00:16:37.360890 1 dashboard_controller.go:113] Reconcile dashboards 2025-12-13T00:16:37.361101202+00:00 stderr F I1213 00:16:37.361062 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-12-13T00:16:37.361125073+00:00 stderr F I1213 00:16:37.361096 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-12-13T00:16:37.361125073+00:00 stderr F I1213 00:16:37.361096 1 controller.go:178] "Starting EventSource" controller="infrastructureconfig-controller" source="kind source: *v1.Infrastructure" 2025-12-13T00:16:37.361125073+00:00 stderr F I1213 00:16:37.361114 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-12-13T00:16:37.361149053+00:00 stderr F I1213 00:16:37.361124 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2025-12-13T00:16:37.361149053+00:00 stderr F I1213 00:16:37.361064 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2025-12-13T00:16:37.361149053+00:00 stderr F I1213 00:16:37.361136 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-12-13T00:16:37.361200455+00:00 stderr F I1213 
00:16:37.361164 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2025-12-13T00:16:37.361273997+00:00 stderr F I1213 00:16:37.361239 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-12-13T00:16:37.361314398+00:00 stderr F I1213 00:16:37.361285 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-12-13T00:16:37.361327798+00:00 stderr F I1213 00:16:37.361311 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-12-13T00:16:37.361408120+00:00 stderr F I1213 00:16:37.361374 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-12-13T00:16:37.361408120+00:00 stderr F I1213 00:16:37.361383 1 controller.go:186] "Starting Controller" controller="infrastructureconfig-controller" 2025-12-13T00:16:37.361447681+00:00 stderr F I1213 00:16:37.361380 1 log.go:245] openshift-network-operator/kube-root-ca.crt changed, triggering operconf reconciliation 2025-12-13T00:16:37.361513633+00:00 stderr F I1213 00:16:37.361496 1 log.go:245] openshift-network-operator/mtu changed, triggering operconf reconciliation 2025-12-13T00:16:37.361570745+00:00 stderr F I1213 00:16:37.361554 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation 2025-12-13T00:16:37.361626576+00:00 stderr F I1213 00:16:37.361609 1 log.go:245] openshift-network-operator/iptables-alerter-script changed, triggering operconf reconciliation 2025-12-13T00:16:37.361853563+00:00 stderr F I1213 00:16:37.361810 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-12-13T00:16:37.362423729+00:00 stderr F I1213 00:16:37.362372 1 reflector.go:351] Caches populated for *v1.OperatorPKI from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.363894868+00:00 stderr F I1213 00:16:37.363841 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping
2025-12-13T00:16:37.363894868+00:00 stderr F I1213 00:16:37.363878 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController
2025-12-13T00:16:37.363914479+00:00 stderr F I1213 00:16:37.363893 1 base_controller.go:73] Caches are synced for ManagementStateController
2025-12-13T00:16:37.363914479+00:00 stderr F I1213 00:16:37.363900 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ...
2025-12-13T00:16:37.367292461+00:00 stderr F I1213 00:16:37.367230 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.368503245+00:00 stderr F I1213 00:16:37.368426 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.368670209+00:00 stderr F I1213 00:16:37.368628 1 allowlist_controller.go:103] Reconcile allowlist for openshift-multus/cni-sysctl-allowlist
2025-12-13T00:16:37.368939406+00:00 stderr F I1213 00:16:37.368907 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.369737198+00:00 stderr F I1213 00:16:37.369695 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.369941713+00:00 stderr F I1213 00:16:37.369914 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.370279203+00:00 stderr F I1213 00:16:37.370242 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.370571471+00:00 stderr F I1213 00:16:37.370552 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.370884290+00:00 stderr F I1213 00:16:37.370858 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.371224149+00:00 stderr F I1213 00:16:37.371196 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.371662901+00:00 stderr F I1213 00:16:37.371623 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.371936208+00:00 stderr F I1213 00:16:37.371894 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.372782771+00:00 stderr F I1213 00:16:37.372736 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.372984206+00:00 stderr F I1213 00:16:37.372927 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.373105471+00:00 stderr F I1213 00:16:37.373061 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.373672346+00:00 stderr F I1213 00:16:37.373626 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.374192070+00:00 stderr F I1213 00:16:37.374151 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.374650322+00:00 stderr F I1213 00:16:37.374611 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.374948360+00:00 stderr F I1213 00:16:37.374906 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.375032813+00:00 stderr F I1213 00:16:37.375010 1 reflector.go:351] Caches populated for *v1.IngressController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.375390532+00:00 stderr F I1213 00:16:37.375327 1 reflector.go:351] Caches populated for *v1.EgressRouter from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.377797498+00:00 stderr F I1213 00:16:37.377735 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.379170006+00:00 stderr F I1213 00:16:37.378703 1 dashboard_controller.go:139] Applying dashboards manifests
2025-12-13T00:16:37.386340671+00:00 stderr F I1213 00:16:37.386290 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].configMap.namespace\"" logger="KubeAPIWarningLogger"
2025-12-13T00:16:37.386340671+00:00 stderr F I1213 00:16:37.386316 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].defaultMode\"" logger="KubeAPIWarningLogger"
2025-12-13T00:16:37.388836880+00:00 stderr F I1213 00:16:37.388723 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health
2025-12-13T00:16:37.410733088+00:00 stderr F I1213 00:16:37.410661 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful
2025-12-13T00:16:37.410733088+00:00 stderr F I1213 00:16:37.410697 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats
2025-12-13T00:16:37.423007433+00:00 stderr F I1213 00:16:37.421393 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful
2025-12-13T00:16:37.484442501+00:00 stderr F I1213 00:16:37.482434 1 controller.go:220] "Starting workers" controller="infrastructureconfig-controller" worker count=1
2025-12-13T00:16:37.484442501+00:00 stderr F I1213 00:16:37.482519 1 controller.go:220] "Starting workers" controller="egress-router-controller" worker count=1
2025-12-13T00:16:37.484442501+00:00 stderr F I1213 00:16:37.482764 1 controller.go:220] "Starting workers" controller="pki-controller" worker count=1
2025-12-13T00:16:37.484442501+00:00 stderr F I1213 00:16:37.482879 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-12-13T00:16:37.484442501+00:00 stderr F I1213 00:16:37.483488 1 controller.go:220] "Starting workers" controller="operconfig-controller" worker count=1
2025-12-13T00:16:37.488009049+00:00 stderr F I1213 00:16:37.485055 1 controller.go:220] "Starting workers" controller="signer-controller" worker count=1
2025-12-13T00:16:37.488009049+00:00 stderr F I1213 00:16:37.485645 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-12-13T00:16:37.488009049+00:00 stderr F I1213 00:16:37.486151 1 log.go:245] /crc changed, triggering operconf reconciliation
2025-12-13T00:16:37.488009049+00:00 stderr F I1213 00:16:37.486390 1 controller.go:220] "Starting workers" controller="ingress-config-controller" worker count=1
2025-12-13T00:16:37.488009049+00:00 stderr F I1213 00:16:37.486435 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-12-13T00:16:37.488009049+00:00 stderr F I1213 00:16:37.486587 1 controller.go:220] "Starting workers" controller="proxyconfig-controller" worker count=1
2025-12-13T00:16:37.488009049+00:00 stderr F I1213 00:16:37.486824 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt'
2025-12-13T00:16:37.491015971+00:00 stderr F I1213 00:16:37.490781 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.493474138+00:00 stderr F I1213 00:16:37.491981 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.498365111+00:00 stderr F I1213 00:16:37.497974 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:16:37.498365111+00:00 stderr F I1213 00:16:37.498041 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/registry-certs'
2025-12-13T00:16:37.500189431+00:00 stderr F I1213 00:16:37.500147 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.505191138+00:00 stderr F I1213 00:16:37.505142 1 log.go:245] configmap 'openshift-config/registry-certs' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:16:37.505223429+00:00 stderr F I1213 00:16:37.505191 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca'
2025-12-13T00:16:37.511040508+00:00 stderr F I1213 00:16:37.510998 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:16:37.511094669+00:00 stderr F I1213 00:16:37.511041 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-metric-serving-ca'
2025-12-13T00:16:37.521010319+00:00 stderr F I1213 00:16:37.518104 1 log.go:245] configmap 'openshift-config/etcd-metric-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:16:37.521010319+00:00 stderr F I1213 00:16:37.518159 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-serving-ca'
2025-12-13T00:16:37.524091183+00:00 stderr F I1213 00:16:37.523226 1 log.go:245] configmap 'openshift-config/etcd-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:16:37.524091183+00:00 stderr F I1213 00:16:37.523272 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/kube-root-ca.crt'
2025-12-13T00:16:37.526174420+00:00 stderr F I1213 00:16:37.526155 1 log.go:245] configmap 'openshift-config/kube-root-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:16:37.526245372+00:00 stderr F I1213 00:16:37.526233 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-install-manifests'
2025-12-13T00:16:37.532098652+00:00 stderr F I1213 00:16:37.532056 1 log.go:245] configmap 'openshift-config/openshift-install-manifests' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:16:37.532123743+00:00 stderr F I1213 00:16:37.532106 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-acks'
2025-12-13T00:16:37.534560230+00:00 stderr F I1213 00:16:37.534538 1 log.go:245] configmap 'openshift-config/admin-acks' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:16:37.534625061+00:00 stderr F I1213 00:16:37.534614 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-ca-bundle'
2025-12-13T00:16:37.572564308+00:00 stderr F I1213 00:16:37.572507 1 log.go:245] configmap 'openshift-config/etcd-ca-bundle' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:16:37.572596789+00:00 stderr F I1213 00:16:37.572564 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/initial-kube-apiserver-server-ca'
2025-12-13T00:16:37.584025421+00:00 stderr F I1213 00:16:37.583991 1 log.go:245] "network-node-identity-cert" in "openshift-network-node-identity" requires a new target cert/key pair: past its refresh time 2025-11-13 01:56:26 +0000 UTC
2025-12-13T00:16:37.590075226+00:00 stderr F I1213 00:16:37.590035 1 controller.go:220] "Starting workers" controller="clusterconfig-controller" worker count=1
2025-12-13T00:16:37.590152198+00:00 stderr F I1213 00:16:37.590134 1 log.go:245] Reconciling Network.config.openshift.io cluster
2025-12-13T00:16:37.625903765+00:00 stderr F I1213 00:16:37.625849 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:37.661589780+00:00 stderr F I1213 00:16:37.661392 1 base_controller.go:73] Caches are synced for ConnectivityCheckController
2025-12-13T00:16:37.661589780+00:00 stderr F I1213 00:16:37.661423 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ...
2025-12-13T00:16:37.685609015+00:00 stderr F I1213 00:16:37.685537 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-12-13T00:16:37.692192964+00:00 stderr F I1213 00:16:37.691911 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:16:37.698037455+00:00 stderr F I1213 00:16:37.698006 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:16:37.705083927+00:00 stderr F I1213 00:16:37.705043 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:16:37.710158356+00:00 stderr F I1213 00:16:37.710091 1 log.go:245] Updated Secret/network-node-identity-cert -n openshift-network-node-identity because it changed
2025-12-13T00:16:37.710158356+00:00 stderr F I1213 00:16:37.710114 1 log.go:245] successful reconciliation
2025-12-13T00:16:37.710393672+00:00 stderr F I1213 00:16:37.710371 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:16:37.717044453+00:00 stderr F I1213 00:16:37.717004 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-12-13T00:16:37.723397968+00:00 stderr F I1213 00:16:37.723344 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-12-13T00:16:37.729318109+00:00 stderr F I1213 00:16:37.729230 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:16:37.741314937+00:00 stderr F I1213 00:16:37.741251 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-12-13T00:16:37.893794001+00:00 stderr F I1213 00:16:37.893732 1 log.go:245] configmap 'openshift-config/initial-kube-apiserver-server-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:16:37.893843522+00:00 stderr F I1213 00:16:37.893791 1 log.go:245] Reconciling proxy 'cluster'
2025-12-13T00:16:38.094122722+00:00 stderr F I1213 00:16:38.093717 1 log.go:245] httpProxy, httpsProxy and noProxy not defined for proxy 'cluster'; validation will be skipped
2025-12-13T00:16:38.165804561+00:00 stderr F I1213 00:16:38.165741 1 log.go:245] Reconciling configmap from openshift-kube-apiserver/trusted-ca-bundle
2025-12-13T00:16:38.168465352+00:00 stderr F I1213 00:16:38.168432 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:38.366151012+00:00 stderr F I1213 00:16:38.365427 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-12-13T00:16:38.371181869+00:00 stderr F I1213 00:16:38.371130 1 log.go:245] Reconciling proxy 'cluster' complete
2025-12-13T00:16:38.375680842+00:00 stderr F I1213 00:16:38.375626 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-12-13T00:16:38.376239258+00:00 stderr F I1213 00:16:38.376206 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-12-13T00:16:38.376239258+00:00 stderr F I1213 00:16:38.376218 1 log.go:245] Successfully updated Operator config from Cluster config
2025-12-13T00:16:38.567161202+00:00 stderr F I1213 00:16:38.567074 1 dashboard_controller.go:113] Reconcile dashboards
2025-12-13T00:16:38.573205137+00:00 stderr F I1213 00:16:38.573137 1 dashboard_controller.go:139] Applying dashboards manifests
2025-12-13T00:16:38.580024073+00:00 stderr F I1213 00:16:38.578979 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health
2025-12-13T00:16:38.595013983+00:00 stderr F I1213 00:16:38.593527 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful
2025-12-13T00:16:38.595013983+00:00 stderr F I1213 00:16:38.593549 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats
2025-12-13T00:16:38.608803609+00:00 stderr F I1213 00:16:38.608749 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful
2025-12-13T00:16:38.767950485+00:00 stderr F I1213 00:16:38.767821 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2025-12-13T00:16:38.771893653+00:00 stderr F I1213 00:16:38.771824 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:38.773712203+00:00 stderr F I1213 00:16:38.773654 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:38.869599382+00:00 stderr F I1213 00:16:38.869513 1 log.go:245] "ovn-cert" in "openshift-ovn-kubernetes" requires a new target cert/key pair: past its refresh time 2025-11-13 01:56:29 +0000 UTC
2025-12-13T00:16:38.956399833+00:00 stderr F I1213 00:16:38.956287 1 log.go:245] Updated Secret/ovn-cert -n openshift-ovn-kubernetes because it changed
2025-12-13T00:16:38.956399833+00:00 stderr F I1213 00:16:38.956321 1 log.go:245] successful reconciliation
2025-12-13T00:16:39.170800768+00:00 stderr F I1213 00:16:39.170708 1 log.go:245] Reconciling configmap from openshift-kube-controller-manager/trusted-ca-bundle
2025-12-13T00:16:39.174126649+00:00 stderr F I1213 00:16:39.174046 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:39.374280486+00:00 stderr F I1213 00:16:39.374195 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-12-13T00:16:39.376451165+00:00 stderr F I1213 00:16:39.376412 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-12-13T00:16:39.378326436+00:00 stderr F I1213 00:16:39.378287 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-12-13T00:16:39.378374547+00:00 stderr F I1213 00:16:39.378303 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc001935300 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-12-13T00:16:39.569974490+00:00 stderr F I1213 00:16:39.569829 1 log.go:245] Reconciling Network.config.openshift.io cluster
2025-12-13T00:16:39.575994015+00:00 stderr F I1213 00:16:39.575864 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-12-13T00:16:39.575994015+00:00 stderr F I1213 00:16:39.575889 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-12-13T00:16:39.575994015+00:00 stderr F I1213 00:16:39.575899 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-12-13T00:16:39.581171676+00:00 stderr F I1213 00:16:39.581120 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-12-13T00:16:39.581171676+00:00 stderr F I1213 00:16:39.581140 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-12-13T00:16:39.581171676+00:00 stderr F I1213 00:16:39.581145 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-12-13T00:16:39.581171676+00:00 stderr F I1213 00:16:39.581151 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-12-13T00:16:39.581225328+00:00 stderr F I1213 00:16:39.581173 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-12-13T00:16:39.768739479+00:00 stderr F I1213 00:16:39.768649 1 dashboard_controller.go:113] Reconcile dashboards
2025-12-13T00:16:39.773830238+00:00 stderr F I1213 00:16:39.773779 1 dashboard_controller.go:139] Applying dashboards manifests
2025-12-13T00:16:39.780827939+00:00 stderr F I1213 00:16:39.780771 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health
2025-12-13T00:16:39.801045291+00:00 stderr F I1213 00:16:39.800832 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful
2025-12-13T00:16:39.801118493+00:00 stderr F I1213 00:16:39.801053 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats
2025-12-13T00:16:39.818470567+00:00 stderr F I1213 00:16:39.818317 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful
2025-12-13T00:16:40.166501883+00:00 stderr F I1213 00:16:40.166442 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-12-13T00:16:40.171766207+00:00 stderr F I1213 00:16:40.171707 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-12-13T00:16:40.183666331+00:00 stderr F I1213 00:16:40.183598 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-12-13T00:16:40.188512394+00:00 stderr F I1213 00:16:40.188167 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
2025-12-13T00:16:40.188512394+00:00 stderr F E1213 00:16:40.188241 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="0422b265-fb34-46cd-9e38-d0a3122ab167"
2025-12-13T00:16:40.188512394+00:00 stderr F I1213 00:16:40.188273 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-12-13T00:16:40.188512394+00:00 stderr F I1213 00:16:40.188393 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-12-13T00:16:40.188512394+00:00 stderr F I1213 00:16:40.188463 1 log.go:245] Successfully updated Operator config from Cluster config
2025-12-13T00:16:40.368155600+00:00 stderr F I1213 00:16:40.368105 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2025-12-13T00:16:40.371841530+00:00 stderr F I1213 00:16:40.371792 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:40.374201005+00:00 stderr F I1213 00:16:40.374131 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:16:40.404437831+00:00 stderr F I1213 00:16:40.404359 1 allowlist_controller.go:146] Successfully updated sysctl allowlist
2025-12-13T00:16:40.469510798+00:00 stderr F I1213 00:16:40.469425 1 log.go:245] "signer-cert" in "openshift-ovn-kubernetes" requires a new target cert/key pair: past its refresh time 2025-11-13 01:56:32 +0000 UTC
2025-12-13T00:16:40.535804959+00:00 stderr F I1213 00:16:40.534425 1 log.go:245] Updated Secret/signer-cert -n openshift-ovn-kubernetes because it changed
2025-12-13T00:16:40.535804959+00:00 stderr F I1213 00:16:40.534448 1 log.go:245] successful reconciliation
2025-12-13T00:16:40.567036532+00:00 stderr F I1213 00:16:40.566984 1 log.go:245] Reconciling configmap from openshift-apiserver-operator/trusted-ca-bundle
2025-12-13T00:16:40.571317298+00:00 stderr F I1213 00:16:40.571268 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:41.573712755+00:00 stderr F I1213 00:16:41.573244 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-12-13T00:16:41.575619658+00:00 stderr F I1213 00:16:41.575565 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-12-13T00:16:41.577297983+00:00 stderr F I1213 00:16:41.577265 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-12-13T00:16:41.577297983+00:00 stderr F I1213 00:16:41.577280 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000ec1300 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-12-13T00:16:41.581172270+00:00 stderr F I1213 00:16:41.581123 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-12-13T00:16:41.581172270+00:00 stderr F I1213 00:16:41.581150 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-12-13T00:16:41.581172270+00:00 stderr F I1213 00:16:41.581159 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-12-13T00:16:41.583247145+00:00 stderr F I1213 00:16:41.583207 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-12-13T00:16:41.583247145+00:00 stderr F I1213 00:16:41.583223 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-12-13T00:16:41.583247145+00:00 stderr F I1213 00:16:41.583227 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-12-13T00:16:41.583247145+00:00 stderr F I1213 00:16:41.583232 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-12-13T00:16:41.583277266+00:00 stderr F I1213 00:16:41.583267 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-12-13T00:16:41.968809156+00:00 stderr F I1213 00:16:41.968714 1 log.go:245] Reconciling configmap from openshift-authentication-operator/trusted-ca-bundle
2025-12-13T00:16:41.971751966+00:00 stderr F I1213 00:16:41.971657 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:41.972097316+00:00 stderr F I1213 00:16:41.972049 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-12-13T00:16:41.982681605+00:00 stderr F I1213 00:16:41.982621 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-12-13T00:16:41.982681605+00:00 stderr F I1213 00:16:41.982661 1 log.go:245] Starting render phase
2025-12-13T00:16:42.005389165+00:00 stderr F I1213 00:16:42.005328 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2025-12-13T00:16:42.043308501+00:00 stderr F I1213 00:16:42.043243 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-12-13T00:16:42.043308501+00:00 stderr F I1213 00:16:42.043265 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-12-13T00:16:42.043308501+00:00 stderr F I1213 00:16:42.043288 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-12-13T00:16:42.043373833+00:00 stderr F I1213 00:16:42.043311 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-12-13T00:16:42.166885146+00:00 stderr F I1213 00:16:42.166798 1 log.go:245] Reconciling configmap from openshift-authentication/v4-0-config-system-trusted-ca-bundle
2025-12-13T00:16:42.168791668+00:00 stderr F I1213 00:16:42.168750 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.173709312+00:00 stderr F I1213 00:16:42.173644 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-12-13T00:16:42.173709312+00:00 stderr F I1213 00:16:42.173667 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-12-13T00:16:42.367259028+00:00 stderr F I1213 00:16:42.367179 1 log.go:245] Reconciling configmap from openshift-config-managed/trusted-ca-bundle
2025-12-13T00:16:42.372641355+00:00 stderr F I1213 00:16:42.372599 1 log.go:245] trusted-ca-bundle changed, updating 12 configMaps
2025-12-13T00:16:42.372659046+00:00 stderr F I1213 00:16:42.372651 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.372715527+00:00 stderr F I1213 00:16:42.372686 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.372752158+00:00 stderr F I1213 00:16:42.372730 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.372801240+00:00 stderr F I1213 00:16:42.372772 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.372837121+00:00 stderr F I1213 00:16:42.372815 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.372879732+00:00 stderr F I1213 00:16:42.372858 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.372917933+00:00 stderr F I1213 00:16:42.372898 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.372994835+00:00 stderr F I1213 00:16:42.372935 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.373006935+00:00 stderr F I1213 00:16:42.373001 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.373050136+00:00 stderr F I1213 00:16:42.373029 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.373085227+00:00 stderr F I1213 00:16:42.373064 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.373118298+00:00 stderr F I1213 00:16:42.373097 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.388476538+00:00 stderr F I1213 00:16:42.388433 1 log.go:245] Render phase done, rendered 112 objects
2025-12-13T00:16:42.566628053+00:00 stderr F I1213 00:16:42.566533 1 log.go:245] Reconciling configmap from openshift-console/trusted-ca-bundle
2025-12-13T00:16:42.569316677+00:00 stderr F I1213 00:16:42.569258 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:42.970799202+00:00 stderr F I1213 00:16:42.970731 1 log.go:245] Reconciling configmap from openshift-image-registry/trusted-ca
2025-12-13T00:16:42.972678733+00:00 stderr F I1213 00:16:42.972646 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping
2025-12-13T00:16:43.173599811+00:00 stderr F I1213 00:16:43.173525 1 log.go:245] Reconciling configmap from openshift-apiserver/trusted-ca-bundle
2025-12-13T00:16:43.173599811+00:00 stderr F I1213 00:16:43.173572 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-12-13T00:16:43.175760080+00:00 stderr F I1213 00:16:43.175715 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-12-13T00:16:43.179544663+00:00 stderr F I1213 00:16:43.179494 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-12-13T00:16:43.179544663+00:00 stderr F I1213 00:16:43.179534 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-12-13T00:16:43.189914097+00:00 stderr F I1213 00:16:43.189851 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-12-13T00:16:43.189969288+00:00 stderr F I1213 00:16:43.189912 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-12-13T00:16:43.197711040+00:00 stderr F I1213 00:16:43.196632 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-12-13T00:16:43.197711040+00:00 stderr F I1213 00:16:43.196681 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-12-13T00:16:43.202404617+00:00 stderr F I1213 00:16:43.202349 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-12-13T00:16:43.202432328+00:00 stderr F I1213 00:16:43.202402 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-12-13T00:16:43.208323920+00:00 stderr F I1213 00:16:43.208293 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-12-13T00:16:43.208349411+00:00 stderr F I1213 00:16:43.208331 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-12-13T00:16:43.212392580+00:00 stderr F I1213 00:16:43.212355 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-12-13T00:16:43.212420821+00:00 stderr F I1213 00:16:43.212393 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-12-13T00:16:43.216509232+00:00 stderr F I1213 00:16:43.216466 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-12-13T00:16:43.216509232+00:00 stderr F I1213 00:16:43.216494 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-12-13T00:16:43.220005639+00:00 stderr F I1213 00:16:43.219912 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-12-13T00:16:43.220005639+00:00 stderr F I1213 00:16:43.219985 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-12-13T00:16:43.222865357+00:00 stderr F I1213 00:16:43.222759 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-12-13T00:16:43.222865357+00:00 stderr F I1213 00:16:43.222790 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-12-13T00:16:43.225854908+00:00 stderr F I1213 00:16:43.225788 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-12-13T00:16:43.225854908+00:00 stderr F I1213 00:16:43.225813 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-12-13T00:16:43.367814825+00:00 stderr F I1213 00:16:43.367736 1 log.go:245] Reconciling configmap from openshift-controller-manager/openshift-global-ca 2025-12-13T00:16:43.369779249+00:00 stderr F I1213 00:16:43.369733 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-12-13T00:16:43.377788658+00:00 stderr F I1213 00:16:43.377751 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-12-13T00:16:43.377808998+00:00 stderr F I1213 00:16:43.377797 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-12-13T00:16:43.567869939+00:00 stderr F I1213 00:16:43.567806 1 log.go:245] Reconciling configmap from openshift-machine-api/mao-trusted-ca 2025-12-13T00:16:43.569744181+00:00 stderr F I1213 00:16:43.569705 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2025-12-13T00:16:43.578327755+00:00 stderr F I1213 00:16:43.578288 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, 
Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-12-13T00:16:43.578347416+00:00 stderr F I1213 00:16:43.578325 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-12-13T00:16:43.769223769+00:00 stderr F I1213 00:16:43.769098 1 log.go:245] Reconciling configmap from openshift-marketplace/marketplace-trusted-ca 2025-12-13T00:16:43.771048418+00:00 stderr F I1213 00:16:43.770975 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-12-13T00:16:43.778176533+00:00 stderr F I1213 00:16:43.778111 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-12-13T00:16:43.778176533+00:00 stderr F I1213 00:16:43.778159 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-12-13T00:16:43.979480661+00:00 stderr F I1213 00:16:43.979404 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-12-13T00:16:43.979480661+00:00 stderr F I1213 00:16:43.979464 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-12-13T00:16:44.178685512+00:00 stderr F I1213 00:16:44.178591 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-12-13T00:16:44.178685512+00:00 stderr F I1213 00:16:44.178643 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-12-13T00:16:44.379297441+00:00 stderr F I1213 00:16:44.379156 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-12-13T00:16:44.379297441+00:00 stderr F I1213 00:16:44.379219 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 
2025-12-13T00:16:44.577294748+00:00 stderr F I1213 00:16:44.577201 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-12-13T00:16:44.577294748+00:00 stderr F I1213 00:16:44.577264 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-12-13T00:16:44.779144731+00:00 stderr F I1213 00:16:44.779076 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-12-13T00:16:44.779144731+00:00 stderr F I1213 00:16:44.779121 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-12-13T00:16:44.979134104+00:00 stderr F I1213 00:16:44.979070 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-12-13T00:16:44.979167545+00:00 stderr F I1213 00:16:44.979137 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-12-13T00:16:45.181788558+00:00 stderr F I1213 00:16:45.181699 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-12-13T00:16:45.181788558+00:00 stderr F I1213 00:16:45.181740 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-12-13T00:16:45.378864330+00:00 stderr F I1213 00:16:45.378744 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-12-13T00:16:45.378864330+00:00 stderr F I1213 00:16:45.378829 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-12-13T00:16:45.592966918+00:00 stderr F I1213 00:16:45.592862 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-12-13T00:16:45.592966918+00:00 stderr F I1213 00:16:45.592925 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) 
openshift-multus/multus-additional-cni-plugins 2025-12-13T00:16:45.800021223+00:00 stderr F I1213 00:16:45.799852 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-12-13T00:16:45.800021223+00:00 stderr F I1213 00:16:45.799905 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-12-13T00:16:45.980300256+00:00 stderr F I1213 00:16:45.980203 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-12-13T00:16:45.980300256+00:00 stderr F I1213 00:16:45.980243 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-12-13T00:16:46.179369713+00:00 stderr F I1213 00:16:46.179255 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-12-13T00:16:46.179369713+00:00 stderr F I1213 00:16:46.179320 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-12-13T00:16:46.381920015+00:00 stderr F I1213 00:16:46.381824 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-12-13T00:16:46.382132181+00:00 stderr F I1213 00:16:46.382100 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-12-13T00:16:46.584082067+00:00 stderr F I1213 00:16:46.584024 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-12-13T00:16:46.584082067+00:00 stderr F I1213 00:16:46.584072 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-12-13T00:16:46.782756872+00:00 stderr F I1213 00:16:46.782672 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was 
successful 2025-12-13T00:16:46.782850475+00:00 stderr F I1213 00:16:46.782811 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-12-13T00:16:46.982615671+00:00 stderr F I1213 00:16:46.982001 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-12-13T00:16:46.982746665+00:00 stderr F I1213 00:16:46.982729 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-12-13T00:16:47.180681981+00:00 stderr F I1213 00:16:47.180631 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-12-13T00:16:47.180778904+00:00 stderr F I1213 00:16:47.180764 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-12-13T00:16:47.387118408+00:00 stderr F I1213 00:16:47.387050 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-12-13T00:16:47.387266362+00:00 stderr F I1213 00:16:47.387252 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-12-13T00:16:47.580018617+00:00 stderr F I1213 00:16:47.579972 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-12-13T00:16:47.580127580+00:00 stderr F I1213 00:16:47.580113 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-12-13T00:16:47.781974203+00:00 stderr F I1213 00:16:47.781838 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-12-13T00:16:47.781974203+00:00 stderr F I1213 00:16:47.781911 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-12-13T00:16:47.979088767+00:00 stderr F I1213 00:16:47.979031 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-12-13T00:16:47.979216320+00:00 stderr F I1213 00:16:47.979200 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-12-13T00:16:48.178401120+00:00 stderr F I1213 00:16:48.178345 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-12-13T00:16:48.178401120+00:00 stderr F I1213 00:16:48.178393 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-12-13T00:16:48.385499787+00:00 stderr F I1213 00:16:48.385406 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-12-13T00:16:48.385499787+00:00 stderr F I1213 00:16:48.385461 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-12-13T00:16:48.589025444+00:00 stderr F I1213 00:16:48.588923 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-12-13T00:16:48.589146868+00:00 stderr F I1213 00:16:48.589126 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-12-13T00:16:48.781147512+00:00 stderr F I1213 00:16:48.780485 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-12-13T00:16:48.781147512+00:00 stderr F I1213 00:16:48.781079 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-12-13T00:16:48.980749973+00:00 stderr F I1213 00:16:48.980278 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 
2025-12-13T00:16:48.980749973+00:00 stderr F I1213 00:16:48.980690 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-12-13T00:16:49.179502691+00:00 stderr F I1213 00:16:49.179430 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-12-13T00:16:49.179541612+00:00 stderr F I1213 00:16:49.179528 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-12-13T00:16:49.384133620+00:00 stderr F I1213 00:16:49.383003 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-12-13T00:16:49.384133620+00:00 stderr F I1213 00:16:49.383068 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-12-13T00:16:49.578894960+00:00 stderr F I1213 00:16:49.578820 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-12-13T00:16:49.578894960+00:00 stderr F I1213 00:16:49.578865 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-12-13T00:16:49.791081224+00:00 stderr F I1213 00:16:49.790979 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-12-13T00:16:49.791138476+00:00 stderr F I1213 00:16:49.791093 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-12-13T00:16:50.010548829+00:00 stderr F I1213 00:16:50.009972 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-12-13T00:16:50.010548829+00:00 stderr F I1213 00:16:50.010528 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 
2025-12-13T00:16:50.190713279+00:00 stderr F I1213 00:16:50.190666 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-12-13T00:16:50.190824222+00:00 stderr F I1213 00:16:50.190809 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-12-13T00:16:50.397839906+00:00 stderr F I1213 00:16:50.397775 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-12-13T00:16:50.397839906+00:00 stderr F I1213 00:16:50.397821 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-12-13T00:16:50.587860926+00:00 stderr F I1213 00:16:50.587796 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-12-13T00:16:50.587860926+00:00 stderr F I1213 00:16:50.587845 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-12-13T00:16:50.831227233+00:00 stderr F I1213 00:16:50.831184 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-12-13T00:16:50.831340416+00:00 stderr F I1213 00:16:50.831324 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-12-13T00:16:51.008037071+00:00 stderr F I1213 00:16:51.007973 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-12-13T00:16:51.008037071+00:00 stderr F I1213 00:16:51.008020 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 
2025-12-13T00:16:51.178789865+00:00 stderr F I1213 00:16:51.178716 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-12-13T00:16:51.178789865+00:00 stderr F I1213 00:16:51.178769 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-12-13T00:16:51.380256397+00:00 stderr F I1213 00:16:51.380169 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-12-13T00:16:51.380323689+00:00 stderr F I1213 00:16:51.380268 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-12-13T00:16:51.577999807+00:00 stderr F I1213 00:16:51.577917 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-12-13T00:16:51.578122471+00:00 stderr F I1213 00:16:51.578104 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-12-13T00:16:51.778870623+00:00 stderr F I1213 00:16:51.778798 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-12-13T00:16:51.778870623+00:00 stderr F I1213 00:16:51.778840 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-12-13T00:16:51.979369879+00:00 stderr F I1213 00:16:51.979312 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-12-13T00:16:51.979369879+00:00 stderr F I1213 00:16:51.979351 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) 
/openshift-ovn-kubernetes-kube-rbac-proxy 2025-12-13T00:16:52.178980951+00:00 stderr F I1213 00:16:52.178918 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-12-13T00:16:52.179015992+00:00 stderr F I1213 00:16:52.178989 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-12-13T00:16:52.379640891+00:00 stderr F I1213 00:16:52.379540 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-12-13T00:16:52.379640891+00:00 stderr F I1213 00:16:52.379598 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-12-13T00:16:52.578407860+00:00 stderr F I1213 00:16:52.578343 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-12-13T00:16:52.578407860+00:00 stderr F I1213 00:16:52.578389 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-12-13T00:16:52.780320904+00:00 stderr F I1213 00:16:52.780253 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-12-13T00:16:52.780358575+00:00 stderr F I1213 00:16:52.780324 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-12-13T00:16:52.981861539+00:00 stderr F I1213 00:16:52.981775 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-12-13T00:16:52.981861539+00:00 stderr F I1213 00:16:52.981826 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-12-13T00:16:53.179830336+00:00 
stderr F I1213 00:16:53.179750 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-12-13T00:16:53.179830336+00:00 stderr F I1213 00:16:53.179797 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-12-13T00:16:53.379227772+00:00 stderr F I1213 00:16:53.379107 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-12-13T00:16:53.379227772+00:00 stderr F I1213 00:16:53.379175 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-12-13T00:16:53.579768560+00:00 stderr F I1213 00:16:53.579694 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-12-13T00:16:53.579768560+00:00 stderr F I1213 00:16:53.579739 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-12-13T00:16:53.779409692+00:00 stderr F I1213 00:16:53.779336 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-12-13T00:16:53.779409692+00:00 stderr F I1213 00:16:53.779396 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-12-13T00:16:53.978998833+00:00 stderr F I1213 00:16:53.978906 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-12-13T00:16:53.978998833+00:00 stderr F I1213 00:16:53.978983 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 
2025-12-13T00:16:55.232220991+00:00 stderr F I1213 00:16:54.179530 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-12-13T00:16:55.232220991+00:00 stderr F I1213 00:16:54.179566 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-12-13T00:16:55.232220991+00:00 stderr F I1213 00:16:54.379798 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-12-13T00:16:55.232220991+00:00 stderr F I1213 00:16:54.379842 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-12-13T00:16:55.232220991+00:00 stderr F I1213 00:16:54.581310 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-12-13T00:16:55.232220991+00:00 stderr F I1213 00:16:54.581390 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-12-13T00:16:55.232220991+00:00 stderr F I1213 00:16:54.783213 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-12-13T00:16:55.232220991+00:00 stderr F I1213 00:16:54.783267 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-12-13T00:16:55.232220991+00:00 stderr F I1213 00:16:54.982497 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-12-13T00:16:55.232220991+00:00 stderr F I1213 00:16:54.982556 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-12-13T00:16:55.232220991+00:00 stderr F I1213 00:16:55.178673 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) 
openshift-config-managed/openshift-network-features was successful 2025-12-13T00:16:55.232220991+00:00 stderr F I1213 00:16:55.178728 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-12-13T00:16:55.379702739+00:00 stderr F I1213 00:16:55.379629 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-12-13T00:16:55.379733329+00:00 stderr F I1213 00:16:55.379698 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-12-13T00:16:55.580799411+00:00 stderr F I1213 00:16:55.580694 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-12-13T00:16:55.580799411+00:00 stderr F I1213 00:16:55.580743 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-12-13T00:16:55.781078542+00:00 stderr F I1213 00:16:55.780976 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-12-13T00:16:55.781078542+00:00 stderr F I1213 00:16:55.781020 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-12-13T00:16:55.982103191+00:00 stderr F I1213 00:16:55.982036 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-12-13T00:16:55.982103191+00:00 stderr F I1213 00:16:55.982086 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-12-13T00:16:56.178246828+00:00 stderr F I1213 00:16:56.178162 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-12-13T00:16:56.178246828+00:00 stderr F I1213 
00:16:56.178198 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-12-13T00:16:56.378712843+00:00 stderr F I1213 00:16:56.378606 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-12-13T00:16:56.378712843+00:00 stderr F I1213 00:16:56.378652 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-12-13T00:16:56.578830258+00:00 stderr F I1213 00:16:56.578735 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-12-13T00:16:56.578830258+00:00 stderr F I1213 00:16:56.578774 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-12-13T00:16:56.780765274+00:00 stderr F I1213 00:16:56.780674 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-12-13T00:16:56.780765274+00:00 stderr F I1213 00:16:56.780718 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-12-13T00:16:56.990663246+00:00 stderr F I1213 00:16:56.990587 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-12-13T00:16:56.990663246+00:00 stderr F I1213 00:16:56.990640 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-12-13T00:16:57.206374238+00:00 stderr F I1213 00:16:57.206310 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-12-13T00:16:57.206374238+00:00 stderr F I1213 00:16:57.206359 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-12-13T00:16:57.378656224+00:00 stderr F I1213 00:16:57.378593 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was 
successful 2025-12-13T00:16:57.378689875+00:00 stderr F I1213 00:16:57.378654 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-12-13T00:16:57.587357013+00:00 stderr F I1213 00:16:57.587294 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-12-13T00:16:57.587418235+00:00 stderr F I1213 00:16:57.587360 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics 2025-12-13T00:16:57.779866772+00:00 stderr F I1213 00:16:57.779767 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-12-13T00:16:57.779866772+00:00 stderr F I1213 00:16:57.779820 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-12-13T00:16:57.979222546+00:00 stderr F I1213 00:16:57.979155 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-12-13T00:16:57.979222546+00:00 stderr F I1213 00:16:57.979202 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-12-13T00:16:58.180195615+00:00 stderr F I1213 00:16:58.180095 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-12-13T00:16:58.180195615+00:00 stderr F I1213 00:16:58.180142 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-12-13T00:16:58.381587546+00:00 stderr F I1213 00:16:58.381411 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-12-13T00:16:58.381587546+00:00 stderr F I1213 00:16:58.381521 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=RoleBinding) kube-system/network-diagnostics 2025-12-13T00:16:58.579998995+00:00 stderr F I1213 00:16:58.579801 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-12-13T00:16:58.579998995+00:00 stderr F I1213 00:16:58.579869 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-12-13T00:16:58.790984028+00:00 stderr F I1213 00:16:58.790895 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-12-13T00:16:58.790984028+00:00 stderr F I1213 00:16:58.790957 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-12-13T00:16:58.984419211+00:00 stderr F I1213 00:16:58.984348 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-12-13T00:16:58.984419211+00:00 stderr F I1213 00:16:58.984399 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-12-13T00:16:59.181817653+00:00 stderr F I1213 00:16:59.181727 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-12-13T00:16:59.181817653+00:00 stderr F I1213 00:16:59.181769 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-12-13T00:16:59.379382798+00:00 stderr F I1213 00:16:59.379284 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-12-13T00:16:59.379382798+00:00 stderr F I1213 00:16:59.379371 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-12-13T00:16:59.579141313+00:00 stderr F I1213 00:16:59.579052 1 
log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-12-13T00:16:59.579195845+00:00 stderr F I1213 00:16:59.579141 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-12-13T00:16:59.787118245+00:00 stderr F I1213 00:16:59.787016 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 2025-12-13T00:16:59.787118245+00:00 stderr F I1213 00:16:59.787093 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-12-13T00:16:59.981228086+00:00 stderr F I1213 00:16:59.981103 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-12-13T00:16:59.981228086+00:00 stderr F I1213 00:16:59.981188 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-12-13T00:17:00.180133988+00:00 stderr F I1213 00:17:00.180063 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-12-13T00:17:00.180133988+00:00 stderr F I1213 00:17:00.180122 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-12-13T00:17:00.379272927+00:00 stderr F I1213 00:17:00.379138 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-12-13T00:17:00.379272927+00:00 stderr F I1213 00:17:00.379195 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-12-13T00:17:00.580083992+00:00 stderr F I1213 00:17:00.579983 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 
2025-12-13T00:17:00.580083992+00:00 stderr F I1213 00:17:00.580023 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-12-13T00:17:00.780131595+00:00 stderr F I1213 00:17:00.779673 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-12-13T00:17:00.780131595+00:00 stderr F I1213 00:17:00.780088 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity 2025-12-13T00:17:00.982591184+00:00 stderr F I1213 00:17:00.982124 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-12-13T00:17:00.982656826+00:00 stderr F I1213 00:17:00.982585 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-12-13T00:17:01.180619073+00:00 stderr F I1213 00:17:01.180529 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-12-13T00:17:01.180619073+00:00 stderr F I1213 00:17:01.180596 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-12-13T00:17:01.380347777+00:00 stderr F I1213 00:17:01.380272 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-12-13T00:17:01.380347777+00:00 stderr F I1213 00:17:01.380336 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-12-13T00:17:01.579343132+00:00 stderr F I1213 00:17:01.579272 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-12-13T00:17:01.579394644+00:00 stderr F I1213 00:17:01.579340 1 
log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-12-13T00:17:01.778583804+00:00 stderr F I1213 00:17:01.778497 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-12-13T00:17:01.778583804+00:00 stderr F I1213 00:17:01.778566 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm 2025-12-13T00:17:01.978570636+00:00 stderr F I1213 00:17:01.978462 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-12-13T00:17:01.978570636+00:00 stderr F I1213 00:17:01.978513 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-12-13T00:17:02.179971707+00:00 stderr F I1213 00:17:02.179851 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-12-13T00:17:02.179971707+00:00 stderr F I1213 00:17:02.179908 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-12-13T00:17:02.382812317+00:00 stderr F I1213 00:17:02.382695 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-12-13T00:17:02.382812317+00:00 stderr F I1213 00:17:02.382747 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-12-13T00:17:02.589177853+00:00 stderr F I1213 00:17:02.589077 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-12-13T00:17:02.589177853+00:00 stderr F I1213 
00:17:02.589139 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-12-13T00:17:02.782281367+00:00 stderr F I1213 00:17:02.782196 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-12-13T00:17:02.782330569+00:00 stderr F I1213 00:17:02.782276 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-12-13T00:17:02.979815312+00:00 stderr F I1213 00:17:02.979729 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-12-13T00:17:02.979815312+00:00 stderr F I1213 00:17:02.979794 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-12-13T00:17:03.179888197+00:00 stderr F I1213 00:17:03.179778 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-12-13T00:17:03.179888197+00:00 stderr F I1213 00:17:03.179824 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-12-13T00:17:03.380678990+00:00 stderr F I1213 00:17:03.380590 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-12-13T00:17:03.380678990+00:00 stderr F I1213 00:17:03.380639 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-12-13T00:17:03.581584717+00:00 stderr F I1213 00:17:03.581530 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-12-13T00:17:03.581584717+00:00 stderr F I1213 00:17:03.581579 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 
2025-12-13T00:17:03.786982987+00:00 stderr F I1213 00:17:03.786865 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-12-13T00:17:03.806494560+00:00 stderr F I1213 00:17:03.806405 1 log.go:245] Operconfig Controller complete 2025-12-13T00:17:03.806562502+00:00 stderr F I1213 00:17:03.806503 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-12-13T00:17:04.009738341+00:00 stderr F I1213 00:17:04.009631 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-12-13T00:17:04.011152149+00:00 stderr F I1213 00:17:04.011081 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-12-13T00:17:04.012427075+00:00 stderr F I1213 00:17:04.012391 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-12-13T00:17:04.012427075+00:00 stderr F I1213 00:17:04.012409 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003a06600 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-12-13T00:17:04.016000902+00:00 stderr F I1213 00:17:04.015907 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3 2025-12-13T00:17:04.016000902+00:00 stderr F I1213 00:17:04.015926 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete 2025-12-13T00:17:04.016000902+00:00 stderr F I1213 00:17:04.015957 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false 2025-12-13T00:17:04.018693056+00:00 stderr F I1213 00:17:04.018616 1 ovn_kubernetes.go:1626] daemonset 
openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-12-13T00:17:04.018693056+00:00 stderr F I1213 00:17:04.018644 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-12-13T00:17:04.018693056+00:00 stderr F I1213 00:17:04.018651 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2 2025-12-13T00:17:04.018693056+00:00 stderr F I1213 00:17:04.018661 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete 2025-12-13T00:17:04.018734177+00:00 stderr F I1213 00:17:04.018690 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-12-13T00:17:04.028095492+00:00 stderr F I1213 00:17:04.028051 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-12-13T00:17:04.039062732+00:00 stderr F I1213 00:17:04.039004 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-12-13T00:17:04.039062732+00:00 stderr F I1213 00:17:04.039036 1 log.go:245] Starting render phase 2025-12-13T00:17:04.052724135+00:00 stderr F I1213 00:17:04.052668 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. 
Using: 9107 2025-12-13T00:17:04.088101371+00:00 stderr F I1213 00:17:04.088047 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-12-13T00:17:04.088101371+00:00 stderr F I1213 00:17:04.088071 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-12-13T00:17:04.088149223+00:00 stderr F I1213 00:17:04.088098 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-12-13T00:17:04.088149223+00:00 stderr F I1213 00:17:04.088126 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-12-13T00:17:04.096645335+00:00 stderr F I1213 00:17:04.096596 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-12-13T00:17:04.096645335+00:00 stderr F I1213 00:17:04.096611 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-12-13T00:17:04.103009218+00:00 stderr F I1213 00:17:04.102952 1 log.go:245] Render phase done, rendered 112 objects 2025-12-13T00:17:04.116866517+00:00 stderr F I1213 00:17:04.116795 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-12-13T00:17:04.178079989+00:00 stderr F I1213 00:17:04.178014 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-12-13T00:17:04.178079989+00:00 stderr F I1213 00:17:04.178051 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-12-13T00:17:04.379782967+00:00 stderr F I1213 00:17:04.379702 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 
2025-12-13T00:17:04.379782967+00:00 stderr F I1213 00:17:04.379756 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-12-13T00:17:04.580026586+00:00 stderr F I1213 00:17:04.579908 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-12-13T00:17:04.580026586+00:00 stderr F I1213 00:17:04.579979 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-12-13T00:17:04.783647858+00:00 stderr F I1213 00:17:04.783552 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-12-13T00:17:04.783647858+00:00 stderr F I1213 00:17:04.783618 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-12-13T00:17:04.981001468+00:00 stderr F I1213 00:17:04.980901 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-12-13T00:17:04.981001468+00:00 stderr F I1213 00:17:04.980980 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-12-13T00:17:05.178883142+00:00 stderr F I1213 00:17:05.178805 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-12-13T00:17:05.178923893+00:00 stderr F I1213 00:17:05.178879 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-12-13T00:17:05.378316548+00:00 stderr F I1213 00:17:05.378198 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-12-13T00:17:05.378316548+00:00 stderr F I1213 00:17:05.378239 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-12-13T00:17:05.579323549+00:00 stderr F I1213 00:17:05.579243 1 
log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-12-13T00:17:05.579323549+00:00 stderr F I1213 00:17:05.579308 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-12-13T00:17:05.779401704+00:00 stderr F I1213 00:17:05.779295 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-12-13T00:17:05.779476526+00:00 stderr F I1213 00:17:05.779408 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-12-13T00:17:05.980910697+00:00 stderr F I1213 00:17:05.980831 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-12-13T00:17:05.980910697+00:00 stderr F I1213 00:17:05.980902 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-12-13T00:17:06.181173006+00:00 stderr F I1213 00:17:06.181077 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-12-13T00:17:06.181173006+00:00 stderr F I1213 00:17:06.181152 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-12-13T00:17:06.381236850+00:00 stderr F I1213 00:17:06.381140 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-12-13T00:17:06.381236850+00:00 stderr F I1213 00:17:06.381203 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-12-13T00:17:06.581040808+00:00 stderr F I1213 00:17:06.580927 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-12-13T00:17:06.581040808+00:00 stderr F I1213 00:17:06.581005 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRoleBinding) /multus-whereabouts 2025-12-13T00:17:06.779970701+00:00 stderr F I1213 00:17:06.779843 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-12-13T00:17:06.779970701+00:00 stderr F I1213 00:17:06.779883 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-12-13T00:17:06.980764174+00:00 stderr F I1213 00:17:06.980658 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-12-13T00:17:06.980764174+00:00 stderr F I1213 00:17:06.980750 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-12-13T00:17:07.180884410+00:00 stderr F I1213 00:17:07.180771 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-12-13T00:17:07.180884410+00:00 stderr F I1213 00:17:07.180829 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-12-13T00:17:07.379389922+00:00 stderr F I1213 00:17:07.379291 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-12-13T00:17:07.379389922+00:00 stderr F I1213 00:17:07.379354 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-12-13T00:17:07.579743443+00:00 stderr F I1213 00:17:07.579653 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-12-13T00:17:07.579743443+00:00 stderr F I1213 00:17:07.579710 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-12-13T00:17:07.787764856+00:00 stderr F I1213 00:17:07.787622 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was 
successful 2025-12-13T00:17:07.787764856+00:00 stderr F I1213 00:17:07.787692 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-12-13T00:17:07.981199529+00:00 stderr F I1213 00:17:07.981080 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-12-13T00:17:07.981199529+00:00 stderr F I1213 00:17:07.981140 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-12-13T00:17:08.183683899+00:00 stderr F I1213 00:17:08.183571 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-12-13T00:17:08.183683899+00:00 stderr F I1213 00:17:08.183661 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-12-13T00:17:08.396003158+00:00 stderr F I1213 00:17:08.395849 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-12-13T00:17:08.396003158+00:00 stderr F I1213 00:17:08.395945 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-12-13T00:17:08.594778827+00:00 stderr F I1213 00:17:08.594691 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-12-13T00:17:08.594778827+00:00 stderr F I1213 00:17:08.594760 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-12-13T00:17:08.782007970+00:00 stderr F I1213 00:17:08.781882 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-12-13T00:17:08.782071681+00:00 stderr F I1213 00:17:08.782021 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-12-13T00:17:08.979430472+00:00 stderr F I1213 00:17:08.979011 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 
2025-12-13T00:17:08.979430472+00:00 stderr F I1213 00:17:08.979415 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-12-13T00:17:09.179704872+00:00 stderr F I1213 00:17:09.179205 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-12-13T00:17:09.179704872+00:00 stderr F I1213 00:17:09.179654 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-12-13T00:17:09.380786964+00:00 stderr F I1213 00:17:09.380712 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-12-13T00:17:09.380786964+00:00 stderr F I1213 00:17:09.380755 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-12-13T00:17:09.580298252+00:00 stderr F I1213 00:17:09.580242 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-12-13T00:17:09.580298252+00:00 stderr F I1213 00:17:09.580283 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-12-13T00:17:09.780508161+00:00 stderr F I1213 00:17:09.780436 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-12-13T00:17:09.780508161+00:00 stderr F I1213 00:17:09.780484 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-12-13T00:17:09.980970255+00:00 stderr F I1213 00:17:09.980868 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-12-13T00:17:09.980970255+00:00 stderr F I1213 00:17:09.980927 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-12-13T00:17:10.178552112+00:00 stderr F I1213 
00:17:10.178486 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-12-13T00:17:10.178552112+00:00 stderr F I1213 00:17:10.178533 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-12-13T00:17:10.381357111+00:00 stderr F I1213 00:17:10.381306 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-12-13T00:17:10.381357111+00:00 stderr F I1213 00:17:10.381348 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-12-13T00:17:10.578808524+00:00 stderr F I1213 00:17:10.578590 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-12-13T00:17:10.578808524+00:00 stderr F I1213 00:17:10.578637 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-12-13T00:17:10.778979210+00:00 stderr F I1213 00:17:10.778886 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-12-13T00:17:10.778979210+00:00 stderr F I1213 00:17:10.778926 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-12-13T00:17:10.978789818+00:00 stderr F I1213 00:17:10.978703 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-12-13T00:17:10.978789818+00:00 stderr F I1213 00:17:10.978749 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-12-13T00:17:11.181304819+00:00 stderr F I1213 00:17:11.181224 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-12-13T00:17:11.181353820+00:00 
stderr F I1213 00:17:11.181315 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-12-13T00:17:11.383293416+00:00 stderr F I1213 00:17:11.383206 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-12-13T00:17:11.383293416+00:00 stderr F I1213 00:17:11.383255 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-12-13T00:17:11.580994976+00:00 stderr F I1213 00:17:11.580897 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-12-13T00:17:11.580994976+00:00 stderr F I1213 00:17:11.580956 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-12-13T00:17:11.778666614+00:00 stderr F I1213 00:17:11.778571 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-12-13T00:17:11.778666614+00:00 stderr F I1213 00:17:11.778622 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-12-13T00:17:11.979678114+00:00 stderr F I1213 00:17:11.979618 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-12-13T00:17:11.979678114+00:00 stderr F I1213 00:17:11.979664 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-12-13T00:17:12.180914950+00:00 stderr F I1213 00:17:12.180815 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-12-13T00:17:12.180914950+00:00 stderr F I1213 00:17:12.180888 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 
2025-12-13T00:17:12.380379818+00:00 stderr F I1213 00:17:12.380319 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-12-13T00:17:12.380421669+00:00 stderr F I1213 00:17:12.380390 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-12-13T00:17:12.592284806+00:00 stderr F I1213 00:17:12.592193 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-12-13T00:17:12.592284806+00:00 stderr F I1213 00:17:12.592274 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-12-13T00:17:12.790168950+00:00 stderr F I1213 00:17:12.790052 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-12-13T00:17:12.790168950+00:00 stderr F I1213 00:17:12.790104 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-12-13T00:17:12.987863159+00:00 stderr F I1213 00:17:12.987775 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-12-13T00:17:12.987863159+00:00 stderr F I1213 00:17:12.987826 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-12-13T00:17:13.198311257+00:00 stderr F I1213 00:17:13.198215 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-12-13T00:17:13.198311257+00:00 stderr F I1213 00:17:13.198293 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-12-13T00:17:13.383215097+00:00 stderr F I1213 00:17:13.383144 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, 
Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-12-13T00:17:13.383270759+00:00 stderr F I1213 00:17:13.383215 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-12-13T00:17:13.634781658+00:00 stderr F I1213 00:17:13.634692 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-12-13T00:17:13.634836250+00:00 stderr F I1213 00:17:13.634782 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-12-13T00:17:13.821212489+00:00 stderr F I1213 00:17:13.821143 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-12-13T00:17:13.821212489+00:00 stderr F I1213 00:17:13.821193 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-12-13T00:17:13.980746067+00:00 stderr F I1213 00:17:13.980046 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-12-13T00:17:13.980746067+00:00 stderr F I1213 00:17:13.980088 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-12-13T00:17:14.181622953+00:00 stderr F I1213 00:17:14.181502 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-12-13T00:17:14.181622953+00:00 stderr F I1213 00:17:14.181559 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-12-13T00:17:14.381402649+00:00 stderr F I1213 00:17:14.381302 1 log.go:245] Apply / Create 
of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-12-13T00:17:14.381402649+00:00 stderr F I1213 00:17:14.381381 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-12-13T00:17:14.583813737+00:00 stderr F I1213 00:17:14.583723 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-12-13T00:17:14.583813737+00:00 stderr F I1213 00:17:14.583783 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-12-13T00:17:14.779549104+00:00 stderr F I1213 00:17:14.779411 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-12-13T00:17:14.779549104+00:00 stderr F I1213 00:17:14.779474 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-12-13T00:17:14.980057130+00:00 stderr F I1213 00:17:14.979349 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-12-13T00:17:14.980057130+00:00 stderr F I1213 00:17:14.979912 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-12-13T00:17:15.180522126+00:00 stderr F I1213 00:17:15.180457 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-12-13T00:17:15.180566287+00:00 stderr F I1213 00:17:15.180520 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-12-13T00:17:15.381758161+00:00 stderr F I1213 00:17:15.381028 1 log.go:245] Apply / Create of (/v1, 
Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-12-13T00:17:15.381758161+00:00 stderr F I1213 00:17:15.381651       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-12-13T00:17:15.579392859+00:00 stderr F I1213 00:17:15.579294       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-12-13T00:17:15.579392859+00:00 stderr F I1213 00:17:15.579358       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-12-13T00:17:15.782373622+00:00 stderr F I1213 00:17:15.782288       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-12-13T00:17:15.782373622+00:00 stderr F I1213 00:17:15.782347       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-12-13T00:17:15.978596281+00:00 stderr F I1213 00:17:15.978492       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-12-13T00:17:15.978596281+00:00 stderr F I1213 00:17:15.978555       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-12-13T00:17:16.180102445+00:00 stderr F I1213 00:17:16.180019       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-12-13T00:17:16.180160597+00:00 stderr F I1213 00:17:16.180098       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-12-13T00:17:16.381007772+00:00 stderr F I1213 00:17:16.380904       1 log.go:245] Apply /
Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-12-13T00:17:16.381079554+00:00 stderr F I1213 00:17:16.381013       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-12-13T00:17:16.581411106+00:00 stderr F I1213 00:17:16.581314       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-12-13T00:17:16.581411106+00:00 stderr F I1213 00:17:16.581378       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-12-13T00:17:16.778912340+00:00 stderr F I1213 00:17:16.778822       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-12-13T00:17:16.778912340+00:00 stderr F I1213 00:17:16.778897       1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-12-13T00:17:16.979090437+00:00 stderr F I1213 00:17:16.978867       1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-12-13T00:17:16.979090437+00:00 stderr F I1213 00:17:16.978905       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-12-13T00:17:17.181322930+00:00 stderr F I1213 00:17:17.181236       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-12-13T00:17:17.181322930+00:00 stderr F I1213 00:17:17.181293       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-12-13T00:17:17.382245587+00:00 stderr F I1213 00:17:17.382170       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-12-13T00:17:17.382245587+00:00 stderr F I1213 00:17:17.382232       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-12-13T00:17:17.587295698+00:00 stderr F I1213 00:17:17.587216       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-12-13T00:17:17.587295698+00:00 stderr F I1213 00:17:17.587262       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-12-13T00:17:17.786822327+00:00 stderr F I1213 00:17:17.786719       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-12-13T00:17:17.786822327+00:00 stderr F I1213 00:17:17.786802       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-12-13T00:17:17.983062907+00:00 stderr F I1213 00:17:17.982921       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-12-13T00:17:17.983062907+00:00 stderr F I1213 00:17:17.983043       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-12-13T00:17:18.185422424+00:00 stderr F I1213 00:17:18.185328       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-12-13T00:17:18.185422424+00:00 stderr F I1213 00:17:18.185398       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-12-13T00:17:18.382086585+00:00 stderr F I1213 00:17:18.381993       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-12-13T00:17:18.382086585+00:00 stderr F I1213 00:17:18.382052       1 log.go:245] reconciling
(monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-12-13T00:17:18.583216678+00:00 stderr F I1213 00:17:18.583116       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-12-13T00:17:18.583216678+00:00 stderr F I1213 00:17:18.583185       1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-12-13T00:17:18.780681922+00:00 stderr F I1213 00:17:18.780613       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-12-13T00:17:18.780681922+00:00 stderr F I1213 00:17:18.780673       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-12-13T00:17:18.979763889+00:00 stderr F I1213 00:17:18.979685       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-12-13T00:17:18.979795730+00:00 stderr F I1213 00:17:18.979778       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-12-13T00:17:19.178373543+00:00 stderr F I1213 00:17:19.178295       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-12-13T00:17:19.178373543+00:00 stderr F I1213 00:17:19.178367       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-12-13T00:17:19.379825845+00:00 stderr F I1213 00:17:19.379754       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-12-13T00:17:19.379825845+00:00 stderr F I1213 00:17:19.379801       1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-12-13T00:17:19.581246155+00:00 stderr F I1213 00:17:19.581166       1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota)
openshift-host-network/host-network-namespace-quotas was successful
2025-12-13T00:17:19.581331478+00:00 stderr F I1213 00:17:19.581242       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-12-13T00:17:19.791507478+00:00 stderr F I1213 00:17:19.791422       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-12-13T00:17:19.791507478+00:00 stderr F I1213 00:17:19.791479       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-12-13T00:17:20.039359058+00:00 stderr F I1213 00:17:20.039291       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-12-13T00:17:20.039394769+00:00 stderr F I1213 00:17:20.039357       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-12-13T00:17:20.180457549+00:00 stderr F I1213 00:17:20.180348       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-12-13T00:17:20.180457549+00:00 stderr F I1213 00:17:20.180393       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-12-13T00:17:20.380033906+00:00 stderr F I1213 00:17:20.379794       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-12-13T00:17:20.380033906+00:00 stderr F I1213 00:17:20.379845       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-12-13T00:17:20.580338702+00:00 stderr F I1213 00:17:20.580245       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-12-13T00:17:20.580338702+00:00 stderr F I1213 00:17:20.580296       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding)
openshift-network-diagnostics/network-diagnostics
2025-12-13T00:17:20.779103716+00:00 stderr F I1213 00:17:20.779049       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-12-13T00:17:20.779250630+00:00 stderr F I1213 00:17:20.779228       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-12-13T00:17:20.979299449+00:00 stderr F I1213 00:17:20.979178       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-12-13T00:17:20.979299449+00:00 stderr F I1213 00:17:20.979233       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-12-13T00:17:21.179329518+00:00 stderr F I1213 00:17:21.179251       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-12-13T00:17:21.179329518+00:00 stderr F I1213 00:17:21.179290       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-12-13T00:17:21.377887867+00:00 stderr F I1213 00:17:21.377800       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-12-13T00:17:21.377887867+00:00 stderr F I1213 00:17:21.377871       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-12-13T00:17:21.583502354+00:00 stderr F I1213 00:17:21.583442       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-12-13T00:17:21.583540565+00:00 stderr F I1213 00:17:21.583492       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-12-13T00:17:21.783554304+00:00 stderr F I1213 00:17:21.782652       1 log.go:245] Apply / Create of (/v1, Kind=Service)
openshift-network-diagnostics/network-check-source was successful
2025-12-13T00:17:21.783554304+00:00 stderr F I1213 00:17:21.782703       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-12-13T00:17:21.979647617+00:00 stderr F I1213 00:17:21.979579       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-12-13T00:17:21.979647617+00:00 stderr F I1213 00:17:21.979624       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-12-13T00:17:22.178101673+00:00 stderr F I1213 00:17:22.178029       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-12-13T00:17:22.178101673+00:00 stderr F I1213 00:17:22.178079       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-12-13T00:17:22.380225168+00:00 stderr F I1213 00:17:22.380166       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-12-13T00:17:22.380275799+00:00 stderr F I1213 00:17:22.380230       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-12-13T00:17:22.589153512+00:00 stderr F I1213 00:17:22.589070       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-12-13T00:17:22.589205644+00:00 stderr F I1213 00:17:22.589152       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-12-13T00:17:22.778560569+00:00 stderr F I1213 00:17:22.778518       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-12-13T00:17:22.778591920+00:00 stderr
F I1213 00:17:22.778574       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-12-13T00:17:22.977725354+00:00 stderr F I1213 00:17:22.977637       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-12-13T00:17:22.977725354+00:00 stderr F I1213 00:17:22.977682       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-12-13T00:17:23.179537830+00:00 stderr F I1213 00:17:23.179452       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-12-13T00:17:23.179601142+00:00 stderr F I1213 00:17:23.179532       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-12-13T00:17:23.381416167+00:00 stderr F I1213 00:17:23.381337       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-12-13T00:17:23.381416167+00:00 stderr F I1213 00:17:23.381399       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-12-13T00:17:23.579254758+00:00 stderr F I1213 00:17:23.579123       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-12-13T00:17:23.579254758+00:00 stderr F I1213 00:17:23.579183       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-12-13T00:17:23.779209384+00:00 stderr F I1213 00:17:23.778695       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-12-13T00:17:23.779209384+00:00 stderr F I1213 00:17:23.779185       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole)
/network-node-identity
2025-12-13T00:17:23.980805924+00:00 stderr F I1213 00:17:23.980658       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-12-13T00:17:23.980805924+00:00 stderr F I1213 00:17:23.980723       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-12-13T00:17:24.179898517+00:00 stderr F I1213 00:17:24.179833       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-12-13T00:17:24.179898517+00:00 stderr F I1213 00:17:24.179887       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-12-13T00:17:24.377958154+00:00 stderr F I1213 00:17:24.377874       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-12-13T00:17:24.377958154+00:00 stderr F I1213 00:17:24.377915       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-12-13T00:17:24.579038659+00:00 stderr F I1213 00:17:24.578921       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-12-13T00:17:24.579083360+00:00 stderr F I1213 00:17:24.579044       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-12-13T00:17:24.781897564+00:00 stderr F I1213 00:17:24.781817       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-12-13T00:17:24.781897564+00:00 stderr F I1213 00:17:24.781871       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI)
openshift-network-node-identity/network-node-identity
2025-12-13T00:17:24.978247054+00:00 stderr F I1213 00:17:24.978138       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-12-13T00:17:24.978247054+00:00 stderr F I1213 00:17:24.978186       1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-12-13T00:17:25.179951137+00:00 stderr F I1213 00:17:25.179854       1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-12-13T00:17:25.179951137+00:00 stderr F I1213 00:17:25.179906       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-12-13T00:17:25.382857892+00:00 stderr F I1213 00:17:25.382781       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-12-13T00:17:25.382857892+00:00 stderr F I1213 00:17:25.382847       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-12-13T00:17:25.579160191+00:00 stderr F I1213 00:17:25.579085       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-12-13T00:17:25.579160191+00:00 stderr F I1213 00:17:25.579126       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-12-13T00:17:25.779707714+00:00 stderr F I1213 00:17:25.779628       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-12-13T00:17:25.779707714+00:00 stderr F I1213 00:17:25.779679       1 log.go:245] reconciling (/v1, Kind=ServiceAccount)
openshift-network-operator/iptables-alerter
2025-12-13T00:17:25.977689098+00:00 stderr F I1213 00:17:25.977604       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-12-13T00:17:25.977689098+00:00 stderr F I1213 00:17:25.977649       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-12-13T00:17:26.178095566+00:00 stderr F I1213 00:17:26.178012       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-12-13T00:17:26.178095566+00:00 stderr F I1213 00:17:26.178069       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-12-13T00:17:26.379177512+00:00 stderr F I1213 00:17:26.379101       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-12-13T00:17:26.379177512+00:00 stderr F I1213 00:17:26.379168       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-12-13T00:17:26.581078801+00:00 stderr F I1213 00:17:26.581005       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-12-13T00:17:26.590947653+00:00 stderr F I1213 00:17:26.590879       1 log.go:245] Operconfig Controller complete
2025-12-13T00:19:37.562090031+00:00 stderr F I1213 00:19:37.561517       1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.561487075 +0000 UTC))"
2025-12-13T00:19:37.562090031+00:00 stderr F I1213 00:19:37.561551       1 tlsconfig.go:178] "Loaded client CA" index=1
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.561538246 +0000 UTC))"
2025-12-13T00:19:37.562090031+00:00 stderr F I1213 00:19:37.561569       1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.561556046 +0000 UTC))"
2025-12-13T00:19:37.562090031+00:00 stderr F I1213 00:19:37.561585       1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.561573147 +0000 UTC))"
2025-12-13T00:19:37.562090031+00:00 stderr F I1213 00:19:37.561601       1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.561590467 +0000 UTC))"
2025-12-13T00:19:37.562090031+00:00 stderr F I1213 00:19:37.561616       1 tlsconfig.go:178] "Loaded
client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.561605368 +0000 UTC))"
2025-12-13T00:19:37.562090031+00:00 stderr F I1213 00:19:37.561631       1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.561620018 +0000 UTC))"
2025-12-13T00:19:37.562090031+00:00 stderr F I1213 00:19:37.561647       1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.561635019 +0000 UTC))"
2025-12-13T00:19:37.562090031+00:00 stderr F I1213 00:19:37.561663       1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.561650809 +0000 UTC))"
2025-12-13T00:19:37.562090031+00:00 stderr F I1213 00:19:37.561688       1
tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.56167863 +0000 UTC))"
2025-12-13T00:19:37.562090031+00:00 stderr F I1213 00:19:37.561704       1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.56169306 +0000 UTC))"
2025-12-13T00:19:37.562090031+00:00 stderr F I1213 00:19:37.561748       1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.561708051 +0000 UTC))"
2025-12-13T00:19:37.562175173+00:00 stderr F I1213 00:19:37.562082       1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to
2027-08-13 20:00:42 +0000 UTC (now=2025-12-13 00:19:37.56205691 +0000 UTC))"
2025-12-13T00:19:37.568022284+00:00 stderr F I1213 00:19:37.562369       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584664\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584664\" (2025-12-12 23:11:03 +0000 UTC to 2026-12-12 23:11:03 +0000 UTC (now=2025-12-13 00:19:37.562354388 +0000 UTC))"
2025-12-13T00:19:37.601412375+00:00 stderr F I1213 00:19:37.600264       1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-12-13T00:19:47.012188483+00:00 stderr F I1213 00:19:47.012072       1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-12-13T00:19:47.211456618+00:00 stderr F I1213 00:19:47.211383       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-12-13T00:19:47.213320320+00:00 stderr F I1213 00:19:47.213276       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-12-13T00:19:47.214785720+00:00 stderr F I1213 00:19:47.214747       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-12-13T00:19:47.214804771+00:00 stderr F I1213 00:19:47.214768       1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc004575280 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-12-13T00:19:47.218310607+00:00 stderr F I1213 00:19:47.218268       1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-12-13T00:19:47.218310607+00:00 stderr F I1213 00:19:47.218282
1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-12-13T00:19:47.218310607+00:00 stderr F I1213 00:19:47.218289       1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-12-13T00:19:47.220340824+00:00 stderr F I1213 00:19:47.220292       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-12-13T00:19:47.220340824+00:00 stderr F I1213 00:19:47.220314       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-12-13T00:19:47.220340824+00:00 stderr F I1213 00:19:47.220319       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-12-13T00:19:47.220340824+00:00 stderr F I1213 00:19:47.220325       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-12-13T00:19:47.220373155+00:00 stderr F I1213 00:19:47.220357       1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-12-13T00:19:47.228638462+00:00 stderr F I1213 00:19:47.228580       1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-12-13T00:19:47.239212513+00:00 stderr F I1213 00:19:47.239155       1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-12-13T00:19:47.239337647+00:00 stderr F I1213 00:19:47.239294       1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-12-13T00:19:47.239337647+00:00 stderr F I1213 00:19:47.239324       1 log.go:245] Starting render phase
2025-12-13T00:19:47.248204072+00:00 stderr F I1213 00:19:47.248136       1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined.
Using: 9107
2025-12-13T00:19:47.273845449+00:00 stderr F I1213 00:19:47.273765       1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-12-13T00:19:47.273845449+00:00 stderr F I1213 00:19:47.273790       1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-12-13T00:19:47.273845449+00:00 stderr F I1213 00:19:47.273814       1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-12-13T00:19:47.273845449+00:00 stderr F I1213 00:19:47.273839       1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-12-13T00:19:47.284361448+00:00 stderr F I1213 00:19:47.284292       1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-12-13T00:19:47.284361448+00:00 stderr F I1213 00:19:47.284315       1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-12-13T00:19:47.291663570+00:00 stderr F I1213 00:19:47.291582       1 log.go:245] Render phase done, rendered 112 objects
2025-12-13T00:19:47.305013518+00:00 stderr F I1213 00:19:47.304901       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-12-13T00:19:47.308695029+00:00 stderr F I1213 00:19:47.308633       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-12-13T00:19:47.308725040+00:00 stderr F I1213 00:19:47.308704       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-12-13T00:19:47.313983925+00:00 stderr F I1213 00:19:47.313923       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-12-13T00:19:47.313983925+00:00 stderr F I1213 00:19:47.313967 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-12-13T00:19:47.321027370+00:00 stderr F I1213 00:19:47.320970 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-12-13T00:19:47.321027370+00:00 stderr F I1213 00:19:47.321000 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-12-13T00:19:47.326205252+00:00 stderr F I1213 00:19:47.326135 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-12-13T00:19:47.326205252+00:00 stderr F I1213 00:19:47.326173 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-12-13T00:19:47.330649735+00:00 stderr F I1213 00:19:47.330577 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-12-13T00:19:47.330649735+00:00 stderr F I1213 00:19:47.330601 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-12-13T00:19:47.335250912+00:00 stderr F I1213 00:19:47.335183 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-12-13T00:19:47.335250912+00:00 stderr F I1213 00:19:47.335216 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-12-13T00:19:47.339283613+00:00 stderr F I1213 00:19:47.339243 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-12-13T00:19:47.339283613+00:00 stderr F I1213 00:19:47.339277 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-12-13T00:19:47.344210129+00:00 stderr F I1213 00:19:47.344182 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-12-13T00:19:47.344233150+00:00 stderr F I1213 00:19:47.344206 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-12-13T00:19:47.348285451+00:00 stderr F I1213 00:19:47.348245 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-12-13T00:19:47.348285451+00:00 stderr F I1213 00:19:47.348274 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-12-13T00:19:47.436503914+00:00 stderr F I1213 00:19:47.436401 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-12-13T00:19:47.436503914+00:00 stderr F I1213 00:19:47.436447 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-12-13T00:19:47.635610394+00:00 stderr F I1213 00:19:47.635481 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-12-13T00:19:47.635610394+00:00 stderr F I1213 00:19:47.635523 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-12-13T00:19:47.837929063+00:00 stderr F I1213 00:19:47.837809 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-12-13T00:19:47.837929063+00:00 stderr F I1213 00:19:47.837890 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-12-13T00:19:48.037168747+00:00 stderr F I1213 00:19:48.037114 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-12-13T00:19:48.037280500+00:00 stderr F I1213 00:19:48.037265 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-12-13T00:19:48.235214728+00:00 stderr F I1213 00:19:48.235165 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-12-13T00:19:48.235328091+00:00 stderr F I1213 00:19:48.235312 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-12-13T00:19:48.437762573+00:00 stderr F I1213 00:19:48.437639 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-12-13T00:19:48.437762573+00:00 stderr F I1213 00:19:48.437711 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-12-13T00:19:48.637677206+00:00 stderr F I1213 00:19:48.637598 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-12-13T00:19:48.637807400+00:00 stderr F I1213 00:19:48.637795 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-12-13T00:19:48.835655085+00:00 stderr F I1213 00:19:48.835598 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-12-13T00:19:48.835655085+00:00 stderr F I1213 00:19:48.835640 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-12-13T00:19:49.036803831+00:00 stderr F I1213 00:19:49.036705 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-12-13T00:19:49.036912344+00:00 stderr F I1213 00:19:49.036884 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-12-13T00:19:49.235673135+00:00 stderr F I1213 00:19:49.235608 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-12-13T00:19:49.235673135+00:00 stderr F I1213 00:19:49.235652 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-12-13T00:19:49.435541876+00:00 stderr F I1213 00:19:49.435486 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-12-13T00:19:49.435541876+00:00 stderr F I1213 00:19:49.435530 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-12-13T00:19:49.636974980+00:00 stderr F I1213 00:19:49.636890 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-12-13T00:19:49.636974980+00:00 stderr F I1213 00:19:49.636935 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-12-13T00:19:49.842935500+00:00 stderr F I1213 00:19:49.842865 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-12-13T00:19:49.842935500+00:00 stderr F I1213 00:19:49.842924 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-12-13T00:19:50.052206500+00:00 stderr F I1213 00:19:50.052094 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-12-13T00:19:50.052206500+00:00 stderr F I1213 00:19:50.052141 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-12-13T00:19:50.236925834+00:00 stderr F I1213 00:19:50.236805 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-12-13T00:19:50.236925834+00:00 stderr F I1213 00:19:50.236851 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-12-13T00:19:50.436220440+00:00 stderr F I1213 00:19:50.436133 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-12-13T00:19:50.436220440+00:00 stderr F I1213 00:19:50.436192 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-12-13T00:19:50.636385439+00:00 stderr F I1213 00:19:50.636279 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-12-13T00:19:50.636385439+00:00 stderr F I1213 00:19:50.636318 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-12-13T00:19:50.840812806+00:00 stderr F I1213 00:19:50.840746 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-12-13T00:19:50.840812806+00:00 stderr F I1213 00:19:50.840803 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-12-13T00:19:51.037411877+00:00 stderr F I1213 00:19:51.037332 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-12-13T00:19:51.037411877+00:00 stderr F I1213 00:19:51.037387 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-12-13T00:19:51.240579239+00:00 stderr F I1213 00:19:51.240480 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-12-13T00:19:51.240579239+00:00 stderr F I1213 00:19:51.240560 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-12-13T00:19:51.439855254+00:00 stderr F I1213 00:19:51.439790 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-12-13T00:19:51.439855254+00:00 stderr F I1213 00:19:51.439832 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-12-13T00:19:51.635674194+00:00 stderr F I1213 00:19:51.635619 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-12-13T00:19:51.635674194+00:00 stderr F I1213 00:19:51.635659 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller
2025-12-13T00:19:51.837643453+00:00 stderr F I1213 00:19:51.837584 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful
2025-12-13T00:19:51.837643453+00:00 stderr F I1213 00:19:51.837631 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac
2025-12-13T00:19:52.036624740+00:00 stderr F I1213 00:19:52.036109 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful
2025-12-13T00:19:52.036624740+00:00 stderr F I1213 00:19:52.036167 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook
2025-12-13T00:19:52.238479706+00:00 stderr F I1213 00:19:52.238411 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful
2025-12-13T00:19:52.238479706+00:00 stderr F I1213 00:19:52.238461 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook
2025-12-13T00:19:52.436061963+00:00 stderr F I1213 00:19:52.436006 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful
2025-12-13T00:19:52.436061963+00:00 stderr F I1213 00:19:52.436048 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io
2025-12-13T00:19:52.638018613+00:00 stderr F I1213 00:19:52.637923 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful
2025-12-13T00:19:52.638099405+00:00 stderr F I1213 00:19:52.638018 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller
2025-12-13T00:19:52.840197158+00:00 stderr F I1213 00:19:52.840088 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful
2025-12-13T00:19:52.840197158+00:00 stderr F I1213 00:19:52.840139 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller
2025-12-13T00:19:53.039251167+00:00 stderr F I1213 00:19:53.038591 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful
2025-12-13T00:19:53.039251167+00:00 stderr F I1213 00:19:53.038648 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-12-13T00:19:53.236117225+00:00 stderr F I1213 00:19:53.236052 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful
2025-12-13T00:19:53.236117225+00:00 stderr F I1213 00:19:53.236096 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s
2025-12-13T00:19:53.436524702+00:00 stderr F I1213 00:19:53.436454 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-12-13T00:19:53.436524702+00:00 stderr F I1213 00:19:53.436499 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-12-13T00:19:53.638035627+00:00 stderr F I1213 00:19:53.637938 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-12-13T00:19:53.638069429+00:00 stderr F I1213 00:19:53.638049 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-12-13T00:19:53.836927812+00:00 stderr F I1213 00:19:53.836869 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-12-13T00:19:53.836927812+00:00 stderr F I1213 00:19:53.836916 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-12-13T00:19:54.053175025+00:00 stderr F I1213 00:19:54.052248 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-12-13T00:19:54.053175025+00:00 stderr F I1213 00:19:54.052295 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-12-13T00:19:54.253652293+00:00 stderr F I1213 00:19:54.253551 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-12-13T00:19:54.253721275+00:00 stderr F I1213 00:19:54.253647 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-12-13T00:19:54.441204035+00:00 stderr F I1213 00:19:54.441130 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-12-13T00:19:54.441204035+00:00 stderr F I1213 00:19:54.441183 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-12-13T00:19:54.648577713+00:00 stderr F I1213 00:19:54.648495 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-12-13T00:19:54.648577713+00:00 stderr F I1213 00:19:54.648554 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-12-13T00:19:54.845002189+00:00 stderr F I1213 00:19:54.844173 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-12-13T00:19:54.845002189+00:00 stderr F I1213 00:19:54.844233 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-12-13T00:19:55.075400533+00:00 stderr F I1213 00:19:55.075345 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-12-13T00:19:55.075400533+00:00 stderr F I1213 00:19:55.075390 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-12-13T00:19:55.277458304+00:00 stderr F I1213 00:19:55.277408 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-12-13T00:19:55.277614889+00:00 stderr F I1213 00:19:55.277586 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-12-13T00:19:55.435906343+00:00 stderr F I1213 00:19:55.435855 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-12-13T00:19:55.436051597+00:00 stderr F I1213 00:19:55.436034 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-12-13T00:19:55.636509765+00:00 stderr F I1213 00:19:55.636437 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-12-13T00:19:55.636509765+00:00 stderr F I1213 00:19:55.636494 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-12-13T00:19:55.835306257+00:00 stderr F I1213 00:19:55.835262 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-12-13T00:19:55.835393889+00:00 stderr F I1213 00:19:55.835383 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-12-13T00:19:56.037813280+00:00 stderr F I1213 00:19:56.037740 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-12-13T00:19:56.037975705+00:00 stderr F I1213 00:19:56.037938 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-12-13T00:19:56.236684844+00:00 stderr F I1213 00:19:56.236623 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-12-13T00:19:56.236824548+00:00 stderr F I1213 00:19:56.236800 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-12-13T00:19:56.436130784+00:00 stderr F I1213 00:19:56.436047 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-12-13T00:19:56.436130784+00:00 stderr F I1213 00:19:56.436117 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-12-13T00:19:56.637179078+00:00 stderr F I1213 00:19:56.637115 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-12-13T00:19:56.637179078+00:00 stderr F I1213 00:19:56.637158 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-12-13T00:19:56.837253165+00:00 stderr F I1213 00:19:56.837213 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-12-13T00:19:56.837329677+00:00 stderr F I1213 00:19:56.837319 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-12-13T00:19:57.037238539+00:00 stderr F I1213 00:19:57.037018 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-12-13T00:19:57.037238539+00:00 stderr F I1213 00:19:57.037066 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-12-13T00:19:57.237892703+00:00 stderr F I1213 00:19:57.237801 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-12-13T00:19:57.237892703+00:00 stderr F I1213 00:19:57.237874 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-12-13T00:19:57.443610925+00:00 stderr F I1213 00:19:57.443563 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-12-13T00:19:57.443733969+00:00 stderr F I1213 00:19:57.443719 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-12-13T00:19:57.635721324+00:00 stderr F I1213 00:19:57.635681 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-12-13T00:19:57.635810987+00:00 stderr F I1213 00:19:57.635798 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-12-13T00:19:57.838257459+00:00 stderr F I1213 00:19:57.838208 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-12-13T00:19:57.838369032+00:00 stderr F I1213 00:19:57.838354 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-12-13T00:19:58.035484017+00:00 stderr F I1213 00:19:58.035423 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-12-13T00:19:58.035484017+00:00 stderr F I1213 00:19:58.035476 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-12-13T00:19:58.236141921+00:00 stderr F I1213 00:19:58.236087 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-12-13T00:19:58.236141921+00:00 stderr F I1213 00:19:58.236134 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-12-13T00:19:58.436660089+00:00 stderr F I1213 00:19:58.436618 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-12-13T00:19:58.436742862+00:00 stderr F I1213 00:19:58.436731 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-12-13T00:19:58.635925435+00:00 stderr F I1213 00:19:58.635849 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-12-13T00:19:58.636057649+00:00 stderr F I1213 00:19:58.636042 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-12-13T00:19:58.838673586+00:00 stderr F I1213 00:19:58.838631 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-12-13T00:19:58.838774809+00:00 stderr F I1213 00:19:58.838762 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-12-13T00:19:59.042852246+00:00 stderr F I1213 00:19:59.042807 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-12-13T00:19:59.042969369+00:00 stderr F I1213 00:19:59.042938 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-12-13T00:19:59.239709614+00:00 stderr F I1213 00:19:59.239667 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-12-13T00:19:59.239822577+00:00 stderr F I1213 00:19:59.239811 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-12-13T00:19:59.435917714+00:00 stderr F I1213 00:19:59.435879 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-12-13T00:19:59.436020717+00:00 stderr F I1213 00:19:59.436009 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-12-13T00:19:59.637179104+00:00 stderr F I1213 00:19:59.637131 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-12-13T00:19:59.637291377+00:00 stderr F I1213 00:19:59.637276 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-12-13T00:19:59.838508116+00:00 stderr F I1213 00:19:59.838458 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-12-13T00:19:59.838720912+00:00 stderr F I1213 00:19:59.838703 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-12-13T00:20:00.037583046+00:00 stderr F I1213 00:20:00.037514 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-12-13T00:20:00.037685478+00:00 stderr F I1213 00:20:00.037670 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-12-13T00:20:00.240852790+00:00 stderr F I1213 00:20:00.240797 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-12-13T00:20:00.241093418+00:00 stderr F I1213 00:20:00.241066 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-12-13T00:20:00.437855053+00:00 stderr F I1213 00:20:00.437797 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-12-13T00:20:00.438036338+00:00 stderr F I1213 00:20:00.438010 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-12-13T00:20:00.635867103+00:00 stderr F I1213 00:20:00.635820 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-12-13T00:20:00.635986077+00:00 stderr F I1213 00:20:00.635971 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-12-13T00:20:00.837669628+00:00 stderr F I1213 00:20:00.837603 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-12-13T00:20:00.837669628+00:00 stderr F I1213 00:20:00.837652 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-12-13T00:20:01.036910252+00:00 stderr F I1213 00:20:01.036847 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-12-13T00:20:01.036910252+00:00 stderr F I1213 00:20:01.036903 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-12-13T00:20:01.247042256+00:00 stderr F I1213 00:20:01.246974 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-12-13T00:20:01.247097018+00:00 stderr F I1213 00:20:01.247039 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-12-13T00:20:01.455186885+00:00 stderr F I1213 00:20:01.454712 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-12-13T00:20:01.455186885+00:00 stderr F I1213 00:20:01.455148 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-12-13T00:20:01.456038148+00:00 stderr F I1213 00:20:01.455538 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-12-13T00:20:01.456038148+00:00 stderr F I1213 00:20:01.455592 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-12-13T00:20:01.491375943+00:00 stderr F I1213 00:20:01.491310 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-12-13T00:20:01.491375943+00:00 stderr F I1213 00:20:01.491338 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-12-13T00:20:01.502150390+00:00 stderr F I1213 00:20:01.500512 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-12-13T00:20:01.502150390+00:00 stderr F I1213 00:20:01.501069 1 log.go:245] Network operator config updated with conditions:
2025-12-13T00:20:01.502150390+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:01.502150390+00:00 stderr F status: "False" 2025-12-13T00:20:01.502150390+00:00 stderr F type: ManagementStateDegraded 2025-12-13T00:20:01.502150390+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-12-13T00:20:01.502150390+00:00 stderr F status: "False" 2025-12-13T00:20:01.502150390+00:00 stderr F type: Degraded 2025-12-13T00:20:01.502150390+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-12-13T00:20:01.502150390+00:00 stderr F status: "True" 2025-12-13T00:20:01.502150390+00:00 stderr F type: Upgradeable 2025-12-13T00:20:01.502150390+00:00 stderr F - lastTransitionTime: "2025-12-13T00:20:01Z" 2025-12-13T00:20:01.502150390+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is being processed 2025-12-13T00:20:01.502150390+00:00 stderr F (generation 3, observed generation 2) 2025-12-13T00:20:01.502150390+00:00 stderr F reason: Deploying 2025-12-13T00:20:01.502150390+00:00 stderr F status: "True" 2025-12-13T00:20:01.502150390+00:00 stderr F type: Progressing 2025-12-13T00:20:01.502150390+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-12-13T00:20:01.502150390+00:00 stderr F status: "True" 2025-12-13T00:20:01.502150390+00:00 stderr F type: Available 2025-12-13T00:20:01.526199673+00:00 stderr F I1213 00:20:01.522774 1 log.go:245] ClusterOperator config status updated with conditions: 2025-12-13T00:20:01.526199673+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-12-13T00:20:01.526199673+00:00 stderr F status: "False" 2025-12-13T00:20:01.526199673+00:00 stderr F type: Degraded 2025-12-13T00:20:01.526199673+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-12-13T00:20:01.526199673+00:00 stderr F status: "True" 2025-12-13T00:20:01.526199673+00:00 stderr F type: Upgradeable 2025-12-13T00:20:01.526199673+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-12-13T00:20:01.526199673+00:00 stderr F status: "False" 
2025-12-13T00:20:01.526199673+00:00 stderr F type: ManagementStateDegraded
2025-12-13T00:20:01.526199673+00:00 stderr F - lastTransitionTime: "2025-12-13T00:20:01Z"
2025-12-13T00:20:01.526199673+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is being processed
2025-12-13T00:20:01.526199673+00:00 stderr F (generation 3, observed generation 2)
2025-12-13T00:20:01.526199673+00:00 stderr F reason: Deploying
2025-12-13T00:20:01.526199673+00:00 stderr F status: "True"
2025-12-13T00:20:01.526199673+00:00 stderr F type: Progressing
2025-12-13T00:20:01.526199673+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-12-13T00:20:01.526199673+00:00 stderr F status: "True"
2025-12-13T00:20:01.526199673+00:00 stderr F type: Available
2025-12-13T00:20:01.591792772+00:00 stderr F I1213 00:20:01.591548       1 log.go:245] Network operator config updated with conditions:
2025-12-13T00:20:01.591792772+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:01.591792772+00:00 stderr F status: "False"
2025-12-13T00:20:01.591792772+00:00 stderr F type: ManagementStateDegraded
2025-12-13T00:20:01.591792772+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-12-13T00:20:01.591792772+00:00 stderr F status: "False"
2025-12-13T00:20:01.591792772+00:00 stderr F type: Degraded
2025-12-13T00:20:01.591792772+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:01.591792772+00:00 stderr F status: "True"
2025-12-13T00:20:01.591792772+00:00 stderr F type: Upgradeable
2025-12-13T00:20:01.591792772+00:00 stderr F - lastTransitionTime: "2025-12-13T00:20:01Z"
2025-12-13T00:20:01.591792772+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is rolling out
2025-12-13T00:20:01.591792772+00:00 stderr F (0 out of 1 updated)
2025-12-13T00:20:01.591792772+00:00 stderr F reason: Deploying
2025-12-13T00:20:01.591792772+00:00 stderr F status: "True"
2025-12-13T00:20:01.591792772+00:00 stderr F type: Progressing
2025-12-13T00:20:01.591792772+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-12-13T00:20:01.591792772+00:00 stderr F status: "True"
2025-12-13T00:20:01.591792772+00:00 stderr F type: Available
2025-12-13T00:20:01.592514592+00:00 stderr F I1213 00:20:01.591959       1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-12-13T00:20:01.635979570+00:00 stderr F I1213 00:20:01.635913       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-12-13T00:20:01.636086904+00:00 stderr F I1213 00:20:01.636062       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-12-13T00:20:01.679973303+00:00 stderr F I1213 00:20:01.679902       1 log.go:245] ClusterOperator config status updated with conditions:
2025-12-13T00:20:01.679973303+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-12-13T00:20:01.679973303+00:00 stderr F status: "False"
2025-12-13T00:20:01.679973303+00:00 stderr F type: Degraded
2025-12-13T00:20:01.679973303+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:01.679973303+00:00 stderr F status: "True"
2025-12-13T00:20:01.679973303+00:00 stderr F type: Upgradeable
2025-12-13T00:20:01.679973303+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:01.679973303+00:00 stderr F status: "False"
2025-12-13T00:20:01.679973303+00:00 stderr F type: ManagementStateDegraded
2025-12-13T00:20:01.679973303+00:00 stderr F - lastTransitionTime: "2025-12-13T00:20:01Z"
2025-12-13T00:20:01.679973303+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" update is rolling out
2025-12-13T00:20:01.679973303+00:00 stderr F (0 out of 1 updated)
2025-12-13T00:20:01.679973303+00:00 stderr F reason: Deploying
2025-12-13T00:20:01.679973303+00:00 stderr F status: "True"
2025-12-13T00:20:01.679973303+00:00 stderr F type: Progressing
2025-12-13T00:20:01.679973303+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-12-13T00:20:01.679973303+00:00 stderr F status: "True"
2025-12-13T00:20:01.679973303+00:00 stderr F type: Available
2025-12-13T00:20:01.837096526+00:00 stderr F I1213 00:20:01.837032       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-12-13T00:20:01.837155707+00:00 stderr F I1213 00:20:01.837106       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-12-13T00:20:02.009067138+00:00 stderr F I1213 00:20:02.006496       1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-12-13T00:20:02.009067138+00:00 stderr F I1213 00:20:02.006525       1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-12-13T00:20:02.036889756+00:00 stderr F I1213 00:20:02.036800       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-12-13T00:20:02.036889756+00:00 stderr F I1213 00:20:02.036852       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-12-13T00:20:02.236665924+00:00 stderr F I1213 00:20:02.236589       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-12-13T00:20:02.236665924+00:00 stderr F I1213 00:20:02.236646       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-12-13T00:20:02.265178010+00:00 stderr F I1213 00:20:02.265102       1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status
2025-12-13T00:20:02.265178010+00:00 stderr F I1213 00:20:02.265133       1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status
2025-12-13T00:20:02.438013166+00:00 stderr F I1213 00:20:02.437895       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-12-13T00:20:02.438013166+00:00 stderr F I1213 00:20:02.437960       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-12-13T00:20:02.486414581+00:00 stderr F I1213 00:20:02.486257       1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-12-13T00:20:02.486680928+00:00 stderr F I1213 00:20:02.486644       1 log.go:245] Network operator config updated with conditions:
2025-12-13T00:20:02.486680928+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:02.486680928+00:00 stderr F status: "False"
2025-12-13T00:20:02.486680928+00:00 stderr F type: ManagementStateDegraded
2025-12-13T00:20:02.486680928+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-12-13T00:20:02.486680928+00:00 stderr F status: "False"
2025-12-13T00:20:02.486680928+00:00 stderr F type: Degraded
2025-12-13T00:20:02.486680928+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:02.486680928+00:00 stderr F status: "True"
2025-12-13T00:20:02.486680928+00:00 stderr F type: Upgradeable
2025-12-13T00:20:02.486680928+00:00 stderr F - lastTransitionTime: "2025-12-13T00:20:01Z"
2025-12-13T00:20:02.486680928+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting
2025-12-13T00:20:02.486680928+00:00 stderr F 1 nodes)
2025-12-13T00:20:02.486680928+00:00 stderr F reason: Deploying
2025-12-13T00:20:02.486680928+00:00 stderr F status: "True"
2025-12-13T00:20:02.486680928+00:00 stderr F type: Progressing
2025-12-13T00:20:02.486680928+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-12-13T00:20:02.486680928+00:00 stderr F status: "True"
2025-12-13T00:20:02.486680928+00:00 stderr F type: Available
2025-12-13T00:20:02.637534818+00:00 stderr F I1213 00:20:02.637437       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-12-13T00:20:02.637534818+00:00 stderr F I1213 00:20:02.637505       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-12-13T00:20:02.836410642+00:00 stderr F I1213 00:20:02.836326       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-12-13T00:20:02.836410642+00:00 stderr F I1213 00:20:02.836370       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-12-13T00:20:02.868187718+00:00 stderr F I1213 00:20:02.866652       1 log.go:245] ClusterOperator config status updated with conditions:
2025-12-13T00:20:02.868187718+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-12-13T00:20:02.868187718+00:00 stderr F status: "False"
2025-12-13T00:20:02.868187718+00:00 stderr F type: Degraded
2025-12-13T00:20:02.868187718+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:02.868187718+00:00 stderr F status: "True"
2025-12-13T00:20:02.868187718+00:00 stderr F type: Upgradeable
2025-12-13T00:20:02.868187718+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:02.868187718+00:00 stderr F status: "False"
2025-12-13T00:20:02.868187718+00:00 stderr F type: ManagementStateDegraded
2025-12-13T00:20:02.868187718+00:00 stderr F - lastTransitionTime: "2025-12-13T00:20:01Z"
2025-12-13T00:20:02.868187718+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting
2025-12-13T00:20:02.868187718+00:00 stderr F 1 nodes)
2025-12-13T00:20:02.868187718+00:00 stderr F reason: Deploying
2025-12-13T00:20:02.868187718+00:00 stderr F status: "True"
2025-12-13T00:20:02.868187718+00:00 stderr F type: Progressing
2025-12-13T00:20:02.868187718+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-12-13T00:20:02.868187718+00:00 stderr F status: "True"
2025-12-13T00:20:02.868187718+00:00 stderr F type: Available
2025-12-13T00:20:03.040251173+00:00 stderr F I1213 00:20:03.040207       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-12-13T00:20:03.040345316+00:00 stderr F I1213 00:20:03.040332       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-12-13T00:20:03.238853250+00:00 stderr F I1213 00:20:03.238804       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-12-13T00:20:03.238964403+00:00 stderr F I1213 00:20:03.238949       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-12-13T00:20:03.439320397+00:00 stderr F I1213 00:20:03.439248       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-12-13T00:20:03.439320397+00:00 stderr F I1213 00:20:03.439298       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-12-13T00:20:03.635317992+00:00 stderr F I1213 00:20:03.635253       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-12-13T00:20:03.635317992+00:00 stderr F I1213 00:20:03.635293       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-12-13T00:20:03.687451289+00:00 stderr F I1213 00:20:03.687381       1 log.go:245] Network operator config updated with conditions:
2025-12-13T00:20:03.687451289+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:03.687451289+00:00 stderr F status: "False"
2025-12-13T00:20:03.687451289+00:00 stderr F type: ManagementStateDegraded
2025-12-13T00:20:03.687451289+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-12-13T00:20:03.687451289+00:00 stderr F status: "False"
2025-12-13T00:20:03.687451289+00:00 stderr F type: Degraded
2025-12-13T00:20:03.687451289+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:03.687451289+00:00 stderr F status: "True"
2025-12-13T00:20:03.687451289+00:00 stderr F type: Upgradeable
2025-12-13T00:20:03.687451289+00:00 stderr F - lastTransitionTime: "2025-12-13T00:20:01Z"
2025-12-13T00:20:03.687451289+00:00 stderr F message: |-
2025-12-13T00:20:03.687451289+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-12-13T00:20:03.687451289+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-12-13T00:20:03.687451289+00:00 stderr F reason: Deploying
2025-12-13T00:20:03.687451289+00:00 stderr F status: "True"
2025-12-13T00:20:03.687451289+00:00 stderr F type: Progressing
2025-12-13T00:20:03.687451289+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-12-13T00:20:03.687451289+00:00 stderr F status: "True"
2025-12-13T00:20:03.687451289+00:00 stderr F type: Available
2025-12-13T00:20:03.687899342+00:00 stderr F I1213 00:20:03.687855       1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-12-13T00:20:03.835828301+00:00 stderr F I1213 00:20:03.835760       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-12-13T00:20:03.835828301+00:00 stderr F I1213 00:20:03.835803       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-12-13T00:20:04.040109594+00:00 stderr F I1213 00:20:04.039887       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-12-13T00:20:04.040109594+00:00 stderr F I1213 00:20:04.039956       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-12-13T00:20:04.074069210+00:00 stderr F I1213 00:20:04.072233       1 log.go:245] ClusterOperator config status updated with conditions:
2025-12-13T00:20:04.074069210+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-12-13T00:20:04.074069210+00:00 stderr F status: "False"
2025-12-13T00:20:04.074069210+00:00 stderr F type: Degraded
2025-12-13T00:20:04.074069210+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:04.074069210+00:00 stderr F status: "True"
2025-12-13T00:20:04.074069210+00:00 stderr F type: Upgradeable
2025-12-13T00:20:04.074069210+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-12-13T00:20:04.074069210+00:00 stderr F status: "False"
2025-12-13T00:20:04.074069210+00:00 stderr F type: ManagementStateDegraded
2025-12-13T00:20:04.074069210+00:00 stderr F - lastTransitionTime: "2025-12-13T00:20:01Z"
2025-12-13T00:20:04.074069210+00:00 stderr F message: |-
2025-12-13T00:20:04.074069210+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-12-13T00:20:04.074069210+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-12-13T00:20:04.074069210+00:00 stderr F reason: Deploying
2025-12-13T00:20:04.074069210+00:00 stderr F status: "True"
2025-12-13T00:20:04.074069210+00:00 stderr F type: Progressing
2025-12-13T00:20:04.074069210+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-12-13T00:20:04.074069210+00:00 stderr F status: "True"
2025-12-13T00:20:04.074069210+00:00 stderr F type: Available
2025-12-13T00:20:04.237017053+00:00 stderr F I1213 00:20:04.236888       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-12-13T00:20:04.237017053+00:00 stderr F I1213 00:20:04.236960       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-12-13T00:20:04.436969826+00:00 stderr F I1213 00:20:04.436911       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-12-13T00:20:04.437001927+00:00 stderr F I1213 00:20:04.436968       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-12-13T00:20:04.637003543+00:00 stderr F I1213 00:20:04.636889       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-12-13T00:20:04.637050834+00:00 stderr F I1213 00:20:04.637004       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-12-13T00:20:04.837459480+00:00 stderr F I1213 00:20:04.837395       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-12-13T00:20:04.837459480+00:00 stderr F I1213 00:20:04.837444       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-12-13T00:20:05.037958739+00:00 stderr F I1213 00:20:05.037829       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-12-13T00:20:05.037958739+00:00 stderr F I1213 00:20:05.037873       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-12-13T00:20:05.235475855+00:00 stderr F I1213 00:20:05.235406       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-12-13T00:20:05.235475855+00:00 stderr F I1213 00:20:05.235445       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-12-13T00:20:05.436597921+00:00 stderr F I1213 00:20:05.436535       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-12-13T00:20:05.436597921+00:00 stderr F I1213 00:20:05.436579       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-12-13T00:20:05.637163032+00:00 stderr F I1213 00:20:05.636635       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-12-13T00:20:05.637163032+00:00 stderr F I1213 00:20:05.636686       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-12-13T00:20:05.836530079+00:00 stderr F I1213 00:20:05.836462       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-12-13T00:20:05.836530079+00:00 stderr F I1213 00:20:05.836513       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-12-13T00:20:06.035813334+00:00 stderr F I1213 00:20:06.035745       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-12-13T00:20:06.035813334+00:00 stderr F I1213 00:20:06.035801       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-12-13T00:20:06.238561635+00:00 stderr F I1213 00:20:06.238496       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-12-13T00:20:06.238561635+00:00 stderr F I1213 00:20:06.238544       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-12-13T00:20:06.436838102+00:00 stderr F I1213 00:20:06.436467       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-12-13T00:20:06.436838102+00:00 stderr F I1213 00:20:06.436825       1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-12-13T00:20:06.637012263+00:00 stderr F I1213 00:20:06.636463       1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-12-13T00:20:06.637012263+00:00 stderr F I1213 00:20:06.636515       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-12-13T00:20:06.846287943+00:00 stderr F I1213 00:20:06.846221       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-12-13T00:20:06.846334294+00:00 stderr F I1213 00:20:06.846298       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-12-13T00:20:07.037191458+00:00 stderr F I1213 00:20:07.037112       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-12-13T00:20:07.037191458+00:00 stderr F I1213 00:20:07.037167       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-12-13T00:20:07.236190185+00:00 stderr F I1213 00:20:07.236128       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-12-13T00:20:07.236190185+00:00 stderr F I1213 00:20:07.236170       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-12-13T00:20:07.435687396+00:00 stderr F I1213 00:20:07.435582       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-12-13T00:20:07.435687396+00:00 stderr F I1213 00:20:07.435627       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-12-13T00:20:07.638579040+00:00 stderr F I1213 00:20:07.638492       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-12-13T00:20:07.638579040+00:00 stderr F I1213 00:20:07.638533       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-12-13T00:20:07.838659338+00:00 stderr F I1213 00:20:07.838027       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-12-13T00:20:07.838659338+00:00 stderr F I1213 00:20:07.838077       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-12-13T00:20:08.046130869+00:00 stderr F I1213 00:20:08.046055       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-12-13T00:20:08.060169906+00:00 stderr F I1213 00:20:08.060113       1 log.go:245] Operconfig Controller complete
2025-12-13T00:20:08.060204467+00:00 stderr F I1213 00:20:08.060174       1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-12-13T00:20:08.255755479+00:00 stderr F I1213 00:20:08.255678       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-12-13T00:20:08.259421290+00:00 stderr F I1213 00:20:08.258039       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-12-13T00:20:08.260818878+00:00 stderr F I1213 00:20:08.260584       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-12-13T00:20:08.260818878+00:00 stderr F I1213 00:20:08.260600       1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc00432cc00 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-12-13T00:20:08.264302845+00:00 stderr F I1213 00:20:08.264254       1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-12-13T00:20:08.264302845+00:00 stderr F I1213 00:20:08.264276       1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-12-13T00:20:08.264302845+00:00 stderr F I1213 00:20:08.264284       1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-12-13T00:20:08.266162625+00:00 stderr F I1213 00:20:08.266126       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 3 -> 3
2025-12-13T00:20:08.266162625+00:00 stderr F I1213 00:20:08.266144       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 3 -> 3
2025-12-13T00:20:08.266162625+00:00 stderr F I1213 00:20:08.266153       1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=true
2025-12-13T00:20:08.275379630+00:00 stderr F I1213 00:20:08.275296       1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-12-13T00:20:08.289177830+00:00 stderr F I1213 00:20:08.289116       1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-12-13T00:20:08.289177830+00:00 stderr F I1213 00:20:08.289151       1 log.go:245] Starting render phase
2025-12-13T00:20:08.289587822+00:00 stderr F I1213 00:20:08.289557       1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-12-13T00:20:08.298433916+00:00 stderr F I1213 00:20:08.298260       1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107
2025-12-13T00:20:08.320205256+00:00 stderr F I1213 00:20:08.320140       1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack
2025-12-13T00:20:08.320205256+00:00 stderr F I1213 00:20:08.320159       1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true
2025-12-13T00:20:08.320205256+00:00 stderr F I1213 00:20:08.320178       1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required
2025-12-13T00:20:08.320205256+00:00 stderr F I1213 00:20:08.320198       1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true
2025-12-13T00:20:08.326485929+00:00 stderr F I1213 00:20:08.326430       1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1
2025-12-13T00:20:08.326485929+00:00 stderr F I1213 00:20:08.326446       1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete
2025-12-13T00:20:08.333576194+00:00 stderr F I1213 00:20:08.333520       1 log.go:245] Render phase done, rendered 112 objects
2025-12-13T00:20:08.344223148+00:00 stderr F I1213 00:20:08.344139       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster
2025-12-13T00:20:08.437651444+00:00 stderr F I1213 00:20:08.437558       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful
2025-12-13T00:20:08.437651444+00:00 stderr F I1213 00:20:08.437621       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io
2025-12-13T00:20:08.641004652+00:00 stderr F I1213 00:20:08.640876       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful
2025-12-13T00:20:08.641058403+00:00 stderr F I1213 00:20:08.641025       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io
2025-12-13T00:20:08.838481608+00:00 stderr F I1213 00:20:08.838418       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful
2025-12-13T00:20:08.838481608+00:00 stderr F I1213 00:20:08.838456       1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io
2025-12-13T00:20:09.037824674+00:00 stderr F I1213 00:20:09.037774       1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful
2025-12-13T00:20:09.037824674+00:00 stderr F I1213 00:20:09.037817       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus
2025-12-13T00:20:09.236201355+00:00 stderr F I1213 00:20:09.236113       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful
2025-12-13T00:20:09.236201355+00:00 stderr F I1213 00:20:09.236168       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus
2025-12-13T00:20:09.438432551+00:00 stderr F I1213 00:20:09.438370       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful
2025-12-13T00:20:09.438432551+00:00 stderr F I1213 00:20:09.438414       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools
2025-12-13T00:20:09.637276394+00:00 stderr F I1213 00:20:09.637233       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful
2025-12-13T00:20:09.637363086+00:00 stderr F I1213 00:20:09.637352       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus
2025-12-13T00:20:09.836385784+00:00 stderr F I1213 00:20:09.836326       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful
2025-12-13T00:20:09.836440926+00:00 stderr F I1213 00:20:09.836390       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient
2025-12-13T00:20:10.037275003+00:00 stderr F I1213 00:20:10.036749       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful
2025-12-13T00:20:10.037275003+00:00 stderr F I1213 00:20:10.036820       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group
2025-12-13T00:20:10.236556659+00:00 stderr F I1213 00:20:10.236489       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful
2025-12-13T00:20:10.236556659+00:00 stderr F I1213 00:20:10.236531       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools
2025-12-13T00:20:10.436602505+00:00 stderr F I1213 00:20:10.436546       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful
2025-12-13T00:20:10.436602505+00:00 stderr F I1213 00:20:10.436590       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools
2025-12-13T00:20:10.635842319+00:00 stderr F I1213 00:20:10.635781       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful
2025-12-13T00:20:10.635842319+00:00 stderr F I1213 00:20:10.635823       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers
2025-12-13T00:20:10.837875460+00:00 stderr F I1213 00:20:10.837085       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful
2025-12-13T00:20:10.837875460+00:00 stderr F I1213 00:20:10.837134       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts
2025-12-13T00:20:11.035924001+00:00 stderr F I1213 00:20:11.035860       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful
2025-12-13T00:20:11.035924001+00:00 stderr F I1213 00:20:11.035902       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts
2025-12-13T00:20:11.236484112+00:00 stderr F I1213 00:20:11.236422       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful
2025-12-13T00:20:11.236484112+00:00 stderr F I1213 00:20:11.236467       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni
2025-12-13T00:20:11.436664561+00:00 stderr F I1213 00:20:11.436593       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful
2025-12-13T00:20:11.436664561+00:00 stderr F I1213 00:20:11.436633       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni
2025-12-13T00:20:11.636625265+00:00 stderr F I1213 00:20:11.636540       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful
2025-12-13T00:20:11.636625265+00:00 stderr F I1213 00:20:11.636581       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project
2025-12-13T00:20:11.835714934+00:00 stderr F I1213 00:20:11.835659       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful
2025-12-13T00:20:11.835714934+00:00 stderr F I1213 00:20:11.835698       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist
2025-12-13T00:20:12.037188690+00:00 stderr F I1213 00:20:12.037132       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful
2025-12-13T00:20:12.037225741+00:00 stderr F I1213 00:20:12.037198       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources
2025-12-13T00:20:12.236637111+00:00 stderr F I1213 00:20:12.236561       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful
2025-12-13T00:20:12.236687502+00:00 stderr F I1213 00:20:12.236637       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config
2025-12-13T00:20:12.436082930+00:00 stderr F I1213 00:20:12.436028       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful
2025-12-13T00:20:12.436082930+00:00 stderr F I1213 00:20:12.436066       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus
2025-12-13T00:20:12.648394055+00:00 stderr F I1213 00:20:12.648319       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful
2025-12-13T00:20:12.648394055+00:00 stderr F I1213 00:20:12.648370       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins
2025-12-13T00:20:12.757074101+00:00 stderr F I1213 00:20:12.753112       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-12-13T00:20:12.758914712+00:00 stderr F I1213 00:20:12.758653       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:20:12.765174994+00:00 stderr F I1213 00:20:12.765138       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:20:12.773038221+00:00 stderr F I1213 00:20:12.772966       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:20:12.781667719+00:00 stderr F I1213 00:20:12.779535       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:20:12.785822224+00:00 stderr F I1213 00:20:12.785751       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-12-13T00:20:12.791911922+00:00 stderr F I1213 00:20:12.791847       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-12-13T00:20:12.797571028+00:00 stderr F I1213 00:20:12.797471       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:20:12.804473868+00:00 stderr F I1213 00:20:12.804426       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-12-13T00:20:12.846566519+00:00 stderr F I1213 00:20:12.846109       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful
2025-12-13T00:20:12.846566519+00:00 stderr F I1213 00:20:12.846150       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa
2025-12-13T00:20:13.036230038+00:00 stderr F I1213 00:20:13.036163       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful
2025-12-13T00:20:13.036230038+00:00 stderr F I1213 00:20:13.036200       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role
2025-12-13T00:20:13.236189062+00:00 stderr F I1213 00:20:13.236126       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful
2025-12-13T00:20:13.236189062+00:00 stderr F I1213 00:20:13.236163       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding
2025-12-13T00:20:13.437722479+00:00 stderr F I1213 00:20:13.437627       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful
2025-12-13T00:20:13.437722479+00:00 stderr F I1213 00:20:13.437700       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon
2025-12-13T00:20:13.644987024+00:00 stderr F I1213 00:20:13.644865       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful
2025-12-13T00:20:13.644987024+00:00 stderr F I1213 00:20:13.644911       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network
2025-12-13T00:20:13.840159326+00:00 stderr F I1213 00:20:13.840087       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful
2025-12-13T00:20:13.840159326+00:00 stderr F I1213 00:20:13.840136       1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service
2025-12-13T00:20:14.039509883+00:00 stderr F I1213 00:20:14.039397       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful
2025-12-13T00:20:14.039509883+00:00 stderr F I1213 00:20:14.039454       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s
2025-12-13T00:20:14.236295219+00:00 stderr F I1213 00:20:14.236198 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-12-13T00:20:14.236295219+00:00 stderr F I1213 00:20:14.236259 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-12-13T00:20:14.438683871+00:00 stderr F I1213 00:20:14.438611 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-12-13T00:20:14.438683871+00:00 stderr F I1213 00:20:14.438675 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-12-13T00:20:14.640382003+00:00 stderr F I1213 00:20:14.639740 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-12-13T00:20:14.640382003+00:00 stderr F I1213 00:20:14.640354 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-12-13T00:20:14.836062007+00:00 stderr F I1213 00:20:14.835994 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-12-13T00:20:14.836098408+00:00 stderr F I1213 00:20:14.836073 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-12-13T00:20:15.038292555+00:00 stderr F I1213 00:20:15.038205 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-12-13T00:20:15.038292555+00:00 stderr F I1213 00:20:15.038254 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-12-13T00:20:15.235749649+00:00 stderr F I1213 00:20:15.235586 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 
2025-12-13T00:20:15.235749649+00:00 stderr F I1213 00:20:15.235636 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-12-13T00:20:15.437794540+00:00 stderr F I1213 00:20:15.437680 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-12-13T00:20:15.437794540+00:00 stderr F I1213 00:20:15.437729 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-12-13T00:20:15.647265127+00:00 stderr F I1213 00:20:15.647130 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-12-13T00:20:15.647265127+00:00 stderr F I1213 00:20:15.647227 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-12-13T00:20:15.837162373+00:00 stderr F I1213 00:20:15.837074 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-12-13T00:20:15.837162373+00:00 stderr F I1213 00:20:15.837119 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-12-13T00:20:16.038266969+00:00 stderr F I1213 00:20:16.038202 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-12-13T00:20:16.038266969+00:00 stderr F I1213 00:20:16.038256 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-12-13T00:20:16.235819156+00:00 stderr F I1213 00:20:16.235731 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-12-13T00:20:16.235819156+00:00 stderr F I1213 00:20:16.235790 1 log.go:245] reconciling (monitoring.coreos.com/v1, 
Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-12-13T00:20:16.436746296+00:00 stderr F I1213 00:20:16.436680 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-12-13T00:20:16.436746296+00:00 stderr F I1213 00:20:16.436720 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-12-13T00:20:16.636707130+00:00 stderr F I1213 00:20:16.636645 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-12-13T00:20:16.636707130+00:00 stderr F I1213 00:20:16.636698 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-12-13T00:20:16.842750642+00:00 stderr F I1213 00:20:16.842644 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-12-13T00:20:16.842750642+00:00 stderr F I1213 00:20:16.842701 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-12-13T00:20:17.042833979+00:00 stderr F I1213 00:20:17.042751 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-12-13T00:20:17.042833979+00:00 stderr F I1213 00:20:17.042805 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-12-13T00:20:17.242094973+00:00 stderr F I1213 00:20:17.242023 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-12-13T00:20:17.242094973+00:00 stderr F I1213 00:20:17.242066 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-12-13T00:20:17.444652119+00:00 stderr F I1213 00:20:17.444579 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, 
Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-12-13T00:20:17.444652119+00:00 stderr F I1213 00:20:17.444634 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-12-13T00:20:17.640526450+00:00 stderr F I1213 00:20:17.640449 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-12-13T00:20:17.640526450+00:00 stderr F I1213 00:20:17.640505 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-12-13T00:20:17.864588919+00:00 stderr F I1213 00:20:17.864515 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-12-13T00:20:17.864588919+00:00 stderr F I1213 00:20:17.864563 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-12-13T00:20:18.074984120+00:00 stderr F I1213 00:20:18.074913 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-12-13T00:20:18.074984120+00:00 stderr F I1213 00:20:18.074973 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-12-13T00:20:18.244069452+00:00 stderr F I1213 00:20:18.244012 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-12-13T00:20:18.244069452+00:00 stderr F I1213 00:20:18.244061 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-12-13T00:20:18.437315071+00:00 stderr F I1213 00:20:18.437218 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, 
Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-12-13T00:20:18.437315071+00:00 stderr F I1213 00:20:18.437272 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-12-13T00:20:18.636916895+00:00 stderr F I1213 00:20:18.636820 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful 2025-12-13T00:20:18.636916895+00:00 stderr F I1213 00:20:18.636891 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited 2025-12-13T00:20:18.838862044+00:00 stderr F I1213 00:20:18.838807 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful 2025-12-13T00:20:18.838909125+00:00 stderr F I1213 00:20:18.838860 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited 2025-12-13T00:20:19.036304339+00:00 stderr F I1213 00:20:19.036223 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful 2025-12-13T00:20:19.036304339+00:00 stderr F I1213 00:20:19.036274 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy 2025-12-13T00:20:19.235821590+00:00 stderr F I1213 00:20:19.235751 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful 2025-12-13T00:20:19.235821590+00:00 stderr F I1213 00:20:19.235790 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy 2025-12-13T00:20:19.435435814+00:00 stderr F I1213 00:20:19.435373 1 
log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful 2025-12-13T00:20:19.435435814+00:00 stderr F I1213 00:20:19.435417 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config 2025-12-13T00:20:19.638206966+00:00 stderr F I1213 00:20:19.638099 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful 2025-12-13T00:20:19.638206966+00:00 stderr F I1213 00:20:19.638186 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-12-13T00:20:19.836386580+00:00 stderr F I1213 00:20:19.836314 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-12-13T00:20:19.836386580+00:00 stderr F I1213 00:20:19.836352 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited 2025-12-13T00:20:20.038194634+00:00 stderr F I1213 00:20:20.038114 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-12-13T00:20:20.038194634+00:00 stderr F I1213 00:20:20.038166 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited 2025-12-13T00:20:20.235920807+00:00 stderr F I1213 00:20:20.235861 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful 2025-12-13T00:20:20.235920807+00:00 stderr F I1213 00:20:20.235906 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-12-13T00:20:20.438345729+00:00 stderr F I1213 00:20:20.438270 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-12-13T00:20:20.438345729+00:00 stderr F I1213 00:20:20.438321 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited 2025-12-13T00:20:20.638022746+00:00 stderr F I1213 00:20:20.637851 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful 2025-12-13T00:20:20.638022746+00:00 stderr F I1213 00:20:20.637893 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn 2025-12-13T00:20:20.837360932+00:00 stderr F I1213 00:20:20.837300 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful 2025-12-13T00:20:20.837399963+00:00 stderr F I1213 00:20:20.837359 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer 2025-12-13T00:20:21.038084536+00:00 stderr F I1213 00:20:21.038030 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful 2025-12-13T00:20:21.038132237+00:00 stderr F I1213 00:20:21.038091 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes 2025-12-13T00:20:21.237901057+00:00 stderr F I1213 00:20:21.237796 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful 2025-12-13T00:20:21.237901057+00:00 stderr F I1213 00:20:21.237843 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader 2025-12-13T00:20:21.437152161+00:00 stderr F I1213 00:20:21.437062 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, 
Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful 2025-12-13T00:20:21.437152161+00:00 stderr F I1213 00:20:21.437108 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib 2025-12-13T00:20:21.639494330+00:00 stderr F I1213 00:20:21.639056 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful 2025-12-13T00:20:21.639494330+00:00 stderr F I1213 00:20:21.639483 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules 2025-12-13T00:20:21.839435543+00:00 stderr F I1213 00:20:21.839370 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful 2025-12-13T00:20:21.839435543+00:00 stderr F I1213 00:20:21.839419 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules 2025-12-13T00:20:22.040664292+00:00 stderr F I1213 00:20:22.040607 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful 2025-12-13T00:20:22.040664292+00:00 stderr F I1213 00:20:22.040649 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features 2025-12-13T00:20:22.237712026+00:00 stderr F I1213 00:20:22.237187 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful 2025-12-13T00:20:22.237712026+00:00 stderr F I1213 00:20:22.237690 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics 2025-12-13T00:20:22.437182006+00:00 stderr F I1213 00:20:22.437092 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful 2025-12-13T00:20:22.437182006+00:00 stderr F 
I1213 00:20:22.437135 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane 2025-12-13T00:20:22.637464749+00:00 stderr F I1213 00:20:22.637379 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful 2025-12-13T00:20:22.637464749+00:00 stderr F I1213 00:20:22.637420 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node 2025-12-13T00:20:22.836901118+00:00 stderr F I1213 00:20:22.836819 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful 2025-12-13T00:20:22.836901118+00:00 stderr F I1213 00:20:22.836874 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-12-13T00:20:23.039246318+00:00 stderr F I1213 00:20:23.039179 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-12-13T00:20:23.039246318+00:00 stderr F I1213 00:20:23.039236 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s 2025-12-13T00:20:23.237667579+00:00 stderr F I1213 00:20:23.237572 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-12-13T00:20:23.237667579+00:00 stderr F I1213 00:20:23.237636 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s 2025-12-13T00:20:23.435649178+00:00 stderr F I1213 00:20:23.435590 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful 2025-12-13T00:20:23.435649178+00:00 stderr F I1213 00:20:23.435630 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network 2025-12-13T00:20:23.637112813+00:00 stderr F I1213 00:20:23.636921 1 
log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful 2025-12-13T00:20:23.637112813+00:00 stderr F I1213 00:20:23.637002 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas 2025-12-13T00:20:23.838369283+00:00 stderr F I1213 00:20:23.838274 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful 2025-12-13T00:20:23.838369283+00:00 stderr F I1213 00:20:23.838353 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane 2025-12-13T00:20:24.042222925+00:00 stderr F I1213 00:20:24.042085 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful 2025-12-13T00:20:24.042222925+00:00 stderr F I1213 00:20:24.042153 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node 2025-12-13T00:20:24.254348363+00:00 stderr F I1213 00:20:24.254056 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful 2025-12-13T00:20:24.254348363+00:00 stderr F I1213 00:20:24.254299 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics 2025-12-13T00:20:24.469239879+00:00 stderr F I1213 00:20:24.469151 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful 2025-12-13T00:20:24.469239879+00:00 stderr F I1213 00:20:24.469194 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics 2025-12-13T00:20:24.637140359+00:00 stderr F I1213 00:20:24.637032 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful 2025-12-13T00:20:24.637140359+00:00 stderr F I1213 00:20:24.637074 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) 
openshift-network-diagnostics/network-diagnostics 2025-12-13T00:20:24.835530430+00:00 stderr F I1213 00:20:24.835456 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful 2025-12-13T00:20:24.835530430+00:00 stderr F I1213 00:20:24.835513 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics 2025-12-13T00:20:25.036366878+00:00 stderr F I1213 00:20:25.036291 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful 2025-12-13T00:20:25.036366878+00:00 stderr F I1213 00:20:25.036345 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics 2025-12-13T00:20:25.240258870+00:00 stderr F I1213 00:20:25.240171 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful 2025-12-13T00:20:25.240433154+00:00 stderr F I1213 00:20:25.240420 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics 2025-12-13T00:20:25.436385038+00:00 stderr F I1213 00:20:25.436296 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful 2025-12-13T00:20:25.436442159+00:00 stderr F I1213 00:20:25.436378 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics 2025-12-13T00:20:25.636851836+00:00 stderr F I1213 00:20:25.636778 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful 2025-12-13T00:20:25.636851836+00:00 stderr F I1213 00:20:25.636827 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source 2025-12-13T00:20:25.842284100+00:00 stderr F I1213 00:20:25.842205 1 log.go:245] Apply / 
Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful 2025-12-13T00:20:25.842451285+00:00 stderr F I1213 00:20:25.842425 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source 2025-12-13T00:20:26.039028806+00:00 stderr F I1213 00:20:26.038539 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful 2025-12-13T00:20:26.039028806+00:00 stderr F I1213 00:20:26.038591 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source 2025-12-13T00:20:26.239165034+00:00 stderr F I1213 00:20:26.239113 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful 2025-12-13T00:20:26.239278657+00:00 stderr F I1213 00:20:26.239264 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s 2025-12-13T00:20:26.448586019+00:00 stderr F I1213 00:20:26.444700 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful 2025-12-13T00:20:26.448586019+00:00 stderr F I1213 00:20:26.444755 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s 2025-12-13T00:20:26.641041665+00:00 stderr F I1213 00:20:26.635865 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful 2025-12-13T00:20:26.641041665+00:00 stderr F I1213 00:20:26.635919 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target 2025-12-13T00:20:26.847986972+00:00 stderr F I1213 00:20:26.847872 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful 
2025-12-13T00:20:26.847986972+00:00 stderr F I1213 00:20:26.847911 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target 2025-12-13T00:20:27.035835972+00:00 stderr F I1213 00:20:27.035766 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful 2025-12-13T00:20:27.035835972+00:00 stderr F I1213 00:20:27.035823 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role 2025-12-13T00:20:27.236821694+00:00 stderr F I1213 00:20:27.236746 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful 2025-12-13T00:20:27.236821694+00:00 stderr F I1213 00:20:27.236796 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding 2025-12-13T00:20:27.437295682+00:00 stderr F I1213 00:20:27.437230 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful 2025-12-13T00:20:27.437295682+00:00 stderr F I1213 00:20:27.437286 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity 2025-12-13T00:20:27.637746539+00:00 stderr F I1213 00:20:27.637665 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful 2025-12-13T00:20:27.637746539+00:00 stderr F I1213 00:20:27.637709 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity 2025-12-13T00:20:27.838058943+00:00 stderr F I1213 00:20:27.837978 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful 2025-12-13T00:20:27.838058943+00:00 stderr F I1213 00:20:27.838041 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRoleBinding) /network-node-identity 2025-12-13T00:20:28.044523926+00:00 stderr F I1213 00:20:28.044455 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful 2025-12-13T00:20:28.044523926+00:00 stderr F I1213 00:20:28.044508 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity 2025-12-13T00:20:28.237698193+00:00 stderr F I1213 00:20:28.237635 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful 2025-12-13T00:20:28.237698193+00:00 stderr F I1213 00:20:28.237678 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases 2025-12-13T00:20:28.439236550+00:00 stderr F I1213 00:20:28.439141 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful 2025-12-13T00:20:28.439236550+00:00 stderr F I1213 00:20:28.439211 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases 2025-12-13T00:20:28.636730696+00:00 stderr F I1213 00:20:28.636641 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful 2025-12-13T00:20:28.636730696+00:00 stderr F I1213 00:20:28.636686 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 2025-12-13T00:20:28.837648527+00:00 stderr F I1213 00:20:28.837578 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful 2025-12-13T00:20:28.837648527+00:00 stderr F I1213 00:20:28.837631 1 log.go:245] reconciling (/v1, Kind=ConfigMap) 
openshift-network-node-identity/ovnkube-identity-cm 2025-12-13T00:20:29.038350990+00:00 stderr F I1213 00:20:29.038238 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful 2025-12-13T00:20:29.038350990+00:00 stderr F I1213 00:20:29.038297 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity 2025-12-13T00:20:29.236221667+00:00 stderr F I1213 00:20:29.236141 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful 2025-12-13T00:20:29.236221667+00:00 stderr F I1213 00:20:29.236185 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io 2025-12-13T00:20:29.439953865+00:00 stderr F I1213 00:20:29.439872 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful 2025-12-13T00:20:29.439953865+00:00 stderr F I1213 00:20:29.439915 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity 2025-12-13T00:20:29.642980353+00:00 stderr F I1213 00:20:29.641613 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful 2025-12-13T00:20:29.642980353+00:00 stderr F I1213 00:20:29.642135 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules 2025-12-13T00:20:29.837117436+00:00 stderr F I1213 00:20:29.837017 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful 2025-12-13T00:20:29.837117436+00:00 stderr F I1213 00:20:29.837080 1 log.go:245] reconciling 
(rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter 2025-12-13T00:20:30.037296696+00:00 stderr F I1213 00:20:30.037216 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful 2025-12-13T00:20:30.037296696+00:00 stderr F I1213 00:20:30.037260 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter 2025-12-13T00:20:30.236365285+00:00 stderr F I1213 00:20:30.236303 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful 2025-12-13T00:20:30.236365285+00:00 stderr F I1213 00:20:30.236356 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter 2025-12-13T00:20:30.432700689+00:00 stderr F I1213 00:20:30.432632 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-12-13T00:20:30.432700689+00:00 stderr F I1213 00:20:30.432672 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-12-13T00:20:30.438329694+00:00 stderr F I1213 00:20:30.438269 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful 2025-12-13T00:20:30.438329694+00:00 stderr F I1213 00:20:30.438310 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-12-13T00:20:30.487901602+00:00 stderr F I1213 00:20:30.487822 1 log.go:245] Network operator config updated with conditions: 2025-12-13T00:20:30.487901602+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-12-13T00:20:30.487901602+00:00 stderr F status: "False" 2025-12-13T00:20:30.487901602+00:00 stderr F type: ManagementStateDegraded 2025-12-13T00:20:30.487901602+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-12-13T00:20:30.487901602+00:00 stderr F status: "False" 
2025-12-13T00:20:30.487901602+00:00 stderr F type: Degraded 2025-12-13T00:20:30.487901602+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-12-13T00:20:30.487901602+00:00 stderr F status: "True" 2025-12-13T00:20:30.487901602+00:00 stderr F type: Upgradeable 2025-12-13T00:20:30.487901602+00:00 stderr F - lastTransitionTime: "2025-12-13T00:20:01Z" 2025-12-13T00:20:30.487901602+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 2025-12-13T00:20:30.487901602+00:00 stderr F 1 nodes) 2025-12-13T00:20:30.487901602+00:00 stderr F reason: Deploying 2025-12-13T00:20:30.487901602+00:00 stderr F status: "True" 2025-12-13T00:20:30.487901602+00:00 stderr F type: Progressing 2025-12-13T00:20:30.487901602+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-12-13T00:20:30.487901602+00:00 stderr F status: "True" 2025-12-13T00:20:30.487901602+00:00 stderr F type: Available 2025-12-13T00:20:30.488104757+00:00 stderr F I1213 00:20:30.488062 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-12-13T00:20:30.510431393+00:00 stderr F I1213 00:20:30.510347 1 log.go:245] ClusterOperator config status updated with conditions: 2025-12-13T00:20:30.510431393+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-12-13T00:20:30.510431393+00:00 stderr F status: "False" 2025-12-13T00:20:30.510431393+00:00 stderr F type: Degraded 2025-12-13T00:20:30.510431393+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-12-13T00:20:30.510431393+00:00 stderr F status: "True" 2025-12-13T00:20:30.510431393+00:00 stderr F type: Upgradeable 2025-12-13T00:20:30.510431393+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-12-13T00:20:30.510431393+00:00 stderr F status: "False" 2025-12-13T00:20:30.510431393+00:00 stderr F type: ManagementStateDegraded 2025-12-13T00:20:30.510431393+00:00 stderr F - lastTransitionTime: "2025-12-13T00:20:01Z" 
2025-12-13T00:20:30.510431393+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 2025-12-13T00:20:30.510431393+00:00 stderr F 1 nodes) 2025-12-13T00:20:30.510431393+00:00 stderr F reason: Deploying 2025-12-13T00:20:30.510431393+00:00 stderr F status: "True" 2025-12-13T00:20:30.510431393+00:00 stderr F type: Progressing 2025-12-13T00:20:30.510431393+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-12-13T00:20:30.510431393+00:00 stderr F status: "True" 2025-12-13T00:20:30.510431393+00:00 stderr F type: Available 2025-12-13T00:20:30.638561235+00:00 stderr F I1213 00:20:30.638482 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful 2025-12-13T00:20:30.638561235+00:00 stderr F I1213 00:20:30.638553 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-12-13T00:20:30.844964987+00:00 stderr F I1213 00:20:30.844881 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful 2025-12-13T00:20:30.859001534+00:00 stderr F I1213 00:20:30.858854 1 log.go:245] Operconfig Controller complete 2025-12-13T00:20:32.405673933+00:00 stderr F I1213 00:20:32.405628 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-12-13T00:20:32.405761965+00:00 stderr F I1213 00:20:32.405749 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-12-13T00:20:32.455674611+00:00 stderr F I1213 00:20:32.455399 1 log.go:245] Network operator config updated with conditions: 2025-12-13T00:20:32.455674611+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-12-13T00:20:32.455674611+00:00 stderr F status: "False" 2025-12-13T00:20:32.455674611+00:00 stderr F type: ManagementStateDegraded 2025-12-13T00:20:32.455674611+00:00 stderr F - lastTransitionTime: 
"2025-08-13T19:57:40Z" 2025-12-13T00:20:32.455674611+00:00 stderr F status: "False" 2025-12-13T00:20:32.455674611+00:00 stderr F type: Degraded 2025-12-13T00:20:32.455674611+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-12-13T00:20:32.455674611+00:00 stderr F status: "True" 2025-12-13T00:20:32.455674611+00:00 stderr F type: Upgradeable 2025-12-13T00:20:32.455674611+00:00 stderr F - lastTransitionTime: "2025-12-13T00:20:32Z" 2025-12-13T00:20:32.455674611+00:00 stderr F status: "False" 2025-12-13T00:20:32.455674611+00:00 stderr F type: Progressing 2025-12-13T00:20:32.455674611+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-12-13T00:20:32.455674611+00:00 stderr F status: "True" 2025-12-13T00:20:32.455674611+00:00 stderr F type: Available 2025-12-13T00:20:32.456752031+00:00 stderr F I1213 00:20:32.456711 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-12-13T00:20:32.472973958+00:00 stderr F I1213 00:20:32.472867 1 log.go:245] ClusterOperator config status updated with conditions: 2025-12-13T00:20:32.472973958+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z" 2025-12-13T00:20:32.472973958+00:00 stderr F status: "False" 2025-12-13T00:20:32.472973958+00:00 stderr F type: Degraded 2025-12-13T00:20:32.472973958+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-12-13T00:20:32.472973958+00:00 stderr F status: "True" 2025-12-13T00:20:32.472973958+00:00 stderr F type: Upgradeable 2025-12-13T00:20:32.472973958+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-12-13T00:20:32.472973958+00:00 stderr F status: "False" 2025-12-13T00:20:32.472973958+00:00 stderr F type: ManagementStateDegraded 2025-12-13T00:20:32.472973958+00:00 stderr F - lastTransitionTime: "2025-12-13T00:20:32Z" 2025-12-13T00:20:32.472973958+00:00 stderr F status: "False" 2025-12-13T00:20:32.472973958+00:00 stderr F type: Progressing 2025-12-13T00:20:32.472973958+00:00 stderr F - lastTransitionTime: 
"2024-06-26T12:46:58Z" 2025-12-13T00:20:32.472973958+00:00 stderr F status: "True" 2025-12-13T00:20:32.472973958+00:00 stderr F type: Available 2025-12-13T00:20:37.275692319+00:00 stderr F E1213 00:20:37.275593 1 leaderelection.go:332] error retrieving resource lock openshift-network-operator/network-operator-lock: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-operator/leases/network-operator-lock?timeout=4m0s": dial tcp 38.102.83.51:6443: connect: connection refused 2025-12-13T00:21:08.112133602+00:00 stderr F I1213 00:21:08.112050 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:09.905983879+00:00 stderr F I1213 00:21:09.903289 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:09.914428307+00:00 stderr F I1213 00:21:09.914359 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:10.962230482+00:00 stderr F I1213 00:21:10.962181 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:11.453221250+00:00 stderr F I1213 00:21:11.453138 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:11.465703207+00:00 stderr F I1213 00:21:11.465627 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-12-13T00:21:11.471962206+00:00 stderr F I1213 00:21:11.471912 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-12-13T00:21:11.477227038+00:00 stderr F I1213 00:21:11.477174 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:11.484306079+00:00 stderr F I1213 00:21:11.484223 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-12-13T00:21:11.493822456+00:00 stderr F I1213 00:21:11.493757 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:11.500237049+00:00 stderr F I1213 00:21:11.500180 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-12-13T00:21:11.508269516+00:00 stderr F I1213 00:21:11.508207 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-12-13T00:21:11.512689385+00:00 stderr F I1213 00:21:11.512631 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-12-13T00:21:11.518588754+00:00 stderr F I1213 00:21:11.518519 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:11.901124097+00:00 stderr F I1213 00:21:11.901049 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:11.901316322+00:00 stderr F I1213 00:21:11.901283 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-12-13T00:21:11.956014708+00:00 stderr F I1213 00:21:11.955922 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-12-13T00:21:12.102583714+00:00 stderr F I1213 00:21:12.102514 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:12.102663366+00:00 stderr F I1213 00:21:12.102639 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2025-12-13T00:21:12.102663366+00:00 stderr F I1213 00:21:12.102653 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2025-12-13T00:21:12.102663366+00:00 stderr F I1213 00:21:12.102659 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-12-13T00:21:12.102674116+00:00 stderr F I1213 00:21:12.102663 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-12-13T00:21:12.105745599+00:00 stderr F I1213 00:21:12.105707 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:12.115630245+00:00 stderr F I1213 00:21:12.115583 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-12-13T00:21:12.127365032+00:00 stderr F I1213 00:21:12.127273 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-12-13T00:21:12.133062706+00:00 stderr F I1213 00:21:12.132996 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:12.139469179+00:00 stderr F I1213 00:21:12.139387 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is 
applied 2025-12-13T00:21:12.265283684+00:00 stderr F I1213 00:21:12.265210 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:12.321830670+00:00 stderr F I1213 00:21:12.321746 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:12.465283341+00:00 stderr F I1213 00:21:12.465222 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-12-13T00:21:12.670756135+00:00 stderr F I1213 00:21:12.670694 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-12-13T00:21:12.800283951+00:00 stderr F I1213 00:21:12.800176 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:12.866596220+00:00 stderr F I1213 00:21:12.866519 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-12-13T00:21:13.064290815+00:00 stderr F I1213 00:21:13.064218 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:13.263890071+00:00 stderr F I1213 00:21:13.263840 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:13.372046059+00:00 stderr F I1213 00:21:13.371596 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:13.467202807+00:00 stderr F I1213 00:21:13.467134 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-12-13T00:21:13.580230647+00:00 stderr F I1213 00:21:13.580161 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:13.580282498+00:00 stderr F I1213 00:21:13.580260 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-12-13T00:21:13.580282498+00:00 stderr F I1213 00:21:13.580275 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-12-13T00:21:13.581134511+00:00 stderr F I1213 00:21:13.580295 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-12-13T00:21:13.581134511+00:00 stderr F I1213 00:21:13.580320 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-12-13T00:21:13.581134511+00:00 stderr F I1213 00:21:13.580336 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2025-12-13T00:21:13.581134511+00:00 stderr F I1213 00:21:13.580344 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2025-12-13T00:21:13.581134511+00:00 stderr F I1213 00:21:13.580352 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-12-13T00:21:13.581134511+00:00 stderr F I1213 00:21:13.580362 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-12-13T00:21:13.581134511+00:00 stderr F I1213 00:21:13.580372 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2025-12-13T00:21:13.581134511+00:00 stderr F I1213 00:21:13.580379 1 
pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2025-12-13T00:21:13.581134511+00:00 stderr F I1213 00:21:13.580386 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-12-13T00:21:13.581134511+00:00 stderr F I1213 00:21:13.580393 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-12-13T00:21:13.607490403+00:00 stderr F I1213 00:21:13.607378 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:13.690181254+00:00 stderr F I1213 00:21:13.681699 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-12-13T00:21:13.693709730+00:00 stderr F I1213 00:21:13.690593 1 reflector.go:351] Caches populated for *v1.OperatorPKI from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:13.693709730+00:00 stderr F I1213 00:21:13.690905 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-12-13T00:21:13.693709730+00:00 stderr F I1213 00:21:13.691650 1 log.go:245] successful reconciliation 2025-12-13T00:21:13.867276923+00:00 stderr F I1213 00:21:13.867200 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:13.923018997+00:00 stderr F I1213 00:21:13.922926 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:13.973632433+00:00 stderr F I1213 00:21:13.973565 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:14.069695395+00:00 stderr F I1213 00:21:14.069630 1 
log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-12-13T00:21:14.105054548+00:00 stderr F I1213 00:21:14.104922 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:14.116748985+00:00 stderr F I1213 00:21:14.116684 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:14.188452179+00:00 stderr F I1213 00:21:14.188401 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn 2025-12-13T00:21:14.188844189+00:00 stderr F I1213 00:21:14.188830 1 log.go:245] successful reconciliation 2025-12-13T00:21:14.265617972+00:00 stderr F I1213 00:21:14.265565 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:14.386457002+00:00 stderr F I1213 00:21:14.386367 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-12-13T00:21:14.386868313+00:00 stderr F I1213 00:21:14.386793 1 log.go:245] successful reconciliation 2025-12-13T00:21:14.457487509+00:00 stderr F I1213 00:21:14.457421 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:14.465964208+00:00 stderr F I1213 00:21:14.465888 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-12-13T00:21:14.665563755+00:00 stderr F I1213 00:21:14.665502 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-12-13T00:21:14.865217612+00:00 stderr F I1213 00:21:14.865144 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-12-13T00:21:15.034079218+00:00 stderr F I1213 00:21:15.033990 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:15.034187361+00:00 stderr F I1213 00:21:15.034145 1 log.go:245] Reconciling proxy 'cluster' 2025-12-13T00:21:15.036429042+00:00 stderr F I1213 00:21:15.036357 1 log.go:245] httpProxy, httpsProxy and noProxy not defined for proxy 'cluster'; validation will be skipped 2025-12-13T00:21:15.046279917+00:00 stderr F I1213 00:21:15.046202 1 log.go:245] Reconciling proxy 'cluster' complete 2025-12-13T00:21:15.064564061+00:00 stderr F I1213 00:21:15.064464 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:15.266765157+00:00 stderr F I1213 00:21:15.266703 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-12-13T00:21:15.461064951+00:00 stderr F I1213 00:21:15.460565 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:15.461112372+00:00 stderr F I1213 00:21:15.460880 1 log.go:245] Reconciling configmap from openshift-config-managed/trusted-ca-bundle 2025-12-13T00:21:15.463353212+00:00 stderr F I1213 00:21:15.463316 1 log.go:245] trusted-ca-bundle changed, updating 12 configMaps 2025-12-13T00:21:15.463371663+00:00 stderr F I1213 00:21:15.463365 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-12-13T00:21:15.463410344+00:00 stderr F I1213 00:21:15.463387 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 
2025-12-13T00:21:15.463419974+00:00 stderr F I1213 00:21:15.463415 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-12-13T00:21:15.463463545+00:00 stderr F I1213 00:21:15.463438 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-12-13T00:21:15.463472825+00:00 stderr F I1213 00:21:15.463468 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-12-13T00:21:15.463508876+00:00 stderr F I1213 00:21:15.463492 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2025-12-13T00:21:15.463538547+00:00 stderr F I1213 00:21:15.463523 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2025-12-13T00:21:15.463572458+00:00 stderr F I1213 00:21:15.463557 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-12-13T00:21:15.463607729+00:00 stderr F I1213 00:21:15.463587 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-12-13T00:21:15.463638200+00:00 stderr F I1213 00:21:15.463616 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-12-13T00:21:15.463650760+00:00 stderr F I1213 00:21:15.463644 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2025-12-13T00:21:15.463679391+00:00 stderr F I1213 00:21:15.463662 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-12-13T00:21:15.465645164+00:00 stderr F I1213 00:21:15.465613 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-12-13T00:21:15.568157431+00:00 stderr F I1213 00:21:15.568085 1 
reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:15.605745355+00:00 stderr F I1213 00:21:15.605648 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:15.666313370+00:00 stderr F I1213 00:21:15.666259 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:15.866256825+00:00 stderr F I1213 00:21:15.866190 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-12-13T00:21:16.064802412+00:00 stderr F I1213 00:21:16.064731 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:16.266768762+00:00 stderr F I1213 00:21:16.266685 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-12-13T00:21:16.466922584+00:00 stderr F I1213 00:21:16.466848 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-12-13T00:21:16.666277173+00:00 stderr F I1213 00:21:16.666191 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-12-13T00:21:16.864805590+00:00 stderr F I1213 00:21:16.864738 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:16.904856771+00:00 stderr F I1213 00:21:16.904767 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:16.927084371+00:00 stderr F I1213 00:21:16.927001 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:16.967216384+00:00 stderr F I1213 00:21:16.967142 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:17.044591802+00:00 stderr F I1213 00:21:17.044518 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:17.411066151+00:00 stderr F I1213 00:21:17.410984 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:17.421982276+00:00 stderr F I1213 00:21:17.421880 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-12-13T00:21:17.422664504+00:00 stderr F I1213 00:21:17.422604 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:17.427125304+00:00 stderr F I1213 00:21:17.427066 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-12-13T00:21:17.466079855+00:00 stderr F I1213 00:21:17.465984 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-12-13T00:21:17.517320679+00:00 stderr F I1213 00:21:17.517243 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:17.517461642+00:00 stderr F I1213 00:21:17.517415 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-acks' 
2025-12-13T00:21:17.522118237+00:00 stderr F I1213 00:21:17.522067 1 log.go:245] configmap 'openshift-config/admin-acks' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-12-13T00:21:17.522136598+00:00 stderr F I1213 00:21:17.522116 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca' 2025-12-13T00:21:17.526640120+00:00 stderr F I1213 00:21:17.526601 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-12-13T00:21:17.526640120+00:00 stderr F I1213 00:21:17.526627 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-ca-bundle' 2025-12-13T00:21:17.531366738+00:00 stderr F I1213 00:21:17.531288 1 log.go:245] configmap 'openshift-config/etcd-ca-bundle' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-12-13T00:21:17.531366738+00:00 stderr F I1213 00:21:17.531332 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-metric-serving-ca' 2025-12-13T00:21:17.535389356+00:00 stderr F I1213 00:21:17.535311 1 log.go:245] configmap 'openshift-config/etcd-metric-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-12-13T00:21:17.535389356+00:00 stderr F I1213 00:21:17.535371 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-serving-ca' 2025-12-13T00:21:17.539164687+00:00 stderr F I1213 00:21:17.539113 1 log.go:245] configmap 'openshift-config/etcd-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-12-13T00:21:17.539164687+00:00 stderr F I1213 00:21:17.539141 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/initial-kube-apiserver-server-ca' 
2025-12-13T00:21:17.542131607+00:00 stderr F I1213 00:21:17.542085 1 log.go:245] configmap 'openshift-config/initial-kube-apiserver-server-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:21:17.542131607+00:00 stderr F I1213 00:21:17.542109 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/kube-root-ca.crt'
2025-12-13T00:21:17.545619862+00:00 stderr F I1213 00:21:17.545563 1 log.go:245] configmap 'openshift-config/kube-root-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:21:17.545619862+00:00 stderr F I1213 00:21:17.545588 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-install-manifests'
2025-12-13T00:21:17.548614853+00:00 stderr F I1213 00:21:17.548533 1 log.go:245] configmap 'openshift-config/openshift-install-manifests' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:21:17.548614853+00:00 stderr F I1213 00:21:17.548593 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt'
2025-12-13T00:21:17.552638411+00:00 stderr F I1213 00:21:17.552590 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:21:17.552638411+00:00 stderr F I1213 00:21:17.552626 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/registry-certs'
2025-12-13T00:21:17.556681041+00:00 stderr F I1213 00:21:17.556613 1 log.go:245] configmap 'openshift-config/registry-certs' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-12-13T00:21:17.622962799+00:00 stderr F I1213 00:21:17.622860 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:17.666170035+00:00 stderr F I1213 00:21:17.666122 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:17.864480346+00:00 stderr F I1213 00:21:17.864424 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:18.034369840+00:00 stderr F I1213 00:21:18.034310 1 reflector.go:351] Caches populated for *v1.IngressController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:18.034493124+00:00 stderr F I1213 00:21:18.034456 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-12-13T00:21:18.066151758+00:00 stderr F I1213 00:21:18.066092 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-12-13T00:21:18.269658090+00:00 stderr F I1213 00:21:18.269587 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-12-13T00:21:18.436712878+00:00 stderr F I1213 00:21:18.436597 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:18.469994006+00:00 stderr F I1213 00:21:18.469868 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:18.667152046+00:00 stderr F I1213 00:21:18.667082 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:18.866491145+00:00 stderr F I1213 00:21:18.866431 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-12-13T00:21:19.065430663+00:00 stderr F I1213 00:21:19.065330 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:19.184850606+00:00 stderr F I1213 00:21:19.184753 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:19.268180545+00:00 stderr F I1213 00:21:19.268013 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:19.467803581+00:00 stderr F I1213 00:21:19.467716 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:19.669025222+00:00 stderr F I1213 00:21:19.668907 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:19.735068073+00:00 stderr F I1213 00:21:19.732667 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:19.782263417+00:00 stderr F I1213 00:21:19.782173 1 reflector.go:351] Caches populated for *v1.EgressRouter from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:19.865897204+00:00 stderr F I1213 00:21:19.865838 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-12-13T00:21:20.065010057+00:00 stderr F I1213 00:21:20.064929 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-12-13T00:21:20.269584157+00:00 stderr F I1213 00:21:20.269151 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:20.468355401+00:00 stderr F I1213 00:21:20.468280 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:20.667632538+00:00 stderr F I1213 00:21:20.667576 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-12-13T00:21:20.708767669+00:00 stderr F I1213 00:21:20.708692 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:20.866877565+00:00 stderr F I1213 00:21:20.866171 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:21.066503261+00:00 stderr F I1213 00:21:21.066438 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:21.151526976+00:00 stderr F I1213 00:21:21.151409 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:21.268824222+00:00 stderr F I1213 00:21:21.268763 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:21.470243988+00:00 stderr F I1213 00:21:21.470179 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:21.667465659+00:00 stderr F I1213 00:21:21.667318 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-12-13T00:21:21.866743947+00:00 stderr F I1213 00:21:21.866679 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-12-13T00:21:22.063543997+00:00 stderr F I1213 00:21:22.063484 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:22.268567281+00:00 stderr F I1213 00:21:22.268504 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:22.372435703+00:00 stderr F I1213 00:21:22.372397 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:22.464972760+00:00 stderr F I1213 00:21:22.464909 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-12-13T00:21:22.667208518+00:00 stderr F I1213 00:21:22.667151 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:22.866005402+00:00 stderr F I1213 00:21:22.865898 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:23.067959901+00:00 stderr F I1213 00:21:23.067013 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:23.267120966+00:00 stderr F I1213 00:21:23.267062 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:23.267386783+00:00 stderr F I1213 00:21:23.267351 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:23.470521865+00:00 stderr F I1213 00:21:23.470456 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-12-13T00:21:23.666545305+00:00 stderr F I1213 00:21:23.666475 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-12-13T00:21:23.702963197+00:00 stderr F I1213 00:21:23.702856 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:23.864711291+00:00 stderr F I1213 00:21:23.864645 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:23.909502550+00:00 stderr F I1213 00:21:23.909413 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:24.072986532+00:00 stderr F I1213 00:21:24.070394 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:24.168097329+00:00 stderr F I1213 00:21:24.168007 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:21:24.269854215+00:00 stderr F I1213 00:21:24.269726 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-12-13T00:21:24.472100882+00:00 stderr F I1213 00:21:24.471918 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:24.667539476+00:00 stderr F I1213 00:21:24.667448 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:24.869651260+00:00 stderr F I1213 00:21:24.868822 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:25.064315633+00:00 stderr F I1213 00:21:25.064250 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:25.267463565+00:00 stderr F I1213 00:21:25.267399 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-12-13T00:21:25.466308810+00:00 stderr F I1213 00:21:25.466245 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-12-13T00:21:25.665567167+00:00 stderr F I1213 00:21:25.665514 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:25.866572972+00:00 stderr F I1213 00:21:25.866504 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:38.768849632+00:00 stderr F I1213 00:21:38.768799 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-12-13T00:21:38.769551219+00:00 stderr F I1213 00:21:38.769523 1 log.go:245] successful reconciliation
2025-12-13T00:21:40.368577570+00:00 stderr F I1213 00:21:40.368538 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2025-12-13T00:21:40.369026721+00:00 stderr F I1213 00:21:40.369012 1 log.go:245] successful reconciliation
2025-12-13T00:21:41.767882623+00:00 stderr F I1213 00:21:41.767817 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer
2025-12-13T00:21:41.768248082+00:00 stderr F I1213 00:21:41.768215 1 log.go:245] successful reconciliation
2025-12-13T00:21:45.846107453+00:00 stderr F I1213 00:21:45.846051 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-12-13T00:21:45.852159842+00:00 stderr F I1213 00:21:45.852105 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:45.859615816+00:00 stderr F I1213 00:21:45.859582 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:45.866477785+00:00 stderr F I1213 00:21:45.866419 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:45.871833307+00:00 stderr F I1213 00:21:45.871786 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-12-13T00:21:45.877693862+00:00 stderr F I1213 00:21:45.877598 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-12-13T00:21:45.883803022+00:00 stderr F I1213 00:21:45.883761 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-12-13T00:21:45.889555224+00:00 stderr F I1213 00:21:45.889504 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-12-13T00:21:45.893522442+00:00 stderr F I1213 00:21:45.893481 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator/0.log
2025-08-13T19:50:51.526005638+00:00 stderr F I0813 19:50:51.524133 1 cmd.go:241] Using service-serving-cert provided certificates
2025-08-13T19:50:51.805548568+00:00 stderr F I0813 19:50:51.796567 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:50:51.907527603+00:00 stderr F I0813 19:50:51.906451 1 observer_polling.go:159] Starting file observer
2025-08-13T19:50:52.430967163+00:00 stderr F I0813 19:50:52.430191 1 builder.go:299] network-operator version 4.16.0-202406131906.p0.g84f9a08.assembly.stream.el9-84f9a08-84f9a080d03777c95a1c5a0d13ca16e5aa342d98
2025-08-13T19:50:53.696147402+00:00 stderr F I0813 19:50:53.658721 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:50:53.696515842+00:00 stderr F W0813 19:50:53.696475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:50:53.696565593+00:00 stderr F W0813 19:50:53.696546 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:50:53.711051698+00:00 stderr F I0813 19:50:53.688429 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T19:50:53.729164305+00:00 stderr F I0813 19:50:53.728532 1 secure_serving.go:213] Serving securely on [::]:9104
2025-08-13T19:50:53.732060848+00:00 stderr F I0813 19:50:53.731301 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T19:50:53.733718265+00:00 stderr F I0813 19:50:53.733681 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.737647 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.739633 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748216 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748340 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748333 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748369 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:50:53.749106915+00:00 stderr F I0813 19:50:53.748549 1 leaderelection.go:250] attempting to acquire leader lease openshift-network-operator/network-operator-lock...
2025-08-13T19:50:53.855680041+00:00 stderr F I0813 19:50:53.851938 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:50:53.855680041+00:00 stderr F I0813 19:50:53.852293 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:50:53.871563255+00:00 stderr F I0813 19:50:53.871012 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:50:53.878122182+00:00 stderr F E0813 19:50:53.878073 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.878459512+00:00 stderr F E0813 19:50:53.878441 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.884662709+00:00 stderr F E0813 19:50:53.883761 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.888043106+00:00 stderr F E0813 19:50:53.887730 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.894232243+00:00 stderr F E0813 19:50:53.894194 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.921112841+00:00 stderr F E0813 19:50:53.921050 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.921221634+00:00 stderr F E0813 19:50:53.921202 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.944473119+00:00 stderr F E0813 19:50:53.944405 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.965390587+00:00 stderr F E0813 19:50:53.964305 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:53.990391071+00:00 stderr F E0813 19:50:53.989304 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:54.053048102+00:00 stderr F E0813 19:50:54.051645 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:54.076296547+00:00 stderr F E0813 19:50:54.076033 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:54.214466776+00:00 stderr F E0813 19:50:54.213642 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:54.237106333+00:00 stderr F E0813 19:50:54.236942 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:54.539054753+00:00 stderr F E0813 19:50:54.535234 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:54.558006074+00:00 stderr F E0813 19:50:54.557434 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:55.176672496+00:00 stderr F E0813 19:50:55.176032 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:55.199176329+00:00 stderr F E0813 19:50:55.198548 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:56.458594364+00:00 stderr F E0813 19:50:56.458543 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:56.485217234+00:00 stderr F E0813 19:50:56.481127 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:59.034562497+00:00 stderr F E0813 19:50:59.034344 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:50:59.044633055+00:00 stderr F E0813 19:50:59.044467 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:04.165674402+00:00 stderr F E0813 19:51:04.162623 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:04.166167186+00:00 stderr F E0813 19:51:04.166139 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:14.403445769+00:00 stderr F E0813 19:51:14.403252 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:14.407896356+00:00 stderr F E0813 19:51:14.406714 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:34.885304072+00:00 stderr F E0813 19:51:34.884178 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:34.887387631+00:00 stderr F E0813 19:51:34.887255 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:51:53.853097978+00:00 stderr F E0813 19:51:53.852977 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:52:15.845380614+00:00 stderr F E0813 19:52:15.845240 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:52:15.847854545+00:00 stderr F E0813 19:52:15.847751 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:52:53.852904639+00:00 stderr F E0813 19:52:53.852557 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:53:37.766395565+00:00 stderr F E0813 19:53:37.766247 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:53:53.853033193+00:00 stderr F E0813 19:53:53.852902 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:54:53.853306268+00:00 stderr F E0813 19:54:53.853107 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:54:59.688769553+00:00 stderr F E0813 19:54:59.688621 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:55:53.852650979+00:00 stderr F E0813 19:55:53.852507 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:56:21.606934133+00:00 stderr F E0813 19:56:21.606713 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:56:27.298752589+00:00 stderr F I0813 19:56:27.298520 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-network-operator", Name:"network-operator-lock", UID:"f2f09683-2189-4368-ac3d-7dc7538da4b8", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"27121", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_3457bc4e-7009-4fb9-bd36-e077ad27f0dd became leader
2025-08-13T19:56:27.299471170+00:00 stderr F I0813 19:56:27.299360 1 leaderelection.go:260] successfully acquired lease openshift-network-operator/network-operator-lock
2025-08-13T19:56:27.351114894+00:00 stderr F I0813 19:56:27.351001 1 operator.go:97] Creating status manager for stand-alone cluster
2025-08-13T19:56:27.351565257+00:00 stderr F I0813 19:56:27.351512 1 operator.go:102] Adding controller-runtime controllers
2025-08-13T19:56:27.354990565+00:00 stderr F I0813 19:56:27.354921 1 operconfig_controller.go:102] Waiting for feature gates initialization...
2025-08-13T19:56:27.358337231+00:00 stderr F I0813 19:56:27.358191 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T19:56:27.363699364+00:00 stderr F I0813 19:56:27.363602 1 operconfig_controller.go:109] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T19:56:27.364136466+00:00 stderr F I0813 19:56:27.364072 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-network-operator", Name:"network-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T19:56:27.366122993+00:00 stderr F I0813 19:56:27.366051 1 client.go:239] Starting informers...
2025-08-13T19:56:27.366292048+00:00 stderr F I0813 19:56:27.366232 1 client.go:250] Waiting for informers to sync...
2025-08-13T19:56:27.567890954+00:00 stderr F I0813 19:56:27.567745 1 client.go:271] Informers started and synced
2025-08-13T19:56:27.567980227+00:00 stderr F I0813 19:56:27.567965 1 operator.go:126] Starting controller-manager
2025-08-13T19:56:27.569429138+00:00 stderr F I0813 19:56:27.569399 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:56:27.569530191+00:00 stderr F I0813 19:56:27.569512 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T19:56:27.569658345+00:00 stderr F I0813 19:56:27.569634 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T19:56:27.569953743+00:00 stderr F I0813 19:56:27.569405 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController
2025-08-13T19:56:27.569976414+00:00 stderr F I0813 19:56:27.569955 1 base_controller.go:73] Caches are synced for ManagementStateController
2025-08-13T19:56:27.569988794+00:00 stderr F I0813 19:56:27.569970 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ...
2025-08-13T19:56:27.570112128+00:00 stderr F I0813 19:56:27.570015 1 server.go:185] "Starting metrics server" logger="controller-runtime.metrics" 2025-08-13T19:56:27.571338433+00:00 stderr F I0813 19:56:27.570882 1 server.go:224] "Serving metrics server" logger="controller-runtime.metrics" bindAddress=":8080" secure=false 2025-08-13T19:56:27.573138374+00:00 stderr F I0813 19:56:27.573062 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="informer source: 0xc000d2d970" 2025-08-13T19:56:27.573237987+00:00 stderr F I0813 19:56:27.573176 1 controller.go:178] "Starting EventSource" controller="egress-router-controller" source="kind source: *v1.EgressRouter" 2025-08-13T19:56:27.573380101+00:00 stderr F I0813 19:56:27.573272 1 controller.go:178] "Starting EventSource" controller="proxyconfig-controller" source="kind source: *v1.Proxy" 2025-08-13T19:56:27.573393931+00:00 stderr F I0813 19:56:27.573376 1 controller.go:186] "Starting Controller" controller="proxyconfig-controller" 2025-08-13T19:56:27.573393931+00:00 stderr F I0813 19:56:27.573376 1 controller.go:186] "Starting Controller" controller="egress-router-controller" 2025-08-13T19:56:27.574079911+00:00 stderr F I0813 19:56:27.573928 1 controller.go:178] "Starting EventSource" controller="clusterconfig-controller" source="kind source: *v1.Network" 2025-08-13T19:56:27.574079911+00:00 stderr F I0813 19:56:27.574014 1 controller.go:186] "Starting Controller" controller="clusterconfig-controller" 2025-08-13T19:56:27.574079911+00:00 stderr F I0813 19:56:27.573086 1 controller.go:178] "Starting EventSource" controller="pki-controller" source="kind source: *v1.OperatorPKI" 2025-08-13T19:56:27.574103762+00:00 stderr F I0813 19:56:27.574085 1 controller.go:186] "Starting Controller" controller="pki-controller" 2025-08-13T19:56:27.574197864+00:00 stderr F I0813 19:56:27.574118 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 
2025-08-13T19:56:27.574211965+00:00 stderr F I0813 19:56:27.574195 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Network" 2025-08-13T19:56:27.574211965+00:00 stderr F I0813 19:56:27.574207 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="informer source: 0xc001030000" 2025-08-13T19:56:27.574395470+00:00 stderr F I0813 19:56:27.574278 1 controller.go:178] "Starting EventSource" controller="operconfig-controller" source="kind source: *v1.Node" 2025-08-13T19:56:27.574395470+00:00 stderr F I0813 19:56:27.574335 1 controller.go:186] "Starting Controller" controller="operconfig-controller" 2025-08-13T19:56:27.574414261+00:00 stderr F I0813 19:56:27.574389 1 controller.go:178] "Starting EventSource" controller="infrastructureconfig-controller" source="kind source: *v1.Infrastructure" 2025-08-13T19:56:27.574426401+00:00 stderr F I0813 19:56:27.574417 1 controller.go:186] "Starting Controller" controller="infrastructureconfig-controller" 2025-08-13T19:56:27.574506623+00:00 stderr F I0813 19:56:27.574434 1 controller.go:178] "Starting EventSource" controller="dashboard-controller" source="informer source: 0xc0010302c0" 2025-08-13T19:56:27.574542484+00:00 stderr F I0813 19:56:27.574279 1 controller.go:178] "Starting EventSource" controller="signer-controller" source="kind source: *v1.CertificateSigningRequest" 2025-08-13T19:56:27.574664318+00:00 stderr F I0813 19:56:27.574565 1 controller.go:186] "Starting Controller" controller="dashboard-controller" 2025-08-13T19:56:27.574776281+00:00 stderr F I0813 19:56:27.574601 1 controller.go:178] "Starting EventSource" controller="allowlist-controller" source="informer source: 0xc001030210" 2025-08-13T19:56:27.574888574+00:00 stderr F I0813 19:56:27.574574 1 controller.go:186] "Starting Controller" controller="signer-controller" 2025-08-13T19:56:27.575969945+00:00 stderr F I0813 19:56:27.574542 1 controller.go:178] "Starting EventSource" 
controller="ingress-config-controller" source="kind source: *v1.IngressController" 2025-08-13T19:56:27.576037697+00:00 stderr F I0813 19:56:27.576022 1 controller.go:186] "Starting Controller" controller="ingress-config-controller" 2025-08-13T19:56:27.576094398+00:00 stderr F I0813 19:56:27.574714 1 controller.go:220] "Starting workers" controller="dashboard-controller" worker count=1 2025-08-13T19:56:27.576491250+00:00 stderr F I0813 19:56:27.574875 1 controller.go:186] "Starting Controller" controller="allowlist-controller" 2025-08-13T19:56:27.576617133+00:00 stderr F I0813 19:56:27.576531 1 controller.go:220] "Starting workers" controller="allowlist-controller" worker count=1 2025-08-13T19:56:27.576926262+00:00 stderr F I0813 19:56:27.574307 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc0010300b0" 2025-08-13T19:56:27.576951953+00:00 stderr F I0813 19:56:27.576927 1 controller.go:178] "Starting EventSource" controller="configmap-trust-bundle-injector-controller" source="informer source: 0xc001030160" 2025-08-13T19:56:27.577041156+00:00 stderr F I0813 19:56:27.576992 1 controller.go:186] "Starting Controller" controller="configmap-trust-bundle-injector-controller" 2025-08-13T19:56:27.577074326+00:00 stderr F I0813 19:56:27.576996 1 dashboard_controller.go:113] Reconcile dashboards 2025-08-13T19:56:27.577215551+00:00 stderr F I0813 19:56:27.577133 1 controller.go:220] "Starting workers" controller="configmap-trust-bundle-injector-controller" worker count=1 2025-08-13T19:56:27.577215551+00:00 stderr F I0813 19:56:27.574959 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc001030370" 2025-08-13T19:56:27.577232791+00:00 stderr F I0813 19:56:27.577215 1 controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc001030420" 2025-08-13T19:56:27.577244481+00:00 stderr F I0813 19:56:27.577233 1 
controller.go:178] "Starting EventSource" controller="pod-watcher" source="informer source: 0xc0010304d0" 2025-08-13T19:56:27.577254382+00:00 stderr F I0813 19:56:27.577246 1 controller.go:186] "Starting Controller" controller="pod-watcher" 2025-08-13T19:56:27.577264902+00:00 stderr F I0813 19:56:27.577254 1 controller.go:220] "Starting workers" controller="pod-watcher" worker count=1 2025-08-13T19:56:27.577437857+00:00 stderr F I0813 19:56:27.575918 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577667 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577720 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577732 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577737 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577744 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577750 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status 2025-08-13T19:56:27.577776886+00:00 stderr F I0813 19:56:27.577767 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status 2025-08-13T19:56:27.578450466+00:00 stderr F I0813 19:56:27.577775 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 
2025-08-13T19:56:27.578575999+00:00 stderr F I0813 19:56:27.578480 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.578575999+00:00 stderr F I0813 19:56:27.578544 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.578921419+00:00 stderr F I0813 19:56:27.575299 1 log.go:245] openshift-network-operator/kube-root-ca.crt changed, triggering operconf reconciliation 2025-08-13T19:56:27.579011332+00:00 stderr F I0813 19:56:27.578991 1 log.go:245] openshift-network-operator/mtu changed, triggering operconf reconciliation 2025-08-13T19:56:27.579067203+00:00 stderr F I0813 19:56:27.579050 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation 2025-08-13T19:56:27.579356502+00:00 stderr F I0813 19:56:27.579333 1 log.go:245] Reconciling configmap from openshift-ingress-operator/trusted-ca 2025-08-13T19:56:27.580948257+00:00 stderr F I0813 19:56:27.579506 1 reflector.go:351] Caches populated for *v1.OperatorPKI from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.587549346+00:00 stderr F I0813 19:56:27.584463 1 log.go:245] openshift-network-operator/iptables-alerter-script changed, triggering operconf reconciliation 2025-08-13T19:56:27.591530269+00:00 stderr F I0813 19:56:27.590312 1 dashboard_controller.go:139] Applying dashboards manifests 2025-08-13T19:56:27.591530269+00:00 stderr F I0813 19:56:27.591221 1 reflector.go:351] Caches populated for *v1.EgressRouter from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.591860269+00:00 stderr F I0813 19:56:27.591702 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.592230659+00:00 stderr F I0813 19:56:27.592106 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt 
unchanged, skipping 2025-08-13T19:56:27.592852457+00:00 stderr F I0813 19:56:27.592740 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.593769843+00:00 stderr F I0813 19:56:27.593747 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.594589167+00:00 stderr F I0813 19:56:27.594464 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.595849663+00:00 stderr F I0813 19:56:27.595733 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.596215333+00:00 stderr F I0813 19:56:27.596118 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.599428015+00:00 stderr F I0813 19:56:27.599166 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.599659351+00:00 stderr F I0813 19:56:27.599636 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.601773632+00:00 stderr F I0813 19:56:27.600107 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.602556664+00:00 stderr F I0813 19:56:27.602266 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.603653376+00:00 stderr F I0813 19:56:27.603261 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.605116727+00:00 stderr F I0813 19:56:27.605009 1 reflector.go:351] Caches populated for *v1.IngressController from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.605654523+00:00 stderr F I0813 19:56:27.605606 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.607689741+00:00 stderr F I0813 19:56:27.606413 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.607689741+00:00 stderr F I0813 19:56:27.606613 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.607689741+00:00 stderr F I0813 19:56:27.607599 1 reflector.go:351] Caches populated for *v1alpha1.PodNetworkConnectivityCheck from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.608162284+00:00 stderr F I0813 19:56:27.608138 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.609076180+00:00 stderr F I0813 19:56:27.609050 1 allowlist_controller.go:103] Reconcile allowlist for openshift-multus/cni-sysctl-allowlist 2025-08-13T19:56:27.609481932+00:00 stderr F I0813 19:56:27.609420 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=openshiftapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.612131368+00:00 stderr F I0813 19:56:27.612050 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-08-13T19:56:27.635883836+00:00 stderr F I0813 19:56:27.635730 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:56:27.635883836+00:00 stderr F I0813 19:56:27.635768 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status 2025-08-13T19:56:27.639028526+00:00 stderr F I0813 19:56:27.638976 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) 
openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-08-13T19:56:27.639028526+00:00 stderr F I0813 19:56:27.639008 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-08-13T19:56:27.644852602+00:00 stderr F I0813 19:56:27.644681 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].configMap.namespace\"" logger="KubeAPIWarningLogger" 2025-08-13T19:56:27.644852602+00:00 stderr F I0813 19:56:27.644740 1 warning_handler.go:65] "unknown field \"spec.template.spec.volumes[0].defaultMode\"" logger="KubeAPIWarningLogger" 2025-08-13T19:56:27.662414693+00:00 stderr F I0813 19:56:27.661700 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-08-13T19:56:27.677311729+00:00 stderr F I0813 19:56:27.675885 1 log.go:245] /crc changed, triggering operconf reconciliation 2025-08-13T19:56:27.677694760+00:00 stderr F I0813 19:56:27.677612 1 controller.go:220] "Starting workers" controller="signer-controller" worker count=1 2025-08-13T19:56:27.677762442+00:00 stderr F I0813 19:56:27.677703 1 controller.go:220] "Starting workers" controller="egress-router-controller" worker count=1 2025-08-13T19:56:27.677977208+00:00 stderr F I0813 19:56:27.677866 1 controller.go:220] "Starting workers" controller="pki-controller" worker count=1 2025-08-13T19:56:27.678407880+00:00 stderr F I0813 19:56:27.678174 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.683757 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.683940 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status 2025-08-13T19:56:27.685178653+00:00 stderr F 
I0813 19:56:27.684316 1 controller.go:220] "Starting workers" controller="clusterconfig-controller" worker count=1 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684541 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684572 1 controller.go:220] "Starting workers" controller="proxyconfig-controller" worker count=1 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684543 1 log.go:245] Reconciling Network.config.openshift.io cluster 2025-08-13T19:56:27.685178653+00:00 stderr F I0813 19:56:27.684871 1 controller.go:220] "Starting workers" controller="ingress-config-controller" worker count=1 2025-08-13T19:56:27.685271766+00:00 stderr F I0813 19:56:27.685248 1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default 2025-08-13T19:56:27.692572995+00:00 stderr F I0813 19:56:27.685387 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/kube-root-ca.crt' 2025-08-13T19:56:27.692572995+00:00 stderr F I0813 19:56:27.685465 1 controller.go:220] "Starting workers" controller="infrastructureconfig-controller" worker count=1 2025-08-13T19:56:27.692572995+00:00 stderr F I0813 19:56:27.685503 1 controller.go:220] "Starting workers" controller="operconfig-controller" worker count=1 2025-08-13T19:56:27.692572995+00:00 stderr F I0813 19:56:27.685712 1 log.go:245] Reconciling Network.operator.openshift.io cluster 2025-08-13T19:56:27.698346949+00:00 stderr F I0813 19:56:27.698189 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.701887971+00:00 stderr F I0813 19:56:27.700366 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:27.728995845+00:00 stderr F I0813 19:56:27.728910 1 log.go:245] configmap 
'openshift-config/kube-root-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.729443867+00:00 stderr F I0813 19:56:27.729417 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/registry-certs' 2025-08-13T19:56:27.743084397+00:00 stderr F I0813 19:56:27.742986 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T19:56:27.743084397+00:00 stderr F I0813 19:56:27.743033 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T19:56:27.749590293+00:00 stderr F I0813 19:56:27.749370 1 log.go:245] configmap 'openshift-config/registry-certs' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.751598850+00:00 stderr F I0813 19:56:27.749864 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-acks' 2025-08-13T19:56:27.796282896+00:00 stderr F I0813 19:56:27.793146 1 log.go:245] "network-node-identity-cert" in "openshift-network-node-identity" requires a new target cert/key pair: already expired 2025-08-13T19:56:27.814100495+00:00 stderr F I0813 19:56:27.808184 1 log.go:245] configmap 'openshift-config/admin-acks' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.814100495+00:00 stderr F I0813 19:56:27.808279 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/initial-kube-apiserver-server-ca' 2025-08-13T19:56:27.840895580+00:00 stderr F I0813 19:56:27.838241 1 log.go:245] configmap 'openshift-config/initial-kube-apiserver-server-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.840895580+00:00 stderr F I0813 19:56:27.838324 1 log.go:245] Reconciling 
additional trust bundle configmap 'openshift-config/etcd-metric-serving-ca' 2025-08-13T19:56:27.840895580+00:00 stderr F I0813 19:56:27.838474 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-08-13T19:56:27.840895580+00:00 stderr F I0813 19:56:27.838488 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status 2025-08-13T19:56:27.849897277+00:00 stderr F I0813 19:56:27.849302 1 log.go:245] configmap 'openshift-config/etcd-metric-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.849897277+00:00 stderr F I0813 19:56:27.849392 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-serving-ca' 2025-08-13T19:56:27.865566954+00:00 stderr F I0813 19:56:27.864192 1 log.go:245] configmap 'openshift-config/etcd-serving-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.865566954+00:00 stderr F I0813 19:56:27.864295 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-install-manifests' 2025-08-13T19:56:27.872170163+00:00 stderr F I0813 19:56:27.871893 1 log.go:245] configmap 'openshift-config/openshift-install-manifests' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:27.872170163+00:00 stderr F I0813 19:56:27.872007 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt' 2025-08-13T19:56:27.915571142+00:00 stderr F I0813 19:56:27.915478 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T19:56:27.917293291+00:00 stderr F I0813 19:56:27.917181 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:27.917293291+00:00 stderr F 
I0813 19:56:27.917230 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:27.979374604+00:00 stderr F I0813 19:56:27.979044 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T19:56:27.979374604+00:00 stderr F I0813 19:56:27.979088 1 log.go:245] Successfully updated Operator config from Cluster config 2025-08-13T19:56:27.984852501+00:00 stderr F I0813 19:56:27.980469 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:28.002926967+00:00 stderr F I0813 19:56:28.002764 1 log.go:245] Updated Secret/network-node-identity-cert -n openshift-network-node-identity because it changed 2025-08-13T19:56:28.002926967+00:00 stderr F I0813 19:56:28.002871 1 log.go:245] successful reconciliation 2025-08-13T19:56:28.027777056+00:00 stderr F I0813 19:56:28.023051 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:28.027777056+00:00 stderr F I0813 19:56:28.025626 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:56:28.028953530+00:00 stderr F I0813 19:56:28.028774 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped 2025-08-13T19:56:28.029014962+00:00 stderr F I0813 19:56:28.028982 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca' 2025-08-13T19:56:28.045195174+00:00 stderr F I0813 19:56:28.045140 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:28.045630416+00:00 stderr F I0813 19:56:28.045572 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: 
"2024-06-26T12:45:34Z" 2025-08-13T19:56:28.045630416+00:00 stderr F status: "False" 2025-08-13T19:56:28.045630416+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.045630416+00:00 stderr F status: "False" 2025-08-13T19:56:28.045630416+00:00 stderr F type: Degraded 2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.045630416+00:00 stderr F status: "True" 2025-08-13T19:56:28.045630416+00:00 stderr F type: Upgradeable 2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.045630416+00:00 stderr F message: |- 2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.045630416+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.045630416+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.045630416+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.045630416+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.045630416+00:00 stderr F status: "True" 
2025-08-13T19:56:28.045630416+00:00 stderr F type: Progressing 2025-08-13T19:56:28.045630416+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.045630416+00:00 stderr F status: "True" 2025-08-13T19:56:28.045630416+00:00 stderr F type: Available 2025-08-13T19:56:28.048961891+00:00 stderr F I0813 19:56:28.048120 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:28.078560756+00:00 stderr F I0813 19:56:28.078446 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T19:56:28.078560756+00:00 stderr F I0813 19:56:28.078494 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-08-13T19:56:28.109633104+00:00 stderr F I0813 19:56:28.109512 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:33Z" 2025-08-13T19:56:28.109633104+00:00 stderr F status: "False" 2025-08-13T19:56:28.109633104+00:00 stderr F type: Degraded 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.109633104+00:00 stderr F status: "True" 2025-08-13T19:56:28.109633104+00:00 stderr F type: Upgradeable 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.109633104+00:00 stderr F status: "False" 2025-08-13T19:56:28.109633104+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:56:28.109633104+00:00 stderr F message: |- 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 
stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:56:28.109633104+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:56:28.109633104+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:56:28.109633104+00:00 stderr F reason: Deploying 2025-08-13T19:56:28.109633104+00:00 stderr F status: "True" 2025-08-13T19:56:28.109633104+00:00 stderr F type: Progressing 2025-08-13T19:56:28.109633104+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:56:28.109633104+00:00 stderr F status: "True" 2025-08-13T19:56:28.109633104+00:00 stderr F type: Available 2025-08-13T19:56:28.169379430+00:00 stderr F I0813 19:56:28.168772 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:28.171271324+00:00 stderr F I0813 19:56:28.171043 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:56:28.171271324+00:00 stderr F status: "False" 2025-08-13T19:56:28.171271324+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:56:28.171271324+00:00 stderr F message: |- 2025-08-13T19:56:28.171271324+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last 
change 2024-06-27T13:34:15Z
2025-08-13T19:56:28.171271324+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:28.171271324+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:28.171271324+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:56:28.171271324+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" rollout is not making progress - last change 2024-06-27T13:34:19Z
2025-08-13T19:56:28.171271324+00:00 stderr F   reason: RolloutHung
2025-08-13T19:56:28.171271324+00:00 stderr F   status: "True"
2025-08-13T19:56:28.171271324+00:00 stderr F   type: Degraded
2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:28.171271324+00:00 stderr F   status: "True"
2025-08-13T19:56:28.171271324+00:00 stderr F   type: Upgradeable
2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:28.171271324+00:00 stderr F   message: |-
2025-08-13T19:56:28.171271324+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.171271324+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.171271324+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:28.171271324+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.171271324+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:28.171271324+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.171271324+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:28.171271324+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.171271324+00:00 stderr F   reason: Deploying
2025-08-13T19:56:28.171271324+00:00 stderr F   status: "True"
2025-08-13T19:56:28.171271324+00:00 stderr F   type: Progressing
2025-08-13T19:56:28.171271324+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:56:28.171271324+00:00 stderr F   status: "True"
2025-08-13T19:56:28.171271324+00:00 stderr F   type: Available
2025-08-13T19:56:28.182717181+00:00 stderr F I0813 19:56:28.182605 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:56:28.190129682+00:00 stderr F I0813 19:56:28.190008 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:56:28.215646811+00:00 stderr F I0813 19:56:28.215536 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:56:28.221927250+00:00 stderr F I0813 19:56:28.221689 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:56:28.221927250+00:00 stderr F   message: |-
2025-08-13T19:56:28.221927250+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:56:28.221927250+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:28.221927250+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:28.221927250+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:56:28.221927250+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" rollout is not making progress - last change 2024-06-27T13:34:19Z
2025-08-13T19:56:28.221927250+00:00 stderr F   reason: RolloutHung
2025-08-13T19:56:28.221927250+00:00 stderr F   status: "True"
2025-08-13T19:56:28.221927250+00:00 stderr F   type: Degraded
2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:28.221927250+00:00 stderr F   status: "True"
2025-08-13T19:56:28.221927250+00:00 stderr F   type: Upgradeable
2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:28.221927250+00:00 stderr F   status: "False"
2025-08-13T19:56:28.221927250+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:28.221927250+00:00 stderr F   message: |-
2025-08-13T19:56:28.221927250+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.221927250+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.221927250+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:28.221927250+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.221927250+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:28.221927250+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.221927250+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:28.221927250+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.221927250+00:00 stderr F   reason: Deploying
2025-08-13T19:56:28.221927250+00:00 stderr F   status: "True"
2025-08-13T19:56:28.221927250+00:00 stderr F   type: Progressing
2025-08-13T19:56:28.221927250+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:56:28.221927250+00:00 stderr F   status: "True"
2025-08-13T19:56:28.221927250+00:00 stderr F   type: Available
2025-08-13T19:56:28.226875732+00:00 stderr F I0813 19:56:28.226700 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:56:28.235752065+00:00 stderr F I0813 19:56:28.235649 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:56:28.251752272+00:00 stderr F I0813 19:56:28.251700 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:56:28.418311498+00:00 stderr F I0813 19:56:28.417090 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status
2025-08-13T19:56:28.418311498+00:00 stderr F I0813 19:56:28.417135 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status
2025-08-13T19:56:28.446930105+00:00 stderr F I0813 19:56:28.446671 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status
2025-08-13T19:56:28.446930105+00:00 stderr F I0813 19:56:28.446723 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status
2025-08-13T19:56:28.447300036+00:00 stderr F I0813 19:56:28.446986 1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T19:56:28.447300036+00:00 stderr F I0813 19:56:28.447064 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/etcd-ca-bundle'
2025-08-13T19:56:28.541006542+00:00 stderr F I0813 19:56:28.540908 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:56:28.541331291+00:00 stderr F I0813 19:56:28.541228 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:28.541331291+00:00 stderr F   status: "False"
2025-08-13T19:56:28.541331291+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:56:28.541331291+00:00 stderr F   message: |-
2025-08-13T19:56:28.541331291+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:56:28.541331291+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:28.541331291+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:28.541331291+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:56:28.541331291+00:00 stderr F   reason: RolloutHung
2025-08-13T19:56:28.541331291+00:00 stderr F   status: "True"
2025-08-13T19:56:28.541331291+00:00 stderr F   type: Degraded
2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:28.541331291+00:00 stderr F   status: "True"
2025-08-13T19:56:28.541331291+00:00 stderr F   type: Upgradeable
2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:28.541331291+00:00 stderr F   message: |-
2025-08-13T19:56:28.541331291+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.541331291+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.541331291+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:28.541331291+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.541331291+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:28.541331291+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.541331291+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:28.541331291+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.541331291+00:00 stderr F   reason: Deploying
2025-08-13T19:56:28.541331291+00:00 stderr F   status: "True"
2025-08-13T19:56:28.541331291+00:00 stderr F   type: Progressing
2025-08-13T19:56:28.541331291+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:56:28.541331291+00:00 stderr F   status: "True"
2025-08-13T19:56:28.541331291+00:00 stderr F   type: Available
2025-08-13T19:56:28.819877075+00:00 stderr F I0813 19:56:28.819654 1 log.go:245] configmap 'openshift-config/etcd-ca-bundle' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T19:56:28.819877075+00:00 stderr F I0813 19:56:28.819756 1 log.go:245] Reconciling proxy 'cluster'
2025-08-13T19:56:28.822445798+00:00 stderr F I0813 19:56:28.822344 1 log.go:245] httpProxy, httpsProxy and noProxy not defined for proxy 'cluster'; validation will be skipped
2025-08-13T19:56:28.896447592+00:00 stderr F I0813 19:56:28.896354 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:56:28.896447592+00:00 stderr F   message: |-
2025-08-13T19:56:28.896447592+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:56:28.896447592+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:28.896447592+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:28.896447592+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:56:28.896447592+00:00 stderr F   reason: RolloutHung
2025-08-13T19:56:28.896447592+00:00 stderr F   status: "True"
2025-08-13T19:56:28.896447592+00:00 stderr F   type: Degraded
2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:28.896447592+00:00 stderr F   status: "True"
2025-08-13T19:56:28.896447592+00:00 stderr F   type: Upgradeable
2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:28.896447592+00:00 stderr F   status: "False"
2025-08-13T19:56:28.896447592+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:28.896447592+00:00 stderr F   message: |-
2025-08-13T19:56:28.896447592+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.896447592+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.896447592+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:28.896447592+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.896447592+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:28.896447592+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.896447592+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:28.896447592+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:28.896447592+00:00 stderr F   reason: Deploying
2025-08-13T19:56:28.896447592+00:00 stderr F   status: "True"
2025-08-13T19:56:28.896447592+00:00 stderr F   type: Progressing
2025-08-13T19:56:28.896447592+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:56:28.896447592+00:00 stderr F   status: "True"
2025-08-13T19:56:28.896447592+00:00 stderr F   type: Available
2025-08-13T19:56:29.014293887+00:00 stderr F I0813 19:56:29.014225 1 log.go:245] Reconciling proxy 'cluster' complete
2025-08-13T19:56:29.259978901+00:00 stderr F I0813 19:56:29.259705 1 log.go:245] Reconciling configmap from openshift-kube-controller-manager/trusted-ca-bundle
2025-08-13T19:56:29.262194434+00:00 stderr F I0813 19:56:29.262113 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping
2025-08-13T19:56:29.568310526+00:00 stderr F I0813 19:56:29.568258 1 dashboard_controller.go:113] Reconcile dashboards
2025-08-13T19:56:29.573930976+00:00 stderr F I0813 19:56:29.573686 1 dashboard_controller.go:139] Applying dashboards manifests
2025-08-13T19:56:29.629145853+00:00 stderr F I0813 19:56:29.629050 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health
2025-08-13T19:56:29.717455055+00:00 stderr F I0813 19:56:29.717192 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful
2025-08-13T19:56:29.717455055+00:00 stderr F I0813 19:56:29.717251 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats
2025-08-13T19:56:29.725126934+00:00 stderr F I0813 19:56:29.724986 1 log.go:245] Reconciling Network.config.openshift.io cluster
2025-08-13T19:56:29.727114510+00:00 stderr F I0813 19:56:29.727056 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T19:56:29.731300240+00:00 stderr F I0813 19:56:29.730746 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful
2025-08-13T19:56:29.733667117+00:00 stderr F I0813 19:56:29.732921 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T19:56:29.737509137+00:00 stderr F I0813 19:56:29.737396 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T19:56:29.737509137+00:00 stderr F I0813 19:56:29.737431 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000b25200 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T19:56:29.863906976+00:00 stderr F I0813 19:56:29.861509 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/ovn
2025-08-13T19:56:29.869447885+00:00 stderr F I0813 19:56:29.868308 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:29.944962301+00:00 stderr F I0813 19:56:29.944068 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:56:29.963897282+00:00 stderr F I0813 19:56:29.962888 1 log.go:245] "ovn-cert" in "openshift-ovn-kubernetes" requires a new target cert/key pair: already expired
2025-08-13T19:56:30.021370463+00:00 stderr F I0813 19:56:30.021290 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout progressing; 0/1 scheduled; 0 available; generation 3 -> 3
2025-08-13T19:56:30.021404664+00:00 stderr F I0813 19:56:30.021394 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=true
2025-08-13T19:56:30.024986796+00:00 stderr F I0813 19:56:30.024932 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2
2025-08-13T19:56:30.024986796+00:00 stderr F W0813 19:56:30.024966 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing
2025-08-13T19:56:30.025103729+00:00 stderr F I0813 19:56:30.025059 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2
2025-08-13T19:56:30.025103729+00:00 stderr F W0813 19:56:30.025087 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing
2025-08-13T19:56:30.025313625+00:00 stderr F I0813 19:56:30.025270 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T19:56:30.257542217+00:00 stderr F I0813 19:56:30.257476 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T19:56:30.304158318+00:00 stderr F I0813 19:56:30.303717 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T19:56:30.304158318+00:00 stderr F I0813 19:56:30.303893 1 log.go:245] Successfully updated Operator config from Cluster config
2025-08-13T19:56:30.305343021+00:00 stderr F I0813 19:56:30.305227 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:56:30.407072026+00:00 stderr F I0813 19:56:30.406632 1 log.go:245] Updated Secret/ovn-cert -n openshift-ovn-kubernetes because it changed
2025-08-13T19:56:30.407072026+00:00 stderr F I0813 19:56:30.406702 1 log.go:245] successful reconciliation
2025-08-13T19:56:30.414902470+00:00 stderr F I0813 19:56:30.414745 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T19:56:30.433026738+00:00 stderr F I0813 19:56:30.432771 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:56:30.434142459+00:00 stderr F E0813 19:56:30.434050 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="3ac04967-f509-46f0-a407-9eb8ddb74597"
2025-08-13T19:56:30.440151681+00:00 stderr F I0813 19:56:30.439985 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T19:56:30.705619982+00:00 stderr F I0813 19:56:30.705474 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:30.705619982+00:00 stderr F   status: "False"
2025-08-13T19:56:30.705619982+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:56:30.705619982+00:00 stderr F   message: |-
2025-08-13T19:56:30.705619982+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:56:30.705619982+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:30.705619982+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:30.705619982+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:56:30.705619982+00:00 stderr F   reason: RolloutHung
2025-08-13T19:56:30.705619982+00:00 stderr F   status: "True"
2025-08-13T19:56:30.705619982+00:00 stderr F   type: Degraded
2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:30.705619982+00:00 stderr F   status: "True"
2025-08-13T19:56:30.705619982+00:00 stderr F   type: Upgradeable
2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:30.705619982+00:00 stderr F   message: |-
2025-08-13T19:56:30.705619982+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:30.705619982+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:30.705619982+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:30.705619982+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:30.705619982+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:30.705619982+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:30.705619982+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:30.705619982+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:30.705619982+00:00 stderr F   reason: Deploying
2025-08-13T19:56:30.705619982+00:00 stderr F   status: "True"
2025-08-13T19:56:30.705619982+00:00 stderr F   type: Progressing
2025-08-13T19:56:30.705619982+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:56:30.705619982+00:00 stderr F   status: "True"
2025-08-13T19:56:30.705619982+00:00 stderr F   type: Available
2025-08-13T19:56:30.706326832+00:00 stderr F I0813 19:56:30.706243 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:56:31.279067037+00:00 stderr F I0813 19:56:31.278972 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:56:31.279067037+00:00 stderr F   message: |-
2025-08-13T19:56:31.279067037+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:56:31.279067037+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:31.279067037+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:31.279067037+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:56:31.279067037+00:00 stderr F   reason: RolloutHung
2025-08-13T19:56:31.279067037+00:00 stderr F   status: "True"
2025-08-13T19:56:31.279067037+00:00 stderr F   type: Degraded
2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:31.279067037+00:00 stderr F   status: "True"
2025-08-13T19:56:31.279067037+00:00 stderr F   type: Upgradeable
2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:31.279067037+00:00 stderr F   status: "False"
2025-08-13T19:56:31.279067037+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:31.279067037+00:00 stderr F   message: |-
2025-08-13T19:56:31.279067037+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:31.279067037+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.279067037+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:31.279067037+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.279067037+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.279067037+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.279067037+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:31.279067037+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.279067037+00:00 stderr F   reason: Deploying
2025-08-13T19:56:31.279067037+00:00 stderr F   status: "True"
2025-08-13T19:56:31.279067037+00:00 stderr F   type: Progressing
2025-08-13T19:56:31.279067037+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:56:31.279067037+00:00 stderr F   status: "True"
2025-08-13T19:56:31.279067037+00:00 stderr F   type: Available
2025-08-13T19:56:31.307715565+00:00 stderr F I0813 19:56:31.307639 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:31.307715565+00:00 stderr F   status: "False"
2025-08-13T19:56:31.307715565+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:56:31.307715565+00:00 stderr F   message: |-
2025-08-13T19:56:31.307715565+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:31.307715565+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:56:31.307715565+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:56:31.307715565+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:31.307715565+00:00 stderr F   reason: RolloutHung
2025-08-13T19:56:31.307715565+00:00 stderr F   status: "True"
2025-08-13T19:56:31.307715565+00:00 stderr F   type: Degraded
2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:31.307715565+00:00 stderr F   status: "True"
2025-08-13T19:56:31.307715565+00:00 stderr F   type: Upgradeable
2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:31.307715565+00:00 stderr F   message: |-
2025-08-13T19:56:31.307715565+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:31.307715565+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.307715565+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:31.307715565+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.307715565+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.307715565+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.307715565+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:31.307715565+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:31.307715565+00:00 stderr F   reason: Deploying
2025-08-13T19:56:31.307715565+00:00 stderr F   status: "True"
2025-08-13T19:56:31.307715565+00:00 stderr F   type: Progressing
2025-08-13T19:56:31.307715565+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:56:31.307715565+00:00 stderr F   status: "True"
2025-08-13T19:56:31.307715565+00:00 stderr F   type: Available
2025-08-13T19:56:31.308573909+00:00 stderr F I0813 19:56:31.308483 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:56:31.870004360+00:00 stderr F I0813 19:56:31.869375 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T19:56:31.871996847+00:00 stderr F I0813 19:56:31.871895 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T19:56:31.874093797+00:00 stderr F I0813 19:56:31.873954 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T19:56:31.874184540+00:00 stderr F I0813 19:56:31.874125 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc003cc6380 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T19:56:31.880532731+00:00 stderr F I0813 19:56:31.880410 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout progressing; 0/1 scheduled; 0 available; generation 3 -> 3
2025-08-13T19:56:31.880532731+00:00 stderr F I0813 19:56:31.880446 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=true
2025-08-13T19:56:31.886454200+00:00 stderr F I0813 19:56:31.886341 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2
2025-08-13T19:56:31.886454200+00:00 stderr F W0813 19:56:31.886377 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing
2025-08-13T19:56:31.886454200+00:00 stderr F I0813 19:56:31.886385 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2
2025-08-13T19:56:31.886454200+00:00 stderr F W0813 19:56:31.886390 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing
2025-08-13T19:56:31.886454200+00:00 stderr F I0813 19:56:31.886414 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T19:56:32.073586814+00:00 stderr F I0813 19:56:32.073229 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:56:32.073586814+00:00 stderr F   message: |-
2025-08-13T19:56:32.073586814+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:32.073586814+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:56:32.073586814+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:56:32.073586814+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:56:32.073586814+00:00 stderr F   reason: RolloutHung
2025-08-13T19:56:32.073586814+00:00 stderr F   status: "True"
2025-08-13T19:56:32.073586814+00:00 stderr F   type: Degraded
2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:32.073586814+00:00 stderr F   status: "True"
2025-08-13T19:56:32.073586814+00:00 stderr F   type: Upgradeable
2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:56:32.073586814+00:00 stderr F   status: "False"
2025-08-13T19:56:32.073586814+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:56:32.073586814+00:00 stderr F   message: |-
2025-08-13T19:56:32.073586814+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:56:32.073586814+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:56:32.073586814+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:56:32.073586814+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:56:32.073586814+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:56:32.073586814+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
2025-08-13T19:56:32.073586814+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:56:32.073586814+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:56:32.073586814+00:00 stderr F   reason: Deploying
2025-08-13T19:56:32.073586814+00:00 stderr F   status: "True"
2025-08-13T19:56:32.073586814+00:00 stderr F   type: Progressing
2025-08-13T19:56:32.073586814+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:56:32.073586814+00:00 stderr F   status: "True"
2025-08-13T19:56:32.073586814+00:00 stderr F   type: Available
2025-08-13T19:56:32.215632320+00:00 stderr F I0813 19:56:32.215466 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T19:56:32.231190424+00:00 stderr F I0813 19:56:32.231059 1 log.go:245] Failed to update the operator configuration: could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T19:56:32.231261466+00:00 stderr F E0813 19:56:32.231196 1 controller.go:329] "Reconciler error" err="could not apply (/, Kind=) /cluster, err: failed to apply / update (operator.openshift.io/v1, Kind=Network) /cluster: Operation cannot be fulfilled on networks.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" controller="operconfig-controller" object="cluster" namespace="" name="cluster" reconcileID="e3de9d8e-7ea9-4fab-b6a7-656f1b43f054"
2025-08-13T19:56:32.241853199+00:00 stderr F I0813 19:56:32.241740 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T19:56:32.461214022+00:00 stderr F I0813 19:56:32.460764 1 log.go:245] Reconciling configmap from openshift-machine-api/mao-trusted-ca
2025-08-13T19:56:32.463244990+00:00 stderr F I0813 19:56:32.463203 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping
2025-08-13T19:56:32.669238543+00:00 stderr F I0813 19:56:32.669132 1 dashboard_controller.go:113] Reconcile dashboards
2025-08-13T19:56:32.674038810+00:00 stderr F I0813 19:56:32.673902 1 dashboard_controller.go:139] Applying dashboards manifests
2025-08-13T19:56:32.683527121+00:00 stderr F I0813 19:56:32.683422 1 log.go:245]
reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health 2025-08-13T19:56:32.699001412+00:00 stderr F I0813 19:56:32.698891 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-ovn-health was successful 2025-08-13T19:56:32.699001412+00:00 stderr F I0813 19:56:32.698936 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats 2025-08-13T19:56:32.710297945+00:00 stderr F I0813 19:56:32.710190 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/grafana-dashboard-network-stats was successful 2025-08-13T19:56:33.461166035+00:00 stderr F I0813 19:56:33.461019 1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-ovn-kubernetes/signer 2025-08-13T19:56:33.466487867+00:00 stderr F I0813 19:56:33.466357 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:33.468734431+00:00 stderr F I0813 19:56:33.468632 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:56:33.563471056+00:00 stderr F I0813 19:56:33.563150 1 log.go:245] "signer-cert" in "openshift-ovn-kubernetes" requires a new target cert/key pair: already expired 2025-08-13T19:56:33.664681716+00:00 stderr F I0813 19:56:33.664587 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are [] 2025-08-13T19:56:33.667218929+00:00 stderr F I0813 19:56:33.667162 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are [] 2025-08-13T19:56:33.670093191+00:00 stderr F I0813 19:56:33.670062 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are [] 2025-08-13T19:56:33.670153243+00:00 stderr F I0813 19:56:33.670127 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: 
HyperShiftConfig:0xc000abf900 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:} 2025-08-13T19:56:33.675535246+00:00 stderr F I0813 19:56:33.675435 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout progressing; 0/1 scheduled; 0 available; generation 3 -> 3 2025-08-13T19:56:33.675535246+00:00 stderr F I0813 19:56:33.675515 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=true 2025-08-13T19:56:33.678138111+00:00 stderr F I0813 19:56:33.678080 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2 2025-08-13T19:56:33.678138111+00:00 stderr F W0813 19:56:33.678122 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing 2025-08-13T19:56:33.678138111+00:00 stderr F I0813 19:56:33.678131 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 2 -> 2 2025-08-13T19:56:33.678163041+00:00 stderr F W0813 19:56:33.678138 1 ovn_kubernetes.go:1641] daemonset openshift-ovn-kubernetes/ovnkube-node rollout seems to have hung with 0/1 behind, force-continuing 2025-08-13T19:56:33.678172702+00:00 stderr F I0813 19:56:33.678163 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false 2025-08-13T19:56:33.702260569+00:00 stderr F I0813 19:56:33.702159 1 log.go:245] Updated Secret/signer-cert -n openshift-ovn-kubernetes because it changed 2025-08-13T19:56:33.702260569+00:00 stderr F I0813 19:56:33.702186 1 log.go:245] successful reconciliation 2025-08-13T19:56:33.859683725+00:00 stderr F I0813 19:56:33.859617 1 
log.go:245] Reconciling configmap from openshift-console/trusted-ca-bundle 2025-08-13T19:56:33.862468614+00:00 stderr F I0813 19:56:33.862444 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:34.017071259+00:00 stderr F I0813 19:56:34.017011 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster 2025-08-13T19:56:34.036715309+00:00 stderr F I0813 19:56:34.036622 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful 2025-08-13T19:56:34.036914765+00:00 stderr F I0813 19:56:34.036872 1 log.go:245] Starting render phase 2025-08-13T19:56:34.037773970+00:00 stderr F I0813 19:56:34.037655 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:56:34.068159047+00:00 stderr F I0813 19:56:34.068024 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T19:56:34.106548934+00:00 stderr F I0813 19:56:34.106431 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T19:56:34.106548934+00:00 stderr F I0813 19:56:34.106484 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T19:56:34.106548934+00:00 stderr F I0813 19:56:34.106516 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T19:56:34.106600245+00:00 stderr F I0813 19:56:34.106554 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T19:56:34.218480220+00:00 stderr F I0813 19:56:34.218361 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout progressing; 1/1 scheduled; 1 unavailable; 0 available; generation 1 -> 1 2025-08-13T19:56:34.421917649+00:00 stderr F I0813 19:56:34.421720 1 node_identity.go:204] network-node-identity 
webhook will not be applied, the deployment/daemonset is not ready 2025-08-13T19:56:34.421917649+00:00 stderr F I0813 19:56:34.421856 1 node_identity.go:208] network-node-identity webhook will not be applied, if it already exists it won't be removed 2025-08-13T19:56:34.426493139+00:00 stderr F I0813 19:56:34.426420 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T19:56:34.461098758+00:00 stderr F I0813 19:56:34.460764 1 log.go:245] Reconciling configmap from openshift-image-registry/trusted-ca 2025-08-13T19:56:34.464623398+00:00 stderr F I0813 19:56:34.464156 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:34.862418207+00:00 stderr F I0813 19:56:34.862291 1 log.go:245] Reconciling configmap from openshift-authentication-operator/trusted-ca-bundle 2025-08-13T19:56:34.863104167+00:00 stderr F I0813 19:56:34.863025 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T19:56:34.864596619+00:00 stderr F I0813 19:56:34.864483 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:34.870510848+00:00 stderr F I0813 19:56:34.870429 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T19:56:34.870543839+00:00 stderr F I0813 19:56:34.870507 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T19:56:34.880596586+00:00 stderr F I0813 19:56:34.880486 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T19:56:34.880596586+00:00 stderr F I0813 19:56:34.880545 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T19:56:34.891750095+00:00 stderr F 
I0813 19:56:34.891663 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T19:56:34.892159396+00:00 stderr F I0813 19:56:34.892081 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T19:56:34.902209913+00:00 stderr F I0813 19:56:34.902054 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T19:56:34.902209913+00:00 stderr F I0813 19:56:34.902128 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T19:56:34.910589693+00:00 stderr F I0813 19:56:34.910560 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T19:56:34.910863740+00:00 stderr F I0813 19:56:34.910730 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T19:56:34.919874608+00:00 stderr F I0813 19:56:34.919635 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T19:56:34.919874608+00:00 stderr F I0813 19:56:34.919709 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T19:56:34.926683742+00:00 stderr F I0813 19:56:34.926177 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T19:56:34.927102754+00:00 stderr F I0813 19:56:34.927021 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T19:56:34.934686861+00:00 stderr F I0813 19:56:34.934559 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T19:56:34.934686861+00:00 stderr F I0813 19:56:34.934619 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, 
Kind=ClusterRoleBinding) /multus-transient 2025-08-13T19:56:34.940050964+00:00 stderr F I0813 19:56:34.939867 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T19:56:34.940050964+00:00 stderr F I0813 19:56:34.939909 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T19:56:34.944758798+00:00 stderr F I0813 19:56:34.944607 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T19:56:34.944758798+00:00 stderr F I0813 19:56:34.944675 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T19:56:35.060988547+00:00 stderr F I0813 19:56:35.060754 1 log.go:245] Reconciling configmap from openshift-authentication/v4-0-config-system-trusted-ca-bundle 2025-08-13T19:56:35.063339344+00:00 stderr F I0813 19:56:35.063294 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:35.069140020+00:00 stderr F I0813 19:56:35.068999 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T19:56:35.069140020+00:00 stderr F I0813 19:56:35.069061 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T19:56:35.269428489+00:00 stderr F I0813 19:56:35.269371 1 log.go:245] Reconciling configmap from openshift-controller-manager/openshift-global-ca 2025-08-13T19:56:35.273852706+00:00 stderr F I0813 19:56:35.273718 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T19:56:35.273878236+00:00 stderr F I0813 19:56:35.273861 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 
2025-08-13T19:56:35.274124703+00:00 stderr F I0813 19:56:35.274103 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:35.463171062+00:00 stderr F I0813 19:56:35.463013 1 log.go:245] Reconciling configmap from openshift-kube-apiserver/trusted-ca-bundle 2025-08-13T19:56:35.466748384+00:00 stderr F I0813 19:56:35.466671 1 log.go:245] ConfigMap openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:35.473264810+00:00 stderr F I0813 19:56:35.473220 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T19:56:35.473337272+00:00 stderr F I0813 19:56:35.473284 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T19:56:35.661679340+00:00 stderr F I0813 19:56:35.661587 1 log.go:245] Reconciling configmap from openshift-marketplace/marketplace-trusted-ca 2025-08-13T19:56:35.666281531+00:00 stderr F I0813 19:56:35.666181 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:35.670559483+00:00 stderr F I0813 19:56:35.669432 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T19:56:35.670559483+00:00 stderr F I0813 19:56:35.669496 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T19:56:35.863934805+00:00 stderr F I0813 19:56:35.863675 1 log.go:245] Reconciling configmap from openshift-apiserver-operator/trusted-ca-bundle 2025-08-13T19:56:35.868876786+00:00 stderr F I0813 19:56:35.866928 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:35.877013529+00:00 stderr F I0813 19:56:35.873385 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T19:56:35.877013529+00:00 stderr F I0813 19:56:35.873492 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T19:56:36.114440068+00:00 stderr F I0813 19:56:36.107927 1 log.go:245] Reconciling configmap from openshift-apiserver/trusted-ca-bundle 2025-08-13T19:56:36.114440068+00:00 stderr F I0813 19:56:36.112344 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.115323574+00:00 stderr F I0813 19:56:36.115043 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T19:56:36.115475538+00:00 stderr F I0813 19:56:36.115455 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T19:56:36.260204401+00:00 stderr F I0813 19:56:36.260087 1 log.go:245] Reconciling configmap from openshift-config-managed/trusted-ca-bundle 2025-08-13T19:56:36.262636010+00:00 stderr F I0813 19:56:36.262575 1 log.go:245] trusted-ca-bundle changed, updating 12 configMaps 2025-08-13T19:56:36.262660061+00:00 stderr F I0813 19:56:36.262630 1 log.go:245] ConfigMap openshift-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.262660061+00:00 stderr F I0813 19:56:36.262656 1 log.go:245] ConfigMap openshift-authentication-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.262716893+00:00 stderr F I0813 19:56:36.262679 1 log.go:245] ConfigMap openshift-authentication/v4-0-config-system-trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.262870197+00:00 stderr F I0813 19:56:36.262721 1 log.go:245] ConfigMap openshift-controller-manager/openshift-global-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.262870197+00:00 stderr F I0813 19:56:36.262743 1 log.go:245] ConfigMap 
openshift-kube-apiserver/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.262870197+00:00 stderr F I0813 19:56:36.262765 1 log.go:245] ConfigMap openshift-marketplace/marketplace-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.262870197+00:00 stderr F I0813 19:56:36.262859 1 log.go:245] ConfigMap openshift-apiserver-operator/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.263059382+00:00 stderr F I0813 19:56:36.262890 1 log.go:245] ConfigMap openshift-image-registry/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.263074453+00:00 stderr F I0813 19:56:36.263057 1 log.go:245] ConfigMap openshift-ingress-operator/trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.263221087+00:00 stderr F I0813 19:56:36.263138 1 log.go:245] ConfigMap openshift-kube-controller-manager/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.263367011+00:00 stderr F I0813 19:56:36.263260 1 log.go:245] ConfigMap openshift-machine-api/mao-trusted-ca ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.263367011+00:00 stderr F I0813 19:56:36.263308 1 log.go:245] ConfigMap openshift-console/trusted-ca-bundle ca-bundle.crt unchanged, skipping 2025-08-13T19:56:36.270019731+00:00 stderr F I0813 19:56:36.269948 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T19:56:36.270156515+00:00 stderr F I0813 19:56:36.270141 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T19:56:36.468757485+00:00 stderr F I0813 19:56:36.468668 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T19:56:36.468907739+00:00 stderr F I0813 19:56:36.468753 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T19:56:36.669737104+00:00 stderr F I0813 
19:56:36.669618 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T19:56:36.669737104+00:00 stderr F I0813 19:56:36.669696 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T19:56:36.870671062+00:00 stderr F I0813 19:56:36.870397 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T19:56:36.870671062+00:00 stderr F I0813 19:56:36.870476 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T19:56:37.070240691+00:00 stderr F I0813 19:56:37.070104 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T19:56:37.070585640+00:00 stderr F I0813 19:56:37.070449 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T19:56:37.282758169+00:00 stderr F I0813 19:56:37.282586 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T19:56:37.282758169+00:00 stderr F I0813 19:56:37.282706 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T19:56:37.486565189+00:00 stderr F I0813 19:56:37.486446 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T19:56:37.486565189+00:00 stderr F I0813 19:56:37.486513 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T19:56:37.671652084+00:00 stderr F I0813 19:56:37.671551 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T19:56:37.671652084+00:00 stderr F I0813 19:56:37.671620 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T19:56:37.870397229+00:00 stderr F I0813 
19:56:37.870274 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T19:56:37.870397229+00:00 stderr F I0813 19:56:37.870350 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T19:56:38.070926605+00:00 stderr F I0813 19:56:38.070534 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T19:56:38.070926605+00:00 stderr F I0813 19:56:38.070592 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T19:56:38.280464439+00:00 stderr F I0813 19:56:38.280129 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T19:56:38.280464439+00:00 stderr F I0813 19:56:38.280180 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T19:56:38.473032357+00:00 stderr F I0813 19:56:38.472924 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T19:56:38.473032357+00:00 stderr F I0813 19:56:38.473016 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T19:56:38.672108052+00:00 stderr F I0813 19:56:38.671995 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T19:56:38.672211555+00:00 stderr F I0813 19:56:38.672197 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T19:56:38.870135517+00:00 stderr F I0813 19:56:38.869996 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T19:56:38.870135517+00:00 stderr F I0813 19:56:38.870081 1 log.go:245] reconciling 
(rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T19:56:39.069669964+00:00 stderr F I0813 19:56:39.069522 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T19:56:39.069669964+00:00 stderr F I0813 19:56:39.069628 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T19:56:39.270930261+00:00 stderr F I0813 19:56:39.270701 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T19:56:39.270930261+00:00 stderr F I0813 19:56:39.270901 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T19:56:39.469740118+00:00 stderr F I0813 19:56:39.469665 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T19:56:39.469952214+00:00 stderr F I0813 19:56:39.469936 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T19:56:39.669101311+00:00 stderr F I0813 19:56:39.668988 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T19:56:39.669101311+00:00 stderr F I0813 19:56:39.669087 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T19:56:39.869489454+00:00 stderr F I0813 19:56:39.869368 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T19:56:39.869489454+00:00 stderr F I0813 19:56:39.869457 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T19:56:40.080668163+00:00 stderr F I0813 19:56:40.079891 1 log.go:245] Apply / Create of 
(admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T19:56:40.080668163+00:00 stderr F I0813 19:56:40.079990 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T19:56:40.274378724+00:00 stderr F I0813 19:56:40.274266 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T19:56:40.274378724+00:00 stderr F I0813 19:56:40.274335 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T19:56:40.470202326+00:00 stderr F I0813 19:56:40.470092 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T19:56:40.470202326+00:00 stderr F I0813 19:56:40.470165 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T19:56:40.669492477+00:00 stderr F I0813 19:56:40.669386 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T19:56:40.669492477+00:00 stderr F I0813 19:56:40.669450 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T19:56:40.872887145+00:00 stderr F I0813 19:56:40.871054 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T19:56:40.872887145+00:00 stderr F I0813 19:56:40.871135 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T19:56:41.072220557+00:00 stderr F I0813 19:56:41.072099 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 
2025-08-13T19:56:41.072279518+00:00 stderr F I0813 19:56:41.072217 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-08-13T19:56:41.271660712+00:00 stderr F I0813 19:56:41.271529 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-08-13T19:56:41.271660712+00:00 stderr F I0813 19:56:41.271631 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-08-13T19:56:41.477445898+00:00 stderr F I0813 19:56:41.477261 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-08-13T19:56:41.477445898+00:00 stderr F I0813 19:56:41.477350 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-08-13T19:56:41.689719429+00:00 stderr F I0813 19:56:41.689618 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-08-13T19:56:41.689719429+00:00 stderr F I0813 19:56:41.689703 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-08-13T19:56:41.875882805+00:00 stderr F I0813 19:56:41.875677 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T19:56:41.875882805+00:00 stderr F I0813 19:56:41.875862 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-08-13T19:56:42.083596536+00:00 stderr F I0813 19:56:42.083236 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-08-13T19:56:42.083596536+00:00 stderr F I0813 19:56:42.083547 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-08-13T19:56:42.289346682+00:00 stderr F I0813 19:56:42.289193 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T19:56:42.289346682+00:00 stderr F I0813 19:56:42.289326 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T19:56:42.505345469+00:00 stderr F I0813 19:56:42.505191 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T19:56:42.505345469+00:00 stderr F I0813 19:56:42.505331 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T19:56:42.726707130+00:00 stderr F I0813 19:56:42.726650 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T19:56:42.726974988+00:00 stderr F I0813 19:56:42.726959 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T19:56:42.869456647+00:00 stderr F I0813 19:56:42.869380 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T19:56:42.869582880+00:00 stderr F I0813 19:56:42.869565 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T19:56:43.070341413+00:00 stderr F I0813 19:56:43.070201 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T19:56:43.070341413+00:00 stderr F I0813 19:56:43.070278 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T19:56:43.270219901+00:00 stderr F I0813 19:56:43.270102 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T19:56:43.270219901+00:00 stderr F I0813 19:56:43.270191 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T19:56:43.473703151+00:00 stderr F I0813 19:56:43.473570 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T19:56:43.473703151+00:00 stderr F I0813 19:56:43.473654 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T19:56:43.671879419+00:00 stderr F I0813 19:56:43.671643 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T19:56:43.671879419+00:00 stderr F I0813 19:56:43.671717 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T19:56:43.870044978+00:00 stderr F I0813 19:56:43.869963 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T19:56:43.870044978+00:00 stderr F I0813 19:56:43.870036 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T19:56:44.071046257+00:00 stderr F I0813 19:56:44.070938 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T19:56:44.071046257+00:00 stderr F I0813 19:56:44.071025 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T19:56:44.270071800+00:00 stderr F I0813 19:56:44.269958 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T19:56:44.270071800+00:00 stderr F I0813 19:56:44.270023 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T19:56:44.470025050+00:00 stderr F I0813 19:56:44.469913 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T19:56:44.470025050+00:00 stderr F I0813 19:56:44.469980 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T19:56:44.669212908+00:00 stderr F I0813 19:56:44.669080 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T19:56:44.669212908+00:00 stderr F I0813 19:56:44.669163 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T19:56:44.868872549+00:00 stderr F I0813 19:56:44.868710 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T19:56:44.868993452+00:00 stderr F I0813 19:56:44.868978 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T19:56:45.069440356+00:00 stderr F I0813 19:56:45.069285 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T19:56:45.069440356+00:00 stderr F I0813 19:56:45.069366 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T19:56:45.274258124+00:00 stderr F I0813 19:56:45.273359 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T19:56:45.274258124+00:00 stderr F I0813 19:56:45.273617 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T19:56:45.474058959+00:00 stderr F I0813 19:56:45.473945 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T19:56:45.474058959+00:00 stderr F I0813 19:56:45.474024 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T19:56:45.672082224+00:00 stderr F I0813 19:56:45.671497 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T19:56:45.672082224+00:00 stderr F I0813 19:56:45.671605 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T19:56:45.872153397+00:00 stderr F I0813 19:56:45.872041 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T19:56:45.872153397+00:00 stderr F I0813 19:56:45.872110 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T19:56:46.071185660+00:00 stderr F I0813 19:56:46.071082 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T19:56:46.071185660+00:00 stderr F I0813 19:56:46.071169 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T19:56:46.272108338+00:00 stderr F I0813 19:56:46.271639 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T19:56:46.272108338+00:00 stderr F I0813 19:56:46.271731 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T19:56:46.475244508+00:00 stderr F I0813 19:56:46.475168 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T19:56:46.475364051+00:00 stderr F I0813 19:56:46.475347 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T19:56:46.672464330+00:00 stderr F I0813 19:56:46.672407 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T19:56:46.672563062+00:00 stderr F I0813 19:56:46.672549 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T19:56:46.870759002+00:00 stderr F I0813 19:56:46.870673 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T19:56:46.871024059+00:00 stderr F I0813 19:56:46.871000 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T19:56:47.070729842+00:00 stderr F I0813 19:56:47.070569 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T19:56:47.070769013+00:00 stderr F I0813 19:56:47.070732 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T19:56:47.272746319+00:00 stderr F I0813 19:56:47.272608 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T19:56:47.272746319+00:00 stderr F I0813 19:56:47.272689 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T19:56:47.472268006+00:00 stderr F I0813 19:56:47.472172 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T19:56:47.472268006+00:00 stderr F I0813 19:56:47.472243 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T19:56:47.669651062+00:00 stderr F I0813 19:56:47.669352 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T19:56:47.669651062+00:00 stderr F I0813 19:56:47.669530 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T19:56:47.869551281+00:00 stderr F I0813 19:56:47.869456 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T19:56:47.869551281+00:00 stderr F I0813 19:56:47.869518 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T19:56:48.070491658+00:00 stderr F I0813 19:56:48.070388 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T19:56:48.070491658+00:00 stderr F I0813 19:56:48.070460 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T19:56:48.270906480+00:00 stderr F I0813 19:56:48.270569 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T19:56:48.270906480+00:00 stderr F I0813 19:56:48.270648 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T19:56:48.471754555+00:00 stderr F I0813 19:56:48.471633 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T19:56:48.471754555+00:00 stderr F I0813 19:56:48.471707 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T19:56:48.679738444+00:00 stderr F I0813 19:56:48.679667 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T19:56:48.680020982+00:00 stderr F I0813 19:56:48.679998 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T19:56:48.927119287+00:00 stderr F I0813 19:56:48.919690 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T19:56:48.927631082+00:00 stderr F I0813 19:56:48.927557 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T19:56:49.053428524+00:00 stderr F I0813 19:56:49.053356 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:56:49.067288300+00:00 stderr F I0813 19:56:49.067134 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:56:49.069858223+00:00 stderr F I0813 19:56:49.069697 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T19:56:49.070064309+00:00 stderr F I0813 19:56:49.069941 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T19:56:49.075114563+00:00 stderr F I0813 19:56:49.075070 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:56:49.091433090+00:00 stderr F I0813 19:56:49.091319 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:56:49.098720678+00:00 stderr F I0813 19:56:49.098614 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:56:49.108031803+00:00 stderr F I0813 19:56:49.106716 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:56:49.269661349+00:00 stderr F I0813 19:56:49.269561 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T19:56:49.269661349+00:00 stderr F I0813 19:56:49.269630 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T19:56:49.471665687+00:00 stderr F I0813 19:56:49.471548 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T19:56:49.471665687+00:00 stderr F I0813 19:56:49.471642 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T19:56:49.669349102+00:00 stderr F I0813 19:56:49.669279 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T19:56:49.669378623+00:00 stderr F I0813 19:56:49.669344 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T19:56:49.871717901+00:00 stderr F I0813 19:56:49.871571 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T19:56:49.871717901+00:00 stderr F I0813 19:56:49.871636 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T19:56:50.069382135+00:00 stderr F I0813 19:56:50.069261 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T19:56:50.069382135+00:00 stderr F I0813 19:56:50.069333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T19:56:50.294058591+00:00 stderr F I0813 19:56:50.293963 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T19:56:50.294058591+00:00 stderr F I0813 19:56:50.294035 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T19:56:50.474174344+00:00 stderr F I0813 19:56:50.474124 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T19:56:50.474286327+00:00 stderr F I0813 19:56:50.474266 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T19:56:50.678359292+00:00 stderr F I0813 19:56:50.678236 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T19:56:50.678404604+00:00 stderr F I0813 19:56:50.678369 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T19:56:50.876748957+00:00 stderr F I0813 19:56:50.876645 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T19:56:50.876748957+00:00 stderr F I0813 19:56:50.876715 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T19:56:51.084106038+00:00 stderr F I0813 19:56:51.083989 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T19:56:51.084106038+00:00 stderr F I0813 19:56:51.084060 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T19:56:51.268857194+00:00 stderr F I0813 19:56:51.268668 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T19:56:51.268857194+00:00 stderr F I0813 19:56:51.268735 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T19:56:51.476582676+00:00 stderr F I0813 19:56:51.476441 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T19:56:51.476582676+00:00 stderr F I0813 19:56:51.476509 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T19:56:51.670874374+00:00 stderr F I0813 19:56:51.670615 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T19:56:51.670874374+00:00 stderr F I0813 19:56:51.670759 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T19:56:51.869480295+00:00 stderr F I0813 19:56:51.869386 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T19:56:51.869480295+00:00 stderr F I0813 19:56:51.869456 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T19:56:52.072013618+00:00 stderr F I0813 19:56:52.071894 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T19:56:52.072013618+00:00 stderr F I0813 19:56:52.071968 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T19:56:52.272030859+00:00 stderr F I0813 19:56:52.271918 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T19:56:52.272030859+00:00 stderr F I0813 19:56:52.271999 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T19:56:52.469568890+00:00 stderr F I0813 19:56:52.469439 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T19:56:52.469568890+00:00 stderr F I0813 19:56:52.469506 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T19:56:52.669634632+00:00 stderr F I0813 19:56:52.669507 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T19:56:52.669634632+00:00 stderr F I0813 19:56:52.669602 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T19:56:52.870676873+00:00 stderr F I0813 19:56:52.870269 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T19:56:52.870676873+00:00 stderr F I0813 19:56:52.870349 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T19:56:53.071203919+00:00 stderr F I0813 19:56:53.070908 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T19:56:53.071203919+00:00 stderr F I0813 19:56:53.070992 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T19:56:53.272353583+00:00 stderr F I0813 19:56:53.272206 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T19:56:53.272353583+00:00 stderr F I0813 19:56:53.272315 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T19:56:53.470096510+00:00 stderr F I0813 19:56:53.469939 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T19:56:53.470096510+00:00 stderr F I0813 19:56:53.470021 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T19:56:53.670002818+00:00 stderr F I0813 19:56:53.669895 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T19:56:53.670002818+00:00 stderr F I0813 19:56:53.669958 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T19:56:53.853625041+00:00 stderr F E0813 19:56:53.853468 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:56:53.872491880+00:00 stderr F I0813 19:56:53.872351 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T19:56:53.872491880+00:00 stderr F I0813 19:56:53.872427 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T19:56:53.872585833+00:00 stderr F I0813 19:56:53.872497 1 log.go:245] Object (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io has create-wait annotation, skipping apply.
2025-08-13T19:56:53.872585833+00:00 stderr F I0813 19:56:53.872529 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T19:56:54.079313836+00:00 stderr F I0813 19:56:54.079125 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T19:56:54.079313836+00:00 stderr F I0813 19:56:54.079196 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T19:56:54.274740046+00:00 stderr F I0813 19:56:54.274540 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T19:56:54.274740046+00:00 stderr F I0813 19:56:54.274617 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T19:56:54.474113979+00:00 stderr F I0813 19:56:54.473973 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T19:56:54.474113979+00:00 stderr F I0813 19:56:54.474062 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T19:56:54.670616190+00:00 stderr F I0813 19:56:54.670498 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T19:56:54.670616190+00:00 stderr F I0813 19:56:54.670566 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T19:56:54.871342992+00:00 stderr F I0813 19:56:54.871161 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T19:56:54.871342992+00:00 stderr F I0813 19:56:54.871256 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T19:56:55.073747421+00:00 stderr F I0813 19:56:55.073604 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T19:56:55.073747421+00:00 stderr F I0813 19:56:55.073706 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T19:56:55.282257015+00:00 stderr F I0813 19:56:55.281910 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T19:56:55.309945396+00:00 stderr F I0813 19:56:55.309311 1 log.go:245] Operconfig Controller complete
2025-08-13T19:57:16.034315282+00:00 stderr F I0813 19:57:16.034152 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.042003061+00:00 stderr F I0813 19:57:16.041865 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.070489014+00:00 stderr F I0813 19:57:16.070373 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.088582771+00:00 stderr F I0813 19:57:16.088455 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.192429817+00:00 stderr F I0813 19:57:16.192311 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.202311909+00:00 stderr F I0813 19:57:16.202193 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.225711577+00:00 stderr F I0813 19:57:16.225656 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.245669617+00:00 stderr F I0813 19:57:16.245515 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.254938681+00:00 stderr F I0813 19:57:16.254894 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.281678735+00:00 stderr F I0813 19:57:16.281563 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.339818425+00:00 stderr F I0813 19:57:16.339719 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.431200795+00:00 stderr F I0813 19:57:16.431143 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:16.761302101+00:00 stderr F I0813 19:57:16.761186 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status
2025-08-13T19:57:16.761302101+00:00 stderr F I0813 19:57:16.761234 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status
2025-08-13T19:57:16.802314282+00:00 stderr F I0813 19:57:16.800418 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status
2025-08-13T19:57:16.802314282+00:00 stderr F I0813 19:57:16.800463 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status
2025-08-13T19:57:16.808145299+00:00 stderr F I0813 19:57:16.808077 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status
2025-08-13T19:57:16.808291143+00:00 stderr F I0813 19:57:16.808189 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-additional-cni-plugins updated, re-generating status
2025-08-13T19:57:16.969932258+00:00 stderr F I0813 19:57:16.969745 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:16.969932258+00:00 stderr F   status: "False"
2025-08-13T19:57:16.969932258+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:16.969932258+00:00 stderr F   message: |-
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:57:16.969932258+00:00 stderr F   reason: RolloutHung
2025-08-13T19:57:16.969932258+00:00 stderr F   status: "True"
2025-08-13T19:57:16.969932258+00:00 stderr F   type: Degraded
2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:16.969932258+00:00 stderr F   status: "True"
2025-08-13T19:57:16.969932258+00:00 stderr F   type: Upgradeable
2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:16.969932258+00:00 stderr F   message: |-
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes)
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:16.969932258+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:16.969932258+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:16.969932258+00:00 stderr F     Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:57:16.969932258+00:00 stderr F   reason: Deploying
2025-08-13T19:57:16.969932258+00:00 stderr F   status: "True"
2025-08-13T19:57:16.969932258+00:00 stderr F   type: Progressing
2025-08-13T19:57:16.969932258+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:16.969932258+00:00 stderr F   status: "True"
2025-08-13T19:57:16.969932258+00:00 stderr F   type: Available
2025-08-13T19:57:16.972714958+00:00 stderr F I0813 19:57:16.972639 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:17.003603950+00:00 stderr F I0813 19:57:17.003501 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:17.003603950+00:00 stderr F   message: |-
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:17.003603950+00:00 stderr F     DaemonSet "/openshift-multus/multus-additional-cni-plugins" rollout is not making progress - last change 2024-06-27T13:34:16Z
2025-08-13T19:57:17.003603950+00:00 stderr F   reason: RolloutHung
2025-08-13T19:57:17.003603950+00:00 stderr F   status: "True"
2025-08-13T19:57:17.003603950+00:00 stderr F   type: Degraded
2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.003603950+00:00 stderr F   status: "True"
2025-08-13T19:57:17.003603950+00:00 stderr F   type: Upgradeable
2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:17.003603950+00:00 stderr F   status: "False"
2025-08-13T19:57:17.003603950+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:17.003603950+00:00 stderr F message: |- 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.003603950+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.003603950+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.003603950+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.003603950+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.003603950+00:00 stderr F status: "True" 2025-08-13T19:57:17.003603950+00:00 stderr F type: Progressing 2025-08-13T19:57:17.003603950+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.003603950+00:00 stderr F status: "True" 2025-08-13T19:57:17.003603950+00:00 stderr F type: Available 2025-08-13T19:57:17.027331517+00:00 stderr F I0813 19:57:17.025409 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.027331517+00:00 stderr F status: "False" 2025-08-13T19:57:17.027331517+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.027331517+00:00 stderr F message: |- 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet 
"/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:17.027331517+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.027331517+00:00 stderr F status: "True" 2025-08-13T19:57:17.027331517+00:00 stderr F type: Degraded 2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.027331517+00:00 stderr F status: "True" 2025-08-13T19:57:17.027331517+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:17.027331517+00:00 stderr F message: |- 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.027331517+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.027331517+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.027331517+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.027331517+00:00 
stderr F reason: Deploying 2025-08-13T19:57:17.027331517+00:00 stderr F status: "True" 2025-08-13T19:57:17.027331517+00:00 stderr F type: Progressing 2025-08-13T19:57:17.027331517+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.027331517+00:00 stderr F status: "True" 2025-08-13T19:57:17.027331517+00:00 stderr F type: Available 2025-08-13T19:57:17.027331517+00:00 stderr F I0813 19:57:17.025657 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:17.064305253+00:00 stderr F I0813 19:57:17.064072 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.064305253+00:00 stderr F message: |- 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:17.064305253+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.064305253+00:00 stderr F status: "True" 2025-08-13T19:57:17.064305253+00:00 stderr F type: Degraded 2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.064305253+00:00 stderr F status: "True" 2025-08-13T19:57:17.064305253+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.064305253+00:00 stderr F status: "False" 2025-08-13T19:57:17.064305253+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 
2025-08-13T19:57:17.064305253+00:00 stderr F message: |- 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.064305253+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.064305253+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.064305253+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.064305253+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.064305253+00:00 stderr F status: "True" 2025-08-13T19:57:17.064305253+00:00 stderr F type: Progressing 2025-08-13T19:57:17.064305253+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.064305253+00:00 stderr F status: "True" 2025-08-13T19:57:17.064305253+00:00 stderr F type: Available 2025-08-13T19:57:17.088951607+00:00 stderr F I0813 19:57:17.088886 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T19:57:17.089022279+00:00 stderr F I0813 19:57:17.089009 1 pod_watcher.go:131] Operand /, Kind= openshift-network-node-identity/network-node-identity updated, re-generating status 2025-08-13T19:57:17.201901152+00:00 stderr F I0813 19:57:17.201844 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:17.201901152+00:00 stderr F - 
lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.201901152+00:00 stderr F status: "False" 2025-08-13T19:57:17.201901152+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.201901152+00:00 stderr F message: |- 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:17.201901152+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.201901152+00:00 stderr F status: "True" 2025-08-13T19:57:17.201901152+00:00 stderr F type: Degraded 2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.201901152+00:00 stderr F status: "True" 2025-08-13T19:57:17.201901152+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:17.201901152+00:00 stderr F message: |- 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.201901152+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.201901152+00:00 stderr F Deployment 
"/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.201901152+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.201901152+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.201901152+00:00 stderr F status: "True" 2025-08-13T19:57:17.201901152+00:00 stderr F type: Progressing 2025-08-13T19:57:17.201901152+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.201901152+00:00 stderr F status: "True" 2025-08-13T19:57:17.201901152+00:00 stderr F type: Available 2025-08-13T19:57:17.202354635+00:00 stderr F I0813 19:57:17.202188 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:17.223129958+00:00 stderr F I0813 19:57:17.222995 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found 2025-08-13T19:57:17.232103015+00:00 stderr F I0813 19:57:17.232074 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:57:17.245811336+00:00 stderr F I0813 19:57:17.245701 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.245811336+00:00 stderr F message: |- 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-network-node-identity/network-node-identity" rollout is not making progress - last change 2024-06-27T13:34:16Z 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 
2025-08-13T19:57:17.245811336+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.245811336+00:00 stderr F status: "True" 2025-08-13T19:57:17.245811336+00:00 stderr F type: Degraded 2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.245811336+00:00 stderr F status: "True" 2025-08-13T19:57:17.245811336+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.245811336+00:00 stderr F status: "False" 2025-08-13T19:57:17.245811336+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:17.245811336+00:00 stderr F message: |- 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.245811336+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.245811336+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.245811336+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.245811336+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.245811336+00:00 stderr F status: "True" 2025-08-13T19:57:17.245811336+00:00 stderr F type: Progressing 2025-08-13T19:57:17.245811336+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.245811336+00:00 stderr F status: "True" 2025-08-13T19:57:17.245811336+00:00 stderr F type: 
Available 2025-08-13T19:57:17.249137321+00:00 stderr F I0813 19:57:17.249110 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:17.259308692+00:00 stderr F I0813 19:57:17.259190 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:17.270118710+00:00 stderr F I0813 19:57:17.269988 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T19:57:17.274218327+00:00 stderr F I0813 19:57:17.274162 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:17.275952317+00:00 stderr F I0813 19:57:17.274720 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.275952317+00:00 stderr F status: "False" 2025-08-13T19:57:17.275952317+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.275952317+00:00 stderr F message: |- 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:17.275952317+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.275952317+00:00 stderr F status: "True" 2025-08-13T19:57:17.275952317+00:00 stderr F 
type: Degraded 2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.275952317+00:00 stderr F status: "True" 2025-08-13T19:57:17.275952317+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:17.275952317+00:00 stderr F message: |- 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.275952317+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.275952317+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.275952317+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.275952317+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.275952317+00:00 stderr F status: "True" 2025-08-13T19:57:17.275952317+00:00 stderr F type: Progressing 2025-08-13T19:57:17.275952317+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.275952317+00:00 stderr F status: "True" 2025-08-13T19:57:17.275952317+00:00 stderr F type: Available 2025-08-13T19:57:17.438910890+00:00 stderr F I0813 19:57:17.438515 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T19:57:17.591118266+00:00 stderr F I0813 19:57:17.587982 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:17.591118266+00:00 
stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:17.591118266+00:00 stderr F message: |- 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:17.591118266+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:17.591118266+00:00 stderr F status: "True" 2025-08-13T19:57:17.591118266+00:00 stderr F type: Degraded 2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.591118266+00:00 stderr F status: "True" 2025-08-13T19:57:17.591118266+00:00 stderr F type: Upgradeable 2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:17.591118266+00:00 stderr F status: "False" 2025-08-13T19:57:17.591118266+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:17.591118266+00:00 stderr F message: |- 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.591118266+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.591118266+00:00 stderr F Deployment 
"/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:17.591118266+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:17.591118266+00:00 stderr F reason: Deploying 2025-08-13T19:57:17.591118266+00:00 stderr F status: "True" 2025-08-13T19:57:17.591118266+00:00 stderr F type: Progressing 2025-08-13T19:57:17.591118266+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:17.591118266+00:00 stderr F status: "True" 2025-08-13T19:57:17.591118266+00:00 stderr F type: Available 2025-08-13T19:57:17.632087666+00:00 stderr F I0813 19:57:17.631991 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:17.642120213+00:00 stderr F I0813 19:57:17.641978 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found 2025-08-13T19:57:17.831496050+00:00 stderr F I0813 19:57:17.831406 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T19:57:18.036197856+00:00 stderr F I0813 19:57:18.035861 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:18.133008850+00:00 stderr F I0813 19:57:18.132935 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:57:18.133102643+00:00 stderr F I0813 19:57:18.133088 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-control-plane updated, re-generating status 2025-08-13T19:57:18.196054480+00:00 stderr F I0813 19:57:18.195709 1 log.go:245] Network operator config updated with 
conditions: 2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.196054480+00:00 stderr F status: "False" 2025-08-13T19:57:18.196054480+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:18.196054480+00:00 stderr F message: |- 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:18.196054480+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:18.196054480+00:00 stderr F status: "True" 2025-08-13T19:57:18.196054480+00:00 stderr F type: Degraded 2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.196054480+00:00 stderr F status: "True" 2025-08-13T19:57:18.196054480+00:00 stderr F type: Upgradeable 2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:18.196054480+00:00 stderr F message: |- 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:18.196054480+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 
2025-08-13T19:57:18.196054480+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:18.196054480+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.196054480+00:00 stderr F reason: Deploying 2025-08-13T19:57:18.196054480+00:00 stderr F status: "True" 2025-08-13T19:57:18.196054480+00:00 stderr F type: Progressing 2025-08-13T19:57:18.196054480+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:18.196054480+00:00 stderr F status: "True" 2025-08-13T19:57:18.196054480+00:00 stderr F type: Available 2025-08-13T19:57:18.202886366+00:00 stderr F I0813 19:57:18.202753 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged 2025-08-13T19:57:18.236058863+00:00 stderr F I0813 19:57:18.235671 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T19:57:18.430881896+00:00 stderr F I0813 19:57:18.430665 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T19:57:18.579966113+00:00 stderr F I0813 19:57:18.579881 1 log.go:245] ClusterOperator config status updated with conditions: 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:18.579966113+00:00 stderr F message: |- 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-multus/multus" 
rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:18.579966113+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:18.579966113+00:00 stderr F status: "True" 2025-08-13T19:57:18.579966113+00:00 stderr F type: Degraded 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.579966113+00:00 stderr F status: "True" 2025-08-13T19:57:18.579966113+00:00 stderr F type: Upgradeable 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.579966113+00:00 stderr F status: "False" 2025-08-13T19:57:18.579966113+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:18.579966113+00:00 stderr F message: |- 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready 2025-08-13T19:57:18.579966113+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.579966113+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready 2025-08-13T19:57:18.579966113+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.579966113+00:00 stderr F reason: Deploying 2025-08-13T19:57:18.579966113+00:00 stderr F status: "True" 2025-08-13T19:57:18.579966113+00:00 stderr F type: Progressing 2025-08-13T19:57:18.579966113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z" 2025-08-13T19:57:18.579966113+00:00 stderr F 
status: "True" 2025-08-13T19:57:18.579966113+00:00 stderr F type: Available 2025-08-13T19:57:18.603711061+00:00 stderr F I0813 19:57:18.603625 1 log.go:245] Network operator config updated with conditions: 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.603711061+00:00 stderr F status: "False" 2025-08-13T19:57:18.603711061+00:00 stderr F type: ManagementStateDegraded 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z" 2025-08-13T19:57:18.603711061+00:00 stderr F message: |- 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z 2025-08-13T19:57:18.603711061+00:00 stderr F reason: RolloutHung 2025-08-13T19:57:18.603711061+00:00 stderr F status: "True" 2025-08-13T19:57:18.603711061+00:00 stderr F type: Degraded 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z" 2025-08-13T19:57:18.603711061+00:00 stderr F status: "True" 2025-08-13T19:57:18.603711061+00:00 stderr F type: Upgradeable 2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z" 2025-08-13T19:57:18.603711061+00:00 stderr F message: |- 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes) 2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is 
waiting for other operators to become ready
2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:18.603711061+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:18.603711061+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:18.603711061+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:57:18.603711061+00:00 stderr F reason: Deploying
2025-08-13T19:57:18.603711061+00:00 stderr F status: "True"
2025-08-13T19:57:18.603711061+00:00 stderr F type: Progressing
2025-08-13T19:57:18.603711061+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:18.603711061+00:00 stderr F status: "True"
2025-08-13T19:57:18.603711061+00:00 stderr F type: Available
2025-08-13T19:57:18.604328479+00:00 stderr F I0813 19:57:18.604264 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:18.630157446+00:00 stderr F I0813 19:57:18.630055 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:57:18.831180777+00:00 stderr F I0813 19:57:18.831084 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:57:18.982905299+00:00 stderr F I0813 19:57:18.982710 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:18.982905299+00:00 stderr F message: |-
2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State
2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State
2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:18.982905299+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:18.982905299+00:00 stderr F status: "True"
2025-08-13T19:57:18.982905299+00:00 stderr F type: Degraded
2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:18.982905299+00:00 stderr F status: "True"
2025-08-13T19:57:18.982905299+00:00 stderr F type: Upgradeable
2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:18.982905299+00:00 stderr F status: "False"
2025-08-13T19:57:18.982905299+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:18.982905299+00:00 stderr F message: |-
2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:18.982905299+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:18.982905299+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:18.982905299+00:00 stderr F Deployment "/openshift-ovn-kubernetes/ovnkube-control-plane" is not available (awaiting 1 nodes)
2025-08-13T19:57:18.982905299+00:00 stderr F reason: Deploying
2025-08-13T19:57:18.982905299+00:00 stderr F status: "True"
2025-08-13T19:57:18.982905299+00:00 stderr F type: Progressing
2025-08-13T19:57:18.982905299+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:18.982905299+00:00 stderr F status: "True"
2025-08-13T19:57:18.982905299+00:00 stderr F type: Available
2025-08-13T19:57:19.031076985+00:00 stderr F I0813 19:57:19.030963 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:57:19.674537108+00:00 stderr F I0813 19:57:19.674439 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:19.674537108+00:00 stderr F status: "False"
2025-08-13T19:57:19.674537108+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:19.674537108+00:00 stderr F message: |-
2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State
2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State
2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:19.674537108+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:19.674537108+00:00 stderr F status: "True"
2025-08-13T19:57:19.674537108+00:00 stderr F type: Degraded
2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:19.674537108+00:00 stderr F status: "True"
2025-08-13T19:57:19.674537108+00:00 stderr F type: Upgradeable
2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:19.674537108+00:00 stderr F message: |-
2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:19.674537108+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:19.674537108+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:19.674537108+00:00 stderr F reason: Deploying
2025-08-13T19:57:19.674537108+00:00 stderr F status: "True"
2025-08-13T19:57:19.674537108+00:00 stderr F type: Progressing
2025-08-13T19:57:19.674537108+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:19.674537108+00:00 stderr F status: "True"
2025-08-13T19:57:19.674537108+00:00 stderr F type: Available
2025-08-13T19:57:19.675118124+00:00 stderr F I0813 19:57:19.675020 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:19.984504589+00:00 stderr F I0813 19:57:19.984386 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:19.984504589+00:00 stderr F message: |-
2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State
2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State
2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:19.984504589+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:19.984504589+00:00 stderr F status: "True"
2025-08-13T19:57:19.984504589+00:00 stderr F type: Degraded
2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:19.984504589+00:00 stderr F status: "True"
2025-08-13T19:57:19.984504589+00:00 stderr F type: Upgradeable
2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:19.984504589+00:00 stderr F status: "False"
2025-08-13T19:57:19.984504589+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:19.984504589+00:00 stderr F message: |-
2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:19.984504589+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:19.984504589+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:19.984504589+00:00 stderr F reason: Deploying
2025-08-13T19:57:19.984504589+00:00 stderr F status: "True"
2025-08-13T19:57:19.984504589+00:00 stderr F type: Progressing
2025-08-13T19:57:19.984504589+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:19.984504589+00:00 stderr F status: "True"
2025-08-13T19:57:19.984504589+00:00 stderr F type: Available
2025-08-13T19:57:20.007128255+00:00 stderr F I0813 19:57:20.006991 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:20.007128255+00:00 stderr F status: "False"
2025-08-13T19:57:20.007128255+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:20.007128255+00:00 stderr F message: |-
2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State
2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State
2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:20.007128255+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:20.007128255+00:00 stderr F status: "True"
2025-08-13T19:57:20.007128255+00:00 stderr F type: Degraded
2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:20.007128255+00:00 stderr F status: "True"
2025-08-13T19:57:20.007128255+00:00 stderr F type: Upgradeable
2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:20.007128255+00:00 stderr F message: |-
2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:20.007128255+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:20.007128255+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:20.007128255+00:00 stderr F reason: Deploying
2025-08-13T19:57:20.007128255+00:00 stderr F status: "True"
2025-08-13T19:57:20.007128255+00:00 stderr F type: Progressing
2025-08-13T19:57:20.007128255+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:20.007128255+00:00 stderr F status: "True"
2025-08-13T19:57:20.007128255+00:00 stderr F type: Available
2025-08-13T19:57:20.007128255+00:00 stderr F I0813 19:57:20.007109 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:20.384928903+00:00 stderr F I0813 19:57:20.384687 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:20.384928903+00:00 stderr F message: |-
2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State
2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State
2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:20.384928903+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:20.384928903+00:00 stderr F status: "True"
2025-08-13T19:57:20.384928903+00:00 stderr F type: Degraded
2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:20.384928903+00:00 stderr F status: "True"
2025-08-13T19:57:20.384928903+00:00 stderr F type: Upgradeable
2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:20.384928903+00:00 stderr F status: "False"
2025-08-13T19:57:20.384928903+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:20.384928903+00:00 stderr F message: |-
2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
2025-08-13T19:57:20.384928903+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:20.384928903+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:20.384928903+00:00 stderr F reason: Deploying
2025-08-13T19:57:20.384928903+00:00 stderr F status: "True"
2025-08-13T19:57:20.384928903+00:00 stderr F type: Progressing
2025-08-13T19:57:20.384928903+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:20.384928903+00:00 stderr F status: "True"
2025-08-13T19:57:20.384928903+00:00 stderr F type: Available
2025-08-13T19:57:27.646199035+00:00 stderr F E0813 19:57:27.645984 1 allowlist_controller.go:142] Failed to verify ready status on allowlist daemonset pods: client rate limiter Wait returned an error: context deadline exceeded
2025-08-13T19:57:36.833565267+00:00 stderr F I0813 19:57:36.829316 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status
2025-08-13T19:57:36.834918476+00:00 stderr F I0813 19:57:36.834168 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status
2025-08-13T19:57:36.943902088+00:00 stderr F I0813 19:57:36.943626 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status
2025-08-13T19:57:36.943902088+00:00 stderr F I0813 19:57:36.943743 1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus updated, re-generating status
2025-08-13T19:57:37.094172179+00:00 stderr F I0813 19:57:37.088743 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:37.094172179+00:00 stderr F I0813 19:57:37.090964 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:37.094172179+00:00 stderr F status: "False"
2025-08-13T19:57:37.094172179+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:37.094172179+00:00 stderr F message: |-
2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State
2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State
2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:37.094172179+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:37.094172179+00:00 stderr F status: "True"
2025-08-13T19:57:37.094172179+00:00 stderr F type: Degraded
2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:37.094172179+00:00 stderr F status: "True"
2025-08-13T19:57:37.094172179+00:00 stderr F type: Upgradeable
2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:37.094172179+00:00 stderr F message: |-
2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:37.094172179+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:37.094172179+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:37.094172179+00:00 stderr F reason: Deploying
2025-08-13T19:57:37.094172179+00:00 stderr F status: "True"
2025-08-13T19:57:37.094172179+00:00 stderr F type: Progressing
2025-08-13T19:57:37.094172179+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:37.094172179+00:00 stderr F status: "True"
2025-08-13T19:57:37.094172179+00:00 stderr F type: Available
2025-08-13T19:57:37.165531076+00:00 stderr F I0813 19:57:37.165462 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:37.165531076+00:00 stderr F message: |-
2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - pod ovnkube-node-44qcg is in CrashLoopBackOff State
2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-q88th is in CrashLoopBackOff State
2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-multus/multus" rollout is not making progress - last change 2024-06-27T13:34:15Z
2025-08-13T19:57:37.165531076+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:37.165531076+00:00 stderr F status: "True"
2025-08-13T19:57:37.165531076+00:00 stderr F type: Degraded
2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:37.165531076+00:00 stderr F status: "True"
2025-08-13T19:57:37.165531076+00:00 stderr F type: Upgradeable
2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:37.165531076+00:00 stderr F status: "False"
2025-08-13T19:57:37.165531076+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:37.165531076+00:00 stderr F message: |-
2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:37.165531076+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:37.165531076+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:37.165531076+00:00 stderr F reason: Deploying
2025-08-13T19:57:37.165531076+00:00 stderr F status: "True"
2025-08-13T19:57:37.165531076+00:00 stderr F type: Progressing
2025-08-13T19:57:37.165531076+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:37.165531076+00:00 stderr F status: "True"
2025-08-13T19:57:37.165531076+00:00 stderr F type: Available
2025-08-13T19:57:37.199699932+00:00 stderr F I0813 19:57:37.199578 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:37.200026730+00:00 stderr F I0813 19:57:37.199884 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:37.200026730+00:00 stderr F status: "False"
2025-08-13T19:57:37.200026730+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:37.200026730+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making
2025-08-13T19:57:37.200026730+00:00 stderr F progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:37.200026730+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:37.200026730+00:00 stderr F status: "True"
2025-08-13T19:57:37.200026730+00:00 stderr F type: Degraded
2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:37.200026730+00:00 stderr F status: "True"
2025-08-13T19:57:37.200026730+00:00 stderr F type: Upgradeable
2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:37.200026730+00:00 stderr F message: |-
2025-08-13T19:57:37.200026730+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:37.200026730+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:37.200026730+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:37.200026730+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:37.200026730+00:00 stderr F reason: Deploying
2025-08-13T19:57:37.200026730+00:00 stderr F status: "True"
2025-08-13T19:57:37.200026730+00:00 stderr F type: Progressing
2025-08-13T19:57:37.200026730+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:37.200026730+00:00 stderr F status: "True"
2025-08-13T19:57:37.200026730+00:00 stderr F type: Available
2025-08-13T19:57:37.267659872+00:00 stderr F I0813 19:57:37.267557 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:37.267659872+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making
2025-08-13T19:57:37.267659872+00:00 stderr F progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:37.267659872+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:37.267659872+00:00 stderr F status: "True"
2025-08-13T19:57:37.267659872+00:00 stderr F type: Degraded
2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:37.267659872+00:00 stderr F status: "True"
2025-08-13T19:57:37.267659872+00:00 stderr F type: Upgradeable
2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:37.267659872+00:00 stderr F status: "False"
2025-08-13T19:57:37.267659872+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:37.267659872+00:00 stderr F message: |-
2025-08-13T19:57:37.267659872+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:37.267659872+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:37.267659872+00:00 stderr F DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
2025-08-13T19:57:37.267659872+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:37.267659872+00:00 stderr F reason: Deploying
2025-08-13T19:57:37.267659872+00:00 stderr F status: "True"
2025-08-13T19:57:37.267659872+00:00 stderr F type: Progressing
2025-08-13T19:57:37.267659872+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:37.267659872+00:00 stderr F status: "True"
2025-08-13T19:57:37.267659872+00:00 stderr F type: Available
2025-08-13T19:57:40.331406117+00:00 stderr F I0813 19:57:40.331298 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-08-13T19:57:40.331406117+00:00 stderr F I0813 19:57:40.331339 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-08-13T19:57:40.363359359+00:00 stderr F I0813 19:57:40.362143 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-08-13T19:57:40.363359359+00:00 stderr F I0813 19:57:40.362189 1 pod_watcher.go:131] Operand /, Kind= openshift-ovn-kubernetes/ovnkube-node updated, re-generating status
2025-08-13T19:57:40.422123417+00:00 stderr F I0813 19:57:40.421990 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:40.422558630+00:00 stderr F I0813 19:57:40.422454 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.422558630+00:00 stderr F status: "False"
2025-08-13T19:57:40.422558630+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:40.422558630+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making
2025-08-13T19:57:40.422558630+00:00 stderr F progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:40.422558630+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:40.422558630+00:00 stderr F status: "True"
2025-08-13T19:57:40.422558630+00:00 stderr F type: Degraded
2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.422558630+00:00 stderr F status: "True"
2025-08-13T19:57:40.422558630+00:00 stderr F type: Upgradeable
2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:40.422558630+00:00 stderr F message: |-
2025-08-13T19:57:40.422558630+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:40.422558630+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:40.422558630+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:40.422558630+00:00 stderr F reason: Deploying
2025-08-13T19:57:40.422558630+00:00 stderr F status: "True"
2025-08-13T19:57:40.422558630+00:00 stderr F type: Progressing
2025-08-13T19:57:40.422558630+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:40.422558630+00:00 stderr F status: "True"
2025-08-13T19:57:40.422558630+00:00 stderr F type: Available
2025-08-13T19:57:40.451011842+00:00 stderr F I0813 19:57:40.450905 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2025-08-13T19:56:28Z"
2025-08-13T19:57:40.451011842+00:00 stderr F message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making
2025-08-13T19:57:40.451011842+00:00 stderr F progress - last change 2024-06-27T13:34:18Z
2025-08-13T19:57:40.451011842+00:00 stderr F reason: RolloutHung
2025-08-13T19:57:40.451011842+00:00 stderr F status: "True"
2025-08-13T19:57:40.451011842+00:00 stderr F type: Degraded
2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.451011842+00:00 stderr F status: "True"
2025-08-13T19:57:40.451011842+00:00 stderr F type: Upgradeable
2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.451011842+00:00 stderr F status: "False"
2025-08-13T19:57:40.451011842+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:40.451011842+00:00 stderr F message: |-
2025-08-13T19:57:40.451011842+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:40.451011842+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:40.451011842+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:40.451011842+00:00 stderr F reason: Deploying
2025-08-13T19:57:40.451011842+00:00 stderr F status: "True"
2025-08-13T19:57:40.451011842+00:00 stderr F type: Progressing
2025-08-13T19:57:40.451011842+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:40.451011842+00:00 stderr F status: "True"
2025-08-13T19:57:40.451011842+00:00 stderr F type: Available
2025-08-13T19:57:40.483402657+00:00 stderr F I0813 19:57:40.483342 1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:40.485476046+00:00 stderr F I0813 19:57:40.484150 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.485476046+00:00 stderr F status: "False"
2025-08-13T19:57:40.485476046+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:57:40.485476046+00:00 stderr F status: "False"
2025-08-13T19:57:40.485476046+00:00 stderr F type: Degraded
2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.485476046+00:00 stderr F status: "True"
2025-08-13T19:57:40.485476046+00:00 stderr F type: Upgradeable
2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:40.485476046+00:00 stderr F message: |-
2025-08-13T19:57:40.485476046+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:40.485476046+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:40.485476046+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:40.485476046+00:00 stderr F reason: Deploying
2025-08-13T19:57:40.485476046+00:00 stderr F status: "True"
2025-08-13T19:57:40.485476046+00:00 stderr F type: Progressing
2025-08-13T19:57:40.485476046+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:40.485476046+00:00 stderr F status: "True"
2025-08-13T19:57:40.485476046+00:00 stderr F type: Available
2025-08-13T19:57:40.515236706+00:00 stderr F I0813 19:57:40.515082 1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:57:40.515236706+00:00 stderr F status: "False"
2025-08-13T19:57:40.515236706+00:00 stderr F type: Degraded
2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.515236706+00:00 stderr F status: "True"
2025-08-13T19:57:40.515236706+00:00 stderr F type: Upgradeable
2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:40.515236706+00:00 stderr F status: "False"
2025-08-13T19:57:40.515236706+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:40.515236706+00:00 stderr F message: |-
2025-08-13T19:57:40.515236706+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:40.515236706+00:00 stderr F DaemonSet "/openshift-network-operator/iptables-alerter" is waiting for other operators to become ready
2025-08-13T19:57:40.515236706+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:40.515236706+00:00 stderr F reason: Deploying
2025-08-13T19:57:40.515236706+00:00 stderr F status: "True"
2025-08-13T19:57:40.515236706+00:00 stderr F type: Progressing
2025-08-13T19:57:40.515236706+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:40.515236706+00:00 stderr F status: "True"
2025-08-13T19:57:40.515236706+00:00 stderr F type: Available
2025-08-13T19:57:48.718553717+00:00 stderr F I0813 19:57:48.718162 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status
2025-08-13T19:57:48.718553717+00:00 stderr F I0813 19:57:48.718217 1 pod_watcher.go:131] Operand /, Kind= openshift-network-operator/iptables-alerter updated, re-generating status
2025-08-13T19:57:50.186632608+00:00 stderr F I0813 19:57:50.186549 1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:50.186632608+00:00 stderr F status: "False"
2025-08-13T19:57:50.186632608+00:00 stderr F type: ManagementStateDegraded
2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:57:50.186632608+00:00 stderr F status: "False"
2025-08-13T19:57:50.186632608+00:00 stderr F type: Degraded
2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:50.186632608+00:00 stderr F status: "True"
2025-08-13T19:57:50.186632608+00:00 stderr F type: Upgradeable
2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:50.186632608+00:00 stderr F message: |-
2025-08-13T19:57:50.186632608+00:00 stderr F DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:50.186632608+00:00 stderr F Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:50.186632608+00:00 stderr F reason: Deploying
2025-08-13T19:57:50.186632608+00:00 stderr F status: "True"
2025-08-13T19:57:50.186632608+00:00 stderr F type: Progressing
2025-08-13T19:57:50.186632608+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:50.186632608+00:00 stderr F status: "True"
2025-08-13T19:57:50.186632608+00:00 stderr F type: Available
2025-08-13T19:57:50.188397668+00:00 stderr F I0813 19:57:50.187123       1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:57:50.711039923+00:00 stderr F I0813 19:57:50.710052       1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:57:50.711039923+00:00 stderr F   status: "False"
2025-08-13T19:57:50.711039923+00:00 stderr F   type: Degraded
2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:50.711039923+00:00 stderr F   status: "True"
2025-08-13T19:57:50.711039923+00:00 stderr F   type: Upgradeable
2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:57:50.711039923+00:00 stderr F   status: "False"
2025-08-13T19:57:50.711039923+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:57:50.711039923+00:00 stderr F   message: |-
2025-08-13T19:57:50.711039923+00:00 stderr F     DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
2025-08-13T19:57:50.711039923+00:00 stderr F     Deployment "/openshift-multus/multus-admission-controller" is waiting for other operators to become ready
2025-08-13T19:57:50.711039923+00:00 stderr F   reason: Deploying
2025-08-13T19:57:50.711039923+00:00 stderr F   status: "True"
2025-08-13T19:57:50.711039923+00:00 stderr F   type: Progressing
2025-08-13T19:57:50.711039923+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:57:50.711039923+00:00 stderr F   status: "True"
2025-08-13T19:57:50.711039923+00:00 stderr F   type: Available
2025-08-13T19:57:53.854169574+00:00 stderr F E0813 19:57:53.854019       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:58:53.853220130+00:00 stderr F E0813 19:58:53.852971       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:16.933536645+00:00 stderr F I0813 19:59:16.928079       1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found
2025-08-13T19:59:17.449368829+00:00 stderr F I0813 19:59:17.449298       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:17.540009033+00:00 stderr F I0813 19:59:17.539733       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:17.797601196+00:00 stderr F I0813 19:59:17.794238       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:17.849392622+00:00 stderr F I0813 19:59:17.849009       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:18.021281852+00:00 stderr F I0813 19:59:18.020347       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:18.078086281+00:00 stderr F I0813 19:59:18.076133       1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status
2025-08-13T19:59:18.078086281+00:00 stderr F I0813 19:59:18.076275       1 pod_watcher.go:131] Operand /, Kind= openshift-multus/network-metrics-daemon updated, re-generating status
2025-08-13T19:59:18.247723307+00:00 stderr F I0813 19:59:18.247439       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:18.718923629+00:00 stderr F I0813 19:59:18.716957       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:19.320117607+00:00 stderr F I0813 19:59:19.294730       1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found
2025-08-13T19:59:19.725952816+00:00 stderr F I0813 19:59:19.703422       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:20.361934375+00:00 stderr F I0813 19:59:20.359979       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:20.786513098+00:00 stderr F I0813 19:59:20.784268       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:21.111025047+00:00 stderr F I0813 19:59:21.108740       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:21.648204880+00:00 stderr F I0813 19:59:21.648113       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:21.852747820+00:00 stderr F I0813 19:59:21.852489       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:21.966195494+00:00 stderr F I0813 19:59:21.945572       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:22.005721751+00:00 stderr F I0813 19:59:22.003273       1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:59:22.007893113+00:00 stderr F I0813 19:59:22.006732       1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:22.007893113+00:00 stderr F   status: "False"
2025-08-13T19:59:22.007893113+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:59:22.007893113+00:00 stderr F   status: "False"
2025-08-13T19:59:22.007893113+00:00 stderr F   type: Degraded
2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:22.007893113+00:00 stderr F   status: "True"
2025-08-13T19:59:22.007893113+00:00 stderr F   type: Upgradeable
2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:59:22.007893113+00:00 stderr F   message: Deployment "/openshift-multus/multus-admission-controller" is waiting for
2025-08-13T19:59:22.007893113+00:00 stderr F     other operators to become ready
2025-08-13T19:59:22.007893113+00:00 stderr F   reason: Deploying
2025-08-13T19:59:22.007893113+00:00 stderr F   status: "True"
2025-08-13T19:59:22.007893113+00:00 stderr F   type: Progressing
2025-08-13T19:59:22.007893113+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:59:22.007893113+00:00 stderr F   status: "True"
2025-08-13T19:59:22.007893113+00:00 stderr F   type: Available
2025-08-13T19:59:22.623140311+00:00 stderr F I0813 19:59:22.622565       1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:59:22.623140311+00:00 stderr F   status: "False"
2025-08-13T19:59:22.623140311+00:00 stderr F   type: Degraded
2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:22.623140311+00:00 stderr F   status: "True"
2025-08-13T19:59:22.623140311+00:00 stderr F   type: Upgradeable
2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:22.623140311+00:00 stderr F   status: "False"
2025-08-13T19:59:22.623140311+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2024-06-27T13:34:15Z"
2025-08-13T19:59:22.623140311+00:00 stderr F   message: Deployment "/openshift-multus/multus-admission-controller" is waiting for
2025-08-13T19:59:22.623140311+00:00 stderr F     other operators to become ready
2025-08-13T19:59:22.623140311+00:00 stderr F   reason: Deploying
2025-08-13T19:59:22.623140311+00:00 stderr F   status: "True"
2025-08-13T19:59:22.623140311+00:00 stderr F   type: Progressing
2025-08-13T19:59:22.623140311+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:59:22.623140311+00:00 stderr F   status: "True"
2025-08-13T19:59:22.623140311+00:00 stderr F   type: Available
2025-08-13T19:59:23.111462501+00:00 stderr F I0813 19:59:23.104684       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:23.350067482+00:00 stderr F I0813 19:59:23.315737       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:23.446624725+00:00 stderr F I0813 19:59:23.425940       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:23.962965053+00:00 stderr F I0813 19:59:23.962769       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:24.943743630+00:00 stderr F I0813 19:59:24.935070       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:25.812553016+00:00 stderr F I0813 19:59:25.811360       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:26.292940969+00:00 stderr F I0813 19:59:26.292674       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:26.995629630+00:00 stderr F I0813 19:59:26.995568       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:27.126353436+00:00 stderr F I0813 19:59:27.126299       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:27.330258469+00:00 stderr F I0813 19:59:27.330199       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:27.764258490+00:00 stderr F I0813 19:59:27.760309       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:27.805641620+00:00 stderr F I0813 19:59:27.802246       1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T19:59:28.078679293+00:00 stderr F I0813 19:59:28.078586       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:28.101021519+00:00 stderr F I0813 19:59:28.098559       1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status
2025-08-13T19:59:28.101021519+00:00 stderr F I0813 19:59:28.098616       1 pod_watcher.go:131] Operand /, Kind= openshift-multus/multus-admission-controller updated, re-generating status
2025-08-13T19:59:28.363168061+00:00 stderr F I0813 19:59:28.363039       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:28.696077801+00:00 stderr F I0813 19:59:28.692985       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:28.843750770+00:00 stderr F I0813 19:59:28.839683       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:29.196290770+00:00 stderr F I0813 19:59:29.196230       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:29.349409284+00:00 stderr F I0813 19:59:29.348558       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:29.477965059+00:00 stderr F I0813 19:59:29.477700       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:29.641111209+00:00 stderr F I0813 19:59:29.641055       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:29.699073332+00:00 stderr F I0813 19:59:29.685576       1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T19:59:29.709884170+00:00 stderr F I0813 19:59:29.709004       1 log.go:245] Network operator config updated with conditions:
2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:29.709884170+00:00 stderr F   status: "False"
2025-08-13T19:59:29.709884170+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:59:29.709884170+00:00 stderr F   status: "False"
2025-08-13T19:59:29.709884170+00:00 stderr F   type: Degraded
2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:29.709884170+00:00 stderr F   status: "True"
2025-08-13T19:59:29.709884170+00:00 stderr F   type: Upgradeable
2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2025-08-13T19:59:29Z"
2025-08-13T19:59:29.709884170+00:00 stderr F   status: "False"
2025-08-13T19:59:29.709884170+00:00 stderr F   type: Progressing
2025-08-13T19:59:29.709884170+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:59:29.709884170+00:00 stderr F   status: "True"
2025-08-13T19:59:29.709884170+00:00 stderr F   type: Available
2025-08-13T19:59:30.241056071+00:00 stderr F I0813 19:59:30.241003       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:31.334293824+00:00 stderr F I0813 19:59:31.332858       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:31.510259090+00:00 stderr F I0813 19:59:31.509102       1 log.go:245] ClusterOperator config status updated with conditions:
2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2025-08-13T19:57:40Z"
2025-08-13T19:59:31.510259090+00:00 stderr F   status: "False"
2025-08-13T19:59:31.510259090+00:00 stderr F   type: Degraded
2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:31.510259090+00:00 stderr F   status: "True"
2025-08-13T19:59:31.510259090+00:00 stderr F   type: Upgradeable
2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2024-06-26T12:45:34Z"
2025-08-13T19:59:31.510259090+00:00 stderr F   status: "False"
2025-08-13T19:59:31.510259090+00:00 stderr F   type: ManagementStateDegraded
2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2025-08-13T19:59:30Z"
2025-08-13T19:59:31.510259090+00:00 stderr F   status: "False"
2025-08-13T19:59:31.510259090+00:00 stderr F   type: Progressing
2025-08-13T19:59:31.510259090+00:00 stderr F - lastTransitionTime: "2024-06-26T12:46:58Z"
2025-08-13T19:59:31.510259090+00:00 stderr F   status: "True"
2025-08-13T19:59:31.510259090+00:00 stderr F   type: Available
2025-08-13T19:59:48.140554559+00:00 stderr F I0813 19:59:48.131293       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:48.391260996+00:00 stderr F I0813 19:59:48.389276       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:48.570048173+00:00 stderr F I0813 19:59:48.568410       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:48.833236585+00:00 stderr F I0813 19:59:48.832707       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:48.974349148+00:00 stderr F I0813 19:59:48.973569       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:49.216714687+00:00 stderr F I0813 19:59:49.205400       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:49.290936663+00:00 stderr F I0813 19:59:49.287212       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:49.426012323+00:00 stderr F I0813 19:59:49.425152       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:50.346580474+00:00 stderr F I0813 19:59:50.346523       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:50.613374570+00:00 stderr F I0813 19:59:50.609724       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:50.864720594+00:00 stderr F I0813 19:59:50.864066       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:51.243916454+00:00 stderr F I0813 19:59:51.243825       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:51.804263647+00:00 stderr F I0813 19:59:51.804204       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:51.834284163+00:00 stderr F I0813 19:59:51.834085       1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T19:59:51.979645897+00:00 stderr F I0813 19:59:51.979480       1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.954065868 +0000 UTC))"
2025-08-13T19:59:51.979891574+00:00 stderr F I0813 19:59:51.979865       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.979721539 +0000 UTC))"
2025-08-13T19:59:51.980096460+00:00 stderr F I0813 19:59:51.979996       1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.979966846 +0000 UTC))"
2025-08-13T19:59:51.980178442+00:00 stderr F I0813 19:59:51.980157       1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.98011923 +0000 UTC))"
2025-08-13T19:59:51.980242584+00:00 stderr F I0813 19:59:51.980217       1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.980200393 +0000 UTC))"
2025-08-13T19:59:51.980446290+00:00 stderr F I0813 19:59:51.980416       1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.980275205 +0000 UTC))"
2025-08-13T19:59:51.980535682+00:00 stderr F I0813 19:59:51.980508       1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.98047528 +0000 UTC))"
2025-08-13T19:59:51.980603604+00:00 stderr F I0813 19:59:51.980583       1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.980560143 +0000 UTC))"
2025-08-13T19:59:51.980667716+00:00 stderr F I0813 19:59:51.980653       1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.980630925 +0000 UTC))"
2025-08-13T19:59:52.032246636+00:00 stderr F I0813 19:59:52.030423       1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:16 +0000 UTC to 2026-06-26 12:47:17 +0000 UTC (now=2025-08-13 19:59:52.030310541 +0000 UTC))"
2025-08-13T19:59:52.032246636+00:00 stderr F I0813 19:59:52.031124       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755114653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755114652\" (2025-08-13 18:50:52 +0000 UTC to 2026-08-13 18:50:52 +0000 UTC (now=2025-08-13 19:59:52.031027672 +0000 UTC))"
2025-08-13T19:59:52.146350009+00:00 stderr F I0813 19:59:52.144680       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:52.231451655+00:00 stderr F I0813 19:59:52.230944       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:52.309745156+00:00 stderr F I0813 19:59:52.309454       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:52.637698115+00:00 stderr F I0813 19:59:52.634134       1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T19:59:55.316949758+00:00 stderr F I0813 19:59:55.316873       1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T19:59:58.072087613+00:00 stderr F I0813 19:59:58.071428       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.305632231+00:00 stderr F I0813 19:59:58.304088       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.377723176+00:00 stderr F I0813 19:59:58.367820       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.434523765+00:00 stderr F I0813 19:59:58.434098       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.482619416+00:00 stderr F I0813 19:59:58.480009       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.528341879+00:00 stderr F I0813 19:59:58.522759       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.545087616+00:00 stderr F I0813 19:59:58.543626       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.600185317+00:00 stderr F I0813 19:59:58.589599       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.703059880+00:00 stderr F I0813 19:59:58.701218       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.738516400+00:00 stderr F I0813 19:59:58.737600       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.787730443+00:00 stderr F I0813 19:59:58.787055       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.841489106+00:00 stderr F I0813 19:59:58.838749       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.884679367+00:00 stderr F I0813 19:59:58.856358       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T19:59:58.963088732+00:00 stderr F I0813 19:59:58.961511       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T19:59:59.131081701+00:00 stderr F I0813 19:59:59.125180       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T19:59:59.191984487+00:00 stderr F I0813 19:59:59.189505       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:00.016409427+00:00 stderr F I0813 20:00:00.015619       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:00:00.044321713+00:00 stderr F I0813 20:00:00.039911       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:00:00.051241160+00:00 stderr F I0813 20:00:00.047119       1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:00:00.051241160+00:00 stderr F I0813 20:00:00.047155       1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc000abff80 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:00:00.085951090+00:00 stderr F I0813 20:00:00.085465       1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:00:00.085951090+00:00 stderr F I0813 20:00:00.085513       1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:00:00.085951090+00:00 stderr F I0813 20:00:00.085531       1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.100949       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.101022       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.101030       1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.101036       1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:00:00.102467720+00:00 stderr F I0813 20:00:00.101079       1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:00:00.240105194+00:00 stderr F I0813 20:00:00.236030       1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:00:00.415568186+00:00 stderr F I0813 20:00:00.415503       1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:00:00.419339323+00:00 stderr F I0813 20:00:00.419293       1 log.go:245] Starting render phase
2025-08-13T20:00:00.537154070+00:00 stderr F I0813 20:00:00.537099       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:00:00.583753909+00:00 stderr F I0813 20:00:00.576994       1 log.go:245] Skipping reconcile of Network.operator.openshift.io: spec unchanged
2025-08-13T20:00:00.585098887+00:00 stderr F I0813 20:00:00.585069       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:00.606038084+00:00 stderr F I0813 20:00:00.605336 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.704095379+00:00 stderr F I0813 20:00:00.685563 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.755946427+00:00 stderr F I0813 20:00:00.750192 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is not defined. Using: 9107 2025-08-13T20:00:00.756553244+00:00 stderr F I0813 20:00:00.756405 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.810522553+00:00 stderr F I0813 20:00:00.810427 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.935619299+00:00 stderr F I0813 20:00:00.925007 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:00.966938491+00:00 stderr F I0813 20:00:00.966348 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.012044437+00:00 stderr F I0813 20:00:00.994077 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.198850282+00:00 stderr F I0813 20:00:01.196550 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.378529464+00:00 stderr F I0813 
20:00:01.373556 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.526869373+00:00 stderr F I0813 20:00:01.526736 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.793436441+00:00 stderr F I0813 20:00:01.793339 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:01.974855962+00:00 stderr F I0813 20:00:01.963018 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.051281641+00:00 stderr F I0813 20:00:02.048943 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:00:02.051281641+00:00 stderr F I0813 20:00:02.048980 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:00:02.051281641+00:00 stderr F I0813 20:00:02.049010 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:00:02.051281641+00:00 stderr F I0813 20:00:02.049058 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:00:02.145098276+00:00 stderr F I0813 20:00:02.144755 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.293526629+00:00 stderr F I0813 20:00:02.293464 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 
1 2025-08-13T20:00:02.293683323+00:00 stderr F I0813 20:00:02.293670 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:00:02.323525774+00:00 stderr F I0813 20:00:02.323471 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.433367366+00:00 stderr F I0813 20:00:02.425473 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:00:02.833232198+00:00 stderr F I0813 20:00:02.833170 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.912416596+00:00 stderr F I0813 20:00:02.912359 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:02.960892078+00:00 stderr F I0813 20:00:02.959429 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:00:02.990424180+00:00 stderr F I0813 20:00:02.990367 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.003689948+00:00 stderr F I0813 20:00:03.003291 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:00:03.003689948+00:00 stderr F I0813 20:00:03.003423 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:00:03.024762009+00:00 stderr F I0813 20:00:03.024709 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was successful 2025-08-13T20:00:03.024974555+00:00 stderr F I0813 
20:00:03.024956 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:00:03.157048601+00:00 stderr F I0813 20:00:03.156993 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.204146314+00:00 stderr F I0813 20:00:03.203526 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:00:03.204146314+00:00 stderr F I0813 20:00:03.203594 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:00:03.428984805+00:00 stderr F I0813 20:00:03.428928 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:00:03.429097758+00:00 stderr F I0813 20:00:03.429084 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:00:03.441752649+00:00 stderr F I0813 20:00:03.440235 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.529341597+00:00 stderr F I0813 20:00:03.528243 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:00:03.538519809+00:00 stderr F I0813 20:00:03.528329 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:00:03.588427262+00:00 stderr F I0813 20:00:03.588335 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.640600989+00:00 stderr F I0813 20:00:03.640539 1 log.go:245] Apply / Create of 
(rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:00:03.641026771+00:00 stderr F I0813 20:00:03.641007 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:00:03.772039317+00:00 stderr F I0813 20:00:03.765621 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:00:03.772039317+00:00 stderr F I0813 20:00:03.769461 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:00:03.782352661+00:00 stderr F I0813 20:00:03.772442 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:03.871938865+00:00 stderr F I0813 20:00:03.866133 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:00:03.872073069+00:00 stderr F I0813 20:00:03.872053 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:00:03.917299689+00:00 stderr F I0813 20:00:03.914318 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:00:03.920611853+00:00 stderr F I0813 20:00:03.917435 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:00:03.929444985+00:00 stderr F I0813 20:00:03.929282 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:00:03.929554168+00:00 stderr F I0813 20:00:03.929539 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:00:03.961414587+00:00 stderr F I0813 20:00:03.961358 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was 
successful 2025-08-13T20:00:03.961531410+00:00 stderr F I0813 20:00:03.961513 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:00:03.985005849+00:00 stderr F I0813 20:00:03.982309 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:00:04.003910608+00:00 stderr F I0813 20:00:03.999925 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:00:04.005635377+00:00 stderr F I0813 20:00:04.005601 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:04.027927552+00:00 stderr F I0813 20:00:04.027069 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:00:04.045293458+00:00 stderr F I0813 20:00:04.045115 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:00:04.063575569+00:00 stderr F I0813 20:00:04.063518 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:00:04.063912578+00:00 stderr F I0813 20:00:04.063892 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:00:04.088456888+00:00 stderr F I0813 20:00:04.088401 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:00:04.089140188+00:00 stderr F I0813 20:00:04.088955 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:00:04.131601498+00:00 stderr F I0813 20:00:04.131542 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:04.182439358+00:00 stderr F I0813 20:00:04.182377 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:00:04.182577902+00:00 stderr F I0813 20:00:04.182554 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:00:04.322262595+00:00 stderr F I0813 20:00:04.322190 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:04.371423647+00:00 stderr F I0813 20:00:04.371362 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:00:04.371567621+00:00 stderr F I0813 20:00:04.371546 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:00:04.563670579+00:00 stderr F I0813 20:00:04.563619 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:04.579273074+00:00 stderr F I0813 20:00:04.577416 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:00:04.579526481+00:00 stderr F I0813 20:00:04.579502 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:00:04.768828728+00:00 stderr F I0813 20:00:04.768573 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:00:04.768828728+00:00 stderr F I0813 20:00:04.768703 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:00:05.089912124+00:00 stderr F 
I0813 20:00:05.089594 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:00:05.090033748+00:00 stderr F I0813 20:00:05.090018 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:00:05.245299915+00:00 stderr F I0813 20:00:05.243752 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:00:05.245299915+00:00 stderr F I0813 20:00:05.243892 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:00:05.402293141+00:00 stderr F I0813 20:00:05.402240 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:00:05.402397694+00:00 stderr F I0813 20:00:05.402383 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:00:05.494256684+00:00 stderr F I0813 20:00:05.494179 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.521273424+00:00 stderr F I0813 20:00:05.521228 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.571085004+00:00 stderr F I0813 20:00:05.571031 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.585370512+00:00 stderr F I0813 20:00:05.585318 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.592692180+00:00 stderr F I0813 20:00:05.592578 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.613300498+00:00 stderr F I0813 20:00:05.612470 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:00:05.613300498+00:00 stderr F I0813 20:00:05.612559 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740411 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.740375171 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740519 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.740496435 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740546 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.740529816 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740565 1 
tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.740551536 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740584 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.740572197 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740624 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.740610108 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740643 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.740629369 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F 
I0813 20:00:05.740660 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.740648819 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740682 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.74066537 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.740703 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.74069001 +0000 UTC))" 2025-08-13T20:00:05.748233965+00:00 stderr F I0813 20:00:05.741475 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" 
(2024-06-26 12:47:16 +0000 UTC to 2026-06-26 12:47:17 +0000 UTC (now=2025-08-13 20:00:05.741448102 +0000 UTC))" 2025-08-13T20:00:05.748960716+00:00 stderr F I0813 20:00:05.748933 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.782825402+00:00 stderr F I0813 20:00:05.782349 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755114653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755114652\" (2025-08-13 18:50:52 +0000 UTC to 2026-08-13 18:50:52 +0000 UTC (now=2025-08-13 20:00:05.782298587 +0000 UTC))" 2025-08-13T20:00:05.829491792+00:00 stderr F I0813 20:00:05.827515 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:00:05.829491792+00:00 stderr F I0813 20:00:05.827590 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:00:05.936560876+00:00 stderr F I0813 20:00:05.935582 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:05.973523859+00:00 stderr F I0813 20:00:05.973349 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:00:05.973523859+00:00 stderr F I0813 20:00:05.973409 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:00:06.147090158+00:00 stderr F I0813 20:00:06.146975 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:06.173986895+00:00 stderr F I0813 20:00:06.171235 1 log.go:245] Apply / Create 
of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:00:06.174203062+00:00 stderr F I0813 20:00:06.174180 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:00:06.335594323+00:00 stderr F I0813 20:00:06.335530 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:06.413332790+00:00 stderr F I0813 20:00:06.411275 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:00:06.413332790+00:00 stderr F I0813 20:00:06.411388 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:00:06.543155352+00:00 stderr F I0813 20:00:06.538471 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:06.591404678+00:00 stderr F I0813 20:00:06.587102 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:00:06.591567032+00:00 stderr F I0813 20:00:06.591551 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:00:06.727749705+00:00 stderr F I0813 20:00:06.726725 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:06.834137209+00:00 stderr F I0813 20:00:06.833488 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:00:06.834137209+00:00 stderr F I0813 20:00:06.833576 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 
2025-08-13T20:00:06.972059312+00:00 stderr F I0813 20:00:06.969446 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:07.068523202+00:00 stderr F I0813 20:00:07.067173 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:07.068523202+00:00 stderr F I0813 20:00:07.067241 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:00:07.126004661+00:00 stderr F I0813 20:00:07.125906 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:07.181911975+00:00 stderr F I0813 20:00:07.179116 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:07.181911975+00:00 stderr F I0813 20:00:07.179189 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:00:07.367050765+00:00 stderr F I0813 20:00:07.364935 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:07.439404058+00:00 stderr F I0813 20:00:07.439299 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:00:07.439404058+00:00 stderr F I0813 20:00:07.439362 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:00:07.570584558+00:00 stderr F I0813 20:00:07.566959 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:07.610458055+00:00 stderr F I0813 
20:00:07.609288 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:00:07.610458055+00:00 stderr F I0813 20:00:07.609359 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:00:07.740640546+00:00 stderr F I0813 20:00:07.740525 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:07.913240478+00:00 stderr F I0813 20:00:07.908830 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:00:07.913240478+00:00 stderr F I0813 20:00:07.908932 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:00:08.024148710+00:00 stderr F I0813 20:00:08.021659 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:00:08.024148710+00:00 stderr F I0813 20:00:08.021742 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:00:08.076240345+00:00 stderr F I0813 20:00:08.072585 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:08.234126367+00:00 stderr F I0813 20:00:08.232925 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:00:08.234126367+00:00 stderr F I0813 20:00:08.232991 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:00:08.246128620+00:00 stderr F I0813 20:00:08.245571 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:08.484883697+00:00 stderr F I0813 20:00:08.484422 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:00:08.484883697+00:00 stderr F I0813 20:00:08.484544 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:00:08.509748366+00:00 stderr F I0813 20:00:08.507183 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:08.724308484+00:00 stderr F I0813 20:00:08.723940 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:08.890706279+00:00 stderr F I0813 20:00:08.890426 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:00:08.906373966+00:00 stderr F I0813 20:00:08.905441 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:00:08.923653378+00:00 stderr F I0813 20:00:08.920199 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:09.990633902+00:00 stderr F I0813 20:00:09.985965 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:09.990633902+00:00 stderr F I0813 20:00:09.986155 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:00:10.110526881+00:00 stderr F I0813 20:00:10.110462 1 
log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:10.206356513+00:00 stderr F I0813 20:00:10.206050 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful
2025-08-13T20:00:10.206356513+00:00 stderr F I0813 20:00:10.206099 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules
2025-08-13T20:00:10.221050462+00:00 stderr F I0813 20:00:10.220763 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:10.611933238+00:00 stderr F I0813 20:00:10.607622 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:00:10.612359180+00:00 stderr F I0813 20:00:10.612334 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful
2025-08-13T20:00:10.612448893+00:00 stderr F I0813 20:00:10.612432 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes
2025-08-13T20:00:10.825699854+00:00 stderr F I0813 20:00:10.825565 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful
2025-08-13T20:00:10.830917772+00:00 stderr F I0813 20:00:10.825762 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org
2025-08-13T20:00:10.852116287+00:00 stderr F I0813 20:00:10.848476 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:00:11.308090267+00:00 stderr F I0813 20:00:11.307681 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful
2025-08-13T20:00:11.308090267+00:00 stderr F I0813 20:00:11.307755 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org
2025-08-13T20:00:11.316324702+00:00 stderr F I0813 20:00:11.316292 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:11.695420422+00:00 stderr F I0813 20:00:11.695361 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:11.700181848+00:00 stderr F I0813 20:00:11.698073 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful
2025-08-13T20:00:11.700181848+00:00 stderr F I0813 20:00:11.698161 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org
2025-08-13T20:00:11.924462803+00:00 stderr F I0813 20:00:11.924415 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful
2025-08-13T20:00:11.924557115+00:00 stderr F I0813 20:00:11.924540 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org
2025-08-13T20:00:12.057868176+00:00 stderr F I0813 20:00:12.057643 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:00:12.138956629+00:00 stderr F I0813 20:00:12.138102 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:12.236306495+00:00 stderr F I0813 20:00:12.219931 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful
2025-08-13T20:00:12.236306495+00:00 stderr F I0813 20:00:12.220000 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org
2025-08-13T20:00:12.236306495+00:00 stderr F I0813 20:00:12.225358 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:12.341434712+00:00 stderr F I0813 20:00:12.338142 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:12.382584996+00:00 stderr F I0813 20:00:12.381413 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful
2025-08-13T20:00:12.382584996+00:00 stderr F I0813 20:00:12.381486 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:00:12.822989293+00:00 stderr F I0813 20:00:12.816201 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:12.912339241+00:00 stderr F I0813 20:00:12.902022 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:00:12.912339241+00:00 stderr F I0813 20:00:12.902123 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io
2025-08-13T20:00:13.015617166+00:00 stderr F I0813 20:00:13.012751 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:00:13.232013697+00:00 stderr F I0813 20:00:13.223038 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful
2025-08-13T20:00:13.232013697+00:00 stderr F I0813 20:00:13.223117 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:00:13.250970737+00:00 stderr F I0813 20:00:13.239234 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:00:13.306879791+00:00 stderr F I0813 20:00:13.298383 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:00:13.306879791+00:00 stderr F I0813 20:00:13.298451 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited
2025-08-13T20:00:13.338236715+00:00 stderr F I0813 20:00:13.335162 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:13.359024598+00:00 stderr F I0813 20:00:13.358209 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:00:13.359024598+00:00 stderr F I0813 20:00:13.358297 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited
2025-08-13T20:00:13.477082374+00:00 stderr F I0813 20:00:13.471221 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:00:13.477082374+00:00 stderr F I0813 20:00:13.471283 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:00:13.477082374+00:00 stderr F I0813 20:00:13.471658 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:13.638533468+00:00 stderr F I0813 20:00:13.634291 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:00:13.638533468+00:00 stderr F I0813 20:00:13.634535 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:00:13.678753645+00:00 stderr F I0813 20:00:13.677949 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:00:13.678753645+00:00 stderr F I0813 20:00:13.678384 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:00:14.228906282+00:00 stderr F I0813 20:00:14.228151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:00:14.228906282+00:00 stderr F I0813 20:00:14.228229 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:00:14.997396433+00:00 stderr F I0813 20:00:14.993453 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt'
2025-08-13T20:00:15.392155450+00:00 stderr F I0813 20:00:15.389447 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:00:15.392155450+00:00 stderr F I0813 20:00:15.389661 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:00:16.733912929+00:00 stderr F I0813 20:00:16.729412 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:00:16.733912929+00:00 stderr F I0813 20:00:16.729568 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:00:17.055233911+00:00 stderr F I0813 20:00:17.046488 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:00:17.055233911+00:00 stderr F I0813 20:00:17.046589 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:17.084964209+00:00 stderr F I0813 20:00:17.080672 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:00:17.396449320+00:00 stderr F I0813 20:00:17.396312 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:17.396449320+00:00 stderr F I0813 20:00:17.396395 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:17.518882001+00:00 stderr F I0813 20:00:17.509358 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/openshift-service-ca.crt'
2025-08-13T20:00:17.608238829+00:00 stderr F I0813 20:00:17.607567 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:17.608238829+00:00 stderr F I0813 20:00:17.607642 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:17.971545149+00:00 stderr F I0813 20:00:17.971490 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:17.972118835+00:00 stderr F I0813 20:00:17.971982 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:17.999936048+00:00 stderr F I0813 20:00:17.976912 1 log.go:245] configmap 'openshift-config/openshift-service-ca.crt' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:00:18.198934632+00:00 stderr F I0813 20:00:18.198525 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:18.198934632+00:00 stderr F I0813 20:00:18.198631 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:00:18.375177797+00:00 stderr F I0813 20:00:18.370384 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:00:18.375177797+00:00 stderr F I0813 20:00:18.370537 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:00:18.554072558+00:00 stderr F I0813 20:00:18.553209 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:00:18.554072558+00:00 stderr F I0813 20:00:18.553370 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:00:18.920695282+00:00 stderr F I0813 20:00:18.920636 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:00:18.920943779+00:00 stderr F I0813 20:00:18.920917 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:00:19.139574513+00:00 stderr F I0813 20:00:19.137568 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:00:19.139574513+00:00 stderr F I0813 20:00:19.137636 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:00:19.404905919+00:00 stderr F I0813 20:00:19.402702 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:00:19.404905919+00:00 stderr F I0813 20:00:19.402860 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:00:19.573699642+00:00 stderr F I0813 20:00:19.573642 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:00:19.574088273+00:00 stderr F I0813 20:00:19.574064 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:00:19.724215163+00:00 stderr F I0813 20:00:19.723080 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:00:19.874118948+00:00 stderr F I0813 20:00:19.872469 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:00:19.874118948+00:00 stderr F I0813 20:00:19.872687 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:00:19.946009208+00:00 stderr F I0813 20:00:19.922655 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.075301534+00:00 stderr F I0813 20:00:20.075240 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.090052095+00:00 stderr F I0813 20:00:20.076416 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:00:20.117163328+00:00 stderr F I0813 20:00:20.105138 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:00:20.251120698+00:00 stderr F I0813 20:00:20.251042 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.314602038+00:00 stderr F I0813 20:00:20.314148 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:00:20.314602038+00:00 stderr F I0813 20:00:20.314290 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:00:20.361443863+00:00 stderr F I0813 20:00:20.361386 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.584204975+00:00 stderr F I0813 20:00:20.584143 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:00:20.584352459+00:00 stderr F I0813 20:00:20.584337 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:00:20.607314304+00:00 stderr F I0813 20:00:20.599333 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.794267785+00:00 stderr F I0813 20:00:20.794008 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:00:20.794267785+00:00 stderr F I0813 20:00:20.794119 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:00:20.801894842+00:00 stderr F I0813 20:00:20.800106 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.932976790+00:00 stderr F I0813 20:00:20.932921 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:20.967903216+00:00 stderr F I0813 20:00:20.960389 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:00:20.967903216+00:00 stderr F I0813 20:00:20.963155 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:00:21.002545324+00:00 stderr F I0813 20:00:21.001559 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:21.317566856+00:00 stderr F I0813 20:00:21.316563 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:00:21.317566856+00:00 stderr F I0813 20:00:21.316644 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:00:22.015934828+00:00 stderr F I0813 20:00:22.014466 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation
2025-08-13T20:00:22.015934828+00:00 stderr F I0813 20:00:22.014514 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation
2025-08-13T20:00:22.015934828+00:00 stderr F I0813 20:00:22.015032 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:00:22.015934828+00:00 stderr F I0813 20:00:22.015078 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:00:22.106024227+00:00 stderr F I0813 20:00:22.105687 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:00:22.106024227+00:00 stderr F I0813 20:00:22.105765 1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:00:22.130968929+00:00 stderr F I0813 20:00:22.130918 1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:00:22.131218176+00:00 stderr F I0813 20:00:22.131200 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:00:22.191311289+00:00 stderr F I0813 20:00:22.174576 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:00:22.191311289+00:00 stderr F I0813 20:00:22.174677 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:00:22.548052572+00:00 stderr F I0813 20:00:22.547727 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:00:22.548052572+00:00 stderr F I0813 20:00:22.547934 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:00:22.649770082+00:00 stderr F I0813 20:00:22.649264 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:00:22.649770082+00:00 stderr F I0813 20:00:22.649389 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:00:23.170936145+00:00 stderr F I0813 20:00:23.169369 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:00:23.170936145+00:00 stderr F I0813 20:00:23.169451 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:00:23.219909761+00:00 stderr F I0813 20:00:23.216758 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:00:23.219909761+00:00 stderr F I0813 20:00:23.216930 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:00:23.277926796+00:00 stderr F I0813 20:00:23.277766 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation
2025-08-13T20:00:23.278022798+00:00 stderr F I0813 20:00:23.278009 1 log.go:245] openshift-network-operator/openshift-service-ca.crt changed, triggering operconf reconciliation
2025-08-13T20:00:23.311979927+00:00 stderr F I0813 20:00:23.311917 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:00:23.312134401+00:00 stderr F I0813 20:00:23.312106 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:00:23.448147939+00:00 stderr F I0813 20:00:23.446470 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:00:23.448147939+00:00 stderr F I0813 20:00:23.446518 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:00:23.809745610+00:00 stderr F I0813 20:00:23.808217 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:00:23.809745610+00:00 stderr F I0813 20:00:23.808331 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:00:24.459348903+00:00 stderr F I0813 20:00:24.455964 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:00:24.459348903+00:00 stderr F I0813 20:00:24.456049 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:00:24.754925571+00:00 stderr F I0813 20:00:24.744957 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:00:24.754925571+00:00 stderr F I0813 20:00:24.745021 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:00:24.983936881+00:00 stderr F I0813 20:00:24.982250 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:00:24.983936881+00:00 stderr F I0813 20:00:24.982318 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:00:25.021522503+00:00 stderr F I0813 20:00:25.017295 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:00:25.021522503+00:00 stderr F I0813 20:00:25.017361 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:00:25.081994857+00:00 stderr F I0813 20:00:25.081905 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:00:25.082097940+00:00 stderr F I0813 20:00:25.082084 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:00:25.145287482+00:00 stderr F I0813 20:00:25.145235 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:00:25.145432446+00:00 stderr F I0813 20:00:25.145414 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:00:25.237749949+00:00 stderr F I0813 20:00:25.237611 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:00:25.237953855+00:00 stderr F I0813 20:00:25.237936 1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:00:25.276163404+00:00 stderr F I0813 20:00:25.276107 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:00:25.276324179+00:00 stderr F I0813 20:00:25.276304 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:00:25.304971936+00:00 stderr F I0813 20:00:25.304829 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:00:25.305650655+00:00 stderr F I0813 20:00:25.305185 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:00:25.352379178+00:00 stderr F I0813 20:00:25.352324 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:00:25.356604208+00:00 stderr F I0813 20:00:25.356569 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:00:25.519980045+00:00 stderr F I0813 20:00:25.519607 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:00:25.519980045+00:00 stderr F I0813 20:00:25.519720 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:00:25.547145310+00:00 stderr F I0813 20:00:25.546995 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:00:25.547145310+00:00 stderr F I0813 20:00:25.547060 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:00:25.626914625+00:00 stderr F I0813 20:00:25.621566 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:00:25.626914625+00:00 stderr F I0813 20:00:25.621871 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:00:25.678964769+00:00 stderr F I0813 20:00:25.677521 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:00:25.678964769+00:00 stderr F I0813 20:00:25.677575 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:00:25.724325682+00:00 stderr F I0813 20:00:25.724270 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:00:25.724439176+00:00 stderr F I0813 20:00:25.724418 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:00:25.755367988+00:00 stderr F I0813 20:00:25.755303 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:00:25.755481341+00:00 stderr F I0813 20:00:25.755466 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:00:25.810361636+00:00 stderr F I0813 20:00:25.810284 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:00:25.810565262+00:00 stderr F I0813 20:00:25.810536 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:00:26.008240528+00:00 stderr F I0813 20:00:26.006854 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:00:26.008240528+00:00 stderr F I0813 20:00:26.006968 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:00:26.200354916+00:00 stderr F I0813 20:00:26.200288 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:00:26.200509281+00:00 stderr F I0813 20:00:26.200482 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:00:26.409191671+00:00 stderr F I0813 20:00:26.407084 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:00:26.409191671+00:00 stderr F I0813 20:00:26.407611 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:00:26.663284947+00:00 stderr F I0813 20:00:26.663029 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:00:26.663441071+00:00 stderr F I0813 20:00:26.663423 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:00:26.803330350+00:00 stderr F I0813 20:00:26.803278 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:00:26.803432753+00:00 stderr F I0813 20:00:26.803415 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:00:27.017349632+00:00 stderr F I0813 20:00:27.017209 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:00:27.019970497+00:00 stderr F I0813 20:00:27.017413 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T20:00:27.224954812+00:00 stderr F I0813 20:00:27.221811 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:00:27.224954812+00:00 stderr F I0813 20:00:27.221904 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:00:27.410078641+00:00 stderr F I0813 20:00:27.410001 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter was successful
2025-08-13T20:00:27.410425941+00:00 stderr F I0813 20:00:27.410404 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script
2025-08-13T20:00:27.604440143+00:00 stderr F I0813 20:00:27.604250 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script was successful
2025-08-13T20:00:27.604440143+00:00 stderr F I0813 20:00:27.604339 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter
2025-08-13T20:00:27.835231354+00:00 stderr F I0813 20:00:27.833215 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:00:28.068597348+00:00 stderr F I0813 20:00:28.064478 1 log.go:245] Operconfig Controller complete
2025-08-13T20:00:28.068597348+00:00 stderr F I0813 20:00:28.064713 1 log.go:245] Reconciling Network.operator.openshift.io cluster
2025-08-13T20:00:29.775416506+00:00 stderr F I0813 20:00:29.771208 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu-host=, the list of nodes are []
2025-08-13T20:00:29.775416506+00:00 stderr F I0813 20:00:29.775167 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/dpu=, the list of nodes are []
2025-08-13T20:00:29.794362896+00:00 stderr F I0813 20:00:29.784451 1 ovn_kubernetes.go:753] For Label network.operator.openshift.io/smart-nic=, the list of nodes are []
2025-08-13T20:00:29.794362896+00:00 stderr F I0813 20:00:29.784499 1 ovn_kubernetes.go:842] OVN configuration is now &{GatewayMode: HyperShiftConfig:0xc00428ad00 DisableUDPAggregation:false DpuHostModeLabel:network.operator.openshift.io/dpu-host DpuHostModeNodes:[] DpuModeLabel:network.operator.openshift.io/dpu DpuModeNodes:[] SmartNicModeLabel:network.operator.openshift.io/smart-nic SmartNicModeNodes:[] MgmtPortResourceName:}
2025-08-13T20:00:29.804066233+00:00 stderr F I0813 20:00:29.798045 1 ovn_kubernetes.go:1661] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete; 1/1 scheduled; 1 available; generation 3 -> 3
2025-08-13T20:00:29.804066233+00:00 stderr F I0813 20:00:29.798086 1 ovn_kubernetes.go:1666] deployment openshift-ovn-kubernetes/ovnkube-control-plane rollout complete
2025-08-13T20:00:29.804066233+00:00 stderr F I0813 20:00:29.798095 1 ovn_kubernetes.go:1134] ovnkube-control-plane deployment status: progressing=false
2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823867 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823908 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823915 1 ovn_kubernetes.go:1626] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 2 -> 2
2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823921 1 ovn_kubernetes.go:1631] daemonset openshift-ovn-kubernetes/ovnkube-node rollout complete
2025-08-13T20:00:29.838588567+00:00 stderr F I0813 20:00:29.823948 1 ovn_kubernetes.go:1165] ovnkube-node DaemonSet status: progressing=false
2025-08-13T20:00:29.932167476+00:00 stderr F I0813 20:00:29.918701 1 log.go:245] reconciling (operator.openshift.io/v1, Kind=Network) /cluster
2025-08-13T20:00:29.980936696+00:00 stderr F I0813 20:00:29.975894 1 log.go:245] Apply / Create of (operator.openshift.io/v1, Kind=Network) /cluster was successful
2025-08-13T20:00:29.989299175+00:00 stderr F I0813 20:00:29.989037 1 log.go:245] Starting render phase
2025-08-13T20:00:30.016067508+00:00 stderr F I0813 20:00:30.013338 1 ovn_kubernetes.go:316] OVN_EGRESSIP_HEALTHCHECK_PORT env var is
not defined. Using: 9107 2025-08-13T20:00:30.108168064+00:00 stderr F I0813 20:00:30.107529 1 ovn_kubernetes.go:1387] IP family mode: node=single-stack, controlPlane=single-stack 2025-08-13T20:00:30.108168064+00:00 stderr F I0813 20:00:30.107568 1 ovn_kubernetes.go:1359] IP family change: updateNode=true, updateControlPlane=true 2025-08-13T20:00:30.108168064+00:00 stderr F I0813 20:00:30.107595 1 ovn_kubernetes.go:1534] OVN-Kubernetes control-plane and node already at release version 4.16.0; no changes required 2025-08-13T20:00:30.108168064+00:00 stderr F I0813 20:00:30.107618 1 ovn_kubernetes.go:524] ovnk components: ovnkube-node: isRunning=true, update=true; ovnkube-control-plane: isRunning=true, update=true 2025-08-13T20:00:30.125986172+00:00 stderr F I0813 20:00:30.124066 1 ovn_kubernetes.go:1626] daemonset openshift-network-node-identity/network-node-identity rollout complete; 1/1 scheduled; 0 unavailable; 1 available; generation 1 -> 1 2025-08-13T20:00:30.125986172+00:00 stderr F I0813 20:00:30.124102 1 ovn_kubernetes.go:1631] daemonset openshift-network-node-identity/network-node-identity rollout complete 2025-08-13T20:00:30.144326355+00:00 stderr F I0813 20:00:30.144236 1 log.go:245] Render phase done, rendered 112 objects 2025-08-13T20:00:30.213997792+00:00 stderr F I0813 20:00:30.212562 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster 2025-08-13T20:00:30.237987136+00:00 stderr F I0813 20:00:30.237936 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster was successful 2025-08-13T20:00:30.238115149+00:00 stderr F I0813 20:00:30.238100 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io 2025-08-13T20:00:30.273752145+00:00 stderr F I0813 20:00:30.273704 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /network-attachment-definitions.k8s.cni.cncf.io was 
successful 2025-08-13T20:00:30.273929241+00:00 stderr F I0813 20:00:30.273912 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io 2025-08-13T20:00:30.287072325+00:00 stderr F I0813 20:00:30.286516 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /ippools.whereabouts.cni.cncf.io was successful 2025-08-13T20:00:30.287072325+00:00 stderr F I0813 20:00:30.286581 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io 2025-08-13T20:00:30.299328935+00:00 stderr F I0813 20:00:30.299291 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /overlappingrangeipreservations.whereabouts.cni.cncf.io was successful 2025-08-13T20:00:30.299507580+00:00 stderr F I0813 20:00:30.299451 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-multus 2025-08-13T20:00:30.306408037+00:00 stderr F I0813 20:00:30.306354 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-multus was successful 2025-08-13T20:00:30.306479469+00:00 stderr F I0813 20:00:30.306465 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus 2025-08-13T20:00:30.315886657+00:00 stderr F I0813 20:00:30.314876 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus was successful 2025-08-13T20:00:30.315886657+00:00 stderr F I0813 20:00:30.315010 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools 2025-08-13T20:00:30.329411403+00:00 stderr F I0813 20:00:30.329281 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-ancillary-tools was successful 2025-08-13T20:00:30.329411403+00:00 stderr F I0813 20:00:30.329360 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus 2025-08-13T20:00:30.342455165+00:00 stderr F I0813 
20:00:30.342345 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus was successful 2025-08-13T20:00:30.342687001+00:00 stderr F I0813 20:00:30.342668 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient 2025-08-13T20:00:30.353904471+00:00 stderr F I0813 20:00:30.353878 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-transient was successful 2025-08-13T20:00:30.354048575+00:00 stderr F I0813 20:00:30.353994 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group 2025-08-13T20:00:30.368868138+00:00 stderr F I0813 20:00:30.366356 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-group was successful 2025-08-13T20:00:30.368868138+00:00 stderr F I0813 20:00:30.366421 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools 2025-08-13T20:00:30.457129684+00:00 stderr F I0813 20:00:30.454154 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ancillary-tools was successful 2025-08-13T20:00:30.457129684+00:00 stderr F I0813 20:00:30.454266 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools 2025-08-13T20:00:30.734759971+00:00 stderr F I0813 20:00:30.732826 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-ancillary-tools was successful 2025-08-13T20:00:30.734759971+00:00 stderr F I0813 20:00:30.732939 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers 2025-08-13T20:00:30.846214329+00:00 stderr F I0813 20:00:30.846151 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-cluster-readers was successful 2025-08-13T20:00:30.846328932+00:00 stderr F I0813 20:00:30.846314 1 log.go:245] reconciling 
(rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts 2025-08-13T20:00:31.033400936+00:00 stderr F I0813 20:00:31.033259 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-whereabouts was successful 2025-08-13T20:00:31.033400936+00:00 stderr F I0813 20:00:31.033344 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts 2025-08-13T20:00:31.264141776+00:00 stderr F I0813 20:00:31.263064 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/multus-whereabouts was successful 2025-08-13T20:00:31.264141776+00:00 stderr F I0813 20:00:31.263128 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni 2025-08-13T20:00:31.423354566+00:00 stderr F I0813 20:00:31.423155 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /whereabouts-cni was successful 2025-08-13T20:00:31.423354566+00:00 stderr F I0813 20:00:31.423228 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni 2025-08-13T20:00:31.642241897+00:00 stderr F I0813 20:00:31.641700 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/whereabouts-cni was successful 2025-08-13T20:00:31.642241897+00:00 stderr F I0813 20:00:31.641940 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project 2025-08-13T20:00:31.830724141+00:00 stderr F I0813 20:00:31.829337 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /net-attach-def-project was successful 2025-08-13T20:00:31.830724141+00:00 stderr F I0813 20:00:31.829470 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/default-cni-sysctl-allowlist 2025-08-13T20:00:33.021246527+00:00 stderr F I0813 20:00:33.017039 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) 
openshift-multus/default-cni-sysctl-allowlist was successful 2025-08-13T20:00:33.021246527+00:00 stderr F I0813 20:00:33.017201 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources 2025-08-13T20:00:33.214942320+00:00 stderr F I0813 20:00:33.211612 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/cni-copy-resources was successful 2025-08-13T20:00:33.225679446+00:00 stderr F I0813 20:00:33.223342 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config 2025-08-13T20:00:33.291981177+00:00 stderr F I0813 20:00:33.291921 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-multus/multus-daemon-config was successful 2025-08-13T20:00:33.292109290+00:00 stderr F I0813 20:00:33.292076 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus 2025-08-13T20:00:33.518331391+00:00 stderr F I0813 20:00:33.511285 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus was successful 2025-08-13T20:00:33.518331391+00:00 stderr F I0813 20:00:33.511361 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins 2025-08-13T20:00:33.618065725+00:00 stderr F I0813 20:00:33.613622 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/multus-additional-cni-plugins was successful 2025-08-13T20:00:33.618065725+00:00 stderr F I0813 20:00:33.613729 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa 2025-08-13T20:00:33.781043142+00:00 stderr F I0813 20:00:33.779456 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/metrics-daemon-sa was successful 2025-08-13T20:00:33.781043142+00:00 stderr F I0813 20:00:33.779529 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /metrics-daemon-role 2025-08-13T20:00:33.809218685+00:00 stderr F I0813 20:00:33.808024 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, 
Kind=ClusterRole) /metrics-daemon-role was successful 2025-08-13T20:00:33.809218685+00:00 stderr F I0813 20:00:33.808281 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding 2025-08-13T20:00:33.976896016+00:00 stderr F I0813 20:00:33.976325 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /metrics-daemon-sa-rolebinding was successful 2025-08-13T20:00:33.980126638+00:00 stderr F I0813 20:00:33.978085 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon 2025-08-13T20:00:34.074260693+00:00 stderr F I0813 20:00:34.074164 1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-multus/network-metrics-daemon was successful 2025-08-13T20:00:34.076908508+00:00 stderr F I0813 20:00:34.074419 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network 2025-08-13T20:00:34.180443360+00:00 stderr F I0813 20:00:34.179269 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-network was successful 2025-08-13T20:00:34.180443360+00:00 stderr F I0813 20:00:34.179367 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/network-metrics-service 2025-08-13T20:00:34.240868913+00:00 stderr F I0813 20:00:34.239584 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/network-metrics-service was successful 2025-08-13T20:00:34.240868913+00:00 stderr F I0813 20:00:34.239660 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:00:34.345699852+00:00 stderr F I0813 20:00:34.345343 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:34.345699852+00:00 stderr F I0813 20:00:34.345407 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) 
openshift-multus/prometheus-k8s 2025-08-13T20:00:34.494032052+00:00 stderr F I0813 20:00:34.493918 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:34.494032052+00:00 stderr F I0813 20:00:34.493988 1 log.go:245] reconciling (/v1, Kind=Service) openshift-multus/multus-admission-controller 2025-08-13T20:00:34.638011747+00:00 stderr F I0813 20:00:34.636415 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-multus/multus-admission-controller was successful 2025-08-13T20:00:34.638011747+00:00 stderr F I0813 20:00:34.636571 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-multus/multus-ac 2025-08-13T20:00:34.941888462+00:00 stderr F I0813 20:00:34.940449 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-multus/multus-ac was successful 2025-08-13T20:00:34.941888462+00:00 stderr F I0813 20:00:34.940613 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook 2025-08-13T20:00:35.142183403+00:00 stderr F I0813 20:00:35.129461 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /multus-admission-controller-webhook was successful 2025-08-13T20:00:35.142183403+00:00 stderr F I0813 20:00:35.129619 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook 2025-08-13T20:00:35.292967053+00:00 stderr F I0813 20:00:35.292333 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /multus-admission-controller-webhook was successful 2025-08-13T20:00:35.292967053+00:00 stderr F I0813 20:00:35.292402 1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /multus.openshift.io 2025-08-13T20:00:35.599894075+00:00 stderr F I0813 20:00:35.597497 1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, 
Kind=ValidatingWebhookConfiguration) /multus.openshift.io was successful 2025-08-13T20:00:35.599894075+00:00 stderr F I0813 20:00:35.597575 1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller 2025-08-13T20:00:35.786772323+00:00 stderr F I0813 20:00:35.786550 1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-multus/multus-admission-controller was successful 2025-08-13T20:00:35.786951618+00:00 stderr F I0813 20:00:35.786915 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller 2025-08-13T20:00:35.919109847+00:00 stderr F I0813 20:00:35.908447 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-multus/monitor-multus-admission-controller was successful 2025-08-13T20:00:35.919109847+00:00 stderr F I0813 20:00:35.908521 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s 2025-08-13T20:00:36.280135760+00:00 stderr F I0813 20:00:36.279934 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:36.280135760+00:00 stderr F I0813 20:00:36.279997 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s 2025-08-13T20:00:36.304390692+00:00 stderr F I0813 20:00:36.304280 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-multus/prometheus-k8s was successful 2025-08-13T20:00:36.304390692+00:00 stderr F I0813 20:00:36.304353 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules 2025-08-13T20:00:36.527641798+00:00 stderr F I0813 20:00:36.526617 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-multus/prometheus-k8s-rules was successful 2025-08-13T20:00:36.527641798+00:00 stderr F I0813 
20:00:36.526714 1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-ovn-kubernetes 2025-08-13T20:00:36.872335626+00:00 stderr F I0813 20:00:36.870403 1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-ovn-kubernetes was successful 2025-08-13T20:00:36.872335626+00:00 stderr F I0813 20:00:36.870490 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org 2025-08-13T20:00:37.039218725+00:00 stderr F I0813 20:00:37.035778 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressfirewalls.k8s.ovn.org was successful 2025-08-13T20:00:37.039218725+00:00 stderr F I0813 20:00:37.035911 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org 2025-08-13T20:00:37.061460459+00:00 stderr F I0813 20:00:37.060257 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressips.k8s.ovn.org was successful 2025-08-13T20:00:37.061460459+00:00 stderr F I0813 20:00:37.060813 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org 2025-08-13T20:00:37.405100147+00:00 stderr F I0813 20:00:37.404219 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressqoses.k8s.ovn.org was successful 2025-08-13T20:00:37.405100147+00:00 stderr F I0813 20:00:37.404308 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org 2025-08-13T20:00:37.677505535+00:00 stderr F I0813 20:00:37.676925 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminpolicybasedexternalroutes.k8s.ovn.org was successful 2025-08-13T20:00:37.677505535+00:00 stderr F I0813 20:00:37.677028 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org 2025-08-13T20:00:38.481051027+00:00 stderr 
F I0813 20:00:38.480998 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /egressservices.k8s.ovn.org was successful 2025-08-13T20:00:38.481223452+00:00 stderr F I0813 20:00:38.481204 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:00:39.484972473+00:00 stderr F I0813 20:00:39.482212 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /adminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:00:39.484972473+00:00 stderr F I0813 20:00:39.482313 1 log.go:245] reconciling (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io 2025-08-13T20:00:39.657008589+00:00 stderr F I0813 20:00:39.655702 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied 2025-08-13T20:00:39.718378388+00:00 stderr F I0813 20:00:39.716749 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:39.811242325+00:00 stderr F I0813 20:00:39.809485 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:39.926295526+00:00 stderr F I0813 20:00:39.925431 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:40.132291870+00:00 stderr F I0813 20:00:40.127019 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics is applied 2025-08-13T20:00:40.192144436+00:00 stderr F I0813 20:00:40.190426 1 log.go:245] The check 
PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied 2025-08-13T20:00:40.247094393+00:00 stderr F I0813 20:00:40.246116 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied 2025-08-13T20:00:40.264164620+00:00 stderr F I0813 20:00:40.260585 1 log.go:245] Apply / Create of (apiextensions.k8s.io/v1, Kind=CustomResourceDefinition) /baselineadminnetworkpolicies.policy.networking.k8s.io was successful 2025-08-13T20:00:40.264164620+00:00 stderr F I0813 20:00:40.260662 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node 2025-08-13T20:00:40.295207435+00:00 stderr F I0813 20:00:40.294411 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-node was successful 2025-08-13T20:00:40.295207435+00:00 stderr F I0813 20:00:40.294480 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited 2025-08-13T20:00:40.295207435+00:00 stderr F I0813 20:00:40.295031 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied 2025-08-13T20:00:40.343954825+00:00 stderr F I0813 20:00:40.341401 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-node-limited was successful 2025-08-13T20:00:40.343954825+00:00 stderr F I0813 20:00:40.341469 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited 2025-08-13T20:00:40.350194063+00:00 stderr F I0813 20:00:40.348485 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied 
2025-08-13T20:00:40.386627212+00:00 stderr F I0813 20:00:40.385264 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-nodes-identity-limited was successful
2025-08-13T20:00:40.386627212+00:00 stderr F I0813 20:00:40.385333 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited
2025-08-13T20:00:40.447356633+00:00 stderr F I0813 20:00:40.446253 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-node-limited was successful
2025-08-13T20:00:40.447356633+00:00 stderr F I0813 20:00:40.446327 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited
2025-08-13T20:00:40.512614574+00:00 stderr F I0813 20:00:40.512450 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-identity-limited was successful
2025-08-13T20:00:40.512614574+00:00 stderr F I0813 20:00:40.512525 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy
2025-08-13T20:00:40.564477983+00:00 stderr F I0813 20:00:40.564392 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-kube-rbac-proxy was successful
2025-08-13T20:00:40.574181020+00:00 stderr F I0813 20:00:40.573921 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy
2025-08-13T20:00:40.617123694+00:00 stderr F I0813 20:00:40.616045 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-node-kube-rbac-proxy was successful
2025-08-13T20:00:40.617123694+00:00 stderr F I0813 20:00:40.616113 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config
2025-08-13T20:00:40.665397691+00:00 stderr F I0813 20:00:40.665036 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-config was successful
2025-08-13T20:00:40.665397691+00:00 stderr F I0813 20:00:40.665126 1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:00:40.817811727+00:00 stderr F I0813 20:00:40.768086 1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:00:40.825312231+00:00 stderr F I0813 20:00:40.817770 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:40.865493456+00:00 stderr F I0813 20:00:40.864759 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:40.893745262+00:00 stderr F I0813 20:00:40.893356 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:40.940753422+00:00 stderr F I0813 20:00:40.933778 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:40.940753422+00:00 stderr F I0813 20:00:40.933933 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:40.959415614+00:00 stderr F I0813 20:00:40.953109 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:40.959415614+00:00 stderr F I0813 20:00:40.953208 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited
2025-08-13T20:00:41.088007401+00:00 stderr F I0813 20:00:41.078313 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/openshift-ovn-kubernetes-control-plane-limited was successful
2025-08-13T20:00:41.088007401+00:00 stderr F I0813 20:00:41.078427 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn
2025-08-13T20:00:41.111112900+00:00 stderr F I0813 20:00:41.107727 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/ovn was successful
2025-08-13T20:00:41.111112900+00:00 stderr F I0813 20:00:41.107887 1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer
2025-08-13T20:00:41.287864020+00:00 stderr F I0813 20:00:41.287155 1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-ovn-kubernetes/signer was successful
2025-08-13T20:00:41.287864020+00:00 stderr F I0813 20:00:41.287242 1 log.go:245] reconciling (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes
2025-08-13T20:00:41.573910536+00:00 stderr F I0813 20:00:41.571755 1 log.go:245] Apply / Create of (flowcontrol.apiserver.k8s.io/v1, Kind=FlowSchema) /openshift-ovn-kubernetes was successful
2025-08-13T20:00:41.573910536+00:00 stderr F I0813 20:00:41.571885 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader
2025-08-13T20:00:41.589723997+00:00 stderr F I0813 20:00:41.589567 1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found
2025-08-13T20:00:41.804696177+00:00 stderr F I0813 20:00:41.799270 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:00:41.884044949+00:00 stderr F I0813 20:00:41.882960 1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-ovn-kubernetes-cluster-reader was successful
2025-08-13T20:00:41.884044949+00:00 stderr F I0813 20:00:41.883028 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib
2025-08-13T20:00:42.031881395+00:00 stderr F I0813 20:00:42.029309 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:45.426981842+00:00 stderr F I0813 20:00:45.426759 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:45.465658575+00:00 stderr F I0813 20:00:45.460570 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-ovn-kubernetes/ovnkube-script-lib was successful
2025-08-13T20:00:45.465658575+00:00 stderr F I0813 20:00:45.460646 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules
2025-08-13T20:00:45.854964905+00:00 stderr F I0813 20:00:45.847386 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/master-rules was successful
2025-08-13T20:00:45.854964905+00:00 stderr F I0813 20:00:45.847471 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules
2025-08-13T20:00:46.049895544+00:00 stderr F I0813 20:00:46.049222 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:46.185547212+00:00 stderr F I0813 20:00:46.177358 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-ovn-kubernetes/networking-rules was successful
2025-08-13T20:00:46.185547212+00:00 stderr F I0813 20:00:46.177434 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features
2025-08-13T20:00:46.357890236+00:00 stderr F I0813 20:00:46.355693 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:00:46.557055535+00:00 stderr F I0813 20:00:46.556373 1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-config-managed/openshift-network-features was successful
2025-08-13T20:00:46.557055535+00:00 stderr F I0813 20:00:46.556443 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics
2025-08-13T20:00:46.638880888+00:00 stderr F I0813 20:00:46.638271 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:00:46.862627028+00:00 stderr F I0813 20:00:46.861683 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-control-plane-metrics was successful
2025-08-13T20:00:46.862627028+00:00 stderr F I0813 20:00:46.861773 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane
2025-08-13T20:00:47.049860316+00:00 stderr F I0813 20:00:47.046662 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:47.103034452+00:00 stderr F I0813 20:00:47.099528 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-control-plane was successful
2025-08-13T20:00:47.103034452+00:00 stderr F I0813 20:00:47.099593 1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node
2025-08-13T20:00:47.445677632+00:00 stderr F I0813 20:00:47.442375 1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-ovn-kubernetes/monitor-ovn-node was successful
2025-08-13T20:00:47.445677632+00:00 stderr F I0813 20:00:47.442477 1 log.go:245] reconciling (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node
2025-08-13T20:00:47.461549324+00:00 stderr F I0813 20:00:47.460309 1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:47.653116347+00:00 stderr F I0813 20:00:47.640629 1 log.go:245] Reconciling additional trust bundle configmap 'openshift-config/admin-kubeconfig-client-ca'
2025-08-13T20:00:48.149622854+00:00 stderr F I0813 20:00:48.147700 1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-ovn-kubernetes/ovn-kubernetes-node was successful
2025-08-13T20:00:48.149622854+00:00 stderr F I0813 20:00:48.147769 1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:00:48.711000051+00:00 stderr F I0813 20:00:48.710428 1 log.go:245] Deleted PodNetworkConnectivityCheck.controlplane.operator.openshift.io/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics because it is no more valid.
2025-08-13T20:00:48.751613939+00:00 stderr F I0813 20:00:48.751242       1 log.go:245] configmap 'openshift-config/admin-kubeconfig-client-ca' name differs from trustedCA of proxy 'cluster' or trustedCA not set; reconciliation will be skipped
2025-08-13T20:00:48.766983538+00:00 stderr F I0813 20:00:48.763258       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:00:48.766983538+00:00 stderr F I0813 20:00:48.763335       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s
2025-08-13T20:00:48.785319110+00:00 stderr F I0813 20:00:48.783615       1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found
2025-08-13T20:00:49.312439251+00:00 stderr F I0813 20:00:49.311670       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-ovn-kubernetes/prometheus-k8s was successful
2025-08-13T20:00:49.312439251+00:00 stderr F I0813 20:00:49.311743       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-host-network
2025-08-13T20:00:49.381325275+00:00 stderr F I0813 20:00:49.377963       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:00:49.780431785+00:00 stderr F I0813 20:00:49.780116       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:49.828949339+00:00 stderr F I0813 20:00:49.828457       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-host-network was successful
2025-08-13T20:00:49.828949339+00:00 stderr F I0813 20:00:49.828519       1 log.go:245] reconciling (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas
2025-08-13T20:00:50.410139431+00:00 stderr F I0813 20:00:50.408428       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:00:50.447921398+00:00 stderr F I0813 20:00:50.446764       1 log.go:245] Apply / Create of (/v1, Kind=ResourceQuota) openshift-host-network/host-network-namespace-quotas was successful
2025-08-13T20:00:50.447921398+00:00 stderr F I0813 20:00:50.447337       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane
2025-08-13T20:00:57.313215275+00:00 stderr F I0813 20:00:57.311867       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-ovn-kubernetes/ovnkube-control-plane was successful
2025-08-13T20:00:57.313215275+00:00 stderr F I0813 20:00:57.311985       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node
2025-08-13T20:00:57.313215275+00:00 stderr F I0813 20:00:57.312066       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:00:59.931671730+00:00 stderr F I0813 20:00:59.928650       1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.928441377 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997081       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.928746796 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997209       1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.997146407 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997306       1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.997279051 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997471       1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997384424 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997510       1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997484367 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997536       1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997519628 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997585       1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997542278 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997621       1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.99759535 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997647       1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.997632671 +0000 UTC))"
2025-08-13T20:00:59.998538677+00:00 stderr F I0813 20:00:59.997905       1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.997665592 +0000 UTC))"
2025-08-13T20:01:00.013692589+00:00 stderr F I0813 20:01:00.013315       1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:16 +0000 UTC to 2026-06-26 12:47:17 +0000 UTC (now=2025-08-13 20:01:00.012919427 +0000 UTC))"
2025-08-13T20:01:00.013764041+00:00 stderr F I0813 20:01:00.013712       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755114653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755114652\" (2025-08-13 18:50:52 +0000 UTC to 2026-08-13 18:50:52 +0000 UTC (now=2025-08-13 20:01:00.013687419 +0000 UTC))"
2025-08-13T20:01:03.338570499+00:00 stderr F I0813 20:01:03.337009       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-ovn-kubernetes/ovnkube-node was successful
2025-08-13T20:01:03.338570499+00:00 stderr F I0813 20:01:03.337223       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-diagnostics
2025-08-13T20:01:03.347989128+00:00 stderr F I0813 20:01:03.340281       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:01:05.775416984+00:00 stderr F I0813 20:01:05.773578       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:01:05.851148293+00:00 stderr F I0813 20:01:05.848382       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-diagnostics was successful
2025-08-13T20:01:05.851148293+00:00 stderr F I0813 20:01:05.848503       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:01:06.799594387+00:00 stderr F I0813 20:01:06.795264       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:01:06.975266097+00:00 stderr F I0813 20:01:06.965133       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:01:06.975266097+00:00 stderr F I0813 20:01:06.965226       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:01:07.408913532+00:00 stderr F I0813 20:01:07.408155       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:01:07.408913532+00:00 stderr F I0813 20:01:07.408221       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics
2025-08-13T20:01:07.505907187+00:00 stderr F I0813 20:01:07.496222       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:01:07.544588930+00:00 stderr F I0813 20:01:07.542569       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/network-diagnostics was successful
2025-08-13T20:01:07.544588930+00:00 stderr F I0813 20:01:07.542660       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics
2025-08-13T20:01:07.713043074+00:00 stderr F I0813 20:01:07.705253       1 log.go:245] PodNetworkConnectivityCheck.controlplane.operator.openshift.io/network-check-source-crc-to-openshift-apiserver-endpoint-crc -n openshift-network-diagnostics: podnetworkconnectivitychecks.controlplane.operator.openshift.io "network-check-source-crc-to-openshift-apiserver-endpoint-crc" not found
2025-08-13T20:01:07.800526638+00:00 stderr F I0813 20:01:07.798978       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-diagnostics was successful
2025-08-13T20:01:07.800526638+00:00 stderr F I0813 20:01:07.799081       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics
2025-08-13T20:01:07.868560558+00:00 stderr F I0813 20:01:07.867235       1 log.go:245] unable to determine openshift-apiserver apiserver service endpoints: no openshift-apiserver api endpoints found
2025-08-13T20:01:07.997000220+00:00 stderr F I0813 20:01:07.996687       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-default-service-cluster-0 -n openshift-network-diagnostics is applied
2025-08-13T20:01:08.001494768+00:00 stderr F I0813 20:01:07.997167       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-diagnostics was successful
2025-08-13T20:01:08.001494768+00:00 stderr F I0813 20:01:07.997252       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics
2025-08-13T20:01:08.111965388+00:00 stderr F I0813 20:01:08.111873       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:01:08.179292768+00:00 stderr F I0813 20:01:08.178584       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) kube-system/network-diagnostics was successful
2025-08-13T20:01:08.179292768+00:00 stderr F I0813 20:01:08.178677       1 log.go:245] reconciling (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source
2025-08-13T20:01:08.267942886+00:00 stderr F I0813 20:01:08.262127       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-kubernetes-apiserver-endpoint-crc -n openshift-network-diagnostics is applied
2025-08-13T20:01:08.424896791+00:00 stderr F I0813 20:01:08.421744       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-openshift-apiserver-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:01:08.425373585+00:00 stderr F I0813 20:01:08.425345       1 log.go:245] Apply / Create of (apps/v1, Kind=Deployment) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:01:08.425494888+00:00 stderr F I0813 20:01:08.425473       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-source
2025-08-13T20:01:08.557829191+00:00 stderr F I0813 20:01:08.557427       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:01:08.557973345+00:00 stderr F I0813 20:01:08.557950       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source
2025-08-13T20:01:08.574873937+00:00 stderr F I0813 20:01:08.570390       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-external -n openshift-network-diagnostics is applied
2025-08-13T20:01:08.658270425+00:00 stderr F I0813 20:01:08.653255       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=ServiceMonitor) openshift-network-diagnostics/network-check-source was successful
2025-08-13T20:01:08.658270425+00:00 stderr F I0813 20:01:08.653325       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:01:08.809928729+00:00 stderr F I0813 20:01:08.809867       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:01:08.810065923+00:00 stderr F I0813 20:01:08.810045       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s
2025-08-13T20:01:08.850252439+00:00 stderr F I0813 20:01:08.850045       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-load-balancer-api-internal -n openshift-network-diagnostics is applied
2025-08-13T20:01:09.377500883+00:00 stderr F I0813 20:01:09.377442       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-diagnostics/prometheus-k8s was successful
2025-08-13T20:01:09.377669027+00:00 stderr F I0813 20:01:09.377650       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target
2025-08-13T20:01:09.495525358+00:00 stderr F I0813 20:01:09.495391       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:01:09.495525358+00:00 stderr F I0813 20:01:09.495459       1 log.go:245] reconciling (/v1, Kind=Service) openshift-network-diagnostics/network-check-target
2025-08-13T20:01:09.585283377+00:00 stderr F I0813 20:01:09.578555       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-service-cluster -n openshift-network-diagnostics is applied
2025-08-13T20:01:09.674311386+00:00 stderr F I0813 20:01:09.670623       1 log.go:245] Apply / Create of (/v1, Kind=Service) openshift-network-diagnostics/network-check-target was successful
2025-08-13T20:01:09.764236940+00:00 stderr F I0813 20:01:09.763933       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role
2025-08-13T20:01:09.957706507+00:00 stderr F I0813 20:01:09.957533       1 log.go:245] The check PodNetworkConnectivityCheck/network-check-source-crc-to-network-check-target-crc -n openshift-network-diagnostics is applied
2025-08-13T20:01:09.982307818+00:00 stderr F I0813 20:01:09.982249       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-config-managed/openshift-network-public-role was successful
2025-08-13T20:01:09.982460653+00:00 stderr F I0813 20:01:09.982442       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding
2025-08-13T20:01:10.205196494+00:00 stderr F I0813 20:01:10.204294       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-config-managed/openshift-network-public-role-binding was successful
2025-08-13T20:01:10.205196494+00:00 stderr F I0813 20:01:10.204407       1 log.go:245] reconciling (/v1, Kind=Namespace) /openshift-network-node-identity
2025-08-13T20:01:14.365655855+00:00 stderr F I0813 20:01:14.365324       1 log.go:245] Apply / Create of (/v1, Kind=Namespace) /openshift-network-node-identity was successful
2025-08-13T20:01:14.365655855+00:00 stderr F I0813 20:01:14.365415       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity
2025-08-13T20:01:18.807159000+00:00 stderr F I0813 20:01:18.807070       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:01:18.807159000+00:00 stderr F I0813 20:01:18.807143       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity
2025-08-13T20:01:20.173082827+00:00 stderr F I0813 20:01:20.171163       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-node-identity was successful
2025-08-13T20:01:20.173082827+00:00 stderr F I0813 20:01:20.171250       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity
2025-08-13T20:01:20.947087297+00:00 stderr F I0813 20:01:20.945174       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-node-identity was successful
2025-08-13T20:01:20.947087297+00:00 stderr F I0813 20:01:20.945275       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:01:21.297330493+00:00 stderr F I0813 20:01:21.296989       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:01:21.297330493+00:00 stderr F I0813 20:01:21.297045       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases
2025-08-13T20:01:21.660375606+00:00 stderr F I0813 20:01:21.659963       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=Role) openshift-network-node-identity/network-node-identity-leases was successful
2025-08-13T20:01:21.660375606+00:00 stderr F I0813 20:01:21.660031       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2
2025-08-13T20:01:21.927616236+00:00 stderr F I0813 20:01:21.922315       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-network-node-identity/system:openshift:scc:hostnetwork-v2 was successful
2025-08-13T20:01:21.927616236+00:00 stderr F I0813 20:01:21.922433       1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm
2025-08-13T20:01:21.978028653+00:00 stderr F I0813 20:01:21.975980       1 log.go:245] Reconciling update to IngressController openshift-ingress-operator/default
2025-08-13T20:01:22.195034151+00:00 stderr F I0813 20:01:22.194690       1 log.go:245] Apply / Create of (/v1, Kind=ConfigMap) openshift-network-node-identity/ovnkube-identity-cm was successful
2025-08-13T20:01:22.195234016+00:00 stderr F I0813 20:01:22.195212       1 log.go:245] reconciling (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity
2025-08-13T20:01:22.838528918+00:00 stderr F I0813 20:01:22.832272       1 log.go:245] Apply / Create of (network.operator.openshift.io/v1, Kind=OperatorPKI) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:01:22.838528918+00:00 stderr F I0813 20:01:22.832354       1 log.go:245] reconciling (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io
2025-08-13T20:01:23.444975610+00:00 stderr F I0813 20:01:23.443616       1 log.go:245] Apply / Create of (admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration) /network-node-identity.openshift.io was successful
2025-08-13T20:01:23.444975610+00:00 stderr F I0813 20:01:23.443759       1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity
2025-08-13T20:01:23.878311757+00:00 stderr F I0813 20:01:23.872714       1 log.go:245] Apply / Create of (apps/v1, Kind=DaemonSet) openshift-network-node-identity/network-node-identity was successful
2025-08-13T20:01:23.879092539+00:00 stderr F I0813 20:01:23.878991       1 log.go:245] reconciling (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules
2025-08-13T20:01:23.995899970+00:00 stderr F I0813 20:01:23.994182       1 log.go:245] Apply / Create of (monitoring.coreos.com/v1, Kind=PrometheusRule) openshift-network-operator/openshift-network-operator-ipsec-rules was successful
2025-08-13T20:01:23.995899970+00:00 stderr F I0813 20:01:23.994291       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter
2025-08-13T20:01:28.283247368+00:00 stderr F I0813 20:01:28.283056       1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key"
2025-08-13T20:01:28.283877656+00:00 stderr F I0813 20:01:28.283516       1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt"
2025-08-13T20:01:28.284144654+00:00 stderr F I0813 20:01:28.284080       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284554       1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:28.284501904 +0000 UTC))"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284631       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:28.284602557 +0000 UTC))"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284655       1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:28.284638218 +0000 UTC))"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284674       1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:28.284660188 +0000 UTC))"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284729       1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.284692069 +0000 UTC))"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284747       1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.28473534 +0000 UTC))"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284860       1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.284752491 +0000 UTC))"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284887       1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.284871524 +0000 UTC))"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284907       1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:28.284895155 +0000 UTC))"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284936       1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:28.284925406 +0000 UTC))"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.284957       1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:28.284944016 +0000 UTC))"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.285444       1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"*.metrics.openshift-network-operator.svc\" [serving] validServingFor=[*.metrics.openshift-network-operator.svc,*.metrics.openshift-network-operator.svc.cluster.local,metrics.openshift-network-operator.svc,metrics.openshift-network-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-08-13 20:01:28.285370599 +0000 UTC))"
2025-08-13T20:01:28.286060078+00:00 stderr F I0813 20:01:28.285903       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755114653\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755114652\" (2025-08-13 18:50:52 +0000 UTC to 2026-08-13 18:50:52 +0000 UTC (now=2025-08-13 20:01:28.285885023 +0000 UTC))"
2025-08-13T20:01:29.312658341+00:00 stderr F I0813 20:01:29.310941       1 log.go:245] Apply / Create of (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-iptables-alerter was successful
2025-08-13T20:01:29.312658341+00:00 stderr F I0813 20:01:29.311047       1 log.go:245] reconciling (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter
2025-08-13T20:01:29.863595680+00:00 stderr F I0813 20:01:29.863511       1 log.go:245] Reconciling pki.network.operator.openshift.io openshift-network-node-identity/network-node-identity
2025-08-13T20:01:29.868243463+00:00 stderr F I0813 20:01:29.868114       1 log.go:245] successful reconciliation
2025-08-13T20:01:31.577566643+00:00 stderr F I0813 20:01:31.573682       1 log.go:245] Apply / Create of (/v1, Kind=ServiceAccount) openshift-network-operator/iptables-alerter was successful
2025-08-13T20:01:31.577566643+00:00 stderr F I0813 20:01:31.574006       1 log.go:245] reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter
2025-08-13T20:01:31.911241717+00:00 stderr F I0813 20:01:31.911140       1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="7cb00b31757c1394caec2ab807eb732759e4f60c33864abe1196343a32306fb6", new="9ad6ea9e8750b1797a2290d936071e1b6afeb9dca994b721070d9e8357ccc62d")
2025-08-13T20:01:31.911347140+00:00 stderr F W0813 20:01:31.911330       1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified
2025-08-13T20:01:31.911447063+00:00 stderr F I0813 20:01:31.911429       1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="4b32fe525f439299dff9032d1fedd04523f56def3ae405a8eb1dcca9b4fa85c6", new="87b482cbe679adfeab619a3107dca513400610c55f0dbc209ea08b45f985b260")
2025-08-13T20:01:31.911757862+00:00 stderr F E0813 20:01:31.911723       1 leaderelection.go:369] Failed to update lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-operator/leases/network-operator-lock?timeout=4m0s": context canceled
2025-08-13T20:01:31.911910516+00:00 stderr F I0813 20:01:31.911891       1 leaderelection.go:285] failed to renew lease openshift-network-operator/network-operator-lock: timed out waiting for the condition
2025-08-13T20:01:31.918981938+00:00 stderr F I0813 20:01:31.914450       1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:01:31.918981938+00:00 stderr F I0813 20:01:31.916186 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:31.918981938+00:00 stderr F I0813 20:01:31.916519 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:31.919079251+00:00 stderr F I0813 20:01:31.919055 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:31.919271436+00:00 stderr F I0813 20:01:31.919254 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:31.924649409+00:00 stderr F I0813 20:01:31.919711 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:31.925018960+00:00 stderr F I0813 20:01:31.924991 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:31.925073401+00:00 stderr F I0813 20:01:31.925059 1 base_controller.go:172] Shutting down ManagementStateController ... 2025-08-13T20:01:31.925122453+00:00 stderr F I0813 20:01:31.925106 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:31.925156124+00:00 stderr F I0813 20:01:31.925144 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:31.925197425+00:00 stderr F I0813 20:01:31.925182 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ... 2025-08-13T20:01:31.925228356+00:00 stderr F I0813 20:01:31.925217 1 base_controller.go:104] All ManagementStateController workers have been terminated 2025-08-13T20:01:31.925347939+00:00 stderr F I0813 20:01:31.925333 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:31.926140032+00:00 stderr F I0813 20:01:31.926120 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 
2025-08-13T20:01:31.926218674+00:00 stderr F I0813 20:01:31.926205 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ... 2025-08-13T20:01:31.926305776+00:00 stderr F I0813 20:01:31.926240 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated 2025-08-13T20:01:31.927496950+00:00 stderr F I0813 20:01:31.927469 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:31.927554022+00:00 stderr F I0813 20:01:31.927541 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929071 1 internal.go:516] "Stopping and waiting for non leader election runnables" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929147 1 internal.go:520] "Stopping and waiting for leader election runnables" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929151 1 secure_serving.go:258] Stopped listening on [::]:9104 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929197 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929514 1 log.go:245] could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter: failed to apply / update (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /openshift-iptables-alerter: Patch "https://api-int.crc.testing:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/openshift-iptables-alerter?fieldManager=cluster-network-operator%2Foperconfig&force=true": context canceled 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929620 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:31.934501850+00:00 stderr F 
I0813 20:01:31.929647 1 builder.go:330] server exited 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929764 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="dashboard-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929929 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="operconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929942 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="infrastructureconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929954 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="ingress-config-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929946 1 log.go:245] reconciling (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929976 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="clusterconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930177 1 log.go:245] could not apply (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: failed to apply / update (/v1, Kind=ConfigMap) openshift-network-operator/iptables-alerter-script: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930268 1 log.go:245] reconciling (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930294 1 controller.go:242] "All workers finished" controller="dashboard-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930368 1 controller.go:242] "All workers finished" controller="ingress-config-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F 
I0813 20:01:31.930381 1 controller.go:242] "All workers finished" controller="infrastructureconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930419 1 controller.go:242] "All workers finished" controller="clusterconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930448 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="signer-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930454 1 controller.go:242] "All workers finished" controller="signer-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930465 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="pki-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930474 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="egress-router-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930479 1 controller.go:242] "All workers finished" controller="egress-router-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930489 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930496 1 controller.go:242] "All workers finished" controller="configmap-trust-bundle-injector-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930505 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="pod-watcher" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930510 1 controller.go:242] "All workers finished" controller="pod-watcher" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930519 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="allowlist-controller" 
2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.930525 1 controller.go:242] "All workers finished" controller="allowlist-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.929965 1 controller.go:240] "Shutdown signal received, waiting for all workers to finish" controller="proxyconfig-controller" 2025-08-13T20:01:31.934501850+00:00 stderr F I0813 20:01:31.931928 1 controller.go:242] "All workers finished" controller="proxyconfig-controller" 2025-08-13T20:01:31.937819265+00:00 stderr F I0813 20:01:31.937655 1 log.go:245] could not apply (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: failed to apply / update (apps/v1, Kind=DaemonSet) openshift-network-operator/iptables-alerter: client rate limiter Wait returned an error: context canceled 2025-08-13T20:01:35.298495932+00:00 stderr F E0813 20:01:35.297231 1 leaderelection.go:308] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "network-operator-lock": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:01:35.298495932+00:00 stderr F W0813 20:01:35.297515 1 leaderelection.go:85] leader election lost
[archive entry: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/1.log]
2025-12-13T00:13:14.036850881+00:00 stdout F serving on 8888 2025-12-13T00:13:14.037025797+00:00 stdout F serving on 8080 2025-12-13T00:15:37.037028718+00:00 stdout F Serving canary healthcheck request 2025-12-13T00:16:37.070969999+00:00 stdout F Serving canary healthcheck request 2025-12-13T00:17:37.096125510+00:00 stdout F Serving canary healthcheck request 2025-12-13T00:18:37.129411497+00:00 stdout F Serving canary healthcheck request 2025-12-13T00:19:37.150379488+00:00 stdout F Serving canary healthcheck request 2025-12-13T00:21:37.186900943+00:00 stdout F Serving canary healthcheck request
[archive entry: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary/0.log]
2025-08-13T19:59:34.879020507+00:00 stdout F serving on 8888 2025-08-13T19:59:35.198359410+00:00 stdout F serving on 8080 2025-08-13T20:08:02.339625798+00:00 stdout F 
Serving canary healthcheck request 2025-08-13T20:09:02.392271856+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:10:02.482997678+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:11:02.534550541+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:12:02.579987363+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:13:02.622390324+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:14:02.665508540+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:15:02.709870087+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:16:02.748527721+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:17:02.809741878+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:18:03.008946071+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:19:04.599125722+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:20:04.657458550+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:21:04.729419456+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:22:04.794676989+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:23:04.860257187+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:24:04.910380940+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:25:04.970661472+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:26:05.020180631+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:27:05.076309120+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:28:05.120889853+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:29:05.169859934+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:30:05.293400605+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:31:05.339396053+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:32:05.396747728+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:33:05.450110989+00:00 stdout F 
Serving canary healthcheck request 2025-08-13T20:34:05.494021784+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:35:05.541527757+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:36:05.601877841+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:37:05.651693983+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:38:05.693964495+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:39:05.739405167+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:40:05.786630512+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:41:05.844252540+00:00 stdout F Serving canary healthcheck request 2025-08-13T20:42:05.898761088+00:00 stdout F Serving canary healthcheck request
[archive entry: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer/0.log]
2025-08-13T20:00:30.228096014+00:00 stderr F I0813 20:00:30.192242 1 cmd.go:92] &{ true {false} installer true map[cert-configmaps:0xc000a0b9a0 cert-dir:0xc000a0bb80 cert-secrets:0xc000a0b900 configmaps:0xc000a0b4a0 namespace:0xc000a0b2c0 optional-cert-configmaps:0xc000a0bae0 optional-cert-secrets:0xc000a0ba40 optional-configmaps:0xc000a0b5e0 optional-secrets:0xc000a0b540 pod:0xc000a0b360 pod-manifest-dir:0xc000a0b720 resource-dir:0xc000a0b680 revision:0xc000a0b220 secrets:0xc000a0b400 v:0xc000a20fa0] [0xc000a20fa0 0xc000a0b220 0xc000a0b2c0 0xc000a0b360 0xc000a0b680 0xc000a0b720 0xc000a0b4a0 0xc000a0b5e0 0xc000a0b400 0xc000a0b540 0xc000a0bb80 0xc000a0b9a0 0xc000a0bae0 0xc000a0b900 0xc000a0ba40] [] map[cert-configmaps:0xc000a0b9a0 cert-dir:0xc000a0bb80 cert-secrets:0xc000a0b900 configmaps:0xc000a0b4a0 help:0xc000a21360 kubeconfig:0xc000a0b180 log-flush-frequency:0xc000a20f00 namespace:0xc000a0b2c0 optional-cert-configmaps:0xc000a0bae0 optional-cert-secrets:0xc000a0ba40 optional-configmaps:0xc000a0b5e0 optional-secrets:0xc000a0b540 pod:0xc000a0b360 pod-manifest-dir:0xc000a0b720 pod-manifests-lock-file:0xc000a0b860 resource-dir:0xc000a0b680 revision:0xc000a0b220 secrets:0xc000a0b400 timeout-duration:0xc000a0b7c0 v:0xc000a20fa0 vmodule:0xc000a21040] [0xc000a0b180 0xc000a0b220 0xc000a0b2c0 0xc000a0b360 0xc000a0b400 0xc000a0b4a0 0xc000a0b540 0xc000a0b5e0 0xc000a0b680 0xc000a0b720 0xc000a0b7c0 0xc000a0b860 0xc000a0b900 0xc000a0b9a0 0xc000a0ba40 0xc000a0bae0 0xc000a0bb80 0xc000a20f00 0xc000a20fa0 0xc000a21040 0xc000a21360] [0xc000a0b9a0 0xc000a0bb80 0xc000a0b900 0xc000a0b4a0 0xc000a21360 0xc000a0b180 0xc000a20f00 0xc000a0b2c0 0xc000a0bae0 0xc000a0ba40 0xc000a0b5e0 0xc000a0b540 0xc000a0b360 0xc000a0b720 0xc000a0b860 0xc000a0b680 0xc000a0b220 0xc000a0b400 0xc000a0b7c0 0xc000a20fa0 0xc000a21040] map[104:0xc000a21360 118:0xc000a20fa0] [] -1 0 0xc000a023c0 true 0xa51380 []} 2025-08-13T20:00:30.241164866+00:00 stderr F I0813 20:00:30.240993 1 cmd.go:93]
(*installerpod.InstallOptions)(0xc000a14340)({ 2025-08-13T20:00:30.241164866+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:00:30.241164866+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:00:30.241164866+00:00 stderr F Revision: (string) (len=1) "9", 2025-08-13T20:00:30.241164866+00:00 stderr F NodeName: (string) "", 2025-08-13T20:00:30.241164866+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-apiserver", 2025-08-13T20:00:30.241164866+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-apiserver-pod", 2025-08-13T20:00:30.241164866+00:00 stderr F SecretNamePrefixes: ([]string) (len=3 cap=4) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=11) "etcd-client", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=34) "localhost-recovery-serving-certkey", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=17) "encryption-config", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "webhook-authenticator" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=18) "kube-apiserver-pod", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=37) "kube-apiserver-cert-syncer-kubeconfig", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=28) "bound-sa-token-signing-certs", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=15) "etcd-serving-ca", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=18) "kubelet-serving-ca", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=22) "sa-token-signing-certs", 
2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=29) "kube-apiserver-audit-policies" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=3 cap=4) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=14) "oauth-metadata", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=12) "cloud-config", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=24) "kube-apiserver-server-ca" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F CertSecretNames: ([]string) (len=10 cap=16) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=17) "aggregator-client", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=30) "localhost-serving-cert-certkey", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=31) "service-network-serving-certkey", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=37) "external-loadbalancer-serving-certkey", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=37) "internal-loadbalancer-serving-certkey", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=33) "bound-service-account-signing-key", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=40) "control-plane-node-admin-client-cert-key", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=31) "check-endpoints-client-cert-key", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=14) "kubelet-client", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=16) "node-kubeconfigs" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=17) "user-serving-cert", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-000", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) 
"user-serving-cert-001", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-002", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-003", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-004", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-005", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-006", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-007", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-008", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=21) "user-serving-cert-009" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=9) "client-ca", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=29) "control-plane-node-kubeconfig", 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=26) "check-endpoints-kubeconfig" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:00:30.241164866+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2025-08-13T20:00:30.241164866+00:00 stderr F }, 2025-08-13T20:00:30.241164866+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", 2025-08-13T20:00:30.241164866+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:00:30.241164866+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:00:30.241164866+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:00:30.241164866+00:00 stderr F 
StaticPodManifestsLockFile: (string) "", 2025-08-13T20:00:30.241164866+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:00:30.241164866+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:00:30.241164866+00:00 stderr F }) 2025-08-13T20:00:30.264011288+00:00 stderr F I0813 20:00:30.263917 1 cmd.go:410] Getting controller reference for node crc 2025-08-13T20:00:30.302991629+00:00 stderr F I0813 20:00:30.302663 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2025-08-13T20:00:30.336711361+00:00 stderr F I0813 20:00:30.311214 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2025-08-13T20:01:00.316616777+00:00 stderr F I0813 20:01:00.315386 1 cmd.go:521] Getting installer pods for node crc 2025-08-13T20:01:05.767036875+00:00 stderr F I0813 20:01:05.763385 1 cmd.go:539] Latest installer revision for node crc is: 9 2025-08-13T20:01:05.767036875+00:00 stderr F I0813 20:01:05.763700 1 cmd.go:428] Querying kubelet version for node crc 2025-08-13T20:01:06.853768102+00:00 stderr F I0813 20:01:06.853476 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:01:06.869727917+00:00 stderr F I0813 20:01:06.869633 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9" ... 2025-08-13T20:01:06.878019154+00:00 stderr F I0813 20:01:06.871148 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9" ... 2025-08-13T20:01:06.878019154+00:00 stderr F I0813 20:01:06.871194 1 cmd.go:226] Getting secrets ... 
2025-08-13T20:01:07.408893731+00:00 stderr F I0813 20:01:07.408596 1 copy.go:32] Got secret openshift-kube-apiserver/etcd-client-9 2025-08-13T20:01:08.339296910+00:00 stderr F I0813 20:01:08.339174 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-client-token-9 2025-08-13T20:01:08.436071690+00:00 stderr F I0813 20:01:08.429282 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-serving-certkey-9 2025-08-13T20:01:08.510517961+00:00 stderr F I0813 20:01:08.510440 1 copy.go:24] Failed to get secret openshift-kube-apiserver/encryption-config-9: secrets "encryption-config-9" not found 2025-08-13T20:01:08.565021486+00:00 stderr F I0813 20:01:08.564954 1 copy.go:32] Got secret openshift-kube-apiserver/webhook-authenticator-9 2025-08-13T20:01:08.565151519+00:00 stderr F I0813 20:01:08.565130 1 cmd.go:239] Getting config maps ... 2025-08-13T20:01:08.623391170+00:00 stderr F I0813 20:01:08.623334 1 copy.go:60] Got configMap openshift-kube-apiserver/bound-sa-token-signing-certs-9 2025-08-13T20:01:08.683964707+00:00 stderr F I0813 20:01:08.683901 1 copy.go:60] Got configMap openshift-kube-apiserver/config-9 2025-08-13T20:01:08.808296833+00:00 stderr F I0813 20:01:08.808012 1 copy.go:60] Got configMap openshift-kube-apiserver/etcd-serving-ca-9 2025-08-13T20:01:09.196587924+00:00 stderr F I0813 20:01:09.196529 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-audit-policies-9 2025-08-13T20:01:09.493872661+00:00 stderr F I0813 20:01:09.486134 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-cert-syncer-kubeconfig-9 2025-08-13T20:01:09.605195335+00:00 stderr F I0813 20:01:09.594112 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-pod-9 2025-08-13T20:01:09.927254178+00:00 stderr F I0813 20:01:09.919756 1 copy.go:60] Got configMap openshift-kube-apiserver/kubelet-serving-ca-9 2025-08-13T20:01:10.003346238+00:00 stderr F I0813 20:01:10.003247 1 copy.go:60] Got configMap 
openshift-kube-apiserver/sa-token-signing-certs-9 2025-08-13T20:01:10.221272442+00:00 stderr F I0813 20:01:10.218353 1 copy.go:52] Failed to get config map openshift-kube-apiserver/cloud-config-9: configmaps "cloud-config-9" not found 2025-08-13T20:01:10.326975286+00:00 stderr F I0813 20:01:10.319482 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-server-ca-9 2025-08-13T20:01:14.409000361+00:00 stderr F I0813 20:01:14.407728 1 copy.go:60] Got configMap openshift-kube-apiserver/oauth-metadata-9 2025-08-13T20:01:14.409000361+00:00 stderr F I0813 20:01:14.407859 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/etcd-client" ... 2025-08-13T20:01:14.409000361+00:00 stderr F I0813 20:01:14.408445 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/etcd-client/tls.crt" ... 2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409156 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/etcd-client/tls.key" ... 2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409353 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token" ... 2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409407 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token/ca.crt" ... 2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409621 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.409896 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token/service-ca.crt" ... 
2025-08-13T20:01:14.410232186+00:00 stderr F I0813 20:01:14.410201 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:01:14.411103001+00:00 stderr F I0813 20:01:14.410373 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-serving-certkey" ... 2025-08-13T20:01:14.411103001+00:00 stderr F I0813 20:01:14.410522 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-serving-certkey/tls.crt" ... 2025-08-13T20:01:14.411103001+00:00 stderr F I0813 20:01:14.410745 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/localhost-recovery-serving-certkey/tls.key" ... 2025-08-13T20:01:14.420639313+00:00 stderr F I0813 20:01:14.420544 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/webhook-authenticator" ... 2025-08-13T20:01:14.421512138+00:00 stderr F I0813 20:01:14.420869 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/secrets/webhook-authenticator/kubeConfig" ... 2025-08-13T20:01:14.421512138+00:00 stderr F I0813 20:01:14.421250 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/bound-sa-token-signing-certs" ... 2025-08-13T20:01:14.421512138+00:00 stderr F I0813 20:01:14.421475 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/bound-sa-token-signing-certs/service-account-001.pub" ... 2025-08-13T20:01:14.422232248+00:00 stderr F I0813 20:01:14.421654 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/config" ... 
2025-08-13T20:01:14.422232248+00:00 stderr F I0813 20:01:14.421763 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/config/config.yaml" ...
2025-08-13T20:01:14.422232248+00:00 stderr F I0813 20:01:14.422214 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/etcd-serving-ca" ...
2025-08-13T20:01:14.426586313+00:00 stderr F I0813 20:01:14.422292 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/etcd-serving-ca/ca-bundle.crt" ...
2025-08-13T20:01:14.426586313+00:00 stderr F I0813 20:01:14.422534 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-audit-policies" ...
2025-08-13T20:01:14.447609162+00:00 stderr F I0813 20:01:14.446867 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-audit-policies/policy.yaml" ...
2025-08-13T20:01:14.447609162+00:00 stderr F I0813 20:01:14.447272 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-cert-syncer-kubeconfig" ...
2025-08-13T20:01:14.447609162+00:00 stderr F I0813 20:01:14.447334 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig" ...
2025-08-13T20:01:14.447697575+00:00 stderr F I0813 20:01:14.447640 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod" ...
2025-08-13T20:01:14.449193027+00:00 stderr F I0813 20:01:14.447761 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod/kube-apiserver-startup-monitor-pod.yaml" ...
2025-08-13T20:01:14.449193027+00:00 stderr F I0813 20:01:14.448156 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod/pod.yaml" ...
2025-08-13T20:01:14.449193027+00:00 stderr F I0813 20:01:14.448585 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod/version" ...
2025-08-13T20:01:14.449193027+00:00 stderr F I0813 20:01:14.448743 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-pod/forceRedeploymentReason" ...
2025-08-13T20:01:14.481921090+00:00 stderr F I0813 20:01:14.481337 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kubelet-serving-ca" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.515599 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kubelet-serving-ca/ca-bundle.crt" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.516600 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/sa-token-signing-certs" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.516819 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/sa-token-signing-certs/service-account-001.pub" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.518506 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/sa-token-signing-certs/service-account-002.pub" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.519250 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/sa-token-signing-certs/service-account-003.pub" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.519910 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-server-ca" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.520220 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/kube-apiserver-server-ca/ca-bundle.crt" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.520725 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/oauth-metadata" ...
2025-08-13T20:01:14.527044517+00:00 stderr F I0813 20:01:14.521051 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/configmaps/oauth-metadata/oauthMetadata" ...
2025-08-13T20:01:14.536874107+00:00 stderr F I0813 20:01:14.535308 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs" ...
2025-08-13T20:01:14.536874107+00:00 stderr F I0813 20:01:14.535433 1 cmd.go:226] Getting secrets ...
2025-08-13T20:01:18.851315639+00:00 stderr F I0813 20:01:18.840609 1 copy.go:32] Got secret openshift-kube-apiserver/aggregator-client
2025-08-13T20:01:20.264538334+00:00 stderr F I0813 20:01:20.257589 1 copy.go:32] Got secret openshift-kube-apiserver/bound-service-account-signing-key
2025-08-13T20:01:20.938573184+00:00 stderr F I0813 20:01:20.927406 1 copy.go:32] Got secret openshift-kube-apiserver/check-endpoints-client-cert-key
2025-08-13T20:01:21.314140603+00:00 stderr F I0813 20:01:21.313762 1 copy.go:32] Got secret openshift-kube-apiserver/control-plane-node-admin-client-cert-key
2025-08-13T20:01:21.682394503+00:00 stderr F I0813 20:01:21.667018 1 copy.go:32] Got secret openshift-kube-apiserver/external-loadbalancer-serving-certkey
2025-08-13T20:01:21.964411525+00:00 stderr F I0813 20:01:21.964309 1 copy.go:32] Got secret openshift-kube-apiserver/internal-loadbalancer-serving-certkey
2025-08-13T20:01:22.200873837+00:00 stderr F I0813 20:01:22.198399 1 copy.go:32] Got secret openshift-kube-apiserver/kubelet-client
2025-08-13T20:01:22.848864233+00:00 stderr F I0813 20:01:22.842456 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-serving-cert-certkey
2025-08-13T20:01:23.458539357+00:00 stderr F I0813 20:01:23.458246 1 copy.go:32] Got secret openshift-kube-apiserver/node-kubeconfigs
2025-08-13T20:01:23.877878804+00:00 stderr F I0813 20:01:23.866135 1 copy.go:32] Got secret openshift-kube-apiserver/service-network-serving-certkey
2025-08-13T20:01:23.987633334+00:00 stderr F I0813 20:01:23.987242 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert: secrets "user-serving-cert" not found
2025-08-13T20:01:27.081947854+00:00 stderr F I0813 20:01:27.081695 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-000: secrets "user-serving-cert-000" not found
2025-08-13T20:01:31.496984765+00:00 stderr F I0813 20:01:31.490588 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-001: secrets
"user-serving-cert-001" not found
2025-08-13T20:01:31.589915885+00:00 stderr F I0813 20:01:31.563334 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-002: secrets "user-serving-cert-002" not found
2025-08-13T20:01:36.824979329+00:00 stderr F I0813 20:01:36.823316 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: secrets "user-serving-cert-003" not found
2025-08-13T20:01:50.825746244+00:00 stderr F I0813 20:01:50.825271 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-004?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2025-08-13T20:01:56.153885648+00:00 stderr F I0813 20:01:56.150536 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: secrets "user-serving-cert-004" not found
2025-08-13T20:01:59.443927570+00:00 stderr F I0813 20:01:59.433966 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-005: secrets "user-serving-cert-005" not found
2025-08-13T20:02:01.285415308+00:00 stderr F I0813 20:02:01.285274 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-006: secrets "user-serving-cert-006" not found
2025-08-13T20:02:03.459094407+00:00 stderr F I0813 20:02:03.457472 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-007: secrets "user-serving-cert-007" not found
2025-08-13T20:02:05.722268108+00:00 stderr F I0813 20:02:05.721390 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-008: secrets "user-serving-cert-008" not found
2025-08-13T20:02:09.365492892+00:00 stderr F I0813 20:02:09.362561 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-009: secrets "user-serving-cert-009" not found
2025-08-13T20:02:09.365492892+00:00 stderr F I0813 20:02:09.362663 1 cmd.go:239] Getting
config maps ...
2025-08-13T20:02:13.044816027+00:00 stderr F I0813 20:02:13.044401 1 copy.go:60] Got configMap openshift-kube-apiserver/aggregator-client-ca
2025-08-13T20:02:15.097919247+00:00 stderr F I0813 20:02:15.089376 1 copy.go:60] Got configMap openshift-kube-apiserver/check-endpoints-kubeconfig
2025-08-13T20:02:16.180974263+00:00 stderr F I0813 20:02:16.179374 1 copy.go:60] Got configMap openshift-kube-apiserver/client-ca
2025-08-13T20:02:17.837210851+00:00 stderr F I0813 20:02:17.830667 1 copy.go:60] Got configMap openshift-kube-apiserver/control-plane-node-kubeconfig
2025-08-13T20:02:20.534021793+00:00 stderr F I0813 20:02:20.531035 1 copy.go:60] Got configMap openshift-kube-apiserver/trusted-ca-bundle
2025-08-13T20:02:20.535981769+00:00 stderr F I0813 20:02:20.535950 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client" ...
2025-08-13T20:02:20.536941226+00:00 stderr F I0813 20:02:20.536680 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.crt" ...
2025-08-13T20:02:20.537939034+00:00 stderr F I0813 20:02:20.537889 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.key" ...
2025-08-13T20:02:20.538769028+00:00 stderr F I0813 20:02:20.538690 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key" ...
2025-08-13T20:02:20.538769028+00:00 stderr F I0813 20:02:20.538732 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.key" ...
2025-08-13T20:02:20.540256270+00:00 stderr F I0813 20:02:20.540170 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.pub" ...
2025-08-13T20:02:20.540469467+00:00 stderr F I0813 20:02:20.540399 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key" ...
2025-08-13T20:02:20.540469467+00:00 stderr F I0813 20:02:20.540461 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.crt" ...
2025-08-13T20:02:20.540669492+00:00 stderr F I0813 20:02:20.540589 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.key" ...
2025-08-13T20:02:20.540817227+00:00 stderr F I0813 20:02:20.540735 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key" ...
2025-08-13T20:02:20.540834697+00:00 stderr F I0813 20:02:20.540814 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.crt" ...
2025-08-13T20:02:20.541009062+00:00 stderr F I0813 20:02:20.540964 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.key" ...
2025-08-13T20:02:20.541183147+00:00 stderr F I0813 20:02:20.541137 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey" ...
2025-08-13T20:02:20.541195297+00:00 stderr F I0813 20:02:20.541179 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.crt" ...
2025-08-13T20:02:20.541486776+00:00 stderr F I0813 20:02:20.541436 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.key" ...
2025-08-13T20:02:20.541641640+00:00 stderr F I0813 20:02:20.541607 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey" ...
2025-08-13T20:02:20.541641640+00:00 stderr F I0813 20:02:20.541625 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt" ...
2025-08-13T20:02:20.630024441+00:00 stderr F I0813 20:02:20.629923 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" ...
2025-08-13T20:02:20.630249658+00:00 stderr F I0813 20:02:20.630212 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client" ...
2025-08-13T20:02:20.630264268+00:00 stderr F I0813 20:02:20.630246 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.crt" ...
2025-08-13T20:02:20.630597198+00:00 stderr F I0813 20:02:20.630538 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.key" ...
2025-08-13T20:02:20.630886386+00:00 stderr F I0813 20:02:20.630822 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey" ...
2025-08-13T20:02:20.630886386+00:00 stderr F I0813 20:02:20.630870 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.crt" ...
2025-08-13T20:02:20.631072681+00:00 stderr F I0813 20:02:20.631039 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.key" ...
2025-08-13T20:02:20.632426470+00:00 stderr F I0813 20:02:20.632312 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs" ...
2025-08-13T20:02:20.632426470+00:00 stderr F I0813 20:02:20.632404 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-ext.kubeconfig" ...
2025-08-13T20:02:20.632697428+00:00 stderr F I0813 20:02:20.632617 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-int.kubeconfig" ...
2025-08-13T20:02:20.633043997+00:00 stderr F I0813 20:02:20.632913 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig" ...
2025-08-13T20:02:20.633158671+00:00 stderr F I0813 20:02:20.633103 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig" ...
2025-08-13T20:02:20.633451849+00:00 stderr F I0813 20:02:20.633357 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey" ...
2025-08-13T20:02:20.633451849+00:00 stderr F I0813 20:02:20.633396 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.crt" ...
2025-08-13T20:02:20.635386384+00:00 stderr F I0813 20:02:20.635321 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.key" ...
2025-08-13T20:02:20.635597430+00:00 stderr F I0813 20:02:20.635523 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca" ...
2025-08-13T20:02:20.636249549+00:00 stderr F I0813 20:02:20.636167 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ...
2025-08-13T20:02:20.636457635+00:00 stderr F I0813 20:02:20.636391 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig" ...
2025-08-13T20:02:20.636457635+00:00 stderr F I0813 20:02:20.636410 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig/kubeconfig" ...
2025-08-13T20:02:20.637398282+00:00 stderr F I0813 20:02:20.636551 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca" ...
2025-08-13T20:02:20.637398282+00:00 stderr F I0813 20:02:20.636573 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca/ca-bundle.crt" ...
2025-08-13T20:02:20.637398282+00:00 stderr F I0813 20:02:20.637108 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig" ...
2025-08-13T20:02:20.637398282+00:00 stderr F I0813 20:02:20.637231 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig/kubeconfig" ...
2025-08-13T20:02:20.637574647+00:00 stderr F I0813 20:02:20.637490 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle" ...
2025-08-13T20:02:20.637733881+00:00 stderr F I0813 20:02:20.637640 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ...
2025-08-13T20:02:20.638261256+00:00 stderr F I0813 20:02:20.638139 1 cmd.go:332] Getting pod configmaps/kube-apiserver-pod-9 -n openshift-kube-apiserver 2025-08-13T20:02:21.120887644+00:00 stderr F I0813 20:02:21.117049 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:02:21.120887644+00:00 stderr F I0813 20:02:21.117113 1 cmd.go:376] Writing a pod under "kube-apiserver-pod.yaml" key 2025-08-13T20:02:21.120887644+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"9"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. 
if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. 
There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 
--permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"9"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"n
ame":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url= 2025-08-13T20:02:21.120947706+00:00 stderr F 
https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:02:21.130930681+00:00 stderr F I0813 20:02:21.122047 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/kube-apiserver-pod.yaml" ... 2025-08-13T20:02:21.130930681+00:00 stderr F I0813 20:02:21.124085 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 
2025-08-13T20:02:21.130930681+00:00 stderr F I0813 20:02:21.124100 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 2025-08-13T20:02:21.130930681+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"9"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. 
we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. 
If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 
--permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"9"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"n
ame":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-a 2025-08-13T20:02:21.131093105+00:00 stderr F 
piserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:02:21.131093105+00:00 stderr F I0813 20:02:21.124736 1 cmd.go:376] Writing a pod under "kube-apiserver-startup-monitor-pod.yaml" key 2025-08-13T20:02:21.131093105+00:00 stderr F 
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"9"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=9","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"
tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:02:21.131093105+00:00 stderr F I0813 20:02:21.125040 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9/kube-apiserver-startup-monitor-pod.yaml" ... 2025-08-13T20:02:21.131093105+00:00 stderr F I0813 20:02:21.125152 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml" ... 2025-08-13T20:02:21.131093105+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"9"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-9"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=9","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name
":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} ././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-contro0000755000175000017500000000000015117130646033114 5ustar zuulzuul././@LongLink0000644000000000000000000000033200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-contro0000755000175000017500000000000015117130654033113 5ustar zuulzuul././@LongLink0000644000000000000000000000033700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-contro0000644000175000017500000006147315117130646033131 0ustar 
zuulzuul2025-08-13T20:11:02.865431268+00:00 stderr F I0813 20:11:02.864881 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:11:02.865957823+00:00 stderr F I0813 20:11:02.865900 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:11:02.869571566+00:00 stderr F I0813 20:11:02.869444 1 observer_polling.go:159] Starting file observer 2025-08-13T20:11:02.871168652+00:00 stderr F I0813 20:11:02.871084 1 builder.go:299] route-controller-manager version 4.16.0-202406131906.p0.g3112b45.assembly.stream.el9-3112b45-3112b458983c6fca6f77d5a945fb0026186dace6 2025-08-13T20:11:02.873305093+00:00 stderr F I0813 20:11:02.872602 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:11:03.231962636+00:00 stderr F I0813 20:11:03.230544 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:11:03.241153190+00:00 stderr F I0813 20:11:03.241104 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:11:03.241245003+00:00 stderr F I0813 20:11:03.241227 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:11:03.241307574+00:00 stderr F I0813 20:11:03.241291 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:11:03.241343445+00:00 stderr F I0813 20:11:03.241331 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:11:03.248281334+00:00 stderr F I0813 20:11:03.248242 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:11:03.248348536+00:00 stderr F W0813 20:11:03.248335 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:11:03.248401628+00:00 stderr F W0813 20:11:03.248388 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:11:03.248685216+00:00 stderr F I0813 20:11:03.248665 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:11:03.252619539+00:00 stderr F I0813 20:11:03.252579 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-08-13 20:11:03.252521796 +0000 UTC))" 2025-08-13T20:11:03.252898317+00:00 stderr F I0813 20:11:03.252834 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:11:03.253358230+00:00 stderr F I0813 20:11:03.253304 1 leaderelection.go:250] attempting to acquire leader lease openshift-route-controller-manager/openshift-route-controllers... 
2025-08-13T20:11:03.253598927+00:00 stderr F I0813 20:11:03.253574 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115863\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115862\" (2025-08-13 19:11:02 +0000 UTC to 2026-08-13 19:11:02 +0000 UTC (now=2025-08-13 20:11:03.253525135 +0000 UTC))" 2025-08-13T20:11:03.253748771+00:00 stderr F I0813 20:11:03.253659 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:11:03.253748771+00:00 stderr F I0813 20:11:03.253740 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:11:03.253970027+00:00 stderr F I0813 20:11:03.253765 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:11:03.253970027+00:00 stderr F I0813 20:11:03.253821 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:11:03.254063190+00:00 stderr F I0813 20:11:03.253670 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:11:03.254248195+00:00 stderr F I0813 20:11:03.254181 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:11:03.254374039+00:00 stderr F I0813 20:11:03.254319 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:11:03.254626986+00:00 stderr F I0813 20:11:03.253603 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:11:03.254626986+00:00 stderr F I0813 20:11:03.254609 1 shared_informer.go:311] 
Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:11:03.254626986+00:00 stderr F I0813 20:11:03.254372 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:11:03.256473759+00:00 stderr F I0813 20:11:03.256445 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.257269042+00:00 stderr F I0813 20:11:03.257244 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.263174311+00:00 stderr F I0813 20:11:03.263124 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.266734733+00:00 stderr F I0813 20:11:03.266693 1 leaderelection.go:260] successfully acquired lease openshift-route-controller-manager/openshift-route-controllers 2025-08-13T20:11:03.268959017+00:00 stderr F I0813 20:11:03.268857 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-route-controller-manager", Name:"openshift-route-controllers", UID:"2ba9fc4c-f1d7-4b43-b8a4-0a6afbf10f5f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"33361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' route-controller-manager-776b8b7477-sfpvs_f8c2bc95-1e3b-4dd4-b71e-62fdd54204d3 became leader 2025-08-13T20:11:03.285414649+00:00 stderr F I0813 20:11:03.273726 1 controller_manager.go:36] Starting "openshift.io/ingress-to-route" 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297503 1 ingress.go:262] ingress-to-route metrics registered with prometheus 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297553 1 controller_manager.go:46] Started "openshift.io/ingress-to-route" 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297567 1 controller_manager.go:36] Starting "openshift.io/ingress-ip" 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297574 1 
controller_manager.go:46] Started "openshift.io/ingress-ip" 2025-08-13T20:11:03.297621429+00:00 stderr F I0813 20:11:03.297580 1 controller_manager.go:48] Started Route Controllers 2025-08-13T20:11:03.298100563+00:00 stderr F I0813 20:11:03.298077 1 ingress.go:313] Starting controller 2025-08-13T20:11:03.313641928+00:00 stderr F I0813 20:11:03.307439 1 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.313641928+00:00 stderr F I0813 20:11:03.308068 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.332055726+00:00 stderr F I0813 20:11:03.331962 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.342666120+00:00 stderr F I0813 20:11:03.342509 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:11:03.355204820+00:00 stderr F I0813 20:11:03.355148 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:11:03.355473718+00:00 stderr F I0813 20:11:03.355452 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:11:03.355534719+00:00 stderr F I0813 20:11:03.355520 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:11:03.356633191+00:00 stderr F I0813 20:11:03.356544 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:11:03.356434885 +0000 
UTC))" 2025-08-13T20:11:03.356708193+00:00 stderr F I0813 20:11:03.356692 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:11:03.356671482 +0000 UTC))" 2025-08-13T20:11:03.356771135+00:00 stderr F I0813 20:11:03.356754 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:11:03.356729844 +0000 UTC))" 2025-08-13T20:11:03.356883988+00:00 stderr F I0813 20:11:03.356868 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:11:03.356843687 +0000 UTC))" 2025-08-13T20:11:03.356991411+00:00 stderr F I0813 20:11:03.356971 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 
20:11:03.356904939 +0000 UTC))" 2025-08-13T20:11:03.357045013+00:00 stderr F I0813 20:11:03.357032 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.357015692 +0000 UTC))" 2025-08-13T20:11:03.357212707+00:00 stderr F I0813 20:11:03.357191 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.357064243 +0000 UTC))" 2025-08-13T20:11:03.357282179+00:00 stderr F I0813 20:11:03.357268 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.357250468 +0000 UTC))" 2025-08-13T20:11:03.357327551+00:00 stderr F I0813 20:11:03.357315 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC 
(now=2025-08-13 20:11:03.35730128 +0000 UTC))" 2025-08-13T20:11:03.357505346+00:00 stderr F I0813 20:11:03.357488 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:11:03.357469695 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.359232 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-08-13 20:11:03.359196934 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.359974 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115863\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115862\" (2025-08-13 19:11:02 +0000 UTC to 2026-08-13 19:11:02 +0000 UTC (now=2025-08-13 20:11:03.359899514 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360194 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 
20:11:03.360175212 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360217 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:11:03.360204343 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360276 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:11:03.360222524 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360296 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:11:03.360283565 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360314 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC 
(now=2025-08-13 20:11:03.360302636 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360332 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.360319096 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360368 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.360336977 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360390 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.360377138 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360410 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 
19:59:54 +0000 UTC (now=2025-08-13 20:11:03.360399279 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360430 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:11:03.360416149 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360447 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:11:03.36043704 +0000 UTC))" 2025-08-13T20:11:03.361553802+00:00 stderr F I0813 20:11:03.360751 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-08-13 20:11:03.360726238 +0000 UTC))" 2025-08-13T20:11:03.363035544+00:00 stderr F I0813 20:11:03.362972 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115863\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115862\" (2025-08-13 19:11:02 +0000 
UTC to 2026-08-13 19:11:02 +0000 UTC (now=2025-08-13 20:11:03.362947502 +0000 UTC))" 2025-08-13T20:11:03.771896587+00:00 stderr F I0813 20:11:03.771480 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-08-13T20:42:35.517719936+00:00 stderr F I0813 20:42:35.515539 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:35.521577918+00:00 stderr F I0813 20:42:35.521486 1 ingress.go:325] Shutting down controller 2025-08-13T20:42:35.522140354+00:00 stderr F I0813 20:42:35.520517 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:35.538315390+00:00 stderr F I0813 20:42:35.538116 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:35.538315390+00:00 stderr F I0813 20:42:35.538286 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:35.538345431+00:00 stderr F I0813 20:42:35.538320 1 genericapiserver.go:603] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:35.538396633+00:00 stderr F I0813 20:42:35.538343 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:42:35.538396633+00:00 stderr F I0813 20:42:35.538362 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:35.539195936+00:00 stderr F I0813 20:42:35.539088 1 genericapiserver.go:628] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:35.539195936+00:00 stderr F I0813 20:42:35.539178 1 genericapiserver.go:669] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:35.539902056+00:00 stderr F I0813 20:42:35.539840 1 configmap_cafile_content.go:223] "Shutting down controller" 
name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:35.539902056+00:00 stderr F I0813 20:42:35.539891 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:35.540137253+00:00 stderr F I0813 20:42:35.540075 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:42:35.540467262+00:00 stderr F I0813 20:42:35.540410 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:35.541122221+00:00 stderr F I0813 20:42:35.541044 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:35.541445950+00:00 stderr F I0813 20:42:35.541373 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:42:35.541445950+00:00 stderr F I0813 20:42:35.541415 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:35.543525750+00:00 stderr F I0813 20:42:35.543421 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:42:35.544800087+00:00 stderr F I0813 20:42:35.544735 1 builder.go:330] server exited 2025-08-13T20:42:35.552278363+00:00 stderr F W0813 20:42:35.552143 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager/1.log
2025-12-13T00:13:16.099822922+00:00 stderr F I1213 00:13:16.097405 1 
cmd.go:241] Using service-serving-cert provided certificates 2025-12-13T00:13:16.099822922+00:00 stderr F I1213 00:13:16.099072 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:13:16.100997591+00:00 stderr F I1213 00:13:16.100279 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:16.125323458+00:00 stderr F I1213 00:13:16.123069 1 builder.go:299] route-controller-manager version 4.16.0-202406131906.p0.g3112b45.assembly.stream.el9-3112b45-3112b458983c6fca6f77d5a945fb0026186dace6 2025-12-13T00:13:16.125323458+00:00 stderr F I1213 00:13:16.124865 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:16.636207726+00:00 stderr F I1213 00:13:16.635700 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-12-13T00:13:16.650989402+00:00 stderr F I1213 00:13:16.644653 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-12-13T00:13:16.650989402+00:00 stderr F I1213 00:13:16.644858 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-12-13T00:13:16.650989402+00:00 stderr F I1213 00:13:16.644908 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-12-13T00:13:16.650989402+00:00 stderr F I1213 00:13:16.644913 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-12-13T00:13:16.654997096+00:00 stderr F I1213 00:13:16.651714 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:16.654997096+00:00 stderr F W1213 00:13:16.651733 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 
2025-12-13T00:13:16.654997096+00:00 stderr F W1213 00:13:16.651738 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:16.654997096+00:00 stderr F I1213 00:13:16.651889 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-12-13T00:13:16.664409933+00:00 stderr F I1213 00:13:16.655251 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-12-13T00:13:16.664409933+00:00 stderr F I1213 00:13:16.661954 1 leaderelection.go:250] attempting to acquire leader lease openshift-route-controller-manager/openshift-route-controllers... 2025-12-13T00:13:16.678020350+00:00 stderr F I1213 00:13:16.677624 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:16.678020350+00:00 stderr F I1213 00:13:16.677662 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:16.678020350+00:00 stderr F I1213 00:13:16.677711 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:16.678020350+00:00 stderr F I1213 00:13:16.677724 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:16.678547989+00:00 stderr F I1213 00:13:16.678389 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:16.678547989+00:00 stderr F I1213 00:13:16.678425 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:16.678547989+00:00 stderr F I1213 00:13:16.678482 1 dynamic_serving_content.go:132] "Starting 
controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:16.680641458+00:00 stderr F I1213 00:13:16.680605 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-12-13 00:13:16.680569396 +0000 UTC))" 2025-12-13T00:13:16.681414985+00:00 stderr F I1213 00:13:16.681390 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584796\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:13:16.681367253 +0000 UTC))" 2025-12-13T00:13:16.681429425+00:00 stderr F I1213 00:13:16.681420 1 secure_serving.go:213] Serving securely on [::]:8443 2025-12-13T00:13:16.681466276+00:00 stderr F I1213 00:13:16.681448 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-12-13T00:13:16.681474337+00:00 stderr F I1213 00:13:16.681466 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:16.692302561+00:00 stderr F I1213 00:13:16.691642 1 leaderelection.go:260] successfully acquired lease openshift-route-controller-manager/openshift-route-controllers 2025-12-13T00:13:16.692401694+00:00 stderr F I1213 00:13:16.692320 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-route-controller-manager", Name:"openshift-route-controllers", 
UID:"2ba9fc4c-f1d7-4b43-b8a4-0a6afbf10f5f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"40139", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' route-controller-manager-776b8b7477-sfpvs_dd5f3008-58fc-4dcf-a6aa-d0873ec3d486 became leader 2025-12-13T00:13:16.694683120+00:00 stderr F I1213 00:13:16.694132 1 controller_manager.go:36] Starting "openshift.io/ingress-ip" 2025-12-13T00:13:16.694683120+00:00 stderr F I1213 00:13:16.694149 1 controller_manager.go:46] Started "openshift.io/ingress-ip" 2025-12-13T00:13:16.694683120+00:00 stderr F I1213 00:13:16.694154 1 controller_manager.go:36] Starting "openshift.io/ingress-to-route" 2025-12-13T00:13:16.703963672+00:00 stderr F I1213 00:13:16.700149 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:16.706404965+00:00 stderr F I1213 00:13:16.704596 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:16.720995095+00:00 stderr F I1213 00:13:16.719216 1 ingress.go:262] ingress-to-route metrics registered with prometheus 2025-12-13T00:13:16.720995095+00:00 stderr F I1213 00:13:16.719494 1 controller_manager.go:46] Started "openshift.io/ingress-to-route" 2025-12-13T00:13:16.720995095+00:00 stderr F I1213 00:13:16.719507 1 controller_manager.go:48] Started Route Controllers 2025-12-13T00:13:16.722234116+00:00 stderr F I1213 00:13:16.719306 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:16.722371090+00:00 stderr F I1213 00:13:16.722352 1 ingress.go:313] Starting controller 2025-12-13T00:13:16.748972435+00:00 stderr F W1213 00:13:16.746883 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-12-13T00:13:16.748972435+00:00 stderr F E1213 
00:13:16.746926 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-12-13T00:13:16.748972435+00:00 stderr F I1213 00:13:16.748204 1 reflector.go:351] Caches populated for *v1.IngressClass from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:16.757740539+00:00 stderr F I1213 00:13:16.757147 1 reflector.go:351] Caches populated for *v1.Ingress from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.787756 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.788008 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.788560 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.788433481 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.790283 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.792631 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] 
validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-12-13 00:13:16.79258149 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793015 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584796\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:13:16.792994393 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793387 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:13:16.793367687 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793408 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:13:16.793397198 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793428 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:16.793412648 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793448 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:16.793434549 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793462 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.793452559 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793482 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.79346823 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793500 1 tlsconfig.go:178] "Loaded client CA" index=6 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.793486791 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793523 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.793504961 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793538 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:13:16.793527392 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793561 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:13:16.793548683 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793577 1 tlsconfig.go:178] "Loaded client CA" index=10 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.793567803 +0000 UTC))" 2025-12-13T00:13:16.794000208+00:00 stderr F I1213 00:13:16.793913 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-12-13 00:13:16.793896314 +0000 UTC))" 2025-12-13T00:13:16.794313688+00:00 stderr F I1213 00:13:16.794255 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584796\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:13:16.794242396 +0000 UTC))" 2025-12-13T00:13:16.809046433+00:00 stderr F I1213 00:13:16.808982 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:17.329198882+00:00 stderr F I1213 00:13:17.328654 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:13:18.165500703+00:00 stderr F W1213 00:13:18.154563 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Route: the server is currently unable to 
handle the request (get routes.route.openshift.io) 2025-12-13T00:13:18.165500703+00:00 stderr F E1213 00:13:18.155168 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-12-13T00:13:20.388674727+00:00 stderr F W1213 00:13:20.388153 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-12-13T00:13:20.388674727+00:00 stderr F E1213 00:13:20.388648 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-12-13T00:13:24.681599098+00:00 stderr F W1213 00:13:24.681068 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-12-13T00:13:24.681599098+00:00 stderr F E1213 00:13:24.681581 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-12-13T00:13:36.150840381+00:00 stderr F I1213 00:13:36.150330 1 reflector.go:351] Caches populated for *v1.Route from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 2025-12-13T00:19:37.568824356+00:00 stderr F I1213 00:19:37.568387 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 
00:19:37.568357034 +0000 UTC))" 2025-12-13T00:19:37.568904198+00:00 stderr F I1213 00:19:37.568893 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.568879208 +0000 UTC))" 2025-12-13T00:19:37.568974960+00:00 stderr F I1213 00:19:37.568963 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.56894712 +0000 UTC))" 2025-12-13T00:19:37.569032812+00:00 stderr F I1213 00:19:37.569005 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.568993131 +0000 UTC))" 2025-12-13T00:19:37.569068443+00:00 stderr F I1213 00:19:37.569059 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC 
(now=2025-12-13 00:19:37.569046882 +0000 UTC))" 2025-12-13T00:19:37.569102334+00:00 stderr F I1213 00:19:37.569093 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.569081623 +0000 UTC))" 2025-12-13T00:19:37.569141865+00:00 stderr F I1213 00:19:37.569132 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.569115324 +0000 UTC))" 2025-12-13T00:19:37.569181866+00:00 stderr F I1213 00:19:37.569169 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.569156845 +0000 UTC))" 2025-12-13T00:19:37.569553387+00:00 stderr F I1213 00:19:37.569539 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 
19:59:54 +0000 UTC (now=2025-12-13 00:19:37.569195366 +0000 UTC))" 2025-12-13T00:19:37.569596808+00:00 stderr F I1213 00:19:37.569586 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.569572208 +0000 UTC))" 2025-12-13T00:19:37.569643160+00:00 stderr F I1213 00:19:37.569632 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.569619539 +0000 UTC))" 2025-12-13T00:19:37.569691621+00:00 stderr F I1213 00:19:37.569678 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.56966072 +0000 UTC))" 2025-12-13T00:19:37.570125903+00:00 stderr F I1213 00:19:37.570107 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"route-controller-manager.openshift-route-controller-manager.svc\" [serving] 
validServingFor=[route-controller-manager.openshift-route-controller-manager.svc,route-controller-manager.openshift-route-controller-manager.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:48 +0000 UTC to 2027-08-13 20:00:49 +0000 UTC (now=2025-12-13 00:19:37.570081821 +0000 UTC))" 2025-12-13T00:19:37.570471292+00:00 stderr F I1213 00:19:37.570452 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584796\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:19:37.570431841 +0000 UTC))" 2025-12-13T00:21:16.786664122+00:00 stderr F E1213 00:21:16.786140 1 leaderelection.go:332] error retrieving resource lock openshift-route-controller-manager/openshift-route-controllers: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-route-controller-manager/leases/openshift-route-controllers?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/1.log
2025-12-13T00:13:15.017041938+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="log level info" 2025-12-13T00:13:15.017041938+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="TLS keys set, using https for metrics" 2025-12-13T00:13:15.022028175+00:00 stderr F W1213 00:13:15.021575 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-12-13T00:13:15.022561084+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="Using in-cluster kube client config" 2025-12-13T00:13:15.030908133+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="Using in-cluster kube client config" 2025-12-13T00:13:15.032036442+00:00 stderr F W1213 00:13:15.031996 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-12-13T00:13:15.049270981+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=clusterrolebindings" 2025-12-13T00:13:15.049270981+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=clusterroles" 2025-12-13T00:13:15.049270981+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="skipping irrelevant gvr" gvr="apps/v1, Resource=deployments" 2025-12-13T00:13:15.218825479+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="detected ability to filter informers" canFilter=true 2025-12-13T00:13:15.228943148+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="registering owner reference fixer" gvr="/v1, Resource=services" 2025-12-13T00:13:15.237103302+00:00 stderr F W1213 00:13:15.237041 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-12-13T00:13:15.239515613+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-12-13T00:13:15.239820764+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="operator ready" 2025-12-13T00:13:15.239820764+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="starting informers..." 2025-12-13T00:13:15.239820764+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="informers started" 2025-12-13T00:13:15.239820764+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="waiting for caches to sync..." 2025-12-13T00:13:15.341425857+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="starting workers..." 
2025-12-13T00:13:15.341690746+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qM/Pq 2025-12-13T00:13:15.341748278+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qM/Pq 2025-12-13T00:13:15.343619801+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=vGniI 2025-12-13T00:13:15.343619801+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=vGniI 2025-12-13T00:13:15.343619801+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="resolving sources" id=/469K namespace=default 2025-12-13T00:13:15.343619801+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checking if subscriptions need update" id=/469K namespace=default 2025-12-13T00:13:15.343619801+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="resolving sources" id=VucZn namespace=hostpath-provisioner 2025-12-13T00:13:15.343619801+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checking if subscriptions need update" id=VucZn namespace=hostpath-provisioner 2025-12-13T00:13:15.343619801+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3" 2025-12-13T00:13:15.343619801+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="operator ready" 2025-12-13T00:13:15.343619801+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="starting informers..." 
2025-12-13T00:13:15.343656603+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="informers started" 2025-12-13T00:13:15.343656603+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="waiting for caches to sync..." 2025-12-13T00:13:15.348849137+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=vGniI 2025-12-13T00:13:15.356229346+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="No subscriptions were found in namespace default" id=/469K namespace=default 2025-12-13T00:13:15.356264897+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="resolving sources" id=Q9exw namespace=kube-node-lease 2025-12-13T00:13:15.356264897+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checking if subscriptions need update" id=Q9exw namespace=kube-node-lease 2025-12-13T00:13:15.357374884+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=vGniI 2025-12-13T00:13:15.357395164+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=vGniI 2025-12-13T00:13:15.357431216+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=vGniI 2025-12-13T00:13:15.357431216+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="registry state good" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=vGniI 2025-12-13T00:13:15.357441406+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=vGniI 2025-12-13T00:13:15.357896521+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=VucZn namespace=hostpath-provisioner 2025-12-13T00:13:15.357896521+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="resolving sources" id=ROhAO namespace=kube-public 2025-12-13T00:13:15.357896521+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checking if subscriptions need update" id=ROhAO namespace=kube-public 2025-12-13T00:13:15.357896521+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qM/Pq 2025-12-13T00:13:15.358096118+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qM/Pq 2025-12-13T00:13:15.358096118+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qM/Pq 2025-12-13T00:13:15.358096118+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=qM/Pq 2025-12-13T00:13:15.358145439+00:00 stderr F 
time="2025-12-13T00:13:15Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qM/Pq 2025-12-13T00:13:15.358145439+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qM/Pq 2025-12-13T00:13:15.397525703+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=Q9exw namespace=kube-node-lease 2025-12-13T00:13:15.397525703+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="No subscriptions were found in namespace kube-public" id=ROhAO namespace=kube-public 2025-12-13T00:13:15.397525703+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="resolving sources" id=mQF0w namespace=openshift 2025-12-13T00:13:15.397525703+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checking if subscriptions need update" id=mQF0w namespace=openshift 2025-12-13T00:13:15.400586515+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="resolving sources" id=b/n0g namespace=kube-system 2025-12-13T00:13:15.400586515+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checking if subscriptions need update" id=b/n0g namespace=kube-system 2025-12-13T00:13:15.403029168+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="No subscriptions were found in namespace openshift" id=mQF0w namespace=openshift 2025-12-13T00:13:15.403029168+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="resolving sources" id=bwh9s namespace=openshift-apiserver 2025-12-13T00:13:15.403029168+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checking if subscriptions need update" id=bwh9s namespace=openshift-apiserver 2025-12-13T00:13:15.403029168+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="No subscriptions were found in namespace kube-system" id=b/n0g namespace=kube-system 
2025-12-13T00:13:15.403029168+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="resolving sources" id=4jvtw namespace=openshift-apiserver-operator 2025-12-13T00:13:15.403029168+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checking if subscriptions need update" id=4jvtw namespace=openshift-apiserver-operator 2025-12-13T00:13:15.443893451+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="starting workers..." 2025-12-13T00:13:15.448244747+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qM/Pq 2025-12-13T00:13:15.448244747+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qM/Pq 2025-12-13T00:13:15.448244747+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qM/Pq 2025-12-13T00:13:15.448244747+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=bwh9s namespace=openshift-apiserver 2025-12-13T00:13:15.448244747+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="resolving sources" id=leELA namespace=openshift-authentication 2025-12-13T00:13:15.448244747+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checking if subscriptions need update" id=leELA namespace=openshift-authentication 2025-12-13T00:13:15.452167149+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="catalog update required at 
2025-12-13 00:13:15.452097047 +0000 UTC m=+0.636779448" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qM/Pq 2025-12-13T00:13:15.650073500+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=vGniI 2025-12-13T00:13:15.650073500+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=vGniI 2025-12-13T00:13:15.650073500+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=vGniI 2025-12-13T00:13:15.650073500+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=vGniI 2025-12-13T00:13:15.650073500+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=CONNECTING" 2025-12-13T00:13:15.661205553+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=4jvtw namespace=openshift-apiserver-operator 2025-12-13T00:13:15.661205553+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="resolving sources" id=mawOa namespace=openshift-authentication-operator 2025-12-13T00:13:15.661205553+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checking if 
subscriptions need update" id=mawOa namespace=openshift-authentication-operator 2025-12-13T00:13:15.732708626+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=TRANSIENT_FAILURE" 2025-12-13T00:13:15.847054738+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=leELA namespace=openshift-authentication 2025-12-13T00:13:15.847054738+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="resolving sources" id=BZuTH namespace=openshift-cloud-network-config-controller 2025-12-13T00:13:15.847054738+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="checking if subscriptions need update" id=BZuTH namespace=openshift-cloud-network-config-controller 2025-12-13T00:13:15.872969579+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qM/Pq 2025-12-13T00:13:15.872969579+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=CONNECTING" 2025-12-13T00:13:15.972874596+00:00 stderr F time="2025-12-13T00:13:15Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=TRANSIENT_FAILURE" 2025-12-13T00:13:16.070668742+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8WAtu 2025-12-13T00:13:16.070668742+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8WAtu 2025-12-13T00:13:16.084067082+00:00 stderr F time="2025-12-13T00:13:16Z" level=info 
msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8WAtu 2025-12-13T00:13:16.084067082+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8WAtu 2025-12-13T00:13:16.084067082+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8WAtu 2025-12-13T00:13:16.084067082+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=8WAtu 2025-12-13T00:13:16.084067082+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8WAtu 2025-12-13T00:13:16.084067082+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8WAtu 2025-12-13T00:13:16.261977330+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=mawOa namespace=openshift-authentication-operator 2025-12-13T00:13:16.261977330+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="resolving sources" id=FW4HF namespace=openshift-cloud-platform-infra 2025-12-13T00:13:16.261977330+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="checking if subscriptions need update" id=FW4HF 
namespace=openshift-cloud-platform-infra 2025-12-13T00:13:16.454971026+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8WAtu 2025-12-13T00:13:16.454971026+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8WAtu 2025-12-13T00:13:16.454971026+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8WAtu 2025-12-13T00:13:16.454971026+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="catalog update required at 2025-12-13 00:13:16.454457649 +0000 UTC m=+1.639140040" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8WAtu 2025-12-13T00:13:16.454971026+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=BZuTH namespace=openshift-cloud-network-config-controller 2025-12-13T00:13:16.454971026+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="resolving sources" id=JED2p namespace=openshift-cluster-machine-approver 2025-12-13T00:13:16.454971026+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="checking if subscriptions need update" id=JED2p namespace=openshift-cluster-machine-approver 2025-12-13T00:13:16.654879483+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8WAtu 2025-12-13T00:13:16.654879483+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=CONNECTING" 2025-12-13T00:13:16.673410746+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=43jKl 2025-12-13T00:13:16.673410746+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=43jKl 2025-12-13T00:13:16.683544936+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=TRANSIENT_FAILURE" 2025-12-13T00:13:16.854637725+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=43jKl 2025-12-13T00:13:16.854637725+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=43jKl 2025-12-13T00:13:16.854637725+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=43jKl 2025-12-13T00:13:16.854637725+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace health=true id=43jKl 2025-12-13T00:13:16.854637725+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=43jKl 2025-12-13T00:13:16.854637725+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=43jKl 2025-12-13T00:13:16.854637725+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=FW4HF namespace=openshift-cloud-platform-infra 2025-12-13T00:13:16.854637725+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="resolving sources" id=nPy8U namespace=openshift-cluster-samples-operator 2025-12-13T00:13:16.854637725+00:00 stderr F time="2025-12-13T00:13:16Z" level=info msg="checking if subscriptions need update" id=nPy8U namespace=openshift-cluster-samples-operator 2025-12-13T00:13:17.074978859+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=JED2p namespace=openshift-cluster-machine-approver 2025-12-13T00:13:17.074978859+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="resolving sources" id=ot8rN namespace=openshift-cluster-storage-operator 2025-12-13T00:13:17.074978859+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="checking if subscriptions need update" id=ot8rN namespace=openshift-cluster-storage-operator 2025-12-13T00:13:17.252414031+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=43jKl 2025-12-13T00:13:17.252414031+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=43jKl 2025-12-13T00:13:17.252414031+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=43jKl 2025-12-13T00:13:17.252414031+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="catalog update required at 2025-12-13 00:13:17.2494093 +0000 UTC m=+2.434091691" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=43jKl 2025-12-13T00:13:17.259977785+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dUPFd 2025-12-13T00:13:17.259977785+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dUPFd 2025-12-13T00:13:17.450294290+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=nPy8U namespace=openshift-cluster-samples-operator 2025-12-13T00:13:17.450294290+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="resolving sources" id=Xpf7k namespace=openshift-cluster-version 2025-12-13T00:13:17.450294290+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="checking if subscriptions need update" id=Xpf7k namespace=openshift-cluster-version 2025-12-13T00:13:17.452706002+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=43jKl 
2025-12-13T00:13:17.452706002+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=CONNECTING" 2025-12-13T00:13:17.509238401+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=TRANSIENT_FAILURE" 2025-12-13T00:13:17.649281277+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=ot8rN namespace=openshift-cluster-storage-operator 2025-12-13T00:13:17.649318508+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="resolving sources" id=hyh5U namespace=openshift-config 2025-12-13T00:13:17.649318508+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="checking if subscriptions need update" id=hyh5U namespace=openshift-config 2025-12-13T00:13:17.656798840+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dUPFd 2025-12-13T00:13:17.656798840+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dUPFd 2025-12-13T00:13:17.656798840+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dUPFd 2025-12-13T00:13:17.656798840+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="checked registry server health" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace health=true id=dUPFd 2025-12-13T00:13:17.656798840+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dUPFd 2025-12-13T00:13:17.656798840+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dUPFd 2025-12-13T00:13:17.847263900+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=Xpf7k namespace=openshift-cluster-version 2025-12-13T00:13:17.847263900+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="resolving sources" id=aMp2i namespace=openshift-config-managed 2025-12-13T00:13:17.847263900+00:00 stderr F time="2025-12-13T00:13:17Z" level=info msg="checking if subscriptions need update" id=aMp2i namespace=openshift-config-managed 2025-12-13T00:13:18.049343100+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dUPFd 2025-12-13T00:13:18.049343100+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dUPFd 2025-12-13T00:13:18.049343100+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dUPFd 2025-12-13T00:13:18.049343100+00:00 stderr F 
time="2025-12-13T00:13:18Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dUPFd 2025-12-13T00:13:18.063844347+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=M+N1V 2025-12-13T00:13:18.063844347+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=M+N1V 2025-12-13T00:13:18.248445290+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="No subscriptions were found in namespace openshift-config" id=hyh5U namespace=openshift-config 2025-12-13T00:13:18.248473201+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="resolving sources" id=+iUp+ namespace=openshift-config-operator 2025-12-13T00:13:18.248473201+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="checking if subscriptions need update" id=+iUp+ namespace=openshift-config-operator 2025-12-13T00:13:18.248713829+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=M+N1V 2025-12-13T00:13:18.251300306+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=M+N1V 2025-12-13T00:13:18.251300306+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=M+N1V 2025-12-13T00:13:18.251300306+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=M+N1V 2025-12-13T00:13:18.251300306+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=M+N1V 2025-12-13T00:13:18.251300306+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=M+N1V 2025-12-13T00:13:18.452431985+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=aMp2i namespace=openshift-config-managed 2025-12-13T00:13:18.452458176+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="resolving sources" id=Ts6Vu namespace=openshift-console 2025-12-13T00:13:18.452458176+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="checking if subscriptions need update" id=Ts6Vu namespace=openshift-console 2025-12-13T00:13:18.645293275+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=M+N1V 2025-12-13T00:13:18.645524183+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=M+N1V 2025-12-13T00:13:18.645572675+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=M+N1V 2025-12-13T00:13:18.645638557+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=M+N1V 2025-12-13T00:13:18.652132885+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=znFf7 2025-12-13T00:13:18.652132885+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=znFf7 2025-12-13T00:13:18.847295213+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=znFf7 2025-12-13T00:13:18.847376806+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=znFf7 2025-12-13T00:13:18.847376806+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=znFf7 2025-12-13T00:13:18.847386016+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true 
id=znFf7 2025-12-13T00:13:18.847386016+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=znFf7 2025-12-13T00:13:18.847393356+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=znFf7 2025-12-13T00:13:18.853355157+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=+iUp+ namespace=openshift-config-operator 2025-12-13T00:13:18.853355157+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="resolving sources" id=Mr3fd namespace=openshift-console-operator 2025-12-13T00:13:18.853355157+00:00 stderr F time="2025-12-13T00:13:18Z" level=info msg="checking if subscriptions need update" id=Mr3fd namespace=openshift-console-operator 2025-12-13T00:13:19.052227149+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="No subscriptions were found in namespace openshift-console" id=Ts6Vu namespace=openshift-console 2025-12-13T00:13:19.052227149+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="resolving sources" id=TQNO1 namespace=openshift-console-user-settings 2025-12-13T00:13:19.052268551+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="checking if subscriptions need update" id=TQNO1 namespace=openshift-console-user-settings 2025-12-13T00:13:19.247826512+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=znFf7 2025-12-13T00:13:19.247826512+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=znFf7 2025-12-13T00:13:19.247826512+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=znFf7 2025-12-13T00:13:19.247826512+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=znFf7 2025-12-13T00:13:19.253814723+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k+1ky 2025-12-13T00:13:19.253814723+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k+1ky 2025-12-13T00:13:19.445992451+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k+1ky 2025-12-13T00:13:19.445992451+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=k+1ky 2025-12-13T00:13:19.445992451+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=k+1ky 2025-12-13T00:13:19.445992451+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=k+1ky 2025-12-13T00:13:19.445992451+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k+1ky 2025-12-13T00:13:19.445992451+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k+1ky 2025-12-13T00:13:19.452096426+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=Mr3fd namespace=openshift-console-operator 2025-12-13T00:13:19.452096426+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="resolving sources" id=xhc0O namespace=openshift-controller-manager 2025-12-13T00:13:19.452096426+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="checking if subscriptions need update" id=xhc0O namespace=openshift-controller-manager 2025-12-13T00:13:19.649326553+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=TQNO1 namespace=openshift-console-user-settings 2025-12-13T00:13:19.649326553+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="resolving sources" id=sg1n+ namespace=openshift-controller-manager-operator 2025-12-13T00:13:19.649326553+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="checking if subscriptions need update" id=sg1n+ namespace=openshift-controller-manager-operator 2025-12-13T00:13:19.847416759+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k+1ky 
2025-12-13T00:13:19.847416759+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=k+1ky 2025-12-13T00:13:19.847416759+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=k+1ky 2025-12-13T00:13:19.847416759+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=k+1ky 2025-12-13T00:13:19.849609413+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1P7t 2025-12-13T00:13:19.849609413+00:00 stderr F time="2025-12-13T00:13:19Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1P7t 2025-12-13T00:13:20.045457044+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1P7t 2025-12-13T00:13:20.045879438+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=v1P7t 2025-12-13T00:13:20.045879438+00:00 stderr F time="2025-12-13T00:13:20Z" 
level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=v1P7t 2025-12-13T00:13:20.045907459+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=v1P7t 2025-12-13T00:13:20.045907459+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1P7t 2025-12-13T00:13:20.045915950+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1P7t 2025-12-13T00:13:20.047102869+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=xhc0O namespace=openshift-controller-manager 2025-12-13T00:13:20.047189702+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="resolving sources" id=Tl6e/ namespace=openshift-dns 2025-12-13T00:13:20.047222503+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="checking if subscriptions need update" id=Tl6e/ namespace=openshift-dns 2025-12-13T00:13:20.263678946+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=sg1n+ namespace=openshift-controller-manager-operator 2025-12-13T00:13:20.263678946+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="resolving sources" id=B6cDD namespace=openshift-dns-operator 2025-12-13T00:13:20.263713837+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="checking if subscriptions need update" id=B6cDD 
namespace=openshift-dns-operator 2025-12-13T00:13:20.448328841+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1P7t 2025-12-13T00:13:20.448328841+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=v1P7t 2025-12-13T00:13:20.448328841+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=v1P7t 2025-12-13T00:13:20.448328841+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=v1P7t 2025-12-13T00:13:20.448328841+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nt1FH 2025-12-13T00:13:20.448328841+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nt1FH 2025-12-13T00:13:20.451835940+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HiNrS 2025-12-13T00:13:20.451835940+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="synchronizing registry server" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HiNrS 2025-12-13T00:13:20.648859010+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nt1FH 2025-12-13T00:13:20.648888171+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=Tl6e/ namespace=openshift-dns 2025-12-13T00:13:20.648896071+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="resolving sources" id=+PZ84 namespace=openshift-etcd 2025-12-13T00:13:20.648903831+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="checking if subscriptions need update" id=+PZ84 namespace=openshift-etcd 2025-12-13T00:13:20.652049867+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nt1FH 2025-12-13T00:13:20.652049867+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nt1FH 2025-12-13T00:13:20.652049867+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=nt1FH 2025-12-13T00:13:20.652049867+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nt1FH 2025-12-13T00:13:20.652049867+00:00 stderr F 
time="2025-12-13T00:13:20Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nt1FH 2025-12-13T00:13:20.846113877+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HiNrS 2025-12-13T00:13:20.846261822+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=HiNrS 2025-12-13T00:13:20.846261822+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=HiNrS 2025-12-13T00:13:20.846309384+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=HiNrS 2025-12-13T00:13:20.846309384+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HiNrS 2025-12-13T00:13:20.846309384+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HiNrS 2025-12-13T00:13:20.851671754+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=B6cDD namespace=openshift-dns-operator 2025-12-13T00:13:20.851671754+00:00 stderr F time="2025-12-13T00:13:20Z" 
level=info msg="resolving sources" id=ZVCO4 namespace=openshift-etcd-operator 2025-12-13T00:13:20.851671754+00:00 stderr F time="2025-12-13T00:13:20Z" level=info msg="checking if subscriptions need update" id=ZVCO4 namespace=openshift-etcd-operator 2025-12-13T00:13:21.047787834+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=+PZ84 namespace=openshift-etcd 2025-12-13T00:13:21.047787834+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="resolving sources" id=lKjoh namespace=openshift-host-network 2025-12-13T00:13:21.047818125+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="checking if subscriptions need update" id=lKjoh namespace=openshift-host-network 2025-12-13T00:13:21.249723140+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=ZVCO4 namespace=openshift-etcd-operator 2025-12-13T00:13:21.249723140+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="resolving sources" id=Ln4p4 namespace=openshift-image-registry 2025-12-13T00:13:21.249723140+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="checking if subscriptions need update" id=Ln4p4 namespace=openshift-image-registry 2025-12-13T00:13:21.443916546+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nt1FH 2025-12-13T00:13:21.444069321+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nt1FH 2025-12-13T00:13:21.444069321+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nt1FH 2025-12-13T00:13:21.444147813+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nt1FH 2025-12-13T00:13:21.444385851+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T9UkD 2025-12-13T00:13:21.444398212+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T9UkD 2025-12-13T00:13:21.446818883+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=lKjoh namespace=openshift-host-network 2025-12-13T00:13:21.446868415+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="resolving sources" id=6qeOL namespace=openshift-infra 2025-12-13T00:13:21.446868415+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="checking if subscriptions need update" id=6qeOL namespace=openshift-infra 2025-12-13T00:13:21.645781569+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HiNrS 2025-12-13T00:13:21.646068978+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=HiNrS 2025-12-13T00:13:21.646068978+00:00 stderr F 
time="2025-12-13T00:13:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=HiNrS 2025-12-13T00:13:21.646068978+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=HiNrS 2025-12-13T00:13:21.646968858+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=Ln4p4 namespace=openshift-image-registry 2025-12-13T00:13:21.647009069+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="resolving sources" id=IJt67 namespace=openshift-ingress 2025-12-13T00:13:21.647009069+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="checking if subscriptions need update" id=IJt67 namespace=openshift-ingress 2025-12-13T00:13:21.844094783+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T9UkD 2025-12-13T00:13:21.844233737+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T9UkD 2025-12-13T00:13:21.844233737+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=T9UkD 2025-12-13T00:13:21.844253898+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=T9UkD 2025-12-13T00:13:21.844253898+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T9UkD 2025-12-13T00:13:21.844253898+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T9UkD 2025-12-13T00:13:21.846287846+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=6qeOL namespace=openshift-infra 2025-12-13T00:13:21.846287846+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="resolving sources" id=i3Bwj namespace=openshift-ingress-canary 2025-12-13T00:13:21.846309366+00:00 stderr F time="2025-12-13T00:13:21Z" level=info msg="checking if subscriptions need update" id=i3Bwj namespace=openshift-ingress-canary 2025-12-13T00:13:22.046498844+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=IJt67 namespace=openshift-ingress 2025-12-13T00:13:22.046537285+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="resolving sources" id=/9Msx namespace=openshift-ingress-operator 2025-12-13T00:13:22.046537285+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="checking if subscriptions need update" id=/9Msx namespace=openshift-ingress-operator 2025-12-13T00:13:22.244966793+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T9UkD 2025-12-13T00:13:22.244966793+00:00 stderr F time="2025-12-13T00:13:22Z" level=info 
msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T9UkD 2025-12-13T00:13:22.244966793+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T9UkD 2025-12-13T00:13:22.244966793+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T9UkD 2025-12-13T00:13:22.246537895+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=i3Bwj namespace=openshift-ingress-canary 2025-12-13T00:13:22.246568326+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="resolving sources" id=I9MTZ namespace=openshift-kni-infra 2025-12-13T00:13:22.246568326+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="checking if subscriptions need update" id=I9MTZ namespace=openshift-kni-infra 2025-12-13T00:13:22.447249360+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=/9Msx namespace=openshift-ingress-operator 2025-12-13T00:13:22.447249360+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="resolving sources" id=LOzqS namespace=openshift-kube-apiserver 2025-12-13T00:13:22.447288451+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="checking if subscriptions need update" id=LOzqS namespace=openshift-kube-apiserver 2025-12-13T00:13:22.652829287+00:00 stderr F time="2025-12-13T00:13:22Z" 
level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=I9MTZ namespace=openshift-kni-infra 2025-12-13T00:13:22.652829287+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="resolving sources" id=8FRdR namespace=openshift-kube-apiserver-operator 2025-12-13T00:13:22.652829287+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="checking if subscriptions need update" id=8FRdR namespace=openshift-kube-apiserver-operator 2025-12-13T00:13:22.848016216+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=LOzqS namespace=openshift-kube-apiserver 2025-12-13T00:13:22.848016216+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="resolving sources" id=UbCMb namespace=openshift-kube-controller-manager 2025-12-13T00:13:22.848016216+00:00 stderr F time="2025-12-13T00:13:22Z" level=info msg="checking if subscriptions need update" id=UbCMb namespace=openshift-kube-controller-manager 2025-12-13T00:13:23.047003643+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=8FRdR namespace=openshift-kube-apiserver-operator 2025-12-13T00:13:23.047037154+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="resolving sources" id=ARizV namespace=openshift-kube-controller-manager-operator 2025-12-13T00:13:23.047037154+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="checking if subscriptions need update" id=ARizV namespace=openshift-kube-controller-manager-operator 2025-12-13T00:13:23.246699563+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=UbCMb namespace=openshift-kube-controller-manager 2025-12-13T00:13:23.246741674+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="resolving sources" id=R0TUD namespace=openshift-kube-scheduler 2025-12-13T00:13:23.246741674+00:00 stderr F 
time="2025-12-13T00:13:23Z" level=info msg="checking if subscriptions need update" id=R0TUD namespace=openshift-kube-scheduler 2025-12-13T00:13:23.446809117+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=ARizV namespace=openshift-kube-controller-manager-operator 2025-12-13T00:13:23.446854819+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="resolving sources" id=u9nJR namespace=openshift-kube-scheduler-operator 2025-12-13T00:13:23.446854819+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="checking if subscriptions need update" id=u9nJR namespace=openshift-kube-scheduler-operator 2025-12-13T00:13:23.647401937+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=R0TUD namespace=openshift-kube-scheduler 2025-12-13T00:13:23.647434408+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="resolving sources" id=xDZFa namespace=openshift-kube-storage-version-migrator 2025-12-13T00:13:23.647434408+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="checking if subscriptions need update" id=xDZFa namespace=openshift-kube-storage-version-migrator 2025-12-13T00:13:23.877671675+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=u9nJR namespace=openshift-kube-scheduler-operator 2025-12-13T00:13:23.877671675+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="resolving sources" id=ejacu namespace=openshift-kube-storage-version-migrator-operator 2025-12-13T00:13:23.877671675+00:00 stderr F time="2025-12-13T00:13:23Z" level=info msg="checking if subscriptions need update" id=ejacu namespace=openshift-kube-storage-version-migrator-operator 2025-12-13T00:13:24.047387218+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="No subscriptions were found in namespace 
openshift-kube-storage-version-migrator" id=xDZFa namespace=openshift-kube-storage-version-migrator 2025-12-13T00:13:24.047387218+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="resolving sources" id=hQpk9 namespace=openshift-machine-api 2025-12-13T00:13:24.047387218+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="checking if subscriptions need update" id=hQpk9 namespace=openshift-machine-api 2025-12-13T00:13:24.247162620+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=ejacu namespace=openshift-kube-storage-version-migrator-operator 2025-12-13T00:13:24.247162620+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="resolving sources" id=ZD8/A namespace=openshift-machine-config-operator 2025-12-13T00:13:24.247162620+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="checking if subscriptions need update" id=ZD8/A namespace=openshift-machine-config-operator 2025-12-13T00:13:24.447099539+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=hQpk9 namespace=openshift-machine-api 2025-12-13T00:13:24.447099539+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="resolving sources" id=UhdlN namespace=openshift-marketplace 2025-12-13T00:13:24.447099539+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="checking if subscriptions need update" id=UhdlN namespace=openshift-marketplace 2025-12-13T00:13:24.646792759+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=ZD8/A namespace=openshift-machine-config-operator 2025-12-13T00:13:24.646824700+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="resolving sources" id=8lWjS namespace=openshift-monitoring 2025-12-13T00:13:24.646824700+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="checking if 
subscriptions need update" id=8lWjS namespace=openshift-monitoring 2025-12-13T00:13:24.846944194+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=UhdlN namespace=openshift-marketplace 2025-12-13T00:13:24.846944194+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="resolving sources" id=1VOkG namespace=openshift-multus 2025-12-13T00:13:24.846944194+00:00 stderr F time="2025-12-13T00:13:24Z" level=info msg="checking if subscriptions need update" id=1VOkG namespace=openshift-multus 2025-12-13T00:13:25.051335952+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=8lWjS namespace=openshift-monitoring 2025-12-13T00:13:25.051335952+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="resolving sources" id=KUYHp namespace=openshift-network-diagnostics 2025-12-13T00:13:25.051335952+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="checking if subscriptions need update" id=KUYHp namespace=openshift-network-diagnostics 2025-12-13T00:13:25.247396640+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=1VOkG namespace=openshift-multus 2025-12-13T00:13:25.247433012+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="resolving sources" id=Z59ur namespace=openshift-network-node-identity 2025-12-13T00:13:25.247433012+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="checking if subscriptions need update" id=Z59ur namespace=openshift-network-node-identity 2025-12-13T00:13:25.450441284+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=KUYHp namespace=openshift-network-diagnostics 2025-12-13T00:13:25.450441284+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="resolving sources" id=ak1sl namespace=openshift-network-operator 
2025-12-13T00:13:25.450513756+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="checking if subscriptions need update" id=ak1sl namespace=openshift-network-operator 2025-12-13T00:13:25.646925006+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=Z59ur namespace=openshift-network-node-identity 2025-12-13T00:13:25.646925006+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="resolving sources" id=sfNNx namespace=openshift-node 2025-12-13T00:13:25.646925006+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="checking if subscriptions need update" id=sfNNx namespace=openshift-node 2025-12-13T00:13:25.846490171+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=ak1sl namespace=openshift-network-operator 2025-12-13T00:13:25.846544133+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="resolving sources" id=mhJNW namespace=openshift-nutanix-infra 2025-12-13T00:13:25.846544133+00:00 stderr F time="2025-12-13T00:13:25Z" level=info msg="checking if subscriptions need update" id=mhJNW namespace=openshift-nutanix-infra 2025-12-13T00:13:26.046747671+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="No subscriptions were found in namespace openshift-node" id=sfNNx namespace=openshift-node 2025-12-13T00:13:26.046747671+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="resolving sources" id=LUEpT namespace=openshift-oauth-apiserver 2025-12-13T00:13:26.046747671+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="checking if subscriptions need update" id=LUEpT namespace=openshift-oauth-apiserver 2025-12-13T00:13:26.250240748+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=mhJNW namespace=openshift-nutanix-infra 2025-12-13T00:13:26.250240748+00:00 stderr F time="2025-12-13T00:13:26Z" 
level=info msg="resolving sources" id=+Y8uK namespace=openshift-openstack-infra 2025-12-13T00:13:26.250240748+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="checking if subscriptions need update" id=+Y8uK namespace=openshift-openstack-infra 2025-12-13T00:13:26.447125715+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=LUEpT namespace=openshift-oauth-apiserver 2025-12-13T00:13:26.447161466+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="resolving sources" id=DNS0m namespace=openshift-operator-lifecycle-manager 2025-12-13T00:13:26.447161466+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="checking if subscriptions need update" id=DNS0m namespace=openshift-operator-lifecycle-manager 2025-12-13T00:13:26.647387914+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=+Y8uK namespace=openshift-openstack-infra 2025-12-13T00:13:26.647387914+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="resolving sources" id=Hg+oK namespace=openshift-operators 2025-12-13T00:13:26.647420405+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="checking if subscriptions need update" id=Hg+oK namespace=openshift-operators 2025-12-13T00:13:26.885986531+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=DNS0m namespace=openshift-operator-lifecycle-manager 2025-12-13T00:13:26.885986531+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="resolving sources" id=c1tCu namespace=openshift-ovirt-infra 2025-12-13T00:13:26.885986531+00:00 stderr F time="2025-12-13T00:13:26Z" level=info msg="checking if subscriptions need update" id=c1tCu namespace=openshift-ovirt-infra 2025-12-13T00:13:27.047020733+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="No subscriptions were found in namespace 
openshift-operators" id=Hg+oK namespace=openshift-operators 2025-12-13T00:13:27.047020733+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="resolving sources" id=A8iwG namespace=openshift-ovn-kubernetes 2025-12-13T00:13:27.047020733+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="checking if subscriptions need update" id=A8iwG namespace=openshift-ovn-kubernetes 2025-12-13T00:13:27.246665071+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=c1tCu namespace=openshift-ovirt-infra 2025-12-13T00:13:27.246665071+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="resolving sources" id=2dwhc namespace=openshift-route-controller-manager 2025-12-13T00:13:27.246710692+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="checking if subscriptions need update" id=2dwhc namespace=openshift-route-controller-manager 2025-12-13T00:13:27.447327224+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=A8iwG namespace=openshift-ovn-kubernetes 2025-12-13T00:13:27.447376295+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="resolving sources" id=3cYZm namespace=openshift-service-ca 2025-12-13T00:13:27.447376295+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="checking if subscriptions need update" id=3cYZm namespace=openshift-service-ca 2025-12-13T00:13:27.646298470+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=2dwhc namespace=openshift-route-controller-manager 2025-12-13T00:13:27.646344811+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="resolving sources" id=Yw6Pt namespace=openshift-service-ca-operator 2025-12-13T00:13:27.646344811+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="checking if subscriptions need update" id=Yw6Pt namespace=openshift-service-ca-operator 
2025-12-13T00:13:27.847023424+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=3cYZm namespace=openshift-service-ca 2025-12-13T00:13:27.847023424+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="resolving sources" id=t9KxF namespace=openshift-user-workload-monitoring 2025-12-13T00:13:27.847023424+00:00 stderr F time="2025-12-13T00:13:27Z" level=info msg="checking if subscriptions need update" id=t9KxF namespace=openshift-user-workload-monitoring 2025-12-13T00:13:28.047438659+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=Yw6Pt namespace=openshift-service-ca-operator 2025-12-13T00:13:28.047438659+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="resolving sources" id=nleL0 namespace=openshift-vsphere-infra 2025-12-13T00:13:28.047438659+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="checking if subscriptions need update" id=nleL0 namespace=openshift-vsphere-infra 2025-12-13T00:13:28.246302541+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=t9KxF namespace=openshift-user-workload-monitoring 2025-12-13T00:13:28.246347782+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="resolving sources" id=7b97L namespace=openshift-monitoring 2025-12-13T00:13:28.246347782+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="checking if subscriptions need update" id=7b97L namespace=openshift-monitoring 2025-12-13T00:13:28.446028412+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=nleL0 namespace=openshift-vsphere-infra 2025-12-13T00:13:28.446028412+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="resolving sources" id=rBpMD namespace=openshift-operator-lifecycle-manager 
2025-12-13T00:13:28.446028412+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="checking if subscriptions need update" id=rBpMD namespace=openshift-operator-lifecycle-manager 2025-12-13T00:13:28.648887169+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=7b97L namespace=openshift-monitoring 2025-12-13T00:13:28.648887169+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="resolving sources" id=/5jIs namespace=openshift-operators 2025-12-13T00:13:28.648887169+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="checking if subscriptions need update" id=/5jIs namespace=openshift-operators 2025-12-13T00:13:28.847975559+00:00 stderr F time="2025-12-13T00:13:28Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=rBpMD namespace=openshift-operator-lifecycle-manager 2025-12-13T00:13:29.051036012+00:00 stderr F time="2025-12-13T00:13:29Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=/5jIs namespace=openshift-operators 2025-12-13T00:13:33.645826426+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cuie6 2025-12-13T00:13:33.645826426+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cuie6 2025-12-13T00:13:33.645826426+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BBZ2O 2025-12-13T00:13:33.645826426+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BBZ2O 2025-12-13T00:13:33.649305914+00:00 stderr F 
time="2025-12-13T00:13:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BBZ2O 2025-12-13T00:13:33.649335885+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cuie6 2025-12-13T00:13:33.649347125+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BBZ2O 2025-12-13T00:13:33.649357596+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BBZ2O 2025-12-13T00:13:33.649368096+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=BBZ2O 2025-12-13T00:13:33.649368096+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BBZ2O 2025-12-13T00:13:33.649379016+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BBZ2O 2025-12-13T00:13:33.649454449+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=Cuie6 2025-12-13T00:13:33.649454449+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cuie6 2025-12-13T00:13:33.649476840+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Cuie6 2025-12-13T00:13:33.649476840+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cuie6 2025-12-13T00:13:33.649487350+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cuie6 2025-12-13T00:13:33.655243573+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cuie6 2025-12-13T00:13:33.655327526+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cuie6 2025-12-13T00:13:33.655327526+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cuie6 
2025-12-13T00:13:33.655391198+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cuie6 2025-12-13T00:13:33.655421909+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Avn9M 2025-12-13T00:13:33.655430840+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Avn9M 2025-12-13T00:13:33.656609409+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BBZ2O 2025-12-13T00:13:33.656917729+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BBZ2O 2025-12-13T00:13:33.656917729+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BBZ2O 2025-12-13T00:13:33.656917729+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BBZ2O 2025-12-13T00:13:33.656963271+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Avn9M 2025-12-13T00:13:33.657033793+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Avn9M 2025-12-13T00:13:33.657033793+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Avn9M 2025-12-13T00:13:33.657033793+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Avn9M 2025-12-13T00:13:33.657052764+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Avn9M 2025-12-13T00:13:33.657052764+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Avn9M 2025-12-13T00:13:33.664702001+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Avn9M 2025-12-13T00:13:33.664881257+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Avn9M 
2025-12-13T00:13:33.664881257+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Avn9M 2025-12-13T00:13:33.664909818+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Avn9M 2025-12-13T00:13:33.682543410+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Xrt7b 2025-12-13T00:13:33.682543410+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Xrt7b 2025-12-13T00:13:33.687437405+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Xrt7b 2025-12-13T00:13:33.687534948+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Xrt7b 2025-12-13T00:13:33.687534948+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Xrt7b 
2025-12-13T00:13:33.687548378+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Xrt7b 2025-12-13T00:13:33.687548378+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Xrt7b 2025-12-13T00:13:33.687564059+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Xrt7b 2025-12-13T00:13:33.718430836+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hHr1R 2025-12-13T00:13:33.718430836+00:00 stderr F time="2025-12-13T00:13:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hHr1R 2025-12-13T00:13:34.048752166+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hHr1R 2025-12-13T00:13:34.048816988+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hHr1R 2025-12-13T00:13:34.048816988+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hHr1R 2025-12-13T00:13:34.048837999+00:00 
stderr F time="2025-12-13T00:13:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=hHr1R 2025-12-13T00:13:34.048853709+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hHr1R 2025-12-13T00:13:34.048853709+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hHr1R 2025-12-13T00:13:34.249453280+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Xrt7b 2025-12-13T00:13:34.249631776+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Xrt7b 2025-12-13T00:13:34.249631776+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Xrt7b 2025-12-13T00:13:34.249631776+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Xrt7b 2025-12-13T00:13:34.249682768+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Pi6Uy 
2025-12-13T00:13:34.249682768+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Pi6Uy 2025-12-13T00:13:34.648923752+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Pi6Uy 2025-12-13T00:13:34.649070177+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Pi6Uy 2025-12-13T00:13:34.649092278+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Pi6Uy 2025-12-13T00:13:34.649092278+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Pi6Uy 2025-12-13T00:13:34.649092278+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Pi6Uy 2025-12-13T00:13:34.649102328+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Pi6Uy 2025-12-13T00:13:34.848468378+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hHr1R 
2025-12-13T00:13:34.848586232+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hHr1R 2025-12-13T00:13:34.848586232+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hHr1R 2025-12-13T00:13:34.848664005+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hHr1R 2025-12-13T00:13:34.848750438+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5brfh 2025-12-13T00:13:34.848787609+00:00 stderr F time="2025-12-13T00:13:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5brfh 2025-12-13T00:13:35.252966450+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5brfh 2025-12-13T00:13:35.253054213+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=5brfh 2025-12-13T00:13:35.253063233+00:00 stderr F time="2025-12-13T00:13:35Z" 
level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=5brfh 2025-12-13T00:13:35.253072353+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=5brfh 2025-12-13T00:13:35.253079594+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5brfh 2025-12-13T00:13:35.253111625+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5brfh 2025-12-13T00:13:35.448723708+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Pi6Uy 2025-12-13T00:13:35.448894954+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Pi6Uy 2025-12-13T00:13:35.448894954+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Pi6Uy 2025-12-13T00:13:35.449042369+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="requeueing registry server for 
catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Pi6Uy 2025-12-13T00:13:35.449112431+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/ZY// 2025-12-13T00:13:35.449143012+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/ZY// 2025-12-13T00:13:35.848831293+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/ZY// 2025-12-13T00:13:35.848969187+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/ZY// 2025-12-13T00:13:35.848969187+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/ZY// 2025-12-13T00:13:35.848969187+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=/ZY// 2025-12-13T00:13:35.848969187+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/ZY// 2025-12-13T00:13:35.848988358+00:00 stderr F time="2025-12-13T00:13:35Z" level=info msg="ensuring 
registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/ZY// 2025-12-13T00:13:36.049618329+00:00 stderr F time="2025-12-13T00:13:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5brfh 2025-12-13T00:13:36.049709552+00:00 stderr F time="2025-12-13T00:13:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=5brfh 2025-12-13T00:13:36.049709552+00:00 stderr F time="2025-12-13T00:13:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=5brfh 2025-12-13T00:13:36.049777904+00:00 stderr F time="2025-12-13T00:13:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=5brfh 2025-12-13T00:13:36.448830064+00:00 stderr F time="2025-12-13T00:13:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/ZY// 2025-12-13T00:13:36.449058541+00:00 stderr F time="2025-12-13T00:13:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/ZY// 2025-12-13T00:13:36.449058541+00:00 stderr F time="2025-12-13T00:13:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/ZY// 2025-12-13T00:13:36.449111773+00:00 stderr F time="2025-12-13T00:13:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/ZY// 2025-12-13T00:13:45.647881033+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7kMSS 2025-12-13T00:13:45.647881033+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7kMSS 2025-12-13T00:13:45.650470500+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7kMSS 2025-12-13T00:13:45.650580894+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=7kMSS 2025-12-13T00:13:45.650580894+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=7kMSS 2025-12-13T00:13:45.650590264+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="checked registry server health" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace health=true id=7kMSS 2025-12-13T00:13:45.650597614+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7kMSS 2025-12-13T00:13:45.650604704+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7kMSS 2025-12-13T00:13:45.659115860+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7kMSS 2025-12-13T00:13:45.659277926+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=7kMSS 2025-12-13T00:13:45.659277926+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=7kMSS 2025-12-13T00:13:45.659379379+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7kMSS 2025-12-13T00:13:45.872509141+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=P88jE 2025-12-13T00:13:45.872509141+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="synchronizing registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=P88jE 2025-12-13T00:13:45.875495431+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=P88jE 2025-12-13T00:13:45.875610345+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=P88jE 2025-12-13T00:13:45.875610345+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=P88jE 2025-12-13T00:13:45.875622045+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=P88jE 2025-12-13T00:13:45.875622045+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=P88jE 2025-12-13T00:13:45.875631476+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=P88jE 2025-12-13T00:13:45.881667748+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=P88jE 2025-12-13T00:13:45.881747781+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=P88jE 2025-12-13T00:13:45.881747781+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=P88jE 2025-12-13T00:13:45.881841174+00:00 stderr F time="2025-12-13T00:13:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=P88jE 2025-12-13T00:13:46.653888227+00:00 stderr F time="2025-12-13T00:13:46Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=90MMm 2025-12-13T00:13:46.653888227+00:00 stderr F time="2025-12-13T00:13:46Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=90MMm 2025-12-13T00:13:46.657645533+00:00 stderr F time="2025-12-13T00:13:46Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=90MMm 2025-12-13T00:13:46.657805198+00:00 stderr F time="2025-12-13T00:13:46Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=90MMm 2025-12-13T00:13:46.657805198+00:00 stderr F time="2025-12-13T00:13:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=90MMm 2025-12-13T00:13:46.657818919+00:00 stderr F time="2025-12-13T00:13:46Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=90MMm 2025-12-13T00:13:46.657826849+00:00 stderr F time="2025-12-13T00:13:46Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=90MMm 2025-12-13T00:13:46.657835719+00:00 stderr F time="2025-12-13T00:13:46Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=90MMm 2025-12-13T00:13:46.667125351+00:00 stderr F time="2025-12-13T00:13:46Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=90MMm 2025-12-13T00:13:46.667275657+00:00 stderr F time="2025-12-13T00:13:46Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=90MMm 2025-12-13T00:13:46.667275657+00:00 stderr F time="2025-12-13T00:13:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=90MMm 2025-12-13T00:13:46.667392151+00:00 stderr F time="2025-12-13T00:13:46Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=90MMm 2025-12-13T00:13:47.468919674+00:00 stderr F time="2025-12-13T00:13:47Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Hxj8z 2025-12-13T00:13:47.469063549+00:00 stderr F time="2025-12-13T00:13:47Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Hxj8z 2025-12-13T00:13:47.475047279+00:00 stderr F time="2025-12-13T00:13:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Hxj8z 2025-12-13T00:13:47.475253856+00:00 stderr F time="2025-12-13T00:13:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Hxj8z 2025-12-13T00:13:47.475312688+00:00 stderr F time="2025-12-13T00:13:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Hxj8z 2025-12-13T00:13:47.475377470+00:00 stderr F time="2025-12-13T00:13:47Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Hxj8z 2025-12-13T00:13:47.475423792+00:00 stderr F time="2025-12-13T00:13:47Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Hxj8z 2025-12-13T00:13:47.475470103+00:00 stderr F time="2025-12-13T00:13:47Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=Hxj8z 2025-12-13T00:13:47.482469459+00:00 stderr F time="2025-12-13T00:13:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Hxj8z 2025-12-13T00:13:47.482689786+00:00 stderr F time="2025-12-13T00:13:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Hxj8z 2025-12-13T00:13:47.482732918+00:00 stderr F time="2025-12-13T00:13:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Hxj8z 2025-12-13T00:13:47.482838451+00:00 stderr F time="2025-12-13T00:13:47Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Hxj8z 2025-12-13T00:14:15.660252228+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=p1r8i 2025-12-13T00:14:15.660252228+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=p1r8i 2025-12-13T00:14:15.663549193+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=p1r8i 2025-12-13T00:14:15.663687028+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=p1r8i 2025-12-13T00:14:15.663729679+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=p1r8i 2025-12-13T00:14:15.663775151+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=p1r8i 2025-12-13T00:14:15.663800001+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=p1r8i 2025-12-13T00:14:15.663822672+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=p1r8i 2025-12-13T00:14:15.672261404+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=p1r8i 2025-12-13T00:14:15.672416699+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=p1r8i 2025-12-13T00:14:15.672468860+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=p1r8i 2025-12-13T00:14:15.672571474+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=p1r8i 2025-12-13T00:14:15.883252880+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1rQSv 2025-12-13T00:14:15.883252880+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1rQSv 2025-12-13T00:14:15.886351930+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1rQSv 2025-12-13T00:14:15.886426152+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1rQSv 2025-12-13T00:14:15.886426152+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1rQSv 2025-12-13T00:14:15.886444473+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
health=true id=1rQSv 2025-12-13T00:14:15.886444473+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1rQSv 2025-12-13T00:14:15.886444473+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1rQSv 2025-12-13T00:14:15.894417469+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1rQSv 2025-12-13T00:14:15.894460581+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1rQSv 2025-12-13T00:14:15.894460581+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1rQSv 2025-12-13T00:14:15.894571204+00:00 stderr F time="2025-12-13T00:14:15Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1rQSv 2025-12-13T00:14:16.668489085+00:00 stderr F time="2025-12-13T00:14:16Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RhAVr 2025-12-13T00:14:16.668489085+00:00 stderr F time="2025-12-13T00:14:16Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=RhAVr 2025-12-13T00:14:16.671817682+00:00 stderr F time="2025-12-13T00:14:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RhAVr 2025-12-13T00:14:16.671998848+00:00 stderr F time="2025-12-13T00:14:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=RhAVr 2025-12-13T00:14:16.671998848+00:00 stderr F time="2025-12-13T00:14:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=RhAVr 2025-12-13T00:14:16.671998848+00:00 stderr F time="2025-12-13T00:14:16Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=RhAVr 2025-12-13T00:14:16.672017098+00:00 stderr F time="2025-12-13T00:14:16Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RhAVr 2025-12-13T00:14:16.672017098+00:00 stderr F time="2025-12-13T00:14:16Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RhAVr 2025-12-13T00:14:16.679219580+00:00 stderr F time="2025-12-13T00:14:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RhAVr 2025-12-13T00:14:16.679367415+00:00 stderr F time="2025-12-13T00:14:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=RhAVr 2025-12-13T00:14:16.679367415+00:00 stderr F time="2025-12-13T00:14:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=RhAVr 2025-12-13T00:14:16.679422967+00:00 stderr F time="2025-12-13T00:14:16Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RhAVr 2025-12-13T00:14:17.483365334+00:00 stderr F time="2025-12-13T00:14:17Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dZLHe 2025-12-13T00:14:17.483365334+00:00 stderr F time="2025-12-13T00:14:17Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dZLHe 2025-12-13T00:14:17.486517935+00:00 stderr F time="2025-12-13T00:14:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dZLHe 2025-12-13T00:14:17.486590837+00:00 stderr F time="2025-12-13T00:14:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dZLHe 2025-12-13T00:14:17.486590837+00:00 stderr F time="2025-12-13T00:14:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dZLHe 2025-12-13T00:14:17.486614528+00:00 stderr F time="2025-12-13T00:14:17Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=dZLHe 2025-12-13T00:14:17.486614528+00:00 stderr F time="2025-12-13T00:14:17Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dZLHe 2025-12-13T00:14:17.486614528+00:00 stderr F time="2025-12-13T00:14:17Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dZLHe 2025-12-13T00:14:17.494165081+00:00 stderr F time="2025-12-13T00:14:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dZLHe 2025-12-13T00:14:17.494246524+00:00 stderr F time="2025-12-13T00:14:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dZLHe 2025-12-13T00:14:17.494246524+00:00 stderr F time="2025-12-13T00:14:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dZLHe 2025-12-13T00:14:17.494332217+00:00 stderr F time="2025-12-13T00:14:17Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dZLHe 
2025-12-13T00:14:22.104161652+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aCYHF 2025-12-13T00:14:22.104161652+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aCYHF 2025-12-13T00:14:22.108129871+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aCYHF 2025-12-13T00:14:22.108129871+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=aCYHF 2025-12-13T00:14:22.108129871+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=aCYHF 2025-12-13T00:14:22.108129871+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=aCYHF 2025-12-13T00:14:22.108129871+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aCYHF 2025-12-13T00:14:22.108129871+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aCYHF 
2025-12-13T00:14:22.131463051+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aCYHF
2025-12-13T00:14:22.131463051+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=aCYHF
2025-12-13T00:14:22.131463051+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=aCYHF
2025-12-13T00:14:22.131463051+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=aCYHF
2025-12-13T00:14:22.179977171+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hUpRA
2025-12-13T00:14:22.179977171+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hUpRA
2025-12-13T00:14:22.185140467+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hUpRA
2025-12-13T00:14:22.185140467+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hUpRA
2025-12-13T00:14:22.185140467+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hUpRA
2025-12-13T00:14:22.185140467+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=hUpRA
2025-12-13T00:14:22.185140467+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hUpRA
2025-12-13T00:14:22.185140467+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hUpRA
2025-12-13T00:14:22.197016819+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2/WLo
2025-12-13T00:14:22.197016819+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2/WLo
2025-12-13T00:14:22.197016819+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hUpRA
2025-12-13T00:14:22.197016819+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hUpRA
2025-12-13T00:14:22.197016819+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=hUpRA
2025-12-13T00:14:22.197016819+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=hUpRA
2025-12-13T00:14:22.200071067+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2/WLo
2025-12-13T00:14:22.200071067+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2/WLo
2025-12-13T00:14:22.200071067+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2/WLo
2025-12-13T00:14:22.200071067+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=2/WLo
2025-12-13T00:14:22.200071067+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2/WLo
2025-12-13T00:14:22.200071067+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2/WLo
2025-12-13T00:14:22.212795097+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2/WLo
2025-12-13T00:14:22.212795097+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2/WLo
2025-12-13T00:14:22.212795097+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2/WLo
2025-12-13T00:14:22.212795097+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2/WLo
2025-12-13T00:14:22.530122013+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9Hx7/
2025-12-13T00:14:22.530122013+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9Hx7/
2025-12-13T00:14:22.538324827+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9Hx7/
2025-12-13T00:14:22.538324827+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9Hx7/
2025-12-13T00:14:22.538324827+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9Hx7/
2025-12-13T00:14:22.538324827+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=9Hx7/
2025-12-13T00:14:22.538324827+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9Hx7/
2025-12-13T00:14:22.538324827+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9Hx7/
2025-12-13T00:14:22.538324827+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9Hx7/
2025-12-13T00:14:22.538324827+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9Hx7/
2025-12-13T00:14:22.538324827+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9Hx7/
2025-12-13T00:14:22.538324827+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9Hx7/
2025-12-13T00:14:22.561071138+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fWiZf
2025-12-13T00:14:22.561071138+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fWiZf
2025-12-13T00:14:22.705776582+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fWiZf
2025-12-13T00:14:22.705880476+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fWiZf
2025-12-13T00:14:22.705880476+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fWiZf
2025-12-13T00:14:22.705896156+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=fWiZf
2025-12-13T00:14:22.705903427+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fWiZf
2025-12-13T00:14:22.705910537+00:00 stderr F time="2025-12-13T00:14:22Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fWiZf
2025-12-13T00:14:23.110656925+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fWiZf
2025-12-13T00:14:23.110797990+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fWiZf
2025-12-13T00:14:23.110797990+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fWiZf
2025-12-13T00:14:23.110952225+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fWiZf
2025-12-13T00:14:23.625079350+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=CHPf8
2025-12-13T00:14:23.625079350+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=CHPf8
2025-12-13T00:14:23.626308050+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=It8rx
2025-12-13T00:14:23.626329061+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=It8rx
2025-12-13T00:14:23.628162069+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=CHPf8
2025-12-13T00:14:23.628285253+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=CHPf8
2025-12-13T00:14:23.628285253+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=CHPf8
2025-12-13T00:14:23.628310784+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=CHPf8
2025-12-13T00:14:23.628310784+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=CHPf8
2025-12-13T00:14:23.628318554+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=CHPf8
2025-12-13T00:14:23.629255455+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=It8rx
2025-12-13T00:14:23.629354048+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=It8rx
2025-12-13T00:14:23.629354048+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=It8rx
2025-12-13T00:14:23.629362368+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=It8rx
2025-12-13T00:14:23.629369758+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=It8rx
2025-12-13T00:14:23.629376709+00:00 stderr F time="2025-12-13T00:14:23Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=It8rx
2025-12-13T00:14:24.111066122+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=CHPf8
2025-12-13T00:14:24.111153324+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=CHPf8
2025-12-13T00:14:24.111153324+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=CHPf8
2025-12-13T00:14:24.111224647+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=CHPf8
2025-12-13T00:14:24.306769746+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=It8rx
2025-12-13T00:14:24.307763268+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=It8rx
2025-12-13T00:14:24.307763268+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=It8rx
2025-12-13T00:14:24.307971385+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=It8rx
2025-12-13T00:14:24.603770719+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yKWhg
2025-12-13T00:14:24.603770719+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yKWhg
2025-12-13T00:14:24.609282205+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yKWhg
2025-12-13T00:14:24.609611927+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yKWhg
2025-12-13T00:14:24.609659608+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yKWhg
2025-12-13T00:14:24.609706380+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=yKWhg
2025-12-13T00:14:24.609740371+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yKWhg
2025-12-13T00:14:24.609774302+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yKWhg
2025-12-13T00:14:24.630795807+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yT4ur
2025-12-13T00:14:24.630795807+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yT4ur
2025-12-13T00:14:24.907249199+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yT4ur
2025-12-13T00:14:24.907431045+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yT4ur
2025-12-13T00:14:24.907463856+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yT4ur
2025-12-13T00:14:24.907495127+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=yT4ur
2025-12-13T00:14:24.907520158+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yT4ur
2025-12-13T00:14:24.907559069+00:00 stderr F time="2025-12-13T00:14:24Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yT4ur
2025-12-13T00:14:25.105391493+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yKWhg
2025-12-13T00:14:25.105569988+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yKWhg
2025-12-13T00:14:25.105600769+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yKWhg
2025-12-13T00:14:25.105673612+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yKWhg
2025-12-13T00:14:25.521500535+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yT4ur
2025-12-13T00:14:25.521600719+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yT4ur
2025-12-13T00:14:25.521600719+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yT4ur
2025-12-13T00:14:25.521665401+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yT4ur
2025-12-13T00:14:25.587114756+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SfTFA
2025-12-13T00:14:25.587114756+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SfTFA
2025-12-13T00:14:25.598635047+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=71ZS0
2025-12-13T00:14:25.598635047+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=71ZS0
2025-12-13T00:14:25.705893696+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SfTFA
2025-12-13T00:14:25.705999320+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SfTFA
2025-12-13T00:14:25.705999320+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SfTFA
2025-12-13T00:14:25.706012190+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=SfTFA
2025-12-13T00:14:25.706012190+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SfTFA
2025-12-13T00:14:25.706026450+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SfTFA
2025-12-13T00:14:25.905591449+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=71ZS0
2025-12-13T00:14:25.905733814+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=71ZS0
2025-12-13T00:14:25.905733814+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=71ZS0
2025-12-13T00:14:25.905744594+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=71ZS0
2025-12-13T00:14:25.905753234+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=71ZS0
2025-12-13T00:14:25.905761954+00:00 stderr F time="2025-12-13T00:14:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=71ZS0
2025-12-13T00:14:26.508712037+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SfTFA
2025-12-13T00:14:26.508778360+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SfTFA
2025-12-13T00:14:26.508778360+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SfTFA
2025-12-13T00:14:26.508866473+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SfTFA
2025-12-13T00:14:26.520807207+00:00 stderr F 2025/12/13 00:14:26 http: TLS handshake error from 10.217.0.17:51860: tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "x509: invalid signature: parent certificate cannot sign this kind of certificate" while trying to verify candidate authority certificate "Red Hat, Inc.")
2025-12-13T00:14:26.594093704+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=j6OcJ
2025-12-13T00:14:26.594093704+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=j6OcJ
2025-12-13T00:14:26.705656842+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=71ZS0
2025-12-13T00:14:26.705749975+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=71ZS0
2025-12-13T00:14:26.705749975+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=71ZS0
2025-12-13T00:14:26.705819707+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=71ZS0
2025-12-13T00:14:26.705872959+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=F1Na0
2025-12-13T00:14:26.705872959+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=F1Na0
2025-12-13T00:14:26.911910195+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=j6OcJ
2025-12-13T00:14:26.911910195+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=j6OcJ
2025-12-13T00:14:26.911910195+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=j6OcJ
2025-12-13T00:14:26.911910195+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=j6OcJ
2025-12-13T00:14:26.911910195+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=j6OcJ
2025-12-13T00:14:26.911910195+00:00 stderr F time="2025-12-13T00:14:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=j6OcJ
2025-12-13T00:14:27.105761071+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=F1Na0
2025-12-13T00:14:27.105868484+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=F1Na0
2025-12-13T00:14:27.105868484+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=F1Na0
2025-12-13T00:14:27.105877474+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=F1Na0
2025-12-13T00:14:27.105877474+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=F1Na0
2025-12-13T00:14:27.105891815+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=F1Na0
2025-12-13T00:14:27.707252646+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=j6OcJ
2025-12-13T00:14:27.707252646+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=j6OcJ
2025-12-13T00:14:27.707252646+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=j6OcJ
2025-12-13T00:14:27.707252646+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=j6OcJ
2025-12-13T00:14:27.906976560+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=F1Na0
2025-12-13T00:14:27.906976560+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=F1Na0
2025-12-13T00:14:27.906976560+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=F1Na0
2025-12-13T00:14:27.906976560+00:00 stderr F time="2025-12-13T00:14:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=F1Na0
2025-12-13T00:14:28.614322420+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IrkYa
2025-12-13T00:14:28.614322420+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IrkYa
2025-12-13T00:14:28.622251305+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IrkYa
2025-12-13T00:14:28.622251305+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IrkYa
2025-12-13T00:14:28.622251305+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IrkYa
2025-12-13T00:14:28.622251305+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=IrkYa
2025-12-13T00:14:28.622251305+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IrkYa
2025-12-13T00:14:28.622251305+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IrkYa
2025-12-13T00:14:28.628082553+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IrkYa
2025-12-13T00:14:28.628082553+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IrkYa 2025-12-13T00:14:28.628082553+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IrkYa 2025-12-13T00:14:28.628082553+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IrkYa 2025-12-13T00:14:28.640731260+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LV6Ul 2025-12-13T00:14:28.640731260+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LV6Ul 2025-12-13T00:14:28.657170748+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YW4km 2025-12-13T00:14:28.657170748+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YW4km 2025-12-13T00:14:28.706009579+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=LV6Ul 2025-12-13T00:14:28.706106783+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LV6Ul 2025-12-13T00:14:28.706106783+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LV6Ul 2025-12-13T00:14:28.706106783+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=LV6Ul 2025-12-13T00:14:28.706121793+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LV6Ul 2025-12-13T00:14:28.706121793+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LV6Ul 2025-12-13T00:14:28.915964602+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YW4km 2025-12-13T00:14:28.915964602+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=YW4km 2025-12-13T00:14:28.915964602+00:00 stderr F 
time="2025-12-13T00:14:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=YW4km 2025-12-13T00:14:28.915964602+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=YW4km 2025-12-13T00:14:28.915964602+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YW4km 2025-12-13T00:14:28.915964602+00:00 stderr F time="2025-12-13T00:14:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YW4km 2025-12-13T00:14:29.506285499+00:00 stderr F time="2025-12-13T00:14:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LV6Ul 2025-12-13T00:14:29.506347721+00:00 stderr F time="2025-12-13T00:14:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LV6Ul 2025-12-13T00:14:29.506357161+00:00 stderr F time="2025-12-13T00:14:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LV6Ul 2025-12-13T00:14:29.506438784+00:00 stderr F time="2025-12-13T00:14:29Z" level=info msg="requeueing registry 
server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LV6Ul 2025-12-13T00:14:29.705878248+00:00 stderr F time="2025-12-13T00:14:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YW4km 2025-12-13T00:14:29.706019013+00:00 stderr F time="2025-12-13T00:14:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=YW4km 2025-12-13T00:14:29.706019013+00:00 stderr F time="2025-12-13T00:14:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=YW4km 2025-12-13T00:14:29.706084825+00:00 stderr F time="2025-12-13T00:14:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=YW4km 2025-12-13T00:14:30.618849313+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=u0tB6 2025-12-13T00:14:30.618849313+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=u0tB6 2025-12-13T00:14:30.621537839+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=u0tB6 2025-12-13T00:14:30.621653753+00:00 stderr F 
time="2025-12-13T00:14:30Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=u0tB6 2025-12-13T00:14:30.621653753+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=u0tB6 2025-12-13T00:14:30.621686544+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=u0tB6 2025-12-13T00:14:30.621686544+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=u0tB6 2025-12-13T00:14:30.621686544+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=u0tB6 2025-12-13T00:14:30.628848854+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=u0tB6 2025-12-13T00:14:30.628968017+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=u0tB6 2025-12-13T00:14:30.628968017+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct 
images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=u0tB6 2025-12-13T00:14:30.629034209+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=u0tB6 2025-12-13T00:14:30.995822117+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OJgCS 2025-12-13T00:14:30.995822117+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OJgCS 2025-12-13T00:14:30.998287056+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OJgCS 2025-12-13T00:14:30.998287056+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OJgCS 2025-12-13T00:14:30.998287056+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OJgCS 2025-12-13T00:14:30.998287056+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="checked registry server health" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=OJgCS 2025-12-13T00:14:30.998287056+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OJgCS 2025-12-13T00:14:30.998287056+00:00 stderr F time="2025-12-13T00:14:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OJgCS 2025-12-13T00:14:31.018873618+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OJgCS 2025-12-13T00:14:31.018991482+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OJgCS 2025-12-13T00:14:31.018991482+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OJgCS 2025-12-13T00:14:31.019040934+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OJgCS 2025-12-13T00:14:31.023098694+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=klBD5 2025-12-13T00:14:31.023098694+00:00 stderr F time="2025-12-13T00:14:31Z" level=info 
msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=klBD5 2025-12-13T00:14:31.105643108+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=klBD5 2025-12-13T00:14:31.105700920+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=klBD5 2025-12-13T00:14:31.105700920+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=klBD5 2025-12-13T00:14:31.105711271+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=klBD5 2025-12-13T00:14:31.105720171+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=klBD5 2025-12-13T00:14:31.105720171+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=klBD5 2025-12-13T00:14:31.504986263+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=klBD5 2025-12-13T00:14:31.505105607+00:00 stderr F time="2025-12-13T00:14:31Z" level=info 
msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=klBD5 2025-12-13T00:14:31.505105607+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=klBD5 2025-12-13T00:14:31.505178959+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=klBD5 2025-12-13T00:14:31.732955025+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=n+Gu0 2025-12-13T00:14:31.732955025+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=n+Gu0 2025-12-13T00:14:31.734508385+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=n+Gu0 2025-12-13T00:14:31.734644700+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=n+Gu0 2025-12-13T00:14:31.734644700+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct 
images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=n+Gu0 2025-12-13T00:14:31.734644700+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=n+Gu0 2025-12-13T00:14:31.734657640+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=n+Gu0 2025-12-13T00:14:31.734657640+00:00 stderr F time="2025-12-13T00:14:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=n+Gu0 2025-12-13T00:14:32.107730529+00:00 stderr F time="2025-12-13T00:14:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=n+Gu0 2025-12-13T00:14:32.107793091+00:00 stderr F time="2025-12-13T00:14:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=n+Gu0 2025-12-13T00:14:32.107811262+00:00 stderr F time="2025-12-13T00:14:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=n+Gu0 2025-12-13T00:14:32.107903905+00:00 stderr F time="2025-12-13T00:14:32Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=n+Gu0 2025-12-13T00:14:37.641371678+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/h2uH 2025-12-13T00:14:37.641422209+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/h2uH 2025-12-13T00:14:37.650850313+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/h2uH 2025-12-13T00:14:37.650886194+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/h2uH 2025-12-13T00:14:37.650912095+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/h2uH 2025-12-13T00:14:37.650912095+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=/h2uH 2025-12-13T00:14:37.650912095+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/h2uH 2025-12-13T00:14:37.650912095+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="ensuring registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/h2uH 2025-12-13T00:14:37.666708514+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/h2uH 2025-12-13T00:14:37.666751635+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/h2uH 2025-12-13T00:14:37.666751635+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/h2uH 2025-12-13T00:14:37.666796586+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/h2uH 2025-12-13T00:14:37.666878829+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g47E7 2025-12-13T00:14:37.666878829+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g47E7 2025-12-13T00:14:37.667280962+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=skK8+ 2025-12-13T00:14:37.667280962+00:00 stderr F time="2025-12-13T00:14:37Z" level=info 
msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=skK8+ 2025-12-13T00:14:37.675204767+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g47E7 2025-12-13T00:14:37.675283309+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=g47E7 2025-12-13T00:14:37.675283309+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=g47E7 2025-12-13T00:14:37.675294070+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=g47E7 2025-12-13T00:14:37.675294070+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g47E7 2025-12-13T00:14:37.675347581+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g47E7 2025-12-13T00:14:37.675500246+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=skK8+ 2025-12-13T00:14:37.675566358+00:00 stderr F time="2025-12-13T00:14:37Z" level=info 
msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=skK8+ 2025-12-13T00:14:37.675566358+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=skK8+ 2025-12-13T00:14:37.675577879+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=skK8+ 2025-12-13T00:14:37.675587039+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=skK8+ 2025-12-13T00:14:37.675587039+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=skK8+ 2025-12-13T00:14:37.684381602+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=skK8+ 2025-12-13T00:14:37.684381602+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=skK8+ 2025-12-13T00:14:37.684381602+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=skK8+ 2025-12-13T00:14:37.684381602+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=skK8+ 2025-12-13T00:14:37.684381602+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7U+SZ 2025-12-13T00:14:37.684381602+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7U+SZ 2025-12-13T00:14:37.687115220+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g47E7 2025-12-13T00:14:37.687115220+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=g47E7 2025-12-13T00:14:37.687115220+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=g47E7 2025-12-13T00:14:37.687115220+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=g47E7 2025-12-13T00:14:37.687115220+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7U+SZ 2025-12-13T00:14:37.687115220+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=7U+SZ 2025-12-13T00:14:37.687115220+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=7U+SZ 2025-12-13T00:14:37.687115220+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=7U+SZ 2025-12-13T00:14:37.687115220+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7U+SZ 2025-12-13T00:14:37.687115220+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7U+SZ 2025-12-13T00:14:37.689081372+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=05faZ 2025-12-13T00:14:37.689081372+00:00 stderr F time="2025-12-13T00:14:37Z" level=info msg="synchronizing registry server" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=05faZ 2025-12-13T00:14:38.046069904+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=05faZ 2025-12-13T00:14:38.046273051+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=05faZ 2025-12-13T00:14:38.046320682+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=05faZ 2025-12-13T00:14:38.046370174+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=05faZ 2025-12-13T00:14:38.046411635+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=05faZ 2025-12-13T00:14:38.046439966+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=05faZ 2025-12-13T00:14:38.246613475+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7U+SZ 2025-12-13T00:14:38.246613475+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=7U+SZ 2025-12-13T00:14:38.246613475+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=7U+SZ 2025-12-13T00:14:38.246613475+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7U+SZ 2025-12-13T00:14:38.246613475+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ph2ZQ 2025-12-13T00:14:38.246613475+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ph2ZQ 2025-12-13T00:14:38.645569696+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ph2ZQ 2025-12-13T00:14:38.645666879+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ph2ZQ 2025-12-13T00:14:38.645674949+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ph2ZQ 2025-12-13T00:14:38.645699340+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=ph2ZQ 2025-12-13T00:14:38.645699340+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ph2ZQ 2025-12-13T00:14:38.645699340+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ph2ZQ 2025-12-13T00:14:38.844598217+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=05faZ 2025-12-13T00:14:38.844775573+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=05faZ 2025-12-13T00:14:38.844775573+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=05faZ 2025-12-13T00:14:38.844775573+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=05faZ 2025-12-13T00:14:38.844848145+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cRAxS 2025-12-13T00:14:38.844859565+00:00 stderr F time="2025-12-13T00:14:38Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cRAxS 2025-12-13T00:14:39.247833406+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cRAxS 2025-12-13T00:14:39.247833406+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cRAxS 2025-12-13T00:14:39.247833406+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=false id=cRAxS 2025-12-13T00:14:39.247833406+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cRAxS 2025-12-13T00:14:39.445298237+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ph2ZQ 2025-12-13T00:14:39.445386880+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ph2ZQ 2025-12-13T00:14:39.445386880+00:00 stderr F time="2025-12-13T00:14:39Z" 
level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ph2ZQ 2025-12-13T00:14:39.445434681+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ph2ZQ 2025-12-13T00:14:39.445503183+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DimZf 2025-12-13T00:14:39.445503183+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DimZf 2025-12-13T00:14:39.846161150+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DimZf 2025-12-13T00:14:39.846224403+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=DimZf 2025-12-13T00:14:39.846224403+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=DimZf 2025-12-13T00:14:39.846245893+00:00 stderr F time="2025-12-13T00:14:39Z" level=info 
msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=DimZf 2025-12-13T00:14:39.846245893+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DimZf 2025-12-13T00:14:39.846265204+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DimZf 2025-12-13T00:14:40.045065547+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cRAxS 2025-12-13T00:14:40.045172482+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cRAxS 2025-12-13T00:14:40.045172482+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="registry pods invalid, need to overwrite" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cRAxS 2025-12-13T00:14:40.045230553+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="creating desired pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cRAxS pod.name= pod.namespace=openshift-marketplace 2025-12-13T00:14:40.456250592+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cRAxS 2025-12-13T00:14:40.456250592+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gThcd 
2025-12-13T00:14:40.456250592+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gThcd 2025-12-13T00:14:40.645066406+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DimZf 2025-12-13T00:14:40.645260872+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=DimZf 2025-12-13T00:14:40.645260872+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=DimZf 2025-12-13T00:14:40.645379966+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=DimZf 2025-12-13T00:14:40.645481309+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wnb8h 2025-12-13T00:14:40.645481309+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wnb8h 2025-12-13T00:14:40.845784871+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=gThcd 2025-12-13T00:14:40.845877804+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=gThcd 2025-12-13T00:14:40.845877804+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=gThcd 2025-12-13T00:14:40.845889844+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=gThcd 2025-12-13T00:14:40.845898705+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gThcd 2025-12-13T00:14:40.845907225+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gThcd 2025-12-13T00:14:41.044560514+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wnb8h 2025-12-13T00:14:41.044665228+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wnb8h 2025-12-13T00:14:41.044665228+00:00 stderr F 
time="2025-12-13T00:14:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wnb8h 2025-12-13T00:14:41.044675868+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=wnb8h 2025-12-13T00:14:41.044700109+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wnb8h 2025-12-13T00:14:41.044700109+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wnb8h 2025-12-13T00:14:41.645379489+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gThcd 2025-12-13T00:14:41.645570615+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=gThcd 2025-12-13T00:14:41.645570615+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=gThcd 2025-12-13T00:14:41.645806992+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="ensured registry 
server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gThcd 2025-12-13T00:14:41.645806992+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=gThcd 2025-12-13T00:14:41.645901265+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ozv3K 2025-12-13T00:14:41.645911856+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ozv3K 2025-12-13T00:14:41.846651273+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wnb8h 2025-12-13T00:14:41.846814098+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wnb8h 2025-12-13T00:14:41.846814098+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wnb8h 2025-12-13T00:14:41.846987663+00:00 stderr F time="2025-12-13T00:14:41Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wnb8h 2025-12-13T00:14:42.046386656+00:00 stderr F 
time="2025-12-13T00:14:42Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ozv3K 2025-12-13T00:14:42.046460439+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=ozv3K 2025-12-13T00:14:42.046470199+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=ozv3K 2025-12-13T00:14:42.046497490+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=ozv3K 2025-12-13T00:14:42.046497490+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ozv3K 2025-12-13T00:14:42.046497490+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ozv3K 2025-12-13T00:14:42.448113517+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ozv3K 2025-12-13T00:14:42.448113517+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=ozv3K 2025-12-13T00:14:42.448113517+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=ozv3K 2025-12-13T00:14:42.448113517+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ozv3K 2025-12-13T00:14:42.807634291+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=esrXI 2025-12-13T00:14:42.807674612+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=esrXI 2025-12-13T00:14:42.811250317+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=esrXI 2025-12-13T00:14:42.811250317+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=esrXI 2025-12-13T00:14:42.811250317+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=esrXI 2025-12-13T00:14:42.811250317+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=esrXI 2025-12-13T00:14:42.811250317+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=esrXI 2025-12-13T00:14:42.811274798+00:00 stderr F time="2025-12-13T00:14:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=esrXI 2025-12-13T00:14:43.009018208+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4Geuc 2025-12-13T00:14:43.009018208+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4Geuc 2025-12-13T00:14:43.044714065+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=esrXI 2025-12-13T00:14:43.044816979+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=esrXI 2025-12-13T00:14:43.044816979+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=esrXI 2025-12-13T00:14:43.044898241+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="catalog update required at 2025-12-13 00:14:43.04486529 +0000 UTC m=+88.229547691" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=esrXI 2025-12-13T00:14:43.245128552+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4Geuc 2025-12-13T00:14:43.245128552+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4Geuc 2025-12-13T00:14:43.245128552+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=false id=4Geuc 2025-12-13T00:14:43.245128552+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4Geuc 2025-12-13T00:14:43.451357104+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=esrXI 2025-12-13T00:14:43.461828021+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qKwZg 2025-12-13T00:14:43.461828021+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qKwZg 2025-12-13T00:14:43.844913743+00:00 stderr F 
time="2025-12-13T00:14:43Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qKwZg 2025-12-13T00:14:43.845008606+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=qKwZg 2025-12-13T00:14:43.845008606+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=qKwZg 2025-12-13T00:14:43.845039857+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=qKwZg 2025-12-13T00:14:43.845039857+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qKwZg 2025-12-13T00:14:43.845039857+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qKwZg 2025-12-13T00:14:44.045198744+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4Geuc 2025-12-13T00:14:44.045273306+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4Geuc 
2025-12-13T00:14:44.045273306+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="registry pods invalid, need to overwrite" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4Geuc
2025-12-13T00:14:44.045345939+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="creating desired pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4Geuc pod.name= pod.namespace=openshift-marketplace
2025-12-13T00:14:44.453266629+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4Geuc
2025-12-13T00:14:44.453266629+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4Geuc
2025-12-13T00:14:44.461836865+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tQixr
2025-12-13T00:14:44.461836865+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tQixr
2025-12-13T00:14:44.645001037+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qKwZg
2025-12-13T00:14:44.645112320+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=qKwZg
2025-12-13T00:14:44.645112320+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=qKwZg
2025-12-13T00:14:44.645291576+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qKwZg
2025-12-13T00:14:44.645291576+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qKwZg
2025-12-13T00:14:44.645392529+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fDRmS
2025-12-13T00:14:44.645392529+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fDRmS
2025-12-13T00:14:44.846218507+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tQixr
2025-12-13T00:14:44.846330711+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tQixr
2025-12-13T00:14:44.846330711+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=false id=tQixr
2025-12-13T00:14:44.846343481+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tQixr
2025-12-13T00:14:45.082624882+00:00 stderr F time="2025-12-13T00:14:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fDRmS
2025-12-13T00:14:45.082706964+00:00 stderr F time="2025-12-13T00:14:45Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fDRmS
2025-12-13T00:14:45.082706964+00:00 stderr F time="2025-12-13T00:14:45Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=false id=fDRmS
2025-12-13T00:14:45.082706964+00:00 stderr F time="2025-12-13T00:14:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fDRmS
2025-12-13T00:14:45.645040570+00:00 stderr F time="2025-12-13T00:14:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tQixr
2025-12-13T00:14:45.645119793+00:00 stderr F time="2025-12-13T00:14:45Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tQixr
2025-12-13T00:14:45.645119793+00:00 stderr F time="2025-12-13T00:14:45Z" level=info msg="registry pods invalid, need to overwrite" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tQixr
2025-12-13T00:14:45.645199236+00:00 stderr F time="2025-12-13T00:14:45Z" level=info msg="creating desired pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tQixr pod.name= pod.namespace=openshift-marketplace
2025-12-13T00:14:45.845202458+00:00 stderr F time="2025-12-13T00:14:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fDRmS
2025-12-13T00:14:45.845345143+00:00 stderr F time="2025-12-13T00:14:45Z" level=info msg="of 0 pods matching label selector, 0 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fDRmS
2025-12-13T00:14:45.845345143+00:00 stderr F time="2025-12-13T00:14:45Z" level=info msg="registry pods invalid, need to overwrite" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fDRmS
2025-12-13T00:14:45.845493248+00:00 stderr F time="2025-12-13T00:14:45Z" level=info msg="creating desired pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fDRmS pod.name= pod.namespace=openshift-marketplace
2025-12-13T00:14:46.053792797+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tQixr
2025-12-13T00:14:46.053792797+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tQixr
2025-12-13T00:14:46.061711842+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SdcR4
2025-12-13T00:14:46.061711842+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SdcR4
2025-12-13T00:14:46.251069642+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fDRmS
2025-12-13T00:14:46.251069642+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=K0VJs
2025-12-13T00:14:46.251069642+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=K0VJs
2025-12-13T00:14:46.444115271+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SdcR4
2025-12-13T00:14:46.444282317+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=SdcR4
2025-12-13T00:14:46.444282317+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=SdcR4
2025-12-13T00:14:46.444282317+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=SdcR4
2025-12-13T00:14:46.444282317+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SdcR4
2025-12-13T00:14:46.444282317+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SdcR4
2025-12-13T00:14:46.645095165+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=K0VJs
2025-12-13T00:14:46.645174488+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=K0VJs
2025-12-13T00:14:46.645174488+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=K0VJs
2025-12-13T00:14:46.645187349+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=K0VJs
2025-12-13T00:14:46.645198179+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=K0VJs
2025-12-13T00:14:46.645198179+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=K0VJs
2025-12-13T00:14:47.246230490+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SdcR4
2025-12-13T00:14:47.246230490+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=SdcR4
2025-12-13T00:14:47.246230490+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=SdcR4
2025-12-13T00:14:47.246230490+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SdcR4
2025-12-13T00:14:47.246230490+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SdcR4
2025-12-13T00:14:47.246230490+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KmC4C
2025-12-13T00:14:47.246230490+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KmC4C
2025-12-13T00:14:47.444811267+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=K0VJs
2025-12-13T00:14:47.444875569+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=K0VJs
2025-12-13T00:14:47.444875569+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=K0VJs
2025-12-13T00:14:47.445001203+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=K0VJs
2025-12-13T00:14:47.445001203+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=K0VJs
2025-12-13T00:14:47.445024854+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTkPw
2025-12-13T00:14:47.445040094+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTkPw
2025-12-13T00:14:47.644155308+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KmC4C
2025-12-13T00:14:47.644258731+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=KmC4C
2025-12-13T00:14:47.644258731+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=KmC4C
2025-12-13T00:14:47.644274342+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=KmC4C
2025-12-13T00:14:47.644284122+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KmC4C
2025-12-13T00:14:47.644306483+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KmC4C
2025-12-13T00:14:47.850023680+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTkPw
2025-12-13T00:14:47.850023680+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=OTkPw
2025-12-13T00:14:47.850023680+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=OTkPw
2025-12-13T00:14:47.850023680+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=OTkPw
2025-12-13T00:14:47.850023680+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTkPw
2025-12-13T00:14:47.850023680+00:00 stderr F time="2025-12-13T00:14:47Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTkPw
2025-12-13T00:14:48.444827260+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KmC4C
2025-12-13T00:14:48.444990536+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=KmC4C
2025-12-13T00:14:48.445005436+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=KmC4C
2025-12-13T00:14:48.445225553+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KmC4C
2025-12-13T00:14:48.445246764+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=KmC4C
2025-12-13T00:14:48.445299605+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QMXMf
2025-12-13T00:14:48.445310376+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QMXMf
2025-12-13T00:14:48.645034769+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTkPw
2025-12-13T00:14:48.645167613+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=OTkPw
2025-12-13T00:14:48.645176174+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=OTkPw
2025-12-13T00:14:48.645243916+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTkPw
2025-12-13T00:14:48.645304658+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ncGkg
2025-12-13T00:14:48.645320348+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ncGkg
2025-12-13T00:14:48.845291051+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QMXMf
2025-12-13T00:14:48.845401754+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=QMXMf
2025-12-13T00:14:48.845401754+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=QMXMf
2025-12-13T00:14:48.845421315+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=QMXMf
2025-12-13T00:14:48.845421315+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QMXMf
2025-12-13T00:14:48.845421315+00:00 stderr F time="2025-12-13T00:14:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QMXMf
2025-12-13T00:14:49.045704666+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ncGkg
2025-12-13T00:14:49.045810139+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=ncGkg
2025-12-13T00:14:49.045810139+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=ncGkg
2025-12-13T00:14:49.045839040+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=ncGkg
2025-12-13T00:14:49.045839040+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ncGkg
2025-12-13T00:14:49.045839040+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ncGkg
2025-12-13T00:14:49.647980167+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QMXMf
2025-12-13T00:14:49.648142942+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=QMXMf
2025-12-13T00:14:49.648152973+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=QMXMf
2025-12-13T00:14:49.648302838+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QMXMf
2025-12-13T00:14:49.648317669+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QMXMf
2025-12-13T00:14:49.648387501+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oZyVI
2025-12-13T00:14:49.648413622+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oZyVI
2025-12-13T00:14:49.848352462+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ncGkg
2025-12-13T00:14:49.848475756+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=ncGkg
2025-12-13T00:14:49.848475756+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=ncGkg
2025-12-13T00:14:49.848592580+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ncGkg
2025-12-13T00:14:49.848592580+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ncGkg
2025-12-13T00:14:49.848669792+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9RklH
2025-12-13T00:14:49.848669792+00:00 stderr F time="2025-12-13T00:14:49Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9RklH
2025-12-13T00:14:50.045532644+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oZyVI
2025-12-13T00:14:50.045630947+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=oZyVI
2025-12-13T00:14:50.045639687+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=oZyVI
2025-12-13T00:14:50.045646877+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=oZyVI
2025-12-13T00:14:50.045662448+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oZyVI
2025-12-13T00:14:50.045662448+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oZyVI
2025-12-13T00:14:50.245200286+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9RklH
2025-12-13T00:14:50.245429174+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=9RklH
2025-12-13T00:14:50.245494926+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=9RklH
2025-12-13T00:14:50.245559338+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=9RklH
2025-12-13T00:14:50.245611189+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9RklH
2025-12-13T00:14:50.245662031+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9RklH
2025-12-13T00:14:50.845390420+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oZyVI
2025-12-13T00:14:50.845522284+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=oZyVI
2025-12-13T00:14:50.845522284+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=oZyVI
2025-12-13T00:14:50.845623337+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oZyVI
2025-12-13T00:14:50.845623337+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oZyVI
2025-12-13T00:14:50.925592129+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ufK7E
2025-12-13T00:14:50.925592129+00:00 stderr F time="2025-12-13T00:14:50Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ufK7E
2025-12-13T00:14:51.045321710+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9RklH
2025-12-13T00:14:51.045529987+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=9RklH
2025-12-13T00:14:51.045587998+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=9RklH
2025-12-13T00:14:51.045689972+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9RklH
2025-12-13T00:14:51.244739974+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ufK7E
2025-12-13T00:14:51.244912619+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=ufK7E
2025-12-13T00:14:51.244977781+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=ufK7E
2025-12-13T00:14:51.245021903+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=ufK7E
2025-12-13T00:14:51.245053614+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ufK7E
2025-12-13T00:14:51.245096246+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ufK7E
2025-12-13T00:14:51.644309325+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ufK7E
2025-12-13T00:14:51.644501162+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=ufK7E
2025-12-13T00:14:51.644534233+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=ufK7E
2025-12-13T00:14:51.644634956+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ufK7E
2025-12-13T00:14:51.644661687+00:00 stderr F time="2025-12-13T00:14:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ufK7E
2025-12-13T00:14:53.032665100+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OvSWh
2025-12-13T00:14:53.032665100+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OvSWh
2025-12-13T00:14:53.034790237+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OvSWh
2025-12-13T00:14:53.034909311+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=OvSWh
2025-12-13T00:14:53.034909311+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=OvSWh
2025-12-13T00:14:53.034921622+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=OvSWh
2025-12-13T00:14:53.034973763+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OvSWh
2025-12-13T00:14:53.035045516+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OvSWh
2025-12-13T00:14:53.041965069+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OvSWh
2025-12-13T00:14:53.041965069+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=OvSWh 2025-12-13T00:14:53.041965069+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=OvSWh 2025-12-13T00:14:53.042133734+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OvSWh 2025-12-13T00:14:53.042133734+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OvSWh 2025-12-13T00:14:53.043765627+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uHvbe 2025-12-13T00:14:53.043765627+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uHvbe 2025-12-13T00:14:53.048968653+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uHvbe 2025-12-13T00:14:53.048968653+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=uHvbe 
2025-12-13T00:14:53.048968653+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=uHvbe 2025-12-13T00:14:53.048968653+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=uHvbe 2025-12-13T00:14:53.048968653+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uHvbe 2025-12-13T00:14:53.048968653+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uHvbe 2025-12-13T00:14:53.054348556+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uHvbe 2025-12-13T00:14:53.054475231+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=uHvbe 2025-12-13T00:14:53.054475231+00:00 stderr F time="2025-12-13T00:14:53Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=uHvbe 2025-12-13T00:14:53.054519982+00:00 stderr F 
time="2025-12-13T00:14:53Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uHvbe 2025-12-13T00:14:54.336444423+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZXEZA 2025-12-13T00:14:54.336521105+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZXEZA 2025-12-13T00:14:54.341202346+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZXEZA 2025-12-13T00:14:54.341202346+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=ZXEZA 2025-12-13T00:14:54.341202346+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=ZXEZA 2025-12-13T00:14:54.341202346+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ZXEZA 2025-12-13T00:14:54.341202346+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZXEZA 
2025-12-13T00:14:54.341202346+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZXEZA 2025-12-13T00:14:54.346906930+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZXEZA 2025-12-13T00:14:54.347086876+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=ZXEZA 2025-12-13T00:14:54.347121897+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=ZXEZA 2025-12-13T00:14:54.347196239+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZXEZA 2025-12-13T00:14:54.355019081+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=58BQx 2025-12-13T00:14:54.355019081+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=58BQx 2025-12-13T00:14:54.356244340+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=58BQx 2025-12-13T00:14:54.356332463+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=58BQx 2025-12-13T00:14:54.356332463+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=58BQx 2025-12-13T00:14:54.356347973+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=58BQx 2025-12-13T00:14:54.356347973+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=58BQx 2025-12-13T00:14:54.356355473+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=58BQx 2025-12-13T00:14:54.364382172+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=58BQx 2025-12-13T00:14:54.364556787+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=58BQx 2025-12-13T00:14:54.364556787+00:00 stderr F 
time="2025-12-13T00:14:54Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=58BQx 2025-12-13T00:14:54.364667101+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=58BQx 2025-12-13T00:14:54.364667101+00:00 stderr F time="2025-12-13T00:14:54Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=58BQx 2025-12-13T00:14:56.354207220+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FL1+V 2025-12-13T00:14:56.354207220+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FL1+V 2025-12-13T00:14:56.359486980+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FL1+V 2025-12-13T00:14:56.359605394+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=FL1+V 2025-12-13T00:14:56.359605394+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=FL1+V 2025-12-13T00:14:56.359620904+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=FL1+V 2025-12-13T00:14:56.359620904+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FL1+V 2025-12-13T00:14:56.359646295+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FL1+V 2025-12-13T00:14:56.366456845+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FL1+V 2025-12-13T00:14:56.366543628+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=FL1+V 2025-12-13T00:14:56.366543628+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=FL1+V 2025-12-13T00:14:56.366672482+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FL1+V 2025-12-13T00:14:56.366672482+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="requeuing registry server sync 
based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FL1+V 2025-12-13T00:14:56.398106423+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u9ZNa 2025-12-13T00:14:56.398106423+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u9ZNa 2025-12-13T00:14:56.398971950+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u9ZNa 2025-12-13T00:14:56.398971950+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=u9ZNa 2025-12-13T00:14:56.398971950+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=u9ZNa 2025-12-13T00:14:56.398971950+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=u9ZNa 2025-12-13T00:14:56.398971950+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u9ZNa 2025-12-13T00:14:56.398971950+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="ensuring 
registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u9ZNa 2025-12-13T00:14:56.410521371+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u9ZNa 2025-12-13T00:14:56.410634875+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=u9ZNa 2025-12-13T00:14:56.459488237+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=u9ZNa 2025-12-13T00:14:56.459694993+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u9ZNa 2025-12-13T00:14:56.459694993+00:00 stderr F time="2025-12-13T00:14:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u9ZNa 2025-12-13T00:15:00.924251368+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IWQH8 2025-12-13T00:15:00.924251368+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IWQH8 2025-12-13T00:15:00.945855932+00:00 stderr F time="2025-12-13T00:15:00Z" level=info 
msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IWQH8 2025-12-13T00:15:00.945855932+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=IWQH8 2025-12-13T00:15:00.945855932+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=IWQH8 2025-12-13T00:15:00.945855932+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=IWQH8 2025-12-13T00:15:00.945855932+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IWQH8 2025-12-13T00:15:00.945855932+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IWQH8 2025-12-13T00:15:00.945855932+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IWQH8 2025-12-13T00:15:00.945855932+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 
current-pod.namespace=openshift-marketplace id=IWQH8 2025-12-13T00:15:00.945855932+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=IWQH8 2025-12-13T00:15:00.945855932+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IWQH8 2025-12-13T00:15:00.945855932+00:00 stderr F time="2025-12-13T00:15:00Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IWQH8 2025-12-13T00:15:04.898568073+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F9g/A 2025-12-13T00:15:04.898568073+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F9g/A 2025-12-13T00:15:04.900827776+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F9g/A 2025-12-13T00:15:04.900942240+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=F9g/A 2025-12-13T00:15:04.900954280+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and 
matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=F9g/A 2025-12-13T00:15:04.900966371+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=F9g/A 2025-12-13T00:15:04.900975571+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F9g/A 2025-12-13T00:15:04.900975571+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F9g/A 2025-12-13T00:15:04.905553079+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F9g/A 2025-12-13T00:15:04.905626581+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=F9g/A 2025-12-13T00:15:04.905626581+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=F9g/A 2025-12-13T00:15:04.905727824+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F9g/A 
2025-12-13T00:15:04.905727824+00:00 stderr F time="2025-12-13T00:15:04Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F9g/A 2025-12-13T00:15:05.518928917+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=W1H4d 2025-12-13T00:15:05.518928917+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=W1H4d 2025-12-13T00:15:05.520315581+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=W1H4d 2025-12-13T00:15:05.520446495+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=W1H4d 2025-12-13T00:15:05.520464396+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=W1H4d 2025-12-13T00:15:05.520487066+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=W1H4d 2025-12-13T00:15:05.520494366+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=W1H4d 
2025-12-13T00:15:05.520502947+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=W1H4d 2025-12-13T00:15:05.527232694+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=W1H4d 2025-12-13T00:15:05.527344827+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=W1H4d 2025-12-13T00:15:05.527344827+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=W1H4d 2025-12-13T00:15:05.527431930+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=W1H4d 2025-12-13T00:15:05.527431930+00:00 stderr F time="2025-12-13T00:15:05Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=W1H4d 2025-12-13T00:15:11.435576438+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UVzFG 2025-12-13T00:15:11.435576438+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UVzFG 
2025-12-13T00:15:11.441110502+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UVzFG 2025-12-13T00:15:11.441146073+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=UVzFG 2025-12-13T00:15:11.441146073+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=UVzFG 2025-12-13T00:15:11.441146073+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=UVzFG 2025-12-13T00:15:11.441146073+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UVzFG 2025-12-13T00:15:11.441180414+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UVzFG 2025-12-13T00:15:11.449926062+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UVzFG 2025-12-13T00:15:11.454316832+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=UVzFG 2025-12-13T00:15:11.454316832+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=UVzFG 2025-12-13T00:15:11.454316832+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UVzFG 2025-12-13T00:15:11.454316832+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=UVzFG 2025-12-13T00:15:11.535196136+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=READY" 2025-12-13T00:15:11.535196136+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=C4lVy 2025-12-13T00:15:11.535196136+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=C4lVy 2025-12-13T00:15:11.535310839+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="resolving sources" id=MC9J7 namespace=openshift-marketplace 2025-12-13T00:15:11.535310839+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="checking if subscriptions need update" id=MC9J7 namespace=openshift-marketplace 2025-12-13T00:15:11.537549006+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="No subscriptions were found in namespace 
openshift-marketplace" id=MC9J7 namespace=openshift-marketplace 2025-12-13T00:15:11.537713991+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=C4lVy 2025-12-13T00:15:11.537873695+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=C4lVy 2025-12-13T00:15:11.537882086+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=C4lVy 2025-12-13T00:15:11.537911626+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=C4lVy 2025-12-13T00:15:11.537911626+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=C4lVy 2025-12-13T00:15:11.537925887+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=C4lVy 2025-12-13T00:15:11.543335907+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=C4lVy 2025-12-13T00:15:11.543385938+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=C4lVy 2025-12-13T00:15:11.543385938+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=C4lVy 2025-12-13T00:15:11.543483461+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=C4lVy 2025-12-13T00:15:11.543483461+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=C4lVy 2025-12-13T00:15:11.549320984+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=UyD3D 2025-12-13T00:15:11.549320984+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=UyD3D 2025-12-13T00:15:11.554564159+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=UyD3D 2025-12-13T00:15:11.554564159+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=UyD3D 2025-12-13T00:15:11.554564159+00:00 stderr F 
time="2025-12-13T00:15:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=UyD3D 2025-12-13T00:15:11.554564159+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=UyD3D 2025-12-13T00:15:11.554564159+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=UyD3D 2025-12-13T00:15:11.554564159+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=UyD3D 2025-12-13T00:15:11.559560717+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=UyD3D 2025-12-13T00:15:11.559560717+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=UyD3D 2025-12-13T00:15:11.559560717+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-zg7cl current-pod.namespace=openshift-marketplace id=UyD3D 2025-12-13T00:15:11.559560717+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="ensured registry server" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=UyD3D 2025-12-13T00:15:11.559560717+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=UyD3D 2025-12-13T00:15:16.251114055+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=avXEO 2025-12-13T00:15:16.251114055+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=avXEO 2025-12-13T00:15:16.253885407+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=avXEO 2025-12-13T00:15:16.253971320+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=avXEO 2025-12-13T00:15:16.253971320+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=avXEO 2025-12-13T00:15:16.253971320+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=avXEO 2025-12-13T00:15:16.253971320+00:00 stderr F time="2025-12-13T00:15:16Z" level=info 
msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=avXEO 2025-12-13T00:15:16.253971320+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=avXEO 2025-12-13T00:15:16.263092810+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=avXEO 2025-12-13T00:15:16.263143801+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=avXEO 2025-12-13T00:15:16.263143801+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=avXEO 2025-12-13T00:15:16.263211673+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=avXEO 2025-12-13T00:15:16.272785657+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kV1X4 2025-12-13T00:15:16.272785657+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kV1X4 2025-12-13T00:15:16.278165995+00:00 stderr F 
time="2025-12-13T00:15:16Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kV1X4 2025-12-13T00:15:16.278165995+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=kV1X4 2025-12-13T00:15:16.278165995+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=kV1X4 2025-12-13T00:15:16.278165995+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=kV1X4 2025-12-13T00:15:16.278165995+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kV1X4 2025-12-13T00:15:16.278165995+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kV1X4 2025-12-13T00:15:16.282501114+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kV1X4 2025-12-13T00:15:16.282614788+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=kV1X4 2025-12-13T00:15:16.282614788+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=kV1X4 2025-12-13T00:15:16.282712880+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kV1X4 2025-12-13T00:15:16.523422125+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=sWyUU 2025-12-13T00:15:16.523422125+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=sWyUU 2025-12-13T00:15:16.526432864+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=sWyUU 2025-12-13T00:15:16.526691002+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=sWyUU 2025-12-13T00:15:16.526691002+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=sWyUU 2025-12-13T00:15:16.526691002+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=sWyUU 2025-12-13T00:15:16.526819586+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=sWyUU 2025-12-13T00:15:16.526819586+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=sWyUU 2025-12-13T00:15:16.535124621+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=sWyUU 2025-12-13T00:15:16.535255895+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=sWyUU 2025-12-13T00:15:16.535255895+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=sWyUU 2025-12-13T00:15:16.535478092+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=sWyUU 2025-12-13T00:15:16.535478092+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="requeuing registry server sync based on polling 
interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=sWyUU 2025-12-13T00:15:16.545636203+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=drGpV 2025-12-13T00:15:16.545636203+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=drGpV 2025-12-13T00:15:16.546980922+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=drGpV 2025-12-13T00:15:16.547080345+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=drGpV 2025-12-13T00:15:16.547080345+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=drGpV 2025-12-13T00:15:16.547105636+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=drGpV 2025-12-13T00:15:16.547105636+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=drGpV 2025-12-13T00:15:16.547105636+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="ensuring registry server" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=drGpV 2025-12-13T00:15:16.655467944+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=drGpV 2025-12-13T00:15:16.655727181+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=drGpV 2025-12-13T00:15:16.655727181+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=drGpV 2025-12-13T00:15:16.655907966+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=drGpV 2025-12-13T00:15:16.655907966+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=drGpV 2025-12-13T00:15:17.579766772+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Qknd9 2025-12-13T00:15:17.579816593+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Qknd9 2025-12-13T00:15:17.582527204+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="searching for current 
pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Qknd9 2025-12-13T00:15:17.582640087+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=Qknd9 2025-12-13T00:15:17.582640087+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=Qknd9 2025-12-13T00:15:17.582650877+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Qknd9 2025-12-13T00:15:17.582659357+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Qknd9 2025-12-13T00:15:17.582668008+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Qknd9 2025-12-13T00:15:17.591084867+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Qknd9 2025-12-13T00:15:17.591427097+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=Qknd9 
2025-12-13T00:15:17.591449457+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=Qknd9 2025-12-13T00:15:17.591872270+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Qknd9 2025-12-13T00:15:17.591872270+00:00 stderr F time="2025-12-13T00:15:17Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Qknd9 2025-12-13T00:15:20.066233919+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=READY" 2025-12-13T00:15:20.066288281+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eCIZv 2025-12-13T00:15:20.066288281+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eCIZv 2025-12-13T00:15:20.066421175+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="resolving sources" id=d7lSA namespace=openshift-marketplace 2025-12-13T00:15:20.066421175+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="checking if subscriptions need update" id=d7lSA namespace=openshift-marketplace 2025-12-13T00:15:20.069593148+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eCIZv 
2025-12-13T00:15:20.069733773+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=eCIZv 2025-12-13T00:15:20.069733773+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=eCIZv 2025-12-13T00:15:20.069774634+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=eCIZv 2025-12-13T00:15:20.069774634+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eCIZv 2025-12-13T00:15:20.069774634+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eCIZv 2025-12-13T00:15:20.071858066+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=d7lSA namespace=openshift-marketplace 2025-12-13T00:15:20.076779351+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eCIZv 2025-12-13T00:15:20.076837762+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=eCIZv 2025-12-13T00:15:20.076869073+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=eCIZv 2025-12-13T00:15:20.077020448+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eCIZv 2025-12-13T00:15:20.077020448+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eCIZv 2025-12-13T00:15:20.087691464+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Bza62 2025-12-13T00:15:20.087691464+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Bza62 2025-12-13T00:15:20.090395954+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Bza62 2025-12-13T00:15:20.090499547+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=Bza62 2025-12-13T00:15:20.090499547+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="of 1 pods 
matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=Bza62 2025-12-13T00:15:20.090499547+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Bza62 2025-12-13T00:15:20.090499547+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Bza62 2025-12-13T00:15:20.090522537+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Bza62 2025-12-13T00:15:20.096673120+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Bza62 2025-12-13T00:15:20.096725082+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=Bza62 2025-12-13T00:15:20.096725082+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-lcrg8 current-pod.namespace=openshift-marketplace id=Bza62 2025-12-13T00:15:20.096902837+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="ensured registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Bza62
2025-12-13T00:15:20.096902837+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Bza62
2025-12-13T00:15:31.531453547+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Pmzr2
2025-12-13T00:15:31.531453547+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Pmzr2
2025-12-13T00:15:31.537227808+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Pmzr2
2025-12-13T00:15:31.537396123+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=Pmzr2
2025-12-13T00:15:31.537396123+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=Pmzr2
2025-12-13T00:15:31.537415664+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Pmzr2
2025-12-13T00:15:31.537415664+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Pmzr2
2025-12-13T00:15:31.537415664+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Pmzr2
2025-12-13T00:15:31.543089642+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Pmzr2
2025-12-13T00:15:31.543347030+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=Pmzr2
2025-12-13T00:15:31.543410372+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=Pmzr2
2025-12-13T00:15:31.543549486+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Pmzr2
2025-12-13T00:15:31.551465430+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F4jH4
2025-12-13T00:15:31.551564414+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F4jH4
2025-12-13T00:15:31.554730527+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F4jH4
2025-12-13T00:15:31.554865591+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=F4jH4
2025-12-13T00:15:31.554865591+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=F4jH4
2025-12-13T00:15:31.554882062+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=F4jH4
2025-12-13T00:15:31.554882062+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F4jH4
2025-12-13T00:15:31.554894942+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F4jH4
2025-12-13T00:15:31.565048462+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F4jH4
2025-12-13T00:15:31.565174926+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=F4jH4
2025-12-13T00:15:31.565174926+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=F4jH4
2025-12-13T00:15:31.565283489+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=F4jH4
2025-12-13T00:15:36.574698744+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h/udm
2025-12-13T00:15:36.574801577+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h/udm
2025-12-13T00:15:36.582162974+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h/udm
2025-12-13T00:15:36.582314568+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=h/udm
2025-12-13T00:15:36.582314568+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=h/udm
2025-12-13T00:15:36.582353879+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=h/udm
2025-12-13T00:15:36.582372680+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h/udm
2025-12-13T00:15:36.582372680+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h/udm
2025-12-13T00:15:36.594203131+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h/udm
2025-12-13T00:15:36.594507450+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=h/udm
2025-12-13T00:15:36.594507450+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=h/udm
2025-12-13T00:15:36.594606433+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h/udm
2025-12-13T00:15:36.594670185+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+zXv7
2025-12-13T00:15:36.594670185+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+zXv7
2025-12-13T00:15:36.596717454+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+zXv7
2025-12-13T00:15:36.596828998+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=+zXv7
2025-12-13T00:15:36.596828998+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=+zXv7
2025-12-13T00:15:36.596828998+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=+zXv7
2025-12-13T00:15:36.596842368+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+zXv7
2025-12-13T00:15:36.596842368+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+zXv7
2025-12-13T00:15:36.605098223+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+zXv7
2025-12-13T00:15:36.605189785+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=+zXv7
2025-12-13T00:15:36.605189785+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=+zXv7
2025-12-13T00:15:36.605261398+00:00 stderr F time="2025-12-13T00:15:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+zXv7
2025-12-13T00:15:39.628768050+00:00 stderr F time="2025-12-13T00:15:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k+EcK
2025-12-13T00:15:39.628855983+00:00 stderr F time="2025-12-13T00:15:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k+EcK
2025-12-13T00:15:39.632035237+00:00 stderr F time="2025-12-13T00:15:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k+EcK
2025-12-13T00:15:39.632188892+00:00 stderr F time="2025-12-13T00:15:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=k+EcK
2025-12-13T00:15:39.632229143+00:00 stderr F time="2025-12-13T00:15:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=k+EcK
2025-12-13T00:15:39.632259814+00:00 stderr F time="2025-12-13T00:15:39Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=k+EcK
2025-12-13T00:15:39.632287234+00:00 stderr F time="2025-12-13T00:15:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k+EcK
2025-12-13T00:15:39.632318275+00:00 stderr F time="2025-12-13T00:15:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k+EcK
2025-12-13T00:15:39.640403735+00:00 stderr F time="2025-12-13T00:15:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k+EcK
2025-12-13T00:15:39.640595050+00:00 stderr F time="2025-12-13T00:15:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=k+EcK
2025-12-13T00:15:39.640595050+00:00 stderr F time="2025-12-13T00:15:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=k+EcK
2025-12-13T00:15:39.640717194+00:00 stderr F time="2025-12-13T00:15:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k+EcK
2025-12-13T00:15:43.892701859+00:00 stderr F time="2025-12-13T00:15:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8EVos
2025-12-13T00:15:43.892701859+00:00 stderr F time="2025-12-13T00:15:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8EVos
2025-12-13T00:15:43.900013135+00:00 stderr F time="2025-12-13T00:15:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8EVos
2025-12-13T00:15:43.900138820+00:00 stderr F time="2025-12-13T00:15:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=8EVos
2025-12-13T00:15:43.900138820+00:00 stderr F time="2025-12-13T00:15:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=8EVos
2025-12-13T00:15:43.900148900+00:00 stderr F time="2025-12-13T00:15:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=8EVos
2025-12-13T00:15:43.900156360+00:00 stderr F time="2025-12-13T00:15:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8EVos
2025-12-13T00:15:43.900163510+00:00 stderr F time="2025-12-13T00:15:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8EVos
2025-12-13T00:15:43.908750945+00:00 stderr F time="2025-12-13T00:15:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8EVos
2025-12-13T00:15:43.908879848+00:00 stderr F time="2025-12-13T00:15:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=8EVos
2025-12-13T00:15:43.908879848+00:00 stderr F time="2025-12-13T00:15:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=8EVos
2025-12-13T00:15:43.908945140+00:00 stderr F time="2025-12-13T00:15:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8EVos
2025-12-13T00:15:44.761220228+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZ77d
2025-12-13T00:15:44.761277150+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZ77d
2025-12-13T00:15:44.764247437+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZ77d
2025-12-13T00:15:44.764392162+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=yZ77d
2025-12-13T00:15:44.764392162+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=yZ77d
2025-12-13T00:15:44.764419312+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=yZ77d
2025-12-13T00:15:44.764429073+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZ77d
2025-12-13T00:15:44.764438523+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZ77d
2025-12-13T00:15:44.771464131+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZ77d
2025-12-13T00:15:44.771626356+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=yZ77d
2025-12-13T00:15:44.771639437+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=yZ77d
2025-12-13T00:15:44.778109158+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZ77d
2025-12-13T00:15:44.778109158+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZ77d
2025-12-13T00:15:44.779880541+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WCPXQ
2025-12-13T00:15:44.779916702+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WCPXQ
2025-12-13T00:15:44.783024753+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WCPXQ
2025-12-13T00:15:44.783149437+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=WCPXQ
2025-12-13T00:15:44.783149437+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=WCPXQ
2025-12-13T00:15:44.783149437+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=WCPXQ
2025-12-13T00:15:44.783165618+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WCPXQ
2025-12-13T00:15:44.783165618+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WCPXQ
2025-12-13T00:15:44.789237347+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WCPXQ
2025-12-13T00:15:44.789409542+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=WCPXQ
2025-12-13T00:15:44.789409542+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=WCPXQ
2025-12-13T00:15:44.792231646+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WCPXQ
2025-12-13T00:15:44.792231646+00:00 stderr F time="2025-12-13T00:15:44Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WCPXQ
2025-12-13T00:15:46.263875846+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3W1xF
2025-12-13T00:15:46.263875846+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3W1xF
2025-12-13T00:15:46.267684239+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3W1xF
2025-12-13T00:15:46.267814753+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=3W1xF
2025-12-13T00:15:46.267814753+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=3W1xF
2025-12-13T00:15:46.267814753+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=3W1xF
2025-12-13T00:15:46.267843694+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3W1xF
2025-12-13T00:15:46.267843694+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3W1xF
2025-12-13T00:15:46.279106597+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3W1xF
2025-12-13T00:15:46.279277452+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=3W1xF
2025-12-13T00:15:46.279277452+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=3W1xF
2025-12-13T00:15:46.283262140+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3W1xF
2025-12-13T00:15:46.283262140+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3W1xF
2025-12-13T00:15:46.760407103+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+B3FC
2025-12-13T00:15:46.760407103+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+B3FC
2025-12-13T00:15:46.762970208+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+B3FC
2025-12-13T00:15:46.762970208+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=+B3FC
2025-12-13T00:15:46.762970208+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=+B3FC
2025-12-13T00:15:46.762970208+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=+B3FC
2025-12-13T00:15:46.762970208+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+B3FC
2025-12-13T00:15:46.762970208+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+B3FC
2025-12-13T00:15:46.771603745+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+B3FC
2025-12-13T00:15:46.771828371+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=+B3FC
2025-12-13T00:15:46.771849402+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=+B3FC
2025-12-13T00:15:46.775991574+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+B3FC
2025-12-13T00:15:46.775991574+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+B3FC
2025-12-13T00:15:46.849319014+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GvJ8U
2025-12-13T00:15:46.849319014+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GvJ8U
2025-12-13T00:15:46.853243421+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GvJ8U
2025-12-13T00:15:46.853350274+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=GvJ8U
2025-12-13T00:15:46.853350274+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=GvJ8U
2025-12-13T00:15:46.853350274+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=GvJ8U
2025-12-13T00:15:46.853350274+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GvJ8U
2025-12-13T00:15:46.853350274+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GvJ8U
2025-12-13T00:15:46.860355041+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GvJ8U
2025-12-13T00:15:46.860837055+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=GvJ8U
2025-12-13T00:15:46.861068502+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=GvJ8U
2025-12-13T00:15:46.867157233+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GvJ8U
2025-12-13T00:15:46.867157233+00:00 stderr F time="2025-12-13T00:15:46Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GvJ8U
2025-12-13T00:15:47.015591597+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TBx+c
2025-12-13T00:15:47.015591597+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TBx+c
2025-12-13T00:15:47.019255815+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TBx+c
2025-12-13T00:15:47.019373358+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=TBx+c
2025-12-13T00:15:47.019373358+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=TBx+c
2025-12-13T00:15:47.019386329+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=TBx+c
2025-12-13T00:15:47.019397749+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TBx+c
2025-12-13T00:15:47.019397749+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TBx+c
2025-12-13T00:15:47.364757781+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TBx+c
2025-12-13T00:15:47.364757781+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=TBx+c
2025-12-13T00:15:47.364757781+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=TBx+c
2025-12-13T00:15:47.364757781+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=TBx+c
2025-12-13T00:15:47.676110406+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bKJD2
2025-12-13T00:15:47.676147987+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="synchronizing registry server"
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bKJD2 2025-12-13T00:15:47.679754535+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bKJD2 2025-12-13T00:15:47.679903619+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=bKJD2 2025-12-13T00:15:47.679903619+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=bKJD2 2025-12-13T00:15:47.679954041+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=bKJD2 2025-12-13T00:15:47.679978612+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bKJD2 2025-12-13T00:15:47.679989432+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bKJD2 2025-12-13T00:15:47.965663458+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bKJD2 2025-12-13T00:15:47.965866734+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=bKJD2 2025-12-13T00:15:47.965866734+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=bKJD2 2025-12-13T00:15:47.966141582+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bKJD2 2025-12-13T00:15:47.979026753+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gNbH2 2025-12-13T00:15:47.979113576+00:00 stderr F time="2025-12-13T00:15:47Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gNbH2 2025-12-13T00:15:48.165095080+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gNbH2 2025-12-13T00:15:48.165176783+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=gNbH2 2025-12-13T00:15:48.165176783+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=gNbH2 2025-12-13T00:15:48.165198834+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=gNbH2 2025-12-13T00:15:48.165198834+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gNbH2 2025-12-13T00:15:48.165218734+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gNbH2 2025-12-13T00:15:48.565488482+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gNbH2 2025-12-13T00:15:48.565559484+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=gNbH2 2025-12-13T00:15:48.565592475+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=gNbH2 2025-12-13T00:15:48.565696298+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gNbH2 
2025-12-13T00:15:48.565696298+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gNbH2 2025-12-13T00:15:48.565735299+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7/VMs 2025-12-13T00:15:48.565752100+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7/VMs 2025-12-13T00:15:48.767003437+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7/VMs 2025-12-13T00:15:48.767233573+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=7/VMs 2025-12-13T00:15:48.767278015+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=7/VMs 2025-12-13T00:15:48.767316816+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=7/VMs 2025-12-13T00:15:48.767347987+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="registry state good" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=7/VMs 2025-12-13T00:15:48.767376868+00:00 stderr F time="2025-12-13T00:15:48Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7/VMs 2025-12-13T00:15:49.165485542+00:00 stderr F time="2025-12-13T00:15:49Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7/VMs 2025-12-13T00:15:49.165967466+00:00 stderr F time="2025-12-13T00:15:49Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=7/VMs 2025-12-13T00:15:49.165967466+00:00 stderr F time="2025-12-13T00:15:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=7/VMs 2025-12-13T00:15:49.166161071+00:00 stderr F time="2025-12-13T00:15:49Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7/VMs 2025-12-13T00:15:49.166161071+00:00 stderr F time="2025-12-13T00:15:49Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=7/VMs 2025-12-13T00:16:12.027446427+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=READY" 2025-12-13T00:16:12.027591521+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="syncing catalog source" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=emlvW 2025-12-13T00:16:12.027668454+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=emlvW 2025-12-13T00:16:12.027744126+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="resolving sources" id=fhNkY namespace=openshift-marketplace 2025-12-13T00:16:12.027744126+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="checking if subscriptions need update" id=fhNkY namespace=openshift-marketplace 2025-12-13T00:16:12.036043502+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=fhNkY namespace=openshift-marketplace 2025-12-13T00:16:12.036170755+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=emlvW 2025-12-13T00:16:12.036286609+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=emlvW 2025-12-13T00:16:12.036286609+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=emlvW 2025-12-13T00:16:12.036298259+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=emlvW 2025-12-13T00:16:12.036307959+00:00 stderr F 
time="2025-12-13T00:16:12Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=emlvW 2025-12-13T00:16:12.036307959+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=emlvW 2025-12-13T00:16:12.044257315+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=emlvW 2025-12-13T00:16:12.044257315+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=emlvW 2025-12-13T00:16:12.044257315+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=emlvW 2025-12-13T00:16:12.044257315+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=emlvW 2025-12-13T00:16:12.044257315+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=emlvW 2025-12-13T00:16:12.050823859+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PqqUR 2025-12-13T00:16:12.050823859+00:00 stderr 
F time="2025-12-13T00:16:12Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PqqUR 2025-12-13T00:16:12.054059054+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PqqUR 2025-12-13T00:16:12.054147918+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=PqqUR 2025-12-13T00:16:12.054147918+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=PqqUR 2025-12-13T00:16:12.054147918+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=PqqUR 2025-12-13T00:16:12.054162708+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PqqUR 2025-12-13T00:16:12.054162708+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PqqUR 2025-12-13T00:16:12.062498875+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PqqUR 2025-12-13T00:16:12.062588058+00:00 stderr F 
time="2025-12-13T00:16:12Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=PqqUR 2025-12-13T00:16:12.062588058+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=PqqUR 2025-12-13T00:16:12.062701441+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PqqUR 2025-12-13T00:16:12.062701441+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PqqUR 2025-12-13T00:16:15.258635668+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=READY" 2025-12-13T00:16:15.258683599+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u52Km 2025-12-13T00:16:15.258683599+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u52Km 2025-12-13T00:16:15.258683599+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="resolving sources" id=dwOQe namespace=openshift-marketplace 2025-12-13T00:16:15.258709240+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="checking if subscriptions need 
update" id=dwOQe namespace=openshift-marketplace 2025-12-13T00:16:15.262076772+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=dwOQe namespace=openshift-marketplace 2025-12-13T00:16:15.262777621+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u52Km 2025-12-13T00:16:15.262890854+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=u52Km 2025-12-13T00:16:15.262890854+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=u52Km 2025-12-13T00:16:15.262903445+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=u52Km 2025-12-13T00:16:15.262903445+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u52Km 2025-12-13T00:16:15.262914955+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u52Km 2025-12-13T00:16:15.271225442+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=u52Km 2025-12-13T00:16:15.271225442+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=u52Km 2025-12-13T00:16:15.271225442+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=u52Km 2025-12-13T00:16:15.271225442+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u52Km 2025-12-13T00:16:15.271225442+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u52Km 2025-12-13T00:16:15.278467129+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kXX4J 2025-12-13T00:16:15.278467129+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kXX4J 2025-12-13T00:16:15.280144895+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kXX4J 2025-12-13T00:16:15.280257398+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=kXX4J 2025-12-13T00:16:15.280257398+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=kXX4J 2025-12-13T00:16:15.280269879+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=kXX4J 2025-12-13T00:16:15.280269879+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kXX4J 2025-12-13T00:16:15.280279329+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kXX4J 2025-12-13T00:16:15.288817762+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kXX4J 2025-12-13T00:16:15.288915335+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=kXX4J 2025-12-13T00:16:15.288915335+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=redhat-marketplace-nv4pl current-pod.namespace=openshift-marketplace id=kXX4J 2025-12-13T00:16:15.289079329+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kXX4J 2025-12-13T00:16:15.289079329+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=kXX4J 2025-12-13T00:16:17.365128850+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZZoIY 2025-12-13T00:16:17.365128850+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZZoIY 2025-12-13T00:16:17.367169236+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZZoIY 2025-12-13T00:16:17.367255328+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=ZZoIY 2025-12-13T00:16:17.367255328+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=ZZoIY 2025-12-13T00:16:17.367291279+00:00 stderr F 
time="2025-12-13T00:16:17Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ZZoIY 2025-12-13T00:16:17.367291279+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZZoIY 2025-12-13T00:16:17.367291279+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZZoIY 2025-12-13T00:16:17.375653028+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZZoIY 2025-12-13T00:16:17.375864293+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=ZZoIY 2025-12-13T00:16:17.375874784+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=ZZoIY 2025-12-13T00:16:17.376141041+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZZoIY 2025-12-13T00:16:17.376141041+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZZoIY 
2025-12-13T00:16:17.383855282+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCTg3 2025-12-13T00:16:17.383855282+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCTg3 2025-12-13T00:16:17.385221479+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCTg3 2025-12-13T00:16:17.385342263+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=nCTg3 2025-12-13T00:16:17.385360173+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=nCTg3 2025-12-13T00:16:17.385360173+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=nCTg3 2025-12-13T00:16:17.385368113+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCTg3 2025-12-13T00:16:17.385375454+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCTg3 
2025-12-13T00:16:17.393894936+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCTg3
2025-12-13T00:16:17.393999579+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=nCTg3
2025-12-13T00:16:17.393999579+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-s2hxn current-pod.namespace=openshift-marketplace id=nCTg3
2025-12-13T00:16:17.394105102+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCTg3
2025-12-13T00:16:17.394105102+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCTg3
2025-12-13T00:19:40.240059835+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="resolving sources" id=YKkX1 namespace=openstack
2025-12-13T00:19:40.240059835+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="checking if subscriptions need update" id=YKkX1 namespace=openstack
2025-12-13T00:19:40.245278479+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="No subscriptions were found in namespace openstack" id=YKkX1 namespace=openstack
2025-12-13T00:19:40.259611334+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="resolving sources" id=nwvRS namespace=openstack
2025-12-13T00:19:40.259611334+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="checking if subscriptions need update" id=nwvRS namespace=openstack
2025-12-13T00:19:40.267293806+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="No subscriptions were found in namespace openstack" id=nwvRS namespace=openstack
2025-12-13T00:19:40.291581095+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="resolving sources" id=K83Eg namespace=openstack
2025-12-13T00:19:40.291581095+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="checking if subscriptions need update" id=K83Eg namespace=openstack
2025-12-13T00:19:40.295025440+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="No subscriptions were found in namespace openstack" id=K83Eg namespace=openstack
2025-12-13T00:19:40.998602822+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="resolving sources" id=YPGe0 namespace=openstack-operators
2025-12-13T00:19:40.998602822+00:00 stderr F time="2025-12-13T00:19:40Z" level=info msg="checking if subscriptions need update" id=YPGe0 namespace=openstack-operators
2025-12-13T00:19:41.006099648+00:00 stderr F time="2025-12-13T00:19:41Z" level=info msg="No subscriptions were found in namespace openstack-operators" id=YPGe0 namespace=openstack-operators
2025-12-13T00:19:41.012323570+00:00 stderr F time="2025-12-13T00:19:41Z" level=info msg="resolving sources" id=e4v4/ namespace=openstack-operators
2025-12-13T00:19:41.012323570+00:00 stderr F time="2025-12-13T00:19:41Z" level=info msg="checking if subscriptions need update" id=e4v4/ namespace=openstack-operators
2025-12-13T00:19:41.016998628+00:00 stderr F time="2025-12-13T00:19:41Z" level=info msg="No subscriptions were found in namespace openstack-operators" id=e4v4/ namespace=openstack-operators
2025-12-13T00:19:41.018866140+00:00 stderr F time="2025-12-13T00:19:41Z" level=info msg="resolving sources" id=+SzTs namespace=openstack-operators
2025-12-13T00:19:41.018866140+00:00 stderr F time="2025-12-13T00:19:41Z" level=info msg="checking if subscriptions need update" id=+SzTs namespace=openstack-operators
2025-12-13T00:19:41.021048491+00:00 stderr F time="2025-12-13T00:19:41Z" level=info msg="No subscriptions were found in namespace openstack-operators" id=+SzTs namespace=openstack-operators
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator/0.log
2025-08-13T19:59:14.867388586+00:00 stderr F time="2025-08-13T19:59:14Z" level=info msg="log level info"
2025-08-13T19:59:14.944315019+00:00 stderr F time="2025-08-13T19:59:14Z" level=info msg="TLS keys set, using https for metrics"
2025-08-13T19:59:15.055645443+00:00 stderr F W0813 19:59:15.055574       1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-08-13T19:59:15.250577380+00:00 stderr F time="2025-08-13T19:59:15Z" level=info msg="Using in-cluster kube client config"
2025-08-13T19:59:16.004044899+00:00 stderr F time="2025-08-13T19:59:15Z" level=info msg="Using in-cluster kube client config"
2025-08-13T19:59:16.313683285+00:00 stderr F W0813 19:59:16.310301       1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-08-13T19:59:19.740591563+00:00 stderr F time="2025-08-13T19:59:19Z" level=info msg="skipping irrelevant gvr" gvr="apps/v1, Resource=deployments"
2025-08-13T19:59:19.790674340+00:00 stderr F time="2025-08-13T19:59:19Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=clusterroles"
2025-08-13T19:59:19.801225861+00:00 stderr F time="2025-08-13T19:59:19Z" level=info msg="skipping irrelevant gvr" gvr="rbac.authorization.k8s.io/v1, Resource=clusterrolebindings"
2025-08-13T19:59:22.002547450+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="detected ability to filter informers" canFilter=true
2025-08-13T19:59:22.214694658+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="registering owner reference fixer" gvr="/v1, Resource=services"
2025-08-13T19:59:22.327504744+00:00 stderr F W0813 19:59:22.326666       1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2025-08-13T19:59:22.425442905+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3"
2025-08-13T19:59:22.425657291+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="operator ready"
2025-08-13T19:59:22.425657291+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="starting informers..."
2025-08-13T19:59:22.513393653+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="informers started"
2025-08-13T19:59:22.524531920+00:00 stderr F time="2025-08-13T19:59:22Z" level=info msg="waiting for caches to sync..."
2025-08-13T19:59:23.026938681+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="starting workers..."
2025-08-13T19:59:23.041385643+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA
2025-08-13T19:59:23.041385643+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA
2025-08-13T19:59:23.041385643+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE
2025-08-13T19:59:23.041385643+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE
2025-08-13T19:59:23.065088729+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=1tszK namespace=default
2025-08-13T19:59:23.065088729+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=1tszK namespace=default
2025-08-13T19:59:23.065088729+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=bkTke namespace=hostpath-provisioner
2025-08-13T19:59:23.065088729+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=bkTke namespace=hostpath-provisioner
2025-08-13T19:59:23.096940857+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE
2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA
2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="connection established. cluster-version: v1.29.5+29c95f3"
2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="operator ready"
2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="starting informers..."
2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="informers started"
2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="waiting for caches to sync..."
2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=bkTke namespace=hostpath-provisioner
2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=SfIzQ namespace=kube-node-lease
2025-08-13T19:59:23.149491875+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=SfIzQ namespace=kube-node-lease
2025-08-13T19:59:23.284758571+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="starting workers..."
2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=SfIzQ namespace=kube-node-lease
2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace default" id=1tszK namespace=default
2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=Nm0y1 namespace=kube-public
2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=Nm0y1 namespace=kube-public
2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=Fd4f+ namespace=kube-system
2025-08-13T19:59:23.453691866+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=Fd4f+ namespace=kube-system
2025-08-13T19:59:23.507687285+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace kube-public" id=Nm0y1 namespace=kube-public
2025-08-13T19:59:23.507687285+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=IcxZR namespace=openshift
2025-08-13T19:59:23.507687285+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=IcxZR namespace=openshift
2025-08-13T19:59:23.507687285+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace kube-system" id=Fd4f+ namespace=kube-system
2025-08-13T19:59:23.516924149+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=2zAJE
2025-08-13T19:59:23.523150956+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=NduNP namespace=openshift-apiserver
2025-08-13T19:59:23.523150956+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=NduNP namespace=openshift-apiserver
2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=L1+GA
2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=L1+GA
2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=L1+GA
2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA
2025-08-13T19:59:23.529682902+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA
2025-08-13T19:59:23.530038362+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=2zAJE
2025-08-13T19:59:23.530123125+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=2zAJE
2025-08-13T19:59:23.530157036+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE
2025-08-13T19:59:23.530187557+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE
2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=NduNP namespace=openshift-apiserver
2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=vgUXD namespace=openshift-apiserver-operator
2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=vgUXD namespace=openshift-apiserver-operator
2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="No subscriptions were found in namespace openshift" id=IcxZR namespace=openshift
2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="resolving sources" id=A0rZ7 namespace=openshift-authentication
2025-08-13T19:59:23.912587797+00:00 stderr F time="2025-08-13T19:59:23Z" level=info msg="checking if subscriptions need update" id=A0rZ7 namespace=openshift-authentication
2025-08-13T19:59:24.726412565+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=A0rZ7 namespace=openshift-authentication
2025-08-13T19:59:24.726522058+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="resolving sources" id=N9wn5 namespace=openshift-authentication-operator
2025-08-13T19:59:24.726556649+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="checking if subscriptions need update" id=N9wn5 namespace=openshift-authentication-operator
2025-08-13T19:59:24.861561787+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=vgUXD namespace=openshift-apiserver-operator
2025-08-13T19:59:24.861561787+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="resolving sources" id=NVR/s namespace=openshift-cloud-network-config-controller
2025-08-13T19:59:24.861561787+00:00 stderr F time="2025-08-13T19:59:24Z" level=info msg="checking if subscriptions need update" id=NVR/s namespace=openshift-cloud-network-config-controller
2025-08-13T19:59:25.290710720+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=N9wn5 namespace=openshift-authentication-operator
2025-08-13T19:59:25.290710720+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="resolving sources" id=gBgIJ namespace=openshift-cloud-platform-infra
2025-08-13T19:59:25.290710720+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="checking if subscriptions need update" id=gBgIJ namespace=openshift-cloud-platform-infra
2025-08-13T19:59:25.792509454+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=NVR/s namespace=openshift-cloud-network-config-controller
2025-08-13T19:59:25.792509454+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="resolving sources" id=rCCoh namespace=openshift-cluster-machine-approver
2025-08-13T19:59:25.792509454+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="checking if subscriptions need update" id=rCCoh namespace=openshift-cluster-machine-approver
2025-08-13T19:59:25.805455853+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE
2025-08-13T19:59:25.805642469+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=2zAJE
2025-08-13T19:59:25.805682480+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=2zAJE
2025-08-13T19:59:25.805870025+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=2zAJE
2025-08-13T19:59:25.807135631+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA
2025-08-13T19:59:25.807258165+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=L1+GA
2025-08-13T19:59:25.807296346+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=L1+GA
2025-08-13T19:59:25.807404539+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=L1+GA
2025-08-13T19:59:25.807543523+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=gBgIJ namespace=openshift-cloud-platform-infra
2025-08-13T19:59:25.807607975+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="resolving sources" id=wU+NU namespace=openshift-cluster-samples-operator
2025-08-13T19:59:25.807646726+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="checking if subscriptions need update" id=wU+NU namespace=openshift-cluster-samples-operator
2025-08-13T19:59:25.818163626+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=CONNECTING"
2025-08-13T19:59:25.822749276+00:00 stderr F time="2025-08-13T19:59:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=CONNECTING"
2025-08-13T19:59:26.200523475+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=TRANSIENT_FAILURE"
2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ
2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ
2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=rCCoh namespace=openshift-cluster-machine-approver
2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="resolving sources" id=P7R3s namespace=openshift-cluster-storage-operator
2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checking if subscriptions need update" id=P7R3s namespace=openshift-cluster-storage-operator
2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=wU+NU namespace=openshift-cluster-samples-operator
2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="resolving sources" id=iF9KB namespace=openshift-cluster-version
2025-08-13T19:59:26.267584197+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checking if subscriptions need update" id=iF9KB namespace=openshift-cluster-version
2025-08-13T19:59:26.315502542+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=TRANSIENT_FAILURE"
2025-08-13T19:59:26.323310225+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi
2025-08-13T19:59:26.323437499+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi
2025-08-13T19:59:26.630627235+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ
2025-08-13T19:59:26.631473109+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yW7NQ
2025-08-13T19:59:26.631561792+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yW7NQ
2025-08-13T19:59:26.631884371+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=yW7NQ
2025-08-13T19:59:26.631966313+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ
2025-08-13T19:59:26.632391166+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ
2025-08-13T19:59:26.633056135+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=iF9KB namespace=openshift-cluster-version
2025-08-13T19:59:26.633345513+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="resolving sources" id=/0z19 namespace=openshift-config
2025-08-13T19:59:26.673319412+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checking if subscriptions need update" id=/0z19 namespace=openshift-config
2025-08-13T19:59:26.707423114+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=P7R3s namespace=openshift-cluster-storage-operator
2025-08-13T19:59:26.707529307+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="resolving sources" id=8nG21 namespace=openshift-config-managed
2025-08-13T19:59:26.707577859+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checking if subscriptions need update" id=8nG21 namespace=openshift-config-managed
2025-08-13T19:59:26.740619411+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi
2025-08-13T19:59:26.744728288+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U8uxi
2025-08-13T19:59:26.744964494+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U8uxi
2025-08-13T19:59:26.745052047+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=U8uxi
2025-08-13T19:59:26.745615003+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi
2025-08-13T19:59:26.745987864+00:00 stderr F time="2025-08-13T19:59:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi
2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=8nG21 namespace=openshift-config-managed
2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=Wm9lv namespace=openshift-config-operator
2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=Wm9lv namespace=openshift-config-operator
2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-config" id=/0z19 namespace=openshift-config
2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=9Gh0e namespace=openshift-console
2025-08-13T19:59:27.068613230+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=9Gh0e namespace=openshift-console
2025-08-13T19:59:27.126553402+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ
2025-08-13T19:59:27.126753548+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yW7NQ
2025-08-13T19:59:27.127020695+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yW7NQ
2025-08-13T19:59:27.127736336+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yW7NQ
2025-08-13T19:59:27.167088657+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=CONNECTING"
2025-08-13T19:59:27.264241327+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-console" id=9Gh0e namespace=openshift-console
2025-08-13T19:59:27.264353730+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=T/C3j namespace=openshift-console-operator
2025-08-13T19:59:27.264394321+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=T/C3j namespace=openshift-console-operator
2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi
2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U8uxi
2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=U8uxi
2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U8uxi
2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=Wm9lv namespace=openshift-config-operator
2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=4m9Zr namespace=openshift-console-user-settings
2025-08-13T19:59:27.340527411+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=4m9Zr namespace=openshift-console-user-settings
2025-08-13T19:59:27.341884690+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=CONNECTING"
2025-08-13T19:59:27.496339343+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=T/C3j namespace=openshift-console-operator
2025-08-13T19:59:27.496339343+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=8fXaO namespace=openshift-controller-manager
2025-08-13T19:59:27.496339343+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=8fXaO namespace=openshift-controller-manager
2025-08-13T19:59:27.496996842+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3
2025-08-13T19:59:27.497136946+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3
2025-08-13T19:59:27.737028164+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=4m9Zr namespace=openshift-console-user-settings
2025-08-13T19:59:27.737266901+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=chpL+ namespace=openshift-controller-manager-operator
2025-08-13T19:59:27.737314662+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=chpL+ namespace=openshift-controller-manager-operator
2025-08-13T19:59:27.737550989+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2
2025-08-13T19:59:27.737615391+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2
2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3
2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/+To3
2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/+To3
2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=/+To3
2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3
2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="ensuring registry server"
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=8fXaO namespace=openshift-controller-manager 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="resolving sources" id=sb0C1 namespace=openshift-dns 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="checking if subscriptions need update" id=sb0C1 namespace=openshift-dns 2025-08-13T19:59:27.823124308+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=TRANSIENT_FAILURE" 2025-08-13T19:59:27.843101857+00:00 stderr F time="2025-08-13T19:59:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=TRANSIENT_FAILURE" 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" 
level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.021570865+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.042563063+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=chpL+ namespace=openshift-controller-manager-operator 2025-08-13T19:59:28.042563063+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=Lw1uN namespace=openshift-dns-operator 2025-08-13T19:59:28.042563063+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=Lw1uN namespace=openshift-dns-operator 2025-08-13T19:59:28.042618355+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=sb0C1 namespace=openshift-dns 2025-08-13T19:59:28.042629975+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=rjPqu namespace=openshift-etcd 2025-08-13T19:59:28.042934364+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=rjPqu namespace=openshift-etcd 2025-08-13T19:59:28.176819719+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=rjPqu namespace=openshift-etcd 2025-08-13T19:59:28.176819719+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=gVT6l namespace=openshift-etcd-operator 2025-08-13T19:59:28.176819719+00:00 stderr F time="2025-08-13T19:59:28Z" 
level=info msg="checking if subscriptions need update" id=gVT6l namespace=openshift-etcd-operator 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=Lw1uN namespace=openshift-dns-operator 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=u7Ka4 namespace=openshift-host-network 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=u7Ka4 namespace=openshift-host-network 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=gVT6l namespace=openshift-etcd-operator 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=nujbR namespace=openshift-image-registry 2025-08-13T19:59:28.204133568+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=nujbR namespace=openshift-image-registry 2025-08-13T19:59:28.205114356+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:28.205114356+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:28.205114356+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:28.205114356+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=/+To3 2025-08-13T19:59:28.340480004+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=u7Ka4 namespace=openshift-host-network 2025-08-13T19:59:28.340734442+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=r366e namespace=openshift-infra 2025-08-13T19:59:28.340734442+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=r366e namespace=openshift-infra 2025-08-13T19:59:28.341944786+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.348934225+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.352167718+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.353058673+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=nujbR namespace=openshift-image-registry 
2025-08-13T19:59:28.353058673+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=d/l/x namespace=openshift-ingress 2025-08-13T19:59:28.353058673+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=d/l/x namespace=openshift-ingress 2025-08-13T19:59:28.443163431+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8p0k2 2025-08-13T19:59:28.623900403+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.623900403+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.732500659+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=r366e namespace=openshift-infra 2025-08-13T19:59:28.732500659+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="resolving sources" id=Ydart namespace=openshift-ingress-canary 2025-08-13T19:59:28.732500659+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checking if subscriptions need update" id=Ydart namespace=openshift-ingress-canary 2025-08-13T19:59:28.766033965+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:28.769268027+00:00 stderr F time="2025-08-13T19:59:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.329208979+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=d/l/x namespace=openshift-ingress 2025-08-13T19:59:29.329330142+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="resolving sources" id=yu8bW namespace=openshift-ingress-operator 2025-08-13T19:59:29.329376943+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="checking if subscriptions need update" id=yu8bW namespace=openshift-ingress-operator 2025-08-13T19:59:29.329596840+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.330245548+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.330298460+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.330675310+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=aF0pv 2025-08-13T19:59:29.398870494+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:29.399252315+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:29.428178660+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=Ydart namespace=openshift-ingress-canary 2025-08-13T19:59:29.428306033+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="resolving sources" id=00kih namespace=openshift-kni-infra 2025-08-13T19:59:29.428348624+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="checking if subscriptions need update" id=00kih namespace=openshift-kni-infra 2025-08-13T19:59:29.551404102+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=yu8bW namespace=openshift-ingress-operator 2025-08-13T19:59:29.551521896+00:00 stderr F time="2025-08-13T19:59:29Z" level=info 
msg="resolving sources" id=dXsgs namespace=openshift-kube-apiserver 2025-08-13T19:59:29.551563397+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="checking if subscriptions need update" id=dXsgs namespace=openshift-kube-apiserver 2025-08-13T19:59:29.618814094+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=00kih namespace=openshift-kni-infra 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="resolving sources" id=A23fX namespace=openshift-kube-apiserver-operator 2025-08-13T19:59:29.912166496+00:00 stderr F time="2025-08-13T19:59:29Z" level=info msg="checking if subscriptions need update" id=A23fX namespace=openshift-kube-apiserver-operator 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F 
time="2025-08-13T19:59:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.291718285+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=A23fX namespace=openshift-kube-apiserver-operator 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="resolving sources" id=sRyvf namespace=openshift-kube-controller-manager 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="checking if subscriptions need update" id=sRyvf namespace=openshift-kube-controller-manager 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=dXsgs namespace=openshift-kube-apiserver 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info msg="resolving sources" id=/FT8f namespace=openshift-kube-controller-manager-operator 2025-08-13T19:59:30.588336350+00:00 stderr F time="2025-08-13T19:59:30Z" level=info 
msg="checking if subscriptions need update" id=/FT8f namespace=openshift-kube-controller-manager-operator 2025-08-13T19:59:31.003420522+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=/FT8f namespace=openshift-kube-controller-manager-operator 2025-08-13T19:59:31.004217255+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="resolving sources" id=vQOBl namespace=openshift-kube-scheduler 2025-08-13T19:59:31.004274696+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checking if subscriptions need update" id=vQOBl namespace=openshift-kube-scheduler 2025-08-13T19:59:31.141924400+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=sRyvf namespace=openshift-kube-controller-manager 2025-08-13T19:59:31.142061084+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="resolving sources" id=DZQfB namespace=openshift-kube-scheduler-operator 2025-08-13T19:59:31.142096805+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checking if subscriptions need update" id=DZQfB namespace=openshift-kube-scheduler-operator 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=vQOBl namespace=openshift-kube-scheduler 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="resolving sources" id=AE+gp namespace=openshift-monitoring 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checking if subscriptions need update" id=AE+gp namespace=openshift-monitoring 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=DZQfB namespace=openshift-kube-scheduler-operator 2025-08-13T19:59:31.355001094+00:00 
stderr F time="2025-08-13T19:59:31Z" level=info msg="resolving sources" id=4dfz6 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:31.355001094+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checking if subscriptions need update" id=4dfz6 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:31.390212098+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:31.421724486+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:31.421724486+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:31.421724486+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=0kX61 2025-08-13T19:59:31.422030935+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.422030935+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" 
level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:31.624880807+00:00 stderr F time="2025-08-13T19:59:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:32.391256522+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=4dfz6 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:32.584702026+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=AE+gp namespace=openshift-monitoring 2025-08-13T19:59:32.585387936+00:00 stderr F time="2025-08-13T19:59:32Z" level=info 
msg="resolving sources" id=OIaZW namespace=openshift-kube-storage-version-migrator 2025-08-13T19:59:32.585524430+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=OIaZW namespace=openshift-kube-storage-version-migrator 2025-08-13T19:59:32.593423435+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=CW6fM namespace=openshift-operators 2025-08-13T19:59:32.669432191+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:32.669432191+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:32.669432191+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=n+zaX 2025-08-13T19:59:32.669492833+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:32.669492833+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:32.669542154+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=OIaZW namespace=openshift-kube-storage-version-migrator 
2025-08-13T19:59:32.669727270+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=5HI0X namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T19:59:32.669768551+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=5HI0X namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T19:59:32.714419844+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=5HI0X namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T19:59:32.715084313+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=Tvm/D namespace=openshift-machine-api 2025-08-13T19:59:32.715348650+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=Tvm/D namespace=openshift-machine-api 2025-08-13T19:59:32.792942022+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=CW6fM namespace=openshift-operators 2025-08-13T19:59:32.952874511+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=Tvm/D namespace=openshift-machine-api 2025-08-13T19:59:32.953173450+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="resolving sources" id=xPiUN namespace=openshift-machine-config-operator 2025-08-13T19:59:32.953307313+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="checking if subscriptions need update" id=xPiUN namespace=openshift-machine-config-operator 2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4p7jA 2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:32.982043382+00:00 stderr F time="2025-08-13T19:59:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=xPiUN namespace=openshift-machine-config-operator 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=DH/KU namespace=openshift-marketplace 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=DH/KU namespace=openshift-marketplace 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=n+zaX 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F 
time="2025-08-13T19:59:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=CW6fM namespace=openshift-operators 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=7gj+h namespace=openshift-monitoring 2025-08-13T19:59:33.123416902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=7gj+h namespace=openshift-monitoring 2025-08-13T19:59:33.147542840+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=7gj+h namespace=openshift-monitoring 2025-08-13T19:59:33.147659554+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=BL42U namespace=openshift-multus 2025-08-13T19:59:33.147693865+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=BL42U namespace=openshift-multus 2025-08-13T19:59:33.147978023+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=DH/KU namespace=openshift-marketplace 2025-08-13T19:59:33.148037714+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=cyGEC namespace=openshift-network-diagnostics 2025-08-13T19:59:33.148078215+00:00 stderr F 
time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=cyGEC namespace=openshift-network-diagnostics 2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=h2NDS 2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.165429970+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.310597278+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.310660420+00:00 stderr F 
time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.310660420+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.310676000+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=EG+R5 2025-08-13T19:59:33.310688081+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.310688081+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.385050291+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.385050291+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.385362430+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=BL42U namespace=openshift-multus 2025-08-13T19:59:33.385441562+00:00 stderr F time="2025-08-13T19:59:33Z" level=info 
msg="resolving sources" id=4wzOg namespace=openshift-network-node-identity 2025-08-13T19:59:33.385441562+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=4wzOg namespace=openshift-network-node-identity 2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=tZPiK 2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.591915857+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.674899343+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=cyGEC 
namespace=openshift-network-diagnostics 2025-08-13T19:59:33.674899343+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=IzSRQ namespace=openshift-network-operator 2025-08-13T19:59:33.676986902+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.706623267+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.706623267+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.706731020+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=EG+R5 2025-08-13T19:59:33.740440261+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=4wzOg namespace=openshift-network-node-identity 2025-08-13T19:59:33.740507163+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=c3ECV namespace=openshift-node 2025-08-13T19:59:33.740507163+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=c3ECV namespace=openshift-node 2025-08-13T19:59:33.741243924+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="syncing 
catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:33.840507144+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=tZPiK 2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=IzSRQ namespace=openshift-network-operator 2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="No subscriptions were found in namespace openshift-node" id=c3ECV namespace=openshift-node 2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="resolving sources" id=X8taY namespace=openshift-nutanix-infra 2025-08-13T19:59:33.842934233+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checking if subscriptions need update" id=X8taY namespace=openshift-nutanix-infra 2025-08-13T19:59:33.905607960+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=tZPiK 
2025-08-13T19:59:33.905647991+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.905750974+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.939003481+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.939536447+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.939582898+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.939737092+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=xySjj 2025-08-13T19:59:33.939947768+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:33.940077102+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 
2025-08-13T19:59:33.985690332+00:00 stderr F time="2025-08-13T19:59:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:34.161017760+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:34.161017760+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:34.161017760+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:34.161017760+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xySjj 2025-08-13T19:59:34.161310969+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=IzSRQ namespace=openshift-network-operator 2025-08-13T19:59:34.161679489+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=MMlDc namespace=openshift-oauth-apiserver 2025-08-13T19:59:34.162027529+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=MMlDc namespace=openshift-oauth-apiserver 2025-08-13T19:59:34.194938967+00:00 stderr F 
time="2025-08-13T19:59:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:34.197689775+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=X8taY namespace=openshift-nutanix-infra 2025-08-13T19:59:34.198110157+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=CD0QG namespace=openshift-openstack-infra 2025-08-13T19:59:34.198179299+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=CD0QG namespace=openshift-openstack-infra 2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=OxmCH 2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:34.352345744+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:34.411095919+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.411177581+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.473884238+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.474142106+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.474217108+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.474275190+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=1LWtC 2025-08-13T19:59:34.474325011+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.474374332+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.566921241+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=MMlDc namespace=openshift-oauth-apiserver 2025-08-13T19:59:34.566921241+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=Cbtq2 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:34.566921241+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=Cbtq2 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:34.641284630+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=CD0QG namespace=openshift-openstack-infra 2025-08-13T19:59:34.655651460+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=8Hevs namespace=openshift-operators 2025-08-13T19:59:34.655940278+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=8Hevs namespace=openshift-operators 2025-08-13T19:59:34.703154954+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.703583876+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.703643278+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.703752451+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1LWtC 2025-08-13T19:59:34.826184401+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=Cbtq2 namespace=openshift-operator-lifecycle-manager 2025-08-13T19:59:34.826184401+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="resolving sources" id=9c8wv namespace=openshift-ovirt-infra 2025-08-13T19:59:34.826243543+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="checking if subscriptions need update" id=9c8wv namespace=openshift-ovirt-infra 2025-08-13T19:59:34.956635909+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:34.956762983+00:00 stderr F time="2025-08-13T19:59:34Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:35.028041045+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:35.028041045+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:35.028041045+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:35.028125617+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OxmCH 2025-08-13T19:59:35.028125617+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:35.028139348+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:35.052736449+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=8Hevs namespace=openshift-operators 2025-08-13T19:59:35.052736449+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=YWQxZ namespace=openshift-ovn-kubernetes 2025-08-13T19:59:35.052736449+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=YWQxZ namespace=openshift-ovn-kubernetes 2025-08-13T19:59:35.104361450+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:35.126499021+00:00 stderr F 
time="2025-08-13T19:59:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=wkF6t 2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:35.126499021+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t 2025-08-13T19:59:35.235304383+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=9c8wv namespace=openshift-ovirt-infra 2025-08-13T19:59:35.235304383+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=e+NiR namespace=openshift-route-controller-manager 2025-08-13T19:59:35.235304383+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=e+NiR namespace=openshift-route-controller-manager 2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0 2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=npCd0
2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:35.324618519+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:35.413547903+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=YWQxZ namespace=openshift-ovn-kubernetes
2025-08-13T19:59:35.413547903+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=g+Jzv namespace=openshift-service-ca
2025-08-13T19:59:35.413547903+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=g+Jzv namespace=openshift-service-ca
2025-08-13T19:59:35.612152014+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=e+NiR namespace=openshift-route-controller-manager
2025-08-13T19:59:35.612152014+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=/7VRD namespace=openshift-service-ca-operator
2025-08-13T19:59:35.612152014+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=/7VRD namespace=openshift-service-ca-operator
2025-08-13T19:59:35.824071075+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=g+Jzv namespace=openshift-service-ca
2025-08-13T19:59:35.824071075+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="resolving sources" id=V0Mj5 namespace=openshift-user-workload-monitoring
2025-08-13T19:59:35.824071075+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="checking if subscriptions need update" id=V0Mj5 namespace=openshift-user-workload-monitoring
2025-08-13T19:59:35.939629879+00:00 stderr F time="2025-08-13T19:59:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:36.014112112+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:36.014112112+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:36.014112112+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wkF6t
2025-08-13T19:59:36.073974139+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=/7VRD namespace=openshift-service-ca-operator
2025-08-13T19:59:36.073974139+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="resolving sources" id=uVRCy namespace=openshift-vsphere-infra
2025-08-13T19:59:36.073974139+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="checking if subscriptions need update" id=uVRCy namespace=openshift-vsphere-infra
2025-08-13T19:59:36.147751992+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:36.147751992+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:36.147751992+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:36.147751992+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=npCd0
2025-08-13T19:59:36.283543052+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=V0Mj5 namespace=openshift-user-workload-monitoring
2025-08-13T19:59:36.467016732+00:00 stderr F time="2025-08-13T19:59:36Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=uVRCy namespace=openshift-vsphere-infra
2025-08-13T19:59:39.198040701+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE
2025-08-13T19:59:39.198040701+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE
2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE
2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=PQFYE
2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=PQFYE
2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=PQFYE
2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE
2025-08-13T19:59:39.262666783+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE
2025-08-13T19:59:39.386097141+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE
2025-08-13T19:59:39.386338798+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=PQFYE
2025-08-13T19:59:39.386388440+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=PQFYE
2025-08-13T19:59:39.386757920+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=PQFYE
2025-08-13T19:59:39.433271946+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6
2025-08-13T19:59:39.433271946+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6
2025-08-13T19:59:39.451519856+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6
2025-08-13T19:59:39.451879316+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06pP6
2025-08-13T19:59:39.451936398+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06pP6
2025-08-13T19:59:39.451982749+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=06pP6
2025-08-13T19:59:39.452017720+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6
2025-08-13T19:59:39.452048641+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6
2025-08-13T19:59:39.583471257+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6
2025-08-13T19:59:39.583720424+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06pP6
2025-08-13T19:59:39.583761046+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06pP6
2025-08-13T19:59:39.584012163+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06pP6
2025-08-13T19:59:39.857200820+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e
2025-08-13T19:59:39.857328944+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e
2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e
2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4QB1e
2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4QB1e
2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=4QB1e
2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e
2025-08-13T19:59:39.970764137+00:00 stderr F time="2025-08-13T19:59:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e
2025-08-13T19:59:40.178530660+00:00 stderr F time="2025-08-13T19:59:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e
2025-08-13T19:59:40.197105349+00:00 stderr F time="2025-08-13T19:59:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4QB1e
2025-08-13T19:59:40.197105349+00:00 stderr F time="2025-08-13T19:59:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4QB1e
2025-08-13T19:59:40.197105349+00:00 stderr F time="2025-08-13T19:59:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4QB1e
2025-08-13T19:59:42.951135533+00:00 stderr F time="2025-08-13T19:59:42Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg
2025-08-13T19:59:42.983729312+00:00 stderr F time="2025-08-13T19:59:42Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg
2025-08-13T19:59:43.117258258+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg
2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=c+nUg
2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=c+nUg
2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=c+nUg
2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg
2025-08-13T19:59:43.117603748+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg
2025-08-13T19:59:43.341403168+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg
2025-08-13T19:59:43.341403168+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=c+nUg
2025-08-13T19:59:43.341403168+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=c+nUg
2025-08-13T19:59:43.343336543+00:00 stderr F time="2025-08-13T19:59:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=c+nUg
2025-08-13T19:59:46.735288911+00:00 stderr F time="2025-08-13T19:59:46Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK
2025-08-13T19:59:46.735288911+00:00 stderr F time="2025-08-13T19:59:46Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK
2025-08-13T19:59:47.169190439+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK
2025-08-13T19:59:47.169540529+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0nGjK
2025-08-13T19:59:47.169540529+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0nGjK
2025-08-13T19:59:47.169540529+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=0nGjK
2025-08-13T19:59:47.169573950+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK
2025-08-13T19:59:47.169573950+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK
2025-08-13T19:59:47.733056373+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK
2025-08-13T19:59:47.733056373+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0nGjK
2025-08-13T19:59:47.733056373+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0nGjK
2025-08-13T19:59:47.733056373+00:00 stderr F time="2025-08-13T19:59:47Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0nGjK
2025-08-13T19:59:49.859424817+00:00 stderr F time="2025-08-13T19:59:49Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He
2025-08-13T19:59:49.859482799+00:00 stderr F time="2025-08-13T19:59:49Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He
2025-08-13T19:59:50.197821493+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX
2025-08-13T19:59:50.197821493+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX
2025-08-13T19:59:50.348487368+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX
2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=liWgX
2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=liWgX
2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=liWgX
2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX
2025-08-13T19:59:50.359249445+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX
2025-08-13T19:59:50.609499119+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He
2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t++He
2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t++He
2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=t++He
2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He
2025-08-13T19:59:50.740285827+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He
2025-08-13T19:59:50.944928531+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX
2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=liWgX
2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=liWgX
2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=liWgX
2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj
2025-08-13T19:59:50.974117473+00:00 stderr F time="2025-08-13T19:59:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj
2025-08-13T19:59:51.207667451+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj
2025-08-13T19:59:51.207943779+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wt1Gj
2025-08-13T19:59:51.208099823+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wt1Gj
2025-08-13T19:59:51.208140744+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=wt1Gj
2025-08-13T19:59:51.208179315+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj
2025-08-13T19:59:51.208221417+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj
2025-08-13T19:59:51.344408219+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He
2025-08-13T19:59:51.344730288+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t++He
2025-08-13T19:59:51.344875572+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t++He
2025-08-13T19:59:51.345033086+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t++He
2025-08-13T19:59:51.934687685+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj
2025-08-13T19:59:51.934687685+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wt1Gj
2025-08-13T19:59:51.934687685+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=wt1Gj
2025-08-13T19:59:51.934687685+00:00 stderr F time="2025-08-13T19:59:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=wt1Gj
2025-08-13T19:59:52.079561575+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.079561575+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.145560836+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.145821764+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.145901376+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.146001069+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=gPEn3
2025-08-13T19:59:52.146040170+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.146079781+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.312279509+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.336229291+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.336381076+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.336600242+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=gPEn3
2025-08-13T19:59:52.568228405+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.568329207+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.613310480+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.648750480+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.648970566+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.650705826+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=wkEPj
2025-08-13T19:59:52.650861700+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.650908281+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.843163742+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.843427369+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.843467740+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:52.843584284+00:00 stderr F time="2025-08-13T19:59:52Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wkEPj
2025-08-13T19:59:55.809157948+00:00 stderr F time="2025-08-13T19:59:55Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:55.809157948+00:00 stderr F time="2025-08-13T19:59:55Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:55.814073058+00:00 stderr F time="2025-08-13T19:59:55Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:55.814073058+00:00 stderr F time="2025-08-13T19:59:55Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:56.189320515+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=iwckz
2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:56.189640814+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz
2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=nCdV/
2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:56.427273978+00:00 stderr F time="2025-08-13T19:59:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/
2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators
catalogsource.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=iwckz 2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.554957182+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.558713989+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:57.558713989+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=nCdV/ 
2025-08-13T19:59:57.558713989+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:57.570590918+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=nCdV/ 2025-08-13T19:59:57.571053191+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:57.571149394+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=J6dpF 
2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=J6dpF 2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.691261188+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:57.691340420+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=xg3sJ 2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 
2025-08-13T19:59:57.691736371+00:00 stderr F time="2025-08-13T19:59:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:58.146626098+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:58.147017139+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:58.147089101+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:58.147257586+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=xg3sJ 2025-08-13T19:59:58.301985327+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:58.302308336+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:58.302525802+00:00 stderr F time="2025-08-13T19:59:58Z" level=info 
msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:58.302713597+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=J6dpF 2025-08-13T19:59:58.328573935+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.328699418+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.445077216+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="checked 
registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=LLmbp 2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.447239057+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.536339537+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.536660586+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.536703237+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.536883623+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LLmbp 2025-08-13T19:59:58.690032828+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.690915883+00:00 stderr F 
time="2025-08-13T19:59:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Io3rP 2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.705605072+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.793308482+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.793308482+00:00 stderr F 
time="2025-08-13T19:59:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.793308482+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:58.793308482+00:00 stderr F time="2025-08-13T19:59:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Io3rP 2025-08-13T19:59:59.388897670+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.389099906+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.402952380+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.404585347+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.405404920+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="of 1 pods 
matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.405570945+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=rcxaT 2025-08-13T19:59:59.406018848+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.407268553+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.451741681+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.452075531+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.452189124+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rcxaT 2025-08-13T19:59:59.452303107+00:00 stderr F time="2025-08-13T19:59:59Z" level=info msg="requeueing registry server for catalog update check: 
update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rcxaT 2025-08-13T20:00:00.491755946+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:00.523464760+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:00.695830604+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:00.858367957+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:00.858768298+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:00.859165879+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=b3BRZ 2025-08-13T20:00:00.859254962+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:00.895947778+00:00 stderr F time="2025-08-13T20:00:00Z" level=info msg="ensuring registry server" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:01.051495292+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:01.051495292+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:01.051495292+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:01.051495292+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=b3BRZ 2025-08-13T20:00:01.397743552+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.397743552+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="evaluating current 
pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=1zHG8 2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.437899726+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.671504025+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.671504025+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.671504025+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:01.671504025+00:00 stderr F time="2025-08-13T20:00:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1zHG8 2025-08-13T20:00:10.257916954+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:10.257916954+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:10.618114084+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:10.618688671+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:10.618764433+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:10.633569755+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true 
id=215t+ 2025-08-13T20:00:10.633639977+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:10.633680268+00:00 stderr F time="2025-08-13T20:00:10Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:11.625957411+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:11.626000152+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:11.626000152+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:11.626870777+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=215t+ 2025-08-13T20:00:11.986223504+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:11.986223504+00:00 stderr F time="2025-08-13T20:00:11Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=hpl2D 2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.174732659+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.858314771+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.865198987+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.865198987+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:12.865198987+00:00 stderr F time="2025-08-13T20:00:12Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hpl2D 2025-08-13T20:00:13.518068723+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.518068723+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.668883343+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.671459407+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.678907139+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.679191017+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=9FaCP 2025-08-13T20:00:13.679475516+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.679511957+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:13.735700629+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:13.737032737+00:00 stderr F time="2025-08-13T20:00:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.477480900+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:14.477480900+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:14.477480900+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:14.477549732+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=9FaCP 2025-08-13T20:00:14.490480350+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.490755508+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.491100338+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.491267593+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=iRqKV 2025-08-13T20:00:14.491357865+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:14.491395416+00:00 stderr F time="2025-08-13T20:00:14Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 
2025-08-13T20:00:16.614995068+00:00 stderr F time="2025-08-13T20:00:16Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:16.615298246+00:00 stderr F time="2025-08-13T20:00:16Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.063028203+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 
2025-08-13T20:00:17.102506279+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:17.103391784+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:17.103391784+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:17.103391784+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iRqKV 2025-08-13T20:00:17.585501881+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.585501881+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.585501881+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:17.585501881+00:00 stderr F time="2025-08-13T20:00:17Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PSvZx 2025-08-13T20:00:22.519198049+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.519198049+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F 
time="2025-08-13T20:00:22Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:22.649052642+00:00 stderr F time="2025-08-13T20:00:22Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:23.311714839+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:23.311714839+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:23.311714839+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:23.311714839+00:00 stderr F time="2025-08-13T20:00:23Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=FO0J2 2025-08-13T20:00:25.342577698+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.342733923+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 
2025-08-13T20:00:25.382118736+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.385075220+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.542007313+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.542007313+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.542007313+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:25.542007313+00:00 stderr F time="2025-08-13T20:00:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=prq86 2025-08-13T20:00:27.574588102+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.575607891+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.575607891+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.578337239+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.599054859+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.599170943+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.664769343+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.664769343+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.664769343+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.665193065+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=BsAt4 2025-08-13T20:00:27.705098873+00:00 
stderr F time="2025-08-13T20:00:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.705098873+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.705098873+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:27.705098873+00:00 stderr F time="2025-08-13T20:00:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RRXyD 2025-08-13T20:00:28.159993484+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.159993484+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.181162468+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.289581400+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.289581400+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.289581400+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.289965351+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4uxdX 2025-08-13T20:00:28.304474084+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.304474084+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.377364413+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.378421583+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.378490995+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.378537576+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Gc25i 2025-08-13T20:00:28.378580947+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="registry state good" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.378627869+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.442753307+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.442753307+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.442753307+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:28.442753307+00:00 stderr F time="2025-08-13T20:00:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Gc25i 2025-08-13T20:00:57.666292432+00:00 stderr F time="2025-08-13T20:00:57Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:00:57.666721735+00:00 stderr F time="2025-08-13T20:00:57Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL 2025-08-13T20:00:57.704953045+00:00 stderr F time="2025-08-13T20:00:57Z" level=info 
msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg
2025-08-13T20:00:57.704953045+00:00 stderr F time="2025-08-13T20:00:57Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg
2025-08-13T20:01:03.329597053+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg
2025-08-13T20:01:03.348178953+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kPFEg
2025-08-13T20:01:03.348262555+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kPFEg
2025-08-13T20:01:03.348433920+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=kPFEg
2025-08-13T20:01:03.348525503+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg
2025-08-13T20:01:03.348570214+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg
2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL
2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rtMPL
2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rtMPL
2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=rtMPL
2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL
2025-08-13T20:01:03.426941299+00:00 stderr F time="2025-08-13T20:01:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL
2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg
2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kPFEg
2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kPFEg
2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kPFEg
2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU
2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU
2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL
2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rtMPL
2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=rtMPL
2025-08-13T20:01:07.347043367+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=rtMPL
2025-08-13T20:01:07.388328095+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS
2025-08-13T20:01:07.388328095+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS
2025-08-13T20:01:07.520350389+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS
2025-08-13T20:01:07.520576086+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2qdyS
2025-08-13T20:01:07.520616927+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2qdyS
2025-08-13T20:01:07.520654998+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=2qdyS
2025-08-13T20:01:07.520690729+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS
2025-08-13T20:01:07.520727110+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS
2025-08-13T20:01:07.528978965+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU
2025-08-13T20:01:07.529331725+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IRiQU
2025-08-13T20:01:07.529379006+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IRiQU
2025-08-13T20:01:07.529418868+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=IRiQU
2025-08-13T20:01:07.529449949+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU
2025-08-13T20:01:07.529480639+00:00 stderr F time="2025-08-13T20:01:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU
2025-08-13T20:01:08.064295639+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS
2025-08-13T20:01:08.064629838+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2qdyS
2025-08-13T20:01:08.064711361+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=2qdyS
2025-08-13T20:01:08.065045300+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=2qdyS
2025-08-13T20:01:08.096452916+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU
2025-08-13T20:01:08.097157546+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IRiQU
2025-08-13T20:01:08.097301700+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IRiQU
2025-08-13T20:01:08.097634070+00:00 stderr F time="2025-08-13T20:01:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IRiQU
2025-08-13T20:01:37.338197592+00:00 stderr F time="2025-08-13T20:01:37Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6
2025-08-13T20:01:37.338363566+00:00 stderr F time="2025-08-13T20:01:37Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6
2025-08-13T20:01:37.338668485+00:00 stderr F time="2025-08-13T20:01:37Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo
2025-08-13T20:01:37.338668485+00:00 stderr F time="2025-08-13T20:01:37Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo
2025-08-13T20:01:51.216011662+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo
2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ER/Xo
2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ER/Xo
2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ER/Xo
2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo
2025-08-13T20:01:51.220481139+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo
2025-08-13T20:01:51.799299583+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6
2025-08-13T20:01:51.799687804+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fpzh6
2025-08-13T20:01:51.799687804+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fpzh6
2025-08-13T20:01:51.799711044+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=fpzh6
2025-08-13T20:01:51.799892539+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6
2025-08-13T20:01:51.799892539+00:00 stderr F time="2025-08-13T20:01:51Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6
2025-08-13T20:02:01.318031038+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6
2025-08-13T20:02:01.318031038+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo
2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fpzh6
2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=fpzh6
2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ER/Xo
2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ER/Xo
2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=fpzh6
2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ER/Xo
2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1
2025-08-13T20:02:01.322561157+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1
2025-08-13T20:02:01.338527872+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd
2025-08-13T20:02:01.338527872+00:00 stderr F time="2025-08-13T20:02:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd
2025-08-13T20:02:03.455292138+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd
2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Y9gAd
2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Y9gAd
2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Y9gAd
2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd
2025-08-13T20:02:03.490024119+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd
2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1
2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4YVg1
2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4YVg1
2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=4YVg1
2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1
2025-08-13T20:02:03.503401750+00:00 stderr F time="2025-08-13T20:02:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1
2025-08-13T20:02:11.183876642+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1
2025-08-13T20:02:11.183876642+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4YVg1
2025-08-13T20:02:11.183876642+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4YVg1
2025-08-13T20:02:11.183876642+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4YVg1
2025-08-13T20:02:11.198228512+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd
2025-08-13T20:02:11.198228512+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Y9gAd
2025-08-13T20:02:11.198228512+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Y9gAd
2025-08-13T20:02:11.198228512+00:00 stderr F time="2025-08-13T20:02:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Y9gAd
2025-08-13T20:02:29.459578992+00:00 stderr F time="2025-08-13T20:02:29Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused"
2025-08-13T20:02:31.320216660+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=buKVh
2025-08-13T20:02:31.320359594+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OFmS9
2025-08-13T20:02:31.320380295+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OFmS9
2025-08-13T20:02:31.320392025+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=buKVh
2025-08-13T20:02:31.324995286+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=buKVh
2025-08-13T20:02:31.325200072+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=OFmS9
2025-08-13T20:02:31.325309705+00:00 stderr F E0813 20:02:31.325252 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.325442679+00:00 stderr F E0813 20:02:31.325238 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.331356318+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=YuG3U
2025-08-13T20:02:31.331434770+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=YuG3U
2025-08-13T20:02:31.332093909+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8sZdC
2025-08-13T20:02:31.332093909+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=8sZdC
2025-08-13T20:02:31.335483205+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=YuG3U
2025-08-13T20:02:31.335630260+00:00 stderr F E0813 20:02:31.335536 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.336220676+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=8sZdC
2025-08-13T20:02:31.336242677+00:00 stderr F E0813 20:02:31.336211 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.347115237+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=tUqdu
2025-08-13T20:02:31.347115237+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=tUqdu
2025-08-13T20:02:31.347680633+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fxfKq
2025-08-13T20:02:31.347680633+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fxfKq
2025-08-13T20:02:31.350528075+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=tUqdu
2025-08-13T20:02:31.350528075+00:00 stderr F E0813 20:02:31.350496 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.359026477+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=fxfKq
2025-08-13T20:02:31.359140730+00:00 stderr F E0813 20:02:31.359109 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.371356229+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VKUpv
2025-08-13T20:02:31.371457772+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VKUpv
2025-08-13T20:02:31.377650838+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=VKUpv
2025-08-13T20:02:31.377997318+00:00 stderr F E0813 20:02:31.377921 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.379327976+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JDVSq
2025-08-13T20:02:31.379422999+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JDVSq
2025-08-13T20:02:31.382383563+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=JDVSq
2025-08-13T20:02:31.382383563+00:00 stderr F E0813 20:02:31.382323 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.421214611+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=x39iE
2025-08-13T20:02:31.421214611+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=x39iE
2025-08-13T20:02:31.422724244+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3OfOW
2025-08-13T20:02:31.422936740+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3OfOW
2025-08-13T20:02:31.423440045+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=x39iE
2025-08-13T20:02:31.423440045+00:00 stderr F E0813 20:02:31.423385 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.425895305+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=3OfOW
2025-08-13T20:02:31.425982657+00:00 stderr F E0813 20:02:31.425952 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.504993781+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=p3gN7
2025-08-13T20:02:31.504993781+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=p3gN7
2025-08-13T20:02:31.510025254+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1Lp29
2025-08-13T20:02:31.510164538+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1Lp29
2025-08-13T20:02:31.524701023+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=p3gN7
2025-08-13T20:02:31.524701023+00:00 stderr F E0813 20:02:31.524677 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.685335536+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wq+F6
2025-08-13T20:02:31.685376547+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wq+F6
2025-08-13T20:02:31.723889646+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=1Lp29
2025-08-13T20:02:31.723946007+00:00 stderr F E0813 20:02:31.723879 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:31.884613581+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cysEU
2025-08-13T20:02:31.884613581+00:00 stderr F time="2025-08-13T20:02:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=cysEU
2025-08-13T20:02:31.924894470+00:00 stderr F time="2025-08-13T20:02:31Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=wq+F6
2025-08-13T20:02:31.924894470+00:00 stderr F E0813 20:02:31.924867 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:32.124681629+00:00 stderr F time="2025-08-13T20:02:32Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=cysEU
2025-08-13T20:02:32.124681629+00:00 stderr F E0813 20:02:32.124632 1 queueinformer_operator.go:319] sync {"update"
"openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.245592088+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y7GJB 2025-08-13T20:02:32.245592088+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y7GJB 2025-08-13T20:02:32.324189910+00:00 stderr F time="2025-08-13T20:02:32Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=y7GJB 2025-08-13T20:02:32.324312864+00:00 stderr F E0813 20:02:32.324289 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.446008686+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uuH+l 2025-08-13T20:02:32.446008686+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uuH+l 2025-08-13T20:02:32.524368101+00:00 stderr F time="2025-08-13T20:02:32Z" level=error msg="UpdateStatus - error while setting CatalogSource status" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=uuH+l 2025-08-13T20:02:32.524368101+00:00 stderr F E0813 20:02:32.524259 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.964896128+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=59J53 2025-08-13T20:02:32.964928159+00:00 stderr F time="2025-08-13T20:02:32Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=59J53 2025-08-13T20:02:32.968003557+00:00 stderr F time="2025-08-13T20:02:32Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=59J53 2025-08-13T20:02:33.165883152+00:00 stderr F time="2025-08-13T20:02:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ezGB7 2025-08-13T20:02:33.165883152+00:00 stderr F time="2025-08-13T20:02:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ezGB7 2025-08-13T20:02:33.169296029+00:00 stderr F time="2025-08-13T20:02:33Z" level=error 
msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ezGB7 2025-08-13T20:02:34.461571413+00:00 stderr F time="2025-08-13T20:02:34Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:02:41.180077903+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EdhsD 2025-08-13T20:02:41.180258628+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EdhsD 2025-08-13T20:02:41.184228181+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=EdhsD 2025-08-13T20:02:41.184272712+00:00 stderr F E0813 20:02:41.184250 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.189750679+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=Oj8ti 2025-08-13T20:02:41.189907113+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Oj8ti 2025-08-13T20:02:41.191991262+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=Oj8ti 2025-08-13T20:02:41.192127796+00:00 stderr F E0813 20:02:41.192004 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.194489064+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cne/A 2025-08-13T20:02:41.194563256+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cne/A 2025-08-13T20:02:41.196163562+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=Cne/A 2025-08-13T20:02:41.196213943+00:00 stderr F E0813 20:02:41.196155 1 queueinformer_operator.go:319] sync 
{"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.201539315+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=upscx 2025-08-13T20:02:41.201573576+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=upscx 2025-08-13T20:02:41.202747019+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fcrac 2025-08-13T20:02:41.202912874+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fcrac 2025-08-13T20:02:41.203769559+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=upscx 2025-08-13T20:02:41.203863561+00:00 stderr F E0813 20:02:41.203812 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.206941009+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=fcrac 2025-08-13T20:02:41.206962590+00:00 stderr F E0813 20:02:41.206940 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.214092483+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5gZ8D 2025-08-13T20:02:41.214092483+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5gZ8D 2025-08-13T20:02:41.216442510+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=5gZ8D 2025-08-13T20:02:41.216522152+00:00 stderr F E0813 20:02:41.216451 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.228136414+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eTECs 2025-08-13T20:02:41.228136414+00:00 
stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eTECs 2025-08-13T20:02:41.230565593+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=eTECs 2025-08-13T20:02:41.230565593+00:00 stderr F E0813 20:02:41.230267 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.238051627+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=kH+Io 2025-08-13T20:02:41.238081248+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=kH+Io 2025-08-13T20:02:41.240472596+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=kH+Io 2025-08-13T20:02:41.240557138+00:00 stderr F E0813 20:02:41.240493 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.271165252+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ogtb3 2025-08-13T20:02:41.271206903+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ogtb3 2025-08-13T20:02:41.274379723+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ogtb3 2025-08-13T20:02:41.274413014+00:00 stderr F E0813 20:02:41.274362 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.281761504+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RsNAH 2025-08-13T20:02:41.281835696+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RsNAH 2025-08-13T20:02:41.284457551+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=RsNAH 2025-08-13T20:02:41.284457551+00:00 stderr F E0813 20:02:41.284415 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.355263271+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eeoYO 2025-08-13T20:02:41.355263271+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eeoYO 2025-08-13T20:02:41.365502733+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=R8X3r 2025-08-13T20:02:41.365502733+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=R8X3r 2025-08-13T20:02:41.385820093+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=eeoYO 2025-08-13T20:02:41.385903655+00:00 stderr F E0813 20:02:41.385835 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.546526037+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=vpLbh 2025-08-13T20:02:41.546526037+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=vpLbh 2025-08-13T20:02:41.584814309+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=R8X3r 2025-08-13T20:02:41.584935812+00:00 stderr F E0813 20:02:41.584887 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.745618686+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z8Tk9 2025-08-13T20:02:41.745735399+00:00 stderr F time="2025-08-13T20:02:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z8Tk9 2025-08-13T20:02:41.784504675+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=vpLbh 2025-08-13T20:02:41.784504675+00:00 stderr F E0813 20:02:41.784476 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.985068617+00:00 stderr F time="2025-08-13T20:02:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=z8Tk9 2025-08-13T20:02:41.985068617+00:00 stderr F E0813 20:02:41.984942 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.105127703+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fKVKG 2025-08-13T20:02:42.105127703+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=fKVKG 2025-08-13T20:02:42.184517948+00:00 stderr F time="2025-08-13T20:02:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put 
\"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=fKVKG 2025-08-13T20:02:42.184564359+00:00 stderr F E0813 20:02:42.184495 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.305245692+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z/+5s 2025-08-13T20:02:42.305245692+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z/+5s 2025-08-13T20:02:42.384638557+00:00 stderr F time="2025-08-13T20:02:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=z/+5s 2025-08-13T20:02:42.384638557+00:00 stderr F E0813 20:02:42.384559 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.825919346+00:00 stderr F time="2025-08-13T20:02:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=e2NJz 2025-08-13T20:02:42.825919346+00:00 stderr F time="2025-08-13T20:02:42Z" level=info 
msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=e2NJz 2025-08-13T20:02:42.828727216+00:00 stderr F time="2025-08-13T20:02:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=e2NJz 2025-08-13T20:02:42.830009803+00:00 stderr F time="2025-08-13T20:02:42Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:02:43.025300714+00:00 stderr F time="2025-08-13T20:02:43Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=pBUc+ 2025-08-13T20:02:43.025300714+00:00 stderr F time="2025-08-13T20:02:43Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=pBUc+ 2025-08-13T20:02:43.028725332+00:00 stderr F time="2025-08-13T20:02:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=pBUc+ 2025-08-13T20:02:47.833958362+00:00 stderr F time="2025-08-13T20:02:47Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: 
connection refused"
2025-08-13T20:06:11.493510853+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=READY"
2025-08-13T20:06:11.513128685+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF
2025-08-13T20:06:11.513128685+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF
2025-08-13T20:06:11.527002293+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="resolving sources" id=k2b6s namespace=openshift-marketplace
2025-08-13T20:06:11.527002293+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="checking if subscriptions need update" id=k2b6s namespace=openshift-marketplace
2025-08-13T20:06:11.530683628+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=k2b6s namespace=openshift-marketplace
2025-08-13T20:06:11.564933489+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF
2025-08-13T20:06:11.613763127+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=6ycoF
2025-08-13T20:06:11.613963973+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=6ycoF
2025-08-13T20:06:11.614943581+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=6ycoF
2025-08-13T20:06:11.615105376+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF
2025-08-13T20:06:11.615142827+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF
2025-08-13T20:06:11.717925600+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF
2025-08-13T20:06:11.718236119+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=6ycoF
2025-08-13T20:06:11.718306171+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=6ycoF
2025-08-13T20:06:11.719213827+00:00 stderr F time="2025-08-13T20:06:11Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=6ycoF
2025-08-13T20:06:13.735607719+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=READY"
2025-08-13T20:06:13.735857166+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="resolving sources" id=emBXK namespace=openshift-marketplace
2025-08-13T20:06:13.735949859+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="checking if subscriptions need update" id=emBXK namespace=openshift-marketplace
2025-08-13T20:06:13.736274268+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX
2025-08-13T20:06:13.738287036+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX
2025-08-13T20:06:13.740709755+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=emBXK namespace=openshift-marketplace
2025-08-13T20:06:13.741268591+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX
2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=AU2sX
2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=AU2sX
2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=AU2sX
2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX
2025-08-13T20:06:13.741347584+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX
2025-08-13T20:06:13.766092952+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX
2025-08-13T20:06:13.766359430+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=AU2sX
2025-08-13T20:06:13.766359430+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=AU2sX
2025-08-13T20:06:13.766474793+00:00 stderr F time="2025-08-13T20:06:13Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=AU2sX
2025-08-13T20:06:26.110215478+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt
2025-08-13T20:06:26.110277409+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt
2025-08-13T20:06:26.113314296+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB
2025-08-13T20:06:26.113314296+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB
2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt
2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mlSDt
2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mlSDt
2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=mlSDt
2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt
2025-08-13T20:06:26.114711466+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt
2025-08-13T20:06:26.114995094+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB
2025-08-13T20:06:26.115138028+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Yd8EB
2025-08-13T20:06:26.115138028+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Yd8EB
2025-08-13T20:06:26.115138028+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Yd8EB
2025-08-13T20:06:26.115138028+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB
2025-08-13T20:06:26.115174030+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB
2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB
2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Yd8EB
2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Yd8EB
2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Yd8EB
2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt
2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mlSDt
2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mlSDt
2025-08-13T20:06:26.129995614+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="requeueing registry server for catalog update
check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mlSDt
2025-08-13T20:06:26.135973235+00:00 stderr F time="2025-08-13T20:06:26Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=Yd8EB
2025-08-13T20:06:26.139168817+00:00 stderr F E0813 20:06:26.139011 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:06:26.139256829+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt
2025-08-13T20:06:26.139256829+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt
2025-08-13T20:06:26.146991231+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt
2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OgeZt
2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OgeZt
2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=OgeZt
2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt
2025-08-13T20:06:26.150852471+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt
2025-08-13T20:06:26.153847167+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0
2025-08-13T20:06:26.153847167+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0
2025-08-13T20:06:26.283168110+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0
2025-08-13T20:06:26.283281093+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=O7Sr0
2025-08-13T20:06:26.283281093+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=O7Sr0
2025-08-13T20:06:26.283281093+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=O7Sr0
2025-08-13T20:06:26.283317255+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0
2025-08-13T20:06:26.283317255+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0
2025-08-13T20:06:26.485976288+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt
2025-08-13T20:06:26.486146673+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OgeZt
2025-08-13T20:06:26.486161263+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OgeZt
2025-08-13T20:06:26.486262386+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OgeZt
2025-08-13T20:06:26.500887035+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj
2025-08-13T20:06:26.500887035+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj
2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj
2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Zh+dj
2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Zh+dj
2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Zh+dj
2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj
2025-08-13T20:06:26.886070925+00:00 stderr F time="2025-08-13T20:06:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj
2025-08-13T20:06:27.083998242+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0
2025-08-13T20:06:27.084290890+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=O7Sr0
2025-08-13T20:06:27.084290890+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=O7Sr0
2025-08-13T20:06:27.084290890+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=O7Sr0
2025-08-13T20:06:27.096395597+00:00 stderr F time="2025-08-13T20:06:27Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"redhat-operators\": the object has been modified; please apply your changes to the latest version and try again" id=O7Sr0
2025-08-13T20:06:27.096551121+00:00 stderr F E0813 20:06:27.096522 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "redhat-operators": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:06:27.102279406+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL
2025-08-13T20:06:27.102279406+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL
2025-08-13T20:06:27.481279009+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL
2025-08-13T20:06:27.481479894+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+4AoL
2025-08-13T20:06:27.481529046+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+4AoL
2025-08-13T20:06:27.481574137+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=+4AoL
2025-08-13T20:06:27.481606088+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL
2025-08-13T20:06:27.481635659+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL
2025-08-13T20:06:27.685135696+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj
2025-08-13T20:06:27.685418774+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Zh+dj
2025-08-13T20:06:27.685461056+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Zh+dj
2025-08-13T20:06:27.685586239+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Zh+dj
2025-08-13T20:06:27.699274701+00:00 stderr F time="2025-08-13T20:06:27Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=Zh+dj
2025-08-13T20:06:27.699274701+00:00 stderr F E0813 20:06:27.698399 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:06:27.709208696+00:00 stderr F
time="2025-08-13T20:06:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:27.709451833+00:00 stderr F time="2025-08-13T20:06:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.085910023+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.086225762+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.086282274+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.086363386+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Oq41s
2025-08-13T20:06:28.086406027+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.086482439+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.282898944+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL
2025-08-13T20:06:28.283227293+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+4AoL
2025-08-13T20:06:28.283351627+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+4AoL
2025-08-13T20:06:28.283547252+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+4AoL
2025-08-13T20:06:28.302014561+00:00 stderr F time="2025-08-13T20:06:28Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"redhat-operators\": the object has been modified; please apply your changes to the latest version and try again" id=+4AoL
2025-08-13T20:06:28.302014561+00:00 stderr F E0813 20:06:28.301962 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "redhat-operators": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:06:28.302055262+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.302055262+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.685616206+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.685669818+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.685683108+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.685895464+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Tff8j
2025-08-13T20:06:28.685895464+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.685895464+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:28.883888374+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.884000247+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.884000247+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.884082709+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Oq41s
2025-08-13T20:06:28.902330863+00:00 stderr F time="2025-08-13T20:06:28Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=Oq41s
2025-08-13T20:06:28.902330863+00:00 stderr F E0813 20:06:28.902313 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:06:28.904563686+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:28.904659029+00:00 stderr F time="2025-08-13T20:06:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=q7Re+
2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:29.288172151+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:29.484531994+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:29.484769601+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:29.484769601+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:29.484890994+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Tff8j
2025-08-13T20:06:29.485084470+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:29.485084470+00:00 stderr F time="2025-08-13T20:06:29Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.081941562+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.082042295+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.082042295+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.082058995+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Zknej
2025-08-13T20:06:30.082109716+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.082109716+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej
2025-08-13T20:06:30.286076917+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:30.286076917+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:30.286076917+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=q7Re+
2025-08-13T20:06:30.693063231+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:30.693214205+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=q7Re+ 2025-08-13T20:06:30.693433561+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:30.693498553+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:30.884149023+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:30.884149023+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:30.884149023+00:00 stderr F time="2025-08-13T20:06:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:31.087252319+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 
2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=CNIN0 2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:31.089005829+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:31.410178215+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:31.410178215+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Zknej 2025-08-13T20:06:31.815722402+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:31.815722402+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:31.889601161+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:31.889944851+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:31.889994392+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:31.890068284+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=3/b+M 2025-08-13T20:06:31.890100255+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:31.890156607+00:00 stderr F time="2025-08-13T20:06:31Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:32.086024412+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:32.086065673+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:32.086075844+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:32.304304171+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:32.304304171+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:32.304304171+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:32.610152570+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:32.610152570+00:00 stderr F time="2025-08-13T20:06:32Z" level=info 
msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CNIN0 2025-08-13T20:06:32.679106567+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:32.688346182+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:32.762185319+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:32.762185319+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=3/b+M 2025-08-13T20:06:32.762185319+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:32.762185319+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:32.881690145+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:32.882261601+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8ka7C 
2025-08-13T20:06:32.883097115+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:32.883097115+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=8ka7C 2025-08-13T20:06:32.883097115+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:32.883097115+00:00 stderr F time="2025-08-13T20:06:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.088444173+00:00 stderr F 
time="2025-08-13T20:06:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=q32a1 2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.088444173+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.682592218+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:33.683471263+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:33.683471263+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:33.683471263+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="catalog update required at 2025-08-13 20:06:33.682825164 +0000 UTC m=+449.334213281" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 
2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=q32a1 2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:33.888190812+00:00 stderr F time="2025-08-13T20:06:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:34.203693937+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=8ka7C 2025-08-13T20:06:34.230721402+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:34.231202476+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="synchronizing registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:34.294228903+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:34.294601363+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:34.294688216+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:34.294835090+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=GYogS 2025-08-13T20:06:34.294933823+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:34.295012845+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:34.531050753+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:34.531341911+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:34.531425763+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:34.531661660+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=EgTk3 2025-08-13T20:06:34.531742733+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:34.531845646+00:00 stderr F time="2025-08-13T20:06:34Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:35.087074635+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:35.087519547+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:35.087647251+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:35.087946150+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GYogS 2025-08-13T20:06:35.088233678+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.088348361+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.319184129+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:35.319619572+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:35.319726125+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:35.320045724+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="catalog update required at 2025-08-13 20:06:35.319971052 +0000 UTC m=+450.971359429" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:35.492294943+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.492363785+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.492363785+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.492444727+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=daLHK 2025-08-13T20:06:35.492444727+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.492444727+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:35.716204932+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EgTk3 2025-08-13T20:06:35.739636144+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="syncing catalog source" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:35.739636144+00:00 stderr F time="2025-08-13T20:06:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=em2ci 2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.083299257+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.290457157+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:36.290457157+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:36.290457157+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:36.290457157+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="catalog update required at 2025-08-13 20:06:36.290062885 +0000 UTC m=+451.941451072" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:36.701448760+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=daLHK 2025-08-13T20:06:36.733992323+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:36.733992323+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.896833492+00:00 stderr F 
time="2025-08-13T20:06:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=em2ci 2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:36.896833492+00:00 stderr F time="2025-08-13T20:06:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="of 1 pods matching 
label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=V1QwJ 2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.085092960+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.283263672+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.283379175+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.283379175+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.283433127+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="checked registry server health" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=XhBeC 2025-08-13T20:06:37.283433127+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.283433127+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:37.885185538+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.885185538+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.885185538+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:37.885185538+00:00 stderr F time="2025-08-13T20:06:37Z" level=info msg="catalog update required at 2025-08-13 20:06:37.884757656 +0000 UTC m=+453.536145863" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" 
level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XhBeC 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.086100799+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.092241975+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=READY" 2025-08-13T20:06:38.092323467+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="resolving sources" id=qJFDI namespace=openshift-marketplace 2025-08-13T20:06:38.092323467+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="checking if subscriptions need update" id=qJFDI namespace=openshift-marketplace 2025-08-13T20:06:38.104610420+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=qJFDI 
namespace=openshift-marketplace 2025-08-13T20:06:38.315227018+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=V1QwJ 2025-08-13T20:06:38.355523894+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.355523894+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.484491041+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.484713547+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.484754199+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.484928834+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=4TwJq 2025-08-13T20:06:38.484971205+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.485004926+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:38.687010868+00:00 stderr F time="2025-08-13T20:06:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.286016762+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:39.286108774+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:39.286108774+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:39.286166416+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=4TwJq 2025-08-13T20:06:39.411929782+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.411929782+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.515309426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.515309426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eGwG5 
2025-08-13T20:06:39.515309426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.515309426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eGwG5 2025-08-13T20:06:39.558727521+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.558727521+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.682668194+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.683063256+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.683152478+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wIhww 
2025-08-13T20:06:39.683388495+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=wIhww 2025-08-13T20:06:39.683499848+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.683588961+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:39.895125296+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.895463175+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.895557448+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:39.896552426+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=k0tK8 2025-08-13T20:06:39.896742382+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 
2025-08-13T20:06:39.896977069+00:00 stderr F time="2025-08-13T20:06:39Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.492854883+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:40.492854883+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:40.492854883+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:40.492854883+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wIhww 2025-08-13T20:06:40.518465107+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.518599201+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.683681214+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.683840249+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.683840249+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.683945572+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=k0tK8 2025-08-13T20:06:40.702434612+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:40.702434612+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3zDkc 
2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:40.910619761+00:00 stderr F time="2025-08-13T20:06:40Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F 
time="2025-08-13T20:06:41Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.091704553+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.683858140+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.684409336+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.684473457+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.684592981+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3zDkc 2025-08-13T20:06:41.684758366+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 
2025-08-13T20:06:41.684909280+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:41.884354688+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.884581335+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.885465550+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.885964304+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v1iQK 2025-08-13T20:06:41.886066947+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:41.886122629+00:00 stderr F time="2025-08-13T20:06:41Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="searching for current pods" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.089097418+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F 
time="2025-08-13T20:06:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.299479610+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:42.890318840+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.890318840+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.890318840+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.890318840+00:00 stderr F time="2025-08-13T20:06:42Z" level=info 
msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=KWBpt 2025-08-13T20:06:42.890383862+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:42.892015309+00:00 stderr F time="2025-08-13T20:06:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.352143611+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:43.352143611+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:43.352143611+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:43.352143611+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=NXE+0 2025-08-13T20:06:43.867567909+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 
2025-08-13T20:06:43.867911889+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.867963541+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.868006242+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=GW6YT 2025-08-13T20:06:43.868038283+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.868068834+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:43.975695169+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:43.975974367+00:00 stderr F time="2025-08-13T20:06:43Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 
2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="of 1 pods matching 
label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GW6YT 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:44.721862303+00:00 stderr F time="2025-08-13T20:06:44Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="checked registry server health" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:45.911116949+00:00 stderr F time="2025-08-13T20:06:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:47.259663453+00:00 stderr F time="2025-08-13T20:06:47Z" level=info msg="resolving sources" id=N6Yij namespace=openshift-monitoring 2025-08-13T20:06:47.259663453+00:00 stderr F time="2025-08-13T20:06:47Z" level=info msg="checking if subscriptions need update" id=N6Yij namespace=openshift-monitoring 2025-08-13T20:06:47.259663453+00:00 stderr F time="2025-08-13T20:06:47Z" level=info msg="resolving sources" id=nuOJf namespace=openshift-operator-lifecycle-manager 2025-08-13T20:06:47.259663453+00:00 stderr F time="2025-08-13T20:06:47Z" level=info msg="checking if subscriptions need update" id=nuOJf namespace=openshift-operator-lifecycle-manager 2025-08-13T20:06:48.517634150+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=N6Yij namespace=openshift-monitoring 2025-08-13T20:06:48.517634150+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="resolving sources" id=UI3Kj namespace=openshift-operators 2025-08-13T20:06:48.517634150+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="checking if subscriptions need update" id=UI3Kj namespace=openshift-operators 2025-08-13T20:06:48.529694646+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:48.529741967+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="evaluating 
current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:48.529741967+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:48.530030396+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FL2Zc 2025-08-13T20:06:48.530030396+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:48.530030396+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:48.530299093+00:00 stderr F time="2025-08-13T20:06:48Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=nuOJf namespace=openshift-operator-lifecycle-manager 2025-08-13T20:06:49.170617092+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=UI3Kj namespace=openshift-operators 2025-08-13T20:06:49.183381657+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" 
level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.183552712+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:49.186333622+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=4lZcL 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.187175386+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918537555+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918658748+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918658748+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918703399+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace health=true id=bcIBF 2025-08-13T20:06:49.918703399+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.918703399+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:49.950863622+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=READY" 2025-08-13T20:06:49.950863622+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="resolving sources" id=60tmm namespace=openshift-marketplace 2025-08-13T20:06:49.950863622+00:00 stderr F time="2025-08-13T20:06:49Z" level=info msg="checking if subscriptions need update" id=60tmm namespace=openshift-marketplace 2025-08-13T20:06:50.513729180+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=60tmm namespace=openshift-marketplace 2025-08-13T20:06:50.655559256+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:50.656228175+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:50.656228175+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:50.656228175+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wnVLL 2025-08-13T20:06:50.708972438+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:50.708972438+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:50.708972438+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:50.708972438+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=bcIBF 2025-08-13T20:06:50.714640670+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.714640670+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.797510826+00:00 stderr F 
time="2025-08-13T20:06:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.797510826+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.805644769+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.805963649+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.806042481+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.806094592+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=TdbPT 2025-08-13T20:06:50.806133023+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.810715425+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.832434047+00:00 stderr F 
time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.832434047+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:50.901265181+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.901662052+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.902011852+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:50.902231359+00:00 stderr F time="2025-08-13T20:06:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TdbPT 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Kf0TQ 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" 
level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.107906076+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.133741756+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.133741756+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="registry 
state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.136862036+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.146006978+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.149216880+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.149323193+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.149441806+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=+VsiI 2025-08-13T20:06:51.149549129+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.149635552+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.312652676+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.312652676+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.312652676+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.312652676+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=o0mEY 2025-08-13T20:06:51.509337165+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.525533099+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:51.525657003+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+VsiI 
2025-08-13T20:06:51.526219469+00:00 stderr F time="2025-08-13T20:06:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+VsiI 2025-08-13T20:06:56.134097751+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.134097751+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.376976224+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.378197979+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.378197979+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.380846025+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=dA2JE 2025-08-13T20:06:56.381006220+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="registry state good" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.381068022+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.429337665+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.429337665+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.429337665+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.429337665+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=dA2JE 2025-08-13T20:06:56.490907001+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.490907001+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=6Fxp2 2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.501915416+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.539867094+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.539867094+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6Fxp2 
2025-08-13T20:06:56.539867094+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.539867094+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6Fxp2 2025-08-13T20:06:56.904147439+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=Jgp93 namespace=default 2025-08-13T20:06:56.904147439+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=Jgp93 namespace=default 2025-08-13T20:06:56.904147439+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=8627U namespace=hostpath-provisioner 2025-08-13T20:06:56.904147439+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=8627U namespace=hostpath-provisioner 2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace default" id=Jgp93 namespace=default 2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=GAzm9 namespace=kube-node-lease 2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=GAzm9 namespace=kube-node-lease 2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=8627U namespace=hostpath-provisioner 2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info 
msg="resolving sources" id=UgcrS namespace=kube-public 2025-08-13T20:06:56.910020227+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=UgcrS namespace=kube-public 2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=GAzm9 namespace=kube-node-lease 2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace kube-public" id=UgcrS namespace=kube-public 2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=dUTGp namespace=kube-system 2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=dUTGp namespace=kube-system 2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=YBTPP namespace=openshift 2025-08-13T20:06:56.918897792+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=YBTPP namespace=openshift 2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace openshift" id=YBTPP namespace=openshift 2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=ScKe9 namespace=openshift-apiserver 2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=ScKe9 namespace=openshift-apiserver 2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace kube-system" id=dUTGp namespace=kube-system 2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=1r5b5 namespace=openshift-apiserver-operator 
2025-08-13T20:06:56.930916906+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=1r5b5 namespace=openshift-apiserver-operator 2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=1r5b5 namespace=openshift-apiserver-operator 2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=vxc7b namespace=openshift-authentication 2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=vxc7b namespace=openshift-authentication 2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=ScKe9 namespace=openshift-apiserver 2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="resolving sources" id=D5zUy namespace=openshift-authentication-operator 2025-08-13T20:06:56.938095442+00:00 stderr F time="2025-08-13T20:06:56Z" level=info msg="checking if subscriptions need update" id=D5zUy namespace=openshift-authentication-operator 2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=vxc7b namespace=openshift-authentication 2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=GH+4T namespace=openshift-cloud-network-config-controller 2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=GH+4T namespace=openshift-cloud-network-config-controller 2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=D5zUy 
namespace=openshift-authentication-operator 2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=EFPAp namespace=openshift-cloud-platform-infra 2025-08-13T20:06:57.052847822+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=EFPAp namespace=openshift-cloud-platform-infra 2025-08-13T20:06:57.103426132+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=GH+4T namespace=openshift-cloud-network-config-controller 2025-08-13T20:06:57.103552366+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=jSDho namespace=openshift-cluster-machine-approver 2025-08-13T20:06:57.103587677+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=jSDho namespace=openshift-cluster-machine-approver 2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=EFPAp namespace=openshift-cloud-platform-infra 2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=ivWJs namespace=openshift-cluster-samples-operator 2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=ivWJs namespace=openshift-cluster-samples-operator 2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=jSDho namespace=openshift-cluster-machine-approver 2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=fKmPg namespace=openshift-cluster-storage-operator 2025-08-13T20:06:57.877818125+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if 
subscriptions need update" id=fKmPg namespace=openshift-cluster-storage-operator 2025-08-13T20:06:57.877918478+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=ivWJs namespace=openshift-cluster-samples-operator 2025-08-13T20:06:57.877918478+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="resolving sources" id=RF8fd namespace=openshift-cluster-version 2025-08-13T20:06:57.877918478+00:00 stderr F time="2025-08-13T20:06:57Z" level=info msg="checking if subscriptions need update" id=RF8fd namespace=openshift-cluster-version 2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=RF8fd namespace=openshift-cluster-version 2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="resolving sources" id=YcZnZ namespace=openshift-config 2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="checking if subscriptions need update" id=YcZnZ namespace=openshift-config 2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=fKmPg namespace=openshift-cluster-storage-operator 2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="resolving sources" id=fnLg3 namespace=openshift-config-managed 2025-08-13T20:06:58.536983374+00:00 stderr F time="2025-08-13T20:06:58Z" level=info msg="checking if subscriptions need update" id=fnLg3 namespace=openshift-config-managed 2025-08-13T20:06:59.170645801+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-config" id=YcZnZ namespace=openshift-config 2025-08-13T20:06:59.170830256+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=fJd8l 
namespace=openshift-config-operator 2025-08-13T20:06:59.170918399+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=fJd8l namespace=openshift-config-operator 2025-08-13T20:06:59.175838440+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.175838440+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.181687667+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=fnLg3 namespace=openshift-config-managed 2025-08-13T20:06:59.181903334+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=R3oAq namespace=openshift-console 2025-08-13T20:06:59.181993026+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=R3oAq namespace=openshift-console 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=qgbwR 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-console" id=R3oAq namespace=openshift-console 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=88gS+ namespace=openshift-console-operator 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=88gS+ namespace=openshift-console-operator 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=fJd8l namespace=openshift-config-operator 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=jVVWB namespace=openshift-console-user-settings 2025-08-13T20:06:59.230045014+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=jVVWB namespace=openshift-console-user-settings 2025-08-13T20:06:59.247192736+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=88gS+ namespace=openshift-console-operator 
2025-08-13T20:06:59.247192736+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=7ABnv namespace=openshift-controller-manager 2025-08-13T20:06:59.247192736+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=7ABnv namespace=openshift-controller-manager 2025-08-13T20:06:59.272742758+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.272742758+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.272742758+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.272742758+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qgbwR 2025-08-13T20:06:59.302259294+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=jVVWB namespace=openshift-console-user-settings 2025-08-13T20:06:59.302259294+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=bozjR namespace=openshift-controller-manager-operator 2025-08-13T20:06:59.302259294+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if 
subscriptions need update" id=bozjR namespace=openshift-controller-manager-operator 2025-08-13T20:06:59.506092188+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=7ABnv namespace=openshift-controller-manager 2025-08-13T20:06:59.506092188+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=Bio85 namespace=openshift-dns 2025-08-13T20:06:59.506092188+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=Bio85 namespace=openshift-dns 2025-08-13T20:06:59.703075856+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=bozjR namespace=openshift-controller-manager-operator 2025-08-13T20:06:59.703192020+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=Af1UN namespace=openshift-dns-operator 2025-08-13T20:06:59.703232731+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=Af1UN namespace=openshift-dns-operator 2025-08-13T20:06:59.902379050+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=Bio85 namespace=openshift-dns 2025-08-13T20:06:59.902379050+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="resolving sources" id=fddqw namespace=openshift-etcd 2025-08-13T20:06:59.902379050+00:00 stderr F time="2025-08-13T20:06:59Z" level=info msg="checking if subscriptions need update" id=fddqw namespace=openshift-etcd 2025-08-13T20:07:00.096657181+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.096657181+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.098826673+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.099016628+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.099061860+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.099155562+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Nrb/Z 2025-08-13T20:07:00.099232584+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.099268175+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.103694382+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=Af1UN namespace=openshift-dns-operator 2025-08-13T20:07:00.103694382+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=uuetg namespace=openshift-etcd-operator 2025-08-13T20:07:00.103694382+00:00 
stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=uuetg namespace=openshift-etcd-operator 2025-08-13T20:07:00.138364416+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.138364416+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.138364416+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.138364416+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Nrb/Z 2025-08-13T20:07:00.319853490+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=fddqw namespace=openshift-etcd 2025-08-13T20:07:00.319853490+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=jF/Mz namespace=openshift-host-network 2025-08-13T20:07:00.319853490+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=jF/Mz namespace=openshift-host-network 2025-08-13T20:07:00.503281089+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=uuetg 
namespace=openshift-etcd-operator 2025-08-13T20:07:00.503375092+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=kLxN2 namespace=openshift-image-registry 2025-08-13T20:07:00.503409213+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=kLxN2 namespace=openshift-image-registry 2025-08-13T20:07:00.700257756+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=jF/Mz namespace=openshift-host-network 2025-08-13T20:07:00.700257756+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=+/qN1 namespace=openshift-infra 2025-08-13T20:07:00.700257756+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=+/qN1 namespace=openshift-infra 2025-08-13T20:07:00.935907223+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=kLxN2 namespace=openshift-image-registry 2025-08-13T20:07:00.936043507+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="resolving sources" id=Ty/lT namespace=openshift-ingress 2025-08-13T20:07:00.936077668+00:00 stderr F time="2025-08-13T20:07:00Z" level=info msg="checking if subscriptions need update" id=Ty/lT namespace=openshift-ingress 2025-08-13T20:07:01.131677856+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=+/qN1 namespace=openshift-infra 2025-08-13T20:07:01.131677856+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=cRYWK namespace=openshift-ingress-canary 2025-08-13T20:07:01.131677856+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=cRYWK namespace=openshift-ingress-canary 2025-08-13T20:07:01.301361841+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in 
namespace openshift-ingress" id=Ty/lT namespace=openshift-ingress 2025-08-13T20:07:01.301466174+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=tkPdT namespace=openshift-ingress-operator 2025-08-13T20:07:01.301499775+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=tkPdT namespace=openshift-ingress-operator 2025-08-13T20:07:01.502206029+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=cRYWK namespace=openshift-ingress-canary 2025-08-13T20:07:01.502206029+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=E3P2t namespace=openshift-kni-infra 2025-08-13T20:07:01.502206029+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=E3P2t namespace=openshift-kni-infra 2025-08-13T20:07:01.706895048+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=tkPdT namespace=openshift-ingress-operator 2025-08-13T20:07:01.706895048+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=X/Qr9 namespace=openshift-kube-apiserver 2025-08-13T20:07:01.706895048+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=X/Qr9 namespace=openshift-kube-apiserver 2025-08-13T20:07:01.909935219+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=E3P2t namespace=openshift-kni-infra 2025-08-13T20:07:01.910360421+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="resolving sources" id=EvcdC namespace=openshift-kube-apiserver-operator 2025-08-13T20:07:01.910960379+00:00 stderr F time="2025-08-13T20:07:01Z" level=info msg="checking if subscriptions need update" id=EvcdC namespace=openshift-kube-apiserver-operator 
2025-08-13T20:07:02.428272220+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=EvcdC namespace=openshift-kube-apiserver-operator 2025-08-13T20:07:02.428531668+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="resolving sources" id=MfPwX namespace=openshift-kube-controller-manager 2025-08-13T20:07:02.437184656+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="checking if subscriptions need update" id=MfPwX namespace=openshift-kube-controller-manager 2025-08-13T20:07:02.438742051+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=X/Qr9 namespace=openshift-kube-apiserver 2025-08-13T20:07:02.438742051+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="resolving sources" id=/r0yC namespace=openshift-kube-controller-manager-operator 2025-08-13T20:07:02.438742051+00:00 stderr F time="2025-08-13T20:07:02Z" level=info msg="checking if subscriptions need update" id=/r0yC namespace=openshift-kube-controller-manager-operator 2025-08-13T20:07:03.884353897+00:00 stderr F time="2025-08-13T20:07:03Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:03.884477280+00:00 stderr F time="2025-08-13T20:07:03Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.452600719+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=MfPwX namespace=openshift-kube-controller-manager 2025-08-13T20:07:04.452600719+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=M48GQ namespace=openshift-kube-scheduler 2025-08-13T20:07:04.452600719+00:00 stderr F time="2025-08-13T20:07:04Z" level=info 
msg="checking if subscriptions need update" id=M48GQ namespace=openshift-kube-scheduler 2025-08-13T20:07:04.452815165+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=/r0yC namespace=openshift-kube-controller-manager-operator 2025-08-13T20:07:04.452930688+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=MhMX5 namespace=openshift-kube-scheduler-operator 2025-08-13T20:07:04.452972260+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=MhMX5 namespace=openshift-kube-scheduler-operator 2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OmDfI 2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=OmDfI 2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
id=OmDfI
2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.519223669+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=M48GQ namespace=openshift-kube-scheduler
2025-08-13T20:07:04.519269010+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=/ZiT9 namespace=openshift-kube-storage-version-migrator
2025-08-13T20:07:04.519269010+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=/ZiT9 namespace=openshift-kube-storage-version-migrator
2025-08-13T20:07:04.521825534+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=MhMX5 namespace=openshift-kube-scheduler-operator
2025-08-13T20:07:04.521825534+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=HWQsE namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T20:07:04.521825534+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=HWQsE namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=HWQsE namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=1p/uA namespace=openshift-machine-api
2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=1p/uA namespace=openshift-machine-api
2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=/ZiT9 namespace=openshift-kube-storage-version-migrator
2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=7tI/m namespace=openshift-machine-config-operator
2025-08-13T20:07:04.550900997+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=7tI/m namespace=openshift-machine-config-operator
2025-08-13T20:07:04.574843154+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.574843154+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=7tI/m namespace=openshift-machine-config-operator
2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=iPKzd namespace=openshift-marketplace
2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=iPKzd namespace=openshift-marketplace
2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=1p/uA namespace=openshift-machine-api
2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=KEqPT namespace=openshift-monitoring
2025-08-13T20:07:04.611680340+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=KEqPT namespace=openshift-monitoring
2025-08-13T20:07:04.612855994+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.616764686+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.616764686+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.616764686+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OmDfI
2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=THIlK
2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.642194045+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=iPKzd namespace=openshift-marketplace
2025-08-13T20:07:04.642263837+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=XK4U+ namespace=openshift-multus
2025-08-13T20:07:04.642263837+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=XK4U+ namespace=openshift-multus
2025-08-13T20:07:04.643818171+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=KEqPT namespace=openshift-monitoring
2025-08-13T20:07:04.643818171+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=sfNTf namespace=openshift-network-diagnostics
2025-08-13T20:07:04.643818171+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=sfNTf namespace=openshift-network-diagnostics
2025-08-13T20:07:04.653687764+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=XK4U+ namespace=openshift-multus
2025-08-13T20:07:04.653687764+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=z0TgY namespace=openshift-network-node-identity
2025-08-13T20:07:04.653687764+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=z0TgY namespace=openshift-network-node-identity
2025-08-13T20:07:04.662074755+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.662074755+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.662074755+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.675230562+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=THIlK
2025-08-13T20:07:04.720861450+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=sfNTf namespace=openshift-network-diagnostics
2025-08-13T20:07:04.723694311+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=UXPNK namespace=openshift-network-operator
2025-08-13T20:07:04.723694311+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=UXPNK namespace=openshift-network-operator
2025-08-13T20:07:04.923082668+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=z0TgY namespace=openshift-network-node-identity
2025-08-13T20:07:04.923082668+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="resolving sources" id=+Tnz6 namespace=openshift-node
2025-08-13T20:07:04.923082668+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="checking if subscriptions need update" id=+Tnz6 namespace=openshift-node
2025-08-13T20:07:04.998404328+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df
2025-08-13T20:07:04.998404328+00:00 stderr F time="2025-08-13T20:07:04Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df
2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df
2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SM2df
2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SM2df
2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=SM2df
2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df
2025-08-13T20:07:05.017358401+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df
2025-08-13T20:07:05.037186670+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df
2025-08-13T20:07:05.037311213+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SM2df
2025-08-13T20:07:05.037311213+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=SM2df
2025-08-13T20:07:05.037565210+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=SM2df
2025-08-13T20:07:05.098929900+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ
2025-08-13T20:07:05.098929900+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ
2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=UXPNK namespace=openshift-network-operator
2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="resolving sources" id=EF1d4 namespace=openshift-nutanix-infra
2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checking if subscriptions need update" id=EF1d4 namespace=openshift-nutanix-infra
2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ
2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hUGCZ
2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hUGCZ
2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=hUGCZ
2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ
2025-08-13T20:07:05.122251278+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ
2025-08-13T20:07:05.173327193+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ
2025-08-13T20:07:05.194309614+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hUGCZ
2025-08-13T20:07:05.194309614+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hUGCZ
2025-08-13T20:07:05.194476909+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hUGCZ
2025-08-13T20:07:05.200168582+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn
2025-08-13T20:07:05.200168582+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn
2025-08-13T20:07:05.206701680+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn
2025-08-13T20:07:05.206727580+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EM2Dn
2025-08-13T20:07:05.206740501+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EM2Dn
2025-08-13T20:07:05.206753011+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=EM2Dn
2025-08-13T20:07:05.206763421+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn
2025-08-13T20:07:05.206814553+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn
2025-08-13T20:07:05.304568886+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="No subscriptions were found in namespace openshift-node" id=+Tnz6 namespace=openshift-node
2025-08-13T20:07:05.304568886+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="resolving sources" id=90LR/ namespace=openshift-oauth-apiserver
2025-08-13T20:07:05.304568886+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checking if subscriptions need update" id=90LR/ namespace=openshift-oauth-apiserver
2025-08-13T20:07:05.324980481+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn
2025-08-13T20:07:05.324980481+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EM2Dn
2025-08-13T20:07:05.324980481+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=EM2Dn
2025-08-13T20:07:05.324980481+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=EM2Dn
2025-08-13T20:07:05.495970383+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo
2025-08-13T20:07:05.495970383+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo
2025-08-13T20:07:05.507549825+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=EF1d4 namespace=openshift-nutanix-infra
2025-08-13T20:07:05.507549825+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="resolving sources" id=bzNXl namespace=openshift-openstack-infra
2025-08-13T20:07:05.507549825+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checking if subscriptions need update" id=bzNXl namespace=openshift-openstack-infra
2025-08-13T20:07:05.527593480+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo
2025-08-13T20:07:05.527648152+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=cmZbo
2025-08-13T20:07:05.527648152+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=cmZbo
2025-08-13T20:07:05.527713273+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=cmZbo
2025-08-13T20:07:05.527713273+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo
2025-08-13T20:07:05.527713273+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo
2025-08-13T20:07:05.720449489+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv
2025-08-13T20:07:05.720449489+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv
2025-08-13T20:07:05.904604799+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=90LR/ namespace=openshift-oauth-apiserver
2025-08-13T20:07:05.904604799+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="resolving sources" id=7fSuO namespace=openshift-operator-lifecycle-manager
2025-08-13T20:07:05.904604799+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checking if subscriptions need update" id=7fSuO namespace=openshift-operator-lifecycle-manager
2025-08-13T20:07:05.927625519+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv
2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=G2Jyv
2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=G2Jyv
2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=G2Jyv
2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv
2025-08-13T20:07:05.927741673+00:00 stderr F time="2025-08-13T20:07:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv
2025-08-13T20:07:06.104661205+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=bzNXl namespace=openshift-openstack-infra
2025-08-13T20:07:06.104661205+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=7AwXH namespace=openshift-operators
2025-08-13T20:07:06.104661205+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=7AwXH namespace=openshift-operators
2025-08-13T20:07:06.128517619+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo
2025-08-13T20:07:06.128517619+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=cmZbo
2025-08-13T20:07:06.128517619+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=cmZbo
2025-08-13T20:07:06.303616518+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=7fSuO namespace=openshift-operator-lifecycle-manager
2025-08-13T20:07:06.303616518+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=Fif+h namespace=openshift-ovirt-infra
2025-08-13T20:07:06.303616518+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=Fif+h namespace=openshift-ovirt-infra
2025-08-13T20:07:06.502120210+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=7AwXH namespace=openshift-operators
2025-08-13T20:07:06.502120210+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=8/ggq namespace=openshift-ovn-kubernetes
2025-08-13T20:07:06.502120210+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=8/ggq namespace=openshift-ovn-kubernetes
2025-08-13T20:07:06.552257327+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo
2025-08-13T20:07:06.552257327+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=cmZbo
2025-08-13T20:07:06.556577681+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr
2025-08-13T20:07:06.556577681+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr
2025-08-13T20:07:06.722922060+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=Fif+h namespace=openshift-ovirt-infra
2025-08-13T20:07:06.722922060+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=KRghI namespace=openshift-route-controller-manager
2025-08-13T20:07:06.722922060+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=KRghI namespace=openshift-route-controller-manager
2025-08-13T20:07:06.735860501+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv
2025-08-13T20:07:06.736024806+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=G2Jyv
2025-08-13T20:07:06.736024806+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=G2Jyv
2025-08-13T20:07:06.736105458+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=G2Jyv
2025-08-13T20:07:06.903362294+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=8/ggq namespace=openshift-ovn-kubernetes
2025-08-13T20:07:06.903362294+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="resolving sources" id=HgYn+ namespace=openshift-service-ca
2025-08-13T20:07:06.903425696+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checking if subscriptions need update" id=HgYn+ namespace=openshift-service-ca
2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr
2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R/FBr
2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R/FBr
2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=R/FBr
2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr
2025-08-13T20:07:06.933280412+00:00 stderr F time="2025-08-13T20:07:06Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr
2025-08-13T20:07:07.102216185+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=KRghI namespace=openshift-route-controller-manager
2025-08-13T20:07:07.102216185+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="resolving sources" id=5YpIl namespace=openshift-service-ca-operator
2025-08-13T20:07:07.102216185+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="checking if subscriptions need update" id=5YpIl namespace=openshift-service-ca-operator
2025-08-13T20:07:07.310395244+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=HgYn+ namespace=openshift-service-ca
2025-08-13T20:07:07.310395244+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="resolving sources" id=PVkqD namespace=openshift-user-workload-monitoring
2025-08-13T20:07:07.310395244+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="checking if subscriptions need update" id=PVkqD namespace=openshift-user-workload-monitoring
2025-08-13T20:07:07.429827448+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr
2025-08-13T20:07:07.433853584+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R/FBr
2025-08-13T20:07:07.433853584+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R/FBr
2025-08-13T20:07:07.502697127+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=5YpIl namespace=openshift-service-ca-operator
2025-08-13T20:07:07.502697127+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="resolving sources" id=t5Uw+ namespace=openshift-vsphere-infra
2025-08-13T20:07:07.502697127+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="checking if subscriptions need update" id=t5Uw+ namespace=openshift-vsphere-infra
2025-08-13T20:07:07.523702840+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr
2025-08-13T20:07:07.523702840+00:00 stderr F time="2025-08-13T20:07:07Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R/FBr
2025-08-13T20:07:08.166985893+00:00 stderr F time="2025-08-13T20:07:08Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=t5Uw+ namespace=openshift-vsphere-infra
2025-08-13T20:07:08.167210020+00:00 stderr F time="2025-08-13T20:07:08Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=PVkqD namespace=openshift-user-workload-monitoring
2025-08-13T20:07:09.012897627+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4
2025-08-13T20:07:09.012897627+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4
2025-08-13T20:07:09.026530958+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4
2025-08-13T20:07:09.026739694+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=oO6F4
2025-08-13T20:07:09.026849607+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=oO6F4
2025-08-13T20:07:09.026943929+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=oO6F4
2025-08-13T20:07:09.026994031+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4
2025-08-13T20:07:09.027029852+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4
2025-08-13T20:07:09.098257214+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4
2025-08-13T20:07:09.098257214+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=oO6F4
2025-08-13T20:07:09.098257214+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=oO6F4
2025-08-13T20:07:09.098257214+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=oO6F4
2025-08-13T20:07:09.295143979+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr
2025-08-13T20:07:09.295143979+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr
2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr
2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=paOHr
2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=paOHr
2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=paOHr
2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr
2025-08-13T20:07:09.318319843+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensuring registry server"
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.352439852+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.352667768+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.352715280+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.352948736+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=paOHr 2025-08-13T20:07:09.544717705+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.544928851+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info 
msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.566983993+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pXP5D 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.595940763+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace 
id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.606969489+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927146228+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927937911+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927937911+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927937911+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:09.927937911+00:00 stderr F time="2025-08-13T20:07:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=6G8MG 2025-08-13T20:07:17.402023609+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.402023609+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.412219631+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.445047393+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.445047393+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.445047393+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.471673926+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.471673926+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=qjygH 2025-08-13T20:07:17.540994884+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.540994884+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.545500963+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.564189038+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.564189038+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=eF3ux 
2025-08-13T20:07:17.564189038+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.569925323+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:17.569925323+00:00 stderr F time="2025-08-13T20:07:17Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=eF3ux 2025-08-13T20:07:19.897743354+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.901121591+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.962722627+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.963857450+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.963857450+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.963857450+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=XXh2y 2025-08-13T20:07:19.963908401+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:19.963908401+00:00 stderr F time="2025-08-13T20:07:19Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.109529166+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.109769743+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.109941668+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.110055541+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=XXh2y 2025-08-13T20:07:20.363563220+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.363563220+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383363787+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383672586+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383724398+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383768529+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=xkuDP 2025-08-13T20:07:20.383921013+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.383967415+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.488287136+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.488488451+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.488488451+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.488488451+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=xkuDP 2025-08-13T20:07:20.494112093+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.494289488+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.521108847+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.521445106+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.521614701+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.521707434+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=SeWd4 2025-08-13T20:07:20.522132546+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.522691662+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.539406671+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.539712020+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.565002835+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.565767577+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.566136148+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.566481028+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=mvFP1 2025-08-13T20:07:20.566576080+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.566707794+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.650509956+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.650744512+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.650846445+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.650978819+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=SeWd4 2025-08-13T20:07:20.822077504+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.822822676+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.822964710+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:20.823405262+00:00 stderr F time="2025-08-13T20:07:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mvFP1 2025-08-13T20:07:21.118466272+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.118579555+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.143076477+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.143546361+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.143663764+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.143761227+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=TpWB0 2025-08-13T20:07:21.143910861+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.144029865+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.222951427+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.223850313+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.223993457+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.224400239+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.224472851+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=TpWB0 2025-08-13T20:07:21.224765519+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.224996136+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.258483186+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.258935579+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kdqU8 
2025-08-13T20:07:21.259019062+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.259229778+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=kdqU8 2025-08-13T20:07:21.259333681+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.259418713+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.513392535+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.513501248+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.513501248+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.513692293+00:00 stderr F 
time="2025-08-13T20:07:21Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:21.513692293+00:00 stderr F time="2025-08-13T20:07:21Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=kdqU8 2025-08-13T20:07:26.432005016+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.432005016+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=BKWN2 
2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.444967928+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.463701255+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.463701255+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.463701255+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.463750046+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.463760816+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=BKWN2 2025-08-13T20:07:26.537822570+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
id=+5tYT 2025-08-13T20:07:26.537822570+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.545020466+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.545276254+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.545319895+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.545360856+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=+5tYT 2025-08-13T20:07:26.545391007+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.545420538+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 
2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:26.581111321+00:00 stderr F time="2025-08-13T20:07:26Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=+5tYT 2025-08-13T20:07:34.621979639+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.621979639+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=rpx5y 2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.626902090+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.736871293+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.736871293+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.736871293+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:34.736871293+00:00 stderr F time="2025-08-13T20:07:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rpx5y 2025-08-13T20:07:35.201738891+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.202012579+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=QgRXm 2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="registry state good" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.212002335+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.227052937+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.227052937+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.227052937+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:35.227093568+00:00 stderr F time="2025-08-13T20:07:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QgRXm 2025-08-13T20:07:39.803162468+00:00 stderr F time="2025-08-13T20:07:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:39.803162468+00:00 stderr F time="2025-08-13T20:07:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.284657233+00:00 stderr F time="2025-08-13T20:07:40Z" level=info 
msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.284698754+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.284708834+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.284718375+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=kwBL/ 2025-08-13T20:07:40.284728345+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.284728345+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.489997690+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.489997690+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 
current-pod.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.489997690+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.534016432+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.534016432+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kwBL/ 2025-08-13T20:07:40.534107155+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.534107155+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct 
images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.570955361+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.572495475+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=CQ/MI 2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.585762246+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.590760549+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.591497060+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.591582493+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.601486417+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 
2025-08-13T20:07:40.601486417+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=A24F2 2025-08-13T20:07:40.601486417+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.601486417+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.608217870+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=GGhoG 2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="registry state good" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.610414963+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.610990589+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.610990589+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.610990589+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.611012370+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=CQ/MI 2025-08-13T20:07:40.890683498+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG 2025-08-13T20:07:40.891013548+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GGhoG 
2025-08-13T20:07:40.891013548+00:00 stderr F time="2025-08-13T20:07:40Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:41.092128714+00:00 stderr F time="2025-08-13T20:07:41Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:41.092128714+00:00 stderr F time="2025-08-13T20:07:41Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GGhoG
2025-08-13T20:07:42.210912870+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.210912870+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.235076863+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.238390857+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.238522041+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.238632754+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=GWpeZ
2025-08-13T20:07:42.238742578+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.238829830+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.323922220+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.323922220+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.323922220+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.323922220+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=GWpeZ
2025-08-13T20:07:42.328618204+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.328618204+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.331729294+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.332459115+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.332459115+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.332459115+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=kRsnw
2025-08-13T20:07:42.332483155+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.332483155+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.361420195+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK
2025-08-13T20:07:42.361420195+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK
2025-08-13T20:07:42.363117934+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.363152725+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.363162755+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.363394412+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kRsnw
2025-08-13T20:07:42.507196204+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK
2025-08-13T20:07:42.507257866+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uUuJK
2025-08-13T20:07:42.507267827+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uUuJK
2025-08-13T20:07:42.507277557+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=uUuJK
2025-08-13T20:07:42.507287257+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK
2025-08-13T20:07:42.507287257+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK
2025-08-13T20:07:42.615342925+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3
2025-08-13T20:07:42.615342925+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3
2025-08-13T20:07:42.892730938+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3
2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jnpU3
2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jnpU3
2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=jnpU3
2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3
2025-08-13T20:07:42.892986045+00:00 stderr F time="2025-08-13T20:07:42Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3
2025-08-13T20:07:43.090820408+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK
2025-08-13T20:07:43.091066445+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uUuJK
2025-08-13T20:07:43.091110296+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uUuJK
2025-08-13T20:07:43.091269920+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uUuJK
2025-08-13T20:07:43.492354760+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3
2025-08-13T20:07:43.492641138+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jnpU3
2025-08-13T20:07:43.492689280+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jnpU3
2025-08-13T20:07:43.492920576+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3
2025-08-13T20:07:43.492966527+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jnpU3
2025-08-13T20:07:43.493119482+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK
2025-08-13T20:07:43.493202214+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK
2025-08-13T20:07:43.690738548+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK
2025-08-13T20:07:43.691115039+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=zL7jK
2025-08-13T20:07:43.691176620+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=zL7jK
2025-08-13T20:07:43.691215671+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=zL7jK
2025-08-13T20:07:43.691245762+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK
2025-08-13T20:07:43.691275603+00:00 stderr F time="2025-08-13T20:07:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK
2025-08-13T20:07:44.090946862+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK
2025-08-13T20:07:44.091207440+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=zL7jK
2025-08-13T20:07:44.091248581+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=zL7jK
2025-08-13T20:07:44.091381265+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK
2025-08-13T20:07:44.091425996+00:00 stderr F time="2025-08-13T20:07:44Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=zL7jK
2025-08-13T20:07:57.427919925+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef
2025-08-13T20:07:57.427919925+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef
2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef
2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cr9Ef
2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cr9Ef
2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=cr9Ef
2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef
2025-08-13T20:07:57.442174744+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef
2025-08-13T20:07:57.465447051+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef
2025-08-13T20:07:57.465447051+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cr9Ef
2025-08-13T20:07:57.465447051+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cr9Ef
2025-08-13T20:07:57.465447051+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cr9Ef
2025-08-13T20:07:57.616078620+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o
2025-08-13T20:07:57.616078620+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o
2025-08-13T20:07:57.621872686+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o
2025-08-13T20:07:57.622178515+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/W5/o
2025-08-13T20:07:57.622240697+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/W5/o
2025-08-13T20:07:57.622283578+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=/W5/o
2025-08-13T20:07:57.622338369+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o
2025-08-13T20:07:57.622556936+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o
2025-08-13T20:07:57.635754994+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o
2025-08-13T20:07:57.636161676+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/W5/o
2025-08-13T20:07:57.636214817+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/W5/o
2025-08-13T20:07:57.661724129+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o
2025-08-13T20:07:57.661833862+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/W5/o
2025-08-13T20:07:57.661948955+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T
2025-08-13T20:07:57.661995626+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T
2025-08-13T20:07:57.669732978+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T
2025-08-13T20:07:57.669973195+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZaL6T
2025-08-13T20:07:57.670018046+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZaL6T
2025-08-13T20:07:57.670056607+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=ZaL6T
2025-08-13T20:07:57.670089528+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T
2025-08-13T20:07:57.670119569+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T
2025-08-13T20:07:57.686739826+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T
2025-08-13T20:07:57.687120957+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZaL6T
2025-08-13T20:07:57.687190539+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ZaL6T
2025-08-13T20:07:57.691604545+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T
2025-08-13T20:07:57.691604545+00:00 stderr F time="2025-08-13T20:07:57Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ZaL6T
2025-08-13T20:07:59.719443285+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR
2025-08-13T20:07:59.719556459+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR
2025-08-13T20:07:59.730188293+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR
2025-08-13T20:07:59.730188293+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/d3LR
2025-08-13T20:07:59.733711424+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/d3LR
2025-08-13T20:07:59.733711424+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=/d3LR
2025-08-13T20:07:59.733711424+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR
2025-08-13T20:07:59.733711424+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR
2025-08-13T20:07:59.745876913+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR
2025-08-13T20:07:59.746115750+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/d3LR
2025-08-13T20:07:59.746156171+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=/d3LR
2025-08-13T20:07:59.746274225+00:00 stderr F time="2025-08-13T20:07:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=/d3LR
2025-08-13T20:08:00.174554403+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl
2025-08-13T20:08:00.174687257+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl
2025-08-13T20:08:00.179552996+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl
2025-08-13T20:08:00.179695070+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7VIOl
2025-08-13T20:08:00.179695070+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7VIOl
2025-08-13T20:08:00.179695070+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=7VIOl
2025-08-13T20:08:00.179719591+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl
2025-08-13T20:08:00.179719591+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl
2025-08-13T20:08:00.237632841+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl
2025-08-13T20:08:00.237918690+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7VIOl
2025-08-13T20:08:00.238012702+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7VIOl
2025-08-13T20:08:00.238112515+00:00 stderr F time="2025-08-13T20:08:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7VIOl
2025-08-13T20:08:01.069850143+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo
2025-08-13T20:08:01.069850143+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo
2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo
2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SOnQo
2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SOnQo
2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=SOnQo
2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo
2025-08-13T20:08:01.096753804+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo
2025-08-13T20:08:01.120543376+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo
2025-08-13T20:08:01.120871825+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SOnQo
2025-08-13T20:08:01.120973828+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SOnQo
2025-08-13T20:08:01.121166044+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo
2025-08-13T20:08:01.121215975+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SOnQo
2025-08-13T20:08:01.121343809+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl
2025-08-13T20:08:01.121392790+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl
2025-08-13T20:08:01.136554085+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl
2025-08-13T20:08:01.136739570+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yqVMl
2025-08-13T20:08:01.136851224+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yqVMl
2025-08-13T20:08:01.136933486+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=yqVMl
2025-08-13T20:08:01.136987037+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl
2025-08-13T20:08:01.137161492+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl
2025-08-13T20:08:01.167616776+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl
2025-08-13T20:08:01.167848692+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yqVMl
2025-08-13T20:08:01.167848692+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=yqVMl
2025-08-13T20:08:01.176215862+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl
2025-08-13T20:08:01.176215862+00:00 stderr F time="2025-08-13T20:08:01Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=yqVMl
2025-08-13T20:08:04.740190434+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ
2025-08-13T20:08:04.740190434+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ
2025-08-13T20:08:04.744980792+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ
2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2tQSQ
2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2tQSQ
2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=2tQSQ
2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ
2025-08-13T20:08:04.745589859+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ
2025-08-13T20:08:04.792837464+00:00
stderr F time="2025-08-13T20:08:04Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.793194344+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.793234185+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.793402310+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:04.793441461+00:00 stderr F time="2025-08-13T20:08:04Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=2tQSQ 2025-08-13T20:08:05.227830806+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.227830806+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr 
F time="2025-08-13T20:08:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.239376377+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258361051+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258426143+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258436623+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct 
images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258636439+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:05.258636439+00:00 stderr F time="2025-08-13T20:08:05Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1S6Hl 2025-08-13T20:08:34.569230327+00:00 stderr F time="2025-08-13T20:08:34Z" level=error msg="Unable to retrieve cluster operator: Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-catalog\": dial tcp 10.217.4.1:443: connect: connection refused" 2025-08-13T20:09:45.312959779+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.312959779+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.313585607+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.313674139+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.325440706+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329022759+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.329209134+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.329468842+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.329510783+00:00 stderr F 
time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.329549094+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=m5f4a 2025-08-13T20:09:45.329580195+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.329610826+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info 
msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.371295331+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e2a+y 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" 
level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=m5f4a 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.379218928+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.387238498+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.395537146+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.395660890+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.395710771+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=auOVA 2025-08-13T20:09:45.395743392+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.395877206+00:00 stderr F 
time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.398470770+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.398541992+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.398585564+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Id+B8 2025-08-13T20:09:45.398619555+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.398650715+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.399083668+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info 
msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.718941189+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Id+B8 2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auOVA 
2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:45.908512014+00:00 stderr F time="2025-08-13T20:09:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auOVA 2025-08-13T20:09:53.174562738+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="resolving sources" id=yM50g namespace=openshift-monitoring 2025-08-13T20:09:53.175244858+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="checking if subscriptions need update" id=yM50g namespace=openshift-monitoring 2025-08-13T20:09:53.175335740+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="resolving sources" id=1JynF namespace=openshift-operator-lifecycle-manager 2025-08-13T20:09:53.175335740+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="checking if subscriptions need update" id=1JynF namespace=openshift-operator-lifecycle-manager 2025-08-13T20:09:53.179189551+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=1JynF namespace=openshift-operator-lifecycle-manager 2025-08-13T20:09:53.179189551+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="resolving sources" id=cqdOf namespace=openshift-operators 2025-08-13T20:09:53.179215391+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="checking if subscriptions need update" id=cqdOf namespace=openshift-operators 2025-08-13T20:09:53.197599259+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=cqdOf namespace=openshift-operators 2025-08-13T20:09:53.199541824+00:00 stderr F time="2025-08-13T20:09:53Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" 
id=yM50g namespace=openshift-monitoring 2025-08-13T20:09:55.747227128+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.747227128+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.747227128+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.747227128+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.751689806+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752277003+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752326864+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752379006+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="checked registry server health" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace health=true id=HVKfg 2025-08-13T20:09:55.752410847+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752451938+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.752714516+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.753997462+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.754057224+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.754096405+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=OemHP 2025-08-13T20:09:55.754128546+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.754158867+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.770968679+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OemHP 2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.775595562+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.776185209+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.776207459+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.776207459+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.776339553+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.776339553+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HVKfg 2025-08-13T20:09:55.777490676+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace 
id=afjez 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=afjez 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:55.779940056+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=z5DhF 2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:55.787737710+00:00 stderr F time="2025-08-13T20:09:55Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:56.145267191+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:56.145713573+00:00 stderr F 
time="2025-08-13T20:09:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=afjez 2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.145713573+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z5DhF 
2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:56.348223830+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:56.546288508+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=AUBJc 2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:56.546357680+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 
2025-08-13T20:09:56.748351692+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:56.748483165+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:56.748483165+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:56.748497786+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=IakPh 2025-08-13T20:09:56.748497786+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:56.748508116+00:00 stderr F time="2025-08-13T20:09:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:57.156892975+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=aUKHs namespace=default 2025-08-13T20:09:57.156892975+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=aUKHs namespace=default 2025-08-13T20:09:57.157035829+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=NGiwD 
namespace=hostpath-provisioner 2025-08-13T20:09:57.157035829+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=NGiwD namespace=hostpath-provisioner 2025-08-13T20:09:57.163282918+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=NGiwD namespace=hostpath-provisioner 2025-08-13T20:09:57.163282918+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=VX3sc namespace=kube-node-lease 2025-08-13T20:09:57.163282918+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=VX3sc namespace=kube-node-lease 2025-08-13T20:09:57.165284335+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace default" id=aUKHs namespace=default 2025-08-13T20:09:57.165284335+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=aaOn8 namespace=kube-public 2025-08-13T20:09:57.165311576+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=aaOn8 namespace=kube-public 2025-08-13T20:09:57.168065415+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=VX3sc namespace=kube-node-lease 2025-08-13T20:09:57.168065415+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=d1BGQ namespace=kube-system 2025-08-13T20:09:57.168065415+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=d1BGQ namespace=kube-system 2025-08-13T20:09:57.168197969+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace kube-public" id=aaOn8 namespace=kube-public 2025-08-13T20:09:57.168197969+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=cmzNJ namespace=openshift 2025-08-13T20:09:57.168212199+00:00 stderr F 
time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=cmzNJ namespace=openshift 2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift" id=cmzNJ namespace=openshift 2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=4umjd namespace=openshift-apiserver 2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=4umjd namespace=openshift-apiserver 2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace kube-system" id=d1BGQ namespace=kube-system 2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=WMEZ7 namespace=openshift-apiserver-operator 2025-08-13T20:09:57.171427562+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=WMEZ7 namespace=openshift-apiserver-operator 2025-08-13T20:09:57.174108278+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=4umjd namespace=openshift-apiserver 2025-08-13T20:09:57.174108278+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=Z2Ddy namespace=openshift-authentication 2025-08-13T20:09:57.174108278+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=Z2Ddy namespace=openshift-authentication 2025-08-13T20:09:57.174298894+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=WMEZ7 namespace=openshift-apiserver-operator 2025-08-13T20:09:57.174298894+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=jYGI7 namespace=openshift-authentication-operator 
2025-08-13T20:09:57.174298894+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=jYGI7 namespace=openshift-authentication-operator 2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=Z2Ddy namespace=openshift-authentication 2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=F/ZKL namespace=openshift-cloud-network-config-controller 2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=F/ZKL namespace=openshift-cloud-network-config-controller 2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=jYGI7 namespace=openshift-authentication-operator 2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=XQ/f/ namespace=openshift-cloud-platform-infra 2025-08-13T20:09:57.177936758+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=XQ/f/ namespace=openshift-cloud-platform-infra 2025-08-13T20:09:57.365030892+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=F/ZKL namespace=openshift-cloud-network-config-controller 2025-08-13T20:09:57.365030892+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=EJvXe namespace=openshift-cluster-machine-approver 2025-08-13T20:09:57.365030892+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=EJvXe namespace=openshift-cluster-machine-approver 2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:57.548997987+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=AUBJc 2025-08-13T20:09:57.549209973+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.549209973+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.563831192+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=XQ/f/ namespace=openshift-cloud-platform-infra 2025-08-13T20:09:57.563831192+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving 
sources" id=YCKSt namespace=openshift-cluster-samples-operator 2025-08-13T20:09:57.563831192+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=YCKSt namespace=openshift-cluster-samples-operator 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=IakPh 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:57.749354201+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:57.762163049+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=EJvXe namespace=openshift-cluster-machine-approver 2025-08-13T20:09:57.762163049+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=a5iDW namespace=openshift-cluster-storage-operator 2025-08-13T20:09:57.762163049+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=a5iDW namespace=openshift-cluster-storage-operator 2025-08-13T20:09:57.946444862+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=rG7wq 2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 
2025-08-13T20:09:57.948296765+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:57.962935355+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=YCKSt namespace=openshift-cluster-samples-operator 2025-08-13T20:09:57.962935355+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="resolving sources" id=N9XnC namespace=openshift-cluster-version 2025-08-13T20:09:57.962935355+00:00 stderr F time="2025-08-13T20:09:57Z" level=info msg="checking if subscriptions need update" id=N9XnC namespace=openshift-cluster-version 2025-08-13T20:09:58.148277418+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.148434292+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.148434292+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.148464483+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=FPqH4 2025-08-13T20:09:58.148464483+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="registry state good" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.148464483+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.162341041+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=a5iDW namespace=openshift-cluster-storage-operator 2025-08-13T20:09:58.162341041+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=shDZ2 namespace=openshift-config 2025-08-13T20:09:58.162341041+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=shDZ2 namespace=openshift-config 2025-08-13T20:09:58.362413208+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=N9XnC namespace=openshift-cluster-version 2025-08-13T20:09:58.362413208+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=u9Mnp namespace=openshift-config-managed 2025-08-13T20:09:58.362413208+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=u9Mnp namespace=openshift-config-managed 2025-08-13T20:09:58.561747213+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-config" id=shDZ2 namespace=openshift-config 2025-08-13T20:09:58.561747213+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=rL09V namespace=openshift-config-operator 2025-08-13T20:09:58.561747213+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=rL09V namespace=openshift-config-operator 2025-08-13T20:09:58.745648065+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:58.745931463+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:58.745931463+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:58.746050217+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:58.746050217+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=rG7wq 2025-08-13T20:09:58.761941082+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=u9Mnp namespace=openshift-config-managed 2025-08-13T20:09:58.761941082+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=xns6Z namespace=openshift-console 2025-08-13T20:09:58.761941082+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=xns6Z namespace=openshift-console 2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 
2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.947855433+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=FPqH4 2025-08-13T20:09:58.962240075+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=rL09V namespace=openshift-config-operator 2025-08-13T20:09:58.962337268+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="resolving sources" id=hQUnk namespace=openshift-console-operator 2025-08-13T20:09:58.962337268+00:00 stderr F time="2025-08-13T20:09:58Z" level=info msg="checking if subscriptions need update" id=hQUnk namespace=openshift-console-operator 2025-08-13T20:09:59.162651561+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-console" id=xns6Z namespace=openshift-console 2025-08-13T20:09:59.162651561+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=2uTnu 
namespace=openshift-console-user-settings 2025-08-13T20:09:59.162651561+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=2uTnu namespace=openshift-console-user-settings 2025-08-13T20:09:59.363311044+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=hQUnk namespace=openshift-console-operator 2025-08-13T20:09:59.363311044+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=cDtnC namespace=openshift-controller-manager 2025-08-13T20:09:59.363311044+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=cDtnC namespace=openshift-controller-manager 2025-08-13T20:09:59.565472180+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=2uTnu namespace=openshift-console-user-settings 2025-08-13T20:09:59.565472180+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=L69/p namespace=openshift-controller-manager-operator 2025-08-13T20:09:59.565472180+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=L69/p namespace=openshift-controller-manager-operator 2025-08-13T20:09:59.762659064+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=cDtnC namespace=openshift-controller-manager 2025-08-13T20:09:59.762659064+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=CTpzL namespace=openshift-dns 2025-08-13T20:09:59.762715376+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=CTpzL namespace=openshift-dns 2025-08-13T20:09:59.963401190+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" 
id=L69/p namespace=openshift-controller-manager-operator
2025-08-13T20:09:59.963401190+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="resolving sources" id=WzHQ7 namespace=openshift-dns-operator
2025-08-13T20:09:59.963401190+00:00 stderr F time="2025-08-13T20:09:59Z" level=info msg="checking if subscriptions need update" id=WzHQ7 namespace=openshift-dns-operator
2025-08-13T20:10:00.165312179+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=CTpzL namespace=openshift-dns
2025-08-13T20:10:00.165312179+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=sAyKO namespace=openshift-etcd
2025-08-13T20:10:00.165312179+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=sAyKO namespace=openshift-etcd
2025-08-13T20:10:00.373735034+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=WzHQ7 namespace=openshift-dns-operator
2025-08-13T20:10:00.373735034+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=RUWXH namespace=openshift-etcd-operator
2025-08-13T20:10:00.373735034+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=RUWXH namespace=openshift-etcd-operator
2025-08-13T20:10:00.562843906+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=sAyKO namespace=openshift-etcd
2025-08-13T20:10:00.562987640+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=b1K1t namespace=openshift-host-network
2025-08-13T20:10:00.563004551+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=b1K1t namespace=openshift-host-network
2025-08-13T20:10:00.765071324+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=RUWXH namespace=openshift-etcd-operator
2025-08-13T20:10:00.765071324+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=T2uVn namespace=openshift-image-registry
2025-08-13T20:10:00.765071324+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=T2uVn namespace=openshift-image-registry
2025-08-13T20:10:00.962427103+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=b1K1t namespace=openshift-host-network
2025-08-13T20:10:00.962524555+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="resolving sources" id=vUTpE namespace=openshift-infra
2025-08-13T20:10:00.962524555+00:00 stderr F time="2025-08-13T20:10:00Z" level=info msg="checking if subscriptions need update" id=vUTpE namespace=openshift-infra
2025-08-13T20:10:01.163223430+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=T2uVn namespace=openshift-image-registry
2025-08-13T20:10:01.163223430+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=KYZtS namespace=openshift-ingress
2025-08-13T20:10:01.163223430+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=KYZtS namespace=openshift-ingress
2025-08-13T20:10:01.365713315+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=vUTpE namespace=openshift-infra
2025-08-13T20:10:01.365713315+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=oFtfD namespace=openshift-ingress-canary
2025-08-13T20:10:01.365713315+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=oFtfD namespace=openshift-ingress-canary
2025-08-13T20:10:01.562995042+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=KYZtS namespace=openshift-ingress
2025-08-13T20:10:01.562995042+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=iqP7R namespace=openshift-ingress-operator
2025-08-13T20:10:01.562995042+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=iqP7R namespace=openshift-ingress-operator
2025-08-13T20:10:01.765622000+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=oFtfD namespace=openshift-ingress-canary
2025-08-13T20:10:01.765622000+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=7k1Dn namespace=openshift-kni-infra
2025-08-13T20:10:01.765622000+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=7k1Dn namespace=openshift-kni-infra
2025-08-13T20:10:01.961185727+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=iqP7R namespace=openshift-ingress-operator
2025-08-13T20:10:01.961185727+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="resolving sources" id=vvE+v namespace=openshift-kube-apiserver
2025-08-13T20:10:01.961185727+00:00 stderr F time="2025-08-13T20:10:01Z" level=info msg="checking if subscriptions need update" id=vvE+v namespace=openshift-kube-apiserver
2025-08-13T20:10:02.163209680+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=7k1Dn namespace=openshift-kni-infra
2025-08-13T20:10:02.163209680+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=1Jzn4 namespace=openshift-kube-apiserver-operator
2025-08-13T20:10:02.163209680+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=1Jzn4 namespace=openshift-kube-apiserver-operator
2025-08-13T20:10:02.372387967+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=vvE+v namespace=openshift-kube-apiserver
2025-08-13T20:10:02.372522941+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=I4eoQ namespace=openshift-kube-controller-manager
2025-08-13T20:10:02.372563762+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=I4eoQ namespace=openshift-kube-controller-manager
2025-08-13T20:10:02.568864200+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=1Jzn4 namespace=openshift-kube-apiserver-operator
2025-08-13T20:10:02.568864200+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=RBqCK namespace=openshift-kube-controller-manager-operator
2025-08-13T20:10:02.568864200+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=RBqCK namespace=openshift-kube-controller-manager-operator
2025-08-13T20:10:02.760844894+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=I4eoQ namespace=openshift-kube-controller-manager
2025-08-13T20:10:02.760844894+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=QGL7g namespace=openshift-kube-scheduler
2025-08-13T20:10:02.760895106+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=QGL7g namespace=openshift-kube-scheduler
2025-08-13T20:10:02.964298548+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=RBqCK namespace=openshift-kube-controller-manager-operator
2025-08-13T20:10:02.964298548+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="resolving sources" id=EN/zw namespace=openshift-kube-scheduler-operator
2025-08-13T20:10:02.964298548+00:00 stderr F time="2025-08-13T20:10:02Z" level=info msg="checking if subscriptions need update" id=EN/zw namespace=openshift-kube-scheduler-operator
2025-08-13T20:10:03.164272721+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=QGL7g namespace=openshift-kube-scheduler
2025-08-13T20:10:03.164272721+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=MpSL1 namespace=openshift-kube-storage-version-migrator
2025-08-13T20:10:03.164272721+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=MpSL1 namespace=openshift-kube-storage-version-migrator
2025-08-13T20:10:03.362014181+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=EN/zw namespace=openshift-kube-scheduler-operator
2025-08-13T20:10:03.362014181+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=BjQbm namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T20:10:03.362014181+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=BjQbm namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T20:10:03.561016716+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=MpSL1 namespace=openshift-kube-storage-version-migrator
2025-08-13T20:10:03.561016716+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=dyEQA namespace=openshift-machine-api
2025-08-13T20:10:03.561016716+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=dyEQA namespace=openshift-machine-api
2025-08-13T20:10:03.763069669+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=BjQbm namespace=openshift-kube-storage-version-migrator-operator
2025-08-13T20:10:03.763069669+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=/z821 namespace=openshift-machine-config-operator
2025-08-13T20:10:03.763069669+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=/z821 namespace=openshift-machine-config-operator
2025-08-13T20:10:03.973649977+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=dyEQA namespace=openshift-machine-api
2025-08-13T20:10:03.973649977+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="resolving sources" id=wmntX namespace=openshift-marketplace
2025-08-13T20:10:03.973649977+00:00 stderr F time="2025-08-13T20:10:03Z" level=info msg="checking if subscriptions need update" id=wmntX namespace=openshift-marketplace
2025-08-13T20:10:04.163847310+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=/z821 namespace=openshift-machine-config-operator
2025-08-13T20:10:04.163847310+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=/GelK namespace=openshift-monitoring
2025-08-13T20:10:04.163847310+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=/GelK namespace=openshift-monitoring
2025-08-13T20:10:04.362302830+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=wmntX namespace=openshift-marketplace
2025-08-13T20:10:04.362302830+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=K7dM5 namespace=openshift-multus
2025-08-13T20:10:04.362302830+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=K7dM5 namespace=openshift-multus
2025-08-13T20:10:04.563660643+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=/GelK namespace=openshift-monitoring
2025-08-13T20:10:04.563660643+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=IjjbX namespace=openshift-network-diagnostics
2025-08-13T20:10:04.563660643+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=IjjbX namespace=openshift-network-diagnostics
2025-08-13T20:10:04.762154954+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=K7dM5 namespace=openshift-multus
2025-08-13T20:10:04.762154954+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=9ERws namespace=openshift-network-node-identity
2025-08-13T20:10:04.762154954+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=9ERws namespace=openshift-network-node-identity
2025-08-13T20:10:04.962517399+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=IjjbX namespace=openshift-network-diagnostics
2025-08-13T20:10:04.962517399+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="resolving sources" id=ejPIl namespace=openshift-network-operator
2025-08-13T20:10:04.962517399+00:00 stderr F time="2025-08-13T20:10:04Z" level=info msg="checking if subscriptions need update" id=ejPIl namespace=openshift-network-operator
2025-08-13T20:10:05.161489993+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=9ERws namespace=openshift-network-node-identity
2025-08-13T20:10:05.161489993+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=3JdwK namespace=openshift-node
2025-08-13T20:10:05.161489993+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=3JdwK namespace=openshift-node
2025-08-13T20:10:05.363526565+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=ejPIl namespace=openshift-network-operator
2025-08-13T20:10:05.363526565+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=cxk9z namespace=openshift-nutanix-infra
2025-08-13T20:10:05.363526565+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=cxk9z namespace=openshift-nutanix-infra
2025-08-13T20:10:05.562304444+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-node" id=3JdwK namespace=openshift-node
2025-08-13T20:10:05.562304444+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=twwxk namespace=openshift-oauth-apiserver
2025-08-13T20:10:05.562304444+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=twwxk namespace=openshift-oauth-apiserver
2025-08-13T20:10:05.761377402+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=cxk9z namespace=openshift-nutanix-infra
2025-08-13T20:10:05.761377402+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=LOHtz namespace=openshift-openstack-infra
2025-08-13T20:10:05.761377402+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=LOHtz namespace=openshift-openstack-infra
2025-08-13T20:10:05.961238832+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=twwxk namespace=openshift-oauth-apiserver
2025-08-13T20:10:05.961238832+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="resolving sources" id=PrCpi namespace=openshift-operator-lifecycle-manager
2025-08-13T20:10:05.961238832+00:00 stderr F time="2025-08-13T20:10:05Z" level=info msg="checking if subscriptions need update" id=PrCpi namespace=openshift-operator-lifecycle-manager
2025-08-13T20:10:06.162898174+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=LOHtz namespace=openshift-openstack-infra
2025-08-13T20:10:06.162898174+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=077is namespace=openshift-operators
2025-08-13T20:10:06.162898174+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=077is namespace=openshift-operators
2025-08-13T20:10:06.361410876+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=PrCpi namespace=openshift-operator-lifecycle-manager
2025-08-13T20:10:06.361549950+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=gIJXC namespace=openshift-ovirt-infra
2025-08-13T20:10:06.361585521+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=gIJXC namespace=openshift-ovirt-infra
2025-08-13T20:10:06.560976667+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=077is namespace=openshift-operators
2025-08-13T20:10:06.561110021+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=ikEL/ namespace=openshift-ovn-kubernetes
2025-08-13T20:10:06.561188093+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=ikEL/ namespace=openshift-ovn-kubernetes
2025-08-13T20:10:06.761085465+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=gIJXC namespace=openshift-ovirt-infra
2025-08-13T20:10:06.761085465+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=oFghW namespace=openshift-route-controller-manager
2025-08-13T20:10:06.761085465+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=oFghW namespace=openshift-route-controller-manager
2025-08-13T20:10:06.961369297+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=ikEL/ namespace=openshift-ovn-kubernetes
2025-08-13T20:10:06.961369297+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="resolving sources" id=jtVwc namespace=openshift-service-ca
2025-08-13T20:10:06.961369297+00:00 stderr F time="2025-08-13T20:10:06Z" level=info msg="checking if subscriptions need update" id=jtVwc namespace=openshift-service-ca
2025-08-13T20:10:07.162838483+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=oFghW namespace=openshift-route-controller-manager
2025-08-13T20:10:07.162959517+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="resolving sources" id=4EA5b namespace=openshift-service-ca-operator
2025-08-13T20:10:07.162959517+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="checking if subscriptions need update" id=4EA5b namespace=openshift-service-ca-operator
2025-08-13T20:10:07.360749808+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=jtVwc namespace=openshift-service-ca
2025-08-13T20:10:07.360749808+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="resolving sources" id=+3L9L namespace=openshift-user-workload-monitoring
2025-08-13T20:10:07.360749808+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="checking if subscriptions need update" id=+3L9L namespace=openshift-user-workload-monitoring
2025-08-13T20:10:07.562619746+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=4EA5b namespace=openshift-service-ca-operator
2025-08-13T20:10:07.562619746+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="resolving sources" id=ANtt4 namespace=openshift-vsphere-infra
2025-08-13T20:10:07.562619746+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="checking if subscriptions need update" id=ANtt4 namespace=openshift-vsphere-infra
2025-08-13T20:10:07.763725242+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=+3L9L namespace=openshift-user-workload-monitoring
2025-08-13T20:10:07.962111509+00:00 stderr F time="2025-08-13T20:10:07Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=ANtt4 namespace=openshift-vsphere-infra
2025-08-13T20:10:36.567661543+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4
2025-08-13T20:10:36.567730145+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4
2025-08-13T20:10:36.568002362+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M
2025-08-13T20:10:36.568002362+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M
2025-08-13T20:10:36.571734019+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M
2025-08-13T20:10:36.571849393+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4
2025-08-13T20:10:36.572242304+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=52tz4
2025-08-13T20:10:36.572242304+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=52tz4
2025-08-13T20:10:36.572262824+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=52tz4
2025-08-13T20:10:36.572318606+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4
2025-08-13T20:10:36.572318606+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4
2025-08-13T20:10:36.572536092+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Upb4M
2025-08-13T20:10:36.572536092+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Upb4M
2025-08-13T20:10:36.572536092+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Upb4M
2025-08-13T20:10:36.572574683+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M
2025-08-13T20:10:36.572574683+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M
2025-08-13T20:10:36.592592257+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4
2025-08-13T20:10:36.592866545+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=52tz4
2025-08-13T20:10:36.592866545+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=52tz4
2025-08-13T20:10:36.593097232+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4
2025-08-13T20:10:36.593097232+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=52tz4
2025-08-13T20:10:36.593356229+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M
2025-08-13T20:10:36.593453532+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b
2025-08-13T20:10:36.593453532+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b
2025-08-13T20:10:36.593564375+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Upb4M
2025-08-13T20:10:36.593564375+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Upb4M
2025-08-13T20:10:36.593581236+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M
2025-08-13T20:10:36.593592416+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Upb4M
2025-08-13T20:10:36.593727950+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4
2025-08-13T20:10:36.593727950+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4
2025-08-13T20:10:36.596436368+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4
2025-08-13T20:10:36.596566301+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=h3sA4
2025-08-13T20:10:36.596566301+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=h3sA4
2025-08-13T20:10:36.596580592+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=h3sA4
2025-08-13T20:10:36.596590752+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4
2025-08-13T20:10:36.596590752+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4
2025-08-13T20:10:36.596769277+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b
2025-08-13T20:10:36.597005344+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=zC15b
2025-08-13T20:10:36.597005344+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=zC15b
2025-08-13T20:10:36.597020974+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=zC15b
2025-08-13T20:10:36.597020974+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b
2025-08-13T20:10:36.597020974+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b
2025-08-13T20:10:36.967022983+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4
2025-08-13T20:10:36.967294600+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=h3sA4
2025-08-13T20:10:36.967346602+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=h3sA4
2025-08-13T20:10:36.967502926+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4
2025-08-13T20:10:36.967541448+00:00 stderr F time="2025-08-13T20:10:36Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=h3sA4
2025-08-13T20:10:37.169114087+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b
2025-08-13T20:10:37.169361304+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=zC15b
2025-08-13T20:10:37.169429406+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=zC15b
2025-08-13T20:10:37.169642292+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b
2025-08-13T20:10:37.169769006+00:00 stderr F time="2025-08-13T20:10:37Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=zC15b
2025-08-13T20:16:57.983401621+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9
2025-08-13T20:16:57.983401621+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9
2025-08-13T20:16:57.994107757+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9
2025-08-13T20:16:57.994407785+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=NJHA9
2025-08-13T20:16:57.994407785+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=NJHA9
2025-08-13T20:16:57.994407785+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=NJHA9
2025-08-13T20:16:57.994495598+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9
2025-08-13T20:16:57.994495598+00:00 stderr F time="2025-08-13T20:16:57Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9
2025-08-13T20:16:58.052560476+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9
2025-08-13T20:16:58.052759281+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=NJHA9
2025-08-13T20:16:58.052759281+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=NJHA9
2025-08-13T20:16:58.052999578+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="catalog update required at 2025-08-13 20:16:58.052873305 +0000 UTC m=+1073.704261512" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9
2025-08-13T20:16:58.104206561+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=NJHA9
2025-08-13T20:16:58.160920630+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw
2025-08-13T20:16:58.160920630+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw
2025-08-13T20:16:58.182861157+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw
2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uYxTw
2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uYxTw
2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=uYxTw
2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw
2025-08-13T20:16:58.183069583+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw
2025-08-13T20:16:58.301356281+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw
2025-08-13T20:16:58.301490114+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uYxTw
2025-08-13T20:16:58.301490114+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uYxTw
2025-08-13T20:16:58.301576117+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uYxTw
2025-08-13T20:16:58.301687300+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p
2025-08-13T20:16:58.301687300+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p
2025-08-13T20:16:58.349110124+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p
2025-08-13T20:16:58.349228398+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349244878+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349256929+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Q4L1p 2025-08-13T20:16:58.349267839+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.349267839+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.365853723+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.365853723+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.365853723+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.365853723+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Q4L1p 2025-08-13T20:16:58.366071269+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.366071269+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371099302+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=bWm0C 
2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.371587276+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.589040206+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.589040206+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.589040206+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.589040206+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=bWm0C 2025-08-13T20:16:58.869911227+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.869911227+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:58.892030609+00:00 stderr F time="2025-08-13T20:16:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.187540528+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.187934079+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.187934079+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.188080593+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5lbPJ 2025-08-13T20:16:59.982015846+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.982015846+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.985854136+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:16:59.986118273+00:00 stderr F time="2025-08-13T20:16:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.010869410+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.011457507+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.011457507+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.011457507+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="catalog update required at 2025-08-13 20:17:00.011047765 +0000 UTC m=+1075.662435882" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
id=4Ikty 2025-08-13T20:17:00.118078552+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=4Ikty 2025-08-13T20:17:00.206741004+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.206741004+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.222015870+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.222015870+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.247353173+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.247353173+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.247411305+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.247411305+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=auUuY 2025-08-13T20:17:00.247411305+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.247411305+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:00.388159314+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="registry state good" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:00.388311109+00:00 stderr F time="2025-08-13T20:17:00Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.065919068+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:01.065989060+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:01.065989060+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:01.068575694+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=auUuY 2025-08-13T20:17:01.068575694+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.068575694+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.204820865+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.204820865+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.204820865+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.204820865+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Nnq8Z 2025-08-13T20:17:01.388421828+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=ytnzk 
2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=ytnzk 2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.388733117+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.790047488+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.790047488+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.790047488+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:01.790047488+00:00 stderr F time="2025-08-13T20:17:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ytnzk 2025-08-13T20:17:02.214701065+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.214916361+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.257914299+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.260706749+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.328990569+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.329082781+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.398516884+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:02.590033273+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="searching for current pods" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.590033273+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.590033273+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:02.590033273+00:00 stderr F time="2025-08-13T20:17:02Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=vRJPZ 2025-08-13T20:17:03.006849277+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:03.006849277+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:03.006849277+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:03.006849277+00:00 stderr F 
time="2025-08-13T20:17:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=pj29E 2025-08-13T20:17:03.226194831+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.226194831+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231187223+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231403959+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231451051+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.231496242+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=NcZkZ 2025-08-13T20:17:03.231594725+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 
2025-08-13T20:17:03.231634786+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.589912777+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.589912777+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.589912777+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:03.589912777+00:00 stderr F time="2025-08-13T20:17:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=NcZkZ 2025-08-13T20:17:13.567481128+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.567481128+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575424395+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575732133+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575732133+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575751934+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=cWRIS 2025-08-13T20:17:13.575766694+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:13.575833406+00:00 stderr F time="2025-08-13T20:17:13Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.575128444+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.575128444+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.575128444+00:00 stderr F time="2025-08-13T20:17:14Z" 
level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.575128444+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="catalog update required at 2025-08-13 20:17:14.574302031 +0000 UTC m=+1090.225690378" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:14.761215717+00:00 stderr F time="2025-08-13T20:17:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=cWRIS 2025-08-13T20:17:16.057152613+00:00 stderr F time="2025-08-13T20:17:16Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:16.057152613+00:00 stderr F time="2025-08-13T20:17:16Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.090869583+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.092864910+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.092945742+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.093063286+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=DQw9G 2025-08-13T20:17:17.093101357+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:17.093149408+00:00 stderr F time="2025-08-13T20:17:17Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=DQw9G 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=DQw9G 
2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.687015579+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:20.829281742+00:00 stderr F time="2025-08-13T20:17:20Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.167229383+00:00 
stderr F time="2025-08-13T20:17:21Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.167229383+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.468461835+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.468461835+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.468461835+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.468461835+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=elow1 2025-08-13T20:17:21.468679401+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.472224943+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.623563934+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.623563934+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.634942809+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:21.643182265+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.643390751+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.643430382+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:21.643529405+00:00 stderr F time="2025-08-13T20:17:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=oFTtd 2025-08-13T20:17:22.163267576+00:00 stderr F time="2025-08-13T20:17:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:22.163267576+00:00 stderr F time="2025-08-13T20:17:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:22.163267576+00:00 stderr F time="2025-08-13T20:17:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:22.163396920+00:00 stderr F time="2025-08-13T20:17:22Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6npwb 2025-08-13T20:17:25.192955276+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.192955276+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info 
msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.367319875+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.588933943+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.588933943+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.613010941+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for 
current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.614580255+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.614639487+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.614680998+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=LEDYg 2025-08-13T20:17:25.614712339+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.614743140+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=b08SX 
2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=b08SX 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.802365198+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=A6muu 
2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.872067149+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:25.952123845+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.952123845+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.952123845+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:25.952123845+00:00 stderr F time="2025-08-13T20:17:25Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LEDYg 2025-08-13T20:17:26.014261069+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:26.016297707+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:26.016297707+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:26.016297707+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=A6muu 2025-08-13T20:17:26.446116762+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.446166283+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.450755694+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.451002092+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=VNRX8 
2025-08-13T20:17:26.451002092+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.451002092+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=VNRX8 2025-08-13T20:17:26.451029212+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.451029212+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.500597568+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.500840705+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.500840705+00:00 stderr F time="2025-08-13T20:17:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=VNRX8 2025-08-13T20:17:26.501008230+00:00 stderr F time="2025-08-13T20:17:26Z" 
level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=VNRX8
2025-08-13T20:17:28.104889022+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.104942904+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=H9D55
2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.109917646+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.140235902+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.140235902+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.140235902+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:28.140235902+00:00 stderr F time="2025-08-13T20:17:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=H9D55
2025-08-13T20:17:29.083862719+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.083862719+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.780267686+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=fI9Bs
2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.780400900+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.988379249+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.988635976+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.988635976+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:29.988765340+00:00 stderr F time="2025-08-13T20:17:29Z" level=info msg="catalog update required at 2025-08-13 20:17:29.988696568 +0000 UTC m=+1105.640085225" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:30.091590506+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=fI9Bs
2025-08-13T20:17:30.118922297+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.118922297+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.181192185+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.258727749+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.258895684+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.258914255+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.258914255+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=tlYOF
2025-08-13T20:17:30.258924595+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.258934245+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.292201915+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.292201915+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.292201915+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.292201915+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0d4y8
2025-08-13T20:17:30.300262586+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.304010403+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.304010403+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.304010403+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tlYOF
2025-08-13T20:17:30.359155597+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.359155597+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.447140790+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.447489830+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.447532691+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.447571042+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=qpwQE
2025-08-13T20:17:30.447625614+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.447664765+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.505590159+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.506902747+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.506902747+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.506902747+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qpwQE
2025-08-13T20:17:30.507065671+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.507065671+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=5DLXM
2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.519579569+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.786895173+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:30.786895173+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:30.790183456+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.790392212+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.790466695+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.790550767+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=5DLXM
2025-08-13T20:17:30.790622629+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:30.790622629+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=qF6GE
2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:30.988258963+00:00 stderr F time="2025-08-13T20:17:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:31.237228273+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:31.237274404+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:31.237287695+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:31.237355846+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=le9Xh
2025-08-13T20:17:31.237355846+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:31.237355846+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:31.786854869+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:31.790262216+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:31.790262216+00:00 stderr F time="2025-08-13T20:17:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=le9Xh
2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:33.004230403+00:00 stderr F time="2025-08-13T20:17:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ESgBx
2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.089041672+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.153361819+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:34.153361819+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=qF6GE
2025-08-13T20:17:34.153361819+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.153361819+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=OC4YC
2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.309984322+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:34.355004368+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ESgBx
2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:34.357128128+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=WkWXW
2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:34.713398702+00:00 stderr F time="2025-08-13T20:17:34Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:35.301474856+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:35.301877788+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:35.302487085+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:35.330228528+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:35.330280479+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:35.330280479+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=WkWXW
2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OC4YC
2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.330924267+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.338001729+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.338001729+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=3ug3Z
2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.349076676+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.351170726+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.351229217+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.351229217+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.351229217+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=LtZef
2025-08-13T20:17:35.351245528+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.351245528+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.507943673+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.507943673+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.507943673+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.507943673+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=LtZef
2025-08-13T20:17:35.508587661+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.508587661+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=3ug3Z
2025-08-13T20:17:35.508587661+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true
correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.593362432+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.593362432+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=3ug3Z 2025-08-13T20:17:35.849649031+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.849649031+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info 
msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:35.894919484+00:00 stderr F time="2025-08-13T20:17:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.195428065+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.195492227+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.195492227+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.196427574+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jwtyF 2025-08-13T20:17:36.624680583+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.624680583+00:00 stderr F 
time="2025-08-13T20:17:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628370598+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628370598+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628370598+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628461031+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=9ciqs 2025-08-13T20:17:36.628461031+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.628461031+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.847434594+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.847479315+00:00 stderr F 
time="2025-08-13T20:17:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.847479315+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.847670481+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=9ciqs 2025-08-13T20:17:36.853169388+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.853169388+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="of 1 pods matching label 
selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:36.996943424+00:00 stderr F time="2025-08-13T20:17:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.599457340+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=geRH1 2025-08-13T20:17:37.600328055+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.600328055+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.603665960+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="registry state good" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:37.604008350+00:00 stderr F time="2025-08-13T20:17:37Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.013003720+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.013050411+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.013050411+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.014143572+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.014143572+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=PX9hI 2025-08-13T20:17:38.847365117+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.847365117+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="synchronizing registry 
server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852446052+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852603126+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852616807+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852652978+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=t60me 2025-08-13T20:17:38.852664618+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:38.852664618+00:00 stderr F time="2025-08-13T20:17:38Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.032347619+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.032394491+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.032394491+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.172130361+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.172130361+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=t60me 2025-08-13T20:17:39.172176812+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.172192393+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.176426884+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.389021935+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.389021935+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.389021935+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace 
id=i/23X 2025-08-13T20:17:39.588865252+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.588865252+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=i/23X 2025-08-13T20:17:39.588918974+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.588937064+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787139864+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace health=true id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:39.787755032+00:00 stderr F time="2025-08-13T20:17:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:43.661614527+00:00 stderr F time="2025-08-13T20:17:43Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:43.662073730+00:00 stderr F time="2025-08-13T20:17:43Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:43.662073730+00:00 stderr F time="2025-08-13T20:17:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:44.101329484+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:44.101422527+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nB5aM 2025-08-13T20:17:44.763039721+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="syncing catalog source" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.763039721+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.975679963+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.976215999+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.976273680+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.976273680+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=ja6zk 2025-08-13T20:17:44.976289581+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.976289581+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk 2025-08-13T20:17:44.993212864+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=Cr0Wf
2025-08-13T20:17:44.993404710+00:00 stderr F time="2025-08-13T20:17:44Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf
2025-08-13T20:17:45.061914386+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk
2025-08-13T20:17:45.061914386+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ja6zk
2025-08-13T20:17:45.061914386+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=ja6zk
2025-08-13T20:17:45.061914386+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=ja6zk
2025-08-13T20:17:45.068746621+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf
2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Cr0Wf
2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Cr0Wf
2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Cr0Wf
2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf
2025-08-13T20:17:45.071657754+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf
2025-08-13T20:17:45.251885971+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf
2025-08-13T20:17:45.251934492+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Cr0Wf
2025-08-13T20:17:45.251944773+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Cr0Wf
2025-08-13T20:17:45.252406326+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Cr0Wf
2025-08-13T20:17:45.252493518+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg
2025-08-13T20:17:45.252493518+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg
2025-08-13T20:17:45.304897995+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg
2025-08-13T20:17:45.305139592+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cxXAg
2025-08-13T20:17:45.305181903+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cxXAg
2025-08-13T20:17:45.305221494+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=cxXAg
2025-08-13T20:17:45.305253065+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg
2025-08-13T20:17:45.305284176+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg
2025-08-13T20:17:45.449427192+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg
2025-08-13T20:17:45.449815113+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cxXAg
2025-08-13T20:17:45.449882175+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=cxXAg
2025-08-13T20:17:45.450066551+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg
2025-08-13T20:17:45.450107972+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cxXAg
2025-08-13T20:17:45.450257196+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz
2025-08-13T20:17:45.450310008+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz
2025-08-13T20:17:45.460243081+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz
2025-08-13T20:17:45.460365025+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XeZZz
2025-08-13T20:17:45.460365025+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XeZZz
2025-08-13T20:17:45.460365025+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=XeZZz
2025-08-13T20:17:45.460418126+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz
2025-08-13T20:17:45.460418126+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz
2025-08-13T20:17:45.495863248+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz
2025-08-13T20:17:45.496298881+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XeZZz
2025-08-13T20:17:45.496298881+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XeZZz
2025-08-13T20:17:45.496430045+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz
2025-08-13T20:17:45.496430045+00:00 stderr F time="2025-08-13T20:17:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XeZZz
2025-08-13T20:17:58.140876162+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy
2025-08-13T20:17:58.140938434+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy
2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy
2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=f6dDy
2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=f6dDy
2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=f6dDy
2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy
2025-08-13T20:17:58.152175935+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy
2025-08-13T20:17:58.187421121+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy
2025-08-13T20:17:58.188168663+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=f6dDy
2025-08-13T20:17:58.188168663+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=f6dDy
2025-08-13T20:17:58.188168663+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy
2025-08-13T20:17:58.188168663+00:00 stderr F time="2025-08-13T20:17:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=f6dDy
2025-08-13T20:18:00.092684011+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS
2025-08-13T20:18:00.092684011+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS
2025-08-13T20:18:00.292591780+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2
2025-08-13T20:18:00.292647911+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2
2025-08-13T20:18:00.552924954+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2
2025-08-13T20:18:00.553143560+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS
2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+rQlS
2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+rQlS
2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=+rQlS
2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS
2025-08-13T20:18:00.553360606+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS
2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BGjV2
2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BGjV2
2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=BGjV2
2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2
2025-08-13T20:18:00.557076243+00:00 stderr F time="2025-08-13T20:18:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2
2025-08-13T20:18:02.967377044+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2
2025-08-13T20:18:02.967377044+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BGjV2
2025-08-13T20:18:02.967377044+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=BGjV2
2025-08-13T20:18:02.967893429+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2
2025-08-13T20:18:02.967893429+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=BGjV2
2025-08-13T20:18:02.999135901+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS
2025-08-13T20:18:02.999352517+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+rQlS
2025-08-13T20:18:02.999352517+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+rQlS
2025-08-13T20:18:02.999352517+00:00 stderr F time="2025-08-13T20:18:02Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+rQlS
2025-08-13T20:18:15.062646720+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc
2025-08-13T20:18:15.062646720+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc
2025-08-13T20:18:15.070203696+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc
2025-08-13T20:18:15.070523866+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dO5Zc
2025-08-13T20:18:15.070523866+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dO5Zc
2025-08-13T20:18:15.070708231+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=dO5Zc
2025-08-13T20:18:15.070708231+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc
2025-08-13T20:18:15.070708231+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc
2025-08-13T20:18:15.101226102+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc
2025-08-13T20:18:15.101420618+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dO5Zc
2025-08-13T20:18:15.101420618+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dO5Zc
2025-08-13T20:18:15.101629364+00:00 stderr F time="2025-08-13T20:18:15Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dO5Zc
2025-08-13T20:18:24.802453013+00:00 stderr F time="2025-08-13T20:18:24Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1
2025-08-13T20:18:24.802453013+00:00 stderr F time="2025-08-13T20:18:24Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1
2025-08-13T20:18:26.371243652+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1
2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CEJs1
2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CEJs1
2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=CEJs1
2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1
2025-08-13T20:18:26.371673174+00:00 stderr F time="2025-08-13T20:18:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1
2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1
2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CEJs1
2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=CEJs1
2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=CEJs1
2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx
2025-08-13T20:18:28.673912080+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx
2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx
2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=817zx
2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=817zx
2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=817zx
2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx
2025-08-13T20:18:28.687576910+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx
2025-08-13T20:18:28.793699001+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx
2025-08-13T20:18:28.793699001+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=817zx
2025-08-13T20:18:28.793699001+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=817zx
2025-08-13T20:18:28.793699001+00:00 stderr F time="2025-08-13T20:18:28Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=817zx
2025-08-13T20:18:33.000200576+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8
2025-08-13T20:18:33.000200576+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8
2025-08-13T20:18:33.005656402+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8
2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Rc4s8
2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Rc4s8
2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Rc4s8
2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8
2025-08-13T20:18:33.006006962+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8
2025-08-13T20:18:33.107085609+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8
2025-08-13T20:18:33.107085609+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Rc4s8
2025-08-13T20:18:33.107085609+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Rc4s8
2025-08-13T20:18:33.107085609+00:00 stderr F time="2025-08-13T20:18:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Rc4s8
2025-08-13T20:18:45.102857093+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m
2025-08-13T20:18:45.102857093+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m
2025-08-13T20:18:45.106837916+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m
2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uRN/m
2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uRN/m
2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=uRN/m
2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m
2025-08-13T20:18:45.107105494+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m
2025-08-13T20:18:45.127030363+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m
2025-08-13T20:18:45.127136296+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uRN/m
2025-08-13T20:18:45.127136296+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=uRN/m
2025-08-13T20:18:45.127194348+00:00 stderr F time="2025-08-13T20:18:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=uRN/m
2025-08-13T20:18:50.911552532+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC
2025-08-13T20:18:50.911657815+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC
2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC
2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h+jCC
2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h+jCC
2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=h+jCC
2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC
2025-08-13T20:18:50.918417388+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC
2025-08-13T20:18:50.945205163+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC
2025-08-13T20:18:50.945329567+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h+jCC
2025-08-13T20:18:50.945342337+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=h+jCC
2025-08-13T20:18:50.945415929+00:00 stderr F time="2025-08-13T20:18:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=h+jCC
2025-08-13T20:18:51.034061451+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3
2025-08-13T20:18:51.034061451+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3
2025-08-13T20:18:51.038585820+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3
2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ltPw3
2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ltPw3
2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=ltPw3
2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3
2025-08-13T20:18:51.038867428+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3
2025-08-13T20:18:51.087387074+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.087558829+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.087558829+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.203088938+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.203088938+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ltPw3 2025-08-13T20:18:51.205561498+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.205561498+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209178852+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209348007+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209386928+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209424589+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=x6WRy 2025-08-13T20:18:51.209455260+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.209484670+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.224655474+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.225014904+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.225014904+00:00 stderr F 
time="2025-08-13T20:18:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.229637206+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:51.229702278+00:00 stderr F time="2025-08-13T20:18:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x6WRy 2025-08-13T20:18:56.030993378+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.030993378+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.048494758+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.060944494+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.534042774+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.534369633+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.534431465+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=dSllX 2025-08-13T20:18:56.536889185+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
id=dSllX 2025-08-13T20:18:56.537126812+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.537176154+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.573850301+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.574244392+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.574735616+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.574735616+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=sZqtq 2025-08-13T20:18:56.574752107+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.574752107+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 
2025-08-13T20:18:56.669666167+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669886183+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669886183+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669886183+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.669886183+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=sZqtq 2025-08-13T20:18:56.670265584+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.670265584+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=kpkmt 2025-08-13T20:18:56.740382367+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.740431948+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.844820709+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.845126518+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.845241561+00:00 stderr F 
time="2025-08-13T20:18:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.845402136+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:56.845440837+00:00 stderr F time="2025-08-13T20:18:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=kpkmt 2025-08-13T20:18:59.231328542+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.231328542+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.265665972+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.265926380+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.266036373+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.266100075+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Cle0k 2025-08-13T20:18:59.266134555+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.266166386+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.285993252+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.286239890+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.286239890+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:18:59.286408614+00:00 stderr F time="2025-08-13T20:18:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Cle0k 2025-08-13T20:19:00.223460944+00:00 stderr F 
time="2025-08-13T20:19:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.223460944+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.237103764+00:00 stderr F time="2025-08-13T20:19:00Z" level=info 
msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.237207807+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.237207807+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:00.237387262+00:00 stderr F time="2025-08-13T20:19:00Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QxWsM 2025-08-13T20:19:03.108079571+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.108079571+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111257432+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111529539+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 
current-pod.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111672513+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111731485+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=bi/DE 2025-08-13T20:19:03.111766506+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.111874869+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.356746543+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.357187816+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.357187816+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace 
id=bi/DE 2025-08-13T20:19:03.357187816+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:03.357187816+00:00 stderr F time="2025-08-13T20:19:03Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bi/DE 2025-08-13T20:19:15.128864790+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.128864790+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.137729953+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
health=true id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.138222637+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.194517545+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.194517545+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.194517545+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:15.202372679+00:00 stderr F time="2025-08-13T20:19:15Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=MW41X 2025-08-13T20:19:31.079602079+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.079602079+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112271453+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112491929+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112491929+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112491929+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=a07yu 2025-08-13T20:19:31.112512779+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.112512779+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.130389610+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.130656218+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.130656218+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.130855944+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=a07yu 2025-08-13T20:19:31.205528878+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.205528878+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.212642071+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.212748464+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.212764055+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.212867548+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=LvVPs 2025-08-13T20:19:31.212867548+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:31.213025682+00:00 stderr F time="2025-08-13T20:19:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:32.421495939+00:00 stderr F time="2025-08-13T20:19:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:32.421495939+00:00 stderr F time="2025-08-13T20:19:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:32.421495939+00:00 stderr F time="2025-08-13T20:19:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:34.120932469+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:34.120932469+00:00 stderr F time="2025-08-13T20:19:34Z" 
level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=LvVPs 2025-08-13T20:19:34.131326806+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.131326806+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:34.999898807+00:00 stderr F time="2025-08-13T20:19:34Z" 
level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.244542949+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.244542949+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.244542949+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.828544909+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.828666523+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=NRKI6 2025-08-13T20:19:35.843087175+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:35.843087175+00:00 stderr F time="2025-08-13T20:19:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info 
msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.254558045+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6ViN4 
2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=6ViN4 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:36.698876273+00:00 stderr F time="2025-08-13T20:19:36Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+GcWt 
2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.337431651+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=+GcWt 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace 
id=Oo754 2025-08-13T20:19:38.579905471+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.584940115+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.699018575+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 
2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Oo754 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.700074545+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702572287+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.702761442+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.718816861+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.718913884+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.718913884+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.719110969+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:38.719110969+00:00 stderr F time="2025-08-13T20:19:38Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3uSaW 2025-08-13T20:19:45.203583479+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.203583479+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.212295488+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace health=true id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.213076901+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.231634581+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.231717194+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.231717194+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.232405603+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:19:45.232405603+00:00 stderr F time="2025-08-13T20:19:45Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=e37o0 2025-08-13T20:22:23.582698448+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.582698448+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.584282953+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.584467578+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.591555321+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.591886760+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="checked registry server health" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace health=true id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.592366794+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.593759534+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.593759534+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.593759534+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=nw+2Z 2025-08-13T20:22:23.593865707+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.593865707+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.608149195+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.608293229+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.608306630+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.608468524+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.608482785+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.608669270+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=OTm45 2025-08-13T20:22:23.608669270+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.608716671+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.608923097+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.608941598+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.609014160+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.609014160+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=nw+2Z 2025-08-13T20:22:23.609103472+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.609138513+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.612150039+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.612229572+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.612336435+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.612375166+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.612413887+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Yo163 2025-08-13T20:22:23.612444628+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.612503430+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.612745146+00:00 stderr F 
time="2025-08-13T20:22:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=JcgEP 2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.612745146+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.789613431+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.789740455+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.789757655+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.790150497+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 2025-08-13T20:22:23.790150497+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=JcgEP 
2025-08-13T20:22:23.987111926+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.987264800+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.987282611+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.987452915+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:22:23.987452915+00:00 stderr F time="2025-08-13T20:22:23Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Yo163 2025-08-13T20:23:25.274452558+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=WKAf+ namespace=openshift-network-diagnostics 2025-08-13T20:23:25.274452558+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=WKAf+ namespace=openshift-network-diagnostics 2025-08-13T20:23:25.274585281+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=26pRh namespace=openshift-infra 2025-08-13T20:23:25.274585281+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=26pRh 
namespace=openshift-infra 2025-08-13T20:23:25.278488203+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=WKAf+ namespace=openshift-network-diagnostics 2025-08-13T20:23:25.278587946+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=v44d/ namespace=openshift-operator-lifecycle-manager 2025-08-13T20:23:25.278587946+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=v44d/ namespace=openshift-operator-lifecycle-manager 2025-08-13T20:23:25.278848883+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=26pRh namespace=openshift-infra 2025-08-13T20:23:25.278907275+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=3ImAl namespace=openshift-ovn-kubernetes 2025-08-13T20:23:25.278907275+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=3ImAl namespace=openshift-ovn-kubernetes 2025-08-13T20:23:25.281244742+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=v44d/ namespace=openshift-operator-lifecycle-manager 2025-08-13T20:23:25.281244742+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=myf9p namespace=openshift-apiserver 2025-08-13T20:23:25.281244742+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=myf9p namespace=openshift-apiserver 2025-08-13T20:23:25.282133977+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=3ImAl namespace=openshift-ovn-kubernetes 2025-08-13T20:23:25.282133977+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=aQ7Eq namespace=openshift-console-operator 
2025-08-13T20:23:25.282133977+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=aQ7Eq namespace=openshift-console-operator 2025-08-13T20:23:25.285119902+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=myf9p namespace=openshift-apiserver 2025-08-13T20:23:25.285119902+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=UuCS+ namespace=openshift-dns 2025-08-13T20:23:25.285119902+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=UuCS+ namespace=openshift-dns 2025-08-13T20:23:25.285119902+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=aQ7Eq namespace=openshift-console-operator 2025-08-13T20:23:25.285220695+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=SHiBK namespace=openshift-ingress-operator 2025-08-13T20:23:25.285326038+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=SHiBK namespace=openshift-ingress-operator 2025-08-13T20:23:25.287261794+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=UuCS+ namespace=openshift-dns 2025-08-13T20:23:25.287322135+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=1wSOa namespace=openshift 2025-08-13T20:23:25.287322135+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=1wSOa namespace=openshift 2025-08-13T20:23:25.287655265+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=SHiBK namespace=openshift-ingress-operator 2025-08-13T20:23:25.287655265+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=7OGDf 
namespace=openshift-cloud-platform-infra 2025-08-13T20:23:25.287655265+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=7OGDf namespace=openshift-cloud-platform-infra 2025-08-13T20:23:25.290094095+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift" id=1wSOa namespace=openshift 2025-08-13T20:23:25.290094095+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=zioiR namespace=openshift-ingress 2025-08-13T20:23:25.290094095+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=zioiR namespace=openshift-ingress 2025-08-13T20:23:25.290204408+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=7OGDf namespace=openshift-cloud-platform-infra 2025-08-13T20:23:25.290217238+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=/X4+y namespace=openshift-kube-apiserver-operator 2025-08-13T20:23:25.290226318+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=/X4+y namespace=openshift-kube-apiserver-operator 2025-08-13T20:23:25.480389203+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=zioiR namespace=openshift-ingress 2025-08-13T20:23:25.480389203+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=27eAx namespace=openshift-monitoring 2025-08-13T20:23:25.480389203+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=27eAx namespace=openshift-monitoring 2025-08-13T20:23:25.678599768+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=/X4+y namespace=openshift-kube-apiserver-operator 
2025-08-13T20:23:25.678599768+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=9qgNb namespace=openshift-controller-manager-operator 2025-08-13T20:23:25.678599768+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=9qgNb namespace=openshift-controller-manager-operator 2025-08-13T20:23:25.880302963+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=27eAx namespace=openshift-monitoring 2025-08-13T20:23:25.880302963+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="resolving sources" id=8Gm9g namespace=openshift-machine-api 2025-08-13T20:23:25.880302963+00:00 stderr F time="2025-08-13T20:23:25Z" level=info msg="checking if subscriptions need update" id=8Gm9g namespace=openshift-machine-api 2025-08-13T20:23:26.079347861+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=9qgNb namespace=openshift-controller-manager-operator 2025-08-13T20:23:26.079520806+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=+A9qY namespace=openshift-service-ca-operator 2025-08-13T20:23:26.079556917+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=+A9qY namespace=openshift-service-ca-operator 2025-08-13T20:23:26.279465021+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=8Gm9g namespace=openshift-machine-api 2025-08-13T20:23:26.279616845+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=52xeD namespace=openshift-multus 2025-08-13T20:23:26.279665146+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=52xeD namespace=openshift-multus 2025-08-13T20:23:26.479201109+00:00 stderr F time="2025-08-13T20:23:26Z" 
level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=+A9qY namespace=openshift-service-ca-operator 2025-08-13T20:23:26.479201109+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=rigbO namespace=openshift-ovirt-infra 2025-08-13T20:23:26.479201109+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=rigbO namespace=openshift-ovirt-infra 2025-08-13T20:23:26.678405412+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=52xeD namespace=openshift-multus 2025-08-13T20:23:26.678405412+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=yPz0V namespace=kube-node-lease 2025-08-13T20:23:26.678462454+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=yPz0V namespace=kube-node-lease 2025-08-13T20:23:26.878764609+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=rigbO namespace=openshift-ovirt-infra 2025-08-13T20:23:26.878764609+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="resolving sources" id=qg5x0 namespace=openshift-authentication 2025-08-13T20:23:26.878764609+00:00 stderr F time="2025-08-13T20:23:26Z" level=info msg="checking if subscriptions need update" id=qg5x0 namespace=openshift-authentication 2025-08-13T20:23:27.078688941+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=yPz0V namespace=kube-node-lease 2025-08-13T20:23:27.078688941+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=Na6cW namespace=openshift-ingress-canary 2025-08-13T20:23:27.078688941+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=Na6cW namespace=openshift-ingress-canary 
2025-08-13T20:23:27.278844792+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=qg5x0 namespace=openshift-authentication 2025-08-13T20:23:27.278904683+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=ofcr8 namespace=openshift-kube-scheduler 2025-08-13T20:23:27.278904683+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=ofcr8 namespace=openshift-kube-scheduler 2025-08-13T20:23:27.480060842+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=Na6cW namespace=openshift-ingress-canary 2025-08-13T20:23:27.480060842+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=pbqJ6 namespace=openshift-nutanix-infra 2025-08-13T20:23:27.480105244+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=pbqJ6 namespace=openshift-nutanix-infra 2025-08-13T20:23:27.681057427+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=ofcr8 namespace=openshift-kube-scheduler 2025-08-13T20:23:27.681057427+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=Gox56 namespace=openshift-service-ca 2025-08-13T20:23:27.681057427+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="checking if subscriptions need update" id=Gox56 namespace=openshift-service-ca 2025-08-13T20:23:27.879351374+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=pbqJ6 namespace=openshift-nutanix-infra 2025-08-13T20:23:27.879351374+00:00 stderr F time="2025-08-13T20:23:27Z" level=info msg="resolving sources" id=75adD namespace=openshift-openstack-infra 2025-08-13T20:23:27.879391395+00:00 stderr F time="2025-08-13T20:23:27Z" level=info 
msg="checking if subscriptions need update" id=75adD namespace=openshift-openstack-infra 2025-08-13T20:23:28.079429683+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=Gox56 namespace=openshift-service-ca 2025-08-13T20:23:28.079655029+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=lEKVF namespace=openshift-route-controller-manager 2025-08-13T20:23:28.079693940+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=lEKVF namespace=openshift-route-controller-manager 2025-08-13T20:23:28.281998772+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=75adD namespace=openshift-openstack-infra 2025-08-13T20:23:28.281998772+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=95zqT namespace=openshift-apiserver-operator 2025-08-13T20:23:28.281998772+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=95zqT namespace=openshift-apiserver-operator 2025-08-13T20:23:28.478929790+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=lEKVF namespace=openshift-route-controller-manager 2025-08-13T20:23:28.478967151+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=e20x5 namespace=openshift-config 2025-08-13T20:23:28.478967151+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=e20x5 namespace=openshift-config 2025-08-13T20:23:28.679541924+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=95zqT namespace=openshift-apiserver-operator 2025-08-13T20:23:28.679586375+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving 
sources" id=w/k7b namespace=openshift-dns-operator 2025-08-13T20:23:28.679586375+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=w/k7b namespace=openshift-dns-operator 2025-08-13T20:23:28.879235711+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="No subscriptions were found in namespace openshift-config" id=e20x5 namespace=openshift-config 2025-08-13T20:23:28.879235711+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="resolving sources" id=4F67A namespace=openshift-console 2025-08-13T20:23:28.879292752+00:00 stderr F time="2025-08-13T20:23:28Z" level=info msg="checking if subscriptions need update" id=4F67A namespace=openshift-console 2025-08-13T20:23:29.078267459+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=w/k7b namespace=openshift-dns-operator 2025-08-13T20:23:29.078267459+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=VxLwh namespace=openshift-console-user-settings 2025-08-13T20:23:29.078267459+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=VxLwh namespace=openshift-console-user-settings 2025-08-13T20:23:29.279920622+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-console" id=4F67A namespace=openshift-console 2025-08-13T20:23:29.279920622+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=ncGTk namespace=openshift-host-network 2025-08-13T20:23:29.279920622+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=ncGTk namespace=openshift-host-network 2025-08-13T20:23:29.479273590+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=VxLwh namespace=openshift-console-user-settings 
2025-08-13T20:23:29.480161175+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=kUVi4 namespace=openshift-kni-infra 2025-08-13T20:23:29.480161175+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=kUVi4 namespace=openshift-kni-infra 2025-08-13T20:23:29.679517303+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=ncGTk namespace=openshift-host-network 2025-08-13T20:23:29.679517303+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=AQ+aa namespace=openshift-kube-controller-manager 2025-08-13T20:23:29.679517303+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=AQ+aa namespace=openshift-kube-controller-manager 2025-08-13T20:23:29.879731814+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=kUVi4 namespace=openshift-kni-infra 2025-08-13T20:23:29.879731814+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="resolving sources" id=lZ2JR namespace=kube-public 2025-08-13T20:23:29.879731814+00:00 stderr F time="2025-08-13T20:23:29Z" level=info msg="checking if subscriptions need update" id=lZ2JR namespace=kube-public 2025-08-13T20:23:30.078891927+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=AQ+aa namespace=openshift-kube-controller-manager 2025-08-13T20:23:30.078938818+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=0sdrF namespace=openshift-cluster-machine-approver 2025-08-13T20:23:30.078938818+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=0sdrF namespace=openshift-cluster-machine-approver 2025-08-13T20:23:30.280734605+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No 
subscriptions were found in namespace kube-public" id=lZ2JR namespace=kube-public 2025-08-13T20:23:30.280734605+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=jWYDM namespace=openshift-cluster-version 2025-08-13T20:23:30.280734605+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=jWYDM namespace=openshift-cluster-version 2025-08-13T20:23:30.478089306+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=0sdrF namespace=openshift-cluster-machine-approver 2025-08-13T20:23:30.478089306+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=ODqCf namespace=openshift-vsphere-infra 2025-08-13T20:23:30.478089306+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=ODqCf namespace=openshift-vsphere-infra 2025-08-13T20:23:30.685425780+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=jWYDM namespace=openshift-cluster-version 2025-08-13T20:23:30.685467922+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=YADVZ namespace=default 2025-08-13T20:23:30.685467922+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=YADVZ namespace=default 2025-08-13T20:23:30.878117218+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=ODqCf namespace=openshift-vsphere-infra 2025-08-13T20:23:30.878117218+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="resolving sources" id=0dv1p namespace=openshift-etcd 2025-08-13T20:23:30.878117218+00:00 stderr F time="2025-08-13T20:23:30Z" level=info msg="checking if subscriptions need update" id=0dv1p namespace=openshift-etcd 2025-08-13T20:23:31.080339247+00:00 stderr F 
time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace default" id=YADVZ namespace=default 2025-08-13T20:23:31.080339247+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=B89A7 namespace=openshift-operators 2025-08-13T20:23:31.080339247+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=B89A7 namespace=openshift-operators 2025-08-13T20:23:31.279559751+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=0dv1p namespace=openshift-etcd 2025-08-13T20:23:31.279559751+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=h8mP3 namespace=openshift-kube-controller-manager-operator 2025-08-13T20:23:31.279559751+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=h8mP3 namespace=openshift-kube-controller-manager-operator 2025-08-13T20:23:31.480312099+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=B89A7 namespace=openshift-operators 2025-08-13T20:23:31.480312099+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=LBeOL namespace=openshift-machine-config-operator 2025-08-13T20:23:31.480312099+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=LBeOL namespace=openshift-machine-config-operator 2025-08-13T20:23:31.679576314+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=h8mP3 namespace=openshift-kube-controller-manager-operator 2025-08-13T20:23:31.679576314+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=h1KWz namespace=openshift-network-node-identity 2025-08-13T20:23:31.679576314+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking 
if subscriptions need update" id=h1KWz namespace=openshift-network-node-identity 2025-08-13T20:23:31.879869898+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=LBeOL namespace=openshift-machine-config-operator 2025-08-13T20:23:31.879929879+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="resolving sources" id=34qy7 namespace=openshift-user-workload-monitoring 2025-08-13T20:23:31.879929879+00:00 stderr F time="2025-08-13T20:23:31Z" level=info msg="checking if subscriptions need update" id=34qy7 namespace=openshift-user-workload-monitoring 2025-08-13T20:23:32.079894044+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=h1KWz namespace=openshift-network-node-identity 2025-08-13T20:23:32.079894044+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=LE6nr namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:23:32.079894044+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=LE6nr namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:23:32.279527420+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=34qy7 namespace=openshift-user-workload-monitoring 2025-08-13T20:23:32.279527420+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=ZqsjM namespace=openshift-network-operator 2025-08-13T20:23:32.279527420+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=ZqsjM namespace=openshift-network-operator 2025-08-13T20:23:32.477210620+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=LE6nr 
namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:23:32.477394285+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=489c1 namespace=openshift-oauth-apiserver 2025-08-13T20:23:32.477449906+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=489c1 namespace=openshift-oauth-apiserver 2025-08-13T20:23:32.679878262+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=ZqsjM namespace=openshift-network-operator 2025-08-13T20:23:32.679878262+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=lrgTb namespace=openshift-node 2025-08-13T20:23:32.679878262+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=lrgTb namespace=openshift-node 2025-08-13T20:23:32.878000334+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=489c1 namespace=openshift-oauth-apiserver 2025-08-13T20:23:32.878000334+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="resolving sources" id=vq/Dx namespace=openshift-controller-manager 2025-08-13T20:23:32.878000334+00:00 stderr F time="2025-08-13T20:23:32Z" level=info msg="checking if subscriptions need update" id=vq/Dx namespace=openshift-controller-manager 2025-08-13T20:23:33.079031800+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-node" id=lrgTb namespace=openshift-node 2025-08-13T20:23:33.079093591+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=4aVqI namespace=openshift-kube-storage-version-migrator 2025-08-13T20:23:33.079093591+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=4aVqI namespace=openshift-kube-storage-version-migrator 2025-08-13T20:23:33.281458495+00:00 
stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=vq/Dx namespace=openshift-controller-manager 2025-08-13T20:23:33.281458495+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=X2XhN namespace=openshift-marketplace 2025-08-13T20:23:33.281458495+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=X2XhN namespace=openshift-marketplace 2025-08-13T20:23:33.478631100+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=4aVqI namespace=openshift-kube-storage-version-migrator 2025-08-13T20:23:33.478631100+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=L94Te namespace=openshift-config-operator 2025-08-13T20:23:33.478631100+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=L94Te namespace=openshift-config-operator 2025-08-13T20:23:33.680558511+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=X2XhN namespace=openshift-marketplace 2025-08-13T20:23:33.680625313+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=rcLeM namespace=openshift-kube-scheduler-operator 2025-08-13T20:23:33.680625313+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=rcLeM namespace=openshift-kube-scheduler-operator 2025-08-13T20:23:33.880255458+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=L94Te namespace=openshift-config-operator 2025-08-13T20:23:33.880255458+00:00 stderr F time="2025-08-13T20:23:33Z" level=info msg="resolving sources" id=ByakK namespace=openshift-image-registry 2025-08-13T20:23:33.880255458+00:00 stderr F 
time="2025-08-13T20:23:33Z" level=info msg="checking if subscriptions need update" id=ByakK namespace=openshift-image-registry 2025-08-13T20:23:34.080902143+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=rcLeM namespace=openshift-kube-scheduler-operator 2025-08-13T20:23:34.080902143+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=tqBUa namespace=hostpath-provisioner 2025-08-13T20:23:34.080902143+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=tqBUa namespace=hostpath-provisioner 2025-08-13T20:23:34.279279401+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace openshift-image-registry" id=ByakK namespace=openshift-image-registry 2025-08-13T20:23:34.279338033+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=ttsJq namespace=openshift-authentication-operator 2025-08-13T20:23:34.279338033+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=ttsJq namespace=openshift-authentication-operator 2025-08-13T20:23:34.483005584+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=tqBUa namespace=hostpath-provisioner 2025-08-13T20:23:34.483005584+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=A3r/Z namespace=openshift-etcd-operator 2025-08-13T20:23:34.483005584+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=A3r/Z namespace=openshift-etcd-operator 2025-08-13T20:23:34.679611322+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=ttsJq namespace=openshift-authentication-operator 2025-08-13T20:23:34.679729226+00:00 stderr F 
time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=8fxIB namespace=openshift-cluster-storage-operator 2025-08-13T20:23:34.679864840+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=8fxIB namespace=openshift-cluster-storage-operator 2025-08-13T20:23:34.879157535+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=A3r/Z namespace=openshift-etcd-operator 2025-08-13T20:23:34.879157535+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="resolving sources" id=w6Ebf namespace=openshift-config-managed 2025-08-13T20:23:34.879157535+00:00 stderr F time="2025-08-13T20:23:34Z" level=info msg="checking if subscriptions need update" id=w6Ebf namespace=openshift-config-managed 2025-08-13T20:23:35.079941204+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=8fxIB namespace=openshift-cluster-storage-operator 2025-08-13T20:23:35.079941204+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="resolving sources" id=27YHB namespace=openshift-kube-apiserver 2025-08-13T20:23:35.079941204+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="checking if subscriptions need update" id=27YHB namespace=openshift-kube-apiserver 2025-08-13T20:23:35.279440186+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=w6Ebf namespace=openshift-config-managed 2025-08-13T20:23:35.279440186+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="resolving sources" id=g5QKY namespace=kube-system 2025-08-13T20:23:35.279440186+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="checking if subscriptions need update" id=g5QKY namespace=kube-system 2025-08-13T20:23:35.479221045+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace 
openshift-kube-apiserver" id=27YHB namespace=openshift-kube-apiserver 2025-08-13T20:23:35.479338269+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="resolving sources" id=Kw5Sr namespace=openshift-cloud-network-config-controller 2025-08-13T20:23:35.479374890+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="checking if subscriptions need update" id=Kw5Sr namespace=openshift-cloud-network-config-controller 2025-08-13T20:23:35.679251252+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace kube-system" id=g5QKY namespace=kube-system 2025-08-13T20:23:35.679371545+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="resolving sources" id=tz9qS namespace=openshift-cluster-samples-operator 2025-08-13T20:23:35.679412346+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="checking if subscriptions need update" id=tz9qS namespace=openshift-cluster-samples-operator 2025-08-13T20:23:35.879907187+00:00 stderr F time="2025-08-13T20:23:35Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=Kw5Sr namespace=openshift-cloud-network-config-controller 2025-08-13T20:23:36.080542461+00:00 stderr F time="2025-08-13T20:23:36Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=tz9qS namespace=openshift-cluster-samples-operator 2025-08-13T20:26:10.557713661+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.557975158+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.558019659+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.558225385+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.568387246+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.568607772+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569072225+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569218189+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569314292+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=jjdyi 2025-08-13T20:26:10.569368614+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569411965+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.569503438+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.569503438+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.569521388+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=82cCN 2025-08-13T20:26:10.569521388+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.569549739+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.586239956+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586475883+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586475883+00:00 stderr F 
time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586656668+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586656668+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jjdyi 2025-08-13T20:26:10.586729980+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.586942976+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.587275366+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587413600+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587457221+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587657317+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587725259+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=82cCN 2025-08-13T20:26:10.587913264+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.588059168+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.589939162+00:00 
stderr F time="2025-08-13T20:26:10Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.589939162+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.591514757+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.591539028+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.591551748+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.591563829+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=N73fY 2025-08-13T20:26:10.591573769+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.591573769+00:00 stderr F 
time="2025-08-13T20:26:10Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.764340349+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.764581136+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.764621647+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.764746011+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.764856804+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=7RIXq 2025-08-13T20:26:10.963193385+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.963378190+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.963378190+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.963556586+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:10.963570866+00:00 stderr F time="2025-08-13T20:26:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=N73fY 2025-08-13T20:26:51.378101812+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.378101812+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.378541324+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.378604646+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 
2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.383734863+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.384194766+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.384194766+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.384215967+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label 
selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.384225547+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Fbx2n 2025-08-13T20:26:51.384234957+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.384234957+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.407581495+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.407823032+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.407823032+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.408155261+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.408155261+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fbx2n 2025-08-13T20:26:51.408313286+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.408313286+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="requeuing registry server sync based on polling interval 
10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=EhycU 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.410325153+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.412890567+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.413032331+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.413032331+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.413032331+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=XVosD 2025-08-13T20:26:51.413051251+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.413051251+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensuring registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.415528632+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.415617175+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.415617175+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.415617175+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=mMHet 2025-08-13T20:26:51.415630715+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.415640495+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.582409004+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.582568818+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.582568818+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.582686762+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.582739173+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XVosD 2025-08-13T20:26:51.787308723+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.787364754+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.787364754+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=mMHet 
2025-08-13T20:26:51.787450447+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:26:51.787450447+00:00 stderr F time="2025-08-13T20:26:51Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=mMHet 2025-08-13T20:27:02.910862330+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="resolving sources" id=kfcha namespace=openshift-monitoring 2025-08-13T20:27:02.910862330+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="checking if subscriptions need update" id=kfcha namespace=openshift-monitoring 2025-08-13T20:27:02.910954953+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="resolving sources" id=L/JIx namespace=openshift-operator-lifecycle-manager 2025-08-13T20:27:02.910954953+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="checking if subscriptions need update" id=L/JIx namespace=openshift-operator-lifecycle-manager 2025-08-13T20:27:02.919243130+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=L/JIx namespace=openshift-operator-lifecycle-manager 2025-08-13T20:27:02.919243130+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="resolving sources" id=OATTi namespace=openshift-operators 2025-08-13T20:27:02.919243130+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="checking if subscriptions need update" id=OATTi namespace=openshift-operators 2025-08-13T20:27:02.919337803+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=kfcha namespace=openshift-monitoring 2025-08-13T20:27:02.921322119+00:00 stderr F time="2025-08-13T20:27:02Z" level=info msg="No subscriptions were found in namespace openshift-operators" 
id=OATTi namespace=openshift-operators 2025-08-13T20:27:05.587701902+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.587701902+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.588240057+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.588240057+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.592149169+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.592277173+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew 2025-08-13T20:27:05.592456148+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Dg+Zu 2025-08-13T20:27:05.592527050+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Dg+Zu 
2025-08-13T20:27:05.592573421+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Dg+Zu
2025-08-13T20:27:05.592618812+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.592650153+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.592747646+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.592747646+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.592764706+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=u13ew
2025-08-13T20:27:05.592825458+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.592825458+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.604485182+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.604519133+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.604519133+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="catalog update required at 2025-08-13 20:27:05.604565974 +0000 UTC m=+1681.255954091" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Dg+Zu
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.605890332+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.608475986+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.609234727+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.609306930+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.609376271+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=7QFOv
2025-08-13T20:27:05.609411883+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.609442743+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.630010321+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.630208127+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.630247778+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.630540807+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="catalog update required at 2025-08-13 20:27:05.63031641 +0000 UTC m=+1681.281704527" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.631330109+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=u13ew
2025-08-13T20:27:05.648020906+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:05.648020906+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:05.817190904+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=7QFOv
2025-08-13T20:27:05.845462672+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:05.845653888+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:05.992745104+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:05.992979530+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:05.993040852+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:05.993040852+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=uIXIK
2025-08-13T20:27:05.993040852+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:05.993065643+00:00 stderr F time="2025-08-13T20:27:05Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:06.197191580+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.197351684+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.197351684+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.197367215+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=X7mdk
2025-08-13T20:27:06.197367215+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.197367215+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.793177852+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:06.793548082+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:06.793593863+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:06.793714697+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:06.793750328+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=uIXIK
2025-08-13T20:27:06.793931393+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:06.794043356+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=X7mdk
2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:06.992609614+00:00 stderr F time="2025-08-13T20:27:06Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:07.192680665+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.192848760+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.193829028+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.193829028+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=e6I4b
2025-08-13T20:27:07.193829028+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.193829028+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.393473886+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=wawcU
2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:07.396466352+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:07.992046822+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.992230498+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.992230498+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.992306210+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e6I4b
2025-08-13T20:27:07.993027990+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:07.993027990+00:00 stderr F time="2025-08-13T20:27:07Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:08.199321409+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:08.199321409+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:08.199321409+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:08.199321409+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=wawcU
2025-08-13T20:27:08.199428362+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:08.199428362+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Uui59
2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:08.396344503+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:08.593088429+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:08.593317615+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:08.593372507+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:08.593428788+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=QgpZY
2025-08-13T20:27:08.593471799+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:08.593515401+00:00 stderr F time="2025-08-13T20:27:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Uui59
2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.194541236+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.392242049+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:09.393138455+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:09.393260138+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:09.393695830+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=QgpZY
2025-08-13T20:27:09.394145433+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:09.394391530+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=5iLo6
2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.592765033+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:09.798720253+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:09.799315680+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:09.799440413+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:09.799544036+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=1ECed
2025-08-13T20:27:09.799696700+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:09.799976328+00:00 stderr F time="2025-08-13T20:27:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:10.394728315+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:10.395145047+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:10.395195749+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:10.395315932+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=5iLo6
2025-08-13T20:27:10.619287536+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:10.619287536+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:10.619287536+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:10.619287536+00:00 stderr F time="2025-08-13T20:27:10Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1ECed
2025-08-13T20:27:15.986079095+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD
2025-08-13T20:27:15.986217479+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD
2025-08-13T20:27:15.994691361+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD
2025-08-13T20:27:15.995077322+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=usioD
2025-08-13T20:27:15.995133143+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=usioD
2025-08-13T20:27:15.995188995+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=usioD
2025-08-13T20:27:15.995220696+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD
2025-08-13T20:27:15.995250277+00:00 stderr F time="2025-08-13T20:27:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD
2025-08-13T20:27:16.012711636+00:00 stderr F time="2025-08-13T20:27:16Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:16.012711636+00:00 stderr F time="2025-08-13T20:27:16Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:16.012711636+00:00 stderr F time="2025-08-13T20:27:16Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:16.013114138+00:00 stderr F time="2025-08-13T20:27:16Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=usioD 2025-08-13T20:27:18.640321989+00:00 stderr F time="2025-08-13T20:27:18Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:18.640387931+00:00 stderr F time="2025-08-13T20:27:18Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.021962132+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.022034294+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.022034294+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.022140117+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=jDV0J 2025-08-13T20:27:19.022140117+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.022140117+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.108886688+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.109065703+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.340868041+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.340868041+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.340868041+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.340868041+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jDV0J 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=4kQ+B 
2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.342131307+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.512700575+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.512700575+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.512700575+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:19.512700575+00:00 stderr F time="2025-08-13T20:27:19Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=4kQ+B 2025-08-13T20:27:20.031982022+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.031982022+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038391595+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038626582+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038665963+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038708324+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=o+5uM 2025-08-13T20:27:20.038754476+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.038842488+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.053014683+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.053014683+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.053014683+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:20.053014683+00:00 stderr F time="2025-08-13T20:27:20Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=o+5uM 2025-08-13T20:27:26.208424501+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.208424501+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218349525+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218545621+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218545621+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218545621+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Iu40F 2025-08-13T20:27:26.218545621+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.218567871+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.236246947+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.236426812+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.236426812+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Iu40F 2025-08-13T20:27:26.236544435+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Iu40F 
2025-08-13T20:27:26.341705882+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.341705882+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348538678+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348721433+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348721433+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348721433+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=V95ue 2025-08-13T20:27:26.348721433+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.348749164+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 
2025-08-13T20:27:26.362693263+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.363116775+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.363181356+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:26.363332101+00:00 stderr F time="2025-08-13T20:27:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=V95ue 2025-08-13T20:27:27.200910640+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.200910640+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209475655+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209592758+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209592758+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209592758+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=ppEqc 2025-08-13T20:27:27.209613099+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.209613099+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.243204249+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.243204249+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.243204249+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.243302082+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.243302082+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254441610+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254681467+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254681467+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254701548+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=YUGao 2025-08-13T20:27:27.254713588+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.254725039+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensuring 
registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.261695998+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.261695998+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ppEqc 2025-08-13T20:27:27.263760517+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.263760517+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274579836+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274873295+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274891105+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274907706+00:00 stderr F time="2025-08-13T20:27:27Z" level=info 
msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=lMTM+ 2025-08-13T20:27:27.274907706+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274960297+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.274960297+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.275287447+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.275304057+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.435446946+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.435533299+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=YUGao 2025-08-13T20:27:27.435661092+00:00 stderr F 
time="2025-08-13T20:27:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:27.435724374+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:27.612335504+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.612424517+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.612424517+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:27.813162157+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:27.813229039+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:27.813229039+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and 
matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:27.813229039+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Pgq+4 2025-08-13T20:27:27.813255879+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:27.813255879+00:00 stderr F time="2025-08-13T20:27:27Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:28.018274392+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:28.018274392+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lMTM+ 2025-08-13T20:27:28.414124611+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:28.414321676+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:28.414321676+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="of 1 pods matching label selector, 1 have the 
correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:28.614579873+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:28.614579873+00:00 stderr F time="2025-08-13T20:27:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Pgq+4 2025-08-13T20:27:29.587962126+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.587962126+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.594576925+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.594716109+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.594729579+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.594748110+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=VPS8m 2025-08-13T20:27:29.594748110+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.594748110+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.616736668+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.616736668+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.622042850+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.622042850+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.622042850+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.622042850+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=VPS8m 2025-08-13T20:27:29.630929544+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.631161171+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.631161171+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.631176021+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=xYlza 2025-08-13T20:27:29.631176021+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.631186272+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.813262168+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.813532566+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.813624288+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:29.813756632+00:00 stderr F time="2025-08-13T20:27:29Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=xYlza 2025-08-13T20:27:30.145874999+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.145874999+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=jA0Zq 2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.151888081+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.190230007+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.190230007+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.413088979+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.414091247+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.414091247+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.414091247+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=DnChm 2025-08-13T20:27:30.414091247+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.414129448+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:30.612113699+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.612253553+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.612253553+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace 
id=jA0Zq 2025-08-13T20:27:30.612269604+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.612279504+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jA0Zq 2025-08-13T20:27:30.612460769+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:30.612460769+00:00 stderr F time="2025-08-13T20:27:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.014216317+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.014375462+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.014390642+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.014555617+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace health=true id=GOJCo 2025-08-13T20:27:31.014555617+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.014555617+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.212750144+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:31.212984231+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:31.212984231+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:31.213207357+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:31.213207357+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=DnChm 2025-08-13T20:27:31.213262039+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="syncing catalog source" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.213262039+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.612159995+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.612246318+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.612258898+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.612268088+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=b/75+ 2025-08-13T20:27:31.612268088+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.612277558+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:31.812117543+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="searching for current pods" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.812200225+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.812200225+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.812390290+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:31.812390290+00:00 stderr F time="2025-08-13T20:27:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GOJCo 2025-08-13T20:27:32.212036668+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:32.212237254+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:32.212237254+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:32.212348927+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:32.212348927+00:00 stderr F time="2025-08-13T20:27:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=b/75+ 2025-08-13T20:27:35.632209844+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.632209844+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.635748505+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.636015453+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.636015453+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.636046523+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=Wr/Yi 2025-08-13T20:27:35.636060424+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.636060424+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.646249685+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.646408050+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.646408050+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.646563154+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.646563154+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=Wr/Yi 2025-08-13T20:27:35.818397507+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.818397507+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.822343920+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=XKv1U 2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.822475804+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="ensuring registry server" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.833224791+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.833372116+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.833372116+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.834251871+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:27:35.834251871+00:00 stderr F time="2025-08-13T20:27:35Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=XKv1U 2025-08-13T20:28:43.254469814+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.254538446+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.259174940+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for 
current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.260038304+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.260137017+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.260280481+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=lmNDI 2025-08-13T20:28:43.260280481+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.260280481+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.279125173+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.279191525+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace 
id=lmNDI 2025-08-13T20:28:43.279204205+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.279445772+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="catalog update required at 2025-08-13 20:28:43.279262877 +0000 UTC m=+1778.930651114" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.310497095+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=lmNDI 2025-08-13T20:28:43.333983480+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.333983480+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching 
label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=+dw+v 2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.353519032+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.376674797+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.377066058+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.377131780+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.377261594+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="requeueing registry server for catalog update check: update pod 
not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=+dw+v 2025-08-13T20:28:43.377602304+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.377688916+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.385157771+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.385536162+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.385599744+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.385647325+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=orR+p 2025-08-13T20:28:43.385686206+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.385724237+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="ensuring registry 
server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.407342689+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.407498503+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.407516784+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.407669618+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=orR+p 2025-08-13T20:28:43.407986017+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.407986017+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.460921429+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.460984801+00:00 stderr F time="2025-08-13T20:28:43Z" 
level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.461026952+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.461044412+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=VzCkx 2025-08-13T20:28:43.461056563+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.461056563+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.863512192+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.864122329+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.864190921+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:43.864355826+00:00 stderr F time="2025-08-13T20:28:43Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=VzCkx 2025-08-13T20:28:44.065517389+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.065517389+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069128302+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069128302+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069324688+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069324688+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="checked registry server health" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace health=true id=hkCnO 2025-08-13T20:28:44.069324688+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.069324688+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.460311807+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.460738529+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.460925065+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.461180082+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=hkCnO 2025-08-13T20:28:44.686685004+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.686893840+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="synchronizing registry server" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692123461+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692302356+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692302356+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692316516+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=j9Ilt 2025-08-13T20:28:44.692326567+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:44.692336137+00:00 stderr F time="2025-08-13T20:28:44Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.059853951+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.060085458+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.060085458+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.060159620+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=j9Ilt 2025-08-13T20:28:45.700760064+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.701313660+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.706951412+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.707230380+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.707278661+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.707317832+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=1eV/T 2025-08-13T20:28:45.707350223+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.707380704+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.720619575+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.721039397+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.721089478+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:45.721202291+00:00 stderr F time="2025-08-13T20:28:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=1eV/T 2025-08-13T20:28:58.823046762+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.823046762+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828352615+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828554991+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828554991+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828686654+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=W6dV0 2025-08-13T20:28:58.828686654+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.828686654+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="ensuring registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.853182689+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.853304382+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.853304382+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:58.853536609+00:00 stderr F time="2025-08-13T20:28:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=W6dV0 2025-08-13T20:28:59.821934486+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.821934486+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835347291+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835511536+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835511536+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835528157+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Cs38y 2025-08-13T20:28:59.835528157+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.835539857+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.853963977+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.853963977+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.853963977+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:28:59.853963977+00:00 stderr F time="2025-08-13T20:28:59Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Cs38y 2025-08-13T20:29:03.819867187+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.819867187+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828040382+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828101454+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828101454+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828101454+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
health=true id=XHcJM 2025-08-13T20:29:03.828116945+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.828116945+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.846457512+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.846674518+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.846674518+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.846767951+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=XHcJM 2025-08-13T20:29:03.847152722+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.847152722+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850550909+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.850719544+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.869118033+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.869326689+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.869343280+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.869458703+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1fMF/ 2025-08-13T20:29:03.952824390+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.954735224+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958091961+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958263086+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958312147+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958349778+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=31e1z 2025-08-13T20:29:03.958381039+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.958411290+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.974093571+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.974093571+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.974093571+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.990631976+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.990631976+00:00 stderr F 
time="2025-08-13T20:29:03Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=31e1z 2025-08-13T20:29:03.991181802+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:03.991248434+00:00 stderr F time="2025-08-13T20:29:03Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024139789+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024275293+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024275293+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=MbB2T 2025-08-13T20:29:04.024318385+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=MbB2T 2025-08-13T20:29:04.024340705+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T 
2025-08-13T20:29:04.024340705+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T
2025-08-13T20:29:04.422918813+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T
2025-08-13T20:29:04.423215961+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=MbB2T
2025-08-13T20:29:04.423215961+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=MbB2T
2025-08-13T20:29:04.625743933+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T
2025-08-13T20:29:04.625743933+00:00 stderr F time="2025-08-13T20:29:04Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=MbB2T
2025-08-13T20:29:06.308482025+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u
2025-08-13T20:29:06.308965369+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u
2025-08-13T20:29:06.313576681+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u
2025-08-13T20:29:06.313885620+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QlN5u
2025-08-13T20:29:06.313946132+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QlN5u
2025-08-13T20:29:06.314136067+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=QlN5u
2025-08-13T20:29:06.314181639+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u
2025-08-13T20:29:06.314245120+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u
2025-08-13T20:29:06.332115274+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u
2025-08-13T20:29:06.332262018+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QlN5u
2025-08-13T20:29:06.332262018+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=QlN5u
2025-08-13T20:29:06.332358481+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=QlN5u
2025-08-13T20:29:06.895144589+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco
2025-08-13T20:29:06.895237722+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco
2025-08-13T20:29:06.899833244+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco
2025-08-13T20:29:06.899922396+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=AKAco
2025-08-13T20:29:06.899922396+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=AKAco
2025-08-13T20:29:06.899935767+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=AKAco
2025-08-13T20:29:06.899945767+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco
2025-08-13T20:29:06.899945767+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco
2025-08-13T20:29:06.928347703+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco
2025-08-13T20:29:06.928548919+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=AKAco
2025-08-13T20:29:06.928548919+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=AKAco
2025-08-13T20:29:06.928665273+00:00 stderr F time="2025-08-13T20:29:06Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=AKAco
2025-08-13T20:29:07.468727926+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ
2025-08-13T20:29:07.468904331+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ
2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ
2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=svFkZ
2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=svFkZ
2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=svFkZ
2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ
2025-08-13T20:29:07.473551145+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ
2025-08-13T20:29:07.489868624+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ
2025-08-13T20:29:07.490161342+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=svFkZ
2025-08-13T20:29:07.490161342+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=svFkZ
2025-08-13T20:29:07.490364788+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ
2025-08-13T20:29:07.490364788+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=svFkZ
2025-08-13T20:29:07.490537513+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI
2025-08-13T20:29:07.490537513+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI
2025-08-13T20:29:07.498314657+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI
2025-08-13T20:29:07.498448631+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/DcUI
2025-08-13T20:29:07.498448631+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/DcUI
2025-08-13T20:29:07.498448631+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=/DcUI
2025-08-13T20:29:07.498476511+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI
2025-08-13T20:29:07.498476511+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI
2025-08-13T20:29:07.524169690+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI
2025-08-13T20:29:07.524287303+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/DcUI
2025-08-13T20:29:07.524287303+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=/DcUI
2025-08-13T20:29:07.524534580+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI
2025-08-13T20:29:07.524534580+00:00 stderr F time="2025-08-13T20:29:07Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=/DcUI
2025-08-13T20:29:13.310707337+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ
2025-08-13T20:29:13.310707337+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ
2025-08-13T20:29:13.315120664+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ
2025-08-13T20:29:13.315356171+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Qy4PJ
2025-08-13T20:29:13.315356171+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Qy4PJ
2025-08-13T20:29:13.315356171+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Qy4PJ
2025-08-13T20:29:13.315419963+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ
2025-08-13T20:29:13.315419963+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ
2025-08-13T20:29:13.331589987+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ
2025-08-13T20:29:13.332102452+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Qy4PJ
2025-08-13T20:29:13.332164764+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Qy4PJ
2025-08-13T20:29:13.332345859+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ
2025-08-13T20:29:13.332396120+00:00 stderr F time="2025-08-13T20:29:13Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Qy4PJ
2025-08-13T20:29:25.860089594+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=IDLE"
2025-08-13T20:29:25.861642709+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=IDLE"
2025-08-13T20:29:25.861642709+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=CONNECTING"
2025-08-13T20:29:25.861876145+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w
2025-08-13T20:29:25.861904356+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w
2025-08-13T20:29:25.862066681+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ
2025-08-13T20:29:25.862136663+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ
2025-08-13T20:29:25.862667748+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=CONNECTING"
2025-08-13T20:29:25.871682017+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w
2025-08-13T20:29:25.872280065+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ODM5w
2025-08-13T20:29:25.872413568+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ODM5w
2025-08-13T20:29:25.872543412+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=ODM5w
2025-08-13T20:29:25.872703947+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w
2025-08-13T20:29:25.872856731+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w
2025-08-13T20:29:25.873487219+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ
2025-08-13T20:29:25.873558501+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bHhfJ
2025-08-13T20:29:25.873571482+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bHhfJ
2025-08-13T20:29:25.873583572+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=bHhfJ
2025-08-13T20:29:25.873661704+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ
2025-08-13T20:29:25.873661704+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ
2025-08-13T20:29:25.886810192+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=READY"
2025-08-13T20:29:25.889217131+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=READY"
2025-08-13T20:29:25.889388116+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="resolving sources" id=lbixZ namespace=openshift-marketplace
2025-08-13T20:29:25.889428317+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checking if subscriptions need update" id=lbixZ namespace=openshift-marketplace
2025-08-13T20:29:25.892328661+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ
2025-08-13T20:29:25.892516766+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w
2025-08-13T20:29:25.892576638+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bHhfJ
2025-08-13T20:29:25.892613499+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=bHhfJ
2025-08-13T20:29:25.892702362+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=lbixZ namespace=openshift-marketplace
2025-08-13T20:29:25.892873136+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ODM5w
2025-08-13T20:29:25.892942178+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=ODM5w
2025-08-13T20:29:25.893078802+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ
2025-08-13T20:29:25.893124664+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=bHhfJ
2025-08-13T20:29:25.893542786+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w
2025-08-13T20:29:25.893594217+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=ODM5w
2025-08-13T20:29:25.908448774+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9
2025-08-13T20:29:25.908448774+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9
2025-08-13T20:29:25.909181525+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG
2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9
2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ag3x9
2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ag3x9
2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Ag3x9
2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9
2025-08-13T20:29:25.916642860+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9
2025-08-13T20:29:25.920732907+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG
2025-08-13T20:29:25.929641733+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG
2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9p/HG
2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9p/HG
2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=9p/HG
2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG
2025-08-13T20:29:25.929762747+00:00 stderr F time="2025-08-13T20:29:25Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG
2025-08-13T20:29:26.067388593+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9
2025-08-13T20:29:26.067605369+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ag3x9
2025-08-13T20:29:26.067605369+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ag3x9
2025-08-13T20:29:26.067731293+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9
2025-08-13T20:29:26.067731293+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ag3x9
2025-08-13T20:29:26.074127517+00:00 stderr F time="2025-08-13T20:29:26Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"certified-operators\": the object has been modified; please apply your changes to the latest version and try again" id=Ag3x9
2025-08-13T20:29:26.080554121+00:00 stderr F E0813 20:29:26.080435 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "certified-operators": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:29:26.080590313+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN
2025-08-13T20:29:26.080590313+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN
2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG
2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9p/HG
2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=9p/HG
2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG
2025-08-13T20:29:26.270189473+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=9p/HG
2025-08-13T20:29:26.291902467+00:00 stderr F time="2025-08-13T20:29:26Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Operation cannot be fulfilled on catalogsources.operators.coreos.com \"community-operators\": the object has been modified; please apply your changes to the latest version and try again" id=9p/HG
2025-08-13T20:29:26.292289898+00:00 stderr F E0813 20:29:26.292263 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Operation cannot be fulfilled on catalogsources.operators.coreos.com "community-operators": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:29:26.292381251+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb
2025-08-13T20:29:26.292439932+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb
2025-08-13T20:29:26.471669454+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN
2025-08-13T20:29:26.472093807+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IVAIN
2025-08-13T20:29:26.472144748+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IVAIN
2025-08-13T20:29:26.472197850+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=IVAIN
2025-08-13T20:29:26.472254471+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN
2025-08-13T20:29:26.472286312+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN
2025-08-13T20:29:26.669128741+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb
2025-08-13T20:29:26.669350997+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qvytb
2025-08-13T20:29:26.669390618+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qvytb
2025-08-13T20:29:26.669429659+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=qvytb
2025-08-13T20:29:26.669463340+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb
2025-08-13T20:29:26.669494351+00:00 stderr F time="2025-08-13T20:29:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb
2025-08-13T20:29:27.130067450+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=IDLE"
2025-08-13T20:29:27.130067450+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=CONNECTING"
2025-08-13T20:29:27.136101984+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=READY"
2025-08-13T20:29:27.136101984+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="resolving sources" id=Iz3// namespace=openshift-marketplace
2025-08-13T20:29:27.136101984+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="checking if subscriptions need update" id=Iz3// namespace=openshift-marketplace
2025-08-13T20:29:27.141432467+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="No subscriptions were found in namespace
openshift-marketplace" id=Iz3// namespace=openshift-marketplace 2025-08-13T20:29:27.267853131+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.268166730+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.268208791+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.268346585+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.268390547+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=IVAIN 2025-08-13T20:29:27.286310242+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.289581716+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.343343171+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace 
state.Key.Name=redhat-operators state.State=IDLE" 2025-08-13T20:29:27.343343171+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=CONNECTING" 2025-08-13T20:29:27.351528807+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=READY" 2025-08-13T20:29:27.351769783+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="resolving sources" id=dQ83w namespace=openshift-marketplace 2025-08-13T20:29:27.351913928+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="checking if subscriptions need update" id=dQ83w namespace=openshift-marketplace 2025-08-13T20:29:27.355880932+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=dQ83w namespace=openshift-marketplace 2025-08-13T20:29:27.467052507+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.467199872+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.467199872+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.467314815+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="ensured registry server" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.467314815+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=qvytb 2025-08-13T20:29:27.485198729+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.485312222+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.667667774+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.667957622+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.668053655+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.668101757+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=jYXYT 2025-08-13T20:29:27.668139228+00:00 stderr F time="2025-08-13T20:29:27Z" level=info 
msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.668233650+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:27.870866355+00:00 stderr F time="2025-08-13T20:29:27Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="searching for 
current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.476526806+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=jYXYT 2025-08-13T20:29:28.490677703+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.490677703+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="evaluating 
current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.672372555+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=+eOEm 2025-08-13T20:29:28.690657910+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:28.690657910+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:28.868224694+00:00 stderr F time="2025-08-13T20:29:28Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.067641717+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tg7US 
2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.068102180+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.673468052+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.674550453+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.674550453+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.674550453+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="catalog update required at 2025-08-13 20:29:29.673964586 +0000 UTC m=+1825.325352703" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.871500065+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=tg7US 2025-08-13T20:29:29.902366472+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:29.902366472+00:00 stderr F time="2025-08-13T20:29:29Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.086921327+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=QI/Uk 2025-08-13T20:29:30.118863966+00:00 stderr F time="2025-08-13T20:29:30Z" level=info 
msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.118863966+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.271424261+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="searching for 
current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:30.471629556+00:00 stderr F time="2025-08-13T20:29:30Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.075876846+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.075876846+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace 
id=GzliX 2025-08-13T20:29:31.075876846+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.075965238+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.075965238+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=GzliX 2025-08-13T20:29:31.076230606+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.076230606+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.277869532+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278112969+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278154690+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278281154+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278316915+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Ci7F0 2025-08-13T20:29:31.278427688+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.278528761+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 
current-pod.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.466890766+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI 2025-08-13T20:29:31.670110057+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=iVmGt 2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=iVmGt
2025-08-13T20:29:31.670225961+00:00 stderr F time="2025-08-13T20:29:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt
2025-08-13T20:29:32.268088386+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI
2025-08-13T20:29:32.268333793+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=YUsDI
2025-08-13T20:29:32.268375374+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=YUsDI
2025-08-13T20:29:32.268557919+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI
2025-08-13T20:29:32.268596570+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=YUsDI
2025-08-13T20:29:32.476162587+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt
2025-08-13T20:29:32.476162587+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iVmGt
2025-08-13T20:29:32.476162587+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iVmGt
2025-08-13T20:29:32.480623775+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iVmGt
2025-08-13T20:29:32.480623775+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ
2025-08-13T20:29:32.480623775+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ
2025-08-13T20:29:32.668717822+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ
2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=RJgaQ
2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=RJgaQ
2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=RJgaQ
2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ
2025-08-13T20:29:32.668852796+00:00 stderr F time="2025-08-13T20:29:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ
2025-08-13T20:29:33.074224299+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ
2025-08-13T20:29:33.074672562+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=RJgaQ
2025-08-13T20:29:33.074721983+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=RJgaQ
2025-08-13T20:29:33.074868407+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=RJgaQ
2025-08-13T20:29:33.074958670+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn
2025-08-13T20:29:33.075066323+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn
2025-08-13T20:29:33.274758184+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn
2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=eCngn
2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=eCngn
2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=eCngn
2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn
2025-08-13T20:29:33.274894658+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn
2025-08-13T20:29:33.673948288+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn
2025-08-13T20:29:33.674263387+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=eCngn
2025-08-13T20:29:33.674334649+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=eCngn
2025-08-13T20:29:33.674479074+00:00 stderr F time="2025-08-13T20:29:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=eCngn
2025-08-13T20:30:00.087849848+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4
2025-08-13T20:30:00.087849848+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4
2025-08-13T20:30:00.093647334+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4
2025-08-13T20:30:00.093815869+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qnVZ4
2025-08-13T20:30:00.093815869+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qnVZ4
2025-08-13T20:30:00.093848830+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=qnVZ4
2025-08-13T20:30:00.093848830+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4
2025-08-13T20:30:00.093859011+00:00 stderr F time="2025-08-13T20:30:00Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4
2025-08-13T20:30:01.153067387+00:00 stderr F time="2025-08-13T20:30:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4
2025-08-13T20:30:01.154602241+00:00 stderr F time="2025-08-13T20:30:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qnVZ4
2025-08-13T20:30:01.154602241+00:00 stderr F time="2025-08-13T20:30:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qnVZ4
2025-08-13T20:30:01.154681723+00:00 stderr F time="2025-08-13T20:30:01Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qnVZ4
2025-08-13T20:30:07.395009878+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P
2025-08-13T20:30:07.395009878+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P
2025-08-13T20:30:07.403667227+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P
2025-08-13T20:30:07.404198012+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dPB/P
2025-08-13T20:30:07.404251244+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dPB/P
2025-08-13T20:30:07.404293905+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=dPB/P
2025-08-13T20:30:07.404431929+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P
2025-08-13T20:30:07.404467380+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P
2025-08-13T20:30:07.429060657+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P
2025-08-13T20:30:07.429265763+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dPB/P
2025-08-13T20:30:07.429359416+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=dPB/P
2025-08-13T20:30:07.429459739+00:00 stderr F time="2025-08-13T20:30:07Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=dPB/P
2025-08-13T20:30:08.381187566+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi
2025-08-13T20:30:08.381409602+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi
2025-08-13T20:30:08.391396369+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi
2025-08-13T20:30:08.391868703+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z9jxi
2025-08-13T20:30:08.391966646+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z9jxi
2025-08-13T20:30:08.392115790+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=z9jxi
2025-08-13T20:30:08.392218983+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi
2025-08-13T20:30:08.392318836+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi
2025-08-13T20:30:08.411661382+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi
2025-08-13T20:30:08.412101244+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z9jxi
2025-08-13T20:30:08.412224308+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=z9jxi
2025-08-13T20:30:08.412458885+00:00 stderr F time="2025-08-13T20:30:08Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=z9jxi
2025-08-13T20:30:30.678726219+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg
2025-08-13T20:30:30.680324975+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg
2025-08-13T20:30:30.696571992+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg
2025-08-13T20:30:30.696663364+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0ogZg
2025-08-13T20:30:30.696663364+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0ogZg
2025-08-13T20:30:30.696677685+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=0ogZg
2025-08-13T20:30:30.696728886+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg
2025-08-13T20:30:30.696728886+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg
2025-08-13T20:30:30.731091104+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg
2025-08-13T20:30:30.731091104+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0ogZg
2025-08-13T20:30:30.731091104+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=0ogZg
2025-08-13T20:30:30.731091104+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=0ogZg
2025-08-13T20:30:30.824839489+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73
2025-08-13T20:30:30.825205609+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73
2025-08-13T20:30:30.833672703+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73
2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5Dw73
2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5Dw73
2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=5Dw73
2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73
2025-08-13T20:30:30.834149896+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73
2025-08-13T20:30:30.847428678+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73
2025-08-13T20:30:30.847579832+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5Dw73
2025-08-13T20:30:30.847579832+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=5Dw73
2025-08-13T20:30:30.862139941+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73
2025-08-13T20:30:30.862139941+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=5Dw73
2025-08-13T20:30:30.862649906+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog
2025-08-13T20:30:30.863433168+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog
2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog
2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=F7Tog
2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=F7Tog
2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=F7Tog
2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog
2025-08-13T20:30:30.867727792+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog
2025-08-13T20:30:30.880538440+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog
2025-08-13T20:30:30.880734075+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=F7Tog
2025-08-13T20:30:30.880734075+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=F7Tog
2025-08-13T20:30:30.896538670+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog
2025-08-13T20:30:30.896538670+00:00 stderr F time="2025-08-13T20:30:30Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=F7Tog
2025-08-13T20:30:31.156699008+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.156767480+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=06773
2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.161682512+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.489263468+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.489263468+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.489263468+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.684967394+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:31.684967394+00:00 stderr F time="2025-08-13T20:30:31Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=06773
2025-08-13T20:30:32.978336031+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:32.978336031+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:32.993658712+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:32.993713763+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:32.993724044+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:32.993736414+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Q/ZaW
2025-08-13T20:30:32.993746194+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:32.993746194+00:00 stderr F time="2025-08-13T20:30:32Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:33.008257061+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:33.008360794+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:33.008374535+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:33.008452197+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Q/ZaW
2025-08-13T20:30:33.566198690+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.566198690+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.570876814+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.570933446+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.570945806+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.570955057+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Lr1gT
2025-08-13T20:30:33.570964237+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.570964237+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.582153869+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.582153869+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.582153869+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:33.582153869+00:00 stderr F time="2025-08-13T20:30:33Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Lr1gT
2025-08-13T20:30:34.179811709+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.179811709+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.190007142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.190007142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.190007142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.190097114+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Ol4C3
2025-08-13T20:30:34.190097114+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3
2025-08-13T20:30:34.190097114+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators
catalogsource.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.201019318+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.201146142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.201146142+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.201285946+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.201285946+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Ol4C3 2025-08-13T20:30:34.201343868+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.201343868+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.204170099+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.204192700+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.204202330+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.204225051+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=IDV2+ 2025-08-13T20:30:34.204225051+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.204241781+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.217487252+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.217738399+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.217738399+00:00 stderr F time="2025-08-13T20:30:34Z" 
level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.217754369+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:30:34.217754369+00:00 stderr F time="2025-08-13T20:30:34Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=IDV2+ 2025-08-13T20:31:03.010201680+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.010201680+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.019081985+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.019232660+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.019232660+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.019232660+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=g3ts9 2025-08-13T20:31:03.019250450+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.019250450+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.034408246+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.034641553+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.034641553+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.034722565+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:31:03.034722565+00:00 stderr F time="2025-08-13T20:31:03Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=g3ts9 2025-08-13T20:35:01.852202125+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.853252566+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.853374639+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.853462172+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.862532792+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.863551292+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.863916342+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.863982874+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp 
current-pod.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.864066337+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=rC0bJ 2025-08-13T20:35:01.864156769+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.864251112+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.867295509+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.867501785+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.867715691+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=FHAUm 2025-08-13T20:35:01.867848785+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.867940438+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.883066263+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.883214097+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.883214097+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.883411403+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.883411403+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=rC0bJ 2025-08-13T20:35:01.883555537+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.883908367+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.884227916+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.884511274+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.884619467+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.884982768+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.885125652+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=FHAUm 2025-08-13T20:35:01.885442081+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:01.885664577+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:01.887966714+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.888236191+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" 
catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.888306573+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.888385266+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=HxuCG 2025-08-13T20:35:01.888419297+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.888462108+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:01.889839087+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:01.890179037+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:01.890303701+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:01.890431634+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=s7WmR 2025-08-13T20:35:01.890742873+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:01.890963630+00:00 stderr F time="2025-08-13T20:35:01Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:02.058354381+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:02.058354381+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:02.058354381+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:02.058641240+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:02.058641240+00:00 stderr F 
time="2025-08-13T20:35:02Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=HxuCG 2025-08-13T20:35:02.257482765+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:02.257756573+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:02.258041451+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:02.258291279+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:35:02.258360451+00:00 stderr F time="2025-08-13T20:35:02Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s7WmR 2025-08-13T20:36:53.388034701+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=VFOrE namespace=openshift-config 2025-08-13T20:36:53.388034701+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=VFOrE namespace=openshift-config 2025-08-13T20:36:53.388325389+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" 
id=JJ+8B namespace=openshift-apiserver-operator 2025-08-13T20:36:53.388325389+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=JJ+8B namespace=openshift-apiserver-operator 2025-08-13T20:36:53.396364431+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-config" id=VFOrE namespace=openshift-config 2025-08-13T20:36:53.396364431+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=rBOYg namespace=openshift-dns-operator 2025-08-13T20:36:53.396364431+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=rBOYg namespace=openshift-dns-operator 2025-08-13T20:36:53.396546176+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-apiserver-operator" id=JJ+8B namespace=openshift-apiserver-operator 2025-08-13T20:36:53.396546176+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=stn8i namespace=openshift-openstack-infra 2025-08-13T20:36:53.396546176+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=stn8i namespace=openshift-openstack-infra 2025-08-13T20:36:53.399357757+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-openstack-infra" id=stn8i namespace=openshift-openstack-infra 2025-08-13T20:36:53.399357757+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=T2eSg namespace=openshift-route-controller-manager 2025-08-13T20:36:53.399357757+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=T2eSg namespace=openshift-route-controller-manager 2025-08-13T20:36:53.399357757+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-dns-operator" id=rBOYg 
namespace=openshift-dns-operator 2025-08-13T20:36:53.399407719+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=oxUpj namespace=openshift-kube-controller-manager 2025-08-13T20:36:53.399407719+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=oxUpj namespace=openshift-kube-controller-manager 2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager" id=oxUpj namespace=openshift-kube-controller-manager 2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=SBluW namespace=kube-public 2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=SBluW namespace=kube-public 2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-route-controller-manager" id=T2eSg namespace=openshift-route-controller-manager 2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=lGYyO namespace=openshift-cluster-machine-approver 2025-08-13T20:36:53.402621272+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=lGYyO namespace=openshift-cluster-machine-approver 2025-08-13T20:36:53.404943408+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace kube-public" id=SBluW namespace=kube-public 2025-08-13T20:36:53.404943408+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=rEkak namespace=openshift-cluster-version 2025-08-13T20:36:53.404943408+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=rEkak namespace=openshift-cluster-version 2025-08-13T20:36:53.406287067+00:00 stderr F 
time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-cluster-machine-approver" id=lGYyO namespace=openshift-cluster-machine-approver 2025-08-13T20:36:53.406287067+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=IE4fJ namespace=openshift-console 2025-08-13T20:36:53.406287067+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=IE4fJ namespace=openshift-console 2025-08-13T20:36:53.409279704+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-console" id=IE4fJ namespace=openshift-console 2025-08-13T20:36:53.409279704+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=wMzTv namespace=openshift-console-user-settings 2025-08-13T20:36:53.409279704+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=wMzTv namespace=openshift-console-user-settings 2025-08-13T20:36:53.409741967+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-cluster-version" id=rEkak namespace=openshift-cluster-version 2025-08-13T20:36:53.409741967+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=nVLri namespace=openshift-host-network 2025-08-13T20:36:53.409741967+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=nVLri namespace=openshift-host-network 2025-08-13T20:36:53.595268516+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-console-user-settings" id=wMzTv namespace=openshift-console-user-settings 2025-08-13T20:36:53.595308527+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=RdSIx namespace=openshift-kni-infra 2025-08-13T20:36:53.595308527+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if 
subscriptions need update" id=RdSIx namespace=openshift-kni-infra 2025-08-13T20:36:53.792380319+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-host-network" id=nVLri namespace=openshift-host-network 2025-08-13T20:36:53.792380319+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=VjCvs namespace=openshift-vsphere-infra 2025-08-13T20:36:53.792380319+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=VjCvs namespace=openshift-vsphere-infra 2025-08-13T20:36:53.992025865+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="No subscriptions were found in namespace openshift-kni-infra" id=RdSIx namespace=openshift-kni-infra 2025-08-13T20:36:53.992025865+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="resolving sources" id=Db5z+ namespace=default 2025-08-13T20:36:53.992025865+00:00 stderr F time="2025-08-13T20:36:53Z" level=info msg="checking if subscriptions need update" id=Db5z+ namespace=default 2025-08-13T20:36:54.193906325+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace openshift-vsphere-infra" id=VjCvs namespace=openshift-vsphere-infra 2025-08-13T20:36:54.193906325+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=hcG5S namespace=openshift-etcd 2025-08-13T20:36:54.193906325+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=hcG5S namespace=openshift-etcd 2025-08-13T20:36:54.392425629+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace default" id=Db5z+ namespace=default 2025-08-13T20:36:54.392425629+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=XSilc namespace=openshift-kube-controller-manager-operator 2025-08-13T20:36:54.392425629+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking 
if subscriptions need update" id=XSilc namespace=openshift-kube-controller-manager-operator 2025-08-13T20:36:54.592038073+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace openshift-etcd" id=hcG5S namespace=openshift-etcd 2025-08-13T20:36:54.592038073+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=PO0gX namespace=openshift-machine-config-operator 2025-08-13T20:36:54.592038073+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=PO0gX namespace=openshift-machine-config-operator 2025-08-13T20:36:54.794418988+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace openshift-kube-controller-manager-operator" id=XSilc namespace=openshift-kube-controller-manager-operator 2025-08-13T20:36:54.794418988+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=ljE/t namespace=openshift-network-node-identity 2025-08-13T20:36:54.794418988+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=ljE/t namespace=openshift-network-node-identity 2025-08-13T20:36:54.992814608+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="No subscriptions were found in namespace openshift-machine-config-operator" id=PO0gX namespace=openshift-machine-config-operator 2025-08-13T20:36:54.992814608+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="resolving sources" id=48RO+ namespace=openshift-operators 2025-08-13T20:36:54.992941382+00:00 stderr F time="2025-08-13T20:36:54Z" level=info msg="checking if subscriptions need update" id=48RO+ namespace=openshift-operators 2025-08-13T20:36:55.197920681+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-network-node-identity" id=ljE/t namespace=openshift-network-node-identity 2025-08-13T20:36:55.197920681+00:00 stderr F 
time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=eUlKx namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:36:55.197920681+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=eUlKx namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:36:55.392218093+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-operators" id=48RO+ namespace=openshift-operators 2025-08-13T20:36:55.392218093+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=wnWXv namespace=openshift-network-operator 2025-08-13T20:36:55.392218093+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=wnWXv namespace=openshift-network-operator 2025-08-13T20:36:55.617722165+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator-operator" id=eUlKx namespace=openshift-kube-storage-version-migrator-operator 2025-08-13T20:36:55.617722165+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=TTn6+ namespace=openshift-oauth-apiserver 2025-08-13T20:36:55.617722165+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=TTn6+ namespace=openshift-oauth-apiserver 2025-08-13T20:36:55.792151533+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-network-operator" id=wnWXv namespace=openshift-network-operator 2025-08-13T20:36:55.792151533+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=E1REP namespace=openshift-user-workload-monitoring 2025-08-13T20:36:55.792151533+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=E1REP namespace=openshift-user-workload-monitoring 
2025-08-13T20:36:55.991897921+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="No subscriptions were found in namespace openshift-oauth-apiserver" id=TTn6+ namespace=openshift-oauth-apiserver 2025-08-13T20:36:55.991949463+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="resolving sources" id=VTe3f namespace=openshift-controller-manager 2025-08-13T20:36:55.991949463+00:00 stderr F time="2025-08-13T20:36:55Z" level=info msg="checking if subscriptions need update" id=VTe3f namespace=openshift-controller-manager 2025-08-13T20:36:56.192648019+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-user-workload-monitoring" id=E1REP namespace=openshift-user-workload-monitoring 2025-08-13T20:36:56.192648019+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=0irSf namespace=openshift-kube-storage-version-migrator 2025-08-13T20:36:56.192648019+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=0irSf namespace=openshift-kube-storage-version-migrator 2025-08-13T20:36:56.393842649+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager" id=VTe3f namespace=openshift-controller-manager 2025-08-13T20:36:56.393907951+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=qWKh8 namespace=openshift-marketplace 2025-08-13T20:36:56.393907951+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=qWKh8 namespace=openshift-marketplace 2025-08-13T20:36:56.592472676+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-kube-storage-version-migrator" id=0irSf namespace=openshift-kube-storage-version-migrator 2025-08-13T20:36:56.592472676+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=zlUkI 
namespace=openshift-node 2025-08-13T20:36:56.592472676+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=zlUkI namespace=openshift-node 2025-08-13T20:36:56.792553194+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-marketplace" id=qWKh8 namespace=openshift-marketplace 2025-08-13T20:36:56.792553194+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=8yh9s namespace=openshift-config-operator 2025-08-13T20:36:56.792553194+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=8yh9s namespace=openshift-config-operator 2025-08-13T20:36:56.992554081+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="No subscriptions were found in namespace openshift-node" id=zlUkI namespace=openshift-node 2025-08-13T20:36:56.992604432+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="resolving sources" id=Piv70 namespace=openshift-kube-scheduler-operator 2025-08-13T20:36:56.992604432+00:00 stderr F time="2025-08-13T20:36:56Z" level=info msg="checking if subscriptions need update" id=Piv70 namespace=openshift-kube-scheduler-operator 2025-08-13T20:36:57.191831596+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace openshift-config-operator" id=8yh9s namespace=openshift-config-operator 2025-08-13T20:36:57.191893087+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=B2om3 namespace=hostpath-provisioner 2025-08-13T20:36:57.191893087+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=B2om3 namespace=hostpath-provisioner 2025-08-13T20:36:57.392847441+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler-operator" id=Piv70 namespace=openshift-kube-scheduler-operator 
2025-08-13T20:36:57.392905693+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=LY/yO namespace=openshift-authentication-operator 2025-08-13T20:36:57.392905693+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=LY/yO namespace=openshift-authentication-operator 2025-08-13T20:36:57.592836027+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace hostpath-provisioner" id=B2om3 namespace=hostpath-provisioner 2025-08-13T20:36:57.592836027+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=OQMMA namespace=openshift-etcd-operator 2025-08-13T20:36:57.592836027+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=OQMMA namespace=openshift-etcd-operator 2025-08-13T20:36:57.791666939+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace openshift-authentication-operator" id=LY/yO namespace=openshift-authentication-operator 2025-08-13T20:36:57.791666939+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=au+aT namespace=openshift-image-registry 2025-08-13T20:36:57.791666939+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=au+aT namespace=openshift-image-registry 2025-08-13T20:36:57.992757847+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="No subscriptions were found in namespace openshift-etcd-operator" id=OQMMA namespace=openshift-etcd-operator 2025-08-13T20:36:57.992757847+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="resolving sources" id=yhel9 namespace=kube-system 2025-08-13T20:36:57.992757847+00:00 stderr F time="2025-08-13T20:36:57Z" level=info msg="checking if subscriptions need update" id=yhel9 namespace=kube-system 2025-08-13T20:36:58.191854536+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions 
were found in namespace openshift-image-registry" id=au+aT namespace=openshift-image-registry 2025-08-13T20:36:58.191919628+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=ltN1l namespace=openshift-cloud-network-config-controller 2025-08-13T20:36:58.191919628+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=ltN1l namespace=openshift-cloud-network-config-controller 2025-08-13T20:36:58.391483332+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace kube-system" id=yhel9 namespace=kube-system 2025-08-13T20:36:58.391483332+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=a7cm1 namespace=openshift-cluster-samples-operator 2025-08-13T20:36:58.391483332+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=a7cm1 namespace=openshift-cluster-samples-operator 2025-08-13T20:36:58.593005012+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace openshift-cloud-network-config-controller" id=ltN1l namespace=openshift-cloud-network-config-controller 2025-08-13T20:36:58.593005012+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=o6a8I namespace=openshift-cluster-storage-operator 2025-08-13T20:36:58.593066134+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=o6a8I namespace=openshift-cluster-storage-operator 2025-08-13T20:36:58.792568716+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-samples-operator" id=a7cm1 namespace=openshift-cluster-samples-operator 2025-08-13T20:36:58.792568716+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=Eqi5R namespace=openshift-config-managed 2025-08-13T20:36:58.792568716+00:00 stderr F time="2025-08-13T20:36:58Z" 
level=info msg="checking if subscriptions need update" id=Eqi5R namespace=openshift-config-managed 2025-08-13T20:36:58.993050326+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="No subscriptions were found in namespace openshift-cluster-storage-operator" id=o6a8I namespace=openshift-cluster-storage-operator 2025-08-13T20:36:58.993050326+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="resolving sources" id=5wSDE namespace=openshift-kube-apiserver 2025-08-13T20:36:58.993050326+00:00 stderr F time="2025-08-13T20:36:58Z" level=info msg="checking if subscriptions need update" id=5wSDE namespace=openshift-kube-apiserver 2025-08-13T20:36:59.191148527+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="No subscriptions were found in namespace openshift-config-managed" id=Eqi5R namespace=openshift-config-managed 2025-08-13T20:36:59.191266030+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="resolving sources" id=FH6ef namespace=openshift-apiserver 2025-08-13T20:36:59.191300851+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="checking if subscriptions need update" id=FH6ef namespace=openshift-apiserver 2025-08-13T20:36:59.391858323+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver" id=5wSDE namespace=openshift-kube-apiserver 2025-08-13T20:36:59.392011648+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="resolving sources" id=B2nqc namespace=openshift-console-operator 2025-08-13T20:36:59.392056369+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="checking if subscriptions need update" id=B2nqc namespace=openshift-console-operator 2025-08-13T20:36:59.593216928+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="No subscriptions were found in namespace openshift-apiserver" id=FH6ef namespace=openshift-apiserver 2025-08-13T20:36:59.593342931+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="resolving sources" id=sTFxb 
namespace=openshift-dns 2025-08-13T20:36:59.593383303+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="checking if subscriptions need update" id=sTFxb namespace=openshift-dns 2025-08-13T20:36:59.793593055+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="No subscriptions were found in namespace openshift-console-operator" id=B2nqc namespace=openshift-console-operator 2025-08-13T20:36:59.793593055+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="resolving sources" id=5vN/7 namespace=openshift-infra 2025-08-13T20:36:59.793593055+00:00 stderr F time="2025-08-13T20:36:59Z" level=info msg="checking if subscriptions need update" id=5vN/7 namespace=openshift-infra 2025-08-13T20:37:00.006157823+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-dns" id=sTFxb namespace=openshift-dns 2025-08-13T20:37:00.006157823+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=s26VQ namespace=openshift-network-diagnostics 2025-08-13T20:37:00.006157823+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=s26VQ namespace=openshift-network-diagnostics 2025-08-13T20:37:00.194086031+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-infra" id=5vN/7 namespace=openshift-infra 2025-08-13T20:37:00.194086031+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=q+bOg namespace=openshift-operator-lifecycle-manager 2025-08-13T20:37:00.194086031+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=q+bOg namespace=openshift-operator-lifecycle-manager 2025-08-13T20:37:00.392604635+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-network-diagnostics" id=s26VQ namespace=openshift-network-diagnostics 2025-08-13T20:37:00.392604635+00:00 stderr F 
time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=yhBsE namespace=openshift-ovn-kubernetes 2025-08-13T20:37:00.392604635+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=yhBsE namespace=openshift-ovn-kubernetes 2025-08-13T20:37:00.598104059+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-operator-lifecycle-manager" id=q+bOg namespace=openshift-operator-lifecycle-manager 2025-08-13T20:37:00.598104059+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=rtok6 namespace=openshift 2025-08-13T20:37:00.598104059+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=rtok6 namespace=openshift 2025-08-13T20:37:00.793269346+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift-ovn-kubernetes" id=yhBsE namespace=openshift-ovn-kubernetes 2025-08-13T20:37:00.793269346+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=iZvx5 namespace=openshift-cloud-platform-infra 2025-08-13T20:37:00.793302827+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=iZvx5 namespace=openshift-cloud-platform-infra 2025-08-13T20:37:00.993206561+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="No subscriptions were found in namespace openshift" id=rtok6 namespace=openshift 2025-08-13T20:37:00.993246192+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="resolving sources" id=MMLJM namespace=openshift-ingress 2025-08-13T20:37:00.993246192+00:00 stderr F time="2025-08-13T20:37:00Z" level=info msg="checking if subscriptions need update" id=MMLJM namespace=openshift-ingress 2025-08-13T20:37:01.193548277+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-cloud-platform-infra" id=iZvx5 
namespace=openshift-cloud-platform-infra 2025-08-13T20:37:01.193548277+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=r+dwA namespace=openshift-ingress-operator 2025-08-13T20:37:01.193596558+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=r+dwA namespace=openshift-ingress-operator 2025-08-13T20:37:01.392715689+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress" id=MMLJM namespace=openshift-ingress 2025-08-13T20:37:01.392715689+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=dm6rL namespace=openshift-kube-apiserver-operator 2025-08-13T20:37:01.392851213+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=dm6rL namespace=openshift-kube-apiserver-operator 2025-08-13T20:37:01.593042834+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-ingress-operator" id=r+dwA namespace=openshift-ingress-operator 2025-08-13T20:37:01.593042834+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=av0fE namespace=openshift-monitoring 2025-08-13T20:37:01.593103245+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=av0fE namespace=openshift-monitoring 2025-08-13T20:37:01.793229225+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-kube-apiserver-operator" id=dm6rL namespace=openshift-kube-apiserver-operator 2025-08-13T20:37:01.793229225+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=2SdCa namespace=openshift-controller-manager-operator 2025-08-13T20:37:01.793229225+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=2SdCa namespace=openshift-controller-manager-operator 
2025-08-13T20:37:01.993424696+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="No subscriptions were found in namespace openshift-monitoring" id=av0fE namespace=openshift-monitoring 2025-08-13T20:37:01.993424696+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="resolving sources" id=7B/h0 namespace=openshift-machine-api 2025-08-13T20:37:01.993424696+00:00 stderr F time="2025-08-13T20:37:01Z" level=info msg="checking if subscriptions need update" id=7B/h0 namespace=openshift-machine-api 2025-08-13T20:37:02.193224846+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace openshift-controller-manager-operator" id=2SdCa namespace=openshift-controller-manager-operator 2025-08-13T20:37:02.193224846+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=e+7jb namespace=openshift-service-ca-operator 2025-08-13T20:37:02.193224846+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=e+7jb namespace=openshift-service-ca-operator 2025-08-13T20:37:02.393097139+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace openshift-machine-api" id=7B/h0 namespace=openshift-machine-api 2025-08-13T20:37:02.393097139+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=N6crf namespace=kube-node-lease 2025-08-13T20:37:02.393186041+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=N6crf namespace=kube-node-lease 2025-08-13T20:37:02.592714244+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace openshift-service-ca-operator" id=e+7jb namespace=openshift-service-ca-operator 2025-08-13T20:37:02.592714244+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=Wkz+p namespace=openshift-authentication 2025-08-13T20:37:02.592714244+00:00 stderr F 
time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=Wkz+p namespace=openshift-authentication 2025-08-13T20:37:02.793885884+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace kube-node-lease" id=N6crf namespace=kube-node-lease 2025-08-13T20:37:02.793885884+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=seGjw namespace=openshift-ingress-canary 2025-08-13T20:37:02.793885884+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=seGjw namespace=openshift-ingress-canary 2025-08-13T20:37:02.997471303+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="No subscriptions were found in namespace openshift-authentication" id=Wkz+p namespace=openshift-authentication 2025-08-13T20:37:02.997471303+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="resolving sources" id=zKPT1 namespace=openshift-multus 2025-08-13T20:37:02.997471303+00:00 stderr F time="2025-08-13T20:37:02Z" level=info msg="checking if subscriptions need update" id=zKPT1 namespace=openshift-multus 2025-08-13T20:37:03.192958918+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-ingress-canary" id=seGjw namespace=openshift-ingress-canary 2025-08-13T20:37:03.192958918+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="resolving sources" id=YwDwN namespace=openshift-ovirt-infra 2025-08-13T20:37:03.192958918+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="checking if subscriptions need update" id=YwDwN namespace=openshift-ovirt-infra 2025-08-13T20:37:03.393518880+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-multus" id=zKPT1 namespace=openshift-multus 2025-08-13T20:37:03.393518880+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="resolving sources" id=n2bJb namespace=openshift-kube-scheduler 
2025-08-13T20:37:03.393518880+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="checking if subscriptions need update" id=n2bJb namespace=openshift-kube-scheduler 2025-08-13T20:37:03.606887721+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-ovirt-infra" id=YwDwN namespace=openshift-ovirt-infra 2025-08-13T20:37:03.606887721+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="resolving sources" id=f+/CR namespace=openshift-nutanix-infra 2025-08-13T20:37:03.606887721+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="checking if subscriptions need update" id=f+/CR namespace=openshift-nutanix-infra 2025-08-13T20:37:03.793828001+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-kube-scheduler" id=n2bJb namespace=openshift-kube-scheduler 2025-08-13T20:37:03.793828001+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="resolving sources" id=UtUw9 namespace=openshift-service-ca 2025-08-13T20:37:03.793828001+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="checking if subscriptions need update" id=UtUw9 namespace=openshift-service-ca 2025-08-13T20:37:03.993434825+00:00 stderr F time="2025-08-13T20:37:03Z" level=info msg="No subscriptions were found in namespace openshift-nutanix-infra" id=f+/CR namespace=openshift-nutanix-infra 2025-08-13T20:37:04.192877725+00:00 stderr F time="2025-08-13T20:37:04Z" level=info msg="No subscriptions were found in namespace openshift-service-ca" id=UtUw9 namespace=openshift-service-ca 2025-08-13T20:37:48.140845145+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.140845145+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.149819614+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.150244806+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.150244806+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.150271737+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=LJPfk 2025-08-13T20:37:48.150271737+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.150271737+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.169262244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.169486381+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.169486381+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.169585623+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="catalog update required at 2025-08-13 20:37:48.169524322 +0000 UTC m=+2323.820912439" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.202895864+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=LJPfk 2025-08-13T20:37:48.235902465+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.235902465+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.250294840+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR 2025-08-13T20:37:48.250387523+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc 
current-pod.namespace=openshift-marketplace id=V0BeR
2025-08-13T20:37:48.250387523+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V0BeR
2025-08-13T20:37:48.250403694+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=V0BeR
2025-08-13T20:37:48.250416114+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR
2025-08-13T20:37:48.250416114+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR
2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR
2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V0BeR
2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=V0BeR
2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=V0BeR
2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn
2025-08-13T20:37:48.278460742+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn
2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn
2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iLMLn
2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iLMLn
2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=iLMLn
2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn
2025-08-13T20:37:48.289895642+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn
2025-08-13T20:37:48.310082564+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn
2025-08-13T20:37:48.310082564+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iLMLn
2025-08-13T20:37:48.310082564+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=iLMLn
2025-08-13T20:37:48.310183017+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=iLMLn
2025-08-13T20:37:48.310202588+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo
2025-08-13T20:37:48.310218458+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo
2025-08-13T20:37:48.346458733+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo
2025-08-13T20:37:48.346649158+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=exBoo
2025-08-13T20:37:48.346649158+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=exBoo
2025-08-13T20:37:48.346667969+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=exBoo
2025-08-13T20:37:48.346667969+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo
2025-08-13T20:37:48.346683899+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo
2025-08-13T20:37:48.746413244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo
2025-08-13T20:37:48.746413244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=exBoo
2025-08-13T20:37:48.746413244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=exBoo
2025-08-13T20:37:48.746413244+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=exBoo
2025-08-13T20:37:48.901461744+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E
2025-08-13T20:37:48.901461744+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E
2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E
2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eYO5E
2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eYO5E
2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=eYO5E
2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E
2025-08-13T20:37:48.954429901+00:00 stderr F time="2025-08-13T20:37:48Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E
2025-08-13T20:37:49.347029030+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E
2025-08-13T20:37:49.347029030+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eYO5E
2025-08-13T20:37:49.347029030+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=eYO5E
2025-08-13T20:37:49.347029030+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eYO5E
2025-08-13T20:37:49.652853206+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL
2025-08-13T20:37:49.659175078+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL
2025-08-13T20:37:49.666604712+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL
2025-08-13T20:37:49.667440096+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OvvLL
2025-08-13T20:37:49.667749245+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OvvLL
2025-08-13T20:37:49.667766916+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=OvvLL
2025-08-13T20:37:49.667766916+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL
2025-08-13T20:37:49.667854928+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL
2025-08-13T20:37:49.947296895+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL
2025-08-13T20:37:49.947296895+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OvvLL
2025-08-13T20:37:49.947296895+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=OvvLL
2025-08-13T20:37:49.947296895+00:00 stderr F time="2025-08-13T20:37:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=OvvLL
2025-08-13T20:37:50.659916190+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW
2025-08-13T20:37:50.659916190+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW
2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW
2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=MsbCW
2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=MsbCW
2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=MsbCW
2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW
2025-08-13T20:37:50.664305747+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW
2025-08-13T20:37:50.677160507+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW
2025-08-13T20:37:50.677160507+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=MsbCW
2025-08-13T20:37:50.677160507+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=MsbCW
2025-08-13T20:37:50.677160507+00:00 stderr F time="2025-08-13T20:37:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=MsbCW
2025-08-13T20:37:54.702537060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB
2025-08-13T20:37:54.702537060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB
2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB
2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=82ddB
2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=82ddB
2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=82ddB
2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB
2025-08-13T20:37:54.706724060+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB
2025-08-13T20:37:54.719991153+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB
2025-08-13T20:37:54.720081666+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=82ddB
2025-08-13T20:37:54.720094756+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=82ddB
2025-08-13T20:37:54.720294862+00:00 stderr F time="2025-08-13T20:37:54Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=82ddB
2025-08-13T20:37:56.510581626+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa
2025-08-13T20:37:56.510581626+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa
2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa
2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=26uBa
2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=26uBa
2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=26uBa
2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa
2025-08-13T20:37:56.985428096+00:00 stderr F time="2025-08-13T20:37:56Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa
2025-08-13T20:37:57.176652949+00:00 stderr F time="2025-08-13T20:37:57Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa
2025-08-13T20:37:57.176819174+00:00 stderr F time="2025-08-13T20:37:57Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=26uBa
2025-08-13T20:37:57.176819174+00:00 stderr F time="2025-08-13T20:37:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=26uBa
2025-08-13T20:37:57.176939727+00:00 stderr F time="2025-08-13T20:37:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=26uBa
2025-08-13T20:37:58.709366357+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI
2025-08-13T20:37:58.709366357+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI
2025-08-13T20:37:58.716500383+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI
2025-08-13T20:37:58.716531534+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v6ChI
2025-08-13T20:37:58.716541624+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v6ChI
2025-08-13T20:37:58.716561715+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=v6ChI
2025-08-13T20:37:58.716561715+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI
2025-08-13T20:37:58.716571635+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI
2025-08-13T20:37:58.730188238+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI
2025-08-13T20:37:58.730385453+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v6ChI
2025-08-13T20:37:58.730385453+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=v6ChI
2025-08-13T20:37:58.730493686+00:00 stderr F time="2025-08-13T20:37:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=v6ChI
2025-08-13T20:38:08.712229842+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f
2025-08-13T20:38:08.712864230+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f
2025-08-13T20:38:08.723938010+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f
2025-08-13T20:38:08.724248169+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=inZ7f
2025-08-13T20:38:08.724316031+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=inZ7f
2025-08-13T20:38:08.724378502+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=inZ7f
2025-08-13T20:38:08.724420964+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f
2025-08-13T20:38:08.724460165+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f
2025-08-13T20:38:08.734251827+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f
2025-08-13T20:38:08.734417332+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=inZ7f
2025-08-13T20:38:08.734417332+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=inZ7f
2025-08-13T20:38:08.753557394+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f
2025-08-13T20:38:08.753748839+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=inZ7f
2025-08-13T20:38:08.757069885+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.757069885+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=URCqG
2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.761936785+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.779865862+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.780030897+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.780257533+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.785683050+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:08.785683050+00:00 stderr F time="2025-08-13T20:38:08Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=URCqG
2025-08-13T20:38:09.211962620+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.211962620+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=yfLya
2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.230719220+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.244304872+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.244395695+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.244395695+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.244537729+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=yfLya
2025-08-13T20:38:09.827574717+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.827574717+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.833049285+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=0abZw
2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.833431306+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.866261663+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.866261663+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.866261663+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.866775578+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.866913492+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=0abZw
2025-08-13T20:38:09.867066806+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:09.867117897+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy
2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc
current-pod.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=s+jRy 2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:09.872003308+00:00 stderr F time="2025-08-13T20:38:09Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:10.120128792+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:10.120208544+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:10.120208544+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=s+jRy 
2025-08-13T20:38:10.120326007+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:10.120326007+00:00 stderr F time="2025-08-13T20:38:10Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=s+jRy 2025-08-13T20:38:18.205309937+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.205309937+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.209619681+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.209919810+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.210004862+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.210114695+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
health=true id=/UUWu 2025-08-13T20:38:18.210206248+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.210281980+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.229032341+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.229131134+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.229131134+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.229330149+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:18.229330149+00:00 stderr F time="2025-08-13T20:38:18Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=/UUWu 2025-08-13T20:38:27.984278184+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:27.984278184+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:27.989278628+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:27.989414642+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:27.989414642+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:27.989414642+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=Ld4lM 2025-08-13T20:38:27.989414642+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:27.989432803+00:00 stderr F time="2025-08-13T20:38:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:28.002103098+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="searching for current pods" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:28.002278123+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:28.002278123+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:28.002394966+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="ensured registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:28.002394966+00:00 stderr F time="2025-08-13T20:38:28Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Ld4lM 2025-08-13T20:38:36.025954327+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.025954327+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.033188556+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.033395082+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.033395082+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.033409762+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=mCjzB 2025-08-13T20:38:36.033419982+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.033419982+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.046728436+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.046952883+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.046952883+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.047027775+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="catalog update required at 2025-08-13 20:38:36.046963033 +0000 UTC m=+2371.698351150" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.076431152+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mCjzB 2025-08-13T20:38:36.095997827+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.096064658+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.101432033+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.101737922+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.101888866+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.101944148+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=w3m+T 2025-08-13T20:38:36.101993429+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.102024590+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.119409181+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.119694810+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.119854244+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.119982568+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w3m+T 2025-08-13T20:38:36.144541655+00:00 stderr F time="2025-08-13T20:38:36Z" 
level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.144623367+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.154501002+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.154614195+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.154614195+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.154614195+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=8BKfc 2025-08-13T20:38:36.154645996+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.154645996+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.174859609+00:00 stderr F time="2025-08-13T20:38:36Z" level=info 
msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.175226120+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.175274451+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.175394795+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=8BKfc 2025-08-13T20:38:36.175509788+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.175567700+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true 
current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=OcfXv 2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.230473672+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.635668854+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.635745037+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.635745037+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.636006694+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=OcfXv 2025-08-13T20:38:36.809771714+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:36.815577961+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:36.835503266+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=1C3N0 2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="registry state good" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:36.835680361+00:00 stderr F time="2025-08-13T20:38:36Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:37.231557874+00:00 stderr F time="2025-08-13T20:38:37Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:37.231700468+00:00 stderr F time="2025-08-13T20:38:37Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:37.231749590+00:00 stderr F time="2025-08-13T20:38:37Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:37.232066649+00:00 stderr F time="2025-08-13T20:38:37Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1C3N0 2025-08-13T20:38:38.040583679+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.040583679+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info 
msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=d0AYk 2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.052417700+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.063656424+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.063656424+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.063656424+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:38.063656424+00:00 stderr F time="2025-08-13T20:38:38Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=d0AYk 2025-08-13T20:38:39.052584605+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.052584605+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.056694574+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.056851708+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.056869919+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.056869919+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=yPMA5 2025-08-13T20:38:39.056909940+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.056909940+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.070698508+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.070698508+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.070698508+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:39.071094319+00:00 stderr F time="2025-08-13T20:38:39Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=yPMA5 2025-08-13T20:38:44.095858713+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="syncing catalog source" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.095858713+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101091804+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101286750+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101286750+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101286750+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=mFy1J 2025-08-13T20:38:44.101312300+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.101312300+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.111966228+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.112211775+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.112326378+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:44.112477592+00:00 stderr F time="2025-08-13T20:38:44Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=mFy1J 2025-08-13T20:38:45.083822447+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.083822447+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.090675814+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.090946492+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f 
current-pod.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.090995984+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.091035015+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=e92MU 2025-08-13T20:38:45.091067476+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.091098456+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.102374282+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.102573637+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:45.102573637+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace 
id=e92MU 2025-08-13T20:38:45.102639139+00:00 stderr F time="2025-08-13T20:38:45Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=e92MU 2025-08-13T20:38:46.581209017+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.581315311+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590748013+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590865646+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590865646+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590865646+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=1XQaS 2025-08-13T20:38:46.590888397+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="registry state good" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.590888397+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.602829551+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.603099199+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.603099199+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:46.603118799+00:00 stderr F time="2025-08-13T20:38:46Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=1XQaS 2025-08-13T20:38:56.600761379+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.600761379+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.606826364+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.607076611+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.624625037+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.624850884+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uW69p 
2025-08-13T20:38:56.624850884+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.635995575+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.635995575+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=uW69p 2025-08-13T20:38:56.640464834+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.640464834+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649345390+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649572396+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649627118+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649676889+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=y9XcG 2025-08-13T20:38:56.649722221+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.649763432+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.663227400+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.663358624+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.663358624+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:56.668764420+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 
2025-08-13T20:38:56.668926914+00:00 stderr F time="2025-08-13T20:38:56Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=y9XcG 2025-08-13T20:38:57.573863334+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.573863334+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="registry state good" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.578659592+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.592442830+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.592547973+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.592598544+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:57.592670976+00:00 stderr F time="2025-08-13T20:38:57Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=wUMqM 2025-08-13T20:38:58.200207671+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.200207671+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204743742+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204743742+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204743742+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204743742+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=am3VE 2025-08-13T20:38:58.204892796+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.204892796+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.222874964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.222874964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=am3VE 
2025-08-13T20:38:58.222874964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.222947846+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=am3VE 2025-08-13T20:38:58.225684215+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.225684215+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.229899257+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lSuGG 
2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.231878964+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246613609+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246613609+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246613609+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246645790+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246645790+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=lSuGG 2025-08-13T20:38:58.246723762+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.246752153+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249256685+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249283486+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249283486+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249299486+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=RxOF8 2025-08-13T20:38:58.249311097+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.249311097+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators 
catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.605917108+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.606131374+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.606131374+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.606323190+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:38:58.606323190+00:00 stderr F time="2025-08-13T20:38:58Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=RxOF8 2025-08-13T20:39:06.078032810+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.078032810+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.086442792+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="searching for current pods" 
catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.086703710+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.086703710+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.086703710+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=jXQ9t 2025-08-13T20:39:06.086792913+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.087428181+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098113519+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098339335+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jXQ9t 
2025-08-13T20:39:06.098339335+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098408087+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:39:06.098408087+00:00 stderr F time="2025-08-13T20:39:06Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=jXQ9t 2025-08-13T20:41:21.359712161+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.360188655+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.375548007+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.376392892+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.376455903+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.376567807+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=JA2JG 2025-08-13T20:41:21.376613018+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.376644079+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.398404336+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.398714105+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.398935692+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.399160668+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="catalog update required at 2025-08-13 20:41:21.399119637 +0000 UTC m=+2537.050507884" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 
2025-08-13T20:41:21.436382651+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JA2JG 2025-08-13T20:41:21.456508562+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.456702157+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.477696532+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.491948363+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.492104778+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.492104778+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.492119768+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=lnhwh 2025-08-13T20:41:21.492539550+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.492539550+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.508408988+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.524944875+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" 
level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=BLbf4 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.525875591+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="checked registry 
server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.568005376+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966515775+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966635409+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966635409+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966635409+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GJRiW 2025-08-13T20:41:21.966740402+00:00 stderr F time="2025-08-13T20:41:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:21.966754632+00:00 stderr F time="2025-08-13T20:41:21Z" level=info 
msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.169561929+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568117450+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568162071+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="evaluating current pod" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568176081+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568338156+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=GFKjJ 2025-08-13T20:41:22.568922683+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.568922683+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.764863052+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" 
catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:22.765135530+00:00 stderr F time="2025-08-13T20:41:22Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.170066744+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.170066744+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.170066744+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.170066744+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=iX7BZ 2025-08-13T20:41:23.199135652+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.199298947+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.365362345+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.365735835+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.365899840+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.365962612+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=aQO+y 2025-08-13T20:41:23.366007463+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.366048504+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.765080429+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.765328116+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.765372407+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:23.765521581+00:00 stderr F time="2025-08-13T20:41:23Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aQO+y 2025-08-13T20:41:24.189760582+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.189921307+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195527539+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195663143+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195702974+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195784976+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=JTrAU 2025-08-13T20:41:24.195873519+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.195923920+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.366974281+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.366974281+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.366974281+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true 
correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:24.366974281+00:00 stderr F time="2025-08-13T20:41:24Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JTrAU 2025-08-13T20:41:49.595426319+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.597122208+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" 
level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.625866667+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.644166045+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.644166045+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.644166045+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:49.644166045+00:00 stderr F time="2025-08-13T20:41:49Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JjYKc 2025-08-13T20:41:50.478897890+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.479014664+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg 2025-08-13T20:41:50.484344197+00:00 stderr F time="2025-08-13T20:41:50Z" 
level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg
2025-08-13T20:41:50.485094339+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aMQtg
2025-08-13T20:41:50.485154241+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aMQtg
2025-08-13T20:41:50.485255504+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=aMQtg
2025-08-13T20:41:50.485363477+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg
2025-08-13T20:41:50.485441989+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg
2025-08-13T20:41:50.508896725+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg
2025-08-13T20:41:50.509281986+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aMQtg
2025-08-13T20:41:50.509281986+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=aMQtg
2025-08-13T20:41:50.509305417+00:00 stderr F time="2025-08-13T20:41:50Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=aMQtg
2025-08-13T20:41:51.436771096+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3
2025-08-13T20:41:51.436771096+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3
2025-08-13T20:41:51.444349654+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3
2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=jLth3
2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=jLth3
2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=jLth3
2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3
2025-08-13T20:41:51.446124076+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3
2025-08-13T20:41:51.466836423+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3
2025-08-13T20:41:51.466836423+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=jLth3
2025-08-13T20:41:51.466836423+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=jLth3
2025-08-13T20:41:51.466836423+00:00 stderr F time="2025-08-13T20:41:51Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=jLth3
2025-08-13T20:42:12.028532201+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1
2025-08-13T20:42:12.028532201+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1
2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1
2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=XMdr1
2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=XMdr1
2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=XMdr1
2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1
2025-08-13T20:42:12.037502209+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1
2025-08-13T20:42:12.049278609+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1
2025-08-13T20:42:12.049386252+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=XMdr1
2025-08-13T20:42:12.049386252+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=XMdr1
2025-08-13T20:42:12.049439004+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XMdr1
2025-08-13T20:42:12.132087226+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r
2025-08-13T20:42:12.132206940+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r
2025-08-13T20:42:12.141983392+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r
2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T2x7r
2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T2x7r
2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=T2x7r
2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r
2025-08-13T20:42:12.142414344+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r
2025-08-13T20:42:12.158975662+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r
2025-08-13T20:42:12.158975662+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T2x7r
2025-08-13T20:42:12.158975662+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=T2x7r
2025-08-13T20:42:12.172837501+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r
2025-08-13T20:42:12.173045127+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=T2x7r
2025-08-13T20:42:12.178349390+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD
2025-08-13T20:42:12.178349390+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD
2025-08-13T20:42:12.190517391+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD
2025-08-13T20:42:12.190627644+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SyITD
2025-08-13T20:42:12.190627644+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SyITD
2025-08-13T20:42:12.190627644+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=SyITD
2025-08-13T20:42:12.190695276+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD
2025-08-13T20:42:12.190695276+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD
2025-08-13T20:42:12.215049318+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD
2025-08-13T20:42:12.215049318+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SyITD
2025-08-13T20:42:12.215049318+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=SyITD
2025-08-13T20:42:12.234955482+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD
2025-08-13T20:42:12.234955482+00:00 stderr F time="2025-08-13T20:42:12Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=SyITD
2025-08-13T20:42:14.280544576+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC
2025-08-13T20:42:14.280575917+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC
2025-08-13T20:42:14.287078865+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC
2025-08-13T20:42:14.287078865+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=umXeC
2025-08-13T20:42:14.287078865+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=umXeC
2025-08-13T20:42:14.287149327+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=umXeC
2025-08-13T20:42:14.287149327+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC
2025-08-13T20:42:14.287149327+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC
2025-08-13T20:42:14.301883032+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC
2025-08-13T20:42:14.301883032+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=umXeC
2025-08-13T20:42:14.301922413+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=umXeC
2025-08-13T20:42:14.302129409+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=umXeC
2025-08-13T20:42:14.700878763+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78
2025-08-13T20:42:14.700878763+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78
2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78
2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HEk78
2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HEk78
2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=HEk78
2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78
2025-08-13T20:42:14.706892667+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78
2025-08-13T20:42:14.742541894+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78
2025-08-13T20:42:14.742769351+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HEk78
2025-08-13T20:42:14.742873784+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HEk78
2025-08-13T20:42:14.742977057+00:00 stderr F time="2025-08-13T20:42:14Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HEk78
2025-08-13T20:42:15.645178848+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp
2025-08-13T20:42:15.645371383+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp
2025-08-13T20:42:15.656382491+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp
2025-08-13T20:42:15.656490364+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HqDrp
2025-08-13T20:42:15.656490364+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HqDrp
2025-08-13T20:42:15.656508114+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=HqDrp
2025-08-13T20:42:15.656508114+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp
2025-08-13T20:42:15.656521225+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp
2025-08-13T20:42:15.699921786+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp
2025-08-13T20:42:15.700378689+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HqDrp
2025-08-13T20:42:15.700378689+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=HqDrp
2025-08-13T20:42:15.700669237+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp
2025-08-13T20:42:15.700669237+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=HqDrp
2025-08-13T20:42:15.701007197+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4
2025-08-13T20:42:15.701007197+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4
2025-08-13T20:42:15.715175146+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4
2025-08-13T20:42:15.715338750+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fz3M4
2025-08-13T20:42:15.715338750+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fz3M4
2025-08-13T20:42:15.715338750+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=Fz3M4
2025-08-13T20:42:15.715401792+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4
2025-08-13T20:42:15.715401792+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4
2025-08-13T20:42:15.738736995+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4
2025-08-13T20:42:15.738919550+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fz3M4
2025-08-13T20:42:15.738919550+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=Fz3M4
2025-08-13T20:42:15.738919550+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4
2025-08-13T20:42:15.738940741+00:00 stderr F time="2025-08-13T20:42:15Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=Fz3M4
2025-08-13T20:42:21.467882987+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo
2025-08-13T20:42:21.467882987+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo
2025-08-13T20:42:21.471869692+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo
2025-08-13T20:42:21.472009786+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=nrngo
2025-08-13T20:42:21.472009786+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=nrngo
2025-08-13T20:42:21.472033887+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=nrngo
2025-08-13T20:42:21.472033887+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo
2025-08-13T20:42:21.472033887+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo
2025-08-13T20:42:21.486397541+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo
2025-08-13T20:42:21.486512274+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=nrngo
2025-08-13T20:42:21.486512274+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=nrngo
2025-08-13T20:42:21.486632568+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo
2025-08-13T20:42:21.486632568+00:00 stderr F time="2025-08-13T20:42:21Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=nrngo
2025-08-13T20:42:25.371066825+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.371066825+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.371066825+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.371066825+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace health=true id=qC+nW
2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="registry state good" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.391533916+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.391838604+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.392165434+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.392253346+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.392324568+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="checked registry server health" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace health=true id=R4BYM
2025-08-13T20:42:25.392357319+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="registry state good" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.392404501+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensuring registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.424544857+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.424712972+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.424917948+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.424965669+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-marketplace-8s8pc current-pod.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.425070212+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.425070212+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=redhat-operators-f4jkp current-pod.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.425168135+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensured registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.425207176+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=R4BYM
2025-08-13T20:42:25.425410092+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.426424491+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.426525494+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensured registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.426562195+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=qC+nW
2025-08-13T20:42:25.426633837+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.426673749+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.434557386+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.434833034+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.434885505+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.434924907+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="checked registry server health" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace health=true id=zTeMR
2025-08-13T20:42:25.434955887+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="registry state good" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.434991648+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensuring registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR
2025-08-13T20:42:25.439813977+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=SZPn1
2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace
correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=SZPn1 2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.440527788+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.574757068+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.575143659+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.575186140+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=certified-operators-7287f current-pod.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.575378466+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="ensured registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.575417057+00:00 stderr F time="2025-08-13T20:42:25Z" level=info 
msg="requeuing registry server sync based on polling interval 10m0s" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=zTeMR 2025-08-13T20:42:25.775083633+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.775301020+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.775301020+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:25.775429033+00:00 stderr F time="2025-08-13T20:42:25Z" level=info msg="catalog update required at 2025-08-13 20:42:25.775357471 +0000 UTC m=+2601.426745588" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:26.001076909+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=SZPn1 2025-08-13T20:42:26.026720348+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.026720348+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.174703455+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.174875540+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.174875540+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.174875540+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=3reUD 2025-08-13T20:42:26.174875540+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.174897940+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=3reUD 2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:26.579582078+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=gFay8 2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:26.778006938+00:00 stderr F time="2025-08-13T20:42:26Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:27.177567398+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:27.177697992+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:27.177697992+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=gFay8 2025-08-13T20:42:27.177759503+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
id=gFay8 2025-08-13T20:42:27.892832629+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.892832629+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.896906027+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.896963068+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.897060281+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.897060281+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=1id1d 2025-08-13T20:42:27.897060281+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.897060281+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 
2025-08-13T20:42:27.914892475+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.914892475+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.914892475+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:27.914892475+00:00 stderr F time="2025-08-13T20:42:27Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=1id1d 2025-08-13T20:42:31.791832838+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.791832838+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.798975694+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.799129189+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.799129189+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.799142949+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="checked registry server health" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace health=true id=jifKp 2025-08-13T20:42:31.799142949+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="registry state good" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.799152399+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="ensuring registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.819035763+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="searching for current pods" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.819269409+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="evaluating current pod" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.819269409+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="of 1 pods matching label selector, 1 have the correct images and matching hash" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace 
correctHash=true correctImages=true current-pod.name=community-operators-8jhz6 current-pod.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:31.819269409+00:00 stderr F time="2025-08-13T20:42:31Z" level=info msg="requeueing registry server for catalog update check: update pod not yet ready" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=jifKp 2025-08-13T20:42:39.302358298+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=IDLE" 2025-08-13T20:42:39.303379778+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=CONNECTING" 2025-08-13T20:42:39.303487781+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KaU8M 2025-08-13T20:42:39.303530122+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=KaU8M 2025-08-13T20:42:39.305189010+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=KaU8M 2025-08-13T20:42:39.307013572+00:00 stderr F E0813 20:42:39.306924 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.313402777+00:00 stderr F 
time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OZNsZ 2025-08-13T20:42:39.313424687+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=OZNsZ 2025-08-13T20:42:39.314723615+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=OZNsZ 2025-08-13T20:42:39.314861459+00:00 stderr F E0813 20:42:39.314735 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.325173216+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JdCT7 2025-08-13T20:42:39.325173216+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=JdCT7 2025-08-13T20:42:39.328115041+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=JdCT7 
2025-08-13T20:42:39.328311126+00:00 stderr F E0813 20:42:39.328173 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.333503286+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-operators state.State=TRANSIENT_FAILURE" 2025-08-13T20:42:39.333526417+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=60fxG 2025-08-13T20:42:39.333536657+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=60fxG 2025-08-13T20:42:39.334476064+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=60fxG 2025-08-13T20:42:39.334476064+00:00 stderr F E0813 20:42:39.334461 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.349912389+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3IXh3 2025-08-13T20:42:39.349912389+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing 
registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=3IXh3 2025-08-13T20:42:39.351027821+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=3IXh3 2025-08-13T20:42:39.351027821+00:00 stderr F E0813 20:42:39.350986 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.431494521+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XvpGc 2025-08-13T20:42:39.431494521+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=XvpGc 2025-08-13T20:42:39.432640164+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=XvpGc 2025-08-13T20:42:39.432693016+00:00 stderr F E0813 20:42:39.432643 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.594484399+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U1hZz 2025-08-13T20:42:39.594539681+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=U1hZz 2025-08-13T20:42:39.598815574+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=U1hZz 2025-08-13T20:42:39.599062621+00:00 stderr F E0813 20:42:39.598878 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.725401654+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=IDLE" 2025-08-13T20:42:39.725401654+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=KqOOE 2025-08-13T20:42:39.725451675+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=KqOOE 2025-08-13T20:42:39.725716933+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators 
state.State=CONNECTING" 2025-08-13T20:42:39.732380185+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=KqOOE 2025-08-13T20:42:39.732402876+00:00 stderr F E0813 20:42:39.732374 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.732446807+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZKca 2025-08-13T20:42:39.732543970+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=yZKca 2025-08-13T20:42:39.733369753+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=yZKca 2025-08-13T20:42:39.733389254+00:00 stderr F E0813 20:42:39.733357 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:42:39.734430794+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=community-operators state.State=TRANSIENT_FAILURE" 2025-08-13T20:42:39.734430794+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZS0Pg 2025-08-13T20:42:39.734451695+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=ZS0Pg 2025-08-13T20:42:39.735339760+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ZS0Pg 2025-08-13T20:42:39.735339760+00:00 stderr F E0813 20:42:39.735305 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.737593425+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=22fgy 2025-08-13T20:42:39.737593425+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=22fgy 2025-08-13T20:42:39.738130551+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=22fgy 2025-08-13T20:42:39.738130551+00:00 stderr F E0813 20:42:39.738105 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.778287578+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Nmv/v 2025-08-13T20:42:39.778287578+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=Nmv/v 2025-08-13T20:42:39.778901296+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=Nmv/v 2025-08-13T20:42:39.778926287+00:00 stderr F E0813 20:42:39.778894 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.859767828+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JD95p 
2025-08-13T20:42:39.859767828+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=JD95p 2025-08-13T20:42:39.904756515+00:00 stderr F time="2025-08-13T20:42:39Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=JD95p 2025-08-13T20:42:39.904756515+00:00 stderr F E0813 20:42:39.904736 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.920288802+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=8rqVT 2025-08-13T20:42:39.920288802+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=8rqVT 2025-08-13T20:42:39.944602543+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=IDLE" 2025-08-13T20:42:39.944602543+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=CHY0C 2025-08-13T20:42:39.944656885+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
id=CHY0C 2025-08-13T20:42:39.945185070+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=CONNECTING" 2025-08-13T20:42:39.953284364+00:00 stderr F time="2025-08-13T20:42:39Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=certified-operators state.State=TRANSIENT_FAILURE" 2025-08-13T20:42:40.040316833+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=IDLE" 2025-08-13T20:42:40.040436596+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=CONNECTING" 2025-08-13T20:42:40.044638258+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="state.Key.Namespace=openshift-marketplace state.Key.Name=redhat-marketplace state.State=TRANSIENT_FAILURE" 2025-08-13T20:42:40.105641986+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=8rqVT 2025-08-13T20:42:40.105641986+00:00 stderr F E0813 20:42:40.105594 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.105711408+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ONbEu 2025-08-13T20:42:40.105711408+00:00 stderr F 
time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ONbEu 2025-08-13T20:42:40.305063236+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=CHY0C 2025-08-13T20:42:40.305100627+00:00 stderr F E0813 20:42:40.305053 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.305173279+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=asLjU 2025-08-13T20:42:40.305173279+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=asLjU 2025-08-13T20:42:40.506030160+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ONbEu 2025-08-13T20:42:40.506030160+00:00 stderr F E0813 20:42:40.505984 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.506125882+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=YSP3F 2025-08-13T20:42:40.506141043+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=YSP3F 2025-08-13T20:42:40.705592583+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=asLjU 2025-08-13T20:42:40.705592583+00:00 stderr F E0813 20:42:40.705573 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.705642695+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=suOAk 2025-08-13T20:42:40.705642695+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=suOAk 2025-08-13T20:42:40.904975241+00:00 stderr F time="2025-08-13T20:42:40Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace 
error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=YSP3F 2025-08-13T20:42:40.904975241+00:00 stderr F E0813 20:42:40.904949 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.905013752+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="syncing catalog source" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=j66ZP 2025-08-13T20:42:40.905058694+00:00 stderr F time="2025-08-13T20:42:40Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace id=j66ZP 2025-08-13T20:42:41.106209573+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=suOAk 2025-08-13T20:42:41.106209573+00:00 stderr F E0813 20:42:41.105955 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.106209573+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=No0Sy 2025-08-13T20:42:41.106209573+00:00 stderr F 
time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=No0Sy 2025-08-13T20:42:41.306850968+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=j66ZP 2025-08-13T20:42:41.306850968+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=UnIEX 2025-08-13T20:42:41.306850968+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=UnIEX 2025-08-13T20:42:41.505318040+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=No0Sy 2025-08-13T20:42:41.505351801+00:00 stderr F E0813 20:42:41.505293 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.505501865+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace 
id=m4nH4 2025-08-13T20:42:41.505501865+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=m4nH4 2025-08-13T20:42:41.705478940+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=UnIEX 2025-08-13T20:42:41.705517131+00:00 stderr F E0813 20:42:41.705468 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/community-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.705572143+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PpzAk 2025-08-13T20:42:41.705572143+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=PpzAk 2025-08-13T20:42:41.905277341+00:00 stderr F time="2025-08-13T20:42:41Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=m4nH4 2025-08-13T20:42:41.905324902+00:00 stderr F E0813 20:42:41.905211 1 queueinformer_operator.go:319] sync {"update" 
"openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.925762921+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dwSwo 2025-08-13T20:42:41.925897595+00:00 stderr F time="2025-08-13T20:42:41Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=dwSwo 2025-08-13T20:42:42.105688269+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=PpzAk 2025-08-13T20:42:42.105688269+00:00 stderr F E0813 20:42:42.105656 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.147917036+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Wl+0f 2025-08-13T20:42:42.147917036+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=Wl+0f 2025-08-13T20:42:42.304827760+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" 
catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=dwSwo 2025-08-13T20:42:42.304827760+00:00 stderr F E0813 20:42:42.304758 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.345775970+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RVYto 2025-08-13T20:42:42.345775970+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=RVYto 2025-08-13T20:42:42.505527846+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=Wl+0f 2025-08-13T20:42:42.505527846+00:00 stderr F E0813 20:42:42.505496 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.505635749+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=community-operators 
catalogsource.namespace=openshift-marketplace id=x8BXC 2025-08-13T20:42:42.505635749+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace id=x8BXC 2025-08-13T20:42:42.704977556+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=RVYto 2025-08-13T20:42:42.704977556+00:00 stderr F E0813 20:42:42.704955 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:42.705079099+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cAhw1 2025-08-13T20:42:42.705092210+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=cAhw1 2025-08-13T20:42:42.905590960+00:00 stderr F time="2025-08-13T20:42:42Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=community-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/community-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=x8BXC 2025-08-13T20:42:42.905590960+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="syncing 
catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eJByL 2025-08-13T20:42:42.905590960+00:00 stderr F time="2025-08-13T20:42:42Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=eJByL 2025-08-13T20:42:43.105845533+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=cAhw1 2025-08-13T20:42:43.106050719+00:00 stderr F E0813 20:42:43.105946 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.266869965+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w1tnL 2025-08-13T20:42:43.266869965+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=w1tnL 2025-08-13T20:42:43.305633583+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=eJByL 2025-08-13T20:42:43.305681664+00:00 stderr 
F E0813 20:42:43.305620 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.466166321+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ffEpJ 2025-08-13T20:42:43.466166321+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=ffEpJ 2025-08-13T20:42:43.504700592+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=w1tnL 2025-08-13T20:42:43.504926168+00:00 stderr F E0813 20:42:43.504895 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.705847031+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=ffEpJ 2025-08-13T20:42:43.706270283+00:00 stderr F E0813 20:42:43.706201 1 
queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:43.826166130+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="syncing catalog source" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=pD+NU 2025-08-13T20:42:43.826166130+00:00 stderr F time="2025-08-13T20:42:43Z" level=info msg="synchronizing registry server" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace id=pD+NU 2025-08-13T20:42:43.905995921+00:00 stderr F time="2025-08-13T20:42:43Z" level=error msg="UpdateStatus - error while setting CatalogSource status" catalogsource.name=certified-operators catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/certified-operators/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=pD+NU 2025-08-13T20:42:43.906124515+00:00 stderr F E0813 20:42:43.906098 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/certified-operators"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:44.027184745+00:00 stderr F time="2025-08-13T20:42:44Z" level=info msg="syncing catalog source" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1gecN 2025-08-13T20:42:44.027350100+00:00 stderr F time="2025-08-13T20:42:44Z" level=info msg="synchronizing registry server" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace id=1gecN 2025-08-13T20:42:44.106599205+00:00 stderr F time="2025-08-13T20:42:44Z" level=error msg="UpdateStatus - error while setting 
CatalogSource status" catalogsource.name=redhat-marketplace catalogsource.namespace=openshift-marketplace error="Put \"https://10.217.4.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-marketplace/catalogsources/redhat-marketplace/status\": dial tcp 10.217.4.1:443: connect: connection refused" id=1gecN 2025-08-13T20:42:44.106599205+00:00 stderr F E0813 20:42:44.106445 1 queueinformer_operator.go:319] sync {"update" "openshift-marketplace/redhat-marketplace"} failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000026200000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015117130647033234 5ustar zuulzuul././@LongLink0000644000000000000000000000030200000000000011576 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015117130654033232 5ustar zuulzuul././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000222515117130647033237 0ustar zuulzuul2025-08-13T19:59:25.749930980+00:00 stderr F W0813 
19:59:25.749386 1 deprecated.go:66] 2025-08-13T19:59:25.749930980+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:25.749930980+00:00 stderr F 2025-08-13T19:59:25.749930980+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:59:25.749930980+00:00 stderr F 2025-08-13T19:59:25.749930980+00:00 stderr F =============================================== 2025-08-13T19:59:25.749930980+00:00 stderr F 2025-08-13T19:59:25.750743644+00:00 stderr F I0813 19:59:25.750677 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:25.792617047+00:00 stderr F I0813 19:59:25.791677 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:25.796481088+00:00 stderr F I0813 19:59:25.794679 1 kube-rbac-proxy.go:395] Starting TCP socket on :8443 2025-08-13T19:59:25.796481088+00:00 stderr F I0813 19:59:25.795529 1 kube-rbac-proxy.go:402] Listening securely on :8443 2025-08-13T20:42:42.434768166+00:00 stderr F I0813 20:42:42.434158 1 kube-rbac-proxy.go:493] received interrupt, shutting down ././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000202015117130647033230 0ustar zuulzuul2025-12-13T00:13:17.437702538+00:00 stderr F W1213 00:13:17.435500 1 deprecated.go:66] 2025-12-13T00:13:17.437702538+00:00 stderr F ==== Removed Flag Warning ====================== 2025-12-13T00:13:17.437702538+00:00 stderr F 2025-12-13T00:13:17.437702538+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-12-13T00:13:17.437702538+00:00 stderr F 2025-12-13T00:13:17.437702538+00:00 stderr F =============================================== 2025-12-13T00:13:17.437702538+00:00 stderr F 2025-12-13T00:13:17.437702538+00:00 stderr F I1213 00:13:17.436092 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:13:17.437702538+00:00 stderr F I1213 00:13:17.436120 1 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:13:17.442024852+00:00 stderr F I1213 00:13:17.438812 1 kube-rbac-proxy.go:395] Starting TCP socket on :8443 2025-12-13T00:13:17.442024852+00:00 stderr F I1213 00:13:17.439495 1 kube-rbac-proxy.go:402] Listening securely on :8443 ././@LongLink0000644000000000000000000000031600000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000755000175000017500000000000015117130654033232 5ustar zuulzuul././@LongLink0000644000000000000000000000032300000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000237415117130647033244 0ustar zuulzuul2025-12-13T00:13:15.642503755+00:00 stderr F I1213 00:13:15.639976 1 main.go:57] starting net-attach-def-admission-controller webhook server 2025-12-13T00:13:15.643496338+00:00 stderr F W1213 00:13:15.643077 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
2025-12-13T00:13:16.000629189+00:00 stderr F W1213 00:13:16.000394 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-12-13T00:13:16.004703615+00:00 stderr F I1213 00:13:16.003804 1 localmetrics.go:51] UPdating net-attach-def metrics for any with value 0 2025-12-13T00:13:16.004703615+00:00 stderr F I1213 00:13:16.003829 1 localmetrics.go:51] UPdating net-attach-def metrics for sriov with value 0 2025-12-13T00:13:16.004703615+00:00 stderr F I1213 00:13:16.003835 1 localmetrics.go:51] UPdating net-attach-def metrics for ib-sriov with value 0 2025-12-13T00:13:16.004703615+00:00 stderr F I1213 00:13:16.004133 1 controller.go:202] Starting net-attach-def-admission-controller 2025-12-13T00:13:16.209619611+00:00 stderr F I1213 00:13:16.204606 1 controller.go:211] net-attach-def-admission-controller synced and ready ././@LongLink0000644000000000000000000000032300000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multu0000644000175000017500000000255215117130647033242 0ustar zuulzuul2025-08-13T19:59:12.195411848+00:00 stderr F I0813 19:59:12.184348 1 main.go:57] starting net-attach-def-admission-controller webhook server 2025-08-13T19:59:13.341524580+00:00 stderr F W0813 19:59:13.330254 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 2025-08-13T19:59:13.777938261+00:00 stderr F W0813 19:59:13.777869 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
2025-08-13T19:59:13.864351914+00:00 stderr F I0813 19:59:13.863190 1 localmetrics.go:51] UPdating net-attach-def metrics for any with value 0 2025-08-13T19:59:14.306827216+00:00 stderr F I0813 19:59:14.228559 1 localmetrics.go:51] UPdating net-attach-def metrics for sriov with value 0 2025-08-13T19:59:14.307369132+00:00 stderr F I0813 19:59:14.307339 1 localmetrics.go:51] UPdating net-attach-def metrics for ib-sriov with value 0 2025-08-13T19:59:14.409334398+00:00 stderr F I0813 19:59:14.382647 1 controller.go:202] Starting net-attach-def-admission-controller 2025-08-13T19:59:17.037691044+00:00 stderr F I0813 19:59:17.032208 1 controller.go:211] net-attach-def-admission-controller synced and ready 2025-08-13T20:01:48.651575909+00:00 stderr F I0813 20:01:48.650318 1 tlsutil.go:43] cetificate reloaded ././@LongLink0000644000000000000000000000025700000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015117130647033061 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000755000175000017500000000000015117130654033057 5ustar zuulzuul././@LongLink0000644000000000000000000000027600000000000011610 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-control0000644000175000017500000012467015117130647033075 0ustar zuulzuul2025-08-13T20:05:59.835111434+00:00 stderr F I0813 20:05:59.834569 1 cmd.go:91] &{ true {false} installer true map[cert-configmaps:0xc0007986e0 cert-dir:0xc0007988c0 cert-secrets:0xc000798640 configmaps:0xc0007981e0 namespace:0xc000798000 optional-cert-configmaps:0xc000798820 optional-configmaps:0xc000798320 optional-secrets:0xc000798280 pod:0xc0007980a0 pod-manifest-dir:0xc000798460 resource-dir:0xc0007983c0 revision:0xc0001f5f40 secrets:0xc000798140 v:0xc0007992c0] [0xc0007992c0 0xc0001f5f40 0xc000798000 0xc0007980a0 0xc0007983c0 0xc000798460 0xc0007981e0 0xc000798320 0xc000798140 0xc000798280 0xc0007988c0 0xc0007986e0 0xc000798820 0xc000798640] [] map[cert-configmaps:0xc0007986e0 cert-dir:0xc0007988c0 cert-secrets:0xc000798640 configmaps:0xc0007981e0 help:0xc000799680 kubeconfig:0xc0000f9f40 log-flush-frequency:0xc000799220 namespace:0xc000798000 optional-cert-configmaps:0xc000798820 optional-cert-secrets:0xc000798780 optional-configmaps:0xc000798320 optional-secrets:0xc000798280 pod:0xc0007980a0 pod-manifest-dir:0xc000798460 pod-manifests-lock-file:0xc0007985a0 resource-dir:0xc0007983c0 revision:0xc0001f5f40 secrets:0xc000798140 timeout-duration:0xc000798500 v:0xc0007992c0 vmodule:0xc000799360] [0xc0000f9f40 0xc0001f5f40 0xc000798000 0xc0007980a0 0xc000798140 0xc0007981e0 0xc000798280 0xc000798320 0xc0007983c0 0xc000798460 0xc000798500 0xc0007985a0 0xc000798640 0xc0007986e0 0xc000798780 0xc000798820 0xc0007988c0 0xc000799220 0xc0007992c0 0xc000799360 0xc000799680] [0xc0007986e0 0xc0007988c0 0xc000798640 0xc0007981e0 0xc000799680 0xc0000f9f40 0xc000799220 0xc000798000 0xc000798820 0xc000798780 0xc000798320 
0xc000798280 0xc0007980a0 0xc000798460 0xc0007985a0 0xc0007983c0 0xc0001f5f40 0xc000798140 0xc000798500 0xc0007992c0 0xc000799360] map[104:0xc000799680 118:0xc0007992c0] [] -1 0 0xc0006fdb30 true 0x73b100 []} 2025-08-13T20:05:59.835111434+00:00 stderr F I0813 20:05:59.834918 1 cmd.go:92] (*installerpod.InstallOptions)(0xc000583d40)({ 2025-08-13T20:05:59.835111434+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:05:59.835111434+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:05:59.835111434+00:00 stderr F Revision: (string) (len=2) "10", 2025-08-13T20:05:59.835111434+00:00 stderr F NodeName: (string) "", 2025-08-13T20:05:59.835111434+00:00 stderr F Namespace: (string) (len=33) "openshift-kube-controller-manager", 2025-08-13T20:05:59.835111434+00:00 stderr F PodConfigMapNamePrefix: (string) (len=27) "kube-controller-manager-pod", 2025-08-13T20:05:59.835111434+00:00 stderr F SecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=27) "service-account-private-key", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=12) "serving-cert" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=27) "kube-controller-manager-pod", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=32) "cluster-policy-controller-config", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=29) "controller-manager-kubeconfig", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=38) "kube-controller-cert-syncer-kubeconfig", 
2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=17) "serviceaccount-ca", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=10) "service-ca", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=15) "recycler-config" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=12) "cloud-config" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F CertSecretNames: ([]string) (len=2 cap=2) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=39) "kube-controller-manager-client-cert-key", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=10) "csr-signer" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) , 2025-08-13T20:05:59.835111434+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=9) "client-ca" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:05:59.835111434+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2025-08-13T20:05:59.835111434+00:00 stderr F }, 2025-08-13T20:05:59.835111434+00:00 stderr F CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", 2025-08-13T20:05:59.835111434+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:05:59.835111434+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:05:59.835111434+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:05:59.835111434+00:00 stderr F StaticPodManifestsLockFile: (string) 
"", 2025-08-13T20:05:59.835111434+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:05:59.835111434+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:05:59.835111434+00:00 stderr F }) 2025-08-13T20:05:59.842751153+00:00 stderr F I0813 20:05:59.838103 1 cmd.go:409] Getting controller reference for node crc 2025-08-13T20:05:59.852998977+00:00 stderr F I0813 20:05:59.852958 1 cmd.go:422] Waiting for installer revisions to settle for node crc 2025-08-13T20:05:59.857587378+00:00 stderr F I0813 20:05:59.857188 1 cmd.go:514] Waiting additional period after revisions have settled for node crc 2025-08-13T20:06:29.871751453+00:00 stderr F I0813 20:06:29.871543 1 cmd.go:520] Getting installer pods for node crc 2025-08-13T20:06:29.927650613+00:00 stderr F I0813 20:06:29.912246 1 cmd.go:538] Latest installer revision for node crc is: 10 2025-08-13T20:06:29.927820478+00:00 stderr F I0813 20:06:29.927747 1 cmd.go:427] Querying kubelet version for node crc 2025-08-13T20:06:29.937145415+00:00 stderr F I0813 20:06:29.936965 1 cmd.go:440] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:06:29.937145415+00:00 stderr F I0813 20:06:29.937036 1 cmd.go:289] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10" ... 2025-08-13T20:06:29.937458994+00:00 stderr F I0813 20:06:29.937385 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10" ... 2025-08-13T20:06:29.937458994+00:00 stderr F I0813 20:06:29.937440 1 cmd.go:225] Getting secrets ... 
2025-08-13T20:06:29.948920812+00:00 stderr F I0813 20:06:29.948461 1 copy.go:32] Got secret openshift-kube-controller-manager/localhost-recovery-client-token-10 2025-08-13T20:06:29.953436762+00:00 stderr F I0813 20:06:29.952585 1 copy.go:32] Got secret openshift-kube-controller-manager/service-account-private-key-10 2025-08-13T20:06:29.958541818+00:00 stderr F I0813 20:06:29.958434 1 copy.go:32] Got secret openshift-kube-controller-manager/serving-cert-10 2025-08-13T20:06:29.958541818+00:00 stderr F I0813 20:06:29.958509 1 cmd.go:238] Getting config maps ... 2025-08-13T20:06:30.028237794+00:00 stderr F I0813 20:06:30.028116 1 copy.go:60] Got configMap openshift-kube-controller-manager/cluster-policy-controller-config-10 2025-08-13T20:06:30.033609938+00:00 stderr F I0813 20:06:30.033515 1 copy.go:60] Got configMap openshift-kube-controller-manager/config-10 2025-08-13T20:06:30.036895452+00:00 stderr F I0813 20:06:30.036684 1 copy.go:60] Got configMap openshift-kube-controller-manager/controller-manager-kubeconfig-10 2025-08-13T20:06:30.041016810+00:00 stderr F I0813 20:06:30.040969 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-cert-syncer-kubeconfig-10 2025-08-13T20:06:30.045113907+00:00 stderr F I0813 20:06:30.045031 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-manager-pod-10 2025-08-13T20:06:30.078006779+00:00 stderr F I0813 20:06:30.077171 1 copy.go:60] Got configMap openshift-kube-controller-manager/recycler-config-10 2025-08-13T20:06:30.286083348+00:00 stderr F I0813 20:06:30.284316 1 copy.go:60] Got configMap openshift-kube-controller-manager/service-ca-10 2025-08-13T20:06:30.480644879+00:00 stderr F I0813 20:06:30.480576 1 copy.go:60] Got configMap openshift-kube-controller-manager/serviceaccount-ca-10 2025-08-13T20:06:30.680035208+00:00 stderr F I0813 20:06:30.679933 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/cloud-config-10: configmaps "cloud-config-10" not found 
2025-08-13T20:06:30.680208983+00:00 stderr F I0813 20:06:30.680162 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token" ... 2025-08-13T20:06:30.680323866+00:00 stderr F I0813 20:06:30.680302 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-08-13T20:06:30.682192650+00:00 stderr F I0813 20:06:30.682158 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/token" ... 2025-08-13T20:06:30.682834408+00:00 stderr F I0813 20:06:30.682731 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/ca.crt" ... 2025-08-13T20:06:30.683144557+00:00 stderr F I0813 20:06:30.683114 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:06:30.684131515+00:00 stderr F I0813 20:06:30.684090 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key" ... 2025-08-13T20:06:30.684222138+00:00 stderr F I0813 20:06:30.684198 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key/service-account.pub" ... 2025-08-13T20:06:30.685318589+00:00 stderr F I0813 20:06:30.685287 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/service-account-private-key/service-account.key" ... 
2025-08-13T20:06:30.685529345+00:00 stderr F I0813 20:06:30.685503 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert" ... 2025-08-13T20:06:30.685589647+00:00 stderr F I0813 20:06:30.685570 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert/tls.key" ... 2025-08-13T20:06:30.687402839+00:00 stderr F I0813 20:06:30.687362 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/secrets/serving-cert/tls.crt" ... 2025-08-13T20:06:30.689492049+00:00 stderr F I0813 20:06:30.689451 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/cluster-policy-controller-config" ... 2025-08-13T20:06:30.689641083+00:00 stderr F I0813 20:06:30.689564 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/cluster-policy-controller-config/config.yaml" ... 2025-08-13T20:06:30.707133994+00:00 stderr F I0813 20:06:30.707021 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/config" ... 2025-08-13T20:06:30.707330309+00:00 stderr F I0813 20:06:30.707300 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/config/config.yaml" ... 2025-08-13T20:06:30.714536146+00:00 stderr F I0813 20:06:30.714440 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/controller-manager-kubeconfig" ... 2025-08-13T20:06:30.714651599+00:00 stderr F I0813 20:06:30.714628 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/controller-manager-kubeconfig/kubeconfig" ... 
2025-08-13T20:06:30.728982389+00:00 stderr F I0813 20:06:30.718252 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-cert-syncer-kubeconfig" ... 2025-08-13T20:06:30.729157004+00:00 stderr F I0813 20:06:30.729126 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig" ... 2025-08-13T20:06:30.738554784+00:00 stderr F I0813 20:06:30.738445 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod" ... 2025-08-13T20:06:30.738554784+00:00 stderr F I0813 20:06:30.738503 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/forceRedeploymentReason" ... 2025-08-13T20:06:30.740369666+00:00 stderr F I0813 20:06:30.740296 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/pod.yaml" ... 2025-08-13T20:06:30.740596192+00:00 stderr F I0813 20:06:30.740528 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/kube-controller-manager-pod/version" ... 2025-08-13T20:06:30.740945882+00:00 stderr F I0813 20:06:30.740732 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/recycler-config" ... 2025-08-13T20:06:30.740945882+00:00 stderr F I0813 20:06:30.740861 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/recycler-config/recycler-pod.yaml" ... 2025-08-13T20:06:30.742336842+00:00 stderr F I0813 20:06:30.742225 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/service-ca" ... 
2025-08-13T20:06:30.742336842+00:00 stderr F I0813 20:06:30.742294 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/service-ca/ca-bundle.crt" ... 2025-08-13T20:06:30.746833191+00:00 stderr F I0813 20:06:30.743521 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/serviceaccount-ca" ... 2025-08-13T20:06:30.746833191+00:00 stderr F I0813 20:06:30.743564 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/configmaps/serviceaccount-ca/ca-bundle.crt" ... 2025-08-13T20:06:30.746833191+00:00 stderr F I0813 20:06:30.745655 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs" ... 2025-08-13T20:06:30.746833191+00:00 stderr F I0813 20:06:30.745674 1 cmd.go:225] Getting secrets ... 2025-08-13T20:06:30.879011066+00:00 stderr F I0813 20:06:30.877620 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer 2025-08-13T20:06:31.081470183+00:00 stderr F I0813 20:06:31.080423 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key 2025-08-13T20:06:31.081644418+00:00 stderr F I0813 20:06:31.081623 1 cmd.go:238] Getting config maps ... 2025-08-13T20:06:31.371338561+00:00 stderr F I0813 20:06:31.371276 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca 2025-08-13T20:06:31.897416895+00:00 stderr F I0813 20:06:31.897357 1 copy.go:60] Got configMap openshift-kube-controller-manager/client-ca 2025-08-13T20:06:31.955715866+00:00 stderr F I0813 20:06:31.955186 1 copy.go:60] Got configMap openshift-kube-controller-manager/trusted-ca-bundle 2025-08-13T20:06:31.956504899+00:00 stderr F I0813 20:06:31.956057 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer" ... 
2025-08-13T20:06:31.957035874+00:00 stderr F I0813 20:06:31.956553 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer/tls.crt" ... 2025-08-13T20:06:31.957035874+00:00 stderr F I0813 20:06:31.956928 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer/tls.key" ... 2025-08-13T20:06:31.957107226+00:00 stderr F I0813 20:06:31.957059 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key" ... 2025-08-13T20:06:31.957107226+00:00 stderr F I0813 20:06:31.957075 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key/tls.key" ... 2025-08-13T20:06:31.957278241+00:00 stderr F I0813 20:06:31.957185 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key/tls.crt" ... 2025-08-13T20:06:31.957642931+00:00 stderr F I0813 20:06:31.957345 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/aggregator-client-ca" ... 2025-08-13T20:06:31.962102239+00:00 stderr F I0813 20:06:31.960192 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ... 2025-08-13T20:06:31.962102239+00:00 stderr F I0813 20:06:31.960413 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/client-ca" ... 2025-08-13T20:06:32.028939486+00:00 stderr F I0813 20:06:32.028378 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/client-ca/ca-bundle.crt" ... 
2025-08-13T20:06:32.037492591+00:00 stderr F I0813 20:06:32.034442 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/trusted-ca-bundle" ... 2025-08-13T20:06:32.037492591+00:00 stderr F I0813 20:06:32.034579 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ... 2025-08-13T20:06:32.037492591+00:00 stderr F I0813 20:06:32.035400 1 cmd.go:331] Getting pod configmaps/kube-controller-manager-pod-10 -n openshift-kube-controller-manager 2025-08-13T20:06:32.068991654+00:00 stderr F I0813 20:06:32.065289 1 cmd.go:347] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:06:32.068991654+00:00 stderr F I0813 20:06:32.065360 1 cmd.go:375] Writing a pod under "kube-controller-manager-pod.yaml" key 2025-08-13T20:06:32.068991654+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager","namespace":"openshift-kube-controller-manager","creationTimestamp":null,"labels":{"app":"kube-controller-manager","kube-controller-manager":"true","revision":"10"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-controller-manager","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10257 \\))\" ]; do sleep 1; done'\n\nif [ -f 
/etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nif [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then\n echo \"Setting custom CA bundle for cloud provider\"\n export AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem\nfi\n\nexec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \\\n --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true 
--feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false 
--feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12"],"ports":[{"containerPort":10257}],"resources":{"requests":{"cpu":"60m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"cluster-policy-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n 2025-08-13T20:06:32.069063786+00:00 stderr F \"$(ss -Htanop \\( sport = 10357 \\))\" ]; do sleep 1; done'\n\nexec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --namespace=${POD_NAMESPACE} 
-v=2"],"ports":[{"containerPort":10357}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["cluster-kube-controller-manager-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-recovery-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","comma
nd":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 9443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
2025-08-13T20:06:32.130591450+00:00 stderr F I0813 20:06:32.130436 1 cmd.go:606] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10/kube-controller-manager-pod.yaml" ...
2025-08-13T20:06:32.143163911+00:00 stderr F I0813 20:06:32.143032 1 cmd.go:613] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-controller-manager-pod.yaml" ...
2025-08-13T20:06:32.143220332+00:00 stderr F I0813 20:06:32.143086 1 cmd.go:617] Writing static pod manifest "/etc/kubernetes/manifests/kube-controller-manager-pod.yaml" ...
2025-08-13T20:06:32.143220332+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager","namespace":"openshift-kube-controller-manager","creationTimestamp":null,"labels":{"app":"kube-controller-manager","kube-controller-manager":"true","revision":"10"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-controller-manager","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-10"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10257 \\))\" ]; do sleep 1; done'\n\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nif [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then\n echo \"Setting custom CA bundle for cloud provider\"\n export AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem\nfi\n\nexec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n 
--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \\\n --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false 
--feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true 
--feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 
--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10257}],"resources":{"requests":{"cpu":"60m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"cluster-policy-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10357 \\))\" ]; do sleep 1; done'\n\nexec cluster-policy
2025-08-13T20:06:32.143262553+00:00 stderr F -controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \\\n  --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n  --namespace=${POD_NAMESPACE}
-v=2"],"ports":[{"containerPort":10357}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["cluster-kube-controller-manager-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-recovery-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","comma
nd":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 9443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer/0.log
2025-08-13T20:07:28.135939739+00:00 stderr F I0813 20:07:28.133266 1 cmd.go:91] &{ true {false} installer true map[cert-configmaps:0xc000bb7c20 cert-dir:0xc000bb7e00 cert-secrets:0xc000bb7b80 configmaps:0xc000bb7720 namespace:0xc000bb7540 optional-cert-configmaps:0xc000bb7d60 optional-configmaps:0xc000bb7860 optional-secrets:0xc000bb77c0 pod:0xc000bb75e0 pod-manifest-dir:0xc000bb79a0 resource-dir:0xc000bb7900 revision:0xc000bb74a0 secrets:0xc000bb7680 v:0xc000bcc820] [0xc000bcc820 0xc000bb74a0 0xc000bb7540 0xc000bb75e0 0xc000bb7900 0xc000bb79a0 0xc000bb7720 0xc000bb7860 0xc000bb7680 0xc000bb77c0 0xc000bb7e00 0xc000bb7c20 0xc000bb7d60 0xc000bb7b80] [] map[cert-configmaps:0xc000bb7c20 cert-dir:0xc000bb7e00 cert-secrets:0xc000bb7b80 configmaps:0xc000bb7720 help:0xc000bccbe0 kubeconfig:0xc000bb7400 log-flush-frequency:0xc000bcc780 namespace:0xc000bb7540 optional-cert-configmaps:0xc000bb7d60 optional-cert-secrets:0xc000bb7cc0 optional-configmaps:0xc000bb7860 optional-secrets:0xc000bb77c0 pod:0xc000bb75e0 pod-manifest-dir:0xc000bb79a0 pod-manifests-lock-file:0xc000bb7ae0 resource-dir:0xc000bb7900 revision:0xc000bb74a0 secrets:0xc000bb7680 timeout-duration:0xc000bb7a40 v:0xc000bcc820 vmodule:0xc000bcc8c0] [0xc000bb7400 0xc000bb74a0 0xc000bb7540 0xc000bb75e0 0xc000bb7680 0xc000bb7720 0xc000bb77c0 0xc000bb7860 0xc000bb7900 0xc000bb79a0 0xc000bb7a40 0xc000bb7ae0 0xc000bb7b80 0xc000bb7c20 0xc000bb7cc0 0xc000bb7d60 0xc000bb7e00 0xc000bcc780 0xc000bcc820 0xc000bcc8c0 0xc000bccbe0] [0xc000bb7c20 0xc000bb7e00 0xc000bb7b80 0xc000bb7720 0xc000bccbe0 0xc000bb7400 0xc000bcc780 0xc000bb7540 0xc000bb7d60 0xc000bb7cc0 0xc000bb7860
0xc000bb77c0 0xc000bb75e0 0xc000bb79a0 0xc000bb7ae0 0xc000bb7900 0xc000bb74a0 0xc000bb7680 0xc000bb7a40 0xc000bcc820 0xc000bcc8c0] map[104:0xc000bccbe0 118:0xc000bcc820] [] -1 0 0xc000b7f560 true 0x73b100 []}
2025-08-13T20:07:28.135939739+00:00 stderr F I0813 20:07:28.133934 1 cmd.go:92] (*installerpod.InstallOptions)(0xc000b28680)({
2025-08-13T20:07:28.135939739+00:00 stderr F KubeConfig: (string) "",
2025-08-13T20:07:28.135939739+00:00 stderr F KubeClient: (kubernetes.Interface) ,
2025-08-13T20:07:28.135939739+00:00 stderr F Revision: (string) (len=2) "11",
2025-08-13T20:07:28.135939739+00:00 stderr F NodeName: (string) "",
2025-08-13T20:07:28.135939739+00:00 stderr F Namespace: (string) (len=33) "openshift-kube-controller-manager",
2025-08-13T20:07:28.135939739+00:00 stderr F PodConfigMapNamePrefix: (string) (len=27) "kube-controller-manager-pod",
2025-08-13T20:07:28.135939739+00:00 stderr F SecretNamePrefixes: ([]string) (len=2 cap=2) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=27) "service-account-private-key",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=31) "localhost-recovery-client-token"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=12) "serving-cert"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=27) "kube-controller-manager-pod",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=6) "config",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=32) "cluster-policy-controller-config",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=29) "controller-manager-kubeconfig",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=38) "kube-controller-cert-syncer-kubeconfig",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=17) "serviceaccount-ca",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=10) "service-ca",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=15) "recycler-config"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=12) "cloud-config"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F CertSecretNames: ([]string) (len=2 cap=2) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=39) "kube-controller-manager-client-cert-key",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=10) "csr-signer"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) ,
2025-08-13T20:07:28.135939739+00:00 stderr F CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=20) "aggregator-client-ca",
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=9) "client-ca"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:07:28.135939739+00:00 stderr F (string) (len=17) "trusted-ca-bundle"
2025-08-13T20:07:28.135939739+00:00 stderr F },
2025-08-13T20:07:28.135939739+00:00 stderr F CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs",
2025-08-13T20:07:28.135939739+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources",
2025-08-13T20:07:28.135939739+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests",
2025-08-13T20:07:28.135939739+00:00 stderr F Timeout: (time.Duration) 2m0s,
2025-08-13T20:07:28.135939739+00:00 stderr F StaticPodManifestsLockFile: (string)
"",
2025-08-13T20:07:28.135939739+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) ,
2025-08-13T20:07:28.135939739+00:00 stderr F KubeletVersion: (string) ""
2025-08-13T20:07:28.135939739+00:00 stderr F })
2025-08-13T20:07:28.209260091+00:00 stderr F I0813 20:07:28.204641 1 cmd.go:409] Getting controller reference for node crc
2025-08-13T20:07:28.244244624+00:00 stderr F I0813 20:07:28.244110 1 cmd.go:422] Waiting for installer revisions to settle for node crc
2025-08-13T20:07:28.250320798+00:00 stderr F I0813 20:07:28.250209 1 cmd.go:514] Waiting additional period after revisions have settled for node crc
2025-08-13T20:07:58.255243085+00:00 stderr F I0813 20:07:58.255028 1 cmd.go:520] Getting installer pods for node crc
2025-08-13T20:07:58.270363959+00:00 stderr F I0813 20:07:58.270251 1 cmd.go:538] Latest installer revision for node crc is: 11
2025-08-13T20:07:58.270363959+00:00 stderr F I0813 20:07:58.270306 1 cmd.go:427] Querying kubelet version for node crc
2025-08-13T20:07:58.280990563+00:00 stderr F I0813 20:07:58.280531 1 cmd.go:440] Got kubelet version 1.29.5+29c95f3 on target node crc
2025-08-13T20:07:58.280990563+00:00 stderr F I0813 20:07:58.280586 1 cmd.go:289] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11" ...
2025-08-13T20:07:58.281383995+00:00 stderr F I0813 20:07:58.281285 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11" ...
2025-08-13T20:07:58.281383995+00:00 stderr F I0813 20:07:58.281325 1 cmd.go:225] Getting secrets ...
2025-08-13T20:07:58.295838999+00:00 stderr F I0813 20:07:58.295612 1 copy.go:32] Got secret openshift-kube-controller-manager/localhost-recovery-client-token-11
2025-08-13T20:07:58.328000401+00:00 stderr F I0813 20:07:58.303010 1 copy.go:32] Got secret openshift-kube-controller-manager/service-account-private-key-11
2025-08-13T20:07:58.328000401+00:00 stderr F I0813 20:07:58.307771 1 copy.go:32] Got secret openshift-kube-controller-manager/serving-cert-11
2025-08-13T20:07:58.328000401+00:00 stderr F I0813 20:07:58.327978 1 cmd.go:238] Getting config maps ...
2025-08-13T20:07:58.335728553+00:00 stderr F I0813 20:07:58.335667 1 copy.go:60] Got configMap openshift-kube-controller-manager/cluster-policy-controller-config-11
2025-08-13T20:07:58.345595516+00:00 stderr F I0813 20:07:58.345471 1 copy.go:60] Got configMap openshift-kube-controller-manager/config-11
2025-08-13T20:07:58.349395815+00:00 stderr F I0813 20:07:58.349363 1 copy.go:60] Got configMap openshift-kube-controller-manager/controller-manager-kubeconfig-11
2025-08-13T20:07:58.352842753+00:00 stderr F I0813 20:07:58.352817 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-cert-syncer-kubeconfig-11
2025-08-13T20:07:58.358223618+00:00 stderr F I0813 20:07:58.358192 1 copy.go:60] Got configMap openshift-kube-controller-manager/kube-controller-manager-pod-11
2025-08-13T20:07:58.468715386+00:00 stderr F I0813 20:07:58.468660 1 copy.go:60] Got configMap openshift-kube-controller-manager/recycler-config-11
2025-08-13T20:07:58.660710620+00:00 stderr F I0813 20:07:58.660603 1 copy.go:60] Got configMap openshift-kube-controller-manager/service-ca-11
2025-08-13T20:07:58.859350016+00:00 stderr F I0813 20:07:58.859288 1 copy.go:60] Got configMap openshift-kube-controller-manager/serviceaccount-ca-11
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.062954 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/cloud-config-11: configmaps "cloud-config-11" not found
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.062997 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064041 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token/token" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064218 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token/ca.crt" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064407 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token/namespace" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064495 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/localhost-recovery-client-token/service-ca.crt" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064655 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/service-account-private-key" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064713 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/service-account-private-key/service-account.key" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.064861 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/service-account-private-key/service-account.pub" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065012 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/serving-cert" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065094 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/serving-cert/tls.crt" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065186 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/secrets/serving-cert/tls.key" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065278 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/cluster-policy-controller-config" ...
2025-08-13T20:07:59.065932038+00:00 stderr F I0813 20:07:59.065386 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/cluster-policy-controller-config/config.yaml" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067285 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/config" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067412 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/config/config.yaml" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067532 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/controller-manager-kubeconfig" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067588 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/controller-manager-kubeconfig/kubeconfig" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067680 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-cert-syncer-kubeconfig" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067727 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067873 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-manager-pod" ...
2025-08-13T20:07:59.068023389+00:00 stderr F I0813 20:07:59.067959 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-manager-pod/forceRedeploymentReason" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068127 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-manager-pod/pod.yaml" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068266 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/kube-controller-manager-pod/version" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068367 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/recycler-config" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068466 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/recycler-config/recycler-pod.yaml" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068558 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/service-ca" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068637 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/service-ca/ca-bundle.crt" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068727 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/serviceaccount-ca" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.068848 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/configmaps/serviceaccount-ca/ca-bundle.crt" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.069090 1 cmd.go:217] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs" ...
2025-08-13T20:07:59.069268034+00:00 stderr F I0813 20:07:59.069107 1 cmd.go:225] Getting secrets ...
2025-08-13T20:07:59.267692263+00:00 stderr F I0813 20:07:59.267338 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer
2025-08-13T20:07:59.469969843+00:00 stderr F I0813 20:07:59.469055 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key
2025-08-13T20:07:59.469969843+00:00 stderr F I0813 20:07:59.469120 1 cmd.go:238] Getting config maps ...
2025-08-13T20:07:59.669069401+00:00 stderr F I0813 20:07:59.668869 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca
2025-08-13T20:07:59.871931127+00:00 stderr F I0813 20:07:59.866753 1 copy.go:60] Got configMap openshift-kube-controller-manager/client-ca
2025-08-13T20:08:00.074967828+00:00 stderr F I0813 20:08:00.071345 1 copy.go:60] Got configMap openshift-kube-controller-manager/trusted-ca-bundle
2025-08-13T20:08:00.074967828+00:00 stderr F I0813 20:08:00.071666 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer" ...
2025-08-13T20:08:00.074967828+00:00 stderr F I0813 20:08:00.072018 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer/tls.crt" ... 2025-08-13T20:08:00.081078783+00:00 stderr F I0813 20:08:00.072324 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/csr-signer/tls.key" ... 2025-08-13T20:08:00.081078783+00:00 stderr F I0813 20:08:00.080616 1 cmd.go:257] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key" ... 2025-08-13T20:08:00.081078783+00:00 stderr F I0813 20:08:00.080639 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key/tls.crt" ... 2025-08-13T20:08:00.084667776+00:00 stderr F I0813 20:08:00.081228 1 cmd.go:635] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/secrets/kube-controller-manager-client-cert-key/tls.key" ... 2025-08-13T20:08:00.084667776+00:00 stderr F I0813 20:08:00.083488 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/aggregator-client-ca" ... 2025-08-13T20:08:00.084667776+00:00 stderr F I0813 20:08:00.083523 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ... 2025-08-13T20:08:00.088862496+00:00 stderr F I0813 20:08:00.084704 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/client-ca" ... 2025-08-13T20:08:00.088862496+00:00 stderr F I0813 20:08:00.084977 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/client-ca/ca-bundle.crt" ... 
2025-08-13T20:08:00.088862496+00:00 stderr F I0813 20:08:00.087658 1 cmd.go:273] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/trusted-ca-bundle" ... 2025-08-13T20:08:00.148623069+00:00 stderr F I0813 20:08:00.090499 1 cmd.go:625] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ... 2025-08-13T20:08:00.157142794+00:00 stderr F I0813 20:08:00.150176 1 cmd.go:331] Getting pod configmaps/kube-controller-manager-pod-11 -n openshift-kube-controller-manager 2025-08-13T20:08:00.281932152+00:00 stderr F I0813 20:08:00.263229 1 cmd.go:347] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:08:00.281932152+00:00 stderr F I0813 20:08:00.263292 1 cmd.go:375] Writing a pod under "kube-controller-manager-pod.yaml" key 2025-08-13T20:08:00.281932152+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager","namespace":"openshift-kube-controller-manager","creationTimestamp":null,"labels":{"app":"kube-controller-manager","kube-controller-manager":"true","revision":"11"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-controller-manager","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10257 \\))\" ]; do sleep 1; done'\n\nif [ -f 
/etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nif [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then\n echo \"Setting custom CA bundle for cloud provider\"\n export AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem\nfi\n\nexec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \\\n --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true 
--feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false --feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false 
--feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true --feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12"],"ports":[{"containerPort":10257}],"resources":{"requests":{"cpu":"60m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"cluster-policy-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n 2025-08-13T20:08:00.282006684+00:00 stderr F \"$(ss -Htanop \\( sport = 10357 \\))\" ]; do sleep 1; done'\n\nexec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --namespace=${POD_NAMESPACE} 
-v=2"],"ports":[{"containerPort":10357}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["cluster-kube-controller-manager-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-recovery-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","comma
nd":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 9443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:08:00.294647666+00:00 stderr F I0813 20:08:00.294562 1 cmd.go:606] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11/kube-controller-manager-pod.yaml" ... 2025-08-13T20:08:00.298865277+00:00 stderr F I0813 20:08:00.295071 1 cmd.go:613] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-controller-manager-pod.yaml" ... 2025-08-13T20:08:00.298865277+00:00 stderr F I0813 20:08:00.295115 1 cmd.go:617] Writing static pod manifest "/etc/kubernetes/manifests/kube-controller-manager-pod.yaml" ... 
2025-08-13T20:08:00.298865277+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager","namespace":"openshift-kube-controller-manager","creationTimestamp":null,"labels":{"app":"kube-controller-manager","kube-controller-manager":"true","revision":"11"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-controller-manager","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-11"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10257 \\))\" ]; do sleep 1; done'\n\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nif [ -f /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem ]; then\n echo \"Setting custom CA bundle for cloud provider\"\n export AWS_CA_BUNDLE=/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem\nfi\n\nexec hyperkube kube-controller-manager --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n 
--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt \\\n --requestheader-client-ca-file=/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --allocate-node-cidrs=false --cert-dir=/var/run/kubernetes --cluster-cidr=10.217.0.0/22 --cluster-name=crc-d8rkd --cluster-signing-cert-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt --cluster-signing-duration=8760h --cluster-signing-key-file=/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key --controllers=* --controllers=-bootstrapsigner --controllers=-tokencleaner --controllers=-ttl --enable-dynamic-provisioning=true --feature-gates=AdminNetworkPolicy=true --feature-gates=AlibabaPlatform=true --feature-gates=AutomatedEtcdBackup=false --feature-gates=AzureWorkloadIdentity=true --feature-gates=BareMetalLoadBalancer=true --feature-gates=BuildCSIVolumes=true --feature-gates=CSIDriverSharedResource=false --feature-gates=ChunkSizeMiB=false --feature-gates=CloudDualStackNodeIPs=true --feature-gates=ClusterAPIInstall=false --feature-gates=ClusterAPIInstallAWS=true --feature-gates=ClusterAPIInstallAzure=false --feature-gates=ClusterAPIInstallGCP=false --feature-gates=ClusterAPIInstallIBMCloud=false --feature-gates=ClusterAPIInstallNutanix=true --feature-gates=ClusterAPIInstallOpenStack=true --feature-gates=ClusterAPIInstallPowerVS=false --feature-gates=ClusterAPIInstallVSphere=true --feature-gates=DNSNameResolver=false --feature-gates=DisableKubeletCloudCredentialProviders=true --feature-gates=DynamicResourceAllocation=false --feature-gates=EtcdBackendQuota=false --feature-gates=EventedPLEG=false --feature-gates=Example=false 
--feature-gates=ExternalCloudProviderAzure=true --feature-gates=ExternalCloudProviderExternal=true --feature-gates=ExternalCloudProviderGCP=true --feature-gates=ExternalOIDC=false --feature-gates=ExternalRouteCertificate=false --feature-gates=GCPClusterHostedDNS=false --feature-gates=GCPLabelsTags=false --feature-gates=GatewayAPI=false --feature-gates=HardwareSpeed=true --feature-gates=ImagePolicy=false --feature-gates=InsightsConfig=false --feature-gates=InsightsConfigAPI=false --feature-gates=InsightsOnDemandDataGather=false --feature-gates=InstallAlternateInfrastructureAWS=false --feature-gates=KMSv1=true --feature-gates=MachineAPIOperatorDisableMachineHealthCheckController=false --feature-gates=MachineAPIProviderOpenStack=false --feature-gates=MachineConfigNodes=false --feature-gates=ManagedBootImages=false --feature-gates=MaxUnavailableStatefulSet=false --feature-gates=MetricsCollectionProfiles=false --feature-gates=MetricsServer=true --feature-gates=MixedCPUsAllocation=false --feature-gates=NetworkDiagnosticsConfig=true --feature-gates=NetworkLiveMigration=true --feature-gates=NewOLM=false --feature-gates=NodeDisruptionPolicy=false --feature-gates=NodeSwap=false --feature-gates=OnClusterBuild=false --feature-gates=OpenShiftPodSecurityAdmission=false --feature-gates=PinnedImages=false --feature-gates=PlatformOperators=false --feature-gates=PrivateHostedZoneAWS=true --feature-gates=RouteExternalCertificate=false --feature-gates=ServiceAccountTokenNodeBinding=false --feature-gates=ServiceAccountTokenNodeBindingValidation=false --feature-gates=ServiceAccountTokenPodNodeInfo=false --feature-gates=SignatureStores=false --feature-gates=SigstoreImageVerification=false --feature-gates=TranslateStreamCloseWebsocketRequests=false --feature-gates=UpgradeStatus=false --feature-gates=VSphereControlPlaneMachineSet=true --feature-gates=VSphereDriverConfiguration=true --feature-gates=VSphereMultiVCenters=false --feature-gates=VSphereStaticIPs=true 
--feature-gates=ValidatingAdmissionPolicy=false --feature-gates=VolumeGroupSnapshot=false --flex-volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --kube-api-burst=300 --kube-api-qps=150 --leader-elect-renew-deadline=12s --leader-elect-resource-lock=leases --leader-elect-retry-period=3s --leader-elect=true --pv-recycler-pod-template-filepath-hostpath=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/static-pod-resources/configmaps/recycler-config/recycler-pod.yaml --root-ca-file=/etc/kubernetes/static-pod-resources/configmaps/serviceaccount-ca/ca-bundle.crt --secure-port=10257 --service-account-private-key-file=/etc/kubernetes/static-pod-resources/secrets/service-account-private-key/service-account.key --service-cluster-ip-range=10.217.4.0/23 --use-service-account-credentials=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 
--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10257}],"resources":{"requests":{"cpu":"60m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10257,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"cluster-policy-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791","command":["/bin/bash","-euxo","pipefail","-c"] 2025-08-13T20:08:00.298947980+00:00 stderr F ,"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 10357 \\))\" ]; do sleep 1; done'\n\nexec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml \\\n --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig \\\n --namespace=${POD_NAMESPACE} 
-v=2"],"ports":[{"containerPort":10357}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"200Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"startupProbe":{"httpGet":{"path":"healthz","port":10357,"scheme":"HTTPS"},"timeoutSeconds":3},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","command":["cluster-kube-controller-manager-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-controller-manager-recovery-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f","comma
nd":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 9443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-controller-manager-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-controller-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:9443 -v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/6.log
2025-08-13T20:04:44.888728027+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:04:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:04:44.893972447+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:04:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:04:54.878027590+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:04:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:04:54.878081642+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:04:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:04.898170868+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:04.898170868+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:14.877953171+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:14.878994381+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:24.878938698+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:24.901763522+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:34.880028638+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:34.888279825+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:44.877050427+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:44.877050427+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:05:54.877406116+00:00 stderr F ::ffff:10.217.0.2 
2025-08-13T20:05:54.877406116+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:05:54] "GET / HTTP/1.1" 200 -
[... approximately 370 identical "GET / HTTP/1.1" 200 health-check entries from ::ffff:10.217.0.2, logged in pairs roughly every 10 seconds between 20:05:54 and 20:36:24, omitted ...]
2025-08-13T20:36:24.875094478+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:24] "GET / HTTP/1.1" 200 -
2025-08-13T20:36:34.874493960+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:34.874741507+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:44.874764953+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:44.875077672+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:54.874251540+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:36:54.874731414+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:36:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:04.873286331+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:04.874243149+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:14.875478292+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:14.875529104+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:24.875402662+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:24.875873855+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:34.873890123+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:34.874092298+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:44.876540775+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:44.876999478+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:54.875221698+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:37:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:37:54.875293580+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:37:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:04.876490367+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:04.877680001+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:14.874964371+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:14.875125636+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:24.874243342+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:24.875336493+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:34.876535549+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:34.876732334+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:44.874252855+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:44.876123839+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:54.873657386+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:38:54.874943103+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:38:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:04.875230903+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:04.875357456+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:14.874139573+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:14.875194144+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:24.876877913+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:24] "GET / HTTP/1.1" 200 - 
2025-08-13T20:39:24.878520320+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:34.876636135+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:34.881854865+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:44.876229275+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:44.876543124+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:54.878921304+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:39:54.879038727+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:39:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:04.875610138+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:04.876332729+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:14.874282691+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:14.874338962+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:24.875280032+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:24.875704484+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:34.873937883+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:34.874442447+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:44.875692694+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:44.876143877+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:54.876345164+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Aug/2025 20:40:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:40:54.876605542+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:40:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:04.875234483+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:04.875344667+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:14.874629855+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:14.874690807+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:24.875425519+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:24.876146620+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:34.879277603+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:34.879277603+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:44.874017071+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:44.874453514+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:44] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:54.876080042+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:41:54.876712460+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:41:54] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:04.878643778+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:04.880964525+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:04] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:14.885862486+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:14] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:14.885862486+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:14] "GET / HTTP/1.1" 200 - 
2025-08-13T20:42:24.877525607+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:24.878525616+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:24] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:34.873933626+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:34.875990815+00:00 stderr F ::ffff:10.217.0.2 - - [13/Aug/2025 20:42:34] "GET / HTTP/1.1" 200 - 2025-08-13T20:42:39.843300253+00:00 stdout F serving from /tmp/tmp2jv0ugaq ././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/7.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_down0000644000175000017500000002640615117130646033157 0ustar zuulzuul2025-12-13T00:13:30.755210615+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:13:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:13:30.755464355+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:13:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:13:40.737096610+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:13:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:13:40.737186943+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:13:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:13:50.737346349+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:13:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:13:50.737406801+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:13:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:14:00.737033201+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:14:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:14:00.737141205+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:14:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:14:10.738193880+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:14:10] "GET / HTTP/1.1" 200 - 
2025-12-13T00:14:10.738328894+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:14:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:14:20.737756284+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:14:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:14:20.737837208+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:14:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:14:30.744402180+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:14:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:14:30.744542975+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:14:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:14:40.737682635+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:14:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:14:40.737752867+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:14:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:14:50.736377904+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:14:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:14:50.737315504+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:14:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:15:00.736845280+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:15:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:15:00.736893672+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:15:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:15:10.738424903+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:15:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:15:10.738586737+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:15:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:15:20.738187408+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:15:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:15:20.738759295+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:15:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:15:30.737403235+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:15:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:15:30.737403235+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:15:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:15:40.736382094+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Dec/2025 00:15:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:15:40.736432366+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:15:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:15:50.737730150+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:15:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:15:50.737890025+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:15:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:16:00.736983991+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:16:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:16:00.737154296+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:16:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:16:10.738238178+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:16:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:16:10.738316210+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:16:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:16:20.736895968+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:16:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:16:20.736965820+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:16:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:16:30.737974874+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:16:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:16:30.738107598+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:16:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:16:40.737100696+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:16:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:16:40.737179688+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:16:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:16:50.737067961+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:16:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:16:50.737140923+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:16:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:17:00.737089100+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:17:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:17:00.737155661+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:17:00] "GET / HTTP/1.1" 200 - 
2025-12-13T00:17:10.737500997+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:17:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:17:10.737552070+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:17:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:17:20.738678732+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:17:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:17:20.738764504+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:17:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:17:30.737099293+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:17:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:17:30.738183221+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:17:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:17:40.737878548+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:17:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:17:40.737977721+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:17:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:17:50.738534699+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:17:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:17:50.738641772+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:17:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:18:00.736971880+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:18:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:18:00.737107804+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:18:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:18:10.738103663+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:18:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:18:10.738278697+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:18:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:18:20.737475550+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:18:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:18:20.737531001+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:18:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:18:30.737023779+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:18:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:18:30.737145972+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Dec/2025 00:18:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:18:40.737629792+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:18:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:18:40.737629792+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:18:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:18:50.737835870+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:18:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:18:50.737998464+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:18:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:19:00.745578652+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:19:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:19:00.745986983+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:19:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:19:10.737430793+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:19:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:19:10.738250196+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:19:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:19:20.737343157+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:19:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:19:20.737801280+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:19:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:19:30.737978709+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:19:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:19:30.739023238+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:19:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:19:40.737551762+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:19:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:19:40.737551762+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:19:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:19:50.736874340+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:19:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:19:50.736980203+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:19:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:20:00.736800237+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:20:00] "GET / HTTP/1.1" 200 - 
2025-12-13T00:20:00.736855838+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:20:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:20:10.737055379+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:20:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:20:10.737103571+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:20:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:20:20.737442407+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:20:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:20:20.737508958+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:20:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:20:30.737306009+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:20:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:20:30.737368450+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:20:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:20:40.737407992+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:20:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:20:40.737486004+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:20:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:20:50.736783884+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:20:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:20:50.737367630+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:20:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:21:00.737010767+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:21:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:21:00.737091439+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:21:00] "GET / HTTP/1.1" 200 - 2025-12-13T00:21:10.736980503+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:21:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:21:10.736980503+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:21:10] "GET / HTTP/1.1" 200 - 2025-12-13T00:21:20.737235747+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:21:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:21:20.737300809+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:21:20] "GET / HTTP/1.1" 200 - 2025-12-13T00:21:30.737372549+00:00 stderr F ::ffff:10.217.0.2 - - 
[13/Dec/2025 00:21:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:21:30.737964345+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:21:30] "GET / HTTP/1.1" 200 - 2025-12-13T00:21:40.736784461+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:21:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:21:40.736845073+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:21:40] "GET / HTTP/1.1" 200 - 2025-12-13T00:21:50.737229173+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:21:50] "GET / HTTP/1.1" 200 - 2025-12-13T00:21:50.737296354+00:00 stderr F ::ffff:10.217.0.2 - - [13/Dec/2025 00:21:50] "GET / HTTP/1.1" 200 - ././@LongLink0000644000000000000000000000026600000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server/5.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_down0000644000175000017500000000011315117130646033142 0ustar zuulzuul2025-08-13T20:04:14.907116744+00:00 stdout F serving from /tmp/tmpsrrtesg5 ././@LongLink0000644000000000000000000000027700000000000011611 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000755000175000017500000000000015117130646033136 5ustar zuulzuul././@LongLink0000644000000000000000000000032700000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000755000175000017500000000000015117130654033135 5ustar 
zuulzuul././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000644000175000017500000167740715117130646033166 0ustar zuulzuul2025-08-13T20:00:57.601024272+00:00 stdout F Copying system trust bundle 2025-08-13T20:00:59.150240997+00:00 stderr F W0813 20:00:59.149367 1 cmd.go:154] Unable to read initial content of "/tmp/terminate": open /tmp/terminate: no such file or directory 2025-08-13T20:00:59.151697528+00:00 stderr F I0813 20:00:59.151463 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:59.152227024+00:00 stderr F I0813 20:00:59.151494 1 cmd.go:240] Using service-serving-cert provided certificates 2025-08-13T20:00:59.152227024+00:00 stderr F I0813 20:00:59.152190 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 
2025-08-13T20:00:59.152914523+00:00 stderr F I0813 20:00:59.152864 1 observer_polling.go:159] Starting file observer 2025-08-13T20:01:06.958417626+00:00 stderr F I0813 20:01:06.957144 1 builder.go:298] cluster-authentication-operator version v4.16.0-202406131906.p0.gb415439.assembly.stream.el9-0-g11ca161- 2025-08-13T20:01:07.010158681+00:00 stderr F I0813 20:01:07.005014 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:18.831567916+00:00 stderr F I0813 20:01:18.825876 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:01:20.315622631+00:00 stderr F I0813 20:01:20.315483 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:01:20.315622631+00:00 stderr F I0813 20:01:20.315570 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:01:20.315747505+00:00 stderr F I0813 20:01:20.315695 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:01:20.315767645+00:00 stderr F I0813 20:01:20.315756 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:01:20.445957847+00:00 stderr F I0813 20:01:20.445641 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:20.445957847+00:00 stderr F W0813 20:01:20.445701 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:20.445957847+00:00 stderr F W0813 20:01:20.445725 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:01:20.446168933+00:00 stderr F I0813 20:01:20.446115 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.946501 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:01:20.946457579 +0000 UTC))" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.966538 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115270\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115269\" (2025-08-13 19:01:07 +0000 UTC to 2026-08-13 19:01:07 +0000 UTC (now=2025-08-13 20:01:20.96648868 +0000 UTC))" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.966572 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.966627 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.950537 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.952678 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.967330 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 
2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.952729 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.967928 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.952749 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.968432 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:01:20.970943257+00:00 stderr F I0813 20:01:20.970361 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T20:01:21.000124869+00:00 stderr F I0813 20:01:20.991088 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
2025-08-13T20:01:21.000124869+00:00 stderr F I0813 20:01:20.993434 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.058740 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:20.983738 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.064586 1 leaderelection.go:250] attempting to acquire leader lease openshift-authentication-operator/cluster-authentication-operator-lock...
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072051 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072104 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072211 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072385 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.072358079 +0000 UTC))"
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.072675 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:01:21.072659357 +0000 UTC))"
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077533 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115270\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115269\" (2025-08-13 19:01:07 +0000 UTC to 2026-08-13 19:01:07 +0000 UTC (now=2025-08-13 20:01:21.077465674 +0000 UTC))"
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077748 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:21.077726392 +0000 UTC))"
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077922 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:21.077760273 +0000 UTC))"
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077956 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:21.077934858 +0000 UTC))"
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.077978 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:21.077963238 +0000 UTC))"
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.078004 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.077988539 +0000 UTC))"
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.078026 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.07801058 +0000 UTC))"
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.078047 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.07803163 +0000 UTC))"
2025-08-13T20:01:21.103751574+00:00 stderr F I0813 20:01:21.078074 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.078052561 +0000 UTC))"
2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.114507 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:21.078080662 +0000 UTC))"
2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.114660 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:21.114568952 +0000 UTC))"
2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.114731 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.114714266 +0000 UTC))"
2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.115201 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:01:21.11517397 +0000 UTC))"
2025-08-13T20:01:21.119193004+00:00 stderr F I0813 20:01:21.115454 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115270\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115269\" (2025-08-13 19:01:07 +0000 UTC to 2026-08-13 19:01:07 +0000 UTC (now=2025-08-13 20:01:21.115438927 +0000 UTC))"
2025-08-13T20:01:21.634012614+00:00 stderr F I0813 20:01:21.633686 1 leaderelection.go:260] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock
2025-08-13T20:01:21.663876685+00:00 stderr F I0813 20:01:21.636694 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"09dcd617-77d7-4739-bfa0-d91f5ee3f9c6", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"30431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' authentication-operator-7cc7ff75d5-g9qv8_dfde8735-ce87-4a11-8bd2-f4c4c0ff21d4 became leader
2025-08-13T20:01:23.664939632+00:00 stderr F I0813 20:01:23.659023 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T20:01:23.684375367+00:00 stderr F W0813 20:01:23.679522 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:23.684375367+00:00 stderr F E0813 20:01:23.679651 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.680257 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.680766 1 reflector.go:351] Caches populated for *v1.OAuthClient from github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.680951 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681079 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681094 1 base_controller.go:67] Waiting for caches to sync for OAuthServerWorkloadController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681104 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681115 1 base_controller.go:67] Waiting for caches to sync for MetadataController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681127 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681137 1 base_controller.go:67] Waiting for caches to sync for PayloadConfig
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681147 1 base_controller.go:67] Waiting for caches to sync for RouterCertsDomainValidationController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681157 1 base_controller.go:67] Waiting for caches to sync for ServiceCAController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681414 1 base_controller.go:67] Waiting for caches to sync for OpenshiftAuthenticationStaticResources
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681433 1 base_controller.go:67] Waiting for caches to sync for WellKnownReadyController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681486 1 base_controller.go:67] Waiting for caches to sync for OAuthServerRouteEndpointAccessibleController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681500 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointAccessibleController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681515 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointsEndpointAccessibleController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681529 1 base_controller.go:67] Waiting for caches to sync for IngressNodesAvailableController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681542 1 base_controller.go:67] Waiting for caches to sync for ProxyConfigController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681555 1 base_controller.go:67] Waiting for caches to sync for CustomRouteController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681566 1 base_controller.go:67] Waiting for caches to sync for TrustDistributionController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681576 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681586 1 base_controller.go:67] Waiting for caches to sync for IngressStateController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681597 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAuthenticatorCertRequester
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681608 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681621 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681657 1 base_controller.go:67] Waiting for caches to sync for RevisionController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681673 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681686 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681700 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681713 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.681732 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682142 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682173 1 base_controller.go:67] Waiting for caches to sync for OAuthAPIServerControllerWorkloadController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682410 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_authentication
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682436 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-oauth-apiserver
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682449 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682467 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682483 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682506 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController
2025-08-13T20:01:23.684375367+00:00 stderr F I0813 20:01:23.682680 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController
2025-08-13T20:01:23.690124541+00:00 stderr F I0813 20:01:23.687756 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:01:23.692861649+00:00 stderr F I0813 20:01:23.691127 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.692861649+00:00 stderr F I0813 20:01:23.691418 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.694466604+00:00 stderr F I0813 20:01:23.693678 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.719885519+00:00 stderr F I0813 20:01:23.718499 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.728182826+00:00 stderr F I0813 20:01:23.727762 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.736134953+00:00 stderr F I0813 20:01:23.729167 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.736630577+00:00 stderr F I0813 20:01:23.736571 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:01:23.736904285+00:00 stderr F I0813 20:01:23.732416 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141
2025-08-13T20:01:23.759032195+00:00 stderr F I0813 20:01:23.755954 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.759032195+00:00 stderr F I0813 20:01:23.757448 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125
2025-08-13T20:01:23.761861286+00:00 stderr F I0813 20:01:23.759595 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125
2025-08-13T20:01:23.761861286+00:00 stderr F I0813 20:01:23.761255 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:01:23.779152519+00:00 stderr F I0813 20:01:23.779041 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T20:01:23.779292603+00:00 stderr F I0813 20:01:23.779245 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T20:01:23.781823315+00:00 stderr F I0813 20:01:23.781386 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.781877527+00:00 stderr F I0813 20:01:23.781832 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.782127914+00:00 stderr F I0813 20:01:23.782105 1 base_controller.go:73] Caches are synced for ManagementStateController
2025-08-13T20:01:23.782259668+00:00 stderr F I0813 20:01:23.782154 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ...
2025-08-13T20:01:23.782313839+00:00 stderr F I0813 20:01:23.782182 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-08-13T20:01:23.782469254+00:00 stderr F I0813 20:01:23.782380 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-08-13T20:01:23.782514255+00:00 stderr F I0813 20:01:23.782192 1 base_controller.go:73] Caches are synced for IngressStateController
2025-08-13T20:01:23.782621858+00:00 stderr F I0813 20:01:23.782587 1 base_controller.go:110] Starting #1 worker of IngressStateController controller ...
2025-08-13T20:01:23.782944457+00:00 stderr F I0813 20:01:23.782206 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator
2025-08-13T20:01:23.782944457+00:00 stderr F I0813 20:01:23.782859 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ...
2025-08-13T20:01:23.782944457+00:00 stderr F I0813 20:01:23.782254 1 base_controller.go:73] Caches are synced for APIServerStaticResources
2025-08-13T20:01:23.782944457+00:00 stderr F I0813 20:01:23.782883 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ...
2025-08-13T20:01:23.783595756+00:00 stderr F I0813 20:01:23.783568 1 base_controller.go:73] Caches are synced for StatusSyncer_authentication
2025-08-13T20:01:23.783641957+00:00 stderr F I0813 20:01:23.783627 1 base_controller.go:110] Starting #1 worker of StatusSyncer_authentication controller ...
2025-08-13T20:01:23.783698149+00:00 stderr F I0813 20:01:23.783683 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-08-13T20:01:23.783728940+00:00 stderr F I0813 20:01:23.783717 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-08-13T20:01:23.786337854+00:00 stderr F I0813 20:01:23.786310 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:37Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:01:23.799062797+00:00 stderr F I0813 20:01:23.795492 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:01:23.807875158+00:00 stderr F I0813 20:01:23.807598 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.809104123+00:00 stderr F I0813 20:01:23.807656 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.812135370+00:00 stderr F I0813 20:01:23.811597 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.813392686+00:00 stderr F I0813 20:01:23.813334 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.828136846+00:00 stderr F I0813 20:01:23.827440 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:01:23.838033328+00:00 stderr F I0813 20:01:23.831671 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:01:23.840524729+00:00 stderr F I0813 20:01:23.840457 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:01:23.845056688+00:00 stderr F I0813 20:01:23.840856 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:01:23.845056688+00:00 stderr F I0813 20:01:23.841557 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.871267616+00:00 stderr F I0813 20:01:23.871128 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:01:23.897636398+00:00 stderr F I0813 20:01:23.894465 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132
2025-08-13T20:01:23.921866379+00:00 stderr F I0813 20:01:23.916084 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.921866379+00:00 stderr F I0813 20:01:23.917222 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:23.971640038+00:00 stderr F I0813 20:01:23.965615 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"
2025-08-13T20:01:23.997192686+00:00 stderr F I0813 20:01:23.994463 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointAccessibleController
2025-08-13T20:01:23.997192686+00:00 stderr F I0813 20:01:23.994560 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointAccessibleController controller ...
2025-08-13T20:01:23.997192686+00:00 stderr F I0813 20:01:23.994599 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointsEndpointAccessibleController
2025-08-13T20:01:23.997192686+00:00 stderr F I0813 20:01:23.994605 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ...
2025-08-13T20:01:23.997192686+00:00 stderr F E0813 20:01:23.996014 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:24.006004578+00:00 stderr F E0813 20:01:24.002904 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:24.017182026+00:00 stderr F E0813 20:01:24.015431 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:24.040476041+00:00 stderr F E0813 20:01:24.036712 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:24.041134619+00:00 stderr F I0813 20:01:24.041104 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:24.084437394+00:00 stderr F E0813 20:01:24.084350 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:24.084739073+00:00 stderr F I0813 20:01:24.084581 1 base_controller.go:73] Caches are synced for ServiceCAController
2025-08-13T20:01:24.084739073+00:00 stderr F I0813 20:01:24.084621 1 base_controller.go:110] Starting #1 worker of ServiceCAController controller ...
2025-08-13T20:01:24.084739073+00:00 stderr F I0813 20:01:24.084653 1 base_controller.go:73] Caches are synced for OpenshiftAuthenticationStaticResources
2025-08-13T20:01:24.084739073+00:00 stderr F I0813 20:01:24.084661 1 base_controller.go:110] Starting #1 worker of OpenshiftAuthenticationStaticResources controller ...
2025-08-13T20:01:24.097243379+00:00 stderr F I0813 20:01:24.097197 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:24.176132239+00:00 stderr F E0813 20:01:24.170080 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:24.278356773+00:00 stderr F I0813 20:01:24.277114 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:24.337191441+00:00 stderr F E0813 20:01:24.336831 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:24.466957101+00:00 stderr F I0813 20:01:24.466645 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:24.658996427+00:00 stderr F E0813 20:01:24.657273 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:24.667258213+00:00 stderr F I0813 20:01:24.667094 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:24.863943911+00:00 stderr F I0813 20:01:24.863327 1 request.go:697] Waited for 1.190001802s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/default/configmaps?limit=500&resourceVersion=0
2025-08-13T20:01:24.867625346+00:00 stderr F I0813 20:01:24.866515 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:24.963424247+00:00 stderr F W0813 20:01:24.961338 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:24.963424247+00:00 stderr F E0813 20:01:24.961657 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:25.067905877+00:00 stderr F I0813 20:01:25.067464 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:25.266332485+00:00 stderr F I0813 20:01:25.266070 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:25.298593314+00:00 stderr F E0813 20:01:25.298511 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:25.707237687+00:00 stderr F I0813 20:01:25.706312 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:25.784144260+00:00 stderr F I0813 20:01:25.783887 1 base_controller.go:73] Caches are synced for TrustDistributionController
2025-08-13T20:01:25.786075505+00:00 stderr F I0813 20:01:25.785996 1 base_controller.go:110] Starting #1 worker of TrustDistributionController controller ...
2025-08-13T20:01:25.865314244+00:00 stderr F I0813 20:01:25.864253 1 request.go:697] Waited for 2.190413918s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/default/endpoints?limit=500&resourceVersion=0
2025-08-13T20:01:25.893330563+00:00 stderr F I0813 20:01:25.893261 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:25.950398310+00:00 stderr F I0813 20:01:25.949985 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:25.984226265+00:00 stderr F I0813 20:01:25.983342 1 base_controller.go:73] Caches are synced for ConfigObserver
2025-08-13T20:01:25.984226265+00:00 stderr F I0813 20:01:25.983382 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-08-13T20:01:25.984226265+00:00 stderr F I0813 20:01:25.983443 1 base_controller.go:73] Caches are synced for RouterCertsDomainValidationController
2025-08-13T20:01:25.984226265+00:00 stderr F I0813 20:01:25.983452 1 base_controller.go:110] Starting #1 worker of RouterCertsDomainValidationController controller ...
2025-08-13T20:01:26.069915768+00:00 stderr F I0813 20:01:26.069377 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:26.270666692+00:00 stderr F I0813 20:01:26.270583 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:26.284813315+00:00 stderr F I0813 20:01:26.284594 1 base_controller.go:73] Caches are synced for auditPolicyController
2025-08-13T20:01:26.285052591+00:00 stderr F I0813 20:01:26.284942 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ...
2025-08-13T20:01:26.287715797+00:00 stderr F I0813 20:01:26.285505 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
2025-08-13T20:01:26.470182960+00:00 stderr F I0813 20:01:26.469441 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:26.579690483+00:00 stderr F E0813 20:01:26.579588 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:26.659031375+00:00 stderr F W0813 20:01:26.658260 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:26.659031375+00:00 stderr F E0813 20:01:26.658424 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:26.669964707+00:00 stderr F I0813 20:01:26.668072 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:26.868958311+00:00 stderr F I0813 20:01:26.867414 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:26.882898748+00:00 stderr F I0813 20:01:26.882653 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver
2025-08-13T20:01:26.883909807+00:00 stderr F I0813 20:01:26.882904 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ...
2025-08-13T20:01:26.885030489+00:00 stderr F I0813 20:01:26.883574 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling
2025-08-13T20:01:27.066296428+00:00 stderr F I0813 20:01:27.064957 1 request.go:697] Waited for 3.390505706s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets?limit=500&resourceVersion=0
2025-08-13T20:01:27.075250893+00:00 stderr F I0813 20:01:27.073760 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.082391 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorController
2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.082547 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorController controller ...
2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.082610 1 base_controller.go:73] Caches are synced for OpenShiftAuthenticatorCertRequester
2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.082617 1 base_controller.go:110] Starting #1 worker of OpenShiftAuthenticatorCertRequester controller ...
2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.083898 1 base_controller.go:73] Caches are synced for RevisionController
2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.083918 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.084145 1 base_controller.go:73] Caches are synced for ConfigObserver
2025-08-13T20:01:27.084917449+00:00 stderr F I0813 20:01:27.084156 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-08-13T20:01:27.128292386+00:00 stderr F E0813 20:01:27.125655 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:27.128292386+00:00 stderr F I0813 20:01:27.126925 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:37Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable::OAuthServerServiceEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:01:27.269064480+00:00 stderr F I0813 20:01:27.268271 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282300 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController
2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282363 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ...
2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282563 1 base_controller.go:73] Caches are synced for OAuthAPIServerControllerWorkloadController
2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282573 1 base_controller.go:110] Starting #1 worker of OAuthAPIServerControllerWorkloadController controller ...
2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282861 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-oauth-apiserver
2025-08-13T20:01:27.283953614+00:00 stderr F I0813 20:01:27.282871 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ...
2025-08-13T20:01:27.484936365+00:00 stderr F I0813 20:01:27.481954 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:27.668396366+00:00 stderr F I0813 20:01:27.667422 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:27.686198444+00:00 stderr F I0813 20:01:27.685345 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-08-13T20:01:27.686198444+00:00 stderr F I0813 20:01:27.685392 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-08-13T20:01:27.868074850+00:00 stderr F I0813 20:01:27.868018 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:01:27.882122220+00:00 stderr F I0813 20:01:27.882055 1 base_controller.go:73] Caches are synced for EncryptionConditionController
2025-08-13T20:01:27.882207453+00:00 stderr F I0813 20:01:27.882190 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ...
2025-08-13T20:01:27.882269575+00:00 stderr F I0813 20:01:27.882162 1 base_controller.go:73] Caches are synced for IngressNodesAvailableController
2025-08-13T20:01:27.882308516+00:00 stderr F I0813 20:01:27.882296 1 base_controller.go:110] Starting #1 worker of IngressNodesAvailableController controller ...
2025-08-13T20:01:27.882734748+00:00 stderr F I0813 20:01:27.882680 1 base_controller.go:73] Caches are synced for EncryptionPruneController
2025-08-13T20:01:27.882734748+00:00 stderr F I0813 20:01:27.882709 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ...
2025-08-13T20:01:27.882734748+00:00 stderr F I0813 20:01:27.882727 1 base_controller.go:73] Caches are synced for EncryptionKeyController
2025-08-13T20:01:27.882755958+00:00 stderr F I0813 20:01:27.882732 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ...
2025-08-13T20:01:27.882755958+00:00 stderr F I0813 20:01:27.882748 1 base_controller.go:73] Caches are synced for EncryptionStateController
2025-08-13T20:01:27.882755958+00:00 stderr F I0813 20:01:27.882752 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ...
2025-08-13T20:01:27.882991775+00:00 stderr F I0813 20:01:27.882962 1 base_controller.go:73] Caches are synced for EncryptionMigrationController
2025-08-13T20:01:27.883093208+00:00 stderr F I0813 20:01:27.883071 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ...
2025-08-13T20:01:28.265288876+00:00 stderr F I0813 20:01:28.264249 1 request.go:697] Waited for 4.479476827s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver
2025-08-13T20:01:29.140591084+00:00 stderr F E0813 20:01:29.140530 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:29.276565671+00:00 stderr F I0813 20:01:29.264871 1 request.go:697] Waited for 2.978228822s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit
2025-08-13T20:01:29.382012428+00:00 stderr F I0813 20:01:29.380383 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused"
2025-08-13T20:01:32.212365833+00:00 stderr F W0813 20:01:32.211005 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:32.212365833+00:00 stderr F E0813 20:01:32.211528 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:35.315058375+00:00 stderr F E0813 20:01:35.309483 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:35.317909616+00:00 stderr F E0813 20:01:35.317744 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:35.318626706+00:00 stderr F I0813 20:01:35.318523 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:37Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable::OAuthServerServiceEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:01:35.330132065+00:00 stderr F E0813 20:01:35.320317 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:35.330132065+00:00 stderr F E0813 20:01:35.328652 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:35.330814334+00:00 stderr F E0813 20:01:35.330725 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:35.351273227+00:00 stderr F E0813 20:01:35.351143 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:35.434510491+00:00 stderr F E0813 20:01:35.434401 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:35.597888489+00:00 stderr F E0813 20:01:35.597740 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:35.921267900+00:00 stderr F E0813 20:01:35.921136 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:36.565020306+00:00 stderr F E0813 20:01:36.564724 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:37.852888508+00:00 stderr F E0813 20:01:37.850910 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:39.382332008+00:00 stderr F E0813 20:01:39.382228 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:40.230678778+00:00 stderr F I0813 20:01:40.230622 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"
2025-08-13T20:01:40.415179749+00:00 stderr F E0813 20:01:40.415094 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:42.978231701+00:00 stderr F I0813 20:01:42.977340 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/webhook-authentication-integrated-oauth -n openshift-config because it changed
2025-08-13T20:01:44.713770138+00:00 stderr F W0813 20:01:44.713667 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:44.713770138+00:00 stderr F E0813 20:01:44.713725 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:01:45.538965267+00:00 stderr F E0813 20:01:45.538896 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:53.763712966+00:00 stderr F E0813 20:01:53.763200 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:01:53.775870473+00:00 stderr F E0813 20:01:53.775696 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:01:55.783664842+00:00 stderr F E0813 20:01:55.783488 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:02:04.290913305+00:00 stderr F W0813 20:02:04.290126 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:02:04.290913305+00:00 stderr F E0813 20:02:04.290747 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T20:02:20.380357169+00:00 stderr F E0813 20:02:20.343460 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:02:23.764121197+00:00 stderr F E0813 20:02:23.762438 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:02:23.769153661+00:00 stderr F E0813 20:02:23.768893 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:02:26.933185062+00:00 stderr F E0813 20:02:26.932990 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:26.948695884+00:00 stderr F E0813 20:02:26.948445 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:26.989922850+00:00 stderr F E0813 20:02:26.989511 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.004154426+00:00 stderr F E0813 20:02:27.003469 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.022583992+00:00 stderr F E0813 20:02:27.022435 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.043768966+00:00 stderr F E0813 20:02:27.043682 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:27.329108505+00:00 stderr F E0813 20:02:27.328969 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:27.529163642+00:00 stderr F E0813 20:02:27.528948 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:27.928094343+00:00 stderr F E0813 20:02:27.928008 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:28.128399067+00:00 stderr F E0813 20:02:28.128107 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:28.327478386+00:00 stderr F E0813 20:02:28.327428 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:28.731456270+00:00 stderr F E0813 20:02:28.729698 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:28.930338964+00:00 stderr F E0813 20:02:28.930232 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:29.127728155+00:00 stderr F E0813 20:02:29.127603 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:29.642929832+00:00 stderr F E0813 20:02:29.641740 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.759355994+00:00 stderr F E0813 20:02:29.755106 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.273063908+00:00 stderr F E0813 20:02:30.272704 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.931073889+00:00 stderr F E0813 20:02:30.930430 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.031080811+00:00 stderr F E0813 20:02:31.031014 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:31.896913252+00:00 stderr F E0813 20:02:31.896737 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.901731269+00:00 stderr F E0813 20:02:31.901144 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.903485989+00:00 stderr F W0813 20:02:31.903367 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.903485989+00:00 stderr F E0813 20:02:31.903426 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.903617233+00:00 stderr F E0813 20:02:31.903577 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.917903270+00:00 stderr F W0813 20:02:31.917771 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.918132147+00:00 stderr F E0813 20:02:31.918076 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.922458890+00:00 stderr F E0813 20:02:31.922432 1 
base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.934531055+00:00 stderr F E0813 20:02:31.934350 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.936878452+00:00 stderr F E0813 20:02:31.936742 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:32.127017396+00:00 stderr F E0813 20:02:32.126971 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.255019067+00:00 stderr F E0813 20:02:32.254935 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.528008705+00:00 stderr F E0813 20:02:32.527930 1 base_controller.go:268] 
APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.653532296+00:00 stderr F E0813 20:02:32.653441 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.727059043+00:00 stderr F W0813 20:02:32.726958 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.727059043+00:00 stderr F E0813 20:02:32.727027 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.932082382+00:00 stderr F E0813 20:02:32.931836 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.127423844+00:00 stderr F E0813 20:02:33.127363 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.253611194+00:00 stderr F E0813 20:02:33.253178 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection 
refused 2025-08-13T20:02:33.328117050+00:00 stderr F E0813 20:02:33.327994 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.932450330+00:00 stderr F W0813 20:02:33.930969 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.932450330+00:00 stderr F E0813 20:02:33.931612 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.052659389+00:00 stderr F E0813 20:02:34.052584 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.136916972+00:00 stderr F E0813 20:02:34.132246 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.328227679+00:00 stderr F E0813 20:02:34.328114 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.534071921+00:00 stderr F E0813 20:02:34.533970 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:34.687623582+00:00 stderr F E0813 20:02:34.687573 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:34.736053963+00:00 stderr F E0813 20:02:34.735985 1 
base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:35.127705846+00:00 stderr F E0813 20:02:35.127553 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.328358600+00:00 stderr F W0813 20:02:35.328198 1 base_controller.go:232] Updating status of 
"WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.328358600+00:00 stderr F E0813 20:02:35.328266 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.453077748+00:00 stderr F E0813 20:02:35.453020 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.527173802+00:00 stderr F W0813 20:02:35.527113 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.527251724+00:00 stderr F E0813 20:02:35.527236 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.727884567+00:00 stderr F E0813 20:02:35.727750 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:35.927503792+00:00 stderr F W0813 20:02:35.927397 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:35.927503792+00:00 stderr F E0813 20:02:35.927461 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.327550184+00:00 stderr F W0813 20:02:36.327428 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.327550184+00:00 stderr F E0813 20:02:36.327499 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.528281810+00:00 stderr F W0813 20:02:36.528120 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.528281810+00:00 stderr F E0813 20:02:36.528206 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.727435112+00:00 stderr F E0813 20:02:36.727322 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" 
(string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:36.747421802+00:00 stderr F E0813 20:02:36.747322 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:36.927103428+00:00 stderr F W0813 20:02:36.926945 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:36.927103428+00:00 stderr F E0813 20:02:36.927013 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.052703641+00:00 stderr F E0813 20:02:37.052619 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.130507020+00:00 stderr F 
E0813 20:02:37.130384 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.328582881+00:00 stderr F E0813 20:02:37.328481 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.727091989+00:00 stderr F W0813 20:02:37.726976 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.727091989+00:00 stderr F E0813 20:02:37.727046 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:37.927703161+00:00 stderr F E0813 20:02:37.927482 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.127932173+00:00 stderr F E0813 20:02:38.127747 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:38.328096443+00:00 stderr F W0813 20:02:38.328010 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.328096443+00:00 stderr F E0813 20:02:38.328066 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.527090450+00:00 stderr F W0813 20:02:38.526951 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.527090450+00:00 stderr F E0813 20:02:38.527022 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.728004041+00:00 stderr F E0813 20:02:38.727762 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:38.928185062+00:00 stderr F E0813 20:02:38.928079 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): 
Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:39.128406164+00:00 stderr F W0813 20:02:39.128242 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.128406164+00:00 stderr F E0813 20:02:39.128350 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.175635611+00:00 stderr F E0813 20:02:39.175271 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:39.527495399+00:00 stderr F E0813 20:02:39.527376 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:39.727723831+00:00 stderr F E0813 20:02:39.727575 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.855182367+00:00 stderr F E0813 20:02:39.855037 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.928222110+00:00 stderr F W0813 20:02:39.928035 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:39.928222110+00:00 stderr F E0813 20:02:39.928115 1 
base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.330568398+00:00 stderr F W0813 20:02:40.330149 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.330568398+00:00 stderr F E0813 20:02:40.330514 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.530181012+00:00 stderr F E0813 20:02:40.530063 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:40.728542011+00:00 stderr F E0813 20:02:40.728404 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: 
connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:40.928131315+00:00 stderr F E0813 20:02:40.927956 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.127443081+00:00 stderr F E0813 20:02:41.127355 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.528347618+00:00 stderr F W0813 20:02:41.528216 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.528347618+00:00 stderr F E0813 20:02:41.528295 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get 
"https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:41.728027864+00:00 stderr F E0813 20:02:41.727880 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:41.927577757+00:00 stderr F E0813 20:02:41.927448 1 
base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.128612642+00:00 stderr F W0813 20:02:42.128336 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.128612642+00:00 stderr F E0813 20:02:42.128407 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.327284090+00:00 stderr F E0813 20:02:42.327148 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.528119720+00:00 stderr F E0813 20:02:42.527951 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.728413734+00:00 stderr F E0813 20:02:42.728286 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:42.927731520+00:00 stderr F W0813 20:02:42.927610 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:42.927731520+00:00 stderr F E0813 20:02:42.927699 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.129365182+00:00 stderr F E0813 20:02:43.129188 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.527767438+00:00 stderr F W0813 20:02:43.527659 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:43.527767438+00:00 stderr F E0813 20:02:43.527742 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:43.727519107+00:00 stderr F E0813 20:02:43.727459 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.159466789+00:00 stderr F E0813 20:02:44.159362 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:44.372410054+00:00 stderr F E0813 20:02:44.372313 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.984436533+00:00 stderr F E0813 20:02:44.984282 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.300437367+00:00 stderr F E0813 20:02:45.300330 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:45.491012544+00:00 stderr F W0813 20:02:45.490924 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.491012544+00:00 stderr F E0813 20:02:45.490980 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.660187220+00:00 stderr F E0813 20:02:45.660074 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.093562493+00:00 stderr F W0813 20:02:46.093390 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:46.093562493+00:00 stderr F E0813 20:02:46.093459 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.296879373+00:00 stderr F E0813 20:02:46.296743 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.896875699+00:00 stderr F E0813 20:02:46.896731 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:48.229642569+00:00 stderr F E0813 20:02:48.229165 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.434714093+00:00 stderr F E0813 20:02:50.434581 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:50.614304156+00:00 stderr F W0813 20:02:50.614129 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.614304156+00:00 stderr F E0813 20:02:50.614192 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.774613249+00:00 stderr F E0813 20:02:50.774502 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.172401237+00:00 stderr F E0813 20:02:51.172180 1 
base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.229745813+00:00 stderr F W0813 20:02:51.229630 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.229745813+00:00 stderr F E0813 20:02:51.229699 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.156068209+00:00 stderr F W0813 20:02:52.155874 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:52.156068209+00:00 stderr F E0813 20:02:52.156054 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.765568843+00:00 stderr F E0813 20:02:53.765410 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:02:53.767631182+00:00 stderr F E0813 20:02:53.767560 1 
base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.769331031+00:00 stderr F E0813 20:02:53.769283 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.770287828+00:00 stderr F W0813 20:02:53.770226 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.770287828+00:00 stderr F E0813 20:02:53.770276 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.770340499+00:00 stderr F E0813 20:02:53.770318 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.770487224+00:00 stderr F E0813 20:02:53.770415 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:02:53.773539391+00:00 stderr F E0813 20:02:53.773464 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.778548844+00:00 stderr F E0813 20:02:53.778440 1 
base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:53.785736249+00:00 stderr F E0813 20:02:53.785659 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:53.811054111+00:00 stderr F E0813 20:02:53.810964 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:53.967387341+00:00 stderr F E0813 20:02:53.967258 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:54.167566501+00:00 stderr F E0813 20:02:54.167444 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:54.367052942+00:00 stderr F E0813 20:02:54.366960 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:54.567292725+00:00 stderr F E0813 20:02:54.567165 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:54.766946050+00:00 stderr F E0813 20:02:54.766873 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:55.089055399+00:00 stderr F E0813 20:02:55.088918 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:55.227706785+00:00 stderr F E0813 20:02:55.227612 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:55.731500816+00:00 stderr F E0813 20:02:55.731367 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:56.297721488+00:00 stderr F E0813 20:02:56.297603 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:56.897877649+00:00 stderr F E0813 20:02:56.897404 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:57.014861806+00:00 stderr F E0813 20:02:57.014663 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:58.477131781+00:00 stderr F E0813 20:02:58.474671 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:59.579520009+00:00 stderr F E0813 20:02:59.579401 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:00.687168456+00:00 stderr F E0813 20:03:00.687070 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:00.857913247+00:00 stderr F W0813 20:03:00.857746 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:00.857913247+00:00 stderr F E0813 20:03:00.857868 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:01.476541395+00:00 stderr F W0813 20:03:01.476415 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:01.476594157+00:00 stderr F E0813 20:03:01.476530 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:02.493944529+00:00 stderr F E0813 20:03:02.493680 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:02.506707073+00:00 stderr F E0813 20:03:02.506415 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:02.522068951+00:00 stderr F E0813 20:03:02.521926 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:02.552519840+00:00 stderr F E0813 20:03:02.552368 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:02.597944556+00:00 stderr F E0813 20:03:02.597678 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:02.685393231+00:00 stderr F E0813 20:03:02.685317 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:02.850047908+00:00 stderr F E0813 20:03:02.849866 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:03.174533113+00:00 stderr F E0813 20:03:03.174424 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:03.818949566+00:00 stderr F E0813 20:03:03.818761 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:04.701402750+00:00 stderr F E0813 20:03:04.701197 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:05.102472402+00:00 stderr F E0813 20:03:05.102371 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:06.297269146+00:00 stderr F E0813 20:03:06.297219 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:06.899501755+00:00 stderr F E0813 20:03:06.899331 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:07.674637578+00:00 stderr F E0813 20:03:07.674518 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:11.657603490+00:00 stderr F E0813 20:03:11.656881 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:12.014409079+00:00 stderr F E0813 20:03:12.014274 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:12.021263144+00:00 stderr F E0813 20:03:12.021168 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:12.033699899+00:00 stderr F E0813 20:03:12.033630 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:12.055958634+00:00 stderr F E0813 20:03:12.055824 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:12.102658386+00:00 stderr F E0813 20:03:12.102579 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:12.166654942+00:00 stderr F E0813 20:03:12.166523 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:03:12.186324323+00:00 stderr F E0813 20:03:12.186242 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:12.348351775+00:00 stderr F E0813 20:03:12.348236 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:12.670494515+00:00 stderr F E0813 20:03:12.670386 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:12.806152205+00:00 stderr F E0813 20:03:12.806023 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:13.313283152+00:00 stderr F E0813 20:03:13.313092 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:14.596306612+00:00 stderr F E0813 20:03:14.596171 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:14.944364571+00:00 stderr F E0813 20:03:14.944263 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:15.711166666+00:00 stderr F E0813 20:03:15.711008 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:16.299980244+00:00 stderr F E0813 20:03:16.299884 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:16.899382965+00:00 stderr F E0813 20:03:16.899257 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:17.159255479+00:00 stderr F E0813 20:03:17.159063 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.488493338+00:00 stderr F W0813 20:03:18.488412 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.488601241+00:00 stderr F E0813 20:03:18.488582 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.618289821+00:00 stderr F E0813 20:03:18.618122 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.625185267+00:00 stderr F E0813 20:03:18.625110 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.637349034+00:00 stderr F E0813 20:03:18.637274 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.658940010+00:00 stderr F E0813 20:03:18.658767 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.701117413+00:00 stderr F E0813 20:03:18.701022 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.784162483+00:00 stderr F E0813 20:03:18.784045 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.947229385+00:00 stderr F E0813 20:03:18.947181 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:19.269028345+00:00 stderr F E0813 20:03:19.268920 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:19.911894545+00:00 stderr F E0813 20:03:19.911560 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:21.194807962+00:00 stderr F E0813 20:03:21.194399 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:21.342962569+00:00 stderr F W0813 20:03:21.342902 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:21.343047731+00:00 stderr F E0813 20:03:21.343032 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:22.282460580+00:00 stderr F E0813 20:03:22.282076 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:23.052871018+00:00 stderr F E0813 20:03:23.052646 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:23.760202456+00:00 stderr F E0813 20:03:23.759999 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:23.769813371+00:00 stderr F E0813 20:03:23.769669 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:03:23.779965870+00:00 stderr F E0813 20:03:23.779894 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:03:23.781059171+00:00 stderr F E0813 20:03:23.781003 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:23.784121259+00:00 stderr F E0813 20:03:23.784096 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:23.785240851+00:00 stderr F E0813 20:03:23.785206 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:23.788247946+00:00 stderr F W0813 20:03:23.788197 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:23.788319458+00:00 stderr F E0813 20:03:23.788302 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:23.797432929+00:00 stderr F E0813 20:03:23.797349 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:23.806918239+00:00 stderr F E0813 20:03:23.806727 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:24.163956525+00:00 stderr F E0813 20:03:24.163896 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:24.494039410+00:00 stderr F E0813 20:03:24.493690 1 leaderelection.go:332] error retrieving resource lock openshift-authentication-operator/cluster-authentication-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.764908027+00:00 stderr F E0813 20:03:24.764638 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:26.299993099+00:00 stderr F E0813 20:03:26.299749 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:26.899464241+00:00 stderr F E0813 20:03:26.899334 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:27.286445800+00:00 stderr F E0813 20:03:27.286342 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:28.890913401+00:00 stderr F E0813 20:03:28.890567 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:31.740079510+00:00 stderr F E0813 20:03:31.739954 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:32.525712363+00:00 stderr F E0813 20:03:32.525538 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:35.426605317+00:00 stderr F E0813 20:03:35.426247 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:35.645656476+00:00 stderr F E0813 20:03:35.645505 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:36.299152909+00:00 stderr F E0813 20:03:36.299090 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:36.901969136+00:00 stderr F E0813 20:03:36.901541 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:39.133112993+00:00 stderr F E0813 20:03:39.133000 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:41.659148194+00:00 stderr F E0813 20:03:41.658446 1 
base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:42.442147039+00:00 stderr F W0813 20:03:42.442027 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.442147039+00:00 stderr F E0813 20:03:42.442093 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.483134259+00:00 stderr F W0813 20:03:42.482557 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:42.483134259+00:00 stderr F E0813 20:03:42.482661 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:43.536431116+00:00 stderr F E0813 20:03:43.536332 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:46.300290470+00:00 stderr F E0813 20:03:46.299877 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:46.908462350+00:00 stderr F E0813 20:03:46.905407 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.394999873+00:00 stderr F W0813 20:03:49.393645 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get 
"https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:49.394999873+00:00 stderr F E0813 20:03:49.393889 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.025811794+00:00 stderr F E0813 20:03:53.023946 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.776505229+00:00 stderr F E0813 20:03:53.772389 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.776505229+00:00 stderr F E0813 20:03:53.773188 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:03:53.789837509+00:00 stderr F E0813 20:03:53.779145 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.789837509+00:00 stderr F E0813 20:03:53.786216 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: 
connection refused 2025-08-13T20:03:53.789837509+00:00 stderr F E0813 20:03:53.786304 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.795836431+00:00 stderr F W0813 20:03:53.795634 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.795836431+00:00 stderr F E0813 20:03:53.795691 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:53.828464271+00:00 stderr F E0813 20:03:53.816611 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:54.151538628+00:00 stderr F E0813 20:03:54.150645 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:03:54.345151752+00:00 stderr F E0813 20:03:54.344983 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:03:56.302523861+00:00 stderr F E0813 20:03:56.300941 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:56.673452972+00:00 stderr F E0813 20:03:56.673295 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:56.903368320+00:00 stderr F E0813 20:03:56.903319 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:59.620550084+00:00 stderr F E0813 20:03:59.618633 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.312954040+00:00 stderr F W0813 20:04:02.309330 1 base_controller.go:232] Updating status 
of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:02.312954040+00:00 stderr F E0813 20:04:02.309618 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.302755927+00:00 stderr F E0813 20:04:06.301888 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:06.903333330+00:00 stderr F E0813 20:04:06.903206 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:16.305217907+00:00 stderr F E0813 20:04:16.304262 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:16.913825279+00:00 stderr F E0813 20:04:16.913395 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.850512924+00:00 stderr F W0813 20:04:19.850001 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get 
"https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.850605137+00:00 stderr F E0813 20:04:19.850588 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.413072252+00:00 stderr F E0813 20:04:20.412450 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.769465263+00:00 stderr F E0813 20:04:23.769303 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.770306377+00:00 stderr F E0813 20:04:23.770234 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:04:23.771539322+00:00 stderr F E0813 20:04:23.771375 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused 2025-08-13T20:04:23.773421256+00:00 stderr F E0813 20:04:23.773370 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:04:23.776546526+00:00 stderr F E0813 20:04:23.776490 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.783513345+00:00 stderr F W0813 20:04:23.783400 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.783513345+00:00 stderr F E0813 20:04:23.783473 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.798514465+00:00 stderr F E0813 20:04:23.798408 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:23.822333627+00:00 stderr F E0813 20:04:23.822215 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:23.972993131+00:00 stderr F E0813 20:04:23.972892 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:24.508599069+00:00 stderr F E0813 20:04:24.496475 1 leaderelection.go:332] error retrieving resource lock openshift-authentication-operator/cluster-authentication-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:24.579112658+00:00 stderr F E0813 20:04:24.579001 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:24.771043864+00:00 stderr F E0813 20:04:24.770921 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:26.307646646+00:00 stderr F E0813 20:04:26.307192 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:26.906563348+00:00 stderr F E0813 20:04:26.906472 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:27.287944838+00:00 stderr F E0813 20:04:27.287721 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:33.582276875+00:00 stderr F E0813 20:04:33.581698 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:33.987219352+00:00 stderr F E0813 20:04:33.987048 1 base_controller.go:268] 
TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:36.312691544+00:00 stderr F E0813 20:04:36.312200 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:36.917266547+00:00 stderr F E0813 20:04:36.916570 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:40.583078430+00:00 stderr F E0813 20:04:40.582122 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:41.022061001+00:00 stderr F E0813 20:04:41.021936 1 base_controller.go:268] ServiceCAController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:46.326883580+00:00 stderr F E0813 20:04:46.323295 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:46.909530754+00:00 stderr F E0813 20:04:46.909405 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:50.181715267+00:00 stderr F W0813 20:04:50.181102 1 base_controller.go:232] Updating status of "WebhookAuthenticatorController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:50.181883722+00:00 stderr F E0813 20:04:50.181729 1 base_controller.go:268] WebhookAuthenticatorController reconciliation failed: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:53.767715196+00:00 stderr F E0813 20:04:53.767349 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:04:53.770173056+00:00 stderr F E0813 20:04:53.770124 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:04:53.770377622+00:00 stderr F E0813 20:04:53.770329 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:53.771906916+00:00 stderr F E0813 20:04:53.771834 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:53.772924025+00:00 stderr F E0813 20:04:53.772878 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:53.774461939+00:00 stderr F W0813 20:04:53.774407 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:53.774461939+00:00 stderr F E0813 20:04:53.774452 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:53.785646849+00:00 stderr F E0813 20:04:53.785579 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:04:53.866119903+00:00 stderr F E0813 20:04:53.866006 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:04:56.306290470+00:00 stderr F E0813 20:04:56.306165 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:56.910579185+00:00 stderr F E0813 20:04:56.910461 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:57.350171663+00:00 stderr F E0813 20:04:57.349594 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:58.237879773+00:00 stderr F E0813 20:04:58.237672 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:00.340124593+00:00 stderr F E0813 20:05:00.338045 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:05:04.189001221+00:00 stderr F E0813 20:05:04.184949 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:05:06.315115545+00:00 stderr F E0813 20:05:06.313999 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:06.915299503+00:00 stderr F E0813 20:05:06.915171 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:09.568591731+00:00 stderr F E0813 20:05:09.568101 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:05:10.970612210+00:00 stderr F W0813 20:05:10.970159 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:10.970612210+00:00 stderr F E0813 20:05:10.970545 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:13.451698458+00:00 stderr F E0813 20:05:13.451599 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:14.913312373+00:00 stderr F W0813 20:05:14.908382 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:14.913312373+00:00 stderr F E0813 20:05:14.913234 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: Get "https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes?fieldSelector=metadata.name%3Doauth-openshift&limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:16.310306737+00:00 stderr F E0813 20:05:16.310246 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:16.912380388+00:00 stderr F E0813 20:05:16.911955 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:20.615698997+00:00 stderr F E0813 20:05:20.615061 1 base_controller.go:268] OAuthServerServiceEndpointAccessibleController reconciliation failed: Get "https://10.217.4.40:443/healthz": dial tcp 10.217.4.40:443: connect: connection refused
2025-08-13T20:05:23.777205299+00:00 stderr F E0813 20:05:23.776533 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:05:23.982701903+00:00 stderr F I0813 20:05:23.982325 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31176
2025-08-13T20:05:53.780925638+00:00 stderr F E0813 20:05:53.773995 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:05:53.846642450+00:00 stderr F I0813 20:05:53.846554 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31177
2025-08-13T20:05:53.884143743+00:00 stderr F I0813 20:05:53.884055 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31177
2025-08-13T20:05:57.150880939+00:00 stderr F I0813 20:05:57.150016 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125
2025-08-13T20:05:57.184283916+00:00 stderr F I0813 20:05:57.184150 1 base_controller.go:73] Caches are synced for OAuthServerWorkloadController
2025-08-13T20:05:57.184283916+00:00 stderr F I0813 20:05:57.184200 1 base_controller.go:110] Starting #1 worker of OAuthServerWorkloadController controller ...
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185208 1 base_controller.go:73] Caches are synced for MetadataController
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185241 1 base_controller.go:110] Starting #1 worker of MetadataController controller ...
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185271 1 base_controller.go:73] Caches are synced for PayloadConfig
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185277 1 base_controller.go:110] Starting #1 worker of PayloadConfig controller ...
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185295 1 base_controller.go:73] Caches are synced for WellKnownReadyController
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185301 1 base_controller.go:110] Starting #1 worker of WellKnownReadyController controller ...
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185320 1 base_controller.go:73] Caches are synced for OAuthServerRouteEndpointAccessibleController
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185326 1 base_controller.go:110] Starting #1 worker of OAuthServerRouteEndpointAccessibleController controller ...
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185342 1 base_controller.go:73] Caches are synced for ProxyConfigController
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185348 1 base_controller.go:110] Starting #1 worker of ProxyConfigController controller ...
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185365 1 base_controller.go:73] Caches are synced for CustomRouteController
2025-08-13T20:05:57.185454769+00:00 stderr F I0813 20:05:57.185371 1 base_controller.go:110] Starting #1 worker of CustomRouteController controller ...
2025-08-13T20:05:57.191000838+00:00 stderr F I0813 20:05:57.190088 1 base_controller.go:73] Caches are synced for OAuthClientsController
2025-08-13T20:05:57.191000838+00:00 stderr F I0813 20:05:57.190136 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ...
2025-08-13T20:05:57.259119429+00:00 stderr F I0813 20:05:57.258912 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31177
2025-08-13T20:05:57.291520217+00:00 stderr F I0813 20:05:57.291376 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31177
2025-08-13T20:05:57.467956799+00:00 stderr F I0813 20:05:57.458971 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31898
2025-08-13T20:05:57.602885093+00:00 stderr F I0813 20:05:57.602057 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31898
2025-08-13T20:05:58.111040365+00:00 stderr F I0813 20:05:58.110278 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:05:58.117903471+00:00 stderr F E0813 20:05:58.115503 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:05:58.117903471+00:00 stderr F E0813 20:05:58.116373 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:05:58.117903471+00:00 stderr F E0813 20:05:58.116996 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready
2025-08-13T20:05:58.593472469+00:00 stderr F I0813 20:05:58.593120 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31941
2025-08-13T20:05:59.392556071+00:00 stderr F I0813 20:05:59.392139 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31941
2025-08-13T20:05:59.994268782+00:00 stderr F I0813 20:05:59.994170 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31955
2025-08-13T20:06:00.393736241+00:00 stderr F I0813 20:06:00.393620 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31955
2025-08-13T20:06:00.951252697+00:00 stderr F I0813 20:06:00.951148 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:01.398687769+00:00 stderr F I0813 20:06:01.398630 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31964
2025-08-13T20:06:01.592664314+00:00 stderr F I0813 20:06:01.592543 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31964
2025-08-13T20:06:01.994168851+00:00 stderr F I0813 20:06:01.994108 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31964
2025-08-13T20:06:02.218265058+00:00 stderr F E0813 20:06:02.218209 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:06:02.379615469+00:00 stderr F I0813 20:06:02.379483 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:02.798964217+00:00 stderr F E0813 20:06:02.798852 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:06:03.280820726+00:00 stderr F I0813 20:06:03.280603 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:03.395710406+00:00 stderr F I0813 20:06:03.395587 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969
2025-08-13T20:06:03.590517874+00:00 stderr F I0813 20:06:03.590118 1 request.go:697] Waited for 1.06685142s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status
2025-08-13T20:06:03.793120646+00:00 stderr F I0813 20:06:03.792978 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969
2025-08-13T20:06:04.195684024+00:00 stderr F I0813 20:06:04.195558 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969
2025-08-13T20:06:04.585346633+00:00 stderr F I0813 20:06:04.585242 1 request.go:697] Waited for 1.188418642s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status
2025-08-13T20:06:04.808918075+00:00 stderr F I0813 20:06:04.807539 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969
2025-08-13T20:06:05.201243302+00:00 stderr F I0813 20:06:05.201114 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969
2025-08-13T20:06:05.594123029+00:00 stderr F I0813 20:06:05.593970 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31969
2025-08-13T20:06:05.796711801+00:00 stderr F I0813 20:06:05.796568 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:06.395088896+00:00 stderr F I0813 20:06:06.394551 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31983
2025-08-13T20:06:06.395088896+00:00 stderr F E0813 20:06:06.394938 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:06:06.601353492+00:00 stderr F I0813 20:06:06.601259 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31983
2025-08-13T20:06:06.793587567+00:00 stderr F I0813 20:06:06.793457 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31983
2025-08-13T20:06:07.595002947+00:00 stderr F I0813 20:06:07.594288 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:08.200963119+00:00 stderr F I0813 20:06:08.199025 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:08.394736918+00:00 stderr F I0813 20:06:08.394611 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:08.395164010+00:00 stderr F E0813 20:06:08.395101 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:06:08.797241104+00:00 stderr F I0813 20:06:08.797136 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:08.802477464+00:00 stderr F I0813 20:06:08.802395 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
2025-08-13T20:06:09.161467163+00:00 stderr F I0813 20:06:09.160116 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:06:09.214572484+00:00 stderr F I0813 20:06:09.213712 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:09.214572484+00:00 stderr F E0813 20:06:09.213961 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:06:09.684253444+00:00 stderr F I0813 20:06:09.684155 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:09.995109275+00:00 stderr F I0813 20:06:09.994982 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:09.995424974+00:00 stderr F E0813 20:06:09.995296 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:06:10.173123513+00:00 stderr F I0813 20:06:10.172991 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:10.914646817+00:00 stderr F I0813 20:06:10.912960 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:10.922015297+00:00 stderr F I0813 20:06:10.921943 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:11.096769872+00:00 stderr F I0813 20:06:11.096666 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:11.811653774+00:00 stderr F I0813 20:06:11.811553 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:11.812051776+00:00 stderr F E0813 20:06:11.811976 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:06:11.992449141+00:00 stderr F I0813 20:06:11.992313 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:12.193235221+00:00 stderr F I0813 20:06:12.193177 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:12.794165150+00:00 stderr F I0813 20:06:12.794056 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:13.251686431+00:00 stderr F I0813 20:06:13.251568 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:13.444129302+00:00 stderr F I0813 20:06:13.442353 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:13.444129302+00:00 stderr F E0813 20:06:13.442536 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:06:13.994283917+00:00 stderr F I0813 20:06:13.994167 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:14.122405486+00:00 stderr F I0813 20:06:14.122004 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:14.187278824+00:00 stderr F I0813 20:06:14.187176 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:14.195468358+00:00 stderr F I0813 20:06:14.195350 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:14.533009894+00:00 stderr F I0813 20:06:14.532219 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:14.808445681+00:00 stderr F I0813 20:06:14.807930 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:15.003177808+00:00 stderr F I0813 20:06:15.002420 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:15.003177808+00:00 stderr F E0813 20:06:15.002886 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:06:15.794070796+00:00 stderr F I0813 20:06:15.793986 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:15.979400003+00:00 stderr F I0813 20:06:15.979279 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:06:15.997270335+00:00 stderr F I0813 20:06:15.997103 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:16.399074080+00:00 stderr F I0813 20:06:16.398917 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:16.399115161+00:00 stderr F E0813 20:06:16.399084 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:06:16.464236376+00:00 stderr F I0813 20:06:16.464109 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132
2025-08-13T20:06:16.794003089+00:00 stderr F I0813 20:06:16.793881 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990
2025-08-13T20:06:17.197062101+00:00 stderr F I0813 20:06:17.196924 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:17.197212995+00:00 stderr F E0813 20:06:17.197139 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:17.722846068+00:00 stderr F I0813 20:06:17.722676 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:17.993669173+00:00 stderr F I0813 20:06:17.993511 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:17.994296241+00:00 stderr F E0813 20:06:17.994159 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", 
SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:18.438699257+00:00 stderr F I0813 20:06:18.438641 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:06:19.248248059+00:00 stderr F I0813 20:06:19.248041 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:19.343039114+00:00 stderr F I0813 20:06:19.342098 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:19.584885919+00:00 stderr F I0813 20:06:19.584401 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:19.598600272+00:00 stderr F E0813 20:06:19.598516 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.614885068+00:00 stderr F E0813 20:06:19.614706 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get 
routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.635157919+00:00 stderr F E0813 20:06:19.635096 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.674930438+00:00 stderr F E0813 20:06:19.664489 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.748246817+00:00 stderr F I0813 20:06:19.746043 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:19.749856013+00:00 stderr F I0813 20:06:19.749722 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:19.803189150+00:00 stderr F E0813 20:06:19.794665 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:19.828946597+00:00 stderr F I0813 20:06:19.828433 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:19.880045051+00:00 stderr F E0813 20:06:19.879148 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:20.043557963+00:00 stderr F E0813 20:06:20.042559 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:20.366935763+00:00 stderr F E0813 20:06:20.366018 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 
2025-08-13T20:06:21.009144944+00:00 stderr F E0813 20:06:21.009081 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:21.609717192+00:00 stderr F I0813 20:06:21.609270 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:22.234066591+00:00 stderr F I0813 20:06:22.233754 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:22.287365577+00:00 stderr F I0813 20:06:22.287227 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:22.294139981+00:00 stderr F E0813 20:06:22.294044 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:23.078702238+00:00 stderr F I0813 20:06:23.078452 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:23.152022367+00:00 stderr F I0813 20:06:23.151737 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:23.191397335+00:00 stderr F I0813 20:06:23.191229 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:23.191462137+00:00 stderr F E0813 20:06:23.191446 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 
3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:24.745077965+00:00 stderr F I0813 20:06:24.744379 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:24.783302970+00:00 stderr F I0813 20:06:24.783153 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:24.834289230+00:00 stderr F I0813 20:06:24.833331 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:24.847144218+00:00 stderr F E0813 20:06:24.847017 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:24.851570105+00:00 stderr F E0813 20:06:24.850085 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:24.857206446+00:00 stderr F E0813 20:06:24.857177 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:24.913851938+00:00 stderr F I0813 20:06:24.913718 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 
2025-08-13T20:06:24.951681371+00:00 stderr F I0813 20:06:24.951569 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:25.040631169+00:00 stderr F I0813 20:06:25.040492 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:25.314961334+00:00 stderr F I0813 20:06:25.312493 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:27.048497405+00:00 stderr F I0813 20:06:27.048413 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:28.215903255+00:00 stderr F I0813 20:06:28.215234 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:28.256019734+00:00 stderr F I0813 20:06:28.255448 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:28.292744106+00:00 stderr F I0813 20:06:28.292625 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:28.293143647+00:00 stderr F E0813 20:06:28.292841 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:28.414768040+00:00 stderr F I0813 20:06:28.414654 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:28.646161106+00:00 stderr F I0813 20:06:28.646046 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:29.686390835+00:00 stderr F I0813 20:06:29.685560 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:31.822085495+00:00 stderr F I0813 20:06:31.821356 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:33.153242831+00:00 stderr F I0813 20:06:33.153123 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:33.520317245+00:00 stderr F I0813 20:06:33.519759 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:33.575179558+00:00 stderr F I0813 20:06:33.573144 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:33.575179558+00:00 stderr F E0813 20:06:33.573340 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", 
ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:06:33.576475315+00:00 stderr F I0813 20:06:33.576423 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172 2025-08-13T20:06:36.250033578+00:00 stderr F I0813 20:06:36.248480 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:06:36.270607538+00:00 stderr F I0813 20:06:36.268604 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:36.377326287+00:00 stderr F I0813 20:06:36.373692 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:36.458074342+00:00 stderr F I0813 20:06:36.457999 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:36.461352066+00:00 stderr F I0813 20:06:36.461272 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=31990 2025-08-13T20:06:36.967812537+00:00 stderr F I0813 20:06:36.967687 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:37.718851319+00:00 
stderr F I0813 20:06:37.715596 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:39.611042031+00:00 stderr F I0813 20:06:39.610212 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:40.082855298+00:00 stderr F I0813 20:06:40.082102 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:40.161935945+00:00 stderr F I0813 20:06:40.160274 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=32119 2025-08-13T20:06:40.192331217+00:00 stderr F I0813 20:06:40.191960 1 helpers.go:184] lister was stale at resourceVersion=30552, live get showed resourceVersion=32119 2025-08-13T20:06:40.192331217+00:00 stderr F E0813 20:06:40.192162 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"30700", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 
2025-08-13T20:06:42.073213563+00:00 stderr F I0813 20:06:42.073075 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:42.353942972+00:00 stderr F I0813 20:06:42.353658 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:42.369517168+00:00 stderr F I0813 20:06:42.369422 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:42.424848485+00:00 stderr F I0813 20:06:42.422844 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded changed from True to False 
("WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"),Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, 
time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" 2025-08-13T20:06:43.369853199+00:00 stderr F I0813 20:06:43.363073 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:43.560472214+00:00 stderr F I0813 20:06:43.559423 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:44.436295555+00:00 stderr F I0813 20:06:44.431154 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:06:44.744640726+00:00 stderr F I0813 20:06:44.743828 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:45.613246239+00:00 stderr F I0813 20:06:45.612552 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:45.923279647+00:00 stderr F I0813 20:06:45.922384 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available changed from False to True ("All is well") 2025-08-13T20:06:45.926984984+00:00 stderr F I0813 20:06:45.926915 1 status_controller.go:218] clusteroperator/authentication diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:47.027851337+00:00 stderr F I0813 20:06:47.027547 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:48.495591258+00:00 stderr F I0813 20:06:48.493729 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"30700\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.June, 27, 13, 28, 1, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0006da648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well" 2025-08-13T20:06:48.541989388+00:00 stderr F I0813 20:06:48.521692 1 helpers.go:184] lister was stale at resourceVersion=32195, live get showed resourceVersion=32202 2025-08-13T20:06:49.220022478+00:00 stderr F E0813 20:06:49.218626 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:06:49.263620898+00:00 stderr F I0813 20:06:49.259652 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:49.878502717+00:00 stderr F I0813 20:06:49.872820 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)" 2025-08-13T20:06:50.549607578+00:00 stderr F I0813 20:06:50.549511 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:50.963101364+00:00 stderr F I0813 20:06:50.962722 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:52.650859433+00:00 stderr F I0813 20:06:52.649631 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:06:53.395922365+00:00 stderr F I0813 20:06:53.395406 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:06:55.018081174+00:00 stderr F I0813 20:06:55.008754 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:00.290906550+00:00 stderr F I0813 20:07:00.269440 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:04.240339803+00:00 stderr F I0813 20:07:04.239152 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:05.713414188+00:00 stderr F I0813 20:07:05.712626 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:05.720681146+00:00 stderr F E0813 20:07:05.720598 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io 
oauth-openshift) 2025-08-13T20:07:05.732849615+00:00 stderr F E0813 20:07:05.732208 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:07:06.622944324+00:00 stderr F I0813 20:07:06.620393 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:06.814317541+00:00 stderr F E0813 20:07:06.814180 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:07:09.126943667+00:00 stderr F I0813 20:07:09.122061 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:07:14.841465356+00:00 stderr F E0813 20:07:14.835520 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:07:30.191761342+00:00 stderr F E0813 20:07:30.188131 1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift) 2025-08-13T20:07:58.905403376+00:00 stderr F I0813 20:07:58.896695 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:07:59.008467061+00:00 stderr F I0813 20:07:59.008386 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:59.042475026+00:00 stderr F I0813 20:07:59.042305 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)" to "All is well" 2025-08-13T20:08:24.045096453+00:00 stderr F E0813 20:08:24.042992 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.050072765+00:00 stderr F W0813 20:08:24.047200 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.051353532+00:00 stderr F E0813 20:08:24.047264 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", 
ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.061981107+00:00 stderr F E0813 20:08:24.060696 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.065391385+00:00 stderr F W0813 20:08:24.065222 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.065391385+00:00 stderr F E0813 20:08:24.065301 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.081278270+00:00 stderr F E0813 20:08:24.081184 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.090175325+00:00 stderr F W0813 20:08:24.090064 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.090175325+00:00 stderr F E0813 20:08:24.090129 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} 
(check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.115317196+00:00 stderr F E0813 20:08:24.115227 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.117149219+00:00 stderr F W0813 20:08:24.117072 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.117149219+00:00 stderr F E0813 20:08:24.117140 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.164695612+00:00 stderr F E0813 20:08:24.162356 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:24.168014307+00:00 stderr F W0813 20:08:24.165195 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.168014307+00:00 stderr F E0813 20:08:24.166593 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.256644158+00:00 stderr F E0813 20:08:24.256294 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.371038608+00:00 stderr F W0813 20:08:24.370757 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.371038608+00:00 
stderr F E0813 20:08:24.370980 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:24.574927424+00:00 stderr F E0813 20:08:24.572225 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:24.749282603+00:00 stderr F E0813 20:08:24.749180 1 leaderelection.go:332] error retrieving resource lock openshift-authentication-operator/cluster-authentication-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.771673885+00:00 stderr F E0813 20:08:24.771274 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.970065703+00:00 stderr F W0813 20:08:24.969294 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:24.970065703+00:00 stderr F E0813 20:08:24.969386 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: 
&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:25.168003797+00:00 stderr F E0813 20:08:25.166709 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.367454255+00:00 stderr F E0813 20:08:25.367064 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.568685445+00:00 stderr F E0813 20:08:25.566403 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:25.769318827+00:00 stderr F W0813 20:08:25.767739 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:25.769318827+00:00 stderr F E0813 20:08:25.767873 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check 
kube-apiserver that it deploys correctly) 2025-08-13T20:08:25.970122625+00:00 stderr F E0813 20:08:25.968364 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:26.353991311+00:00 stderr F E0813 20:08:26.353083 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:08:26.378685579+00:00 stderr F E0813 20:08:26.378502 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:26.569700015+00:00 stderr F E0813 20:08:26.569648 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.769699879+00:00 stderr F E0813 20:08:26.767454 1 
wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:26.967452419+00:00 stderr F E0813 20:08:26.966467 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.165713454+00:00 stderr F E0813 20:08:27.165504 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:27.290641866+00:00 stderr F E0813 20:08:27.290541 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.303245497+00:00 stderr F E0813 20:08:27.303084 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.369988360+00:00 stderr F W0813 20:08:27.369166 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:27.369988360+00:00 stderr F E0813 20:08:27.369257 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:08:27.412205051+00:00 stderr F E0813 20:08:27.411140 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:27.582706679+00:00 stderr F E0813 20:08:27.582457 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:27.768069004+00:00 stderr F E0813 20:08:27.768009 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:27.812603321+00:00 stderr F E0813 20:08:27.812334 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:27.968471070+00:00 stderr F E0813 20:08:27.968174 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:28.169348299+00:00 stderr F E0813 20:08:28.168588 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:28.372013970+00:00 stderr F E0813 20:08:28.371098 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:28.413937132+00:00 stderr F E0813 20:08:28.412224 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:28.567555536+00:00 stderr F E0813 20:08:28.566500 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:28.768137066+00:00 stderr F E0813 20:08:28.767597 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:28.812500518+00:00 stderr F E0813 20:08:28.812421 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:28.966491243+00:00 stderr F E0813 20:08:28.966373 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:29.166768005+00:00 stderr F E0813 20:08:29.166653 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:29.211285252+00:00 stderr F E0813 20:08:29.211232 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:29.368611422+00:00 stderr F E0813 20:08:29.367387 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:29.568887134+00:00 stderr F E0813 20:08:29.568833 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:29.766863351+00:00 stderr F W0813 20:08:29.766470 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:29.766863351+00:00 stderr F E0813 20:08:29.766526 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:08:29.812843879+00:00 stderr F E0813 20:08:29.812659 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:29.968834921+00:00 stderr F E0813 20:08:29.968403 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:30.169648219+00:00 stderr F E0813 20:08:30.169492 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:30.366019758+00:00 stderr F E0813 20:08:30.365931 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:30.455198815+00:00 stderr F E0813 20:08:30.455144 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:30.568202395+00:00 stderr F E0813 20:08:30.566868 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:30.688641188+00:00 stderr F E0813 20:08:30.688524 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:30.813959052+00:00 stderr F E0813 20:08:30.812976 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:31.012709651+00:00 stderr F E0813 20:08:31.011994 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:31.213289311+00:00 stderr F E0813 20:08:31.213022 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:31.413338887+00:00 stderr F E0813 20:08:31.413275 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:31.812176271+00:00 stderr F E0813 20:08:31.812067 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.015182451+00:00 stderr F E0813 20:08:32.015085 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.217369507+00:00 stderr F E0813 20:08:32.216942 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:32.330155181+00:00 stderr F E0813 20:08:32.330030 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.331628133+00:00 stderr F W0813 20:08:32.331548 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.331628133+00:00 stderr F E0813 20:08:32.331595 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:08:32.412567704+00:00 stderr F E0813 20:08:32.412458 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.501086511+00:00 stderr F E0813 20:08:32.500995 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.612261799+00:00 stderr F E0813 20:08:32.612160 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:32.813102527+00:00 stderr F E0813 20:08:32.813006 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:33.011336201+00:00 stderr F E0813 20:08:33.011169 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:33.300630165+00:00 stderr F E0813 20:08:33.300571 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:33.411134734+00:00 stderr F E0813 20:08:33.411019 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:34.016948782+00:00 stderr F E0813 20:08:34.016646 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:34.211524341+00:00 stderr F E0813 20:08:34.211467 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:34.979539251+00:00 stderr F E0813 20:08:34.978848 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:35.068993066+00:00 stderr F E0813 20:08:35.068533 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:35.494494035+00:00 stderr F E0813 20:08:35.494334 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:35.867394596+00:00 stderr F E0813 20:08:35.867290 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:36.331954435+00:00 stderr F E0813 20:08:36.330279 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:36.663480730+00:00 stderr F E0813 20:08:36.663423 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:36.937287141+00:00 stderr F E0813 20:08:36.937181 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:37.461235673+00:00 stderr F E0813 20:08:37.461092 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:37.465001621+00:00 stderr F W0813 20:08:37.464585 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:37.465001621+00:00 stderr F E0813 20:08:37.464647 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:08:38.057275622+00:00 stderr F E0813 20:08:38.056945 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.121127314+00:00 stderr F E0813 20:08:40.120651 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.202952520+00:00 stderr F E0813 20:08:40.202463 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:40.997972094+00:00 stderr F E0813 20:08:40.997415 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:41.873606559+00:00 stderr F E0813 20:08:41.873176 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:42.367147970+00:00 stderr F E0813 20:08:42.366964 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.402416291+00:00 stderr F E0813 20:08:42.388495 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.402416291+00:00 stderr F W0813 20:08:42.398454 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.402416291+00:00 stderr F E0813 20:08:42.399290 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.402416291+00:00 stderr F E0813 20:08:42.398772 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.413402036+00:00 stderr F E0813 20:08:42.413346 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.421702924+00:00 stderr F E0813 20:08:42.421659 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:42.582036081+00:00 stderr F E0813 20:08:42.581967 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:42.767459767+00:00 stderr F E0813 20:08:42.767286 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.169258266+00:00 stderr F E0813 20:08:43.168730 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.568299387+00:00 stderr F W0813 20:08:43.568007 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.568299387+00:00 stderr F E0813 20:08:43.568082 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.590563165+00:00 stderr F E0813 20:08:43.590408 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:43.770945797+00:00 stderr F E0813 20:08:43.770870 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:43.967106021+00:00 stderr F E0813 20:08:43.966929 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:44.169006020+00:00 stderr F E0813 20:08:44.167976 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:44.574914518+00:00 stderr F E0813 20:08:44.574749 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:44.969279875+00:00 stderr F E0813 20:08:44.969224 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:45.367978046+00:00 stderr F E0813 20:08:45.367418 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:45.578657656+00:00 stderr F E0813 20:08:45.570487 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:08:45.768164160+00:00 stderr F W0813 20:08:45.767463 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:45.768164160+00:00 stderr F E0813 20:08:45.767521 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:45.968257076+00:00 stderr F E0813 20:08:45.968187 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.174493719+00:00 stderr F E0813 20:08:46.173555 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:08:46.567953009+00:00 stderr F E0813 20:08:46.567318 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp
10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:46.967325470+00:00 stderr F E0813 20:08:46.967193 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.367953876+00:00 stderr F I0813 20:08:47.367503 1 request.go:697] Waited for 1.037123464s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:08:47.376156851+00:00 stderr F E0813 20:08:47.374092 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.569507115+00:00 stderr F E0813 20:08:47.569113 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.780139534+00:00 stderr F E0813 20:08:47.779160 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete 
"https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:47.969995228+00:00 stderr F W0813 20:08:47.969926 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:47.970092600+00:00 stderr F E0813 20:08:47.970078 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.173554544+00:00 stderr F E0813 20:08:48.173499 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.369102530+00:00 stderr F I0813 20:08:48.369006 1 request.go:697] Waited for 1.198369338s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:08:48.371757767+00:00 stderr F E0813 20:08:48.370471 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.573633485+00:00 stderr F E0813 20:08:48.573276 1 base_controller.go:268] RevisionController reconciliation failed: Get 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.969884065+00:00 stderr F E0813 20:08:48.969666 1 wellknown_ready_controller.go:113] Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.168740957+00:00 stderr F E0813 20:08:49.168625 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 
10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:49.566419959+00:00 stderr F I0813 20:08:49.566331 1 request.go:697] Waited for 1.114601407s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2025-08-13T20:08:49.768094771+00:00 stderr F E0813 20:08:49.767501 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.968180258+00:00 stderr F W0813 20:08:49.968050 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.968180258+00:00 stderr F E0813 20:08:49.968111 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:08:50.172106924+00:00 stderr F E0813 20:08:50.171987 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.364823499+00:00 stderr F E0813 20:08:50.364571 1 base_controller.go:268] NamespaceFinalizerController_openshift-oauth-apiserver reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.367040633+00:00 stderr F E0813 20:08:50.366962 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:50.566413309+00:00 stderr F I0813 20:08:50.566355 1 request.go:697] Waited for 1.197345059s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-08-13T20:08:50.569850187+00:00 stderr F W0813 20:08:50.569463 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.569850187+00:00 stderr F E0813 20:08:50.569519 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:50.769344097+00:00 stderr F E0813 20:08:50.769278 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.369874175+00:00 stderr F E0813 20:08:51.367442 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: 
connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:51.769272376+00:00 stderr F E0813 20:08:51.769161 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.968970392+00:00 stderr F W0813 20:08:51.968851 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:51.968970392+00:00 stderr F E0813 20:08:51.968945 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.170285694+00:00 stderr F E0813 20:08:52.169633 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.369885726+00:00 stderr F E0813 20:08:52.369128 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.769490233+00:00 stderr F E0813 20:08:52.767737 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): 
Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:53.168273877+00:00 stderr F E0813 20:08:53.168091 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:53.370563587+00:00 stderr F E0813 20:08:53.370429 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.768486325+00:00 stderr F W0813 20:08:53.768341 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.768540916+00:00 stderr F E0813 20:08:53.768487 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:53.834438036+00:00 stderr F E0813 20:08:53.833611 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.168090622+00:00 stderr F E0813 20:08:54.167988 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.368919430+00:00 stderr F E0813 
20:08:54.368570 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:54.574085232+00:00 stderr F E0813 20:08:54.573232 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused] 2025-08-13T20:08:55.169695249+00:00 stderr F E0813 20:08:55.169436 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:55.370972760+00:00 stderr F W0813 20:08:55.370366 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:55.370972760+00:00 stderr F E0813 20:08:55.370465 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:55.768937660+00:00 stderr F E0813 20:08:55.768823 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:55.967508383+00:00 stderr F E0813 20:08:55.967410 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.368312545+00:00 stderr F W0813 20:08:56.368198 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.368312545+00:00 stderr F E0813 20:08:56.368250 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.568381761+00:00 stderr F E0813 20:08:56.568273 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.167729875+00:00 stderr F W0813 20:08:57.167605 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.167729875+00:00 stderr F E0813 20:08:57.167705 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.368017587+00:00 stderr F E0813 20:08:57.367766 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.570202373+00:00 stderr F E0813 20:08:57.570141 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:57.767974264+00:00 stderr F E0813 20:08:57.767746 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:08:58.367627167+00:00 stderr F E0813 20:08:58.367062 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.571766060+00:00 stderr F W0813 20:08:58.569885 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.571766060+00:00 stderr F E0813 20:08:58.570083 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.967660021+00:00 stderr F E0813 20:08:58.967524 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.293345769+00:00 stderr F E0813 20:09:00.292919 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.687833059+00:00 stderr F E0813 20:09:00.687656 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.931614558+00:00 stderr F E0813 20:09:00.931484 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:10.590964230+00:00 stderr F I0813 20:09:10.589417 1 helpers.go:184] lister was stale at resourceVersion=32766, live get showed resourceVersion=33015 2025-08-13T20:09:10.638054530+00:00 stderr F E0813 20:09:10.637972 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:09:28.814591308+00:00 stderr F I0813 20:09:28.813369 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:29.604256818+00:00 stderr F I0813 20:09:29.603837 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:30.178241024+00:00 stderr F I0813 20:09:30.178185 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 2025-08-13T20:09:31.361714086+00:00 stderr F I0813 20:09:31.361450 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:32.362193281+00:00 stderr F I0813 20:09:32.361343 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:33.781685288+00:00 stderr F I0813 20:09:33.781545 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:34.400346116+00:00 stderr F I0813 20:09:34.399530 1 helpers.go:184] lister was 
stale at resourceVersion=32766, live get showed resourceVersion=33023 2025-08-13T20:09:34.437505131+00:00 stderr F I0813 20:09:34.437206 1 helpers.go:184] lister was stale at resourceVersion=32766, live get showed resourceVersion=33023 2025-08-13T20:09:34.437505131+00:00 stderr F E0813 20:09:34.437374 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-08-13T20:09:35.160181941+00:00 stderr F I0813 20:09:35.159682 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T20:09:35.160444489+00:00 stderr F I0813 20:09:35.160387 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:35Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", 
Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:35.182949394+00:00 stderr F I0813 20:09:35.178589 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), 
Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)",Available changed from True to False ("WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)") 2025-08-13T20:09:36.183181381+00:00 stderr F I0813 20:09:36.182351 1 request.go:697] Waited for 1.014135186s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit 2025-08-13T20:09:37.383550266+00:00 stderr F I0813 20:09:37.383452 1 request.go:697] Waited for 1.267698556s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-service-ca 2025-08-13T20:09:37.849265269+00:00 stderr F I0813 20:09:37.849146 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 
2025-08-13T20:09:38.593693882+00:00 stderr F I0813 20:09:38.593476 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, 
CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.600523548+00:00 stderr F E0813 20:09:38.600397 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.606486579+00:00 stderr F I0813 20:09:38.606344 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, 
time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it 
deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.613476549+00:00 stderr F E0813 20:09:38.613406 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.625349050+00:00 stderr F I0813 20:09:38.625280 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys 
correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.633824643+00:00 stderr F E0813 20:09:38.633739 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your 
changes to the latest version and try again 2025-08-13T20:09:38.655101473+00:00 stderr F I0813 20:09:38.655010 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:09:38.656756420+00:00 stderr F I0813 20:09:38.656525 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.664366438+00:00 stderr F E0813 20:09:38.664093 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.706020173+00:00 stderr F I0813 20:09:38.705672 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", 
Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, 
time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:38.727943461+00:00 stderr F E0813 20:09:38.727275 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:38.728329742+00:00 stderr F I0813 20:09:38.728299 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T20:09:38.808550282+00:00 stderr F I0813 20:09:38.808400 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:38.817439457+00:00 stderr F E0813 20:09:38.817411 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:38.979365520+00:00 stderr F I0813 20:09:38.979309 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:38Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:38.993256238+00:00 stderr F E0813 20:09:38.993201 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:38.995624496+00:00 stderr F I0813 20:09:38.995515 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:39.074880778+00:00 stderr F I0813 20:09:39.072858 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:39.315513077+00:00 stderr F I0813 20:09:39.315405 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:39Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:39.321931551+00:00 stderr F E0813 20:09:39.321866 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:39.859586766+00:00 stderr F I0813 20:09:39.859233 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:39.965092921+00:00 stderr F I0813 20:09:39.964876 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:39Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:39.971887716+00:00 stderr F E0813 20:09:39.971853 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:40.590280895+00:00 stderr F I0813 20:09:40.588307 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:40.721889528+00:00 stderr F I0813 20:09:40.719649 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125
2025-08-13T20:09:41.256895097+00:00 stderr F I0813 20:09:41.254858 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:41.272325010+00:00 stderr F E0813 20:09:41.270282 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:42.003101422+00:00 stderr F I0813 20:09:42.001555 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:42.762881406+00:00 stderr F I0813 20:09:42.760248 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:09:42.762881406+00:00 stderr F E0813 20:09:42.761447 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:09:42.768262710+00:00 stderr F I0813 20:09:42.767079 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:43.836652932+00:00 stderr F I0813 20:09:43.836540 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:43Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:43.858078695+00:00 stderr F E0813 20:09:43.853310 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:44.286400066+00:00 stderr F I0813 20:09:44.286234 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:44.439010921+00:00 stderr F I0813 20:09:44.438763 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
2025-08-13T20:09:44.663102456+00:00 stderr F I0813 20:09:44.662930 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:09:45.580610793+00:00 stderr F I0813 20:09:45.579381 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:09:46.812063900+00:00 stderr F I0813 20:09:46.811882 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:46.854568888+00:00 stderr F I0813 20:09:46.854333 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:46Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:46.863504424+00:00 stderr F E0813 20:09:46.861719 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:47.264677707+00:00 stderr F I0813 20:09:47.264553 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:47.288824629+00:00 stderr F I0813 20:09:47.288681 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:47.736836453+00:00 stderr F I0813 20:09:47.736530 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:48.148161416+00:00 stderr F I0813 20:09:48.145409 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:48.975455756+00:00 stderr F I0813 20:09:48.974995 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:48Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:48.984041552+00:00 stderr F E0813 20:09:48.983644 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:49.615860927+00:00 stderr F I0813 20:09:49.615132 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:49Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:49.623829645+00:00 stderr F E0813 20:09:49.621235 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:49.813302098+00:00 stderr F I0813 20:09:49.813240 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:51.188335851+00:00 stderr F I0813 20:09:51.187817 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:51.601003312+00:00 stderr F E0813 20:09:51.600623 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"32907", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-08-13T20:09:52.643865962+00:00 stderr F I0813 20:09:52.641709 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:52.686666690+00:00 stderr F I0813 20:09:52.685997 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:09:52Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:52.692047854+00:00 stderr F E0813 20:09:52.691981 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:53.054038612+00:00 stderr F I0813 20:09:53.053871 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:53.563500969+00:00 stderr F I0813 20:09:53.563367 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:54.364638189+00:00 stderr F I0813 20:09:54.361472 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:56.583068233+00:00 stderr F I0813 20:09:56.582319 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T20:09:57.815065265+00:00 stderr F I0813 20:09:57.814990 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T20:09:57.861014273+00:00 stderr F I0813 20:09:57.860957 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:06:44Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-08-13T20:09:57.884623490+00:00 stderr F E0813 20:09:57.884536 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:09:59.212995395+00:00 stderr F I0813 20:09:59.212118 1 request.go:697] Waited for 1.156067834s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift
2025-08-13T20:10:00.017574593+00:00 stderr F I0813 20:10:00.016494 1 reflector.go:351] Caches
populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:00.412400763+00:00 stderr F I0813 20:10:00.412021 1 request.go:697] Waited for 1.193054236s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift 2025-08-13T20:10:01.039028689+00:00 stderr F I0813 20:10:01.038959 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:01.414870635+00:00 stderr F I0813 20:10:01.414421 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:01.612929483+00:00 stderr F I0813 20:10:01.611468 1 request.go:697] Waited for 1.187769875s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication 2025-08-13T20:10:05.994944689+00:00 stderr F I0813 20:10:05.994001 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:08.314978607+00:00 stderr F I0813 20:10:08.314538 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:14.120859465+00:00 stderr F I0813 20:10:14.120610 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:20.593659503+00:00 stderr F I0813 20:10:20.593198 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:23.869260428+00:00 stderr F I0813 20:10:23.868864 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:25.514277572+00:00 stderr F I0813 20:10:25.514185 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 
2025-08-13T20:10:27.472026502+00:00 stderr F I0813 20:10:27.470139 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T20:10:30.920531234+00:00 stderr F I0813 20:10:30.917262 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:32.111634214+00:00 stderr F I0813 20:10:32.110131 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:32.326100383+00:00 stderr F I0813 20:10:32.324274 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:33.337597044+00:00 stderr F I0813 20:10:33.337494 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:34.581236090+00:00 stderr F I0813 20:10:34.580885 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:38.555504355+00:00 stderr F I0813 20:10:38.554902 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:38.978677758+00:00 stderr F I0813 20:10:38.978615 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:38.979649666+00:00 stderr F I0813 20:10:38.979578 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:38Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:10:38.996976132+00:00 stderr F I0813 20:10:38.995651 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-08-13T20:10:38Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:10:38.996976132+00:00 stderr F I0813 20:10:38.996248 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", 
UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"32907\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 4, 8, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e9a9c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well",Available changed from False to True ("All is well") 2025-08-13T20:10:39.002536212+00:00 stderr F E0813 20:10:39.002389 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:10:39.865847024+00:00 stderr F I0813 20:10:39.865735 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:10:48.375629945+00:00 stderr F I0813 20:10:48.373525 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:29:36.584846903+00:00 stderr F I0813 20:29:36.583938 1 request.go:697] Waited for 1.136256551s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift 2025-08-13T20:42:36.642834533+00:00 stderr F E0813 20:42:36.641909 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:36.655063095+00:00 stderr F E0813 20:42:36.654734 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:36.836902268+00:00 stderr F E0813 20:42:36.836436 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.036862743+00:00 stderr F E0813 20:42:37.036597 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.160458276+00:00 stderr F E0813 20:42:37.159852 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.172532134+00:00 stderr F E0813 20:42:37.172300 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.190632706+00:00 stderr F E0813 20:42:37.190565 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.216130321+00:00 stderr F E0813 20:42:37.216044 1 base_controller.go:268] 
APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.235806149+00:00 stderr F E0813 20:42:37.235648 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.258745950+00:00 stderr F E0813 20:42:37.258690 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.447022458+00:00 stderr F E0813 20:42:37.444536 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.558196643+00:00 stderr F E0813 20:42:37.558098 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.644990536+00:00 stderr F E0813 20:42:37.643264 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.842356836+00:00 stderr F E0813 20:42:37.842114 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.851507320+00:00 
stderr F E0813 20:42:37.850143 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.180712041+00:00 stderr F E0813 20:42:38.179735 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.380054948+00:00 stderr F E0813 20:42:38.379961 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.580030223+00:00 stderr F E0813 20:42:38.579936 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.980053556+00:00 stderr F E0813 20:42:38.979878 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.180444863+00:00 stderr F E0813 20:42:39.179891 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.579247880+00:00 stderr F E0813 20:42:39.578761 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.780079240+00:00 stderr F E0813 20:42:39.780027 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.979863310+00:00 stderr F E0813 20:42:39.979715 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.379862572+00:00 stderr F E0813 20:42:40.379683 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.979263443+00:00 stderr F E0813 20:42:40.979080 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.017829305+00:00 stderr F I0813 20:42:41.017697 1 cmd.go:128] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:41.019967157+00:00 stderr F I0813 20:42:41.019734 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:42:41.020199643+00:00 stderr F I0813 20:42:41.020132 1 base_controller.go:172] Shutting down ManagementStateController ... 2025-08-13T20:42:41.020199643+00:00 stderr F I0813 20:42:41.020172 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:41.020349598+00:00 stderr F I0813 20:42:41.020265 1 base_controller.go:172] Shutting down OAuthClientsController ... 
2025-08-13T20:42:41.020397429+00:00 stderr F I0813 20:42:41.020364 1 base_controller.go:172] Shutting down CustomRouteController ... 2025-08-13T20:42:41.020412520+00:00 stderr F I0813 20:42:41.020402 1 base_controller.go:172] Shutting down ProxyConfigController ... 2025-08-13T20:42:41.020630246+00:00 stderr F I0813 20:42:41.020566 1 base_controller.go:172] Shutting down OAuthServerRouteEndpointAccessibleController ... 2025-08-13T20:42:41.020630246+00:00 stderr F I0813 20:42:41.020614 1 base_controller.go:172] Shutting down WellKnownReadyController ... 2025-08-13T20:42:41.020989066+00:00 stderr F I0813 20:42:41.020682 1 base_controller.go:172] Shutting down PayloadConfig ... 2025-08-13T20:42:41.021255244+00:00 stderr F I0813 20:42:41.020046 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:41.021317106+00:00 stderr F I0813 20:42:41.020354 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:41.021331676+00:00 stderr F I0813 20:42:41.021315 1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:41.021392778+00:00 stderr F I0813 20:42:41.021341 1 genericapiserver.go:603] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:41.021580743+00:00 stderr F E0813 20:42:41.021519 1 base_controller.go:268] PayloadConfig reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:41.021734888+00:00 stderr F I0813 20:42:41.021576 1 base_controller.go:114] Shutting down worker of PayloadConfig controller ... 2025-08-13T20:42:41.021734888+00:00 stderr F I0813 20:42:41.021726 1 base_controller.go:172] Shutting down MetadataController ... 2025-08-13T20:42:41.021749498+00:00 stderr F I0813 20:42:41.021742 1 base_controller.go:172] Shutting down OAuthServerWorkloadController ... 
2025-08-13T20:42:41.021897042+00:00 stderr F I0813 20:42:41.021758 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:42:41.021897042+00:00 stderr F I0813 20:42:41.021869 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:42:41.021916773+00:00 stderr F I0813 20:42:41.021886 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:42:41.021916773+00:00 stderr F I0813 20:42:41.021910 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:42:41.021938713+00:00 stderr F I0813 20:42:41.021924 1 base_controller.go:172] Shutting down IngressNodesAvailableController ... 2025-08-13T20:42:41.021951094+00:00 stderr F I0813 20:42:41.021937 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:42:41.021962624+00:00 stderr F I0813 20:42:41.021951 1 base_controller.go:172] Shutting down ResourceSyncController ... 2025-08-13T20:42:41.021974175+00:00 stderr F I0813 20:42:41.021964 1 base_controller.go:172] Shutting down NamespaceFinalizerController_openshift-oauth-apiserver ... 2025-08-13T20:42:41.021985925+00:00 stderr F I0813 20:42:41.021978 1 base_controller.go:172] Shutting down OAuthAPIServerControllerWorkloadController ... 2025-08-13T20:42:41.022060777+00:00 stderr F E0813 20:42:41.022021 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:41.022116439+00:00 stderr F I0813 20:42:41.022083 1 base_controller.go:172] Shutting down SecretRevisionPruneController ... 2025-08-13T20:42:41.022128659+00:00 stderr F I0813 20:42:41.022119 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:41.022144009+00:00 stderr F I0813 20:42:41.022135 1 base_controller.go:172] Shutting down RevisionController ... 
2025-08-13T20:42:41.022156230+00:00 stderr F I0813 20:42:41.022149 1 base_controller.go:172] Shutting down OpenShiftAuthenticatorCertRequester ... 2025-08-13T20:42:41.022168380+00:00 stderr F I0813 20:42:41.022161 1 base_controller.go:172] Shutting down WebhookAuthenticatorController ... 2025-08-13T20:42:41.022181600+00:00 stderr F I0813 20:42:41.022175 1 base_controller.go:172] Shutting down APIServiceController_openshift-apiserver ... 2025-08-13T20:42:41.022246472+00:00 stderr F I0813 20:42:41.022188 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:42:41.022351045+00:00 stderr F E0813 20:42:41.022301 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:41.022351045+00:00 stderr F I0813 20:42:41.022344 1 base_controller.go:172] Shutting down RouterCertsDomainValidationController ... 2025-08-13T20:42:41.022732486+00:00 stderr F W0813 20:42:41.022662 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:41.022732486+00:00 stderr F E0813 20:42:41.022709 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: client rate limiter Wait returned an error: context canceled 2025-08-13T20:42:41.022833349+00:00 stderr F I0813 20:42:41.022735 1 base_controller.go:114] Shutting down worker of OAuthAPIServerControllerWorkloadController controller ... 2025-08-13T20:42:41.022833349+00:00 stderr F I0813 20:42:41.022743 1 base_controller.go:104] All OAuthAPIServerControllerWorkloadController workers have been terminated 2025-08-13T20:42:41.022833349+00:00 stderr F I0813 20:42:41.022756 1 base_controller.go:172] Shutting down ConfigObserver ... 
2025-08-13T20:42:41.022833349+00:00 stderr F I0813 20:42:41.022769 1 base_controller.go:172] Shutting down TrustDistributionController ... 2025-08-13T20:42:41.022853040+00:00 stderr F I0813 20:42:41.022838 1 base_controller.go:172] Shutting down OpenshiftAuthenticationStaticResources ... 2025-08-13T20:42:41.022863260+00:00 stderr F I0813 20:42:41.022856 1 base_controller.go:172] Shutting down ServiceCAController ... 2025-08-13T20:42:41.022886701+00:00 stderr F I0813 20:42:41.022871 1 base_controller.go:172] Shutting down OAuthServerServiceEndpointsEndpointAccessibleController ... 2025-08-13T20:42:41.022902241+00:00 stderr F I0813 20:42:41.022888 1 base_controller.go:172] Shutting down OAuthServerServiceEndpointAccessibleController ... 2025-08-13T20:42:41.022912292+00:00 stderr F I0813 20:42:41.022905 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:42:41.022953973+00:00 stderr F I0813 20:42:41.022920 1 base_controller.go:172] Shutting down StatusSyncer_authentication ... 2025-08-13T20:42:41.022953973+00:00 stderr F I0813 20:42:41.022946 1 base_controller.go:150] All StatusSyncer_authentication post start hooks have been terminated 2025-08-13T20:42:41.022970083+00:00 stderr F I0813 20:42:41.022962 1 base_controller.go:172] Shutting down APIServerStaticResources ... 2025-08-13T20:42:41.022983014+00:00 stderr F I0813 20:42:41.022976 1 base_controller.go:172] Shutting down WebhookAuthenticatorCertApprover_OpenShiftAuthenticator ... 2025-08-13T20:42:41.022995354+00:00 stderr F I0813 20:42:41.022988 1 base_controller.go:172] Shutting down IngressStateController ... 2025-08-13T20:42:41.023007654+00:00 stderr F I0813 20:42:41.023000 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 
2025-08-13T20:42:41.023017265+00:00 stderr F I0813 20:42:41.023008 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:42:41.023026905+00:00 stderr F I0813 20:42:41.023018 1 base_controller.go:114] Shutting down worker of OAuthClientsController controller ... 2025-08-13T20:42:41.023036665+00:00 stderr F I0813 20:42:41.023026 1 base_controller.go:104] All OAuthClientsController workers have been terminated 2025-08-13T20:42:41.023046405+00:00 stderr F I0813 20:42:41.023035 1 base_controller.go:114] Shutting down worker of CustomRouteController controller ... 2025-08-13T20:42:41.023046405+00:00 stderr F I0813 20:42:41.023041 1 base_controller.go:104] All CustomRouteController workers have been terminated 2025-08-13T20:42:41.023059016+00:00 stderr F I0813 20:42:41.023051 1 base_controller.go:114] Shutting down worker of ProxyConfigController controller ... 2025-08-13T20:42:41.023101647+00:00 stderr F I0813 20:42:41.023071 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:42:41.023113937+00:00 stderr F I0813 20:42:41.023100 1 base_controller.go:114] Shutting down worker of OAuthServerRouteEndpointAccessibleController controller ... 2025-08-13T20:42:41.023113937+00:00 stderr F I0813 20:42:41.023108 1 base_controller.go:104] All OAuthServerRouteEndpointAccessibleController workers have been terminated 2025-08-13T20:42:41.023123998+00:00 stderr F I0813 20:42:41.023117 1 base_controller.go:114] Shutting down worker of WellKnownReadyController controller ... 2025-08-13T20:42:41.023134698+00:00 stderr F I0813 20:42:41.023124 1 base_controller.go:104] All WellKnownReadyController workers have been terminated 2025-08-13T20:42:41.023146558+00:00 stderr F I0813 20:42:41.023135 1 base_controller.go:114] Shutting down worker of MetadataController controller ... 
2025-08-13T20:42:41.023146558+00:00 stderr F I0813 20:42:41.023141 1 base_controller.go:104] All MetadataController workers have been terminated
2025-08-13T20:42:41.023159119+00:00 stderr F I0813 20:42:41.023150 1 base_controller.go:114] Shutting down worker of OAuthServerWorkloadController controller ...
2025-08-13T20:42:41.023171099+00:00 stderr F I0813 20:42:41.023157 1 base_controller.go:104] All OAuthServerWorkloadController workers have been terminated
2025-08-13T20:42:41.023171099+00:00 stderr F I0813 20:42:41.023165 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ...
2025-08-13T20:42:41.023183889+00:00 stderr F I0813 20:42:41.023172 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated
2025-08-13T20:42:41.023183889+00:00 stderr F I0813 20:42:41.023180 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ...
2025-08-13T20:42:41.023201600+00:00 stderr F I0813 20:42:41.023185 1 base_controller.go:104] All EncryptionStateController workers have been terminated
2025-08-13T20:42:41.023201600+00:00 stderr F I0813 20:42:41.023193 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ...
2025-08-13T20:42:41.023201600+00:00 stderr F I0813 20:42:41.023197 1 base_controller.go:104] All EncryptionKeyController workers have been terminated
2025-08-13T20:42:41.023213090+00:00 stderr F I0813 20:42:41.023203 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ...
2025-08-13T20:42:41.023213090+00:00 stderr F I0813 20:42:41.023209 1 base_controller.go:104] All EncryptionPruneController workers have been terminated
2025-08-13T20:42:41.023255831+00:00 stderr F I0813 20:42:41.023216 1 base_controller.go:114] Shutting down worker of IngressNodesAvailableController controller ...
2025-08-13T20:42:41.023268572+00:00 stderr F I0813 20:42:41.023223 1 base_controller.go:104] All IngressNodesAvailableController workers have been terminated
2025-08-13T20:42:41.023280872+00:00 stderr F I0813 20:42:41.023272 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ...
2025-08-13T20:42:41.023290522+00:00 stderr F I0813 20:42:41.023281 1 base_controller.go:104] All EncryptionConditionController workers have been terminated
2025-08-13T20:42:41.023302743+00:00 stderr F I0813 20:42:41.023294 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...
2025-08-13T20:42:41.023312523+00:00 stderr F I0813 20:42:41.023302 1 base_controller.go:104] All ResourceSyncController workers have been terminated
2025-08-13T20:42:41.023322243+00:00 stderr F I0813 20:42:41.023311 1 base_controller.go:114] Shutting down worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ...
2025-08-13T20:42:41.023322243+00:00 stderr F I0813 20:42:41.023318 1 base_controller.go:104] All NamespaceFinalizerController_openshift-oauth-apiserver workers have been terminated
2025-08-13T20:42:41.023368995+00:00 stderr F I0813 20:42:41.023330 1 base_controller.go:114] Shutting down worker of SecretRevisionPruneController controller ...
2025-08-13T20:42:41.023393075+00:00 stderr F I0813 20:42:41.023378 1 base_controller.go:114] Shutting down worker of RouterCertsDomainValidationController controller ...
2025-08-13T20:42:41.023403416+00:00 stderr F I0813 20:42:41.023390 1 base_controller.go:104] All RouterCertsDomainValidationController workers have been terminated
2025-08-13T20:42:41.023413076+00:00 stderr F I0813 20:42:41.023400 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...
2025-08-13T20:42:41.023413076+00:00 stderr F I0813 20:42:41.023409 1 base_controller.go:104] All ConfigObserver workers have been terminated
2025-08-13T20:42:41.023423086+00:00 stderr F I0813 20:42:41.023417 1 base_controller.go:114] Shutting down worker of RevisionController controller ...
2025-08-13T20:42:41.023432877+00:00 stderr F I0813 20:42:41.023424 1 base_controller.go:104] All RevisionController workers have been terminated
2025-08-13T20:42:41.023442607+00:00 stderr F I0813 20:42:41.023431 1 base_controller.go:114] Shutting down worker of OpenShiftAuthenticatorCertRequester controller ...
2025-08-13T20:42:41.023442607+00:00 stderr F I0813 20:42:41.023438 1 base_controller.go:104] All OpenShiftAuthenticatorCertRequester workers have been terminated
2025-08-13T20:42:41.023452487+00:00 stderr F I0813 20:42:41.023445 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorController controller ...
2025-08-13T20:42:41.023462267+00:00 stderr F I0813 20:42:41.023451 1 base_controller.go:104] All WebhookAuthenticatorController workers have been terminated
2025-08-13T20:42:41.023462267+00:00 stderr F I0813 20:42:41.023458 1 base_controller.go:114] Shutting down worker of APIServiceController_openshift-apiserver controller ...
2025-08-13T20:42:41.023477468+00:00 stderr F I0813 20:42:41.023468 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...
2025-08-13T20:42:41.023477468+00:00 stderr F I0813 20:42:41.023474 1 base_controller.go:104] All ConfigObserver workers have been terminated
2025-08-13T20:42:41.023487358+00:00 stderr F I0813 20:42:41.023481 1 base_controller.go:114] Shutting down worker of TrustDistributionController controller ...
2025-08-13T20:42:41.023497168+00:00 stderr F I0813 20:42:41.023486 1 base_controller.go:104] All TrustDistributionController workers have been terminated
2025-08-13T20:42:41.023497168+00:00 stderr F I0813 20:42:41.023492 1 base_controller.go:114] Shutting down worker of OpenshiftAuthenticationStaticResources controller ...
2025-08-13T20:42:41.023507299+00:00 stderr F I0813 20:42:41.023498 1 base_controller.go:104] All OpenshiftAuthenticationStaticResources workers have been terminated
2025-08-13T20:42:41.023516869+00:00 stderr F I0813 20:42:41.023505 1 base_controller.go:114] Shutting down worker of ServiceCAController controller ...
2025-08-13T20:42:41.023516869+00:00 stderr F I0813 20:42:41.023510 1 base_controller.go:104] All ServiceCAController workers have been terminated
2025-08-13T20:42:41.023526879+00:00 stderr F I0813 20:42:41.023516 1 base_controller.go:114] Shutting down worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ...
2025-08-13T20:42:41.023526879+00:00 stderr F I0813 20:42:41.023521 1 base_controller.go:104] All OAuthServerServiceEndpointsEndpointAccessibleController workers have been terminated
2025-08-13T20:42:41.023536920+00:00 stderr F I0813 20:42:41.023528 1 base_controller.go:114] Shutting down worker of OAuthServerServiceEndpointAccessibleController controller ...
2025-08-13T20:42:41.023536920+00:00 stderr F I0813 20:42:41.023533 1 base_controller.go:104] All OAuthServerServiceEndpointAccessibleController workers have been terminated
2025-08-13T20:42:41.023546910+00:00 stderr F I0813 20:42:41.023539 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ...
2025-08-13T20:42:41.023556760+00:00 stderr F I0813 20:42:41.023544 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated
2025-08-13T20:42:41.023556760+00:00 stderr F I0813 20:42:41.023551 1 base_controller.go:114] Shutting down worker of StatusSyncer_authentication controller ...
2025-08-13T20:42:41.023566880+00:00 stderr F I0813 20:42:41.023557 1 base_controller.go:104] All StatusSyncer_authentication workers have been terminated
2025-08-13T20:42:41.023576481+00:00 stderr F I0813 20:42:41.023564 1 base_controller.go:114] Shutting down worker of APIServerStaticResources controller ...
2025-08-13T20:42:41.023576481+00:00 stderr F I0813 20:42:41.023570 1 base_controller.go:104] All APIServerStaticResources workers have been terminated
2025-08-13T20:42:41.023586671+00:00 stderr F I0813 20:42:41.023576 1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ...
2025-08-13T20:42:41.023586671+00:00 stderr F I0813 20:42:41.023582 1 base_controller.go:104] All WebhookAuthenticatorCertApprover_OpenShiftAuthenticator workers have been terminated
2025-08-13T20:42:41.023596761+00:00 stderr F I0813 20:42:41.023588 1 base_controller.go:114] Shutting down worker of IngressStateController controller ...
2025-08-13T20:42:41.023596761+00:00 stderr F I0813 20:42:41.023593 1 base_controller.go:104] All IngressStateController workers have been terminated
2025-08-13T20:42:41.023606772+00:00 stderr F I0813 20:42:41.023598 1 base_controller.go:104] All ProxyConfigController workers have been terminated
2025-08-13T20:42:41.023606772+00:00 stderr F I0813 20:42:41.023603 1 base_controller.go:104] All SecretRevisionPruneController workers have been terminated
2025-08-13T20:42:41.028196864+00:00 stderr F I0813 20:42:41.028132 1 base_controller.go:104] All APIServiceController_openshift-apiserver workers have been terminated
2025-08-13T20:42:41.028196864+00:00 stderr F I0813 20:42:41.028167 1 base_controller.go:104] All auditPolicyController workers have been terminated
2025-08-13T20:42:41.028868273+00:00 stderr F I0813 20:42:41.028687 1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...
2025-08-13T20:42:41.029181742+00:00 stderr F I0813 20:42:41.029112 1 base_controller.go:104] All ManagementStateController workers have been terminated
2025-08-13T20:42:41.029181742+00:00 stderr F I0813 20:42:41.029153 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
2025-08-13T20:42:41.029181742+00:00 stderr F I0813 20:42:41.029161 1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.029690 1 genericapiserver.go:628] [graceful-termination] in-flight non long-running request(s) have drained
2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.029707 1 base_controller.go:104] All PayloadConfig workers have been terminated
2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.029748 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain"
2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.029898 1 genericapiserver.go:669] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"
2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.030945 1 secure_serving.go:258] Stopped listening on [::]:8443
2025-08-13T20:42:41.031303004+00:00 stderr F I0813 20:42:41.030972 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
2025-08-13T20:42:41.031627873+00:00 stderr F I0813 20:42:41.031516 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting
2025-08-13T20:42:41.032096826+00:00 stderr F I0813 20:42:41.032020 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2025-08-13T20:42:41.032726745+00:00 stderr F I0813 20:42:41.032663 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
2025-08-13T20:42:41.033029423+00:00 stderr F I0813 20:42:41.032918 1 builder.go:329] server exited
2025-08-13T20:42:41.033148777+00:00 stderr F I0813 20:42:41.033087 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:42:41.033428035+00:00 stderr F I0813 20:42:41.033141 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
2025-08-13T20:42:41.033592650+00:00 stderr F I0813 20:42:41.033525 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:42:41.033738504+00:00 stderr F I0813 20:42:41.033702 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:42:41.034633180+00:00 stderr F E0813 20:42:41.034423 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:41.035723531+00:00 stderr F W0813 20:42:41.035649 1 leaderelection.go:84] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/0.log

2025-08-13T19:59:06.260034229+00:00 stdout F Copying system trust bundle
2025-08-13T19:59:22.590102599+00:00 stderr F W0813 19:59:22.588695 1 cmd.go:154] Unable to read initial content of "/tmp/terminate": open /tmp/terminate: no such file or directory
2025-08-13T19:59:22.640260319+00:00 stderr F I0813 19:59:22.638079 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:22.834208448+00:00 stderr F I0813 19:59:22.833239 1 cmd.go:240] Using service-serving-cert provided certificates
2025-08-13T19:59:22.848578007+00:00 stderr F I0813 19:59:22.834682 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:22.881456264+00:00 stderr F I0813 19:59:22.881027 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:25.607941583+00:00 stderr F I0813 19:59:25.589342 1 builder.go:298] cluster-authentication-operator version v4.16.0-202406131906.p0.gb415439.assembly.stream.el9-0-g11ca161-
2025-08-13T19:59:25.893253336+00:00 stderr F I0813 19:59:25.891501 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T19:59:33.373549473+00:00 stderr F I0813 19:59:33.372947 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T19:59:33.668078119+00:00 stderr F I0813 19:59:33.631417 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400
2025-08-13T19:59:33.672148474+00:00 stderr F I0813 19:59:33.672098 1 maxinflight.go:145] "Initialized mutatingChan" len=200
2025-08-13T19:59:33.691128516+00:00 stderr F I0813 19:59:33.690001 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400
2025-08-13T19:59:33.691409884+00:00 stderr F I0813 19:59:33.691263 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200
2025-08-13T19:59:34.000225697+00:00 stderr F I0813 19:59:33.999685 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
2025-08-13T19:59:34.000225697+00:00 stderr F I0813 19:59:33.999969 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:59:34.000225697+00:00 stderr F W0813 19:59:33.999985 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:34.000225697+00:00 stderr F W0813 19:59:33.999991 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:34.141756031+00:00 stderr F I0813 19:59:34.141160 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:59:34.141756031+00:00 stderr F I0813 19:59:34.141651 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:34.172037764+00:00 stderr F I0813 19:59:34.171578 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:34.172037764+00:00 stderr F I0813 19:59:34.171693 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:34.175457882+00:00 stderr F I0813 19:59:34.173635 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.254901 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:34.254658649 +0000 UTC))"
2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.255474 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 19:59:34.255444882 +0000 UTC))"
2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.255505 1 secure_serving.go:213] Serving securely on [::]:8443
2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.255662 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated
2025-08-13T19:59:34.274052702+00:00 stderr F I0813 19:59:34.255684 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T19:59:34.362964547+00:00 stderr F I0813 19:59:34.362912 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
2025-08-13T19:59:34.467654201+00:00 stderr F I0813 19:59:34.411506 1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T19:59:34.467654201+00:00 stderr F I0813 19:59:34.412464 1 leaderelection.go:250] attempting to acquire leader lease openshift-authentication-operator/cluster-authentication-operator-lock...
2025-08-13T19:59:34.467654201+00:00 stderr F I0813 19:59:34.230255 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T19:59:34.593040415+00:00 stderr F I0813 19:59:34.468477 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:34.631076069+00:00 stderr F I0813 19:59:34.631009 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:34.630966396 +0000 UTC))"
2025-08-13T19:59:34.631178092+00:00 stderr F I0813 19:59:34.631155 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:34.631138831 +0000 UTC))"
2025-08-13T19:59:34.631231254+00:00 stderr F I0813 19:59:34.631216 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:34.631199473 +0000 UTC))"
2025-08-13T19:59:34.682004001+00:00 stderr F I0813 19:59:34.681926 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:34.631250714 +0000 UTC))"
2025-08-13T19:59:34.685440619+00:00 stderr F I0813 19:59:34.685412 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:34.685362857 +0000 UTC))"
2025-08-13T19:59:34.685525151+00:00 stderr F I0813 19:59:34.685506 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:34.68548534 +0000 UTC))"
2025-08-13T19:59:34.685711317+00:00 stderr F I0813 19:59:34.685608 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:34.685587803 +0000 UTC))"
2025-08-13T19:59:34.685770398+00:00 stderr F I0813 19:59:34.685751 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:34.685735487 +0000 UTC))"
2025-08-13T19:59:34.693335204+00:00 stderr F I0813 19:59:34.693308 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:34.693280732 +0000 UTC))"
2025-08-13T19:59:34.709294179+00:00 stderr F I0813 19:59:34.709228 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 19:59:34.709179716 +0000 UTC))"
2025-08-13T19:59:34.975225909+00:00 stderr F I0813 19:59:34.975151 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
2025-08-13T19:59:35.045131102+00:00 stderr F I0813 19:59:35.045053 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:35.045484022+00:00 stderr F E0813 19:59:35.045451 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.045567465+00:00 stderr F E0813 19:59:35.045546 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.046629675+00:00 stderr F I0813 19:59:35.046602 1 leaderelection.go:260] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock
2025-08-13T19:59:35.076158477+00:00 stderr F E0813 19:59:35.069426 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.076494866+00:00 stderr F E0813 19:59:35.076466 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.076594539+00:00 stderr F I0813 19:59:35.069664 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"09dcd617-77d7-4739-bfa0-d91f5ee3f9c6", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28178", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' authentication-operator-7cc7ff75d5-g9qv8_6afa03f3-b0bc-43af-aa46-6f8b0362eaa6 became leader
2025-08-13T19:59:35.125263296+00:00 stderr F E0813 19:59:35.110643 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.130622869+00:00 stderr F E0813 19:59:35.130220 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.172244315+00:00 stderr F E0813 19:59:35.172183 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.172356189+00:00 stderr F E0813 19:59:35.172339 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.214627814+00:00 stderr F E0813 19:59:35.214570 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.214826349+00:00 stderr F E0813 19:59:35.214763 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.302565860+00:00 stderr F E0813 19:59:35.302503 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.302658613+00:00 stderr F E0813 19:59:35.302645 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.475498699+00:00 stderr F E0813 19:59:35.475207 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:35.475601372+00:00 stderr F E0813 19:59:35.475584 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.153495055+00:00 stderr F I0813 19:59:36.144494 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:59:36.153495055+00:00 stderr F E0813 19:59:36.147098 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.153495055+00:00 stderr F E0813 19:59:36.147158 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.156144171+00:00 stderr F I0813 19:59:36.155212 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
2025-08-13T19:59:36.270210032+00:00 stderr F I0813 19:59:36.270060 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:36.796701340+00:00 stderr F E0813 19:59:36.795870 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.796701340+00:00 stderr F E0813 19:59:36.796317 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:38.147242088+00:00 stderr F E0813 19:59:38.147176 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:38.156567364+00:00 stderr F E0813 19:59:38.156543 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:40.736104134+00:00 stderr F E0813 19:59:40.720145 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:40.736104134+00:00 stderr F E0813 19:59:40.725657 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:42.370973406+00:00 stderr F I0813 19:59:42.328014 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:59:42.370973406+00:00 stderr F I0813 19:59:42.357328 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:42.624090121+00:00 stderr F I0813 19:59:42.581157 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:42.624090121+00:00 stderr F W0813 19:59:42.581689 1 reflector.go:539] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)
2025-08-13T19:59:42.624090121+00:00 stderr F E0813 19:59:42.581720 1 reflector.go:147] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125: Failed to watch *v1.OAuthClient: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)
2025-08-13T19:59:42.636923546+00:00 stderr F W0813 19:59:42.636451 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:42.636923546+00:00 stderr F E0813 19:59:42.636577 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)
2025-08-13T19:59:42.636923546+00:00 stderr F I0813 19:59:42.636629 1 base_controller.go:67] Waiting for caches to sync for OAuthServerRouteEndpointAccessibleController
2025-08-13T19:59:43.927570097+00:00 stderr F I0813 19:59:43.924584 1 request.go:697] Waited for 1.626886194s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/nodes?limit=500&resourceVersion=0
2025-08-13T19:59:43.983752688+00:00 stderr F I0813 19:59:43.977926 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:44.029193893+00:00 stderr F I0813 19:59:42.347523 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261391 1 base_controller.go:67] Waiting for caches to sync for OAuthServerWorkloadController
2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261449 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController
2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261463 1 base_controller.go:67] Waiting for caches to sync for MetadataController
2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261476 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController
2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261487 1 base_controller.go:67] Waiting for caches to sync for PayloadConfig
2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261499 1 base_controller.go:67] Waiting for caches to sync for RouterCertsDomainValidationController
2025-08-13T19:59:44.261972449+00:00 stderr F I0813 19:59:44.261508 1 base_controller.go:67] Waiting for caches to sync for ServiceCAController
2025-08-13T19:59:44.285028176+00:00 stderr F I0813 19:59:44.284967 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:44.313641902+00:00 stderr F I0813 19:59:44.308901 1 base_controller.go:67] Waiting for caches to sync for WellKnownReadyController
2025-08-13T19:59:44.344395498+00:00 stderr F I0813 19:59:44.330525 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:44.357344547+00:00 stderr F I0813 19:59:44.357293 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:44.379923801+00:00 stderr F I0813 19:59:44.364104 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:44.380647502+00:00 stderr F I0813 19:59:44.380612 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:44.381414253+00:00 stderr F I0813 19:59:44.381391 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:44.382467663+00:00 stderr F I0813 19:59:44.382438 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:44.419523020+00:00 stderr F I0813 19:59:44.419457 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:44.420504798+00:00 stderr F I0813 19:59:44.420473 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:44.510409980+00:00 stderr F I0813 19:59:44.510343 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125
2025-08-13T19:59:44.540482718+00:00 stderr F I0813 19:59:44.538576 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T19:59:44.561860267+00:00 stderr F I0813 19:59:44.548935 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T19:59:44.561860267+00:00 stderr F I0813 19:59:44.550243 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T19:59:44.695884888+00:00 stderr F I0813 19:59:44.670212 1 base_controller.go:73] Caches are synced for ManagementStateController
2025-08-13T19:59:44.695884888+00:00 stderr F I0813 19:59:44.670327 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ...
2025-08-13T19:59:44.704042670+00:00 stderr F I0813 19:59:44.697017 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T19:59:44.721011894+00:00 stderr F I0813 19:59:44.720828 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T19:59:44.783754522+00:00 stderr F I0813 19:59:44.725898 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-08-13T19:59:44.783754522+00:00 stderr F I0813 19:59:44.755682 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159
2025-08-13T19:59:44.783754522+00:00 stderr F I0813 19:59:44.770326 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T19:59:44.783754522+00:00 stderr F I0813 19:59:44.770412 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T19:59:44.885173573+00:00 stderr F I0813 19:59:44.881585 1 trace.go:236] Trace[193441872]: "DeltaFIFO Pop Process" ID:default,Depth:63,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.293) (total time: 510ms): 2025-08-13T19:59:44.885173573+00:00 stderr F Trace[193441872]: [510.729819ms] [510.729819ms] END 2025-08-13T19:59:45.549647464+00:00 stderr F I0813 19:59:45.548496 1 base_controller.go:67] Waiting for caches to sync for OpenshiftAuthenticationStaticResources 2025-08-13T19:59:45.549647464+00:00 stderr F I0813 19:59:45.549220 1 request.go:697] Waited for 3.178596966s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-etcd/configmaps?limit=500&resourceVersion=0 2025-08-13T19:59:45.549647464+00:00 stderr F I0813 19:59:45.549388 1 trace.go:236] Trace[1372478155]: "DeltaFIFO Pop Process" ID:csr-5phj9,Depth:13,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.756) (total time: 793ms): 2025-08-13T19:59:45.549647464+00:00 stderr F Trace[1372478155]: [793.1724ms] [793.1724ms] END 2025-08-13T19:59:45.554639617+00:00 stderr F I0813 19:59:45.554418 1 trace.go:236] Trace[1425268458]: "DeltaFIFO Pop Process" ID:kube-node-lease,Depth:61,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.935) (total time: 614ms): 2025-08-13T19:59:45.554639617+00:00 stderr F Trace[1425268458]: [614.132736ms] [614.132736ms] END 2025-08-13T19:59:45.798581691+00:00 stderr F I0813 19:59:45.793346 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:45.807958558+00:00 stderr F I0813 19:59:45.803328 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:45.814158995+00:00 stderr F I0813 19:59:45.814111 1 reflector.go:351] Caches populated for *v1.APIService from 
k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141 2025-08-13T19:59:45.832557879+00:00 stderr F I0813 19:59:45.832188 1 trace.go:236] Trace[3213281]: "DeltaFIFO Pop Process" ID:openshift-apiserver,Depth:57,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:45.554) (total time: 277ms): 2025-08-13T19:59:45.832557879+00:00 stderr F Trace[3213281]: [277.296235ms] [277.296235ms] END 2025-08-13T19:59:45.834730671+00:00 stderr F I0813 19:59:45.834510 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:45.834730671+00:00 stderr F I0813 19:59:45.834641 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointAccessibleController 2025-08-13T19:59:45.834730671+00:00 stderr F I0813 19:59:45.834710 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointsEndpointAccessibleController 2025-08-13T19:59:45.834761962+00:00 stderr F I0813 19:59:45.834730 1 base_controller.go:67] Waiting for caches to sync for IngressNodesAvailableController 2025-08-13T19:59:45.834761962+00:00 stderr F I0813 19:59:45.834753 1 base_controller.go:67] Waiting for caches to sync for ProxyConfigController 2025-08-13T19:59:45.846495966+00:00 stderr F I0813 19:59:45.834770 1 base_controller.go:67] Waiting for caches to sync for CustomRouteController 2025-08-13T19:59:45.846495966+00:00 stderr F I0813 19:59:45.834881 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T19:59:45.945129288+00:00 stderr F E0813 19:59:45.935665 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.100202093+00:00 stderr F E0813 19:59:47.090732 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:47.202653813+00:00 stderr F I0813 19:59:47.200413 1 trace.go:236] Trace[1441772173]: "DeltaFIFO Pop Process" ID:openshift-etcd,Depth:37,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:45.835) (total time: 1364ms): 2025-08-13T19:59:47.202653813+00:00 stderr F Trace[1441772173]: [1.364812394s] [1.364812394s] END 2025-08-13T19:59:47.270733784+00:00 stderr F I0813 19:59:47.270612 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:47.317994751+00:00 stderr F I0813 19:59:47.307542 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:47.899932630+00:00 stderr F I0813 19:59:47.627693 1 base_controller.go:67] Waiting for caches to sync for TrustDistributionController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683396 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAuthenticatorCertRequester 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683427 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.103115 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.103128 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 
2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683454 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.103149 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.103165 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683522 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.683546 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.105971 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.105982 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ... 
2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.719088 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.719577 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.738197 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.707766 1 request.go:697] Waited for 5.044216967s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces?limit=500&resourceVersion=0 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.762473 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.762492 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.762535 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.107562 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.107570 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 
2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:47.809319 1 base_controller.go:67] Waiting for caches to sync for IngressStateController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.107630 1 base_controller.go:73] Caches are synced for IngressStateController 2025-08-13T19:59:48.109050971+00:00 stderr F I0813 19:59:48.107637 1 base_controller.go:110] Starting #1 worker of IngressStateController controller ... 2025-08-13T19:59:48.110215605+00:00 stderr F I0813 19:59:48.109487 1 base_controller.go:67] Waiting for caches to sync for OAuthAPIServerControllerWorkloadController 2025-08-13T19:59:48.110238695+00:00 stderr F I0813 19:59:47.809389 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:48.110291597+00:00 stderr F I0813 19:59:48.110276 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver 2025-08-13T19:59:48.110433441+00:00 stderr F I0813 19:59:48.110414 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T19:59:48.110517543+00:00 stderr F I0813 19:59:48.110459 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T19:59:48.110517543+00:00 stderr F I0813 19:59:48.110468 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-oauth-apiserver 2025-08-13T19:59:48.110517543+00:00 stderr F I0813 19:59:48.109504 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T19:59:48.110533554+00:00 stderr F I0813 19:59:48.099900 1 base_controller.go:73] Caches are synced for 
OAuthServerServiceEndpointAccessibleController 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110532 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointAccessibleController controller ... 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.101422 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointsEndpointAccessibleController 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110559 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ... 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.101513 1 base_controller.go:73] Caches are synced for IngressNodesAvailableController 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110578 1 base_controller.go:110] Starting #1 worker of IngressNodesAvailableController controller ... 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.102613 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_authentication 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110597 1 base_controller.go:73] Caches are synced for StatusSyncer_authentication 2025-08-13T19:59:48.110621666+00:00 stderr F I0813 19:59:48.110602 1 base_controller.go:110] Starting #1 worker of StatusSyncer_authentication controller ... 
2025-08-13T19:59:48.110675808+00:00 stderr F I0813 19:59:48.110655 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T19:59:48.110725059+00:00 stderr F I0813 19:59:48.110712 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T19:59:48.110767590+00:00 stderr F I0813 19:59:48.110755 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T19:59:48.111398048+00:00 stderr F I0813 19:59:48.110480 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T19:59:48.111567353+00:00 stderr F E0813 19:59:48.111464 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T19:59:48.112250462+00:00 stderr F I0813 19:59:48.112225 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:48.112636423+00:00 stderr F I0813 19:59:48.112552 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection 
refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::IngressStateEndpoints_NonReadyEndpoints::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable::OAuthServerServiceEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:48.113512278+00:00 stderr F I0813 19:59:48.113444 1 base_controller.go:67] 
Waiting for caches to sync for APIServerStaticResources 2025-08-13T19:59:48.113564680+00:00 stderr F I0813 19:59:48.113544 1 base_controller.go:73] Caches are synced for APIServerStaticResources 2025-08-13T19:59:48.113602291+00:00 stderr F I0813 19:59:48.113589 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ... 2025-08-13T19:59:48.221219729+00:00 stderr F E0813 19:59:48.219539 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T19:59:48.945052413+00:00 stderr F E0813 19:59:48.944337 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T19:59:48.969533311+00:00 stderr F E0813 19:59:48.969473 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T19:59:49.011041914+00:00 stderr F E0813 19:59:49.010539 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T19:59:49.016328685+00:00 stderr F I0813 19:59:49.016267 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132 2025-08-13T19:59:49.052701932+00:00 stderr F I0813 19:59:49.052634 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:51.339436357+00:00 stderr F I0813 19:59:51.338201 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:52.915093852+00:00 stderr F I0813 19:59:52.913025 1 trace.go:236] Trace[1463818210]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) 
(total time: 10542ms): 2025-08-13T19:59:52.915093852+00:00 stderr F Trace[1463818210]: ---"Objects listed" error: 10542ms (19:59:52.912) 2025-08-13T19:59:52.915093852+00:00 stderr F Trace[1463818210]: [10.542314785s] [10.542314785s] END 2025-08-13T19:59:52.915093852+00:00 stderr F I0813 19:59:52.913746 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.915093852+00:00 stderr F I0813 19:59:52.914537 1 trace.go:236] Trace[1770587185]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10721ms): 2025-08-13T19:59:52.915093852+00:00 stderr F Trace[1770587185]: ---"Objects listed" error: 10721ms (19:59:52.914) 2025-08-13T19:59:52.915093852+00:00 stderr F Trace[1770587185]: [10.721248335s] [10.721248335s] END 2025-08-13T19:59:52.915093852+00:00 stderr F I0813 19:59:52.914546 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.915093852+00:00 stderr F W0813 19:59:52.914686 1 reflector.go:539] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:52.915093852+00:00 stderr F E0813 19:59:52.914706 1 reflector.go:147] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125: Failed to watch *v1.OAuthClient: failed to list *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io) 2025-08-13T19:59:52.990222764+00:00 stderr F I0813 19:59:52.990163 1 trace.go:236] Trace[770957654]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10796ms): 2025-08-13T19:59:52.990222764+00:00 stderr F Trace[770957654]: ---"Objects listed" error: 10796ms (19:59:52.990) 2025-08-13T19:59:52.990222764+00:00 stderr 
F Trace[770957654]: [10.796764708s] [10.796764708s] END 2025-08-13T19:59:52.990287066+00:00 stderr F I0813 19:59:52.990272 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.990656876+00:00 stderr F I0813 19:59:52.990634 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:52.991002756+00:00 stderr F I0813 19:59:52.990974 1 trace.go:236] Trace[300001418]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10797ms): 2025-08-13T19:59:52.991002756+00:00 stderr F Trace[300001418]: ---"Objects listed" error: 10797ms (19:59:52.990) 2025-08-13T19:59:52.991002756+00:00 stderr F Trace[300001418]: [10.79755446s] [10.79755446s] END 2025-08-13T19:59:52.991061928+00:00 stderr F I0813 19:59:52.991044 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.003764270+00:00 stderr F I0813 19:59:52.993586 1 trace.go:236] Trace[979236584]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.663) (total time: 10329ms): 2025-08-13T19:59:53.003764270+00:00 stderr F Trace[979236584]: ---"Objects listed" error: 10329ms (19:59:52.993) 2025-08-13T19:59:53.003764270+00:00 stderr F Trace[979236584]: [10.329986423s] [10.329986423s] END 2025-08-13T19:59:53.003764270+00:00 stderr F I0813 19:59:52.993626 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.004158351+00:00 stderr F I0813 19:59:53.004124 1 trace.go:236] Trace[86867999]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 10633ms): 2025-08-13T19:59:53.004158351+00:00 stderr F Trace[86867999]: ---"Objects listed" error: 10633ms (19:59:53.004) 2025-08-13T19:59:53.004158351+00:00 stderr F Trace[86867999]: 
[10.633566795s] [10.633566795s] END 2025-08-13T19:59:53.004207972+00:00 stderr F I0813 19:59:53.004192 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.004401748+00:00 stderr F I0813 19:59:53.004378 1 trace.go:236] Trace[328557283]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10811ms): 2025-08-13T19:59:53.004401748+00:00 stderr F Trace[328557283]: ---"Objects listed" error: 10811ms (19:59:53.004) 2025-08-13T19:59:53.004401748+00:00 stderr F Trace[328557283]: [10.811036734s] [10.811036734s] END 2025-08-13T19:59:53.004457219+00:00 stderr F I0813 19:59:53.004443 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.005326104+00:00 stderr F W0813 19:59:53.005292 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:53.005426517+00:00 stderr F E0813 19:59:53.005405 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:53.006176029+00:00 stderr F I0813 19:59:53.006146 1 trace.go:236] Trace[970424958]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 10812ms): 2025-08-13T19:59:53.006176029+00:00 stderr F Trace[970424958]: ---"Objects listed" error: 10812ms (19:59:53.006) 2025-08-13T19:59:53.006176029+00:00 stderr F Trace[970424958]: [10.812689222s] [10.812689222s] END 2025-08-13T19:59:53.006235890+00:00 stderr F I0813 19:59:53.006217 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 
2025-08-13T19:59:53.007204038+00:00 stderr F I0813 19:59:53.007172 1 trace.go:236] Trace[1873105387]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 10636ms): 2025-08-13T19:59:53.007204038+00:00 stderr F Trace[1873105387]: ---"Objects listed" error: 10636ms (19:59:53.007) 2025-08-13T19:59:53.007204038+00:00 stderr F Trace[1873105387]: [10.636448808s] [10.636448808s] END 2025-08-13T19:59:53.007264810+00:00 stderr F I0813 19:59:53.007246 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.007769564+00:00 stderr F I0813 19:59:53.007746 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-oauth-apiserver 2025-08-13T19:59:53.028289059+00:00 stderr F I0813 19:59:53.028255 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ... 2025-08-13T19:59:53.109228056+00:00 stderr F I0813 19:59:53.109117 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 2025-08-13T19:59:53.109352030+00:00 stderr F I0813 19:59:53.109330 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ... 
2025-08-13T19:59:53.111984805+00:00 stderr F I0813 19:59:53.111953 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:53.111911542 +0000 UTC))" 2025-08-13T19:59:53.308283359+00:00 stderr F I0813 19:59:53.269501 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:53.269156764 +0000 UTC))" 2025-08-13T19:59:53.362028301+00:00 stderr F I0813 19:59:53.154295 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling 2025-08-13T19:59:53.362443493+00:00 stderr F I0813 19:59:53.154593 1 trace.go:236] Trace[1108983851]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.297) (total time: 10856ms): 2025-08-13T19:59:53.362443493+00:00 stderr F Trace[1108983851]: ---"Objects listed" error: 10856ms (19:59:53.154) 2025-08-13T19:59:53.362443493+00:00 stderr F Trace[1108983851]: [10.856865691s] [10.856865691s] END 2025-08-13T19:59:53.362521245+00:00 stderr F I0813 19:59:53.156190 1 trace.go:236] Trace[133450871]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.663) (total time: 10492ms): 2025-08-13T19:59:53.362521245+00:00 stderr F Trace[133450871]: ---"Objects listed" error: 10492ms (19:59:53.156) 2025-08-13T19:59:53.362521245+00:00 stderr F Trace[133450871]: [10.492752323s] [10.492752323s] END 2025-08-13T19:59:53.439980033+00:00 stderr F I0813 19:59:53.258277 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:53.444960535+00:00 stderr F I0813 19:59:53.270449 1 trace.go:236] Trace[1116695782]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.193) (total time: 11076ms): 2025-08-13T19:59:53.444960535+00:00 stderr F Trace[1116695782]: ---"Objects listed" error: 11076ms (19:59:53.270) 2025-08-13T19:59:53.444960535+00:00 stderr F Trace[1116695782]: [11.076712457s] [11.076712457s] END 2025-08-13T19:59:53.445165581+00:00 stderr F I0813 19:59:53.310605 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get 
\"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready") 2025-08-13T19:59:53.445221433+00:00 stderr F I0813 19:59:53.385293 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:53.385214752 +0000 UTC))" 2025-08-13T19:59:53.445250794+00:00 stderr F I0813 19:59:53.385330 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.445344716+00:00 stderr F I0813 19:59:53.439913 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get 
\"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::IngressStateEndpoints_NonReadyEndpoints::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.445501421+00:00 stderr F I0813 19:59:53.441443 1 reflector.go:351] Caches populated for *v1.Namespace from 
k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.445530422+00:00 stderr F I0813 19:59:53.443214 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T19:59:53.445640545+00:00 stderr F I0813 19:59:53.443260 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.443279 1 base_controller.go:73] Caches are synced for OpenShiftAuthenticatorCertRequester 2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.443295 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorController 2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.443330 1 base_controller.go:73] Caches are synced for OAuthAPIServerControllerWorkloadController 2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.479112 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:53.503855384+00:00 stderr F I0813 19:59:53.479188 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:53.47914662 +0000 UTC))" 2025-08-13T19:59:53.504193694+00:00 stderr F I0813 19:59:53.504129 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T19:59:53.505556923+00:00 stderr F I0813 19:59:53.504744 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorController controller ... 2025-08-13T19:59:53.505706487+00:00 stderr F I0813 19:59:53.504752 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ... 
2025-08-13T19:59:53.506406407+00:00 stderr F I0813 19:59:53.504758 1 base_controller.go:110] Starting #1 worker of OpenShiftAuthenticatorCertRequester controller ... 2025-08-13T19:59:53.548425305+00:00 stderr F I0813 19:59:53.504769 1 base_controller.go:110] Starting #1 worker of OAuthAPIServerControllerWorkloadController controller ... 2025-08-13T19:59:53.548966790+00:00 stderr F I0813 19:59:53.528316 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.493298023 +0000 UTC))" 2025-08-13T19:59:53.549686591+00:00 stderr F I0813 19:59:53.549596 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.549489755 +0000 UTC))" 2025-08-13T19:59:53.549759463+00:00 stderr F I0813 19:59:53.528426 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T19:59:53.550024050+00:00 stderr F I0813 19:59:53.528443 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-08-13T19:59:53.550170024+00:00 stderr F I0813 19:59:53.528457 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-08-13T19:59:53.550360770+00:00 stderr F I0813 19:59:53.528468 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T19:59:53.550653758+00:00 stderr F I0813 19:59:53.528481 1 
base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T19:59:53.550965407+00:00 stderr F I0813 19:59:53.549918 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.549892966 +0000 UTC))" 2025-08-13T19:59:53.551064420+00:00 stderr F I0813 19:59:53.550109 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 2025-08-13T19:59:53.551256955+00:00 stderr F I0813 19:59:53.551240 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-08-13T19:59:53.588022383+00:00 stderr F I0813 19:59:53.550308 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 2025-08-13T19:59:53.588918889+00:00 stderr F I0813 19:59:53.550351 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-08-13T19:59:53.589305100+00:00 stderr F I0813 19:59:53.550935 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 
2025-08-13T19:59:53.589770713+00:00 stderr F I0813 19:59:53.551449 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.55140693 +0000 UTC))" 2025-08-13T19:59:53.611528333+00:00 stderr F I0813 19:59:53.611498 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:53.61140994 +0000 UTC))" 2025-08-13T19:59:53.617472703+00:00 stderr F I0813 19:59:53.617445 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 19:59:53.617419221 +0000 UTC))" 2025-08-13T19:59:53.631119492+00:00 stderr F I0813 19:59:53.631088 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 19:59:53.63106044 +0000 UTC))" 
2025-08-13T19:59:54.191673681+00:00 stderr F I0813 19:59:53.836206 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:54.191763743+00:00 stderr F I0813 19:59:53.836250 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NoValidCertificateFound' No valid client certificate for OpenShiftAuthenticatorCertRequester is found: part of the certificate is expired: sub: CN=system:serviceaccount:openshift-oauth-apiserver:openshift-authenticator, notAfter: 2025-06-27 13:10:04 +0000 UTC 
2025-08-13T19:59:54.191935458+00:00 stderr F I0813 19:59:53.836465 1 trace.go:236] Trace[1025006076]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 11466ms): 2025-08-13T19:59:54.191935458+00:00 stderr F Trace[1025006076]: ---"Objects listed" error: 11466ms (19:59:53.836) 2025-08-13T19:59:54.191935458+00:00 stderr F Trace[1025006076]: [11.466047985s] [11.466047985s] END 2025-08-13T19:59:54.192130534+00:00 stderr F I0813 19:59:54.192101 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.371965880+00:00 stderr F I0813 19:59:54.371650 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not 
ready","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.504917340+00:00 stderr F I0813 19:59:54.448435 1 trace.go:236] Trace[570889891]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.370) (total time: 12077ms): 2025-08-13T19:59:54.504917340+00:00 stderr F Trace[570889891]: ---"Objects listed" error: 12077ms (19:59:54.447) 2025-08-13T19:59:54.504917340+00:00 stderr F Trace[570889891]: [12.077815774s] [12.077815774s] END 2025-08-13T19:59:54.504917340+00:00 stderr F I0813 19:59:54.448476 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.640223627+00:00 stderr F I0813 19:59:54.638295 1 
base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:54.640223627+00:00 stderr F I0813 19:59:54.638380 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T19:59:54.809634356+00:00 stderr F I0813 19:59:54.806600 1 trace.go:236] Trace[478681942]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.475) (total time: 12331ms): 2025-08-13T19:59:54.809634356+00:00 stderr F Trace[478681942]: ---"Objects listed" error: 12331ms (19:59:54.806) 2025-08-13T19:59:54.809634356+00:00 stderr F Trace[478681942]: [12.331477334s] [12.331477334s] END 2025-08-13T19:59:54.809634356+00:00 stderr F I0813 19:59:54.807076 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.809634356+00:00 stderr F I0813 19:59:54.807439 1 trace.go:236] Trace[1033546446]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:59:42.191) (total time: 12615ms): 2025-08-13T19:59:54.809634356+00:00 stderr F Trace[1033546446]: ---"Objects listed" error: 12615ms (19:59:54.807) 2025-08-13T19:59:54.809634356+00:00 stderr F Trace[1033546446]: [12.61581104s] [12.61581104s] END 2025-08-13T19:59:54.809634356+00:00 stderr F I0813 19:59:54.807446 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:54.824360275+00:00 stderr F I0813 19:59:54.821143 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the 
server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:54.824360275+00:00 stderr F I0813 19:59:54.822735 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CSRApproval' The CSR "system:openshift:openshift-authenticator-dk965" has 
been approved 2025-08-13T19:59:54.824360275+00:00 stderr F I0813 19:59:54.824230 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://10.217.4.40:443/healthz\": dial tcp 10.217.4.40:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post 
oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T19:59:54.824485799+00:00 stderr F I0813 19:59:54.824461 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T19:59:54.824525890+00:00 stderr F I0813 19:59:54.824512 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-08-13T19:59:54.824587032+00:00 stderr F I0813 19:59:54.824573 1 base_controller.go:73] Caches are synced for TrustDistributionController 2025-08-13T19:59:54.824618303+00:00 stderr F I0813 19:59:54.824606 1 base_controller.go:110] Starting #1 worker of TrustDistributionController controller ... 2025-08-13T19:59:54.826882207+00:00 stderr F I0813 19:59:54.826753 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CSRCreated' A csr "system:openshift:openshift-authenticator-dk965" is created for OpenShiftAuthenticatorCertRequester 2025-08-13T19:59:54.835257986+00:00 stderr F I0813 19:59:54.835113 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T19:59:54.835257986+00:00 stderr F I0813 19:59:54.835149 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 
2025-08-13T19:59:54.849867433+00:00 stderr F I0813 19:59:54.849738 1 base_controller.go:73] Caches are synced for OpenshiftAuthenticationStaticResources 2025-08-13T19:59:54.849977596+00:00 stderr F I0813 19:59:54.849960 1 base_controller.go:110] Starting #1 worker of OpenshiftAuthenticationStaticResources controller ... 2025-08-13T19:59:54.862338968+00:00 stderr F I0813 19:59:54.862251 1 base_controller.go:73] Caches are synced for ServiceCAController 2025-08-13T19:59:54.862479722+00:00 stderr F I0813 19:59:54.862465 1 base_controller.go:110] Starting #1 worker of ServiceCAController controller ... 2025-08-13T19:59:54.862529714+00:00 stderr F I0813 19:59:54.862439 1 base_controller.go:73] Caches are synced for RouterCertsDomainValidationController 2025-08-13T19:59:54.862562194+00:00 stderr F I0813 19:59:54.862550 1 base_controller.go:110] Starting #1 worker of RouterCertsDomainValidationController controller ... 2025-08-13T19:59:55.068942147+00:00 stderr F E0813 19:59:55.009727 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:55.068942147+00:00 stderr F I0813 19:59:55.064916 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get 
\"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:55.598418321+00:00 stderr F E0813 19:59:55.594431 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:55.618398260+00:00 stderr F I0813 19:59:55.609513 1 status_controller.go:218] clusteroperator/authentication diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::APIServices_PreconditionNotReady::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:55.635942630+00:00 stderr F I0813 19:59:55.635868 1 reflector.go:351] Caches populated for *v1.OAuthClient from github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125 2025-08-13T19:59:56.577146750+00:00 stderr F W0813 19:59:56.576170 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:56.577146750+00:00 stderr F E0813 19:59:56.576603 1 reflector.go:147] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T19:59:56.803007408+00:00 stderr F I0813 19:59:56.784530 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ClientCertificateCreated' A new client certificate for OpenShiftAuthenticatorCertRequester is available 2025-08-13T19:59:57.869365875+00:00 stderr F I0813 19:59:57.817721 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": 
the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_UnavailablePod::CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"APIServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:57.869365875+00:00 stderr F I0813 19:59:57.836769 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 
requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:58.532150488+00:00 stderr F I0813 19:59:58.491747 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to 
handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:58.555246356+00:00 stderr F I0813 19:59:58.554051 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get 
\"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" 2025-08-13T19:59:59.026001885+00:00 stderr F E0813 19:59:59.023617 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:59.026001885+00:00 stderr F I0813 19:59:59.025256 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601-\u003e10.217.4.10:53: read: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:59.026001885+00:00 stderr F I0813 19:59:59.025744 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/webhook-authentication-integrated-oauth -n openshift-config because it changed 2025-08-13T19:59:59.191602576+00:00 stderr F I0813 19:59:59.191358 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nCustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the 
server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" 2025-08-13T20:00:00.988583578+00:00 stderr F W0813 20:00:00.985560 1 reflector.go:539] github.com/openshift/client-go/route/informers/externalversions/factory.go:125: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:00:00.988583578+00:00 stderr F E0813 20:00:00.988021 1 reflector.go:147] 
github.com/openshift/client-go/route/informers/externalversions/factory.go:125: Failed to watch *v1.Route: failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io) 2025-08-13T20:00:05.766076904+00:00 stderr F I0813 20:00:05.756472 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.756441289 +0000 UTC))" 2025-08-13T20:00:05.766246049+00:00 stderr F I0813 20:00:05.766224 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.766140316 +0000 UTC))" 2025-08-13T20:00:05.766725483+00:00 stderr F I0813 20:00:05.766707 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.766684052 +0000 UTC))" 2025-08-13T20:00:05.766829396+00:00 stderr F I0813 20:00:05.766764 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.766748793 +0000 UTC))" 2025-08-13T20:00:05.766933289+00:00 stderr F I0813 20:00:05.766915 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.766896128 +0000 UTC))" 2025-08-13T20:00:05.766987800+00:00 stderr F I0813 20:00:05.766971 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.766954729 +0000 UTC))" 2025-08-13T20:00:05.767042702+00:00 stderr F I0813 20:00:05.767029 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.767011351 +0000 UTC))" 2025-08-13T20:00:05.767096963+00:00 stderr F I0813 20:00:05.767081 1 tlsconfig.go:178] "Loaded client CA" index=7 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.767063202 +0000 UTC))" 2025-08-13T20:00:05.767142385+00:00 stderr F I0813 20:00:05.767130 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.767116244 +0000 UTC))" 2025-08-13T20:00:05.767211757+00:00 stderr F I0813 20:00:05.767185 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.767163085 +0000 UTC))" 2025-08-13T20:00:05.767562677+00:00 stderr F I0813 20:00:05.767544 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 20:00:05.767525566 +0000 UTC))" 
2025-08-13T20:00:05.768005659+00:00 stderr F I0813 20:00:05.767984 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 20:00:05.767966438 +0000 UTC))" 2025-08-13T20:00:12.004702051+00:00 stderr F I0813 20:00:11.995725 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125 2025-08-13T20:00:12.036581620+00:00 stderr F I0813 20:00:12.012869 1 base_controller.go:73] Caches are synced for WellKnownReadyController 2025-08-13T20:00:12.036581620+00:00 stderr F I0813 20:00:12.012914 1 base_controller.go:110] Starting #1 worker of WellKnownReadyController controller ... 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.044553 1 base_controller.go:73] Caches are synced for OAuthServerRouteEndpointAccessibleController 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.044598 1 base_controller.go:110] Starting #1 worker of OAuthServerRouteEndpointAccessibleController controller ... 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.045238 1 base_controller.go:73] Caches are synced for CustomRouteController 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.045269 1 base_controller.go:110] Starting #1 worker of CustomRouteController controller ... 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.045301 1 base_controller.go:73] Caches are synced for ProxyConfigController 2025-08-13T20:00:12.054676386+00:00 stderr F I0813 20:00:12.045307 1 base_controller.go:110] Starting #1 worker of ProxyConfigController controller ... 
2025-08-13T20:00:12.067141681+00:00 stderr F I0813 20:00:12.067044 1 base_controller.go:73] Caches are synced for PayloadConfig 2025-08-13T20:00:12.067141681+00:00 stderr F I0813 20:00:12.067095 1 base_controller.go:110] Starting #1 worker of PayloadConfig controller ... 2025-08-13T20:00:12.067141681+00:00 stderr F I0813 20:00:12.067134 1 base_controller.go:73] Caches are synced for OAuthServerWorkloadController 2025-08-13T20:00:12.067210793+00:00 stderr F I0813 20:00:12.067139 1 base_controller.go:110] Starting #1 worker of OAuthServerWorkloadController controller ... 2025-08-13T20:00:12.069852988+00:00 stderr F I0813 20:00:12.067449 1 base_controller.go:73] Caches are synced for MetadataController 2025-08-13T20:00:12.069852988+00:00 stderr F I0813 20:00:12.067480 1 base_controller.go:110] Starting #1 worker of MetadataController controller ... 2025-08-13T20:00:12.069852988+00:00 stderr F I0813 20:00:12.067502 1 base_controller.go:73] Caches are synced for OAuthClientsController 2025-08-13T20:00:12.069852988+00:00 stderr F I0813 20:00:12.067506 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ... 
2025-08-13T20:00:12.429934526+00:00 stderr F I0813 20:00:12.428685 1 apps.go:154] Deployment "openshift-authentication/oauth-openshift" changes: {"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"XyQxByO8dMbdAXGOFdhjWrxg0HELpVdxvAUV_2DuovP3EK6S_sdSNGYO1dWowvrD71Ii-BaK1iul4_iTDM-yaQ","operator.openshift.io/spec-hash":"ad10eacae3023cd0d2ee52348ecb0eeffb28a18838276096bbd0a036b94e4744"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"XyQxByO8dMbdAXGOFdhjWrxg0HELpVdxvAUV_2DuovP3EK6S_sdSNGYO1dWowvrD71Ii-BaK1iul4_iTDM-yaQ"}},"spec":{"containers":[{"args":["if [ -s /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec oauth-server osinserver \\\n--config=/var/config/system/configmaps/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig \\\n--v=2 \\\n--audit-log-format=json \\\n--audit-log-maxbackup=10 \\\n--audit-log-maxsize=100 \\\n--audit-log-path=/var/log/oauth-server/audit.log 
\\\n--audit-policy-file=/var/run/configmaps/audit/audit.yaml\n"],"command":["/bin/bash","-ec"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"oauth-openshift","ports":[{"containerPort":6443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"securityContext":{"privileged":true,"readOnlyRootFilesystem":false,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/audit","name":"audit-policies"},{"mountPath":"/var/log/oauth-server","name":"audit-dir"},{"mountPath":"/var/config/system/secrets/v4-0-config-system-session","name":"v4-0-config-system-session","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-cliconfig","name":"v4-0-config-system-cliconfig","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-serving-cert","name":"v4-0-config-system-serving-cert","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-service-ca","name":"v4-0-config-system-service-ca","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-router-certs","name":"v4-0-config-system-router-certs","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template","name":"v4-0-config-system-ocp-branding-template","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-login","name":"v4-0-config-user-template-login","readOnly":true},{"mountPath":"/var/confi
g/user/template/secret/v4-0-config-user-template-provider-selection","name":"v4-0-config-user-template-provider-selection","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-error","name":"v4-0-config-user-template-error","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle","name":"v4-0-config-system-trusted-ca-bundle","readOnly":true},{"mountPath":"/var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data","name":"v4-0-config-user-idp-0-file-data","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"securityContext":null,"serviceAccount":null,"volumes":[{"configMap":{"name":"audit"},"name":"audit-policies"},{"hostPath":{"path":"/var/log/oauth-server"},"name":"audit-dir"},{"name":"v4-0-config-system-session","secret":{"secretName":"v4-0-config-system-session"}},{"configMap":{"name":"v4-0-config-system-cliconfig"},"name":"v4-0-config-system-cliconfig"},{"name":"v4-0-config-system-serving-cert","secret":{"secretName":"v4-0-config-system-serving-cert"}},{"configMap":{"name":"v4-0-config-system-service-ca"},"name":"v4-0-config-system-service-ca"},{"name":"v4-0-config-system-router-certs","secret":{"secretName":"v4-0-config-system-router-certs"}},{"name":"v4-0-config-system-ocp-branding-template","secret":{"secretName":"v4-0-config-system-ocp-branding-template"}},{"name":"v4-0-config-user-template-login","secret":{"optional":true,"secretName":"v4-0-config-user-template-login"}},{"name":"v4-0-config-user-template-provider-selection","secret":{"optional":true,"secretName":"v4-0-config-user-template-provider-selection"}},{"name":"v4-0-config-user-template-error","secret":{"optional":true,"secretName":"v4-0-config-user-template-error"}},{"configMap":{"name":"v4-0-config-system-trusted-ca-bundle","optional":true},"name":"v4-0-config-system-trusted-ca-bundle"},{"name":"v4-0-config-user-idp-0-file-data","secret":{"items":[{"key":"htpasswd","path":"htpasswd"}],"se
cretName":"v4-0-config-user-idp-0-file-data"}}]}}}} 2025-08-13T20:00:12.975490032+00:00 stderr F I0813 20:00:12.974998 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed 2025-08-13T20:00:13.015944106+00:00 stderr F I0813 20:00:13.015377 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691-\u003e10.217.4.10:53: read: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection 
refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.406758249+00:00 stderr F I0813 20:00:13.405089 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:26:20Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.415712555+00:00 stderr F I0813 20:00:13.414686 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:57601->10.217.4.10:53: read: connection refused" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:13.606415462+00:00 stderr F I0813 20:00:13.606353 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection 
refused","reason":"CustomRouteController_SyncError::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.609902272+00:00 stderr F E0813 20:00:13.606943 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:13.718737195+00:00 stderr F I0813 20:00:13.608178 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client 
\"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp: lookup oauth-openshift.apps-crc.testing on 10.217.4.10:53: read udp 10.217.0.19:59691->10.217.4.10:53: read: connection refused" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:13.718737195+00:00 stderr F I0813 20:00:13.718658 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.") 2025-08-13T20:00:13.843928115+00:00 stderr F E0813 20:00:13.843736 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.863371922+00:00 stderr F E0813 20:00:14.862982 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 
192.168.130.11:443: connect: connection refused 2025-08-13T20:00:14.908161409+00:00 stderr F E0813 20:00:14.904196 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.004283370+00:00 stderr F E0813 20:00:15.004205 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.173046952+00:00 stderr F E0813 20:00:15.156544 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.182964455+00:00 stderr F I0813 20:00:15.182923 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthClientsController_SyncError::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: 
deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:15.253002892+00:00 stderr F E0813 20:00:15.252132 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.388949248+00:00 stderr F E0813 20:00:15.386277 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.476292329+00:00 stderr F E0813 20:00:15.461625 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:15.572213484+00:00 stderr F I0813 20:00:15.568489 1 apps.go:154] Deployment "openshift-authentication/oauth-openshift" changes: 
{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"UeLoNQgNyi4ek--YG05tkJpgr5YeX9hH-xOiyjIBYuZg66HSrnnna0O-xw2d6c90LgOJuApblDmeGo40yQBZ1g","operator.openshift.io/spec-hash":"f6f3b5299b2d9845581bd943317b8c67a8bf91da11360bafa61bc66ec3070d31"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"UeLoNQgNyi4ek--YG05tkJpgr5YeX9hH-xOiyjIBYuZg66HSrnnna0O-xw2d6c90LgOJuApblDmeGo40yQBZ1g"}},"spec":{"containers":[{"args":["if [ -s /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec oauth-server osinserver \\\n--config=/var/config/system/configmaps/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig \\\n--v=2 \\\n--audit-log-format=json \\\n--audit-log-maxbackup=10 \\\n--audit-log-maxsize=100 \\\n--audit-log-path=/var/log/oauth-server/audit.log 
\\\n--audit-policy-file=/var/run/configmaps/audit/audit.yaml\n"],"command":["/bin/bash","-ec"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"oauth-openshift","ports":[{"containerPort":6443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"securityContext":{"privileged":true,"readOnlyRootFilesystem":false,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/audit","name":"audit-policies"},{"mountPath":"/var/log/oauth-server","name":"audit-dir"},{"mountPath":"/var/config/system/secrets/v4-0-config-system-session","name":"v4-0-config-system-session","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-cliconfig","name":"v4-0-config-system-cliconfig","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-serving-cert","name":"v4-0-config-system-serving-cert","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-service-ca","name":"v4-0-config-system-service-ca","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-router-certs","name":"v4-0-config-system-router-certs","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template","name":"v4-0-config-system-ocp-branding-template","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-login","name":"v4-0-config-user-template-login","readOnly":true},{"mountPath":"/var/confi
g/user/template/secret/v4-0-config-user-template-provider-selection","name":"v4-0-config-user-template-provider-selection","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-error","name":"v4-0-config-user-template-error","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle","name":"v4-0-config-system-trusted-ca-bundle","readOnly":true},{"mountPath":"/var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data","name":"v4-0-config-user-idp-0-file-data","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"securityContext":null,"serviceAccount":null,"volumes":[{"configMap":{"name":"audit"},"name":"audit-policies"},{"hostPath":{"path":"/var/log/oauth-server"},"name":"audit-dir"},{"name":"v4-0-config-system-session","secret":{"secretName":"v4-0-config-system-session"}},{"configMap":{"name":"v4-0-config-system-cliconfig"},"name":"v4-0-config-system-cliconfig"},{"name":"v4-0-config-system-serving-cert","secret":{"secretName":"v4-0-config-system-serving-cert"}},{"configMap":{"name":"v4-0-config-system-service-ca"},"name":"v4-0-config-system-service-ca"},{"name":"v4-0-config-system-router-certs","secret":{"secretName":"v4-0-config-system-router-certs"}},{"name":"v4-0-config-system-ocp-branding-template","secret":{"secretName":"v4-0-config-system-ocp-branding-template"}},{"name":"v4-0-config-user-template-login","secret":{"optional":true,"secretName":"v4-0-config-user-template-login"}},{"name":"v4-0-config-user-template-provider-selection","secret":{"optional":true,"secretName":"v4-0-config-user-template-provider-selection"}},{"name":"v4-0-config-user-template-error","secret":{"optional":true,"secretName":"v4-0-config-user-template-error"}},{"configMap":{"name":"v4-0-config-system-trusted-ca-bundle","optional":true},"name":"v4-0-config-system-trusted-ca-bundle"},{"name":"v4-0-config-user-idp-0-file-data","secret":{"items":[{"key":"htpasswd","path":"htpasswd"}],"se
cretName":"v4-0-config-user-idp-0-file-data"}}]}}}} 2025-08-13T20:00:16.069873294+00:00 stderr F E0813 20:00:16.069421 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:16.719182359+00:00 stderr F I0813 20:00:16.718490 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:16.724515991+00:00 stderr F I0813 20:00:16.721913 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed 2025-08-13T20:00:17.149870289+00:00 stderr F E0813 20:00:17.149315 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:17.165937637+00:00 stderr F I0813 20:00:17.165762 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection 
refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:17.202066678+00:00 stderr F E0813 20:00:17.201641 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:17.497314967+00:00 stderr F I0813 20:00:17.497116 1 helpers.go:184] lister was stale at resourceVersion=29206, live get showed resourceVersion=29240 2025-08-13T20:00:17.500207359+00:00 stderr F I0813 20:00:17.500067 1 helpers.go:184] lister was stale at resourceVersion=29206, live get showed resourceVersion=29240 2025-08-13T20:00:17.534337442+00:00 stderr F I0813 20:00:17.533562 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 
4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:17.536315839+00:00 stderr F I0813 20:00:17.535193 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: unable to ensure existence of a bootstrapped OAuth client \"openshift-browser-client\": the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get 
\"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:17.668454376+00:00 stderr F E0813 20:00:17.667141 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:17.738651368+00:00 stderr F E0813 20:00:17.738582 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:17.765923386+00:00 stderr F E0813 20:00:17.765691 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:17.772682838+00:00 stderr F I0813 20:00:17.772240 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: 
deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:17.984901610+00:00 stderr F E0813 20:00:17.957458 1 base_controller.go:268] CustomRouteController reconciliation failed: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:17.998317872+00:00 stderr F I0813 20:00:17.996657 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not 
ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:18.027750481+00:00 stderr F I0813 20:00:18.001198 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "CustomRouteControllerDegraded: the server is 
currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:00:18.027750481+00:00 stderr F E0813 20:00:18.005592 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.027750481+00:00 stderr F E0813 20:00:18.005978 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.027750481+00:00 stderr F E0813 20:00:18.016061 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.073065473+00:00 stderr F E0813 20:00:18.067757 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.225214142+00:00 stderr F E0813 20:00:18.223388 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:18.267422925+00:00 stderr F E0813 20:00:18.254323 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:18.267422925+00:00 stderr F I0813 20:00:18.254421 1 status_controller.go:218] clusteroperator/authentication diff 
{"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:18.267422925+00:00 stderr F E0813 20:00:18.254661 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth 
service endpoints are not ready 2025-08-13T20:00:18.314470386+00:00 stderr F E0813 20:00:18.314079 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.510731402+00:00 stderr F E0813 20:00:18.439370 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:18.580644456+00:00 stderr F I0813 20:00:18.519024 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4." 
to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" 2025-08-13T20:00:18.622977703+00:00 stderr F E0813 20:00:18.616420 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:19.055428173+00:00 stderr F E0813 20:00:19.029603 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.235422316+00:00 stderr F E0813 20:00:19.234686 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:19.242731974+00:00 stderr F I0813 20:00:19.242602 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service 
endpoints are not ready","reason":"CustomRouteController_SyncError::IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 0/1 pods have been updated to the latest generation","reason":"OAuthServerDeployment_PodsUpdating","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:19.300572264+00:00 stderr F E0813 20:00:19.300451 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:19.532105785+00:00 stderr F I0813 20:00:19.529928 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: 
deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 0/1 pods have been updated to the latest generation" 2025-08-13T20:00:19.538820187+00:00 stderr F E0813 20:00:19.537609 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:19.610996055+00:00 stderr F I0813 20:00:19.610297 1 request.go:697] Waited for 1.007136378s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit 2025-08-13T20:00:20.135667296+00:00 stderr F E0813 20:00:20.135228 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:20.144572859+00:00 stderr F I0813 20:00:20.142549 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 0/1 pods have been updated to the latest 
generation","reason":"OAuthServerDeployment_PodsUpdating","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:20.233140455+00:00 stderr F E0813 20:00:20.233035 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:20.295036100+00:00 stderr F I0813 20:00:20.286171 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service 
endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:00:21.188543427+00:00 stderr F I0813 20:00:21.184308 1 request.go:697] Waited for 1.122772645s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets/router-certs 2025-08-13T20:00:21.862728060+00:00 stderr F E0813 20:00:21.862242 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:22.463267694+00:00 stderr F E0813 20:00:22.463207 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:22.535823483+00:00 stderr F I0813 20:00:22.530093 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not 
ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:22Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:22.617541683+00:00 stderr F E0813 20:00:22.617487 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": dial tcp 192.168.130.11:443: connect: connection refused 2025-08-13T20:00:22.687107217+00:00 stderr F I0813 20:00:22.685318 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get 
\"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") 2025-08-13T20:00:24.386693571+00:00 stderr F I0813 20:00:24.386152 1 request.go:697] Waited for 1.085380808s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit 2025-08-13T20:00:32.112004482+00:00 stderr F E0813 20:00:32.111258 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:35.710360804+00:00 stderr F I0813 20:00:35.669757 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/v4-0-config-user-idp-0-file-data -n openshift-authentication because it changed 2025-08-13T20:00:36.365259357+00:00 stderr F I0813 20:00:36.292967 1 apps.go:154] Deployment "openshift-authentication/oauth-openshift" changes: 
{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"uomocd2xQvK4ihb6gzUgAMabCuvz4ifs3T98UV1yGZo0R1LHJARY4B-40ZukHyVSzZ-3pIoV4sdQo49M3ieZtA","operator.openshift.io/spec-hash":"797989bfafe87f49a19e3bfa11bf6d778cd3f9343ed99e2bad962d75542e95e1"}},"spec":{"progressDeadlineSeconds":null,"revisionHistoryLimit":null,"template":{"metadata":{"annotations":{"operator.openshift.io/rvs-hash":"uomocd2xQvK4ihb6gzUgAMabCuvz4ifs3T98UV1yGZo0R1LHJARY4B-40ZukHyVSzZ-3pIoV4sdQo49M3ieZtA"}},"spec":{"containers":[{"args":["if [ -s /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle\"\n cp -f /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec oauth-server osinserver \\\n--config=/var/config/system/configmaps/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig \\\n--v=2 \\\n--audit-log-format=json \\\n--audit-log-maxbackup=10 \\\n--audit-log-maxsize=100 \\\n--audit-log-path=/var/log/oauth-server/audit.log 
\\\n--audit-policy-file=/var/run/configmaps/audit/audit.yaml\n"],"command":["/bin/bash","-ec"],"image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0","lifecycle":{"preStop":{"exec":{"command":["sleep","25"]}}},"livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"oauth-openshift","ports":[{"containerPort":6443,"name":"https","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":6443,"scheme":"HTTPS"},"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"securityContext":{"privileged":true,"readOnlyRootFilesystem":false,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/configmaps/audit","name":"audit-policies"},{"mountPath":"/var/log/oauth-server","name":"audit-dir"},{"mountPath":"/var/config/system/secrets/v4-0-config-system-session","name":"v4-0-config-system-session","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-cliconfig","name":"v4-0-config-system-cliconfig","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-serving-cert","name":"v4-0-config-system-serving-cert","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-service-ca","name":"v4-0-config-system-service-ca","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-router-certs","name":"v4-0-config-system-router-certs","readOnly":true},{"mountPath":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template","name":"v4-0-config-system-ocp-branding-template","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-login","name":"v4-0-config-user-template-login","readOnly":true},{"mountPath":"/var/confi
g/user/template/secret/v4-0-config-user-template-provider-selection","name":"v4-0-config-user-template-provider-selection","readOnly":true},{"mountPath":"/var/config/user/template/secret/v4-0-config-user-template-error","name":"v4-0-config-user-template-error","readOnly":true},{"mountPath":"/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle","name":"v4-0-config-system-trusted-ca-bundle","readOnly":true},{"mountPath":"/var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data","name":"v4-0-config-user-idp-0-file-data","readOnly":true}]}],"dnsPolicy":null,"restartPolicy":null,"schedulerName":null,"securityContext":null,"serviceAccount":null,"volumes":[{"configMap":{"name":"audit"},"name":"audit-policies"},{"hostPath":{"path":"/var/log/oauth-server"},"name":"audit-dir"},{"name":"v4-0-config-system-session","secret":{"secretName":"v4-0-config-system-session"}},{"configMap":{"name":"v4-0-config-system-cliconfig"},"name":"v4-0-config-system-cliconfig"},{"name":"v4-0-config-system-serving-cert","secret":{"secretName":"v4-0-config-system-serving-cert"}},{"configMap":{"name":"v4-0-config-system-service-ca"},"name":"v4-0-config-system-service-ca"},{"name":"v4-0-config-system-router-certs","secret":{"secretName":"v4-0-config-system-router-certs"}},{"name":"v4-0-config-system-ocp-branding-template","secret":{"secretName":"v4-0-config-system-ocp-branding-template"}},{"name":"v4-0-config-user-template-login","secret":{"optional":true,"secretName":"v4-0-config-user-template-login"}},{"name":"v4-0-config-user-template-provider-selection","secret":{"optional":true,"secretName":"v4-0-config-user-template-provider-selection"}},{"name":"v4-0-config-user-template-error","secret":{"optional":true,"secretName":"v4-0-config-user-template-error"}},{"configMap":{"name":"v4-0-config-system-trusted-ca-bundle","optional":true},"name":"v4-0-config-system-trusted-ca-bundle"},{"name":"v4-0-config-user-idp-0-file-data","secret":{"items":[{"key":"htpasswd","path":"htpasswd"}],"se
cretName":"v4-0-config-user-idp-0-file-data"}}]}}}} 2025-08-13T20:00:36.400968906+00:00 stderr F E0813 20:00:36.333924 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:36.400968906+00:00 stderr F I0813 20:00:36.339919 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:22Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:36.580487184+00:00 stderr F I0813 20:00:36.555590 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:22Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:36.580487184+00:00 
stderr F I0813 20:00:36.556938 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF" 2025-08-13T20:00:36.652128637+00:00 stderr F I0813 20:00:36.651830 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed 2025-08-13T20:00:36.909954379+00:00 stderr F E0813 20:00:36.908986 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:36.912915663+00:00 stderr F E0813 20:00:36.910144 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:36.973100329+00:00 stderr F E0813 20:00:36.972602 1 base_controller.go:268] 
OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not ready 2025-08-13T20:00:36.986419359+00:00 stderr F I0813 20:00:36.983106 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:22Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:36.998673178+00:00 stderr F E0813 20:00:36.998162 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: 
Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:37.078261178+00:00 stderr F I0813 20:00:37.077288 1 helpers.go:184] lister was stale at resourceVersion=29830, live get showed resourceVersion=29845 2025-08-13T20:00:37.097267870+00:00 stderr F I0813 20:00:37.094501 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": dial tcp 192.168.130.11:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" 2025-08-13T20:00:37.469467033+00:00 stderr F E0813 20:00:37.465356 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:37.502009441+00:00 stderr F E0813 20:00:37.501224 1 base_controller.go:268] OAuthServerServiceEndpointsEndpointAccessibleController reconciliation failed: oauth service endpoints are not 
ready 2025-08-13T20:00:37.509666649+00:00 stderr F I0813 20:00:37.508157 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:48Z","message":"IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready","reason":"IngressStateEndpoints_MissingSubsets::OAuthServerDeployment_UnavailablePod::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError","status":"True","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:37Z","message":"OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.","reason":"OAuthServerDeployment_NewGeneration","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-27T13:34:16Z","message":"OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps-crc.testing/healthz\": EOF","reason":"OAuthServerDeployment_NoPod::OAuthServerRouteEndpointAccessibleController_EndpointUnavailable","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:37.562600368+00:00 stderr F E0813 20:00:37.555952 1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation 
failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF 2025-08-13T20:00:38.067596878+00:00 stderr F I0813 20:00:38.067325 1 request.go:697] Waited for 1.078996896s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets/etcd-client 2025-08-13T20:00:38.324205215+00:00 stderr F I0813 20:00:38.324036 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.324214 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.324779 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325447 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:38.325396489 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325475 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 
12:35:04 +0000 UTC (now=2025-08-13 20:00:38.325461081 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325495 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:38.325482021 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325511 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:38.325500432 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325540 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325528573 +0000 UTC))" 2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325556 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] 
issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325545743 +0000 UTC))"
2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325573       1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325560843 +0000 UTC))"
2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325589       1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325577764 +0000 UTC))"
2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325607       1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:38.325597164 +0000 UTC))"
2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325639       1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:38.325615085 +0000 UTC))"
2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.325998       1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:00:38.325975315 +0000 UTC))"
2025-08-13T20:00:38.338193134+00:00 stderr F I0813 20:00:38.326321       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115169\" (2025-08-13 18:59:25 +0000 UTC to 2026-08-13 18:59:25 +0000 UTC (now=2025-08-13 20:00:38.326261023 +0000 UTC))"
2025-08-13T20:00:38.560458541+00:00 stderr F I0813 20:00:38.537772       1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6.")
2025-08-13T20:00:40.670992890+00:00 stderr F I0813 20:00:40.669437       1 request.go:697] Waited for 1.047166697s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/trusted-ca-bundle
2025-08-13T20:00:42.162988253+00:00 stderr F E0813 20:00:42.160729       1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF
2025-08-13T20:00:42.637888014+00:00 stderr F E0813 20:00:42.637580       1 base_controller.go:268] OAuthServerRouteEndpointAccessibleController reconciliation failed: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF
2025-08-13T20:00:42.882033666+00:00 stderr F I0813 20:00:42.881970       1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="9480932818b7cb1ebdca51bb28c5cce888164a71652918cc5344387b939314ae", new="fd200c56eea686a995edc75b6728718041d30aab916df9707d4d155e1d8cd60c")
2025-08-13T20:00:42.883879189+00:00 stderr F W0813 20:00:42.883854       1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified
2025-08-13T20:00:42.884015782+00:00 stderr F I0813 20:00:42.883993       1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="ae319b0e78a8985818f6dc7d0863a2c62f9b44ec34c1b60c870a0648a26e2f87", new="c29c68c5a2ab71f55f3d17abcfc7421d482cea9bbfd2fc3fccb363858ff20314")
2025-08-13T20:00:42.897758034+00:00 stderr F I0813 20:00:42.884538       1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:00:42.897927709+00:00 stderr F I0813 20:00:42.888281       1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated"
2025-08-13T20:00:42.898027102+00:00 stderr F I0813 20:00:42.898004       1 genericapiserver.go:539] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"
2025-08-13T20:00:42.898078293+00:00 stderr F I0813 20:00:42.898066       1 genericapiserver.go:603] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest"
2025-08-13T20:00:42.898159456+00:00 stderr F I0813 20:00:42.888544       1 base_controller.go:172] Shutting down OAuthClientsController ...
2025-08-13T20:00:42.898188427+00:00 stderr F I0813 20:00:42.888571       1 base_controller.go:172] Shutting down MetadataController ...
2025-08-13T20:00:42.898216907+00:00 stderr F I0813 20:00:42.889263       1 base_controller.go:114] Shutting down worker of OAuthClientsController controller ...
2025-08-13T20:00:42.898262839+00:00 stderr F I0813 20:00:42.898249       1 base_controller.go:104] All OAuthClientsController workers have been terminated
2025-08-13T20:00:42.898314980+00:00 stderr F I0813 20:00:42.898301       1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain"
2025-08-13T20:00:42.898352641+00:00 stderr F I0813 20:00:42.898341       1 genericapiserver.go:628] [graceful-termination] in-flight non long-running request(s) have drained
2025-08-13T20:00:42.898391672+00:00 stderr F I0813 20:00:42.898379       1 genericapiserver.go:669] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"
2025-08-13T20:00:42.898450134+00:00 stderr F I0813 20:00:42.898437       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
2025-08-13T20:00:42.898633219+00:00 stderr F I0813 20:00:42.898588       1 secure_serving.go:258] Stopped listening on [::]:8443
2025-08-13T20:00:42.898738682+00:00 stderr F I0813 20:00:42.898666       1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
2025-08-13T20:00:42.898823935+00:00 stderr F I0813 20:00:42.898768       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2025-08-13T20:00:42.899380681+00:00 stderr F I0813 20:00:42.899277       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:42.899380681+00:00 stderr F I0813 20:00:42.899361       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
2025-08-13T20:00:42.899434772+00:00 stderr F I0813 20:00:42.899395       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:00:42.899447402+00:00 stderr F I0813 20:00:42.899431       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:00:42.899701310+00:00 stderr F I0813 20:00:42.889374       1 base_controller.go:172] Shutting down APIServerStaticResources ...
2025-08-13T20:00:42.899701310+00:00 stderr F E0813 20:00:42.890466       1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": context canceled, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): client rate limiter Wait returned an error: context canceled, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): client rate limiter Wait returned an error: context canceled, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): client rate limiter Wait returned an error: context canceled, client rate limiter Wait returned an error: context canceled]
2025-08-13T20:00:42.899701310+00:00 stderr F I0813 20:00:42.899692       1 base_controller.go:114] Shutting down worker of APIServerStaticResources controller ...
2025-08-13T20:00:42.899718060+00:00 stderr F I0813 20:00:42.899701       1 base_controller.go:104] All APIServerStaticResources workers have been terminated
2025-08-13T20:00:42.899718060+00:00 stderr F I0813 20:00:42.890485       1 base_controller.go:172] Shutting down StatusSyncer_authentication ...
2025-08-13T20:00:42.899718060+00:00 stderr F I0813 20:00:42.899713       1 base_controller.go:150] All StatusSyncer_authentication post start hooks have been terminated
2025-08-13T20:00:42.899734751+00:00 stderr F I0813 20:00:42.890499       1 base_controller.go:172] Shutting down IngressNodesAvailableController ...
2025-08-13T20:00:42.899734751+00:00 stderr F I0813 20:00:42.890510       1 base_controller.go:172] Shutting down OAuthServerServiceEndpointsEndpointAccessibleController ...
2025-08-13T20:00:42.899734751+00:00 stderr F I0813 20:00:42.890521       1 base_controller.go:172] Shutting down OAuthServerServiceEndpointAccessibleController ...
2025-08-13T20:00:42.899734751+00:00 stderr F I0813 20:00:42.890545       1 base_controller.go:172] Shutting down IngressStateController ...
2025-08-13T20:00:42.899745981+00:00 stderr F I0813 20:00:42.890556       1 base_controller.go:172] Shutting down auditPolicyController ...
2025-08-13T20:00:42.899745981+00:00 stderr F I0813 20:00:42.890581       1 base_controller.go:172] Shutting down WebhookAuthenticatorCertApprover_OpenShiftAuthenticator ...
2025-08-13T20:00:42.899745981+00:00 stderr F I0813 20:00:42.890590       1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ...
2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890607       1 base_controller.go:172] Shutting down RemoveStaleConditionsController ...
2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890637       1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890660       1 base_controller.go:172] Shutting down ManagementStateController ...
2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890689       1 base_controller.go:114] Shutting down worker of StatusSyncer_authentication controller ...
2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.899762       1 base_controller.go:104] All StatusSyncer_authentication workers have been terminated
2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890695       1 base_controller.go:114] Shutting down worker of IngressNodesAvailableController controller ...
2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.899776       1 base_controller.go:104] All IngressNodesAvailableController workers have been terminated
2025-08-13T20:00:42.899824773+00:00 stderr F I0813 20:00:42.890700       1 base_controller.go:114] Shutting down worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904143       1 base_controller.go:104] All OAuthServerServiceEndpointsEndpointAccessibleController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890705       1 base_controller.go:114] Shutting down worker of OAuthServerServiceEndpointAccessibleController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904179       1 base_controller.go:104] All OAuthServerServiceEndpointAccessibleController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890711       1 base_controller.go:114] Shutting down worker of IngressStateController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904193       1 base_controller.go:104] All IngressStateController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890716       1 base_controller.go:114] Shutting down worker of auditPolicyController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904205       1 base_controller.go:104] All auditPolicyController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890722       1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904215       1 base_controller.go:104] All WebhookAuthenticatorCertApprover_OpenShiftAuthenticator workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890728       1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904227       1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890734       1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904236       1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890740       1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904247       1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890745       1 base_controller.go:114] Shutting down worker of ManagementStateController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904370       1 base_controller.go:104] All ManagementStateController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.890763       1 base_controller.go:172] Shutting down PayloadConfig ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.893541       1 base_controller.go:172] Shutting down ServiceCAController ...
2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.894204       1 base_controller.go:268] ServiceCAController reconciliation failed: client rate limiter Wait returned an error: context canceled
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904445       1 base_controller.go:114] Shutting down worker of ServiceCAController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.904454       1 base_controller.go:104] All ServiceCAController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894222       1 base_controller.go:172] Shutting down ProxyConfigController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894237       1 base_controller.go:172] Shutting down CustomRouteController ...
2025-08-13T20:00:42.908890762+00:00 stderr F W0813 20:00:42.894272       1 base_controller.go:232] Updating status of "CustomRouteController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": context canceled
2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.904492       1 base_controller.go:268] CustomRouteController reconciliation failed: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894289       1 base_controller.go:172] Shutting down OAuthServerRouteEndpointAccessibleController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894307       1 base_controller.go:172] Shutting down WellKnownReadyController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894326       1 base_controller.go:172] Shutting down RouterCertsDomainValidationController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894549       1 base_controller.go:172] Shutting down OAuthServerWorkloadController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894655       1 base_controller.go:172] Shutting down EncryptionPruneController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894672       1 base_controller.go:172] Shutting down EncryptionStateController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894684       1 base_controller.go:172] Shutting down EncryptionConditionController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894697       1 base_controller.go:172] Shutting down EncryptionMigrationController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894709       1 base_controller.go:172] Shutting down EncryptionKeyController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.894722       1 base_controller.go:172] Shutting down OAuthAPIServerControllerWorkloadController ...
2025-08-13T20:00:42.908890762+00:00 stderr F W0813 20:00:42.895173       1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: client rate limiter Wait returned an error: context canceled
2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.905507       1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: client rate limiter Wait returned an error: context canceled
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905536       1 base_controller.go:114] Shutting down worker of RouterCertsDomainValidationController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905645       1 base_controller.go:114] Shutting down worker of CustomRouteController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905661       1 base_controller.go:104] All CustomRouteController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895222       1 base_controller.go:114] Shutting down worker of ProxyConfigController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905743       1 base_controller.go:104] All ProxyConfigController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905750       1 base_controller.go:104] All RouterCertsDomainValidationController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895239       1 base_controller.go:114] Shutting down worker of OAuthServerRouteEndpointAccessibleController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905759       1 base_controller.go:104] All OAuthServerRouteEndpointAccessibleController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895248       1 base_controller.go:114] Shutting down worker of WellKnownReadyController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.905932       1 base_controller.go:104] All WellKnownReadyController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895268       1 base_controller.go:172] Shutting down OpenshiftAuthenticationStaticResources ...
2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.895405       1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: client rate limiter Wait returned an error: context canceled
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895421       1 base_controller.go:172] Shutting down OpenShiftAuthenticatorCertRequester ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895432       1 base_controller.go:172] Shutting down SecretRevisionPruneController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895444       1 base_controller.go:172] Shutting down WebhookAuthenticatorController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895457       1 base_controller.go:172] Shutting down RevisionController ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895474       1 base_controller.go:172] Shutting down APIServiceController_openshift-apiserver ...
2025-08-13T20:00:42.908890762+00:00 stderr F E0813 20:00:42.895518       1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: the operator is shutting down, skipping updating conditions, err = failed to reconcile enabled APIs: Get "https://10.217.4.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.user.openshift.io": context canceled
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895534       1 base_controller.go:172] Shutting down NamespaceFinalizerController_openshift-oauth-apiserver ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895570       1 base_controller.go:114] Shutting down worker of MetadataController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.906156       1 base_controller.go:104] All MetadataController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895579       1 base_controller.go:114] Shutting down worker of OAuthServerWorkloadController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.906174       1 base_controller.go:104] All OAuthServerWorkloadController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895588       1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.906189       1 base_controller.go:104] All EncryptionPruneController workers have been terminated
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895593       1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ...
2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.906412 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:00:42.908890762+00:00 stderr F I0813 20:00:42.895600 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:00:42.908890762+00:00 stderr P I0813 20:00:42.906432 2025-08-13T20:00:42.908997645+00:00 stderr F 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895606 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906446 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895611 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906566 1 base_controller.go:114] Shutting down worker of OAuthAPIServerControllerWorkloadController controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906573 1 base_controller.go:104] All OAuthAPIServerControllerWorkloadController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895623 1 base_controller.go:114] Shutting down worker of OpenShiftAuthenticatorCertRequester controller ... 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906585 1 base_controller.go:104] All OpenShiftAuthenticatorCertRequester workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906662 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895630 1 base_controller.go:114] Shutting down worker of SecretRevisionPruneController controller ... 
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906683       1 base_controller.go:104] All SecretRevisionPruneController workers have been terminated
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895636       1 base_controller.go:114] Shutting down worker of WebhookAuthenticatorController controller ...
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906694       1 base_controller.go:104] All WebhookAuthenticatorController workers have been terminated
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895642       1 base_controller.go:114] Shutting down worker of RevisionController controller ...
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906752       1 base_controller.go:104] All RevisionController workers have been terminated
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895651       1 base_controller.go:114] Shutting down worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ...
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906762       1 base_controller.go:104] All NamespaceFinalizerController_openshift-oauth-apiserver workers have been terminated
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895674       1 base_controller.go:172] Shutting down ConfigObserver ...
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895681       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.906868       1 base_controller.go:104] All ConfigObserver workers have been terminated
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895696       1 base_controller.go:172] Shutting down ResourceSyncController ...
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907061       1 base_controller.go:114] Shutting down worker of APIServiceController_openshift-apiserver controller ...
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907071       1 base_controller.go:104] All APIServiceController_openshift-apiserver workers have been terminated
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.895715       1 base_controller.go:172] Shutting down TrustDistributionController ...
2025-08-13T20:00:42.908997645+00:00 stderr F E0813 20:00:42.895746       1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": context canceled
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907095       1 base_controller.go:114] Shutting down worker of TrustDistributionController controller ...
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907100       1 base_controller.go:104] All TrustDistributionController workers have been terminated
2025-08-13T20:00:42.908997645+00:00 stderr F E0813 20:00:42.896184       1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": context canceled, "oauth-openshift/oauth-service.yaml" (string): client rate limiter Wait returned an error: context canceled, "oauth-openshift/trust_distribution_role.yaml" (string): client rate limiter Wait returned an error: context canceled, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): client rate limiter Wait returned an error: context canceled, client rate limiter Wait returned an error: context canceled]
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.896215       1 base_controller.go:172] Shutting down ConfigObserver ...
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.896228       1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907345       1 base_controller.go:104] All ConfigObserver workers have been terminated
2025-08-13T20:00:42.908997645+00:00 stderr F E0813 20:00:42.897470       1 base_controller.go:268] PayloadConfig reconciliation failed: client rate limiter Wait returned an error: context canceled
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.900227       1 genericapiserver.go:699] [graceful-termination] apiserver is exiting
2025-08-13T20:00:42.908997645+00:00 stderr F I0813 20:00:42.907647       1 builder.go:329] server exited
2025-08-13T20:00:42.909276393+00:00 stderr F I0813 20:00:42.909246       1 base_controller.go:114] Shutting down worker of PayloadConfig controller ...
2025-08-13T20:00:42.909319824+00:00 stderr F I0813 20:00:42.909307       1 base_controller.go:104] All PayloadConfig workers have been terminated
2025-08-13T20:00:42.909401276+00:00 stderr F I0813 20:00:42.909339       1 base_controller.go:114] Shutting down worker of OpenshiftAuthenticationStaticResources controller ...
2025-08-13T20:00:42.909828228+00:00 stderr F I0813 20:00:42.909456       1 base_controller.go:104] All OpenshiftAuthenticationStaticResources workers have been terminated
2025-08-13T20:00:42.948115590+00:00 stderr F I0813 20:00:42.947976       1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...
2025-08-13T20:00:42.948115590+00:00 stderr F I0813 20:00:42.948043       1 base_controller.go:104] All ResourceSyncController workers have been terminated
2025-08-13T20:00:45.606345136+00:00 stderr F W0813 20:00:45.602018       1 leaderelection.go:84] leader election lost
././@LongLink0000644000000000000000000000033400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator/2.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authenticati0000644000175000017500000072747215117130646033153 0ustar zuulzuul
2025-12-13T00:13:14.841342104+00:00 stdout F Copying system trust bundle
2025-12-13T00:13:16.052149340+00:00 stderr F W1213 00:13:16.051540       1 cmd.go:154] Unable to read initial content of "/tmp/terminate": open /tmp/terminate: no such file or directory
2025-12-13T00:13:16.059954152+00:00 stderr F I1213 00:13:16.058128       1 observer_polling.go:159] Starting file observer
2025-12-13T00:13:16.064195614+00:00 stderr F I1213 00:13:16.061688       1 cmd.go:240] Using service-serving-cert provided certificates
2025-12-13T00:13:16.064195614+00:00 stderr F I1213 00:13:16.061770       1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-12-13T00:13:16.064195614+00:00 stderr F I1213 00:13:16.063349       1 observer_polling.go:159] Starting file observer
2025-12-13T00:13:16.167127523+00:00 stderr F I1213 00:13:16.166677       1 builder.go:298] cluster-authentication-operator version v4.16.0-202406131906.p0.gb415439.assembly.stream.el9-0-g11ca161-
2025-12-13T00:13:16.168273661+00:00 stderr F I1213 00:13:16.167909       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-12-13T00:13:16.736460894+00:00 stderr F I1213 00:13:16.734543       1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-12-13T00:13:16.748315843+00:00 stderr F I1213 00:13:16.748231       1 maxinflight.go:139] "Initialized nonMutatingChan" len=400
2025-12-13T00:13:16.748315843+00:00 stderr F I1213 00:13:16.748254       1 maxinflight.go:145] "Initialized mutatingChan" len=200
2025-12-13T00:13:16.748315843+00:00 stderr F I1213 00:13:16.748285       1 maxinflight.go:116] "Set denominator for readonly requests" limit=400
2025-12-13T00:13:16.748315843+00:00 stderr F I1213 00:13:16.748290       1 maxinflight.go:120] "Set denominator for mutating requests" limit=200
2025-12-13T00:13:16.784297571+00:00 stderr F I1213 00:13:16.782115       1 secure_serving.go:57] Forcing use of http/1.1 only
2025-12-13T00:13:16.784297571+00:00 stderr F W1213 00:13:16.782355       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-12-13T00:13:16.784297571+00:00 stderr F W1213 00:13:16.782361       1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-12-13T00:13:16.784297571+00:00 stderr F I1213 00:13:16.782522       1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
2025-12-13T00:13:16.787604183+00:00 stderr F I1213 00:13:16.785184       1 builder.go:439] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-12-13T00:13:16.787604183+00:00 stderr F I1213 00:13:16.785520       1 leaderelection.go:250] attempting to acquire leader lease openshift-authentication-operator/cluster-authentication-operator-lock...
2025-12-13T00:13:16.787604183+00:00 stderr F I1213 00:13:16.787043       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-12-13T00:13:16.787604183+00:00 stderr F I1213 00:13:16.787062       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-12-13T00:13:16.787604183+00:00 stderr F I1213 00:13:16.787286       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-12-13T00:13:16.787604183+00:00 stderr F I1213 00:13:16.787294       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-12-13T00:13:16.787604183+00:00 stderr F I1213 00:13:16.787306       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-12-13T00:13:16.787604183+00:00 stderr F I1213 00:13:16.787311       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-12-13T00:13:16.789459325+00:00 stderr F I1213 00:13:16.789386       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-12-13T00:13:16.793180060+00:00 stderr F I1213 00:13:16.793054       1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-12-13 00:13:16.787304092 +0000 UTC))"
2025-12-13T00:13:16.796952467+00:00 stderr F I1213 00:13:16.793496       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584796\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:13:16.793452509 +0000 UTC))"
2025-12-13T00:13:16.796952467+00:00 stderr F I1213 00:13:16.793526       1 secure_serving.go:213] Serving securely on [::]:8443
2025-12-13T00:13:16.796952467+00:00 stderr F I1213 00:13:16.793559       1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated
2025-12-13T00:13:16.796952467+00:00 stderr F I1213 00:13:16.793583       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-12-13T00:13:16.802644398+00:00 stderr F I1213 00:13:16.799845       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172
2025-12-13T00:13:16.802644398+00:00 stderr F I1213 00:13:16.800532       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
2025-12-13T00:13:16.956323362+00:00 stderr F I1213 00:13:16.854709       1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206
2025-12-13T00:13:16.956323362+00:00 stderr F I1213 00:13:16.942013 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-12-13T00:13:16.956323362+00:00 stderr F I1213 00:13:16.942084 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-12-13T00:13:16.956323362+00:00 stderr F I1213 00:13:16.955413 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-12-13T00:13:16.956323362+00:00 stderr F I1213 00:13:16.956212 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:13:16.956186258 +0000 UTC))"
2025-12-13T00:13:16.956323362+00:00 stderr F I1213 00:13:16.956233 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:13:16.956219759 +0000 UTC))"
2025-12-13T00:13:16.956323362+00:00 stderr F I1213 00:13:16.956250 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:16.956238489 +0000 UTC))"
2025-12-13T00:13:16.956323362+00:00 stderr F I1213 00:13:16.956264 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:16.95625403 +0000 UTC))"
2025-12-13T00:13:16.956323362+00:00 stderr F I1213 00:13:16.956283 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.95626869 +0000 UTC))"
2025-12-13T00:13:16.956323362+00:00 stderr F I1213 00:13:16.956298 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.956287041 +0000 UTC))"
2025-12-13T00:13:16.956323362+00:00 stderr F I1213 00:13:16.956312 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.956302371 +0000 UTC))"
2025-12-13T00:13:16.956383264+00:00 stderr F I1213 00:13:16.956328 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.956316462 +0000 UTC))"
2025-12-13T00:13:16.956383264+00:00 stderr F I1213 00:13:16.956343 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:13:16.956333092 +0000 UTC))"
2025-12-13T00:13:16.956383264+00:00 stderr F I1213 00:13:16.956374 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:13:16.956348443 +0000 UTC))"
2025-12-13T00:13:16.956726626+00:00 stderr F I1213 00:13:16.956671 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-12-13 00:13:16.956657203 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.956944 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584796\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:13:16.956913252 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957168 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:13:16.95715607 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957184 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:13:16.95717412 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957201 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:16.957188371 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957217 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:16.957206321 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957232 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.957221912 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957265 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.957251463 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957280 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.957269724 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957293 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.957283644 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957311 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:13:16.957297635 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957330 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:13:16.957320065 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957345 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:16.957334706 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957616 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-12-13 00:13:16.957597865 +0000 UTC))"
2025-12-13T00:13:16.965585063+00:00 stderr F I1213 00:13:16.957867 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584796\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:13:16.957854233 +0000 UTC))"
2025-12-13T00:18:20.733533266+00:00 stderr F I1213 00:18:20.733103 1 leaderelection.go:260] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock
2025-12-13T00:18:20.733533266+00:00 stderr F I1213 00:18:20.733224 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"09dcd617-77d7-4739-bfa0-d91f5ee3f9c6", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41983", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' authentication-operator-7cc7ff75d5-g9qv8_d8ee3972-7647-4ac1-a911-c1524860820a became leader
2025-12-13T00:18:20.837087269+00:00 stderr F I1213 00:18:20.837031 1 base_controller.go:67] Waiting for caches to sync for IngressNodesAvailableController
2025-12-13T00:18:20.837087269+00:00 stderr F I1213 00:18:20.837050 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-12-13T00:18:20.837087269+00:00 stderr F I1213 00:18:20.837066 1 base_controller.go:67] Waiting for caches to sync for ProxyConfigController
2025-12-13T00:18:20.837129660+00:00 stderr F I1213 00:18:20.837087 1 base_controller.go:67] Waiting for caches to sync for CustomRouteController
2025-12-13T00:18:20.837129660+00:00 stderr F I1213 00:18:20.837096 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointAccessibleController
2025-12-13T00:18:20.837228923+00:00 stderr F I1213 00:18:20.837204 1 base_controller.go:67] Waiting for caches to sync for TrustDistributionController
2025-12-13T00:18:20.837228923+00:00 stderr F I1213 00:18:20.837221 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController
2025-12-13T00:18:20.837263314+00:00 stderr F I1213 00:18:20.837244 1 base_controller.go:67] Waiting for caches to sync for IngressStateController
2025-12-13T00:18:20.837263314+00:00 stderr F I1213 00:18:20.837257 1 base_controller.go:67] Waiting for caches to sync for OpenShiftAuthenticatorCertRequester
2025-12-13T00:18:20.837290654+00:00 stderr F I1213 00:18:20.837273 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-12-13T00:18:20.837300165+00:00 stderr F I1213 00:18:20.837291 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorController
2025-12-13T00:18:20.837336396+00:00 stderr F I1213 00:18:20.837308 1 base_controller.go:67] Waiting for caches to sync for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator
2025-12-13T00:18:20.837346856+00:00 stderr F I1213 00:18:20.837336 1 base_controller.go:67] Waiting for caches to sync for NamespaceFinalizerController_openshift-oauth-apiserver
2025-12-13T00:18:20.837355446+00:00 stderr F I1213 00:18:20.837348 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-12-13T00:18:20.837364046+00:00 stderr F I1213 00:18:20.837358 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-12-13T00:18:20.837372856+00:00 stderr F I1213 00:18:20.837034 1 base_controller.go:67] Waiting for caches to sync for OAuthServerServiceEndpointsEndpointAccessibleController
2025-12-13T00:18:20.837403897+00:00 stderr F I1213 00:18:20.837381 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController
2025-12-13T00:18:20.837403897+00:00 stderr F I1213 00:18:20.837396 1 base_controller.go:67] Waiting for caches to sync for OAuthAPIServerControllerWorkloadController
2025-12-13T00:18:20.837414878+00:00 stderr F I1213 00:18:20.837407 1 base_controller.go:67] Waiting for caches to sync for APIServiceController_openshift-apiserver
2025-12-13T00:18:20.837501980+00:00 stderr F I1213 00:18:20.837472 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController
2025-12-13T00:18:20.837501980+00:00 stderr F I1213 00:18:20.837491 1 base_controller.go:67] Waiting for caches to sync for SecretRevisionPruneController
2025-12-13T00:18:20.837782007+00:00 stderr F I1213 00:18:20.837749 1 base_controller.go:67] Waiting for caches to sync for RevisionController
2025-12-13T00:18:20.837782007+00:00 stderr F I1213 00:18:20.837760 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController
2025-12-13T00:18:20.837797558+00:00 stderr F I1213 00:18:20.837778 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController
2025-12-13T00:18:20.837807108+00:00 stderr F I1213 00:18:20.837798 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController
2025-12-13T00:18:20.837807108+00:00 stderr F I1213 00:18:20.837804 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController
2025-12-13T00:18:20.837852499+00:00 stderr F I1213 00:18:20.837827 1 base_controller.go:67] Waiting for caches to sync for APIServerStaticResources
2025-12-13T00:18:20.837852499+00:00 stderr F I1213 00:18:20.837838 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_authentication
2025-12-13T00:18:20.838004083+00:00 stderr F I1213 00:18:20.837981 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.838254750+00:00 stderr F I1213 00:18:20.838238 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-12-13T00:18:20.838297331+00:00 stderr F I1213 00:18:20.838287 1 base_controller.go:67] Waiting for caches to sync for ManagementStateController
2025-12-13T00:18:20.838329592+00:00 stderr F I1213 00:18:20.838320 1 base_controller.go:67] Waiting for caches to sync for OAuthServerWorkloadController
2025-12-13T00:18:20.838371443+00:00 stderr F I1213 00:18:20.838333 1 reflector.go:351] Caches populated for *v1.Role from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.838397133+00:00 stderr F I1213 00:18:20.838354 1 base_controller.go:67] Waiting for caches to sync for OAuthClientsController
2025-12-13T00:18:20.838489686+00:00 stderr F I1213 00:18:20.838478 1 base_controller.go:67] Waiting for caches to sync for PayloadConfig
2025-12-13T00:18:20.838520847+00:00 stderr F I1213 00:18:20.838511 1 base_controller.go:67] Waiting for caches to sync for RouterCertsDomainValidationController
2025-12-13T00:18:20.838552397+00:00 stderr F I1213 00:18:20.838542 1 base_controller.go:67] Waiting for caches to sync for ServiceCAController
2025-12-13T00:18:20.838614349+00:00 stderr F I1213 00:18:20.838349 1 base_controller.go:67] Waiting for caches to sync for MetadataController
2025-12-13T00:18:20.838906747+00:00 stderr F I1213 00:18:20.838891 1 base_controller.go:67] Waiting for caches to sync for WellKnownReadyController
2025-12-13T00:18:20.839026340+00:00 stderr F I1213 00:18:20.838971 1 base_controller.go:67] Waiting for caches to sync for OAuthServerRouteEndpointAccessibleController
2025-12-13T00:18:20.839464532+00:00 stderr F I1213 00:18:20.839411 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.840253303+00:00 stderr F I1213 00:18:20.840209 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.840331925+00:00 stderr F I1213 00:18:20.840301 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.840382886+00:00 stderr F I1213 00:18:20.840354 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.841903476+00:00 stderr F I1213 00:18:20.841828 1 base_controller.go:67] Waiting for caches to sync for OpenshiftAuthenticationStaticResources
2025-12-13T00:18:20.843239462+00:00 stderr F I1213 00:18:20.843169 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.852503198+00:00 stderr F I1213 00:18:20.852457 1 reflector.go:351] Caches populated for *v1.OAuthClient from github.com/openshift/client-go/oauth/informers/externalversions/factory.go:125
2025-12-13T00:18:20.852666113+00:00 stderr F I1213 00:18:20.852636 1 reflector.go:351] Caches populated for *v1.OAuth from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-12-13T00:18:20.853173276+00:00 stderr F I1213 00:18:20.853153 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-12-13T00:18:20.854331707+00:00 stderr F I1213 00:18:20.854299 1 reflector.go:351] Caches populated for *v1.Route from github.com/openshift/client-go/route/informers/externalversions/factory.go:125
2025-12-13T00:18:20.855119798+00:00 stderr F I1213 00:18:20.855075 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.855456627+00:00 stderr F I1213 00:18:20.855422 1 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.858499218+00:00 stderr F I1213 00:18:20.855478 1 reflector.go:351] Caches populated for *v1.Infrastructure from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-12-13T00:18:20.858897938+00:00 stderr F I1213 00:18:20.858830 1 reflector.go:351] Caches populated for *v1.Ingress from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-12-13T00:18:20.859031793+00:00 stderr F I1213 00:18:20.859003 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.859495895+00:00 stderr F I1213 00:18:20.859462 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from sigs.k8s.io/kube-storage-version-migrator/pkg/clients/informer/factory.go:132
2025-12-13T00:18:20.859653389+00:00 stderr F I1213 00:18:20.859623 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.860193393+00:00 stderr F I1213 00:18:20.860162 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125
2025-12-13T00:18:20.860244024+00:00 stderr F I1213 00:18:20.860222 1 reflector.go:351] Caches populated for *v1.APIServer from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-12-13T00:18:20.860588843+00:00 stderr F I1213 00:18:20.860558 1 reflector.go:351] Caches populated for *v1.IngressController from github.com/openshift/client-go/operator/informers/externalversions/factory.go:125
2025-12-13T00:18:20.860681646+00:00 stderr F I1213 00:18:20.860656 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-12-13T00:18:20.860718437+00:00 stderr F I1213 00:18:20.860696 1 reflector.go:351] Caches populated for *v1.APIService from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:141
2025-12-13T00:18:20.862697500+00:00 stderr F I1213 00:18:20.862642 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.863095820+00:00 stderr F I1213 00:18:20.863030 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.863256995+00:00 stderr F I1213 00:18:20.863224 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-12-13T00:18:20.874999006+00:00 stderr F I1213 00:18:20.874399 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.876836456+00:00 stderr F I1213 00:18:20.876794 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.889593454+00:00 stderr F I1213 00:18:20.889536 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:20.937964871+00:00 stderr F I1213 00:18:20.937886 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-12-13T00:18:20.937964871+00:00 stderr F I1213 00:18:20.937914 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-12-13T00:18:20.938005182+00:00 stderr F I1213 00:18:20.937955 1 base_controller.go:73] Caches are synced for StatusSyncer_authentication
2025-12-13T00:18:20.938005182+00:00 stderr F I1213 00:18:20.937984 1 base_controller.go:110] Starting #1 worker of StatusSyncer_authentication controller ...
2025-12-13T00:18:20.938025673+00:00 stderr F I1213 00:18:20.938011 1 base_controller.go:73] Caches are synced for APIServerStaticResources
2025-12-13T00:18:20.938025673+00:00 stderr F I1213 00:18:20.938018 1 base_controller.go:110] Starting #1 worker of APIServerStaticResources controller ...
2025-12-13T00:18:20.938139676+00:00 stderr F I1213 00:18:20.938111 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-12-13T00:18:20.938139676+00:00 stderr F I1213 00:18:20.938132 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-12-13T00:18:20.938187217+00:00 stderr F I1213 00:18:20.938163 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_OpenShiftAuthenticator
2025-12-13T00:18:20.938225558+00:00 stderr F I1213 00:18:20.938207 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_OpenShiftAuthenticator controller ...
2025-12-13T00:18:20.938265319+00:00 stderr F I1213 00:18:20.938255 1 base_controller.go:73] Caches are synced for NamespaceFinalizerController_openshift-oauth-apiserver
2025-12-13T00:18:20.938289970+00:00 stderr F I1213 00:18:20.938281 1 base_controller.go:110] Starting #1 worker of NamespaceFinalizerController_openshift-oauth-apiserver controller ...
2025-12-13T00:18:20.938335691+00:00 stderr F I1213 00:18:20.938326 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-12-13T00:18:20.938358921+00:00 stderr F I1213 00:18:20.938350 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-12-13T00:18:20.938483665+00:00 stderr F I1213 00:18:20.938463 1 base_controller.go:73] Caches are synced for CustomRouteController
2025-12-13T00:18:20.938483665+00:00 stderr F I1213 00:18:20.938473 1 base_controller.go:110] Starting #1 worker of CustomRouteController controller ...
2025-12-13T00:18:20.938606228+00:00 stderr F I1213 00:18:20.938582 1 base_controller.go:73] Caches are synced for ManagementStateController
2025-12-13T00:18:20.938606228+00:00 stderr F I1213 00:18:20.938598 1 base_controller.go:110] Starting #1 worker of ManagementStateController controller ...
2025-12-13T00:18:20.938635229+00:00 stderr F I1213 00:18:20.938622 1 base_controller.go:73] Caches are synced for OAuthClientsController
2025-12-13T00:18:20.938667540+00:00 stderr F I1213 00:18:20.938655 1 base_controller.go:110] Starting #1 worker of OAuthClientsController controller ...
2025-12-13T00:18:20.941540276+00:00 stderr F I1213 00:18:20.941518 1 reflector.go:351] Caches populated for *v1.Authentication from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-12-13T00:18:21.003878843+00:00 stderr F I1213 00:18:21.003843 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:21.039819109+00:00 stderr F I1213 00:18:21.039753 1 base_controller.go:73] Caches are synced for MetadataController
2025-12-13T00:18:21.039819109+00:00 stderr F I1213 00:18:21.039789 1 base_controller.go:110] Starting #1 worker of MetadataController controller ...
2025-12-13T00:18:21.141661686+00:00 stderr F I1213 00:18:21.141618 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
2025-12-13T00:18:21.204837257+00:00 stderr F I1213 00:18:21.204737 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:21.410313720+00:00 stderr F I1213 00:18:21.410246 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:21.606293430+00:00 stderr F I1213 00:18:21.606171 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:21.804508671+00:00 stderr F I1213 00:18:21.804413 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:22.002174237+00:00 stderr F I1213 00:18:22.002077 1 request.go:697] Waited for 1.174359775s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?limit=500&resourceVersion=0
2025-12-13T00:18:22.004973691+00:00 stderr F I1213 00:18:22.004837 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:22.038120732+00:00 stderr F I1213 00:18:22.038040 1 base_controller.go:73] Caches are synced for auditPolicyController
2025-12-13T00:18:22.038120732+00:00 stderr F I1213 00:18:22.038079 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ...
2025-12-13T00:18:22.038330547+00:00 stderr F I1213 00:18:22.038261 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
2025-12-13T00:18:22.204480626+00:00 stderr F I1213 00:18:22.204346 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:22.413012170+00:00 stderr F I1213 00:18:22.412854 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:22.606657658+00:00 stderr F I1213 00:18:22.606557 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:22.803773970+00:00 stderr F I1213 00:18:22.803699 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:23.002895344+00:00 stderr F I1213 00:18:23.002596 1 request.go:697] Waited for 2.167907352s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/nodes?limit=500&resourceVersion=0
2025-12-13T00:18:23.005286708+00:00 stderr F I1213 00:18:23.005225 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:23.037924125+00:00 stderr F I1213 00:18:23.037821 1 base_controller.go:73] Caches are synced for IngressNodesAvailableController
2025-12-13T00:18:23.037924125+00:00 stderr F I1213 00:18:23.037857 1 base_controller.go:110] Starting #1 worker of IngressNodesAvailableController controller ...
2025-12-13T00:18:23.038918042+00:00 stderr F I1213 00:18:23.038845 1 base_controller.go:73] Caches are synced for OAuthServerWorkloadController
2025-12-13T00:18:23.038918042+00:00 stderr F I1213 00:18:23.038873 1 base_controller.go:110] Starting #1 worker of OAuthServerWorkloadController controller ...
2025-12-13T00:18:23.204471324+00:00 stderr F I1213 00:18:23.204385 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:23.238180380+00:00 stderr F I1213 00:18:23.238031 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointsEndpointAccessibleController
2025-12-13T00:18:23.238180380+00:00 stderr F I1213 00:18:23.238109 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointsEndpointAccessibleController controller ...
2025-12-13T00:18:23.238180380+00:00 stderr F I1213 00:18:23.238047 1 base_controller.go:73] Caches are synced for IngressStateController
2025-12-13T00:18:23.238180380+00:00 stderr F I1213 00:18:23.238153 1 base_controller.go:110] Starting #1 worker of IngressStateController controller ...
2025-12-13T00:18:23.427596456+00:00 stderr F I1213 00:18:23.427519 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:23.437286034+00:00 stderr F I1213 00:18:23.437236 1 base_controller.go:73] Caches are synced for ProxyConfigController 
2025-12-13T00:18:23.437379436+00:00 stderr F I1213 00:18:23.437356 1 base_controller.go:110] Starting #1 worker of ProxyConfigController controller ...
2025-12-13T00:18:23.439649087+00:00 stderr F I1213 00:18:23.439615 1 base_controller.go:73] Caches are synced for OAuthServerRouteEndpointAccessibleController 
2025-12-13T00:18:23.439649087+00:00 stderr F I1213 00:18:23.439641 1 base_controller.go:110] Starting #1 worker of OAuthServerRouteEndpointAccessibleController controller ...
2025-12-13T00:18:23.607247023+00:00 stderr F I1213 00:18:23.607163 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:23.804288642+00:00 stderr F I1213 00:18:23.804222 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:23.839502187+00:00 stderr F I1213 00:18:23.839371 1 base_controller.go:73] Caches are synced for TrustDistributionController 
2025-12-13T00:18:23.839502187+00:00 stderr F I1213 00:18:23.839406 1 base_controller.go:73] Caches are synced for ConfigObserver 
2025-12-13T00:18:23.839502187+00:00 stderr F I1213 00:18:23.839422 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-12-13T00:18:23.839502187+00:00 stderr F I1213 00:18:23.839426 1 base_controller.go:73] Caches are synced for RouterCertsDomainValidationController 
2025-12-13T00:18:23.839502187+00:00 stderr F I1213 00:18:23.839410 1 base_controller.go:110] Starting #1 worker of TrustDistributionController controller ...
2025-12-13T00:18:23.839502187+00:00 stderr F I1213 00:18:23.839445 1 base_controller.go:110] Starting #1 worker of RouterCertsDomainValidationController controller ...
2025-12-13T00:18:24.004049873+00:00 stderr F I1213 00:18:24.003993 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:24.202430249+00:00 stderr F I1213 00:18:24.202381 1 request.go:697] Waited for 3.365162315s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets?limit=500&resourceVersion=0
2025-12-13T00:18:24.205551325+00:00 stderr F I1213 00:18:24.205517 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:24.237780924+00:00 stderr F I1213 00:18:24.237689 1 base_controller.go:73] Caches are synced for SecretRevisionPruneController 
2025-12-13T00:18:24.237780924+00:00 stderr F I1213 00:18:24.237736 1 base_controller.go:110] Starting #1 worker of SecretRevisionPruneController controller ...
2025-12-13T00:18:24.237821155+00:00 stderr F I1213 00:18:24.237776 1 base_controller.go:73] Caches are synced for OpenShiftAuthenticatorCertRequester 
2025-12-13T00:18:24.237821155+00:00 stderr F I1213 00:18:24.237798 1 base_controller.go:110] Starting #1 worker of OpenShiftAuthenticatorCertRequester controller ...
2025-12-13T00:18:24.237897507+00:00 stderr F I1213 00:18:24.237836 1 base_controller.go:73] Caches are synced for EncryptionConditionController 
2025-12-13T00:18:24.237897507+00:00 stderr F I1213 00:18:24.237861 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ...
2025-12-13T00:18:24.237897507+00:00 stderr F I1213 00:18:24.237845 1 base_controller.go:73] Caches are synced for WebhookAuthenticatorController 
2025-12-13T00:18:24.237897507+00:00 stderr F I1213 00:18:24.237890 1 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorController controller ...
2025-12-13T00:18:24.237919288+00:00 stderr F I1213 00:18:24.237834 1 base_controller.go:73] Caches are synced for ConfigObserver 
2025-12-13T00:18:24.237919288+00:00 stderr F I1213 00:18:24.237912 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-12-13T00:18:24.237980520+00:00 stderr F I1213 00:18:24.237961 1 base_controller.go:73] Caches are synced for RevisionController 
2025-12-13T00:18:24.238015461+00:00 stderr F I1213 00:18:24.237876 1 base_controller.go:73] Caches are synced for OAuthAPIServerControllerWorkloadController 
2025-12-13T00:18:24.238015461+00:00 stderr F I1213 00:18:24.237990 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-12-13T00:18:24.238015461+00:00 stderr F I1213 00:18:24.237998 1 base_controller.go:110] Starting #1 worker of OAuthAPIServerControllerWorkloadController controller ...
2025-12-13T00:18:24.238132854+00:00 stderr F I1213 00:18:24.238102 1 base_controller.go:73] Caches are synced for EncryptionStateController 
2025-12-13T00:18:24.238235297+00:00 stderr F I1213 00:18:24.238165 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 
2025-12-13T00:18:24.238235297+00:00 stderr F I1213 00:18:24.238204 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ...
2025-12-13T00:18:24.238235297+00:00 stderr F I1213 00:18:24.238174 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ...
2025-12-13T00:18:24.238277578+00:00 stderr F I1213 00:18:24.238201 1 base_controller.go:73] Caches are synced for EncryptionKeyController 
2025-12-13T00:18:24.238277578+00:00 stderr F I1213 00:18:24.238265 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ...
2025-12-13T00:18:24.238295688+00:00 stderr F I1213 00:18:24.238183 1 base_controller.go:73] Caches are synced for EncryptionPruneController 
2025-12-13T00:18:24.238312519+00:00 stderr F I1213 00:18:24.238291 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ...
2025-12-13T00:18:24.405703645+00:00 stderr F I1213 00:18:24.405595 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:24.437739838+00:00 stderr F I1213 00:18:24.437654 1 base_controller.go:73] Caches are synced for APIServiceController_openshift-apiserver 
2025-12-13T00:18:24.437739838+00:00 stderr F I1213 00:18:24.437686 1 base_controller.go:110] Starting #1 worker of APIServiceController_openshift-apiserver controller ...
2025-12-13T00:18:24.438043976+00:00 stderr F I1213 00:18:24.437974 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling
2025-12-13T00:18:24.605304868+00:00 stderr F I1213 00:18:24.605169 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:24.638317459+00:00 stderr F I1213 00:18:24.638229 1 base_controller.go:73] Caches are synced for OAuthServerServiceEndpointAccessibleController 
2025-12-13T00:18:24.638317459+00:00 stderr F I1213 00:18:24.638271 1 base_controller.go:110] Starting #1 worker of OAuthServerServiceEndpointAccessibleController controller ...
2025-12-13T00:18:24.638680789+00:00 stderr F I1213 00:18:24.638637 1 base_controller.go:73] Caches are synced for PayloadConfig 
2025-12-13T00:18:24.638680789+00:00 stderr F I1213 00:18:24.638668 1 base_controller.go:110] Starting #1 worker of PayloadConfig controller ...
2025-12-13T00:18:24.638703739+00:00 stderr F I1213 00:18:24.638645 1 base_controller.go:73] Caches are synced for ServiceCAController 
2025-12-13T00:18:24.638804722+00:00 stderr F I1213 00:18:24.638743 1 base_controller.go:110] Starting #1 worker of ServiceCAController controller ...
2025-12-13T00:18:24.805688253+00:00 stderr F I1213 00:18:24.805580 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:24.842179449+00:00 stderr F I1213 00:18:24.842109 1 base_controller.go:73] Caches are synced for OpenshiftAuthenticationStaticResources 
2025-12-13T00:18:24.842179449+00:00 stderr F I1213 00:18:24.842137 1 base_controller.go:110] Starting #1 worker of OpenshiftAuthenticationStaticResources controller ...
2025-12-13T00:18:25.004192217+00:00 stderr F I1213 00:18:25.004119 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:25.039581133+00:00 stderr F I1213 00:18:25.039491 1 base_controller.go:73] Caches are synced for WellKnownReadyController 
2025-12-13T00:18:25.039581133+00:00 stderr F I1213 00:18:25.039515 1 base_controller.go:110] Starting #1 worker of WellKnownReadyController controller ...
2025-12-13T00:18:25.204560322+00:00 stderr F I1213 00:18:25.204497 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:25.238240751+00:00 stderr F I1213 00:18:25.238139 1 base_controller.go:73] Caches are synced for ResourceSyncController 
2025-12-13T00:18:25.238240751+00:00 stderr F I1213 00:18:25.238171 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-12-13T00:18:25.402156140+00:00 stderr F I1213 00:18:25.402080 1 request.go:697] Waited for 4.463704497s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver
2025-12-13T00:18:26.403050431+00:00 stderr F I1213 00:18:26.402541 1 request.go:697] Waited for 2.562882735s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets/router-certs
2025-12-13T00:18:27.603125332+00:00 stderr F I1213 00:18:27.602564 1 request.go:697] Waited for 2.963600411s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-service-ca
2025-12-13T00:18:28.602691525+00:00 stderr F I1213 00:18:28.602620 1 request.go:697] Waited for 1.798324908s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets/openshift-authenticator-certs
2025-12-13T00:19:37.564040615+00:00 stderr F I1213 00:19:37.563473 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.563433638 +0000 UTC))"
2025-12-13T00:19:37.564040615+00:00 stderr F I1213 00:19:37.563887 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.56386687 +0000 UTC))"
2025-12-13T00:19:37.564040615+00:00 stderr F I1213 00:19:37.563905 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.563893361 +0000 UTC))"
2025-12-13T00:19:37.564040615+00:00 stderr F I1213 00:19:37.563920 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.563908652 +0000 UTC))"
2025-12-13T00:19:37.564040615+00:00 stderr F I1213 00:19:37.563951 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.563925202 +0000 UTC))"
2025-12-13T00:19:37.564040615+00:00 stderr F I1213 00:19:37.563966 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.563956243 +0000 UTC))"
2025-12-13T00:19:37.564040615+00:00 stderr F I1213 00:19:37.563979 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.563969993 +0000 UTC))"
2025-12-13T00:19:37.564040615+00:00 stderr F I1213 00:19:37.563996 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.563985454 +0000 UTC))"
2025-12-13T00:19:37.564040615+00:00 stderr F I1213 00:19:37.564011 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.563999934 +0000 UTC))"
2025-12-13T00:19:37.564040615+00:00 stderr F I1213 00:19:37.564026 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.564017775 +0000 UTC))"
2025-12-13T00:19:37.564131248+00:00 stderr F I1213 00:19:37.564046 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.564031725 +0000 UTC))"
2025-12-13T00:19:37.564131248+00:00 stderr F I1213 00:19:37.564063 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.564052565 +0000 UTC))"
2025-12-13T00:19:37.565864985+00:00 stderr F I1213 00:19:37.564350 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-authentication-operator.svc\" [serving] validServingFor=[metrics.openshift-authentication-operator.svc,metrics.openshift-authentication-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-12-13 00:19:37.564335183 +0000 UTC))"
2025-12-13T00:19:37.565864985+00:00 stderr F I1213 00:19:37.564616 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584796\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:19:37.5645992 +0000 UTC))"
2025-12-13T00:20:36.426692959+00:00 stderr F W1213 00:20:36.426229 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.426692959+00:00 stderr F E1213 00:20:36.426676 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-12-13T00:20:36.432852875+00:00 stderr F I1213 00:20:36.428522 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:36Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-12-13T00:20:36.436039891+00:00 stderr F E1213 00:20:36.433013 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.436039891+00:00 stderr F E1213 00:20:36.433020 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.436120453+00:00 stderr F E1213 00:20:36.436088 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.439032982+00:00 stderr F W1213 00:20:36.438982 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.439052332+00:00 stderr F E1213 00:20:36.439035 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.439091563+00:00 stderr F E1213 00:20:36.439058 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.439169675+00:00 stderr F W1213 00:20:36.439125 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.439169675+00:00 stderr F E1213 00:20:36.439158 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
2025-12-13T00:20:36.439724680+00:00 stderr F I1213 00:20:36.439683 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:36Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-12-13T00:20:36.440346157+00:00 stderr F E1213 00:20:36.440314 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.450977214+00:00 stderr F I1213 00:20:36.450903 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:36Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-12-13T00:20:36.461098287+00:00 stderr F E1213 00:20:36.461034 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.482473364+00:00 stderr F I1213 00:20:36.482406 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:36Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-12-13T00:20:36.484277562+00:00 stderr F E1213 00:20:36.484247 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.525272389+00:00 stderr F I1213 00:20:36.525211 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:36Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-12-13T00:20:36.526475501+00:00 stderr F E1213 00:20:36.526449 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.596071029+00:00 stderr F E1213 00:20:36.596018 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.607892928+00:00 stderr F I1213 00:20:36.607848 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:36Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}}
2025-12-13T00:20:36.608958067+00:00 stderr F E1213 00:20:36.608910 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:36.771088352+00:00 stderr F I1213 00:20:36.770678 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:36Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), 
Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:20:36.772157051+00:00 stderr F E1213 00:20:36.772096 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:36.996527375+00:00 stderr F E1213 00:20:36.996420 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:37.093697388+00:00 stderr F I1213 00:20:37.093619 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:37Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), 
FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:20:37.094844778+00:00 stderr F E1213 00:20:37.094805 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:37.396998952+00:00 stderr F E1213 00:20:37.396883 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:37.595605892+00:00 stderr F I1213 00:20:37.595503 1 request.go:697] Waited for 1.144089303s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:37.596695151+00:00 stderr F W1213 00:20:37.596636 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:37.596732132+00:00 stderr F E1213 00:20:37.596702 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", 
UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-12-13T00:20:37.735600199+00:00 stderr F I1213 00:20:37.735519       1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:37Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil),
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:20:37.736780360+00:00 stderr F E1213 00:20:37.736753 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:37.796722328+00:00 stderr F E1213 00:20:37.796664 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:37.996489969+00:00 stderr F E1213 00:20:37.996410 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:38.196585088+00:00 stderr F E1213 00:20:38.196504 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:38.396854863+00:00 stderr F E1213 00:20:38.396764 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: 
["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:38.597750884+00:00 stderr F W1213 00:20:38.597291 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:38.597750884+00:00 stderr F E1213 00:20:38.597718 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:38.795344995+00:00 stderr F I1213 00:20:38.795273 1 request.go:697] Waited for 1.362525486s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:38.796560649+00:00 stderr F E1213 00:20:38.796480 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": 
dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:38.995849086+00:00 stderr F W1213 00:20:38.995773       1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:38.995849086+00:00 stderr F E1213 00:20:38.995812       1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-12-13T00:20:39.017584732+00:00 stderr F I1213 00:20:39.017527       1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:39Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:20:39.018922779+00:00 stderr F E1213 00:20:39.018881 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:39.196697836+00:00 stderr F E1213 00:20:39.196645 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:39.396109138+00:00 stderr F E1213 00:20:39.396046 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:39.995086660+00:00 stderr F I1213 00:20:39.994998 1 request.go:697] Waited for 1.365813806s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:39.995864272+00:00 stderr F E1213 00:20:39.995834 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:40.396233776+00:00 stderr F W1213 00:20:40.395838       1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:40.396233776+00:00 stderr F E1213 00:20:40.396182       1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-12-13T00:20:40.595838552+00:00 stderr F E1213 00:20:40.595766       1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit":
dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:40.796524877+00:00 stderr F E1213 00:20:40.796446 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:40.995649531+00:00 stderr F I1213 00:20:40.995435 1 request.go:697] Waited for 1.398755585s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:40.996715859+00:00 stderr F E1213 00:20:40.996693 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:41.196560852+00:00 stderr F E1213 00:20:41.196451 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:41.395784668+00:00 stderr F E1213 00:20:41.395704 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:41.579983868+00:00 stderr F I1213 00:20:41.579889 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:41Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:20:41.580683657+00:00 stderr F E1213 00:20:41.580640 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:41.596532135+00:00 stderr F W1213 00:20:41.596460 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:41.596532135+00:00 stderr F E1213 00:20:41.596504 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:41.795705360+00:00 stderr F E1213 00:20:41.795642 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): 
Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:41.996182260+00:00 stderr F W1213 00:20:41.996114 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:41.996182260+00:00 stderr F E1213 00:20:41.996161 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: 
&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-12-13T00:20:42.195311383+00:00 stderr F I1213 00:20:42.195249 1 request.go:697] Waited for 1.557231281s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:42.196007112+00:00 stderr F E1213 00:20:42.195971 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:42.397538240+00:00 stderr F E1213 00:20:42.396897 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:42.996775700+00:00 stderr F E1213 00:20:42.996727 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:43.195353639+00:00 stderr F I1213 00:20:43.195272 1 request.go:697] Waited for 1.567959121s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:43.397202676+00:00 stderr F W1213 00:20:43.397120 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:43.397202676+00:00 stderr F E1213 00:20:43.397173 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-12-13T00:20:43.596367290+00:00 stderr F E1213 00:20:43.596293 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:43.796636734+00:00 stderr F E1213 00:20:43.796252 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:43.996525649+00:00 stderr F E1213 00:20:43.996469 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:44.196161126+00:00 stderr F E1213 00:20:44.196089 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:44.395422533+00:00 stderr F I1213 00:20:44.395350 1 request.go:697] Waited for 1.598201728s due to client-side throttling, not priority and fairness, request: 
PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:44.396489512+00:00 stderr F E1213 00:20:44.396439 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:44.596277404+00:00 stderr F E1213 00:20:44.596175 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 
10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:44.796416684+00:00 stderr F W1213 00:20:44.796347 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:44.796416684+00:00 stderr F E1213 00:20:44.796385 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:44.996450552+00:00 stderr F E1213 00:20:44.996383 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: 
connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:45.195876074+00:00 stderr F W1213 00:20:45.195790 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:45.195876074+00:00 stderr F E1213 00:20:45.195831 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-12-13T00:20:45.395910111+00:00 stderr F I1213 00:20:45.395818 1 request.go:697] Waited for 1.568102936s due to client-side 
throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:45.396970460+00:00 stderr F E1213 00:20:45.396867 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:45.796603553+00:00 stderr F E1213 00:20:45.796533 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:46.196092144+00:00 stderr F E1213 00:20:46.195992 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:46.395673709+00:00 stderr F E1213 00:20:46.395543 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:46.595510313+00:00 stderr F I1213 00:20:46.595450 1 request.go:697] Waited for 1.567853857s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:46.701632705+00:00 stderr F I1213 00:20:46.701543 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:46Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, 
CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:20:46.702917181+00:00 stderr F E1213 00:20:46.702866 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:46.795474428+00:00 stderr F E1213 00:20:46.795418 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:46.995904616+00:00 stderr F E1213 00:20:46.995824 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:47.196007146+00:00 stderr F E1213 00:20:47.195951 1 base_controller.go:268] 
OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:47.396502696+00:00 stderr F W1213 00:20:47.396422 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:47.396502696+00:00 stderr F E1213 00:20:47.396474 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get 
API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-12-13T00:20:47.595850386+00:00 stderr F I1213 00:20:47.595756 1 request.go:697] Waited for 1.598915166s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:47.596786161+00:00 stderr F E1213 00:20:47.596723 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:47.796210823+00:00 stderr F E1213 00:20:47.796132 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:47.996145347+00:00 stderr F E1213 00:20:47.996079 1 base_controller.go:268] RevisionController reconciliation failed: Get 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:48.196643768+00:00 stderr F E1213 00:20:48.196573 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:48.396886982+00:00 stderr F W1213 00:20:48.396799 1 
base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:48.396886982+00:00 stderr F E1213 00:20:48.396855 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:48.795540970+00:00 stderr F I1213 00:20:48.795473 1 request.go:697] Waited for 1.767784893s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:48.796362471+00:00 stderr F E1213 00:20:48.796329 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:48.995677840+00:00 stderr F W1213 00:20:48.995603 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:48.995677840+00:00 stderr F E1213 00:20:48.995645 1 base_controller.go:268] OAuthClientsController reconciliation failed: unable to ensure existence of a bootstrapped OAuth client "openshift-browser-client": Post "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:49.196222792+00:00 stderr F E1213 00:20:49.196165 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:49.595545607+00:00 stderr F E1213 00:20:49.595480 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:49.995571582+00:00 stderr F I1213 00:20:49.995304 1 request.go:697] Waited for 1.466988507s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:49.996186439+00:00 stderr F E1213 
00:20:49.996150 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:50.195658662+00:00 stderr F E1213 00:20:50.195596 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-12-13T00:20:50.396175132+00:00 stderr F W1213 00:20:50.396098 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.396175132+00:00 stderr F E1213 00:20:50.396140 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-12-13T00:20:50.596740505+00:00 stderr F E1213 00:20:50.596452 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.797007809+00:00 stderr F W1213 00:20:50.796898 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-12-13T00:20:50.797007809+00:00 stderr F E1213 00:20:50.796975 1 base_controller.go:268] OAuthClientsController reconciliation failed: unable to ensure existence of a bootstrapped OAuth client "openshift-browser-client": Post "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.864864291+00:00 stderr F I1213 00:20:50.864787 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:50Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys 
correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:20:50.865817466+00:00 stderr F E1213 00:20:50.865772 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:50.996059590+00:00 stderr F E1213 00:20:50.995998 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.194865605+00:00 stderr F I1213 00:20:51.194803 1 request.go:697] Waited for 1.798202715s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:51.195585105+00:00 stderr F E1213 00:20:51.195508 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.395908770+00:00 stderr F E1213 00:20:51.395840 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:51.595978839+00:00 stderr F E1213 00:20:51.595915 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.795721809+00:00 stderr F W1213 00:20:51.795663 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:51.795721809+00:00 stderr F E1213 00:20:51.795700 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: 
connect: connection refused 2025-12-13T00:20:52.195603150+00:00 stderr F I1213 00:20:52.195022 1 request.go:697] Waited for 1.596585803s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:52.196459013+00:00 stderr F E1213 00:20:52.196422 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:52.397650702+00:00 stderr F E1213 00:20:52.395974 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, 
"oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:52.598790189+00:00 stderr F W1213 00:20:52.596790 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:52.598790189+00:00 stderr F E1213 00:20:52.596840 1 base_controller.go:268] OAuthClientsController reconciliation failed: unable to ensure existence of a bootstrapped OAuth client "openshift-browser-client": Post "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:52.799081984+00:00 stderr F E1213 00:20:52.799017 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.195121801+00:00 stderr F I1213 00:20:53.195067 1 request.go:697] Waited for 1.738172583s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:53.196073847+00:00 stderr F E1213 00:20:53.196037 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:53.398185711+00:00 stderr F E1213 00:20:53.395593 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.795997916+00:00 stderr F E1213 00:20:53.795913 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.996205188+00:00 stderr F E1213 00:20:53.996145 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.195344292+00:00 stderr F I1213 00:20:54.195280 1 request.go:697] Waited for 1.791372599s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:54.196760700+00:00 stderr F E1213 00:20:54.196187 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": 
dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:54.400133639+00:00 stderr F W1213 00:20:54.398966 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.400133639+00:00 stderr F E1213 00:20:54.399080 1 base_controller.go:268] OAuthClientsController reconciliation failed: unable to ensure existence of a bootstrapped OAuth client "openshift-browser-client": Post "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.596293742+00:00 stderr F W1213 00:20:54.596086 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.596293742+00:00 stderr F E1213 00:20:54.596125 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", 
Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-12-13T00:20:54.798741085+00:00 stderr F E1213 00:20:54.798569 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.995977677+00:00 stderr F E1213 00:20:54.995701 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:55.195838820+00:00 stderr F I1213 00:20:55.195732 1 request.go:697] Waited for 1.716533079s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:55.196888678+00:00 stderr F E1213 00:20:55.196823 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.396648109+00:00 stderr F E1213 00:20:55.396556 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.597728645+00:00 stderr F W1213 00:20:55.597513 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.597728645+00:00 stderr F E1213 00:20:55.597697 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:55.997128273+00:00 stderr F E1213 00:20:55.996822 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:56.196006230+00:00 stderr F E1213 00:20:56.195948 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:56.395520833+00:00 stderr F I1213 
00:20:56.395443 1 request.go:697] Waited for 1.954461381s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:56.396664744+00:00 stderr F W1213 00:20:56.396580 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:56.396664744+00:00 stderr F E1213 00:20:56.396647 1 base_controller.go:268] OAuthClientsController reconciliation failed: unable to ensure existence of a bootstrapped OAuth client "openshift-browser-client": Post "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:56.796214476+00:00 stderr F E1213 00:20:56.796133 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:56.944538168+00:00 stderr F I1213 00:20:56.944466 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:20:56Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", 
Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:20:56.945558975+00:00 stderr F E1213 00:20:56.945503 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Put "https://10.217.4.1:443/apis/config.openshift.io/v1/clusteroperators/authentication/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.005168214+00:00 stderr F E1213 00:20:57.005113 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.396140444+00:00 stderr F E1213 00:20:57.396070 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.596741168+00:00 stderr F I1213 00:20:57.596637 1 request.go:697] Waited for 1.116121289s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:57.597552490+00:00 stderr F W1213 00:20:57.597494 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-12-13T00:20:57.597552490+00:00 stderr F E1213 00:20:57.597537 1 base_controller.go:268] OAuthClientsController reconciliation failed: unable to ensure existence of a bootstrapped OAuth client "openshift-browser-client": Post "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.795878682+00:00 stderr F E1213 00:20:57.795815 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:57.995840157+00:00 stderr F E1213 00:20:57.995756 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:58.195987858+00:00 stderr F E1213 00:20:58.195862 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.402065719+00:00 stderr F W1213 00:20:58.401165 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.402065719+00:00 stderr F E1213 00:20:58.401221 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.795973259+00:00 stderr F I1213 00:20:58.795333 1 request.go:697] Waited for 1.17586754s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:58.797025787+00:00 stderr F E1213 00:20:58.796525 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 
10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:20:58.996087679+00:00 stderr F W1213 00:20:58.996008 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:58.996087679+00:00 stderr F E1213 00:20:58.996056 1 base_controller.go:268] OAuthClientsController reconciliation failed: unable to ensure existence of a bootstrapped OAuth client "openshift-browser-client": Post "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-12-13T00:20:59.397371467+00:00 stderr F E1213 00:20:59.397303 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.795799289+00:00 stderr F I1213 00:20:59.795742 1 request.go:697] Waited for 1.199724254s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status 2025-12-13T00:20:59.796518918+00:00 stderr F E1213 00:20:59.796483 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:59.995890228+00:00 stderr F E1213 00:20:59.995837 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.196756948+00:00 stderr F W1213 00:21:00.196595 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.196756948+00:00 stderr F E1213 00:21:00.196653 1 base_controller.go:268] OAuthClientsController reconciliation failed: unable to ensure existence of a bootstrapped OAuth client "openshift-browser-client": Post "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.395912602+00:00 stderr F E1213 00:21:00.395876 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.596901106+00:00 stderr F W1213 00:21:00.596094 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.596901106+00:00 stderr F E1213 00:21:00.596155 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.796575874+00:00 stderr F W1213 00:21:00.796485 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:00.796575874+00:00 stderr F E1213 00:21:00.796525 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-12-13T00:21:00.995443881+00:00 stderr F I1213 00:21:00.995366 1 request.go:697] Waited for 1.197719371s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster 2025-12-13T00:21:01.608360340+00:00 stderr F E1213 00:21:01.596435 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:01.795910861+00:00 stderr F W1213 00:21:01.795831 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:01.795910861+00:00 stderr F E1213 00:21:01.795877 1 base_controller.go:268] OAuthClientsController reconciliation failed: unable to ensure existence of a bootstrapped OAuth client "openshift-browser-client": Post "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:01.996637547+00:00 stderr F E1213 00:21:01.996571 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.191155556+00:00 stderr F E1213 00:21:02.191101 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.195893713+00:00 stderr F E1213 00:21:02.195855 1 
base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.197454946+00:00 stderr F E1213 00:21:02.197418 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.208337150+00:00 stderr F E1213 00:21:02.208302 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.229760627+00:00 stderr F E1213 00:21:02.229705 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.272026559+00:00 stderr F E1213 00:21:02.271882 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.353918108+00:00 stderr F E1213 00:21:02.353604 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.396600480+00:00 stderr F W1213 00:21:02.396516 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-12-13T00:21:02.396600480+00:00 stderr F E1213 00:21:02.396579 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.516418293+00:00 stderr F E1213 00:21:02.516350 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.796953293+00:00 stderr F E1213 00:21:02.796661 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:02.838236258+00:00 stderr F E1213 00:21:02.838144 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:03.396235485+00:00 stderr F E1213 00:21:03.396145 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:03.480559240+00:00 stderr F E1213 00:21:03.480493 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:03.595825150+00:00 stderr F E1213 00:21:03.595754 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-12-13T00:21:03.796543227+00:00 stderr F W1213 00:21:03.796444 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:03.796543227+00:00 stderr F E1213 00:21:03.796488 1 base_controller.go:268] OAuthClientsController reconciliation failed: unable to ensure existence of a bootstrapped OAuth client "openshift-browser-client": Post "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:03.996396410+00:00 stderr F W1213 00:21:03.996307 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:03.996396410+00:00 stderr F E1213 00:21:03.996348 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:04.396200288+00:00 stderr F E1213 00:21:04.396114 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:04.596316738+00:00 stderr F E1213 00:21:04.596253 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:04.762737500+00:00 stderr F E1213 00:21:04.762654 1 base_controller.go:268] IngressStateController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:04.796731427+00:00 stderr F E1213 00:21:04.796666 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.396411749+00:00 stderr F E1213 00:21:05.396329 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.596659432+00:00 stderr F W1213 00:21:05.596581 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.596659432+00:00 stderr F E1213 00:21:05.596624 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.803982827+00:00 stderr F E1213 00:21:05.803291 1 base_controller.go:268] RevisionController reconciliation failed: Get "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:06.196162790+00:00 stderr F E1213 00:21:06.196083 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:06.396542617+00:00 stderr F W1213 00:21:06.396420 1 base_controller.go:232] 
Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:06.396542617+00:00 stderr F E1213 00:21:06.396470 1 base_controller.go:268] OAuthClientsController reconciliation failed: unable to ensure existence of a bootstrapped OAuth client "openshift-browser-client": Post "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:06.720078437+00:00 stderr F E1213 00:21:06.720032 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:07.324891168+00:00 stderr F E1213 00:21:07.324814 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:08.248079800+00:00 stderr F E1213 00:21:08.247976 1 base_controller.go:268] APIServerStaticResources reconciliation failed: ["oauth-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/apiserver-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa": 
dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_binding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/RBAC/useroauthaccesstokens_clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:useroauthaccesstoken-manager": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-apiserver/oauth-apiserver-pdb.yaml" (string): Delete "https://10.217.4.1:443/apis/policy/v1/namespaces/openshift-oauth-apiserver/poddisruptionbudgets/oauth-apiserver-pdb": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:09.113568265+00:00 stderr F E1213 00:21:09.113493 1 base_controller.go:268] OpenshiftAuthenticationStaticResources reconciliation failed: ["oauth-openshift/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/authentication-clusterrolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-authentication": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/serviceaccount.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/oauth-service.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/services/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_role.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, "oauth-openshift/trust_distribution_rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/rolebindings/system:openshift:oauth-servercert-trust": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:11.039879126+00:00 stderr F W1213 00:21:11.039565 1 base_controller.go:232] Updating status of "WellKnownReadyController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:11.039911987+00:00 stderr F E1213 00:21:11.039858 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check 
kube-apiserver that it deploys correctly) 2025-12-13T00:21:11.519877319+00:00 stderr F W1213 00:21:11.519797 1 base_controller.go:232] Updating status of "OAuthClientsController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:11.519877319+00:00 stderr F E1213 00:21:11.519848 1 base_controller.go:268] OAuthClientsController reconciliation failed: unable to ensure existence of a bootstrapped OAuth client "openshift-browser-client": Post "https://10.217.4.1:443/apis/oauth.openshift.io/v1/oauthclients": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:11.950956802+00:00 stderr F E1213 00:21:11.950862 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:11.957654152+00:00 stderr F E1213 00:21:11.957582 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:11.968920726+00:00 stderr F E1213 00:21:11.968849 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:11.993609313+00:00 stderr F E1213 00:21:11.993403 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:12.035136483+00:00 stderr F E1213 00:21:12.035085 1 base_controller.go:268] 
TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:12.055571435+00:00 stderr F E1213 00:21:12.055513 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:12.117105215+00:00 stderr F E1213 00:21:12.117050 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:12.281383649+00:00 stderr F E1213 00:21:12.281321 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:12.447406899+00:00 stderr F E1213 00:21:12.447342 1 base_controller.go:268] IngressStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:12.603625424+00:00 stderr F E1213 00:21:12.603545 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:13.245954397+00:00 stderr F E1213 00:21:13.245732 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 
2025-12-13T00:21:14.453033409+00:00 stderr F E1213 00:21:14.452913 1 base_controller.go:268] APIServiceController_openshift-apiserver reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:14.531354953+00:00 stderr F E1213 00:21:14.531276 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:15.639902136+00:00 stderr F E1213 00:21:15.639400 1 base_controller.go:268] OAuthAPIServerControllerWorkloadController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:15.840339905+00:00 stderr F W1213 00:21:15.840093 1 base_controller.go:232] Updating status of "RouterCertsDomainValidationController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:15.840339905+00:00 stderr F E1213 00:21:15.840304 1 base_controller.go:268] RouterCertsDomainValidationController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:16.440914881+00:00 stderr F E1213 00:21:16.440625 1 base_controller.go:268] PayloadConfig reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/authentications/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:17.093627645+00:00 stderr F E1213 00:21:17.093572 1 base_controller.go:268] TrustDistributionController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/oauth-serving-cert": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:20.297106930+00:00 stderr F E1213 00:21:20.296618 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-12-13T00:21:20.866649959+00:00 stderr F I1213 00:21:20.865151 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:20Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", 
APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:20.879708581+00:00 stderr F I1213 00:21:20.879644 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"5e81203d-c202-48ae-b652-35b68d7e5586", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Available changed from True to False ("WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", 
UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)") 2025-12-13T00:21:20.940676337+00:00 stderr F I1213 00:21:20.940618 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:20Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:20.946590116+00:00 stderr F E1213 00:21:20.946543 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:20.953053441+00:00 stderr F I1213 00:21:20.952996 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:20Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 
3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:20.958231970+00:00 stderr F E1213 00:21:20.958173 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:20.969354001+00:00 stderr F I1213 00:21:20.969294 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:20Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", 
APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:20.974433858+00:00 stderr F E1213 00:21:20.974380 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:20.995063904+00:00 stderr F I1213 00:21:20.995015 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:20Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:20.998758194+00:00 stderr F E1213 00:21:20.998717 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:21.039654007+00:00 stderr F I1213 00:21:21.039582 1 status_controller.go:218] clusteroperator/authentication 
diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:21Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:21.043604344+00:00 stderr F E1213 00:21:21.043549 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on 
clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:21.124698732+00:00 stderr F I1213 00:21:21.124635 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:21Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is 
well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:21.129725298+00:00 stderr F E1213 00:21:21.129702 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:21.290848606+00:00 stderr F I1213 00:21:21.290799 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:21Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", 
FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:21.296837888+00:00 stderr F E1213 00:21:21.296773 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:21.618037435+00:00 stderr F I1213 00:21:21.617709 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:21Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:21.627244134+00:00 stderr F E1213 00:21:21.627190 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:22.074570945+00:00 stderr F I1213 00:21:22.074507 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:22Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, 
CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:22.078058110+00:00 stderr F E1213 00:21:22.078040 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:22.271087588+00:00 stderr F I1213 00:21:22.269337 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:22Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: 
\u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:22.274225823+00:00 stderr F E1213 00:21:22.274134 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:24.835722144+00:00 stderr F I1213 00:21:24.835405 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is 
well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:24Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:24.840440852+00:00 stderr F E1213 00:21:24.840374 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:29.962568031+00:00 stderr F I1213 00:21:29.962010 1 status_controller.go:218] clusteroperator/authentication 
diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:29Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:29.969600731+00:00 stderr F E1213 00:21:29.969561 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on 
clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:31.557977522+00:00 stderr F I1213 00:21:31.557417 1 helpers.go:184] lister was stale at resourceVersion=42933, live get showed resourceVersion=43016 2025-12-13T00:21:31.558143947+00:00 stderr F E1213 00:21:31.558108 1 base_controller.go:268] WellKnownReadyController reconciliation failed: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"9374e9d0-f290-4faa-94c7-262a199a1d45", ResourceVersion:"42929", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly) 2025-12-13T00:21:36.145538185+00:00 stderr F I1213 00:21:36.145143 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:36Z","message":"WellKnownAvailable: The well-known endpoint is not
yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:36.150131800+00:00 stderr F E1213 00:21:36.150053 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes to the latest version and try again 2025-12-13T00:21:40.211701780+00:00 stderr F I1213 00:21:40.211359 1 status_controller.go:218] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"All is 
well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:06:42Z","message":"AuthenticatorCertKeyProgressing: All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2025-12-13T00:21:40Z","message":"WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: \u0026v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9374e9d0-f290-4faa-94c7-262a199a1d45\", ResourceVersion:\"42929\", Generation:0, CreationTimestamp:time.Date(2024, time.June, 26, 12, 38, 3, 0, time.Local), DeletionTimestamp:\u003cnil\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.August, 13, 20, 8, 43, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00291f1d0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)","reason":"WellKnown_NotReady","status":"False","type":"Available"},{"lastTransitionTime":"2024-06-26T12:47:04Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:47:04Z","reason":"NoData","status":"Unknown","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:21:40.216889718+00:00 stderr F E1213 00:21:40.216817 1 base_controller.go:268] StatusSyncer_authentication reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "authentication": the object has been modified; please apply your changes 
to the latest version and try again 2025-12-13T00:21:48.397654477+00:00 stderr F I1213 00:21:48.397209 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:51.267623464+00:00 stderr F I1213 00:21:51.267105 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:52.450791307+00:00 stderr F I1213 00:21:52.450679 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:21:54.580321712+00:00 stderr F I1213 00:21:54.579715 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:55.221062225+00:00 stderr F I1213 00:21:55.221008 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:21:56.002453928+00:00 stderr F I1213 00:21:56.002054 1 reflector.go:351] Caches populated for *v1.Console from github.com/openshift/client-go/config/informers/externalversions/factory.go:125
[next archive entry: home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer/0.log]
2025-08-13T20:01:08.525355365+00:00 stderr F I0813 20:01:08.494832 1 cmd.go:92] &{ true {false} installer true map[cert-dir:0xc0005d0c80 cert-secrets:0xc0005d0a00 configmaps:0xc0005d05a0 namespace:0xc0005d03c0 optional-configmaps:0xc0005d06e0 optional-secrets:0xc0005d0640 pod:0xc0005d0460 pod-manifest-dir:0xc0005d0820 resource-dir:0xc0005d0780 revision:0xc0005d0320 secrets:0xc0005d0500 v:0xc0005d1680] [0xc0005d1680 0xc0005d0320 0xc0005d03c0 0xc0005d0460 0xc0005d0780 0xc0005d0820 0xc0005d05a0 0xc0005d06e0 0xc0005d0640 0xc0005d0500 0xc0005d0c80 0xc0005d0a00] [] map[cert-configmaps:0xc0005d0aa0 cert-dir:0xc0005d0c80 cert-secrets:0xc0005d0a00 configmaps:0xc0005d05a0 help:0xc0005d1a40 kubeconfig:0xc0005d0280 log-flush-frequency:0xc0005d15e0 namespace:0xc0005d03c0 optional-cert-configmaps:0xc0005d0be0 optional-cert-secrets:0xc0005d0b40 optional-configmaps:0xc0005d06e0 optional-secrets:0xc0005d0640 pod:0xc0005d0460 pod-manifest-dir:0xc0005d0820 pod-manifests-lock-file:0xc0005d0960 resource-dir:0xc0005d0780 revision:0xc0005d0320 secrets:0xc0005d0500 timeout-duration:0xc0005d08c0 v:0xc0005d1680 vmodule:0xc0005d1720] [0xc0005d0280 0xc0005d0320 0xc0005d03c0 0xc0005d0460 0xc0005d0500 0xc0005d05a0 0xc0005d0640 0xc0005d06e0 0xc0005d0780 0xc0005d0820 0xc0005d08c0 0xc0005d0960 0xc0005d0a00 0xc0005d0aa0 0xc0005d0b40 0xc0005d0be0 0xc0005d0c80 0xc0005d15e0 0xc0005d1680 0xc0005d1720 0xc0005d1a40] [0xc0005d0aa0 0xc0005d0c80 0xc0005d0a00 0xc0005d05a0 0xc0005d1a40 0xc0005d0280 0xc0005d15e0 0xc0005d03c0 0xc0005d0be0 0xc0005d0b40 0xc0005d06e0 0xc0005d0640 0xc0005d0460
0xc0005d0820 0xc0005d0960 0xc0005d0780 0xc0005d0320 0xc0005d0500 0xc0005d08c0 0xc0005d1680 0xc0005d1720] map[104:0xc0005d1a40 118:0xc0005d1680] [] -1 0 0xc000567560 true 0x215dc20 []} 2025-08-13T20:01:08.525355365+00:00 stderr F I0813 20:01:08.518481 1 cmd.go:93] (*installerpod.InstallOptions)(0xc000581380)({ 2025-08-13T20:01:08.525355365+00:00 stderr F KubeConfig: (string) "", 2025-08-13T20:01:08.525355365+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-08-13T20:01:08.525355365+00:00 stderr F Revision: (string) (len=1) "7", 2025-08-13T20:01:08.525355365+00:00 stderr F NodeName: (string) "", 2025-08-13T20:01:08.525355365+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-scheduler", 2025-08-13T20:01:08.525355365+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-scheduler-pod", 2025-08-13T20:01:08.525355365+00:00 stderr F SecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=12) "serving-cert" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=18) "kube-scheduler-pod", 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=6) "config", 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=17) "serviceaccount-ca", 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=20) "scheduler-kubeconfig", 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.525355365+00:00 
stderr F (string) (len=16) "policy-configmap" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F CertSecretNames: ([]string) (len=1 cap=1) { 2025-08-13T20:01:08.525355365+00:00 stderr F (string) (len=30) "kube-scheduler-client-cert-key" 2025-08-13T20:01:08.525355365+00:00 stderr F }, 2025-08-13T20:01:08.525355365+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) , 2025-08-13T20:01:08.525355365+00:00 stderr F CertConfigMapNamePrefixes: ([]string) , 2025-08-13T20:01:08.525355365+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) , 2025-08-13T20:01:08.525355365+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", 2025-08-13T20:01:08.525355365+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:01:08.525355365+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-08-13T20:01:08.525355365+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-08-13T20:01:08.525355365+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-08-13T20:01:08.525355365+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-08-13T20:01:08.525355365+00:00 stderr F KubeletVersion: (string) "" 2025-08-13T20:01:08.525355365+00:00 stderr F }) 2025-08-13T20:01:08.558026046+00:00 stderr F I0813 20:01:08.545567 1 cmd.go:410] Getting controller reference for node crc 2025-08-13T20:01:09.709344395+00:00 stderr F I0813 20:01:09.708316 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2025-08-13T20:01:10.010008738+00:00 stderr F I0813 20:01:10.008949 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2025-08-13T20:01:40.010064498+00:00 stderr F I0813 20:01:40.009280 1 cmd.go:521] Getting installer pods for node crc 2025-08-13T20:01:52.020591303+00:00 stderr F I0813 20:01:52.020372 1 cmd.go:539] Latest installer revision for node crc is: 7 
2025-08-13T20:01:52.020591303+00:00 stderr F I0813 20:01:52.020447 1 cmd.go:428] Querying kubelet version for node crc 2025-08-13T20:01:56.148918597+00:00 stderr F I0813 20:01:56.130444 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-08-13T20:01:56.148918597+00:00 stderr F I0813 20:01:56.130517 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7" ... 2025-08-13T20:01:56.148918597+00:00 stderr F I0813 20:01:56.131123 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7" ... 2025-08-13T20:01:56.148918597+00:00 stderr F I0813 20:01:56.131137 1 cmd.go:226] Getting secrets ... 2025-08-13T20:01:59.477298721+00:00 stderr F I0813 20:01:59.477140 1 copy.go:32] Got secret openshift-kube-scheduler/localhost-recovery-client-token-7 2025-08-13T20:02:01.293670193+00:00 stderr F I0813 20:02:01.284574 1 copy.go:32] Got secret openshift-kube-scheduler/serving-cert-7 2025-08-13T20:02:01.293670193+00:00 stderr F I0813 20:02:01.284669 1 cmd.go:239] Getting config maps ... 
2025-08-13T20:02:03.471152171+00:00 stderr F I0813 20:02:03.462088 1 copy.go:60] Got configMap openshift-kube-scheduler/config-7 2025-08-13T20:02:05.728719872+00:00 stderr F I0813 20:02:05.722475 1 copy.go:60] Got configMap openshift-kube-scheduler/kube-scheduler-cert-syncer-kubeconfig-7 2025-08-13T20:02:09.366901882+00:00 stderr F I0813 20:02:09.366325 1 copy.go:60] Got configMap openshift-kube-scheduler/kube-scheduler-pod-7 2025-08-13T20:02:13.052766894+00:00 stderr F I0813 20:02:13.050491 1 copy.go:60] Got configMap openshift-kube-scheduler/scheduler-kubeconfig-7 2025-08-13T20:02:15.100946713+00:00 stderr F I0813 20:02:15.096609 1 copy.go:60] Got configMap openshift-kube-scheduler/serviceaccount-ca-7 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.827442 1 copy.go:52] Failed to get config map openshift-kube-scheduler/policy-configmap-7: configmaps "policy-configmap-7" not found 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.827567 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.828621 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token/ca.crt" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829105 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token/namespace" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829208 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829339 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/localhost-recovery-client-token/token" ... 
2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829444 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/serving-cert" ... 2025-08-13T20:02:17.837092107+00:00 stderr F I0813 20:02:17.829640 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/serving-cert/tls.key" ... 2025-08-13T20:02:17.849720188+00:00 stderr F I0813 20:02:17.831747 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/secrets/serving-cert/tls.crt" ... 2025-08-13T20:02:17.850252683+00:00 stderr F I0813 20:02:17.850183 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/config" ... 2025-08-13T20:02:17.859245109+00:00 stderr F I0813 20:02:17.858457 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/config/config.yaml" ... 2025-08-13T20:02:17.859245109+00:00 stderr F I0813 20:02:17.858690 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-cert-syncer-kubeconfig" ... 2025-08-13T20:02:17.859245109+00:00 stderr F I0813 20:02:17.858969 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig" ... 2025-08-13T20:02:17.859245109+00:00 stderr F I0813 20:02:17.859105 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-pod" ... 2025-08-13T20:02:17.859902438+00:00 stderr F I0813 20:02:17.859387 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-pod/forceRedeploymentReason" ... 
2025-08-13T20:02:17.859902438+00:00 stderr F I0813 20:02:17.859545 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-pod/pod.yaml" ... 2025-08-13T20:02:17.859902438+00:00 stderr F I0813 20:02:17.859682 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/kube-scheduler-pod/version" ... 2025-08-13T20:02:17.859902438+00:00 stderr F I0813 20:02:17.859864 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/scheduler-kubeconfig" ... 2025-08-13T20:02:17.860032522+00:00 stderr F I0813 20:02:17.859969 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/scheduler-kubeconfig/kubeconfig" ... 2025-08-13T20:02:17.860201287+00:00 stderr F I0813 20:02:17.860141 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/serviceaccount-ca" ... 2025-08-13T20:02:17.860311810+00:00 stderr F I0813 20:02:17.860253 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/configmaps/serviceaccount-ca/ca-bundle.crt" ... 2025-08-13T20:02:17.860493775+00:00 stderr F I0813 20:02:17.860418 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-certs" ... 2025-08-13T20:02:17.860493775+00:00 stderr F I0813 20:02:17.860480 1 cmd.go:226] Getting secrets ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.474599 1 copy.go:32] Got secret openshift-kube-scheduler/kube-scheduler-client-cert-key 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.474675 1 cmd.go:239] Getting config maps ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.474688 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key" ... 
2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.474734 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.crt" ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.475101 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key" ... 2025-08-13T20:02:20.477510320+00:00 stderr F I0813 20:02:20.475229 1 cmd.go:332] Getting pod configmaps/kube-scheduler-pod-7 -n openshift-kube-scheduler 2025-08-13T20:02:21.118620340+00:00 stderr F I0813 20:02:21.118522 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:02:21.118620340+00:00 stderr F I0813 20:02:21.118597 1 cmd.go:376] Writing a pod under "kube-scheduler-pod.yaml" key 2025-08-13T20:02:21.118620340+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"7","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10259 )')\" ]; do\n echo -n \".\"\n sleep 
1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocatio
n=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/con
figmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 -v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:02:21.176472970+00:00 stderr F I0813 20:02:21.152955 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7/kube-scheduler-pod.yaml" ... 
2025-08-13T20:02:21.176472970+00:00 stderr F I0813 20:02:21.153271 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-scheduler-pod.yaml" ... 2025-08-13T20:02:21.176472970+00:00 stderr F I0813 20:02:21.153282 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-scheduler-pod.yaml" ... 2025-08-13T20:02:21.176472970+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"7","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-7"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10259 )')\" ]; do\n echo -n \".\"\n sleep 
1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocatio
n=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/con
figmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 -v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} ././@LongLink0000644000000000000000000000024600000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015117130646033027 5ustar zuulzuul././@LongLink0000644000000000000000000000030100000000000011575 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015117130654033026 5ustar zuulzuul././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000001202615117130646033032 0ustar zuulzuul2025-08-13T20:08:10.249757059+00:00 stderr F I0813 20:08:10.244741 1 observer_polling.go:159] Starting file observer 2025-08-13T20:08:10.252956431+00:00 stderr F I0813 20:08:10.252847 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-08-13T20:08:10.353970297+00:00 stderr F I0813 20:08:10.353841 1 base_controller.go:73] Caches are synced for CertSyncController 2025-08-13T20:08:10.353970297+00:00 stderr F I0813 20:08:10.353922 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ... 
2025-08-13T20:08:10.354230874+00:00 stderr F I0813 20:08:10.354151 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:08:10.354230874+00:00 stderr F I0813 20:08:10.354188 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.322048498+00:00 stderr F I0813 20:09:06.319571 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.322048498+00:00 stderr F I0813 20:09:06.319662 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.322048498+00:00 stderr F I0813 20:09:06.320047 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.322048498+00:00 stderr F I0813 20:09:06.320067 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.612236448+00:00 stderr F I0813 20:09:06.612141 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.612236448+00:00 stderr F I0813 20:09:06.612204 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.612404873+00:00 stderr F I0813 20:09:06.612359 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.612404873+00:00 stderr F I0813 20:09:06.612391 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:09:06.612481265+00:00 stderr F I0813 20:09:06.612449 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:09:06.612481265+00:00 stderr F I0813 20:09:06.612458 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:19:06.324387120+00:00 stderr F I0813 20:19:06.324231 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:19:06.324387120+00:00 stderr F I0813 20:19:06.324298 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:19:06.612445676+00:00 stderr F I0813 20:19:06.612323 1 
certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:19:06.612445676+00:00 stderr F I0813 20:19:06.612387 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:19:06.612665402+00:00 stderr F I0813 20:19:06.612603 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:19:06.612665402+00:00 stderr F I0813 20:19:06.612632 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:29:06.324989539+00:00 stderr F I0813 20:29:06.324132 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:29:06.324989539+00:00 stderr F I0813 20:29:06.324208 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:29:06.324989539+00:00 stderr F I0813 20:29:06.324741 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:29:06.324989539+00:00 stderr F I0813 20:29:06.324753 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:29:06.613248366+00:00 stderr F I0813 20:29:06.613183 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:29:06.613663758+00:00 stderr F I0813 20:29:06.613587 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:29:06.614190093+00:00 stderr F I0813 20:29:06.614167 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:29:06.614294426+00:00 stderr F I0813 20:29:06.614275 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:39:06.325265148+00:00 stderr F I0813 20:39:06.324939 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:39:06.326232376+00:00 stderr F I0813 20:39:06.325074 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:39:06.326451882+00:00 stderr F I0813 20:39:06.326356 1 certsync_controller.go:66] Syncing configmaps: [] 
2025-08-13T20:39:06.326451882+00:00 stderr F I0813 20:39:06.326386 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-08-13T20:39:06.614742214+00:00 stderr F I0813 20:39:06.614606 1 certsync_controller.go:66] Syncing configmaps: [] 2025-08-13T20:39:06.614742214+00:00 stderr F I0813 20:39:06.614657 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] ././@LongLink0000644000000000000000000000030600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000002164015117130654033033 0ustar zuulzuul2025-12-13T00:10:45.283521340+00:00 stderr F I1213 00:10:45.283195 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-12-13T00:10:45.283521340+00:00 stderr F I1213 00:10:45.283339 1 observer_polling.go:159] Starting file observer 2025-12-13T00:10:55.286093938+00:00 stderr F W1213 00:10:55.285921 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-12-13T00:10:55.286093938+00:00 stderr F W1213 00:10:55.285967 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-12-13T00:10:55.286093938+00:00 stderr F I1213 00:10:55.286052 1 trace.go:236] Trace[539469756]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 (13-Dec-2025 00:10:45.283) (total time: 10002ms): 
2025-12-13T00:10:55.286093938+00:00 stderr F Trace[539469756]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:10:55.285) 2025-12-13T00:10:55.286093938+00:00 stderr F Trace[539469756]: [10.002705478s] [10.002705478s] END 2025-12-13T00:10:55.286170050+00:00 stderr F I1213 00:10:55.286100 1 trace.go:236] Trace[789341815]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 (13-Dec-2025 00:10:45.283) (total time: 10002ms): 2025-12-13T00:10:55.286170050+00:00 stderr F Trace[789341815]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:10:55.285) 2025-12-13T00:10:55.286170050+00:00 stderr F Trace[789341815]: [10.00276561s] [10.00276561s] END 2025-12-13T00:10:55.286170050+00:00 stderr F E1213 00:10:55.286139 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-12-13T00:10:55.286191370+00:00 stderr F E1213 00:10:55.286168 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-12-13T00:11:01.883672098+00:00 stderr F I1213 00:11:01.883581 1 base_controller.go:73] Caches are synced for CertSyncController 2025-12-13T00:11:01.883672098+00:00 stderr F I1213 00:11:01.883611 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...
2025-12-13T00:11:01.884080760+00:00 stderr F I1213 00:11:01.884031 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:11:01.884080760+00:00 stderr F I1213 00:11:01.884051 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:13:21.042292890+00:00 stderr F I1213 00:13:21.042219 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:13:21.042292890+00:00 stderr F I1213 00:13:21.042253 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:13:21.054286333+00:00 stderr F I1213 00:13:21.054221 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:13:21.054286333+00:00 stderr F I1213 00:13:21.054252 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:13:21.076224630+00:00 stderr F I1213 00:13:21.075171 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:13:21.076224630+00:00 stderr F I1213 00:13:21.075197 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:13:21.087916443+00:00 stderr F I1213 00:13:21.087598 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:13:21.087916443+00:00 stderr F I1213 00:13:21.087624 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:13:21.133867707+00:00 stderr F I1213 00:13:21.133807 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:13:21.133867707+00:00 stderr F I1213 00:13:21.133829 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:13:21.182771050+00:00 stderr F I1213 00:13:21.182697 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:13:21.182771050+00:00 stderr F I1213 00:13:21.182724 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:13:25.862792619+00:00 stderr F I1213 00:13:25.862723 1 
certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:13:25.862792619+00:00 stderr F I1213 00:13:25.862749 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:13:25.881298502+00:00 stderr F I1213 00:13:25.881093 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:13:25.881298502+00:00 stderr F I1213 00:13:25.881133 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:13:25.901318224+00:00 stderr F I1213 00:13:25.901080 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:13:25.901318224+00:00 stderr F I1213 00:13:25.901103 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:13:25.907815782+00:00 stderr F I1213 00:13:25.907717 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:13:25.907815782+00:00 stderr F I1213 00:13:25.907732 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:13:25.920363254+00:00 stderr F I1213 00:13:25.920300 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:13:25.920363254+00:00 stderr F I1213 00:13:25.920330 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:13:25.927956739+00:00 stderr F I1213 00:13:25.927558 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:13:25.927956739+00:00 stderr F I1213 00:13:25.927581 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:21:11.067768719+00:00 stderr F I1213 00:21:11.067725 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:21:11.067837421+00:00 stderr F I1213 00:21:11.067818 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:21:11.068058577+00:00 stderr F I1213 00:21:11.068008 1 certsync_controller.go:66] Syncing configmaps: [] 
2025-12-13T00:21:11.068058577+00:00 stderr F I1213 00:21:11.068046 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:21:11.068284023+00:00 stderr F I1213 00:21:11.068257 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:21:11.068284023+00:00 stderr F I1213 00:21:11.068269 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:21:11.068351024+00:00 stderr F I1213 00:21:11.068322 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:21:11.068351024+00:00 stderr F I1213 00:21:11.068331 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:21:11.068513850+00:00 stderr F I1213 00:21:11.068427 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:21:11.068513850+00:00 stderr F I1213 00:21:11.068465 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:21:11.068839288+00:00 stderr F I1213 00:21:11.068806 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:21:11.068839288+00:00 stderr F I1213 00:21:11.068825 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:21:22.663869858+00:00 stderr F I1213 00:21:22.663801 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:21:22.663869858+00:00 stderr F I1213 00:21:22.663837 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] 2025-12-13T00:21:22.664009731+00:00 stderr F I1213 00:21:22.663971 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:21:22.664009731+00:00 stderr F I1213 00:21:22.663994 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] ././@LongLink0000644000000000000000000000030600000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000000651215117130654033034 0ustar zuulzuul2025-12-13T00:06:37.706140991+00:00 stderr F I1213 00:06:37.705871 1 base_controller.go:67] Waiting for caches to sync for CertSyncController 2025-12-13T00:06:37.706140991+00:00 stderr F I1213 00:06:37.705885 1 observer_polling.go:159] Starting file observer 2025-12-13T00:06:47.711156497+00:00 stderr F W1213 00:06:47.710284 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-12-13T00:06:47.711156497+00:00 stderr F W1213 00:06:47.710293 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-12-13T00:06:47.711156497+00:00 stderr F I1213 00:06:47.710461 1 trace.go:236] Trace[194606047]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 (13-Dec-2025 00:06:37.705) (total time: 10004ms): 2025-12-13T00:06:47.711156497+00:00 stderr F Trace[194606047]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10004ms (00:06:47.710) 2025-12-13T00:06:47.711156497+00:00 stderr F Trace[194606047]: [10.004538672s] [10.004538672s] END 2025-12-13T00:06:47.711156497+00:00 stderr F I1213 00:06:47.710470 1 trace.go:236] Trace[309883925]: "Reflector ListAndWatch"
name:k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229 (13-Dec-2025 00:06:37.705) (total time: 10004ms): 2025-12-13T00:06:47.711156497+00:00 stderr F Trace[309883925]: ---"Objects listed" error:Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10004ms (00:06:47.710) 2025-12-13T00:06:47.711156497+00:00 stderr F Trace[309883925]: [10.004595455s] [10.004595455s] END 2025-12-13T00:06:47.711156497+00:00 stderr F E1213 00:06:47.710508 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-12-13T00:06:47.711156497+00:00 stderr F E1213 00:06:47.710594 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 2025-12-13T00:06:54.310455006+00:00 stderr F I1213 00:06:54.306721 1 base_controller.go:73] Caches are synced for CertSyncController 2025-12-13T00:06:54.310455006+00:00 stderr F I1213 00:06:54.306749 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...
2025-12-13T00:06:54.310455006+00:00 stderr F I1213 00:06:54.306797 1 certsync_controller.go:66] Syncing configmaps: [] 2025-12-13T00:06:54.310455006+00:00 stderr F I1213 00:06:54.306807 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}] ././@LongLink0000644000000000000000000000026500000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015117130654033026 5ustar zuulzuul././@LongLink0000644000000000000000000000027200000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000644000175000017500000040215615117130646033041 0ustar zuulzuul2025-12-13T00:06:37.384543103+00:00 stderr F W1213 00:06:37.384316 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-12-13T00:06:37.384543103+00:00 stderr F W1213 00:06:37.384491 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-12-13T00:06:37.384543103+00:00 stderr F W1213 00:06:37.384498 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-12-13T00:06:37.384543103+00:00 stderr F W1213 00:06:37.384503 1 feature_gate.go:227] unrecognized feature gate: Example 2025-12-13T00:06:37.384543103+00:00 stderr F W1213 00:06:37.384509 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-12-13T00:06:37.384543103+00:00 stderr F W1213 00:06:37.384516 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-12-13T00:06:37.384543103+00:00 stderr F W1213 00:06:37.384521 1 
feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-12-13T00:06:37.384543103+00:00 stderr F W1213 00:06:37.384527 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-12-13T00:06:37.384543103+00:00 stderr F W1213 00:06:37.384531 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384536 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384544 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384549 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384554 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384559 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384564 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384569 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384574 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384579 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384584 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384594 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384599 1 feature_gate.go:227] 
unrecognized feature gate: MachineAPIProviderOpenStack 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384605 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384610 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384615 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384620 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384625 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384630 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384635 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384640 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384645 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384649 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384654 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-12-13T00:06:37.384662847+00:00 stderr F W1213 00:06:37.384659 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-12-13T00:06:37.384686477+00:00 stderr F W1213 00:06:37.384664 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-12-13T00:06:37.384686477+00:00 stderr F W1213 00:06:37.384670 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-12-13T00:06:37.384686477+00:00 
stderr F W1213 00:06:37.384675 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-12-13T00:06:37.384686477+00:00 stderr F W1213 00:06:37.384680 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-12-13T00:06:37.384695718+00:00 stderr F W1213 00:06:37.384685 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-12-13T00:06:37.384695718+00:00 stderr F W1213 00:06:37.384690 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-12-13T00:06:37.384705078+00:00 stderr F W1213 00:06:37.384696 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-12-13T00:06:37.384705078+00:00 stderr F W1213 00:06:37.384702 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-12-13T00:06:37.384712778+00:00 stderr F W1213 00:06:37.384707 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-12-13T00:06:37.384720938+00:00 stderr F W1213 00:06:37.384712 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-12-13T00:06:37.384720938+00:00 stderr F W1213 00:06:37.384718 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-12-13T00:06:37.384728519+00:00 stderr F W1213 00:06:37.384724 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-12-13T00:06:37.384735399+00:00 stderr F W1213 00:06:37.384729 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-12-13T00:06:37.384742459+00:00 stderr F W1213 00:06:37.384734 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-12-13T00:06:37.384742459+00:00 stderr F W1213 00:06:37.384739 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-12-13T00:06:37.384749739+00:00 stderr F W1213 00:06:37.384744 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-12-13T00:06:37.384756609+00:00 stderr F W1213 00:06:37.384750 1 feature_gate.go:227] unrecognized feature 
gate: AutomatedEtcdBackup 2025-12-13T00:06:37.384763680+00:00 stderr F W1213 00:06:37.384755 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-12-13T00:06:37.384763680+00:00 stderr F W1213 00:06:37.384760 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-12-13T00:06:37.384770830+00:00 stderr F W1213 00:06:37.384766 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider 2025-12-13T00:06:37.384777660+00:00 stderr F W1213 00:06:37.384772 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-12-13T00:06:37.384784530+00:00 stderr F W1213 00:06:37.384777 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-12-13T00:06:37.384791530+00:00 stderr F W1213 00:06:37.384783 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-12-13T00:06:37.384791530+00:00 stderr F W1213 00:06:37.384788 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-12-13T00:06:37.384803291+00:00 stderr F W1213 00:06:37.384795 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-12-13T00:06:37.384810321+00:00 stderr F W1213 00:06:37.384801 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-12-13T00:06:37.384810321+00:00 stderr F W1213 00:06:37.384807 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-12-13T00:06:37.385319195+00:00 stderr F I1213 00:06:37.385282 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-12-13T00:06:37.385319195+00:00 stderr F I1213 00:06:37.385308 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-12-13T00:06:37.385319195+00:00 stderr F I1213 00:06:37.385314 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-12-13T00:06:37.385331175+00:00 stderr F I1213 00:06:37.385319 1 flags.go:64] FLAG: 
--authentication-skip-lookup="false" 2025-12-13T00:06:37.385331175+00:00 stderr F I1213 00:06:37.385325 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-12-13T00:06:37.385339886+00:00 stderr F I1213 00:06:37.385331 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="true" 2025-12-13T00:06:37.385348566+00:00 stderr F I1213 00:06:37.385335 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-12-13T00:06:37.385357196+00:00 stderr F I1213 00:06:37.385344 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-12-13T00:06:37.385357196+00:00 stderr F I1213 00:06:37.385351 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-12-13T00:06:37.385366416+00:00 stderr F I1213 00:06:37.385356 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-12-13T00:06:37.385366416+00:00 stderr F I1213 00:06:37.385360 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-12-13T00:06:37.385381637+00:00 stderr F I1213 00:06:37.385366 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-12-13T00:06:37.385381637+00:00 stderr F I1213 00:06:37.385370 1 flags.go:64] FLAG: --client-ca-file="" 2025-12-13T00:06:37.385381637+00:00 stderr F I1213 00:06:37.385374 1 flags.go:64] FLAG: --config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-12-13T00:06:37.385440448+00:00 stderr F I1213 00:06:37.385379 1 flags.go:64] FLAG: --contention-profiling="true" 2025-12-13T00:06:37.385440448+00:00 stderr F I1213 00:06:37.385384 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-12-13T00:06:37.385440448+00:00 stderr F I1213 00:06:37.385405 1 flags.go:64] FLAG: 
--feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-12-13T00:06:37.385440448+00:00 stderr F I1213 00:06:37.385431 1 flags.go:64] FLAG: --help="false" 2025-12-13T00:06:37.385440448+00:00 stderr F I1213 00:06:37.385436 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-12-13T00:06:37.385453389+00:00 stderr F I1213 00:06:37.385441 1 flags.go:64] FLAG: --kube-api-burst="100" 2025-12-13T00:06:37.385453389+00:00 stderr F I1213 00:06:37.385447 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-12-13T00:06:37.385463529+00:00 stderr F I1213 00:06:37.385452 1 flags.go:64] FLAG: --kube-api-qps="50" 2025-12-13T00:06:37.385470819+00:00 stderr F I1213 00:06:37.385463 1 flags.go:64] FLAG: --kubeconfig="" 2025-12-13T00:06:37.385470819+00:00 stderr F I1213 00:06:37.385467 1 flags.go:64] FLAG: --leader-elect="true" 2025-12-13T00:06:37.385484690+00:00 stderr F I1213 00:06:37.385471 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-12-13T00:06:37.385484690+00:00 stderr F I1213 00:06:37.385475 1 flags.go:64] FLAG: --leader-elect-renew-deadline="10s" 2025-12-13T00:06:37.385484690+00:00 stderr F I1213 00:06:37.385479 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-12-13T00:06:37.385492640+00:00 stderr F I1213 00:06:37.385483 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-scheduler" 2025-12-13T00:06:37.385492640+00:00 stderr F I1213 00:06:37.385488 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-12-13T00:06:37.385500650+00:00 stderr F I1213 00:06:37.385493 1 flags.go:64] FLAG: 
--leader-elect-retry-period="2s" 2025-12-13T00:06:37.385500650+00:00 stderr F I1213 00:06:37.385497 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-12-13T00:06:37.385532111+00:00 stderr F I1213 00:06:37.385502 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-12-13T00:06:37.385532111+00:00 stderr F I1213 00:06:37.385521 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-12-13T00:06:37.385532111+00:00 stderr F I1213 00:06:37.385526 1 flags.go:64] FLAG: --logging-format="text" 2025-12-13T00:06:37.385541901+00:00 stderr F I1213 00:06:37.385530 1 flags.go:64] FLAG: --master="" 2025-12-13T00:06:37.385541901+00:00 stderr F I1213 00:06:37.385534 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-12-13T00:06:37.385541901+00:00 stderr F I1213 00:06:37.385538 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-12-13T00:06:37.385550741+00:00 stderr F I1213 00:06:37.385543 1 flags.go:64] FLAG: --pod-max-in-unschedulable-pods-duration="5m0s" 2025-12-13T00:06:37.385550741+00:00 stderr F I1213 00:06:37.385547 1 flags.go:64] FLAG: --profiling="true" 2025-12-13T00:06:37.385559592+00:00 stderr F I1213 00:06:37.385552 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-12-13T00:06:37.385566672+00:00 stderr F I1213 00:06:37.385557 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2025-12-13T00:06:37.385566672+00:00 stderr F I1213 00:06:37.385561 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-12-13T00:06:37.385575362+00:00 stderr F I1213 00:06:37.385567 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-12-13T00:06:37.385582562+00:00 stderr F I1213 00:06:37.385574 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-12-13T00:06:37.385589424+00:00 stderr F I1213 00:06:37.385580 1 flags.go:64] FLAG: --secure-port="10259" 2025-12-13T00:06:37.385589424+00:00 stderr F I1213 00:06:37.385585 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 
2025-12-13T00:06:37.385596524+00:00 stderr F I1213 00:06:37.385588 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-12-13T00:06:37.385620654+00:00 stderr F I1213 00:06:37.385594 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-12-13T00:06:37.385620654+00:00 stderr F I1213 00:06:37.385611 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-12-13T00:06:37.385620654+00:00 stderr F I1213 00:06:37.385616 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:06:37.385629315+00:00 stderr F I1213 00:06:37.385621 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-12-13T00:06:37.385636445+00:00 stderr F I1213 00:06:37.385627 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-12-13T00:06:37.385636445+00:00 stderr F I1213 00:06:37.385631 1 flags.go:64] FLAG: --v="2" 2025-12-13T00:06:37.385647935+00:00 stderr F I1213 00:06:37.385637 1 flags.go:64] FLAG: --version="false" 2025-12-13T00:06:37.385656585+00:00 stderr F I1213 00:06:37.385644 1 flags.go:64] FLAG: --vmodule="" 2025-12-13T00:06:37.385663596+00:00 stderr F I1213 00:06:37.385655 1 flags.go:64] FLAG: --write-config-to="" 2025-12-13T00:06:37.390281805+00:00 stderr F I1213 00:06:37.390242 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:06:37.947977794+00:00 stderr F W1213 00:06:37.947887 1 authentication.go:368] Error looking up in-cluster authentication configuration: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:37.947977794+00:00 stderr F W1213 00:06:37.947931 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous. 2025-12-13T00:06:37.947977794+00:00 stderr F W1213 00:06:37.947939 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false 2025-12-13T00:06:37.956741580+00:00 stderr F I1213 00:06:37.956660 1 configfile.go:94] "Using component config" config=< 2025-12-13T00:06:37.956741580+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-12-13T00:06:37.956741580+00:00 stderr F clientConnection: 2025-12-13T00:06:37.956741580+00:00 stderr F acceptContentTypes: "" 2025-12-13T00:06:37.956741580+00:00 stderr F burst: 100 2025-12-13T00:06:37.956741580+00:00 stderr F contentType: application/vnd.kubernetes.protobuf 2025-12-13T00:06:37.956741580+00:00 stderr F kubeconfig: /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig 2025-12-13T00:06:37.956741580+00:00 stderr F qps: 50 2025-12-13T00:06:37.956741580+00:00 stderr F enableContentionProfiling: false 2025-12-13T00:06:37.956741580+00:00 stderr F enableProfiling: false 2025-12-13T00:06:37.956741580+00:00 stderr F kind: KubeSchedulerConfiguration 2025-12-13T00:06:37.956741580+00:00 stderr F leaderElection: 2025-12-13T00:06:37.956741580+00:00 stderr F leaderElect: true 2025-12-13T00:06:37.956741580+00:00 stderr F leaseDuration: 2m17s 2025-12-13T00:06:37.956741580+00:00 stderr F renewDeadline: 1m47s 2025-12-13T00:06:37.956741580+00:00 stderr F resourceLock: leases 2025-12-13T00:06:37.956741580+00:00 stderr F resourceName: kube-scheduler 2025-12-13T00:06:37.956741580+00:00 stderr F resourceNamespace: openshift-kube-scheduler 2025-12-13T00:06:37.956741580+00:00 stderr 
F retryPeriod: 26s
2025-12-13T00:06:37.956741580+00:00 stderr F parallelism: 16
2025-12-13T00:06:37.956741580+00:00 stderr F percentageOfNodesToScore: 0
2025-12-13T00:06:37.956741580+00:00 stderr F podInitialBackoffSeconds: 1
2025-12-13T00:06:37.956741580+00:00 stderr F podMaxBackoffSeconds: 10
2025-12-13T00:06:37.956741580+00:00 stderr F profiles:
2025-12-13T00:06:37.956741580+00:00 stderr F - pluginConfig:
2025-12-13T00:06:37.956741580+00:00 stderr F - args:
2025-12-13T00:06:37.956741580+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-12-13T00:06:37.956741580+00:00 stderr F kind: DefaultPreemptionArgs
2025-12-13T00:06:37.956741580+00:00 stderr F minCandidateNodesAbsolute: 100
2025-12-13T00:06:37.956741580+00:00 stderr F minCandidateNodesPercentage: 10
2025-12-13T00:06:37.956741580+00:00 stderr F name: DefaultPreemption
2025-12-13T00:06:37.956741580+00:00 stderr F - args:
2025-12-13T00:06:37.956741580+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-12-13T00:06:37.956741580+00:00 stderr F hardPodAffinityWeight: 1
2025-12-13T00:06:37.956741580+00:00 stderr F ignorePreferredTermsOfExistingPods: false
2025-12-13T00:06:37.956741580+00:00 stderr F kind: InterPodAffinityArgs
2025-12-13T00:06:37.956741580+00:00 stderr F name: InterPodAffinity
2025-12-13T00:06:37.956741580+00:00 stderr F - args:
2025-12-13T00:06:37.956741580+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-12-13T00:06:37.956741580+00:00 stderr F kind: NodeAffinityArgs
2025-12-13T00:06:37.956741580+00:00 stderr F name: NodeAffinity
2025-12-13T00:06:37.956741580+00:00 stderr F - args:
2025-12-13T00:06:37.956741580+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-12-13T00:06:37.956741580+00:00 stderr F kind: NodeResourcesBalancedAllocationArgs
2025-12-13T00:06:37.956741580+00:00 stderr F resources:
2025-12-13T00:06:37.956741580+00:00 stderr F - name: cpu
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 1
2025-12-13T00:06:37.956741580+00:00 stderr F - name: memory
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 1
2025-12-13T00:06:37.956741580+00:00 stderr F name: NodeResourcesBalancedAllocation
2025-12-13T00:06:37.956741580+00:00 stderr F - args:
2025-12-13T00:06:37.956741580+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-12-13T00:06:37.956741580+00:00 stderr F kind: NodeResourcesFitArgs
2025-12-13T00:06:37.956741580+00:00 stderr F scoringStrategy:
2025-12-13T00:06:37.956741580+00:00 stderr F resources:
2025-12-13T00:06:37.956741580+00:00 stderr F - name: cpu
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 1
2025-12-13T00:06:37.956741580+00:00 stderr F - name: memory
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 1
2025-12-13T00:06:37.956741580+00:00 stderr F type: LeastAllocated
2025-12-13T00:06:37.956741580+00:00 stderr F name: NodeResourcesFit
2025-12-13T00:06:37.956741580+00:00 stderr F - args:
2025-12-13T00:06:37.956741580+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-12-13T00:06:37.956741580+00:00 stderr F defaultingType: System
2025-12-13T00:06:37.956741580+00:00 stderr F kind: PodTopologySpreadArgs
2025-12-13T00:06:37.956741580+00:00 stderr F name: PodTopologySpread
2025-12-13T00:06:37.956741580+00:00 stderr F - args:
2025-12-13T00:06:37.956741580+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-12-13T00:06:37.956741580+00:00 stderr F bindTimeoutSeconds: 600
2025-12-13T00:06:37.956741580+00:00 stderr F kind: VolumeBindingArgs
2025-12-13T00:06:37.956741580+00:00 stderr F name: VolumeBinding
2025-12-13T00:06:37.956741580+00:00 stderr F plugins:
2025-12-13T00:06:37.956741580+00:00 stderr F bind: {}
2025-12-13T00:06:37.956741580+00:00 stderr F filter: {}
2025-12-13T00:06:37.956741580+00:00 stderr F multiPoint:
2025-12-13T00:06:37.956741580+00:00 stderr F enabled:
2025-12-13T00:06:37.956741580+00:00 stderr F - name: PrioritySort
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: NodeUnschedulable
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: NodeName
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: TaintToleration
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 3
2025-12-13T00:06:37.956741580+00:00 stderr F - name: NodeAffinity
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 2
2025-12-13T00:06:37.956741580+00:00 stderr F - name: NodePorts
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: NodeResourcesFit
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 1
2025-12-13T00:06:37.956741580+00:00 stderr F - name: VolumeRestrictions
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: EBSLimits
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: GCEPDLimits
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: NodeVolumeLimits
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: AzureDiskLimits
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: VolumeBinding
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: VolumeZone
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: PodTopologySpread
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 2
2025-12-13T00:06:37.956741580+00:00 stderr F - name: InterPodAffinity
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 2
2025-12-13T00:06:37.956741580+00:00 stderr F - name: DefaultPreemption
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: NodeResourcesBalancedAllocation
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 1
2025-12-13T00:06:37.956741580+00:00 stderr F - name: ImageLocality
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 1
2025-12-13T00:06:37.956741580+00:00 stderr F - name: DefaultBinder
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F - name: SchedulingGates
2025-12-13T00:06:37.956741580+00:00 stderr F weight: 0
2025-12-13T00:06:37.956741580+00:00 stderr F permit: {}
2025-12-13T00:06:37.956741580+00:00 stderr F postBind: {}
2025-12-13T00:06:37.956741580+00:00 stderr F postFilter: {}
2025-12-13T00:06:37.956741580+00:00 stderr F preBind: {}
2025-12-13T00:06:37.956741580+00:00 stderr F preEnqueue: {}
2025-12-13T00:06:37.956741580+00:00 stderr F preFilter: {}
2025-12-13T00:06:37.956741580+00:00 stderr F preScore: {}
2025-12-13T00:06:37.956741580+00:00 stderr F queueSort: {}
2025-12-13T00:06:37.956741580+00:00 stderr F reserve: {}
2025-12-13T00:06:37.956741580+00:00 stderr F score: {}
2025-12-13T00:06:37.956741580+00:00 stderr F schedulerName: default-scheduler
2025-12-13T00:06:37.956741580+00:00 stderr F >
2025-12-13T00:06:37.956931725+00:00 stderr F I1213 00:06:37.956902 1 server.go:159] "Starting Kubernetes Scheduler" version="v1.29.5+29c95f3"
2025-12-13T00:06:37.956931725+00:00 stderr F I1213 00:06:37.956915 1 server.go:161] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2025-12-13T00:06:37.958130710+00:00 stderr F I1213 00:06:37.958085 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-12-13T00:06:37.958437118+00:00 stderr F I1213 00:06:37.958370 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-12-13T00:06:37.958701766+00:00 stderr F I1213 00:06:37.958669 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-12-13T00:06:37.958797948+00:00 stderr F I1213 00:06:37.958766 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-12-13 00:06:37.958722046 +0000 UTC))"
2025-12-13T00:06:37.959173819+00:00 stderr F I1213 00:06:37.959139 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584397\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584397\" (2025-12-12 23:06:37 +0000 UTC to 2026-12-12 23:06:37 +0000 UTC (now=2025-12-13 00:06:37.959098007 +0000 UTC))"
2025-12-13T00:06:37.959188009+00:00 stderr F I1213 00:06:37.959180 1 secure_serving.go:213] Serving securely on [::]:10259
2025-12-13T00:06:37.959277501+00:00 stderr F I1213 00:06:37.959227 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-12-13T00:06:37.964813657+00:00 stderr F W1213 00:06:37.964557 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.964813657+00:00 stderr F W1213 00:06:37.964674 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.964813657+00:00 stderr F E1213 00:06:37.964708 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.964813657+00:00 stderr F W1213 00:06:37.964710 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.964813657+00:00 stderr F W1213 00:06:37.964745 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.964813657+00:00 stderr F W1213 00:06:37.964569 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.964933141+00:00 stderr F E1213 00:06:37.964821 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.964933141+00:00 stderr F W1213 00:06:37.964687 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.985707306+00:00 stderr F E1213 00:06:37.985626 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.985749237+00:00 stderr F E1213 00:06:37.985716 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.985912321+00:00 stderr F W1213 00:06:37.964726 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.985912321+00:00 stderr F E1213 00:06:37.985895 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.985923212+00:00 stderr F E1213 00:06:37.985916 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.986135297+00:00 stderr F E1213 00:06:37.964817 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987046293+00:00 stderr F W1213 00:06:37.986988 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987058893+00:00 stderr F E1213 00:06:37.987048 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987122525+00:00 stderr F W1213 00:06:37.987087 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987122525+00:00 stderr F E1213 00:06:37.987110 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987200657+00:00 stderr F W1213 00:06:37.987175 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987200657+00:00 stderr F E1213 00:06:37.987196 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987250968+00:00 stderr F W1213 00:06:37.987227 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987250968+00:00 stderr F E1213 00:06:37.987248 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987307350+00:00 stderr F W1213 00:06:37.987285 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987314470+00:00 stderr F E1213 00:06:37.987310 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987368872+00:00 stderr F W1213 00:06:37.987346 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987375712+00:00 stderr F E1213 00:06:37.987369 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987405243+00:00 stderr F W1213 00:06:37.987376 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987416363+00:00 stderr F E1213 00:06:37.987411 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987439144+00:00 stderr F W1213 00:06:37.987418 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:37.987446034+00:00 stderr F E1213 00:06:37.987439 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:38.837323343+00:00 stderr F W1213 00:06:38.837220 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:38.837323343+00:00 stderr F E1213 00:06:38.837273 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:38.867917764+00:00 stderr F W1213 00:06:38.867830 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:38.867917764+00:00 stderr F E1213 00:06:38.867892 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:38.898956557+00:00 stderr F W1213 00:06:38.898861 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:38.898956557+00:00 stderr F E1213 00:06:38.898909 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:38.923674702+00:00 stderr F W1213 00:06:38.923568 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:38.923674702+00:00 stderr F E1213 00:06:38.923632 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:38.942456270+00:00 stderr F W1213 00:06:38.942276 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:38.942456270+00:00 stderr F E1213 00:06:38.942350 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:38.959993454+00:00 stderr F W1213 00:06:38.959848 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:38.959993454+00:00 stderr F E1213 00:06:38.959892 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.019731094+00:00 stderr F W1213 00:06:39.019633 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.019731094+00:00 stderr F E1213 00:06:39.019686 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.282977540+00:00 stderr F W1213 00:06:39.282877 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.282977540+00:00 stderr F E1213 00:06:39.282919 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.286761736+00:00 stderr F W1213 00:06:39.286701 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.286761736+00:00 stderr F E1213 00:06:39.286733 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.341765123+00:00 stderr F W1213 00:06:39.341649 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.341765123+00:00 stderr F E1213 00:06:39.341699 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.343861673+00:00 stderr F W1213 00:06:39.343761 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.343861673+00:00 stderr F E1213 00:06:39.343785 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.402302086+00:00 stderr F W1213 00:06:39.402228 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.402302086+00:00 stderr F E1213 00:06:39.402275 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.425854129+00:00 stderr F W1213 00:06:39.425772 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.425854129+00:00 stderr F E1213 00:06:39.425829 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.500197540+00:00 stderr F W1213 00:06:39.500100 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.500197540+00:00 stderr F E1213 00:06:39.500159 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.519890294+00:00 stderr F W1213 00:06:39.519791 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:39.519890294+00:00 stderr F E1213 00:06:39.519853 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:40.838846258+00:00 stderr F W1213 00:06:40.838729 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:40.838846258+00:00 stderr F E1213 00:06:40.838792 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:40.966814559+00:00 stderr F W1213 00:06:40.966645 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:40.966814559+00:00 stderr F E1213 00:06:40.966712 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.108761552+00:00 stderr F W1213 00:06:41.108593 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.108761552+00:00 stderr F E1213 00:06:41.108642 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.180699585+00:00 stderr F W1213 00:06:41.180531 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.180699585+00:00 stderr F E1213 00:06:41.180574 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.263888176+00:00 stderr F W1213 00:06:41.263763 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.263888176+00:00 stderr F E1213 00:06:41.263806 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.616807544+00:00 stderr F W1213 00:06:41.616712 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.616807544+00:00 stderr F E1213 00:06:41.616758 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.623961405+00:00 stderr F W1213 00:06:41.623899 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.623961405+00:00 stderr F E1213 00:06:41.623933 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.726842569+00:00 stderr F W1213 00:06:41.726731 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.726842569+00:00 stderr F E1213 00:06:41.726771 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.776587349+00:00 stderr F W1213 00:06:41.776504 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.776587349+00:00 stderr F E1213 00:06:41.776550 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.806028866+00:00 stderr F W1213 00:06:41.805908 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.806028866+00:00 stderr F E1213 00:06:41.805949 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.852417462+00:00 stderr F W1213 00:06:41.852301 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:06:41.852417462+00:00 stderr F E1213 00:06:41.852348 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp:
lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:41.925860148+00:00 stderr F W1213 00:06:41.925760 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:41.925860148+00:00 stderr F E1213 00:06:41.925816 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:41.999609383+00:00 stderr F W1213 00:06:41.999478 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:41.999609383+00:00 stderr F E1213 00:06:41.999513 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:42.014644736+00:00 stderr F W1213 00:06:42.014529 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:42.014695607+00:00 stderr F E1213 00:06:42.014667 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch 
*v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:42.614891261+00:00 stderr F W1213 00:06:42.614828 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:42.614891261+00:00 stderr F E1213 00:06:42.614872 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:45.542233742+00:00 stderr F W1213 00:06:45.542135 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:45.542233742+00:00 stderr F E1213 00:06:45.542188 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:45.586193819+00:00 stderr F W1213 00:06:45.586072 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:45.586193819+00:00 stderr F E1213 00:06:45.586134 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:45.711620188+00:00 stderr F W1213 00:06:45.711509 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:45.711620188+00:00 stderr F E1213 00:06:45.711564 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:45.725185168+00:00 stderr F W1213 00:06:45.725088 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:45.725185168+00:00 stderr F E1213 00:06:45.725139 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.248098919+00:00 stderr F W1213 00:06:46.247955 1 
reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.248098919+00:00 stderr F E1213 00:06:46.248024 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.413552843+00:00 stderr F W1213 00:06:46.413415 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.413552843+00:00 stderr F E1213 00:06:46.413459 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.504498572+00:00 stderr F W1213 00:06:46.504296 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.504498572+00:00 stderr F E1213 00:06:46.504342 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get 
"https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.515063479+00:00 stderr F W1213 00:06:46.514928 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.515063479+00:00 stderr F E1213 00:06:46.514955 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.651315392+00:00 stderr F W1213 00:06:46.651203 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.651315392+00:00 stderr F E1213 00:06:46.651260 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.698940802+00:00 stderr F W1213 00:06:46.698793 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: 
lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.698940802+00:00 stderr F E1213 00:06:46.698888 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.812527798+00:00 stderr F W1213 00:06:46.812453 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.812527798+00:00 stderr F E1213 00:06:46.812508 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.876647241+00:00 stderr F W1213 00:06:46.876563 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:46.876647241+00:00 stderr F E1213 00:06:46.876603 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:47.518817217+00:00 stderr F W1213 
00:06:47.518664 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:47.518817217+00:00 stderr F E1213 00:06:47.518716 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:47.821899583+00:00 stderr F W1213 00:06:47.821766 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:47.821899583+00:00 stderr F E1213 00:06:47.821858 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:47.892985132+00:00 stderr F W1213 00:06:47.892866 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:47.892985132+00:00 stderr F E1213 00:06:47.892919 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get 
"https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:53.062716755+00:00 stderr F W1213 00:06:53.062562 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:53.062716755+00:00 stderr F E1213 00:06:53.062636 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:54.384544190+00:00 stderr F W1213 00:06:54.384060 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:54.384544190+00:00 stderr F E1213 00:06:54.384095 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:55.021698294+00:00 stderr F W1213 00:06:55.021588 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup 
api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:55.021698294+00:00 stderr F E1213 00:06:55.021641 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:55.648760024+00:00 stderr F W1213 00:06:55.648668 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:55.648811146+00:00 stderr F E1213 00:06:55.648728 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:55.748197291+00:00 stderr F W1213 00:06:55.748012 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:55.748197291+00:00 stderr F E1213 00:06:55.748115 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:56.204583460+00:00 stderr F W1213 00:06:56.204446 1 reflector.go:539] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:56.204583460+00:00 stderr F E1213 00:06:56.204537 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:56.539802251+00:00 stderr F W1213 00:06:56.539248 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:56.539802251+00:00 stderr F E1213 00:06:56.539732 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:57.215083438+00:00 stderr F W1213 00:06:57.214961 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:57.215083438+00:00 stderr F E1213 00:06:57.215037 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch 
*v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:57.265501526+00:00 stderr F W1213 00:06:57.265375 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:57.265642840+00:00 stderr F E1213 00:06:57.265613 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:57.324925607+00:00 stderr F W1213 00:06:57.324854 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:57.325048611+00:00 stderr F E1213 00:06:57.325023 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:57.767663402+00:00 stderr F W1213 00:06:57.767571 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 
2025-12-13T00:06:57.767663402+00:00 stderr F E1213 00:06:57.767608 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:57.824943534+00:00 stderr F W1213 00:06:57.824877 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:57.824943534+00:00 stderr F E1213 00:06:57.824910 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:57.925139622+00:00 stderr F W1213 00:06:57.925071 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:57.925236805+00:00 stderr F E1213 00:06:57.925218 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:58.567917385+00:00 stderr F W1213 00:06:58.567816 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list 
*v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:58.567917385+00:00 stderr F E1213 00:06:58.567875 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:59.097217315+00:00 stderr F W1213 00:06:59.097134 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:06:59.097217315+00:00 stderr F E1213 00:06:59.097202 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:08.540801678+00:00 stderr F W1213 00:07:08.540679 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:07:08.540801678+00:00 stderr F E1213 00:07:08.540740 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup 
api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:11.662521046+00:00 stderr F W1213 00:07:11.662357 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:11.662521046+00:00 stderr F E1213 00:07:11.662496 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:11.783738976+00:00 stderr F W1213 00:07:11.783631 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:11.783738976+00:00 stderr F E1213 00:07:11.783705 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:12.930885758+00:00 stderr F W1213 00:07:12.930713 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:12.930885758+00:00 stderr F E1213 00:07:12.930764 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:13.950983144+00:00 stderr F W1213 00:07:13.950864 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:13.950983144+00:00 stderr F E1213 00:07:13.950929 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:14.423925339+00:00 stderr F W1213 00:07:14.423820 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:14.423925339+00:00 stderr F E1213 00:07:14.423897 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:14.585934177+00:00 stderr F W1213 00:07:14.585799 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:14.585934177+00:00 stderr F E1213 00:07:14.585846 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:15.012468485+00:00 stderr F W1213 00:07:15.012249 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:15.012468485+00:00 stderr F E1213 00:07:15.012345 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:16.133432930+00:00 stderr F W1213 00:07:16.133320 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:16.133432930+00:00 stderr F E1213 00:07:16.133373 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:16.234510563+00:00 stderr F W1213 00:07:16.234338 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:16.234510563+00:00 stderr F E1213 00:07:16.234476 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:19.807782665+00:00 stderr F W1213 00:07:19.807677 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:19.807967280+00:00 stderr F E1213 00:07:19.807932 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:20.980042542+00:00 stderr F W1213 00:07:20.979953 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:20.980042542+00:00 stderr F E1213 00:07:20.979989 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:21.181078818+00:00 stderr F W1213 00:07:21.181030 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:21.181157230+00:00 stderr F E1213 00:07:21.181145 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:21.641513320+00:00 stderr F W1213 00:07:21.641449 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:21.641513320+00:00 stderr F E1213 00:07:21.641499 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:24.231302786+00:00 stderr F W1213 00:07:24.231210 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:24.231302786+00:00 stderr F E1213 00:07:24.231269 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:42.504914631+00:00 stderr F W1213 00:07:42.504811 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:42.504914631+00:00 stderr F E1213 00:07:42.504888 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:43.779167768+00:00 stderr F W1213 00:07:43.779009 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:43.779167768+00:00 stderr F E1213 00:07:43.779073 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:46.528549247+00:00 stderr F W1213 00:07:46.528456 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:46.528549247+00:00 stderr F E1213 00:07:46.528513 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:47.590251834+00:00 stderr F W1213 00:07:47.590136 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:47.590251834+00:00 stderr F E1213 00:07:47.590221 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:48.307234231+00:00 stderr F W1213 00:07:48.307092 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:48.307234231+00:00 stderr F E1213 00:07:48.307176 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:49.332793249+00:00 stderr F W1213 00:07:49.332648 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:49.332793249+00:00 stderr F E1213 00:07:49.332705 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:56.180445341+00:00 stderr F W1213 00:07:56.180321 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:56.180445341+00:00 stderr F E1213 00:07:56.180378 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:57.083741139+00:00 stderr F W1213 00:07:57.083665 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:57.083741139+00:00 stderr F E1213 00:07:57.083716 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:57.848748756+00:00 stderr F W1213 00:07:57.848673 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:07:57.848889860+00:00 stderr F E1213 00:07:57.848862 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:00.024656978+00:00 stderr F W1213 00:08:00.024546 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:00.024656978+00:00 stderr F E1213 00:08:00.024595 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:03.640549552+00:00 stderr F W1213 00:08:03.640453 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:03.640549552+00:00 stderr F E1213 00:08:03.640524 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:04.142751699+00:00 stderr F W1213 00:08:04.142634 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:04.142751699+00:00 stderr F E1213 00:08:04.142700 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:04.205256261+00:00 stderr F W1213 00:08:04.205167 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:04.205256261+00:00 stderr F E1213 00:08:04.205229 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:08.850431225+00:00 stderr F W1213 00:08:08.850300 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:08.850431225+00:00 stderr F E1213 00:08:08.850350 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:13.396040616+00:00 stderr F W1213 00:08:13.395919 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:13.396040616+00:00 stderr F E1213 00:08:13.395990 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:18.638529051+00:00 stderr F W1213 00:08:18.638386 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:18.638529051+00:00 stderr F E1213 00:08:18.638475 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:23.994967926+00:00 stderr F W1213 00:08:23.994874 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:23.994967926+00:00 stderr F E1213 00:08:23.994950 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:27.038039463+00:00 stderr F W1213 00:08:27.037907 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:27.038039463+00:00 stderr F E1213 00:08:27.037977 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:35.508080260+00:00 stderr F W1213 00:08:35.507987 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:35.508080260+00:00 stderr F E1213 00:08:35.508046 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:35.941727540+00:00 stderr F W1213 00:08:35.941581 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:35.941727540+00:00 stderr F E1213 00:08:35.941650 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:39.732968214+00:00 stderr F W1213 00:08:39.732772 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:39.732968214+00:00 stderr F E1213 00:08:39.732885 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:41.284565199+00:00 stderr F W1213 00:08:41.284376 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:41.284565199+00:00 stderr F E1213 00:08:41.284497 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:42.114422448+00:00 stderr F W1213 00:08:42.114196 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:42.114422448+00:00 stderr F E1213 00:08:42.114299 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:44.778715853+00:00 stderr F W1213 00:08:44.778574 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:44.778715853+00:00 stderr F E1213 00:08:44.778676 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:47.928314264+00:00 stderr F W1213 00:08:47.928198 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:47.928314264+00:00 stderr F E1213 00:08:47.928270 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:49.524940449+00:00 stderr F W1213 00:08:49.524807 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:49.524940449+00:00 stderr F E1213 00:08:49.524903 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:51.617862133+00:00 stderr F W1213 00:08:51.617713 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:51.617862133+00:00 stderr F E1213 00:08:51.617787 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:51.986071774+00:00 stderr F W1213 00:08:51.985930 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:51.986071774+00:00 stderr F E1213 00:08:51.985995 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:54.574147806+00:00 stderr F W1213 00:08:54.573995 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:54.574147806+00:00 stderr F E1213 00:08:54.574109 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:54.670632378+00:00 stderr F W1213 00:08:54.670474 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:54.670632378+00:00 stderr F E1213 00:08:54.670578 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:59.188561648+00:00 stderr F W1213 00:08:59.188377 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:59.188561648+00:00 stderr F E1213 00:08:59.188523 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:59.324711998+00:00 stderr F W1213 00:08:59.324571 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:08:59.324711998+00:00 stderr F E1213 00:08:59.324674 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:18.652431111+00:00 stderr F W1213 00:09:18.651955 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable
2025-12-13T00:09:18.652431111+00:00 stderr F E1213 00:09:18.652371 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable
2025-12-13T00:09:20.697369502+00:00 stderr F W1213 00:09:20.697218 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:20.697369502+00:00 stderr F E1213 00:09:20.697333 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:23.704353696+00:00 stderr F W1213 00:09:23.704236 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:23.704353696+00:00 stderr F E1213 00:09:23.704298 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:28.542008344+00:00 stderr F W1213 00:09:28.541844 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:28.542008344+00:00 stderr F E1213 00:09:28.541926 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://api-int.crc.testing:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:28.903000655+00:00 stderr F W1213 00:09:28.902919 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:28.903000655+00:00 stderr F E1213 00:09:28.902961 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:30.842821227+00:00 stderr F W1213 00:09:30.842723 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:30.842821227+00:00 stderr F E1213 00:09:30.842795 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:31.437127731+00:00 stderr F W1213 00:09:31.437006 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:31.437127731+00:00 stderr F E1213 00:09:31.437062 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://api-int.crc.testing:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:32.499161740+00:00 stderr F W1213 00:09:32.498981 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:32.499161740+00:00 stderr F E1213 00:09:32.499102 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:32.548076626+00:00 stderr F W1213 00:09:32.547925 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:32.548076626+00:00 stderr F E1213 00:09:32.547989 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:34.425864400+00:00 stderr F W1213 00:09:34.425729 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:34.425864400+00:00 stderr F E1213 00:09:34.425782 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
2025-12-13T00:09:40.281898377+00:00 stderr F W1213 00:09:40.281797 1 reflector.go:539]
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:40.281898377+00:00 stderr F E1213 00:09:40.281882 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:45.166012063+00:00 stderr F W1213 00:09:45.165359 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:45.166012063+00:00 stderr F E1213 00:09:45.165944 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://api-int.crc.testing:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:45.202152309+00:00 stderr F W1213 00:09:45.202011 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:45.202152309+00:00 stderr F E1213 00:09:45.202078 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 
2025-12-13T00:09:45.881663540+00:00 stderr F W1213 00:09:45.881561 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:45.881663540+00:00 stderr F E1213 00:09:45.881611 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://api-int.crc.testing:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:50.474761620+00:00 stderr F W1213 00:09:50.473865 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:50.474761620+00:00 stderr F E1213 00:09:50.473913 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://api-int.crc.testing:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host 2025-12-13T00:09:51.175022572+00:00 stderr F E1213 00:09:51.173791 1 server.go:224] "waiting for handlers to sync" err="context canceled" 2025-12-13T00:09:51.175022572+00:00 stderr F I1213 00:09:51.173867 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/kube-scheduler... 
2025-12-13T00:09:51.175022572+00:00 stderr F I1213 00:09:51.173885 1 server.go:248] "Requested to terminate, exiting"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/2.log
2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858007 1 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858089 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858093 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858097 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858101 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858105 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858109 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858113 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858117 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858121 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858125 1
feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858129 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858133 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858136 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-12-13T00:10:44.858145900+00:00 stderr F W1213 00:10:44.858140 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858179 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858183 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858187 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858191 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858194 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858198 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858202 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858206 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858210 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858214 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-12-13T00:10:44.858267883+00:00 stderr 
F W1213 00:10:44.858217 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858221 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858225 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858228 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858232 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858235 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858239 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858243 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858247 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858250 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858255 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858258 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-12-13T00:10:44.858267883+00:00 stderr F W1213 00:10:44.858262 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-12-13T00:10:44.858288174+00:00 stderr F W1213 00:10:44.858266 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-12-13T00:10:44.858288174+00:00 stderr F W1213 00:10:44.858270 1 
feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-12-13T00:10:44.858288174+00:00 stderr F W1213 00:10:44.858274 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-12-13T00:10:44.858288174+00:00 stderr F W1213 00:10:44.858278 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-12-13T00:10:44.858288174+00:00 stderr F W1213 00:10:44.858282 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-12-13T00:10:44.858288174+00:00 stderr F W1213 00:10:44.858285 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-12-13T00:10:44.858297244+00:00 stderr F W1213 00:10:44.858289 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-12-13T00:10:44.858297244+00:00 stderr F W1213 00:10:44.858293 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-12-13T00:10:44.858304644+00:00 stderr F W1213 00:10:44.858297 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-12-13T00:10:44.858304644+00:00 stderr F W1213 00:10:44.858301 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-12-13T00:10:44.858311955+00:00 stderr F W1213 00:10:44.858305 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-12-13T00:10:44.858311955+00:00 stderr F W1213 00:10:44.858309 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-12-13T00:10:44.858319155+00:00 stderr F W1213 00:10:44.858313 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-12-13T00:10:44.858326255+00:00 stderr F W1213 00:10:44.858318 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
2025-12-13T00:10:44.858326255+00:00 stderr F W1213 00:10:44.858322 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-12-13T00:10:44.858333465+00:00 stderr F W1213 00:10:44.858326 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-12-13T00:10:44.858333465+00:00 stderr F W1213 00:10:44.858330 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-12-13T00:10:44.858340655+00:00 stderr F W1213 00:10:44.858334 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-12-13T00:10:44.858340655+00:00 stderr F W1213 00:10:44.858338 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-12-13T00:10:44.858351286+00:00 stderr F W1213 00:10:44.858341 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-12-13T00:10:44.858351286+00:00 stderr F W1213 00:10:44.858345 1 feature_gate.go:227] unrecognized feature gate: Example 2025-12-13T00:10:44.858351286+00:00 stderr F W1213 00:10:44.858349 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-12-13T00:10:44.859078957+00:00 stderr F I1213 00:10:44.859011 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-12-13T00:10:44.859078957+00:00 stderr F I1213 00:10:44.859043 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-12-13T00:10:44.859078957+00:00 stderr F I1213 00:10:44.859048 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-12-13T00:10:44.859078957+00:00 stderr F I1213 00:10:44.859053 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-12-13T00:10:44.859078957+00:00 stderr F I1213 00:10:44.859057 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-12-13T00:10:44.859078957+00:00 stderr F I1213 00:10:44.859062 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="true" 2025-12-13T00:10:44.859078957+00:00 stderr F I1213 
00:10:44.859065 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-12-13T00:10:44.859078957+00:00 stderr F I1213 00:10:44.859071 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-12-13T00:10:44.859078957+00:00 stderr F I1213 00:10:44.859075 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-12-13T00:10:44.859099757+00:00 stderr F I1213 00:10:44.859078 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-12-13T00:10:44.859099757+00:00 stderr F I1213 00:10:44.859081 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-12-13T00:10:44.859099757+00:00 stderr F I1213 00:10:44.859085 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes" 2025-12-13T00:10:44.859099757+00:00 stderr F I1213 00:10:44.859088 1 flags.go:64] FLAG: --client-ca-file="" 2025-12-13T00:10:44.859099757+00:00 stderr F I1213 00:10:44.859091 1 flags.go:64] FLAG: --config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml" 2025-12-13T00:10:44.859099757+00:00 stderr F I1213 00:10:44.859094 1 flags.go:64] FLAG: --contention-profiling="true" 2025-12-13T00:10:44.859110757+00:00 stderr F I1213 00:10:44.859097 1 flags.go:64] FLAG: --disabled-metrics="[]" 2025-12-13T00:10:44.859132948+00:00 stderr F I1213 00:10:44.859102 1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false" 2025-12-13T00:10:44.859132948+00:00 stderr F I1213 00:10:44.859120 1 flags.go:64] FLAG: --help="false" 2025-12-13T00:10:44.859132948+00:00 stderr F I1213 00:10:44.859124 1 
flags.go:64] FLAG: --http2-max-streams-per-connection="0" 2025-12-13T00:10:44.859132948+00:00 stderr F I1213 00:10:44.859128 1 flags.go:64] FLAG: --kube-api-burst="100" 2025-12-13T00:10:44.859142438+00:00 stderr F I1213 00:10:44.859132 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" 2025-12-13T00:10:44.859142438+00:00 stderr F I1213 00:10:44.859135 1 flags.go:64] FLAG: --kube-api-qps="50" 2025-12-13T00:10:44.859142438+00:00 stderr F I1213 00:10:44.859140 1 flags.go:64] FLAG: --kubeconfig="" 2025-12-13T00:10:44.859150409+00:00 stderr F I1213 00:10:44.859142 1 flags.go:64] FLAG: --leader-elect="true" 2025-12-13T00:10:44.859150409+00:00 stderr F I1213 00:10:44.859146 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s" 2025-12-13T00:10:44.859164329+00:00 stderr F I1213 00:10:44.859149 1 flags.go:64] FLAG: --leader-elect-renew-deadline="10s" 2025-12-13T00:10:44.859164329+00:00 stderr F I1213 00:10:44.859152 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases" 2025-12-13T00:10:44.859164329+00:00 stderr F I1213 00:10:44.859155 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-scheduler" 2025-12-13T00:10:44.859164329+00:00 stderr F I1213 00:10:44.859158 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system" 2025-12-13T00:10:44.859164329+00:00 stderr F I1213 00:10:44.859161 1 flags.go:64] FLAG: --leader-elect-retry-period="2s" 2025-12-13T00:10:44.859173069+00:00 stderr F I1213 00:10:44.859165 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-12-13T00:10:44.859180079+00:00 stderr F I1213 00:10:44.859168 1 flags.go:64] FLAG: --log-json-info-buffer-size="0" 2025-12-13T00:10:44.859180079+00:00 stderr F I1213 00:10:44.859174 1 flags.go:64] FLAG: --log-json-split-stream="false" 2025-12-13T00:10:44.859180079+00:00 stderr F I1213 00:10:44.859177 1 flags.go:64] FLAG: --logging-format="text" 2025-12-13T00:10:44.859187730+00:00 stderr F I1213 00:10:44.859180 1 flags.go:64] FLAG: --master="" 
2025-12-13T00:10:44.859187730+00:00 stderr F I1213 00:10:44.859183 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-12-13T00:10:44.859194940+00:00 stderr F I1213 00:10:44.859186 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-12-13T00:10:44.859194940+00:00 stderr F I1213 00:10:44.859189 1 flags.go:64] FLAG: --pod-max-in-unschedulable-pods-duration="5m0s" 2025-12-13T00:10:44.859194940+00:00 stderr F I1213 00:10:44.859192 1 flags.go:64] FLAG: --profiling="true" 2025-12-13T00:10:44.859202670+00:00 stderr F I1213 00:10:44.859195 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-12-13T00:10:44.859202670+00:00 stderr F I1213 00:10:44.859199 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2025-12-13T00:10:44.859210570+00:00 stderr F I1213 00:10:44.859202 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-12-13T00:10:44.859210570+00:00 stderr F I1213 00:10:44.859207 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859211 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859227 1 flags.go:64] FLAG: --secure-port="10259" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859230 1 flags.go:64] FLAG: --show-hidden-metrics-for-version="" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859233 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859237 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859244 1 flags.go:64] 
FLAG: --tls-min-version="VersionTLS12" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859247 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859251 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859255 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859258 1 flags.go:64] FLAG: --v="2" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859263 1 flags.go:64] FLAG: --version="false" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859267 1 flags.go:64] FLAG: --vmodule="" 2025-12-13T00:10:44.859430087+00:00 stderr F I1213 00:10:44.859271 1 flags.go:64] FLAG: --write-config-to="" 2025-12-13T00:10:44.863588097+00:00 stderr F I1213 00:10:44.863542 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:10:55.257473590+00:00 stderr F W1213 00:10:55.257358 1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://api-int.crc.testing:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout 2025-12-13T00:10:55.257473590+00:00 stderr F W1213 00:10:55.257406 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous. 
2025-12-13T00:10:55.257473590+00:00 stderr F W1213 00:10:55.257415 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false 2025-12-13T00:11:01.790689351+00:00 stderr F I1213 00:11:01.790581 1 configfile.go:94] "Using component config" config=< 2025-12-13T00:11:01.790689351+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-12-13T00:11:01.790689351+00:00 stderr F clientConnection: 2025-12-13T00:11:01.790689351+00:00 stderr F acceptContentTypes: "" 2025-12-13T00:11:01.790689351+00:00 stderr F burst: 100 2025-12-13T00:11:01.790689351+00:00 stderr F contentType: application/vnd.kubernetes.protobuf 2025-12-13T00:11:01.790689351+00:00 stderr F kubeconfig: /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig 2025-12-13T00:11:01.790689351+00:00 stderr F qps: 50 2025-12-13T00:11:01.790689351+00:00 stderr F enableContentionProfiling: false 2025-12-13T00:11:01.790689351+00:00 stderr F enableProfiling: false 2025-12-13T00:11:01.790689351+00:00 stderr F kind: KubeSchedulerConfiguration 2025-12-13T00:11:01.790689351+00:00 stderr F leaderElection: 2025-12-13T00:11:01.790689351+00:00 stderr F leaderElect: true 2025-12-13T00:11:01.790689351+00:00 stderr F leaseDuration: 2m17s 2025-12-13T00:11:01.790689351+00:00 stderr F renewDeadline: 1m47s 2025-12-13T00:11:01.790689351+00:00 stderr F resourceLock: leases 2025-12-13T00:11:01.790689351+00:00 stderr F resourceName: kube-scheduler 2025-12-13T00:11:01.790689351+00:00 stderr F resourceNamespace: openshift-kube-scheduler 2025-12-13T00:11:01.790689351+00:00 stderr F retryPeriod: 26s 2025-12-13T00:11:01.790689351+00:00 stderr F parallelism: 16 2025-12-13T00:11:01.790689351+00:00 stderr F percentageOfNodesToScore: 0 2025-12-13T00:11:01.790689351+00:00 stderr F podInitialBackoffSeconds: 1 2025-12-13T00:11:01.790689351+00:00 stderr F podMaxBackoffSeconds: 10 2025-12-13T00:11:01.790689351+00:00 stderr F profiles: 
2025-12-13T00:11:01.790689351+00:00 stderr F - pluginConfig: 2025-12-13T00:11:01.790689351+00:00 stderr F - args: 2025-12-13T00:11:01.790689351+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-12-13T00:11:01.790689351+00:00 stderr F kind: DefaultPreemptionArgs 2025-12-13T00:11:01.790689351+00:00 stderr F minCandidateNodesAbsolute: 100 2025-12-13T00:11:01.790689351+00:00 stderr F minCandidateNodesPercentage: 10 2025-12-13T00:11:01.790689351+00:00 stderr F name: DefaultPreemption 2025-12-13T00:11:01.790689351+00:00 stderr F - args: 2025-12-13T00:11:01.790689351+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-12-13T00:11:01.790689351+00:00 stderr F hardPodAffinityWeight: 1 2025-12-13T00:11:01.790689351+00:00 stderr F ignorePreferredTermsOfExistingPods: false 2025-12-13T00:11:01.790689351+00:00 stderr F kind: InterPodAffinityArgs 2025-12-13T00:11:01.790689351+00:00 stderr F name: InterPodAffinity 2025-12-13T00:11:01.790689351+00:00 stderr F - args: 2025-12-13T00:11:01.790689351+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-12-13T00:11:01.790689351+00:00 stderr F kind: NodeAffinityArgs 2025-12-13T00:11:01.790689351+00:00 stderr F name: NodeAffinity 2025-12-13T00:11:01.790689351+00:00 stderr F - args: 2025-12-13T00:11:01.790689351+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-12-13T00:11:01.790689351+00:00 stderr F kind: NodeResourcesBalancedAllocationArgs 2025-12-13T00:11:01.790689351+00:00 stderr F resources: 2025-12-13T00:11:01.790689351+00:00 stderr F - name: cpu 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 1 2025-12-13T00:11:01.790689351+00:00 stderr F - name: memory 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 1 2025-12-13T00:11:01.790689351+00:00 stderr F name: NodeResourcesBalancedAllocation 2025-12-13T00:11:01.790689351+00:00 stderr F - args: 2025-12-13T00:11:01.790689351+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-12-13T00:11:01.790689351+00:00 stderr F kind: 
NodeResourcesFitArgs 2025-12-13T00:11:01.790689351+00:00 stderr F scoringStrategy: 2025-12-13T00:11:01.790689351+00:00 stderr F resources: 2025-12-13T00:11:01.790689351+00:00 stderr F - name: cpu 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 1 2025-12-13T00:11:01.790689351+00:00 stderr F - name: memory 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 1 2025-12-13T00:11:01.790689351+00:00 stderr F type: LeastAllocated 2025-12-13T00:11:01.790689351+00:00 stderr F name: NodeResourcesFit 2025-12-13T00:11:01.790689351+00:00 stderr F - args: 2025-12-13T00:11:01.790689351+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-12-13T00:11:01.790689351+00:00 stderr F defaultingType: System 2025-12-13T00:11:01.790689351+00:00 stderr F kind: PodTopologySpreadArgs 2025-12-13T00:11:01.790689351+00:00 stderr F name: PodTopologySpread 2025-12-13T00:11:01.790689351+00:00 stderr F - args: 2025-12-13T00:11:01.790689351+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1 2025-12-13T00:11:01.790689351+00:00 stderr F bindTimeoutSeconds: 600 2025-12-13T00:11:01.790689351+00:00 stderr F kind: VolumeBindingArgs 2025-12-13T00:11:01.790689351+00:00 stderr F name: VolumeBinding 2025-12-13T00:11:01.790689351+00:00 stderr F plugins: 2025-12-13T00:11:01.790689351+00:00 stderr F bind: {} 2025-12-13T00:11:01.790689351+00:00 stderr F filter: {} 2025-12-13T00:11:01.790689351+00:00 stderr F multiPoint: 2025-12-13T00:11:01.790689351+00:00 stderr F enabled: 2025-12-13T00:11:01.790689351+00:00 stderr F - name: PrioritySort 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: NodeUnschedulable 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: NodeName 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: TaintToleration 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 3 2025-12-13T00:11:01.790689351+00:00 
stderr F - name: NodeAffinity 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 2 2025-12-13T00:11:01.790689351+00:00 stderr F - name: NodePorts 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: NodeResourcesFit 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 1 2025-12-13T00:11:01.790689351+00:00 stderr F - name: VolumeRestrictions 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: EBSLimits 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: GCEPDLimits 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: NodeVolumeLimits 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: AzureDiskLimits 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: VolumeBinding 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: VolumeZone 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: PodTopologySpread 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 2 2025-12-13T00:11:01.790689351+00:00 stderr F - name: InterPodAffinity 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 2 2025-12-13T00:11:01.790689351+00:00 stderr F - name: DefaultPreemption 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: NodeResourcesBalancedAllocation 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 1 2025-12-13T00:11:01.790689351+00:00 stderr F - name: ImageLocality 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 1 2025-12-13T00:11:01.790689351+00:00 stderr F - name: DefaultBinder 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F - name: 
SchedulingGates 2025-12-13T00:11:01.790689351+00:00 stderr F weight: 0 2025-12-13T00:11:01.790689351+00:00 stderr F permit: {} 2025-12-13T00:11:01.790689351+00:00 stderr F postBind: {} 2025-12-13T00:11:01.790689351+00:00 stderr F postFilter: {} 2025-12-13T00:11:01.790689351+00:00 stderr F preBind: {} 2025-12-13T00:11:01.790689351+00:00 stderr F preEnqueue: {} 2025-12-13T00:11:01.790689351+00:00 stderr F preFilter: {} 2025-12-13T00:11:01.790689351+00:00 stderr F preScore: {} 2025-12-13T00:11:01.790689351+00:00 stderr F queueSort: {} 2025-12-13T00:11:01.790689351+00:00 stderr F reserve: {} 2025-12-13T00:11:01.790689351+00:00 stderr F score: {} 2025-12-13T00:11:01.790689351+00:00 stderr F schedulerName: default-scheduler 2025-12-13T00:11:01.790689351+00:00 stderr F > 2025-12-13T00:11:01.790958768+00:00 stderr F I1213 00:11:01.790896 1 server.go:159] "Starting Kubernetes Scheduler" version="v1.29.5+29c95f3" 2025-12-13T00:11:01.790958768+00:00 stderr F I1213 00:11:01.790921 1 server.go:161] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" 2025-12-13T00:11:01.794011921+00:00 stderr F I1213 00:11:01.792793 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-12-13 00:11:01.792766978 +0000 UTC))" 2025-12-13T00:11:01.795081061+00:00 stderr F I1213 00:11:01.794438 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:11:01.795722888+00:00 stderr F I1213 00:11:01.795663 1 named_certificates.go:53] "Loaded SNI cert" index=0 
certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584644\" (2025-12-12 23:10:44 +0000 UTC to 2026-12-12 23:10:44 +0000 UTC (now=2025-12-13 00:11:01.795624895 +0000 UTC))" 2025-12-13T00:11:01.795722888+00:00 stderr F I1213 00:11:01.795699 1 secure_serving.go:213] Serving securely on [::]:10259 2025-12-13T00:11:01.796045357+00:00 stderr F I1213 00:11:01.796004 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:11:01.799436029+00:00 stderr F I1213 00:11:01.794648 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" 2025-12-13T00:11:01.800049776+00:00 stderr F I1213 00:11:01.799982 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:11:01.812120234+00:00 stderr F I1213 00:11:01.810391 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.812120234+00:00 stderr F I1213 00:11:01.810686 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.813169712+00:00 stderr F I1213 00:11:01.813133 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.813538782+00:00 stderr F I1213 00:11:01.813506 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.814016526+00:00 stderr F I1213 00:11:01.813962 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
2025-12-13T00:11:01.816015369+00:00 stderr F I1213 00:11:01.812418 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.826898445+00:00 stderr F I1213 00:11:01.826812 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.830348369+00:00 stderr F I1213 00:11:01.827513 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.830348369+00:00 stderr F I1213 00:11:01.827805 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.830348369+00:00 stderr F I1213 00:11:01.827976 1 node_tree.go:65] "Added node in listed group to NodeTree" node="crc" zone="" 2025-12-13T00:11:01.830348369+00:00 stderr F I1213 00:11:01.828206 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.830348369+00:00 stderr F I1213 00:11:01.828344 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.830801252+00:00 stderr F I1213 00:11:01.830760 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.834256456+00:00 stderr F I1213 00:11:01.834185 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.850277951+00:00 stderr F I1213 00:11:01.849644 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.858624538+00:00 stderr F I1213 00:11:01.858516 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:11:01.897463713+00:00 
stderr F I1213 00:11:01.897352 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/kube-scheduler... 2025-12-13T00:11:01.901527994+00:00 stderr F I1213 00:11:01.901448 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:11:01.901907754+00:00 stderr F I1213 00:11:01.901860 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:11:01.901832452 +0000 UTC))" 2025-12-13T00:11:01.901907754+00:00 stderr F I1213 00:11:01.901887 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:11:01.901878113 +0000 UTC))" 2025-12-13T00:11:01.901907754+00:00 stderr F I1213 00:11:01.901903 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:11:01.901892014 +0000 UTC))" 2025-12-13T00:11:01.901977266+00:00 stderr F I1213 00:11:01.901918 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:11:01.901907324 +0000 UTC))" 2025-12-13T00:11:01.901977266+00:00 stderr F I1213 00:11:01.901951 1 
tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:11:01.901922544 +0000 UTC))" 2025-12-13T00:11:01.901977266+00:00 stderr F I1213 00:11:01.901965 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:11:01.901955915 +0000 UTC))" 2025-12-13T00:11:01.901996116+00:00 stderr F I1213 00:11:01.901982 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:11:01.901968496 +0000 UTC))" 2025-12-13T00:11:01.902007017+00:00 stderr F I1213 00:11:01.901994 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:11:01.901985856 +0000 UTC))" 2025-12-13T00:11:01.902017807+00:00 stderr F I1213 00:11:01.902007 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:11:01.901998656 
+0000 UTC))" 2025-12-13T00:11:01.902029427+00:00 stderr F I1213 00:11:01.902022 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:11:01.902014317 +0000 UTC))" 2025-12-13T00:11:01.902366136+00:00 stderr F I1213 00:11:01.902308 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-12-13 00:11:01.902292644 +0000 UTC))" 2025-12-13T00:11:01.902606393+00:00 stderr F I1213 00:11:01.902550 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584644\" (2025-12-12 23:10:44 +0000 UTC to 2026-12-12 23:10:44 +0000 UTC (now=2025-12-13 00:11:01.902538701 +0000 UTC))" 2025-12-13T00:13:33.556313198+00:00 stderr F I1213 00:13:33.556256 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler/kube-scheduler 2025-12-13T00:13:33.681235907+00:00 stderr F I1213 00:13:33.681138 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-999b2" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:13:33.681235907+00:00 stderr F I1213 00:13:33.681180 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-5fv2w" node="crc" 
evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:13:33.681235907+00:00 stderr F I1213 00:13:33.681187 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-image-registry/image-pruner-29426400-tlv26" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:13:33.681235907+00:00 stderr F I1213 00:13:33.681205 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-scn2m" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:13:33.681305159+00:00 stderr F I1213 00:13:33.681263 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-operator-lifecycle-manager/collect-profiles-29426400-gwxp5" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:14:37.706053639+00:00 stderr F I1213 00:14:37.705997 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/marketplace-operator-8b455464d-kghgr" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:14:40.458436473+00:00 stderr F I1213 00:14:40.458338 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-lcrg8" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:14:43.454056351+00:00 stderr F I1213 00:14:43.453964 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-fs22p" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:14:44.461593197+00:00 stderr F I1213 00:14:44.461500 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-zg7cl" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:14:46.057243348+00:00 stderr F I1213 00:14:46.056704 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-nv4pl" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:14:46.255502494+00:00 stderr F I1213 00:14:46.254075 1 schedule_one.go:302] "Successfully bound pod to node" 
pod="openshift-marketplace/community-operators-s2hxn" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:15:00.165170723+00:00 stderr F I1213 00:15:00.165030 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-operator-lifecycle-manager/collect-profiles-29426415-vhdrh" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:16:37.410757389+00:00 stderr F I1213 00:16:37.410685 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-multus/cni-sysctl-allowlist-ds-7gz4s" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:18:32.344714610+00:00 stderr F I1213 00:18:32.344629 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-image-registry/image-registry-75b7bb6564-rnjvj" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.560881 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.560842826 +0000 UTC))" 2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.560919 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.560905408 +0000 UTC))" 2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.560960 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.560923899 +0000 UTC))" 
2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.560980 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.56096614 +0000 UTC))" 2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.561000 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.560986531 +0000 UTC))" 2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.561017 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.561004642 +0000 UTC))" 2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.561035 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.561021502 +0000 UTC))" 2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.561054 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 
2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.561038992 +0000 UTC))" 2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.561072 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.561057833 +0000 UTC))" 2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.561089 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.561077613 +0000 UTC))" 2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.561109 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.561094784 +0000 UTC))" 2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.561505 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-12-13 00:19:37.561483165 +0000 UTC))" 2025-12-13T00:19:37.564211000+00:00 stderr F I1213 00:19:37.561877 1 
named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584645\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584644\" (2025-12-12 23:10:44 +0000 UTC to 2026-12-12 23:10:44 +0000 UTC (now=2025-12-13 00:19:37.561857215 +0000 UTC))" 2025-12-13T00:20:02.000523853+00:00 stderr F I1213 00:20:02.000452 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-ovn-kubernetes/ovnkube-node-brz8k" node="crc" evaluatedNodes=1 feasibleNodes=1 2025-12-13T00:21:09.905975379+00:00 stderr F I1213 00:21:09.905820 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:10.511635272+00:00 stderr F I1213 00:21:10.511350 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:11.201862377+00:00 stderr F I1213 00:21:11.201798 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:12.904086402+00:00 stderr F I1213 00:21:12.904020 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:13.710316148+00:00 stderr F I1213 00:21:13.710251 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:14.288325134+00:00 stderr F I1213 00:21:14.288071 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:17.154686623+00:00 stderr F I1213 00:21:17.154620 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:18.311067167+00:00 stderr F I1213 00:21:18.310997 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:18.864337336+00:00 stderr F I1213 00:21:18.864038 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:18.910486382+00:00 stderr F I1213 00:21:18.909092 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:18.925148617+00:00 stderr F I1213 00:21:18.925074 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:20.021985456+00:00 stderr F I1213 00:21:20.021664 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:20.428405653+00:00 stderr F I1213 00:21:20.428340 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:21.752165575+00:00 stderr F I1213 00:21:21.752065 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:26.204071428+00:00 stderr F I1213 00:21:26.203990 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler/0.log
2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230190 1 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230330 1 feature_gate.go:227] 
unrecognized feature gate: AdminNetworkPolicy 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230337 1 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230342 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230347 1 feature_gate.go:227] unrecognized feature gate: ExternalOIDC 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230352 1 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230356 1 feature_gate.go:227] unrecognized feature gate: NewOLM 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230403 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230413 1 feature_gate.go:227] unrecognized feature gate: DNSNameResolver 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230418 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230422 1 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230427 1 feature_gate.go:227] unrecognized feature gate: PinnedImages 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230432 1 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230436 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230441 1 feature_gate.go:227] unrecognized feature gate: Example 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230446 1 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate 2025-08-13T20:08:10.230920209+00:00 stderr 
F W0813 20:08:10.230450 1 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230455 1 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230460 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230464 1 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230469 1 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230474 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230479 1 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230484 1 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230488 1 feature_gate.go:227] unrecognized feature gate: SignatureStores 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230493 1 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230497 1 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230502 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230507 1 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230511 1 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS 2025-08-13T20:08:10.230920209+00:00 stderr F 
W0813 20:08:10.230523 1 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230528 1 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230533 1 feature_gate.go:227] unrecognized feature gate: GatewayAPI 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230537 1 feature_gate.go:227] unrecognized feature gate: HardwareSpeed 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230542 1 feature_gate.go:227] unrecognized feature gate: MetricsServer 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230546 1 feature_gate.go:227] unrecognized feature gate: PlatformOperators 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230551 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230555 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230560 1 feature_gate.go:227] unrecognized feature gate: ImagePolicy 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230565 1 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230569 1 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230574 1 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230580 1 feature_gate.go:227] unrecognized feature gate: ManagedBootImages 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230585 1 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230590 1 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB 
2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230594 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230599 1 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230604 1 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230608 1 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230614 1 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230621 1 feature_gate.go:227] unrecognized feature gate: OnClusterBuild 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230628 1 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230634 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230639 1 feature_gate.go:227] unrecognized feature gate: InsightsConfig 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230644 1 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230649 1 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230661 1 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230667 1 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230672 1 feature_gate.go:227] unrecognized feature gate: UpgradeStatus 2025-08-13T20:08:10.230920209+00:00 stderr F W0813 20:08:10.230677 1 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231000 1 flags.go:64] FLAG: --allow-metric-labels="[]" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231022 1 flags.go:64] FLAG: --allow-metric-labels-manifest="" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231062 1 flags.go:64] FLAG: --authentication-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231071 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231077 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-08-13T20:08:10.231092794+00:00 stderr F I0813 20:08:10.231083 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="true" 2025-08-13T20:08:10.231106134+00:00 stderr F I0813 20:08:10.231087 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-08-13T20:08:10.231106134+00:00 stderr F I0813 20:08:10.231094 1 flags.go:64] FLAG: --authorization-kubeconfig="/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig" 2025-08-13T20:08:10.231106134+00:00 stderr F I0813 20:08:10.231099 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-08-13T20:08:10.231116925+00:00 stderr F I0813 20:08:10.231104 1 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-08-13T20:08:10.231116925+00:00 stderr F I0813 20:08:10.231108 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 
2025-08-13T20:08:10.231128245+00:00 stderr F I0813 20:08:10.231115 1 flags.go:64] FLAG: --cert-dir="/var/run/kubernetes"
2025-08-13T20:08:10.231128245+00:00 stderr F I0813 20:08:10.231119 1 flags.go:64] FLAG: --client-ca-file=""
2025-08-13T20:08:10.231128245+00:00 stderr F I0813 20:08:10.231123 1 flags.go:64] FLAG: --config="/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml"
2025-08-13T20:08:10.231139305+00:00 stderr F I0813 20:08:10.231128 1 flags.go:64] FLAG: --contention-profiling="true"
2025-08-13T20:08:10.231139305+00:00 stderr F I0813 20:08:10.231132 1 flags.go:64] FLAG: --disabled-metrics="[]"
2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231138 1 flags.go:64] FLAG: --feature-gates="CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EventedPLEG=false,KMSv1=true,MaxUnavailableStatefulSet=false,NodeSwap=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,TranslateStreamCloseWebsocketRequests=false,ValidatingAdmissionPolicy=false"
2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231184 1 flags.go:64] FLAG: --help="false"
2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231191 1 flags.go:64] FLAG: --http2-max-streams-per-connection="0"
2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231197 1 flags.go:64] FLAG: --kube-api-burst="100"
2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231203 1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231208 1 flags.go:64] FLAG: --kube-api-qps="50"
2025-08-13T20:08:10.231218768+00:00 stderr F I0813 20:08:10.231214 1 flags.go:64] FLAG: --kubeconfig=""
2025-08-13T20:08:10.231234218+00:00 stderr F I0813 20:08:10.231218 1 flags.go:64] FLAG: --leader-elect="true"
2025-08-13T20:08:10.231234218+00:00 stderr F I0813 20:08:10.231222 1 flags.go:64] FLAG: --leader-elect-lease-duration="15s"
2025-08-13T20:08:10.231234218+00:00 stderr F I0813 20:08:10.231226 1 flags.go:64] FLAG: --leader-elect-renew-deadline="10s"
2025-08-13T20:08:10.231250558+00:00 stderr F I0813 20:08:10.231231 1 flags.go:64] FLAG: --leader-elect-resource-lock="leases"
2025-08-13T20:08:10.231250558+00:00 stderr F I0813 20:08:10.231235 1 flags.go:64] FLAG: --leader-elect-resource-name="kube-scheduler"
2025-08-13T20:08:10.231250558+00:00 stderr F I0813 20:08:10.231239 1 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system"
2025-08-13T20:08:10.231250558+00:00 stderr F I0813 20:08:10.231244 1 flags.go:64] FLAG: --leader-elect-retry-period="2s"
2025-08-13T20:08:10.231261759+00:00 stderr F I0813 20:08:10.231248 1 flags.go:64] FLAG: --log-flush-frequency="5s"
2025-08-13T20:08:10.231261759+00:00 stderr F I0813 20:08:10.231253 1 flags.go:64] FLAG: --log-json-info-buffer-size="0"
2025-08-13T20:08:10.231272329+00:00 stderr F I0813 20:08:10.231260 1 flags.go:64] FLAG: --log-json-split-stream="false"
2025-08-13T20:08:10.231272329+00:00 stderr F I0813 20:08:10.231264 1 flags.go:64] FLAG: --logging-format="text"
2025-08-13T20:08:10.231272329+00:00 stderr F I0813 20:08:10.231268 1 flags.go:64] FLAG: --master=""
2025-08-13T20:08:10.231283149+00:00 stderr F I0813 20:08:10.231272 1 flags.go:64] FLAG: --permit-address-sharing="false"
2025-08-13T20:08:10.231283149+00:00 stderr F I0813 20:08:10.231277 1 flags.go:64] FLAG: --permit-port-sharing="false"
2025-08-13T20:08:10.231293160+00:00 stderr F I0813 20:08:10.231281 1 flags.go:64] FLAG: --pod-max-in-unschedulable-pods-duration="5m0s"
2025-08-13T20:08:10.231293160+00:00 stderr F I0813 20:08:10.231285 1 flags.go:64] FLAG: --profiling="true"
2025-08-13T20:08:10.231302840+00:00 stderr F I0813 20:08:10.231289 1 flags.go:64] FLAG: --requestheader-allowed-names="[]"
2025-08-13T20:08:10.231302840+00:00 stderr F I0813 20:08:10.231294 1 flags.go:64] FLAG: --requestheader-client-ca-file=""
2025-08-13T20:08:10.231312790+00:00 stderr F I0813 20:08:10.231298 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
2025-08-13T20:08:10.231312790+00:00 stderr F I0813 20:08:10.231304 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]"
2025-08-13T20:08:10.231322541+00:00 stderr F I0813 20:08:10.231310 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]"
2025-08-13T20:08:10.231322541+00:00 stderr F I0813 20:08:10.231316 1 flags.go:64] FLAG: --secure-port="10259"
2025-08-13T20:08:10.231332641+00:00 stderr F I0813 20:08:10.231320 1 flags.go:64] FLAG: --show-hidden-metrics-for-version=""
2025-08-13T20:08:10.231332641+00:00 stderr F I0813 20:08:10.231324 1 flags.go:64] FLAG: --tls-cert-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt"
2025-08-13T20:08:10.231342871+00:00 stderr F I0813 20:08:10.231329 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]"
2025-08-13T20:08:10.231342871+00:00 stderr F I0813 20:08:10.231337 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12"
2025-08-13T20:08:10.231353441+00:00 stderr F I0813 20:08:10.231342 1 flags.go:64] FLAG: --tls-private-key-file="/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:08:10.231353441+00:00 stderr F I0813 20:08:10.231347 1 flags.go:64] FLAG: --tls-sni-cert-key="[]"
2025-08-13T20:08:10.231363342+00:00 stderr F I0813 20:08:10.231353 1 flags.go:64] FLAG: --unsupported-kube-api-over-localhost="false"
2025-08-13T20:08:10.231373152+00:00 stderr F I0813 20:08:10.231357 1 flags.go:64] FLAG: --v="2"
2025-08-13T20:08:10.231387042+00:00 stderr F I0813 20:08:10.231369 1 flags.go:64] FLAG: --version="false"
2025-08-13T20:08:10.231387042+00:00 stderr F I0813 20:08:10.231375 1 flags.go:64] FLAG: --vmodule=""
2025-08-13T20:08:10.231387042+00:00 stderr F I0813 20:08:10.231380 1 flags.go:64] FLAG: --write-config-to=""
2025-08-13T20:08:10.245082115+00:00 stderr F I0813 20:08:10.244995 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:08:11.146145129+00:00 stderr F I0813 20:08:11.146059 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T20:08:11.188037240+00:00 stderr F I0813 20:08:11.187945 1 configfile.go:94] "Using component config" config=<
2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-08-13T20:08:11.188037240+00:00 stderr F clientConnection:
2025-08-13T20:08:11.188037240+00:00 stderr F acceptContentTypes: ""
2025-08-13T20:08:11.188037240+00:00 stderr F burst: 100
2025-08-13T20:08:11.188037240+00:00 stderr F contentType: application/vnd.kubernetes.protobuf
2025-08-13T20:08:11.188037240+00:00 stderr F kubeconfig: /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig
2025-08-13T20:08:11.188037240+00:00 stderr F qps: 50
2025-08-13T20:08:11.188037240+00:00 stderr F enableContentionProfiling: false
2025-08-13T20:08:11.188037240+00:00 stderr F enableProfiling: false
2025-08-13T20:08:11.188037240+00:00 stderr F kind: KubeSchedulerConfiguration
2025-08-13T20:08:11.188037240+00:00 stderr F leaderElection:
2025-08-13T20:08:11.188037240+00:00 stderr F leaderElect: true
2025-08-13T20:08:11.188037240+00:00 stderr F leaseDuration: 2m17s
2025-08-13T20:08:11.188037240+00:00 stderr F renewDeadline: 1m47s
2025-08-13T20:08:11.188037240+00:00 stderr F resourceLock: leases
2025-08-13T20:08:11.188037240+00:00 stderr F resourceName: kube-scheduler
2025-08-13T20:08:11.188037240+00:00 stderr F resourceNamespace: openshift-kube-scheduler
2025-08-13T20:08:11.188037240+00:00 stderr F retryPeriod: 26s
2025-08-13T20:08:11.188037240+00:00 stderr F parallelism: 16
2025-08-13T20:08:11.188037240+00:00 stderr F percentageOfNodesToScore: 0
2025-08-13T20:08:11.188037240+00:00 stderr F podInitialBackoffSeconds: 1
2025-08-13T20:08:11.188037240+00:00 stderr F podMaxBackoffSeconds: 10
2025-08-13T20:08:11.188037240+00:00 stderr F profiles:
2025-08-13T20:08:11.188037240+00:00 stderr F - pluginConfig:
2025-08-13T20:08:11.188037240+00:00 stderr F - args:
2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-08-13T20:08:11.188037240+00:00 stderr F kind: DefaultPreemptionArgs
2025-08-13T20:08:11.188037240+00:00 stderr F minCandidateNodesAbsolute: 100
2025-08-13T20:08:11.188037240+00:00 stderr F minCandidateNodesPercentage: 10
2025-08-13T20:08:11.188037240+00:00 stderr F name: DefaultPreemption
2025-08-13T20:08:11.188037240+00:00 stderr F - args:
2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-08-13T20:08:11.188037240+00:00 stderr F hardPodAffinityWeight: 1
2025-08-13T20:08:11.188037240+00:00 stderr F ignorePreferredTermsOfExistingPods: false
2025-08-13T20:08:11.188037240+00:00 stderr F kind: InterPodAffinityArgs
2025-08-13T20:08:11.188037240+00:00 stderr F name: InterPodAffinity
2025-08-13T20:08:11.188037240+00:00 stderr F - args:
2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-08-13T20:08:11.188037240+00:00 stderr F kind: NodeAffinityArgs
2025-08-13T20:08:11.188037240+00:00 stderr F name: NodeAffinity
2025-08-13T20:08:11.188037240+00:00 stderr F - args:
2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-08-13T20:08:11.188037240+00:00 stderr F kind: NodeResourcesBalancedAllocationArgs
2025-08-13T20:08:11.188037240+00:00 stderr F resources:
2025-08-13T20:08:11.188037240+00:00 stderr F - name: cpu
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1
2025-08-13T20:08:11.188037240+00:00 stderr F - name: memory
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1
2025-08-13T20:08:11.188037240+00:00 stderr F name: NodeResourcesBalancedAllocation
2025-08-13T20:08:11.188037240+00:00 stderr F - args:
2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-08-13T20:08:11.188037240+00:00 stderr F kind: NodeResourcesFitArgs
2025-08-13T20:08:11.188037240+00:00 stderr F scoringStrategy:
2025-08-13T20:08:11.188037240+00:00 stderr F resources:
2025-08-13T20:08:11.188037240+00:00 stderr F - name: cpu
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1
2025-08-13T20:08:11.188037240+00:00 stderr F - name: memory
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1
2025-08-13T20:08:11.188037240+00:00 stderr F type: LeastAllocated
2025-08-13T20:08:11.188037240+00:00 stderr F name: NodeResourcesFit
2025-08-13T20:08:11.188037240+00:00 stderr F - args:
2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-08-13T20:08:11.188037240+00:00 stderr F defaultingType: System
2025-08-13T20:08:11.188037240+00:00 stderr F kind: PodTopologySpreadArgs
2025-08-13T20:08:11.188037240+00:00 stderr F name: PodTopologySpread
2025-08-13T20:08:11.188037240+00:00 stderr F - args:
2025-08-13T20:08:11.188037240+00:00 stderr F apiVersion: kubescheduler.config.k8s.io/v1
2025-08-13T20:08:11.188037240+00:00 stderr F bindTimeoutSeconds: 600
2025-08-13T20:08:11.188037240+00:00 stderr F kind: VolumeBindingArgs
2025-08-13T20:08:11.188037240+00:00 stderr F name: VolumeBinding
2025-08-13T20:08:11.188037240+00:00 stderr F plugins:
2025-08-13T20:08:11.188037240+00:00 stderr F bind: {}
2025-08-13T20:08:11.188037240+00:00 stderr F filter: {}
2025-08-13T20:08:11.188037240+00:00 stderr F multiPoint:
2025-08-13T20:08:11.188037240+00:00 stderr F enabled:
2025-08-13T20:08:11.188037240+00:00 stderr F - name: PrioritySort
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodeUnschedulable
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodeName
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: TaintToleration
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 3
2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodeAffinity
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 2
2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodePorts
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodeResourcesFit
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1
2025-08-13T20:08:11.188037240+00:00 stderr F - name: VolumeRestrictions
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: EBSLimits
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: GCEPDLimits
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodeVolumeLimits
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: AzureDiskLimits
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: VolumeBinding
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: VolumeZone
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: PodTopologySpread
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 2
2025-08-13T20:08:11.188037240+00:00 stderr F - name: InterPodAffinity
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 2
2025-08-13T20:08:11.188037240+00:00 stderr F - name: DefaultPreemption
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: NodeResourcesBalancedAllocation
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1
2025-08-13T20:08:11.188037240+00:00 stderr F - name: ImageLocality
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 1
2025-08-13T20:08:11.188037240+00:00 stderr F - name: DefaultBinder
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F - name: SchedulingGates
2025-08-13T20:08:11.188037240+00:00 stderr F weight: 0
2025-08-13T20:08:11.188037240+00:00 stderr F permit: {}
2025-08-13T20:08:11.188037240+00:00 stderr F postBind: {}
2025-08-13T20:08:11.188037240+00:00 stderr F postFilter: {}
2025-08-13T20:08:11.188037240+00:00 stderr F preBind: {}
2025-08-13T20:08:11.188037240+00:00 stderr F preEnqueue: {}
2025-08-13T20:08:11.188037240+00:00 stderr F preFilter: {}
2025-08-13T20:08:11.188037240+00:00 stderr F preScore: {}
2025-08-13T20:08:11.188037240+00:00 stderr F queueSort: {}
2025-08-13T20:08:11.188037240+00:00 stderr F reserve: {}
2025-08-13T20:08:11.188037240+00:00 stderr F score: {}
2025-08-13T20:08:11.188037240+00:00 stderr F schedulerName: default-scheduler
2025-08-13T20:08:11.188037240+00:00 stderr F >
2025-08-13T20:08:11.188878124+00:00 stderr F I0813 20:08:11.188854 1 server.go:159] "Starting Kubernetes Scheduler" version="v1.29.5+29c95f3"
2025-08-13T20:08:11.188957786+00:00 stderr F I0813 20:08:11.188943 1 server.go:161] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2025-08-13T20:08:11.203107062+00:00 stderr F I0813 20:08:11.203033 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-08-13 20:08:11.202985438 +0000 UTC))"
2025-08-13T20:08:11.203489373+00:00 stderr F I0813 20:08:11.203445 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115691\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115690\" (2025-08-13 19:08:10 +0000 UTC to 2026-08-13 19:08:10 +0000 UTC (now=2025-08-13 20:08:11.203324348 +0000 UTC))"
2025-08-13T20:08:11.203548794+00:00 stderr F I0813 20:08:11.203507 1 secure_serving.go:213] Serving securely on [::]:10259
2025-08-13T20:08:11.203942036+00:00 stderr F I0813 20:08:11.203879 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T20:08:11.204870982+00:00 stderr F I0813 20:08:11.204750 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:08:11.205753528+00:00 stderr F I0813 20:08:11.205396 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:08:11.205753528+00:00 stderr F I0813 20:08:11.205692 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T20:08:11.206858439+00:00 stderr F I0813 20:08:11.206722 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:08:11.208265020+00:00 stderr F I0813 20:08:11.208201 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:08:11.208802005+00:00 stderr F I0813 20:08:11.208759 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T20:08:11.211168003+00:00 stderr F I0813 20:08:11.209991 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:08:11.217854585+00:00 stderr F I0813 20:08:11.217143 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.217854585+00:00 stderr F I0813 20:08:11.217420 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.218323598+00:00 stderr F I0813 20:08:11.218294 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.218656498+00:00 stderr F I0813 20:08:11.218594 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.225140743+00:00 stderr F I0813 20:08:11.225093 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.225409441+00:00 stderr F I0813 20:08:11.225093 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.225541035+00:00 stderr F I0813 20:08:11.225393 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.228423748+00:00 stderr F I0813 20:08:11.228394 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.228852450+00:00 stderr F I0813 20:08:11.228737 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.229345944+00:00 stderr F I0813 20:08:11.228441 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.229720875+00:00 stderr F I0813 20:08:11.229585 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.229720875+00:00 stderr F I0813 20:08:11.229561 1 node_tree.go:65] "Added node in listed group to NodeTree" node="crc" zone=""
2025-08-13T20:08:11.230234189+00:00 stderr F I0813 20:08:11.230210 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.233049880+00:00 stderr F I0813 20:08:11.232089 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.262424672+00:00 stderr F I0813 20:08:11.260649 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.275833337+00:00 stderr F I0813 20:08:11.274170 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.275833337+00:00 stderr F I0813 20:08:11.274459 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.302108150+00:00 stderr F I0813 20:08:11.301417 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:08:11.304706425+00:00 stderr F I0813 20:08:11.304648 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/kube-scheduler...
2025-08-13T20:08:11.310589033+00:00 stderr F I0813 20:08:11.310070 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T20:08:11.310714567+00:00 stderr F I0813 20:08:11.310689 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T20:08:11.310912263+00:00 stderr F I0813 20:08:11.310699 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:08:11.310624834 +0000 UTC))"
2025-08-13T20:08:11.311012305+00:00 stderr F I0813 20:08:11.310992 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:08:11.310959374 +0000 UTC))"
2025-08-13T20:08:11.311070797+00:00 stderr F I0813 20:08:11.311056 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:11.311035286 +0000 UTC))"
2025-08-13T20:08:11.311119569+00:00 stderr F I0813 20:08:11.311106 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:11.311089788 +0000 UTC))"
2025-08-13T20:08:11.311165560+00:00 stderr F I0813 20:08:11.311152 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.311138159 +0000 UTC))"
2025-08-13T20:08:11.311209871+00:00 stderr F I0813 20:08:11.311197 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.31118407 +0000 UTC))"
2025-08-13T20:08:11.311252592+00:00 stderr F I0813 20:08:11.311240 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.311227522 +0000 UTC))"
2025-08-13T20:08:11.311322654+00:00 stderr F I0813 20:08:11.311299 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.311282933 +0000 UTC))"
2025-08-13T20:08:11.311389356+00:00 stderr F I0813 20:08:11.311374 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:08:11.311342025 +0000 UTC))"
2025-08-13T20:08:11.311438288+00:00 stderr F I0813 20:08:11.311424 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:08:11.311411097 +0000 UTC))"
2025-08-13T20:08:11.312300042+00:00 stderr F I0813 20:08:11.312264 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-08-13 20:08:11.312244321 +0000 UTC))"
2025-08-13T20:08:11.312595841+00:00 stderr F I0813 20:08:11.312580 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115691\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115690\" (2025-08-13 19:08:10 +0000 UTC to 2026-08-13 19:08:10 +0000 UTC (now=2025-08-13 20:08:11.3125637 +0000 UTC))"
2025-08-13T20:08:11.317314156+00:00 stderr F I0813 20:08:11.317284 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler/kube-scheduler
2025-08-13T20:08:11.318466309+00:00 stderr F I0813 20:08:11.318358 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318649 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:08:11.318623444 +0000 UTC))"
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318698 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:08:11.318681925 +0000 UTC))"
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318718 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:11.318705526 +0000 UTC))"
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318742 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:08:11.318723397 +0000 UTC))"
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318761 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.318747857 +0000 UTC))"
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318844 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.318767468 +0000 UTC))"
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318870 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.31885504 +0000 UTC))"
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318932 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.318882411 +0000 UTC))"
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318956 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:08:11.318941623 +0000 UTC))"
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318976 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:08:11.318966564 +0000 UTC))"
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.318995 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:08:11.318983994 +0000 UTC))"
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.319304 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key" certDetail="\"scheduler.openshift-kube-scheduler.svc\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:27 +0000 UTC to 2027-08-13 20:00:28 +0000 UTC (now=2025-08-13 20:08:11.319284853 +0000 UTC))"
2025-08-13T20:08:11.320042334+00:00 stderr F I0813 20:08:11.319577 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115691\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115690\" (2025-08-13 19:08:10 +0000 UTC to 2026-08-13 19:08:10 +0000 UTC (now=2025-08-13 20:08:11.319561951 +0000 UTC))"
2025-08-13T20:08:37.333024227+00:00 stderr F E0813 20:08:37.331678 1 leaderelection.go:332] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/kube-scheduler?timeout=53.5s": dial tcp 192.168.130.11:6443: connect: connection refused
2025-08-13T20:09:01.242213943+00:00 stderr F I0813 20:09:01.242038 1 reflector.go:351] Caches populated for *v1.CSIStorageCapacity from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:03.336110067+00:00 stderr F I0813 20:09:03.335887 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:03.904718640+00:00 stderr F I0813 20:09:03.904657 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:04.237111850+00:00 stderr F I0813 20:09:04.236960 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:04.238841120+00:00 stderr F I0813 20:09:04.238728 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:05.534492978+00:00 stderr F I0813 20:09:05.534406 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:05.723631721+00:00 stderr F I0813 20:09:05.723574 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:06.019105803+00:00 stderr F I0813 20:09:06.018397 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:07.053737367+00:00 stderr F I0813 20:09:07.053594 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:07.761629982+00:00 stderr F I0813 20:09:07.761557 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:07.771045132+00:00 stderr F I0813 20:09:07.770994 1 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:07.834012487+00:00 stderr F I0813 20:09:07.833944 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:07.952277558+00:00 stderr F I0813 20:09:07.952198 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:07.981075823+00:00 stderr F I0813 20:09:07.980921 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:10.114889060+00:00 stderr F I0813 20:09:10.114739 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:10.765327109+00:00 stderr F I0813 20:09:10.764682 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:09:12.779279180+00:00 stderr F I0813 20:09:12.779210 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T20:10:15.272706390+00:00 stderr F I0813 20:10:15.272394 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:10:59.918217989+00:00 stderr F I0813 20:10:59.912574 1 schedule_one.go:992] "Unable to schedule pod; no fit; waiting" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" err="0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules."
2025-08-13T20:11:00.012890153+00:00 stderr F I0813 20:11:00.012639 1 schedule_one.go:992] "Unable to schedule pod; no fit; waiting" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" err="0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules."
2025-08-13T20:11:01.525253034+00:00 stderr F I0813 20:11:01.523937 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:11:01.531536584+00:00 stderr F I0813 20:11:01.528683 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:15:00.383192080+00:00 stderr F I0813 20:15:00.378262 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:16:58.186838320+00:00 stderr F I0813 20:16:58.185353 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-8bbjz" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:17:00.178201349+00:00 stderr F I0813 20:17:00.177988 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-nsk78" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:17:16.061619591+00:00 stderr F I0813 20:17:16.061426 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-swl5s" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:17:30.380858087+00:00 stderr F I0813 20:17:30.380641 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-tfv59" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:27:05.675630296+00:00 stderr F I0813 20:27:05.672255 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-jbzn9" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:27:05.827866179+00:00 stderr F I0813 20:27:05.827567 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-xldzg" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:28:43.342763392+00:00 stderr F I0813 20:28:43.342273 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-hvwvm" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:29:30.103067461+00:00 stderr F I0813 20:29:30.100957 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-zdwjn" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:30:02.007482088+00:00 stderr F I0813 20:30:02.005463 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:37:48.222392256+00:00 stderr F I0813 20:37:48.219625 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-marketplace-nkzlk" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:38:36.098357314+00:00 stderr F I0813 20:38:36.087705 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/certified-operators-4kmbv" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:41:21.475952852+00:00 stderr F I0813 20:41:21.453662 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/redhat-operators-k2tgr" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:42:26.033155404+00:00 stderr F I0813 20:42:26.032886 1 schedule_one.go:302] "Successfully bound pod to node" pod="openshift-marketplace/community-operators-sdddl" node="crc" evaluatedNodes=1 feasibleNodes=1
2025-08-13T20:42:36.442535128+00:00 stderr F I0813 20:42:36.441417 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.464219033+00:00 stderr F I0813 20:42:36.439661 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.464219033+00:00 stderr F I0813 20:42:36.459284 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.464431250+00:00 stderr F I0813 20:42:36.464388 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.464653396+00:00 stderr F I0813 20:42:36.464635 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.464760989+00:00 stderr F I0813 20:42:36.464743 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.469013992+00:00 stderr F I0813 20:42:36.468989 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.484011014+00:00 stderr F I0813 20:42:36.469107 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.484375045+00:00 stderr F I0813 20:42:36.484316 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.484535129+00:00 stderr F I0813 20:42:36.484516 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.484705284+00:00 stderr F I0813 20:42:36.484647 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.484908530+00:00 stderr F I0813 20:42:36.484885 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.485015883+00:00 stderr F I0813 20:42:36.484998 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.485104616+00:00 stderr F I0813 20:42:36.485089 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.485197898+00:00 stderr F I0813 20:42:36.485182 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.485337742+00:00 stderr F I0813 20:42:36.485316 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.508276894+00:00 stderr F I0813 20:42:36.430578 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:37.603936832+00:00 stderr F I0813 20:42:37.603863 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:42:37.604083756+00:00 stderr F I0813 20:42:37.604062 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:42:37.604331843+00:00 stderr F I0813 20:42:37.604306 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2025-08-13T20:42:37.604409386+00:00 stderr F I0813 20:42:37.604392 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"
2025-08-13T20:42:37.604490028+00:00 stderr F I0813 20:42:37.604472 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
2025-08-13T20:42:37.609753310+00:00 stderr F I0813 20:42:37.607518 1 scheduling_queue.go:870] "Scheduling queue is closed"
2025-08-13T20:42:37.609753310+00:00 stderr F E0813 20:42:37.608056 1 leaderelection.go:308] Failed to release lock: Put "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/kube-scheduler?timeout=53.5s": dial tcp 192.168.130.11:6443: connect: connection refused
2025-08-13T20:42:37.609753310+00:00 stderr F I0813 20:42:37.608306 1 server.go:248] "Requested to terminate, exiting"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/2.log:
2025-12-13T00:10:44.373999401+00:00 stdout P Waiting for port :10259 to be released.

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/0.log:
2025-08-13T20:08:08.757827394+00:00 stdout P Waiting for port :10259 to be released.

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port/1.log:
2025-12-13T00:06:36.200582816+00:00 stdout P Waiting for port :10259 to be released.
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/2.log:
2025-12-13T00:10:45.380610582+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 11443 \))" ]; do sleep 1; done'
2025-12-13T00:10:45.385213109+00:00 stderr F ++ ss -Htanop '(' sport = 11443 ')'
2025-12-13T00:10:45.389553733+00:00 stderr F + '[' -n '' ']'
2025-12-13T00:10:45.390296832+00:00 stderr F + exec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-scheduler --listen=0.0.0.0:11443 -v=2
2025-12-13T00:10:45.453480124+00:00 stderr F W1213 00:10:45.453344 1 cmd.go:245] Using insecure, self-signed certificates
2025-12-13T00:10:45.453660146+00:00 stderr F I1213 00:10:45.453636 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1765584645 cert, and key in /tmp/serving-cert-3561885241/serving-signer.crt, /tmp/serving-cert-3561885241/serving-signer.key
2025-12-13T00:10:45.732080954+00:00 stderr F I1213 00:10:45.732022 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-12-13T00:10:45.733219547+00:00 stderr F I1213 00:10:45.733175 1 observer_polling.go:159] Starting file observer
2025-12-13T00:10:55.736210061+00:00 stderr F W1213 00:10:55.734351 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/pods": net/http: TLS handshake timeout
2025-12-13T00:10:55.736210061+00:00 stderr F I1213 00:10:55.734503 1 builder.go:299] cert-recovery-controller version v0.0.0-master+$Format:%H$-$Format:%H$
2025-12-13T00:11:01.906881749+00:00 stderr F I1213 00:11:01.906682 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-12-13T00:11:01.908411930+00:00 stderr F I1213 00:11:01.908380 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/cert-recovery-controller-lock...
2025-12-13T00:16:28.701657279+00:00 stderr F I1213 00:16:28.701563 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler/cert-recovery-controller-lock
2025-12-13T00:16:28.702468711+00:00 stderr F I1213 00:16:28.702444 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-12-13T00:16:28.703469219+00:00 stderr F I1213 00:16:28.703401 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler", Name:"cert-recovery-controller-lock", UID:"e24b93c2-79d9-43db-937a-d4e24725daea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_7cf1a661-cc5e-4376-b8d0-e9ad5938f91e became leader
2025-12-13T00:16:28.706620605+00:00 stderr F I1213 00:16:28.706559 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:16:28.707724314+00:00 stderr F I1213 00:16:28.707685 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:16:28.709180975+00:00 stderr F I1213 00:16:28.709153 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:16:28.714423988+00:00 stderr F I1213 00:16:28.714381 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:16:28.732778339+00:00 stderr F I1213 00:16:28.732733 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:16:28.803014047+00:00 stderr F I1213 00:16:28.802967 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-12-13T00:16:28.803075168+00:00 stderr F I1213 00:16:28.803063 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-12-13T00:21:14.363799011+00:00 stderr F I1213 00:21:14.363730 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:21:14.776486927+00:00 stderr F I1213 00:21:14.776412 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:21:17.580365179+00:00 stderr F I1213 00:21:17.580319 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:21:23.073714117+00:00 stderr F I1213 00:21:23.073626 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-12-13T00:21:51.932911864+00:00 stderr F I1213 00:21:51.932833 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/1.log:
2025-12-13T00:06:37.794311601+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 11443 \))" ]; do sleep 1; done'
2025-12-13T00:06:37.798580901+00:00 stderr F ++ ss -Htanop '(' sport = 11443 ')'
2025-12-13T00:06:37.802425449+00:00 stderr F + '[' -n '' ']'
2025-12-13T00:06:37.803138450+00:00 stderr F + exec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-scheduler --listen=0.0.0.0:11443 -v=2
2025-12-13T00:06:37.940634268+00:00 stderr F W1213 00:06:37.940100 1 cmd.go:245] Using insecure, self-signed certificates
2025-12-13T00:06:37.940883555+00:00 stderr F I1213 00:06:37.940838 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1765584397 cert, and key in /tmp/serving-cert-1234325982/serving-signer.crt, /tmp/serving-cert-1234325982/serving-signer.key
2025-12-13T00:06:38.262430040+00:00 stderr F I1213 00:06:38.262339 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-12-13T00:06:38.263374997+00:00 stderr F I1213 00:06:38.263250 1 observer_polling.go:159] Starting file observer
2025-12-13T00:06:48.265677676+00:00 stderr F W1213 00:06:48.265572 1 builder.go:267] unable to get owner reference (falling back to namespace): Get "https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/pods": net/http: TLS handshake timeout
2025-12-13T00:06:48.265755219+00:00 stderr F I1213 00:06:48.265712 1 builder.go:299] cert-recovery-controller version v0.0.0-master+$Format:%H$-$Format:%H$
2025-12-13T00:06:54.279530556+00:00 stderr F I1213 00:06:54.279230 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-12-13T00:06:54.279770472+00:00 stderr F I1213 00:06:54.279748 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/cert-recovery-controller-lock...
2025-12-13T00:09:51.170700496+00:00 stderr F I1213 00:09:51.170551 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller.
2025-12-13T00:09:51.170700496+00:00 stderr F W1213 00:09:51.170628 1 leaderelection.go:85] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller/0.log:
2025-08-13T20:08:10.423486490+00:00 stderr F + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 11443 \))" ]; do sleep 1; done'
2025-08-13T20:08:10.429459371+00:00 stderr F ++ ss -Htanop '(' sport = 11443 ')'
2025-08-13T20:08:10.435645509+00:00 stderr F + '[' -n '' ']'
2025-08-13T20:08:10.436465452+00:00 stderr F + exec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=openshift-kube-scheduler --listen=0.0.0.0:11443 -v=2
2025-08-13T20:08:10.591871248+00:00 stderr F W0813 20:08:10.591324 1 cmd.go:245] Using insecure, self-signed certificates
2025-08-13T20:08:10.591871248+00:00 stderr F I0813 20:08:10.591729 1 crypto.go:601] Generating new CA for cert-recovery-controller-signer@1755115690 cert, and key in /tmp/serving-cert-2660222687/serving-signer.crt, /tmp/serving-cert-2660222687/serving-signer.key
2025-08-13T20:08:10.967190418+00:00 stderr F I0813 20:08:10.967073 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T20:08:10.968185366+00:00 stderr F I0813 20:08:10.968134 1 observer_polling.go:159] Starting file observer
2025-08-13T20:08:10.988939231+00:00 stderr F I0813 20:08:10.988860 1 builder.go:299] cert-recovery-controller version v0.0.0-master+$Format:%H$-$Format:%H$
2025-08-13T20:08:10.998709851+00:00 stderr F I0813 20:08:10.998553 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T20:08:11.000208574+00:00 stderr F I0813 20:08:10.998984 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-scheduler/cert-recovery-controller-lock...
2025-08-13T20:08:11.009863831+00:00 stderr F I0813 20:08:11.009712 1 leaderelection.go:260] successfully acquired lease openshift-kube-scheduler/cert-recovery-controller-lock
2025-08-13T20:08:11.010233502+00:00 stderr F I0813 20:08:11.009910 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-scheduler", Name:"cert-recovery-controller-lock", UID:"e24b93c2-79d9-43db-937a-d4e24725daea", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"32844", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crc_d5b9b15f-ff83-4db6-8ab7-bb13bf3420f4 became leader
2025-08-13T20:08:11.012003963+00:00 stderr F I0813 20:08:11.011877 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-08-13T20:08:11.027060494+00:00 stderr F I0813 20:08:11.025374 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:11.041092377+00:00 stderr F I0813 20:08:11.040728 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:11.041394735+00:00 stderr F I0813 20:08:11.040988 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:11.047141560+00:00 stderr F I0813 20:08:11.046865 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:11.065527837+00:00 stderr F I0813 20:08:11.065403 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:08:11.113013099+00:00 stderr F I0813 20:08:11.112833 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-08-13T20:08:11.113013099+00:00 stderr F I0813 20:08:11.112876 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-08-13T20:08:59.290396854+00:00 stderr F I0813 20:08:59.290258 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeschedulers from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:06.506841487+00:00 stderr F I0813 20:09:06.506684 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:07.035347489+00:00 stderr F I0813 20:09:07.035041 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:07.586281885+00:00 stderr F I0813 20:09:07.586001 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:09:11.968019000+00:00 stderr F I0813 20:09:11.967733 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229
2025-08-13T20:42:36.324418343+00:00 stderr F I0813 20:42:36.324115 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.336306026+00:00 stderr F I0813 20:42:36.334141 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.428947056+00:00 stderr F I0813 20:42:36.349394 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.429622136+00:00 stderr F I0813 20:42:36.429561 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.431371316+00:00 stderr F I0813 20:42:36.431343 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:37.290643040+00:00 stderr F I0813 20:42:37.286428 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller.
2025-08-13T20:42:37.293847222+00:00 stderr F E0813 20:42:37.292964 1 leaderelection.go:308] Failed to release lock: Put "https://localhost:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler/leases/cert-recovery-controller-lock?timeout=4m0s": dial tcp [::1]:6443: connect: connection refused
2025-08-13T20:42:37.293847222+00:00 stderr F W0813 20:42:37.293055 1 leaderelection.go:85] leader election lost
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lif0000644000175000017500000000134015117130647033060 0ustar zuulzuul2025-08-13T20:30:04.045714829+00:00 stderr F time="2025-08-13T20:30:04Z" level=info msg="Successfully created configMap openshift-operator-lifecycle-manager/olm-operator-heap-48qq2" 2025-08-13T20:30:04.473513386+00:00 stderr F time="2025-08-13T20:30:04Z" level=info msg="Successfully created configMap openshift-operator-lifecycle-manager/catalog-operator-heap-88mpx" 2025-08-13T20:30:04.492237284+00:00 stderr F time="2025-08-13T20:30:04Z" level=info msg="Successfully deleted configMap openshift-operator-lifecycle-manager/olm-operator-heap-hqmzq" 2025-08-13T20:30:04.500235874+00:00 stderr F time="2025-08-13T20:30:04Z" level=info msg="Successfully deleted configMap openshift-operator-lifecycle-manager/catalog-operator-heap-bk2n8" ././@LongLink0000644000000000000000000000025100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiser0000755000175000017500000000000015117130646033055 5ustar zuulzuul././@LongLink0000644000000000000000000000027100000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiser0000755000175000017500000000000015117130654033054 5ustar 
zuulzuul././@LongLink0000644000000000000000000000027600000000000011610 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiser0000644000175000017500000012553615117130646033073 0ustar zuulzuul2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534265 1 flags.go:64] FLAG: --accesstoken-inactivity-timeout="0s" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534371 1 flags.go:64] FLAG: --admission-control-config-file="" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534378 1 flags.go:64] FLAG: --advertise-address="" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534384 1 flags.go:64] FLAG: --api-audiences="[https://kubernetes.default.svc]" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534393 1 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534399 1 flags.go:64] FLAG: --audit-log-batch-max-size="1" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534403 1 flags.go:64] FLAG: --audit-log-batch-max-wait="0s" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534407 1 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534410 1 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534414 1 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534420 1 flags.go:64] FLAG: --audit-log-compress="false" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534428 1 flags.go:64] FLAG: --audit-log-format="json" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534432 1 flags.go:64] 
FLAG: --audit-log-maxage="0" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534435 1 flags.go:64] FLAG: --audit-log-maxbackup="10" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534439 1 flags.go:64] FLAG: --audit-log-maxsize="100" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534443 1 flags.go:64] FLAG: --audit-log-mode="blocking" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534446 1 flags.go:64] FLAG: --audit-log-path="/var/log/oauth-apiserver/audit.log" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534450 1 flags.go:64] FLAG: --audit-log-truncate-enabled="false" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534454 1 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534468 1 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534472 1 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534476 1 flags.go:64] FLAG: --audit-policy-file="/var/run/configmaps/audit/policy.yaml" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534480 1 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534483 1 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534486 1 flags.go:64] FLAG: --audit-webhook-batch-max-size="400" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534489 1 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534492 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534495 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true" 2025-12-13T00:13:17.534579452+00:00 
stderr F I1213 00:13:17.534498 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-qps="10" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534502 1 flags.go:64] FLAG: --audit-webhook-config-file="" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534505 1 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534508 1 flags.go:64] FLAG: --audit-webhook-mode="batch" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534511 1 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534514 1 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534517 1 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534520 1 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534522 1 flags.go:64] FLAG: --authentication-kubeconfig="" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534525 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534528 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534531 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534533 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534538 1 flags.go:64] FLAG: --authorization-kubeconfig="" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534541 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534544 1 flags.go:64] 
FLAG: --authorization-webhook-cache-unauthorized-ttl="10s" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534547 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534550 1 flags.go:64] FLAG: --cert-dir="apiserver.local.config/certificates" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534554 1 flags.go:64] FLAG: --client-ca-file="" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534557 1 flags.go:64] FLAG: --contention-profiling="false" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534560 1 flags.go:64] FLAG: --cors-allowed-origins="[//127\\.0\\.0\\.1(:|$),//localhost(:|$)]" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534565 1 flags.go:64] FLAG: --debug-socket-path="" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534568 1 flags.go:64] FLAG: --default-watch-cache-size="100" 2025-12-13T00:13:17.534579452+00:00 stderr F I1213 00:13:17.534571 1 flags.go:64] FLAG: --delete-collection-workers="1" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534573 1 flags.go:64] FLAG: --disable-admission-plugins="[]" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534578 1 flags.go:64] FLAG: --egress-selector-config-file="" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534581 1 flags.go:64] FLAG: --enable-admission-plugins="[]" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534585 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534587 1 flags.go:64] FLAG: --enable-priority-and-fairness="false" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534590 1 flags.go:64] FLAG: --encryption-provider-config="" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534593 1 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534600 1 flags.go:64] 
FLAG: --etcd-cafile="/var/run/configmaps/etcd-serving-ca/ca-bundle.crt" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534603 1 flags.go:64] FLAG: --etcd-certfile="/var/run/secrets/etcd-client/tls.crt" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534606 1 flags.go:64] FLAG: --etcd-compaction-interval="5m0s" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534610 1 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534613 1 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534616 1 flags.go:64] FLAG: --etcd-healthcheck-timeout="10s" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534619 1 flags.go:64] FLAG: --etcd-keyfile="/var/run/secrets/etcd-client/tls.key" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534622 1 flags.go:64] FLAG: --etcd-prefix="openshift.io" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534625 1 flags.go:64] FLAG: --etcd-readycheck-timeout="2s" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534628 1 flags.go:64] FLAG: --etcd-servers="[https://192.168.126.11:2379]" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534632 1 flags.go:64] FLAG: --etcd-servers-overrides="[]" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534638 1 flags.go:64] FLAG: --external-hostname="" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534640 1 flags.go:64] FLAG: --feature-gates="" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534651 1 flags.go:64] FLAG: --goaway-chance="0" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534655 1 flags.go:64] FLAG: --help="false" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534660 1 flags.go:64] FLAG: --http2-max-streams-per-connection="1000" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534663 1 flags.go:64] FLAG: 
--kubeconfig="" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534666 1 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534668 1 flags.go:64] FLAG: --livez-grace-period="0s" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534671 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534675 1 flags.go:64] FLAG: --max-mutating-requests-inflight="200" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534678 1 flags.go:64] FLAG: --max-requests-inflight="400" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534680 1 flags.go:64] FLAG: --min-request-timeout="1800" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534683 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534686 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534689 1 flags.go:64] FLAG: --profiling="true" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534691 1 flags.go:64] FLAG: --request-timeout="1m0s" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534694 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534700 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534702 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534707 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-12-13T00:13:17.534716077+00:00 stderr F I1213 00:13:17.534711 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-12-13T00:13:17.534737918+00:00 stderr F I1213 00:13:17.534716 1 flags.go:64] FLAG: --secure-port="8443" 
2025-12-13T00:13:17.534737918+00:00 stderr F I1213 00:13:17.534720 1 flags.go:64] FLAG: --shutdown-delay-duration="15s" 2025-12-13T00:13:17.534737918+00:00 stderr F I1213 00:13:17.534723 1 flags.go:64] FLAG: --shutdown-send-retry-after="true" 2025-12-13T00:13:17.534737918+00:00 stderr F I1213 00:13:17.534726 1 flags.go:64] FLAG: --shutdown-watch-termination-grace-period="0s" 2025-12-13T00:13:17.534737918+00:00 stderr F I1213 00:13:17.534728 1 flags.go:64] FLAG: --storage-backend="" 2025-12-13T00:13:17.534737918+00:00 stderr F I1213 00:13:17.534731 1 flags.go:64] FLAG: --storage-media-type="application/json" 2025-12-13T00:13:17.534737918+00:00 stderr F I1213 00:13:17.534734 1 flags.go:64] FLAG: --strict-transport-security-directives="[]" 2025-12-13T00:13:17.534750158+00:00 stderr F I1213 00:13:17.534738 1 flags.go:64] FLAG: --tls-cert-file="/var/run/secrets/serving-cert/tls.crt" 2025-12-13T00:13:17.534750158+00:00 stderr F I1213 00:13:17.534741 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-12-13T00:13:17.534757708+00:00 stderr F I1213 00:13:17.534749 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-12-13T00:13:17.534757708+00:00 stderr F I1213 00:13:17.534753 1 flags.go:64] FLAG: --tls-private-key-file="/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:17.534765009+00:00 stderr F I1213 00:13:17.534756 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-12-13T00:13:17.534765009+00:00 stderr F I1213 00:13:17.534761 1 flags.go:64] FLAG: --tracing-config-file="" 2025-12-13T00:13:17.534772169+00:00 stderr F I1213 00:13:17.534764 1 flags.go:64] FLAG: --v="2" 2025-12-13T00:13:17.534772169+00:00 stderr F I1213 00:13:17.534767 1 flags.go:64] FLAG: --vmodule="" 2025-12-13T00:13:17.534779359+00:00 stderr F I1213 
00:13:17.534770 1 flags.go:64] FLAG: --watch-cache="true" 2025-12-13T00:13:17.534779359+00:00 stderr F I1213 00:13:17.534774 1 flags.go:64] FLAG: --watch-cache-sizes="[]" 2025-12-13T00:13:17.586827509+00:00 stderr F I1213 00:13:17.586358 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:17.846870287+00:00 stderr F I1213 00:13:17.846562 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-12-13T00:13:17.849477824+00:00 stderr F I1213 00:13:17.849428 1 audit.go:340] Using audit backend: ignoreErrors 2025-12-13T00:13:17.854849685+00:00 stderr F I1213 00:13:17.854814 1 plugins.go:157] Loaded 2 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook. 2025-12-13T00:13:17.854849685+00:00 stderr F I1213 00:13:17.854837 1 plugins.go:160] Loaded 2 validating admission controller(s) successfully in the following order: ValidatingAdmissionPolicy,ValidatingAdmissionWebhook. 
2025-12-13T00:13:17.855863759+00:00 stderr F I1213 00:13:17.855834 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-12-13T00:13:17.855863759+00:00 stderr F I1213 00:13:17.855859 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-12-13T00:13:17.855995333+00:00 stderr F I1213 00:13:17.855886 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-12-13T00:13:17.855995333+00:00 stderr F I1213 00:13:17.855898 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-12-13T00:13:17.881692336+00:00 stderr F I1213 00:13:17.881646 1 store.go:1579] "Monitoring resource count at path" resource="oauthclients.oauth.openshift.io" path="//oauth/clients" 2025-12-13T00:13:17.895870853+00:00 stderr F I1213 00:13:17.895813 1 store.go:1579] "Monitoring resource count at path" resource="oauthauthorizetokens.oauth.openshift.io" path="//oauth/authorizetokens" 2025-12-13T00:13:17.903139967+00:00 stderr F I1213 00:13:17.903067 1 cacher.go:451] cacher (oauthauthorizetokens.oauth.openshift.io): initialized 2025-12-13T00:13:17.905180536+00:00 stderr F I1213 00:13:17.905152 1 reflector.go:351] Caches populated for *oauth.OAuthAuthorizeToken from storage/cacher.go:/oauth/authorizetokens 2025-12-13T00:13:17.941385602+00:00 stderr F I1213 00:13:17.940316 1 cacher.go:451] cacher (oauthclients.oauth.openshift.io): initialized 2025-12-13T00:13:17.941385602+00:00 stderr F I1213 00:13:17.940347 1 reflector.go:351] Caches populated for *oauth.OAuthClient from storage/cacher.go:/oauth/clients 2025-12-13T00:13:17.944161515+00:00 stderr F I1213 00:13:17.943982 1 store.go:1579] "Monitoring resource count at path" resource="oauthaccesstokens.oauth.openshift.io" path="//oauth/accesstokens" 2025-12-13T00:13:17.946798264+00:00 stderr F I1213 00:13:17.946317 1 cacher.go:451] cacher (oauthaccesstokens.oauth.openshift.io): initialized 2025-12-13T00:13:17.946798264+00:00 stderr F I1213 00:13:17.946344 1 reflector.go:351] Caches populated for 
*oauth.OAuthAccessToken from storage/cacher.go:/oauth/accesstokens 2025-12-13T00:13:17.957539745+00:00 stderr F I1213 00:13:17.957493 1 store.go:1579] "Monitoring resource count at path" resource="oauthclientauthorizations.oauth.openshift.io" path="//oauth/clientauthorizations" 2025-12-13T00:13:17.960547986+00:00 stderr F I1213 00:13:17.960488 1 handler.go:275] Adding GroupVersion oauth.openshift.io v1 to ResourceManager 2025-12-13T00:13:17.960954529+00:00 stderr F I1213 00:13:17.960921 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-12-13T00:13:17.960954529+00:00 stderr F I1213 00:13:17.960948 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-12-13T00:13:17.965337627+00:00 stderr F I1213 00:13:17.965306 1 cacher.go:451] cacher (oauthclientauthorizations.oauth.openshift.io): initialized 2025-12-13T00:13:17.965337627+00:00 stderr F I1213 00:13:17.965326 1 reflector.go:351] Caches populated for *oauth.OAuthClientAuthorization from storage/cacher.go:/oauth/clientauthorizations 2025-12-13T00:13:17.983862180+00:00 stderr F I1213 00:13:17.983690 1 store.go:1579] "Monitoring resource count at path" resource="users.user.openshift.io" path="//users" 2025-12-13T00:13:17.991165405+00:00 stderr F I1213 00:13:17.989543 1 cacher.go:451] cacher (users.user.openshift.io): initialized 2025-12-13T00:13:17.991165405+00:00 stderr F I1213 00:13:17.989568 1 reflector.go:351] Caches populated for *user.User from storage/cacher.go:/users 2025-12-13T00:13:17.998709429+00:00 stderr F I1213 00:13:17.997778 1 store.go:1579] "Monitoring resource count at path" resource="identities.user.openshift.io" path="//useridentities" 2025-12-13T00:13:18.004603697+00:00 stderr F I1213 00:13:18.004558 1 cacher.go:451] cacher (identities.user.openshift.io): initialized 2025-12-13T00:13:18.004603697+00:00 stderr F I1213 00:13:18.004577 1 reflector.go:351] Caches populated for *user.Identity from storage/cacher.go:/useridentities 2025-12-13T00:13:18.014831931+00:00 stderr F I1213 
00:13:18.014568 1 store.go:1579] "Monitoring resource count at path" resource="groups.user.openshift.io" path="//groups" 2025-12-13T00:13:18.016338701+00:00 stderr F I1213 00:13:18.016313 1 handler.go:275] Adding GroupVersion user.openshift.io v1 to ResourceManager 2025-12-13T00:13:18.016403243+00:00 stderr F I1213 00:13:18.016383 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-12-13T00:13:18.016403243+00:00 stderr F I1213 00:13:18.016397 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-12-13T00:13:18.019462726+00:00 stderr F I1213 00:13:18.018766 1 cacher.go:451] cacher (groups.user.openshift.io): initialized 2025-12-13T00:13:18.019462726+00:00 stderr F I1213 00:13:18.018792 1 reflector.go:351] Caches populated for *user.Group from storage/cacher.go:/groups 2025-12-13T00:13:18.197043073+00:00 stderr F I1213 00:13:18.196996 1 genericapiserver.go:560] "[graceful-termination] using HTTP Server shutdown timeout" shutdownTimeout="2s" 2025-12-13T00:13:18.198710579+00:00 stderr F I1213 00:13:18.198680 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-12-13T00:13:18.210611839+00:00 stderr F I1213 00:13:18.210557 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-12-13 00:13:18.210521155 +0000 UTC))" 2025-12-13T00:13:18.210774845+00:00 stderr F I1213 00:13:18.210750 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:18.210837817+00:00 stderr F I1213 00:13:18.210824 1 shared_informer.go:311] Waiting for caches to sync for 
RequestHeaderAuthRequestController 2025-12-13T00:13:18.210968881+00:00 stderr F I1213 00:13:18.210907 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:17 +0000 UTC to 2026-12-12 23:13:17 +0000 UTC (now=2025-12-13 00:13:18.210867278 +0000 UTC))" 2025-12-13T00:13:18.210984182+00:00 stderr F I1213 00:13:18.210967 1 secure_serving.go:213] Serving securely on [::]:8443 2025-12-13T00:13:18.211094405+00:00 stderr F I1213 00:13:18.211072 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2025-12-13T00:13:18.211126547+00:00 stderr F I1213 00:13:18.211102 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:18.211150577+00:00 stderr F I1213 00:13:18.211060 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:18.211186699+00:00 stderr F I1213 00:13:18.211166 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:18.211257661+00:00 stderr F I1213 00:13:18.211228 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:18.211303292+00:00 stderr F I1213 00:13:18.211223 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:18.211327913+00:00 stderr F I1213 00:13:18.211318 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:18.213480886+00:00 stderr F I1213 00:13:18.213450 1 
reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:18.214675595+00:00 stderr F I1213 00:13:18.214534 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:18.215496633+00:00 stderr F I1213 00:13:18.215475 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:18.215719850+00:00 stderr F I1213 00:13:18.215668 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:18.219296331+00:00 stderr F I1213 00:13:18.219263 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:18.232598587+00:00 stderr F I1213 00:13:18.232526 1 reflector.go:351] Caches populated for *v1.OAuthClient from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:18.232805474+00:00 stderr F I1213 00:13:18.232776 1 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:18.235482234+00:00 stderr F I1213 00:13:18.235452 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:18.311960915+00:00 stderr F I1213 00:13:18.311893 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:18.312031287+00:00 stderr F I1213 00:13:18.311996 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:13:18.312077209+00:00 stderr F I1213 00:13:18.311964 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:18.312326107+00:00 
stderr F I1213 00:13:18.312303 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.312269235 +0000 UTC))" 2025-12-13T00:13:18.312700819+00:00 stderr F I1213 00:13:18.312683 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-12-13 00:13:18.312662898 +0000 UTC))" 2025-12-13T00:13:18.313117373+00:00 stderr F I1213 00:13:18.313097 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:17 +0000 UTC to 2026-12-12 23:13:17 +0000 UTC (now=2025-12-13 00:13:18.313068932 +0000 UTC))" 2025-12-13T00:13:18.313460655+00:00 stderr F I1213 00:13:18.313419 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:13:18.313381452 +0000 UTC))" 2025-12-13T00:13:18.313460655+00:00 stderr F I1213 00:13:18.313456 1 tlsconfig.go:178] "Loaded client CA" index=1 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:13:18.313444004 +0000 UTC))" 2025-12-13T00:13:18.313495986+00:00 stderr F I1213 00:13:18.313473 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:18.313460835 +0000 UTC))" 2025-12-13T00:13:18.313506476+00:00 stderr F I1213 00:13:18.313492 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:18.313481805 +0000 UTC))" 2025-12-13T00:13:18.313523277+00:00 stderr F I1213 00:13:18.313508 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.313498166 +0000 UTC))" 2025-12-13T00:13:18.313532107+00:00 stderr F I1213 00:13:18.313523 1 tlsconfig.go:178] "Loaded 
client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.313512206 +0000 UTC))" 2025-12-13T00:13:18.313542867+00:00 stderr F I1213 00:13:18.313537 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.313527417 +0000 UTC))" 2025-12-13T00:13:18.313577299+00:00 stderr F I1213 00:13:18.313556 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.313542577 +0000 UTC))" 2025-12-13T00:13:18.313577299+00:00 stderr F I1213 00:13:18.313574 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:13:18.313564038 +0000 UTC))" 2025-12-13T00:13:18.313607600+00:00 stderr F I1213 00:13:18.313588 1 
tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:13:18.313579689 +0000 UTC))" 2025-12-13T00:13:18.313616840+00:00 stderr F I1213 00:13:18.313606 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:18.313596229 +0000 UTC))" 2025-12-13T00:13:18.313911130+00:00 stderr F I1213 00:13:18.313885 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-12-13 00:13:18.313872888 +0000 UTC))" 2025-12-13T00:13:18.314266342+00:00 stderr F I1213 00:13:18.314231 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:17 +0000 UTC to 2026-12-12 23:13:17 +0000 UTC (now=2025-12-13 00:13:18.314212511 +0000 UTC))" 2025-12-13T00:19:37.562038880+00:00 stderr F I1213 00:19:37.561991 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.561956727 +0000 UTC))" 2025-12-13T00:19:37.562141442+00:00 stderr F I1213 00:19:37.562126 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.562106981 +0000 UTC))" 2025-12-13T00:19:37.562195564+00:00 stderr F I1213 00:19:37.562182 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.562161273 +0000 UTC))" 2025-12-13T00:19:37.562235605+00:00 stderr F I1213 00:19:37.562225 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.562213124 +0000 UTC))" 2025-12-13T00:19:37.562270236+00:00 stderr F I1213 00:19:37.562261 1 tlsconfig.go:178] "Loaded client CA" index=4 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.562249185 +0000 UTC))" 2025-12-13T00:19:37.562304097+00:00 stderr F I1213 00:19:37.562295 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.562283736 +0000 UTC))" 2025-12-13T00:19:37.562351238+00:00 stderr F I1213 00:19:37.562337 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.562320337 +0000 UTC))" 2025-12-13T00:19:37.562424410+00:00 stderr F I1213 00:19:37.562411 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.562384879 +0000 UTC))" 2025-12-13T00:19:37.562493922+00:00 stderr F I1213 00:19:37.562479 1 tlsconfig.go:178] "Loaded 
client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.562441561 +0000 UTC))" 2025-12-13T00:19:37.562545543+00:00 stderr F I1213 00:19:37.562532 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.562515913 +0000 UTC))" 2025-12-13T00:19:37.562595545+00:00 stderr F I1213 00:19:37.562584 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.562565044 +0000 UTC))" 2025-12-13T00:19:37.562643736+00:00 stderr F I1213 00:19:37.562630 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.562611715 +0000 UTC))" 2025-12-13T00:19:37.563036087+00:00 stderr F I1213 00:19:37.563019 1 tlsconfig.go:200] 
"Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-12-13 00:19:37.562998726 +0000 UTC))" 2025-12-13T00:19:37.563367976+00:00 stderr F I1213 00:19:37.563352 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:17 +0000 UTC to 2026-12-12 23:13:17 +0000 UTC (now=2025-12-13 00:19:37.563337835 +0000 UTC))" 2025-12-13T00:20:53.503475892+00:00 stderr F E1213 00:20:53.503415 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.503517853+00:00 stderr F E1213 00:20:53.503502 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.521360505+00:00 stderr F E1213 00:20:53.518424 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:53.521360505+00:00 stderr F E1213 00:20:53.518462 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.461913726+00:00 stderr F E1213 00:20:54.461854 1 webhook.go:253] Failed to make webhook authorizer 
request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.461957537+00:00 stderr F E1213 00:20:54.461918 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.466508709+00:00 stderr F E1213 00:20:54.466457 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.466530800+00:00 stderr F E1213 00:20:54.466511 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.476497069+00:00 stderr F E1213 00:20:54.476430 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.476521420+00:00 stderr F E1213 00:20:54.476494 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.482529382+00:00 stderr F E1213 00:20:54.482472 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:20:54.482529382+00:00 stderr F E1213 00:20:54.482507 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver/0.log
2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.198944 1 flags.go:64] FLAG: --accesstoken-inactivity-timeout="0s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200040 1 flags.go:64] FLAG: --admission-control-config-file="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200051 1 flags.go:64] FLAG: --advertise-address="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200058 1 flags.go:64] FLAG: --api-audiences="[https://kubernetes.default.svc]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200067 1 flags.go:64] FLAG: --audit-log-batch-buffer-size="10000" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200074 1 flags.go:64] FLAG: --audit-log-batch-max-size="1" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200078 1 flags.go:64] FLAG: --audit-log-batch-max-wait="0s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200089 1 flags.go:64] FLAG: --audit-log-batch-throttle-burst="0" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200093 1 flags.go:64] FLAG: --audit-log-batch-throttle-enable="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200099 1 flags.go:64] FLAG: --audit-log-batch-throttle-qps="0" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200106 1 flags.go:64] FLAG: --audit-log-compress="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200110 1 flags.go:64] FLAG: --audit-log-format="json" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200114 1 flags.go:64] FLAG: --audit-log-maxage="0" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813
19:59:27.200118 1 flags.go:64] FLAG: --audit-log-maxbackup="10" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200122 1 flags.go:64] FLAG: --audit-log-maxsize="100" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200126 1 flags.go:64] FLAG: --audit-log-mode="blocking" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200130 1 flags.go:64] FLAG: --audit-log-path="/var/log/oauth-apiserver/audit.log" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200134 1 flags.go:64] FLAG: --audit-log-truncate-enabled="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200138 1 flags.go:64] FLAG: --audit-log-truncate-max-batch-size="10485760" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200143 1 flags.go:64] FLAG: --audit-log-truncate-max-event-size="102400" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200148 1 flags.go:64] FLAG: --audit-log-version="audit.k8s.io/v1" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200152 1 flags.go:64] FLAG: --audit-policy-file="/var/run/configmaps/audit/policy.yaml" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200156 1 flags.go:64] FLAG: --audit-webhook-batch-buffer-size="10000" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200160 1 flags.go:64] FLAG: --audit-webhook-batch-initial-backoff="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200168 1 flags.go:64] FLAG: --audit-webhook-batch-max-size="400" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200174 1 flags.go:64] FLAG: --audit-webhook-batch-max-wait="30s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200178 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-burst="15" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200186 1 flags.go:64] FLAG: --audit-webhook-batch-throttle-enable="true" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200191 1 flags.go:64] FLAG: 
--audit-webhook-batch-throttle-qps="10" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200196 1 flags.go:64] FLAG: --audit-webhook-config-file="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200199 1 flags.go:64] FLAG: --audit-webhook-initial-backoff="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200203 1 flags.go:64] FLAG: --audit-webhook-mode="batch" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200207 1 flags.go:64] FLAG: --audit-webhook-truncate-enabled="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200211 1 flags.go:64] FLAG: --audit-webhook-truncate-max-batch-size="10485760" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200215 1 flags.go:64] FLAG: --audit-webhook-truncate-max-event-size="102400" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200219 1 flags.go:64] FLAG: --audit-webhook-version="audit.k8s.io/v1" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200223 1 flags.go:64] FLAG: --authentication-kubeconfig="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200227 1 flags.go:64] FLAG: --authentication-skip-lookup="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200231 1 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200234 1 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200238 1 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200246 1 flags.go:64] FLAG: --authorization-kubeconfig="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200250 1 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200254 1 flags.go:64] FLAG: 
--authorization-webhook-cache-unauthorized-ttl="10s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200257 1 flags.go:64] FLAG: --bind-address="0.0.0.0" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200263 1 flags.go:64] FLAG: --cert-dir="apiserver.local.config/certificates" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200268 1 flags.go:64] FLAG: --client-ca-file="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200275 1 flags.go:64] FLAG: --contention-profiling="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200281 1 flags.go:64] FLAG: --cors-allowed-origins="[//127\\.0\\.0\\.1(:|$),//localhost(:|$)]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200288 1 flags.go:64] FLAG: --debug-socket-path="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200293 1 flags.go:64] FLAG: --default-watch-cache-size="100" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200298 1 flags.go:64] FLAG: --delete-collection-workers="1" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200303 1 flags.go:64] FLAG: --disable-admission-plugins="[]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200310 1 flags.go:64] FLAG: --egress-selector-config-file="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200315 1 flags.go:64] FLAG: --enable-admission-plugins="[]" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200321 1 flags.go:64] FLAG: --enable-garbage-collector="true" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200326 1 flags.go:64] FLAG: --enable-priority-and-fairness="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200332 1 flags.go:64] FLAG: --encryption-provider-config="" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200339 1 flags.go:64] FLAG: --encryption-provider-config-automatic-reload="false" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200345 1 flags.go:64] FLAG: 
--etcd-cafile="/var/run/configmaps/etcd-serving-ca/ca-bundle.crt" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200350 1 flags.go:64] FLAG: --etcd-certfile="/var/run/secrets/etcd-client/tls.crt" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200355 1 flags.go:64] FLAG: --etcd-compaction-interval="5m0s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200367 1 flags.go:64] FLAG: --etcd-count-metric-poll-period="1m0s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200464 1 flags.go:64] FLAG: --etcd-db-metric-poll-interval="30s" 2025-08-13T19:59:27.201426766+00:00 stderr F I0813 19:59:27.200542 1 flags.go:64] FLAG: --etcd-healthcheck-timeout="10s" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.200547 1 flags.go:64] FLAG: --etcd-keyfile="/var/run/secrets/etcd-client/tls.key" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215297 1 flags.go:64] FLAG: --etcd-prefix="openshift.io" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215322 1 flags.go:64] FLAG: --etcd-readycheck-timeout="2s" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215329 1 flags.go:64] FLAG: --etcd-servers="[https://192.168.126.11:2379]" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215350 1 flags.go:64] FLAG: --etcd-servers-overrides="[]" 2025-08-13T19:59:27.216054413+00:00 stderr F I0813 19:59:27.215356 1 flags.go:64] FLAG: --external-hostname="" 2025-08-13T19:59:27.219097400+00:00 stderr F I0813 19:59:27.215361 1 flags.go:64] FLAG: --feature-gates="" 2025-08-13T19:59:27.219097400+00:00 stderr F I0813 19:59:27.219074 1 flags.go:64] FLAG: --goaway-chance="0" 2025-08-13T19:59:27.219132261+00:00 stderr F I0813 19:59:27.219097 1 flags.go:64] FLAG: --help="false" 2025-08-13T19:59:27.219132261+00:00 stderr F I0813 19:59:27.219104 1 flags.go:64] FLAG: --http2-max-streams-per-connection="1000" 2025-08-13T19:59:27.219132261+00:00 stderr F I0813 19:59:27.219110 1 flags.go:64] FLAG: --kubeconfig="" 
2025-08-13T19:59:27.219132261+00:00 stderr F I0813 19:59:27.219121 1 flags.go:64] FLAG: --lease-reuse-duration-seconds="60" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219223 1 flags.go:64] FLAG: --livez-grace-period="0s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219259 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219268 1 flags.go:64] FLAG: --max-mutating-requests-inflight="200" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219272 1 flags.go:64] FLAG: --max-requests-inflight="400" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219277 1 flags.go:64] FLAG: --min-request-timeout="1800" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219281 1 flags.go:64] FLAG: --permit-address-sharing="false" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219291 1 flags.go:64] FLAG: --permit-port-sharing="false" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219295 1 flags.go:64] FLAG: --profiling="true" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219300 1 flags.go:64] FLAG: --request-timeout="1m0s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219313 1 flags.go:64] FLAG: --requestheader-allowed-names="[]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219324 1 flags.go:64] FLAG: --requestheader-client-ca-file="" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219329 1 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219335 1 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219341 1 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219346 1 flags.go:64] FLAG: --secure-port="8443" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 
19:59:27.219350 1 flags.go:64] FLAG: --shutdown-delay-duration="15s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219354 1 flags.go:64] FLAG: --shutdown-send-retry-after="true" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219358 1 flags.go:64] FLAG: --shutdown-watch-termination-grace-period="0s" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219364 1 flags.go:64] FLAG: --storage-backend="" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219370 1 flags.go:64] FLAG: --storage-media-type="application/json" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219375 1 flags.go:64] FLAG: --strict-transport-security-directives="[]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219380 1 flags.go:64] FLAG: --tls-cert-file="/var/run/secrets/serving-cert/tls.crt" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219385 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219395 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219402 1 flags.go:64] FLAG: --tls-private-key-file="/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219431 1 flags.go:64] FLAG: --tls-sni-cert-key="[]" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219608 1 flags.go:64] FLAG: --tracing-config-file="" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219614 1 flags.go:64] FLAG: --v="2" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219619 1 flags.go:64] FLAG: --vmodule="" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219629 1 flags.go:64] FLAG: 
--watch-cache="true" 2025-08-13T19:59:27.221894980+00:00 stderr F I0813 19:59:27.219637 1 flags.go:64] FLAG: --watch-cache-sizes="[]" 2025-08-13T19:59:28.503555713+00:00 stderr F I0813 19:59:28.503189 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:31.354631633+00:00 stderr F I0813 19:59:31.354516 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:31.413209243+00:00 stderr F I0813 19:59:31.412390 1 audit.go:340] Using audit backend: ignoreErrors 2025-08-13T19:59:31.525990398+00:00 stderr F I0813 19:59:31.521558 1 plugins.go:157] Loaded 2 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook. 2025-08-13T19:59:31.525990398+00:00 stderr F I0813 19:59:31.521671 1 plugins.go:160] Loaded 2 validating admission controller(s) successfully in the following order: ValidatingAdmissionPolicy,ValidatingAdmissionWebhook. 
2025-08-13T19:59:31.533613475+00:00 stderr F I0813 19:59:31.533000 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:31.533613475+00:00 stderr F I0813 19:59:31.533154 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:31.533613475+00:00 stderr F I0813 19:59:31.533585 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T19:59:31.533613475+00:00 stderr F I0813 19:59:31.533605 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T19:59:31.715911232+00:00 stderr F I0813 19:59:31.714652 1 store.go:1579] "Monitoring resource count at path" resource="oauthclients.oauth.openshift.io" path="//oauth/clients" 2025-08-13T19:59:31.790381324+00:00 stderr F I0813 19:59:31.789477 1 store.go:1579] "Monitoring resource count at path" resource="oauthauthorizetokens.oauth.openshift.io" path="//oauth/authorizetokens" 2025-08-13T19:59:31.811382352+00:00 stderr F I0813 19:59:31.811291 1 store.go:1579] "Monitoring resource count at path" resource="oauthaccesstokens.oauth.openshift.io" path="//oauth/accesstokens" 2025-08-13T19:59:31.844340722+00:00 stderr F I0813 19:59:31.844266 1 cacher.go:451] cacher (oauthaccesstokens.oauth.openshift.io): initialized 2025-08-13T19:59:31.845079293+00:00 stderr F I0813 19:59:31.845048 1 reflector.go:351] Caches populated for *oauth.OAuthAccessToken from storage/cacher.go:/oauth/accesstokens 2025-08-13T19:59:31.851908797+00:00 stderr F I0813 19:59:31.851750 1 cacher.go:451] cacher (oauthclients.oauth.openshift.io): initialized 2025-08-13T19:59:31.857767484+00:00 stderr F I0813 19:59:31.853637 1 cacher.go:451] cacher (oauthauthorizetokens.oauth.openshift.io): initialized 2025-08-13T19:59:31.858423753+00:00 stderr F I0813 19:59:31.858378 1 reflector.go:351] Caches populated for *oauth.OAuthClient from storage/cacher.go:/oauth/clients 2025-08-13T19:59:31.960014699+00:00 stderr F I0813 19:59:31.895619 1 reflector.go:351] Caches populated for 
*oauth.OAuthAuthorizeToken from storage/cacher.go:/oauth/authorizetokens 2025-08-13T19:59:31.960014699+00:00 stderr F I0813 19:59:31.940494 1 store.go:1579] "Monitoring resource count at path" resource="oauthclientauthorizations.oauth.openshift.io" path="//oauth/clientauthorizations" 2025-08-13T19:59:31.960014699+00:00 stderr F I0813 19:59:31.944551 1 cacher.go:451] cacher (oauthclientauthorizations.oauth.openshift.io): initialized 2025-08-13T19:59:31.960014699+00:00 stderr F I0813 19:59:31.944618 1 reflector.go:351] Caches populated for *oauth.OAuthClientAuthorization from storage/cacher.go:/oauth/clientauthorizations 2025-08-13T19:59:32.129051887+00:00 stderr F I0813 19:59:32.128892 1 handler.go:275] Adding GroupVersion oauth.openshift.io v1 to ResourceManager 2025-08-13T19:59:32.146996099+00:00 stderr F I0813 19:59:32.141533 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:32.146996099+00:00 stderr F I0813 19:59:32.141612 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:32.268331218+00:00 stderr F I0813 19:59:32.268131 1 store.go:1579] "Monitoring resource count at path" resource="users.user.openshift.io" path="//users" 2025-08-13T19:59:32.280715111+00:00 stderr F I0813 19:59:32.280485 1 cacher.go:451] cacher (users.user.openshift.io): initialized 2025-08-13T19:59:32.280715111+00:00 stderr F I0813 19:59:32.280534 1 reflector.go:351] Caches populated for *user.User from storage/cacher.go:/users 2025-08-13T19:59:32.342159052+00:00 stderr F I0813 19:59:32.340089 1 store.go:1579] "Monitoring resource count at path" resource="identities.user.openshift.io" path="//useridentities" 2025-08-13T19:59:32.365422735+00:00 stderr F I0813 19:59:32.364535 1 store.go:1579] "Monitoring resource count at path" resource="groups.user.openshift.io" path="//groups" 2025-08-13T19:59:32.365656342+00:00 stderr F I0813 19:59:32.365546 1 cacher.go:451] cacher (identities.user.openshift.io): initialized 2025-08-13T19:59:32.366472145+00:00 
stderr F I0813 19:59:32.366259 1 reflector.go:351] Caches populated for *user.Identity from storage/cacher.go:/useridentities 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.367976 1 cacher.go:451] cacher (groups.user.openshift.io): initialized 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.368038 1 reflector.go:351] Caches populated for *user.Group from storage/cacher.go:/groups 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.369329 1 handler.go:275] Adding GroupVersion user.openshift.io v1 to ResourceManager 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.369391 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T19:59:32.376074109+00:00 stderr F I0813 19:59:32.369402 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T19:59:35.053460550+00:00 stderr F I0813 19:59:35.053277 1 genericapiserver.go:560] "[graceful-termination] using HTTP Server shutdown timeout" shutdownTimeout="2s" 2025-08-13T19:59:35.054209721+00:00 stderr F I0813 19:59:35.054173 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T19:59:35.064731441+00:00 stderr F I0813 19:59:35.064541 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:20 +0000 UTC to 2026-06-26 12:47:21 +0000 UTC (now=2025-08-13 19:59:35.064180125 +0000 UTC))" 2025-08-13T19:59:35.071705980+00:00 stderr F I0813 19:59:35.071409 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 19:59:35.069769704 +0000 UTC))" 2025-08-13T19:59:35.074109038+00:00 stderr F I0813 19:59:35.073763 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T19:59:35.074109038+00:00 stderr F I0813 19:59:35.073924 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T19:59:35.074109038+00:00 stderr F I0813 19:59:35.074058 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T19:59:35.074169020+00:00 stderr F I0813 19:59:35.074129 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T19:59:35.074615162+00:00 stderr F I0813 19:59:35.074521 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T19:59:35.075933650+00:00 stderr F I0813 19:59:35.075587 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T19:59:35.075933650+00:00 stderr F I0813 19:59:35.075671 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:35.075933650+00:00 stderr F I0813 19:59:35.075697 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.075933650+00:00 stderr F I0813 19:59:35.075736 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:35.084971758+00:00 stderr F I0813 19:59:35.084607 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T19:59:35.090388342+00:00 stderr F I0813 
19:59:35.090352 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.104283678+00:00 stderr F I0813 19:59:35.095711 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.261017356+00:00 stderr F I0813 19:59:35.258405 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T19:59:35.261017356+00:00 stderr F E0813 19:59:35.259591 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.261017356+00:00 stderr F E0813 19:59:35.259635 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.291381701+00:00 stderr F I0813 19:59:35.291261 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.314580743+00:00 stderr F I0813 19:59:35.310533 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.315930531+00:00 stderr F E0813 19:59:35.315769 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.316024594+00:00 stderr F E0813 19:59:35.316004 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.360429779+00:00 stderr F I0813 
19:59:35.326507 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.374394 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375589 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:35.375543711 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375633 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:35.375602292 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375657 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:35.375639933 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375676 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:35.375662334 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375692 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:35.375681064 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375710 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:35.375696965 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375761 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:35.375715045 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.375922 1 tlsconfig.go:178] 
"Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:35.375902321 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.376298 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:20 +0000 UTC to 2026-06-26 12:47:21 +0000 UTC (now=2025-08-13 19:59:35.376273311 +0000 UTC))" 2025-08-13T19:59:35.397081133+00:00 stderr F I0813 19:59:35.376628 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 19:59:35.376610761 +0000 UTC))" 2025-08-13T19:59:35.398331219+00:00 stderr F I0813 19:59:35.398292 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.401314314+00:00 stderr F E0813 19:59:35.401282 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.413428499+00:00 stderr F E0813 19:59:35.413179 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.448248762+00:00 stderr F E0813 19:59:35.423880 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.448248762+00:00 stderr F E0813 19:59:35.435259 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.464321790+00:00 stderr F E0813 19:59:35.464255 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.498595977+00:00 stderr F E0813 19:59:35.481648 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.505466903+00:00 stderr F I0813 19:59:35.505434 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T19:59:35.562928661+00:00 stderr F E0813 19:59:35.545409 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.673230705+00:00 stderr F I0813 19:59:35.673175 1 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:35.675129009+00:00 stderr F E0813 19:59:35.675000 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
2025-08-13T19:59:35.728270224+00:00 stderr F E0813 19:59:35.727924 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:35.837193959+00:00 stderr F E0813 19:59:35.837129 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.033313399+00:00 stderr F E0813 19:59:36.033147 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.035455731+00:00 stderr F E0813 19:59:36.035426 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.036464989+00:00 stderr F E0813 19:59:36.036407 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.037394686+00:00 stderr F E0813 19:59:36.037322 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.037973162+00:00 stderr F E0813 19:59:36.033144 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 
failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.048474692+00:00 stderr F E0813 19:59:36.048441 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.052049914+00:00 stderr F I0813 19:59:36.052021 1 reflector.go:351] Caches populated for *v1.OAuthClient from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T19:59:36.158320383+00:00 stderr F E0813 19:59:36.158265 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T19:59:36.376588465+00:00 stderr F E0813 19:59:36.376519 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.377338296+00:00 stderr F E0813 19:59:36.377303 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.377953704+00:00 stderr F E0813 19:59:36.377920 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.378338595+00:00 stderr F E0813 19:59:36.378310 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 
2025-08-13T19:59:36.379342223+00:00 stderr F E0813 19:59:36.378599 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.436415990+00:00 stderr F E0813 19:59:36.436345 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.437664406+00:00 stderr F E0813 19:59:36.437540 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.472188440+00:00 stderr F E0813 19:59:36.445173 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.472631222+00:00 stderr F E0813 19:59:36.445254 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.476919835+00:00 stderr F E0813 19:59:36.445319 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.477208573+00:00 stderr F E0813 19:59:36.447532 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate 
SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.478147860+00:00 stderr F E0813 19:59:36.448082 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.478380826+00:00 stderr F E0813 19:59:36.448223 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.478628153+00:00 stderr F E0813 19:59:36.448301 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.479070036+00:00 stderr F E0813 19:59:36.448361 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.480056024+00:00 stderr F E0813 19:59:36.448428 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.480478096+00:00 stderr F E0813 19:59:36.448483 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 
2025-08-13T19:59:36.483198044+00:00 stderr F E0813 19:59:36.483108 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.483739059+00:00 stderr F E0813 19:59:36.483712 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485178480+00:00 stderr F E0813 19:59:36.484054 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485286323+00:00 stderr F E0813 19:59:36.484127 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485370796+00:00 stderr F E0813 19:59:36.484190 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485545511+00:00 stderr F E0813 19:59:36.484253 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485641593+00:00 stderr F E0813 19:59:36.484421 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate 
SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485720866+00:00 stderr F E0813 19:59:36.484505 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.485938202+00:00 stderr F E0813 19:59:36.484602 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.486062785+00:00 stderr F E0813 19:59:36.484761 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.486180459+00:00 stderr F E0813 19:59:36.484900 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.486455607+00:00 stderr F E0813 19:59:36.484981 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.575577707+00:00 stderr F E0813 19:59:36.575523 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 
2025-08-13T19:59:36.577465621+00:00 stderr F E0813 19:59:36.577436 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.578437578+00:00 stderr F E0813 19:59:36.578405 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.580095616+00:00 stderr F E0813 19:59:36.580072 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.597659256+00:00 stderr F E0813 19:59:36.580571 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.598071008+00:00 stderr F E0813 19:59:36.580626 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.598313015+00:00 stderr F E0813 19:59:36.580696 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.598541692+00:00 stderr F E0813 19:59:36.580749 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate 
SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.602086093+00:00 stderr F E0813 19:59:36.595085 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.602660729+00:00 stderr F E0813 19:59:36.595307 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.606749556+00:00 stderr F E0813 19:59:36.606721 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.608390602+00:00 stderr F E0813 19:59:36.608366 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.663521664+00:00 stderr F E0813 19:59:36.663462 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.665381357+00:00 stderr F E0813 19:59:36.663881 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 
2025-08-13T19:59:36.665555102+00:00 stderr F E0813 19:59:36.665531 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.665762728+00:00 stderr F E0813 19:59:36.663938 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.679138919+00:00 stderr F E0813 19:59:36.664129 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.680098706+00:00 stderr F E0813 19:59:36.664191 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.680434846+00:00 stderr F E0813 19:59:36.664262 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.681091575+00:00 stderr F E0813 19:59:36.664327 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority" 2025-08-13T19:59:36.688758883+00:00 stderr F E0813 19:59:36.688652 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate 
SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.689161305+00:00 stderr F E0813 19:59:36.689135 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.689456673+00:00 stderr F E0813 19:59:36.689423 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.689703040+00:00 stderr F E0813 19:59:36.689677 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.690070311+00:00 stderr F E0813 19:59:36.690047 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.690914385+00:00 stderr F E0813 19:59:36.690884 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.691368538+00:00 stderr F E0813 19:59:36.691183 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.733156989+00:00 stderr F E0813 19:59:36.733102 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.756324479+00:00 stderr F E0813 19:59:36.756160 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.759497590+00:00 stderr F E0813 19:59:36.759459 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.760244381+00:00 stderr F E0813 19:59:36.760215 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.851534213+00:00 stderr F E0813 19:59:36.851139 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.865752729+00:00 stderr F E0813 19:59:36.852028 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.875198648+00:00 stderr F E0813 19:59:36.852119 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.875459345+00:00 stderr F E0813 19:59:36.852199 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.875731643+00:00 stderr F E0813 19:59:36.852256 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.876069383+00:00 stderr F E0813 19:59:36.852310 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.876304159+00:00 stderr F E0813 19:59:36.852393 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.876563787+00:00 stderr F E0813 19:59:36.852462 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.877315348+00:00 stderr F E0813 19:59:36.852523 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.877532394+00:00 stderr F E0813 19:59:36.852576 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.877748271+00:00 stderr F E0813 19:59:36.870143 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:36.936192617+00:00 stderr F E0813 19:59:36.929752 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.937054831+00:00 stderr F E0813 19:59:36.932134 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.940238372+00:00 stderr F E0813 19:59:36.932220 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.940929152+00:00 stderr F E0813 19:59:36.932304 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:36.941176139+00:00 stderr F E0813 19:59:36.932386 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.458372 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.458449 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.459342 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.459485 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.466960256+00:00 stderr F E0813 19:59:37.459713 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.716233812+00:00 stderr F E0813 19:59:37.716104 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.716739256+00:00 stderr F E0813 19:59:37.716716 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.717144728+00:00 stderr F E0813 19:59:37.717093 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.717200919+00:00 stderr F E0813 19:59:37.717185 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.717411685+00:00 stderr F E0813 19:59:37.717359 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:37.971477538+00:00 stderr F E0813 19:59:37.969981 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:38.120652770+00:00 stderr F E0813 19:59:38.120511 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:38.121609857+00:00 stderr F E0813 19:59:38.121584 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:38.122111541+00:00 stderr F E0813 19:59:38.122088 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:38.122400310+00:00 stderr F E0813 19:59:38.122369 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:38.122666387+00:00 stderr F E0813 19:59:38.122641 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:38.159218769+00:00 stderr F E0813 19:59:38.159141 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:39.318221776+00:00 stderr F E0813 19:59:39.317938 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.318472713+00:00 stderr F E0813 19:59:39.317948 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.319165733+00:00 stderr F E0813 19:59:39.319135 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.334884941+00:00 stderr F E0813 19:59:39.334715 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.335225161+00:00 stderr F E0813 19:59:39.335185 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.650348344+00:00 stderr F E0813 19:59:39.650258 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.650348344+00:00 stderr F E0813 19:59:39.650321 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.650915920+00:00 stderr F E0813 19:59:39.650751 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.651055594+00:00 stderr F E0813 19:59:39.651002 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:39.651371383+00:00 stderr F E0813 19:59:39.651350 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:40.533147208+00:00 stderr F E0813 19:59:40.533083 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:40.720975642+00:00 stderr F E0813 19:59:40.720818 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:41.924524750+00:00 stderr F E0813 19:59:41.924344 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:41.974490634+00:00 stderr F E0813 19:59:41.974428 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:41.983171752+00:00 stderr F E0813 19:59:41.982641 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:41.983171752+00:00 stderr F E0813 19:59:41.982901 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:41.983934714+00:00 stderr F E0813 19:59:41.983729 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:42.260972311+00:00 stderr F E0813 19:59:42.260295 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:42.261708322+00:00 stderr F E0813 19:59:42.261350 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:42.261708322+00:00 stderr F E0813 19:59:42.261654 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:42.261923968+00:00 stderr F E0813 19:59:42.261891 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:42.262538365+00:00 stderr F E0813 19:59:42.262510 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:45.662298766+00:00 stderr F E0813 19:59:45.658946 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:45.847894196+00:00 stderr F E0813 19:59:45.845683 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:47.205464043+00:00 stderr F E0813 19:59:47.202257 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:47.242596192+00:00 stderr F E0813 19:59:47.242459 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:47.243157758+00:00 stderr F E0813 19:59:47.243104 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:47.243861318+00:00 stderr F E0813 19:59:47.243727 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:47.250236379+00:00 stderr F E0813 19:59:47.250172 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:47.656272564+00:00 stderr F E0813 19:59:47.656166 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:47.656481800+00:00 stderr F E0813 19:59:47.656440 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:47.656747918+00:00 stderr F E0813 19:59:47.656688 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:47.656765598+00:00 stderr F E0813 19:59:47.656740 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:47.662089500+00:00 stderr F E0813 19:59:47.661973 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=4357750288773576036, SKID=, AKID=5A:D5:1A:CF:C2:66:64:C5:1C:FD:F3:E7:41:34:74:2F:B4:7D:C1:A9 failed: x509: certificate signed by unknown authority"
2025-08-13T19:59:47.712139347+00:00 stderr F http2: server: error reading preface from client 10.217.0.2:56190: read tcp 10.217.0.39:8443->10.217.0.2:56190: read: connection reset by peer
2025-08-13T19:59:51.870432504+00:00 stderr F I0813 19:59:51.869647 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
2025-08-13T19:59:51.881604152+00:00 stderr F I0813 19:59:51.881522 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:51.881392326 +0000 UTC))"
2025-08-13T19:59:51.881726736+00:00 stderr F I0813 19:59:51.881707 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:51.881681854 +0000 UTC))"
2025-08-13T19:59:51.881908551+00:00 stderr F I0813 19:59:51.881888 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.881749776 +0000 UTC))"
2025-08-13T19:59:51.881968363+00:00 stderr F I0813 19:59:51.881953 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:51.881936802 +0000 UTC))"
2025-08-13T19:59:51.882024604+00:00 stderr F I0813 19:59:51.882012 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.881997093 +0000 UTC))"
2025-08-13T19:59:51.882072705+00:00 stderr F I0813 19:59:51.882060 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.882043885 +0000 UTC))"
2025-08-13T19:59:51.882116687+00:00 stderr F I0813 19:59:51.882104 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.882090576 +0000 UTC))"
2025-08-13T19:59:51.882160598+00:00 stderr F I0813 19:59:51.882148 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.882134397 +0000 UTC))"
2025-08-13T19:59:51.882230560+00:00 stderr F I0813 19:59:51.882212 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:51.882190329 +0000 UTC))"
2025-08-13T19:59:51.883008762+00:00 stderr F I0813 19:59:51.882979 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:20 +0000 UTC to 2026-06-26 12:47:21 +0000 UTC (now=2025-08-13 19:59:51.88294675 +0000 UTC))"
2025-08-13T19:59:51.883424654+00:00 stderr F I0813 19:59:51.883398 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 19:59:51.883371553 +0000 UTC))"
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773747 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.773664261 +0000 UTC))"
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773882 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.773861226 +0000 UTC))"
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773915 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.773890167 +0000 UTC))"
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773933 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.773921418 +0000 UTC))"
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773954 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773940988 +0000 UTC))"
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.773970 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.773959979 +0000 UTC))"
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774004 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.77398973 +0000 UTC))"
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774020 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.77400938 +0000 UTC))"
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774038 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.774024961 +0000 UTC))"
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774059 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.774046851 +0000 UTC))"
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774377 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:20 +0000 UTC to 2026-06-26 12:47:21 +0000 UTC (now=2025-08-13 20:00:05.77435506 +0000 UTC))"
2025-08-13T20:00:05.782996497+00:00 stderr F I0813 20:00:05.774709 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 20:00:05.77468649 +0000 UTC))"
2025-08-13T20:00:56.698411453+00:00 stderr F I0813 20:00:56.678254 1 healthz.go:261] etcd-readiness check failed: readyz
2025-08-13T20:00:56.698411453+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded
2025-08-13T20:00:56.845173228+00:00 stderr F I0813 20:00:56.844884 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:56.845173228+00:00 stderr F I0813 20:00:56.845073 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt"
2025-08-13T20:00:56.846576258+00:00 stderr F I0813 20:00:56.846323 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847425 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:56.847258947 +0000 UTC))"
2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847863 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:56.847678039 +0000 UTC))"
2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847919 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:56.847876525 +0000 UTC))"
2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847944 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:56.847925996 +0000 UTC))"
2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847962 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.847951797 +0000 UTC))"
2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.847984 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.847969098 +0000 UTC))"
2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.848007 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.847993848 +0000 UTC))"
2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.848028 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.848013279 +0000 UTC))"
2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.848067 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:56.84805345 +0000 UTC))"
2025-08-13T20:00:56.848462682+00:00 stderr F I0813 20:00:56.848361 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:56.848342018 +0000 UTC))"
2025-08-13T20:00:56.848745390+00:00 stderr F I0813 20:00:56.848729 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-08-13 20:00:56.848690248 +0000 UTC))"
2025-08-13T20:00:56.853486745+00:00 stderr F I0813 20:00:56.850681 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 20:00:56.850582102 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.028636 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:00.028399108 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.028708 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.028692296 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.028740 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.028714357 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029003 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.028910823 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029024 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029012746 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029080 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029056577 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029122 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029084858 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029261 1 tlsconfig.go:178] 
"Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029130349 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029285 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.029270753 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029304 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.029293694 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029331 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.029319794 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.029748 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"api.openshift-oauth-apiserver.svc\" [serving] validServingFor=[api.openshift-oauth-apiserver.svc,api.openshift-oauth-apiserver.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:41 +0000 UTC to 2027-08-13 20:00:42 +0000 UTC (now=2025-08-13 20:01:00.029691875 +0000 UTC))" 2025-08-13T20:01:00.032227177+00:00 stderr F I0813 20:01:00.030145 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115170\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115170\" (2025-08-13 18:59:28 +0000 UTC to 2026-08-13 18:59:28 +0000 UTC (now=2025-08-13 20:01:00.030123767 +0000 UTC))" 2025-08-13T20:01:05.265928136+00:00 stderr F I0813 20:01:05.264561 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:05.265928136+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:10.946272075+00:00 stderr F E0813 20:01:10.945945 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled 2025-08-13T20:01:10.946272075+00:00 stderr F E0813 20:01:10.946239 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout 2025-08-13T20:01:10.948325794+00:00 stderr F E0813 20:01:10.948042 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout 2025-08-13T20:01:10.948325794+00:00 stderr F E0813 20:01:10.948091 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout 2025-08-13T20:01:10.950701332+00:00 stderr F E0813 20:01:10.950364 1 timeout.go:142] post-timeout activity - time-elapsed: 3.955753ms, GET 
"/apis/oauth.openshift.io/v1/oauthclients/console" result: 2025-08-13T20:01:16.667006236+00:00 stderr F I0813 20:01:16.666703 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:16.667006236+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:31.303546439+00:00 stderr F I0813 20:01:31.302924 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:31.303546439+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:36.667417336+00:00 stderr F I0813 20:01:36.667273 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:36.667417336+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:44.989727596+00:00 stderr F I0813 20:01:44.989550 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:44.989727596+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:47.000469451+00:00 stderr F I0813 20:01:47.000415 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:01:47.000469451+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:54.660572879+00:00 stderr F I0813 20:01:54.660450 1 healthz.go:261] etcd check failed: healthz 2025-08-13T20:01:54.660572879+00:00 stderr F [-]etcd failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:54.660946280+00:00 stderr F E0813 20:01:54.660916 1 timeout.go:142] post-timeout activity - time-elapsed: 5.657571ms, GET "/healthz" result: 2025-08-13T20:01:54.661201217+00:00 stderr F I0813 20:01:54.661031 1 healthz.go:261] etcd check failed: healthz 2025-08-13T20:01:54.661201217+00:00 stderr F [-]etcd failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:01:58.103602303+00:00 stderr F I0813 20:01:58.103359 1 healthz.go:261] 
etcd-readiness check failed: readyz 2025-08-13T20:01:58.103602303+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:02:03.279348222+00:00 stderr F I0813 20:02:03.279140 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:02:03.279348222+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:02:15.043214406+00:00 stderr F I0813 20:02:15.042674 1 healthz.go:261] etcd-readiness check failed: readyz 2025-08-13T20:02:15.043214406+00:00 stderr F [-]etcd-readiness failed: error getting data from etcd: context deadline exceeded 2025-08-13T20:04:04.348973941+00:00 stderr F E0813 20:04:04.345887 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:04.348973941+00:00 stderr F E0813 20:04:04.348915 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:04.416483897+00:00 stderr F E0813 20:04:04.416321 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:04.416483897+00:00 stderr F E0813 20:04:04.416456 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.085312727+00:00 stderr F E0813 20:04:05.085217 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.085312727+00:00 stderr F E0813 
20:04:05.085293 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.104089753+00:00 stderr F E0813 20:04:05.103978 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.104149364+00:00 stderr F E0813 20:04:05.104081 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.298401566+00:00 stderr F E0813 20:04:05.297203 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.298401566+00:00 stderr F E0813 20:04:05.297288 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.298401566+00:00 stderr F E0813 20:04:05.297651 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.298401566+00:00 stderr F E0813 20:04:05.297674 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:38.360691791+00:00 stderr F E0813 20:04:38.359378 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:38.360691791+00:00 
stderr F E0813 20:04:38.359546 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:39.335531786+00:00 stderr F E0813 20:04:39.335387 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:39.335588038+00:00 stderr F E0813 20:04:39.335522 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.029295599+00:00 stderr F E0813 20:04:41.029154 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.029350590+00:00 stderr F E0813 20:04:41.029302 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:47.275168065+00:00 stderr F E0813 20:04:47.274663 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:47.275290638+00:00 stderr F E0813 20:04:47.275243 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:59.036167873+00:00 stderr F E0813 20:04:59.033754 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:04:59.036167873+00:00 stderr F E0813 20:04:59.033999 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.093002537+00:00 stderr F E0813 20:05:05.092767 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.093002537+00:00 stderr F E0813 20:05:05.092979 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.308754306+00:00 stderr F E0813 20:05:05.308472 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.308754306+00:00 stderr F E0813 20:05:05.308717 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.326847394+00:00 stderr F E0813 20:05:05.326709 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.326904556+00:00 stderr F E0813 20:05:05.326887 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:05.355996059+00:00 stderr F E0813 20:05:05.355847 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:05:05.355996059+00:00 stderr F E0813 20:05:05.355970 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:06.264399003+00:00 stderr F E0813 20:05:06.264250 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:06.265136094+00:00 stderr F E0813 20:05:06.264572 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.280454782+00:00 stderr F E0813 20:05:16.280262 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.280454782+00:00 stderr F E0813 20:05:16.280434 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:06:23.498213610+00:00 stderr F I0813 20:06:23.497964 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:06:28.010316348+00:00 stderr F I0813 20:06:28.010101 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:06:28.096337502+00:00 stderr F I0813 20:06:28.096272 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:06:48.196574406+00:00 stderr F I0813 20:06:48.196290 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 
2025-08-13T20:06:50.297769168+00:00 stderr F I0813 20:06:50.297496 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:05.311458663+00:00 stderr F I0813 20:07:05.310420 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:08:42.713645834+00:00 stderr F E0813 20:08:42.709166 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.725764152+00:00 stderr F E0813 20:08:42.725565 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.766877421+00:00 stderr F E0813 20:08:42.766703 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:42.766877421+00:00 stderr F E0813 20:08:42.766847 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.513741693+00:00 stderr F E0813 20:08:43.512988 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.513741693+00:00 stderr F E0813 20:08:43.513085 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.530236006+00:00 stderr F E0813 20:08:43.530011 1 webhook.go:253] Failed to make webhook 
authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.530236006+00:00 stderr F E0813 20:08:43.530131 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.609963312+00:00 stderr F E0813 20:08:43.609275 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.609963312+00:00 stderr F E0813 20:08:43.609384 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.611957989+00:00 stderr F E0813 20:08:43.611516 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.611957989+00:00 stderr F E0813 20:08:43.611585 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.570522845+00:00 stderr F E0813 20:08:48.570328 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.570522845+00:00 stderr F E0813 20:08:48.570428 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.887212386+00:00 stderr F E0813 20:08:49.887128 1 webhook.go:253] Failed to 
make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.887265298+00:00 stderr F E0813 20:08:49.887209 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.262668022+00:00 stderr F E0813 20:08:52.262575 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:52.263017712+00:00 stderr F E0813 20:08:52.262984 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.801090063+00:00 stderr F E0813 20:08:56.800852 1 webhook.go:253] Failed to make webhook authorizer request: Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:56.801152225+00:00 stderr F E0813 20:08:56.801084 1 errors.go:77] Post "https://10.217.4.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:28.515678168+00:00 stderr F I0813 20:09:28.515417 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:46.544141888+00:00 stderr F I0813 20:09:46.540980 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:50.709580955+00:00 stderr F I0813 20:09:50.709457 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 
2025-08-13T20:09:54.399265802+00:00 stderr F I0813 20:09:54.397571 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:09:55.196098577+00:00 stderr F I0813 20:09:55.196024 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:10:14.687619245+00:00 stderr F I0813 20:10:14.687467 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
2025-08-13T20:17:57.759289405+00:00 stderr F I0813 20:17:57.759058 1 trace.go:236] Trace[646062886]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:8e440eb5-de6b-4d07-ad72-6fd790f8653a,client:10.217.0.62,api-group:oauth.openshift.io,api-version:v1,name:console,subresource:,namespace:,protocol:HTTP/2.0,resource:oauthclients,scope:resource,url:/apis/oauth.openshift.io/v1/oauthclients/console,user-agent:console/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (13-Aug-2025 20:17:57.161) (total time: 588ms):
2025-08-13T20:17:57.759289405+00:00 stderr F Trace[646062886]: ---"About to write a response" 588ms (20:17:57.750)
2025-08-13T20:17:57.759289405+00:00 stderr F Trace[646062886]: [588.849764ms] [588.849764ms] END
2025-08-13T20:18:02.690719743+00:00 stderr F I0813 20:18:02.690551 1 trace.go:236] Trace[2103687003]: "Create etcd3" audit-id:f118d6a4-62e6-4076-a8b6-218f0a0d636e,key:/oauth/clients/openshift-browser-client,type:*oauth.OAuthClient,resource:oauthclients.oauth.openshift.io (13-Aug-2025 20:18:01.702) (total time: 988ms):
2025-08-13T20:18:02.690719743+00:00 stderr F Trace[2103687003]: ---"Txn call succeeded" 987ms (20:18:02.690)
2025-08-13T20:18:02.690719743+00:00 stderr F Trace[2103687003]: [988.013605ms] [988.013605ms] END
2025-08-13T20:18:02.951389737+00:00 stderr F I0813 20:18:02.950466 1 trace.go:236] Trace[353295370]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f118d6a4-62e6-4076-a8b6-218f0a0d636e,client:10.217.0.19,api-group:oauth.openshift.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:oauthclients,scope:resource,url:/apis/oauth.openshift.io/v1/oauthclients,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:POST (13-Aug-2025 20:18:01.701) (total time: 1248ms):
2025-08-13T20:18:02.951389737+00:00 stderr F Trace[353295370]: ---"Write to database call failed" len:206,err:oauthclients.oauth.openshift.io "openshift-browser-client" already exists 1248ms (20:18:02.950)
2025-08-13T20:18:02.951389737+00:00 stderr F Trace[353295370]: [1.248658908s] [1.248658908s] END
2025-08-13T20:18:04.535705811+00:00 stderr F I0813 20:18:04.531464 1 trace.go:236] Trace[1861376822]: "Create etcd3" audit-id:523a0143-4800-4c01-b2ee-b100a17a7106,key:/oauth/clients/openshift-challenging-client,type:*oauth.OAuthClient,resource:oauthclients.oauth.openshift.io (13-Aug-2025 20:18:02.996) (total time: 1534ms):
2025-08-13T20:18:04.535705811+00:00 stderr F Trace[1861376822]: ---"Txn call succeeded" 1534ms (20:18:04.531)
2025-08-13T20:18:04.535705811+00:00 stderr F Trace[1861376822]: [1.534461529s] [1.534461529s] END
2025-08-13T20:18:06.381304026+00:00 stderr F I0813 20:18:06.380288 1 trace.go:236] Trace[1300541338]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:523a0143-4800-4c01-b2ee-b100a17a7106,client:10.217.0.19,api-group:oauth.openshift.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:oauthclients,scope:resource,url:/apis/oauth.openshift.io/v1/oauthclients,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:POST (13-Aug-2025 20:18:02.985) (total time: 3395ms):
2025-08-13T20:18:06.381304026+00:00 stderr F Trace[1300541338]: ---"Write to database call failed" len:167,err:oauthclients.oauth.openshift.io "openshift-challenging-client" already exists 3383ms (20:18:06.379)
2025-08-13T20:18:06.381304026+00:00 stderr F Trace[1300541338]: [3.395104984s] [3.395104984s] END
2025-08-13T20:42:36.329669044+00:00 stderr F I0813 20:42:36.326608 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.329669044+00:00 stderr F I0813 20:42:36.322968 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.329669044+00:00 stderr F I0813 20:42:36.324161 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.329882780+00:00 stderr F I0813 20:42:36.329763 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.352735829+00:00 stderr F I0813 20:42:36.323150 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.352735829+00:00 stderr F I0813 20:42:36.324215 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:42.122575675+00:00 stderr F I0813 20:42:42.121383 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="ShutdownInitiated"
2025-08-13T20:42:42.122575675+00:00 stderr F I0813 20:42:42.121637 1 genericapiserver.go:689] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
2025-08-13T20:42:42.122837923+00:00 stderr F I0813 20:42:42.122675 1 genericapiserver.go:696] [graceful-termination] RunPreShutdownHooks has completed
2025-08-13T20:42:42.123355858+00:00 stderr F I0813 20:42:42.122480 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-oauth-apiserver", Name:"apiserver-69c565c9b6-vbdpd", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving
2025-08-13T20:42:42.134867230+00:00 stderr F W0813
20:42:42.134581 1 genericapiserver.go:1060] failed to create event openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd.185b6e4e3e4981aa: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-oauth-apiserver/events": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/1.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions/0.log
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c/marketplace-operator/1.log
2025-12-13T00:21:41.123588102+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime"
2025-12-13T00:21:41.123588102+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="Go OS/Arch: linux/amd64"
2025-12-13T00:21:41.123588102+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="[metrics] Registering marketplace metrics"
2025-12-13T00:21:41.123588102+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="[metrics] Serving marketplace metrics"
2025-12-13T00:21:41.123827637+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="TLS keys set, using https for metrics"
2025-12-13T00:21:41.154846122+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="Config API is available"
2025-12-13T00:21:41.154846122+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="setting up scheme"
2025-12-13T00:21:41.181664134+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="setting up health checks"
2025-12-13T00:21:41.183897960+00:00 stderr F I1213 00:21:41.183844 1 leaderelection.go:250] attempting to acquire
leader lease openshift-marketplace/marketplace-operator-lock...
2025-12-13T00:21:41.190669066+00:00 stderr F I1213 00:21:41.190605 1 leaderelection.go:260] successfully acquired lease openshift-marketplace/marketplace-operator-lock
2025-12-13T00:21:41.190743228+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="became leader: marketplace-operator-8b455464d-kghgr"
2025-12-13T00:21:41.190743228+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="registering components"
2025-12-13T00:21:41.190743228+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="setting up the marketplace clusteroperator status reporter"
2025-12-13T00:21:41.199512444+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="setting up controllers"
2025-12-13T00:21:41.200228973+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="starting the marketplace clusteroperator status reporter"
2025-12-13T00:21:41.200228973+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="starting manager"
2025-12-13T00:21:41.201246687+00:00 stderr F {"level":"info","ts":"2025-12-13T00:21:41Z","msg":"starting server","kind":"pprof","addr":"[::]:6060"}
2025-12-13T00:21:41.202535929+00:00 stderr F {"level":"info","ts":"2025-12-13T00:21:41Z","msg":"Starting EventSource","controller":"operatorhub-controller","source":"kind source: *v1.OperatorHub"}
2025-12-13T00:21:41.202620871+00:00 stderr F {"level":"info","ts":"2025-12-13T00:21:41Z","msg":"Starting EventSource","controller":"catalogsource-controller","source":"kind source: *v1alpha1.CatalogSource"}
2025-12-13T00:21:41.205241795+00:00 stderr F {"level":"info","ts":"2025-12-13T00:21:41Z","msg":"Starting Controller","controller":"catalogsource-controller"}
2025-12-13T00:21:41.205387109+00:00 stderr F {"level":"info","ts":"2025-12-13T00:21:41Z","msg":"Starting EventSource","controller":"configmap-controller","source":"kind source: *v1.ConfigMap"}
2025-12-13T00:21:41.205398679+00:00 stderr F {"level":"info","ts":"2025-12-13T00:21:41Z","msg":"Starting Controller","controller":"configmap-controller"}
2025-12-13T00:21:41.207222294+00:00 stderr F {"level":"info","ts":"2025-12-13T00:21:41Z","msg":"Starting Controller","controller":"operatorhub-controller"}
2025-12-13T00:21:41.311806844+00:00 stderr F {"level":"info","ts":"2025-12-13T00:21:41Z","msg":"Starting workers","controller":"catalogsource-controller","worker count":1}
2025-12-13T00:21:41.311907956+00:00 stderr F {"level":"info","ts":"2025-12-13T00:21:41Z","msg":"Starting workers","controller":"configmap-controller","worker count":1}
2025-12-13T00:21:41.313512876+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="Reconciling ConfigMap openshift-marketplace/marketplace-trusted-ca"
2025-12-13T00:21:41.314267765+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="[ca] Certificate Authorization ConfigMap openshift-marketplace/marketplace-trusted-ca is in sync with disk." name=marketplace-trusted-ca type=ConfigMap
2025-12-13T00:21:41.325869041+00:00 stderr F {"level":"info","ts":"2025-12-13T00:21:41Z","msg":"Starting workers","controller":"operatorhub-controller","worker count":1}
2025-12-13T00:21:41.326057025+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="Reconciling OperatorHub cluster"
2025-12-13T00:21:41.326216450+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:21:41.326282462+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="[defaults] CatalogSource certified-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:21:41.326346893+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:21:41.326439716+00:00 stderr F time="2025-12-13T00:21:41Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c/marketplace-operator/0.log
2025-12-13T00:14:39.816386013+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="Go Version: go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime"
2025-12-13T00:14:39.818617735+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="Go OS/Arch: linux/amd64"
2025-12-13T00:14:39.818688647+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="[metrics] Registering marketplace metrics"
2025-12-13T00:14:39.818688647+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="[metrics] Serving marketplace metrics"
2025-12-13T00:14:39.818847462+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="TLS keys set, using https for metrics"
2025-12-13T00:14:39.896798659+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="Config API is available"
2025-12-13T00:14:39.896798659+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="setting up scheme"
2025-12-13T00:14:39.951755046+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="setting up health checks"
2025-12-13T00:14:39.957906264+00:00 stderr F I1213 00:14:39.957867 1 leaderelection.go:250] attempting to acquire leader lease openshift-marketplace/marketplace-operator-lock...
2025-12-13T00:14:39.974051933+00:00 stderr F I1213 00:14:39.973991 1 leaderelection.go:260] successfully acquired lease openshift-marketplace/marketplace-operator-lock
2025-12-13T00:14:39.975276953+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="became leader: marketplace-operator-8b455464d-kghgr"
2025-12-13T00:14:39.975276953+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="registering components"
2025-12-13T00:14:39.975276953+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="setting up the marketplace clusteroperator status reporter"
2025-12-13T00:14:39.989772289+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="setting up controllers"
2025-12-13T00:14:39.991158324+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="starting the marketplace clusteroperator status reporter"
2025-12-13T00:14:39.991158324+00:00 stderr F time="2025-12-13T00:14:39Z" level=info msg="starting manager"
2025-12-13T00:14:39.991704611+00:00 stderr F {"level":"info","ts":"2025-12-13T00:14:39Z","msg":"starting server","kind":"pprof","addr":"[::]:6060"}
2025-12-13T00:14:39.994614764+00:00 stderr F {"level":"info","ts":"2025-12-13T00:14:39Z","msg":"Starting EventSource","controller":"catalogsource-controller","source":"kind source: *v1alpha1.CatalogSource"}
2025-12-13T00:14:39.994614764+00:00 stderr F {"level":"info","ts":"2025-12-13T00:14:39Z","msg":"Starting EventSource","controller":"operatorhub-controller","source":"kind source: *v1.OperatorHub"}
2025-12-13T00:14:39.994994598+00:00 stderr F {"level":"info","ts":"2025-12-13T00:14:39Z","msg":"Starting EventSource","controller":"configmap-controller","source":"kind source: *v1.ConfigMap"}
2025-12-13T00:14:39.995037189+00:00 stderr F {"level":"info","ts":"2025-12-13T00:14:39Z","msg":"Starting Controller","controller":"catalogsource-controller"}
2025-12-13T00:14:39.995085160+00:00 stderr F {"level":"info","ts":"2025-12-13T00:14:39Z","msg":"Starting Controller","controller":"operatorhub-controller"}
2025-12-13T00:14:39.995357679+00:00 stderr F {"level":"info","ts":"2025-12-13T00:14:39Z","msg":"Starting Controller","controller":"configmap-controller"}
2025-12-13T00:14:40.100761909+00:00 stderr F {"level":"info","ts":"2025-12-13T00:14:40Z","msg":"Starting workers","controller":"configmap-controller","worker count":1}
2025-12-13T00:14:40.100876042+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="Reconciling ConfigMap openshift-marketplace/marketplace-trusted-ca"
2025-12-13T00:14:40.101471572+00:00 stderr F {"level":"info","ts":"2025-12-13T00:14:40Z","msg":"Starting workers","controller":"catalogsource-controller","worker count":1}
2025-12-13T00:14:40.106203584+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[ca] Certificate Authorization ConfigMap openshift-marketplace/marketplace-trusted-ca is in sync with disk." name=marketplace-trusted-ca type=ConfigMap
2025-12-13T00:14:40.116247547+00:00 stderr F {"level":"info","ts":"2025-12-13T00:14:40Z","msg":"Starting workers","controller":"operatorhub-controller","worker count":1}
2025-12-13T00:14:40.116835986+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="Reconciling OperatorHub cluster"
2025-12-13T00:14:40.116962150+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec"
2025-12-13T00:14:40.117045343+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:14:40.117056823+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[defaults] CatalogSource certified-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:14:40.117109845+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:14:40.126142465+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="Reconciling OperatorHub cluster"
2025-12-13T00:14:40.126180506+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec"
2025-12-13T00:14:40.126245198+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:14:40.126274889+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[defaults] CatalogSource certified-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:14:40.126306530+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:14:40.141859811+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="Reconciling OperatorHub cluster"
2025-12-13T00:14:40.142040237+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:14:40.142157830+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec"
2025-12-13T00:14:40.142213692+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:14:40.142252393+00:00 stderr F time="2025-12-13T00:14:40Z" level=info msg="[defaults] CatalogSource certified-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:14:43.464330262+00:00 stderr F time="2025-12-13T00:14:43Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:14:44.466723472+00:00 stderr F time="2025-12-13T00:14:44Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:14:46.066019931+00:00 stderr F time="2025-12-13T00:14:46Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec"
2025-12-13T00:15:11.553161078+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:15:16.283429681+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:15:16.545925761+00:00 stderr F time="2025-12-13T00:15:16Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec"
2025-12-13T00:15:20.108678435+00:00 stderr F time="2025-12-13T00:15:20Z" level=info msg="[defaults] CatalogSource certified-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:15:31.558213140+00:00 stderr F time="2025-12-13T00:15:31Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:16:12.065401281+00:00 stderr F time="2025-12-13T00:16:12Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:16:15.285121362+00:00 stderr F time="2025-12-13T00:16:15Z" level=info msg="[defaults] CatalogSource redhat-marketplace is annotated and its spec is the same as the default spec"
2025-12-13T00:16:17.385753434+00:00 stderr F time="2025-12-13T00:16:17Z" level=info msg="[defaults] CatalogSource community-operators is annotated and its spec is the same as the default spec"
2025-12-13T00:20:40.225279653+00:00 stderr F E1213 00:20:40.224738 1 leaderelection.go:332] error retrieving resource lock
openshift-marketplace/marketplace-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-marketplace/leases/marketplace-operator-lock": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:10.230534926+00:00 stderr F E1213 00:21:10.229982 1 leaderelection.go:332] error retrieving resource lock openshift-marketplace/marketplace-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-marketplace/leases/marketplace-operator-lock": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:40.223723637+00:00 stderr F I1213 00:21:40.222737 1 leaderelection.go:285] failed to renew lease openshift-marketplace/marketplace-operator-lock: timed out waiting for the condition
2025-12-13T00:21:40.234712278+00:00 stderr F time="2025-12-13T00:21:40Z" level=warning msg="leader election lost for marketplace-operator-8b455464d-kghgr identity"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/2.log
2025-12-13T00:10:52.181468125+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:52.180599Z","logger":"etcd-client","caller":"v3@v3.5.13/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000c0000/192.168.126.11:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:2379: connect: connection refused\""}
2025-12-13T00:10:52.181468125+00:00 stderr F Error: context deadline exceeded
2025-12-13T00:10:52.208834959+00:00 stderr F dataDir is present on crc
2025-12-13T00:10:54.209803108+00:00 stderr P failed to create etcd client, but the server is already initialized as member "crc" before, starting as etcd member: context deadline exceeded
2025-12-13T00:10:54.212477911+00:00 stdout P Waiting for ports 2379, 2380 and 9978 to be released.
2025-12-13T00:10:54.220819428+00:00 stderr F
2025-12-13T00:10:54.220819428+00:00 stderr F real 0m0.008s
2025-12-13T00:10:54.220819428+00:00 stderr F user 0m0.003s
2025-12-13T00:10:54.220819428+00:00 stderr F sys 0m0.005s
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_QUOTA_BACKEND_BYTES=8589934592
2025-12-13T00:10:54.223894061+00:00 stdout F ALL_ETCD_ENDPOINTS=https://192.168.126.11:2379
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_STATIC_POD_VERSION=3
2025-12-13T00:10:54.223894061+00:00 stdout F ETCDCTL_ENDPOINTS=https://192.168.126.11:2379
2025-12-13T00:10:54.223894061+00:00 stdout F ETCDCTL_KEY=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key
2025-12-13T00:10:54.223894061+00:00 stdout F ETCDCTL_API=3
2025-12-13T00:10:54.223894061+00:00 stdout F ETCDCTL_CACERT=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_HEARTBEAT_INTERVAL=100
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_NAME=crc
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_SOCKET_REUSE_ADDRESS=true
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_EXPERIMENTAL_WARNING_APPLY_DURATION=200ms
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_EXPERIMENTAL_MAX_LEARNERS=1
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_DATA_DIR=/var/lib/etcd
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_ELECTION_TIMEOUT=1000
2025-12-13T00:10:54.223894061+00:00 stdout F ETCDCTL_CERT=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_INITIAL_CLUSTER_STATE=existing
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_INITIAL_CLUSTER=
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_CIPHER_SUITES=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL=5s
2025-12-13T00:10:54.223894061+00:00 stdout F ETCD_ENABLE_PPROF=true
2025-12-13T00:10:54.224454866+00:00 stderr F + exec nice -n -19 ionice -c2 -n0 etcd --logger=zap --log-level=info --experimental-initial-corrupt-check=true --snapshot-count=10000 --initial-advertise-peer-urls=https://192.168.126.11:2380 --cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt --key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key --trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt --client-cert-auth=true --peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt --peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key --peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt --peer-client-cert-auth=true --advertise-client-urls=https://192.168.126.11:2379 --listen-client-urls=https://0.0.0.0:2379,unixs://192.168.126.11:0 --listen-peer-urls=https://0.0.0.0:2380 --metrics=extensive --listen-metrics-urls=https://0.0.0.0:9978
2025-12-13T00:10:54.250534115+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250345Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_CIPHER_SUITES","variable-value":"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"}
2025-12-13T00:10:54.250534115+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250462Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_DATA_DIR","variable-value":"/var/lib/etcd"}
2025-12-13T00:10:54.250534115+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250475Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_ELECTION_TIMEOUT","variable-value":"1000"}
2025-12-13T00:10:54.250534115+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250485Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_ENABLE_PPROF","variable-value":"true"}
2025-12-13T00:10:54.250534115+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250505Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_MAX_LEARNERS","variable-value":"1"}
2025-12-13T00:10:54.250534115+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250516Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_WARNING_APPLY_DURATION","variable-value":"200ms"}
2025-12-13T00:10:54.250595837+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250525Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL","variable-value":"5s"}
2025-12-13T00:10:54.250595837+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250537Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_HEARTBEAT_INTERVAL","variable-value":"100"}
2025-12-13T00:10:54.250595837+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250548Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_INITIAL_CLUSTER_STATE","variable-value":"existing"}
2025-12-13T00:10:54.250595837+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250566Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_NAME","variable-value":"crc"}
2025-12-13T00:10:54.250595837+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250585Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_QUOTA_BACKEND_BYTES","variable-value":"8589934592"}
2025-12-13T00:10:54.250615878+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250595Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_SOCKET_REUSE_ADDRESS","variable-value":"true"}
2025-12-13T00:10:54.250634768+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:54.250613Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3"}
2025-12-13T00:10:54.250649788+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:54.250627Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_STATIC_POD_VERSION=3"}
2025-12-13T00:10:54.250649788+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:54.250642Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_INITIAL_CLUSTER="}
2025-12-13T00:10:54.250902205+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:54.250837Z","caller":"embed/config.go:683","msg":"Running http and grpc server on single port. This is not recommended for production."}
2025-12-13T00:10:54.250902205+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250866Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--logger=zap","--log-level=info","--experimental-initial-corrupt-check=true","--snapshot-count=10000","--initial-advertise-peer-urls=https://192.168.126.11:2380","--cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt","--key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key","--trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt","--client-cert-auth=true","--peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt","--peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key","--peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt","--peer-client-cert-auth=true","--advertise-client-urls=https://192.168.126.11:2379","--listen-client-urls=https://0.0.0.0:2379,unixs://192.168.126.11:0","--listen-peer-urls=https://0.0.0.0:2380","--metrics=extensive","--listen-metrics-urls=https://0.0.0.0:9978"]}
2025-12-13T00:10:54.251026419+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.250963Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"}
2025-12-13T00:10:54.251026419+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:54.251004Z","caller":"embed/config.go:683","msg":"Running http and grpc server on single port. This is not recommended for production."}
2025-12-13T00:10:54.251048099+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.251014Z","caller":"embed/etcd.go:121","msg":"configuring socket options","reuse-address":true,"reuse-port":false}
2025-12-13T00:10:54.251048099+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.251027Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2380"]}
2025-12-13T00:10:54.251103541+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.251056Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = true, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]}
2025-12-13T00:10:54.253234659+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.25316Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"]}
2025-12-13T00:10:54.253282680+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.253228Z","caller":"embed/etcd.go:620","msg":"pprof is enabled","path":"/debug/pprof"}
2025-12-13T00:10:54.254056101+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.253991Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.13","git-sha":"GitNotFound","go-version":"go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime","go-os":"linux","go-arch":"amd64","max-cpu-set":12,"max-cpu-available":12,"member-initialized":true,"name":"crc","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.126.11:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://192.168.126.11:2379"],"listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"],"listen-metrics-urls":["https://0.0.0.0:9978"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"existing","initial-cluster-token":"","quota-backend-bytes":8589934592,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s","max-learners":1}
2025-12-13T00:10:54.254785090+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:54.254276Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd/member/snap\" exist, but the permission is \"drwxr-xr-x\". The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"}
2025-12-13T00:10:54.291482038+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.291342Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"36.478932ms"}
2025-12-13T00:10:54.703987189+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.703808Z","caller":"etcdserver/server.go:514","msg":"recovered v2 store from snapshot","snapshot-index":40004,"snapshot-size":"8.9 kB"}
2025-12-13T00:10:54.703987189+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.703894Z","caller":"etcdserver/server.go:527","msg":"recovered v3 backend from snapshot","backend-size-bytes":74416128,"backend-size":"74 MB","backend-size-in-use-bytes":44195840,"backend-size-in-use":"44 MB"}
2025-12-13T00:10:54.855894117+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.855745Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"37a6ceb54a88a89a","local-member-id":"d44fc94b15474c4c","commit-index":41745}
2025-12-13T00:10:54.855971549+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.855896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c switched to configuration voters=(15298667783517588556)"}
2025-12-13T00:10:54.855971549+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.855926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became follower at term 8"}
2025-12-13T00:10:54.855971549+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.855952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d44fc94b15474c4c [peers: [d44fc94b15474c4c], term: 8, commit: 41745, applied: 40004, lastindex: 41745, lastterm: 8]"}
2025-12-13T00:10:54.856084292+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.85605Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
2025-12-13T00:10:54.856084292+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.856065Z","caller":"membership/cluster.go:280","msg":"recovered/added member from store","cluster-id":"37a6ceb54a88a89a","local-member-id":"d44fc94b15474c4c","recovered-remote-peer-id":"d44fc94b15474c4c","recovered-remote-peer-urls":["https://192.168.126.11:2380"],"recovered-remote-peer-is-learner":false} 2025-12-13T00:10:54.856084292+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.856074Z","caller":"membership/cluster.go:290","msg":"set cluster version from store","cluster-version":"3.5"} 2025-12-13T00:10:54.856144674+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:54.856114Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd/member\" exist, but the permission is \"drwxr-xr-x\". The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2025-12-13T00:10:54.856367030+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:54.856312Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"} 2025-12-13T00:10:54.856429401+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.85638Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":36405} 2025-12-13T00:10:54.890658401+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.890546Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":37957} 2025-12-13T00:10:54.891087903+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.891039Z","caller":"etcdserver/quota.go:117","msg":"enabled backend quota","quota-name":"v3-applier","quota-size-bytes":8589934592,"quota-size":"8.6 GB"} 2025-12-13T00:10:54.891900046+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.891845Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption 
check","local-member-id":"d44fc94b15474c4c","timeout":"7s"} 2025-12-13T00:10:54.902644257+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.902555Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"d44fc94b15474c4c"} 2025-12-13T00:10:54.902691878+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.902658Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"d44fc94b15474c4c","local-server-version":"3.5.13","cluster-id":"37a6ceb54a88a89a","cluster-version":"3.5"} 2025-12-13T00:10:54.903029088+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.90287Z","caller":"etcdserver/server.go:760","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"d44fc94b15474c4c","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} 2025-12-13T00:10:54.903164802+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.903042Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} 2025-12-13T00:10:54.903178282+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.903157Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} 2025-12-13T00:10:54.903212263+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.903178Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} 2025-12-13T00:10:54.905723001+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.905521Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key, client-cert=, client-key=, trusted-ca = 
/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]} 2025-12-13T00:10:54.905786233+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.905613Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"[::]:2380"} 2025-12-13T00:10:54.905786233+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.905762Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"[::]:2380"} 2025-12-13T00:10:54.908344872+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.908189Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d44fc94b15474c4c","initial-advertise-peer-urls":["https://192.168.126.11:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://192.168.126.11:2379"],"listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"],"listen-metrics-urls":["https://0.0.0.0:9978"]} 2025-12-13T00:10:54.908435715+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:54.908334Z","caller":"embed/etcd.go:859","msg":"serving metrics","address":"https://0.0.0.0:9978"} 2025-12-13T00:10:55.657240175+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.657104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c is starting a new election at term 8"} 2025-12-13T00:10:55.657240175+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.657171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became pre-candidate at term 8"} 2025-12-13T00:10:55.657240175+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.657212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c 
received MsgPreVoteResp from d44fc94b15474c4c at term 8"} 2025-12-13T00:10:55.657298906+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.657226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became candidate at term 9"} 2025-12-13T00:10:55.657298906+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.657256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c received MsgVoteResp from d44fc94b15474c4c at term 9"} 2025-12-13T00:10:55.657298906+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.657266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became leader at term 9"} 2025-12-13T00:10:55.657298906+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.657276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d44fc94b15474c4c elected leader d44fc94b15474c4c at term 9"} 2025-12-13T00:10:55.657757059+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.657714Z","caller":"etcdserver/server.go:2119","msg":"published local member to cluster through raft","local-member-id":"d44fc94b15474c4c","local-member-attributes":"{Name:crc ClientURLs:[https://192.168.126.11:2379]}","request-path":"/0/members/d44fc94b15474c4c/attributes","cluster-id":"37a6ceb54a88a89a","publish-timeout":"7s"} 2025-12-13T00:10:55.657817490+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.657786Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} 2025-12-13T00:10:55.657875822+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.657765Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} 2025-12-13T00:10:55.658097658+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.657987Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2025-12-13T00:10:55.658117098+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.658092Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 
2025-12-13T00:10:55.661365246+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.661282Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.126.11:0"} 2025-12-13T00:10:55.661642234+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:55.661566Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"[::]:2379"}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/1.log
2025-12-13T00:06:44.555008220+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:44.554079Z","logger":"etcd-client","caller":"v3@v3.5.13/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0004c6000/192.168.126.11:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:2379: connect: connection refused\""} 2025-12-13T00:06:44.555008220+00:00 stderr F Error: context deadline exceeded 2025-12-13T00:06:44.579194120+00:00 stderr F dataDir is present on crc 2025-12-13T00:06:46.581616791+00:00 stderr P failed to create etcd client, but the server is already initialized as member "crc" before, starting as etcd member: context deadline exceeded 2025-12-13T00:06:46.584670417+00:00 stdout P Waiting for ports 2379, 2380 and 9978 to be released. 
2025-12-13T00:06:46.589362610+00:00 stderr F 2025-12-13T00:06:46.589362610+00:00 stderr F real 0m0.005s 2025-12-13T00:06:46.589362610+00:00 stderr F user 0m0.001s 2025-12-13T00:06:46.589362610+00:00 stderr F sys 0m0.004s 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_QUOTA_BACKEND_BYTES=8589934592 2025-12-13T00:06:46.592766065+00:00 stdout F ALL_ETCD_ENDPOINTS=https://192.168.126.11:2379 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_STATIC_POD_VERSION=3 2025-12-13T00:06:46.592766065+00:00 stdout F ETCDCTL_ENDPOINTS=https://192.168.126.11:2379 2025-12-13T00:06:46.592766065+00:00 stdout F ETCDCTL_KEY=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key 2025-12-13T00:06:46.592766065+00:00 stdout F ETCDCTL_API=3 2025-12-13T00:06:46.592766065+00:00 stdout F ETCDCTL_CACERT=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_HEARTBEAT_INTERVAL=100 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_NAME=crc 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_SOCKET_REUSE_ADDRESS=true 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_EXPERIMENTAL_WARNING_APPLY_DURATION=200ms 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_EXPERIMENTAL_MAX_LEARNERS=1 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_DATA_DIR=/var/lib/etcd 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_ELECTION_TIMEOUT=1000 2025-12-13T00:06:46.592766065+00:00 stdout F ETCDCTL_CERT=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_INITIAL_CLUSTER_STATE=existing 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_INITIAL_CLUSTER= 2025-12-13T00:06:46.592766065+00:00 stdout F 
ETCD_CIPHER_SUITES=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL=5s 2025-12-13T00:06:46.592766065+00:00 stdout F ETCD_ENABLE_PPROF=true 2025-12-13T00:06:46.593251309+00:00 stderr F + exec nice -n -19 ionice -c2 -n0 etcd --logger=zap --log-level=info --experimental-initial-corrupt-check=true --snapshot-count=10000 --initial-advertise-peer-urls=https://192.168.126.11:2380 --cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt --key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key --trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt --client-cert-auth=true --peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt --peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key --peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt --peer-client-cert-auth=true --advertise-client-urls=https://192.168.126.11:2379 --listen-client-urls=https://0.0.0.0:2379,unixs://192.168.126.11:0 --listen-peer-urls=https://0.0.0.0:2380 --metrics=extensive --listen-metrics-urls=https://0.0.0.0:9978 2025-12-13T00:06:46.615831734+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.61566Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_CIPHER_SUITES","variable-value":"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"} 
2025-12-13T00:06:46.615831734+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.615782Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_DATA_DIR","variable-value":"/var/lib/etcd"} 2025-12-13T00:06:46.615831734+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.615795Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_ELECTION_TIMEOUT","variable-value":"1000"} 2025-12-13T00:06:46.615831734+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.615801Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_ENABLE_PPROF","variable-value":"true"} 2025-12-13T00:06:46.615831734+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.615816Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_MAX_LEARNERS","variable-value":"1"} 2025-12-13T00:06:46.615873205+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.615824Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_WARNING_APPLY_DURATION","variable-value":"200ms"} 2025-12-13T00:06:46.615873205+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.615833Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_EXPERIMENTAL_WATCH_PROGRESS_NOTIFY_INTERVAL","variable-value":"5s"} 2025-12-13T00:06:46.615873205+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.615846Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_HEARTBEAT_INTERVAL","variable-value":"100"} 2025-12-13T00:06:46.615873205+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.615861Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_INITIAL_CLUSTER_STATE","variable-value":"existing"} 
2025-12-13T00:06:46.615908906+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.615878Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_NAME","variable-value":"crc"} 2025-12-13T00:06:46.615908906+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.615899Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_QUOTA_BACKEND_BYTES","variable-value":"8589934592"} 2025-12-13T00:06:46.615916976+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.615907Z","caller":"flags/flag.go:113","msg":"recognized and used environment variable","variable-name":"ETCD_SOCKET_REUSE_ADDRESS","variable-value":"true"} 2025-12-13T00:06:46.615963287+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:46.615928Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3"} 2025-12-13T00:06:46.615963287+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:46.615942Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_STATIC_POD_VERSION=3"} 2025-12-13T00:06:46.615963287+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:46.615956Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_INITIAL_CLUSTER="} 2025-12-13T00:06:46.616057040+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:46.616003Z","caller":"embed/config.go:683","msg":"Running http and grpc server on single port. 
This is not recommended for production."} 2025-12-13T00:06:46.616057040+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.61604Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--logger=zap","--log-level=info","--experimental-initial-corrupt-check=true","--snapshot-count=10000","--initial-advertise-peer-urls=https://192.168.126.11:2380","--cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt","--key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key","--trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt","--client-cert-auth=true","--peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt","--peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key","--peer-trusted-ca-file=/etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt","--peer-client-cert-auth=true","--advertise-client-urls=https://192.168.126.11:2379","--listen-client-urls=https://0.0.0.0:2379,unixs://192.168.126.11:0","--listen-peer-urls=https://0.0.0.0:2380","--metrics=extensive","--listen-metrics-urls=https://0.0.0.0:9978"]} 2025-12-13T00:06:46.616162564+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.616119Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"} 2025-12-13T00:06:46.616162564+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:46.616142Z","caller":"embed/config.go:683","msg":"Running http and grpc server on single port. 
This is not recommended for production."} 2025-12-13T00:06:46.616175504+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.616159Z","caller":"embed/etcd.go:121","msg":"configuring socket options","reuse-address":true,"reuse-port":false} 2025-12-13T00:06:46.616175504+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.616167Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://0.0.0.0:2380"]} 2025-12-13T00:06:46.616223776+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.616196Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = true, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]} 2025-12-13T00:06:46.617142831+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.617095Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"]} 2025-12-13T00:06:46.617142831+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.617116Z","caller":"embed/etcd.go:620","msg":"pprof is enabled","path":"/debug/pprof"} 2025-12-13T00:06:46.617869931+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.617649Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.13","git-sha":"GitNotFound","go-version":"go1.21.9 (Red Hat 1.21.9-1.el9_4) 
X:strictfipsruntime","go-os":"linux","go-arch":"amd64","max-cpu-set":12,"max-cpu-available":12,"member-initialized":true,"name":"crc","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.126.11:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://192.168.126.11:2379"],"listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"],"listen-metrics-urls":["https://0.0.0.0:9978"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"existing","initial-cluster-token":"","quota-backend-bytes":8589934592,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s","max-learners":1} 2025-12-13T00:06:46.618963233+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:46.618166Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd\" exist, but the permission is \"drwxr-xr-x\". The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2025-12-13T00:06:46.619606041+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:46.619531Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd/member/snap\" exist, but the permission is \"drwxr-xr-x\". 
The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"} 2025-12-13T00:06:46.703105820+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:46.70298Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"82.83923ms"} 2025-12-13T00:06:47.146152933+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.14601Z","caller":"etcdserver/server.go:514","msg":"recovered v2 store from snapshot","snapshot-index":40004,"snapshot-size":"8.9 kB"} 2025-12-13T00:06:47.146152933+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.146099Z","caller":"etcdserver/server.go:527","msg":"recovered v3 backend from snapshot","backend-size-bytes":74416128,"backend-size":"74 MB","backend-size-in-use-bytes":42295296,"backend-size-in-use":"42 MB"} 2025-12-13T00:06:47.196741396+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.19662Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"37a6ceb54a88a89a","local-member-id":"d44fc94b15474c4c","commit-index":41289} 2025-12-13T00:06:47.196860910+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.196819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c switched to configuration voters=(15298667783517588556)"} 2025-12-13T00:06:47.196870080+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.196858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became follower at term 7"} 2025-12-13T00:06:47.196899241+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.196872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d44fc94b15474c4c [peers: [d44fc94b15474c4c], term: 7, commit: 41289, applied: 40004, lastindex: 41289, lastterm: 7]"} 2025-12-13T00:06:47.197029744+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.19701Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} 
2025-12-13T00:06:47.197037894+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.197023Z","caller":"membership/cluster.go:280","msg":"recovered/added member from store","cluster-id":"37a6ceb54a88a89a","local-member-id":"d44fc94b15474c4c","recovered-remote-peer-id":"d44fc94b15474c4c","recovered-remote-peer-urls":["https://192.168.126.11:2380"],"recovered-remote-peer-is-learner":false}
2025-12-13T00:06:47.197045145+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.197033Z","caller":"membership/cluster.go:290","msg":"set cluster version from store","cluster-version":"3.5"}
2025-12-13T00:06:47.197125997+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:47.19709Z","caller":"fileutil/fileutil.go:53","msg":"check file permission","error":"directory \"/var/lib/etcd/member\" exist, but the permission is \"drwxr-xr-x\". The recommended permission is \"-rwx------\" to prevent possible unprivileged access to the data"}
2025-12-13T00:06:47.199555885+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:47.199497Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
2025-12-13T00:06:47.200092460+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.20004Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":36405}
2025-12-13T00:06:47.236363251+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.236205Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":37545}
2025-12-13T00:06:47.239955992+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.239874Z","caller":"etcdserver/quota.go:117","msg":"enabled backend quota","quota-name":"v3-applier","quota-size-bytes":8589934592,"quota-size":"8.6 GB"}
2025-12-13T00:06:47.246986950+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.246855Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"d44fc94b15474c4c","timeout":"7s"}
2025-12-13T00:06:47.258737550+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.258625Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"d44fc94b15474c4c"}
2025-12-13T00:06:47.258807872+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.258733Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"d44fc94b15474c4c","local-server-version":"3.5.13","cluster-id":"37a6ceb54a88a89a","cluster-version":"3.5"}
2025-12-13T00:06:47.259066309+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.258907Z","caller":"etcdserver/server.go:760","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"d44fc94b15474c4c","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
2025-12-13T00:06:47.259145312+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.259063Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
2025-12-13T00:06:47.259145312+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.25913Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
2025-12-13T00:06:47.259160662+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.259141Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
2025-12-13T00:06:47.262990689+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.262909Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]}
2025-12-13T00:06:47.263098772+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.26301Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"[::]:2380"}
2025-12-13T00:06:47.263098772+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.263062Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"[::]:2380"}
2025-12-13T00:06:47.264507492+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.264419Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d44fc94b15474c4c","initial-advertise-peer-urls":["https://192.168.126.11:2380"],"listen-peer-urls":["https://0.0.0.0:2380"],"advertise-client-urls":["https://192.168.126.11:2379"],"listen-client-urls":["https://0.0.0.0:2379","unixs://192.168.126.11:0"],"listen-metrics-urls":["https://0.0.0.0:9978"]}
2025-12-13T00:06:47.264507492+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:47.264466Z","caller":"embed/etcd.go:859","msg":"serving metrics","address":"https://0.0.0.0:9978"}
2025-12-13T00:06:48.097764863+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.097612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c is starting a new election at term 7"}
2025-12-13T00:06:48.097764863+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.097704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became pre-candidate at term 7"}
2025-12-13T00:06:48.097863826+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.097767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c received MsgPreVoteResp from d44fc94b15474c4c at term 7"}
2025-12-13T00:06:48.097863826+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.097787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became candidate at term 8"}
2025-12-13T00:06:48.097863826+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.097794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c received MsgVoteResp from d44fc94b15474c4c at term 8"}
2025-12-13T00:06:48.097863826+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.097806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d44fc94b15474c4c became leader at term 8"}
2025-12-13T00:06:48.097863826+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.097815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d44fc94b15474c4c elected leader d44fc94b15474c4c at term 8"}
2025-12-13T00:06:48.226829823+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.226695Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
2025-12-13T00:06:48.226829823+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.226784Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
2025-12-13T00:06:48.227023210+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.226947Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
2025-12-13T00:06:48.227032310+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.22702Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
2025-12-13T00:06:48.227407320+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.226739Z","caller":"etcdserver/server.go:2119","msg":"published local member to cluster through raft","local-member-id":"d44fc94b15474c4c","local-member-attributes":"{Name:crc ClientURLs:[https://192.168.126.11:2379]}","request-path":"/0/members/d44fc94b15474c4c/attributes","cluster-id":"37a6ceb54a88a89a","publish-timeout":"7s"}
2025-12-13T00:06:48.229702074+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.229646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"[::]:2379"}
2025-12-13T00:06:48.230858267+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:48.230677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.126.11:0"}
2025-12-13T00:06:50.877984835+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:50.877812Z","caller":"traceutil/trace.go:171","msg":"trace[1639790704] transaction","detail":"{read_only:false; response_revision:37553; number_of_response:1; }","duration":"257.381201ms","start":"2025-12-13T00:06:50.620408Z","end":"2025-12-13T00:06:50.877789Z","steps":["trace[1639790704] 'process raft request' (duration: 62.437656ms)","trace[1639790704] 'compare' (duration: 194.825411ms)"],"step_count":2}
2025-12-13T00:06:51.413456099+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:51.41328Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"389.219609ms","expected-duration":"200ms","prefix":"","request":"header: lease_grant:","response":"size:41"}
2025-12-13T00:06:51.413456099+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:51.413375Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:06:50.879867Z","time spent":"533.504669ms","remote":"[::1]:52054","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
2025-12-13T00:06:57.638691684+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:57.638534Z","caller":"traceutil/trace.go:171","msg":"trace[556512222] transaction","detail":"{read_only:false; response_revision:37559; number_of_response:1; }","duration":"191.595089ms","start":"2025-12-13T00:06:57.446925Z","end":"2025-12-13T00:06:57.63852Z","steps":["trace[556512222] 'process raft request' (duration: 191.447785ms)"],"step_count":1}
2025-12-13T00:07:11.381211703+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:11.381084Z","caller":"etcdserver/v3_server.go:908","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5497939759744065155,"retry-timeout":"500ms"}
2025-12-13T00:07:11.881973880+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:11.881849Z","caller":"etcdserver/v3_server.go:908","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5497939759744065155,"retry-timeout":"500ms"}
2025-12-13T00:07:11.917237082+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:11.917103Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"2.967323215s","expected-duration":"1s"}
2025-12-13T00:07:11.918113606+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:11.917994Z","caller":"traceutil/trace.go:171","msg":"trace[721158760] transaction","detail":"{read_only:false; response_revision:37582; number_of_response:1; }","duration":"2.968316853s","start":"2025-12-13T00:07:08.949647Z","end":"2025-12-13T00:07:11.917964Z","steps":["trace[721158760] 'process raft request' (duration: 2.967966693s)"],"step_count":1}
2025-12-13T00:07:11.918947930+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:11.91888Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:08.949626Z","time spent":"2.96857611s","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4038,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:12.442607042+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:12.442485Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.686017ms","expected-duration":"200ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:18"}
2025-12-13T00:07:12.442744436+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:12.442697Z","caller":"traceutil/trace.go:171","msg":"trace[621510609] transaction","detail":"{read_only:false; response_revision:37583; number_of_response:1; }","duration":"3.174448092s","start":"2025-12-13T00:07:09.26823Z","end":"2025-12-13T00:07:12.442678Z","steps":["trace[621510609] 'process raft request' (duration: 2.786423316s)","trace[621510609] 'compare' (duration: 387.510491ms)"],"step_count":2}
2025-12-13T00:07:12.442850978+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:12.44282Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:09.268205Z","time spent":"3.174586076s","remote":"192.168.126.11:38408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5131,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:12.443043144+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:12.44297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.563007689s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/masterleases/192.168.126.11\" ","response":"range_response_count:1 size:144"}
2025-12-13T00:07:12.443056744+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:12.443022Z","caller":"traceutil/trace.go:171","msg":"trace[507583711] range","detail":"{range_begin:/kubernetes.io/masterleases/192.168.126.11; range_end:; response_count:1; response_revision:37586; }","duration":"1.563066971s","start":"2025-12-13T00:07:10.879943Z","end":"2025-12-13T00:07:12.44301Z","steps":["trace[507583711] 'agreement among raft nodes before linearized reading' (duration: 1.562896886s)"],"step_count":1}
2025-12-13T00:07:12.443088695+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:12.442809Z","caller":"traceutil/trace.go:171","msg":"trace[1568943674] linearizableReadLoop","detail":"{readStateIndex:41339; appliedIndex:41334; }","duration":"1.562812035s","start":"2025-12-13T00:07:10.879976Z","end":"2025-12-13T00:07:12.442788Z","steps":["trace[1568943674] 'read index received' (duration: 1.03762805s)","trace[1568943674] 'applied index is now lower than readState.Index' (duration: 525.183065ms)"],"step_count":2}
2025-12-13T00:07:12.443196808+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:12.443064Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:10.879762Z","time spent":"1.563292348s","remote":"192.168.126.11:37846","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":167,"request content":"key:\"/kubernetes.io/masterleases/192.168.126.11\" "}
2025-12-13T00:07:12.443374183+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:12.443344Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"904.553967ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/health\" ","response":"range_response_count:0 size:6"}
2025-12-13T00:07:12.443651631+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:12.443623Z","caller":"traceutil/trace.go:171","msg":"trace[1356508025] range","detail":"{range_begin:/kubernetes.io/health; range_end:; response_count:0; response_revision:37586; }","duration":"904.833225ms","start":"2025-12-13T00:07:11.538783Z","end":"2025-12-13T00:07:12.443616Z","steps":["trace[1356508025] 'agreement among raft nodes before linearized reading' (duration: 904.528606ms)"],"step_count":1}
2025-12-13T00:07:12.443750723+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:12.443653Z","caller":"traceutil/trace.go:171","msg":"trace[108100212] transaction","detail":"{read_only:false; response_revision:37586; number_of_response:1; }","duration":"2.288750316s","start":"2025-12-13T00:07:10.154863Z","end":"2025-12-13T00:07:12.443614Z","steps":["trace[108100212] 'process raft request' (duration: 2.287870821s)"],"step_count":1}
2025-12-13T00:07:12.443877277+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:12.443444Z","caller":"traceutil/trace.go:171","msg":"trace[328360296] transaction","detail":"{read_only:false; response_revision:37584; number_of_response:1; }","duration":"3.16939713s","start":"2025-12-13T00:07:09.274038Z","end":"2025-12-13T00:07:12.443435Z","steps":["trace[328360296] 'process raft request' (duration: 3.168570697s)"],"step_count":1}
2025-12-13T00:07:12.443965089+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:12.443936Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:09.274008Z","time spent":"3.169902374s","remote":"[::1]:52520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5124,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:12.444020631+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:12.443471Z","caller":"traceutil/trace.go:171","msg":"trace[1121197422] transaction","detail":"{read_only:false; response_revision:37585; number_of_response:1; }","duration":"3.165216343s","start":"2025-12-13T00:07:09.278247Z","end":"2025-12-13T00:07:12.443464Z","steps":["trace[1121197422] 'process raft request' (duration: 3.164436911s)"],"step_count":1}
2025-12-13T00:07:12.444156255+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:12.444094Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:09.278216Z","time spent":"3.165820229s","remote":"192.168.126.11:38408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2765,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:12.444261918+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:12.443851Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:10.154836Z","time spent":"2.288922901s","remote":"192.168.126.11:38124","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":672,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:12.444318009+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:12.443723Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:11.53874Z","time spent":"904.973868ms","remote":"192.168.126.11:37802","response type":"/etcdserverpb.KV/Range","request count":0,"request size":23,"response count":0,"response size":29,"request content":"key:\"/kubernetes.io/health\" "}
2025-12-13T00:07:12.846733020+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:12.846619Z","caller":"traceutil/trace.go:171","msg":"trace[2134279464] transaction","detail":"{read_only:false; response_revision:37587; number_of_response:1; }","duration":"308.504258ms","start":"2025-12-13T00:07:12.53808Z","end":"2025-12-13T00:07:12.846585Z","steps":["trace[2134279464] 'process raft request' (duration: 240.349291ms)","trace[2134279464] 'compare' (duration: 68.004223ms)"],"step_count":2}
2025-12-13T00:07:12.846935856+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:12.846905Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:12.538059Z","time spent":"308.741285ms","remote":"192.168.126.11:37846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":125,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:14.476473077+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:14.476267Z","caller":"traceutil/trace.go:171","msg":"trace[2029240010] transaction","detail":"{read_only:false; response_revision:37588; number_of_response:1; }","duration":"195.704386ms","start":"2025-12-13T00:07:14.280544Z","end":"2025-12-13T00:07:14.476249Z","steps":["trace[2029240010] 'process raft request' (duration: 195.581953ms)"],"step_count":1}
2025-12-13T00:07:14.478370640+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:14.478198Z","caller":"traceutil/trace.go:171","msg":"trace[1238414433] transaction","detail":"{read_only:false; response_revision:37589; number_of_response:1; }","duration":"197.293539ms","start":"2025-12-13T00:07:14.280875Z","end":"2025-12-13T00:07:14.478169Z","steps":["trace[1238414433] 'process raft request' (duration: 197.120125ms)"],"step_count":1}
2025-12-13T00:07:14.806691467+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:14.806535Z","caller":"traceutil/trace.go:171","msg":"trace[676616763] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/apirequestcounts.v1.apiserver.openshift.io; range_end:; response_count:1; response_revision:37589; }","duration":"130.706187ms","start":"2025-12-13T00:07:14.675795Z","end":"2025-12-13T00:07:14.806501Z","steps":["trace[676616763] 'range keys from in-memory index tree' (duration: 130.48323ms)"],"step_count":1}
2025-12-13T00:07:15.265171604+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:15.265024Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.721939ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/flowschemas.v1.flowcontrol.apiserver.k8s.io\" ","response":"range_response_count:1 size:6809"}
2025-12-13T00:07:15.265171604+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:15.265101Z","caller":"traceutil/trace.go:171","msg":"trace[1040757859] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/flowschemas.v1.flowcontrol.apiserver.k8s.io; range_end:; response_count:1; response_revision:37590; }","duration":"236.810302ms","start":"2025-12-13T00:07:15.028271Z","end":"2025-12-13T00:07:15.265082Z","steps":["trace[1040757859] 'range keys from in-memory index tree' (duration: 236.547435ms)"],"step_count":1}
2025-12-13T00:07:15.429090296+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:15.428958Z","caller":"traceutil/trace.go:171","msg":"trace[1962304372] transaction","detail":"{read_only:false; response_revision:37591; number_of_response:1; }","duration":"150.37383ms","start":"2025-12-13T00:07:15.27855Z","end":"2025-12-13T00:07:15.428924Z","steps":["trace[1962304372] 'process raft request' (duration: 150.123223ms)"],"step_count":1}
2025-12-13T00:07:18.245725812+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:18.245616Z","caller":"etcdserver/v3_server.go:908","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5497939759744065186,"retry-timeout":"500ms"}
2025-12-13T00:07:18.745921144+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:18.74575Z","caller":"etcdserver/v3_server.go:908","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5497939759744065186,"retry-timeout":"500ms"}
2025-12-13T00:07:19.054047532+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:19.05378Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.60351181s","expected-duration":"1s"}
2025-12-13T00:07:19.054431482+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:19.054304Z","caller":"traceutil/trace.go:171","msg":"trace[463303666] transaction","detail":"{read_only:false; response_revision:37594; number_of_response:1; }","duration":"1.604161138s","start":"2025-12-13T00:07:17.450114Z","end":"2025-12-13T00:07:19.054275Z","steps":["trace[463303666] 'process raft request' (duration: 1.603970683s)"],"step_count":1}
2025-12-13T00:07:19.054649908+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:19.054552Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:17.450082Z","time spent":"1.604300812s","remote":"[::1]:52520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5067,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:19.060028770+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:19.059866Z","caller":"traceutil/trace.go:171","msg":"trace[1244558075] linearizableReadLoop","detail":"{readStateIndex:41351; appliedIndex:41348; }","duration":"1.314671055s","start":"2025-12-13T00:07:17.745168Z","end":"2025-12-13T00:07:19.059839Z","steps":["trace[1244558075] 'read index received' (duration: 1.309048716s)","trace[1244558075] 'applied index is now lower than readState.Index' (duration: 5.620678ms)"],"step_count":2}
2025-12-13T00:07:19.060061661+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:19.059971Z","caller":"traceutil/trace.go:171","msg":"trace[1972068813] transaction","detail":"{read_only:false; response_revision:37595; number_of_response:1; }","duration":"1.608787487s","start":"2025-12-13T00:07:17.45115Z","end":"2025-12-13T00:07:19.059937Z","steps":["trace[1972068813] 'process raft request' (duration: 1.608454338s)"],"step_count":1}
2025-12-13T00:07:19.060874723+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:19.06015Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:17.451109Z","time spent":"1.608946382s","remote":"192.168.126.11:38408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5096,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:19.060874723+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:19.060171Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.314978833s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/catalogsources.v1alpha1.operators.coreos.com\" ","response":"range_response_count:1 size:8030"}
2025-12-13T00:07:19.060874723+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:19.060222Z","caller":"traceutil/trace.go:171","msg":"trace[1972295032] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/catalogsources.v1alpha1.operators.coreos.com; range_end:; response_count:1; response_revision:37596; }","duration":"1.315047915s","start":"2025-12-13T00:07:17.745161Z","end":"2025-12-13T00:07:19.060209Z","steps":["trace[1972295032] 'agreement among raft nodes before linearized reading' (duration: 1.314854929s)"],"step_count":1}
2025-12-13T00:07:19.060874723+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:19.060246Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"546.186755ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/flowschemas.v1beta3.flowcontrol.apiserver.k8s.io\" ","response":"range_response_count:1 size:3410"}
2025-12-13T00:07:19.060874723+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:19.06027Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:17.745081Z","time spent":"1.315174738s","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":101,"response count":1,"response size":8053,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/catalogsources.v1alpha1.operators.coreos.com\" "}
2025-12-13T00:07:19.060874723+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:19.06035Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"306.654517ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/validatingwebhookconfigurations.v1.admissionregistration.k8s.io\" ","response":"range_response_count:1 size:10141"}
2025-12-13T00:07:19.060874723+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:19.060305Z","caller":"traceutil/trace.go:171","msg":"trace[1929320525] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/flowschemas.v1beta3.flowcontrol.apiserver.k8s.io; range_end:; response_count:1; response_revision:37596; }","duration":"546.249447ms","start":"2025-12-13T00:07:18.514041Z","end":"2025-12-13T00:07:19.06029Z","steps":["trace[1929320525] 'agreement among raft nodes before linearized reading' (duration: 546.040201ms)"],"step_count":1}
2025-12-13T00:07:19.060874723+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:19.060457Z","caller":"traceutil/trace.go:171","msg":"trace[962775366] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/validatingwebhookconfigurations.v1.admissionregistration.k8s.io; range_end:; response_count:1; response_revision:37596; }","duration":"306.785401ms","start":"2025-12-13T00:07:18.753651Z","end":"2025-12-13T00:07:19.060436Z","steps":["trace[962775366] 'agreement among raft nodes before linearized reading' (duration: 306.483253ms)"],"step_count":1}
2025-12-13T00:07:19.060874723+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:19.060499Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:18.513979Z","time spent":"546.502803ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":105,"response count":1,"response size":3433,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/flowschemas.v1beta3.flowcontrol.apiserver.k8s.io\" "}
2025-12-13T00:07:19.060874723+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:19.060513Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:18.753615Z","time spent":"306.884734ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":120,"response count":1,"response size":10164,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/validatingwebhookconfigurations.v1.admissionregistration.k8s.io\" "}
2025-12-13T00:07:19.061165601+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:19.060022Z","caller":"traceutil/trace.go:171","msg":"trace[544140568] transaction","detail":"{read_only:false; response_revision:37596; number_of_response:1; }","duration":"1.608653714s","start":"2025-12-13T00:07:17.451349Z","end":"2025-12-13T00:07:19.060003Z","steps":["trace[544140568] 'process raft request' (duration: 1.608432588s)"],"step_count":1}
2025-12-13T00:07:19.061308725+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:19.061207Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:17.451336Z","time spent":"1.609814836s","remote":"[::1]:52520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5040,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:20.045197724+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:20.04504Z","caller":"traceutil/trace.go:171","msg":"trace[958600907] transaction","detail":"{read_only:false; response_revision:37601; number_of_response:1; }","duration":"255.842477ms","start":"2025-12-13T00:07:19.789176Z","end":"2025-12-13T00:07:20.045018Z","steps":["trace[958600907] 'process raft request' (duration: 255.690113ms)"],"step_count":1}
2025-12-13T00:07:20.597936824+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:20.597818Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:20.291458Z","time spent":"306.354409ms","remote":"[::1]:50814","response type":"/etcdserverpb.Maintenance/Status","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
2025-12-13T00:07:21.761682591+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:21.761062Z","caller":"traceutil/trace.go:171","msg":"trace[953226761] linearizableReadLoop","detail":"{readStateIndex:41357; appliedIndex:41356; }","duration":"221.963054ms","start":"2025-12-13T00:07:21.539076Z","end":"2025-12-13T00:07:21.761039Z","steps":["trace[953226761] 'read index received' (duration: 221.79237ms)","trace[953226761] 'applied index is now lower than readState.Index' (duration: 169.254µs)"],"step_count":2}
2025-12-13T00:07:21.761682591+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:21.761145Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:20.881589Z","time spent":"879.550053ms","remote":"[::1]:52054","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
2025-12-13T00:07:21.761682591+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:21.761187Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.092328ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/health\" ","response":"range_response_count:0 size:6"}
2025-12-13T00:07:21.761682591+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:21.761266Z","caller":"traceutil/trace.go:171","msg":"trace[142977058] range","detail":"{range_begin:/kubernetes.io/health; range_end:; response_count:0; response_revision:37601; }","duration":"222.18207ms","start":"2025-12-13T00:07:21.539067Z","end":"2025-12-13T00:07:21.761249Z","steps":["trace[142977058] 'agreement among raft nodes before linearized reading' (duration: 222.067027ms)"],"step_count":1}
2025-12-13T00:07:22.408808486+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:22.408656Z","caller":"traceutil/trace.go:171","msg":"trace[320713913] transaction","detail":"{read_only:false; response_revision:37603; number_of_response:1; }","duration":"174.959092ms","start":"2025-12-13T00:07:22.233675Z","end":"2025-12-13T00:07:22.408634Z","steps":["trace[320713913] 'process raft request' (duration: 174.817578ms)"],"step_count":1}
2025-12-13T00:07:26.224194398+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:26.224091Z","caller":"traceutil/trace.go:171","msg":"trace[1062867050] transaction","detail":"{read_only:false; response_revision:37611; number_of_response:1; }","duration":"206.939842ms","start":"2025-12-13T00:07:26.017133Z","end":"2025-12-13T00:07:26.224073Z","steps":["trace[1062867050] 'process raft request' (duration: 206.825419ms)"],"step_count":1}
2025-12-13T00:07:28.153458852+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:28.153164Z","caller":"traceutil/trace.go:171","msg":"trace[1809344867] linearizableReadLoop","detail":"{readStateIndex:41369; appliedIndex:41368; }","duration":"190.89684ms","start":"2025-12-13T00:07:27.962237Z","end":"2025-12-13T00:07:28.153134Z","steps":["trace[1809344867] 'read index received' (duration: 190.693314ms)","trace[1809344867] 'applied index is now lower than readState.Index' (duration: 202.096µs)"],"step_count":2}
2025-12-13T00:07:28.154152321+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:28.153976Z","caller":"traceutil/trace.go:171","msg":"trace[2055667951] transaction","detail":"{read_only:false; response_revision:37613; number_of_response:1; }","duration":"390.537707ms","start":"2025-12-13T00:07:27.762727Z","end":"2025-12-13T00:07:28.153264Z","steps":["trace[2055667951] 'process raft request' (duration: 390.204497ms)"],"step_count":1}
2025-12-13T00:07:28.154984624+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:28.154863Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:27.7627Z","time spent":"391.414321ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3285,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:35.422761589+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:35.422579Z","caller":"etcdserver/v3_server.go:908","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5497939759744065257,"retry-timeout":"500ms"}
2025-12-13T00:07:35.923257678+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:35.923153Z","caller":"etcdserver/v3_server.go:908","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5497939759744065257,"retry-timeout":"500ms"}
2025-12-13T00:07:36.143219936+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:36.143057Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"2.05428388s","expected-duration":"1s"}
2025-12-13T00:07:36.143900655+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:36.143466Z","caller":"traceutil/trace.go:171","msg":"trace[144797986] transaction","detail":"{read_only:false; response_revision:37622; number_of_response:1; }","duration":"2.054856256s","start":"2025-12-13T00:07:34.088591Z","end":"2025-12-13T00:07:36.143448Z","steps":["trace[144797986] 'process raft request' (duration: 2.054697681s)"],"step_count":1}
2025-12-13T00:07:36.143900655+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:36.143572Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:34.088555Z","time spent":"2.054948008s","remote":"192.168.126.11:38408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5092,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:36.579627743+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:36.57795Z","caller":"traceutil/trace.go:171","msg":"trace[711876486] transaction","detail":"{read_only:false; response_revision:37623; number_of_response:1; }","duration":"2.441682727s","start":"2025-12-13T00:07:34.136249Z","end":"2025-12-13T00:07:36.577931Z","steps":["trace[711876486] 'process raft request' (duration: 2.441462261s)"],"step_count":1}
2025-12-13T00:07:36.579627743+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:36.578072Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:34.136214Z","time spent":"2.441802971s","remote":"[::1]:52520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5052,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:36.579627743+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:36.578144Z","caller":"traceutil/trace.go:171","msg":"trace[689193116] transaction","detail":"{read_only:false; response_revision:37624; number_of_response:1; }","duration":"2.395132549s","start":"2025-12-13T00:07:34.183001Z","end":"2025-12-13T00:07:36.578134Z","steps":["trace[689193116] 'process raft request' (duration: 2.394848181s)"],"step_count":1}
2025-12-13T00:07:36.579627743+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:36.578201Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:34.182975Z","time spent":"2.395192991s","remote":"192.168.126.11:38408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5101,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:36.579627743+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:36.578412Z","caller":"traceutil/trace.go:171","msg":"trace[256756933] transaction","detail":"{read_only:false; response_revision:37625; number_of_response:1; }","duration":"2.07456414s","start":"2025-12-13T00:07:34.503819Z","end":"2025-12-13T00:07:36.578383Z","steps":["trace[256756933] 'process raft request' (duration: 2.074271612s)"],"step_count":1}
2025-12-13T00:07:36.579627743+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:36.578374Z","caller":"traceutil/trace.go:171","msg":"trace[1001049706] linearizableReadLoop","detail":"{readStateIndex:41384; appliedIndex:41379; }","duration":"1.655880372s","start":"2025-12-13T00:07:34.92248Z","end":"2025-12-13T00:07:36.57836Z","steps":["trace[1001049706] 'read index received' (duration: 1.220952057s)","trace[1001049706] 'applied index is now lower than readState.Index' (duration: 434.927595ms)"],"step_count":2}
2025-12-13T00:07:36.579627743+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:36.578456Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:34.503767Z","time spent":"2.074666133s","remote":"[::1]:52520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5131,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:36.579627743+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:36.578487Z","caller":"traceutil/trace.go:171","msg":"trace[1275315951] transaction","detail":"{read_only:false; response_revision:37626; number_of_response:1; }","duration":"2.021211909s","start":"2025-12-13T00:07:34.557268Z","end":"2025-12-13T00:07:36.578479Z","steps":["trace[1275315951] 'process raft request' (duration: 2.021020344s)"],"step_count":1}
2025-12-13T00:07:36.579627743+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:36.578568Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:34.557243Z","time spent":"2.021267691s","remote":"192.168.126.11:38408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5131,"response count":0,"response size":41,"request content":"compare: success:> failure: >"}
2025-12-13T00:07:36.579627743+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:36.578581Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.656106089s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/overlappingrangeipreservations.v1alpha1.whereabouts.cni.cncf.io\" ","response":"range_response_count:1 size:4456"}
2025-12-13T00:07:36.579627743+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:36.578605Z","caller":"traceutil/trace.go:171","msg":"trace[1688944207] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/overlappingrangeipreservations.v1alpha1.whereabouts.cni.cncf.io; range_end:; response_count:1; response_revision:37626; }","duration":"1.65613051s","start":"2025-12-13T00:07:34.922467Z","end":"2025-12-13T00:07:36.578597Z","steps":["trace[1688944207] 'agreement among raft nodes before linearized reading' (duration: 1.656046847s)"],"step_count":1} 2025-12-13T00:07:36.579627743+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:36.578624Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:34.922421Z","time spent":"1.656198491s","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":120,"response count":1,"response size":4479,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/overlappingrangeipreservations.v1alpha1.whereabouts.cni.cncf.io\" "} 2025-12-13T00:07:38.257505872+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:38.257228Z","caller":"traceutil/trace.go:171","msg":"trace[2112852994] transaction","detail":"{read_only:false; response_revision:37628; number_of_response:1; }","duration":"354.945933ms","start":"2025-12-13T00:07:37.902267Z","end":"2025-12-13T00:07:38.257212Z","steps":["trace[2112852994] 'process raft request' (duration: 354.823814ms)"],"step_count":1} 2025-12-13T00:07:38.257505872+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:38.257339Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:37.902248Z","time spent":"355.036927ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5448,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 
2025-12-13T00:07:45.436678730+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:45.4358Z","caller":"traceutil/trace.go:171","msg":"trace[2040551951] transaction","detail":"{read_only:false; response_revision:37637; number_of_response:1; }","duration":"151.83327ms","start":"2025-12-13T00:07:45.28394Z","end":"2025-12-13T00:07:45.435774Z","steps":["trace[2040551951] 'process raft request' (duration: 151.597963ms)"],"step_count":1} 2025-12-13T00:07:48.401335898+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:48.401187Z","caller":"traceutil/trace.go:171","msg":"trace[1342488025] transaction","detail":"{read_only:false; response_revision:37641; number_of_response:1; }","duration":"194.55851ms","start":"2025-12-13T00:07:48.206609Z","end":"2025-12-13T00:07:48.401168Z","steps":["trace[1342488025] 'process raft request' (duration: 194.416778ms)"],"step_count":1} 2025-12-13T00:07:50.725782488+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:50.725668Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:50.289803Z","time spent":"435.857912ms","remote":"[::1]:35710","response type":"/etcdserverpb.Maintenance/Status","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""} 2025-12-13T00:07:50.986951894+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:50.986824Z","caller":"traceutil/trace.go:171","msg":"trace[220478534] linearizableReadLoop","detail":"{readStateIndex:41405; appliedIndex:41404; }","duration":"436.612329ms","start":"2025-12-13T00:07:50.550194Z","end":"2025-12-13T00:07:50.986806Z","steps":["trace[220478534] 'read index received' (duration: 436.470357ms)","trace[220478534] 'applied index is now lower than readState.Index' (duration: 141.152µs)"],"step_count":2} 2025-12-13T00:07:50.987006083+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:50.986938Z","caller":"traceutil/trace.go:171","msg":"trace[1490614113] transaction","detail":"{read_only:false; 
response_revision:37644; number_of_response:1; }","duration":"704.94583ms","start":"2025-12-13T00:07:50.281982Z","end":"2025-12-13T00:07:50.986928Z","steps":["trace[1490614113] 'process raft request' (duration: 704.699192ms)"],"step_count":1} 2025-12-13T00:07:50.987054440+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:50.987026Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:50.281964Z","time spent":"704.999698ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4035,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:07:50.987274245+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:50.987188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"381.787742ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/consoleclidownloads.v1.console.openshift.io\" ","response":"range_response_count:1 size:4867"} 2025-12-13T00:07:50.987274245+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:50.987252Z","caller":"traceutil/trace.go:171","msg":"trace[2091603013] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/consoleclidownloads.v1.console.openshift.io; range_end:; response_count:1; response_revision:37644; }","duration":"381.859463ms","start":"2025-12-13T00:07:50.605379Z","end":"2025-12-13T00:07:50.987238Z","steps":["trace[2091603013] 'agreement among raft nodes before linearized reading' (duration: 381.70995ms)"],"step_count":1} 2025-12-13T00:07:50.987739327+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:50.987283Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:50.605332Z","time spent":"381.944856ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":100,"response count":1,"response 
size":4890,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/consoleclidownloads.v1.console.openshift.io\" "} 2025-12-13T00:07:50.987739327+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:50.98734Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"437.145773ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/kubernetes.io/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true ","response":"range_response_count:0 size:8"} 2025-12-13T00:07:50.987739327+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:50.987365Z","caller":"traceutil/trace.go:171","msg":"trace[1208044493] range","detail":"{range_begin:/kubernetes.io/apiextensions.k8s.io/customresourcedefinitions/; range_end:/kubernetes.io/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:37644; }","duration":"437.173097ms","start":"2025-12-13T00:07:50.550183Z","end":"2025-12-13T00:07:50.987356Z","steps":["trace[1208044493] 'agreement among raft nodes before linearized reading' (duration: 437.092515ms)"],"step_count":1} 2025-12-13T00:07:50.987739327+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:50.987383Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:50.550136Z","time spent":"437.242819ms","remote":"[::1]:52060","response type":"/etcdserverpb.KV/Range","request count":0,"request size":130,"response count":107,"response size":31,"request content":"key:\"/kubernetes.io/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/kubernetes.io/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true "} 2025-12-13T00:07:50.987739327+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:50.987507Z","caller":"traceutil/trace.go:171","msg":"trace[387261561] range","detail":"{range_begin:/kubernetes.io/masterleases/192.168.126.11; range_end:; 
response_count:1; response_revision:37644; }","duration":"104.70874ms","start":"2025-12-13T00:07:50.882776Z","end":"2025-12-13T00:07:50.987485Z","steps":["trace[387261561] 'agreement among raft nodes before linearized reading' (duration: 104.566567ms)"],"step_count":1} 2025-12-13T00:07:51.631590666+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:51.631487Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"524.694123ms","expected-duration":"200ms","prefix":"","request":"header: lease_grant:","response":"size:41"} 2025-12-13T00:07:51.631590666+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:51.631557Z","caller":"traceutil/trace.go:171","msg":"trace[711781516] linearizableReadLoop","detail":"{readStateIndex:41406; appliedIndex:41405; }","duration":"641.027456ms","start":"2025-12-13T00:07:50.990519Z","end":"2025-12-13T00:07:51.631546Z","steps":["trace[711781516] 'read index received' (duration: 116.199753ms)","trace[711781516] 'applied index is now lower than readState.Index' (duration: 524.826543ms)"],"step_count":2} 2025-12-13T00:07:51.631643055+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:51.631574Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:50.988379Z","time spent":"643.190035ms","remote":"[::1]:52054","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""} 2025-12-13T00:07:51.631913307+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:51.63168Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"641.154466ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/consoleclidownloads.v1.console.openshift.io\" ","response":"range_response_count:1 size:4867"} 2025-12-13T00:07:51.631913307+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:07:51.631705Z","caller":"traceutil/trace.go:171","msg":"trace[685985830] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/consoleclidownloads.v1.console.openshift.io; range_end:; response_count:1; response_revision:37644; }","duration":"641.18171ms","start":"2025-12-13T00:07:50.990514Z","end":"2025-12-13T00:07:51.631696Z","steps":["trace[685985830] 'agreement among raft nodes before linearized reading' (duration: 641.073613ms)"],"step_count":1} 2025-12-13T00:07:51.631913307+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:51.631723Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:50.990487Z","time spent":"641.231978ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":100,"response count":1,"response size":4890,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/consoleclidownloads.v1.console.openshift.io\" "} 2025-12-13T00:07:52.132106147+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:52.131981Z","caller":"etcdserver/v3_server.go:908","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5497939759744065328,"retry-timeout":"500ms"} 2025-12-13T00:07:52.318006810+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:52.317808Z","caller":"traceutil/trace.go:171","msg":"trace[1231134054] transaction","detail":"{read_only:false; response_revision:37645; number_of_response:1; }","duration":"725.384254ms","start":"2025-12-13T00:07:51.592233Z","end":"2025-12-13T00:07:52.317617Z","steps":["trace[1231134054] 'process raft request' (duration: 724.678453ms)"],"step_count":1} 2025-12-13T00:07:52.318060268+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:52.317974Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:51.592201Z","time spent":"725.687251ms","remote":"192.168.126.11:38408","response 
type":"/etcdserverpb.KV/Txn","request count":1,"request size":5131,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:53.213644Z","caller":"traceutil/trace.go:171","msg":"trace[539379048] transaction","detail":"{read_only:false; response_revision:37647; number_of_response:1; }","duration":"1.584611799s","start":"2025-12-13T00:07:51.629007Z","end":"2025-12-13T00:07:53.213619Z","steps":["trace[539379048] 'process raft request' (duration: 1.584448774s)"],"step_count":1} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:53.213841Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:51.628983Z","time spent":"1.584724117s","remote":"192.168.126.11:38408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2729,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:53.213934Z","caller":"traceutil/trace.go:171","msg":"trace[1125515293] linearizableReadLoop","detail":"{readStateIndex:41411; appliedIndex:41406; }","duration":"1.582342563s","start":"2025-12-13T00:07:51.631571Z","end":"2025-12-13T00:07:53.213914Z","steps":["trace[1125515293] 'read index received' (duration: 685.469386ms)","trace[1125515293] 'applied index is now lower than readState.Index' (duration: 896.872497ms)"],"step_count":2} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:53.214169Z","caller":"traceutil/trace.go:171","msg":"trace[1047578370] transaction","detail":"{read_only:false; response_revision:37648; number_of_response:1; }","duration":"1.581937419s","start":"2025-12-13T00:07:51.632221Z","end":"2025-12-13T00:07:53.214158Z","steps":["trace[1047578370] 'process raft request' (duration: 1.581336245s)"],"step_count":1} 
2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:53.214214Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:51.632207Z","time spent":"1.581985797s","remote":"192.168.126.11:37846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":125,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:53.214296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.097029526s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/configmaps/\" range_end:\"/kubernetes.io/configmaps0\" count_only:true ","response":"range_response_count:0 size:9"} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:53.214331Z","caller":"traceutil/trace.go:171","msg":"trace[1924701088] range","detail":"{range_begin:/kubernetes.io/configmaps/; range_end:/kubernetes.io/configmaps0; response_count:0; response_revision:37649; }","duration":"2.097084524s","start":"2025-12-13T00:07:51.117239Z","end":"2025-12-13T00:07:53.214324Z","steps":["trace[1924701088] 'agreement among raft nodes before linearized reading' (duration: 2.096752682s)"],"step_count":1} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:53.214359Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:51.117181Z","time spent":"2.097171698s","remote":"[::1]:52128","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":549,"response size":32,"request content":"key:\"/kubernetes.io/configmaps/\" range_end:\"/kubernetes.io/configmaps0\" count_only:true "} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:53.214384Z","caller":"traceutil/trace.go:171","msg":"trace[1344685722] 
transaction","detail":"{read_only:false; response_revision:37649; number_of_response:1; }","duration":"1.570676143s","start":"2025-12-13T00:07:51.643699Z","end":"2025-12-13T00:07:53.214376Z","steps":["trace[1344685722] 'process raft request' (duration: 1.570107224s)"],"step_count":1} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:53.214446Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:51.643666Z","time spent":"1.570757946s","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4224,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:53.214577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.67606638s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/health\" ","response":"range_response_count:0 size:6"} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:53.214601Z","caller":"traceutil/trace.go:171","msg":"trace[1789602178] range","detail":"{range_begin:/kubernetes.io/health; range_end:; response_count:0; response_revision:37649; }","duration":"1.676088163s","start":"2025-12-13T00:07:51.538504Z","end":"2025-12-13T00:07:53.214592Z","steps":["trace[1789602178] 'agreement among raft nodes before linearized reading' (duration: 1.676049527s)"],"step_count":1} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:53.214621Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:51.538375Z","time spent":"1.676241147s","remote":"192.168.126.11:37802","response type":"/etcdserverpb.KV/Range","request count":0,"request size":23,"response count":0,"response size":29,"request content":"key:\"/kubernetes.io/health\" "} 2025-12-13T00:07:53.215177164+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:07:53.214789Z","caller":"traceutil/trace.go:171","msg":"trace[1775891288] transaction","detail":"{read_only:false; response_revision:37646; number_of_response:1; }","duration":"1.586061246s","start":"2025-12-13T00:07:51.627467Z","end":"2025-12-13T00:07:53.213528Z","steps":["trace[1775891288] 'process raft request' (duration: 1.585246167s)"],"step_count":1} 2025-12-13T00:07:53.215177164+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:53.214861Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:51.627441Z","time spent":"1.58736876s","remote":"[::1]:52520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5146,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:07:56.443246311+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:56.443068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.187106ms","expected-duration":"200ms","prefix":"","request":"header: lease_revoke:","response":"size:29"} 2025-12-13T00:07:56.736595360+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:56.736441Z","caller":"traceutil/trace.go:171","msg":"trace[1042114903] transaction","detail":"{read_only:false; response_revision:37652; number_of_response:1; }","duration":"106.772352ms","start":"2025-12-13T00:07:56.629607Z","end":"2025-12-13T00:07:56.736379Z","steps":["trace[1042114903] 'process raft request' (duration: 106.562876ms)"],"step_count":1} 2025-12-13T00:07:57.275333654+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:57.275093Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"351.231178ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/\" range_end:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts0\" count_only:true ","response":"range_response_count:0 size:9"} 
2025-12-13T00:07:57.275333654+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:57.27518Z","caller":"traceutil/trace.go:171","msg":"trace[566959667] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/; range_end:/kubernetes.io/apiserver.openshift.io/apirequestcounts0; response_count:0; response_revision:37652; }","duration":"351.346121ms","start":"2025-12-13T00:07:56.923815Z","end":"2025-12-13T00:07:57.275161Z","steps":["trace[566959667] 'count revisions from in-memory index tree' (duration: 351.132145ms)"],"step_count":1} 2025-12-13T00:07:57.275333654+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:57.275231Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:56.923751Z","time spent":"351.465935ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":116,"response count":180,"response size":32,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/\" range_end:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts0\" count_only:true "} 2025-12-13T00:07:57.586219595+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:57.586055Z","caller":"traceutil/trace.go:171","msg":"trace[175253013] linearizableReadLoop","detail":"{readStateIndex:41416; appliedIndex:41415; }","duration":"128.885207ms","start":"2025-12-13T00:07:57.457139Z","end":"2025-12-13T00:07:57.586024Z","steps":["trace[175253013] 'read index received' (duration: 128.545176ms)","trace[175253013] 'applied index is now lower than readState.Index' (duration: 337.83µs)"],"step_count":2} 2025-12-13T00:07:57.586321918+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:57.58619Z","caller":"traceutil/trace.go:171","msg":"trace[1292613180] transaction","detail":"{read_only:false; response_revision:37653; number_of_response:1; }","duration":"259.942137ms","start":"2025-12-13T00:07:57.326199Z","end":"2025-12-13T00:07:57.586141Z","steps":["trace[1292613180] 
'process raft request' (duration: 259.629217ms)"],"step_count":1} 2025-12-13T00:07:57.586321918+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:57.586277Z","caller":"traceutil/trace.go:171","msg":"trace[1932612911] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/builds.v1.config.openshift.io; range_end:; response_count:1; response_revision:37653; }","duration":"129.133534ms","start":"2025-12-13T00:07:57.457133Z","end":"2025-12-13T00:07:57.586267Z","steps":["trace[1932612911] 'agreement among raft nodes before linearized reading' (duration: 129.00394ms)"],"step_count":1} 2025-12-13T00:07:58.097267870+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:58.097096Z","caller":"traceutil/trace.go:171","msg":"trace[204064353] transaction","detail":"{read_only:false; response_revision:37654; number_of_response:1; }","duration":"498.099269ms","start":"2025-12-13T00:07:57.598977Z","end":"2025-12-13T00:07:58.097076Z","steps":["trace[204064353] 'process raft request' (duration: 476.286283ms)","trace[204064353] 'compare' (duration: 21.365513ms)"],"step_count":2} 2025-12-13T00:07:58.097267870+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:58.09721Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:57.598949Z","time spent":"498.197991ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4894,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:07:58.563089168+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:58.56291Z","caller":"traceutil/trace.go:171","msg":"trace[1806174883] transaction","detail":"{read_only:false; response_revision:37655; number_of_response:1; }","duration":"342.079371ms","start":"2025-12-13T00:07:58.220804Z","end":"2025-12-13T00:07:58.562883Z","steps":["trace[1806174883] 'process raft request' (duration: 341.896895ms)"],"step_count":1} 
2025-12-13T00:07:58.563230012+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:58.563098Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:58.220774Z","time spent":"342.224325ms","remote":"[::1]:52520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5092,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:07:59.050567026+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:59.050187Z","caller":"traceutil/trace.go:171","msg":"trace[756408437] linearizableReadLoop","detail":"{readStateIndex:41419; appliedIndex:41418; }","duration":"277.327754ms","start":"2025-12-13T00:07:58.772836Z","end":"2025-12-13T00:07:59.050164Z","steps":["trace[756408437] 'read index received' (duration: 268.158517ms)","trace[756408437] 'applied index is now lower than readState.Index' (duration: 9.167937ms)"],"step_count":2} 2025-12-13T00:07:59.050567026+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:59.050293Z","caller":"traceutil/trace.go:171","msg":"trace[1640521431] transaction","detail":"{read_only:false; response_revision:37656; number_of_response:1; }","duration":"829.127847ms","start":"2025-12-13T00:07:58.221157Z","end":"2025-12-13T00:07:59.050285Z","steps":["trace[1640521431] 'process raft request' (duration: 819.882598ms)"],"step_count":1} 2025-12-13T00:07:59.050567026+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:59.050378Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:07:58.221123Z","time spent":"829.189249ms","remote":"192.168.126.11:38408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5119,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:07:59.050567026+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:59.050437Z","caller":"etcdserver/util.go:170","msg":"apply request took too 
long","took":"231.642011ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/controllers/\" range_end:\"/kubernetes.io/controllers0\" count_only:true ","response":"range_response_count:0 size:6"} 2025-12-13T00:07:59.050567026+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:59.050522Z","caller":"traceutil/trace.go:171","msg":"trace[447032230] range","detail":"{range_begin:/kubernetes.io/controllers/; range_end:/kubernetes.io/controllers0; response_count:0; response_revision:37656; }","duration":"231.790715ms","start":"2025-12-13T00:07:58.818712Z","end":"2025-12-13T00:07:59.050503Z","steps":["trace[447032230] 'agreement among raft nodes before linearized reading' (duration: 231.59814ms)"],"step_count":1} 2025-12-13T00:07:59.050682290+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:59.050528Z","caller":"traceutil/trace.go:171","msg":"trace[750609137] range","detail":"{range_begin:/kubernetes.io/namespaces/; range_end:/kubernetes.io/namespaces0; response_count:0; response_revision:37656; }","duration":"125.846058ms","start":"2025-12-13T00:07:58.924653Z","end":"2025-12-13T00:07:59.050499Z","steps":["trace[750609137] 'agreement among raft nodes before linearized reading' (duration: 125.788127ms)"],"step_count":1} 2025-12-13T00:07:59.050682290+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:07:59.050435Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"277.533159ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/validatingwebhookconfigurations/\" range_end:\"/kubernetes.io/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:8"} 2025-12-13T00:07:59.050682290+00:00 stderr F {"level":"info","ts":"2025-12-13T00:07:59.050646Z","caller":"traceutil/trace.go:171","msg":"trace[1938749386] range","detail":"{range_begin:/kubernetes.io/validatingwebhookconfigurations/; 
range_end:/kubernetes.io/validatingwebhookconfigurations0; response_count:0; response_revision:37656; }","duration":"277.794487ms","start":"2025-12-13T00:07:58.77283Z","end":"2025-12-13T00:07:59.050625Z","steps":["trace[1938749386] 'agreement among raft nodes before linearized reading' (duration: 277.496398ms)"],"step_count":1} 2025-12-13T00:08:00.336087586+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:00.33593Z","caller":"traceutil/trace.go:171","msg":"trace[2066501215] linearizableReadLoop","detail":"{readStateIndex:41420; appliedIndex:41419; }","duration":"156.274276ms","start":"2025-12-13T00:08:00.179637Z","end":"2025-12-13T00:08:00.335911Z","steps":["trace[2066501215] 'read index received' (duration: 156.09236ms)","trace[2066501215] 'applied index is now lower than readState.Index' (duration: 181.186µs)"],"step_count":2} 2025-12-13T00:08:00.336087586+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:00.336053Z","caller":"traceutil/trace.go:171","msg":"trace[104249594] range","detail":"{range_begin:/kubernetes.io/runtimeclasses/; range_end:/kubernetes.io/runtimeclasses0; response_count:0; response_revision:37657; }","duration":"156.395689ms","start":"2025-12-13T00:08:00.179622Z","end":"2025-12-13T00:08:00.336018Z","steps":["trace[104249594] 'agreement among raft nodes before linearized reading' (duration: 156.375068ms)"],"step_count":1} 2025-12-13T00:08:00.336181728+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:00.336083Z","caller":"traceutil/trace.go:171","msg":"trace[1129212275] transaction","detail":"{read_only:false; response_revision:37657; number_of_response:1; }","duration":"194.215871ms","start":"2025-12-13T00:08:00.141836Z","end":"2025-12-13T00:08:00.336052Z","steps":["trace[1129212275] 'process raft request' (duration: 193.956073ms)"],"step_count":1} 2025-12-13T00:08:00.838896761+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:00.838714Z","caller":"etcdserver/util.go:170","msg":"apply request took too 
long","took":"297.918704ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/configmaps.v1\" ","response":"range_response_count:1 size:13457"} 2025-12-13T00:08:00.838896761+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:00.838809Z","caller":"traceutil/trace.go:171","msg":"trace[1052319972] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/configmaps.v1; range_end:; response_count:1; response_revision:37657; }","duration":"298.026237ms","start":"2025-12-13T00:08:00.540761Z","end":"2025-12-13T00:08:00.838787Z","steps":["trace[1052319972] 'range keys from in-memory index tree' (duration: 297.609385ms)"],"step_count":1} 2025-12-13T00:08:01.099002774+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:01.09883Z","caller":"traceutil/trace.go:171","msg":"trace[1947163626] linearizableReadLoop","detail":"{readStateIndex:41421; appliedIndex:41420; }","duration":"215.883813ms","start":"2025-12-13T00:08:00.882921Z","end":"2025-12-13T00:08:01.098805Z","steps":["trace[1947163626] 'read index received' (duration: 215.646036ms)","trace[1947163626] 'applied index is now lower than readState.Index' (duration: 236.767µs)"],"step_count":2} 2025-12-13T00:08:01.099002774+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:01.098915Z","caller":"traceutil/trace.go:171","msg":"trace[1963423815] transaction","detail":"{read_only:false; response_revision:37658; number_of_response:1; }","duration":"239.626725ms","start":"2025-12-13T00:08:00.85927Z","end":"2025-12-13T00:08:01.098896Z","steps":["trace[1963423815] 'process raft request' (duration: 239.378917ms)"],"step_count":1} 2025-12-13T00:08:01.099134227+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:01.099059Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.12354ms","expected-duration":"200ms","prefix":"read-only range 
","request":"key:\"/kubernetes.io/masterleases/192.168.126.11\" ","response":"range_response_count:1 size:144"} 2025-12-13T00:08:01.099155468+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:01.099126Z","caller":"traceutil/trace.go:171","msg":"trace[751400761] range","detail":"{range_begin:/kubernetes.io/masterleases/192.168.126.11; range_end:; response_count:1; response_revision:37658; }","duration":"216.206913ms","start":"2025-12-13T00:08:00.882904Z","end":"2025-12-13T00:08:01.099111Z","steps":["trace[751400761] 'agreement among raft nodes before linearized reading' (duration: 216.003327ms)"],"step_count":1} 2025-12-13T00:08:01.726925616+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:01.726769Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"537.26374ms","expected-duration":"200ms","prefix":"","request":"header: lease_grant:","response":"size:41"} 2025-12-13T00:08:01.726925616+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:01.726866Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:01.100216Z","time spent":"626.646756ms","remote":"[::1]:52054","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""} 2025-12-13T00:08:01.793097874+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:01.792983Z","caller":"traceutil/trace.go:171","msg":"trace[184878987] linearizableReadLoop","detail":"{readStateIndex:41423; appliedIndex:41423; }","duration":"252.444599ms","start":"2025-12-13T00:08:01.540506Z","end":"2025-12-13T00:08:01.792951Z","steps":["trace[184878987] 'read index received' (duration: 252.431528ms)","trace[184878987] 'applied index is now lower than readState.Index' (duration: 10.93µs)"],"step_count":2} 2025-12-13T00:08:01.793097874+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:01.79301Z","caller":"traceutil/trace.go:171","msg":"trace[801779550] 
transaction","detail":"{read_only:false; response_revision:37659; number_of_response:1; }","duration":"629.608131ms","start":"2025-12-13T00:08:01.163378Z","end":"2025-12-13T00:08:01.792986Z","steps":["trace[801779550] 'process raft request' (duration: 629.399025ms)"],"step_count":1} 2025-12-13T00:08:01.793175646+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:01.793145Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.631483ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/health\" ","response":"range_response_count:0 size:6"} 2025-12-13T00:08:01.793175646+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:01.793165Z","caller":"traceutil/trace.go:171","msg":"trace[1212915132] range","detail":"{range_begin:/kubernetes.io/health; range_end:; response_count:0; response_revision:37659; }","duration":"252.669935ms","start":"2025-12-13T00:08:01.540489Z","end":"2025-12-13T00:08:01.793159Z","steps":["trace[1212915132] 'agreement among raft nodes before linearized reading' (duration: 252.549351ms)"],"step_count":1} 2025-12-13T00:08:01.793260258+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:01.793186Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:01.163342Z","time spent":"629.746855ms","remote":"[::1]:52520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5052,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:02.181749272+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:02.181494Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"335.062216ms","expected-duration":"200ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:18"} 2025-12-13T00:08:02.181749272+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:02.181568Z","caller":"traceutil/trace.go:171","msg":"trace[938406550] 
linearizableReadLoop","detail":"{readStateIndex:41424; appliedIndex:41423; }","duration":"387.522954ms","start":"2025-12-13T00:08:01.794033Z","end":"2025-12-13T00:08:02.181556Z","steps":["trace[938406550] 'read index received' (duration: 52.227761ms)","trace[938406550] 'applied index is now lower than readState.Index' (duration: 335.294053ms)"],"step_count":2} 2025-12-13T00:08:02.181749272+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:02.181641Z","caller":"traceutil/trace.go:171","msg":"trace[1962347598] transaction","detail":"{read_only:false; response_revision:37660; number_of_response:1; }","duration":"453.871019ms","start":"2025-12-13T00:08:01.727744Z","end":"2025-12-13T00:08:02.181615Z","steps":["trace[1962347598] 'process raft request' (duration: 118.577246ms)","trace[1962347598] 'compare' (duration: 334.989174ms)"],"step_count":2} 2025-12-13T00:08:02.181749272+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:02.181672Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.636508ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/health\" ","response":"range_response_count:0 size:6"} 2025-12-13T00:08:02.181749272+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:02.181692Z","caller":"traceutil/trace.go:171","msg":"trace[622747610] range","detail":"{range_begin:/kubernetes.io/health; range_end:; response_count:0; response_revision:37660; }","duration":"387.657068ms","start":"2025-12-13T00:08:01.794029Z","end":"2025-12-13T00:08:02.181686Z","steps":["trace[622747610] 'agreement among raft nodes before linearized reading' (duration: 387.554805ms)"],"step_count":1} 2025-12-13T00:08:02.181749272+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:02.18171Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:01.793981Z","time spent":"387.724611ms","remote":"192.168.126.11:37800","response type":"/etcdserverpb.KV/Range","request 
count":0,"request size":23,"response count":0,"response size":29,"request content":"key:\"/kubernetes.io/health\" "} 2025-12-13T00:08:02.181857735+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:02.181747Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:01.727713Z","time spent":"453.983032ms","remote":"192.168.126.11:37846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":125,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:03.531200515+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:03.530871Z","caller":"traceutil/trace.go:171","msg":"trace[257943039] transaction","detail":"{read_only:false; response_revision:37661; number_of_response:1; }","duration":"262.580674ms","start":"2025-12-13T00:08:03.268269Z","end":"2025-12-13T00:08:03.53085Z","steps":["trace[257943039] 'process raft request' (duration: 262.4417ms)"],"step_count":1} 2025-12-13T00:08:04.065555940+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:04.065375Z","caller":"traceutil/trace.go:171","msg":"trace[1382771714] transaction","detail":"{read_only:false; response_revision:37662; number_of_response:1; }","duration":"495.657076ms","start":"2025-12-13T00:08:03.569695Z","end":"2025-12-13T00:08:04.065352Z","steps":["trace[1382771714] 'process raft request' (duration: 495.509412ms)"],"step_count":1} 2025-12-13T00:08:04.065631942+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:04.065559Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:03.569647Z","time spent":"495.836332ms","remote":"192.168.126.11:38408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5094,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:11.501296240+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:11.500981Z","caller":"traceutil/trace.go:171","msg":"trace[1679283392] 
transaction","detail":"{read_only:false; response_revision:37671; number_of_response:1; }","duration":"502.317162ms","start":"2025-12-13T00:08:10.998555Z","end":"2025-12-13T00:08:11.500872Z","steps":["trace[1679283392] 'process raft request' (duration: 501.949101ms)"],"step_count":1} 2025-12-13T00:08:11.501609969+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:11.501541Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:10.998537Z","time spent":"502.553639ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4952,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:16.946844753+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:16.946708Z","caller":"traceutil/trace.go:171","msg":"trace[571118183] linearizableReadLoop","detail":"{readStateIndex:41447; appliedIndex:41446; }","duration":"193.265544ms","start":"2025-12-13T00:08:16.753425Z","end":"2025-12-13T00:08:16.94669Z","steps":["trace[571118183] 'read index received' (duration: 192.725397ms)","trace[571118183] 'applied index is now lower than readState.Index' (duration: 539.017µs)"],"step_count":2} 2025-12-13T00:08:16.946844753+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:16.946821Z","caller":"traceutil/trace.go:171","msg":"trace[170414244] range","detail":"{range_begin:/kubernetes.io/volumeattachments/; range_end:/kubernetes.io/volumeattachments0; response_count:0; response_revision:37680; }","duration":"193.356956ms","start":"2025-12-13T00:08:16.75342Z","end":"2025-12-13T00:08:16.946777Z","steps":["trace[170414244] 'agreement among raft nodes before linearized reading' (duration: 193.339036ms)"],"step_count":1} 2025-12-13T00:08:16.947128771+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:16.947066Z","caller":"traceutil/trace.go:171","msg":"trace[1473223744] transaction","detail":"{read_only:false; response_revision:37680; number_of_response:1; 
}","duration":"193.710906ms","start":"2025-12-13T00:08:16.753321Z","end":"2025-12-13T00:08:16.947032Z","steps":["trace[1473223744] 'process raft request' (duration: 192.931813ms)"],"step_count":1} 2025-12-13T00:08:17.021873539+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:17.021673Z","caller":"traceutil/trace.go:171","msg":"trace[1308947188] transaction","detail":"{read_only:false; response_revision:37681; number_of_response:1; }","duration":"202.612295ms","start":"2025-12-13T00:08:16.819033Z","end":"2025-12-13T00:08:17.021646Z","steps":["trace[1308947188] 'process raft request' (duration: 202.467241ms)"],"step_count":1} 2025-12-13T00:08:18.374425322+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:18.374194Z","caller":"traceutil/trace.go:171","msg":"trace[2070554119] transaction","detail":"{read_only:false; response_revision:37683; number_of_response:1; }","duration":"684.216092ms","start":"2025-12-13T00:08:17.689945Z","end":"2025-12-13T00:08:18.374161Z","steps":["trace[2070554119] 'process raft request' (duration: 665.890939ms)","trace[2070554119] 'compare' (duration: 18.20307ms)"],"step_count":2} 2025-12-13T00:08:18.374472933+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:18.374386Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:17.689926Z","time spent":"684.357326ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4628,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:21.358187461+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:21.357967Z","caller":"traceutil/trace.go:171","msg":"trace[364684275] linearizableReadLoop","detail":"{readStateIndex:41452; appliedIndex:41451; }","duration":"473.7966ms","start":"2025-12-13T00:08:20.884154Z","end":"2025-12-13T00:08:21.357951Z","steps":["trace[364684275] 'read index received' (duration: 473.651806ms)","trace[364684275] 'applied index is now 
lower than readState.Index' (duration: 143.874µs)"],"step_count":2} 2025-12-13T00:08:21.358187461+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:21.358077Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"366.480341ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/statefulsets/\" range_end:\"/kubernetes.io/statefulsets0\" count_only:true ","response":"range_response_count:0 size:6"} 2025-12-13T00:08:21.358187461+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:21.358097Z","caller":"traceutil/trace.go:171","msg":"trace[48835757] range","detail":"{range_begin:/kubernetes.io/statefulsets/; range_end:/kubernetes.io/statefulsets0; response_count:0; response_revision:37685; }","duration":"366.521602ms","start":"2025-12-13T00:08:20.99157Z","end":"2025-12-13T00:08:21.358092Z","steps":["trace[48835757] 'agreement among raft nodes before linearized reading' (duration: 366.464401ms)"],"step_count":1} 2025-12-13T00:08:21.358187461+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:21.358035Z","caller":"traceutil/trace.go:171","msg":"trace[1445413087] transaction","detail":"{read_only:false; response_revision:37685; number_of_response:1; }","duration":"510.762907ms","start":"2025-12-13T00:08:20.847241Z","end":"2025-12-13T00:08:21.358004Z","steps":["trace[1445413087] 'process raft request' (duration: 510.609572ms)"],"step_count":1} 2025-12-13T00:08:21.358655876+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:21.35817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"473.996726ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/masterleases/192.168.126.11\" ","response":"range_response_count:1 size:144"} 2025-12-13T00:08:21.358655876+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:21.358219Z","caller":"traceutil/trace.go:171","msg":"trace[2075938802] 
range","detail":"{range_begin:/kubernetes.io/masterleases/192.168.126.11; range_end:; response_count:1; response_revision:37685; }","duration":"474.058308ms","start":"2025-12-13T00:08:20.884148Z","end":"2025-12-13T00:08:21.358206Z","steps":["trace[2075938802] 'agreement among raft nodes before linearized reading' (duration: 473.890653ms)"],"step_count":1} 2025-12-13T00:08:21.358655876+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:21.358206Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.228324ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/oauths.v1.config.openshift.io\" ","response":"range_response_count:1 size:6390"} 2025-12-13T00:08:21.358655876+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:21.358249Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:20.884103Z","time spent":"474.14107ms","remote":"192.168.126.11:37846","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":167,"request content":"key:\"/kubernetes.io/masterleases/192.168.126.11\" "} 2025-12-13T00:08:21.358655876+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:21.358119Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:20.991527Z","time spent":"366.587454ms","remote":"192.168.126.11:38344","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":29,"request content":"key:\"/kubernetes.io/statefulsets/\" range_end:\"/kubernetes.io/statefulsets0\" count_only:true "} 2025-12-13T00:08:21.358655876+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:21.358265Z","caller":"traceutil/trace.go:171","msg":"trace[260140289] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/oauths.v1.config.openshift.io; range_end:; response_count:1; 
response_revision:37685; }","duration":"238.294976ms","start":"2025-12-13T00:08:21.119955Z","end":"2025-12-13T00:08:21.35825Z","steps":["trace[260140289] 'agreement among raft nodes before linearized reading' (duration: 238.141122ms)"],"step_count":1} 2025-12-13T00:08:21.358655876+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:21.358276Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:20.847221Z","time spent":"510.964623ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4694,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:21.765108983+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:21.764948Z","caller":"traceutil/trace.go:171","msg":"trace[800836000] linearizableReadLoop","detail":"{readStateIndex:41453; appliedIndex:41452; }","duration":"400.211015ms","start":"2025-12-13T00:08:21.364713Z","end":"2025-12-13T00:08:21.764924Z","steps":["trace[800836000] 'read index received' (duration: 235.135434ms)","trace[800836000] 'applied index is now lower than readState.Index' (duration: 165.074251ms)"],"step_count":2} 2025-12-13T00:08:21.765108983+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:21.764974Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:21.359118Z","time spent":"405.851ms","remote":"[::1]:52054","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""} 2025-12-13T00:08:21.765233706+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:21.76515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"400.416021ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/oauths.v1.config.openshift.io\" ","response":"range_response_count:1 size:6390"} 
2025-12-13T00:08:21.765233706+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:21.76519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"319.572324ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/serviceaccounts.v1\" ","response":"range_response_count:1 size:12914"} 2025-12-13T00:08:21.765233706+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:21.76522Z","caller":"traceutil/trace.go:171","msg":"trace[1006927492] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/serviceaccounts.v1; range_end:; response_count:1; response_revision:37685; }","duration":"319.620696ms","start":"2025-12-13T00:08:21.445589Z","end":"2025-12-13T00:08:21.76521Z","steps":["trace[1006927492] 'agreement among raft nodes before linearized reading' (duration: 319.468141ms)"],"step_count":1} 2025-12-13T00:08:21.765272817+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:21.765224Z","caller":"traceutil/trace.go:171","msg":"trace[193121766] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/oauths.v1.config.openshift.io; range_end:; response_count:1; response_revision:37685; }","duration":"400.501023ms","start":"2025-12-13T00:08:21.364706Z","end":"2025-12-13T00:08:21.765207Z","steps":["trace[193121766] 'agreement among raft nodes before linearized reading' (duration: 400.295328ms)"],"step_count":1} 2025-12-13T00:08:21.765272817+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:21.765247Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:21.445539Z","time spent":"319.702898ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":75,"response count":1,"response size":12937,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/serviceaccounts.v1\" "} 2025-12-13T00:08:21.765290858+00:00 
stderr F {"level":"warn","ts":"2025-12-13T00:08:21.765241Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.055719ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/health\" ","response":"range_response_count:0 size:6"} 2025-12-13T00:08:21.765307548+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:21.765267Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:21.364677Z","time spent":"400.581346ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":86,"response count":1,"response size":6413,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/oauths.v1.config.openshift.io\" "} 2025-12-13T00:08:21.765425182+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:21.7653Z","caller":"traceutil/trace.go:171","msg":"trace[970649833] range","detail":"{range_begin:/kubernetes.io/health; range_end:; response_count:0; response_revision:37685; }","duration":"226.122061ms","start":"2025-12-13T00:08:21.539162Z","end":"2025-12-13T00:08:21.765284Z","steps":["trace[970649833] 'agreement among raft nodes before linearized reading' (duration: 226.021008ms)"],"step_count":1} 2025-12-13T00:08:22.327529784+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:22.32738Z","caller":"etcdserver/v3_server.go:908","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5497939759744065461,"retry-timeout":"500ms"} 2025-12-13T00:08:22.828567838+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:22.82842Z","caller":"etcdserver/v3_server.go:908","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5497939759744065461,"retry-timeout":"500ms"} 2025-12-13T00:08:22.904971196+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:22.904836Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.080292447s","expected-duration":"1s"} 
2025-12-13T00:08:23.073658762+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:23.073274Z","caller":"traceutil/trace.go:171","msg":"trace[1149678074] transaction","detail":"{read_only:false; response_revision:37687; number_of_response:1; }","duration":"1.294957814s","start":"2025-12-13T00:08:21.778288Z","end":"2025-12-13T00:08:23.073246Z","steps":["trace[1149678074] 'process raft request' (duration: 1.126728571s)","trace[1149678074] 'compare' (duration: 167.998107ms)"],"step_count":2} 2025-12-13T00:08:23.073658762+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:23.073465Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:21.778272Z","time spent":"1.295070587s","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5305,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:23.089974278+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:23.089796Z","caller":"traceutil/trace.go:171","msg":"trace[1411802560] linearizableReadLoop","detail":"{readStateIndex:41459; appliedIndex:41454; }","duration":"1.262989092s","start":"2025-12-13T00:08:21.826784Z","end":"2025-12-13T00:08:23.089774Z","steps":["trace[1411802560] 'read index received' (duration: 1.078242397s)","trace[1411802560] 'applied index is now lower than readState.Index' (duration: 184.745495ms)"],"step_count":2} 2025-12-13T00:08:23.089974278+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:23.08981Z","caller":"traceutil/trace.go:171","msg":"trace[1283143266] transaction","detail":"{read_only:false; response_revision:37688; number_of_response:1; }","duration":"1.243289148s","start":"2025-12-13T00:08:21.846467Z","end":"2025-12-13T00:08:23.089756Z","steps":["trace[1283143266] 'process raft request' (duration: 1.243090041s)"],"step_count":1} 2025-12-13T00:08:23.090045460+00:00 stderr F 
{"level":"warn","ts":"2025-12-13T00:08:23.089981Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:21.846444Z","time spent":"1.243449973s","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":10228,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:23.090125962+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:23.090057Z","caller":"traceutil/trace.go:171","msg":"trace[688029081] transaction","detail":"{read_only:false; response_revision:37689; number_of_response:1; }","duration":"1.136724002s","start":"2025-12-13T00:08:21.953286Z","end":"2025-12-13T00:08:23.09001Z","steps":["trace[688029081] 'process raft request' (duration: 1.136379271s)"],"step_count":1} 2025-12-13T00:08:23.090125962+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:23.090049Z","caller":"traceutil/trace.go:171","msg":"trace[413070883] transaction","detail":"{read_only:false; response_revision:37691; number_of_response:1; }","duration":"343.612415ms","start":"2025-12-13T00:08:22.746329Z","end":"2025-12-13T00:08:23.089941Z","steps":["trace[413070883] 'process raft request' (duration: 343.410688ms)"],"step_count":1} 2025-12-13T00:08:23.090487333+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:23.090172Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.263375923s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:429"} 2025-12-13T00:08:23.090512283+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:23.090482Z","caller":"traceutil/trace.go:171","msg":"trace[212047819] range","detail":"{range_begin:/kubernetes.io/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:37691; 
}","duration":"1.263684852s","start":"2025-12-13T00:08:21.826779Z","end":"2025-12-13T00:08:23.090464Z","steps":["trace[212047819] 'agreement among raft nodes before linearized reading' (duration: 1.26326759s)"],"step_count":1} 2025-12-13T00:08:23.090525254+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:23.090209Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:21.953268Z","time spent":"1.136851175s","remote":"[::1]:52520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5141,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:23.090555425+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:23.090529Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:21.826735Z","time spent":"1.263782035s","remote":"[::1]:52206","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":1,"response size":452,"request content":"key:\"/kubernetes.io/services/endpoints/default/kubernetes\" "} 2025-12-13T00:08:23.090568345+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:23.090521Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:22.746301Z","time spent":"343.843652ms","remote":"[::1]:52520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5171,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:23.090583726+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:23.090523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.089435874s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/secrets.v1\" ","response":"range_response_count:1 size:13088"} 2025-12-13T00:08:23.090657338+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:08:23.090607Z","caller":"traceutil/trace.go:171","msg":"trace[614141189] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/secrets.v1; range_end:; response_count:1; response_revision:37691; }","duration":"1.089533367s","start":"2025-12-13T00:08:22.001054Z","end":"2025-12-13T00:08:23.090587Z","steps":["trace[614141189] 'agreement among raft nodes before linearized reading' (duration: 1.089076273s)"],"step_count":1} 2025-12-13T00:08:23.090699079+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:23.09066Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:22.000999Z","time spent":"1.089652009s","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":13111,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/secrets.v1\" "} 2025-12-13T00:08:23.090699079+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:23.090061Z","caller":"traceutil/trace.go:171","msg":"trace[2069179365] transaction","detail":"{read_only:false; response_revision:37690; number_of_response:1; }","duration":"1.062111828s","start":"2025-12-13T00:08:22.027926Z","end":"2025-12-13T00:08:23.090038Z","steps":["trace[2069179365] 'process raft request' (duration: 1.061778997s)"],"step_count":1} 2025-12-13T00:08:23.090766501+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:23.090725Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:22.027903Z","time spent":"1.062792728s","remote":"192.168.126.11:38408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5126,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:23.286882567+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:23.286684Z","caller":"traceutil/trace.go:171","msg":"trace[1657506536] 
transaction","detail":"{read_only:false; response_revision:37693; number_of_response:1; }","duration":"134.086429ms","start":"2025-12-13T00:08:23.152472Z","end":"2025-12-13T00:08:23.286559Z","steps":["trace[1657506536] 'process raft request' (duration: 133.294405ms)"],"step_count":1} 2025-12-13T00:08:29.204454277+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:29.204291Z","caller":"traceutil/trace.go:171","msg":"trace[4801510] transaction","detail":"{read_only:false; response_revision:37708; number_of_response:1; }","duration":"106.00689ms","start":"2025-12-13T00:08:29.09825Z","end":"2025-12-13T00:08:29.204257Z","steps":["trace[4801510] 'process raft request' (duration: 105.810454ms)"],"step_count":1} 2025-12-13T00:08:30.966188337+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:30.966075Z","caller":"traceutil/trace.go:171","msg":"trace[500773150] transaction","detail":"{read_only:false; response_revision:37711; number_of_response:1; }","duration":"122.545902ms","start":"2025-12-13T00:08:30.843489Z","end":"2025-12-13T00:08:30.966035Z","steps":["trace[500773150] 'process raft request' (duration: 122.294625ms)"],"step_count":1} 2025-12-13T00:08:35.591656986+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:35.591532Z","caller":"traceutil/trace.go:171","msg":"trace[1994705129] transaction","detail":"{read_only:false; response_revision:37721; number_of_response:1; }","duration":"365.432241ms","start":"2025-12-13T00:08:35.226056Z","end":"2025-12-13T00:08:35.591488Z","steps":["trace[1994705129] 'process raft request' (duration: 364.660389ms)"],"step_count":1} 2025-12-13T00:08:35.592457040+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:35.592268Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:35.226039Z","time spent":"365.577055ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4655,"response count":0,"response size":41,"request content":"compare: 
success:> failure: >"} 2025-12-13T00:08:44.072273570+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:44.071868Z","caller":"traceutil/trace.go:171","msg":"trace[871004396] transaction","detail":"{read_only:false; response_revision:37733; number_of_response:1; }","duration":"117.108384ms","start":"2025-12-13T00:08:43.954719Z","end":"2025-12-13T00:08:44.071828Z","steps":["trace[871004396] 'process raft request' (duration: 116.799768ms)"],"step_count":1} 2025-12-13T00:08:45.594305540+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:45.594211Z","caller":"traceutil/trace.go:171","msg":"trace[1207897017] transaction","detail":"{read_only:false; response_revision:37738; number_of_response:1; }","duration":"195.034368ms","start":"2025-12-13T00:08:45.399154Z","end":"2025-12-13T00:08:45.594189Z","steps":["trace[1207897017] 'process raft request' (duration: 194.903546ms)"],"step_count":1} 2025-12-13T00:08:48.380586666+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:48.380374Z","caller":"traceutil/trace.go:171","msg":"trace[1720203464] transaction","detail":"{read_only:false; response_revision:37740; number_of_response:1; }","duration":"246.836671ms","start":"2025-12-13T00:08:48.133482Z","end":"2025-12-13T00:08:48.380319Z","steps":["trace[1720203464] 'process raft request' (duration: 246.027957ms)"],"step_count":1} 2025-12-13T00:08:50.181359360+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:50.181216Z","caller":"traceutil/trace.go:171","msg":"trace[195996877] transaction","detail":"{read_only:false; response_revision:37744; number_of_response:1; }","duration":"112.769557ms","start":"2025-12-13T00:08:50.068424Z","end":"2025-12-13T00:08:50.181194Z","steps":["trace[195996877] 'process raft request' (duration: 112.621185ms)"],"step_count":1} 2025-12-13T00:08:53.225850938+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:53.225695Z","caller":"traceutil/trace.go:171","msg":"trace[1911854080] linearizableReadLoop","detail":"{readStateIndex:41523; 
appliedIndex:41522; }","duration":"112.634575ms","start":"2025-12-13T00:08:53.113035Z","end":"2025-12-13T00:08:53.22567Z","steps":["trace[1911854080] 'read index received' (duration: 112.124856ms)","trace[1911854080] 'applied index is now lower than readState.Index' (duration: 508.109µs)"],"step_count":2} 2025-12-13T00:08:53.225932640+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:53.225845Z","caller":"traceutil/trace.go:171","msg":"trace[1431711479] transaction","detail":"{read_only:false; response_revision:37749; number_of_response:1; }","duration":"305.949182ms","start":"2025-12-13T00:08:52.919846Z","end":"2025-12-13T00:08:53.225795Z","steps":["trace[1431711479] 'process raft request' (duration: 305.642407ms)"],"step_count":1} 2025-12-13T00:08:53.225932640+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:53.22591Z","caller":"traceutil/trace.go:171","msg":"trace[1273672505] range","detail":"{range_begin:/kubernetes.io/leases/openshift-kube-scheduler/cert-recovery-controller-lock; range_end:; response_count:1; response_revision:37749; }","duration":"112.873459ms","start":"2025-12-13T00:08:53.113025Z","end":"2025-12-13T00:08:53.225898Z","steps":["trace[1273672505] 'agreement among raft nodes before linearized reading' (duration: 112.764447ms)"],"step_count":1} 2025-12-13T00:08:53.226114933+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:53.226056Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:52.919814Z","time spent":"306.132446ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":9694,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:53.227385586+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:53.22726Z","caller":"traceutil/trace.go:171","msg":"trace[104691143] transaction","detail":"{read_only:false; response_revision:37750; number_of_response:1; 
}","duration":"104.755077ms","start":"2025-12-13T00:08:53.122479Z","end":"2025-12-13T00:08:53.227234Z","steps":["trace[104691143] 'process raft request' (duration: 104.461681ms)"],"step_count":1} 2025-12-13T00:08:58.371841452+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:58.3717Z","caller":"traceutil/trace.go:171","msg":"trace[2080467863] transaction","detail":"{read_only:false; response_revision:37757; number_of_response:1; }","duration":"137.950672ms","start":"2025-12-13T00:08:58.233727Z","end":"2025-12-13T00:08:58.371678Z","steps":["trace[2080467863] 'process raft request' (duration: 137.580176ms)"],"step_count":1} 2025-12-13T00:08:59.583779915+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:59.583667Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.182203869s","expected-duration":"1s"} 2025-12-13T00:08:59.780375611+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:59.780014Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:59.193195Z","time spent":"586.813455ms","remote":"[::1]:45478","response type":"/etcdserverpb.Maintenance/Status","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""} 2025-12-13T00:08:59.783147499+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:59.782839Z","caller":"traceutil/trace.go:171","msg":"trace[1113059403] linearizableReadLoop","detail":"{readStateIndex:41533; appliedIndex:41532; }","duration":"609.597206ms","start":"2025-12-13T00:08:59.173194Z","end":"2025-12-13T00:08:59.782791Z","steps":["trace[1113059403] 'read index received' (duration: 410.593168ms)","trace[1113059403] 'applied index is now lower than readState.Index' (duration: 198.861196ms)"],"step_count":2} 2025-12-13T00:08:59.783147499+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:59.782725Z","caller":"traceutil/trace.go:171","msg":"trace[1736013637] transaction","detail":"{read_only:false; response_revision:37758; number_of_response:1; 
}","duration":"1.381353179s","start":"2025-12-13T00:08:58.401322Z","end":"2025-12-13T00:08:59.782675Z","steps":["trace[1736013637] 'process raft request' (duration: 1.182525214s)","trace[1736013637] 'compare' (duration: 196.092387ms)"],"step_count":2} 2025-12-13T00:08:59.783454515+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:59.783374Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:58.401296Z","time spent":"1.381834927s","remote":"[::1]:52520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5092,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:08:59.784965061+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:59.784897Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"611.699522ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/egressqoses.v1.k8s.ovn.org\" ","response":"range_response_count:1 size:5963"} 2025-12-13T00:08:59.784965061+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:59.784944Z","caller":"traceutil/trace.go:171","msg":"trace[1098478752] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/egressqoses.v1.k8s.ovn.org; range_end:; response_count:1; response_revision:37758; }","duration":"611.746323ms","start":"2025-12-13T00:08:59.173183Z","end":"2025-12-13T00:08:59.784929Z","steps":["trace[1098478752] 'agreement among raft nodes before linearized reading' (duration: 609.938212ms)"],"step_count":1} 2025-12-13T00:08:59.785005943+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:08:59.784984Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:08:59.17312Z","time spent":"611.855996ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":1,"response size":5986,"request 
content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/egressqoses.v1.k8s.ovn.org\" "} 2025-12-13T00:08:59.900375856+00:00 stderr F {"level":"info","ts":"2025-12-13T00:08:59.900234Z","caller":"traceutil/trace.go:171","msg":"trace[1898966368] transaction","detail":"{read_only:false; response_revision:37759; number_of_response:1; }","duration":"100.635374ms","start":"2025-12-13T00:08:59.799582Z","end":"2025-12-13T00:08:59.900217Z","steps":["trace[1898966368] 'process raft request' (duration: 95.474253ms)"],"step_count":1} 2025-12-13T00:09:01.421743304+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:01.421558Z","caller":"traceutil/trace.go:171","msg":"trace[2073896358] transaction","detail":"{read_only:false; response_revision:37761; number_of_response:1; }","duration":"302.305139ms","start":"2025-12-13T00:09:01.119218Z","end":"2025-12-13T00:09:01.421523Z","steps":["trace[2073896358] 'process raft request' (duration: 300.365064ms)"],"step_count":1} 2025-12-13T00:09:01.421811605+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:01.421752Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:01.119203Z","time spent":"302.455471ms","remote":"192.168.126.11:37846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":125,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:09:01.676859901+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:01.676779Z","caller":"traceutil/trace.go:171","msg":"trace[715600039] range","detail":"{range_begin:/kubernetes.io/health; range_end:; response_count:0; response_revision:37761; }","duration":"138.228927ms","start":"2025-12-13T00:09:01.538524Z","end":"2025-12-13T00:09:01.676753Z","steps":["trace[715600039] 'range keys from in-memory index tree' (duration: 138.115554ms)"],"step_count":1} 2025-12-13T00:09:02.026640817+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:09:02.02623Z","caller":"traceutil/trace.go:171","msg":"trace[1363431943] transaction","detail":"{read_only:false; response_revision:37763; number_of_response:1; }","duration":"192.593216ms","start":"2025-12-13T00:09:01.833617Z","end":"2025-12-13T00:09:02.02621Z","steps":["trace[1363431943] 'process raft request' (duration: 191.688089ms)"],"step_count":1} 2025-12-13T00:09:04.467376241+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:04.467197Z","caller":"traceutil/trace.go:171","msg":"trace[1733743287] transaction","detail":"{read_only:false; response_revision:37765; number_of_response:1; }","duration":"211.075941ms","start":"2025-12-13T00:09:04.256097Z","end":"2025-12-13T00:09:04.467173Z","steps":["trace[1733743287] 'process raft request' (duration: 210.920478ms)"],"step_count":1} 2025-12-13T00:09:04.470190771+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:04.469837Z","caller":"traceutil/trace.go:171","msg":"trace[929854509] transaction","detail":"{read_only:false; response_revision:37766; number_of_response:1; }","duration":"199.972186ms","start":"2025-12-13T00:09:04.269835Z","end":"2025-12-13T00:09:04.469807Z","steps":["trace[929854509] 'process raft request' (duration: 199.779233ms)"],"step_count":1} 2025-12-13T00:09:07.542219353+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:07.542048Z","caller":"traceutil/trace.go:171","msg":"trace[1412445367] transaction","detail":"{read_only:false; response_revision:37771; number_of_response:1; }","duration":"156.098262ms","start":"2025-12-13T00:09:07.385926Z","end":"2025-12-13T00:09:07.542024Z","steps":["trace[1412445367] 'process raft request' (duration: 155.896708ms)"],"step_count":1} 2025-12-13T00:09:09.631674194+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:09.631572Z","caller":"traceutil/trace.go:171","msg":"trace[1842871525] linearizableReadLoop","detail":"{readStateIndex:41550; appliedIndex:41549; 
}","duration":"128.81226ms","start":"2025-12-13T00:09:09.502738Z","end":"2025-12-13T00:09:09.631551Z","steps":["trace[1842871525] 'read index received' (duration: 128.490895ms)","trace[1842871525] 'applied index is now lower than readState.Index' (duration: 319.945µs)"],"step_count":2} 2025-12-13T00:09:09.631674194+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:09.631623Z","caller":"traceutil/trace.go:171","msg":"trace[724538994] transaction","detail":"{read_only:false; response_revision:37773; number_of_response:1; }","duration":"154.93316ms","start":"2025-12-13T00:09:09.476676Z","end":"2025-12-13T00:09:09.631609Z","steps":["trace[724538994] 'process raft request' (duration: 154.627125ms)"],"step_count":1} 2025-12-13T00:09:09.631935020+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:09.631741Z","caller":"traceutil/trace.go:171","msg":"trace[1005746772] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/network-attachment-definitions.v1.k8s.cni.cncf.io; range_end:; response_count:1; response_revision:37773; }","duration":"128.999263ms","start":"2025-12-13T00:09:09.50273Z","end":"2025-12-13T00:09:09.631729Z","steps":["trace[1005746772] 'agreement among raft nodes before linearized reading' (duration: 128.903462ms)"],"step_count":1} 2025-12-13T00:09:15.251623362+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:15.251451Z","caller":"traceutil/trace.go:171","msg":"trace[1009458074] transaction","detail":"{read_only:false; response_revision:37781; number_of_response:1; }","duration":"389.347224ms","start":"2025-12-13T00:09:14.862031Z","end":"2025-12-13T00:09:15.251378Z","steps":["trace[1009458074] 'process raft request' (duration: 389.144299ms)"],"step_count":1} 2025-12-13T00:09:15.251698323+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:15.251652Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:14.862008Z","time spent":"389.544477ms","remote":"[::1]:52296","response 
type":"/etcdserverpb.KV/Txn","request count":1,"request size":673,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:09:16.567079220+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:16.566983Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.315880796s","expected-duration":"1s"} 2025-12-13T00:09:16.567232983+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:16.567202Z","caller":"traceutil/trace.go:171","msg":"trace[1294042736] linearizableReadLoop","detail":"{readStateIndex:41560; appliedIndex:41558; }","duration":"1.615451456s","start":"2025-12-13T00:09:14.951736Z","end":"2025-12-13T00:09:16.567188Z","steps":["trace[1294042736] 'read index received' (duration: 299.446927ms)","trace[1294042736] 'applied index is now lower than readState.Index' (duration: 1.316003199s)"],"step_count":2} 2025-12-13T00:09:16.567436556+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:16.567353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.615615629s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/imagedigestmirrorsets.v1.config.openshift.io\" ","response":"range_response_count:1 size:5762"} 2025-12-13T00:09:16.567811353+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:16.567323Z","caller":"traceutil/trace.go:171","msg":"trace[1143022132] transaction","detail":"{read_only:false; response_revision:37782; number_of_response:1; }","duration":"1.667116847s","start":"2025-12-13T00:09:14.90015Z","end":"2025-12-13T00:09:16.567267Z","steps":["trace[1143022132] 'process raft request' (duration: 1.666942074s)"],"step_count":1} 2025-12-13T00:09:16.568258821+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:16.567439Z","caller":"traceutil/trace.go:171","msg":"trace[1997797981] 
range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/imagedigestmirrorsets.v1.config.openshift.io; range_end:; response_count:1; response_revision:37782; }","duration":"1.6156703s","start":"2025-12-13T00:09:14.951717Z","end":"2025-12-13T00:09:16.567387Z","steps":["trace[1997797981] 'agreement among raft nodes before linearized reading' (duration: 1.615524358s)"],"step_count":1} 2025-12-13T00:09:16.568280812+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:16.568247Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:14.951652Z","time spent":"1.616580297s","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":101,"response count":1,"response size":5785,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/imagedigestmirrorsets.v1.config.openshift.io\" "} 2025-12-13T00:09:16.568312702+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:16.567353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"827.695039ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/storageversionmigrations.v1alpha1.migration.k8s.io\" ","response":"range_response_count:1 size:7052"} 2025-12-13T00:09:16.568427314+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:16.568337Z","caller":"traceutil/trace.go:171","msg":"trace[972612531] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/storageversionmigrations.v1alpha1.migration.k8s.io; range_end:; response_count:1; response_revision:37782; }","duration":"828.677067ms","start":"2025-12-13T00:09:15.739639Z","end":"2025-12-13T00:09:16.568316Z","steps":["trace[972612531] 'agreement among raft nodes before linearized reading' (duration: 827.640599ms)"],"step_count":1} 2025-12-13T00:09:16.568481555+00:00 stderr F 
{"level":"warn","ts":"2025-12-13T00:09:16.568428Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:15.739601Z","time spent":"828.7767ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":107,"response count":1,"response size":7075,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/storageversionmigrations.v1alpha1.migration.k8s.io\" "} 2025-12-13T00:09:16.568860072+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:16.568251Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:14.900125Z","time spent":"1.667914413s","remote":"[::1]:52520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5052,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:09:17.070060517+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:17.069129Z","caller":"etcdserver/v3_server.go:908","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5497939759744065751,"retry-timeout":"500ms"} 2025-12-13T00:09:17.230161739+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:17.230018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"661.322488ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/health\" ","response":"range_response_count:0 size:6"} 2025-12-13T00:09:17.230383243+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:17.230321Z","caller":"traceutil/trace.go:171","msg":"trace[38169801] range","detail":"{range_begin:/kubernetes.io/health; range_end:; response_count:0; response_revision:37782; }","duration":"661.621773ms","start":"2025-12-13T00:09:16.568651Z","end":"2025-12-13T00:09:17.230272Z","steps":["trace[38169801] 'agreement among raft nodes before linearized reading' (duration: 661.245137ms)"],"step_count":1} 2025-12-13T00:09:17.230599117+00:00 stderr F 
{"level":"warn","ts":"2025-12-13T00:09:17.230536Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:16.568617Z","time spent":"661.901057ms","remote":"[::1]:37944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":23,"response count":0,"response size":29,"request content":"key:\"/kubernetes.io/health\" "} 2025-12-13T00:09:17.230837061+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:17.230353Z","caller":"traceutil/trace.go:171","msg":"trace[2048929810] linearizableReadLoop","detail":"{readStateIndex:41561; appliedIndex:41560; }","duration":"661.031453ms","start":"2025-12-13T00:09:16.568664Z","end":"2025-12-13T00:09:17.229695Z","steps":["trace[2048929810] 'read index received' (duration: 652.112976ms)","trace[2048929810] 'applied index is now lower than readState.Index' (duration: 8.915867ms)"],"step_count":2} 2025-12-13T00:09:17.231181977+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:17.231064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.988553ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/mutatingwebhookconfigurations/\" range_end:\"/kubernetes.io/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:6"} 2025-12-13T00:09:17.231299669+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:17.231194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.254463ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/operator.openshift.io/kubeapiservers/\" range_end:\"/kubernetes.io/operator.openshift.io/kubeapiservers0\" count_only:true ","response":"range_response_count:0 size:8"} 2025-12-13T00:09:17.231330389+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:17.231295Z","caller":"traceutil/trace.go:171","msg":"trace[743344309] 
range","detail":"{range_begin:/kubernetes.io/operator.openshift.io/kubeapiservers/; range_end:/kubernetes.io/operator.openshift.io/kubeapiservers0; response_count:0; response_revision:37782; }","duration":"287.360635ms","start":"2025-12-13T00:09:16.943914Z","end":"2025-12-13T00:09:17.231274Z","steps":["trace[743344309] 'agreement among raft nodes before linearized reading' (duration: 287.226903ms)"],"step_count":1} 2025-12-13T00:09:17.231501352+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:17.231231Z","caller":"traceutil/trace.go:171","msg":"trace[1757488513] range","detail":"{range_begin:/kubernetes.io/mutatingwebhookconfigurations/; range_end:/kubernetes.io/mutatingwebhookconfigurations0; response_count:0; response_revision:37782; }","duration":"222.179646ms","start":"2025-12-13T00:09:17.009026Z","end":"2025-12-13T00:09:17.231206Z","steps":["trace[1757488513] 'agreement among raft nodes before linearized reading' (duration: 221.888811ms)"],"step_count":1} 2025-12-13T00:09:17.231501352+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:17.231434Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"658.574329ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/imagedigestmirrorsets.v1.config.openshift.io\" ","response":"range_response_count:1 size:5762"} 2025-12-13T00:09:17.231624654+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:17.231111Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"658.748192ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/storageversionmigrations.v1alpha1.migration.k8s.io\" ","response":"range_response_count:1 size:7052"} 2025-12-13T00:09:17.231624654+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:17.231567Z","caller":"traceutil/trace.go:171","msg":"trace[147606921] 
range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/imagedigestmirrorsets.v1.config.openshift.io; range_end:; response_count:1; response_revision:37782; }","duration":"658.713081ms","start":"2025-12-13T00:09:16.572786Z","end":"2025-12-13T00:09:17.231499Z","steps":["trace[147606921] 'agreement among raft nodes before linearized reading' (duration: 658.390206ms)"],"step_count":1} 2025-12-13T00:09:17.231697546+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:17.231637Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:16.572763Z","time spent":"658.860474ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":101,"response count":1,"response size":5785,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/imagedigestmirrorsets.v1.config.openshift.io\" "} 2025-12-13T00:09:17.231697546+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:17.231635Z","caller":"traceutil/trace.go:171","msg":"trace[939468479] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/storageversionmigrations.v1alpha1.migration.k8s.io; range_end:; response_count:1; response_revision:37782; }","duration":"659.290181ms","start":"2025-12-13T00:09:16.572317Z","end":"2025-12-13T00:09:17.231607Z","steps":["trace[939468479] 'agreement among raft nodes before linearized reading' (duration: 658.584749ms)"],"step_count":1} 2025-12-13T00:09:17.231806947+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:17.231724Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:16.572272Z","time spent":"659.430174ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":107,"response count":1,"response size":7075,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/storageversionmigrations.v1alpha1.migration.k8s.io\" "} 
2025-12-13T00:09:26.399245622+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.399106Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:24.093727Z","time spent":"2.305373087s","remote":"[::1]:37790","response type":"/etcdserverpb.Maintenance/Status","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""} 2025-12-13T00:09:26.400492163+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.399895Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:25.30668Z","time spent":"1.093208414s","remote":"[::1]:37806","response type":"/etcdserverpb.Maintenance/Status","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""} 2025-12-13T00:09:26.402100659+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.401872Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.208103827s","expected-duration":"200ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:18"} 2025-12-13T00:09:26.402100659+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.401974Z","caller":"traceutil/trace.go:171","msg":"trace[682897539] linearizableReadLoop","detail":"{readStateIndex:41580; appliedIndex:41579; }","duration":"5.071464801s","start":"2025-12-13T00:09:21.330493Z","end":"2025-12-13T00:09:26.401958Z","steps":["trace[682897539] 'read index received' (duration: 66.299µs)","trace[682897539] 'applied index is now lower than readState.Index' (duration: 5.071397302s)"],"step_count":2} 2025-12-13T00:09:26.402100659+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.40204Z","caller":"traceutil/trace.go:171","msg":"trace[1688471402] transaction","detail":"{read_only:false; response_revision:37800; number_of_response:1; }","duration":"5.387949821s","start":"2025-12-13T00:09:21.01408Z","end":"2025-12-13T00:09:26.402029Z","steps":["trace[1688471402] 'process raft request' 
(duration: 179.598802ms)","trace[1688471402] 'compare' (duration: 5.207373489s)"],"step_count":2} 2025-12-13T00:09:26.403868854+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.403613Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:21.014055Z","time spent":"5.388013831s","remote":"192.168.126.11:37846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":125,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:09:26.403868854+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.403785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.073275047s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/certificatesigningrequests.v1.certificates.k8s.io\" ","response":"range_response_count:1 size:9760"} 2025-12-13T00:09:26.403923855+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.403853Z","caller":"traceutil/trace.go:171","msg":"trace[397123425] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/certificatesigningrequests.v1.certificates.k8s.io; range_end:; response_count:1; response_revision:37800; }","duration":"5.073353868s","start":"2025-12-13T00:09:21.330474Z","end":"2025-12-13T00:09:26.403827Z","steps":["trace[397123425] 'agreement among raft nodes before linearized reading' (duration: 5.071573362s)"],"step_count":1} 2025-12-13T00:09:26.403985465+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.403931Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:21.33041Z","time spent":"5.073504598s","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":106,"response count":1,"response size":9783,"request 
content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/certificatesigningrequests.v1.certificates.k8s.io\" "} 2025-12-13T00:09:26.483956891+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.483831Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.60446012s","expected-duration":"1s"} 2025-12-13T00:09:26.484726647+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.484562Z","caller":"traceutil/trace.go:171","msg":"trace[961457781] transaction","detail":"{read_only:false; response_revision:37801; number_of_response:1; }","duration":"1.605354131s","start":"2025-12-13T00:09:24.879174Z","end":"2025-12-13T00:09:26.484529Z","steps":["trace[961457781] 'process raft request' (duration: 1.605054328s)"],"step_count":1} 2025-12-13T00:09:26.484824118+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.484748Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:24.879143Z","time spent":"1.605512732s","remote":"192.168.126.11:38408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5171,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:09:26.488885557+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.488796Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.211620454s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/endpointslices.v1.discovery.k8s.io\" ","response":"range_response_count:1 size:8925"} 2025-12-13T00:09:26.488926507+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.488897Z","caller":"traceutil/trace.go:171","msg":"trace[1005462197] transaction","detail":"{read_only:false; response_revision:37802; number_of_response:1; }","duration":"1.143338206s","start":"2025-12-13T00:09:25.345535Z","end":"2025-12-13T00:09:26.488873Z","steps":["trace[1005462197] 'process raft request' (duration: 
1.142830051s)"],"step_count":1} 2025-12-13T00:09:26.489045028+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.488993Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:25.345513Z","time spent":"1.143423626s","remote":"192.168.126.11:38124","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":673,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:09:26.489045028+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.488989Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.801115394s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/statefulsets.v1.apps\" ","response":"range_response_count:1 size:6634"} 2025-12-13T00:09:26.489080708+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.489051Z","caller":"traceutil/trace.go:171","msg":"trace[1400533528] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/endpointslices.v1.discovery.k8s.io; range_end:; response_count:1; response_revision:37802; }","duration":"2.211711743s","start":"2025-12-13T00:09:24.277142Z","end":"2025-12-13T00:09:26.488854Z","steps":["trace[1400533528] 'agreement among raft nodes before linearized reading' (duration: 2.211454281s)"],"step_count":1} 2025-12-13T00:09:26.489097729+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.948847484s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/health\" ","response":"range_response_count:0 size:6"} 2025-12-13T00:09:26.489097729+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.48908Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:24.277081Z","time spent":"2.21199232s","remote":"192.168.126.11:37496","response 
type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":1,"response size":8948,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/endpointslices.v1.discovery.k8s.io\" "} 2025-12-13T00:09:26.489116939+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.48908Z","caller":"traceutil/trace.go:171","msg":"trace[1955531798] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/statefulsets.v1.apps; range_end:; response_count:1; response_revision:37802; }","duration":"1.801213845s","start":"2025-12-13T00:09:24.687834Z","end":"2025-12-13T00:09:26.489048Z","steps":["trace[1955531798] 'agreement among raft nodes before linearized reading' (duration: 1.800948293s)"],"step_count":1} 2025-12-13T00:09:26.489116939+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.489097Z","caller":"traceutil/trace.go:171","msg":"trace[75758448] range","detail":"{range_begin:/kubernetes.io/health; range_end:; response_count:0; response_revision:37802; }","duration":"4.948879295s","start":"2025-12-13T00:09:21.540207Z","end":"2025-12-13T00:09:26.489087Z","steps":["trace[75758448] 'agreement among raft nodes before linearized reading' (duration: 4.948817864s)"],"step_count":1} 2025-12-13T00:09:26.489184609+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489121Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.183437803s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/configs.v1.imageregistry.operator.openshift.io\" ","response":"range_response_count:1 size:4154"} 2025-12-13T00:09:26.489184609+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489132Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:24.68776Z","time spent":"1.801362177s","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request 
size":77,"response count":1,"response size":6657,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/statefulsets.v1.apps\" "} 2025-12-13T00:09:26.489184609+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.488812Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.287517183s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/roles/\" range_end:\"/kubernetes.io/roles0\" count_only:true ","response":"range_response_count:0 size:8"} 2025-12-13T00:09:26.489219950+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.489179Z","caller":"traceutil/trace.go:171","msg":"trace[1625084015] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/configs.v1.imageregistry.operator.openshift.io; range_end:; response_count:1; response_revision:37802; }","duration":"4.183509162s","start":"2025-12-13T00:09:22.305652Z","end":"2025-12-13T00:09:26.489161Z","steps":["trace[1625084015] 'agreement among raft nodes before linearized reading' (duration: 4.18332032s)"],"step_count":1} 2025-12-13T00:09:26.489219950+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.489192Z","caller":"traceutil/trace.go:171","msg":"trace[1804970785] range","detail":"{range_begin:/kubernetes.io/roles/; range_end:/kubernetes.io/roles0; response_count:0; response_revision:37802; }","duration":"1.287907746s","start":"2025-12-13T00:09:25.201276Z","end":"2025-12-13T00:09:26.489184Z","steps":["trace[1804970785] 'agreement among raft nodes before linearized reading' (duration: 1.287440742s)"],"step_count":1} 2025-12-13T00:09:26.489241050+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489215Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:22.305602Z","time spent":"4.183605661s","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":103,"response count":1,"response size":4177,"request 
content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/configs.v1.imageregistry.operator.openshift.io\" "} 2025-12-13T00:09:26.489330431+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489273Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.875251175s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/operators.v1.operators.coreos.com\" ","response":"range_response_count:1 size:4312"} 2025-12-13T00:09:26.489330431+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.489313Z","caller":"traceutil/trace.go:171","msg":"trace[1917891337] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/operators.v1.operators.coreos.com; range_end:; response_count:1; response_revision:37802; }","duration":"2.875290274s","start":"2025-12-13T00:09:23.614012Z","end":"2025-12-13T00:09:26.489302Z","steps":["trace[1917891337] 'agreement among raft nodes before linearized reading' (duration: 2.875168493s)"],"step_count":1} 2025-12-13T00:09:26.489356111+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489332Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:25.201218Z","time spent":"1.288102659s","remote":"192.168.126.11:38190","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":96,"response size":31,"request content":"key:\"/kubernetes.io/roles/\" range_end:\"/kubernetes.io/roles0\" count_only:true "} 2025-12-13T00:09:26.489356111+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489335Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:23.613961Z","time spent":"2.875368382s","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":90,"response count":1,"response size":4335,"request 
content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/operators.v1.operators.coreos.com\" "} 2025-12-13T00:09:26.489384841+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.552009053s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/statefulsets/\" range_end:\"/kubernetes.io/statefulsets0\" count_only:true ","response":"range_response_count:0 size:6"} 2025-12-13T00:09:26.489384841+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489356Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.365298926s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/machineconfigurations.v1.operator.openshift.io\" ","response":"range_response_count:1 size:5568"} 2025-12-13T00:09:26.489454482+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.489384Z","caller":"traceutil/trace.go:171","msg":"trace[122894620] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/machineconfigurations.v1.operator.openshift.io; range_end:; response_count:1; response_revision:37802; }","duration":"3.365330575s","start":"2025-12-13T00:09:23.124045Z","end":"2025-12-13T00:09:26.489376Z","steps":["trace[122894620] 'agreement among raft nodes before linearized reading' (duration: 3.365254174s)"],"step_count":1} 2025-12-13T00:09:26.489454482+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.48938Z","caller":"traceutil/trace.go:171","msg":"trace[1068981500] range","detail":"{range_begin:/kubernetes.io/statefulsets/; range_end:/kubernetes.io/statefulsets0; response_count:0; response_revision:37802; }","duration":"2.552038042s","start":"2025-12-13T00:09:23.937333Z","end":"2025-12-13T00:09:26.489372Z","steps":["trace[1068981500] 'agreement among raft nodes before linearized reading' (duration: 
2.551992332s)"],"step_count":1} 2025-12-13T00:09:26.489454482+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.329810487s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/storageclasses/\" range_end:\"/kubernetes.io/storageclasses0\" count_only:true ","response":"range_response_count:0 size:8"} 2025-12-13T00:09:26.489454482+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489436Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:23.93728Z","time spent":"2.552148876s","remote":"[::1]:52464","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":29,"request content":"key:\"/kubernetes.io/statefulsets/\" range_end:\"/kubernetes.io/statefulsets0\" count_only:true "} 2025-12-13T00:09:26.489454482+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.489441Z","caller":"traceutil/trace.go:171","msg":"trace[55627877] range","detail":"{range_begin:/kubernetes.io/storageclasses/; range_end:/kubernetes.io/storageclasses0; response_count:0; response_revision:37802; }","duration":"4.329866738s","start":"2025-12-13T00:09:22.159566Z","end":"2025-12-13T00:09:26.489432Z","steps":["trace[55627877] 'agreement among raft nodes before linearized reading' (duration: 4.329797287s)"],"step_count":1} 2025-12-13T00:09:26.489481792+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.48943Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:23.124008Z","time spent":"3.365415537s","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":103,"response count":1,"response size":5591,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/machineconfigurations.v1.operator.openshift.io\" "} 2025-12-13T00:09:26.489481792+00:00 stderr F 
{"level":"warn","ts":"2025-12-13T00:09:26.489463Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:22.159503Z","time spent":"4.329953597s","remote":"192.168.126.11:38244","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":31,"request content":"key:\"/kubernetes.io/storageclasses/\" range_end:\"/kubernetes.io/storageclasses0\" count_only:true "} 2025-12-13T00:09:26.489572873+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489487Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.998533657s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/servicecas.v1.operator.openshift.io\" ","response":"range_response_count:1 size:5216"} 2025-12-13T00:09:26.489597503+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.489559Z","caller":"traceutil/trace.go:171","msg":"trace[1332285803] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/servicecas.v1.operator.openshift.io; range_end:; response_count:1; response_revision:37802; }","duration":"2.998604518s","start":"2025-12-13T00:09:23.490942Z","end":"2025-12-13T00:09:26.489547Z","steps":["trace[1332285803] 'agreement among raft nodes before linearized reading' (duration: 2.998348595s)"],"step_count":1} 2025-12-13T00:09:26.489617753+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.641307358s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/oauthauthorizetokens.v1.oauth.openshift.io\" ","response":"range_response_count:1 size:3340"} 2025-12-13T00:09:26.489637853+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.489609Z","caller":"traceutil/trace.go:171","msg":"trace[731379386] 
range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/oauthauthorizetokens.v1.oauth.openshift.io; range_end:; response_count:1; response_revision:37802; }","duration":"4.641332168s","start":"2025-12-13T00:09:21.848268Z","end":"2025-12-13T00:09:26.489601Z","steps":["trace[731379386] 'agreement among raft nodes before linearized reading' (duration: 4.641265838s)"],"step_count":1} 2025-12-13T00:09:26.489637853+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.488988Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.549239144s","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/schedulers.v1.config.openshift.io\" ","response":"range_response_count:1 size:6557"} 2025-12-13T00:09:26.489657064+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.48963Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:21.848228Z","time spent":"4.641397017s","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":99,"response count":1,"response size":3363,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/oauthauthorizetokens.v1.oauth.openshift.io\" "} 2025-12-13T00:09:26.489657064+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.489636Z","caller":"traceutil/trace.go:171","msg":"trace[453468330] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/schedulers.v1.config.openshift.io; range_end:; response_count:1; response_revision:37802; }","duration":"2.550031161s","start":"2025-12-13T00:09:23.939596Z","end":"2025-12-13T00:09:26.489627Z","steps":["trace[453468330] 'agreement among raft nodes before linearized reading' (duration: 2.549026392s)"],"step_count":1} 2025-12-13T00:09:26.489718354+00:00 stderr F 
{"level":"warn","ts":"2025-12-13T00:09:26.489605Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:23.490889Z","time spent":"2.998695212s","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":92,"response count":1,"response size":5239,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/servicecas.v1.operator.openshift.io\" "} 2025-12-13T00:09:26.489718354+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489685Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:23.939557Z","time spent":"2.550121s","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":90,"response count":1,"response size":6580,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/schedulers.v1.config.openshift.io\" "} 2025-12-13T00:09:26.489845665+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489754Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:21.540142Z","time spent":"4.949574489s","remote":"192.168.126.11:37802","response type":"/etcdserverpb.KV/Range","request count":0,"request size":23,"response count":0,"response size":29,"request content":"key:\"/kubernetes.io/health\" "} 2025-12-13T00:09:26.489935116+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.489859Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"890.116485ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/thanosrulers.v1.monitoring.coreos.com\" ","response":"range_response_count:1 size:4406"} 2025-12-13T00:09:26.489935116+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.489909Z","caller":"traceutil/trace.go:171","msg":"trace[570616051] 
range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/thanosrulers.v1.monitoring.coreos.com; range_end:; response_count:1; response_revision:37802; }","duration":"890.167376ms","start":"2025-12-13T00:09:25.599729Z","end":"2025-12-13T00:09:26.489896Z","steps":["trace[570616051] 'agreement among raft nodes before linearized reading' (duration: 889.112136ms)"],"step_count":1} 2025-12-13T00:09:26.489963456+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.48994Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:25.599709Z","time spent":"890.223836ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":94,"response count":1,"response size":4429,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/thanosrulers.v1.monitoring.coreos.com\" "} 2025-12-13T00:09:26.491010877+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.490934Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"860.622587ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/events/\" range_end:\"/kubernetes.io/events0\" count_only:true ","response":"range_response_count:0 size:9"} 2025-12-13T00:09:26.491041047+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.49101Z","caller":"traceutil/trace.go:171","msg":"trace[123566057] range","detail":"{range_begin:/kubernetes.io/events/; range_end:/kubernetes.io/events0; response_count:0; response_revision:37802; }","duration":"860.705839ms","start":"2025-12-13T00:09:25.630285Z","end":"2025-12-13T00:09:26.490991Z","steps":["trace[123566057] 'agreement among raft nodes before linearized reading' (duration: 858.383957ms)"],"step_count":1} 2025-12-13T00:09:26.491110348+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:26.491059Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start 
time":"2025-12-13T00:09:25.63022Z","time spent":"860.82629ms","remote":"192.168.126.11:38400","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":6320,"response size":32,"request content":"key:\"/kubernetes.io/events/\" range_end:\"/kubernetes.io/events0\" count_only:true "} 2025-12-13T00:09:26.723597942+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.723443Z","caller":"traceutil/trace.go:171","msg":"trace[415387800] transaction","detail":"{read_only:false; response_revision:37805; number_of_response:1; }","duration":"193.628352ms","start":"2025-12-13T00:09:26.52974Z","end":"2025-12-13T00:09:26.723368Z","steps":["trace[415387800] 'process raft request' (duration: 192.981956ms)"],"step_count":1} 2025-12-13T00:09:26.738870804+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.738782Z","caller":"traceutil/trace.go:171","msg":"trace[650008226] transaction","detail":"{read_only:false; response_revision:37806; number_of_response:1; }","duration":"207.913406ms","start":"2025-12-13T00:09:26.530849Z","end":"2025-12-13T00:09:26.738763Z","steps":["trace[650008226] 'process raft request' (duration: 207.746045ms)"],"step_count":1} 2025-12-13T00:09:26.739986894+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.739011Z","caller":"traceutil/trace.go:171","msg":"trace[2002674972] transaction","detail":"{read_only:false; response_revision:37807; number_of_response:1; }","duration":"204.367883ms","start":"2025-12-13T00:09:26.534627Z","end":"2025-12-13T00:09:26.738995Z","steps":["trace[2002674972] 'process raft request' (duration: 204.079161ms)"],"step_count":1} 2025-12-13T00:09:26.739986894+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.739262Z","caller":"traceutil/trace.go:171","msg":"trace[856486259] transaction","detail":"{read_only:false; response_revision:37808; number_of_response:1; 
}","duration":"203.217951ms","start":"2025-12-13T00:09:26.536027Z","end":"2025-12-13T00:09:26.739245Z","steps":["trace[856486259] 'process raft request' (duration: 202.903179ms)"],"step_count":1} 2025-12-13T00:09:26.739986894+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.739344Z","caller":"traceutil/trace.go:171","msg":"trace[14011871] transaction","detail":"{read_only:false; response_revision:37809; number_of_response:1; }","duration":"202.052601ms","start":"2025-12-13T00:09:26.537269Z","end":"2025-12-13T00:09:26.739322Z","steps":["trace[14011871] 'process raft request' (duration: 201.852749ms)"],"step_count":1} 2025-12-13T00:09:26.739986894+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.739359Z","caller":"traceutil/trace.go:171","msg":"trace[961001415] transaction","detail":"{read_only:false; response_revision:37810; number_of_response:1; }","duration":"201.9076ms","start":"2025-12-13T00:09:26.537435Z","end":"2025-12-13T00:09:26.739343Z","steps":["trace[961001415] 'process raft request' (duration: 201.769119ms)"],"step_count":1} 2025-12-13T00:09:26.739986894+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.739366Z","caller":"traceutil/trace.go:171","msg":"trace[1987760043] transaction","detail":"{read_only:false; response_revision:37811; number_of_response:1; }","duration":"198.399227ms","start":"2025-12-13T00:09:26.540944Z","end":"2025-12-13T00:09:26.739344Z","steps":["trace[1987760043] 'process raft request' (duration: 198.297086ms)"],"step_count":1} 2025-12-13T00:09:26.739986894+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.739551Z","caller":"traceutil/trace.go:171","msg":"trace[1268393273] transaction","detail":"{read_only:false; response_revision:37812; number_of_response:1; }","duration":"198.323267ms","start":"2025-12-13T00:09:26.541207Z","end":"2025-12-13T00:09:26.73953Z","steps":["trace[1268393273] 'process raft request' (duration: 198.099665ms)"],"step_count":1} 2025-12-13T00:09:26.739986894+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:09:26.739548Z","caller":"traceutil/trace.go:171","msg":"trace[760746383] linearizableReadLoop","detail":"{readStateIndex:41593; appliedIndex:41585; }","duration":"196.353888ms","start":"2025-12-13T00:09:26.543176Z","end":"2025-12-13T00:09:26.73953Z","steps":["trace[760746383] 'read index received' (duration: 179.502431ms)","trace[760746383] 'applied index is now lower than readState.Index' (duration: 16.850307ms)"],"step_count":2} 2025-12-13T00:09:26.739986894+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.739569Z","caller":"traceutil/trace.go:171","msg":"trace[1999830336] transaction","detail":"{read_only:false; response_revision:37813; number_of_response:1; }","duration":"164.027097ms","start":"2025-12-13T00:09:26.575521Z","end":"2025-12-13T00:09:26.739548Z","steps":["trace[1999830336] 'process raft request' (duration: 163.934246ms)"],"step_count":1} 2025-12-13T00:09:26.739986894+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.739666Z","caller":"traceutil/trace.go:171","msg":"trace[316972719] range","detail":"{range_begin:/kubernetes.io/health; range_end:; response_count:0; response_revision:37813; }","duration":"196.481809ms","start":"2025-12-13T00:09:26.543172Z","end":"2025-12-13T00:09:26.739654Z","steps":["trace[316972719] 'agreement among raft nodes before linearized reading' (duration: 196.455549ms)"],"step_count":1} 2025-12-13T00:09:26.739986894+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:26.739694Z","caller":"traceutil/trace.go:171","msg":"trace[1958085743] range","detail":"{range_begin:/kubernetes.io/controlplane.operator.openshift.io/podnetworkconnectivitychecks/; range_end:/kubernetes.io/controlplane.operator.openshift.io/podnetworkconnectivitychecks0; response_count:0; response_revision:37813; }","duration":"182.350067ms","start":"2025-12-13T00:09:26.557327Z","end":"2025-12-13T00:09:26.739677Z","steps":["trace[1958085743] 'agreement among raft nodes before linearized reading' (duration: 
182.318047ms)"],"step_count":1} 2025-12-13T00:09:28.712038446+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:28.711911Z","caller":"traceutil/trace.go:171","msg":"trace[1249539955] transaction","detail":"{read_only:false; response_revision:37814; number_of_response:1; }","duration":"286.82836ms","start":"2025-12-13T00:09:28.425061Z","end":"2025-12-13T00:09:28.711889Z","steps":["trace[1249539955] 'process raft request' (duration: 286.666619ms)"],"step_count":1} 2025-12-13T00:09:28.714756182+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:28.714635Z","caller":"traceutil/trace.go:171","msg":"trace[1664414610] transaction","detail":"{read_only:false; response_revision:37815; number_of_response:1; }","duration":"188.799947ms","start":"2025-12-13T00:09:28.525814Z","end":"2025-12-13T00:09:28.714614Z","steps":["trace[1664414610] 'process raft request' (duration: 188.512005ms)"],"step_count":1} 2025-12-13T00:09:34.888446857+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:34.887806Z","caller":"traceutil/trace.go:171","msg":"trace[1776565058] transaction","detail":"{read_only:false; response_revision:37823; number_of_response:1; }","duration":"160.594235ms","start":"2025-12-13T00:09:34.727193Z","end":"2025-12-13T00:09:34.887788Z","steps":["trace[1776565058] 'process raft request' (duration: 160.468024ms)"],"step_count":1} 2025-12-13T00:09:35.911266091+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:35.911121Z","caller":"traceutil/trace.go:171","msg":"trace[2087100809] transaction","detail":"{read_only:false; response_revision:37825; number_of_response:1; }","duration":"193.053478ms","start":"2025-12-13T00:09:35.718041Z","end":"2025-12-13T00:09:35.911094Z","steps":["trace[2087100809] 'process raft request' (duration: 192.883376ms)"],"step_count":1} 2025-12-13T00:09:38.118835655+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:38.118691Z","caller":"traceutil/trace.go:171","msg":"trace[2028676025] 
linearizableReadLoop","detail":"{readStateIndex:41612; appliedIndex:41611; }","duration":"428.710192ms","start":"2025-12-13T00:09:37.689964Z","end":"2025-12-13T00:09:38.118674Z","steps":["trace[2028676025] 'read index received' (duration: 428.580121ms)","trace[2028676025] 'applied index is now lower than readState.Index' (duration: 129.221µs)"],"step_count":2} 2025-12-13T00:09:38.118835655+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:38.118775Z","caller":"traceutil/trace.go:171","msg":"trace[763931153] transaction","detail":"{read_only:false; response_revision:37829; number_of_response:1; }","duration":"431.184775ms","start":"2025-12-13T00:09:37.687568Z","end":"2025-12-13T00:09:38.118753Z","steps":["trace[763931153] 'process raft request' (duration: 430.978003ms)"],"step_count":1} 2025-12-13T00:09:38.118891356+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:38.118867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"428.886854ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/consoles.v1.config.openshift.io\" ","response":"range_response_count:1 size:5493"} 2025-12-13T00:09:38.118929966+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:38.118886Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:37.687542Z","time spent":"431.281945ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":13212,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:09:38.118929966+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:38.118892Z","caller":"traceutil/trace.go:171","msg":"trace[1414521488] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/consoles.v1.config.openshift.io; range_end:; response_count:1; response_revision:37829; 
}","duration":"428.930364ms","start":"2025-12-13T00:09:37.689953Z","end":"2025-12-13T00:09:38.118883Z","steps":["trace[1414521488] 'agreement among raft nodes before linearized reading' (duration: 428.790593ms)"],"step_count":1} 2025-12-13T00:09:38.118939056+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:38.118919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:37.68991Z","time spent":"429.002834ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":88,"response count":1,"response size":5516,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/consoles.v1.config.openshift.io\" "} 2025-12-13T00:09:38.510647254+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:38.5105Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.289276ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/consoles.v1.config.openshift.io\" ","response":"range_response_count:1 size:5493"} 2025-12-13T00:09:38.510716574+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:38.510641Z","caller":"traceutil/trace.go:171","msg":"trace[520693298] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/consoles.v1.config.openshift.io; range_end:; response_count:1; response_revision:37829; }","duration":"387.400377ms","start":"2025-12-13T00:09:38.123181Z","end":"2025-12-13T00:09:38.510581Z","steps":["trace[520693298] 'range keys from in-memory index tree' (duration: 387.109734ms)"],"step_count":1} 2025-12-13T00:09:38.510795825+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:38.510734Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:38.123148Z","time spent":"387.565838ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":88,"response 
count":1,"response size":5516,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/consoles.v1.config.openshift.io\" "} 2025-12-13T00:09:38.510996827+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:38.510863Z","caller":"traceutil/trace.go:171","msg":"trace[857485316] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/storages.v1.operator.openshift.io; range_end:; response_count:1; response_revision:37829; }","duration":"188.648017ms","start":"2025-12-13T00:09:38.322188Z","end":"2025-12-13T00:09:38.510836Z","steps":["trace[857485316] 'range keys from in-memory index tree' (duration: 188.088021ms)"],"step_count":1} 2025-12-13T00:09:40.538292044+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:40.538139Z","caller":"traceutil/trace.go:171","msg":"trace[1597611684] transaction","detail":"{read_only:false; response_revision:37834; number_of_response:1; }","duration":"933.138048ms","start":"2025-12-13T00:09:39.604977Z","end":"2025-12-13T00:09:40.538115Z","steps":["trace[1597611684] 'process raft request' (duration: 932.973937ms)"],"step_count":1} 2025-12-13T00:09:40.538374514+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:40.538309Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:39.604966Z","time spent":"933.264419ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4421,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:09:40.542613414+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:40.542529Z","caller":"traceutil/trace.go:171","msg":"trace[851441133] transaction","detail":"{read_only:false; response_revision:37835; number_of_response:1; }","duration":"645.771033ms","start":"2025-12-13T00:09:39.896739Z","end":"2025-12-13T00:09:40.54251Z","steps":["trace[851441133] 'process raft request' (duration: 645.636731ms)"],"step_count":1} 
2025-12-13T00:09:40.542713335+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:40.542667Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:39.896707Z","time spent":"645.887854ms","remote":"192.168.126.11:38408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5094,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:09:41.762011958+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:41.761849Z","caller":"traceutil/trace.go:171","msg":"trace[213705337] transaction","detail":"{read_only:false; response_revision:37837; number_of_response:1; }","duration":"223.40346ms","start":"2025-12-13T00:09:41.538426Z","end":"2025-12-13T00:09:41.76183Z","steps":["trace[213705337] 'process raft request' (duration: 223.266919ms)"],"step_count":1} 2025-12-13T00:09:43.521751603+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:43.521618Z","caller":"traceutil/trace.go:171","msg":"trace[1957676863] transaction","detail":"{read_only:false; response_revision:37841; number_of_response:1; }","duration":"233.962759ms","start":"2025-12-13T00:09:43.287632Z","end":"2025-12-13T00:09:43.521595Z","steps":["trace[1957676863] 'process raft request' (duration: 233.834607ms)"],"step_count":1} 2025-12-13T00:09:44.219678971+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:44.219553Z","caller":"traceutil/trace.go:171","msg":"trace[115599865] linearizableReadLoop","detail":"{readStateIndex:41626; appliedIndex:41625; }","duration":"283.52479ms","start":"2025-12-13T00:09:43.936004Z","end":"2025-12-13T00:09:44.219529Z","steps":["trace[115599865] 'read index received' (duration: 283.348968ms)","trace[115599865] 'applied index is now lower than readState.Index' (duration: 174.362µs)"],"step_count":2} 2025-12-13T00:09:44.219734212+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:44.219646Z","caller":"traceutil/trace.go:171","msg":"trace[1939321088] 
transaction","detail":"{read_only:false; response_revision:37842; number_of_response:1; }","duration":"341.151087ms","start":"2025-12-13T00:09:43.878466Z","end":"2025-12-13T00:09:44.219617Z","steps":["trace[1939321088] 'process raft request' (duration: 340.880584ms)"],"step_count":1} 2025-12-13T00:09:44.219734212+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:44.219684Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"283.665631ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/poddisruptionbudgets/\" range_end:\"/kubernetes.io/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:8"} 2025-12-13T00:09:44.219764192+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:44.219724Z","caller":"traceutil/trace.go:171","msg":"trace[1354312100] range","detail":"{range_begin:/kubernetes.io/poddisruptionbudgets/; range_end:/kubernetes.io/poddisruptionbudgets0; response_count:0; response_revision:37842; }","duration":"283.725372ms","start":"2025-12-13T00:09:43.935986Z","end":"2025-12-13T00:09:44.219712Z","steps":["trace[1354312100] 'agreement among raft nodes before linearized reading' (duration: 283.636051ms)"],"step_count":1} 2025-12-13T00:09:44.219865023+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:44.219805Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:43.878438Z","time spent":"341.279168ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4438,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:09:44.818593618+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:44.818469Z","caller":"traceutil/trace.go:171","msg":"trace[179829707] transaction","detail":"{read_only:false; response_revision:37843; number_of_response:1; 
}","duration":"517.66437ms","start":"2025-12-13T00:09:44.300681Z","end":"2025-12-13T00:09:44.818346Z","steps":["trace[179829707] 'process raft request' (duration: 516.833942ms)"],"step_count":1} 2025-12-13T00:09:44.818645088+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:44.818576Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"400.465759ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/operatorpkis.v1.network.operator.openshift.io\" ","response":"range_response_count:1 size:6420"} 2025-12-13T00:09:44.818687709+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:44.818342Z","caller":"traceutil/trace.go:171","msg":"trace[1246636678] linearizableReadLoop","detail":"{readStateIndex:41627; appliedIndex:41626; }","duration":"400.230447ms","start":"2025-12-13T00:09:44.418089Z","end":"2025-12-13T00:09:44.818319Z","steps":["trace[1246636678] 'read index received' (duration: 399.429989ms)","trace[1246636678] 'applied index is now lower than readState.Index' (duration: 798.808µs)"],"step_count":2} 2025-12-13T00:09:44.818687709+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:44.818658Z","caller":"traceutil/trace.go:171","msg":"trace[275914584] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/operatorpkis.v1.network.operator.openshift.io; range_end:; response_count:1; response_revision:37843; }","duration":"400.558829ms","start":"2025-12-13T00:09:44.418078Z","end":"2025-12-13T00:09:44.818637Z","steps":["trace[275914584] 'agreement among raft nodes before linearized reading' (duration: 400.337307ms)"],"step_count":1} 2025-12-13T00:09:44.818752759+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:44.818709Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:44.418015Z","time spent":"400.683171ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request 
count":0,"request size":102,"response count":1,"response size":6443,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/operatorpkis.v1.network.operator.openshift.io\" "} 2025-12-13T00:09:44.818752759+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:44.818719Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:44.300662Z","time spent":"517.925743ms","remote":"192.168.126.11:37496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4563,"response count":0,"response size":41,"request content":"compare: success:> failure: >"} 2025-12-13T00:09:46.952034151+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:46.951329Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"441.430922ms","expected-duration":"200ms","prefix":"","request":"header: lease_revoke:","response":"size:29"} 2025-12-13T00:09:46.952124484+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:46.95208Z","caller":"traceutil/trace.go:171","msg":"trace[1185212477] transaction","detail":"{read_only:false; response_revision:37847; number_of_response:1; }","duration":"200.708838ms","start":"2025-12-13T00:09:46.751351Z","end":"2025-12-13T00:09:46.95206Z","steps":["trace[1185212477] 'process raft request' (duration: 200.596455ms)"],"step_count":1} 2025-12-13T00:09:46.952204596+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:46.95214Z","caller":"traceutil/trace.go:171","msg":"trace[813058923] linearizableReadLoop","detail":"{readStateIndex:41631; appliedIndex:41630; }","duration":"413.407422ms","start":"2025-12-13T00:09:46.538719Z","end":"2025-12-13T00:09:46.952127Z","steps":["trace[813058923] 'read index received' (duration: 56.202µs)","trace[813058923] 'applied index is now lower than readState.Index' (duration: 413.34943ms)"],"step_count":2} 2025-12-13T00:09:46.952288909+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:46.952231Z","caller":"etcdserver/util.go:170","msg":"apply 
request took too long","took":"413.500325ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/health\" ","response":"range_response_count:0 size:6"} 2025-12-13T00:09:46.952288909+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:46.952258Z","caller":"traceutil/trace.go:171","msg":"trace[385665374] range","detail":"{range_begin:/kubernetes.io/health; range_end:; response_count:0; response_revision:37847; }","duration":"413.535806ms","start":"2025-12-13T00:09:46.538715Z","end":"2025-12-13T00:09:46.952251Z","steps":["trace[385665374] 'agreement among raft nodes before linearized reading' (duration: 413.466194ms)"],"step_count":1} 2025-12-13T00:09:46.952313449+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:46.952279Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:46.538666Z","time spent":"413.608498ms","remote":"[::1]:52002","response type":"/etcdserverpb.KV/Range","request count":0,"request size":23,"response count":0,"response size":29,"request content":"key:\"/kubernetes.io/health\" "} 2025-12-13T00:09:46.978813876+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:46.978668Z","caller":"traceutil/trace.go:171","msg":"trace[875842192] transaction","detail":"{read_only:false; response_revision:37848; number_of_response:1; }","duration":"211.442828ms","start":"2025-12-13T00:09:46.767204Z","end":"2025-12-13T00:09:46.978647Z","steps":["trace[875842192] 'process raft request' (duration: 211.285514ms)"],"step_count":1} 2025-12-13T00:09:46.978931029+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:46.978839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"242.121156ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/ipaddresses.v1beta1.ipam.cluster.x-k8s.io\" ","response":"range_response_count:1 size:4414"} 2025-12-13T00:09:46.978979871+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:09:46.978916Z","caller":"traceutil/trace.go:171","msg":"trace[57877628] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/ipaddresses.v1beta1.ipam.cluster.x-k8s.io; range_end:; response_count:1; response_revision:37849; }","duration":"242.205268ms","start":"2025-12-13T00:09:46.736685Z","end":"2025-12-13T00:09:46.97889Z","steps":["trace[57877628] 'agreement among raft nodes before linearized reading' (duration: 241.991192ms)"],"step_count":1} 2025-12-13T00:09:46.979217947+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:46.978957Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"435.571833ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/probes.v1.monitoring.coreos.com\" ","response":"range_response_count:1 size:4394"} 2025-12-13T00:09:46.979217947+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:46.979004Z","caller":"traceutil/trace.go:171","msg":"trace[763187445] transaction","detail":"{read_only:false; response_revision:37849; number_of_response:1; }","duration":"196.210917ms","start":"2025-12-13T00:09:46.782766Z","end":"2025-12-13T00:09:46.978977Z","steps":["trace[763187445] 'process raft request' (duration: 195.823936ms)"],"step_count":1} 2025-12-13T00:09:46.979217947+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:46.979085Z","caller":"traceutil/trace.go:171","msg":"trace[2124326597] range","detail":"{range_begin:/kubernetes.io/minions/crc; range_end:; response_count:1; response_revision:37849; }","duration":"127.695414ms","start":"2025-12-13T00:09:46.851372Z","end":"2025-12-13T00:09:46.979067Z","steps":["trace[2124326597] 'agreement among raft nodes before linearized reading' (duration: 127.606081ms)"],"step_count":1} 2025-12-13T00:09:46.979217947+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:46.979101Z","caller":"etcdserver/util.go:170","msg":"apply 
request took too long","took":"303.236243ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/runtimeclasses/\" range_end:\"/kubernetes.io/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:6"} 2025-12-13T00:09:46.979217947+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:46.979117Z","caller":"traceutil/trace.go:171","msg":"trace[1338758638] range","detail":"{range_begin:/kubernetes.io/apiserver.openshift.io/apirequestcounts/probes.v1.monitoring.coreos.com; range_end:; response_count:1; response_revision:37849; }","duration":"435.679146ms","start":"2025-12-13T00:09:46.543347Z","end":"2025-12-13T00:09:46.979026Z","steps":["trace[1338758638] 'agreement among raft nodes before linearized reading' (duration: 435.424159ms)"],"step_count":1} 2025-12-13T00:09:46.979217947+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:46.979133Z","caller":"traceutil/trace.go:171","msg":"trace[1350800142] range","detail":"{range_begin:/kubernetes.io/runtimeclasses/; range_end:/kubernetes.io/runtimeclasses0; response_count:0; response_revision:37849; }","duration":"303.283175ms","start":"2025-12-13T00:09:46.675839Z","end":"2025-12-13T00:09:46.979123Z","steps":["trace[1350800142] 'agreement among raft nodes before linearized reading' (duration: 303.218483ms)"],"step_count":1} 2025-12-13T00:09:46.979217947+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:46.979158Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:46.675794Z","time spent":"303.357227ms","remote":"[::1]:52338","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":0,"response size":29,"request content":"key:\"/kubernetes.io/runtimeclasses/\" range_end:\"/kubernetes.io/runtimeclasses0\" count_only:true "} 2025-12-13T00:09:46.979284949+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:46.979222Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start 
time":"2025-12-13T00:09:46.543323Z","time spent":"435.82935ms","remote":"[::1]:37958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":88,"response count":1,"response size":4417,"request content":"key:\"/kubernetes.io/apiserver.openshift.io/apirequestcounts/probes.v1.monitoring.coreos.com\" "} 2025-12-13T00:09:48.872767807+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:48.871977Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"537.341688ms","expected-duration":"200ms","prefix":"","request":"header: txn: success:> failure:<>>","response":"size:18"} 2025-12-13T00:09:48.872767807+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:48.872206Z","caller":"traceutil/trace.go:171","msg":"trace[754127860] transaction","detail":"{read_only:false; response_revision:37887; number_of_response:1; }","duration":"550.384085ms","start":"2025-12-13T00:09:48.321783Z","end":"2025-12-13T00:09:48.872167Z","steps":["trace[754127860] 'process raft request' (duration: 12.736178ms)","trace[754127860] 'compare' (duration: 536.812313ms)"],"step_count":2} 2025-12-13T00:09:48.872767807+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:48.872342Z","caller":"traceutil/trace.go:171","msg":"trace[1960519724] transaction","detail":"{read_only:false; response_revision:37888; number_of_response:1; }","duration":"135.956434ms","start":"2025-12-13T00:09:48.736357Z","end":"2025-12-13T00:09:48.872314Z","steps":["trace[1960519724] 'process raft request' (duration: 135.782219ms)"],"step_count":1} 2025-12-13T00:09:48.872767807+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:48.87241Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:48.321764Z","time spent":"550.522ms","remote":"[::1]:52082","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":818,"response count":0,"response size":41,"request content":"compare: success:> failure:<>"} 2025-12-13T00:09:48.872767807+00:00 stderr 
F {"level":"warn","ts":"2025-12-13T00:09:48.872529Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"464.8595ms","expected-duration":"200ms","prefix":"read-only range ","request":"key:\"/kubernetes.io/serviceaccounts/\" range_end:\"/kubernetes.io/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:9"} 2025-12-13T00:09:48.872767807+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:48.872575Z","caller":"traceutil/trace.go:171","msg":"trace[440737719] range","detail":"{range_begin:/kubernetes.io/serviceaccounts/; range_end:/kubernetes.io/serviceaccounts0; response_count:0; response_revision:37888; }","duration":"464.921992ms","start":"2025-12-13T00:09:48.407641Z","end":"2025-12-13T00:09:48.872563Z","steps":["trace[440737719] 'agreement among raft nodes before linearized reading' (duration: 464.771128ms)"],"step_count":1} 2025-12-13T00:09:48.872767807+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:48.872636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-13T00:09:48.407585Z","time spent":"465.042516ms","remote":"192.168.126.11:38038","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":319,"response size":32,"request content":"key:\"/kubernetes.io/serviceaccounts/\" range_end:\"/kubernetes.io/serviceaccounts0\" count_only:true "} 2025-12-13T00:09:48.872860390+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:48.872357Z","caller":"traceutil/trace.go:171","msg":"trace[739949486] linearizableReadLoop","detail":"{readStateIndex:41674; appliedIndex:41673; }","duration":"464.647715ms","start":"2025-12-13T00:09:48.407657Z","end":"2025-12-13T00:09:48.872304Z","steps":["trace[739949486] 'read index received' (duration: 45.041µs)","trace[739949486] 'applied index is now lower than readState.Index' (duration: 464.600074ms)"],"step_count":2} 2025-12-13T00:09:48.874385774+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:09:48.874105Z","caller":"traceutil/trace.go:171","msg":"trace[936672082] transaction","detail":"{read_only:false; response_revision:37889; number_of_response:1; }","duration":"127.283034ms","start":"2025-12-13T00:09:48.746799Z","end":"2025-12-13T00:09:48.874083Z","steps":["trace[936672082] 'process raft request' (duration: 127.15623ms)"],"step_count":1} 2025-12-13T00:09:49.497197765+00:00 stderr F 2025/12/13 00:09:49 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing" 2025-12-13T00:09:51.139936566+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:51.139848Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"} 2025-12-13T00:09:51.140060061+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:51.140016Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"crc","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.126.11:2380"],"advertise-client-urls":["https://192.168.126.11:2379"]} 2025-12-13T00:09:51.140200995+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:51.140168Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp [::]:2379: use of closed network connection"} 2025-12-13T00:09:51.140311168+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:51.140292Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp [::]:2379: use of closed network connection"} 2025-12-13T00:09:51.161120319+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:51.160525Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept unix 192.168.126.11:0: use of closed network connection"} 2025-12-13T00:09:51.161120319+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:09:51.160689Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to 
error","error":"accept unix 192.168.126.11:0: use of closed network connection"} 2025-12-13T00:09:51.161120319+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:51.160834Z","caller":"etcdserver/server.go:1522","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d44fc94b15474c4c","current-leader-member-id":"d44fc94b15474c4c"} 2025-12-13T00:09:51.195000389+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:51.194905Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"[::]:2380"} 2025-12-13T00:09:51.195073892+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:51.195045Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"[::]:2380"} 2025-12-13T00:09:51.195073892+00:00 stderr F {"level":"info","ts":"2025-12-13T00:09:51.195059Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"crc","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.126.11:2380"],"advertise-client-urls":["https://192.168.126.11:2379"]} ././@LongLink0000644000000000000000000000022500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd/0.log.gzhome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000327347215117130647033021 0ustar zuulzuul‹§±ò3fVÏùQk*îù0ÄQ¤ƒ,Œ£±Õ‘‚Ì—F–øQ:‰“¬muð’uwè¬~èÃc‡m󃕫ŜðÒ=˜¦ºÿ¥ÑøçVJKRÊ<ÊÕªÀ;óÅfú>ƒ%æÐ-]`¯¤F=å)æqw•ZßÏü7ab…©5It Š`Áâ‚$¨ ÛŒƸðÄ2™…°­,†[50ÝBY[¹fY½ife7ÚJur«Í³üL룲„òðw¸ÙO­±÷`Bžß°zz'úâ'ªL0dóYOc®]1—0ÏY¬=†u}´~öCCžf¡ˆS ¥v»Äò£¾åyÊEh= RiÄÖ¬~•ÔqˆCV¹ü´ÙÀšÑ1iJÒ§Ý *•ä·þÄ[Ó‡´¸Sm¸…#©·Ì@ÐÈëÓ7Ýÿúôáú¤ûúäôï‹7Ý׿^w®Ž]Çõ<.=àÉùy×R?œ]\_ßdÙ]F…}í±Ø³÷'o;Ç¿Mý‡f·â‰ŽÒ›pÙ…xí¾¾mÅÁľMbƒöáÀßÒŸ9²ípå tÏ÷Òï{´ç¸:àÌó)]Ÿ¹•Š+?вçóPÂuÝë Ιxýßc¹W×'×g§ÝÞtê\^}¸8ÞƒÌéõù¡ФÿÞùõ¸öÙú6ëŒ4xÞ˜nö$†(£Á²Z©ÿÿšPá¦6'Z'6X~ó›~Øo 'ÏöeÌéÉiçòúqàsápìOŠE£Kça~+ðíÞ4êt3H²=¤ü®sryýºsrÝu.:9?¦„ìAèâä}çx£ 
ß®iÀ~¯»—OWîÉ›7—««ã,™ê=hu~ùع<{ß¹¸>9ïþ|ryqvñDôñü×î›O— Ñ ÆŒqZ—öû“_ºçÀ¼ °cºµ7'àºÞœ]·ný¤5 {F¶û,ë¼sЏº×°¶Ÿ®Q€dO…ÜIw²§ýÔñìâìú ¸{zþé ÔÑ8¡Î±¾SŒ–õ ïAâôìã;\ʧ3 /×çWÝÎé›wüûê¤ûóÙõ»îIçªK™Û}{ú¾{õî<öÑbÞåN³V¨ÁÅÙ<îŠMÔ6Î*Q;}wO~þ+åÄÙ²ÂMsëâõé»îÇËoѶ»®Ï~üuáqœ½¬ñâäõy§û¨þ¸Õ[H‰òV3 qÓ…¶ìȲ©gA†l> FHžöÙv^·ÿîOò¶)}ŽÃhÀ¾Ÿè$CÆéì"±´ƒ8I¦“Ìntðͬ f¦øï›Fãi”#%0<»ÇïC–š…š&£ts¸tñN´:{)p=‹“Üh0„Âg¦ .0"Í4ÜëïH~׸‡Ì0YΚÝ̸žû£çáTÙ·ÍH?¯ÊYÈŒrmnåD ¾¬sléò*ÛÚXÌØ®ÊCÛ'­£Ê ÁÇßfæÑ4 ï+¨‰u X0ª?ÖY)˜ž†·zqoq©úv,ª*Ý'ÊÁÍ4øÍ$A$÷þQni Fþ0máߨÀ ”—:A<Œò Š=,Ê-݆I¡/± %ýÞHà ³íÈk¸u=&•'Ýú£)Îú¦*»Ï$\Ê%!áÎ2Ã*¹.%‹‡Åë:/„w5­Âáò pÅK-¥-U@ÑèK©ñÆr©R¾‡Å,Ü?tuýYß”¤‡eä ¶eþU¬ṗ—Ò„õÎÊ¿¶®"Ž£êÁÍ7_^oeé^yVÍ·C_ 76¾ª`âvÂAò—ÒäŠ.{àEãý°¸_*-©jK>=b{.£®»ïjŽXn++¼âiT¼jiØžÏLøOÜ¡8(]÷eXµgrP`}!ÍXi£nÅc‚Ê: ã˾w^ûE{ÜpÀ.§Q„û¨Xé3âF}±çGV GÚl±6­ë›0Åà(Î,äǰâƽلMâþÔì’7÷÷¸û⌳epPÇý02¢SkàÚžIH†@õ³™Ë}ÉÙÇRgÒ íØ›4s«º“æÂžýIsïa:”†ôAz”†òÁº”9KªnæÊ¡:• âÏß«\Ð>h·²ÄŸì{rÇ2·šš=Ë2‘í]K3s­oY¾ÿÑÎeãë&Ç$¨K¤WÏ1I¢¶;&JåÜ3öÆO­žÖQÕј‹ç|ì~˜¬õ­àR:ó0Á“Ÿ¬iìíñˆÂ]Ïù¿Qw)qXÁ "T¸¹ÔCËO„—ÆÁ7Yñ˜Â”DC† fÔOt OCÚ "ÖF{àR}H ŒoÅ¡*q ÕY¹éq¬Z#ÍJƒÜhMeÜqh-,’¹[À@U¾0¥ÙI³»0»Éá\Ÿc¡‘RÛ< ƒk³Ž­g‹ Gn4=A ud•<óñü\‹Ø°Ë#Ÿ–k<Áh®$#‰`åNn€L: ᙨ;îÀ­;í,ÍÛºG²™â–y;ì“TÓÝ8{£ò+N¡„÷ê(?xk¹Í’¹Siɹ(«l¹ž«­ÙDó1z#VIî‘Z†®Ümv.™C@<`¬ÐVV'~vƒ±±¯{ÓaËLØ)¨ç1¹¨±Vꉭr~¬;%?ÊÏAäñ«D´Iø1…°óÛ0> 3;½ñaèm˜]ÄÙPG Æa\š=Œi“Ѧg} AòŸYùg›6õÈ늿X¿´SÌ‚²A8I¨D |ɉĈkFÓûü³ŸÈ;Ü—h+cÿÞ&SH¾!RÉÅgÿÖGy%‹£&Õ°ËYJèŠÆ6í¶æ-wP åWlˆøaàgH ‡‹§Th¹ÎQ„@Û‘¾åFï˜ñõ¨q£õ=íg¶9ëò6ñ Bò“Ü62&žâwŠã³*­t=ø†91AŸa\®ñmSäå¬$À_'ÿ0›—,îò³àf:±Á“½¨cnßR –µªFüú”h %pUŠ¿õà¾>›™*'êU”ò\ì,NÌõ¿â‡›8Í컈?He>ÔsW6B 'šfá¨5û]¿³(Ìz-“9ãÆaj\÷â=ðjàbâäÁú²ìÛ¾4,£Ÿ‹7/÷cØûÒè'w÷‰¾4°tÒK%ÓêlfÛæPÎb|SäóƒI “@9¬i4IÂ[Xè;üAe ÎÃ'£olf*”{¼NßHú2\-"F ½ð¿‡]”t\V£$V·ì‘yÒ*\²Ø]·¸]Ë,.Zý^)ÁÚÄÑ–™•Å1¸«†p ¤2Ž"tÃV¼ÓtI0Nì# (-½MÐòŒòP±´“Àî–Á3ãD[ƒ$[³ÐÜ(EéD}ßh3ˆÍ¬4œB²ƒ{tMi}{}Ptç; +Uó%t|.¾U|³p†8fAKAo}ïÃ̦dÓ~È&B%CœzVä(æU'9“i24~ÜYç&xÙ$;ùé—mm)ìçLƒðIÀç¦iSý{³!Sê%srP°’;êÅÁþIH;(Ò;@Qš|NIqe­8')¥l˰bî†s)ÅæüAN¦¬œï|–Ã)+';x>eÓqÏŸMyÖ³)¨½¥QEë€çÒ-àxÞÒÇù ³,ñÁtÍâì„\ãs»ý5ï!l^° N=‹e¬òìâ|Áªt¼o<½o·Mê:öÒE­uÒ­'Þ ‚Ztã;«ÌÜVn(­b_}×äáßv?$l$ª¯¸ª%dÁ· ruëYu¡•ë®^¶¢Ê£tkT@•Ͼ㦥ï´"}·HëÊ+êRé:C AòI°½€ç;?ê‡}üRÅÐp9z- 
¹œ:Ï&Ño!M{Ÿ?&ú§8Ó—:äMÒµÙ»Àûn=˜Ò9ÐÖ¦6#TP^‰çR`;JkFá^ÖÂ(¸8´FÚï—öï¶Áp™CH=RÕ„a†°ðo¯‹Â¸=lÜæž.*¦(feuDÅÔÚKU ZœÇöF¦A¼´Ÿn6‹]•ì&‰§Ã«à×ZBÅ$?ƒØ×›š´¾ñǾ õˆujòO—çiûó– âë?MÃ:?§7;ÉBŠ‚2m­>¼UzÚã­§½½C÷¸ †ÄöÿÞ…\@UÙ¡é:’ò{Ûø&ÈÁ\Už#¶,Ò•œÉzZÄ”sØUB•㺠ì5VÉ «èqš7µð/³“ J¹vf}j«ïëqtuBn_\¹25{hƒ)nH™•†`Ž»,T:´ÎA%CBH¶Mج¼Ð"™-¤\ƒ ú`šè¾9«Û |]ì4륔w-«ßˆLEê4³9Ê{9dEu 5Ì$e_2Cʱ<Ü RAÂqäòAXU7èÌOùÎÅ¢6ƒŸ)u3gIÐc©W³Ã&‹1ùÊB¿\êOC.ß„lˆiÚÔ¦*¨"›® T££þ°ùpàSç­£ùWk™S/ó´é;Ëú~¶¬¶…«¢Ü£Â§ÉŸJ£ÀV1;ÅwV~°(»ññÛâ Ž4 üeúÌq›B¹ÿóß@ýk¾Önñ^Û¨L€WG$HùéŠàP ¹œ‡¨?êeÁãH·§‡aÔ.uþð+0ŒuÚ ²´õ*¿½ Ò{d*y…{·“8JuΖvyd¶ëŒ¦Èi¥Fyàˆ‰ØI£¸zL£$eªB£JlùÎ&Z›¯„ðÁñMÒcaÊ—߉?· ÐÔ ðë§\,,Æë:A7ë„r½=Ï Ïìe5¹ØÍ9N‰I.¾ZæZÈü¯ù„nò@[$M1+ëÆƒîlJ{]®LxMÁ1#T®ÊÁfË£žBºUŽ¢„ì»I›s*FžEz²bɰ*âyJˆÝå&=Å•R¢†Ü€cOCtйj?—îV¸tµn€Üm:Â¥”>"(ÕtÈ;åvATêV¹ô˜<:, Ò!˽L¡–?ç‚4NŸàÏ•p9ٳƞ«‘tö°]F<‡@]BjÛ.ÛÕv)irÕ8T%„+½üí…­*Á<^e¼%h»/Åÿ<ßX’ñ>ƒà„³G ¦ŽÄv¡ëñ'FâIÆI˜=#N@¶al¬A¶;‰åÿ“±MÅz36 †²¸Ö1Àq\ƒê†d`yX²¯M`pÖíb{pHÝèD‰l‰c°ÍC¬ìÒÜ`jƒdœÈ¡ºÜãa ‰Ã¡Éj…Ÿ‹ô*Š):Ä<Š}Åh "MÄ@©;1)Ô„ºUSöMP#ã9ØG*}Ñôðãs³Qùú©~GWÈ>?êî—Ë/šÌ?ÎsÎa¬ÃA´(º4‘^D‹r´H`R´8xÐ9‚ÂAl‚‹¶Übj"¯œ¦P{U‘Ë Ž%øJj2㨉IåQ 5y—¥&öÝØˆÃxéúAg¡&¹uÐã>ššV˜$Hhµ”ÔÿMÁ¡'. 
&QT.KMR©³ÁƒÎDMÞšFPƒ³™©§&Ö­ ¡Ê¿•ÿp“í[.Nˆ »i&$ïͨ¿i2®nlÚ+]CV”WÄI(V) ¢.8½¯íÆW$D k²ØMwBÁ¢Íú(€ËæÂ(‚öhø¼ܦÐ6­Ä1‘kYC>p!Þ‚kXG•‰©Í{tGxªâ6±¶ÄšìN†Ò0¬…ЈýŽ1Ç›E^œ€cA ððNhZ ÞÄ þ)ÃßX/?¡ºp¤;ÂT‰IÇ–P˲'#.#Î5Z:fóˆc"4G×An\Ò˜Ú‚V‚8Kš÷©»ã(IFœñÈe‘º0ò3"}¤Ê¸ÏúÀа¯Ö?339=‹k 0ŠØµvZ G‹ï«Å à)GlXQrR¥+zWV&DquÑ«­í_ÑG¶ÆŠWŠCfzys©º±ýq´ T)Q‹È}ÚšÙ‚VÂ¥r-ã=—"Ž4DÔ"®=ÂûñJAls'ðdÄ•†Yµ(Û žco±¡Hb¯Ã[¸IºªÈJð&·2N ^‹ðÆšÑ>‚þ|ÞÖG_a‡‚zâ$®üD´ÙPZÅaƒÑÊTç´@yÂ>öÄÛp›ÒŠÈЬaoÉÉÉ`.œmÇû‹vÇ*kfsÄ~ÙaE–gÔéÞSÕ› ¢8‡E꫾é 2"ƒÞÏam!I~€VhÍD"ưcÍ I§±,7âwgKÌ™îÎh˜‘±®~}T ±öTG3º¿}ÐMAç«çÕÑœ´=BuîÜXÜQ…ÙA ^ì{œPwE%“™˜+’ÒڷÖØÅ´Áž1Ió‚ˆ<x_¬x… œAË8… \o’Œ.æD'N•§i"C>ÿþøüÛâêvùåáñåõvõr¾þ¬ß_µ|þrýºøàòµNT„tϰ6u¹"ú2˜£/g<¦ C7¯4SÓ0Ce9»-ÄI¢‚j .¬ñ:fDäÔd㬴çEdùM*@¬;êZ­edRn +* Õà~1ÞPl9Äx µAö‡\Õó"~£ækD<Âá á‚CLE×D&ã8ƒ)lœc:ÎèaÅtwÒ˜¼„¡1‘¬ßéž0I¾s7ŠŽ|©„Ÿ‰ ¼u5‰<Äà Ü4ù¾.zZÜÝÞ\¯~®î®·¦áòuy÷øeû ®ñ&|¿©ö!IiZºoÙÚ"JC—£4 Ç&3„››CÖGM=‡Â³R‹¯ˆ£!z/•'Júe’žÄDtD6䱪“}«É|ưAO± ^ì©ZðÁ ɸˆ… *ÚÄr±­÷H&ÚzDG+äZhóÊBcUq7 öP9„D¬Ûp Æ<Ç¢Ò+Ç­‚ )ólKõ/wr¹›üñKâˆi#‚,6d±8ì6hÏÑx" ˆQ{Oi*»‡â*:tGAvÞ\Ä1GÂÖdEYg×v¬Ûr!-€Òã4´ÔMñd}§MÚâöJ·¿þ&­¢ÛE>d {À@¢ÛRÝ œþyPy ZW‹þqõº ÎïðúÚ\F·ÁÀÄèö—Ú o»S†·ç¸í{ñí–¢úBqáþÅ9ZåX¬YÑ{Œ˜Ó½»{}Ûvj¬w­w‹(¿_Ý<è<®ÈçPÿòê ©"¬àE8qÉ&Ö@ÏÛÈìrÑܤbX‰¶‘/·Ó­6<"½Y› .&¸Q¨ËÙF˜„jÏ5><".NMðòx·õÎ×Èß×aÇï×—_E…/‡8u¹¸Ã¿×釔T‰øÇíäÖ©â²4ä“ù£Á‹ÍlYÈf.Î&pcõëdëæ;¯ÐJŠ Ñ`,[ònl‡ÃaµÔ:E|¾Mc\]Þ \®xqÿúÃ…A‰øÕryYE*œ˜QìsGùX&;‚ã–Dû–1³WÜ»áíáÎùæ†uQæ±Z·çt\µ¬huFlt%#à^Xu}Dº-S}_6¥yع­;Ej†ByÇL¼âú=sâßù¼Vq·•¨ê qØ €t!fIÇ»$å ^j¦ü+¡#P7dã4yàjšµ­SÃHþc¬ ‰r6ŠIL†F§­Z_„IÎÊ÷ζô-0ù°ÙÆy²{©•áÓšö¡ l¹¸@±«ÄyäÚá6íþAz úöÛŽ½öUžêúùâ×ýÓ¾?å]°âÕ´¾–J„Çßĉº½º€¬˜–pƒ~I1ØÏ»ÆXku¹¤ÙÕ½¤ðU/¨ê¨ÙhC­©Ç $8êpâS0Fsó&×ÛܲK4ö8G)¬6)À”d+¹‘ëâ^ÈmxÆ´l%›F7LF?†§t³®§IÓïo¯KÕ•£`ŒýtJ8;±'Š¥„ÙòÿÁwÑÃç% s5¥FÔ‘†ô'2숵Ύ;lŒoy’EXaÞŽ Úhš“ªB¸œÅ¾}¡,9½6¡àÛ±äf!¨ ßöŒwÿ˜ Y¨¤ ÎmRª yªð)ª¼ËD!—l³E‰°tâ œ0Ÿíf îåP{´ƒÉÆY£sPãÊ×.MÒÛ'E.úS#RÈÕÀF±²Kë9§5—ô¶±Bz‹ð~¢h'‡[¡0Ü´3 ´!/Q—Ëdå¯M6æ +Ùà ^Šˆ¡âüáÉê|qÕ›ø¿ì]ër9®~•©ý³Ž:¼àÆóçÇy†-[²×ÄNÆv6›yúZ²Õ’(5›ìV²[“ÌT*vL‘øáJ€x7YåšNÌ®ö’7W©§Sð0AçØ3¦¥ç' éÏKÿ ß”†oÕJ€ó¾…N]ðá 
Ùùí¼’J®ÇsoVfIÉÏ¥:@*TRŠŽ|¨ì¸[@£¹,þž÷l£*‡SB7nòçLþEf0ùm×’ú’½ëZüè—.CU*Cl;5~ÂU¹ºüâ¶?×&äòk_Â`}’š0„]†$µO€Bû{­Ò)kØÞ'¦q›Èú~§qÙÕ…Ÿ@Èþd%cÖªûnDœ!Á1Çêë[ã¦jh­LêFy—©k?vIÉÊð­(÷"¯”£1Ú¯ä²óm§)˜‘Ø©¡J>ô‹®Ð6$Q°µ±ÔÈíå $µ|‹*Yúñìö:ñ©KÑwê²3]¡Ë8üŒ.ã%uoQät£HÄ“aë®±Áxí&:IÅ»°¿øyUÃ!ˆþ@ dɱ¬–B–ê9¯Ù¬j¤TÕ¤~#3à—çŽá—œ© ­Dר$ìQ`Š®eܸ#’,ðtÆ cdï¯ørãáñæãÝêùîãÃËëó½±µ¾Y}NáÓÄÒq©»Ò/á«î &%«½äm«Ê¿D¦ª,±'A.õ Ùáâë)æRÖFdŽ‚|•Úh– O»ƒýÅ=Ïaq—QéŒÛ6GÁ,ߨB‡ëÔl—uÉPܬ­Ùs\»Õš¸¸ð« c¢K™Âû>¤¬xÃmùnYùÕQ êã£v{¤¦¸Àèg]ˆDR5RjPH‘‚¯™§ IœØ¨¥æX£Ÿ0! õ„qňeD±D{„ž¸?ZÉKÝVJa‚ÞçÛ(âD"ZZosS¨ì;î8Ð8±¨³ g”J?göDü¾~º¼{ýöùðõîñÓíÃó§ÍÓÃC&ø˜°t ?y€@˜p*üàKp$ˆ¢’Ñ"ÖêQÍ_@]@ä¿´•_íÇUõ‚¨ñMXX$&þe{ùùL(@E¼$òãnŠÞëQ7E‰—Ÿ4 ÎnŠ˜s}A¹º™ã^Æ—¶p-ysÚÍÚ8…κ¢þÕÌoË‹äXÔ¾obg‚ù'…²}z€å§<(Øý¹¸_¯×|K«ÇߟÖë÷r`¸³¶€„“ W Áz>ŒCnrUÖ·gÆh5us½&(¥Ý\ðk¢ÂzÝy}ƒoR'>;zO§À·—o¯„˜#šã¶&~iðí»œ†ôÈâ8Q¸òC‚P‚‚Ôþ` lœá±Í¦óm ƒó¸@+kÇ¡Êêãa.OÊÜþ£d–1`ŠÑ’È5»<1Hr•S_.ÌÅÜQ¬tfÖÊuö8“c• ècð#@‹Ö²:ûbkO…9¦vÉû¦Ôoa°†GAš.¡OKgÝ­H”(ÓÇJÖ¢8¼.ÐæJLAˆO°6†©P[9£©¡€¦‹j/¹-XS*Έ)¨â5Å5B§Å5‚é´o5«ŸÅ&§±Ã`¥3—îêö¬.—œ¦`¦.‰î ºf¸D[;d°AE’킪Åàò=·%00Ìßä–±óè¢Çk¶,üüÍÜÅÕËÍã×Ïê=¾E:÷Õí6¸¡Õ÷—Oß?Nò†8,y=Uþ+WSŒL‘[Öµ’²Æ9bä `tJE@€i ÔþËuTmŽÖ _œBqË\2ƒÈÖã=AkíQϾ²ô•#u²b­sÅ’Í®å*nçI7-ÙŸ¬d,AR‚´É.‘}ùãõ‡BpÙòú%a›$.›fßÒÞ–×÷Ü#Ja¦‘zeé/É Ô[¾<²Ô3F › :ŸÉ¾Cl¬Ž¬ØÃ %Æž #K’eslî\ÚŒ:´‘dÑIu[‚ Æwu‘ÌmôRe× ŸÚµB§ãôê€jö࣠¸ËyûíY³À7<͸aK¾³7‚UãÃ%š [«&S, bÃv.9H­(X<‘Õµ*C)méխЛd\$¢…‰GE"Å\òmp°’2\U1ÞZޏ:ƒ%²º0©Ko3‡¡œÓöÖ6Ø ÀN§TáȦ$^$…ÆæÎÛ²U·Vo§:Íø¿z|ø¸µŠßíæ²¶"¡5­d½úãöé©*ÅÛ‹Þ‰q§k»ÀEÀC£R–svô€¨s ~Q¸DÏúÄQ'ªÚØ¥úy¶„btMc+PûÛy2c±E¤¿}~ýöòA¿ýýËóï«Ç»×ç‡õËjs£–ÖÓêÍý3ÔÉÅ©{庄1Ê…Â)²¿,ö.#d+”¤ùûÖ½Wóâå·ûç/ªUôŒ_žìôË«ÊÍñ¼ÐÏ'Æ4AèoòØ"Ä5Ša’f%RúšC­¬Î§¨¶æ(ÙYUß;ÏvSØ­¤›‚Y=qqðš;ïÐÞ°Cµî·%ë*9=’øÐÌ·TZÈ©ŸÕykš£|KL>^ìÝ<泿û“•°ÍùŽ™#¾#¬‘ïEõc©ú· «™AS ³Åùªž'ŠUê—¸*ƒ?Á©ÁŸ2óÖ¼ëÐbKiœ¿‰wS%/ñ—w…ï'÷òý0ãö¾ØÀt“¦Ã'ûšÌýèºècš$À5IA_#I¡"ªÑ<ƒø^6úáð¯«õóºJc'Ÿ•(a!,“(E ‰ÙóºÌaÉyK¾û²Òpÿ¿ÎN«)Éàêæ=¾-AX ƒuGÅš$——|ˆ úµL4‘•]ú‹*[:5|v¥ÀYF¾5ß‚p–qhPÓ ñŽßp‰¶ÁhAWO;½2Š »sA2ë±®—ÛiÒü).cûØ·ŠºZŠëñfýIÒ¢n÷÷îåÑ×w°óOÂ﫪Cp«fÄøu ®j¯$¥4»F·j:í*@›\’ìUW÷ëò Ä÷«îxôªï"'Ó}ß 5ǰtÝ4ª&ç8³g¸­ÁÁ‚õÕodö|ú ÔŽ,ÆÁx•g ‹ê«Ã¶kiÍ3ÐJ¼X‚«ÈQª–ÔkA Û<³íå,ëì¨/ŒaüÎÚh‚0®ž);·t²’vêP«ÆQ& 
.¬‘÷ÌB瀣\Yí†pT°ÀE&Héô"ƒã.Ä9]3G™|ú9ÊÂK>xQK›{ûï–nW(/Ÿ>í |7|«˜wšÆJiÌe¶9Ìt -™é\fÿÙd¨Š¶õŸ*Ÿ4¿-j²`ɶ„UîeÜÊ…,ƒµeÁWè8ú$t _·'å\äy”‚±Ð ] œûg Vh ŒÄ.¡yr¥(<—DöU³qÅ%§º­Qåz,b'×9t„a\$áQ‘ˆQòCmßV EÕJÃóãƒE²JWù KSøV¡æÚøÍTqåÕ¥°’5Šn±YÈzwÖ=möY9y+T@QhåEˆÂiT|³­”š#4¦Ûð ÔͲ½YI3'l’“ãwerÉBµÑ5&97OƒªÒá_VŒ÷$ÌÈ·«§?ÿng“V›ÐcòX Ì~\㤘-¾Phù`Ëý`éPùäÃ>²GˆB¢>M´Žì*ßÄ¥Ÿ±5½~p6©±@:$5!Ç¥Â(ª`ä¬RÐm–Ò ËØ«2I¥Rƒ}ë§ÆN¨•š~ Œ5‰WbpŽZ ¥Æ†K„@iÌþDu$ÙÉE¾nî³½vG+J½b—œ…ïjûÏNA½XŒëù¦K`ªë‘œ"’8×Ê8‰¥Œ×aˆéòÛ þT,^\fœž|¾ÏúþhEŒsul¡Ãœùp‘¼•h“à "•Z‰¶e•6§ÿµð;V¥KUå¹ì¹Ê3´r–cÏ0W”dwŽ.'Íûs(ƒú1ǘí¤?8LA‘,tõ’ÁAbl°B“o¾óAå'I8OMR@ÁWu<#Hµš#«³"²D©%B¿„'ù&‚ ¸ ˆ pœ8ÿ~óùƒþoçVà}?÷‹âÜo÷››×›—Oë}ŽÜXûwïÏ¥Èý¹ýƒ$…í ¾~ÿº„nþÄ¿‹z‘<û 7e³,7Ã-Ñ~þýîõëgÅ®Ïw›O7_Z=?~¿zY“¿¿»Á@§©ÄØœ¨ÛÈ ø¯â¸lð¿n‡fÀ½É,¨çî¡V«o—ð‘ÓjbUDU¶]ÊDýsåkªÒÄGŽ1^VÅsõáb f )§‰g)ûwVr¬ö—ãaAä`‰Æ‚Èh^I±‰ßŒ8öMb€U`õŠ=Yw÷f—®ô=p/CLøËÝöàÙ¸íà`%IELŽ AÉ;tV<­î™ÇIìNhõŽMÊN€ͶoÉŸiKØó=ÀkfÛüœlû_Jsy¥yN=Zw;æ.Š.2»U¨w@ýq«“M×+=Ó)ýèÙá…°¢{Zß*óQVÏ·›Íí´(j§œg”l±ct _¡ÀT“Ñã%ý³·.žÁ9}©²&ŽœU˜!Ȉ´ õÙW%JÎаmG'â ãýÖôJ%Á¦ûîÅ/­1FÁœbŒÄ"˜àõ¤ÞCIA©â'×”.ƒ8g™Ö— 61?.ý£%KÖþëRÎ5Ûý«¶n’t,¾þ ÎŽ‹U/³bTð›>çélÚI¾Ñ¬°Ÿd„M‚⦌%†¹s árbxK&È¿ì{'à l»v±¼¨»µ$BÐtAÅàžÊ; ?,6>¥àR:SÓ/4kÉP┄T Àí(p–«HÆÖ&®âQ£Ø9ÐU\êB@Œ‹vêÍÔŒ|è§f½Þ|||x~þòür7Ú¹³l‘Űٲ]ÀMØŒ5õÁ>xˆd­ó'bs!Ñ/@w!Å[½—¿äÉ¥ ©³÷ðiܼÆ|Ážˆ3 »íÚQ¤&!;1¤˜š0àø‰íü浑yÜ:€v;²:úÝ‹¯<0 w‹ÛÙM ÿ³Áè¬x$6ÂMâ!iþ)Fê™v‘C”+ 1*M7er_ w²Mˆžj¢þ‘£¸É1“rÍü0Ö:qaÔî¶æÒŽFÑ9a® s@‘9ZÂꮽUÙátö^ÕP“Ý-‰.ìT·<¦Ó²šµ”<´ÝŸ¾¼>ÜïžàbÙ:mvÚÙ”nRMïƒk …z/©f"ƒjÇŽdºVHú˦Z!Ý[¬5“E›„é”d”ÇðàÜX‡=)牅¢%ÍȦàA Ôä±ùÂòYÂ)›%´¾ÎÓEk‰æµÖBѳ SÌ~ `:+$`÷]£ÍÆ ( /¦ÏÄó•Ýú÷§%5ôþ‡SÀÑQhRPó– ƒš÷NÐÏàÅh<Ñ{¸Mx @úk´²(¾É÷E51WY4 Ù,jÀÛ@SÔQó«I  “…Õ€Q9ärbÊ'ä ŒÊÄc#!ËœèŸd~ð¿*Àä„@ì™ZŒÕ‰Q["-àš£µþ`vqöòXuæüËcßZÚ}a…›ÛÛ5ÃfV미Þì<0($‹;©û!Ÿ`žqÛV±¿Â„€ÆýN+¤-º —ÕjAÈU=‘ö1$¬z%!žTÒF™yv±ëÏ?öNºèwÏé»þ¬HÙ÷‹ƒÃôD×Ya:è-z°F[A—˜ìQ]iqå\’À5ëXm$ ¾±”6ÄÒdcÄ€eô¹²tJEÓ¨XäÌ ÁÉJÚV‘ï‚NéàõÜp3 …ö2x^n(F&\Ô:ÚRߟZGÆ?öðÖÞõÄ:R'yÆ Z‘Ÿ8Sç/½ú‹èÕ%MLðiþZ[…HË–_Ý?ß¼¨¾~U&u»¾Ý¿V¿KÿýGÅðÏQoèÝæ¡(Ü3}Á% æbìnDöÌ3šÁª“T@¯ØS÷mÎÆû=zøújæÚËêFOfÿÿú´™–ãCï—4L!Äš² ›VªÚä’ºÉWábõÝä{Pê ¥#“ÈÐ"уÊF"CCÑ?¶”Ôcð³ÚäçÒŒál²E¦nŒ”ÜXt_xk’É9Ù0ÀñNm{,r¢`\4 g-ˆ3ƒd˜ç—läð¬‚Qù, 
ýÇC0ù¶3å(¬(¶w ÂãEtHLMûdm$ö>d&\Ú‚ÅÛn… ZèZF,R‚¿²çÐ/‚ߎ(1©^ÛöÚ`8¬Pz…‰RØEGï¦RøæîáÛÇÉþùÓÃã—âãUÏzáÜó{T ïTd¹NpM'Å1hEÕ0{äiÍ¢á¬fz•gÕÒtš’,›ƒw~ÞU€·ÈyÖYÉ W‰¾ÆYØ£0úEç(‘¢3Ñ„&D~mo>ÄLÜnq¹K,mt¡–ü#Ë„œMÚN„ÖûÂÈWðD–®Y> ñ¹¢žƒ=‰ä1-ë¯ÆÉŸêæ ,[F×–Äša™$ˆ pPÓúê뫦•Õæ¶xP-s[0ë¶"å _F ©r[†áÑIXtÜ$iJmcŽs`½YÛmŬÛ"Ž.Ýæ’Ëw.í¿Þ Nš–Wô®)¿Ï´ŠW Þ,*\z¥jm|Û“à›¢]Záxa§¦ö®!ÓZ¿‰ƒ¥›ÓÊÀ€Nš±«& ;—ªì`¥v¹³ì‡ÃòW(‰ƒ-ø’8‡ÃŠ>_f~”V›i»àyQ$¬hHÚvĬW¿MU\˜…b¯.è\E#õ%æ“"êñ¡”»AƤ¢‰I°éÀ×ìþeå ²+Xø<3h'=ö¤©ÖE©¦i‚Cp^ÞùÌ SàÝ™kœ@lAü<ðR¶"e$  ÜMoiæ äà.9ŒÔT‘¢Êý©û![9!Y»ËãÚz{ÿëÓ~þùÓßLl_~ÞÿñË‹©ýrg϶Øóîáæ7ûð›3×eôÆ4¹7)ù]À–fª#$²u‰*Z=:ȯÛF5£Áœ/(‹g7*¥q³Yú€qÕlÔ]?šÄEwð”Ø.¶d2ö×à¤ØF¼ó«–ÿè½^ÙÛ‰}þ_&ïF¬ñ‹w,MïX‹[=cÓŽE®¡I4Pä-n^JØY”Ý6oº¬ .ð|MH9}˜Ý¼H!Ku\ÍîMöîî^b‘&3¡5óq@¼ýüx¦×.…«žé•¡™7ˆ¥¨æMÍâ>ÐYS#ÌÏÝ= ¿†aÎ^31?„%¦;¬[λUjhKǨQÑýç _ÎÕõÝöß ëÿüððõ„yì?,éÞþkZú/æ—@ÿùCò7ÀÿNN5G2^ZŒ<«9BÕ9o+u9~ ÑZþ”Þõ ¢ÇíÍöö÷íÇ7j¢8Xê/0æ“?á°®ÃCnŸ>Ü?üÕ¼è_·ž?_Ýx‘ư[ú®RT3ú¿ÿ-dÛ­ Ó§o2¤š8jº-Y…»öõTø:0à¬I¡0h™I‘ÌÚÔalì[¿3’MÎ){m`¡E7öë¶Ó-ƒkºÉ18vÍl„\ÈFNC[ ù ­ z‚|V­L9Œ?.¬€‹0½”G\ÜbȃSg1×ŸØØ#Dµ¢>O}pÑœP»Òb©Ò’QQAï g=[ê±_·ÏÞ5WV¢5ø4Ëc¹ÖÒDšà8„z­Ù#Pk$m]Ü8‡Už}ÆkFU~pQœÌó+ˆàùj¿VÎÒ+Œ3ïŠÁYYøµëøM¾8„!ª"ÇR Ø´0I§­‘k÷l"¬]ãcN‹ƒÏÜl8ÛEžÜÌ͆ž¿nþs"ûy÷ß1½áØø·W› øæôŽc“eÑùiJ' )hËÞÔCIÔ²½)Î$šºDš!UŠI˜ÍÁQà‚}j¦ϳòîî³å£••ò7*/—ÛýttžÁ7©¹â™Rˆ !H³ÚtÚ1¨-¸ó” û…gékF++T›¾Ò‚øÅ¾ãÙƒ´xÂt¤[3“ɼ ’ý4·ê]©ÞÀ§•y.‚±èÄ«‹3jK“Vc6™8®¬ˆóœv³äÀ/Ò#»&½Y”¶\oQ¢9#„ج6¿Dm†-¢%j›flÝÙs…ÑŠ´–(Œ-0âEZcø6i¡"] 3zĘ ¨µV<`À¢H•*ZK¦8¯6ÎN<­¬@mŠÉ;[®4onECl÷/N©åžVåy²öê×íÍÝÕÓÓ,«áë¯A€ºWˆlɹÓ#j¸£SµNÀÅ|OoDx¦cúüŠ{žÓ®<É-1ŽQå€ç‰O÷"£œ·;ʤÃÁÙΤ11"-Ú aÇ.ݲA¼¬²A-»|Wä »cÛÍ÷ÓZ[ÆãÃï½;Žó¿Ñƒna§*c °švP_3eZBª-PX—oaBEýˆ&ôÓŠ6@ Ð&Ì”aíU41Ÿú¨ƒNxã‡Ø’ØÛ~,Ón±ÀÊ cËÕ¢`°Hu¡¨ßÏi~Uº·3ãgÎWë-0“4ï”bQ(?wÞab„ì‘‘œz\礷ŽN –X‰…–¢Q‹•x‡¾Š8EÔȘª‚×ÿfFñëöys(-ýŽWUWŠ9$Ä« /9¼óS{%Ïs×B—E,’ÀEgrd†+à¡ÉAjš:Àòõ°xPÖ¨êíòºÙ‚ÀàÌÐ|Y“ò¡ïó¬)@Þi¤ÑÃÄ*QÊéívïFBÒd AcÍQ ×Öjùgé'¿äšìDy0 Rœ/C0ç& ³†4{õ"©fb/M–ÿ…°ÈL„êg¾ì±?“ñW.Vs~©ª?MjCÑòk×´i¥†ûÔcf ®¥èü²crY‰Mʰ`ÛZD>ïêEsL@#aõÈI’¥;raÉ=½QË!¥¥ÕoØK»ì[Ó„ÏÕ'Öí"…} pxòëÈõÕ'ž~~45oí=RÜ™ú–ñêææ“x`·½¾¹Ù2]y ?ª¸uáÜ»ÄX<¹©з_þ?yW²irœ_…ÐÅõ?™™Ä Oº‚.‚/†!°Ivf7ÅEÖø½ü~2GTÉŸdVåZœû285Y•±e¬_(·û^bÖÇÉ77ºÓô]…-ÆÄËžä©q㤔/=r†Âë¯F´…ˆM 
O¶iLá#aæÅ–XiˆõãÌœ¼8½yøôõêü4šÐ~¦ =Ó@= ‰!’—b8êœÉòMÓZ“˜„N*"r»ÞƒZK²Ïô±fh­þj¯)·h-;}£1Œ”*"Ñ1&¬#,Ö- G…ÔÜ¥!×È{<Ã6Õt{=hvÄ<0q±=;rµd ¢Ñ¡ïKÕVÐhšþ-ub,W Eß(E·J0—«Ô¯(2%uïç×/çÚü4aúç™þñ™c²þÉ'¤®Vxwv}óõòî9V;B­ðõwLª²7hé¡~ö®§¹ Ô&¡F…ü¥Â7š]*|Þ¡R¡m9âTNº&á‹æÆ»,däšSìØl+¤&{ト¤¡tš¾ÉGhM Ø”Ò ßÝÞ<þñËí÷‡›»kòò„J[[[!ž÷; ž@`(2gåWϪ!J †K3ÌÅ+·ƒWÔ}£êØ¢êê20'ª˜CQ×€¨¨ê1;¾·"ß M'¿0¥È å^ ÞEýg?àhêØåh²5+û(íX; NC| §¸XÅ÷ü¸j|?[íâ¹ êêb5ƒ`ú£BháZ„¸pàš®"B86|Š®A°àÞ‚áÛùYHaÑc™SÛ€…† ½¬Ï&ÄÖ·©Ê‹j‹¸—£`ë3†FÁW³Z“q†õ6µ1pâ_‰l“üÈËŽ{Æ«}BôBì›îõÆß¾üçå§_Ô`¼hT<(:ÿÃ!ñl„-HÍ«±(/cþ¹&ÙŒ™àED\›À°ó‰‰‡Fz¶&êÏ•HÓãï¨M=/+jȇa ±,,ÂYaY‘kа$#mÃ@“°@’¡:™Ñi“F¢­Í’·—7_¯ÎÏî.ï÷Ì#=ÿßDôYä}ž)É  J6J#³F¼,ÖÔš:“dƒ9aôM²a«–G@ìˆØS’¢8q?ö¶ë¬€È¢Î~ô¾*þ_²ÆcE¢)"‹Ó@µí¡!µ~<äÜ2u˜L©Ùrlçœÿþðýþ¬0~ÀKi8fÐg‘cÄ"dÌF”v£E‰²í+rN’$L”¦…õ«$òY’ƒž<½’"ñQV-Ý^^ürv?¸g)'" cðP€kdC©—­¯È3eVJM5LA$ ~Ù¯Š8s†épQ%y3åå?ïU¸,ZÔÏÝLŒ<ÜÝ¿~´Dzø·]d~b)cJ4%¤Õ} {S‰A£ç¡¸5QJ=YG{!¶¯°|GJÏ2à­©=•]RƒS/¦²•êœuIWd2f‡ 'Ë"¶˜ñVÝJ#¦@ü1æáÃ&#ú¸÷îXcv**¿Úý°ýã¯g×_wá…”hå)c)QÚk ¦SÂPT!à»zü(!%'7ªÈ~Àg¬¥ùPRTe}Ü`ï—’¢‹õ¥r˜uMÆv@u”14Ùà£ô¯#ßéóõAm|úñ憶X–Çzõs憌ۢÿŠ$PvŒ¤Dï27ôšCÓç†^³gÌÚ¨¤[CTXG±Xh•³mçÏ,˜bm’í§‘¦<˜X×¼Œ›Ýœædc#Kâ]2òHÖ¦?§Ö†Ò¡¾)ï·ú`¹à‡,€xîBïKÌÎ!üîR‘&”C?àÀ‡yh½'MQQ•j”¦Apn@Eõˆp Úàã¶žu½z/žcø»£'=ÿØÆH#ÊoKRWÜZ]¼ËûÿÈÙÏþ#7†^{•cŽK¢žyÁ’¨®‰äËçOôž2#¬Ê‰!5™’ˆ‘™’øj+èœá°pŸÒ;`‚¼zŸÎn®v?ÿüòöþÃíå—Ëo—Û¯]W=leyTÁ~­ÓX€Á=h©œ,/¥Në(¬s+í¦½ü**)¤šIÛí‹ê)ûð¯5eÐWm\‹ºzG‡šÎƒãx眢áÆø^ËÞô~PR|»oÂDÚÛÕh}¬G¶TÙ]Þ¸KœP G2ÞÑižÚRl 5㾬*V|6½ÏgäVT™¢‡qÑëp žþ¶€ ~d~Ó¶~ÈQôp;8øÛ dªÁ«?µ…Èö;É^£3{.½ô,üÃ×´›ÏÞ/ÉÐkâp×ü»Æ!É…4<±ÉµðU>¹Å6eWô/±Á¢ú¢Hì6s¿‰’ž¯V3±É²P h)êwK”€2Ä·Ôcª½`T!“ñ½Ûìlµõ  VØjòXf\¢ì¾žÕÕªÇâB^_ì&Æ!j1°# §+ Õóð!øa¾Õ¢MyƒuŠ1U(Ci}Ž]ÜgŸoV¥oq±*žHÛTüBNT° ¥Gß,ØV¯*Žó­—ÇÓvé TðMJèqvó¼Ï»ºZ ã,]$Â-«·õ»‰!ð¡D é uãÔÐx[þUÍ· {ÜnføæÄK ßv(Qù¶[NðŠo«›U±-,Êc×2.(¶.=²ø¶…ÝââfÇD©< ãj25è[P7¶bTùF¥•évóìž±õÕ*Ç‘ƒ4é[ˆ1ªÂ 1.v¥ìCÒˆWcµqÆÕ"*ùˆ “¬ 2ôÝ,3.:Éo¿}ºZ ã¢_Ôê!`ã ‘Ò㈻&£$a"h˜qÕà-A4ÎÐØ"ãb`*2޲³“«›Õð-èQÉŽ[øfÈq„oªo=M+j·ÄZc{›ûç«//[Ô¯®Ï¾\îÐ#7½«bŸ{ÕõÍ(œ~#·ˆYí*q,‰Iôy9YQrÆð›ýì(à›’12"ÉQDìÂÁU×:j ßæð²šó¢ÜúÔÞ|}~þa‚CV@xQš úŠ—;ùò#äS Ï$š" ¼~#r“€X¾Ô¹dêMtÍӔܑ»ã7R3¯)nsܘùI` âÔ8WÄÏ\1ïwZßÎÞ>ÑwÊžbýÙ¶ù±IºÈƒ‡±„–¡w ùGa“¾ÒÇ®]íêb²=;,lV/òR#l»íà‡„Mݞܠ÷ŠÜ“dƒ÷©Ée" 
µTä‡d :D-z€ &¢=wú¿ÙãwwöáëÕÁ\íääúìfÞËg‘)„Š@N®¢~FÀ¹©œg’M‘ýÑÁ ½MÒAcî–="Èút:޽+ÚìzAÝÅÕÝíÃÍ«Ó6¤YÙ×»`t×' †R®êO¥ža<õÔ€q©£HªYʸ‘++VT¶¢x*Æ)”í/ZÑeʾkuBÉæ{”‘=„@1)#¿‚ž¥ŒS$pÜ}׫½‡çiÖuk¬÷am(ùQéÏ#Õ+å ôÔ‹ƒO¹ä›—X¯HvrM¯!eôæþªÉ¯QÆ$%˜u#XÊÆ„+ŠLRG@òMêÀctCò€Ð!ú†ûÍŠ±ÿ;ÍsàY¤Â¦'ÕH(‹f{ºW„œÊ…Kð .5IQŒa~Úf“å£>-ǂʥ1wOàOß'*œøýœ@"ÒØØeÁU”×çÞOHçh4Í›RvN$U„6!EEÍ‹ùò÷Š SpŽ’ U¾&n¿@h°¥kø ˜Ñª#áHõqȦ8DIbM̺ä=ÜB”ÇÛ}ºZMÙ'ù…4n³™ÊnKù ÙLŽ`3eñä½ð{ ¢¿´ jÔÉ»¸ü|öðõ¾Éd‚wø‡nR—M&øž‰Bô‘cs}ufKFç*°CÕXqŠR1çí®H1ÅVÚ¦H‡M­D¬ÎqLÌC*÷º•hŠÊ¡_0j\{ÔÑ%õ•Ê”ÒòEfz›óØýË&PõÒÚö+pI†»¹$å´†)ïMìiöÉðS¨iՅ䋿³/슮3Ì)ž–‚9Û0‚?´r¥l’!…²ØòBÀw›/À@®>Z™‰ˆ;µV?ÊïMkмÞuD±wë´Z×ÞD±Fht#ãá‡Ú[Ö4{£Jž  A¬ W¨$آ̢J&ŸtZÑdŠN²9ÐT«á$ˆF¢[’®}ŽàÍ&…0Þõ ·Eé¬Bè«<Â%£¼•Yk†©‚ Q±Ø'³kžˆ8%¥aUòè›Â*Q›îÅÃ=]P ßjš5Þ½ µM²™Ge¨HT«NqE8 >kžnV K\À:î«ášY?ïlî§¶ospÏ0cDDH)Ôôw1ÙqðBö¨³8b"„ï4ö=˜[ÞÒ.[0\ç_¶Ž…úw'Ÿo¿_Ÿ\}ûp­žìí¯»YÇ{Õ÷· Ìd9²&I°GCÆ$¡§ûܦ·c"Jÿ_<}¶ÖÜ«!ŠÀE!"É®^Q¶KŠôgº›Ì‰ `ÿB¢ÍmD¨kˆÁFÙÔéë=¸ûéNÝýC]Bÿo¦ K\ÿ܈Žøv„Ë@<¦S ÚÄÀ¶¢ö{“Û#ºöRõ„ˆÅ§¾ùøvFÞææ_³ÎËN”´eÖyujb‰uê½…|¥úé6CòNVí…‡õü‹3Ʀäyq>ÄÚ1y»è— Áˆ$è…:€Á1`H†ÝB©^lìBd‘ ©PÄ’T€ËæpV7«q Ê]ØÎÆü1sFCýRﱉ٘¬ü7Äl Ü…pe»ÚÄ»ÑݶAÌlB¶óº%+tN5¿¸ÍnkŠÊ΄š+ÎCd=QxFŒ©T‚¡1Ô?+èÑyŠŽûå ½z¸ÔcŒ&2c7¤jœçc;ôÏý|Õ‹'lŒù|³ª Sf{œ[øf‹6}L#|SsÙ‘\bd½vJnðþÒÛ³Ÿ4ìÐÿê÷³keàÙùåhkjÊøNCM5©EÐÒA, @pY ˆ‰fèµÛÈÚ䃔 †ä#ˆïi†3c‡öHÕ¹£—d;#±¬×»_dkÈ®.^ݬjð9,l%lb{Cb[B¸h€¡)¸QŒ¼ <Þz,r“~þÌÝ4å&\ '+^íhÐE)ˆžò¥…'BMÄ%ĨéÕ’’>1Cb’º€{Ä‘í¬Áqõ®†Ê¿¨ÍŽDeÆŠ†–Td,S6ª_]­F¿ÍÝŠ‰¡…oÑ:G`ˆoÑ÷Œ‘0a²”É0¨aª\ÒÐYéBÕcb‘mzñ\Jnu³ªÐÍ/( ¶±Y£»1¶uíáQ*Š•áÇÝÓ›3Å¢”xLŽ6DG©ÌúlãÅŠ:SÖôê¯VYÂÔ$d(8ÝàÛ#^iôœ ¿ØvðÇì¼xòÖ×"ÖÚ"®1ß"¾!ŽQ—âê1†‡ÌYŒÍ u÷Ñd–vm8+úÏ w×À¢»9‹ó°&À”™oé†-êú4v#êEú¼A½p¡dÏÖïZ½ˆ1…‘0Q±«å>‘úëî‡S/ïËêùº,‘/©—ÒC¾1ÿ‰SÔËv1#AhR/ñêÜȘzE9Šz19Ž?ÐJÙW@>37Ƚœ]\_ÝݽZ Fç|žRáá—OŸÚvªªÏp"†4€ˆ]й6›æÝ„ĉ¯Î‹ÊâH=ŊΪ¨!';gò ØóÕª£ÆçZšêÐd@„ÜáS×:LU´…oï€É4 ö«§¥º.ºÇ3¢Í‚€ÏhW<8ä¡çüBÌgŠOòŒl56ú@ÓcŒØ0§º˜B”Ô:¦3ãÉ›éD§˜ªBT©((ˆY¬õ©f Š8l* ƒÁôƒç!A‰Ðó¢€­çŠäýXŸÇ{b¨ë’ðÿóßÕMxê;¸Àq¤ÜjGô€0DhºF¼ooÝ$â³mØŒjË)4 §°(S’8ÛµùD)3ZV°Äüæ²½ÖAÌ×  Žç?Æ ;7Às/^õ1Æ€²2 VÈ ¤’Ìp~ hE¶ISUhB“Ì[d†d&@WÏ¥  £ ïÐÓ{”`öÅÁ ÑlFàЩ‘bñeßsQàž~¡gšÏ8ûÙà‚ã&‰3 ItCG]xú¶†Cµ-¶â 
ëÅ•“wË^øõ¾ß^\¯=cPŽâ‚ÉVµUÈ@(ŠA¶ÙaEÈ)b¤¿š¢ÆÎõbD¢1Œí7ë#'=Ìö¼@Ÿ+ÌU®0°>AÀ•‡Í%)±Òîšß°±ºMÙ†ÀKRÿ|[ØûcîŒ!gØ–ë1œê§RTÔ!È xÇ]ý‡hÑàŒX¶v#–z™¶–]¹‘M? ‡÷jloN)Á÷tµŠ`ÁYY(!ëC²SHêg‰MåMew©~¥GøWÉÐàí͹šèûËÛóË³ÎÆpá'†?Ö®”à÷O„?¹¿º¾<@ýíX›}èäNýý#ÿ4Âå­¾½¼þ~oGüûé©ÿS5@«âÒÉý¯7öoz®¯Ý|Zþõß~úë?¿=—ºN-åó_¬ö¤ÐHquØîcnõ§íç‚_ŸµÙ¤_ºcØéÏzÕ/—÷§þËŸNŽYÞ¼þ~±Ò Ö÷äãÉÝù‰ÞéÏ»ß÷¿Ô]ír[9Ž}•®þ=¾! € R[ýûSŽã¤]íÄ™ØNmÞ~I¶®eÊü”&éšîª‘í«Kð_þýíñáýÿœò5~\Þ>^ÿ{sy'ÿØT”¿'›2ìô?‰"ª®Küõןô?šˆžÞncýOû~ýñן§Ô„]Ž•¿·ºã÷‰\­ý€Ê3¨?”bQB–£c½´Eˆ6‹Æ&Wf‚Jƒˆ§Vi*Çmäq Òt'¢¨4µã‹š«p^¼L1ÌÓm]8ƒn»üvó]#‚û‡!ö?ééãg^~ùvyõÏåçë½s¿Ž _©6H’ÚTÛœ·Xi6sךØœ·Ò_ç€rSJS,ïÑ9Š6%~x¼¹ýøœûúÛžÚ;]„£;©!ì ¬Qƒ<¹›ómÁu% %±H,e[N¢cB.Z$¬EZ‰iÆý¡½¶:6Ú,©n‡¡“ùEd  ƒàÙ8û‹¡ñÈŒŸ*WœÂØ7O$èWCŒ0Âà8^ =a¸‡aÍ ƒT>aùÞ»•¦0…$ê^60ãAòCûÏ]m^)=™I`Ô——ÚÚÀ –FŽ)rž:‰T±¯!{C¿_Z…/¯¿¸€o© ´L\TåŽ#g&÷)!ªQÁ8¼o©vX4„´˜!¢r¶"“Q!änÉV «JEñ¢áZ`˜ºke{Çõœ S)n^F`ºŒ^v1Ç#05‹ó"0:C6ÚÑÿ*ú2жèkÂ+¬B/ÄöÔÑ„7x+ì lR²5ÁKGÞ9ÛD‘a•UËáÞ-+¸!ù]³Ô›*+?¸|µ²ådq‘¤Ig¥d[C>¸Ð|žuSGŽ‘0œx@”ÁûÛíå×ë#—ªßî>ê¯Ûx2ýå¯ú7?n~^ý}}õÏú0í~çâãÍåç¯w÷7W÷ïž>ÛüòÅýÝãw#ú~uñpw±ÿË]êÃH¥·!èÉtS`—gGÏfBÝã8ðnvkyFýŸ÷Ð>•ê×ݘY¡€¼Ö¤Ÿ#¤Xt}l.R¶`c¿ b‘ÍÑtB¾EÏ!XÖ3|ψ·ªì¥g¤ÛÄ!¤Ýȇcž1n”\Ñ3úßÝA8ðÜ«›7ÎøFþ•oDæ¼£ßSáP…¶õVæQta˜ÓOª9ýV`”5 |ѹ ü€ÓýÂjÂX®Ií1ŸûÐ{9Á5@«‹ü+5co*ï7çâáaSÍ:·%ûõÌiÌžã`HÏ 7V÷ÕÆ2â9Ê?_îÍìЗS]À—ó+Èè^ RM}Íä(ú’'ªÚ †[a‡4¶âsû‡*æ…<,Ž^{¶Qlíê¹Bl©EL5É– ÕîÄï¢âNˆÄtrô$}Æ/õ~ Ÿ¦Á½F¤‰¥Bç(•Èî÷ÕåÃåíÝç­?Y[üôºbBC­¶œÝÔ—YN$¢æôÝÔ—9žÈãÀV˜ê'Ôw?…œ[<ÃI«'Z[‰®®.^6œµål€úw¡äQÙ.pO…#áÈ”ü‰Û°^É®/­òzµA%Tv0D…^$e«ÞW’šáÿèkk8‡¡uÎiE/'¶`*g'™Bf£¯÷Ÿ–îœ Æ™ŽP U‰JÕŽÐÅqJ]@©gb†~oâ íû`ò#4ŒGý§\þ] %fd[w–eµ°Jbdý¶ä°éd’OCÄLÜ52ϧ¨Z L+€;ruì÷—³T¸ O‡>”éV2”pÀ;[øº˜ãYXSwã£O-@±V†Ž·Æá±« MR Þ·v\ïÔÚV«íVÿ2‰üâ7¾ÝÝÝÞ¿ÛÞ(OCˆÍ"Ñð«¬)¼ßñd½…ä1å[Ûže4e¤¾¶w›Œ|BçÜU˜>:ô? 
:uR}h¿½Ómÿrùí~;/ñþ§.óË»ëÿSºaâÙ_\>ª üúpsµÐ4t ß6>–G'¨a„RÝÄs”§+ͺÍ4—6}5#bqŸ4Ц„¸ÕÒ|¿þv«~ýP5ùØ.>\!ãGúç™ÓJEÍŽ”X~L°1[Œ¶’ܳ£ï‘\›R„€CþI’®š+‹U N2àêúûÃͧ›ëƒSŽÀCØ({ªàQj§Vù¥ì4ε„&ÁCÔm ,E4þ¡¡ÀRÄÓüâ|¡E¶—“)¿*–{ºÚ~ÖJ‡\ÜÞ]ýÓ˜þác!ŸÆÁ&tŒNé¹ïf±á !…Ñ©Iu›u6 è#TX|$.Z|‘”õWò™Qׯ¯íI<¶„–`“A(šô>õèn¢ AN=Aó°.uâ]ëÁ£.ZsANä+øˆÕ {IlW8;q/ö)ö@ß:’sÔ„9DÑØvÀè#ü zµâ⢪Ëÿƽ@ ]뺞ËãVÁZFFçÚôÔXÛtåÄ.H˜s)pBö¾œmPŒxŒ¡‚”¤xL1Qþ˜>KiJÏi8ÈÒBÊÊD.Úø¥Œ@Jž&°@²ÙqèJ&™#11™dˆ˜—ûh$¥¸O%˜/èß‹hÒUãb z\br0Ò§¡èj ôF˜ âbã» ãUÊL0š^I\î&K>ºQ§­•²]€«Õ”¯"-Æ5é±ÑúC„WÖ i3Ý»–Iƒô4„"ßÕ/“—þÞàg“²Í,¾VïŸÜÁ÷wFŒ§ŽÆÏ>•AY aJDU8C*âŒò3•Vš¡2ì­:Dܤ2"Gn *Ø5Ç“¦F“jšoG aŒݨ)º]ãÓÛûê²ü û•UÍøÄÅ&‚¶Ü+»CÂ02ãSß~×'Ý80Ûº°Ñ»ñm«ÍJ~qÁFÑVG ·}qÛvU±‡³÷+«Ù6Ô@Ë9=¾ Û–ŒŽJFØI®±0Nñ")…06}¡º1„®9}JW?þóãû4sÌ0õ'uã!ˆ¯QÍ;ŠÛ7° Ò¤,ÃÃJ^SÆf§%ØWŰD ª«`,ÑÅ*G‰½zgžér"LC Òžj:m:¡„”è çö¯d5å–PM”õ Ÿ'»kÒÖ™.NÝŸ<Ÿ‡ÕýîþâÛã&§°ûãy`‰‹ÓØ|XŠP—­)\‰kV„|S„˜6D‡°B®Ã™2Uï4¶ çj‚—iƒ ÕÜÆ,ï. w±{çå‡ÿ öËî³92 û ž]…L(¥‡íKŽI{%·)˜±l™]A6Æè*£Z¢žz7`£¥v~ÆØÝ¿Õayò`¦Á@½wt©Â̤°+ž$Yݱ’Ä”ô.Š)pv¤€Í-‹ú­ùÇ»¯7jAtý/n¶Å»Öä®|>S™¨böÂ5ÊÄCÙY‰ùY…+9NÒ&((Ô¤M$1h˜6„" n­F^À–Lûmø*nï.?^|¸Ô/¾²rºo7Vf÷ýëåí4äyY<«£\¼È eýµ ÀÛI÷²ŸRi)6~G œxª;ŠéŒ ÃiØßª½>^»½ûùå ¨Ú—XÜ|º¾úyu{½§xºýòüãia·¥Äu!wi €ÉÐåŠ/VBš{k›7pn€ø®\(%¯Í£…9Ϭ—ûð)óY¹<Ç5 `Ab” ˆ 2²³p‡Ök%¥)CeôµÁ&#ÐS¼E.Øu1ñ„›• ä«Gœ˜àcSd<ÚÈ)7m/‡Y.Œe‰èì.L×ýš‹Aظ.õ®Ž\îÖ««!€yü@\*©µÈx–єܭ¾¶ñÅV«‰´øhé©¿šgóˆÀ=IŽŒtÂ÷õÕR&zðê!`*û´8pñí‹íÂ1ïWVsA£o…ѱç¦}‹˜Æ¡}£®Ù©)‚qE6ìZfsÜ{ùÊóK0ªl)o›ê\*îZ¶Åj¿¬ª^L¿iðZ¶ ¼qh¸‘-×3ÑœU."h Bƒ'-UŸ´ 'Ͱ5G ßU¿Ý4¤l!ô~i5û¬#KˆZ¶ %&èïŒÜ<¢GAJä„lÀåЮyçªëxQŸÅW4ë-.íRvXìje•u)`“]#Ç.&Ù5ÕáÐÕæcâSV§[Ñò´zt{XuúM¯4´%]  7øx¾…rókO¢’›Ñ·†špeÆ”8ßU±f“JínþWîXËÂÕ€“WsP†‡ú²18R ¹—Ï”x+,Û² V¶ådjÐzH2ìlÀi{b4Ÿ?}zýÐ1-dÆbòJñí¨m»YÎŽg9O™s®¯¬i™°ê-?²]Ê¢1³ÃÄI\JÓ4nþ»ÎÌN² @„ €†¾\ÄCî’`%ˆ)(MËG ":ôýC•6P t(ˆLNcÇÉ“Í>^º|¼}¸X§ÿG›enñIB¨BˆEW%B–•t%¦Iò"Ø„‘H0äªôuYg¡1sˆ¿a M5–OT¬ÑKòزmQ'!Ûb¹ܤ‹`–SéïúÞ<"öônyýVµÛ{óTGJ!öLé.â§xõP(õ 3¶X>ØcÙ›IkoÉ$ü¶å”*}mG›€¤ú‡ò]º<éP?ªuÉ’£î”¾ðÍÅÊÅnŠìÏiÞðËÇ6øÃ¬yu.YÅ+°)¥"Ö²u+iO Ê-C—ÈSÒ‚*¹G¦AyψIÁPcV™Yž‰Å/>ãá]ú5õ£è“‹Óô’—%Æ”°ì©œèíæä­8³µkyM*¯QE|X ‚¥£8}P" 
YzWOdiÍÛ€ÊÛêlÊEq[wÃÑ‹¦ö+«Hè&±¼m;Eÿ•yÄŽ¨û€™zêQsméæ}[{ëÐ^sìòeœY¿0¼ÙP½ÙÁê$Ul¶‹›ÍÙÈzi•w.è=ù¦C7Y‘4´qÐ3Ëã8œüI½‡u…÷4ßaýÐ1Ï!¸ÅEb¬A…TVGÒJÖS¢d}m´qBMHS h„4‚´èº¦.XÆØ«l 6æ|‚ª\ÚØè0~¬–VImì=F›6’qªm\è)ò×0È\%ß!–‹Aà“4p,ƒÀ<ëb\`‹ì-ﳘ¦„ úÖÆàÎM‰âÆ •Ô1"ØäP:ÍeÝÝ—/_o~Ž^Öåu?C¨Òý.U?ú\ °’Ï$ͯ&&¤&ƒƒ@KtöÐŽÔ‹ò]ŒÆ/s£;{ÿn`,hÁ¨ cÅ~ðà°ì`ŒÙ^Ó½0¦ !,"…†µ¨_¯1 dª¢ÝõdÊÕ[Áä¥ŽÈÆÛÒ¥\cDXXl~Eq+1‘+)|[kž p¿˜ 6":›b\Ç€ë q)n—`oY+NAr]±¢ß$Ì ÇŠ¾ºÒ+Ñ’˜•ëóÕ_¤$tå>Ïô¼´G0ú¢*ƒ—U«gdSÕ}tÞó¹·¹c»mˆ:VŽO[ÿNøbËPwñùú«¹ ×7—"·×]VÂg.Æ¢,Á¡‹P¡Z$P(â)›dÞËlJ¥¼h ¨Ã\#ñ6÷èfutÀr2…|ÔÑ‚.‰?)ùh¾AyO•ùýñözêí[a)¸?OyҩǺ{³¦Ô>ØäÜ¢ž¥ 6HT{+ _ºE0±§¬Ïø,קΨÜõd¹•¸0;^ا°»È9‘2ȲV¼ä:lãÖ8ã¤Ç7¥ŽË½F|ë¢ûmÚð_þl;æôp¾ý4Óoó> ¦ŠZ;Œ¾À[µÝ¤,ÃÙj&¡PÃO1ž9>LEß z˜‰šÛ­+è<¥¯'Z>§Bé{O\ÄÁnfÉáã^S´¾ ]2vʳã 'fðlÆ œ'U0qYŸ;eã/;îäÃ!ÏÒšBXo=c–$8·>G:EŠ .!ÄpÚÙu¹»Ê]úŒµ§X¶-W`ÑüI58ÅŽ k)BEO¸Ó=&§y!›Ý¾q¬8„h#Ê!›弪•L&%ꌶ Ê»b´®F8…úˆƒò®9§P§æ,ánf ¤¾ý¦3¨¶þØ ´›:‡2Prc ¼±‡{•ÙflHLýƒ¢ Bšweñ®¢ÈF-•JuMb”Ÿ¹É”˜ªã]&X]ŸFñ½€Ø<uµ!zkë`쪮°q-‡Õ™+P\$UÔØK´Zü·ör³ØŸë¶ZM¹¼Â¦x§¤1¿(°X?c¨Â" éH¤Ú+÷YH ìH†D—ÂfêÙpEíŸ`5 ì]9)jé·ÛŒ·ëÎ2M®VS^a375†Š/ê+ÖÏÈ—Whœ¤?nÙk²ÙÁo5¨¿mw7k–GlÖm9Ù˜=ùïd32“FVµyÂ.Ä?Ë189k€õ°ÓøŽ4Î?Iéëk™õ嘽½V_OUTùÌÕø±tfÉå§n­4É-Fõp©É(Û2“ç1xôMöŽÉª²›;Õý{\<Ó̹SÍTj -ÖÉåÀÌ(E«!ŒùñáO›0åJ•ÔÔ{l`°±äS8‰}0zpàSÚ‡¶žð¯ºG÷÷ïÔ8ìa6å‘®ÎÞ¸­½yÃØŒ„nH›„Ð5wżJ»\?i¯ÿn¦5ùï¤_Ý£wÄZÙ°‘*kEESã‚<™í^“¬ mò¿ ÊÍfÒ¾ g”ÞFÎ{8AJa?Ö~0’`ÈQXÇÅoz—k‚RRFG‚ìû³|¦Ô咾ޣú©r›W#ŒÒ8º˜}BÖØÉCøå/íòh³À«©ˆÊrv®…4"úÚ^¶@Eýó ¶Ëë)!ôTJðÃ=¾¶…•™§ŒgÆŠÔr9õ@ÄÙ£¿ZZEî!xYô/bÄso÷ôoÁº%›bç…üý»{uk´ÿÌÍ!JE+†zÿ’Êš}¶i%™I:rj9Ð @à Á…*>Ž=9eJ¬ßžÒ/wÏ—2Ä»ª Äq1œ´v˜ 0Ö–|jz/”)Ô»na›[‚MHê:ø8‰D]ðz L6BâïëËÛ‡¿{†e7Û{$’”7›¸XÜìò…ûåN"ü²¢ רHyqnh³1¸®Þm°ƒvz#uxeö ¥çŠÊ}›ÏÅ?é~š&ða¡hí©à0•À>?ýs%ž)à†éèÛÀA ž ƒz.=;'úo˜Q² î¾?^=<ê6õ÷çaD¸‚©8 ^aá7’¿‚ÞËdJi­¾6’@= 8>¤m¡è‚…ú0 ÷\@£Ѿº€–,ï¹±}–72F·ã³>¾‘ºÔì|œõZ*®Ÿ,f#‹ëçÏkð7F‰ ±Aý•$'ãÁ‣ŽKdRŸYxòæ yÝñ½ã½¹î3&ÁgU‡žhµ¿eÄQ,²‰ƒ•Ð&é Š˜§„8ª84öºÎ!Ä`ê!Y×ChxƒN†ÑëOp(r@X’ Wh]÷ô› ÈÓì$1%²,tGS ŒïÆP@Òq•>šífMÙÖñ‚Ø”I‚X±­ê(Rq[³ui«…U1HÆE ømçÕ¿2ÏÈ–­¨ðÎs‹ÀÀÀ2´×j·{ÆF)6Sb‚ÎɺÛôÒO]ä—™Càóg?.É.q¸ $¾l$Ë*»ɤ ª¤¡ 5þ¤#·î„õ!HO0T.^ÔÉ>ý¡A6 \§ßBðÂé·•g]ÉÕÊ* d£ˆKrî}ëkÕ•äųÈp±a¨fþõlž–@; 
x(î›ä¹ÚVK«¢õT=¡‘´÷gÞ8æžÙæÁ!0¦à¾–ÝEÀî‡5©Ÿ#Ÿ÷©mÈ2;¢çPlÔ7”ãD“ràl~p/ÆYÔŽ¶žpf%×s'uéÑze'_ oçBh÷Ùˆ{¯ƒ€C®P$߀ãÿ©»Öä8r}‡ÿìŸQ6  Ð{˜ I–m­eKcÉžöÁö{²²Jª,‹J2IVMïLÄDŒmQI¼ˆç‡׎|Ù¨nAŸAÂAé첡f°¥ì>¯”ðÿMóãÝý凋«Ë;kBøn¶IÅñI¿ìòn˜à…4W‘¨¼äöïÒªàù|§Ê‚øƒÐ l_íD¨¥-Iµ+„ÐZµÜZ|6HT1¶å!ã«þû‹‡ww7×ßožº— æDE#kïv¢Xí¥*p‰èïp/ŽÊÛ‡{ çŸ~Sئ™ßLû²’’ošSKIÄãVäŠzRJ“Û(ƒUÞ]Z/¾ìä–%‹ùBœ!@âi®Vó&­„¨ovi¥g­$Á 'EûW‰˜ûLþùãþérùì3—_înÍqëàG›Ï€|TÇÆ hB¦Q2iÜ·UÁ·“~ÅëÛN÷ÿÎd3iÌ#5ŽTŒ"¼ä”_h;È>¨%Rq“ˆt>- ¿`Ë>¹?‹«§NÔ¯ñäBÙîG¾‚¯{d¸WÈ‹‡«ÕD‡&ñOÄM|›“ ±‹oØÒÀ‘¬Õ$¹Ø=-j¡fl¨Î‘ÆAe}Œž"–ßkÌBŽ,nV5/ÀSÒpª¾bþÕÉ1aW8EÔ„(â•0àC-UyØô¬“RÄÌ6‹l#’üüÎóÍj§¼@¥d[,&3]¹â¦f:« „ˆtÊTÌa²`Xêåpd_ª%ͰÌ ½¤Þd1kÞxeê…Î#"èd®Að²É¤'g[f\—¯Î¿Ù†1¾:L¤þ§?iµs¹ä·ùâmk$ß7Ó¼¬ØÉ¥–•›úÔx$C^FŒag=b˜Xʦæ Ž¡¤†Ée—°.ˆ2Ä!†)Š«ÇüÙ} 9èr¬’ç,8&IÄnH |$¬KÎ.;õY%DWáJÇèÂgßö%MFØeûìæ Æ œw©ïñMÛX=%±’nŸ 7¸ÚIˆ* hÑ?cU®¾·!"-®Vå´É\É”M¡mB⺠RlМÌ9Ínõ»¾þP÷êWçD¶´%jÁ}®i®W°‡¢Ï®ÌfFRÙ2d¨ä`‹¡G!»B1Ã'kxúõC]Tò.m©L|PR}:I+)ŧ {.¥bÎ1×» Ì:ÎX®.»ÇÉê]l™Ÿ€À-e÷“¨Jðßzn*å ¶©ÇÒ£&a¢WÛœfúAÞ8¨¡Wm†tƒ ÙÑQÂÐÒ‚¡qÛN5>år›ñ‚0Ž](&ÎMW7ôîi—¯v¨Ó"ö•Þ@Ÿë%Aß*^j—„@ Å™¤pô± Ý^£ûE”|O‹ƒÒþs½ÌõŒuÎé]²0í‹ËT =ø41ùpÜÞ¾<¢kê!Û3X=U;H B€†n˜àH0!ÄÞP mCéÑ7+DBã#,‰„ú•9 ÅÍ*Qz0FÇçVÞ°·ÚŽ®¤OcpÒ¾Ùc ¹uóדt¥ÄÀåòÇÓgu“n¯gµÙrȉ¢y•EXEßÚâ(MÏÔ䦑ÔËZ‹ÆØ.zDl©ÊØbƒd9i«¥‰È æ¡;·>ÓŸb†Ä(Ë›Ù~*È›1$ "¿¤ø(hêÑÞæoKÉÆ ¹GÞ4°nzDp²ñóÔ)Æœ@XPo F¨B ÖSŒ3ÅgA"$dTœ[¨eM"[WlÏDìÁ }8¯P }=UÿêâîþúKÓŽ‚œè¸¨L€PñvÅÄPÊ¢L-È7$9­_íd£èˆºD©\Á3C$×+:ÇîËAlž[»Þøû±cóœ6àTaj(JÙÔdŸžɆ¼<6ƒ*ëÅ…‚U .´‹‹á¸ÅNàØ1÷· %Ü0 mûlÅ’€b«v󼻸ZMã4¨¿gÜ?r‡dq†£XV²¯Õñª~²úµè ‹ß±i£!³CÖÿÙ2k™a«[‘‚WV§“„Pä7¹õ"õ|ó²äâjUü¶í[ŽæßM4öîbú¦*4¸(›¦-;½m~/A󙞲¤ÈE¾Å,ÞÆòfUm™n ÂÂÞcVåVÙŽ„±„[<{3)©)K˜2YBÊd M¹ÊåœPr¶1sUv×ìæèÅeÊYÂ0é“êEŽPÛômqa²ô@¤z³kR€œš÷ÆíŽ ¦&ð=Ã)%ìf¥ö™u€ú++d °y»›Ó°y/W«P߀úø‡tŒv°<"ûÈ&˜‚W·Þoá¶:U í­Øó%5$…yFò)ôúà;Øßý —_GU£Ù?nµ¢œ)±!,‚à«LIÉ;3ú\§É‚FCb´4y®Þë¦Iå9á· È|6!^¸Ä¶F3t#^Hõ…ƒ‰Ùj+˜C‚ÕªÁîâYxÖåÍ*¬Ÿ¨gÈÇæ`yFÖèWª¯—_ ynƒú(1B{»Ø|Ä~IæF€,[¦/j7³¹ºF&Õ.a½VÕP„Ö¹mÙÑáfU5¢0¹”‚“z%ï;؆Bmv.}©º¡D6tù©w„Ul[3Û];æ{ü«ìñu#Ï͵MX×hÛA»ÁMÀ¹-\ÀÒÆí=×<ù–\Ê"_¿Ü¬†m6œ¥EÞÀ6R/×ñ"lNKÝ6ƒÛ–p7ßü–žÚà¨Ø+=ó øfÝéمه›Õò ˆp ß!¥ö–ÚùˆØàȰ>Õž<´õÀ½Žn%·#À"«.–üúÆÁÝUC6’Y\¦¦†'çÁÉQºpyDò§›¼GJaƒ‡Cjs…|ì’‚}cðÖ^(а8y ní­nƒqa2e¡ 
LB±wz_uF®V¡¾'P/1m±º~séá›ÑÔÃæbPCÍþÉW“¦×«IíÈ»›§–Èu–“Œ™PÍ.¶ÊÍVÂKéN·<úÓfC6‰ðĤ®Ò6‰A æº$&Ʀwzî] „§ÀÝ µ® “ j§#ÔˆšƒU‹²«Ï„Ò¯`ÐÏLÕ«J£º ¢^=r³ûfGp¼¡Db¸=±wY!`uˆëaÒ0&•–Þ>wzî9zƒ¯»‹Ç,XÐâfæQ‹0ÁÒ)X‘Mg M6çœj1˜‡ñšÚÖš³ù§ŽÃI*Q³¼zìõ€‡2³9í—í®2›ò€­‹«Õ<ö2Ï¥Ò™uTÔ¡i© bdÇ(رwãñ]ÃëŸ/vüÏCúÏý”ÃÅõe“õε¿»8Ek“Æû ŒWÒAÔù@›A-CAõ’!¶Ñ8%/í’¡G4áö‡¤a MëÁ0¥WAæŠÉž&UL*­ÙÐk°q5»µ»ªãì;¼¸M9ŠKI5X?ß^Ñ7É`àúâ!Ôö1b°÷Q¶ ªGÈÖ4Þûˆo¨Q’-]` ²P€Ò…"—YÜ«ª/„ô ú®qLkB'P¯mÛP7׸:áiÖ $R…*ƒe,s-_Y^\­ŠqabªÞ7<ÿnÛOSìb\ Î ÛR&HÝÊ–6 Ù–è’elÓ‡2¹F"ùÂîóÅ*ç†@rxf¦êyKQ‹öÀu³mS |¨Ð6ý8.ñÍr–Ù²ÐájUÛ½õ÷Q}-oß0pËZ1J1±…`Ûò\?ôiW÷î_7WŸïï¿å»V—Ï­þ`õ@Îl“3`>Ä"ôÖΑ‚ÕEC¶-oA²Q çΖÕÊ‹WñÒKzßœ°UŽÛå%F«„éEy9½±~ßä"óªùœËúLNŸÅR‘Øš7QCšUŸiG¡˜æ^`HD„öd€?7û¹ÁZÌýÜ‘6[‹ß§ü?ß?>=\>}¾PsºãéÍ÷?®o/rqñØ5çE„'¥µ”€9gynäY‘<:ß‚NCDDM„ Á™eD•© *©Ÿc…ÝN)ùpóñòÇÝÓW?nï>´Íé½%õ¯b•p( ¾QYÐa _2J l® á¡ —œ5ŸÂT —Q—K¸¯;ØgÃ×Â*¹‡ã@’!ò S¢$ÕƒãÃä!¶TÙ©­¢t&¿bØf€Ä¥šgýêÒ“=‰òûqD"4 ]5 wÔx‰C˜Q£[`>BãÒß“ (…íûqæ]7;T #ÐÅÜ þl.>Ü_¹ù~ýñÓÅÇÏ?Ÿ¤Í&¤ŒLx›æfWö&…5f^•‰Õ²Þ ² ‘ ?9qÞ$1ªãºD¢iQ^JÂHªR=u—ßàÍãØ“³U(e'A‡õ¤ÐŽBÙ&¸ FMdGå'Ôó_ï/lhÉíü×#RSÁ4©åElš£Ìzlr¯Ke :AŽ èô›À‡uNÚ]ó}‹Ë”ë+SP‡þ·EÈË#ú¦À‚LÑ©-°ØÍì5ñ=†ÀdIZATþÌö&}ƒ“ê™qRg:2•¥‚drñ|ýtq³ÊM§¢~"ñ.òm‡}w„û 7vû¼pF=Þ“n2[ƒÜuMí’Œ¤†Ù‹¸º¿zT&=èŸnBgOžß¿Í o@=¤Ìlñ®HO“#|+‰×öäfìô¼%Th$BY#c~ÉÙ‚R# Ûõ£Õ”põR•ݧ%µôkH¾ û{T>JaõË<žtÉÙÃý‡•…ë ÍVÖmÒc/>¿X¾ŽEš¨eF–ÔWߨÈëD])ȬS´W¡ÙdNBšÓvÍÃÔ¼PlBŠOáÜ | …ƉXHøœ/ðï¡ú>‘»I; ऺI®¥É;‰3`ìÍj¾A¡qO)NÖôWêí6ͳ‚)5%¿0ô@!º§ÒJ„‡ê^44aW8ˆ!AˆÝAKmËÃÌWïªø*e¶RÊOr¼\¬¦Ù' ŽGÕGd[»-BuŽ]‚Mê;»ylgir!z:©}öÁ‡EæïΉmó€Ü›¦†éU Hˆ5½©ÏÀ)6BÌ®gœµ^p©JIŠI¥˜ò}÷ Z 2°"˜Ü&‹@1ô(’ N¢t6™´oø8‘Ò}»üªB¡Ìx,„&‡X‡PÜ«¡wå}3ùËzˆÐX³Sç= l-ä/H¶x,èõJ)nÓEÃÓ*Öówº¸º¶ó™ZùâÝ 5†¨"M ÑU³GCzr Õ—ïÈ/éû%XËùì4$Š©wŽM­¯ãÂÎc eO/¥d ´ÎU»7å|ÅÅjš;Å@ Bò[¸–PÿC=Û<…-˜IŸ(òÝàjÁû iu#–¹FÞ¯÷Rì.²HƇ›U¦Õƒ¾A[Ø–Ô¨°“.¶IS•3DÇ/Ôóà9 ‹¦Ñ“Æ"à ¸:`‘áù˜ä@“A¸hÝŒ½zV‰0ìý&°Ìà¬G¼[‘«a$ÄbqœJŠÌ®0F¼»8ä3Ÿ/Ve~ ¬Ú#ѹ¹ÖÝcm âðlÑÃÝý¯:ìEÉj­AeV‰p¡ÈkÈnË[dÐEô†Öxni-%:Ôú¥ÀÝJj•˜½u •]cÕw¹ÌØüˆÌâj5jÌN_ý‘³«q=g@4ê¯lÔcý®Mû />ÅŸê`zÆ+ºº  ?:¦æ'¶éD_¡æ>JYÍC~ÒxA¯!{ô³Ñ&OÏ--±IÍ ¥ý àÚ©æqƒš¤âr뽚Sù­’õÁW«Tsà¼?»š#bÓÞ›Q ŽT÷kÏ.Ÿž.¯?Û_^n‘Y4«~xúï«ašÖ…Ê®šzСì‚FÉïB|&еÖoöÁ󹥃BÛ8¹' Âß$)“¢ÉYÇr9cbC¾¾(o@‚(1B ÈRÒ!œ] šÖˆ·ÕN\·u¯^ <…ȈväPækÊzç‹«U 
¨§ÉqTm:7ã¸iq!E‰>‰ð)ðÙLŸ‡©±(¿Õ—öN»>ÇPvÚ™òÀlŠŒPdýlÐü¹åAšöN°Þ\íXF…ªõ˜¦„aà+ºß¶¤ÈWÉO-®V¥Ç8§ÿÝÆ7Ÿœ£Ð÷ä¨qà,›î=`7ãªñA|šlnE«8 ìç<Ö—v¿ã ,nV…3Aêα†iCùVªåÎ_Ïãk¹Î[‰£“¶Š_>ܾ,$~œ¾ðÎÀ«-¿ÿªü¿ÿñýúæƒþívcq©÷{ú~swyus·Çk£FGO¶¾ /1¾?¥jù¶•¾:·öžžŽ¬£ÞÔY¼H=ŸŠ9!•UøDØ„#ŠÄúÙ1¡ð6gîÔñ}¡f¸Ž³KŽOÚ¯ñ*éfŸ~_8úûTÛÇð12¤a“³ÀûfT(1¤– :1Zc{@éMVn¡ÛH=µ˜k|_ŒÅP'AÞõ]išªE˜×÷žWMã)Úªœ3B9¥š.fx¯n¿M3ª×תè<}¿¿àç6ýôàâI4‚oZè({ûÔóŠÓL• òIO°X;H1;¿¤ÎÍœ»Œ‰øÜš‰ñšéÝ„ >Âi§9ùì‡Ww·×Ïó ‹üõ×Oé¯oÛžKx³ÉqŒ:"r  ™P²õ*=Yÿu* SAc¿ˆ`¨ÐÁçäÀª"æÎ^H2Bí«#o@8¥‚§S5è¶-Éæ©Ô¼ëñK—ì°KþöãÍõ¯ë»›‹¯—ß.?é•.¯¿Î|Á‹ý ÛôÝIÕ3µ¬åŽŒIÏóˆsý$§¼¤Á«ê‹t,Ť®íNȶ³¾Ðkˆîª¸CòqÛó©6Š‘ºtW€Æ+/Àxþ×sׯµxâÕÝ_ÿÚø| ¾©Ÿì¼§ÔÕaœDZ²±6;gð¸CžÏ<™†© ñ_0Õt”£>Yå$0d¹¢ŒPBýì eSÿ‹HèQBvxLH“AaT _í‹Þõf¨Ù·µÑúý&%_o?íž‚—$ÇÖ¸¸SI­Ób·k~ß̰² ³“†'6¨°zå5õ.å>%‘Wm€Ûb’/IEœG—ÊÅUΣŒ,ˆ:ÄèW;†³[8‰@C,¢sç—Ž¡â6½¾1ÂI5Zö’Ęþ»kI«#ÙÑ[©¯æ¤CŠT[èYzNan5ׯ`ÀõÚWo WÖÒáÀI ñÌcwÝÁ­ú\8ÉÔ+$…ôÿ U5ð„NÒ½#T¹›+^ª ²[\ILr74úîxj ´¿!-‰0ðÉóÞ··_ý¿&vfx^W7qJ‚ð„´÷ˆ”¦¹ Úô‚Íæ—]0¸DÅéPÎîv¬$2Ãõ¥£]×6ž1…„x`˜Há±ošˆmö©îY-ø-¥Ïy«P’Ó¿¨I ]©4æmëC~€èùk*øžãœ(½„ÿX?b É’`II‚oÀ Ñ/#InÈÔœ;ƒ-Źùöõ —ÅýÐ.ûñz¨.ÈÐ ,ŽR¬8§õˆŠ%T•^ÌeÅ+éL™8t¶Nš‚„øÈûi3-Ži-ó“…°mÓâšiÖƒa=Áµ%Ëþh¶z»7#®©¿Ç5µ<&•² VcÌË#ƒ@y1#øT,6íÃÏž½Ï_V18˜parŽe°×Èã5iö¸-£ÆméKÚ7÷”F(õ\ÚbÚ;ã}ÙZÌćÅ©|ÇPUVa2>7³’ϤeD-‘X]Cƒ1DbŒ ›L.’rÐÄ—%A`"àñ˜+AOwrCêA2®4£=…ð/…3ÍãTۚѨÂËÁ=Nsá,ÐA“*$çv [½Çl²8âqч næRXDSö€ß¦ôñþ©§”º] šëàPªCN©óÆræ‚û¡'2ΧɈÃÀ5u‡qç`Éù¢Ï^‹¯2ÃùÌ^‘6w1ÐCrÈû6˜ÊVÜÞ÷_øòg7Ÿ.Ûš†§ƒwËÈ<4ÌbH¡kw‰4b`gÛ°[z3ýAªò×" ˆ 2Ù…z–ÔEÛƒFÕ;ù±Ó’y¥¸ Ã_õ)úo¡ªWíê·tñ¯¶¦~pÔ=縧æÂ±o˜…|J4‰¡ìˆ¤¦ù¢šµR|Å^¸¸ ÅÄ5Ajo%–®¨o4©i MèÐS Ôoúßà " àƒ6ùjàKŸƒ­SÒˆRȉìË!›­>­CÑ«¹n LâÐi¢ŠSÓ…bs€¤GìÙs%ƒ™+ŸÝéf°ÎyQWš1ì9—ßÑ•8ŸÚ_MùJ†ÒâCŒøâJæÅ#†®d¢["éó:F® ,↠!õpV'MºÙ'Þ÷Õ¨ÉÎgÛ=+;°Í,Ä¢Q$ÊuV_V㿪6¶ß÷²ï»zÆœþ%h“Eu8ªlýS(¤‡’Aó µüÐŒÔÖ¸Á|{óq?+óbsùBÀ›{ýÇuן‹/$Øî¹|ÆG)†Êm]­Ä6ã^N_šõÜi9)¼QeG±¿j]ÈcM/\ éº+¸>¿}‰ýõlH‡Û³óoÿ­"½ºØ‰hšuˆAšƒÃòeAt8”ÌÃû<ôùJBSîm3—Ä·ÙG„4Ô=IF·¹Á@3ÄÅùˆÿÜuY“>‹陨#zà^ÀíÈ1øÇ]–Í8¦ÙDÛ/g†µà‹Ž™Gä[IgʶO\ E' ~IÎY êÒˆ_õì º$² „€›ÎS-BŽ+`#Œì£è˜”ÏPWò™rª-Ç$- äÀ±Æè8ä™{ô¸É‰nû¦qXVvÁ½+º ³¹³q%’)ôX¼¸„NÚ<0$0 y º jDL‹cͭég{U°j(÷—}‘¾ýqç³Ý‰CΤŠ;2±Øñz¤WšæwÕ‚­¸)ûž}PÌI!pÊ’ª¤1e®Ì@Äɵ`™é»¥ÝNí˜ã¥ Ë",ìU<§ç¿~Ÿ  É Þ9)0òBêš1ÖŠr 
öûÒšæ“É©OÆX©Ô—1»—,¾àJ63|R­˜ô Ô⓼­á?ä“dŸô‹ÆŠÀòÿ‹+Yø¸ú"ñBB‹QŸªäá¥Èzš¯ —Ôœ„NË®’ש¨ ;¨růS;uZJôÝl¦MhÌëdƒ4¦EÓ­ÿ; *h›§ðCíP=z–¢uìéYjȺ\\ÃSÉÿ®ËÂrßÅGÉ'ŸYLq¹dk¨¾e£SߌÑ|¬ê£-ð®õk ™‹ø{ù\çFÃ;çœØz˘ÓIH] ДTI‚nh›áˆ× ÅŠƒ.$yäd×ëÄçÓ˃0&ymCS¯%€óŽý×í1J'{- É²;Üɾxy äñéÓ×[? îÄDž¢Ç¡Z/@Ϭæ¶V €Ÿ@wDJó|0D&_ãƒTÚ(2‰e}p%’I.h,h.h3[ìF\lpáyI ”ü ®×¶µû÷Ýâٵɣ{Çõ?FÄ1sØ`‡ f ‚ÿ˜L#Dd»LÃ-šÉFî<Á¦9Ý—ókØùÅ‹›Û½øÏ.ÚîÏ$ð;Bfâ44»ƒ¯×*æH™-’‹ßÏžw»;ÓªÕ ÿQˆÆ$-[LIáI7¯àŠªh°\Þ±ò„¶ò9$û-ÊC½&«@tBÑï®™w\>wm£iø=ÇèµdÄäÒS’@¤ó¶öÞ Pæ¿¿ò´¡mÀ%¹©Üš‰_œéÇò#1ÁÌÚÖ׎Fü›š2Trˆi,C%Ø abX0;¿1Ýà㋾l²ëïÿâþày"Áû–«kKXÕ^ïîþZ²Mú6¯Å÷êIÒ <Á˜v¶ÀWí$ ¹Â§ [=쎨³»Ë‹ý¨¿Ö?P?YøT[pxGF{6ÔOAAÙDú\N:GÝJÓqfJjËÖäøhK‡ÇŠÙ™ŠµúPŸóÿ”´X«a?¢‹Zh‘pöùð⸻ù¦"ÿ ÂIÁýhðÉÙt-.!bb®J×bqÇ.î ßò=‹dJ¶f H¥Ñ’­!h¶0vE”ùS\Zy,þ1õüAœq'ÉÇ•ñ‡»o— ËüŽ7j,£¡æ¢&s~é#«„ÿ u*zïõĺBé¨RɳFZM ¦Þ^îÎâi'D iœ“Š‘ŸÜ#jþ»'{¾Wn%Š)ÈN¼'ØèB⇮±5?ìØ5×\ˆˆ}ÂØÄJcƒ¯Ø¥óLÅi- !‹^qÅ ;°Ì˜ø¦É4Š cvÐùdƒõÞ…¦p.µÖ¬3MųХ©(€«0•,™ÒJZSL5>:µ©`ê7ÃH‘ÈE7RàéD·x‹KÐĬÿ½1-³x€k!LÙ¼EÃlp)µ™%7¶ð‚1v°Íxaº®à(Î¥´/h$rùìˆõÙi]^¹-‚rÑöÄÙ¯CÀA3ôoå½­þ5©_ÓsLcê‡vö¤¾=ôó†ox1`ʪ¸|,ZÃ~\é•5¬$2eø†—„Àm+OjòÞÉÀ­¢=½Œ¿‡»Û‹W_.ï..o­ño!ôlO¥šJöáYÂ?=\]_¾ãt¸»²úé^Ýÿaï¶6œºSÔÝåõ̓=œÅÅÅ-~‰šÙðªîúéá¯[û™‡¢òö×å?þëÃîòéåö’s‡?±Âìç_PVÛÿP\ýÑãO©Vqý,Æ— :hGÕ j[èF¼<ºÔ³ÕâĘ҉¢|ÃðdäÕ\Á<ÈøJ~­%¸ÏÞYd0Á±­sa !¡©(ˆúa„ÇÏñjÆyÇ6Ü øÖ±MSIâ…ãDŽÍ5Ž`·~Ó‹;ªØ@d- zÏ]:Î¥˜À>x^ŒûÇAoƒ/̪ŒŽ`ç< a’oü-6MÌF´5|«âºöØ{!³€ªÉ©©Å3P’šâÊìˆ>[Üd3©¶Ó84F´Ò†‚~xE–¼IЇG°WAßîGüÓrß!è‹_À¤>-¿ &685îÇ·qÞÆýàá½Ðßvp\,"cýž–.jÄóþG^L"XD0Ö4êT[åº,‹E±’Å”Þ>,RhôdqZ†ú±ž®ïHàS4>3iîíO}™ÚÒ臘¥ëEŠ‹£è9‡8¼Ö¤–.Q„ÔTÁ'ã"ű†ŽèÙ vú¶NkFS)ìÌ ~q‰a?^ ¡ÜÔó.¿=|äx!ÕÔÕMZð ÞR€Š@p!$Äv°ò ÆJgF„­˜AÐ@KåK¡”§¯8ˆoRA¯énu ‹aUPÒ”¬×xvp®gì=>~|裷ñ-½EÈ0ŽG·ˆÁ±ĺ™ïêÒ>6fÙ-ÖSC8î´,‰ñãøúCô‰lƒôÿ§’ï`–„žAº¤µkU;ÌmQKi ’4?¬@ÚeU¶¤’E$Ÿ–8|XµH\ P~–Ò÷ÿ¢)­8v£Z µ”B@Öþó±‚Rˆ9aYiyB’Õ‡Õh-É¢Zn e³_MÞkÓ½H·{¸ æÀÛ”\ð›NÉíiÿOöý“_f/~âþƒðÙ7wŸ´Zøíò‹úúJöÈÏ—mCܤ§ðÏÝ:){Aìh¬G› €Ö ¸>‘u‘¢f›ýi1BuÏ3èRÙAÅa¶Ùÿ,£)Í~5i‡;*µÓ:hØIÍK@JÛ‚¯V±ÐN„_:!žUÉ›:hè)=xHˆ>´2Sõ mÞM€[€)rE£)P©© LYª•ˆ¦x¨¨$!…“{(mpçe Nx[=¿½ºüóA­JŸx¿|âG T¼¹~ºùý¨ÿre¿rG_ÿ«þEUÐÕÃÍj`¹PÍÜÜë?®Û®ë |S§%ßÑ4@dd!¶$Ïã4'†ˆæÊÛkìÅAñ”U ÎU©+‘MIƒÅðØChòa›}Lýôƒ»G¸ž*‰Ñ f6+Vñÿìü7•èoÏIØì汤ň±kF®Èy,M]$#éÍ%b)Íhdè['ï‚4Œ\iâFpÄDôÑwAž‰&<Þ… 
Õ#Êq1D"LCëCÁñí»]v…éùÃ*G”C4º¥iTñÄaDiÑu­+“PÒ#(Ž« kÕ¦¯«Z©P›%T¾¤¶èòäi«O«Qœí DFiÒƒr¶zˆº@ô¼3˜þ0«WÎûb1fqv9¯ÆkYB,«?äÕ¿’Ð$ ]}vØdEí}È>ºÆo@ ´ÈC¤}•†ª÷þ‹ãîìâó•ŠRë°³HJ×`AÏi«Õ¹"40¥bD1Äüþ³tf؆½6z$j³ 7C¶Á±hPq`ÏÍ0cy&é½m’¶ŒápG0LsctzHž+öÖRÑ`(廕ÈfØ‹¾5 íHIêí%o4#ö’\ì™ »=×ß>©ûÂô)œL³ ‹ÆÝ*ÂHÐl¿xĤ=yâë©‚ƒ`¦Ü[ê[£ÚÚ*¼ãžÆ¼žlZQÁ:V–3‹]+ByÌS<—¦TnÀ”íÆ? fŠUèK;5²–zœ¬$?dH=õDˆÉ@‚:À1ž»Àß#ãˆÁ$õ0 5à Š³~ެ„6Éd¢ÖM§6ê)ePÏÁ„>õrOSÕîÏfg,‹×D‚ʳO„XŽ"Ù Û•PfX„½tHQŠ·$“A¿EDùí˜LUg`D–ñ9D®íIpÜÑÄq ?,Á8Ç ƒ}¸Ïe «/«iIè[iÒ™ZòÄIj#îš4Çd6ꂪ@ÿª3½ÜÄ‹D­àRYU6+TEùáð篩Àª°·òÂ_`U¬Ÿ1„U†ö Xsì@º€ì5gÚ&ªTs·7Wš(~ÞªO¿ß}”¾øÎÙ‰äÀ Uæ•RѼ$†#í©'!͈ðúÚ.Il8ò£ž1‚id9LË•=/H£…hªAŽqdŒZ<°°C,¶–L¯Z'Îmûòl§`ýi!Þ†m¼oC4™£¸®™eHxü8¤ Ö_hèÓª-•ã=:çJË@öå”s\}ZÕu-Þl€›ç‘\ÀÅqÇÕ\°ûq–˜†µV DcÄ ¹¸ç®î ¶¤5ɹÛê˪.ç’Ý=`ˆMJ3°›pº¥êœLˆE˜eËIS,ê-{¸­¿¬FoAôt£¦„Dp(“T³êE5^Ÿ`®A­Åj­% ŽC…Ö|¢b”w °ôùÓª2I\b‚Ô¤8o7àC™¤x袩ÓÒ£Ké{s\`¦z÷¼D/ì*Tì‹ &"—§+Ì(Þõ­5Qj,µÔ#‡,]è"0òlÃûÇ+âý•ÎÇûy&!KŠ>BEýhÜ¡h!‹¸µ–Ê›Ð×v–k5ÙDDG†?Í&¤ç2X«Vq ÍéýÃÍÚÃÙï—w¦Ø³ë«ß/ŸÇHZÿB™ Õ5XT ELç¨Â¢´xÁ¢E!g§‰B2¤d´ÏŽš *1p!>>¢GGIKÁ Mä–çÕÅÙoçw¿š\X ùÂìâÐEþã럟þžtЮ`µXŠ5©†“b!-Éç!ÆRša#öÚšÇ74´Ä¢èº9Ö„“DšrÏðçíù—ù †Ï÷ôÇí4Óˆn±»™Šèa¼9 elV™2²¦oýÔ]MvI޾J½ÞÌŠ©øA €ºÂìf1{Y¦-¶%RMR®òÜk.0' I1SRdW½7o^—l¥3~ø + ¦©ºí)Fž¤r¨cŸ‹ì<¢©êY{ßáxœS9uBß}]·‹N•Ñ<˜PP¹ñ³f 4õÜI M®›P7‚D„IpÐé%š“lÊ#,ÔÀÁ{ÉÙÝH¾<Šð¶÷O‹Ÿ›§×ç3—’ÿüö¯ÇÇv(ÑÝ^´îŠ{¼Ù¹€£I£dS”P'!,{s”`MŸ«¤„à=‡i{Y¿l7r悌—§ûýrµ–ÓY?\\Ôzæ7̬ 4*«-_¥Ã³yˆ$9,GBjÔój Á®y5åí`BU¤9šÞ‰SŽ[„1àJ.w$@á,B¥‘!$FSÈØäÕ3 !Ps‡@†­¥XÓÌøº]í½<Þ~Ú¯Rüs¯LäòÏh7ÌÝãFðAK‹ÍœŒn„9—XÀ–òø€äeÄHDM†då­M$žˆ†ë©äø&»Õ^×%&ONe Ôw»Õ÷µHYþç—×õ×§å$¶oÝ9†’,HqÉlç¥nýU6º@‡`íu9œ²ÍmÏ«í¶çHH”»ÿó¼žÆ·&ýaä-¤õk¢,PJqŠ­›Kä×ÊV*d¢¤SlKºÉ0dr›¦ « …ÏsH$‰XEþ˜ŒõÐd²óB Ñ Aph `ÑïøÌ†¨ÇF¡OÔƒ`­ÏñÎ;7!H%ŽÌÜ -@®(žê¢ê§Zç5f„â6(Ã]@ùÿùÌ4‰7º|¬úÝ©êÖð]%]º¨‰;˜rf,1Ud;çÌ$9 U<.ºb×U°ØžÇ=7{Q¥Î!&Î=HîfÑåg-äóØøÜ¹³Móˆ„Ó"å”×Û O,‡ã”YlÆ ¶<ÂT!Ën)ÙDƒ€[唯·¢+ÔÞ{[`ç£ÞrenØUrIï?M#\HJÅ“P!~¼™… \,Ø:댫?’h`OæPÉâ) —¾KI¤ ¸3÷[š„]¦lÃ,<º’ ¾Ëxl`%^´ Gþ×zÿvE¦“ýïÌðÓŸO-‚ áZB Y—E$göF2j z\¸5@¢­ID5$e?2t+?×Êu¿~÷HzÜï·Ò‹Ÿ¶{csèN×ìs©BS@±bIÀËš6`(1É8–Uî11%1:,F ˆÁýP®žÿUðj–5[ÑqlZ삜Iúœèþe0Ÿm0_îÍ:H)M26ˆá?å¦Ë_»ß¾m7Ï¿­Ö‹çåófûë8 ¾ˆ|¦#ÔkN¢Ÿ9ˆUÖÁ"zÿwh°Hè•ÏO3c¸Ü”u¢Ç´axS &ä-=º`'Ø 
†ÄzLÈ#|Í~#uôÍ­#`ñÐi¿7‚ÍÝ`Ê y“¹Á<|·Kî+¾¬¤’  O¼!‡“NÄöDžuj¹ÎÏ{ež™Íx¹A¿¹«äÜÌåqªÃ—ïüöeçF XTç=£Çø™¼Üo?(©×Úh|* t×8€.úª¦kÖ}dÿ²]×I`pOðdC 0Àæq†þc$F‘¿‹èùÖШ"þñ GßœûÄ+n€9ƒ ±Ù›‡þò(ÏApɮɑd¡"xçËïñõÕéÚ‚Y°(¦fÞΥ!GjEð7̽ ¶Ãƒkš³Q´6‡Žþh&‘4™4ºe82ßGJ‚‰=’dti½›j&—÷OûÇv¡wQES*xΪ¾·©nØÑÇ6Ò|]PBîÖ'팫›»àh¢¥F“‡qm/¨¼^HA—€¡qÙHúAy 0œ®xF(Èkëå)Ý T5‚C$jâ0'8øºÞ-†eR-ƒbç,™Â,\L× 2hd ÈÚ›‡®*q¬XÒ&ã'ÔÒåƒâs5Ð)â+úAÏ?®!]Õ;}YQg‰ë˜#ÞúØ f½#(‹ˆåkÑ#6~¼l¾¶S_×ä]PrÌÖølÙÏC²w$‰&Ú«~g¬/‡¸¯;‡°‘VÍãGh7°Uܾ?sû"~îï‹Ú(mðÙ“”TËQ¼|’ú­ÉÅ £) öuQ×l¿+Ž0×7ØNGJŽ7¹s(%-®oÞïT7UHÔ ®±èõ-§ûý½µXôÃoÕí$)sá±³A‡² óZìµ9©ïH\¿I«ÑŽ!É%)ðsÑ(U¾ž˜+ÁD¨$7…¾õÿð£CÊú_Í`FÕ>ƘXdÊÂ$Ú$§âHTMpÂ@;N¹7nƒ“ãÆ”‰—„,G¼YP(²î—ç}Œäíœõ1–(-Œ¾¬ˆÎöë€ÑŽÝÌøÉË&u–Œ'¾±ûp¦jà@§-8 á6ÛažïEÞhÄNvàíç›ÝâËëêI[ÏäËÖ•¤b)ã$ä1ï˜7QWå@äLza$°&,.s2ºMÃå°Ãç±k>&1L§,Äáš`X ÇK2·Ë¯÷ï~´ ½ßÞï(<3üDõì¿#ö&áÑ‹%e?k½?¼uKÛpMm÷u-?@: abÛÏi~¨™Û%ÒåëE©€ËµnöòJÖóy4Ù¾EšI˜[Gä©¢{WL£R¶³k†9]±'ËEóºŸ1ÚÓ8_-1x”·øèSeû‘Dš´ý›ÎéŒÞÚäÇpî D#Å#†+sgLcÇYoÖÛÍf¿øé¦Ñ—˜`¯jž‰ëh.Ñ‚ÎÓ^›Th[+U|8Ë6ß_£l‚Ù\Û¥sí±„šÌke%#¸ušä?,âmÅ)D¯Í*”(ñ¾¿¬Æ—îšzè­©¢$ › ùõïëÜd÷‡À®DßÀeËÞ¤¹ÎG’hÄÉÉ ÝZßÜ58¼Xw„bŒ×Ô·éë’‹ú°Ë”Ô”¯ª¢®†‡^«ŽuÓíÕ×Oß´¯½•R”ŽÎ1%Å›ìQɯ‰bsçtöåÖ±®?îhhL8e%,›·ö¤e=Ñ“ˆ©ÝìïÓìßâÃÓÏ:xÄôScòe}ꗮŌ[ŠGM¶¬ŸdTCO$o PÎ}ÝdÁT]ªJÐL‘ÍL)F¾âÀh·í)=T*3ŠlìL?<–‚£Œáù3ªƒtªˆª°c „Eä‚Çg`ACå ,ˆq;ÂÙq"•ƉÞušKØ{9Ç@™c”ï†T³âèÊöÇkS³>”ŸZì âfΩÉ#ÔP‘±ÚU5ib~šHëöÉòBG&„…‹l¼|Rú­.eyGS2‘]”ßÀ8Žçǘ5“ºàÿï‹C~KõÆY; à*|z@k!:7µ(|œ8´¬¿û}“Ì»¿±»3vñ|¯w ý¾é§eeäOI|‰=ÇèËðY|A’†b$©&„#ºçBb?;ÅR´€I¬Yÿi#)‘›‡¿nõ0 9b–—â\N Â;ÃŽ’NhÈ[Cˆ¢Àè‚/øœ ±¢dàÈ¢„Yf¾ã/¾DèKz’ˆ•ø“˜ÓwùnHFò×MŸaÎM*ìÉ©y’ÿ›£ÏZ­‰×$pRÚ ˜Wòý85önç”ß4Åj©ÔßëQGÌ#µ;#a(çTHRž„Ö$ù÷XöÑOB ’›±êêðˆšëÂ(É…¥zâl²7/ÇKB’'@S !D›šT5h$“&€ÀŽLù&_}5Ôý3Á®†Î"Öø©Šo*ÿ>Z<ýÇÛÄvoÍ)ÍÐáU·l'˜¢#`ôYt°MUFjÄPgåm¦9tÎrsÐ!9vÍr2Ê“/“C0£úà[óàƒdg4Þ·ƒ…Õý}~âƒà\š­`L\hßTˆ†'áBç4bœ…‹#ÛÓD«¡‘²è‹ EÀËÓýzÙ¥MˆDò×ÿØlÈ_–Äs¿ú©3ˇc,ÿÎâëêþûz³Û¯vwo?ëÿòâP™^ëßÃ}?°xyýò´Ú=¶ÔgñØÁ¼(4æ>¸äbÞA2MZÞZ`¦ÅŸè8ø™¸¨ÚÀ*˜ò¾Ôvÿy¹ßJÀ à¨ßË{Îõ³œ–Ãß¼{¾ ŠdN2ˆ¥‘ë§HÂ$L°¤En^N«öxÃàÅ”òjG–¤vÙ‹öÛ¾5$ë¢uùX“{c™4ÉF¸ÓBt˜âöADzŒ 3½Gŵ ’ÄÙûK0³ž³ ¤sÝE928‹äXÌ 
‰V£Ñ`ð7†A”³"'°lиhfvBðׇ"õƬ¹Ö§*”£/+Œý$1LS©ž?6gC¬Ÿ`9Äý¡&•ÓeÕZ¼œ¨½OËû]éÕCÞ?ªC=m~´3ö¾#’«¤ôäB଒C²™e$°&¶^ÞÚIº5ÍÖ銗ÛÛzkÈ‹ì°"óŸÈ¢l=%î—í¼€mZCWR²và ¢Ãtmr$£FM'å’èå=x^krIròïzÜì]ã»­o÷¯OûQ_ëãŸë‡ç¦Å!ã±è6<óÕ!HW­Ù4*ir"28j¤:+@pUû—"éF<±m ¸{Ø®^êÈÝÒÁ¢ÕU\àcQ°˜íŒˆÇ-ŸÛ]NÒh4d™rï Ù„y~DXƒpRÛÍŒ]q´è; %£ hÀBÁ¹rRÇG_V-šNÃKœ¤ÃŒgoçBM‚x£¿ õþ9Q¤-Qñ@ùÄÁ‡äœÓ ­F*.饳“BE DÔÒÌ‚ ÔŒ=‚n !Æù—¾xêÑvL¾”òÇš¡:|¸Iœ¾¬@Ù:ÆãÞŽâCcÒ%‡ó²ÁªýéÚkؘkvÞ/VÝ‹*ë•Þpïû~–±e‚/9ûXÂO–úFjTãAÏÃM‡ŽÅ;ëçÀƒB ù‰(°ôÞµ°ÿŸ÷iÖh6CØN´b”¤üëÇz÷¥¥“AÑììeKïd2“bªº:’Q#/£+zÌ”˜Sr$ð3šóú¯“\«f0lÄ`þfYJÒÝ„Nsú,ÈÁßd Šõé¥y#‘µâ‘À¹ôz>Dµ‡Æ‰Ë«ÍjûG[Ã&Œ’ú;n×›?^&©“<«o«å°é¼ªé&†Ùw¤Mk™`DdãµùúÒõÝA~†’üCƒ€ªøä¸¿ct®).î9|Mà+lä ®s.^wa^®6ÏϯëÕþ×€«I;D¬ñÉÝÅÍt]M† É ‰ãjHö‘Z+eìˆÞf•Ñy1ô;Ê/] ¨Bõ-­uý.¢beÔÁ-æ(#z¾Âº.¥> >íú¸»ÛÉã 2л3?Ÿ¦žÁ„TŸG^=j:ÙÄ–‡}ˆ\ËÓ=S†Í´U\¹æå^[%Ñ¢œº"$o(Ç«QW]?6)’j¢®®¡®Ð¼Õ^u—œÿæiù~fâøÃ—§WçîNTqÀQÍÌüe{Í4cU£¡%2v2­b‰t/dJ%¢-Í—Òz-‹>z—×kI‘²zHÉnÆ“øªÔäÑòÜT­ °Â±†QÉ“²Ä¿2áV2Cbß»Àù IWb œ¶ò#ÕdHò–µKéÆpˆ¦êØ‚±4‡–4¥ëýA‡¨àгvlY;ýñîîÛÓæ#:/ÛÕFÇÝúÏ:ÓæVÙ›„LŒ]0@yâUþÙ8>¦ëû#Ö F^,XoŒåÈ© =š( M‹{Ÿ¡Ádõ¬HX=¿l¶µtlé² u"+€‚ª :“ ƒuéÐð$”ªªŠî€#cnÈEKx•ÈPBhïÿíËY/õ´ýþúòõ~¿\ô„GYÇþuZå%‚½f<]ÍÕ¬WnzáÔh¡¼edòžn­…ð¡\ÞÆ­†Î$W.¸œéÎBµgîÄ;Š`fÔNJ¤~2—ñ:.ÇÇ¡×2~âqG¿Û=ÞoåyÛn¶ø¹Zþ?w×Öɱ›ÿÊÂÏQo±È"«†Ÿò ‚ /Ap •äµ`­¤èbÿûsÑôÌÔLUWW}x±ÞѨ/¼“E~ü}bÍ™“,kì¸!Û$räÕnP·ž±ŒTö 5Tá‚ZõPj`*‡r§@#¢4†É%NpézµõƒINÉKC3jYEºñÞÔ>ÄXÂQ4ÞG[Ç]æ½äûŽwäha¾>¦Z½r©ÉŸ¿8?HˆÊ{ß°< Iܲ)^yÔ˜pfûxŒµøhäâàT½âIþx+¤”N´hí½yÖfüjíãäÄ:|BE çã’ !Nv‘{—Ø?'ü _žo>ß?jÒws÷lÅ]ãœzŽ-ç¶ÞÑ’¸ ~z»ÿ~wšŒqíìKŸ^U_ß6Œð.nà_Tªßì ÿýå üÏRY‰#Ïöéígûéçþ:üË}þÏ¿?î|é§-½vŸ˜ãûá‹ÕÒFÛ|Í>Ú~o|-¥ÁêI7û—¿ü¨oúíîíË¿þÛ?:)~¬è~„\¥K½ø÷§Û=yðé§O¯ï7&6_~Ü<Óßžßß¾üØûÖ¿]?¼ßým!cÿé§Ÿ>ý¬º÷no¼½ñÊõ¿õOŸ~úá¤l‡¢Ä£)ù&8·AíYhX^ÞÇÃå…ëÏìÚ¡„@\¶C¼Y uÎ ÅÍwŽq×>^¦¼¼põPNÂÁ2òñ5fn/_Þ^8~«˜l…ì,û§˜H/ÚÎ -”œ˜Û|"ÎÖ»$ˆàfé]¢&EMì£wLÝ@ú4˜¬‚‹#õ¥$¡MvV¨oðEýM˜ÇMÜ¥Cç.®L!U¤£gÓ CmÎÑÂävˆ.g(Žâ ã:§¾d?Ѐäà8€ç@]¢«‰8þc#š1‡;Œ98qÀQÄâ1st1'ø©Úi‹Èý ×°64á`:ŽQÿ›î×Ü)Û KèÌvö—²¹ë‡{/îS~÷ÒöÍ*¢}{*kO› „I³B™ã õ~é`߈ÈéX õ…#’‡˜‹ö‘׸S´>ÐÅ¢ýó¨5§•Ý¿YÙ=†ú®%ÌŸ~ÏQŒ q/Æ?‚'À5®Ä9v"‡M¡Ù2I×´E´°+´ºmeer(‰š§;_´'œP|(Ù“@é„ßÿ @'˜D—Ü”òB \ò0Çä„ øÃÀ¯6Æ,Ù~4P¸7fúù7ºrÛu³ë3ñ XÛj½Ï´¦"Ózh³.i–¿â›RaÍýÔjc‡aÌI´k 
*yŠhºvüÒàL>¶ÏS €³KÛ5åD—fŒUÓ\VÑ-†¡ó,th¨ÓÝNçºY õ]Ú‡¼lÖbG>µ•àͧçÇ×´Õózõü26cÁM¿ÊˆS¿‰pvÂ⌠ª%+Ì"‰YA~tAÄÇìè‚r Áð^*5’Â{Y:¬š/‡19»Á\Ó”!{€°pÝ9”å-4ÎH[d ¡BÅ9l@(ç°d‚¦M=W÷¦„«ëÛà6MpãÆ¾ü¹þÙ0šö·ñ”¼¯“àd,Œž–J=~XÓ.+–`&ÛE5ת³Ùp7˜àÝ‹  A,vÏx6fþu¤‹IlN© QªÇÎùà$œHñÖí•Å2¦8?¢EggÍ'”}`;¼½ó"üsܧŒóq™M´KDLËΡ†‹Ì|×òuŸ¢Øë¹¼¯ 'j¯¼¯óP1§'Š ®¾yêÔo?Š“®†`ðSh‚ï±Úºñ«×—¯‰Gëz#‘OéÏ—ï]Í:Ôuv« ”ŸeµHC²ÖÉ"öªaé±S#(U5‚“@¨&«l½ö&]"`ù6àœøùè¦j¯®%b{ç§,¸fŸDÚlÂ9#¾Z>›ÌEoÑ™tW7>ç³Þ¹ÁoOT5–ÛcÚ‡.í³~·i­îÔª]¦9î³v5ÝdHOwW›jÉ@fÏìr×ÖiÿøÁâö×ۗ߯¿®®ÿþá8y¯¨gK3µ¿±V͸ fð”NH ªfE~Guõ¯1W×Ýœš4E×´€†¢@ÙÔœ©ƒ‹ž~jL„$×6cWö¨TJÍ·V$9„$ME»hâ !¹T—޲Œà`\MúÿP¶´¯P¿äRÒV؃¶<Í0_Íþçõñåª/»ÔyûNÕp^aEŸñ=U!LÉ:2  ÀØ•Ue>sø”‰wÎq’L-8Ä‚CŽf”=NÂà0oO„5ÂÞä ÕËÒ§ÉŽ¶±nØkEðUß8·+S_®¯îŸîV=£üòº^ñåêêiŸùž¼˜¦”`7N)ú›çÆHîì%udìP v¶¦ÄåC¿‡¶?öU#ô³§¶øÊG]ÜY…XXFƒ‘x Xi´4”à|Ìh©aàúYH”eÎ|—ÎîÃäÛ3½M®kü²l“iÛ!(Méw"âèlþ(AÕJnl -˜Ö±0$Ï÷lB“A¤žT*@mzl°°hÜÖ¯Žˆ¾ùܽÉ9/l4%â…N¬ýª»ô¡¬ï…ÔìµØg‘ ©©AZ7rç1±{,ßnZ††J ×w·¦’Ëë±M1bSlEÒ£„šB¾m^dÕÈm’];Îפ‚I–Hʤ'Ã#‘{ùT‰gí±*̯¹Œë˜ñ±9Èšœ¥2¹ oý‹´ÊZge^LûÑèO©0q$‘Ûœ@•Žû˜q–ë/ùá.z,ÓÛFŒý¡wõzsû2™#N—x0Gö“è>ÀQ¬²ì~PJÕ"Þ¤}ðùD`¢%Ì‚q¤´ï"©ñÚc#EÄQí‹U\PZϧG•ã–ï¦L‡ºÈzû¨eÌ3÷ÛŸ„€–:d_F- éØFÅatÃÓfš1ã[m—ÛÝbŒ²Ý¤šÂ(ãµàɧí³UPtPHÕBZS¾¸"퉂lxZ =€¢=‰Ôˆhí©‰8, ¢Lq‰5RÇ›#6ŠB »ÑþÅ"Ú28eu8NOBACm¢÷¾ ÏÚ÷¹åát½<§”÷ö—?¿>ÜÜ­FjHD¾ÜVÑO)ˆ)¡·p¢¬žVµïd "\BKOJ”£¥7±m¯QàÚ“K•oîº. 
®¸ëRmÊÆ¡4°£oC@ÂÜÂàºeœË«¹£7^3ÐpR«vi~ÖdbŒÍÇø0z¬j¦¹p8¤8NÝ fã|F—¿}üû¶€øé/«D6ÿç[ƒå#Ý^QÝ^°^k÷réö´F„œ¨›¥õgE1 c¸(^—^¸+ïÚÓ¯7«¨³{¹O˜·$FZªÎ£oÓ΃öp¾)íã®ÌÞ5úׇËý?üôôüx¿zùºz]§uãÊÈ|:d!CnˆqNÈ%è¤E½Ž]Ï3EZõÊi0Ø¿1 žoÛqŠÙÑ’(ƒ-v=ÙTêÚ±°Òã( ÿ¬aÀió¼Z’² ð7=)©Àp¿6J¬[I.» ºòHe*$œT¦·kòeš=Ô_Žƒë˜ ñr”Á¸ï 8?µŠÌêÛJ]°Á±ì㚎fݬžîO¸u65üöÿÆIÝ3Rq,sN4SÜ”9("Ž‘Æi#eUï433 ´¯¸à4Ãʹӌ¼¬õäRå8K<92Ž“‚SÅY¦E¾ùqªC5{»l£j¸ÈÈö`ÌdŠR…EzkÈi”8sbç$´Ô"Rý•:ä\çMy°,7íoÔ¯xéÞÈ)׿›[Ü_®Wë#u`º puÂ}Ô®™Y\1›§¶XhÕ`6GdCÐ<ÌÈ„<Ì¢ ÁlOD5bWUû*I»UÃAh'ßÄØÇ0kŠ )Õ‹ìØa*Ë´Ò ªÚQqJ©’ú=g^H(6ØFR-@ÙéwÀ¥6k4¤û·ø ÐØ &0Z(áó B!Á8–ÃøÆäÄ.kƒ?Œ³S˜Ã{){2¬ã踋!=õÒ0®±uS¥IÙÅãö„¤'uA4C*«f|½Hùâ ›ë×FôjÔR¦,2ÐàÒŒOÙ>ß±wžÍ2Û®7‡¡ÕD‚4?ÿ‚ÑeÁ„!U‡{â«&j÷Qfp²0š°}Dëú‘F ãL§é¨sÁBC¾‚ƒ¸º ¥$*?2Ñù-ZW[ÚCĉÁˆÄÚ£‹¯Ù@òúúëêæõnl»QÓH#…)»)ÅRµÝïÒ©w-‰Ñ±d7P„˜’á8È×E•{yØìÄÅ!8²6¿—GpŦ¤¨4Ùe‰ƒºÝe%|»&çü-Õ(Ò€Ž>v½´í3nr:¿©ddã£:n ¥ê¦šìöF.Ué{\f³K² â|A!Ãn÷ÔYlÂÖžlªp¯›;^:çÉ¢·ÆVôCé•íË%7Þͨߠ©?Mþ»™]y€8¥XåÔÍêjUŠÔ`Ò[:sˆ·,ß°­ÿã/7T$—Ow·×·«qCTiËÜE`”y5~e/SzRÓþ@o®Ó 'õ¬ðê‚K§|Éîo¦ì6åáu==IU™O#FÂnTK•Ú]ÅѬZ¿™´_þÍH³à’“7þ†Ö-UR܆љ†™ˆ1¤Û`7FG&cÀæ#i7 ©ÉEç0ÂßÞÍöêç»ÕšÝþùññé#$ØÿË aõ§‡›ÕoŸÁ À¿]$T¿]ÝìŽWቷóVí2ŸñÓ”¢ˆñüRÔô²Ääbû—ùCzØ‹ÛôPæ„׫Û_W7^Èi—Að›pè_‡>b÷f»O¹]›—ÿÃN«¬ž/^¾^=\¼Ë£Û¼üÁFUèìNçüPi"œ¶ƒÂ,;€0eþ(D¥ èüß.ì'ë«ëîÕ¿Í€}¹º[¯NÀt2‰‡×4ÒÿÓã—÷ó0¡ú‘QhçÄ»òFÁbzʇmõ¨u`ÿjxz~´ëéz Ú;Ï?\‰¦8¦øÑ.úrýxÿtõ¼:Ò·²Z¸>Fß‚‰Ái–ß ÈÒP ß—V2ž#3I…ÉmÿqvpÈ™‰ÀÐßH%šx Bq+$¢qp†÷£}Âkð&úÄü…:Ûû±Ðûíèí4ǵ**žWkzóè½ÿjÞÌûÉÝóŠ ‡#ÒeÙ†hWÙz‹#ô)1“åõfÿ4§·°ÛPr˜ûØ¿Y‘Ú T 0JmÑâNÀYjžró¢¨qS{˜­7*Õ[ˆwžiÞø|›ÞöŇ÷öÞ¬ä°ÕD¢ úQzK÷˜géݽƒuv€ÌV—ªÍSG»h$£¶æ±ózîËé½Y‰ÚD:3= 8Fm`àêý,”'ÐÅ %npy¶Ú¤8¶Muõ¨ÙñxS¦8§6Øí?P[ïÍJÔ¡‹Pu”ÚÄE‡:Km<‰ÁŽôd0¢Ó6¿¯?­ŸV×g®ëóX’1ªÍ/›(äÕÎq·¾ì¬ÚyðpìK¦F´šÛù@a„]€Caösì‚§´azH=: c™2îVWëÕikØ/°:ñóË»Çë¿W³¢Ží<–|ÈQÌ ïº{Ž2"ﲪa'öÐv±,n'4)8œR¶.Ž--^­î“_ÎÐ'Ù7f’äWÍ&:oñ°†‚C£óY£ 08¸¶—K•«.tŽ64wËÚ„ú —‘DbJïU*y·šî‘äzý|¹¾ýåÁ~6‘œcÐLÈG%±R6$´ËÜÐÈMOTUÌ„º ivU;9_óHo'޹ÁjæD•y°Îíý±‘¹Æ"æJN,ÞMHW± )![qóÔýËå}Ò®œÌˆ]@(ˆC> &n°‘¤'°*k—ÍøÁ®Ð‹»rPjàʾóœ2Ë’ |Xþ}¿zy¾½^_¾Ük,ˆ‚¡©B˜xQLø³ÉN 
©ÖYºQ¾JD)q@ï³÷t?LÊú.‘*˜êìʈK;`tÐÄY‰´9Qòé¶ÔOÛéüÏïÿy}uyw»~1Ù_îc¿Q¾i!ZS׌0áèûÖDĘ“g‹¯¦×J_ÛÕD³°D?÷„UÉkÓ®cj"^§o€H‘8§xmìÒ¬~ôߦMoo{¸ÌO»fVàYׄÀ0a¨$Úg¥m±ÚÕôPFõüìÊÏ›m{þ—­šÀÓœ=‰Tñ¿Øòâpaÿjqj‚v&öá{ãÀÚ,Q}¾¼^=¿Œ<0ÑsK·Ò)½íšjq/@…Õ—]5oM–¢ *¿&¥\®Ùä›d{‚ªá® ÀˆI°ôhâ‚·ËÜoÅàä‹Ðö ·¥þ4IßÚEÉDÙ]›ˆ×öŸû·jØî/Óé†u³úrõz7rœ(œX^¤†¼³b  •^ l‡SÀ±éÝ™’«æªf&®…X⪠!{°¢®ÚS Oµ§v¢h„«¦=êaŽ«Züxªï@cÔ…¹%oï¯~Y]>›q®_žÿôñãóBfÿ2YòYïÄIœ’^EÅâ9 Ù'iVZÕ<2¦v7(òHˆ’ñH“\lRîɦŠKúΣ}äÒ.W¹Vñɨ]””|ZÖ'·±ÙfeWÆå„ àZ:á”V(Î êl<N=ŸÓŽ-Œt\ÐL>ïsf;ƒœ‚{YÔp9{ê(G\/#Û3ø2Ý " Ó„@ 0 Ãì–8-m‰—šˆó ¨fà3×ôÞ0„¤½7+šöRùGúÓýÏöH#Ǥ|؃+°‹3éÈ~ÛOÈ,I²2ûåØ~Îs÷Ÿ‰Ý80þáí½½=~¾“Òô$S¡1)ˆmûíɨF·ƒ=5’D1FPÅ@Â4ˆiê9ŒMq¼•æ>²'}Ls¬ßØ”êY¤Uà›I«¬E¤Ö±œE—¡–èžPªXt–¶ˆˆæBÙZ)À„µi·ÀÖö‡Ow¯ö9bºÜ¯—7\œ° $™¸Äv%k;»*ÜaÃÅ^|•lG]9sǶ‘ÐG„ÉöNÑÕ_ƒc«ÞqT.ìCôþÕd´zþüßþø9•’‹ØÿH$Lµ†‹W{ÀÄDdþ&8Ýî ²È=áÒÞ@üÅ/¿=|þawÀþÁŒá—ÕËç¿üÇ/æ÷pÜ?Þì¿ Á¾nýz‘Ï?ìÞå§§×—Ï?Ìÿ®_¯î^W?m»µ”ùâÇ/¾˜«¾¦—zûªËÕø²íã?Þ¶ùÀa‚‹"C<bCŒ4¥7ÝÊ0{¦K§1¦ñAFŸ¯L. ŸiÞ¾8åÚ{oVäzM”*Œ‚Üþg ¹Ðq‘+ŽrkÁ¹¶+)6Òçc–Ф> &Ÿãb3s/ «øïߎÉ*ŽVڳú wDWƒdÿ_cá«!PÙ%BÔyPÇv¼«ÆI,ÈÇ, ²Á bÚ#˜_Õ`îÁBç›Ý¶/˃¹ÌÞÛÐp!&m³G·ÐÿˆY4Ò±žX_ZZø Õf»á«]ógY¥ݸ!³s¦‚¼U @Ö*À ÖœöoVpdÙ‘Û©\8ⵇWßúè1)"Ÿ=Ijxª³Ê³ìì J²ÀÙ3˜Iº½{yíÿ`Wo½¼¾ê®Ÿ_O¡D6òšøµ½)ˆÂèiâ×¶>š&5ŒzÒ½s³‰F ’Ý<¥à’n¾³ˆà ]TïÕŠ )vÒ åÒ˜Ôš›x+Çí ë“RarÇ~ÌÚ¨yç`(2±?²Nðý«Ä;ô3ýüE•~‰îFË‚)‡\ˮ瀎틇Áœæû‹ %ˆZ8“ö TEË­±×)Ä~"³ÃîèJ•Æö°KhÍM(1«4öƒEŒ·÷*ч9x KŸpêZGÝIˆÛ(àà —ÔB”íxGQ·é¶âQ'Hßâ¨ûX™Û#ð‰Ÿ¿aîêËÏ_ÜÊG><ØÜÌC­ÂóôŽ0Ï ϰçF‰5´¶fÃq:^–NJEà³Æ™Êv=–3Xt³ØrpOÏ›}¬?Ý_]½}è­6ÿ_ê®ï¹­[9ÿ+š¼Ü—ðX`w±n&O}éL;½Óö±ŒLѱjYTE)¹é_ßI‹‰C8tïLÆId›”SôUý`Înëñ ¸ÎØþ|~NâĪװ|ΤVz`ôíÞŒçmh´*ƒË1?Œ ÐL_³»?»)+6 ‰Añ]¦k/kZÎ E0 ú`ó¶ì º37g’K‡Ž%Ó!i°Ua ™¸Î Õ“ñÞ7¤?·©r—(øëI‰'¦tnÓÛ®ku‰J’ÂT¾Vw"Œž¥ ‘ì´é,3€«Œ×3ÌK`3[°>¬õí>¯7/‹çÕr­oöçaH¶mÙ‡éQ¶Ž¦lh3Šû¸\iÚΨáõC߸PE})A_—ÅÞôf cAuÂÞ8¯S7°£¶&ä Í^ͽ{2ÂìA&˜Ð)7Tœ¹u% Ì\ŽÁ}€côIõ´éý ºQe5h6ruë-Ëk¢Ùý§0ËÛ:*NF¸8·²£¸úƒ›Ä'Ĺ]ªè&Òw†+\‰#›øë¿ýWà?ÇåΆŒm Îõè&Ä^ˆâ¡w]š˜U¯û.*Äô`A'<ÄëìG!ÕàrKeÔ^¯àQU t!Df±&[d;{«6GPÕ¦fÂÁ4å1=aÿýÁ^?Ú‹³¾O$u byŽ!üº=‹¯w«ª+Ñ“Å_`¥$"§ ªÿÌÁ]&°n7³¾/:üÆ›tùf¶ó†IÉý¤o¢éq-G=¶b¸îZE‡MVyJØ2ǽl­Itwë÷“Q­¼J”Ħä^Žf¤Cœ”Á…±EМ›Ú¤pÓTÞÆq`Ÿ˜]mábËø±Ù-¢ØÝ´?ŠÝî]OŠ8ŸÉ•8Q›mGDË©™ªc‘ô¸wãcû 
æVcá}ÿ¶ôvìþž’ÞÖK¬¸Ú­O뻂¬Á˜’,–ÏËnŠ‚2ŒìoyE‰#ªY=q”^dtT=‰O™Ç«nDÕtß”éFÄU;;8ˆÛæôÏžVÏ›ûMDêßׯ_WˇÛû¯ædUÉN~´ØèM§?¨«@‹Œ7a!9Õ×d¹4‰ÄØ“>Vûlý„ØË€·Ê¸Ä‘ âÁfS,H&¹´ê ±.«s·Óƒ6T1͉[Kš\9ÚWëúöéÄz¡ÆŸû}?³Xïk¢[ÝšÖŠnù‚a’ÚÖK‘5S†Òc¶ p‡‚ñ{átk~´ï’|N3nì’ì•IûžýÓ+ó ‰ݪ¢%Ô Æ’ ì$4œ¥¹;Ï£˜tåñ T%ÓýˆàzFQh‹²›le •3ýÑSŒcb®©PHûÝÎk–ô£ýžÕ&ÐT©¯_UÑþçuýr»ùIÅt@Ì6ú‡·Ç·ÓuU¯f¯°U#ôMÕ$B˜¿†¨ â°6~=â…íh',^„–ÂYÕŽ8•S°DÅA¾’OhSMwÇRé´n—tª*å ;oš,9Ì£·ï,y³.¦åàÛèÈy’ÊÃõ›í\9¸ö†ˆÑCfõ C›÷ÊŽgaŠëË躻«×±u”|¿ ¯j‰õ8ØBÒ¶A±)ÉBu°5¤¯^i[%¨nN­Ã ‡’B½B²Ë±¹¨Ð|²P$•N`KèTå‚€»Ø›7Ä ~ñìh ’[Ôßç©Ïþ·bÔ-*Ù;¡š™žJD;O©¬ZÎS ÙÕa0no™5+P˜J¾|«•}ˆ)ƒe¿‡eò2>ó£þ– iÂåSó²Ñ“`â@Γ°¿àŠø Ä}ˆ«Â€+`×a6$ÙT ºÕ©ÌÄA†= \˜ªÆ2ãb ÉaV¯av‡Y¥Ì‰úÊl\ð9öð]Û®¬M€¸?q ”B1Š_F5"pð¦)É!a†^YŹa;âñIK¢ÜKrZÆCpãòoÍ¥6‹@;¡ûF¢†BÎÚ¾ä%ãBëæt«j`°Û*f¯äñZ’‰ä#ulœX0U5z!æ6MØÏØÀ!Ø2Pp8“Iö}çèÙ—8Ý(†z‘™\FŠ‘ÃlOè›æçõ#hÆ(ÒFgeÖÌF¤“ýxÿ-ådŸ\ì¬ÝÉñîœnvìì^q¹z~Yl› êÃø™€×`»eŽL?bRϲÛvâ«)Ôf™{ʱF« a¡‚Äq~T…jÓŒ•©õé¨ùÑÕ$FôÙ$ök6Ù1ÌŸ!’Tšb½˜ì;u×4´”as¨ÈC÷Æ‘ÑcöèTU›ŽÙà t'!Örƒº+W¬_¡*ûòw Ó#ûú ‹Mý­zvv‘r›:R±³à$ÒÓµ)ÍOû{õ¿êFVqœE>/ó5x¨ÙEÕEwb+þ”EêÚàm3†œ¬ˆ]ì¾xYY=vC0PLË(‰¢GÖ‰ó5É…Eorê¡$ñ© o''¯¬$“&ăQÑÕ+ÉÞÛx7({6{ðR?¼W¡Åòójùe¡‡õ´Öh³Ý‡µ¸[=¬~‹¾KÏ;Ê‹8WvGåJìQÆÉ;êXŠî(ìáÚN¬ƒ ¾Jì"DOä¤ΤŠT›ç–Öµ¤f¸A‚` ´P^1 Ù{q,›.Š¡O­W¸¶÷ÐO$G ¸Ëʼ·û§w@ƒCˆ8¾@ó¾ˆsÝ>¯Þÿ-¶Å%Ù©JÌyÚvÊ2&ÝA¿ÝŠÆlßôøöãÃêßô„ÿy½~:;ö¹}YýÓãÝêoÔ#ú‡›X©¸_Ý~拵xÐïa ùŽ(ÎÙ¶8’‰x{›¿Ä§½¹O¥Ç»\Ýÿ¾º;9)ŒEÚ% ~L}ÄþÕöŸr¿Q»ÿãæaýÇêùæåóíãÍ›@†íÛŸ|< ` ¸*=pâ˜l“8€)z°]£éŒo¶z(¶z;0Iž=9Z=#ä­D’Jqxµ«…±îj«àÚ«-´ÌŘ2«Ìq²¤œu´íhì7´üt'Ÿ–áÓrÁ›ÿûÇ´[>µ…OÙ€…q³z‘œè<’[—|úÐÖ˜š@€ÔÙch¨ljðh oXï<5[;Z{Ü+ãDƒ÷kGñ¹yñÅ!Éypx³cO$b|×cËõéeÞݨ;)Br½Æžð¾/‘¢®ŽÿøÛãyÁÑžõt+%­u>I÷µ÷®>ü¬oúÛêåÿüë?ÞŒÈè-½¼Ý¼PSÁé9ˆqˆÛ| ?†ÿüáæëúî Üææ—›Íë2*ЇŸ÷O÷ëÓëˇŸç}ˆßo^W¿îÔ%¾ùå—›Oj’¯Q¿ü0®’Þz4-H–eJä¨1.Kð¡J|1”È‚’ÝqT¶9(´7yüjEX ª}ÝàPÃׂ%B³ƒ 0¥ÐD{HÒ q𵚸¸èŽ& ¶§ÐÛr¨>¦}ëV8ü+¾}çÖ)ëú­¿ÜŒãž‘E€&"™RŸgÔ{\ †\) 9}ZÐ_©ÀO ’Ûþ¢/>R^?¼Y ñ !à.zù1ñɤù½Gg˃Ղ³.€.™™øt'{+Éý[Á‹E¶…ŽŽP0þ È¥R|éÓ 6àGúøé ·^Ò•ñ‰íÍv“üôz™;6úK@Rm·¾Ì¹R%±B‹s„hŠs%‘†C0ͨ&¥Y1`\žE5q’ioŽl&;¼ZIVFËê¡…ŠîÐ>Ç“–F*–ªc¤Î`óÁQáÁA€}¾Ù ï²ç–_ š¹äV|ÕäøËÑ»äáv4‡»ÖëÑ”n0§Â+œ‹>¦;¥úåÐø@íÅ/.µ[‹qÕ—Ãe>¯’€{x³ËÕ§Ší©«éf=B1pÂ&Ñ…)p«Ø8+ƒÇÝêéaýç×V§jp†Œ 3ú-z2!½Œ >·5¡¶U©“'U"ø¼ß`«6„ M±õ5w§ª4“õGâêÁ  
Om}×6Ýýâ»9ƒ9ÅIIsúÊÎFÞ’‘¬6uå¡d†Œ/_cÑ;f=aœc¡=äç%›L4Aêoþ±~þ²¸»¿ýíq½y¹_ÖøÏ ½“†ÁÁAý›:´‰&$Ô [ã©“³PPåU»6œÅVÇ附7qtYŒêC{¾:¶^¡`èI\zw²1˜#\aÇ]‰{±hc=›rŒ­À€9ÍÚO)Ø[&²±RÕ—DS,K! ú|¯%9}+ÊÚ§OÒ4Þ«$ „và8põ¨y†‹ÏG÷RØãÌ4Ëçº'ë±¾®žõ_»Xúï÷Óº{Æ¡âkÑûQŠÖ>gAÆÍr.ذŸ`¹îî vS;Öði¬¸Œ³b™0iÜÅ{²¦GSy¥ÊvóS¢b¨:™‚ÞdŸmRŒH’N¬ɪ‡Ÿ¢Omm÷"\(É†îÆ êbýh¾²qn‡\6ËÏ«»W}™ÙÁão‹¸/°rÓG‹y–=ôÏ™éÍ7C’ëßN;Gm±¹ýúô°Ú,¾õ"üôéu³âÅ—[5ˆÅáÿ÷Ýk»ðüwË)Ä™êÖxG}ŸÑ¬´ÖöÅhBä}žq"}YÉ¢"L²÷¿ ¥(naÄ \»%†§lâqH@àïr{N0©‹Úb*´%" Xôù>Œ»|^[$Yó:¬‡¶èSCð^µ¯Ð½½tN²Å( ÌZ9kïHôo¼i^ÑŸúÖÝÁáÎ/Ýí§ºPDÔ¨ªNꯕví' ‘HÞÕÎZ^Aºý²{´å?¤’]¿w‰÷&?v^êÍ¢;k”9˜d2ðXò]’4dà:';ýU ÌŸ ÄÎI2°–gØ—U-ºrôÚ¢l ìÔe¯„UsªX?C¬ÅCœôáênÿ§Àÿ±wuËqå6úUR¹^IS©\íÍÞå”¶gììØžµì$»Uûî ¶ZꣻùsÈ£IÕÖLÕ”Gr7 ~|8ŸÂ™\ 5'EÎÇÉG1#'…Kd¼ûËØz„ wi­… Aú¨ÂTwq:YôÁb`#|ð¶û=.V3…&‘¨5±Úéè¦÷­„ù—¶gY ÖxÆÈ<Ô¸ÖÜ”ÐãÔòщ6Ï›§~ã2 Ça †pP1ÛQ¾@>ËÀ²á(.D4îí©LoÇò^{g¶e ¨ñJŒðX§ÌqUŒl{¢gŒf¢Æ‰’ªµL\ÈôJšiw’F>„˜_O¬U!P‘Åì‹ðJ&ƒ^B(1XóÎ0ަ·6ùp‹˜ÅGØeŽ`tusU6̼ùN:õyÂËŠ_(Ê‘+bž_ÁïqA ùé‹Ý®£ªw‡o‡6W]ÔÍ ¦0L)ç¨õmâÚKN´¶ñ™gš/¸¼çÕ™ÌàÉ5§nº›Ã!S;o \õ OMÈ· 8½–e|’ÊwC J†wç@ g2øàAƒQÃBæ!t‚WÊûÕ¬¡úšò~ðÅ·Tßoš{É ú~6ÿÊïîÑ€Çéõý(Ã3qqöë¯x4ÎTyå—ÚY=õ>î6ºQBÍqzß0èzTQåLø„]³ä$ˆÙÎ &µ .,!¿•;‚Š; À•‰pÏ[«é€tÇ„(´Pƒ‰‹æ9oxÿ·PBçbâ 7µÖøËýg;çû4èÚ( &¿1\{¶19‘oÐy…U*" @ŠÍÓ+„ÓçÙäT*."krþš4ï¶J%Aå2¶+IŒptmÑ‘™š.¸àoR¸ .õ!ú²¨ ´{¸y9*ü˜3y®l}7Uýc·è+°­\‚é> ¹mqçiLÊÓ¸”»ð^#‹¨?Üý㟿ŒRGsyÌ”DÅŠ^Râ·Nb£,aÝJ.CtécA^£BƸiZcÚ^WŽ…XÞ;qDàYP¢a@ º¤¨Š±AKl;It1ÇŠºÎ¨6t“#aqíÍð‹Ù Þüü^¨>~DãSƒÈºp*ŽŒõXý›?_˜~úÓüû%ÉTJ#ýá‡-0ÃïÕt/ÃátÇÎ7þðç?|ÿç—ŸþôÆÄ§GVß·'?ÎS3¡á°…üÙ¾ú¥™|LÄæ‹TÒÕÄ#SD·5<~HGéÅ­‚YêÍ­¡Õ̇jºY(÷D±•›™ÔÇÇ\&uµ³þUMc6äH [Jèl%œ^™¤3´v‰—•tL5f™¡ù´oAf8¥<óøœ ù §¬r…Š‚µt†5p¹Ïò¯ñÅV©_j¢tt×[Ø«æmFÍÚyi¾µ…Q|jºÀTDMÌ2+®vVÓRÎÌq\s¿­?#K›|&/XMê–Vì£9¨Þo:íÐÓŠ¬(`t›iݸš¤\iIc*ZäXµl#}vºÉjgU6— DõIͪc«°‘ÑÇéFRñÑÕ¸0’væFœM¯ó—xÅ£ÈãV2OUåïæÜþF6ßmß¾öõ1}u£ñÚöí·ŒÒ˜â®zs°ÿ"xê£ Õפ^C“ðâì'X&.³C®lˆ8;Òw½› Rç—è%<Nû·Üglb!Mü¸™‡Z¬ èó¦CŸ;%F^@_ò8$Ú ñt¡(þËN}ùÿäÇØ(ÚSX„•SÏf.4øpÞ;W‘åôŒÆE ÉVŒ­¶VéÄy•€¸·§~~¦C=ŬçÅë¦òŸ:øB•¸Û˜™Â«Ãù'ǧ‡Ã‡»Ãýr8²Ï\¢IÏÀ™þï_ªÃÎÑ3ýß¿ £j®zœ@ §²ß£Îßîÿ~?€:ge = p‰ÎE€©<)IP}„(!kW,"=>Ÿ•í ܦ|”›Ëͱ> fDÙ©-šCb—ßÙêÛÃêrÖêëèpß‚ÅÈ5uc¹b fž)¸ ,ÿí¸œ“©}5Ï™îSÓžÿpö­O®õÃSSc#eœÐT¤ìat1O5·ÖXt 
­¯Ð"dŸ,ÓCIÅÜÖEÈ…ì•€†`n\ØBÞ;]Kàçç,âé^`nJNa +Í‹! e™f_SÔÅj°ÝSÏ”eæ²™@‰‘ö¥æL‚Á§Ãñ[ÞýÝß¹§úûÇ~q‹4v6ÎÝ9B#è’NóWes“xµÔÆÁ®]³eÓä œ/ÃnuW‚ºA=ÃÞ¨‹¤óQ×9΢."YpZòta,üV’üÇ ­ãMP1õt§à¯[À ¿{N!-Ýÿpw¸¿ûë/ïýÐVc¨ÿ‚r÷¦0ÎÅ!n®Ã·÷}Ô\¡í|þÒfÉÅj®`M"¨‘ö¹q—Ç™BÓTb³+/MáUÁ0TÐ]Šr,ÛM•,i×YŠC,§[\pâ_&²[m»C=‰C ó]ÓGÞ’´ë¬—ã.›ÁSHÃkk2Œ®ürõ¤¤¯<¯³‡Üû.L:ôþT؈é\=¦¥¦kÇË-=ÉÎ=ѱª'Ú´ ·% NÂÜ#åùic‡¹ÇJ¿P™B;ѰU9Ñ‘å­hØ^ú3ÏÜÏ`RZ‚îS%çTûeÎèØÜ—?VùãLƒë¥‹0È“¦çbêrÚnIgœå<ÎY†Šq̦þR®Ò¦S]È+6‚³(½ÎyŽõ$A£4.ÀüD±J ìã\ ¤àoZ΀„8”ªÈa¡Ñr–t?ŠèSñ¬¹`ýÚŒiì”ïJL&Æ`òÛ‡ÆÚ T ¢4H¹óLì”:Ïl秩ͯ3ŠÏ[«¨Èò(iðúqYµê7èàzH^˜mÉ6Ÿ[-Ç‹áé¢.Õv•Ï #:(ž@®âaµ³šc³U1¦§Ð¶cSVÞÐÆ|<ùÙL&FÐ×LZ]z^ó¡èeô[íTÜ5u­”—5tÄÐVC×ú}«š9¶Èª¹f®õû®×ÈÙ¤Ô­¬Ûn±ÌްˆsîëB!zÈGX†x2îòRÝ­ ô¥p²þO/Êï>ß>š'x÷Ì÷ªxÝ÷”¶~ëê"':áÎâÏÖoÝt+l)¹Ú ж7’rµ„ Ú·ÖØRï8mi>7¼ÞY•K4ËÛÒZ˜„ÎÑŒÃ&¢ Ó©wÑg`ÓçÈ׈¶1´¡Þ¨íþ&GÒSÃ÷Ï$øW|ïé”ÜöŽúæ¼è¤ÁʆùàbÞ&ûmŽº'è™úŽöÅc ¸d0¤¡6ÙSX"D_a9vRD—SNîÕà÷óÖjÐ…übËõ¨MèR:¸ t a¾“ãNcw^¢‹„¤I ²‰Æ?‹.¯²®¿â¸[®Œª½Å9Qú…»‹™Ì[g‡å½"VÂawž6Á`G²}/Âöìs¬ «Õ€Ï‘¡04¹6Åc«u³ ÂLŒ˜™!’"¨àµ7,vÉ,ÄÃýÓ%Ç"]µÒµãšê'nOT|ã­¨jòðéÕ»‘¯Ål¢3?ÍoEq Èã¡ÔOÐP>u-]Òµœ7V <ÞÔS`gàapa>ðŽy౟(Œ)ˆ¨Ìñ º·jú=·¶V÷ûzõ8 Ý·î›×ÉÓé[÷Õ³±ÉsG‹ ˜6ó4`““A+À ˆËàä#d [Î[«u‹HƒÒÞè`¾[)‹Nj{`×çÙ…rñµïPA9J8xôŽÃ9B€­^Sß:ÖÑGàKõ­c6ŠÔUh6T·£X-ÿa¢2OŸ*X§Û)A,Ä\biµ±.[To®öÞçc˜‰17†Ç¶¬ä8⮬±Þ"}ýíëï¹Í»k?¸‹‡ŸßëÏùùpÇýŸ<ç—=ÜÄÑýè÷œ%¾$·§Àà”¸žƒŠlQ™ó¼§ªÙ­T@,³sXˆÎŽ‹@¥yæÎóΪ€ –¨ìqï”àüçµ¢dê¨í ìÆ'ù”ÆøÝ.ïkyÒÊfxíeàòΞ•¬ &¨ãADž=+¹îi%¬cÄ[e[%óvãBÏ#^ÀH¶C˜¯m¹)í^zÄó‹³ß…Â#^Ú9dË$W[« ý4ŒW°ô0žn&’,Þ±€›‰Â‡Þÿ°ý¼{ùÇ;‹[šz$þqªô=Lµ ·8ÓÖž.\z’=ˆ‰@Þ*‡˜FÌÜýãë·ÿüõ«]°Ï_¿|2©~úòKCF±¯rlÀ:^ä1ŒÉ/ö,ä¦õ@²Üë vÔjk"fçâ°qÊ·p£«‹Fø•:b#“ÄŠi R¹d€„\Ær%M4iÕDøb“}*]„œNÄ^¿ó¦-Tõ¼Ëde®jžat:`°rÉ@^?Pâp›§È= 5IeœØ¿]ôï’£××3ç.`a€¢s˜¨Ýc(;‡![áµÚM™þÝ%`Ñð‹1$/>bû;¨.Ä!Ô³¿ÛM°»`—a“jòp×ÓSâY`uSY·>~¸ÿõûÇ!ì°¢¬0 Ä=á—îYŒ¢­D!Ûïâ|ÍXÄtdûx¬ÉŽaUÑ"ÒcEò«¾Òóv˜ÄãM£ÄðÐdGèM˜ 7N&¸©¬=¹6\ãtaî~ûöã˪k§$eƒÂÕœâìÀÍî:f|’¶Ä”âË»%.¥P@L¸{†Î™xè»VØGlj¨tÛðä¦k=*®°/©ÜïƒqŠj–×f%¤(j¸#‚"º7Šz’ÙœFNg4ØN \ :6V\ÅãJ ÅV¿z¬Ea[F-ÌYÒÈzŽsÊ4ùá7Ó±ëmD'ÁŹ¢Žãý¸˜¢§&»§ŽÏMÃÌ=­©c@r~®ø/|þ!7=òÈǹâ7‰}ýa(û_?¾~¿_™ûÙÃ;“ôÙ*6ÿu7*0rH2EG<1½!³i«> ÒU2Ó QWø|±‡,Ò ‹žA‡e‘oAȰ,r‚ÉÄÐRQ÷jrDz·$ûŒ¹ψ4²-;ˆšáhòö\ŽÀ=üâ^Òd„Æ»QaÉG]„£kb-N¬ô ‹IÊ!f/ÂYîAZµ©]­½ïADêÉ>;çÓ?–Qw0 xñè*𓉀 
x"äS×gQ „ÄýïtèE¨0Ì‘pö»¡dÚxÒ–Ù«#ü½+™·Š<àa©ä>O=ÑéƒY’æ€Ïž¨…ã¾Lµu|{˜xJFQÕn=Æ„æžÔ]ðÀ¨¢›¹f¤vª‰Õ,¯škT´¼oÑiq]–lfµµŠJ²´,pÁ9ÚQ9Œg˜Eä0*½Q:àÅÁááÛúç‡û¦xÈ“w×õHSîP7é‘tUd*F3ðˆ~X8Ô$Âanqª (Xñ’=„²[|¢‰xõ¦x–׿8]wI“ò–nK…òÊíâ7Eóòm]4wêà©XíþpÓÍ™¹/ÿ|÷Ц±ìd®ÂöðF£'è™Bó¬~± {³J$­èÞUa„rCò£<Î2ñdRaK¶`憒‚0o«…S7›ÕÄ ñõ,†´eNSJ½½uÄKcŸ®˜ aÖ6°¸~º©õyãéLHS“©úöØžŒþO×¼‚o_m³ÿÝ8 oJzK…M ¬Ô‘/’ä°°ÌžS-Çq©¥T#ûØ\Î5CE®Y1Çû¿’ÚÌ’Ý| ÜEB¶©q€ù‰%—M,ÙØ…˺‚W(M¥Ñº´ßR’µ©íÐ-ÕÓG¤HåDGêyå«!¹–ü•ôFä9Ò¢6^vÄ~ XG7¾ÛÕǘh°yîÀÓ«E/-ϹµÎš+7ª¦&Íuô[Ô?vM䉉¯Á¾¼5OÒ,÷XÐ,ôMméZzOTvÛ•¤ø>e+V’‘]IËvæ¤B4®U 4„8;»br~lø~™]±-+8Ã×+/Â~hA0®éÿ=`ÓÕ›‘æA¸M/[.©†dfPgPvj/e5Ö!Ôc¯ç1Žzg¿‘zÀïÞºÿåËׇïŸÉpìyÆ÷u„ Pdð­ýl-R–UIóÚ2WÐêÚå Å‚ÁçiuŸd2"«’n-(JlBç:(Ó盘¹}_¦UÒAñÓ8û×#kÃPpŽ•màõÉ”V˜zˆJãÝoEÕE¦ßAçâëIcõ nøÍèÁ¯M˜Š=“àÑEƒõÍ4}ö´(h@eÎo¹R£-ÆÅfßÏâáý¦ËLÊmMÅÅËQ¡šH³ÓÖIÎ!3**åZÇ·ÃUl´õŒn‡Ë`ÃÕ•CÜÔßh;ÿœ(M2ÓÉ©Ž'¿þb Þã †ÚŒtþ/OKF—¬ž©«ERÇ£}7t¦5¯¹"£yEÂ[ré~ñ±k¶ŒÚàËL‘bÌwµ< mhµ‚}pm)‹:®³bs×)‹´eqv„WR^Ç>1J h7Ôì 1s/AœÀàˆv¾©Íc&ÐþôÅ>«$îÓo¹¶ö[R7®OÃûáÚ‚góïŽãµšàúIT7pùINÛØéðX^SÆ€BiŠ|dx%‡!ôt¸°‹MÓ€ÆÜèj+ŒÞ£ H›_kãZ˜²ˆOR~A¶c‰ås…<ííjk5OÈ.,¦HýÞ–Óûù³+)dæ¤5ŒYä †¡–3¸ªö/s$êg;lOÔÀ{ÚÒôCBvnÔkàsAÏëßmÁ¦2€DÜÀ¤ý,kš‘²ú½܈:€´j²·þà îA§ {6lÙ;yæó¼­“7ÿn£ƒ•&ˆNu°ugH呌Ò•g±ÞPëÛ2ݔ÷€I±èŒÕRzÅ€¹æZfC(½L œ÷‚{+õ¥-˜Dé•ÉaÚI!:"3ü¬:.öuÍÝ^ݦlæ8|É'¢É•¼Hìr¼û}q©õÅC\ì¦K…ZcThMLûöœÕêóÎj\ñà=2½Ôkm€àÓNp˱¡É®çܼ¹ð,)…·ñÜj'p#˜"G„Xö±Ž£ýBéà9pç­Õðb.ÀÐs+¡­­ÞóøZJ >ÍM™7©¥¼:Y§³`éÚ穦£~¡gö˜¨¦1 "£â§«’ï ¨®Š½:ÂR—)Â×Å¢sEøŒÂPTŸ%X]‰vD„•Ê'<Úâ :Û*RؽÖãÜÚî=ÿÝJµÇc^$¿AõkŽHÆ7ý#ÂPÀÇ™GôjTä-–‰ªßzšɇáÛ¢aš Ò m¨öÅÌžaó Û„†U.ˆ_É—!’¥èÙ"g“P«}UøGiMà”yo¿–CÇ™™áHü«ÜJG¹ƒBt•få iBEå â$ð\¾&¹Âו$GØQ[tšgäÃÞvTd|ͤEǦ± û×ñ<ÿá|÷NWïá©ñ·­l8LÅÞK^†º!â.Í:S–S!³aÚ™®Ÿºnꦹ¨¥w„$=—SΕxF(§§Å<Üãõ]•“œŸàä è‰'Sž¥j•l Ëo¿þ°›Yzß«ø„ÝÂÜš“ `e‰æìò|rº 1?êüsoñ»Ï÷‡¦$w¿Ü„ŸièùLü${x¹]"° B;]ñbßÈ)ÔÜêMY„¤ÜÉû©Êýºb\¾¼æ,½!tͶhseïÜ0¹ža!¢cDt]†Mš6Î'–c iÅ“€øSÕgáZäRË+É ñ‰mÑN ÈÞf=O}A‹I-Ì€öðáðíÃ÷&’ÌÒ/<ŽëýòËÝáÃqXoƒëìD§]œ0!&Ý:æ·Ë^4"·vz…S“ú„=½Ä,çòÊ—|z÷?Þú> .!½÷S_.g$±0¥˜B°›Š9Bœ³DàeZ5qT‰{›Q”Nm (*É€Jš[ª3Ò†êqÁUFÔÇò¥ÈÎDYÉe•( »Ç®ãã!r¸$6¸?[÷ÉO»{¸ÿüÛ¯ÎÖ󗯿Úçþ~Þ[‰zºp-÷à9Áæ‚°ºûfòx¼£XìÀÉa½©W8¸Ÿe3@!«vŠ´Ý«ýK£BRÞ÷ÚwL÷òVn.vK‡òX~ô¢Øí( 
u^a—&Q³ÕôâÇÿ¶‰ÿÿØ»šåFräü*оø2,‰D"Ñ11§½lØG¬¾ØlŠ=M·$ÊÕ½ý`~?™ÅYA…BÕx7v=Ý”T*d~ù‡ü¤°dw™ð'L¬0ÎT®b7ÌêýCFÝ—ˆ£ß …íiOÏ¡Ïðôâkmû´¬áê…×¶ x˜I!)C‰ sØuÓøPžëæ O{gÖûÝfµxÞüö80=$ìÖÓŠ®wEãè$â8ÐÛŸI¤Z^uË{g™\Κ«1)€èãS䎩bÄå­•uÆÎ.~‚ªÈ0‚Ȱ6óïîæÝn#Ñч)©ÜálÂJ3‚8?Ø(‹ÖS»¾tÝ>¾¼xÏ8`Â};¥µE;=ˆ|“<ø"óz·>ÂYÌ»HV‹ æM<ú9Ò¡Êu„|»Aˆ*¡À—ì}3 ªÞ ï§×YÜ cŸoïÖŸ—/÷õ.+E5(ƒl)£*NH™NîÄ×ÜôiQÃ…õºñ¬f7¡4 5Šë­¶ªâ⬽³í§|ÎÇ÷7¢_Ö«¯ áÿÓVìÙ°©œâDMjvš¢¦[,ªQa«ùœ ÊÑé¢ mW¥®5µÆõÂjH3rF†Dôœ,PLs|@É‘vUâQ? üÜfÕ•Œ©ÆIüÄ5¹Ÿ DB½o†·å•¦HÆ8ðÉ¿ÖÎL]-g}–#eql½Ü%0¡H .™‚Ò°lpt¥MÝNjWŸ¡o‰½5ÑR(Š@gôBèèÔ#«L~…F¢8nn<ˆC%‘7ÙPDæoÁÏS²LÊæøyb_Ò¡xtŒ]d•æXqp;óá┑pÖ›K³äJZœb”8¹NÓèÑ;:{ª¤ne,‚Fœðël 'Ǩ?Ð;Zî’pǼ×O±g¬¶OËÝúýÕKCΊM-ƒ‹ì&ÒÀãØm¹¤;ƒH)ô€CØ᪺‚3v«0CI´Y» `šÝ6>Œ²w´v£ˆ©58HN½2“1ºÇz&iÕéÂìž)ª! sÞÓÜ@ S äÉ õM¥É0—ÖM}ÙïŸîÊnª#¸0º!¤t«ŒeììúU\ÄÂâeª¬N’—käÌܨà’0ÐÊâ;¬±‡âíÆjúÁ‚Dkœ³ÃÇŠèû´¡`ñ%ZGZÔ@‚ ûÉóÜHpJÂð°Á/ì å©&ÍÜ®žw£Ö;Å aT6gó¸8.:­£M=âT†iÀ‹sC£äbÍÛ0°^ì\•<Ê!N|¼tûöŸõîÉmÈÝqÎýŽöéøÂQ¼(²Gš*×ä¶!ëˆæv-Lk陬ĸڔ¼_­t¯Õ„s£¨Œ5 Ö(m“÷ ó8{ªva¡‡ÙáaKú”HB¤ ™}• àÇœPÑbä4ð:D;³E¤¬[F¼s ÎiÌï<¨ :ä­Ã8;7:¹¨‡_b1k ªjÞª‚±KÑâjmèjÏRÉÞý@ºxëþ‘4U€!o-®­âl@ØéfxÌt @Åt”*-ÄuÝÿ‡^¶VQ}¼8‚~ó( ‡Øn?³%)¯€=š^ 7/ެ™¨½-—”µ„9‰äTÊçäÜ-¤|¡+F}ÄájØÈk£Õ mUiNÃF1•\?: ¸4¸Hü ^.¶+_ÓÿUVxÅB ¡†N—q+¤Rûa!rö(W%¶À†Ð @ [çÀJä[ޱb%…²jÒBAõÈ„”U*7!¥myU±NÂ=—¸j”skŽ+ƒÓÑrRRaŸ…xˆCr uØfKæ™ùÐmÉïH¾é!|³bÑÒWƒKòÍB¬§w²¶iÓHDÈLs³ÍiS2bÚ ÁšÑl˨UØ.lŒ3l#›C®0:ôùx²,is …íÁsKYUA9<£} e•×z¼–þmó,N_vŸìçÏò‡ÿ´À¯öÓ×F³ýDŸ>¯µþ´Æ•£Ë*'Ž3…¡°ÚpŽ0ëT9] k=²Õ0«òÎ@–fu÷>¸Êu§€Cž|‡ìí•L÷áÓý‹üÄóµqà?®r3…—ã½2³×9ݨÑkÈq*ÚÛ#_%ìxïon§Ì(ÉXÇ$±µªzaS{XÖÁ>exw6©V„|1×îDž*ÅÐ(á3Î $,ÉHD$†ZN !„«½Š‚×VŠýýs58hØ…m'i8ÚÔ5^ šõÑÛÿ#Uª@è¡PŠç†„å’û]v6 èöªN:ù´söí¾’óÏG¯)‰b¦]ÔE^gø&Ú™´o"Ô‰Îxè‘­hÂD hÔ*³}G€†$Žâ=¦FÃàŒâ9fÞ÷b˜…ƒ óÏuV#êA§TˆÐ+vžèQ *äåi~f0±%í~P¬í´%kÿµü¶\ì„›‡uIåZ¶ñd%DIÔIú£BŒö)ôHT òÚƒÆç×õŸ øé*t ½[^Zuÿûe»_¦z|¯„5Cž3.¾QÔXE.çšËzŸ'ŒvAô)ZOÔ koæ¶>PEhbwÁѨ\Å!oÕ†9çÑνI”Ðývõ5x(ãë%/'oHÔ}ŽqêfÐ^E ¡Žw·éU-ܰDÃÊί}IålX^Šm“Xyzwr°N¯F©¯K:ƒU/c±rÀÔÝ'_‡W:ZL{"O•xX5 @óܦI¼ù‚€Ø’ã°¶ÔUX¸)¤}åñâIùÜ—.»#Ä4Þ{‚œË4£“9!a´½G£*‘·6Æ3Î (™MbPÔfñtÄòxX®¾¹º®z¸;b26u .SÀ>ºìHš*¸@‰€É“ž¨Krß`T`ÝDVåpEr¿ù¼^ýXݯóiWËýò~[,®i«f2s9ÉY¾+Z+×£W´¸`›ÛcõÀe“JBÇ‹HL-õm.p±;ø«B»~–p}·×öõ|­‘PȘ¾nÇ’—kBÍhú¦O®*« 
~Èj`qP4ÖF‡5˜¬ÐL;Ÿ¾NÕ~&¡‹GtyV\ƒ>×ããVN„ª’åÃÅG°¹(1ª‘ -…(9<¢«Gè¹²VÀ ¡`§(h­Þï‘’Èm:‹Ú·x]礜ÃZmÜ5³;+ÄîÇz‡ÉØ)*/e]ØÒß)ÚĨ¢Ú©F4ÜÿþOÞJÑñ<{¯y <¹!Š”·4z4qölkÖš=¤1VyNb"¾'¦w²œš +ŠÂ‡ÍÔC¤W“Ñli ßÂÈ‚‚š b°Ç*Wæ‡B S±m›N9&׊º*Z8œ)ß8ï-B–2 Ÿb¼ÖQëߣM•Ìš¼5Öpd¼B£Õ ¨¹W"*¶ £å9{Æ‹ÕAÇ{vi¶:Ä«Sñ»“«øŠàÓѲZ|MÐÞ™AŒ³â²h ,)Ÿ·VäBÞ¹ÆR¡‡—ûýËóbù¸ÚÜß/w?ûí¶¬ª¢EAäÖY[a&ùMšÁQ’ßx¡ÎæD’J·Î"“ȃÙÑãðàtI– YbˆÑ)òƒÎ“$/oÎ"B‹h+§Hg B€£“ˆèøÏ:%N4©â°«võ®{3uI igÁXðÕ¶F.œ÷» NîÆ ´b„LbÔ)­5$t×WcºŽŠ¬Ä‰LU¼A#Q"fÏþ1Pø¶¸§°§òð|§5z=•ß̯‡¿¤xÅG¤|_nör„9ÚMäþØEA½í´üI>ßï~lÚžËç° ²3£‹]ü'çSò‡ø·èˆ ´5îƒ1Ù<¬·/y¶í¿ŒA¢eÂíp©¤„GˆSŽeåšùI¦ú:87‡1€ÞH¹@¡Ö3Lu¾îÃÁ.¤í»“ex@á­D[šìUÞÛ¼“Ø©»áïÆÃ~3»§ÕíæQ•Õú)´üÆywdÜk/°Pn¤àM@Úe2 BÃ/ ßtó,Šmß2Â7â7ƒ>h™Ýúa»_·½|ЄY‡Q^1¥×§{³ÿñ¾íö$dOŸšü÷ÛûË㩇ø¦#:}Úx»º'õ¿§ûèðMàûR´/¼Ú><-wë?Ë[ï?þó¿üáæÂB¢÷fý5ù#qÛmïï§òÈýýó~¸yØÞà«n~¹y~i °?þܽͯO/û?×ý¥ß–÷/ë_´²7¿üróYDï%œô—¨Æ0 ô¨Mééát‰ÆpŒ!¾(» ³|~fí™–ðƇUxÉro{5ü=œUÅk{N‡I_†±õÒÁô/ÃÞÄyq˜]j°` ”3h“òêL´ß£w´ ÍyÓ0oiDzºGI¤W~b™ t<äÞÈdà„mK®ʤÐÑC=—Ùò .[|2b×GƒøÞ=~zæ¢eÿ¢ž;ÆÎ»7îØë¯iMâØ_ôËÍeïÎ(Ãy”¢é*†&c¼&± 0Í¢°˜g€‡íÓžu$ª"©iº¥°g9•ÓÑ24 ¶läò³h-〘q”…0ŠJvg€VÖx5:¢Ÿi!X‰*>º ר¦½•d›QÑ–óÞÁrn‚UXËþ«Êµ´}Õ1Û‡°ö࿽ >xÇê½Ïö?ê?4tá*™-ÿý®‘ü±µìöÝ4ÅÐ/ÎÜ£ü¿ýÎnÛ‹›,´/y“žqÑbÑ3c}á¡Ï®tlBxŠrvtO÷DÌ™¸îOT,¾¯?}Ùn¿¶™?cñî·¿I]°Ê£8©NsM `C†Mh­šÉ/°†Ñؤ}sÎQ:þQ>–Ãè+ü…wj#)`ÞŒUCŒ`Y˜aV²–/ 5vü시d\ÙMD'ùbiÆ{0È©L…8ѲÝujun¨û7j!ÃheŠ&o9«Q»BltW8×f¼ûÖóqf(œk\ÈÙ&1ÁÖ¦î.é–¿ŸÏÛ:¥$䉔« ‰¤³a rýÄ´ÅôÌÝ^œI oc÷vŸËÝæ7ù±‡Ín·Ý=T‰=!¯ü9!Ô²ä`¹ö»—uNŠBRôaJɶº .ÛYûÌ]érÇ®~—ü>7zÁ’·‘eÝX'²å²ìäøí/@Râˆj²×aâJ*ÑNcù;<éµ9£Û3OùËS+É^;»$;QÄ”«LƒXƃ”­]‘rƘu}kë»6œØ3Ð °zD"›îL8™]Øvéúc¾‹w**7~EúÒ”}¤ ÎØ–ÚM]¬‚ûN88b”’³l½‰`iO§j2&H>5™ü™žîWºMPåÝkCSdŽ*oä22ïËßDìÄÞVÏËå(‚’jbR°&1IÑ»êÄä ²!ÇÕäÅùè-¼ y/üÏ¢÷e®øûî_è°ÑÑ“u'œèÌ´2e‡@]¥‹y,§®Q%–ÊOƒ-cÈ-;;’n¦ë[§  ~mó,ÂÖ‰m–)L7>qйÄEŒäüL(çÿ,Ÿ1[2ÿd Ât·BkOÛ–>Þß>ßW'eÚ¶•9Ù¨cìYk턬£Ü·.3n#Ô4ËÚ„@} à*.B°zÁÙñ7G¢LÂ`±án×¶«ãæmD†Áû›îƒ-l¬~É9 ¦©ŒUu~)øjn‡-Ùˆ'ý®3à45 RØ4ðy¨J¿½ÛqåAŸôtŸŸž­‹òîIOöë0ˆ¼Í޼)ÄbO™¥ÍØMzqµç$Fi7 uMV€¸Üx•¤°Éï@ÈìÆ¡¥&ÀîNÂA ñ6Ó׶ȹ!}MÑÍ_çªín¨$áôõùƒ"ùݹTé¡ûäæÛãí×û×Ožï¼Ú—m  £dîÐHå›Ew:æG 'PIˆÐ§µ£œ¥º;±ç*¼VuAJõtJMÎ-p_‘k‚êÚ[Cð ©Iuô>£!ÕM¼uA“‘ß»­vä <”B‘ÉÍ´˜«z$$ÖOAä,«)9Œ0Äj¤ù!HFZ²Kô ô¹æÀ3?oƒéäR??*`šzæï±9EŒqL7’pN«à š>58 
E”>Ì£?Aéµf ´ ;°—6¿–½zÂaHué$Ô±J«ï³R'ó1”M6!çŒ_ s±Yª°9yÇæè8Çat‘‚“£ƒùÙýHnIA_ ¯ÞD|Žº/Ë–¾==>ÜýZÿýcš:xöç™â1yé,ÖG„ž‰Ši7P¶œ»U"ä4WXeˆ”–RÑVdÃù‹®ð˘ÊwE{GªÍð…õµ# ¶ù‚ÞC5†ÍRýû¤Yá! 5°T§ôc V‡à{Gä,Ÿm‚dÄ!>Ÿ:N3;YÜÉ{rt…”Ïz¸ÿ {ùRºÉ–àòcÏØckHsÌû²=4š´Æ{õuC‚"ÐÚ¨½X \ ’Ï.^)2gÓ.æh#õ›pV òÐÔ•¨¸u¾ÝÈìÞ›Ävd6«>–2µ†*ÉFIŸJ 8ÇLqêÉ0“ Òt0U÷ܼ5>(Ôf¡‰ìRâc²b½s¸-·£Zzžè^É>&&zÆ ŠÚÈ2g{sžP³@ÕdÁ1”›l SéŒõº¢ÊTµ×Žj 75!Ù‚ öCFÕl=MMRnbŽØ&ž¦¾¯ §&Ðëj˜]Ú•†F 8ËGAË: ñQ|œ§˜Lþºpz®gñxI=fˆÖC+:8ßòà^l”hµŒG_W€ctÍ[‹»‰6 fU48¥˜Ê<† ©Îepx¦þÿ…B3`V_;Ih2^ÙUÓP*†ád3ä8K¶Êâ=Îê‘…1¢Ì#P=™¬*Z À'€Ûg™™ …rÙó|ðeŠ‹K¢¿vÛfa‡h[À–ÜùÚ}I0”Rã(=¸«®œ~3Á¿sãj6{¦âà$WdÏJËö”£,àI3#FaáB[þŒ;K}sÂÍ«”Ì)3Uˆ¬Ê!Qº\åçVécU•ƒå†£³Íàp–­ÆCZÄù òq‰>ŶDÚ/_õYÏî¾ßµ%8OS±¹­C%Ÿ,=ˉŦ&¦èZì+2Ì‚Fã’­ò+OÜïCÙͯ±_y4Ú[ë×S[ɼØ&Ï¡kzºÍçqŠ~_ÿe'&Vq;3oͧ©¨¯+û b5"ž(ñY.©×‚ÑpI¼› eïÔkp麽½û~¹˜žfynÄ@¦ó¹ÃCÅUzÖm'å3öÞ3šf8zXôÌãÃUiC±ŸHB¶®}M‹v£ *éqbhÇhƒ†’ùznÌ÷VO–©»R>©=Ï¡ãäxhŠu訷ŀ~FñÏ2Eÿ*œÄ j^/|œXzcYúùãMC–j…²ãùæîöëí÷_öÿi\Äp!¡$”<ÐlöŒBñI\Œ5m䙆šÆqÁBÅwk™/â&e·’¯ˆ16-QmóRšPS ÑXL˜ýæ3Þí­ò·¨i ³¼ÚZxâm;ÅYZÜèwë¡ç;/æ’:uÿ £SÇ Œ5aòM]ä’ò€Ý¶I¤wͬúÉßOßÿ¼1õ¹yø¤”øñëÃÓ__w)º—´U;yº@} F’ó±o™'[F!Žö¶×lšËîÐFoG_Žf*æÞ^€×üè¨#mføë&È^¢kkmW‘£pÕ/t~ƒ^K6›iÓºÃ×±ocˆúãÃ/ú~oõ`?¾ß?Þ~¼ÜÇŸ—¿`QN=èÝ¡¬xóûMZ«¿us§Bm¡Ç"çÂnKk“ÚnCÐy˜j~0Tubz)jµ;Lf:ïx¤Þ”NL^P’øxmÅö.m߉é)³W,@ì³=> üäùAUÃ+Ô ÆjWs;d9Ëð@•›r![[Ùp‚ÌÀÝx1§Î2p牠µ»kpÜÆY&#8»®aóJµC2áuG6‘ä:i¨ª .%×?‘¦ÞˆÏó“}P§fÀ.×G€ëX6I¶ñÒèV! •[…üâ ÅXZ+T)¥ÂåkKä³cV'«Y;‡Ö¼DoWG¯qXr2N*. 
êïûúÐ6{Jáhˆ×˜:x Û‘àRßjpñïWƒKž¿‘\¹ânÇ_¨à/ÆìèÕiÊ»Á•¿Qý|³|ý„¡ÍàÁJר{l‘=ÅàFä€ ô,›$Á¨wÏhÝ…ÎUßõ¸ßýÏÍ·§O]–:rV˜Á¨&Œ%a"ðٴЊ$33èk33qÃZY:¼Cˆ”zvB’ ÿ„0| ÄÚ[À³*dðŸuXv{˜©ÈØä³é¾ãÉ*nuBç‘Zv¬Øf¥èñí´g¬.Czlö»Ž¹A¾¥Z¾9QÛ‘bÑu6…d,³ ·Êq}²¾©I©ð¾[UXÍ7/¨ö)àß¼x×À^¡Î1„41?0f%£ž^™£È[Ãf,j§’9»ùñH‡ ëm&Kj؆CÁ©ò’ðHÏ\nÁj?¥4ã~ºýùãóqº°~¯U?ªTÜÞ|üùõÓãýEÁp ‚áxa—˸ªÄ¡,s¡³u¦ìβAÔœo Ÿ0 9eÁw•}ëGJ±U6ò«‰_ÿçØmphž}ÝV< 7ìzP³'ÕH‡Û×Þ]¥R¶øûH I«ÕôëSj0×Ô3´¹#×þn×|O=+Y ù(£×>Aõµ‹ž*«±/•gÙÉó¹°ÕÑ*î}q Ꟙ޸uëgdÝvp¶´OBƒ»&`Ë,âè¡)ô¸íõqìþ«Ñ€rR£Ö»mA,_fP*J å½·…¦ÀAXbjY ®Ò!jÈú0"ž\°UT0yšÓôžgóüýæùá¯3å"Ùþ±B.P$”äÂç.‰e¦ER¬ Ò²w\ÔóDŸdÀÅõ‚:f!ˆ T¯´9³•€û_ ϾÜÿøþp÷|óéö^iwó|{³ú¸O$0艋Ø|a*‹D ¡°’Õ¨s~åŠ.SÂPÌÄ€E™a„ìα²M^¼:ñÚ"óR_×è8ŠSÿL)315õ2^ìÅ‘ìm©Ï‰À˜•a„!¦ ÎÅVd™â7ªD¨:}u™=ÍÌèØbÕc@rRøxòáqñ^¶ß]„  ¡$è Ÿ8’hJ*m6£\[@|O¾ Y ì„e‹MëqY¯7ÌËÿ¾ÿøùééÏy7ŒY›„*ÄåòHÎ=5‰³sŽ_É5å‚Ñ—ñ¸ÚfMKâÀ1vËŠ=ˆ:ÀD-]ýã\ê BÅø>sÕE²X‘%–®…¤Ö>ÐECaÔl’zu–rÊ›gaË`ÖÁ†õ†"P–Œ誃“¤ÀGê1L²C/8èo2T—õ7Y 8—l'|I&ÔuÉÇ£UV.©sÊô&…ùæ!ù,Ý{|úe¤ÜKÌ`#+ qzÉHQ@lPW’ É–B®h4CBdŸl¯Ÿt8KB€ºj]¢zwÄ.N›<|ˆ×0Æ—»»êÿýê;œ—ޏ@Šâ°J:€JÒ ŸñXÑgŠtD½%ã®9çºÒºRš6†‡)Ä4­“a_L[U6Ùt›¨C—„"”o WÄŠ²u0+rL¹LpïÒ•…!¢tx¤ŠÈn4Ñu¶æþûºIóøŸîþì“ʸ.iñ®XaVªD, JDÊÅ+V´š®H XÇW“ ú¨.òˆ ˆ:T¨ak#ÙzðÚQãùéñþdÐìþ‡ßêo<XËAû¯»ÚÚì¬è8uq€¡ÂéµÝ3EGRÎé]SoJ¢Ô-*ÈÉsƒìè…©¾OëÆîzb(¼Æu)æ9ÙP–c;Ã¥$J>—²ÀòBŸ)²zz×Ú²ùw6qD6€zÚè¼b çÄÍ7ЪêîùÃ~ÝÕﯲñûQ8~?fdÕ¼ýùýáÇ/›Wøt·£×°CÙÞèÉ£ÔHM€X’ l/ïšr“ xƒ…q›0%„àxSWŠ{z0?øÓýÿÝþ|ü±ú‹3Ĩ‹µÜ{¡"~ø¬·»¢Å$![Ùõ‚€êª3#ò;ƒ’ºÖì H³$\7Ñ–upØ/ÑS…ƒã^fý\ôpS~ÏÎ+‰fˆ¾¶98Ü`¹*²"U°>ôÑw ET/Ù:Øqp$©u ÚP’ýÅñaß=øúq—Lø\Ñg\¬)"”eÂJ>}A&Œh.B]Qe fD3Tߺo©ZïÄu7Ù# ùžù-еɶˆŒ&R½Ÿ¹¡²¿¸»Tb‰Áê#^îñßöPwó~€ËëiÊéSˆ¼$µ \<ár|ÆXDÈûìM°©(ëˆa° j&!Žë=×꽈ÍmNzÏȱ(Ù¦­ã¹*´^ßIˆ^BX«ma&H÷HÄÝ#üüõ>ÁYNת̮»ôâÄø+Þ¸—ÉÞUgóÙÄ¥m‘%çÇ™î¨]¤äF´ÆònöIRMÂp2©šh]ÆTpïÔϤÃöFˆT¨†Ë3KÌÙRG M°¥ì¥wuŽmú½Uj égm×c쩼_áðfºap~q^=N¸Êz ªÁ•–c4ÄY¦&°øÊSãÉö›¨ëÅ6D}·k¯¯ öE '†åŠ€#”öwû8ut·«Zrã7î“‚³,Vo?8ì*¤s*á„-3>3sõ^2­qœÙÕ\´>ÆòE›ò;ýVG« ¤Å9f„&Ý” ÞWÒÍÃ<ÌÉ©S³vHså‹ô ýû’…O‡,ÂK³ŸK5ÝŸžý…ûsõ‰yêGõ’ÃpÝ˜Í éw žï Kc 艢3ZÄwåA“‹DÔ<ñ¬OLçÙ*΋PQäiÒP œ³@Þ±8’gŠ­â'¡¡"|špìYèm8&„_P¢y"— wPM²,aù Ÿku$ʉˆ õÃKvÕÍ»K•‡0ú Q»˜>ܵóèl³Íº#erø2 ¤y7:¹€7¨oÉíðÂ.QÅ{‘ÏsôÜvŸH7þÎ Ý@ñEâS ~â«?«H¹,j%œiD»ž%5i`þ˜ÏˆÜ"5„ºýQ„ÑÛϪ W÷ú³ßïÿ5F/« 
Þ¬ÐKY? ¼‹ú)ë:V"šB”KbбIAÀ‘^k##pëÀü(1¤EM1\zÕX«zb:سk¬Ï}²9XvÙõÓž]ãD^8 — 7ø‘€f™18A*–Ðø©dxúiü ŒÁœiªgh²;пàF–tüvG~‰à~T7²»=zçÊ•ï¸)~ùvÿõóÍ÷Ü<>\ý‘Ú–s(¦·Ý²®0;€žð(iÖ#˜ZïÀ*ñL³:;qMõ]…Õ© ‹V>„1Ãêô©)ù6´tÇn;$XáèYç§|¯t-J†Îýøùv‡[ru÷þÃÍÝÕÿ¾\¿¾Úê#Bo»Ï¨ÂZCÏÖ¯Þ =Øb¼W¬ó¬\5 “K±låz¯@1鱉®œ•„8ÅÊq±aŽˆMVnû‡¬œÜV®ñÙe‹W\w·zmñgÚ’¤+l•°gCŸÙ;ï8Lä^ÝKgžÅÉâ J…Å©Ç+¦‘*©ì†ëJSLN•”"Ŧ hüëY†L.n`q‘—¦÷íæþáöÁº­~½{ü¬¡Û·?¯¯>ÑÇo>]_ `ºÂDWò!}¸‚pý)†O7r#Ü–Zºs©%GöiìÖdìY&›ìáÖíó~™M³OÓ Ñ„Ñ¢më‰rGùVÔ¥8ßjÚÇð¨"ugXºžŽàØ'¨LÜ‚jdÐ$/V¹ M nÁm³¨ƒ§x0¸ßrcô¤^k£{›V.ç’m• CEÄji±ì‹ç@ã¹ꛇ,úúÕjèBФÀSˆ>8г×Y¥ˆ]<ìy bÊt<ãŽ1€¡|Tˆ±„ê¡ïêóðs«·)óÄ´°‘t ƒõW Ñ /ª5pLÒ‚.lÒ,ˆ¶·6h¾Xk¾DKŒ* mÉKìÍó[Ï«W«eHî ü‹/ÉÒ—¨3ò„167x[6”¡óNÒÎSÜyµÀSÈ_.W 6n3|gž—¨.µ¢…¤ÞÃKÑÑÛJeFSVB™20Áû{§éÍþÔŽhôñ7CrIÐ%ø÷P‰´@ >T](./ȳ8¯„2E%l•нK—V‰§-“ÆÚ›^å»K{C¾÷¯ƒ|ï'tƒŒüÔWéÆ~×ö¬nDŸ›Ú]‰g’nE6éF@ãŽ#º\Ï6*5ë}ä¿ý]V%dqzZu7ˆóÅX#<¹”שÂA*“TBÔûPhS ‰*…!•ÀÐQF€ A¹Áí Öê¯ïÞß~>3¢y}ý꣫u&úÁLuB‡Ú}Q[ÔUä´å °)Ú"KJàÚ. k5âÓ–H ­ˆ Fí ¹½™dP²m5 ÆbŒb6ÈXIb†-%lQƒ¨šÃ8c ÷]C1lÓ’³QežÈ—¯ÖS3Ãc/ A° #¤Šlˆ³¤"è²üŒ+1ÍP}jHQˆ/­"{2“dÛßÎcc¬±‹%~ÙO]¼Xý‡ýú/¿ï¶Žf2rê¶%¯upO¼ÔgÕ)»/¶’È}K4*çxi}ßS²´Ú*y š;¾Ïù|óýþözÚ/w… „š;„¸x‡ ä‡ÎW2™r‰ÈâÑq¼´‡ >¬]uÀzM«µ\€Lr›>¯&¤÷TÕB ‹j¢rÌs­5IM,Û€ti5ñ=EO}{§Ù¼¸Ô10ÿ<{»orîtáx÷NŤÊq÷õú»^fÌ9äµ%Fç¸Î©”†mMœ óÚò,¯IÚ$n ûÁ0:Æ¡E¯žµdƒfñNÆšTÛA¿$ô M/)ÏÕó`á(|x~³ª~&.†»Õ²JÎAÔËh`˜A¿ÂÇ®†¦¾tLžü Ø}=êý·Û§±˜÷o;ÃEÉž·qÇPE›Rˆ…:ƒI,dAÖW"™.úEc jš‚xdUX¿"v ‚Þ„¢Ùy ¿^_Cò ¥m?¤•€R«Ëd9°–ʬŒRovˆ-*áC°ö•0N»ay¶øèÇ;_¿Üª’¨D~¨Âê£çFG{#£"‹Ñ)Tµ¾qÑkø'?{ä5Bšrû«f xnS2?ƒC*‚Øsûƒ›(øïâ6ŒMÕ…·QL3Mh'ú¡©Lrz-Á¥u‚ #ÍLF¬‘Ð œš­‚Â⣄š{Äbì²BÏ5¼V"™â#ô©½†ëM>ÂWÂ@}…K]ŒN³vh^é¬Ä§ùqÆWßî¿èçÞaj\ƒŠx½GºŠkÄF BQC$Û] i ‰¦´”¤IEÀ!ŽÌÕéWøúLã¥æ99ÌNœûÃ]bš/1ð_6„²²¦°fŠTÒð.—¦d5COô¡5FÒ·^O"`à i`œB£mI¥)MÕ <- ía-Ñžw~q‘SEòIêΟª½x6Œ\½YE±!ì—Cÿ#ó ÙÑ˪Ÿ)dyhNŒ^FÕto¥Ð£ÖƒN=Åjó^9º¾‰k>ž¸f:>^ö†ÈCP*oáBgÒ^–³YÂêmÊ×4âðb¶výC×^Ÿ”ÕOQº´&@OÛœ×{‰B܆]F+h kt„¤¨šd‹ ÏïUS_ÔgrÁé­Ðà©%%Ö¨Õ ZðœM¢#Q ÞÚ×zꨞZkErGc‰EÊ^<ä±&¯VsnÑ-ô/a˹ zëRŽœ›øÔ1Sò ˆKzµãvÔzxuwûð}7\ð\Qî Å0£ ¬WPÅ&›9õ…µ‰2gd5#³µ\bh|H’^2 (ú‘zš5‹æ+aÜ¡–w!%P)”ÇS)9ï Ùº½¹ÏZøêÕj˜lÑ»üqýÁˆIh @¿â©9™õXKE!n ùhÀò¯@‰÷~»{TóðËù½ãŠopu€n¿wßvUÙÆ``¤~¤Zˆ‚í¥´¢xÏ,×ȶº¦‚”á;RãÄ©2ÙÀN*rxb =÷9Èo m²,Èì°ÍÐÕ=‡è‡ x[Š•œŸ@_Q¬ÈÂÖâ€Å 
¹0—èÈ»x̱âŽ9V¤…aåBŽfËóÐÛ«i.è=-–^œ´ 7þð¾ FñӇ`s#ìTɶ=:’WÖ$b7æ1mr»Fn}4e˜®TGôêK©pÄ$ňK!— ¯¤4…˜.-้ Ëà4¾ c… â± ñ˜€ÇŸÉÛ»_$ú~ûûïx£€o»¥_a Ô3¿&šr©ì˜‰í“Ø,ÓÜé…7\‘ Ey*š&a–3ò Ÿ)!”ñ'`ðpqÓ|ÕoØ"„ÙãG¾ ¡lm€‘R.„BdSYêBI,ò4’Øs¾áä‰&£‹Ä¡eˆó½­wVe¥M“ßüìËóÿ$øtµ÷q Ø<ïù'H~¨l¤GØ3\îb0\¹)CB›æpM5X’@UÒšÊI+ÇìêJB“’VëWGiò¸Dá}¦WÕ©M’Ö§:þK«'Á:­y^P¢4ÕãbÇ^êyA‡<Ä©3EC½–±DT¶€¥%øË€¤˜9â@{9lÖälÑõ ¾¦Fß5À«Q"…Z§!ƒçë­|,½—è~À“¯ÑgîׯVS£×Ç ƒqSJ8ãà¸g„Ò³Dää$ 7WBms…­kâCÅV Ú¼/gGêׯVÕ\±YÜèâçÖÝH°!QÆç“jáápq†nS^‰Ô™–-ŸÞ¬æÔ ,‰!9l:6H!º8rAák† )IÄ…Ùº@—eŒ8õ Õ–h^óvSÑÇ `’9©z'„ôÓ)y^í¸µ…åž½¤äyÄY!pG_T¸Âjñ›á–a–år0µÄ "53>Š9Ø¥}íí¢™‘‚™ÿ€}U¡ÅNЋÓ¨÷;ñL¯ŒhZŸÂ܃©&xËz‘…¾<ˆf t5/¢YvÛÀjBïÇ<µlÞNöº÷23WK Ÿ™{Š33ó˜ª2sG¡¡ÚuKŸ>M=ÿ±) ™_ïmXÜ!\”žàe~å‰ÏŸèCüðéúC¸át6]Êè̹É‘}EÐ8±3 ÂQ’QQNsͪI !\ÅE$ƒ[(ºæ%¶\ n†o6ý÷D©É5ï6&Ã1#n`ÌI½nbI?ÅùÓýû‡ï÷×ßï;¹œ}Hð¶[ävŠ]ë)YŒrž¦p9gÅ4Í[S+Tu« K;*²í¿È•ÍQ³P—ÊæøÔ5:bê9¼ZÍ“^êò¦ž[E,ÄaƒPH ‰/ í0²Ö0x7ÿÔÌr‡TpXEè9ÚPÖé‰×†Ã=¤ÀÀÉèvûÅÚ„6/LRÝ ‰¡&L"Hʼn:•_®¸¸Д(I O@¯ó4T)ǪHBÔûÅ%lügM»ê<<ì ç­àt{¥(î cÁžF2ªfø*À´ñÖµˆæMšZ£¦ö•ý˜"—ïÎÊJ SFÍU_ƒíOµ¾u9 G…\3÷Ä)Hè§åƒ¿ £ýæÂ!ñ sEk۶ܱ¬Ù‰„•Pf¬ñêSß‚¯a½rGCþ8nQ6&OÇé²­o{ö Þx“Cýi¬–{Vàe$àž2[ûà ¦aù 1¶¥ÓƧkPíAï=ýGc‹%œ^«&ÍÇz_Â"hWÛîdùÞ¿þùåêðcmÉ Ÿ©’:ïxh¹ÊBœ¾Tký4CÉWˆgž›Óãf ÂÍ¥8SB!fQ%V²˜âæp!LM$‰¦ hø5Cš€¾C8@1Î«Ž¿öLó:–i!ͤ¦|¹Ü.¡­¬D2¥c™„}[ÜIzË;òÁ·ðÁÆe+D|ù–å.½ílVBHgª%aW¹žZ¶ ú p¦ô*_Égžæ…C®H÷€9Ó=ʵ¢V¢˜â„UMÉ ´“è¿CF÷tM7:M„£lšìåP}@a6ßI>E‰WXcO¶GÆËc˜}|RH]PÇ',P¡¦ÿ¤Xž®%y0Ìg¡L²Aㆄ¦]ýÇHI‡l7x!-E8]~ùØz[¢—öi”õïrÖ°,unƒµot‹sIm|K©ÁU£¸äp«ÏüÔÀº­KÁáôé a½ •¥¹#³'MMFá¿çÈsÚ²‚)•ðŽi¸CÉŠzÜÜØÕJ|3–‚,I¢MSWEÕ©0l °õTºŠyŸ›¿œJ׃òdÃk':x2ÈYݸÄê©ô =ˉg0P䱪¡Ö»”ø¿SUÎ Ø}®™~¬rEþˆÐ²³—+©LñiO°Õ’Ø”u¢ì$m¿›zKùŒãQ8qîn Ô8 §˜a#΄uÛ$;¿É$»-DJ“£ãòÚõ²ÏÝîwØžÿ¸­!‚~CªBçŠÎšWޱ8¼QNóÆ´T‚IÍв@iºU…s“"+©LZ¶=}ŸêPk€ ¸9„’· ¸ˆv°o$̰o¤ d?.ä*xŒC!V NÄìôùê]ÊÜžpñNÓT|I¾±úŽ1ò ·¨òæùý`K×u ¯ b¾Í5ìL_/šÛWKiúÉÕþÓiCB1-†õîËÍ5ôTR9qÌϺÿÄŒ!!}j¶ˆ­Éˆ±.`ع@ÏÔX` ŽR3dÛ‹UðŸB AYRpC2Ár<ÎTeJúrú²’Ù,Nð„šn !+ÊHÛŸCàLUŸ@¼‡ÜÒ;u9PK¿üß«ÓôÕ·¯w·×·ófôÚ_Ä©½—+ázç`‰Èä³K«AÍÐ}ꤿ?5i iÆ#N†´„©§+éбáÎã6¼‡.‰»$Mºë‘q…±Û¤FÎÒCÄ3E9`qzSV'ôÖ r5Õîö ú‰RÏöU`±ÑÃq¢¿TK¥Wû‚¢iE)°Ž/>;í¸qÎE£«7«!ú nñØ£ªŽí|únO½ ›ÖaöbÄ FˆDLþ9È1FTÕaþçŸ_Ž«0þuF"ÖN 
ÏÕ`žXßýªoúûÍ÷wÿõßÿù¦çâ„Ë:ÔhK?°‡k¸vÃG0ˆÄ¿½}óùëǃê»7¿½yx¼6õz÷ëÓ£ÿýÛã÷w¿þÄ'üóýÝãÍßw…àÍ®jýŽP ÂÖ%˜ž#y|óÛoo>©™?šp{»¥š?‘žLÆ‹“E 5]~ÖÐün{Gãçý\øCý$ÅS™ŠØËÛ /K»–™ \¹‰£óQuE )w¥8P»âJ±‘žÒ•cÈÐä2®LS$㎗¾sb¤Íï ×3wŽ^³>¸^Þ9^`±\}Çíìß¡ ~*tt¬ê# T݆†®gšz89ÄK’(=ÅÇ`ØE¯Š ƒ%˜4Ç«X/Ûk ÇbÀ‡ ³Ï/SÁü«eb¤uõqýƒÅG}gÀÿûßÊÚcZ r ønˆý {8dÍ#x†“€Z†IZˆŠ+Oª6¾}¶š¸sÌâ¡Þ¬" Мd Ó‹’ôúK²„ßÀ ª8}¾ðq‹twdçy;9Õâ¤c·J!qù´5—C„âi äj«7«âvÇ…-{¬À©:¶óׯ==z¿uÊgb„ãÛWÏÁ*•¯rŽn_œ—úq‚Ÿ‘úo’^1~ºæëÈ××Ï ¿7ü^nf§xCO²Jå"ãP*7ÉëØœVGôrò1ð°× Õw ,6ë–*î˜ä5,yÄ,:ÓêÍjîNÆÂáý¥E¿uÌ¿cé>ò:vêfýÓÌïI¯ÃS½Ntî^çhðÐ0ÉŒÿáé¿þ,Ämó9³žcåmÈ¥>åÇCìRöã·7›z0’´n‡ÆžÃ㬖r‚jâ¤A)³wçQ÷ï+”¯^¬Âa’Å ¸ú¢Å$]‚˜IчMähqèÐû³L3WóbŠð`}„MGþ*´ù«¾ßºòNˆšÝSßorFe½æ4Ÿr4ôaºø~a¦;|žô»îK\]Q~àÑp [ÞÖWìÂh°±Äöªz¥ ×Õô>)ÕàéÔvxÝneïøòñ=íÝ[¼µ3G ضå]¸xÅñŠŽ÷EH1R¹,|\4å8òqh=Êî¾×á¿Ï_žïMÈß^n~Õe“êÝPÀ×B-â)Ú‘úpæåÔ c“ -'`Á\Ú"ÂrÌRŠÏdÒbS] ˆ(m ±2º­9‰ù<_´äÁ[HP¨hïÁ,ŽZŸ7X ymç…ÍZSj-“þyX„I¢ü)ëL>ö&Wé ˆ±1Q5p Ç=Æ1}&y¹uËÒš…†ŠJ¾ÔGKi“!åXgBê‘¥Mv휇nmŸ]ê‡Ó&1çúHÐOPcضD´uƒWìÝEr0*˜öwmÙE;¯íÌ ?Zfa7ëàËÍÝý—¯ ½]ÃÊ‘PLŽ[æ2§.H7À$æ²ëÇÉR‚‹xè»N¢ARjAObÌyÀ31u@ãôÒÀ„²1“žÓMB9ãš(ŸÔMÒÕóÕª DtC“ §`1RµÂý½aö~ò¬êðO:ƒ2: º§WЕ}Ö.² >Â7·,°É¢x©XóŠU‹+š½6›I¥Ħ×vÙoŒ±âýp¾l“s†=}H)^…jnYpÛÎ-ë¥HýÇÙ3º)š‘e¾¸Â#bo¿kŠûpøÍ²¡e!ðP8=P*,ÅSO1 Q©öÄÔ M“þ9ÑÝ×pÍÆ"eRYL"é¦öÖêÀæ`Ãøá‡Ò·Xšô•Ýu&Jò¾3¨ºPb5¦.ƒ Ú$®(â Ž"·Œyõ0ækš@‚Ë@‡óbvÉ¥yô¬©díú¾´µJvÔ|1ehû˲‰þíºù3V±@ÛbÈ|Øì¶ö͆PÚÖÁ1 ÷‘4d˜i _›uOK‹Ìö´c“(küQøÑoï>ÙÖ9„§¹ÖPÀvö­ó†P ùÇð£}ëå6PuiØ8;l7f{¹‹ÝU¿‡ÿÜÿ.Ù³£ßíù·i®Ý_ìøýK²; À«åͽDZϿ~ùãaçG¾˜yÜduóðñ§ÓÎ2uûF>û›dûOßö É§Ì !¦ÁAȯVhO°'´\¶3‹C ‘×v·FWÙݪ @˜·#^=ú«Ñì~á!Û³4[YEskz+G°›9Uç`Uéíºñî^?ƱÁ½ÝyF0-™ìèqë2‚•ß¦É °ø:ÒÊ~ýür ʨ‘—aòÊoŸ÷­\Î+¿þJ'+Ma€²_PÉjxÊ.e¥¼®î¾•øwHàË÷:9 t|·p’ÜõòleøÓ,dÝ·SÑù3òs'¼ù‹Ž´zÄq•²Ë¨†ÑF5>Æó™uI}žƒrŸ;çZî3$·=ÉÂ^ùŒ×©n!ÉòxÃ|˜ÖR’,ƒË0gG!0B;Ì™´k ±£¤ÍWûaPÍ&µ¿OZÄ9s Õổ3ç'ì—V3ýÀ^ =²`½#V¡¸dÙ#œãÁ•äè2ý6'vàÈ–CWÀ€þý Gެÿ>ÿNða¶t{Kw<–p¤é}f )¬"éƒBfÌÐ2æ‹TQÔ†¶DŸž'úRòï4‹‹ôF»D¸Xž5ËZq\M9Ó'Á¢¡àcâ–œeúæÏX•éSóÄ(Íš©÷ÄLAÛùàöÆe0¬yÏç·¡Š>QÅ™%žÀš3´“ÉMúW’¨åNí݇.þã>kÿþðòõÞnÎðí&SrsžÉ¿ÉÂÛŽ\š`ãWiÄÇþd@š.ˆÄv‚l@t{·“ç²Éï/· ›µ] bÃ4ÌIJÀb(ÑFÔ(·¦«ÏÈçÀ›®ÔÌóEÜõvh•`×D(¹ –™Œ:\|¦wæ –xƒ=ì=µõ.ALé¶Õn|í 
åZp%<»¹×t«•º° ÿÀö›@¼ÚwÊ¿Ëø—éâ™èˆëHÚü°ožñ&‹u³ñ{þüyóuQ[ì´§ØxÁ q6b«”þ±œ†õKu*c2F712“X ÷%BµœòV2S×I‹Hï†ÌI~Zߟ]lY¢d;Áíøqñi×éê8¼õkcò4£™?ÿr·Zu‹s³a†ÍL ’@Ö»·ýn èÓ&‹¤ODÐè;šÒW%t©0x%ƒR ’dбE׎è:^-x j[\»àh×/õûØCvçƒÄvô°Y?ŽLVV‘›¯û&<ÏüÇ‹´£¥(ç1±ÏŽlX~@ÿye»%ìCß­3ÄÈi‡tÁ÷%};eµ_ù8šqô§õzÙf…÷Yrw>‰·í Äš«!û­lÝ©Ž Ô‹!¼þ^Üȉ Tr+̺ÌÄ;°—+2v ˆ7h~§p@²0öt Hµ‡T¤½m¿*±`ëÂN¨ÄgŸ–,TÒƒú5ÿŒ7‘Yö,rI}Rý>˾p\"%YA‡o•—*HˆÏ•÷Rs=dw‹÷,™cšòÇûÜ·Yy'Ñ9Þˆ$l^–­ÖSÒƒ…~Éféß]8£¥PÎè®—HI¿tk”Dî¼k"=v [’üíDIóAÓª+b^a‚E.‰ ë$­õ‘SXFF´‡¬D:) ²ÖcžcˆŠØñ/rG$%ÉhgãwDª¬_¨Kû… jƒJDÐxƒÖÖ†oàèu½|yœOŸŸ§³Ï•§0yÕ“}ÏDÜôŽ÷?6 õçH‚î Ý#µ—”pèzkPŠå )™õ„­Ž…2íÊ‚Tìràÿ˜»ÚµÆr}+ûìï%mI¶%ÍÝd3Å65@õ<³W¿rÄ'>ö±SÕº» bl}ë}3‘é ˆ%¯ 3g‘ŸÑ—X$øjÔ&?‘ê‡~]ø'ÓWìU8 "r+Õ‰ c|G’òrŸ1Ø@ÍT7cœm@PgJž: ð‡§¿ì¾Ý=nkî7ûÎióg5U,°þì{x„¢Hâ+i­$åÒ»ä¢[—¤+£j= ‰\·ìL%Ëž‰cLàì*“f–=?cE üz¼¶yçÉu£ƒþ ”Ø €=hp¥øp6M†5Scã ÏYE«³ëE«-‚¢{€@Ôÿáý4ƒÿ’4u𣻇íýcr¡MA}gðƒŠH=Íui–:?cÌBÒsŦÛÊäÆ ¹”P©¨ÈJÙÏix& s ¿šÆë<ïýÍnûº}xj£öB»RÔaÿ¸uù­rGmÔÂHHÕ«•[>KnïÀAIÓªUЪ„‰IZv´È˜˜‰fÈ$L HèôœÏ¸ågô ˜UmÚÔ“ª£¸î®iϨUÂl 0…D®þXög÷ÔWX»~a㜽êú$––AÝâÓŸµ«ÇÎ1 bÏè»~ÓäæÐë·ÄóèŒM1ï5ò:sÔæánûr÷ò¶î¸?;ü÷WÖ‹¶8˪°(ŒWíýŠ›Î;•Lµ“ÒP¤pÛ{¹FC<.¦‘†zUg 7U0wFê`ú|q¾ÄE –«r=Ê"½& ̱\ËbÈnU×=u‚c阠ò©Í›×.—[·QÞø`µÙ~M­íI&?{À–¹äí“JÃRGŒÙ’Tæ̪æÃRÙ}= !±¸¤É0&~ò‚dêÔH(Ô¼Lõì- €²kà±s–‹f¥-Ÿ”j‹¦ª0„SQ‡™}ŠvÆ;œºþw//GAæøí‹Gì‰Û»l>´ÑDšk·¥gÉ-ªã49Úšc ã8Ëžî%(k–=agâË~جø²”ÙÛN¨œœì¿gGtæY–2Åk[vpa²i7ͅÂýç¹&ûù‘½÷—ÃD îúÝê€>.6òƒ ÔYu£‚¨_§nžAS¤]Ð8½žû¸ýq&Ñúy{ÿÚ¸¦ÂLYC™‚e‰…§µG:âP.7 Ñ^)[$¯¥7±žéÕ}!TT!ô”T)Õ«|?iŠDQ¨¢– 2Õ÷×}±¸š nÌš+›'æ“ýõìŒN¯Ÿ «Úª«ë¯¢€ö /{4£âºªû—M÷8²“äŒÈ$ĵœÔ7«ì –Æd2Q ¸[~c?.±óœ‡”ùÅ‚„²OCïÖ‚'óî5váL¬é>x/ªælêø8õ+n­<Ö£¹zœÝ°Ç±Ø g\¼î +/Ñ£]?|µné¬ÎIÐLÆ*È„Y›˜ á¦(-•ô‘g¨uÐ)*ãLáGõã‡m`£©bìóTpÌÔy+vž>Ó¶œÌ2’©žÐTi{ÁS.Ÿ(áûÇT$¶èêåõù?›rÅáà4j3• Ÿ4kØ[›fUKÜCí)6ϧ´HÿÂ0x‹èױި[“°V‚`H³EªaB±˜bÅ9$ 6·/öN²#VÀŠ†à¡±¸Z»£ ¼©ws{ð{õQ‘ù•„\@ä‹T‚ö ƒFq÷kìøïc÷¦^Žü’¥ö“Y"pöؿ۴G]eûú7Öùêg¼JW GHssý ²Ò¼D0Xå:¿‘®L@F©ù M•¹ºßðgo>Ä3ÄoX‰B¾d”Ñí8’µ¤xmÇ&§Þ{ýE-9»––syÞBxhá„K ýŠ6Ð0p1ÏÕ·=’u¥–8²†6Œ ÔŸgfÝÏw?îw–ö½ÖÄœ}ç´´‚=YW‰S^;ŸÇ6HOB—owÛ‡×o¨zIrã|ƒnÜUÑè½gÒªã,üò{Mˉ$3×P" ÔÇÿ…K^0û]‡LÿG`uN%ŸþÏÏèëO‘ù?mBgñ yU´¤ˆ(¡LÌdj-Ó›ýñãçÃÃÍá«m˜)ˆçkbê”5ò:Q{žŒW‘IÕïðÌáǧvÛX¾¿qRûU‡Žÿ €½=cv»u´Ä–zí*vE%Oìš™rëwxÜ4@z• 
ÞªºéÁkÝÖz,—ª>„1d Š7ã꼞«Ž‡ô±rAGMùƠәѓòbiT)&²CÖ0{ŽíÔznàÝ]fðϪ4­"US@˜ÑÖ h¹ Íß•; ïé¯ï7Çoùãôo}¦¶)npseïg Xð¬ŽYp>©üv÷ÖÍ—~üuóãÙžâžc9ÛVh=Cì}Ý+*pr´½OAÕw.>µ]ÕqcMé5¦4_íè(cB!¹!2éŒ)ÍÙ=vâ 7D~H_Vbù4Ŧ¬dÌ…ë˜R ‰O›ƒ°ö×9*Õì7ã_%>¨Xß˰ÛVb&<ŠfHHfÏ#¡ókÞ?ÌŽè È|43Ü"˜hÊ-ë^åUÐÁìΡ™ ¥Îa°—‚‘+³W¨cKÁnÙêµ®Þµ‡sõ:{' $VàâN€… >†2轎¸ã¢•J&m¸û–nµ.ÁJqù¹}¥Ý(Ç“&ûÏâ{ÃIIpf’«j8)@ NE§åÕìLRCJ¥@ƒRn¾›Ò7om1C¸º¯ˆ~2Ò¤Œ¸„ô`Þ1P,5 àX|‡e‹{ÁÓðéÜ‹@yUËŽ`FÜn ùêKÛÿs4Íoëï/ï}ضäÞ‚Ô™†]éSeeYŠ)ÑY®&c6´Èl`16†ÄÀâëq¿s=î'(cø9§9|x~F_Ž™ê/W7æô‰ýdB€hšÃPªÄZpN]edÐ yÑÄ`ÐVîc/´=ç•«)1Z§Ü c~ذ Ä_V*ܨâCbtë°6L{]p†Á1Ä€4´x³›^½!H¸,X·âÀáp'/[q-óÈ¥3ÆŒ³¸´£Nà ³Cú*8̪5mAV/Ü‚·îÝ”Ð-á,`œÚ(3C~­žä@F7?¶ßïš^¼ùgúï©Àéµ{>PÚ_´¨Î…à¯ZkYVCcÕZvÅ–¦ÜcO±%U)‡0€/y Cr{òf¥Š–`‚¥zßÇc±Âò.žA³ÌóÙˆã =VÜoìM 5â ®šùžžºï] Yˆ'+¬/šÛcQ°V/ZÚ ×úMS)ÖòŽÒsÙ|d’xr×ò3ú‚´'wý !Ðì1³È¥äÏìBš<òŹc›=a‰£’ˆq%õÒ eªN=ÌVjèŒ%¥:H q®QŸ]}¤ð~x}öËNÿy}*˜ù[§Ï99¼Ù,bœY’M.§!Üá_ûîßìCïžožvš¥½9™i[…]Ž&8òÊ]µ¿ß“݃™< Õ[·ôÆ rdÚo<­Ÿ-IûëË{‰•½˜öÅ6*í‡}.t’ög‡ôyð=ÅX[ÄX»‰ ¬Ct2?éôåœ3yp€ËØGLcÙÁt‘+~y¼™šiy¢ï¡ Ëeì¦h«å¹€³2ÊŒÀF=D`ÄPëé“{‡ž¿hF"…"<ßQ̈ oÌ}"ùÇ|¾ì¾ž¾sýõÍÈd–Á¤<ÖBOÿ ~!§\Èw"#͇_Ôý¸|¿EtVi ȺÏ;*v€*¦Œ[\leŸÎ7o/@äk·kv"Ó618¤7¨÷Kü£9†jSAŠ„\™4ÜpTï(œ äg¬ 5WÚF?œ °®?l“ÙG$w¡=œôij˜|èf³Ê*D™q+ëgõŒD$ºJÏàg¤™béAý•a¾DpÇ@íoûHïÈ“´ûv·ûóÆôñãÉ®õËÍîùöæyïµõ÷®ë§jëM?Ú1Ä4Ø3¢°0CD:.÷´›SWÝâôAjƒC&_) e’zŠì§:4_™ÏÏèkU„¡Í1 0ˆ³ÁæMq¾w&Í[ä-\É;yhànØ|lÁÜi¸ÎªÞ¢KZFª~„«H锸À(óÇŠLÌöñ'uû³œjüØîþL°þoR§Z{¼{}¾ß5Ò'+I¿b8‰Ï“ÙËöÅÄ1EßÜ«/ÌQÞa™0Ñx=^öqƒ½“ªw@*Ñüd’ÁòC–#»Ãê×ÿNèó Õ¤É9Œ°:wE`¯8â¯ëdIõi==%JE7Kî³hÛ4BëÌ`uVÙi}+Ê*e{ft¨,³ÓH׿ø9×÷i6éf·m›7e‚ ZH¿«¬²ýºNKpÝD#(6Kn` Š%C_Vò€bÕÐ(OžÅ4&@çA$œl©ç‡ôû„ æ¹ÉØ×nÞ‚÷`F<(§ŒÈ¿Ëûß=Üßµ¿{ îÂéڿNüþ ´_î0Îþ…ö+xŠD¡Œ¥ÃC}-/ÛìµiëI“ýïVì‹ÞÂʼnéŠ-EžiÑ?Þ¸ÝRwòêdYȾÀ’—a¹2ñ ŠÙÁ}êæGôÙqŽÑr¹&;ΉŒÖ¾FÐc!h—Ys W±#°Œ=I˜æ’G2S¡qÂú—*lÓ œé˜í9ÞÛ…³÷¶ÙÙ;|z±=¾[à·/Öº)‹>c4p]} ììY!K£MâZá–IüBãw™¸×´„÷WÏ{¢X÷AØ×}D,Qíeá!âFYTN\D~DwGØo”ÚØö†v3 r…Mçn&öß7»ÿgæxûðãévûóõéÅ~‰»ª9÷×Ö™Ž8Õò3ÄÙ®ó>/;¤Ç;„M ÄÙ}u÷ Ó݃¥/ÊE„e *ñ̨ÓP …ȋگÈCÀ~Î[™‰ºœ”¢ˆH&Ô_2ÉðáþÌú%·w{³ÛÞüýç÷Û‡6ø,D”§ÚmìaŠ(”p†µ»/làú‚X0켃šåÞSU 7Ç–Ž¢b¶•<`ð._^ÈŽè éýF蛌6©xÄUë®ý°¼xÍ/yè/ßþ´ßçÓÿ½±»‘FG.¼óšô¼s Ò y‹t$°‡~I`#áÔR×Ã.€SÕêSìý£}Hg¢ZbŸÑÓíxFç¢Rh‚¸ðÔí#ÄÏǸ‚PƸò 9矱–DÑ<`ª¥fkÎ*Ô´Ia]¦Øn 
ÒÄ)‡™€HKá¥öŸg¿ÞÓ!N S›—»ÝÏçû×ÿlö7÷ûöáä3šì»Ž35ȳwSTD¥‰TÙÓ )ÑUžÜ"²8ˈGW‰äÏk”¯Eß™³QÝ,XÊög20~W碥ߴ ~óØ½º/Åï™Gðö´9'¡Éè\0s .4yõWÈš¼<Í71ä¨dbRÞîE…×* žtq‹ê/öPu8ÜaŸ?š©~»SzÉ{Üã©Chö,Me/gš!·ß_žïvOÏ·µ®þÒ™5¤VUâ§ò¨taÈæÝ‘´µà³Xð”K}ÝÀRº‡,æ´4|ýa¨ë¢s ä‹)ãQ’ƒZ¾clž3f‡¬YRH€M^&1FʺÜ1Dœßú¥bmßîXº,\£”Ác”2xXéw0s3o;™—¤LN“j×b@?dð&Ü´AþWbž{²'yÿÏòù»ä›ÿÂ-³±î¨tæ=¬,;rr&1G±™2óªR‰ª¤ê£F@c½XÉ¥~r&ÓQ J|ôy_"?£'¯‰DZ­ZÉÑE]W:aÏóa•´ˆÈoº7—sŸyì¬ÆeÔ1ØO¿<Û®»‰×•½[u t_d±@,òÌô戌Ÿ·ó7æû…[÷øÓr‘#H0 fb\+JRY ÛK¦TIc·Õ\Í&RÕb«–r…ì·“* ˆ8ð9þE~FŸÉ¦Ô¡j±ØHVÁ/à| <Óܾíç…‚r¼Œêˆ†V½iæÐrÞóVã¬æ¼ÙH«4Ga|‰6Þ‚@q3‡¾Ý=<~š¢}~ú_;)}a÷ÍîÓóݧ—ýÚ]uA¤éÃf”гXĶʰ{ìX>0o#ÄÑ7sn6*áÒÞ[›ÖÁj§û)Ô`µÓU· ì!“2©ÖÎ3bJ³âR~ÈŠâ"jšAJlc²#ýdÈý½£Ä´¶Ý¶lûÀêù¥¸diÓÐXŸÜèM¸ßÌü½%A0ú¸ê–„ K~ã<©Îÿ ¬Õ-Œí1 ëCSÅók£Eƒ²NôŠâƒ(â-Ÿ3³°w¹¾äëçïhëö~ßÇ«¼Œ…Ÿ2 Be„ ?³åŒI¤÷ŒÈ,ò;Ì ÆEö-Ùb‘¼éI‘Ð%}$‹U¦ŽÉO"H­%ç´¥¯àBd¶ô ¬ ÉÌ$$PíªL'–òT)wMÊE´âLŒc¨NÈ~âD=”—]³3þygŸþÓÜè÷×·MŽÿ23¿½Ý¾n?Aä8‹T²<žù¤ÞÐÃÆ)7Fiê-ïá¶Hj½Ê¤q©š¥W“2ûÈ.¢Txaªêµ~ŸC‘Ð+Ø¢TZ«d6åç)F~H_U <6 H¡"‡užy6qO¤˜_2gJÎ*)o-‡Á­ëE„ _Ńg7~çüTTûM–øh*¼UûMÑ«TûM$”}û‡`Æô›Èâ‘ÀN\ûñŒ>ÏÎÎ>Û<;‰®¬é|ÆßJý&S¼)õ°´:}ß|Ù„ÂÈ}ó¯æœ&©½56ƒœïàïTe$»¸n]Ï…zÜ(³‘ ŽiOÀ×É<ÑI)+A·f²ݺq’è s¦ðüŒNtoKl¹ ƈ ˜ó]5±H€~rB`ªóü½½òcpTY˜À0–Ès!5çõÔ<•’öYµz²XlÕ¤¥`36!@Ð~8ºæ êãv÷ÍždÚs{ŽºÙ#G}|%™ç‡vnÎ ®_ ì¸wUÍ@h†|X;YÚ"¶qq`º!%øZÇ1¸:ñ‘– z&¤!q bðÝøT5?£3ôÛFEɧÝöUã+éâNÎä;ÕR hÊ÷hÞ<^cV”eñ¦£þQÑVËsV­A#IXM¢¦lXÚ†¬áÚÐ$‡?|㾸=ÄÜ»‡§Ÿ·o_YfÈ«ãCÄÎÓ:5Š‘zøvÐN‹ÐFà’Ô¤uÑ~·b{0óm>¾>éÄÿ‹æ;–QI2áŒõfÁõ$Ïé‹È Ù’±&®u]©Žgì!§ŒÅ,ÑÜæô³Ýé—×Çnþ”¼Fº‘/WÇë0kªÀ;Œ «¶IÉO_Ò0-ɯ=$HØ*#¿èg]¬šàêvgÕ j×~]†lqàŒy`ñÀŒðKçuªûïv÷òÒÆgåÂÛ"ç8®ª£{ à­èì]+ÐÚ:z›ÜF%S鎤$ª3¹1¡OÔ¼±Ç"œH&¥1#¹˜~ ùÞ]~F_2T¤°Ú´¼nU2åM³‡q£pð¥a\ ßÝÑ5ªê I4¡¿ªÞn{Îè5‚î`×ÍpÊ ›Ž¨ÑnÝTnêowÛ‡×ocÒ£ˆN-Æ]×aÑÓà 脽'h^®H¿ý¨tÇ4FêjÕªGŠŠµöƒÉâ Môñ·ba--ñÎþ¡šèìΉ$$j‚lЉ yM¥Ú>b2˜Æ^y"P0±¦ýýXdˆÑÅ¡&iøºÃ»%8«‹¼Äñ:ìë)s#Á|› øËæF¬ÿúùôºí¦+€ÿ'ïê~¹‘ü¿"äòiøQ,}¹` »{ÈÞáA ËòX‰GòZöLfû߯ª%[-‰R³I¶œàfËr³YUüÕëã °²ð&+䤎 CmÑUM‰S«æ(Ò(m:³G‹4vâð¦[Âá$¶iê$(Eƶ‹•ÛkdβюG/vAú°Ã¡‡Å3çØ—Žw42!`ç-0^þØÛP)‹ä4Üœä*!Õ KöÙEbÈ2n¹K^¿Ìô‘3:^ÜHñÀÓ—w«OˆÎ/ôzF§©<°z*Âï`1Ã2FK„^iSzÿ›F±Š©< Ôv+Ø©õ:1<³¥[ô©’Ë£¥¯[I{ÝéÚ‹äÀ¸ŸköIæA™é ®äê€aôÀá ™8çc¹<ÌxÖ…„0ê†-LÒh2§óï€Óç$c™-¶p0…`º¶žðFăßþîgÇo­½ÖBtÜ ¥ÙŒÝÙ RÎ 
]ÔÂÜóÑ)=¢{ˆqÊtôó¯n¥=ÒúËr¶¾c4µHp2݈qrd•“¯¯øÓߨbѤ4à§WŒ^ßÏdýaµzØ×_ü¡œèù÷‚˜WÖ8Ï>œØ ‹ùÍëghŽtŠ™X%r¼í6sZ§°ž¥¼Ùl|XNk7_ËÛn±ýq>›/>ÍoöñHpPùviï Ûm²X³Jú̶Õçùãèénº½ÒcÒl~ÿé"Úmš¡¦iTÒ݈ ä ?BåŒÐÐÒ,Uk‰ñËõtÖðîýâvz¿žŸ2)ð_GËç|ø^ݾob …!ËÊÕ„N¡°Z\ÿóB!ÇèÕng_?<®Ø?Zo Œ-t¥s1ýµ!²ÇlÓÙlë ~„†a3¯6d„ãäÖ #¤bç|¹BâmÑý¶ðÏBÞ÷_ ‰S–2.b$åH&6—ã”ï…Sž­0ÝS¾¤lÔjm+¤ŒD:/ŒQ`/€Q!b 7L`mª­¾ Hy§ß¤æïÿ(¶ðæ€_K­B ë[­1ˆÝ  q(ë-Z0ƒ.f^^®qÑ}½÷£a!í0@ž3’þTÖx,†4êi&hi.Ÿi¨ Õœ‰]ú¶v–ˆjÚ;°†5çppX“QÃQXcÄBBŠB7XÍßÍö» ¿[ÉÏãݯ=ÜÎØvñ³Ùø·ÜÒõ‹ñ‚¤o¯s¦¶}Uá}Ú(gÍï˜ò*'½ÑjTF—[S¡ôPÐ.y¼îŒv¤io,yØí×îÒNŸ‡ ߤ«TJ‡0<·¢v-¿N|¾5n=ήõít~±©ÈUµ©²Þ©…=äàrVÖËž³·Ø\ ®$h+0¾úu©+ùháMšl,¼’ûfþp¿úÒ·‘.а„`dGCx©KqC¢ÅÅ<UÙ—÷3_x|¹™ÝÍû]d+4ù¬ê¶ øùYóÖ”ÒAõspòfÝtëø­Rm÷­(ßÛÑÊEû+µhY媛_›¡ŸBèa¬¼$YI˜‡àlµ¶§²ª`hd³+A.(˜¹ˆÞVµ(SK.¸ÐôML– c­Ë/WÜŠ–؈•ógbQAËŽTX%*Xyœ…&ù†üBzé”)pVº ft’‹'O`¨ÔÕ*Ým&ö…„€>ß´={¾·íG9ív–æƒ:åùOlXïd[Âñ5xPÉÆ|ÐfF›L§:ß(-­!1 Æbzt=Ÿ=ÎÓ.öM·ü0žÍŸ½NÇXÓÏéÌ_¸}G¨)ôö óW>ç'Ö@&g2*)@Rrh]ŒL:™”%ðÚ˜Nd |l:ÃcÚé˜EÚÚY2)MÞiÀ #“s442ÏöO ™””cú`ãñ1W1>Fp‘kÆãÓyÐÊñÅŒ߬f¿2…o?Œ?¿ÿçQ ŒýUŠIék·^Öë ¨”¾öÀ¸d´59íºbºüÂP|ŸD`ÒÁBîx]ÀŠÐ`0™í ‹£Þ\»­%!“tŸCè.‹LfàÊþ #°Äl`1ªŸ3˜œç7¬‡O/ØƧ¤*ΗNﯽf÷É*E¾2å­ÚÂ$ëeÉž˜”·ê94²Æ–êß¿O¢~<Årk.ÛwÈ' -ð5úlž ëŸ`ª·FúùaS‘ôAnC¥H½ôá×èÃ…{Ulû¬_NËQ¨]^çñqõØ ³ê7f™”¢ÜÏoòÉܩɑŒ£œŽÈü™˜ 5ÚJ’f?zâ´¼ñ°-¨=ëiX ¡K¡£ š‰v3Þí°NŒ“O²Ûzd>V9FV­ñ…ÎsExËÒÕá¬ÎWFUíÓà’dª¢& ±ÓÜ“‰òåwŠÞi;zùòèþýÇ¿|ÿ—?]þ.S}ýýoÍþGÿB?>0Û®F›&[ ó¿—ÓÇ/?þç4:œEïi5úü¸xš7¼{^_ÉÖ–óÆÈ5àr5b™þmôUc?¬˜³‹õhv¿Z³È~ÙA˜ˆÓ$Á.›+†òˆ—ìáš²?ogß²$†F6’øyºxâ×ñ I]Ï÷Ûš˜Aicü7üùÓã—EƒökfçxËÞñ‚ÏþQõfêÿÁXdvõÜ”­5Èÿ¿Q"°]j½sNåí}zšÚ‚VK•“…"A—Õ«) ]>mÏzµ0!£<ðþ1Ôô›QÃÄú8õ ÿŠ@cbE®`‚?L‘«0C[K†l®ù²á§Î(´¬óYlqÂ&‡!œvF쉎›Ï…\¾œ³Zš}cˆå-ï6–„pšQ€¤ò3Í&©Å´ s²ÍÉ“–ÎøÅ\sé\“q>\ÓÔ1ǪÙ8…Ø}[kgilS2OÜ;S“mÀgX‹›«û™ŒLȈ)É[öȬÀ Þ·sÁðö^§ßÕêIƽÊ-Óøáùú~±¾Û‹óÂ?áîî8Ñ“õb^|;kñýŒÎüwÖâ§bJ•0Jjâ3"ܤ4«TùÕ[z52°S©é)™È] ¥•f¦·¶–„R’"r×wa”ÒAŒRìÆ[ :i›`»ê‘z}ý éÑïšÿÿÄ‚ôô‡¶Ò´±רÈ>£bÑ.O° ß‚ó]·Â÷ÛN|gÏ’ÙÜß*üÝÎR>"]Zá^áû­;u¤ð­Óžî* Ó¾Sáú>¿5ô¾²Þö¼2Ê_¹¥éÁöWôù Ÿ×òè|€2 ZÂ8< gý9ô%S®Ó‚¾œ`·­¹™ž‘ëOo³d9m±¶øÌß´µÎJ¬ dó»û5€ó&­ ¸ŠT(Ö¤&=! 
@ƒ f“à|V“ÒK㟳š(Ö#g·±4›X#[êg“v1-f m“ #6|pÆ€ÿÿá9 Áæ·Ï,ÂOíó¹ó—7Ÿ>¹ˆçì±ÜsN^|Ï9yñ¡ÊaN?7é§„ ÅÖ~z ŽjºgkuÆdµsQǹµ³4"ï¥c¥ïRÁ9Re 5|óN¡#ÆÂ{Ì4AwfÑ@EkŸù.aïtìÙÅ!b*…ž¦Rñ ´ÐЉ–Q¡]üg!«Kô Ë»œtVßÌ ¡¸ …NoBÁK²¨°ûF­q壣¼Û;Kƒ,vÄcõ‚,R¼|d…óì7d´±„V¦kUˆ»o5óX)¼Pml ¦!;6c~cÖñÇŇ& 𠘯ókc$œáÌÍÆÿ¸^.Ãd|® þ¾m£Ð—¢ààï{4ùè(ÈŸØÝ:P {˜Þ7ñ Ó!èP Hé9Õ`AúÜ@w²‘ÃN@2AGƒa­­¥’D…šÃ€Tθ¼ùÐ^ºvañµ‹>|säƒÑ |óºøf­eU·v–Ê6’¦z}3ÞxW¢G,(;¸!o)¦G¬Ò¸D¯Ööo­GÎd/~ùàcê+ª“„õÛZ3nZJ×?§\@Ú~XU„QÁæ(%J‰¹Q|Ñb\r èåBÝ;6Z¯ÓÞY"Hñ«ûè™å¥UQ&¨ä,R­"fFhézw6űö®é‘+å/aö¾v<Ü«.~ýáÕ-m„ ƒtêQ9k¶=p¥T\ÊYô,•Ëô‹Ù0¬L»hv3cž]àK&ey o¬·ÝEâþÍçOŸqSÁ¦ž Nz=\Ù³Kzs‚€,ø¡H ›œ9‹FÉœPå^z~¿>9²ÔqòÁƒê¼¸³&:g±µ³$,o¥$s»‡®À6Ð>«åšE/ÃFŠÙÖ£ ­v8lg^HZog hÔñ–k¯[K³”FDJ•’Ä·ÛÉÐЋ¿¥˜žiwx0MR ¡æ°«/q‡ñ8¸_̦ëýƈ‹å^f½™4É:¿>ß?Ý,\ßú뙣c» ûVÏä¯ÜR(lí÷7žòW.Ò$ ¢mÕÐ×s.Z>£ éà:¨WN•úÎ7H}¸»§B?»¾E¸ž…ñ/¿ÜÞ¬#ŽA_Ã)oÕ¶k`µ6ÅÉ¡iË,ÙÒÞ{hç€msxfÅf­‚K†™·HŽY}ZŽw_y·ÿãøã4îéº@…I1‰ë¶“a4–'Ã$.iÍÒ2‘ÁXss)¶°¨maã%6¹€ß%ôi¼Ä«Û kЦ££ÍÝ@ó=d92/ÂGÏéÈ _ Z†Îy´'ú.iõÇè»´%¤³Œ<¨Kx!btçt!¨Û kRwνnœwM>ÙÕ·ßwÜìŒÔè™_p9ý8ç#+¯;žÝ/˜R|ò$éc‡4OkÎðϤóãÛÓ½ŒDA5f×~µ¼-n®`³@S¸E;ßÌìûýnžÛZÓ3,ð¤Èdùƒ¯Èj„Î0ç,ÒO¯-+§×÷si,÷Ãjõp䊨Λ–sW–Ý^vÝ…“‹ùÍëg^y€ÀhÃò%4Ï{€$ ÷úôÍìëVMÌqomækyÙÑbÛo6_|šßìûÒ7ŽÐK1$ûßD±ÝÙö)‹õhÉ@È`8=ÝM—£WzLšÍï?ÞNPÚ£8P‘мɃ„“hÍÓ DÇSP˜Í|ÄÖÏ7õNbô;èwòBPg5k«"žÀãaâ ¥P5Ø ÷.^|œ~`»…rט·Ç*þÆ;› ÓÓãóÊ›C ÕfI½p*i70È2‘Adyà^X%i°a]³yrP)å­.êž<žœd½hL‘ßá¶“Ôª¢¹›ìã£ú=ÌŒ=9’oö8{ÆçÕ -[,½¦ÄèÓ¼!/†boè` I5ÞHáMªó@¼OdýÀ¹Ô ?²öX?=~éEiÉF?Cjò@eº’²îŸˆ-\¥zêÊêdMMh†Ýع> ­îÖ„Õ„mZTP…‘)„ÐC²ÉîBÙ™ ƒg,3êøB«Ù2jã´Žg,’©:IÀ¤(Cc|HV‡‰‡~@î¡6C¦aÑfïdP×d“Äj~Í>:ˆõ»MzÅÕ+-¯^®ºçÑ.–,‡òI/˜e[8Ÿ=Ý(‹tN·6@Ö©¾ÉP$­ˆÍ,Y†Ø„ÜöLÈ+}£‘-Öfã=KƒÖ} Ùiå}ááv~øÝŒ¿1hf”R×x®›UY«¤AÖ|šÒ=•!áå4Ï=»E&0‹ÑAv›5÷h½—.wH÷M¥øø~5ûU¦ïîM';îÁ˜ŽæxÆ=AÒg¢ÍåôÈ@Élrd{¢ù 䬈䠥¼ )a†sÝHî0VöÓ"^$çs ˆß§’wIN©Fç†O@p‘6 £4H–èùÔ‹XÒ“ÌmÄtH _N1Þi‹S¦ÂƒÇ!àܱqÎ,Äßœï¨zÕ|o—ê3»›Ï~ó¹X±¬¯û¹1ùœés¯LNDfµ°ëƒy21+B9K“\«w&fÆ’3#'v”ÅKv”«ƒäÎH“IÛÈ+œg¯Üàó(XGEJ§šï|¼Ã89S¾“®ÜðÝ O†d¶9Hˆ­ÞÖé€Ö Þ§Ÿº3`e9½¿[­eÊàlÅ[ü²Ë¨Im§£Ö €)²À½±98£–bQõ=A»"k‚µuž°¬ÙrîŒnË1‰ÙÝ;’UBk©‘3Ô'¸Ý)/)'8¸ÁáÚ²ŠÂµe§æ «ËУÄC’Õí)T%ž…$'™í™ßŠ˜í‚kPrô€p}7ŸÞ?Ý¥!í6“óI™€³Y]ß‘Ýë5éž8»ÙýYˆTý ’]^$k  3_Ñ»ØHñÖf+!$(­B|¬qd‚mˆÆ%˜KD`!zù§êâ¢I#nÒaqwbO³íÙ 
ìñ¶~0'Æòr7‰žƒf6Múºƒë¯í‘gëÇñzña)¦þ´—á)É7§YAÞ‘²Eˆ¸E¾ýø„ ºÊJ.,"_-“s#/À¾gg¨WiçU·Éé]´SàŽTUÞZI„} 5Ø@èŠÎì¶}Ì€*ÍÝñt †Q¬BºÚP˜ªwwI9† CÏ,Ãbì8Ád§@Kš&“30ƒõÁ)Þ—w¥šA—éóÍâiü°º_Ìó~1\måó¡•ISFÙLž= @[ ”iW’ER‚ë,ÉÁ iµ-´;ɤ}ìö­E¨*,o­H2ó’!¹Êi5jØ¢š ™#)n²eRF²ÜKO'cq’÷¯MïŒïB¼’·ÖÁHÜ }QnÈtîmïü …·OÞÏß©»Í—º²ì3ž8P9Op·”ëõ`ØÀt}c9Ìø?òÎ¥¹‘ÜHÀE7_L L ³=1'_a‡»{uLPÕb %jE²gÚ¿~E¶X"AP”äu;f£¦Š…LàË+i>}41äÐc?c­RÎvZeèN‹­ÂÚ’nƒàœX=²Aeje‘c>zÐ3¼^ÊzéJ§}^zÕœ8h]kƒ¡Â4›²‚ï$Wò€¹Û{¦8cMÓL î‘Á—:_Þ«+xÅη·l±ó<„£­Ñmxm4jlÖ¢çʬ bV1ç[Fl¼wÒ¡JMöS5µüC­Á¢ÍfðAó……[SG¬k8ÑÞ„¦F¡D_='úmÛñÞRÈ¿x¸¶<ÔW›¯f›üæ7=•Gü¶~ ‚Ó_)<+ëdC̸­ HîãýjÍòQ¦û[PŒå9ÂüAvº8‹š¯5?Š4Vw¤-³"œ•×ö‚=´\9Æx/gsjeÈ¡¢ÃëÁÈì¶É^maœTT7˼­Ál&m,à£SÿZA€/¯ËoËÕ⫼È1ãrò²™MV³»Åj*›åfŠ\..Ao±§Ð§E¡&O ™j$þe ´ È™•"m° 䀪»~kKÑ.ˆGé•á8¿È‘—㲬}8ŠÄ8.öPwäŠx*llK×CLMÍ®qº@Î*—Ä~H¼ñþ¤æön·\ÝgCªBÚê>ÍÃe‡(ÜbÑÄïET»¢xmôaû{ »–œ&êä®noÉ£vÃt•m¹»ÖV¦n2ÄŽ)‚žµÆx‰`]ô€}k}áóµ_S¨tzŠAd§×Ÿ$XîvK'Ϊ<ñÿ#¦h¢©¿ÑÉwúS…w\î¡¡åm+FÓå[ÄaQãIvØx´wsŒ:‘Ü’N‡·VNçkDµš·Œ-Õ±ñà)b6F!’†Q#è¼NŠÚÝ­B÷ž(5UëkZ–õ¨áýÛcâ¡sGfÝ Ó_î hö½š ýfXqaG÷LVA+ádÛyÐ`å3WzÑõx‚)Z«TèAeG&­‡Ú )[­U²(ÉùQ“ã\Z•!‡Å\ß(ê©T<`OUk49çí§ ìÅ¿ÉsˆÑAEèŠ*˜û„8  …dûU ºÿ¦$†M8F„îƒ홺üÝ ÊèÅ]KV…8lt“s<*‡QÕA 'GAl¹£†1ŽÒREBaÕ ¾JšÚ­ Ì¡¨ª¼ÕGòæVæË<½Òa‡1yYÍž™LVu™l©Ç‰/ Ì•~H$¾’L]–r7“ÑÜÉdëc—p-YB2ˆk,#É êûÆÎDƒÖ‚¢ØiÖ\6cÓl²‘<˜Õ뼈ÎR/º:n2’ròçó19,»<;¥û« ũЕe@Ëã 8H­$…ŒJCÁ£é$°‹† ·DTˆÀè}°9&ñ6‘-QG¦:(z jÏ[¶Ôu:QÅ>étBÖÔFñX\R/iVC 3¡È®F™ãÐ\C{͹¬ÛÌ÷»U8Qo>“ÙZÅöx7oe9öˆH³$˜3½Ø…Û©ŒÊF£àõ…Ö‰‰ÀtU±DsÈ[8ÁkK"EÊþ³–YÁÂ%ÖŸ³P» 1‘%)ü#Š2`QzɆœBWÊÎã@M]z5XÊF‹!üÙ‡t¿gýãØæÝoä• ’yYµÞôpm­1N1¯”¹Ñ%‚ –‰#T*C•/¤Nû˜›ÛW³QÞxgGæ°‡êé]hMãpÚdb ¨Mт”Äß"WϽ‡™þõ?òPmuT“u¤Ñ€îÔš'´‚¡Uè•VÖwt€r†¡“ÉVÅ:ZÌäKÊ­…à,ƒ¤‰Ä Y½ ”ðøÓ²³µµ}ï “h–(•4[7F‹ Ÿ–ƒ 2úDÂæÓ¤ªNëxØ^¾æà@5±g²ÆÖáÿžn7»»Íüuù¾æ]6ÇáÓÕòa1ÿ>_-Þ¢[®wÜ(ô-ªL_‰n•§X׫l:K&7¯¬”š®t*¥£-•eÎ;í’6^énÃã¢-/Zz(µpŠ™üØÆ¤~H‰qñ®Ê2bÛ‹éÕ@e› y*ÜUùß{5gT h! 
±1¹)æsÒ:pµ–Ñ=®7ÛÉëb¾–Ÿ|Ÿì«Ñem4!ͱ3‰<È1¶<Ê©Lr¼³1âl†Ñbý$GT±›îP‰Õhíý(…•’êç{¯ ÔUÊxMÝjä:‡oŽÄ;…‘£å…x¯³Íöu7ßîDmýBæÃå\Mæ:mû” ³Ý6•nFLWÑH#¶" ›’ÛØYR_e Zaô(³R¡F¡°PI§Ìӧ𬒱kÔÙ…g3VÑ#YbÞ(áH\gìY(=M¶8ʤŒIÐiB6#›dv„|WMiSMc ëÇÈwMëMfÙsñë±D_¾ª–Ô?L‡}çóÓt£Ý ÔÉaJ£ÌÞ^0Ê¡ Ë(FY«´ Íå­r?Ä\Òºk†Ã°n3¹_ξ>¯7Ûå|sûãg͇'›õNA(0Ù®'«µ¼ëÝL¾xn³_–“pËT——‚)Ìè¯ÛnƒÐ§ÿ¨¶â+TÊäo¼>>ŠÖ,ÿQ¶^ ¶äL~Í–Äû资_ʘ€óœsæZ€*0F=£C?€3SÆ æj:½Øë¢5:­«z›UZîSò­â¼ASã®e+H ?ä®óiöúëb+Zœ/D¥OO»gQÑÛ…ò&ïpWÙªÅÌ·^öpâ×a¹n%2+˜˜$SÃ;‘[ÕãÃÏ5ª£Ù·8¡zK@e2“ä­•ÿj\ª£Uµ±.b†hx» ™Œ·WTbœ¤BWàN-5• X#K?4R”-Ûgïâu»|XÊ·Rö‚ïÁ^ É7S‡½ç2+È^™ö;³B•7ÝY¡.V°% 2ì•·6v¶ã²¨v4a³Ž]™Ès:š^<ŠáãçCM"W‰^Ðòu!y£"rïg2 DÛëýš0‘ì®#F€¡«¢ÖC®|Þ„æÍ²EÉDm®¬ ÞJëP"'!^ÐBa½‰Ãoɥ̥´ =ýä½G&,Õ¿®1C4NLLŠ|âÝž¸lº½Ã´t{ éI›}xPU—\ç AT¡Èéi,r ·{ת;Ý yU±Ê¦Gh˜ÈŒUðиX{‘ˆ¤J0?\m\?2pÊufÄ#Gëûµ¥RèÌ€}(xçG¦*Cý3§”ž0[4£öqI1?²¡D—‘ PˆkÔCˆ‹aðîAÜÖ‹F¾Ù_× ¢T™¿oJým¶Üʤ¼‘Ézó_2Aÿò|¿øýæ(ó6qÿ(?ß¾~_6ìÝ„1¤4YÊ2Bཥä_„.túiniCÅ– þõ®Ñ}ÃḈH…iMý…@äOQ÷Ûlu+ÿ„q“·qoVëßnîgÛÙæûóühKBÙzAˆ,±‹1Æ:>˜:L³H¡ïšG¨ e <è©%QŽ~J¶ï,â£LÅë—Ÿþò爞õÍn.!ž²@ÂëcÌCBñ‘Üúæç›íïÏ_~š¯Ÿ^f¯‹/? B¾.¶_þö÷?ßD××Ó,DP­³ø)ï!OZßn*(_°ÙÍç‹ÍæËO‡·ÿåe·ýòS§›­v‹_öI±þ¦ùÔùàŒéç›1Š»0žßÙØ´^ßú³<ï½ë°÷èò¤CqP½4édÆ×3ÍÒFŒä×ë©¶ÂyuXo§.vèlßiþ6 /ú.Ånÿ[Þg·9·}j&‘âV“s{0‰fe\V†WlöõÖšGˆ,zD„y1ìÿ¼‘Ÿ<-¶»Õÿ~‡—ÅÓãÝòõñþy¹<5bXI癇þßÜ2âgg[ˆþ_,Öã"š´ešØô¸ ›O€Æ E“¡d4yŒŠýèè²ÁS‚И²M‡j§hj - M²J"ßÔ®ücì!‡uñþ÷¼å);`ÿí iæ¢Âµø'–†(\ËŸ ÷Ö‡ÎȃÎ鶈dÿdÙq§-í¸KßZE‹L·G–fŠHdq‘1Ãuê­Ûi]¹Št³nG,‘,<™§dl´^’"*g€´Á Ðää²Ñ,ž›œ„µý$Yã‘,³=%œöñSAl¹ÉÉbVF˜÷‹—ÕúûÓI;µåóWùšÍíëz'BÜ/f»Õö|º fò¦kæ×µæ/[ ìyšùuW'®u0à}ÿˆ>­c½ƒO ôà™N7†ÁÞÛî}™¬eê4†VÇœŸÖÀÒl¡#¶Èœe ­„pÕq#bŒ4ÓnôÀàƒx„†w%¡çx#óäe-²Û¬–óÅyoí}ë$4 ¥Ÿ–_÷å™~„nÜÊÆåu9ßLþu÷ûÆŸ‰¬ÍRé÷iKkÏÙÄ*ý>W‘&6Nk3i@=jYú†,Ó`¤™ ¤ÉÖB÷Q“ïꚸ·!­5°D¤…M©@0iÞ¡e„4o¡6Ò|¬+b£ƌ︻ã‚~¾…l‡h´Íq ÷^½1®™<®•}›ö>‚ò· e_æÒP1²QCf¬êD&3Xf5î¦A:ÓÄ’eFÝ~»CÿÌkP“‘Gs×ZCK£š¨Á°9ççhŒì~íª¡&_›j"GïcX³Bq«9º1”ÙRf!%|œÅ*T*R¼ÅѶVñÛsäLƒT»|u T`´É&Õ€¯¾Š%°lÝ@,Ù5‘ÑXb~«‡ÉTÒ$ Sè:©BkrÝT2±¶P­‘%AIÞJÚäF¨ÚBya ƒ „È•¡ĨcPÒèÅá »GSJ¨ÜÈwy!LnÒü 4yŠ]×Aÿ뺳‡·¹‚~ÐÜÙ³¯‚CÜí]ºéCçµ\Æ£ö¤ì`pØtwI<]çL÷ÍZ쾃qí'wYš7ƒ^\r&]jK‡‡êÇN"Fàè&ÍØ¶Ö‘”:—skh·æuñ²ZÎg'í‰{´M¹¹›{ yl#—E0ú ·Qç1ÿÌ}ô¾ÌOq=5+´¦ÿB ÞëÉaIfH¨žj…Ú^K/¸Šò›Ê2zö} Ê–…=ô¸v]tSŠc¸çè 
€žzbtž®;µ¡bæÕýì~à:–µßY‚ÐSG¡L¶b.Úψ†\ˆæ¦bdS›r‘¤íëÓuÿc+ÇbéˆÄbzöÞÆ{™‚×äǰ"õØaÕüqq¿éŠTEXAÎj¬ˆÇwn®`2›À)ÿ37¾¾EsïáRÀÉEÅÏ—qä´Æþ8’GÈŽ¤ÏF6”[WÎüó-'fv·Z„œƒ¿®×/g\ ¼‹&á‹YÈî5Dº/÷?~†ä#"ßÐvy !ÊMn.( ÕºèÖõm,ïz³ïvüŶjJ\¿åK,^ZÁñã5µÖx^ßk7SK91\c)}ˆñP,¥@.–RÈs®p}bŒ>•P‰DŒ5× FUï[Ž$‹£sòáPú?OÉ£WM’{ù ¾©¬o/}sÍ ¬¡Åo‚„Õ«»$À¦5ΟÇÁ×ʺ]6•Ñ5>'‰iA|‚edzÅ׳öŠÐÑ€<ÖôçyˆØ8žÙÐìŒÙ^—$ ä ’ ÚU]5FÖ°;.Ÿô›Ü1ië…6”qkt{ÑBœÄ’¹Ê›X^!è!Hñ@%Çâ‚øTbÖ±8Ú¥ J¤ª5q,à=7û™ v,’™³’‘)‘` PRa÷#”ÍBð•.F]5lrM£ÔrÐÁ©ô.œà˜º[|sã Ñâ\3±÷䤔KâÄP<«à²3r§¶(™Äæsz3£VFÛ¦ØÀº_{öƒ“IfGÌ$“˜5‰îe Ïù]ØCO~vṤ×4ú÷Ó^>¼éAéõ!1<¥wÿýþöåîz{»¹¹{ºúý-£vpZ™XêñNÈ×÷Jôøs'±b[Ây!Åhoœ(èÂ1·ÏZpæÿ´-+5Ý>!nM â׫‘) vð#_í[ÌSM·-sÅfE ¥ØœT µ8raç'{iX¬¸£sѶª™ IÔ‹mí¤Dƒ=ÌdgŸwt’ZM]~xÍ|ßžŽe\$JŽmnå(h™ÜL¶›†àà|z±¸'õ 5ôÎÿÎb_ï_2iÃÌïŒÉa{½©ˆtŽWQ¶9 Šá:g±ïp9g®¦Ä…ƒ”¸yK‰@ö Å< pˆyj·oè¿ûýšxªà×þ!n*扌žx3#{ÍêÀÅ䪜5y»+ðáÊ]aü9eµºêê’6wò÷D€©u >Ê,…ëTõCzvAº²ü è‚í3M•™àuE*Å Ug§ý½²/P*‹ùb!%yVÅq§àäf†VèSG¡Ä“}M‡O@ûmj»høÌ°™™²Ž”¸·Jâäèþ´, „6¨¤þ, ¸‹§ØÓà1‹§ûÛë³ÌòþÅ?6w·O›»‡ÛëÁÀ’wåÊ‰ê­ †**•>,…T½!/õ9‡EfŸO?,²y‹#Œ»L„ű¾äþJœ?øh³@N>K£sbÇH¾?9 æÕ 99×ä2Ëž¾73½kÝ-‚]]X(uQW ?ô¤wlýñ€4¹?Ø…Äm÷aë1l‚¡óxQœÍöË?•k>üìdø±ÊÅuë¾Ú%àšá¢4@NµÉä:3ut¸C’Gã2=rÔbÂÂÞ)f‹¾Ù¤»mÛA †} ûSé!Ðp®³D©Ÿu·-’ ó}S‚K¼ìàp18׃ÀÔ:jbóÇF0•N÷Î ïaì¨ç¢‚éü%·ð]FµŸ—° Íê`•c‚€%™ãuVŸqº—š¼Íé&b\"\3g0ug§DÕeyåO¬ØÅåN]Ê€€5@ °ÍåV'~´Ï-`ž}Æç¶uŠA\À‚ðH”á>·]K û¤[À‡Ùæ@I¶m${Û^‰é˜Ñz½e7Ç wsÊ:r{¿ýµÊûfÆã§éC×ñçÉÍE/ÑÙ’Ä汪p¨ÙKSÙ^FCw23i.]b ¥DÒ$6·±sâoq˳Ýëqar=íúŽÙ 8Óré•ÝIÕ~xvÃìyýøüñáå—Û›§¯•I…iÛS GM£J~ 0¤Ü«‘å87WÏdZ©Ær¶ƒË(‹'„„¿Y§S¾#DÄ”¤©€YÒÛz¾mwùáù:¨¿Éw.xÈÀ]=äà%>ˆû&>rø0¹®Âޢߦ!  
5%+& ºEIš =ârB¶#FH¡HèAJ±ÑÚ=“=°'¶ŒÑƒ=E„X3j­Áv³oº'Ud¸æ™Ù1£üm,in}?„Ažã;ÂØ]>fñûínýÓØÁÕÓõöåñæù«íÛÜžOKžMgX¸?f:cÕ—;•¾t$ÃÆ6V}¹™yïaGïÓ~¼ª^OŠ ì4O ñò 4Œ©‹2‘,p t+€dKèÇG[6†AÒ”AE³Sšr‹@-Íúi/Œž@•Ü©€S¯]òuK•Ñ¿#<¶IeÇÏ„y Àî véîCa×ùçÎÂ’y„SaHÏlÊÔ«ì!ä„À÷÷Ï­ñ¯£iÃGƒÈ&%€]YzÍ̇A«UëÕ1úUûÊ$¬åè×¢äRo­Y.«!xjš>Ñopš4¥BÍ}SÚ îÌA¿·3h.úUg·¬S¸HµQOm€Ð%èF„©µ à‚¾;tÅ«à%¢cßAÅÿü…äkçÿ¶ }-ä“õ+S†_ðqü’Otß‚ƒêøËmÙ¯Ìõ4/œ TBeÄ"8ƒ×\¿ìÑp]ê?vRî*È(:go‘Œ®ÿ`ZŒ˜©ÿØ:Å·“c•yÅs»q 4Ë—t-Ø×áÉäbs›jÌV®’™Õbt’¤Ãv·<[éSœH¾°£†C4;°gé‡OžlY²ÒâSIí°5‡´´n )ÅáôÃ<çcìaÑ)â„2,ô Óß‘rϤ–Ìöqû*#ÜVPN«‡ÇhõÔ~¯“(?8þê<µ_x.=Áî×4eèh Ý®·p›1Ð/Æ@Á¤‘ËŠZ†À³Ô¹¯¶m•<ÛÔ@ b$òmIô8 ìÐw¤ÁöaÇÝfûõæ·³9˜Ã‹¥n¾Ù¿4ÊW^®r,⚉jIÃ9id¾2™·ðL+ñ¼yÛˆUÚ£H(E!IÉÑ•ê­fÑ,Љɺ„!é@hR ¹èñ>Ÿ¿ï„˜‘97²g«$ÎÙ•0?²Ô5O{Oì]c&7B¢ùt¡ív†IŒ’îN¸¼ñv†ŠÛÙ®gó[¨|;+k™Ô!×Oqòd ¯g/.‘mÖœßÒ²-8¿ç£Ûý#”dFñ™%9?Þ Ç·S yõï¨xóðp›ò ›Û‡¯õ¿_¯mŸÿrÿòütµýíÆþÝ~Î6Qˆê˜cÅW: LP# +?V|¥¹ÐCcjã£6'–ø°1uXÚIæ“NéõçÎâÊé¯óPKkQöPÍ;ã*+€úÚZå©ýfÒïŒ×æFo°HÁÇ ì÷¢‡’ÿ©!›B;1Hÿ3mfÏæëÖÜ_Í's'3Øõ‰%4炦…RzC vÞc·\×R%tvA{BÄÔJ§QI§é^Clˆ°¾KÓ.oZ“D䋾̚þM»uÞÁ¤lÁÉ“-dM×ábÅ-.[ù€F7˜0`oFǹ–µгwrÁ–5û.ïÉÓÜ3Íî*WS¢&g¦w4ÆÃ\ðU¾kÊU7̳\ðUæ<ʘô¾Úx4u™mfLê?xÜëðõ¤6t½ñË.d4ßgÅUö±{ÀÀ×[ƒuìy‹I.ÅK™O’\iÉ¡ŒÎ>5O§ž7V”èk+ÚO¦=ÇÑ=ofç|Ï[´££‡ËN|-š° ì»N|å!b$ä’ë_‰J+ƒâtd¿}ºùôx³“/P·½þâ … Y,ìÄ&ø=lþJN1)YjY Nl7Gy4\KônÛAò7b‹i°…«ˆæ]‡"ØÚ[åÈÄŽÆè€µûMœTk¢÷âNX€µ48zß›ÞγìŠ@  ƒˆ†+¨¶©Pt‚†ÉUÖ)`ðz?@aÂ"rt©P{I ™L—Ìñ‡w›¤)wò;O•d2ÓS1jêßh‚ÝC²°’/ÌQäÛU+›L¥ézù¿¶QìvvKþoL:t¾ÉÑå ùÔP])l>­iØ)n“˜e4ãÙÙ‘d$)l¥ ­f !v-êëB†™°žaflL,18ô€ØV΢³Þñ^Éõ v~Tòá˜^x˜´+¾Áæ`…ž/ˆ¤é/=óY[õÌ9–*„rÊ€K›Ó\ üÄ02êÓ˜sEF‡ãh/žêÞ›ÙÅlÆ£R¤?á”Ý4L®%ApØ6eGqD’Q§9±w0e·Ý3Éú…Y€à¼Fœ²°új¦ÆîÆìç_$ˆZ \96¼ÔÀEüflNãh¹.öNÎSÈUyØÍ: ÓùÁ"Çfç 9ÛVJI(ïa»¾ÃÒº¬”'ËÛÆ ÊäZ²;¨MN޽ðÀÍU"s™ÂËOƒ˜‰Ì:¿ÚÅõ_ÊaM^YE¢ÁWµøÛrõs–!q{G_r–5Q¡@lC–˜üÄ"]œeÛ±(µBÆ ¯ÃêÈîV¹lÝgQ:R\ƒ²Àj ÐÌ#¢[ÄiÀ‘)Êl³îÑÜG;ìüôÓöö%×ê09 eÄ“ šä_%*W‚òj›u VS.˜«Y VS)¢ŒÃÂÀsÿéh >±ªD/ŒÃ(0:VU2‡6«¢ptêggêÿt×öSZ”sŒ²\q  'FBoÑPÍÄž°â2yÅݽöšÙ-tžÍþ­[Ͱ‡f a(6“Ò-í/µ-ÃxfÖ™~Õy›¾k¨é— la#zÁRñ…Ê )ã•-?±Y—~);NH£\¯yxõ>ÙsýR)‘'©¶à7w&AÉ6¼@vnm–±3¾L.°ËVÛjEF`ºr!©Ûõõzsû¼°ÀSšì/›röJ\#KŒ¼ÆÚúüþég\Wƒš¡”lpW@Hå$¡`ÎNY½>mÐLû,rU²¡ùÔØ[x šÉÌ.d@ÓÊÖbalŸûÊ.£È¬Û?žÞ©u²«)Yºi4Àpz Š,Yz TÔ_rz#¾'Ù¥ßïo_î®7ÏÏ›í×tÐ÷êAfu 
Z®~Õ§Ful.´e¿*² ”@ K”¤€”g”91\— jR‡ h·x…s‰æEC'”í¼‘¬ëßìì#ePÙVŠÌ·u±‹Ó²8"_TÉJj:ýÛ6+€erÍ#›ãÕÖw¯! @ð]×A 8²ë`oÁïͼ;h¥Þ©¿4úä$Z¸Ø„èQ×èX€9’P èÓ–c˜4kK~À¶’8J$¯RÂp‘bÇ=ÆìHÔ©­º`¸€”r  ^i @ïc‹ò˜}ž“Ánv¶Ø3ƒá¶RæQªóÔÒ]1\»3\W&÷¦ñ±¦jžSÇõÀt¹"0w?¦:ÁŸºQÕÅ1{ \ÅàØ‡þÙ:loPRA·¨y¤õJÔA,–¾!Op´PÞmi i µ¤Éî"òM 2¤÷v~›ýØ/”&v–¬º³þ+v($B›¦îùTöàGkäôõ.Þ»û´ýzýéåö›–ÑëÿvëŸ/š|ÊòÉÛ⤪.téѲS¿FôÂdwRq` Ü·#?‹¬œ8=±Jï×ö­ †ô5=ô$ÑB¥¦ž{¿àBdŸñ~m¡„!„‰y¥¾U1Y”N¶ø£±w&×1:ÔÔâ¦h6ÉÐÄòYSÅæåù«Yýf»ûˆß®¯Ÿ&^ÿ¶ÛðùsàOüñKüÇïZ¸êuºÉ‹[\Ðæ×FXAúAaÇy×ÖäÕjÊŽ˜Lœª'³$;°HÝ…RLIä“Ê'VëÉió;­êÊe€Ô˜Ñ‚Èì†wå&#û\¥Ï–‰]#]íÕ“¶–ÒÊ“íßýßÇ ú‰áO*ÀÝ—~%&MÍvè\±ëBô¥Óî,<£¬–õ«…úTýaW±‚ŠžWÛKr‹N’½…Œž[HVε¼*Cš‰õåËJM¨¼¶ÔT³ççÎ>¢ù©Ô Ç!e¥rñC{Ym=vœÜÿx¹Þ,¢‚OúV¿_§ŠSyËŽàªÎbg‡G¥ÖQ«7ýLý©ÞîmÕ(r¶Ÿƒ*ƒm,Ž;˜á]–DæÄ²}Bíî+s<+ ¿ÒОj£cÎ…Ú¶NItV4ÛJk~t×P©sê}ÀÓÈ›ƒØˆË1DˆêõÂqù}êóX3ƒ²ÿÃAÃm].ư†;ŒÔ¥b vˆâÆ­o;X¶î%$V•½vßl\/ÌÞŸ³®þ©µúöÕG!\ðyüè„ÅõE‚ýJ©xÎÆõ*}¹ÂܦÚF@ÊH@ç!cmjpª< í8Ø>^gŠŒÇëÓõÃíýv9~ºßþj[ùó—úü •ý¿ìy(P¿FF§H¡º a­Ñú%]Ù«S&(·%x’½~Æ,8ç N,Ô›US•ê…±9`ͶL!×f åÙ.HÉ;ã™wyQ†Q+ÚZbjQ•xïZU‡6p¿ª¸œ†€mc Rñ#5Ô,”¥É¯!ˆmÞHÔ ásb—7§¿2ÀâôåúJû|çhLPm¦Ï{|¼ÜqÇÙüŸ­Åæ·íõíõ§õv,ÝfÇ(+x•Ì»,f«âý—yôﯲ³ûi·iMæ¹3ñ w‰þùë)=ß¾¯ìœ þø®§Ã¾oûféõd«Nƒ§–S ~°@ûÞÌ{f»3NÆ}é¹P‡»®ÜÆŸ9ÄèºgÂ’„ÑÜ4ÊÄIœ^¢¤Ðð¦±1ýòONòøáÛ/øŸÿ¯¿üç_þãçMÚ”ûð×ÿÞ=à‡Ó¿}øb‹òó‡ý WÿOÞÕý¸q#ùe—¼ÜtXd‹œÍùåö%À.ÈÞ½vh4šxÎú˜•fìø€ý߯(É£–D‰ìn²íàŒ…³ÛM²ªø«ÖÇ*tcü¯ådýù—ŸÿmÛQëeuóiýô2Ûræusö¾ø»ÙÃÝHäôæ_o‚/åÄ·§ÍÍt¾Úˆ@~=a¯ ˆô‡ñ‰éÓ©‡‘ÅxwáŠm7<Ùžå)ä2í­ÇÉ|3»dÅÚ?Ý,_¢†~]=¾¹ Áè=µm5⽋¿¸ØX¯å×uð“ÇnÛû~ÏÌfì¥îÄtµr#”’»²lø—ØGö½BO†v™†Ã0]»8‡ýϾÜNAJˆ€øº²%~¬cšu¶Ñ>Ìì[\ïc™9û'¯Ík0Ghó*ô?NÉ}K ;˜Ã×þ°G÷vªÜTOô£¹ÇÓf¯ª[£×ûiµ{ ‘7[cñŽÐ³7¬ä7'*׉V8j{©)kˆI#9ÏC°Ëé>]Æ•qÛÖÚ±‹³±Ë4.´¹´ØEê7 @E‹¶ZGËA/-~‘mÙ„Aߨɪ!0$8à«ÃG²6J‹Wn®Ã£×å`ˆÄ1†v÷âäîïܹ]¿ä[ÙxÞÛÅÓo»ìˆ7<Èûkg]¨½3Ü œÆÙe ®Ä‚ëÚzœ]^îZ-—„4"Ò €DècÜ‹a¼©²sŽ&÷óÙ/ŠY­žÏò¯r)g?-f¿ßm• bpÈŸf‡ŸE¦­²kBâ'¤ì¹IWÖpQÅœÁöi¾»½y »DœÎž>ÎN 3‚†½sþÈžkb´ýWÄF_®>ÝÌWŸÄxy?YÞ¼¤Ùžþäó¶ñröùv*Ѭèú ‚|Bé“è€BÖ­5eÛ»½™Ûºñyx|ƒ¤ÙpR¤´C¢D|! 
bmÝt)^M+ 2˜¯„egÁú)ǸÃYa~¶òâØhpî«>O]ü“V4ÚþÏËû‡nïUbT|W“%H®FÜÓY€0ýå1c(ƒ”ˆ'\Äó²®ÃΑ‡ âXœ4¼Û‹{ƒ©@©hx“êñ¨ÍÖ:n™H©… ìÛÚ³½ÈNœD‚67ëÕB´èíBlðõç½>}‘=œ¨LlœhvÕMe–¸‹ì«À#“øgTszîr²AÚ·R®ÌzÒ í@TSU´#]!?‚C’»aZŒþÈS ë²Iµ‡7ÓÞ$@¥Iobõ‡²UF'öÑIà°çŒ É÷¸þ*i{¯Øh Ékg­Nª$2ѹw­Ó–È-!Õ¸`Ú#‡®½H•†29KXT%eÉOŸ¼Sï9$Vtµi.«‘RÛVZQîW»Åî^*•/mçŽö&l‘ €PɶYË…@ G¯>­EzÙ9Òúxl;‡j£½œ¿2HñHâH/®½23ŽÿÊ|Ustzf.Ä#W{:£Ü/ŽôÏÙÞPÊ0Ûéúø9STÁcæø9ÌÏnºn_à^NaiH¹§p®áÕ Þˆdé±û-<,C’XŸ1,‡ìMç õÈÔ§K$[k½×Ê2ŽhS,¬¹eµ5t2`P¹´éÅÈQ-y D¡¼)Bz#iÉÖ"ýL/çÁuHœ(s‡5µ],áᘋÅJü+`¾ÚFR‹"-š^šUÐÁÞ ªè8“ª<äò#/t(Àa¿¨ßróƒÜêçÕSx“Üw{kýÅNPŒÚW…bç{y* Î(ï]¿ö×éS Ž·wžrÑÀû$;wZÔ(á ;n´÷ˆ|œªÖZ¤k‹ÇFc8B›=›˜ÏèÄ_D 7tÙ^¾”5H¼Þ®m$ÓPR•}ª„ÖÅvDî+Æm©Ùzg\Ï_>-»½3ŠÙTž½ï“ =(/·ÝU(Œ‹­dœ)¤Pré8“#IÈöÑç´ …™@YÙÑ‘Ý^¤b@¥õÈMJcõ†Û×›Ÿ…0,)"¥ÔóJí l©ÝEÌ©ÊÒ“Òör0ΡeŨ0~¡Ðí‡ùJŽ÷~µy¹]Ϧ+9ÚçÛéüI¸uû²ú0ëëÀN™š¸NÊcŸ®EÆ!y`=×{R±(Îxð90o“Ï ¢/¢oT-‚Ãyã1Ú8ß^¤·¼Áð>.ÎŒód.à<¡@ú@Ÿ×ðB®¼¶~,Õ´Ñõ±=a(Š,n?f§û•D05™t)khø˜‘¡í‚s Y¤±!•ÝÒ´éõpi­s¡¢bd¤aS½<Ö€Ž?Ši…Ê“x+Û;×ë,ÛÒïlÐB¶e§LÚªÖf…|Ò G‹ô²>E,й±u‚¶µë›ïDžé„û– X?Æ$ ŸÕFEP‹²è„R5yºF“öÂ8(3þ ûd]a×p+n‡n)Ðñ]ðçÏ.}ŒU˜yp*'6X#À¤´ iË0~€é펎+1^‰+Å)ÝIÅŸv¦Í-´2‹5Ú5®4êõ(§þr°µœômH¥¤úGÔñÿ7ªQÿÌ:ô`UG íEz©±„´î¦þKÈ©í“`–uD\&ü™}Ç Æ"Th­"v\2?L}i·{]ò¢QÏ™Êd‡1+mhÛr×Z£—رX²Þ» K„Tí˜g€ ˆ%‡ èhS†=|Ñ,jÐY#=;7Ï&ª(T•ÒjòÛayõ3~“çsVLßÏ^å<'£Ho§ÝÊ Ù^Kn®è>s0Còzî•Ü‘`¡_dB;c“ÐoÐ+ŸŒ9ñb!‡qÊ„¡"ùŸõíC{‘àïÂì6¹#ƒðk‡¡…uÖÆÂÐÂ|¶â˜%Âкì8§¼Ì`îâÐuj2V´ ˆ;Ï`kbøsÐ}›@ï«ùëB´êóÇéíc5{œÞzîï‰ný½»¿ÕfúhÍãÌÏzTl-Ò ÍCo(Åc£¹1¦JN˜éîkÌløáþõiþpÔ‡}ÊôÐm0®‰« ¶èëkQiſע ’¹e[äõä×¢Üó‡"ÂýªHn{@9’ g PºÈ¥‹P2'„€=¥ã0!±€1ß›¾Ñ¢K©œ#v³iÃw{~èíC$gtôö#d„ØxFipÌãf„ؼÁ†õàŒkj¤&KÑcPŠQ¨¸f’qû¡ôrx7hãîíÏïÙ~wâûL?ܾUêtKûÐ\Ò z¥z®s±uqZÀˆ-Ž”a³ƒK`HE[„+e³‹Ÿ¡Ð¥·é…ú¢è¡ciIˆ S#À9öÞé±;µÉŸ~Z­?Ü>ä‚qÕ."{Ãúy>YΚøÔ£çÕÞ _&¾~|zù¼ â$´ï_~¶‹ølV¯"ýbÞOo_V·ó•ìõ~23¤×!rt»½Ëɼ[œ‚×T( O݃öNÌRÕ=ó›ãGA§C$Ú¢Nû !5--°FES‹öe|˨E–ø¨é`k‘^Š ™YÁȊɃ©¿Þ‰Éó9<+c•Ó±ø-ês€Í `Õ%Ãó›ÄÊšòb ÊÓµõ.4©¨Ø¾ðgÓ„qç«ðŸ…ȹˆï|ËÁ6;uO³éçé|öÖ¤Fè{¿J‹¨2cxŠh»Ó¶,mgÙZGÖõ²8EmõT‡?C†Îì„BX:hï÷»®j=cbs“<(³W¡{·cö­%zi<Ëá±~lg¸~ÄÞïÊ\O=1ë½8âöj>1AÙº6¶…KýÑ ´¦4!Õ ø!80ä¿JUÜùãÊ¡4qºYßnž~[ÊÏ:ÖȹëUUU¯é˜Ú„'Š|±¹lê• 
ð¡Ö‚éøi›|ºñíÙ¢T¡ðjYê([«½F/•â¶s‰ÆV)T¿?¤pn7ì,º‡:4–½Ð²l~¿ËjÆš¨’ë„B¸ûs7îZª1éHƒø¾jwgwéš58ÐòÉE³ù×ÕˤÍÅdýaö":w:KèìœOsnâdé¤1¬ëSGÌÚ{‘ò®YD9¾â¼dQwàˆf‘J煴ɦÎéôP&o9>kþ@Á29ÙrH8ðG E­Eºë£Ä\Gê4º0-“hêöX¼À»øP& H½OÔ{¨¢/E \aÿdL$«( ÄFŠ„T6ÆVô-OKùÖ&AäýßR]3Á±¦VeÐõÑ BS‹¶£RøB©+¸ÿ…LCó»wpºKM¤ÀM‚ ‰Ñ¯D‰víå°zÈi¢ÿnŸ×¯Á=ÞŒú…Ü9Œ„ÉÇd,sbN5ÛV8iAŒÍ‰k©ŒÈÝprãM,íjjpRµ3:åÎ"GÓ~B CŽF-Þu7A‚íà úŽÆ™ìµ5 •áþ(àO»3gÖŒ:öÛªDùÉr3™no؈(ÿ¬®»ÇÉ|3»pIÉüéfùºzýºz|3™Â>»ºÜȵ2’W×ã„o'ÇVqß®u²ïŸ×«él³Ù]Ù=·¯&CjŒ=àØþÆtµxž¬O½³0ñ ÐvðÎÊðZèÓ^)6JYSó÷¯"ÃО0"2¡å¹UÚ§EF‡WͤÌhÑîïú{Á^ ÁÏ|¸—½YÑ9ÆCèñXî­³þþv²Éý|ö‹í/«Õó&üUTÃì§`ÝÞYRØ¿§ÙÃág&ÊJ¹h¨m+YQ’• b¯Î­Ã|6»7×gOg'ã[}ƒZóÑÝo`®ý7ž6"ŸÄý4[ß¼¼Ÿ,oÞ¨Ñl~~;¸ "¬¨LÇhï ÛOXè1±Cù—4XP®>×üfÒáåTW³Â·ç&+l,C„M‰³§Áæ^]Y ‚85„k¢lúLBö¢Q|­2…üÞ T %;Bˆz‡ê•°ipÊŒ.:ýž=˜0Ææ«e’<¯ë²Vz¹a²I¶1Úˆ~OË&¡ŸNŸeÐâO é Û¡]:­ïת”¶ÕŠ]¥sþ*'[59«ÈD1µç DBÊ0½â J¿¨„xȶ• ~íØâ}ê*­uI\ûJöRDŠÊÏI‹‚‹k€­æ lÑLRx¢QÑõŠ ‹ìY¼Eôù¢#þ#o'½ôŠn?A¶B?Z§-·jªÈzÖ¦|Ïøó1Ùó#}ö\ù‰(Šòƒt$Ç[­“ºl¬ªEÉÞxØ´˜Õ†íèºÑ»ÚºÑ‚âó‚Øpd óôâzA¬µªì Ç¬ÄD]Š˜¾Xªj÷Ú>ñCCdöFù~© >’ÚpžÚ,6h÷¡|*KeûJ|± ‡ÅhÜÖiÒ¹ `XäÞ}äY}cPzƒñj¥˜¨ƒ ^/œÖTÁz€F+»íSÏz3ž:šÓ÷"—ûi ­òÂõ4Ä7îíýãTù{˜¢¢Žó@MÍëè\ŸGg°`•Õh»êþAtëbiuhœÚeÄÏ‚ ¥nùwíô–·ˆTD­Ë5ßüÈjÝëú‘2¦©GÔºn´Au¢ÕßJ’uÑê1ΚgÀ.?Ýx0JĹÊJ~9ùÕÿâ[ï­¢>ù§hM˜¦ŽCP67Û˜Dî=xò:©”ESZJ(e991GSGGˈ@…m9kã}Öàã@¬|ãUÿëx_·ÿ玌‘ªN’Ûh¶c×Ñ6ñb}Œ;™¯wÚG’o§“ÿ7grýÀ8SîÂí¸¹áq¥Üu/Ç‘2„:…FaæG¯ÚTñsqx4œ3ÁHœƒ†Nƒ‘2ÞÙëXÎ}žo,Š 5N šc÷àð…èÃh½Ï ùVüâÊôéáÀóœe°¸(RÉ”‡dŽŒÊÎK›Þ2]vÆÃìqò:i )üLJåæþÐÐw~‚´| Ö0íˆk—¯Œn€ªG?)pà "Úá¶ä›r Y56 oaBdß þØ×>ZÀ‰Õˆ·†ØÉÔb¤k5Zi¨BWª„Œ‘BëÀ$Øg˜žy>š ·{÷¯ŒP_ ®ÅÙ9©cÔÝçøk-ü‘óÔ@Ž?wRr•{¤+(á¼!3Èçâ‰ÉO(’˜Æôúz­×îäµwZGËÁR!ßPY76 XíkгkÎA8ú±¹ +1Yè2Yžç«ÏGƒ•×è×1›ªØ,×?2Z¬«e´\\ÿJ…bHd=¥ô¾~ÇTG¡1­ãÁ å²AÊ6dC¥gH‰dc ¤´ŠÖ¶N–…Q¦»…™»`T¶õˆë±³Öе3¸hÔª\®!68Í47Ö§™34[çÊáBE[EEy–Ö+\}½úöDôŠm,:ÆQ}b¯,~+ æÚ¶@÷ã'|ÿ>¦t¨‚Ò鸧¶"òVÕQD÷tU9ÉÅ tctõ0xÀóû"°Fl3a‡¡õ_åšœ¾OEŒ÷òlÝÙ}“‡Þ‡‹·߸X‡ÅIx†§>Š#(‚ÐÒn &×¹tNT„fé:±ê'U9©hCÀÖÑr¢âîÿØ»¶å6r#ú+[yÉG@_ÐÀ~ÅþMÉ–²’å¥$ûOƒ¤Å À 0V¥²©Ê®%YœéNßOGLñ¹ª@ךØ÷NpG9&ZÄwš§i£Î n «? 
Jæw__ÿõ=Rܤ.}öŒr¡ F]úì+88™Ñ<);x cEäqñ>R4ÍD(WÜB€C\œ™ÈÖëáÈQf†%¾8¦ZFoVÔACõË+Ø"¼~ìœ,VÔ|ï´vc"­úh(4ø *q?ï¿|ßþHàø6¸tå#ÇpÍàèÊG^C¡3ÇQŠUØb\žï'Q±Ÿ¤‡"[Q…(ŽäPHõ—äÀ½Z‘Ÿû -Ôä<òjË£‡Ð½‘‰œÙȉ›¤Šði‰.þÎÑÅïŒ.¬ûÇëýmÊ]ò­Ý¥š§#–7ÒÁªyšÞ`†nAˆ¥!ï&ÍF OÌF„3ôBc4œoòO¤þåÑ S£PïoR0A2:üÀúøþ f EX†Á«—ç+x Úr|jà .¸`g[3.ö©ý pBAÔÏ?W‘\=”"÷½X‘K-±Á,m̸ó®•OxÔ~PP“™ 7M§Â,aÔÎÚçKY‹÷½ôCk\ß‚¬a&¿DÓ›35¬»ŽÜ3¢ –PDEÑr¦ð³Y&50Ÿ‹Â•vß2º]; PsXÇYÌádwÚèÍ ÛoÙ¨ÞÂÒ ãzCNƒç«£XãöK8п8xÿòöðxû!ÿ¿¾ý– ÞM£àýòGv Þ/ä,·à ûò¿3 ,Ñ ðö02Ôiô2ëP¢zzpÕè§Ú®°`Ê·“ÊÉÏ)Òš4ð)ç„iñ<ˆ,ðU­õÙü¯–“Œœï²i0🚋_ڮꞙQ1SÂ›Š áÒˆYÛ}æ©êúùħ[±`f&tÕ&÷@X3 KW„¸´¤d­7}QÖOéõfc4tãùe|)MOSdëç`0_$ Qr”ëñÍ“s¶ãW+azWq× ´P\0ÖMbF@¯ºçO¼ 'i"Í@6)­Ø³Ï¦s‚1HÞåÓÄFÆåºl€¶‘ñžt¶‘zË01 侀$Æ%m$[h8†„nè«Ã“³MÄT¶¦õyªQ ‡ˆõÍš}žªs˜~»»”ž1QìS„`0A|Ž6“šú–E¾%ì³EÎå w¨«jC{'vpÎוũtSÄÎ]ÿ=+ùջ䫠ª!OÀ@6`præQ€-Šúر)Ã7==%&šzçlô’‰ÝM2°^Àüš9hڔ̡¬ö‡ÆÔ¬£¯ˆqúª³÷†„xk\¢µ<Þ; »núë9¸àÚö˜sÙtzÅxzOﯫòÙ-ÐªÌ åGM‚!À4¶K [žŒ‘½<¿½&çÈ.}c׈lÜ—ÍJ¾ý#±CeY¹·Ç3"R¨'ËéñL³âÒ—IÌ„ÀGÀ{ ¾"îI‡73£g9PŠí1κ‚(;ñy_&¤Ú!FoV” †=-í«„“xµI>Áâé:]×nˆk+’œ¨—£ãMÀI«íÑ«ö­¯-ðeÐÌ×WÝyÇÆcÁýp²Dw!„ôÊû÷WnáêëS;ˆŒ¬ËÞuh¡=Å‹:ðÖúù¸õëúñùÛöùM=žèŒ­ܯíðÞF1lTEÏñ_OuÓÇä¹çE4fÊ| a}¯ ¾BœÍÒ?z â°|(@Œ4)¹åÊhL’2w$»éŸx©-#‹_jéÞ¼kÝáC>ðQQÞ3¯PUèÙ’@O=ß>Ы„•žú>$nã‘èÆ†®û%aO©Ë­ Qƒ•¾E„†ÕÓ÷=³üÇ1ËìÕazä2é‰éV&Pcs`¼©ÍË. 
Üvn›ž6…Þ"¯Íd©%v•¤”Ûö.È&N[ ÅØÛÅ6ºÏ ª”Mb=ªIÃBÉ gPh ô¶ˆ5‚Á•Og,==ÏÁéÂâ6λªØÙÀðk:€i¢D‚è0ÎXß!$X§‘?êàðÒ/×8Èžª{Ê0žªÔÄ õveÛZ™·óë#œ(ñë åa M,rYÇžv먠n@<}*ׄ-d|,žÏnÔ4Å„ aS`ÍU/ Ôšª·Ž^¬¨5CóÒTk%0M²@8©pŒbØÀÂõÔ`Ëê©ûfù™åÔ)£«¶Eº“8˜à]¢ñÖ °µmÊ­¥³_î35€îu¼«¹n×O?õo' ÎINPŸ.ЊGO±ãn­ tµîÖຠAûÜÚ!Ò½0.À ±½Ñ ¸IÖGÁ×ù§dùo=t¯^sð§‘?däÓ,ƒu$¸’ Gž™³NÇÈîÄé8 £E !>4 ;XÚ'Á%(ìÛ±ÎúB|¤íã,¿ƒiÚãUÔd }-¿CôÕd‡1 0ƒ#LJéÉ~I€è¦ýx\¿KÛŸØIrRÐþ®ñðχ׿6÷w›?Kù4v?¼Úçé#©óêõyõñ{{ßcuPçêȉTÜ(Ævn2SV*hļ”úTÁ'ÖL³ôC<åBÖ,â4A _VLr•Ð"ùÚèoã¥MA÷E&µ¨3¾±I¡4“…Ϧ©f)J5£Õ¯ŸèzžîáàÃ`˜-Øå‹“»í«…ÍíëË_7ÿXGõ†L]ㄹnñÁPP 7©6^•V;ß_Ct…l‰ïO²@Θ"€‰¦‰ó¿c:\È™ðý‰¤DúƱ˜î ×à­é’:_‚äAMËÌ¢azjÓA{B0‡n±Ö_?0öï+ª;Úþ›8(¡"~Øì>lóÝËJ/Õjô}œ7}Ï:vMû»"®c;¥Öƒ‹c‚®rÛ°™; :â< ›Èižw§ƒhg½·Gqµp¨õ±=v~ivÒ{6Gå|hïÿèQGM©Km.à°ç¦.µ—¢æ )ϼ´CŽ®ú=á iåù¢ó[«ïþ­ò—}{T»Ùh°ðü¤JÛ·úË¿?ìI¨®Å.C>ö©ó™ÙUúÌ•2… -3Yoê{®”sKoƒqEÎ6ÂåYü Z´ y€gQG¥H›Žhä™#€¯3 ž» 2±¿kÁ']sÒÏOó…¢3ú È&öRÓ¹½0ju< Ö°íb<Ä;‚Þ½}[µÏ'Ü¥û/þx|SÌñüSf5²¿YUåM†5~ ùJp–x¬OÍg¥{…µD´sRãó`±Ä:ç¶!«t%•U‰¯à †°4Þ[{²£Þ%ñ^Ã{ÉPm° C÷ý³R—ƒ›‹ƒ›wC¸ƒE2}Û`ÒõŒ÷?\]:S³ê•`º @ü„T¹…È jó6“EÖ΅ד!´ãƒÎ¢4˜à³( ’L˜Ô¥Ã ~¨ UÝ2-n'šî¥O³¤ºe ÖÔàõnj» E¨(eãô,œè¨Ü8ÂÕgŠFó§ ¥îDE.oßn‚ȧ›NJ©(½ñqj§ù÷ÅÙëœåãNlW4$“ík´¸‡ós¢ËwÁ5š‘1.° Kc5wñ¤œw ³Zr(n#6:®nëéB£-RTz+ˆˆãé½™BQ)YLÅÚò.èÝi :Ç‘ ¦Å´b/~ÙÝýÎPÁzÀç,ÈAríQb§·ËÊâ +ýbEiJ:ÄâÈ.LZä«vgŽ´ÕƒZØ”9K‘®ÇZV±±EUýêžÖòGœºßFÿóùñíénó¸~xºÒ•;"?~éç݈N~ºF `ö´Ÿ§ f kiŸÚI±ÇÓÁ˜,æXðÍbñý‹G"kÅúØì¬suXÜâ sw,V9;9Çâ¨){:pú³fd–Ÿ*Öç/gõm‹!—”Ìœñó”¸;ã ;ÆôIéŽM憣ÄbKJjV<ÿ!ü/]Ÿ=Ñ%KÇ&Ù9‡¡×„Õ,Í.<ñ y6,³\T ’€á‘`Z¤Ùõ{O¡n±o‹û(Ý£Ž(æDçJT”Õ(œÓAÇÚÍMƒê%Ä™‰Ÿ tÔ%žò7ÁVGC× èËÝLJÍz{÷aëdà´ñÂ×W²ÙÈׯ·\W'}ÕâCµ¾ïXb²kMÿú²¾Ùjüþþåß·ëÕF?#nª^]»uÚ!ÓÓ*¢DÈ ¶4~2VOz5>èͬ¦'|žÁѸÈ›³šhÒÕé‘àZ˜M‡CP7ÝÂÂf-v7›qŸíy²n§)U”P.x¶Y»¢Fÿ˜ð-Ÿþê€ú=u‚&tÃŒFú}—Ô¬c2èñNek¢ÿ¨©$=£®H aË®€BíN›±È®4â×¼±[= ä¥dì]¶^‚àÓ<6G‰4» ƒ×ÿ3¼4«»Ûì15v«šrN‚ÏÔ®¡-¥3E1Lwú\èªÚ„èo‚^€Lr½9$ÞŽvíiýòçÝëG½ƒ7›»—ׇ¯úéïkF*ùϱk®§ðSýdð¾:¥?Yd-SFwKIʨ R­ÿ¤2÷#5I±"Ç`^ýSF‡låYÊH£õ³œQ‚sòtÑ[î–srNt‡í'+€J ÃæŸ4Ë/´ Šã"pt%A±£|Pl &=²£øšÅ!ƨxqŒú{d*gÉ X‚üÔÔe`@×xC5EŧQ¢4M¹Uiþ:Í»P’â¼ð,:4§G¯Ò½ 8·,Ô¿ø±ØºSÍöõåmóú¦ŠŸ4iÑ_‹¼ Ÿ*X i+j„Ä?©úW$¦f`­úg‹ò»%Ff&»¬ 2‰4?eÒ¬ã±E²¾Š=0{" ®¬@ûbš¸(Ò'ˆ±žŸžÞ¾?¼þÕ#Æj¢¢ÞÖ’ŒM,Ù]gùÂÌ ú_¶ Õ•×bî 
l»³2g[;ÿ¨ÅÖåÝî×Oç»O÷û{Ÿ¼–¦ïº…^áH==d ÷׋wV^“ÔHOL’b„Oô>=O ŸÅ2ÈÉXÒâPO¬1Š.lz"aD¿|¨°zhyÒ ‰¥*¢ò^³8dz“Õ ÌX 1€oQÆ –˨_#~âÄZk§mãç ÖÐBÃN˽›¶ÂB¯Dy×-ì !è˜ä˜lW‹–£žf%4²¤ÐR;ùš€™**ŠKä3¿šIcH¸TSÅØG<ÆïVÁñרf«Ôì7mÂ_üöí³±‚íöÉìEèÐF>ñïÛÛñ a]÷ ÒLc5œÄÅ«(¡zI«wŒQ*\-ú‚×Þ!+óx´ªK̈˜"âæ—Åœ ¼~´ÞâòeÀ…q–á~úÇïŸÿ·­ ZÙgz DÀYÞ"Œ)^“Ö°ëMÍA\Ú0ûÞ1$(û\¶0?JfÄõf6ìUEqk ’V¹Þ´R!Z5±|L’žNê¿xhÔþz}ûwë}¹¸Ÿ~‡j¾~²ŸF§ãçü-Ù7ŽizZÕmŸmU>(rrÎy×J鸖LÇå®jYAj¦:­íŠÏ–p”ñlyàìUkÙ­@8ê µ.xYº²ˆ>ñs+¨'Ù»w§TáϲÈ=%õŒ5%M­bs`_.¼QkƼ±JT<|û1Å«+Ù=Ó™¤F ¡è·öƒkjGxÍô·–8¬þú Åà&‚IÞéæ”ù]ßê1÷UëÝåîVø½g[52¿ëÖHÙo=vp× ÓŽïH؇Ép˜›ñKM;Ña¤b;ÈÚ†¹ þ(²!?nò^oÐØäÀž8YäÀ(¸ GDÆu¡ªª@ÄŸN.6rÈ:z×-ú Oõ=¼@ØXmZrý©´†¹¤™3I¹0âC(>ýøC{Þá?Šf$’aSìƒäh‘K>ßgu§jyœü’à×—ê>¤r€¾,É¿rí–”VáÌ¡««É£fêa¥„ù•wÞJ)¿ð{n»Š]òÛ?–J—/bÊó÷Íd8(•– ÕOS?Ä3«k¥Enž!ñŒr{Ÿ£¼ÒámçÜä~õÑ.–oß_6ìîw»_,#üñÇǶ|:ÊiÇfM§Ó¢§VÏj†òH8†ß¨[#“êà5§ö5® Î}™³¸“G± Ê©mÏ‚Û<9E ´ìG·ê¤Ì¿-®Å“WÚÃ[Ü ¬ásøú)´B[œ¢2Õ–[²#Uƒ«$¶†@ËQîj6.³–)é°*³.¶—€Înœ°öxZ1!a´$¾yêY–µ!¥¤wZÖ_.ˮʞ)óø !sE§\X*¥è©>“ë”Ï7ˆ‹ôZgj ®Ë­F+Š«a£±–è}7´/¹¡=fz4F+ ŽÊºbpÅ{ós³Ã”¹¡‚*Jƒ(Îɡ籈ZX­’©–ºÊ ÊQ]¥·F–&£ÌéM‡ì¯/¿õq¸…°j̾§ñšö°}„iô¸ÁSQ ÉÖlM¥¢ÔâTX~yfßDfrÒl¥É¹èZš­8aÒµ(Åâ¸NÂ4˜¦7"joÏOŒT)¥ìªÔ‡OC‚Q’‡OÓ#Ç‘¹Ub×äV†×ùÿØ»–æ8räüWsñ‰%$€|Ñ:ùâ°ŽX;|qllð¥]Έ¢B”fg~˜ÿ€™ÝMvu7º ( šÚÏ¡i‰Õ…||ù@>bv&Ö˜j]|+7Pª„®z\€îàfd’OJi-eóÀXŠÅÓÐÃà@µ`c‘’A"N´ËŽ%­hº¹òì€ðÜŒ ³F"ª™ —ªÞjëë»d z*yê.Å¢ø yR‚äoF·Ôê¤äj?DÏl¬SÚ¢¿±Ž8D ={C¾«ãé—O+WñåƒËÛǺÁuQëgÂö½>íQÀPL¸nŠy€"ÄÇ=…Ik]έQ©‡¢šh‘†r=5_œ‹ó‡BùÕª9SÀ¥µ­¾y¨4•N'6@(¯SéoÑIœ˜<º:9Sv[Âèh%ÆØ^ËkÚXǸÈk ãt޼a®²´²‹‡J+.ísÔi¶¥-9qŠmšÑ6:XÑÌ}{©`VQª˜–~á°N¢Ši0kÇ‚¢Z´M‹µÍ¥ŽJ´M6EÓ'Ùä³û¶G+Ñ6Ò5m ©b¦tiãö6É—MûŒ\l×6ª¸#D79OÅØÅã´¶åÇëŽNVtEj‘¦˜˜Ô©E! M\CQŽ‘–Vyi^»Å\±À)0{ÓlcíIÛ¦(¹«íÑÉ 8qZóÅUlTm¦ç$Lì‰É7/'áâ’ ªD)@IO§Õ³Ö㣕0ý€B•O’v2´L]=ÂQ˜Sþd¦·'vŠa’uHK'W]$ëæâ”q³sc~ÝåëÁŠJ¾¿$o%_›²C%ÏsJ-õ=•œ}‰’“w¡XÉ@~IŽ.Ðñx%.P€%¦i…±ãùöãõóóΊ‘»û×ß>ÖÃÆãX€©!Ìi‰(è@¥vôÖiz¢§xŠÐÓé$z†­§¢|2Ew~5 _>ãzt×| y¿ Ÿ ~’Áâ©à*r]¥¨%8Š0Z¢öKjr„8gê´}¯CfmÎØ•^þš° ÞaAÈ3jœÔ͘MV² ‡€¸Óc²óÛ§ÇÏ×_îwyˆ^|sçw"/¬±‰útèð¬øÇáåòþ0ªñT¢¨ÿùë§C5…}5õä2;PÔ9=Ý0ìêG;êŸï¿^ýÛ¿ÿÓEVwê»ßém·óªvþoÁºu·w>|¸»¹ù`Šýøt·UwñþâùÛm’·«7/õ§Ïß¾^ý¸ÈwÿrýñÛýŸÖC‹ÐÉÅÇûkÓ;4°ÕÀÎ~Rd½xÿ~Õ0þ-QäýËʦ.â¼)›µÇ7Ú$boŸøûünó‡*?.. 
ÿ:oý"™ÿBú­Ù'SOŸNÕœ#*ºVUž¶ ù¥Š¯4éäÓiPpñÜ>ž#$Žë*Ÿ=Ÿ‡Áœôé,š&êšs%>1+`Åpp” ÌLMì$ZRa°#‚7_δñžë.s˜`>É 0•æŒOÕ¨!ÕHµzÔRZº`aî€éÆ~ºàÔOG»ÄÙº¼íÉJjL¹@óß  ø:°mNFC$v­‘Ž%ɯmm¥nÑ¢œ¿4g-xʽ§èKL³dгorÃa¯QÌÛÚÿ(;í¨^ÚØÎpMµžzpM%h@y¬¢Oº¶0Ä¢>¥4R¸˜‰§=ç£,C@i™“µŠ6fTݲã`Á³}Û=W¡p/#=IZpº±Þjï`ÚÈÆ\™îˆŒì*%±ºˆ«2²S‚P »ûe ¤íáPyŸ|(Îkd±P‹Yzu¾fze£ö,Šà‡ˆÕŸ={0^ÙU—D 'Т¯&B£ ‹Ü ˜  žùV`-¸ï>ûøñrý·u·ói]`¤pVkˆˆ«èõÇ‚Ø-LzÅž¨`ÐRp.Lv’˜Pò‘:¨JôÓk;:•¶PB›Î…Å»¼ŒÎëY»¦]ЧŒTß./,ªбÍÕ܇ÙÙL,QæY“™[xáãŒïÓ`×óN/‡ˆEwz:­Î>[à?¢Eu¶·†Ñ⻪sQZoÖ<„¤¡T½Æ¦cê¼k0N‹Ë:ð ¡ ¨Ù ¯ë•ôiêw•¸¬ÆÆaúÉ"×’.-ââ7Ø6•ÉÀ}|²ã­ÐøËýí“í·ÍZ¥Ë¯O?ß×µlYèâNÜÚtáÈ&ÓDÀlb&Ì1rÀ®æX‹†WDTlŽ;ßèÍævIMάQ ÊàÅ 4Ï –Ò‘Š!ȺÜôõŠO3 Jk²Ý£“õgÓ€”nsºâfßæ\­ ™òõYWtýllJ<F)š¤¡Ói@vÙ:«-Á:]µÚ·¯&oÕKú¶ê]gHâãZ$!}Õ¥p$‰]{-µ,²bñÔpÓ`æ29-@ ®ÉK›í„«xh¶2›Ï ~êBÕkÏFy“ £ÄïËŸ.‰oo>P¼¹ÕËŸ~úp÷ü:>Üú`¤é=‹aîKŒºMˆµy&C¡EÔ3ÜGÌ^cZé)gA¦²‹ ‹ÿ¹˜òß<ûħµÕ*¼ì.ÌpC"»ÔvÍÃïÅ:èt  Ó“·˜O7‘“MÏú £“8Qâ ÄóΔçñ3²³}| ·öÀª¬V7OÍQÒ±htÐÐòÖ«Œt’!˜Y,./žŽï̞ѷ‡t%æ(žë¤+-ý¢&s²üð)#³èaÍUR( ñ,Y7)º@/¡õ: Æ«8ÊX1g˹6ÆYš±ÆYŸa¬É2¹c~¨˜—®-7&A»©¾7å¨x¤¡.4‰‡ôŸ€Liòv`u‹Ö‘[tØFóîù7CÌǫׯž¯/M=í{MíŽÅOu޽„ ‘è96™ùÍP¨Úùøæä@¨îÔéHÄn~¡IOÚHé©À/L¥º“–[1»*sK²†ÛÞšÈ~VîèXš wt7ܪ|˜Ã2Ó”:Kà©•èÏ_eTŒô¡ä(·}ªn²æH½¸ñµæým7ô¶Ÿe)Œìi3òB%Ð –¥«®Ð#èœÅÒfØ B\f܉P­ÊO5±¿´ã >t ã#H6öQ±KìŸ ÿkE' ŠN4)½÷¸8ÄÌ FHæ´õTβ4ÎLò!´Óáõ„ jïµqÝç¨ ˆMs–J½Â*ßüù¼zW«Kp'8áY 4úœ*0(p”òÅË`ú êõsÎi@5‹ÀÍ«˜Å1‡|å¾ÇޏÁÁC^Õç§‘©&ŒŸŸÒ\×j¨óŠ»¿ÐÈ N«V»¿ÏI”ô›Œ¾À2åí1Í…%Sõ«ød/³òüôñ~;pÿƒËO·?W®ûÒùÄŸ6Q²ñ)+ãôæL‚w•)Ù™›•ƒUÈ–­GQ•é…4:rÊæ‰÷¹=#òt*[(H7.i–9½l¢c '6¿\üš“b~},ˆ}qbNÐúàÞ;>YQð뇡â|ýÝL K7¬_ØÙ0í°ˆ)SÒ’¢DQܵ•‹š Еïþiô£ü ˆ°‰Ÿañ}0I/2Ûœ’kr_ý1†vá€ç˜|òê}|ؼ?c¤Güé—Çœ+óªý_iäC‚CWíD.ðJ'ÝHŒÜÐv³zDäþ«dI\š„ ο¬÷ѨwùÅܨç¯_êÊô|*0šMêÇ`“®¥Å£Š]¶€î§›‡˜ø­1p‰+a¸<í!"ä‡omiÑÃEL¯ ŽË^¬ßJ›ÒáâåÓi†æξ?5Žžy L‘Ó‘Ö?4nÚëOõšõ¬˜ŠŠgˆçr6»žÓÎÈžçÕîí­HìÇî> ¨‹Ý¿~§GðÕazã·Ÿ ÉÕyŽ!4É»*-àVò ¦#²èÕNvð±Â‚LÒ#ýÓõ3ꚢ÷*\ Ôä¡èf©Bõlï‘Á·Ïþ×RÇ’ep%S°1­ƒ˜,OP8?ûõh%®%ÓÀ1jÄOATb“é0Þ/=B(Ñ‘3I}ã€Á#Ãÿ¸ãôë4¥ñ 6#«ÝyÅ^k𞕈®ÒJTáÈ.xÇõS««¿ð¤)!Vi 9­½ŽQƒ Z;3­|÷ Ìl  •èÑ4 aÌöÿlIÖ%4â-‚P­¼ÍÙmºÄT¿D§$«’JU—a¾)À1½m˜‹Pç9 Ãfó£@ý¬™‡ê5…êØ}hØO}Mr ÄTËÔw²Æ@=Rv“å–`]Ô×^8Öàw—9 Ì B┚˜êŠýLÌœ{(à+CX§kNó•s}£“º™Råܨ–w3 嚺d0ïïe ía©ôm¶Ô¢‘ è¥bÆt?Ø_”Å~ñœ£i 
drŽI×4FgX†"ô7´GgÛ—©'Áð6{tv^j¼G‡ÞlÎÎKŒZ‚b±Í ËW^DŸ…C'QŽ®àå¾iy(› £åiù¾´£LŽ^Z/a€ûGhX—¶7Ê9v%­¨º…¼úóõÝãç»sÖFß–tœ7‰Ñœ­˜2{Í þZº­ÞÔ-¹] ÓM´L¤q:F>;òvt´¢£nP1{V—tÄMþºt3f¢b8i›ø€õ(øùŽŽ;x‹¾þM!ñkA1ÅxsÃ|õÅ_~mÿ ÷·ª½{÷ç¼ÀβÐöþ|eÚ¼YbY&M¤€tþ¢ÏõÅt]æ‡;AbŒÞ5õÒ+‡9wF.¨z¯äºT{®éÒ3›1M)š–À: ôì³û+FDè•ÍA‡\—‰LMý«*á i¤O Ið±ÕSÒµ¥$b‘Ì>4VwnõüÛÐAL9ݶÙ#–˜En¡à€~weFO –‘öžQ:ïÙÜp6(û{(4ñQ_R2 qrî@¢hvÞ–d÷ÿØ»–Þ8Ž$ýWˆÙ3K‘/sXìa±—Á`‹= æ@‘”‡IöŠÒØþ÷Ùl«KdVWfeVÓ»°_°eª»*"ã‹GF|A*IZ:) ?BòúGì>̗ż€¼BTÞ+žÚüÞxP‘'sÌØ… qËí8„D)o—³Ûe‹uƒ³†ê Ý£¦)ïh¤õQ–pœY=kè‘Ê»cOR`é¹pƒ™]Κ,=E7•Øeé vÛõ$-¼¬5fMIà×øô" ]™xð6ÈÕzØ¢úH!¦> &Ø£×Ñ¦è¿ w]ü²Z »œf«¥ÛëŸ~füøki X>6·ÖÆ6­ËÉ‘<¶®‡¤qòàÏ0^pÏZ¦Iè‚@ËRgEŠ¡Ëƒ1nˆkÙ4¨Eä¾e;k’»&MS0&Œ5õᤫŠ”Y3© ph¹˜jž'494Éï± Ypÿúq*•Ó„zƒ\d#§šÈU0êÖ}:5H°¨HU-¸K‘‘y—2‚©Lú ì4;1Bh^dæ¶–ÅŸIƒvA«=넬ã2õ˜Z(„}Ö˜=ظê€' .1ª*ÌF^‡Xƒ"êWé *XzªQÖ#,ˆ$}N×hCe^Ðöæžêþ0qä)­«!Úú)I¥fÌ™¤†ËÜ©ÍC@•.’ÿˆK4õAiÇ¿²Ó®‹Ü±«í´šaLj´¬[æ Ч۔ö×m$(ê–Áÿ/»¿°Žõ‹1 __XŠU‹,Ô·_Á€÷™žÌm)‚Ÿž¼ùñáøôŸ?}yÌ_z{sýöËÇ»÷÷ìF(&Ë’²h=~ÐtcÃ’ÇŒ#†•ÎHk¤e䘪F’øéŽàœ,²wÌ3h$)¯~Ejò£QüUûüh¤ øÑbo¼+Ês%Yå:{'ÃTÇu„Š¡¯/tÊJhy-f|aëù‡ß}ñê×¾úïýËŸþãOÿþÝÕ_oýýíê¯ÿyø¾«Ñ¿]}ïjüîêé¦c?Þ}¼ùôË_þüo‡>&?ŠŸ¸úéÓÃçûƒ.¿<~—_óãý¡%ñÊ¿õ‡Oß]ù¾½úãÕÝŠ?þàš~x¼º}ÿãá?ß E·?ãØqyäaËdsò(j÷ŽÃÊöJʳ„¨©bÖF“moüÍ;–^PΟ^­¢½’’NI‹Ùæ”ó³Ï8vù}ûÛ¢N™ øC5—¼?qRîèI?œ˜½Q(Kÿå…pVŸ²RZ[Xí'b\{¦»'¼@æÝýïøåÃËh½S½0°!Í™}ß>ïÊDnçúìûöåŒLV¡Š ºÐ-mºËl»Ñ-6 ›ÇÕRn ­‚Ûqösp›½Y ¸)OfšBCËà½=g†¬Ô[Š1o%é÷Jµ[ý(ñ$ž¯S…âòT]Ua1 ž½Z•WbCjR‡äßßå_Hww/æïUô/„.¢xѵ~ ÿ'ÖúÚñï¾{w+¯¾Ë¯ôDó~:Fç½j5/ß|Y ˬž’ –‹Ÿ¬Ã$"—ØOYî©;<ó›Û÷9u{刿‘½î[¾á8lI­ûš³ޏ‹Ôå(yÓtœær†çûºiI˜„—KÂøåÒj8´½ëÚEËÓÂ4ÄUßÈPÌØfo³¾$ ÛC´K’B91ÆK\ñ4´ß—¹jfÜÍ$6Ï6ažÊL ;À«§¶÷¥ðptÿöác6ŒoGìß}ºyó4ûÝ×_þîñæÚï_ÿñûë3»èòÂx|3£õ¥…iKÓUð§BMÍØ;^–ãÀY' jTÓÓe¸ŽÍ)Æ"6Ÿ$7›ÝAÛ°yåØÔ˜³¦ý»·,p›]Ql ±<©4t*Òª–™ê‘yÉ1¼ìWÍš°Üqþj‰\Y7>RU[ÿOWYÒ+4®*d’îÊdv®;T&aŠ1±ìʽþþáÃÃçÃWݯwm†ìåÈ Ì¹mg{˜lèlqßd  —sìÀ“´ÂAHPuÁ%²—Ù{Õø¦ÉåØ×à0ªvYÝséüƒgºø²Ý4²L)¨Ç³˜gøÝ-0¿†[@O†­¯7†v¹°2”]û\w‡’ñO÷oÿîßütÅwüŽÇ7þƒ?ýðéׇ+À‡;?jŸùFØM7TÙ /kâà»F0uÓ`Iæ-dj-Jô oÜ•”å„¡‚DNó6ÅÕ ƒ¢wµÎD5èNJ=ÐäRRˆ¦]eIŠûßI›ÑŸÝIÙÄ™š6­õ-Žåª# sÁPua v,j™Ü!w¢ÁR#òÃ=*oå˜o ±ÏÔ'ãë®Ú¤g”70WM6 ¬2”h Ž@3©AOš‚%hš×¼Ö]ºŠ“š ÷ï4bµrÀ$yŒ|aEÂPÀ]“|í´|*<£‘uÔý#¶Q?PH¢B÷Ø‘TŽÅ¦„V1ÝÊèöuÞÚó‹—×=žÞ¬&³ô§ 
Ƈ4«Ö˜+Ô¶fÌþ̸sf™ÅXhjϯL¦.¾K2?€•ßèHãÐÛvO¯0Ò8ÿîYFjD#~¤€{°%³}H•ÔŒù’Ê_»ÁEkÁ%æVX"©€#®‹;x-’+ž^­]ü±"fržt‰ÈpOã[ֽƽÑÅåXà•‰&È–àcÒu——{o“­¡%~6&Õo”­yˆùÐt´J„©©… yºåR™èĆ.0à q’&ÓÀ¸Sý]_B†És@´eîÄiÉŽ †Ï ð§«2¨‘¤)LŠ”¨ÈXao s)¾\kâÞs ŽÝ¿ßÏ^°o¯p?ë§LYØ  ]„6ÔÉý•º#%«”Ü’ÕC”x1Xlj¥ríìÍ*ñ0´á‹% )vዪî/Z¨©äW6NP&,K®Ñ)„ DJåqòf®»kuo/rËoo t¯Ö˜–íòP3Ð" ÔŒY»<Ô9ˆË7\Aº*M)ñ–šv÷™·r§ÑKî´Ì§öœ;%w(¦UXËÔ_Æk°–:¦¡6ƒLª~ÛeWî6Ñ ]ÿóÃíOŸÛŒ×tWëÍå²vëÍ\%)P÷"Þ òe¶ùÈxz¼Þ)Ц´pq’ÒåýLV¬öpÎ!™^Úó2…ݯû\Ê…ý¼=Q„‹ìçÕ*– :öón„=õËÏöÄŒ‰¬d *–ðx™ b>D¿¿¾½ÿÔøoŬøA `”1ÔL+.É­ —‚¯ð˜¥tÕ9“ÎpÉ2¦¼ük‘w±K2Õ}·i—îÔž£Ü׿‚ªŸzºt{ûN=UU~Ú¨_!ÛÕˆÍdõ«ÿ> ²_y°tGZ<ÇDÈ5q=Ô*6XÏ$9Èà)dª„ [¼Ñ.o¨(»Z|y5؉ í뿵Ù-’ž†+ ¸‰ù<3|@l¥l©–ÒHû3ÏN*näÝþV ,eÆô¯dªíâmŸ«žá0_°BÑZ;®©®7M[æÍcš 64ðùS1£õÕ'6HoœÉê”CäTå2ƒ­öJ ¥5s3I ±Xõ B<#¿´ÅR¤jŠaʳDªZ~~8WOÔtoîþìÇпåä®o¾|þ{žº=|}›¡&&ÝÕP‰·0\ KnFÍcÔ[— ·Im\õЇe‚Æ •°¶8 °LD2“ÐúaîƒSMrq µ½ë‡YÎX *ñW&÷»ºÐÏ+CéˆJ{^–ݶì@o‡ˆ=uziŒº:‰$Ü—±òI¤ß6Ü}KÓ4o½kÀÙØz_óç6˜•MSÚ‡u‰¬ºR¸ô)ä¾Û燦]¨ã`ØO‹¹X©æÇâ: ‡·gÃþØþÐØvùZ–~›É&ÚÿÇ t£“¢ ­­"!Ó¡¬ï¡îsÁcQ¤!$êQ§Žï‡‰ˆºc vùZ‘[ŠKïñ©_ðÓõÝý»›/ïÛîj,Å3L– ¸'ÒUÚÒð"êi…4¤ZT–Ó0TÍ'@ESE‹5£è¨æ©ìbCËI&#@ÕŸÌj¨2ôÅA~¨öïÁfd.ô`ã1o)\èÁæ¡sí 5Xên²K[a`Is÷¨Ñp‡+ðènÍS» »>Þ|p¸M„x}{óñæÓ/*Ë’N̺fM,ꆪžã„‹Ê“×F­Î8àäI3mÔ §¸'ZCÎ\a( §œD18yb,¥)5Fê _ŒRÚx%¥pæÎ”ÌʲPСME„UÀ)¡7+-~Ay0FO‡:”çv€KÈWU‘‘/ýÏ6%lJüQxYÚna;8ãóG„M# œ\–lS}uMBÃ`t )iEŽ›+° ™ú£8æpÇÜôêÒrA Ž™®(ë2½„ûgñˆO³“ßâ¦k 0/,‘'š¼?’«É;šu%ïeÛ_T ‘cvªÂØ™×´! 
ün˜Ï3PÕ}H%ù1hBÄÅ»hp›ÄžÄß?7@pt3LL!îs}† ¯RÈ/€›Ú€›=p„šò+F]«¸ŒK«6g2Ûùn=jŽ1êa@ˆµ ¶.€Ú±°ð;¿±°Òsú»—k Çgâ^* mÇ·vÓï@‹Â½}ÿàzôl¤-’öÜ+n—zŽ×Y´öY ,¶w*¬ j\@§”¢çx5×b)É*.«A±}è«P†s˜­­9ÁÕÉ„]ÀlqÿK±c'Û3`ŽS®å¬à²çcë¸UåWzâ-M u€°¤Ï˜(š¥}bÚ!º81é¾ Ô•¾í᣸ï@hîÙô„5lþ:¼"… ,ähe‘Pö “ŸKl\ÿNB„kú4eŸÁOnq´a&!mšäI| -ëŒÇØ%Eؽ ÌKÝ®'3át‘!R©XÆ0<ì-ACY£ê§Ö3ÚØaì™d’¥\¸²t5‘Ò‹Lcªêì¨~0pDL±¨XϦ4u•ÌëÊÝFia.ˆ/oþn2©ÚüG„²KªÍŒõ¢]·`ïà4;Ù§N¥oUË2…Hη‘ðȺP½ÄÂ×Ò4ø1žû5ä{<¼Aa›t#[BՖ€›ãÕ–ï:µ:(F Ôyrm÷.ªˆ%P"Ïý\ÆK¤Už0%¹À‰ý 7OÙ+¤UŒ†Ä]ýlË Cæ 1hïÞ––;ÜQaU‚01–õNÏäÄÕÖ Tœ‡8 i@\•/ٙݩ7uüpÌ ýÖaŒº{O¶KÙ^V†À¤œ¯â*W‰VñTý/wWÛãÆ‘£ÿÊ ÷õF®V‘4ö8ì‡n‡]îÃbq˜häx6c1coâdK¶4Rµªº^4v$q”™VY$Ÿb‘m9¿_sÉBZ­Ê‹|h9RÉ#ÔÜ[XÙÒŽ|pÍ÷XxoaAbœÒgä» Q–e2m†ºr—.›Ü/­¤I^ËËÉv á&{2òWlàMÐGXÐà”$cðÙùƃð«ÿ¼~ܬ?¯õ®oí·lX¡u³V#ËÃ` ÅjÈÖTÚ`uguø|N^uqí½a%Çr|É…¡³YÃ#ëR…$‡âéB/óTJ±À0²i¹BÑG ™%ièMòN×m3Ýl`MW¤˘°­-ÇSåæ•)[Ê4$¹ d€í¶“ã$¬K!ÎÐÙ®Ç,jMDt&Ö– Ïêå´†Ôx¥ñú4Ž“Æé<³3úÄ ©Æë³!…9£P9{0äšîø¸¦®Û!À4³¹¿|{ÜÞ6„<Ýýô¾_Ü•×[Aî6Oâˆ`½Í4q’½ø@®¾5È!‡–„]5kp¦­ÈÂD;䪞ƒž /~/“JíÔ¦„ðCµä³&(ENàª8 =îÎɪ ¦• €¾,ñK9,bƒÔ­À^..Îɳ8ÉEÖ ´ÄYYÅ ÄÙ-óÕQœ¥•—“kœË8àK€`KÛ¼p}¾7çf”‰QœWhjÔP†W3ùÉXKØN{L¥ ‡Æ®¦Ûù¾'’¨—µQTŽÉäЦýÒ YI•¬)ïÿ…U°Ö3{_m„Ó#ví’©øOÿÛ?©òØÐWåýrs÷QüÈ•ø—+¥‹ÿãŽký‹½Ê•Ï?>~¾›Bæ“îÅÝú¯ïn•±§¶NÔ Èli›ò ×j±ŸT7a Ÿ)!9þ`Ž–ª‹¦G8 ßµ( Ê‘>6AÝŸ[ð/7÷¯äo]7™ðuÝO÷¿\½¹½ùxóôùýzƒìÊ xÃçp[€·Þ£²x×/@ḇžl•c\8=Csoekl_ÿîÿ‘г¹ú$/øþæÝF<üt>ØúkqÖ7Ÿ>¾Ý{8{õû«¿¾ý»±dâG“iXrþþ9æÝey‰ÜüÆ LTO`´}„{ dâ ‘Ý´÷‡1]KæÚýéF_ôýÍûõ敎´øôt i®O8a®¹ ëSPs¼E™WF„hBuÊ}«Œ£rì²~&½Rö0hîMœÜWz©CHÏÃXMœ­²Ý®;Éã¿_X ëÁôRL…(dúf´Ú\w<¦~’bŠHiZ²ò@Y÷ Ž£“÷º<£Ò‚€!¸4£’n3ÁãJÒâ^ÀššvÉÀÞȪÝ%ý‹‹äAqî˜Ã„.ºœ“ß~°²"cXкˆƒ—Ÿ±ÛÚÇ“*påc,§4½°³r¨5-n †Ó»©ì=§¼’v` ‡ßköoÕE¤Ig¤¥NŽJÕûsz„=*ìX|dcñA¶ÛlÁ‘M¿=8¯Í—µÞtz„ Ì¿Z¶ÁŒšç ÚÒ¬‘_ÏQœdÑš¤p–,g»p›|°²’¹KúVÑ#âµ^zs¦FoàÐ\óXË ô&›±@oèåõ–¼t°²B½aˆdƒ_¢7Pb`‹ÃÞ ÇÙ"GH\mOŠ ›&QçØ1¢‰S»_éãæÃýÝúæˆW½eàQ"þÙ…ô¦C^êpþ’¨pq´òRç‚«„FgêË@væ‡ÛŠÞ¡%m…QçÈÇóm Ž¡#˯˜íŒ¦†5óì=¹7£ÉZ¿û5º®¦r—ã´.,P6m1”“ǦH±âbÝɶsìÙ´bgʱ2cùt—¾XÌâSMå +‹Ã¢Ffcí’8¬ íÁ6ù n´o1`2»(@Å[ßRl_æRؘ—ð(ûÁ_ÿtqýã›?®ùúÿxsûô•o7›pnz»–Ú—8ð1ÈØìb¢ªØ6ÃT3ј%£•VŒ÷Fâç˜r+´E,8 ðå¡.ëW]²ð|¿š‚ùâ+ƒAÝašëÙ3ÚŒ¯¡àå8Z–›¶‚R²oñ[ѹ=}šÉ ãxvâ§Wâ ×M(tYe]°ñ‡jä37ϧÖ-!«£,î Ī’;çR@œ£R{Ü£Ù]â9 %•™9Xš»iË[ €‡6*HÜn3`†ámŠSÀ#hÕD›aO 
]‹ï¸¨iÁÏK»:¹’ymïëï·pU´ ž  ¾ùt`ËO²m¢õ²V,†(oÅÀIÊ„ýÒJ¦¯Xv-1SÍ16™)Øáç‘#&ÍT`’&±ÇH×Ñt—¾€¥÷ùX,frM£Ð|óÁ)@0,N¸Õó¹¬Z Ê-nÚv6Ú!R{$ˆ µLïéÃÍzó¼Oí'å[† ¼×˸Àí æ¬©Åâéôë‚Âsré ó¢±™8ó gÄŽ‰à¤j{/„N8/ÈÁÝOõÅ$p®ÍÌÂx˜gvŒ'0O´Ø¢ÍPêø®0BÌCWÎ+›³÷Yõ!0ئ{º(Ï]y‚_zýŽõ‡ˆ‚¥çú3+gåÕVôt,]VwØA¼úÓFñß݉_}‘²ÒIœ:/ƺ&ŸÊ¾†É@@°÷±¹¿E}b”Ì}ä(¯œõ”ì]>#Ù%io–V¥åd­ó p'Díl°¡Å”Ðv…"G{ZZ:i‚¬¥mÌš/-õÐóO^æå†ujׂΠ_OjŸ­ëÒ\.õ¨ëšÿ¾Ã»:qX½*¼æ¿ðpF瀄08 @±46õš™?­M0φN_ º(ƒÔg9·¢'ôh[|¿‚™ ߯c ÖÝ{@}R\g‘¶Yˆ´Á.Úd³áÓ™˜CéôÚIH ‹ïÛnDÐ[´Ñai²1† Ò¶4𦉼Þ=Ìê¢oh&ÜÝtÑw |/Ç™¶Oí²@uÜiQ€Í¼1„¦$VR§é…›ÈúR§Í ­g†ÃPd.ïxµ ëxCHŽ]8PÇ+§Ý =ØKoôHm%4íp\/r†dy^0¬w·,¡ ¡¯¿…’ÌF´`{1¨÷iJðÕ>ºØLµÎøo©;„¨´ŸW⼃®&#˜á£ë߯¾’UTÞ¿—Ÿ‡8óyr4¢¾ìããÃãt4•Mý«lnmU¾ßÜÖ+á|ôÙ>¢bŒ5JOixÌPÄ3R{ƒNB¢zg9æ:u¼ˆÝÍŸ ,“l ¦:–ßcìávî0®lÕ&+5¡ÉüŽûÇ{Ç•­˜MHä‹Ôóx'Bl¹{máA>Åñ¦v¦l¥ñÏj6J„‹'—êúï ½²þêË_ýï¿ÿåÏüó¾¾úÛZvÚ߯þö×I:WÿB¿úItùújûÁj—µüŸ÷7ŸÿòߘÒC²?>\ýòx÷q3)ôÓÓk]øûÍ”¸½š|Ñë+ÙÈë«»úaÊé~xuß=]­ïž4²Ì­•æ”ÛBÙ!¡!€UÖËï`TnU< Bß°ªQFDJôsrq ÜÍù8ì|6 OÏuÛ/ºS  —´$0zWÏѼ}ÄÑ£‡±‘­¼˜g†oeˆ×œC–=ØÇúrª(°>X“ Ðxª #çJßYsœv‚·²ƒò%o(çeÊš#‡äq/„Öøeÿz€ÖhåS¨ç©Û®ŽFŸöõÚû”M ¯ì·DÜõt Õüa4+Gd‰mÕ??EJžôÚ?n{ìÎTXŒ¦¿ÀA"gw¦u˜¨¨úžv¦‹FÜPÓÎ$×*)¬Ø±À=ß¼“˜0·;îd™ož4÷~%~ÞS+¶cš¬6ò˜Æºš"M+èˆÉ›ºÎ< k¦ýÀzçs°Æ¡„Öƒ5Ö¥§n¢®ù²--9et0JëFjŠ˜-ŸY±vYr„ëÇé:~‹;G‹Î®a¡ª¿§â3¡)7jGÌO˜Òž½üóåóO燗=Äô 19u•„ª˜{BcŒqLÎê0zÔ‰7}ÃA¬äH7»êù˜ƒ©˜s ™^7å8ÍKBŽ·®)±eýøû Ÿ(XšÌ‘v‰æ‹Oü¿È¥›/™ß ^¬ßoL²V+,¸ @¡30x?Gc"A‚¤3šd«ž!ßŽß ùö¤ ñnܤ k=ö¯š³+¥As|@ÁOß¿…ñ„§Ñ_˜‚ŸæwÉùÄbΑߎ5®ƒòo mgu±¸7Ç[rÊ–ža«‰+â WÅváÉ”ÿÁŠ:s¼öUŠ'.‡!r¢T.Jnòƒ)ß'¦ê,dÁà½#OeV½ç}*{Ž»™Ï¿²Ým6JÒI¦;Vûû¸0vmä{ºÿdAô-ŽÇ®¨ p¬õ»‚x›ýŽ/¦×䙕î¼ß +ñ†³ŽÇJ ›8XY‘ã‘“€^í"°d>#I¯pÄ/8K…äW[m“ÓrnÒ„íØ1/[é£MT’—×Ûñtð:̾XáoÍ»#Yš3¡ÉAM¾ÇŠ1:(ÑŽN¡^êÔ!ƒœ]ƒËú#//ç²þÐ'‘Ð~iEt  À…° ¸AÇ."6y–€£‹ˆ1Eö£z`Á~áÛ~cúߤƒÑQ{/á`P¢&¸&ƒUw–ÚƒÁöƒ”“ 8ƒ çBŸ;h9¶³þÓ—++ã@ïrìÈ.ð/^¬PÌ·Å¿øÝìžg-£OQ h^): d2çÌ·{äZßlûoåÈuð>n‹¼m>ryÁslrAD­9ÚåMØÁ…b¤‘Õ;çrþ'j-eÈ: Â]Õ~]EîGà³llXâ~¼ª£&÷ㆧzTŠ.åD $ûչ̕SOÿè/à&c<šþýæî§½ý÷õýÃúçSÀããBÀSû½¾Äór:–Ú¯=‰X"7[×’Åôì}Û.OåÈ=¹aŸæÀ±iélaü^3‚Ë+1S¤XÅînÍ1»û–ñýÄ‹:Ï`ÈÚ»ØWÖ9—«‘g1»µ_MžÝ‚Ñ¢*òáÝ!»ûá3ÙÝ=±•¨Àåù-–Õë9´e+9’®EÓé¼`SEöQiØÇ3¤søýÎm5 SËÅ–<ÂÖ 5ˆZÂâšÑ£«}·÷ò8F6wî¼% 
7©Ö²ƒ•áËzm,ïVŽwXGpk1$qçÃ[ŒŽýNàU3%6bNyœE[<·›÷Ÿßrj8×OfŒ.œ±ÖïM«Éî^ŒŠú½É9œÔÅ&ȧaÜ ì= .òÃQ^Àð™àéû .:I&R“FFðŒÀÊkÂåŠyõÅ·ÃÛRãçH¬äpKtFÈÞyOMLUÆ”PË}c î3ÁÔUQ¢ÃCU¬üh1wy#A*ßá!‘)™\Ý ¡K‡‡ È Ê®± È:Ûì ¾ÝQ^Œ‰§è˜/A(RFÁí•+·¶ÔöÄÒ«õVbº5ÃS&×ç‚ ÔfôMåè;P`òü>)BÖÁ¦íq¿²2ô0( Ýs ¦mÐØ±ßRÉFщþìm*ÆžCT”Ùäà{›]{~û¸»wü’q{:ºYHm\´ë¾õTƒ¸üÅ ºî[ÏhñÆÛöup0€ŽÄÁTÝÇÓ‘<—çnŽÂ¦Ž|Ôy2óÂQ5 ÎÚ>ÂÔ$Euê4©›¬"ɨ~›tn’ËOÌ ‚ÍÆ‹ì–:”Fæ§“>Ó¢lNn/^0óÒ­¼é ºõ6¼§õÛÍí§{%õ<ß tþ—µ wþÈ||ü´Y@×N? U˜ç[Ðù‹ §/|°]ÔF¦¿azQÂVŒm¾¨æ‚ ‘}_z°çÏ´Ÿf6|9©>¥?”ÂPÐwÈÕ0Š?õYŸ!ɽv µ.-ªúÚ–˜`‰ÏÍí™?]b›©l!DÅj!3´%p×Ãs(byv`üBʧØ5ÔøCñ{A‡C˜ù¯ò†óÉy¶¥Æ`š¹Êœ…V¤Œa.oæ!9ü@>]Rc Såˆ.mæqt]¢ºS—ìÓþé"¦;*|×t¦¢Ô˜m²í‘Ènè&À0Ø×‹±9LQਹ²Zìe Óà2‰Ò:ä8«jö$¢hR5ш+§èE‹¼ëŠÌŸ™ÇÛÍÍýÇ·FRI² ®²«iy6„¬58.W«¯›0•Š’¢0ÐzÍ,EXŒ(%Ù¦«ºö«í%uŸãü’ É!’õMF3¾,KÅlCÂ?FMùk?}sîùu@Æ®ÎѹæñR3–;§#4z›ÚtäÝ€iQ02Å¡)‡w7ë·²ó·˜`÷ä£Ô·È÷F~äñQ”!ûcû£9QýܶÌR<£fŒÄMPÍ€¡`râ &F9Pâ…S'wºœeÉ#'1¦Zàù0w2¼µ´8Y~Å¥uÈ%…Ô1/º'‡>Ÿ‹‡è³u“²lŸ®oþ*>©xyk9?ñÊy7bámö7ø¼¸³OU(Ë’uÖ£Kq†h1V×™_¡ì¼èšNŒ³`V‡V ÓM“­B%ù¡ã­NZPv1NGU¾ÚU^¸¿‘O¾üŸ§ÍÇëû‰~j‘_T5VqÄpK"&ò.Æ—ŽcËÓÇ(¼Cq°»Ï"‰tõ+„+ò¨rà6H–¦Q›6z¯Ø8Ùt4ʺ›«3$-âlpÜNê=î)ØË¨Ë8Lyi@ ¸$2j—elÜ5LD6¤mý#€Ò«¶<Ù<Úþ_¾{ı!ã²Uª ã•óÐÊnÉUNÉ־аËþ‘׎iÑrš-l n4·‘ŠÙû‰* •ÍýÈÊš´rp‚­ˆpKÛ\uŠJó›@ö´n‚•1jlIÇb³‰F÷é³HíÝÜ›e@Ìœ ë¼5Lÿ?ÅFœ0;0Ó<8K]:ã‹ï>ŠN6Ñ”Û^3*Ÿ×opÑÚ¦°ì|Åà²Ä4•°­Àúõ‚ˆm{£ž'›Ê÷]£¹äLÁ‰ôÉä‹GòÑ.©jÑüÿSwu½åÈõ¯Éþ¬e’E²Š½“‚Èî³ ò° 4¶º[[òZv÷t~}ª®Ô–,S—ä%©ÞÁ`fÐjûê²Hžú>UwÛ;ó| RÞ¤œFòyŸØÒUïÁ¶Å,!ç’;(¾ä™ð~~ t¨Šô#×%Ì>ƒt¢tó¸Ì놪¬?\Ý,ŸŠô%²}Åï¡Cä" ü®«…ò°¹[Ý|™óŸ?oaéΡá/·÷«õþãá‡VËm2w_ð¬n «,Õ•‡!àýhø¢XBW<kÒhìgo†u¤ƒýJ'ÉLø.ÄÒÞGÒiëgÕë\IiX““1…çKÆ,;šŠ K.çhIaÉͬ)£Ø¡($QÒ{úÏñó„-6<È´I„C^[&Òꢥ®«3CÛ=u¤4P4u$CÔ4Ù8Ý|Ûùj±iœom/±f F¬M6ºî§êí;ó½1>FÇÍG˜ÍiPv¼FÜ`ÛØ•¡êÚ˜d›äìaá£t¨;,›ðíÜxÖ’‘Û¯ØïbãÞŽõ›"i²zdhpèMÏ‚©ÍzÅçEø«`í‚òô¸¼[ü¼¼Ë,‘ÊR7Þ{=¡ÊRszJê)Î*c -µñØi%²¯³ÒäLzÉ}$³˜’êL²3 SÑ<Ô‘<Ûä1ƒLä„¢ gQQ]e‚ƒÞñp6ÓbñðÀÒƒ:š†ò`ÚªrÛX•ÿãÀÒùãÁ®©­=¡GôM_{j¦ èØ~öu™ ç&Sà,(*í¶ Æ^—ÒÉ@ŠqiHu±¤ÃÑbÛRdd‘P„¨êRŒì|öv|3QÞ=Þ%†åñ £k›|0ÐY3:õni ÎwH½krÒÀÑ•ïtû¸v9ùé:Ã6Lß’ ˆd¿m ˜5RmŠƒÓ¯¥8b»žŠ°®K“$m´MÖIi™e˜Ž"ú8ÌA,M2¸|°5™¢¡•|&\0¶.脺·*b¶±.o0L& 6ÚÆ}$æ:DŠÑ ≭£çŽk§{ÔD9ÏŽó~Ú…€ù¤=xµæ³8Do·‹¢$¯wçÉ·0H×e]-+NâŒàM6j•¯Þ)5,›q|—ɧ«f,cn’ 
ãs€$Ò¦jÆI»?߇Ì”ªºÆ1˜Þ"fkbe3Ž__ôÄËftÓÚ8¯²jã&#mÎm"±-¡®ö)XÓEÍjÿ¥|ÏÂÅÃê‘cû´obýšÖxXíÅ»½þ¤ç!„|0)ÂTKÁœ—¿V©2[= gdþ”ò.Ùuµù´¾:üÈõë?^Ý/ŠÙ$V#¢7†tU)ñæMÑgÀЂӵÍ;9òj×´Á:§ ê”;X9íªcÇT)Š’I§I¸$úk|‰j#éÕUÜà’_îìN°˜-ÅâݼQ~hǼDE(ª¼ŠP˜^ö g·Ó($ªJ…Ѿþ£0¨ª-ÙÒ˜êTµÔêªÊ×)2:EiH䜳é›îc\gé4!4äw–þ<,éÏ#ãˆêòZdNNF{¦3‘²6¦3+ÝiÈÇ.ºS¹­ ç¢[ö"²ozýuvKÁÈtà:nŠ L2qÐ’¥¬¦ÓYZÆrà“)Zê2âÄ¡Œ{K6f²ë›pŒ€Æ‘˜Û4eŠ(¥KP€ß*ÏWÊå{we²Œ•ueºÀH Á‚JªiZÜ™Öiñß½Ñù3<[±6=¸¯¶”KPGÏ/>óþv-vßíòýâù®¬9…M‰óL×d=ß„Úä€ïP T; %–úÆ8v/zRò!ªwøÅ}éf»Ú®Û›§Cõïv.ú7¶YedRÞ†éÛ“aÃ?\&ºx 5vÆámçtKŸpcJ¥:k ÓN7Dî#Q4ѪŒ!(‘×’Ö’çÀÚ –›‘±¶¦˜¾ÓÕjוĸ‚žñ%¤’Üž°yôðXˆ(˯I9ò:X‹®$LNˆ³ëB6ÖSg›Œ/)z±Éøš Ó¤2ãF™mK^í²:J$»Ÿm•ejö³Û(}õ¦’UÕû>㔼D&Í%â,ìíænyõUª×§\Ýmn~)KGŒZ> U¥5„z{1yÐÚùÚˆxžÀZ\b*“«3.NGÊPŇäÓhà’—’ÝP°ì­]eEÓ¾€¨y/Z§BWnº]kîW§mõð´ø™?º ¨‹™GrS²­‹X5¥Í6ðY–Ú‚R†’L 5¼yȦSΧ …õžühôæQˆ7_äѨTØ['É™‚«„,¯Ž „…îµÂZc´I–½¡ÔãÕm€MNÎgå¥BÁÔ|8»“l$kUÕ_: ‹„9°“@|Êz¦ûYô›g>mÞ<-RQ¸×?Ük&]¾Ú¡ªÝ"€ž’´øJ:[³'2Iœ°.®äƒ¼M§þ‡…¥À6@4xI›Ä?ŸiÂ>Xµ–ý°º°,«nÛ¡¬T¡ñl»+¼@Xv;¿aoä÷/~xÜ$­}ZȰ§Y>µŸ„ÑŠRY¶ÈKÒÉ£¡=ƒ×ý‰üA”È/ˆ Uˆù5MfÚÏ;åZ0¦ ãâÇ÷”2ÃÐæJÚ>èÑOËŽ¡ƒ}‘s?Ü%ó¦ÌO>üûóêæ—áde zÈyJŸo!(cŠ/…Ær &¥o‘!Hz&M9g‰}|ÞL–Ì+ÇÚ"¡”ìê5ÚÙDW¯ˆ9:ÕöHŒmºz…dœÎ¯£áW³ ÑWðÒÈ9×¥.U¨Tv#£.8‚lOo¹ýšþ)²ž4¿ðùûÊ»Í*½ê¾Z5¡-I:Rƒ²`̆:OËÊGVHÒ­–®|”ùÉ;!véŽdѨòÑà°wN“v ë®{íšò!Zº&|åÊ3æ*£ÚÎÑÌêFâûªQO±ÛßmÅñT]*d¤˜Dpù¢3("ùÏÕvõa]ˆœŒš.é äôzJªÇ­ P)é4u:I2Y>çžóh7½Š&mŽ$ÑÈéôVu˜ §‡þƒ‡•§øàá,û6&²m¤#±´é±F-\YeqêÊ’«º‰AQï"Y61V#‹^€ÂKÌêóydY~WJ;1È’B‚sûhx[S!%P=†(g­tá$ß>ºÿRRz}¿xüeùôpÇw¯Œˆ-È=XWÓÕ)ð~¼ÛÞ× á7&ª†&©³N9Ìè<°è\Ur±Ê©#¹´1IÙ1·ZÙ’¶Ñ^¦ƒW]F ÝMR³Ž.àòJFrŒF?z›Ú¦ÞeW êªì_ â{ŠBA&TÐJò#øhžØOðÓ¾Y‚·5 yÕݶ~^¬žøXÎø¸Î~à#úÇõíò×ÙAêǰû{þüéñËjà- æj/¨«Õ­”¨¶5PæÄ‘$÷ ü WrXáˆNÀ8"P»éÊØ©,y»G°qý[‚óà” ÆO‚4ü`ßÔ~˜[БԾžkþœïP XI6m2ýi!ï¹^¬o–×å×yÞ~“á¦,H x §ï AJø-Hò’fe8]ìJ[ÿ:•òyqwÍÿʺI¹—uoï6ŸgïoO‹í—õÍÁÌÓs‡š-PuÞqÖ 5¢†GH'm¹¼ÑRfC?Îøƒõvq3¼ì+SŠww—Ïz¿¸Û.Ï<á³õ³ ûýióþÅŽûèMzI@LªïšæŠÌ¾t䌴_8ÄrùG+ûÝÃãæ†}Šù³?{§é$‹H*×wluîÐê¾IÞAŒš|$ýÀ!ƒBÑ5é·ùï_×oað zÌ¢LµQ¼ÙÜ?,—ï¾ã%X>½ûÓ_þ}6Ö#^m¿ð^Þš1¯¾üŸyøøþ~ùô|÷÷/öayÿñçÕãÇÛõjÅæÏýæöø\3.;ŸmŸoäü¼ûnÿJ?=0¡d¿ LÁkÂ$09ŒQB­,˜¿NÈú/ L¨Mo`b1ú(0Éz¡_¬™õ‡G,¦ 㑸ßWÃû´é)äø0rÞ<üU¬óU¨òæÙƒ¦ÅÚ­Sä _äjäЙȡçÞ£7Ê%ÂåanÿPÒ¢!M5-,8ä¥|©“2¶³g'RÔ6âÚÉ6w‘¹ 
Aãbœ\½ä&nä-âê~±^|àÿ¿´§~àkËñ+ô1ìOâ$ôéðfÇÐe'CW‡ë{àݤ#€3ÎêjÜ3¸çØó³˜Æ=¡rIâ¸ø,ÞÃÊ2Ï)l½4ðYMݑϠ§(ò9öwµƒxÐøv€§© ‹ýõÈêë+C˜ö7j‰þgCþýûSô+tõ.ó†G(,ÍT;™‘Ãð ;ãÓIf&“Ùy4¸z²ñɱ˜„|Ï& €MT´öxiY%¯ÅÖœ‡ ”ëŽO,Få"øä›…h\¢+¤©eŒû854ˆ¼\ùW:ÜïwC`Ck*üî#l!ÐUØ¢çì¹Jm¾ZÔ6<œ"¶(ÁÐsÏ(иWðUZõãrq÷ôñM]…|ßããæq8R|¸~åC&‰£»åíY9ZeÑLÅèa+¦PX+ÍšÞ§)#+]&ñ‰C`d槃æS0Ƴ?jQÖ˜£4¨}ØëeèhPgí^ú@PЩEÌ:fIòFš×éQ­æFã\Í!¼c@M©cLVJ@T=œ¹‰…[$?|­èZÃìëÏþ÷_øóÿüïfB¾gûë°ÀÙ?óûÀ›òn¶û`¾×ûÿ³^<~ùá¿þmÀK>XO›ÙçÇÕÓrØ™çí;y÷õr0}f0¼›ñ‰¼™ýË쟫èaÃû¶ÚÎnî6[I˜FWÀ¶;DÚºéÁ0fB\…SÈ «÷2mv\^‹6"2˜Zã8vðÂu”4áheYqy ÞBe³±A¦—J½©¯Ú¶ýÄRë%éŠäª÷Ígï›B‚óJ'÷ ÉŸÜ7LÔú>,-kã¤J% S´q4…é .è<sê" ¾AÆX§u XžÕm´Ò0»£Ü`CÍpÄÁ>P…äZL(̵LüÚ#C›÷­<–9ñkÏ*‡ãHha¬œ/Ь™0¡Ð{ÖyÄj$rùH$¡’HäýdFÈê-,‡ Ç` ¥v-‡tob <ŠCJKYo.×õ-Ò2ÙL¯§I_K•i—ìo~•6õi•ì/E£ Ôé×ÎTõ;;]ì\#«=ðÎ^0ñÈ€ö-NøæÓúêð#ׯÿÈ*hO(ur¾5ûTuç;ó{O·n4ÌüÚ±³ÍÛOJéM+mÆŒ~Çn¼QÔ äŽùªÖû`Ö´¬ý0¤T­Ö!JÔt´²<]ë¥5ß«@× )=ÿE &I#qoLb9ú¨®õd€,]¢ˆÊ(ÚµJàë½|õp·àO¾þÍvù$K·qHªU¹…ß MHõÐTøõ£…ÚŒNˆÊ€(o'@öÃQUãàã1Ÿ%ñÉïhaOk<¿.+œÀJöЗ€›¶ œPÛîà„ž¢“·Âóê/Q¯àŒþyÀÅóÓGþhu3ìóÁ<>óùK~ŽÀÑÂú›Ö¹Áïs„`¬±2aÈÞ‡)¨£&YFн Ô^Õ‡±CòkŒ&JCOpdSÐóµ[ýM8ô°´LôÑ2W×– &±®³ŒÓ¦ð¬¡tÖ‡±m~[€r¬/Ð$‹{A-&5†ÑÚGÙÔ^–·kÎX9%%Al>| ºT£3øøöîÛ1š˜Ê`YKŸ †y|¢y*üc؉îì=“ЉñêJýé’ï:‚y¶Jpb´:ï»Æ¬Rêºójº;­£],Œ÷|Iu¸hLH›îº3•o®SÝPwþËŽ],ã[ôÐÿ®Ñ“ëyó1TéÇ}÷X)!ŸT9Iµõ/å-2È[Z´ÿs³yx£)¥}94o¿“ß³˜IIÕjyûò™V±”<{7Þ³^LV6ÓJÑùhaåÑj~'o;[íÛÌo–«OËÛ7u•¬C5{¬{Ä~iû§¬¶³õæóìnóyù8{ú¸XÏ^2VB/E2.À£Áû·:ל= A9¢:‹ºÃäMV;.M}¹‡ã0Æ©ŒÓ©pöÒ$ÔtUz0@ÁPÑÅž°î(ê7Ñ¡ý‡ƒŸwúÁËÌÁlé;ðç…¯𨪠ɠŸ‚x^f9+ZPžœÍIÔOÖDæ¾ðmcS„œMµOã³–ö¢rQ¼<È¢ÉÜ~k^. 
šƒø…¡*hÎG©7%KÙÚÈØ^°d (\‚‹4sÚ‹SÇEܳ»'mZX¥­Àœði7‚P¹võÃÝhI?­Dv|õ6«±ðÙõN¨/]8~tGØbª*˘32-Q«âé£Rk‡¯r<‚Oâ«òˆÉ Q6£¡µ#5ÁW¡DVÞùP°VSuEºsÎ ³øDoo½"JŒ‡ M¹ï1o¢7…âÒ*¬8»»ì,8UX7¥®Œg%ØPXÍï&IÍç?™‹s¤Ð§o­‹ò&­,«9PûÐ]J¯‚‡:½éBïH•ˆQc¤çƒ7‚œ´Y_J«Ö h s æ„×#*ÞÐ;²¼~Úq> 0”Ó½~ÜXD‰màS\wÂ,v°ÌXHoÝ.ÎÙÉ2û…¶óͧõ|óøáz9ðC¿àïøà¦ó¿X7°xÄ€cBùªj/ðaB÷9bÜÏâ‘íȈâÁfO%ŽZr|œá×'H¸pt JÖjƒ§( ×AXM,9¹@ ©Di ×ÞÖYrØyDÜ ft1KŽ—ÌÛDåQÕŽšRχ¬jn“o¿] QÎn~ þ§Ï@‡éošœŒ´Ážtõ›Ûø`ÅÒqpìëMr³%8%()ðZSéÀ1Á´ò–w[ìÑû¤ám=ÓÙö»£îò‘šLˆã×¶Fh, @Ö*… au÷,øÞ¼^”b{â.+­ÎtÌ´% ·YôFç3ЧnúÙ}c+Š=„š}³ê¤V¥½ë›â©ç€¤—`íë" þxÿ‹Ãîl¯^l—ý¹eObŸàyØÜ­nVüœOzq÷ðq¡çÃ'_æû¿ç‹5%¶Ú” Vâ‹4²[ÁVòY †€0úƒ*ÚËH¸]€SN›»Sf1kn›N YåÏ8g»Ø £»wA—@¶QlÚT¥¬&ÛÙ.f9ƒÇˆ]ÌK&B¢ÌßvrrȚͤ@eC÷å@èüI ñqªN‚1®‘Ìšy_ÿÒu¦ÓIÓ÷I2ïzñ|»*Ë>iíìy‰L‹T䆦¹8ýd\ÀIãœr¤ÔÔ~F+Ì1Ÿ]²Ç¦£´—‰´2Ÿ²™Ï@6èª~s>R¡¿ùlðÿÉ»Òå6Ž$ýD„ëÈÓ÷ïîCP=fŒ® åÙñÛo& XèêꪂìY‡ AˆÎûü’‹á³å x“‡Âá*#sãý¦µ&à*Ñs˜>‡ŠI&ÄÒ,bfL·«5ì:O—Þbs¾ýŽå#ú‘¸{yß¹ÉÖ’R¾Î ° Ⱦ׆V?G†"v'z)9.8v1âÀ\­ð) 2r êD¶!±±J9Øs‹9&V¡®>#PšQ5‹ñ%$¾éŒã±òz÷p’¾Âk&y!×eG«ès‰<ÅšŠeMj_}¦55i2’züî€x__ì·Ï¿|{þú¡Z9_zë¬nœï eéZlIiÛÉçàI­ñì"r‹ÔíkÉ™\Õ†Wy‡[}‹æ•ùÊ1èb^]XüÄjûjϸ——ݶȓ+Få¹PyP¿4ì=ÈÅ«¥I‡wænia®Êˆ;ç¾Ö,Å)…`eR¹uááT¹yx~úöý¥­è,pÜjzÚµ.å§nx‹•NëAJaDÕá‰Vö@ö!ÀŠ’ƒbm*‚è¸bûΟÈ1¨äl¿¨!Æ%‚öl}ÚGLG¶0:ü²äàH÷9B¬ ¸&Z{XÕº‹`yR_ñ¡h®²2eˆ‘ú†Ù·›HÁ1^bæn|XL$Bâ×pÐtBW ¡§´Ìù£­›g•%Ú-:˜íIX»t0Çéó¬FG.]›0˜íÀ^›øSPf~@=”õ „ñùá„àò(øÛo¿}(Óø³Ï¦e9vƒÇcô᜞M"ŽÑ`gæÂhæDì+LÏÛÞܛŃ—¨ m¡ššâ,P$KI·€ 12ù ‰ÆP­‰R£"6Ÿi¨6ìƒUðs#ZñÊõ9UDl{ÙÍ5Eläß‚»¼ªLN›Îø>mÞsÊ g\^ú´àud VBJ,jšIVjͶà*#Õ/ôõÛeBÚ›E,â‘bvôíTŽ¥ Øë FÀ+Mwü ö,hXíl´WÙhªbìëÖå4Ù/Q±šÌöõ¹‚_C ºjS—Thëòˆ€k×̘wM-›âê„Þ ¯­R¦[ögß’ôW˜»çÇ<~y<üØs`­öíõÑ ãƒ—zÚ~ö›²ˆ)#Zî×èS»i7nšBSDAÀj·Øöš›5Ï—‹×ìO„Òîs÷šUÃtY¦kÏÐÓæ±ˆÓû}öCiÏEÞšÍ=ïk ³¬r´Ø`—‡˜‹«¬µ´˜û¦4ä’µcªC!§¤¾A2sJc—âHà·.»±mw½i§è´©ËÆbÚ°\gÁ=‡È­85 4Wõ 1$ªOHbµO§K9ʉCj>.‘–0)´˜NJ9uáÏÛÃ)O6Ž?ã¢äcl²Däxì]Šd(ø Àª’­ïÌUÔû*Ï|n ««ã€Ó3*æ‘sfÉüX0W³“H}¡¨nš@ Þ‰Vý{̣ߪ‰«¦3 ª–wTC±`~F•!ÆÓD7!K |¾}7ͰO',·ÂÎâ³wžo®‰‡KmÂkñ½}C¤WaØ/•hWš`añ„¬v9R0É|ãý‰—‡ß?þaObÞÚTu‘s3x2.4·è@3ˆž(g oÚ‹¸HŽþ°ßóï_þ±¯DÞ…°2¶¯ M[hì´‹ôŒ<ƒô¶Y°ÿóŸYs¬Š.ªk˜beHÔ¿Ý*ÜdÛvîî¿Ü?ÿÙDk^ 4õ-5j2er$ÆìcëÓ§{gHwX"¹æbÉi6{ ì=þ©&åT·ú×âœÂ;¦E7)[àÝÕ³ÐË++cšþ¨ÙÔŒùæKöý¾ÃÂí¹³mRŒ$iÁê'JÐw´@#óê{}J 
óÌßžŸ¾>?}ÿsÿ°ª?¿¦ (–¹†¾CÍÀÑU–Q Ï84kôøáé‹§À5ÞœÿÓ>fè3,;è‘0fM`‚ùÆ>ü鳫cáðüJÝˬú˜IÛ™ÆDºk‹ó¦0QÓ~ÔHoò¾ÄZã,À+]¤Fžá彯’!ÞÚË_ ´ÿ îÂÏ8þu¶_ÞJÉ ‘°×ᥠa'Çmh¿yÞ û°ñ´ÐÀãaXƒw]Q­Â;ä°xÝáDŸ1×Bö¹æ–Ŷ± æ›Bˆs–P2F9{L;=~xùå ƒ¿¾ŠÝ¯ß¿þóñ‹ ËãÿžŽ)Üí‡ñÊú*‘iÁrúâ®j±DaKÄï@…4ÓI½[úéŸ;l}xüS6ËŸ>Ž*Z" vU,áÁ‰”7šÍ€ m©üþxÿéûïÃH§‚±k’Ã^Ür»s&â軌—ô&F=<ý¢ƒi¸ˆŒG”?/:{P³Gè…¥Öcˆ*ÅíêÓÓp0û¯Eˆ>·8ʉ(w©Iš±árL0¼¦\ÚýÃɉ}ȶB(ŸPD ‹£©kn&ä8‡œÅR²¿=ì»ÑX°ïŽ,âœBc‹£¢ÛQ¯õšw±ÔMøõ-7î^Û¹máS„ëÌà±Ë¹À¶õ&NÉgÓvlóKÐÿþn¤ò1ÔS”ó6…•ñDÔ‰ãHÔ’·É«c„R*/¾Rh„C²¯MÐäÌòSŸ®ŽÚÌîÐŒ~¾¹?:v=OÞhÛPÌû™V—2t¡fYÞ;-)Zææln\‹=D¶6R1_§µÅØ—æà„»1`§û”îó_à.‚ ý½™«çgûãÓçÇã€Øî_qw­xÓ2zóimÈB©O÷[}¼¢ègLãücº÷G–,”Ÿ¾}¿ÿðéñåîÞž±U[”ÂXÍ’÷1`Bé&îRT@ߚȡ5‚½!Õ¾¼…Ò R: 'šŠ¶R²ñþxúôqK‡íðÆYÎØ å!ô%?4º0b–Ì0s¸éãýãg§ÖƒóñËË/öëîãão÷|jÞÀ KJÁ±/x%å Jû‰=|óèµ+ô$yÚÊIú¨ÍaJ…7ìLù'*}~|¶ßîÌ–¼øØ·ò}ƒòéåûóŸ­]L€tݳ:ί÷ñ"Ç ’ω“ÊLwðbRïSŸî_^ªCoÿñ4³A%ôå`2eŠÏ$%ú™ïŸXûp÷á/?=¶…öfšÈÍ(Ò×§½¼,9ˆÜ¢E½sÓ°£U9¤O‡KN?²³ýUË´|Ea{fFÞëäw„è" 4Å8Ī›¸>W¤»(y9Ž2Œ’Ñ|ÐÔùÓëIêÎm›Û¤%¯™X:‹7nç';ås¦´1š‹É§í‚e³¼ ;ù4iý¤ÒÃ_þçÑauÿÛÌÈû%ê»wøªwxÕ»÷kÔwE|Õë1+Ϙú8B7àHb½ÂG™äŽüÍ8úÔ1M8qwªR©»±ï÷þ.¯}ç¿nA%Œ¼Ð¶3™ ‚ÐÇ™ÁÖ¬’uj£º8š{Á‰ÖÙÜ…hB¢¾’MI3híýû‚tãýâ÷ã|µuUÖñ h„…Ú²Å&”új?IóŒS\ÓÞPͬ³=}»ÿ¼;Vxvÿ¾;&öêÇÞt¬&ÅÕ÷ÏZ90E ÚuàÔ8ÏN8F3š‰ÓuJ/…ª“¹gpÎg†,Mùpž~ ’Õíuõ2”Q–J“Ùg¤‚Èj ‚’,rj˜S°”¸+á·È³¯’™“¦B hŒ26i†T\ÌcOBÅ0ú&ÔÏ17We” ä.Y žëê09I àWQˆ2’Èÿw¼sŽ 1-ö¯p7@ØyÞf…âl¦Ø9 ž`8§Ç ÀÎÒ-/gæê©ƒU/ åýœ3š Anr¬# “bì§à-ª.Õ·Wgk>X~$Í7Fí/4¦Å iMÉ`8¤sŽú? 
@êª(‚ô­ãuË"_{_`æ÷›¯"9 ¼„’,vÓ(Õó¡Jb·ßˆ&EópF•1—P¢Ÿ4ÎØd,ú" ]æ!]ÀœŒ·NçœJ—P’úÉ6â2 pH· %v!½/B•]å#*P—n¦-‡ë ©ZpÖ“c `ب*Ñ´WëJny`¨f‚ ’ßÈ3FÉåÊ-;º ш ]JŽi6ô­Ñ9åö­q*½B,¨ŒCƒŽƒSÂíyWùê'a(u)=á¦Óf!'áÜ\úY§Í0´àYªºnáµJU×éÐÒ|·r¢Ê]ÏA´EטûRýÄóu=9mI×Ѭ2@Öå€?½áò—ÎùsHYSìÒrÕ¸å4yûßœ_£–¯€F¨ÛbOç[ õ³…ªÂUÝV‰Åë'ZŒÑm Xý>ÜzÝÞ×C¥K·s ùg -à)•ñ²ØH 5?žÃÐCÓ«t[9¬Öì•ÈŸW¹háXîkèøAŽ ú¬ûA/ȼi¿ôÖ>Û´ aE|„¥Š®î$+êõM#¦b¾À× ×™8w·È<ÿ)õ‘,Àª¸lªÖVºìÐx>¾ß_#„ ]©¸EA[¶ìGiYÛµûÖëB]¾ôÚ“‰gƒX&SEËÊ!Ô5´Ø“}¥Î¬,GáM©) ÏûÞ®4ÜüÆÜŽìžÈ™Ráx—±)Y†+ç1 ͹T‹ÇžbüQà ãöää×$C£$¼ÁpZpo·û&ú•÷ç»kÑ£DÈÕë¯F/*ÕvÏ2D@•p¤4Ô݃߰?ʾ)^˜è#‡ÉáÊñWÁé½Ú ~!­ö #ñ䯲8e °©‹Å)àì¡Mð |i»Ç⌈©’"ı¾¼ª/Ãú^ÝZT†ë|T6mí2Ü ¶ŒÚ˜‘²è„üE€þÆ ˆÎßgñ9E3ä©6½';F“ÇjÅßt®8žsFÉ!&ßÏ,ˆ£±¶˜ül‘`×a3ÅÉßÉ|p+æÀÈ’Šc»‡Ží•÷ëÿ '+®ÊÅ>vèª(„-sÀ)‹\(Œh ΣL; ,ŠP)˜ä™JjÕ@äòüÞ‰cìƒï"«¯Æ6Ø´h!C—}€‹#ã „“ùàÌ. „ ‘‘ù/ØŒØ`+V^F¹ÎF$BìRçË]Uêì­³(í§éG`û ¬d HÕ€ÀWËç©Z ´d¤Ô*<£Õ˜‚€e­Á5´è;bèËPxv~•€l—ùÅmIËÁast˜š ®QsÈ!A?¤Õuö +vMèå iƒR0Éln ¼2pt@Ý1Ç\Ÿ°«æ€R±"t"Ù ÁdÇ(Ó&sP‘—怘¦D.îõe‹x²âµ‚ÐØ6Áº"4äƒoÿ\e2±öõ, Üëï+ŽÌ­xôhs ƒ ¨¹¾6!{oÉ’ùjc€ÍÓ`¬ê=g(†ý'ª é:þJlêZÖbÁH_À(iÃä7ûK„}|Þà#Œ¸œBóöéõâT{¹gnR ’ˆ×HÁŠIUÂR™ãDŒQR`þ+„Ô"Éä»)¶ì!ÛssöS.Ø}M¨é ùÀ’y Ž—ª2„ÇcjK2b•‹g…Nt2Aáýš-h’ËÓ3÷I nÈ^5åü®Úš 97ÿf!œ Ö$ˆS½že¤*eŸg´³ªY-—ÌmæÂâ‰>ØÚSia.Zêݬ¼à¾4¶±î|û; ¦…fh\¯«¶Si…QiRúŒˆc$È/›%l‘ ¿_BAœ=)m®T&¥sBò˲<Á1x•×Ý#ÄõsYïrîwµ˜ËöÙö —0©ÓÔÃY%•Ù;/¤)•@K£#ÊñRðuÎfº]*«–^0ó°ŠŽ†á"£ÑòÔгFìÙŠLVa‹s2—¶A|q,A*_塳W,køK »]Õ²E¶:@\×mûˆ8]-)KRÒ_Ì>K+–ÇÎÖ­³Ì qýÒÚÆj×k-¶ðß.Ó,“ÁÄö“g!–ÐE)fñùÖtÓ}ÄÒYÐ÷¬M2Uìýà"g)Êâôô Îj˜½âSc¹ètCVòCË¥‡nƒ3¬2Êzᆵá[d.SL}kþz c§27%ès9Ę+Wÿ½«ím$7ÒEØû/k™Åâ[9{ X`7wØÜá>‹F–g…Ø–á—Ý™÷߯ØÝ¶Z»Én’šÙË-’IÖ#·šUdÕS/|Š=qÑc«’®!J˜=¼û  oD„Jk%T–Kõ]ª¨Ã$ï|8ü¡«'dÚÈîf‚Z“ö‘«fiyûÊû@@íSÎ0̓+Ã_oD2"eé=Î…¸Œž€Î ' z" Â¾ƒQ%Œp¿7£ºþ°6/ãÃ$š£KrR„²•ƒ´±ØvBé ³µ¢ºŠmmSêëó2äS 1”p_$Ö&Ú¥ôC «xåɨíñ#!Ñ^<—¾úßÖýÅ›òV·Ÿ¯ë{ÞíþáÿXÜîî?ú/â÷Y…KÒl=Û wóéa³ö³ú:B4÷ð¸¹Ù~jvÅêúÂÇx-zXì52¥¤¶cYîµÅÏô/Æ¿ì×ܨé«ku¿ÞÜn®ç <Öháo»Í ë$›*­'Ï}š&¤Ãæ›câ8¥–B+G¹ÏÑ8¡Š0¹ú½GÁvýÞZKŒŠ÷›Ñßité:Ê-U—Ñ¢“3¨ãXSÀ&Ó¥j—ÍMÈYEóñ>Qþ×Â].^?¼øïûé¯ßÿõ/W‹¿¯y{ý¼øûßI,þÅý¼øÈ ¼Z´?X><îØr?ý×ýêñóOÿñ§Å ŸFÞ„Ï»ÅoÛçM£Å—§+¿È{¶¼Á¹Zðî]/þuá÷û=‹†u¼}Z¬owO¼ÂoV`ÈӧЬ I†Î°!­ nHíœs—SÞÿÏO÷ 
»PÌÞ„ƒÛg@ø¿“í#—–ãBI³'ÕøY’DŠCX‚¡¬¦"T8!ø?M9ìsRIŸêþoœ\Ñ×”/Êh ÅÑ BJBvûRK]QIYÅÚú¢‡v­èÝ :Š„0;Ñ¥üž…ƒÍÉ£ÅÑ sãö­ôLhê_Og$¦Xe_s#ºìp·PëNð/Þ·SL¡ú, uC Õ‚Ì™‡-™²ª=›¸ N‘¼üj‘%~>´É;¾(1#ké´¥\ NÎ!EñŠã¥iâxE‚‰Â¤þÛ2KaAŠú¬RàDuä5]ËX…ê8^KÎ8:Ë  # ')§žçÿùÃ>fص©’dޱ‘„3×£ Œ%þ:Ì?ÍaWu–¬¡élwå™Éå9K [VFýD˜‡ª'B© áP9göZ¸3” †ƒ @â¿>k] „¨_ت4Áü÷ïYp©´4žlö¶åG(kEíØÝ‚ÐlþôÓBPÇ5­¸RRÊx[ï+ÿº÷Þâ_þ­ðéî½€ãí{¸2wqÚºrÉ5±B*!£ë^ÃK­Ñ¸†a–¢·,HiñO¦ -ÌQn« òÒàçYò’‘×Òú…]ßæñê»ïÿ|¥Yö¾‚ÿp¾( Á-ü}[ßEÅ&¶½ùÕm¶…¾‰jOð‹w‹æ†öûltž¯¾{~¾½}ñä/\/¶×Wj­ÖäVêÆàêF(xwˆ»¼R®Ì—ÿ8Æjäoå *…M FèŸßƒoŠø‰ÅöÃn÷p¹ø‡~£n¾¿¿Þ|âoå÷øãÂëq»¹ÞÿLË„³H m[ÇHíZ!Ä¥Ó[ÌüË.¶þ¥x­7Û_7ׇøGIµtžSû¾žoèVÖ=…Ìýî7ÞŸ¿±;{þeu¿x“DzYüÉ$:@a• ¹5Ï!U^1PR¡©V|²Ð_ \é– ·õðÇí²ùó/þÌý®m#gëëÄ*5 Iúø2öÁXnŠ{«?µóòÇÍ#ѶO_L#~-ľßÎ×?Žf¿õ¼Õ¯ø¾»†ÆJ!áÞ”òÛjûÌ sÁñÕÂ[Èï;óò*·¾û–þüøyÛx³'Ÿ½éx±e»vâ‹°i}ã_¸ðêcŒÎ_§Ï‚òÛR“z¶ü#xÃÉß³übƽ2CünKúú… —Ârdb¥š-„æºîÍ6–tSö?5PVàR: R é„Qˆ_ ’Ö¼’–f[ÌW†lF'(ã·Õí%ÿ׫À ý¦‚'Æ"‹›ëÕóêéóýº×>¿Dex#1ŠÍð4¸†iFâl(Ú,€,Ìâ VÆ¡³?/šÐ~ÕDý'´½È}³º}Ú qñá÷/ÞM½ßݼýˆ'<enƒr…¥ ÍÛµëR<öö‡.íÑæëºmuÔoî_J#lê8ÔFkÒ÷`ζcŸX=/Çb µ÷ú%#Y…*\Á‘fv{/’qIœ0MÌzw÷°zÜp¸zü¸y¾úñßÿ¼HëS8,éì{~~aݵŸ4äÖêâií~±]›ÂæH$$yÍQêÝîz¿ó©O/k¿½®¾ëÞüýà G«_î]ݾlÞ7Ñ/m }ìòœ¶´x÷®É¾xѾ¶M¨9Êu9¶IK=¯ÇŒq–i™(Ù2ù:~Ä $”@SÌ41̱Ý‘íÂ’,“!M†›b™”óAu–ÎÔ¥¡crà&²µfÒý ƒQO¶ך‘«Þ¨Ö‰æ÷+Ks(Œ‘'ž¢6ÞZ«,‡¢è ÷EXÚ‡ÂÇÇÏõ¶£kšgé˜Z+;–\¦ÈŸá© ä4¿‘ûý=— ¬>0ü¯_ÞÔZ+}ý»Åˆ{ñžeÞáéM„@ ÖÉsîù1[ý=ܳyôQõÇÍý¦µzýçÀ¾—6sßÏx‡þÞ×6ïÏx…ÑýoØlj›åªÍãFâõd_¯çæ#,î«Q õÕ¯ÃG]µ^FÞ¯,ÍW£ðEœ±ŒómßYvËTOP³<°~Å <•h³Q¢µâužÁ\…HŠžÖ¿l®_X¸—olܶcš<2MDjšišú}=3¤HN¶CS¿oÔæø)óïFµ{—f¤›ø'V #!?ßdÓm*iM¼ŸØ'C nt\hQiiFÇ“ºjA“ŒŽ#5:o.ÁèXS=ãdý,ñÕPB íêgœ,©/‘pê¨_™¨ßÀ1ŸÉ6U³·qüÿ¬,KšøÝ=ƒdy3妉Œ !U–MñFiFʶC:Û¤¸ &ÅH--ÆaŒ!Û¶Œ™# ”(ê­,Ñ¢hœ˜(2ÀxLRŽA1™êÑY­‚…qÏy5(Æ„1 ñ &%|²yçÞ…ÉÃÁ{Wâ4ã‘ö-=¼Q¦'Ò¾f ªéçÖ«,¨‚sáQùÑ~†h¦]QÂ#F;›„¨]yÛr UzKKŒáH5ŲÄ—`Y¤¬UXŽÆ}¿d-„B‚"7wMŒµâK€–¦ ó .üÛ+p¸°^[gliÐ2ñ»ûÉX“ZŒ¤0 µhsfÀkÃJ)Ê5/&ԷؼgSnV¢ŒVJL—w=ݾ_Zb… =p‘“€‹‘"ƒ³U\]ÌNŽ`ƒ¥Å#c=4 ‰èæ%•PþŸÜ¦ýùún{ß‘Œ?ìn·ëíÆ³Ê¯n~YÁ²ùÉçe÷÷Ûûݧ“ Žq4íE{FËO_šŒ¢Îö¢£@ŒÈqì˜e) çÐhrtÄЃ¡lK)Ó-¥qš„nG YJ1Ú `‚wÐû+K3”Æ:TÚš †Ò÷)b^žÚŠÚœ˜^Š"”§f=«LJZ¡*‰ÂH)ûŽ4+}4ä“%B¾áïëC-W¶Pð7ü…cÖÇjÆã*«AÒ*7§‘É8Aªlã“jr@&áâŽkc¶Ç*cB›÷ K²=š!6[ÔSl[QÈ 
Þ}¥bFï˜EáÈÚì2ƒÖ´&¥u¤eTkÄhWGÕ¦)Tfè­,QmŒ“‘â)j³¾ªóÔfæ\ŒÖ·ƒè)ž>¬Aež¨CÉæMES.žÛb\m&”Êí­,Imþ­”60Emù1‡Yž¾*5]=ïþ€£÷‘’$ôó-lÿÌ—È·ð3žv·›^…æèûÌÇÚH½Ö7¥³.³Þ ¤*{)±_­¬½_ý±×¡öy>¢Šhl&PÑ›ÆÙ¨máίä‰öÇý^H™ý^ÉßÜïòâÝ”Ýå•üÅc¨5ºÃSP+©éŽÔHJ•@­é×zŒdتUþa³kL¶ºPoWoei!³d¤µ;³#uÕév¼UÈ.ùq„l½R‹•š,ôWé?›³ùÖópsãYÄ>˜³;ÑÁ×è+Cù­΢%—• wVÔÎ…û`U› ôsÈçͨâc&N{¾DçtÚN äxPgúÏäo>è’Æ|ÿ™üÅÃþ“ƒdŽuÁÊù›–ÊÏÊZrˆìü°A¬N:{˜=ë²e¯B|z…%Ù¤ƒIâŽÁ~„´3â~kÑGc0‹v6MDãs²¼FÉ1ìpñkÉJÚHmÔ "8¡¿Òs²ÚÈ'ý޲׳2èg+”կܰœMðÞrCÙf`´vÚ´8¥ …´é§{X™äwOyõ×Èh|MK1àÖnîvl!­*nååR‚°²Ôâ*ßä™ïk~´·‹Í§õfs=`Ç“:nÇ›G Q,‘Âjá$ú4gÄŽ„0j eH©]›!6ÐÊ¿=ÈÑ’l»Bj^é/¡€îöp6•¼Ô‰P`+“Ëð—t³Ù§ø Áî­ ³_!±u&*;*;› |àT†UÄ/ BóUÄ0ª¼Ñ² —ÄÒ÷ {ÅÉQM!o|þtõ]rÐt·ò®¯‹#ùi`„þÕ”Àhüéýx܆Ãqâ½;9*ÿVŽ|BŒ•àF6'¨ó »õ4ÌsîXDéxÇÏb¬T"ÀXiBµ8¥A¡H ±e£¿©Õ®Õaèm1qÆJX:?ýסceÿÙŒ•‚4hÂDÆJ¿0l¸•É1>ŒŽ]q㣗‚¤@‚–¢bÚÞ­>òqâ´‡ áÏþL#Ê6Ïýüø²9‚YíÞ2ß͇¯Dã-®è›ÙâCÃÁæ|åÕ|x¦á«Nd}€”W¯Љb¨,pzz݃4QB~Ë[dürw+¢à=˾ 4ÿÚl™ &4~7~ ç\Ö¬ÝUçå¬N3®­¦¬;¾ÛpÈlE£gåNñ™Tx‚ÐP¦B´L30¨Y­8—u¶•™Óh&‘¬eà–]²KgNBÅØÆÕq+»v¤Ñ«‚T|½•%”ìäÒHÞ¸V4ÎßžÑAÊÃ_ó¸W ÇénŠ#5Š£JÈ:Ǻ2õs+}K¡…$•¿ñ*õ ç vÅÊs”úþ)ƒ€A[d¬„¬ÓJ[~@€Zú0„´@¬ó>¼lo¯#ö½ýLÌ3#â÷$xYÑ–E3%Ã(Eí4˜×‰læuòÊyîc³½DæðÕŽ9 ‹A§Ñ—A˜×nY+uM€y òÙ¡¨¬¨°öd–3ºÓKf~É œÅyZÕÆy§^BR*ÊË´ƒŠõšÕ2K±ÚÖ %*>5ÎÔœfú¸y¸Ý®Ù¢¢=©^í-´öVPóõ’`rœsÛß9!%š:´½´,‹ Ç"œŽóN²J©ZTgãvº#;áZÛ ®H8î/s)g…™b§ çŽj¼2‡¤¡@8ÎK&ÁÎ2vX-™Pà’Û)¿6h“kÚ5 ËÎÙ­ryà˜Vè rÆ)# çHÝ2Á5ƒ/7ŸX=Þ*ôäzxán’­>V®®ŽÚ…Ê8Q!ùD[[QoMKßL±óÿsw¹»½k5s Y&ý¶˜¦&ã`¾–\*ÑŒ„–5jPóG‰OÛÚ¥\gs‚Ù«‘‰Ô¡ä’ñu´ÅJµôT¤Ì¥iŠçtÞÝá„03—f+IɉÛ#vî¢ßi‡.o±ÑAÉ‘±ßÚµÆP|Ó´¢c¼Õ“[¼åm¥2Öè’»&nùÙ,Qå1¢þp¢¤Mýñ¶Ö(5~×eÑÐ8twëo±Ëu0gŽýt0¬]åçÒäiWRõëé…д?‡ä¹Ç9#tÙ$rIÚµB%+·>ÂÜleøxdí5φ͢µÐ2XÕÇÖû fOø¾z¹Þ>OÃg õ°¬%ø¸0ËÃ<.fôÙA>(óÚ |Ja1Ö8ã ÅËv1,ŒÄ¢%JA§ÚE‘d³ñzµHrŠSEÿ”u褩QìñS›5¶ ë{®7·»Ï^èCÆmüãµJ@Q$N³ØG‚!!ä´Щ ‡«A§RÌ, ‘ j£ùF'Œ06zRe˜\°'˜2u!_ÔU¾YyÊQ%Fáy‰ÏṠ1Ø!•ücÃ;÷qûôÜ>´£{e©Û®7O—¿ÂrççÁõ–I>S9’ßT•¿“5úA²ð¥®h+›ñ[ï^´FMåñ§«YJäYÊ9] @0vjåfî6.W¡áÃêÇŽ"¥d 0ŽmÐ-fO@¥2ÊI-&›ÛC‰9(W‚âWubš=9dÃnôä„evÑ‚$Ðqî7ë´²q/ŠaæÉ½TÊôвÙóì 4iKHtyE;ÞUpŽ4†Ó Ö:$-ÐB•MaJrAñ¾IN.ä8ña¢çSÏÔhmâp ØÍ'¬Ðr´]¦hÖÈn—©€ •­Ñ—ó”í°FŽÈÏR7zx{%n†î²v,17ö«yèl¤TëoŽ‘Ír¿Úª9è 5!Ù„VÆ 
.LÆWÌÇW™Ü„2É.¯x~ˆôaá6V>Ó[YÂv¸Î•;#ºRC®=Rb,}Ëûƒ¡xu9VDJ =PÖ*s‚VYÔeXë=zðæˆŠÑ­Ök´.[Vz¦MÆ­P‚[Fh/t·Œ IØŠU\÷–• ZÎñí¦™AË;Ó´Ìñ`|D-kña|ÅØ¸2aØb±Ÿö©w÷q¿þ²Ý<ßöïiò ØÐ\û«Åõúó—:À•¿Ú®¶¨ÌõêÊNŒ¶Ïüª= a(~oèÔðk V>ŒOo *göpA%uÄ>Vѱ˜¼8I¸"‡’xå1Ú²ß[Zdñk¡Ñ¨`fÈÂæ)±ùJDlU† ÝB(b} «ñAál¸Mt5xFi߆œ(°Æ…I(kíœ@€ør'ÍPžÎ— î£U$½•åáDðÈ›'¨1ëz'Çpø’÷@¼DeªpáýA㔑g6Ì8ýÚ>jx=jØqQr¾–^j±ý¸z3ŒÔ›}t€‚ÕK§Û\ ºC-%- ‹CŠÞjÒõfÚó×Ç@qRpÖƤ‚3ïý¬±äf¡³/Ó¢±†=Fåd|bSêÆõnû4yø$ÊEw=ï?/ÖÛÝÓB©P¡H¤Ö-d½3¦{1†ÈëbÖÆ1âWdBñ|—’ð1¥ã iXm·é‹§J¿Qx»õÌŽ{ž¾yìÃ…xnK{öu¢áF‡Öd¹“frLJ¢}ýGá¾¢nÑÈœrÞª¢!C«*»s2Æ“œkáïJG®Vë¯Ï ùl‰Ñð^ÅWQÐa@[cŒØ?J$6®!’¢š“Œzº"¡vI„~¡_åS§#2XbÛ ŠRj©Ó‘q“IasŽUO”š>8aí¶ñœZ¨;³€1cX>J±¡í†6é1—égì%Ь¦ðƒTáõtÝÛoä¼77ë.>.j¯pp.ÕB]²#Ìb ÖÈ®Ød«¸Hr5Á7HmµIcïˈƒ³ØKÑj…ž”*Ao ~àæ†ÞwìªuZ¤MÈÉ(¡¶lB(÷x»ºïf½vHwZôø°áØ}åßóKÜ|»yúmýe»þšÁ1>åé-¨~ªm¹o®lÅå‰*ÛÀ¦5ÛFñTpÝ‘!Ae1•c§VåÐîÙ§•êÆL_N42éþ!~šzù¦„Wà…ä}U碦é˜c=¯@ ž‡}“{•&ymmÙ˜WƒÈ0¢Æ| "çÀ{ -rt~ ëX=ÞUï>"K˜8ã¦Êš¨wôcÕœ V·ô¤‡gÿÈ—ûŒ†(sØmm+ê6 ¹Aš¥Ž#ÂZÍÁ]­¿0$ˆ€ŽO>=Ó'ŸØÚñÝØîø=­À ­‚ÍZy·¥ëu°+DZ)¿Ñë÷ÐU3l”cÏo•k±Uà%oß–¶í$Fñ¼y(e&Æs^²APÓÌ&3b±ö†½?Ï–N"n}nkúÑ2BǦié¥MÙANS4ˆÙ“d%O<¡óeAL6ž5M;Fcº9,CC@O…§(GÓT ¨ÈÌ •Qp6ã`¬Þ¨'ŠZïÐ8_ódàô±’¼©{L|Ìj²Þž§®ÈSw Ër´]p‡Yry#®çÐïÃç€=35F*kñt4†0)ãÕéØŠ÷¿›b…ލZ¤tˆ%áS½µ×A¶mx_Ôka…ÃYOC€æÉ,‹NG¼T|˜$w}U°>+«E82«õj16Ý4í£¢6ÎÊõâQÑÊœ×yóÁæç sÝÞ¡ícëþ½ëûm$ÇÑÿJ0/÷2q$‘”ȹżܽ,° ,îžwâ™&‰³Iºgûþú£Êî¤bË–T%U÷v€A£ÝI¹DJ?RüA³ÔGÙX¬…u¹ ÍšZOjfí8™.êÃæaŠ¿dI,04kg=¯õ‰¬ Tn€y‡ÇKÙtYuA\ÊÎDшé¢òAäšÀo‹3çC÷‹$•3¦“6†âeûXÍ í¿\ƒFÖ¹Ö­´ÈÂ.€ °nó¶À‰‰¹’[£L‰qjü‘Írã‚Sw^ª/48{çÅdò>þ~Rõá„Á·Å6¹òŠ»Ì;f·0ò pç+/}GÀÔ€]2)_dN÷­Q_¤)¿wnv:ñ‰ƒÛQGtØ ¶Q5†å?—MA„Íæéòúéº.€î1ôD>²~ò©›“WŸgv,žvœ0*œbÍzAïäl6YH?G²hSj¡$3€£ÐK6¤4o²êe9cíìÞ RÜDÏ0+ ·’i(¯×]c¹C®?ZZY­wDþe-Y^ ý&zg{52´DNXYôƒ|03rÂÒhÝS{»¡ÑSŒ]©)D$<p$iØä÷±©ï¤‰ß뿾þBì1—γ9j×»Möi×WùZãúitÔ­1_åkëdÕâà€Å.$ÑÄÖì=£†é4®QΩ~Óó×8z]*‹ §®lÀMI¸ˆ‰”@\K«$Õ’7ý¯ ÷XYŸå°û™£ä‰7©4"ކãüeY˜rÿ]4˜îOË ÜÙ]r€Mƒ‰¾¨L#—ß™WƒBO¢ï‚¬Ö1ž—IW;;8û8Ï__¡Î/·¶+Ðb˜Pà=(ÙSOÏ@¨[KÜyL%½2#¶ö2Lå*Œ„Ôv­Eb Ãnw¿!J™lu­ ékšbmà"¯pJ‚Â4Œè©To}§È§CCü‡I o±~—E(%Õ—á˦}F£[Òö[8Oa½K†ÈFÂhû”8&z.–þ­òØQÿš„tÍ_\¨2K¾ ¥œúïZNa‰¡C$éH áà3M®•p6ÅÕ¢d[A¢b\­„SúQNng•›ÄA²n=ÂØ.°#¬6M»ÍŠ2–lpJßþ8ÇÑBmývëÄ[U™€¹¡¸26@~¶HYFËm’y«¯ 
±ù}M»²ç†MduÏNú–ùäëYãÏßé‚öõ¬ç{ºÔ>n^?²¾ØÚ IƒÀ‹C°Íû/èáL›ªj%ÌƒŠ¸M=Kð9ÊÄ>Þ­d¡Âše ¶ RÄ·6b`i¤°½ “JÙJ"C_{Šdèu?Ñ5í(k÷úþêŸxŸì ·Kì=!ÎL'¾š¶‰¯o“|GåHb=½=£¥ ‘û›äý蛘“tVŸíúîñãú-3òyu­x»Ü×%™€…®æÛ¹ Y&‚¨Ëhk³ùzеUpcØ`àÐæ&£zÎÄù†ìlªx$Ã&–: +Ì´°¥î½>ˆ9H`°.™@N¤¡”Ξ/¶ÕFÊòPv 5Š`¸7ÌôÔûa£”F‘e§„3æ0÷tæ¶·*,=C#q}_¶ÿÇqŒéæáùJÿŸmÖMH]ñe‚ûE^ÅC­û5Or ãÏbYJ¾× .fäTÜe$¦6ágÝÜ åÂM¶ûmžŠÙ&ÃϪ(rœéñkÍ7hî°¢£Ù|ÐèªbìsÏLìîè;¢ñãöf,7UÃóönó6ÚúðƒKºùðá:à͵\^?Òu]ƒÏgÚ‘¶Àcò0%k[%mCmÉÙ¢ky!Œ6%Ù†˜Ï¯ˆÃh“‰Û_åÔè:Pun ,M™)ðåºiyÐ?‡ŸÜ=â2Ôù­öLÂ* fŸD÷[Ò'QKGûZ˜Ó>‰ok }QpàÊTÃjÄ®Rëßê08Ðû̘“_Åp4¸°ËrlËÒ&÷xÚYjÉPßÙ•D2¤€8“Dò‘Ìš…2‚Ó/«Bò¤Ò«`ƒö‡ AH‡28v—EÛVJY¥¹ÛµrºM™¬î<œˆîÛ ”Ž-»˜„?ÒEçáPwnþ¡[ìAßcû«žŽì´âçÌË*•Rœ=éïÏí†yÆÙ‚éè §àc$Ñ—UúÚ$Á{}íEáCÀth4mý*Þ»ñN ßÇôÛÐg»Jð„º ™÷Øõ˜«k8eô¡yõQ:ßBçDÙŒ ÄÚ¿`)pö0#;ÈϾÉ®3#¹µàqû»h4—>Ìí3¢Y@ê|˜Ÿtg>¿ì3ª~ãñƼU±çó²òè5¶Åq§ƒvAeÇÝx!«þ¸ç„}Æ–HzŽö{Ïy îH,eëôetxèGÒkpè‡×vAéŽïëìäœè[—ŒH`Ã}2ÝÄ´í0¥ ;&þ~ÖœÒo…:=·ƒ·íKÆ=ÃÊ CW>÷øt»}º}ù2¬õ]~mN g~ó{}5ªSš ;§¸é|mYä9ñžAûs²óqK9]…dQÞXŸ'v'z±Œ„Õã‡càQ–ÆxOÔãUÊpÜwÐ’xHB< ™Þ¡™ðËaJZûÁuL,úéÚ×GiOëƒu+4ÖYw¿@ëÿó²ÞÿÔ<¬–é*Èaµ>ÂNê‹e‚'&ðÄ‘gpù«Ìæ`ð°bwT¼Ï%vÇ‚éÜ­i“KÆÚGrhÃñµcó6Gå0Üæ Zê]\å|j+޳¼¿Ï„ÚmÓj,Jà1¹Ä«n/?z*Øaè¬`õl÷!×wf€•¨2ïÏtÀ*¨¡ú~:`µÑPû¤zZ©¬p_Û·w!u‡?nocütŸú:úÁª¨3º¾F¦4¾RX²ÑºÔÛ¸ñ´Š$'u¸$k験Éõg‰²J÷gyFCßÚVú¿´¡‘î8ˆ’ú—Œ&å ´5tE#¿‘jì\áé?¥FoLÆ9jDã{ŒƒPĈlzÎ »¿}Ø5w¯ fUd0]¦yDKnæ\¬üçªË‹ÞÄÐ.ÁFõ'ËÕr 6Á8+˜ƒB´˜‚ÂÑ’›TsŠXW1±Í²Ü;-/Šywÿs_£|Ô€!›é„‹mÓò\QZžw4åµBïsJ_~zVotª¾â#tË$ŒŸá÷×ܪ21üª²ß×·/ºå.t+^ü—n¿??Ülþqñ&Ñ1þ¨Ÿ¿<}¹0ñY×|¹—ÁåíM4$‚Z0`R»©Mú —Q¹ÛOQD4àcJatºŽ©BÐ4›UPå„ Òüç?ª`7O?ýéÏÿ™’¹øô³€î7ºy†Yc×w·*\Ý-±ò¨Ù‹Ÿ/^þñðÓŸŠ_©Ô:¤ '÷÷6_¾&5|üV/‚—7|n.ÿNן?¯,Óÿaó~1ëØøìp –©›€µÄûÆa±‡‹»Íúys¤'ünNÖÏ?¿·»;ËÓ7îyë:2v¥ ÇMu]Ã"°À|¿ÁÜT˜;h^¬3¥Ó·¸Ü…ß×wWúÜØlèuc?ßm¿øåfý²Ž9Åãæ>J Áû}aSÊêÚ4•âò•¦M]ÀðK}ùoX‘W7#Õy,x«”Ÿ“w–ów]Ç}X?\o®þ[ßçÓ󷉆ª$ã +ÄÒeõhå!B±ïÀë\?^7­Öç„E¹Ʀ}똀ÄV#ö­?_ÔP¸¢=zžÂÅ=êÙò¤(²bMìt?›ÂÙB +’¡ÈÍRÁÂÙËÏaáÁA:$üº²gWBêZƒZÑþ˜zH’ÄÅc¬ö Bª”ÔuĤ@èz“8•¿9žŒ+oŒ !Ž™KÝdÜŽÄE¼‹û¨€ÚIø a˜_l˜p Fì¼Gü?¯‘Ìõ‡»MŒ%ýe»}<‚±ÈE6C”é§x®áß/¢­¾Ýܼ~fC°ÜÐ’+ð9„ béZ)…X£Åü[|Ù‹Û}8ìzsûyssètz}-áóøcâû•íŸrû|ñ TWéîæéâåãúáâU«añ‡u•±:ljÀ°‚3~–ÃàiÊe¨Xk=̶bPnÅb1¿#ÙMáM°Ù=á“÷šoë*°añ¢m3Ê‚õÃ73 
;3éÞIšäxìœA³›¿óýt5xÜÞT¶0PH0?LÁ™Ñ8%£Yô¸ùþÕëƒÌ&ݤ9Jⱋ.6B{Éã1'«ÓFòip“6XFòų†Wa³§Û=ÚGû,ÅC¢èTS&Zb„V(¡`ìU þ §TÊÅ!ÏQ)îÀU£eR*%ˆ03ÆÇ•!Fr]"”Û¢¥×;—À1‚Ô¹-Þaä ˆuTí ´x‡sî;ÖÛY'ÀÙ>Œ\tÌ{6ý-ù=ˆö§!v÷a}ýÛ§ÇËë§U¼ƒN×AžvD_cJÄ òEãçGœ\WgE& PÀÕó!']9%ËžFK+£ë¼UMøw!§ÑCÒ!'³RR€î+\³&úž2 +]’¯î‘4瘴¤šCžj¢ Ïïœ$Õ ©Õu ‚j²®Ó ›‡Ê \,'Æ ìE|4Éú—@hšÀŠ’¶+¦¶Î5§õëÐò,W‚¡¿+¡l0É;£rÑaº_lpíè&‰]€m·äG~ÅkK˜7ºuâóË»íõo© t¨cœ­ÞcÄ:Q¤št¶z³Ä3Þ‘"΃8Ç}B]q‚B×QB_KÀÞD¼wÁ£‘У1ÑúºŸo¯ÑƦku%3JÎpOÕ‚ŸÅEÐN +ˆWœX:QrMÙˆeã‹."œ‘,I¾^¥ÔˆŒ¸]¦C ÁØUi^DBÿ¸×~>ý1q±"ȧã^mg²ΘPêS]-:*Nj6X?ƒ}§ë0âÅ- Áï­ÙUÌÚ¹j˜~Ýõº4F I.’|ìîÇ_ÕÁ.8йÁΆÝiMš©|m%H(¹f@á,ÚzI9#é´òý⨠š[h½Ì;“Á-€¶Ètm!–C-Z¤lãQ˜ÓÁá¤^Eæ´ÀÙ=¢C-ã 2 Áv•é½_‘›.ûö£¦Žèâiž+†¬˜¸b`ÏU7GŒp€ê¡Ö#‰X=×lFëÕ*ÀXYŒ“L ¤ÆêËqÈÆ@7'çO¤?È‚Ç4È£ìè||Í8ßvVA·õ¡b.õtT8©ZbÈ~ÖA·aJG0Ö¢B³/S°ü2%ÊW`)áÁü‘µér£¥^¦(okˆÆÎ¬3é¸{Ì[åè8õ72uÊN^Åsj”Vÿ¢ª)ÁÀ]¥Òµõ¿0ˆˆß´®¤jõR£Èi˜Y>5ìÈx byMðó”k²CïçgaRAŒÂØ|D,¨X²1!LuR­¬‹¬sqNA ©®ÙÎÃ"ÏÝk‚JZ׈LOîIÉ ÏJðÐGNrƒÀ±ž«K†Ç{qµ¼JAR“PÜ .—¤»VBz^Í«pZ%‡ ÎWdvAkü,–«[CH"F’ž'Æ _@É›ÿ8F¹¥Ã)%'9Ó8¡ãÒÚd´¢éÃtmrÜ­~^u®ß}T—fÜ™sœ“þÃÎ1 RßÜNR-²iaúãÃ9l %21¦G·O ?w8ƒ ©ü°‘|Îá­õi¸êpýFfNëºßÿª˜í±[<,YÑÉgÜâ¶µe=?¬A© 2 &NëV¼±a¦n{û™Æ«3™ y(¨9{¶ç‡7›êÖ:Ó8Ãúû „'7‰C#žg‘u±=Œ³žôŠÐKLíÒ]wý®]õƒâñóóÕÓöSlS½æ3¿$f'r"ϳ ²“I‰›[‹’“©ã»Êä4­&iˆ­2 È#pÖ»ôÃH.m,ñМµŠ$+0L³°Bû5\Å,c}²ô<‡£Ù?ÛÕp]ý®ƒh] ‹ûa²ü Î$”{cãde5Ô•grªÌZ±äag°u™–Ê~ÅüuPýÙéd.9OöM@ çðÚÀÊ|Õñ;½Œbÿß—J r&<³7,YŒdn¨mEMVÉR1—š'uKŒx¦n¡ ²ÆÒòµÁQ OÛ¸˜Ë=·¼ºÛêê>nŸã¬ë­®ìË>—äòeûÛæáÒV–°…pZ¦“8&š@‘õ¨ Ó¦jx’Æ1¬¢–³ôi_s¡ S­F2kÄžÔ £8­§ Õ> Ì;Ät—Ñ!Ž¡f,ư–œ—ƒÖö½k‰¥,Œ¹¹åÄ“ñ䤾ƒ~޳Òù‚îÚ€±$]_¾¼Ÿmø®Ÿþ`‡_ÜG‘öY {Û¨?ÑâyõÙ®ï?®íêµóþ(¢P…âÎèܬ ¤X­=áɲº_¾žRwkCP:C/^†8´- êa7¡þðæèM†m@”#I2Ùî4¨3ÇÉ£³y î)ä±ë§P=f°[äLpš¤mÓ”%“øÚÝoNm6lf¡|œ(Ñ!*ÂÊ…ÔŸZ|JÀaG¤õ§›Û—:âm O Üûyw…lÝ”‘P–Ø1`‹¹I)µ |¨ò <å¢ù•‹SQ²¶»!Z‡‘Hš>8Ž Þ×ܲ‹@1뉇j·º¼''AD–>‚ñõ¯t³(Žm†¿Äîûu£9Ô®ý0YâGÐMišgk!R0-N`RHÿÏÞµõ6–ç¿B8~±8}­ªV6 1,Û‹u‚<‹‡¤f”H¢BJ;;ÿ>U‡ñÖ‡Ýçt7g ØÀÚ³ù§ªë«KW}Uó‚¾fÜϳ% ÐÄ" yTºž÷œ#» Žà–™ŸñíoçwCîg·óÞ·›vl½ˆÃg…=”+[ÄÑkþ½:”S eÚ˜âzëª"È×àW`kûv”¨5b™{è^%:þÐÑÛ#¾Ø2¶èi%þ^Ê5ôƒqšäHQ§ã‡Û×sö¿°ãIPY þÂyù*º¬##:Ú´3ÈÆC;Sëȯ–ûìÎ :”NMñR`“%²½VßK«Nó){YK§‡8Pï•Ý«“7§¾¤œ£u¬Š8À%¿ .žúʦöÅ•_?¨‡½zF)à«2=ŸÌTÕªû`'ó­4¿a¶ksAú6[ÔÑD£*½Ú — -{Œß$V³ˆ Êí £ˆ 
.9KD¢%„ñTªâz$­µSPƨ2¦–iøÈz ï×ûÎÙƒ9Ú»mDúö×Wnù^Ó 2o˲`]ƒÀÕОÛ¼€s\µ¾dWó2ûðx¿^³MËmååÆÛ¼‡´à lmV«Bó&™G…OM;tã‘CßÌa†ê'õ”r¶Ä¡mFôŸ âÏ•y›*FŸœjfÃL±qŒ‹Á«ä¤ Ÿ¦Ø¤ÌT*eA“²Æ pcà)k FaëTƒsÑi èËK¦ö³ù¬lÞ¤.Ì .á@¯YíÁc™Ý.•cì»]à¨5Ž=ì ÚµÞ6t²¾Í ÓQªÒ¯P'ËƒŠ°Z™1„†làì{pëvfTraJ93$)QæÍ¸ t’ëIÖp@’뉟íï>bp—¸’ÑÝ wiï6 Ê@Á6¯±œaëOÀ_YJc¢5z¨ Z»Êr_;Joy*P·à•E l5h[æNgÌ¿[ ï{ÿæè½p³ø?¿™oI€ßßùÙ{9ŠEIº´‚2 ݘ‹€`†sT´j­±»íñB%[ ."?N»A—Dþy÷Ñß^ˆUhj±‹q•ÒWF~jÓà"ó€WÚ ÿÖ¼w³îE‡5´ Ã¦F̨Û:¡&ÿÏ}*–š`¹·açœSÖÉÑÖÀRŽÞÁíeP©ŒÊXù¯lcá}¼  šr¡lù&µä»j+„sYý,ˆ£H¡cæÞN}R¿i±À8XvTA&…®X|œÍ?²™œyîãŸ+E¢mŸ,};>A×Z‚¥µÈ<‰Õ‚V9œk"8L…/d*­,>㣃É{ùT _ä4 ðW…V~½ÆÐ*rV!ºKŠ5¥5žÌ½5*J¢É*J†‚¶†|hhè*Ñ’n?ÇèFU€.W'«zHpÕw¡¦ƒi©ÇKPÝQKûœoyk÷qÅ@Dzž=<¯³×—Õf.¾2]yÕJü—­0µ›IeÝ"˜¦´9—³‡—-ïž6óhÜݳVHvÔWšy¸Ü‹^ÏÐŒ±l¶W¢MáéáLŽHç!ÝZJuüúMšUÌ’Q9täsƒÌ2ú惜dp­YØYÎbdƆ±×éIÌê-%GZÓ3í4 hMNIE^mW\TòtÞúxi~Ècƒ¥ñ2O<£¸køè-ôÁÅœ‘ÄT“’Õo½ÊáÔ6&y;¦ Æ„R‹RYg´½.¬j« ýeŠQÑ,’%Ýêg^¨»Ó²nÇl]À™€¦zt®š²KÔlMðí=î9Ö~ÕÓHSì ¼×o-Ž! T:€U¦-ÁcJ”1™ÏQ°”Ädãu“m|ÑîÜꀲó RV›+ƒ²S¶5(³œ}ô†ÛI;Øè•צjËB€,& lE嘃&-c`jSú3ÚÉ€–œ9»ž­ÌvO>–ùÑo<¯Vþ‘}`ï¶ ‰aÄhçd7×Ð ŽãeV•[΋*MÍàÀR‹AÅ®YöªtÉ¢­!§ü•‘›_²€g—cOfM‰5†ºÍcYHìIå‡Ç…ÑÒäQéQ›l‚RJn·ß–ekM¯˜)§Mà…y0¹ZÕHN™6à_Kóö¶ufÒ°»BÎZÅ€OïÖ,ø3*8I÷JfšNʆ緤uÛ@­Ý¢òv«ÐPO îÔ~þÁÇyÌëI)“XNaª–y!ë2ÛCF•qA&®cF·w^[äÑ‘Îk¤qìxY«•©å¢äôEÖ×zJµ¡Mˆ£&~$²JÓW!ÆîJzìNŸW|Œ‡QÌ1Ì´Œj8;SkräˆôàRS®„jÖõQÆG0dÔõÓ¬ô,.­!íåQ©®/[¯ž¸8ï\ûº¾S¸hN”}U‡ê³X=Uè±ÏA ¡•ÃØh¿€Ò᪥{ñu·ÒOnæëùÀ&[Ma”#ö0j…Ä´öÅ—¤QÕlÓ²n^ÙœÝÁ'alFäQk»€"aZ(ÅÑcÆ×ó¯çÙgÇé‚0¨EÄc¥3Eæ:®Í$ùt·žm8k˜¿¼®÷#ÕÃxu-õòêöÉw£ÃöØlç*cOÜ–Fäjk }^©0Üd÷âôhUf ï)—P&K›Gk KYüÄ×C_1e2Z§Ñ—c(“F_Œ±2©‚¾ü8'ú¾UlÛ6/ ‘RÛ&ÕÛæ7†ŽZ?~j«š4¨,BÎ"Ê8Çúñ¼W‹9Tp… ÕÐ`Î×eÔ—ÍkqcìZ)oæ³wû?–íñýÔÞ ,šp®Õ˜êù¨)ñ{¥ÔY«·¹ÎrÄgLMí¢P:%á•3¤(3ÆxªŒíZ Áp^3¤Hà­\°á«·튜mdj—%lh®ºù#ïÕcEF6BÄôj§Ê åCæX½Ê# ‹­¥^µ™vmçýCŽô”c—#~T tSçŸfòEŸfOó廿ò÷yÝœköFŸªö&¢Ú›sÝÞDï¯[*C&ðª»?=Ž\ íª¡YîïÈÝ}ìX±n¿ûá·žÓä`9 –<¸`½š¼ò|š=.ùÀ÷°Jƒë†õäûÉËoO·ßÍWϳõòö;>7–/·úË'ñ°ds/›åf–óÙól~ÿrÏÖ#[ÒŸg/ož×«í“eUÿæüæÎlè7þÜÇÕâý­ üÑ›×ù|¹ÙÜ~·{¯_ž__n¿«ú¹¿Î^—¿l·PX;ùþûÉ{ÁWyÕ/Úù®Êû=Òq ±M¦iüI½+tØuº(ÐÚ±3üâéfï–?ñ ûÕêù8tàŠM/xZ,»eã0ðÏ9ò÷ËÅþgþ̵ë)I¯.í®}z]{q‚§‹e¶/Ÿ?x›ßË·ÜË·b{œ/ï].Ž=·&bCÔ&<²ãþCì»WÛ=å~ÑÁ'¶åOËõäåãìiò&i÷ö'ç‡c@(Ò5aZb– 
Øì,j™•7uYŽ2œjµLÊÙTKV;¯r*‘—×gDš9Ñ•S›´8¯cÁôÁËVa™ãom\.ÿM-{ñÞ6î2Ð!„H£¿¯wNwè©©Ñ8Uhs<ËRµÅÈ79ôXmKS¤1¦h4{ÃùžùyÂ?yÚÌæ}œ:µ-ßÎÝìa³¼`bO¯ü²º{Ã"IoÏü¼Uäe÷ìeW§¦^™ }ÒðbËÓ_í÷ŠH(µµ¸ÖŽ-‹<çf”:rvGÙEçNŒñÁkCœXð2f\d”¡mv»•?al k­uœ'0˜¬êáþötnŒg‰OÈk9rÑÄçq{õ¸½ÃØFtÁžìU´ æ€YÜ ø¶÷‹›ÕÃò¸ ½ûáóÃ+G3 0ROP•"¿¤ªÒîÆ)5â2‹1Oi n·LRº—×í$E[Kòc4LîÚ S d2²7טKÛ‹¯J,ÉßZYo'"kŒ.1sÇÚ8œ1£‹„“¢(/]óÏŨ;u¢Ô_µs%˜éÕ?¡qEúÇ“ýk•`•÷n×®ÐæŸW‹#baYPt³fÈÚ¼¬?¿;þל¿_°ç»»›÷ïïÖwÃK´²0^ Nc&5ZgÔ@/ܸ ´(N£xÆ4N;ÉÔÃíæ[O«l{!ÕÁiTÎ ÙÝœšS’2;¥`›ã´Ób8Ýѧ$Ru‡)«=A+ŸÔÅ@ѧWÏ(†dKôêuü`L°×\8Ù%úö§î/vS#8|}¯4öz=Š9ÎãðËaY†,¡U^5&Þ¹ŒY_fÈØ {–Q~‘P¥YVço$뾋À•Õ[½qØ>@ÖÞFdv6Ö€³Õ]’„>yÉ”M5d#D¯R½aÔ4EJu€MP—ûѯ‚ºûáËÙ¶·Ý`Þö6ýf>†¹AÙñòÏÀ\¯Æ°áb*TƒÜË« ¸$‹öiÀE’·[Þ…Øý^<•ð–Ø·šlB¢­^¥½Ì4O¯1[à-ZˆºÄ pÕi²<NÙðYxÓ Ñ«[Ù2_ 'dSµŠ Æ ;&w噕S^§’µuض¢÷M<žÐ|:Î5¾*j”eVÎõr=pnZÑÆ«!ÃñÁ¨BÃhŒ^U˜;IœÚšõ¡¨Ö.£/‹I¯·–µÊ)àì¶H­ÚÀ.Nâ[òR¼Íœ?t{üv‹ú~i!Ñ1{¯-–à*Í0â ”‘ ,Kr(#ER65QT#„ŒÌÁçR JÇJ5‡r¨„¡r§íÔÜ!y ÒÆªÏ{÷64g&Mƒœž}™§±ÿiXDÃÙúË×Yhy~LD(rì÷Ç-ÞÈ’SU# äMN(ãè´ºè(íPj™!yD?Ì IQ 23ô¦ý]•1 e4éÁôlÍŪDï>k–Öë|"Ë,§Û«:B­ eªÃ+¨®/e³…‰1=_wñqÖ˜ž´=—¯>îò^u'S¹Eé&ç2cú œ“6hå«oḘ©WÅkëÓ ¼¶àÓ©gœ·ô@NµðÚ8²è†àµu­/2zc¡½Ñ[¯{ð:sÅ”Ç}€+.æH§z5ËX¨Ëº À;hó!ã½jYpݳåÏÙxVò?o?d¥,îå³Rv™O)ÛŠNãU˜_û0ЍFf1‚w5å¨vr$(–ˆ±Ê1­8¶â+£­-‹ƒ5gYÊ‘5$Ý!'²‰{(ªZÐ&’`‰«ø6 ©ïŒ òJ»²Œ1`›Fë­Á–MËÂ4³yžÍ—§ ”™F}Qµ –ŠŠê¨hÄDŸUèƒl.ˆã½2©Y±Ú¥Õ»;j“×¥#| €Jõk•SzH´.;n8å(±.áÂhß  ÁGë1–m[™Ô†[‚]ÖÆTÙp™Á ½WyÆeu™ò¨A ÍÙ•ô}ó ,G}^-÷›õë³|ôû×ŇåK·Òðyõp?ÿ<Œ‘ÒA×C¯UQñ͘Áh##!zðê:ò«…¹Û3ƒ‚M½†ã›,‚£‰rêH«JÔ‹$càiæ:aÏ-êäÃG½,æ[? 
{…žR‘¶XÞêó–XxGP}!jAz5-ka¡ìŠ“À6@ho=™–•ŽË‡ÇcÉ>¯Wÿß$1ÿÈÇo½|^mîù° ‡ÈåÜbØÃÊjý³ÓœË¡…¢Ú&š&r‘8ÑßÂ6òÍýb}/IŸÉ.Ü¥„ÚðøñªÈp·žpLw³€ŸÎ]C~õRëä=%S àB²Ý=FÇÙ„U‡ÃÎ!1Ìñ·¬ts;–³õ±¦DOJ_cï8eU ïfý½ã ÜèU/çÐUŒ‰¼e¤•ƒ?;xéí ¿x¡=Ð)–U¤½Ag‰ÃÝ$õžVʧƒÑ=¶ûM¦UÂssä?~M)ÅYBQQRk‚=–r¬(Íj ÂR›(JWÞ¼£­ª¾[÷ÛŠ{N‹SÁšP–Ì‘mq™Ù9^É>¾p1ãƒÈÇ`›7ñ ›Æçh¼ÒðOÎŽY¯äA1´°6 Ï¥V¯Ú"§ƒcŠmãñ%4ÕvQÊ%4'g¢-â"ª‚ç !p.«Xèn#JC<çoE]¢tV³×ÿòX5À¡®þÅ1"®ToòÞú‚˜Þ[ Aúp÷WûËîK³^ƒ¢7½~šÝ¿ð¹œðyÿ;ü/ò?Dã?ðÏ_ÖŸï;\ÞÈÈúîoî¿‹,—蜣:É X½vÛ¶:ŒŽ Hø} 0?›¾äÍhw3~í$–7ëcónÍ"\®oažWë—›Ýt™瑯¸^¯Öé/åßøHËŠ•‡åb¼äSNG¡ÆT´lãM­I¤„¸Ž]ΙaMK G€Ôî £œV ®9Ž!vSzøÖUüHw@9ðÊ¿*…Æ=*²6À¦e„­œ¹ˆáWö²Œâ£FZµ& 9OTùJ†±÷éÓigƒ;»=•_~§è¶“/¿<ùïýéÏ?üùßo'“6ÈŸ'ûk'“É?ÑÏ“¬ÁÛÉöÓYü=ÍÖŸúñß:jm>…/«É§õý˲SãëæV^÷iÙñåO:ì¹ðñOþeò»ŽJ_^AV¢Ì8)ah¼ë¨ÌŒõ8úDº-•§ýûu‚nê‚6 a¼œô£Ï|š=¼ãä½Iù·÷Þ<¬>Mî³—ÙæóÓü°þ®YÎöŽèø xŽO9KÀÑ,DÝ#L° 6i Q?GÏ®¾–f¨OÎ’ãeŸÜ=Â∆T>@dµWº|)ÍÙn'V)«SU7És¬÷Gy·/èc·âoPÁ½î°:2Ýk÷Õ8‚ñä‡Û ÛÞŠwbÞ‘ñ¹×î•¿ÃiÝíxÙLP×_6£êìš¹ ¢sé¿EÉo€d‚S r:ÞvyŠ›BÒ¼ó« rSO¤P÷¬ ?+$E’݇?1Ø=B+ÛX¶Û3w¦ aÊPÀùJŸ2Üß“2H˜‚ì¦{„?ᄪ…øÀÊŸ§ñ±Á>OÓnŸçÙþÛŽÐð`ÝíãìiöAöä,×ÏW,§Ï‘}¸'‹‚<Øa‹‚ª}‘ƒÍAÍàÍAվǰ r2Ö—;î’¡!é0b°•†8‰« W¤…ÌiœØON؉’Û@ ´‰ˆ‘dùQlzhÿfÒìTšcîÝÁ†´ÃgD¤qB5%m Ñcæ‚´­²-U¤ì14F9Y:ÌAR©²­ÊW6k›…Š.©ì`«_»÷VÑQ±ÃWËÒ6ñ÷ –Om~üŸV[Ú±™»Æå51šH«!xi+®²´=o½…õvÿpc ÝX¿Ã’j7ê2cpÔÚT 6D/°Þœ0Sïi­LE+ðîkXÁ1+çÍ—‘øs¶Î“c¯ }ö's(?çÙ|ñ`; œ§9g{R²Î›6nСÜ;Û|ï Ú7!y7&¾”tÎÖÇØ0^,Ï7ƒ’fÒ쉕,­eÀQëÚÜVŒ6F”Ïzý#ÁT™XÉ\>Ë"»0e7מРKE„~ø° zTëþ7:2ÎÞq0’ÕÿF„×U!œ÷#Z8@c±áÌ„³ذþŸº«ëm+G²Åè—y9d‘Udey`Ñ@/v°»¯ƒ†")‰Ð¶å•ì¤óï·(][×%’÷’7ÙiLwâI·ŠuXŸ§’ñho3.:DÐ{°L„ ®m ¾I(:¢º gmƒÁ‘nXAì©mg·Oî {Cg6û²»XBH”ÚA¡6ÿ2Xü&ãOö?gRÈÅ©]1ïØ@‰ jß4Mò8‡Š,§“<Áx’†ç£í›=ñTÞ‘Oò™&xÅ£LÓ{jž@݉&,p®¢Ý›ÊÖ]ÍyÃ;Jº7‡bÃ%š°ìÚÚQå& ÑP»×ïÏ@[°Ýܯž¾¬D´{GDŒn?žðð¹GÅVÁJÑp¥¤!Ø(°%  ´ròë­ø rY™å ybV.…ÌaËîõ¡ÆW ô¤VšÃº7zU’±­`ÈætŸ{%¯I¢½ÐŽe~üŠ­íŸ«§Ç;ÑY‘­Š7éÛÚ* p— ,”’p¹Ñ^§WQUô“ä xGéú€§ÓÖˆ±¥'—:~i a¼ÜOºßêûIzŠ#-Ø2ज"l³ü%U0¶Xýz¿¬oQ—ÇQúÖŠÚ§äm<%··¦Ø®`+ÈSu:•²¦šÀL5 Aÿ¦ªÔ-CÑWãZf:©m7w¡½ÿ»üùþý«Xß¿¬ý\Íž6³îGgÛ}KOÙÂJ§ÜXyFu¡ê5 m<‚;C¥êP™U¼Yåh³õéÁŸ08“¼Y5D·²÷$Tçjul,;Ä’«´ÑÞ³OôÍ ÙhõC4Åhe¨³b¬ÇªPëlÞ–D“?û3 
.ëTö¸Œ¯Ñìš`®wâ´÷#f'{;?äÍÞ–ƒ¼~»ŒtÖZ¸"þ0®1*ãk†lr“OEâøXª5?y]dUW…E….gÒÒ&ûúÔbü+}ùT\ä-˜À—áw:ŽÐih5šÑ4›_@³6l]UékÔ„)­ÕØÒáãsåÕϬQ,w¶.R"ᨆ%Ñ:´© ih?'Ýù>ûþíãáóL½íz±›-¶‹2ÒY§/WÎꈽEèíÚo¹¿Ÿ?f´ãû•»íl·þü _[ÌË‚Å+ªp† Œ-cdå=Hi^wõÐV-^†dBF‡zÇâ{óÄ“ˆ€^O,µŠ—ÊqQÛT•#1 ¦â”f$B”§eOõNÉžBD“NÎ1@h6õéScèIªÊ!‘vy(:%AÆa8q“2Š‘À×`K–àåêñnó=ˆýSÝß:zøSYðèõe¨¶a‹äH‰·gòsã—Q;y>C•Ù\ L?_cŠukeBú|~÷øe®ocÛÊÎ;åD7íh,ýx½öFÔ^·no,ýx×z«’o=³.Ûæ@-hçâÛ\]Õi}Ÿ—SÄ…EˆÑþïeíjrÃ×vwÛfE´äÂfáR†í커fÙ„å4>£Ç_)Èðe}¬ÀÜH¥³ƒì‹²àÖŒ¡ÅíTó,¸ˆYÅ Ž&cÅ“à\w›'ú¬ý@@ÊfÛz‘ŸtQ‡H€0®’àZl rZ‰ëb[²Õ>†+o„üus÷|¿ZÜÍ×÷oÈÄî)g]¿ã÷wôž|iÖ±’—ù¢ÌWzyÄ=ó(Œ%5dmPX£î¨¸ôXOˆµ@øpxXŒ;Af ·Fâ‚ë$„‡#ÎÑž»£Äê¬rJ>8rQ¬(×ÀØX‘°E¾OžÅy.·M³>o×Oßß:º/_íˆÂä?¡[y€ò:+xé˵ؤ[K©†¦˜|çøì»Óä5‰|•{z²Üit H&‹³øUê½"?jÌÕ8£‡´”°‚(O—™Ö•µÅv•½˜"ê‚ òô+L¢¿ ýféESô[Åx Z)W‚þLÚšqM Þ´íñÜ‹ÙéØ,LàNS(vŸ…ñTw#E `ÌžäŸñúixT€Ûgf´±)†hÍJt„ÆNȧbYÿ0>•ÝâËjù,b~÷ö¯1Þ¨Á›rå û¼P‹åÊû]M0² \B£±EÀj™Q<"l9+ö’²{‹¯9&ùº_ qŠ_¯™ÇËÌ02êe7ˆ>?LÎrÚBÿ¨\W¤r=ŒñögUÀÔYJyHè­¡t’’£»^z¢­ ô%IJ«4ˆ+1j’œÌƒ¸õÉú0÷òhów«@þûfóxÖEø>W{âô÷Æè@*€g½Z¾~ (¶·Ç†í^6Õ;–Ö¼‚_ü&ˆ®mì=Íß§½Ywï‹ÕúëjyÊ÷lÑyð¾Çî÷æ5ºgë^f½“³ðM õÛj{óôeþpó*‘ÛýãŸÐ²Þ¯;S¶„0yr®vå`q‘®íƒÍŠš»ºÇMž°²×LYUª|§ùç¼Ã.ZåÇà‡Uv@ËNè¹ K|atï)æ÷žêÀ,Â:Yº" Š“VE;lz–×|ªE$—U êkOJ ²ú¤4Ui ƒ(Òж r·¹[Ì¢¾ø}~·›ß?Þ%W$f¾J‹ÍÚ{ùƒ²€£F°ä ÙâÇ¡‰!T7Ë›å²Ä~ÅÌ•ù8ïOΠG#˜òþ˜µÑÉüXh ˆ¹ =9ÖYõ'¦ã•|¦ P4Ž§Îœ Õ)XpF‚EÝ2\ÙÎwb’‹'‰Öo»~ŸÛ¿fi˽ë;#¾Äj¹Þ¿íÓJΗ8])lþÂíàb´¢qZ;wrxÁ¸ØNG‰~É¢øô3Um<Òš*ïæýñ7ÎÅsú§hÔ\™…!þ›–[{_º•o„É^¹WFØë¸RŒ 3ôì!9£FH¨\òªøVÙWYW©Äȇ«YLÁE“:g9øã±…Ç)a¿ã±ýøè|Ñu–s½‡éÇcWÒÉßg»²§üSë5S·&ÀƒÊ–Cç8N!ÑOÕ "»àøÿâi >;ׄÑ0`ÂåÚü®Ã¦Y\½>+eŒÒÚJ&QCÞŠZeT'ã¯è¸ßC¿Š5WiOG·,£1ÛèÎpQ”ùËDù´nòйœÜ¡+ª¸»h.«Ä·€‘ÌuwÀY ¦cèVå¾Fó`43A“¡p¢püŸt(üÒ¦N!³îœÏ·ë¯ÑD^v¦U)Ïò•¨ÙqÏÇ飅×z‰ñ¾%éÉ…U³Y(°fÈJ±À–-À]ÊÁÛ6;&Ôóìµ •˘áî®ù«®€5±îÞÃVšá&F϶¤éÎ ‘53a­› ;œvçzíŒN0+ÖvS9âH.§Më)Ç"½B)„a÷²Q4š€aØQsšÍ;Òé’ ˆÁê «æ(ÇÐQ`•ÆÙ$b‘ö´f-ŠnÁadñüÔÌ)½a²çåºíK#^2MÖŒ@4®,¶ùú eµ±q=­œCßf¦ÕôþÚ8MK¢¨!loƒ5š¶Üíà Kkp›œœøz™  Ñ§Ø1å‰p’w"+Òž0ª¤JzÂ2›¶æˆCÌQm +¯F5”#¸Š)M9#N¾¥Ó)MÖΤ £­±G)ÕIiÇ ýÔvJÔ¸d.xhÙFRš,^)p©)C¨;Éo²ncíü8ÒŽˆëÕTš§¦Y›X_«ü,9ºÓê1¶‹1¢Guh•I¾’ ý-ÝåC­•¥ ƒ­Ä“G&§Ùþ7#¹Å$ŽÌM/ÒÓmc™ó¦AiHb±¼jR9 .£MõÞE>z©”ôZ"V7ñõÉÚï¡ßCÇN)08)ìçMŒƒÝ |hªWãš„/Æì_bu²à»lÉ·ZØ èÂÐÎZëåø6‹¬ú⪪„ 
í0ª¦Ç5˜bTá}ÑT UÂ;.a¹©c“'Äs-\\¹¢Ý½à5)žd·‘Ï[¸¡¨ön£SPhêζY¨ŒRÊ£ÿÑû6l(CX¯}K„EEòB ¬5ÆYÕaߊ«¢7>µwdŽ^$“AaíoŒG¬'›:ÞlGÊN ±¨š3·*<4œº³Ê"œ¬U>Ž™ÀÀZo+ï‘;‡‡–X˪EÇZÑpS¬Ï‡Ÿé(;ÒãûõçN²Cf¯¼^#¾Œ*84d]’Q¬%åªðe\SE1_Æ5=ŒëmBKèÉ9JÝh<%§X0šÔ艶Πg±°ÐºÙ´'ŸZc.bŽÚÌË_:EøÓzÊAΈÑ1paŠo’ý yYÎ{W¡„˜¼jTB`jp¡Ñ ç^蹫PºäÕ¡©x_¤ {P> <éÛî{o^¡l´®Ÿ <¶0Ù!…¹`]ˆÇ­dZ°œØPÕ…T³³}23ŽVǧάØÁ !k൵¶qOst,1(ŠC21‹Àu9Íæ ·cù;©[͔ʯízy:­Ìb¹Ï7Җݪê»!tð>ðTc®Èn5’Ùª«ÕþÌ…5чý²×¼t´J§Sõ6^ 퉯æ,E9tS;é–[7ü‰˜ŸaþAO,×E}t¾.Ôû ©ŽÚÂNKŸ”nÎìîGçÉÉa  çA×e$ðYýG–X2|ª»¥Å“j“•5ìˆtˬìý†!Û]ø‰¯›»çûÕân¾¾ï+{}/®âl+7úîiûýø‡'_šu1|c·Ñ¡k²RÜBvùä‹ Ã3t4|eôœª¥kè‡Äé¬Wªííë ¨&GœD ÈÉ€œœM2W8ôÑý=iTâˆSJ; P§u éÖñ¸HL”"Nioð‚ë ërXg–©L…µ|§fßÒõs ²žÞÛ[<&{ß6ÅmY/åõ>[}ü"ï|p »÷HyÛ‰ßnÅA4|Y~}H˼5h%-ß”˜¯ë)‰à=kñ¥$h„ûTggÀENµÙ¢õhÞ´'· 0½7 ãØ›ûIašµXzãÞÎ g<ñ‚¦ À4ÝGû TÝÕ?Z©s¤6c˜m¦š‹ç<‘Ãt$/a[øÉb̆ÔÏÀKÑ]¬‹ù¾„X¡__Á•D«a·[^‚†ôë+t^Yר!CŒõ\ng¬–ÐXSŠ} “êH‘¢‰¦c2«âs;#/ñaQ:Ö:ù|z”ÛÆ3S{9ëX LÆxž¢5Œ³È‚¬B¨Þ†‰!—tlåÊ–{o *XOCHÙÂ+×­¸+ò•‡Ý|±·±7Ø gùpÛ|šßíV—ÌTì÷áù^„÷ÇæÓ+2«ŽtÙk" Ÿ±ðÒt=›×ŒW<•(×WïÑþö¸Ý„¼èÁh;Õ/´ô(§˜J¸a0žºa=92mI¬&t8é²í³ ñ±µ‡›íl±Ú>•ub‡Ý§¿ VCÚ~Pá‘g¯Ä὆¶Ht“WVÜ ð.§™Ä&M‘9:{Ü“R­fãät—o£öQÎ0*Öí›I ÄhœDO•R©^ux•5„ìŒÁåP —UÌÎñ¨¸uãæÍý­¦â[ý'€ýfE zµM>(¬qmS¤þ6Syˆeg/þÑåïÌœ#ôñ#Ñlù¿¸[„%Kéã'œ ·:.O+ ª¬úÙø£õj¤^Ã;æù-;¼’y$ñùømõôry“ÅÉOzÌÁflBM"ÁIàkúâøÇÍr¹ÞmŸÃ[|^~î57wëÅ÷"g#0S WFÚÙ`²œ /ØæÑ×Ãê°¦Ë!AµV9;•4úTÁL„y¸0OœŽž´jMhâ’jY «%¦ ƺ\´\f½(É+7Eìî]Ž£(ͪÇîi܈)Xü-çC.ÎVpx Rõƒ?ÒþÖ‡)‚zÚþ"°Ú¾ÿõ·œÝY†Ìͳ|À‡ùýJD¾¯:.îÖ¢Tp׎æ¯åFÞ_||䋸ééNNõl¶±.oÖË÷vaìçö™ù'µ„oúMÕÃň÷â×0 8†,ú×+lÌ?Þ­þK¤öûfóx–<ùo1ÅÕoËÕ_ï‘(êßn‚׫åñk&FE"z4 9‘&Aqü-¸«±ÙáQ£Æ§d¯Ïò·ðYoÖá3É [¬Ö_WË· h-Ürè'ÅPúûù+tÏÕ½Èz'ûMç·ÕöæéËüáæU·ûGûêáØ†ÒU(üœ,´´A‘sS H.·Ç"q'\'2é¦Ñö÷`kïöÿþ÷`pç¨;;‹f‘°Ù9î΢q@S\´Ö´`Œ"oú]ʧ+ª÷ˆYºDYk¦)Z”óȨˆ«-ˆHª"ÅPE•…«„I\µñMC=©Ô!y \>ŸX š!zÛƒœµ#PȈM»?9«Œ$а•÷Ö÷A!®Q¯N±òz¸FC¹/–ô¿š?:ŸY”*oôªÔoóuèI¸‘Ãzß:7áEæ}Àý»|ýiû}½‡Þ]x¦NJ³õò—sòÀÔZƒú7ÏAˆ¸ÇḜü>Kl2âX;F>™ ü6¿{'ÿÏí¾>÷N¼—›OËùÓ|÷ýaÑ¿JÀxgœ<Ì¥»D_~ ªP`c8V‡—°R€6„Z|87ºÞÇ™õ>}Kè,…üp­BLnxrˆ±jõ-£Þ·ÿXư"ÎTyoPE=æäy ¾5 †7F5Ô4`z€¼T³\Yf‚\ó(—7™d³JeÙäQoÞÏË]VSàUŠ „GÅNº'Yzü%µ+$GÈT ¶‹ Dð{\?¿åZ¯¿š±,/¯/—aöë¯×WëímîÒ¬„/¯4Ë ÖÍCb/L`±£®”7G£yjƒëÉî¡E²šC‰õéY.Ór ÄOpXÇr¬|Â…¤ºs:öû÷ާfaXÅŽýþ³,j²}.Ý.:ëÜ¡§Úˆ 
ÈÎËN=‚(kËмæo@}ò·Ìcì3#x¶Ý2¹Ìcœ5MÿÙ‡‘9ÔG™.îtcd¢-2]\›|>§c ^± §KÝ î½YYŒ©á Wc’SÅ™QF¶{ÝIÅhR…|Ž1¸I3;Õ¨i\ÖhT3r¥÷Í¿þæÕ”ÕZoŸãÞeK-L ùÎ=LRјzÇ<äKÏ"Gò …@ƒø$P­ÔxMãG#©¨*!CÌìóU%M<}víp´÷f…U%46ÀÊ©­˜ûW¾]Àt¾¯ Äà„1+Êÿ@1T¶ÏíÛ(JôDõ‰¢²²ŸêZ Ýâ¨ìƒœÂ1ÑÓlâp ^¿¡á}è°ZÌ#ï(úÝ5ÝO_ïß(>Çç×ÓøN¶•¾óü§S\9Q[q\7sþ[õx¥Z­~èÎ;Ïxè‚Bš¼·¶£ïdÐzÏÀnÎ4ú¡ì๒³»Ø¶/nÓLÇooVà=ãS¡3ˆXÊÈÑ,ôö>Ûˆq·Õêï¹QDlȳIF=,¾aáR´tÍæ¶!ç`?ô6~} x“EÛDï°‰F|õ~l=øêQ®°•ȹºŠåìØ%(é­‰í,6‡IV"ï]”È$›YÞ^¬“¼Èœ<ª›ÚïËÞÿŒ]|ûÏÄTª¥ýÖ­Œ;#Y¾KÖRU{`Ì ßÂJLÁû—Œšº"h}Pßìš 4KšG#!!‰=Nq%äÁGÔgÁ‡“´7{/“Ÿ Q[šÇµ=‚ûسÿ£†BB°s qaf HY„F€Tx™Ü®.õØßÜ\¿[ÿ¡t÷þ5ºzé õÁbÇO¿­î_î9릕 ûêÁ…zˆkS<ƒ›” æ+Ußf óÂÃ¥u?úC:ï4 AOuõJ£ÁP1h]b<6 izB~‡ çÓT~˜ÇÙ“P“qpŽcƒ/bE ÉñT¼aBëÆ³%ì§áI 댳Kzp³¹U4¥äžÁ=i´9 d ?ùQÐ[-D@íòàÑ.­%`˜` x—ï5r~žd+GJñGì ª`± ê”Yç½Ã=z˜ ˆæŒC™”²Š°h|Ï©;ÁYu"Žè©H±ý{„`Š $ZYO™ÛS'mÉÇŠ1IʹǚÄå]uìl—”G”Úèäcï‡ÌnOQþWq¿ÏåóýÕíª,öÞÑ䟉»…ÕŸã(o*†Þ(ꇱÅì{V\g}¨©ó¡6v”úPàC]Ö‡Jz¥óžx9QkØ‚«r¢ÞÇjÎ8»< F肽 œÄ^»©úM±ÏNŠ®Þ Œ›{/B…SÚÆhSÑÎõ)íPEÑw!Y/?­®žõ}Þm»Wf¯?:“F(ÛDô¡KMo3ØÌÌ¥uE*É;>°0„G Ò{CÜŒóåÜné÷8xÏÅ&Ã’ó{`Ò{\÷ÄÓªÚä5åšjS sµè» %6®ëöÓíxªèõu¹~œ­o>ÆÉ¢Jr&gûš)ñÊP=…AŒý“-ýHÛ®š”¾â–½ë|ÌÊÎÙ¬íî÷oè«ÈÚ˜.k FnjÓíNj§bŸ YUQ>ükéÇ饭” ¦»’Õ¢Oæ%b5±ÛŽ6üs$&Eô)u:0è€F©sÇbИô= …Еõðh«³þÍï¿Í"ZÎn®"‹ÿÓï¾Üo„üòu÷Üà‡K¿ÀÃ:C¸` ^#³J;P`­¼éæTÄ&Ì72FR®ì- 8qIzÙWñ4aAÔ§&ªò¦-,-ônY4„R-‹žƒµÁž'èB‚ŽbÄ-+¾Sù^ÄÑU³Ô%Åav&„:bn¼ÆH¬°Ø&5WÇÛºÛöÝâÕlSx«+F öU†‡.ù¦„8±áúoeרP?>ySX»îª¯³ÃA+Ù1^CZ‘Ú%’mŽjËT’]l,h!`‹Y߇ɱÜ}i5J%ãã¡¥ó+)=ȪQf±*d”fÂâÂMËVgbÑòåÂ`M¾\h95í±'ªFÕBàìÄçdÐ O¤muàX|usZq[ž †°€3œ8”ÜŰ/ŠFAŸ›'–é q»KéÉÚdUB=8ò”$óø–Ö¨Ÿ)Ž‘Û…t]õÛŸÁسõ©GU'‡q_ OJ ï‹zÊBlíÊñ|x¤lô‘s¾O%˜dwÞQ“èXŸ:ƪPWÈÈ¥ÞèQΉؘb ”†Ç0Å4OjuËqõ"UšþO@ÄI:ïˆÌ(£wf@fP(2Þ³©´ù­×i9Ië½+YêmÑù7_HµPî½l«$G¨¨Êmsš.°^Çý«ÓôAãfÆ4„ê¶¥ù‚k¸¾…J'µƒ& ØqÚé2`9ãµêÔCÎÛ?|÷|¿~þüùáqì[Þ6_]u}“?sò/ÁÁ0l#…QÿÓ¢¿¸Pf ÑSOÉÓˆ°Ëö?¨ÓL‡¯"jƒžúÐàŒ¯é?'4ÊE@ëÐv‡OÚ®×8„OKq‡=äý†"SŒ»VEOí²é‘}2z°†¦¡y~Ý~7»Z]>¬¥pÆ Ãe\€°ƒ¨B#1fTOá\¡Çrix®*öÞÛ<ˆj(yeNf{Rhƒ¢èãŠjÓ¢¨þiŸæÐwÁ-?Ùd4Þ‡b{;kô/âàè2^K!ø‹]áNúS6èqOeòzðT[`BŽÅ³iW5¥ôf‹gv§T_£Þ< ÀºïÈôfG»ÙƒÙçÛÅýj;˜b@×=G[ÖÂ¥i$:À¥± FfB¯‚ –Í_ˆ&’-L$îÞuzbS´ ûrj5’Èä<»–N¯ä” auE͹â¶ïÐltµÐ²{Dr&Kà²7ËÞZSpL uLöÕæfÙÆÙ2ä‰óK/̳¦›‘,!pšE‰š¶Õ{S6àäd3zuhÖS³¡wÃ€× S’{æ ²ýž‚«Œ(ÝÖp 5 
:jm¦tœÕõu]JTz!µë¨Ý–À—ŸVËÍ@Ê¥:";ßýÝVY°ÒW?Ô¾ªÃbæš</wõ£~Ie–ëˆÙýT¯ŽÜ±Z†A´ê€)ôj)¨=Õ-· ™¸ôƒòô¡è„³å"=áæòx‘_£åBq»Êä1r¾4ÁW sý` önò´…¾ÆÓnž~롆õ¼%>çGÆ !WkуgÓ¤Ý6%úê®Û”ÜG]ÁéYŒÓMÁe¶ËËœ,J~B’Nx_–-A[ÑIŸûnb@€€(ÌÙ‘sÜôïŸ.¾6Þ%H7(øë¥§ë+œùßo×wìÓ%/Ír¹4×Kwe«.ÍÿºÆ[Ї9 ±sáGˆ‡W›€ëñáYÓ·M‹Ön’¡ÅŽglWvf@±LÜ?G®zW_¯ß*Û˜=‹¾Ë•§¼CÉcïvÝ!ÿÞ`o|lŠ;^ðnÚ*¦Ÿ„!.Þ'hÈk:EòuöØ4ŽW™î:Úrq|¾m?µýMz­âxŠ íØ£óó9¿pêƒ4ƒpx´-! è]1Óç 9nTŒ¶%HNÂÝùN›Æ4ÇÖPIí,`yI´g¸ñu ®Š vgíCÇiV7R[76ÍÚ׺ÇÜš"vµlÞd^ÃôÔ8Aû)úàí\8ô é“âz3¹Wu¼öTÞ]›à†K¾À¡c3èƒ5èjãÁji5 ôô0xƒ†B>гP°«i»öû(Ð{“L‹@O›%X¢ª@¯E²íÒŠNÀ†ìÅíÏŒFLV{*äþSÂd¤­30uå#²™½°œÍè*Ä9ž€ËÙï_î_‚ޏ‹q|Ý”$qWÔÜ-¢­œ—"Tas5ÅHÓÝ2ûvàòc“a70PÙ¥˜¨÷„Ö(ûŠW«-á´äÀÐ;°„XT]ꌭۅ¹¡Ø=Râv)äïº8YÛÞ“T#¯ë‘'wºÜèõŸ'ÛÔš½ó€“¬ÊòUys%ͽ~W=KoþhM þh±³ü]ºÚ¶yå·^¨’F}ÑS·ž¹C[6Cˆ2u[öaÝâùê¦nS“µ„=%®ÁAÿÍ“ÀÉ9am,§7OBãQ¼¢ú ìŠJEÖ¤ÙøúfâËÃíóÝJáñËrvMWžW×ËY\'4ÃK¢Y¸ô—3pËkv׫° ÒÛŠBZÅ 0×OézÇ›´¢‘öC=“2ƒ–å@ô#±m¡Ekú|š…ŸQãqlWòág$Ér»( '×åì £Eü©í#ÝùÔhìðê༜>šµ?ÿÒõjñôü¸ú¸xZ½°ªRÊÖ­‚EãúZ'é›òú\àƒéUk(c»=?*Ì™°¦6ä$o˜d(Ø“X“ }h‚ 2µ w¾<‹PiC8¾< âæ^œv5 ¢ÜÂZñãF ®»«úØuVŸš‰&)‰I‘hf~~­P¶Õ›— ]@G-«1´÷³Á¸yÞq×ûÎ4?jÒ°ýÐùo~_ö› S*ÒMÖ8¢ Ψ“ÿÏ•NuÈ­§ºÞbÐÇ4Hf£=è F.LkŽxï_‚fù©5b³ù Y£Ô|Í–ŒOö#Eпž‹ë—{Ü¢¨‚lÛ±£˜MÂ᪢Ø:}ü»ÌN$nKXÖµ,Õ@÷`Té©Z Uœ³Æ›ÿ¬m)YÁ4È€šÇeŽÿ4])zDÀ×c` ÖùpмrO0MÊz‚¾‘›iî¹!H4…FE‰âÞmÓ܆¹lŒÚÚòèx"tT&R¯¹é`ôݤ,P'Öån·®ÞܲÙYS²‚gšXZ`¬;Œ. ‚#M‡1ÕußÁBkÙ±bÅ9g hÚ"ãEl]zÎnOB­ö=R°vj´õýùÁ…t7‚¯†ì§ #‘¢nÁŠkêQÑS§„Z?w–pW÷ëµ­lq§úX,WÉÚ\]ôJ¦oð:VQsvŸM ÜzIÏPð‰@ÕÏœE8Q„÷MK(E*UTqó†ÞÓt ¸mÓçò{ݵíâñÅóÓÃzÙ¶k!v¿k]®Ý¼Ì#$C––€wyæ6òÉÑØ7ᵸvÓ‡F ¾Îâ1n&Gqãis<éÿŒƒÚmr/Ì@gÈ+^hÆPSDš#Š$®>{,8¦§ªLÒÿɡÁ)ذ3±>O])ØFš4?÷ÙPßÅVÎö¸@ÁŠûš‘$q?nŽl24¦…OuuR­@lx¥ÛУ-Ì­æ9Îõ ›??\íûÑͼg* 9õ3D–|¹œÉÇ_?}ª+a0âňQÇ1uy·Ž«uesÏήv×zsa¶ Ïìc¡’ðÜ@ÖW3¤7±ïI®I|æ†+£s§¯FEç|H Ô#:W›‰è\õäŠdnì›¶‡¢UìÂ\Î-ÐXN)Ý{Ž‘ù8Z"Ó¶ÆIšZ7Ò{@æ–6ðwà“@@Ó¼ÃfahOÝ3ô7x•ĽQœ$“ lϼŽð׺@Êjµ åÒÄq€ó'I¼Õ)WùtL»Æ×ØnÎÎowŸóêùóÓÁ&«n{"iÒúªÞ‡~½™³ØÞ­¯*g‹©ÞIÒ&ñä&ÝÎE¡°VÆõÀžë¨MqÔƒìÀÌÁ¡³®säê« 9Úöú¥—i©˜øp§ªyxÖÓx¥~¿rC©ùŽ ÿ^èæËÍÓ;·¹‹œ6Ìæ øƒÚzOP–A£D^l, `zžPÎífâõ 29‹%ÉZž6Nœ¤g’Þ¤Úè.…WUV[àBÿ«³­ÞÄnfîTˆÈ9"8öMw(ºT ä¥&pŸ‘Ò'ƒ lF”ã<ØCö•6Sj6‹Ý¡? 
Îù*çö‡Æqy‡á ȼ*@†Ü’90­IÛÉìÌUÇN`cn:6êŽ Ø(³¾?g.¸£ŒÒdžoBhjGU(FÚ8eö‰ïØââåå××w7÷³;E³Ç?ôWW«¯Oú‡äÞs³i9J€=ô4m!ßí£úljQŽš·qn·§õx?yãéµÄ$¢oø2¶?è –c0¸¤ôæ€.vbÙÁN|óx@;ýûâöþµé ½js}ûðûÅõÕâi±þã~ùÍ q\FB'÷ªÝžxŽ;SÝpVóÍG @Éh'ÄÛòÝÀßø˜O «Ç÷þþ·÷„a³2.Ó_É?]Ü®[ì{ømõþÃÍÕ{\F”^³[\›…ýé[—²ñ'Ö%F1`ÌëÍà aó»]€µ9BÜ"†é®÷ëÅr#¡o|ˆJc{ž¯·ëÕ)ðÇÿ¼¸¾S›øåáúÕ{ÆþÈ% ƒ†;휻üöÆB8K°yó3þQ þöjÿþùña¹Z¯·¾`§èCÌ×lj­“…øæ»I,™0Æ:Wзr”TyYOÚ±ÃZô?_ï‘Ùsù’»m´)\^>Ü}^<ª©é›~\=½ÿÇýí¢ÑÝÃÕþÑÆàÕä×ÏËx‚ÞØ=Ý/ŸŸŸÞè÷_·Ï«_¶×½.lç >ÚO×j©ÏQfÇùé,i '•áƒzw c”á±ÃFdœƒ'=iw|´¿xÖŒ-zþããΖ·7*)=¹±äúfÝVOòÓ×û÷F!L#ò÷CÜ7w=Ú>yêŒFžY1*†YÒX‚õq£çÀ0lûfÀ:bëHÿ´££°ÿgïZ–ãÈ•ë¯(f3+r€|)GxåÍðÊ?à Èž-JÔ©ëïèn²‹$ª (=²Ã³ͪLàäÉ·²00¿LI“ïI(-5Ha³ˆ“+à`û‡2èO+üŠ8ØAä‡5-§@<šƒ¥\¢fª앃¤e„Ê] Oß4¤ù{ží¼†?“)µ|¶Á{Œã]l@Ÿ›e fýÐ\óÖ£2œ7ú7ùÚÿÏ„R&<‡ÿdƒ] ü'ÿzU{á²›$‘vÒ‹ @š°§êë. @Ú?¯+ÀÞÜÇì¶šÓ«Ñ^{,û, (§½éù(l®xÒý辚$G—Û–ÒÂŒww©Þ-ä¿&þ[½+ìû•àÆ°ßê›rß<ŒûV?Ø"ò‘¡5@ò¡à–FóX+4_(g¾ÞIªºUà³ë «¸‡ŒÙ>–ç+õö£Ž´öÖ´V{F¸GÞIÑçØ 7éš× ùnÂÐÑÙg§gÀº=ÿ™²–ã¼H·7öq·?~»ÿóË~RÇÓÞr<\fÛÿò­ÐWƒÕö?¼ˆFÁhãR&±dÓ2$ðä÷#ÖÛÀˆ] óvhŒx¹©îøÚ²»‹ž_¬ƒ™—‘¹Œz(M¶ôZJšÊ`n#6Û­ˆ§j|çCy&W ·ü°’É«•’ç´xÙW)Îü€Vò@F[v>SA»¿@©¤2†3ÄŒQä'âÌǹ¥×W{úWH”£1D¹ìi¦ö$cÇeO³d„ u;¶\ ð|ŸR9›Jq(AàŒ.%jÔŸèz”®a{Gñ8æ‚”>ÏäŠ`Œ:슔>Ïâ%!|‹ÑŒ[Šõ¼O_B¢4O%ÌWîvÿa†þßï￾±þ©ød÷T‹üѧ Êš#÷Ùá4§·ù5=í±júÛîzwûçîæU–RU§h›tÊ’¼øŒã»?æöáÝ—û¿ÞÝÝÿµûöîñãÕ—wϹܿþaÄ#ÍA€ŠÜˆÇˆÔ„—Ì#浚AÜÈЇdYŸÜ±B~Ù,Ê‚[uš_éEû³ vûÀáí»×æ%8Ÿ›ŒèÈ ݺUv s·nò¶]†¨&$Kž\M§‡ª 7øº$Ásù›oIå¾®r`)puIV@ó™&/Vêër/UJ3xdi‚:ñéáP³iXsí½FÌw¡ô| ØyŒ× igÕc¬É»¦°(Œn:rÎgƒÌoÐ0Ó$ÔµÝc®t|vü™¢^Ǭ“Á”Ý~ˆ!M¼ mQíƒUj]G”ýu»A15â4SÙGÈ–‰—!šã-›<¦·žA†£˜:/=9+k?üc­~*½ëÁâ½²f“—Yw áÒ¹h>ÙÔ1˜~D“_à/=ºZ^³”ÎÚ1â¦seË.­è Þ¢q¿fŠåý\šfÍF—9N¼Tóýꡈœÿ?y³¢ª:$±ß`š¶ð¾ø#”¿Ö·§hÿUé;:g^nÜ\öÚÇ*ëaÖw/~Ž˜kSEwãëþã»ÏÄe,ÞÌןŒÝ•»†ßã ùÞÆ·ÃóL¯ó±ÑøÚfNuð-g8¢Ç± cç@y§n±ŒÂÔ.ô¶Åûí´Má‡T›Ü$|~Eß;-þq„àÃ!W{ÎÝîs›=ŽSð.¾ÞßÝ^ÿ˜þÄácj÷!™Pfõ"&WCnªÝ0ÕCIŒi†ãôD¹i:Sâ“o·ÙABE^OúCÚ´°FR>ß:É­Ï’ ä§oÅ¡’‚CSr™GÜeŽÁK 0ò.ÛÚ}0ªm²|UÌœÔáäÝüöðÃ$ùùýó÷ßFá^q¿³Ç õá±ne<Ç¡÷X܆!lH ¸oãX¹Æsôb4OƕǛs÷ѬªÙkˆ´ÆÚ1ØQ“õû˜­Â9½—똖5`ǺëHj¬¨é: ¯4Ôs¹È¥»Ô³äåÛßµhF’bÛäËFã>¯j6''´]sÜàwËþ”W;ý¶+Zö³ÕœŒDºº%mV±A —‹š­ 8p”࣯‡ žZš‡÷‡. 
öÜ ‚ErÝe.íúFz9¿Å»Kcì—îõ=©`×Ù¸$͹~ž×¬VÕ;æ6w*  `žØ‹ÇÃÝ©WóÄb¬ 1›ÿÅ6¿7nWT`Ç-þ•‡D\Þä`e…»0JqA²m,ÎÎhˆªk,"òZ~9 3»èz*­.P®AšýF5P­èâæ‚Çp0$Fy‡þ'‰‘œâ‰×ß^¨«**Íüe¬:t°åLÇ>³x|1D¿¸@"î»1Æñk_É®Çcýü) ó#›2]QaCÆÓÿ# -Ü£žq1ÃÖàѯÇżçõ¸˜ú\Nu"®Nq±Àio-W83C#bÐð"ðN5눇ˆz ˜œmE,Ú1E(c<òS2§jvÏA[T­NhL T#…Îɤ—¾Ù}½»ÿñùUÎïͶm[¶à<ŸVÒ¾Ï&¹{¯#8’§4~ÊᙟcDéé?¼J±Ö¥Š¢_=ÂkSíú-mÂ!í.UìàɬH«£Õ3þEJ~=Âäé=Z²zš_€;M«g‡Ø1F¨°z „ЖWW€áó‹RS|vÉž}ÃAôa™'ÛÏt]´Vdõ0RS.‡±j’å ‘Í±Á8iv×ßvoCÖ{…_<Üþñ¥ZÍ›œ—5zjÊÞ™º¶”ÓiJlѾ^¬ [KäÓ1\o§MÖÁ(;P}"Œ>ñú)ª=z šbH{^›®ŽGSNõ4µWFdŸOæõmC`,QårסôÞϪ4µ·©/†1¨iDFòÓƒ¹y‘ô{›Û{[„_WáÃPñSÀ1îA4 <Ôhíûã~3ðûz›¼³›ÝïWßï'?X Íïô8ŠÍjyh2Z¤[«‚ù¸"Õ6«@<]=sñEJ<€c/ã¢Ñ"Í­‰0zyX« P” Úœr>‡À󀚳Fùà®Ä?7 :îb¨0Y…·~^}ê%4å(•ý–wÚ–zIjûf¦/!5ìÃ>ª®äjK¶|äY<½è(3 57[Ø»Èm7[Ãx6³c„ìƒóê ;.†®ìPÄF™\¹Gß@§FjTÐ \›%cOc7,Qù}wõøýÛî«ÇÕLl¥Ž³’Õ˜ýömÞ¿ÁÔ†q œœáúõɽ„ص¢O?ò땊¾@«Õ¾*”kÒ;I¬SA_‚dðUd+€l H8뀹‚¾¤'3l-“ÝŠ«  ’Aj6"÷DY-k:Ò}U7¤_ʳ鉇Ö}½¿ih&¼ñÆè<Çkºx¸ŽCT‹™·µ¤]¿MÕ ª[ªù¼N¤¥Ÿ•XÝY’=½cæ´» ¼cÂÕº>5š™Áu’['÷˜Óˆ¬( A瘱)¤ká`¼{LÙB²¤)IF3?Ò¹o^L¥±Ec9b“yMKtܨiQÁI ÑóȪ7s£÷"}¸þ¸»ù~—z_R#Ç·Ýõ½½Ë‹Ö—»ûëO•p² ƒè@ZZd’7Œ,Iƒ`¬„è&±õätB̉q S†J×ZbèåôDH]ðØžÚèHPªÁcNÛnéð…KIÌœ±o¯@Yty láÂ¥âèFQÁ‚i¢—›ábV¿iíBSÈŽH¤!uôÈÏZ¹ðº.o1²o¤áý×{K—^—2¤5¤qؤ€!¦(BÂßÒÍpÒþ§ÑþòõÕŇï_nîvU²{yé#’Cn2ƒ°!¦d#šë94W”TÛžõêÞ°3lH-"¼j²žÉDN Ö½ kšCÓ8Vµ+!mHáš»†Æ©}¿é תg1 ¾ ÈÈkK…÷²ËŽø8 §S”1"¡Ñÿª“á‰=™ã û±QF†˜2Æ4ÍÃY[BQóSV,ɼj™ xšTK8~\DTÈ6©¤x—5ÍúصI¥H³†ç¡¡Ò¬š§Íj˜‘Õ»6 ¿ÚµÔƒzÆKŒ@^>ÿ9¾ô['¯âØ´¸Y¤"@a3{§¾Éê2n PصqæÊ÷U ´—YN§Š(Ä—§¶z½TV­2û|äâYz¬rzh@ÅÏ5VYÄè5]l–Õ¥b·:±æø\ìå¾àÂOi¡¿ŒS!$^3f"€( Ô`vÍSÛñ›S‹¸/º~ŽÐ§Õ…'@a»" PX`CœØœ´´×¸º|µ]x=}d1ßE ! 
8].B®øÜü¤‰¤:¹ÈS&«'âŽ:%Þ£¤É/Yë…a.¥Ù6Ù%™Œ(×Ï.8CÙ¦ž‰û"3Ú¼¯Ê8ˆ¦nMv[„Ï0¡Oò>— Šhn·HÚWܵ©'•Qzô\ìjõ1;³Ê 54*7ßLb—Èe«ßMëÄ„pŽ‚,ãÐ톷óÕºWd£…s'#a‡Ø6{ïU7Ò˜m“’¢™·Ub¾zºo_„õEp,ï‡î>¨w šõõRÑNU|LËÏYõñùêú£YÑ'A?ã諯Ÿ"u‘ÕíŠX'kÞñ–œG£‰ ®µô£Zvë?P]4–²Úc­êu­~Úä˜o±žHªOý—æ´U ¬èp_íõÂðúMÝ•¹ú4´HyÅ|=^€ó³1Ø^ö± /æuËjg¬­èyLÝ´ýÚÞ2izîúðýöîæâÁîßãî7÷ןj!—x^ÞÀ5A®— ˜k–Í8«£†ÞÁeõ,u–h³¤ýjÒ8ð\ðI½ #ù¬ÖµÃP¬>òøBgÙ:gQ³%€3t·kB‘ËÊéc‚j›×ïþ¸ë J´©l(º¨cûêZ*Þë䃮å|Œ¼z3íͳ"¦¯V´Ø Œ‰©Ùñš8¥·»o†«éêÁà1ï{9’úÌ‚¶%oÇb9«_XÓZ¶ªMEÿ7mjóv`Î8þ,›Ú&Ï3ÙÔ™[µ¡gDjš­if¥;‡³| æØ ,_x£ o÷ß³»Dæ¾q!vÍn®¤_èŸ_ÿû¨3wƒðw¾ºÂ*ú§âª A¯1Œw”-k8iîùò<<çÊA¨ÆF¶K¾€xÓ–M¼‰ǵã8«¥Õ±nSØÂ’ºM`]¥àF泩¦“dzí{‰ (5͆=À“]s6…äbΚÚbÀÅiœÚu+”uzwL_ññMà0¯Õ¨vÛ´*cZZ º4×÷ÌaSÀ>mÏÙí¯ö$èñþÓîË“)¬‹sĸrLзÐÖŒ¸jZ¿Ñ!ÎQ&³žU")E :)Âz¹¼ŸÉðODÔ©LØEª }Hš?Òæ—Å-ªýov&õ#,÷bße|‹‚ËNš‚ åH1«Ü è§>xØrû R@&Hmb©r@×G€Q ˜Öj)nÕŸ½ñ±ôšÃÈÌXÍ…H@mI¤À:žg¹|ƒŒî—¥ /_xsÛ»ò,*ãY åiþ »(æt é`yiº2›UƒgÁÿåÕßLà·×W»Ç}m¿½A]ÓpŒà¶«`gó[&¡‹LéTo™[מUâ´Ç`vS÷É*Ó²Ó”íYȨîosÆk˜Vû%Mi²áÃfS‹Žäûר9xŽr»Pä×F³Ý«ífPbV« (Mkjí` $†€¦²´¥oÌ÷…ð>î®î?–è±Ád¾ñßÈ—ù\mcó’C#ÖÛ×mzù)Nõ°ŸmBMàö®NŠQ·‹¿€¹ ß0PÍ  "×6Šœ#ÑÔ‹á`ÄK§Y—{Öì|¾›Œ0€{zzò€Ì5ëêM€¶#[Õo´•iÌk4hÄ6Ÿ–xLz/å“îê¸Yié~|½»ú²ËÓÞ4ÃÕ~ü¯ûoŸì‡¿ØCÜþyûøãúãîúÓô"žâ}«»Á{ÿE×Ë}c×6¾Øû-0 ÆÒ¥~&Wwõ-4'Ð]c#³cJU ë« 8]' œ ½´Ó‰€°WN‰š ÂiÅBSmÉèÓƒCvÑ«©)¦5°Ù.f–®‘7{²’âR¨šö%gWHc§ÚL“ßf…ŸûµµçX\{NQ(ªãЈ—i)©®Ï¿Ç˜Z4y³¢ÒsŠ$Þ¸HÍÉUµ BðC6 !³zôpÎrÝW£ù–¯ØÊo§ërwûùöñ=›h èð‘`xÕ¥r¦T—©eÔ-WÌyQß­z%çư&ä7ÖߤXUó€f<Ø3­Ú ëEI°½Ê'ÉõI¾!zóŽë’ojOçÛ@ Ã;¼48ÉQÓTО7áü[.`gìmáƒjM±Ó9¡gî@ ó±©05͘â¹M2Ös}¸¿Û½äDÇ/~½ûn0ö°Î±Ö>¡—g¹ç'¡)²œfÇnð,‘H ú´Þ±\•î²£¸*ÚVÇ/¨DÑ‚È3WŠ,?:̺~'ñuòü‚(Jã—ŠÇ}Ûˆ#caã=?vÙ⢉Ÿ‹[9`ß6ÞIðLh3{ ¨‹mæ°õ?8L™=ÏɹÑhÁâähv{L}AÎÐeš÷ÎÓÏdûL^4‚š ¯k"­üs“þPˆ /ú@ŸþÚþØöù{ÿún¾¹pÛðe ¸!¾š<.^¯äΔ-&K+LVªfÒv›®ŽJ6ÿpÝW!ε¼ŸÞµƒÅJÍÁ.f¨É£cÚÒ×ÖähpT’2¾õTB K¯’¥îÌйKÔ´j©o´G,gªÚfuÄ)!Ú*–tD£·»tö`G6zw­ú[e¹gÚ0§"hmÙt_`Ãè/Ó²)×ëƒReu}r–‹OÞ¶KP*—ˆÂP…lâ¼ú¦Î;-~tˆ‰9Sâ¹WTpÎéZgß9f€Éxº ²bc¶ ú"c§—Àjnàçá Ð^¶ýåXwQò¿×6?.¨) jÓ3Ù°Ž¼SUŠ›–`åD» ™‘k î²O\$mÅZß½vœWë2Q8W~7•UàÝ_{§*à hÞ4oð£g,%9û·îoÒirá?góßú.L*šo†¾r Öh<™S=9³Õ¡©=‡t†4¼žu¥è«U­/ÿyq}wkšÜ喙°¬ª±G;£ÛÕP€Õ6u’÷äD´y«e•è:6f/ÆW#Õˆ«pP³ƒî&Bê“¡ öq$U«ȱ25%¤Rpt¨Úœ{Í b°W¦•A Ïz»´¢a½]5f̪ث#hËF±‹‰¢³ 
8²ÑéËÕgSËÕõªå;ý`aQèX;ây.óâÆÚ£^\»•<Ø?žHë Ð U!-1¨ø‚1ª_Í Ò‘Í½Û“0ú@-ad¤fGlš`aOѵ0¾Áþ˯· ³ÁE\ZìeåL:èÅXŒ§¶£:å= [AaVµœ4ínOs€‡,o7o“ˆ\ø &a3NWßïì5»OÿLë†6·š"‰[ÐNÊÍ4UrЖX/ £:Ž£FdÔɳkaŠQÉðŒC™í"`çê÷&Âë4;#2E¨ZDœ˜Os²³'g˜òc&AYÉàP´)öŸ¢Q0sªOÍ«†é-ªgsLº#zIäÃƒŽŒFÿ×ý‡5˘~dTŒyUøë€Í6”['a,-PíøÈ½Àèò^Z-‘ã½Þþ:¿9f9¸%Ôýö®­9ŽÛXÿV^òŽÑ@ß ¸ü”ª”«ü”<Ÿ^V+"©”%çןÆî’r±;˜°äÉqÙå²$jÓ¾wmôÉBÅÐ@í®OíÁ鬙~ Þ›s]%{ÁùΉãDæ°ë(§OfbF?Q±óÚ´b‡G9ÔdktÀÎRZàu£ØDÔsŒ5¬©PÍÇœ}Ù¶>¶žÒed±(/N¿}gó8æîLJ‚ᚇºœ–,A_I-…:»ÿ¡žv­üßÍ]QóOeJ{Vq“Ùa¢ˆYp•'J5ÐÄëS}f-âm"¯ì{kbŸ×ÄëOŽ"J1SÂK;íÞ`¯ ,XY¤/:ò6t1“s.µªõÍY¬¾ñ“ ß?bR\Xôp{m»ýjwóÒ~s•^yppw˜ü·@„{*nÞÎVÌTÜK:ÀÙžñq ÝNɧKÂÔ¬Š¢ìü$B(;Ê͈ÚDÇ3¢OC{t\ÏN±ó”BÒâ’ÓñÆ'H5$ŸÕñ®- 3ø2t$ÒYЇGÖDoB0‚wNTu øðœ·Æ×„ñ_ëìRnHh^îš½tÔ÷yÑ’fBòÔ+{]@dž9k/ ¶Š¦ÝzS}wFSŽù}ÍODk4|¨ÂrTmN@ÒôÐìT®ÈhlòŠ$ñ¨)ëX–±frí3Ö…Ú$Çoœ¤yZÏ‹»¦×`iïÅ/CZ«¾ü¸|å’ ëÅÀ`p\¹urÉ!Fc„ pÍdÉxa“Óýd¯}i#7Í4º÷ª‹:·Ø®aÉ@>…HÍ,Ô1ûÁ›/å¦tL_o[^ö¬Í—ûl¾ôiˆGéXi«Ï1nZG õÀ12‚îà¤/#AõÂ9h'*G˜ƒ®ÛNðr,š5ÆyZ®îíã)i'Žf«±º×ïš^ßVeV’*MeªjÁдg1ºj`¶X¨§Øñz8ŠhRO¡v¬7ß$|þ°-ÅŽòçh)$HZÅ4eY‚ñ¦y"{­f—² hˆNƒÀ4ÛâáÆæË!§÷üi%Œ¢ÙŠyŒ‹æSˆ‡ó¢ñÕ¤öö%]ÿLÈÃN’¡W·oΨ¬s½Ì¬¤RÓ¬Jn†hۙ󘓺ÿáúìâ“…¿§›Þõ™ÙÃæêþäæöÛÉçÛo«»“‡Og7'OÖŸÿ ûÁëìÓc.òôû¯1€«¹ ¹.ðé쥩'A‰«}ºÒ”YSÊmzçOêkÞÔïEÜÔ‰_ߋѧøt>šÆ'ó5ã‹‹1~ÈV¿f8­sÑg1œ£R•ñ 1ö6žFÿLà ›qtÞÛû¯kVÊl¦w“ùG6õhÙÔ?õ” ì€9›|ÅäìPì¬r}öeg¶6³€hóçàæí¹3†,§þ´ Šäx#ÉöƒÙ?Y—2—`‹*µ)¾ãÝÈìZ—87Þ;œ2b‘ ×\9"Nƒ:í:êãÏÊ[°[>b²yD÷¼¸‘(f½ èÀï±UÒt T h/•w:!e‰bèÈPuÂ-§ò†…×tTîrMœ© #ûeE!&g~ ûïë¨Ì*vze( [aZ³gÛæGTm¢Ù-Øfa^FZ@#¸*ÍÎz„¹\ O͘9ˆ4µy’õÞÊÒ>(if³îJ´Ÿ'5jšm0«ÕÕ ßE·¹M}:£å£åWöÆÒ«‹Dß5ÀÍý,eoŠö€²WMûª”½„%ˆXÑ3‡ÀÈÕˆX³H×J}¯ïI ,¨ó&tµi¿\|¶rD¦À…vjä ^g©ï©KR «¨·úN{»3 +öÉ,è¦*0oƒ‰¤k¶ÖØÃb5—/ŠVûtI[5FÅBu/™+í%‹qX£ŸcA†Oö’)`biüi%½dQTõfȦº4¡Ž5Å~ã}è^ëWBÌÔú!¥~u‡½®õ£”a8—åwIáùÝut,¡/Kݧ~Ò«ø(ÄÞÍïvóò»M1ÊïšfÅmr†Y\»ø¦¯0T]|€ðØÑ a-öo“ÅÝ‹hÎü©ÌòÍ6¹ôlÂ[a1ì¼hGÔ“9Ð,‘{˜fÍ\Æt3"ñ¶µé ÝÑ(<ñý²ÛSÆjá3Ú±½È:"ša—’ß«Òüêý’Jsˆö¼úJ3C©C¡d^µ ù’`@p²A=d+Í£O+q(‡Ö#Îâ[0Õå¥J­öo¶JkDCÖÕ'xÔ¼¯Ý ßÐ ðžZ¬ÊƦ^5VAjðëÒXUvœq;•±§_;UÙqº!Å—Z%¯1˜šÔ2̇vHª|}|÷\aèþîôþê×›µÿ6cæWÐ^‰ª*‰}KØ„E2CÔ¶ý ½šÒ0c„RÞF?i’IÈZ£qZTœ,^öJóÌÕÄÕ(Kîžš’´U#ÿ‚ùI¬{jÆÔtª—cÑT¯²Ô¢´Oê…=ìŒ$d”ªó>„:TŒ¤gGêž“{OÝòs ölÃHÄçJMgÒ`ž²vèb‹î5ZvÞ8¯Ì¡$M(ÁOèÑÔ 9=:¢F= 
:Ü&ц´“ ¥)mšÃã6¼Œ–T âPìO‰Öt6&Ç1°—œßåÂëX%b¹EDm¾ãžÓ~ÖS¤b_ã‹ ú”‰øJvaTO8¾Ž.ÆR•ˆ+*ÊÐE&wÅ:þQ#¼Ìà®=`ŒÔ-€ qãsU‹$ùë¹ñ 34߆h’¡H~\ÉUçÅSÌjRýù¼xXæÅä, VxñaíJ“b…Ùɉ‡La¡pa¸‹ÃøÌXtâ,Q,ó>ö퉥²é±RøÑ÷Ù¤Oô HðZJòí#X ?#‰`Ú°…húØŸ†2Á8Ê.œG IL´Œ!©ËK)¹†Ï0))/gRêòvËÈ/w[]Å·ûÚu÷ZQ*ï˜,ß8¥fV8­f… ~ji®Ÿ™<Ý2¶ç‚ëÉ’÷Ýʱ8Uá`iUä—ùµ‚úiTy|m¿;r¤Œ’†òï奮_2#Eã–[á:v7L¹½[€cˆsL^q=ÞááÔÒr ëñçˆã¶»-…&ÄJ´"‘o™$-j–ÒXR XÑ_Cl¯(ì/þ&ñ*ñê`a5$–Ky’Ú!‰Üá3Š–Ùwñp\|¬ä†«RvÊAŒ£QÎ¥¿ëE8“Ç”@™BCîw áh÷·Gw1î<RðCMHˆ«®4…s©P•šR Ðä2úeO ±þLfá—‡˜_M‡f†¿Òn,핞ÑÄ¡) ¾f]„“ŠAÖS¡oÁ6‚¶-WPˆçJÿüÉýùÃvùÝýÃõ×_~hÑ]êD¸4'ç “\S$FÉp)EæœhÖÝiÙZ3æ ãí7kM³ÖêgÑYù.ˆ>#ë|"Vó_æ7S›ÂêéeûÅ1_À)‘²OÂØÊ÷D ]çÔQU71Åô—5¶~Vs¹Ä­ÍÔ3aýÁ®¬ˆñoäuò?^µßÌOÜV)cKlÝ<¸”¸•¯à:š›"æú¶qÏcGQ=‰¹î- Îa®î‹t Þ¾ ¬è†GÛÐíaº2s]ÊXâök4˜é8 ¡ë-½u®¿í‰ gMä0Ø”ØT´þ4Æo£#ÐãE[$ò{oº¥eÎß~±w³D•’*°û¸´’`,Y¦CBlî–ø.ØE-ߥÚÜy/Å òãpûJ1Î7•ç;H«÷b6’ø¸¹I’ÿ£É ‰Ò`èv%:…î¼ä¤‹%ôElß1ÑOÎ*^( µÙlCœmÿ[FRÉßÝþðéÏÇãàŽæèÊááÝÑ¿Y”°`Ô¸^ H-kj_c¾ùsŸS–2í´I®c>Ãw Ͱ9ì|>žž“¸“a,ÎI<R/ÏÚ?©.haMØT4Þ·fJEß:ÿ~™¦‘·ˆ]ÛÄj2ìÆWÕíqVÃFÑѤa•”!šëmdz¸vÐâçOwûb2WÔG“÷¼ÿòçío÷·¿çé‹×>ÿv7{ûÏ®?ÞŸĸ¼=zŸÐ[f4æ&ð6\Ó·€$´´'ùÒ¢î‡ö!¬M°+ŸŸÂûˆ&qöª]­H†~(Ø.xï~Œ#[¬æ“Èïf[mBÓÑ—ê.gÀRûZÈc¾‘àÒa2÷-}­ÃyÆúq!oGg÷BÆ^iºË°aH¶EÑ-?a¶¥7af•¢æ‹kæe*²©¦Ä?{ÂEUkMÂ%ÑlÂ%»“EêÌïÂê”oÑ$ˆ²$ßbÑ0 ´™´Èø|K‰¹1/Ù ”ò-§ÝÉÖµÅÁÂÏ‘x©j>Þª@Q¤i¤LCL²!ʉÇÁåúôxÿå·û¯Ï÷Ûën¨ï}w»%ú¹útÿéÙÿõ¸¬z$^¯” ßÏR[HLïvP8ŽòçkäØËYßn!S™Ož{Ô»'Ý™ÄrEæú™uÀòíKçÆ7]„åyFh³b€Ñ¥&.fæÓÜy^2sž 5å!Å®E'†5.»­õwÙkñ¤¬p,L¨áð¶Ü Ö?Sƒ®/6“g0°ñì B­WEÍ)çËŠšßà•J ‰€W:¿Á+/ít«°¬¹³×-+†5!ÁHü É—°#VÙ8"b›\ã773jQgÂŒ)V, :XZMëHŒ› B¯:G^=£Ø:’±{Rª/‰îƒ¤qðµóNü§Uš[ýq"æÇ–$W]ñ\¢cäo(þëBqtÓ&BLÁVg­¶H°¦ÄSÉͤ¹»˜C%æ*ÓF!Ò<æZ™¼BØ-<kz–V¹ùµ<Œ¡o×ÅÛ令²þŠ`ûT ž.G>EO—¿Ë(ö‰;ê`ÈèG‚ÑÊHä¸mFV¾ÏÚ@¬|ŸItchB7Nk؈b&pÇ©ÞÒx#Í£è*à ¦©v Åë+«D·˜ ÷hº©°ÿ£ Ý$âxt“"º‘ûöP/À€œÞ„;!»yTÉ®ªüé…> ê§®ÄDïäNø*}ü¢ÿ>: ÔÊ­ÐÿÀÏSî…þï8 ˆyRîÞŠ©~îh2¡êº†dI…†d-@ lûdSæùBó^é ñ`)ÝÈ6Á|vR>¢©™H=`O’=©ºÈ;¯ËÃæ€ÀMð Ð?‡™õ—Ø‘?=þà]vnǚ춺H‰­Í‘·8`Ž¢›ù†ŒTÅËpîKHrÛ,…5íŠäŸšÉÒË»ÝòW5f·ðý\cã,üå2:›o]ÅœâÁr;Ü»m7…H‹â_?Çråe“Ù$<Ìr>-¡Ø*JØO…²ƒH©ëm[‰²©râÅjÏkG5´™"¯H+§ Pß qØÏ„n¦Ì¸‰àQ“Ty2»ŽÖISŽRšžv ®–œïdâ"KÆd¾[š,Œ†Çzb=Ü@ÐdÓ7èý‡õlÞ;EgMîsª4Åû¬ìBP0 
®!Óæ\L\‹Ém¡„5©fGËÙH¤Š¬¬6£ª&‹ìS}“¯¯pÙ¡À€`!×µ`Ä¡ªÇÝ÷'çÂö—´e¶Že³GcsÖÃ!dvÀ¦‹ó3uE«I.ä ˆK'd4JnU-Zñ(ŽÐ(³ÜÙý‡ÓìQêáV©qäEL]ŽÒ°Éð8-0UÎw:ÁZLÕA0ü(UR+¥qchiú(µaôHøÂDÒõÔé@㬊=ôBHMhl!ÏŒ[ñÖσF&}¼ Û¦û†oÁ¶¹¿»QññWÂrKJw7tõï;⛹äȵ¹¸{8©Èqw#ÿfŸ×:d Ni$#gŸ÷=—‡MîÅH eflc *me_fñʤâF9 Úò‹8•ÿ¼Î/úñúãíý»œgþú|jUW'fuUàÁ¸:…Í«¢YQ†DrI:¼­UF~„o¼mÐÝ`¶ ýËk1õÖ†²­úÝãýÝ/x‹·¦×øÀpýn¨TL™l¤$)1õŸã&¡ˆömüž’ò÷{ýaw¹Ô§·¯JšÓÎ÷ö°¦Ï;J æáÓ,GGß\òVMJ)Íd³Év·™‹Ô¾/ëûÏv]¡Ïÿxp‡èï?^=:¢=ý¹¿ûâ^õo~Ííx®Z7¹—mà`ªº\ùòó’‘³\L粉®íÕÍmwS¦yNCšó’ëÛ°ò#Ye¹½¡£¼»šSEP=B€€x†”×÷Lµ svƈ¥DïÁºªždž`f-.0«Y¥Í›ï)0zRYŒ»/9žÀŽy”Ê%G˜¼Õ(€£Pô]!:•¦Ó^ýñxû¯/ߨïïïàÒM¡8‘¹}@@§W; RÐ èôÎçܤ@@"a5Kßö‘ú3È;´¸íDG¼­gHãZÏŽj±Þ?ºš®žÜ™zþòôç»×ÿy%·7]2?<\ÝÜ<<=œîýd©©^mñ÷lðèik1ÚâXÖâUµ‡§ÏõÝ#Òšà(qû,3¦êss jŠ˜æNvAæézÜÝÂc™AôûʪNöÜ™àç\å¹¾ýbtÿ¤ xà((ï®»©Ä¦Ÿµ`!íF”Øô{öi¾Ài¾=Èž·n©š¯¶¼»}º-ÔÐ&]†I“?$Y 7“Ïž<ùÒ$9^j0­i à­?Ñša+a#íÆ?$µYØðW3šE ÞQtTñ¿¬¬. ðˆÙ‚cÁacèáCÊ¡*V†bõ:«oµhëii¿m™Ñ„‹N‰­…ƒ2¯þÑßìòåÿ»J+\¥³ðæg7I²xc”ÅrìvFàr]M>žÖäK*ZJ‰r|0h˜BšC4F*e3_•Ÿ n”éUUþá#šªòýár‹T‚°—nÄ6ØcÕþ…6[ æ ¢ ,´¹qi¿ÿøëåêmÌÎZ¥„ܶMÚ0ÂÚˆšg”Ù[O":W˜ïg?|º¾[¤ Qb&”á¶¡"e_¾¹°m A§4bÑœüÖU@QÑ¥TˆTáRê¾ìzÍJ| «PÛ·F£˜hA¬*ó„Ÿ«õ½‡»ŽšPŠ®cÎ/ƒ”ƒUƮ̹J5…O¢îäw-#®Á³ú›BA‰aÅÅ€º·okŠ ÛOµŽH’PÔ]ÃŒ/ç‘KC^„Õ²ÿhq øËIÔ& ØÓŸŒ…;Þí’ÍLæJ!»r¬ªT•Bº?¿¨²[wVÓI± òpðm¾[ ¸ÈÿB:fB ›e6ú Z²ù*®ÿÞ x÷ÏûœdúDߦŠ+‹Sc &MÑ ý[Z s‹ íøœÇ~âÜ:€­zøu7[7]Ý–°ŠƒX]S¢è™‚æ¾ÃCJÙóÓóâÂè²Xsv'€{FšdÍe;J`b^:V®ßÁÙ+^ʃ÷0i¶ 61#ÎÃFqzó‹¼:€F~gJ¡º {·U$OÅh å €{lÿ½«Û+¹Ñ¯bìÍÞDí*’Egƒ\íM€½X` %9nŒd)’Çyú%»ÛÖQ«ºëüTµ-d‘£sHÖ÷‘,þ¼naâ~ßS‡è?fëlÄñFžÑj>´Y™ŠÚ?ˆý¤díÿ Ö:–ZDrkå[x8åœg³åeç¼s-ÏÖ9HTvDb:s€£œ{£É=¨nþme)KF+JhÞ_¤`ùÌæyöy·›_¾Cúpuñé~ýøeþBI ˜ç+f¨sà9FB‚!k¯ 4£…ÙÊ{ÛÚ’$ײ݆þ€Õl··P{˜ž$××ýµC¥;Cïš)“sܲǰÛ'û p’•‘šGázàØ|ýÌ$D9¨ko_YFâÚ»Í""Ô—ë =>QP<076î qŒ®íýÇëºm‚÷’}dÄEIt œ;ÔŸò2©ÈM úÙ±¹õÖŽ·ÛF—.Ñá¿Ú+[ÕE”5Œ՞ÑBúí8°Ç4i94 Y”€ÂÔؽDÚ*„u»Š1AõB€´V¹iâ-ß[ä×€T)ÓÊHUO¸‚ˆÁ¾’xÉ„={ß[š˜sa+¶+ r—£ŽÐ659ªvÓñrtXÓ^ê£ò¢HÖ¸OÛ·<ÊJS캈àk±ÛÃ`¡Ñ·š¨/×›A%'<ò)b#SaFIºôqN¾Ò¬.“ivjiþX©¹Œ+ò%• ùoàk„ªÐORÅ~¤â>§`ÿæä k‚IØO™Â" ±ßä …Á¥®©“¼ð´÷¶ÙE‚¶÷S‘¯|@:h 9‡eÝXûUÿMh‚L»â½å¿h¢BÞ·ÁqÉý„w›S¦×UàüuRûÏ¢Âyå&æ@Ó j ‹ÊQí³¢»ìËjûÆÁWÃôq Ó«y>—e ÓÔªSMÌI‹Lÿ$Ç&Lo'$úܧñL sB]„µ?ÓÇ\˜½hŸ ¬‡Æí¶mljq\?N@ùq)~L9JŒÙ3ú‹ü?¯«ê@ð¦p³„œ~±C…4z;Ú"vÈ3jݯ 
¨gýIÉÁ,9¦¨ur`³Q%‡,¥–Í¡[ƒ½v‰™¦ƒ¥hBXÚ»8ÑÄL…Îÿb2 b<Ä ð‹æpƒ~Œ‹üNq=gf Sò© ù'Í0Yô )ç3¾76Õ “¤b}Ü@ŽM Å‚n²£6 Z²‡E „önû` °%¥Õf¹O®e˜bþ•aš“aÜ,b ®ŸìDÿÄ¿í ÍF4äo6òùÜ^Ö‚ÙþŸ¼õ×ÝØª¯*:¦²Ÿ?ÞYo\ÔóÙNìgk;•/†ç¦Íå×›¹5Ý~r­¤»Z‚™X \°]hós仞Z¡ýBÖWGb>@ÀùÕìBqA"†¨´H»Ü%ŸBì0ï9Q¿yÏ/8÷Ç Ë÷ƒþõÕžA?öô¯ïXFaæìÛ^i¯ÁZ($‹†Ç€µp¬í‡¹ÖOrkÓÄkôÂÙœ®‘IÊV6ƒqÆ-šúÔsVŸ|~Šúùê)lh;’,úmcñQÛ!¢1¶S}<_ã‘Ä2Ð$ãaûÇ2>@äî!a1$2M¡D:šáÐ6Ãýš[Á›)º+ÝΔ¶^9C.nJñ*ß¡gB‘z·LŒrBjœ)(Ã"Ž É3B3MÞõ2-ÎtÉ„LW²`T)Õ©ßâ©Â7qq‡ÚàÓFmCBÉ›i “Ƚ¦¸G5užÚ°•#áÙÇòHÐ_K,5¦_KžLU}‘Á2kר¾Þ›”W™åk[t§èt·ê®:ïüíÅÕ½_â^ÜÚG}þs ~Ÿ‹šó4_#8CyÆíˆçÎ9N­Ün#¿VQ„ÛŒn®*…2ÉüT ˜«4¤©tã2V‹.\·ô2¤› ,#3%Yvn;Ï:Hy9÷@ˆ­ø@ìµã”넨ÎËüà ©?p:À¾„*ÓIø &…“¼Ì-GejÌ=`èM˜ÇŸx‰Møv÷´°A¼”âéc†»ÛËEºüÿ—Í^àùj«³™3—-èf’57aƒ¡¨'3ÁPÎ㻦J$ÞØ˜°z%Aê© Ï'²=I® ø!¤'P@‹ã¾¿5®˜˜S±$zä·{²ôLQãÑl4æît!ôÆ®FíÛRR–!íßüõ€»¸ëîúüãÕÍùÅ;ŸW›;þݿί $ :¢‰‚ ±>ꩆ 9”9wª;›(ÑV) Y%‰ª¹/¤8Þ$»“nï¿I¯ÜûK£¦0)óà™ó¼ì`‹vÎü»éåVÿ⪡Ø}ß|ƒDä_&€<l^74Z«œ#%]¤r ÔÅŸÏ”}ÚÑk÷ç'¦wê¾<EÔEøŒsÊŠ½Š’cþ¹\ùl(idÝ—Ç]>ì(¶C’beñ7Ñ5òå³1MžT}hX¤qÙQïŒî[1#}ùœ(Zzb_>½F_~\.‡ÐXlQŠßmªý’öW扄ܵ@§Ä”‡˜µ¦’ êÂ5EŽ`œUjq7e!ž|Ñ °g !÷ç?‚ÏŸí‘GøüÈ«L°8#d ÑFNN‰ã´Ü¿9Ó–%‚Ç8ýŠN¿&ÀLÞÍm‹}€_üµoÆ0-²Ù[Û„=’®@|«ãM¿}Įί?´rÔ[H’».ƒ¥b˶g(̧‚C[EQ9Ñ«Ù*ºÕˆpHËB§ýRÇqlš0'…Ì2m½óÖ[Å<~„xwuzŒè˜qÎ%:)î|k žósoÄ<‰ç4Š𢳗¥w“ƒ‹¹ÄsöÉÄ)eº9¸¸9‰jÓp`1Ù€ÒC:2ŽO ã³ó—Ò²…PÄpê8e!ÿw¾€¨*¬ŸçL-²³ì{áÌùnŒ, @Z[s—µz#ı †Å&Øâl×䚤ŸìµQC¤8Œ[ôH½ÓO.g*]I»¦Ô9ªÜ4pã+iÞ’ÏeÏòº½¥ÙÛ&çSfœ¢fe* ?•ÃäÔÇ8LëGKëÑ‚kå/±ý7È©ºB©Ü¡ïŸ¬õÄî¼>witR¶…E`Ÿ.>>>ì˜Nàw{½¾ø²²÷ëO/‰ü=oþàÝùÃÕæ^Þ¬?îþxó—×W5õÌzæL8g©fLÞ™®4¹Ø{žJŽÇ<},c·]ÝN¢=Ê(1sª6ù› Š!ø@Æmšü±&<1¥Pïk?s±(Ü%!ÇÇzÛ–jÝ×ùcãWO“IûpŽz†_ÿ(œó#&rÌZ¢}INæ?(Ý(A½É`0I=U‘"GÊ|“p+²Q/דMÂÜŸmB™mÔâ‚Ó°Mú™ÙfdB#â¢ËZ¦ÚƒirИTÓÏ2„l~ô’¨(J^Ä&Œ3 Ò3$3ôJFMc 3=„Œ´xCìLð ªvT?Ùòö.°r1‡Ò…!ø–lo“<ŠÀ ±íaµÌA%Î_Ö‰¶Kv`Bò8ZÓ|OÚáÃc9MÿÄ¿íW™)hÈßLáóùúÑLù™ø_àü×Ýò㯚ºR²Ÿ?ÞYoœª“×ÙN¸gk;{û«´•7zó½Dn4&5çìƒU‚(1gˆ8_öˆ¤ÍGNfÊöaaBÿÊ3GÑÞ>å·?ÿõ¿ RŠo>Ù ~<¿¹2«ô×=»¸^›pÍZÎ?=~xÂÊøæ/oÿõñ·?÷^ô>f÷Ýþb-J'Yô>æÕž-zçC‹Þ5œjÑû˜wþ‹½Ìs—~»3,>+Y2Âì!cÛGÌÈܰ@0 ‹¸xÃ`½a0²ÑdF‚Z© ƒ†ã •Íw —ľlÄ‚ÁdŒ!ñpüð»cº·@>ÉJÑk|Æ.xß¼±2 Á"`Tꦹð·[›÷Ó, æälóasWˆŽÛK¨,'ØJø W_®TV)BˆYçOI¶GXHMí+³ 
ÙWjQIÂîœÐÝèãC¿8"¼ðˆôyÍáqèy\ú¼ÿ4—dÔ:î’lì&3£ô:­¯aúù»ë+hþçööî…wòÆiW›Pç7ãâLÿõÆOæúêòÛÏ$ü,¬9pŠU?#1·ýØòœéÁ×ü§¿í›õ.(»¸Zÿóêrß¡•ß ýgÏØ}Ûî1ë‡7o?æ|¾ºóøáüã›oYm>ÿå"e{Žb©Žzš‚„¦ fÙì{»§:Ú=5³ æ Ž0 åªU”.ûŸ5nùuö@rØÄðEßÔ‰'„ñ¾i+•Ô{&¹‹žJ,W^¶ãÊM2X#·g‡qCk[»©¿øøuóqOæÕçlŠÆ  4o˰ZDa E¼õƒ¨ZÇ[–ãë}¶_ŠãÿŸ6 s³qn¡±ýBÍ'3Ê95iŸº”'Æó$ï–Õ[Ž)R]o\¼Õyú²qjVÍœZj­Ny Ð{·Iq—ôÙ§<°s±”IfcÓ h®Ùî="³Pd‘5{‘U‘ÄÉ„Ôì=zSޱïŒþuñ.ÑòbäŠã‘‹³"I=ð³8 ×±4[|ðeã‹sˆ’º8t÷Ö}É„” ‹L@š—l^ œ½Æ®Úx“øíÅFÿ›]BWŸî×_ö— =20'¦Yw`sOµïÔßöw­s©I–lçg1Àáh€“談“ÔðMÔÜ.­âÛ®pߣ|Ù(€“(œÀ‚²S\çü%•Îa"6?ùxI£¶¼4³ étnÚÆyøbº¼y:Óg_þ wÞß\=~ºþǺ»ºùðn}ÿáòãz]J0„Y~ÙŒß<-‹*æ:b3~qç{+5?£C}µn´ÿž´¾zWª~vq~f?´_u{ÿ¶ð³³DüŽÞÛÉ¡³»ÏÿÞæXÞ]]óìàò"6ìÔ,8äþùP(zXö ±2±ÜÀæq”Ç9Z¸în/‡f³¾±È‚Ÿ¿¯}þ¯grñî21óû÷gïÞ½¿_@!– ]^]Û/<Š K_İa 5ø…]­])õÏþg.{lG Óùõ¦TÈÿNó[uÎra5×Ô¤¹´0½ ÏšUçNs!㜣ù´8æ†/i=þßÍøÇ‡´1‡—eÁ{ònR½îß5Á¤lBùõ¦þœûŸþ”¨|ü ˜Š©Pn\´Zïëë ÍÖy`b˜µ×$ ¥”`~ÏðóÔýn¯}¿ÿt}5¶gøðÂ"„A^gNµùî²BA•Ú݈I¸¸Òj ˜áoM 7'ÅŒ0ôPY"8õVºM€ø-2|»-ȳ;©Ï3`×S‹sj³À~«ùÊqrÃÚ$IÍê<+R½¢(Ö¨^%ìóD„Ò%åP*MØÛ_ÛL7ÆSŸÄ½DB{öVVboûdIl€vÒÝr cZÏ¢¯ÿ^¶^î(tÕ(k‡ýq¸ÂH1å¾ûã< Û%ÝžÛÁ4$%ê‹£ÊspÔLJ,²›<2å°TZ¡æF½äs,+î ¯8H TGÍœ‹¨ù$ƒ&{ØÐ¼.0ÇüÔþ ž1Ë™Ura®LQ˜à¦ó¤¶3OJó `¹mÚ9Ú÷è華N9s"òqAß}NUµ^ä)¨¬ý…Ip €ÐWqœ{ KvŠR꨸ÂÒÑO’x Ú•àHæ4–fûÅ9v ÖиÛÅ~öÅ7žÕÓŠI3WY“¸ôÛ(QH(iIšc¬*ÏIqÒ8yýÔø“×ÎQ(j-ËÌ+L¢ÖÍAJUf‰´™—Ї OíC©vö¡LÌy;Ze¿¡9¡—éE^RB;Ö‡J:ƇJ£}¨i¬ÒW‡Ü=÷¿»y‘ú§²Ö6]Pnz€qŒ.}sxã€}»ž0ŸÏ™N;ÒI~ÎË ç òG]ßr¾1·¿dØèv e‰áä·Iº H(Yà5l˜kvuΨxQ1 ÃÉÓ}¬˜+:T®2³uÁšC.©ú±Éåví§¯m3FÎÞÚÞàÄÇf7á­£C(šKEgöÉ춨6È^ñ Ïfzbê’‡J>å¸#¼®á¿º»^oë÷Ÿ¯äŸ”å Iyš"&•I|°D€°¯aSÛ(¥à¥˜"S“"‹%Ý2ÉaA-‚ʈ$°VQ˜‹«š‚j”ä0DÏQ§UCU?épK¤þp,†D‰È<69ˆMc¢<ê>•$ŸåÙfz*Y¹ÏMBRŒiÛ¤ø|'Ï»ëóWƒÍ‰ÞŒUÚ®8 ×áèýlE=#œ^Õ9m[D3¤TOiSWóÎFx€ziÚ.“½ïFä× ÁýÍãi<†Ø¿˜Ý—•GÙ“n¤%ͱãËYGÝíL¸Üíˆ.=U¾¿°·œ3ú PÜÙ!žãÙ‡–õÄñˆqFä ‚29wÑX”-ᛑ# Öá›v#²ŽÁ·IµT™ó$µFàm¯ì'Fo$<zÿ?{×¶Ç­ceÖyW‡q‡ùƒùY–bM¬ØËv&sþ~Àî¶UêfW±Šd[Žý“Ùª*䯅À”{‘Ý|0•›°+z×€·Q”¶*Æ60)©ZwA#¸is×ÙþÓkbËZÖ¯;%¦íBÈ-˜.÷ávW§øke ]þª{Ñ“vPJÖÜ”~ùeSB à}è—ßµŽ÷µj#ÎÛêýF´°…}Ž}Ÿå¤e;UVRe¨ñ.Õd‹—qªó:û•û³JáÐdi5lô$»U_°ÀNR¤üL‰v¹Y±–ò³ò˜I~:·´YƒqaótEŽ ´ø}86~N»ˆUî~倭«8 nÀ*’<ì:`•Ôb•êN<Ȫ( $°UR)õ>]YTå¯b«­ ¨ÒÙ2Üp¤4n\„ñ<¯ž—k ñÓxÎ2tgí5q–Í$Ø_r” â(›ù†)(…qœd3ß0V™cX¥´! 
’ãhsw[ÚÁŠV8VyÈ\Ô Ç*Ó,¢U‚" Ùdi5he´ƒÌ:¸±ønRÄ&ÄBÂá’1QÑA’˜g§/ÍU•®“{Â÷ DïI™ðÒB# zÏ›RŸ'¹D}Îö/ÂÒòî^†% [¦ÆÇ|½é›§;WÃî82bE¼ç€â,IˆÅâËÉÒª`)íPyTÃHX’Æ;R e÷{Mˆ‡´ñ©KB% ¬¦ôŠ<ªõ7_/ý,Vµ1~Öú/› R13ÌûZÿe—|2Û9Z‚9\ÊVðÛ?bËÜg7È9c–M3Ž,žÏ8R*ñ_„@ˆiìDIf/‹+…brk²–š Gþº±\|ɶwºÆ¢Í¿uÆ*)@ʰMhDº¡@Y5(Sû4†úykèN¨d)/¡]Æ"’Rš`²®*(B‡=ÎÅ•îx•Ò* …qô5Š ‹³\ Ý×+2çïW—ÆÜæžÉ~à»7ùÚLo>½yûöllh€•ƒk|ãèDa=ú5¾± +NØi5p'®Arוýô¬îs!Vr|{ºÿòîþ¯Ï¹*gUVPh”4Ä-#4ؽPdÑ•5|Û¶­TÏJ±%?e–Ýlp'×L› ¯˜ŸH§õ %$ˆÕÔúL=-´ît.Q0’|çªÛß]þŸ7Ý«ogÛ†{©" ¿»`ÀRe&Ì q~œkH]I!%Ö‘BÆz’Žíà=ŽÒ¦™ÌþòTTo8&ÛÚË‹píXJɲ å@“Tý|”^´FIAâ•Ñ”ÆßH(BB¬ç¸ ר¬.öÅ4u•4#Nax²+E$-*Ü>Æ“)?ìXÛ©Fã,m׈j°ÁGÐvIÈå/4Bw¦Ì©È:J ¸lÿû6蟷ÞÝÿ–Sù}þ~Ê€¤,‰Z”)öï&q÷&<òðv®¼×q•÷F¨+²‰Üñ›»ùýÞ½;ŽP¦oö.Ðó‰/¦Ô[m©ÆŠ‚ýýæsä Û‹`÷À-#º8F§œÜÁM$„”X2BW[ÉU¶ÒÏõZ=®Ž‰/¨ÕBH Ôf yH0ã`×ÓkàýüøÙwçíï÷w·oï¿<Þï}ãwèçÕqŒŸ†ËJIæ‘46XMm¡õHîc¦Q´¯U’ìÉøöd\*j±$²`^]ªeó:[—Æ(D¢h\o^- »& 'ÙA£ËZ²œS‰0Ð5¥f®Ë×mTehõ`ûÒ¾VƒÊE“hhËãˆÙó˜;†!þs¦IîΙï94á2Ɇ±ó®å¤a5÷à±EÄEÊ·~FËY£|ÝM‹ˆKÅ)Ïé‚·¾_%pˆ¸oÅOÀ&¼e°á· ©D¯ízUüY§þí ÆQÛÆ1Ñ~ý¤ˆB3÷G>„µeyf=áÅ£ÉîÑb®!»á'+«ë5ŒÉ?ŒeE¾Ö<2HÐtöl09òAŒXòuÜ0ZŽºìŠŒDBt½J€Šcy¼=ÏŒ$fo€û\ðo|ó´šSë½½ïO ÖUcŒ#RÌ´‹‰MŸ^ ÿ· î›7ÿ¾ùøáíº(šù²"A”–ä³+R7„Ñ brõ eÝ?•\/NƒãP„TQp¨jKåeù,”ìÄDF]2Ò”Ë5éÓ3²´A–ÍHŒI†§¤³³yÎ åjò¯'ZbþÕз;¬îþ‘mÙ~ 2.ªØ]Ü€¡ ‰Ç 1‰?üipÏô›Ç?óyÝ:5nWB§¼­½to4—ÄØ†ÞéÍ‚ë Äð‰íÈÄ7 ÅPù:R'(vó¶ ‰SÎy‡&$†Áì$.qùe=©A §¥>Ý®·„¢U}ºpUŸn`\V°GxÜTž¶@€bH¬…€¿ßÝûþ~óá¯/Ÿww>ú?wùç\Oïãó4ôüÛŽÏY{ûí?;\}Ûë§­—òq‘vnÁtB„=¾Y¬âB€baæDÊ=2rù³ƒûëÜ9ö_!kÛc¼¡%Ü]YFÊs©~Ú=ææ?Uí±Š"*ݲM¥Üm¹ÑX·Ç˜©©RÀÒ8sâô'Ožn?ýqÿåã{×ÎoŸîß¾»}ñ£›OO?ü¹nŠž©ük³è+¼ÈcmÏÊpžŒÈÖäoV?×1íAª â.»ŽGÛpÅ?‹¦“ëh¹¼>­:’ùDŶ(^p¼ë*ˬ't÷›õB]Y‚®õHU£‡¡ÚcÜ ã”é ñÚtwG Ú »”S¿'Ú<ájЉ«Á5âÁ;K›F¬?=€;¢»ÌD£Ü×⽨ÚxwûþË».\T]$™„Ü €o[‰_§|çb»oúíöe1_IZ‡/—vDShKQ©n*¹sÓŠäÿ3¨än<»Õµï»U4UÄ<ÓGÝ• s&¾ ¯‡ÿ‘$Yå,m ÿÃdtî*ËùÕg)öaÆp” µrÀV¿ÃLq¸»@¬Pt˜9dÙ¹ŠsߊˆSÿyE„£úŠˆ+ÀÿP‘ ½ Ìà*óùVõ`‡µh!ŠL5hhQžžú,­^pÁh_.0Ðà´ Âv$]=OåUŒã«ìÀî¦^Põš5w í‹ßi†ÅÛÛ Â7Ÿï¿l`Á64_¬[®ú%÷‚iŒ±ã‹¢¬zúfùÒK,V -‚-¢-—ËU'’é·’ È® ·`<Ü&+Ãm¶WS¹_†º^÷0Wñ÷BìÂß{.ªÒ8°„U’ hFt-¥äQCˆö36Ôéf~I·pE&¥˜[Ê~Ü~‚"@ï^Œi L!søLŠÅØùYt}à9“ 'L¸ž;œiKÃш‹¬’)åy ‰¯1‰äo"ØëŠ3í4[‹º9²p ¤jðs°×ie¼¶ÜuCp—ÛÌ0ü€ìeÜ–<›Èp¹–Êa›!Hq„Ô³Ð:á¶„‡¬Àí†sɹ”ƒ)I  
‘~ѯWЯÔåþŸ»&§áÄùZ;IYãJÿˆVõƒ8Ùcanò¹™pKG^2÷2‚#@s‰®(QbuÔXv‰*À•ŠÄN“•ÕÑ9Cžï™j霯 SÓIb2È iTzI›бâÁçÙôÎþÕ°ŽŸñ_Ců2<æ0ºp_'!%Ó>¦«®¼Ç„kÔ÷ ŒxRue{ÈšÔ åxuÁК+,Z<4¶FNæ¡Uršº—¶wÖJLqf†BîÈ´zÿ•*¼¥Ùa G…¢šH¤“ÿ¯!sÖ®I«û*U"5¨àøV ¥¢/¨à®l\¨”ÔÕûçPÅ_ÊR_ðÐ`J/*Ö;H6*v€oƒñȃØ'Ô§¿î¾8ïîÎþîÿnáÑoO<ï“ï˜û·û×~¹úøþöË¢ø·?8+ëýãÓã—ܱééQó&yv}#˜[Ú8°›ß$î?­óÅÌ@ƒVÚª]|/£ïe’¥R| DYŽe, ÅDÚ] EÈ#÷0®‰u·Zœ˜ò<14·×D?žì•vÎý÷¯S<ÈàMýRyˆx ?;¦ø~äÀ‹˜‚pd˜Ã ¡\Óñ,í>˜b¨î ^Tä8?`”óyÄîb Ã(y8u¨¾¹ÁçÊæ®×‡u±VÓýÞÒÐ-„qôBÿõÒųûPÉ–¶ôÝBøsl¡3yqå\`h*_8Ƚ¡(9Ðý.5»6KJ®pm m!K IÝG…@?»kC¹ÝMÒ²kÍdѵ(º6iwrm(R63k\›€,W°K¤e»fÄte»ô˵齅ú×X1ñŽ)ÓG¼ÞíÞ~íîãªÛÅd c­…miD 3î£*…=Y¯ª©ý¾P°%:ËÌL¡mÏóø®Eö‘‰|:àûþ³³ÖNïèv8SÄÁøîr&>¿|ÎKä„òt‚W©jÍWáÝË¥ èpQ£”K€›n‰úíá.8 „Û3òÄǧÛßïo>9θ•û÷o/ÿïܽyKÌüðpóæÍ燯D‰ïQÂCÀU8l!mWK Sˆ›è|TüÍk}öÞ’ìÏy1FáexÆtÌxÍÂ3”‰ƒ¾Ê­:[ÚY&ë[‡ÎÎrr–…†¡g¹«f/QR„1Éšküì×ÅOEsD7ýmŠíÑ84“eåså°c, CÅEºz6&UƒÉ€«›vx³ÆkLkJ[®ú¢a  +Mk c.™ôòTØCZ,óúºûÏnÚ¾-¶‹=t ‚"­²‡=4[4¨y¸aöÓЍ%ƒŠ:N¶¼Ó§îlä̳°{Õ‰x<Ï¢=·Za§Ëå;ÝLé›àÊwº¿.äFl£áù•}ÛHÁ!¿qº0‚b×Ý©óæY¦÷K„ Må”§²òPÌËùr§±e‰ê[–bž¾C,‹-KѸÂQà"ãîtiu=K2¹@Zu/µ¤¸Šs%qxÓŒË1• ¾£t§ó^¾*öë™± p…ž™m‰_4Å€ɺ¦˜moŽ@ñƒ±ºëeÛ[/·µXˆ¹¬qû¶Îý“·Ii‡¾oŒ0¸â »ö8ô”È9Qã|ŸöOšld¨œåS³Ã|ëÚÑ  ñv#m¹ûpCÔ— Ú<¡Øl£­ÒF{@´s›!¶0ý&0V“…»Ç¼ðQ̉ž¬¬ÂDï¿ Á‚Ôëþê!˜¶`€Ž bÄ󒯼ää_¯\_IgŸyL×7Í{î³o.T—Õž_Önî>Ýe¦Ðd”ß7E1di5Ç‹ï›3Ä@F¢m˜C´¥¿1×}B3èH-è`Ì ¥X:AMA‡Šü^Ó¥Õ †i&È]…:U(4)îÈ4»òž•!±y$Ó¬7®ÕÉŽ¢[ZÖRP[Ô›k·tú¼²šxN²Ú”_ŒP™<£<_Só<â$õ“Qü‹È7n“¶%nÐ6GKöS±µ­Õ®zКBªÐ6c’EeK(%_' «ò d_œƒë<ƒ%­UxâhÏÑ wtùÔ )Ë5¦¢ªÒ÷Šú+HúþAÒe×D-pK%C~D‚á~5‚pÁ¯Ö]>èéŠt1˜^]L5÷ÙKëhúbKÔ‰”±Å¿Âì,‘«v£+ áBƒcSi\¥Pm¿¬Ü!í¾y­U‰W÷wßÒïj¿*1‰]ŽG¦e5Í-™ê@sÝNØiËU‰9Y5ƒW3…€¡ ^]ZŠR‚W _rýASÉ´€W{mdõ>ÌÁ×à1¨ kŽcé}Ñar&ÙoòMŸLΠ·°¨ãw¼2ñ.T)˜JV©”|ûyf§ƒE#€]©`j–ýN–ü&7¶K9»»¿»8{¼¿œG c¹Tùö¬¯B]]ÚBŒvhÈwØ^_+NC¬È·/Ôø2o»¶÷ÂܬZL]Úk{µO‚98L‰Á†Vá0yô½+G&çLåˆÁ £/UŽ\ÓݵªÈb¢÷m·…WÈA=³ñ«ò¢<]ƒs޻؜Íw]î•ÖvNm+ß»ßîno.¾\>Þ§IÁyK€œèrá—áøå¸i´8ðÌ8Œ—É«þ&›ˆ–iþ*­¤M²ó9Ž®‰tZÀ¯a‘ÆY@ŠDÁѪcÉÝëö&åÀô…<ºª Ò”x1Vm¤µ´~#ír\8¤OFe¥U]~IßÚ gÓ?0¢ÉÎ[HEkÚ‚’—\„@º²¥ø–ùóX«(†Ö`­§bIwÛ^ÿ2Ôˆ§ ÖÂÅâ‘Y%]ÑìŸWMß½¤›ÄŒ”[ܸÆåæ¤kdß/¢é%’gX•Å0kÔ5¥'ß—ñëÇo·f‚÷×WæÎ>^~:¼~˜üM3—w.æFX=®‚×H Ê Š©mI<Ìå9,‰¦Žò€hº®©8*×n9†\Ì:DåÁ^&М)¬´Z£¬:lÑQ÷ 
rA5ƒ£<˜–¢k·Ú4z¥ª…|õµÛªcŸÓ!DC€åÔ5ö/Ú“OLco(­¹h$2\./\fÌ&À«Ù<½Òt̆ÃÛ³yz×y\U‡â¸{E\2Lª–.嵇Е‡ìiÁAqÂÇòÜ1oµùìYÐä³*:¨Ò+yuÊÕ÷ˆ­€L]ß{ÄQ†¸AËçÎÈ>ÙÒýÀpsB² (àNÏHpwûø-®ú?ÎÄ‚Í/ø‹³øù¿|Ù‡Ä ëX š¼Ó³ÑCÇk™ š¼Ó¡€F(§.ÐZ´TÂõ0çjaŽí!À¶µøXX®b èÔùl}còi5Í¢ã dÚ ÷l”júì}0Û3ù²šü `*Ž;:u~ /G,:x?£Ï«’"DO2 lªxCÂûÕÿ—Œ ûÌïÿæé˜oˆ¾Û˜ïþo^åš*L›¡Ã2(ÆÄ¼÷ÔûŠ —Ÿ M쿞5i¼<ºœ¤•‡à°€“BÙ[¾c‡žËJ.Úw¡¹x‡à,½ŒUñÈñÞËD}ÎõLDÖâ%Y|`KÄO횸3}üFÌ.CŸEìœHÌBÛ%6ؘ>¾1|ôÔ±Dé‚ÑìXÀ÷Äè,?…Y§¸ûû]%«´Î¡ö1ë¶Nu½Pu‹È†,ôCà8w¿_µàP3TK}-ÞÛ_ލïË è¶Š´Çt´“d#À'`f85à+RÀ‡Í^ó=À²•8B‹÷šn›©j<ª¿/ `ÔÏ2,hjÉ—î ¼„®‘|"|»ýóëp{÷ùÃå(Æÿ»½/îø9ðS=6ÔпæøÓP\´‡N)zžKpH¬G ÿL× ühC‰ 8ošÑƒB áèGã¨{ÔÐ&tÎ/¼ÛIµ??2 „Òpàçz ï®M+"û73é“Ñ òÒ{äjUÐuºª]:å–Qö¥"x7|}usõð;Œ `3a#øå:¬pàã’k!²“åÑÅw•À¼” ­êg)8 tvçéT’Rv¨êOíà5 OM $ѪèMa%IÛÜ@krƒ`Æñ–³ƒg°ÔÕ@¤Of òäýßKf0Ó³‚nÝ‚^*/¢éý¦©çÀIUJPðškÊú.£V~*„ÇS£;vô( ÝGüñ8§š‰cÿ¼ŠäUä‰åDy½õ¼ª›¬b]B™ÌšH?#À›¤ËÅ×éô#–‰‘U´°\`+¸,‹ãD2-âkrƒZ¨N ÀvL{‡×&fÌô¾&EI¢¡=°¦Ü?„]¬™*‘;TM`Àt»D½lþ¸úšŽEo"è ³>ð’² šÉ/ ²Y,¸f›lD% W”4Ô(B®ÙŠÆNHM*öÖ>ÍŸr»W44K(Æãòx§ñ¥¶%W¨ºãta¥Í* ÈéÕ@¤êÄ-¾¢!þàÄðŸþ¶-’¦ZûMßUû×ùÕƒ}Ì/f²¿¤Þ÷ÿÚ6Ž?©`ŠÊ¿ÙŸ?Üý}5âó½Iél+µ³+;KûóÀã¸GšHFpû8R{ŽX‚…3éþ=,&òJ€°$Ê ,NYâúÑ)_ÛN:DˆæÙŠIº*ÑÍÒ7NÙΌɧÕ,¥ÈâÀÍ”èo¹‡dG§XIZÇÚžÿ*u—lÞ¾šT:'’Iü˜.5Š}Ra¸ÔtÜ®‹6沿Ly\‰Ä‚†ˆ1È3Hš>cÕ¼’¢$p¨Ÿúle ‘â1߈ÚÀW…J_•ÌBìüXøV6 ³ )šE¤óýþi5Û²‡±åPÐ^ëÙ˜ïî!Y_%ãÌ+«4Õ÷q_5~µv÷UÛCùÌUúó‰ÏÝÿäAXÀƒ`9çO„9< @‡ÖüÑÇYÜ)‘^évf;àRɳóÏŸï.?Ÿ¯¹ªIñeG/eOX0^íÕ9oêã†eû²Ü–”B>RKgWÅA¾äð4·*d"¥¥ñ­…YÕW”ZSTßݱe=›EÎE9éªXÕ!U›óë#§`0 ªw‹÷†Œ è;R„ï÷ÇàÃfÁ©…²Zÿû<½ç×ó¯—Rðx¿¯à³½èå,3¬}¶¯â³lðrPÁ)ò]ă‘üFþÞ´€pJZÀYUƒÊ_µNñô“àŽc¬-SÌàß-'D;È‹««ã#ü"† ði›TX°reÂ*1 fÎ¥ ¡`)$²­cl¾ÛÅ,ÃÆ÷/«ÈWÓ[QÕ>ã¯NSÀnd„Îþy#FÞ/’¦OŽâÈêM‰u §…'È7·»Çs=Î+ðÁv&qdš‡&G>‰@qvxôÙ‡2¹Ñ|"‘êU¸Á!.aýGU‰E ÇöcÍ@Á{àÄUW`{1(ç ív_VUè2gÁ àÕÀá]:u~ù®Œñµ3p$1æâzHëƒ-7ÑV¬ÀU¬,_óï´dõñòÚ^â5êDo·Xµ9+©#V9Y’ö6ÃhÈñmrý4äø©ÒC…«Y´,ûLøJëoU¨úVEÓ’Ën»štÍ­àË1*kþ²m÷eU¾1x¯áù¥Êô!ÙK•”B:Cc®¾EõÍ.U'V;ÕçÎÁ¸`œzž»(÷"s<ÞÑ}ôg]Ýqô¿loÆÙɯ‹UTq$_æ•“ù]*ýøeWÞ’¹òŽÙˆÏB+ËD¤ñ9ââ)´$;;¿û˜ò•·—t˜6ü'¿åž±êÊ;VÁDÈ|âÚ𡇓ôŒÎKìyXon¿^YÀ{õõópq{wy{oÿq3ið»{¬Zvwö4Âø¡ô7Ì\ÛæÝ¯=G/¦©Û(ŽBDFQz[óîM榀‹ëóûû‘rØþɦg®nç­yŠÎÑGj%^©)PR^2ÖÈÎÛ?]§-z%.»Ÿã=ÐN†“¦_D‹}J‘ 0c³§@vÄñ»¼ÜÏm̃ýkNo±DZ•Æs÷i»$fدÿŠ ÎPŸOºVO*'Ø‘›¯Õ«‘ƒº6—o™û*`ˆ‹ 
m‰|Ü©†ÓQœ®£6ÍaBÚcàÈ—ïìSE³ˆ 1˵>•UPH¬si\p&”줶¤C]ïì#9Í`‚)Êsô$§ 6mÎlÔ¨ZrH·`&l™ç* Pá%C‰Š@æÒœ<2nF¤ÕZ(PÑîÌXhwÞ>fK0É6‚ 6œ£¨30£hWeÌ×ù:`#g—‹#LS‘XŽÇ>¶¥ÉôP·Ÿ—|5~¼J"ÙÕ*:·5o  ™qÂ4Ú Â@úo/Н³½èÕÐrØL½Ù)¬Ê˜Óàõ|§hB*ŸèL—x¼²{„èxYwÏÙù9ÎÎ¥ñŽ9”îÌ#’@Î×íÖÄÓ9`ô©§uާ+K¦aç4£ƒä¢cSS ö šg¥ðM÷&k&:öûÑ1ñ &è¾×y­[È­ºÂUC$j_6UˆÈAzôV÷\Óñ¿ý·èÕÇßÃE¸P9ŸØŸÂO”ëD=(ÉT-6Ó\¶¦ / FÛŠØi_¸’ `ÿÂ\†," Á @©7pЇxÓ·RŽÿrò15+Ùpˆú|'ÛôëV²™ 2B˜q7«œÖ¿á*3 Ežì³ÑQ§¶¯˜ÆCtd™G…MDŽ¡l17ï1ù²šlöVäÒ‘¯ï­P[û2žn¤H™‘úÄÍ]8@Réûš6{…wÐê~¶zÕ¶zµ9>†s‰ìÄ~]ˆú.C·”º› ûé¿nïþ*‹®B!öÔqÔÚ°TVìý6ƒe½z Î/¾X ³)oŸü\1ÏþŽËãÜÇõXèÔ(rHt KhÌØ²;]p?ËêÒ¹ž$ƒAK{±Ô“”h #•Â.Oh¶W“ùSË„½Í-ã•*cY¶Ú9/<ûºcî¡=Rî™}bëoHsfe¡kši.wº©ÄR«Û(vÈ/£~’k£J@Lê8ž8AzÏ>˜˜}Ì”µMOi¦ÙC÷_°íeG¦„«–‚½E××fzÏËøï½ /l&¹ !¿f•uõ¥z¬ë¯ ®yM92ík vˆ5½ÓrÓPó™¿\ž_?|i2¥ÑÆeS\2IlÙ‘¥{T^ΓùøeF97ê°@'–ühr‘Š~4KÜ8ùÖ&nÔl,H8µeì=yœ‚[ @4-˜#}îC]Z1¸ÁëïU›v àú-ÊmOH‹:»Í#QÓ@Çôy:Ѽ¥J9Ð 1«`AN»Bó¢lÅstéâbfºR/¤†é+Y´¥œ¾,$g;1§"i“¾¦¦*çôÔ©Ø=h4›Îñ&E™#ãy’4 ©ª9›€Ã ä¡(1†ŸQ¢¾ä]²¤9s ñ•¨8¿ýyqöíΔ~±Šñˆì£˜WXƒ…qÉUˆá‚:øÐ€ó ´ÚEŽQIÈm³¶£%ê˜(tJÈ!ך3‘M› µ†@ ‰gáfÁ.Êg2"õN®“”sjeB› ªÞ«³¢«bÛ Ô„mó(&T§'ש³‘EÚR‘º‘|ÏíY{-ã&³ûÛëËÛÿ<£öy".Îþúã×mÇxš—qœzgÁ­™Ý!=TZT´ nÕ-¹z¶à):‚™x»^v­À—Ùâ–XÓGP Zísýá95ÙZhæm!œÈŒ¹²‘TVeîß#·åy±CË !Ñ$a›½3ÕW>¾„%ÖR¶žJ.v@ä0„ àæ'[I'§ ¦à„Å3Â9ÿšk{¦|„­)Å|‹¶C1ܶ(‹ù;Á6Á|;@›VùB¾tsÜÙgº¢MK’ªI¹®èÔÀÅ?G^«G^9ßam|©¥]ýBË:o¤FZ0•©èEœ“¹­>1cS$Eߘy¤XémƒmÔ¾ÅÉüÚ ‰°Bz‚Û__.í‡þ¸}|0óõÊþºø”þüêÛ·ÛÛëRsaá§S/áõÕÍÕCš2á¾Õ%Dp‰\b*9ÍDÐ’ˆô:—今Q)-?r‹PÌчX¬^ˆŠ?ÀÛþ$·óíéXxÇO]¾ À.™­=XYo^™¬zG»ôíöãÙÃíÃùõ¼üTˆ¥ç©Uä *¥¹lò:°´eDÖ.Íd;œ^kÒÌHÅqEÊ¥™ñ4J3Ù~W8uiÑAú'š3‰¦éÉÀ  'Y0Úôz«N!…¦$mP¢çÕÑ’yx.]÷õjÛåÓcoýS.ý쇛Ýô:¢ƒ…"&“”{#…ìJá‰ìZÜõÚks@öpr‡:ð/kâg 㻎ʗ°vÌUqÑzïà#“Ç%0ƒ‚AªÌÃ6—9êû5ÛÒ1[“¸Ü†w*'Ë_Ro´:æ, }+‹ÕüÈÜ_Ü]}K¿9ù™óëo_ÎwÞæ~R!žG3 ]ï”ÌY½X¼YwœÉ~u´WëŸ^/ÍV®;åÚ.²G­:ÒÞŽtt.Ûݺ“\£Í^8¬>Ðÿšy ‰º4õ vH´‰B<ÚãÔ{»[¾ò¬3OKˆ`䈊nvžþºkrA8§PE‹EX<éä²Ëq&kÔ$ÕÍ;ì%s)vû<:o–×LOê,Ìò7?×,ÁÛ¨š{ ‚G{·(ücSͼEÁ-اÍ*ÒïÙ-™: }ˆð®™dp^£~4paA¨à’ÙŽo÷ ’%Lœˆ¶™Œåå3ð´A î¿G×Ä ¹1ø´zÀ1»pb.ªâ’±C¼+B™gð•3RIÍûèoñÁ° XŒ¤!ª‹ëÖ˜Ú×ÅÚ5¦Jܵ¥Æq0¦­:Ç `ün’\…g÷aÔ©§Nqʦ;y@vƒ©E¡ƒ… €¡–%7½­áÙC×(ZÀ’ÊBº‰¶ŸËu—« ìþð˜9þ“µuB³-Ù·¢ÔüfrD*[‘øíÉ8fEâ²”wS©µp(éµCçkSñÝF*[Zæ|—D*lÉœŽŒ ó¶É»=†m¼.Qˆ«t‰¡ˆêòÁÁîkÊÛà‡fôøÙJÕé#VQl'ô°H 
‰g¡1ˆºUèá–Љ«(E^í(¤ÚQà ÞþÁŽ‚ =:›/?À?¸û´š…×”V.ÆyGX”Ö)ÎlÁ6[5;#‡?ì§ZrRw TD*þ`¶Â4‘ZØ·×v†8ÇfÔð'ÆU¡‚†E > QXB‹{ù ÇïÇÛ…MaåÃÅãýÃíiøöÑ’——Ÿ®¾^×OSµÛnÿû!Ûö¿ 8™7 !Á'KÑtüÈ»_2Ý6õï'wÂka:öÚÑ"¾@³L'RTZ1(/bT‰ìN£,j9{žJÝž?>|9Š4‡Ê­2gIn°|M˦‚>p1EQvÙ9§°šôš°a£Ê KñDbX2ì—LϹèƒ(z~s¦âæ™JZ”"Xe*ʦ‚š RvÒjd+ ¤€³lÅct+Ò[‰}\p+þfc“¥«J] …´ÂåLf"´F!Lpê)̲ «ÐÅŒ..è£hàJHðÞýPZ{ìb ¸p)b1Qf™c'²j-ì™p†¡¤¥¬ËŠüȸäò]‰½Ú9›±\ݤ RûîîþÎ7¿~¨¸‡™ñ˜u¶œJûcµ²dKÂø¸1™¸dÃ߉<ÿŸ¼kÛãH²¿"øyYÊK\2 ÃO ,ØÅ¾ìû€igH‘ E{4ÿµ?°_¶Õ-vuwvgVVV¯dfŒMUUÆåÄ%ãÒAšòÑïea¦ïØ\dþýöËË»_žÞ}x¼ÿüîæÃþc8 Þp¢Ã)™6ß.,‘Iýû-©Ú‚¨ ; Ì‹Ï6Ìúx]eñNýÝeÒÇÎZ:T¸Iæ%Å’ôA6nŸP®ƒôYå ~Oú¦ï¨“>3ø˜Íùž>²¬.-> ®!øG5Ùjè+f¦7²gËSQ¸Fö‚¸¢ìY 6×N´#\Ñ£A¿Y[ý³ª\AKÅzpV\iöîÓoK®ü<¹õucXAJE±ŠBùk½7Âv+KÞâÅá Zú \dëæŒôg’*$kÄŠÊRÙº‚)a;IUðÈ_ZªÐqKÑjTOS#enÛ|¶›ÿòøÛ§«Ý¯¼ßÿãÕõùGmåI9ÿÈ è“¯°c,R!’/ÝѦ‹wäH¶XãÒ’A!´Df€ãìYß¡«MìH…,¡…i[ v·½‘³‹,éWÍ“%ŽÐ-‹ò}ËÜÄ(AïDKmAtLêë&p¡"w+âqÉ^áNŽVSíxýu·—Þ{H¶&ºA-!y™•LãÄÑEØÁÔ€¶ûUAæ– ==ÞLMÊvõn2õᮘø#&¼ù€Wÿ¼AúÐÍÀD?Ø¥bª@Š ÀÙb’ º$s¼Ê©—ú—öi^аÔÀ´]¨ß&<;Nú“9¶P+Ô$s”Fòwoôìdb*èÌrd½5“q‰4y'M{Ä ‚ƒøMf’ݼL23Pö@tT’¿Û}d¹vë”JÖøœÎ’û/.2Q#4õhù  ´nÛ†Ýî=Ž;úöläåÇ`¦Fn„ŠFËcÈo†ÞQ®×„]Y×^A@P+‡cZ놿ñNƒû-þe;(E…G\zžß¯•¯zN=Ú;»Ùþ÷íµð×Îþéì‘ÑŸ~þr7N!yQ]m]«;%:‚°Dvú„„ެQ_ÿ• x|5æá8“$G„8(ÍRL¨•ã#÷¥gDðNÍbNh'‚>ö!ÿ~}ÿ^ÿgçNßÎýrÿøû»_n®?_¿|ùôq:ZÆE°}VÉŸ.ãO=D=#nሚ:×<)z´UlP¦b3ÓLÌ¢ÛªóÓLT'€£sØö°ùjÆÉi*J6쫈PR˜ÆaÓg,+ÙpiH$ÿû?uáÚæ`UüIÆy…ÉIÖ †6ßÀˆK ý^î^lŒÉo÷¯ãxªYsíôñ‡fT(cié$}KXi¬]ŽlMž‡‡¬Šû(¦:÷EùUg;u™W¤_mŸ XéxôRRðýwÑâx ÄË·0‡v»dòéþUEv ·ýÉÞß›·6…Ψ-¢ ¹W]:: äͤñƒ1R*ˆ”œŸÛA§^×ÍIÌ5sQý±îsOa÷i¶rt!,Bc€×(Ù`Î>­Çu Ò   PÕ¿p_fa‘ºDž‹ø|¢›Bµ=Bšu ×öho¿-)Â2(Xw›Ú†ÎÛ %{P`G&5¤|áY£‚U[BÈKwL¨7'9žT¢„Šý|”ˆ5b`”¥·Z¦²u·Zi¢h¼è±èi‘ ~,j2A.—<9YÍ¥–}•ç ç(ªÍœIË5ùþýÅæÏ …kzRÏû[Y·kbvw]»®WO/³\%cëimQ| iKj©·G‰æ83 ÕËdšëX¶™žE(jZÊÚÌUzôÁ£“PB˜e1KQ£ˆ)­í<“(rÎs$gÎdœçhV¶ÆPþ÷?>›Ih&•W5[G!;ru{¯ÿãOzÒ_o?ÿøŸÿõ¯ï²Ð°-­×0¿|Q>œZɬ¨ððx3',ßýüîåõ£Aö?m¿â/O¯ŸüiùË~»¾½ýËf;@Àw?ÿüîUàW;Ô×7Ú×ã]?¿ûù †Ek¯[$±r0¶¿Ä*.àq´g:•Qç=¼(.vÝ·I¬—<<ï¸ÞÃk0a§Jú^$Y–ú\a‰— ¶‘oÝ­›OÏwÏwŸ¿Œ‡Ý—|°`Ï´äêéþúÓí‘gzÁñ&ˆuL/ñ Ô¶”‡1XcìüAh‹×ÍS°õ¸b•qeOÁž‚ÂFö¾~B¥¾‚Š6 Aš]“÷QÎÍT-+ªŠ¯¼Jgwì+§HƒÅPÂ^»ÎkçT…½!ºjì] §LžyQT¦2Ò-S2:¶ë®S<ªjßø*ÿz{cï÷ÿ8ïÚÉÉi¢Å͸è XùÖ2¼0 ÏFÞy„êµ& ¾¸ìX! 
1ÁÙAæ[ªå¡vB–ûÑLvÇŠ®PÛC(R‹9v¢Âš”Î+%¿?¼ÞÝߌ©¾ñÿ­’òöâö°l•E…¥,*œÝ9!VKIýlRA™ÀGëIíuý›Gl3ús·d3.ÐŶ ÎTб֣šç JZ¯lRË]à¤5_?;9L¹€D¿‰}¢¸?òcúˆeïE°ñ!Õ$Ѷï ÅΙ>b…„jgÓÛ\šoájúøÌ»váT–µ‚e•¤Ð8/‘Dpñ;o“ѾJãÏŽߊuŠù xoÄêaç ½«0ÆoSÛ‘dIÉQ$ëµoÔ£†ÆyY¼¥Âcåõ•Zãcô’*øj-Q%¾²˜¨óv²ª}F2¨'¬f ÁæýD–æ¤=b´õj+½R&‰_mQCfîýÃõ4ØÕ·_=Ü~~¾û83;¥ùC3½Ï«ÉHon:}0…¹Òµ$jÍ\¡­2¥»áQ¹B8«\r…hNèÑ3½­Á ûcC§ï8œ•w÷I)÷ðøüeëY}ÖOئbúó8ÖTèÖMm¬ÛNÔ<¸3æ³F½œ»#×7£…U5dS} ÙD9ÅB}èÂ"²K´&zÏò‡® ®âA§Õª7Œ² èSòü7_A}»I\ŒÐ gÇZn)˜Ÿrº#Q'è&bĽ­oÓw´A·X#|˜Ý=ôÞƒ¬Ýÿã¹9wPý»vÉßwRÜ‹½Â*°2ÿ'«¹ïô©è!` ¿ÃÚì<ô—Ôyô511bûóÅÙªuÂ~†äcÚÇþÝ;°ŸÝHxqì¢õ±Ûšwþ¶fÄI¼DëÇw[æÝ‹Ñ˜üVÀKRyåzáª9ª‰«• {H«¢=¶4F¢°MöOØP7<Ÿ`QÝÛ UàTFuæ³;\¾/7juB¨2D@·—‹™¾¢ ÔB |iP?L®®ê¼M‘‚ºò>Q ©Tâ ]=z_•”ñìg•x¶¡Î ΂¤cOKpD\Ëî2Åà¤åK¹=U^w$LÆp&(^w¨Ä§³³·'÷ù‰,»£UÜw¤HCHV´?Uó½‡dgÐù4ؘ–ghr™ßeM^»R{CýM|¸§ÈÆ¿€ôuùL«wV×[Kh.ètM¾×v fÃã¼¶ƒNŸ1iHâyvGB§Ï8Õ«ƒM•\l<}¶Œ×c[+lxmCi¸LiŽd%ÚðÌ"­˜‡³S6·GÍ.*˜¦\š#Ö^üAeÎî‹ sB0H¨Á>2@±ea§’“Ôë`ô¸¶EÕó`ý*DBc&_ u¤²ãëvG«0zÐÐÎÌ]<#kó”Ý@˜êÙ N=„H¼ÀDZuê¡iõ³D4»pó°“g¿z·o¬ÄYÑÀ<–Vç|‘B̈—Á<¡NJMýê Â3ŠFT“ø’¡Lç6 §˜Òb HÕ˜Ã@ú%®†­€±àýÚÉÑçàíh5ÍêÑà Ía\rÞ;âEŒÃÔ’ýVBº¹[¶3l7нýÃÕn¢í›J·épnO£j!•™->H,29›îx#G—·ˆkpµ»ÒŽ*<ÞqëÀäñ-…HÞŽ—êpð•:Ì ¡”PŠlôªÄçØº9yÈêðäh:LôâxoáëÞCò¬ä‚Õ‹ëø}>‚O͸îåò†ü›ZȽ–Ú ‡lõ_@ÇVäìvÓñË{•³§Ç; Ïö:ÞìRáýÍãïŸî¯o^ŽbUVgn^¬:û…“¨4F7;(ý¾Sá'Z³”J2.@+PËײ°ÈF7éÿBSü‰™øŽ ÅõÛ†q€tžÆ–ûÂàÁñ¨Ž³ƒ­vg)‡ŸAŸ$(…=pšä›gu?:ZgÜ««¾<ç¼HžçE6¼râGzi¶#Ùðʳ®$¡ÕÉ.â´î]ÞˆIŽ$LoѦÑùÙª¦]/å…j.åÙI}ÉÕ –ñË5úµ.çE¶N°e/‚UD&[Uí7fÜCWo4FT­cðE{dÛËc,Ú#u³ùÊÝÑjcpBØ._;]ïS{òâHQ«&ûYdvr;£¶ÉÙ·á<µ³þÜ<ó³ðÕSSäݾ)úyMøÛ\ñLøPÉRÇ2$^vBu؉ƒµ,3V„¶–±!¤ìÀ‘ÉѪÏ8$‹T/qå^È gsžÈêÐ ‡•C?‡5¹KŒU“s÷òüúdLýðz£¿H{fÆÊM/¢4ÀOÓK× L ¤*ªæb¼\ìÄ2$ É—‹¾„ŸŸ=±98çóMNVÃï€. 
AVÏÕQÎóQ6ˆóúöËñ0½e6îï~¹ýøåãýí%c¦³oß ›+Ã&ó×½—%À‘|l¸³UT‹I÷å® VW‹z•X(&]Èú‹®Kò.Wû79YM˜]M(šIíîkàÁ[`F››–ÆG„ô-­cMç¬cÕ¤”`RbVEOȼëãjä4ØZ`ÂB’˜+î<þÃFä¾ÿùoªLŸ!õêÈ©»Ê´p]§‹¯²>Ýižˆ†ÉíŽÈ†­-ãA)‘·yÛØTBp\B™åflÑOäâüuc»÷çï7gÍœONS1Ô¾ Ã^/åÞ3•èÓ«—ËÚ¡D_‹–襸ƒT\—½•J'…Œ±wÕs–¶µéª[7› o²¹}^wµh„þC3{Ê**¾¥¨ížtó‡i¬@Ѧڂ³˜ ˜*Jˆ >Û•4¡]u‡¦ ÎAÄNG’IÑ}‘^v¬ao½ß”¿Ù[Ûýóz”©Eî:þBE¥ªCZ0âœ+AËš,Òê£YÙ ñ[˜ó|÷t}sc]êãx©Ši~П= ÛnøÇÕ†Y³P4º>ÍÄèR\Äè?}ƒ €òõb{%¾ÜÜ>Ý?~1Ìl ™\&î ª~kÞ`B&nçN…‰ÅвO5,M"2·‰g%’v3²Á¶ +IbÙÌ¢H⢙EŸmþЯ‡ÕÏŽ˜’K³ì¬ÍÀeÒCC%Œ#–ùHn¥igÕ8ÙMlÌLe±$E±‰>—+ž®‡Ôè«ÈÇY!N².2H«'CT93uŸ¦']HTÚ–¡¯—VµªYê×e­hœNó^º…¼g\›÷às5¿V‚¥J.2—®jA¸Ø.Ý,71Ïk²ên•ÆÆAÁ®¥L2¡Í4 n[ÀCX•a/ƒS‘(­Bfk®¤óýùãY)»gr–r~- â)FÞ˯MŸ±(¿f;‡l7Gõ°£~rв[[¢[<æ#Ö–»A€õ´Å‹›ÈÄÅ ¡ÈUËNNVqq9 ú»ö²®Ógdû‚mAT+ˆ³˜-ÖÈNíoˆëf^Fâos÷ûåðÁ6…ëœH¼PǶ`ð…*××Û½„û>Êñšlõ ç]Ÿúäš—øÝø[™»´4»$åü[OÕž˜d¡w’œ,N<¢>ƒ2€j¢_5‘_%‘­kž‘¿gçΛÐ/~ôÐTè¬& lÆ^—í4Ç4j‹úò>:Ҫʉ\²˜ÏÉO Ò¥ÝOļÇP÷é·éûõûiŸ'^Eý| °z†¶%#þëóãëS‹6Ö@á´¦›4ËX#´¶ÕÄÍêÖƒ&6 ÉE_ßЖqá–Ÿ<4Ÿd!Ú!¿ˆ…@i•=—j4ýÿ[_ü{¥Û¯çý]W§eq£e½÷í,ª°‡èrí`sŸ÷ì‘?5‚³Hzd1c½ÅÅI ­Š3RE‹‰.W×2!Y§˜‘9&že0 .RiŒ²ŠÁ £§?啦1Æööâ"E¦Øri¦^œJ…Dÿ}Þ{ä=à˜—×›ù§Å4šzÉ”V´£\'8Äç)4Yÿ"…&ÄÕÝ, ™ì÷¥újÈ/¶’ë{Oƒ“ á°,(‘´l„rÑIb¼LÛ™H¥'X X—s*¦W1d7%O(× +Ùä9XQ›2VXŸáú!™d68Apƒ:iX.ï{Kšµ˜‹ÍÇÖ.e+î9¶›%$²ú¾æ.{@ÓÙ8 Þ;iº3c|gÆîØ{ÊÛƒYꈓ ÆæœoÎ Yk?9MÅlBý*çS¢ƒ[³É3'ôVW¦÷’lÙ o%ÒS/¿4«‰‚ Âä•…ՉâPl7ŠvjïNVqif_¥  ê)ȸ†¾e.@H³×±ïMľþUmݯ*ÄW÷W¿ÝÝþÞf¸3ƒ°lê¹”9¬üÙäç8ŒÙ?"ô˜ƒ­ßŒ.Q}Ì®_&l7í¡ùþs|ÄáþÏ.i¸ˆvìâª%Û!½×·ô.dgØÕíýi^;xþ¡™øEõFßšIœU|˜©~Mô꥔£H0K)öÁtRÅVr,ÒôH¤©©Ÿh³Ç¦N)ÕIð]û½¯=BÏ׿7@MÕÓ–+)eÎ}ÿñúóõýã¯ÿu*x§;·ß›Ux{XÖºÖÒͬ)©¬>Â4µarÔWÏ×!çY¥võJm"%N\*{Ø€â#ŸWk£nÞÞ¯a¦}¦0ÎÓcö!ù%z Þ­;Ào¤«ÛÜ…îÅÄ#gD ù³ˆiõþ®Ú™”“Œ'Ÿ"Ã"ƯQ¹£Q® †B„ËWî,ð¡ÐùvRWrÓlSgc´6»:Ôì¬â1ì€XŽcTÚxC†³àŠùåMZ´€«~&*ŒÏ×:–hep5ºúãÿcïj–ä¸qô«8ö¼J‘HÇœö²‡ØWhµZ¶ÖjµBÝòzûî d•ÔYU¬"“I–ZcÇÄL81 ,%L6ô˜¯Í>ºdœÏÇ׿~xxsóamüúÕcŠ1üÛùãŽõ’RËè; f$7¶A†Ýü*½9–½‹X§ú¾¨ú²ÍT uHFÙ²ÕÔJõÔ“ymÑ©’`Øt]$¤–Žû€ê#± špæ`ERù\ƒAñ\…òÞwVÓ¥«ÒÏ ÃºsFJ›PW˜ûǦ˜I>è–Ib* ú ãPïés݃|dÞ–}h4gÏÚÛÄ ÙrÖÑ?kÕ2gmãç/´)ÿKòÞßÜþ¦xmeÆ× 8=ü¹FÓDeFS½"ê>:ÅçÚo™ |Ô ÿ}©1Å…Ô˜ºýëÉšò¸Ýê'ÚhÈé’±±Þ31 d¦çÀd#[B!?#F‚.@ÛªËW·~ÝK9¢±5¹ !h<œÄùí Ûâ34WÄ3ÂêÑ1„mW@¤!°à¨Î{ØÌì‹RËì«È¬Q£ñö="¶3)^ˆ½KqÜ 
þ¼³f_]•)bD_íé±y›¿¾ìäe¸‘dc,81’óADÕ…äó# ±ã8 >]gÊÒõ¹¿ùüûÝÓ§z^ßÞ}~zÿNµúÙÿyõ+ý!é´ \½ÜÕ3QZ~va3…Z£´üì™þðÝu„dãÓ7ÝhðÌ”|Ò{¬:^^f¥OÀ>Ï>…󧣎­§´ÁLè' …±õPÁy›j|…<˦RÁB;'z«„Ø.;'°h‹ôî‡lóùB|².¶ì¢àŠäyÅå©PmŒC[ovrÞ;‡,<›…ö g"ºDßVäÑQ{cÍÀî¸õßosOÂQ“^Ÿ¢/7Eξ7â;wO>ªU®vCç&§›Ÿ~»ñÓýû_w«™z'‘(´KŠnyŸt™j¬”d/ôž¯Q’èR½-%(±ˆÞ!åBË…Øz¤ÌuÕΨþixwPåèý`ð61‡L:N·l9à+÷MVÍXFŠÜ½or¨œ;r›æ‰6y¢þ“àDÑÚûÞ‚\ÏŽ]»ÜËðìkyÌTÿ̰W·°¬”U7÷YoBB›ëS•Û#,°pÖ}^¦‡û¬Ëö)yâ5ìrXd‹:z`´û¬rÎ<ˆØ–Cê3:»c¨Bཻ]…À-À;S? ‘}³UõsåëÆ r2d+²ú›Þû»Ï?ÿòŸÿqú@áé§/ºÀ7÷w*ݹªîöÃ{•¸JêæËÓoÏZïúÇOO~üù—êü[óx82TÄ­$hì¿¢EÖ=¸ÕY»þ+ú‡®áÐøíÞáøüÅ6»9‘4"4ñFD¯G›ÈÕNAF~i…+å'ç@-Í%+µÛ9d«ê[«)Á°q”Ð [|$ËIkØà-L•¤´u^F:ˆGÞÛzÍò»Yb‡Ö‹4d2úšN¼+uOpAÊCæw‚̳àˆUÃe‹® Cß?ðÂ8qð†òZ#ü›÷Ím^ä#Õíúd¯I·ÌmñwkËGâGï’ÛbÐ7Œµ%çbPÄ_û¨Q-¢¦@‹\Æ„ÅIHˆ±Â„E°dÂloŽòïY =â,]µj›pöK³¶¿Iû|ôÃ-UÜ·QZ*L“#üJópò¦ÎÒ5ÁªˆÁ¼‚YuxµJýÏžaDŸ˜6)t€&¯À$ºß¶’¦Ì`Ù„YmWê°Ù_“˵Úû0”²\ß6SÑ¥Áéw£ þ=ó…M5MäÝd³ÃCí$„@Srsst3{çü WǪ2÷¢‹™¤5¸‰1Æã¤õN•R¹œ)ù¯[èÇ›·w¯M¶_¿Ë°ç^‡áÇæ¯` ä|:e^4?ß§`”«ùèpüÿÅ„CŽËFnWœ×9ñÆ%ž.”B-þùùUßPéí«wòçÛ§“0ï0u««\Á² íçû•ƒU®`]"©ê"_¶ôö Zßbé5Ìâ0¦“‡2$m“bPýã’ÙOzÛáb±Ð¼q¿Àsbö¿í¬†¤ÍVe]]‘+ìNøs<´²¿mRÜ]#[ ;Ftàá/ÔÜa©ß};Ÿû5ú;N~qÙâá$U´xôBj)mWÅÓfGƒªT3NQY(ø`Þtk?©r?äSóP¸ýñ¶ôCxˆÀäÔÍÕ±Bž½@Ûn”Æ2â $b*7Ç„K |Ê“ˆ= ¯hÏŠ` ÉÕLB»«VAµI­aø “É™äµmËÆLxݶï|å<)׿/b%œ;ø˜ôêÂ&k0È´ŸTXóœø€ÓŸު潟y;TÂûV5;šf¹&ÛO¤Ô`#Û[~PH}?Š¢FŽýòÖzŒj6r ©UXKP­2ÍRà/…Ö%o­Ëö6÷"ÔCµQШ³ï7)± †j“3dòÖºãä 2\—ëüjè?P»O†7¥ε3² vÄQ{7*cûàµYê ¨W‹›ÀV¸¡P…F quÖbà¬?k'ÆD EV!N|%%KŸ³ØmÖVí¬mH‚z£±Öp[h´?«bÆœ?«[Ž,ŠŠÙç½]ÁzK{Vڳlj½lrDAMÄPCù|ôã«&æ7ÏÇê}¾}æýs]e„ãv—¡ BKeÅHÞR[eD…úÅûzø¬ØEÅxYýÀÒCšJ,ÛæµI€´;K‚´@‚ߦ€<Ñ03øˆˆ.I¤‹ƒÏXã®Í—Uƒúµe•@pæ,ƒÆäÉm*d jë Gš…ˆ@ÃYÃîo>–ø}#z}óðêé³Uº¿}u{³Ž™0ž‡T»X=âHµ“k ÝE‚ž˜o¢ «V/\ïÍ@XÑ>‹1pUÅæ²Eò ¹ôàmœ¢$„˜–EôË9ƒ÷æáÃÓOoßµÒò¨§èWtÒEWL’6é´—ñô-j,9SI­®†”)dý×H]q9Ru/X“úJÖ±UÀ’;Ë –È¡"E„Ö³œ?nÀ(v&Ågá¤cFúòÓlß ~¿ûù—÷o¦[º¾¡woÞáí]®ùä¼$woÉ­;‚¸…Ý«#b„ÐÖi’2&13:Ì[IT.V$uæ/òVíöš}oZì¥btŠÞ‰ÈtÐI´üĦ6»ƒl¯´T‹Ž¶1u aÃŽÝ'`Ä8HK|Êž³ü{¥žôò­#}~Tö,6P™Ä¶h þÛ LÁ£·›2‚*õPZÝü½ ¢diÄã^/WÓï$çsl© Ñt™ð“zb5]õ¼4  ËØ¢‘@0º.ÓÄìrÓãP]´€Þ_™,µªÚ'ø“´Ááì±õâd›ºSCÂ\¢ P—ms'/Ôvò¦ÙMÇeeE*_ lµÇbg5œ‘ÉnÞŽq…2F`Ù¤Œaprg£—ÓŠ;;4B/ºßÒÛR׆Iâ¯Ð…¹*ÍsÒpÉë.WýØ¢·’ ®n­\õ[çº(ÕÁ нSh¿±!@rý—¹IAÿë 
ÙÎ!Œ£ÇéÖ`u4)-%hºŽÛ²¼ª Z¯êöu¬£Ì©ºÜ%#j~ÒPŒi“RG­°°%h•LКN 'LHŒÅ¡,sD°`8m¯ÙñŽ‹ÍT°#LQ0-/?°OA!DpëcÖ>·@bËÐׄbäuq»3…•Î{žÕÕN5ΔBwñNì§¿ŸÐ¢ER¼~ñ6xq«ª·»"ÉPhØ’Aå…¢\±äøT§ÒäŒß4”uâžú²NcÈfU¿í¶ƒJÏ^.&^]£e¸Ã¯RÎTlØ9yÅD83Æ xú+Ž{Š ð@y² M> ^ I¾¥[™DãY½üá‡åÊa®Þ¨èxÏHrsé2]é^´9j‰¥ìº`.OÖµéÚ K8t)`Ê€®”x”tn‰¨×y‡þƒºØÇÉô½ R·ëu0÷ï¦Ñ?(–:¸æ fÛJâ* ]ž«¼—hnÜ×Bd= Û‡Iغ® ÙÁ÷ŸrN@“U‡âPGÌ{?éY,3®‡õ8ߘˆ×e¨\”¡°Càüê Š qÇý_‰¡§lÆÔ›Iý6ƒŸ'çè†yL{M}%¤(EÐŒ)çè> ¬‹›«kdô›1ó¿—ú{î/Rôét²i…¶Küe¹Îïþæòù¿uG» @’oé…p $¬õºÖ¾^a&l´™”KS@¹Œ>ÛxöU"ÂÖì¼p€UQº > ž¡º²?UyÛ2¡ÌçÙv±Ð·]¬* 'õ‹k½º‘Ç(h ”ú¿Ém/ˆ=¦Þ©ä)Œ!º‰æ6ç¡Å‰½íTã¡ùŠ']Á,/äÕéý'ÍL9ׯ`‰q¼Ûå$dÝ.u­<ñu©É<ÂÛ^G¯PÜgM÷™ê>=JFŒWf¡ÿA9غ©¶ôòÄH'Iû‘Õƒìò…ZVøþ ŠÿÖ]öêWúCÒ*#,ÃH#l{-ÌEjº’þg-ÍF‹¼ºY\½ₚɚžS*;Èr= §‡Åµ‹ŒúÅte‹Kù M§B§0l'…ÂN EÖܹ鴊ÅHBàjnˆÜ¹Æ ç«ÅÒLNeŸHÞõ¯Õ ê¦Ù/ìß –Æuƒ z¦=(‡(qe ü e- â1òúŠøAËZ×8V¥—M߬à[2|jZ·}2Uw …))œC9U,¢¼Ø1´Éÿ.¶VÓ1aRü¥tÐIvð‘|Ç" Z=@]ËP/ØCÄ3ñcÆœ‘‹MÏ„MyûºV!T9]¡WèoØ|°y )Eá ‘ÛÒ>áz*‰E-½þöf€ µiÔB@.¥ €üê´ŸHÝ8c–¼ùyg5ƒæuU N T?gÚO'ãh:ýÄñlÎ@GÎ6?°àäHM@§ôI%â /ñ**‡H§gFc®r9K„#‰Ã®r9—M,ÓÒdchA6ðàƒµD§ÍЖj¡Ñx²hV]„6!ðR‚6ö)÷J°ÜZ ¶é²H5ÚØøê±­xpelcm*G—iaš\tÁÇ«bC|AØV~8‚5½‘c`­¼’%¢%糕W2Ì(¶¤pcô!†°ÌbµŸf`kÜ4⢛f(Ç.vVé§ ^® e!ŒG5<ÏÄ£¢P†ääñhTD|AVWYvD`ã ç¬n5 $‹èh’Õ­æš±8F°͘CS·;dnâsbÈð9É €‰CµñV›Y`S!§›%ÉNÁXì¦Lè¤ËŸDð(·üÄ&J§@¤›–¹½6a×ç&ÄÔBédó:…`³]ãj»¦Â¹ÊI—w;§l[ábk•N:Øë ®°låƒ+Z6ÖöãtÊL?±“8_.©~`?Ë&X׫½Ñ²5¾ žpe®tÄva«ˆW۪ƟÝd”*n´ôoð1cBL‘F¨|Õ‘é?¨¿}üJƒ¼ª2ÅF5…óB÷ˆÞÇ-øß”}5©ÑÍyŒU…)«äÔV‘ÂY"O.Õù¥ˆHE¹—½…P:¤ØªÕ åzVŸª QÖBqƒ;ovb§u¶å¤ÿ‡ÏšR¤ëZ†RÕx£úÇõu(«q Ž¢0Ô«tíŠm<MŽžBŽm.~8uñ™2}1ibLL%¦­Ÿ†þ²JÚf£ä}¹o»)»øóØqápÐÈâ ÛH[!LàŒ&¨ÞÃ7 ï¼Û`Ví.Õt÷™˜®@R ô/nïî©ã¦èÎ+¨9{>mQP#åh!{úÛvIeØ%^Ã.V-ࢪê²IIU Ï2ì÷Us%qÝÅÁL ç/d+b<¨~«ˆ¸JÿÐ mxÛ9dž{à<ˆþzKNOióÂN³:3`¤ØÇ /LÞÆ{´”[7l^6;ŸªÝ0]P/qj“ëO sgЉwçÁü×_‹iîž~×ÿóµbïóEªú+NÿÊ,èš=}þr·™¿v'Ù¨v6©4Qƒo6KÖùXŽš*ä¹ÔËaÖ“(žê®]pC,»kް¨»{Ê«cÝ}PÕ/;Rðt¿Fuµ5Ò6ÕMc;«f1û /ê|PI›èÜ>MnBùYƒTèÚΜx3—âõðãü± ¸·ûàq§;˸«>98öÙH Ç„x:ÜXÕ¿®ñŽ¦Ã½äéŸ=êäÕ‘MGCÿQ?,aJÆ2ïç_??|ùTЦݟf~Kâ¯0¿QZ’–ެÀ{Yg÷";op÷òÚbaç³WÛ¡e|9ÊŠÌYïøY=ºælÙ˜’ó«LlŠQÍò&LƒG5ÎrN!3šDÂ\+ˆé¢‰íKWLMìF8w°ì‚Ú„¬ì `ÖÃɱÞÓø‚º”pz·®KÏËÞCRÌÛ«ì¤Î‘˜|\=·E\½’v!|ˆ!•“Z$êÖA 
x¹ÈÝê×Öº«`Ö˜éLÐÌŸ†Ùë‘~w?SÚ²íÜx‘;׸²sLã5Œ,îÙnvv.\ž6;8SïY½C—ÁÙ±­“„!ǯg'ÙýÒÓT%Ïï!X[2ÕZÇNº_2ɤÐD–Àȸ4= Lçz‰u\|ØåäÕâý°=Ød§\O"UòêÐ,تöã*ŽV7É«ESÎQQÎèÄ•Py¡jÔL‹Ï:«¦ZD;G@Ÿ.3MLMr&„ØØªûÁ+÷Oü£Ð––v´Å´Vdé¬`Û½íûÞì·2Þøxÿ¬Îk3¬³¡o˜- Èðôvܶ ‡>Î ™rO?(ÞËÁå l'Å‰ÑÆ%c[±z$7X±ÉÕAùUŠ œ8wX`=µ Ða2,ð&Îç§·¸¹ºa5Rߡ̄3-õÐç!t±xÚô>ôNE´þíÓ½ËÉŽè—?•VÐ7Åìã6Ø\ÂðâK«…²ªW¬W;ë8+e·2Åœ÷î‹¥JÊ®)‰Ÿ‹AfˆíM‚Q3êIü0°8>¨¿¬ºOˆòR5üÍš#`áŒJƒñ¨¿'©”Lh¯¦3–ƒ|#H—\Þ—dÆÒ¼Ãf4ÞI[±mâÉ0¸0ÇŠéÍ› †Ï7»+ªmUš á²Ç~m¥$iX©—Üb,2޶É#XëáBâ4Z Óªî!®ûÉñàfð²Ý’¤.Û{b®ä¿1 +šmª¡½ÿ6ÀIÿMbì@<øª L–|åŒê[„¯–öÂÄM‚g›æ^óofËNÝ 5u$ìÆÜÿ:g¼8/ö·%2é6TT6„U„³ƒŽƒ’—s±Vñzvq~ÏÁíÇ«lºMK½D¢á¡] u3¿?ìºÈj!š$!‚ÆnßÙsTÔܵ&¿zâWŸ|}¿ú¸¼ýíö~ùÊ^ñ¸¸ýEÿ0¢lÎpSìW´ýdÑ»â ÁUY³¦'ŒÖæ¤Ñ B»pÚ{R«”°“àinhw M‚<§mÓ¶Í7 ·¥›ÝLÁñª—§íæÂkUC9Œ>Nœ8Øî~妠êе§Yœ®°:Ç~ßSyÜ=A«¦f!«° n7ÌWÐSÕ›â4Z¤hLäD)f«sPjÆÞNbL—{›à†cïý*Ü“Øû ®J±·³&Ì{;in‚õÉàÛ£tfE»U[ª¼ËÛô› õüIû͸ÔùÚИëû¿Æ”Ý%XÇãõžá¾™pÌÔ†‡L$5:’Æèª´CiŒ¢¦feb̺ŸžÿÞ‘ˆœÌ4Qé~"Ä]%\!Ôp 䛯G IFfÆátË”5¾N]‹þeàá)ÿÙ•¡ßýuWRþ°RÄ: ®Oöá^'b€ëÓ à:ÙzÜÖU·_X£–ŸÚGÏó™½_DÞÕm ó&ëšÂ?­…ì÷I-mé¨ëµÉáÞŠžn1òrEyæpËìpçô!³E´ÍÄ—×E'îQÈlà?qØ\v_w1›¬^1ô<ÜdL˜¦âÞ˰Ҹ¦wÎ{™Ù]#™a^RV#à!F¯ÊmÆ)&>9Êh|(ª¨Î:ÝÒQÌño&!‘æú?,m0åÂj¢‹¥À¢Io쌵Â{%Ò RZ—Œô{¢ªÐH³CÏÐeêöÌK'$þšiÈŠêÊÕ=¬<—PgWË0Ï­ ®UDŠcSM›EŸT-‹Û7‹hFm³Á–@‹ê|Ç­E»yKƒ·Ë’©w‰­^Œ„¬jÉð„WìÙLbèA •.±ã>hïçÆPß|'·ÊÙc*7õvèÝ0¡jÕ®!Êê² ù<ÕÃg¾iÝuÌ>;õ×X.má®~wQo‰´Ñ(Ägp3 KÃeRKIn†ƒä*m‘öDÁâÌ_ÿ~óà _¦œŽ*)бž{38¢ãª愼E–ò~“›¼–J®AXÅÜùHÄ1ËÝÜÛ†í·WÒ›—î²0KÅâ["t\ =¦{›b—œ7cgõ³$U~¹ „\¶ ¶hSɨ«'“*àËæwP|˺XØ·/cLÌmD=y9Û8­'©;¿!p{ÏJÙœn±z½mQÌ)¾GÉÆÿ”=T‘ú­k0'ϪºÀ†]ێç/«èªAÝ^hÚñ½þçÂ}"ãÅŸÇ6ŒÉ{… l âNÃQ"«‡ÌÒ)ÂÏèêŸ1>£éœ¤°¹'Ÿ*جæÌŒXXUœ~8Á´ïðaׄͪ)/2‡º Fä-.h íëBp-Ö‚ºÎ¤ö“«'»ZŸÖŸ–*ÏçÍõ/¾°•[LS€ÅQûšØþc݈iÅáÔ+.¢‚’ÙΚd@f`)Bêj·'Š*­Xj£±×ÅÎ ¥8Cqlj<9** ˆ €4Xw[S˜ÒîB'{$1ãè7Ôbä½m‚›Á ¶- ,ºz\ß•µÐBh œŒºÁðˆ-¢5âm1苪üª%X¬ÃAøUƒ 88™ rK5 ôSO£ýFîï¹ñ4ÇñTÅìýiÿÕVQŸ™êôXµ` ’G_œÏŽBƒ–ªÛ x`¶c¹ä`ÞâÁQE;¦íg»ô °ÿ ɘ¦Ð ãF;ÁŒ˜^‡VµZA4O€šDÐàÈ|ذOÆ=’©Q,ˆmÅðÜØ í¹±TÌ©R¢.¢Í[”ÍÃX=Ž0¥*› -•Ê¡¼õ4ߺ·{Σ¡ŒÇÕþ/n5ö:•ñ²ôêßÏëÏ‹M÷ÅvÛ?_­G(Ð…ÅŽ(+€"C `(߬W[šõJºjM|JÛ}KìE”“AèI®JEWZ­Üù¹QZ°yO‚‹ô ˜öc°gøò¤*:[còàÙsÉÒ¼˜ÒTÝ®Åwb½aúvª¯Ô ·ŸXs­¾óé·kûB"“ÍÆÐ6dvÆ´.ˆfŒQÒmh§Î‰®2«8ÿB‚všÉà Y"K’sª'§*ØÌƒlYóçÅfÇÍ'w†Du"¾² ãæ¥«Ëã4&¬M?u 6Ú©8 
-ð8tDÛ–ƒ/ì´Ó§ßî‘}·ÿCY ¬Ž$4_•ù˜ÝúI6’=Õ[Dz,§z@«à,S·32Ô=%f“ ‘T¢UDÙ„s⬾ž4P1§X¼õ•™w—çpÉÕ¥HÈ‹…½ñPcÇe Òúdkã’¤)Ì3ú ¶?C NÈ·P£8p€iQ¶¹Î‘?/zŽþh¶Æ¯1µ`ŽEÈÖ4*C¼X-”&¡–ª‘áà]X‡8Ðê…GÉp¶' 0 ]@1¼Ýú}ê—ììUMtsõñiýéêf}ÿùêîæèÒÎtyS H›˜B`Р’a¼‰EëàfI´mùו~ð°YÜn•øÆÌTX?Æãúþãâ~³Rj LmH¼^sê2&Yã˜ùëC\UêÝ·+¤MÑYÏ9¦a Ú[L6:¤SÅ6‚†8-± 2.¦…SlƒŒ5ÛEàÉ¡à·Gc˜¶‰àõõ1Ç&Œ ƒ6Br×|O*•l"_f  0Ó$›°nÓÂ)ªƒŸúanèGÜÅú5…œˆ‚†K6I‚Ö{³œØPc ‹±-¢@oê´NÂy:žÎˬ·‰Êœ+½šþ´ˆ‡tßžva;Oÿç̤sl¡…v2α ~XÝ érÛA"5Îq|ìÈšìŠì!"N²‡Q[œôíIÑT‹UÁŠùËëi 4iƒ  ä%Ë hÐÙSz)S_$• ‚bXÉebv7É xLo6P`±†'ã:e§ô±Z¬/›ÄY2Lƒzݯ½¯® þúï7wÓ_¾¾PùíòáûÙ׋³¿ewëK&®ðun‘s郲ŠÃãGÈÒŸ–S^²5áO§”G óüKrÀ¬fÍšê–ˆQÓšÙŒ'qºï†\HƒÚïH¡â~BЕ}T!Î.=“§‡pر=À<–”m*¡Ëò“©å2‘#‹†swL¼jªVàÒÜg-¬7Ö$ɘʉ˜²þï„°]0‡ÛsƒóÄJ}s¿sBS5Lòb#/®.ÙŽÛ0þ<“/£¤-:»¸{8y¸{¼ï,±’€«ªÈå`i»JiWX&äê,‘Æ÷0OX"#¤°Ì@¹†K3¦dmyÔÐ0v~y÷xkoýåñü××=èÑxqŒvþ~r{þ¥_–ÌÈì JN •mU¢œ›=¡T'/[/u´g‰IB\hªˆBËp9 [mÂ4̾kç¼l dûö¯½ÞW¿qÿѹó€;Ñxg·Øãæb HÅ«V"Ÿ…‘ …ºØ7h¨gÕáö¹©æÎ¶„„˜â{Ÿðšìª ” MÌgå&4ê„ NBðÖ‚-#€½³3úÝëo²èY¨áÂBÅ}>€`(‚æsõ/ÄéRgÕ%ú¸Y¶ÅJO!-©á7‡·¥x;ÙÜc ÜÕçËÜîås&,©Æ– ¸büÂÞe‘bB’N9V¥ÇêúmEF¯žP{ØÑ2o†ÕwŽJ€ØÔÑÁ°ßÑÁaŸ‘¬>ÆsåtýØïvŒã{bv¤÷äU*:ôIœÔ‹ÕÐ1}IJ†Ž4 *x}/V/P·¹¡­ÇÚYI­ÛÒ”ÚNª¡8æ]E" ø‹2Á1ç&NÞ¬æ'Z%5ѦrëÇÌ3În®n?ß]ì|ŒõS²éŠ5y­ñuƒëƒÕŠ«O±zdáO6½cVë³þËÝÉåòª­Ìd+ƒ+$z®–ã9‹ I9W°?¡YûÀ–i 3j6ìhÄ„1Á¡l°@‹’€÷UØsHLblp°âÿ*û¶cÛŽ‰ Q<’>Qª‹˜À Áo 2GLÔ_"r‹p…ÅµÈ ;ûßâ2€èjÅA˜”K{Äf0K‰¯œ$[²5yµ+B8xo¬›Ã¸ÄN¿!-a\jš8ÉJ)ȧÅ|óµ|‹¬’­fµÌ¶ f·è¦˜eÛäŪlraׄPeƒ—¨›xà¿=ƒÈü¦Ë;ýùýÃÖ‚k¸6éżT"}üÍcþxù0È[s·‹\Qd) !þ†|ÙÒ }º$â>Á[ QÓ\:d úýØa4RϦËa‹Ø¥F8RQòµÙŠt’‡‚¯Ow’õ‚›êø•rF%y—£½sRIc*ëÙ)‹lwÔ“Š€é@Žø™0=ÄBOm•Æá­mHHØP-ƒST›ÛÀ§Çº>"Ïõ1ü|ð)~¡³¨¾ê…O_Rè&5äÃÁWð¸(6ÛtÊn?Ø ázHž:ˆ‹ßXj|C!Øà»V m5V÷ïo/ÎŽÝ^ëŸú…~dåXQj–× ?{zB“.¡ êÓê·½µD¨‹ÓRXE,êpͽkººüõµjÝ¿üo‡Ò=ÿóýÇ_¾Ýü¾ÍœØŸïϾ^\}>ÙþþÉö'ÏŸè'><¸SM‚IÃ>(z«Ö=’­ z¦`ñÑS£óæc¿­øpËâ— ÄÙõSÙ«Ê],Ù,Ë5ä¹þõäìs7ÁH`ÕR.T¤œHJEÁàë(ŸÐ¦‡`è©)1Ñ[»­èäPψã{îLÎÊŽP£•*Ù(‡4èróË&äé"híÔÞ30¶lßoå? 
òï.„ƒÅ{DSÕ¿¸Æ~#šÛ+µ£¢Cî*vžN²Éè5|Ÿ#›É2@qIBUE»e†'&fm\©k>[¹Õ½’¢—htT#"ÄE—™óc2'Dê""dÙÍò˜ø(íÍò)kªßƒÅ ™´ñs®.î.ÏNî/½î˜ KvUψU®N¤¢9KÁe%bB“.‘´µ³¢êD<»E[B(aïk˜RVD=7#UÈJQbv£é”$äAÔer³|ß”Ih‘&j‡ÑP$uìÞZ¨ÅP@òÎ;òò^³gYA°Þ’âÜ>}c(ÊeÍÅ„4Mr`Cى껛«äàø¶”ÍÛ¬±.%à€êÚò¬ :8à]½ÐYËQl»C~óT7•k»NË®Gé;þ¾ÕGÏ(˜q[ÌSÙò&_¶¼LÙ’û %4ÌNÆ ¯Y&Ökã±}[T6·rÚ|I„÷´õ ‡®ì‡˜|¤ 3(p*2?eËë^ÈÑ‚®zHLõwTÖ ÕõVo¯]õi…}ÙÆöYuÕùg%©òçJcôÙžZ6ŸK/v~¾yÃYà žé‡fF””ÐÑ2mARˆ*³an\¼œxmê‰û»UX‚’zŽ¿FRð~L¢%çýL(Õ ŸÊA¢—æé§¨ ã"ýä5wÿmÉJû«ÿŒ/œd›ø>¸bU<…®+V¡fõx‘êÍ}â“ÉjkC\ÂdõpCNֽŴî–Õö ³ÐW¥ <´ó Œ¿„à›¶w$²\ÍûL‘Ã]Rt‰¸Kx|Xü†€ÙìÄB-¸«§$©þÚ¢—JâNöa ÜU™Ã ðºÁù Pع >u^Ž5ÀkŠT ½Ëà!Ï_ç@’_¢î Û–|$pÊö¦þp ûýá²€)ج XTMð1…¢j¦írÏÌ~ާ·)·ˆkp¢ªè·sxÌ=cQ8&kKcµ}ý$!¶Té±ë.·dÚ²61ªÜ£­sÅb­Š 8…"fKì&/VÑ&]ì–^Ï ˜<#Û"®¼N’ïÊê2¨«!ZÔƒd¼ic^Äèüqo:ék@ý¿ÿq½é~o6‡}Lwû{´}Ï·Œ;ýI_ù׋‡ÓÿøÏ¿|Èbüv•Õx‹¶©ÏyÙexòýÿÂí×_®.¿ýý;Þ^\}ýry÷õüúòRþêæ|ª >ñ‡OîÏLèNÚéçÛLJӟ:óoŸ¿=^ü<º–a}øôéÃ/ªµöºO_;:+Ý¿øÓ‡O?¬)á¼S±Þ#’0{dc¼#½y²|÷fœñx1ÉÖEnEЪÇ–ÁTÁj˼'XãÂj\McÍDe\±%Tå¢x*Z#Ƭ‹2!O‡;¶za¡¾½›^²_Ùò(y“ÇeyÔ;4€b~mxþçôÔÿï)DBêDøƒês7í¨ã&XéžzÌ·rs|Ê YsàÄ…9YóW¨úU%þâîô§¿þå4¢°;¶mp™bøð¨´ÝÁJãÍE‹ºê×J)#拾{5Qÿ¸>ý©ÚþçûçÍ6yê”3•â/ç²ë¸yžÁZgšø ìðÃèìR;j@ñÊ£øô鵉Úd´Òa kõm¾ÉÑfŒ,ïáC[t›Ùg/ûÓÏÈ ¸“R9ñd‘ëñ‘¨›w=°}òùeÊÁ­F°êŸKÜT¹ü˜yÄ¢Ø6BÃ$©€f/–¼KB²©’Ûi'éT~pÈQ£¼ÕýÁ×É¢m¡æ¶Áñþãö³Rɘ\”ši^ֽ严ä9 }‡!èÔ–;æý+u€èõ%RI…5ºŠÇsÇ’å4LhÒ£ÂÕŽHµÚWz‰°‹Ã…Z(²²÷§tÞf¬_yã++†©qÿк§ûG®&‡¬˜ÂL°³‘A‚_ÄF²˜z‡œ$¾up}|ŽñçÇóˇ“Û›o—g—÷3oëÜaï&©F Á"„õ©¡ m@ºª .NÙV/fÔ˜Dæ kÂ$–ý•à\¹÷äµj¶Q.¸[çÅÑ™>#›°µà$pòg90X¶Œñõj6\\<ÄÍŒ_«ÅÅ!±­-{µ lCv2ÂäÅ*˜m‡R ¹ºv{üj°/X §qeƒgTôû·§öÊÌï=ï€Ð/¿ŽNè ìÅŽ•I<âýÂÐyþ·M‚bt^Ǿ?¬*fÖˆnb@ê6­hóiÏy¦össÚ<ì\¢#”'ñ‘Á24 EŤÔ;èÒÆw„Z#hשežlàQ×%;,wB™>‘Ž5yE»9À¯Q½›¹Z¡‘À«G:)nw³íF:1‹$üH»F:TU-㼫Ïu7aªÌÜ)K\™¨Ü”3Õ#µ5å!oÅ}G+®.æñgÉKYòâLãø®eaDÓ.C^<øˆ¼þÚ ×áv?ûåJ)Ú/€Àå 4¸ü°÷Ju±_–QSéqŽýRŒçöv˜ñq§nÈ‹\2çÆå ºá-.jS¨«ö¿ìªvnŽé0oIDâ"Þâ Ñ‚zÂÁvû?8}×XóÜ[”ćÉo:—y©°&à²êŒ°fiÒï_/ôC_nëKýïì—Ñ5Ó³Ós*~ºÔï°7ÚŽQ¼ x§‡:}èȸMîJ™~±¸ªƒ}¤5¢qö>q’°"ƒ­.âþöóÙEvÆÜ¼Ð;:8Bedy‘'Ã-T“DaÖs¦#Ó(ùGf6Š}}±Ú¾d ¡Ñw9‡AÝSÆTt‚ØçZ]&DîàÐæô‰aVò–Í·^(aè[æ`cH þýÜ1Çu¯c"‡½¢H9£ŽBY d_¿¡OÇF~Qôi–¹±‹ œ×Ïß;=æ~¡¼€%rT‚c.ÕFuuƒ=Rç‚Å÷î+”šd¦É-jŽ !‘"êAö¼èYx¢~¿§$,@Û0¯2¢PÊîV˜P§ ¢è±õ‘Ih¢ ÷a¢ˆ[»Ýè 
¹}e¢§ípëMr®k)tU-ŒÅ]IºÆØ³ª¡mÍCÄEÕM¢¢ÑÝïO†D1m—®5CäâöÛÍ÷«,hfGK  Ò˜Úé^†[ñ-ÒƒÕe¸¹Ó˜g’ªΚˆK]ÉQ.Á¬¨ÅÏѼP¥ÊÚ¡Ô·™ŒG ^?è–©âNéi”52»ýº‹‘O*Ò²u–Áïzý†u×oêó— `p•¶ïœ¹àÛ]}AÕƒZI$‘?Ã`¦ E¢[³ØRÎR Dÿ“ fÊ ¯ ‹Jðeè ä8¡³EÞJõ€^=uP®û4 z£7­X¤¯Ñ­}Yž‚ÍÛ‡^}åhåñôF÷§Ð4rFCæev•Vh«Ö8lH¬$½¯M‡ÕÏÂe[àÜΓ \&†–a¥®$a¢Õ'6¦c?ˆ¶Æ|‡k šË}8r`•û„h]0zIJ€óÜãj¼zŽÑyS!±ƒÑÊ©äŸÂÙ½´¦¸®iM©ÂhvàVœätGÖd3Sÿ¦Çè’e˜d›˜^ ­.ÝÉñ´¥$4Â]•îúÓ•î^5×ÁºtŸ¹b^Ë) s«Ú —Ââ4tiöðØÙBÚËÞ™¢ KöTd<†b5›¤KºOÓÁÜÙ©ƒíCÅ·6wig@os7’ü~1›½r†ÝqWOÅl @1vB®¶vM8¼ª‚džúUŠâCçß犜}7°¶è¹@¨¦%»Œ³«2@+‚ äŽ÷6ohærAЄ(=š7ôÔb”·¶$ˆk³’ùò”qjÝ€Ñs̯г=Í]+1ê&3ªI÷ ' ‚55›ˆ[¦ÄXë>§8WµÏî/Ïï.Ç©ÉG¦’½üVý¤±ŒJ+uï$oöq@ËÞÅüT˜jô¸uÐc»DüÖJ-¾ÿH΄Q£S¯§»ú—ZOÖKÿ’A”õqäÞÙv²œ]5þ'¡Š.uÂÊvU wõ7!I»ª§FK}ù7×ÁÕËkŒÌad¿½2+ ºãEäV.†‚-eÝø™V¨uäÁì¥?´ ïâúìîû­}áS÷ÿ¸féüÛNÀ*®¥G‡mòy„Þ=:9b5­ÿÎ@¬ÉƒhP^Ž\¼;¾ÚlC:È%¹&´éRÀȳ†óñV­0Ëeà`CÆþûb»©¤´ÌˆD/¤¾é»ì›Ëº<* J1ª¸Oµ[ײ>¦ìˆ aº¸<2P˜1àø Tü×L}Ü‘Šÿgïj–ãÈqô«8ú²—U6 è˜èÓ^ö°ó ²¬^;,[Ùžæ±ööɬ*«R*V‘É$ÓêÞž9t·~J™øùÀ‡þO‚=Ê5ëäÈ^‚¶›ËqQ>?ž>'oa¡Óea‹8q›û¿–=Ô"#ôÃo&¢ÏQå™Ù+àè­+ ÿ#“Ý”9`3¯L1qºÿj>®RHEà‰M‡|l°Ü«ù8— NÎYe Å€¬!œL ?ü!±r²x²¿µìp¿ƒÿ"¾&NbÆÓ®á¢XÅaé ô¯öáNÚö¾9$<©=Ü¡óhÔ¡ïþDçáþ.Qƒ|·ÿøúQp¯ý5¯oÞÝÞ|¸2‡þ|oF¼¬*æ¡Ðì¹eu«K;•-/]zNÑ*²n¥Q2 «—K£à)”‘Ø“f[ êQÙc‡hõ½ßŠÉ ?69s¦yÐ^9R`¯ù¼Ø’¼®5QGž„ú÷5è0T£¾ÿ%zS[Tä-X¾L7æ4÷é¿øß÷ßžìAøñ«»÷¿ßÞ|¿¹»}ì¼¼^¥×ŸqËVîAà±z—¬!Næü ?¡qiüvô^!–bËä£Wf/r9z®È1[@ô@åk€”`—oH}vˆã(˜EŽ=µSî\ãÔ…6ä]iË0!7Åjä¼Ð4Ó 6×á.Eg*¶ÜøDëËVÈ1×EwTD#LO­dáÙw‡ À0¼Ò6_ÏpBîÐÂò».WÚdõÉöçŸàÀÅ•ýt…ø4T« Jj™ˆ€ÆîÖ}yüô½ð^Z:˜ÓpµD«$ñÆOŸ­½Í‚D±fœ/x(_K¶³Ÿ$G™u)½eòªJ[C³x_yg9½0ƒ"_¾•$o?óK}?u« (ËÍvYÃa#Äö½¯9á¯çó½«Ï×7ì_ì eÆŽ’´ð,[ü5ü‰ýšc—˱hïLÈ BMBíËŒñ°㄃ùQh]j±ì2Ï[£vd’z™#Òx7~¶L4=þœ¡óËÓ=¢ Ö²a\æ©_æ©Q[è-4p"c=v _U«3ž•Wõ'[ÝsùVZ–«î:œ”]7†,éÕQŠò-I׳ Od²"Yàºäøñ —Îf\1ZÍyæªCún9®[ÙÆ¸nq;Æ©2%Sý»ÞÕE+€Í/uƒ•?×7?®çÏõTÝÝÛÛ½»ÿ’ذn7ç÷#Ó‚²—´]EX6Ÿj8úæ(AQ8¶-¢Y!ºn•®™Š’ÃX¦}Øñ¯•fŽLŽYäIªòÚSûè,o¼€£YðÆÓ~ŸôÊ :p;od«ÛšiAp銚•¸1TÉ YBb¼ lv‘ÿæý§ä7™ûü3»õë†ñaK÷ÒÑމz\{›Ï"‘õ;j VL@ŒÊE†˜e› ¨KîköœøÙ`kFŒÃs_Œ”¡SHš Ÿ¯+~ŽÀ—tû~“LK¿Ïb˜éùâZüÒ@š-®fûKÞYÅ•e©ÌÍH0Tô€`ÙgÏÜÌ^­‚,KMpsŸ¡áw³ gîfí§Áéîäº}}udY„J°eÕ®f¼ó 
)–uºŒkÅŸžñ_:~Bsõãï"fÿ?ýÛ«ßF¦ƒÒ?€ª·A°ù1é³ûÃÙõ··ï¿^}¾¿{óþvY÷7Xq22:Hh™Ìö,œˆözœ.^·ÑlÉk¨£v¥n5dvDv&¨.l[~J‰ðƱH4ÏÅ“dÚÀMQ&>‚ð±´‘Åu­Ðëx·ÐúCҥȑWr—:<¶cAà[ÖF«šyE ášaÏ,,Ô0±¨g-ÞBr@_"ñIï 9½L™ 쓜Ižð@Ï?b?¸ŸH•±ž,º‡ˆs-äà!­éu ²º\€ÊrÁÒxs~F*[„èa×Ó‹°÷†\£çüÍ*ªà6’x'ÜàóÉsƒ[êÍ’`km#6´¥±4ØJrì͈qöpòêëý‡ÛFN0ȉŠR”*8ñZ4Ì®šÉ©/Ÿ=µÄ ² µ'ðV®!¬1±áðÐg>åÂF‚W;’‡öÕŒ2ãÒ­%&5v"Kvâ1›&EÕÃLì¡Ù›Ùâ"3±°EV™Éó5uSþiÛ+¦Vú•¡kCGÊ)SGJ…ZSiQ«³Óø³7« ŠöT ,Ñ›DŒîҺŲÞ]C @+‹‚õm·´æÛVžgÒŒz76UÇ´„škT}ØmI×t¸F|®ë£4º ½ŸÌ•üæ¦@ l¼lŽd%ðjöÕ즴¿¼˜•—ÕŠ>7l6{³N'ØVò/i]ê£6K†[† nT w^?zÆ“L‰«¦Œˆ{\öä_u”JW¶ÇLÌ´µM´ðOZÐ2äAß­÷æÂ0áýÝÇã·Ÿ~B?ü·]‡Pƒÿ¾4•jbõû\øy¦”['üe{ Ørl…+¥÷«ÕƒZƒfX¡W+”¹¨×ÙÞÖÙ«UEIL¨æz›+®éÔFS–¡ô2ûŸs1 mî&ð¡&@ ²Ö}öŠx&˜.1ÀÛ!.™Fèc±…ð4Q‡’ˆÊr«øbUû³ÛÆý?ß}³ß¸¼t¡üë ¶1äC¦YQ÷¡@Ê DÔì¡ÏQ|]B=µ#ò[—ü¼a¯®*T/iå¿´Å{ ˜¬B)Ÿ úÈÅdÎîÛ™I¤ËFp7EÃ%ÆÍÍÁ5e€oiY1·™>èî¶cþˆ“7ñiEÕ¡(X1¿ô §.i'“ƒ­Ã cM€0H,äxŠ´ÿv?Cpjú͉ƒ¹h-PBþÐðQ]l!íçJ¹P­-Èä1ZZäkíH¡8`ÒÜp² EÆ’ð¾<Š*\öÏN<6øgâ<&+€þp13Íl¤.îš9/z±LÎ_.÷ÍEü™Àþmosff_^ýþpÿñÕûOV}¼ø~è øjþltÙ‘)ÝÚmí±Gxlš 'ôðÓÖv ”_6½Ìa¬c 4$WÁ4&žbÁ3ðTZýü’ÄÌËþGdA¤è¹ë˜™hš\&ĺ$h&:1¶¸½Æ£wýWÃà®SÎïÞfËÕ0;"éY>6ËÑŽ7÷ö· )‘Íè—f5ÝÑ”À Ç&fÔi›_¿ºi‘亹fL«ÊÓ‰QEhº|<²—¢ÏLÍÄÔâšÑê_GK¢cêâ`…M$ÎIß²ÂH­ÓÕÍ ±¶/Î;I…J¤ˆ§¾ G{s‘ì®ã«UœŒ{Çsˆº*®ˆ©Vf f°ÞË1Ã`4‘ò@fŒ&Fí7F!x¿ÁMþî7ýL–æðéì B¤e³3 ÿÜl^‚-˜Yø÷Î É˜á ’UÝaM „&ÐIÃ.DméÈŽèN:²#èi¸@ˆA¼— cošk¿˜½I¹Û„e‘ Ãaþ=ó«Ú±ÁMk;tw¯%öúWÙiK‡®ÓD/´:î`uÜq‰HY´"‹¦R4‰Ãö…ç©ýñÍ*ÂŽÕÓnþF„²ÚŠaGÅ;&FÁLØqXtA.MÌhÇ1NP«16ˆ?Ÿïßžô›‚[KÎ|ýêæáæ4*Yº´,*uyˆy¬Rpq¬êò—"˜‡CŒ«ÐK[J) ’.`yuCaô Ð+öÆðb¤"x)åÀköbµàŰ›Ã®¯¢Ö*À+z^¼¿`>/.é/¬Ä Ò¼4ÊÐeâ¿ÄÌóØ ÷뙯'ª›7rÃ×úöúmxüs$sËPlÄóÌ@-ªÞÝ^›»1™Aú`Y˜·ÙŠÿ'`·ʶìÁ ÖÅ8Z ?‡ îãõÇۯŸï S~M§@ï·¼öQ1 OÓœ÷#C@´h1·œ]} ìÖ‰ZZÛAšÏTF‰'P5ĺڨKüa7íÉlùQB=(sí±=³#Yl§c Zå Ü;YyŠ–õ+ÿ÷¡WI¾Â3=c i®ÕÓèĹ=ë‘:ùTÐÕœV°,z¤§,ÛÃL2]<Òl8±HRW¬° ßÒMI7¿»½¾ûú]½®í&ÆàCù†©uÍ’o~|Ûm:öÔ8n­j -!`5‰K¤˜ü“uJಮÉd[ß>ž|.àæ ¿ uKiÂÁêÂtCV§á§âé–y›Âc¹RS‡½Ž³ëg²è‘x'35Ï«ÏÆÂä,>0H»!ì>µ%ãô¿È¾é¾—ðô¾—2—&+l,.jÒ~ýbµÓì´åñM*è·œ›ÒFôàŸP-Í>bÕ}¯ÚsZ¾ô¿ÿSyÝ»{+ÃýØÎµ³ûn9,S3¾%àÊ s£Ú\gO뙸|Ê=3IÑ"X1O¯V3€»ë~ ¦¼%®ëSÝŽ«\·õ,%qÅÙ'.-š-[z°¯ùzè«ÿ ?¾üȯò˜¾Ù<édiÁõMœh'ðê° RIûÞû¼öòé‘MÙc;+û볩ôl”Š?Ž«¬#pC啸°!5îöØöüåÛ›/7ï?ïÉ9²-¥öcf¾tÓóÂϯå=g~ä›VùyÇEó ”Ý<* “õ‘‹»¦©Ögo 
A×XŸakËòv‰!W¶3¿AV·ÚØ‚G®1¶Ãð%c#ô”M~Žòîdmbq³þ!q""§zF[+ÇÝG<·¶ÞÍ–—ñu©Ïš0õå\Ã-£•ÅžÿºNúéúÓÍí¯)7üöå´àê¤à Nû®N邯² ¿œS“¦\|2̦ú—ñ`B¶¢uÁ>Ê'eû»Kãë¿ýç<ï °À«”¶¤ÃC+íwUéã~¢äÕGw…W¿½úúÏO¯ÿ¶²cYÏÔc×Ã[ 7×á- éÂXÿL³N UªíÄøíé©È¡;­ÝBKÁÉ>š%ÚkDh+z9Sôž}¿#ï*Þ/¤­=Þ‡pùÓ»fFç/S®{->Ln<©{矱ŽwÚ¼[•ÂÇêÒ·%xma‘Ž£—Õ¥/W–¾ÁË$œ–­ÂJQŒE«ð!;`s|±ŠÂ—pÇC,ú„Œ|þyÞií)óÃ"e§ »öþè½½ø08KHÒßÏþ<É’úLyàqËCpZá_®Ðk˜,R m q«Œ:>kuë‘mYíaY­¨ºý;×L~v-ÅÒ  Ð.øŠÐƒoY]áÐ sq‹áRYµºq&ðXI\&3Þç#TÎG¢øìŒG¹t(DÓS{G°äØ£‹7Šs£ Ñ$fð™cŠR]È/ô¶ïo¿ÐÛœÐSõ²š<ªK† «¤C›DOÚ Î…¿ÃXé~ÝÏ÷ùO¤Fˆe7ÖF­¸( S=˜úhâb¸í$À®(D«PX‹ù¿•r¹ªp&®N(̔͗¡0 ã:Ï  p€, Š"1ÂêûîôÖª²\¿Q¶#rœU°&ú°NÁ:…(ўƨcIkÎod[$äF)T<«”ˆÐ¯Bèˆ-­žö›>ªë¶Ö±A޽pš4LU*Nï¼¹b¶!K„2ZœN†ÏVG†E8mÅnY_ãÆUã´‰97°žÄ·]ñ©°ÓÞ¶+e ÂÓàUwÜæs]„Ã^Õ?ßaE4e–UØLMmøžIÀébº¿ŸuZÁž§‚z(ã¯/ЭL>;ˆ3“KüåýKˆ°KVQ¿D<½g>Í““¢Ô æ¿N+vŠHëëRbòý‘•üq‘Èšç¿zü, Ö’Qs§r^ð ®‚Vn¸A´Pšz·tYƒuAVÝ•¬æŒ,edMëËŠ'öI¹îË™`z +¥¥v¤qQCRÑ(*UF#«‰9sÕ˜… Ìù2ñvEV©BV.V#k œUfªk(®ƒVéÏÀÂSä¼ ¶V50ÞÜÝ{{õùîú«9×ÇýØä¯&Û#‚ôÿ|Wænæg‘æg’ wƲo«ûúðívAäБ#¸&î‰HÁ‹”7Uå$öþsâZá!1íªsåslLGÔ…a—$¡<ŸÔL=0=Ć øÝFâªH\EŠ“c€±Œ•L­—_݇Tæðr ‹ŽâP4Æöj–tü‡¨Žº3r¾­W ùϥϑq⨈ç&)¦ä&ã,'ÂLˆ=8H'õi”»+œ—+Áà‚´ /"ýyO/‘¤.gàurèXÊ•Z@sÛâuupyâ²™dºÔ„)&ŠOÚ6ÌKíßkžÒŸµ®ìù²Ûe„‘êx,š‹oI®ÍªY Éðe-΂³y!Çò¥x$'ElÌòÌDÒœ}ê–râÖNÈÜa™Á›E>ѱ[>ÚÎÀ^¾Z†úCˆ¼e/sgà¹~[µTF*ÚmCIJKÊßÏ$Ó£ÝÖl˜Óø}غúÕá—]&æÃ9i·š@1¿îJ}ßË®ªv[/Î-ìÞZ #ñU ”I°šX_@Íûþ“ÙÛî+à®,ó~ø~‹[¶RúH¬G yƒat ~L{Np½`W'JÔ3¸«*NK¸ ÎçÈuRê»4]äî‚Ŧ° NFwo™”YèvwzŠ^£öY÷T=îàjð7Íy¯êâZC±Xû yŸºº\ôòs‰<;Ü8ŠöËõ2cZú¢cÐà»­¹ ®^¸ë}˜È2Q(ã.§±§âñ I »‡÷(œÀ»³ã´yyý1Ðß—/zÍ6å½ÈéuOR³}²ži›åæi…³â¢`ßyîH9tuèëÁ¹CþµìmArð®„&ôì#ýDH^Àޱ©CÀv•T uá„ø€˜Fúâô"v;@Tš4rÔX†oŠX„ïüØÃLf=“Í;b\2u,„Xò¾n:˜¹;§ L&Ø,óõöcê+*Ý•?þب–™¢*|–›¸‹í¯ª#«ê–õÌw¾mæ(µ53b¥ObOuå+54Û‡²/ZTÈò·%Ñ£ˆMæ –ònìŒDFŸîHòO»“¦ÈRºÔ9ܵs†3ãŸìV´ÎtA„X«ìú_Öx‰ÆÀ_Pëü~®öÇI­½ÇÃý?Šý5]þF¯ú¨Hš¶=E@Gc›èϨ¨_#ýý¬‰'–rMLÎ2èb<¡a¿Jêb< y&è™zÔæÉEC ^·Ž'ú¬·=MŠ$r”%‚öŽ»N´š§ýÜ&úŸŒw9Ò‰Ó-{hž¤Ý}Pÿ˜e©ÞäÒ†Žà¯ãø«û=áóôè‚.ãôì÷$3ObyÂÕùã1vF<üA~{µ„ýºÊ¾/éÝGka¿N‹Áap«IC%éq⥱£/Ä>œGÒ‹µÔîÍÏÐ9Ì^­†ö8·ø¡zÊ 0m{9J†-‡ÒyÊ; `?ÖbÍuYŒ'-žoj¸}ûîúÉ—®èë?ûƒøSCon¤7KqëCÌ-¯¢%ÞššGxY2Îw(].!3­Æ©ÄAœ4¹~ùêK#ÅËŒ]ûÏ+Î^¬j£L ‰ðqN¬>ÿŒ,±:£Kì7¨PK¬žXOÛ1WSŒ£;‹C†GFÐO.¤óB'BÔŽÄêÌnˆú+ó›y…?ó!ÇkºEëÀ˜ðjàV 
å±jšÈ%…‰È{4°žùѺoõÌŒòvÕeqcÒƒÎ/•ìT4Àb1Þ‡:E:¼Ål{käå#ü‡WK×qSEO¢ö.è©p¸Œ"Ò ¢2ï|û)Ò[h[ídÒf$ǧF«…$›$‡”»4©´¾ø´¼òv¸’õôíñûí·ï& u—^à ¾AÉ \CÕpý bƒí×*µJìKîÂb©š0›7‡ß÷¢}áRé³ ¸ùúøýËqµtízôJÍ­ýMÕš£Èâ$50û¬¡çéÖÌò&±à¦`R-; KáDÃì¥û‘š`ˆÄ»yĽ oè~û2acˆÏbˆ¯e .ÃÚ  ­—ÏçíDO¾Æ3¼Í&LÀEåÍv0O˜-•Ë»d­™°O›BF}Þ³q™!e³~¥ RL«QË+êÈ鲑&ÈM›®E¿2É?ïm¥)ôîW&:³\ö+§"éõBcP‡Òh‹¬õѨ÷ï[ÎX•žg×j/$|KËð;ÛïYDÃY|­÷ß?Þ;NûÜß=­+z']-ö¸p³²lˆê5H3XÔBÚµ²ÐITÄt‚À¢¸ëg-tô¹[NBµØ[²·&õw¯_œŸVlß ‚Ô̹Ü[: ¡j M»!¡Š/ŽòWwa;Øž=‹XÚ Q!ö=’íF|¶ìddÉ3:j¥Ùv¸ kEœ¯¹*˜ncC»s#GÜš¯Ü<ÞÝþ¸|Ç̱• ‡€fÃAa@dH¸}Çù¨k6F Œó;¨'ê49޳o[ )Šazª ¾Å}ܬN^)VÈMC!„ƒÑÆ/îXÈà,S€P)FSJ¶˜ I–•c ½Kjà"÷ ‰éÎëåXˆqÊ‚ïáúXh»Í Òx,ä-8¬U"ò+G9t¨Îñà4 oýúaƒ9gÀsÝ«:™~åy«|ÔL c ÜPÞêÍô\INŽ…”Tä`¹k€1;|¢L“‚œ‰|°j—wY. l<w/ÈY$D™‚œÐüófz3-÷P3•Â#,u(ò˜ zJÆûöý͇ï_>~ºk–ˆN"ƒ_N ‚qÿ®jºwÙU¡ uZd"ƒŠê:M·øo㪵¿&‚ö1èÂóe„}ÆJ–ÓWU^Õ©›×J4!ó¼I+ƒT¤ê&ß!mˆm›^(!W+DM ã£/¸”ἓE ùi± mZdX&Æ ¼2¬ó¾HN½nò¾Ø±ä@gˆ™SÆ)ÏA|Ü÷t~™V€êÓù¥æa–­äƒò¶  {\DœÒ·]2·¹vn,-°ÂV°—7Ù^Š5G†L!K§É±+äkÖ÷¤àú|‘âjßK\¸|¤d;jJª/t)<Ò°n2½÷L›t–]ï£F‰ÎpiЧGuûÌI(jx¥åôÖd VcŽ»}ZÕÝÂ];Ô©dHÐÌ?a á”_ÕÀó†zb/Û\¤f6 £ÅÊ ¥µ«.ÉÀ l¯¤¼”å—̪+;ð5¡F“ú’‰ªàUfµ…â!@÷úRˆšø’AœXˆ>s,»ŸÔÎÕ—Žûõs9ÍŸåŸ`‚ÝÄ?ê°ýp` {õøæŽ <Ù0Ÿtóü‡{¬æžÿF£³Ëì.0Ý|6Rv.O <†{œ¸àPëûìÙÒsN¢®>¸W³cŒ‹UJo$wØïÄ‚FîBbXÕrnanx‡ù౦tÑpñ?ÒÙìŸfíæDˆ,Û‰º)‘£ ½%HåR„¢O^ÍÅfZm1<¨æ ÐÅmà óf˜(ž½ÛÂÄ4®+]æ ,"îv¼h¯éh{üƒ§ºQ™þ€!§Y+sº(õ Y Ä£TÔÑ„LìÙ¯NéšÑ°]ÚÇi¨Àû‚5xK,d)í3zR.훬Å2fŒC‚çÅ0½|6ý£P›?½ûõñá󻟾½ûøá<0»×Ü7kb¢ï¡aÌŠ’‹Òž= Ïá¹SSS®®,w”òô¦V(Çc (špQ5€Ð~=cLYÇÒw[Œ Ä~Ûî°ƒZèÃ.üÔcð“—˜ÜX€Òcð+p(Ž"ˆ,R]®<¬ÄñáH C^a„Ð¥Þ$ºMFH¹÷ÌV¢¢¿IŸ,ÎD7 ”Y!uÄ;˜¡ºá­3 ó_ë¬LݯNZÕ°5¤îWçBÚˆ3ºç¢&ði¯À;×É\À89„?ßþvwûû¤¯¦¥ë®2ÐÌ"ê/z£~¨Áÿ0ÊFPqõWtVQ¬ªŠ–¢Ü‹Æä" ²,; İàHŒ|>ÏžèÓ¢Šfoí‘Ã}=}õIKdÎìæ¤OF'3àÐt&˜‹&Ò¨îFÎjÓÐOÛ-'¯YÂK6±¹C2ÈœUMËÖ–i¦.†xsš9ù–eÜ@‹;ÙùWåëÉ#6áv`Dr«ê˜°1œß¦×¾i p™ ÷Ínu•Ò®•aN²œ,÷¯4€÷~Ñ0ã¡çw»N”j`—Ó[‡`ö˯±ËMô;`)YX¯NÕÃÏÑ×ñ>Üãý÷Ÿîþb?œ&'ž¾Ûžn¾>½¿ùôþÃݧ›§_n×¼¸+:Ëœø-̀ȘayVò±'3Ú¡S¯ˆsi7–-hêPWà?D .^‹¶Óˆ€íâ[“B"*ˆoÃ[ ÈmòN¨Õ$ºõC ÏVYÑŠKˆÝ£[…Ü”€}rtñ¼ú"º•ÐtjPÊ€ ´|h°¡á˜c°9Wäú³„«Ó€g:' ôvVÒÓø#Í?6]ÍxõäëgÍXÛ#J·´Å—ü6œîÝã4àÞâsÑÕ3£Àçj¬‚¸Xú.¾Öf+€¯£xñ/ç´“AaÖ‚Ü'¦½·E§­(Yܼ¹›àæYÆf‹«zM„­FÖ\2p¢Ú-À»¦¬Íâ»d ¼_з-,xJÙS-r5‘ÐÑÊ<¹‰—î}ñ4eé‹›zptuŸ-Ñøs·D÷ @úÊK÷¹aÓKȬ„&Íf¢K 
œÒ65(C¢Ð!5X o:: ã*®$ÐÖ´¾Ð¡¨äÕ̇¤2ä‚Ýñ¸\j%Îø±Zx{é.ugâGÝ®›À˜›áuƒ9zÕ™Ñ*ó.ûYnÓ®Ò;ËßàÍfnêp>Ÿûm^¼A‡úSn^­Þ—X´ ¤žÒ&BÅ‚¦Kçæc+¨‘â´¬¹ª³·,ˆSGUP¹˜|B‰FEWÁ´`±Ê ¶P8”.59&K1ôÕäÚ‡ÂÍjp-XãõsÁlý\‰P—P_¼6ÅèŒPx¶Ã¾,Pif—+ê)Ä(ŒëñÒkT¨mÉns©ŽE –TêÂòø˜Ê-ŸP·Q¥Ž‘ãÊØ¹…}Pß¿úŽ[„Õ Ât"¾dÓ2LŸ_†éäsú Jû˜ÀÒù­}NG¾6à#”Ë‘zãï=ò¥àIÛÀ&ûFù*Ø0Ä€‰+ÝI õ ÜH é·¸ƒHj‚\žrs ÒbÑäh&äl1å–Þ:²òªù Ȧ‹?&Ž€h‚=šÐ»xdw@ðL!tЇºöB7x_;]äA>W!ÒD‡ €›ÁD-“­U>Š®™†Å3,Ž¢Z ˜­žhÔ Á~Ê¢žýuôÀ¤}ŒgT&¹œ¡J_Ìp<6·ÃÕn.ªBi€þ]f$æÙAd—²hÄÞ¬¯é͹í+ŽýëÃç§_þöðøûÊzi„èú² =Ì¡Žœ£]盞gíÕ“ÝýeüïuÔvp™ö#P‚9}F˜¾j‚ÉSvÎûD±&—áÍhE‡°nP`I^J4>vï)Œã?¯.ûdËÕ…ë—ó ÿ<îÒš á&fk‹à‡àÈØ¹7îisþõÿ]zË žðVYkâ^udbHÎ7½½F«V¶÷ hÑoXƒ³„kÑö*æ÷º´··†ÀQÖ\Ïn£Ü{m"‘9dÀÆí“18q„[*:ûbJ¶!Ü.ÙƒŽ¼ =V`A‡¬#~Õ[*ä^ï±×<²W³½ñàjnÂFpæ×C–W1äJÛ½Š›Æ·’Ü’˜:/ºòÑ…Åþ»1 {þkBá!xzí¼n}®…íßû@B¢3èenŸ é€Z̯ϙj:·…þÒøËÐúV:w2^³ã¶‡ÃªÊI NÝ6%Tîn¶Q=fÍv %7lª~`SS@ñ›æñ¯˜Y6F2‡7±±GÂ¥šˆ/ìXÉÌ´Ü1A;üp7wßÜÀê EïêÙP`^‘µèœCHM‘’”kW¬61IúZR¬NsM.²Ï‚‘¿©I­Ú O ¬+˜4PÕî{P\n4À>Áþ,\ ¶{Rdx)bØT6Yi2.ÌÆÐAƒê2íñ•Ž˜™ªd`¸dàK]Â#»p5<:|§Í¢ùÔ— ÙB“¯)€d°·°Ð4N!^=c&C@? þïÿ”2ŒßRt°EZÐýåþ¸²tÞ¦òíÜf¢C<™6s‘#a\ù.Ù ñZŽùbQ€RÁ1¿áÁ»H•ØÀÇG˜Øøš‘a öoÔt…¯&$ LH/Çýœ (è®÷>œ0ð5N?6_èž|ÍrH\TˆáJÓ«gl R@øÏ…1A+Aà×’Lo´ÔukH€X§©ˆ¯ß|–‰p]»ß=Âî]8†Ó§ÄITÉ2§Mõ÷JÎöüúçÇ›fm/tŒ—óE‰³o”µ•ak¥v”îÓ:ùÌ!žs¹3ØV 7Á¶Ö½ÄYˈçd+ŒkÝ[dQ]Û©¸Ð/[cæ¡/¬ëªKgs o$¼~òæû<í“8®ˆP4ŠÅ ¡ïi—3n´:ñrÆŠâYäD3îÒ¢•²Ê/yKq”+€N¨Ýb_"é]°ˆvv¦Ðß—RæÔÑ—’ØùÒËÏ=¸»«5ë)7Áu¸ü"šìb±‹PfÊ=M»†À¨û®0P=šèéôWŒà¯å&&ÄjU±¨Â\¨8tº½`#k±ÊQIªYxóX´^ŠM±LÌzÌ Z¬;ñÒßp[ñ ë0€gÏ’šœ3A±`§v³fH—ïV{·2·Ü­q/m•_v të´ÅÔ{¨*#cVZO¢¬‹R0ä„ã )Ž ¶Ê¹ØwüüeM){+gP!º²)ò+ú¾)aô¬æmƒ4†8²»2¿Ewå9&´3~èpü˜Á·žî³ð>/:.q}Ç%{²ÖáA‹‘9nL§Â+ø€C©ð àºÊ¤»Òì¢Õi?aªã#Æ¢/sÐÀ_Æ_;B q €j–ˆ+tþ´Ÿ·—òλ"JQ¯KœÛ61•8÷í«™G)OéÎÁމ7¿óÐøp!ÍWå6tT°\SÃMö𙝯1jô-òñ .²Iä®—Ôµîj3S¡jÓ »o]wýs±cúH%Cn67àä»®|³åWð1pALºº›B¹:ƒÄ¹Ûà J?ÿòðè³Øìÿqóáó'ÓÛ¢Üiæ¹!»hŸBÆ2±XˆãòlvzrÆ€Tö¶o®·OH¹mî ±!ìG>ZH{eû•ɬC© ʾXR@äZ–Gz¦[²lyAk0”µMÁ>%X¥m•)u‹Ð’GõSÑzǽÙÍ/·Ÿ_vns ~üññÇ›ïŸ>.kˆ:/}E ÌýÈì ìé?3ÉšI¨¤åÛI{6…ýT[ô_MPf¨ l)Ep,Ÿ1){íÀ(í9Ö†ÃQ3Mÿ¼<ôÏ™¸”"H1¡^,vy³ØÈF‚&æ°¤ØÑSõ*31гP1O„ÜWÝ6Mø‹ðVƒêTÀíIúúîô‹‡|{„5ký8`Fæ*MN6Àä:Ö*ËUϲµöÖœ%‡te¨˜Ýuèb¥l¬}²5]‡ÍemÁX…ØÞøß‹ 3ÕÓì­t¼%Å-¨3‡ÀÉ J êL8zîGÃ;Ì©¿]Fvæ™Â0ƒ€•YafZ®iÄŠ“æ^ÕØµf]% 
FÂ¥[LÛ…4îŽv勽*Õâ!µðëñP±bz,‘!w´¿5²^ùŽFš ê.f(:)…HjÝ_24 JÚÖYÝ~W/C‚©1PœE·¦ÄS‡±žû\^Ä’f'¦—ǃ /¿Úc樌AhêY ³( æÎÒcUð UŪÔ× ífÒjðêÁjq-é‘$ÇÌ|²F½vôe??»âàKvK!IpÊåVH2Ôm<®ÿ@iê aœâ|stb¹7'Æûøåq‘ëUa.²KÏvAÍ€yùžàVt¼ÍGA­;Þk”–.®\šÉ=’Ç Ç›Áà^äÚðÇð»Ú~ƒæÀ MJH#G^E ×4SÍS¾Á¹5ƒÆëuãôM8µ=¤qÞ…Ÿâ#s4ú•ÖàS[Ü3½‰ê‹®ÖõñtLg6ÊøÚy•„¸”±>¤’R¤:¶KyþòI†c˜$@Cò¹Ž+£{³ÑÝóÿ¹TK´w÷Eœ®ÑïPhªä‚nnxoÇÏtÜ™z¦pBúÊÍšßüϵÍìÈÐÚ=xÈ y*¤gâž!%“tT\œ&Y/¾‘†°ÍëÖ"¹¡ŠÜ¹Ìy$ª1 ‘î⤀|eì”Ù {ŽÝ%†H×”Æ ÁÞØÔ¨`‹cîkÇÃh¯ Ç9-[<Ÿ¢Æw˜_!SV ïÀ-?⦠7»E½°–9ÙPÞ¤†:*CÐŽLA” ¬æ€ò9É ¬Fûrª*©¬!2ìúd^—RSôk$å%ÕÁ†jæôR• KùlW“iàºxÜ”Øæ´`ùÈ8«bŸ;ˆa•Šy ›7báŽEBoÁyú¼~¡ Ò/é¸Í]¡€Âȼx›M]:]šB‰>O"£¦œ«ôyð´‹ñ"¤²–Öˆ‹b¨ÚkSò–ÒvT5%rÎyÉå³ùõ,DÕRúÙ>9{öL~blÙÚ|[‚õ —/L~¦îtÂÊHÙ¤ììg÷CáòE)õáöû·_~ôÿzp£øöð«ýŸ'yœú¬ëYæ~å4 ¬¢ô¤ýq¨çŽú"SÜ Íæ^äðnÏf3Ûûð`É °Ì}rÐàݾ¶×…ãý•WÓì ò¾¡GNÙOü“ÍÑBâWäƒAuÞ„ êOÞ·8væ^VÏ^Aæª}|ÿæY³»¯ß–õyæ8÷6ì[šk17Eˉö»ê¨@dk“Œ9Hõ>TrbØú}˜µ¸é £÷¡¿vJH:ö>l9!Úã/e…„bO\}B.˜Ò¨3±E‰Œ( >R"m9RÞÑuʉx£¨Å=z3]¤¦“eاÁ鸖"½¿œðËÊÓ·ežòÃ^BL´æÎžªÜˆ³ýßD‚§n… fùºÚå&Ú)Ÿ+^¡ÞŠC0S¡gô·9 ħÍ5õ\½òqÙÌ<×ëŠÐÓê½úOa:ëy)ŽKûù¡k=¸A1¤Ú‰¡Øóp$³!Ù`{ë ¾UüÊ7.ðìl°‹™J56¿ *C9œ†î;Ò¦V‡¼ŠÇbÈTXàÂgaKkW!¹™6­BJ‰61íÓ—ŒWÈRëÆ›Jµœ£/kX…äoÀ”òµÝa˜°29=žOœ¸8gÇ,n_ï>þrûí@ᶬbjnÂT÷&†ÙùZ?ÄpÚ ¶5¶ \jþjæq;©Þd'U«ßóæ'⌑‰Fo¢ê~‹£ýSk÷Oû{¦‚ìÆ…y˜xÉÒG¹{[4s¬ß¾œ¨zaì9NˆŸÒàîù[›¢…áÚ7Êlúʘñtúß?9µ[ôþî^n«þáïÕ®´E ýË2…b˜Ñ@²=7p]¹™õ°%tùˆÇóÏ^š(;§ ˆ$|Ò­x„I¸s:0¤óógç³½1öõi%3í y@_ìA Ï8t\Áû>䜪|Ô(õ wYåÓ†®g™ ásôv.Â%˜^;1-€³“æÛýQ%’3×”¦°k½›=q¦tŠéX rá•gã@h¦Ö f$_!ßÎðGþ·€r–ar{Š $÷ãD(ÕÖÜC’S,"ù‘ÌÆ´æ¦5äte '¢Ù¹&æâ¼ƒ7Ù «Î;ü?¢¿Pÿ+æ!Ù>’M” t?ŸþEMò,qÓߺùüðá×e=d§by‚ŽÂ›ÅÁââI–t±C…€ŒPOº$Ú†]ÄuƒïÒÚÒƒøF$]ÜœRë uvÒÅÄL§úVQÂÌñþªôò¨iMy)¿ü Œ™ªzš±8d1å½—~4/¸|´8%ÌsÁš©g´Øy bëx¿ÔÀ±7óŠ‚7þU}kê®ý2§ò(ñ³TƸÖvtF¸6çÙ<ð.åÝ‚î×®µé‰¢^Õµæ¦~‡”8 ò¬Kˆ0ÓÆYzª\ÉbÑdÚXjä_nïMr·^\AÛù’ƒ†™vb÷ÙEjîÙ¡cÔzQ‹ËäGÂaÛþÚÈ!éPÿªá(hê50ƒÜÂç˜]!/›lŸ G F’Üt"êk?LdPê‹9ÈdÐHHyaßÌz´Ç0¡1•…7ÎqLWø¹œ§Øÿ­uð:ž1¤Žx9‘FQŒÒ9Ûq!Ëù$²5¯Û# )4Lsd`HUö Tê-=Ã3ô·¦l¯¯n†2»ÙÀÅNþý“5{ΦÜlÀc S©ÐçŸb:MdÆÅ}þý80S±‘Æg*%ƒ=:žšª¼ìþü×&Ã0U{:»Lj½5 lOe5ÊÙÙž„(&®*Ì&Ájz#c‘pö ˜8ë¯)eZ³YhŽ:{°Êäœãi~Ñ?9k­Qõep{V$éÔ@և© Æ|ÕA±pGþkÙw/:”ÆwŠkLÐeØ·!¿Í(V)Á°pæŠsè—}(›ýv 2Æ@JÇÎ\•Å5lœÙOc¢úˆ;e‰P-è[tX*è 
gÄ4³½5ˆpâeÀ¼Þ*S˜Ý™åË@Óé2RÌ5\Ö0vœ¹¿PqÔÕyt˜¨VŠaüÊHÂä™â <l·ïñIªnE¿=,e1‡e&œRÄ'×¹r°ÔÇ=#aœ ®UßzWhSÖPMP,6@}ýF¤M@‰HWÆKŠ,³éLÊp:àzŠ.\-' ÒXbjÂI ö…8 {žÂ bŸh% f_Çw=$üôÛB ™ˆ&×ܳ± F@L˜iš,†ÁŸ©’ؾ‘àODkô7à;•K•¬£/€iã:ät]¯!O@É©€¦)6Ê9üš0Ŧö Љ{s.i*o(&öMôÝC–ÛG¼æ&o²Pñ`:©Â¿?ŸÁÛŸ?ßý«Â~xøíd´þßL«wþòñî?M!šå9‚~ºûxø³B­Ð´jF—7cà‹åãÝ·Iæ>æOþ²?|ò—2‹2ùÿíîãëZà&+‰ [Ô?ž°ÿ°ýC>=šÅþÝ.†¿ß}ýáÛ/·_~xÇfûí¯Sov{'ÀÒ8eœy 0PN«—c(®&VÈÚH¬ ²¿΄E`õLD)ñáÓ˜6>J-ñøT¼xÆ~®üDÝœ¢îS³º³ý*†UêæÜšHÊ&xÈ®S˜7=ú£®Ë;kGB Є#—wzîÄÇ¥àåH>#š<)¢1äæ._Ù qê½»·ˆi¼/ƒs,«î;­ÿàY•ËgM*ºl¿ÛG`ýFÊäç3þβ &íÇ)c ª&mw#ï\œs&½“(CqOï³Èþ´;„vîø¯÷vÁßÜ›{ùõû«þ›YüKö·¤èM<‹LXÉn„°Ê„3N0agŸ(@'™ð3ßÀ惡äƒÿÏýß~üðõÓo»–´Ã‰úÛão¿Ü}½»1‹ùz[±ïþ7ÿEj~׊SLŒ« Ϥ»”e;oþHg‹`‡2.À M47=•0ÃÎ/ƒÿ§Ž!`‚*f(]Ç#i÷€†›™Ýú—€†¢jJq hì— Å 4ÇØÂ¥¦öÞß~øÅ¼©—¬û?|¬7Õ~|ÆªŽ­À@¼Jg4!o™·Ž°äkýK±º¿ýëÝ>EòøðÝ“EŸ?}øt÷Øï­{×J°q—²Ú£AÃT¥î8|_ê`ßAnoÿ¸Åöý?,#ù—ô‚̽õˆ×\¬§w\¬ìÔu´¸õ~˜FÁ[ý3)HõöcvêŒÚíg^jqÉÍA$—eÙhb¸èòK€d•Í¥É+·brºLÌãNõåÉÖ-©Âбª¦ÊÙ´÷ú.´þ3ê´+-2À:u2ŽgÃäûŲÜ¿Å:Æ…d¼ËÊ€)¤~}4À«PX…×GÉpøúÉɾ”»žµÎªØk[$;È«|ý-ƒ]€ïk¥2£ú|G ¾›tÅwû¦›¿?|ýõóƒýžû‡/Ÿì'?}ùëÈÌŹß1(‰1 œ¼³Çׂ=7½ ƨhT>ã¬~Ö¤6ˆÀ…ÀÔàÜ©­ÌvÿB`:èñîÌ$ fôÚÎöîL®…å[ͨf¸ìÝ9UÝPÞ’L«—þžîÜY²sÄ*¬Ë­¿ZA0$å¢dqO’ ÷¿O&”Œgh«š¤^¿3 y:ÉØ·­š¾ÞJÄ“æ ™k‡ZñD¸ß‚~ ĨTÓ:¥Âý f•K²ÓQ½]k „hš á&U.t”¹^RÌQãý5ÈaÛøNì”ð•ùN¶Zð‘ÖU–µ§%4!ˆF¢!,ÒÌ Ž^sŽyOçYx æoÕီT¬V öOÛ—üáéÅÚAÁ^Õ< Ë@Á·jX ˜òl¿Îd[è³O^¥‹’÷ýkP ,2ÔŸÃYþÜ{*¿œ?)‰c„U¨Ó׈Ägù—ò^V+’P¦ZŽ\ƒ*^Õ’rU$䱎*±)I®V¶ÅãÛ|ëXIÞE«`%MÞÞº.î·GX„ty¿§÷ű4—q0¼¼EIþì‰ÈÑðC׈<¾9"‰nbJa¿Iæú%ÙçxûI´û¬‹"?»‚óÑ0†UÈ¡§t€¢ 5DWš='®a TH¡žÅK†ÌAª kyÔA6=! x'YËÂWažL+¿•kÄÓ­r ‚VÒxs_JK,ˆˆí%88§Ö$„°F­ñ5»Öê/&‹¬Œÿ×ûÍ)Zl'ë²o¤cí*ˆИáÿÎý<€ÕýæŒ^…­as„\lŸ9’YO…`#’,ñqqŸ_[eÄ‘âdlÎQUO}f׌Ӭàý{Üøá«PÞñÆêD®Óþëý¾3nfÚoH{u3çÒ‚D††Å²ÿrë/úåöˇ»}2óûã©þoN¶ÌÞ3Þœ^Î7ÅÑß³Ê ÞBºFfz‚7)’ÊÚGïokéÙ± ïÑ_áÃíÍÏß¿|ü|·(l‰vì£öë¡~i’¹‹=$‡ÊÉ Ž´+pé•ݰÆÏ‰E-1Œ„$µü’I1K‘ò𠦞†aݨÕi„qÆ8½Œer-ð÷»fXˆµ¶BEÂØ¶ÚD‘`昖vš®Š3:öUÓÑš8Õ÷dËl_È7a²¥Y$h·¡i²N—6TÇÒ¤cHQ–ò¥ iƒ-éZ7ÁS°b~k/Èû#öq&R°°9P}F,ÐgœR%D`ç@È+P,vœ4\œzß}ê®ñå„âðù[êìÒ†1l÷8ˆŽ±Š>ƒs´oùŸÿndSurßžz °–;CB#wFäl`ξÖ ^îÜ}xÄ3;æ÷_Ö@™793ÿ/{×ÒGޤÿбçVš 2"Fç ìb/{Ø’ÜÖŒly%Ùhï¯ßˆ¬²*UÅ*2™ÌRwƒ~Øíñª. 
¬bÛi˜OÏ®ÿ¨YS: ÈÑŸsÔìˆ}ß•rŸ^ÿh˜½½þíÝå÷‹ËÛ»¯Wʆo7WµM'‹€{) ‡–RLQf¯¶j&ØI»ØÐÈ [-kcY÷rvñ„@fŒ§ökLJù0$µ?Ô4ñ­®“}B!jo~b•pDb½9µ½ª_Õ茶,Üs¡•CïÞê@N0r{×ì›ÉeÊÏj਄Þ`è/™O,zVõ Xû®ÖIÁI€ÞP†pÕ¨’þ%Ñv«çÞM÷§0ÇGÖnørúý?mÏãëñ¯ÿ®òø"1¥‘€ÑÚ¹°BýÞп%Ô ×Ê~öŽä:’1ÐB Ò>ŠªñŒ³wd¬Ý’,áðMTþǨÆ3¡”  TI–²C|&4éñ(ê±m>ˆ“úWQÐlŽ+l>á÷ªûŒëæA-†D«jáS&boÙÜ—›íÜÚeÇ ‡oþÝí—ïü-5œ§»GÒ¥*GBHíª+öîC‹mË5æv+¬NÛ^ún¢–"q­étZÝÌ>[Ѽ£cm7{OmjçèÙhÇÉÏØ>¿¿»}|uõ~/2j5¦˜]) GEA‚øE˜%­lC¯äçG[Û»ƒ{ÿêí«Çß?¿ùu;äöͯ*7¿]?¾ù¯ÿþÛ«¬j)ÙžO‚Ý•9ì¶M_¾ÿ@ñý¥\ü㮟ð=]âµ|ðÞ§Ë ùtwµ;‡Ós<|½´€ã›_·—üû—¯o~]÷ßÞÝ~½þûf#¼—W·æóÒÓ«·o_}Ðý«‘èíÛçËvÿqÔ×#ÄÐj—ŒŸØxÖý…à˜Q`i Zbe Zœ…„c(µ³oŒ~q'-…þnJGööînV5ãYNú×gA–é7²3ž1á@|v›å¶ ôcZÄmq-VEm}ÐÅ/ÎKV²mùžíœ/ sÕ[YÑFôEv3fc£“«UðÛŽ¥v smWÛ–êFøeO…\ÙfSµItøn›â ù@¥b:æ*ëí~ÿ|øh¼ÙUÝ'1ûfÿ|rf<9G—äœK‘iµiŸX!D˜" èDÏö3:±¡²‡ä›K¶Ÿp°²[`0kšUDeôŽtÒ§ÐUø¨2*Þó|âN‘¿ÿ|ùøáÓõã×Ûÿý¿\úøþæþãÕç››}Ñç2ò<(iÿÉSìø !~üØQ>»ÿà·¯N /Âæ­!P[/†T«è¯wÃÃ5Ô¢¨%ÍiAÆ“}e[BSÖ¾šP²Çj=vôŒ4Ãþ29²Џ eí¼£Ò9fFØ•95ìØ°é  7Ö P¦!¯Ðú;?ÚK í©è0¥ìB›Ée*Jûgó~êO?±¨Á=\(J½­r`ã;¡ÆþY7½Æä"ÿ%[¿ëH_ñ ïÇLëtN‰ƒøôíüμÐ&ÁécGňj0_hÌfÂ'¤éð@Û©= Ìz  I„©$y\Ùƒ12‡Ãzd»˜Ò‘¬Vø×éÿÞ°‚½ ¤EZNÒ§dr¢¿\±îÙ]?^¿»}üØTÖ™Q\x€„,eÅ% ñä^ì-1RvdÃä¶=vËÙ±Ç)€³4×zc-CôÈM.ÇÄ@éEy­Æ7’m`*óZA–}¤ó““v—íÁj;uäX½Fp<šMìŒKìg“dƦ‘ïª(É…øG«_Ë„÷ƒUNù åO¢>\Yù³¥ÀSšô=6ˆñ¸V"POL š•ü„‡–ªn&³‡´8E•ù¨0DŠìÊù(õÄ(žt¯6ÏVxï.V“}ÔCµë­ñ'+d'ho>áÃÊÆ–Q-åÙà„)„E%Duáb["s†xñ9¢"Ï"Ë‚Ÿ™¤:Ç'1hk5›„>Ç…«M+‘õ…[‡jbËfÆQÿm¦” 0¥Cô2¨«Ã¥Íë£îq(! …l§àä2M.ã¡ü¸DeR1ùIJÓÀ\ÝáR%el¥ýA“]Ö9ôñаjE÷Ç;•<Õ+Õ¡»«w_ïôj_ÆÔûc+¬ãÛPØÞ¯èé)Ɔ(£I¥›Û¨x”¬'¦^¥i½·C‡ Rd½QTövÔv>!C.i´£SbmýÀyŽ‘T”‘ENkçÔ9¡Œ‘¤W&GáXNSßIÔyPì9päã­*X–0žaood7 :ÉÜ@á˨̀MÎä¸I¤gt¡on Ä& ÖlU¶q ˆ²‹&¤éÄzê awQÈ$«»«DñV­øô.þL xRÛƒ²³yìöø‰dÝØ Ô?Ît/‰€…îUb¸<&ÜoÐIøóô±tb öŸÖ‹Öâk3ˆ¸ '^¯eÖ²ý_¿¼¾¼xø®‚q?þ™ÍÇ.¼_Xd¼Ú¡&AÕ×ÜÎ2Ê ®¹ÞoüDŒ±%€b}›žea<9†Ú¹Kˆ4¨¨@SxÁq•×Mα|sql4åéf5í z*‚(õmì¶¡ !H\w“׆Œ$™úcc'ˆ±}¬Gu@YŸ—èj8¢ç?^ð×¥ÿàG·Á‡àñ»DìÝò°Ê 'ФŽò¢~ˆQFÉ@¼È58(Áöº°@jÑIdP2AiL ©•]Š‹è„)WL8¹Y:%’ØÂˆi¬wú‰|¯‡±&…ª{íÆÛ* Š‹˜½]Ç83Ù­?¬%­)–Ÿüa,?3y É @b(óW­Þ˜Šü%Éf©w—)ÇòlòO‚gµ¢Ó/, å[«ë3K È{•8·èicÇk?m¦_‡þŠ|Pm ‘úL÷­{ã¼çs$MZÙå‹ …ö)þ? 
Ÿbø]t¸ê"–m˜B©{ý^!ãæóo¯7½Iož˜ófGÿ7_îo¾ÝÜ^ÿ¦1¿ðá‹âÏÃÅ—‡w·ïÞ_ßnY3+àIíÌ)¿0Ø-F}Øõ'§ÙñÎUèÙݬýri•)Œ6ð°ü¤9" ¥'¶ÛKöž´ õz”K;«ùÆY—:É’€éõÚ!Ñ ™cîU’ÁRÿ^N¿Jâbß~&WµD™"ÖGWC˜£œÇHí#Ž7Ÿˆ{sT» º…À%¿8 ÿpß<ßvs¡6çþýû,üVj·³¢¿Ñ7T>+\¨× k w‰xÝÀZÅE¬cEôË©q[Œ~ѶCuß¿üAªP½Éò¤˜fAµÅ|X¤°¸îŠÃ ‘£®8$ý-¸#±1¸vÁ!@««Ù k#Ï^õŸ½Õú.ˆXKJ-az“P}¶8AÌÄ àp:£¾Å*ðK?££›<žÖS»,eû&·©è*µNõ«ã³…Ͼ±l´u2 ÅEäúPAQH-ï:܉­* g/j‡qE{’b—ƒ…™œ \” Î÷½L®V ´c¡Mü õ]Á¸DË6õÒÕ¢b»‹ƒ´îÒˆM7Ôò ýÉk'÷M }fe¤Ò[-X‡GjæôfýbdœþÑbdÏ×aØb¬ó†ÃŽüüIä‹Y¸*ò¥¢¨`ÚÛ7ÒL¡eTžj€(h¥Å` µ`Ìq°þ@„"G'©ÆÁ#eGåí®VÆlÞk q—WÆàW/2:òa.Ù8á\b8†xþE1d7¦îÎJf.>©€_½{T½…ÕpãôÏ|%Ç^XQ!rýc䑼 j‡/³\õáòãõÕ×ÛyЧ½ørw5+†¢xA«â47mZóâÈ v[©zŠbm“ÃMC£T¤#–±>yE¨/ùmmOÔé:±S3áò­ ¢ÍX¦—ënÝÜÙ'<| l¸?x>ófU_CapÒa¯j #š9[¡õ[¬3–€‘Ñb ­³Pí*ã`qÞ²q†éô<€ÍÅcv¿âäfUžr´eZ2'˜ÙG#ãÚî]°½‘÷VÛ—àœSŒ½ºšî‚?T¨ðïñú[ð79᳂?YXð×IÜ©3/ìùGªûœ¦áÆêž ¨þzuó8ËT·%®ú0Ä–åËãîžäw1³TêeŽÜI8_”`AòâƒYŽmLÚR¤GöLOMŒèÜ®<Ùz$3Éáâõ‘Q6FSÎkbÝâuϼÌ<Š+j77P ‡9’Ìm¼ož–S\‹è“7®¨¸](çÒ$»{}rÙzk‡6ªïÙ즷´¶Þ•ݡ޲ „¤w(™Š=õBï6ú§äΚ|Ú‹’w1pX¿ì£¸ô2±¯¦mš/.ßÍ´q„Uj£—œ ëNº…¼2„êfæ˜ (’I¹ôˆØ•³”wœ'Téaçè±qpgÇK"Y/Õ±¦L•ó`;HÎkåÔY9)R‡H×D8ÊO $´(¥W(*@eFð´­½] X¿Ü]Í]g4^Õý¡´§ˆÂKà54”dA ä!¹¹;CÈÕ dMÀém}d}JÑÉHÙeÛ;Êô0IõÐj¸…™®¤"s^¤’„¼2Ä*•3ÆG6¥x¤“œŒˆç8#‰Ðˆ Ǹ9.ºp˸)¡ÿЊ`H,ò2–멈êåÃýÅÃÍoŸ­Õlž=\¤vVTà­pÃІ æ!FèfÎVS¯þš¸¹ üõàb1;ó1 ©z,a2!wd–ÛAeqmôõ‡#œF H\VX]_µt)8ê‘¿…Gùj9³˜ñ5¬0ò4Z ÅVÍ’ìú¸&ÄUÄ»Ÿ1ðÒNá2Âb ¦¬Hðâ$€› ±Ç©Ò­ÆZS|pPÑ™ ¶b91 fÓ ;ô¨„1¡TQ޳lWÔ?æp¡Š¥´zæ]¹á3ïÆ)Eþä‹‚®ýž1Ô"T#éie?Ê=‹‰q ÿ¾17X|$­ Ï˺Ÿ¯ýòíòâ˽ŠÃX…3yŽæm²¯ }tRKʉ…ÎÍ'7‘«¨Ž¡C©J†ÅnC’ûÓw¤é€¨vfGi¢²gHËŒÑÕ§º F< ·Ú…!:.⩸®–)»ªÊÂíóÍÐpŒ«ä‘U²–p•úÇ€eH€iÛŸ±În70OÒóß>_ìþ“×ÏÿUí}kŒ× ïÝqʃ•a/Êi¸5º¨xBinе‰\½pÖ$BBŠ\nŸ$"À¢ñJ>;~kBœHk§VØÄ8«Š›pܹH'Ñ­£•l¸ØKç˜÷ÚÎÇ2š‹iæy‘àÒ¿53¢BEÛ6õ±2A`Y!ï|’x=õ5'úÊàËúŠ!¿ÑqG©N «že)âXJ|‘º+¬°IC€¸ê3ºúíëðܤš§Šñx•+q O‹œ’s+@bHÙÏTÄDé¥bª[ƒ³ó•5,0qq mgýT{`:ÖáŠÔu"L¬Š†{ª¯d-è÷1ž¥h›Y"»Ð+Lb•@‡à`ÝÎåýõã‰WØ´f2³ržÕ¢pà\;åË`É®%‚ãÇ¢¦õìÎLjuË&š0Ĩ ¢Ž­õòÍ¡;ÂôÈ&ê©Õ;æYÉÄú«Q*gRŠz㨶xÀR±tàPÕ|-•¬M%re §ÖäjÄþÓ •‹jœèÿÒ9#åG›mýÆÅü¤#Óº°‹-䤕gY8o¡^7 6y TnÑrêk…bí2g·sMHÕ‡iã™"ˆÖ®\V*‰‡@¬|òFû=[W‡}÷$bU=…æz+d¬É]Žýk;$èK 
¨ÿÿK®CìGúþqSé)¬Û›“]¿=þ7O–EihÍ'–mN«¾žL±eüX°]FÞ‡?æ2ËlÖÙ”9¢«È†8ä‹Eè̘õ[&´é‘xÖc»„˜b×÷²B2HË”Pmܦ™’Q¥Š'6QWéá’¾Ú‘лŠè¼‹à6¥†§å‡8;ŒtGÁòcƒa,§pvÇ7…µKÁlf7Ú[¦2€”@~®CíÇMZ»,ÚE/.Sˆâ¬FÅ’‚ynŠôÝßà« P‚¯fæK+J¥:A]%Ri /eq%õç<¼Þþü³íÊ^ÓZ³¹Á ƒ$Þªp?cmŸLÝRà”l`˜ÇŠ8Û>²Ò#kq\«àŽ$=2à*µ(a;)¾òE/I„xÙPÿ–”CJèÕÀ˧ÓùÊét„l³.ÈUÌ$I](øÏnMšÜ¬f:Û£_2ßÈnz´¹ýÎÛÞsûËÜ.#n¢¸öX;¥~H‡©"›Uâ“óL-ô¢‡wñ¬Ãaáiµ™Ã¥ŸúlB9U—#Gã¤èb‡ ûÇwÐödúäW­‹{²Žž?búËÛ?¸ hŽö•rçêÆ~üÃð †'S{¸ÔWàÎþöi®=Àíœ)Â?rhš¢bœšos7óõ§f·ØŽJ°—ò¸bç‚+«SÂÆlYÏ„r=\óƒZ„8k^qŽní Fç̈cT `<=Æ%:ð]½:©VS¨÷Ñ×Á–×yP#%(;|óÚŸñ>µô€|P“ܦj,´±ÒZôŠªÍ\Tgµ¨N:››ç#m“«U˜‹ÞfKŠçð|sôô#Y{1Ò‚ kª5õÈ=£ã%üV3ºÅY BAYåýbv§jvãC  ŸO8¡Ä»õâÀÙéO7«á¶ž ˜S}ܽÛš&˪I¨ÄIaùv.ªå[ÂC+Ô4…ÓÛ¹67çìv÷ÉÕj§Çò$Liã@<µwpnx/-öl¦™M-[ìÇZo\©£š åañÊ\ÏeÆù˜Í„M®VÃ8=VRñ™Å¸}‚´ˆqûëꀬ…Õ£oÛŒ˜2›)«c‰:åU:ÆXf•ËmÀ›\¦¼ÑŒ´„7 ¿d>±h/"&½²¾ØiÖ{ikÞOõdV½— ã.”ðÑ,Æ]©~/½Hý€ ó(A!Fº¹xÈáîäfUï¥Xý]?ï½DËÆÅElSڵ䩃²{±M¡si …C  ¦±ZúE¦Ÿ_e¹½WàÒ‚Ú¦rn–5-¤*ò6”3ÍÍÐ4Ž6ÕªFÓ˜¨È4Ì)Úäb•Šà%Ìã%ÐGgÙ3ÙÀ4ðŽía °¹Ú.uª9'ªÆ7Øw’k MoVe–ºÁÕÍb«2ã²g!´(›÷N¯è×ßÿ¾«]´ÿ=BFàPs¨²ž°, œ×ß'Zuˆ Ú¡ÄÙÑ,AlO&n>ü¦àÄðëÏL>"5„aŒó”¥ÊÞ‘@.=!\'©ANÒÙ¥¦©€…@4_lîõ×7þ™¦Òtc3³¾ùáëÞ\õ“±”_bMÌÃÊ@Ë‘ßH>¡Oé°Ùé6ùìÒÑ´4SVõ$—f®_]œ Äy‡r!®Â'ä|SäiºÈÉYšrŽ\ ¨d'\"è[|íÄ(ÉÈŸdoNNl‹6¹ŠÅª’üv¾Å)9AŸ­ošPª‡˜paFól/1ßReêA;w¾=LÇEj¢¬$EÁåWr>ݶ"&%ê,VÇdU{ËXÝâ¦xµ¿ØÓÓ û•;¿¶²lêNMš+HYõ!š¤‹îÓÀìRœ§ûd½öËtëlÍëQˆ H¸#R7•’ÌòX‘-&µ("…(M㉄T ½ð|ßãú÷G•Ñ Ýz—j^Þ}Rß}½¿¼¾ºþpóy㦸²CÞåí&;ä‡@Ì¡"ì% Œ/ùö¶ õzÈŽÚ‰Ÿ—6è!;Û¹C3³="!:œmZœšAºý½ÿgïZ–ÛÈ•ì¯(z3›fH$òáqxu7w11›YÞ‰‰¢º-K IöíþûI”H‰ U”ìî‰è‡MJõÈN>™gñRï9màîX1'ŸJ2¢÷/ ’܉ÑN:µ`%z†Xò ¶š¦Xò<æD‰3k¤VU•×_,`iéK­u0ÊÂT±GëŒiG»TZEaqhªó÷½kBå<õ® ò9GfïÍJúÈ¢vk%èþªØ¿F¶.MÙ²Iyhi©ˆÚvæo¤˜á¼–.@íT—ãåÿ^°^CY‚7[CÙÿ›¼!&ï§–Ë\Cý¾µd•,Æ–!Ù¨õÞߜ߮º|óÿýÝ¥ýø¿ï~·¾Müß®Ÿþ\þ¶ZþžgÀ<=a Á]Y¯ÜvÊ5õs”Fä‰Èœe€ÌÕµwbòM Õ|d·T®ß›Ä\;nNc¯‘Ο‘ï©§£zÚØÈ1õÎ7jã)Û1ÇLW·½²&fç&&ͪ=xöd%VQNþE`òÈúâħÇ—m®/a'£OåD'·¥PiÝf@í¢x }s¥“)ôᆽ¹ÏÖPì½Zw¿~,Ô 0ú× v jŠ 1nâŸ×Ó÷íIƒÛrÖN{(ã/tÎ%7?«ºs~d€ôaýÇA:þþáú›…ë »ìb=½Êø‡Š±ï¸›GZ0b½¨8Y,ž(äÂg*˜±Œ'ã âITé0 ¾ã^æQ9d%„Ôtßi_f_{ Ù%×;òAºèƒ¹^½ûš³Ý¿û²©RFªÓäâ™÷5»Uf5ÌÏFÅÙ9Ówé“5Å÷0cjÖ´å&ä1A’çÊY 5¤Þˆ§š15}«yóÒŸtQç{ó´ì$ß(·E Sš)˜mžÛ”²ÇÖaWóáyñZM¶-Lš^lIC‘%õ,yUú÷ü1í‰[“Nõ,"­|æN0 
âQ•´ý™Ååõù¯·wO×ËÇÏŸ­x±i§Z,–‹§»Åëï6ôâù„lë ãÄcǵ NLÂj1ß`D¾;ú{GônG-SËJ„Äàì#J¿kªê¤ÏNˆ‡lAøN ÌDÚšÌÁ‡!ç}â,²”»7ÛZG\ªB‡¹û¤¨5ľ*ß¡*u1OÞ”Ï\bÔ ó.! ZŸÙ3ˆ[Ÿ©ü5mÕÍ=ëŹÝx¹zHÎÅb½ånÏo†±fcãuÛo¡ÇŒm2ÈT‹øSG|ê¨f—Ò‚¦è¤Ä,m›O™%EÈ7^ïd_Å.iræƒÃ™íRbwko—à0|Iz»9Â,å®žŠ¬ãæìù;·F§Q­år!h/ézvýâ%ûn«¦ȰlÌ©?y 4†Ìð;SF½¸(!yŒK õ–M+Ê%¼¾–ù à%¬ëà µ·>0k~xš9*",±C ¨?jT´E¶v«Fœ£&#»ý5Ðî9v)Ö…ÏýÝõípk„AZÆC¦añ£J£Æðã¤ìzÕR3.rBZPK›òuÒw®c*bÌ—H¼è †eÚàtðq^Ë$Î;ßÞ4)IÖ4ytî„]Ñ¢ûB¤"¼k¹~Þ6úÕ2R‰>ÏÃ_ÞH]®®Î¿ÞäXn"!lj§` «_$ðö›~D3uT15-”ýuÝÎmÆOZ*À\çéž**Ïq~;­{B’7¶ð0G1|—çJ$Â?–™:‰x-W…ØÉ‰ÐÖR½°LØ¢\žj®yþÓ°2njDWY‹Æ…±Ó¡M’C°à˜"ÅÅÄ«=I‘ŒõK;Íþu0xšTw–ÊPíáüÃý·Tòÿç2Ùª‘Øj»u¼ð °Uâ˜)[ÑÛà©ØZ&°j‰ÄÂÀПðN)ô"¬„|ÉçN:5òöÔä- RÍPecªÇÖ™“s&1xMØz|XuU}X.ªjàG#m9@Õ«¦&Ì)it€úSä)Æ•ཽØÇ›õè”±pKãEß·‚cFìz—¦Ãƒ4peߊ«š?k ‚Øã6¬9éÏ"pïÜ4=Ÿ…Ý §†?kíÔÇAô Uv%aëÚ±$gwx¸•^9™¬-uælp[–2àP×±ÍÁC^­@) ÷ ´Ùy õ½[bgÞmàwÃÚ›Õ¯çË?_æTmfÓϺ0kv>4QË"ãµÐ‡»I 2‚]#1šZvK$W ‚Ó2Q\ÿ=gÑt_Scvàëžœj 0q'ÎáŒB•­ ®}F!ž/×Ûö€Ùþø%Ù›Wç7«c;?üçÙí×/¦_î®^Ð;ÅÛ£ML3­\‚ê’€úx ,†Ë&Ü÷Þì?îî–«ÇÇ l×Ò›£Kt­pƒRê©’Y]˜²ß$6?ºDôœ9ºsCY½Ï]’/÷?ÜîvG‡LɈ;œg‘uA—w_îÏV?mHŠ?þ×ÿã,»ÿMêé sKÆõÁöfgðqAËx¥W1ÀÅ’m~¹»Ü-]wöùìñë2-Ÿ¶·þåþëÓÇOîðíüæëê—ÍDògë©#*k0kH©êHÎ>>»²mö5½Üçcè`ëÛ‰Ÿ²Ìx]¢Å²éØž¹|.Ë8„ÉiU;ïBpæÓòX­®/áPÈó¥áÇáÝX¼^hwñâ‚ùuV‹m*wˆÚDîæð<7ß¿ãQć»ô÷½2NÆ«%/‰—Ë‘3ŸÐCp£#Í%üÞUrH°ÁÙà ÙÕ2 i¡Hê‚-êmB=I.¼ÝÙ²¢=9ÕÈb­3ïÛ@åçÜ=ÞZÐëÛÅC܇?íO—«?Ξì2™”¦/–ÚÓõûb´(ÛOB€gh˜ÂL‘ÑZûä¶dK³å´ÄFAÎU=ÊèÁ¢£*Ž"@q¸H”x—Z bM:ÆŒ=¬ &L´ÈG鈹¾ý=¹Ô³•¯2'û·%‘¤Ÿ=ýqûñÓÐ~¤Ñ•†/½E—i&=ÁªN÷RÕçÙëuJ‡ó¥ÍNŸ_ææ÷å§ãK3pT™¶4i†| fhG0Åqî& E´ØŠCúˆ­<ªeÏL±~èý˜ÚL°ˆœ™0NíƒeWØkfÚ š1ôWå°g¾Ï\á¶.ü­¹Ú{µ‚FØõc™Ï¥¾´>²šââ(BS5›ÅÅLn`ÖBÅ™œS%Þ3£ð)Å‘†maÂiÅeÃÇýW+P\z,Jæ)NÐQœ”²À´õÄ‚´€2ÜÎdá˜üfè`áù:˜ßçeÌô‘Ïwš/Q.k{ žgÏc°m4©;ºÖ†ÐÚ5HPá|L»Vþ½këm#WÒ%Ø—}YÉd±.ä,pÞ° vÀc+‰ßÖrdýï[”d«-Q"ÙM¶3sf&@ÇV7«ÈuýÊ1ûú(Üp/§æåôßËw—O_WÏ·Šnj!ÜÝ}»¿yþùºUÖ‹GY|Ý1Wö’9|¼l½ƒG¿Åpß²ôß·%wæ1biýq ˜|gúÒ;“m­Ztgª9j îL{¢âuiw¦øX­8(pØÆn³ÇMcyC•wݧvüã=qJ±0$ú¼yC$4ÇU«ö'¾>ýåÏrÙ2vèiwfãWüCÏ÷ 1ð·ùšîÆO7ŸëФÑéûŸ<•=LÊîa@7¦HZ¡wê…¼„2/?Þ®þK¥÷G·Æ+Ò¬þã’ z?ýû‡¨À›Õõþk,tDùü>Ǩüý&ÝLøº˜/»  >­®V7ßW×-Iå!ô¶Vyÿ »…í>äfýáþá‡îË«§Ï_.ï?¼Šc¹YûqˆVÔ<>é­cäóG¶ÃHdµcɸÎD–1¬ô¶‚x÷ÅmõÙúBÏÜ>0æLÙ©5»VãÓ'ÖÅit“R9~L*‚Ã@feVºÃ\ÍÑ¥wÌ™0Ú‘ iâlQ.HÛ3"d!‚ 
‰ÙM>|ú ó¡ëÎïHU k]àê}µ«€"¯\qÁ¥ÀÚ•+ŽBwì%ñƒc€mIð²ãö;=^þëåÕãûÛ‡ËÏwn¯oob0œ ìcãEr0ÁD70ì7·VDõ:ûzš³àC8#sÍ*ÛÏ=“97]P±]ñ,®#®¶×¶ð˜ÖÜ3$‘"oDPÀó É‘ò—TÏ‚êpþi†?è³.8ÿôÕX?T`•gúƒM4=<3 b­Ûrÿì¯6ž{¥žö ˜ð/oßùåÃ×E~©ùç›fyWx%dž® YÑ ,gܯQ/ï $„bóGÀ*$Pô>Î23ÏÒÁûÌ^9ÅúòñÍ"¦H¸ÊùâÁ¦°þÁ§JŽcOS±¼ =ÿóí[ÿßo‹ˆzFœAj"Îè}uÀ¹ÈõO*0Ù—®;cAЛЋ¤šG|µQ㌧zR›g[Ž^ôTMRøüÐÙN1Ƕ5ûØ.ÃdÒ°ÌS;(:4•;Œ iíw¶)%cMÇ  h —Ëö@’òìkÏ2é1\¬¯ ¡[dšu;ç×NLicì{üx[„+‰hòÜÓ…GÜÆª € p¼G–{RÉ`#&²®€à6àu¬7l¢óxUÓÏwŠd/¥Ÿ7˜Ñ v—p_ojÂl7…Ùíj+£5û¶¹¯8Ï]Fóæ’>GSóšCDOoRyT€ºç€±êÖ/—'x–]PÛâ&).Âô„h]™J6æõ¹«Q8¶«ÎMÝ}¢#:Þ°÷"~îäjJœöÚèœíL\嵩%ÆBOÁB¥©q£Àz…[v±ïY_½\¦òD`Ñ1%Ë$>“N§[°×‘%ŽY4²cjªÏ[¶Tz×8xpKë,PO[€}oiª‚­@©:ÖZyÍ&`ŽQ…ÞîôÉ6Ù54‰ÔʳM†mAÑ“M—ý€ž‘CÑ_£uÿÖ><篳«io´ži~ÉAÙGkS¨°0Â2îtÑ$Ûzûb¦·/ÓO ~H(Εc ðCQSÁeiÄg_S1íxˆšÍÅ—cÑóg¬‹öÆ¥X½%оÌÒpG°Æ¢Ç†vu ¶pù¡ýtqü²2òþ“Û´xº×m¾ml¸ÆŒ<­(úìâñ™$º µØR«˜´;§ô¯hòÔGkâBAgMÂ^¯J™5°F¾–’— EKÄB/° ³ì[ó¯í²–M¸`IÇLŠ ‰h•®Sjеµ iquC/Õ$°¼x…ز^}ˆ‹zM˜ÓëìËjŽì ¦Y¤4b'kr+{´Ly#Vr@´fê¬ã…cÎ{ÁÎÕÀÞë‘‹8­’rÙ ÇL]¼×¶ÅtµÞ‹0$aÖ9Çw6“Ú=B2y?xµpšAmXâÈþ*Zµ•ûi4M¿©Tج}̺úf¶‰ N‰g½y÷„jpì9Ç]¢®ŒàÏÔ9«¥^}”S†GÕì¼Qdžôƒhà%çŽòÉ[Rž£`/É¿Œoyñôfõ…ö®>bzeÐ_¹e-eîî¬|QK?Ysc`̶hþÒ³„â}UßÁ‚ëªï¶´ŒÀq‹³Á“'#JyívÙ§Àã™ñöóƒÕî/Ç9[˜qýEÿÛ²¾M"Î(A9X…ím—W8Î|!ø}›µ‚k âò¸­6’ÀE)â¶fSѯöšj~Šš®xç\(¦+¶ÜÊàB¾,´—`k5t=.IZ„5 Â¸ÎáeëèÖ„{á£rŒäÄat®o-ßc8Кbþ·Ä›‘EOiU?Ž$ö8$ò ’´ëgã Þûù(ÄÓTY¿ŸÖÕ×Ý®ž²Ñ9PVaù0Û†Fx#]\ãínIê—»%‘¯9 ÌG/X< ¼. 
CnûÈLŒ­‡Áè-vM‘]|ÓÆg‰öøbWbyžNêÓ,Ô÷Tànx¿miÑ÷ŸfŒ) ä‘|£k†ãM&—Ÿî‹²¨ì’œ„M!=bËF[@YÎ\¿[†SâêUqíA1¦D­k`’ºXÄé˜åQ˦¡æb¯ ‰Â¢¼>NH›ƒ´mK<e5#Oë=‹.}I?¸ê 7y'=.ÎÀvÊLš›öï˜aNÛ‚ÛMÙ“ê–^ë ß¿Èo–õÊHâ Q5MW” ãd[ÈEÓ&[ÂÖ WÍ*bÀÀåj‚C*õ6ªífsÎÄÓ«ö–„⪩”ºy¢'ÞVU¬2„ÕúÉBàBiB”"¿þ„(%_{»¶Tíáþ¼.a+»AD¦)Áí@¶Š·Ú–uÞþ|{}¥?wû˧±tüs£¯æ:éd GG²) C §oú‹ãâŽÅÍdÚ-6ÓŠb¹RÎ/—˜ä›åÀ› °%Ç£È%ym¤FG#µŠušçzë'Ògµ@u™ÂU•)Œm~ To‡,'µ˜Èã*íSÚÌcÄ×§dÞóTÄTËz=N0×ɺ—CCÙs´µŒ¡K#ùtz"l–”j6AaƒÏÞ îeш°a€_Vbèàcy{„•ÈY„å`[Ïò5†ºkÀªÆe]›xÆÉO©Ï»@‰â*·•&RQ›NUlÚ›¥ ã·†¾Üq1Ù¶]ÀÒ’7ëu%öŠŽ'Y>±ÙÇ”©54ÝÔœÓè¿æÔóG¬cÖ` !fIऀõƒ¬ûHÞ÷çÌ瑦µ¥ACÝYxŽp’d´ qGÙ¼fËô;R aiõ}?ùŒÏ©ÊBª)ü˜Ïai8Jí*[ø™nNm…Š5À%§Ÿ‡ä%®:ý<Ä­Ûø‰]fXÊ¥©„@a¡ $êÚX³¹ êË@{÷=©& 1ú°NM©'82B¬qkïh%È$óë Õó=7í¬§$ÄIÖr·ºUq·Œ³kÅcŤ’W`žãü¦ùçO« o!/://#™V†~ˆv¼,„è p¤&…kä¦až÷ßmyæt=ž‘±l)šñY"ÍId˜·”½Lzì{Ñ×6ž(^Ô"äÇ‚µ¬‚]Laó)7‚ü”€¤"m9p¿ÅžâáözªøMìß/á'¾¾¹ùI­&áO|¸ÅÓ-ÛàÙï-fû:£“‹‘:ôp«´çà_,ò|÷fK»%¿uÛ„áC¦¹Í\5ª£úït&îÛ7᪮õÂò¢FuÀrR¬ÎÉë2±jš$…'²®j$¶½¿þõæý£þö“pyýåº×ùä£-¶äâ:2•Žz”ÃbÅD%˜ºž‹¨?™¾¶ÑFã²ìM¢°¬ºKô±e]¸~¢Æ`ºÈ™–ç³?xÜèŒK,†Õb4ÔôcäƒE‹‰ù|ÿYf],†…Ê…MÏE‹©8¤»¬è ë”Èé{蘚/®¯Æ;»,{Ô²<Ó9÷fK–Ð@©a›?’åîõTH±è›ù "Qу£º_…Kvpm&±K7ô¥5'\ÔºžQ·5l©UwÆ£-J›V‚>[Ðto¡×ïw?Þ”ŽþþVSHEµ”½\-+ÏÀ»à,¥Œ=å¹Ó÷HŽk*죭0ÇÒê3‹&n·ýœ¿‚£,mäL6Ö^[Á—¸]6n]b79ãñý2b˜&Ñ{ØÝ7h±\0^´ ^œT5ú ö±NÕDW4$)uæxqþ?Ü|üüÁúšŽDœÿk›!qI H -e{o|&Š7ËØõ÷‚{À'¤¶wÁn6#§Š8)éÈTNÙ"ÿ\]â$„â¢ÔˆYf• o9#F‰¤/¼vAÖA‘ä™3qÿ7¦šÒ©¿yùáîú·~…•8Äè¥"M&ñ\´ž©/õÐzfòë”&'ÄÕÖCƒÓ °â¾u|„—MVÛ%›‹õ@þ¸UÝ|yûÃßÿvX]Œ|ñ¨/øéê㢶½®Ú×­Ê^úêñá×½ø‹wÿúôö‡×ª×¿÷já&}Ûzýì-æõú@µõúw/Ï·]Ojþ$«2Âó6>[ÈARHâShkm£Lkæ˜Â"ŽKÃy<1Íx¶L»ûÒì‰Ù§Ô4¶ñ€FÊýòºyöˆUm¶ÚŽ0y¨¾•îfÒr±óØ ¯nC€Ú2jÁ‡Ê6ÁS[Õy«ÙYæÙ§U´! 
*Ži·øç¯¹Gd› ¬õC¡EêV¤‹»Glš,îd/˜c™Ó”>$Î7##t¼gòýß3ÿAέÓ'TPl‚•ÆŠ[ð“Fr€a·–äÉOº“¤´êlh*::5±ïûæ'…ì C6}â "ê ó¥“ d/ûfê‘ÅŒÁ’„@qIÃΖi®sϸýi ó§‰íþ $ÝÁýÙÈKÉÖDÛc íÊÕGù-öÏ8Òß§yü†àûþJíAÕñ°h»Ä"äEæÐ.ýòÚ# é¾Çv‡RôK—еI¬ w9·qF­"±$.â®b³§ó¸kÒËOÎÄÓwõµ±©vûxfikØõN¡4·Æ†¤E]û^ýøª«–ú.¼vŒ8¥Ù ²Ã°Êë§…¶Ë¼Þös ‚E+Sq¬NÅõˆ ¶9ËRò E‡[À>û²šm®úV×C\àE­Uø£¤ÍýQ¥è³î¢O"1¿Â/tìÝ>…WHªû¶a½H¬ÄÇeÉuß·™%ØÁ¥ø"~z—ñD}•·ywñnK,S{i¸'‚jÒ1úÕXFõeÅ$i>V|$H¡„eÁK®¬8û²*,s‰5Tà׳pÈï´Ep‘`·æë0§K!i䲆±Í’‹éЬ$ñ%j ,C­¶ß:C'£R\ŒNm¿ukBn¡z`oc.ë‡,C= ³³ÅIE™hKφMØZ‰ÕqFü pˆ9AûàøøÚ˜˜j'G'9›MD]ÀU^YHŠßέ„;B¤Ð ‘ÎýÚ$‰ô„¤s¿ö,&EÍ|­Â$hê s6îq5&ñL¢l×P“p"a8 I)ßöüe•„¦ãH²ËXJq$EÀ­!IÅx’(¥ ¦w’;Bãk ÒFmÇ å–‚Övo6Oay¸Ý‹C¾ÄnƒÕÊcÙG(„sü9Éc«T>šìN°…Ä›Tk”þÿ&Í_Lhˆì*j8‘^Ÿ;9»£y&¿N×ÙŽí×=Ùm_U)CÊB!1x’5…Ð?…l/¥ûMV'MRAãìo¿sáñá×;ëqÿNû”o<­ð1¢¬î©ßqi#9ÑÀv›õ 'e× ÕR€Uz5ÅÛä‹ÅÛ§ÞÅ£ªÉ^P}:v¬mØÖ?/h D´êê˜Ã&+’Àw€Né]¸ºýhà¸pY À›-6åƒùMy>^Y"1¥ /Ðe½‹Dí®ÄiéÊBÚcò K×ÑœJ/¨3åFky+1‘à`+”¡w y ö"è2Ÿ§&I8’ƒ/AºH„¸Nÿ-ÜFº( 6ÊdN‚SÏŒE¿< Uˆš%@ÙJ²{9uÊXô$'¿ÀH¢'Ö¡°øÍ3ˆ^²ËؾäU·_HÝž"¡Ð=s9{0o©d@ÙþF[òÚz ©¢S6+…]UëëVP÷^Au6EÚR³Ì©?)»2CŒ0Òâÿ#ïj²ãÈqôUüj3+…I€¿~µšÍìçj[]å×¶å±äš®ƒÍædD¦­’™d0iwOm\¥’#‚ø|h"yƒ[eíÚœêñJ?„í©ÙOb=Ì"t€ªårF¹ É€wûlä”A>^Ù…æÀ;[W£s”S(P?²÷ZÔö\cø€ :ŠŸšx‡%ª2¥`‘1f¾Ü}|v+æ~ÉhîÕP{Çë{ÿï›ïÿ{ÝFê÷$z¸Ã.¥I3¤}=™#B<|¾}{á>r]Y€9íéÀ¼(ß‘fãÍ+¬t` Ô帲ò‚…wuÇ…Ž|V¢ÉEüž%¸­YFCPë¶šÄ{&-¼­­A ¶¦Q1Ý)Uå!%!l‡PžÑøF“òàߌ„è6W crнÃoé.,‚·0Åœ#Ò™-mdŠ“eb”øãOxIµþ~JYúyÐàAµk]©) \«~>²tî’ƒ*Õ(&ª·»fÈe|¡'"¡\°“`¢U.s€šbÜÝe è3§]¯»::ç¶ «ÆÝ‹çÕºùþòxcH».‡6Uzoògº2½5º°?>¾žg¸?Þ~ºýÍt.¸Ô›Ÿ³ ¯–wµäØÿÏ"AemCg;Ý/À‰·½îöÔÚςȔ«[«q²Ðbµµ3£”z;Ÿ(9ÀÖÏX%_ÛÖ§§3©w4Dâ«/бZæð;«0I¸«Ööt¯DFÎyĺø"Æ[Æ{Kó#׃-Ê9`Uýg‹$léÄ™!_=ØJ´÷8¤Ñ™ Á–qJ,’’Ú¦À4´Éš´%ØæmÛäÏZ‚=™Iqïr…ù,Ë3N˜éÑ>+Ì4Ú1aðZäŸ#Ä;+$Œ˜D· ÉÍ ^lL* »ºÜóv/´íý'd?ÞÍÃí*ïk6½Ÿö Þ—cO—CRo[Ô4 ÷ò¹F^QBÂÜvmTÅÈÊýOÔrqäøÉp#fIÃ&µd¼Â=AÄâ=Aö÷Ùm;<Ôbç¦.VtY± en‚æH–—l³í¡g?E!˜B~,@àÇ-ßËclE1M*Mç´RŽeÕb„¼8M?µØB™ò3¼óå36ÁàÇ8YÀ©Êí°èÆE€|q£qƒ DîI¸|p“×–»­=_î?ÜýÕÈdÖíáõßö£o¾kÅ›o³ÙoÎÎyW”.g@¡ sy²$€Që2gª/Z“9SÑâ 'ºh÷¯FÐ5½oöi^xa“ÐôÄ æEØ\îJ©)¹½>bV>|ªq²À“„ŒìOÀQÐâmÈ‚Cd=A_ÕPbßF& =IêÙA($û'ì‚•~Š»5N:,Y 1±´X ®ŠŠ”V´,4ÈT¸Ú¦´J<؇5·™ ÒŽ~#bóªd¦r+äs° 
òÒ¤I›<€ßÓW£;.ðuq²ÈûªˆÆZÅ6±4h[|ˆÜÓ-ˆC‹Ûwãh3ÛxžtØb¬ER•mÅÖ¿ïÇjáYL“ý¾qy).ŸQ\Œc`L"E_~6”#&ܨ =kYêÖ6†> ýaÝÿ†æ­l3@ýòˆæÑ3æÐû˜š¿”:Õ~I¯Qæ\Í ­° è^Kã–‘=»Úc0dµTEe¸…÷o»Û³P¯(;ò~ƒya9‚»ž§_.Í/ 4B<|•¤%׸âºÑÄ3–lÒÁÒ‘MR" lööÍ+ÕÈ܆ÄLu¶úÚÎÊ%œŸ›K-‹ƒµ8ÿ(„Ⱥ†k‚‘ͬláš±kçYœÁöÄÍl‹­lã0;IL Þ>k-M÷“£÷e<­‰q:áœÖ¯aœ×¸$mcÜ­5ÎÑS ÄS‰3Óƒ£?lJà eÀÈS ¡Îk³ÝPåµ1È–Ôazí³-§q,¤ Ä9óY8–ÎW¦áfoRÂ<¢qâðÛÃõá»×ÓuûøÕþíGÃd‚oUDª'kÍþæšlh„’l,É3$j³Ï¶—‘\[60wyRH‘òO½Ï®$ä[ÚT[Ü;A–ªx ÝÄ‚@#ÄÿƒÀÕÅ#uõ>()un;|xmÿ|ÿÞäÝÝßn¿~x\üâ8I)ø¢âÔéÕ†ÁfZ/“žˆ1Dì«Í§¼¶$0ôHB"3”9îÐ~÷ܘܼ½& “EA¨%˜Äç]Ûeƒ5V¿ö7°rß–âTöNj¨"ü××ûÇÛ @—æo~é&M5ü‘¡çž…!šÜ¥òrJ³>íÄSÔ7— µü¦â¥hyTŇ;ýR.Þ³<èße2ôðêo_î?¾zÿéæãÝÇû/ïôMy_À¼‰ûl%×ÖFL;à(ªFsýø¯¼"zŒ>b>JÅÕx W© ú¨Gô€º:€Ú­¸“¯Ø ± O6ºüúD9_[5ÂÁΑ&NŽ4ÿ¡Úxz¡¾RwUFêZØ1Ť?wBÉ9šd8’®48ÇP+_»øj¹uõ‰>]¾‘&’¤B×ÖFÖ¼ƒo <µˆ]sh«ZW[¨"쪊9uÜ'Ï…”À«/¯Y‡,9Åà)¤Å– ^‘S¨{Å\æZP§Ë+†)BÀpu=TÝÃ+ò$ˆ áj`ß•AšÅ¯†AHàC41‡žÎ9îŽñ< D|õë«Ç|zó—þmk>´ñå~ž¶8λ= ãæ¥aO¿ópºV-&Ù¸Vmý',ö§¥¤Û÷§­ÿ‚_í¥Ï-ÜÁ”É®–ìXt_=—fD‹_ûÆU´0®"§V(ˤÜ`…䘧\4C©¸F{yšú¸ Å Õw·<›VY>bÓ´Š›ÂHaE·â£–ÃøµKó÷Æf¹}û»Iá!8=>ùðíwMká]ÿÄmx-”¼…µqÖêÅ¢EÜ·†òöËݥɾ÷vªßïoL]ïíD=ÛÍãýßïÖAGïí>Ïs¥u“µÍÐ3ÑákU@W7 ô¨ÅÅèr½N4ÃéP*˜³Fô†V”!Õ £ù¸5ãå=ÅyGl§ñ¯¶ ×¥#d«§?”™Ð§&V—é¶©e_™ $#:APjhBðb]Fðû'³%ßé4DFt È{µËˆCåb€¸)ÚSèºèT ]mîEM½¨àˆ"Ù²€¦ruá;"/(¿¬¡•p">ÓeØ·xBqò|àQ}“K{0×ÀèºÇW…ÊCù"ï wúlŠå|ëÊË‘–‡Û›ï}œå©p¹ÊùÓYø1ª§©£‹T8x»ô–Y Tf«M~RÔs“­Ž¹ªÖŠ¥tnA³¶Ú¥> ®™ÌÁ3žHü¹Éœdíš$𖪟l-NAbÔ‰I×åÁg=¨bæbcyÆðI†t¡‹E¥’¬‘‡Ìî|7€7™HÅ;džØtr€®È¸ëBÏ÷ïVôUÜ<þíÒuȉé—nÚ×uÑl–ôN–”hʼöê­‡^ì´K„ßÅ6”þ†jZéPúÅàë‰8#Ì´÷õJ\Ó,6F+1îpÀy ä}Ï{jå±d½œžÿý´r½.ŽŠaWUDÈëU- HÁdq¥&¶Óh˜þ9ïYcK;Ø|yWÕ?,öN/(2Býü«cÎL×Ö¿´OeÓžÌúÅ-ýx÷øûÝׇ/¢÷!M¾xÆ>…é1Êœ¨§£Å²H° ¡â«aÄ_’{[ÍÓD@rCûÄ RÕþTšž^PpP9‹£±¹äIŽ!QJ½âã íR‰èù\Ô„ÂÝ¥v~’ïiÆ\ï_ÊD/2òpÖRµ8Kýæ2Ež9†gð)ËGlÃÙ#_†PlJ„ób`fh‹pÄžYmÊ"Þš¢+ ›…úå ív¡@’MB‘/ÌÍ'åîÒÅÑJ›.,Òs±X<£ «cüF_î×ð;S¦îâæ|j Ø0õ@~€Â~X²¨+'¨a]GjÚûŸÿøÔ€›º纹‰gËäȳsgË{—-;ŒÏZv~ýeWùKy‡áf"à¿î@Bõ¬}ŠÜÇ`JK”þÔ eŸAæ1·U—‘Bá‰>Cv‡Û»B$^ŽÐÍÄWp „E×À¾wö:PÚMëcH:K»nÊ µà c ýênèуdAeT–Í—ÖÔxiMš&p¤‚–Ø0äËŠêÏÅn•§“µÜZ+NÊ(ÚõÐ^-6èáÌyÙ]´ ˆÆ´[R%F3&tÇhñ¥B&-QZŠ›¢´U%ÛMÕ–‹v5U·½l‹÷¶O·½ë×W¿žµ8&¹!qÞbqÌÀ÷¬sW¶#íñ ‹Ù2Ü`qðòº«ùÜQK‘Áâ`ÇtAUÖœ*×êÇ"‡ý“BÈ0ƒ“2æ3®?¥qvFƒà 
MçíK““hÉé|íÂøX&«­Oçk÷¶C}ØQ!ûõG1ïyØÒý}£ìëÛ¯ïÞ?ö¥6\€€5o¬‰«Û‘iŠYA¤jÁŽX/Ëa ’ €u„à(9^ÛÄóðºL!&Ý·ÿ»ò,/N¾ýð·/÷_?×®ºž°ÇE×(]>n°Z ' l’›¡s­Ç%r_¸çj¡uû-שâÏ2G¸ÞàëøHUŧ2tÜ‚~)Nê»l®®øüâ‚c|lctNzZðvNEŒpìù}Ûˆ…DCw:9 nð$¸Áöeè?ÎðìiMrÏô9E¡œ³âæY ŒæïßΊñút¾!C’·Kæú[EK‘‚ÏFg­A-R0³r±ø@Á¢½XhP 4ÐP{Ñ Úåm‚doèK€veÞ><ãÜÍJz¬(~[+5ý§ââ©aâ¢4‰%Uð€Êž»™ RL—!/öÙæîtMÅy‘=×)I¤¹q`Àαç6e#æyÙ‚¨¯ØÐ ka¿’ŠÅÙï$d?$Æ |myˆÒ³Ã2B ˆ˜× „£R<|¾}{w!ó™s0*·äœœª9§Ä²+Ycˆ(È”„×¹r#ˆ›D!ÅÜ# ˆ”ƒfÝk¼@œ²xkFθûòø0R&ÌÂç€M2u늽Z ¢ ‰œ¢EèCE¢žÈ.Ý:™mVú±`󨿙x­jzðhç#~оŒr“R×Vaó @Gw/ 5J IÂ’@®W2JJ©®†å¨mA•!­&ºquU€2+mºbÜûÆå«ðüÂCõ.KÓΘõu›¾0¶g—Ñ?Ü® Â8üÒÍŠºzZ6ؕǜgŒÙa§íäšÅ)%¦–Ð,^^Í} %–µö‰XƒB³˜Óµ•–3ï™Y[ÍâDˆByÌ€BkŒ&Ô£zW6¶£®³e6'MÉb·-þÚ¯Ì{ʦ)ˆ²IßøŸŽŸ—/g_Y%U7‹œ+¡—Ÿ5” ”‡©ÏŸE¶¸*%y¡´|¶ñ³‡©]mG#Ü3` %‹øÂNÓg¹zYLÅF¡ºH$äªHPq¥ûâ`- ^Ý–™u +î<†p-A‚®î*ïË&ýùbm)Èø$ÚbX°Æñe¯ýD”!«<=Cˆ³œ^U"0wõýš³£o4ÖMÍ&fÓ2hà*„ªs±yæé`Mjæ­ß,«˜Æ>J™6©1wÍþB ,9ÔÞ ›p½½Ø(ùùý056ÚI¨Í—§ªsuI”Azìuë¨kD‚:p‹HÆ®Zy@ÍþþŸ{XA>"NQMûê%¯“Ū™' T¬›?QhúF¼¼ÊÌ™tôcŽÑ“Æçœ¼qu¹õÇ8~³’„[,FÕèßN^ õDRc “=QÚï¸-—• ™b¿Á˜BWï£hIà><€qƒñ•íu¾ªðeE÷ƒ³–/È'kðýþUöw¤oìÃÁÂ’úùÆÑ¾¾k5]Ž9[~²И±9dËSHÄT»Ùä)™ÞR¼Ì8v@áò"‹§£5mÎK ¡qÙ^N‰yƒÂÙ#Ž·Š+aŸ‚_¶îÃíA<-œ`au5äÉB’\ecJÌ—Yåg-vŠ-ÓP8‰0‰C­À²r²|ĶÊIšÈòpmÆíñs™Kè‡D>JROoFÓŘǡjžNµß|zø{–.‡ÌX”ªd~¦:~”ªªPiqãò’4C"xûê”3­1ê%ýí‡ó#Pzü±š>+ìå±X^aÇ%na*dÎ5®æ"W¿¬Å¦ÏÊÌÞÊÖÆ³<Ír—Š'—¯<ü– ŽßaŸNÙB‘x¥Ù§9ý[“‚ûž»Í®CÁ1þ¥›Þ—ud¦·vu€' A!Œª{¼ QŸÁ¤ÝrÎk š*º•§˜¼êI·ô:Sóx"HÇBÿL_ãËéÊê¶ï8˜Õrƒtr½8³EI¥z½ˆiäõ"å&¸›VÃÝ4iýYFb 71òåâ»ýÑ[qöCu÷þáË×ÏþÈ¿~}÷ÛIãöLÇw÷ÿýéÃýí»uÛ8$æódO¾‰'l1Ÿ1¥®¶v‹Îˆ:àÜWj˜51`± OêF1§ª ±¼ÆõH–j‰Ì9è*JÙû 7i…¼ƒæyï 7ùï¼ÊÜ%ædùšÿðó‡¯&‚µ¡Ö†'4.\>NÏ_vF° yVÉ$æ§ä‡Æ–Ö1l wƒu$î .S°X2àèàrËh¼OŽÜ`!Ǫa$*uÇ,)Òe}‡¨½°: Ÿ<ÛÈÿ®ù? 
9ƒ@Jë‘gªæìZE‹-kF«( ‹Z`‹õ6ŠYXhƒ´Ëî úuJ‹ß¨K*--ÖYhïtõykVcoU‚sÈ[<4 Ñ–,D``òÌ=œç¡÷¬nä¡òþ<<´¾`¡/ð´ÀåŒ’ÂØ†Õ¸Uûeçå@bÚ˜Žæ]"-oÓ]ËxïŒF÷zÁ{\œ•.Zì ßy“£eÖ,kHY³¬ël$Ѹ(Ë'mrœ„õ0‹)±«ŸèÑé8%'^å83[ʼ­ˆ—ÃçïÔêZ¼ÊŽSÌ’ uœÜ4 +ЪWhû9Bð¬Ûªwûsùà›_pбA$œá ¥€cgp›æ;$…föTÏr4ÆmŽo‡Eb)Д-¤Gù ê°=·X—ê d¹é¦:ƒó­ç‹2q`Ô}ê°Ã¯²Ø×Ü2bÃUSÍ @±=sA–®«,šØH¤5^b ´Eóx÷ÔÃ{ ¹Gð[wI/Á O‘âeèmãè­¡…Ã?+Y8×r„À"ÝR¯³ï­‰FG*ÀŒÚ‘3IØ´=åŸmVq™ÊþÛ´Š¯}¶L+6,Óš%ƒý2oº ˜Lƒ¢7ªîwý~wûáñ÷1·d¾‹×—&l´»Ñ”¥ÍÙAJûšj ÓÈX0´ÆhÞ6媡Ôc'YÅЖÆ¾¥ÞRkáþ$öøÙ0òâ ›:jtbÌÿû? µ À ¬›$SÏ0œßybÔXOàbœ¼Æ»ÒŠI1· Ô„„‹0|‹ãŽwÙVL,«¼5Sž¸Ép¦´··6:›W>õÖvd¶¼ö¯{â­ylóààJý“á>Ëžì—y{øEn;d3¹g%ÙÂ<ݹ½äÿØ»šå8räü* _|1[ò˜˜˜“/{p„ßÀAQÔ,wDQ&©ÙGø±ü~2gV—ÈbÝ@¡€–ä˜ù•(²º‰üÏüòÃͯ/Ë7·—¿^? ª­k…•Û)]¡õÑVêâ 1]ã½oZ“rŠ8½Æì౜<·Å'Eu¨Ñ}.8y¦CUàvA]òõ{íÕØfðNåïÊ—@ƒ«†–&Œj’tìXøãòöãÃåíçÂ.ùÊGlê\° ÍÉzËwHeЖîíÏA<“±Ç؉½uñ+: ;VÃ߸¡§Oàü=U¸NT)shs¡%ãBgpóÈ"dœªX’+xGvØ|-mqšŠÁ4}+}%‰/Ó^]=g¸¼nTÁùÀÇ2\þøÌUAj­^ï-«°\"ëÝç8Æ÷— P‰9 ÖU]Ò\z5ÝI„×ùà.e}ÿç£Õh.{­ÔoLu&g曆\â·\hŒÿ mw–2vþs_Ô±¡\ÇÖËÃ@Ô%·ë L-ÕW²½õ&,›=žj,ª ¾:§×ñ¤Þ˜ì:zI§õ†KEzÚ,³‹Âž–MÏOØÖä´SUš"×F[Ó%kL [.m)^ @ ”zqq{6 +M RÚuKˆóz( mùdGÇlc}.¸^œ¬ÂÔ •—PÕŽûìiŽú÷FSºw˜ØEŸ¼“xþʼnf"îïì4sÆûþíÕÃýÅÃͯŸVöØ>¿ãz“H˜6‰ J ¢AÔdý§ÏúÄ“äjê0QÌèáÔ™ªÑÃ@%¡#Ìc >Ó¦GeÀÞÚ±zì«ü?R#ìܦ‹AÔ‚5(*túÂ!vè™ÛFžü‘· ¥Û­ °sðÔ`}Jëí9 @;“ ³pj ºt©ù:[œVijÖh<úMš öÏ¿^­i&SÔõ ©4’®_t AТƒÉ}~Ø×Dí\L_x{uuèðëXçôŸ|ø2'@üÂ[ÿúäIÌZžýË›”à›þæÛÓ§Á™MuI£íqNÓBÛ]©°ßÙÿnŸ¾¨÷çýÍüþ4(BåS aº¸{ÿõñþËõоü8”…3ôkoÚ äà0¹t6Wúöûæ_¬r÷Ð0pÚi^aÖcÓdi"ËXEègÕÉÔÏÍSþ“wjéJi<nöR´è1äÁ‡ŸHÒÅÍsÖæ½È‹ÞA#Ñà4Ÿ’9HȤùŒQÂÃi‹ÎÐà†]Í©^!‰k nÖ¨ƒ£üLè¶¹íQÊt1X'›_ àYkÉN´ûÖš±ê¦°¼N k†'E‹Å åRû 2öÑ êŒZ2X£T/%ï6é„£U‚R³*ô;Þ—nO©„®0,ÞWá°€¯VßÞÅ;v?Ø; 6ÝŠñ 嘫(3F«6Ÿ¾ ûTÜAdYp×–9<Ê×àx“/À¸ãAÀʺã¶6ìΪ7öÈ—fX‰yýîæ“©We×ãýÍUä­à¶8‹ÛùZö ØÇ–¯0G›«Xë40ã„ÐÀ‰m®‚Ý׈1BÉUëi,v°Ì&Ÿ©ÛÅW°×¯Í _CˆÁoÓÁvŒÎ€ÌéÈ”RÁ¨CÔÕY€Øyìö{Õ^G¯ 2«cºI%7„)Ö±ǵ£©sžrÙøpõW•([Kûv¶¸Ÿ?^êW¾þÉÃõ£e3ßw¬? 
B•öTêK4 fëO "õP,úÖê/F·ªÔP¼!5EკÃ>ÞYfAä ùÝ—’÷ô›§]ȵîñ9£r¼l»ÃaSmŸÄlô˜À”H ¸ºYjU²WÃô~#l,áò!„Pßgé:Ž;®ç0Å[“‹‡ ›!yì[Fuî,eÔ‘Ná±[`™;º6ÝÄÀ‡À;ˆ>œa[cÓÜé †ìŒß3AºÀ”êK[~sU¹'X»äé‹a€qDÜ“Ä3ˆÑ ñÓ q÷E=¹ÿür÷xYòI^~ó¨¼T‘!eñŒ!6ÍňB kK´D<‘‡: à–´“ÝDŠ2‚0O1d!–Té ¤öÚÞ¶Y‡s éàªÁžÌûÂËÁR7eT‚¯ûÂ2]ãUîÝÚ[Eå1¦nS !ν6}qãö)žÁÊ;Ú‹_?¹Úï®~[çh'~…ÒEiÀuïláÜê™Û6zõò¦;AD.•Aжo•/rvùÉ3uzè^}kUòL²J÷Ú:ñ€Ûä2¥Ñ£·H.³PSOl­þÈR‹ä»&‘Ä×è^B®V¾íjá(;9¶÷&vÎCÎ]Õ,Q´ÑŠ(þ{(âì—?<êÜNy×é÷ 8‚ÜΑ ÝËܲ#(¨÷GpÊDì¥íòpp± ÁEçÊÅ›8;̯Öq>Q¬ƒ>ž®<Îx°ÕêXœ`HÛä7Ê`uìÄ€Û_éã‰O üŒp~¼ƒ&ö’JUŠ„S÷"N&9Æíj×H¶MÆøͨ^X"FèÛWó‚–½vñÕQ²¬eSSKbð]ªXI”9{Óf|‹¡ $ˆÅ&¤¡¨SÈ‘.Û§ÃP/' kZ ;ˆŒ†B£«žJæ9ßrØb¨öɜكC· ^vnÃOˆ»¦ BØ<rDnòˆzÞ„——cý:Lç$@mdSïY¹-¾¢ ­Ü¿MÖ5ÀðN\(N¹бiÊB6.(ÙEÔm¸!x÷5¢Nœ6…¦ ãhQW*ÉÍ€Ÿ å b!F ]¨‡o;*rôn(] 7ùɉ¥?¶cܱ4ï§?W:qFôXL}¾¿³ƒûƒy‚l]†‘ãÿ¤ä (w“Êר¬Å3\5CÛšbl¥a·Åi /¡åE&ÅùÞ¯[™_yqÏëÑ£¨¯íYVĹè M¶¹q)Öí˜ÒkÝnGNÖ¿Â}ÿªãÜ*O1¶' ·(’¯aç”"‰)6ózzÊØsP­mÇë³S/Ñ årÌdP—EÿÛ¥½è§ËOW×o šúËÃkf_¼Bx¼Èd5.^³û"kÒ0ôˆ aœkgX‚iÐük`?£TΆò¯Ó=üéç¿üëëµ:1¼ùeoJøî·ëŸ~¾yÿ^áUŠ—øá¸ëw¿¼4…“Í i(%ú» àã.©ïÜ·íò)êÓ×»¹ü¸‡£ïƒ%REÌÓþÃDÌ-M”jFô_qØ„´O.ƒ´ŸrÏ 8 ±”ÁfD/'ž÷‡ œ…þ{>MÍ*ذÓ/XïÂbeÙòÛVÁšL£m¨ªÛït’ó-©ŸïŸ÷r®¢ýðæÃýÝ­ò÷âVMÛý3§Õ-|ÉÌI­°Í–Ôúy”dr<Ók¦kfeâŒþ‰ËÖ—AšÎC [u§”ôHyá[&u¼€D/Ím˜Z¢(¿m‚.$rãTÒciž[|mãžNÛgÞ3`©Æ²ì%5AF÷,÷€‡X…‰<ô'Ëâ~Ü2„¸}‰=µl–ŒB Ƀۼµ‡ªÄiH&¤(z}AôôÜ>fÇÔŸNVµ Ž£ Î{õ’UæZQ²Ð ŽÞS1`N°ÔÁG°¥­n溆.€(ð.„àØ¾¢øaËaÛ‚ˆâg[ªaU u¹ý¾²õ H8kLèVT"ê*øæ‹¾à§ËÛk%­½îŕƉŸ•X6×ü| ¼ïñŸ~úyí•ë’“<¸‘ž]Óìò./6šøÖ ÛåUô>ç’]>»åDÀ;j·¥&(-SuÞ gUÙñfkÊÕÖ4 ¢3@­¢5eN{hâãÖÔNžÅŠX­Îœj,”ÔC[&k–ÏÈî—7=àÔÝ“t[…©àw…VÃÁ`íùc̵Ù(ÉVmÈx;L(ç³Ã*Å¡JñŸFŠ…I’mx/­¨šüømõ½ Q ø8!Fõ¬~°¾ú˜3tz{4j$,‡O/ÙS³†nA².}mRO´>aÓI‚E†Ç•jÌsË%ôÄVlbù³±¾ÔXßK; kºA¸F†¶r"¾.'Z‰ñ°däÂ’íù*–Œ¬”¨ì¡Æ,¦ìâ4åÄàvàÿgéŸ.Ÿ°©˜hgœ8øsû±ý w0<:‡ö æ–©Ûó›ëR׿‰Ÿ ÑKf¥‚mñrt±Ç^à¯Ä=Ñ&~вõå—ׯÜnº£”u‚¡º„¢NìÜ‚\=P]LHè¶§1¯¸* le=¥ÿæd„¬IF°~¤Oåd„øÓ»fɱuq²ª\„¾•­ëªF˜í¥ŒS^4S2zÎ'$úèðœkž9|‹ìš-$¹ŒlÌ(¬üüeAâö,ÂÊß”9(«ªàZã;›e¦íyÓX¯ªˆ +¾¨ª4î E dûÅr»Ï'«SU¤±“Še:¯ª Þ/C*]®¾oŒˆìÈÿÝH¿2ãPGnÖQuü¢PÙE9Õ}îà|fÜ'#RØ%ç|ŠûíÖçô²!6X„ࣧ(|·Ûs^G5Ó…¶Š¢Õø˜€Š™ŽÇÁ[Ц“¾uDÆz¤Ø^B9z´}Oæž1*9Š”ÇbŠßbíí7[šÓ‹Fì½U»"ÓØšÑ½*“‡ÇýCw¿Åeú÷F­îÛßýN_ø¡½@„1U³ˆ-@£à¼ºeλõ¢’õ«Ù½ CÌ>­jÃ|¤b¢! 
dÇOäéR Ò·N7Wƒþ}¥l{6„Ü«†0–ÁŒÆ”l³¯…>!Ì6C˜Íë ’j3&¿ªdÔªJþ{Öñ?†õ|žœòjÆö¥µW€Ì÷lKõ§±ä'²OÎ¥ŽF.asH/Kó®K¥™ÒX÷^Å´ ƒJ-5Iϲ6p)úD™¦Hå-µšiûþSÉÒ*50”ƒ‚lùvA¸Nkå\ŒÒª¤Y'ª–l­ ':svnE3Ì›icúü‹u®/y¢±¢Ú2ˆæ%EGC‡Zê2ul~Rþ“Áº³ÕÉ@Yc¶ùiA“>ÍOä’®ów{á`¸•IÙÁÞ©z½+L¿_²»Âl¾œûŽánŸTûöìM²8]£BÔÿŒìd¾ïáë,¼Ïw÷+wJ]v<ºl”ë¯>Ÿ¿À¬%šÀ…pfSɃM¥Ñ9Û¤¢Gö¼P~Mè*ÄX%Ä~?!S·Žæ¸‚ɰhˆ¶C°)ƒ³Y´›Ï+mYŒCÕ`lÙki#Ôõ³7iA%EGý'ŽTR13ŽÊñzÄìøçƒwÂ×vX_èÌê/ì™ÈR.R0Fq„óêÏwÍuÖ¨¿à}£ú›¥y(«d„+N)Éž ƒ_Gèž>º.¹–Ü&‚ üÚ--[°{² Ie§bF2@B,+¸”…§[œ¶OBÒÒ7?³†Ý…<‘RV\L>ùÓƒ`!a_ðèœ)¡÷ta8Q¢ ̉‚ƒáiãÛËÏËŽ”§Ñ¹·—_Þß<®sô§‹ÁI‹ö ÑÖy°`S¾ø4}úyÆpA(·E»T̃ãl±fA‹.ºÑÞÚ;Z×öÜCèÒhÝhtÆìC`õxcØãv¼öþ¨ëp,AUð»>UûUì[×ÖFèLäh0G4RiÑÃçË«ë,õŒnGɆ)³¢¯^¹’ƒ<É>|•rõv¬²ç“sN0­T®§éØO­Ú`‡å"¸€ŠzÕcÖç\¡ÏiÓ“Ìίҫ…KP#˜Ã£j¥3s®ƒLLÖ•Ïy½Ê]õjO'£W)ÔïY-k†‘ŒOC:•¼º ‘¾©Bý~T$@ –‹†Ùn ù^U¤2SØÏqV‘Të.,g‘ÁdèÔ'ä]·Òõì i䇫Èàó- >* Âé‰;ÇáO]y‚ƒó¢ôþº2‘ îôšC+°5¾~ç¸^Íí[fÌ¢êáTHÖôOÔ; žñDºÍ ˜I”PTÑI®˜ðÌâ%?S£—^M ÷ϬWÉ׫³HŽ(*›2cÊ׫¾¯^e×9ßÙO; å3Å!…@Áù02ö¸¾º·Nºg“uÍùíÕÃýÅÃͯŸV:°`Œ#Õ-IjZ?ËÚ·-ôê ;d¡v e·vî½®ú‰8`Á¢Ý•íD›¯FhY‹©aƒ…ó¸n#È<ôÂçæ‚¶Ù^›q¡2ûÅqÚ¯r:=h˜0›øy"Aîë5@õsç}Øáà~P%s”,â°:/óV‰#kB¸ïü Æ×–7 ¾ µ‹B6Ž» d-â h/{ÉFfø;€¶eÊš›«é³'$ÏL—î:KL)Œ´ÄH-¸Ÿ>%"ï¢ B®#dO‰ÄŠ…2”J eŒ¨.Ûñ» ['­g\Î!!ž8I8²‰¦lÛ>S×á˜DUˆÀì©;"p½*ÈhÆ4&D’ ^††HSSáåÕLÏj¨“u‰~CýL[&+‚%ÕqR#Ñzjb‰"âcÕj¯bï±úkÙ`iA¡^íê{>s €'o-ÿÊ¢îE1œöX@ÝóÔµ7Oªº½£úÀ&U1¹‚n8¤";çbž¹ Föýõ©ŽáüðVú>²‚§m²;qÚ(DVòjÕåÓ#¶l„U+ލ bŸ2û¯·™b ª c_ž³vÐÓ-ÑûÆ,Òâ4eˆ} ¸“_,ì^>bƾPÚý{Wºɤ_¥áß[)’q0¢1˜_ ,Ø}†¬ÖŒë0$µ=ý`ûûdÌ*I)³Èd2Õ‡gàñ¡.±È¸#üÂÖüköÇsÙ9è*9ð-XÜ^=Æèj„[­D¸E0œ†òÜ&u{OtR(€ÝGò/G«€¸E !8©Í=~·}Þ ¶Úã=ïãÆhƒækußOúm01"²†¶f=•À¶1ê; Ûvσ^#ÞzÁ…¨ÜÝ74AÂ%nÀé9ˆÜQ¶)áè*õ@íßÍSê׉øcBäVѾ§´å‰h„õè¾YŒ\͆*ª¤‡Y2¥PÊ¡Jv}J›¹¶k‰cXä´Rg¯sZL[;-V”ãSH,ÝòÃàþDP¹82D¤àÛ¹jK„7Oò&¶öwøûÁogÕÉ3gÿ8¿z4Áü`û!¥;„ÞO”ŸZàÿ°Ÿ?Þ¹mñƒgw ÖîêÓOÇ’u̶ívIî>'ZÒh—óDP ÷À~SÚ‰`KøG®â127@’Ä, Ídf›\9 ûŸó´ÑÛó[£”¢|~8–îÝQt¶Ę̈ÛË÷.œÍ2ÃÂ÷è¶{ [Âò²&  ª–äµe·1“ÝÒ‘Ëè—šrJ‰Ì>IqrÚeèSÕíÐåù0åäVˆo.Aý«ìvºÆªìÖ›Ãá`zvSÞVBY+üP¥NM‘MáqS¨ÒžtAñÚQ-»˜ižê¥jàÿ…ó\ÂÛ`3›‹qEg©ÒÓIn–ŽS;kðÌ¥˜N¢… ZTPÈBŽLiÐ¥ìoRiñ™ŠÕA]'Þ8ª3:Ÿký7Ny~Û¡úTfíÛSWì_P.¨ø,ÏbÀUå_[‚}“Öš¼Ä@¿¶ì'®z°U˯œP,&X1:Ee<ôt)ãËѪ&[…švaâ/_-r(>½ù=,XuAùÁ¿+t”u ?R#|w [{²&tãuj6j+²…ôUP¶².Ï8™U(™?Ýq´'‹Ç<&öó¹ûÀ,h&P–8¼×+”)n 
A“È̪9‡—Ð}"(w™E^WƒOû†jðUÓÉ_×ßÍ ñ6õ÷ªÍLjïì”7«½Wmf¾în‚-&8!¬Ó ‚ÍuÃç@t“ Ç …w Ž$ºDð„ºb.x‹Î5 æZêbé¥v+cA_ Fr¶ÙPôR–Ÿ+Ý”f‘Ò&«¡z0ÃHqêÀ­‹èÂ&[Êb¹Ñ{#lÍuļî‡Ú]œ/Šÿb?¯:>ɉ[¥:¾eLYpYÔ;í½UM¹žÑb’yõw^bIã÷è(%{¡R§p‘JX ‹4UÓ¸õ5š5Ü-]¢É1æ:çÌ`ZæxiÕ:².LTÏï1„¸ñþëÍâò€°ñk'¡Ÿ:u‹C¿Æ¯=äEpÖUü¢Üþ—%+Ñ¦Ž f®öÑw0I¬ºÈ‘pàÜ"/8ËaDZ™âF *»GöU çʲӛ¯þÖâþòÓ/çÍ-|Ø”ú¸;õÁÉ@ ä:Ï÷~•“vC†®¢dE|†ÐßG–HXð±É‹•·¢×+,Kêì0žŠË‘S¿@1.Ãìs• u:„e£QXЋd[# lÖU*,•¤M¨(Ñb!üNG)íI#*V)qCe\1"±wø‡ðÌ%< Lûbå!åæÕMŽß)ቒ/Jx-˜Zn=®nOf‡ùðPÅa£œ—jÏžsâ@è› ûyTJO$¶³‰‡[ô—{÷ûíîå#g¯ÿs·¿z_d1É»yÚKzø®«,&kK¨Â`¦Ú”aØÌŠû8†–éñ(Flb@eûˆÙŽ™Éa;ØÇq×ÈqßóVmmÿÑ­+IÐ[EÌ‘ŽïHÀéà,:Þäae@?R´ »ë‹õ3=grŒ9‰3·x•6Fh€"c­9Ÿ°ôb¿ÉxõŠn’}6Ûcd+j/9ïµ7úÜ”… qz¼aH^Å¥gÍK´W<2ªäÜ„K6Ö^rN3ÑÙªÁGÑg_Þ™!ëÔĪ ÆrÁê ¦ÙGϲ3p°¿VVç·¹G£èùP¦ÜêÞ¬ÉÃã~ÑáW™¶ \%áßý^äÝ¥Nöñ3ÍÐO¨Nð!ZÚ²*aL—-‘‘Á-¶¸ki×3µda‹)¨¢õ b±h#³½WBõê½b“ ^b|âªÔR‚Ð;´—@6³dER/§3KŒ]/ÄWÍjVä% O«ÍÆ,‹‘X%®¼\‚m ²©™‹["ó^ßß]_þ|u›´Æ2ø/ö£›Ïü¸vœù8O~òhš¾Ê£6¾Á{Û—%KAžZIÖµ´—ö°Æþ–‹æ‚’+šO ÔËþŠ_=~q/–íºUmGB·7ÀAfj{Söµ‡8욵ƪc„®þÂw…u˜á¨`¿&6ɇ-æ>XD%ºMà ²Ä•m÷IëÍ•íÿeY¬Ks#«h^4²iÚ`S°B hk«¢RG«šã8tR²ªÑñá¶û„U•8f£Ú'Šô±©dß”J¹Ó7°/_±K“ćÿ¸¿»ùðóÝõã‡O?¿Y…û K^ºvQcÔ­¯U !ÛefY´+¼I°äcë1™À˜–Ï€¬¶'9nÒ€hiš`{6œ–pZÉØteÉû¼LëõléŸ5Ý»!†HR*ýÓÁ2ƒ“£÷'Ïéùä`=Ùšêdé¶¶¶4(ƒ \ §®ÏNsm\¢i yH±K €kéµúQ%ÄÁœ³E³¦ÅAÔ\øÉ wpŸŸñr´šVzÛ9OZ]ÒÕA„Ÿ*öŸ¶ãÃ7`ÂŒ LÈ !A"¶ÞxŒKÄØ4ì"'hQrM˜1cfÄcÕÔ$ØAÉÊè`$ÔœeêO‘Þ§'_ª eªí=ùK­Dž­‘)P;[Ó{Fß?¦ ÑìFvTßWÆÖÜŽåã_þöŸ™›;ýðÙ6x{~si¤NÛÝ]\_ùR©£öEïý‡¿~xü×íÇ¿T#ß,*i¾Å»Y†w³èË&(7–ã.ú®¿Úò¯=Óèµ]KÈ–¡iH“€¡íj3W£™‹d&WÈSGS›&“Ÿöé¬.äç¢=Ÿ¦ân4ÅQޝnF§+¬ºb;3F§õ7£} `ÿ`0qwî;Ir #9Y¥Q[fD—L™8y¿,7¯r Ó±NåŠùëÌÉa{<˜´]Sš¿ Ë­`u…Òà¦hÉ{*Ï¿™T˜Jx·ÓI®ºï9ÉM­,.«X¥ÔÖû#BÂêÎ5¨êSK„²ê‘cWÖ=‹à²@/G«™êSçšy·eº•^”x\¥[¼A„ š¦õ¤×z?Úír"˜©‹ÇUÊÂÜàºÐ¡€|sÍû÷–دcಎ¥ÔuŒ)WÄ}¡H0ÛtdSX¿LÕråu!!“ß«O“ãòßMH=¹•Á5«l(˜Póñ%=(öi'zóï7 fˆ ‚ëBüØ+ïRwYtð5CüQÕ—ã $¥‹¬Q¸³1þä´ $Í Zd;°š¡…Õœ£…ž¼´5à÷»ëÏ7—çç¿$z=œMù^ú°[#^pÀôR$ƒ$}°,!;†~B›oÛlÛ@èu•Œ £76ꦀŒ3XW¦ n.û“ðÑ¢æï<ûSr@+³?¯M¯QA|çVg¡6û³„Þ¾ÒŪZ§eWåìO$ÿ¬ôùh5ï–Ø”]âë‡ ÓE²ÃàÒMDpÈ€‹Ê˜Þñ0Ž,1×ÚgVœ°Ò/ŸZgž1˜=À@e I/w¨( ê(Û´õL‹ÆÙ6má¼§E¹‹Z¸j›XcœÕz‡Ò\f2ŒÏ°B­×¦u“),Œ{‡Áÿ¾ž»ŸŸ<ÑKb…·í˜Ú× )#°^KÈǾÊÍ–œ¾ŸÀ6´$¿•Þ€MÒÛðÍSQß*Ê _<'×8xbKà¼k¾`NKqKåÅ¥ö¾Ô¾ÓrÃ,™æxœ hQ–ê`Á™ú“ø 
‡ÃfÑ}§§)ß0§]…Àôz`ýt‰uoEÒµúÿýoedÖIRcJú¸„"èuu$Ε‘8Xè”*e"ZŒ±(ªYج—“UâiWí7ªëUl;í–ölÛ¸t¹'cÈd¹vdñNÜ»ø% _Ã/Í E:”5w¿Ý]_]|™~âЛöÖOy]é§šw2õ[$ëýVóFVù± EÒÎ8@´¥pÓZ¿¥¢øCÖ}ÅðâúüêæáÌhö’´Vþ’³_µcoqï?_.¸d‘ "1§´ÊÁ€k©õ„CT—>!ê‰BÀ E닜 `L~D0JUã¡ä¬ŒˆÙfB¥Åü´m sÜBoV‘ %{3£³cìÍìÈñ èÇó*}_lÝPsýõƒ›·6!³<'ÆaÏ úw6ñc‹oÚH²‡‰?¸È×W¶Ë&b!ÎS˜=iû+–=…µé:”5˜NW x©¦J[£È±]MìEt+ÃH¾œ¤á0ÙKÒôx"kÛQ|½]%$ò–œi»Ž%ì`ÕCÅ»ës†èý’ç £°ýýŸ&\P?>^ô´{¸4›i9ø§x*çø†Ÿ#xÎuñ£ÿižþl êWÑ7.⦆Lƒ‰˜+7U9znù¦ÁÄb‡ò‹ËÿNd>ÿþ_‰ÖÇžnw”¹í2ðݱ¯Ûe}]–'<ˆ8"l¿±_âĨӯ½/Þ‹ æ< 4wŽKxhAVHÀ.‘9®¿€Õʲ¨1&;V°øl~ßÂH=eñ÷'÷ÙûµÉÑ*ê>i[D‘B5®Ìžq’^lµJï~ Ú¶»aOGŒÔ•ެ–¬“æqeŒGýê>ÀïPø9T+~»>¿½žÞ¾îdûíîÓ¡do¾5OwõûÕã—‹_./~Í•õ?]ÿóöîáñêâáìégã‡wwŸïÓƒÆû‹ÝãÝîõŸí÷ùòr_S9ª'¥YËêIßü'e* ·ü~å›?à\ù+ÑÒK æÚ ¹-}ppÞL—G¦¦kÉ<”ÁdŽì|iÒE²9b‘n8m»í°œ/ÙONS¾ÆƒK#_=|µÆª{ŒÎ"?•k{lÒÑ$€å+ÚîRv±Åti³Ä1EÇ›?Ìv&“ÚR¸ÿ2“âìüó§«Çeéµ'ü©™àeÝ“·sƒ*Aÿ„¯wam®};qѧáêåJ×b§ôÐN}Â49YM[Âg÷QEdz¡:]$ÛÄ–½˜½íª_ìMó9Í­aôÒÞÃVP‰–Ú ¸9!ñì@j¬µPQJB¶f=¡H—æc401Tâ¶5@±l•µ ÛŽ%:™bfúEb”@aú…C~À\ïc\øp¤ÖìoÊËm¬ +jž—–Ƨ¹½ë—]EôüCgW×w‚ü|n_|qyŸ„i7òòöü:—SÉ÷’S<Ö$“Šñ;ʤNëTþ”ú|h•êo2ŠlìPÔ·¼ƒº»½2™C.ÌQÞ=Ø?nÎ,êúù²tÿwêW×5ð<«ØÒ@u«â/¤–ë,tHhu¸í„Û¬ÌL•#)TŽ·Ñç\_ŽVoÛž$áƒ, •JŒ«Ð±·­¦[„J1dÝk‚J{;Cw“šeïþd^õò_ó^5~¿^ur¬©Wõ ßµWë¤WM(!§&*טj-UI Ä¥ÉòI_x¢óë¤#¬íÿšÉ¥1å¿Kéi@‹ƒïóÙÊç Åú$Ó˜fOZì²ÄC”ä¥ÂClì ŒÌ’A%O¬ˆ*šuQ»‚i*®~¬ûõ¢ÀYþ+¹€ë"}ôß# 'Gƒ…ac¡p»8¼¦I¢ƒ:âÐÞôµ_¢ X‹x"€¥“½ÚA0rv3±L²–pîâ`™_(šMuùËþÉi;˜Í´k ûwÿõfS=ÄUÉkºŽßÔnŽdöüštd5¿f0ôÅq‡Îvó4ÊÁH[öäW±'pÿá1JpôŸ(ÿ^HîÏÙˆæS¸ѽnÊ pý=LœJ MyaÔ½7Óûð¸_tøUž~|è6x8ûÝcÝüUb± ÃN4œ ?1¨_å–,%l€°c äÂB¯ÔJ²^wiI0|¯±èÇœ÷^¥èÈ äÆÇNèÓÁâŒ)YäÇÐ+®SN’ýX"sfXÚÈ(QgšÚ,ê ¦Uðkê©Ú“­±³,å V5£(½ai—ˆ^xË–{Gô¯Ê1#©ÎöMç©~ópw}¹ðYGvÚVØRn™ú®£:qnÊ]–½ìdb¨‚°/šÉÔrzÊö^ê4ïOŽÞ#ÜORˆ¸àÍ÷ži(B«²deð›É±÷æxæ‰YPÇèÂ<®µmÏwŪ„XÕq€äk±ÍfU{–iBˆÈë˜aƒFÏQyÓœ /y$˜:¿ª)V㉖Ÿ½c çý·…˜œ/0ûTÚ²À%” Ì$.¸â£ÚÞ˜'4éR`¶m;ô1,¸‚$çÈ6±&ý³%ÂÖf£óáQÍ› ³Y"2Å?ñ8Þ‘Þܳ¬+x õïqVÄ"øòî¶•ÊñðõƒðÌçÎÜ쨋kÒvr¡å"O€Àþ‡Ú6¯‚F½ìhâ½F¥Šçeâ(Ä‚M!¹N A:XÑ´iÓ`Xðö, CpÃ*# o VúÑDäpÜÇ¡HCB{w%¸ôÐ.½®éÕx»tø]¥˜cfjEà€k4ÛâGnz8dk²È ÊÝ´ž` K ‹¨¾<˜mˆËå¹,hë”Z=ô>•þ‘-¹g*ËJYñ£÷[÷º':ûŒâsÂóˆn%jk5> ×è;chvÙz‘’c¯YºB{T•–H©ä·ô"~ùVõ"^ìЊ™3´(»‰OK‡ß3ÓtmL“K@¤×’ðÇùõ™ý?Û\óó¹®ïþøðOçç_n/¦u 7Bœ›‘š 
¥ý‰¤â´5ž=-Ñ4ÉÞ#hzê±í9lÕTb&gw$”j‘§Žš4’ž@ >+p¶+øå0å×°ã¦Ú¶üôÞtuƒ31 Z‡jú,$±±ô¼¶ôŠƒ}/'ä¡Õ½âXÝ+îÒí“:¢T03HQ*²o3'G«ê·mEç‚pM¨ñôÝA¯Ÿ–ˆ¼-bÎHGt¹^q%!ôûÐ%Ǩkg¯pzœn䑟š<õNVžvÛÙ¤s›žCm·±lïõ“ø+‘éè* Ò€\88SMâ†5²|üÍùÅ/æ”÷UÈÃ÷•š[¬Ô:õÀ2“î½®cååNÛ˜1“8†-ˇæƒó‹‘ZS=º9¿ÿõòñ·kóJgfÀn>ß^=~yV’‡e7íà,%·&Œ06¼ù— ©£<.íZj¦Y[]§™hü?y×ÒÇq¤ÿ C_ÌaUfU>¸ Ÿöâˆ=íþˆ„-XÁ)Kú÷›Ù3˜š®ê®ê!EGØá0%ötçûù¥ÏÅÊl5t¢‘X Fr(#Gqf:éNbã º‡W‹hMzæ –EÉŒ§cKÎ(H¾Íw¶–ÆŽ-¥¦N|ÈÔ\é²gyªâÊ}à‡G`â-¼'øµå5÷žÿç)29&Ÿß<¡T-h8ùEÒÕ h0¹¨ködBšš.×/²¹«‰6ÊæN²¤ZÈ‘SÕ䢔±yž4¤‘oo4sZ’ŽÐÏ´uzèd.…ïù”!ë|z˜ZU‡Û^GÓn¶½]†bSÞnbzýÍ_r]âá*Æa,bÕ˜T¼©¡Mº¢µsòËÂaô6Ïh ]5"%Æ,-åÖza-')–[2ưf ³.mX-šÝºð&Yö7©_ZÖì³£9ë%z|™Ûzúa}oFû7åàxó)B;‚â†÷c6hF“Ò5“‚ÌA/uE¾dõ&~Yv·5£g¹H‹ÑËRF\}úØFozë@Ü8Ñ4Pe(¤ž_˜(„“þÉö½Bx~¢>³jø“Þ È#áñwb”ÒŽbb¢Ûïg€~  ]…*Ü['ýæèO å$(\µ”YâaªsÖRJ1ï>¦Éˆ0{í)kº´¥Ô˜·N¼%FÅÂXÚ±©jmô“ÆŽ~~‹ƒô^8â$ìa'Å8¾I9îb€Ä_¿±týþç«g´°¯´žüuûJ1óªÜÛ2U"›4–Ni6ÊØN‚‘2TûJ 9Ìl+=Ò¯8fL g»ìµaÒEa)cô¹ß° éR¢O"Ãx¹í>ƒ¦¶¤çÞ“ãL³j‰ Ǫë!4¸î¸ ¯ùHÏ1•ä#\¡ ~³Ušl=Ðæ•UM¥Ý·cÐ\(ì Xô=Òc[”6øºæ·3rV<!å¾P ‰nÒñL~¤ `Ëüj;Ýß}¸þüf¿Móö‘Ào熲–¥[”×3 !È×l+LjD¨KÛÖmdË3%ð æ–ž'R5Èû¡Ñ“åå' êy&p$XbÀh(…¸}Óópé¤ééM>³¼ÆÂ¼qnʹdAÆÕe!J<;˜0\ÒJÄËÃ#„aüh_¶œ8PF• .³fz<¸jú}÷ËõÛ_ÞX…+)ÝX]OÉyó9="Ä5Á2`b?E³dÔ¿0Ñ¿ n%5uS%ªfAÙ!Ëg§ë&²¨«xþéÓFýIÃNS`Ÿm3kƒT@§Ç猎R0kÎ /ÉwÏ'üOŒZX_sožï{6Á–Mï÷üîÑ|¾% ÏæóÿvÎ^äˆÉGG»ì…Ðð,BÎeÝ‚˜Ää5ÐÞ%ǺH¾)_3*ÅñÛ£iX ²£à7Ï%?¢k?,+ú57Ö¶kn­bPµ>9Ä ê8˜Ö †nùu2:®Ñ§èCv@‚ékvË]~´IáX³VΈÁÅ¡£¯Ó¦0/‚.ðØCt"lí±Îxê±ý“2i— ¼µ<ÿÄíòQúˆ«Ž’zKbJ©{ó57n¾’%+M1¢a 3NÅhøèË£a¿ÊœäÒºµé‘¤G:–IœIaûö‚‹¯–½|•Å׳£y'›¬Ú»Ézö§ŽC_ «©géÜ®)ì²ãÓCÆÕ»¦Ó#˜âçóü¬Ô ªúê×Ï~éöÚë¯ûzjÄrtÐ'ˆF¼/¿|ûãà,¬ØšÞ> +þì³,>ÏÁJ•:+L9’eÕ«ëàÓ#¯ˆ$½ŒÄ«23…ÓÌLCéž“ùdæÕ”uiœ«ÜkÎýé[ꉺÿT˜5þZx@WZ6é ©wlLʦo2=K¨=ö$C ì‰x—qÃ^Ô¿î~ª5ý_ÙbÛ¼‰ø ú·îDC8¦¥3)½f¦ &bµ'w¹ ³"&NæJ¸`N=šXÕY„Ò,ô†4™\X½†ÐÚdÇý{—) |#¥ªè§lµÔ7fËô„êiÃÃÑ—Y®• ûŽ>¬)kðQUÉÇE²ãg‚ˆ‚•EQ‘%v6Åbè²³¸åHȃN…ýUõEÓJॆ²†®°§²ö̆ôØûóœËÞcgÓ–½â½=å¬+&’*§ìcßcwåÏן—>3ª1ä.é˰E>(”‚7Ìá¢+¼'#Ï80%ÜOÿÎ2ÜšœÓ¶lÈ[Üœ@&GHÞveÆr¼˜‘ßÿáç«ÛOªwMž0üÈ}+ë°LkÊ·l!‘÷`÷Vö2©_7V Ö)E0éÂZXg±C‚zX—sñòç¡ÆDëfšÌÄb\­SÈ*Ô§á²uµØè|À{À§Í\‘96mk3wlk/7ôg9ki˜¦¾<Œ¢¬03X`¦iùê]ÕÞÎdé-ƶ/‰79 Ej…·¨Â¹^x; Û¾´ GôsHoǸ¹Ä.HLˆÐe8n I¶§s(%vöɪ,‹Ãþ:v=/|øëEgåÁ„R¹/” 
†û=ÓcŠ|¡áþŸn>ºŽm9ãŸõ|DN!ˆÜÅzqmcLD®ŽÒmñPü®–Ø÷4–OJ!¬Y² $,¨KßÚ{É;NÀáJ”ªÅP¿d˜«ÞQ‹÷3i2Ä;ª.'AYàÉñt¥/¶Ò´æ:€¹»ìKaý¦Í: 6JN&ƒ-)2bˬb]N°œ]=QjÄþ­¿¶åWYÉID{½®"*å Ìâp,Q%DíŸ/¢" Ýh:|yÍæÍz÷}–Ç)õµš7ñȬí~EÎEn¶öÓ‚Å.®¥5X"è—ÙQú™ÆLÃ,ŽcH îö…ê\C)õu޾¬kþVÓ™n¼]Â5uKí\güÆíܽìÇ"D“Åáì i=íܶŸèr;?Sì™ÐëéÞ¼»wºåšVmù”~lª3¯Ýë)?ûü&‰…¿÷IàÆ%¯½"§ÓÞ•Ž$ŠöxnÜ:Óxa ¬fe/$¬—ÈêÙƒœ©Š¥òlÌ5ǬÌYø­Ä´(ËÉìºìrJ[¯Ì9Ë+s”½§+'åÃØ!¶â¡œ®Ý¹oÇ>q¤.£“qÕU.÷]†-C:¾¿ûõ˳5e£ˆ1èóëwW¯îÿx³ÿŸQ%¼t—-Ìm˜Ç$•\5æ‹7µž¨1ÂhØk'¿¢E%›š,4 #ÖðX‚ˆª0ÄÛ¯v4éËÝ/×]X®{=n¹2¦msM#8D/&䜆ÞOj¥ß(ÅÅŒ»‘´^|÷¥-¬—Z"ÅÀ'r PÜIÔ}sQ—)®¤NÅeÝzÎÅè 0çbœb¨×Zq0‚·-ð £JK,ÈYF›ô]Æow!ñÎOAçôý¸D}ßÀ?“‘âNCHpÑÃM•Ň»/gÖ–hDÄ./IyÕr€ƒô‹vÞ®7iØ“ûr‚ëY)ÄzöK¹sD—Íèä•<±ßZä9˜ÀcŸ™”Í{ëâÏ{Æ)NAÑn7^ds+¢ƒQO«Ó°´1Ðdœå¸ ›ê²¼ ƒ Prýoúöx!@6§8`J £˜ë›Deìƒ#íµƒÇÛ‹V†ˆ­Ø ‰QÉÒ¨KÝÄͧ«ÛÝ! ÜýþúyøÓ÷ï½ÚñîÃÕÍí¬óhz@_95Ä]òž3Õk#É2,­J—S¬#©ØkŠ)/“ œ¥Ë¥¼tÛ Ã2Eͧ»,®3YŒŠÒuÖ±ùZmÛKHºÚ‘,‰ýÏs”’„>޾<±:"š7c¹s´Ó€[FónîþýqwwÿÏ7×Sõq`ÞKŸÿ‹›õÅÔBÊÔW ^S¢ÎIEïâÎXìÊö˜i²:™òG®ši u0¬IqŠŽþˆZÌô¤1òÂÈßkLÔW ÓHÛi£3êiäOF­þ™dë¡”®®×¥lË9!à`ÿÉ]é‡8~Ÿ1î€3ÈíwKèôòKb©ËF‡ñµej‘½P÷†¶4ohÓÜÊä† måPŸê/ØÒ£ÏjZÏλ¼µ¸ÄTr ™cî‹`‹­'`&©ä¯£&SZoÖV‘Óùމµo-þThÌföK%¸m—uØ™æ=)U¸o,Œ/È+ðnBvÞ´!rØþ ¸¯?ÜüãúÝï>\?·ºz÷‹wNvÜtúp÷î—e=k™mðƒèÒÉñ&…Œ1XH°©ç½útsoÙÀç/û‡î%—îÅþü;îì…??3;˼°èŒ:"…ÎõÈ[Ì‚Zú!Y.½ÇºŸ 2Êlû233ÄΦæÚ7xK”vl’NßÏ~¸¶{€t4GoÞß}±¿9Êæƒ˜ÿë«àŠ!YŒLÑW|7©ófć°ù(¥$!r±+#Ǫ{’yÒ¸`º}}À{–¡Ä\UŸ²ÃšZ?z+““ˆuã—aç8_ŽÊ~o¢Žé˜“ãÿÖu;§xލ1f‡E³’¼K”ÛãcÒNYXaø!… d‘Äl‘JT<+Kº<˜v€–œCƒÕÇÐ`ôc©0yDšFßrâh2};R.ŒþËk’ì6%•\Ú<¶¯e¿Œžçw›TÆB¬7E{’ûvK ÷yFªÎ^goa¤àæÞC!Zw…ËÆ&å¡ {³¡Óy:þ¬Ûªþ,3ê>ö…b¸I7aÂ7­j¸}®Ÿîïþe¿äÿàÝÏ&a÷ןî>ß)oª]ÐeÛjê¡×sg$&«¶‘í—#„¸4[ȃ™AŒ… è[IÏy0Ç\ìGjÀc.b>ÒtH8g:•-8îÛ O öA¶^Hv§’Ó6.A¤å[2v Gd|kæéœŒL¾¸ ç_‚ž6f¦ž&Ÿ¿µã†…ÁmôÒ²^ß¡˜«RF²2¬àMF ncØYvdÑf£4µï*èJxÑá"‰OCDDA¸2ÏEc¯)}³·xŒ1¥Ì]3'¼ùt^%-á±ó.!‹ÊíãX§ð‚§zn¿¿< ×èy…c÷¤kAÜ{Þ`Ú],h ØPGY±¢ pÐ!•͹©qEN°`RÀZHL$XÇéá"HêUÆ„ÄÞ¸“˜— ô@ÎØ›GÝ:(v*ÇT ŠYOP¹(2¶’› YÌq„s;; xž¥j½¯h aM€‘9…åy³OäªCs0Ös¡j€†èT‹íÑçŽØ3öáÇ`¾{QpêcM}˜²õ:'³¨ž.…™íÛ8rÈ|8cò<4:žÜ>L”žåS2;¯Ð©“k ŒcDr+Öéííå“Õ›ÕjΪ¾ë,H u§|0r/½òŨõ4ˆk,ÚÏSBW)[ÒæiŠß:¦ÓZ•‚î8%ó·½áÛmK1.½á;f¾ù,¯M¤)R_FÇ[`¸Co®»µ¹`<<ñ 
­“Æ>€SÉÀ«¶ðù½[]†5ØFŸQ6ÖEYD­\¿A¹ŠXf´*oÓ?Òb€…ä4Elß=Âo’ÅõÍD’ñ@Vaç“sL_O`ãîZ(£&5‘^§G$\4lÉ CðÊ&#•§$¥šqgZcJWUMÀ< –²'_,ÎTè/{y2úüê÷w·¯n>¾¾5o~ÿÇáÝÓÝçÊèò –ñ2]t¬]éÑÅœDÆ+£*’̶ä-•ÑÁú~2zša«”þNý,>yˆ0ìoC%Î?¬fVƒææ¸*•IHR^ŠstSt:f Ô èBó°¡jÓœ'j®Ñs{KN2~yEßÃÆíg%Ô-°™âJ¹Â'ß/fcœ€ÁcÓeÿ¿%o3Êæ¼µàtÞÅÙþQÔð… ÆÐA‹¨M9«¶÷I/çNÎJÂT» .Ià ÚɲڋEüv?ýñåçVŒÙ¹&Â7¸æÚâÊ?IDÜt^}O¬Q}‚}tgö ·8Zɹêhy®â´OðHœµž–XÓ2OëoŒ}ÖX··Æ‚T¶Æ–&¾ìíï<áüd Îq•@‰¢ö(8¦UÇnƒWw£Üh#ÊMØa²¿Ð+ KM= Ê·k?­ é&Ùk¡¢>»Õ~üÃÌË‹ö<™ FÖìÓ;£„ G 7¿hèôG.¨1LãJQËû Ú†¦ß8¥Ô¦¼£§”:/Æ•"ÆÎq¥Žw9ž[bìŸ[êx•sLignAòzÔé$ã/̉™cB“õöÈòY$ùóy¿ýñïÿý6'eÅ=š™# ˜Ñxõ«½àÇ«Ûk£¼¿îSWÄ»OBäôå÷oœ‘ç¦î»Ÿ:ÿéYÀão=“×P“Õõ¿33C÷·¿=“gâaç5‡€9¯îº$nÅú¦¦I YÄ÷н¸úéÃõÿ—ÿçîîÓ‰Ûü?³´×÷ð-zý_¯\ìn®ß?ý†ÔrD‰ã¼Ä]ˆpÖ?î?5•ŽÉ}Ë_ü]êýõ»ë›_¿9bw%Ë3ïxüˆÃ‡žróùÕÇ»ßL›~»¾õå端ɱ›¾ýåŒ[da•ØèDíÃ,e·ü©G DÒªÃðÞ¤ ‚¦BlfB©S™°€É² †zÐdÙ"Ô„B°|ßýñÓ‚&Rï Ðá¦Ð_K)M“]… À·#ù]÷òRíÇ,FfJÀÇ ƒPýôë͇÷kîñîÿbß‚Ÿü°¥bê¨ûY1Vd0| íÂÃî¶×/JºnâÄ,ð¨9Ì1î·”gu]CiLùˆXCÆ”í­3;tKc9c”J+èæyÄ}gëŘ²}³fy±pvyv¨o“R\¿³qRÉèXßûncÇ6®¼mWj¯ÃûMºÂeúdž<ÆšCúóÝ‘ßÂom*/ÂÄ!H®ÑX¬ÿÏÞÕí¸•éW1r³7#™U¬*²Á^-°‹Åîõ"èé–ÇÊØn§Õ¶Ç ìcåòd[<’ݧ%Jä9‡”=“2IOG:}X?‹¬ª¯2ü¾8»ê`ÆM¢šã¸ï­Ùöt›OúGN…¶oKۼ؊¨xÎSçriŠ‘HZ”=Û[»4³)¾½î6¯ÎKg$NÕyzÚ”–lç_ˆœgS£¶£Â¿ÓÛVj‚á‘€š °½4‡”é¾2 Èé;¢psÌÝý&=±åzÑ·seBl܇pºê´}7™°_”‡z^?ìI¡Ççç¤I˜ Ô÷¢ÁëbV˜ Àhj°{A*­@sP¯¥X¼SˆAW ¦f‰ñG2hA€n¯Aµ~êÅY øÁÎ}ÃKª?¶™j‡$è ²I+pÚö5èÕ‹gªXE›¢,ÕÍÄ‘ú«‡"üï4…MBQ’Ð%reÇ=G¹¥jËÝû›Ûß4Hõç‹»S êÚjÓK2UM„ÔJ µŒIÙDŠ ”.Ï9H+×û$F1)§«ŠSÀµç±“þ1©ÏÞ Ø’a í-ôî´%ÆUW¶à¹~ºÄ èªJìP°²ÆØ»¨àäÒûáþÃãæ2©éñGvÓˆ \_„ežÁ9B)yairlŽðZ¡o²ä‹ÁmPnì-GðûUT Àw°q禕 ´ð؈Á7ÕOÊi BZ2¹u\Õbn¬Â\AžŸ&›‹=Õ+îcÙBaünzÓÓGþz÷Ë `õðzó&4èRo…Ã2«‰ÕEU ºv©‹m^¿ú™˜ ÖDÀèÊ…`‚y^Û'1µŠ}4ë½v ,Lýc`310EÛ0郹·m[ÏÁD^­¼p„æû¾=‚Afñà‚fGóZò4Ó’wJ)A‰ý,åÔJÔÓip]Ô›mÉ/¦Ü“—^*]=à¸÷êÙ#–õä‘-ÆÑ?þ^Ý¡eË Àæ ó½;Y’ô ²9>ͺ$yŠƒ¨mµ– >ÿœž—Ãÿ»m§î¾:iu_Á©¿¯Nã®U¶öþ¬k©ƒKtb§þöŒ˜”h·HÔé÷ÃÚóןîw»Õææýê×÷´: áßׯ6 •L v’ðà—À%™oΕœç`û¯Æ®¡RQŽ­b§dD˜Æ0ù:`Æ0Å,™ÈHn b§ôÚÎuõtšÃ»‰¹òüG{« ±?ºÇ ºz;ÂRpr]î•oDU!JNἦdŒ`Ö=WáÃ#PÏ6<ôÙÿ”tngƒ¯:ÿt³}4›}a¶ü"R:D!_T2ôì÷Ÿ·´ïLn«ƒWÛ»?œ’^ìÛËì «dvÂO|2Ìç… !M° ˆó…`¿]!ÈÚ³³/ê|Vk{E¢ç}²ŸnÞ¼´Òº£ã¯ëÞY€øâÕÝÍãÍîó»ÛñM†Œà0œw g 
ÄOÂìÂð?ãŽV³(Ûx)¿¸J~+XÛIˆ,²)oHõbÂ~Õ.wT­«‚§?b:thÆÏÈs[¹5'Îòjj«*MMUˆ:Ÿð÷™¾é.ñ˰ÿýâ«èT8¦²"·ÊªâoŽ[½…–SVUüÉsÔTÉ9&ÊfY\1Ψ‰Š„˜=-F.¨F®®Ñí¯·.b—Ø9Ÿ‹ØC®}j´²*ì Y½W¬Œ•«ÔVB‡J‚ž(”njrC¡M„Þï7‘Î{‘¾úÌL ÒRr½™ï1F)Åå(5ó5. W È¬ —8e¡4 ½›s%ëÏ\Éæ²&äÀJ±„UéNÑ•°Ê–š+e-¥|!›"ä è8À(Ê?bÑ…,Ú-6Ñ ÑX`ð©kg†£cC›¬hp’(ÈõAõ4½oá]39*vdDLÿi,É£;ýõF¢ ßÒË—QlZ­Ð|5”1“A'ÖúØQ‰ ÄÔ{Ó…’›Uè>w*5#QÁX>—Ú)¤„—©ü1Wèó$§&9f{k;+i}ì×ÂWÙõý,ÐÎò>%X1Ûûã|S²‘U•>±¾*½TtU,†. ½8¾yÓåß>Ü?ÞtîÖ }ÁÍÈòGõ¢À“‡ÝÎYKÔjxu 5}u5—žɧêœSWF]€Rê˜ôæŸß€¼"MZ:æîþCÚ:îßlo·›÷ÝÔÇ-cr½öŠÓKÅ5ïçÌUeò>kö‹óz¸Àï:Y Õ™ì,T$3ç Q탱€¹F˜‘`›@EzkÇXM‘±5 4úÅÁ0Cç-‰9w7g+–JÃe^ÎÐ6mèÚ8‘ý=BVW‹¡î…eì)S½?ì§\ôøÏ1`0ƒžÕ-*ñü²–ý#¨ËÕŠ÷¶ý±óן“žeAž@'ˆ:_Þ{:ÎHhQº‘¦·™”~*¡†Q»OÉ+ª(*“b,e”)wWò$6Q»÷œCœ²ûàYÃ"ç; Uí[SƘ#`)– q9{uCj¬âäK ºÎ*ÒÂ:ß}s¸Ò›©‡9ºPÑ|“¹ŸUšuÙà‚ëPrY\®è²9^£§•6ñ×4sØŒ“NÙDö%]䯄¾³¿&CÌåµ]0‡tgý•gIKE\7ŸI+ÓQ¿-,‹tÄ.vh¼Dïú抦ÕÇÞ|¼Y=˜RÌN¥êŠâ¯@BžEÙÉ‘€Kè[\?Y;ü4Ó@´½¼˜ÒB?9ÏÐ9Q MFÿÑ mâŸA:c¨Ég04iJ‘4\¥‘¿mùü<ôT©taN¶c¿#µãÿo¤Ð¡zÊœYãŒl_ß:ŽŒ)&ÛŸ( q$‡òÑO²tD£Å6 %SfØB«+àèJfoamÅiâžÊô\$èÿ#ÉÐcà!§nètÞêIÕ¶yütÿðKåõóѧ{%ÄÚh¤}EÅk1Ó×ÞÝ´»û7›£A2û_n~}LÌ߼¹ÿÙðê—Šé^ÕúfʬØëâœj2E%fpó¹Ä…äå±?,Û“¿ {(¦&)”+Ç‚äèIF2iÃÝih#ª¶Ük ÂÏ(™&¯ªD0ƒÁ¦Þ!/2­÷Æ%¦”p*]Ô¹Ò„ž4 XË͘MsÚÀštuHõ”c­öŠHÚ;ÍŽã¹Ë( ©GŸïA¡¦çL‰Ï™í£‡ž q¸‘ž:"&õ‡»—7·bu‹˜ù•ƒ¨TÑv!êËmÊY¾…ÑÒªzļN-“Ð>±†R\âŸâ¨=g!­-è žé—«}•îþç…µûv¤ýCWqKï²ìd¶.f¯Ý@¢ð¶ðº–<@¾BKÞeþð£F;Ô0­ÑîòÓGís^ Ÿ:¥Iˆ2¹¯îò_½Ô-W´Î2Š‹ 3FÖBt@(úyD².ܪ0b¦€KÁ¯•A‹äÉŸs Âc‘´`ÛA¿V2s “¢73Gº ®´o–`/g<-9Kš üUcY«j"¼ŸJ[¹?Õ!zïéBÒ2ˆÁ.äàڙءÚdu(س5Ü|¸ÛN«-`š/ñ E€9#d‚AVš Ñ"ùš•R»ú²¤ýc F!¦¢®"Œ¢£ìD™'‘4¹R±×Nåü8)ÈÆÔK·¨rWŽgš¶‡Ñ$gÌ5ãÙ’Õ%JŒ«’–Õá)À„AµÓÀà¬6½ƒñeÚŒ]ðÔ6k„¯=;æH„Ó”^5ˆ2-BR?‡!•\"óm17æH:-4ºt#+/”Ý¥æiŸ„Ñ A³‹›”§õ!8^T/ûèa:Ä €Fgf ùbi†V5<{\6+&ãóg•GØ-»ô ×>…K„ktÓó–éýÃöþaûøyXì~:<¿t!{á›Ë²ñ‚ª"{ÐE€Kçp˜{RôS›/‰÷B²ç’l—äv›2( åÛï)`”é Kú“¸ZÜØ[Gž¦åv˜D£[æ×±ûí€ÉOs;D~m;Jp˜½ˆÀݧ|ë ,{¨åëÁÊyõ‹)fÙÅw!V2LYìÕ›ÝÓ¦hJÛÞ4q(êœGð¼Ô\ÄEŒfÂ2¡Ùb0efjÒ~FPMƒcuö™ÛËÁ± ¤ØIžŒ=WÃø$”F±qŒ”Æ€Lá’EÔøa¤îÁqºÔÊÇÊÑG*´’kÓÛÁª09zå…Íä á¼Fƒs‘iT€» «…‡‘ôÚ7 Ï…8õ‚AæKºM…çd¼œmžªŒÔâŠá¹xZ‚(û”у"ˆ*`,Ö)I¾|,‹F(š¸J˜ÒT˜r®è–Ý0HèÍØéIáwÍ´z€Ø{ï2ë–üÅS꽢뎮ۻxJ¥XèÎVcŒ?çÚ@;…ûÍĤB¬å”—…®R,"–qø$“F`*ž¢“) 
¦ÓhzJ‹u§š0p@I¶,]cÁbðl×Ï”µ zš…J ZbH²ìíq-z$˜N™\“Šç7‹UÞ‚½v´dú”ùÓï~×–4ñR± »×€¡À$tÒ&FRérm™¯"tµ‰‚àxq›•R‚ÉìÞ«# ²4T’ºŽàsÑ8®*)oÁn«ÓÍ—f¯‰=âøî—wÿ¾ýùoÅ”j«Û"/ÕB›Úê}A¦ö’¿ú—_^^âìwæíHÐ\¥?b $w½ t”¬“¡&3Çù¹YHÃÜÏ‘¤Ô—üÚ/3ó£ )ÌJÜ2î5I\×€ô€qÐ×Q|ëÅ·LH†üK¤ìÎ\Ŭij_bô2߃à¤/¾Çhcøv—§qeWÊð›­Àp›Ø*þ;!ü„ÝHË¥4°”‡Ó dÙ 4…%Î÷ì±&Y`Ÿß«„Ö¿“›Än]¤´¶aÌ „bKÌõŸn»!³—âر¿²˜rSþTFƒj ø <¨k2ª#T P³×|æ(\›‡jòÓ5Ÿ9O{êV˜£a“!ǰÁx’Î=‡7›Ä»üè‘ñ‰ËåÛ8ïÑÏ÷¯Y79‹°ßI¤›h#eLž7ÒÆ˜wý·zÅ*lX/õåœâ­DYÆâgÇbÇ … ³]×ÕäÉDOB(Kôóéeª«9W¦¦kä€dU¿Ç µ-Æô“§ž̾ǭõ÷—÷Þßïêù:°|>å+Û4‘âæ)©ga-,2­Ü™†GT‹Z‘Ügv¤2r®Vp[NiÍÂr° δ°<:ÝÅ€eÑ!§ýUÔ«Õåo’êró˜aØ3W8d©i9.¿ÝÆÀ¥;1ø€° Ò`!¤é9“];ÙZ4g‚J›o,„yåºq‰5)â8Šš4–m)0^\Ù!í|¹¿û|õøñêëÃûOúPÉ@c'nª`M>dÕj²ëZR-=x¹ˆú%»i ¼\ɮɲk= G¼^<~H—\7å\7R¬r­¨ÜD‘pó Ë„dr„3)/]à‡t~¼-Ôr½:70«L Ú–.SM¸3§ ðNÉÐI€›D3÷7ñ‡Ù»,vA$#\³w‚BùzÀ¾w.Yžä«P+¹ añ/7ÂÉà6Q'¾ÊÄJ•W8‚!µˆ¬aHÉ]m"‘øýì*Lç9ªœÊò.hêÑ$¯ðHA¶`]M¡Ä1Ëæ5$K´IæRÆ g†*ZCJhqí4TepžWjDäF¥Ú²W¦4½€â!YñOº€’õG1ŶÆŸIŠIe[à²ï#aõ¯¨M!’פPà%#rÁÿÉÊý4.I¡B Å°É2¹â7Q§*º)ĺJ^i:¢‡ž¾4!˜7 ùìCG&wdhÕlµQçç5ûH_7q‰E,~à‰øC]E*ƒ =ìg<ÓþϽH,ª .·G•&á žÒÉû±rÝÍ“JŒç·ðÃebƯE‡ÕË2²SöªŸÁM#s!˜Æ4žÍ7éô`Sôo À¨«i¸fUܾ  ‘&+J/ü Ó™Ùg¾O¦Š¬ÍD@óa¶b#©Z·ñ°É9ºžc`I–ŒYAêuj¼ûY*¦À2U”V%¾R¾vj:ß´ýìÑu˜ârࡳŒ1–¡¤C‡Ñ£Ù1ÑY=FˆMcütƒ™èV ù&~Ó™‹ _üðr¡ôúóůyxÇ?ñÇO»ÅàÊY[`K›ŠH§¾`Hæ™x•3…èUAÓÅsJܾùC„“Íñ ì<ì¤ß#ãp–R¦»gH¸ÁÜ9ú£?mÚuRo”y?ýSåU3‡°¥ÄsKfƒ˜%yõ³í^]Ʊyør‘™ç:óu²f>Qf`Vi›€_]¸ò(I™TºÌ½MØc·p•™"G]WÆ%á ¦™#yôVþ¥ ø€mR¬8˜J›1¬I] cT êK¬‚£Ò/kÑL°À 8#/[LöF¢è’´È€’¤6iY°ƒ’¤e󆶜(ÝòÅKu‰K`_®9Ò²YHNÐ867{ç•)ÀÒ¦ÌxDçÚ'}O_â¶{.¡«»ˆóÁ~xâN¾üúðx÷Ù5s÷Õñƒ?üö:äCó_ýýîþ“kæÖ¿Ðõo×ìÖ3ÑßnñËÍÅíŽgy·ÉüŠp¹œ‡ŽN†e倞xÍ<:Šùÿ˜VP²ŸQÐýb; š‚• ’™.÷õ€§À*FRíÜaÈÉRéÔLªÄ© pnQ‰"ëT%ª 3ÃÒBß K*‚mõo ÅN½0½ŸU#PÐ65ªnœí€ì{œý¼1¢ÍÑ#…ÐS{1uŒ*Ôwþˆ²¡%¼¸Ýâ@³LÏ‘ÈbšSøs®¬¸¯c±ÐtÐ5Éæc%’MΕ„dKÚ¥°]ð•nC ´`²&hÁxÒðd€‚§ ºgÃl ’RÊÒîðBÜCx; éoeÚ ×ž|ønµ^yføðp÷ÕçÃð[‘c64aV¦¨uvµÙÙi5°åf_ì—w“8ØQן†Ó•Øî4تÕqƒä®YHVá2K˜Àe¶W‡LA›§Ë+ ñbNnyï_&W{Go³ ÌœdeÏ1öSh‚fΞ„XÂ$Jšµ„ˆÆ,Á‰VÔä„8²´c3K!6s¶ á(¤%V‘N£uï_'ŒbôbàÌn1büwÂÙÿåáÝ—üÖw_Þ]º`½•3·!DwîŒÌ¥eNÒ9´ÐïLiU3vP†õë&ùªÒ‹Ô8GßÈq“v³l×nîÂ~Ä?,ûQEÿwgžI½>\ÝÿÝ?"q}¤sã‚ìv)¬ëÿÆ×ýßÜ> IÆî–bFú^IûÞ®ÄhÙ¿ìÔXãøe Ú¿8Éÿü÷(Þ¼xBS÷7le-îþîÞ 3ç4Ù¬Éx¢¿¹ %Õæö¯¶ã7èÄ,êržÌiÙ(p’›oôj `yœ€È‹ªrüƒ“|•lätÿžùµ 
:ÇÕÞiÔ6¯YšÂ=üÝ!î_âÈhaÖhYÜpcKl5"ÛÆ°>æG„5Ä )å,ZSuõÿ$¸q%?'µW~8O"g¶Ë¼¼˜Nyb¹À&½7x I¢ÇD1…¼èi/CäèCv¢x÷ôúÅ…>Å!¤8½4›f ÌbÔõK³»G¨l{—¹S ËëW )ƒH!,*½l3”5öð6sŽÁÿ"mÒð1fV?‡ŒuÇ(¤s9˜}ÕVy¸xsýð¸kk¤×Ñ<œÐ¤ ÚâÎc+ +“z ”â ž§fñ­ëéêD­äC‰ *¥ˆL‹þÞl²~~UwŸSk¯ ˆ˜Çöè3ê;ºîèÑ@Sªqô1`$nr1$ݾ£‹¨JÖ¼Z<íè“õåïä"R)â*R©.þhVÏîë¥&w/°‰·—à¾õxÅONoW¬ø‡˜æ…Ÿd=uê^šÖ@¾e\fðßíHi=/±ž]Ä(t¿ãYßðQŠQêê/¬É¯lÝîÉÚ{}K‘Õï‰Bân.®Mž'Ò“¶´yÞÖ­ÍZżÐfÖ¿d€”A×L ÏMøõÌŸîáûöâW‹ÓJ){H(+4t\8Ÿ0$j›”FÐkÔ+ôÁµTÎ>L¿ÿáAØöŒ*—7w_?þ¤Lú‡ÐΛJ>ÙCT ïf-ÎQ8KPÞ5õxxBä–Rj+«‰e Dgæ Köy²¨®¹ËG€dyÎÈyØLÌ“s ¯v¾ §Cš„y`×=Ò‹“ñG¬¨¨Ó0/·VåG=lMV ö«JCê‚ÒØ‘qæµ…eWc@°ˆÚŸç€]Ë6 å4’H3Èéòÿ’w-½­\Éù¯\x3›aßóªÇÑ^³ ƒÉz`Èm Ö+¢®Yägå䗥ФÈn²›}^-M <÷Z¦ú°ëñz×IESÿˆ¢ ®½Ê0ôâÃ@ÝÍaa\Á±=ϸ ¶xÃ?¶M†*æ´5 &¹JÑÖeöJV2Šk²3{‰vï‡?Ñè=Nš­ €@a®Æƒ;cÔœG?êò‰Øb;…HÔgX<Ù;£jØ;O6 j¡:_Ÿ‚]jvœk$¬g¶nv^ã©U£nÿ¹×/ÞYåØª%ÌIˆÂÏJß_øî›Fª}¢.P Á‘EGu¦ F»pôGÙΓ:*àµ÷$„o:g© ‰W:[¹íUÀÕÉãÏÀªIˆÞöU`‰[ /ü-ÞK“/¿Õ¾¯û¿ä•Ÿ('yæG[Üž8–•Ÿ%Q©]Xǃõ10Ïvr‰)±ÍÙåŽCöAšÄt\»°ßÅÕ;¡È¨GÒ%w9ˆî b¨´âY:¢#\ 0Ö™+|φárL½ù„¤¬¸mœY” .“ütÞ¨Šãëh+.u`Ëge7_E@nNŠ«6Ï믷Ͽ?=<_ßæ…çÙàz³ñ® Å$*‡ÌbBY 6FíÐYX/f7gx ˆc˜í³ouÌðîQ¤ >ë—‰V»þÝ?£¡}Ǻø9+ì¢ò!æ•Fû…zË9@?‚ÐÂ{òäh¡­o»¥9)õÊE›yØ2ÉLb4UÌ ´<{€+α_Ö€VR\r»>=¿Ýÿ|³÷eö7ßêÛË/¯×·ëFÝ3³ÔOk(É¥j‘y oòMé2zµêæØ‰8±ç ÛXò03/m,f~¤OËm´ÆøÐ/•9QÜA7Ú˜•+m¡ëûŠá[ùfÆÆ#x`Ä[¿Ü`\Ûbûоؾs&Y‹bPÝŒ!,ae38vbi¦™½ÿ3/ñéùp³â-W÷¾(7ïéœè]lmfïÿlhd3£ÖÖ¬‘ØÃlM£'ÝÔ#H#+ÛDÔÙ9ƒìfï"´v$~pÖhͬ›J´fZ­•wÞŽ™Ùºlàd”Í1»Ù¢!iÙº ¦¡qÝC”)ëÉrÕÌš` /ƒÉâIrðË–©ÜÞo^¿½è#úvûËú­2_ §yß*‹:곎߼¤ %ƒP-áwÁC<[7;>&ØÑ’ó>YÁ³µ:ä‹úðÜ?¤žI.t—5k,¸œ©ŠkêÚÑåáÙŒÎÓþ;E§83A’Û6ùÖQ|˜™dhðÑÆªÄCv:†@`~R$äåá›àÖì8æù'˜FaÕuªª…¢%ƒ%yƒNÞÉ{iD|mëÂ*òfê§ÍMÞÆU£Ÿ-E ãƒ'{äkׯ#ff[{ï‚Ë:£ccò.ñtL]R—‰‹ÞŒ†ÄDÆ0b¬{½UÛ–ŸãTÚB×$ûI[¸ªŠP:»ÀÕ!—¿\Ꮌ9îÖ×ow­}Ž’ ÈŽX2)R“ Ûå™È¾{ývñmá˜×èvˆu#q‚YŽ0ZÞ{Ý&Hlt¿³`î ¨H{ãCVn²‰&2/ ÄÂ:7f”+ï£Ý§¼Î¢&Ø6#é|c>Áw@§’Ôë…q‘žNí\=¦O¿þ–…¢ o¯÷7›ÕÍõªlç}¼@w=¬šâ¥£¢JÒŒVkލÉĸ ¤jÙŽ‰Ž‘RÚ1g»1aßY~–]<¥U3&Šénh^<žQ„½ Ú`²Š±åã‘ëæ©€Aü€fLïÆD,wê(úRÛ¥IÉ«î]ݯ|™d¥5:[º©ãH"{ë ÅFêÓÝ$ƒÉ¹> #\ >pUM5`Kà8D±\ÏÑ«f‹Hv6„9Ìfû$÷Eж£ Ý#NÐâ–y¤ÐÏ3öÏ(m!ÏbgˆmUè,/½©U8·ŸòxÚÂ{`­kùЦ¤2k"Ëu 5Óüdô®â§ón‘06$"÷ùYDzë ðàM3¬ ³.Dþ=YòËd©½F+¸ñ ‘f;_À'åX€ãH–6f­d± Ö½#аš½Ü[˜…Õº/±®—BäÔ.f6»‰îgafÔHs€Q¬Žm—ե؜×ðž-“œ š2¨s•Â"cù^À¼d$äéúQØp}³N 
9¬Ä˜išƒsTIó%JÝqF{.— Ó&í ³òãý/î'¸íýb•M÷›íF—¬äÝœòvËò%úEZŒs‘–åË«\Л·ÝC»_¹Ï®m š°`Û-=à\^C/Gwþ£¯Ê¦ƒø›E9'^\ ™ÆK)Ͷ÷êzCB˜-ž’9˜m 00ž¥9¨M]€í‡Išãv vbø2ø,;z_§ñ°ôÜeåœmC0Þ€Eš»LmÛ’*§‚XÌÉöL òLrÅwäX‡% Vz R!2åcIÓ{±ƈU R~6¬ý”>ÎcŒ]ðÝ#\‹½"A!ân’×Ç)š i´¿%«Û <Ö …Ó¢Œ¦»kÃöØ á¾wÎq¸ 3ÜvÍ_¤4¯iו 3Í Ï)ž£Ñø§­‚ŸÓnÖ´Èy abn]`Ž'Ô0`u}ýl’“Ù ÌnÄ’Û|4`~¤IãÅG&KÄý&ÊþeÆ‹AƒYkDäób­UM¦1]zÉ´rn´Ö/Èõ*› ãsl›åLÛ-M»Z”$8ÉL²Q«]C]ÞCìæEâäìM°Œ¾¸°.Y!qŸ9bUq ³Lq XévÉvÉÛkµ|7ÔÂãõÍhàû*>ùräóëדŸïÞ0³\.`µ(/aU¹ ä'žXÌR°¹(õÔkYAHÀ1¸PA¸_ pÇÃhÛ{ŸT­jtyäƒCŠ€\«Yl–ÛŽ 0P7ÜÁ~@!-"$D05äñÆz똘t o‚C“Ì%íήŠÉ \ãYOºÏ+y¾ÍÂpçü…šïm%+Õ8•ÌCâ•£ mk¾…:M1::¿”‚Ñ~v<+Òèü¿%šA´Àã ¥FBSçP÷d!tôƸªì FCŸWæÍΚà݇îl†¤Hˆ¨t»‚ï=žL±‘¨*B..ƒÅ¬ISkÄâ‡õõf=\VÔ¢EtéÁªJÉ—ô6’בF6æ¶6¦Ó¨%23Ä`œOé¿ÁYd~x‚Ì=Š4kÀ Žc¿§F2‹I!+¯.̬2¯(Xÿ‰ÈL6²Ã™™$mgg§MfŸ¢ÎÖ)fF³­£¬a&³ÄL Á„E›%å5–t¹o÷¡ºÝ6±œóxÍESF˜"ˆ¡Ç¹Q=Í.ÌÙ¬rT h~T»0o< Ìňî¡MƒzD&¢D÷Ï(ÝZ5XœÕsÓBµÁ„¥ƒÕºjz,Æ!`í¾€ÿB¶ =Ò{£·ëy¸#Ý€®™qþ‚ÞÛQüªrä€ úÞ¿ÝÜ|ýõ·pVµöñåúf[µ%\ÞfvLþùþé~s'ZðÞ=tûåøY}Üî_Vïr.ä’+à3¢%´ŸÜxw½¹“Ö‰é*Ÿ»ùöú*ôXÝþ´Rr¯~ú»I§äZ`ÃæìòL _þúOßÿª(×·Íá ºTXîv3õ9yP°ú ‰èñœèJ„­2ù£2mve)_žÖ¿ÙiŒVgD£¾A.~£9˜Ð7¢öÉÄëŒ'k3öœnü;èõëÕ÷ùó„HÑ“‘ÿc@q„ÉÂíWMó‹.nÙÝ’(Q@-E<ÞOöË_Þþãéêû-Y_×Wß ¢ý²~»úë¿üùKÊÊÂÇë×_×o/r|}]ßÞ]¿B››Õæ÷Ð|ÕãóíðJD9vóíæf½Ù\}¿§o×rÈ[¹ú¾Ùá?Èÿ~–‹ÿ›¾ÝûY òÃÐ^ÚFqZÙ¢æ[M•²1–¤óµ´8Fú›ÜÆ×O›´õL$¯Lÿ|ý°YÿéËÓ·G!ÒÏ?lÀ§Ô¡ï¢è"ûÙ=­Ží~Ûç´Å£oFGeö^í/¯Ï*a;SgÏù¡åá}‡â(³¬aí?c¯'–8`@€qò×ÒœU¡ò¿ÿ¨—úÕöóïFgÊÎÝy©Æþs°~ÖúN®}­8÷SòŠÆ[ù§N^c¾¸ݧÄVúÛÁD½þéaý¯"¡ÿüüür&¶ÿ&ÆÙú/Jf±À€ÜŸ¾("߯o?³ç¨CÒ ó*îµÁy%ÐÞËüA¿ì^^×7ëûßÖ·'rFg+h_ÎúOØ¿Øþ!÷1è—‹æ÷õë—·»ë§/rtÛw?YB,Wc#)Ýî)p`|Muº ’%EH$»N”Ï@­q|ŠSpvê§E£˜ŒaV(<óœP 1ãÛ¥¯–€Z–£&…yl'œä[0âO„:¾YWG3ãu·Mޱ VxÌó-Dt8Ï7G3_-…oòµ¬ÑIYŒg¸ÆÑU] †¥ˆ#íRX.†u¸ÔzÕë_Ÿwy›Ã~¼WdûÝ4¹ƒh Wé ”lætØY´3CV‰$*Ë/Ä‘›R´Æè„¯”›ÒÆ8«\ã{}z4ˆ^mÅÕƒ0¥|→\w%‘æñ¼€ûº€â>ÚÇå|³5âÇj–¹¬žo~ÍRVÄ©®ÛîÌêª<ÂÚ’l`ÁFIJl`k‚6Sm‘*º˜gþÞÏvnç®Ê~ £©ÃùZh¶xçh°M½F~êІŽ)F›QÔ!o+¶1„:°×¾@D¼#´Qwç|t;ľ#mus="žûu“׬o8|W̧Ø€’¤TtHä ©IE´ÒU³t­ ‘,R‡bhÓÜ‚*¥ëè”áZ$²Ä–~uÿŒÀí9bY€ƒ| ® qàÛ†…?Äðø)¥œ"~š¢Y•tO[÷óOPýh zëÅgöb®`³’ÎS*5³Œ¶t²8¶IQ³JÍÑyÏ=’4ÑjÓùèìI0·H‘Z«¦DŽZ;gkrN(O€Ô;Ä‹.’¼y–¾ŸÎÏ]mîyÊm¶‰8­Õ³Ÿ×jñ4K|v¡—µ!úìE’‰4j©ÓZÊÄ1E§w‘ëK*í,Œö=z4Òh¢[ÌÁèQ¤ÐrÅ[ÈShÐÜV)4X»€aï:Ðî 
\R¡Åó\ÿ$ôþ\°º-Ò»:ü÷«Íõêá~ó&¿µ:Þay…À8mÃϲ$Aå°$¥]¹!·3£%›‚ˆ m"¥Xïq6L. ã9¯w’52Þ=Šuâ†I±Ã˜€NùÏi––we šÞ y¹’Ø“E‹™ÃE†äÉ~¦]hX@h[°÷uÿ—VfY×ÏXoæCÅž(ØÙx’Û7íŸÆ“Ž$i!aÈkõÀ=ìŸQ$bžxÎ1! V¹‡..0†ZË Ë£õw¢9ÀÍ3x|‚^ÍGTïŸDüô'yWPœ¶:gÙ“€1¬ '´&–"i» ÉwbÙYãR.¤yÇSë«ÆàâH¿FR@ ‡à¢wF\€Ö½‡,¸pbáWo¢·H%± ¯]•‚ß5FQ¿ÕãL*¶ÕŽÛd‡ÚBM[¨Fî­HŒ1!¬‰ÚN:'ˆÞŽ®òìÑ­… ²éP‹?ý0Ò;¤HÙBÄ,ÉïÅM«¹¸¼G»H\SÛ»‚}ü¿3*Ü ŠJ‚š€q›V2ÿHc¡Æ‚Âv%L¸V€öMÕÙÇ1?§GŽ&ê,_'úmñY¯W½wF6SgŒ%ÈJkŠ&bðu÷JÑ%Ýl.(µ÷èÖ0»ÙÝþEøCôÞçàƒú…© N£>MðÁùNx*æÝ²ÉÝý¶‘Çïû?W¯ë[‘ó›·&;í•ê¬Ý5ڦȑsÞQvŠ7“RõÛdè·Yõ ‚hÖ»¹úlÔ1c@2-ª8ÅI3ÂeZýC 4\ ÎxOYÑ ±ž¼3U8ð „Ý¡ÓE3à>+Û{¸ÒÎtÇ`ÜÕÍÝúæ×•HÔËó½\pyÕÞŽ¾+æK¸Xâ XqÂuš@Üo)ÛÅÙE’@Ë“ú£æ½R1LÇû£„kh'£ý!zg³5&f„àk¶¢è#ì&€‰ wô!&À~“î…b†‡gy»»çÎj¸y–7ûûq AFä3ð4" ®ÀªCô¡(:`u3Û2« ‚xíÜ .)( ÑÀ,Æñ@ÁTü€ <(8R€±³š)Ï  ÅXµüBo4r‹8Ñ:ÙB»õõÃÛ]#ë^GÉÆàjô]I1—Ec…ÛÞ,=Þ½~;“=tl´@$ÅdßoL¹¤‹èƣĽ×mb²‡.]J6ÐÅþ!Eºh ¹ä¡:,0–î¬Ú>Âͦ°Vç{i/~ÑpŠ·SDÔµÔ±6ÍdäEÈ9ò—‚ºû—…q{íø6 Ó)ä[¡ØäæÄë=£j>…eêä‹þ÷}° PÙtŠÈb™²‡Úé.¤N§@ÓQŸg¥)î{0.JrÍ-_-eÊ|-]MêR­wŒc1‰Cãb(`\´Èžˆ¸E_ÎD]Ê{åÕêåùáþæïýOìSdÔ© œk9N ?+qtáeh-l:ùÖÆ²7>EbôÛùÎ ¼X¥EÆÿáÞtqZ‚€äA;êT0UÕ0CLõ‹…´»·ÒN˜iU?¾ùh!mÿÕRTÝi!osòžl ãÀ¸‚:µÁ]rW#س»zû³1V ”!%°*îëí.± ŒK¯ô^&áªÖ/t6ô ãµ÷ˆª›Z3I×ôQ£¼¯[à·³ Žtü•³½¸Ý0Ä·: ã­W¯Û’Æ"äWÜ8;ÜÚÍóþG±›îå8Ò×= ¶@~‘ËŒ1@¼•¯O†¼4…Ò³}„ú‹ùâãÁ[ǦøÑr*ðìP fÐÄ›îØ»¢&9nÜüWT÷lµ$Õ•Ÿò’×TÞR©ÔJZŸu–¼ªÕJñå×èžÝí™á Ù${íäl¿H³«ž&|@Ÿè!^ÙÖeáùú¡ÕÊjpßIíª†ë·ÍFÖY§¦´o›ÂÜRŸìŠSµÇà'w>A\n«dò”Ba«tŒ¥°UºXй*Ãõj*_ctŠËmà*H[?£ úqŠæ.|®ÿaz ¾©LSðÜk¾ÞÕš/À””U³G.ªÇ …äO+«4_Ý}Hq“ùªsèÛ’tÏ;ßb¾Þ¡ õ_F]ÁÛ d=¶>¾ŸòæÎþþúéÇ-Gò¬@H`|mU€¥°û*> ÙAiÏòq&ëk#G¦*倷!LÄ\h;’ms5¼qž‚X{C—QS¬Æœ¾.²ãkÙÚeU‰ÕÌ.ýãÊ9eƒ±ÕÒ*¬:eƒ’;NÆ®Ÿ‘eŒŠPÀ\G0ioÌöÆh~`hÝn{4µ¾6¦k½»¨z·5Æ ƒv›§d=áÚn/ Ùršç•UAxœ‚€d‹©sVŠ~bš9'¨uÛ–G ´lèò(ßm¥6ι2e&«qp}ßtU­†àʾ-+³9óç¥U¥LÂS¨;{Ã[pz¯fضqð-D”’K–õœõ3ʘN6 ãZÒrY†Q._¾‰~^jNWK©s›m~ 5߬Ñ7z[H©.gò¸*rF‘Ö¥1µ…OÉU¸µ2[ÆüFmäó­ú_ß¾Þût4 ó1K~˜BðæäïO-;-š„¼ÞI þ:´?*^”TT¼,QÞZ~Cü3èÈSÀ-¢N@bìÒž-Q—z§IœsH»ŒÿÎîkô“c’ŠûªFå=÷5\˜Pü¼´è0k•ÔeËÑœC=s0;mÓÆ¡Þ¼œSòÌÀe{æŸöìÀ®3ßË/Ðö—#D‰ù7MV‰:Þ4%”ïb7Æ&[ÈsÉLB+œs‡›Ô……È & N²/c“¢:’¿ÆØäsŒM…~v™²±i-«3!_àËʬÕeÓÈųo\ÑòˆCGÊÆÉN6u)tÆ :’Tù+üàõh^FÅßËù¡§…cÎõY­¬§‚º»vAX—Ýo‘×¢oëÏz|„jEK`°ñÏÒÉ ¢®Ž 
­šÙ]½f_V¥Fq…_âyå=`VK«Ù8H“x5ž:Ï€ÞœSLs©iãbKY=:Kù¥‡öí¥ââÛ¿©˜ô¯?(>½»»¹ÿðä]>Ü=Ü|j¨Ü‹ˆ¹‚ ´ûŒW5ax$wq´À³(!›Xɪ¥¨N_3*ÐĸA3Ðyð C3ÐAKÓ•ÕXµCcKìWä•"Å ûE¥pzR,(…J³Ó6Wbjíí/„´E)T—Ø¥ÁµÀy«ûþPƒïòyB¾J@tж³¹¨žóõ:Ï2i†J#@µ„‰õD#A׬ó#N;äksý`½³ {„’º‘ç¡dˆ6‚6ê!\ØHõ¾ +½Ì™ÌÌ5œÉÿþÛ¯ŒÉÂdwÆ–ì%Ç–\ͨzŸž¨ÓÑôêÝ<\Ÿ?þmþñ)ŪËÑ«~ùöpðtÈ |¿ùôíö¿~-=Õ^ý¸b]ýñ/T’[ K¤ÉÅBNO²–’så,±=vŠ.ê1ó [BR ‡"zH¶lsµ˜rj{~©ÈŽ. ×èJmËä°.¯]«eT²v§á}T&'tÉÉ®íÖùùÉL‹ÍÃ'÷m] ¢ ‘zô ²äTÞg ¤ 1Rg±–H:°ŸÐ ±j”"\!V|Z¹Ë+®—VFªVpôޤÄçïöl—Á­ ¾<ÂïF.r\ŒëÄu'¢|HÝ_q\s §×’]仨þä¿ïîYúM>~°’ׇ¼¹ûþë ÛœF!¢mdû7¯ÂF î(j|üÚÙÍþÅ?¾úñâ•ÙL›. W+’˜b}‰Ïðã|€ýÎ!ú?ÿ±·ð—åOÖöºŽ†DM9f«ÎÔÿ¤‘ƒ{³ÐÚ¢ˆLªÁTC”OŠŽRñ„I)Û}²’Ј(B_[½AL¼éÒHAªË>O¹wv8ŒœÎO ]rbO ÿxûþSJA=VbuðÐ…—ö”ì¶1õa®Äæî"«kKsÁî 2­¾¹ÿåöáË'µ¹7«?w©GW„Oš÷®î_h\;gjÒˤ^%°QhkJÉòE´ ÆÊ%´ÕH2ëϯÄ3b`é2û°mI]i߇¶4~Ü]ð<‰1_¤—OÐÞè2Ó=–듯~óêçÑù¼%Küùpÿí¶kÈÖÓ¨¼@ºò.ª ÀÖ.K(Cò·9É_¡?¯{}PÆû2Udç®hzô Ôü||¶ h%ÊΗ½µ‘:ÄMp`ýdØþ«*âîΗ%›Ï//“c:]ŽþÉ uÂÀw\$ÿaé’Ž0&]:"4Þ™ &M·ÿå+Ÿ`^ÿÂÛÆ#sŠí2/ã;;j¸ ³*x‹t¤a¡øk[óh¸+DSÓPÊ„‘ Ò–‰¡âmòÝ¢ËB¥0|ª: \*bùü˜ìgYŽÈº‚‘gðõÎÛ£¤]ŒÞJÉ~šŽHJ"íiÄ4M–åùÁ]ˆfTŒRƒ ¡ ›$weÈÕ“ä²½N+Ñ 2Iöœ^Ø$÷à}Á4àG:Ÿa&yt½7„ða¤…ahp`ƒèÄa™·%³ø¦¦ÑœÉØ~aÔð¤l3É;.:œíú^­uÈÝQšæÞâðÒ&“Â>·º$ þ3™LjdáfJá÷6Ý/+ø¬1ÌE“I(yíÇźoEç6Ð:›‰Ë]°†T©Õfâ\Ý‚q¸Í šŽ Á†cæÈF~ž'^¿ýë¿þËÛˆBì2+¥1Ir¯ŒûכϷjGöºÏda6žóYAàÕ¯~ûõí_·ÖE.ÁÊ<µûÍ¢†óRHÄÔT Y÷eëêGŸZ«ë¾ëG}ü1-Ízù–Ÿ*E¼9³"25Œôè­*…¶t#fÒç¤:‰­bŽ„ÊéFñѧ«Ñä²Ö”ó\W‹©É6Â8ð1§Îú}Óš-Å  qamþiÖ0ÒcéÑrÌMãúQýôîl#Tfã„n!„,*…/ŒnXN)?FÿqeUÙF£t£ÃEý™gäG»²L)2lÛmòj0=ÇNýkdÒOçÅ•óþ%Žä_²¼^®¼ÿÏcìò…ªòÂ]ª«þÈpÏ)L`Tó¿C+ñåq›'é¡“V¾/·÷êãªÜßßΤ÷æÞ¾žŸ±)ѦP®¸óè\ßµcÑO©$»p·–xÞsÇTÚ·¿) Ì3h¦_x™ÆðíëÃÝgæÝ7Ë·6ÜðaRó˜OÿùÛ;ÛÛ³¡ Û˜¶Áûöm©8ñcSØk,à·––d[r4çN¨q¨s0¡èKÄ|jôYfbV{e刴)fbÃÂúì˜ÒîNC<Ô^ž8 ~neŸÁ- \†e–*M`}gÆ€rqÇ“ K±oÇe|\²!ˆ!’ÀÎȽ¼èqŒ~|ø‡~ïÕÍŒnÀÃýÇ9‚ù:}‡É²wÙÎm›q<„Ë›DªÛ@]8N®!‚)q_ÈË% 3c:çv‡±ÀqÂ()#üÇ*Š^-vÄ|o{kŸBÜ–4dõ0»Ì‘7ǨfÅ ·«9º~§|gÚã§_~ùX*‡®P_›ÆXÐA—Ér„6"\=·Ölý•6 rïAÓFIìBœS("ã%~ÜE–a6¡˜6á¨ß‰}™œV£²EÈþÜ!‹z†;òsšò¼EýØ;>SóÎ<±P?^ç‚G—´ƒF„>çMpüiÑO)8ïwM”TÍ;ü|óþg5ÌÃ\ém.Y$j}ßÙAKdí~s—Fƒ°FEϳ6PpˆE´¶­®ì¿ e/}ŸE3®í­#‹c¿¯Ùž¤ëÒW5Þ¯UÌ>žãµ-™ÈHÕÆÌf­îÕàš:$窑».n+¨¨hOˆ»†DÉÁOÙ¿ü(™C½péè»þ÷è’ž7̦q'×1·ôØPŒúåa³¾k*bX"Ô3ùX‘ —Bù^5åçh¯¤8"ªŽ(ïoÂò’U`9Èî7¨â"Ÿ÷ÝÙN%çËPxìKW…å 
iK6tß$ÝeP)¦®^×Ù2NE4ìD@öCæW=!õ•ȽÓ=ѺR)¢”kþ„ì×J˜Á¯ù[ mfè[ƒ3=ð̈>af¨cöÄ cA=ªhKg³‰ó#­xl‹®ðàa /í§\T‚­I¢ïÚÕ5à†M` .†½>›K†ù!SðFÀ[Æ” §¿/ú!È9殕@G@ ¹IÄmƒ”’6U@Ê !Ûˆ¢R^†,# i|bl%1Ÿ¤ÁÞ‡¯êúÝÕÞÛ;Õã\Tò!p×¹Bx2ÙmȽS_Òyw >|ùLß)õÓ¦ 'P»Ì+Ü¿$ 1d°6Óà65ïÍ•»‰q2Rˆ71 xÃ)˵Ɉ«ÕZgþâ¦â&Ô%teÛU¥d÷» sÓù]Œnà™òå€0ÕUÔVçöªâAU-âɽäö3Mè7×ÃϱÁMóÚ¼¶}øÏb†È.À$‰JügiòDWSðËÂC.o³ZYC-L¼ZE­êWƒ³˜Ü¹ŽmÓM M§ÉÆÍ‹gS; ¸óvpY&RŠà”wJJ—%ËZ³ƒ­V‹©hg±—Bwè@ø!óˆ®všê)'lIêiZkG—ø–ƒY H o=˜¿}5Hû~÷éÛçÛ÷Ÿn>~^ûŒ?Ûä{ý·_îÿñæýýû“§s´ÛyÂ[=ÿ¸JÍŠJv8rN+ªžå5¢¢JßÙ¸ZB¬‡ Ö#/±jj»²è#4Ül¹©M*`OmxUx¦hcŸCq# :ñúNÚZ1;/~µ˜2^ØK©Ü Ö?äžÑº©Æ½48ø¤Þ|r]z@Ð4ÒGq28€ÐÝþ+ÛßÐH6£0§¢R…ÒÊZ!ù ;ÏK«8ïíµŒh‹{òèÛÇ.{/û²,r„¹R‡)¡”O€Ø¨qýkøýk*õã!Së‘í÷·~¾9úèõßßýϯ¢pߥ”Þº%r?ur{‰Uϼš{ØÎÚÚ#סΚ&Ô·èQVL;äiHÏNJû²`nì[ør÷á‘£nÊfÙ¶QѤÀîâÆA‚ï/ÜâH²˜:`Ü·ª(Ì6×1fR>vøª‰”=Žà5h(.^ò5+Á Éùh˜!°)ØTµg­×=öô+÷>|¬Ç1fs>êš“ð…ÃÇ MõUQÔ$ta¯N¨*D¹¸×>Šw9Î/òŠ>L¬OFÞqÈÑükúÖwïoQžÞ–^ýí=ʸæ-  Ôå´Û`éÔæ(N±Çó¶Î§31•\\—a}E”}´j+<~Çc ”ß÷”WrʪÚÖâP_˜µhERP†.P˜öe•óÒsw ʺS^#c8ˆn#]h²†oƒ$Ÿö®°ÝXj±7Z\Üìäìr  •ý$Ž–ãp˜øwïq8¾bÞØvо]ôè}CÛ)'PÞíÑäp,­Q^1z?±"Þ…"Ç,•ØJ4#š¼›$IÚè0IÚ¡íH]|B "ÿ/ÛŽ‰>Ä´w2Ì®hèüz¶Â@úÇ#•m%”âØ¶’s8¾¸›¤þT{ñA!¨…¯Ñ«³AÛ Œ_¬,ƒ°Í:IJ‹‹£Ç2ÂRž¬ñY6# Ö@InÚæârbl¯5Y” ýÞv®rv™¤·å„ÙÁŸ}d÷ՇFÛŽIBÇaÿ“óóÍ—uâþÎz3Võ’7ß>||Ø6š "^‘¸êaH]Ë©aÞ²u#$ä›¶FHÀì ÑS°úpwXÌÕð­$2WUeõe‚ß„«¢î“ôùO|R:²®zŸáR›7J¼x)M¸Â¡Ùƒ(UUÕzhnÄÕZ(¸´›Q‚ïÊÎ{<™‹4N§¸oOîiIÑ×7_¾¿ýSüÀéö§÷¯Å#¿Æw1¾–wüîµïJá§[¹Úv©æ®Ü©Å_ÆVlñ^“ O XÐY†U/³qYgó(T@-öP‚Zt)—¦]IhÔš:[.~“ ;À8qÿ@UÃ?Ȫn ÁQüÅ»¡øJ±êâ 6Ü›õÁCnKiò>’GñÍ­Hó#"íÀ`çü¤ïNiÀºŠý( Úò =…3m߸ª’'t\ “£#Hí:týИ‘š 1PC¢y¾_K=¦ÏÔcB†˜ØM`-ÀE€g= üÕÆÄe­)fë1ŸSQ¿Ü„z,…w?äžÑUéA&=r Kç/iB2ƒö^éåw`w¹ôGª”º¿{w;¦H P.Úfò€Rm& ×í1?T ¸sÔ1Žrñfí‘è* ²ÒõÚÛE¢!çâ­D6„™\Ó誧•ίôtÒeÁagމEÌ2Ìän"]1¤žíëʤ€üneR8¹¸ß1b@ìÂäò2AÁÈL² -4Ëè—ƒh¾ÍGõšÞ|‡iN< ˜tŽüd½FÅÙÅ4Ùä8ŒE4@Êú+hµI0)ºøúˆo~7—Ç}:ïP@&=–`ßáſ۽ð,71ºŸ>Ëä–¦H J=Eù¿s‘hê"DWç£CÙ&9e»cžE3ä„6ò'Q6™¤ Ót™ä{ßfNh˜Ðª·þœGy¶'¤XŸº¶•ïÀǪ¦%Îí›û>M­›Áé±þæÀ6õú˧ýäñ'_o^š“<›07^È~› E€È%p Îw11ëq¿s»ä6 }ME(ZËp}…cTB_r’‹ÖR«š­Gð¦øÈ¦!¨©ö˜)?þ~Ê[#¦ƒäèEÍôx+µÓ@ÄréM&›âe“EÀ>“ ÒD¾ÌìxÇ´‘>™%Í ñÅqݲ›iÙ@gû»V2²èk“DÂM=éÿ] -Î^#i…'kœîƒðV›ç%D>JΉTh‹äªòVæÑÒ]SÚ¢-Œ6|‚º®.(µt#‰S=‹x§‰A¹ )LLì#UIÀT ’"Ålµåji5#ƒNé"Qè=$Ë¡k7„ 71À– N1 t%4y÷à 
õV'ÁSš¢@:ÜDµ[g'œEL®½½èŸë®ô’k¢AÔ;T-ðigˆ«(Éî3øå¬9_…¾m¶£çpYÚžˆÛ;›–G@Cé¡KÎJC{Ÿ hØ¡.3vD,!› ¤þAràJÿËÞµ.Çq+çWqù¿Æ@ßÐð¼ u;f…Qrâ¼W^ O–Æìˆ;ÜÅ.0fÅã—ì’TôÌ ïÝèþz„KWžœ@~Kú—n>C%(b» Ø#–û¹­A@Œ1b[3fšà¼Ý,†ÉŽèˆ‹¡¼.°¿ÎÇtV—…^?¦Ê> |ºzáÂWèëE œXC56Ô|¬èƒ(wIѯE :ênÁf¥´àI@wø*ÃïèŒRa|-;¾¾¹æpp—3¾«“UDß*q¾m¥ âëgd«ª¤“‚Î0³˜â¹ùRôð/{›d#>£7%ö±%‹€…–@70x§ð3‚÷ŽûÚÓ@^bg ßñ-ë ž}Pßñ)—üY¨£D é2‚|a´Ä€Dî6‚ZiMž'°¬À…¢„p}ÉÁáàëÀ[¬Â2ÉDÄö‚ÚÓ^­Î;‹Ñ\‡9S£ÃÞ%#£èùÆçÄvE¤W‡GWg΂ȫǩþøÁ½}Ï?§zõ+#&‚]8Õ³È%0{–.©5Ï?¼š ©Šàe×髯&q÷óާ§âbÈ—?¼Òé!¤=™Ð\„i‹4Kl"[JNˆxe·ì ëkzæ9fùP£•–Ãç˜àP žÃˆ†ÙU/æÏvæÝ·¹²´,Ä.%Ÿ{¤lt†óæítd/x {:†±§2¸v1ÚD\â± Z€>ûS€ñ!†8ÎØáèãëƒ]–n†Åà-‘¼Â%Ußc’½—¦i7`û6/€N憂ŒÙNҤȾ¢ê¡f°d¶½Ïn5[Sn„ÙŽ:19!Üb¶= ÚWt©4xÚÛlá¼À‘8!`äÛì ¨ÄèLždÇiØ «r‘Û5`W$íeL”8Yn°¯ý^*_Ó ä§¿ì÷Ÿ~Îu~z÷î÷»Ï}¿¿Ï/ºËä¤H]q²-Ûâ™ÕÍƒŽ•$ejg¾G!¨0µi!(Mm€\÷ÖŠ,múj•ôÀM–Vƒ˜ˆwéžÒîíFæÌÙËR2ÐdYFd*Ô^Ðnäª5-æêѦ¶˜K¼Ÿr‚®@Øœ&ï`Gi¢|@·ïjõ‘ñ2­SG&uE¶p¡ÔE¶˜ VDìÌAé~ÍYPš;I‰% à²óá+b 1¡&¨do VÍÙG`—ÚbùïbB%¬¦#‹ÅÚ”ÇéS‰C$«,§™±®¹ðœÞ_d«Séò€À';̇XMô©¥b¼sù Ý%¡÷½ ð¶9 q.“?8t@]†T(¶,•uÑb7Oº ÎɆÙV“ VËo+ê·lf±`ä˶Ĭè3¶&q¶ L·ÙÖ`‘Ÿt¥†0ìn[Ó¶ùsÛjG–(žJWƒ4¶’ªà±(úMèX­fâkMë-Kís›ö°»»H`‡ÿݸR’ÚeCÓ êLŽÓn÷pÃvÜœI´ &Í× é(RñJ ò»ºW§bSÁm²ˆi /êQt¸ûVÚ*IØíÈÁÁðî‹1]ˆýs÷妽·JØUÂFvHÆ‘'Ël=ÿ=‘ܤ·¸¥ÓbSKtŒf¢ª¾V(·œíäÉ2h…ªÇeÏô5Û‰Ëv³{¥#m†Oc¿Ét¢’hŸéD¡ý'"SÆtòD)Uú$·Sž°˜Kì*{"ÅñX˜vÍpˆo õøçç7Çùíåß|ºÛ\µä¦ö¦–‘î’$u—c½è?5äekg‰ˆó MÁÔª™Êâ0‹‘rsÃ+ÚŒ@ÍL €°é^i„R2ïmkÌ ðèKà;²˜GŒØ5V\¯»ËX›Ôl.±“À,¤ëb'ùÖDy=²ÔwMÖ¿<¾ßÚ€¼ÉÂF§ÜNú²‰%Ï É}Ú FÀ›WH7Pk”ÅÁ‚Y)³‘ƒÄâ½yÊB©I3wžÚBpªßçSúzƒïv±GèIý¬nµ8º´ü(4.‘Ìàg<-HªÃ8*Í·èd ÎXà¤5@®Ê½:Lyð”& ê¬güÖèüŒ~J»’×ÏZìCžÁÏôˆ„µÊ‹ha²¬NC÷ÐS¬zZVl„â*SÝ(ª\ŠàóÀ+«£UL=êäÒr7y1¼~HvöÓømñeÇñ/óÛR@Fm÷Ä©ciça©ùd† 9'|ËÙOŠ?¸¥2æ:lñsž•ï]ÏtzéŸé¬|íåùML6uñÒ%Û§ RC2ysFÑKD½uÍÔŒ÷W£Ü×ïï¾gû˜P¡äîãtaåÕQdo‰ê€Zé2µ…•1Œ„É¥*‘¯ F¨èw8»ÞyM“™»‰mÐ#m‰+S;¹>#žwv0‰Î.ã`ŒSÞYÊTP[%­Já=9^1yvŠ÷NŒ(×ìs—ò6•ã¤j_ç_åtÔ×ïC¤ÒnÈvÞ”Œoê°DlÁP SÞ»[MG]!g“‘–Lr`cÅN"3ì Y‰´ž²˜«Ï´a£í«ÅPo£Ç(µ„¸·¶Tö¼Å™‰NC›·]hבּі ÃþcRór‘ï–TúÈ=ïm(« œ`o²Ÿ4·.Ùé²ÄÁŠ DE}öùÊê£U$ûUÞ)mRX@ õ‹q[àqÁ"çÑÁvSв—ÈÜ8“e>òÓoo¿ß?¼?Ä*OÓŸ~šÿÜßøš1ðOÌŠÅ]= ÂgDzÖ$±UÅ>›’ã6 à4öXx;ÝîQ¸)žÏÜŸ¥Ê¢FŸ¿?S¢qÕâoPÞi»:)îÓ¶âNÛ[W¥è5l®í´½õreG’ßjßtxDK•عL—ºý–«E÷æ ƒrU±€¨h¦ æÑÁG«Awf=-Y`¿É Y’Ý5šm 
|üÕ8¡*ã…A|„)D®àÞ~ü‘~&"×ú#†"r ¨…î®g’K •ËÆã{[F&‡û6‚- s‡PlyòËØíÅO|y||H]v›ËÉâEÚYP´ózEß4+(ns;È6*°MòÁÞE– á™K™Ž‘P0?-ûL£pâN§hbzyí¹zÉAvM\Ÿ~ùh¹ï/o¾ýòþíi€<957£õ× vDð¤§b²Êº»cò™½b‰ßâé°—…̓ÇÁ¤¦î!^ëÇ:mÍE¦FË付ªÈ;ð0‰¢Sý;!ÁTлÂZÁºw9¢Ptøš`²–9¤D©Æ0/½$W óÒqÚ°{$Ç»lÍö¿¥B5D÷ö^s ³‹j|Ò´$(bàÆ®yx•ø/1Hwùöˆ%rغNÝK4鉮­%0fZ3Ýó‚9rEm´e.1å1§Úpà&7‚oP¬w…rVI&Òë †~̺wи½¹À<ôc.×M¬¹«$”ôkpŸ$÷Á²^þ¹ Ŧ¸³mK*ŒRWHž x¾ËùPS­_P0 •¹"ѱ0‰9žkû(ù`×H­–ÈÓæ´«U§ÆI‡ŸR믖§‰…™ÊÒ¡ÙlE !âá¦h‘&ëPñ¨°ÞL~ÿØoIýO¦ºtrÑh×c?Š·¿9g¿ïÍùÑ•ïËÞÝC{S#ÌL„$EŒ–•Ö8xº9'Ô‘;ªoŒè‰.ñVI ×ÃÛ 'øòC°w<[ÒXô®±ïýçyvæúú¸å§úVKê®,0~ˆ0N$  JÊN,¸ÿr÷iZjGÓ½YÆœìoß¿7B>½{¸»ÿTbPÕ3öØ Zžrl6ÅF{‚Ûgß÷Ÿß|2óõ¯eâ›™ìs l•º!+·O‰˜–uw)ã² {x}œƒ èÏWÆw¾~»ÿxooÞ°²Qmf@…6*µ ‘§­à©æ¸‹6fh6LM4‚wN}YÁ{ÏEuÔlB¼"P‹:ÚW¦ýt6¨cZW€ÐŽšNÃNw¸*¡)58„û„Þ}ýp"Q‡=FŸ,ÐoÒšÌ7ïï¾Ý-ÍoÛ&zSä¯ÍÄ/«"{’&€8vi×߬‹-¦ˆi!¡+Ñ¢àà kŠÈåWäiÑÄÃNR”pkM¤].Ža "¼‚é—O‰†oZëÜ®jÈM30jYçâ.30kjó…`AІ _ÂXTAÎ/ \‘¦ÉBò1ø¡*X%-¸ÞB ÷c·ŠÒ”~ή[~3J,¾I«Á},a˜qʹ‹5M$ }fšµ áÆ <29ÒÒ­¿ë5SFŽ"LlnR+’\ÂX£ ýœ+¶HQÂÆ€KQXöálÅÕ"ä€Þo]iûùíîÝ‹!ÇBÛ0Û~2êѲíp?æ®1ݤ'u¤C ×í3!eR¸n,·3£úv®“eoÛ=r„6@às@8¿&ð§€vß4J÷}Tºèñã4e@‹ó'‚W èñüŒ.@;´ðÜÊ\‡ñ0J‚kþ`𜒄ýÂÉa–€ÝD¦¤¾NšHJÒ¢ÙIÍ#IF CsÚˆE!Æ-†.bðíWŸ6cˆWÏ~ÆÞQìÝ¢ýRuŸ æ{©´ãž‹ Æ|‹z%”™ŸE[Óg„R§Ï¶s“tppÑK—t,‘ $Æ„Y¬ýÐé¡lÁô ¼Eq5|u ,úU¾f{‰W«Qét â…Â&¦vܰQð @U –žn¾û™`’Ôö]Ná0Àâ}¯r=¸\ ·"ÍeN_´©D‘ÚA]—\ÄЀœB jQú&¹øt÷emíg ‡g ¯ß^þñÍÝ÷÷÷ßÞ|y|¸wÿáiœ”è„NÍ—¥DìŸb²˜@sÆáH¨!R’Ö`… ë½f)G»L>“o©öFT{æ[»4»úðx÷¾»ö“ìcô*DÂB,ïK"Áä²aÀŠ*ƒb{‰Ì•7 ú{ÊÔÎÀmey„-uä ©®Âi'éöRP”slWs¾™¥)¦')ι:‹oçÐÉ‚µ¥”‘eåóY³H˫Ô+AääbÛ‹JÐú}… -µøßÿ©)“€¦/$¸)3™±34ÓZ ñ“xT„¢Lˆ³h K2&ÙÌýx´Šx0}–Å„±¡–qWîâ<‚tünŒéRÍ¿ë]üY¾xA¤V«7_?üãÃç‡×®¢ßýû6#¡_÷T i©¤jZÞ(Лjo%]‹·•ôlã¡mYµl¢½M4°çl}õ™P#Úúí«%|áÖÊÊ;"*?9‡¨l&åw^ű • U)ªûø‡˜Š]K´ÃìL.!CîÚUßÙ±ÉÜ’é1ïjp¥¥|å-×òõuuÀdm«ñßW Ót¨#EÛ*ùLfE“!3S~R Œxkë*'˜Ü£ÍëLg ójœJë¿—åÙ€,ÉÐQ)q5ö5Õwµx›¸ÈƈºØ(îì%SfÀç«!SSð7Ý;@ȯhïÀ£ýȃù'ÓëٌݛsIÿã¬Ý oÿîáËw~úÏ?>˜ú¿}üþíiz÷ùÞþ}÷1·”€QeŸ¥ý_º²EwÛXÐÿ¥YÜEaØEg¢Ù£s'7…C¤ù8±ù•ݘ§Ç‡'–ëð—_¾›+/ òW' ŠDÚ©}¥õ³†ÔµÀC¹i¹½ó5C;~)d2o–2&߈Ýl©Î $È–ˆi„Úã üÜ ©‘™8³…9N8 f·]FïÇBŠ8­q±·ÄK?ÉìdäaN$Z¨Ñ–Ì`ß„gç(¡.po˜¥vCpSÂ&-S&â(‚^QóÃÁ]VÍ×'«YÀ qŠiP™+õ¸ŽoWõøðˆÅJí¥Ç dQZÒÅÇ-0H¤õ Ž÷Ùý7|ùãã§ß¾?üÇ_ôåç?ÞÞýãýçûûóXº·1T¾yÑrä«*_|!@5ñ JTÚ% üL 
C¸î7­\Ì·ý>£a.$}&ZÜ$XïGì[¢³p*ti™Â®¥—…®x^zIGVH“”%Ô!ºëо·WÒÊ ‰è½…ÌØÅǨ;l'd7‰zØî«‰§oËTà‚Ø´Pï>uýé'ûܧÓ}n&4Êeâƒoî«=ò¯)!¦ŒR%l‡{k¡Ø(‹š„B=b([TLæK]~]ÆŠ> 5}&™_ÔMA¨b&š<ñøÀÅò ¿àÄü].ôŠ!²×.ýÓØ’Ćh/a|eƒ¹0ÆøÀ“ƒré_Tº%¼> cŽ4i c$¡€›âò&¥+I@…ÒEÙ;Ša\Æï_F1ÁÌÌܪò/|8³€“ UŸé„½Ù˜\’Ël•ü?ö®n9ŽÝF¿Šë\G’ø!èJåbkor±Ï²%åXÛòJ¶Oüö ôŒ¤Ö gÈf“cW6¹ÊÍ´Ù ð €ä'çTíS~õ(TU.cÌkx/[ý¨S;ÊhQ›v”e»eÀ ä;¦U™>ËðÝ>>Þ΃Éõ×Vèn ¢» XW°¨àß¾ås’±…JȨ\Œ«+û°× ªÒ$Â>Äâ ª8O¾x‚ò~îAÄ‹¸š¨/£F¾Q¼¬8AÍOV·s‹n(ô¶1Š‹â¨±q´b l¯ý÷ÁO ï!•Ãöޱt¤òÎå _$ÒÈ-%¨­S%ž<¢Ä„±ùh~Ð,€ú;ü}oȪÉɳüñîî«Zê›Ü?¼±>¿í›(žNÉ%|ÿI?ÿúðãnòG=È®ö(pu§2&TÐÆ%$²)Û 5õWvNß³½¢ÔóBPít$¡ÕæGøØHÉýÖd˜Q¨W%Ã8:¤6ð« ÌvÝM’ÎéòîÅ‘OÝ<½YE2ŒNŒ@ÂõÊÛiß a߈bÒ˜ª©¡)¹ã†&IG;¥›:QLìcq§Øšÿ©¼Uùjù——)74)@N¢·£‘üS曚8jµýLuJPB0}Ä"uÒ¸ ¦ÇÞ>wV/hçÿ|ÿñSÝT’¯»ÏBì¹eHCã½üñqòÁ’ºi'æÛ¨Ñ/wM„ü •{î»÷o¿šÇý^1ÎOûÏC춺?¢ö]]¢ h%ïCë<²ç¸ÎÏ^9EÛ‚óy]ògˆ¹ª»Q¨ŸvùcyèY›ú÷«™w«aÛ®þü›¸—˃',™¯”ìÎsÕ5;À¡€NM€îD!Kà8èîŒøš.ar Î6—HM?UEC‹ N˜»UW5×È(Öç’»¡x £QíÇ(®;eÕ~”ìæÆ¨êFb¬qv×dFzÀÇ©m6f]æmÞ“”t,¤{xñšÙ¶Ê¥ðùß Wƒ¿ñ~ nÛ!ÚRçƒ$¹. ü;Ñžo·9!×-ηéRJêRR·л¢ó|–šm)«.ƒ{Õ<»°¶ƒOâhl«®Á`ØV1;ÎçËät§0œo\Wؖξ÷epå¤Ö?à6LO<  ̺Πǎ’«OέËn{wZà3SJÜ„Î!Å&†{µ€Õìƒ3˜¶Í§èC,§ŒhЇ"‡S4ùÏ"éÑêh:ëh} [-Q{5Ñîn4 «œ9s»l¯ƒ‹.B¾C©Š|`-ùN%œÜC‚DqS¨¬Bl(Ya£L÷« «ŸnlÎL¦ÙkuÕ-hÅc69e>ád„”ÅŠÖ˜;r*ãã¼H³é¥-šB¨/pPÎc¬G©U“æG49H,’Ô¶1爗&‘L ÿ”62Z:ËñÙtþî]óÃoSN{T¤à`™$~õŒóO‚ÕT§‰ç3¸¥¸I$6\¾ ¤”B²#qc‰G-ë% NFøb¾#N¨®fâ¢Rç*¾oVÓïŒ0¡£”ÂóE‘±ùŠd~„œ*Þ‰qGý:¤ÒÖ/IÄ!É/”3~üöþñúáîË{M.Çqœ6V)I¯[Õ"s̺yÃ2ÇëVu*yÜÍnRÿ¾@uÍ'§‚\t¢ùùÀÿàÛÛ®Sû–”O lU#8ñLgœŸq\EXïœJƧIz|y¼Ÿ\šP:½0d= ©tpNÍKD°ötSøOèIææË"{„žÁý­%Nc¼øýÿãõ‡Û›oú"-—u/?‘ è´_êÑ€„ LNÀ»¡ÜwŸ,lßu ý˜ò±êü/ß>—·píã~Ú¦žÇåݦ¶ T±,¯$p-¼jç,geŽga6õ°}\®l˜áÉInG DýÙ¨c'TÎá~‘Z;Fa«9ZÑ5ÓMg€»r¢–XCä•*³ÚvÏ(ÑjÃÝ¢Vm'Áù‰Ì»£>¸DTV+ Ù‰^Ï‚í’>T@À.ì ÷£ƒ]µ^áL;³Ú¿:TO£³IúDÿž7×éiÑîÈXcNÖâ/C °¾Üß¼ ùðÏÛ¯_>ª…ªŒ?}úöùîëçÙW¾ÿñýÓªT¢Ñ}ž–}ˆI$n‚uMŒ.Îhüê©Y-òjË*fN|ÕÕU_>ñ)‚"4ï%J;^„ÓãÄ×eStÃ*l.©F 6cÿZ-tº Ž]ä äõσàîKÃÜô@iôáHÉï8‰ËpJ–‰*1_ú¾Å5\•ßMjZÕçd+BÅÜ–T¯GãdöÞ5Öoœñ÷–°)œ2»XÎìE ‡Î³Öî„ÄÙzç…z€«.;’G¡Kƒkpq´mK ’)ÙÐWÖÓ‰SºLÉF¦Wp‹ç»܇"¶—Á»jQ㎗ó &=8%ÉyÄFu(»ö"vl~Åk»S*C.*N¹M*îÿt ‚¤Š,ñâÍܯr‚-c¯0`6EÝÆUœâL-5òDŒè\ =šU_ {m¿êkIo9ýg}S_RRñôW©EŽÅÓŸ1[5¿^n'[¶ŒqÍéßÅèeôéorÎõ¬ê+'ïKc ¸wÏ*wRñ³Ðg¤Zˆ 
h\õ“ÆiåâDð‡s,?ÞÒõ»•Œðz ÅñÄ 7` ôqz0ŸT¯›¯YŒ¯¨ŒÎ`DHEtΕ×-$Ò¥Õ[-ZˆtihNƒQg!Cò™FT?Q_¼téLÏ¡Ž&^vÜ.í<ñçðàÔŽ²0H{ú\r±…ëÌi,¢Ø\›* a11h…P‘’fÙB>g§C~òÏË«ÕTº8‘#± ƒEyôò!û*ÌC†?û]ð.—ª7ÜHŠ ŸõhðÐXO™xÌRÊ«GW‘Æ»õn‹}n‹/SQÿÁ&†Ó«ý]>aSõ»j‰BWrÀ]Õ ŒäÄ0ÉÍ 3—1h´½"z æ¯Ø"„~õÓýʧUêó°Ïué“zÏ7r‹·¬ p‡%Ón]¹t¿U,J¤5}3_qsÒëÖ_GS=·eéJs14f2S\žÅwöóäù^ª7EDZt} “óŒT¼> ÄyÆâaõèðÒU[a¬(¢09o;²‰s‡>jŸŽk7á|iX(¾ ?¨ŒnÞþåoÿ}dH1à›oºÀÏï>ݪUÛr¯vâ»o_?¼(ˆWDøú¯Ïoÿ²v”Ý6M>îå ’¦Áv›×±äýsÒ:ænó2þªÿòë0}ŸJyWhv Å©CÓDÜî£ê&ÅÍMkÑUúêŠ0Sò]²Œ6G u~úæ’ò,}/¯Vá«{  ]ðº•qù¬¯Ž¶k/¨÷Ѻìwð±¥Y5$&›,êÛ|uÎøê”Ùbœ„ùŠŽ!Ji‡ƒÏ3x-^¦ì«{€I#*÷šÐøÕ3¶õª²·^áÒšb +¿$Žêx¥Í†ïk ß °]$Êjá °hø!P®Àwñfvo«"‰ŽÂ…]ó†Y;1îî1^_—Í•ðüD“zd b¿ KwüMªÅJ‡#y‘×y.ÿìÂQIέ'(nügOõ‘î4ZÝNð›€]‹ ¢§«cðÛG#ÄPD8¹ Q£T ‘•‘$ë‚,^­ Š`Š¢Ñ¬_E…«€"ä0Š0Eº–ÔõàXOáê—_ŒÝÚqHæ†sa”eYÆtÊ×®g^F‚7¬G¾v=gQ-AÂMÆ ‡ÌŒ òï5¬³Jæ'ɾPoÝI 0 ”è›Õ!ËT¯‘£‡ _˜üßow×ÿœ-ª‚á¡æ)? zÊÇ>ø–{X#1 4fÁþê™pÇh_Ýh“‹¸õôS'uII(¹ú¤ìlç1öˆ¸uÕN"¹ØÕ©Ñ!ß CºRУ'6Œ1©²Üó„WUf»E‡ ÆÐype7 EWÖ!ïryƒ…»ëäN_d¹´{0 §¿¤Rf n}ݧ˜Ü‰§ôîäõ™-Ÿ6rýä£ì¤^` ž¶éEpÃÓM.Û§hš€žÂÌ#½@–ñzÑÚä%ƒ«fµ¨8r@Ú‚SHÈ«g³Üøœ£¼}}ݳéqF!nEå$•ï9Ô ÉQ'.„Ò…ê#N6Z!­sB:†Ñ14Ø,ÊP}Ä)yž[/ÎÆÐ€}/ˆ{#Fï‹Í¡» ãÜQ<8âä‹ÃMÄ.•¬ý¹ÄÝ/JþSÚ=ª´û´ ¨%âmöÜÄo•)¥Ë,£s©ÉÕÎå%·¤sîVߺ¢‚‹€‰*ÙÖì…h{pÇÛª½¹¦Uiz Çx ±2ïé_×S|ú~mÈzsû/S©àñY¥f„¹VÕ—ÝBϘ¶Û =zÈ%̯”£žŒ>!´¯TA.³Ò~7%2=™gå?ºìÙ[ß­úwŸï?¨ÜŸ¸XoÞ¼|×·û«ƒYL«ð“sûÜȇw‚C‚9YDú€oÖs}óþÊ@èêýÕûßÞFô¤ßpG_ÐgF|ó?ÿõÛñOï>_}{|~‚Qºh#óß››"íA#…Î}Þ³ÐM3àÀˉkÛ jýæóíof!ådu$äŒôQ½®ÜÉäˆ"¿s“/ñôwz²æwøû¾þL_.¹%úã>úrjËo¬!âoûn‚§cy·ý'ýüëûù–ûѶm ]Ý)ÈÜà{á™ëKpeŽÇý·]žÆn¼óBúþ^ô­Û:?Zº¹YA5¹¶–ãÆTÛ¤F4qòg¯Õô¥âDŠæ|úZíùÅ)w­¶x³Šä¯­JÏ’X ÖݶX’°õVxjhìIžŽ¦ÐégÝ)Qºb§Ð§TÞ)Ì]¢/^¦ÜØ3/ b$\6ö,±©¯GcßP7€îYÕoh‡/{ …ž-)Smˆ—…è"õØG¾ÿ‰1¥OUSW_î?Þ]ÿX~# E6CÍ+Y„CH¡æ…d¢'-¶›.Úf,ý îMÇQbØ£ØeîwsrŸ&.bÎþÖÕÕjÃ.ÓPeؾEÇ•‘-lŽ$¨‡’¶ñàˆõL€z^¦GÁ(œñj2§"‘øó Î/§bñPŒ˜ͺY‡`ÔV 1†UþM›¸¡Én,©lê{¬\’á÷…cÞF¤ä[9Ù;ƒÊ©M1F¿eÓ#cÿ.šiŠÆ|cgL^kè¶Ý®äzVŒ«OÕÝ]kÄþûçݤóê.H‘N‹]5™³°Êìƒþ‚Øo1ûàúß“h 4©r‡Ä¿€ÕŸøÜNšu¬èêIÅßšw¢lËÁ7±©ë„ódŒ1Æ|F|½jSžSÙbD–’Å0?ÊâYT=L6…‰‘Ù‡U&Ë<…M&K©¿Í‚§ A`,±Äž—ãÝõžª£¬wŸUæO×M1`9}ò‚UfømÖ]ÃÉ‹”„!²[$o‘[/C5!LÁ•ó­I¢§âѪBÌuÛ.¤ÔcÒ²®‚Ͱ[c§E©°SžŒ31»ã*T{儈O\Xr׸9VÍŒ~Eõéfœ¸¯à“ À_™Ø‹×ŇÆÜðbJÄ×ËÝÝ\]¿»zÿíóÍÇÛ:Ü==ìIn&ù-%¶{>5åŒ4¢Väâ£cŠâ: ·n ܪJ 
g®@[¤XD[^~Â׋tºÀ­F2I½4Y·%ݨ0K¾Ü"9ÈÀ­†œ¤‡Yºì™º1+|µÃé}Åämk7íkêßW¢M‰¶jAw»ºÇeÔUÝ?>Øug©¯¢ü€îÓk÷­«SËÌeJœŽÜÚô~…¬ÏÜKUz’›º%ôR.3Ö˜XDòĹ+©…ìz¹-:pˆ«âÛ.Ÿp4«”3ElöÊ„ˆ.ï7 ñ/>¡ñg!Î@mPhþ“›Œ|7„ Üv<þYõëúL6ìùÿ­»ÄGâ·Â4pizËy ø¶ŽzYu»Ô0U@ˆ!”±ÙªƒB œq?`áQóE0]À9©7긒ò¹§9\>gB'ÇÍ–ó 7•qX£¼|k­×È`íõÆ:`¹©@ý'¢#ñä¼8wù1¸»÷wC7ûÆÇûo7û¿tºÐè²À-UÁ¸-É÷¸Ð(I«×}Ƭ€ŽRiUªjèE¤ÎŽwYH§‹-Ûñri¨Œ´ºÀ ÇnðüÆ |8UvE]éN¢»Ä5F &ŒÜLŠý!–ì†0x¦Ÿ”´;¼:øïµ)»èd(ÖRj`µõSOÈÔ-gW[/çvV œÊk©(¦Ö­è<¹ !õ`¯³;dï#Ñ¥!—ýèRW•³dˆÅõ£·.Ð’sK]ycb¬JÝAÔ]ZŒÜÝ8b ‡›˜äè'ÖM¼D ïõí>Ü?Z[ùõ½¾Ù—©Ÿ+€Çq¤¦ñð ù¾µ²ë†Æ¦+‚Ê`¬!0sŒc~xÔ³œºŒîÐ5x^åý’Ò7Ýf¬<ú¢!’µÒd†<¸I·Ï¸Hõ„@ÇnÕ•hqroM»ž ?³È‚QfüFª•yt)ÄÔ’ŽsŠEÁÁ¯5)‡®óþs ‘Šðʪ\®X/A!f“lÏ鯶hÏ8×¾®€WëªLÛL`0º²Gàj/ â&èÜ>Ü^}¼ýýÑLÌ÷9_î¿ßÝÔfÓŠ·¼dä%7a/†–ÒþD}íÈa-øi¯«`S-u¤¤¢’ØÈCÑV9ë /DØ­Í"Ç«ÐZC&ò›œaB×FÂv ×úÆ­±Äg¸oiDì})<gNî=SoÛ{éߘ©îƒõÜ .7>™ç´›Ý«»ëùÛß=ÿye†vñW`7A»8¨h ¹^eç%ÖAÕB‘´‚OO؇b)å]è…pzð"˜*{ZwCAÑ…¸Í0÷ô°#yŒŒîø¶xÞ¦èâ¾c.CHÓ·Î8ÔU@0m¯€(#CfGž'¢Ò˜£ÑÖçG@j^㙽sÈ5Q(²ËP(¦ã.I§j09oœÆQÆHLgŒs÷²’%.^¼L™B18™’uQó’BñÕ36q(¢wН¥QÜéŠ4%·EÔi(}KÖ³ÆÎ˜#7 R%jHqò B(ê„r^Š:ré½Å›U †Äsµsªíeï¶mØrÃEˆz”qêZ°XßIÝvtÓ1Ý Gõè¼_ÔrÑïF,Õä\ãB^=èNtÕP¬5º"zÀ9€Mº’8¶0gè©+!Êf’+;‹w$@³"}þ½½£>£ ê01C*ë‚úhgév÷Ò"Îr`<‹£‡.誉“"UW]8ëÈÍ ˆýçO=º¬RjìèÊ=!ë.¯3G³§pfUPEÞý6Ðø[b*#» ˆ°–¶¦^F½ìÏö^=÷À¾ÊW£X²?BŸ ¤éAha«fZÅÉ‹° mP‡ôä]Ýr·’~Unrs8Ýè†O*7»7§êœÚ­Ý½ÜS)…bbVdS%JNü`ûûdÛ G∠C;µ®ŠcÓÒÓ?_ÿ ¬TTHP¿ÔŽMv¶|©³¤ðä`–©úE!g`õµ1¹k|ðfÎVˆ˜iv,|øˆ~ÜõÔê»(6YIXA5«sP«`7ÒîÍÛÿ㆚)Wž;q|¹Ø?ɰ!Ÿ€f3 2©ocÉeµ˜÷‹Ds\J5f“îê8‹¡Õµq~4sE$L±­!¾~]BÄž&D¼IrR_Ñr'}–‘ýÉãxçð.ù|ˆP§Ô¼ãá&%C"hŽ÷O#@»…(ù±&°ŽÓõКɱRiÕÒâióÛêS+¤°ž:‚3ù *’QVÀ‚MЩI@­§F%¾2R(jr§ÎRÐhÀæÓ/oÊ,vÿ¢?i³Ï5÷_üÒü ¡0¾SÃkòIXÑW3• JÊÄ,-„"ž:NÅ»¶ù¹Tøö`‘‚è•dYö#—·-] ·†£rRÞ™l`ÒãÄdi®sÁ°/­e@êŒÆ Þk°Q+»GpM´÷Ú LM­‹ÂhÏ:£Šn]¦˜± ñË.5Îìßš’×-ƒ+ö⡌±>p9ϼQç‡Õy¬ ÛvÀó²ë¯Å‹K%F¬Å‹µ•  :Apõ\s+ïgÔ‰¬ó…C–Ó{{xãB‚TC@¼,­ñU“ƒïR°^MϤ‚ƒoœáá&9Ã@Ø9ÏÅ7ƒñ­vó•«7^îq¼²l7†Æ*bHÜdør¥«ãÅipd°@ Ô×p9‰€ôÂøÁ›•¬F4¡S“ƒf~°-‡_ú@ž³T£'#s¢TÃtC;ﯱÏÐר‹÷ñöñ·ÕÓý Ä»ÁŸs;Á¦®Á+ýÁíwÐ`ë]éÏ=³ä®™T“moÅÆuƵ3sfSÓ[¿_ÿòÚº}©Û•< ìä»9­¹šM«Ÿ=Ún4­j›i9ñ*T‚§Ü~ÝÞ«¸8©§¤Ô Ó¢0žÚ8kõ 
‘›ÝȨœ¨ÔWVWÑI²cщ4Þ“T4)°/oY¬Âƒ³¼”¸¬ÇNâ¥øö3ö}°ê¥+vØ«OD:w³òB÷wËíc¿‡j±7µc7Õö,+g&…]Êͪ9¢,ö¶˜4Žz­àw'.ÁšYøµV}xʯH²¢ë@ª=2ñÔ0"ÕX$&ŒÌŒ¾‘Ê|:.)¾±sÑgºÊ¸$Á¢-uzži“ÆCÆ9î:‹^@¦p×™@í'ÔÅ}ƒ³™ñx»¹_µ‰î?|¸V Ê e-x‚™Þ츧sì—ó“ðÚYÇUæÖˆsí²Ô½0ø¹„´¥÷E) "æUˆ|>1ç„/nLy!nº'ý…x-ÞEµ Öð¬ä(¹õ3#¸ÒCbà]ä‰ôËŽýç`í7>÷ùzs–ýè<¢ŸÄ~03L!µØ)_ƒ¥91þNi´ùµîhä`$ß»»ÍïŸî7·wÛqC˜øYÉíÏ@nßYP—}îZ¼»õöñù!>òçç»_W é.gêîŠÈ^`]Ívm›ýƒ¡ñ•w£HÕ*xÙ ”>0œµ|Ž’Ûk„ibû|gÄ÷’6”ŠŠ»E@VßZãã‘BQWíd:õ€Œ+‘/yYp©Ë¨9šˆv°¦²PÌG³\fqƒlHxA¾û»Î\U…¶YD*‰cËǪט…³|¤¸ÿ Lã#ÎïÏ ì;6OüYˆwÕ”™ÚÐtT>%†9`¹g;³:ËJÁ„i‘ y;‹¯¤<|­ôÃò~ýJÁw›e,£]Ô9MVÕ3 À>’T$ÁZ%¯ f.­¥û„b|(rŸú¢”‹&³/N86™ 5rŸ\7.sÐB?çÝJÒ“™$i4Éy™Iq¡AÓ™EY`¬4I"d‘â,s™â“˜Ë0C  ±h̺ðõÞò-Qõãþû ­·úbO}‚}Û}¶]r׸¢hà¼õ–pŒâcÌÌO»ÚQAoß÷â6 ž7³?}}±Åör¾hQ­¹Â)*‰®¦hѺØÜ蜡¶k2„k¥¦QFÁÛ|EÆ#eK&;#djÑïEÛ!Ó¨*ô€ÆšIz:GI1²U¬?{¢ã´ˆeи|\?<ÌjÄQŠßU“»@'‰*F–‡84Ü…µm'jæ+׸À7vÀmAQb¥š’ÔháGYu`É^]ódö´¿:‰•£‚ Ée‹MÛ±¢DecEe? º¾’- gY)@“.p° ÒDƒt.ˆ1üõVV—l³dè?‚ ,“PV°ÆóñˆbƒwæO42 Ä*9ŒÏ·:ãfoâQ’·¯Czµâ(ï謥Q@œ“–í?ûµñ¹d}eòqœîUÆ:K(Â_i´Øz|ÌËâöá¨s®‹ùv‘¯ÓÝqX¬8fÉõ àóeOâxµiž®Ô4NÇ{!^\³.Žc*µÙÈ}PeG®R§þ–÷ù@S’#i²êví8o7'Ü܉àHf<­rÙ1J,\ÛÙ-[ãŒiж‘‚sܤ8ÕŹiÜ 837Uè{¯úí®FÐr`ä¨rÛš,w¦“÷ÀçYù/ûtñ»¬ât…¿¯O˜º8i¸_ØS®.NÙºH6BžçˆÕXpZ Y;…ÓðÈØÆý2oòöV·÷Oµ³fštϩ̨ÙnãN`vžÇÍ5Û¿}Íš­sF‰½•"£ÙB²ÉÉT÷mb•v·ª Åc¬R–×%ZsTe4‹UJ Ùq*0K8‡cð>Îlj“&·¡œQÝó<0©ªRåf·5âÑ&lí‚ø¢­‘?—­Aö q“8‚Ð~¨3Aè\Öȼé®ã‰{©Ïîš[Ç5Ãgêi]`°fʦ³`‘ƒ “*ž’§UŽä,æ(`y}oOªähÍ-Z¬GbJB4j¢Õn#¤LTº&©;}tðjÍg½RÿtûQeâv¹:¶ŒLÓùÔÅÝ•fš/G æ°8Ñø',NèÈÄa6Ù`µmÑJzõG܈Ì`Òžç£5¶»õZWãÉ+Þ„àüHð,ófɧ¨·±>™ò~>2.I7@Šn~üQ"VÜ(ìô`Õ1˜¤Ðd瞨dÆý8ù·n>úΨŒ|fTˆo:*„Ê7ªLëq!xŸåi¥däÅÝôÐû¯X¢ò°¹eø„zR ¦¯LË AÄ´-OQê´ô5|èïÒ/ûš¤x—ÅKï“ÍÉR4r6½ÁQW¢MT.ÌïIb0ÇŽQʦ®2Z‰l ^ªÚíµEEJ¯óç™ç ̤ä|œ³:C¤à;o‚í{…¿Nñíá–£¾æ–χëÞ(xLJFÂa[¥˜dÄ]ÝV}¼]~Pê}÷Ã(°ø­ýgw·OãX@พF,Ø+PýÄ`<·°b…tkißãKÿwÖ¾åóþ!9~@£FæM;;jàB - ޝà'2ÊqŸƒúâ¯{Í\bçüÄ»èpq–½â½Îö*RΟ¿AÙÿ#þÚ΃.\º1 ?×ít Ž€´·Šž÷ ‡èsË·ƒVºw›û{i®›V®šnë©7€/µ†# Y@$[9®¼nh\®ŸÆ¥Á¨žøy˜ewtáP³1®du횤Á³k7x :3üÎÔ”gWÉÅfÊÎÈÓdð”ʲ 2npg Ít4{·Ò™ƒ§”QÑN\gë—¥ÆeZÊ æä'Í2„šc•zœ±púqóü´J Iìÿ¿XªOZ:D*[YÝW *&HIôlТ¹0ºŒ:­*¯w'O† `Ô@ö‚‘û’…ãõ±b4Q=´E¼6ˆºÙAÔ€KͼÕWb™.Õ]{€¹ß§Ô]Ðü9™(!Ì‚œÈ¢û*]ïG„—Ös2/|¨YBh56c‡Òª©ýˆDíüPe|,r-èZWÁóÙ´^Ü´šê§<У‚Šg3î «…ò…+ÌšýMê‚JgA8s‡e]ÛU.›bÂôîõ 
ÌÈK±8Çbéœó_H+Wr˜Y‘4n¿«™Œ&4Ðj ¥3L“޳cPß³Ly_íu LÅúô´Iš ©œ%º2š ˜M9éJ‡„äÂUF€”u¥ 4ÄТ©ÑMx8ËÈ~éÈõßÂz£§tn$u®Iã U‹‚óì6jî™Jç:y®L³ƒ>ÄArœÒ( …äÚ~©è®€¤&¤žÀcf0¬§nºÖƵ†Ôñp0'CQü,°ªÎ‰ç0c~t—J~·¼_ïvÉTEôåÌŠždjÐ9xWŒŽËˆ&éÑ$£î—@dÁ0$ÁlH¼z#dö¯}‰$de~”ôd“()Þá¹ñ9N¿9xkZ"¤+ŒÜÉ–&@Ϫöœ<ó~†!Ÿ6tF­“ù*E¯®ù7wU$ž+J½*sÜœ]Ó§Ýe‘rÝiìLEpéòá¹çT ã€MÐROmôÓo‹þ}œæ±mŸ;$ÅÍ'=—°5¹…O $åó5ï–øJwK)°H1;ç¬Õpª >ˆT\cpN£µ¾ŠóíÏ÷«ÿPyþûfóðôÃÿTYýíÓÝê¸õ›á¯7¶×«»ÃgötÇ`§bM.WÌ4½ý»&'— ^æûxØ›u<”*çrµþ¼º;Ú  ‡"«¿vÚù—Ä#ú7럲ުöÿ®æè÷ÕãÍÓ‡ÛO7¯ôèv/¼A(tlàÿçTõaF)°Æ‡š¥SâƒC üÏýäÓöv¹cÝ1÷ŠVôý/·÷ÛÕ9ðö½ùô{0~Úüòjß'd±㙲"¡g²9‘°ý½P‡7ûþáq³\m·{(ïµûH(ÐuÞ[WÙ»ŸäÀLãÛq•TánO,&Ï-:4Ü~\==®w ìôêº ˆIu›%{[©@¾_Ùz‘ß’Ï€"-Ö±"SQâàÑ“«^´{µÏ!Pç•¶óÎÇ??r51Üúm%ñb9®/–ÙÙ滋P “ÞÖè¤Ã¸!Á³i6’¹„r­5J ÇÝ¡ÄV÷>ÇEEõû«¢“›°™ZìŠÂm /×TòV=Ц^HˆbiMB¢”‚UeïXr•Ûr»¾{\Ç=nï†âP÷S°%à>ÒÆ`Ú«iFƒ—…(RÙ$­û€Œßïyór°íÍ/›ê»©=û¸yüÒ{qO*hGá<²Œ‘ˆw\è'HŽ9_#9=Hš5'ÅÜêÞz¹£H_QüúÏÍ<c: †ÑæeÂÇU™ðqÊH²çþ@ ï÷VK U¹8è1‘!˜Qâ€ê¶kà2M*RÛê¯'8zôZ¯íjTV?+%N·ï¶_ô£ï_ÿþ`vÞ¿mìY,?¬–¿-”9›µÊÒbùx·ˆ°\Û´šàNHã°|Ȩ(ãˆòÒb“»±¬=%Û0¢ym',l˜",ˆ¦"Ìs¬'e'0W_ãýFôa³}RiXnô“/­Ã—iLG b‚˃ˆ÷ɨJUaˆUgÄ™ò¥Ò9£‘»G[!ÒAÐP¥Â¤Ä®Q=màºKä€àÔ¯4Ðos.t&øL…Õþ]“Üû¤€ôLñ S@ÃGLJ9Ã2µ8ÔJªL>0Ãä —f€Ìn¢ #)Óž¿(Â)ˆ¼YIHOåÀì64ÔÝË!ÿîf†•K`\6 nÖÅ¢ÉýÙ/V£ÿÇã\ÓaÖÍ>é´þ´þx{?®Ú@ùýÝlФ°¦MÁ`PTìØ¹Xí¨Xgm9¡¡NuAˆ 1;£¡QÆS9»!ÅZ¤ôÔVí †«Zß(/P1zžXÁ ™eÎtÑéG‹Ãˆ©VâbCL–FÚdÅŲ޳âb“ò2 X‹o<5x4xmqšÌY/ƒ1ß̦‚„ vK3 ¨¿Š¿()/}@‹‚Omâòƒ« טAAôM2ˆos@gP%$‚L\P†!ä%B¿H $"9OxH•"qÓàµEÂÕ,ƒTW.°±ëÌé ‡ÇõçõýêWõ$ãÎÛÛÅýíÏ«ûÅö˧eCáaŠ.¹+ ã%+;ê'eç@½&W‡Ô©S\ê†í¢Áý¦Úº»ý#älsÜg÷ÓþOQ~‚‘Wùùývý¤op£ovãþ¿õQóKÍÍ0ºø‹~þôøe½‹3¶J¢E,ÖJó¸C"86ú›:Ž mvW¤¨ß°ˆµ?›çÝ ³]Ì‘¦u1¡f¤öênÿ_“[—]w–¯M„øD"„N¤ÙƒP,åÆñ)ô5H—!ý»¦:Íï’O„x0]|†‰á#&%BÀÛX½^šÙ½•ì ÎO‚oÜÆÑhàÅÛÉyP˜ñÄê6 Ì 2‚É Ø$¾ _­ m_ñüÅýÏF6¦¿ônÖFí=­œ6j{’˜¯—kÔVÅ(©Hü¯?>Ö#žÌ µ D+)IÄdµ÷róñáöqõþ}é_WOïÿñoÿz“LÒ¼f鿇uÿÐ>¹¿ºÝêw|¶Ýr³yT»¶“‡î7‰Oùïïn>nîr®Îš½ùñfû¼Œâôþ‡þY?=ýg†œÃ™žéŸÓ‡–³[©JÅtxØ ¶CòiMÏ‘“FfZ UËÒ4¼ôÍËÒš*wçy™|àUEm‡ñë'Ip2€ Û'ØÙÑV#àÜÛ¶ÿMn—‚¡Ÿæ6ô@€ë/:êÊŸ«È4,;6þ+‘¤Ü÷Éa±Ë5¹Ë9ÜEFdÇzh[Ÿ ØfNDVª`Úú]ɨìOÍ©]‚¡ŸÉŽãX4Ъõtª~©uoRµ8ËFQ1dXiI·f£Åî[Ëâë3+¡»âÐjÇŽMÅÍÙRÿR¸:·:Ç_ÛMÆ°Š¿Ãø•.!¹ ÙÇýìóv¯Vg!€žÖ‰í €l¾û]c’꥟ìeg¨œ*šµ§Ê@´¤ ©FyÌÀœU4†¨™,—\fØw•Ÿä¨ºŒXÜlâ‹fzZ|æe$­Sˆ©§<ï5ö¼±u4ÖyV 
././@LongLink0000644000000000000000000000023400000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015117130656032775 5ustar zuulzuul././@LongLink0000644000000000000000000000024100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000000015117130647032765 0ustar zuulzuul././@LongLink0000644000000000000000000000024100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000000015117130647032765 0ustar zuulzuul././@LongLink0000644000000000000000000000024100000000000011600 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000000015117130647032765 0ustar zuulzuul././@LongLink0000644000000000000000000000021600000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015117130656032775 5ustar zuulzuul././@LongLink0000644000000000000000000000022300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000011215117130647032771 0ustar zuulzuul2025-12-13T00:06:36.247942309+00:00 stdout P Fixing etcd log permissions. ././@LongLink0000644000000000000000000000022300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000011215117130647032771 0ustar zuulzuul2025-08-13T19:43:57.535975521+00:00 stdout P Fixing etcd log permissions. 
././@LongLink0000644000000000000000000000022300000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000000011215117130647032771 0ustar zuulzuul2025-12-13T00:10:44.366379410+00:00 stdout P Fixing etcd log permissions. ././@LongLink0000644000000000000000000000022500000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015117130654032773 5ustar zuulzuul././@LongLink0000644000000000000000000000023200000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/2.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000004305115117130647033002 0ustar zuulzuul2025-12-13T00:10:47.435477223+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.434978Z","caller":"etcdmain/grpc_proxy.go:218","msg":"gRPC proxy server TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-12-13T00:10:47.437017938+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.436965Z","caller":"etcdmain/grpc_proxy.go:417","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"} 
2025-12-13T00:10:47.437540581+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.437492Z","caller":"etcdmain/grpc_proxy.go:387","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-12-13T00:10:47.438357809+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.437626Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel created"} 2025-12-13T00:10:47.438592414+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.438556Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] original dial target is: \"etcd-endpoints://0xc000212000/192.168.126.11:9978\""} 2025-12-13T00:10:47.438645285+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.438611Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] parsed dial target is: {URL:{Scheme:etcd-endpoints Opaque: User: Host:0xc000212000 Path:/192.168.126.11:9978 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}"} 2025-12-13T00:10:47.438645285+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.438627Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel authority set to \"192.168.126.11:9978\""} 2025-12-13T00:10:47.439908625+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.439846Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Resolver state updated: {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Endpoints\": [\n {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n 
\"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Attributes\": null\n }\n ],\n \"ServiceConfig\": {\n \"Config\": {\n \"Config\": null,\n \"LB\": \"round_robin\",\n \"Methods\": {}\n },\n \"Err\": null\n },\n \"Attributes\": null\n} (service config updated; resolver returned new addresses)"} 2025-12-13T00:10:47.440041168+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.439922Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel switches to new LB policy \"round_robin\""} 2025-12-13T00:10:47.440372895+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.440311Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: got new ClientConn state: {{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] [{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] }] 0xc0001d9840 } }"} 2025-12-13T00:10:47.440383906+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.440369Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel created"} 2025-12-13T00:10:47.440625911+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.440582Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[]}"} 2025-12-13T00:10:47.440625911+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.440613Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to CONNECTING"} 2025-12-13T00:10:47.440994609+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.440718Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-12-13T00:10:47.441043510+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.440925Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-12-13T00:10:47.441100112+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:10:47.440999Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, CONNECTING"} 2025-12-13T00:10:47.442255028+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.442199Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3] Server created"} 2025-12-13T00:10:47.442427752+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.442372Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3 ListenSocket #4] ListenSocket created"} 2025-12-13T00:10:47.442872843+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.442837Z","caller":"etcdmain/grpc_proxy.go:571","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"} 2025-12-13T00:10:47.442907944+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.442858Z","caller":"etcdmain/grpc_proxy.go:267","msg":"started gRPC proxy","address":"127.0.0.1:9977"} 2025-12-13T00:10:47.442907944+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.442892Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2025-12-13T00:10:47.442907944+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.442899Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 2025-12-13T00:10:47.443431325+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.443352Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:47.443487307+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:47.443437Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:47.443541548+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.443509Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:47.443606319+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.44358Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, TRANSIENT_FAILURE"} 2025-12-13T00:10:47.443636490+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.443596Z","caller":"etcdmain/grpc_proxy.go:257","msg":"gRPC proxy server metrics URL serving"} 2025-12-13T00:10:47.443645100+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:47.443628Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to TRANSIENT_FAILURE"} 2025-12-13T00:10:48.444031116+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:48.443862Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:48.444211180+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:48.444173Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, IDLE"} 2025-12-13T00:10:48.444428225+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:48.444316Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-12-13T00:10:48.444529527+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:10:48.444495Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-12-13T00:10:48.448712134+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:48.448283Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, CONNECTING"} 2025-12-13T00:10:48.450133866+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:48.450067Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:48.450133866+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:48.450115Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:48.450177127+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:48.450149Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:48.450198757+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:48.450182Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, TRANSIENT_FAILURE"} 2025-12-13T00:10:49.954259721+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:49.954148Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:49.954487446+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:49.954423Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, IDLE"} 2025-12-13T00:10:49.954629290+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:49.954566Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-12-13T00:10:49.954702961+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:49.954674Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-12-13T00:10:49.954809704+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:49.95473Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, CONNECTING"} 2025-12-13T00:10:49.955230463+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:10:49.955184Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:49.955327835+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:49.955295Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:49.955430298+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:49.955399Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:49.955500449+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:49.955473Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, TRANSIENT_FAILURE"} 2025-12-13T00:10:52.428106707+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:52.427925Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:52.428106707+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:52.428073Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, IDLE"} 2025-12-13T00:10:52.428213300+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:10:52.428135Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-12-13T00:10:52.428230861+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:52.428209Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-12-13T00:10:52.428473017+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:52.428356Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, CONNECTING"} 2025-12-13T00:10:52.428802556+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:52.428726Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:52.428910799+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:10:52.428864Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:52.429066633+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:52.429028Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:52.429177337+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:52.429146Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, TRANSIENT_FAILURE"} 2025-12-13T00:10:56.879992685+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:56.879009Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:10:56.879992685+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:56.879074Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, IDLE"} 2025-12-13T00:10:56.879992685+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:56.87911Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-12-13T00:10:56.879992685+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:56.879137Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-12-13T00:10:56.879992685+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:56.879374Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, CONNECTING"} 2025-12-13T00:10:56.890035678+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:10:56.889974Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY"} 2025-12-13T00:10:56.890163981+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:56.890142Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000374bd0, READY"} 2025-12-13T00:10:56.890235253+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:56.890216Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[SubConn(id:2):{{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }}]}"} 2025-12-13T00:10:56.890288365+00:00 stderr F {"level":"info","ts":"2025-12-13T00:10:56.89027Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to READY"} ././@LongLink0000644000000000000000000000023200000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000644000175000017500000004305115117130647033002 0ustar zuulzuul2025-12-13T00:06:39.835302537+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.834888Z","caller":"etcdmain/grpc_proxy.go:218","msg":"gRPC proxy server TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-12-13T00:06:39.837736906+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.837702Z","caller":"etcdmain/grpc_proxy.go:417","msg":"listening for gRPC proxy client 
requests","address":"127.0.0.1:9977"} 2025-12-13T00:06:39.838508637+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.838475Z","caller":"etcdmain/grpc_proxy.go:387","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-12-13T00:06:39.839231127+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.838678Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel created"} 2025-12-13T00:06:39.839484155+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.839459Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] original dial target is: \"etcd-endpoints://0xc00015e1e0/192.168.126.11:9978\""} 2025-12-13T00:06:39.839560387+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.839538Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] parsed dial target is: {URL:{Scheme:etcd-endpoints Opaque: User: Host:0xc00015e1e0 Path:/192.168.126.11:9978 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}"} 2025-12-13T00:06:39.839600858+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.839583Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel authority set to \"192.168.126.11:9978\""} 2025-12-13T00:06:39.841021018+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.840982Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Resolver state updated: {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Endpoints\": [\n {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": 
\"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Attributes\": null\n }\n ],\n \"ServiceConfig\": {\n \"Config\": {\n \"Config\": null,\n \"LB\": \"round_robin\",\n \"Methods\": {}\n },\n \"Err\": null\n },\n \"Attributes\": null\n} (service config updated; resolver returned new addresses)"} 2025-12-13T00:06:39.841175582+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.841099Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel switches to new LB policy \"round_robin\""} 2025-12-13T00:06:39.841549603+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.841519Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: got new ClientConn state: {{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] [{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] }] 0xc00007f080 } }"} 2025-12-13T00:06:39.841619115+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.841593Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel created"} 2025-12-13T00:06:39.841912953+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.841888Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[]}"} 2025-12-13T00:06:39.841980595+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.84196Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to CONNECTING"} 2025-12-13T00:06:39.843355584+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.842115Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-12-13T00:06:39.843495728+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.843432Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 
2025-12-13T00:06:39.843607251+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.843486Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, CONNECTING"} 2025-12-13T00:06:39.844811724+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.844729Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:39.844811724+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:39.844788Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:39.844853436+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.844821Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:39.844867846+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.844853Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, TRANSIENT_FAILURE"} 2025-12-13T00:06:39.844883806+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.844871Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to TRANSIENT_FAILURE"} 2025-12-13T00:06:39.846146043+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.846031Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3] Server created"} 
2025-12-13T00:06:39.846291447+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.846244Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3 ListenSocket #4] ListenSocket created"} 2025-12-13T00:06:39.846764640+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.846703Z","caller":"etcdmain/grpc_proxy.go:571","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"} 2025-12-13T00:06:39.846764640+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.846737Z","caller":"etcdmain/grpc_proxy.go:267","msg":"started gRPC proxy","address":"127.0.0.1:9977"} 2025-12-13T00:06:39.846764640+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.846749Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2025-12-13T00:06:39.846792060+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.846758Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 2025-12-13T00:06:39.847482290+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:39.847436Z","caller":"etcdmain/grpc_proxy.go:257","msg":"gRPC proxy server metrics URL serving"} 2025-12-13T00:06:40.845287779+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:40.845185Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:40.845453455+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:40.845429Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, IDLE"} 2025-12-13T00:06:40.845562498+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:40.845517Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-12-13T00:06:40.845614419+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:06:40.845596Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-12-13T00:06:40.845747783+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:40.845672Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, CONNECTING"} 2025-12-13T00:06:40.845977879+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:40.84595Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:40.846055312+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:40.846034Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:40.846106243+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:40.846089Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:40.846145034+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:40.84613Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, TRANSIENT_FAILURE"} 2025-12-13T00:06:42.251741496+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:42.251627Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:42.251741496+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:42.251705Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, IDLE"} 2025-12-13T00:06:42.251825028+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:42.251779Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-12-13T00:06:42.251825028+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:42.251815Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-12-13T00:06:42.252051895+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:42.251998Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, CONNECTING"} 2025-12-13T00:06:42.252112416+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:06:42.252081Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:42.252121837+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:42.252105Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:42.252154947+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:42.252123Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:42.252154947+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:42.252139Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, TRANSIENT_FAILURE"} 2025-12-13T00:06:45.251881014+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:45.251314Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:45.251881014+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:45.251849Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, IDLE"} 2025-12-13T00:06:45.251938906+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:06:45.251886Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-12-13T00:06:45.251948416+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:45.251931Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-12-13T00:06:45.252142152+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:45.252066Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, CONNECTING"} 2025-12-13T00:06:45.252235774+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:45.252191Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:45.252235774+00:00 stderr F {"level":"warn","ts":"2025-12-13T00:06:45.252219Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:45.252247905+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:45.252236Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:45.252279816+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:45.252253Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, TRANSIENT_FAILURE"} 2025-12-13T00:06:50.168088054+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:50.16797Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-12-13T00:06:50.168088054+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:50.168044Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, IDLE"} 2025-12-13T00:06:50.168147616+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:50.168091Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-12-13T00:06:50.168147616+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:50.168128Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-12-13T00:06:50.169520585+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:50.168412Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, CONNECTING"} 2025-12-13T00:06:50.184463975+00:00 stderr F 
{"level":"info","ts":"2025-12-13T00:06:50.183529Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY"} 2025-12-13T00:06:50.184463975+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:50.183594Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc0004178f0, READY"} 2025-12-13T00:06:50.184463975+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:50.183647Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[SubConn(id:2):{{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }}]}"} 2025-12-13T00:06:50.184463975+00:00 stderr F {"level":"info","ts":"2025-12-13T00:06:50.183666Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to READY"}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics/0.log
2025-08-13T19:44:07.409131412+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.407635Z","caller":"etcdmain/grpc_proxy.go:218","msg":"gRPC proxy server TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-serving-metrics-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-08-13T19:44:07.414073583+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.413544Z","caller":"etcdmain/grpc_proxy.go:417","msg":"listening for gRPC proxy client 
requests","address":"127.0.0.1:9977"} 2025-08-13T19:44:07.415381827+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.415228Z","caller":"etcdmain/grpc_proxy.go:387","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-certs/etcd-peer-crc.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "} 2025-08-13T19:44:07.417425202+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.415867Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel created"} 2025-08-13T19:44:07.417943455+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.417861Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] original dial target is: \"etcd-endpoints://0xc0000f4000/192.168.126.11:9978\""} 2025-08-13T19:44:07.417943455+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.417922Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] parsed dial target is: {URL:{Scheme:etcd-endpoints Opaque: User: Host:0xc0000f4000 Path:/192.168.126.11:9978 RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}"} 2025-08-13T19:44:07.417967766+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.417938Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel authority set to \"192.168.126.11:9978\""} 2025-08-13T19:44:07.419975659+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.419896Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Resolver state updated: {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": \"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Endpoints\": [\n {\n \"Addresses\": [\n {\n \"Addr\": \"192.168.126.11:9978\",\n \"ServerName\": 
\"192.168.126.11:9978\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Metadata\": null\n }\n ],\n \"Attributes\": null\n }\n ],\n \"ServiceConfig\": {\n \"Config\": {\n \"Config\": null,\n \"LB\": \"round_robin\",\n \"Methods\": {}\n },\n \"Err\": null\n },\n \"Attributes\": null\n} (service config updated; resolver returned new addresses)"} 2025-08-13T19:44:07.420416501+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.420235Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel switches to new LB policy \"round_robin\""} 2025-08-13T19:44:07.421162071+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421084Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: got new ClientConn state: {{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] [{[{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }] }] 0xc000409120 } }"} 2025-08-13T19:44:07.421162071+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421136Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel created"} 2025-08-13T19:44:07.421768917+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421518Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[]}"} 2025-08-13T19:44:07.421768917+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421754Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to CONNECTING"} 2025-08-13T19:44:07.421856239+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421629Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:07.421936281+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.421864Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 
2025-08-13T19:44:07.422733853+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.422625Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:07.423965905+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.423923Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:07.423989806+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:07.423958Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:07.424074838+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.424006Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:07.424092389+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.424065Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:07.424092389+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.424083Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to TRANSIENT_FAILURE"} 2025-08-13T19:44:07.427257203+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.426529Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3] Server created"} 
2025-08-13T19:44:07.427644073+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.427503Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Server #3 ListenSocket #4] ListenSocket created"} 2025-08-13T19:44:07.428726452+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.428478Z","caller":"etcdmain/grpc_proxy.go:571","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"} 2025-08-13T19:44:07.428914807+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.428851Z","caller":"etcdmain/grpc_proxy.go:267","msg":"started gRPC proxy","address":"127.0.0.1:9977"} 2025-08-13T19:44:07.429300827+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.429224Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 2025-08-13T19:44:07.429300827+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.429256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} 2025-08-13T19:44:07.429875752+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:07.429724Z","caller":"etcdmain/grpc_proxy.go:257","msg":"gRPC proxy server metrics URL serving"} 2025-08-13T19:44:08.425312715+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.424962Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:08.425312715+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.425265Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:08.425740887+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.425532Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:08.425740887+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:44:08.425715Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:08.426240300+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.426164Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:08.426464226+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.426387Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:08.426630710+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:08.426516Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:08.426630710+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.426558Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:08.426630710+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:08.426594Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:10.295301275+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.295099Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:10.295494500+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.295467Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:10.295725296+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.295638Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:10.295870770+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.295831Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:10.296169798+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.296034Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:10.297898874+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:44:10.297841Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:10.298034777+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:10.298004Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:10.298270424+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.298237Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:10.298428078+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:10.298396Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:12.743230101+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743099Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:12.743230101+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743181Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:12.743297363+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:44:12.743221Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:12.743297363+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743248Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:12.743509149+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.743441Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:12.743921330+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.74386Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:12.743997642+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:12.74397Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. 
Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:12.744110835+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.74408Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:12.744228798+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:12.744199Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.632882Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.632959Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.632994Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633019Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633195Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:16.634001173+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:44:16.633257Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] Creating new client transport to \"{Addr: \\\"192.168.126.11:9978\\\", ServerName: \\\"192.168.126.11:9978\\\", }\": connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"warn","ts":"2025-08-13T19:44:16.633273Z","caller":"zapgrpc/zapgrpc.go:191","msg":"[core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }. Err: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633295Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:16.634001173+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:16.633314Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, TRANSIENT_FAILURE"} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.054181Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \"transport: Error while dialing: dial tcp 192.168.126.11:9978: connect: connection refused\""} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.054257Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, IDLE"} 2025-08-13T19:44:23.057118130+00:00 stderr F 
{"level":"info","ts":"2025-08-13T19:44:23.054296Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.054322Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"192.168.126.11:9978\" to connect"} 2025-08-13T19:44:23.057118130+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.055101Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, CONNECTING"} 2025-08-13T19:44:23.079851723+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.079158Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY"} 2025-08-13T19:44:23.079851723+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.079423Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[balancer] base.baseBalancer: handle SubConn state change: 0xc000415da0, READY"} 2025-08-13T19:44:23.079851723+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.079478Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[roundrobin] roundrobinPicker: Build called with info: {map[SubConn(id:2):{{Addr: \"192.168.126.11:9978\", ServerName: \"192.168.126.11:9978\", }}]}"} 2025-08-13T19:44:23.079851723+00:00 stderr F {"level":"info","ts":"2025-08-13T19:44:23.079495Z","caller":"zapgrpc/zapgrpc.go:174","msg":"[core] [Channel #1] Channel Connectivity change to READY"} ././@LongLink0000644000000000000000000000023500000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-cr0000755000175000017500000000000015117130654032773 5ustar 
etcd-ensure-env-vars/2.log: (empty)
etcd-ensure-env-vars/1.log: (empty)
etcd-ensure-env-vars/0.log: (empty)
=== pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/2.log ===
2025-12-13T00:10:48.469069270+00:00 stderr F I1213 00:10:48.468819 1 readyz.go:155] Listening on 0.0.0.0:9980
2025-12-13T00:11:05.037234461+00:00 stderr F I1213 00:11:05.037176 1 etcdcli_pool.go:70] creating a new cached client
=== pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/1.log ===
2025-12-13T00:06:40.218112517+00:00 stderr F I1213 00:06:40.217813 1 readyz.go:155] Listening on 0.0.0.0:9980
2025-12-13T00:06:57.805309602+00:00 stderr F I1213 00:06:57.805202 1 etcdcli_pool.go:70] creating a new cached client
2025-12-13T00:09:25.278144912+00:00 stderr F I1213 00:09:25.278042 1 etcdcli_pool.go:70] creating a new cached client
2025-12-13T00:09:51.143498640+00:00 stderr F 2025/12/13 00:09:51 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "localhost:2379", ServerName: "localhost:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp [::1]:2379: connect: connection refused"
2025-12-13T00:09:51.143498640+00:00 stderr F 2025/12/13 00:09:51 WARNING: [core] [Channel #187 SubChannel #188] grpc: addrConn.createTransport failed to connect to {Addr: "localhost:2379", ServerName: "localhost:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp [::1]:2379: connect: connection refused"
2025-12-13T00:09:51.161742587+00:00 stderr F I1213 00:09:51.161630 1 readyz.go:179] Received SIGTERM or SIGINT signal, shutting down readyz server.
=== pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz/0.log ===
2025-08-13T19:44:08.536660030+00:00 stderr F I0813 19:44:08.536156 1 readyz.go:155] Listening on 0.0.0.0:9980
2025-08-13T19:44:23.569024333+00:00 stderr F I0813 19:44:23.567456 1 etcdcli_pool.go:70] creating a new cached client
2025-08-13T20:01:39.363673157+00:00 stderr F I0813 20:01:39.363557 1 etcdcli_pool.go:70] creating a new cached client
2025-08-13T20:42:47.039150261+00:00 stderr F I0813 20:42:47.039006 1 readyz.go:179] Received SIGTERM or SIGINT signal, shutting down readyz server.
=== pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl/ ===
etcdctl/0.log: (empty)
etcdctl/2.log: (empty)
etcdctl/1.log: (empty)
=== pods/openshift-image-registry_image-registry-75b7bb6564-rnjvj_d73c4e63-30ef-4915-925d-f44201c612ec/registry/0.log ===
zuulzuul2025-12-13T00:18:33.748402036+00:00 stderr F time="2025-12-13T00:18:33.74818108Z" level=info msg="start registry" distribution_version=v3.0.0+unknown go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" openshift_version=4.16.0-202406131906.p0.g58a613b.assembly.stream.el9-58a613b 2025-12-13T00:18:33.749624240+00:00 stderr F time="2025-12-13T00:18:33.749545858Z" level=info msg="caching project quota objects with TTL 1m0s" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-12-13T00:18:33.750705339+00:00 stderr F time="2025-12-13T00:18:33.750661698Z" level=info msg="redis not configured" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-12-13T00:18:33.750816722+00:00 stderr F time="2025-12-13T00:18:33.750788201Z" level=info msg="using openshift blob descriptor cache" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-12-13T00:18:33.750844663+00:00 stderr F time="2025-12-13T00:18:33.750817882Z" level=warning msg="Registry does not implement RepositoryRemover. 
Will not be able to delete repos and tags" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-12-13T00:18:33.750927545+00:00 stderr F time="2025-12-13T00:18:33.750829352Z" level=info msg="Starting upload purge in 14m0s" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-12-13T00:18:33.751718188+00:00 stderr F time="2025-12-13T00:18:33.751675937Z" level=info msg="Using \"image-registry.openshift-image-registry.svc:5000\" as Docker Registry URL" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-12-13T00:18:33.751915633+00:00 stderr F time="2025-12-13T00:18:33.751866122Z" level=info msg="listening on :5000, tls" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" 2025-12-13T00:18:42.718997408+00:00 stderr F time="2025-12-13T00:18:42.718502904Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=29057f01-91a2-41d5-89f4-47e39c6bc8d5 http.request.method=GET http.request.remoteaddr="10.217.0.2:47632" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="43.211µs" http.response.status=200 http.response.written=0 2025-12-13T00:18:52.716839361+00:00 stderr F time="2025-12-13T00:18:52.716356638Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=3a08cfa0-a1ef-4a59-81a1-4f97da6f6e1a http.request.method=GET http.request.remoteaddr="10.217.0.2:44362" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="60.733µs" http.response.status=200 http.response.written=0 2025-12-13T00:18:52.717758306+00:00 stderr F time="2025-12-13T00:18:52.717647473Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=daf4a91e-d5e1-4856-a0e1-3f14cc1a92bd http.request.method=GET 
http.request.remoteaddr="10.217.0.2:44364" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="42.271µs" http.response.status=200 http.response.written=0 2025-12-13T00:19:02.717883467+00:00 stderr F time="2025-12-13T00:19:02.717355863Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=4282604b-c554-46a4-9fa0-d708a1f8c9d1 http.request.method=GET http.request.remoteaddr="10.217.0.2:42590" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="45.141µs" http.response.status=200 http.response.written=0 2025-12-13T00:19:02.718258358+00:00 stderr F time="2025-12-13T00:19:02.718093163Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=4d7b34f6-7a2d-45b9-b690-73c4c367d8fb http.request.method=GET http.request.remoteaddr="10.217.0.2:42578" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="173.535µs" http.response.status=200 http.response.written=0 2025-12-13T00:19:12.717716589+00:00 stderr F time="2025-12-13T00:19:12.716635459Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=c8054807-d141-472f-ad30-9d21c61e374e http.request.method=GET http.request.remoteaddr="10.217.0.2:57406" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="50.981µs" http.response.status=200 http.response.written=0 2025-12-13T00:19:12.717716589+00:00 stderr F time="2025-12-13T00:19:12.717070491Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=2f52fa63-6ff2-4f44-b245-f6e82a89e285 http.request.method=GET http.request.remoteaddr="10.217.0.2:57404" http.request.uri=/healthz 
http.request.useragent=kube-probe/1.29 http.response.duration="55.322µs" http.response.status=200 http.response.written=0 2025-12-13T00:19:22.716826503+00:00 stderr F time="2025-12-13T00:19:22.716336028Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=e41baf8e-e50c-472e-b64d-f5ccf745da35 http.request.method=GET http.request.remoteaddr="10.217.0.2:39826" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="35.251µs" http.response.status=200 http.response.written=0 2025-12-13T00:19:22.717666726+00:00 stderr F time="2025-12-13T00:19:22.717580973Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=5a7b5099-7686-429d-ba12-96109c0fdfc4 http.request.method=GET http.request.remoteaddr="10.217.0.2:39830" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="61.972µs" http.response.status=200 http.response.written=0 2025-12-13T00:19:32.717767692+00:00 stderr F time="2025-12-13T00:19:32.717176995Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=b3d65565-f315-47bb-8117-4ba89e35e658 http.request.method=GET http.request.remoteaddr="10.217.0.2:47136" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="60.451µs" http.response.status=200 http.response.written=0 2025-12-13T00:19:32.717958497+00:00 stderr F time="2025-12-13T00:19:32.717867214Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=e3ee3b14-7fd7-425a-9b3b-29922f42f9fb http.request.method=GET http.request.remoteaddr="10.217.0.2:47122" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="54.052µs" 
http.response.status=200 http.response.written=0 2025-12-13T00:19:42.718416915+00:00 stderr F time="2025-12-13T00:19:42.71789435Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=fc508a72-c8f3-4b02-873c-9fc2a26d6dd7 http.request.method=GET http.request.remoteaddr="10.217.0.2:42980" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="38.441µs" http.response.status=200 http.response.written=0 2025-12-13T00:19:42.718597760+00:00 stderr F time="2025-12-13T00:19:42.718469436Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=821e3cae-95be-4392-a8c8-de30b47ae43a http.request.method=GET http.request.remoteaddr="10.217.0.2:42974" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="46.131µs" http.response.status=200 http.response.written=0 2025-12-13T00:19:44.912510736+00:00 stderr F time="2025-12-13T00:19:44.912419543Z" level=warning msg="error authorizing context: authorization header required" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host=default-route-openshift-image-registry.apps-crc.testing http.request.id=d7be716a-c46c-40f8-8892-413492aca390 http.request.method=GET http.request.remoteaddr=38.102.83.182 http.request.uri=/v2/ http.request.useragent="containers/5.36.1 (github.com/containers/image)" 2025-12-13T00:19:44.912588768+00:00 stderr F time="2025-12-13T00:19:44.912545127Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host=default-route-openshift-image-registry.apps-crc.testing http.request.id=45206205-0053-4f26-b70d-2d22c9484a0a http.request.method=GET http.request.remoteaddr=38.102.83.182 http.request.uri=/v2/ http.request.useragent="containers/5.36.1 (github.com/containers/image)" 
http.response.contenttype=application/json http.response.duration=2.572771ms http.response.status=401 http.response.written=87 2025-12-13T00:19:44.939574791+00:00 stderr F time="2025-12-13T00:19:44.939490119Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host=default-route-openshift-image-registry.apps-crc.testing http.request.id=066adcec-b1f5-486f-8d60-f8d17a0f54d9 http.request.method=GET http.request.remoteaddr=38.102.83.182 http.request.uri="/openshift/token?account=kubeadmin" http.request.useragent="containers/5.36.1 (github.com/containers/image)" http.response.contenttype=application/json http.response.duration=15.706793ms http.response.status=200 http.response.written=131 2025-12-13T00:19:44.946494263+00:00 stderr F time="2025-12-13T00:19:44.946452002Z" level=info msg="authorized request" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host=default-route-openshift-image-registry.apps-crc.testing http.request.id=4570c32b-9e7c-463d-8a1e-72e521fc6bdb http.request.method=GET http.request.remoteaddr=38.102.83.182 http.request.uri=/v2/ http.request.useragent="containers/5.36.1 (github.com/containers/image)" openshift.auth.user=kubeadmin openshift.auth.userid=1cd2e525-8d00-41b4-a486-09fa68a2ad45 2025-12-13T00:19:44.946582785+00:00 stderr F time="2025-12-13T00:19:44.946560214Z" level=info msg="response completed" go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host=default-route-openshift-image-registry.apps-crc.testing http.request.id=4570c32b-9e7c-463d-8a1e-72e521fc6bdb http.request.method=GET http.request.remoteaddr=38.102.83.182 http.request.uri=/v2/ http.request.useragent="containers/5.36.1 (github.com/containers/image)" http.response.contenttype=application/json http.response.duration=5.046399ms http.response.status=200 http.response.written=2 openshift.auth.user=kubeadmin openshift.auth.userid=1cd2e525-8d00-41b4-a486-09fa68a2ad45 
2025-12-13T00:19:44.946638367+00:00 stderr F time="2025-12-13T00:19:44.946612836Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host=default-route-openshift-image-registry.apps-crc.testing http.request.id=70aae38d-ab19-4f2f-a5b4-136b00b3b8cb http.request.method=GET http.request.remoteaddr=38.102.83.182 http.request.uri=/v2/ http.request.useragent="containers/5.36.1 (github.com/containers/image)" http.response.contenttype=application/json http.response.duration=5.148062ms http.response.status=200 http.response.written=2 2025-12-13T00:19:52.717280628+00:00 stderr F time="2025-12-13T00:19:52.716640271Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=8f33e2b8-a4f4-4781-a217-4eeaeece23e4 http.request.method=GET http.request.remoteaddr="10.217.0.2:39320" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="56.632µs" http.response.status=200 http.response.written=0 2025-12-13T00:19:52.717528525+00:00 stderr F time="2025-12-13T00:19:52.717180386Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=239c8716-0333-43e0-b953-154111d4af45 http.request.method=GET http.request.remoteaddr="10.217.0.2:39324" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="99.902µs" http.response.status=200 http.response.written=0 2025-12-13T00:20:02.718555752+00:00 stderr F time="2025-12-13T00:20:02.717982826Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=06fb2573-24f2-4489-9784-232e31cad30f http.request.method=GET http.request.remoteaddr="10.217.0.2:44486" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="71.742µs" http.response.status=200 
http.response.written=0 2025-12-13T00:20:02.718695066+00:00 stderr F time="2025-12-13T00:20:02.718587383Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=51d68f5f-b367-42b2-9d17-8285f2207b9d http.request.method=GET http.request.remoteaddr="10.217.0.2:44482" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="48.682µs" http.response.status=200 http.response.written=0 2025-12-13T00:20:12.718640022+00:00 stderr F time="2025-12-13T00:20:12.717756357Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=e7a96f70-b7cf-4fec-a3e6-127427c49182 http.request.method=GET http.request.remoteaddr="10.217.0.2:58298" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="74.152µs" http.response.status=200 http.response.written=0 2025-12-13T00:20:12.720350498+00:00 stderr F time="2025-12-13T00:20:12.720232535Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=b6ae9880-ec0e-449d-9ffc-75069d3cea92 http.request.method=GET http.request.remoteaddr="10.217.0.2:58294" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="72.592µs" http.response.status=200 http.response.written=0 2025-12-13T00:20:22.718364610+00:00 stderr F time="2025-12-13T00:20:22.717810754Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=2abde5e1-d652-4351-9ec5-ae694b8dc5d7 http.request.method=GET http.request.remoteaddr="10.217.0.2:42462" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="41.481µs" http.response.status=200 http.response.written=0 2025-12-13T00:20:22.718532234+00:00 stderr F 
time="2025-12-13T00:20:22.718442022Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=0f25814a-6f83-42b7-b332-97822c7bf491 http.request.method=GET http.request.remoteaddr="10.217.0.2:42476" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="62.871µs" http.response.status=200 http.response.written=0 2025-12-13T00:20:32.716833543+00:00 stderr F time="2025-12-13T00:20:32.71636114Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=2557ae43-a299-4717-a936-5547f209baf4 http.request.method=GET http.request.remoteaddr="10.217.0.2:48850" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="62.222µs" http.response.status=200 http.response.written=0 2025-12-13T00:20:32.719117676+00:00 stderr F time="2025-12-13T00:20:32.719045034Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=e4383d56-308c-4e27-a167-566be8b6730b http.request.method=GET http.request.remoteaddr="10.217.0.2:48864" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="47.572µs" http.response.status=200 http.response.written=0 2025-12-13T00:20:42.718439470+00:00 stderr F time="2025-12-13T00:20:42.717900424Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=0429b55d-8386-491f-8a62-1f5e3cad4322 http.request.method=GET http.request.remoteaddr="10.217.0.2:40446" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="60.092µs" http.response.status=200 http.response.written=0 2025-12-13T00:20:42.718741598+00:00 stderr F time="2025-12-13T00:20:42.718699847Z" level=info msg=response go.version="go1.21.9 
(Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=6e1b29cd-d3a7-4823-a665-8fc6b5b68b06 http.request.method=GET http.request.remoteaddr="10.217.0.2:40452" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="20.481µs" http.response.status=200 http.response.written=0 2025-12-13T00:20:52.718384396+00:00 stderr F time="2025-12-13T00:20:52.717893263Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=fe914abe-f7ab-4c7f-927f-e17bf60f6000 http.request.method=GET http.request.remoteaddr="10.217.0.2:49906" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="50.601µs" http.response.status=200 http.response.written=0 2025-12-13T00:20:52.718468339+00:00 stderr F time="2025-12-13T00:20:52.718203532Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=48db8e4e-ac33-47fb-8335-71d090799bbb http.request.method=GET http.request.remoteaddr="10.217.0.2:49894" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="39.182µs" http.response.status=200 http.response.written=0 2025-12-13T00:21:02.717177480+00:00 stderr F time="2025-12-13T00:21:02.716727468Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=01cc4a08-5c99-4d5e-a0d1-b7e96b67664a http.request.method=GET http.request.remoteaddr="10.217.0.2:59690" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="40.481µs" http.response.status=200 http.response.written=0 2025-12-13T00:21:02.718112536+00:00 stderr F time="2025-12-13T00:21:02.718054564Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" 
http.request.id=5f0c6472-e381-4a70-a649-560564e699bd http.request.method=GET http.request.remoteaddr="10.217.0.2:59676" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="29.77µs" http.response.status=200 http.response.written=0 2025-12-13T00:21:12.717972109+00:00 stderr F time="2025-12-13T00:21:12.717541758Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=0cf65586-23ae-4e7f-a3c8-b1379dc3dd5c http.request.method=GET http.request.remoteaddr="10.217.0.2:35778" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="51.681µs" http.response.status=200 http.response.written=0 2025-12-13T00:21:12.718473523+00:00 stderr F time="2025-12-13T00:21:12.718416672Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=62dba9ec-49d1-4914-a988-ec1478475e0f http.request.method=GET http.request.remoteaddr="10.217.0.2:35762" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="38.502µs" http.response.status=200 http.response.written=0 2025-12-13T00:21:22.718550693+00:00 stderr F time="2025-12-13T00:21:22.716549839Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=e0ce9e12-885b-44b7-b7a8-2471535dd53e http.request.method=GET http.request.remoteaddr="10.217.0.2:43230" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="37.572µs" http.response.status=200 http.response.written=0 2025-12-13T00:21:22.718550693+00:00 stderr F time="2025-12-13T00:21:22.718304466Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=c59ebf0d-71b1-4d56-8b74-f93cb4d01fef http.request.method=GET 
http.request.remoteaddr="10.217.0.2:43228" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="31.421µs" http.response.status=200 http.response.written=0 2025-12-13T00:21:32.718987112+00:00 stderr F time="2025-12-13T00:21:32.718521289Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=b4359028-a1b6-4919-a8e1-7697a6f8d380 http.request.method=GET http.request.remoteaddr="10.217.0.2:39336" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="40.951µs" http.response.status=200 http.response.written=0 2025-12-13T00:21:32.720071230+00:00 stderr F time="2025-12-13T00:21:32.720038069Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=d8287dca-f642-42ef-afd5-9c562426bb0f http.request.method=GET http.request.remoteaddr="10.217.0.2:39328" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="18.19µs" http.response.status=200 http.response.written=0 2025-12-13T00:21:42.720012678+00:00 stderr F time="2025-12-13T00:21:42.719315381Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=ea3c0add-b432-41f4-8fe5-15c46151304d http.request.method=GET http.request.remoteaddr="10.217.0.2:49784" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="177.185µs" http.response.status=200 http.response.written=0 2025-12-13T00:21:42.720412187+00:00 stderr F time="2025-12-13T00:21:42.719785922Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=1057af63-301e-457d-8c03-9c8de157624e http.request.method=GET http.request.remoteaddr="10.217.0.2:49800" http.request.uri=/healthz 
http.request.useragent=kube-probe/1.29 http.response.duration="60.091µs" http.response.status=200 http.response.written=0 2025-12-13T00:21:52.718597212+00:00 stderr F time="2025-12-13T00:21:52.71812591Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=7a0963b2-2bf1-408b-a75f-677bd3618acf http.request.method=GET http.request.remoteaddr="10.217.0.2:46034" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="44.171µs" http.response.status=200 http.response.written=0 2025-12-13T00:21:52.718869379+00:00 stderr F time="2025-12-13T00:21:52.718789947Z" level=info msg=response go.version="go1.21.9 (Red Hat 1.21.9-1.el9_4) X:strictfipsruntime" http.request.host="10.217.0.41:5000" http.request.id=4d499968-f736-442e-9585-e47d3003ef3e http.request.method=GET http.request.remoteaddr="10.217.0.2:46036" http.request.uri=/healthz http.request.useragent=kube-probe/1.29 http.response.duration="55.282µs" http.response.status=200 http.response.written=0
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/3.log
2025-12-13T00:13:17.666852207+00:00 stderr F I1213 00:13:17.666276 1 cmd.go:241] Using service-serving-cert provided certificates 2025-12-13T00:13:17.674765743+00:00 stderr F I1213 00:13:17.674699 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:13:17.678557611+00:00 stderr F I1213 00:13:17.678513 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:17.744148505+00:00 stderr F I1213 00:13:17.741574 1 builder.go:299] config-operator version 4.16.0-202406131906.p0.g441d29c.assembly.stream.el9-441d29c-441d29c92b1759d1780a525112e764280b78b0d6 2025-12-13T00:13:18.179008257+00:00 stderr F I1213 00:13:18.176190 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:18.179008257+00:00 stderr F W1213 00:13:18.176210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-12-13T00:13:18.179008257+00:00 stderr F W1213 00:13:18.176215 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-12-13T00:13:18.205603091+00:00 stderr F I1213 00:13:18.200199 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-12-13T00:13:18.215046368+00:00 stderr F I1213 00:13:18.211462 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-12-13T00:13:18.215046368+00:00 stderr F I1213 00:13:18.212018 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:18.215046368+00:00 stderr F I1213 00:13:18.212060 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:18.215046368+00:00 stderr F I1213 00:13:18.212097 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:18.215046368+00:00 stderr F I1213 00:13:18.212109 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:18.215046368+00:00 stderr F I1213 00:13:18.212120 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:18.215046368+00:00 stderr F I1213 00:13:18.212125 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:18.235463954+00:00 stderr F I1213 00:13:18.234042 1 leaderelection.go:250] attempting to acquire leader lease openshift-config-operator/config-operator-lock... 
2025-12-13T00:13:18.271978201+00:00 stderr F I1213 00:13:18.268532 1 secure_serving.go:213] Serving securely on [::]:8443 2025-12-13T00:13:18.271978201+00:00 stderr F I1213 00:13:18.268591 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:18.312581385+00:00 stderr F I1213 00:13:18.312489 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:18.312581385+00:00 stderr F I1213 00:13:18.312554 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:13:18.312633757+00:00 stderr F I1213 00:13:18.312627 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:18:20.184637351+00:00 stderr F I1213 00:18:20.184053 1 leaderelection.go:260] successfully acquired lease openshift-config-operator/config-operator-lock 2025-12-13T00:18:20.184822506+00:00 stderr F I1213 00:18:20.184763 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-config-operator", Name:"config-operator-lock", UID:"0f77897f-a069-4784-b2c6-559ada951a0f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"41978", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-config-operator-77658b5b66-dq5sc_038c37fd-e40d-4df2-911b-a62c894e812a became leader 2025-12-13T00:18:20.202349222+00:00 stderr F I1213 00:18:20.202257 1 base_controller.go:67] Waiting for caches to sync for FeatureUpgradeableController 2025-12-13T00:18:20.202919117+00:00 stderr F I1213 00:18:20.202861 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-12-13T00:18:20.203049670+00:00 stderr F I1213 00:18:20.202991 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-12-13T00:18:20.203049670+00:00 stderr F I1213 00:18:20.202989 1 base_controller.go:67] Waiting for caches to sync for 
FeatureGateController 2025-12-13T00:18:20.203049670+00:00 stderr F I1213 00:18:20.202979 1 base_controller.go:67] Waiting for caches to sync for AWSPlatformServiceLocationController 2025-12-13T00:18:20.203049670+00:00 stderr F I1213 00:18:20.203025 1 base_controller.go:67] Waiting for caches to sync for MigrationPlatformStatusController 2025-12-13T00:18:20.203049670+00:00 stderr F I1213 00:18:20.202990 1 base_controller.go:67] Waiting for caches to sync for KubeCloudConfigController 2025-12-13T00:18:20.203169603+00:00 stderr F I1213 00:18:20.203091 1 base_controller.go:67] Waiting for caches to sync for LatencySensitiveRemovalController 2025-12-13T00:18:20.203169603+00:00 stderr F I1213 00:18:20.203119 1 base_controller.go:73] Caches are synced for LatencySensitiveRemovalController 2025-12-13T00:18:20.203169603+00:00 stderr F I1213 00:18:20.203128 1 base_controller.go:110] Starting #1 worker of LatencySensitiveRemovalController controller ... 2025-12-13T00:18:20.203169603+00:00 stderr F I1213 00:18:20.203140 1 base_controller.go:67] Waiting for caches to sync for ConfigOperatorController 2025-12-13T00:18:20.203169603+00:00 stderr F I1213 00:18:20.203150 1 base_controller.go:73] Caches are synced for ConfigOperatorController 2025-12-13T00:18:20.203169603+00:00 stderr F I1213 00:18:20.203157 1 base_controller.go:110] Starting #1 worker of ConfigOperatorController controller ... 
2025-12-13T00:18:20.203335898+00:00 stderr F E1213 00:18:20.203271 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-12-13T00:18:20.203335898+00:00 stderr F I1213 00:18:20.203263 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-config-operator", Name:"openshift-config-operator", UID:"46cebc51-d29e-4081-9edb-d9f437810b86", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling 2025-12-13T00:18:20.208741592+00:00 stderr F E1213 00:18:20.208651 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-12-13T00:18:20.220355601+00:00 stderr F I1213 00:18:20.215670 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_config-operator 2025-12-13T00:18:20.220355601+00:00 stderr F E1213 00:18:20.219552 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-12-13T00:18:20.303310756+00:00 stderr F I1213 00:18:20.303202 1 base_controller.go:73] Caches are synced for FeatureUpgradeableController 2025-12-13T00:18:20.303310756+00:00 stderr F I1213 00:18:20.303264 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-12-13T00:18:20.303310756+00:00 stderr F I1213 00:18:20.303278 1 base_controller.go:110] Starting #1 worker of FeatureUpgradeableController controller ... 2025-12-13T00:18:20.303310756+00:00 stderr F I1213 00:18:20.303287 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-12-13T00:18:20.303510012+00:00 stderr F I1213 00:18:20.303469 1 base_controller.go:73] Caches are synced for KubeCloudConfigController 2025-12-13T00:18:20.303510012+00:00 stderr F I1213 00:18:20.303490 1 base_controller.go:110] Starting #1 worker of KubeCloudConfigController controller ... 2025-12-13T00:18:20.303529002+00:00 stderr F I1213 00:18:20.303511 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-12-13T00:18:20.303529002+00:00 stderr F I1213 00:18:20.303519 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-12-13T00:18:20.303544992+00:00 stderr F I1213 00:18:20.303532 1 base_controller.go:73] Caches are synced for FeatureGateController 2025-12-13T00:18:20.303544992+00:00 stderr F I1213 00:18:20.303537 1 base_controller.go:110] Starting #1 worker of FeatureGateController controller ... 2025-12-13T00:18:20.305012112+00:00 stderr F I1213 00:18:20.304010 1 base_controller.go:73] Caches are synced for MigrationPlatformStatusController 2025-12-13T00:18:20.305012112+00:00 stderr F I1213 00:18:20.304023 1 base_controller.go:110] Starting #1 worker of MigrationPlatformStatusController controller ... 2025-12-13T00:18:20.305012112+00:00 stderr F I1213 00:18:20.304163 1 base_controller.go:73] Caches are synced for AWSPlatformServiceLocationController 2025-12-13T00:18:20.305012112+00:00 stderr F I1213 00:18:20.304182 1 base_controller.go:110] Starting #1 worker of AWSPlatformServiceLocationController controller ... 2025-12-13T00:18:20.316276281+00:00 stderr F I1213 00:18:20.316242 1 base_controller.go:73] Caches are synced for StatusSyncer_config-operator 2025-12-13T00:18:20.316306102+00:00 stderr F I1213 00:18:20.316272 1 base_controller.go:110] Starting #1 worker of StatusSyncer_config-operator controller ... 
2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.563417 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.563354305 +0000 UTC))" 2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.563923 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.563894231 +0000 UTC))" 2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.564297 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.56421397 +0000 UTC))" 2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.564335 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.564308332 +0000 UTC))" 2025-12-13T00:19:37.567901971+00:00 
stderr F I1213 00:19:37.564389 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.564351224 +0000 UTC))" 2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.564423 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.564398375 +0000 UTC))" 2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.564453 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.564430996 +0000 UTC))" 2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.564483 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.564460737 +0000 UTC))" 
2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.564513 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.564491247 +0000 UTC))" 2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.564547 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.564526978 +0000 UTC))" 2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.564581 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.564556929 +0000 UTC))" 2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.564610 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.56458854 +0000 
UTC))" 2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.565271 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-config-operator.svc\" [serving] validServingFor=[metrics.openshift-config-operator.svc,metrics.openshift-config-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-12-13 00:19:37.565245508 +0000 UTC))" 2025-12-13T00:19:37.567901971+00:00 stderr F I1213 00:19:37.565780 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584798\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:17 +0000 UTC to 2026-12-12 23:13:17 +0000 UTC (now=2025-12-13 00:19:37.565751942 +0000 UTC))" 2025-12-13T00:21:48.599503896+00:00 stderr F I1213 00:21:48.599006 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/1.log
2025-08-13T20:00:06.609040181+00:00 stderr F I0813 20:00:06.606756 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:00:06.609040181+00:00 stderr F I0813 20:00:06.607693 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s.
Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:00:06.613715154+00:00 stderr F I0813 20:00:06.613562 1 observer_polling.go:159] Starting file observer 2025-08-13T20:00:06.745670326+00:00 stderr F I0813 20:00:06.745105 1 builder.go:299] config-operator version 4.16.0-202406131906.p0.g441d29c.assembly.stream.el9-441d29c-441d29c92b1759d1780a525112e764280b78b0d6 2025-08-13T20:00:08.615501452+00:00 stderr F I0813 20:00:08.614612 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:00:08.615501452+00:00 stderr F W0813 20:00:08.615358 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:08.615501452+00:00 stderr F W0813 20:00:08.615367 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:00:08.803599465+00:00 stderr F I0813 20:00:08.803212 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:00:08.804180802+00:00 stderr F I0813 20:00:08.804153 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:00:08.804909533+00:00 stderr F I0813 20:00:08.804883 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:00:08.804961744+00:00 stderr F I0813 20:00:08.804948 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:08.811385237+00:00 stderr F I0813 20:00:08.811308 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:00:08.811453569+00:00 stderr F I0813 20:00:08.811439 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:08.816644177+00:00 stderr F I0813 20:00:08.816601 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:00:08.823141632+00:00 stderr F I0813 20:00:08.823095 1 leaderelection.go:250] attempting to acquire leader lease openshift-config-operator/config-operator-lock... 2025-08-13T20:00:08.827510887+00:00 stderr F I0813 20:00:08.827483 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:00:08.828151935+00:00 stderr F I0813 20:00:08.827768 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:00:08.835083783+00:00 stderr F I0813 20:00:08.827946 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:00:10.206555929+00:00 stderr F I0813 20:00:10.205128 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:00:10.206555929+00:00 stderr F I0813 20:00:10.205730 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:00:10.213300141+00:00 stderr F I0813 20:00:10.213253 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:00:10.859242570+00:00 stderr F I0813 20:00:10.858742 1 leaderelection.go:260] successfully acquired lease openshift-config-operator/config-operator-lock 2025-08-13T20:00:10.860707962+00:00 stderr F I0813 20:00:10.860514 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-config-operator", Name:"config-operator-lock", UID:"0f77897f-a069-4784-b2c6-559ada951a0f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"29031", FieldPath:""}): type: 'Normal' reason: 
'LeaderElection' openshift-config-operator-77658b5b66-dq5sc_122f4599-0c2f-4c0b-a0bb-7ed9e07d3e2c became leader 2025-08-13T20:00:11.819948463+00:00 stderr F I0813 20:00:11.818486 1 base_controller.go:67] Waiting for caches to sync for FeatureUpgradeableController 2025-08-13T20:00:11.820923450+00:00 stderr F I0813 20:00:11.820062 1 base_controller.go:67] Waiting for caches to sync for ConfigOperatorController 2025-08-13T20:00:11.820923450+00:00 stderr F I0813 20:00:11.820101 1 base_controller.go:73] Caches are synced for ConfigOperatorController 2025-08-13T20:00:11.820923450+00:00 stderr F I0813 20:00:11.820196 1 base_controller.go:110] Starting #1 worker of ConfigOperatorController controller ... 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.820960 1 base_controller.go:67] Waiting for caches to sync for MigrationPlatformStatusController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821011 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821026 1 base_controller.go:67] Waiting for caches to sync for FeatureGateController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821053 1 base_controller.go:67] Waiting for caches to sync for LatencySensitiveRemovalController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821058 1 base_controller.go:73] Caches are synced for LatencySensitiveRemovalController 2025-08-13T20:00:11.821148627+00:00 stderr F I0813 20:00:11.821064 1 base_controller.go:110] Starting #1 worker of LatencySensitiveRemovalController controller ... 
2025-08-13T20:00:11.826646533+00:00 stderr F I0813 20:00:11.825981 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:00:11.826646533+00:00 stderr F I0813 20:00:11.826333 1 base_controller.go:67] Waiting for caches to sync for AWSPlatformServiceLocationController 2025-08-13T20:00:11.826646533+00:00 stderr F I0813 20:00:11.826354 1 base_controller.go:67] Waiting for caches to sync for KubeCloudConfigController 2025-08-13T20:00:11.844577405+00:00 stderr F I0813 20:00:11.844491 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_config-operator 2025-08-13T20:00:11.845871972+00:00 stderr F E0813 20:00:11.844912 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:00:11.845871972+00:00 stderr F I0813 20:00:11.844973 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-config-operator", Name:"openshift-config-operator", UID:"46cebc51-d29e-4081-9edb-d9f437810b86", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:00:11.859098619+00:00 stderr F E0813 20:00:11.857896 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:00:11.968039095+00:00 stderr F E0813 20:00:11.951700 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:00:12.048349065+00:00 stderr F I0813 20:00:12.048045 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:00:12.048349065+00:00 stderr F I0813 20:00:12.048126 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T20:00:12.100115001+00:00 stderr F I0813 20:00:12.099542 1 base_controller.go:73] Caches are synced for FeatureUpgradeableController 2025-08-13T20:00:12.100115001+00:00 stderr F I0813 20:00:12.099591 1 base_controller.go:110] Starting #1 worker of FeatureUpgradeableController controller ... 2025-08-13T20:00:12.100115001+00:00 stderr F I0813 20:00:12.099812 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:00:12.100115001+00:00 stderr F I0813 20:00:12.099867 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:00:12.253356991+00:00 stderr F I0813 20:00:12.250025 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.286079044+00:00 stderr F I0813 20:00:12.272341 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.286079044+00:00 stderr F I0813 20:00:12.274011 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.442111723+00:00 stderr F I0813 20:00:12.442008 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.445729096+00:00 stderr F I0813 20:00:12.445624 1 base_controller.go:73] Caches are synced for AWSPlatformServiceLocationController 2025-08-13T20:00:12.450233745+00:00 stderr F I0813 20:00:12.450133 1 base_controller.go:110] Starting #1 worker of AWSPlatformServiceLocationController controller ... 2025-08-13T20:00:12.450256085+00:00 stderr F I0813 20:00:12.450242 1 base_controller.go:73] Caches are synced for FeatureGateController 2025-08-13T20:00:12.450256085+00:00 stderr F I0813 20:00:12.450249 1 base_controller.go:110] Starting #1 worker of FeatureGateController controller ... 
2025-08-13T20:00:12.450478772+00:00 stderr F I0813 20:00:12.449988 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.469317419+00:00 stderr F I0813 20:00:12.469145 1 base_controller.go:73] Caches are synced for StatusSyncer_config-operator 2025-08-13T20:00:12.469317419+00:00 stderr F I0813 20:00:12.469192 1 base_controller.go:110] Starting #1 worker of StatusSyncer_config-operator controller ... 2025-08-13T20:00:12.475613638+00:00 stderr F I0813 20:00:12.475482 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:00:12.521284411+00:00 stderr F I0813 20:00:12.521184 1 base_controller.go:73] Caches are synced for MigrationPlatformStatusController 2025-08-13T20:00:12.521284411+00:00 stderr F I0813 20:00:12.521242 1 base_controller.go:110] Starting #1 worker of MigrationPlatformStatusController controller ... 2025-08-13T20:00:12.558069769+00:00 stderr F I0813 20:00:12.556643 1 base_controller.go:73] Caches are synced for KubeCloudConfigController 2025-08-13T20:00:12.558069769+00:00 stderr F I0813 20:00:12.556684 1 base_controller.go:110] Starting #1 worker of KubeCloudConfigController controller ... 
2025-08-13T20:01:00.024546078+00:00 stderr F I0813 20:01:00.011104 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:00.01059532 +0000 UTC))" 2025-08-13T20:01:00.024546078+00:00 stderr F I0813 20:01:00.011756 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:00.011738473 +0000 UTC))" 2025-08-13T20:01:00.049043057+00:00 stderr F I0813 20:01:00.048971 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.011764744 +0000 UTC))" 2025-08-13T20:01:00.049170980+00:00 stderr F I0813 20:01:00.049151 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:00.049107288 +0000 UTC))" 2025-08-13T20:01:00.049355846+00:00 
stderr F I0813 20:01:00.049331 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.049197771 +0000 UTC))" 2025-08-13T20:01:00.049464969+00:00 stderr F I0813 20:01:00.049447 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.049422757 +0000 UTC))" 2025-08-13T20:01:00.049530141+00:00 stderr F I0813 20:01:00.049513 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.049491139 +0000 UTC))" 2025-08-13T20:01:00.049596852+00:00 stderr F I0813 20:01:00.049579 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.049557081 +0000 UTC))" 
2025-08-13T20:01:00.049661514+00:00 stderr F I0813 20:01:00.049645 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:00.049620973 +0000 UTC))" 2025-08-13T20:01:00.059181316+00:00 stderr F I0813 20:01:00.049740 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:00.049719456 +0000 UTC))" 2025-08-13T20:01:00.059327780+00:00 stderr F I0813 20:01:00.059308 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:00.059262308 +0000 UTC))" 2025-08-13T20:01:00.068147291+00:00 stderr F I0813 20:01:00.068084 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-config-operator.svc\" [serving] validServingFor=[metrics.openshift-config-operator.svc,metrics.openshift-config-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:08 +0000 UTC to 2026-06-26 12:47:09 +0000 UTC (now=2025-08-13 
20:01:00.068004057 +0000 UTC))" 2025-08-13T20:01:00.068977955+00:00 stderr F I0813 20:01:00.068951 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115208\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115207\" (2025-08-13 19:00:06 +0000 UTC to 2026-08-13 19:00:06 +0000 UTC (now=2025-08-13 20:01:00.068701687 +0000 UTC))" 2025-08-13T20:01:23.351014291+00:00 stderr F I0813 20:01:23.350151 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:23.351014291+00:00 stderr F I0813 20:01:23.350951 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:23.351996309+00:00 stderr F I0813 20:01:23.351484 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:23.352476933+00:00 stderr F I0813 20:01:23.352336 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:23.352288928 +0000 UTC))" 2025-08-13T20:01:23.352476933+00:00 stderr F I0813 20:01:23.352389 1 tlsconfig.go:178] "Loaded client CA" index=1 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:23.35237251 +0000 UTC))" 2025-08-13T20:01:23.352476933+00:00 stderr F I0813 20:01:23.352411 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:23.352397221 +0000 UTC))" 2025-08-13T20:01:23.352476933+00:00 stderr F I0813 20:01:23.352427 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:23.352416221 +0000 UTC))" 2025-08-13T20:01:23.352508124+00:00 stderr F I0813 20:01:23.352445 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352434132 +0000 UTC))" 2025-08-13T20:01:23.352508124+00:00 stderr F I0813 20:01:23.352480 1 tlsconfig.go:178] "Loaded client 
CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352451412 +0000 UTC))" 2025-08-13T20:01:23.352518574+00:00 stderr F I0813 20:01:23.352505 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352486313 +0000 UTC))" 2025-08-13T20:01:23.352563405+00:00 stderr F I0813 20:01:23.352529 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352510784 +0000 UTC))" 2025-08-13T20:01:23.352575506+00:00 stderr F I0813 20:01:23.352567 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:23.352552205 +0000 UTC))" 2025-08-13T20:01:23.352612767+00:00 stderr F I0813 20:01:23.352584 1 tlsconfig.go:178] 
"Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:23.352574216 +0000 UTC))" 2025-08-13T20:01:23.352666948+00:00 stderr F I0813 20:01:23.352621 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:23.352607877 +0000 UTC))" 2025-08-13T20:01:23.354274304+00:00 stderr F I0813 20:01:23.354230 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-config-operator.svc\" [serving] validServingFor=[metrics.openshift-config-operator.svc,metrics.openshift-config-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:11 +0000 UTC to 2027-08-13 20:00:12 +0000 UTC (now=2025-08-13 20:01:23.354202292 +0000 UTC))" 2025-08-13T20:01:23.354967744+00:00 stderr F I0813 20:01:23.354866 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115208\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115207\" (2025-08-13 19:00:06 +0000 UTC to 2026-08-13 19:00:06 +0000 UTC (now=2025-08-13 20:01:23.354759108 +0000 UTC))" 2025-08-13T20:01:26.622344829+00:00 stderr F I0813 20:01:26.621656 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" 
has been modified (old="7021067932790448a11809da10b860f6f1ea1555731d97a3cf678bc8b9574622", new="dca6c81c3751f96f1b64e72dc06b40fe72f2952cfcac2b16deea87fc6cd08c4d") 2025-08-13T20:01:26.622547835+00:00 stderr F W0813 20:01:26.622499 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.key was modified 2025-08-13T20:01:26.622707979+00:00 stderr F I0813 20:01:26.622619 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="e7c4eabcdc7aa32e59c3e68ad4841e132e9166775ff32392cab346655d2dac9f", new="adf358f49f26d932aaa3db3a86640e1ed83874695ab0abb173ca1ba5a73101ec") 2025-08-13T20:01:26.623009248+00:00 stderr F I0813 20:01:26.622955 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:26.623130121+00:00 stderr F I0813 20:01:26.623115 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:26.623421960+00:00 stderr F I0813 20:01:26.623368 1 base_controller.go:172] Shutting down FeatureGateController ... 2025-08-13T20:01:26.623436000+00:00 stderr F I0813 20:01:26.623426 1 base_controller.go:172] Shutting down AWSPlatformServiceLocationController ... 2025-08-13T20:01:26.623555654+00:00 stderr F I0813 20:01:26.623444 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:01:26.623555654+00:00 stderr F I0813 20:01:26.623475 1 base_controller.go:172] Shutting down FeatureUpgradeableController ... 2025-08-13T20:01:26.623555654+00:00 stderr F I0813 20:01:26.623493 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:26.623617945+00:00 stderr F I0813 20:01:26.623573 1 base_controller.go:172] Shutting down LatencySensitiveRemovalController ... 2025-08-13T20:01:26.623617945+00:00 stderr F I0813 20:01:26.623607 1 base_controller.go:172] Shutting down ConfigOperatorController ... 
2025-08-13T20:01:26.624546072+00:00 stderr F I0813 20:01:26.624517 1 base_controller.go:172] Shutting down KubeCloudConfigController ... 2025-08-13T20:01:26.624602753+00:00 stderr F I0813 20:01:26.624589 1 base_controller.go:172] Shutting down MigrationPlatformStatusController ... 2025-08-13T20:01:26.624657085+00:00 stderr F I0813 20:01:26.624644 1 base_controller.go:172] Shutting down StatusSyncer_config-operator ... 2025-08-13T20:01:26.624698276+00:00 stderr F I0813 20:01:26.624676 1 base_controller.go:150] All StatusSyncer_config-operator post start hooks have been terminated 2025-08-13T20:01:26.624765838+00:00 stderr F I0813 20:01:26.624750 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:26.624957463+00:00 stderr F I0813 20:01:26.624931 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:26.625110698+00:00 stderr F I0813 20:01:26.625092 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:01:26.625173410+00:00 stderr F I0813 20:01:26.625159 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:26.625242542+00:00 stderr F I0813 20:01:26.625229 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:26.626234790+00:00 stderr F I0813 20:01:26.626160 1 base_controller.go:114] Shutting down worker of FeatureUpgradeableController controller ... 2025-08-13T20:01:26.626252370+00:00 stderr F I0813 20:01:26.626240 1 base_controller.go:104] All FeatureUpgradeableController workers have been terminated 2025-08-13T20:01:26.626338233+00:00 stderr F I0813 20:01:26.626291 1 base_controller.go:114] Shutting down worker of AWSPlatformServiceLocationController controller ... 
2025-08-13T20:01:26.626350213+00:00 stderr F I0813 20:01:26.626339 1 base_controller.go:104] All AWSPlatformServiceLocationController workers have been terminated 2025-08-13T20:01:26.626406105+00:00 stderr F I0813 20:01:26.626367 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:01:26.626406105+00:00 stderr F I0813 20:01:26.626391 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:01:26.626406105+00:00 stderr F I0813 20:01:26.626142 1 base_controller.go:114] Shutting down worker of FeatureGateController controller ... 2025-08-13T20:01:26.626419325+00:00 stderr F I0813 20:01:26.626407 1 base_controller.go:104] All FeatureGateController workers have been terminated 2025-08-13T20:01:26.626428965+00:00 stderr F I0813 20:01:26.626418 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:26.626428965+00:00 stderr F I0813 20:01:26.626425 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:26.627513836+00:00 stderr F I0813 20:01:26.626613 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:26.627513836+00:00 stderr F I0813 20:01:26.626682 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:26.627513836+00:00 stderr F I0813 20:01:26.626711 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:26.627513836+00:00 stderr F I0813 20:01:26.626744 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:26.630990065+00:00 stderr F I0813 20:01:26.630905 1 base_controller.go:114] Shutting down worker 
of LatencySensitiveRemovalController controller ... 2025-08-13T20:01:26.630990065+00:00 stderr F I0813 20:01:26.630965 1 base_controller.go:104] All LatencySensitiveRemovalController workers have been terminated 2025-08-13T20:01:26.631095068+00:00 stderr F I0813 20:01:26.631051 1 base_controller.go:114] Shutting down worker of MigrationPlatformStatusController controller ... 2025-08-13T20:01:26.631095068+00:00 stderr F I0813 20:01:26.631077 1 base_controller.go:104] All MigrationPlatformStatusController workers have been terminated 2025-08-13T20:01:26.631125419+00:00 stderr F I0813 20:01:26.631059 1 base_controller.go:114] Shutting down worker of StatusSyncer_config-operator controller ... 2025-08-13T20:01:26.631165940+00:00 stderr F I0813 20:01:26.631152 1 base_controller.go:104] All StatusSyncer_config-operator workers have been terminated 2025-08-13T20:01:26.631203632+00:00 stderr F I0813 20:01:26.631119 1 base_controller.go:114] Shutting down worker of KubeCloudConfigController controller ... 2025-08-13T20:01:26.631244433+00:00 stderr F I0813 20:01:26.631232 1 base_controller.go:104] All KubeCloudConfigController workers have been terminated 2025-08-13T20:01:26.633866888+00:00 stderr F I0813 20:01:26.631340 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:26.633866888+00:00 stderr F I0813 20:01:26.631376 1 builder.go:330] server exited 2025-08-13T20:01:26.633866888+00:00 stderr F I0813 20:01:26.631415 1 base_controller.go:114] Shutting down worker of ConfigOperatorController controller ... 
2025-08-13T20:01:26.633866888+00:00 stderr F I0813 20:01:26.631458 1 base_controller.go:104] All ConfigOperatorController workers have been terminated 2025-08-13T20:01:29.264183958+00:00 stderr F W0813 20:01:29.256407 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/2.log
2025-08-13T20:05:36.187245573+00:00 stderr F I0813 20:05:36.180676 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:05:36.192640098+00:00 stderr F I0813 20:05:36.189383 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:36.197893678+00:00 stderr F I0813 20:05:36.193740 1 observer_polling.go:159] Starting file observer 2025-08-13T20:05:36.478370310+00:00 stderr F I0813 20:05:36.476926 1 builder.go:299] config-operator version 4.16.0-202406131906.p0.g441d29c.assembly.stream.el9-441d29c-441d29c92b1759d1780a525112e764280b78b0d6 2025-08-13T20:05:37.273257573+00:00 stderr F I0813 20:05:37.272955 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:05:37.273257573+00:00 stderr F W0813 20:05:37.273193 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:05:37.273257573+00:00 stderr F W0813 20:05:37.273201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 
2025-08-13T20:05:37.325391825+00:00 stderr F I0813 20:05:37.325120 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:05:37.325612042+00:00 stderr F I0813 20:05:37.325561 1 leaderelection.go:250] attempting to acquire leader lease openshift-config-operator/config-operator-lock... 2025-08-13T20:05:37.326370803+00:00 stderr F I0813 20:05:37.326289 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:05:37.328065692+00:00 stderr F I0813 20:05:37.327701 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:05:37.328065692+00:00 stderr F I0813 20:05:37.327944 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:37.328573397+00:00 stderr F I0813 20:05:37.328474 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:05:37.328573397+00:00 stderr F I0813 20:05:37.328514 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:37.328573397+00:00 stderr F I0813 20:05:37.328476 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:05:37.328573397+00:00 stderr F I0813 20:05:37.328555 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:05:37.328614748+00:00 stderr F I0813 20:05:37.328588 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:05:37.329384150+00:00 stderr F I0813 20:05:37.328735 1 tlsconfig.go:240] "Starting 
DynamicServingCertificateController" 2025-08-13T20:05:37.414109896+00:00 stderr F I0813 20:05:37.413550 1 leaderelection.go:260] successfully acquired lease openshift-config-operator/config-operator-lock 2025-08-13T20:05:37.416395842+00:00 stderr F I0813 20:05:37.416309 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-config-operator", Name:"config-operator-lock", UID:"0f77897f-a069-4784-b2c6-559ada951a0f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"31747", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openshift-config-operator-77658b5b66-dq5sc_81393664-a273-42ce-b620-ccd9229e7705 became leader 2025-08-13T20:05:37.434366396+00:00 stderr F I0813 20:05:37.430576 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:05:37.434366396+00:00 stderr F I0813 20:05:37.430762 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:05:37.435024715+00:00 stderr F I0813 20:05:37.434560 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:05:37.625336365+00:00 stderr F I0813 20:05:37.625240 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:05:37.626072746+00:00 stderr F I0813 20:05:37.626024 1 base_controller.go:67] Waiting for caches to sync for AWSPlatformServiceLocationController 2025-08-13T20:05:37.626157699+00:00 stderr F I0813 20:05:37.626137 1 base_controller.go:67] Waiting for caches to sync for KubeCloudConfigController 2025-08-13T20:05:37.626332764+00:00 stderr F I0813 20:05:37.626234 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_config-operator 2025-08-13T20:05:37.626433157+00:00 stderr F I0813 20:05:37.626375 1 base_controller.go:67] Waiting for caches to sync for ConfigOperatorController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 
20:05:37.626486 1 base_controller.go:73] Caches are synced for ConfigOperatorController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.626567 1 base_controller.go:110] Starting #1 worker of ConfigOperatorController controller ... 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.626854 1 base_controller.go:67] Waiting for caches to sync for MigrationPlatformStatusController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.626992 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.627100 1 base_controller.go:67] Waiting for caches to sync for FeatureGateController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.627123 1 base_controller.go:67] Waiting for caches to sync for LatencySensitiveRemovalController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.627196 1 base_controller.go:73] Caches are synced for LatencySensitiveRemovalController 2025-08-13T20:05:37.627411195+00:00 stderr F I0813 20:05:37.627203 1 base_controller.go:110] Starting #1 worker of LatencySensitiveRemovalController controller ... 
2025-08-13T20:05:37.627753844+00:00 stderr F I0813 20:05:37.627732 1 base_controller.go:67] Waiting for caches to sync for FeatureUpgradeableController 2025-08-13T20:05:37.628027542+00:00 stderr F E0813 20:05:37.628001 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:05:37.630202254+00:00 stderr F I0813 20:05:37.630165 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-config-operator", Name:"openshift-config-operator", UID:"46cebc51-d29e-4081-9edb-d9f437810b86", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:05:37.633285683+00:00 stderr F E0813 20:05:37.633261 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:05:37.643717082+00:00 stderr F E0813 20:05:37.643665 1 base_controller.go:268] ConfigOperatorController reconciliation failed: configs.operator.openshift.io "cluster" not found 2025-08-13T20:05:37.725701009+00:00 stderr F I0813 20:05:37.725637 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:05:37.725853283+00:00 stderr F I0813 20:05:37.725831 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 2025-08-13T20:05:37.726215824+00:00 stderr F I0813 20:05:37.726192 1 base_controller.go:73] Caches are synced for AWSPlatformServiceLocationController 2025-08-13T20:05:37.726284146+00:00 stderr F I0813 20:05:37.726268 1 base_controller.go:110] Starting #1 worker of AWSPlatformServiceLocationController controller ... 
2025-08-13T20:05:37.726529923+00:00 stderr F I0813 20:05:37.726509 1 base_controller.go:73] Caches are synced for StatusSyncer_config-operator 2025-08-13T20:05:37.726569094+00:00 stderr F I0813 20:05:37.726556 1 base_controller.go:110] Starting #1 worker of StatusSyncer_config-operator controller ... 2025-08-13T20:05:37.730930929+00:00 stderr F I0813 20:05:37.730051 1 base_controller.go:73] Caches are synced for FeatureUpgradeableController 2025-08-13T20:05:37.731001201+00:00 stderr F I0813 20:05:37.730983 1 base_controller.go:110] Starting #1 worker of FeatureUpgradeableController controller ... 2025-08-13T20:05:37.731230777+00:00 stderr F I0813 20:05:37.731187 1 base_controller.go:73] Caches are synced for FeatureGateController 2025-08-13T20:05:37.731271559+00:00 stderr F I0813 20:05:37.731257 1 base_controller.go:110] Starting #1 worker of FeatureGateController controller ... 2025-08-13T20:05:37.731335730+00:00 stderr F I0813 20:05:37.731317 1 base_controller.go:73] Caches are synced for MigrationPlatformStatusController 2025-08-13T20:05:37.731375092+00:00 stderr F I0813 20:05:37.731362 1 base_controller.go:110] Starting #1 worker of MigrationPlatformStatusController controller ... 2025-08-13T20:05:37.731427753+00:00 stderr F I0813 20:05:37.731405 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:05:37.731459594+00:00 stderr F I0813 20:05:37.731448 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:05:37.767118245+00:00 stderr F I0813 20:05:37.766661 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:37.827659169+00:00 stderr F I0813 20:05:37.827513 1 base_controller.go:73] Caches are synced for KubeCloudConfigController 2025-08-13T20:05:37.828510483+00:00 stderr F I0813 20:05:37.828486 1 base_controller.go:110] Starting #1 worker of KubeCloudConfigController controller ... 
2025-08-13T20:08:37.866996977+00:00 stderr F W0813 20:08:37.863766 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.866996977+00:00 stderr F E0813 20:08:37.865660 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.878356312+00:00 stderr F W0813 20:08:37.878274 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.878477386+00:00 stderr F E0813 20:08:37.878372 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.892624171+00:00 stderr F W0813 20:08:37.892524 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.892673483+00:00 stderr F E0813 20:08:37.892652 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.918055460+00:00 stderr F W0813 20:08:37.917452 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.918055460+00:00 stderr F E0813 20:08:37.917522 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.962652209+00:00 stderr F W0813 20:08:37.962491 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:37.962652209+00:00 stderr F E0813 20:08:37.962563 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.049389966+00:00 stderr F W0813 20:08:38.049292 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.049499499+00:00 stderr F E0813 20:08:38.049381 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.215044335+00:00 stderr F W0813 20:08:38.214875 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.215044335+00:00 stderr F E0813 20:08:38.214976 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.258970315+00:00 stderr F E0813 20:08:38.258730 1 leaderelection.go:332] error retrieving resource lock openshift-config-operator/config-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-config-operator/leases/config-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.542396601+00:00 stderr F W0813 20:08:38.541764 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:38.542880965+00:00 stderr F E0813 20:08:38.542361 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.191196853+00:00 stderr F W0813 20:08:39.190668 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.191196853+00:00 stderr F E0813 20:08:39.191134 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.485072119+00:00 stderr F W0813 20:08:40.483459 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:40.485072119+00:00 
stderr F E0813 20:08:40.483518 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.050922693+00:00 stderr F W0813 20:08:43.048040 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:43.050922693+00:00 stderr F E0813 20:08:43.050394 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.184106516+00:00 stderr F W0813 20:08:48.180432 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:48.184106516+00:00 stderr F E0813 20:08:48.182751 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.427457842+00:00 stderr F W0813 20:08:58.426605 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:58.427457842+00:00 stderr F E0813 20:08:58.427363 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:09:29.989139703+00:00 stderr F I0813 20:09:29.988276 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.132409295+00:00 stderr F I0813 20:09:35.131738 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:41.851451214+00:00 stderr F I0813 20:09:41.848936 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.746116538+00:00 stderr F I0813 20:09:45.745718 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.935623861+00:00 stderr F I0813 20:09:45.935482 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:46.245138595+00:00 stderr F I0813 20:09:46.245075 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.696959325+00:00 stderr F I0813 20:09:52.694671 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.301629342+00:00 stderr F I0813 20:09:55.301139 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.755286399+00:00 stderr F I0813 20:09:55.755198 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=configs from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.900307047+00:00 stderr F I0813 20:09:55.900239 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.602007832+00:00 stderr F I0813 20:10:18.601141 1 reflector.go:351] Caches populated for *v1.ConfigMap 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:42:36.371746557+00:00 stderr F I0813 20:42:36.370140 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.371746557+00:00 stderr F I0813 20:42:36.371308 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.388339156+00:00 stderr F I0813 20:42:36.368481 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.388339156+00:00 stderr F I0813 20:42:36.370500 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421560184+00:00 stderr F I0813 20:42:36.370600 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461377802+00:00 stderr F I0813 20:42:36.370615 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461377802+00:00 stderr F I0813 20:42:36.370628 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461377802+00:00 stderr F I0813 20:42:36.370640 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.461377802+00:00 stderr F I0813 20:42:36.370653 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.472563384+00:00 stderr F I0813 20:42:36.370687 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.514733510+00:00 stderr F I0813 20:42:36.361477 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:37.879286220+00:00 stderr F W0813 20:42:37.876887 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.884558642+00:00 stderr F E0813 20:42:37.883645 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.894989093+00:00 stderr F W0813 20:42:37.892563 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.894989093+00:00 stderr F E0813 20:42:37.892634 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.916855614+00:00 stderr F W0813 20:42:37.916590 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.916855614+00:00 stderr F E0813 20:42:37.916663 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.953634034+00:00 stderr F W0813 20:42:37.941999 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.953634034+00:00 stderr F E0813 20:42:37.942076 1 base_controller.go:268] 
KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.999502676+00:00 stderr F W0813 20:42:37.996164 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:37.999502676+00:00 stderr F E0813 20:42:37.996257 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.089921413+00:00 stderr F W0813 20:42:38.088872 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.090067927+00:00 stderr F E0813 20:42:38.089995 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.254857078+00:00 stderr F W0813 20:42:38.254491 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.254857078+00:00 stderr F E0813 20:42:38.254583 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.579708984+00:00 
stderr F W0813 20:42:38.578982 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:38.579708984+00:00 stderr F E0813 20:42:38.579042 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.223660669+00:00 stderr F W0813 20:42:39.222692 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.223660669+00:00 stderr F E0813 20:42:39.223042 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete "https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.239147526+00:00 stderr F E0813 20:42:39.238968 1 leaderelection.go:332] error retrieving resource lock openshift-config-operator/config-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-config-operator/leases/config-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.506215615+00:00 stderr F W0813 20:42:40.505414 1 base_controller.go:232] Updating status of "KubeCloudConfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/configs/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.506215615+00:00 stderr F E0813 20:42:40.505864 1 base_controller.go:268] KubeCloudConfigController reconciliation failed: Delete 
"https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-cloud-config": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.594969504+00:00 stderr F I0813 20:42:40.593861 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 2025-08-13T20:42:40.595041106+00:00 stderr F I0813 20:42:40.594988 1 leaderelection.go:285] failed to renew lease openshift-config-operator/config-operator-lock: timed out waiting for the condition 2025-08-13T20:42:40.596103676+00:00 stderr F I0813 20:42:40.596073 1 base_controller.go:172] Shutting down LatencySensitiveRemovalController ... 2025-08-13T20:42:40.596210150+00:00 stderr F I0813 20:42:40.596137 1 base_controller.go:172] Shutting down ConfigOperatorController ... 2025-08-13T20:42:40.596289842+00:00 stderr F I0813 20:42:40.596269 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:42:40.597412994+00:00 stderr F I0813 20:42:40.597314 1 base_controller.go:172] Shutting down MigrationPlatformStatusController ... 2025-08-13T20:42:40.597412994+00:00 stderr F E0813 20:42:40.597329 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-config-operator/leases/config-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.597412994+00:00 stderr F I0813 20:42:40.597388 1 base_controller.go:172] Shutting down KubeCloudConfigController ... 2025-08-13T20:42:40.597412994+00:00 stderr F I0813 20:42:40.597405 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:42:40.597965530+00:00 stderr F I0813 20:42:40.597940 1 base_controller.go:172] Shutting down FeatureGateController ... 2025-08-13T20:42:40.598019672+00:00 stderr F I0813 20:42:40.598006 1 base_controller.go:172] Shutting down FeatureUpgradeableController ... 
2025-08-13T20:42:40.598061883+00:00 stderr F I0813 20:42:40.598050 1 base_controller.go:172] Shutting down StatusSyncer_config-operator ... 2025-08-13T20:42:40.598991580+00:00 stderr F I0813 20:42:40.597152 1 base_controller.go:114] Shutting down worker of LatencySensitiveRemovalController controller ... 2025-08-13T20:42:40.599345920+00:00 stderr F I0813 20:42:40.598080 1 base_controller.go:150] All StatusSyncer_config-operator post start hooks have been terminated 2025-08-13T20:42:40.599588477+00:00 stderr F I0813 20:42:40.599560 1 base_controller.go:172] Shutting down AWSPlatformServiceLocationController ... 2025-08-13T20:42:40.599653929+00:00 stderr F I0813 20:42:40.599635 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:42:40.600121662+00:00 stderr F I0813 20:42:40.600094 1 base_controller.go:104] All LatencySensitiveRemovalController workers have been terminated 2025-08-13T20:42:40.600173824+00:00 stderr F W0813 20:42:40.600096 1 leaderelection.go:85] leader election lost ././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-opera0000755000175000017500000000000015117130654033024 5ustar zuulzuul././@LongLink0000644000000000000000000000031400000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-opera0000644000175000017500000000000015117130646033015 0ustar zuulzuul././@LongLink0000644000000000000000000000031400000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-opera0000644000175000017500000000000015117130646033015 0ustar zuulzuul././@LongLink0000644000000000000000000000026000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000755000175000017500000000000015117130646032766 5ustar zuulzuul././@LongLink0000644000000000000000000000030000000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000755000175000017500000000000015117130654032765 5ustar zuulzuul././@LongLink0000644000000000000000000000030500000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/1.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000644000175000017500000001665715117130646033007 0ustar zuulzuul2025-12-13T00:13:15.384286008+00:00 stderr F I1213 00:13:15.384144 1 flags.go:64] FLAG: --add-dir-header="false" 2025-12-13T00:13:15.384396802+00:00 stderr F I1213 00:13:15.384380 1 flags.go:64] FLAG: --allow-paths="[]" 2025-12-13T00:13:15.384420933+00:00 stderr F I1213 00:13:15.384411 1 flags.go:64] FLAG: 
--alsologtostderr="false" 2025-12-13T00:13:15.384442814+00:00 stderr F I1213 00:13:15.384433 1 flags.go:64] FLAG: --auth-header-fields-enabled="false" 2025-12-13T00:13:15.384466014+00:00 stderr F I1213 00:13:15.384455 1 flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups" 2025-12-13T00:13:15.384487555+00:00 stderr F I1213 00:13:15.384478 1 flags.go:64] FLAG: --auth-header-groups-field-separator="|" 2025-12-13T00:13:15.384509486+00:00 stderr F I1213 00:13:15.384500 1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user" 2025-12-13T00:13:15.384531987+00:00 stderr F I1213 00:13:15.384522 1 flags.go:64] FLAG: --auth-token-audiences="[]" 2025-12-13T00:13:15.384553757+00:00 stderr F I1213 00:13:15.384544 1 flags.go:64] FLAG: --client-ca-file="" 2025-12-13T00:13:15.384575898+00:00 stderr F I1213 00:13:15.384566 1 flags.go:64] FLAG: --config-file="/etc/kube-rbac-proxy/config-file.yaml" 2025-12-13T00:13:15.384597509+00:00 stderr F I1213 00:13:15.384588 1 flags.go:64] FLAG: --help="false" 2025-12-13T00:13:15.384618950+00:00 stderr F I1213 00:13:15.384610 1 flags.go:64] FLAG: --http2-disable="false" 2025-12-13T00:13:15.384648100+00:00 stderr F I1213 00:13:15.384634 1 flags.go:64] FLAG: --http2-max-concurrent-streams="100" 2025-12-13T00:13:15.384675121+00:00 stderr F I1213 00:13:15.384665 1 flags.go:64] FLAG: --http2-max-size="262144" 2025-12-13T00:13:15.384699702+00:00 stderr F I1213 00:13:15.384688 1 flags.go:64] FLAG: --ignore-paths="[]" 2025-12-13T00:13:15.384721503+00:00 stderr F I1213 00:13:15.384712 1 flags.go:64] FLAG: --insecure-listen-address="" 2025-12-13T00:13:15.384744344+00:00 stderr F I1213 00:13:15.384734 1 flags.go:64] FLAG: --kube-api-burst="0" 2025-12-13T00:13:15.384771145+00:00 stderr F I1213 00:13:15.384756 1 flags.go:64] FLAG: --kube-api-qps="0" 2025-12-13T00:13:15.384792975+00:00 stderr F I1213 00:13:15.384784 1 flags.go:64] FLAG: --kubeconfig="" 2025-12-13T00:13:15.384814186+00:00 stderr F I1213 00:13:15.384805 1 flags.go:64] 
FLAG: --log-backtrace-at="" 2025-12-13T00:13:15.384835367+00:00 stderr F I1213 00:13:15.384826 1 flags.go:64] FLAG: --log-dir="" 2025-12-13T00:13:15.384856527+00:00 stderr F I1213 00:13:15.384847 1 flags.go:64] FLAG: --log-file="" 2025-12-13T00:13:15.384886718+00:00 stderr F I1213 00:13:15.384868 1 flags.go:64] FLAG: --log-file-max-size="0" 2025-12-13T00:13:15.384915759+00:00 stderr F I1213 00:13:15.384901 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-12-13T00:13:15.384960961+00:00 stderr F I1213 00:13:15.384928 1 flags.go:64] FLAG: --logtostderr="true" 2025-12-13T00:13:15.384996272+00:00 stderr F I1213 00:13:15.384983 1 flags.go:64] FLAG: --oidc-ca-file="" 2025-12-13T00:13:15.385023183+00:00 stderr F I1213 00:13:15.385013 1 flags.go:64] FLAG: --oidc-clientID="" 2025-12-13T00:13:15.385045454+00:00 stderr F I1213 00:13:15.385036 1 flags.go:64] FLAG: --oidc-groups-claim="groups" 2025-12-13T00:13:15.385069184+00:00 stderr F I1213 00:13:15.385059 1 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-12-13T00:13:15.385092895+00:00 stderr F I1213 00:13:15.385084 1 flags.go:64] FLAG: --oidc-issuer="" 2025-12-13T00:13:15.385117056+00:00 stderr F I1213 00:13:15.385105 1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]" 2025-12-13T00:13:15.385138617+00:00 stderr F I1213 00:13:15.385129 1 flags.go:64] FLAG: --oidc-username-claim="email" 2025-12-13T00:13:15.385160037+00:00 stderr F I1213 00:13:15.385151 1 flags.go:64] FLAG: --one-output="false" 2025-12-13T00:13:15.385181478+00:00 stderr F I1213 00:13:15.385172 1 flags.go:64] FLAG: --proxy-endpoints-port="0" 2025-12-13T00:13:15.385203049+00:00 stderr F I1213 00:13:15.385194 1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:8443" 2025-12-13T00:13:15.385235140+00:00 stderr F I1213 00:13:15.385226 1 flags.go:64] FLAG: --skip-headers="false" 2025-12-13T00:13:15.385256711+00:00 stderr F I1213 00:13:15.385247 1 flags.go:64] FLAG: --skip-log-headers="false" 2025-12-13T00:13:15.385277961+00:00 stderr F I1213 00:13:15.385269 1 
flags.go:64] FLAG: --stderrthreshold="" 2025-12-13T00:13:15.385299362+00:00 stderr F I1213 00:13:15.385290 1 flags.go:64] FLAG: --tls-cert-file="/etc/tls/private/tls.crt" 2025-12-13T00:13:15.385329183+00:00 stderr F I1213 00:13:15.385311 1 flags.go:64] FLAG: --tls-cipher-suites="[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305]" 2025-12-13T00:13:15.385352424+00:00 stderr F I1213 00:13:15.385342 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-12-13T00:13:15.385374145+00:00 stderr F I1213 00:13:15.385365 1 flags.go:64] FLAG: --tls-private-key-file="/etc/tls/private/tls.key" 2025-12-13T00:13:15.385397135+00:00 stderr F I1213 00:13:15.385386 1 flags.go:64] FLAG: --tls-reload-interval="1m0s" 2025-12-13T00:13:15.385418696+00:00 stderr F I1213 00:13:15.385409 1 flags.go:64] FLAG: --upstream="http://localhost:8080/" 2025-12-13T00:13:15.385440327+00:00 stderr F I1213 00:13:15.385431 1 flags.go:64] FLAG: --upstream-ca-file="" 2025-12-13T00:13:15.385461488+00:00 stderr F I1213 00:13:15.385452 1 flags.go:64] FLAG: --upstream-client-cert-file="" 2025-12-13T00:13:15.385482608+00:00 stderr F I1213 00:13:15.385473 1 flags.go:64] FLAG: --upstream-client-key-file="" 2025-12-13T00:13:15.385503909+00:00 stderr F I1213 00:13:15.385495 1 flags.go:64] FLAG: --upstream-force-h2c="false" 2025-12-13T00:13:15.385525640+00:00 stderr F I1213 00:13:15.385516 1 flags.go:64] FLAG: --v="3" 2025-12-13T00:13:15.385548250+00:00 stderr F I1213 00:13:15.385538 1 flags.go:64] FLAG: --version="false" 2025-12-13T00:13:15.385569521+00:00 stderr F I1213 00:13:15.385560 1 flags.go:64] FLAG: --vmodule="" 2025-12-13T00:13:15.385594792+00:00 stderr F W1213 00:13:15.385584 1 deprecated.go:66] 2025-12-13T00:13:15.385594792+00:00 stderr F ==== Removed Flag Warning ====================== 2025-12-13T00:13:15.385594792+00:00 
stderr F 2025-12-13T00:13:15.385594792+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-12-13T00:13:15.385594792+00:00 stderr F 2025-12-13T00:13:15.385594792+00:00 stderr F =============================================== 2025-12-13T00:13:15.385594792+00:00 stderr F 2025-12-13T00:13:15.385624753+00:00 stderr F I1213 00:13:15.385615 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml 2025-12-13T00:13:15.386301475+00:00 stderr F I1213 00:13:15.386283 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:13:15.386368868+00:00 stderr F I1213 00:13:15.386357 1 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:13:15.387379442+00:00 stderr F I1213 00:13:15.387348 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:8443 2025-12-13T00:13:15.387845298+00:00 stderr F I1213 00:13:15.387821 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:8443 ././@LongLink0000644000000000000000000000030500000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/0.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_0000644000175000017500000001706415117130646033000 0ustar zuulzuul2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.288377 1 flags.go:64] FLAG: --add-dir-header="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289195 1 flags.go:64] FLAG: --allow-paths="[]" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289218 1 flags.go:64] FLAG: --alsologtostderr="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289224 1 flags.go:64] FLAG: --auth-header-fields-enabled="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289229 1 flags.go:64] FLAG: --auth-header-groups-field-name="x-remote-groups" 
2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289236 1 flags.go:64] FLAG: --auth-header-groups-field-separator="|" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289242 1 flags.go:64] FLAG: --auth-header-user-field-name="x-remote-user" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289247 1 flags.go:64] FLAG: --auth-token-audiences="[]" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289260 1 flags.go:64] FLAG: --client-ca-file="" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289265 1 flags.go:64] FLAG: --config-file="/etc/kube-rbac-proxy/config-file.yaml" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289270 1 flags.go:64] FLAG: --help="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289275 1 flags.go:64] FLAG: --http2-disable="false" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289281 1 flags.go:64] FLAG: --http2-max-concurrent-streams="100" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289288 1 flags.go:64] FLAG: --http2-max-size="262144" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289293 1 flags.go:64] FLAG: --ignore-paths="[]" 2025-08-13T19:59:04.289316963+00:00 stderr F I0813 19:59:04.289306 1 flags.go:64] FLAG: --insecure-listen-address="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289311 1 flags.go:64] FLAG: --kube-api-burst="0" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289318 1 flags.go:64] FLAG: --kube-api-qps="0" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289416 1 flags.go:64] FLAG: --kubeconfig="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289423 1 flags.go:64] FLAG: --log-backtrace-at="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289429 1 flags.go:64] FLAG: --log-dir="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289434 1 flags.go:64] FLAG: --log-file="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 
19:59:04.289439 1 flags.go:64] FLAG: --log-file-max-size="0" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289445 1 flags.go:64] FLAG: --log-flush-frequency="5s" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289468 1 flags.go:64] FLAG: --logtostderr="true" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289474 1 flags.go:64] FLAG: --oidc-ca-file="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289479 1 flags.go:64] FLAG: --oidc-clientID="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289484 1 flags.go:64] FLAG: --oidc-groups-claim="groups" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289489 1 flags.go:64] FLAG: --oidc-groups-prefix="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289493 1 flags.go:64] FLAG: --oidc-issuer="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289498 1 flags.go:64] FLAG: --oidc-sign-alg="[RS256]" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289519 1 flags.go:64] FLAG: --oidc-username-claim="email" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289523 1 flags.go:64] FLAG: --one-output="false" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289528 1 flags.go:64] FLAG: --proxy-endpoints-port="0" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289593 1 flags.go:64] FLAG: --secure-listen-address="0.0.0.0:8443" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289600 1 flags.go:64] FLAG: --skip-headers="false" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289605 1 flags.go:64] FLAG: --skip-log-headers="false" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289610 1 flags.go:64] FLAG: --stderrthreshold="" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289614 1 flags.go:64] FLAG: --tls-cert-file="/etc/tls/private/tls.crt" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289619 1 flags.go:64] FLAG: 
--tls-cipher-suites="[TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305]" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289642 1 flags.go:64] FLAG: --tls-min-version="VersionTLS12" 2025-08-13T19:59:04.289662493+00:00 stderr F I0813 19:59:04.289648 1 flags.go:64] FLAG: --tls-private-key-file="/etc/tls/private/tls.key" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289659 1 flags.go:64] FLAG: --tls-reload-interval="1m0s" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289668 1 flags.go:64] FLAG: --upstream="http://localhost:8080/" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289673 1 flags.go:64] FLAG: --upstream-ca-file="" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289677 1 flags.go:64] FLAG: --upstream-client-cert-file="" 2025-08-13T19:59:04.289688734+00:00 stderr F I0813 19:59:04.289682 1 flags.go:64] FLAG: --upstream-client-key-file="" 2025-08-13T19:59:04.289699934+00:00 stderr F I0813 19:59:04.289686 1 flags.go:64] FLAG: --upstream-force-h2c="false" 2025-08-13T19:59:04.289699934+00:00 stderr F I0813 19:59:04.289691 1 flags.go:64] FLAG: --v="3" 2025-08-13T19:59:04.289709695+00:00 stderr F I0813 19:59:04.289696 1 flags.go:64] FLAG: --version="false" 2025-08-13T19:59:04.289709695+00:00 stderr F I0813 19:59:04.289703 1 flags.go:64] FLAG: --vmodule="" 2025-08-13T19:59:04.289906890+00:00 stderr F W0813 19:59:04.289737 1 deprecated.go:66] 2025-08-13T19:59:04.289906890+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:04.289906890+00:00 stderr F 2025-08-13T19:59:04.289906890+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 
2025-08-13T19:59:04.289906890+00:00 stderr F
2025-08-13T19:59:04.289906890+00:00 stderr F ===============================================
2025-08-13T19:59:04.289906890+00:00 stderr F
2025-08-13T19:59:04.289906890+00:00 stderr F I0813 19:59:04.289857 1 kube-rbac-proxy.go:530] Reading config file: /etc/kube-rbac-proxy/config-file.yaml
2025-08-13T19:59:04.291096624+00:00 stderr F I0813 19:59:04.291044 1 kube-rbac-proxy.go:233] Valid token audiences:
2025-08-13T19:59:04.291190227+00:00 stderr F I0813 19:59:04.291158 1 kube-rbac-proxy.go:347] Reading certificate files
2025-08-13T19:59:04.292125893+00:00 stderr F I0813 19:59:04.292006 1 kube-rbac-proxy.go:395] Starting TCP socket on 0.0.0.0:8443
2025-08-13T19:59:04.292856924+00:00 stderr F I0813 19:59:04.292710 1 kube-rbac-proxy.go:402] Listening securely on 0.0.0.0:8443
2025-08-13T20:42:44.436262779+00:00 stderr F I0813 20:42:44.435618 1 kube-rbac-proxy.go:493] received interrupt, shutting down
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/1.log
2025-08-13T20:05:37.430602579+00:00 stderr F I0813 20:05:37.428176 1 start.go:72] Version: 4.16.0-202406131906.p0.g49a82ff.assembly.stream.el9
2025-08-13T20:05:37.431396461+00:00 stderr F I0813 20:05:37.431086 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:37.531196519+00:00 stderr F I0813 20:05:37.531136 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/machine-api-operator... 2025-08-13T20:08:24.743848747+00:00 stderr F E0813 20:08:24.742720 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:12:03.283463562+00:00 stderr F I0813 20:12:03.282470 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/machine-api-operator 2025-08-13T20:12:03.408514977+00:00 stderr F I0813 20:12:03.408278 1 operator.go:214] Starting Machine API Operator 2025-08-13T20:12:03.414883729+00:00 stderr F I0813 20:12:03.414653 1 reflector.go:289] Starting reflector *v1.DaemonSet (17m16.814879022s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.414883729+00:00 stderr F I0813 20:12:03.414725 1 reflector.go:325] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415272170+00:00 stderr F I0813 20:12:03.415179 1 reflector.go:289] Starting reflector *v1.ValidatingWebhookConfiguration (17m16.814879022s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415292001+00:00 stderr F I0813 20:12:03.415267 1 reflector.go:325] Listing and watching *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415292001+00:00 stderr F I0813 20:12:03.415270 1 reflector.go:289] Starting reflector *v1.Proxy (14m58.262499485s) from 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415304351+00:00 stderr F I0813 20:12:03.415294 1 reflector.go:325] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415413054+00:00 stderr F I0813 20:12:03.415295 1 reflector.go:289] Starting reflector *v1.FeatureGate (14m58.262499485s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415431205+00:00 stderr F I0813 20:12:03.415408 1 reflector.go:325] Listing and watching *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415558749+00:00 stderr F I0813 20:12:03.415230 1 reflector.go:289] Starting reflector *v1.MutatingWebhookConfiguration (17m16.814879022s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415558749+00:00 stderr F I0813 20:12:03.415521 1 reflector.go:325] Listing and watching *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.415901538+00:00 stderr F I0813 20:12:03.415833 1 reflector.go:289] Starting reflector *v1beta1.MachineSet (17m0.773680079s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.415901538+00:00 stderr F I0813 20:12:03.415866 1 reflector.go:325] Listing and watching *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416042702+00:00 stderr F I0813 20:12:03.416014 1 reflector.go:289] Starting reflector *v1.Deployment (17m16.814879022s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.416146305+00:00 stderr F I0813 20:12:03.416087 1 reflector.go:289] Starting reflector *v1.ClusterVersion (14m58.262499485s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 
2025-08-13T20:12:03.416183666+00:00 stderr F I0813 20:12:03.416143 1 reflector.go:325] Listing and watching *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416218927+00:00 stderr F I0813 20:12:03.416098 1 reflector.go:325] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.416357301+00:00 stderr F I0813 20:12:03.416207 1 reflector.go:289] Starting reflector *v1beta1.Machine (17m0.773680079s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416357301+00:00 stderr F I0813 20:12:03.416297 1 reflector.go:325] Listing and watching *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416733772+00:00 stderr F I0813 20:12:03.416616 1 reflector.go:289] Starting reflector *v1.ClusterOperator (14m58.262499485s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.416733772+00:00 stderr F I0813 20:12:03.416652 1 reflector.go:325] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.432283748+00:00 stderr F I0813 20:12:03.432183 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.432682600+00:00 stderr F I0813 20:12:03.432620 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.434528012+00:00 stderr F I0813 20:12:03.434460 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.435013376+00:00 stderr F I0813 20:12:03.434965 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from 
github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.435326135+00:00 stderr F I0813 20:12:03.434887 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.435607613+00:00 stderr F I0813 20:12:03.435557 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.435745857+00:00 stderr F I0813 20:12:03.435723 1 reflector.go:351] Caches populated for *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T20:12:03.436151169+00:00 stderr F I0813 20:12:03.436098 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T20:12:03.442764949+00:00 stderr F I0813 20:12:03.442694 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.443384886+00:00 stderr F I0813 20:12:03.443356 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:12:03.508870364+00:00 stderr F I0813 20:12:03.508710 1 operator.go:226] Synced up caches 2025-08-13T20:12:03.509038179+00:00 stderr F I0813 20:12:03.509010 1 operator.go:231] Started feature gate accessor 2025-08-13T20:12:03.510557512+00:00 stderr F I0813 20:12:03.510481 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:12:03.510951624+00:00 stderr F I0813 20:12:03.510718 1 start.go:121] Synced up machine api informer caches 2025-08-13T20:12:03.511367516+00:00 stderr F I0813 20:12:03.511243 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-machine-api", Name:"machine-api-operator", UID:"7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 
2025-08-13T20:12:03.553660058+00:00 stderr F I0813 20:12:03.553520 1 status.go:69] Syncing status: re-syncing 2025-08-13T20:12:03.564646243+00:00 stderr F I0813 20:12:03.564506 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T20:12:03.571000755+00:00 stderr F I0813 20:12:03.570848 1 status.go:99] Syncing status: available 2025-08-13T20:27:01.768104333+00:00 stderr F I0813 20:27:01.766359 1 status.go:69] Syncing status: re-syncing 2025-08-13T20:27:01.777550173+00:00 stderr F I0813 20:27:01.777474 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T20:27:01.781471015+00:00 stderr F I0813 20:27:01.780930 1 status.go:99] Syncing status: available 2025-08-13T20:41:59.995734233+00:00 stderr F I0813 20:41:59.994921 1 status.go:69] Syncing status: re-syncing 2025-08-13T20:42:00.004860526+00:00 stderr F I0813 20:42:00.003019 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T20:42:00.011051705+00:00 stderr F I0813 20:42:00.009869 1 status.go:99] Syncing status: available 2025-08-13T20:42:36.421768310+00:00 stderr F I0813 20:42:36.415200 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424103457+00:00 stderr F I0813 20:42:36.424040 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.415396 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.419956 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.443491 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.415610 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.421869 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.421888 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.445927616+00:00 stderr F I0813 20:42:36.421911 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
2025-08-13T20:42:36.457922882+00:00 stderr F I0813 20:42:36.421929 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log
2025-08-13T19:59:30.422692088+00:00 stderr F I0813 19:59:30.324610 1 start.go:72] Version: 4.16.0-202406131906.p0.g49a82ff.assembly.stream.el9
2025-08-13T19:59:30.730222715+00:00 stderr F I0813 19:59:30.729311 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:32.264897550+00:00 stderr F I0813 19:59:32.241064 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/machine-api-operator...
2025-08-13T19:59:32.786068206+00:00 stderr F I0813 19:59:32.785728 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/machine-api-operator 2025-08-13T19:59:33.460151711+00:00 stderr F I0813 19:59:33.456613 1 reflector.go:289] Starting reflector *v1.DaemonSet (14m53.02528887s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.460264245+00:00 stderr F I0813 19:59:33.460239 1 reflector.go:325] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.467459410+00:00 stderr F I0813 19:59:33.466231 1 reflector.go:289] Starting reflector *v1.ValidatingWebhookConfiguration (14m53.02528887s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.470166087+00:00 stderr F I0813 19:59:33.470142 1 reflector.go:325] Listing and watching *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.470908838+00:00 stderr F I0813 19:59:33.470700 1 reflector.go:289] Starting reflector *v1.Deployment (14m53.02528887s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.471114444+00:00 stderr F I0813 19:59:33.471098 1 reflector.go:325] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.528297624+00:00 stderr F I0813 19:59:33.526236 1 reflector.go:289] Starting reflector *v1.MutatingWebhookConfiguration (14m53.02528887s) from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.528948693+00:00 stderr F I0813 19:59:33.528641 1 reflector.go:325] Listing and watching *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.536238090+00:00 stderr F I0813 19:59:33.535890 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.546972206+00:00 stderr F I0813 19:59:33.536681 1 reflector.go:289] Starting reflector *v1beta1.Machine (11m3.181866381s) from 
github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.662896021+00:00 stderr F I0813 19:59:33.662732 1 reflector.go:325] Listing and watching *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.677561929+00:00 stderr F I0813 19:59:33.552532 1 reflector.go:289] Starting reflector *v1beta1.MachineSet (11m3.181866381s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.710037935+00:00 stderr F I0813 19:59:33.700993 1 reflector.go:325] Listing and watching *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.570180 1 operator.go:214] Starting Machine API Operator 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.571397 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.571579 1 reflector.go:289] Starting reflector *v1.Proxy (11m37.97476721s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.571608 1 reflector.go:289] Starting reflector *v1.ClusterOperator (11m37.97476721s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.571644 1 reflector.go:289] Starting reflector *v1.ClusterVersion (11m37.97476721s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.732953 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:33.758410833+00:00 stderr F I0813 19:59:33.734011 1 reflector.go:351] Caches populated for 
*v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:33.762266853+00:00 stderr F I0813 19:59:33.762202 1 reflector.go:289] Starting reflector *v1.FeatureGate (11m37.97476721s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.777463697+00:00 stderr F I0813 19:59:33.777395 1 reflector.go:325] Listing and watching *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.813468923+00:00 stderr F I0813 19:59:33.813001 1 reflector.go:325] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:33.818254169+00:00 stderr F I0813 19:59:33.814688 1 reflector.go:325] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:34.934623872+00:00 stderr F I0813 19:59:34.934463 1 reflector.go:325] Listing and watching *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:34.982955680+00:00 stderr F I0813 19:59:34.978433 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-08-13T19:59:35.033982514+00:00 stderr F I0813 19:59:35.019614 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:35.033982514+00:00 stderr F I0813 19:59:35.033448 1 operator.go:226] Synced up caches 2025-08-13T19:59:35.033982514+00:00 stderr F I0813 19:59:35.033478 1 operator.go:231] Started feature gate accessor 2025-08-13T19:59:35.037744732+00:00 stderr F I0813 19:59:35.034090 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T19:59:35.054888230+00:00 stderr F I0813 19:59:35.049334 1 reflector.go:351] 
Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:35.280303986+00:00 stderr F I0813 19:59:35.279731 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:35.350771493+00:00 stderr F I0813 19:59:35.305181 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-machine-api", Name:"machine-api-operator", UID:"7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", 
"MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T19:59:35.350771493+00:00 stderr F I0813 19:59:35.345068 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-08-13T19:59:35.350771493+00:00 stderr F I0813 19:59:35.345498 1 start.go:121] Synced up machine api informer caches 2025-08-13T19:59:35.863532660+00:00 stderr F I0813 19:59:35.854514 1 status.go:69] Syncing status: re-syncing 2025-08-13T19:59:35.998346093+00:00 stderr F I0813 19:59:35.927690 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T19:59:36.046953968+00:00 stderr F I0813 19:59:36.046744 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T19:59:36.141443632+00:00 stderr F I0813 19:59:36.141079 1 status.go:99] Syncing status: available 2025-08-13T19:59:36.367874066+00:00 stderr F I0813 19:59:36.366889 1 status.go:69] Syncing status: re-syncing 2025-08-13T19:59:36.407210128+00:00 stderr F I0813 19:59:36.405968 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-08-13T19:59:36.451908862+00:00 stderr F I0813 19:59:36.451686 1 status.go:99] Syncing status: available 2025-08-13T20:01:53.429105265+00:00 stderr F E0813 20:01:53.428030 1 leaderelection.go:369] Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io 
"machine-api-operator": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:02:53.434100147+00:00 stderr F E0813 20:02:53.432992 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:53.444965011+00:00 stderr F E0813 20:03:53.443054 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:04:53.434373710+00:00 stderr F E0813 20:04:53.434088 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:05:34.054634372+00:00 stderr F I0813 20:05:34.050754 1 leaderelection.go:285] failed to renew lease openshift-machine-api/machine-api-operator: timed out waiting for the condition
2025-08-13T20:05:34.152035191+00:00 stderr F E0813 20:05:34.147127 1 leaderelection.go:308] Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "machine-api-operator": the object has been modified; please apply your changes to the latest version and try again
2025-08-13T20:05:34.165941200+00:00 stderr F F0813 20:05:34.165368 1 start.go:104] Leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/2.log
2025-12-13T00:13:17.774163553+00:00 stderr F I1213 00:13:17.772072 1 start.go:72] Version: 4.16.0-202406131906.p0.g49a82ff.assembly.stream.el9
2025-12-13T00:13:17.774163553+00:00 stderr F I1213 00:13:17.773098 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-12-13T00:13:17.838910469+00:00 stderr F I1213 00:13:17.838867 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/machine-api-operator...
2025-12-13T00:18:07.543192675+00:00 stderr F I1213 00:18:07.542455 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/machine-api-operator
2025-12-13T00:18:07.562862168+00:00 stderr F I1213 00:18:07.562790 1 operator.go:214] Starting Machine API Operator
2025-12-13T00:18:07.563362281+00:00 stderr F I1213 00:18:07.563304 1 reflector.go:289] Starting reflector *v1.ValidatingWebhookConfiguration (16m25.102292694s) from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:07.563362281+00:00 stderr F I1213 00:18:07.563340 1 reflector.go:325] Listing and watching *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:07.563362281+00:00 stderr F I1213 00:18:07.563329 1 reflector.go:289] Starting reflector *v1.DaemonSet (16m25.102292694s) from k8s.io/client-go/informers/factory.go:159
2025-12-13T00:18:07.563394952+00:00 stderr F I1213 00:18:07.563364 1 reflector.go:325] Listing and watching *v1.DaemonSet from
k8s.io/client-go/informers/factory.go:159 2025-12-13T00:18:07.563394952+00:00 stderr F I1213 00:18:07.563382 1 reflector.go:289] Starting reflector *v1.Deployment (16m25.102292694s) from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:18:07.563411883+00:00 stderr F I1213 00:18:07.563397 1 reflector.go:325] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:18:07.563462404+00:00 stderr F I1213 00:18:07.563418 1 reflector.go:289] Starting reflector *v1.ClusterVersion (18m52.857497655s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:18:07.563462404+00:00 stderr F I1213 00:18:07.563442 1 reflector.go:325] Listing and watching *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:18:07.563462404+00:00 stderr F I1213 00:18:07.563433 1 reflector.go:289] Starting reflector *v1.ClusterOperator (18m52.857497655s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:18:07.563481364+00:00 stderr F I1213 00:18:07.563464 1 reflector.go:325] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:18:07.563971158+00:00 stderr F I1213 00:18:07.563308 1 reflector.go:289] Starting reflector *v1.MutatingWebhookConfiguration (16m25.102292694s) from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:18:07.563971158+00:00 stderr F I1213 00:18:07.563900 1 reflector.go:289] Starting reflector *v1beta1.MachineSet (18m39.062983196s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-12-13T00:18:07.563971158+00:00 stderr F I1213 00:18:07.563909 1 reflector.go:325] Listing and watching *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:18:07.564042750+00:00 stderr F I1213 00:18:07.563918 1 reflector.go:325] 
Listing and watching *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-12-13T00:18:07.564269226+00:00 stderr F I1213 00:18:07.564219 1 reflector.go:289] Starting reflector *v1beta1.Machine (18m39.062983196s) from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-12-13T00:18:07.564269226+00:00 stderr F I1213 00:18:07.564253 1 reflector.go:325] Listing and watching *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-12-13T00:18:07.564348218+00:00 stderr F I1213 00:18:07.564269 1 reflector.go:289] Starting reflector *v1.FeatureGate (18m52.857497655s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:18:07.564348218+00:00 stderr F I1213 00:18:07.564333 1 reflector.go:325] Listing and watching *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:18:07.564394449+00:00 stderr F I1213 00:18:07.564234 1 reflector.go:289] Starting reflector *v1.Proxy (18m52.857497655s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:18:07.564473502+00:00 stderr F I1213 00:18:07.564446 1 reflector.go:325] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:18:07.572030973+00:00 stderr F I1213 00:18:07.568719 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:18:07.572088904+00:00 stderr F I1213 00:18:07.572061 1 reflector.go:351] Caches populated for *v1beta1.MachineSet from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-12-13T00:18:07.573478331+00:00 stderr F I1213 00:18:07.572841 1 reflector.go:351] Caches populated for *v1.DaemonSet from k8s.io/client-go/informers/factory.go:159 
2025-12-13T00:18:07.573478331+00:00 stderr F I1213 00:18:07.572925 1 reflector.go:351] Caches populated for *v1beta1.Machine from github.com/openshift/client-go/machine/informers/externalversions/factory.go:125 2025-12-13T00:18:07.573478331+00:00 stderr F I1213 00:18:07.573085 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:18:07.573929342+00:00 stderr F I1213 00:18:07.573814 1 reflector.go:351] Caches populated for *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:18:07.573994854+00:00 stderr F I1213 00:18:07.573829 1 reflector.go:351] Caches populated for *v1.Deployment from k8s.io/client-go/informers/factory.go:159 2025-12-13T00:18:07.575678729+00:00 stderr F I1213 00:18:07.575101 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:18:07.578162546+00:00 stderr F I1213 00:18:07.578056 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:18:07.585288965+00:00 stderr F I1213 00:18:07.585203 1 reflector.go:351] Caches populated for *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:18:07.664128961+00:00 stderr F I1213 00:18:07.664024 1 start.go:121] Synced up machine api informer caches 2025-12-13T00:18:07.664231043+00:00 stderr F I1213 00:18:07.664191 1 operator.go:226] Synced up caches 2025-12-13T00:18:07.664231043+00:00 stderr F I1213 00:18:07.664225 1 operator.go:231] Started feature gate accessor 2025-12-13T00:18:07.664252334+00:00 stderr F I1213 00:18:07.664241 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:18:07.664678196+00:00 stderr F I1213 00:18:07.664551 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-machine-api", Name:"machine-api-operator", UID:"7e7b28b7-f1de-4b37-8a34-a8d6ed3ac1fa", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", 
"SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-12-13T00:18:07.681564055+00:00 stderr F I1213 00:18:07.681495 1 status.go:69] Syncing status: re-syncing 2025-12-13T00:18:07.690726448+00:00 stderr F I1213 00:18:07.690655 1 sync.go:75] Provider is NoOp, skipping synchronisation 2025-12-13T00:18:07.694485718+00:00 stderr F I1213 00:18:07.694424 1 status.go:99] Syncing status: available 2025-12-13T00:21:07.587632078+00:00 stderr F E1213 00:21:07.587073 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/machine-api-operator: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator": dial tcp 10.217.4.1:443: connect: connection refused ././@LongLink0000644000000000000000000000023500000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015117130647033030 5ustar zuulzuul././@LongLink0000644000000000000000000000024700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer/home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-schedul0000755000175000017500000000000015117130654033026 5ustar zuulzuul././@LongLink0000644000000000000000000000025400000000000011604 Lustar 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer/0.log
2025-08-13T20:07:26.749909171+00:00 stderr F I0813 20:07:26.747954       1 cmd.go:92] &{ true {false} installer true map[cert-dir:0xc0001be280 cert-secrets:0xc000871e00 configmaps:0xc0008719a0 namespace:0xc0008717c0 optional-configmaps:0xc000871ae0 optional-secrets:0xc000871a40 pod:0xc000871860 pod-manifest-dir:0xc000871c20 resource-dir:0xc000871b80 revision:0xc000871720 secrets:0xc000871900 v:0xc0001bf9a0] [0xc0001bf9a0 0xc000871720 0xc0008717c0 0xc000871860 0xc000871b80 0xc000871c20 0xc0008719a0 0xc000871ae0 0xc000871a40 0xc000871900 0xc0001be280 0xc000871e00] [] map[cert-configmaps:0xc000871ea0 cert-dir:0xc0001be280 cert-secrets:0xc000871e00 configmaps:0xc0008719a0 help:0xc0001bfd60 kubeconfig:0xc000871680 log-flush-frequency:0xc0001bf900 namespace:0xc0008717c0 optional-cert-configmaps:0xc0001be000 optional-cert-secrets:0xc000871f40 optional-configmaps:0xc000871ae0 optional-secrets:0xc000871a40 pod:0xc000871860 pod-manifest-dir:0xc000871c20 pod-manifests-lock-file:0xc000871d60 resource-dir:0xc000871b80 revision:0xc000871720 secrets:0xc000871900 timeout-duration:0xc000871cc0 v:0xc0001bf9a0 vmodule:0xc0001bfa40] [0xc000871680 0xc000871720 0xc0008717c0 0xc000871860 0xc000871900 0xc0008719a0 0xc000871a40 0xc000871ae0 0xc000871b80 0xc000871c20 0xc000871cc0 0xc000871d60 0xc000871e00 0xc000871ea0 0xc000871f40 0xc0001be000 0xc0001be280 0xc0001bf900 0xc0001bf9a0 0xc0001bfa40 0xc0001bfd60] [0xc000871ea0 0xc0001be280 0xc000871e00 0xc0008719a0 0xc0001bfd60 0xc000871680 0xc0001bf900 0xc0008717c0 0xc0001be000 0xc000871f40 0xc000871ae0 0xc000871a40 0xc000871860 0xc000871c20 0xc000871d60 0xc000871b80 0xc000871720 0xc000871900 0xc000871cc0 0xc0001bf9a0 0xc0001bfa40] map[104:0xc0001bfd60 118:0xc0001bf9a0] [] -1 0 0xc0003b8f60 true 0x215dc20 []}
2025-08-13T20:07:26.749909171+00:00 stderr F I0813 20:07:26.748951       1 cmd.go:93] (*installerpod.InstallOptions)(0xc00053cd00)({
2025-08-13T20:07:26.749909171+00:00 stderr F  KubeConfig: (string) "",
2025-08-13T20:07:26.749909171+00:00 stderr F  KubeClient: (kubernetes.Interface) ,
2025-08-13T20:07:26.749909171+00:00 stderr F  Revision: (string) (len=1) "8",
2025-08-13T20:07:26.749909171+00:00 stderr F  NodeName: (string) "",
2025-08-13T20:07:26.749909171+00:00 stderr F  Namespace: (string) (len=24) "openshift-kube-scheduler",
2025-08-13T20:07:26.749909171+00:00 stderr F  PodConfigMapNamePrefix: (string) (len=18) "kube-scheduler-pod",
2025-08-13T20:07:26.749909171+00:00 stderr F  SecretNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:07:26.749909171+00:00 stderr F  (string) (len=31) "localhost-recovery-client-token"
2025-08-13T20:07:26.749909171+00:00 stderr F  },
2025-08-13T20:07:26.749909171+00:00 stderr F  OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:07:26.749909171+00:00 stderr F  (string) (len=12) "serving-cert"
2025-08-13T20:07:26.749909171+00:00 stderr F  },
2025-08-13T20:07:26.749909171+00:00 stderr F  ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {
2025-08-13T20:07:26.749909171+00:00 stderr F  (string) (len=18) "kube-scheduler-pod",
2025-08-13T20:07:26.749909171+00:00 stderr F  (string) (len=6) "config",
2025-08-13T20:07:26.749909171+00:00 stderr F  (string) (len=17) "serviceaccount-ca",
2025-08-13T20:07:26.749909171+00:00 stderr F  (string) (len=20) "scheduler-kubeconfig",
2025-08-13T20:07:26.749909171+00:00 stderr F  (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig"
2025-08-13T20:07:26.749909171+00:00 stderr F  },
2025-08-13T20:07:26.749909171+00:00 stderr F  OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {
2025-08-13T20:07:26.749909171+00:00 stderr F  (string) (len=16) "policy-configmap"
2025-08-13T20:07:26.749909171+00:00 stderr F  },
2025-08-13T20:07:26.749909171+00:00 stderr F  CertSecretNames: ([]string) (len=1 cap=1) {
2025-08-13T20:07:26.749909171+00:00 stderr F  (string) (len=30) "kube-scheduler-client-cert-key"
2025-08-13T20:07:26.749909171+00:00 stderr F  },
2025-08-13T20:07:26.749909171+00:00 stderr F  OptionalCertSecretNamePrefixes: ([]string) ,
2025-08-13T20:07:26.749909171+00:00 stderr F  CertConfigMapNamePrefixes: ([]string) ,
2025-08-13T20:07:26.749909171+00:00 stderr F  OptionalCertConfigMapNamePrefixes: ([]string) ,
2025-08-13T20:07:26.749909171+00:00 stderr F  CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs",
2025-08-13T20:07:26.749909171+00:00 stderr F  ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources",
2025-08-13T20:07:26.749909171+00:00 stderr F  PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests",
2025-08-13T20:07:26.749909171+00:00 stderr F  Timeout: (time.Duration) 2m0s,
2025-08-13T20:07:26.749909171+00:00 stderr F  StaticPodManifestsLockFile: (string) "",
2025-08-13T20:07:26.749909171+00:00 stderr F  PodMutationFns: ([]installerpod.PodMutationFunc) ,
2025-08-13T20:07:26.749909171+00:00 stderr F  KubeletVersion: (string) ""
2025-08-13T20:07:26.749909171+00:00 stderr F })
2025-08-13T20:07:26.759987490+00:00 stderr F I0813 20:07:26.756597       1 cmd.go:410] Getting controller reference for node crc
2025-08-13T20:07:26.890296186+00:00 stderr F I0813 20:07:26.890185       1 cmd.go:423] Waiting for installer revisions to settle for node crc
2025-08-13T20:07:26.906916322+00:00 stderr F I0813 20:07:26.900969       1 cmd.go:515] Waiting additional period after revisions have settled for node crc
2025-08-13T20:07:56.901688677+00:00 stderr F I0813 20:07:56.901382       1 cmd.go:521] Getting installer pods for node crc
2025-08-13T20:07:56.915142843+00:00 stderr F I0813 20:07:56.913745       1 cmd.go:539] Latest installer revision for node crc is: 8
2025-08-13T20:07:56.915142843+00:00 stderr F I0813 20:07:56.913829       1 cmd.go:428] Querying kubelet version for node crc
2025-08-13T20:07:56.918199111+00:00 stderr F I0813 20:07:56.918136       1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc
2025-08-13T20:07:56.918225582+00:00 stderr F I0813 20:07:56.918198       1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8" ...
2025-08-13T20:07:56.919940471+00:00 stderr F I0813 20:07:56.918696       1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8" ...
2025-08-13T20:07:56.919940471+00:00 stderr F I0813 20:07:56.918734       1 cmd.go:226] Getting secrets ...
2025-08-13T20:07:56.926031345+00:00 stderr F I0813 20:07:56.925940       1 copy.go:32] Got secret openshift-kube-scheduler/localhost-recovery-client-token-8
2025-08-13T20:07:56.931971486+00:00 stderr F I0813 20:07:56.930266       1 copy.go:32] Got secret openshift-kube-scheduler/serving-cert-8
2025-08-13T20:07:56.931971486+00:00 stderr F I0813 20:07:56.930330       1 cmd.go:239] Getting config maps ...
2025-08-13T20:07:56.940625444+00:00 stderr F I0813 20:07:56.940492       1 copy.go:60] Got configMap openshift-kube-scheduler/config-8
2025-08-13T20:07:56.951428843+00:00 stderr F I0813 20:07:56.946266       1 copy.go:60] Got configMap openshift-kube-scheduler/kube-scheduler-cert-syncer-kubeconfig-8
2025-08-13T20:07:56.951934968+00:00 stderr F I0813 20:07:56.951697       1 copy.go:60] Got configMap openshift-kube-scheduler/kube-scheduler-pod-8
2025-08-13T20:07:56.960256417+00:00 stderr F I0813 20:07:56.959273       1 copy.go:60] Got configMap openshift-kube-scheduler/scheduler-kubeconfig-8
2025-08-13T20:07:56.971857669+00:00 stderr F I0813 20:07:56.971645       1 copy.go:60] Got configMap openshift-kube-scheduler/serviceaccount-ca-8
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.979995       1 copy.go:52] Failed to get config map openshift-kube-scheduler/policy-configmap-8: configmaps "policy-configmap-8" not found
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.980054       1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token" ...
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.980286       1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token/namespace" ...
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.980565       1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token/service-ca.crt" ...
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.980868       1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token/token" ...
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981019       1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/localhost-recovery-client-token/ca.crt" ...
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981161       1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/serving-cert" ...
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981239       1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/serving-cert/tls.crt" ...
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981328       1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/secrets/serving-cert/tls.key" ...
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981428       1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/config" ...
2025-08-13T20:07:56.981905237+00:00 stderr F I0813 20:07:56.981513       1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/config/config.yaml" ...
2025-08-13T20:07:56.981986470+00:00 stderr F I0813 20:07:56.981908       1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-cert-syncer-kubeconfig" ...
2025-08-13T20:07:56.981986470+00:00 stderr F I0813 20:07:56.981980       1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig" ...
2025-08-13T20:07:56.982170095+00:00 stderr F I0813 20:07:56.982079       1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-pod" ...
2025-08-13T20:07:56.982186575+00:00 stderr F I0813 20:07:56.982176       1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-pod/version" ...
2025-08-13T20:07:56.982357660+00:00 stderr F I0813 20:07:56.982274       1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-pod/forceRedeploymentReason" ...
2025-08-13T20:07:56.982442893+00:00 stderr F I0813 20:07:56.982391       1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/kube-scheduler-pod/pod.yaml" ...
2025-08-13T20:07:56.982599657+00:00 stderr F I0813 20:07:56.982513       1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/scheduler-kubeconfig" ...
2025-08-13T20:07:56.982660939+00:00 stderr F I0813 20:07:56.982614       1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/scheduler-kubeconfig/kubeconfig" ...
2025-08-13T20:07:56.983564815+00:00 stderr F I0813 20:07:56.983457       1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/serviceaccount-ca" ...
2025-08-13T20:07:56.983733740+00:00 stderr F I0813 20:07:56.983647       1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/configmaps/serviceaccount-ca/ca-bundle.crt" ...
2025-08-13T20:07:56.986030016+00:00 stderr F I0813 20:07:56.984629       1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-certs" ...
2025-08-13T20:07:56.986030016+00:00 stderr F I0813 20:07:56.984666       1 cmd.go:226] Getting secrets ...
2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.109998       1 copy.go:32] Got secret openshift-kube-scheduler/kube-scheduler-client-cert-key
2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110076       1 cmd.go:239] Getting config maps ...
2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110090       1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key" ...
2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110125 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.crt" ... 2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110429 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-certs/secrets/kube-scheduler-client-cert-key/tls.key" ... 2025-08-13T20:07:57.112871912+00:00 stderr F I0813 20:07:57.110567 1 cmd.go:332] Getting pod configmaps/kube-scheduler-pod-8 -n openshift-kube-scheduler 2025-08-13T20:07:57.307066400+00:00 stderr F I0813 20:07:57.307008 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-08-13T20:07:57.307175413+00:00 stderr F I0813 20:07:57.307143 1 cmd.go:376] Writing a pod under "kube-scheduler-pod.yaml" key 2025-08-13T20:07:57.307175413+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"8","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10259 )')\" ]; do\n echo -n \".\"\n sleep 
1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocatio
n=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/con
figmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 -v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-08-13T20:07:57.336503384+00:00 stderr F I0813 20:07:57.332129 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml" ... 
2025-08-13T20:07:57.336503384+00:00 stderr F I0813 20:07:57.332700 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-scheduler-pod.yaml" ... 2025-08-13T20:07:57.336503384+00:00 stderr F I0813 20:07:57.332713 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-scheduler-pod.yaml" ... 2025-08-13T20:07:57.336503384+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"8","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-8"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10259 )')\" ]; do\n echo -n \".\"\n sleep 
1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocatio
n=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/con
figmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 -v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9/installer/0.log
2025-12-13T00:19:59.004567090+00:00 stderr F I1213 00:19:59.004238 1 cmd.go:92] &{ true {false} installer true map[cert-configmaps:0xc000b4e3c0 cert-dir:0xc000b4e5a0 cert-secrets:0xc000b4e320 configmaps:0xc000b2fea0 namespace:0xc000b2fcc0 optional-cert-configmaps:0xc000b4e500 optional-cert-secrets:0xc000b4e460 optional-configmaps:0xc000b4e000 optional-secrets:0xc000b2ff40 pod:0xc000b2fd60 pod-manifest-dir:0xc000b4e140 resource-dir:0xc000b4e0a0 revision:0xc000b2fc20 secrets:0xc000b2fe00 v:0xc000b4f9a0] [0xc000b4f9a0 0xc000b2fc20 0xc000b2fcc0 0xc000b2fd60 0xc000b4e0a0 0xc000b4e140 0xc000b2fea0 0xc000b4e000 0xc000b2fe00 0xc000b2ff40 0xc000b4e5a0 0xc000b4e3c0 0xc000b4e500 0xc000b4e320 0xc000b4e460] [] map[cert-configmaps:0xc000b4e3c0 cert-dir:0xc000b4e5a0 cert-secrets:0xc000b4e320 configmaps:0xc000b2fea0 help:0xc000b4fd60 
kubeconfig:0xc000b2fb80 log-flush-frequency:0xc000b4f900 namespace:0xc000b2fcc0 optional-cert-configmaps:0xc000b4e500 optional-cert-secrets:0xc000b4e460 optional-configmaps:0xc000b4e000 optional-secrets:0xc000b2ff40 pod:0xc000b2fd60 pod-manifest-dir:0xc000b4e140 pod-manifests-lock-file:0xc000b4e280 resource-dir:0xc000b4e0a0 revision:0xc000b2fc20 secrets:0xc000b2fe00 timeout-duration:0xc000b4e1e0 v:0xc000b4f9a0 vmodule:0xc000b4fa40] [0xc000b2fb80 0xc000b2fc20 0xc000b2fcc0 0xc000b2fd60 0xc000b2fe00 0xc000b2fea0 0xc000b2ff40 0xc000b4e000 0xc000b4e0a0 0xc000b4e140 0xc000b4e1e0 0xc000b4e280 0xc000b4e320 0xc000b4e3c0 0xc000b4e460 0xc000b4e500 0xc000b4e5a0 0xc000b4f900 0xc000b4f9a0 0xc000b4fa40 0xc000b4fd60] [0xc000b4e3c0 0xc000b4e5a0 0xc000b4e320 0xc000b2fea0 0xc000b4fd60 0xc000b2fb80 0xc000b4f900 0xc000b2fcc0 0xc000b4e500 0xc000b4e460 0xc000b4e000 0xc000b2ff40 0xc000b2fd60 0xc000b4e140 0xc000b4e280 0xc000b4e0a0 0xc000b2fc20 0xc000b2fe00 0xc000b4e1e0 0xc000b4f9a0 0xc000b4fa40] map[104:0xc000b4fd60 118:0xc000b4f9a0] [] -1 0 0xc000b11470 true 0xa51380 []} 2025-12-13T00:19:59.004896719+00:00 stderr F I1213 00:19:59.004657 1 cmd.go:93] (*installerpod.InstallOptions)(0xc000ab71e0)({ 2025-12-13T00:19:59.004896719+00:00 stderr F KubeConfig: (string) "", 2025-12-13T00:19:59.004896719+00:00 stderr F KubeClient: (kubernetes.Interface) , 2025-12-13T00:19:59.004896719+00:00 stderr F Revision: (string) (len=2) "13", 2025-12-13T00:19:59.004896719+00:00 stderr F NodeName: (string) "", 2025-12-13T00:19:59.004896719+00:00 stderr F Namespace: (string) (len=24) "openshift-kube-apiserver", 2025-12-13T00:19:59.004896719+00:00 stderr F PodConfigMapNamePrefix: (string) (len=18) "kube-apiserver-pod", 2025-12-13T00:19:59.004896719+00:00 stderr F SecretNamePrefixes: ([]string) (len=3 cap=4) { 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=11) "etcd-client", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=34) "localhost-recovery-serving-certkey", 
2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=31) "localhost-recovery-client-token" 2025-12-13T00:19:59.004896719+00:00 stderr F }, 2025-12-13T00:19:59.004896719+00:00 stderr F OptionalSecretNamePrefixes: ([]string) (len=2 cap=2) { 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=17) "encryption-config", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=21) "webhook-authenticator" 2025-12-13T00:19:59.004896719+00:00 stderr F }, 2025-12-13T00:19:59.004896719+00:00 stderr F ConfigMapNamePrefixes: ([]string) (len=8 cap=8) { 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=18) "kube-apiserver-pod", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=6) "config", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=37) "kube-apiserver-cert-syncer-kubeconfig", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=28) "bound-sa-token-signing-certs", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=15) "etcd-serving-ca", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=18) "kubelet-serving-ca", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=22) "sa-token-signing-certs", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=29) "kube-apiserver-audit-policies" 2025-12-13T00:19:59.004896719+00:00 stderr F }, 2025-12-13T00:19:59.004896719+00:00 stderr F OptionalConfigMapNamePrefixes: ([]string) (len=3 cap=4) { 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=14) "oauth-metadata", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=12) "cloud-config", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=24) "kube-apiserver-server-ca" 2025-12-13T00:19:59.004896719+00:00 stderr F }, 2025-12-13T00:19:59.004896719+00:00 stderr F CertSecretNames: ([]string) (len=10 cap=16) { 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=17) "aggregator-client", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=30) "localhost-serving-cert-certkey", 
2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=31) "service-network-serving-certkey", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=37) "external-loadbalancer-serving-certkey", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=37) "internal-loadbalancer-serving-certkey", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=33) "bound-service-account-signing-key", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=40) "control-plane-node-admin-client-cert-key", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=31) "check-endpoints-client-cert-key", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=14) "kubelet-client", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=16) "node-kubeconfigs" 2025-12-13T00:19:59.004896719+00:00 stderr F }, 2025-12-13T00:19:59.004896719+00:00 stderr F OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) { 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=17) "user-serving-cert", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=21) "user-serving-cert-000", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=21) "user-serving-cert-001", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=21) "user-serving-cert-002", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=21) "user-serving-cert-003", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=21) "user-serving-cert-004", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=21) "user-serving-cert-005", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=21) "user-serving-cert-006", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=21) "user-serving-cert-007", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=21) "user-serving-cert-008", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=21) "user-serving-cert-009" 2025-12-13T00:19:59.004896719+00:00 stderr F }, 2025-12-13T00:19:59.004896719+00:00 stderr F 
CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=20) "aggregator-client-ca", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=9) "client-ca", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=29) "control-plane-node-kubeconfig", 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=26) "check-endpoints-kubeconfig" 2025-12-13T00:19:59.004896719+00:00 stderr F }, 2025-12-13T00:19:59.004896719+00:00 stderr F OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { 2025-12-13T00:19:59.004896719+00:00 stderr F (string) (len=17) "trusted-ca-bundle" 2025-12-13T00:19:59.004896719+00:00 stderr F }, 2025-12-13T00:19:59.004896719+00:00 stderr F CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", 2025-12-13T00:19:59.004896719+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-12-13T00:19:59.004896719+00:00 stderr F PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", 2025-12-13T00:19:59.004896719+00:00 stderr F Timeout: (time.Duration) 2m0s, 2025-12-13T00:19:59.004896719+00:00 stderr F StaticPodManifestsLockFile: (string) "", 2025-12-13T00:19:59.004896719+00:00 stderr F PodMutationFns: ([]installerpod.PodMutationFunc) , 2025-12-13T00:19:59.004896719+00:00 stderr F KubeletVersion: (string) "" 2025-12-13T00:19:59.004896719+00:00 stderr F }) 2025-12-13T00:19:59.097213574+00:00 stderr F I1213 00:19:59.097149 1 cmd.go:410] Getting controller reference for node crc 2025-12-13T00:19:59.108196317+00:00 stderr F I1213 00:19:59.108133 1 cmd.go:423] Waiting for installer revisions to settle for node crc 2025-12-13T00:19:59.111081817+00:00 stderr F I1213 00:19:59.111034 1 cmd.go:515] Waiting additional period after revisions have settled for node crc 2025-12-13T00:20:29.111725634+00:00 stderr F I1213 00:20:29.111627 1 cmd.go:521] Getting installer pods for node crc 2025-12-13T00:20:29.119705774+00:00 stderr F 
I1213 00:20:29.119647 1 cmd.go:539] Latest installer revision for node crc is: 13 2025-12-13T00:20:29.119705774+00:00 stderr F I1213 00:20:29.119673 1 cmd.go:428] Querying kubelet version for node crc 2025-12-13T00:20:29.125495513+00:00 stderr F I1213 00:20:29.125452 1 cmd.go:441] Got kubelet version 1.29.5+29c95f3 on target node crc 2025-12-13T00:20:29.125495513+00:00 stderr F I1213 00:20:29.125478 1 cmd.go:290] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13" ... 2025-12-13T00:20:29.126038109+00:00 stderr F I1213 00:20:29.126001 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13" ... 2025-12-13T00:20:29.126038109+00:00 stderr F I1213 00:20:29.126025 1 cmd.go:226] Getting secrets ... 2025-12-13T00:20:29.128985190+00:00 stderr F I1213 00:20:29.128880 1 copy.go:32] Got secret openshift-kube-apiserver/etcd-client-13 2025-12-13T00:20:29.131888060+00:00 stderr F I1213 00:20:29.131833 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-client-token-13 2025-12-13T00:20:29.134135231+00:00 stderr F I1213 00:20:29.134100 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-serving-certkey-13 2025-12-13T00:20:29.135957802+00:00 stderr F I1213 00:20:29.135903 1 copy.go:24] Failed to get secret openshift-kube-apiserver/encryption-config-13: secrets "encryption-config-13" not found 2025-12-13T00:20:29.137924227+00:00 stderr F I1213 00:20:29.137885 1 copy.go:32] Got secret openshift-kube-apiserver/webhook-authenticator-13 2025-12-13T00:20:29.137924227+00:00 stderr F I1213 00:20:29.137919 1 cmd.go:239] Getting config maps ... 
2025-12-13T00:20:29.139551181+00:00 stderr F I1213 00:20:29.139520 1 copy.go:60] Got configMap openshift-kube-apiserver/bound-sa-token-signing-certs-13 2025-12-13T00:20:29.141159936+00:00 stderr F I1213 00:20:29.141127 1 copy.go:60] Got configMap openshift-kube-apiserver/config-13 2025-12-13T00:20:29.143515791+00:00 stderr F I1213 00:20:29.143473 1 copy.go:60] Got configMap openshift-kube-apiserver/etcd-serving-ca-13 2025-12-13T00:20:29.315207625+00:00 stderr F I1213 00:20:29.315131 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-audit-policies-13 2025-12-13T00:20:29.519299732+00:00 stderr F I1213 00:20:29.518828 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-cert-syncer-kubeconfig-13 2025-12-13T00:20:29.715985006+00:00 stderr F I1213 00:20:29.715198 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-pod-13 2025-12-13T00:20:29.916023511+00:00 stderr F I1213 00:20:29.915211 1 copy.go:60] Got configMap openshift-kube-apiserver/kubelet-serving-ca-13 2025-12-13T00:20:30.114918866+00:00 stderr F I1213 00:20:30.114848 1 copy.go:60] Got configMap openshift-kube-apiserver/sa-token-signing-certs-13 2025-12-13T00:20:30.314837279+00:00 stderr F I1213 00:20:30.314762 1 copy.go:52] Failed to get config map openshift-kube-apiserver/cloud-config-13: configmaps "cloud-config-13" not found 2025-12-13T00:20:30.514949687+00:00 stderr F I1213 00:20:30.514867 1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-server-ca-13 2025-12-13T00:20:30.716115524+00:00 stderr F I1213 00:20:30.716027 1 copy.go:60] Got configMap openshift-kube-apiserver/oauth-metadata-13 2025-12-13T00:20:30.716115524+00:00 stderr F I1213 00:20:30.716075 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/etcd-client" ... 
2025-12-13T00:20:30.716649018+00:00 stderr F I1213 00:20:30.716582 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/etcd-client/tls.crt" ... 2025-12-13T00:20:30.716958377+00:00 stderr F I1213 00:20:30.716886 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/etcd-client/tls.key" ... 2025-12-13T00:20:30.717179223+00:00 stderr F I1213 00:20:30.717140 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token" ... 2025-12-13T00:20:30.717283647+00:00 stderr F I1213 00:20:30.717251 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token/namespace" ... 2025-12-13T00:20:30.717459731+00:00 stderr F I1213 00:20:30.717422 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token/service-ca.crt" ... 2025-12-13T00:20:30.717659717+00:00 stderr F I1213 00:20:30.717621 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token/token" ... 2025-12-13T00:20:30.717832391+00:00 stderr F I1213 00:20:30.717791 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-client-token/ca.crt" ... 2025-12-13T00:20:30.718039437+00:00 stderr F I1213 00:20:30.718011 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-serving-certkey" ... 2025-12-13T00:20:30.718156770+00:00 stderr F I1213 00:20:30.718123 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-serving-certkey/tls.key" ... 
2025-12-13T00:20:30.718350906+00:00 stderr F I1213 00:20:30.718323 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/localhost-recovery-serving-certkey/tls.crt" ... 2025-12-13T00:20:30.718541831+00:00 stderr F I1213 00:20:30.718515 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/webhook-authenticator" ... 2025-12-13T00:20:30.718651404+00:00 stderr F I1213 00:20:30.718619 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/secrets/webhook-authenticator/kubeConfig" ... 2025-12-13T00:20:30.718829809+00:00 stderr F I1213 00:20:30.718802 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/bound-sa-token-signing-certs" ... 2025-12-13T00:20:30.719044895+00:00 stderr F I1213 00:20:30.719009 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/bound-sa-token-signing-certs/service-account-001.pub" ... 2025-12-13T00:20:30.719274101+00:00 stderr F I1213 00:20:30.719226 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/config" ... 2025-12-13T00:20:30.719363583+00:00 stderr F I1213 00:20:30.719332 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/config/config.yaml" ... 2025-12-13T00:20:30.719803165+00:00 stderr F I1213 00:20:30.719746 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/etcd-serving-ca" ... 2025-12-13T00:20:30.719957309+00:00 stderr F I1213 00:20:30.719904 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/etcd-serving-ca/ca-bundle.crt" ... 
2025-12-13T00:20:30.720281769+00:00 stderr F I1213 00:20:30.720230 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-audit-policies" ... 2025-12-13T00:20:30.720432433+00:00 stderr F I1213 00:20:30.720385 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-audit-policies/policy.yaml" ... 2025-12-13T00:20:30.726044928+00:00 stderr F I1213 00:20:30.725983 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-cert-syncer-kubeconfig" ... 2025-12-13T00:20:30.726220493+00:00 stderr F I1213 00:20:30.726171 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig" ... 2025-12-13T00:20:30.726486640+00:00 stderr F I1213 00:20:30.726443 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod" ... 2025-12-13T00:20:30.726640504+00:00 stderr F I1213 00:20:30.726591 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod/version" ... 2025-12-13T00:20:30.726922332+00:00 stderr F I1213 00:20:30.726870 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod/forceRedeploymentReason" ... 2025-12-13T00:20:30.727172699+00:00 stderr F I1213 00:20:30.727131 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod/kube-apiserver-startup-monitor-pod.yaml" ... 2025-12-13T00:20:30.727378854+00:00 stderr F I1213 00:20:30.727339 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-pod/pod.yaml" ... 
2025-12-13T00:20:30.727644702+00:00 stderr F I1213 00:20:30.727603 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kubelet-serving-ca" ... 2025-12-13T00:20:30.727898448+00:00 stderr F I1213 00:20:30.727850 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kubelet-serving-ca/ca-bundle.crt" ... 2025-12-13T00:20:30.728230187+00:00 stderr F I1213 00:20:30.728183 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/sa-token-signing-certs" ... 2025-12-13T00:20:30.728384252+00:00 stderr F I1213 00:20:30.728321 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/sa-token-signing-certs/service-account-001.pub" ... 2025-12-13T00:20:30.728547437+00:00 stderr F I1213 00:20:30.728510 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/sa-token-signing-certs/service-account-002.pub" ... 2025-12-13T00:20:30.728718212+00:00 stderr F I1213 00:20:30.728689 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/sa-token-signing-certs/service-account-003.pub" ... 2025-12-13T00:20:30.728969779+00:00 stderr F I1213 00:20:30.728911 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-server-ca" ... 2025-12-13T00:20:30.729112713+00:00 stderr F I1213 00:20:30.729084 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/kube-apiserver-server-ca/ca-bundle.crt" ... 2025-12-13T00:20:30.729371410+00:00 stderr F I1213 00:20:30.729334 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/oauth-metadata" ... 
2025-12-13T00:20:30.729490683+00:00 stderr F I1213 00:20:30.729458 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/configmaps/oauth-metadata/oauthMetadata" ... 2025-12-13T00:20:30.730398607+00:00 stderr F I1213 00:20:30.730354 1 cmd.go:218] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs" ... 2025-12-13T00:20:30.730398607+00:00 stderr F I1213 00:20:30.730393 1 cmd.go:226] Getting secrets ... 2025-12-13T00:20:30.915568723+00:00 stderr F I1213 00:20:30.915509 1 copy.go:32] Got secret openshift-kube-apiserver/aggregator-client 2025-12-13T00:20:31.115215269+00:00 stderr F I1213 00:20:31.115069 1 copy.go:32] Got secret openshift-kube-apiserver/bound-service-account-signing-key 2025-12-13T00:20:31.315068370+00:00 stderr F I1213 00:20:31.314984 1 copy.go:32] Got secret openshift-kube-apiserver/check-endpoints-client-cert-key 2025-12-13T00:20:31.514970272+00:00 stderr F I1213 00:20:31.514886 1 copy.go:32] Got secret openshift-kube-apiserver/control-plane-node-admin-client-cert-key 2025-12-13T00:20:31.715781989+00:00 stderr F I1213 00:20:31.715692 1 copy.go:32] Got secret openshift-kube-apiserver/external-loadbalancer-serving-certkey 2025-12-13T00:20:31.915379783+00:00 stderr F I1213 00:20:31.915265 1 copy.go:32] Got secret openshift-kube-apiserver/internal-loadbalancer-serving-certkey 2025-12-13T00:20:32.115429130+00:00 stderr F I1213 00:20:32.115370 1 copy.go:32] Got secret openshift-kube-apiserver/kubelet-client 2025-12-13T00:20:32.314843918+00:00 stderr F I1213 00:20:32.314799 1 copy.go:32] Got secret openshift-kube-apiserver/localhost-serving-cert-certkey 2025-12-13T00:20:32.515121650+00:00 stderr F I1213 00:20:32.515064 1 copy.go:32] Got secret openshift-kube-apiserver/node-kubeconfigs 2025-12-13T00:20:32.715078615+00:00 stderr F I1213 00:20:32.715016 1 copy.go:32] Got secret openshift-kube-apiserver/service-network-serving-certkey 2025-12-13T00:20:32.914910872+00:00 stderr F 
I1213 00:20:32.914807 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert: secrets "user-serving-cert" not found 2025-12-13T00:20:33.115807973+00:00 stderr F I1213 00:20:33.115416 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-000: secrets "user-serving-cert-000" not found 2025-12-13T00:20:33.314715421+00:00 stderr F I1213 00:20:33.314613 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-001: secrets "user-serving-cert-001" not found 2025-12-13T00:20:33.514454540+00:00 stderr F I1213 00:20:33.514407 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-002: secrets "user-serving-cert-002" not found 2025-12-13T00:20:33.714416726+00:00 stderr F I1213 00:20:33.714364 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: secrets "user-serving-cert-003" not found 2025-12-13T00:20:33.917158077+00:00 stderr F I1213 00:20:33.917103 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: secrets "user-serving-cert-004" not found 2025-12-13T00:20:34.115489249+00:00 stderr F I1213 00:20:34.115439 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-005: secrets "user-serving-cert-005" not found 2025-12-13T00:20:34.314629083+00:00 stderr F I1213 00:20:34.314520 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-006: secrets "user-serving-cert-006" not found 2025-12-13T00:20:34.520445967+00:00 stderr F I1213 00:20:34.520392 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-007: secrets "user-serving-cert-007" not found 2025-12-13T00:20:34.714652648+00:00 stderr F I1213 00:20:34.714613 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-008: secrets "user-serving-cert-008" not found 2025-12-13T00:20:34.914705727+00:00 stderr F I1213 00:20:34.914322 1 copy.go:24] Failed to get secret 
openshift-kube-apiserver/user-serving-cert-009: secrets "user-serving-cert-009" not found 2025-12-13T00:20:34.914767888+00:00 stderr F I1213 00:20:34.914754 1 cmd.go:239] Getting config maps ... 2025-12-13T00:20:35.117828068+00:00 stderr F I1213 00:20:35.117784 1 copy.go:60] Got configMap openshift-kube-apiserver/aggregator-client-ca 2025-12-13T00:20:35.314710711+00:00 stderr F I1213 00:20:35.314626 1 copy.go:60] Got configMap openshift-kube-apiserver/check-endpoints-kubeconfig 2025-12-13T00:20:35.515752756+00:00 stderr F I1213 00:20:35.515669 1 copy.go:60] Got configMap openshift-kube-apiserver/client-ca 2025-12-13T00:20:35.714853918+00:00 stderr F I1213 00:20:35.714780 1 copy.go:60] Got configMap openshift-kube-apiserver/control-plane-node-kubeconfig 2025-12-13T00:20:35.923314914+00:00 stderr F I1213 00:20:35.923213 1 copy.go:60] Got configMap openshift-kube-apiserver/trusted-ca-bundle 2025-12-13T00:20:35.923703715+00:00 stderr F I1213 00:20:35.923667 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client" ... 2025-12-13T00:20:35.924000893+00:00 stderr F I1213 00:20:35.923966 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.key" ... 2025-12-13T00:20:35.924292361+00:00 stderr F I1213 00:20:35.924246 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.crt" ... 2025-12-13T00:20:35.924478695+00:00 stderr F I1213 00:20:35.924423 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key" ... 2025-12-13T00:20:35.924527327+00:00 stderr F I1213 00:20:35.924494 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.key" ... 
2025-12-13T00:20:35.924702981+00:00 stderr F I1213 00:20:35.924666 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/bound-service-account-signing-key/service-account.pub" ... 2025-12-13T00:20:35.924840495+00:00 stderr F I1213 00:20:35.924801 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key" ... 2025-12-13T00:20:35.924840495+00:00 stderr F I1213 00:20:35.924829 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.crt" ... 2025-12-13T00:20:35.925052752+00:00 stderr F I1213 00:20:35.925026 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/check-endpoints-client-cert-key/tls.key" ... 2025-12-13T00:20:35.925211196+00:00 stderr F I1213 00:20:35.925177 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key" ... 2025-12-13T00:20:35.925211196+00:00 stderr F I1213 00:20:35.925203 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.crt" ... 2025-12-13T00:20:35.925378010+00:00 stderr F I1213 00:20:35.925353 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/control-plane-node-admin-client-cert-key/tls.key" ... 2025-12-13T00:20:35.925510354+00:00 stderr F I1213 00:20:35.925485 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey" ... 2025-12-13T00:20:35.925524114+00:00 stderr F I1213 00:20:35.925508 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.key" ... 
2025-12-13T00:20:35.925711609+00:00 stderr F I1213 00:20:35.925667 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.crt" ... 2025-12-13T00:20:35.925896994+00:00 stderr F I1213 00:20:35.925857 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey" ... 2025-12-13T00:20:35.925896994+00:00 stderr F I1213 00:20:35.925888 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt" ... 2025-12-13T00:20:35.926210833+00:00 stderr F I1213 00:20:35.926163 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" ... 2025-12-13T00:20:35.926363507+00:00 stderr F I1213 00:20:35.926330 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client" ... 2025-12-13T00:20:35.926363507+00:00 stderr F I1213 00:20:35.926351 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.crt" ... 2025-12-13T00:20:35.926526201+00:00 stderr F I1213 00:20:35.926487 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/kubelet-client/tls.key" ... 2025-12-13T00:20:35.926703486+00:00 stderr F I1213 00:20:35.926659 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey" ... 2025-12-13T00:20:35.926703486+00:00 stderr F I1213 00:20:35.926687 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.crt" ... 
2025-12-13T00:20:35.926914711+00:00 stderr F I1213 00:20:35.926880 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.key" ... 2025-12-13T00:20:35.927135857+00:00 stderr F I1213 00:20:35.927100 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs" ... 2025-12-13T00:20:35.927135857+00:00 stderr F I1213 00:20:35.927122 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-int.kubeconfig" ... 2025-12-13T00:20:35.927303092+00:00 stderr F I1213 00:20:35.927274 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig" ... 2025-12-13T00:20:35.927447615+00:00 stderr F I1213 00:20:35.927424 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig" ... 2025-12-13T00:20:35.927584319+00:00 stderr F I1213 00:20:35.927560 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/lb-ext.kubeconfig" ... 2025-12-13T00:20:35.927725803+00:00 stderr F I1213 00:20:35.927703 1 cmd.go:258] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey" ... 2025-12-13T00:20:35.927733743+00:00 stderr F I1213 00:20:35.927726 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.crt" ... 2025-12-13T00:20:35.927918279+00:00 stderr F I1213 00:20:35.927890 1 cmd.go:636] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.key" ... 
2025-12-13T00:20:35.928196366+00:00 stderr F I1213 00:20:35.928173 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca" ... 2025-12-13T00:20:35.928248378+00:00 stderr F I1213 00:20:35.928204 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ... 2025-12-13T00:20:35.928408372+00:00 stderr F I1213 00:20:35.928388 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig" ... 2025-12-13T00:20:35.928460723+00:00 stderr F I1213 00:20:35.928419 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/check-endpoints-kubeconfig/kubeconfig" ... 2025-12-13T00:20:35.928597547+00:00 stderr F I1213 00:20:35.928578 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca" ... 2025-12-13T00:20:35.928619938+00:00 stderr F I1213 00:20:35.928604 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca/ca-bundle.crt" ... 2025-12-13T00:20:35.997286620+00:00 stderr F I1213 00:20:35.997195 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig" ... 2025-12-13T00:20:35.997286620+00:00 stderr F I1213 00:20:35.997235 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/control-plane-node-kubeconfig/kubeconfig" ... 2025-12-13T00:20:35.997445425+00:00 stderr F I1213 00:20:35.997417 1 cmd.go:274] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle" ... 
2025-12-13T00:20:35.997519397+00:00 stderr F I1213 00:20:35.997476 1 cmd.go:626] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ... 2025-12-13T00:20:35.997921378+00:00 stderr F I1213 00:20:35.997881 1 cmd.go:332] Getting pod configmaps/kube-apiserver-pod-13 -n openshift-kube-apiserver 2025-12-13T00:20:36.114916705+00:00 stderr F I1213 00:20:36.114828 1 cmd.go:348] Creating directory for static pod manifest "/etc/kubernetes/manifests" ... 2025-12-13T00:20:36.114916705+00:00 stderr F I1213 00:20:36.114884 1 cmd.go:376] Writing a pod under "kube-apiserver-startup-monitor-pod.yaml" key 2025-12-13T00:20:36.114916705+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"13"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=13","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{
"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"terminationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-12-13T00:20:36.118562404+00:00 stderr F I1213 00:20:36.118190 1 cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/kube-apiserver-startup-monitor-pod.yaml" ... 2025-12-13T00:20:36.118562404+00:00 stderr F I1213 00:20:36.118349 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml" ... 
2025-12-13T00:20:36.118562404+00:00 stderr F {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-startup-monitor","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"revision":"13"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources"}},{"name":"manifests","hostPath":{"path":"/etc/kubernetes/manifests"}},{"name":"pod-resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13"}},{"name":"var-lock","hostPath":{"path":"/var/lock"}},{"name":"var-log","hostPath":{"path":"/var/log/kube-apiserver"}}],"containers":[{"name":"startup-monitor","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","startup-monitor"],"args":["-v=2","--fallback-timeout-duration=300s","--target-name=kube-apiserver","--manifests-dir=/etc/kubernetes/manifests","--resource-dir=/etc/kubernetes/static-pod-resources","--installer-lock-file=/var/lock/kube-apiserver-installer.lock","--revision=13","--node-name=crc","--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--log-file-path=/var/log/kube-apiserver/startup.log"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"manifests","mountPath":"/etc/kubernetes/manifests"},{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/secrets","subPath":"secrets"},{"name":"pod-resource-dir","readOnly":true,"mountPath":"/etc/kubernetes/static-pod-resources/configmaps","subPath":"configmaps"},{"name":"var-lock","mountPath":"/var/lock"},{"name":"var-log","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"termi
nationGracePeriodSeconds":5,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-12-13T00:20:36.118562404+00:00 stderr F I1213 00:20:36.118471 1 cmd.go:376] Writing a pod under "kube-apiserver-pod.yaml" key 2025-12-13T00:20:36.118562404+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"13"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. 
we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. 
If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 --permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"13"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSec 
2025-12-13T00:20:36.118612075+00:00 stderr F onds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogs
OnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}} 2025-12-13T00:20:36.119439187+00:00 stderr F I1213 00:20:36.118885 1 
cmd.go:607] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13/kube-apiserver-pod.yaml" ... 2025-12-13T00:20:36.119439187+00:00 stderr F I1213 00:20:36.119394 1 cmd.go:614] Removed existing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 2025-12-13T00:20:36.119439187+00:00 stderr F I1213 00:20:36.119404 1 cmd.go:618] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ... 2025-12-13T00:20:36.119439187+00:00 stderr P {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"13"},"annotations":{"kubectl.kubernetes.io/default-container":"kube-apiserver","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-13"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/usr/bin/timeout","100","/bin/bash","-ec"],"args":["echo \"Fixing audit permissions ...\"\nchmod 0700 /var/log/kube-apiserver \u0026\u0026 touch /var/log/kube-apiserver/audit.log \u0026\u0026 chmod 0600 /var/log/kube-apiserver/*\n\nLOCK=/var/log/kube-apiserver/.lock\necho \"Acquiring exclusive lock ${LOCK} ...\"\n\n# Waiting for 15s max for old kube-apiserver's watch-termination process to exit and remove the lock.\n# Two cases:\n# 1. if kubelet does not start the old and new in parallel (i.e. works as expected), the flock will always succeed without any time.\n# 2. 
if kubelet does overlap old and new pods for up to 130s, the flock will wait and immediate return when the old finishes.\n#\n# NOTE: We can increase 15s for a bigger expected overlap. But a higher value means less noise about the broken kubelet behaviour, i.e. we hide a bug.\n# NOTE: Do not tweak these timings without considering the livenessProbe initialDelaySeconds\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 15 \"${LOCK_FD}\" || {\n echo \"$(date -Iseconds -u) kubelet did not terminate old kube-apiserver before new one\" \u003e\u003e /var/log/kube-apiserver/lock.log\n echo -n \": WARNING: kubelet did not terminate old kube-apiserver before new one.\"\n\n # We failed to acquire exclusive lock, which means there is old kube-apiserver running in system.\n # Since we utilize SO_REUSEPORT, we need to make sure the old kube-apiserver stopped listening.\n #\n # NOTE: This is a fallback for broken kubelet, if you observe this please report a bug.\n echo -n \"Waiting for port 6443 to be released due to likely bug in kubelet or CRI-O \"\n while [ -n \"$(ss -Htan state listening '( sport = 6443 or sport = 6080 )')\" ]; do\n echo -n \".\"\n sleep 1\n (( tries += 1 ))\n if [[ \"${tries}\" -gt 10 ]]; then\n echo \"Timed out waiting for port :6443 and :6080 to be released, this is likely a bug in kubelet or CRI-O\"\n exit 1\n fi\n done\n # This is to make sure the server has terminated independently from the lock.\n # After the port has been freed (requests can be pending and need 60s max).\n sleep 65\n}\n# We cannot hold the lock from the init container to the main container. We release it here. 
There is no risk, at this point we know we are safe.\nflock -u \"${LOCK_FD}\"\n"],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842","command":["/bin/bash","-ec"],"args":["LOCK=/var/log/kube-apiserver/.lock\n# We should be able to acquire the lock immediatelly. If not, it means the init container has not released it yet and kubelet or CRI-O started container prematurely.\nexec {LOCK_FD}\u003e${LOCK} \u0026\u0026 flock --verbose -w 30 \"${LOCK_FD}\" || {\n echo \"Failed to acquire lock for kube-apiserver. Please check setup container for details. This is likely kubelet or CRI-O bug.\"\n exit 1\n}\nif [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n echo \"Copying system trust bundle ...\"\n cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\n\nexec watch-termination --termination-touch-file=/var/log/kube-apiserver/.terminating --termination-log-file=/var/log/kube-apiserver/termination.log --graceful-termination-duration=15s --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig -- hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --advertise-address=${HOST_IP} -v=2 
--permit-address-sharing\n"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"13"},{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"GOGC","value":"100"}],"resources":{"requests":{"cpu":"265m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"livez","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"readyz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":1},"startupProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"timeoutSeconds":10,"periodSeconds":5,"successThreshold":1,"failureThreshold":30},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"
name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-cert-regeneration-controller","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","cert-regeneration-controller"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","-v=2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz","image":"quay.io/crcont/openshift-crc-cluster-ku 2025-12-13T00:20:36.119480508+00:00 stderr F 
be-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-check-endpoints","image":"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69","command":["cluster-kube-apiserver-operator","check-endpoints"],"args":["--kubeconfig","/etc/kubernetes/static-pod-certs/configmaps/check-endpoints-kubeconfig/kubeconfig","--listen","0.0.0.0:17697","--namespace","$(POD_NAMESPACE)","--v","2"],"ports":[{"name":"check-endpoints","hostPort":17697,"containerPort":17697,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":17697,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":15,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner/0.log
2025-08-13T20:01:08.019466651+00:00 stderr F I0813 20:01:08.013488 1 cmd.go:40] &{ true {false} prune true map[cert-dir:0xc0005aa140 max-eligible-revision:0xc000387ea0 protected-revisions:0xc000387f40 resource-dir:0xc0005aa000 static-pod-name:0xc0005aa0a0 v:0xc0005aa820] [0xc0005aa820 0xc000387ea0 0xc000387f40 0xc0005aa000 0xc0005aa140 0xc0005aa0a0] [] map[cert-dir:0xc0005aa140 help:0xc0005aabe0 log-flush-frequency:0xc0005aa780 max-eligible-revision:0xc000387ea0 protected-revisions:0xc000387f40 resource-dir:0xc0005aa000 static-pod-name:0xc0005aa0a0 v:0xc0005aa820 vmodule:0xc0005aa8c0] [0xc000387ea0 0xc000387f40 0xc0005aa000 0xc0005aa0a0 0xc0005aa140 0xc0005aa780 0xc0005aa820 0xc0005aa8c0 0xc0005aabe0] [0xc0005aa140 0xc0005aabe0 0xc0005aa780 0xc000387ea0 0xc000387f40 0xc0005aa000 0xc0005aa0a0
0xc0005aa820 0xc0005aa8c0] map[104:0xc0005aabe0 118:0xc0005aa820] [] -1 0 0xc000333a10 true 0x73b100 []} 2025-08-13T20:01:08.051902776+00:00 stderr F I0813 20:01:08.050031 1 cmd.go:41] (*prune.PruneOptions)(0xc000488eb0)({ 2025-08-13T20:01:08.051902776+00:00 stderr F MaxEligibleRevision: (int) 10, 2025-08-13T20:01:08.051902776+00:00 stderr F ProtectedRevisions: ([]int) (len=7 cap=7) { 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 4, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 5, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 6, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 7, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 8, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 9, 2025-08-13T20:01:08.051902776+00:00 stderr F (int) 10 2025-08-13T20:01:08.051902776+00:00 stderr F }, 2025-08-13T20:01:08.051902776+00:00 stderr F ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", 2025-08-13T20:01:08.051902776+00:00 stderr F CertDir: (string) (len=29) "kube-controller-manager-certs", 2025-08-13T20:01:08.051902776+00:00 stderr F StaticPodName: (string) (len=27) "kube-controller-manager-pod" 2025-08-13T20:01:08.051902776+00:00 stderr F })
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/2.log
2025-12-13T00:13:15.464905247+00:00 stderr F I1213 00:13:15.462825 1 cmd.go:241] Using service-serving-cert provided certificates 2025-12-13T00:13:15.465107943+00:00 stderr F I1213 00:13:15.465056 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:13:15.466530432+00:00 stderr F I1213 00:13:15.466510 1 observer_polling.go:159] Starting file observer 2025-12-13T00:13:15.512793457+00:00 stderr F I1213 00:13:15.512216 1 builder.go:299] kube-apiserver-operator version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2025-12-13T00:13:16.413626666+00:00 stderr F I1213 00:13:16.413212 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:16.413626666+00:00 stderr F W1213 00:13:16.413585 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-12-13T00:13:16.413626666+00:00 stderr F W1213 00:13:16.413591 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-12-13T00:13:16.419195743+00:00 stderr F I1213 00:13:16.418666 1 secure_serving.go:213] Serving securely on [::]:8443
2025-12-13T00:13:16.419195743+00:00 stderr F I1213 00:13:16.418724 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-12-13T00:13:16.419195743+00:00 stderr F I1213 00:13:16.418741 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-12-13T00:13:16.419195743+00:00 stderr F I1213 00:13:16.418784 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-12-13T00:13:16.419195743+00:00 stderr F I1213 00:13:16.418886 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-12-13T00:13:16.419195743+00:00 stderr F I1213 00:13:16.418985 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-12-13T00:13:16.419195743+00:00 stderr F I1213 00:13:16.418991 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-12-13T00:13:16.419195743+00:00 stderr F I1213 00:13:16.419002 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-12-13T00:13:16.419195743+00:00 stderr F I1213 00:13:16.419006 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-12-13T00:13:16.419195743+00:00 stderr F I1213 00:13:16.419079 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-12-13T00:13:16.419956149+00:00 stderr F I1213 00:13:16.419348 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock...
2025-12-13T00:13:16.521475681+00:00 stderr F I1213 00:13:16.519028 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-12-13T00:13:16.521475681+00:00 stderr F I1213 00:13:16.519135 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-12-13T00:13:16.521475681+00:00 stderr F I1213 00:13:16.519162 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-12-13T00:18:59.791958775+00:00 stderr F I1213 00:18:59.791458 1 leaderelection.go:260] successfully acquired lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock
2025-12-13T00:18:59.791958775+00:00 stderr F I1213 00:18:59.791588 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-lock", UID:"e11b3070-9ae9-4936-9a58-0ad566006f7f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"42125", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-apiserver-operator-78d54458c4-sc8h7_350504e5-edac-4981-842b-6f39386c0912 became leader
2025-12-13T00:18:59.793439947+00:00 stderr F I1213 00:18:59.793395 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-12-13T00:18:59.795267056+00:00 stderr F I1213 00:18:59.795189 1 starter.go:140] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-12-13T00:18:59.795267056+00:00 stderr F I1213 00:18:59.795188 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-12-13T00:18:59.838739786+00:00 stderr F I1213 00:18:59.838329 1 base_controller.go:67] Waiting for caches to sync for SCCReconcileController
2025-12-13T00:18:59.839049614+00:00 stderr F I1213 00:18:59.838955 1 base_controller.go:67] Waiting for caches to sync for CertRotationTimeUpgradeableController
2025-12-13T00:18:59.840174345+00:00 stderr F I1213 00:18:59.840109 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile
2025-12-13T00:18:59.840174345+00:00 stderr F I1213 00:18:59.840144 1 base_controller.go:67] Waiting for caches to sync for EventWatchController
2025-12-13T00:18:59.840174345+00:00 stderr F I1213 00:18:59.840160 1 base_controller.go:67] Waiting for caches to sync for BoundSATokenSignerController
2025-12-13T00:18:59.840219786+00:00 stderr F I1213 00:18:59.840173 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController
2025-12-13T00:18:59.840219786+00:00 stderr F I1213 00:18:59.840186 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController
2025-12-13T00:18:59.840219786+00:00 stderr F I1213 00:18:59.840201 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController
2025-12-13T00:18:59.840219786+00:00 stderr F I1213 00:18:59.840212 1 base_controller.go:67] Waiting for caches to sync for KubeletVersionSkewController
2025-12-13T00:18:59.840477864+00:00 stderr F I1213 00:18:59.840428 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController
2025-12-13T00:18:59.840731811+00:00 stderr F I1213 00:18:59.840681 1 base_controller.go:67] Waiting for caches to sync for PodSecurityReadinessController
2025-12-13T00:18:59.840731811+00:00 stderr F I1213 00:18:59.840700 1 base_controller.go:73] Caches are synced for PodSecurityReadinessController
2025-12-13T00:18:59.840731811+00:00 stderr F I1213 00:18:59.840707 1 base_controller.go:110] Starting #1 worker of PodSecurityReadinessController controller ...
2025-12-13T00:18:59.840761232+00:00 stderr F I1213 00:18:59.840746 1 base_controller.go:67] Waiting for caches to sync for webhookSupportabilityController
2025-12-13T00:18:59.840781182+00:00 stderr F I1213 00:18:59.840767 1 base_controller.go:67] Waiting for caches to sync for ServiceAccountIssuerController
2025-12-13T00:18:59.841221134+00:00 stderr F I1213 00:18:59.841089 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController
2025-12-13T00:18:59.841221134+00:00 stderr F I1213 00:18:59.841128 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-12-13T00:18:59.841459221+00:00 stderr F I1213 00:18:59.841418 1 base_controller.go:67] Waiting for caches to sync for KubeAPIServerStaticResources
2025-12-13T00:18:59.841459221+00:00 stderr F I1213 00:18:59.841453 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController
2025-12-13T00:18:59.841483671+00:00 stderr F I1213 00:18:59.841464 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-12-13T00:18:59.841506842+00:00 stderr F I1213 00:18:59.841494 1 base_controller.go:67] Waiting for caches to sync for NodeKubeconfigController
2025-12-13T00:18:59.841525272+00:00 stderr F I1213 00:18:59.841507 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-12-13T00:18:59.841739728+00:00 stderr F I1213 00:18:59.841650 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController
2025-12-13T00:18:59.841739728+00:00 stderr F I1213 00:18:59.841677 1 base_controller.go:67] Waiting for caches to sync for NodeController
2025-12-13T00:18:59.841789270+00:00 stderr F I1213 00:18:59.841739 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-12-13T00:18:59.841789270+00:00 stderr F I1213 00:18:59.841469 1 base_controller.go:67] Waiting for caches to sync for GuardController
2025-12-13T00:18:59.841789270+00:00 stderr F I1213 00:18:59.841779 1 base_controller.go:67] Waiting for caches to sync for InstallerController
2025-12-13T00:18:59.841863062+00:00 stderr F I1213 00:18:59.841811 1 certrotationcontroller.go:886] Starting CertRotation
2025-12-13T00:18:59.841863062+00:00 stderr F I1213 00:18:59.841826 1 certrotationcontroller.go:851] Waiting for CertRotation
2025-12-13T00:18:59.841863062+00:00 stderr F I1213 00:18:59.841842 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController
2025-12-13T00:18:59.841888442+00:00 stderr F I1213 00:18:59.841863 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController
2025-12-13T00:18:59.841888442+00:00 stderr F I1213 00:18:59.841881 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController
2025-12-13T00:18:59.841912593+00:00 stderr F I1213 00:18:59.841901 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController
2025-12-13T00:18:59.841970114+00:00 stderr F I1213 00:18:59.841948 1 base_controller.go:67] Waiting for caches to sync for RevisionController
2025-12-13T00:18:59.842130669+00:00 stderr F I1213 00:18:59.842074 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateFallback
2025-12-13T00:18:59.842268303+00:00 stderr F I1213 00:18:59.842229 1 base_controller.go:67] Waiting for caches to sync for StartupMonitorPodCondition
2025-12-13T00:18:59.842348195+00:00 stderr F I1213 00:18:59.842128 1 base_controller.go:67] Waiting for caches to sync for PruneController
2025-12-13T00:18:59.842348195+00:00 stderr F I1213 00:18:59.842147 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController
2025-12-13T00:18:59.842348195+00:00 stderr F I1213 00:18:59.842146 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController
2025-12-13T00:18:59.844728991+00:00 stderr F I1213 00:18:59.843101 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-apiserver
2025-12-13T00:18:59.895152931+00:00 stderr F I1213 00:18:59.894003 1 termination_observer.go:145] Starting TerminationObserver
2025-12-13T00:18:59.939268648+00:00 stderr F I1213 00:18:59.939206 1 base_controller.go:73] Caches are synced for SCCReconcileController
2025-12-13T00:18:59.939268648+00:00 stderr F I1213 00:18:59.939230 1 base_controller.go:110] Starting #1 worker of SCCReconcileController controller ...
2025-12-13T00:18:59.940389948+00:00 stderr F I1213 00:18:59.940362 1 base_controller.go:73] Caches are synced for EventWatchController
2025-12-13T00:18:59.940389948+00:00 stderr F I1213 00:18:59.940376 1 base_controller.go:110] Starting #1 worker of EventWatchController controller ...
2025-12-13T00:18:59.940951884+00:00 stderr F I1213 00:18:59.940917 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-12-13T00:18:59.940951884+00:00 stderr F I1213 00:18:59.940948 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-12-13T00:18:59.941255423+00:00 stderr F I1213 00:18:59.941222 1 base_controller.go:73] Caches are synced for ServiceAccountIssuerController
2025-12-13T00:18:59.941255423+00:00 stderr F I1213 00:18:59.941246 1 base_controller.go:110] Starting #1 worker of ServiceAccountIssuerController controller ...
2025-12-13T00:18:59.941591762+00:00 stderr F I1213 00:18:59.941562 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-12-13T00:18:59.941591762+00:00 stderr F I1213 00:18:59.941576 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-12-13T00:18:59.941899970+00:00 stderr F I1213 00:18:59.941819 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:18:59.942585789+00:00 stderr F I1213 00:18:59.942057 1 base_controller.go:73] Caches are synced for InstallerController
2025-12-13T00:18:59.942585789+00:00 stderr F I1213 00:18:59.942071 1 base_controller.go:110] Starting #1 worker of InstallerController controller ...
2025-12-13T00:18:59.942869017+00:00 stderr F I1213 00:18:59.942833 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-12-13T00:18:59.942869017+00:00 stderr F I1213 00:18:59.942853 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-12-13T00:18:59.943811612+00:00 stderr F I1213 00:18:59.943552 1 base_controller.go:73] Caches are synced for InstallerStateController
2025-12-13T00:18:59.943811612+00:00 stderr F I1213 00:18:59.943565 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ...
2025-12-13T00:18:59.943811612+00:00 stderr F I1213 00:18:59.943640 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: control-plane-node-kubeconfig, configmaps: kube-apiserver-audit-policies-12,kube-apiserver-cert-syncer-kubeconfig-12,kube-apiserver-pod-12,kubelet-serving-ca-12,sa-token-signing-certs-12
2025-12-13T00:18:59.943811612+00:00 stderr F I1213 00:18:59.943730 1 base_controller.go:73] Caches are synced for StartupMonitorPodCondition
2025-12-13T00:18:59.943811612+00:00 stderr F I1213 00:18:59.943738 1 base_controller.go:110] Starting #1 worker of StartupMonitorPodCondition controller ...
2025-12-13T00:18:59.943811612+00:00 stderr F I1213 00:18:59.943750 1 base_controller.go:73] Caches are synced for StaticPodStateFallback
2025-12-13T00:18:59.943811612+00:00 stderr F I1213 00:18:59.943754 1 base_controller.go:110] Starting #1 worker of StaticPodStateFallback controller ...
2025-12-13T00:18:59.945206631+00:00 stderr F I1213 00:18:59.944218 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-apiserver
2025-12-13T00:18:59.945206631+00:00 stderr F I1213 00:18:59.944231 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-apiserver controller ...
2025-12-13T00:18:59.945206631+00:00 stderr F I1213 00:18:59.944498 1 base_controller.go:73] Caches are synced for StaticPodStateController
2025-12-13T00:18:59.945206631+00:00 stderr F I1213 00:18:59.944507 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ...
2025-12-13T00:18:59.945206631+00:00 stderr F I1213 00:18:59.944517 1 base_controller.go:73] Caches are synced for PruneController
2025-12-13T00:18:59.945206631+00:00 stderr F I1213 00:18:59.944521 1 base_controller.go:110] Starting #1 worker of PruneController controller ...
2025-12-13T00:18:59.945206631+00:00 stderr F I1213 00:18:59.944765 1 prune_controller.go:269] Nothing to prune
2025-12-13T00:18:59.945206631+00:00 stderr F I1213 00:18:59.944779 1 base_controller.go:73] Caches are synced for BackingResourceController
2025-12-13T00:18:59.945206631+00:00 stderr F I1213 00:18:59.944783 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ...
2025-12-13T00:18:59.960580105+00:00 stderr F I1213 00:18:59.960513 1 prune_controller.go:269] Nothing to prune
2025-12-13T00:18:59.960883294+00:00 stderr F E1213 00:18:59.960847 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: control-plane-node-kubeconfig, configmaps: kube-apiserver-audit-policies-12,kube-apiserver-cert-syncer-kubeconfig-12,kube-apiserver-pod-12,kubelet-serving-ca-12,sa-token-signing-certs-12]
2025-12-13T00:19:00.023470329+00:00 stderr F I1213 00:19:00.023367 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: control-plane-node-kubeconfig, configmaps: kube-apiserver-audit-policies-12,kube-apiserver-cert-syncer-kubeconfig-12,kube-apiserver-pod-12,kubelet-serving-ca-12,sa-token-signing-certs-12]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:40Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}}
2025-12-13T00:19:00.040856509+00:00 stderr F I1213 00:19:00.040667 1 base_controller.go:73] Caches are synced for auditPolicyController
2025-12-13T00:19:00.040856509+00:00 stderr F I1213 00:19:00.040710 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ...
2025-12-13T00:19:00.042845944+00:00 stderr F I1213 00:19:00.042757 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
2025-12-13T00:19:00.042845944+00:00 stderr F I1213 00:19:00.042799 1 base_controller.go:73] Caches are synced for KubeAPIServerStaticResources
2025-12-13T00:19:00.042845944+00:00 stderr F I1213 00:19:00.042836 1 base_controller.go:110] Starting #1 worker of KubeAPIServerStaticResources controller ...
2025-12-13T00:19:00.042884565+00:00 stderr F I1213 00:19:00.042864 1 base_controller.go:73] Caches are synced for RevisionController
2025-12-13T00:19:00.042884565+00:00 stderr F I1213 00:19:00.042880 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-12-13T00:19:00.048539131+00:00 stderr F I1213 00:19:00.048395 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:00.061637552+00:00 stderr F I1213 00:19:00.061543 1 reflector.go:351] Caches populated for *v1.OAuth from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:00.140752704+00:00 stderr F I1213 00:19:00.140637 1 base_controller.go:73] Caches are synced for BoundSATokenSignerController
2025-12-13T00:19:00.140752704+00:00 stderr F I1213 00:19:00.140689 1 base_controller.go:110] Starting #1 worker of BoundSATokenSignerController controller ...
2025-12-13T00:19:00.185238000+00:00 stderr F I1213 00:19:00.185181 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:00.196600884+00:00 stderr F I1213 00:19:00.195287 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:00.223275769+00:00 stderr F I1213 00:19:00.223216 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:00.239119956+00:00 stderr F I1213 00:19:00.239085 1 base_controller.go:73] Caches are synced for CertRotationTimeUpgradeableController
2025-12-13T00:19:00.239119956+00:00 stderr F I1213 00:19:00.239101 1 base_controller.go:110] Starting #1 worker of CertRotationTimeUpgradeableController controller ...
2025-12-13T00:19:00.397480142+00:00 stderr F I1213 00:19:00.397416 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:00.423174301+00:00 stderr F I1213 00:19:00.423117 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:00.440488579+00:00 stderr F I1213 00:19:00.440400 1 base_controller.go:73] Caches are synced for KubeletVersionSkewController
2025-12-13T00:19:00.440540490+00:00 stderr F I1213 00:19:00.440488 1 base_controller.go:110] Starting #1 worker of KubeletVersionSkewController controller ...
2025-12-13T00:19:00.440601472+00:00 stderr F I1213 00:19:00.440429 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile
2025-12-13T00:19:00.440601472+00:00 stderr F I1213 00:19:00.440586 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ...
2025-12-13T00:19:00.442208386+00:00 stderr F I1213 00:19:00.442167 1 base_controller.go:73] Caches are synced for NodeController
2025-12-13T00:19:00.442208386+00:00 stderr F I1213 00:19:00.442190 1 base_controller.go:110] Starting #1 worker of NodeController controller ...
2025-12-13T00:19:00.442208386+00:00 stderr F I1213 00:19:00.442196 1 base_controller.go:73] Caches are synced for GuardController
2025-12-13T00:19:00.442236167+00:00 stderr F I1213 00:19:00.442208 1 base_controller.go:110] Starting #1 worker of GuardController controller ...
2025-12-13T00:19:00.595284177+00:00 stderr F I1213 00:19:00.595215 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:00.624427200+00:00 stderr F I1213 00:19:00.624367 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:00.641833751+00:00 stderr F I1213 00:19:00.641780 1 base_controller.go:73] Caches are synced for NodeKubeconfigController
2025-12-13T00:19:00.641950024+00:00 stderr F I1213 00:19:00.641898 1 base_controller.go:73] Caches are synced for MissingStaticPodController
2025-12-13T00:19:00.641989135+00:00 stderr F I1213 00:19:00.641910 1 base_controller.go:110] Starting #1 worker of NodeKubeconfigController controller ...
2025-12-13T00:19:00.641989135+00:00 stderr F I1213 00:19:00.641965 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ...
2025-12-13T00:19:00.642045197+00:00 stderr F I1213 00:19:00.642028 1 base_controller.go:73] Caches are synced for webhookSupportabilityController
2025-12-13T00:19:00.642085218+00:00 stderr F I1213 00:19:00.642070 1 base_controller.go:110] Starting #1 worker of webhookSupportabilityController controller ...
2025-12-13T00:19:00.642216001+00:00 stderr F I1213 00:19:00.642107 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]
2025-12-13T00:19:00.642216001+00:00 stderr F I1213 00:19:00.642198 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing
2025-12-13T00:19:00.642216001+00:00 stderr F I1213 00:19:00.642211 1 internalloadbalancer.go:27] syncing internal loadbalancer hostnames: api-int.crc.testing
2025-12-13T00:19:00.642233742+00:00 stderr F I1213 00:19:00.642218 1 certrotationcontroller.go:869] Finished waiting for CertRotation
2025-12-13T00:19:00.642308084+00:00 stderr F I1213 00:19:00.642279 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:00.642308084+00:00 stderr F I1213 00:19:00.642292 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:19:00.642308084+00:00 stderr F I1213 00:19:00.642298 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-12-13T00:19:00.642338845+00:00 stderr F I1213 00:19:00.642313 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:00.642338845+00:00 stderr F I1213 00:19:00.642333 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:00.642354085+00:00 stderr F I1213 00:19:00.642347 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:19:00.642365776+00:00 stderr F I1213 00:19:00.642352 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-12-13T00:19:00.642376576+00:00 stderr F I1213 00:19:00.642346 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing
2025-12-13T00:19:00.642387556+00:00 stderr F I1213 00:19:00.642374 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:00.642387556+00:00 stderr F I1213 00:19:00.642365 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:00.642401457+00:00 stderr F I1213 00:19:00.642392 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:19:00.642401457+00:00 stderr F I1213 00:19:00.642396 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:00.642412857+00:00 stderr F I1213 00:19:00.642401 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-12-13T00:19:00.642412857+00:00 stderr F I1213 00:19:00.642405 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:19:00.642424327+00:00 stderr F I1213 00:19:00.642411 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-12-13T00:19:00.642528710+00:00 stderr F I1213 00:19:00.642351 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:00.642596162+00:00 stderr F I1213 00:19:00.642573 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:00.642596162+00:00 stderr F I1213 00:19:00.642587 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:19:00.642629173+00:00 stderr F I1213 00:19:00.642610 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-12-13T00:19:00.642640683+00:00 stderr F I1213 00:19:00.642590 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]
2025-12-13T00:19:00.642651333+00:00 stderr F I1213 00:19:00.642381 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:00.642724995+00:00 stderr F I1213 00:19:00.642684 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:19:00.642724995+00:00 stderr F I1213 00:19:00.642698 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-12-13T00:19:00.642741256+00:00 stderr F I1213 00:19:00.642402 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:00.642751986+00:00 stderr F I1213 00:19:00.642744 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:19:00.642762446+00:00 stderr F I1213 00:19:00.642751 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-12-13T00:19:00.642857359+00:00 stderr F I1213 00:19:00.642411 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:00.642870609+00:00 stderr F I1213 00:19:00.642854 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:19:00.642870609+00:00 stderr F I1213 00:19:00.642866 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-12-13T00:19:00.643481876+00:00 stderr F I1213 00:19:00.642649 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-12-13T00:19:00.643558498+00:00 stderr F I1213 00:19:00.643444 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:19:00.650904761+00:00 stderr F I1213 00:19:00.650850 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-12-13T00:19:00.817000921+00:00 stderr F I1213 00:19:00.814883 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: control-plane-node-kubeconfig, configmaps: kube-apiserver-audit-policies-12,kube-apiserver-cert-syncer-kubeconfig-12,kube-apiserver-pod-12,kubelet-serving-ca-12,sa-token-signing-certs-12]"
2025-12-13T00:19:00.828394695+00:00 stderr F I1213 00:19:00.828316 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-12-13T00:19:00.843984515+00:00 stderr F I1213 00:19:00.843073 1 base_controller.go:73] Caches are synced for EncryptionConditionController
2025-12-13T00:19:00.843984515+00:00 stderr F I1213 00:19:00.843102 1 base_controller.go:73] Caches are synced for EncryptionPruneController
2025-12-13T00:19:00.843984515+00:00 stderr F I1213 00:19:00.843114 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ...
2025-12-13T00:19:00.843984515+00:00 stderr F I1213 00:19:00.843105 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ...
2025-12-13T00:19:00.843984515+00:00 stderr F I1213 00:19:00.843083 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:19:00.843984515+00:00 stderr F I1213 00:19:00.843144 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-12-13T00:19:00.843984515+00:00 stderr F I1213 00:19:00.843140 1 base_controller.go:73] Caches are synced for EncryptionMigrationController
2025-12-13T00:19:00.843984515+00:00 stderr F I1213 00:19:00.843164 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ...
2025-12-13T00:19:00.843984515+00:00 stderr F I1213 00:19:00.843176 1 base_controller.go:73] Caches are synced for EncryptionStateController
2025-12-13T00:19:00.843984515+00:00 stderr F I1213 00:19:00.843212 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ...
2025-12-13T00:19:00.843984515+00:00 stderr F I1213 00:19:00.843249 1 base_controller.go:73] Caches are synced for EncryptionKeyController
2025-12-13T00:19:00.843984515+00:00 stderr F I1213 00:19:00.843272 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ...
2025-12-13T00:19:00.851272555+00:00 stderr F I1213 00:19:00.851208 1 base_controller.go:73] Caches are synced for CertRotationController
2025-12-13T00:19:00.851272555+00:00 stderr F I1213 00:19:00.851230 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-12-13T00:19:01.021128960+00:00 stderr F I1213 00:19:01.021025 1 request.go:697] Waited for 1.179614298s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0 2025-12-13T00:19:01.024405010+00:00 stderr F I1213 00:19:01.024348 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:19:01.241992150+00:00 stderr F I1213 00:19:01.241841 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:19:01.242629007+00:00 stderr F I1213 00:19:01.242591 1 base_controller.go:73] Caches are synced for CertRotationController 2025-12-13T00:19:01.242629007+00:00 stderr F I1213 00:19:01.242621 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-12-13T00:19:01.343671903+00:00 stderr F I1213 00:19:01.343595 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-12-13T00:19:01.343671903+00:00 stderr F I1213 00:19:01.343625 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 
2025-12-13T00:19:01.426642372+00:00 stderr F I1213 00:19:01.426568 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:19:01.627805969+00:00 stderr F I1213 00:19:01.627752 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:19:01.824009629+00:00 stderr F I1213 00:19:01.823870 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:19:02.022458591+00:00 stderr F I1213 00:19:02.022364 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:19:02.042130713+00:00 stderr F I1213 00:19:02.042043 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-12-13T00:19:02.042130713+00:00 stderr F I1213 00:19:02.042070 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-12-13T00:19:02.221016916+00:00 stderr F I1213 00:19:02.220902 1 request.go:697] Waited for 2.378920138s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-apiserver/services?limit=500&resourceVersion=0 2025-12-13T00:19:02.223184526+00:00 stderr F I1213 00:19:02.223098 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:19:02.430204705+00:00 stderr F I1213 00:19:02.430143 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:19:02.624001838+00:00 stderr F I1213 00:19:02.623864 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:19:02.642361225+00:00 stderr F I1213 00:19:02.642241 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-12-13T00:19:02.642361225+00:00 stderr F I1213 
00:19:02.642303 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 2025-12-13T00:19:02.823386946+00:00 stderr F I1213 00:19:02.823334 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:19:02.840868599+00:00 stderr F I1213 00:19:02.840848 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-12-13T00:19:02.840915010+00:00 stderr F I1213 00:19:02.840903 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-12-13T00:19:03.221352560+00:00 stderr F I1213 00:19:03.221272 1 request.go:697] Waited for 3.27623s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-12-13T00:19:04.221707425+00:00 stderr F I1213 00:19:04.221553 1 request.go:697] Waited for 4.175201749s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2025-12-13T00:19:05.421353214+00:00 stderr F I1213 00:19:05.420906 1 request.go:697] Waited for 1.395189422s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-12-13T00:19:06.040651741+00:00 stderr F I1213 00:19:06.040546 1 prune_controller.go:269] Nothing to prune 2025-12-13T00:19:06.040993300+00:00 stderr F I1213 00:19:06.040924 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:40Z","message":"NodeInstallerProgressing: 1 node is at revision 
12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-12-13T00:19:06.048630441+00:00 stderr F I1213 00:19:06.048542 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: control-plane-node-kubeconfig, configmaps: kube-apiserver-audit-policies-12,kube-apiserver-cert-syncer-kubeconfig-12,kube-apiserver-pod-12,kubelet-serving-ca-12,sa-token-signing-certs-12]" to "NodeControllerDegraded: All master nodes are ready" 2025-12-13T00:19:07.221249285+00:00 stderr F I1213 00:19:07.220906 1 request.go:697] Waited for 1.179427083s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2025-12-13T00:19:36.757218817+00:00 stderr F I1213 00:19:36.756613 1 core.go:358] ConfigMap "openshift-kube-apiserver/kubelet-serving-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIcR/OLLSLArAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMjEzMDAxOTI3WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjU1ODUx\nNjgwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6WTXoQu2VKIz4bNU1\nroi5aLdtMLOHOZPTZWP8rbDAlFcmkwxZoTqRBJhbye3WwmRfz4nUIyeSNPh+GNYm\nM3yR3+98frga5YLx+Y7E6pK1+n2PrVK8iBx8xjvJfyGPBZeBcyKRYMicZIy3DxOU\n0c3K3fA49fWMYgj/44tPa+SaVGstoA6rEpnn5L+Ao8GVbAIlf6rlECazCWeC0O8p\n6iGgwvob77fI+rKk5iVt+u3zLjhaUh8Dosowvj2hHqOBPaBWcIWRSYHUqrCyFEuW\n1wwiy0U3s167Pj8TTyzuCGIVzF8pVVV5fT2Rw1KnQet2cad7AWtraQA1Zb2FVnYU\nmn0rAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBQm7WH/CnvohybNOmxrZVgtqfwGzjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAFzM2Noe+Dr8FGBxbWBZ6\nu08l4PUUri2k/PGpYsn6GFPeuXXgApMaRZFsysJxY2/cLrh1ssXiRXqeAdYQ5IKP\nS6pcrmNlrqjClRmHKK4uPJMFin7z9CysKZ3Z2GFpUCa3+Fek9rFLxSnYMLiRYEB3\nYm1ZM0G1VlsfnpW9huEl6i2edw9bCLUiojOxDIDadH9sCEFaruOmn1sIuee8V6yQ\n7xjq2IN6urfuk+L7G2c/8rMdhiGN0n6+z5vPugZWk2WXXbz7ZAEe6VUNTfP6u+zh\nCrEazlXXfIKU0XA1yVMCiicgJZ9QVWhkFJQS5nEMhoWk+tfY1tsvaN3lglbdcX6C\nSQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:18Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-controller-manager-operator","operation":"Update","time":"2025-12-13T00:19:36Z"}],"resourceVersion":null,"uid":"449953d7-35d8-4eaf-8671-65eda2b482f7"}} 2025-12-13T00:19:36.760640881+00:00 stderr F I1213 00:19:36.760599 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver: 2025-12-13T00:19:36.760640881+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-12-13T00:19:36.763618243+00:00 stderr F I1213 00:19:36.762801 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 13 triggered by "required configmap/kubelet-serving-ca has changed" 2025-12-13T00:19:36.776041946+00:00 stderr F I1213 00:19:36.775932 1 core.go:358] ConfigMap "openshift-config-managed/kubelet-serving-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r08
5VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIcR/OLLSLArAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMjEzMDAxOTI3WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjU1ODUx\nNjgwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6WTXoQu2VKIz4bNU1\nroi5aLdtMLOHOZPTZWP8rbDAlFcmkwxZoTqRBJhbye3WwmRfz4nUIyeSNPh+GNYm\nM3yR3+98frga5YLx+Y7E6pK1+n2PrVK8iBx8xjvJfyGPBZeBcyKRYMicZIy3DxOU\n0c3K3fA49fWMYgj/44tPa+SaVGstoA6rEpnn5L+Ao8GVbAIlf6rlECazCWeC0O8p\n6iGgwvob77fI+rKk5iVt+u3zLjhaUh8Dosowvj2hHqOBPaBWcIWRSYHUqrCyFEuW\n1wwiy0U3s167Pj8TTyzuCGIVzF8pVVV5fT2Rw1KnQet2cad7AWtraQA1Zb2FVnYU\nmn0rAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBQm7WH/CnvohybNOmxrZVgtqfwGzjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAFzM2Noe+Dr8FGBxbWBZ6\nu08l4PUUri2k/PGpYsn6GFPeuXXgApMaRZFsysJxY2/cLrh1ssXiRXqeAdYQ5IKP\nS6pcrmNlrqjClRmHKK4uPJMFin7z9CysKZ3Z2GFpUCa3+Fek9rFLxSnYMLiRYEB3\nYm1ZM0G1VlsfnpW9huEl6i2edw9bCLUiojOxDIDadH9sCEFaruOmn1sIuee8V6yQ\n7xjq2IN6urfuk+L7G2c/8rMdhiGN0n6+z5vPugZWk2WXXbz7ZAEe6VUNTfP6u+zh\nCrEazlXXfIKU0XA1yVMCiicgJZ9QVWhkFJQS5nEMhoWk+tfY1tsvaN3lglbdcX6C\nSQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:06Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-12-13T00:19:36Z"}],"resourceVersion":null,"uid":"1b8f54dc-8896-4a59-8c53-834fed1d81fd"}} 2025-12-13T00:19:36.776986072+00:00 stderr F I1213 00:19:36.776929 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kubelet-serving-ca -n openshift-config-managed: 2025-12-13T00:19:36.776986072+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-12-13T00:19:36.946874816+00:00 stderr F I1213 00:19:36.946715 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:37.548818015+00:00 stderr P I1213 00:19:37.548770 1 core.go:358] ConfigMap "openshift-kube-apiserver/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L
7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIcR/OLLSLArAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMjEzMDAxOTI3WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjU1ODUx\nNjgwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6WTXoQu2VKIz4bNU1\nroi5aLdtMLOHOZPTZWP8rbDAlFcmkwxZoTqRBJhbye3WwmRfz4nUIyeSNPh+GNYm\nM3yR3+98frga5YLx+Y7E6pK1+n2PrVK8iBx8xjvJfyGPBZeBcyKRYMicZIy3DxOU\n0c3K3fA49fWMYgj/44tPa+SaVGstoA6rEpnn5L+Ao8GVbAIlf6rlECazCWeC0O8p\n6iGgwvob77fI+rKk5iVt+u3zLjhaUh8Dosowvj2hHqOBPaBWcIWRSYHUqrCyFEuW\n1wwiy0U3s167Pj8TTyzuCGIVzF8pVVV5fT2Rw1KnQet2cad7AWtraQA1Zb2FVnYU\nmn0rAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBQm7WH/CnvohybNOmxrZVgtqfwGzjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAFzM2Noe+Dr8FGBxbWBZ6\nu08l4PUUri2k/PGpYsn6GFPeuXXgApMaRZFsysJxY2/cLrh1ssXiRXqeAdYQ5IKP\nS6pcrmNlrqjClRmHKK4uPJMFin7z9CysKZ3Z2GFpUCa3+Fek9rFLxSnYMLiRYEB3\nYm1ZM0G1VlsfnpW9huEl6i2edw9bCLUiojOxDIDadH9sCEFaruOmn1sIuee8V6yQ\n7xjq2IN6urfuk+L7G2c/8rMdhiGN0n6+z5vPugZWk2WXXbz7ZAEe6VUNTfP6u+zh\nCrEazlXXfIKU0XA1yVMCiicgJZ9QVWhkFJQS5nEMhoWk+tfY1tsvaN3lglbdcX6C\nSQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkzt 2025-12-13T00:19:37.548853646+00:00 stderr F 
HP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZ
Bnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-12-13T00:19:37.549102033+00:00 stderr F I1213 00:19:37.549078 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-apiserver: 2025-12-13T00:19:37.549102033+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-12-13T00:19:37.573776793+00:00 stderr F I1213 00:19:37.573704 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.57367088 +0000 UTC))" 2025-12-13T00:19:37.573776793+00:00 stderr F I1213 00:19:37.573735 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.573723022 +0000 UTC))" 2025-12-13T00:19:37.573800444+00:00 stderr F I1213 00:19:37.573776 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.573741202 +0000 UTC))" 2025-12-13T00:19:37.573800444+00:00 stderr F I1213 00:19:37.573792 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.573780563 +0000 UTC))" 2025-12-13T00:19:37.574846612+00:00 stderr F I1213 00:19:37.573810 1 tlsconfig.go:178] "Loaded client CA" index=4 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.573796424 +0000 UTC))" 2025-12-13T00:19:37.574846612+00:00 stderr F I1213 00:19:37.573830 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.573818594 +0000 UTC))" 2025-12-13T00:19:37.574846612+00:00 stderr F I1213 00:19:37.573848 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.573834465 +0000 UTC))" 2025-12-13T00:19:37.574846612+00:00 stderr F I1213 00:19:37.573872 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.573852415 +0000 UTC))" 2025-12-13T00:19:37.574846612+00:00 stderr F I1213 00:19:37.573889 1 tlsconfig.go:178] "Loaded 
client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.573876826 +0000 UTC))" 2025-12-13T00:19:37.574846612+00:00 stderr F I1213 00:19:37.573905 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.573893626 +0000 UTC))" 2025-12-13T00:19:37.574846612+00:00 stderr F I1213 00:19:37.573922 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.573909827 +0000 UTC))" 2025-12-13T00:19:37.574846612+00:00 stderr F I1213 00:19:37.573961 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.573926807 +0000 UTC))" 2025-12-13T00:19:37.574846612+00:00 stderr F I1213 00:19:37.574267 1 tlsconfig.go:200] 
"Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:22 +0000 UTC to 2027-08-13 20:00:23 +0000 UTC (now=2025-12-13 00:19:37.574245436 +0000 UTC))" 2025-12-13T00:19:37.574846612+00:00 stderr F I1213 00:19:37.574589 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584796\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584795\" (2025-12-12 23:13:15 +0000 UTC to 2026-12-12 23:13:15 +0000 UTC (now=2025-12-13 00:19:37.574570985 +0000 UTC))" 2025-12-13T00:19:37.947376754+00:00 stderr F I1213 00:19:37.947252 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:38.550065404+00:00 stderr P I1213 00:19:38.550000 1 core.go:358] ConfigMap "openshift-config-managed/kube-apiserver-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIcR/OLLSLArAwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUxMjEzMDAxOTI3WhcNMjcw\nODEzMTk1OTU0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3NjU1ODUx\nNjgwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6WTXoQu2VKIz4bNU1\nroi5aLdtMLOHOZPTZWP8rbDAlFcmkwxZoTqRBJhbye3WwmRfz4nUIyeSNPh+GNYm\nM3yR3+98frga5YLx+Y7E6pK1+n2PrVK8iBx8xjvJfyGPBZeBcyKRYMicZIy3DxOU\n0c3K3fA49fWMYgj/44tPa+SaVGstoA6rEpnn5L+Ao8GVbAIlf6rlECazCWeC0O8p\n6iGgwvob77fI+rKk5iVt+u3zLjhaUh8Dosowvj2hHqOBPaBWcIWRSYHUqrCyFEuW\n1wwiy0U3s167Pj8TTyzuCGIVzF8pVVV5fT2Rw1KnQet2cad7AWtraQA1Zb2FVnYU\nmn0rAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBQm7WH/CnvohybNOmxrZVgtqfwGzjAfBgNVHSMEGDAWgBS2Hnfq08/8\n2jCPkiWgUvqF9xKxgTANBgkqhkiG9w0BAQsFAAOCAQEAFzM2Noe+Dr8FGBxbWBZ6\nu08l4PUUri2k/PGpYsn6GFPeuXXgApMaRZFsysJxY2/cLrh1ssXiRXqeAdYQ5IKP\nS6pcrmNlrqjClRmHKK4uPJMFin7z9CysKZ3Z2GFpUCa3+Fek9rFLxSnYMLiRYEB3\nYm1ZM0G1VlsfnpW9huEl6i2edw9bCLUiojOxDIDadH9sCEFaruOmn1sIuee8V6yQ\n7xjq2IN6urfuk+L7G2c/8rMdhiGN0n6+z5vPugZWk2WXXbz7ZAEe6VUNTfP6u+zh\nCrEazlXXfIKU0XA1yVMCiicgJZ9QVWhkFJQS5nEMhoWk+tfY1tsvaN3lglbdcX6C\nSQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBA 2025-12-13T00:19:38.550113195+00:00 stderr F 
QC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07
fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-12-13T00:19:37Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-12-13T00:19:38.551822522+00:00 stderr F I1213 00:19:38.551070 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", 
FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: 2025-12-13T00:19:38.551822522+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-12-13T00:19:38.742245993+00:00 stderr F I1213 00:19:38.742112 1 request.go:697] Waited for 1.192982016s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-server-ca 2025-12-13T00:19:39.145917913+00:00 stderr F I1213 00:19:39.145846 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:40.146785193+00:00 stderr F I1213 00:19:40.146468 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-metadata-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:41.148666589+00:00 stderr F I1213 00:19:41.146303 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/bound-sa-token-signing-certs-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:42.145284431+00:00 stderr F I1213 00:19:42.145211 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/etcd-serving-ca-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:43.146241661+00:00 stderr F I1213 00:19:43.146184 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-server-ca-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:44.146096042+00:00 stderr F I1213 00:19:44.145318 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kubelet-serving-ca-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:45.146177189+00:00 stderr F I1213 00:19:45.145885 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/sa-token-signing-certs-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:46.147818349+00:00 stderr F I1213 00:19:46.147715 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' 
Created ConfigMap/kube-apiserver-audit-policies-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:47.148990956+00:00 stderr F I1213 00:19:47.148093 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/etcd-client-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:48.948338062+00:00 stderr F I1213 00:19:48.947878 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-serving-certkey-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:49.551106043+00:00 stderr F I1213 00:19:49.550809 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:50.348574712+00:00 stderr F I1213 00:19:50.346739 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/webhook-authenticator-13 -n openshift-kube-apiserver because it was missing 2025-12-13T00:19:51.346880291+00:00 stderr F I1213 00:19:51.346800 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 13 triggered by "required configmap/kubelet-serving-ca has changed" 2025-12-13T00:19:51.370032079+00:00 stderr F I1213 00:19:51.368628 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 13 created because required configmap/kubelet-serving-ca has changed 2025-12-13T00:19:51.378025260+00:00 stderr F I1213 00:19:51.376166 1 prune_controller.go:269] Nothing to prune 2025-12-13T00:19:51.406606947+00:00 stderr F W1213 00:19:51.406470 1 staticpod.go:38] revision 13 is unexpectedly already the latest available revision. This is a possible race! 
2025-12-13T00:19:51.406606947+00:00 stderr F E1213 00:19:51.406514 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 13
2025-12-13T00:19:52.542513759+00:00 stderr F I1213 00:19:52.542380 1 request.go:697] Waited for 1.153872027s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc
2025-12-13T00:19:53.542729519+00:00 stderr F I1213 00:19:53.542399 1 request.go:697] Waited for 1.397015491s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies
2025-12-13T00:19:54.344035656+00:00 stderr F I1213 00:19:54.343988 1 installer_controller.go:524] node crc with revision 12 is the oldest and needs new revision 13
2025-12-13T00:19:54.344181810+00:00 stderr F I1213 00:19:54.344166 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) {
2025-12-13T00:19:54.344181810+00:00 stderr F NodeName: (string) (len=3) "crc",
2025-12-13T00:19:54.344181810+00:00 stderr F CurrentRevision: (int32) 12,
2025-12-13T00:19:54.344181810+00:00 stderr F TargetRevision: (int32) 13,
2025-12-13T00:19:54.344181810+00:00 stderr F LastFailedRevision: (int32) 1,
2025-12-13T00:19:54.344181810+00:00 stderr F LastFailedTime: (*v1.Time)(0xc003416ee8)(2024-06-26 12:52:09 +0000 UTC),
2025-12-13T00:19:54.344181810+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed",
2025-12-13T00:19:54.344181810+00:00 stderr F LastFailedCount: (int) 1,
2025-12-13T00:19:54.344181810+00:00 stderr F LastFallbackCount: (int) 0,
2025-12-13T00:19:54.344181810+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) {
2025-12-13T00:19:54.344181810+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n"
2025-12-13T00:19:54.344181810+00:00 stderr F }
2025-12-13T00:19:54.344181810+00:00 stderr F }
2025-12-13T00:19:54.354707900+00:00 stderr F I1213 00:19:54.354650 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 12 to 13 because node crc with revision 12 is the oldest
2025-12-13T00:19:54.360674304+00:00 stderr F I1213 00:19:54.360618 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-12-13T00:19:54Z","message":"NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12; 0 nodes have achieved new revision 13","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}}
2025-12-13T00:19:54.361007894+00:00 stderr F I1213 00:19:54.360926 1 prune_controller.go:269] Nothing to prune
2025-12-13T00:19:54.370354872+00:00 stderr F I1213 00:19:54.370298 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1",
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 12; 0 nodes have achieved new revision 13"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12; 0 nodes have achieved new revision 13"
2025-12-13T00:19:55.542561394+00:00 stderr F I1213 00:19:55.542200 1 request.go:697] Waited for 1.181263622s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies
2025-12-13T00:19:56.965247784+00:00 stderr F I1213 00:19:56.965184 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-13-crc -n openshift-kube-apiserver because it was missing
2025-12-13T00:19:58.142635901+00:00 stderr F I1213 00:19:58.142030 1 request.go:697] Waited for 1.176878194s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc
2025-12-13T00:19:58.145151291+00:00 stderr F I1213 00:19:58.145111 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Pending phase
2025-12-13T00:20:00.546173279+00:00 stderr F I1213 00:20:00.546118 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Running phase
2025-12-13T00:20:01.142446621+00:00 stderr F I1213 00:20:01.142143 1 request.go:697] Waited for 1.097504264s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver
2025-12-13T00:20:03.145254008+00:00 stderr F I1213 00:20:03.144706 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Running phase
2025-12-13T00:20:36.179065866+00:00 stderr F I1213 00:20:36.178503 1 installer_controller.go:512] "crc" is in transition to 13, but has not made progress because installer is not finished, but in Running phase
2025-12-13T00:20:36.401026155+00:00 stderr F I1213 00:20:36.400919 1 termination_observer.go:236] Observed event "TerminationPreShutdownHooksFinished" for API server pod "kube-apiserver-crc" (last termination at 2025-08-13 20:08:47 +0000 UTC) at 2025-12-13 00:20:36 +0000 UTC
2025-12-13T00:20:38.422537306+00:00 stderr F I1213 00:20:38.422283 1 termination_observer.go:236] Observed event "TerminationGracefulTerminationFinished" for API server pod "kube-apiserver-crc" (last termination at 2025-08-13 20:08:47 +0000 UTC) at 2025-12-13 00:20:38 +0000 UTC
2025-12-13T00:20:40.055133851+00:00 stderr F E1213 00:20:40.054700 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:40.063814065+00:00 stderr F E1213 00:20:40.063762 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:40.078208504+00:00 stderr F E1213 00:20:40.078146 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443:
connect: connection refused
2025-12-13T00:20:40.101718038+00:00 stderr F E1213 00:20:40.101669 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:40.146008023+00:00 stderr F E1213 00:20:40.145969 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:40.229504616+00:00 stderr F E1213 00:20:40.229443 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:40.393795830+00:00 stderr F E1213 00:20:40.393728 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:40.717871384+00:00 stderr F E1213 00:20:40.717806 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:41.364078552+00:00 stderr F E1213 00:20:41.363702 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:42.648477862+00:00 stderr F E1213 00:20:42.648398 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:45.216883970+00:00 stderr F E1213 00:20:45.216312 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:50.053945198+00:00 stderr F E1213 00:20:50.053673 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:50.340621343+00:00 stderr F E1213 00:20:50.340541 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.843104005+00:00 stderr F E1213 00:20:59.842642 1 leaderelection.go:332] error retrieving resource lock openshift-kube-apiserver-operator/kube-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.945555430+00:00 stderr F E1213 00:20:59.945507 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.949387073+00:00 stderr F E1213 00:20:59.949342 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.949387073+00:00 stderr F E1213 00:20:59.949374 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-12-13T00:20:59.952331632+00:00 stderr F E1213 00:20:59.952283 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.959175397+00:00 stderr F E1213 00:20:59.959098 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.959519587+00:00 stderr F E1213 00:20:59.959442 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-12-13T00:20:59.963678738+00:00 stderr F E1213 00:20:59.963644 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.972754424+00:00 stderr F E1213 00:20:59.972709 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:20:59.974581893+00:00 stderr F E1213 00:20:59.974536 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-12-13T00:20:59.985551329+00:00 stderr F E1213 00:20:59.985486 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:00.149381200+00:00 stderr F E1213 00:21:00.149294 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:00.349841919+00:00 stderr F E1213 00:21:00.349784 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-12-13T00:21:00.546895137+00:00 stderr F E1213 00:21:00.546821 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:00.949637884+00:00 stderr F E1213 00:21:00.949564 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:01.147359850+00:00 stderr F E1213 00:21:01.147316 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:01.348401515+00:00 stderr F E1213 00:21:01.348354 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-12-13T00:21:01.546218713+00:00 stderr F E1213 00:21:01.546044 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:01.948871278+00:00 stderr F E1213 00:21:01.948783 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:02.350010922+00:00 stderr F E1213 00:21:02.349721 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-12-13T00:21:02.546611278+00:00 stderr F E1213 00:21:02.546504 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:02.772523784+00:00 stderr F E1213 00:21:02.772441 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get 
"https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-12-13T00:21:02.949588532+00:00 stderr F E1213 00:21:02.949489 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:03.350570483+00:00 stderr F E1213 00:21:03.350218 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-12-13T00:21:03.746364793+00:00 stderr F E1213 00:21:03.746284 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:04.148356340+00:00 stderr F E1213 00:21:04.148294 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:04.551755526+00:00 stderr F E1213 00:21:04.551676 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-12-13T00:21:04.967313170+00:00 stderr F E1213 00:21:04.967237 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused,
"assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:05.146175166+00:00 stderr F E1213 00:21:05.146117 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.548575625+00:00 stderr F E1213 00:21:05.548500 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:05.949802161+00:00 stderr F E1213 00:21:05.949587 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:06.150249781+00:00 stderr F E1213 00:21:06.149614 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:06.746832150+00:00 stderr F E1213 
00:21:06.746770 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:07.169577437+00:00 stderr F E1213 00:21:07.169493 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get 
"https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:07.347258991+00:00 stderr F E1213 00:21:07.347184 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:07.951792715+00:00 stderr F E1213 00:21:07.951710 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection 
refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-12-13T00:21:08.749476480+00:00 stderr F E1213 00:21:08.749129 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:09.165015083+00:00 stderr F E1213 00:21:09.164915 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: 
connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:09.745920669+00:00 stderr F E1213 00:21:09.745844 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:10.350430712+00:00 stderr F E1213 00:21:10.350073 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:10.748235426+00:00 stderr F E1213 00:21:10.748175 1 base_controller.go:268] 
auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:11.153890953+00:00 stderr F E1213 00:21:11.152772 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-12-13T00:21:11.348570817+00:00 stderr F E1213 00:21:11.348484 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:11.561295516+00:00 stderr F E1213 00:21:11.561215 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-12-13T00:21:11.749514576+00:00 stderr F E1213 00:21:11.749427 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:13.347665841+00:00 stderr F E1213 00:21:13.347608 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-12-13T00:21:13.562222811+00:00 stderr F E1213 00:21:13.562151 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:15.147366606+00:00 stderr F E1213 00:21:15.146805 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:15.347835075+00:00 stderr F E1213 00:21:15.347767 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:15.562426376+00:00 stderr F E1213 
00:21:15.562347 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:15.948923646+00:00 stderr F E1213 00:21:15.948583 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:16.753324412+00:00 stderr F E1213 00:21:16.753064 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:17.548373256+00:00 stderr F E1213 00:21:17.548203 1 base_controller.go:268] 
TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:17.760818418+00:00 stderr F E1213 00:21:17.760765 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get 
"https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:19.569262729+00:00 stderr F I1213 00:21:19.569211 1 helpers.go:260] lister was stale at resourceVersion=42667, live get showed resourceVersion=42996 2025-12-13T00:21:19.823610063+00:00 stderr F I1213 00:21:19.823543 1 helpers.go:184] lister was stale at resourceVersion=42667, live get showed resourceVersion=43011 2025-12-13T00:21:19.833691314+00:00 stderr F E1213 00:21:19.833642 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused] 2025-12-13T00:21:47.538841014+00:00 stderr F I1213 00:21:47.538368 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:51.974521370+00:00 stderr F I1213 00:21:51.974277 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:53.585776252+00:00 stderr F I1213 00:21:53.585196 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-12-13T00:21:56.578662300+00:00 stderr F I1213 00:21:56.578049 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/1.log 
2025-08-13T20:01:27.619418140+00:00 stderr F I0813 20:01:27.619067 1 cmd.go:241] Using service-serving-cert provided certificates 2025-08-13T20:01:27.619418140+00:00 stderr F I0813 20:01:27.619317 1 leaderelection.go:122] The leader election gives 4 
retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:01:27.623407753+00:00 stderr F I0813 20:01:27.623350 1 observer_polling.go:159] Starting file observer 2025-08-13T20:01:36.827605524+00:00 stderr F I0813 20:01:36.825569 1 builder.go:299] kube-apiserver-operator version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2 2025-08-13T20:01:59.521524942+00:00 stderr F I0813 20:01:59.520654 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:59.521524942+00:00 stderr F W0813 20:01:59.521433 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:01:59.521524942+00:00 stderr F W0813 20:01:59.521445 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. 2025-08-13T20:02:01.273561360+00:00 stderr F I0813 20:02:01.271283 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy 2025-08-13T20:02:01.273561360+00:00 stderr F I0813 20:02:01.272562 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock... 
2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302114 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302209 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302358 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302384 1 secure_serving.go:213] Serving securely on [::]:8443 2025-08-13T20:02:01.306901900+00:00 stderr F I0813 20:02:01.302511 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:02:01.322654950+00:00 stderr F I0813 20:02:01.321242 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:02:01.322654950+00:00 stderr F I0813 20:02:01.302391 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:02:01.322654950+00:00 stderr F I0813 20:02:01.322353 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:02:01.322654950+00:00 stderr F I0813 20:02:01.322378 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:02:01.404880544+00:00 stderr F I0813 20:02:01.403667 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:02:01.423025072+00:00 stderr F I0813 20:02:01.422737 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
2025-08-13T20:02:01.423450894+00:00 stderr F I0813 20:02:01.423368 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:02:05.703355729+00:00 stderr F I0813 20:02:05.700614 1 leaderelection.go:260] successfully acquired lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock 2025-08-13T20:02:05.713348184+00:00 stderr F I0813 20:02:05.711766 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-lock", UID:"e11b3070-9ae9-4936-9a58-0ad566006f7f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"30617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-apiserver-operator-78d54458c4-sc8h7_7069e219-921b-44d2-84f4-301ccffeb8ac became leader 2025-08-13T20:02:05.759933782+00:00 stderr F I0813 20:02:05.759742 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:02:05.774889959+00:00 stderr F I0813 20:02:05.772704 1 starter.go:140] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack 
MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:02:05.776863865+00:00 stderr F I0813 20:02:05.776663 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", 
"EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:02:17.879199399+00:00 stderr F I0813 20:02:17.878450 1 base_controller.go:67] Waiting for caches to sync for KubeAPIServerStaticResources 2025-08-13T20:02:17.879984281+00:00 stderr F I0813 20:02:17.879924 1 base_controller.go:67] Waiting for caches to sync for SCCReconcileController 2025-08-13T20:02:17.880006932+00:00 stderr F I0813 20:02:17.879998 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController 2025-08-13T20:02:17.880069803+00:00 stderr F I0813 20:02:17.880021 1 base_controller.go:67] Waiting for caches to sync for NodeKubeconfigController 2025-08-13T20:02:17.880069803+00:00 stderr F I0813 20:02:17.880056 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver 2025-08-13T20:02:17.880263309+00:00 stderr F I0813 20:02:17.880213 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-apiserver 2025-08-13T20:02:17.880322861+00:00 stderr F I0813 20:02:17.880279 1 certrotationcontroller.go:886] Starting CertRotation 2025-08-13T20:02:17.880322861+00:00 stderr F I0813 20:02:17.880304 1 
certrotationcontroller.go:851] Waiting for CertRotation 2025-08-13T20:02:17.880552497+00:00 stderr F I0813 20:02:17.880502 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController 2025-08-13T20:02:17.880552497+00:00 stderr F I0813 20:02:17.880537 1 base_controller.go:67] Waiting for caches to sync for CertRotationTimeUpgradeableController 2025-08-13T20:02:17.881171435+00:00 stderr F I0813 20:02:17.881136 1 base_controller.go:67] Waiting for caches to sync for ServiceAccountIssuerController 2025-08-13T20:02:17.881326439+00:00 stderr F I0813 20:02:17.881303 1 base_controller.go:67] Waiting for caches to sync for EventWatchController 2025-08-13T20:02:17.881586467+00:00 stderr F I0813 20:02:17.881561 1 base_controller.go:67] Waiting for caches to sync for BoundSATokenSignerController 2025-08-13T20:02:17.881675539+00:00 stderr F I0813 20:02:17.881633 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController 2025-08-13T20:02:17.882130002+00:00 stderr F I0813 20:02:17.882093 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController 2025-08-13T20:02:17.882254426+00:00 stderr F I0813 20:02:17.882212 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController 2025-08-13T20:02:17.882353649+00:00 stderr F I0813 20:02:17.882331 1 base_controller.go:67] Waiting for caches to sync for KubeletVersionSkewController 2025-08-13T20:02:17.882443481+00:00 stderr F I0813 20:02:17.882421 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile 2025-08-13T20:02:17.882553794+00:00 stderr F I0813 20:02:17.882516 1 base_controller.go:67] Waiting for caches to sync for webhookSupportabilityController 2025-08-13T20:02:17.882888394+00:00 stderr F I0813 20:02:17.881137 1 base_controller.go:67] Waiting for caches to sync for PodSecurityReadinessController 2025-08-13T20:02:17.882955656+00:00 stderr F I0813 20:02:17.882936 1 base_controller.go:73] Caches are synced for 
PodSecurityReadinessController 2025-08-13T20:02:17.883161222+00:00 stderr F I0813 20:02:17.883118 1 base_controller.go:110] Starting #1 worker of PodSecurityReadinessController controller ... 2025-08-13T20:02:17.903573644+00:00 stderr F I0813 20:02:17.903482 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController 2025-08-13T20:02:17.906328282+00:00 stderr F I0813 20:02:17.906267 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController 2025-08-13T20:02:17.906367934+00:00 stderr F I0813 20:02:17.906325 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController 2025-08-13T20:02:17.906367934+00:00 stderr F I0813 20:02:17.906339 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController 2025-08-13T20:02:17.913309502+00:00 stderr F I0813 20:02:17.913261 1 termination_observer.go:145] Starting TerminationObserver 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918169 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918245 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918301 1 base_controller.go:67] Waiting for caches to sync for RevisionController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918422 1 base_controller.go:67] Waiting for caches to sync for InstallerController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918437 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918450 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918464 1 base_controller.go:67] Waiting for caches to sync for PruneController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918476 
1 base_controller.go:67] Waiting for caches to sync for StartupMonitorPodCondition 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918488 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateFallback 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918501 1 base_controller.go:67] Waiting for caches to sync for NodeController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918748 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController 2025-08-13T20:02:17.919080726+00:00 stderr F I0813 20:02:17.918766 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController 2025-08-13T20:02:17.921977849+00:00 stderr F I0813 20:02:17.921919 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer 2025-08-13T20:02:17.921977849+00:00 stderr F I0813 20:02:17.921959 1 base_controller.go:67] Waiting for caches to sync for GuardController 2025-08-13T20:02:18.011930785+00:00 stderr F I0813 20:02:18.011883 1 base_controller.go:73] Caches are synced for KubeletVersionSkewController 2025-08-13T20:02:18.011993247+00:00 stderr F I0813 20:02:18.011979 1 base_controller.go:110] Starting #1 worker of KubeletVersionSkewController controller ... 
2025-08-13T20:02:18.012509172+00:00 stderr F E0813 20:02:18.012486 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T20:02:18.019541782+00:00 stderr F E0813 20:02:18.019469 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T20:02:18.032089430+00:00 stderr F E0813 20:02:18.032013 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found 2025-08-13T20:02:18.096188709+00:00 stderr F I0813 20:02:18.096086 1 base_controller.go:73] Caches are synced for ServiceAccountIssuerController 2025-08-13T20:02:18.096188709+00:00 stderr F I0813 20:02:18.096143 1 base_controller.go:110] Starting #1 worker of ServiceAccountIssuerController controller ... 2025-08-13T20:02:18.096367914+00:00 stderr F I0813 20:02:18.096309 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController 2025-08-13T20:02:18.096412085+00:00 stderr F I0813 20:02:18.096397 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ... 2025-08-13T20:02:18.097138846+00:00 stderr F I0813 20:02:18.097071 1 base_controller.go:73] Caches are synced for CertRotationTimeUpgradeableController 2025-08-13T20:02:18.097163917+00:00 stderr F I0813 20:02:18.097117 1 base_controller.go:110] Starting #1 worker of CertRotationTimeUpgradeableController controller ... 2025-08-13T20:02:18.097237749+00:00 stderr F I0813 20:02:18.097176 1 base_controller.go:73] Caches are synced for SCCReconcileController 2025-08-13T20:02:18.097237749+00:00 stderr F I0813 20:02:18.097209 1 base_controller.go:110] Starting #1 worker of SCCReconcileController controller ... 
2025-08-13T20:02:18.097459735+00:00 stderr F I0813 20:02:18.097399 1 certrotationcontroller.go:869] Finished waiting for CertRotation 2025-08-13T20:02:18.097583379+00:00 stderr F I0813 20:02:18.097544 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.097727063+00:00 stderr F I0813 20:02:18.097635 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-apiserver 2025-08-13T20:02:18.097828716+00:00 stderr F I0813 20:02:18.097741 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-apiserver controller ... 2025-08-13T20:02:18.098194596+00:00 stderr F I0813 20:02:18.098132 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098210446+00:00 stderr F I0813 20:02:18.098192 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098220017+00:00 stderr F I0813 20:02:18.098212 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098311529+00:00 stderr F I0813 20:02:18.098264 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098359401+00:00 stderr F I0813 20:02:18.098344 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098421402+00:00 stderr F I0813 20:02:18.098375 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098501605+00:00 stderr F I0813 20:02:18.098460 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098584537+00:00 stderr F I0813 20:02:18.098566 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098637529+00:00 stderr F I0813 20:02:18.098625 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.098670160+00:00 stderr F I0813 20:02:18.098345 1 base_controller.go:67] Waiting for 
caches to sync for CertRotationController 2025-08-13T20:02:18.098703180+00:00 stderr F I0813 20:02:18.098363 1 base_controller.go:67] Waiting for caches to sync for CertRotationController 2025-08-13T20:02:18.118752002+00:00 stderr F I0813 20:02:18.118654 1 base_controller.go:73] Caches are synced for NodeController 2025-08-13T20:02:18.118752002+00:00 stderr F I0813 20:02:18.118719 1 base_controller.go:110] Starting #1 worker of NodeController controller ... 2025-08-13T20:02:18.118881376+00:00 stderr F I0813 20:02:18.118690 1 base_controller.go:73] Caches are synced for PruneController 2025-08-13T20:02:18.118927347+00:00 stderr F I0813 20:02:18.118913 1 base_controller.go:110] Starting #1 worker of PruneController controller ... 2025-08-13T20:02:18.119201625+00:00 stderr F I0813 20:02:18.118861 1 base_controller.go:73] Caches are synced for BackingResourceController 2025-08-13T20:02:18.119241016+00:00 stderr F I0813 20:02:18.119228 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ... 2025-08-13T20:02:18.122157770+00:00 stderr F I0813 20:02:18.122062 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController 2025-08-13T20:02:18.122157770+00:00 stderr F I0813 20:02:18.122114 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:02:18.122157770+00:00 stderr F I0813 20:02:18.122134 1 base_controller.go:73] Caches are synced for LoggingSyncer 2025-08-13T20:02:18.122199531+00:00 stderr F I0813 20:02:18.122189 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ... 
2025-08-13T20:02:18.126026110+00:00 stderr F I0813 20:02:18.125901 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:02:18.296336678+00:00 stderr F I0813 20:02:18.296214 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:18.296719959+00:00 stderr F I0813 20:02:18.296667 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:18.383032062+00:00 stderr F I0813 20:02:18.382910 1 base_controller.go:73] Caches are synced for webhookSupportabilityController 2025-08-13T20:02:18.383032062+00:00 stderr F I0813 20:02:18.382952 1 base_controller.go:110] Starting #1 worker of webhookSupportabilityController controller ... 2025-08-13T20:02:18.486049641+00:00 stderr F I0813 20:02:18.485949 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:18.685675145+00:00 stderr F I0813 20:02:18.685551 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:18.783402033+00:00 stderr F I0813 20:02:18.782672 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T20:02:18.783402033+00:00 stderr F I0813 20:02:18.782721 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-08-13T20:02:18.886902746+00:00 stderr F I0813 20:02:18.886694 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:18.904420416+00:00 stderr F I0813 20:02:18.904290 1 base_controller.go:73] Caches are synced for EncryptionKeyController 2025-08-13T20:02:18.904420416+00:00 stderr F I0813 20:02:18.904336 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ... 
2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906616 1 base_controller.go:73] Caches are synced for EncryptionMigrationController 2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906658 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ... 2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906658 1 base_controller.go:73] Caches are synced for EncryptionStateController 2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906679 1 base_controller.go:73] Caches are synced for EncryptionPruneController 2025-08-13T20:02:18.906703381+00:00 stderr F I0813 20:02:18.906694 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ... 2025-08-13T20:02:18.906734692+00:00 stderr F I0813 20:02:18.906681 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ... 2025-08-13T20:02:18.919072304+00:00 stderr F I0813 20:02:18.918963 1 base_controller.go:73] Caches are synced for StaticPodStateFallback 2025-08-13T20:02:18.919072304+00:00 stderr F I0813 20:02:18.919023 1 base_controller.go:110] Starting #1 worker of StaticPodStateFallback controller ... 2025-08-13T20:02:18.919108845+00:00 stderr F I0813 20:02:18.919085 1 base_controller.go:73] Caches are synced for InstallerController 2025-08-13T20:02:18.919108845+00:00 stderr F I0813 20:02:18.919093 1 base_controller.go:110] Starting #1 worker of InstallerController controller ... 2025-08-13T20:02:18.919913978+00:00 stderr F I0813 20:02:18.919758 1 base_controller.go:73] Caches are synced for StartupMonitorPodCondition 2025-08-13T20:02:18.919913978+00:00 stderr F I0813 20:02:18.919904 1 base_controller.go:110] Starting #1 worker of StartupMonitorPodCondition controller ... 
2025-08-13T20:02:18.920003100+00:00 stderr F I0813 20:02:18.919954 1 base_controller.go:73] Caches are synced for InstallerStateController 2025-08-13T20:02:18.920089033+00:00 stderr F I0813 20:02:18.919981 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ... 2025-08-13T20:02:18.920103733+00:00 stderr F I0813 20:02:18.920091 1 base_controller.go:73] Caches are synced for StaticPodStateController 2025-08-13T20:02:18.920103733+00:00 stderr F I0813 20:02:18.920099 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ... 2025-08-13T20:02:18.922216883+00:00 stderr F I0813 20:02:18.921920 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9 2025-08-13T20:02:18.922646015+00:00 stderr F I0813 20:02:18.922185 1 base_controller.go:73] Caches are synced for GuardController 2025-08-13T20:02:18.922662596+00:00 stderr F I0813 20:02:18.922647 1 base_controller.go:110] Starting #1 worker of GuardController controller ... 2025-08-13T20:02:18.981311399+00:00 stderr F I0813 20:02:18.981186 1 base_controller.go:73] Caches are synced for EncryptionConditionController 2025-08-13T20:02:18.981311399+00:00 stderr F I0813 20:02:18.981246 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ... 
2025-08-13T20:02:19.082276589+00:00 stderr F I0813 20:02:19.082038 1 request.go:697] Waited for 1.160648461s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps?limit=500&resourceVersion=0 2025-08-13T20:02:19.108201699+00:00 stderr F I0813 20:02:19.108016 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:19.121299662+00:00 stderr F I0813 20:02:19.121188 1 base_controller.go:73] Caches are synced for RevisionController 2025-08-13T20:02:19.121299662+00:00 stderr F I0813 20:02:19.121243 1 base_controller.go:110] Starting #1 worker of RevisionController controller ... 2025-08-13T20:02:19.121299662+00:00 stderr F I0813 20:02:19.121245 1 base_controller.go:73] Caches are synced for MissingStaticPodController 2025-08-13T20:02:19.121299662+00:00 stderr F I0813 20:02:19.121281 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ... 2025-08-13T20:02:19.183865117+00:00 stderr F I0813 20:02:19.183548 1 base_controller.go:73] Caches are synced for auditPolicyController 2025-08-13T20:02:19.183865117+00:00 stderr F I0813 20:02:19.183577 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile 2025-08-13T20:02:19.183865117+00:00 stderr F I0813 20:02:19.183615 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ... 2025-08-13T20:02:19.183865117+00:00 stderr F I0813 20:02:19.183591 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ... 
2025-08-13T20:02:19.185129783+00:00 stderr F I0813 20:02:19.185067 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling 2025-08-13T20:02:19.284423006+00:00 stderr F I0813 20:02:19.284276 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:19.379709014+00:00 stderr F I0813 20:02:19.379559 1 base_controller.go:73] Caches are synced for KubeAPIServerStaticResources 2025-08-13T20:02:19.379709014+00:00 stderr F I0813 20:02:19.379598 1 base_controller.go:110] Starting #1 worker of KubeAPIServerStaticResources controller ... 2025-08-13T20:02:19.485287026+00:00 stderr F I0813 20:02:19.485140 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:19.499418569+00:00 stderr F I0813 20:02:19.499274 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.499418569+00:00 stderr F I0813 20:02:19.499329 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.582021656+00:00 stderr F I0813 20:02:19.581705 1 base_controller.go:73] Caches are synced for BoundSATokenSignerController 2025-08-13T20:02:19.582021656+00:00 stderr F I0813 20:02:19.581760 1 base_controller.go:110] Starting #1 worker of BoundSATokenSignerController controller ... 
2025-08-13T20:02:19.685821037+00:00 stderr F I0813 20:02:19.685580 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698586 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698622 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698647 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698673 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698693 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.698764636+00:00 stderr F I0813 20:02:19.698719 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.698999373+00:00 stderr F I0813 20:02:19.698767 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.698999373+00:00 stderr F I0813 20:02:19.698888 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.698999373+00:00 stderr F I0813 20:02:19.698963 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.699015473+00:00 stderr F I0813 20:02:19.698999 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.699130016+00:00 stderr F I0813 20:02:19.699047 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.699130016+00:00 stderr F I0813 20:02:19.699090 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 
2025-08-13T20:02:19.699147557+00:00 stderr F I0813 20:02:19.699138 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.699157397+00:00 stderr F I0813 20:02:19.699146 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.699253950+00:00 stderr F I0813 20:02:19.699174 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.699253950+00:00 stderr F I0813 20:02:19.699206 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.699253950+00:00 stderr F I0813 20:02:19.699229 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.699253950+00:00 stderr F I0813 20:02:19.699237 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.703409108+00:00 stderr F I0813 20:02:19.703331 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.703409108+00:00 stderr F I0813 20:02:19.703368 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.703409108+00:00 stderr F I0813 20:02:19.703384 1 base_controller.go:73] Caches are synced for CertRotationController 2025-08-13T20:02:19.703409108+00:00 stderr F I0813 20:02:19.703388 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ... 2025-08-13T20:02:19.780948090+00:00 stderr F I0813 20:02:19.780765 1 base_controller.go:73] Caches are synced for TargetConfigController 2025-08-13T20:02:19.780948090+00:00 stderr F I0813 20:02:19.780887 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ... 
2025-08-13T20:02:19.780948090+00:00 stderr F I0813 20:02:19.780899 1 base_controller.go:73] Caches are synced for NodeKubeconfigController 2025-08-13T20:02:19.780948090+00:00 stderr F I0813 20:02:19.780925 1 base_controller.go:110] Starting #1 worker of NodeKubeconfigController controller ... 2025-08-13T20:02:19.889872838+00:00 stderr F I0813 20:02:19.889719 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:20.082710318+00:00 stderr F I0813 20:02:20.082501 1 request.go:697] Waited for 2.16078237s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/kube-system/secrets?limit=500&resourceVersion=0 2025-08-13T20:02:20.092984811+00:00 stderr F I0813 20:02:20.092914 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:20.293953334+00:00 stderr F I0813 20:02:20.289478 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:20.468287497+00:00 stderr F E0813 20:02:20.468230 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9] 2025-08-13T20:02:20.492221110+00:00 stderr F I0813 20:02:20.492157 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:02:20.499635562+00:00 stderr F I0813 20:02:20.499560 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required 
resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:02:20.501735802+00:00 stderr F I0813 20:02:20.501518 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:20.502890005+00:00 stderr F I0813 20:02:20.502770 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:20.526226710+00:00 stderr F I0813 20:02:20.519326 1 base_controller.go:73] Caches are synced for ResourceSyncController 2025-08-13T20:02:20.526226710+00:00 stderr F I0813 20:02:20.519364 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ... 
2025-08-13T20:02:20.582665250+00:00 stderr F I0813 20:02:20.582493 1 base_controller.go:73] Caches are synced for EventWatchController 2025-08-13T20:02:20.582665250+00:00 stderr F I0813 20:02:20.582569 1 base_controller.go:110] Starting #1 worker of EventWatchController controller ... 2025-08-13T20:02:20.685719830+00:00 stderr F I0813 20:02:20.685578 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:02:20.781261646+00:00 stderr F I0813 20:02:20.781129 1 base_controller.go:73] Caches are synced for ConfigObserver 2025-08-13T20:02:20.781261646+00:00 stderr F I0813 20:02:20.781199 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ... 2025-08-13T20:02:21.088760848+00:00 stderr F I0813 20:02:21.082530 1 request.go:697] Waited for 2.160698948s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller 2025-08-13T20:02:21.154002859+00:00 stderr F I0813 20:02:21.152140 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]" 2025-08-13T20:02:21.182432900+00:00 stderr F I0813 
20:02:21.182253 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:02:21.906225198+00:00 stderr F E0813 20:02:21.905498 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:02:22.082699332+00:00 stderr F I0813 20:02:22.082563 1 request.go:697] Waited for 2.295925205s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config 2025-08-13T20:02:23.283449646+00:00 stderr F I0813 20:02:23.282137 1 request.go:697] Waited for 1.368757607s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2025-08-13T20:02:23.596598178+00:00 stderr F I0813 20:02:23.593187 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/webhook-authenticator -n openshift-kube-apiserver because it changed 2025-08-13T20:02:23.598113781+00:00 stderr F I0813 20:02:23.598044 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 10 triggered by "optional secret/webhook-authenticator has changed" 2025-08-13T20:02:25.681928317+00:00 stderr F I0813 20:02:25.681602 1 request.go:697] Waited for 1.173333492s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies 2025-08-13T20:02:25.917500667+00:00 stderr F I0813 20:02:25.917379 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-10 -n openshift-kube-apiserver because it was missing 
2025-08-13T20:02:25.919894385+00:00 stderr F I0813 20:02:25.919761 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:02:26.197572827+00:00 stderr F I0813 20:02:26.195195 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:02:26.202027674+00:00 stderr F I0813 20:02:26.199059 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:02:26.839246442+00:00 stderr F I0813 20:02:26.838416 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing 2025-08-13T20:02:26.890933386+00:00 stderr F I0813 20:02:26.890736 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:02:26.906316425+00:00 stderr F E0813 20:02:26.905734 1 podsecurityreadinesscontroller.go:102] "namespace:" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-config-operator?dryRun=All&fieldManager=pod-security-readiness-controller&force=false\": dial tcp 10.217.4.1:443: connect: connection refused" openshift-config-operator="(MISSING)" 2025-08-13T20:02:27.062032857+00:00 stderr F I0813 20:02:27.061872 1 termination_observer.go:236] Observed event "TerminationPreShutdownHooksFinished" for API server pod "kube-apiserver-crc" (last termination at 2024-06-27 13:28:05 +0000 UTC) at 2025-08-13 20:02:26 +0000 UTC 2025-08-13T20:02:27.282296610+00:00 stderr F I0813 20:02:27.282173 1 request.go:697] Waited for 1.078495576s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:02:27.288882048+00:00 stderr F E0813 20:02:27.288736 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 
10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:27.493059402+00:00 stderr F W0813 20:02:27.492835 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.493059402+00:00 stderr F E0813 20:02:27.492985 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.689448045+00:00 stderr F I0813 20:02:27.688426 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ConfigMapCreateFailed' Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.696928148+00:00 stderr F I0813 20:02:27.696380 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RevisionCreateFailed' 
Failed to create revision 10: Post "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.697385821+00:00 stderr F E0813 20:02:27.697363 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.702021973+00:00 stderr F E0813 20:02:27.701974 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.723108285+00:00 stderr F E0813 20:02:27.722548 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.749338123+00:00 stderr F E0813 20:02:27.749277 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.794133701+00:00 stderr F E0813 20:02:27.793987 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.878679733+00:00 stderr F E0813 20:02:27.878589 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:27.888764171+00:00 stderr F E0813 20:02:27.888701 1 base_controller.go:268] auditPolicyController reconciliation failed: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.043317470+00:00 stderr F E0813 20:02:28.043179 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.289375869+00:00 stderr F E0813 20:02:28.289268 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:28.379214582+00:00 stderr F E0813 20:02:28.379076 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.497073854+00:00 stderr F W0813 20:02:28.496880 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.497182557+00:00 stderr F E0813 20:02:28.497164 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.689759121+00:00 stderr F E0813 20:02:28.689575 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:28.894884463+00:00 stderr F E0813 20:02:28.894732 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.025637333+00:00 stderr F E0813 20:02:29.025366 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.088538967+00:00 stderr F E0813 20:02:29.088396 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:29.295313266+00:00 stderr F W0813 20:02:29.295216 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.295313266+00:00 stderr F E0813 20:02:29.295284 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.488824236+00:00 stderr F E0813 20:02:29.488450 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:29.505160672+00:00 stderr F I0813 20:02:29.503630 1 termination_observer.go:236] Observed event "TerminationGracefulTerminationFinished" for API server pod "kube-apiserver-crc" (last termination at 2024-06-27 13:28:05 +0000 UTC) at 2025-08-13 20:02:28 +0000 UTC 2025-08-13T20:02:29.907564892+00:00 stderr F E0813 20:02:29.907113 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.091381636+00:00 stderr F W0813 20:02:30.088902 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:30.091381636+00:00 stderr F E0813 20:02:30.089000 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.299063050+00:00 stderr F E0813 20:02:30.298658 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.330682142+00:00 stderr F E0813 20:02:30.330624 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.697258979+00:00 stderr F E0813 20:02:30.697085 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:30.887421723+00:00 stderr F W0813 20:02:30.887227 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:30.887421723+00:00 stderr F E0813 20:02:30.887279 1 
base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.086941195+00:00 stderr F E0813 20:02:31.086732 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.490093166+00:00 stderr F E0813 20:02:31.489980 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:31.686267992+00:00 stderr F W0813 20:02:31.686138 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.686267992+00:00 stderr F E0813 20:02:31.686205 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.875890662+00:00 stderr F E0813 20:02:31.875719 1 event.go:355] "Unable to 
write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:02:31.901012308+00:00 stderr F E0813 20:02:31.900911 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.913477984+00:00 stderr F E0813 20:02:31.913424 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/controlplane.operator.openshift.io/v1alpha1/namespaces/openshift-kube-apiserver/podnetworkconnectivitychecks": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.917548790+00:00 stderr F E0813 20:02:31.917479 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:02:31.921520413+00:00 stderr F E0813 20:02:31.921463 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.935493062+00:00 stderr F E0813 20:02:31.935342 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get 
"https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.957609803+00:00 stderr F E0813 20:02:31.957460 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:31.999495908+00:00 stderr F E0813 20:02:31.999379 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.090438442+00:00 stderr F E0813 20:02:32.090324 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.253104613+00:00 stderr F E0813 20:02:32.252976 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.287631368+00:00 stderr F E0813 20:02:32.287450 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:32.485872313+00:00 stderr F W0813 20:02:32.485628 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.485872313+00:00 stderr F E0813 20:02:32.485740 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.575041717+00:00 stderr F E0813 20:02:32.574943 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.685457496+00:00 stderr F E0813 20:02:32.685363 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:32.896541708+00:00 stderr F E0813 20:02:32.896479 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:33.081990088+00:00 stderr F I0813 20:02:33.081929 1 request.go:697] Waited for 1.153124285s due to 
client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client 2025-08-13T20:02:33.116142203+00:00 stderr F E0813 20:02:33.116034 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:02:33.219207153+00:00 stderr F E0813 20:02:33.219100 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:33.288288754+00:00 stderr F E0813 20:02:33.288215 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:33.838225562+00:00 stderr F W0813 20:02:33.837417 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:33.838328125+00:00 stderr F E0813 20:02:33.838272 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:33.895170166+00:00 stderr F E0813 20:02:33.895117 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:34.123866160+00:00 stderr F I0813 20:02:34.123707 1 request.go:697] Waited for 1.437381574s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies
2025-08-13T20:02:34.132409974+00:00 stderr F E0813 20:02:34.131888 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:34.285426288+00:00 stderr F I0813 20:02:34.285325 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:34.287761615+00:00 stderr F E0813 20:02:34.287687 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:34.287761615+00:00 stderr F E0813 20:02:34.287726 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}"
2025-08-13T20:02:34.501765090+00:00 stderr F E0813 20:02:34.501711 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:34.704875064+00:00 stderr F E0813 20:02:34.704735 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:34.888966325+00:00 stderr F E0813 20:02:34.888912 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:35.086482340+00:00 stderr F W0813 20:02:35.086334 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:35.086482340+00:00 stderr F E0813 20:02:35.086413 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:35.286871476+00:00 stderr F I0813 20:02:35.286587 1 request.go:697] Waited for 1.149563123s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies
2025-08-13T20:02:35.291251821+00:00 stderr F E0813 20:02:35.291186 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:35.889880789+00:00 stderr F E0813 20:02:35.889726 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:36.090603975+00:00 stderr F E0813 20:02:36.090455 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:36.482122994+00:00 stderr F I0813 20:02:36.482009 1 request.go:697] Waited for 1.394598594s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs
2025-08-13T20:02:36.485995604+00:00 stderr F W0813 20:02:36.485923 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:36.485995604+00:00 stderr F E0813 20:02:36.485981 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:36.686004040+00:00 stderr F E0813 20:02:36.685921 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:36.885666116+00:00 stderr F I0813 20:02:36.885489 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:36.886292603+00:00 stderr F E0813 20:02:36.886219 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.064326992+00:00 stderr F E0813 20:02:37.064202 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.109672166+00:00 stderr F E0813 20:02:37.109536 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:37.287078257+00:00 stderr F E0813 20:02:37.286993 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.682584849+00:00 stderr F I0813 20:02:37.682427 1 request.go:697] Waited for 1.195569716s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs
2025-08-13T20:02:37.686699647+00:00 stderr F W0813 20:02:37.686536 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.686699647+00:00 stderr F E0813 20:02:37.686594 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:37.886023922+00:00 stderr F E0813 20:02:37.885969 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:38.021723653+00:00 stderr F E0813 20:02:38.021300 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:38.496642781+00:00 stderr F E0813 20:02:38.496414 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:38.610677824+00:00 stderr F E0813 20:02:38.610594 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}"
2025-08-13T20:02:38.682735480+00:00 stderr F I0813 20:02:38.682566 1 request.go:697] Waited for 1.3545324s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc
2025-08-13T20:02:38.686076805+00:00 stderr F E0813 20:02:38.685979 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.085831439+00:00 stderr F I0813 20:02:39.085615 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.087639121+00:00 stderr F E0813 20:02:39.087595 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.487886129+00:00 stderr F E0813 20:02:39.487464 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.895188618+00:00 stderr F W0813 20:02:39.895017 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:39.895188618+00:00 stderr F E0813 20:02:39.895131 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:40.283222147+00:00 stderr F I0813 20:02:40.282596 1 request.go:697] Waited for 1.096328425s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies
2025-08-13T20:02:40.288259131+00:00 stderr F E0813 20:02:40.287945 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:40.513629800+00:00 stderr F E0813 20:02:40.513510 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:40.686668777+00:00 stderr F E0813 20:02:40.686571 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:41.084468975+00:00 stderr F I0813 20:02:41.084342 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:41.086141903+00:00 stderr F E0813 20:02:41.086113 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:41.215589026+00:00 stderr F E0813 20:02:41.215513 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}"
2025-08-13T20:02:41.285356717+00:00 stderr F E0813 20:02:41.285243 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:41.682650530+00:00 stderr F E0813 20:02:41.682502 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}"
2025-08-13T20:02:41.685689016+00:00 stderr F E0813 20:02:41.685538 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:41.885515477+00:00 stderr F E0813 20:02:41.885385 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:42.082715363+00:00 stderr F I0813 20:02:42.082621 1 request.go:697] Waited for 1.024058585s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa
2025-08-13T20:02:42.088271282+00:00 stderr F E0813 20:02:42.088207 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:02:42.186979738+00:00 stderr F E0813 20:02:42.186926 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:42.885542287+00:00 stderr F I0813 20:02:42.885433 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:42.887598435+00:00 stderr F E0813 20:02:42.887569 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:02:43.106315655+00:00 stderr F E0813 20:02:43.106176 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:43.288137192+00:00 stderr F E0813 20:02:43.288006 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.284390023+00:00 stderr F I0813 20:02:44.284299 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:44.297508357+00:00 stderr F E0813 20:02:44.297295 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: 
connection refused 2025-08-13T20:02:45.087187274+00:00 stderr F E0813 20:02:45.087053 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.317925606+00:00 stderr F E0813 20:02:45.317397 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get 
"https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:45.684556595+00:00 stderr F I0813 20:02:45.684425 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:45.688731014+00:00 stderr F E0813 20:02:45.688693 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:02:46.087318435+00:00 stderr F E0813 20:02:46.087233 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.884228139+00:00 stderr F I0813 20:02:46.884159 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:46.886405971+00:00 stderr F E0813 20:02:46.886333 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:47.109306260+00:00 stderr F E0813 20:02:47.109244 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:02:48.092381354+00:00 stderr F E0813 20:02:48.092322 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:48.287721726+00:00 stderr F I0813 20:02:48.287638 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:48.289058344+00:00 stderr F E0813 20:02:48.289024 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:48.335550790+00:00 stderr F E0813 20:02:48.335494 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:48.486293661+00:00 stderr F E0813 20:02:48.486221 1 base_controller.go:268] StaticPodStateController 
reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.049689662+00:00 stderr F E0813 20:02:49.049111 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:02:49.543433607+00:00 stderr F E0813 20:02:49.543292 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:49.684554953+00:00 stderr F I0813 20:02:49.684439 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.686370745+00:00 stderr F E0813 20:02:49.686338 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:49.885480655+00:00 stderr F E0813 20:02:49.885305 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.087540489+00:00 stderr F E0813 20:02:50.087422 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.697695005+00:00 stderr F W0813 20:02:50.697388 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put 
"https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:50.697695005+00:00 stderr F E0813 20:02:50.697469 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.229356262+00:00 stderr F E0813 20:02:51.229271 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:02:51.284985769+00:00 stderr F I0813 20:02:51.284770 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.286830041+00:00 stderr F E0813 20:02:51.286659 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.485913661+00:00 stderr F E0813 20:02:51.485733 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:51.685286878+00:00 stderr F E0813 20:02:51.685195 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:02:52.176266554+00:00 stderr F E0813 20:02:52.176148 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: 
connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:52.430383403+00:00 stderr F E0813 20:02:52.430286 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.285294732+00:00 stderr F I0813 
20:02:53.285147 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.286495956+00:00 stderr F E0813 20:02:53.286258 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.493943244+00:00 stderr F E0813 20:02:53.487108 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:53.707885268+00:00 stderr F E0813 20:02:53.707699 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:02:54.086622012+00:00 stderr F E0813 20:02:54.086481 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:55.499963951+00:00 stderr F E0813 20:02:55.499741 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get 
"https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:55.686172193+00:00 stderr F E0813 20:02:55.686116 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:57.288966766+00:00 stderr F E0813 20:02:57.287212 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:57.506960085+00:00 stderr F E0813 20:02:57.506636 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection 
refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get 
"https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:59.052462204+00:00 stderr F E0813 20:02:59.052368 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:02:59.089362067+00:00 stderr F E0813 20:02:59.089260 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:02:59.285310937+00:00 stderr F E0813 20:02:59.285216 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:59.485938219+00:00 stderr F E0813 20:02:59.485835 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:02:59.711761171+00:00 stderr F E0813 20:02:59.711633 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:03:00.887365237+00:00 stderr F E0813 20:03:00.887225 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:01.232437542+00:00 stderr F E0813 20:03:01.232338 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 
2025-08-13T20:03:01.687880194+00:00 stderr F E0813 20:03:01.687697 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:02.086353261+00:00 stderr F E0813 20:03:02.086236 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.466545123+00:00 stderr F E0813 20:03:03.466443 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.684693427+00:00 stderr F I0813 20:03:03.684563 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:03.686747335+00:00 stderr F E0813 20:03:03.686674 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:04.467702884+00:00 stderr F E0813 20:03:04.467650 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:08.822029240+00:00 stderr F E0813 20:03:08.821291 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:09.055237253+00:00 stderr F E0813 20:03:09.055153 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n 
openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:03:09.202882104+00:00 stderr F E0813 20:03:09.200200 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:09.379519793+00:00 stderr F E0813 20:03:09.379391 1 leaderelection.go:332] error retrieving resource lock openshift-kube-apiserver-operator/kube-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:10.016453054+00:00 stderr F E0813 20:03:10.016044 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:03:11.184720560+00:00 stderr F W0813 20:03:11.184594 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:11.184720560+00:00 stderr F E0813 20:03:11.184678 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:03:11.235424036+00:00 stderr F E0813 20:03:11.235291 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 
20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:03:11.690173799+00:00 stderr F E0813 20:03:11.689975 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:03:12.912997853+00:00 stderr F E0813 20:03:12.912941 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:03:13.726670644+00:00 stderr F E0813 20:03:13.726599 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.125912634+00:00 stderr F E0813 20:03:18.125755 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:18.922056237+00:00 stderr F E0813 20:03:18.921896 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.924148366+00:00 stderr F E0813 20:03:18.924051 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.932113354+00:00 stderr F E0813 20:03:18.932008 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.946991928+00:00 stderr F E0813 20:03:18.946904 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:18.968900543+00:00 stderr F E0813 20:03:18.968646 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:19.010868720+00:00 stderr F E0813 20:03:19.010530 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:19.058148869+00:00 stderr F E0813 20:03:19.058059 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}"
2025-08-13T20:03:19.100995822+00:00 stderr F E0813 20:03:19.100933 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:19.194039566+00:00 stderr F E0813 20:03:19.193906 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:19.262645363+00:00 stderr F E0813 20:03:19.262539 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:19.418278873+00:00 stderr F E0813 20:03:19.417415 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:19.591219037+00:00 stderr F E0813 20:03:19.591118 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:19.723619304+00:00 stderr F E0813 20:03:19.723502 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:20.728083109+00:00 stderr F E0813 20:03:20.727651 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:21.124169747+00:00 stderr F E0813 20:03:21.124066 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:21.237491670+00:00 stderr F E0813 20:03:21.237385 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}"
2025-08-13T20:03:21.691880852+00:00 stderr F E0813 20:03:21.691747 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}"
2025-08-13T20:03:22.011119850+00:00 stderr F E0813 20:03:22.010974 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.174660680+00:00 stderr F I0813 20:03:24.174209 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.176144142+00:00 stderr F E0813 20:03:24.176038 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.573362163+00:00 stderr F E0813 20:03:24.573244 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:24.954020392+00:00 stderr F E0813 20:03:24.953891 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:29.062750943+00:00 stderr F E0813 20:03:29.062580 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}"
2025-08-13T20:03:29.198595688+00:00 stderr F E0813 20:03:29.198530 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:29.695720750+00:00 stderr F E0813 20:03:29.695628 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:30.557427003+00:00 stderr F E0813 20:03:30.557361 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused]
2025-08-13T20:03:31.240127639+00:00 stderr F E0813 20:03:31.240077 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}"
2025-08-13T20:03:31.694991844+00:00 stderr F E0813 20:03:31.694894 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}"
2025-08-13T20:03:34.237457784+00:00 stderr F E0813 20:03:34.237057 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:39.067095360+00:00 stderr F E0813 20:03:39.066545 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}"
2025-08-13T20:03:39.197105628+00:00 stderr F E0813 20:03:39.196995 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:39.938886849+00:00 stderr F E0813 20:03:39.938749 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:41.243562208+00:00 stderr F E0813 20:03:41.243196 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}"
2025-08-13T20:03:41.697672863+00:00 stderr F E0813 20:03:41.697378 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}"
2025-08-13T20:03:49.071193746+00:00 stderr F E0813 20:03:49.070619 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}"
2025-08-13T20:03:49.200004021+00:00 stderr F E0813 20:03:49.199886 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:49.786542082+00:00 stderr F E0813 20:03:49.786481 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:51.262599080+00:00 stderr F E0813 20:03:51.262267 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}"
2025-08-13T20:03:51.710307585+00:00 stderr F E0813 20:03:51.710129 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}"
2025-08-13T20:03:52.158916293+00:00 stderr F W0813 20:03:52.158156 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:52.158916293+00:00 stderr F E0813 20:03:52.158834 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:53.881637898+00:00 stderr F E0813 20:03:53.879692 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:03:59.102664930+00:00 stderr F E0813 20:03:59.102179 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}"
2025-08-13T20:03:59.199332758+00:00
stderr F E0813 20:03:59.199234 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:00.451170349+00:00 stderr F E0813 20:04:00.448977 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:01.292417287+00:00 stderr F E0813 20:04:01.292352 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC 
m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:04:01.734228721+00:00 stderr F E0813 20:04:01.728554 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:05.252811975+00:00 stderr F I0813 20:04:05.252134 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'InstallerPodFailed' Failed to create installer pod for revision 9 count 
1 on node "crc": Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:05.261754790+00:00 stderr F E0813 20:04:05.260639 1 base_controller.go:268] InstallerController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:09.110825073+00:00 stderr F E0813 20:04:09.110712 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:09.200838809+00:00 stderr F E0813 20:04:09.200682 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:09.386987151+00:00 stderr F E0813 20:04:09.386466 1 
leaderelection.go:332] error retrieving resource lock openshift-kube-apiserver-operator/kube-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:11.350253577+00:00 stderr F E0813 20:04:11.349438 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:04:11.731660527+00:00 stderr F E0813 20:04:11.731547 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:13.412571139+00:00 stderr F E0813 20:04:13.412460 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.126151473+00:00 stderr F E0813 20:04:18.125554 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:18.922528371+00:00 stderr F E0813 20:04:18.922199 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.924317022+00:00 stderr F E0813 20:04:18.924241 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.134651123+00:00 stderr F E0813 20:04:19.134583 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was 
missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:19.134989332+00:00 stderr F E0813 20:04:19.134963 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be421a7ba openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,LastTimestamp:2025-08-13 20:02:26.838267834 +0000 UTC m=+59.536058970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:19.136563777+00:00 stderr F E0813 20:04:19.136456 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:19.200251414+00:00 stderr F E0813 20:04:19.200126 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:19.458201473+00:00 stderr F E0813 20:04:19.457505 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, 
"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get 
"https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 
2025-08-13T20:04:19.810122922+00:00 stderr F E0813 20:04:19.810042 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:21.354154239+00:00 stderr F E0813 20:04:21.353918 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 
2025-08-13T20:04:21.354203371+00:00 stderr F E0813 20:04:21.354157 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1be6328fb9 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:OperatorStatusChanged,Message:Status for clusteroperator/kube-apiserver changed: Degraded message changed from \"NodeControllerDegraded: All master nodes are ready\\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, configmaps: bound-sa-token-signing-certs-9,config-9,etcd-serving-ca-9,kube-apiserver-audit-policies-9,kube-apiserver-cert-syncer-kubeconfig-9,kube-apiserver-pod-9,kubelet-serving-ca-9,sa-token-signing-certs-9]\" to \"NodeControllerDegraded: All master nodes are ready\",Source:EventSource{Component:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,Host:,},FirstTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,LastTimestamp:2025-08-13 20:02:26.872930233 +0000 UTC m=+59.570721509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-status-controller-statussyncer_kube-apiserver,ReportingInstance:,}" 2025-08-13T20:04:21.737023152+00:00 stderr F E0813 20:04:21.736903 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:21.737290399+00:00 stderr F E0813 20:04:21.737166 1 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:21.747134221+00:00 stderr F E0813 20:04:21.747030 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:24.410946792+00:00 stderr F E0813 20:04:24.408355 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:25.586371371+00:00 stderr F E0813 20:04:25.585478 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC 
m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:29.312902817+00:00 stderr F E0813 20:04:29.312340 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:34.413537650+00:00 stderr F E0813 20:04:34.413133 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:35.594003465+00:00 stderr F E0813 20:04:35.590587 1 event.go:355] "Unable to write event 
(may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:39.240500555+00:00 stderr F E0813 20:04:39.239746 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.412515322+00:00 stderr F E0813 20:04:41.412003 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:41.517410856+00:00 stderr F E0813 20:04:41.517330 1 base_controller.go:268] BackingResourceController reconciliation failed: 
["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:44.424010949+00:00 stderr F E0813 20:04:44.423326 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:45.607638554+00:00 stderr F E0813 20:04:45.596688 1 
event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:46.880512543+00:00 stderr F E0813 20:04:46.880306 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:49.205967865+00:00 stderr F E0813 20:04:49.202745 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:52.944390879+00:00 stderr F E0813 20:04:52.943846 1 base_controller.go:268] KubeAPIServerStaticResources 
reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 
10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" 
(string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): 
Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:04:54.427027635+00:00 stderr F E0813 20:04:54.426478 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:04:55.599457919+00:00 stderr F E0813 20:04:55.599021 1 event.go:355] "Unable to 
write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:04:56.193565092+00:00 stderr F E0813 20:04:56.193401 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:59.210231627+00:00 stderr F E0813 20:04:59.204607 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:04.431498625+00:00 stderr F E0813 20:05:04.431061 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:05:05.601999314+00:00 stderr F E0813 20:05:05.601540 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get 
\"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:05:09.236661316+00:00 stderr F E0813 20:05:09.236236 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.383111080+00:00 stderr F E0813 20:05:09.382720 1 leaderelection.go:332] error retrieving resource lock openshift-kube-apiserver-operator/kube-apiserver-operator-lock: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:11.829698110+00:00 stderr F E0813 20:05:11.829275 1 base_controller.go:268] RevisionController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.091414447+00:00 stderr F W0813 20:05:14.090446 1 base_controller.go:232] Updating status of "NodeKubeconfigController" failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.091414447+00:00 stderr F E0813 20:05:14.090630 1 base_controller.go:268] NodeKubeconfigController reconciliation failed: "secret/node-kubeconfigs": Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:14.441321307+00:00 stderr F E0813 20:05:14.440432 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1c16ba9353 openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreateFailed,Message:Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver: Post \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-revisioncontroller,Host:,},FirstTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,LastTimestamp:2025-08-13 20:02:27.687150419 +0000 UTC m=+60.384941285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-revisioncontroller,ReportingInstance:,}" 2025-08-13T20:05:15.606367279+00:00 stderr F E0813 20:05:15.605755 1 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/events/kube-apiserver-operator.185b6c1d9ffe1e7e\": dial tcp 10.217.4.1:443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-operator.185b6c1d9ffe1e7e openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Deployment,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator,UID:1685682f-c45b-43b7-8431-b19c7e8a7d30,APIVersion:apps/v1,ResourceVersion:,FieldPath:,},Reason:InstallerPodFailed,Message:Failed to create installer pod for revision 9 count 1 on node \"crc\": Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 10.217.4.1:443: connect: connection refused,Source:EventSource{Component:kube-apiserver-operator-installer-controller,Host:,},FirstTimestamp:2025-08-13 20:02:34.285022846 +0000 UTC m=+66.982813812,LastTimestamp:2025-08-13 20:02:36.884312137 +0000 UTC m=+69.582103023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kube-apiserver-operator-installer-controller,ReportingInstance:,}" 2025-08-13T20:05:15.806682745+00:00 stderr F E0813 20:05:15.806565 1 base_controller.go:268] ConnectivityCheckController reconciliation failed: Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podnetworkconnectivitychecks.controlplane.operator.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:18.128535484+00:00 stderr F E0813 20:05:18.128411 1 base_controller.go:268] BackingResourceController reconciliation failed: ["manifests/installer-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa": dial tcp 10.217.4.1:443: connect: connection refused, "manifests/installer-cluster-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-kube-apiserver-installer": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:05:18.923285102+00:00 stderr F 
E0813 20:05:18.922736 1 base_controller.go:268] InstallerStateController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:18.938201949+00:00 stderr F E0813 20:05:18.938058 1 base_controller.go:268] StaticPodStateController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:19.209547759+00:00 stderr F E0813 20:05:19.209191 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:19.444272021+00:00 stderr F E0813 20:05:19.443961 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/ns.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/svc.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): Get 
"https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): Get "https://10.217.4.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-flowschema-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): Get "https://10.217.4.1:443/apis/migration.k8s.io/v1alpha1/storageversionmigrations/flowcontrol-prioritylevel-storage-version-migration-v1beta3": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/api-usage.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/api-usage": dial tcp 10.217.4.1:443: connect: 
connection refused, "assets/alerts/audit-errors.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/audit-errors": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/cpu-utilization.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/cpu-utilization": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-requests.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-requests": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-basic": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/podsecurity-violations.yaml" (string): Get "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/podsecurity": dial tcp 10.217.4.1:443: connect: connection refused, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): Delete "https://10.217.4.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-kube-apiserver/prometheusrules/kube-apiserver-slos-extended": dial tcp 10.217.4.1:443: connect: connection refused, Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused] 2025-08-13T20:05:19.828615827+00:00 stderr F E0813 20:05:19.828516 1 base_controller.go:268] TargetConfigController reconciliation failed: Put "https://10.217.4.1:443/apis/operator.openshift.io/v1/kubeapiservers/cluster/status": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:43.067359005+00:00 stderr F I0813 20:05:43.065919 1 installer_controller.go:500] "crc" moving to 
(v1.NodeStatus) { 2025-08-13T20:05:43.067359005+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:05:43.067359005+00:00 stderr F CurrentRevision: (int32) 9, 2025-08-13T20:05:43.067359005+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0021abea8)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:05:43.067359005+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:05:43.067359005+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:05:43.067359005+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) 
\"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:05:43.067359005+00:00 stderr F } 2025-08-13T20:05:43.067359005+00:00 stderr F } 2025-08-13T20:05:43.067359005+00:00 stderr F because static pod is ready 2025-08-13T20:05:43.114641889+00:00 stderr F I0813 20:05:43.114519 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31246 2025-08-13T20:05:43.150149946+00:00 stderr F I0813 20:05:43.148693 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 8 to 9 because static pod is ready 2025-08-13T20:05:55.437383641+00:00 stderr F I0813 20:05:55.436663 1 reflector.go:351] Caches populated for *v1.Scheduler from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:58.013147382+00:00 stderr F I0813 20:05:57.926832 1 reflector.go:351] Caches populated for 
*v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:58.052286792+00:00 stderr F I0813 20:05:58.052155 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31823 2025-08-13T20:05:58.128587747+00:00 stderr F I0813 20:05:58.127110 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:05:58.200418084+00:00 stderr F I0813 20:05:58.200336 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31947 2025-08-13T20:05:58.200418084+00:00 stderr F W0813 20:05:58.200380 1 staticpod.go:38] revision 10 is unexpectedly already the latest available revision. This is a possible race! 2025-08-13T20:05:58.200463696+00:00 stderr F E0813 20:05:58.200427 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 10 2025-08-13T20:05:58.314226252+00:00 stderr F I0813 20:05:58.311183 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31947 2025-08-13T20:05:58.314226252+00:00 stderr F W0813 20:05:58.311231 1 staticpod.go:38] revision 10 is unexpectedly already the latest available revision. This is a possible race! 2025-08-13T20:05:58.314226252+00:00 stderr F E0813 20:05:58.311258 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 10 2025-08-13T20:05:58.342522043+00:00 stderr F I0813 20:05:58.342053 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31947 2025-08-13T20:05:58.342522043+00:00 stderr F W0813 20:05:58.342474 1 staticpod.go:38] revision 10 is unexpectedly already the latest available revision. This is a possible race! 
2025-08-13T20:05:58.342522043+00:00 stderr F E0813 20:05:58.342503 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 10 2025-08-13T20:06:00.719452509+00:00 stderr F I0813 20:06:00.719020 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:01.479830303+00:00 stderr F I0813 20:06:01.479663 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:02.789855037+00:00 stderr F I0813 20:06:02.787758 1 reflector.go:351] Caches populated for *v1.KubeAPIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:06.220363992+00:00 stderr F I0813 20:06:06.219590 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:06.797371706+00:00 stderr F I0813 20:06:06.796734 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:08.084738681+00:00 stderr F I0813 20:06:08.084679 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:08.348720890+00:00 stderr F I0813 20:06:08.348309 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:10.168909382+00:00 stderr F I0813 20:06:10.167730 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:13.305920214+00:00 stderr F I0813 20:06:13.304600 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:13.823342542+00:00 stderr F I0813 20:06:13.823278 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:15.913681751+00:00 stderr F I0813 20:06:15.912900 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:17.953297747+00:00 stderr F I0813 20:06:17.953166 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:18.997522470+00:00 stderr F I0813 20:06:18.995168 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:20.484612853+00:00 stderr F I0813 20:06:20.481273 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:20.832134275+00:00 stderr F I0813 20:06:20.830053 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:21.027976373+00:00 stderr F I0813 20:06:21.026982 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:23.123362347+00:00 stderr F I0813 20:06:23.122947 1 reflector.go:351] Caches populated for *v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:24.086211088+00:00 stderr F I0813 20:06:24.086107 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:24.098150370+00:00 stderr F I0813 20:06:24.098014 1 termination_observer.go:130] Observed termination of API server pod "kube-apiserver-crc" at 2025-08-13 20:04:15 +0000 UTC 2025-08-13T20:06:24.112124860+00:00 stderr F I0813 20:06:24.112002 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:06:24.112124860+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:06:24.112124860+00:00 stderr F CurrentRevision: (int32) 
9, 2025-08-13T20:06:24.112124860+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedTime: (*v1.Time)(0xc0027b9050)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:06:24.112124860+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:06:24.112124860+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:06:24.112124860+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) 
(len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:06:24.112124860+00:00 stderr F } 2025-08-13T20:06:24.112124860+00:00 stderr F } 2025-08-13T20:06:24.112124860+00:00 stderr F because static pod is ready 2025-08-13T20:06:24.143504988+00:00 stderr F I0813 20:06:24.143375 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=31947 2025-08-13T20:06:24.164448378+00:00 stderr F I0813 20:06:24.164305 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "crc" from revision 8 to 9 because static pod is ready 2025-08-13T20:06:24.687857027+00:00 stderr F I0813 20:06:24.687742 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:25.501171447+00:00 stderr F I0813 20:06:25.501098 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:06:25.501171447+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:06:25.501171447+00:00 stderr F CurrentRevision: 
(int32) 9, 2025-08-13T20:06:25.501171447+00:00 stderr F TargetRevision: (int32) 0, 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedTime: (*v1.Time)(0xc005bf0c90)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:06:25.501171447+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:06:25.501171447+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:06:25.501171447+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: 
(string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:06:25.501171447+00:00 stderr F } 2025-08-13T20:06:25.501171447+00:00 stderr F } 2025-08-13T20:06:25.501171447+00:00 stderr F because static pod is ready 2025-08-13T20:06:25.517202186+00:00 stderr F I0813 20:06:25.517044 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:25.543665974+00:00 stderr F I0813 20:06:25.543613 1 helpers.go:260] lister was stale at resourceVersion=30706, live get showed resourceVersion=32041 2025-08-13T20:06:26.140531476+00:00 stderr F I0813 20:06:26.140387 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:27.886909364+00:00 stderr F I0813 20:06:27.886722 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:28.477964700+00:00 stderr F I0813 20:06:28.477299 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:28.803335017+00:00 stderr F I0813 20:06:28.803175 1 reflector.go:351] Caches populated for 
*v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:28.803641036+00:00 stderr F I0813 20:06:28.803498 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:06:29.674410552+00:00 stderr F I0813 20:06:29.671815 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.877198799+00:00 stderr F I0813 20:06:29.875523 1 reflector.go:351] Caches populated for *v1.Authentication from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:29.889383038+00:00 stderr F I0813 20:06:29.886916 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:31.292428349+00:00 stderr F I0813 20:06:31.292313 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:31.494001678+00:00 stderr F I0813 20:06:31.490208 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:32.677649895+00:00 stderr F I0813 20:06:32.677031 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.013536475+00:00 stderr F I0813 20:06:33.013155 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.276858445+00:00 stderr F I0813 20:06:33.274143 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:33.278657506+00:00 stderr F I0813 20:06:33.278563 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: 
api.crc.testing 2025-08-13T20:06:35.037602926+00:00 stderr F I0813 20:06:35.036613 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:37.144938036+00:00 stderr F I0813 20:06:37.141555 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:38.823364977+00:00 stderr F I0813 20:06:38.822602 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:42.599176403+00:00 stderr F I0813 20:06:42.598474 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.194326857+00:00 stderr F I0813 20:06:43.193966 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.197930590+00:00 stderr F I0813 20:06:43.195748 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:43.198819035+00:00 stderr F I0813 20:06:43.198697 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are 
synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:43.209392969+00:00 stderr F I0813 20:06:43.209285 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:43.212251571+00:00 stderr F I0813 20:06:43.212085 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 11 triggered by "configmap \"kube-apiserver-cert-syncer-kubeconfig-10\" not found" 2025-08-13T20:06:43.224306866+00:00 stderr F I0813 20:06:43.224140 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.540526073+00:00 stderr F I0813 20:06:43.538687 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:43.861718771+00:00 stderr F I0813 20:06:43.861624 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 
'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9" to "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10" 2025-08-13T20:06:43.863087181+00:00 stderr F E0813 20:06:43.862915 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:43.874888159+00:00 stderr F I0813 20:06:43.874700 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:43.876200427+00:00 stderr F I0813 20:06:43.876129 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: 
bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:43.880994614+00:00 stderr F I0813 20:06:43.880931 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:44.686921531+00:00 stderr F I0813 20:06:44.684407 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes 
have achieved new revision 10","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:44.710498407+00:00 stderr F I0813 20:06:44.698668 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]" 2025-08-13T20:06:44.795200656+00:00 stderr F E0813 20:06:44.795113 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:06:44.812895113+00:00 stderr F E0813 20:06:44.808500 1 base_controller.go:268] InstallerController 
reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.007854942+00:00 stderr F I0813 20:06:45.007627 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.013050641+00:00 stderr F E0813 20:06:45.012958 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.015726088+00:00 stderr F E0813 20:06:45.015702 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.017251091+00:00 stderr F I0813 20:06:45.015751 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.035925207+00:00 stderr F I0813 20:06:45.035704 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.036284657+00:00 stderr F E0813 20:06:45.036250 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.108955201+00:00 stderr F I0813 20:06:45.108736 1 reflector.go:351] Caches populated for *v1.OAuth from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:45.119003569+00:00 stderr F E0813 20:06:45.118342 1 base_controller.go:268] InstallerController reconciliation failed: missing required 
resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.119003569+00:00 stderr F I0813 20:06:45.118418 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.229408244+00:00 stderr F I0813 20:06:45.229219 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:45.281371224+00:00 stderr F E0813 20:06:45.281270 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.281371224+00:00 stderr F I0813 20:06:45.281347 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: 
bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:45.605049374+00:00 stderr F E0813 20:06:45.604950 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:45.605106336+00:00 stderr F I0813 20:06:45.605022 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:46.159407527+00:00 stderr F I0813 20:06:46.159252 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:46.250021875+00:00 stderr F E0813 20:06:46.247318 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: 
bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:46.250021875+00:00 stderr F I0813 20:06:46.247419 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:47.529952613+00:00 stderr F E0813 20:06:47.529861 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:47.530083207+00:00 stderr F I0813 20:06:47.530058 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: 
etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:47.873036359+00:00 stderr F I0813 20:06:47.871318 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:48.555591518+00:00 stderr F I0813 20:06:48.555494 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:48.954120725+00:00 stderr F I0813 20:06:48.953954 1 reflector.go:351] Caches populated for *v1.Image from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:49.227756030+00:00 stderr F I0813 20:06:49.224391 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:49.587960507+00:00 stderr F I0813 20:06:49.587728 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:49.877065196+00:00 stderr F I0813 20:06:49.875605 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-metadata-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:50.092003138+00:00 
stderr F E0813 20:06:50.091852 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:50.092003138+00:00 stderr F I0813 20:06:50.091961 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:50.490470183+00:00 stderr F I0813 20:06:50.486047 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/bound-sa-token-signing-certs-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:50.625026801+00:00 stderr F I0813 20:06:50.621506 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/etcd-serving-ca-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:50.721054824+00:00 stderr F 
I0813 20:06:50.690508 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-server-ca-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:50.783372881+00:00 stderr F I0813 20:06:50.783231 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kubelet-serving-ca-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:51.189895356+00:00 stderr F I0813 20:06:51.187513 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/sa-token-signing-certs-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:51.750118559+00:00 stderr F I0813 20:06:51.743548 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-audit-policies-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:53.682530292+00:00 stderr F I0813 20:06:53.680117 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/etcd-client-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:54.823520696+00:00 stderr F I0813 20:06:54.821366 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-serving-certkey-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:54.949069235+00:00 stderr F I0813 20:06:54.948306 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:55.217855282+00:00 stderr F E0813 20:06:55.214692 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10] 2025-08-13T20:06:55.223659848+00:00 stderr F I0813 20:06:55.223140 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: 
bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10 2025-08-13T20:06:55.380837275+00:00 stderr F I0813 20:06:55.380586 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/webhook-authenticator-11 -n openshift-kube-apiserver because it was missing 2025-08-13T20:06:55.922704220+00:00 stderr F I0813 20:06:55.920082 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:56.053962303+00:00 stderr F I0813 20:06:56.049454 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 11 triggered by "configmap \"kube-apiserver-cert-syncer-kubeconfig-10\" not found" 2025-08-13T20:06:56.360839252+00:00 stderr F I0813 20:06:56.360704 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 11 created because configmap "kube-apiserver-cert-syncer-kubeconfig-10" not found 2025-08-13T20:06:56.440267529+00:00 stderr F I0813 20:06:56.440209 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:56.452133759+00:00 stderr F W0813 20:06:56.450383 1 staticpod.go:38] revision 11 is unexpectedly 
already the latest available revision. This is a possible race! 2025-08-13T20:06:56.452133759+00:00 stderr F E0813 20:06:56.450437 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 11 2025-08-13T20:06:57.529571870+00:00 stderr F I0813 20:06:57.528545 1 request.go:697] Waited for 1.102452078s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:06:59.153954122+00:00 stderr F I0813 20:06:59.153508 1 installer_controller.go:524] node crc with revision 9 is the oldest and needs new revision 11 2025-08-13T20:06:59.154122097+00:00 stderr F I0813 20:06:59.154101 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:06:59.154122097+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:06:59.154122097+00:00 stderr F CurrentRevision: (int32) 9, 2025-08-13T20:06:59.154122097+00:00 stderr F TargetRevision: (int32) 11, 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedTime: (*v1.Time)(0xc002214d08)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:06:59.154122097+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:06:59.154122097+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:06:59.154122097+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) 
\"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:06:59.154122097+00:00 stderr F } 2025-08-13T20:06:59.154122097+00:00 stderr F } 2025-08-13T20:06:59.241574604+00:00 stderr F I0813 20:06:59.235963 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 9 to 11 because node crc with revision 9 is the oldest 2025-08-13T20:06:59.256743019+00:00 stderr F I0813 20:06:59.256671 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 11","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:59.258615703+00:00 stderr F I0813 20:06:59.258583 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:59.267689573+00:00 stderr F I0813 20:06:59.267567 1 reflector.go:351] Caches populated for *v1.ConfigMap 
from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:06:59.305698563+00:00 stderr F I0813 20:06:59.305637 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 10" to "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 11",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 10" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 11" 2025-08-13T20:06:59.360396791+00:00 stderr F I0813 20:06:59.360342 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:06:59.364085437+00:00 stderr F I0813 20:06:59.363854 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 11","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 11","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are 
synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:06:59.391010869+00:00 stderr F I0813 20:06:59.390540 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-10,etcd-serving-ca-10,kube-apiserver-audit-policies-10,kube-apiserver-cert-syncer-kubeconfig-10,kubelet-serving-ca-10,sa-token-signing-certs-10, secrets: etcd-client-10,localhost-recovery-client-token-10,localhost-recovery-serving-certkey-10]" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T20:07:00.332573145+00:00 stderr F I0813 20:07:00.332426 1 request.go:697] Waited for 1.018757499s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:01.331607958+00:00 stderr F I0813 20:07:01.328004 1 request.go:697] Waited for 1.351597851s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:07:03.219312669+00:00 stderr F I0813 20:07:03.215042 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' 
Created Pod/installer-11-crc -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:04.482399593+00:00 stderr F I0813 20:07:04.480938 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:05.929009659+00:00 stderr F I0813 20:07:05.928059 1 request.go:697] Waited for 1.107414811s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/localhost-recovery-client 2025-08-13T20:07:06.134280204+00:00 stderr F I0813 20:07:06.133848 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:06.931311815+00:00 stderr F I0813 20:07:06.928683 1 request.go:697] Waited for 1.128893936s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config 2025-08-13T20:07:08.540323188+00:00 stderr F I0813 20:07:08.539269 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:09.934975573+00:00 stderr F I0813 20:07:09.934565 1 installer_controller.go:512] "crc" is in transition to 11, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:10.134911265+00:00 stderr F I0813 20:07:10.134731 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:07:15.687022109+00:00 stderr F I0813 20:07:15.682408 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 
'StartingNewRevision' new revision 12 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:15.727955753+00:00 stderr F I0813 20:07:15.726710 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:15.787348406+00:00 stderr F I0813 20:07:15.785969 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:15.841157668+00:00 stderr F I0813 20:07:15.840703 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:16.958534985+00:00 stderr F I0813 20:07:16.958185 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-metadata-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:17.311540395+00:00 stderr F I0813 20:07:17.304820 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/bound-sa-token-signing-certs-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:18.708619280+00:00 stderr F I0813 20:07:18.691467 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/etcd-serving-ca-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:19.950694422+00:00 stderr F I0813 20:07:19.917858 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-server-ca-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:20.068989554+00:00 stderr F I0813 20:07:20.066161 1 request.go:697] Waited for 1.141655182s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:20.562498663+00:00 stderr F I0813 20:07:20.535507 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kubelet-serving-ca-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:21.514635910+00:00 
stderr F I0813 20:07:21.512470 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/sa-token-signing-certs-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:22.503481592+00:00 stderr F I0813 20:07:22.494349 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-audit-policies-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:23.613992292+00:00 stderr F I0813 20:07:23.612845 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/etcd-client-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:25.144175883+00:00 stderr F I0813 20:07:25.142508 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-serving-certkey-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:25.706344610+00:00 stderr F I0813 20:07:25.704672 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", 
APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:26.723927306+00:00 stderr F I0813 20:07:26.687676 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/webhook-authenticator-12 -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:27.483437834+00:00 stderr F I0813 20:07:27.482905 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 12 triggered by "required secret/localhost-recovery-client-token has changed" 2025-08-13T20:07:27.541270120+00:00 stderr F I0813 20:07:27.533928 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:07:27.541270120+00:00 stderr F I0813 20:07:27.538561 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 12 created because required secret/localhost-recovery-client-token has changed 2025-08-13T20:07:28.676481077+00:00 stderr F I0813 20:07:28.675817 1 request.go:697] Waited for 1.140203711s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:29.865947060+00:00 stderr F I0813 20:07:29.865352 1 request.go:697] Waited for 
1.175779001s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc 2025-08-13T20:07:31.498696882+00:00 stderr F I0813 20:07:31.497544 1 installer_controller.go:500] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:07:31.498696882+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:07:31.498696882+00:00 stderr F CurrentRevision: (int32) 9, 2025-08-13T20:07:31.498696882+00:00 stderr F TargetRevision: (int32) 12, 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedTime: (*v1.Time)(0xc001e57e90)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:07:31.498696882+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:07:31.498696882+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:07:31.498696882+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) \"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n 
(string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:07:31.498696882+00:00 stderr F } 2025-08-13T20:07:31.498696882+00:00 stderr F } 2025-08-13T20:07:31.498696882+00:00 stderr F because new revision pending 2025-08-13T20:07:32.044521161+00:00 stderr F I0813 20:07:32.044386 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 
12","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.045112688+00:00 stderr F I0813 20:07:32.045042 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:07:32.362962951+00:00 stderr F I0813 20:07:32.328253 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.387055412+00:00 stderr F I0813 20:07:32.386819 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 11" to "NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 12",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 11" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12" 2025-08-13T20:07:32.393638680+00:00 stderr F E0813 20:07:32.393100 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:07:32.410870024+00:00 stderr F I0813 20:07:32.408004 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are 
synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:07:32.476455865+00:00 stderr F E0813 20:07:32.476356 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:07:33.066909054+00:00 stderr F I0813 20:07:33.065163 1 request.go:697] Waited for 1.014589539s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs 2025-08-13T20:07:33.908378810+00:00 stderr F I0813 20:07:33.906597 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-12-crc -n openshift-kube-apiserver because it was missing 2025-08-13T20:07:34.471827254+00:00 stderr F I0813 20:07:34.470752 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Pending phase 2025-08-13T20:07:35.082344628+00:00 stderr F I0813 20:07:35.082140 1 request.go:697] Waited for 1.17853902s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:07:36.264957995+00:00 stderr F I0813 20:07:36.264379 1 request.go:697] Waited for 1.169600653s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 
2025-08-13T20:07:37.266086328+00:00 stderr F I0813 20:07:37.265156 1 request.go:697] Waited for 1.195307181s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller 2025-08-13T20:07:37.705277070+00:00 stderr F I0813 20:07:37.705217 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:40.448543522+00:00 stderr F I0813 20:07:40.447141 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:41.068953629+00:00 stderr F I0813 20:07:41.068385 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:07:42.872562730+00:00 stderr F I0813 20:07:42.872111 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:08:23.948041820+00:00 stderr F I0813 20:08:23.946664 1 installer_controller.go:512] "crc" is in transition to 12, but has not made progress because installer is not finished, but in Running phase 2025-08-13T20:08:23.987622815+00:00 stderr F I0813 20:08:23.985531 1 termination_observer.go:236] Observed event "TerminationPreShutdownHooksFinished" for API server pod "kube-apiserver-crc" (last termination at 2025-08-13 20:04:15 +0000 UTC) at 2025-08-13 20:08:23 +0000 UTC 2025-08-13T20:08:26.078102491+00:00 stderr F I0813 20:08:26.072992 1 termination_observer.go:236] Observed event "TerminationGracefulTerminationFinished" for API server pod "kube-apiserver-crc" (last termination at 2025-08-13 20:04:15 +0000 UTC) at 2025-08-13 20:08:25 +0000 UTC 2025-08-13T20:08:29.272876957+00:00 stderr F E0813 20:08:29.272045 1 base_controller.go:268] 
auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.287636911+00:00 stderr F E0813 20:08:29.287217 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.301855268+00:00 stderr F E0813 20:08:29.301817 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.329675506+00:00 stderr F E0813 20:08:29.329619 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.377394244+00:00 stderr F E0813 20:08:29.377301 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.464043468+00:00 stderr F E0813 20:08:29.463572 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.628308168+00:00 stderr F E0813 20:08:29.627972 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:08:29.953248174+00:00 stderr F E0813 20:08:29.953100 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:30.602610212+00:00 stderr F E0813 20:08:30.602162 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:31.890209348+00:00 stderr F E0813 20:08:31.889997 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:34.459982315+00:00 stderr F E0813 20:08:34.459464 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.274705807+00:00 stderr F E0813 20:08:39.274066 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:39.586015852+00:00 stderr F E0813 20:08:39.584968 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:49.277814194+00:00 stderr F E0813 20:08:49.276218 1 base_controller.go:268] auditPolicyController 
reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:08:59.279422830+00:00 stderr F E0813 20:08:59.278561 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.072093276+00:00 stderr F E0813 20:09:00.072031 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:30.676249413+00:00 stderr F I0813 20:09:30.672351 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:32.652853485+00:00 stderr F I0813 20:09:32.652667 1 reflector.go:351] Caches populated for *v1.Authentication from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:32.997060833+00:00 stderr F I0813 20:09:32.996997 1 reflector.go:351] Caches populated for operator.openshift.io/v1, Resource=kubeapiservers from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:33.004507667+00:00 stderr F I0813 20:09:33.004432 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:34.052182014+00:00 stderr F I0813 20:09:34.052080 1 request.go:697] Waited for 1.047880553s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa 2025-08-13T20:09:34.859533992+00:00 stderr F I0813 20:09:34.859391 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.055166871+00:00 stderr F I0813 
20:09:35.055104 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.274030664+00:00 stderr F I0813 20:09:35.273887 1 request.go:697] Waited for 1.346065751s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:09:35.654672239+00:00 stderr F I0813 20:09:35.654588 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:35.847469136+00:00 stderr F I0813 20:09:35.846666 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.056865441+00:00 stderr F I0813 20:09:38.056747 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.855268932+00:00 stderr F I0813 20:09:38.855138 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:38.942880734+00:00 stderr F I0813 20:09:38.942760 1 reflector.go:351] Caches populated for *v1.Proxy from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.454254595+00:00 stderr F I0813 20:09:39.454166 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:39.870002795+00:00 stderr F I0813 20:09:39.869847 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:40.312326496+00:00 stderr F I0813 20:09:40.310888 1 reflector.go:351] Caches populated for *v1.Infrastructure from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:40.316394453+00:00 stderr F I0813 20:09:40.313333 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 
2025-08-13T20:09:40.498634928+00:00 stderr F I0813 20:09:40.498514 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:09:40.503509538+00:00 stderr F I0813 20:09:40.503433 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:40Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:40.522071150+00:00 stderr F I0813 20:09:40.521872 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 12"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 9; 0 nodes have achieved new revision 12" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12" 2025-08-13T20:09:40.948998720+00:00 stderr F I0813 20:09:40.948523 1 reflector.go:351] Caches populated for *v1.Image from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:41.268646514+00:00 stderr F I0813 20:09:41.265043 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.289537333+00:00 stderr F E0813 20:09:41.288663 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.290885212+00:00 stderr F I0813 20:09:41.290625 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at 
revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.307019204+00:00 stderr F E0813 20:09:41.306859 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.308151097+00:00 stderr F I0813 20:09:41.307896 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.313935433+00:00 stderr F E0813 20:09:41.313848 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": 
the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.317861915+00:00 stderr F I0813 20:09:41.317767 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.323179508+00:00 stderr F E0813 20:09:41.323077 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.364632316+00:00 stderr F I0813 20:09:41.364497 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 
12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.371219595+00:00 stderr F E0813 20:09:41.371120 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.454053880+00:00 stderr F I0813 20:09:41.453389 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.462409540+00:00 stderr F E0813 
20:09:41.462300 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.625209247+00:00 stderr F I0813 20:09:41.624176 1 reflector.go:351] Caches populated for *v1.ValidatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:41.625209247+00:00 stderr F I0813 20:09:41.624462 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.637270673+00:00 stderr F E0813 20:09:41.637136 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:41.651362987+00:00 stderr F I0813 20:09:41.651285 1 request.go:697] Waited for 1.142561947s due to client-side throttling, not priority and fairness, request: 
GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver 2025-08-13T20:09:41.964522376+00:00 stderr F I0813 20:09:41.962065 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:41Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:41.970516497+00:00 stderr F E0813 20:09:41.970466 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:42.613871283+00:00 stderr F I0813 20:09:42.613744 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:42Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are 
active; 1 node is at revision 12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:42.624072166+00:00 stderr F E0813 20:09:42.623097 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:42.674291996+00:00 stderr F I0813 20:09:42.674173 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.255737967+00:00 stderr F I0813 20:09:43.255395 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.507659819+00:00 stderr F I0813 20:09:43.507470 1 reflector.go:351] Caches populated for *v1.MutatingWebhookConfiguration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:43.906040130+00:00 stderr F I0813 20:09:43.905940 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:09:43Z","message":"NodeInstallerProgressing: 1 node is at revision 12","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 
12","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:09:43.915460560+00:00 stderr F E0813 20:09:43.915310 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:09:45.063148636+00:00 stderr F I0813 20:09:45.062119 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.454994181+00:00 stderr F I0813 20:09:45.454880 1 reflector.go:351] Caches populated for *v1.ServiceAccount from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:45.556559583+00:00 stderr F I0813 20:09:45.556125 1 reflector.go:351] Caches populated for *v1.ClusterOperator from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:47.029896635+00:00 stderr F I0813 20:09:47.029137 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:49.431534712+00:00 stderr F I0813 20:09:49.431396 1 reflector.go:351] Caches populated for *v1.OAuth from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:49.603868443+00:00 stderr F I0813 20:09:49.603016 1 reflector.go:351] Caches populated for *v1alpha1.StorageVersionMigration from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:51.097949529+00:00 stderr F I0813 20:09:51.095966 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:51.156008534+00:00 stderr F I0813 20:09:51.155372 1 reflector.go:351] Caches populated for *v1.ClusterRole from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:51.346948488+00:00 stderr F I0813 20:09:51.343971 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:51.899874211+00:00 stderr F I0813 20:09:51.897396 1 reflector.go:351] Caches populated for *v1.KubeAPIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.322158879+00:00 stderr F I0813 20:09:52.322056 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:52.717963967+00:00 stderr F I0813 20:09:52.717106 1 reflector.go:351] Caches populated for *v1.RoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.001761305+00:00 stderr F I0813 20:09:55.001372 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.604439374+00:00 stderr F I0813 20:09:55.604041 1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:55.691689536+00:00 stderr F I0813 20:09:55.691355 1 reflector.go:351] Caches populated for *v1.Scheduler from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:56.397833182+00:00 stderr F I0813 20:09:56.397367 1 termination_observer.go:130] Observed termination of API server pod "kube-apiserver-crc" at 2025-08-13 20:08:47 +0000 UTC 2025-08-13T20:09:56.793974910+00:00 stderr F I0813 20:09:56.792242 1 request.go:697] Waited for 1.18414682s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:09:57.991351730+00:00 
stderr F I0813 20:09:57.991270 1 request.go:697] Waited for 1.191825831s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:09:58.200016761+00:00 stderr F I0813 20:09:58.199677 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:09:59.278447041+00:00 stderr F I0813 20:09:59.278325 1 reflector.go:351] Caches populated for *v1.FeatureGate from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:00.344520857+00:00 stderr F I0813 20:10:00.344375 1 reflector.go:351] Caches populated for *v1.ClusterVersion from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:00.597497930+00:00 stderr F I0813 20:10:00.594133 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:06.140558853+00:00 stderr F I0813 20:10:06.136477 1 reflector.go:351] Caches populated for *v1.APIServer from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:08.678223861+00:00 stderr F I0813 20:10:08.677607 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:11.615709241+00:00 stderr F I0813 20:10:11.614708 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:12.447865849+00:00 stderr F I0813 20:10:12.447401 1 reflector.go:351] Caches populated for *v1.Namespace from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:15.023764372+00:00 stderr F I0813 20:10:15.023635 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:16.189231935+00:00 stderr F I0813 20:10:16.187663 1 reflector.go:351] Caches populated for 
*v1.SecurityContextConstraints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:18.832884551+00:00 stderr F I0813 20:10:18.831311 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:19.913476462+00:00 stderr F I0813 20:10:19.913155 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:20.042624055+00:00 stderr F I0813 20:10:20.042485 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:22.281866056+00:00 stderr F I0813 20:10:22.280848 1 reflector.go:351] Caches populated for *v1.ClusterRoleBinding from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:26.156467354+00:00 stderr F I0813 20:10:26.156096 1 reflector.go:351] Caches populated for *v1.Endpoints from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:28.443609229+00:00 stderr F I0813 20:10:28.442724 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:10:33.622891484+00:00 stderr F I0813 20:10:33.622596 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:10:33.626321162+00:00 stderr F I0813 20:10:33.622305 1 reflector.go:351] Caches populated for *v1.Network from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:17:07.038035726+00:00 stderr F I0813 20:17:07.036853 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:19:40.315920166+00:00 stderr F I0813 20:19:40.315069 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: 
api.crc.testing 2025-08-13T20:19:56.823831628+00:00 stderr F I0813 20:19:56.822700 1 request.go:697] Waited for 1.189846745s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:19:57.816367055+00:00 stderr F I0813 20:19:57.815909 1 request.go:697] Waited for 1.194141978s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:20:33.628645173+00:00 stderr F I0813 20:20:33.627675 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:25:08.288098181+00:00 stderr F I0813 20:25:08.287198 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:29:40.318423466+00:00 stderr F I0813 20:29:40.316980 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-08-13T20:29:56.621923248+00:00 stderr F I0813 20:29:56.621255 1 request.go:697] Waited for 1.002703993s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:29:57.820948224+00:00 stderr F I0813 20:29:57.820346 1 request.go:697] Waited for 1.188787292s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:30:33.631933560+00:00 stderr F I0813 20:30:33.629183 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default 
openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:39:40.323212651+00:00 stderr F I0813 20:39:40.322450 1 externalloadbalancer.go:27] syncing external loadbalancer hostnames: api.crc.testing 2025-08-13T20:39:56.818558953+00:00 stderr F I0813 20:39:56.817863 1 request.go:697] Waited for 1.198272606s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:39:58.019908028+00:00 stderr F I0813 20:39:58.017924 1 request.go:697] Waited for 1.193313884s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:40:30.989897036+00:00 stderr F I0813 20:40:30.988880 1 reflector.go:351] Caches populated for *v1.Event from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T20:40:33.630681710+00:00 stderr F I0813 20:40:33.630274 1 servicehostname.go:40] syncing servicenetwork hostnames: [10.217.4.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local] 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.312511 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.320010 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.322303 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.322653 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.383445345+00:00 stderr F I0813 20:42:36.312996 1 streamwatcher.go:111] Unexpected EOF during watch stream 
event decoding: unexpected EOF 2025-08-13T20:42:36.383656031+00:00 stderr F I0813 20:42:36.383617 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.384451584+00:00 stderr F I0813 20:42:36.384429 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.384734842+00:00 stderr F I0813 20:42:36.384713 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.421150922+00:00 stderr F I0813 20:42:36.404204 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.424615362+00:00 stderr F I0813 20:42:36.424585 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.425559339+00:00 stderr F I0813 20:42:36.425536 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.448576532+00:00 stderr F I0813 20:42:36.448525 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.449479658+00:00 stderr F I0813 20:42:36.449420 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.451915479+00:00 stderr F I0813 20:42:36.313036 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454242126+00:00 stderr F I0813 20:42:36.313051 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.454667788+00:00 stderr F I0813 20:42:36.313065 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.509514919+00:00 stderr F I0813 20:42:36.313078 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.509514919+00:00 stderr F I0813 20:42:36.313093 1 
streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.519564509+00:00 stderr F I0813 20:42:36.313106 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.519564509+00:00 stderr F I0813 20:42:36.313119 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.540144902+00:00 stderr F I0813 20:42:36.313131 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313144 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313156 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313168 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313214 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313257 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313270 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.560178770+00:00 stderr F I0813 20:42:36.313287 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.573976478+00:00 stderr F I0813 20:42:36.313298 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.574460152+00:00 stderr F I0813 20:42:36.313311 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.574851103+00:00 stderr F I0813 20:42:36.313322 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575304116+00:00 stderr F I0813 20:42:36.313332 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.575304116+00:00 stderr F I0813 20:42:36.313343 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313361 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313376 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313387 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313403 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576330626+00:00 stderr F I0813 20:42:36.313415 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576568523+00:00 stderr F I0813 20:42:36.313426 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.576968474+00:00 stderr F I0813 20:42:36.313443 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.582537195+00:00 stderr F I0813 20:42:36.313454 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.583345418+00:00 stderr F I0813 20:42:36.313467 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.583345418+00:00 stderr F I0813 20:42:36.313483 1 streamwatcher.go:111] Unexpected EOF during 
watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.583700218+00:00 stderr F I0813 20:42:36.313494 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.589248298+00:00 stderr F I0813 20:42:36.313522 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.313539 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.313600 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.313615 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.319488 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.451063 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.594907491+00:00 stderr F I0813 20:42:36.469514 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:39.450401166+00:00 stderr F E0813 20:42:39.449691 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.458503730+00:00 stderr F E0813 20:42:39.458413 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.472485983+00:00 stderr F E0813 20:42:39.472408 1 
base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.496672820+00:00 stderr F E0813 20:42:39.496580 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.541319437+00:00 stderr F E0813 20:42:39.541106 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.626199914+00:00 stderr F E0813 20:42:39.626025 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:39.789937274+00:00 stderr F E0813 20:42:39.789271 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.112324339+00:00 stderr F E0813 20:42:40.112277 1 base_controller.go:268] auditPolicyController reconciliation failed: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-audit-policies": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:42:40.694097262+00:00 stderr F I0813 20:42:40.693760 1 cmd.go:129] Received SIGTERM or SIGINT signal, shutting down controller. 
2025-08-13T20:42:40.694507514+00:00 stderr F I0813 20:42:40.694398 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:42:40.694530464+00:00 stderr F I0813 20:42:40.694496 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694530464+00:00 stderr F I0813 20:42:40.694507 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694559375+00:00 stderr F I0813 20:42:40.694528 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694559375+00:00 stderr F I0813 20:42:40.694534 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694559375+00:00 stderr F I0813 20:42:40.694542 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694577336+00:00 stderr F I0813 20:42:40.694558 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694577336+00:00 stderr F I0813 20:42:40.694564 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694596276+00:00 stderr F I0813 20:42:40.694587 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:42:40.694596276+00:00 stderr F I0813 20:42:40.694592 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694672368+00:00 stderr F I0813 20:42:40.694607 1 base_controller.go:172] Shutting down BoundSATokenSignerController ... 2025-08-13T20:42:40.694672368+00:00 stderr F I0813 20:42:40.694614 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:42:40.694672368+00:00 stderr F I0813 20:42:40.694648 1 base_controller.go:172] Shutting down CertRotationController ... 
2025-08-13T20:42:40.694672368+00:00 stderr F I0813 20:42:40.694654 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:42:40.694691069+00:00 stderr F I0813 20:42:40.694671 1 base_controller.go:172] Shutting down KubeAPIServerStaticResources ... 2025-08-13T20:42:40.694705539+00:00 stderr F I0813 20:42:40.694691 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:42:40.694719900+00:00 stderr F I0813 20:42:40.694704 1 base_controller.go:172] Shutting down WorkerLatencyProfile ... 2025-08-13T20:42:40.694734100+00:00 stderr F I0813 20:42:40.694717 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:42:40.694748590+00:00 stderr F I0813 20:42:40.694731 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694910 1 base_controller.go:172] Shutting down BackingResourceController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694951 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694966 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694979 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.694994 1 base_controller.go:172] Shutting down StartupMonitorPodCondition ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.695009 1 base_controller.go:172] Shutting down InstallerController ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.695021 1 base_controller.go:172] Shutting down StaticPodStateFallback ... 2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.695036 1 base_controller.go:172] Shutting down EncryptionStateController ... 
2025-08-13T20:42:40.695114491+00:00 stderr F I0813 20:42:40.695049 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:42:40.695271006+00:00 stderr F I0813 20:42:40.695200 1 base_controller.go:172] Shutting down StatusSyncer_kube-apiserver ... 2025-08-13T20:42:40.695320867+00:00 stderr F I0813 20:42:40.695290 1 base_controller.go:172] Shutting down PodSecurityReadinessController ... 2025-08-13T20:42:40.695376439+00:00 stderr F I0813 20:42:40.695358 1 certrotationcontroller.go:899] Shutting down CertRotation 2025-08-13T20:42:40.695475131+00:00 stderr F I0813 20:42:40.695445 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.695549334+00:00 stderr F I0813 20:42:40.695531 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.694746 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696122 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.695061 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696139 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696150 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696153 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696170 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696182 1 controller_manager.go:54] NodeController controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696187 1 base_controller.go:172] Shutting down webhookSupportabilityController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696192 1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696200 1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696211 1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696214 1 base_controller.go:172] Shutting down LoggingSyncer ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696259 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696259 1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696271 1 base_controller.go:114] Shutting down worker of BoundSATokenSignerController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696281 1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696292 1 base_controller.go:114] Shutting down worker of KubeAPIServerStaticResources controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696296 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696304 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696307 1 base_controller.go:114] Shutting down worker of GuardController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696313 1 base_controller.go:114] Shutting down worker of WorkerLatencyProfile controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696315 1 base_controller.go:104] All GuardController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696322 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696325 1 controller_manager.go:54] GuardController controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696330 1 base_controller.go:114] Shutting down worker of RevisionController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696334 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696337 1 base_controller.go:104] All RevisionController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696343 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696344 1 controller_manager.go:54] RevisionController controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696352 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696362 1 base_controller.go:114] Shutting down worker of PruneController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696366 1 base_controller.go:114] Shutting down worker of StartupMonitorPodCondition controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696368 1 base_controller.go:104] All PruneController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696374 1 base_controller.go:104] All StartupMonitorPodCondition workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696375 1 controller_manager.go:54] PruneController controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696379 1 controller_manager.go:54] StartupMonitorPodCondition controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696387 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696396 1 base_controller.go:114] Shutting down worker of webhookSupportabilityController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696404 1 base_controller.go:104] All webhookSupportabilityController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696405 1 base_controller.go:114] Shutting down worker of InstallerController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696415 1 base_controller.go:104] All InstallerController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696422 1 controller_manager.go:54] InstallerController controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696425 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696431 1 base_controller.go:114] Shutting down worker of StaticPodStateFallback controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696435 1 base_controller.go:104] All LoggingSyncer workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696439 1 base_controller.go:104] All StaticPodStateFallback workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696440 1 controller_manager.go:54] LoggingSyncer controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696445 1 controller_manager.go:54] StaticPodStateFallback controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696472 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696389 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696486 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696487 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696451 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696503 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696509 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696464 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696529 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696568 1 base_controller.go:172] Shutting down EventWatchController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696583 1 base_controller.go:172] Shutting down ResourceSyncController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696597 1 base_controller.go:172] Shutting down NodeKubeconfigController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696612 1 base_controller.go:172] Shutting down TargetConfigController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696626 1 base_controller.go:172] Shutting down CertRotationController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696631 1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696644 1 base_controller.go:172] Shutting down CertRotationController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696649 1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696671 1 base_controller.go:172] Shutting down CertRotationController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696675 1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696688 1 base_controller.go:172] Shutting down CertRotationController ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696692 1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696702 1 base_controller.go:114] Shutting down worker of EventWatchController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696708 1 base_controller.go:104] All EventWatchController workers have been terminated
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696720 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696728 1 base_controller.go:114] Shutting down worker of NodeKubeconfigController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr F I0813 20:42:40.696737 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ...
2025-08-13T20:42:40.697352706+00:00 stderr P I0813
2025-08-13T20:42:40.697432808+00:00 stderr F 20:42:40.696750 1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696758 1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696758 1 base_controller.go:104] All auditPolicyController workers have been terminated
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696767 1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696837 1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696857 1 base_controller.go:172] Shutting down CertRotationController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696879 1 base_controller.go:172] Shutting down CertRotationController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696883 1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696898 1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696905 1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696917 1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.696924 1 base_controller.go:114] Shutting down worker of CertRotationController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697087 1 base_controller.go:150] All CertRotationController post start hooks have been terminated
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697126 1 base_controller.go:172] Shutting down SCCReconcileController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697139 1 base_controller.go:172] Shutting down CertRotationTimeUpgradeableController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697151 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697164 1 base_controller.go:172] Shutting down ServiceAccountIssuerController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697176 1 base_controller.go:172] Shutting down KubeletVersionSkewController ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697318 1 base_controller.go:114] Shutting down worker of SCCReconcileController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697327 1 base_controller.go:104] All SCCReconcileController workers have been terminated
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697336 1 base_controller.go:114] Shutting down worker of CertRotationTimeUpgradeableController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697344 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697358 1 base_controller.go:114] Shutting down worker of ServiceAccountIssuerController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697366 1 base_controller.go:114] Shutting down worker of KubeletVersionSkewController controller ...
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697371 1 base_controller.go:104] All KubeletVersionSkewController workers have been terminated
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697383 1 termination_observer.go:155] Shutting down TerminationObserver
2025-08-13T20:42:40.697432808+00:00 stderr F I0813 20:42:40.697412 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ...
2025-08-13T20:42:40.697462309+00:00 stderr F I0813 20:42:40.697445 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
2025-08-13T20:42:40.697680035+00:00 stderr F I0813 20:42:40.697577 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain"
2025-08-13T20:42:40.697680035+00:00 stderr F I0813 20:42:40.697663 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
2025-08-13T20:42:40.697839070+00:00 stderr F I0813 20:42:40.697729 1 secure_serving.go:258] Stopped listening on [::]:8443
2025-08-13T20:42:40.697839070+00:00 stderr F I0813 20:42:40.697827 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
2025-08-13T20:42:40.697864340+00:00 stderr F I0813 20:42:40.697850 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2025-08-13T20:42:40.698041385+00:00 stderr F I0813 20:42:40.697971 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T20:42:40.698041385+00:00 stderr F I0813 20:42:40.698012 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T20:42:40.698062476+00:00 stderr F I0813 20:42:40.698041 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting
2025-08-13T20:42:40.698077136+00:00 stderr F I0813 20:42:40.698062 1 builder.go:330] server exited
2025-08-13T20:42:40.698211990+00:00 stderr F I0813 20:42:40.698156 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T20:42:40.698211990+00:00 stderr F I0813 20:42:40.698186 1 base_controller.go:104] All ConfigObserver workers have been terminated
2025-08-13T20:42:40.698268332+00:00 stderr F I0813 20:42:40.698211 1 base_controller.go:104] All ResourceSyncController workers have been terminated
2025-08-13T20:42:40.698284232+00:00 stderr F I0813 20:42:40.698262 1 base_controller.go:104] All TargetConfigController workers have been terminated
2025-08-13T20:42:40.698284232+00:00 stderr F I0813 20:42:40.698274 1 base_controller.go:104] All NodeKubeconfigController workers have been terminated
2025-08-13T20:42:40.698298763+00:00 stderr F I0813 20:42:40.698287 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698313053+00:00 stderr F I0813 20:42:40.698298 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698313053+00:00 stderr F I0813 20:42:40.698307 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698331914+00:00 stderr F I0813 20:42:40.698321 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698346064+00:00 stderr F I0813 20:42:40.698330 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698418246+00:00 stderr F I0813 20:42:40.698357 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698418246+00:00 stderr F I0813 20:42:40.698397 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698418246+00:00 stderr F I0813 20:42:40.698409 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698438587+00:00 stderr F I0813 20:42:40.698416 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698438587+00:00 stderr F I0813 20:42:40.698426 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698453327+00:00 stderr F I0813 20:42:40.698436 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698453327+00:00 stderr F I0813 20:42:40.698445 1 base_controller.go:104] All BoundSATokenSignerController workers have been terminated
2025-08-13T20:42:40.698475168+00:00 stderr F I0813 20:42:40.698457 1 base_controller.go:104] All CertRotationController workers have been terminated
2025-08-13T20:42:40.698475168+00:00 stderr F I0813 20:42:40.698467 1 base_controller.go:104] All KubeAPIServerStaticResources workers have been terminated
2025-08-13T20:42:40.698489928+00:00 stderr F I0813 20:42:40.698480 1 base_controller.go:104] All WorkerLatencyProfile workers have been terminated
2025-08-13T20:42:40.698504369+00:00 stderr F I0813 20:42:40.698493 1 base_controller.go:104] All MissingStaticPodController workers have been terminated
2025-08-13T20:42:40.698518789+00:00 stderr F I0813 20:42:40.698501 1 controller_manager.go:54] MissingStaticPodController controller terminated
2025-08-13T20:42:40.698533250+00:00 stderr F I0813 20:42:40.698515 1 base_controller.go:104] All EncryptionConditionController workers have been terminated
2025-08-13T20:42:40.698533250+00:00 stderr F I0813 20:42:40.698528 1 base_controller.go:150] All StatusSyncer_kube-apiserver post start hooks have been terminated
2025-08-13T20:42:40.698548030+00:00 stderr F I0813 20:42:40.698537 1 base_controller.go:104] All StaticPodStateController workers have been terminated
2025-08-13T20:42:40.698548030+00:00 stderr F I0813 20:42:40.698542 1 controller_manager.go:54] StaticPodStateController controller terminated
2025-08-13T20:42:40.698562680+00:00 stderr F I0813 20:42:40.698550 1 base_controller.go:104] All EncryptionPruneController workers have been terminated
2025-08-13T20:42:40.698577201+00:00 stderr F I0813 20:42:40.698560 1 base_controller.go:104] All InstallerStateController workers have been terminated
2025-08-13T20:42:40.698577201+00:00 stderr F I0813 20:42:40.698565 1 controller_manager.go:54] InstallerStateController controller terminated
2025-08-13T20:42:40.698591811+00:00 stderr F I0813 20:42:40.698574 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated
2025-08-13T20:42:40.698591811+00:00 stderr F I0813 20:42:40.698583 1 base_controller.go:104] All EncryptionStateController workers have been terminated
2025-08-13T20:42:40.698591811+00:00 stderr F I0813 20:42:40.698585 1 base_controller.go:114] Shutting down worker of PodSecurityReadinessController controller ...
2025-08-13T20:42:40.698607632+00:00 stderr F I0813 20:42:40.698593 1 base_controller.go:104] All EncryptionKeyController workers have been terminated
2025-08-13T20:42:40.698607632+00:00 stderr F I0813 20:42:40.698598 1 base_controller.go:104] All PodSecurityReadinessController workers have been terminated
2025-08-13T20:42:40.698607632+00:00 stderr F I0813 20:42:40.698602 1 base_controller.go:104] All BackingResourceController workers have been terminated
2025-08-13T20:42:40.698623482+00:00 stderr F I0813 20:42:40.698608 1 controller_manager.go:54] BackingResourceController controller terminated
2025-08-13T20:42:40.698623482+00:00 stderr F I0813 20:42:40.698618 1 base_controller.go:104] All ServiceAccountIssuerController workers have been terminated
2025-08-13T20:42:40.698637963+00:00 stderr F I0813 20:42:40.698626 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated
2025-08-13T20:42:40.698652123+00:00 stderr F I0813 20:42:40.698636 1 base_controller.go:104] All CertRotationTimeUpgradeableController workers have been terminated
2025-08-13T20:42:40.698843558+00:00 stderr F I0813 20:42:40.698711 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-apiserver controller ...
2025-08-13T20:42:40.698843558+00:00 stderr F I0813 20:42:40.698739 1 base_controller.go:104] All StatusSyncer_kube-apiserver workers have been terminated
2025-08-13T20:42:40.700186097+00:00 stderr F E0813 20:42:40.700133 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock?timeout=4m0s": dial tcp 10.217.4.1:443: connect: connection refused
2025-08-13T20:42:40.700392933+00:00 stderr F W0813 20:42:40.700361 1 leaderelection.go:85] leader election lost
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator/0.log
2025-08-13T19:59:10.406231987+00:00 stderr F I0813 19:59:10.403110 1 cmd.go:241] Using service-serving-cert provided certificates
2025-08-13T19:59:10.406231987+00:00 stderr F I0813 19:59:10.404001 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
2025-08-13T19:59:10.410000464+00:00 stderr F I0813 19:59:10.409906 1 observer_polling.go:159] Starting file observer
2025-08-13T19:59:11.410106593+00:00 stderr F I0813 19:59:11.409319 1 builder.go:299] kube-apiserver-operator version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2
2025-08-13T19:59:20.080237695+00:00 stderr F I0813 19:59:20.064219 1 secure_serving.go:57] Forcing use of http/1.1 only
2025-08-13T19:59:20.080237695+00:00 stderr F W0813 19:59:20.064941 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:20.080237695+00:00 stderr F W0813 19:59:20.064957 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.
2025-08-13T19:59:20.968396892+00:00 stderr F I0813 19:59:20.929396 1 secure_serving.go:213] Serving securely on [::]:8443
2025-08-13T19:59:20.968960328+00:00 stderr F I0813 19:59:20.968911 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2025-08-13T19:59:20.969213366+00:00 stderr F I0813 19:59:20.969182 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
2025-08-13T19:59:20.969589386+00:00 stderr F I0813 19:59:20.969553 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"
2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.018540 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.018626 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.018655 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.020506 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.027905059+00:00 stderr F I0813 19:59:21.020534 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:21.067076155+00:00 stderr F I0813 19:59:21.064635 1 builder.go:440] detected SingleReplicaTopologyMode, the original leader election has been altered for the default SingleReplicaToplogy
2025-08-13T19:59:21.198603954+00:00 stderr F I0813 19:59:21.198514 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
2025-08-13T19:59:21.201436764+00:00 stderr F I0813 19:59:21.201395 1 leaderelection.go:250] attempting to acquire leader lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock...
2025-08-13T19:59:21.319617853+00:00 stderr F I0813 19:59:21.219440 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2025-08-13T19:59:21.319942802+00:00 stderr F I0813 19:59:21.319909 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2025-08-13T19:59:21.337971366+00:00 stderr F E0813 19:59:21.324274 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.337971366+00:00 stderr F E0813 19:59:21.324414 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.337971366+00:00 stderr F E0813 19:59:21.337285 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.354668482+00:00 stderr F E0813 19:59:21.354211 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.354668482+00:00 stderr F E0813 19:59:21.354276 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.378231954+00:00 stderr F E0813 19:59:21.368212 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.428961370+00:00 stderr F E0813 19:59:21.427143 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.428961370+00:00 stderr F E0813 19:59:21.427270 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.582008663+00:00 stderr F E0813 19:59:21.564368 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.582008663+00:00 stderr F E0813 19:59:21.564741 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.662010263+00:00 stderr F E0813 19:59:21.659349 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.662010263+00:00 stderr F E0813 19:59:21.659493 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.821740476+00:00 stderr F E0813 19:59:21.821422 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.821740476+00:00 stderr F E0813 19:59:21.821479 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:21.902165139+00:00 stderr F I0813 19:59:21.896240 1 leaderelection.go:260] successfully acquired lease openshift-kube-apiserver-operator/kube-apiserver-operator-lock
2025-08-13T19:59:21.902165139+00:00 stderr F I0813 19:59:21.900088 1 event.go:364] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator-lock", UID:"e11b3070-9ae9-4936-9a58-0ad566006f7f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"28027", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kube-apiserver-operator-78d54458c4-sc8h7_c7f30bb9-5fae-472f-b491-3b6d1380fb20 became leader
2025-08-13T19:59:21.945262068+00:00 stderr F I0813 19:59:21.921222 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T19:59:22.226725461+00:00 stderr F E0813 19:59:22.141585 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:22.226725461+00:00 stderr F E0813 19:59:22.214588 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:22.285924848+00:00 stderr F I0813 19:59:22.260011 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T19:59:22.300489964+00:00 stderr F I0813 19:59:22.300383 1 starter.go:140] FeatureGates initialized: knownFeatureGates=[AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot]
2025-08-13T19:59:22.796644677+00:00 stderr F E0813 19:59:22.788656 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:22.893581660+00:00 stderr F E0813 19:59:22.858965 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.070172439+00:00 stderr F E0813 19:59:24.069365 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:24.143020776+00:00 stderr F E0813 19:59:24.141008 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:26.649134033+00:00 stderr F E0813 19:59:26.643536 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:26.733443906+00:00 stderr F E0813 19:59:26.724689 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:27.725856005+00:00 stderr F I0813 19:59:27.724591 1 base_controller.go:67] Waiting for caches to sync for SCCReconcileController
2025-08-13T19:59:29.825671460+00:00 stderr F I0813 19:59:29.810612 1 request.go:697] Waited for 1.996140239s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/secrets?limit=500&resourceVersion=0
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.749243 1 base_controller.go:67] Waiting for caches to sync for ResourceSyncController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750474 1 base_controller.go:67] Waiting for caches to sync for TargetConfigController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750492 1 base_controller.go:67] Waiting for caches to sync for NodeKubeconfigController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750507 1 base_controller.go:67] Waiting for caches to sync for ConfigObserver
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750698 1 certrotationcontroller.go:886] Starting CertRotation
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750725 1 certrotationcontroller.go:851] Waiting for CertRotation
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750753 1 base_controller.go:67] Waiting for caches to sync for EncryptionConditionController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.750770 1 base_controller.go:67] Waiting for caches to sync for CertRotationTimeUpgradeableController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769353 1 termination_observer.go:145] Starting TerminationObserver
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769414 1 base_controller.go:67] Waiting for caches to sync for EventWatchController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769432 1 base_controller.go:67] Waiting for caches to sync for BoundSATokenSignerController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769448 1 base_controller.go:67] Waiting for caches to sync for auditPolicyController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769463 1 base_controller.go:67] Waiting for caches to sync for RemoveStaleConditionsController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769483 1 base_controller.go:67] Waiting for caches to sync for ConnectivityCheckController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769501 1 base_controller.go:67] Waiting for caches to sync for KubeletVersionSkewController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769518 1 base_controller.go:67] Waiting for caches to sync for WorkerLatencyProfile
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769529 1 base_controller.go:67] Waiting for caches to sync for webhookSupportabilityController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769545 1 base_controller.go:67] Waiting for caches to sync for ServiceAccountIssuerController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769559 1 base_controller.go:67] Waiting for caches to sync for PodSecurityReadinessController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769563 1 base_controller.go:73] Caches are synced for PodSecurityReadinessController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.769577 1 base_controller.go:110] Starting #1 worker of PodSecurityReadinessController controller ...
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770420 1 base_controller.go:67] Waiting for caches to sync for RevisionController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770545 1 base_controller.go:67] Waiting for caches to sync for InstallerStateController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770569 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770588 1 base_controller.go:67] Waiting for caches to sync for PruneController
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770601 1 base_controller.go:67] Waiting for caches to sync for StartupMonitorPodCondition
2025-08-13T19:59:32.777271905+00:00 stderr F I0813 19:59:32.770624 1 base_controller.go:67] Waiting for caches to sync for StaticPodStateFallback
2025-08-13T19:59:32.784351197+00:00 stderr F I0813 19:59:32.784319 1 base_controller.go:67] Waiting for caches to sync for MissingStaticPodController
2025-08-13T19:59:32.784537142+00:00 stderr F I0813 19:59:32.784520 1 base_controller.go:67] Waiting for caches to sync for NodeController
2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.808536 1 base_controller.go:67] Waiting for caches to sync for LoggingSyncer
2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.808607 1 base_controller.go:67] Waiting for caches to sync for UnsupportedConfigOverridesController
2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.808633 1 base_controller.go:67] Waiting for caches to sync for GuardController
2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.809391 1 base_controller.go:67] Waiting for caches to sync for EncryptionKeyController
2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.809431 1 base_controller.go:67] Waiting for caches to sync for EncryptionStateController
2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.809451 1 base_controller.go:67] Waiting for caches to sync for EncryptionPruneController
2025-08-13T19:59:32.881509787+00:00 stderr F I0813 19:59:32.809466 1 base_controller.go:67] Waiting for caches to sync for EncryptionMigrationController
2025-08-13T19:59:33.057951796+00:00 stderr F E0813 19:59:32.938639 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:33.057951796+00:00 stderr F E0813 19:59:32.938876 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:33.133759637+00:00 stderr F I0813 19:59:33.097995 1 base_controller.go:67] Waiting for caches to sync for StatusSyncer_kube-apiserver
2025-08-13T19:59:33.133759637+00:00 stderr F I0813 19:59:33.098146 1 base_controller.go:67] Waiting for caches to sync for InstallerController
2025-08-13T19:59:33.627575204+00:00 stderr F I0813 19:59:33.566350 1 base_controller.go:67] Waiting for caches to sync for BackingResourceController
2025-08-13T19:59:34.621301521+00:00 stderr F I0813 19:59:34.620676 1 base_controller.go:73] Caches are synced for InstallerStateController
2025-08-13T19:59:34.621301521+00:00 stderr F I0813 19:59:34.620732 1 base_controller.go:110] Starting #1 worker of InstallerStateController controller ...
2025-08-13T19:59:34.621301521+00:00 stderr F I0813 19:59:34.620975 1 base_controller.go:73] Caches are synced for StaticPodStateFallback
2025-08-13T19:59:34.621301521+00:00 stderr F I0813 19:59:34.620997 1 base_controller.go:110] Starting #1 worker of StaticPodStateFallback controller ...
2025-08-13T19:59:35.761142381+00:00 stderr F I0813 19:59:35.759386 1 base_controller.go:67] Waiting for caches to sync for KubeAPIServerStaticResources
2025-08-13T19:59:35.769575441+00:00 stderr F I0813 19:59:35.769540 1 base_controller.go:73] Caches are synced for KubeletVersionSkewController
2025-08-13T19:59:35.769643613+00:00 stderr F I0813 19:59:35.769625 1 base_controller.go:110] Starting #1 worker of KubeletVersionSkewController controller ...
2025-08-13T19:59:35.769994773+00:00 stderr F E0813 19:59:35.769972 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:35.815189132+00:00 stderr F I0813 19:59:35.815031 1 base_controller.go:73] Caches are synced for GuardController
2025-08-13T19:59:35.822472109+00:00 stderr F E0813 19:59:35.818123 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:35.822472109+00:00 stderr F I0813 19:59:35.820422 1 base_controller.go:110] Starting #1 worker of GuardController controller ...
2025-08-13T19:59:35.822472109+00:00 stderr F E0813 19:59:35.820875 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:35.939666840+00:00 stderr F E0813 19:59:35.938506 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:36.295905945+00:00 stderr F E0813 19:59:36.295711 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:36.295951286+00:00 stderr F E0813 19:59:36.295921 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:36.308655118+00:00 stderr F E0813 19:59:36.308104 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:36.471091479+00:00 stderr F E0813 19:59:36.470635 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:36.471091479+00:00 stderr F E0813 19:59:36.470884 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:36.522045071+00:00 stderr F E0813 19:59:36.520763 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:36.569665028+00:00 stderr F E0813 19:59:36.569470 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:40.217258604+00:00 stderr F I0813 19:59:40.207376 1 trace.go:236] Trace[1993198320]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.838) (total time: 10368ms):
2025-08-13T19:59:40.217258604+00:00 stderr F Trace[1993198320]: ---"Objects listed" error: 10368ms (19:59:40.207)
2025-08-13T19:59:40.217258604+00:00 stderr F Trace[1993198320]: [10.368623989s] [10.368623989s] END
2025-08-13T19:59:40.272640772+00:00 stderr F E0813 19:59:40.262013 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:40.621248000+00:00 stderr F E0813 19:59:40.583928 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:41.229194869+00:00 stderr F E0813 19:59:41.224921 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:42.024455049+00:00 stderr F I0813 19:59:42.020242 1 trace.go:236] Trace[1834057747]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.787) (total time: 10582ms):
2025-08-13T19:59:42.024455049+00:00 stderr F Trace[1834057747]: ---"Objects listed" error: 6515ms (19:59:36.303)
2025-08-13T19:59:42.024455049+00:00 stderr F Trace[1834057747]: [10.582929308s] [10.582929308s] END
2025-08-13T19:59:42.024455049+00:00 stderr F I0813 19:59:42.021271 1 trace.go:236] Trace[2078079104]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.796) (total time: 14224ms):
2025-08-13T19:59:42.024455049+00:00 stderr F Trace[2078079104]: ---"Objects listed" error: 14224ms (19:59:42.021)
2025-08-13T19:59:42.024455049+00:00 stderr F Trace[2078079104]: [14.224350017s] [14.224350017s] END
2025-08-13T19:59:42.024455049+00:00 stderr F E0813 19:59:42.021567 1 base_controller.go:268] StaticPodStateFallback reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:42.029403850+00:00 stderr F I0813 19:59:42.029324 1 trace.go:236] Trace[582360872]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.778) (total time: 12250ms):
2025-08-13T19:59:42.029403850+00:00 stderr F Trace[582360872]: ---"Objects listed" error: 12250ms (19:59:42.029)
2025-08-13T19:59:42.029403850+00:00 stderr F Trace[582360872]: [12.250847503s] [12.250847503s] END
2025-08-13T19:59:42.033875407+00:00 stderr F I0813 19:59:42.032164 1 trace.go:236] Trace[1034056651]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.827) (total time: 12205ms):
2025-08-13T19:59:42.033875407+00:00 stderr F Trace[1034056651]: ---"Objects listed" error: 12205ms (19:59:42.032)
2025-08-13T19:59:42.033875407+00:00 stderr F Trace[1034056651]: [12.205071418s] [12.205071418s] END
2025-08-13T19:59:42.033875407+00:00 stderr F I0813 19:59:42.032517 1 trace.go:236] Trace[518580895]: "DeltaFIFO Pop Process" ID:openshift-etcd/builder-dockercfg-sqwsk,Depth:16,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:40.368) (total time: 1663ms):
2025-08-13T19:59:42.033875407+00:00 stderr F Trace[518580895]: [1.663718075s] [1.663718075s] END
2025-08-13T19:59:42.062383520+00:00 stderr F I0813 19:59:42.062268 1 trace.go:236] Trace[1780588024]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.725) (total time: 14336ms):
2025-08-13T19:59:42.062383520+00:00 stderr F Trace[1780588024]: ---"Objects listed" error: 14309ms (19:59:42.034)
2025-08-13T19:59:42.062383520+00:00 stderr F Trace[1780588024]: [14.336749981s] [14.336749981s] END
2025-08-13T19:59:42.096995946+00:00 stderr F I0813 19:59:42.086121 1 trace.go:236] Trace[137601519]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.785) (total time: 12300ms):
2025-08-13T19:59:42.096995946+00:00 stderr F Trace[137601519]: ---"Objects listed" error: 12300ms (19:59:42.086)
2025-08-13T19:59:42.096995946+00:00 stderr F Trace[137601519]: [12.300748394s] [12.300748394s] END
2025-08-13T19:59:42.130937464+00:00 stderr F I0813 19:59:42.129235 1 trace.go:236] Trace[134539601]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.725) (total time: 14403ms):
2025-08-13T19:59:42.130937464+00:00 stderr F Trace[134539601]: ---"Objects listed" error: 14403ms (19:59:42.128)
2025-08-13T19:59:42.130937464+00:00 stderr F Trace[134539601]: [14.403338048s] [14.403338048s] END
2025-08-13T19:59:42.133862377+00:00 stderr F I0813 19:59:42.131372 1 trace.go:236] Trace[1185044808]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.768) (total time: 14363ms):
2025-08-13T19:59:42.133862377+00:00 stderr F Trace[1185044808]: ---"Objects listed" error: 14362ms (19:59:42.131)
2025-08-13T19:59:42.133862377+00:00 stderr F Trace[1185044808]: [14.363246906s] [14.363246906s] END
2025-08-13T19:59:42.133862377+00:00 stderr F E0813 19:59:42.133728 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:42.143759409+00:00 stderr F E0813 19:59:42.133993 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:42.154493295+00:00 stderr F I0813 19:59:42.150129 1 trace.go:236] Trace[1599639947]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.814) (total time: 12335ms):
2025-08-13T19:59:42.154493295+00:00 stderr F Trace[1599639947]: ---"Objects listed" error: 12335ms (19:59:42.150)
2025-08-13T19:59:42.154493295+00:00 stderr F Trace[1599639947]: [12.335962369s] [12.335962369s] END
2025-08-13T19:59:42.177003577+00:00 stderr F I0813 19:59:42.175075 1 trace.go:236] Trace[638251685]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.839) (total time: 12335ms):
2025-08-13T19:59:42.177003577+00:00 stderr F Trace[638251685]: ---"Objects listed" error: 12335ms (19:59:42.174)
2025-08-13T19:59:42.177003577+00:00 stderr F Trace[638251685]: [12.33529162s] [12.33529162s] END
2025-08-13T19:59:42.750373270+00:00 stderr F I0813 19:59:42.183435 1 trace.go:236] Trace[27614155]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.785) (total time: 14398ms):
2025-08-13T19:59:42.750373270+00:00 stderr F Trace[27614155]: ---"Objects listed" error: 14397ms (19:59:42.183)
2025-08-13T19:59:42.750373270+00:00 stderr F Trace[27614155]: [14.398033837s] [14.398033837s] END
2025-08-13T19:59:43.106877212+00:00 stderr F I0813 19:59:43.104636 1 trace.go:236] Trace[2014468090]: "DeltaFIFO Pop Process" ID:openshift-kube-apiserver-operator/builder-dockercfg-2cs69,Depth:13,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.203) (total time: 899ms):
2025-08-13T19:59:43.106877212+00:00 stderr F Trace[2014468090]: [899.704855ms] [899.704855ms] END
2025-08-13T19:59:43.106877212+00:00 stderr F I0813 19:59:43.104728 1 trace.go:236] Trace[2080221332]: "DeltaFIFO Pop Process" ID:openshift,Depth:58,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.205) (total time: 899ms):
2025-08-13T19:59:43.106877212+00:00 stderr F Trace[2080221332]: [899.1632ms] [899.1632ms] END
2025-08-13T19:59:43.106877212+00:00 stderr F I0813 19:59:43.105632 1 trace.go:236] Trace[1806978334]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.778) (total time: 13326ms):
2025-08-13T19:59:43.106877212+00:00 stderr F Trace[1806978334]: ---"Objects listed" error: 12623ms (19:59:42.402)
2025-08-13T19:59:43.106877212+00:00 stderr F Trace[1806978334]: [13.326226606s] [13.326226606s] END
2025-08-13T19:59:43.106877212+00:00 stderr F I0813 19:59:42.191421 1 trace.go:236] Trace[1088521478]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.797) (total time: 14394ms):
2025-08-13T19:59:43.106877212+00:00 stderr F Trace[1088521478]: ---"Objects listed" error: 14393ms (19:59:42.190)
2025-08-13T19:59:43.106877212+00:00 stderr F Trace[1088521478]: [14.394350463s] [14.394350463s] END
2025-08-13T19:59:43.107256163+00:00 stderr F I0813 19:59:43.107191 1 trace.go:236] Trace[924750758]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.781) (total time: 13320ms):
2025-08-13T19:59:43.107256163+00:00 stderr F Trace[924750758]: ---"Objects listed" error: 13320ms (19:59:43.106)
2025-08-13T19:59:43.107256163+00:00 stderr F Trace[924750758]: [13.320284976s] [13.320284976s] END
2025-08-13T19:59:43.135742825+00:00 stderr F I0813 19:59:43.135669 1 base_controller.go:73] Caches are synced for StatusSyncer_kube-apiserver
2025-08-13T19:59:43.135915400+00:00 stderr F I0813 19:59:43.135899 1 base_controller.go:110] Starting #1 worker of StatusSyncer_kube-apiserver controller ...
2025-08-13T19:59:43.135977682+00:00 stderr F I0813 19:59:43.135964 1 base_controller.go:73] Caches are synced for CertRotationTimeUpgradeableController
2025-08-13T19:59:43.136017843+00:00 stderr F I0813 19:59:43.135997 1 base_controller.go:110] Starting #1 worker of CertRotationTimeUpgradeableController controller ...
2025-08-13T19:59:43.136067364+00:00 stderr F I0813 19:59:43.136054 1 base_controller.go:73] Caches are synced for RemoveStaleConditionsController
2025-08-13T19:59:43.136098325+00:00 stderr F I0813 19:59:43.136086 1 base_controller.go:110] Starting #1 worker of RemoveStaleConditionsController controller ...
2025-08-13T19:59:43.136177638+00:00 stderr F I0813 19:59:43.136162 1 base_controller.go:73] Caches are synced for InstallerController
2025-08-13T19:59:43.136218839+00:00 stderr F I0813 19:59:43.136205 1 base_controller.go:110] Starting #1 worker of InstallerController controller ...
2025-08-13T19:59:43.136698632+00:00 stderr F I0813 19:59:43.136636 1 base_controller.go:73] Caches are synced for StaticPodStateController
2025-08-13T19:59:43.161048897+00:00 stderr F I0813 19:59:43.160759 1 base_controller.go:110] Starting #1 worker of StaticPodStateController controller ...
2025-08-13T19:59:43.171934617+00:00 stderr F I0813 19:59:43.171879 1 base_controller.go:73] Caches are synced for PruneController
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.203209 1 trace.go:236] Trace[1013357871]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.783) (total time: 14419ms):
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1013357871]: ---"Objects listed" error: 14419ms (19:59:42.202)
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1013357871]: [14.419417737s] [14.419417737s] END
2025-08-13T19:59:44.502769353+00:00 stderr F E0813 19:59:42.203284 1 base_controller.go:268] KubeletVersionSkewController reconciliation failed: kubeapiservers.operator.openshift.io "cluster" not found
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.203738 1 trace.go:236] Trace[904353601]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.785) (total time: 14418ms):
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[904353601]: ---"Objects listed" error: 14418ms (19:59:42.203)
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[904353601]: [14.418199793s] [14.418199793s] END
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.253665 1 trace.go:236] Trace[1900812755]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.818) (total time: 14435ms):
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1900812755]: ---"Objects listed" error: 14435ms (19:59:42.253)
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1900812755]: [14.435143226s] [14.435143226s] END
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.313282 1 trace.go:236] Trace[800968062]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.814) (total time: 12498ms):
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[800968062]: ---"Objects listed" error: 12498ms (19:59:42.313)
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[800968062]: [12.498837281s] [12.498837281s] END
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.402149 1 base_controller.go:73] Caches are synced for StartupMonitorPodCondition
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.421581 1 trace.go:236] Trace[222516314]: "DeltaFIFO Pop Process" ID:openshift-kube-apiserver/bootstrap-kube-apiserver-crc.17dc8ed8c9d1f157,Depth:239,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.191) (total time: 230ms):
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[222516314]: [230.016326ms] [230.016326ms] END
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.699445 1 trace.go:236] Trace[1849060913]: "DeltaFIFO Pop Process" ID:kube-system/builder-dockercfg-kkqp2,Depth:33,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:42.161) (total time: 537ms):
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1849060913]: [537.766968ms] [537.766968ms] END
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.704628 1 trace.go:236] Trace[1382711218]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.986) (total time: 12718ms):
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1382711218]: ---"Objects listed" error: 12718ms (19:59:42.704)
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1382711218]: [12.718383669s] [12.718383669s] END
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:42.750164 1 certrotationcontroller.go:869] Finished waiting for CertRotation
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.064057 1 base_controller.go:73] Caches are synced for ServiceAccountIssuerController
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.096032 1 trace.go:236] Trace[1559849221]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:28.000) (total time: 15095ms):
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1559849221]: ---"Objects listed" error: 15094ms (19:59:43.094)
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[1559849221]: [15.095690773s] [15.095690773s] END
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.176166 1 base_controller.go:73] Caches are synced for NodeController
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.176183 1 base_controller.go:73] Caches are synced for LoggingSyncer
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.176196 1 base_controller.go:73] Caches are synced for UnsupportedConfigOverridesController
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.188026 1 base_controller.go:73] Caches are synced for SCCReconcileController
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:43.280637 1 trace.go:236] Trace[839686592]: "DeltaFIFO Pop Process" ID:control-plane-machine-set-operator,Depth:204,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:43.179) (total time: 100ms):
2025-08-13T19:59:44.502769353+00:00 stderr F Trace[839686592]: [100.798494ms] [100.798494ms] END
2025-08-13T19:59:44.502769353+00:00 stderr F E0813 19:59:43.465159 1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:44.502769353+00:00 stderr F I0813 19:59:44.501106 1 base_controller.go:110] Starting #1 worker of PruneController controller ...
2025-08-13T19:59:44.505865951+00:00 stderr F I0813 19:59:44.503724 1 trace.go:236] Trace[1060106757]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.818) (total time: 14684ms):
2025-08-13T19:59:44.505865951+00:00 stderr F Trace[1060106757]: ---"Objects listed" error: 14684ms (19:59:44.503)
2025-08-13T19:59:44.505865951+00:00 stderr F Trace[1060106757]: [14.684920135s] [14.684920135s] END
2025-08-13T19:59:44.702386043+00:00 stderr F I0813 19:59:44.702232 1 trace.go:236] Trace[1629861906]: "DeltaFIFO Pop Process" ID:system:controller:disruption-controller,Depth:111,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:43.363) (total time: 1339ms):
2025-08-13T19:59:44.702386043+00:00 stderr F Trace[1629861906]: [1.339129512s] [1.339129512s] END
2025-08-13T19:59:44.837178345+00:00 stderr F I0813 19:59:44.837078 1 prune_controller.go:269] Nothing to prune
2025-08-13T19:59:44.935728074+00:00 stderr F I0813 19:59:44.905141 1 trace.go:236] Trace[740019728]: "DeltaFIFO Pop Process" ID:system:openshift:openshift-controller-manager:image-trigger-controller,Depth:33,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.704) (total time: 200ms):
2025-08-13T19:59:44.935728074+00:00 stderr F Trace[740019728]: [200.473475ms] [200.473475ms] END
2025-08-13T19:59:44.959257775+00:00 stderr F I0813 19:59:44.959194 1 base_controller.go:110] Starting #1 worker of StartupMonitorPodCondition controller ...
2025-08-13T19:59:45.190940199+00:00 stderr F I0813 19:59:45.141035 1 base_controller.go:110] Starting #1 worker of ServiceAccountIssuerController controller ...
2025-08-13T19:59:45.191140695+00:00 stderr F I0813 19:59:45.157727 1 base_controller.go:110] Starting #1 worker of NodeController controller ...
2025-08-13T19:59:45.191207477+00:00 stderr F I0813 19:59:45.157743 1 base_controller.go:110] Starting #1 worker of LoggingSyncer controller ...
2025-08-13T19:59:45.191256538+00:00 stderr F I0813 19:59:45.157751 1 base_controller.go:110] Starting #1 worker of UnsupportedConfigOverridesController controller ...
2025-08-13T19:59:45.404111546+00:00 stderr F I0813 19:59:45.157767 1 base_controller.go:110] Starting #1 worker of SCCReconcileController controller ...
2025-08-13T19:59:45.404111546+00:00 stderr F I0813 19:59:45.171163 1 trace.go:236] Trace[1633723082]: "DeltaFIFO Pop Process" ID:system:openshift:operator:openshift-apiserver-operator,Depth:18,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.905) (total time: 183ms):
2025-08-13T19:59:45.404111546+00:00 stderr F Trace[1633723082]: [183.235093ms] [183.235093ms] END
2025-08-13T19:59:46.094984539+00:00 stderr F I0813 19:59:46.094748 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T19:59:46.094984539+00:00 stderr F I0813 19:59:46.094945 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.094984539+00:00 stderr F I0813 19:59:46.094958 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.105011265+00:00 stderr F I0813 19:59:46.101336 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T19:59:46.111537851+00:00 stderr F I0813 19:59:46.111438 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T19:59:46.111537851+00:00 stderr F I0813 19:59:46.111508 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.111537851+00:00 stderr F I0813 19:59:46.111519 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.111606923+00:00 stderr F I0813 19:59:46.111557 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T19:59:46.112875449+00:00 stderr F I0813 19:59:46.112743 1 trace.go:236] Trace[1514538963]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.826) (total time: 16285ms):
2025-08-13T19:59:46.112875449+00:00 stderr F Trace[1514538963]: ---"Objects listed" error: 16285ms (19:59:46.112)
2025-08-13T19:59:46.112875449+00:00 stderr F Trace[1514538963]: [16.285982883s] [16.285982883s] END
2025-08-13T19:59:46.141217627+00:00 stderr F I0813 19:59:46.141153 1 trace.go:236] Trace[2132981388]: "DeltaFIFO Pop Process" ID:openshift-kube-apiserver/kube-apiserver-crc.17dc8f01381192ba,Depth:193,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:44.992) (total time: 1148ms):
2025-08-13T19:59:46.141217627+00:00 stderr F Trace[2132981388]: [1.148505488s] [1.148505488s] END
2025-08-13T19:59:46.144456709+00:00 stderr F I0813 19:59:46.144416 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T19:59:46.144650605+00:00 stderr F I0813 19:59:46.144624 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T19:59:46.144927433+00:00 stderr F I0813 19:59:46.144902 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T19:59:46.154048783+00:00 stderr F I0813 19:59:46.153965 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'RequiredInstallerResourcesMissing' configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8
2025-08-13T19:59:46.154176986+00:00 stderr F I0813 19:59:46.154156 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T19:59:46.158647634+00:00 stderr F I0813 19:59:46.158612 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T19:59:46.158740346+00:00 stderr F I0813 19:59:46.158722 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T19:59:46.158907421+00:00 stderr F I0813 19:59:46.158762 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.158953042+00:00 stderr F I0813 19:59:46.158939 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.159055045+00:00 stderr F I0813 19:59:46.159037 1 base_controller.go:73] Caches are synced for BackingResourceController
2025-08-13T19:59:46.159088616+00:00 stderr F I0813 19:59:46.159077 1 base_controller.go:110] Starting #1 worker of BackingResourceController controller ...
2025-08-13T19:59:46.160268820+00:00 stderr F E0813 19:59:46.160204 1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
2025-08-13T19:59:46.160356262+00:00 stderr F I0813 19:59:46.160302 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T19:59:46.327105875+00:00 stderr F I0813 19:59:46.323731 1 base_controller.go:67] Waiting for caches to sync for CertRotationController
2025-08-13T19:59:46.640573681+00:00 stderr F I0813 19:59:46.334064 1 trace.go:236] Trace[383502669]: "DeltaFIFO Pop Process" ID:openshift-kube-apiserver/kube-apiserver-crc.17dcded7fc33a4b6,Depth:142,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:46.176) (total time: 157ms):
2025-08-13T19:59:46.640573681+00:00 stderr F Trace[383502669]: [157.750217ms] [157.750217ms] END
2025-08-13T19:59:46.640638053+00:00 stderr F I0813 19:59:46.347238 1 trace.go:236] Trace[927508524]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.768) (total time: 18578ms):
2025-08-13T19:59:46.640638053+00:00 stderr F Trace[927508524]: ---"Objects listed" error: 18578ms (19:59:46.347)
2025-08-13T19:59:46.640638053+00:00 stderr F Trace[927508524]: [18.578666736s] [18.578666736s] END
2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.640610 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.347553 1 trace.go:236] Trace[669931977]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.768) (total time: 18578ms):
2025-08-13T19:59:46.652059088+00:00 stderr F Trace[669931977]: ---"Objects listed" error: 18407ms (19:59:46.175)
2025-08-13T19:59:46.652059088+00:00 stderr F Trace[669931977]: [18.578891232s] [18.578891232s] END
2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.647321 1 reflector.go:351] Caches populated for *v1.Secret from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.368954 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.647454 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.369250 1 base_controller.go:73] Caches are synced for EncryptionKeyController
2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.647500 1 base_controller.go:110] Starting #1 worker of EncryptionKeyController controller ...
2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.369267 1 base_controller.go:73] Caches are synced for EncryptionMigrationController
2025-08-13T19:59:46.652059088+00:00 stderr F I0813 19:59:46.649332 1 base_controller.go:110] Starting #1 worker of EncryptionMigrationController controller ...
2025-08-13T19:59:46.652506731+00:00 stderr F I0813 19:59:46.369287 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.652586393+00:00 stderr F I0813 19:59:46.652556 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.652668456+00:00 stderr F I0813 19:59:46.384797 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.652713147+00:00 stderr F I0813 19:59:46.652698 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.657545305+00:00 stderr F I0813 19:59:46.398921 1 base_controller.go:73] Caches are synced for EncryptionPruneController
2025-08-13T19:59:46.657621557+00:00 stderr F I0813 19:59:46.657601 1 base_controller.go:110] Starting #1 worker of EncryptionPruneController controller ...
2025-08-13T19:59:46.660720275+00:00 stderr F I0813 19:59:46.399045 1 base_controller.go:73] Caches are synced for EncryptionStateController
2025-08-13T19:59:46.661012374+00:00 stderr F I0813 19:59:46.660988 1 base_controller.go:110] Starting #1 worker of EncryptionStateController controller ...
2025-08-13T19:59:46.662276710+00:00 stderr F I0813 19:59:46.449596 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.662334651+00:00 stderr F I0813 19:59:46.662316 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.663541776+00:00 stderr F I0813 19:59:46.449633 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.663610078+00:00 stderr F I0813 19:59:46.663585 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.666594283+00:00 stderr F I0813 19:59:46.451435 1 base_controller.go:73] Caches are synced for EncryptionConditionController
2025-08-13T19:59:46.666665075+00:00 stderr F I0813 19:59:46.666646 1 base_controller.go:110] Starting #1 worker of EncryptionConditionController controller ...
2025-08-13T19:59:46.673105408+00:00 stderr F I0813 19:59:46.468242 1 base_controller.go:73] Caches are synced for KubeAPIServerStaticResources
2025-08-13T19:59:46.673184980+00:00 stderr F I0813 19:59:46.673163 1 base_controller.go:110] Starting #1 worker of KubeAPIServerStaticResources controller ...
2025-08-13T19:59:46.676508885+00:00 stderr F I0813 19:59:46.468300 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.676573737+00:00 stderr F I0813 19:59:46.676555 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.681108276+00:00 stderr F I0813 19:59:46.468319 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.681302312+00:00 stderr F I0813 19:59:46.681180 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.707044046+00:00 stderr F I0813 19:59:46.485287 1 base_controller.go:73] Caches are synced for BoundSATokenSignerController
2025-08-13T19:59:46.707230501+00:00 stderr F I0813 19:59:46.707200 1 base_controller.go:110] Starting #1 worker of BoundSATokenSignerController controller ...
2025-08-13T19:59:46.750947557+00:00 stderr F I0813 19:59:46.485317 1 base_controller.go:73] Caches are synced for auditPolicyController
2025-08-13T19:59:46.750947557+00:00 stderr F I0813 19:59:46.750875 1 base_controller.go:110] Starting #1 worker of auditPolicyController controller ...
2025-08-13T19:59:46.751920665+00:00 stderr F I0813 19:59:46.502223 1 base_controller.go:73] Caches are synced for NodeKubeconfigController
2025-08-13T19:59:46.751920665+00:00 stderr F I0813 19:59:46.751097 1 base_controller.go:110] Starting #1 worker of NodeKubeconfigController controller ...
2025-08-13T19:59:46.831887244+00:00 stderr F I0813 19:59:46.825572 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:46.831887244+00:00 stderr F I0813 19:59:46.825695 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:46.865169383+00:00 stderr F I0813 19:59:46.865101 1 base_controller.go:73] Caches are synced for EventWatchController
2025-08-13T19:59:46.865286456+00:00 stderr F I0813 19:59:46.865268 1 base_controller.go:110] Starting #1 worker of EventWatchController controller ...
2025-08-13T19:59:47.704232902+00:00 stderr F I0813 19:59:47.692032 1 prune_controller.go:269] Nothing to prune
2025-08-13T19:59:47.862264577+00:00 stderr F I0813 19:59:47.844259 1 trace.go:236] Trace[1477171505]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:27.786) (total time: 20057ms):
2025-08-13T19:59:47.862264577+00:00 stderr F Trace[1477171505]: ---"Objects listed" error: 20057ms (19:59:47.844)
2025-08-13T19:59:47.862264577+00:00 stderr F Trace[1477171505]: [20.057651336s] [20.057651336s] END
2025-08-13T19:59:47.862264577+00:00 stderr F I0813 19:59:47.844328 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
2025-08-13T19:59:47.903459451+00:00 stderr F I0813 19:59:46.502273 1 base_controller.go:73] Caches are synced for WorkerLatencyProfile
2025-08-13T19:59:47.903459451+00:00 stderr F I0813 19:59:47.887274 1 base_controller.go:110] Starting #1 worker of WorkerLatencyProfile controller ...
2025-08-13T19:59:47.947048473+00:00 stderr F I0813 19:59:47.918354 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
2025-08-13T19:59:47.947048473+00:00 stderr F I0813 19:59:46.502285 1 base_controller.go:73] Caches are synced for RevisionController
2025-08-13T19:59:47.947048473+00:00 stderr F I0813 19:59:47.918431 1 base_controller.go:110] Starting #1 worker of RevisionController controller ...
2025-08-13T19:59:48.329221767+00:00 stderr F I0813 19:59:46.502296 1 base_controller.go:73] Caches are synced for MissingStaticPodController
2025-08-13T19:59:48.329364341+00:00 stderr F I0813 19:59:48.329343 1 base_controller.go:110] Starting #1 worker of MissingStaticPodController controller ...
2025-08-13T19:59:48.450923837+00:00 stderr F I0813 19:59:48.433261 1 trace.go:236] Trace[1914587675]: "DeltaFIFO Pop Process" ID:openshift-config-managed/dashboard-cluster-total,Depth:35,Reason:slow event handlers blocking the queue (13-Aug-2025 19:59:47.844) (total time: 588ms):
2025-08-13T19:59:48.450923837+00:00 stderr F Trace[1914587675]: [588.63424ms] [588.63424ms] END
2025-08-13T19:59:48.658903396+00:00 stderr F I0813 19:59:48.658421 1 base_controller.go:73] Caches are synced for ConfigObserver
2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:46.515255 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"crc\" not ready since 2024-06-27 13:34:14 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)")
2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:48.760552 1 base_controller.go:73] Caches are synced for CertRotationController
2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:48.760587 1 base_controller.go:73] Caches are synced for ResourceSyncController
2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:49.539556 1 base_controller.go:110] Starting #1 worker of ResourceSyncController controller ...
2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:49.539612 1 base_controller.go:110] Starting #1 worker of ConfigObserver controller ...
2025-08-13T19:59:49.540014883+00:00 stderr F I0813 19:59:49.539633 1 base_controller.go:110] Starting #1 worker of CertRotationController controller ...
2025-08-13T19:59:49.680538308+00:00 stderr F I0813 19:59:49.680477 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:29:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}}
2025-08-13T19:59:49.757899653+00:00 stderr F I0813 19:59:49.746243 1 base_controller.go:73] Caches are synced for TargetConfigController
2025-08-13T19:59:49.757899653+00:00 stderr F I0813 19:59:49.746307 1 base_controller.go:110] Starting #1 worker of TargetConfigController controller ...
2025-08-13T19:59:50.217136634+00:00 stderr P I0813 19:59:50.215305 1 core.go:358] ConfigMap "openshift-config-managed/kube-apiserver-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKAB 2025-08-13T19:59:50.217380451+00:00 stderr F 
UmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7
iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:36Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T19:59:50.251507593+00:00 stderr F I0813 19:59:50.250263 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: 2025-08-13T19:59:50.251507593+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:50.251590646+00:00 stderr F I0813 19:59:50.251524 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:50Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:29:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All 
is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:50.251992417+00:00 stderr F I0813 19:59:50.251867 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") 2025-08-13T19:59:50.296506296+00:00 stderr F I0813 19:59:50.294579 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/node-kubeconfigs -n openshift-kube-apiserver because it changed 2025-08-13T19:59:50.656940961+00:00 stderr F I0813 19:59:50.655745 1 core.go:358] ConfigMap "openshift-kube-apiserver/aggregator-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIV+a/r/KBVSQwDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2FnZ3JlZ2F0b3It\nY2xpZW50LXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX2FnZ3JlZ2F0b3ItY2xpZW50LXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwz1oeDcXqAniG+VxAzEbZbeheswm\nibqk0LwWbA9YAD2aJCC2U0gbXouz0u1dzDnEuwzslM0OFq2kW+1RmEB1drVBkCMV\ny/gKGmRafqGt31/rDe81XneOBzrUC/rNVDZq7rx4wsZ8YzYkPhj1frvlCCWyOdyB\n+nWF+ZZQHLXeSuHuVGnfGqmckiQf/R8ITZp/vniyeOED0w8B9ZdfVHNYJksR/Vn2\ngslU8a/mluPzSCyD10aHnX5c75yTzW4TBQvytjkEpDR5LBoRmHiuL64999DtWonq\niX7TdcoQY1LuHyilaXIp0TazmkRb3ycHAY/RQ3xumj9I25D8eLCwWvI8GwIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\nWtUaz8JmZMUc/fPnQTR0L7R9wakwHwYDVR0jBBgwFoAUWtUaz8JmZMUc/fPnQTR0\nL7R9wakwDQYJKoZIhvcNAQELBQADggEBAECt0YM/4XvtPuVY5pY2aAXRuthsw2zQ\nnXVvR5jDsfpaNvMXQWaMjM1B+giNKhDDeLmcF7GlWmFfccqBPicgFhUgQANE3ALN\ngq2Wttd641M6i4B3UuRewNySj1sc12wfgAcwaRcecDTCsZo5yuF90z4mXpZs7MWh\nKCxYPpAtLqi17IF1tJVz/03L+6WD5kUProTELtY7/KBJYV/GONMG+KAMBjg1ikMK\njA0HQiCZiWDuW1ZdAwuvh6oRNWoQy6w9Wksard/AnfXUFBwNgULMp56+tOOPHxtm\nu3XYTN0dPJXsimSk4KfS0by8waS7ocoXa3LgQxb/6h0ympDbcWtgD0w=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:00Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}},"f:labels":{".":{},"f:auth.openshift.io/managed-certificate-type":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T19:49:33Z"}],"resourceVersion":null,"uid":"1d0d5c4a-d5a2-488a-94e2-bf622b67cadf"}} 2025-08-13T19:59:50.660061100+00:00 stderr F E0813 19:59:50.656059 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io 
"kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T19:59:50.745166186+00:00 stderr F E0813 19:59:50.745016 1 base_controller.go:268] InstallerController reconciliation failed: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8] 2025-08-13T19:59:50.913502905+00:00 stderr F I0813 19:59:50.913036 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/aggregator-client-ca -n openshift-kube-apiserver: 2025-08-13T19:59:50.913502905+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T19:59:50.932861687+00:00 stderr F I0813 19:59:50.930694 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: 
aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8]","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:29:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:51.056973045+00:00 stderr F I0813 19:59:51.056886 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:52.549088319+00:00 stderr F I0813 19:59:52.370677 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 19:59:52.370630792 +0000 UTC))" 2025-08-13T19:59:52.657562101+00:00 stderr F I0813 19:59:52.450127 1 
requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.494126 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8]" 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.668765 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 19:59:52.668661858 +0000 UTC))" 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.669440 1 tlsconfig.go:178] "Loaded client CA" 
index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:52.669342427 +0000 UTC))" 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.669571 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 19:59:52.66945185 +0000 UTC))" 2025-08-13T19:59:52.669975275+00:00 stderr F I0813 19:59:52.669823 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.669603794 +0000 UTC))" 2025-08-13T19:59:52.671953091+00:00 stderr F I0813 19:59:52.670080 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.669968045 +0000 UTC))" 2025-08-13T19:59:52.883664316+00:00 stderr F I0813 
19:59:52.883408 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.834619428 +0000 UTC))" 2025-08-13T19:59:52.883664316+00:00 stderr F I0813 19:59:52.883495 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.8834619 +0000 UTC))" 2025-08-13T19:59:52.883664316+00:00 stderr F I0813 19:59:52.883516 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 19:59:52.883502232 +0000 UTC))" 2025-08-13T19:59:52.884120229+00:00 stderr F I0813 19:59:52.883954 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 
19:59:52.883929784 +0000 UTC))" 2025-08-13T19:59:52.884397857+00:00 stderr F I0813 19:59:52.884342 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115158\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115153\" (2025-08-13 18:59:11 +0000 UTC to 2026-08-13 18:59:11 +0000 UTC (now=2025-08-13 19:59:52.884319995 +0000 UTC))" 2025-08-13T19:59:52.988627408+00:00 stderr F I0813 19:59:52.988069 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2024-06-27T13:29:32Z","message":"NodeInstallerProgressing: 1 node is at revision 8","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T19:59:53.006953971+00:00 stderr F I0813 19:59:53.004302 1 prune_controller.go:269] Nothing to prune 2025-08-13T19:59:53.225649734+00:00 stderr F I0813 19:59:53.221749 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded 
message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-8,config-8,etcd-serving-ca-8,kube-apiserver-audit-policies-8,kube-apiserver-cert-syncer-kubeconfig-8,kube-apiserver-pod-8,kubelet-serving-ca-8,sa-token-signing-certs-8, secrets: etcd-client-8,localhost-recovery-client-token-8,localhost-recovery-serving-certkey-8]" to "NodeControllerDegraded: All master nodes are ready" 2025-08-13T19:59:53.558720948+00:00 stderr F I0813 19:59:53.558238 1 trace.go:236] Trace[1369719048]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:59:29.838) (total time: 23717ms): 2025-08-13T19:59:53.558720948+00:00 stderr F Trace[1369719048]: ---"Objects listed" error: 23716ms (19:59:53.555) 2025-08-13T19:59:53.558720948+00:00 stderr F Trace[1369719048]: [23.717376611s] [23.717376611s] END 2025-08-13T19:59:53.558720948+00:00 stderr F I0813 19:59:53.558621 1 reflector.go:351] Caches populated for *v1.CustomResourceDefinition from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 2025-08-13T19:59:53.583756152+00:00 stderr F I0813 19:59:53.583564 1 base_controller.go:73] Caches are synced for webhookSupportabilityController 2025-08-13T19:59:53.583756152+00:00 stderr F I0813 19:59:53.583593 1 base_controller.go:110] Starting #1 worker of webhookSupportabilityController controller ... 
2025-08-13T19:59:53.605947754+00:00 stderr F I0813 19:59:53.605280 1 base_controller.go:73] Caches are synced for ConnectivityCheckController 2025-08-13T19:59:53.605947754+00:00 stderr F I0813 19:59:53.605320 1 base_controller.go:110] Starting #1 worker of ConnectivityCheckController controller ... 2025-08-13T19:59:58.613089865+00:00 stderr F I0813 19:59:58.609004 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StartingNewRevision' new revision 9 triggered by "optional secret/webhook-authenticator has changed" 2025-08-13T19:59:58.966505629+00:00 stderr F I0813 19:59:58.917090 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/webhook-authenticator -n openshift-kube-apiserver because it changed 2025-08-13T19:59:59.839284638+00:00 stderr F I0813 19:59:59.839152 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-pod-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:01.806181934+00:00 stderr F I0813 20:00:01.796903 1 core.go:358] ConfigMap "openshift-kube-apiserver/kubelet-serving-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:18Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-controller-manager-operator","operation":"Update","time":"2025-08-13T20:00:01Z"}],"resourceVersion":null,"uid":"449953d7-35d8-4eaf-8671-65eda2b482f7"}} 2025-08-13T20:00:01.809347355+00:00 stderr F I0813 20:00:01.808935 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver: 2025-08-13T20:00:01.809347355+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:01.809347355+00:00 stderr F I0813 20:00:01.809002 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/config-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:01.907607275+00:00 stderr F I0813 20:00:01.904389 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:01.924416555+00:00 stderr F I0813 20:00:01.924338 1 core.go:358] ConfigMap "openshift-config-managed/kubelet-serving-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:47:06Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:01Z"}],"resourceVersion":null,"uid":"1b8f54dc-8896-4a59-8c53-834fed1d81fd"}} 2025-08-13T20:00:01.953431302+00:00 stderr F I0813 20:00:01.929953 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", 
ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kubelet-serving-ca -n openshift-config-managed: 2025-08-13T20:00:01.953431302+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:02.037372065+00:00 stderr F I0813 20:00:02.035056 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-metadata-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:02.070134019+00:00 stderr F I0813 20:00:02.068760 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/bound-sa-token-signing-certs-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:02.923722088+00:00 stderr F I0813 20:00:02.874163 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/etcd-serving-ca-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:03.445770364+00:00 stderr F I0813 20:00:03.445627 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-server-ca-9 -n openshift-kube-apiserver because it was missing 
2025-08-13T20:00:04.121005026+00:00 stderr F I0813 20:00:04.112026 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kubelet-serving-ca-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:05.170528143+00:00 stderr F I0813 20:00:05.160062 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/sa-token-signing-certs-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:05.681253986+00:00 stderr P I0813 20:00:05.677309 1 core.go:358] ConfigMap "openshift-kube-apiserver/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAw 2025-08-13T20:00:05.681315977+00:00 stderr F 
ggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvh
PQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:05.682706277+00:00 stderr F I0813 20:00:05.681355 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-apiserver: 2025-08-13T20:00:05.682706277+00:00 stderr F cause by 
changes in data.ca-bundle.crt 2025-08-13T20:00:05.934535818+00:00 stderr F I0813 20:00:05.920362 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:05.920269061 +0000 UTC))" 2025-08-13T20:00:05.935118714+00:00 stderr F I0813 20:00:05.935088 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:05.934695512 +0000 UTC))" 2025-08-13T20:00:05.935300110+00:00 stderr F I0813 20:00:05.935278 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.935151055 +0000 UTC))" 2025-08-13T20:00:05.935382762+00:00 stderr F I0813 20:00:05.935362 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:05.935336371 +0000 UTC))" 
2025-08-13T20:00:05.935441924+00:00 stderr F I0813 20:00:05.935428 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.935407243 +0000 UTC))" 2025-08-13T20:00:05.935499995+00:00 stderr F I0813 20:00:05.935487 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.935465504 +0000 UTC))" 2025-08-13T20:00:05.935693671+00:00 stderr F I0813 20:00:05.935672 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.935519506 +0000 UTC))" 2025-08-13T20:00:05.935751492+00:00 stderr F I0813 20:00:05.935736 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 
20:00:05.935717932 +0000 UTC))" 2025-08-13T20:00:05.939093088+00:00 stderr F I0813 20:00:05.939070 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:05.939023986 +0000 UTC))" 2025-08-13T20:00:05.939164370+00:00 stderr F I0813 20:00:05.939150 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:05.939132769 +0000 UTC))" 2025-08-13T20:00:05.939613873+00:00 stderr F I0813 20:00:05.939589 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:00:05.939563471 +0000 UTC))" 2025-08-13T20:00:05.940415505+00:00 stderr F I0813 20:00:05.940349 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115158\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115153\" (2025-08-13 18:59:11 +0000 UTC to 2026-08-13 18:59:11 +0000 UTC 
(now=2025-08-13 20:00:05.940324553 +0000 UTC))" 2025-08-13T20:00:06.125240245+00:00 stderr F I0813 20:00:06.124171 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/kube-apiserver-audit-policies-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:06.867873281+00:00 stderr F I0813 20:00:06.858398 1 request.go:697] Waited for 1.172199574s due to client-side throttling, not priority and fairness, request: PUT:https://10.217.4.1:443/api/v1/namespaces/openshift-config-managed/configmaps/kube-apiserver-client-ca 2025-08-13T20:00:08.100086485+00:00 stderr F I0813 20:00:07.974193 1 request.go:697] Waited for 1.291491224s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/trusted-ca-bundle 2025-08-13T20:00:08.100086485+00:00 stderr P I0813 20:00:07.983291 1 core.go:358] ConfigMap "openshift-config-managed/kube-apiserver-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN 
CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIIBAUNY5hoTagwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe\nFw0yNDA2MjYxMjM1MDNaFw0zNDA2MjQxMjM1MDNaMDYxEjAQBgNVBAsTCW9wZW5z\naGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0ukuw5ZLlf8y3UAx75lOITpbigoo/21st\nzRgpPzme2eLqZILklW/xqqlE3UwwD3KMa1Ykl6TM8CsYjRDgaBDJLRV72dRWLgHm\nu/qORGaUDy2Dyr90GSLaWxnCug57b63GrWXDCs0vRjjFR1ZXP3Ba9BSHx1kFXQCI\npXP8FyKS0kt4dOQOWtpgxZLxwnxu/i4TscCYhkbEPwUkeFrEnmNhmbRVCgQ7qq+k\n7rS9cxqgCQnNZzkVwRWjeiRCsoN+1A/3cK5Zg8Qlzjj0ol5rDR968odg0i7ARVW4\nz5PpWS7h8VzlLx9wWZ7C4or8U9T/Bw+2WfYYNudAYpeVLo3v7SW9AgMBAAGjQjBA\nMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTTYWky\nxTnkhhtSlIn4UYRO8sUCtDANBgkqhkiG9w0BAQsFAAOCAQEAGg7t07G8wJAjAOof\nvPJL49Jq6bBtE/NlSp6tNE6qbpupXXYlI6rLBlj9S9pPV7rhPurtw7Djq3mH9CyI\nS/nDAo9ubZF3Ux/IOcHGw/As4VLdTM184yKkVpjVlyvnGdWDupkQTQocUMo4Z3lQ\nsg7uEpqJEMTHDcgpeFanABq6gSctaVLLnPRb8awOvYzI4XdA4oSkr/u5w9Aqf20m\neq7KqvUtUhD1wXUhGND3MwfbetqAMgDidNZ4w0WrsY9APXcXMfZOhaQbiVjdhHD1\nMMMS3LMK+ocWyEWAZKcR8jastFHIeOnc+a6HQYeMfcQaHpWUedof6aOcgr5z0xr4\naKniNw==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZ 2025-08-13T20:00:08.100180528+00:00 stderr F 
XJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:05Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:00:08.189908176+00:00 stderr F I0813 20:00:08.189030 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", 
FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/etcd-client-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:08.212464720+00:00 stderr F I0813 20:00:08.206614 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: 2025-08-13T20:00:08.212464720+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:10.243962056+00:00 stderr F I0813 20:00:10.231607 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-serving-certkey-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:10.463320621+00:00 stderr F I0813 20:00:10.460161 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/localhost-recovery-client-token-9 -n openshift-kube-apiserver because it was missing 2025-08-13T20:00:11.026082297+00:00 stderr F I0813 20:00:11.005505 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/webhook-authenticator-9 -n openshift-kube-apiserver because it was missing 
2025-08-13T20:00:11.373230415+00:00 stderr F I0813 20:00:11.353030 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 9 triggered by "optional secret/webhook-authenticator has changed" 2025-08-13T20:00:11.822691641+00:00 stderr F I0813 20:00:11.822014 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionCreate' Revision 9 created because optional secret/webhook-authenticator has changed 2025-08-13T20:00:11.846949852+00:00 stderr F I0813 20:00:11.846098 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:00:12.011013411+00:00 stderr F W0813 20:00:12.010579 1 staticpod.go:38] revision 9 is unexpectedly already the latest available revision. This is a possible race! 
2025-08-13T20:00:12.011013411+00:00 stderr F E0813 20:00:12.010970 1 base_controller.go:268] RevisionController reconciliation failed: conflicting latestAvailableRevision 9 2025-08-13T20:00:13.096090201+00:00 stderr F I0813 20:00:13.095264 1 installer_controller.go:524] node crc with revision 8 is the oldest and needs new revision 9 2025-08-13T20:00:13.096090201+00:00 stderr F I0813 20:00:13.096032 1 installer_controller.go:532] "crc" moving to (v1.NodeStatus) { 2025-08-13T20:00:13.096090201+00:00 stderr F NodeName: (string) (len=3) "crc", 2025-08-13T20:00:13.096090201+00:00 stderr F CurrentRevision: (int32) 8, 2025-08-13T20:00:13.096090201+00:00 stderr F TargetRevision: (int32) 9, 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedRevision: (int32) 1, 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedTime: (*v1.Time)(0xc003c319e0)(2024-06-26 12:52:09 +0000 UTC), 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedReason: (string) (len=15) "InstallerFailed", 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedCount: (int) 1, 2025-08-13T20:00:13.096090201+00:00 stderr F LastFallbackCount: (int) 0, 2025-08-13T20:00:13.096090201+00:00 stderr F LastFailedRevisionErrors: ([]string) (len=1 cap=1) { 2025-08-13T20:00:13.096090201+00:00 stderr F (string) (len=2059) "installer: node-admin-client-cert-key\",\n (string) (len=31) \"check-endpoints-client-cert-key\",\n (string) (len=14) \"kubelet-client\",\n (string) (len=16) \"node-kubeconfigs\"\n },\n OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {\n (string) (len=17) \"user-serving-cert\",\n (string) (len=21) \"user-serving-cert-000\",\n (string) (len=21) \"user-serving-cert-001\",\n (string) (len=21) \"user-serving-cert-002\",\n (string) (len=21) \"user-serving-cert-003\",\n (string) (len=21) \"user-serving-cert-004\",\n (string) (len=21) \"user-serving-cert-005\",\n (string) (len=21) \"user-serving-cert-006\",\n (string) (len=21) \"user-serving-cert-007\",\n (string) (len=21) 
\"user-serving-cert-008\",\n (string) (len=21) \"user-serving-cert-009\"\n },\n CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\n (string) (len=20) \"aggregator-client-ca\",\n (string) (len=9) \"client-ca\",\n (string) (len=29) \"control-plane-node-kubeconfig\",\n (string) (len=26) \"check-endpoints-kubeconfig\"\n },\n OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\n (string) (len=17) \"trusted-ca-bundle\"\n },\n CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\n ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\n PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\n Timeout: (time.Duration) 2m0s,\n StaticPodManifestsLockFile: (string) \"\",\n PodMutationFns: ([]installerpod.PodMutationFunc) ,\n KubeletVersion: (string) \"\"\n})\nI0626 12:49:19.085310 1 cmd.go:410] Getting controller reference for node crc\nI0626 12:49:19.096616 1 cmd.go:423] Waiting for installer revisions to settle for node crc\nI0626 12:49:19.099505 1 cmd.go:515] Waiting additional period after revisions have settled for node crc\nI0626 12:49:49.099716 1 cmd.go:521] Getting installer pods for node crc\nF0626 12:50:03.102802 1 cmd.go:106] Get \"https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n" 2025-08-13T20:00:13.096090201+00:00 stderr F } 2025-08-13T20:00:13.096090201+00:00 stderr F } 2025-08-13T20:00:13.198325806+00:00 stderr F I0813 20:00:13.195633 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeTargetRevisionChanged' Updating node "crc" from revision 8 to 9 because node crc with revision 8 is the oldest 2025-08-13T20:00:13.284744520+00:00 stderr F 
I0813 20:00:13.284573 1 prune_controller.go:269] Nothing to prune 2025-08-13T20:00:13.285207963+00:00 stderr F I0813 20:00:13.285148 1 status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.486186314+00:00 stderr F I0813 20:00:13.485044 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9" 2025-08-13T20:00:13.491050473+00:00 stderr F I0813 20:00:13.491008 1 
status_controller.go:218] clusteroperator/kube-apiserver diff {"status":{"conditions":[{"lastTransitionTime":"2025-08-13T19:59:49Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2025-08-13T20:00:13Z","message":"NodeInstallerProgressing: 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"NodeInstaller","status":"True","type":"Progressing"},{"lastTransitionTime":"2024-06-26T12:54:02Z","message":"StaticPodsAvailable: 1 nodes are active; 1 node is at revision 8; 0 nodes have achieved new revision 9","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2024-06-26T12:46:51Z","message":"KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.","reason":"AsExpected","status":"True","type":"Upgradeable"},{"lastTransitionTime":"2024-06-26T12:48:18Z","message":"All is well","reason":"AsExpected","status":"False","type":"EvaluationConditionsDetected"}]}} 2025-08-13T20:00:13.571507167+00:00 stderr F E0813 20:00:13.571212 1 base_controller.go:268] StatusSyncer_kube-apiserver reconciliation failed: Operation cannot be fulfilled on clusteroperators.config.openshift.io "kube-apiserver": the object has been modified; please apply your changes to the latest version and try again 2025-08-13T20:00:14.466612320+00:00 stderr F I0813 20:00:14.458400 1 request.go:697] Waited for 1.135419415s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc 2025-08-13T20:00:17.365699214+00:00 stderr F I0813 20:00:17.348199 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/installer-9-crc -n 
openshift-kube-apiserver because it was missing
2025-08-13T20:00:17.620944372+00:00 stderr F I0813 20:00:17.619554 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:00:18.586967406+00:00 stderr F I0813 20:00:18.586756 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:00:21.262488005+00:00 stderr F I0813 20:00:21.259307 1 request.go:697] Waited for 1.120055997s due to client-side throttling, not priority and fairness, request: GET:https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/node-kubeconfigs
2025-08-13T20:00:22.147030787+00:00 stderr F I0813 20:00:22.142248 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:00:25.082034549+00:00 stderr F I0813 20:00:25.074714 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Pending phase
2025-08-13T20:00:28.579868887+00:00 stderr F I0813 20:00:28.571487 1 installer_controller.go:512] "crc" is in transition to 9, but has not made progress because installer is not finished, but in Running phase
2025-08-13T20:00:49.954168169+00:00 stderr P I0813 20:00:49.869279 1 core.go:358] ConfigMap "openshift-kube-apiserver/client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN
CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwgg 2025-08-13T20:00:49.954592101+00:00 stderr F 
Ei\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtnfJabf6lb+8u6Z+c
JP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}} 2025-08-13T20:00:49.954592101+00:00 stderr F I0813 20:00:49.950160 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/client-ca -n openshift-kube-apiserver: 
2025-08-13T20:00:49.954592101+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:50.452469968+00:00 stderr P I0813 20:00:50.451511 1 core.go:358] ConfigMap "openshift-config-managed/kube-apiserver-client-ca" changes: {"data":{"ca-bundle.crt":"-----BEGIN CERTIFICATE-----\nMIIDSjCCAjKgAwIBAgIITogaCmqWG28wDQYJKoZIhvcNAQELBQAwPTESMBAGA1UE\nCxMJb3BlbnNoaWZ0MScwJQYDVQQDEx5hZG1pbi1rdWJlY29uZmlnLXNpZ25lci1j\ndXN0b20wHhcNMjUwODEzMjAwMDQxWhcNMzUwODExMjAwMDQxWjA9MRIwEAYDVQQL\nEwlvcGVuc2hpZnQxJzAlBgNVBAMTHmFkbWluLWt1YmVjb25maWctc2lnbmVyLWN1\nc3RvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN6Ul782iQ8+jnY/\nOLuLWoAXzZARJSnoWByuxk6bhZpoyx8By+n70URbh4zneV9u9V3XcFKZUDEvJvU+\nS3y2c1x0M5xCIv1QsThg4nTyAvG4zvr7hilvYMdOX2Z00ZmVHMC2GLno13nKygnH\n5eqNV0pxClxNMtekPfaTp770YFMVdJ07Yh6cda24Ff4vNAlYPEMmK0LVwOaJIvJc\n+EdX0BbBVf5qOeEqP2Mx4XgDY5lkxAy8wP4gZabX94w0GKFUlRMNaItcZ7+4HEA+\nrXsn3JmE/RiMCgxn5AIcuytYU+AGsCl3mKQkUftko1PrugMLGXuB0D7Wt31vPaFp\nw7OUbF8CAwEAAaNOMEwwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8w\nKQYDVR0OBCIEIJ2ugh+YE3hjpyupDEa6mDyCbykNMfRIez3zACTjDXNCMA0GCSqG\nSIb3DQEBCwUAA4IBAQAys20MJiy/aHBgqe2ysY4ejHgAQbSWPGQ7RWMMDywxV71/\nK6RKNS4+eXPWi0nWmo2ADdd8cqp4/x8ADT0l5gnV/hq69ivQrWuR9HVkt9PA06ua\n4pYarz7mE2pZDrqpk1uA2pdHOKvLgcUb6S8UL6p8piMbG0PZqkDnWt3e8qtt2iPM\nxbyJ7OIm+EMFsMtabwT90Y4vRHkb+6Y2rqb7HbarrnSLolwkxJcR0Ezww+AlORLt\nzzd5UlbjFg/REAfqye4g9+mjG3rvUtjYYZp1RegH4WK92mdgEzwXojTJx7EqbcLa\nNZsBj/EqSKs56a9L7ukAGoLfTR+HNeWWgS6KX1JW\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDXTCCAkWgAwIBAgIIDt8OBM2kfw8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE3MTk0OTM1\nMjAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLOB+rcc5P9fzXdJml\nzYWJ37PLuw9sBv9aMA5m4AbZvPAZJvYAslp2IHn8z80bH3A4xcMEFxR04YqQ2UEo\ncYg71E1TaMTWJ5AGLfaqLMZ8bMvrNVmGq4TeiQJYl8UIJroKbzk0mogzfNYUId17\nlperb1iwP7kVb7j6H/DqHC8r3w7K/UIDIQ3Otmzf6zwYEOzU+LpsVYqIfb3lTEfD\nEov/gPwowd9gudauqXKTopkoz1BtlR21UW/2APt0lcfUYw4IW7DMIpjv4L26REtF\n7fQqHZKI87Dq/06sp09TbsLGoqStefBYHTPVRNXF33LTanuA0jIi3Gc5/E2qEbgm\nE3KRAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G\nA1UdDgQWBBSeP6S7/rOskl2/C+Yr6RoNfdFVTzAfBgNVHSMEGDAWgBRGnbdX2PXI\n+QbTOOGuYuiHyDlf6TANBgkqhkiG9w0BAQsFAAOCAQEAdF1wao2p9hOqvJc/z0uB\na5e/tHubZ3+waCMyZw4ZCq6F0sEKdrxjf2EM/P92CNixfD3AQHe009FkO6M20k3a\n7ZkqTuJhYSzX/9fCyNRkkOLP3lMVbeZtRT67yut9yKQBVQsrZ7MwJVE4XVcaVOo6\nI9DV/sLe+/PqAnqB7DomTVyAmo0kKJGKWH5H61dAo8LeVDe7GR/zm6iGgTdlQMHa\nrhSRuIEkKJFjHWutNVKc7oxZLEHFH9tY+vUOAm1WOxqHozVNyjPp41gQqalt/+Sy\ndp0nJesIyD8r085VcxvdeFudoPhMwMN+Gw0Jdn4TJyaX+x47jLI9io4ztIzuiOAO\nvg==\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIKjkEuWWeUqkwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3MTk0OTM1MjAwHhcNMjQwNjI3MTMwNTE5WhcNMjYw\nNjI3MTMwNTIwWjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTcxOTQ5MzUyMDCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPVwjXMgI5ERDcIrTTBtC2FP\nn0c4Cbuzy+WacqEvdaivs5U8azPYi3K/WEFpyP1fhcYqH4lqtor1XOmcref03RGM\nPK7eA0q3LWWQO0W9NGphMXxWgAO9wuql+7hj7PyrZYVyqTwzQ6DgM3cVR8wTPv5k\nt5jxN27lhRkJ8t0ECP7WaHzA5ijQ++yXoE4kOOyUirvjZAVbdE4N772EVyCCZZIo\nL+dRqm0CTa6W3G7uBxdP8gKMBzAJu2XlvmWe0u6lYrqhKI/eMHNSBEvH14fME66L\n2mm2sIKl7b9JnMRODXsPhERBXajJvMChIBpWW2sNZ15Sjx7+CjgohdL6u3mBee8C\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFEadt1fY9cj5BtM44a5i6IfIOV/pMB8GA1UdIwQYMBaAFEadt1fY9cj5BtM4\n4a5i6IfIOV/pMA0GCSqGSIb3DQEBCwUAA4IBAQDL9Y1ySrTt34d9t5Jf8Tr0gkyY\nwiZ18M/zQcpLUJmcvI5YkYdOI+YKyxkJso1fzWNjRdmBr9tul91fHR991p3OXvw+\na60sxKCazueiV/0pq3WHCrJcNv93wQt9XeycfpFgqgLztwk5vSUTWa8hpfzsSdx9\nE8bm6wBvjl0MX8mys9aybWfDBwk2JM7cNaDXsZECLpQ9RDTk6zLyvYzHIdohyKd7\ncpSHENsM+DAV1QOAb10hZNNmF8PnQ6/eZwJhqUoBRRl+WjqZzBIdo7fda8bF/tNR\nxAldI9e/ZY5Xnway7/42nuDDwn5EMANMUxuX6cDLSh5FPldBZZE+F0y1SYBT\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDiTCCAnGgAwIBAgIIDybx/7gBBXUwDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE\nAwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz\nci1zaWduZXItc2lnbmVyQDE3NTUxMTUxODkwHhcNMjUwODEzMTk1OTUzWhcNMjcw\nODEzMTk1OTU0WjBSMVAwTgYDVQQDDEdvcGVuc2hpZnQta3ViZS1jb250cm9sbGVy\nLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25lci1zaWduZXJAMTc1NTExNTE4OTCC\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM+nkBGrK8QMKohpks2u4Y7/\nrhyVcCbrXGJAgvcwdEa2jjk1RquxRXiXDmc4xZ71IKtRt8NaKQz97pi+FYgRP5F5\nEVlUfmrRQiXWdcx5BdnWskDka//gJ7qyH4ZhFPR/nKk6GYeOJ3POWC9bkrx9ZQny\nZqQzb1Kv712mGzu57bjwaECKsgCFOn+iA0PgXDiF6AfSzfpB80YRyaGIVfbt/BT5\ndpxbPzE0TD1FCoGQFbXQM8QqfI/mMVNOG2LtiIZ3BuVk8upwSAUpMD/VJg7WXwqr\nwa8Cufcm+hkN15arjh95kXWi05mg+rstkNJwhyf9w9fKlD9meH1C8zXOsULgGoEC\nAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O\nBBYEFLYed+rTz/zaMI+SJaBS+oX3ErGBMB8GA1UdIwQYMBaAFLYed+rTz/zaMI+S\nJaBS+oX3ErGBMA0GCSqGSIb3DQEBCwUAA4IBAQALoggXkMwJ4ekxi/bQ3X9dkPaY\n/Vu7hXuHstFU52v28fuRpZ6byfX+HNaZz1hXmRNYQB7Zr1Y2HTnxPVBAzf5mS80Q\nmWJYsbbWJF3o8y2vWSNaPmpUCDfbs/kkIhBDod0s0quXjZQLzxWPHD4TUj95hYZ9\nyBhAiwypFgAmSvvvEtD/vUDdHYWHQJBr+0gMo3y3n/b3WHTjs78YQbvinUWuuF+1\nb2wt53UYx4WUqk03uxnG05R0ypXvyy3yvPCIQpit2wpK6PORm6X6fWgjrG20V8Jw\ngbQynW9t4+6sisnot3RS3fimaCvVriYtC1ypLc5TcXKa4PJ3jOl1tY+b9C2w\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDlTCCAn2gAwIBAgIIaVQ0niw/vvUwDQYJKoZIhvcNAQELBQAwWDFWMFQGA1UE\nAwxNb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtYXBpc2Vy\ndmVyLXRvLWt1YmVsZXQtc2lnbmVyQDE3NTUxMTQ1NjYwHhcNMjUwODEzMTk0OTI1\nWhcNMjYwODEzMTk0OTI2WjBYMVYwVAYDVQQDDE1vcGVuc2hpZnQta3ViZS1hcGlz\nZXJ2ZXItb3BlcmF0b3Jfa3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxldC1zaWduZXJA\nMTc1NTExNDU2NjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK8A4PoH\nr4bT0zNI8AQWFOzBg9TESjK8MG8mrFIIopRK1QLzs4bJMSvd14LDghnPYiOnO4Cm\na2Z9RyF1PhhwoXONBiPLez6Oi2zKn7fQM/uQM5O+zMLpMbjyxtAvWEGjXptKiYtQ\n0d3sCQyBkNtk4HQkkgNVCq9VaRyxdD5bIwi9a2hvwIw6db8ocW4oM+LNo9C4JZZf\npZU7IFmYSwFWMSyQ1Epk508uG1OxCQTYzB+e8gMMEtO4Zb/7rcqdt9G8cOoE/7AH\ngmYtL1JfreEAnJRJ01DsrFGe/SIS1HiX6r+0KYCX2X+o8pe05hGoBRlTRVq/nVxc\nGEQ7F9VkIOffc/8CAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQF\nMAMBAf8wHQYDVR0OBBYEFEjgkF/fPsZfRu7d4GOjjRhUkn5SMB8GA1UdIwQYMBaA\nFEjgkF/fPsZfRu7d4GOjjRhUkn5SMA0GCSqGSIb3DQEBCwUAA4IBAQBbhNpygmf2\nH9JibMV6svWQBv+/YYimm/mokM/S5R7xbZJ8NDkzwXj8zHJxZVM3CxJB0XWALuEp\nfj+vAxmqzOEmnRaHPv5Ff0juElkRFWUz+lml16NG2VTY33au3cuMLFXjXCDkouAV\ntKG8rztFgdGbBAA4s9EphCBUgMDCaI0kKMoEjTIg3AGQpgeN7uuMfmXoGhORgk+D\n1Mh6k5tA2SsIbprcdh57uyWB6Tr6MQqpA9yq65A9MDNMyYKhZqpMukRBFKdgJ65L\nkq1vIHo8TnYSshzg6U19gFfGFR34mUWI9Mt6oEubVawyxfMFOf69+agcnP7sgysO\nNjJQ93yMg0Et\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIVEbgI+Ts3cQwDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1zGqjsr/iWkztHP3fZ1xSulP5\nfzYNLZbVFM7vHME+QxouNEFcurz8Q7G1Ts2J4yk9lckoMsc6Mb0WEX2TIQt6g9K5\nFGuRQF1ntghv8egXcGxeUi918HiJqvPu3JaYPJzbp10PC2zaXxCoY/5r95SC8m0R\ndVU6txvHGIb6QxYhW8C8gLEexdRbiN/aNm1tTjbY9A80/ENHpEvF7TAJzvjwuIQ9\nzojl61uijbwMeQszdO1gPtmdrWIDdBH3omoOZl2Xn2N9FMK9fXvaOYhE0mlyxSwy\nifFY2PThSttbFjowoAfEnsQ5ObO7Jt6+BRvKHQzx2WwSnBG9wG1Dhe6qv1vvAgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBQkXiY6aWacRxpCq9oeP2PwT2CsmDAfBgNVHSMEGDAWgBQkXiY6aWacRxpCq9oe\nP2PwT2CsmDANBgkqhkiG9w0BAQsFAAOCAQEAA9QEdQhpq7f6CdqpfBBdyfn3bLJ1\nn75fktFnUyyE955nmynu7hrEPpwycXL5s5dpSD3Tn9ZA7kLJ+5QLfRHAhDV/foeh\n5+ii4wD3KZ7VgIYALOg5XOGnoPhNqlwc6mc1nN4vj/DtKYWDGX4ot60URbWxjctJ\nQq3Y93bDEzjbMrU9sFh+S3oUmVcy29VO8lWRAmqEP2JxWc6mxqiYmSpXbifOZLog\n5rZoV+QdNGbQx7TswqdA/vC+E5W7BJgiUfShir4+kTcX5nDyj9EtFbDQg5mQA8Qc\nThlm7HkXod1OVC8tKXFyBkMZYUJ/xQxbgdi9DlDeU8bpCj10bgE03mmQJA==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhzCCAm+gAwIBAgIIWzi58t64qQ0wDQYJKoZIhvcNAQELBQAwUTFPME0GA1UE\nAwxGb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX2t1YmUtY29udHJv\nbC1wbGFuZS1zaWduZXJAMTc1NTExNDU2NjAeFw0yNTA4MTMxOTQ5MjVaFw0yNzA4\nMTMxOTQ5MjZaMFExTzBNBgNVBAMMRm9wZW5zaGlmdC1rdWJlLWFwaXNlcnZlci1v\ncGVyYXRvcl9rdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyQDE 2025-08-13T20:00:50.452521249+00:00 stderr F 
3NTUxMTQ1NjYwggEi\nMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCfa5owddGI0fVGGUJFLDi5gk5b\nLL/alNsFzcv41gY+stjz6x80wUTBo7ASqnflX0KiRJHu+bWTp2XdrzOi/ahV3Gnu\na8Mdn70XY7XVf927GYv210sjARSi/GyTx027LoFdRGHQVOUGnDmFEst3uFjCzs3H\ntrh6WKW6ezdT5S0D5r0WJugln/CCI4NejwO5FwHSeV8+p8yH/jY+2G8XrvfNvRbH\n7UREwiaK3Ia+2S6ER/EkK71Yg9YHrSneTyWydTIHOhJKteJOajIgmdyFxKbct9Pr\nT20ocbsDIhexyLovPoPEGVArc8iz8trWyJH8hRFaEbpDL1hCr6ruC+XjcbN7AgMB\nAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW\nBBStAdHrLny44rLWUAGkc/g56/2ESTAfBgNVHSMEGDAWgBStAdHrLny44rLWUAGk\nc/g56/2ESTANBgkqhkiG9w0BAQsFAAOCAQEAOBddS+pMKFxZ7y6j+UPaVUccXRxR\n5M8VIknZmHxFnMZCbvgtbx5Qzy4TwAdKr2nl0zCbemqzRBQFOfw14DS5aS9FsLem\nw0jKA9hOmt4kDREqQws4xWJXjI/mTQu+GMqVfWjWL75+l3MEaVOVwDfoUCYp52Qw\nCmQ07fCGwFcmj9dsn4SZBnsmh07AC3xFZhKHxu0IBTc64oN//gzenB1+aJDOjpDJ\nzXCnqbeUjLLGmHadlnpc/iYQDhWREBwSc8c+r/LUhRTcsFU8vDYfFzv387SY0wnO\nZeeEJE20XkbJg8jemwuX3G4W6BX1TxW/IIElb+1v34+JaVlNN9avPMe4ew==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgIIYH5jtjTiKS0wDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE\nCxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u\nZmlnLXNpZ25lcjAeFw0yNDA2MjYxMjM1MDRaFw0zNDA2MjQxMjM1MDRaMEIxEjAQ\nBgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi\nZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1\nNAOpEO2uZeKABUmZx/iSVgUJ/NL4riZ9EHrw9K2jWxizH9M/wUE2r30q6KK1R5Te\nGOM40Eqp+ImtvyTv8r7slEXHpWVDJfqc7vb+TxBANz72KniMffC/Ok5+tlm6qdYZ\nHRtN341VgZMoO7EQ7a+Qk2YBtUAGfM+7CGrtELADMnKh1H9EANe4EY2FmFVrRKX6\n61oaoW4re+LmsUDbC5ZFmtKh7edkcTyoSjQVZKYew+X2lMrV1UNQueq6fpUlOjAL\nmgq6k2hmuf4ssHcFPaZzBASFCzAQgg4LBs8ZDxHvy4JjCOdUC/vNlMI5qGAotlQe\nh/Z6nIyt1Sm+pr/RzzoDAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBQSXbzhQRNtr23JF3QZ5zCQCiQzIDANBgkqhkiG\n9w0BAQsFAAOCAQEACG0YdguKTgF3UXqt51eiPbVIf3q5UhqWHMvZMgZxs3sfJQLA\neSHieIqArkYvj2SzLGPgMN7hzC17fmBtUN+Gr6dC3x7QW5DQPYbMFjJDx6FqDec/\ndq/cxa2AeGgacv4UqwAFrdCX2EbvQrxmddK22rCtYjcW0SipQf+vZw07wevvW6TM\nU8WNgFihiWPTEtn
fJabf6lb+8u6Z+cJP2besrkvMP9ST8mXpM2vIi4HAKMtIvhPQ\nT047xTV2+Ch+hmE2nQMRGtB2vecN7+Vmgdr/Em61fDw0x3hcCIr5NvCXbo9fLP6c\nhp07F0gxmwFQCV4OikPMFqLMdmxrapq5Dpq3lQ==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIDhTCCAm2gAwIBAgIIeTbBLyzCeF8wDQYJKoZIhvcNAQELBQAwUDFOMEwGA1UE\nAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9wZXJhdG9yX25vZGUtc3lzdGVt\nLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MB4XDTI1MDgxMzE5NDkyNVoXDTI2MDgx\nMzE5NDkyNlowUDFOMEwGA1UEAwxFb3BlbnNoaWZ0LWt1YmUtYXBpc2VydmVyLW9w\nZXJhdG9yX25vZGUtc3lzdGVtLWFkbWluLXNpZ25lckAxNzU1MTE0NTY2MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA27lBXVBiDWvpIu2HawWru9vcVpi2\nHIDU3Ml6fgkfNFw+/2Flp2C0iAf9Tq0h9qpH+dIm/PbLWtAJIr19wMuApAIrIuAZ\nZHCYrvZgvR36efRzO6ZUJ2QV76N253M2l+SQAKpr9DEEL4HrMRJ2m2rhwaofRMjj\nnfB96EwjKcPUGORGx2vZKdQDKVZKtJnp7u5sXGaaZxnUu97hxSebnYHas+SkrvvV\n+Pvca6nlWHDtY/PDO8OWnw1OJlRz9GFHBGCJrG4Yt0L9R6nX1iWi5YyrUC53Drmq\npprxCj3raVqDWn0sTHTJSoyBUHg8FBh0ZAUPiOY/ZyeKb2XD/QVsyKmZBQIDAQAB\no2MwYTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU\n0bHr3NI8rUK8kAnH+uQswPt4QdMwHwYDVR0jBBgwFoAU0bHr3NI8rUK8kAnH+uQs\nwPt4QdMwDQYJKoZIhvcNAQELBQADggEBAGV1thuR7eKXSaQ2Wn5jKS5Wlj3+bX23\nd8jdkK4gZrcgSzZk1iobEgKVVNDI6ddfNPUCLVw7gUspaib6WDfkzwzJhYVjCROO\nKPztrbbr7y358DnfowIPWqNHq546sl7fC43l/irHaX1cihpFEnB2cTzZLDdD76rc\nuj3AIBOABui3yUxtVwBKsMgbI+/Vz6/Pg9shHPwazlooZa4lEyKpJdlquFFBMn7G\n7iJa+pvkPBfQ4rUxv3K0fFw+5q0/fKdYCkczrcW+1CGhFXLZFlhip+F8H0kpmTsi\nBzMGgDLkY1zQno6MXox0azEP4ZpLpCvOgFvaCzAjpMuvGCHK94bzCh4=\n-----END CERTIFICATE-----\n"},"metadata":{"creationTimestamp":"2024-06-26T12:48:56Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca-bundle.crt":{}},"f:metadata":{"f:annotations":{".":{},"f:openshift.io/owning-component":{}}}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2025-08-13T20:00:49Z"}],"resourceVersion":null,"uid":"7cd7d474-d3c4-4aba-852e-6eecdf374372"}} 2025-08-13T20:00:50.491882951+00:00 stderr F I0813 20:00:50.472665 1 event.go:364] 
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1685682f-c45b-43b7-8431-b19c7e8a7d30", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: 2025-08-13T20:00:50.491882951+00:00 stderr F cause by changes in data.ca-bundle.crt 2025-08-13T20:00:59.988258923+00:00 stderr F I0813 20:00:59.971160 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:00:59.971063173 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.991927 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:00:59.991860806 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992048 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.991966509 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992073 1 tlsconfig.go:178] "Loaded client CA" index=3 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:00:59.992059152 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992099 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992080352 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992128 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992105723 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992151 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992136864 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992169 1 tlsconfig.go:178] 
"Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992156825 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992188 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:00:59.992174365 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992228 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:00:59.992200266 +0000 UTC))" 2025-08-13T20:00:59.992359120+00:00 stderr F I0813 20:00:59.992255 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:00:59.992240197 +0000 UTC))" 2025-08-13T20:00:59.994898803+00:00 stderr F I0813 20:00:59.992640 1 tlsconfig.go:200] "Loaded serving cert" 
certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2024-06-26 12:47:09 +0000 UTC to 2026-06-26 12:47:10 +0000 UTC (now=2025-08-13 20:00:59.992603197 +0000 UTC))" 2025-08-13T20:00:59.994898803+00:00 stderr F I0813 20:00:59.993035 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115158\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115153\" (2025-08-13 18:59:11 +0000 UTC to 2026-08-13 18:59:11 +0000 UTC (now=2025-08-13 20:00:59.993011119 +0000 UTC))" 2025-08-13T20:01:14.400175850+00:00 stderr F I0813 20:01:14.390670 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.key" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:14.400175850+00:00 stderr F I0813 20:01:14.393475 1 dynamic_serving_content.go:192] "Failed to remove file watch, it may have been deleted" file="/var/run/secrets/serving-cert/tls.crt" err="fsnotify: can't remove non-existent watch: /var/run/secrets/serving-cert/tls.crt" 2025-08-13T20:01:14.429368972+00:00 stderr F I0813 20:01:14.424378 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:14.429368972+00:00 stderr F I0813 20:01:14.425685 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:14.425611745 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.425769 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:14.425704257 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485069 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:14.484961197 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485106 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:14.485085791 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485272 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485183653 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485306 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485284976 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485351 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485314047 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485377 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485357498 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485404 1 tlsconfig.go:178] "Loaded client CA" index=8 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:14.485386229 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485472 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:14.485444671 +0000 UTC))" 2025-08-13T20:01:14.486963924+00:00 stderr F I0813 20:01:14.485516 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:14.485497092 +0000 UTC))" 2025-08-13T20:01:14.505378189+00:00 stderr F I0813 20:01:14.489240 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" certDetail="\"metrics.openshift-kube-apiserver-operator.svc\" [serving] validServingFor=[metrics.openshift-kube-apiserver-operator.svc,metrics.openshift-kube-apiserver-operator.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:22 +0000 UTC to 2027-08-13 20:00:23 +0000 UTC (now=2025-08-13 20:01:14.489194768 +0000 UTC))" 2025-08-13T20:01:14.507034327+00:00 stderr F I0813 20:01:14.506154 1 
named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115158\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115153\" (2025-08-13 18:59:11 +0000 UTC to 2026-08-13 18:59:11 +0000 UTC (now=2025-08-13 20:01:14.48961271 +0000 UTC))" 2025-08-13T20:01:15.604133219+00:00 stderr F I0813 20:01:15.603594 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.crt" has been modified (old="b57d435dda9015b836844ccd5987ed8197c1e056dd12749adf934051633baa90", new="60b634b6a45b60ea7526d03c4e0dd32ed9c3754978fa8314240e9b0d791c4ab0") 2025-08-13T20:01:15.604133219+00:00 stderr F W0813 20:01:15.603963 1 builder.go:155] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified 2025-08-13T20:01:15.604133219+00:00 stderr F I0813 20:01:15.604069 1 observer_polling.go:120] Observed file "/var/run/secrets/serving-cert/tls.key" has been modified (old="1b9136fb24ce5731f959f039a819bcb38f1319259d783f75058cdbd1e9634c27", new="f77cc21f2f2924073edb7f73a484e9ce3a552373ca706e2e4a46fb72d4f6a8fa") 2025-08-13T20:01:15.604323094+00:00 stderr F I0813 20:01:15.604280 1 genericapiserver.go:679] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:01:15.607932247+00:00 stderr F I0813 20:01:15.604630 1 genericapiserver.go:536] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:01:15.608345929+00:00 stderr F I0813 20:01:15.608191 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.608345929+00:00 stderr F I0813 20:01:15.608316 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.608345929+00:00 stderr F I0813 20:01:15.608230 1 base_controller.go:172] Shutting down CertRotationController ... 
2025-08-13T20:01:15.608443591+00:00 stderr F I0813 20:01:15.608365 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.608443591+00:00 stderr F I0813 20:01:15.608414 1 base_controller.go:172] Shutting down TargetConfigController ... 2025-08-13T20:01:15.608443591+00:00 stderr F I0813 20:01:15.608358 1 base_controller.go:172] Shutting down ConnectivityCheckController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608285 1 base_controller.go:172] Shutting down UnsupportedConfigOverridesController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608378 1 base_controller.go:172] Shutting down LoggingSyncer ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608391 1 base_controller.go:172] Shutting down webhookSupportabilityController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608399 1 base_controller.go:172] Shutting down NodeController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608432 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.608517964+00:00 stderr F I0813 20:01:15.608512 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.608540594+00:00 stderr F I0813 20:01:15.608477 1 base_controller.go:172] Shutting down ServiceAccountIssuerController ... 2025-08-13T20:01:15.608540594+00:00 stderr F I0813 20:01:15.608527 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.608540594+00:00 stderr F I0813 20:01:15.608531 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.608550604+00:00 stderr F I0813 20:01:15.608544 1 base_controller.go:172] Shutting down ConfigObserver ... 2025-08-13T20:01:15.608562215+00:00 stderr F I0813 20:01:15.608556 1 base_controller.go:172] Shutting down ResourceSyncController ... 
2025-08-13T20:01:15.608626157+00:00 stderr F I0813 20:01:15.608587 1 base_controller.go:172] Shutting down MissingStaticPodController ... 2025-08-13T20:01:15.608626157+00:00 stderr F I0813 20:01:15.608601 1 base_controller.go:172] Shutting down EncryptionStateController ... 2025-08-13T20:01:15.608626157+00:00 stderr F I0813 20:01:15.608589 1 certrotationcontroller.go:899] Shutting down CertRotation 2025-08-13T20:01:15.608626157+00:00 stderr F I0813 20:01:15.608616 1 base_controller.go:172] Shutting down RevisionController ... 2025-08-13T20:01:15.608677738+00:00 stderr F I0813 20:01:15.608658 1 base_controller.go:172] Shutting down StartupMonitorPodCondition ... 2025-08-13T20:01:15.608711579+00:00 stderr F I0813 20:01:15.608671 1 base_controller.go:172] Shutting down EncryptionPruneController ... 2025-08-13T20:01:15.608739830+00:00 stderr F I0813 20:01:15.608679 1 base_controller.go:172] Shutting down WorkerLatencyProfile ... 2025-08-13T20:01:15.608766861+00:00 stderr F I0813 20:01:15.608705 1 base_controller.go:172] Shutting down EventWatchController ... 2025-08-13T20:01:15.610025417+00:00 stderr F I0813 20:01:15.608737 1 termination_observer.go:155] Shutting down TerminationObserver 2025-08-13T20:01:15.610150250+00:00 stderr F I0813 20:01:15.610097 1 base_controller.go:114] Shutting down worker of BackingResourceController controller ... 2025-08-13T20:01:15.610183981+00:00 stderr F I0813 20:01:15.610136 1 base_controller.go:172] Shutting down PruneController ... 2025-08-13T20:01:15.610236473+00:00 stderr F I0813 20:01:15.610223 1 base_controller.go:172] Shutting down StaticPodStateController ... 2025-08-13T20:01:15.610264143+00:00 stderr F I0813 20:01:15.610200 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.610303184+00:00 stderr F I0813 20:01:15.610289 1 base_controller.go:172] Shutting down InstallerController ... 
2025-08-13T20:01:15.610344556+00:00 stderr F I0813 20:01:15.610332 1 base_controller.go:172] Shutting down RemoveStaleConditionsController ... 2025-08-13T20:01:15.610415938+00:00 stderr F I0813 20:01:15.610362 1 base_controller.go:172] Shutting down CertRotationTimeUpgradeableController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608865 1 base_controller.go:172] Shutting down NodeKubeconfigController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608868 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608884 1 base_controller.go:172] Shutting down auditPolicyController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608913 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608921 1 base_controller.go:172] Shutting down BoundSATokenSignerController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608955 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.608979 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.609010 1 base_controller.go:172] Shutting down KubeAPIServerStaticResources ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.609030 1 base_controller.go:172] Shutting down EncryptionMigrationController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.609049 1 base_controller.go:172] Shutting down EncryptionKeyController ... 2025-08-13T20:01:15.641062132+00:00 stderr F I0813 20:01:15.609074 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609093 1 base_controller.go:172] Shutting down BackingResourceController ... 
2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609210 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609252 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609320 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.608263 1 base_controller.go:172] Shutting down SCCReconcileController ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609339 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609361 1 base_controller.go:114] Shutting down worker of EncryptionStateController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609505 1 base_controller.go:114] Shutting down worker of EncryptionPruneController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609514 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609651 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641114503+00:00 stderr F I0813 20:01:15.609737 1 base_controller.go:114] Shutting down worker of EncryptionMigrationController controller ... 2025-08-13T20:01:15.641128403+00:00 stderr F I0813 20:01:15.609751 1 base_controller.go:114] Shutting down worker of EncryptionKeyController controller ... 2025-08-13T20:01:15.641128403+00:00 stderr F I0813 20:01:15.609759 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641128403+00:00 stderr F I0813 20:01:15.610261 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:01:15.641143274+00:00 stderr F I0813 20:01:15.610350 1 base_controller.go:114] Shutting down worker of LoggingSyncer controller ... 2025-08-13T20:01:15.641156434+00:00 stderr F I0813 20:01:15.610443 1 base_controller.go:114] Shutting down worker of NodeController controller ... 2025-08-13T20:01:15.641156434+00:00 stderr F I0813 20:01:15.610451 1 base_controller.go:114] Shutting down worker of ServiceAccountIssuerController controller ... 2025-08-13T20:01:15.641156434+00:00 stderr F I0813 20:01:15.610460 1 base_controller.go:114] Shutting down worker of StartupMonitorPodCondition controller ... 2025-08-13T20:01:15.641167685+00:00 stderr F I0813 20:01:15.608763 1 base_controller.go:172] Shutting down CertRotationController ... 2025-08-13T20:01:15.641167685+00:00 stderr F I0813 20:01:15.610501 1 base_controller.go:114] Shutting down worker of PruneController controller ... 2025-08-13T20:01:15.641167685+00:00 stderr F I0813 20:01:15.610508 1 base_controller.go:114] Shutting down worker of StaticPodStateController controller ... 2025-08-13T20:01:15.641181705+00:00 stderr F I0813 20:01:15.610516 1 base_controller.go:114] Shutting down worker of InstallerController controller ... 2025-08-13T20:01:15.641181705+00:00 stderr F I0813 20:01:15.610967 1 base_controller.go:114] Shutting down worker of RemoveStaleConditionsController controller ... 2025-08-13T20:01:15.641181705+00:00 stderr F I0813 20:01:15.610992 1 base_controller.go:172] Shutting down StatusSyncer_kube-apiserver ... 2025-08-13T20:01:15.641192225+00:00 stderr F I0813 20:01:15.611015 1 base_controller.go:114] Shutting down worker of StatusSyncer_kube-apiserver controller ... 
2025-08-13T20:01:15.641192225+00:00 stderr F I0813 20:01:15.611016 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key" 2025-08-13T20:01:15.641192225+00:00 stderr F I0813 20:01:15.611190 1 base_controller.go:114] Shutting down worker of SCCReconcileController controller ... 2025-08-13T20:01:15.641202416+00:00 stderr F I0813 20:01:15.611203 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641202416+00:00 stderr F I0813 20:01:15.611229 1 base_controller.go:172] Shutting down EncryptionConditionController ... 2025-08-13T20:01:15.641217486+00:00 stderr F I0813 20:01:15.611318 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:15.641217486+00:00 stderr F I0813 20:01:15.611364 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:15.641229086+00:00 stderr F I0813 20:01:15.611392 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:01:15.641229086+00:00 stderr F I0813 20:01:15.611420 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting" 2025-08-13T20:01:15.641229086+00:00 stderr F I0813 20:01:15.611550 1 secure_serving.go:258] Stopped listening on [::]:8443 2025-08-13T20:01:15.641239587+00:00 stderr F I0813 20:01:15.611598 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:01:15.641239587+00:00 stderr F I0813 20:01:15.611643 1 base_controller.go:114] Shutting down worker of ConnectivityCheckController controller ... 2025-08-13T20:01:15.641249067+00:00 stderr F I0813 20:01:15.611675 1 base_controller.go:114] Shutting down worker of webhookSupportabilityController controller ... 
2025-08-13T20:01:15.641249067+00:00 stderr F I0813 20:01:15.611683 1 base_controller.go:114] Shutting down worker of TargetConfigController controller ... 2025-08-13T20:01:15.641259667+00:00 stderr F I0813 20:01:15.611704 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641259667+00:00 stderr F I0813 20:01:15.611986 1 genericapiserver.go:637] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:01:15.641259667+00:00 stderr F I0813 20:01:15.612066 1 base_controller.go:114] Shutting down worker of EncryptionConditionController controller ... 2025-08-13T20:01:15.641269957+00:00 stderr F I0813 20:01:15.612072 1 base_controller.go:114] Shutting down worker of ConfigObserver controller ... 2025-08-13T20:01:15.641269957+00:00 stderr F I0813 20:01:15.612084 1 base_controller.go:114] Shutting down worker of ResourceSyncController controller ... 2025-08-13T20:01:15.641289048+00:00 stderr F I0813 20:01:15.612104 1 base_controller.go:114] Shutting down worker of MissingStaticPodController controller ... 2025-08-13T20:01:15.641289048+00:00 stderr F I0813 20:01:15.612113 1 base_controller.go:114] Shutting down worker of auditPolicyController controller ... 2025-08-13T20:01:15.641289048+00:00 stderr F I0813 20:01:15.612120 1 base_controller.go:114] Shutting down worker of RevisionController controller ... 2025-08-13T20:01:15.641299968+00:00 stderr F I0813 20:01:15.612127 1 base_controller.go:114] Shutting down worker of WorkerLatencyProfile controller ... 2025-08-13T20:01:15.641309749+00:00 stderr F I0813 20:01:15.612135 1 base_controller.go:114] Shutting down worker of EventWatchController controller ... 2025-08-13T20:01:15.641309749+00:00 stderr F I0813 20:01:15.612141 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612148 1 base_controller.go:114] Shutting down worker of NodeKubeconfigController controller ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612221 1 base_controller.go:172] Shutting down GuardController ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612236 1 base_controller.go:172] Shutting down KubeletVersionSkewController ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612252 1 base_controller.go:172] Shutting down StaticPodStateFallback ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612263 1 base_controller.go:172] Shutting down InstallerStateController ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612280 1 base_controller.go:114] Shutting down worker of GuardController controller ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612286 1 base_controller.go:114] Shutting down worker of KubeletVersionSkewController controller ... 2025-08-13T20:01:15.641387131+00:00 stderr F I0813 20:01:15.612292 1 base_controller.go:114] Shutting down worker of StaticPodStateFallback controller ... 2025-08-13T20:01:15.641738091+00:00 stderr F I0813 20:01:15.612296 1 base_controller.go:114] Shutting down worker of InstallerStateController controller ... 2025-08-13T20:01:15.641738091+00:00 stderr F I0813 20:01:15.612330 1 base_controller.go:114] Shutting down worker of UnsupportedConfigOverridesController controller ... 2025-08-13T20:01:15.641738091+00:00 stderr F I0813 20:01:15.612339 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 2025-08-13T20:01:15.641754601+00:00 stderr F I0813 20:01:15.612356 1 base_controller.go:114] Shutting down worker of BoundSATokenSignerController controller ... 2025-08-13T20:01:15.641754601+00:00 stderr F I0813 20:01:15.612365 1 base_controller.go:114] Shutting down worker of CertRotationController controller ... 
2025-08-13T20:01:15.641754601+00:00 stderr F I0813 20:01:15.612413 1 base_controller.go:114] Shutting down worker of CertRotationTimeUpgradeableController controller ... 2025-08-13T20:01:15.641754601+00:00 stderr F I0813 20:01:15.612581 1 base_controller.go:172] Shutting down PodSecurityReadinessController ... 2025-08-13T20:01:15.641769122+00:00 stderr F I0813 20:01:15.612700 1 base_controller.go:114] Shutting down worker of PodSecurityReadinessController controller ... 2025-08-13T20:01:15.641769122+00:00 stderr F I0813 20:01:15.619109 1 simple_featuregate_reader.go:177] Shutting down feature-gate-detector 2025-08-13T20:01:15.641935446+00:00 stderr F E0813 20:01:15.630417 1 base_controller.go:268] KubeAPIServerStaticResources reconciliation failed: ["assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml" (string): Get "https://10.217.4.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader": context canceled, "assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/check-endpoints-rolebinding.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/localhost-recovery-client-crb.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/localhost-recovery-sa.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml" (string): client rate limiter Wait returned an error: context canceled, 
"assets/kube-apiserver/storage-version-migration-flowschema.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/storage-version-migration-flowschema-v1beta3.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration-v1beta3.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/api-usage.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/audit-errors.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/cpu-utilization.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/kube-apiserver-requests.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/kube-apiserver-slos-basic.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/podsecurity-violations.yaml" (string): client rate limiter Wait returned an error: context canceled, "assets/alerts/kube-apiserver-slos-extended.yaml" (string): client rate limiter Wait returned an error: context canceled, client rate limiter Wait returned an error: context canceled] 2025-08-13T20:01:15.641935446+00:00 stderr F I0813 20:01:15.641395 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641958087+00:00 stderr F I0813 20:01:15.641945 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:15.641967987+00:00 stderr F I0813 20:01:15.641411 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641967987+00:00 stderr F I0813 20:01:15.641963 1 
base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:15.641977968+00:00 stderr F I0813 20:01:15.641419 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641977968+00:00 stderr F I0813 20:01:15.641425 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641987658+00:00 stderr F I0813 20:01:15.641440 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:15.641987658+00:00 stderr F I0813 20:01:15.641980 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:15.641997388+00:00 stderr F I0813 20:01:15.641450 1 base_controller.go:104] All BackingResourceController workers have been terminated 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.643992 1 genericapiserver.go:586] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.644035 1 base_controller.go:104] All ConnectivityCheckController workers have been terminated 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.644047 1 base_controller.go:104] All auditPolicyController workers have been terminated 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.644061 1 base_controller.go:104] All RevisionController workers have been terminated 2025-08-13T20:01:15.644121349+00:00 stderr F I0813 20:01:15.644098 1 base_controller.go:104] All MissingStaticPodController workers have been terminated 2025-08-13T20:01:15.644242882+00:00 stderr F I0813 20:01:15.644208 1 base_controller.go:104] All StaticPodStateFallback workers have been terminated 2025-08-13T20:01:15.645271592+00:00 stderr F I0813 20:01:15.645070 1 base_controller.go:104] All GuardController workers have been terminated 2025-08-13T20:01:15.645271592+00:00 stderr F I0813 20:01:15.644765 1 
base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:15.645499838+00:00 stderr F I0813 20:01:15.645456 1 base_controller.go:104] All InstallerStateController workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128512 1 base_controller.go:104] All CertRotationTimeUpgradeableController workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128679 1 base_controller.go:104] All PodSecurityReadinessController workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128768 1 base_controller.go:114] Shutting down worker of KubeAPIServerStaticResources controller ... 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128827 1 base_controller.go:104] All webhookSupportabilityController workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128877 1 controller_manager.go:54] BackingResourceController controller terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128832 1 base_controller.go:104] All KubeAPIServerStaticResources workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:16.128894 1 base_controller.go:104] All WorkerLatencyProfile workers have been terminated 2025-08-13T20:01:16.128912252+00:00 stderr F I0813 20:01:15.641461 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128904 1 base_controller.go:104] All TargetConfigController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128911 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128913 1 base_controller.go:104] All EventWatchController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:15.641470 1 
base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128920 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:15.641480 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128925 1 base_controller.go:104] All NodeKubeconfigController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:15.641488 1 base_controller.go:104] All EncryptionStateController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:15.641497 1 base_controller.go:104] All EncryptionPruneController workers have been terminated 2025-08-13T20:01:16.128960894+00:00 stderr F I0813 20:01:16.128947 1 genericapiserver.go:699] [graceful-termination] apiserver is exiting 2025-08-13T20:01:16.128975874+00:00 stderr F I0813 20:01:15.641507 1 base_controller.go:104] All EncryptionMigrationController workers have been terminated 2025-08-13T20:01:16.128975874+00:00 stderr F I0813 20:01:15.641530 1 base_controller.go:104] All LoggingSyncer workers have been terminated 2025-08-13T20:01:16.128975874+00:00 stderr F I0813 20:01:16.128969 1 controller_manager.go:54] MissingStaticPodController controller terminated 2025-08-13T20:01:16.128975874+00:00 stderr F I0813 20:01:16.128970 1 controller_manager.go:54] LoggingSyncer controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641553 1 base_controller.go:104] All ServiceAccountIssuerController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.128976 1 controller_manager.go:54] RevisionController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641564 1 base_controller.go:104] All StartupMonitorPodCondition workers have been terminated 
2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129046 1 controller_manager.go:54] StartupMonitorPodCondition controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129023 1 base_controller.go:104] All BoundSATokenSignerController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641569 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129054 1 base_controller.go:104] All UnsupportedConfigOverridesController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129067 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129071 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129072 1 controller_manager.go:54] UnsupportedConfigOverridesController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641586 1 base_controller.go:104] All StaticPodStateController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129085 1 controller_manager.go:54] StaticPodStateController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641607 1 base_controller.go:104] All RemoveStaleConditionsController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641618 1 base_controller.go:150] All StatusSyncer_kube-apiserver post start hooks have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129112 1 base_controller.go:104] All StatusSyncer_kube-apiserver workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641645 1 base_controller.go:104] All SCCReconcileController workers have been terminated 
2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641654 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641455 1 base_controller.go:150] All CertRotationController post start hooks have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129134 1 base_controller.go:104] All CertRotationController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641540 1 base_controller.go:104] All NodeController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129144 1 controller_manager.go:54] NodeController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.128988 1 builder.go:330] server exited 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641518 1 base_controller.go:104] All EncryptionKeyController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.128997 1 base_controller.go:104] All EncryptionConditionController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129004 1 base_controller.go:104] All ConfigObserver workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129007 1 base_controller.go:104] All ResourceSyncController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129032 1 base_controller.go:104] All KubeletVersionSkewController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641597 1 base_controller.go:104] All InstallerController workers have been terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129190 1 controller_manager.go:54] InstallerController controller terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:15.641577 1 base_controller.go:104] All PruneController workers have been 
terminated 2025-08-13T20:01:16.129365185+00:00 stderr F I0813 20:01:16.129207 1 controller_manager.go:54] PruneController controller terminated 2025-08-13T20:01:16.130086206+00:00 stderr F I0813 20:01:16.130032 1 controller_manager.go:54] StaticPodStateFallback controller terminated 2025-08-13T20:01:16.135711526+00:00 stderr F I0813 20:01:16.133970 1 controller_manager.go:54] GuardController controller terminated 2025-08-13T20:01:16.135711526+00:00 stderr F I0813 20:01:16.134100 1 controller_manager.go:54] InstallerStateController controller terminated 2025-08-13T20:01:18.800179041+00:00 stderr F W0813 20:01:18.797867 1 leaderelection.go:85] leader election lost

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29426415-vhdrh_7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2/collect-profiles/0.log
2025-12-13T00:15:11.254855148+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="Successfully created configMap openshift-operator-lifecycle-manager/olm-operator-heap-htjnj" 2025-12-13T00:15:11.280716805+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="Successfully created configMap openshift-operator-lifecycle-manager/catalog-operator-heap-2mn9r" 2025-12-13T00:15:11.284248699+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="Successfully deleted configMap openshift-operator-lifecycle-manager/catalog-operator-heap-88mpx" 2025-12-13T00:15:11.288288528+00:00 stderr F time="2025-12-13T00:15:11Z" level=info msg="Successfully deleted configMap openshift-operator-lifecycle-manager/olm-operator-heap-48qq2"

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/1.log
2025-12-13T00:13:15.018451155+00:00 stderr F I1213 00:13:15.017995 1 cmd.go:47] Starting console conversion webhook server 2025-12-13T00:13:15.028899066+00:00 stderr F I1213 00:13:15.028314 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-12-13T00:13:15.028899066+00:00 stderr F I1213 00:13:15.028834 1 certwatcher.go:115] "Starting certificate watcher" logger="controller-runtime.certwatcher" 2025-12-13T00:13:15.029467425+00:00 stderr F I1213 00:13:15.029136 1 cmd.go:93] Serving on [::]:9443

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server/0.log

2025-08-13T19:59:37.777127617+00:00 stderr F I0813 19:59:37.775509 1 cmd.go:47] Starting console conversion webhook server 2025-08-13T19:59:37.955172273+00:00 stderr F I0813 19:59:37.955027 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-08-13T19:59:37.956180592+00:00 stderr F I0813 19:59:37.956106 1 cmd.go:93] Serving on [::]:9443 2025-08-13T19:59:37.957682034+00:00 stderr F I0813 19:59:37.957611 1 certwatcher.go:115] "Starting certificate watcher" logger="controller-runtime.certwatcher" 2025-08-13T20:00:50.382659807+00:00 stderr F I0813 
20:00:50.373360 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher" 2025-08-13T20:00:50.382659807+00:00 stderr F I0813 20:00:50.374426 1 certwatcher.go:161] "Updated current TLS certificate" logger="controller-runtime.certwatcher"

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/0.log

2025-08-13T19:50:48.104229662+00:00 stderr F I0813 19:50:48.084890 1 start.go:38] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-08-13T19:51:18.557700290+00:00 stderr F W0813 19:51:18.557095 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.558076260+00:00 stderr F I0813 19:51:18.558043 1 trace.go:236] Trace[912093740]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:50:48.535) (total time: 30022ms): 2025-08-13T19:51:18.558076260+00:00 stderr F Trace[912093740]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30021ms (19:51:18.556) 2025-08-13T19:51:18.558076260+00:00 stderr F Trace[912093740]: [30.022734787s] [30.022734787s] END 2025-08-13T19:51:18.558524563+00:00 stderr F E0813 19:51:18.558463 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.569047554+00:00 stderr F W0813 19:51:18.568932 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.569072735+00:00 stderr F I0813 19:51:18.569052 1 trace.go:236] Trace[2105199760]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:50:48.532) (total time: 30036ms): 2025-08-13T19:51:18.569072735+00:00 stderr F Trace[2105199760]: ---"Objects listed" error:Get 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30036ms (19:51:18.568) 2025-08-13T19:51:18.569072735+00:00 stderr F Trace[2105199760]: [30.036876891s] [30.036876891s] END 2025-08-13T19:51:18.569072735+00:00 stderr F E0813 19:51:18.569067 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.572313797+00:00 stderr F W0813 19:51:18.572229 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.572412030+00:00 stderr F I0813 19:51:18.572388 1 trace.go:236] Trace[153696363]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:50:48.532) (total time: 30040ms): 2025-08-13T19:51:18.572412030+00:00 stderr F Trace[153696363]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30040ms (19:51:18.572) 2025-08-13T19:51:18.572412030+00:00 stderr F Trace[153696363]: [30.04035192s] [30.04035192s] END 2025-08-13T19:51:18.572482572+00:00 stderr F E0813 19:51:18.572458 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.663598296+00:00 stderr F W0813 19:51:18.663514 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:18.663703289+00:00 stderr F I0813 19:51:18.663686 1 trace.go:236] Trace[1936765059]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:50:48.640) (total time: 30023ms): 2025-08-13T19:51:18.663703289+00:00 stderr F Trace[1936765059]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30023ms (19:51:18.663) 2025-08-13T19:51:18.663703289+00:00 stderr F Trace[1936765059]: [30.02355184s] [30.02355184s] END 2025-08-13T19:51:18.663749820+00:00 stderr F E0813 19:51:18.663735 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.504049251+00:00 stderr F W0813 19:51:49.503980 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.504303129+00:00 stderr F I0813 19:51:49.504282 1 trace.go:236] Trace[1149040783]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:51:19.502) (total time: 30001ms): 2025-08-13T19:51:49.504303129+00:00 stderr F 
Trace[1149040783]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:51:49.503) 2025-08-13T19:51:49.504303129+00:00 stderr F Trace[1149040783]: [30.001813737s] [30.001813737s] END 2025-08-13T19:51:49.504357790+00:00 stderr F E0813 19:51:49.504343 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.604755611+00:00 stderr F W0813 19:51:49.604696 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.604995868+00:00 stderr F I0813 19:51:49.604972 1 trace.go:236] Trace[933594454]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:19.603) (total time: 30001ms): 2025-08-13T19:51:49.604995868+00:00 stderr F Trace[933594454]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:51:49.604) 2025-08-13T19:51:49.604995868+00:00 stderr F Trace[933594454]: [30.001502258s] [30.001502258s] END 2025-08-13T19:51:49.605047339+00:00 stderr F E0813 19:51:49.605033 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.608579750+00:00 stderr F W0813 19:51:49.608494 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.608737084+00:00 stderr F I0813 19:51:49.608587 1 trace.go:236] Trace[802142435]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:19.607) (total time: 30000ms): 2025-08-13T19:51:49.608737084+00:00 stderr F Trace[802142435]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (19:51:49.608) 2025-08-13T19:51:49.608737084+00:00 stderr F Trace[802142435]: [30.000833509s] [30.000833509s] END 2025-08-13T19:51:49.608737084+00:00 stderr F E0813 19:51:49.608606 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.955594646+00:00 stderr F W0813 19:51:49.955445 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:51:49.956119431+00:00 stderr F I0813 
19:51:49.955722 1 trace.go:236] Trace[375825009]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:19.952) (total time: 30003ms): 2025-08-13T19:51:49.956119431+00:00 stderr F Trace[375825009]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:51:49.955) 2025-08-13T19:51:49.956119431+00:00 stderr F Trace[375825009]: [30.003125253s] [30.003125253s] END 2025-08-13T19:51:49.956178603+00:00 stderr F E0813 19:51:49.956178 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:07.688377796+00:00 stderr F W0813 19:52:07.688204 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:52:07.688377796+00:00 stderr F I0813 19:52:07.688318 1 trace.go:236] Trace[1219952241]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:52.075) (total time: 15613ms): 2025-08-13T19:52:07.688377796+00:00 stderr F Trace[1219952241]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 15613ms (19:52:07.688) 2025-08-13T19:52:07.688377796+00:00 
stderr F Trace[1219952241]: [15.613260573s] [15.613260573s] END 2025-08-13T19:52:07.688473449+00:00 stderr F E0813 19:52:07.688372 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:52:08.200402353+00:00 stderr F W0813 19:52:08.200209 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:52:08.200402353+00:00 stderr F I0813 19:52:08.200319 1 trace.go:236] Trace[66411733]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:51:52.591) (total time: 15608ms): 2025-08-13T19:52:08.200402353+00:00 stderr F Trace[66411733]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 15608ms (19:52:08.200) 2025-08-13T19:52:08.200402353+00:00 stderr F Trace[66411733]: [15.608347813s] [15.608347813s] END 2025-08-13T19:52:08.200402353+00:00 stderr F E0813 19:52:08.200340 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:52:21.221752251+00:00 stderr F W0813 19:52:21.221643 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: 
failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:21.221752251+00:00 stderr F I0813 19:52:21.221724 1 trace.go:236] Trace[1453919014]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:51.217) (total time: 30004ms): 2025-08-13T19:52:21.221752251+00:00 stderr F Trace[1453919014]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30004ms (19:52:21.221) 2025-08-13T19:52:21.221752251+00:00 stderr F Trace[1453919014]: [30.004688526s] [30.004688526s] END 2025-08-13T19:52:21.221908706+00:00 stderr F E0813 19:52:21.221764 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:22.252234010+00:00 stderr F W0813 19:52:22.252150 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:22.252393324+00:00 stderr F I0813 19:52:22.252369 1 trace.go:236] Trace[1556460362]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:51:52.249) (total time: 30002ms): 2025-08-13T19:52:22.252393324+00:00 stderr F Trace[1556460362]: ---"Objects listed" error:Get 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:52:22.252) 2025-08-13T19:52:22.252393324+00:00 stderr F Trace[1556460362]: [30.002882453s] [30.002882453s] END 2025-08-13T19:52:22.252472317+00:00 stderr F E0813 19:52:22.252453 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:42.086000804+00:00 stderr F W0813 19:52:42.084352 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:42.086000804+00:00 stderr F I0813 19:52:42.084477 1 trace.go:236] Trace[1272804760]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:52:12.081) (total time: 30003ms): 2025-08-13T19:52:42.086000804+00:00 stderr F Trace[1272804760]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30003ms (19:52:42.084) 2025-08-13T19:52:42.086000804+00:00 stderr F Trace[1272804760]: [30.003320052s] [30.003320052s] END 2025-08-13T19:52:42.086000804+00:00 stderr F E0813 19:52:42.084495 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:42.671043855+00:00 stderr F W0813 19:52:42.670931 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:42.671043855+00:00 stderr F I0813 19:52:42.671022 1 trace.go:236] Trace[1619673043]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:52:12.669) (total time: 30001ms): 2025-08-13T19:52:42.671043855+00:00 stderr F Trace[1619673043]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:52:42.670) 2025-08-13T19:52:42.671043855+00:00 stderr F Trace[1619673043]: [30.001258406s] [30.001258406s] END 2025-08-13T19:52:42.671085227+00:00 stderr F E0813 19:52:42.671038 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:55.955644476+00:00 stderr F W0813 19:52:55.955445 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:55.955708578+00:00 stderr F I0813 19:52:55.955686 1 trace.go:236] Trace[1977466941]: "Reflector ListAndWatch" 
name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:52:25.954) (total time: 30001ms): 2025-08-13T19:52:55.955708578+00:00 stderr F Trace[1977466941]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:52:55.955) 2025-08-13T19:52:55.955708578+00:00 stderr F Trace[1977466941]: [30.001497346s] [30.001497346s] END 2025-08-13T19:52:55.955838651+00:00 stderr F E0813 19:52:55.955727 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:57.480925777+00:00 stderr F W0813 19:52:57.480755 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:52:57.480925777+00:00 stderr F I0813 19:52:57.480906 1 trace.go:236] Trace[1866283227]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:52:27.479) (total time: 30001ms): 2025-08-13T19:52:57.480925777+00:00 stderr F Trace[1866283227]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:52:57.480) 2025-08-13T19:52:57.480925777+00:00 stderr F Trace[1866283227]: [30.00181764s] [30.00181764s] END 2025-08-13T19:52:57.481038301+00:00 stderr F E0813 
19:52:57.480924 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:20.367755117+00:00 stderr F W0813 19:53:20.367425 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:20.368044136+00:00 stderr F I0813 19:53:20.368019 1 trace.go:236] Trace[1083300361]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:52:50.364) (total time: 30003ms): 2025-08-13T19:53:20.368044136+00:00 stderr F Trace[1083300361]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:53:20.367) 2025-08-13T19:53:20.368044136+00:00 stderr F Trace[1083300361]: [30.003506528s] [30.003506528s] END 2025-08-13T19:53:20.368124388+00:00 stderr F E0813 19:53:20.368101 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:21.438760162+00:00 stderr F W0813 19:53:21.438626 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:21.438955457+00:00 stderr F I0813 19:53:21.438756 1 trace.go:236] Trace[702819263]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:52:51.436) (total time: 30001ms): 2025-08-13T19:53:21.438955457+00:00 stderr F Trace[702819263]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:53:21.438) 2025-08-13T19:53:21.438955457+00:00 stderr F Trace[702819263]: [30.001816914s] [30.001816914s] END 2025-08-13T19:53:21.438955457+00:00 stderr F E0813 19:53:21.438873 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:34.333573367+00:00 stderr F W0813 19:53:34.333471 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:34.333968239+00:00 stderr F I0813 19:53:34.333752 1 trace.go:236] Trace[623327545]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:53:04.330) (total time: 30002ms): 2025-08-13T19:53:34.333968239+00:00 stderr F Trace[623327545]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms 
(19:53:34.333) 2025-08-13T19:53:34.333968239+00:00 stderr F Trace[623327545]: [30.002761313s] [30.002761313s] END 2025-08-13T19:53:34.334038991+00:00 stderr F E0813 19:53:34.334010 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:38.954408297+00:00 stderr F W0813 19:53:38.954273 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:53:38.954472968+00:00 stderr F I0813 19:53:38.954398 1 trace.go:236] Trace[32431997]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:53:08.953) (total time: 30001ms): 2025-08-13T19:53:38.954472968+00:00 stderr F Trace[32431997]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:53:38.954) 2025-08-13T19:53:38.954472968+00:00 stderr F Trace[32431997]: [30.001223625s] [30.001223625s] END 2025-08-13T19:53:38.954472968+00:00 stderr F E0813 19:53:38.954423 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:04.670979405+00:00 stderr F W0813 19:54:04.670474 
1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:04.670979405+00:00 stderr F I0813 19:54:04.670582 1 trace.go:236] Trace[1588578066]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:53:34.668) (total time: 30001ms): 2025-08-13T19:54:04.670979405+00:00 stderr F Trace[1588578066]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:54:04.670) 2025-08-13T19:54:04.670979405+00:00 stderr F Trace[1588578066]: [30.001681551s] [30.001681551s] END 2025-08-13T19:54:04.670979405+00:00 stderr F E0813 19:54:04.670604 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:06.844605739+00:00 stderr F W0813 19:54:06.844423 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:06.844650260+00:00 stderr F I0813 19:54:06.844638 1 trace.go:236] Trace[737086321]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:53:36.843) (total time: 30001ms): 2025-08-13T19:54:06.844650260+00:00 stderr F Trace[737086321]: ---"Objects listed" 
error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:54:06.844) 2025-08-13T19:54:06.844650260+00:00 stderr F Trace[737086321]: [30.001231928s] [30.001231928s] END 2025-08-13T19:54:06.844730882+00:00 stderr F E0813 19:54:06.844665 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:09.417087090+00:00 stderr F W0813 19:54:09.416716 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:54:09.417087090+00:00 stderr F E0813 19:54:09.416971 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:54:22.292448382+00:00 stderr F W0813 19:54:22.292119 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:54:22.292448382+00:00 stderr F I0813 19:54:22.292366 1 trace.go:236] Trace[1159168860]: "Reflector ListAndWatch" 
name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:53:52.288) (total time: 30003ms): 2025-08-13T19:54:22.292448382+00:00 stderr F Trace[1159168860]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30003ms (19:54:22.292) 2025-08-13T19:54:22.292448382+00:00 stderr F Trace[1159168860]: [30.003899291s] [30.003899291s] END 2025-08-13T19:54:22.292554615+00:00 stderr F E0813 19:54:22.292436 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:02.279342700+00:00 stderr F W0813 19:55:02.279209 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:02.279342700+00:00 stderr F I0813 19:55:02.279323 1 trace.go:236] Trace[316618279]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:54:32.277) (total time: 30002ms): 2025-08-13T19:55:02.279342700+00:00 stderr F Trace[316618279]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:55:02.279) 2025-08-13T19:55:02.279342700+00:00 stderr F Trace[316618279]: [30.002099667s] [30.002099667s] END 2025-08-13T19:55:02.279416412+00:00 stderr F E0813 
19:55:02.279339 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:04.185069915+00:00 stderr F W0813 19:55:04.184917 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:04.185069915+00:00 stderr F I0813 19:55:04.185050 1 trace.go:236] Trace[151776207]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:54:34.182) (total time: 30002ms): 2025-08-13T19:55:04.185069915+00:00 stderr F Trace[151776207]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:55:04.184) 2025-08-13T19:55:04.185069915+00:00 stderr F Trace[151776207]: [30.00292695s] [30.00292695s] END 2025-08-13T19:55:04.185110916+00:00 stderr F E0813 19:55:04.185068 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:23.981912405+00:00 stderr F W0813 19:55:23.981607 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 
2025-08-13T19:55:23.982536453+00:00 stderr F I0813 19:55:23.982465 1 trace.go:236] Trace[1215179368]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:54:53.979) (total time: 30003ms): 2025-08-13T19:55:23.982536453+00:00 stderr F Trace[1215179368]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:55:23.981) 2025-08-13T19:55:23.982536453+00:00 stderr F Trace[1215179368]: [30.003092165s] [30.003092165s] END 2025-08-13T19:55:23.982720378+00:00 stderr F E0813 19:55:23.982695 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:30.972367921+00:00 stderr F W0813 19:55:30.972184 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:30.972367921+00:00 stderr F I0813 19:55:30.972342 1 trace.go:236] Trace[1271936866]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:55:00.971) (total time: 30001ms): 2025-08-13T19:55:30.972367921+00:00 stderr F Trace[1271936866]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (19:55:30.972) 2025-08-13T19:55:30.972367921+00:00 
stderr F Trace[1271936866]: [30.001039224s] [30.001039224s] END 2025-08-13T19:55:30.972367921+00:00 stderr F E0813 19:55:30.972360 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:55:51.572445390+00:00 stderr F W0813 19:55:51.571947 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:55:51.572445390+00:00 stderr F E0813 19:55:51.572021 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-08-13T19:56:16.186532855+00:00 stderr F W0813 19:56:16.185956 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:16.186532855+00:00 stderr F I0813 19:56:16.186062 1 trace.go:236] Trace[1037743998]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:55:46.184) (total time: 30001ms): 2025-08-13T19:56:16.186532855+00:00 stderr F Trace[1037743998]: ---"Objects listed" error:Get 
"https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:16.185) 2025-08-13T19:56:16.186532855+00:00 stderr F Trace[1037743998]: [30.001968233s] [30.001968233s] END 2025-08-13T19:56:16.186532855+00:00 stderr F E0813 19:56:16.186078 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:51.014032237+00:00 stderr F W0813 19:56:51.013906 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:51.014345846+00:00 stderr F I0813 19:56:51.014263 1 trace.go:236] Trace[324623291]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:56:21.012) (total time: 30002ms): 2025-08-13T19:56:51.014345846+00:00 stderr F Trace[324623291]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (19:56:51.013) 2025-08-13T19:56:51.014345846+00:00 stderr F Trace[324623291]: [30.002111695s] [30.002111695s] END 2025-08-13T19:56:51.014450029+00:00 stderr F E0813 19:56:51.014402 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": 
dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:52.162970465+00:00 stderr F W0813 19:56:52.162515 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:52.162970465+00:00 stderr F I0813 19:56:52.162611 1 trace.go:236] Trace[633606038]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:56:22.160) (total time: 30002ms): 2025-08-13T19:56:52.162970465+00:00 stderr F Trace[633606038]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:56:52.162) 2025-08-13T19:56:52.162970465+00:00 stderr F Trace[633606038]: [30.002157687s] [30.002157687s] END 2025-08-13T19:56:52.162970465+00:00 stderr F E0813 19:56:52.162625 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:58.950133678+00:00 stderr F W0813 19:56:58.949498 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:56:58.950133678+00:00 stderr F I0813 19:56:58.949878 1 trace.go:236] Trace[1460267524]: "Reflector ListAndWatch" 
name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Aug-2025 19:56:28.947) (total time: 30002ms): 2025-08-13T19:56:58.950133678+00:00 stderr F Trace[1460267524]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (19:56:58.949) 2025-08-13T19:56:58.950133678+00:00 stderr F Trace[1460267524]: [30.0023144s] [30.0023144s] END 2025-08-13T19:56:58.950133678+00:00 stderr F E0813 19:56:58.949947 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-08-13T19:57:36.904398000+00:00 stderr F I0813 19:57:36.904118 1 trace.go:236] Trace[561210625]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Aug-2025 19:57:09.705) (total time: 27198ms): 2025-08-13T19:57:36.904398000+00:00 stderr F Trace[561210625]: ---"Objects listed" error: 27197ms (19:57:36.903) 2025-08-13T19:57:36.904398000+00:00 stderr F Trace[561210625]: [27.198137534s] [27.198137534s] END 2025-08-13T19:57:37.027895736+00:00 stderr F I0813 19:57:37.027722 1 api.go:65] Launching server on :22624 2025-08-13T19:57:37.032064235+00:00 stderr F I0813 19:57:37.030734 1 api.go:65] Launching server on :22623 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server/1.log
2025-12-13T00:11:03.605991355+00:00 stderr F I1213 00:11:03.605043 1 start.go:38] Version: v4.16.0-202406241749.p0.g9e4a1f5.assembly.stream.el9-dirty (9e4a1f5f4c7ef58082021ca40556c67f99062d0a) 2025-12-13T00:11:33.611281251+00:00 stderr F W1213 00:11:33.610908 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:11:33.611781235+00:00 stderr F W1213 00:11:33.611589 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:11:33.611857267+00:00 stderr F I1213 00:11:33.611817 1 trace.go:236] Trace[60327345]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Dec-2025 00:11:03.609) (total time: 30002ms): 2025-12-13T00:11:33.611857267+00:00 stderr F Trace[60327345]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (00:11:33.611) 2025-12-13T00:11:33.611857267+00:00 stderr F Trace[60327345]: [30.002307855s] [30.002307855s] END 2025-12-13T00:11:33.611891648+00:00 stderr F E1213 
00:11:33.611867 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:11:33.611891648+00:00 stderr F I1213 00:11:33.611872 1 trace.go:236] Trace[1306062620]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Dec-2025 00:11:03.608) (total time: 30002ms): 2025-12-13T00:11:33.611891648+00:00 stderr F Trace[1306062620]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:11:33.610) 2025-12-13T00:11:33.611891648+00:00 stderr F Trace[1306062620]: [30.002522341s] [30.002522341s] END 2025-12-13T00:11:33.611953510+00:00 stderr F E1213 00:11:33.611904 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:11:33.612026882+00:00 stderr F W1213 00:11:33.611808 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:11:33.612871787+00:00 stderr F I1213 00:11:33.612809 1 trace.go:236] Trace[2087126433]: "Reflector ListAndWatch" 
name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Dec-2025 00:11:03.608) (total time: 30003ms): 2025-12-13T00:11:33.612871787+00:00 stderr F Trace[2087126433]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (00:11:33.611) 2025-12-13T00:11:33.612871787+00:00 stderr F Trace[2087126433]: [30.003087417s] [30.003087417s] END 2025-12-13T00:11:33.612871787+00:00 stderr F E1213 00:11:33.612838 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:11:33.613143145+00:00 stderr F W1213 00:11:33.613017 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:11:33.613252378+00:00 stderr F I1213 00:11:33.613210 1 trace.go:236] Trace[1909852663]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Dec-2025 00:11:03.611) (total time: 30001ms): 2025-12-13T00:11:33.613252378+00:00 stderr F Trace[1909852663]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:11:33.613) 2025-12-13T00:11:33.613252378+00:00 stderr F Trace[1909852663]: [30.001488806s] [30.001488806s] END 2025-12-13T00:11:33.613306989+00:00 stderr F E1213 00:11:33.613280 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:04.594827000+00:00 stderr F W1213 00:12:04.594767 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:04.594961624+00:00 stderr F I1213 00:12:04.594920 1 trace.go:236] Trace[1857975968]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Dec-2025 00:11:34.593) (total time: 30001ms): 2025-12-13T00:12:04.594961624+00:00 stderr F Trace[1857975968]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:12:04.594) 2025-12-13T00:12:04.594961624+00:00 stderr F Trace[1857975968]: [30.001242932s] [30.001242932s] END 2025-12-13T00:12:04.595004135+00:00 stderr F E1213 00:12:04.594992 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:04.780675437+00:00 stderr F W1213 00:12:04.780545 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o 
timeout 2025-12-13T00:12:04.780675437+00:00 stderr F I1213 00:12:04.780630 1 trace.go:236] Trace[1527846789]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Dec-2025 00:11:34.779) (total time: 30000ms): 2025-12-13T00:12:04.780675437+00:00 stderr F Trace[1527846789]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:12:04.780) 2025-12-13T00:12:04.780675437+00:00 stderr F Trace[1527846789]: [30.000677011s] [30.000677011s] END 2025-12-13T00:12:04.780675437+00:00 stderr F E1213 00:12:04.780643 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:04.947808326+00:00 stderr F W1213 00:12:04.947736 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:04.948004462+00:00 stderr F I1213 00:12:04.947976 1 trace.go:236] Trace[1849654951]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Dec-2025 00:11:34.946) (total time: 30000ms): 2025-12-13T00:12:04.948004462+00:00 stderr F Trace[1849654951]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30000ms (00:12:04.947) 2025-12-13T00:12:04.948004462+00:00 stderr F Trace[1849654951]: [30.00090629s] [30.00090629s] END 2025-12-13T00:12:04.948122815+00:00 
stderr F E1213 00:12:04.948094 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:05.102407881+00:00 stderr F W1213 00:12:05.102289 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:05.102407881+00:00 stderr F I1213 00:12:05.102366 1 trace.go:236] Trace[33789605]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Dec-2025 00:11:35.100) (total time: 30001ms): 2025-12-13T00:12:05.102407881+00:00 stderr F Trace[33789605]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:12:05.102) 2025-12-13T00:12:05.102407881+00:00 stderr F Trace[33789605]: [30.001596098s] [30.001596098s] END 2025-12-13T00:12:05.102407881+00:00 stderr F E1213 00:12:05.102382 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:10.684173438+00:00 stderr F W1213 00:12:10.684072 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfig: Get 
"https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-12-13T00:12:10.684227369+00:00 stderr F E1213 00:12:10.684163 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfig: failed to list *v1.MachineConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: network is unreachable 2025-12-13T00:12:37.446279343+00:00 stderr F W1213 00:12:37.446180 1 reflector.go:539] k8s.io/client-go/informers/factory.go:159: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:37.446349595+00:00 stderr F I1213 00:12:37.446320 1 trace.go:236] Trace[725381715]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:159 (13-Dec-2025 00:12:07.443) (total time: 30002ms): 2025-12-13T00:12:37.446349595+00:00 stderr F Trace[725381715]: ---"Objects listed" error:Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30002ms (00:12:37.446) 2025-12-13T00:12:37.446349595+00:00 stderr F Trace[725381715]: [30.002247216s] [30.002247216s] END 2025-12-13T00:12:37.446387166+00:00 stderr F E1213 00:12:37.446361 1 reflector.go:147] k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://10.217.4.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:37.778173102+00:00 stderr F W1213 00:12:37.778071 1 reflector.go:539] 
github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:37.778173102+00:00 stderr F I1213 00:12:37.778159 1 trace.go:236] Trace[633650204]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Dec-2025 00:12:07.776) (total time: 30001ms): 2025-12-13T00:12:37.778173102+00:00 stderr F Trace[633650204]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:12:37.778) 2025-12-13T00:12:37.778173102+00:00 stderr F Trace[633650204]: [30.001437174s] [30.001437174s] END 2025-12-13T00:12:37.778245374+00:00 stderr F E1213 00:12:37.778180 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.ControllerConfig: failed to list *v1.ControllerConfig: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:38.078479040+00:00 stderr F W1213 00:12:38.078385 1 reflector.go:539] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 2025-12-13T00:12:38.078479040+00:00 stderr F I1213 00:12:38.078451 1 trace.go:236] Trace[975599471]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Dec-2025 00:12:08.076) (total time: 30001ms): 
2025-12-13T00:12:38.078479040+00:00 stderr F Trace[975599471]: ---"Objects listed" error:Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout 30001ms (00:12:38.078)
2025-12-13T00:12:38.078479040+00:00 stderr F Trace[975599471]: [30.001428144s] [30.001428144s] END
2025-12-13T00:12:38.078479040+00:00 stderr F E1213 00:12:38.078467 1 reflector.go:147] github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125: Failed to watch *v1.MachineConfigPool: failed to list *v1.MachineConfigPool: Get "https://10.217.4.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: i/o timeout
2025-12-13T00:12:41.381760869+00:00 stderr F I1213 00:12:41.381672 1 trace.go:236] Trace[1434280565]: "Reflector ListAndWatch" name:github.com/openshift/client-go/machineconfiguration/informers/externalversions/factory.go:125 (13-Dec-2025 00:12:16.147) (total time: 25233ms):
2025-12-13T00:12:41.381760869+00:00 stderr F Trace[1434280565]: ---"Objects listed" error: 25233ms (00:12:41.381)
2025-12-13T00:12:41.381760869+00:00 stderr F Trace[1434280565]: [25.233949493s] [25.233949493s] END
2025-12-13T00:12:43.809561845+00:00 stderr F I1213 00:12:43.809136 1 api.go:65] Launching server on :22624
2025-12-13T00:12:43.809805991+00:00 stderr F I1213 00:12:43.809220 1 api.go:65] Launching server on :22623
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/0.log
2025-08-13T19:51:02.025704074+00:00 stdout F 2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/route-override/rhel9/bin/ to /host/opt/cni/bin/upgrade_63e00e37-7601-412f-989f-3015d2849f1c
2025-08-13T19:51:02.044774749+00:00 stdout F 2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_63e00e37-7601-412f-989f-3015d2849f1c to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni/1.log
2025-12-13T00:11:06.611170646+00:00 stdout F 2025-12-13T00:11:06+00:00 [cnibincopy] Successfully copied files in /usr/src/route-override/rhel9/bin/ to /host/opt/cni/bin/upgrade_f52c04e6-0803-4d03-9576-0242e5f68f54
2025-12-13T00:11:06.618285700+00:00 stdout F 2025-12-13T00:11:06+00:00 [cnibincopy] Successfully moved files in
/host/opt/cni/bin/upgrade_f52c04e6-0803-4d03-9576-0242e5f68f54 to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/1.log
2025-12-13T00:11:05.036123652+00:00 stdout F 2025-12-13T00:11:05+00:00 [cnibincopy] Successfully copied files in /bondcni/rhel9/ to /host/opt/cni/bin/upgrade_a0c7ea2d-4887-4229-984d-7906edc5439e
2025-12-13T00:11:05.042732571+00:00 stdout F 2025-12-13T00:11:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a0c7ea2d-4887-4229-984d-7906edc5439e to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin/0.log
2025-08-13T19:50:59.775239797+00:00 stdout F 2025-08-13T19:50:59+00:00 [cnibincopy] Successfully copied files in /bondcni/rhel9/ to /host/opt/cni/bin/upgrade_b1bfe828-9c07-4461-84b8-c6d1eb367d18
2025-08-13T19:50:59.791945004+00:00
stdout F 2025-08-13T19:50:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b1bfe828-9c07-4461-84b8-c6d1eb367d18 to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/1.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins/0.log (empty)
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/0.log
2025-08-13T19:50:57.794478745+00:00 stdout F 2025-08-13T19:50:57+00:00 [cnibincopy] Successfully copied files in /usr/src/plugins/rhel9/bin/ to /host/opt/cni/bin/upgrade_d145407d-9046-4618-84a2-0bd3cab7b7ed
2025-08-13T19:50:57.820920950+00:00 stdout F 2025-08-13T19:50:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d145407d-9046-4618-84a2-0bd3cab7b7ed to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins/1.log
2025-12-13T00:11:04.516003976+00:00 stdout F 2025-12-13T00:11:04+00:00 [cnibincopy] Successfully copied files in /usr/src/plugins/rhel9/bin/ to /host/opt/cni/bin/upgrade_4c5bd6e4-c142-40de-9c51-85fded53b5f7
2025-12-13T00:11:04.526176483+00:00 stdout F 2025-12-13T00:11:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4c5bd6e4-c142-40de-9c51-85fded53b5f7 to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/0.log
2025-08-13T19:50:47.211892989+00:00 stdout F 2025-08-13T19:50:47+00:00 [cnibincopy] Successfully copied files in /usr/src/egress-router-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb608648-8ddd-4f33-bc1d-0991683cda60
2025-08-13T19:50:47.335048529+00:00 stdout F 2025-08-13T19:50:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb608648-8ddd-4f33-bc1d-0991683cda60 to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy/1.log
2025-12-13T00:11:03.484969186+00:00 stdout F 2025-12-13T00:11:03+00:00 [cnibincopy] Successfully copied files in /usr/src/egress-router-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_482a9e3d-1343-4bd9-b336-355a3aaa3738
2025-12-13T00:11:03.492161702+00:00 stdout F 2025-12-13T00:11:03+00:00 [cnibincopy] Successfully moved files in
/host/opt/cni/bin/upgrade_482a9e3d-1343-4bd9-b336-355a3aaa3738 to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/0.log
2025-08-13T19:51:11.553008882+00:00 stdout F Done configuring CNI. Sleep=false
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni/1.log
2025-12-13T00:11:08.063193027+00:00 stdout F Done configuring CNI.
Sleep=false
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/0.log
2025-08-13T19:51:10.079928701+00:00 stdout F 2025-08-13T19:51:10+00:00 [cnibincopy] Successfully copied files in /usr/src/whereabouts/rhel9/bin/ to /host/opt/cni/bin/upgrade_2927e247-a3e2-4e6c-9d4c-53a5b8439023
2025-08-13T19:51:10.090046880+00:00 stdout F 2025-08-13T19:51:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2927e247-a3e2-4e6c-9d4c-53a5b8439023 to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy/1.log
2025-12-13T00:11:07.222379386+00:00 stdout F 2025-12-13T00:11:07+00:00 [cnibincopy] Successfully copied files in /usr/src/whereabouts/rhel9/bin/ to /host/opt/cni/bin/upgrade_902428b0-bee1-45c0-b45b-9bae1c6403e4
2025-12-13T00:11:07.227835965+00:00 stdout F
2025-12-13T00:11:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_902428b0-bee1-45c0-b45b-9bae1c6403e4 to /host/opt/cni/bin/
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/0.log
2025-08-13T20:01:14.502314102+00:00 stdout F Copying system trust bundle
2025-08-13T20:01:15.583350346+00:00 stderr F I0813 20:01:15.577320 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key"
2025-08-13T20:01:15.583350346+00:00 stderr F I0813 20:01:15.579292 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair"
name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2025-08-13T20:01:16.424418308+00:00 stderr F I0813 20:01:16.424200 1 audit.go:340] Using audit backend: ignoreErrors 2025-08-13T20:01:18.860923183+00:00 stderr F I0813 20:01:18.827448 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-08-13T20:01:20.309308931+00:00 stderr F I0813 20:01:20.302331 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-08-13T20:01:20.309308931+00:00 stderr F I0813 20:01:20.302376 1 maxinflight.go:145] "Initialized mutatingChan" len=200 2025-08-13T20:01:20.309308931+00:00 stderr F I0813 20:01:20.302397 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-08-13T20:01:20.309308931+00:00 stderr F I0813 20:01:20.302403 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-08-13T20:01:20.554633356+00:00 stderr F I0813 20:01:20.533716 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-08-13T20:01:20.554633356+00:00 stderr F I0813 20:01:20.534993 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951690 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951755 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951914 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951936 1 shared_informer.go:311] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951962 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.951970 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:20.954588391+00:00 stderr F I0813 20:01:20.953060 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2025-08-13T20:01:20.958902234+00:00 stderr F I0813 20:01:20.956315 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-08-13 20:01:20.952733968 +0000 UTC))" 2025-08-13T20:01:20.958902234+00:00 stderr F I0813 20:01:20.956749 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-08-13 20:01:20.956695071 +0000 UTC))" 2025-08-13T20:01:20.965157162+00:00 stderr F 
I0813 20:01:20.960488 1 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2025-08-13T20:01:20.996439674+00:00 stderr F I0813 20:01:20.994375 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115276\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115275\" (2025-08-13 19:01:15 +0000 UTC to 2026-08-13 19:01:15 +0000 UTC (now=2025-08-13 20:01:20.994131168 +0000 UTC))" 2025-08-13T20:01:20.996439674+00:00 stderr F I0813 20:01:20.994452 1 secure_serving.go:213] Serving securely on [::]:6443 2025-08-13T20:01:20.996439674+00:00 stderr F I0813 20:01:20.994490 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2025-08-13T20:01:20.996439674+00:00 stderr F I0813 20:01:20.994509 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-08-13T20:01:20.999536142+00:00 stderr F I0813 20:01:20.997020 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:01:21.003034432+00:00 stderr F I0813 20:01:21.001406 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:01:21.017450403+00:00 stderr F I0813 20:01:21.017345 1 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:01:21.017450403+00:00 stderr F I0813 20:01:21.017361 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:01:21.094297944+00:00 stderr F I0813 20:01:21.094024 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-08-13T20:01:21.094297944+00:00 
stderr F I0813 20:01:21.094195 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-08-13T20:01:21.094297944+00:00 stderr F I0813 20:01:21.094252 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.094431 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.094400877 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.094820 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-08-13 20:01:21.094748977 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.095175 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-08-13 
20:01:21.095146758 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.095428 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115276\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115275\" (2025-08-13 19:01:15 +0000 UTC to 2026-08-13 19:01:15 +0000 UTC (now=2025-08-13 20:01:21.095416776 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.095927 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-08-13 20:01:21.095881159 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.095953 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-08-13 20:01:21.095938851 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096000 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:21.095980842 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096017 1 tlsconfig.go:178] 
"Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-08-13 20:01:21.096006323 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096034 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096022563 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096054 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096041484 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096070 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096058844 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096087 
1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096075395 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096137 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-08-13 20:01:21.096092525 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096164 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-08-13 20:01:21.096147627 +0000 UTC))" 2025-08-13T20:01:21.098100283+00:00 stderr F I0813 20:01:21.096183 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-08-13 20:01:21.096171578 +0000 UTC))" 2025-08-13T20:01:21.117871356+00:00 stderr F I0813 20:01:21.115991 1 tlsconfig.go:200] "Loaded 
serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-08-13 20:01:21.115950232 +0000 UTC))" 2025-08-13T20:01:21.117871356+00:00 stderr F I0813 20:01:21.116413 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-08-13 20:01:21.116373694 +0000 UTC))" 2025-08-13T20:01:21.118922666+00:00 stderr F I0813 20:01:21.118737 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1755115276\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1755115275\" (2025-08-13 19:01:15 +0000 UTC to 2026-08-13 19:01:15 +0000 UTC (now=2025-08-13 20:01:21.118719181 +0000 UTC))" 2025-08-13T20:06:02.194718284+00:00 stderr F I0813 20:06:02.193134 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:00.871143946+00:00 stderr F I0813 20:07:00.868133 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:07:06.442934193+00:00 stderr F I0813 20:07:06.442399 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:35.102717324+00:00 stderr F I0813 20:09:35.102399 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:36.885284120+00:00 stderr F I0813 20:09:36.885208 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:09:45.285875742+00:00 stderr F I0813 20:09:45.285584 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-08-13T20:42:36.418082203+00:00 stderr F I0813 20:42:36.411925 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418082203+00:00 stderr F I0813 20:42:36.415480 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418442384+00:00 stderr F I0813 20:42:36.418365 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.418907447+00:00 stderr F I0813 20:42:36.409926 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:40.919276434+00:00 stderr F I0813 20:42:40.918383 1 genericapiserver.go:541] "[graceful-termination] shutdown event" name="ShutdownInitiated" 2025-08-13T20:42:40.919544461+00:00 stderr F I0813 20:42:40.918425 1 genericapiserver.go:689] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped" 2025-08-13T20:42:40.919602463+00:00 stderr F I0813 20:42:40.919510 1 genericapiserver.go:696] [graceful-termination] RunPreShutdownHooks has completed 2025-08-13T20:42:40.925908115+00:00 stderr F I0813 20:42:40.921108 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' 
Received signal to terminate, becoming unready, but keeping serving 2025-08-13T20:42:40.927359407+00:00 stderr F I0813 20:42:40.927288 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished 2025-08-13T20:42:40.927359407+00:00 stderr F I0813 20:42:40.927331 1 genericapiserver.go:548] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration" 2025-08-13T20:42:40.927452619+00:00 stderr F I0813 20:42:40.927395 1 genericapiserver.go:612] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" 2025-08-13T20:42:40.927466860+00:00 stderr F I0813 20:42:40.927451 1 genericapiserver.go:647] "[graceful-termination] not going to wait for active watch request(s) to drain" 2025-08-13T20:42:40.928975223+00:00 stderr F I0813 20:42:40.928936 1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2025-08-13T20:42:40.928996954+00:00 stderr F I0813 20:42:40.928977 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening 2025-08-13T20:42:40.930019173+00:00 stderr F I0813 20:42:40.929742 1 genericapiserver.go:638] [graceful-termination] in-flight non long-running request(s) have drained 2025-08-13T20:42:40.930115756+00:00 stderr F I0813 20:42:40.930071 1 genericapiserver.go:679] "[graceful-termination] shutdown event" name="InFlightRequestsDrained" 2025-08-13T20:42:40.930115756+00:00 stderr F I0813 20:42:40.930094 1 genericapiserver.go:703] "[graceful-termination] audit backend 
shutdown completed" 2025-08-13T20:42:40.930115756+00:00 stderr F I0813 20:42:40.930104 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished 2025-08-13T20:42:40.930631431+00:00 stderr F I0813 20:42:40.930547 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-08-13T20:42:40.931352372+00:00 stderr F I0813 20:42:40.931206 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController 2025-08-13T20:42:40.931506586+00:00 stderr F I0813 20:42:40.931453 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-08-13T20:42:40.931543027+00:00 stderr F I0813 20:42:40.931469 1 secure_serving.go:258] Stopped listening on [::]:6443 2025-08-13T20:42:40.931606989+00:00 stderr F I0813 20:42:40.931589 1 genericapiserver.go:595] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" 2025-08-13T20:42:40.931656681+00:00 stderr F I0813 20:42:40.931644 1 genericapiserver.go:711] [graceful-termination] apiserver is exiting 2025-08-13T20:42:40.931714812+00:00 stderr F I0813 20:42:40.931688 1 genericapiserver.go:1057] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"default", Name:"kube-system", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationGracefulTerminationFinished' All pending requests processed 2025-08-13T20:42:40.932471814+00:00 stderr F I0813 20:42:40.932398 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController" 2025-08-13T20:42:40.932471814+00:00 stderr F I0813 20:42:40.932425 1 dynamic_serving_content.go:146] "Shutting down controller" 
name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2025-08-13T20:42:40.933570996+00:00 stderr F I0813 20:42:40.933454 1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift/1.log
2025-12-13T00:13:15.211131280+00:00 stdout F Copying system trust bundle 2025-12-13T00:13:16.732092378+00:00 stderr F I1213 00:13:16.730818 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2025-12-13T00:13:16.732092378+00:00 stderr F I1213 00:13:16.731860 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2025-12-13T00:13:17.437583294+00:00 stderr F I1213 00:13:17.437508 1 audit.go:340] Using audit backend: ignoreErrors 2025-12-13T00:13:17.486233898+00:00 stderr F I1213 00:13:17.486187 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController 2025-12-13T00:13:17.507763712+00:00 stderr F I1213 00:13:17.507716 1 maxinflight.go:139] "Initialized nonMutatingChan" len=400 2025-12-13T00:13:17.507763712+00:00 stderr F I1213 00:13:17.507741 1 maxinflight.go:145] "Initialized mutatingChan" len=200 
2025-12-13T00:13:17.507763712+00:00 stderr F I1213 00:13:17.507753 1 maxinflight.go:116] "Set denominator for readonly requests" limit=400 2025-12-13T00:13:17.507763712+00:00 stderr F I1213 00:13:17.507759 1 maxinflight.go:120] "Set denominator for mutating requests" limit=200 2025-12-13T00:13:17.513981321+00:00 stderr F I1213 00:13:17.512546 1 secure_serving.go:57] Forcing use of http/1.1 only 2025-12-13T00:13:17.513981321+00:00 stderr F I1213 00:13:17.513957 1 genericapiserver.go:528] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete 2025-12-13T00:13:17.530724373+00:00 stderr F I1213 00:13:17.530637 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-12-13 00:13:17.524017387 +0000 UTC))" 2025-12-13T00:13:17.532517023+00:00 stderr F I1213 00:13:17.531038 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-12-13 00:13:17.531000132 +0000 UTC))" 2025-12-13T00:13:17.532517023+00:00 stderr F I1213 00:13:17.531103 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 2025-12-13T00:13:17.532517023+00:00 stderr F I1213 00:13:17.531134 1 shared_informer.go:311] Waiting for 
caches to sync for RequestHeaderAuthRequestController 2025-12-13T00:13:17.532517023+00:00 stderr F I1213 00:13:17.531171 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" 2025-12-13T00:13:17.532517023+00:00 stderr F I1213 00:13:17.531182 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:17.532517023+00:00 stderr F I1213 00:13:17.531218 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 2025-12-13T00:13:17.532517023+00:00 stderr F I1213 00:13:17.531224 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:17.532517023+00:00 stderr F I1213 00:13:17.532113 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" 2025-12-13T00:13:17.533772526+00:00 stderr F I1213 00:13:17.533707 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:13:17.533372322 +0000 UTC))" 2025-12-13T00:13:17.535671700+00:00 stderr F I1213 00:13:17.533840 1 secure_serving.go:213] Serving securely on [::]:6443 2025-12-13T00:13:17.535671700+00:00 stderr F I1213 00:13:17.533866 1 genericapiserver.go:681] [graceful-termination] waiting for shutdown to be initiated 2025-12-13T00:13:17.535671700+00:00 stderr F I1213 00:13:17.533890 1 dynamic_serving_content.go:132] "Starting controller" 
name="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" 2025-12-13T00:13:17.535671700+00:00 stderr F I1213 00:13:17.535463 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" 2025-12-13T00:13:17.572043802+00:00 stderr F W1213 00:13:17.568676 1 reflector.go:539] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2025-12-13T00:13:17.572043802+00:00 stderr F E1213 00:13:17.568712 1 reflector.go:147] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: Failed to watch *v1.Group: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2025-12-13T00:13:17.572043802+00:00 stderr F I1213 00:13:17.569081 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:17.572043802+00:00 stderr F I1213 00:13:17.569566 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:17.572043802+00:00 stderr F I1213 00:13:17.571240 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:13:17.634146499+00:00 stderr F I1213 00:13:17.634086 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController 2025-12-13T00:13:17.634212571+00:00 stderr F I1213 00:13:17.634183 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 2025-12-13T00:13:17.634269543+00:00 stderr F I1213 00:13:17.634206 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 2025-12-13T00:13:17.634649525+00:00 stderr F I1213 00:13:17.634631 1 
tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:13:17.634601744 +0000 UTC))" 2025-12-13T00:13:17.634692527+00:00 stderr F I1213 00:13:17.634681 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:13:17.634667296 +0000 UTC))" 2025-12-13T00:13:17.634740388+00:00 stderr F I1213 00:13:17.634730 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:17.634715247 +0000 UTC))" 2025-12-13T00:13:17.634776640+00:00 stderr F I1213 00:13:17.634766 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:17.634754369 +0000 UTC))" 2025-12-13T00:13:17.634811581+00:00 stderr F I1213 00:13:17.634802 1 tlsconfig.go:178] "Loaded client CA" 
index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:17.63479005 +0000 UTC))" 2025-12-13T00:13:17.634846472+00:00 stderr F I1213 00:13:17.634837 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:17.634825271 +0000 UTC))" 2025-12-13T00:13:17.634881053+00:00 stderr F I1213 00:13:17.634871 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:17.634859922 +0000 UTC))" 2025-12-13T00:13:17.634944755+00:00 stderr F I1213 00:13:17.634905 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:17.634893993 +0000 UTC))" 2025-12-13T00:13:17.634987587+00:00 stderr F I1213 00:13:17.634977 1 tlsconfig.go:178] 
"Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:13:17.634964606 +0000 UTC))" 2025-12-13T00:13:17.635022378+00:00 stderr F I1213 00:13:17.635013 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:13:17.635002377 +0000 UTC))" 2025-12-13T00:13:17.635340348+00:00 stderr F I1213 00:13:17.635327 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-12-13 00:13:17.635311577 +0000 UTC))" 2025-12-13T00:13:17.635616157+00:00 stderr F I1213 00:13:17.635605 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC 
(now=2025-12-13 00:13:17.635591457 +0000 UTC))" 2025-12-13T00:13:17.635892727+00:00 stderr F I1213 00:13:17.635881 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:13:17.635868896 +0000 UTC))" 2025-12-13T00:13:17.636059862+00:00 stderr F I1213 00:13:17.636047 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:13:17.636034351 +0000 UTC))" 2025-12-13T00:13:17.636101574+00:00 stderr F I1213 00:13:17.636091 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:13:17.636080133 +0000 UTC))" 2025-12-13T00:13:17.636137875+00:00 stderr F I1213 00:13:17.636127 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:17.636115524 +0000 UTC))" 2025-12-13T00:13:17.636173226+00:00 stderr F I1213 00:13:17.636163 1 
tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:13:17.636151325 +0000 UTC))" 2025-12-13T00:13:17.636207397+00:00 stderr F I1213 00:13:17.636198 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:17.636186446 +0000 UTC))" 2025-12-13T00:13:17.636248459+00:00 stderr F I1213 00:13:17.636239 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:17.636227298 +0000 UTC))" 2025-12-13T00:13:17.636303671+00:00 stderr F I1213 00:13:17.636292 1 tlsconfig.go:178] "Loaded client CA" index=6 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:17.63626781 +0000 UTC))" 2025-12-13T00:13:17.636356763+00:00 stderr F 
I1213 00:13:17.636331 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:17.636318072 +0000 UTC))" 2025-12-13T00:13:17.636393374+00:00 stderr F I1213 00:13:17.636384 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:13:17.636371024 +0000 UTC))" 2025-12-13T00:13:17.636428455+00:00 stderr F I1213 00:13:17.636419 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:13:17.636407915 +0000 UTC))" 2025-12-13T00:13:17.636463527+00:00 stderr F I1213 00:13:17.636454 1 tlsconfig.go:178] "Loaded client CA" index=10 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:13:17.636442566 +0000 UTC))" 2025-12-13T00:13:17.636744636+00:00 stderr F I1213 00:13:17.636733 1 
tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-12-13 00:13:17.636719345 +0000 UTC))" 2025-12-13T00:13:17.637032126+00:00 stderr F I1213 00:13:17.637020 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-12-13 00:13:17.637006605 +0000 UTC))" 2025-12-13T00:13:17.637316175+00:00 stderr F I1213 00:13:17.637299 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:13:17.637287444 +0000 UTC))" 2025-12-13T00:13:18.854661520+00:00 stderr F W1213 00:13:18.854575 1 reflector.go:539] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2025-12-13T00:13:18.854661520+00:00 stderr F E1213 00:13:18.854606 1 reflector.go:147] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: Failed to watch *v1.Group: failed to list *v1.Group: the server is currently unable to 
handle the request (get groups.user.openshift.io) 2025-12-13T00:13:21.006684554+00:00 stderr F W1213 00:13:21.006639 1 reflector.go:539] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2025-12-13T00:13:21.006758316+00:00 stderr F E1213 00:13:21.006746 1 reflector.go:147] k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229: Failed to watch *v1.Group: failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io) 2025-12-13T00:13:25.915163660+00:00 stderr F I1213 00:13:25.915103 1 reflector.go:351] Caches populated for *v1.Group from k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229 2025-12-13T00:19:37.570175844+00:00 stderr F I1213 00:19:37.569586 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:03 +0000 UTC to 2034-06-24 12:35:03 +0000 UTC (now=2025-12-13 00:19:37.569528046 +0000 UTC))" 2025-12-13T00:19:37.570175844+00:00 stderr F I1213 00:19:37.569637 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-bootstrap-kubeconfig-signer\" [] issuer=\"\" (2024-06-26 12:35:04 +0000 UTC to 2034-06-24 12:35:04 +0000 UTC (now=2025-12-13 00:19:37.569614419 +0000 UTC))" 2025-12-13T00:19:37.570175844+00:00 stderr F I1213 00:19:37.569672 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" 
certDetail="\"kube-csr-signer_@1719493520\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.56964581 +0000 UTC))" 2025-12-13T00:19:37.570175844+00:00 stderr F I1213 00:19:37.569703 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1719493520\" [] issuer=\"\" (2024-06-27 13:05:19 +0000 UTC to 2026-06-27 13:05:20 +0000 UTC (now=2025-12-13 00:19:37.569680571 +0000 UTC))" 2025-12-13T00:19:37.570175844+00:00 stderr F I1213 00:19:37.569734 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-apiserver-to-kubelet-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.569710721 +0000 UTC))" 2025-12-13T00:19:37.570175844+00:00 stderr F I1213 00:19:37.569805 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.569780443 +0000 UTC))" 2025-12-13T00:19:37.570175844+00:00 stderr F I1213 00:19:37.569836 1 tlsconfig.go:178] "Loaded client CA" index=6 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_kube-control-plane-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2027-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.569813594 +0000 UTC))" 2025-12-13T00:19:37.570175844+00:00 stderr F I1213 00:19:37.569884 1 tlsconfig.go:178] "Loaded client CA" index=7 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_node-system-admin-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.569843345 +0000 UTC))" 2025-12-13T00:19:37.570175844+00:00 stderr F I1213 00:19:37.569916 1 tlsconfig.go:178] "Loaded client CA" index=8 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" [] issuer=\"\" (2025-08-13 19:59:53 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.569892956 +0000 UTC))" 2025-12-13T00:19:37.570175844+00:00 stderr F I1213 00:19:37.569973 1 tlsconfig.go:178] "Loaded client CA" index=9 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer-custom\" [] issuer=\"\" (2025-08-13 20:00:41 +0000 UTC to 2035-08-11 20:00:41 +0000 UTC (now=2025-12-13 00:19:37.569929017 +0000 UTC))" 2025-12-13T00:19:37.570175844+00:00 stderr F I1213 00:19:37.570010 1 tlsconfig.go:178] "Loaded client CA" index=10 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer_@1765585168\" [] issuer=\"openshift-kube-controller-manager-operator_csr-signer-signer@1755115189\" (2025-12-13 00:19:27 +0000 UTC to 2027-08-13 19:59:54 +0000 UTC (now=2025-12-13 00:19:37.569984279 +0000 UTC))" 2025-12-13T00:19:37.570175844+00:00 stderr F I1213 00:19:37.570045 1 tlsconfig.go:178] "Loaded client CA" index=11 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"openshift-kube-apiserver-operator_aggregator-client-signer@1755114566\" [] issuer=\"\" (2025-08-13 19:49:25 +0000 UTC to 2026-08-13 19:49:26 +0000 UTC (now=2025-12-13 00:19:37.57002007 +0000 UTC))" 2025-12-13T00:19:37.573604468+00:00 stderr F I1213 00:19:37.570550 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key" certDetail="\"oauth-openshift.openshift-authentication.svc\" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer=\"openshift-service-serving-signer@1719406026\" (2025-08-13 20:00:06 +0000 UTC to 2027-08-13 20:00:07 +0000 UTC (now=2025-12-13 00:19:37.570515873 +0000 UTC))" 2025-12-13T00:19:37.573604468+00:00 stderr F I1213 00:19:37.571095 1 named_certificates.go:53] "Loaded SNI cert" index=1 certName="sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing::/var/config/system/secrets/v4-0-config-system-router-certs/apps-crc.testing" certDetail="\"*.apps-crc.testing\" [serving] validServingFor=[*.apps-crc.testing] issuer=\"ingress-operator@1719406118\" (2024-06-26 12:48:39 +0000 
UTC to 2026-06-26 12:48:40 +0000 UTC (now=2025-12-13 00:19:37.571059388 +0000 UTC))" 2025-12-13T00:19:37.573604468+00:00 stderr F I1213 00:19:37.571614 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1765584797\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1765584797\" (2025-12-12 23:13:16 +0000 UTC to 2026-12-12 23:13:16 +0000 UTC (now=2025-12-13 00:19:37.571588923 +0000 UTC))"

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log

2025-08-13T20:04:17.711019301+00:00 stderr F I0813 20:04:17.710517 1 leaderelection.go:122] The leader election gives 4
retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:04:17.731650389+00:00 stderr F I0813 20:04:17.731502 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:04:17.741661665+00:00 stderr F W0813 20:04:17.741251 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:17.742158909+00:00 stderr F W0813 20:04:17.741915 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:17.750083895+00:00 stderr F E0813 20:04:17.749981 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:17.750083895+00:00 stderr F E0813 20:04:17.749982 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.584086456+00:00 stderr F W0813 20:04:18.583537 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list 
*v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.584086456+00:00 stderr F E0813 20:04:18.583991 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.647018632+00:00 stderr F W0813 20:04:18.646930 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:18.647018632+00:00 stderr F E0813 20:04:18.646976 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.350624461+00:00 stderr F W0813 20:04:20.346192 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.350624461+00:00 stderr F E0813 20:04:20.346495 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 
10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.358060563+00:00 stderr F W0813 20:04:20.357964 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:20.358214727+00:00 stderr F E0813 20:04:20.358149 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.939148562+00:00 stderr F W0813 20:04:23.938654 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:23.939262655+00:00 stderr F E0813 20:04:23.939234 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:25.261915080+00:00 stderr F W0813 20:04:25.260621 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:25.261915080+00:00 stderr F E0813 20:04:25.260694 1 reflector.go:147] 
github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.435909533+00:00 stderr F W0813 20:04:36.434632 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.435909533+00:00 stderr F E0813 20:04:36.435360 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.668995607+00:00 stderr F W0813 20:04:36.660927 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:36.668995607+00:00 stderr F E0813 20:04:36.668906 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:50.885207502+00:00 stderr F W0813 20:04:50.884304 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get 
"https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:50.918401032+00:00 stderr F E0813 20:04:50.918193 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:52.839595078+00:00 stderr F W0813 20:04:52.839119 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:04:52.839595078+00:00 stderr F E0813 20:04:52.839544 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:17.762563304+00:00 stderr F F0813 20:05:17.755149 1 main.go:175] timed out waiting for FeatureGate detection

home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/3.log

2025-12-13T00:13:16.164024229+00:00 stderr F I1213 00:13:16.162642 1 leaderelection.go:122] The leader election
gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-12-13T00:13:16.183400950+00:00 stderr F I1213 00:13:16.182154 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:13:16.225061510+00:00 stderr F I1213 00:13:16.224518 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:13:16.226260960+00:00 stderr F I1213 00:13:16.225542 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-12-13T00:13:16.230512853+00:00 stderr F I1213 00:13:16.228261 1 recorder_logging.go:44] &Event{ObjectMeta:{dummy.18809e0487e28d57 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:FeatureGatesInitialized,Message:FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", 
"DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}},Source:EventSource{Component:,Host:,},FirstTimestamp:2025-12-13 00:13:16.226063703 +0000 UTC m=+0.973661447,LastTimestamp:2025-12-13 00:13:16.226063703 +0000 UTC m=+0.973661447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} 2025-12-13T00:13:16.230512853+00:00 stderr F I1213 00:13:16.228375 1 main.go:173] FeatureGates initialized: [AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal 
ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-12-13T00:13:16.230569205+00:00 stderr F I1213 00:13:16.230513 1 webhook.go:173] "msg"="skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called" "GVK"={"Group":"machine.openshift.io","Version":"v1","Kind":"ControlPlaneMachineSet"} "logger"="controller-runtime.builder" 2025-12-13T00:13:16.230957978+00:00 stderr F I1213 00:13:16.230612 1 webhook.go:189] "msg"="Registering a validating webhook" "GVK"={"Group":"machine.openshift.io","Version":"v1","Kind":"ControlPlaneMachineSet"} "logger"="controller-runtime.builder" "path"="/validate-machine-openshift-io-v1-controlplanemachineset" 2025-12-13T00:13:16.230957978+00:00 stderr F I1213 00:13:16.230749 1 server.go:183] "msg"="Registering webhook" "logger"="controller-runtime.webhook" "path"="/validate-machine-openshift-io-v1-controlplanemachineset" 2025-12-13T00:13:16.230957978+00:00 stderr F I1213 00:13:16.230844 1 main.go:228] "msg"="starting manager" "logger"="setup" 
2025-12-13T00:13:16.232989537+00:00 stderr F I1213 00:13:16.232652 1 server.go:185] "msg"="Starting metrics server" "logger"="controller-runtime.metrics" 2025-12-13T00:13:16.232989537+00:00 stderr F I1213 00:13:16.232782 1 server.go:224] "msg"="Serving metrics server" "bindAddress"=":8080" "logger"="controller-runtime.metrics" "secure"=false 2025-12-13T00:13:16.236979630+00:00 stderr F I1213 00:13:16.234054 1 server.go:50] "msg"="starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" 2025-12-13T00:13:16.236979630+00:00 stderr F I1213 00:13:16.234130 1 server.go:191] "msg"="Starting webhook server" "logger"="controller-runtime.webhook" 2025-12-13T00:13:16.236979630+00:00 stderr F I1213 00:13:16.234509 1 certwatcher.go:161] "msg"="Updated current TLS certificate" "logger"="controller-runtime.certwatcher" 2025-12-13T00:13:16.236979630+00:00 stderr F I1213 00:13:16.234858 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/control-plane-machine-set-leader... 
2025-12-13T00:13:16.236979630+00:00 stderr F I1213 00:13:16.235194 1 certwatcher.go:115] "msg"="Starting certificate watcher" "logger"="controller-runtime.certwatcher" 2025-12-13T00:13:16.250001218+00:00 stderr F I1213 00:13:16.246000 1 server.go:242] "msg"="Serving webhook server" "host"="" "logger"="controller-runtime.webhook" "port"=9443 2025-12-13T00:16:02.552209540+00:00 stderr F I1213 00:16:02.551599 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/control-plane-machine-set-leader 2025-12-13T00:16:02.552384465+00:00 stderr F I1213 00:16:02.552345 1 recorder.go:104] "msg"="control-plane-machine-set-operator-649bd778b4-tt5tw_b859865e-a680-47a9-bb47-a0be1c4334d0 became leader" "logger"="events" "object"={"kind":"Lease","namespace":"openshift-machine-api","name":"control-plane-machine-set-leader","uid":"04d6c6f9-cb98-4c35-8cde-538add77d9ad","apiVersion":"coordination.k8s.io/v1","resourceVersion":"41610"} "reason"="LeaderElection" "type"="Normal" 2025-12-13T00:16:02.553080266+00:00 stderr F I1213 00:16:02.552550 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachinesetgenerator" "source"="kind source: *v1.ControlPlaneMachineSet" 2025-12-13T00:16:02.553080266+00:00 stderr F I1213 00:16:02.552587 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachinesetgenerator" "source"="kind source: *v1beta1.Machine" 2025-12-13T00:16:02.553080266+00:00 stderr F I1213 00:16:02.552605 1 controller.go:186] "msg"="Starting Controller" "controller"="controlplanemachinesetgenerator" 2025-12-13T00:16:02.553080266+00:00 stderr F I1213 00:16:02.552618 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.ControlPlaneMachineSet" 2025-12-13T00:16:02.553080266+00:00 stderr F I1213 00:16:02.552647 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1beta1.Machine" 
2025-12-13T00:16:02.553080266+00:00 stderr F I1213 00:16:02.552660 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.Node" 2025-12-13T00:16:02.553080266+00:00 stderr F I1213 00:16:02.552674 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.ClusterOperator" 2025-12-13T00:16:02.553080266+00:00 stderr F I1213 00:16:02.552688 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.Infrastructure" 2025-12-13T00:16:02.553080266+00:00 stderr F I1213 00:16:02.552730 1 controller.go:186] "msg"="Starting Controller" "controller"="controlplanemachineset" 2025-12-13T00:16:02.580964911+00:00 stderr F I1213 00:16:02.580553 1 reflector.go:351] Caches populated for *v1.Node from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:16:02.582173557+00:00 stderr F I1213 00:16:02.582121 1 reflector.go:351] Caches populated for *v1beta1.Machine from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:16:02.584650800+00:00 stderr F I1213 00:16:02.584597 1 reflector.go:351] Caches populated for *v1.ControlPlaneMachineSet from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:16:02.589443242+00:00 stderr F I1213 00:16:02.589381 1 reflector.go:351] Caches populated for *v1.Infrastructure from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:16:02.593743559+00:00 stderr F I1213 00:16:02.593683 1 reflector.go:351] Caches populated for *v1.ClusterOperator from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-12-13T00:16:02.685357881+00:00 stderr F I1213 00:16:02.685296 1 watch_filters.go:179] reconcile triggered by infrastructure change 2025-12-13T00:16:02.688743771+00:00 stderr F I1213 00:16:02.688711 1 controller.go:220] "msg"="Starting workers" 
"controller"="controlplanemachinesetgenerator" "worker count"=1 2025-12-13T00:16:02.691019278+00:00 stderr F I1213 00:16:02.690997 1 controller.go:220] "msg"="Starting workers" "controller"="controlplanemachineset" "worker count"=1 2025-12-13T00:16:02.691086440+00:00 stderr F I1213 00:16:02.691075 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="90fd9525-80de-448d-beef-3efe53fc993f" 2025-12-13T00:16:02.691466841+00:00 stderr F I1213 00:16:02.691452 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="90fd9525-80de-448d-beef-3efe53fc993f" 2025-12-13T00:16:02.691539854+00:00 stderr F I1213 00:16:02.691524 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="90fd9525-80de-448d-beef-3efe53fc993f" 2025-12-13T00:20:49.397245567+00:00 stderr F E1213 00:20:49.396720 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused 2025-12-13T00:21:15.399209311+00:00 stderr F E1213 00:21:15.398721 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/2.log
2025-08-13T20:05:35.452608365+00:00 stderr F I0813 20:05:35.448700 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}. 2025-08-13T20:05:35.502237766+00:00 stderr F I0813 20:05:35.502054 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:05:35.689005154+00:00 stderr F I0813 20:05:35.688074 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:05:35.735570748+00:00 stderr F I0813 20:05:35.734903 1 recorder_logging.go:44] &Event{ObjectMeta:{dummy.185b6c47dd59a765 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:FeatureGatesInitialized,Message:FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", 
"VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}},Source:EventSource{Component:,Host:,},FirstTimestamp:2025-08-13 20:05:35.703058277 +0000 UTC m=+1.630097491,LastTimestamp:2025-08-13 20:05:35.703058277 +0000 UTC m=+1.630097491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} 2025-08-13T20:05:35.735570748+00:00 stderr F I0813 20:05:35.735070 1 main.go:173] FeatureGates initialized: [AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureWorkloadIdentity BareMetalLoadBalancer BuildCSIVolumes CSIDriverSharedResource ChunkSizeMiB CloudDualStackNodeIPs ClusterAPIInstall ClusterAPIInstallAWS ClusterAPIInstallAzure ClusterAPIInstallGCP ClusterAPIInstallIBMCloud 
ClusterAPIInstallNutanix ClusterAPIInstallOpenStack ClusterAPIInstallPowerVS ClusterAPIInstallVSphere DNSNameResolver DisableKubeletCloudCredentialProviders DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example ExternalCloudProvider ExternalCloudProviderAzure ExternalCloudProviderExternal ExternalCloudProviderGCP ExternalOIDC ExternalRouteCertificate GCPClusterHostedDNS GCPLabelsTags GatewayAPI HardwareSpeed ImagePolicy InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather InstallAlternateInfrastructureAWS KMSv1 MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MachineConfigNodes ManagedBootImages MaxUnavailableStatefulSet MetricsCollectionProfiles MetricsServer MixedCPUsAllocation NetworkDiagnosticsConfig NetworkLiveMigration NewOLM NodeDisruptionPolicy NodeSwap OnClusterBuild OpenShiftPodSecurityAdmission PinnedImages PlatformOperators PrivateHostedZoneAWS RouteExternalCertificate ServiceAccountTokenNodeBinding ServiceAccountTokenNodeBindingValidation ServiceAccountTokenPodNodeInfo SignatureStores SigstoreImageVerification TranslateStreamCloseWebsocketRequests UpgradeStatus VSphereControlPlaneMachineSet VSphereDriverConfiguration VSphereMultiVCenters VSphereStaticIPs ValidatingAdmissionPolicy VolumeGroupSnapshot] 2025-08-13T20:05:35.745880803+00:00 stderr F I0813 20:05:35.743278 1 webhook.go:173] "msg"="skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called" "GVK"={"Group":"machine.openshift.io","Version":"v1","Kind":"ControlPlaneMachineSet"} "logger"="controller-runtime.builder" 2025-08-13T20:05:35.745880803+00:00 stderr F I0813 20:05:35.743496 1 webhook.go:189] "msg"="Registering a validating webhook" "GVK"={"Group":"machine.openshift.io","Version":"v1","Kind":"ControlPlaneMachineSet"} "logger"="controller-runtime.builder" "path"="/validate-machine-openshift-io-v1-controlplanemachineset" 2025-08-13T20:05:35.745880803+00:00 stderr F I0813 20:05:35.743655 1 
server.go:183] "msg"="Registering webhook" "logger"="controller-runtime.webhook" "path"="/validate-machine-openshift-io-v1-controlplanemachineset" 2025-08-13T20:05:35.745880803+00:00 stderr F I0813 20:05:35.743716 1 main.go:228] "msg"="starting manager" "logger"="setup" 2025-08-13T20:05:35.808901448+00:00 stderr F I0813 20:05:35.807974 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.809405 1 server.go:185] "msg"="Starting metrics server" "logger"="controller-runtime.metrics" 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.809583 1 server.go:224] "msg"="Serving metrics server" "bindAddress"=":8080" "logger"="controller-runtime.metrics" "secure"=false 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.812095 1 server.go:50] "msg"="starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.812187 1 server.go:191] "msg"="Starting webhook server" "logger"="controller-runtime.webhook" 2025-08-13T20:05:35.815684252+00:00 stderr F I0813 20:05:35.813033 1 leaderelection.go:250] attempting to acquire leader lease openshift-machine-api/control-plane-machine-set-leader... 
2025-08-13T20:05:35.820718106+00:00 stderr F I0813 20:05:35.820343 1 certwatcher.go:161] "msg"="Updated current TLS certificate" "logger"="controller-runtime.certwatcher" 2025-08-13T20:05:35.820718106+00:00 stderr F I0813 20:05:35.820531 1 server.go:242] "msg"="Serving webhook server" "host"="" "logger"="controller-runtime.webhook" "port"=9443 2025-08-13T20:05:35.820741327+00:00 stderr F I0813 20:05:35.820719 1 certwatcher.go:115] "msg"="Starting certificate watcher" "logger"="controller-runtime.certwatcher" 2025-08-13T20:08:08.092392625+00:00 stderr F I0813 20:08:08.090438 1 leaderelection.go:260] successfully acquired lease openshift-machine-api/control-plane-machine-set-leader 2025-08-13T20:08:08.094648189+00:00 stderr F I0813 20:08:08.094304 1 recorder.go:104] "msg"="control-plane-machine-set-operator-649bd778b4-tt5tw_d3b7e6b8-9166-459d-a6c8-d99794b50433 became leader" "logger"="events" "object"={"kind":"Lease","namespace":"openshift-machine-api","name":"control-plane-machine-set-leader","uid":"04d6c6f9-cb98-4c35-8cde-538add77d9ad","apiVersion":"coordination.k8s.io/v1","resourceVersion":"32821"} "reason"="LeaderElection" "type"="Normal" 2025-08-13T20:08:08.102699080+00:00 stderr F I0813 20:08:08.102529 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.ControlPlaneMachineSet" 2025-08-13T20:08:08.102699080+00:00 stderr F I0813 20:08:08.102605 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachinesetgenerator" "source"="kind source: *v1.ControlPlaneMachineSet" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103345 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachinesetgenerator" "source"="kind source: *v1beta1.Machine" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103378 1 controller.go:186] "msg"="Starting Controller" "controller"="controlplanemachinesetgenerator" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 
20:08:08.103747 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1beta1.Machine" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103844 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.Node" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103922 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.ClusterOperator" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103944 1 controller.go:178] "msg"="Starting EventSource" "controller"="controlplanemachineset" "source"="kind source: *v1.Infrastructure" 2025-08-13T20:08:08.104515132+00:00 stderr F I0813 20:08:08.103964 1 controller.go:186] "msg"="Starting Controller" "controller"="controlplanemachineset" 2025-08-13T20:08:08.201578905+00:00 stderr F I0813 20:08:08.201332 1 reflector.go:351] Caches populated for *v1beta1.Machine from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.201701399+00:00 stderr F I0813 20:08:08.201676 1 reflector.go:351] Caches populated for *v1.Infrastructure from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.209287466+00:00 stderr F I0813 20:08:08.208649 1 reflector.go:351] Caches populated for *v1.ClusterOperator from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.222844255+00:00 stderr F I0813 20:08:08.222401 1 reflector.go:351] Caches populated for *v1.Node from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.231268937+00:00 stderr F I0813 20:08:08.231171 1 reflector.go:351] Caches populated for *v1.ControlPlaneMachineSet from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:08:08.282215947+00:00 stderr F I0813 20:08:08.280096 1 watch_filters.go:179] reconcile triggered by 
infrastructure change 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.328518 1 controller.go:220] "msg"="Starting workers" "controller"="controlplanemachineset" "worker count"=1 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.330982 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="35d200ba-a81a-41d2-9a64-b3749882f3b1" 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.333281 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="35d200ba-a81a-41d2-9a64-b3749882f3b1" 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.333431 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="35d200ba-a81a-41d2-9a64-b3749882f3b1" 2025-08-13T20:08:08.335102994+00:00 stderr F I0813 20:08:08.333528 1 controller.go:220] "msg"="Starting workers" "controller"="controlplanemachinesetgenerator" "worker count"=1 2025-08-13T20:08:34.116134166+00:00 stderr F E0813 20:08:34.115389 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:09:00.119765093+00:00 stderr F E0813 20:09:00.118762 1 leaderelection.go:332] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: Get "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused 
2025-08-13T20:09:43.292954294+00:00 stderr F I0813 20:09:43.291277 1 reflector.go:351] Caches populated for *v1.FeatureGate from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:09:50.954091206+00:00 stderr F I0813 20:09:50.953177 1 reflector.go:351] Caches populated for *v1.Infrastructure from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:50.954306271+00:00 stderr F I0813 20:09:50.953764 1 watch_filters.go:179] reconcile triggered by infrastructure change 2025-08-13T20:09:50.954429834+00:00 stderr F I0813 20:09:50.954375 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="13720113-1d0c-466f-b496-4b482b402cda" 2025-08-13T20:09:50.954588299+00:00 stderr F I0813 20:09:50.954555 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="13720113-1d0c-466f-b496-4b482b402cda" 2025-08-13T20:09:50.954711542+00:00 stderr F I0813 20:09:50.954681 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="13720113-1d0c-466f-b496-4b482b402cda" 2025-08-13T20:09:52.483540696+00:00 stderr F I0813 20:09:52.483360 1 reflector.go:351] Caches populated for *v1.ClusterOperator from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:09:52.484613777+00:00 stderr F I0813 20:09:52.484079 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="daab30e3-71d1-45b0-9c1e-41cc640cb2f9" 2025-08-13T20:09:52.484613777+00:00 stderr F I0813 20:09:52.484208 1 controller.go:177] "msg"="No 
control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="daab30e3-71d1-45b0-9c1e-41cc640cb2f9" 2025-08-13T20:09:52.484613777+00:00 stderr F I0813 20:09:52.484248 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="daab30e3-71d1-45b0-9c1e-41cc640cb2f9" 2025-08-13T20:09:58.740448876+00:00 stderr F I0813 20:09:58.739092 1 reflector.go:351] Caches populated for *v1beta1.Machine from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:10:25.144246163+00:00 stderr F I0813 20:10:25.143425 1 reflector.go:351] Caches populated for *v1.ClusterVersion from github.com/openshift/client-go/config/informers/externalversions/factory.go:125 2025-08-13T20:10:39.931663151+00:00 stderr F I0813 20:10:39.929697 1 reflector.go:351] Caches populated for *v1.Node from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:10:45.189277050+00:00 stderr F I0813 20:10:45.188639 1 reflector.go:351] Caches populated for *v1.ControlPlaneMachineSet from sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105 2025-08-13T20:37:36.184186726+00:00 stderr F I0813 20:37:36.183258 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="70949f2c-5ba7-46b4-9945-1570ad727541" 2025-08-13T20:37:36.191103225+00:00 stderr F I0813 20:37:36.190959 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="70949f2c-5ba7-46b4-9945-1570ad727541" 2025-08-13T20:37:36.195214644+00:00 stderr F I0813 20:37:36.195112 1 controller.go:183] "msg"="Finished 
reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="70949f2c-5ba7-46b4-9945-1570ad727541" 2025-08-13T20:41:15.217404928+00:00 stderr F I0813 20:41:15.216565 1 watch_filters.go:179] reconcile triggered by infrastructure change 2025-08-13T20:41:15.217873911+00:00 stderr F I0813 20:41:15.217737 1 controller.go:169] "msg"="Reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="c9fe446c-1028-4ba4-bcb4-ec131907f1ad" 2025-08-13T20:41:15.218243532+00:00 stderr F I0813 20:41:15.218176 1 controller.go:177] "msg"="No control plane machine set found, setting operator status available" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="c9fe446c-1028-4ba4-bcb4-ec131907f1ad" 2025-08-13T20:41:15.218480519+00:00 stderr F I0813 20:41:15.218389 1 controller.go:183] "msg"="Finished reconciling control plane machine set" "controller"="controlplanemachineset" "name"="cluster" "namespace"="openshift-machine-api" "reconcileID"="c9fe446c-1028-4ba4-bcb4-ec131907f1ad" 2025-08-13T20:42:36.410736622+00:00 stderr F I0813 20:42:36.392587 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.410736622+00:00 stderr F I0813 20:42:36.387065 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.423466448+00:00 stderr F I0813 20:42:36.393743 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.423466448+00:00 stderr F I0813 20:42:36.387287 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.435931618+00:00 stderr F I0813 20:42:36.432591 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 
2025-08-13T20:42:36.435931618+00:00 stderr F I0813 20:42:36.386855 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:36.437936926+00:00 stderr F I0813 20:42:36.393763 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF 2025-08-13T20:42:39.121052261+00:00 stderr F I0813 20:42:39.120327 1 internal.go:516] "msg"="Stopping and waiting for non leader election runnables" 2025-08-13T20:42:39.121052261+00:00 stderr F I0813 20:42:39.121041 1 internal.go:520] "msg"="Stopping and waiting for leader election runnables" 2025-08-13T20:42:39.123198083+00:00 stderr F I0813 20:42:39.123122 1 controller.go:240] "msg"="Shutdown signal received, waiting for all workers to finish" "controller"="controlplanemachineset" 2025-08-13T20:42:39.123198083+00:00 stderr F I0813 20:42:39.123186 1 controller.go:242] "msg"="All workers finished" "controller"="controlplanemachineset" 2025-08-13T20:42:39.124597463+00:00 stderr F I0813 20:42:39.124511 1 controller.go:240] "msg"="Shutdown signal received, waiting for all workers to finish" "controller"="controlplanemachinesetgenerator" 2025-08-13T20:42:39.124651265+00:00 stderr F I0813 20:42:39.124614 1 controller.go:242] "msg"="All workers finished" "controller"="controlplanemachinesetgenerator" 2025-08-13T20:42:39.124716827+00:00 stderr F I0813 20:42:39.124673 1 internal.go:526] "msg"="Stopping and waiting for caches" 2025-08-13T20:42:39.129012041+00:00 stderr F I0813 20:42:39.128957 1 internal.go:530] "msg"="Stopping and waiting for webhooks" 2025-08-13T20:42:39.129362521+00:00 stderr F I0813 20:42:39.129333 1 server.go:249] "msg"="Shutting down webhook server with timeout of 1 minute" "logger"="controller-runtime.webhook" 2025-08-13T20:42:39.129548196+00:00 stderr F I0813 20:42:39.129529 1 internal.go:533] "msg"="Stopping and waiting for HTTP servers" 2025-08-13T20:42:39.129749002+00:00 stderr F I0813 20:42:39.129657 1 server.go:231] "msg"="Shutting down 
metrics server with timeout of 1 minute" "logger"="controller-runtime.metrics" 2025-08-13T20:42:39.130267357+00:00 stderr F I0813 20:42:39.130159 1 server.go:43] "msg"="shutting down server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" 2025-08-13T20:42:39.130490133+00:00 stderr F I0813 20:42:39.130410 1 internal.go:537] "msg"="Wait completed, proceeding to shutdown the manager" 2025-08-13T20:42:39.136122996+00:00 stderr F E0813 20:42:39.135999 1 leaderelection.go:308] Failed to release lock: Put "https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader": dial tcp 10.217.4.1:443: connect: connection refused
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/1.log
2025-12-13T00:13:16.215704315+00:00 stderr F W1213 00:13:16.215378 1 deprecated.go:66] 2025-12-13T00:13:16.215704315+00:00 stderr F ==== Removed Flag Warning ====================== 2025-12-13T00:13:16.215704315+00:00 stderr F 2025-12-13T00:13:16.215704315+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-12-13T00:13:16.215704315+00:00 stderr F 2025-12-13T00:13:16.215704315+00:00 stderr F =============================================== 2025-12-13T00:13:16.215704315+00:00 stderr F 2025-12-13T00:13:16.216285925+00:00 stderr F I1213 00:13:16.216270 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-12-13T00:13:16.216346447+00:00 stderr F I1213 00:13:16.216336 1 kube-rbac-proxy.go:347] Reading certificate files 2025-12-13T00:13:16.217748354+00:00 stderr F I1213 00:13:16.217700 1 kube-rbac-proxy.go:395] Starting TCP socket on :9393 2025-12-13T00:13:16.225833976+00:00 stderr F I1213 00:13:16.219031 1 kube-rbac-proxy.go:402] Listening securely on :9393
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy/0.log
2025-08-13T19:59:39.171318469+00:00 stderr F W0813 19:59:39.169690 1 deprecated.go:66] 2025-08-13T19:59:39.171318469+00:00 stderr F ==== Removed Flag Warning ====================== 2025-08-13T19:59:39.171318469+00:00 stderr F 2025-08-13T19:59:39.171318469+00:00 stderr F logtostderr is removed in the k8s upstream and has no effect any more. 2025-08-13T19:59:39.171318469+00:00 stderr F 2025-08-13T19:59:39.171318469+00:00 stderr F =============================================== 2025-08-13T19:59:39.171318469+00:00 stderr F 2025-08-13T19:59:39.189878528+00:00 stderr F I0813 19:59:39.189460 1 kube-rbac-proxy.go:233] Valid token audiences: 2025-08-13T19:59:39.189878528+00:00 stderr F I0813 19:59:39.189516 1 kube-rbac-proxy.go:347] Reading certificate files 2025-08-13T19:59:39.255153229+00:00 stderr F I0813 19:59:39.253915 1 kube-rbac-proxy.go:395] Starting TCP socket on :9393 2025-08-13T19:59:39.255217230+00:00 stderr F I0813 19:59:39.255157 1 kube-rbac-proxy.go:402] Listening securely on :9393 2025-08-13T20:42:42.013922763+00:00 stderr F I0813 20:42:42.013004 1 kube-rbac-proxy.go:493] received interrupt, shutting down
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/5.log
2025-12-13T00:15:36.280898427+00:00 stderr F 2025-12-13T00:15:36.280Z INFO operator.main ingress-operator/start.go:64 using operator namespace {"namespace": "openshift-ingress-operator"} 2025-12-13T00:15:36.312196114+00:00 stderr F 2025-12-13T00:15:36.312Z INFO operator.main ingress-operator/start.go:64 registering Prometheus 
metrics for canary_controller 2025-12-13T00:15:36.312196114+00:00 stderr F 2025-12-13T00:15:36.312Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for ingress_controller 2025-12-13T00:15:36.312553404+00:00 stderr F 2025-12-13T00:15:36.312Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for route_metrics_controller 2025-12-13T00:15:36.312756370+00:00 stderr F 2025-12-13T00:15:36.312Z INFO operator.main ingress-operator/start.go:64 watching file {"filename": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"} 2025-12-13T00:15:36.313540273+00:00 stderr F 2025-12-13T00:15:36.313Z INFO operator.init runtime/asm_amd64.s:1650 starting metrics listener {"addr": "127.0.0.1:60000"} 2025-12-13T00:15:36.314562914+00:00 stderr F I1213 00:15:36.314489 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:15:36.323815757+00:00 stderr F I1213 00:15:36.322377 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-ingress-operator", Name:"ingress-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", 
"ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-12-13T00:15:36.323815757+00:00 stderr F 2025-12-13T00:15:36.322Z INFO operator.init ingress-operator/start.go:198 FeatureGates initialized {"knownFeatures": 
["AdminNetworkPolicy","AlibabaPlatform","AutomatedEtcdBackup","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CSIDriverSharedResource","ChunkSizeMiB","CloudDualStackNodeIPs","ClusterAPIInstall","ClusterAPIInstallAWS","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallPowerVS","ClusterAPIInstallVSphere","DNSNameResolver","DisableKubeletCloudCredentialProviders","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","HardwareSpeed","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","KMSv1","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MetricsServer","MixedCPUsAllocation","NetworkDiagnosticsConfig","NetworkLiveMigration","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","PrivateHostedZoneAWS","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereMultiVCenters","VSphereStaticIPs","ValidatingAdmissionPolicy","VolumeGroupSnapshot"]} 2025-12-13T00:15:36.324162057+00:00 stderr F I1213 00:15:36.324074 1 base_controller.go:67] Waiting for caches to sync for spread-default-router-pods 2025-12-13T00:15:36.364045438+00:00 stderr F 2025-12-13T00:15:36.363Z INFO operator.init.controller-runtime.metrics 
manager/runnable_group.go:223 Starting metrics server 2025-12-13T00:15:36.364286965+00:00 stderr F 2025-12-13T00:15:36.364Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Serving metrics server {"bindAddress": ":8080", "secure": false} 2025-12-13T00:15:36.424565329+00:00 stderr F I1213 00:15:36.424465 1 base_controller.go:73] Caches are synced for spread-default-router-pods 2025-12-13T00:15:36.424565329+00:00 stderr F I1213 00:15:36.424528 1 base_controller.go:110] Starting #1 worker of spread-default-router-pods controller ... 2025-12-13T00:15:36.465219363+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:15:36.465219363+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:15:36.465219363+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-12-13T00:15:36.465219363+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-12-13T00:15:36.465219363+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-12-13T00:15:36.465219363+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-12-13T00:15:36.465219363+00:00 stderr F 
2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "clientca_configmap_controller"} 2025-12-13T00:15:36.465219363+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "error_page_configmap_controller"} 2025-12-13T00:15:36.465334657+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Ingress"} 2025-12-13T00:15:36.465334657+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Role"} 2025-12-13T00:15:36.465334657+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.RoleBinding"} 2025-12-13T00:15:36.465366338+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "configurable_route_controller"} 2025-12-13T00:15:36.465366338+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "informer source: 0xc00070c558"} 2025-12-13T00:15:36.465454380+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:15:36.465512852+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:15:36.465529332+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 
Starting Controller {"controller": "certificate_controller"} 2025-12-13T00:15:36.465566813+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-12-13T00:15:36.465595314+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "status_controller"} 2025-12-13T00:15:36.465608335+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:15:36.465620645+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_publisher_controller"} 2025-12-13T00:15:36.465713078+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNSRecord"} 2025-12-13T00:15:36.465713078+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-12-13T00:15:36.465713078+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Infrastructure"} 2025-12-13T00:15:36.465713078+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Secret"} 2025-12-13T00:15:36.465730968+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "dns_controller"} 2025-12-13T00:15:36.465797290+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init 
controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:15:36.465797290+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:15:36.465812581+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.Route"} 2025-12-13T00:15:36.465812581+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressClass"} 2025-12-13T00:15:36.465812581+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "canary_controller"} 2025-12-13T00:15:36.465836091+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingressclass_controller"} 2025-12-13T00:15:36.465836091+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.ConfigMap"} 2025-12-13T00:15:36.465852962+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.Infrastructure"} 2025-12-13T00:15:36.465866352+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "monitoring_dashboard_controller"} 2025-12-13T00:15:36.465879163+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "gatewayapi_controller", "source": "kind 
source: *v1.FeatureGate"} 2025-12-13T00:15:36.465892053+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "gatewayapi_controller"} 2025-12-13T00:15:36.465970885+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:15:36.465970885+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Deployment"} 2025-12-13T00:15:36.465994196+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Service"} 2025-12-13T00:15:36.465994196+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc00070ca30"} 2025-12-13T00:15:36.465994196+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Pod"} 2025-12-13T00:15:36.466041057+00:00 stderr F 2025-12-13T00:15:36.465Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNS"} 2025-12-13T00:15:36.466041057+00:00 stderr F 2025-12-13T00:15:36.466Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNSRecord"} 2025-12-13T00:15:36.466082948+00:00 stderr F 2025-12-13T00:15:36.466Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Ingress"} 2025-12-13T00:15:36.466082948+00:00 stderr F 2025-12-13T00:15:36.466Z INFO 
operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Proxy"} 2025-12-13T00:15:36.466097599+00:00 stderr F 2025-12-13T00:15:36.466Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingress_controller"} 2025-12-13T00:15:36.466236123+00:00 stderr F 2025-12-13T00:15:36.466Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc00070ca30"} 2025-12-13T00:15:36.466413238+00:00 stderr F 2025-12-13T00:15:36.466Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "kind source: *v1.IngressController"} 2025-12-13T00:15:36.466413238+00:00 stderr F 2025-12-13T00:15:36.466Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "crl"} 2025-12-13T00:15:36.466475920+00:00 stderr F 2025-12-13T00:15:36.466Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:15:36.466490850+00:00 stderr F 2025-12-13T00:15:36.466Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.Route"} 2025-12-13T00:15:36.466535792+00:00 stderr F 2025-12-13T00:15:36.466Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "route_metrics_controller"} 2025-12-13T00:15:36.469132019+00:00 stderr F 2025-12-13T00:15:36.469Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-12-13T00:15:36.481559166+00:00 stderr F 2025-12-13T00:15:36.481Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "crl", "worker count": 1} 2025-12-13T00:15:36.485641727+00:00 stderr F 2025-12-13T00:15:36.485Z INFO operator.ingress_controller 
handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-12-13T00:15:36.566216883+00:00 stderr F 2025-12-13T00:15:36.566Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "clientca_configmap_controller", "worker count": 1} 2025-12-13T00:15:36.575539168+00:00 stderr F 2025-12-13T00:15:36.575Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_publisher_controller", "worker count": 1} 2025-12-13T00:15:36.575606000+00:00 stderr F 2025-12-13T00:15:36.575Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-12-13T00:15:36.575756905+00:00 stderr F 2025-12-13T00:15:36.575Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:15:36.581083612+00:00 stderr F 2025-12-13T00:15:36.576Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "route_metrics_controller", "worker count": 1} 2025-12-13T00:15:36.581083612+00:00 stderr F 2025-12-13T00:15:36.576Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:15:36.581083612+00:00 stderr F 2025-12-13T00:15:36.576Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-12-13T00:15:36.581083612+00:00 stderr F 2025-12-13T00:15:36.576Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-12-13T00:15:36.581083612+00:00 stderr F 2025-12-13T00:15:36.576Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-12-13T00:15:36.581083612+00:00 stderr F 2025-12-13T00:15:36.576Z INFO 
operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-12-13T00:15:36.581083612+00:00 stderr F 2025-12-13T00:15:36.576Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-12-13T00:15:36.581083612+00:00 stderr F 2025-12-13T00:15:36.576Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-12-13T00:15:36.675750665+00:00 stderr F 2025-12-13T00:15:36.675Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "error_page_configmap_controller", "worker count": 1} 2025-12-13T00:15:36.675800486+00:00 stderr F 2025-12-13T00:15:36.675Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_controller", "worker count": 1} 2025-12-13T00:15:36.676233189+00:00 stderr F 2025-12-13T00:15:36.675Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:15:36.782437682+00:00 stderr F 2025-12-13T00:15:36.782Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:15:36.794178460+00:00 stderr F 2025-12-13T00:15:36.794Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingressclass_controller", "worker count": 1} 2025-12-13T00:15:36.794364785+00:00 stderr F 2025-12-13T00:15:36.794Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:15:36.877534717+00:00 stderr F 2025-12-13T00:15:36.877Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "gatewayapi_controller", "worker count": 1} 2025-12-13T00:15:36.877813375+00:00 stderr F 
2025-12-13T00:15:36.877Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-12-13T00:15:36.877974320+00:00 stderr F 2025-12-13T00:15:36.877Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "monitoring_dashboard_controller", "worker count": 1} 2025-12-13T00:15:36.978216357+00:00 stderr F 2025-12-13T00:15:36.978Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "status_controller", "worker count": 1} 2025-12-13T00:15:36.978258569+00:00 stderr F 2025-12-13T00:15:36.978Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "dns_controller", "worker count": 1} 2025-12-13T00:15:36.978354831+00:00 stderr F 2025-12-13T00:15:36.978Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "configurable_route_controller", "worker count": 1} 2025-12-13T00:15:36.978354831+00:00 stderr F 2025-12-13T00:15:36.978Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-12-13T00:15:36.978410443+00:00 stderr F 2025-12-13T00:15:36.978Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:15:36.978410443+00:00 stderr F 2025-12-13T00:15:36.978Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-12-13T00:15:36.996110187+00:00 stderr F 2025-12-13T00:15:36.995Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-12-13T00:15:36.996110187+00:00 stderr F 2025-12-13T00:15:36.996Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "canary_controller", "worker count": 1} 2025-12-13T00:15:36.996110187+00:00 stderr F 2025-12-13T00:15:36.996Z INFO operator.init controller/controller.go:234 
Starting workers {"controller": "ingress_controller", "worker count": 1} 2025-12-13T00:15:36.996158598+00:00 stderr F 2025-12-13T00:15:36.996Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:20:36.543368877+00:00 stderr F 2025-12-13T00:20:36.542Z ERROR operator.init wait/backoff.go:226 failed to fetch ingress config {"error": "Get \"https://10.217.4.1:443/apis/config.openshift.io/v1/ingresses/cluster\": dial tcp 10.217.4.1:443: connect: connection refused"} 2025-12-13T00:20:37.162973576+00:00 stderr F 2025-12-13T00:20:37.162Z ERROR operator.canary_controller wait/backoff.go:226 failed to get current canary route for canary check {"error": "Get \"https://10.217.4.1:443/apis/route.openshift.io/v1/namespaces/openshift-ingress-canary/routes/canary\": dial tcp 10.217.4.1:443: connect: connection refused"} 2025-12-13T00:21:46.158648962+00:00 stderr F 2025-12-13T00:21:46.158Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-12-13T00:21:46.158740864+00:00 stderr F 2025-12-13T00:21:46.158Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-12-13T00:21:46.158740864+00:00 stderr F 2025-12-13T00:21:46.158Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-12-13T00:21:46.158815586+00:00 stderr F 2025-12-13T00:21:46.158Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:21:49.552993633+00:00 stderr F 2025-12-13T00:21:49.551Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:21:49.552993633+00:00 stderr F 
2025-12-13T00:21:49.552Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:21:49.553509266+00:00 stderr F 2025-12-13T00:21:49.552Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:21:49.553817614+00:00 stderr F 2025-12-13T00:21:49.553Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:21:49.554261965+00:00 stderr F 2025-12-13T00:21:49.552Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} ././@LongLink0000644000000000000000000000030700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/3.loghome/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-oper0000644000175000017500000007117515117130647033107 0ustar zuulzuul2025-08-13T20:08:00.875382117+00:00 stderr F 2025-08-13T20:08:00.873Z INFO operator.main ingress-operator/start.go:64 using operator namespace {"namespace": "openshift-ingress-operator"} 2025-08-13T20:08:00.953061394+00:00 stderr F 2025-08-13T20:08:00.952Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for canary_controller 2025-08-13T20:08:00.953362393+00:00 stderr F 2025-08-13T20:08:00.953Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for ingress_controller 2025-08-13T20:08:00.953473236+00:00 stderr F 2025-08-13T20:08:00.953Z INFO operator.init runtime/asm_amd64.s:1650 starting metrics listener {"addr": "127.0.0.1:60000"} 
2025-08-13T20:08:00.953535928+00:00 stderr F 2025-08-13T20:08:00.953Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for route_metrics_controller
2025-08-13T20:08:00.953745004+00:00 stderr F 2025-08-13T20:08:00.953Z INFO operator.main ingress-operator/start.go:64 watching file {"filename": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"}
2025-08-13T20:08:00.998500257+00:00 stderr F I0813 20:08:00.997142 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
2025-08-13T20:08:01.006195638+00:00 stderr F 2025-08-13T20:08:01.006Z INFO operator.init ingress-operator/start.go:198 FeatureGates initialized {"knownFeatures": ["AdminNetworkPolicy","AlibabaPlatform","AutomatedEtcdBackup","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CSIDriverSharedResource","ChunkSizeMiB","CloudDualStackNodeIPs","ClusterAPIInstall","ClusterAPIInstallAWS","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallPowerVS","ClusterAPIInstallVSphere","DNSNameResolver","DisableKubeletCloudCredentialProviders","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","HardwareSpeed","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","KMSv1","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MetricsServer","MixedCPUsAllocation","NetworkDiagnosticsConfig","NetworkLiveMigration","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","PrivateHostedZoneAWS","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereMultiVCenters","VSphereStaticIPs","ValidatingAdmissionPolicy","VolumeGroupSnapshot"]}
2025-08-13T20:08:01.009092641+00:00 stderr F I0813 20:08:01.008085 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-ingress-operator", Name:"ingress-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}}
2025-08-13T20:08:01.013531228+00:00 stderr F I0813 20:08:01.013036 1 base_controller.go:67] Waiting for caches to sync for spread-default-router-pods
2025-08-13T20:08:01.086373496+00:00 stderr F 2025-08-13T20:08:01.085Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Starting metrics server
2025-08-13T20:08:01.086373496+00:00 stderr F 2025-08-13T20:08:01.085Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Serving metrics server {"bindAddress": ":8080", "secure": false}
2025-08-13T20:08:01.116419978+00:00 stderr F I0813 20:08:01.115027 1 base_controller.go:73] Caches are synced for spread-default-router-pods
2025-08-13T20:08:01.116419978+00:00 stderr F I0813 20:08:01.115134 1 base_controller.go:110] Starting #1 worker of spread-default-router-pods controller ...
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.IngressController"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "status_controller"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_controller", "source": "kind source: *v1.IngressController"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_controller"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Ingress"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Role"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.RoleBinding"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "configurable_route_controller"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_controller", "worker count": 1}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.IngressController"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.428Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "clientca_configmap_controller"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.IngressController"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "error_page_configmap_controller"}
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting
EventSource {"controller": "certificate_publisher_controller", "source": "informer source: 0xc00007fc90"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_publisher_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNSRecord"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Infrastructure"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Secret"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "dns_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.429Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc0010b4270"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.426Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.IngressController"} 
2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc0010b4270"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Deployment"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Service"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Pod"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNS"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "crl"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNSRecord"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Ingress"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource 
{"controller": "ingress_controller", "source": "kind source: *v1.Proxy"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingress_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.Route"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.430Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "canary_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressClass"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingressclass_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.Infrastructure"} 2025-08-13T20:08:01.435208218+00:00 stderr F 
2025-08-13T20:08:01.431Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "monitoring_dashboard_controller"} 2025-08-13T20:08:01.435208218+00:00 stderr F 2025-08-13T20:08:01.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:08:01.435208218+00:00 stderr P 2025-08-13T20:08:01.434Z INFO operator.init controller/controller.go:234 Starting EventSour 2025-08-13T20:08:01.435319931+00:00 stderr F ce {"controller": "route_metrics_controller", "source": "kind source: *v1.Route"} 2025-08-13T20:08:01.435319931+00:00 stderr F 2025-08-13T20:08:01.434Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "route_metrics_controller"} 2025-08-13T20:08:01.435319931+00:00 stderr F 2025-08-13T20:08:01.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "gatewayapi_controller", "source": "kind source: *v1.FeatureGate"} 2025-08-13T20:08:01.435319931+00:00 stderr F 2025-08-13T20:08:01.434Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "gatewayapi_controller"} 2025-08-13T20:08:01.435319931+00:00 stderr F 2025-08-13T20:08:01.434Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.498055630+00:00 stderr F 2025-08-13T20:08:01.497Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": 
"default"} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.567591913+00:00 stderr F 2025-08-13T20:08:01.565Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.569202870+00:00 stderr F 2025-08-13T20:08:01.569Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:08:01.569414636+00:00 stderr F 2025-08-13T20:08:01.569Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:08:01.582231593+00:00 stderr F 2025-08-13T20:08:01.582Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "route_metrics_controller", "worker count": 1} 2025-08-13T20:08:01.582393098+00:00 stderr F 2025-08-13T20:08:01.582Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.601005591+00:00 stderr F 2025-08-13T20:08:01.598Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:08:01.651241102+00:00 stderr F 2025-08-13T20:08:01.651Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_publisher_controller", "worker count": 1} 2025-08-13T20:08:01.651241102+00:00 stderr F 2025-08-13T20:08:01.651Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": 
{"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.754147262+00:00 stderr F 2025-08-13T20:08:01.754Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "error_page_configmap_controller", "worker count": 1} 2025-08-13T20:08:01.754242945+00:00 stderr F 2025-08-13T20:08:01.754Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "crl", "worker count": 1} 2025-08-13T20:08:01.770681766+00:00 stderr F 2025-08-13T20:08:01.770Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "clientca_configmap_controller", "worker count": 1} 2025-08-13T20:08:01.787209920+00:00 stderr F 2025-08-13T20:08:01.787Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "dns_controller", "worker count": 1} 2025-08-13T20:08:01.802124928+00:00 stderr F 2025-08-13T20:08:01.800Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingressclass_controller", "worker count": 1} 2025-08-13T20:08:01.802124928+00:00 stderr F 2025-08-13T20:08:01.801Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.963268078+00:00 stderr F 2025-08-13T20:08:01.963Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "status_controller", "worker count": 1} 2025-08-13T20:08:01.963495594+00:00 stderr F 2025-08-13T20:08:01.963Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:01.967285223+00:00 stderr F 2025-08-13T20:08:01.964Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:08:01.967285223+00:00 stderr F 2025-08-13T20:08:01.966Z INFO operator.init controller/controller.go:234 Starting workers {"controller": 
"configurable_route_controller", "worker count": 1} 2025-08-13T20:08:01.967285223+00:00 stderr F 2025-08-13T20:08:01.966Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:08:02.003855092+00:00 stderr F 2025-08-13T20:08:02.003Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "canary_controller", "worker count": 1} 2025-08-13T20:08:02.008856995+00:00 stderr F 2025-08-13T20:08:02.008Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingress_controller", "worker count": 1} 2025-08-13T20:08:02.009029540+00:00 stderr F 2025-08-13T20:08:02.008Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:08:02.009560235+00:00 stderr F 2025-08-13T20:08:02.009Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:08:02.009712849+00:00 stderr F 2025-08-13T20:08:02.009Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "gatewayapi_controller", "worker count": 1} 2025-08-13T20:08:02.009870424+00:00 stderr F 2025-08-13T20:08:02.009Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "monitoring_dashboard_controller", "worker count": 1} 2025-08-13T20:08:02.010245235+00:00 stderr F 2025-08-13T20:08:02.009Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:08:02.533313412+00:00 stderr F 2025-08-13T20:08:02.533Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:30.401847845+00:00 stderr F 2025-08-13T20:09:30.400Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": 
{"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.952017060+00:00 stderr F 2025-08-13T20:09:44.951Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.952105402+00:00 stderr F 2025-08-13T20:09:44.952Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.952396641+00:00 stderr F 2025-08-13T20:09:44.952Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.957004893+00:00 stderr F 2025-08-13T20:09:44.955Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:44.957004893+00:00 stderr F 2025-08-13T20:09:44.955Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:46.234330695+00:00 stderr F 2025-08-13T20:09:46.234Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:09:46.234638084+00:00 stderr F 2025-08-13T20:09:46.234Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:09:46.234942633+00:00 stderr F 2025-08-13T20:09:46.234Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:51.588404831+00:00 stderr F 2025-08-13T20:09:51.585Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:09:51.588404831+00:00 stderr F 
2025-08-13T20:09:51.587Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:09:51.588404831+00:00 stderr F 2025-08-13T20:09:51.587Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:53.788854380+00:00 stderr F 2025-08-13T20:09:53.788Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:09:53.788854380+00:00 stderr F 2025-08-13T20:09:53.788Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:09:53.788854380+00:00 stderr F 2025-08-13T20:09:53.788Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:09:55.836004494+00:00 stderr F 2025-08-13T20:09:55.835Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:10:01.895441172+00:00 stderr F 2025-08-13T20:10:01.894Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:10:01.895441172+00:00 stderr F 2025-08-13T20:10:01.895Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:10:01.895441172+00:00 stderr F 2025-08-13T20:10:01.894Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:10:01.895506774+00:00 stderr F 2025-08-13T20:10:01.895Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 
2025-08-13T20:10:08.138086435+00:00 stderr F 2025-08-13T20:10:08.137Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:10:13.833002522+00:00 stderr F 2025-08-13T20:10:13.831Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:10:13.833002522+00:00 stderr F 2025-08-13T20:10:13.832Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:10:13.833002522+00:00 stderr F 2025-08-13T20:10:13.832Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:10:22.868218468+00:00 stderr F 2025-08-13T20:10:22.867Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:10:22.868218468+00:00 stderr F 2025-08-13T20:10:22.868Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:10:22.868270259+00:00 stderr F 2025-08-13T20:10:22.868Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}}
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log
2025-12-13T00:13:14.516355504+00:00 stderr F 2025-12-13T00:13:14.515Z INFO operator.main ingress-operator/start.go:64 using operator namespace {"namespace": 
"openshift-ingress-operator"} 2025-12-13T00:13:14.535365033+00:00 stderr F 2025-12-13T00:13:14.535Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for canary_controller 2025-12-13T00:13:14.535528818+00:00 stderr F 2025-12-13T00:13:14.535Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for ingress_controller 2025-12-13T00:13:14.535648852+00:00 stderr F 2025-12-13T00:13:14.535Z INFO operator.init runtime/asm_amd64.s:1650 starting metrics listener {"addr": "127.0.0.1:60000"} 2025-12-13T00:13:14.535711814+00:00 stderr F 2025-12-13T00:13:14.535Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for route_metrics_controller 2025-12-13T00:13:14.535862089+00:00 stderr F 2025-12-13T00:13:14.535Z INFO operator.main ingress-operator/start.go:64 watching file {"filename": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"} 2025-12-13T00:13:14.541502689+00:00 stderr F I1213 00:13:14.541450 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-12-13T00:13:14.546224227+00:00 stderr F 2025-12-13T00:13:14.546Z INFO operator.init ingress-operator/start.go:198 FeatureGates initialized {"knownFeatures": 
["AdminNetworkPolicy","AlibabaPlatform","AutomatedEtcdBackup","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CSIDriverSharedResource","ChunkSizeMiB","CloudDualStackNodeIPs","ClusterAPIInstall","ClusterAPIInstallAWS","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallPowerVS","ClusterAPIInstallVSphere","DNSNameResolver","DisableKubeletCloudCredentialProviders","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","HardwareSpeed","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","KMSv1","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MetricsServer","MixedCPUsAllocation","NetworkDiagnosticsConfig","NetworkLiveMigration","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","PrivateHostedZoneAWS","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereMultiVCenters","VSphereStaticIPs","ValidatingAdmissionPolicy","VolumeGroupSnapshot"]} 2025-12-13T00:13:14.546318561+00:00 stderr F I1213 00:13:14.546276 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-ingress-operator", Name:"ingress-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 
'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-12-13T00:13:14.547179779+00:00 stderr F I1213 00:13:14.547146 1 
base_controller.go:67] Waiting for caches to sync for spread-default-router-pods 2025-12-13T00:13:14.568409572+00:00 stderr F 2025-12-13T00:13:14.568Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Starting metrics server 2025-12-13T00:13:14.568583428+00:00 stderr F 2025-12-13T00:13:14.568Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Serving metrics server {"bindAddress": ":8080", "secure": false} 2025-12-13T00:13:14.648735142+00:00 stderr F I1213 00:13:14.647332 1 base_controller.go:73] Caches are synced for spread-default-router-pods 2025-12-13T00:13:14.648735142+00:00 stderr F I1213 00:13:14.647376 1 base_controller.go:110] Starting #1 worker of spread-default-router-pods controller ... 2025-12-13T00:13:14.669435447+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:13:14.669435447+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:13:14.669480968+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-12-13T00:13:14.669480968+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "status_controller"} 2025-12-13T00:13:14.669491089+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Ingress"} 2025-12-13T00:13:14.669547701+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": 
"configurable_route_controller", "source": "kind source: *v1.Role"} 2025-12-13T00:13:14.669547701+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.RoleBinding"} 2025-12-13T00:13:14.669557992+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "configurable_route_controller"} 2025-12-13T00:13:14.669586993+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:13:14.669615794+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-12-13T00:13:14.669615794+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Deployment"} 2025-12-13T00:13:14.669638045+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-12-13T00:13:14.669638045+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "error_page_configmap_controller"} 2025-12-13T00:13:14.669638045+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Service"} 2025-12-13T00:13:14.669649305+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: 
*v1.Pod"} 2025-12-13T00:13:14.669660135+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNS"} 2025-12-13T00:13:14.669692166+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:13:14.669701987+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_controller"} 2025-12-13T00:13:14.669780509+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNSRecord"} 2025-12-13T00:13:14.669791840+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Ingress"} 2025-12-13T00:13:14.669800700+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Proxy"} 2025-12-13T00:13:14.669800700+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingress_controller"} 2025-12-13T00:13:14.669839631+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:13:14.669839631+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "informer source: 0xc00012b9a0"} 2025-12-13T00:13:14.669849632+00:00 stderr F 2025-12-13T00:13:14.669Z INFO 
operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-12-13T00:13:14.669860182+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-12-13T00:13:14.669871622+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "clientca_configmap_controller"} 2025-12-13T00:13:14.669971916+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNSRecord"} 2025-12-13T00:13:14.669971916+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:13:14.670006447+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-12-13T00:13:14.670006447+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_publisher_controller"} 2025-12-13T00:13:14.670006447+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Infrastructure"} 2025-12-13T00:13:14.670006447+00:00 stderr F 2025-12-13T00:13:14.669Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Secret"} 2025-12-13T00:13:14.670019347+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": 
"dns_controller"} 2025-12-13T00:13:14.670113600+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc00012bf18"} 2025-12-13T00:13:14.670146552+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:13:14.670146552+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressClass"} 2025-12-13T00:13:14.670156762+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingressclass_controller"} 2025-12-13T00:13:14.670213914+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:13:14.670213914+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc00012bf18"} 2025-12-13T00:13:14.670225834+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.Route"} 2025-12-13T00:13:14.670234914+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "route_metrics_controller"} 2025-12-13T00:13:14.670320797+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "gatewayapi_controller", "source": "kind source: *v1.FeatureGate"} 2025-12-13T00:13:14.670374759+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init 
controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "kind source: *v1.IngressController"} 2025-12-13T00:13:14.670386399+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.ConfigMap"} 2025-12-13T00:13:14.670386399+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "crl"} 2025-12-13T00:13:14.670414840+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.Infrastructure"} 2025-12-13T00:13:14.670425411+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "monitoring_dashboard_controller"} 2025-12-13T00:13:14.670425411+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "gatewayapi_controller"} 2025-12-13T00:13:14.670440611+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.IngressController"} 2025-12-13T00:13:14.670440611+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.Route"} 2025-12-13T00:13:14.670451672+00:00 stderr F 2025-12-13T00:13:14.670Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "canary_controller"} 2025-12-13T00:13:14.671776356+00:00 stderr F 2025-12-13T00:13:14.671Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:53 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: 
route.openshift.io/v1: the server is currently unable to handle the request"} 2025-12-13T00:13:14.682563538+00:00 stderr F 2025-12-13T00:13:14.681Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:53 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-12-13T00:13:14.773191453+00:00 stderr F 2025-12-13T00:13:14.772Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_publisher_controller", "worker count": 1} 2025-12-13T00:13:14.773249466+00:00 stderr F 2025-12-13T00:13:14.773Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-12-13T00:13:14.773482914+00:00 stderr F 2025-12-13T00:13:14.773Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "status_controller", "worker count": 1} 2025-12-13T00:13:14.773727442+00:00 stderr F 2025-12-13T00:13:14.773Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:14.774518258+00:00 stderr F 2025-12-13T00:13:14.773Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-12-13T00:13:14.774518258+00:00 stderr F 2025-12-13T00:13:14.773Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-12-13T00:13:14.774518258+00:00 stderr F 2025-12-13T00:13:14.774Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-12-13T00:13:14.774518258+00:00 stderr F 2025-12-13T00:13:14.774Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": 
{"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:14.784074099+00:00 stderr F 2025-12-13T00:13:14.783Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingressclass_controller", "worker count": 1} 2025-12-13T00:13:14.784244765+00:00 stderr F 2025-12-13T00:13:14.784Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:14.787278818+00:00 stderr F 2025-12-13T00:13:14.786Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:14.875905875+00:00 stderr F 2025-12-13T00:13:14.874Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_controller", "worker count": 1} 2025-12-13T00:13:14.875905875+00:00 stderr F 2025-12-13T00:13:14.874Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:14.882281079+00:00 stderr F 2025-12-13T00:13:14.881Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "monitoring_dashboard_controller", "worker count": 1} 2025-12-13T00:13:14.977374224+00:00 stderr F 2025-12-13T00:13:14.977Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "clientca_configmap_controller", "worker count": 1} 2025-12-13T00:13:14.984137652+00:00 stderr F 2025-12-13T00:13:14.984Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "dns_controller", "worker count": 1} 2025-12-13T00:13:15.077627033+00:00 stderr F 2025-12-13T00:13:15.077Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "error_page_configmap_controller", "worker count": 1} 2025-12-13T00:13:15.092049988+00:00 stderr F 2025-12-13T00:13:15.084Z INFO 
operator.init controller/controller.go:234 Starting workers {"controller": "gatewayapi_controller", "worker count": 1} 2025-12-13T00:13:15.092049988+00:00 stderr F 2025-12-13T00:13:15.084Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-12-13T00:13:15.180377376+00:00 stderr F 2025-12-13T00:13:15.178Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-12-13T00:13:15.180377376+00:00 stderr F 2025-12-13T00:13:15.178Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "configurable_route_controller", "worker count": 1} 2025-12-13T00:13:15.180377376+00:00 stderr F 2025-12-13T00:13:15.178Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-12-13T00:13:15.185886251+00:00 stderr F 2025-12-13T00:13:15.185Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "crl", "worker count": 1} 2025-12-13T00:13:15.185886251+00:00 stderr F 2025-12-13T00:13:15.185Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingress_controller", "worker count": 1} 2025-12-13T00:13:15.185886251+00:00 stderr F 2025-12-13T00:13:15.185Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:15.186133930+00:00 stderr F 2025-12-13T00:13:15.186Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-12-13T00:13:15.513165139+00:00 stderr F 2025-12-13T00:13:15.512Z ERROR operator.ingress_controller controller/controller.go:119 got retryable error; requeueing {"after": "59m59.999981399s", "error": "IngressController may become degraded soon: DeploymentReplicasAllAvailable=False"} 2025-12-13T00:13:15.513227701+00:00 stderr F 2025-12-13T00:13:15.513Z 
INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:15.513365306+00:00 stderr F 2025-12-13T00:13:15.513Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:15.513407887+00:00 stderr F 2025-12-13T00:13:15.513Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:15.513587373+00:00 stderr F 2025-12-13T00:13:15.513Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:15.550248815+00:00 stderr F 2025-12-13T00:13:15.549Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:15.701945622+00:00 stderr F 2025-12-13T00:13:15.700Z ERROR operator.ingress_controller controller/controller.go:119 got retryable error; requeueing {"after": "59m59.301003937s", "error": "IngressController may become degraded soon: DeploymentReplicasAllAvailable=False"} 2025-12-13T00:13:15.702114068+00:00 stderr F 2025-12-13T00:13:15.702Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:15.896979036+00:00 stderr F 2025-12-13T00:13:15.896Z ERROR operator.ingress_controller controller/controller.go:119 got retryable error; requeueing {"after": "59m59.104806734s", "error": "IngressController may become degraded soon: DeploymentReplicasAllAvailable=False"} 2025-12-13T00:13:24.676977484+00:00 stderr F 2025-12-13T00:13:24.672Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be 
installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-12-13T00:13:24.677873273+00:00 stderr F 2025-12-13T00:13:24.677Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-12-13T00:13:34.672201275+00:00 stderr F 2025-12-13T00:13:34.671Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-12-13T00:13:34.777723350+00:00 stderr F 2025-12-13T00:13:34.777Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "route_metrics_controller", "worker count": 1} 2025-12-13T00:13:34.777754841+00:00 stderr F 2025-12-13T00:13:34.777Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:34.778032312+00:00 stderr F 2025-12-13T00:13:34.777Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-12-13T00:13:34.778032312+00:00 stderr F 2025-12-13T00:13:34.778Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-12-13T00:13:34.778032312+00:00 stderr F 2025-12-13T00:13:34.778Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-12-13T00:13:34.778050212+00:00 stderr F 2025-12-13T00:13:34.778Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing 
ingresscontroller {"name": "default"} 2025-12-13T00:13:34.778050212+00:00 stderr F 2025-12-13T00:13:34.778Z INFO operator.route_metrics_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-12-13T00:13:34.983110953+00:00 stderr F 2025-12-13T00:13:34.982Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:13:44.672544888+00:00 stderr F 2025-12-13T00:13:44.671Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-12-13T00:13:54.671737444+00:00 stderr F 2025-12-13T00:13:54.671Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-12-13T00:14:04.671365052+00:00 stderr F 2025-12-13T00:14:04.671Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-12-13T00:14:14.672098915+00:00 stderr F 2025-12-13T00:14:14.671Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-12-13T00:14:23.674586753+00:00 stderr F 2025-12-13T00:14:23.672Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", 
"related": ""} 2025-12-13T00:14:23.674586753+00:00 stderr F 2025-12-13T00:14:23.673Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-12-13T00:14:23.674586753+00:00 stderr F 2025-12-13T00:14:23.674Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:14:23.811312580+00:00 stderr F 2025-12-13T00:14:23.809Z ERROR operator.ingress_controller controller/controller.go:119 got retryable error; requeueing {"after": "58m51.191926185s", "error": "IngressController may become degraded soon: DeploymentReplicasAllAvailable=False"} 2025-12-13T00:14:24.673975677+00:00 stderr F 2025-12-13T00:14:24.673Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-12-13T00:14:34.677571853+00:00 stderr F 2025-12-13T00:14:34.676Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-12-13T00:14:44.671604932+00:00 stderr F 2025-12-13T00:14:44.671Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-12-13T00:14:54.346523608+00:00 stderr F 2025-12-13T00:14:54.345Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-12-13T00:14:54.346523608+00:00 stderr F 2025-12-13T00:14:54.345Z INFO 
operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-12-13T00:14:54.346523608+00:00 stderr F 2025-12-13T00:14:54.345Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:14:54.425190458+00:00 stderr F 2025-12-13T00:14:54.425Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:14:54.425432435+00:00 stderr F 2025-12-13T00:14:54.425Z INFO operator.certificate_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:14:54.425432435+00:00 stderr F 2025-12-13T00:14:54.425Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:14:54.425467067+00:00 stderr F 2025-12-13T00:14:54.425Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:14:54.426695846+00:00 stderr F 2025-12-13T00:14:54.426Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:14:54.440915333+00:00 stderr F 2025-12-13T00:14:54.440Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-12-13T00:14:54.671869202+00:00 stderr F 2025-12-13T00:14:54.671Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 
2025-12-13T00:15:04.671698427+00:00 stderr F 2025-12-13T00:15:04.671Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-12-13T00:15:14.671976964+00:00 stderr F 2025-12-13T00:15:14.671Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-12-13T00:15:14.782392632+00:00 stderr F 2025-12-13T00:15:14.782Z ERROR operator.init controller/controller.go:208 Could not wait for Cache to sync {"controller": "canary_controller", "error": "failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route"} 2025-12-13T00:15:14.782392632+00:00 stderr F 2025-12-13T00:15:14.782Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for non leader election runnables 2025-12-13T00:15:14.782547167+00:00 stderr F W1213 00:15:14.782501 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Namespace ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-12-13T00:15:14.782841036+00:00 stderr F 2025-12-13T00:15:14.782Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for leader election runnables 2025-12-13T00:15:14.782841036+00:00 stderr F 2025-12-13T00:15:14.782Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "route_metrics_controller"} 2025-12-13T00:15:14.782841036+00:00 stderr F 2025-12-13T00:15:14.782Z INFO operator.init 
manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "ingress_controller"} 2025-12-13T00:15:14.782980540+00:00 stderr F 2025-12-13T00:15:14.782Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "route_metrics_controller"} 2025-12-13T00:15:14.782980540+00:00 stderr F 2025-12-13T00:15:14.782Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "crl"} 2025-12-13T00:15:14.782998081+00:00 stderr F 2025-12-13T00:15:14.782Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "crl"} 2025-12-13T00:15:14.783008201+00:00 stderr F 2025-12-13T00:15:14.782Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "configurable_route_controller"} 2025-12-13T00:15:14.783008201+00:00 stderr F 2025-12-13T00:15:14.782Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "ingress_controller"} 2025-12-13T00:15:14.783017621+00:00 stderr F 2025-12-13T00:15:14.782Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "configurable_route_controller"} 2025-12-13T00:15:14.783025781+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "gatewayapi_controller"} 2025-12-13T00:15:14.783033992+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "gatewayapi_controller"} 2025-12-13T00:15:14.783105014+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "certificate_controller"} 2025-12-13T00:15:14.783105014+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init 
manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "ingressclass_controller"} 2025-12-13T00:15:14.783105014+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "certificate_controller"} 2025-12-13T00:15:14.783105014+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "ingressclass_controller"} 2025-12-13T00:15:14.783105014+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "status_controller"} 2025-12-13T00:15:14.783105014+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "status_controller"} 2025-12-13T00:15:14.783105014+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "error_page_configmap_controller"} 2025-12-13T00:15:14.783105014+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "certificate_publisher_controller"} 2025-12-13T00:15:14.783105014+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "error_page_configmap_controller"} 2025-12-13T00:15:14.783105014+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "certificate_publisher_controller"} 2025-12-13T00:15:14.783119814+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "dns_controller"} 2025-12-13T00:15:14.783128064+00:00 stderr F 
2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "clientca_configmap_controller"} 2025-12-13T00:15:14.783136115+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "clientca_configmap_controller"} 2025-12-13T00:15:14.783144195+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "monitoring_dashboard_controller"} 2025-12-13T00:15:14.783144195+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "dns_controller"} 2025-12-13T00:15:14.783153125+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "monitoring_dashboard_controller"} 2025-12-13T00:15:14.783200796+00:00 stderr F 2025-12-13T00:15:14.783Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for caches 2025-12-13T00:15:14.783364261+00:00 stderr F W1213 00:15:14.783318 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-12-13T00:15:14.783476635+00:00 stderr F W1213 00:15:14.783430 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-12-13T00:15:14.783538826+00:00 stderr F W1213 00:15:14.783510 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the 
watch stream: context canceled") has prevented the request from succeeding 2025-12-13T00:15:14.784333650+00:00 stderr F W1213 00:15:14.784246 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-12-13T00:15:14.784564046+00:00 stderr F W1213 00:15:14.784445 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-12-13T00:15:14.784829504+00:00 stderr F W1213 00:15:14.784792 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-12-13T00:15:14.785109912+00:00 stderr F 2025-12-13T00:15:14.785Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for webhooks 2025-12-13T00:15:14.785127733+00:00 stderr F 2025-12-13T00:15:14.785Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for HTTP servers 2025-12-13T00:15:14.785676590+00:00 stderr F 2025-12-13T00:15:14.785Z INFO operator.init.controller-runtime.metrics runtime/asm_amd64.s:1650 Shutting down metrics server with timeout of 1 minute 2025-12-13T00:15:14.785817814+00:00 stderr F 2025-12-13T00:15:14.785Z INFO operator.init runtime/asm_amd64.s:1650 Wait completed, proceeding to shutdown the manager 2025-12-13T00:15:14.788474753+00:00 stderr F 2025-12-13T00:15:14.788Z ERROR operator.main cobra/command.go:944 error starting {"error": "failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route"} 
home/zuul/zuul-output/logs/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log
2025-08-13T20:05:08.191183858+00:00 stderr F 2025-08-13T20:05:08.190Z INFO operator.main ingress-operator/start.go:64 using operator namespace {"namespace": "openshift-ingress-operator"} 2025-08-13T20:05:08.198035654+00:00 stderr F 2025-08-13T20:05:08.197Z ERROR operator.main ingress-operator/start.go:64 failed to verify idling endpoints between endpoints and services {"error": "failed to list endpoints in all namespaces: failed to get API group resources: unable to retrieve the complete list of server APIs: v1: Get \"https://10.217.4.1:443/api/v1\": dial tcp 10.217.4.1:443: connect: connection refused"} 2025-08-13T20:05:08.198298121+00:00 stderr F 2025-08-13T20:05:08.198Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for canary_controller 2025-08-13T20:05:08.198359993+00:00 stderr F 2025-08-13T20:05:08.198Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for ingress_controller 2025-08-13T20:05:08.198415215+00:00 stderr F 2025-08-13T20:05:08.198Z INFO operator.main ingress-operator/start.go:64 registering Prometheus metrics for route_metrics_controller 2025-08-13T20:05:08.198545378+00:00 stderr F 2025-08-13T20:05:08.198Z INFO operator.main ingress-operator/start.go:64 watching file {"filename": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"} 2025-08-13T20:05:08.200367231+00:00 stderr F 2025-08-13T20:05:08.200Z INFO operator.init runtime/asm_amd64.s:1650 starting metrics listener {"addr": "127.0.0.1:60000"} 2025-08-13T20:05:08.229326390+00:00 stderr F I0813 
20:05:08.229215 1 simple_featuregate_reader.go:171] Starting feature-gate-detector 2025-08-13T20:05:08.236722812+00:00 stderr F W0813 20:05:08.236296 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:08.236722812+00:00 stderr F W0813 20:05:08.236305 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:08.236747412+00:00 stderr F E0813 20:05:08.236721 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:08.236747412+00:00 stderr F E0813 20:05:08.236732 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.327244290+00:00 stderr F W0813 20:05:09.325689 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.327244290+00:00 stderr F E0813 20:05:09.326389 1 
reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.566674106+00:00 stderr F W0813 20:05:09.565749 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:09.566674106+00:00 stderr F E0813 20:05:09.566641 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:11.263050674+00:00 stderr F W0813 20:05:11.262490 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:11.263050674+00:00 stderr F E0813 20:05:11.262989 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:12.421045724+00:00 stderr F W0813 20:05:12.420659 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list 
*v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:12.421105386+00:00 stderr F E0813 20:05:12.421044 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.769327666+00:00 stderr F W0813 20:05:15.768608 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:15.769327666+00:00 stderr F E0813 20:05:15.769244 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.912903143+00:00 stderr F W0813 20:05:16.911517 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get "https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:16.912903143+00:00 stderr F E0813 20:05:16.911649 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to list *v1.FeatureGate: Get 
"https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500&resourceVersion=0": dial tcp 10.217.4.1:443: connect: connection refused 2025-08-13T20:05:27.134481088+00:00 stderr F 2025-08-13T20:05:27.133Z INFO operator.init ingress-operator/start.go:198 FeatureGates initialized {"knownFeatures": ["AdminNetworkPolicy","AlibabaPlatform","AutomatedEtcdBackup","AzureWorkloadIdentity","BareMetalLoadBalancer","BuildCSIVolumes","CSIDriverSharedResource","ChunkSizeMiB","CloudDualStackNodeIPs","ClusterAPIInstall","ClusterAPIInstallAWS","ClusterAPIInstallAzure","ClusterAPIInstallGCP","ClusterAPIInstallIBMCloud","ClusterAPIInstallNutanix","ClusterAPIInstallOpenStack","ClusterAPIInstallPowerVS","ClusterAPIInstallVSphere","DNSNameResolver","DisableKubeletCloudCredentialProviders","DynamicResourceAllocation","EtcdBackendQuota","EventedPLEG","Example","ExternalCloudProvider","ExternalCloudProviderAzure","ExternalCloudProviderExternal","ExternalCloudProviderGCP","ExternalOIDC","ExternalRouteCertificate","GCPClusterHostedDNS","GCPLabelsTags","GatewayAPI","HardwareSpeed","ImagePolicy","InsightsConfig","InsightsConfigAPI","InsightsOnDemandDataGather","InstallAlternateInfrastructureAWS","KMSv1","MachineAPIOperatorDisableMachineHealthCheckController","MachineAPIProviderOpenStack","MachineConfigNodes","ManagedBootImages","MaxUnavailableStatefulSet","MetricsCollectionProfiles","MetricsServer","MixedCPUsAllocation","NetworkDiagnosticsConfig","NetworkLiveMigration","NewOLM","NodeDisruptionPolicy","NodeSwap","OnClusterBuild","OpenShiftPodSecurityAdmission","PinnedImages","PlatformOperators","PrivateHostedZoneAWS","RouteExternalCertificate","ServiceAccountTokenNodeBinding","ServiceAccountTokenNodeBindingValidation","ServiceAccountTokenPodNodeInfo","SignatureStores","SigstoreImageVerification","TranslateStreamCloseWebsocketRequests","UpgradeStatus","VSphereControlPlaneMachineSet","VSphereDriverConfiguration","VSphereMultiVCenters","VSphereStaticIPs","ValidatingAdmissionPolicy","Vo
lumeGroupSnapshot"]} 2025-08-13T20:05:27.134896079+00:00 stderr F I0813 20:05:27.133874 1 event.go:364] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-ingress-operator", Name:"ingress-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", 
"ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} 2025-08-13T20:05:27.171135417+00:00 stderr F I0813 20:05:27.171080 1 base_controller.go:67] Waiting for caches to sync for spread-default-router-pods 2025-08-13T20:05:27.271658006+00:00 stderr F I0813 20:05:27.271480 1 base_controller.go:73] Caches are synced for spread-default-router-pods 2025-08-13T20:05:27.271658006+00:00 stderr F I0813 20:05:27.271548 1 base_controller.go:110] Starting #1 worker of spread-default-router-pods controller ... 2025-08-13T20:05:28.707483552+00:00 stderr F 2025-08-13T20:05:28.706Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Starting metrics server 2025-08-13T20:05:28.863034757+00:00 stderr F 2025-08-13T20:05:28.862Z INFO operator.init.controller-runtime.metrics manager/runnable_group.go:223 Serving metrics server {"bindAddress": ":8080", "secure": false} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Deployment"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Service"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO 
operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Pod"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNS"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.DNSRecord"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "status_controller"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Ingress"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingress_controller", "source": "kind source: *v1.Proxy"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingress_controller"} 2025-08-13T20:05:29.435990854+00:00 stderr F 2025-08-13T20:05:29.431Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.Ingress"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: 
*v1.Role"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "configurable_route_controller", "source": "kind source: *v1.RoleBinding"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "configurable_route_controller"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.452648831+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "error_page_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "error_page_configmap_controller"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_controller"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.452927499+00:00 stderr F 
2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "clientca_configmap_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "clientca_configmap_controller"} 2025-08-13T20:05:29.452927499+00:00 stderr F 2025-08-13T20:05:29.432Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "informer source: 0xc00010e648"} 2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "certificate_publisher_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.452Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "certificate_publisher_controller"} 2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNSRecord"} 2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.453Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.453Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Infrastructure"} 2025-08-13T20:05:29.453065183+00:00 stderr F 2025-08-13T20:05:29.453Z INFO operator.init 
controller/controller.go:234 Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Secret"} 2025-08-13T20:05:29.453085534+00:00 stderr F 2025-08-13T20:05:29.453Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "dns_controller"} 2025-08-13T20:05:29.455007799+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc00010eba0"} 2025-08-13T20:05:29.455388390+00:00 stderr F 2025-08-13T20:05:29.455Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "informer source: 0xc00010eba0"} 2025-08-13T20:05:29.455467832+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.455765720+00:00 stderr F 2025-08-13T20:05:29.455Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "crl", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.455902264+00:00 stderr F 2025-08-13T20:05:29.455Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "crl"} 2025-08-13T20:05:29.456298656+00:00 stderr F 2025-08-13T20:05:29.433Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.456352127+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "ingressclass_controller", "source": "kind source: *v1.IngressClass"} 2025-08-13T20:05:29.456389038+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "ingressclass_controller"} 2025-08-13T20:05:29.456435940+00:00 stderr F 2025-08-13T20:05:29.433Z INFO 
operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.ConfigMap"} 2025-08-13T20:05:29.456476721+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "monitoring_dashboard_controller", "source": "kind source: *v1.Infrastructure"} 2025-08-13T20:05:29.456510342+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "monitoring_dashboard_controller"} 2025-08-13T20:05:29.456558643+00:00 stderr F 2025-08-13T20:05:29.436Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "gatewayapi_controller", "source": "kind source: *v1.FeatureGate"} 2025-08-13T20:05:29.456639825+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "gatewayapi_controller"} 2025-08-13T20:05:29.456694627+00:00 stderr F 2025-08-13T20:05:29.446Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.IngressController"} 2025-08-13T20:05:29.456735988+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "route_metrics_controller", "source": "kind source: *v1.Route"} 2025-08-13T20:05:29.456882772+00:00 stderr F 2025-08-13T20:05:29.456Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "route_metrics_controller"} 2025-08-13T20:05:29.463311917+00:00 stderr F 2025-08-13T20:05:29.455Z INFO operator.init controller/controller.go:234 Starting EventSource {"controller": "canary_controller", "source": "kind source: *v1.Route"} 2025-08-13T20:05:29.463311917+00:00 stderr F 2025-08-13T20:05:29.462Z INFO operator.init controller/controller.go:234 Starting Controller {"controller": "canary_controller"} 
2025-08-13T20:05:29.574984784+00:00 stderr F 2025-08-13T20:05:29.563Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:53 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:05:29.945304648+00:00 stderr F 2025-08-13T20:05:29.917Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:05:29.965242119+00:00 stderr F 2025-08-13T20:05:29.965Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:05:30.014990724+00:00 stderr F 2025-08-13T20:05:30.014Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:05:30.023730434+00:00 stderr F 2025-08-13T20:05:30.023Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:53 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:05:30.026092791+00:00 stderr F 2025-08-13T20:05:30.023Z INFO operator.certificate_publisher_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default"} 2025-08-13T20:05:30.120415043+00:00 stderr F 2025-08-13T20:05:30.120Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "crl", "worker count": 1} 2025-08-13T20:05:30.120551686+00:00 stderr F 2025-08-13T20:05:30.120Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_controller", "worker count": 1} 2025-08-13T20:05:30.120979129+00:00 stderr F 2025-08-13T20:05:30.120Z INFO operator.certificate_controller controller/controller.go:119 Reconciling 
{"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:05:30.121431882+00:00 stderr F 2025-08-13T20:05:30.121Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "clientca_configmap_controller", "worker count": 1} 2025-08-13T20:05:30.219567762+00:00 stderr F 2025-08-13T20:05:30.219Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "monitoring_dashboard_controller", "worker count": 1} 2025-08-13T20:05:30.241382557+00:00 stderr F 2025-08-13T20:05:30.240Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "error_page_configmap_controller", "worker count": 1} 2025-08-13T20:05:30.315357705+00:00 stderr F 2025-08-13T20:05:30.315Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "gatewayapi_controller", "worker count": 1} 2025-08-13T20:05:30.331297851+00:00 stderr F 2025-08-13T20:05:30.315Z INFO operator.gatewayapi_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:05:30.378874514+00:00 stderr F 2025-08-13T20:05:30.378Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "certificate_publisher_controller", "worker count": 1} 2025-08-13T20:05:30.379387008+00:00 stderr F 2025-08-13T20:05:30.379Z INFO operator.certificate_publisher_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:05:30.499907610+00:00 stderr F 2025-08-13T20:05:30.499Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:05:30.522918519+00:00 stderr F 2025-08-13T20:05:30.518Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingress {"name": "default", "related": ""} 2025-08-13T20:05:30.526560943+00:00 stderr F 2025-08-13T20:05:30.526Z INFO operator.init controller/controller.go:234 Starting workers 
{"controller": "ingressclass_controller", "worker count": 1} 2025-08-13T20:05:30.526961224+00:00 stderr F 2025-08-13T20:05:30.526Z INFO operator.ingressclass_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:05:30.598924545+00:00 stderr F 2025-08-13T20:05:30.598Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "status_controller", "worker count": 1} 2025-08-13T20:05:30.599024598+00:00 stderr F 2025-08-13T20:05:30.598Z INFO operator.status_controller controller/controller.go:119 Reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:05:30.599125781+00:00 stderr F 2025-08-13T20:05:30.599Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "dns_controller", "worker count": 1} 2025-08-13T20:05:30.629683626+00:00 stderr F 2025-08-13T20:05:30.629Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "ingress_controller", "worker count": 1} 2025-08-13T20:05:30.679536544+00:00 stderr F 2025-08-13T20:05:30.679Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "configurable_route_controller", "worker count": 1} 2025-08-13T20:05:30.679735869+00:00 stderr F 2025-08-13T20:05:30.679Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:05:30.680070179+00:00 stderr F 2025-08-13T20:05:30.630Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:05:39.465426320+00:00 stderr F 2025-08-13T20:05:39.463Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group 
\"route.openshift.io\""} 2025-08-13T20:05:39.470629999+00:00 stderr F 2025-08-13T20:05:39.469Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:05:49.463650058+00:00 stderr F 2025-08-13T20:05:49.462Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:05:49.466441728+00:00 stderr F 2025-08-13T20:05:49.466Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:05:57.318941402+00:00 stderr F 2025-08-13T20:05:57.317Z INFO operator.configurable_route_controller controller/controller.go:119 reconciling {"request": {"name":"cluster"}} 2025-08-13T20:05:57.318941402+00:00 stderr F 2025-08-13T20:05:57.317Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:05:57.318941402+00:00 stderr F 2025-08-13T20:05:57.317Z INFO operator.ingress_controller handler/enqueue_mapped.go:81 queueing ingresscontroller {"name": "default", "related": ""} 2025-08-13T20:05:57.318941402+00:00 stderr F 2025-08-13T20:05:57.318Z INFO operator.ingress_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:05:59.460963930+00:00 stderr F 2025-08-13T20:05:59.460Z ERROR 
operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:05:59.465319955+00:00 stderr F 2025-08-13T20:05:59.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:09.466039895+00:00 stderr F 2025-08-13T20:06:09.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:06:09.466039895+00:00 stderr F 2025-08-13T20:06:09.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:19.466356135+00:00 stderr F 2025-08-13T20:06:19.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:19.466356135+00:00 stderr F 2025-08-13T20:06:19.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently 
unable to handle the request"} 2025-08-13T20:06:29.464497590+00:00 stderr F 2025-08-13T20:06:29.463Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:29.465170690+00:00 stderr F 2025-08-13T20:06:29.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:06:39.466094165+00:00 stderr F 2025-08-13T20:06:39.465Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:39.470533012+00:00 stderr F 2025-08-13T20:06:39.469Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:06:49.461005407+00:00 stderr F 2025-08-13T20:06:49.460Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:06:49.463712194+00:00 stderr F 2025-08-13T20:06:49.463Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start 
{"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:59.465943937+00:00 stderr F 2025-08-13T20:06:59.464Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:06:59.481697919+00:00 stderr F 2025-08-13T20:06:59.480Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:07:09.465054500+00:00 stderr F 2025-08-13T20:07:09.461Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:07:09.467221923+00:00 stderr F 2025-08-13T20:07:09.466Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:07:19.470896816+00:00 stderr F 2025-08-13T20:07:19.468Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:07:19.470896816+00:00 stderr F 2025-08-13T20:07:19.470Z ERROR 
operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:07:29.462609576+00:00 stderr F 2025-08-13T20:07:29.461Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 failed to get informer from cache {"error": "failed to get API group resources: unable to retrieve the complete list of server APIs: route.openshift.io/v1: the server is currently unable to handle the request"} 2025-08-13T20:07:29.463661686+00:00 stderr F 2025-08-13T20:07:29.463Z ERROR operator.init.controller-runtime.source.EventHandler wait/loop.go:87 if kind is a CRD, it should be installed before calling Start {"kind": "Route.route.openshift.io", "error": "failed to get restmapping: no matches for kind \"Route\" in group \"route.openshift.io\""} 2025-08-13T20:07:30.132827102+00:00 stderr F 2025-08-13T20:07:30.128Z ERROR operator.init controller/controller.go:208 Could not wait for Cache to sync {"controller": "canary_controller", "error": "failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route"} 2025-08-13T20:07:30.135698934+00:00 stderr F 2025-08-13T20:07:30.132Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for non leader election runnables 2025-08-13T20:07:30.135768856+00:00 stderr F 2025-08-13T20:07:30.135Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for leader election runnables 2025-08-13T20:07:30.137697782+00:00 stderr F 2025-08-13T20:07:30.135Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "configurable_route_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, 
waiting for all workers to finish {"controller": "ingress_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "dns_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "status_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "ingressclass_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "certificate_publisher_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "gatewayapi_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "error_page_configmap_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "monitoring_dashboard_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "clientca_configmap_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, 
waiting for all workers to finish {"controller": "certificate_controller"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "crl"} 2025-08-13T20:07:30.154365289+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "monitoring_dashboard_controller"} 2025-08-13T20:07:30.154443352+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "gatewayapi_controller"} 2025-08-13T20:07:30.154443352+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "clientca_configmap_controller"} 2025-08-13T20:07:30.154443352+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "certificate_controller"} 2025-08-13T20:07:30.154443352+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "status_controller"} 2025-08-13T20:07:30.154507554+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init controller/controller.go:234 Starting workers {"controller": "route_metrics_controller", "worker count": 1} 2025-08-13T20:07:30.154528604+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 Shutdown signal received, waiting for all workers to finish {"controller": "route_metrics_controller"} 2025-08-13T20:07:30.154901485+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.route_metrics_controller controller/controller.go:119 reconciling {"request": {"name":"default","namespace":"openshift-ingress-operator"}} 2025-08-13T20:07:30.155027888+00:00 stderr F 2025-08-13T20:07:30.154Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": 
"dns_controller"} 2025-08-13T20:07:30.155089260+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "ingressclass_controller"} 2025-08-13T20:07:30.155145122+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "crl"} 2025-08-13T20:07:30.162760550+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "certificate_publisher_controller"} 2025-08-13T20:07:30.162760550+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "configurable_route_controller"} 2025-08-13T20:07:30.162760550+00:00 stderr F 2025-08-13T20:07:30.155Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "error_page_configmap_controller"} 2025-08-13T20:07:30.163077299+00:00 stderr F 2025-08-13T20:07:30.162Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "ingress_controller"} 2025-08-13T20:07:30.176402561+00:00 stderr F 2025-08-13T20:07:30.174Z ERROR operator.init controller/controller.go:266 Reconciler error {"controller": "route_metrics_controller", "object": {"name":"default","namespace":"openshift-ingress-operator"}, "namespace": "openshift-ingress-operator", "name": "default", "reconcileID": "8a74f58b-c73b-4c1b-937a-14d64341d67c", "error": "failed to get Ingress Controller \"openshift-ingress-operator/default\": Timeout: failed waiting for *v1.IngressController Informer to sync"} 2025-08-13T20:07:30.176402561+00:00 stderr F 2025-08-13T20:07:30.176Z INFO operator.init manager/runnable_group.go:223 All workers finished {"controller": "route_metrics_controller"} 2025-08-13T20:07:30.176402561+00:00 stderr F 2025-08-13T20:07:30.176Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for caches 2025-08-13T20:07:30.195953332+00:00 stderr 
F 2025-08-13T20:07:30.195Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for webhooks 2025-08-13T20:07:30.195953332+00:00 stderr F 2025-08-13T20:07:30.195Z INFO operator.init runtime/asm_amd64.s:1650 Stopping and waiting for HTTP servers 2025-08-13T20:07:30.198455734+00:00 stderr F 2025-08-13T20:07:30.198Z INFO operator.init.controller-runtime.metrics runtime/asm_amd64.s:1650 Shutting down metrics server with timeout of 1 minute 2025-08-13T20:07:30.200492422+00:00 stderr F W0813 20:07:30.198536 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Proxy ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-08-13T20:07:30.201920803+00:00 stderr F W0813 20:07:30.198690 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-08-13T20:07:30.202012955+00:00 stderr F W0813 20:07:30.201950 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-08-13T20:07:30.202052777+00:00 stderr F W0813 20:07:30.198766 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-08-13T20:07:30.202135089+00:00 stderr F W0813 20:07:30.198484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: 
context canceled") has prevented the request from succeeding 2025-08-13T20:07:30.202245812+00:00 stderr F W0813 20:07:30.202220 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-08-13T20:07:30.204503267+00:00 stderr F W0813 20:07:30.199382 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding 2025-08-13T20:07:30.223684287+00:00 stderr F 2025-08-13T20:07:30.223Z INFO operator.init runtime/asm_amd64.s:1650 Wait completed, proceeding to shutdown the manager 2025-08-13T20:07:30.229373850+00:00 stderr F 2025-08-13T20:07:30.228Z ERROR operator.main cobra/command.go:944 error starting {"error": "failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route"} home/zuul/zuul-output/logs/ci-framework-data/artifacts/0000755000175000017500000000000015117130664022347 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/resolv.conf0000644000175000017500000000015215117130630024517 0ustar zuulzuul# Generated by NetworkManager nameserver 192.168.122.10 nameserver 199.204.44.24 nameserver 199.204.47.54 home/zuul/zuul-output/logs/ci-framework-data/artifacts/hosts0000644000175000017500000000023715117130630023425 0ustar zuulzuul127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 home/zuul/zuul-output/logs/ci-framework-data/artifacts/ip-network.txt0000644000175000017500000000315715117130630025206 0ustar zuulzuuldefault via 38.102.83.1 dev eth0 proto dhcp src 38.102.83.182 metric 100 38.102.83.0/24 dev eth0 proto kernel 
scope link src 38.102.83.182 metric 100 169.254.169.254 via 38.102.83.126 dev eth0 proto dhcp src 38.102.83.182 metric 100 192.168.122.0/24 dev eth1 proto kernel scope link src 192.168.122.11 metric 101 0: from all lookup local 32766: from all lookup main 32767: from all lookup default [ { "ifindex": 1, "ifname": "lo", "flags": [ "LOOPBACK","UP","LOWER_UP" ], "mtu": 65536, "qdisc": "noqueue", "operstate": "UNKNOWN", "linkmode": "DEFAULT", "group": "default", "txqlen": 1000, "link_type": "loopback", "address": "00:00:00:00:00:00", "broadcast": "00:00:00:00:00:00" },{ "ifindex": 2, "ifname": "eth0", "flags": [ "BROADCAST","MULTICAST","UP","LOWER_UP" ], "mtu": 1500, "qdisc": "fq_codel", "operstate": "UP", "linkmode": "DEFAULT", "group": "default", "txqlen": 1000, "link_type": "ether", "address": "fa:16:3e:96:00:e4", "broadcast": "ff:ff:ff:ff:ff:ff", "altnames": [ "enp0s3","ens3" ] },{ "ifindex": 3, "ifname": "eth1", "flags": [ "BROADCAST","MULTICAST","UP","LOWER_UP" ], "mtu": 1500, "qdisc": "fq_codel", "operstate": "UP", "linkmode": "DEFAULT", "group": "default", "txqlen": 1000, "link_type": "ether", "address": "fa:16:3e:b1:85:fe", "broadcast": "ff:ff:ff:ff:ff:ff", "altnames": [ "enp0s7","ens7" ] } ] home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_000_check_for_oc.sh0000644000175000017500000000020715117130634027721 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_check_for_oc.log) 2>&1 command -v oc home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_000_run_openstack_must_gather.sh0000644000175000017500000000132215117130635032572 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_run_openstack_must_gather.log) 2>&1 timeout 2700.0 oc adm must-gather --image quay.io/openstack-k8s-operators/openstack-must-gather:latest --timeout 30m --host-network=False --dest-dir /home/zuul/ci-framework-data/logs/openstack-must-gather -- 
ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=$OPENSTACK_DATABASES SOS_EDPM=$SOS_EDPM SOS_DECOMPRESS=$SOS_DECOMPRESS gather 2>&1 || { rc=$? if [ $rc -eq 124 ]; then echo "The must gather command did not finish on time!" echo "2700.0 seconds was not enough to finish the task." fi } home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_000_prepare_root_ssh.sh0000644000175000017500000000122315117130641030670 0ustar zuulzuul#!/bin/bash set -euo pipefail exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_prepare_root_ssh.log) 2>&1 ssh -i ~/.ssh/id_cifw core@api.crc.testing < >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_copy_logs_from_crc.log) 2>&1 scp -v -r -i ~/.ssh/id_cifw core@api.crc.testing:/tmp/crc-logs-artifacts /home/zuul/ci-framework-data/logs/crc/ home/zuul/zuul-output/logs/ci-framework-data/artifacts/zuul_inventory.yml0000644000175000017500000006777415117130656026234 0ustar zuulzuulall: children: zuul_unreachable: hosts: {} hosts: controller: ansible_connection: ssh ansible_host: 38.102.83.182 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: 
ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: 5956d295-b553-4ddf-9a24-66bb002b169b host_id: bdb78bf25a270582fae0ca49d447ffffc4c7a50a772a0a4c0593588a interface_ip: 38.102.83.182 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.182 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.182 public_ipv6: '' region: RegionOne slot: null podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: nightly_bundles zuul_log_collection: true crc: ansible_connection: ssh ansible_host: 38.102.83.51 ansible_port: 22 ansible_python_interpreter: auto ansible_user: core cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: 
02c5855c-ecb0-48ea-8e20-d08d70e9697e host_id: b012578aee5370fae73eb6c92c4679617335173cccca05390470f411 interface_ip: 38.102.83.51 label: coreos-crc-extracted-2-39-0-3xl private_ipv4: 38.102.83.51 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.51 public_ipv6: '' region: RegionOne slot: null podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: nightly_bundles zuul_log_collection: true localhost: ansible_connection: local vars: cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: nightly_bundles zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: master build: 1ff1f9e2914e4781854296cd59417fcd build_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: 
canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator buildset: f71192a0075543fc80fa0dd7b04d38c1 buildset_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator change_url: https://github.com/infrawatch/service-telemetry-operator child_jobs: [] event_id: 0e5659e7c90540b9826e576f72e96d0a executor: hostname: ze03.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/inventory.yaml log_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/logs result_data_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/results.json src_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/src work_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work items: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator job: stf-crc-ocp_416-nightly_bundles jobtags: [] max_attempts: 1 pipeline: periodic playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 672a220823fac36a8965fa0d3dca764739bb46c0 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 
trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 672a220823fac36a8965fa0d3dca764739bb46c0 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 untrusted/project_4/github.com/infrawatch/service-telemetry-operator: canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master commit: eb8aace45bdd3decd60358ca0a7b52ae71b6879a playbooks: - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/deploy_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_0/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_0/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_0/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_2/config link_target: 
untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_4/rdo-jobs/roles - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/test_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_1/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_1/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_1/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_1/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_1/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_1/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_1/role_4/rdo-jobs/roles post_review: true project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: 
src/github.com/infrawatch/service-telemetry-operator
projects:
  github.com/crc-org/crc-cloud:
    canonical_hostname: github.com
    canonical_name: github.com/crc-org/crc-cloud
    checkout: main
    checkout_description: project override ref
    commit: 42957126d9d9b9d1372615db325b82bd992fa335
    name: crc-org/crc-cloud
    required: true
    short_name: crc-cloud
    src_dir: src/github.com/crc-org/crc-cloud
  github.com/infrawatch/prometheus-webhook-snmp:
    canonical_hostname: github.com
    canonical_name: github.com/infrawatch/prometheus-webhook-snmp
    checkout: master
    checkout_description: zuul branch
    commit: 3959c53b2613d03d066cb1b2fe5bdae8633ae895
    name: infrawatch/prometheus-webhook-snmp
    required: true
    short_name: prometheus-webhook-snmp
    src_dir: src/github.com/infrawatch/prometheus-webhook-snmp
  github.com/infrawatch/service-telemetry-operator:
    canonical_hostname: github.com
    canonical_name: github.com/infrawatch/service-telemetry-operator
    checkout: master
    checkout_description: zuul branch
    commit: eb8aace45bdd3decd60358ca0a7b52ae71b6879a
    name: infrawatch/service-telemetry-operator
    required: true
    short_name: service-telemetry-operator
    src_dir: src/github.com/infrawatch/service-telemetry-operator
  github.com/infrawatch/sg-bridge:
    canonical_hostname: github.com
    canonical_name: github.com/infrawatch/sg-bridge
    checkout: master
    checkout_description: zuul branch
    commit: bab11fba86ad0c21cb35e12b56bf086a3332f1d2
    name: infrawatch/sg-bridge
    required: true
    short_name: sg-bridge
    src_dir: src/github.com/infrawatch/sg-bridge
  github.com/infrawatch/sg-core:
    canonical_hostname: github.com
    canonical_name: github.com/infrawatch/sg-core
    checkout: master
    checkout_description: zuul branch
    commit: 5a4aece11fea9f71ce7515d11e1e7f0eae97eea6
    name: infrawatch/sg-core
    required: true
    short_name: sg-core
    src_dir: src/github.com/infrawatch/sg-core
  github.com/infrawatch/smart-gateway-operator:
    canonical_hostname: github.com
    canonical_name: github.com/infrawatch/smart-gateway-operator
    checkout: master
    checkout_description: zuul branch
    commit: 9e0945fe8a0e74be8bc9449318446eeb74336986
    name: infrawatch/smart-gateway-operator
    required: true
    short_name: smart-gateway-operator
    src_dir: src/github.com/infrawatch/smart-gateway-operator
  github.com/openstack-k8s-operators/ci-framework:
    canonical_hostname: github.com
    canonical_name: github.com/openstack-k8s-operators/ci-framework
    checkout: main
    checkout_description: project override ref
    commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72
    name: openstack-k8s-operators/ci-framework
    required: true
    short_name: ci-framework
    src_dir: src/github.com/openstack-k8s-operators/ci-framework
  github.com/openstack-k8s-operators/dataplane-operator:
    canonical_hostname: github.com
    canonical_name: github.com/openstack-k8s-operators/dataplane-operator
    checkout: main
    checkout_description: project override ref
    commit: c98b51bcd7fe14b85ed4cf3f5f76552b3455c5f2
    name: openstack-k8s-operators/dataplane-operator
    required: true
    short_name: dataplane-operator
    src_dir: src/github.com/openstack-k8s-operators/dataplane-operator
  github.com/openstack-k8s-operators/edpm-ansible:
    canonical_hostname: github.com
    canonical_name: github.com/openstack-k8s-operators/edpm-ansible
    checkout: main
    checkout_description: project default branch
    commit: 181b1cc6787a986fc316969801a1c9403cae9dcc
    name: openstack-k8s-operators/edpm-ansible
    required: true
    short_name: edpm-ansible
    src_dir: src/github.com/openstack-k8s-operators/edpm-ansible
  github.com/openstack-k8s-operators/infra-operator:
    canonical_hostname: github.com
    canonical_name: github.com/openstack-k8s-operators/infra-operator
    checkout: main
    checkout_description: project override ref
    commit: 786269345f996bd262360738a1e3c6b09171f370
    name: openstack-k8s-operators/infra-operator
    required: true
    short_name: infra-operator
    src_dir: src/github.com/openstack-k8s-operators/infra-operator
  github.com/openstack-k8s-operators/install_yamls:
    canonical_hostname: github.com
    canonical_name: github.com/openstack-k8s-operators/install_yamls
    checkout: main
    checkout_description: project default branch
    commit: 2f838b62fe50aacff3d514af4b502264e0a276a5
    name: openstack-k8s-operators/install_yamls
    required: true
    short_name: install_yamls
    src_dir: src/github.com/openstack-k8s-operators/install_yamls
  github.com/openstack-k8s-operators/openstack-baremetal-operator:
    canonical_hostname: github.com
    canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator
    checkout: master
    checkout_description: zuul branch
    commit: a333e57066b1d48e41f93af68be81188290a96b3
    name: openstack-k8s-operators/openstack-baremetal-operator
    required: true
    short_name: openstack-baremetal-operator
    src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator
  github.com/openstack-k8s-operators/openstack-must-gather:
    canonical_hostname: github.com
    canonical_name: github.com/openstack-k8s-operators/openstack-must-gather
    checkout: main
    checkout_description: project override ref
    commit: 2da49819dd6af6036aede5e4e9a080ff2c6457de
    name: openstack-k8s-operators/openstack-must-gather
    required: true
    short_name: openstack-must-gather
    src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather
  github.com/openstack-k8s-operators/openstack-operator:
    canonical_hostname: github.com
    canonical_name: github.com/openstack-k8s-operators/openstack-operator
    checkout: main
    checkout_description: project override ref
    commit: 0b7b865c642a4e7d0dba878ba5b0b58c4c8afc46
    name: openstack-k8s-operators/openstack-operator
    required: true
    short_name: openstack-operator
    src_dir: src/github.com/openstack-k8s-operators/openstack-operator
  github.com/openstack-k8s-operators/repo-setup:
    canonical_hostname: github.com
    canonical_name: github.com/openstack-k8s-operators/repo-setup
    checkout: main
    checkout_description: project default branch
    commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f
    name: openstack-k8s-operators/repo-setup
    required: true
    short_name: repo-setup
    src_dir: src/github.com/openstack-k8s-operators/repo-setup
  opendev.org/zuul/zuul-jobs:
    canonical_hostname: opendev.org
    canonical_name: opendev.org/zuul/zuul-jobs
    checkout: master
    checkout_description: zuul branch
    commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67
    name: zuul/zuul-jobs
    required: true
    short_name: zuul-jobs
    src_dir: src/opendev.org/zuul/zuul-jobs
  review.rdoproject.org/config:
    canonical_hostname: review.rdoproject.org
    canonical_name: review.rdoproject.org/config
    checkout: master
    checkout_description: zuul branch
    commit: 672a220823fac36a8965fa0d3dca764739bb46c0
    name: config
    required: true
    short_name: config
    src_dir: src/review.rdoproject.org/config
ref: refs/heads/master
resources: {}
tenant: rdoproject.org
timeout: 3600
voting: true
zuul_log_collection: true
home/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible-vars.yml0000644000175000017500000157631715117130656025475 0ustar zuulzuul
_included_dir: changed: false failed: false stat: atime: 1765585173.48184 attr_flags: '' attributes: [] block_size: 4096 blocks: 0 charset: binary ctime: 1765585179.0079749 dev: 64513 device_type: 0 executable: true exists: true gid: 1000 gr_name: zuul inode: 29404127 isblk: false ischr: false isdir: true isfifo: false isgid: false islnk: false isreg: false issock: false isuid: false mimetype: inode/directory mode: '0755' mtime: 1765585179.0079749 nlink: 2 path: /home/zuul/ci-framework-data/artifacts/parameters pw_name: zuul readable: true rgrp: true roth: true rusr: true size: 120 uid: 1000 version: '3456551476' wgrp: false woth: false writeable: true wusr: true xgrp: true xoth: true xusr: true _included_file: changed: false failed: false stat: atime: 1765585178.1579542 attr_flags: '' attributes: [] block_size: 4096 blocks: 8 charset: us-ascii checksum: 2e33113041e3a5353938c20a6bf69b61c806763c ctime: 1765585178.1609542 dev: 64513 device_type: 0 executable: false exists: true gid: 1000 gr_name: zuul inode: 67126185 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mimetype: text/plain
mode: '0600' mtime: 1765585177.8319461 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml pw_name: zuul readable: true rgrp: false roth: false rusr: true size: 288 uid: 1000 version: '2907456442' wgrp: false woth: false writeable: true wusr: true xgrp: false xoth: false xusr: false _parsed_vars: changed: false content: Y2lmbXdfb3BlbnNoaWZ0X2FwaTogaHR0cHM6Ly9hcGkuY3JjLnRlc3Rpbmc6NjQ0MwpjaWZtd19vcGVuc2hpZnRfY29udGV4dDogZGVmYXVsdC9hcGktY3JjLXRlc3Rpbmc6NjQ0My9rdWJlYWRtaW4KY2lmbXdfb3BlbnNoaWZ0X2t1YmVjb25maWc6IC9ob21lL3p1dWwvLmNyYy9tYWNoaW5lcy9jcmMva3ViZWNvbmZpZwpjaWZtd19vcGVuc2hpZnRfdG9rZW46IHNoYTI1Nn5DUkR6UkRFd01sSWhiZVZKTENyU0NoZEM2NzR6MlVtQ3JrQmVqVDdVRVA0CmNpZm13X29wZW5zaGlmdF91c2VyOiBrdWJlYWRtaW4K encoding: base64 failed: false source: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml _tmp_dir: changed: true failed: false gid: 10001 group: zuul mode: '0700' owner: zuul path: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/tmp/ansible.oapf2yv0 size: 40 state: directory uid: 10001 _yaml_files: changed: false examined: 4 failed: false files: - atime: 1765585110.960312 ctime: 1765585109.0992665 dev: 64513 gid: 1000 gr_name: zuul inode: 46190614 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mode: '0644' mtime: 1765585108.8522606 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/zuul-params.yml pw_name: zuul rgrp: true roth: true rusr: true size: 19386 uid: 1000 wgrp: false woth: false wusr: true xgrp: false xoth: false xusr: false - atime: 1765585179.0079749 ctime: 1765585179.0109751 dev: 64513 gid: 1000 gr_name: zuul inode: 83901671 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mode: '0600' mtime: 1765585178.8539712 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/install-yamls-params.yml pw_name: zuul rgrp: false roth: 
false rusr: true size: 28065 uid: 1000 wgrp: false woth: false wusr: true xgrp: false xoth: false xusr: false - atime: 1765585173.48184 ctime: 1765585172.0498047 dev: 64513 gid: 1000 gr_name: zuul inode: 159439335 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mode: '0644' mtime: 1765585171.8578 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/custom-params.yml pw_name: zuul rgrp: true roth: true rusr: true size: 1126 uid: 1000 wgrp: false woth: false wusr: true xgrp: false xoth: false xusr: false - atime: 1765585178.1579542 ctime: 1765585178.1609542 dev: 64513 gid: 1000 gr_name: zuul inode: 67126185 isblk: false ischr: false isdir: false isfifo: false isgid: false islnk: false isreg: true issock: false isuid: false mode: '0600' mtime: 1765585177.8319461 nlink: 1 path: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml pw_name: zuul rgrp: false roth: false rusr: true size: 288 uid: 1000 wgrp: false woth: false wusr: true xgrp: false xoth: false xusr: false matched: 4 msg: All paths examined skipped_paths: {} ansible_all_ipv4_addresses: - 38.102.83.182 ansible_all_ipv6_addresses: - fe80::f816:3eff:fe96:e4 ansible_apparmor: status: disabled ansible_architecture: x86_64 ansible_bios_date: 04/01/2014 ansible_bios_vendor: SeaBIOS ansible_bios_version: 1.15.0-1 ansible_board_asset_tag: NA ansible_board_name: NA ansible_board_serial: NA ansible_board_vendor: NA ansible_board_version: NA ansible_chassis_asset_tag: NA ansible_chassis_serial: NA ansible_chassis_vendor: QEMU ansible_chassis_version: pc-i440fx-6.2 ansible_check_mode: false ansible_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ansible_collection_name: null ansible_config_file: 
/var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/post_playbook_1/ansible.cfg ansible_connection: ssh ansible_date_time: date: '2025-12-13' day: '13' epoch: '1765585301' epoch_int: '1765585301' hour: '00' iso8601: '2025-12-13T00:21:41Z' iso8601_basic: 20251213T002141574207 iso8601_basic_short: 20251213T002141 iso8601_micro: '2025-12-13T00:21:41.574207Z' minute: '21' month: '12' second: '41' time: 00:21:41 tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Saturday weekday_number: '6' weeknumber: '49' year: '2025' ansible_default_ipv4: address: 38.102.83.182 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:96:00:e4 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether ansible_default_ipv6: {} ansible_dependent_role_names: [] ansible_device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2025-12-13-00-01-20-00 vda1: - cbdedf45-ed1d-4952-82a8-33a12c0ba266 ansible_devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2025-12-13-00-01-20-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - cbdedf45-ed1d-4952-82a8-33a12c0ba266 sectors: '167770079' sectorsize: 512 size: 80.00 GB start: '2048' uuid: cbdedf45-ed1d-4952-82a8-33a12c0ba266 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '167772160' sectorsize: '512' size: 80.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 ansible_diff_mode: false ansible_distribution: CentOS ansible_distribution_file_parsed: true 
ansible_distribution_file_path: /etc/centos-release ansible_distribution_file_variety: CentOS ansible_distribution_major_version: '9' ansible_distribution_release: Stream ansible_distribution_version: '9' ansible_dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 ansible_domain: '' ansible_effective_group_id: 1000 ansible_effective_user_id: 1000 ansible_env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 43968 22 SSH_CONNECTION: 38.102.83.114 43968 38.102.83.182 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '16' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f ansible_eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 
'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.182 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fe96:e4 prefix: '64' scope: link macaddress: fa:16:3e:96:00:e4 module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether ansible_facts: _ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.182 all_ipv6_addresses: - fe80::f816:3eff:fe96:e4 ansible_local: {} apparmor: status: disabled architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA chassis_asset_tag: NA 
chassis_serial: NA chassis_vendor: QEMU chassis_version: pc-i440fx-6.2 cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 crc_ci_bootstrap_instance_default_net_config: mtu: 1500 range: 192.168.122.0/24 crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage value: ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2025-12-13T00:08:36Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 02849e46-923a-4039-a741-7220969b703a hardware_offload_type: null hints: '' id: 42ec78f8-dc44-4516-ac4e-135cec7f7487 ip_allocation: immediate mac_address: fa:16:3e:91:d3:27 name: crc-02c5855c-ecb0-48ea-8e20-d08d70e9697e network_id: dec04d5b-6116-4933-889e-8847df6f684f numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2025-12-13T00:08:36Z' crc_ci_bootstrap_network_name: zuul-ci-net-1ff1f9e2 crc_ci_bootstrap_networks_out: controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:b1:85:fe mtu: 1500 crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 
ip: 192.168.122.10/24 mac: fa:16:3e:91:d3:27 mtu: 1500 internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:9f:c2:b6 mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:ea:0a:6d mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:3e:f4:b3 mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-12-13T00:08:02Z' description: '' dns_domain: '' id: dec04d5b-6116-4933-889e-8847df6f684f ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: false l2_adjacency: true mtu: 1500 name: zuul-ci-net-1ff1f9e2 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2025-12-13T00:08:02Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-12-13T00:08:08Z' description: '' enable_ndp_proxy: null external_gateway_info: enable_snat: true external_fixed_ips: - ip_address: 38.102.83.45 subnet_id: 3169b11b-94b1-4bc9-9727-4fdbbe15e56e network_id: 7abff1a9-a103-46d0-979a-1f1e599f4f41 flavor_id: null id: 9c9866c3-8654-46c3-8ace-99e01a86017f name: zuul-ci-subnet-router-1ff1f9e2 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 3 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2025-12-13T00:08:09Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: 
'2025-12-13T00:08:06Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 192.168.122.1 host_routes: [] id: 02849e46-923a-4039-a741-7220969b703a ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-1ff1f9e2 network_id: dec04d5b-6116-4933-889e-8847df6f684f project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] subnetpool_id: null tags: [] updated_at: '2025-12-13T00:08:06Z' crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-1ff1f9e2 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-1ff1f9e2 date_time: date: '2025-12-13' day: '13' epoch: '1765585301' epoch_int: '1765585301' hour: '00' iso8601: '2025-12-13T00:21:41Z' iso8601_basic: 20251213T002141574207 iso8601_basic_short: 20251213T002141 iso8601_micro: '2025-12-13T00:21:41.574207Z' minute: '21' month: '12' second: '41' time: 00:21:41 tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Saturday weekday_number: '6' weeknumber: '49' year: '2025' default_ipv4: address: 38.102.83.182 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:96:00:e4 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2025-12-13-00-01-20-00 vda1: - cbdedf45-ed1d-4952-82a8-33a12c0ba266 devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2025-12-13-00-01-20-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: 
[] uuids: - cbdedf45-ed1d-4952-82a8-33a12c0ba266 sectors: '167770079' sectorsize: 512 size: 80.00 GB start: '2048' uuid: cbdedf45-ed1d-4952-82a8-33a12c0ba266 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '167772160' sectorsize: '512' size: 80.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3 distribution: CentOS distribution_file_parsed: true distribution_file_path: /etc/centos-release distribution_file_variety: CentOS distribution_major_version: '9' distribution_release: Stream distribution_version: '9' dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 domain: '' effective_group_id: 1000 effective_user_id: 1000 env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 43968 22 SSH_CONNECTION: 38.102.83.114 43968 38.102.83.182 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '16' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off 
[fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.182 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fe96:e4 prefix: '64' scope: link macaddress: fa:16:3e:96:00:e4 module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether 
fibre_channel_wwn: [] fips: false form_factor: Other fqdn: controller gather_subset: - min hostname: controller hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 interfaces: - eth0 - lo is_chroot: false iscsi_iqn: '' kernel: 5.14.0-648.el9.x86_64 kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 
'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback loadavg: 15m: 0.06 1m: 0.0 5m: 0.09 locally_reachable_ips: ipv4: - 38.102.83.182 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fe96:e4 lsb: {} lvm: N/A machine: x86_64 machine_id: 64f1d6692049d8be5e8b216cc203502c memfree_mb: 7164 memory_mb: nocache: free: 7375 used: 304 real: free: 7164 total: 7679 used: 515 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 7679 module_setup: true mounts: - block_available: 20336157 block_size: 4096 block_total: 20954875 block_used: 618718 device: /dev/vda1 fstype: xfs inode_available: 41888385 inode_total: 41942512 inode_used: 54127 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83296899072 size_total: 85831168000 uuid: cbdedf45-ed1d-4952-82a8-33a12c0ba266 nodename: controller os_family: RedHat pkg_mgr: dnf proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 8 processor_nproc: 8 processor_threads_per_core: 1 
processor_vcpus: 8 product_name: OpenStack Nova product_serial: NA product_uuid: NA product_version: 26.3.1 python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 python_version: 3.9.25 real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted selinux_python_present: true service_mgr: systemd ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBfIWnOzoaFl6L11147qWcwowK6Ci0Rz8t1WjAVB/zcYVQE7pudrJ717ZfSW85tw14Xjf9dwVFE9kociqbG0zJc= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAINkLqbGxYdqx81uU7hEPuFtk8VGcR7wMa2mI4eVESIvr ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQDFMl9iNNfpkBemj+80eKwpd7sRDpIn97HaSAZ/63vqGAbPhgMiH6WngR/zO4oyGyB8lf6fsD+w4LKZgiuAhBwp++i2IcfE5Kfy6quV1X1wL5NwHAombTIg8qtOBzyJxFksJBIHLn8mcWWkttFKy18Ou9KhVGzrDOe8XFSy+jDSiZpPmx7DYjwRg7irJ8dfyyG0bjzrw/C5eBQvyGVsr9RNSDlOv5XmLkybsyqg8nCjNrNEBaKrpRf51w6wWHrTzl/U492b0rnW+3xzYRAnuOhrbIP0OoK+92VqKKeAld7BUW4ZL3PPogxoRhuieoWCGzznwQBUdar6WNRcaUSK3mzSjHikjwkUAl9SR7srM4T9Tc4Yf13/dZ3kOQFQNkH9Xl2+vEHfr9xPC5dknmfsFD4wIdvGCo7MW7+D6tD55fkNhgwMl9YAT7IJVbxrwKBQtHspcwjxuQqyuNN48RMbfflqtKlYaiSqT6TmhmJHfFpToEiZ5O0I11Jw+S6nVSf65MU= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 336 user_dir: /home/zuul user_gecos: '' user_gid: 1000 user_id: zuul user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack zuul_change_list: - service-telemetry-operator ansible_fibre_channel_wwn: [] 
ansible_fips: false ansible_forks: 5 ansible_form_factor: Other ansible_fqdn: controller ansible_host: 38.102.83.182 ansible_hostname: controller ansible_hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 ansible_interfaces: - eth0 - lo ansible_inventory_sources: - /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/post_playbook_1/inventory.yaml ansible_is_chroot: false ansible_iscsi_iqn: '' ansible_kernel: 5.14.0-648.el9.x86_64 ansible_kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025' ansible_lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off 
[fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback ansible_loadavg: 15m: 0.06 1m: 0.0 5m: 0.09 ansible_local: {} ansible_locally_reachable_ips: ipv4: - 38.102.83.182 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fe96:e4 ansible_lsb: {} ansible_lvm: N/A ansible_machine: x86_64 ansible_machine_id: 64f1d6692049d8be5e8b216cc203502c ansible_memfree_mb: 7164 ansible_memory_mb: nocache: free: 7375 used: 304 real: free: 7164 total: 7679 used: 515 swap: cached: 0 free: 0 total: 0 used: 0 ansible_memtotal_mb: 7679 ansible_mounts: - block_available: 20336157 block_size: 4096 block_total: 20954875 block_used: 618718 device: /dev/vda1 fstype: xfs inode_available: 41888385 inode_total: 41942512 inode_used: 54127 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83296899072 size_total: 85831168000 uuid: cbdedf45-ed1d-4952-82a8-33a12c0ba266 ansible_nodename: controller ansible_os_family: RedHat ansible_parent_role_names: - cifmw_setup ansible_parent_role_paths: - /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/roles/cifmw_setup ansible_pkg_mgr: dnf ansible_play_batch: &id002 - controller ansible_play_hosts: - controller ansible_play_hosts_all: - controller - crc ansible_play_name: Run ci/playbooks/e2e-collect-logs.yml 
ansible_play_role_names: &id003 - run_hook - os_must_gather - artifacts - env_op_images - run_hook - cifmw_setup ansible_playbook_python: /usr/lib/zuul/ansible/8/bin/python ansible_port: 22 ansible_proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ansible_processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor ansible_processor_cores: 1 ansible_processor_count: 8 ansible_processor_nproc: 8 ansible_processor_threads_per_core: 1 ansible_processor_vcpus: 8 ansible_product_name: OpenStack Nova ansible_product_serial: NA ansible_product_uuid: NA ansible_product_version: 26.3.1 ansible_python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 ansible_python_interpreter: auto ansible_python_version: 3.9.25 ansible_real_group_id: 1000 ansible_real_user_id: 1000 ansible_role_name: artifacts ansible_role_names: - os_must_gather - run_hook - artifacts - cifmw_setup - env_op_images ansible_run_tags: - all ansible_scp_extra_args: -o PermitLocalCommand=no ansible_selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted ansible_selinux_python_present: true ansible_service_mgr: systemd ansible_sftp_extra_args: -o PermitLocalCommand=no ansible_skip_tags: [] ansible_ssh_common_args: -o PermitLocalCommand=no ansible_ssh_executable: ssh ansible_ssh_extra_args: -o PermitLocalCommand=no ansible_ssh_host_key_ecdsa_public: 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBfIWnOzoaFl6L11147qWcwowK6Ci0Rz8t1WjAVB/zcYVQE7pudrJ717ZfSW85tw14Xjf9dwVFE9kociqbG0zJc= ansible_ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ansible_ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAINkLqbGxYdqx81uU7hEPuFtk8VGcR7wMa2mI4eVESIvr ansible_ssh_host_key_ed25519_public_keytype: ssh-ed25519 ansible_ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQDFMl9iNNfpkBemj+80eKwpd7sRDpIn97HaSAZ/63vqGAbPhgMiH6WngR/zO4oyGyB8lf6fsD+w4LKZgiuAhBwp++i2IcfE5Kfy6quV1X1wL5NwHAombTIg8qtOBzyJxFksJBIHLn8mcWWkttFKy18Ou9KhVGzrDOe8XFSy+jDSiZpPmx7DYjwRg7irJ8dfyyG0bjzrw/C5eBQvyGVsr9RNSDlOv5XmLkybsyqg8nCjNrNEBaKrpRf51w6wWHrTzl/U492b0rnW+3xzYRAnuOhrbIP0OoK+92VqKKeAld7BUW4ZL3PPogxoRhuieoWCGzznwQBUdar6WNRcaUSK3mzSjHikjwkUAl9SR7srM4T9Tc4Yf13/dZ3kOQFQNkH9Xl2+vEHfr9xPC5dknmfsFD4wIdvGCo7MW7+D6tD55fkNhgwMl9YAT7IJVbxrwKBQtHspcwjxuQqyuNN48RMbfflqtKlYaiSqT6TmhmJHfFpToEiZ5O0I11Jw+S6nVSf65MU= ansible_ssh_host_key_rsa_public_keytype: ssh-rsa ansible_swapfree_mb: 0 ansible_swaptotal_mb: 0 ansible_system: Linux ansible_system_capabilities: - '' ansible_system_capabilities_enforced: 'True' ansible_system_vendor: OpenStack Foundation ansible_uptime_seconds: 336 ansible_user: zuul ansible_user_dir: /home/zuul ansible_user_gecos: '' ansible_user_gid: 1000 ansible_user_id: zuul ansible_user_shell: /bin/bash ansible_user_uid: 1000 ansible_userspace_architecture: x86_64 ansible_userspace_bits: '64' ansible_verbosity: 1 ansible_version: full: 2.15.12 major: 2 minor: 15 revision: 12 string: 2.15.12 ansible_virtualization_role: guest ansible_virtualization_tech_guest: - openstack ansible_virtualization_tech_host: - kvm ansible_virtualization_type: openstack cifmw_architecture_repo: /home/zuul/src/github.com/openstack-k8s-operators/architecture cifmw_architecture_repo_relative: src/github.com/openstack-k8s-operators/architecture cifmw_artifacts_basedir: '{{ cifmw_basedir | default(ansible_user_dir ~ ''/ci-framework-data'') }}' 
cifmw_artifacts_crc_host: api.crc.testing cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_artifacts_crc_sshkey_ed25519: ~/.crc/machines/crc/id_ed25519 cifmw_artifacts_crc_user: core cifmw_artifacts_gather_logs: true cifmw_artifacts_mask_logs: true cifmw_basedir: /home/zuul/ci-framework-data cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_env_op_images_dir: '{{ cifmw_basedir | default(ansible_user_dir ~ ''/ci-framework-data'') }}' cifmw_env_op_images_dryrun: false cifmw_env_op_images_file: operator_images.yaml cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sE**********U= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack 
BMO_BRANCH: release-0.9 BMO_CLEANUP: 'true' BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: '' BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: 
https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: 
designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 76**********f0 HEAT_BRANCH: main 
HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: 'true' INSTALL_NMSTATE: true || false INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 
172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE: quay.io/metal3-io/ironic IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: 
main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: CO**********6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '12**********42' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: 
'1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git 
NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: 
/home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12**********78' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt RABBITMQ: 
docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: os**********et SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: 
quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: test/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' tripleo_deploy: 'export REGISTRY_USER:' cifmw_install_yamls_environment: CHECKOUT_FROM_OPENSTACK_REF: 'true' KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm cifmw_installyamls_repos: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls cifmw_installyamls_repos_relative: src/github.com/openstack-k8s-operators/install_yamls cifmw_openshift_api: https://api.crc.testing:6443 cifmw_openshift_context: default/api-crc-testing:6443/kubeadmin cifmw_openshift_kubeconfig: /home/zuul/.crc/machines/crc/kubeconfig cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_token: sha256~CRDzRDEwMlIhbeVJLCrSChdC674z2UmCrkBejT7UEP4 cifmw_openshift_user: kubeadmin cifmw_os_must_gather_additional_namespaces: kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko cifmw_os_must_gather_dump_db: ALL cifmw_os_must_gather_host_network: false cifmw_os_must_gather_image: quay.io/openstack-k8s-operators/openstack-must-gather:latest cifmw_os_must_gather_image_push: true cifmw_os_must_gather_image_registry: quay.rdoproject.org/openstack-k8s-operators cifmw_os_must_gather_kubeconfig: '{{ ansible_user_dir }}/.kube/config' cifmw_os_must_gather_namespaces: - 
- openstack-operators
- openstack
- baremetal-operator-system
- openshift-machine-api
- cert-manager
- openshift-nmstate
- openshift-marketplace
- metallb-system
- crc-storage
cifmw_os_must_gather_output_dir: '{{ cifmw_basedir | default(ansible_user_dir ~ ''/ci-framework-data'') }}'
cifmw_os_must_gather_output_log_dir: '{{ cifmw_os_must_gather_output_dir }}/logs/openstack-must-gather'
cifmw_os_must_gather_repo_path: '{{ ansible_user_dir }}/src/github.com/openstack-k8s-operators/openstack-must-gather'
cifmw_os_must_gather_timeout: 30m
cifmw_path: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:~/.crc/bin:~/.crc/bin/oc:~/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
cifmw_project_dir: src/github.com/openstack-k8s-operators/ci-framework
cifmw_project_dir_absolute: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework
cifmw_run_hook_debug: '{{ (ansible_verbosity | int) >= 2 | bool }}'
cifmw_run_tests: false
cifmw_status:
  changed: false
  failed: false
  stat:
    atime: 1765585230.1122248
    attr_flags: ''
    attributes: []
    block_size: 4096
    blocks: 8
    charset: binary
    ctime: 1765585233.7253132
    dev: 64513
    device_type: 0
    executable: true
    exists: true
    gid: 1000
    gr_name: zuul
    inode: 16782053
    isblk: false
    ischr: false
    isdir: true
    isfifo: false
    isgid: false
    islnk: false
    isreg: false
    issock: false
    isuid: false
    mimetype: inode/directory
    mode: '0755'
    mtime: 1765585233.7253132
    nlink: 21
    path: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework
    pw_name: zuul
    readable: true
    rgrp: true
    roth: true
    rusr: true
    size: 4096
    uid: 1000
    version: '3877636691'
    wgrp: false
    woth: false
    writeable: true
    wusr: true
    xgrp: true
    xoth: true
    xusr: true
cifmw_success_flag:
  changed: false
  failed: false
  stat:
    exists: false
cifmw_use_libvirt: false
cifmw_zuul_target_host: controller
crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}'
crc_ci_bootstrap_instance_default_net_config:
  mtu: 1500
  range: 192.168.122.0/24
crc_ci_bootstrap_instance_nm_vlan_networks:
  - key: internal-api
    value:
      ip: 172.17.0.5
  - key: storage
    value:
      ip: 172.18.0.5
  - key: tenant
    value:
      ip: 172.19.0.5
crc_ci_bootstrap_instance_parent_port_create_yaml:
  admin_state_up: true
  allowed_address_pairs: []
  binding_host_id: null
  binding_profile: {}
  binding_vif_details: {}
  binding_vif_type: null
  binding_vnic_type: normal
  created_at: '2025-12-13T00:08:36Z'
  data_plane_status: null
  description: ''
  device_id: ''
  device_owner: ''
  device_profile: null
  dns_assignment:
    - fqdn: host-192-168-122-10.openstacklocal.
      hostname: host-192-168-122-10
      ip_address: 192.168.122.10
  dns_domain: ''
  dns_name: ''
  extra_dhcp_opts: []
  fixed_ips:
    - ip_address: 192.168.122.10
      subnet_id: 02849e46-923a-4039-a741-7220969b703a
  hardware_offload_type: null
  hints: ''
  id: 42ec78f8-dc44-4516-ac4e-135cec7f7487
  ip_allocation: immediate
  mac_address: fa:16:3e:91:d3:27
  name: crc-02c5855c-ecb0-48ea-8e20-d08d70e9697e
  network_id: dec04d5b-6116-4933-889e-8847df6f684f
  numa_affinity_policy: null
  port_security_enabled: false
  project_id: 4b633c451ac74233be3721a3635275e5
  propagate_uplink_status: null
  qos_network_policy_id: null
  qos_policy_id: null
  resource_request: null
  revision_number: 1
  security_group_ids: []
  status: DOWN
  tags: []
  trunk_details: null
  trusted: null
  updated_at: '2025-12-13T00:08:36Z'
crc_ci_bootstrap_network_name: zuul-ci-net-1ff1f9e2
crc_ci_bootstrap_networking:
  instances:
    controller:
      networks:
        default:
          ip: 192.168.122.11
    crc:
      networks:
        default:
          ip: 192.168.122.10
        internal-api:
          ip: 172.17.0.5
        storage:
          ip: 172.18.0.5
        tenant:
          ip: 172.19.0.5
  networks:
    default:
      mtu: 1500
      range: 192.168.122.0/24
    internal-api:
      range: 172.17.0.0/24
      vlan: 20
    storage:
      range: 172.18.0.0/24
      vlan: 21
    tenant:
      range: 172.19.0.0/24
      vlan: 22
crc_ci_bootstrap_networks_out:
  controller:
    default:
      connection: ci-private-network
      gw: 192.168.122.1
      iface: eth1
      ip: 192.168.122.11/24
      mac: fa:16:3e:b1:85:fe
      mtu: 1500
  crc:
    default:
      connection: ci-private-network
      gw: 192.168.122.1
      iface: ens7
      ip: 192.168.122.10/24
      mac: fa:16:3e:91:d3:27
      mtu: 1500
    internal-api:
      connection: ci-private-network-20
      iface: ens7.20
      ip: 172.17.0.5/24
      mac: 52:54:00:9f:c2:b6
      mtu: '1496'
      parent_iface: ens7
      vlan: 20
    storage:
      connection: ci-private-network-21
      iface: ens7.21
      ip: 172.18.0.5/24
      mac: 52:54:00:ea:0a:6d
      mtu: '1496'
      parent_iface: ens7
      vlan: 21
    tenant:
      connection: ci-private-network-22
      iface: ens7.22
      ip: 172.19.0.5/24
      mac: 52:54:00:3e:f4:b3
      mtu: '1496'
      parent_iface: ens7
      vlan: 22
crc_ci_bootstrap_private_net_create_yaml:
  admin_state_up: true
  availability_zone_hints:
    - nova
  availability_zones: []
  created_at: '2025-12-13T00:08:02Z'
  description: ''
  dns_domain: ''
  id: dec04d5b-6116-4933-889e-8847df6f684f
  ipv4_address_scope: null
  ipv6_address_scope: null
  is_default: false
  is_vlan_qinq: null
  is_vlan_transparent: false
  l2_adjacency: true
  mtu: 1500
  name: zuul-ci-net-1ff1f9e2
  port_security_enabled: false
  project_id: 4b633c451ac74233be3721a3635275e5
  provider:network_type: null
  provider:physical_network: null
  provider:segmentation_id: null
  qos_policy_id: null
  revision_number: 1
  router:external: false
  segments: null
  shared: false
  status: ACTIVE
  subnets: []
  tags: []
  updated_at: '2025-12-13T00:08:02Z'
crc_ci_bootstrap_private_router_create_yaml:
  admin_state_up: true
  availability_zone_hints:
    - nova
  availability_zones: []
  created_at: '2025-12-13T00:08:08Z'
  description: ''
  enable_ndp_proxy: null
  external_gateway_info:
    enable_snat: true
    external_fixed_ips:
      - ip_address: 38.102.83.45
        subnet_id: 3169b11b-94b1-4bc9-9727-4fdbbe15e56e
    network_id: 7abff1a9-a103-46d0-979a-1f1e599f4f41
  flavor_id: null
  id: 9c9866c3-8654-46c3-8ace-99e01a86017f
  name: zuul-ci-subnet-router-1ff1f9e2
  project_id: 4b633c451ac74233be3721a3635275e5
  revision_number: 3
  routes: []
  status: ACTIVE
  tags: []
  tenant_id: 4b633c451ac74233be3721a3635275e5
  updated_at: '2025-12-13T00:08:09Z'
crc_ci_bootstrap_private_subnet_create_yaml:
  allocation_pools:
    - end: 192.168.122.254
      start: 192.168.122.2
  cidr: 192.168.122.0/24
  created_at: '2025-12-13T00:08:06Z'
  description: ''
  dns_nameservers: []
  dns_publish_fixed_ip: null
  enable_dhcp: false
  gateway_ip: 192.168.122.1
  host_routes: []
  id: 02849e46-923a-4039-a741-7220969b703a
  ip_version: 4
  ipv6_address_mode: null
  ipv6_ra_mode: null
  name: zuul-ci-subnet-1ff1f9e2
  network_id: dec04d5b-6116-4933-889e-8847df6f684f
  project_id: 4b633c451ac74233be3721a3635275e5
  revision_number: 0
  segment_id: null
  service_types: []
  subnetpool_id: null
  tags: []
  updated_at: '2025-12-13T00:08:06Z'
crc_ci_bootstrap_provider_dns:
  - 199.204.44.24
  - 199.204.47.54
crc_ci_bootstrap_router_name: zuul-ci-subnet-router-1ff1f9e2
crc_ci_bootstrap_subnet_name: zuul-ci-subnet-1ff1f9e2
discovered_interpreter_python: /usr/bin/python3
enable_ramdisk: true
environment:
  - ANSIBLE_LOG_PATH: '{{ ansible_user_dir }}/ci-framework-data/logs/e2e-collect-logs-must-gather.log'
gather_subset:
  - min
group_names:
  - ungrouped
groups:
  all:
    - controller
    - crc
  ungrouped: &id001
    - controller
    - crc
  zuul_unreachable: []
hostvars:
  controller:
    _included_dir:
      changed: false
      failed: false
      stat:
        atime: 1765585173.48184
        attr_flags: ''
        attributes: []
        block_size: 4096
        blocks: 0
        charset: binary
        ctime: 1765585179.0079749
        dev: 64513
        device_type: 0
        executable: true
        exists: true
        gid: 1000
        gr_name: zuul
        inode: 29404127
        isblk: false
        ischr: false
        isdir: true
        isfifo: false
        isgid: false
        islnk: false
        isreg: false
        issock: false
        isuid: false
        mimetype: inode/directory
        mode: '0755'
        mtime: 1765585179.0079749
        nlink: 2
        path: /home/zuul/ci-framework-data/artifacts/parameters
        pw_name: zuul
        readable: true
        rgrp: true
        roth: true
        rusr: true
        size: 120
        uid: 1000
        version: '3456551476'
        wgrp: false
        woth: false
        writeable: true
        wusr: true
        xgrp: true
        xoth: true
        xusr: true
    _included_file:
      changed: false
      failed: false
      stat:
        atime: 1765585178.1579542
        attr_flags: ''
        attributes: []
        block_size: 4096
        blocks: 8
        charset: us-ascii
        checksum: 2e33113041e3a5353938c20a6bf69b61c806763c
        ctime: 1765585178.1609542
        dev: 64513
        device_type: 0
        executable: false
        exists: true
        gid: 1000
        gr_name: zuul
        inode: 67126185
        isblk: false
        ischr: false
        isdir: false
        isfifo: false
        isgid: false
        islnk: false
        isreg: true
        issock: false
        isuid: false
        mimetype: text/plain
        mode: '0600'
        mtime: 1765585177.8319461
        nlink: 1
        path: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml
        pw_name: zuul
        readable: true
        rgrp: false
        roth: false
        rusr: true
        size: 288
        uid: 1000
        version: '2907456442'
        wgrp: false
        woth: false
        writeable: true
        wusr: true
        xgrp: false
        xoth: false
        xusr: false
    _parsed_vars:
      changed: false
      content: Y2lmbXdfb3BlbnNoaWZ0X2FwaTogaHR0cHM6Ly9hcGkuY3JjLnRlc3Rpbmc6NjQ0MwpjaWZtd19vcGVuc2hpZnRfY29udGV4dDogZGVmYXVsdC9hcGktY3JjLXRlc3Rpbmc6NjQ0My9rdWJlYWRtaW4KY2lmbXdfb3BlbnNoaWZ0X2t1YmVjb25maWc6IC9ob21lL3p1dWwvLmNyYy9tYWNoaW5lcy9jcmMva3ViZWNvbmZpZwpjaWZtd19vcGVuc2hpZnRfdG9rZW46IHNoYTI1Nn5DUkR6UkRFd01sSWhiZVZKTENyU0NoZEM2NzR6MlVtQ3JrQmVqVDdVRVA0CmNpZm13X29wZW5zaGlmdF91c2VyOiBrdWJlYWRtaW4K
      encoding: base64
      failed: false
      source: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml
    _tmp_dir:
      changed: true
      failed: false
      gid: 10001
      group: zuul
      mode: '0700'
      owner: zuul
      path: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/tmp/ansible.oapf2yv0
      size: 40
      state: directory
      uid: 10001
    _yaml_files:
      changed: false
      examined: 4
      failed: false
      files:
        - atime: 1765585110.960312
          ctime: 1765585109.0992665
          dev: 64513
          gid: 1000
          gr_name: zuul
          inode: 46190614
          isblk: false
          ischr: false
          isdir: false
          isfifo: false
          isgid: false
          islnk: false
          isreg: true
          issock: false
          isuid: false
          mode: '0644'
          mtime: 1765585108.8522606
          nlink: 1
          path: /home/zuul/ci-framework-data/artifacts/parameters/zuul-params.yml
          pw_name: zuul
          rgrp: true
          roth: true
          rusr: true
          size: 19386
          uid: 1000
          wgrp: false
          woth: false
          wusr: true
          xgrp: false
          xoth: false
          xusr: false
        - atime: 1765585179.0079749
          ctime: 1765585179.0109751
          dev: 64513
          gid: 1000
          gr_name: zuul
          inode: 83901671
          isblk: false
          ischr: false
          isdir: false
          isfifo: false
          isgid: false
          islnk: false
          isreg: true
          issock: false
          isuid: false
          mode: '0600'
          mtime: 1765585178.8539712
          nlink: 1
          path: /home/zuul/ci-framework-data/artifacts/parameters/install-yamls-params.yml
          pw_name: zuul
          rgrp: false
          roth: false
          rusr: true
          size: 28065
          uid: 1000
          wgrp: false
          woth: false
          wusr: true
          xgrp: false
          xoth: false
          xusr: false
        - atime: 1765585173.48184
          ctime: 1765585172.0498047
          dev: 64513
          gid: 1000
          gr_name: zuul
          inode: 159439335
          isblk: false
          ischr: false
          isdir: false
          isfifo: false
          isgid: false
          islnk: false
          isreg: true
          issock: false
          isuid: false
          mode: '0644'
          mtime: 1765585171.8578
          nlink: 1
          path: /home/zuul/ci-framework-data/artifacts/parameters/custom-params.yml
          pw_name: zuul
          rgrp: true
          roth: true
          rusr: true
          size: 1126
          uid: 1000
          wgrp: false
          woth: false
          wusr: true
          xgrp: false
          xoth: false
          xusr: false
        - atime: 1765585178.1579542
          ctime: 1765585178.1609542
          dev: 64513
          gid: 1000
          gr_name: zuul
          inode: 67126185
          isblk: false
          ischr: false
          isdir: false
          isfifo: false
          isgid: false
          islnk: false
          isreg: true
          issock: false
          isuid: false
          mode: '0600'
          mtime: 1765585177.8319461
          nlink: 1
          path: /home/zuul/ci-framework-data/artifacts/parameters/openshift-login-params.yml
          pw_name: zuul
          rgrp: false
          roth: false
          rusr: true
          size: 288
          uid: 1000
          wgrp: false
          woth: false
          wusr: true
          xgrp: false
          xoth: false
          xusr: false
      matched: 4
      msg: All paths examined
      skipped_paths: {}
    ansible_all_ipv4_addresses:
      - 38.102.83.182
    ansible_all_ipv6_addresses:
      - fe80::f816:3eff:fe96:e4
    ansible_apparmor:
      status: disabled
    ansible_architecture: x86_64
    ansible_bios_date: 04/01/2014
    ansible_bios_vendor: SeaBIOS
    ansible_bios_version: 1.15.0-1
    ansible_board_asset_tag: NA
    ansible_board_name: NA
    ansible_board_serial: NA
    ansible_board_vendor: NA
    ansible_board_version: NA
    ansible_chassis_asset_tag: NA
    ansible_chassis_serial: NA
    ansible_chassis_vendor: QEMU
    ansible_chassis_version: pc-i440fx-6.2
    ansible_check_mode: false
    ansible_cmdline:
      BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
      console: ttyS0,115200n8
      crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M
      net.ifnames: '0'
      no_timer_check: true
      ro: true
      root: UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266
    ansible_config_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/post_playbook_1/ansible.cfg
    ansible_connection: ssh
    ansible_date_time:
      date: '2025-12-13'
      day: '13'
      epoch: '1765585301'
      epoch_int: '1765585301'
      hour: '00'
      iso8601: '2025-12-13T00:21:41Z'
      iso8601_basic: 20251213T002141574207
      iso8601_basic_short: 20251213T002141
      iso8601_micro: '2025-12-13T00:21:41.574207Z'
      minute: '21'
      month: '12'
      second: '41'
      time: 00:21:41
      tz: UTC
      tz_dst: UTC
      tz_offset: '+0000'
      weekday: Saturday
      weekday_number: '6'
      weeknumber: '49'
      year: '2025'
    ansible_default_ipv4:
      address: 38.102.83.182
      alias: eth0
      broadcast: 38.102.83.255
      gateway: 38.102.83.1
      interface: eth0
      macaddress: fa:16:3e:96:00:e4
      mtu: 1500
      netmask: 255.255.255.0
      network: 38.102.83.0
      prefix: '24'
      type: ether
    ansible_default_ipv6: {}
    ansible_device_links:
      ids:
        sr0:
          - ata-QEMU_DVD-ROM_QM00001
      labels:
        sr0:
          - config-2
      masters: {}
      uuids:
        sr0:
          - 2025-12-13-00-01-20-00
        vda1:
          - cbdedf45-ed1d-4952-82a8-33a12c0ba266
    ansible_devices:
      sr0:
        holders: []
        host: ''
        links:
          ids:
            - ata-QEMU_DVD-ROM_QM00001
          labels:
            - config-2
          masters: []
          uuids:
            - 2025-12-13-00-01-20-00
        model: QEMU DVD-ROM
        partitions: {}
        removable: '1'
        rotational: '1'
        sas_address: null
        sas_device_handle: null
        scheduler_mode: mq-deadline
        sectors: '964'
        sectorsize: '2048'
        size: 482.00 KB
        support_discard: '2048'
        vendor: QEMU
        virtual: 1
      vda:
        holders: []
        host: ''
        links:
          ids: []
          labels: []
          masters: []
          uuids: []
        model: null
        partitions:
          vda1:
            holders: []
            links:
              ids: []
              labels: []
              masters: []
              uuids:
                - cbdedf45-ed1d-4952-82a8-33a12c0ba266
            sectors: '167770079'
            sectorsize: 512
            size: 80.00 GB
            start: '2048'
            uuid: cbdedf45-ed1d-4952-82a8-33a12c0ba266
        removable: '0'
        rotational: '1'
        sas_address: null
        sas_device_handle: null
        scheduler_mode: none
        sectors: '167772160'
        sectorsize: '512'
        size: 80.00 GB
        support_discard: '512'
        vendor: '0x1af4'
        virtual: 1
    ansible_diff_mode: false
    ansible_distribution: CentOS
    ansible_distribution_file_parsed: true
    ansible_distribution_file_path: /etc/centos-release
    ansible_distribution_file_variety: CentOS
    ansible_distribution_major_version: '9'
    ansible_distribution_release: Stream
    ansible_distribution_version: '9'
    ansible_dns:
      nameservers:
        - 192.168.122.10
        - 199.204.44.24
        - 199.204.47.54
    ansible_domain: ''
    ansible_effective_group_id: 1000
    ansible_effective_user_id: 1000
    ansible_env:
      ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log
      BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}"
      DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus
      DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:'
      DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ '
      HOME: /home/zuul
      KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig
      LANG: en_US.UTF-8
      LESSOPEN: '||/usr/bin/lesspipe.sh %s'
      LOGNAME: zuul
      MOTD_SHOWN: pam
      PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
      PWD: /home/zuul
      SELINUX_LEVEL_REQUESTED: ''
      SELINUX_ROLE_REQUESTED: ''
      SELINUX_USE_CURRENT_RANGE: ''
      SHELL: /bin/bash
      SHLVL: '1'
      SSH_CLIENT: 38.102.83.114 43968 22
      SSH_CONNECTION: 38.102.83.114 43968 38.102.83.182 22
      USER: zuul
      XDG_RUNTIME_DIR: /run/user/1000
      XDG_SESSION_CLASS: user
      XDG_SESSION_ID: '16'
      XDG_SESSION_TYPE: tty
      _: /usr/bin/python3
      which_declare: declare -f
    ansible_eth0:
      active: true
      device: eth0
      features:
        esp_hw_offload: off [fixed]
        esp_tx_csum_hw_offload: off [fixed]
        generic_receive_offload: 'on'
        generic_segmentation_offload: 'on'
        highdma: on [fixed]
        hsr_dup_offload: off [fixed]
        hsr_fwd_offload: off [fixed]
        hsr_tag_ins_offload: off [fixed]
        hsr_tag_rm_offload: off [fixed]
        hw_tc_offload: off [fixed]
        l2_fwd_offload: off [fixed]
        large_receive_offload: off [fixed]
        loopback: off [fixed]
        macsec_hw_offload: off [fixed]
        ntuple_filters: off [fixed]
        receive_hashing: off [fixed]
        rx_all: off [fixed]
        rx_checksumming: on [fixed]
        rx_fcs: off [fixed]
        rx_gro_hw: 'on'
        rx_gro_list: 'off'
        rx_udp_gro_forwarding: 'off'
        rx_udp_tunnel_port_offload: off [fixed]
        rx_vlan_filter: on [fixed]
        rx_vlan_offload: off [fixed]
        rx_vlan_stag_filter: off [fixed]
        rx_vlan_stag_hw_parse: off [fixed]
        scatter_gather: 'on'
        tcp_segmentation_offload: 'on'
        tls_hw_record: off [fixed]
        tls_hw_rx_offload: off [fixed]
        tls_hw_tx_offload: off [fixed]
        tx_checksum_fcoe_crc: off [fixed]
        tx_checksum_ip_generic: 'on'
        tx_checksum_ipv4: off [fixed]
        tx_checksum_ipv6: off [fixed]
        tx_checksum_sctp: off [fixed]
        tx_checksumming: 'on'
        tx_esp_segmentation: off [fixed]
        tx_fcoe_segmentation: off [fixed]
        tx_gre_csum_segmentation: off [fixed]
        tx_gre_segmentation: off [fixed]
        tx_gso_list: off [fixed]
        tx_gso_partial: off [fixed]
        tx_gso_robust: on [fixed]
        tx_ipxip4_segmentation: off [fixed]
        tx_ipxip6_segmentation: off [fixed]
        tx_nocache_copy: 'off'
        tx_scatter_gather: 'on'
        tx_scatter_gather_fraglist: off [fixed]
        tx_sctp_segmentation: off [fixed]
        tx_tcp6_segmentation: 'on'
        tx_tcp_ecn_segmentation: 'on'
        tx_tcp_mangleid_segmentation: 'off'
        tx_tcp_segmentation: 'on'
        tx_tunnel_remcsum_segmentation: off [fixed]
        tx_udp_segmentation: off [fixed]
        tx_udp_tnl_csum_segmentation: off [fixed]
        tx_udp_tnl_segmentation: off [fixed]
        tx_vlan_offload: off [fixed]
        tx_vlan_stag_hw_insert: off [fixed]
        vlan_challenged: off [fixed]
      hw_timestamp_filters: []
      ipv4:
        address: 38.102.83.182
        broadcast: 38.102.83.255
        netmask: 255.255.255.0
        network: 38.102.83.0
        prefix: '24'
      ipv6:
        - address: fe80::f816:3eff:fe96:e4
          prefix: '64'
          scope: link
      macaddress: fa:16:3e:96:00:e4
      module: virtio_net
      mtu: 1500
      pciid: virtio1
      promisc: false
      speed: -1
      timestamping: []
      type: ether
    ansible_facts:
      _ansible_facts_gathered: true
      all_ipv4_addresses:
        - 38.102.83.182
      all_ipv6_addresses:
        - fe80::f816:3eff:fe96:e4
      ansible_local: {}
      apparmor:
        status: disabled
      architecture: x86_64
      bios_date: 04/01/2014
      bios_vendor: SeaBIOS
      bios_version: 1.15.0-1
      board_asset_tag: NA
      board_name: NA
      board_serial: NA
      board_vendor: NA
      board_version: NA
      chassis_asset_tag: NA
      chassis_serial: NA
      chassis_vendor: QEMU
      chassis_version: pc-i440fx-6.2
      cmdline:
        BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
        console: ttyS0,115200n8
        crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M
        net.ifnames: '0'
        no_timer_check: true
        ro: true
        root: UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266
      crc_ci_bootstrap_instance_default_net_config:
        mtu: 1500
        range: 192.168.122.0/24
      crc_ci_bootstrap_instance_nm_vlan_networks:
        - key: internal-api
          value:
            ip: 172.17.0.5
        - key: storage
          value:
            ip: 172.18.0.5
        - key: tenant
          value:
            ip: 172.19.0.5
      crc_ci_bootstrap_instance_parent_port_create_yaml:
        admin_state_up: true
        allowed_address_pairs: []
        binding_host_id: null
        binding_profile: {}
        binding_vif_details: {}
        binding_vif_type: null
        binding_vnic_type: normal
        created_at: '2025-12-13T00:08:36Z'
        data_plane_status: null
        description: ''
        device_id: ''
        device_owner: ''
        device_profile: null
        dns_assignment:
          - fqdn: host-192-168-122-10.openstacklocal.
            hostname: host-192-168-122-10
            ip_address: 192.168.122.10
        dns_domain: ''
        dns_name: ''
        extra_dhcp_opts: []
        fixed_ips:
          - ip_address: 192.168.122.10
            subnet_id: 02849e46-923a-4039-a741-7220969b703a
        hardware_offload_type: null
        hints: ''
        id: 42ec78f8-dc44-4516-ac4e-135cec7f7487
        ip_allocation: immediate
        mac_address: fa:16:3e:91:d3:27
        name: crc-02c5855c-ecb0-48ea-8e20-d08d70e9697e
        network_id: dec04d5b-6116-4933-889e-8847df6f684f
        numa_affinity_policy: null
        port_security_enabled: false
        project_id: 4b633c451ac74233be3721a3635275e5
        propagate_uplink_status: null
        qos_network_policy_id: null
        qos_policy_id: null
        resource_request: null
        revision_number: 1
        security_group_ids: []
        status: DOWN
        tags: []
        trunk_details: null
        trusted: null
        updated_at: '2025-12-13T00:08:36Z'
      crc_ci_bootstrap_network_name: zuul-ci-net-1ff1f9e2
      crc_ci_bootstrap_networks_out:
        controller:
          default:
            connection: ci-private-network
            gw: 192.168.122.1
            iface: eth1
            ip: 192.168.122.11/24
            mac: fa:16:3e:b1:85:fe
            mtu: 1500
        crc:
          default:
            connection: ci-private-network
            gw: 192.168.122.1
            iface: ens7
            ip: 192.168.122.10/24
            mac: fa:16:3e:91:d3:27
            mtu: 1500
          internal-api:
            connection: ci-private-network-20
            iface: ens7.20
            ip: 172.17.0.5/24
            mac: 52:54:00:9f:c2:b6
            mtu: '1496'
            parent_iface: ens7
            vlan: 20
          storage:
            connection: ci-private-network-21
            iface: ens7.21
            ip: 172.18.0.5/24
            mac: 52:54:00:ea:0a:6d
            mtu: '1496'
            parent_iface: ens7
            vlan: 21
          tenant:
            connection: ci-private-network-22
            iface: ens7.22
            ip: 172.19.0.5/24
            mac: 52:54:00:3e:f4:b3
            mtu: '1496'
            parent_iface: ens7
            vlan: 22
      crc_ci_bootstrap_private_net_create_yaml:
        admin_state_up: true
        availability_zone_hints:
          - nova
        availability_zones: []
        created_at: '2025-12-13T00:08:02Z'
        description: ''
        dns_domain: ''
        id: dec04d5b-6116-4933-889e-8847df6f684f
        ipv4_address_scope: null
        ipv6_address_scope: null
        is_default: false
        is_vlan_qinq: null
        is_vlan_transparent: false
        l2_adjacency: true
        mtu: 1500
        name: zuul-ci-net-1ff1f9e2
        port_security_enabled: false
        project_id: 4b633c451ac74233be3721a3635275e5
        provider:network_type: null
        provider:physical_network: null
        provider:segmentation_id: null
        qos_policy_id: null
        revision_number: 1
        router:external: false
        segments: null
        shared: false
        status: ACTIVE
        subnets: []
        tags: []
        updated_at: '2025-12-13T00:08:02Z'
      crc_ci_bootstrap_private_router_create_yaml:
        admin_state_up: true
        availability_zone_hints:
          - nova
        availability_zones: []
        created_at: '2025-12-13T00:08:08Z'
        description: ''
        enable_ndp_proxy: null
        external_gateway_info:
          enable_snat: true
          external_fixed_ips:
            - ip_address: 38.102.83.45
              subnet_id: 3169b11b-94b1-4bc9-9727-4fdbbe15e56e
          network_id: 7abff1a9-a103-46d0-979a-1f1e599f4f41
        flavor_id: null
        id: 9c9866c3-8654-46c3-8ace-99e01a86017f
        name: zuul-ci-subnet-router-1ff1f9e2
        project_id: 4b633c451ac74233be3721a3635275e5
        revision_number: 3
        routes: []
        status: ACTIVE
        tags: []
        tenant_id: 4b633c451ac74233be3721a3635275e5
        updated_at: '2025-12-13T00:08:09Z'
      crc_ci_bootstrap_private_subnet_create_yaml:
        allocation_pools:
          - end: 192.168.122.254
            start: 192.168.122.2
        cidr: 192.168.122.0/24
        created_at: '2025-12-13T00:08:06Z'
        description: ''
        dns_nameservers: []
        dns_publish_fixed_ip: null
        enable_dhcp: false
        gateway_ip: 192.168.122.1
        host_routes: []
        id: 02849e46-923a-4039-a741-7220969b703a
        ip_version: 4
        ipv6_address_mode: null
        ipv6_ra_mode: null
        name: zuul-ci-subnet-1ff1f9e2
        network_id: dec04d5b-6116-4933-889e-8847df6f684f
        project_id: 4b633c451ac74233be3721a3635275e5
        revision_number: 0
        segment_id: null
        service_types: []
        subnetpool_id: null
        tags: []
        updated_at: '2025-12-13T00:08:06Z'
      crc_ci_bootstrap_provider_dns:
        - 199.204.44.24
        - 199.204.47.54
      crc_ci_bootstrap_router_name: zuul-ci-subnet-router-1ff1f9e2
      crc_ci_bootstrap_subnet_name: zuul-ci-subnet-1ff1f9e2
      date_time:
        date: '2025-12-13'
        day: '13'
        epoch: '1765585301'
        epoch_int: '1765585301'
        hour: '00'
        iso8601: '2025-12-13T00:21:41Z'
        iso8601_basic: 20251213T002141574207
        iso8601_basic_short: 20251213T002141
        iso8601_micro: '2025-12-13T00:21:41.574207Z'
        minute: '21'
        month: '12'
        second: '41'
        time: 00:21:41
        tz: UTC
        tz_dst: UTC
        tz_offset: '+0000'
        weekday: Saturday
        weekday_number: '6'
        weeknumber: '49'
        year: '2025'
      default_ipv4:
        address: 38.102.83.182
        alias: eth0
        broadcast: 38.102.83.255
        gateway: 38.102.83.1
        interface: eth0
        macaddress: fa:16:3e:96:00:e4
        mtu: 1500
        netmask: 255.255.255.0
        network: 38.102.83.0
        prefix: '24'
        type: ether
      default_ipv6: {}
      device_links:
        ids:
          sr0:
            - ata-QEMU_DVD-ROM_QM00001
        labels:
          sr0:
            - config-2
        masters: {}
        uuids:
          sr0:
            - 2025-12-13-00-01-20-00
          vda1:
            - cbdedf45-ed1d-4952-82a8-33a12c0ba266
      devices:
        sr0:
          holders: []
          host: ''
          links:
            ids:
              - ata-QEMU_DVD-ROM_QM00001
            labels:
              - config-2
            masters: []
            uuids:
              - 2025-12-13-00-01-20-00
          model: QEMU DVD-ROM
          partitions: {}
          removable: '1'
          rotational: '1'
          sas_address: null
          sas_device_handle: null
          scheduler_mode: mq-deadline
          sectors: '964'
          sectorsize: '2048'
          size: 482.00 KB
          support_discard: '2048'
          vendor: QEMU
          virtual: 1
        vda:
          holders: []
          host: ''
          links:
            ids: []
            labels: []
            masters: []
            uuids: []
          model: null
          partitions:
            vda1:
              holders: []
              links:
                ids: []
                labels: []
                masters: []
                uuids:
                  - cbdedf45-ed1d-4952-82a8-33a12c0ba266
              sectors: '167770079'
              sectorsize: 512
              size: 80.00 GB
              start: '2048'
              uuid: cbdedf45-ed1d-4952-82a8-33a12c0ba266
          removable: '0'
          rotational: '1'
          sas_address: null
          sas_device_handle: null
          scheduler_mode: none
          sectors: '167772160'
          sectorsize: '512'
          size: 80.00 GB
          support_discard: '512'
          vendor: '0x1af4'
          virtual: 1
      discovered_interpreter_python: /usr/bin/python3
      distribution: CentOS
      distribution_file_parsed: true
      distribution_file_path: /etc/centos-release
      distribution_file_variety: CentOS
      distribution_major_version: '9'
      distribution_release: Stream
      distribution_version: '9'
      dns:
        nameservers:
          - 192.168.122.10
          - 199.204.44.24
          - 199.204.47.54
      domain: ''
      effective_group_id: 1000
      effective_user_id: 1000
      env:
        ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log
        BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}"
        DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus
        DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:'
        DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ '
        HOME: /home/zuul
        KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig
        LANG: en_US.UTF-8
        LESSOPEN: '||/usr/bin/lesspipe.sh %s'
        LOGNAME: zuul
        MOTD_SHOWN: pam
        PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
        PWD: /home/zuul
        SELINUX_LEVEL_REQUESTED: ''
        SELINUX_ROLE_REQUESTED: ''
        SELINUX_USE_CURRENT_RANGE: ''
        SHELL: /bin/bash
        SHLVL: '1'
        SSH_CLIENT: 38.102.83.114 43968 22
        SSH_CONNECTION: 38.102.83.114 43968 38.102.83.182 22
        USER: zuul
        XDG_RUNTIME_DIR: /run/user/1000
        XDG_SESSION_CLASS: user
        XDG_SESSION_ID: '16'
        XDG_SESSION_TYPE: tty
        _: /usr/bin/python3
        which_declare: declare -f
      eth0:
        active: true
        device: eth0
        features:
          esp_hw_offload: off [fixed]
          esp_tx_csum_hw_offload: off [fixed]
          generic_receive_offload: 'on'
          generic_segmentation_offload: 'on'
          highdma: on [fixed]
          hsr_dup_offload: off [fixed]
          hsr_fwd_offload: off [fixed]
          hsr_tag_ins_offload: off [fixed]
          hsr_tag_rm_offload: off [fixed]
          hw_tc_offload: off [fixed]
          l2_fwd_offload: off [fixed]
          large_receive_offload: off [fixed]
          loopback: off [fixed]
          macsec_hw_offload: off [fixed]
          ntuple_filters: off [fixed]
          receive_hashing: off [fixed]
          rx_all: off [fixed]
          rx_checksumming: on [fixed]
          rx_fcs: off [fixed]
          rx_gro_hw: 'on'
          rx_gro_list: 'off'
          rx_udp_gro_forwarding: 'off'
          rx_udp_tunnel_port_offload: off [fixed]
          rx_vlan_filter: on [fixed]
          rx_vlan_offload: off [fixed]
          rx_vlan_stag_filter: off [fixed]
          rx_vlan_stag_hw_parse: off [fixed]
          scatter_gather: 'on'
          tcp_segmentation_offload: 'on'
          tls_hw_record: off [fixed]
          tls_hw_rx_offload: off [fixed]
          tls_hw_tx_offload: off [fixed]
          tx_checksum_fcoe_crc: off [fixed]
          tx_checksum_ip_generic: 'on'
          tx_checksum_ipv4: off [fixed]
          tx_checksum_ipv6: off [fixed]
          tx_checksum_sctp: off [fixed]
          tx_checksumming: 'on'
          tx_esp_segmentation: off [fixed]
          tx_fcoe_segmentation: off [fixed]
          tx_gre_csum_segmentation: off [fixed]
          tx_gre_segmentation: off [fixed]
          tx_gso_list: off [fixed]
          tx_gso_partial: off [fixed]
          tx_gso_robust: on [fixed]
          tx_ipxip4_segmentation: off [fixed]
          tx_ipxip6_segmentation: off [fixed]
          tx_nocache_copy: 'off'
          tx_scatter_gather: 'on'
          tx_scatter_gather_fraglist: off [fixed]
          tx_sctp_segmentation: off [fixed]
          tx_tcp6_segmentation: 'on'
          tx_tcp_ecn_segmentation: 'on'
          tx_tcp_mangleid_segmentation: 'off'
          tx_tcp_segmentation: 'on'
          tx_tunnel_remcsum_segmentation: off [fixed]
          tx_udp_segmentation: off [fixed]
          tx_udp_tnl_csum_segmentation: off [fixed]
          tx_udp_tnl_segmentation: off [fixed]
          tx_vlan_offload: off [fixed]
          tx_vlan_stag_hw_insert: off [fixed]
          vlan_challenged: off [fixed]
        hw_timestamp_filters: []
        ipv4:
          address: 38.102.83.182
          broadcast: 38.102.83.255
          netmask: 255.255.255.0
          network: 38.102.83.0
          prefix: '24'
        ipv6:
          - address: fe80::f816:3eff:fe96:e4
            prefix: '64'
            scope: link
        macaddress: fa:16:3e:96:00:e4
        module: virtio_net
        mtu: 1500
        pciid: virtio1
        promisc: false
        speed: -1
        timestamping: []
        type: ether
      fibre_channel_wwn: []
      fips: false
      form_factor: Other
      fqdn: controller
      gather_subset:
        - min
      hostname: controller
      hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4
      interfaces:
        - eth0
        - lo
      is_chroot: false
      iscsi_iqn: ''
      kernel: 5.14.0-648.el9.x86_64
      kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025'
      lo:
        active: true
        device: lo
        features:
          esp_hw_offload: off [fixed]
          esp_tx_csum_hw_offload: off [fixed]
          generic_receive_offload: 'on'
          generic_segmentation_offload: 'on'
          highdma: on [fixed]
          hsr_dup_offload: off [fixed]
          hsr_fwd_offload: off [fixed]
          hsr_tag_ins_offload: off [fixed]
          hsr_tag_rm_offload: off [fixed]
          hw_tc_offload: off [fixed]
          l2_fwd_offload: off [fixed]
          large_receive_offload: off [fixed]
          loopback: on [fixed]
          macsec_hw_offload: off [fixed]
          ntuple_filters: off [fixed]
          receive_hashing: off [fixed]
          rx_all: off [fixed]
          rx_checksumming: on [fixed]
          rx_fcs: off [fixed]
          rx_gro_hw: off [fixed]
          rx_gro_list: 'off'
          rx_udp_gro_forwarding: 'off'
          rx_udp_tunnel_port_offload: off [fixed]
          rx_vlan_filter: off [fixed]
          rx_vlan_offload: off [fixed]
          rx_vlan_stag_filter: off [fixed]
          rx_vlan_stag_hw_parse: off [fixed]
          scatter_gather: 'on'
          tcp_segmentation_offload: 'on'
          tls_hw_record: off [fixed]
          tls_hw_rx_offload: off [fixed]
          tls_hw_tx_offload: off [fixed]
          tx_checksum_fcoe_crc: off [fixed]
          tx_checksum_ip_generic: on [fixed]
          tx_checksum_ipv4: off [fixed]
          tx_checksum_ipv6: off [fixed]
          tx_checksum_sctp: on [fixed]
          tx_checksumming: 'on'
          tx_esp_segmentation: off [fixed]
          tx_fcoe_segmentation: off [fixed]
          tx_gre_csum_segmentation: off [fixed]
          tx_gre_segmentation: off [fixed]
          tx_gso_list: 'on'
          tx_gso_partial: off [fixed]
          tx_gso_robust: off [fixed]
          tx_ipxip4_segmentation: off [fixed]
          tx_ipxip6_segmentation: off [fixed]
          tx_nocache_copy: off [fixed]
          tx_scatter_gather: on [fixed]
          tx_scatter_gather_fraglist: on [fixed]
          tx_sctp_segmentation: 'on'
          tx_tcp6_segmentation: 'on'
          tx_tcp_ecn_segmentation: 'on'
          tx_tcp_mangleid_segmentation: 'on'
          tx_tcp_segmentation: 'on'
          tx_tunnel_remcsum_segmentation: off [fixed]
          tx_udp_segmentation: 'on'
          tx_udp_tnl_csum_segmentation: off [fixed]
          tx_udp_tnl_segmentation: off [fixed]
          tx_vlan_offload: off [fixed]
          tx_vlan_stag_hw_insert: off [fixed]
          vlan_challenged: on [fixed]
        hw_timestamp_filters: []
        ipv4:
          address: 127.0.0.1
          broadcast: ''
          netmask: 255.0.0.0
          network: 127.0.0.0
          prefix: '8'
        ipv6:
          - address: ::1
            prefix: '128'
            scope: host
        mtu: 65536
        promisc: false
        timestamping: []
        type: loopback
      loadavg:
        15m: 0.06
        1m: 0.0
        5m: 0.09
      locally_reachable_ips:
        ipv4:
          - 38.102.83.182
          - 127.0.0.0/8
          - 127.0.0.1
        ipv6:
          - ::1
          - fe80::f816:3eff:fe96:e4
      lsb: {}
      lvm: N/A
      machine: x86_64
      machine_id: 64f1d6692049d8be5e8b216cc203502c
      memfree_mb: 7164
      memory_mb:
        nocache:
          free: 7375
          used: 304
        real:
          free: 7164
          total: 7679
          used: 515
        swap:
          cached: 0
          free: 0
          total: 0
          used: 0
      memtotal_mb: 7679
      module_setup: true
      mounts:
        - block_available: 20336157
          block_size: 4096
          block_total: 20954875
          block_used: 618718
          device: /dev/vda1
          fstype: xfs
          inode_available: 41888385
          inode_total: 41942512
          inode_used: 54127
          mount: /
          options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
          size_available: 83296899072
          size_total: 85831168000
          uuid: cbdedf45-ed1d-4952-82a8-33a12c0ba266
      nodename: controller
      os_family: RedHat
      pkg_mgr: dnf
      proc_cmdline:
        BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
        console: ttyS0,115200n8
        crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M
        net.ifnames: '0'
        no_timer_check: true
        ro: true
        root: UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266
      processor:
        - '0'
        - AuthenticAMD
        - AMD EPYC-Rome Processor
        - '1'
        - AuthenticAMD
        - AMD EPYC-Rome Processor
        - '2'
        - AuthenticAMD
        - AMD EPYC-Rome Processor
        - '3'
        - AuthenticAMD
        - AMD EPYC-Rome Processor
        - '4'
        - AuthenticAMD
        - AMD EPYC-Rome Processor
        - '5'
        - AuthenticAMD
        - AMD EPYC-Rome Processor
        - '6'
        - AuthenticAMD
        - AMD EPYC-Rome Processor
        - '7'
        - AuthenticAMD
        - AMD EPYC-Rome Processor
      processor_cores: 1
      processor_count: 8
      processor_nproc: 8
      processor_threads_per_core: 1
      processor_vcpus: 8
      product_name: OpenStack Nova
      product_serial: NA
      product_uuid: NA
      product_version: 26.3.1
      python:
        executable: /usr/bin/python3
        has_sslcontext: true
        type: cpython
        version:
          major: 3
          micro: 25
          minor: 9
          releaselevel: final
          serial: 0
        version_info:
          - 3
          - 9
          - 25
          - final
          - 0
      python_version: 3.9.25
      real_group_id: 1000
      real_user_id: 1000
      selinux:
        config_mode: enforcing
        mode: enforcing
        policyvers: 33
        status: enabled
        type: targeted
      selinux_python_present: true
      service_mgr: systemd
      ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBfIWnOzoaFl6L11147qWcwowK6Ci0Rz8t1WjAVB/zcYVQE7pudrJ717ZfSW85tw14Xjf9dwVFE9kociqbG0zJc=
      ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256
      ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAINkLqbGxYdqx81uU7hEPuFtk8VGcR7wMa2mI4eVESIvr
      ssh_host_key_ed25519_public_keytype: ssh-ed25519
      ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQDFMl9iNNfpkBemj+80eKwpd7sRDpIn97HaSAZ/63vqGAbPhgMiH6WngR/zO4oyGyB8lf6fsD+w4LKZgiuAhBwp++i2IcfE5Kfy6quV1X1wL5NwHAombTIg8qtOBzyJxFksJBIHLn8mcWWkttFKy18Ou9KhVGzrDOe8XFSy+jDSiZpPmx7DYjwRg7irJ8dfyyG0bjzrw/C5eBQvyGVsr9RNSDlOv5XmLkybsyqg8nCjNrNEBaKrpRf51w6wWHrTzl/U492b0rnW+3xzYRAnuOhrbIP0OoK+92VqKKeAld7BUW4ZL3PPogxoRhuieoWCGzznwQBUdar6WNRcaUSK3mzSjHikjwkUAl9SR7srM4T9Tc4Yf13/dZ3kOQFQNkH9Xl2+vEHfr9xPC5dknmfsFD4wIdvGCo7MW7+D6tD55fkNhgwMl9YAT7IJVbxrwKBQtHspcwjxuQqyuNN48RMbfflqtKlYaiSqT6TmhmJHfFpToEiZ5O0I11Jw+S6nVSf65MU=
      ssh_host_key_rsa_public_keytype: ssh-rsa
      swapfree_mb: 0
      swaptotal_mb: 0
      system: Linux
      system_capabilities:
        - ''
      system_capabilities_enforced: 'True'
      system_vendor: OpenStack Foundation
      uptime_seconds: 336
      user_dir: /home/zuul
      user_gecos: ''
      user_gid: 1000
      user_id: zuul
      user_shell: /bin/bash
      user_uid: 1000
      userspace_architecture: x86_64
      userspace_bits: '64'
      virtualization_role: guest
      virtualization_tech_guest:
        - openstack
      virtualization_tech_host:
        - kvm
      virtualization_type: openstack
      zuul_change_list:
        - service-telemetry-operator
    ansible_fibre_channel_wwn: []
    ansible_fips: false
    ansible_forks: 5
    ansible_form_factor: Other
    ansible_fqdn: controller
    ansible_host: 38.102.83.182
    ansible_hostname: controller
    ansible_hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4
    ansible_interfaces:
      - eth0
      - lo
    ansible_inventory_sources:
      - /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/post_playbook_1/inventory.yaml
    ansible_is_chroot: false
    ansible_iscsi_iqn: ''
    ansible_kernel: 5.14.0-648.el9.x86_64
    ansible_kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025'
    ansible_lo:
      active: true
      device: lo
      features:
        esp_hw_offload: off [fixed]
        esp_tx_csum_hw_offload: off [fixed]
        generic_receive_offload: 'on'
        generic_segmentation_offload: 'on'
        highdma: on [fixed]
        hsr_dup_offload: off [fixed]
        hsr_fwd_offload: off [fixed]
        hsr_tag_ins_offload: off [fixed]
        hsr_tag_rm_offload: off [fixed]
        hw_tc_offload: off [fixed]
        l2_fwd_offload: off [fixed]
        large_receive_offload: off [fixed]
        loopback: on [fixed]
        macsec_hw_offload: off [fixed]
        ntuple_filters: off [fixed]
        receive_hashing: off [fixed]
        rx_all: off [fixed]
        rx_checksumming: on [fixed]
        rx_fcs: off [fixed]
        rx_gro_hw: off [fixed]
        rx_gro_list: 'off'
        rx_udp_gro_forwarding: 'off'
        rx_udp_tunnel_port_offload: off [fixed]
        rx_vlan_filter: off [fixed]
        rx_vlan_offload: off [fixed]
        rx_vlan_stag_filter: off [fixed]
        rx_vlan_stag_hw_parse: off [fixed]
        scatter_gather: 'on'
        tcp_segmentation_offload: 'on'
        tls_hw_record: off [fixed]
        tls_hw_rx_offload: off [fixed]
        tls_hw_tx_offload: off [fixed]
        tx_checksum_fcoe_crc: off [fixed]
        tx_checksum_ip_generic: on [fixed]
        tx_checksum_ipv4: off [fixed]
        tx_checksum_ipv6: off [fixed]
        tx_checksum_sctp: on [fixed]
        tx_checksumming: 'on'
        tx_esp_segmentation: off [fixed]
        tx_fcoe_segmentation: off [fixed]
        tx_gre_csum_segmentation: off [fixed]
        tx_gre_segmentation: off [fixed]
        tx_gso_list: 'on'
        tx_gso_partial: off [fixed]
        tx_gso_robust: off [fixed]
        tx_ipxip4_segmentation: off [fixed]
        tx_ipxip6_segmentation: off [fixed]
        tx_nocache_copy: off [fixed]
        tx_scatter_gather: on [fixed]
        tx_scatter_gather_fraglist: on [fixed]
        tx_sctp_segmentation: 'on'
        tx_tcp6_segmentation: 'on'
        tx_tcp_ecn_segmentation: 'on'
        tx_tcp_mangleid_segmentation: 'on'
        tx_tcp_segmentation: 'on'
        tx_tunnel_remcsum_segmentation: off [fixed]
        tx_udp_segmentation: 'on'
        tx_udp_tnl_csum_segmentation: off [fixed]
        tx_udp_tnl_segmentation: off [fixed]
        tx_vlan_offload: off [fixed]
        tx_vlan_stag_hw_insert: off [fixed]
        vlan_challenged: on [fixed]
      hw_timestamp_filters: []
      ipv4:
        address: 127.0.0.1
        broadcast: ''
        netmask: 255.0.0.0
        network: 127.0.0.0
        prefix: '8'
      ipv6:
        - address: ::1
          prefix: '128'
          scope: host
      mtu: 65536
      promisc: false
      timestamping: []
      type: loopback
    ansible_loadavg:
      15m: 0.06
      1m: 0.0
      5m: 0.09
    ansible_local: {}
    ansible_locally_reachable_ips:
      ipv4:
        - 38.102.83.182
        - 127.0.0.0/8
        - 127.0.0.1
      ipv6:
        - ::1
        - fe80::f816:3eff:fe96:e4
    ansible_lsb: {}
    ansible_lvm:
N/A ansible_machine: x86_64 ansible_machine_id: 64f1d6692049d8be5e8b216cc203502c ansible_memfree_mb: 7164 ansible_memory_mb: nocache: free: 7375 used: 304 real: free: 7164 total: 7679 used: 515 swap: cached: 0 free: 0 total: 0 used: 0 ansible_memtotal_mb: 7679 ansible_mounts: - block_available: 20336157 block_size: 4096 block_total: 20954875 block_used: 618718 device: /dev/vda1 fstype: xfs inode_available: 41888385 inode_total: 41942512 inode_used: 54127 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83296899072 size_total: 85831168000 uuid: cbdedf45-ed1d-4952-82a8-33a12c0ba266 ansible_nodename: controller ansible_os_family: RedHat ansible_pkg_mgr: dnf ansible_playbook_python: /usr/lib/zuul/ansible/8/bin/python ansible_port: 22 ansible_proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ansible_processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor ansible_processor_cores: 1 ansible_processor_count: 8 ansible_processor_nproc: 8 ansible_processor_threads_per_core: 1 ansible_processor_vcpus: 8 ansible_product_name: OpenStack Nova ansible_product_serial: NA ansible_product_uuid: NA ansible_product_version: 26.3.1 ansible_python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 ansible_python_interpreter: auto ansible_python_version: 3.9.25 ansible_real_group_id: 1000 ansible_real_user_id: 
1000 ansible_run_tags: - all ansible_scp_extra_args: -o PermitLocalCommand=no ansible_selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted ansible_selinux_python_present: true ansible_service_mgr: systemd ansible_sftp_extra_args: -o PermitLocalCommand=no ansible_skip_tags: [] ansible_ssh_common_args: -o PermitLocalCommand=no ansible_ssh_executable: ssh ansible_ssh_extra_args: -o PermitLocalCommand=no ansible_ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBfIWnOzoaFl6L11147qWcwowK6Ci0Rz8t1WjAVB/zcYVQE7pudrJ717ZfSW85tw14Xjf9dwVFE9kociqbG0zJc= ansible_ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ansible_ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAINkLqbGxYdqx81uU7hEPuFtk8VGcR7wMa2mI4eVESIvr ansible_ssh_host_key_ed25519_public_keytype: ssh-ed25519 ansible_ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQDFMl9iNNfpkBemj+80eKwpd7sRDpIn97HaSAZ/63vqGAbPhgMiH6WngR/zO4oyGyB8lf6fsD+w4LKZgiuAhBwp++i2IcfE5Kfy6quV1X1wL5NwHAombTIg8qtOBzyJxFksJBIHLn8mcWWkttFKy18Ou9KhVGzrDOe8XFSy+jDSiZpPmx7DYjwRg7irJ8dfyyG0bjzrw/C5eBQvyGVsr9RNSDlOv5XmLkybsyqg8nCjNrNEBaKrpRf51w6wWHrTzl/U492b0rnW+3xzYRAnuOhrbIP0OoK+92VqKKeAld7BUW4ZL3PPogxoRhuieoWCGzznwQBUdar6WNRcaUSK3mzSjHikjwkUAl9SR7srM4T9Tc4Yf13/dZ3kOQFQNkH9Xl2+vEHfr9xPC5dknmfsFD4wIdvGCo7MW7+D6tD55fkNhgwMl9YAT7IJVbxrwKBQtHspcwjxuQqyuNN48RMbfflqtKlYaiSqT6TmhmJHfFpToEiZ5O0I11Jw+S6nVSf65MU= ansible_ssh_host_key_rsa_public_keytype: ssh-rsa ansible_swapfree_mb: 0 ansible_swaptotal_mb: 0 ansible_system: Linux ansible_system_capabilities: - '' ansible_system_capabilities_enforced: 'True' ansible_system_vendor: OpenStack Foundation ansible_uptime_seconds: 336 ansible_user: zuul ansible_user_dir: /home/zuul ansible_user_gecos: '' ansible_user_gid: 1000 ansible_user_id: zuul ansible_user_shell: /bin/bash ansible_user_uid: 1000 ansible_userspace_architecture: x86_64 ansible_userspace_bits: '64' ansible_verbosity: 1 ansible_version: full: 2.15.12 major: 2 minor: 
15 revision: 12 string: 2.15.12 ansible_virtualization_role: guest ansible_virtualization_tech_guest: - openstack ansible_virtualization_tech_host: - kvm ansible_virtualization_type: openstack cifmw_architecture_repo: /home/zuul/src/github.com/openstack-k8s-operators/architecture cifmw_architecture_repo_relative: src/github.com/openstack-k8s-operators/architecture cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_basedir: /home/zuul/ci-framework-data cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sE**********U= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack 
BMO_BRANCH: release-0.9 BMO_CLEANUP: 'true' BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: '' BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: 
https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: 
designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 76**********f0 HEAT_BRANCH: main 
HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: 'true' INSTALL_NMSTATE: true || false INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 
172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE: quay.io/metal3-io/ironic IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: 
main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: CO**********6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '12**********42' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: 
'1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git 
NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: 
/home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12**********78' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt RABBITMQ: 
docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: os**********et SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: 
quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: test/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' tripleo_deploy: 'export REGISTRY_USER:' cifmw_install_yamls_environment: CHECKOUT_FROM_OPENSTACK_REF: 'true' KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm cifmw_installyamls_repos: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls cifmw_installyamls_repos_relative: src/github.com/openstack-k8s-operators/install_yamls cifmw_openshift_api: https://api.crc.testing:6443 cifmw_openshift_context: default/api-crc-testing:6443/kubeadmin cifmw_openshift_kubeconfig: /home/zuul/.crc/machines/crc/kubeconfig cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_token: sha256~CRDzRDEwMlIhbeVJLCrSChdC674z2UmCrkBejT7UEP4 cifmw_openshift_user: kubeadmin cifmw_path: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:~/.crc/bin:~/.crc/bin/oc:~/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin cifmw_project_dir: src/github.com/openstack-k8s-operators/ci-framework cifmw_project_dir_absolute: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework cifmw_run_tests: false cifmw_status: changed: false failed: false stat: atime: 1765585230.1122248 attr_flags: '' attributes: [] block_size: 4096 blocks: 8 charset: binary 
ctime: 1765585233.7253132 dev: 64513 device_type: 0 executable: true exists: true gid: 1000 gr_name: zuul inode: 16782053 isblk: false ischr: false isdir: true isfifo: false isgid: false islnk: false isreg: false issock: false isuid: false mimetype: inode/directory mode: '0755' mtime: 1765585233.7253132 nlink: 21 path: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework pw_name: zuul readable: true rgrp: true roth: true rusr: true size: 4096 uid: 1000 version: '3877636691' wgrp: false woth: false writeable: true wusr: true xgrp: true xoth: true xusr: true cifmw_success_flag: changed: false failed: false stat: exists: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: vexxhost crc_ci_bootstrap_instance_default_net_config: mtu: 1500 range: 192.168.122.0/24 crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage value: ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2025-12-13T00:08:36Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. 
hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 02849e46-923a-4039-a741-7220969b703a hardware_offload_type: null hints: '' id: 42ec78f8-dc44-4516-ac4e-135cec7f7487 ip_allocation: immediate mac_address: fa:16:3e:91:d3:27 name: crc-02c5855c-ecb0-48ea-8e20-d08d70e9697e network_id: dec04d5b-6116-4933-889e-8847df6f684f numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2025-12-13T00:08:36Z' crc_ci_bootstrap_network_name: zuul-ci-net-1ff1f9e2 crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 crc_ci_bootstrap_networks_out: controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:b1:85:fe mtu: 1500 crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 192.168.122.10/24 mac: fa:16:3e:91:d3:27 mtu: 1500 internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:9f:c2:b6 mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:ea:0a:6d mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:3e:f4:b3 mtu: '1496' parent_iface: ens7 vlan: 22 crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true 
availability_zone_hints: - nova availability_zones: [] created_at: '2025-12-13T00:08:02Z' description: '' dns_domain: '' id: dec04d5b-6116-4933-889e-8847df6f684f ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: false l2_adjacency: true mtu: 1500 name: zuul-ci-net-1ff1f9e2 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2025-12-13T00:08:02Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-12-13T00:08:08Z' description: '' enable_ndp_proxy: null external_gateway_info: enable_snat: true external_fixed_ips: - ip_address: 38.102.83.45 subnet_id: 3169b11b-94b1-4bc9-9727-4fdbbe15e56e network_id: 7abff1a9-a103-46d0-979a-1f1e599f4f41 flavor_id: null id: 9c9866c3-8654-46c3-8ace-99e01a86017f name: zuul-ci-subnet-router-1ff1f9e2 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 3 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2025-12-13T00:08:09Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: '2025-12-13T00:08:06Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 192.168.122.1 host_routes: [] id: 02849e46-923a-4039-a741-7220969b703a ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-1ff1f9e2 network_id: dec04d5b-6116-4933-889e-8847df6f684f project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] subnetpool_id: null tags: [] updated_at: '2025-12-13T00:08:06Z' crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 
199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-1ff1f9e2 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-1ff1f9e2 discovered_interpreter_python: /usr/bin/python3 enable_ramdisk: true gather_subset: - min group_names: - ungrouped groups: all: - controller - crc ungrouped: *id001 zuul_unreachable: [] inventory_dir: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/post_playbook_1 inventory_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/post_playbook_1/inventory.yaml inventory_hostname: controller inventory_hostname_short: controller logfiles_dest_dir: /home/zuul/ci-framework-data/logs/2025-12-13_00-21 module_setup: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: 5956d295-b553-4ddf-9a24-66bb002b169b host_id: bdb78bf25a270582fae0ca49d447ffffc4c7a50a772a0a4c0593588a interface_ip: 38.102.83.182 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.182 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.182 public_ipv6: '' region: RegionOne slot: null omit: __omit_place_holder__25f488c32bf32fb4dee7ff7d2bcd558067c92af2 playbook_dir: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: nightly_bundles unsafe_vars: ansible_connection: ssh ansible_host: 38.102.83.182 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true 
cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: 5956d295-b553-4ddf-9a24-66bb002b169b host_id: bdb78bf25a270582fae0ca49d447ffffc4c7a50a772a0a4c0593588a interface_ip: 38.102.83.182 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.182 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.182 public_ipv6: '' region: RegionOne slot: null podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: nightly_bundles zuul_log_collection: true zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: master build: 1ff1f9e2914e4781854296cd59417fcd build_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator buildset: f71192a0075543fc80fa0dd7b04d38c1 buildset_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator 
short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator change_url: https://github.com/infrawatch/service-telemetry-operator child_jobs: [] event_id: 0e5659e7c90540b9826e576f72e96d0a executor: hostname: ze03.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/inventory.yaml log_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/logs result_data_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/results.json src_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/src work_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work items: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator job: stf-crc-ocp_416-nightly_bundles jobtags: [] max_attempts: 1 pipeline: periodic playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 672a220823fac36a8965fa0d3dca764739bb46c0 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: 
b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 672a220823fac36a8965fa0d3dca764739bb46c0 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 untrusted/project_4/github.com/infrawatch/service-telemetry-operator: canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master commit: eb8aace45bdd3decd60358ca0a7b52ae71b6879a playbooks: - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/deploy_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_0/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_0/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_0/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_4/rdo-jobs/roles - 
path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/test_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_1/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_1/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_1/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_1/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_1/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_1/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_1/role_4/rdo-jobs/roles post_review: true project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: 42957126d9d9b9d1372615db325b82bd992fa335 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/infrawatch/prometheus-webhook-snmp: canonical_hostname: github.com canonical_name: github.com/infrawatch/prometheus-webhook-snmp checkout: 
master checkout_description: zuul branch commit: 3959c53b2613d03d066cb1b2fe5bdae8633ae895 name: infrawatch/prometheus-webhook-snmp required: true short_name: prometheus-webhook-snmp src_dir: src/github.com/infrawatch/prometheus-webhook-snmp github.com/infrawatch/service-telemetry-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master checkout_description: zuul branch commit: eb8aace45bdd3decd60358ca0a7b52ae71b6879a name: infrawatch/service-telemetry-operator required: true short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator github.com/infrawatch/sg-bridge: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-bridge checkout: master checkout_description: zuul branch commit: bab11fba86ad0c21cb35e12b56bf086a3332f1d2 name: infrawatch/sg-bridge required: true short_name: sg-bridge src_dir: src/github.com/infrawatch/sg-bridge github.com/infrawatch/sg-core: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-core checkout: master checkout_description: zuul branch commit: 5a4aece11fea9f71ce7515d11e1e7f0eae97eea6 name: infrawatch/sg-core required: true short_name: sg-core src_dir: src/github.com/infrawatch/sg-core github.com/infrawatch/smart-gateway-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/smart-gateway-operator checkout: master checkout_description: zuul branch commit: 9e0945fe8a0e74be8bc9449318446eeb74336986 name: infrawatch/smart-gateway-operator required: true short_name: smart-gateway-operator src_dir: src/github.com/infrawatch/smart-gateway-operator github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: project override ref commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework 
src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/dataplane-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/dataplane-operator checkout: main checkout_description: project override ref commit: c98b51bcd7fe14b85ed4cf3f5f76552b3455c5f2 name: openstack-k8s-operators/dataplane-operator required: true short_name: dataplane-operator src_dir: src/github.com/openstack-k8s-operators/dataplane-operator github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: project default branch commit: 181b1cc6787a986fc316969801a1c9403cae9dcc name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: project override ref commit: 786269345f996bd262360738a1e3c6b09171f370 name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: project default branch commit: 2f838b62fe50aacff3d514af4b502264e0a276a5 name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: master checkout_description: zuul branch commit: a333e57066b1d48e41f93af68be81188290a96b3 name: 
openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: project override ref commit: 2da49819dd6af6036aede5e4e9a080ff2c6457de name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: project override ref commit: 0b7b865c642a4e7d0dba878ba5b0b58c4c8afc46 name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: project default branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: zuul branch commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: zuul branch commit: 672a220823fac36a8965fa0d3dca764739bb46c0 name: config required: true short_name: 
config src_dir: src/review.rdoproject.org/config ref: refs/heads/master resources: {} tenant: rdoproject.org timeout: 3600 voting: true zuul_change_list: - service-telemetry-operator zuul_execution_branch: main zuul_execution_canonical_name_and_path: github.com/openstack-k8s-operators/ci-framework/ci/playbooks/e2e-collect-logs.yml zuul_execution_phase: post zuul_execution_phase_index: '1' zuul_execution_trusted: 'False' zuul_log_collection: true zuul_success: 'False' zuul_will_retry: 'False' crc: ansible_all_ipv4_addresses: - 38.102.83.51 - 192.168.126.11 ansible_all_ipv6_addresses: - fe80::a95a:5b07:73a2:bffc ansible_apparmor: status: disabled ansible_architecture: x86_64 ansible_bios_date: 04/01/2014 ansible_bios_vendor: SeaBIOS ansible_bios_version: 1.15.0-1 ansible_board_asset_tag: NA ansible_board_name: NA ansible_board_serial: NA ansible_board_vendor: NA ansible_board_version: NA ansible_br_ex: active: true device: br-ex features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off 
[fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.51 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::a95a:5b07:73a2:bffc prefix: '64' scope: link macaddress: fa:16:3e:f0:63:3e mtu: 1500 promisc: true timestamping: [] type: ether ansible_br_int: active: false device: br-int features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' 
tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: 4e:ec:11:72:80:3b mtu: 1400 promisc: true timestamping: [] type: ether ansible_chassis_asset_tag: NA ansible_chassis_serial: NA ansible_chassis_vendor: QEMU ansible_chassis_version: pc-i440fx-6.2 ansible_check_mode: false ansible_cmdline: BOOT_IMAGE: (hd0,gpt3)/boot/ostree/rhcos-8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/vmlinuz-5.14.0-427.22.1.el9_4.x86_64 boot: UUID=6ea7ef63-bc43-49c4-9337-b3b14ffb2763 cgroup_no_v1: all ignition.platform.id: metal ostree: /ostree/boot.1/rhcos/8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/0 psi: '1' root: UUID=68d6f3e9-64e9-44a4-a1d0-311f9c629a01 rootflags: prjquota rw: true systemd.unified_cgroup_hierarchy: '1' ansible_config_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/post_playbook_1/ansible.cfg ansible_connection: ssh ansible_date_time: date: '2025-12-13' day: '13' epoch: '1765584429' epoch_int: '1765584429' hour: '00' iso8601: 
'2025-12-13T00:07:09Z' iso8601_basic: 20251213T000709006213 iso8601_basic_short: 20251213T000709 iso8601_micro: '2025-12-13T00:07:09.006213Z' minute: '07' month: '12' second: 09 time: 00:07:09 tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Saturday weekday_number: '6' weeknumber: '49' year: '2025' ansible_default_ipv4: address: 38.102.83.51 alias: br-ex broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: br-ex macaddress: fa:16:3e:f0:63:3e mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether ansible_default_ipv6: {} ansible_device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 vda2: - EFI-SYSTEM vda3: - boot vda4: - root masters: {} uuids: sr0: - 2025-12-13-00-06-09-00 vda2: - 7B77-95E7 vda3: - 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 vda4: - 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 ansible_devices: sr0: holders: [] host: 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2025-12-13-00-06-09-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '0' vendor: QEMU virtual: 1 vda: holders: [] host: 'SCSI storage controller: Red Hat, Inc. 
Virtio block device' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: [] sectors: '2048' sectorsize: 512 size: 1.00 MB start: '2048' uuid: null vda2: holders: [] links: ids: [] labels: - EFI-SYSTEM masters: [] uuids: - 7B77-95E7 sectors: '260096' sectorsize: 512 size: 127.00 MB start: '4096' uuid: 7B77-95E7 vda3: holders: [] links: ids: [] labels: - boot masters: [] uuids: - 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 sectors: '786432' sectorsize: 512 size: 384.00 MB start: '264192' uuid: 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 vda4: holders: [] links: ids: [] labels: - root masters: [] uuids: - 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 sectors: '166721503' sectorsize: 512 size: 79.50 GB start: '1050624' uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '419430400' sectorsize: '512' size: 200.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 ansible_diff_mode: false ansible_distribution: RedHat ansible_distribution_file_parsed: true ansible_distribution_file_path: /etc/redhat-release ansible_distribution_file_search_string: Red Hat ansible_distribution_file_variety: RedHat ansible_distribution_major_version: '4' ansible_distribution_release: NA ansible_distribution_version: '4.16' ansible_dns: nameservers: - 199.204.44.24 - 199.204.47.54 ansible_domain: '' ansible_effective_group_id: 1000 ansible_effective_user_id: 1000 ansible_ens3: active: true device: ens3 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off 
[fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_lockless: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: fa:16:3e:f0:63:3e module: virtio_net mtu: 1500 pciid: virtio1 promisc: true speed: -1 timestamping: [] type: ether ansible_env: BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus HOME: /var/home/core LANG: C.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: core MOTD_SHOWN: pam PATH: 
/var/home/core/.local/bin:/var/home/core/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /var/home/core SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 45884 22 SSH_CONNECTION: 38.102.83.114 45884 38.102.83.51 22 USER: core XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '2' XDG_SESSION_TYPE: tty _: /usr/bin/python3.9 which_declare: declare -f ansible_eth10: active: true device: eth10 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' 
tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 192.168.126.11 broadcast: 192.168.126.255 netmask: 255.255.255.0 network: 192.168.126.0 prefix: '24' macaddress: 02:dc:78:f0:2f:5b mtu: 1500 promisc: false timestamping: [] type: ether ansible_facts: _ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.51 - 192.168.126.11 all_ipv6_addresses: - fe80::a95a:5b07:73a2:bffc ansible_local: {} apparmor: status: disabled architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA br_ex: active: true device: br-ex features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] 
tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.51 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::a95a:5b07:73a2:bffc prefix: '64' scope: link macaddress: fa:16:3e:f0:63:3e mtu: 1500 promisc: true timestamping: [] type: ether br_int: active: false device: br-int features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: 
off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: 4e:ec:11:72:80:3b mtu: 1400 promisc: true timestamping: [] type: ether chassis_asset_tag: NA chassis_serial: NA chassis_vendor: QEMU chassis_version: pc-i440fx-6.2 cmdline: BOOT_IMAGE: (hd0,gpt3)/boot/ostree/rhcos-8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/vmlinuz-5.14.0-427.22.1.el9_4.x86_64 boot: UUID=6ea7ef63-bc43-49c4-9337-b3b14ffb2763 cgroup_no_v1: all ignition.platform.id: metal ostree: /ostree/boot.1/rhcos/8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/0 psi: '1' root: UUID=68d6f3e9-64e9-44a4-a1d0-311f9c629a01 rootflags: prjquota rw: true systemd.unified_cgroup_hierarchy: '1' date_time: date: '2025-12-13' day: '13' epoch: '1765584429' epoch_int: '1765584429' hour: '00' iso8601: '2025-12-13T00:07:09Z' iso8601_basic: 20251213T000709006213 iso8601_basic_short: 20251213T000709 iso8601_micro: '2025-12-13T00:07:09.006213Z' 
minute: '07' month: '12' second: 09 time: 00:07:09 tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Saturday weekday_number: '6' weeknumber: '49' year: '2025' default_ipv4: address: 38.102.83.51 alias: br-ex broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: br-ex macaddress: fa:16:3e:f0:63:3e mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 vda2: - EFI-SYSTEM vda3: - boot vda4: - root masters: {} uuids: sr0: - 2025-12-13-00-06-09-00 vda2: - 7B77-95E7 vda3: - 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 vda4: - 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 devices: sr0: holders: [] host: 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2025-12-13-00-06-09-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '0' vendor: QEMU virtual: 1 vda: holders: [] host: 'SCSI storage controller: Red Hat, Inc. 
Virtio block device' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: [] sectors: '2048' sectorsize: 512 size: 1.00 MB start: '2048' uuid: null vda2: holders: [] links: ids: [] labels: - EFI-SYSTEM masters: [] uuids: - 7B77-95E7 sectors: '260096' sectorsize: 512 size: 127.00 MB start: '4096' uuid: 7B77-95E7 vda3: holders: [] links: ids: [] labels: - boot masters: [] uuids: - 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 sectors: '786432' sectorsize: 512 size: 384.00 MB start: '264192' uuid: 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 vda4: holders: [] links: ids: [] labels: - root masters: [] uuids: - 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 sectors: '166721503' sectorsize: 512 size: 79.50 GB start: '1050624' uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '419430400' sectorsize: '512' size: 200.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3.9 distribution: RedHat distribution_file_parsed: true distribution_file_path: /etc/redhat-release distribution_file_search_string: Red Hat distribution_file_variety: RedHat distribution_major_version: '4' distribution_release: NA distribution_version: '4.16' dns: nameservers: - 199.204.44.24 - 199.204.47.54 domain: '' effective_group_id: 1000 effective_user_id: 1000 ens3: active: true device: ens3 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off 
[fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_lockless: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: fa:16:3e:f0:63:3e module: virtio_net mtu: 1500 pciid: virtio1 promisc: true speed: -1 timestamping: [] type: ether env: BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus HOME: /var/home/core LANG: C.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: core MOTD_SHOWN: pam PATH: /var/home/core/.local/bin:/var/home/core/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: 
/var/home/core SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 45884 22 SSH_CONNECTION: 38.102.83.114 45884 38.102.83.51 22 USER: core XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '2' XDG_SESSION_TYPE: tty _: /usr/bin/python3.9 which_declare: declare -f eth10: active: true device: eth10 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' 
tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 192.168.126.11 broadcast: 192.168.126.255 netmask: 255.255.255.0 network: 192.168.126.0 prefix: '24' macaddress: 02:dc:78:f0:2f:5b mtu: 1500 promisc: false timestamping: [] type: ether fibre_channel_wwn: [] fips: false form_factor: Other fqdn: crc gather_subset: - all hostname: crc hostnqn: nqn.2014-08.org.nvmexpress:uuid:fe28b1dc-f424-4106-9c95-00604d2bcd5f interfaces: - ovn-k8s-mp0 - eth10 - lo - ens3 - br-int - br-ex - ovs-system is_chroot: true iscsi_iqn: iqn.1994-05.com.redhat:24fed7ce643e kernel: 5.14.0-427.22.1.el9_4.x86_64 kernel_version: '#1 SMP PREEMPT_DYNAMIC Mon Jun 10 09:23:36 EDT 2024' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] netns_local: on [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on 
[fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_lockless: on [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback loadavg: 15m: 0.09 1m: 0.85 5m: 0.26 locally_reachable_ips: ipv4: - 38.102.83.51 - 127.0.0.0/8 - 127.0.0.1 - 192.168.126.11 ipv6: - ::1 - fe80::a95a:5b07:73a2:bffc lsb: {} lvm: N/A machine: x86_64 machine_id: c1bd596843fb445da20eca66471ddf66 memfree_mb: 29021 memory_mb: nocache: free: 30337 used: 1758 real: free: 29021 total: 32095 used: 3074 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 32095 module_setup: true mounts: - block_available: 13220379 block_size: 4096 block_total: 20823803 block_used: 7603424 device: /dev/vda4 fstype: xfs inode_available: 41489052 inode_total: 41680320 inode_used: 191268 mount: /sysroot options: ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota size_available: 54150672384 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220379 block_size: 4096 block_total: 20823803 
block_used: 7603424 device: /dev/vda4 fstype: xfs inode_available: 41489052 inode_total: 41680320 inode_used: 191268 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150672384 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220379 block_size: 4096 block_total: 20823803 block_used: 7603424 device: /dev/vda4 fstype: xfs inode_available: 41489052 inode_total: 41680320 inode_used: 191268 mount: /etc options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150672384 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220379 block_size: 4096 block_total: 20823803 block_used: 7603424 device: /dev/vda4 fstype: xfs inode_available: 41489052 inode_total: 41680320 inode_used: 191268 mount: /usr options: ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150672384 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220379 block_size: 4096 block_total: 20823803 block_used: 7603424 device: /dev/vda4 fstype: xfs inode_available: 41489052 inode_total: 41680320 inode_used: 191268 mount: /sysroot/ostree/deploy/rhcos/var options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150672384 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220379 block_size: 4096 block_total: 20823803 block_used: 7603424 device: /dev/vda4 fstype: xfs inode_available: 41489052 inode_total: 41680320 inode_used: 191268 mount: /var options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150672384 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 221344 block_size: 1024 block_total: 358271 block_used: 136927 device: /dev/vda3 fstype: ext4 inode_available: 97936 inode_total: 98304 inode_used: 368 mount: 
/boot options: ro,seclabel,nosuid,nodev,relatime size_available: 226656256 size_total: 366869504 uuid: 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 - block_available: 0 block_size: 2048 block_total: 241 block_used: 241 device: /dev/sr0 fstype: iso9660 inode_available: 0 inode_total: 0 inode_used: 0 mount: /tmp/openstack-config-drive options: ro,relatime,nojoliet,check=s,map=n,blocksize=2048 size_available: 0 size_total: 493568 uuid: 2025-12-13-00-06-09-00 nodename: crc os_family: RedHat ovn_k8s_mp0: active: false device: ovn-k8s-mp0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 
'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: b6:dc:d9:26:03:d4 mtu: 1400 promisc: true timestamping: [] type: ether ovs_system: active: false device: ovs-system features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: on [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' 
tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: 'on' tx_udp_tnl_segmentation: 'on' tx_vlan_offload: 'on' tx_vlan_stag_hw_insert: 'on' vlan_challenged: off [fixed] hw_timestamp_filters: [] macaddress: 2a:54:8a:53:65:39 mtu: 1500 promisc: true timestamping: [] type: ether pkg_mgr: atomic_container proc_cmdline: BOOT_IMAGE: (hd0,gpt3)/boot/ostree/rhcos-8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/vmlinuz-5.14.0-427.22.1.el9_4.x86_64 boot: UUID=6ea7ef63-bc43-49c4-9337-b3b14ffb2763 cgroup_no_v1: all ignition.platform.id: metal ostree: /ostree/boot.1/rhcos/8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/0 psi: '1' root: UUID=68d6f3e9-64e9-44a4-a1d0-311f9c629a01 rootflags: prjquota rw: true systemd.unified_cgroup_hierarchy: '1' processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor - '8' - AuthenticAMD - AMD EPYC-Rome Processor - '9' - AuthenticAMD - AMD EPYC-Rome Processor - '10' - AuthenticAMD - AMD EPYC-Rome Processor - '11' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 12 processor_nproc: 12 processor_threads_per_core: 1 processor_vcpus: 12 product_name: OpenStack Nova product_serial: NA product_uuid: NA product_version: 26.3.1 python: executable: /usr/bin/python3.9 has_sslcontext: true type: cpython version: major: 3 micro: 18 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 18 - final - 0 python_version: 3.9.18 
real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: targeted selinux_python_present: true service_mgr: systemd services: NetworkManager-clean-initrd-state.service: name: NetworkManager-clean-initrd-state.service source: systemd state: stopped status: enabled NetworkManager-dispatcher.service: name: NetworkManager-dispatcher.service source: systemd state: running status: enabled NetworkManager-wait-online.service: name: NetworkManager-wait-online.service source: systemd state: stopped status: enabled NetworkManager.service: name: NetworkManager.service source: systemd state: running status: enabled afterburn-checkin.service: name: afterburn-checkin.service source: systemd state: stopped status: enabled afterburn-firstboot-checkin.service: name: afterburn-firstboot-checkin.service source: systemd state: stopped status: enabled afterburn-sshkeys@.service: name: afterburn-sshkeys@.service source: systemd state: unknown status: disabled afterburn.service: name: afterburn.service source: systemd state: inactive status: disabled arp-ethers.service: name: arp-ethers.service source: systemd state: inactive status: disabled auditd.service: name: auditd.service source: systemd state: running status: enabled auth-rpcgss-module.service: name: auth-rpcgss-module.service source: systemd state: stopped status: static autovt@.service: name: autovt@.service source: systemd state: unknown status: alias blk-availability.service: name: blk-availability.service source: systemd state: stopped status: disabled bootc-fetch-apply-updates.service: name: bootc-fetch-apply-updates.service source: systemd state: inactive status: static bootkube.service: name: bootkube.service source: systemd state: inactive status: disabled bootupd.service: name: bootupd.service source: systemd state: stopped status: static chrony-wait.service: name: chrony-wait.service source: systemd state: inactive status: disabled 
chronyd-restricted.service: name: chronyd-restricted.service source: systemd state: inactive status: disabled chronyd.service: name: chronyd.service source: systemd state: running status: enabled clevis-luks-askpass.service: name: clevis-luks-askpass.service source: systemd state: stopped status: static cni-dhcp.service: name: cni-dhcp.service source: systemd state: inactive status: disabled configure-cloudinit-ssh.service: name: configure-cloudinit-ssh.service source: systemd state: stopped status: enabled console-getty.service: name: console-getty.service source: systemd state: inactive status: disabled console-login-helper-messages-gensnippet-ssh-keys.service: name: console-login-helper-messages-gensnippet-ssh-keys.service source: systemd state: stopped status: enabled container-getty@.service: name: container-getty@.service source: systemd state: unknown status: static coreos-generate-iscsi-initiatorname.service: name: coreos-generate-iscsi-initiatorname.service source: systemd state: stopped status: enabled coreos-ignition-delete-config.service: name: coreos-ignition-delete-config.service source: systemd state: stopped status: enabled coreos-ignition-firstboot-complete.service: name: coreos-ignition-firstboot-complete.service source: systemd state: stopped status: enabled coreos-ignition-write-issues.service: name: coreos-ignition-write-issues.service source: systemd state: stopped status: enabled coreos-installer-disable-device-auto-activation.service: name: coreos-installer-disable-device-auto-activation.service source: systemd state: inactive status: static coreos-installer-noreboot.service: name: coreos-installer-noreboot.service source: systemd state: inactive status: static coreos-installer-reboot.service: name: coreos-installer-reboot.service source: systemd state: inactive status: static coreos-installer-secure-ipl-reboot.service: name: coreos-installer-secure-ipl-reboot.service source: systemd state: inactive status: static coreos-installer.service: 
name: coreos-installer.service source: systemd state: inactive status: static coreos-liveiso-success.service: name: coreos-liveiso-success.service source: systemd state: stopped status: enabled coreos-platform-chrony-config.service: name: coreos-platform-chrony-config.service source: systemd state: stopped status: enabled coreos-populate-lvmdevices.service: name: coreos-populate-lvmdevices.service source: systemd state: stopped status: enabled coreos-printk-quiet.service: name: coreos-printk-quiet.service source: systemd state: stopped status: enabled coreos-update-ca-trust.service: name: coreos-update-ca-trust.service source: systemd state: stopped status: enabled crc-dnsmasq.service: name: crc-dnsmasq.service source: systemd state: stopped status: not-found crc-pre.service: name: crc-pre.service source: systemd state: stopped status: enabled crio-subid.service: name: crio-subid.service source: systemd state: stopped status: enabled crio-wipe.service: name: crio-wipe.service source: systemd state: stopped status: disabled crio.service: name: crio.service source: systemd state: stopped status: disabled dbus-broker.service: name: dbus-broker.service source: systemd state: running status: enabled dbus-org.freedesktop.hostname1.service: name: dbus-org.freedesktop.hostname1.service source: systemd state: active status: alias dbus-org.freedesktop.locale1.service: name: dbus-org.freedesktop.locale1.service source: systemd state: inactive status: alias dbus-org.freedesktop.login1.service: name: dbus-org.freedesktop.login1.service source: systemd state: active status: alias dbus-org.freedesktop.nm-dispatcher.service: name: dbus-org.freedesktop.nm-dispatcher.service source: systemd state: active status: alias dbus-org.freedesktop.timedate1.service: name: dbus-org.freedesktop.timedate1.service source: systemd state: inactive status: alias dbus.service: name: dbus.service source: systemd state: active status: alias debug-shell.service: name: debug-shell.service source: 
systemd state: inactive status: disabled disable-mglru.service: name: disable-mglru.service source: systemd state: stopped status: enabled display-manager.service: name: display-manager.service source: systemd state: stopped status: not-found dm-event.service: name: dm-event.service source: systemd state: stopped status: static dnf-makecache.service: name: dnf-makecache.service source: systemd state: inactive status: static dnsmasq.service: name: dnsmasq.service source: systemd state: running status: enabled dracut-cmdline.service: name: dracut-cmdline.service source: systemd state: stopped status: static dracut-initqueue.service: name: dracut-initqueue.service source: systemd state: stopped status: static dracut-mount.service: name: dracut-mount.service source: systemd state: stopped status: static dracut-pre-mount.service: name: dracut-pre-mount.service source: systemd state: stopped status: static dracut-pre-pivot.service: name: dracut-pre-pivot.service source: systemd state: stopped status: static dracut-pre-trigger.service: name: dracut-pre-trigger.service source: systemd state: stopped status: static dracut-pre-udev.service: name: dracut-pre-udev.service source: systemd state: stopped status: static dracut-shutdown-onfailure.service: name: dracut-shutdown-onfailure.service source: systemd state: stopped status: static dracut-shutdown.service: name: dracut-shutdown.service source: systemd state: stopped status: static dummy-network.service: name: dummy-network.service source: systemd state: stopped status: enabled emergency.service: name: emergency.service source: systemd state: stopped status: static fcoe.service: name: fcoe.service source: systemd state: stopped status: not-found fstrim.service: name: fstrim.service source: systemd state: inactive status: static fwupd-offline-update.service: name: fwupd-offline-update.service source: systemd state: inactive status: static fwupd-refresh.service: name: fwupd-refresh.service source: systemd state: inactive 
status: static fwupd.service: name: fwupd.service source: systemd state: inactive status: static gcp-routes.service: name: gcp-routes.service source: systemd state: stopped status: enabled getty@.service: name: getty@.service source: systemd state: unknown status: enabled getty@tty1.service: name: getty@tty1.service source: systemd state: running status: active gssproxy.service: name: gssproxy.service source: systemd state: stopped status: disabled gvisor-tap-vsock.service: name: gvisor-tap-vsock.service source: systemd state: stopped status: enabled hypervfcopyd.service: name: hypervfcopyd.service source: systemd state: inactive status: static hypervkvpd.service: name: hypervkvpd.service source: systemd state: inactive status: static hypervvssd.service: name: hypervvssd.service source: systemd state: inactive status: static ignition-delete-config.service: name: ignition-delete-config.service source: systemd state: stopped status: enabled initrd-cleanup.service: name: initrd-cleanup.service source: systemd state: stopped status: static initrd-parse-etc.service: name: initrd-parse-etc.service source: systemd state: stopped status: static initrd-switch-root.service: name: initrd-switch-root.service source: systemd state: stopped status: static initrd-udevadm-cleanup-db.service: name: initrd-udevadm-cleanup-db.service source: systemd state: stopped status: static irqbalance.service: name: irqbalance.service source: systemd state: running status: enabled iscsi-init.service: name: iscsi-init.service source: systemd state: stopped status: disabled iscsi-onboot.service: name: iscsi-onboot.service source: systemd state: stopped status: enabled iscsi-shutdown.service: name: iscsi-shutdown.service source: systemd state: stopped status: static iscsi-starter.service: name: iscsi-starter.service source: systemd state: inactive status: disabled iscsi.service: name: iscsi.service source: systemd state: stopped status: indirect iscsid.service: name: iscsid.service source: systemd 
state: stopped status: disabled iscsiuio.service: name: iscsiuio.service source: systemd state: stopped status: disabled kdump.service: name: kdump.service source: systemd state: stopped status: disabled kmod-static-nodes.service: name: kmod-static-nodes.service source: systemd state: stopped status: static kubelet-auto-node-size.service: name: kubelet-auto-node-size.service source: systemd state: stopped status: enabled kubelet-cleanup.service: name: kubelet-cleanup.service source: systemd state: stopped status: enabled kubelet.service: name: kubelet.service source: systemd state: stopped status: disabled kubens.service: name: kubens.service source: systemd state: stopped status: disabled ldconfig.service: name: ldconfig.service source: systemd state: stopped status: static logrotate.service: name: logrotate.service source: systemd state: stopped status: static lvm2-activation-early.service: name: lvm2-activation-early.service source: systemd state: stopped status: not-found lvm2-lvmpolld.service: name: lvm2-lvmpolld.service source: systemd state: stopped status: static lvm2-monitor.service: name: lvm2-monitor.service source: systemd state: stopped status: enabled machine-config-daemon-firstboot.service: name: machine-config-daemon-firstboot.service source: systemd state: stopped status: enabled machine-config-daemon-pull.service: name: machine-config-daemon-pull.service source: systemd state: stopped status: enabled mdadm-grow-continue@.service: name: mdadm-grow-continue@.service source: systemd state: unknown status: static mdadm-last-resort@.service: name: mdadm-last-resort@.service source: systemd state: unknown status: static mdcheck_continue.service: name: mdcheck_continue.service source: systemd state: inactive status: static mdcheck_start.service: name: mdcheck_start.service source: systemd state: inactive status: static mdmon@.service: name: mdmon@.service source: systemd state: unknown status: static mdmonitor-oneshot.service: name: 
mdmonitor-oneshot.service source: systemd state: inactive status: static mdmonitor.service: name: mdmonitor.service source: systemd state: stopped status: enabled microcode.service: name: microcode.service source: systemd state: stopped status: enabled modprobe@.service: name: modprobe@.service source: systemd state: unknown status: static modprobe@configfs.service: name: modprobe@configfs.service source: systemd state: stopped status: inactive modprobe@drm.service: name: modprobe@drm.service source: systemd state: stopped status: inactive modprobe@efi_pstore.service: name: modprobe@efi_pstore.service source: systemd state: stopped status: inactive modprobe@fuse.service: name: modprobe@fuse.service source: systemd state: stopped status: inactive multipathd.service: name: multipathd.service source: systemd state: stopped status: enabled netavark-dhcp-proxy.service: name: netavark-dhcp-proxy.service source: systemd state: inactive status: disabled netavark-firewalld-reload.service: name: netavark-firewalld-reload.service source: systemd state: inactive status: disabled network.service: name: network.service source: systemd state: stopped status: not-found nfs-blkmap.service: name: nfs-blkmap.service source: systemd state: inactive status: disabled nfs-idmapd.service: name: nfs-idmapd.service source: systemd state: stopped status: static nfs-mountd.service: name: nfs-mountd.service source: systemd state: stopped status: static nfs-server.service: name: nfs-server.service source: systemd state: stopped status: disabled nfs-utils.service: name: nfs-utils.service source: systemd state: stopped status: static nfsdcld.service: name: nfsdcld.service source: systemd state: stopped status: static nftables.service: name: nftables.service source: systemd state: inactive status: disabled nis-domainname.service: name: nis-domainname.service source: systemd state: inactive status: disabled nm-cloud-setup.service: name: nm-cloud-setup.service source: systemd state: inactive status: 
disabled nm-priv-helper.service: name: nm-priv-helper.service source: systemd state: inactive status: static nmstate.service: name: nmstate.service source: systemd state: stopped status: enabled node-valid-hostname.service: name: node-valid-hostname.service source: systemd state: stopped status: enabled nodeip-configuration.service: name: nodeip-configuration.service source: systemd state: stopped status: enabled ntpd.service: name: ntpd.service source: systemd state: stopped status: not-found ntpdate.service: name: ntpdate.service source: systemd state: stopped status: not-found nvmefc-boot-connections.service: name: nvmefc-boot-connections.service source: systemd state: stopped status: enabled nvmf-autoconnect.service: name: nvmf-autoconnect.service source: systemd state: inactive status: disabled nvmf-connect@.service: name: nvmf-connect@.service source: systemd state: unknown status: static openvswitch.service: name: openvswitch.service source: systemd state: stopped status: enabled ostree-boot-complete.service: name: ostree-boot-complete.service source: systemd state: stopped status: enabled-runtime ostree-finalize-staged-hold.service: name: ostree-finalize-staged-hold.service source: systemd state: stopped status: static ostree-finalize-staged.service: name: ostree-finalize-staged.service source: systemd state: stopped status: static ostree-prepare-root.service: name: ostree-prepare-root.service source: systemd state: inactive status: static ostree-readonly-sysroot-migration.service: name: ostree-readonly-sysroot-migration.service source: systemd state: stopped status: disabled ostree-remount.service: name: ostree-remount.service source: systemd state: stopped status: enabled ostree-state-overlay@.service: name: ostree-state-overlay@.service source: systemd state: unknown status: disabled ovs-configuration.service: name: ovs-configuration.service source: systemd state: stopped status: enabled ovs-delete-transient-ports.service: name: 
ovs-delete-transient-ports.service source: systemd state: stopped status: static ovs-vswitchd.service: name: ovs-vswitchd.service source: systemd state: running status: static ovsdb-server.service: name: ovsdb-server.service source: systemd state: running status: static pam_namespace.service: name: pam_namespace.service source: systemd state: inactive status: static plymouth-quit-wait.service: name: plymouth-quit-wait.service source: systemd state: stopped status: not-found plymouth-read-write.service: name: plymouth-read-write.service source: systemd state: stopped status: not-found plymouth-start.service: name: plymouth-start.service source: systemd state: stopped status: not-found podman-auto-update.service: name: podman-auto-update.service source: systemd state: inactive status: disabled podman-clean-transient.service: name: podman-clean-transient.service source: systemd state: inactive status: disabled podman-kube@.service: name: podman-kube@.service source: systemd state: unknown status: disabled podman-restart.service: name: podman-restart.service source: systemd state: inactive status: disabled podman.service: name: podman.service source: systemd state: stopped status: disabled polkit.service: name: polkit.service source: systemd state: inactive status: static qemu-guest-agent.service: name: qemu-guest-agent.service source: systemd state: stopped status: enabled quotaon.service: name: quotaon.service source: systemd state: inactive status: static raid-check.service: name: raid-check.service source: systemd state: inactive status: static rbdmap.service: name: rbdmap.service source: systemd state: stopped status: not-found rc-local.service: name: rc-local.service source: systemd state: stopped status: static rdisc.service: name: rdisc.service source: systemd state: inactive status: disabled rdma-load-modules@.service: name: rdma-load-modules@.service source: systemd state: unknown status: static rdma-ndd.service: name: rdma-ndd.service source: systemd state: 
inactive status: static rescue.service: name: rescue.service source: systemd state: stopped status: static rhcos-usrlocal-selinux-fixup.service: name: rhcos-usrlocal-selinux-fixup.service source: systemd state: stopped status: enabled rpc-gssd.service: name: rpc-gssd.service source: systemd state: stopped status: static rpc-statd-notify.service: name: rpc-statd-notify.service source: systemd state: stopped status: static rpc-statd.service: name: rpc-statd.service source: systemd state: stopped status: static rpc-svcgssd.service: name: rpc-svcgssd.service source: systemd state: stopped status: not-found rpcbind.service: name: rpcbind.service source: systemd state: stopped status: disabled rpm-ostree-bootstatus.service: name: rpm-ostree-bootstatus.service source: systemd state: inactive status: disabled rpm-ostree-countme.service: name: rpm-ostree-countme.service source: systemd state: inactive status: static rpm-ostree-fix-shadow-mode.service: name: rpm-ostree-fix-shadow-mode.service source: systemd state: stopped status: disabled rpm-ostreed-automatic.service: name: rpm-ostreed-automatic.service source: systemd state: inactive status: static rpm-ostreed.service: name: rpm-ostreed.service source: systemd state: inactive status: static rpmdb-rebuild.service: name: rpmdb-rebuild.service source: systemd state: inactive status: disabled selinux-autorelabel-mark.service: name: selinux-autorelabel-mark.service source: systemd state: stopped status: enabled selinux-autorelabel.service: name: selinux-autorelabel.service source: systemd state: inactive status: static selinux-check-proper-disable.service: name: selinux-check-proper-disable.service source: systemd state: inactive status: disabled serial-getty@.service: name: serial-getty@.service source: systemd state: unknown status: disabled sntp.service: name: sntp.service source: systemd state: stopped status: not-found sshd-keygen@.service: name: sshd-keygen@.service source: systemd state: unknown status: disabled 
sshd-keygen@ecdsa.service: name: sshd-keygen@ecdsa.service source: systemd state: stopped status: inactive sshd-keygen@ed25519.service: name: sshd-keygen@ed25519.service source: systemd state: stopped status: inactive sshd-keygen@rsa.service: name: sshd-keygen@rsa.service source: systemd state: stopped status: inactive sshd.service: name: sshd.service source: systemd state: running status: enabled sshd@.service: name: sshd@.service source: systemd state: unknown status: static sssd-autofs.service: name: sssd-autofs.service source: systemd state: inactive status: indirect sssd-nss.service: name: sssd-nss.service source: systemd state: inactive status: indirect sssd-pac.service: name: sssd-pac.service source: systemd state: inactive status: indirect sssd-pam.service: name: sssd-pam.service source: systemd state: inactive status: indirect sssd-ssh.service: name: sssd-ssh.service source: systemd state: inactive status: indirect sssd-sudo.service: name: sssd-sudo.service source: systemd state: inactive status: indirect sssd.service: name: sssd.service source: systemd state: stopped status: enabled stalld.service: name: stalld.service source: systemd state: inactive status: disabled syslog.service: name: syslog.service source: systemd state: stopped status: not-found system-update-cleanup.service: name: system-update-cleanup.service source: systemd state: inactive status: static systemd-ask-password-console.service: name: systemd-ask-password-console.service source: systemd state: stopped status: static systemd-ask-password-wall.service: name: systemd-ask-password-wall.service source: systemd state: stopped status: static systemd-backlight@.service: name: systemd-backlight@.service source: systemd state: unknown status: static systemd-binfmt.service: name: systemd-binfmt.service source: systemd state: stopped status: static systemd-bless-boot.service: name: systemd-bless-boot.service source: systemd state: inactive status: static systemd-boot-check-no-failures.service: 
name: systemd-boot-check-no-failures.service source: systemd state: inactive status: disabled systemd-boot-random-seed.service: name: systemd-boot-random-seed.service source: systemd state: stopped status: static systemd-boot-update.service: name: systemd-boot-update.service source: systemd state: stopped status: enabled systemd-coredump@.service: name: systemd-coredump@.service source: systemd state: unknown status: static systemd-exit.service: name: systemd-exit.service source: systemd state: inactive status: static systemd-fsck-root.service: name: systemd-fsck-root.service source: systemd state: stopped status: static systemd-fsck@.service: name: systemd-fsck@.service source: systemd state: unknown status: static systemd-fsck@dev-disk-by\x2duuid-6ea7ef63\x2dbc43\x2d49c4\x2d9337\x2db3b14ffb2763.service: name: systemd-fsck@dev-disk-by\x2duuid-6ea7ef63\x2dbc43\x2d49c4\x2d9337\x2db3b14ffb2763.service source: systemd state: stopped status: active systemd-growfs-root.service: name: systemd-growfs-root.service source: systemd state: inactive status: static systemd-growfs@.service: name: systemd-growfs@.service source: systemd state: unknown status: static systemd-halt.service: name: systemd-halt.service source: systemd state: inactive status: static systemd-hibernate-resume@.service: name: systemd-hibernate-resume@.service source: systemd state: unknown status: static systemd-hibernate.service: name: systemd-hibernate.service source: systemd state: inactive status: static systemd-hostnamed.service: name: systemd-hostnamed.service source: systemd state: running status: static systemd-hwdb-update.service: name: systemd-hwdb-update.service source: systemd state: stopped status: static systemd-hybrid-sleep.service: name: systemd-hybrid-sleep.service source: systemd state: inactive status: static systemd-initctl.service: name: systemd-initctl.service source: systemd state: stopped status: static systemd-journal-catalog-update.service: name: 
systemd-journal-catalog-update.service source: systemd state: stopped status: static systemd-journal-flush.service: name: systemd-journal-flush.service source: systemd state: stopped status: static systemd-journal-gatewayd.service: name: systemd-journal-gatewayd.service source: systemd state: inactive status: indirect systemd-journal-remote.service: name: systemd-journal-remote.service source: systemd state: inactive status: indirect systemd-journal-upload.service: name: systemd-journal-upload.service source: systemd state: inactive status: disabled systemd-journald.service: name: systemd-journald.service source: systemd state: running status: static systemd-journald@.service: name: systemd-journald@.service source: systemd state: unknown status: static systemd-kexec.service: name: systemd-kexec.service source: systemd state: inactive status: static systemd-localed.service: name: systemd-localed.service source: systemd state: inactive status: static systemd-logind.service: name: systemd-logind.service source: systemd state: running status: static systemd-machine-id-commit.service: name: systemd-machine-id-commit.service source: systemd state: stopped status: static systemd-modules-load.service: name: systemd-modules-load.service source: systemd state: stopped status: static systemd-network-generator.service: name: systemd-network-generator.service source: systemd state: stopped status: enabled systemd-pcrfs-root.service: name: systemd-pcrfs-root.service source: systemd state: inactive status: static systemd-pcrfs@.service: name: systemd-pcrfs@.service source: systemd state: unknown status: static systemd-pcrmachine.service: name: systemd-pcrmachine.service source: systemd state: stopped status: static systemd-pcrphase-initrd.service: name: systemd-pcrphase-initrd.service source: systemd state: stopped status: static systemd-pcrphase-sysinit.service: name: systemd-pcrphase-sysinit.service source: systemd state: stopped status: static systemd-pcrphase.service: name: 
systemd-pcrphase.service source: systemd state: stopped status: static systemd-poweroff.service: name: systemd-poweroff.service source: systemd state: inactive status: static systemd-pstore.service: name: systemd-pstore.service source: systemd state: stopped status: enabled systemd-quotacheck.service: name: systemd-quotacheck.service source: systemd state: stopped status: static systemd-random-seed.service: name: systemd-random-seed.service source: systemd state: stopped status: static systemd-reboot.service: name: systemd-reboot.service source: systemd state: inactive status: static systemd-remount-fs.service: name: systemd-remount-fs.service source: systemd state: stopped status: enabled-runtime systemd-repart.service: name: systemd-repart.service source: systemd state: stopped status: masked systemd-rfkill.service: name: systemd-rfkill.service source: systemd state: stopped status: static systemd-suspend-then-hibernate.service: name: systemd-suspend-then-hibernate.service source: systemd state: inactive status: static systemd-suspend.service: name: systemd-suspend.service source: systemd state: inactive status: static systemd-sysctl.service: name: systemd-sysctl.service source: systemd state: stopped status: static systemd-sysext.service: name: systemd-sysext.service source: systemd state: stopped status: disabled systemd-sysupdate-reboot.service: name: systemd-sysupdate-reboot.service source: systemd state: inactive status: indirect systemd-sysupdate.service: name: systemd-sysupdate.service source: systemd state: inactive status: indirect systemd-sysusers.service: name: systemd-sysusers.service source: systemd state: stopped status: static systemd-timedated.service: name: systemd-timedated.service source: systemd state: inactive status: static systemd-timesyncd.service: name: systemd-timesyncd.service source: systemd state: stopped status: not-found systemd-tmpfiles-clean.service: name: systemd-tmpfiles-clean.service source: systemd state: stopped status: 
static systemd-tmpfiles-setup-dev.service: name: systemd-tmpfiles-setup-dev.service source: systemd state: stopped status: static systemd-tmpfiles-setup.service: name: systemd-tmpfiles-setup.service source: systemd state: stopped status: static systemd-tmpfiles.service: name: systemd-tmpfiles.service source: systemd state: stopped status: not-found systemd-udev-settle.service: name: systemd-udev-settle.service source: systemd state: stopped status: static systemd-udev-trigger.service: name: systemd-udev-trigger.service source: systemd state: stopped status: static systemd-udevd.service: name: systemd-udevd.service source: systemd state: running status: static systemd-update-done.service: name: systemd-update-done.service source: systemd state: stopped status: static systemd-update-utmp-runlevel.service: name: systemd-update-utmp-runlevel.service source: systemd state: stopped status: static systemd-update-utmp.service: name: systemd-update-utmp.service source: systemd state: stopped status: static systemd-user-sessions.service: name: systemd-user-sessions.service source: systemd state: stopped status: static systemd-vconsole-setup.service: name: systemd-vconsole-setup.service source: systemd state: stopped status: static systemd-volatile-root.service: name: systemd-volatile-root.service source: systemd state: inactive status: static systemd-zram-setup@.service: name: systemd-zram-setup@.service source: systemd state: unknown status: static teamd@.service: name: teamd@.service source: systemd state: unknown status: static unbound-anchor.service: name: unbound-anchor.service source: systemd state: stopped status: static user-runtime-dir@.service: name: user-runtime-dir@.service source: systemd state: unknown status: static user-runtime-dir@0.service: name: user-runtime-dir@0.service source: systemd state: stopped status: active user-runtime-dir@1000.service: name: user-runtime-dir@1000.service source: systemd state: stopped status: active user@.service: name: 
user@.service source: systemd state: unknown status: static user@0.service: name: user@0.service source: systemd state: running status: active user@1000.service: name: user@1000.service source: systemd state: running status: active vgauthd.service: name: vgauthd.service source: systemd state: stopped status: enabled vmtoolsd.service: name: vmtoolsd.service source: systemd state: stopped status: enabled wait-for-primary-ip.service: name: wait-for-primary-ip.service source: systemd state: stopped status: enabled ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDs7MQU61ADe4LfEllZo6w2h2Vo1Z9nNArIkKGmgua8bOly2nQBIoDIKgNOXqUpoIZx1528UeeHSQu9SxYL21mo= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIDKHFhjB7ae+dVOClQLGXnCaMXGjEeLhmEhxE64Ddkhe ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQCr2rWpvTGLA5BK4eYXB55gorB9vAJK1K0iUmnm+r9AcvcXH33bR/O6ZNh9h85mHU5l1Gw9nBLRbHn42EU+6Ht6te2Z1gIiJEKpfiC0sR0aMcT4hKQWHmwYqQM/VLXhPiS4OnhO1OJuz0arj1Anr1hDcEJpVTAj3sbfkgzzbBeEWMg2V3Apr1fqDimNlyWRiDFy3TUdKfnB7nucGaGbHneeVxvwv81RGur6I9VHZe/odqEQTGRUBXdu57xybxd6Yc3863ayL5L1OhGTN/x7d8qeEJGb9zt6VvtFWlpVjIXa2l+uTZVfTvufdLwxJdBRg0kHMXH2ZJ3U8w9NRHMBHG7M6YjX0w95uCB/FnyN6s8V/KRQtSnC6Wt6YMP438rM2K9yydXdS/qUQm5hQLP7eY8/Nl4+RDQAvZOjPp+DeUxXfZOqR4qq8tCKi/5Cvd7ChYfPyymeV4RKAJf971EuO0zphyDK8knic0c2XTybK6WTM8lYcbUMYJxg1CW5o1VMjpk= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 52 user_dir: /var/home/core user_gecos: CoreOS Admin user_gid: 1000 user_id: core user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack ansible_fibre_channel_wwn: [] 
ansible_fips: false ansible_forks: 5 ansible_form_factor: Other ansible_fqdn: crc ansible_host: 38.102.83.51 ansible_hostname: crc ansible_hostnqn: nqn.2014-08.org.nvmexpress:uuid:fe28b1dc-f424-4106-9c95-00604d2bcd5f ansible_interfaces: - ovn-k8s-mp0 - eth10 - lo - ens3 - br-int - br-ex - ovs-system ansible_inventory_sources: - /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/post_playbook_1/inventory.yaml ansible_is_chroot: true ansible_iscsi_iqn: iqn.1994-05.com.redhat:24fed7ce643e ansible_kernel: 5.14.0-427.22.1.el9_4.x86_64 ansible_kernel_version: '#1 SMP PREEMPT_DYNAMIC Mon Jun 10 09:23:36 EDT 2024' ansible_lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] netns_local: on [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] 
tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_lockless: on [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false timestamping: [] type: loopback ansible_loadavg: 15m: 0.09 1m: 0.85 5m: 0.26 ansible_local: {} ansible_locally_reachable_ips: ipv4: - 38.102.83.51 - 127.0.0.0/8 - 127.0.0.1 - 192.168.126.11 ipv6: - ::1 - fe80::a95a:5b07:73a2:bffc ansible_lsb: {} ansible_lvm: N/A ansible_machine: x86_64 ansible_machine_id: c1bd596843fb445da20eca66471ddf66 ansible_memfree_mb: 29021 ansible_memory_mb: nocache: free: 30337 used: 1758 real: free: 29021 total: 32095 used: 3074 swap: cached: 0 free: 0 total: 0 used: 0 ansible_memtotal_mb: 32095 ansible_mounts: - block_available: 13220379 block_size: 4096 block_total: 20823803 block_used: 7603424 device: /dev/vda4 fstype: xfs inode_available: 41489052 inode_total: 41680320 inode_used: 191268 mount: /sysroot options: ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota size_available: 54150672384 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220379 block_size: 4096 block_total: 20823803 block_used: 7603424 device: /dev/vda4 fstype: xfs inode_available: 41489052 inode_total: 41680320 inode_used: 191268 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 
54150672384 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220379 block_size: 4096 block_total: 20823803 block_used: 7603424 device: /dev/vda4 fstype: xfs inode_available: 41489052 inode_total: 41680320 inode_used: 191268 mount: /etc options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150672384 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220379 block_size: 4096 block_total: 20823803 block_used: 7603424 device: /dev/vda4 fstype: xfs inode_available: 41489052 inode_total: 41680320 inode_used: 191268 mount: /usr options: ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150672384 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220379 block_size: 4096 block_total: 20823803 block_used: 7603424 device: /dev/vda4 fstype: xfs inode_available: 41489052 inode_total: 41680320 inode_used: 191268 mount: /sysroot/ostree/deploy/rhcos/var options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150672384 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 13220379 block_size: 4096 block_total: 20823803 block_used: 7603424 device: /dev/vda4 fstype: xfs inode_available: 41489052 inode_total: 41680320 inode_used: 191268 mount: /var options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota,bind size_available: 54150672384 size_total: 85294297088 uuid: 68d6f3e9-64e9-44a4-a1d0-311f9c629a01 - block_available: 221344 block_size: 1024 block_total: 358271 block_used: 136927 device: /dev/vda3 fstype: ext4 inode_available: 97936 inode_total: 98304 inode_used: 368 mount: /boot options: ro,seclabel,nosuid,nodev,relatime size_available: 226656256 size_total: 366869504 uuid: 6ea7ef63-bc43-49c4-9337-b3b14ffb2763 - block_available: 0 block_size: 2048 block_total: 241 block_used: 241 device: 
/dev/sr0 fstype: iso9660 inode_available: 0 inode_total: 0 inode_used: 0 mount: /tmp/openstack-config-drive options: ro,relatime,nojoliet,check=s,map=n,blocksize=2048 size_available: 0 size_total: 493568 uuid: 2025-12-13-00-06-09-00 ansible_nodename: crc ansible_os_family: RedHat ansible_ovn_k8s_mp0: active: false device: ovn-k8s-mp0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] fcoe_mtu: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: 'on' hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] netns_local: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: off [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: 'on' tx_gre_segmentation: 'on' tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: 'on' tx_ipxip6_segmentation: 'on' tx_lockless: on [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: 'on' tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' 
    tx_tunnel_remcsum_segmentation: off [fixed]
    tx_udp_segmentation: 'on'
    tx_udp_tnl_csum_segmentation: 'on'
    tx_udp_tnl_segmentation: 'on'
    tx_vlan_offload: 'on'
    tx_vlan_stag_hw_insert: 'on'
    vlan_challenged: off [fixed]
  hw_timestamp_filters: []
  macaddress: b6:dc:d9:26:03:d4
  mtu: 1400
  promisc: true
  timestamping: []
  type: ether
ansible_ovs_system:
  active: false
  device: ovs-system
  features:
    esp_hw_offload: off [fixed]
    esp_tx_csum_hw_offload: off [fixed]
    fcoe_mtu: off [fixed]
    generic_receive_offload: 'on'
    generic_segmentation_offload: 'on'
    highdma: 'on'
    hsr_dup_offload: off [fixed]
    hsr_fwd_offload: off [fixed]
    hsr_tag_ins_offload: off [fixed]
    hsr_tag_rm_offload: off [fixed]
    hw_tc_offload: off [fixed]
    l2_fwd_offload: off [fixed]
    large_receive_offload: off [fixed]
    loopback: off [fixed]
    macsec_hw_offload: off [fixed]
    netns_local: on [fixed]
    ntuple_filters: off [fixed]
    receive_hashing: off [fixed]
    rx_all: off [fixed]
    rx_checksumming: off [fixed]
    rx_fcs: off [fixed]
    rx_gro_hw: off [fixed]
    rx_gro_list: 'off'
    rx_udp_gro_forwarding: 'off'
    rx_udp_tunnel_port_offload: off [fixed]
    rx_vlan_filter: off [fixed]
    rx_vlan_offload: off [fixed]
    rx_vlan_stag_filter: off [fixed]
    rx_vlan_stag_hw_parse: off [fixed]
    scatter_gather: 'on'
    tcp_segmentation_offload: 'on'
    tls_hw_record: off [fixed]
    tls_hw_rx_offload: off [fixed]
    tls_hw_tx_offload: off [fixed]
    tx_checksum_fcoe_crc: off [fixed]
    tx_checksum_ip_generic: 'on'
    tx_checksum_ipv4: off [fixed]
    tx_checksum_ipv6: off [fixed]
    tx_checksum_sctp: off [fixed]
    tx_checksumming: 'on'
    tx_esp_segmentation: off [fixed]
    tx_fcoe_segmentation: off [fixed]
    tx_gre_csum_segmentation: 'on'
    tx_gre_segmentation: 'on'
    tx_gso_list: 'on'
    tx_gso_partial: off [fixed]
    tx_gso_robust: off [fixed]
    tx_ipxip4_segmentation: 'on'
    tx_ipxip6_segmentation: 'on'
    tx_lockless: on [fixed]
    tx_nocache_copy: 'off'
    tx_scatter_gather: 'on'
    tx_scatter_gather_fraglist: 'on'
    tx_sctp_segmentation: 'on'
    tx_tcp6_segmentation: 'on'
    tx_tcp_ecn_segmentation: 'on'
    tx_tcp_mangleid_segmentation: 'on'
    tx_tcp_segmentation: 'on'
    tx_tunnel_remcsum_segmentation: off [fixed]
    tx_udp_segmentation: 'on'
    tx_udp_tnl_csum_segmentation: 'on'
    tx_udp_tnl_segmentation: 'on'
    tx_vlan_offload: 'on'
    tx_vlan_stag_hw_insert: 'on'
    vlan_challenged: off [fixed]
  hw_timestamp_filters: []
  macaddress: 2a:54:8a:53:65:39
  mtu: 1500
  promisc: true
  timestamping: []
  type: ether
ansible_pkg_mgr: atomic_container
ansible_playbook_python: /usr/lib/zuul/ansible/8/bin/python
ansible_port: 22
ansible_proc_cmdline:
  BOOT_IMAGE: (hd0,gpt3)/boot/ostree/rhcos-8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/vmlinuz-5.14.0-427.22.1.el9_4.x86_64
  boot: UUID=6ea7ef63-bc43-49c4-9337-b3b14ffb2763
  cgroup_no_v1: all
  ignition.platform.id: metal
  ostree: /ostree/boot.1/rhcos/8a7990dabf52ac75b58b2f3e4b0ab7fa03a563df103fbd3b4d71c823481c83ff/0
  psi: '1'
  root: UUID=68d6f3e9-64e9-44a4-a1d0-311f9c629a01
  rootflags: prjquota
  rw: true
  systemd.unified_cgroup_hierarchy: '1'
ansible_processor:
- '0'
- AuthenticAMD
- AMD EPYC-Rome Processor
- '1'
- AuthenticAMD
- AMD EPYC-Rome Processor
- '2'
- AuthenticAMD
- AMD EPYC-Rome Processor
- '3'
- AuthenticAMD
- AMD EPYC-Rome Processor
- '4'
- AuthenticAMD
- AMD EPYC-Rome Processor
- '5'
- AuthenticAMD
- AMD EPYC-Rome Processor
- '6'
- AuthenticAMD
- AMD EPYC-Rome Processor
- '7'
- AuthenticAMD
- AMD EPYC-Rome Processor
- '8'
- AuthenticAMD
- AMD EPYC-Rome Processor
- '9'
- AuthenticAMD
- AMD EPYC-Rome Processor
- '10'
- AuthenticAMD
- AMD EPYC-Rome Processor
- '11'
- AuthenticAMD
- AMD EPYC-Rome Processor
ansible_processor_cores: 1
ansible_processor_count: 12
ansible_processor_nproc: 12
ansible_processor_threads_per_core: 1
ansible_processor_vcpus: 12
ansible_product_name: OpenStack Nova
ansible_product_serial: NA
ansible_product_uuid: NA
ansible_product_version: 26.3.1
ansible_python:
  executable: /usr/bin/python3.9
  has_sslcontext: true
  type: cpython
  version:
    major: 3
    micro: 18
    minor: 9
    releaselevel: final
    serial: 0
  version_info:
  - 3
  - 9
  - 18
  - final
  - 0
ansible_python_interpreter: auto
ansible_python_version: 3.9.18
ansible_real_group_id: 1000
ansible_real_user_id: 1000
ansible_run_tags:
- all
ansible_scp_extra_args: -o PermitLocalCommand=no
ansible_selinux:
  config_mode: enforcing
  mode: enforcing
  policyvers: 33
  status: enabled
  type: targeted
ansible_selinux_python_present: true
ansible_service_mgr: systemd
ansible_sftp_extra_args: -o PermitLocalCommand=no
ansible_skip_tags: []
ansible_ssh_common_args: -o PermitLocalCommand=no
ansible_ssh_executable: ssh
ansible_ssh_extra_args: -o PermitLocalCommand=no
ansible_ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDs7MQU61ADe4LfEllZo6w2h2Vo1Z9nNArIkKGmgua8bOly2nQBIoDIKgNOXqUpoIZx1528UeeHSQu9SxYL21mo=
ansible_ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256
ansible_ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAIDKHFhjB7ae+dVOClQLGXnCaMXGjEeLhmEhxE64Ddkhe
ansible_ssh_host_key_ed25519_public_keytype: ssh-ed25519
ansible_ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQCr2rWpvTGLA5BK4eYXB55gorB9vAJK1K0iUmnm+r9AcvcXH33bR/O6ZNh9h85mHU5l1Gw9nBLRbHn42EU+6Ht6te2Z1gIiJEKpfiC0sR0aMcT4hKQWHmwYqQM/VLXhPiS4OnhO1OJuz0arj1Anr1hDcEJpVTAj3sbfkgzzbBeEWMg2V3Apr1fqDimNlyWRiDFy3TUdKfnB7nucGaGbHneeVxvwv81RGur6I9VHZe/odqEQTGRUBXdu57xybxd6Yc3863ayL5L1OhGTN/x7d8qeEJGb9zt6VvtFWlpVjIXa2l+uTZVfTvufdLwxJdBRg0kHMXH2ZJ3U8w9NRHMBHG7M6YjX0w95uCB/FnyN6s8V/KRQtSnC6Wt6YMP438rM2K9yydXdS/qUQm5hQLP7eY8/Nl4+RDQAvZOjPp+DeUxXfZOqR4qq8tCKi/5Cvd7ChYfPyymeV4RKAJf971EuO0zphyDK8knic0c2XTybK6WTM8lYcbUMYJxg1CW5o1VMjpk=
ansible_ssh_host_key_rsa_public_keytype: ssh-rsa
ansible_swapfree_mb: 0
ansible_swaptotal_mb: 0
ansible_system: Linux
ansible_system_capabilities:
- ''
ansible_system_capabilities_enforced: 'True'
ansible_system_vendor: OpenStack Foundation
ansible_uptime_seconds: 52
ansible_user: core
ansible_user_dir: /var/home/core
ansible_user_gecos: CoreOS Admin
ansible_user_gid: 1000
ansible_user_id: core
ansible_user_shell: /bin/bash
ansible_user_uid: 1000
ansible_userspace_architecture: x86_64
ansible_userspace_bits: '64'
ansible_verbosity: 1
ansible_version:
  full: 2.15.12
  major: 2
  minor: 15
  revision: 12
  string: 2.15.12
ansible_virtualization_role: guest
ansible_virtualization_tech_guest:
- openstack
ansible_virtualization_tech_host:
- kvm
ansible_virtualization_type: openstack
cifmw_architecture_repo: /var/home/core/src/github.com/openstack-k8s-operators/architecture
cifmw_architecture_repo_relative: src/github.com/openstack-k8s-operators/architecture
cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw
cifmw_deploy_edpm: false
cifmw_dlrn_report_result: false
cifmw_extras:
- '@scenarios/centos-9/multinode-ci.yml'
- '@scenarios/centos-9/horizon.yml'
cifmw_installyamls_repos: /var/home/core/src/github.com/openstack-k8s-operators/install_yamls
cifmw_installyamls_repos_relative: src/github.com/openstack-k8s-operators/install_yamls
cifmw_openshift_api: api.crc.testing:6443
cifmw_openshift_kubeconfig: /var/home/core/.crc/machines/crc/kubeconfig
cifmw_openshift_password: '12**********89'
cifmw_openshift_skip_tls_verify: true
cifmw_openshift_user: kubeadmin
cifmw_project_dir: src/github.com/openstack-k8s-operators/ci-framework
cifmw_project_dir_absolute: /var/home/core/src/github.com/openstack-k8s-operators/ci-framework
cifmw_run_tests: false
cifmw_use_libvirt: false
cifmw_zuul_target_host: controller
crc_ci_bootstrap_cloud_name: vexxhost
crc_ci_bootstrap_networking:
  instances:
    controller:
      networks:
        default:
          ip: 192.168.122.11
    crc:
      networks:
        default:
          ip: 192.168.122.10
        internal-api:
          ip: 172.17.0.5
        storage:
          ip: 172.18.0.5
        tenant:
          ip: 172.19.0.5
  networks:
    default:
      mtu: 1500
      range: 192.168.122.0/24
    internal-api:
      range: 172.17.0.0/24
      vlan: 20
    storage:
      range: 172.18.0.0/24
      vlan: 21
    tenant:
      range: 172.19.0.0/24
      vlan: 22
discovered_interpreter_python: /usr/bin/python3.9
enable_ramdisk: true
gather_subset:
- all
group_names:
- ungrouped
groups:
  all:
  - controller
  - crc
  ungrouped: *id001
  zuul_unreachable: []
inventory_dir:
/var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/post_playbook_1
inventory_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/post_playbook_1/inventory.yaml
inventory_hostname: crc
inventory_hostname_short: crc
module_setup: true
nodepool:
  az: nova
  cloud: vexxhost-nodepool-tripleo
  external_id: 02c5855c-ecb0-48ea-8e20-d08d70e9697e
  host_id: b012578aee5370fae73eb6c92c4679617335173cccca05390470f411
  interface_ip: 38.102.83.51
  label: coreos-crc-extracted-2-39-0-3xl
  private_ipv4: 38.102.83.51
  private_ipv6: null
  provider: vexxhost-nodepool-tripleo
  public_ipv4: 38.102.83.51
  public_ipv6: ''
  region: RegionOne
  slot: null
omit: __omit_place_holder__25f488c32bf32fb4dee7ff7d2bcd558067c92af2
playbook_dir: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks
podified_validation: true
push_registry: quay.rdoproject.org
quay_login_secret_name: quay_nextgen_zuulgithubci
registry_login_enabled: true
scenario: nightly_bundles
services:
  NetworkManager-clean-initrd-state.service: name: NetworkManager-clean-initrd-state.service source: systemd state: stopped status: enabled
  NetworkManager-dispatcher.service: name: NetworkManager-dispatcher.service source: systemd state: running status: enabled
  NetworkManager-wait-online.service: name: NetworkManager-wait-online.service source: systemd state: stopped status: enabled
  NetworkManager.service: name: NetworkManager.service source: systemd state: running status: enabled
  afterburn-checkin.service: name: afterburn-checkin.service source: systemd state: stopped status: enabled
  afterburn-firstboot-checkin.service: name: afterburn-firstboot-checkin.service source: systemd state: stopped status: enabled
  afterburn-sshkeys@.service: name: afterburn-sshkeys@.service source: systemd state: unknown status: disabled
  afterburn.service: name: afterburn.service source: systemd state: inactive status: disabled
  arp-ethers.service: name:
arp-ethers.service source: systemd state: inactive status: disabled
  auditd.service: name: auditd.service source: systemd state: running status: enabled
  auth-rpcgss-module.service: name: auth-rpcgss-module.service source: systemd state: stopped status: static
  autovt@.service: name: autovt@.service source: systemd state: unknown status: alias
  blk-availability.service: name: blk-availability.service source: systemd state: stopped status: disabled
  bootc-fetch-apply-updates.service: name: bootc-fetch-apply-updates.service source: systemd state: inactive status: static
  bootkube.service: name: bootkube.service source: systemd state: inactive status: disabled
  bootupd.service: name: bootupd.service source: systemd state: stopped status: static
  chrony-wait.service: name: chrony-wait.service source: systemd state: inactive status: disabled
  chronyd-restricted.service: name: chronyd-restricted.service source: systemd state: inactive status: disabled
  chronyd.service: name: chronyd.service source: systemd state: running status: enabled
  clevis-luks-askpass.service: name: clevis-luks-askpass.service source: systemd state: stopped status: static
  cni-dhcp.service: name: cni-dhcp.service source: systemd state: inactive status: disabled
  configure-cloudinit-ssh.service: name: configure-cloudinit-ssh.service source: systemd state: stopped status: enabled
  console-getty.service: name: console-getty.service source: systemd state: inactive status: disabled
  console-login-helper-messages-gensnippet-ssh-keys.service: name: console-login-helper-messages-gensnippet-ssh-keys.service source: systemd state: stopped status: enabled
  container-getty@.service: name: container-getty@.service source: systemd state: unknown status: static
  coreos-generate-iscsi-initiatorname.service: name: coreos-generate-iscsi-initiatorname.service source: systemd state: stopped status: enabled
  coreos-ignition-delete-config.service: name: coreos-ignition-delete-config.service source: systemd state: stopped status: enabled
  coreos-ignition-firstboot-complete.service: name: coreos-ignition-firstboot-complete.service source: systemd state: stopped status: enabled
  coreos-ignition-write-issues.service: name: coreos-ignition-write-issues.service source: systemd state: stopped status: enabled
  coreos-installer-disable-device-auto-activation.service: name: coreos-installer-disable-device-auto-activation.service source: systemd state: inactive status: static
  coreos-installer-noreboot.service: name: coreos-installer-noreboot.service source: systemd state: inactive status: static
  coreos-installer-reboot.service: name: coreos-installer-reboot.service source: systemd state: inactive status: static
  coreos-installer-secure-ipl-reboot.service: name: coreos-installer-secure-ipl-reboot.service source: systemd state: inactive status: static
  coreos-installer.service: name: coreos-installer.service source: systemd state: inactive status: static
  coreos-liveiso-success.service: name: coreos-liveiso-success.service source: systemd state: stopped status: enabled
  coreos-platform-chrony-config.service: name: coreos-platform-chrony-config.service source: systemd state: stopped status: enabled
  coreos-populate-lvmdevices.service: name: coreos-populate-lvmdevices.service source: systemd state: stopped status: enabled
  coreos-printk-quiet.service: name: coreos-printk-quiet.service source: systemd state: stopped status: enabled
  coreos-update-ca-trust.service: name: coreos-update-ca-trust.service source: systemd state: stopped status: enabled
  crc-dnsmasq.service: name: crc-dnsmasq.service source: systemd state: stopped status: not-found
  crc-pre.service: name: crc-pre.service source: systemd state: stopped status: enabled
  crio-subid.service: name: crio-subid.service source: systemd state: stopped status: enabled
  crio-wipe.service: name: crio-wipe.service source: systemd state: stopped status: disabled
  crio.service: name: crio.service source: systemd state: stopped status: disabled
  dbus-broker.service: name:
dbus-broker.service source: systemd state: running status: enabled
  dbus-org.freedesktop.hostname1.service: name: dbus-org.freedesktop.hostname1.service source: systemd state: active status: alias
  dbus-org.freedesktop.locale1.service: name: dbus-org.freedesktop.locale1.service source: systemd state: inactive status: alias
  dbus-org.freedesktop.login1.service: name: dbus-org.freedesktop.login1.service source: systemd state: active status: alias
  dbus-org.freedesktop.nm-dispatcher.service: name: dbus-org.freedesktop.nm-dispatcher.service source: systemd state: active status: alias
  dbus-org.freedesktop.timedate1.service: name: dbus-org.freedesktop.timedate1.service source: systemd state: inactive status: alias
  dbus.service: name: dbus.service source: systemd state: active status: alias
  debug-shell.service: name: debug-shell.service source: systemd state: inactive status: disabled
  disable-mglru.service: name: disable-mglru.service source: systemd state: stopped status: enabled
  display-manager.service: name: display-manager.service source: systemd state: stopped status: not-found
  dm-event.service: name: dm-event.service source: systemd state: stopped status: static
  dnf-makecache.service: name: dnf-makecache.service source: systemd state: inactive status: static
  dnsmasq.service: name: dnsmasq.service source: systemd state: running status: enabled
  dracut-cmdline.service: name: dracut-cmdline.service source: systemd state: stopped status: static
  dracut-initqueue.service: name: dracut-initqueue.service source: systemd state: stopped status: static
  dracut-mount.service: name: dracut-mount.service source: systemd state: stopped status: static
  dracut-pre-mount.service: name: dracut-pre-mount.service source: systemd state: stopped status: static
  dracut-pre-pivot.service: name: dracut-pre-pivot.service source: systemd state: stopped status: static
  dracut-pre-trigger.service: name: dracut-pre-trigger.service source: systemd state: stopped status: static
  dracut-pre-udev.service:
name: dracut-pre-udev.service source: systemd state: stopped status: static
  dracut-shutdown-onfailure.service: name: dracut-shutdown-onfailure.service source: systemd state: stopped status: static
  dracut-shutdown.service: name: dracut-shutdown.service source: systemd state: stopped status: static
  dummy-network.service: name: dummy-network.service source: systemd state: stopped status: enabled
  emergency.service: name: emergency.service source: systemd state: stopped status: static
  fcoe.service: name: fcoe.service source: systemd state: stopped status: not-found
  fstrim.service: name: fstrim.service source: systemd state: inactive status: static
  fwupd-offline-update.service: name: fwupd-offline-update.service source: systemd state: inactive status: static
  fwupd-refresh.service: name: fwupd-refresh.service source: systemd state: inactive status: static
  fwupd.service: name: fwupd.service source: systemd state: inactive status: static
  gcp-routes.service: name: gcp-routes.service source: systemd state: stopped status: enabled
  getty@.service: name: getty@.service source: systemd state: unknown status: enabled
  getty@tty1.service: name: getty@tty1.service source: systemd state: running status: active
  gssproxy.service: name: gssproxy.service source: systemd state: stopped status: disabled
  gvisor-tap-vsock.service: name: gvisor-tap-vsock.service source: systemd state: stopped status: enabled
  hypervfcopyd.service: name: hypervfcopyd.service source: systemd state: inactive status: static
  hypervkvpd.service: name: hypervkvpd.service source: systemd state: inactive status: static
  hypervvssd.service: name: hypervvssd.service source: systemd state: inactive status: static
  ignition-delete-config.service: name: ignition-delete-config.service source: systemd state: stopped status: enabled
  initrd-cleanup.service: name: initrd-cleanup.service source: systemd state: stopped status: static
  initrd-parse-etc.service: name: initrd-parse-etc.service source: systemd state: stopped status:
static
  initrd-switch-root.service: name: initrd-switch-root.service source: systemd state: stopped status: static
  initrd-udevadm-cleanup-db.service: name: initrd-udevadm-cleanup-db.service source: systemd state: stopped status: static
  irqbalance.service: name: irqbalance.service source: systemd state: running status: enabled
  iscsi-init.service: name: iscsi-init.service source: systemd state: stopped status: disabled
  iscsi-onboot.service: name: iscsi-onboot.service source: systemd state: stopped status: enabled
  iscsi-shutdown.service: name: iscsi-shutdown.service source: systemd state: stopped status: static
  iscsi-starter.service: name: iscsi-starter.service source: systemd state: inactive status: disabled
  iscsi.service: name: iscsi.service source: systemd state: stopped status: indirect
  iscsid.service: name: iscsid.service source: systemd state: stopped status: disabled
  iscsiuio.service: name: iscsiuio.service source: systemd state: stopped status: disabled
  kdump.service: name: kdump.service source: systemd state: stopped status: disabled
  kmod-static-nodes.service: name: kmod-static-nodes.service source: systemd state: stopped status: static
  kubelet-auto-node-size.service: name: kubelet-auto-node-size.service source: systemd state: stopped status: enabled
  kubelet-cleanup.service: name: kubelet-cleanup.service source: systemd state: stopped status: enabled
  kubelet.service: name: kubelet.service source: systemd state: stopped status: disabled
  kubens.service: name: kubens.service source: systemd state: stopped status: disabled
  ldconfig.service: name: ldconfig.service source: systemd state: stopped status: static
  logrotate.service: name: logrotate.service source: systemd state: stopped status: static
  lvm2-activation-early.service: name: lvm2-activation-early.service source: systemd state: stopped status: not-found
  lvm2-lvmpolld.service: name: lvm2-lvmpolld.service source: systemd state: stopped status: static
  lvm2-monitor.service: name: lvm2-monitor.service source:
systemd state: stopped status: enabled
  machine-config-daemon-firstboot.service: name: machine-config-daemon-firstboot.service source: systemd state: stopped status: enabled
  machine-config-daemon-pull.service: name: machine-config-daemon-pull.service source: systemd state: stopped status: enabled
  mdadm-grow-continue@.service: name: mdadm-grow-continue@.service source: systemd state: unknown status: static
  mdadm-last-resort@.service: name: mdadm-last-resort@.service source: systemd state: unknown status: static
  mdcheck_continue.service: name: mdcheck_continue.service source: systemd state: inactive status: static
  mdcheck_start.service: name: mdcheck_start.service source: systemd state: inactive status: static
  mdmon@.service: name: mdmon@.service source: systemd state: unknown status: static
  mdmonitor-oneshot.service: name: mdmonitor-oneshot.service source: systemd state: inactive status: static
  mdmonitor.service: name: mdmonitor.service source: systemd state: stopped status: enabled
  microcode.service: name: microcode.service source: systemd state: stopped status: enabled
  modprobe@.service: name: modprobe@.service source: systemd state: unknown status: static
  modprobe@configfs.service: name: modprobe@configfs.service source: systemd state: stopped status: inactive
  modprobe@drm.service: name: modprobe@drm.service source: systemd state: stopped status: inactive
  modprobe@efi_pstore.service: name: modprobe@efi_pstore.service source: systemd state: stopped status: inactive
  modprobe@fuse.service: name: modprobe@fuse.service source: systemd state: stopped status: inactive
  multipathd.service: name: multipathd.service source: systemd state: stopped status: enabled
  netavark-dhcp-proxy.service: name: netavark-dhcp-proxy.service source: systemd state: inactive status: disabled
  netavark-firewalld-reload.service: name: netavark-firewalld-reload.service source: systemd state: inactive status: disabled
  network.service: name: network.service source: systemd state: stopped status:
not-found
  nfs-blkmap.service: name: nfs-blkmap.service source: systemd state: inactive status: disabled
  nfs-idmapd.service: name: nfs-idmapd.service source: systemd state: stopped status: static
  nfs-mountd.service: name: nfs-mountd.service source: systemd state: stopped status: static
  nfs-server.service: name: nfs-server.service source: systemd state: stopped status: disabled
  nfs-utils.service: name: nfs-utils.service source: systemd state: stopped status: static
  nfsdcld.service: name: nfsdcld.service source: systemd state: stopped status: static
  nftables.service: name: nftables.service source: systemd state: inactive status: disabled
  nis-domainname.service: name: nis-domainname.service source: systemd state: inactive status: disabled
  nm-cloud-setup.service: name: nm-cloud-setup.service source: systemd state: inactive status: disabled
  nm-priv-helper.service: name: nm-priv-helper.service source: systemd state: inactive status: static
  nmstate.service: name: nmstate.service source: systemd state: stopped status: enabled
  node-valid-hostname.service: name: node-valid-hostname.service source: systemd state: stopped status: enabled
  nodeip-configuration.service: name: nodeip-configuration.service source: systemd state: stopped status: enabled
  ntpd.service: name: ntpd.service source: systemd state: stopped status: not-found
  ntpdate.service: name: ntpdate.service source: systemd state: stopped status: not-found
  nvmefc-boot-connections.service: name: nvmefc-boot-connections.service source: systemd state: stopped status: enabled
  nvmf-autoconnect.service: name: nvmf-autoconnect.service source: systemd state: inactive status: disabled
  nvmf-connect@.service: name: nvmf-connect@.service source: systemd state: unknown status: static
  openvswitch.service: name: openvswitch.service source: systemd state: stopped status: enabled
  ostree-boot-complete.service: name: ostree-boot-complete.service source: systemd state: stopped status: enabled-runtime
  ostree-finalize-staged-hold.service:
name: ostree-finalize-staged-hold.service source: systemd state: stopped status: static
  ostree-finalize-staged.service: name: ostree-finalize-staged.service source: systemd state: stopped status: static
  ostree-prepare-root.service: name: ostree-prepare-root.service source: systemd state: inactive status: static
  ostree-readonly-sysroot-migration.service: name: ostree-readonly-sysroot-migration.service source: systemd state: stopped status: disabled
  ostree-remount.service: name: ostree-remount.service source: systemd state: stopped status: enabled
  ostree-state-overlay@.service: name: ostree-state-overlay@.service source: systemd state: unknown status: disabled
  ovs-configuration.service: name: ovs-configuration.service source: systemd state: stopped status: enabled
  ovs-delete-transient-ports.service: name: ovs-delete-transient-ports.service source: systemd state: stopped status: static
  ovs-vswitchd.service: name: ovs-vswitchd.service source: systemd state: running status: static
  ovsdb-server.service: name: ovsdb-server.service source: systemd state: running status: static
  pam_namespace.service: name: pam_namespace.service source: systemd state: inactive status: static
  plymouth-quit-wait.service: name: plymouth-quit-wait.service source: systemd state: stopped status: not-found
  plymouth-read-write.service: name: plymouth-read-write.service source: systemd state: stopped status: not-found
  plymouth-start.service: name: plymouth-start.service source: systemd state: stopped status: not-found
  podman-auto-update.service: name: podman-auto-update.service source: systemd state: inactive status: disabled
  podman-clean-transient.service: name: podman-clean-transient.service source: systemd state: inactive status: disabled
  podman-kube@.service: name: podman-kube@.service source: systemd state: unknown status: disabled
  podman-restart.service: name: podman-restart.service source: systemd state: inactive status: disabled
  podman.service: name: podman.service source: systemd state:
stopped status: disabled
  polkit.service: name: polkit.service source: systemd state: inactive status: static
  qemu-guest-agent.service: name: qemu-guest-agent.service source: systemd state: stopped status: enabled
  quotaon.service: name: quotaon.service source: systemd state: inactive status: static
  raid-check.service: name: raid-check.service source: systemd state: inactive status: static
  rbdmap.service: name: rbdmap.service source: systemd state: stopped status: not-found
  rc-local.service: name: rc-local.service source: systemd state: stopped status: static
  rdisc.service: name: rdisc.service source: systemd state: inactive status: disabled
  rdma-load-modules@.service: name: rdma-load-modules@.service source: systemd state: unknown status: static
  rdma-ndd.service: name: rdma-ndd.service source: systemd state: inactive status: static
  rescue.service: name: rescue.service source: systemd state: stopped status: static
  rhcos-usrlocal-selinux-fixup.service: name: rhcos-usrlocal-selinux-fixup.service source: systemd state: stopped status: enabled
  rpc-gssd.service: name: rpc-gssd.service source: systemd state: stopped status: static
  rpc-statd-notify.service: name: rpc-statd-notify.service source: systemd state: stopped status: static
  rpc-statd.service: name: rpc-statd.service source: systemd state: stopped status: static
  rpc-svcgssd.service: name: rpc-svcgssd.service source: systemd state: stopped status: not-found
  rpcbind.service: name: rpcbind.service source: systemd state: stopped status: disabled
  rpm-ostree-bootstatus.service: name: rpm-ostree-bootstatus.service source: systemd state: inactive status: disabled
  rpm-ostree-countme.service: name: rpm-ostree-countme.service source: systemd state: inactive status: static
  rpm-ostree-fix-shadow-mode.service: name: rpm-ostree-fix-shadow-mode.service source: systemd state: stopped status: disabled
  rpm-ostreed-automatic.service: name: rpm-ostreed-automatic.service source: systemd state: inactive status: static
  rpm-ostreed.service:
name: rpm-ostreed.service source: systemd state: inactive status: static
  rpmdb-rebuild.service: name: rpmdb-rebuild.service source: systemd state: inactive status: disabled
  selinux-autorelabel-mark.service: name: selinux-autorelabel-mark.service source: systemd state: stopped status: enabled
  selinux-autorelabel.service: name: selinux-autorelabel.service source: systemd state: inactive status: static
  selinux-check-proper-disable.service: name: selinux-check-proper-disable.service source: systemd state: inactive status: disabled
  serial-getty@.service: name: serial-getty@.service source: systemd state: unknown status: disabled
  sntp.service: name: sntp.service source: systemd state: stopped status: not-found
  sshd-keygen@.service: name: sshd-keygen@.service source: systemd state: unknown status: disabled
  sshd-keygen@ecdsa.service: name: sshd-keygen@ecdsa.service source: systemd state: stopped status: inactive
  sshd-keygen@ed25519.service: name: sshd-keygen@ed25519.service source: systemd state: stopped status: inactive
  sshd-keygen@rsa.service: name: sshd-keygen@rsa.service source: systemd state: stopped status: inactive
  sshd.service: name: sshd.service source: systemd state: running status: enabled
  sshd@.service: name: sshd@.service source: systemd state: unknown status: static
  sssd-autofs.service: name: sssd-autofs.service source: systemd state: inactive status: indirect
  sssd-nss.service: name: sssd-nss.service source: systemd state: inactive status: indirect
  sssd-pac.service: name: sssd-pac.service source: systemd state: inactive status: indirect
  sssd-pam.service: name: sssd-pam.service source: systemd state: inactive status: indirect
  sssd-ssh.service: name: sssd-ssh.service source: systemd state: inactive status: indirect
  sssd-sudo.service: name: sssd-sudo.service source: systemd state: inactive status: indirect
  sssd.service: name: sssd.service source: systemd state: stopped status: enabled
  stalld.service: name: stalld.service source: systemd state: inactive status:
disabled
  syslog.service: name: syslog.service source: systemd state: stopped status: not-found
  system-update-cleanup.service: name: system-update-cleanup.service source: systemd state: inactive status: static
  systemd-ask-password-console.service: name: systemd-ask-password-console.service source: systemd state: stopped status: static
  systemd-ask-password-wall.service: name: systemd-ask-password-wall.service source: systemd state: stopped status: static
  systemd-backlight@.service: name: systemd-backlight@.service source: systemd state: unknown status: static
  systemd-binfmt.service: name: systemd-binfmt.service source: systemd state: stopped status: static
  systemd-bless-boot.service: name: systemd-bless-boot.service source: systemd state: inactive status: static
  systemd-boot-check-no-failures.service: name: systemd-boot-check-no-failures.service source: systemd state: inactive status: disabled
  systemd-boot-random-seed.service: name: systemd-boot-random-seed.service source: systemd state: stopped status: static
  systemd-boot-update.service: name: systemd-boot-update.service source: systemd state: stopped status: enabled
  systemd-coredump@.service: name: systemd-coredump@.service source: systemd state: unknown status: static
  systemd-exit.service: name: systemd-exit.service source: systemd state: inactive status: static
  systemd-fsck-root.service: name: systemd-fsck-root.service source: systemd state: stopped status: static
  systemd-fsck@.service: name: systemd-fsck@.service source: systemd state: unknown status: static
  systemd-fsck@dev-disk-by\x2duuid-6ea7ef63\x2dbc43\x2d49c4\x2d9337\x2db3b14ffb2763.service: name: systemd-fsck@dev-disk-by\x2duuid-6ea7ef63\x2dbc43\x2d49c4\x2d9337\x2db3b14ffb2763.service source: systemd state: stopped status: active
  systemd-growfs-root.service: name: systemd-growfs-root.service source: systemd state: inactive status: static
  systemd-growfs@.service: name: systemd-growfs@.service source: systemd state: unknown status: static
  systemd-halt.service: name: systemd-halt.service source: systemd state: inactive status: static
  systemd-hibernate-resume@.service: name: systemd-hibernate-resume@.service source: systemd state: unknown status: static
  systemd-hibernate.service: name: systemd-hibernate.service source: systemd state: inactive status: static
  systemd-hostnamed.service: name: systemd-hostnamed.service source: systemd state: running status: static
  systemd-hwdb-update.service: name: systemd-hwdb-update.service source: systemd state: stopped status: static
  systemd-hybrid-sleep.service: name: systemd-hybrid-sleep.service source: systemd state: inactive status: static
  systemd-initctl.service: name: systemd-initctl.service source: systemd state: stopped status: static
  systemd-journal-catalog-update.service: name: systemd-journal-catalog-update.service source: systemd state: stopped status: static
  systemd-journal-flush.service: name: systemd-journal-flush.service source: systemd state: stopped status: static
  systemd-journal-gatewayd.service: name: systemd-journal-gatewayd.service source: systemd state: inactive status: indirect
  systemd-journal-remote.service: name: systemd-journal-remote.service source: systemd state: inactive status: indirect
  systemd-journal-upload.service: name: systemd-journal-upload.service source: systemd state: inactive status: disabled
  systemd-journald.service: name: systemd-journald.service source: systemd state: running status: static
  systemd-journald@.service: name: systemd-journald@.service source: systemd state: unknown status: static
  systemd-kexec.service: name: systemd-kexec.service source: systemd state: inactive status: static
  systemd-localed.service: name: systemd-localed.service source: systemd state: inactive status: static
  systemd-logind.service: name: systemd-logind.service source: systemd state: running status: static
  systemd-machine-id-commit.service: name: systemd-machine-id-commit.service source: systemd state: stopped status: static
systemd-modules-load.service: name: systemd-modules-load.service source: systemd state: stopped status: static systemd-network-generator.service: name: systemd-network-generator.service source: systemd state: stopped status: enabled systemd-pcrfs-root.service: name: systemd-pcrfs-root.service source: systemd state: inactive status: static systemd-pcrfs@.service: name: systemd-pcrfs@.service source: systemd state: unknown status: static systemd-pcrmachine.service: name: systemd-pcrmachine.service source: systemd state: stopped status: static systemd-pcrphase-initrd.service: name: systemd-pcrphase-initrd.service source: systemd state: stopped status: static systemd-pcrphase-sysinit.service: name: systemd-pcrphase-sysinit.service source: systemd state: stopped status: static systemd-pcrphase.service: name: systemd-pcrphase.service source: systemd state: stopped status: static systemd-poweroff.service: name: systemd-poweroff.service source: systemd state: inactive status: static systemd-pstore.service: name: systemd-pstore.service source: systemd state: stopped status: enabled systemd-quotacheck.service: name: systemd-quotacheck.service source: systemd state: stopped status: static systemd-random-seed.service: name: systemd-random-seed.service source: systemd state: stopped status: static systemd-reboot.service: name: systemd-reboot.service source: systemd state: inactive status: static systemd-remount-fs.service: name: systemd-remount-fs.service source: systemd state: stopped status: enabled-runtime systemd-repart.service: name: systemd-repart.service source: systemd state: stopped status: masked systemd-rfkill.service: name: systemd-rfkill.service source: systemd state: stopped status: static systemd-suspend-then-hibernate.service: name: systemd-suspend-then-hibernate.service source: systemd state: inactive status: static systemd-suspend.service: name: systemd-suspend.service source: systemd state: inactive status: static systemd-sysctl.service: name: 
systemd-sysctl.service source: systemd state: stopped status: static systemd-sysext.service: name: systemd-sysext.service source: systemd state: stopped status: disabled systemd-sysupdate-reboot.service: name: systemd-sysupdate-reboot.service source: systemd state: inactive status: indirect systemd-sysupdate.service: name: systemd-sysupdate.service source: systemd state: inactive status: indirect systemd-sysusers.service: name: systemd-sysusers.service source: systemd state: stopped status: static systemd-timedated.service: name: systemd-timedated.service source: systemd state: inactive status: static systemd-timesyncd.service: name: systemd-timesyncd.service source: systemd state: stopped status: not-found systemd-tmpfiles-clean.service: name: systemd-tmpfiles-clean.service source: systemd state: stopped status: static systemd-tmpfiles-setup-dev.service: name: systemd-tmpfiles-setup-dev.service source: systemd state: stopped status: static systemd-tmpfiles-setup.service: name: systemd-tmpfiles-setup.service source: systemd state: stopped status: static systemd-tmpfiles.service: name: systemd-tmpfiles.service source: systemd state: stopped status: not-found systemd-udev-settle.service: name: systemd-udev-settle.service source: systemd state: stopped status: static systemd-udev-trigger.service: name: systemd-udev-trigger.service source: systemd state: stopped status: static systemd-udevd.service: name: systemd-udevd.service source: systemd state: running status: static systemd-update-done.service: name: systemd-update-done.service source: systemd state: stopped status: static systemd-update-utmp-runlevel.service: name: systemd-update-utmp-runlevel.service source: systemd state: stopped status: static systemd-update-utmp.service: name: systemd-update-utmp.service source: systemd state: stopped status: static systemd-user-sessions.service: name: systemd-user-sessions.service source: systemd state: stopped status: static systemd-vconsole-setup.service: name: 
systemd-vconsole-setup.service source: systemd state: stopped status: static systemd-volatile-root.service: name: systemd-volatile-root.service source: systemd state: inactive status: static systemd-zram-setup@.service: name: systemd-zram-setup@.service source: systemd state: unknown status: static teamd@.service: name: teamd@.service source: systemd state: unknown status: static unbound-anchor.service: name: unbound-anchor.service source: systemd state: stopped status: static user-runtime-dir@.service: name: user-runtime-dir@.service source: systemd state: unknown status: static user-runtime-dir@0.service: name: user-runtime-dir@0.service source: systemd state: stopped status: active user-runtime-dir@1000.service: name: user-runtime-dir@1000.service source: systemd state: stopped status: active user@.service: name: user@.service source: systemd state: unknown status: static user@0.service: name: user@0.service source: systemd state: running status: active user@1000.service: name: user@1000.service source: systemd state: running status: active vgauthd.service: name: vgauthd.service source: systemd state: stopped status: enabled vmtoolsd.service: name: vmtoolsd.service source: systemd state: stopped status: enabled wait-for-primary-ip.service: name: wait-for-primary-ip.service source: systemd state: stopped status: enabled unsafe_vars: ansible_connection: ssh ansible_host: 38.102.83.51 ansible_port: 22 ansible_python_interpreter: auto ansible_user: core cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller 
crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: 02c5855c-ecb0-48ea-8e20-d08d70e9697e host_id: b012578aee5370fae73eb6c92c4679617335173cccca05390470f411 interface_ip: 38.102.83.51 label: coreos-crc-extracted-2-39-0-3xl private_ipv4: 38.102.83.51 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.51 public_ipv6: '' region: RegionOne slot: null podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: nightly_bundles zuul_log_collection: true zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: master build: 1ff1f9e2914e4781854296cd59417fcd build_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator buildset: f71192a0075543fc80fa0dd7b04d38c1 buildset_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator change_url: 
https://github.com/infrawatch/service-telemetry-operator child_jobs: [] event_id: 0e5659e7c90540b9826e576f72e96d0a executor: hostname: ze03.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/inventory.yaml log_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/logs result_data_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/results.json src_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/src work_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work items: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator job: stf-crc-ocp_416-nightly_bundles jobtags: [] max_attempts: 1 pipeline: periodic playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 672a220823fac36a8965fa0d3dca764739bb46c0 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config 
checkout: master commit: 672a220823fac36a8965fa0d3dca764739bb46c0 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 untrusted/project_4/github.com/infrawatch/service-telemetry-operator: canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master commit: eb8aace45bdd3decd60358ca0a7b52ae71b6879a playbooks: - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/deploy_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_0/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_0/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_0/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_4/rdo-jobs/roles - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/test_stf.yml roles: - checkout: master checkout_description: 
playbook branch link_name: ansible/playbook_1/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_1/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_1/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_1/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_1/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_1/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_1/role_4/rdo-jobs/roles post_review: true project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: 42957126d9d9b9d1372615db325b82bd992fa335 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/infrawatch/prometheus-webhook-snmp: canonical_hostname: github.com canonical_name: github.com/infrawatch/prometheus-webhook-snmp checkout: master checkout_description: zuul branch commit: 3959c53b2613d03d066cb1b2fe5bdae8633ae895 name: infrawatch/prometheus-webhook-snmp 
required: true short_name: prometheus-webhook-snmp src_dir: src/github.com/infrawatch/prometheus-webhook-snmp github.com/infrawatch/service-telemetry-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master checkout_description: zuul branch commit: eb8aace45bdd3decd60358ca0a7b52ae71b6879a name: infrawatch/service-telemetry-operator required: true short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator github.com/infrawatch/sg-bridge: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-bridge checkout: master checkout_description: zuul branch commit: bab11fba86ad0c21cb35e12b56bf086a3332f1d2 name: infrawatch/sg-bridge required: true short_name: sg-bridge src_dir: src/github.com/infrawatch/sg-bridge github.com/infrawatch/sg-core: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-core checkout: master checkout_description: zuul branch commit: 5a4aece11fea9f71ce7515d11e1e7f0eae97eea6 name: infrawatch/sg-core required: true short_name: sg-core src_dir: src/github.com/infrawatch/sg-core github.com/infrawatch/smart-gateway-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/smart-gateway-operator checkout: master checkout_description: zuul branch commit: 9e0945fe8a0e74be8bc9449318446eeb74336986 name: infrawatch/smart-gateway-operator required: true short_name: smart-gateway-operator src_dir: src/github.com/infrawatch/smart-gateway-operator github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: project override ref commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/dataplane-operator: 
canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/dataplane-operator checkout: main checkout_description: project override ref commit: c98b51bcd7fe14b85ed4cf3f5f76552b3455c5f2 name: openstack-k8s-operators/dataplane-operator required: true short_name: dataplane-operator src_dir: src/github.com/openstack-k8s-operators/dataplane-operator github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: project default branch commit: 181b1cc6787a986fc316969801a1c9403cae9dcc name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: project override ref commit: 786269345f996bd262360738a1e3c6b09171f370 name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: project default branch commit: 2f838b62fe50aacff3d514af4b502264e0a276a5 name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: master checkout_description: zuul branch commit: a333e57066b1d48e41f93af68be81188290a96b3 name: openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: 
src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: project override ref commit: 2da49819dd6af6036aede5e4e9a080ff2c6457de name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: project override ref commit: 0b7b865c642a4e7d0dba878ba5b0b58c4c8afc46 name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: project default branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: zuul branch commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: zuul branch commit: 672a220823fac36a8965fa0d3dca764739bb46c0 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/heads/master resources: {} tenant: rdoproject.org timeout: 
3600 voting: true zuul_execution_branch: main zuul_execution_canonical_name_and_path: github.com/openstack-k8s-operators/ci-framework/ci/playbooks/e2e-collect-logs.yml zuul_execution_phase: post zuul_execution_phase_index: '1' zuul_execution_trusted: 'False' zuul_log_collection: true zuul_success: 'False' zuul_will_retry: 'False' inventory_dir: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/post_playbook_1 inventory_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/post_playbook_1/inventory.yaml inventory_hostname: controller inventory_hostname_short: controller logfiles_dest_dir: /home/zuul/ci-framework-data/logs/2025-12-13_00-21 module_setup: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: 5956d295-b553-4ddf-9a24-66bb002b169b host_id: bdb78bf25a270582fae0ca49d447ffffc4c7a50a772a0a4c0593588a interface_ip: 38.102.83.182 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.182 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.182 public_ipv6: '' region: RegionOne slot: null omit: __omit_place_holder__25f488c32bf32fb4dee7ff7d2bcd558067c92af2 openstack_namespace: openstack play_hosts: *id002 playbook_dir: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/ci/playbooks podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true role_name: artifacts role_names: *id003 role_path: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/untrusted/project_0/github.com/openstack-k8s-operators/ci-framework/roles/artifacts role_uuid: fa163ef9-e89a-252b-6dda-00000000002e scenario: nightly_bundles unsafe_vars: ansible_connection: ssh ansible_host: 38.102.83.182 ansible_port: 22 ansible_python_interpreter: auto ansible_user: zuul cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false 
cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true nodepool: az: nova cloud: vexxhost-nodepool-tripleo external_id: 5956d295-b553-4ddf-9a24-66bb002b169b host_id: bdb78bf25a270582fae0ca49d447ffffc4c7a50a772a0a4c0593588a interface_ip: 38.102.83.182 label: cloud-centos-9-stream-tripleo-vexxhost private_ipv4: 38.102.83.182 private_ipv6: null provider: vexxhost-nodepool-tripleo public_ipv4: 38.102.83.182 public_ipv6: '' region: RegionOne slot: null podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: nightly_bundles zuul_log_collection: true zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: master build: 1ff1f9e2914e4781854296cd59417fcd build_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator 
buildset: f71192a0075543fc80fa0dd7b04d38c1 buildset_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator change_url: https://github.com/infrawatch/service-telemetry-operator child_jobs: [] event_id: 0e5659e7c90540b9826e576f72e96d0a executor: hostname: ze03.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/inventory.yaml log_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/logs result_data_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/results.json src_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/src work_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work items: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator job: stf-crc-ocp_416-nightly_bundles jobtags: [] max_attempts: 1 pipeline: periodic playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 672a220823fac36a8965fa0d3dca764739bb46c0 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: 
github.com/openstack-k8s-operators/ci-framework checkout: main commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 672a220823fac36a8965fa0d3dca764739bb46c0 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 untrusted/project_4/github.com/infrawatch/service-telemetry-operator: canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master commit: eb8aace45bdd3decd60358ca0a7b52ae71b6879a playbooks: - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/deploy_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_0/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_0/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_0/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs 
role_path: ansible/playbook_0/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_4/rdo-jobs/roles - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/test_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_1/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_1/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_1/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_1/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_1/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_1/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_1/role_4/rdo-jobs/roles post_review: true project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: 
42957126d9d9b9d1372615db325b82bd992fa335 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/infrawatch/prometheus-webhook-snmp: canonical_hostname: github.com canonical_name: github.com/infrawatch/prometheus-webhook-snmp checkout: master checkout_description: zuul branch commit: 3959c53b2613d03d066cb1b2fe5bdae8633ae895 name: infrawatch/prometheus-webhook-snmp required: true short_name: prometheus-webhook-snmp src_dir: src/github.com/infrawatch/prometheus-webhook-snmp github.com/infrawatch/service-telemetry-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master checkout_description: zuul branch commit: eb8aace45bdd3decd60358ca0a7b52ae71b6879a name: infrawatch/service-telemetry-operator required: true short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator github.com/infrawatch/sg-bridge: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-bridge checkout: master checkout_description: zuul branch commit: bab11fba86ad0c21cb35e12b56bf086a3332f1d2 name: infrawatch/sg-bridge required: true short_name: sg-bridge src_dir: src/github.com/infrawatch/sg-bridge github.com/infrawatch/sg-core: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-core checkout: master checkout_description: zuul branch commit: 5a4aece11fea9f71ce7515d11e1e7f0eae97eea6 name: infrawatch/sg-core required: true short_name: sg-core src_dir: src/github.com/infrawatch/sg-core github.com/infrawatch/smart-gateway-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/smart-gateway-operator checkout: master checkout_description: zuul branch commit: 9e0945fe8a0e74be8bc9449318446eeb74336986 name: infrawatch/smart-gateway-operator required: true short_name: smart-gateway-operator src_dir: src/github.com/infrawatch/smart-gateway-operator 
github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: project override ref commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/dataplane-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/dataplane-operator checkout: main checkout_description: project override ref commit: c98b51bcd7fe14b85ed4cf3f5f76552b3455c5f2 name: openstack-k8s-operators/dataplane-operator required: true short_name: dataplane-operator src_dir: src/github.com/openstack-k8s-operators/dataplane-operator github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: project default branch commit: 181b1cc6787a986fc316969801a1c9403cae9dcc name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: project override ref commit: 786269345f996bd262360738a1e3c6b09171f370 name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: project default branch commit: 2f838b62fe50aacff3d514af4b502264e0a276a5 name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: 
src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: master checkout_description: zuul branch commit: a333e57066b1d48e41f93af68be81188290a96b3 name: openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: project override ref commit: 2da49819dd6af6036aede5e4e9a080ff2c6457de name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: project override ref commit: 0b7b865c642a4e7d0dba878ba5b0b58c4c8afc46 name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: project default branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: zuul branch commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 name: zuul/zuul-jobs 
required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: zuul branch commit: 672a220823fac36a8965fa0d3dca764739bb46c0 name: config required: true short_name: config src_dir: src/review.rdoproject.org/config ref: refs/heads/master resources: {} tenant: rdoproject.org timeout: 3600 voting: true zuul_change_list: - service-telemetry-operator zuul_execution_branch: main zuul_execution_canonical_name_and_path: github.com/openstack-k8s-operators/ci-framework/ci/playbooks/e2e-collect-logs.yml zuul_execution_phase: post zuul_execution_phase_index: '1' zuul_execution_trusted: 'False' zuul_log_collection: true zuul_success: 'False' zuul_will_retry: 'False'

home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_000_fetch_openshift.sh
#!/bin/bash
set -euo pipefail
exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_000_fetch_openshift.log) 2>&1
oc login -u kubeadmin -p 123456789 --insecure-skip-tls-verify=true api.crc.testing:6443

home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci_script_001_login_into_openshift_internal.sh
#!/bin/bash
set -euo pipefail
exec > >(tee -i /home/zuul/ci-framework-data/logs/ci_script_001_login_into_openshift_internal.log) 2>&1
podman login -u kubeadmin -p sha256~CRDzRDEwMlIhbeVJLCrSChdC674z2UmCrkBejT7UEP4 --tls-verify=false default-route-openshift-image-registry.apps-crc.testing

home/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/install-yamls-params.yml
cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sE**********U= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG:
quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_CLEANUP: 'true' BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: '' BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' 
CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 
DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all 
EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 76**********f0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' 
HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: 'true' INSTALL_NMSTATE: true || false INSTALL_NNCP: true || false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' 
IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE: quay.io/metal3-io/ironic IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: CO**********6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git 
KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: 
https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '12**********42' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: 
quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12**********78' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: 
https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: os**********et SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites 
TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: test/kuttl/suites TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TIMEOUT: 300s TLS_ENABLED: 'false' tripleo_deploy: 'export REGISTRY_USER:' cifmw_install_yamls_environment: CHECKOUT_FROM_OPENSTACK_REF: 'true' KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm

home/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/custom-params.yml
cifmw_architecture_repo: /home/zuul/src/github.com/openstack-k8s-operators/architecture cifmw_architecture_repo_relative: src/github.com/openstack-k8s-operators/architecture cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_installyamls_repos: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls cifmw_installyamls_repos_relative: src/github.com/openstack-k8s-operators/install_yamls cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_path: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:~/.crc/bin:~/.crc/bin/oc:~/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin cifmw_project_dir: src/github.com/openstack-k8s-operators/ci-framework cifmw_project_dir_absolute: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller

home/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/zuul-params.yml
cifmw_artifacts_crc_sshkey: ~/.ssh/id_cifw
cifmw_deploy_edpm: false cifmw_dlrn_report_result: false cifmw_extras: - '@scenarios/centos-9/multinode-ci.yml' - '@scenarios/centos-9/horizon.yml' cifmw_openshift_api: api.crc.testing:6443 cifmw_openshift_kubeconfig: '{{ ansible_user_dir }}/.crc/machines/crc/kubeconfig' cifmw_openshift_password: '12**********89' cifmw_openshift_skip_tls_verify: true cifmw_openshift_user: kubeadmin cifmw_run_tests: false cifmw_use_libvirt: false cifmw_zuul_target_host: controller crc_ci_bootstrap_cloud_name: '{{ nodepool.cloud | replace(''-nodepool-tripleo'','''') }}' crc_ci_bootstrap_networking: instances: controller: networks: default: ip: 192.168.122.11 crc: networks: default: ip: 192.168.122.10 internal-api: ip: 172.17.0.5 storage: ip: 172.18.0.5 tenant: ip: 172.19.0.5 networks: default: mtu: 1500 range: 192.168.122.0/24 internal-api: range: 172.17.0.0/24 vlan: 20 storage: range: 172.18.0.0/24 vlan: 21 tenant: range: 172.19.0.0/24 vlan: 22 enable_ramdisk: true podified_validation: true push_registry: quay.rdoproject.org quay_login_secret_name: quay_nextgen_zuulgithubci registry_login_enabled: true scenario: nightly_bundles zuul: _inheritance_path: - '' - '' - '' - '' - '' - '' - '' - '' - '' ansible_version: '8' attempts: 1 branch: master build: 1ff1f9e2914e4781854296cd59417fcd build_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator buildset: f71192a0075543fc80fa0dd7b04d38c1 buildset_refs: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: 
src/github.com/infrawatch/service-telemetry-operator change_url: https://github.com/infrawatch/service-telemetry-operator child_jobs: [] event_id: 0e5659e7c90540b9826e576f72e96d0a executor: hostname: ze03.softwarefactory-project.io inventory_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/ansible/inventory.yaml log_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/logs result_data_file: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/results.json src_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work/src work_root: /var/lib/zuul/builds/1ff1f9e2914e4781854296cd59417fcd/work items: - branch: master change_url: https://github.com/infrawatch/service-telemetry-operator project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator job: stf-crc-ocp_416-nightly_bundles jobtags: [] max_attempts: 1 pipeline: periodic playbook_context: playbook_projects: trusted/project_0/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 672a220823fac36a8965fa0d3dca764739bb46c0 trusted/project_1/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 trusted/project_2/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 trusted/project_3/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 untrusted/project_0/github.com/openstack-k8s-operators/ci-framework: canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 
untrusted/project_1/review.rdoproject.org/config: canonical_name: review.rdoproject.org/config checkout: master commit: 672a220823fac36a8965fa0d3dca764739bb46c0 untrusted/project_2/opendev.org/zuul/zuul-jobs: canonical_name: opendev.org/zuul/zuul-jobs checkout: master commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 untrusted/project_3/review.rdoproject.org/rdo-jobs: canonical_name: review.rdoproject.org/rdo-jobs checkout: master commit: 9df4e7d5b028e976203d64479f9b7a76c1c95a24 untrusted/project_4/github.com/infrawatch/service-telemetry-operator: canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master commit: eb8aace45bdd3decd60358ca0a7b52ae71b6879a playbooks: - path: untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/deploy_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_0/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_0/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_0/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_0/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_0/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_0/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_0/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_0/role_4/rdo-jobs/roles - path: 
untrusted/project_4/github.com/infrawatch/service-telemetry-operator/ci/test_stf.yml roles: - checkout: master checkout_description: playbook branch link_name: ansible/playbook_1/role_0/service-telemetry-operator link_target: untrusted/project_4/github.com/infrawatch/service-telemetry-operator role_path: ansible/playbook_1/role_0/service-telemetry-operator/roles - checkout: main checkout_description: project override ref link_name: ansible/playbook_1/role_1/ci-framework link_target: untrusted/project_0/github.com/openstack-k8s-operators/ci-framework role_path: ansible/playbook_1/role_1/ci-framework/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_2/config link_target: untrusted/project_1/review.rdoproject.org/config role_path: ansible/playbook_1/role_2/config/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_3/zuul-jobs link_target: untrusted/project_2/opendev.org/zuul/zuul-jobs role_path: ansible/playbook_1/role_3/zuul-jobs/roles - checkout: master checkout_description: zuul branch link_name: ansible/playbook_1/role_4/rdo-jobs link_target: untrusted/project_3/review.rdoproject.org/rdo-jobs role_path: ansible/playbook_1/role_4/rdo-jobs/roles post_review: true project: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator name: infrawatch/service-telemetry-operator short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator projects: github.com/crc-org/crc-cloud: canonical_hostname: github.com canonical_name: github.com/crc-org/crc-cloud checkout: main checkout_description: project override ref commit: 42957126d9d9b9d1372615db325b82bd992fa335 name: crc-org/crc-cloud required: true short_name: crc-cloud src_dir: src/github.com/crc-org/crc-cloud github.com/infrawatch/prometheus-webhook-snmp: canonical_hostname: github.com canonical_name: github.com/infrawatch/prometheus-webhook-snmp checkout: master 
checkout_description: zuul branch commit: 3959c53b2613d03d066cb1b2fe5bdae8633ae895 name: infrawatch/prometheus-webhook-snmp required: true short_name: prometheus-webhook-snmp src_dir: src/github.com/infrawatch/prometheus-webhook-snmp github.com/infrawatch/service-telemetry-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/service-telemetry-operator checkout: master checkout_description: zuul branch commit: eb8aace45bdd3decd60358ca0a7b52ae71b6879a name: infrawatch/service-telemetry-operator required: true short_name: service-telemetry-operator src_dir: src/github.com/infrawatch/service-telemetry-operator github.com/infrawatch/sg-bridge: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-bridge checkout: master checkout_description: zuul branch commit: bab11fba86ad0c21cb35e12b56bf086a3332f1d2 name: infrawatch/sg-bridge required: true short_name: sg-bridge src_dir: src/github.com/infrawatch/sg-bridge github.com/infrawatch/sg-core: canonical_hostname: github.com canonical_name: github.com/infrawatch/sg-core checkout: master checkout_description: zuul branch commit: 5a4aece11fea9f71ce7515d11e1e7f0eae97eea6 name: infrawatch/sg-core required: true short_name: sg-core src_dir: src/github.com/infrawatch/sg-core github.com/infrawatch/smart-gateway-operator: canonical_hostname: github.com canonical_name: github.com/infrawatch/smart-gateway-operator checkout: master checkout_description: zuul branch commit: 9e0945fe8a0e74be8bc9449318446eeb74336986 name: infrawatch/smart-gateway-operator required: true short_name: smart-gateway-operator src_dir: src/github.com/infrawatch/smart-gateway-operator github.com/openstack-k8s-operators/ci-framework: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/ci-framework checkout: main checkout_description: project override ref commit: b9f05e2b6eff8ddb76fcb7c45350db75c6af9b72 name: openstack-k8s-operators/ci-framework required: true short_name: ci-framework 
src_dir: src/github.com/openstack-k8s-operators/ci-framework github.com/openstack-k8s-operators/dataplane-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/dataplane-operator checkout: main checkout_description: project override ref commit: c98b51bcd7fe14b85ed4cf3f5f76552b3455c5f2 name: openstack-k8s-operators/dataplane-operator required: true short_name: dataplane-operator src_dir: src/github.com/openstack-k8s-operators/dataplane-operator github.com/openstack-k8s-operators/edpm-ansible: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/edpm-ansible checkout: main checkout_description: project default branch commit: 181b1cc6787a986fc316969801a1c9403cae9dcc name: openstack-k8s-operators/edpm-ansible required: true short_name: edpm-ansible src_dir: src/github.com/openstack-k8s-operators/edpm-ansible github.com/openstack-k8s-operators/infra-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/infra-operator checkout: main checkout_description: project override ref commit: 786269345f996bd262360738a1e3c6b09171f370 name: openstack-k8s-operators/infra-operator required: true short_name: infra-operator src_dir: src/github.com/openstack-k8s-operators/infra-operator github.com/openstack-k8s-operators/install_yamls: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/install_yamls checkout: main checkout_description: project default branch commit: 2f838b62fe50aacff3d514af4b502264e0a276a5 name: openstack-k8s-operators/install_yamls required: true short_name: install_yamls src_dir: src/github.com/openstack-k8s-operators/install_yamls github.com/openstack-k8s-operators/openstack-baremetal-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-baremetal-operator checkout: master checkout_description: zuul branch commit: a333e57066b1d48e41f93af68be81188290a96b3 name: 
openstack-k8s-operators/openstack-baremetal-operator required: true short_name: openstack-baremetal-operator src_dir: src/github.com/openstack-k8s-operators/openstack-baremetal-operator github.com/openstack-k8s-operators/openstack-must-gather: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-must-gather checkout: main checkout_description: project override ref commit: 2da49819dd6af6036aede5e4e9a080ff2c6457de name: openstack-k8s-operators/openstack-must-gather required: true short_name: openstack-must-gather src_dir: src/github.com/openstack-k8s-operators/openstack-must-gather github.com/openstack-k8s-operators/openstack-operator: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/openstack-operator checkout: main checkout_description: project override ref commit: 0b7b865c642a4e7d0dba878ba5b0b58c4c8afc46 name: openstack-k8s-operators/openstack-operator required: true short_name: openstack-operator src_dir: src/github.com/openstack-k8s-operators/openstack-operator github.com/openstack-k8s-operators/repo-setup: canonical_hostname: github.com canonical_name: github.com/openstack-k8s-operators/repo-setup checkout: main checkout_description: project default branch commit: 37b10946c6a10f9fa26c13305f06bfd6867e723f name: openstack-k8s-operators/repo-setup required: true short_name: repo-setup src_dir: src/github.com/openstack-k8s-operators/repo-setup opendev.org/zuul/zuul-jobs: canonical_hostname: opendev.org canonical_name: opendev.org/zuul/zuul-jobs checkout: master checkout_description: zuul branch commit: 935cfd422c2237f4863cdcdf5fb201bce8c32a67 name: zuul/zuul-jobs required: true short_name: zuul-jobs src_dir: src/opendev.org/zuul/zuul-jobs review.rdoproject.org/config: canonical_hostname: review.rdoproject.org canonical_name: review.rdoproject.org/config checkout: master checkout_description: zuul branch commit: 672a220823fac36a8965fa0d3dca764739bb46c0 name: config required: true short_name: 
config src_dir: src/review.rdoproject.org/config ref: refs/heads/master resources: {} tenant: rdoproject.org timeout: 3600 voting: true zuul_log_collection: true home/zuul/zuul-output/logs/ci-framework-data/artifacts/parameters/openshift-login-params.yml0000644000175000017500000000044015117130431031611 0ustar zuulzuulcifmw_openshift_api: https://api.crc.testing:6443 cifmw_openshift_context: default/api-crc-testing:6443/kubeadmin cifmw_openshift_kubeconfig: /home/zuul/.crc/machines/crc/kubeconfig cifmw_openshift_token: sha256~CRDzRDEwMlIhbeVJLCrSChdC674z2UmCrkBejT7UEP4 cifmw_openshift_user: kubeadmin home/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/0000755000175000017500000000000015117130417024334 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/openstack/0000755000175000017500000000000015117130417026323 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/manifests/openstack/cr/0000755000175000017500000000000015117130417026727 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible-facts.yml0000644000175000017500000004657015117130627025620 0ustar zuulzuul_ansible_facts_gathered: true all_ipv4_addresses: - 38.102.83.182 all_ipv6_addresses: - fe80::f816:3eff:fe96:e4 ansible_local: {} apparmor: status: disabled architecture: x86_64 bios_date: 04/01/2014 bios_vendor: SeaBIOS bios_version: 1.15.0-1 board_asset_tag: NA board_name: NA board_serial: NA board_vendor: NA board_version: NA chassis_asset_tag: NA chassis_serial: NA chassis_vendor: QEMU chassis_version: pc-i440fx-6.2 cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 crc_ci_bootstrap_instance_default_net_config: mtu: 1500 range: 192.168.122.0/24 crc_ci_bootstrap_instance_nm_vlan_networks: - key: internal-api value: ip: 172.17.0.5 - key: storage 
value: ip: 172.18.0.5 - key: tenant value: ip: 172.19.0.5 crc_ci_bootstrap_instance_parent_port_create_yaml: admin_state_up: true allowed_address_pairs: [] binding_host_id: null binding_profile: {} binding_vif_details: {} binding_vif_type: null binding_vnic_type: normal created_at: '2025-12-13T00:08:36Z' data_plane_status: null description: '' device_id: '' device_owner: '' device_profile: null dns_assignment: - fqdn: host-192-168-122-10.openstacklocal. hostname: host-192-168-122-10 ip_address: 192.168.122.10 dns_domain: '' dns_name: '' extra_dhcp_opts: [] fixed_ips: - ip_address: 192.168.122.10 subnet_id: 02849e46-923a-4039-a741-7220969b703a hardware_offload_type: null hints: '' id: 42ec78f8-dc44-4516-ac4e-135cec7f7487 ip_allocation: immediate mac_address: fa:16:3e:91:d3:27 name: crc-02c5855c-ecb0-48ea-8e20-d08d70e9697e network_id: dec04d5b-6116-4933-889e-8847df6f684f numa_affinity_policy: null port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 propagate_uplink_status: null qos_network_policy_id: null qos_policy_id: null resource_request: null revision_number: 1 security_group_ids: [] status: DOWN tags: [] trunk_details: null trusted: null updated_at: '2025-12-13T00:08:36Z' crc_ci_bootstrap_network_name: zuul-ci-net-1ff1f9e2 crc_ci_bootstrap_networks_out: controller: default: connection: ci-private-network gw: 192.168.122.1 iface: eth1 ip: 192.168.122.11/24 mac: fa:16:3e:b1:85:fe mtu: 1500 crc: default: connection: ci-private-network gw: 192.168.122.1 iface: ens7 ip: 192.168.122.10/24 mac: fa:16:3e:91:d3:27 mtu: 1500 internal-api: connection: ci-private-network-20 iface: ens7.20 ip: 172.17.0.5/24 mac: 52:54:00:9f:c2:b6 mtu: '1496' parent_iface: ens7 vlan: 20 storage: connection: ci-private-network-21 iface: ens7.21 ip: 172.18.0.5/24 mac: 52:54:00:ea:0a:6d mtu: '1496' parent_iface: ens7 vlan: 21 tenant: connection: ci-private-network-22 iface: ens7.22 ip: 172.19.0.5/24 mac: 52:54:00:3e:f4:b3 mtu: '1496' parent_iface: ens7 vlan: 22 
crc_ci_bootstrap_private_net_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-12-13T00:08:02Z' description: '' dns_domain: '' id: dec04d5b-6116-4933-889e-8847df6f684f ipv4_address_scope: null ipv6_address_scope: null is_default: false is_vlan_qinq: null is_vlan_transparent: false l2_adjacency: true mtu: 1500 name: zuul-ci-net-1ff1f9e2 port_security_enabled: false project_id: 4b633c451ac74233be3721a3635275e5 provider:network_type: null provider:physical_network: null provider:segmentation_id: null qos_policy_id: null revision_number: 1 router:external: false segments: null shared: false status: ACTIVE subnets: [] tags: [] updated_at: '2025-12-13T00:08:02Z' crc_ci_bootstrap_private_router_create_yaml: admin_state_up: true availability_zone_hints: - nova availability_zones: [] created_at: '2025-12-13T00:08:08Z' description: '' enable_ndp_proxy: null external_gateway_info: enable_snat: true external_fixed_ips: - ip_address: 38.102.83.45 subnet_id: 3169b11b-94b1-4bc9-9727-4fdbbe15e56e network_id: 7abff1a9-a103-46d0-979a-1f1e599f4f41 flavor_id: null id: 9c9866c3-8654-46c3-8ace-99e01a86017f name: zuul-ci-subnet-router-1ff1f9e2 project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 3 routes: [] status: ACTIVE tags: [] tenant_id: 4b633c451ac74233be3721a3635275e5 updated_at: '2025-12-13T00:08:09Z' crc_ci_bootstrap_private_subnet_create_yaml: allocation_pools: - end: 192.168.122.254 start: 192.168.122.2 cidr: 192.168.122.0/24 created_at: '2025-12-13T00:08:06Z' description: '' dns_nameservers: [] dns_publish_fixed_ip: null enable_dhcp: false gateway_ip: 192.168.122.1 host_routes: [] id: 02849e46-923a-4039-a741-7220969b703a ip_version: 4 ipv6_address_mode: null ipv6_ra_mode: null name: zuul-ci-subnet-1ff1f9e2 network_id: dec04d5b-6116-4933-889e-8847df6f684f project_id: 4b633c451ac74233be3721a3635275e5 revision_number: 0 segment_id: null service_types: [] subnetpool_id: null tags: [] updated_at: 
'2025-12-13T00:08:06Z' crc_ci_bootstrap_provider_dns: - 199.204.44.24 - 199.204.47.54 crc_ci_bootstrap_router_name: zuul-ci-subnet-router-1ff1f9e2 crc_ci_bootstrap_subnet_name: zuul-ci-subnet-1ff1f9e2 date_time: date: '2025-12-13' day: '13' epoch: '1765585301' epoch_int: '1765585301' hour: '00' iso8601: '2025-12-13T00:21:41Z' iso8601_basic: 20251213T002141574207 iso8601_basic_short: 20251213T002141 iso8601_micro: '2025-12-13T00:21:41.574207Z' minute: '21' month: '12' second: '41' time: 00:21:41 tz: UTC tz_dst: UTC tz_offset: '+0000' weekday: Saturday weekday_number: '6' weeknumber: '49' year: '2025' default_ipv4: address: 38.102.83.182 alias: eth0 broadcast: 38.102.83.255 gateway: 38.102.83.1 interface: eth0 macaddress: fa:16:3e:96:00:e4 mtu: 1500 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' type: ether default_ipv6: {} device_links: ids: sr0: - ata-QEMU_DVD-ROM_QM00001 labels: sr0: - config-2 masters: {} uuids: sr0: - 2025-12-13-00-01-20-00 vda1: - cbdedf45-ed1d-4952-82a8-33a12c0ba266 devices: sr0: holders: [] host: '' links: ids: - ata-QEMU_DVD-ROM_QM00001 labels: - config-2 masters: [] uuids: - 2025-12-13-00-01-20-00 model: QEMU DVD-ROM partitions: {} removable: '1' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: mq-deadline sectors: '964' sectorsize: '2048' size: 482.00 KB support_discard: '2048' vendor: QEMU virtual: 1 vda: holders: [] host: '' links: ids: [] labels: [] masters: [] uuids: [] model: null partitions: vda1: holders: [] links: ids: [] labels: [] masters: [] uuids: - cbdedf45-ed1d-4952-82a8-33a12c0ba266 sectors: '167770079' sectorsize: 512 size: 80.00 GB start: '2048' uuid: cbdedf45-ed1d-4952-82a8-33a12c0ba266 removable: '0' rotational: '1' sas_address: null sas_device_handle: null scheduler_mode: none sectors: '167772160' sectorsize: '512' size: 80.00 GB support_discard: '512' vendor: '0x1af4' virtual: 1 discovered_interpreter_python: /usr/bin/python3 distribution: CentOS distribution_file_parsed: true 
distribution_file_path: /etc/centos-release distribution_file_variety: CentOS distribution_major_version: '9' distribution_release: Stream distribution_version: '9' dns: nameservers: - 192.168.122.10 - 199.204.44.24 - 199.204.47.54 domain: '' effective_group_id: 1000 effective_user_id: 1000 env: ANSIBLE_LOG_PATH: /home/zuul/ci-framework-data/logs/e2e-collect-logs-must-gather.log BASH_FUNC_which%%: "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}" DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus DEBUGINFOD_IMA_CERT_PATH: '/etc/keys/ima:' DEBUGINFOD_URLS: 'https://debuginfod.centos.org/ ' HOME: /home/zuul KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig LANG: en_US.UTF-8 LESSOPEN: '||/usr/bin/lesspipe.sh %s' LOGNAME: zuul MOTD_SHOWN: pam PATH: /home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin PWD: /home/zuul SELINUX_LEVEL_REQUESTED: '' SELINUX_ROLE_REQUESTED: '' SELINUX_USE_CURRENT_RANGE: '' SHELL: /bin/bash SHLVL: '1' SSH_CLIENT: 38.102.83.114 43968 22 SSH_CONNECTION: 38.102.83.114 43968 38.102.83.182 22 USER: zuul XDG_RUNTIME_DIR: /run/user/1000 XDG_SESSION_CLASS: user XDG_SESSION_ID: '16' XDG_SESSION_TYPE: tty _: /usr/bin/python3 which_declare: declare -f eth0: active: true device: eth0 features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: off [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: 'on' rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] 
rx_vlan_filter: on [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: 'on' tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: off [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: off [fixed] tx_gso_partial: off [fixed] tx_gso_robust: on [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: 'off' tx_scatter_gather: 'on' tx_scatter_gather_fraglist: off [fixed] tx_sctp_segmentation: off [fixed] tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'off' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: off [fixed] tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: off [fixed] hw_timestamp_filters: [] ipv4: address: 38.102.83.182 broadcast: 38.102.83.255 netmask: 255.255.255.0 network: 38.102.83.0 prefix: '24' ipv6: - address: fe80::f816:3eff:fe96:e4 prefix: '64' scope: link macaddress: fa:16:3e:96:00:e4 module: virtio_net mtu: 1500 pciid: virtio1 promisc: false speed: -1 timestamping: [] type: ether fibre_channel_wwn: [] fips: false form_factor: Other fqdn: controller gather_subset: - min hostname: controller hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 interfaces: - eth0 - lo is_chroot: false iscsi_iqn: '' kernel: 5.14.0-648.el9.x86_64 kernel_version: '#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025' lo: active: true device: lo features: esp_hw_offload: off [fixed] esp_tx_csum_hw_offload: off [fixed] 
generic_receive_offload: 'on' generic_segmentation_offload: 'on' highdma: on [fixed] hsr_dup_offload: off [fixed] hsr_fwd_offload: off [fixed] hsr_tag_ins_offload: off [fixed] hsr_tag_rm_offload: off [fixed] hw_tc_offload: off [fixed] l2_fwd_offload: off [fixed] large_receive_offload: off [fixed] loopback: on [fixed] macsec_hw_offload: off [fixed] ntuple_filters: off [fixed] receive_hashing: off [fixed] rx_all: off [fixed] rx_checksumming: on [fixed] rx_fcs: off [fixed] rx_gro_hw: off [fixed] rx_gro_list: 'off' rx_udp_gro_forwarding: 'off' rx_udp_tunnel_port_offload: off [fixed] rx_vlan_filter: off [fixed] rx_vlan_offload: off [fixed] rx_vlan_stag_filter: off [fixed] rx_vlan_stag_hw_parse: off [fixed] scatter_gather: 'on' tcp_segmentation_offload: 'on' tls_hw_record: off [fixed] tls_hw_rx_offload: off [fixed] tls_hw_tx_offload: off [fixed] tx_checksum_fcoe_crc: off [fixed] tx_checksum_ip_generic: on [fixed] tx_checksum_ipv4: off [fixed] tx_checksum_ipv6: off [fixed] tx_checksum_sctp: on [fixed] tx_checksumming: 'on' tx_esp_segmentation: off [fixed] tx_fcoe_segmentation: off [fixed] tx_gre_csum_segmentation: off [fixed] tx_gre_segmentation: off [fixed] tx_gso_list: 'on' tx_gso_partial: off [fixed] tx_gso_robust: off [fixed] tx_ipxip4_segmentation: off [fixed] tx_ipxip6_segmentation: off [fixed] tx_nocache_copy: off [fixed] tx_scatter_gather: on [fixed] tx_scatter_gather_fraglist: on [fixed] tx_sctp_segmentation: 'on' tx_tcp6_segmentation: 'on' tx_tcp_ecn_segmentation: 'on' tx_tcp_mangleid_segmentation: 'on' tx_tcp_segmentation: 'on' tx_tunnel_remcsum_segmentation: off [fixed] tx_udp_segmentation: 'on' tx_udp_tnl_csum_segmentation: off [fixed] tx_udp_tnl_segmentation: off [fixed] tx_vlan_offload: off [fixed] tx_vlan_stag_hw_insert: off [fixed] vlan_challenged: on [fixed] hw_timestamp_filters: [] ipv4: address: 127.0.0.1 broadcast: '' netmask: 255.0.0.0 network: 127.0.0.0 prefix: '8' ipv6: - address: ::1 prefix: '128' scope: host mtu: 65536 promisc: false 
timestamping: [] type: loopback loadavg: 15m: 0.06 1m: 0.0 5m: 0.09 locally_reachable_ips: ipv4: - 38.102.83.182 - 127.0.0.0/8 - 127.0.0.1 ipv6: - ::1 - fe80::f816:3eff:fe96:e4 lsb: {} lvm: N/A machine: x86_64 machine_id: 64f1d6692049d8be5e8b216cc203502c memfree_mb: 7164 memory_mb: nocache: free: 7375 used: 304 real: free: 7164 total: 7679 used: 515 swap: cached: 0 free: 0 total: 0 used: 0 memtotal_mb: 7679 module_setup: true mounts: - block_available: 20336157 block_size: 4096 block_total: 20954875 block_used: 618718 device: /dev/vda1 fstype: xfs inode_available: 41888385 inode_total: 41942512 inode_used: 54127 mount: / options: rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota size_available: 83296899072 size_total: 85831168000 uuid: cbdedf45-ed1d-4952-82a8-33a12c0ba266 nodename: controller os_family: RedHat pkg_mgr: dnf proc_cmdline: BOOT_IMAGE: (hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 console: ttyS0,115200n8 crashkernel: 1G-2G:192M,2G-64G:256M,64G-:512M net.ifnames: '0' no_timer_check: true ro: true root: UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 processor: - '0' - AuthenticAMD - AMD EPYC-Rome Processor - '1' - AuthenticAMD - AMD EPYC-Rome Processor - '2' - AuthenticAMD - AMD EPYC-Rome Processor - '3' - AuthenticAMD - AMD EPYC-Rome Processor - '4' - AuthenticAMD - AMD EPYC-Rome Processor - '5' - AuthenticAMD - AMD EPYC-Rome Processor - '6' - AuthenticAMD - AMD EPYC-Rome Processor - '7' - AuthenticAMD - AMD EPYC-Rome Processor processor_cores: 1 processor_count: 8 processor_nproc: 8 processor_threads_per_core: 1 processor_vcpus: 8 product_name: OpenStack Nova product_serial: NA product_uuid: NA product_version: 26.3.1 python: executable: /usr/bin/python3 has_sslcontext: true type: cpython version: major: 3 micro: 25 minor: 9 releaselevel: final serial: 0 version_info: - 3 - 9 - 25 - final - 0 python_version: 3.9.25 real_group_id: 1000 real_user_id: 1000 selinux: config_mode: enforcing mode: enforcing policyvers: 33 status: enabled type: 
targeted selinux_python_present: true service_mgr: systemd ssh_host_key_ecdsa_public: AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBfIWnOzoaFl6L11147qWcwowK6Ci0Rz8t1WjAVB/zcYVQE7pudrJ717ZfSW85tw14Xjf9dwVFE9kociqbG0zJc= ssh_host_key_ecdsa_public_keytype: ecdsa-sha2-nistp256 ssh_host_key_ed25519_public: AAAAC3NzaC1lZDI1NTE5AAAAINkLqbGxYdqx81uU7hEPuFtk8VGcR7wMa2mI4eVESIvr ssh_host_key_ed25519_public_keytype: ssh-ed25519 ssh_host_key_rsa_public: AAAAB3NzaC1yc2EAAAADAQABAAABgQDFMl9iNNfpkBemj+80eKwpd7sRDpIn97HaSAZ/63vqGAbPhgMiH6WngR/zO4oyGyB8lf6fsD+w4LKZgiuAhBwp++i2IcfE5Kfy6quV1X1wL5NwHAombTIg8qtOBzyJxFksJBIHLn8mcWWkttFKy18Ou9KhVGzrDOe8XFSy+jDSiZpPmx7DYjwRg7irJ8dfyyG0bjzrw/C5eBQvyGVsr9RNSDlOv5XmLkybsyqg8nCjNrNEBaKrpRf51w6wWHrTzl/U492b0rnW+3xzYRAnuOhrbIP0OoK+92VqKKeAld7BUW4ZL3PPogxoRhuieoWCGzznwQBUdar6WNRcaUSK3mzSjHikjwkUAl9SR7srM4T9Tc4Yf13/dZ3kOQFQNkH9Xl2+vEHfr9xPC5dknmfsFD4wIdvGCo7MW7+D6tD55fkNhgwMl9YAT7IJVbxrwKBQtHspcwjxuQqyuNN48RMbfflqtKlYaiSqT6TmhmJHfFpToEiZ5O0I11Jw+S6nVSf65MU= ssh_host_key_rsa_public_keytype: ssh-rsa swapfree_mb: 0 swaptotal_mb: 0 system: Linux system_capabilities: - '' system_capabilities_enforced: 'True' system_vendor: OpenStack Foundation uptime_seconds: 336 user_dir: /home/zuul user_gecos: '' user_gid: 1000 user_id: zuul user_shell: /bin/bash user_uid: 1000 userspace_architecture: x86_64 userspace_bits: '64' virtualization_role: guest virtualization_tech_guest: - openstack virtualization_tech_host: - kvm virtualization_type: openstack zuul_change_list: - service-telemetry-operator home/zuul/zuul-output/logs/ci-framework-data/artifacts/NetworkManager/0000755000175000017500000000000015117130656025274 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/NetworkManager/ens3.nmconnection0000644000175000017500000000026215117130630030550 0ustar zuulzuul[connection] id=ens3 uuid=80e201bc-bccd-49eb-9325-1e09b23a9293 type=ethernet interface-name=ens3 [ethernet] [ipv4] method=auto [ipv6] addr-gen-mode=eui64 method=auto [proxy] 
home/zuul/zuul-output/logs/ci-framework-data/artifacts/NetworkManager/ci-private-network.nmconnection
[connection]
id=ci-private-network
uuid=79d6e8b4-96c9-5091-94d7-18fa262e88d3
type=ethernet
autoconnect=true
interface-name=eth1

[ethernet]
mac-address=fa:16:3e:b1:85:fe
mtu=1500

[ipv4]
method=manual
addresses=192.168.122.11/24
never-default=true
gateway=192.168.122.1

[ipv6]
addr-gen-mode=stable-privacy
method=disabled

[proxy]

home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/
home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-highavailability.repo
[repo-setup-centos-highavailability]
name=repo-setup-centos-highavailability
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/HighAvailability/$basearch/os/
gpgcheck=0
enabled=1

home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-powertools.repo
[repo-setup-centos-powertools]
name=repo-setup-centos-powertools
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/CRB/$basearch/os/
gpgcheck=0
enabled=1

home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/delorean.repo.md5
c3923531bcda0b0811b2d5053f189beb
home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-appstream.repo
[repo-setup-centos-appstream]
name=repo-setup-centos-appstream
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/AppStream/$basearch/os/
gpgcheck=0
enabled=1

home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/delorean-antelope-testing.repo
[delorean-antelope-testing]
name=dlrn-antelope-testing
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/deps/latest/
enabled=1
gpgcheck=0
module_hotfixes=1

[delorean-antelope-build-deps]
name=dlrn-antelope-build-deps
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/build-deps/latest/
enabled=1
gpgcheck=0
module_hotfixes=1

[centos9-rabbitmq]
name=centos9-rabbitmq
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/messaging/$basearch/rabbitmq-38/
enabled=1
gpgcheck=0
module_hotfixes=1

[centos9-storage]
name=centos9-storage
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/storage/$basearch/ceph-reef/
enabled=1
gpgcheck=0
module_hotfixes=1

[centos9-opstools]
name=centos9-opstools
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/opstools/$basearch/collectd-5/
enabled=1
gpgcheck=0
module_hotfixes=1

[centos9-nfv-ovs]
name=NFV SIG OpenvSwitch
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/nfv/$basearch/openvswitch-2/
gpgcheck=0
enabled=1
module_hotfixes=1

# epel is required for Ceph Reef
[epel-low-priority]
name=Extra Packages for Enterprise Linux $releasever - $basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-$releasever&arch=$basearch&infra=$infra&content=$contentdir
enabled=1
gpgcheck=0
countme=1
priority=100
includepkgs=libarrow*,parquet*,python3-asyncssh,re2,python3-grpcio,grpc*,abseil*,thrift*,blake3

home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/delorean.repo
[delorean-component-barbican]
name=delorean-openstack-barbican-42b4c41831408a8e323fec3c8983b5c793b64874
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/barbican/42/b4/42b4c41831408a8e323fec3c8983b5c793b64874_08052e9d
enabled=1
gpgcheck=0
priority=1

[delorean-component-baremetal]
name=delorean-python-glean-10df0bd91b9bc5c9fd9cc02d75c0084cd4da29a7
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/baremetal/10/df/10df0bd91b9bc5c9fd9cc02d75c0084cd4da29a7_36137eb3
enabled=1
gpgcheck=0
priority=1

[delorean-component-cinder]
name=delorean-openstack-cinder-1c00d6490d88e436f26efb71f2ac96e75252e97c
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/cinder/1c/00/1c00d6490d88e436f26efb71f2ac96e75252e97c_f716f000
enabled=1
gpgcheck=0
priority=1

[delorean-component-clients]
name=delorean-python-stevedore-c4acc5639fd2329372142e39464fcca0209b0018
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/clients/c4/ac/c4acc5639fd2329372142e39464fcca0209b0018_d3ef8337
enabled=1
gpgcheck=0
priority=1

[delorean-component-cloudops]
name=delorean-python-cloudkitty-tests-tempest-2c80f80e02c5accd099187ea762c8f8389bd7905
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/cloudops/2c/80/2c80f80e02c5accd099187ea762c8f8389bd7905_33e4dd93
enabled=1
gpgcheck=0
priority=1

[delorean-component-common]
name=delorean-os-refresh-config-9bfc52b5049be2d8de6134d662fdde9dfa48960f
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/common/9b/fc/9bfc52b5049be2d8de6134d662fdde9dfa48960f_b85780e6
enabled=1
gpgcheck=0
priority=1

[delorean-component-compute]
name=delorean-openstack-nova-6f8decf0b4f1aa2e96292b6a2ffc28249fe4af5e
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/compute/6f/8d/6f8decf0b4f1aa2e96292b6a2ffc28249fe4af5e_dc05b899
enabled=1
gpgcheck=0
priority=1

[delorean-component-designate]
name=delorean-python-designate-tests-tempest-347fdbc9b4595a10b726526b3c0b5928e5b7fcf2
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/designate/34/7f/347fdbc9b4595a10b726526b3c0b5928e5b7fcf2_3fd39337
enabled=1
gpgcheck=0
priority=1

[delorean-component-glance]
name=delorean-openstack-glance-1fd12c29b339f30fe823e2b5beba14b5f241e52a
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/glance/1f/d1/1fd12c29b339f30fe823e2b5beba14b5f241e52a_0d693729
enabled=1
gpgcheck=0
priority=1

[delorean-component-keystone]
name=delorean-openstack-keystone-e4b40af0ae3698fbbbbfb8c22468b33aae80e6d7
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/keystone/e4/b4/e4b40af0ae3698fbbbbfb8c22468b33aae80e6d7_264c03cc
enabled=1
gpgcheck=0
priority=1

[delorean-component-manila]
name=delorean-openstack-manila-3c01b7181572c95dac462eb19c3121e36cb0fe95
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/manila/3c/01/3c01b7181572c95dac462eb19c3121e36cb0fe95_912dfd18
enabled=1
gpgcheck=0
priority=1

[delorean-component-network]
name=delorean-python-whitebox-neutron-tests-tempest-12cf06ce36a79a584fc757f4c25ff96845573c93
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/network/12/cf/12cf06ce36a79a584fc757f4c25ff96845573c93_3ed3aba3
enabled=1
gpgcheck=0
priority=1

[delorean-component-octavia]
name=delorean-openstack-octavia-ba397f07a7331190208c93368ee23826ac4e2707
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/octavia/ba/39/ba397f07a7331190208c93368ee23826ac4e2707_9d6e596a
enabled=1
gpgcheck=0
priority=1

[delorean-component-optimize]
name=delorean-openstack-watcher-c014f81a8647287f6dcc339321c1256f5a2e82d5
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/optimize/c0/14/c014f81a8647287f6dcc339321c1256f5a2e82d5_bcbfdccc
enabled=1
gpgcheck=0
priority=1

[delorean-component-podified]
name=delorean-ansible-config_template-5ccaa22121a7ff05620975540d81f6efb077d8db
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/podified/5c/ca/5ccaa22121a7ff05620975540d81f6efb077d8db_83eb7cc2
enabled=1
gpgcheck=0
priority=1

[delorean-component-puppet]
name=delorean-puppet-ceph-7352068d7b8c84ded636ab3158dafa6f3851951e
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/puppet/73/52/7352068d7b8c84ded636ab3158dafa6f3851951e_7cde1ad1
enabled=1
gpgcheck=0
priority=1

[delorean-component-swift]
name=delorean-openstack-swift-dc98a8463506ac520c469adb0ef47d0f7753905a
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/swift/dc/98/dc98a8463506ac520c469adb0ef47d0f7753905a_9d02f069
enabled=1
gpgcheck=0
priority=1

[delorean-component-tempest]
name=delorean-python-tempestconf-8515371b7cceebd4282e09f1d8f0cc842df82855
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/tempest/85/15/8515371b7cceebd4282e09f1d8f0cc842df82855_a1e336c7
enabled=1
gpgcheck=0
priority=1

[delorean-component-ui]
name=delorean-openstack-heat-ui-013accbfd179753bc3f0d1f4e5bed07a4fd9f771
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/ui/01/3a/013accbfd179753bc3f0d1f4e5bed07a4fd9f771_0c88e467
enabled=1
gpgcheck=0
priority=1

home/zuul/zuul-output/logs/ci-framework-data/artifacts/yum_repos/repo-setup-centos-baseos.repo
[repo-setup-centos-baseos]
name=repo-setup-centos-baseos
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/BaseOS/$basearch/os/
gpgcheck=0
enabled=1

home/zuul/zuul-output/logs/ci-framework-data/artifacts/ci-env/networking-info.yml
crc_ci_bootstrap_networks_out:
  controller:
    default:
      connection: ci-private-network
      gw: 192.168.122.1
      iface: eth1
      ip: 192.168.122.11/24
      mac: fa:16:3e:b1:85:fe
      mtu: 1500
  crc:
    default:
      connection: ci-private-network
      gw: 192.168.122.1
      iface: ens7
      ip: 192.168.122.10/24
      mac: fa:16:3e:91:d3:27
      mtu: 1500
    internal-api:
      connection: ci-private-network-20
      iface: ens7.20
      ip: 172.17.0.5/24
      mac: 52:54:00:9f:c2:b6
      mtu: '1496'
      parent_iface: ens7
      vlan: 20
    storage:
      connection: ci-private-network-21
      iface: ens7.21
      ip: 172.18.0.5/24
      mac: 52:54:00:ea:0a:6d
      mtu: '1496'
      parent_iface: ens7
      vlan: 21
    tenant:
      connection: ci-private-network-22
      iface: ens7.22
      ip: 172.19.0.5/24
      mac: 52:54:00:3e:f4:b3
      mtu: '1496'
      parent_iface: ens7
      vlan: 22
crc_ci_bootstrap_provider_dns:
- 199.204.44.24
- 199.204.47.54
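In the `networking-info.yml` artifact, each VLAN subinterface (`ens7.20`, `ens7.21`, `ens7.22`) carries `mtu: '1496'` while its `parent_iface` `ens7` uses 1500: the 4-byte 802.1Q VLAN tag is subtracted from the parent MTU. A small sketch of that arithmetic (the helper name is illustrative, not from the captured data):

```python
DOT1Q_TAG_BYTES = 4  # 802.1Q VLAN header inserted into each tagged frame

def vlan_child_mtu(parent_mtu: int) -> int:
    # Payload room left on a VLAN subinterface such as ens7.20
    # once the tag is accounted for on the parent link.
    return parent_mtu - DOT1Q_TAG_BYTES
```

This is why the untagged `default` networks keep `mtu: 1500` while every tagged network in the file reports 1496.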
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_all.yml
---
- name: Debug make_all_env
  when: make_all_env is defined
  ansible.builtin.debug:
    var: make_all_env
- name: Debug make_all_params
  when: make_all_params is defined
  ansible.builtin.debug:
    var: make_all_params
- name: Run all
  retries: "{{ make_all_retries | default(omit) }}"
  delay: "{{ make_all_delay | default(omit) }}"
  until: "{{ make_all_until | default(true) }}"
  register: "make_all_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make all"
    dry_run: "{{ make_all_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_all_env|default({})), **(make_all_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_help.yml
---
- name: Debug make_help_env
  when: make_help_env is defined
  ansible.builtin.debug:
    var: make_help_env
- name: Debug make_help_params
  when: make_help_params is defined
  ansible.builtin.debug:
    var: make_help_params
- name: Run help
  retries: "{{ make_help_retries | default(omit) }}"
  delay: "{{ make_help_delay | default(omit) }}"
  until: "{{ make_help_until | default(true) }}"
  register: "make_help_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make help"
    dry_run: "{{ make_help_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_help_env|default({})), **(make_help_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cleanup.yml
---
- name: Debug make_cleanup_env
  when: make_cleanup_env is defined
  ansible.builtin.debug:
    var: make_cleanup_env
- name: Debug make_cleanup_params
  when: make_cleanup_params is defined
  ansible.builtin.debug:
    var: make_cleanup_params
- name: Run cleanup
  retries: "{{ make_cleanup_retries | default(omit) }}"
  delay: "{{ make_cleanup_delay | default(omit) }}"
  until: "{{ make_cleanup_until | default(true) }}"
  register: "make_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make cleanup"
    dry_run: "{{ make_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_cleanup_env|default({})), **(make_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_deploy_cleanup.yml
---
- name: Debug make_deploy_cleanup_env
  when: make_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_deploy_cleanup_env
- name: Debug make_deploy_cleanup_params
  when: make_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_deploy_cleanup_params
- name: Run deploy_cleanup
  retries: "{{ make_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_deploy_cleanup_until | default(true) }}"
  register: "make_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make deploy_cleanup"
    dry_run: "{{ make_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_deploy_cleanup_env|default({})), **(make_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_wait.yml
---
- name: Debug make_wait_env
  when: make_wait_env is defined
  ansible.builtin.debug:
    var: make_wait_env
- name: Debug make_wait_params
  when: make_wait_params is defined
  ansible.builtin.debug:
    var: make_wait_params
- name: Run wait
  retries: "{{ make_wait_retries | default(omit) }}"
  delay: "{{ make_wait_delay | default(omit) }}"
  until: "{{ make_wait_until | default(true) }}"
  register: "make_wait_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make wait"
    dry_run: "{{ make_wait_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_wait_env|default({})), **(make_wait_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage.yml
---
- name: Debug make_crc_storage_env
  when: make_crc_storage_env is defined
  ansible.builtin.debug:
    var: make_crc_storage_env
- name: Debug make_crc_storage_params
  when: make_crc_storage_params is defined
  ansible.builtin.debug:
    var: make_crc_storage_params
- name: Run crc_storage
  retries: "{{ make_crc_storage_retries | default(omit) }}"
  delay: "{{ make_crc_storage_delay | default(omit) }}"
  until: "{{ make_crc_storage_until | default(true) }}"
  register: "make_crc_storage_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make crc_storage"
    dry_run: "{{ make_crc_storage_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_storage_env|default({})), **(make_crc_storage_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage_cleanup.yml
---
- name: Debug make_crc_storage_cleanup_env
  when: make_crc_storage_cleanup_env is defined
  ansible.builtin.debug:
    var: make_crc_storage_cleanup_env
- name: Debug make_crc_storage_cleanup_params
  when: make_crc_storage_cleanup_params is defined
  ansible.builtin.debug:
    var: make_crc_storage_cleanup_params
- name: Run crc_storage_cleanup
  retries: "{{ make_crc_storage_cleanup_retries | default(omit) }}"
  delay: "{{ make_crc_storage_cleanup_delay | default(omit) }}"
  until: "{{ make_crc_storage_cleanup_until | default(true) }}"
  register: "make_crc_storage_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make crc_storage_cleanup"
    dry_run: "{{ make_crc_storage_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_storage_cleanup_env|default({})), **(make_crc_storage_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage_release.yml
---
- name: Debug make_crc_storage_release_env
  when: make_crc_storage_release_env is defined
  ansible.builtin.debug:
    var: make_crc_storage_release_env
- name: Debug make_crc_storage_release_params
  when: make_crc_storage_release_params is defined
  ansible.builtin.debug:
    var: make_crc_storage_release_params
- name: Run crc_storage_release
  retries: "{{ make_crc_storage_release_retries | default(omit) }}"
  delay: "{{ make_crc_storage_release_delay | default(omit) }}"
  until: "{{ make_crc_storage_release_until | default(true) }}"
  register: "make_crc_storage_release_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make crc_storage_release"
    dry_run: "{{ make_crc_storage_release_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_storage_release_env|default({})), **(make_crc_storage_release_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage_with_retries.yml
---
- name: Debug make_crc_storage_with_retries_env
  when: make_crc_storage_with_retries_env is defined
  ansible.builtin.debug:
    var: make_crc_storage_with_retries_env
- name: Debug make_crc_storage_with_retries_params
  when: make_crc_storage_with_retries_params is defined
  ansible.builtin.debug:
    var: make_crc_storage_with_retries_params
- name: Run crc_storage_with_retries
  retries: "{{ make_crc_storage_with_retries_retries | default(omit) }}"
  delay: "{{ make_crc_storage_with_retries_delay | default(omit) }}"
  until: "{{ make_crc_storage_with_retries_until | default(true) }}"
  register: "make_crc_storage_with_retries_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make crc_storage_with_retries"
    dry_run: "{{ make_crc_storage_with_retries_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_storage_with_retries_env|default({})), **(make_crc_storage_with_retries_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_storage_cleanup_with_retries.yml
---
- name: Debug make_crc_storage_cleanup_with_retries_env
  when: make_crc_storage_cleanup_with_retries_env is defined
  ansible.builtin.debug:
    var: make_crc_storage_cleanup_with_retries_env
- name: Debug make_crc_storage_cleanup_with_retries_params
  when: make_crc_storage_cleanup_with_retries_params is defined
  ansible.builtin.debug:
    var: make_crc_storage_cleanup_with_retries_params
- name: Run crc_storage_cleanup_with_retries
  retries: "{{ make_crc_storage_cleanup_with_retries_retries | default(omit) }}"
  delay: "{{ make_crc_storage_cleanup_with_retries_delay | default(omit) }}"
  until: "{{ make_crc_storage_cleanup_with_retries_until | default(true) }}"
  register: "make_crc_storage_cleanup_with_retries_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make crc_storage_cleanup_with_retries"
    dry_run: "{{ make_crc_storage_cleanup_with_retries_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_storage_cleanup_with_retries_env|default({})), **(make_crc_storage_cleanup_with_retries_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_operator_namespace.yml
---
- name: Debug make_operator_namespace_env
  when: make_operator_namespace_env is defined
  ansible.builtin.debug:
    var: make_operator_namespace_env
- name: Debug make_operator_namespace_params
  when: make_operator_namespace_params is defined
  ansible.builtin.debug:
    var: make_operator_namespace_params
- name: Run operator_namespace
  retries: "{{ make_operator_namespace_retries | default(omit) }}"
  delay: "{{ make_operator_namespace_delay | default(omit) }}"
  until: "{{ make_operator_namespace_until | default(true) }}"
  register: "make_operator_namespace_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make operator_namespace"
    dry_run: "{{ make_operator_namespace_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_operator_namespace_env|default({})), **(make_operator_namespace_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_namespace.yml
---
- name: Debug make_namespace_env
  when: make_namespace_env is defined
  ansible.builtin.debug:
    var: make_namespace_env
- name: Debug make_namespace_params
  when: make_namespace_params is defined
  ansible.builtin.debug:
    var: make_namespace_params
- name: Run namespace
  retries: "{{ make_namespace_retries | default(omit) }}"
  delay: "{{ make_namespace_delay | default(omit) }}"
  until: "{{ make_namespace_until | default(true) }}"
  register: "make_namespace_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make namespace"
    dry_run: "{{ make_namespace_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_namespace_env|default({})), **(make_namespace_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_namespace_cleanup.yml
---
- name: Debug make_namespace_cleanup_env
  when: make_namespace_cleanup_env is defined
  ansible.builtin.debug:
    var: make_namespace_cleanup_env
- name: Debug make_namespace_cleanup_params
  when: make_namespace_cleanup_params is defined
  ansible.builtin.debug:
    var: make_namespace_cleanup_params
- name: Run namespace_cleanup
  retries: "{{ make_namespace_cleanup_retries | default(omit) }}"
  delay: "{{ make_namespace_cleanup_delay | default(omit) }}"
  until: "{{ make_namespace_cleanup_until | default(true) }}"
  register: "make_namespace_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make namespace_cleanup"
    dry_run: "{{ make_namespace_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_namespace_cleanup_env|default({})), **(make_namespace_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_input.yml
---
- name: Debug make_input_env
  when: make_input_env is defined
  ansible.builtin.debug:
    var: make_input_env
- name: Debug make_input_params
  when: make_input_params is defined
  ansible.builtin.debug:
    var: make_input_params
- name: Run input
  retries: "{{ make_input_retries | default(omit) }}"
  delay: "{{ make_input_delay | default(omit) }}"
  until: "{{ make_input_until | default(true) }}"
  register: "make_input_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make input"
    dry_run: "{{ make_input_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_input_env|default({})), **(make_input_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_input_cleanup.yml
---
- name: Debug make_input_cleanup_env
  when: make_input_cleanup_env is defined
  ansible.builtin.debug:
    var: make_input_cleanup_env
- name: Debug make_input_cleanup_params
  when: make_input_cleanup_params is defined
  ansible.builtin.debug:
    var: make_input_cleanup_params
- name: Run input_cleanup
  retries: "{{ make_input_cleanup_retries | default(omit) }}"
  delay: "{{ make_input_cleanup_delay | default(omit) }}"
  until: "{{ make_input_cleanup_until | default(true) }}"
  register: "make_input_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make input_cleanup"
    dry_run: "{{ make_input_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_input_cleanup_env|default({})), **(make_input_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_bmo_setup.yml
---
- name: Debug make_crc_bmo_setup_env
  when: make_crc_bmo_setup_env is defined
  ansible.builtin.debug:
    var: make_crc_bmo_setup_env
- name: Debug make_crc_bmo_setup_params
  when: make_crc_bmo_setup_params is defined
  ansible.builtin.debug:
    var: make_crc_bmo_setup_params
- name: Run crc_bmo_setup
  retries: "{{ make_crc_bmo_setup_retries | default(omit) }}"
  delay: "{{ make_crc_bmo_setup_delay | default(omit) }}"
  until: "{{ make_crc_bmo_setup_until | default(true) }}"
  register: "make_crc_bmo_setup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make crc_bmo_setup"
    dry_run: "{{ make_crc_bmo_setup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_bmo_setup_env|default({})), **(make_crc_bmo_setup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_bmo_cleanup.yml
---
- name: Debug make_crc_bmo_cleanup_env
  when: make_crc_bmo_cleanup_env is defined
  ansible.builtin.debug:
    var: make_crc_bmo_cleanup_env
- name: Debug make_crc_bmo_cleanup_params
  when: make_crc_bmo_cleanup_params is defined
  ansible.builtin.debug:
    var: make_crc_bmo_cleanup_params
- name: Run crc_bmo_cleanup
  retries: "{{ make_crc_bmo_cleanup_retries | default(omit) }}"
  delay: "{{ make_crc_bmo_cleanup_delay | default(omit) }}"
  until: "{{ make_crc_bmo_cleanup_until | default(true) }}"
  register: "make_crc_bmo_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make crc_bmo_cleanup"
    dry_run: "{{ make_crc_bmo_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_bmo_cleanup_env|default({})), **(make_crc_bmo_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_prep.yml
---
- name: Debug make_openstack_prep_env
  when: make_openstack_prep_env is defined
  ansible.builtin.debug:
    var: make_openstack_prep_env
- name: Debug make_openstack_prep_params
  when: make_openstack_prep_params is defined
  ansible.builtin.debug:
    var: make_openstack_prep_params
- name: Run openstack_prep
  retries: "{{ make_openstack_prep_retries | default(omit) }}"
  delay: "{{ make_openstack_prep_delay | default(omit) }}"
  until: "{{ make_openstack_prep_until | default(true) }}"
  register: "make_openstack_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_prep"
    dry_run: "{{ make_openstack_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_prep_env|default({})), **(make_openstack_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack.yml
---
- name: Debug make_openstack_env
  when: make_openstack_env is defined
  ansible.builtin.debug:
    var: make_openstack_env
- name: Debug make_openstack_params
  when: make_openstack_params is defined
  ansible.builtin.debug:
    var: make_openstack_params
- name: Run openstack
  retries: "{{ make_openstack_retries | default(omit) }}"
  delay: "{{ make_openstack_delay | default(omit) }}"
  until: "{{ make_openstack_until | default(true) }}"
  register: "make_openstack_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack"
    dry_run: "{{ make_openstack_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_env|default({})), **(make_openstack_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_wait.yml
---
- name: Debug make_openstack_wait_env
  when: make_openstack_wait_env is defined
  ansible.builtin.debug:
    var: make_openstack_wait_env
- name: Debug make_openstack_wait_params
  when: make_openstack_wait_params is defined
  ansible.builtin.debug:
    var: make_openstack_wait_params
- name: Run openstack_wait
  retries: "{{ make_openstack_wait_retries | default(omit) }}"
  delay: "{{ make_openstack_wait_delay | default(omit) }}"
  until: "{{ make_openstack_wait_until | default(true) }}"
  register: "make_openstack_wait_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_wait"
    dry_run: "{{ make_openstack_wait_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_wait_env|default({})), **(make_openstack_wait_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_init.yml
---
- name: Debug make_openstack_init_env
  when: make_openstack_init_env is defined
  ansible.builtin.debug:
    var: make_openstack_init_env
- name: Debug make_openstack_init_params
  when: make_openstack_init_params is defined
  ansible.builtin.debug:
    var: make_openstack_init_params
- name: Run openstack_init
  retries: "{{ make_openstack_init_retries | default(omit) }}"
  delay: "{{ make_openstack_init_delay | default(omit) }}"
  until: "{{ make_openstack_init_until | default(true) }}"
  register: "make_openstack_init_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_init"
    dry_run: "{{ make_openstack_init_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_init_env|default({})), **(make_openstack_init_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_cleanup.yml
---
- name: Debug make_openstack_cleanup_env
  when: make_openstack_cleanup_env is defined
  ansible.builtin.debug:
    var: make_openstack_cleanup_env
- name: Debug make_openstack_cleanup_params
  when: make_openstack_cleanup_params is defined
  ansible.builtin.debug:
    var: make_openstack_cleanup_params
- name: Run openstack_cleanup
  retries: "{{ make_openstack_cleanup_retries | default(omit) }}"
  delay: "{{ make_openstack_cleanup_delay | default(omit) }}"
  until: "{{ make_openstack_cleanup_until | default(true) }}"
  register: "make_openstack_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_cleanup"
    dry_run: "{{ make_openstack_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_cleanup_env|default({})), **(make_openstack_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_repo.yml
---
- name: Debug make_openstack_repo_env
  when: make_openstack_repo_env is defined
  ansible.builtin.debug:
    var: make_openstack_repo_env
- name: Debug make_openstack_repo_params
  when: make_openstack_repo_params is defined
  ansible.builtin.debug:
    var: make_openstack_repo_params
- name: Run openstack_repo
  retries: "{{ make_openstack_repo_retries | default(omit) }}"
  delay: "{{ make_openstack_repo_delay | default(omit) }}"
  until: "{{ make_openstack_repo_until | default(true) }}"
  register: "make_openstack_repo_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_repo"
    dry_run: "{{ make_openstack_repo_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_repo_env|default({})), **(make_openstack_repo_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_deploy_prep.yml
---
- name: Debug make_openstack_deploy_prep_env
  when: make_openstack_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_openstack_deploy_prep_env
- name: Debug make_openstack_deploy_prep_params
  when: make_openstack_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_openstack_deploy_prep_params
- name: Run openstack_deploy_prep
  retries: "{{ make_openstack_deploy_prep_retries | default(omit) }}"
  delay: "{{
make_openstack_deploy_prep_delay | default(omit) }}" until: "{{ make_openstack_deploy_prep_until | default(true) }}" register: "make_openstack_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_deploy_prep" dry_run: "{{ make_openstack_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_deploy_prep_env|default({})), **(make_openstack_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000173115117130422033401 0ustar zuulzuul--- - name: Debug make_openstack_deploy_env when: make_openstack_deploy_env is defined ansible.builtin.debug: var: make_openstack_deploy_env - name: Debug make_openstack_deploy_params when: make_openstack_deploy_params is defined ansible.builtin.debug: var: make_openstack_deploy_params - name: Run openstack_deploy retries: "{{ make_openstack_deploy_retries | default(omit) }}" delay: "{{ make_openstack_deploy_delay | default(omit) }}" until: "{{ make_openstack_deploy_until | default(true) }}" register: "make_openstack_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_deploy" dry_run: "{{ make_openstack_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_deploy_env|default({})), **(make_openstack_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_wait_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000204415117130422033377 0ustar zuulzuul--- - name: Debug make_openstack_wait_deploy_env when: make_openstack_wait_deploy_env is defined ansible.builtin.debug: var: make_openstack_wait_deploy_env - name: Debug make_openstack_wait_deploy_params when: make_openstack_wait_deploy_params is defined ansible.builtin.debug: var: make_openstack_wait_deploy_params - name: Run openstack_wait_deploy retries: "{{ make_openstack_wait_deploy_retries | default(omit) }}" delay: "{{ make_openstack_wait_deploy_delay | default(omit) }}" until: "{{ make_openstack_wait_deploy_until | default(true) }}" register: "make_openstack_wait_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_wait_deploy" dry_run: "{{ make_openstack_wait_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_wait_deploy_env|default({})), **(make_openstack_wait_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000212115117130422033373 0ustar zuulzuul--- - name: Debug make_openstack_deploy_cleanup_env when: make_openstack_deploy_cleanup_env is defined ansible.builtin.debug: var: make_openstack_deploy_cleanup_env - name: Debug make_openstack_deploy_cleanup_params when: make_openstack_deploy_cleanup_params is defined ansible.builtin.debug: var: 
make_openstack_deploy_cleanup_params - name: Run openstack_deploy_cleanup retries: "{{ make_openstack_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_openstack_deploy_cleanup_delay | default(omit) }}" until: "{{ make_openstack_deploy_cleanup_until | default(true) }}" register: "make_openstack_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_deploy_cleanup" dry_run: "{{ make_openstack_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_deploy_cleanup_env|default({})), **(make_openstack_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_update_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000202515117130422033376 0ustar zuulzuul--- - name: Debug make_openstack_update_run_env when: make_openstack_update_run_env is defined ansible.builtin.debug: var: make_openstack_update_run_env - name: Debug make_openstack_update_run_params when: make_openstack_update_run_params is defined ansible.builtin.debug: var: make_openstack_update_run_params - name: Run openstack_update_run retries: "{{ make_openstack_update_run_retries | default(omit) }}" delay: "{{ make_openstack_update_run_delay | default(omit) }}" until: "{{ make_openstack_update_run_until | default(true) }}" register: "make_openstack_update_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_update_run" dry_run: "{{ make_openstack_update_run_dryrun|default(false)|bool }}" 
extra_args: "{{ dict((make_openstack_update_run_env|default({})), **(make_openstack_update_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_update_services.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_update_s0000644000175000017500000000171215117130422033370 0ustar zuulzuul--- - name: Debug make_update_services_env when: make_update_services_env is defined ansible.builtin.debug: var: make_update_services_env - name: Debug make_update_services_params when: make_update_services_params is defined ansible.builtin.debug: var: make_update_services_params - name: Run update_services retries: "{{ make_update_services_retries | default(omit) }}" delay: "{{ make_update_services_delay | default(omit) }}" until: "{{ make_update_services_until | default(true) }}" register: "make_update_services_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make update_services" dry_run: "{{ make_update_services_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_update_services_env|default({})), **(make_update_services_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_update_system.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_update_s0000644000175000017500000000165415117130422033375 0ustar zuulzuul--- - name: Debug make_update_system_env when: make_update_system_env is defined ansible.builtin.debug: var: make_update_system_env - name: Debug make_update_system_params when: make_update_system_params is defined ansible.builtin.debug: var: 
make_update_system_params - name: Run update_system retries: "{{ make_update_system_retries | default(omit) }}" delay: "{{ make_update_system_delay | default(omit) }}" until: "{{ make_update_system_until | default(true) }}" register: "make_update_system_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make update_system" dry_run: "{{ make_update_system_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_update_system_env|default({})), **(make_update_system_params|default({}))) }}" ././@LongLink0000644000000000000000000000017000000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_patch_version.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000210215117130422033372 0ustar zuulzuul--- - name: Debug make_openstack_patch_version_env when: make_openstack_patch_version_env is defined ansible.builtin.debug: var: make_openstack_patch_version_env - name: Debug make_openstack_patch_version_params when: make_openstack_patch_version_params is defined ansible.builtin.debug: var: make_openstack_patch_version_params - name: Run openstack_patch_version retries: "{{ make_openstack_patch_version_retries | default(omit) }}" delay: "{{ make_openstack_patch_version_delay | default(omit) }}" until: "{{ make_openstack_patch_version_until | default(true) }}" register: "make_openstack_patch_version_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_patch_version" dry_run: "{{ make_openstack_patch_version_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_patch_version_env|default({})), 
**(make_openstack_patch_version_params|default({}))) }}" ././@LongLink0000644000000000000000000000017200000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_generate_keys.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000214015117130422033335 0ustar zuulzuul--- - name: Debug make_edpm_deploy_generate_keys_env when: make_edpm_deploy_generate_keys_env is defined ansible.builtin.debug: var: make_edpm_deploy_generate_keys_env - name: Debug make_edpm_deploy_generate_keys_params when: make_edpm_deploy_generate_keys_params is defined ansible.builtin.debug: var: make_edpm_deploy_generate_keys_params - name: Run edpm_deploy_generate_keys retries: "{{ make_edpm_deploy_generate_keys_retries | default(omit) }}" delay: "{{ make_edpm_deploy_generate_keys_delay | default(omit) }}" until: "{{ make_edpm_deploy_generate_keys_until | default(true) }}" register: "make_edpm_deploy_generate_keys_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_generate_keys" dry_run: "{{ make_edpm_deploy_generate_keys_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_generate_keys_env|default({})), **(make_edpm_deploy_generate_keys_params|default({}))) }}" ././@LongLink0000644000000000000000000000020000000000000011573 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_patch_ansible_runner_image.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_pat0000644000175000017500000000227215117130422033357 0ustar zuulzuul--- - name: Debug make_edpm_patch_ansible_runner_image_env when: make_edpm_patch_ansible_runner_image_env is defined 
ansible.builtin.debug: var: make_edpm_patch_ansible_runner_image_env - name: Debug make_edpm_patch_ansible_runner_image_params when: make_edpm_patch_ansible_runner_image_params is defined ansible.builtin.debug: var: make_edpm_patch_ansible_runner_image_params - name: Run edpm_patch_ansible_runner_image retries: "{{ make_edpm_patch_ansible_runner_image_retries | default(omit) }}" delay: "{{ make_edpm_patch_ansible_runner_image_delay | default(omit) }}" until: "{{ make_edpm_patch_ansible_runner_image_until | default(true) }}" register: "make_edpm_patch_ansible_runner_image_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_patch_ansible_runner_image" dry_run: "{{ make_edpm_patch_ansible_runner_image_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_patch_ansible_runner_image_env|default({})), **(make_edpm_patch_ansible_runner_image_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000173115117130422033342 0ustar zuulzuul--- - name: Debug make_edpm_deploy_prep_env when: make_edpm_deploy_prep_env is defined ansible.builtin.debug: var: make_edpm_deploy_prep_env - name: Debug make_edpm_deploy_prep_params when: make_edpm_deploy_prep_params is defined ansible.builtin.debug: var: make_edpm_deploy_prep_params - name: Run edpm_deploy_prep retries: "{{ make_edpm_deploy_prep_retries | default(omit) }}" delay: "{{ make_edpm_deploy_prep_delay | default(omit) }}" until: "{{ make_edpm_deploy_prep_until | default(true) }}" register: "make_edpm_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ 
cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_prep" dry_run: "{{ make_edpm_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_prep_env|default({})), **(make_edpm_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000200615117130422033336 0ustar zuulzuul--- - name: Debug make_edpm_deploy_cleanup_env when: make_edpm_deploy_cleanup_env is defined ansible.builtin.debug: var: make_edpm_deploy_cleanup_env - name: Debug make_edpm_deploy_cleanup_params when: make_edpm_deploy_cleanup_params is defined ansible.builtin.debug: var: make_edpm_deploy_cleanup_params - name: Run edpm_deploy_cleanup retries: "{{ make_edpm_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_edpm_deploy_cleanup_delay | default(omit) }}" until: "{{ make_edpm_deploy_cleanup_until | default(true) }}" register: "make_edpm_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_cleanup" dry_run: "{{ make_edpm_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_cleanup_env|default({})), **(make_edpm_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000161615117130422033344 0ustar zuulzuul--- - name: Debug make_edpm_deploy_env when: make_edpm_deploy_env is defined ansible.builtin.debug: var: make_edpm_deploy_env - name: Debug make_edpm_deploy_params when: make_edpm_deploy_params is defined ansible.builtin.debug: var: make_edpm_deploy_params - name: Run edpm_deploy retries: "{{ make_edpm_deploy_retries | default(omit) }}" delay: "{{ make_edpm_deploy_delay | default(omit) }}" until: "{{ make_edpm_deploy_until | default(true) }}" register: "make_edpm_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy" dry_run: "{{ make_edpm_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_env|default({})), **(make_edpm_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_baremetal_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000215715117130422033345 0ustar zuulzuul--- - name: Debug make_edpm_deploy_baremetal_prep_env when: make_edpm_deploy_baremetal_prep_env is defined ansible.builtin.debug: var: make_edpm_deploy_baremetal_prep_env - name: Debug make_edpm_deploy_baremetal_prep_params when: make_edpm_deploy_baremetal_prep_params is defined ansible.builtin.debug: var: make_edpm_deploy_baremetal_prep_params - name: Run edpm_deploy_baremetal_prep retries: "{{ make_edpm_deploy_baremetal_prep_retries | default(omit) }}" delay: "{{ 
make_edpm_deploy_baremetal_prep_delay | default(omit) }}" until: "{{ make_edpm_deploy_baremetal_prep_until | default(true) }}" register: "make_edpm_deploy_baremetal_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_baremetal_prep" dry_run: "{{ make_edpm_deploy_baremetal_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_baremetal_prep_env|default({})), **(make_edpm_deploy_baremetal_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_baremetal.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000204415117130422033340 0ustar zuulzuul--- - name: Debug make_edpm_deploy_baremetal_env when: make_edpm_deploy_baremetal_env is defined ansible.builtin.debug: var: make_edpm_deploy_baremetal_env - name: Debug make_edpm_deploy_baremetal_params when: make_edpm_deploy_baremetal_params is defined ansible.builtin.debug: var: make_edpm_deploy_baremetal_params - name: Run edpm_deploy_baremetal retries: "{{ make_edpm_deploy_baremetal_retries | default(omit) }}" delay: "{{ make_edpm_deploy_baremetal_delay | default(omit) }}" until: "{{ make_edpm_deploy_baremetal_until | default(true) }}" register: "make_edpm_deploy_baremetal_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_deploy_baremetal" dry_run: "{{ make_edpm_deploy_baremetal_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_baremetal_env|default({})), **(make_edpm_deploy_baremetal_params|default({}))) }}" 
././@LongLink0000644000000000000000000000017300000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_wait_deploy_baremetal.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_wai0000644000175000017500000000215715117130422033355 0ustar zuulzuul--- - name: Debug make_edpm_wait_deploy_baremetal_env when: make_edpm_wait_deploy_baremetal_env is defined ansible.builtin.debug: var: make_edpm_wait_deploy_baremetal_env - name: Debug make_edpm_wait_deploy_baremetal_params when: make_edpm_wait_deploy_baremetal_params is defined ansible.builtin.debug: var: make_edpm_wait_deploy_baremetal_params - name: Run edpm_wait_deploy_baremetal retries: "{{ make_edpm_wait_deploy_baremetal_retries | default(omit) }}" delay: "{{ make_edpm_wait_deploy_baremetal_delay | default(omit) }}" until: "{{ make_edpm_wait_deploy_baremetal_until | default(true) }}" register: "make_edpm_wait_deploy_baremetal_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_wait_deploy_baremetal" dry_run: "{{ make_edpm_wait_deploy_baremetal_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_wait_deploy_baremetal_env|default({})), **(make_edpm_wait_deploy_baremetal_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_wait_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_wai0000644000175000017500000000173115117130422033352 0ustar zuulzuul--- - name: Debug make_edpm_wait_deploy_env when: make_edpm_wait_deploy_env is defined ansible.builtin.debug: var: make_edpm_wait_deploy_env - name: Debug make_edpm_wait_deploy_params when: 
make_edpm_wait_deploy_params is defined ansible.builtin.debug: var: make_edpm_wait_deploy_params - name: Run edpm_wait_deploy retries: "{{ make_edpm_wait_deploy_retries | default(omit) }}" delay: "{{ make_edpm_wait_deploy_delay | default(omit) }}" until: "{{ make_edpm_wait_deploy_until | default(true) }}" register: "make_edpm_wait_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_wait_deploy" dry_run: "{{ make_edpm_wait_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_wait_deploy_env|default({})), **(make_edpm_wait_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_register_dns.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_reg0000644000175000017500000000175015117130422033350 0ustar zuulzuul--- - name: Debug make_edpm_register_dns_env when: make_edpm_register_dns_env is defined ansible.builtin.debug: var: make_edpm_register_dns_env - name: Debug make_edpm_register_dns_params when: make_edpm_register_dns_params is defined ansible.builtin.debug: var: make_edpm_register_dns_params - name: Run edpm_register_dns retries: "{{ make_edpm_register_dns_retries | default(omit) }}" delay: "{{ make_edpm_register_dns_delay | default(omit) }}" until: "{{ make_edpm_register_dns_until | default(true) }}" register: "make_edpm_register_dns_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_register_dns" dry_run: "{{ make_edpm_register_dns_dryrun|default(false)|bool }}" extra_args: "{{ 
dict((make_edpm_register_dns_env|default({})), **(make_edpm_register_dns_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_nova_discover_hosts.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_nov0000644000175000017500000000212115117130422033366 0ustar zuulzuul--- - name: Debug make_edpm_nova_discover_hosts_env when: make_edpm_nova_discover_hosts_env is defined ansible.builtin.debug: var: make_edpm_nova_discover_hosts_env - name: Debug make_edpm_nova_discover_hosts_params when: make_edpm_nova_discover_hosts_params is defined ansible.builtin.debug: var: make_edpm_nova_discover_hosts_params - name: Run edpm_nova_discover_hosts retries: "{{ make_edpm_nova_discover_hosts_retries | default(omit) }}" delay: "{{ make_edpm_nova_discover_hosts_delay | default(omit) }}" until: "{{ make_edpm_nova_discover_hosts_until | default(true) }}" register: "make_edpm_nova_discover_hosts_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make edpm_nova_discover_hosts" dry_run: "{{ make_edpm_nova_discover_hosts_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_nova_discover_hosts_env|default({})), **(make_edpm_nova_discover_hosts_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_crds.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000167315117130422033406 0ustar zuulzuul--- - name: Debug make_openstack_crds_env when: make_openstack_crds_env is defined ansible.builtin.debug: var: 
    var: make_openstack_crds_env
- name: Debug make_openstack_crds_params
  when: make_openstack_crds_params is defined
  ansible.builtin.debug:
    var: make_openstack_crds_params
- name: Run openstack_crds
  retries: "{{ make_openstack_crds_retries | default(omit) }}"
  delay: "{{ make_openstack_crds_delay | default(omit) }}"
  until: "{{ make_openstack_crds_until | default(true) }}"
  register: "make_openstack_crds_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_crds"
    dry_run: "{{ make_openstack_crds_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_crds_env|default({})), **(make_openstack_crds_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_crds_cleanup.yml
---
- name: Debug make_openstack_crds_cleanup_env
  when: make_openstack_crds_cleanup_env is defined
  ansible.builtin.debug:
    var: make_openstack_crds_cleanup_env
- name: Debug make_openstack_crds_cleanup_params
  when: make_openstack_crds_cleanup_params is defined
  ansible.builtin.debug:
    var: make_openstack_crds_cleanup_params
- name: Run openstack_crds_cleanup
  retries: "{{ make_openstack_crds_cleanup_retries | default(omit) }}"
  delay: "{{ make_openstack_crds_cleanup_delay | default(omit) }}"
  until: "{{ make_openstack_crds_cleanup_until | default(true) }}"
  register: "make_openstack_crds_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make openstack_crds_cleanup"
    dry_run: "{{ make_openstack_crds_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_openstack_crds_cleanup_env|default({})), **(make_openstack_crds_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_networker_prep.yml
---
- name: Debug make_edpm_deploy_networker_prep_env
  when: make_edpm_deploy_networker_prep_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_networker_prep_env
- name: Debug make_edpm_deploy_networker_prep_params
  when: make_edpm_deploy_networker_prep_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_networker_prep_params
- name: Run edpm_deploy_networker_prep
  retries: "{{ make_edpm_deploy_networker_prep_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_networker_prep_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_networker_prep_until | default(true) }}"
  register: "make_edpm_deploy_networker_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_deploy_networker_prep"
    dry_run: "{{ make_edpm_deploy_networker_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_deploy_networker_prep_env|default({})), **(make_edpm_deploy_networker_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_networker_cleanup.yml
---
- name: Debug make_edpm_deploy_networker_cleanup_env
  when: make_edpm_deploy_networker_cleanup_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_networker_cleanup_env
- name: Debug make_edpm_deploy_networker_cleanup_params
  when: make_edpm_deploy_networker_cleanup_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_networker_cleanup_params
- name: Run edpm_deploy_networker_cleanup
  retries: "{{ make_edpm_deploy_networker_cleanup_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_networker_cleanup_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_networker_cleanup_until | default(true) }}"
  register: "make_edpm_deploy_networker_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_deploy_networker_cleanup"
    dry_run: "{{ make_edpm_deploy_networker_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_deploy_networker_cleanup_env|default({})), **(make_edpm_deploy_networker_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_networker.yml
---
- name: Debug make_edpm_deploy_networker_env
  when: make_edpm_deploy_networker_env is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_networker_env
- name: Debug make_edpm_deploy_networker_params
  when: make_edpm_deploy_networker_params is defined
  ansible.builtin.debug:
    var: make_edpm_deploy_networker_params
- name: Run edpm_deploy_networker
  retries: "{{ make_edpm_deploy_networker_retries | default(omit) }}"
  delay: "{{ make_edpm_deploy_networker_delay | default(omit) }}"
  until: "{{ make_edpm_deploy_networker_until | default(true) }}"
  register: "make_edpm_deploy_networker_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make edpm_deploy_networker"
    dry_run: "{{ make_edpm_deploy_networker_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_deploy_networker_env|default({})), **(make_edpm_deploy_networker_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_prep.yml
---
- name: Debug make_infra_prep_env
  when: make_infra_prep_env is defined
  ansible.builtin.debug:
    var: make_infra_prep_env
- name: Debug make_infra_prep_params
  when: make_infra_prep_params is defined
  ansible.builtin.debug:
    var: make_infra_prep_params
- name: Run infra_prep
  retries: "{{ make_infra_prep_retries | default(omit) }}"
  delay: "{{ make_infra_prep_delay | default(omit) }}"
  until: "{{ make_infra_prep_until | default(true) }}"
  register: "make_infra_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make infra_prep"
    dry_run: "{{ make_infra_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_infra_prep_env|default({})), **(make_infra_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra.yml
---
- name: Debug make_infra_env
  when: make_infra_env is defined
  ansible.builtin.debug:
    var: make_infra_env
- name: Debug make_infra_params
  when: make_infra_params is defined
  ansible.builtin.debug:
    var: make_infra_params
- name: Run infra
  retries: "{{ make_infra_retries | default(omit) }}"
  delay: "{{ make_infra_delay | default(omit) }}"
  until: "{{ make_infra_until | default(true) }}"
  register: "make_infra_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make infra"
    dry_run: "{{ make_infra_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_infra_env|default({})), **(make_infra_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_cleanup.yml
---
- name: Debug make_infra_cleanup_env
  when: make_infra_cleanup_env is defined
  ansible.builtin.debug:
    var: make_infra_cleanup_env
- name: Debug make_infra_cleanup_params
  when: make_infra_cleanup_params is defined
  ansible.builtin.debug:
    var: make_infra_cleanup_params
- name: Run infra_cleanup
  retries: "{{ make_infra_cleanup_retries | default(omit) }}"
  delay: "{{ make_infra_cleanup_delay | default(omit) }}"
  until: "{{ make_infra_cleanup_until | default(true) }}"
  register: "make_infra_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make infra_cleanup"
    dry_run: "{{ make_infra_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_infra_cleanup_env|default({})), **(make_infra_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_dns_deploy_prep.yml
---
- name: Debug make_dns_deploy_prep_env
  when: make_dns_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_dns_deploy_prep_env
- name: Debug make_dns_deploy_prep_params
  when: make_dns_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_dns_deploy_prep_params
- name: Run dns_deploy_prep
  retries: "{{ make_dns_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_dns_deploy_prep_delay | default(omit) }}"
  until: "{{ make_dns_deploy_prep_until | default(true) }}"
  register: "make_dns_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make dns_deploy_prep"
    dry_run: "{{ make_dns_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_dns_deploy_prep_env|default({})), **(make_dns_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_dns_deploy.yml
---
- name: Debug make_dns_deploy_env
  when: make_dns_deploy_env is defined
  ansible.builtin.debug:
    var: make_dns_deploy_env
- name: Debug make_dns_deploy_params
  when: make_dns_deploy_params is defined
  ansible.builtin.debug:
    var: make_dns_deploy_params
- name: Run dns_deploy
  retries: "{{ make_dns_deploy_retries | default(omit) }}"
  delay: "{{ make_dns_deploy_delay | default(omit) }}"
  until: "{{ make_dns_deploy_until | default(true) }}"
  register: "make_dns_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make dns_deploy"
    dry_run: "{{ make_dns_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_dns_deploy_env|default({})), **(make_dns_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_dns_deploy_cleanup.yml
---
- name: Debug make_dns_deploy_cleanup_env
  when: make_dns_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_dns_deploy_cleanup_env
- name: Debug make_dns_deploy_cleanup_params
  when: make_dns_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_dns_deploy_cleanup_params
- name: Run dns_deploy_cleanup
  retries: "{{ make_dns_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_dns_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_dns_deploy_cleanup_until | default(true) }}"
  register: "make_dns_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make dns_deploy_cleanup"
    dry_run: "{{ make_dns_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_dns_deploy_cleanup_env|default({})), **(make_dns_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfig_deploy_prep.yml
---
- name: Debug make_netconfig_deploy_prep_env
  when: make_netconfig_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_netconfig_deploy_prep_env
- name: Debug make_netconfig_deploy_prep_params
  when: make_netconfig_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_netconfig_deploy_prep_params
- name: Run netconfig_deploy_prep
  retries: "{{ make_netconfig_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_netconfig_deploy_prep_delay | default(omit) }}"
  until: "{{ make_netconfig_deploy_prep_until | default(true) }}"
  register: "make_netconfig_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make netconfig_deploy_prep"
    dry_run: "{{ make_netconfig_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_netconfig_deploy_prep_env|default({})), **(make_netconfig_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfig_deploy.yml
---
- name: Debug make_netconfig_deploy_env
  when: make_netconfig_deploy_env is defined
  ansible.builtin.debug:
    var: make_netconfig_deploy_env
- name: Debug make_netconfig_deploy_params
  when: make_netconfig_deploy_params is defined
  ansible.builtin.debug:
    var: make_netconfig_deploy_params
- name: Run netconfig_deploy
  retries: "{{ make_netconfig_deploy_retries | default(omit) }}"
  delay: "{{ make_netconfig_deploy_delay | default(omit) }}"
  until: "{{ make_netconfig_deploy_until | default(true) }}"
  register: "make_netconfig_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make netconfig_deploy"
    dry_run: "{{ make_netconfig_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_netconfig_deploy_env|default({})), **(make_netconfig_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netconfig_deploy_cleanup.yml
---
- name: Debug make_netconfig_deploy_cleanup_env
  when: make_netconfig_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_netconfig_deploy_cleanup_env
- name: Debug make_netconfig_deploy_cleanup_params
  when: make_netconfig_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_netconfig_deploy_cleanup_params
- name: Run netconfig_deploy_cleanup
  retries: "{{ make_netconfig_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_netconfig_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_netconfig_deploy_cleanup_until | default(true) }}"
  register: "make_netconfig_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make netconfig_deploy_cleanup"
    dry_run: "{{ make_netconfig_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_netconfig_deploy_cleanup_env|default({})), **(make_netconfig_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcached_deploy_prep.yml
---
- name: Debug make_memcached_deploy_prep_env
  when: make_memcached_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_memcached_deploy_prep_env
- name: Debug make_memcached_deploy_prep_params
  when: make_memcached_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_memcached_deploy_prep_params
- name: Run memcached_deploy_prep
  retries: "{{ make_memcached_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_memcached_deploy_prep_delay | default(omit) }}"
  until: "{{ make_memcached_deploy_prep_until | default(true) }}"
  register: "make_memcached_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make memcached_deploy_prep"
    dry_run: "{{ make_memcached_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_memcached_deploy_prep_env|default({})), **(make_memcached_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcached_deploy.yml
---
- name: Debug make_memcached_deploy_env
  when: make_memcached_deploy_env is defined
  ansible.builtin.debug:
    var: make_memcached_deploy_env
- name: Debug make_memcached_deploy_params
  when: make_memcached_deploy_params is defined
  ansible.builtin.debug:
    var: make_memcached_deploy_params
- name: Run memcached_deploy
  retries: "{{ make_memcached_deploy_retries | default(omit) }}"
  delay: "{{ make_memcached_deploy_delay | default(omit) }}"
  until: "{{ make_memcached_deploy_until | default(true) }}"
  register: "make_memcached_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make memcached_deploy"
    dry_run: "{{ make_memcached_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_memcached_deploy_env|default({})), **(make_memcached_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_memcached_deploy_cleanup.yml
---
- name: Debug make_memcached_deploy_cleanup_env
  when: make_memcached_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_memcached_deploy_cleanup_env
- name: Debug make_memcached_deploy_cleanup_params
  when: make_memcached_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_memcached_deploy_cleanup_params
- name: Run memcached_deploy_cleanup
  retries: "{{ make_memcached_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_memcached_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_memcached_deploy_cleanup_until | default(true) }}"
  register: "make_memcached_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make memcached_deploy_cleanup"
    dry_run: "{{ make_memcached_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_memcached_deploy_cleanup_env|default({})), **(make_memcached_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_prep.yml
---
- name: Debug make_keystone_prep_env
  when: make_keystone_prep_env is defined
  ansible.builtin.debug:
    var: make_keystone_prep_env
- name: Debug make_keystone_prep_params
  when: make_keystone_prep_params is defined
  ansible.builtin.debug:
    var: make_keystone_prep_params
- name: Run keystone_prep
  retries: "{{ make_keystone_prep_retries | default(omit) }}"
  delay: "{{ make_keystone_prep_delay | default(omit) }}"
  until: "{{ make_keystone_prep_until | default(true) }}"
  register: "make_keystone_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make keystone_prep"
    dry_run: "{{ make_keystone_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_keystone_prep_env|default({})), **(make_keystone_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone.yml
---
- name: Debug make_keystone_env
  when: make_keystone_env is defined
  ansible.builtin.debug:
    var: make_keystone_env
- name: Debug make_keystone_params
  when: make_keystone_params is defined
  ansible.builtin.debug:
    var: make_keystone_params
- name: Run keystone
  retries: "{{ make_keystone_retries | default(omit) }}"
  delay: "{{ make_keystone_delay | default(omit) }}"
  until: "{{ make_keystone_until | default(true) }}"
  register: "make_keystone_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make keystone"
    dry_run: "{{ make_keystone_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_keystone_env|default({})), **(make_keystone_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_cleanup.yml
---
- name: Debug make_keystone_cleanup_env
  when: make_keystone_cleanup_env is defined
  ansible.builtin.debug:
    var: make_keystone_cleanup_env
- name: Debug make_keystone_cleanup_params
  when: make_keystone_cleanup_params is defined
  ansible.builtin.debug:
    var: make_keystone_cleanup_params
- name: Run keystone_cleanup
  retries: "{{ make_keystone_cleanup_retries | default(omit) }}"
  delay: "{{ make_keystone_cleanup_delay | default(omit) }}"
  until: "{{ make_keystone_cleanup_until | default(true) }}"
  register: "make_keystone_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make keystone_cleanup"
    dry_run: "{{ make_keystone_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_keystone_cleanup_env|default({})), **(make_keystone_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_deploy_prep.yml
---
- name: Debug make_keystone_deploy_prep_env
  when: make_keystone_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_keystone_deploy_prep_env
- name: Debug make_keystone_deploy_prep_params
  when: make_keystone_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_keystone_deploy_prep_params
- name: Run keystone_deploy_prep
  retries: "{{ make_keystone_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_keystone_deploy_prep_delay | default(omit) }}"
  until: "{{ make_keystone_deploy_prep_until | default(true) }}"
  register: "make_keystone_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make keystone_deploy_prep"
    dry_run: "{{ make_keystone_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_keystone_deploy_prep_env|default({})), **(make_keystone_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_deploy.yml
---
- name: Debug make_keystone_deploy_env
  when: make_keystone_deploy_env is defined
  ansible.builtin.debug:
    var: make_keystone_deploy_env
- name: Debug make_keystone_deploy_params
  when: make_keystone_deploy_params is defined
  ansible.builtin.debug:
    var: make_keystone_deploy_params
- name: Run keystone_deploy
  retries: "{{ make_keystone_deploy_retries | default(omit) }}"
  delay: "{{ make_keystone_deploy_delay | default(omit) }}"
  until: "{{ make_keystone_deploy_until | default(true) }}"
  register: "make_keystone_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make keystone_deploy"
    dry_run: "{{ make_keystone_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_keystone_deploy_env|default({})), **(make_keystone_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_deploy_cleanup.yml
---
- name: Debug make_keystone_deploy_cleanup_env
  when: make_keystone_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_keystone_deploy_cleanup_env
- name: Debug make_keystone_deploy_cleanup_params
  when: make_keystone_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_keystone_deploy_cleanup_params
- name: Run keystone_deploy_cleanup
  retries: "{{ make_keystone_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_keystone_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_keystone_deploy_cleanup_until | default(true) }}"
  register: "make_keystone_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make keystone_deploy_cleanup"
    dry_run: "{{ make_keystone_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_keystone_deploy_cleanup_env|default({})), **(make_keystone_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_prep.yml
---
- name: Debug make_barbican_prep_env
  when: make_barbican_prep_env is defined
  ansible.builtin.debug:
    var: make_barbican_prep_env
- name: Debug make_barbican_prep_params
  when: make_barbican_prep_params is defined
  ansible.builtin.debug:
    var: make_barbican_prep_params
- name: Run barbican_prep
  retries: "{{ make_barbican_prep_retries | default(omit) }}"
  delay: "{{ make_barbican_prep_delay | default(omit) }}"
  until: "{{ make_barbican_prep_until | default(true) }}"
  register: "make_barbican_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make barbican_prep"
    dry_run: "{{ make_barbican_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_barbican_prep_env|default({})), **(make_barbican_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican.yml
---
- name: Debug make_barbican_env
  when: make_barbican_env is defined
  ansible.builtin.debug:
    var: make_barbican_env
- name: Debug make_barbican_params
  when: make_barbican_params is defined
  ansible.builtin.debug:
    var: make_barbican_params
- name: Run barbican
  retries: "{{ make_barbican_retries | default(omit) }}"
  delay: "{{ make_barbican_delay | default(omit) }}"
  until: "{{ make_barbican_until | default(true) }}"
  register: "make_barbican_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make barbican"
    dry_run: "{{ make_barbican_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_barbican_env|default({})), **(make_barbican_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_cleanup.yml
---
- name: Debug make_barbican_cleanup_env
  when: make_barbican_cleanup_env is defined
  ansible.builtin.debug:
    var: make_barbican_cleanup_env
- name: Debug make_barbican_cleanup_params
  when: make_barbican_cleanup_params is defined
  ansible.builtin.debug:
    var: make_barbican_cleanup_params
- name: Run barbican_cleanup
  retries: "{{ make_barbican_cleanup_retries | default(omit) }}"
  delay: "{{ make_barbican_cleanup_delay | default(omit) }}"
  until: "{{ make_barbican_cleanup_until | default(true) }}"
  register: "make_barbican_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make barbican_cleanup"
    dry_run: "{{ make_barbican_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_barbican_cleanup_env|default({})), **(make_barbican_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_deploy_prep.yml
---
- name: Debug make_barbican_deploy_prep_env
  when: make_barbican_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_barbican_deploy_prep_env
- name: Debug make_barbican_deploy_prep_params
  when: make_barbican_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_barbican_deploy_prep_params
- name: Run barbican_deploy_prep
  retries: "{{ make_barbican_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_barbican_deploy_prep_delay | default(omit) }}"
  until: "{{ make_barbican_deploy_prep_until | default(true) }}"
  register: "make_barbican_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make barbican_deploy_prep"
    dry_run: "{{ make_barbican_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_barbican_deploy_prep_env|default({})), **(make_barbican_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_deploy.yml
---
- name: Debug make_barbican_deploy_env
  when: make_barbican_deploy_env is defined
  ansible.builtin.debug:
    var: make_barbican_deploy_env
- name: Debug make_barbican_deploy_params
  when: make_barbican_deploy_params is defined
  ansible.builtin.debug:
    var: make_barbican_deploy_params
- name: Run barbican_deploy
  retries: "{{ make_barbican_deploy_retries | default(omit) }}"
  delay: "{{ make_barbican_deploy_delay | default(omit) }}"
  until: "{{ make_barbican_deploy_until | default(true) }}"
  register: "make_barbican_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make barbican_deploy"
    dry_run: "{{ make_barbican_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_barbican_deploy_env|default({})), **(make_barbican_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_deploy_validate.yml
---
- name: Debug make_barbican_deploy_validate_env
  when: make_barbican_deploy_validate_env is defined
  ansible.builtin.debug:
    var: make_barbican_deploy_validate_env
- name: Debug make_barbican_deploy_validate_params
  when: make_barbican_deploy_validate_params is defined
  ansible.builtin.debug:
    var: make_barbican_deploy_validate_params
- name: Run barbican_deploy_validate
  retries: "{{ make_barbican_deploy_validate_retries | default(omit) }}"
  delay: "{{ make_barbican_deploy_validate_delay | default(omit) }}"
  until: "{{ make_barbican_deploy_validate_until | default(true) }}"
  register: "make_barbican_deploy_validate_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make barbican_deploy_validate"
    dry_run: "{{ make_barbican_deploy_validate_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_barbican_deploy_validate_env|default({})), **(make_barbican_deploy_validate_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_deploy_cleanup.yml
---
- name: Debug make_barbican_deploy_cleanup_env
  when: make_barbican_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_barbican_deploy_cleanup_env
- name: Debug make_barbican_deploy_cleanup_params
  when: make_barbican_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_barbican_deploy_cleanup_params
- name: Run barbican_deploy_cleanup
  retries: "{{ make_barbican_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_barbican_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_barbican_deploy_cleanup_until | default(true) }}"
  register: "make_barbican_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make barbican_deploy_cleanup"
    dry_run: "{{ make_barbican_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_barbican_deploy_cleanup_env|default({})), **(make_barbican_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb.yml
---
- name: Debug make_mariadb_env
  when: make_mariadb_env is defined
  ansible.builtin.debug:
    var: make_mariadb_env
- name: Debug make_mariadb_params
  when: make_mariadb_params is defined
  ansible.builtin.debug:
    var: make_mariadb_params
- name: Run mariadb
  retries: "{{ make_mariadb_retries | default(omit) }}"
  delay: "{{ make_mariadb_delay | default(omit) }}"
  until: "{{ make_mariadb_until | default(true) }}"
  register: "make_mariadb_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make mariadb"
    dry_run: "{{ make_mariadb_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_mariadb_env|default({})), **(make_mariadb_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_cleanup.yml
---
- name: Debug make_mariadb_cleanup_env
  when: make_mariadb_cleanup_env is defined
  ansible.builtin.debug:
    var: make_mariadb_cleanup_env
- name: Debug make_mariadb_cleanup_params
  when: make_mariadb_cleanup_params is defined
  ansible.builtin.debug:
    var: make_mariadb_cleanup_params
- name: Run mariadb_cleanup
  retries: "{{ make_mariadb_cleanup_retries | default(omit) }}"
  delay: "{{ make_mariadb_cleanup_delay | default(omit) }}"
  until:
"{{ make_mariadb_cleanup_until | default(true) }}" register: "make_mariadb_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_cleanup" dry_run: "{{ make_mariadb_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_cleanup_env|default({})), **(make_mariadb_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000200615117130422033317 0ustar zuulzuul--- - name: Debug make_mariadb_deploy_prep_env when: make_mariadb_deploy_prep_env is defined ansible.builtin.debug: var: make_mariadb_deploy_prep_env - name: Debug make_mariadb_deploy_prep_params when: make_mariadb_deploy_prep_params is defined ansible.builtin.debug: var: make_mariadb_deploy_prep_params - name: Run mariadb_deploy_prep retries: "{{ make_mariadb_deploy_prep_retries | default(omit) }}" delay: "{{ make_mariadb_deploy_prep_delay | default(omit) }}" until: "{{ make_mariadb_deploy_prep_until | default(true) }}" register: "make_mariadb_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_deploy_prep" dry_run: "{{ make_mariadb_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_deploy_prep_env|default({})), **(make_mariadb_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000167315117130422033330 0ustar zuulzuul--- - name: Debug make_mariadb_deploy_env when: make_mariadb_deploy_env is defined ansible.builtin.debug: var: make_mariadb_deploy_env - name: Debug make_mariadb_deploy_params when: make_mariadb_deploy_params is defined ansible.builtin.debug: var: make_mariadb_deploy_params - name: Run mariadb_deploy retries: "{{ make_mariadb_deploy_retries | default(omit) }}" delay: "{{ make_mariadb_deploy_delay | default(omit) }}" until: "{{ make_mariadb_deploy_until | default(true) }}" register: "make_mariadb_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_deploy" dry_run: "{{ make_mariadb_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_deploy_env|default({})), **(make_mariadb_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016700000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000206315117130422033322 0ustar zuulzuul--- - name: Debug make_mariadb_deploy_cleanup_env when: make_mariadb_deploy_cleanup_env is defined ansible.builtin.debug: var: make_mariadb_deploy_cleanup_env - name: Debug make_mariadb_deploy_cleanup_params when: make_mariadb_deploy_cleanup_params is defined ansible.builtin.debug: var: make_mariadb_deploy_cleanup_params - name: Run mariadb_deploy_cleanup retries: "{{ make_mariadb_deploy_cleanup_retries | default(omit) }}" delay: "{{ 
make_mariadb_deploy_cleanup_delay | default(omit) }}" until: "{{ make_mariadb_deploy_cleanup_until | default(true) }}" register: "make_mariadb_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_deploy_cleanup" dry_run: "{{ make_mariadb_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_deploy_cleanup_env|default({})), **(make_mariadb_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000167315117130422033356 0ustar zuulzuul--- - name: Debug make_placement_prep_env when: make_placement_prep_env is defined ansible.builtin.debug: var: make_placement_prep_env - name: Debug make_placement_prep_params when: make_placement_prep_params is defined ansible.builtin.debug: var: make_placement_prep_params - name: Run placement_prep retries: "{{ make_placement_prep_retries | default(omit) }}" delay: "{{ make_placement_prep_delay | default(omit) }}" until: "{{ make_placement_prep_until | default(true) }}" register: "make_placement_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_prep" dry_run: "{{ make_placement_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_prep_env|default({})), **(make_placement_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000156015117130422033351 0ustar zuulzuul--- - name: Debug make_placement_env when: make_placement_env is defined ansible.builtin.debug: var: make_placement_env - name: Debug make_placement_params when: make_placement_params is defined ansible.builtin.debug: var: make_placement_params - name: Run placement retries: "{{ make_placement_retries | default(omit) }}" delay: "{{ make_placement_delay | default(omit) }}" until: "{{ make_placement_until | default(true) }}" register: "make_placement_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement" dry_run: "{{ make_placement_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_env|default({})), **(make_placement_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000175015117130422033352 0ustar zuulzuul--- - name: Debug make_placement_cleanup_env when: make_placement_cleanup_env is defined ansible.builtin.debug: var: make_placement_cleanup_env - name: Debug make_placement_cleanup_params when: make_placement_cleanup_params is defined ansible.builtin.debug: var: make_placement_cleanup_params - name: Run placement_cleanup retries: "{{ make_placement_cleanup_retries | default(omit) }}" delay: "{{ make_placement_cleanup_delay | default(omit) }}" until: "{{ make_placement_cleanup_until | default(true) }}" register: 
"make_placement_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_cleanup" dry_run: "{{ make_placement_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_cleanup_env|default({})), **(make_placement_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000204415117130422033347 0ustar zuulzuul--- - name: Debug make_placement_deploy_prep_env when: make_placement_deploy_prep_env is defined ansible.builtin.debug: var: make_placement_deploy_prep_env - name: Debug make_placement_deploy_prep_params when: make_placement_deploy_prep_params is defined ansible.builtin.debug: var: make_placement_deploy_prep_params - name: Run placement_deploy_prep retries: "{{ make_placement_deploy_prep_retries | default(omit) }}" delay: "{{ make_placement_deploy_prep_delay | default(omit) }}" until: "{{ make_placement_deploy_prep_until | default(true) }}" register: "make_placement_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_deploy_prep" dry_run: "{{ make_placement_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_deploy_prep_env|default({})), **(make_placement_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000173115117130422033351 0ustar zuulzuul--- - name: Debug make_placement_deploy_env when: make_placement_deploy_env is defined ansible.builtin.debug: var: make_placement_deploy_env - name: Debug make_placement_deploy_params when: make_placement_deploy_params is defined ansible.builtin.debug: var: make_placement_deploy_params - name: Run placement_deploy retries: "{{ make_placement_deploy_retries | default(omit) }}" delay: "{{ make_placement_deploy_delay | default(omit) }}" until: "{{ make_placement_deploy_until | default(true) }}" register: "make_placement_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_deploy" dry_run: "{{ make_placement_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_deploy_env|default({})), **(make_placement_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placemen0000644000175000017500000000212115117130422033343 0ustar zuulzuul--- - name: Debug make_placement_deploy_cleanup_env when: make_placement_deploy_cleanup_env is defined ansible.builtin.debug: var: make_placement_deploy_cleanup_env - name: Debug make_placement_deploy_cleanup_params when: make_placement_deploy_cleanup_params is defined ansible.builtin.debug: var: make_placement_deploy_cleanup_params - name: Run placement_deploy_cleanup retries: "{{ 
make_placement_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_placement_deploy_cleanup_delay | default(omit) }}" until: "{{ make_placement_deploy_cleanup_until | default(true) }}" register: "make_placement_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make placement_deploy_cleanup" dry_run: "{{ make_placement_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_placement_deploy_cleanup_env|default({})), **(make_placement_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_p0000644000175000017500000000161615117130422033337 0ustar zuulzuul--- - name: Debug make_glance_prep_env when: make_glance_prep_env is defined ansible.builtin.debug: var: make_glance_prep_env - name: Debug make_glance_prep_params when: make_glance_prep_params is defined ansible.builtin.debug: var: make_glance_prep_params - name: Run glance_prep retries: "{{ make_glance_prep_retries | default(omit) }}" delay: "{{ make_glance_prep_delay | default(omit) }}" until: "{{ make_glance_prep_until | default(true) }}" register: "make_glance_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_prep" dry_run: "{{ make_glance_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_prep_env|default({})), **(make_glance_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000014700000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance.y0000644000175000017500000000150315117130422033262 0ustar zuulzuul--- - name: Debug make_glance_env when: make_glance_env is defined ansible.builtin.debug: var: make_glance_env - name: Debug make_glance_params when: make_glance_params is defined ansible.builtin.debug: var: make_glance_params - name: Run glance retries: "{{ make_glance_retries | default(omit) }}" delay: "{{ make_glance_delay | default(omit) }}" until: "{{ make_glance_until | default(true) }}" register: "make_glance_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance" dry_run: "{{ make_glance_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_env|default({})), **(make_glance_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_c0000644000175000017500000000167315117130422033325 0ustar zuulzuul--- - name: Debug make_glance_cleanup_env when: make_glance_cleanup_env is defined ansible.builtin.debug: var: make_glance_cleanup_env - name: Debug make_glance_cleanup_params when: make_glance_cleanup_params is defined ansible.builtin.debug: var: make_glance_cleanup_params - name: Run glance_cleanup retries: "{{ make_glance_cleanup_retries | default(omit) }}" delay: "{{ make_glance_cleanup_delay | default(omit) }}" until: "{{ make_glance_cleanup_until | default(true) }}" register: "make_glance_cleanup_status" cifmw.general.ci_script: output_dir: "{{ 
cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_cleanup" dry_run: "{{ make_glance_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_cleanup_env|default({})), **(make_glance_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_d0000644000175000017500000000176715117130422033332 0ustar zuulzuul--- - name: Debug make_glance_deploy_prep_env when: make_glance_deploy_prep_env is defined ansible.builtin.debug: var: make_glance_deploy_prep_env - name: Debug make_glance_deploy_prep_params when: make_glance_deploy_prep_params is defined ansible.builtin.debug: var: make_glance_deploy_prep_params - name: Run glance_deploy_prep retries: "{{ make_glance_deploy_prep_retries | default(omit) }}" delay: "{{ make_glance_deploy_prep_delay | default(omit) }}" until: "{{ make_glance_deploy_prep_until | default(true) }}" register: "make_glance_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_deploy_prep" dry_run: "{{ make_glance_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_deploy_prep_env|default({})), **(make_glance_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_d0000644000175000017500000000165415117130422033325 0ustar zuulzuul--- - name: Debug make_glance_deploy_env when: make_glance_deploy_env is defined ansible.builtin.debug: var: make_glance_deploy_env - name: Debug make_glance_deploy_params when: make_glance_deploy_params is defined ansible.builtin.debug: var: make_glance_deploy_params - name: Run glance_deploy retries: "{{ make_glance_deploy_retries | default(omit) }}" delay: "{{ make_glance_deploy_delay | default(omit) }}" until: "{{ make_glance_deploy_until | default(true) }}" register: "make_glance_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_deploy" dry_run: "{{ make_glance_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_deploy_env|default({})), **(make_glance_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_d0000644000175000017500000000204415117130422033317 0ustar zuulzuul--- - name: Debug make_glance_deploy_cleanup_env when: make_glance_deploy_cleanup_env is defined ansible.builtin.debug: var: make_glance_deploy_cleanup_env - name: Debug make_glance_deploy_cleanup_params when: make_glance_deploy_cleanup_params is defined ansible.builtin.debug: var: make_glance_deploy_cleanup_params - name: Run glance_deploy_cleanup retries: "{{ make_glance_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_glance_deploy_cleanup_delay 
| default(omit) }}" until: "{{ make_glance_deploy_cleanup_until | default(true) }}" register: "make_glance_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make glance_deploy_cleanup" dry_run: "{{ make_glance_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_glance_deploy_cleanup_env|default({})), **(make_glance_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015100000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_prep0000644000175000017500000000154115117130422033414 0ustar zuulzuul--- - name: Debug make_ovn_prep_env when: make_ovn_prep_env is defined ansible.builtin.debug: var: make_ovn_prep_env - name: Debug make_ovn_prep_params when: make_ovn_prep_params is defined ansible.builtin.debug: var: make_ovn_prep_params - name: Run ovn_prep retries: "{{ make_ovn_prep_retries | default(omit) }}" delay: "{{ make_ovn_prep_delay | default(omit) }}" until: "{{ make_ovn_prep_until | default(true) }}" register: "make_ovn_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_prep" dry_run: "{{ make_ovn_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_prep_env|default({})), **(make_ovn_prep_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn.yml0000644000175000017500000000142615117130422033170 0ustar zuulzuul--- - name: Debug make_ovn_env when: make_ovn_env is defined ansible.builtin.debug: var: make_ovn_env - name: Debug 
make_ovn_params when: make_ovn_params is defined ansible.builtin.debug: var: make_ovn_params - name: Run ovn retries: "{{ make_ovn_retries | default(omit) }}" delay: "{{ make_ovn_delay | default(omit) }}" until: "{{ make_ovn_until | default(true) }}" register: "make_ovn_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn" dry_run: "{{ make_ovn_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_env|default({})), **(make_ovn_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_clea0000644000175000017500000000161615117130422033355 0ustar zuulzuul--- - name: Debug make_ovn_cleanup_env when: make_ovn_cleanup_env is defined ansible.builtin.debug: var: make_ovn_cleanup_env - name: Debug make_ovn_cleanup_params when: make_ovn_cleanup_params is defined ansible.builtin.debug: var: make_ovn_cleanup_params - name: Run ovn_cleanup retries: "{{ make_ovn_cleanup_retries | default(omit) }}" delay: "{{ make_ovn_cleanup_delay | default(omit) }}" until: "{{ make_ovn_cleanup_until | default(true) }}" register: "make_ovn_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_cleanup" dry_run: "{{ make_ovn_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_cleanup_env|default({})), **(make_ovn_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_depl0000644000175000017500000000171215117130422033372 0ustar zuulzuul--- - name: Debug make_ovn_deploy_prep_env when: make_ovn_deploy_prep_env is defined ansible.builtin.debug: var: make_ovn_deploy_prep_env - name: Debug make_ovn_deploy_prep_params when: make_ovn_deploy_prep_params is defined ansible.builtin.debug: var: make_ovn_deploy_prep_params - name: Run ovn_deploy_prep retries: "{{ make_ovn_deploy_prep_retries | default(omit) }}" delay: "{{ make_ovn_deploy_prep_delay | default(omit) }}" until: "{{ make_ovn_deploy_prep_until | default(true) }}" register: "make_ovn_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_deploy_prep" dry_run: "{{ make_ovn_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_deploy_prep_env|default({})), **(make_ovn_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_depl0000644000175000017500000000157715117130422033403 0ustar zuulzuul--- - name: Debug make_ovn_deploy_env when: make_ovn_deploy_env is defined ansible.builtin.debug: var: make_ovn_deploy_env - name: Debug make_ovn_deploy_params when: make_ovn_deploy_params is defined ansible.builtin.debug: var: make_ovn_deploy_params - name: Run ovn_deploy retries: "{{ make_ovn_deploy_retries | default(omit) }}" delay: "{{ make_ovn_deploy_delay | default(omit) }}" until: "{{ make_ovn_deploy_until | default(true) }}" 
register: "make_ovn_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_deploy" dry_run: "{{ make_ovn_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_deploy_env|default({})), **(make_ovn_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_depl0000644000175000017500000000176715117130422033404 0ustar zuulzuul--- - name: Debug make_ovn_deploy_cleanup_env when: make_ovn_deploy_cleanup_env is defined ansible.builtin.debug: var: make_ovn_deploy_cleanup_env - name: Debug make_ovn_deploy_cleanup_params when: make_ovn_deploy_cleanup_params is defined ansible.builtin.debug: var: make_ovn_deploy_cleanup_params - name: Run ovn_deploy_cleanup retries: "{{ make_ovn_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_ovn_deploy_cleanup_delay | default(omit) }}" until: "{{ make_ovn_deploy_cleanup_until | default(true) }}" register: "make_ovn_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ovn_deploy_cleanup" dry_run: "{{ make_ovn_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ovn_deploy_cleanup_env|default({})), **(make_ovn_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar 
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_prep.yml
---
- name: Debug make_neutron_prep_env
  when: make_neutron_prep_env is defined
  ansible.builtin.debug:
    var: make_neutron_prep_env
- name: Debug make_neutron_prep_params
  when: make_neutron_prep_params is defined
  ansible.builtin.debug:
    var: make_neutron_prep_params
- name: Run neutron_prep
  retries: "{{ make_neutron_prep_retries | default(omit) }}"
  delay: "{{ make_neutron_prep_delay | default(omit) }}"
  until: "{{ make_neutron_prep_until | default(true) }}"
  register: "make_neutron_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make neutron_prep"
    dry_run: "{{ make_neutron_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_neutron_prep_env|default({})), **(make_neutron_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron.yml
---
- name: Debug make_neutron_env
  when: make_neutron_env is defined
  ansible.builtin.debug:
    var: make_neutron_env
- name: Debug make_neutron_params
  when: make_neutron_params is defined
  ansible.builtin.debug:
    var: make_neutron_params
- name: Run neutron
  retries: "{{ make_neutron_retries | default(omit) }}"
  delay: "{{ make_neutron_delay | default(omit) }}"
  until: "{{ make_neutron_until | default(true) }}"
  register: "make_neutron_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make neutron"
    dry_run: "{{ make_neutron_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_neutron_env|default({})), **(make_neutron_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_cleanup.yml
---
- name: Debug make_neutron_cleanup_env
  when: make_neutron_cleanup_env is defined
  ansible.builtin.debug:
    var: make_neutron_cleanup_env
- name: Debug make_neutron_cleanup_params
  when: make_neutron_cleanup_params is defined
  ansible.builtin.debug:
    var: make_neutron_cleanup_params
- name: Run neutron_cleanup
  retries: "{{ make_neutron_cleanup_retries | default(omit) }}"
  delay: "{{ make_neutron_cleanup_delay | default(omit) }}"
  until: "{{ make_neutron_cleanup_until | default(true) }}"
  register: "make_neutron_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make neutron_cleanup"
    dry_run: "{{ make_neutron_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_neutron_cleanup_env|default({})), **(make_neutron_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_deploy_prep.yml
---
- name: Debug make_neutron_deploy_prep_env
  when: make_neutron_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_neutron_deploy_prep_env
- name: Debug make_neutron_deploy_prep_params
  when: make_neutron_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_neutron_deploy_prep_params
- name: Run neutron_deploy_prep
  retries: "{{ make_neutron_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_neutron_deploy_prep_delay | default(omit) }}"
  until: "{{ make_neutron_deploy_prep_until | default(true) }}"
  register: "make_neutron_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make neutron_deploy_prep"
    dry_run: "{{ make_neutron_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_neutron_deploy_prep_env|default({})), **(make_neutron_deploy_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_deploy.yml
---
- name: Debug make_neutron_deploy_env
  when: make_neutron_deploy_env is defined
  ansible.builtin.debug:
    var: make_neutron_deploy_env
- name: Debug make_neutron_deploy_params
  when: make_neutron_deploy_params is defined
  ansible.builtin.debug:
    var: make_neutron_deploy_params
- name: Run neutron_deploy
  retries: "{{ make_neutron_deploy_retries | default(omit) }}"
  delay: "{{ make_neutron_deploy_delay | default(omit) }}"
  until: "{{ make_neutron_deploy_until | default(true) }}"
  register: "make_neutron_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make neutron_deploy"
    dry_run: "{{ make_neutron_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_neutron_deploy_env|default({})), **(make_neutron_deploy_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_deploy_cleanup.yml
---
- name: Debug make_neutron_deploy_cleanup_env
  when: make_neutron_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_neutron_deploy_cleanup_env
- name: Debug make_neutron_deploy_cleanup_params
  when: make_neutron_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_neutron_deploy_cleanup_params
- name: Run neutron_deploy_cleanup
  retries: "{{ make_neutron_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_neutron_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_neutron_deploy_cleanup_until | default(true) }}"
  register: "make_neutron_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make neutron_deploy_cleanup"
    dry_run: "{{ make_neutron_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_neutron_deploy_cleanup_env|default({})), **(make_neutron_deploy_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_prep.yml
---
- name: Debug make_cinder_prep_env
  when: make_cinder_prep_env is defined
  ansible.builtin.debug:
    var: make_cinder_prep_env
- name: Debug make_cinder_prep_params
  when: make_cinder_prep_params is defined
  ansible.builtin.debug:
    var: make_cinder_prep_params
- name: Run cinder_prep
  retries: "{{ make_cinder_prep_retries | default(omit) }}"
  delay: "{{ make_cinder_prep_delay | default(omit) }}"
  until: "{{ make_cinder_prep_until | default(true) }}"
  register: "make_cinder_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make cinder_prep"
    dry_run: "{{ make_cinder_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_cinder_prep_env|default({})), **(make_cinder_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder.yml
---
- name: Debug make_cinder_env
  when: make_cinder_env is defined
  ansible.builtin.debug:
    var: make_cinder_env
- name: Debug make_cinder_params
  when: make_cinder_params is defined
  ansible.builtin.debug:
    var: make_cinder_params
- name: Run cinder
  retries: "{{ make_cinder_retries | default(omit) }}"
  delay: "{{ make_cinder_delay | default(omit) }}"
  until: "{{ make_cinder_until | default(true) }}"
  register: "make_cinder_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make cinder"
    dry_run: "{{ make_cinder_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_cinder_env|default({})), **(make_cinder_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_cleanup.yml
---
- name: Debug make_cinder_cleanup_env
  when: make_cinder_cleanup_env is defined
  ansible.builtin.debug:
    var: make_cinder_cleanup_env
- name: Debug make_cinder_cleanup_params
  when: make_cinder_cleanup_params is defined
  ansible.builtin.debug:
    var: make_cinder_cleanup_params
- name: Run cinder_cleanup
  retries: "{{ make_cinder_cleanup_retries | default(omit) }}"
  delay: "{{ make_cinder_cleanup_delay | default(omit) }}"
  until: "{{ make_cinder_cleanup_until | default(true) }}"
  register: "make_cinder_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make cinder_cleanup"
    dry_run: "{{ make_cinder_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_cinder_cleanup_env|default({})), **(make_cinder_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_deploy_prep.yml
---
- name: Debug make_cinder_deploy_prep_env
  when: make_cinder_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_cinder_deploy_prep_env
- name: Debug make_cinder_deploy_prep_params
  when: make_cinder_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_cinder_deploy_prep_params
- name: Run cinder_deploy_prep
  retries: "{{ make_cinder_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_cinder_deploy_prep_delay | default(omit) }}"
  until: "{{ make_cinder_deploy_prep_until | default(true) }}"
  register: "make_cinder_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make cinder_deploy_prep"
    dry_run: "{{ make_cinder_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_cinder_deploy_prep_env|default({})), **(make_cinder_deploy_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_deploy.yml
---
- name: Debug make_cinder_deploy_env
  when: make_cinder_deploy_env is defined
  ansible.builtin.debug:
    var: make_cinder_deploy_env
- name: Debug make_cinder_deploy_params
  when: make_cinder_deploy_params is defined
  ansible.builtin.debug:
    var: make_cinder_deploy_params
- name: Run cinder_deploy
  retries: "{{ make_cinder_deploy_retries | default(omit) }}"
  delay: "{{ make_cinder_deploy_delay | default(omit) }}"
  until: "{{ make_cinder_deploy_until | default(true) }}"
  register: "make_cinder_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make cinder_deploy"
    dry_run: "{{ make_cinder_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_cinder_deploy_env|default({})), **(make_cinder_deploy_params|default({}))) }}"
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_deploy_cleanup.yml
---
- name: Debug make_cinder_deploy_cleanup_env
  when: make_cinder_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_cinder_deploy_cleanup_env
- name: Debug make_cinder_deploy_cleanup_params
  when: make_cinder_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_cinder_deploy_cleanup_params
- name: Run cinder_deploy_cleanup
  retries: "{{ make_cinder_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_cinder_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_cinder_deploy_cleanup_until | default(true) }}"
  register: "make_cinder_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make cinder_deploy_cleanup"
    dry_run: "{{ make_cinder_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_cinder_deploy_cleanup_env|default({})), **(make_cinder_deploy_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq_prep.yml
---
- name: Debug make_rabbitmq_prep_env
  when: make_rabbitmq_prep_env is defined
  ansible.builtin.debug:
    var: make_rabbitmq_prep_env
- name: Debug make_rabbitmq_prep_params
  when: make_rabbitmq_prep_params is defined
  ansible.builtin.debug:
    var: make_rabbitmq_prep_params
- name: Run rabbitmq_prep
  retries: "{{ make_rabbitmq_prep_retries | default(omit) }}"
  delay: "{{ make_rabbitmq_prep_delay | default(omit) }}"
  until: "{{ make_rabbitmq_prep_until | default(true) }}"
  register: "make_rabbitmq_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make rabbitmq_prep"
    dry_run: "{{ make_rabbitmq_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_rabbitmq_prep_env|default({})), **(make_rabbitmq_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq.yml
---
- name: Debug make_rabbitmq_env
  when: make_rabbitmq_env is defined
  ansible.builtin.debug:
    var: make_rabbitmq_env
- name: Debug make_rabbitmq_params
  when: make_rabbitmq_params is defined
  ansible.builtin.debug:
    var: make_rabbitmq_params
- name: Run rabbitmq
  retries: "{{ make_rabbitmq_retries | default(omit) }}"
  delay: "{{ make_rabbitmq_delay | default(omit) }}"
  until: "{{ make_rabbitmq_until | default(true) }}"
  register: "make_rabbitmq_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make rabbitmq"
    dry_run: "{{ make_rabbitmq_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_rabbitmq_env|default({})), **(make_rabbitmq_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq_cleanup.yml
---
- name: Debug make_rabbitmq_cleanup_env
  when: make_rabbitmq_cleanup_env is defined
  ansible.builtin.debug:
    var: make_rabbitmq_cleanup_env
- name: Debug make_rabbitmq_cleanup_params
  when: make_rabbitmq_cleanup_params is defined
  ansible.builtin.debug:
    var: make_rabbitmq_cleanup_params
- name: Run rabbitmq_cleanup
  retries: "{{ make_rabbitmq_cleanup_retries | default(omit) }}"
  delay: "{{ make_rabbitmq_cleanup_delay | default(omit) }}"
  until: "{{ make_rabbitmq_cleanup_until | default(true) }}"
  register: "make_rabbitmq_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make rabbitmq_cleanup"
    dry_run: "{{ make_rabbitmq_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_rabbitmq_cleanup_env|default({})), **(make_rabbitmq_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq_deploy_prep.yml
---
- name: Debug make_rabbitmq_deploy_prep_env
  when: make_rabbitmq_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_rabbitmq_deploy_prep_env
- name: Debug make_rabbitmq_deploy_prep_params
  when: make_rabbitmq_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_rabbitmq_deploy_prep_params
- name: Run rabbitmq_deploy_prep
  retries: "{{ make_rabbitmq_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_rabbitmq_deploy_prep_delay | default(omit) }}"
  until: "{{ make_rabbitmq_deploy_prep_until | default(true) }}"
  register: "make_rabbitmq_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make rabbitmq_deploy_prep"
    dry_run: "{{ make_rabbitmq_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_rabbitmq_deploy_prep_env|default({})), **(make_rabbitmq_deploy_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq_deploy.yml
---
- name: Debug make_rabbitmq_deploy_env
  when: make_rabbitmq_deploy_env is defined
  ansible.builtin.debug:
    var: make_rabbitmq_deploy_env
- name: Debug make_rabbitmq_deploy_params
  when: make_rabbitmq_deploy_params is defined
  ansible.builtin.debug:
    var: make_rabbitmq_deploy_params
- name: Run rabbitmq_deploy
  retries: "{{ make_rabbitmq_deploy_retries | default(omit) }}"
  delay: "{{ make_rabbitmq_deploy_delay | default(omit) }}"
  until: "{{ make_rabbitmq_deploy_until | default(true) }}"
  register: "make_rabbitmq_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make rabbitmq_deploy"
    dry_run: "{{ make_rabbitmq_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_rabbitmq_deploy_env|default({})), **(make_rabbitmq_deploy_params|default({}))) }}"
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rabbitmq_deploy_cleanup.yml
---
- name: Debug make_rabbitmq_deploy_cleanup_env
  when: make_rabbitmq_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_rabbitmq_deploy_cleanup_env
- name: Debug make_rabbitmq_deploy_cleanup_params
  when: make_rabbitmq_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_rabbitmq_deploy_cleanup_params
- name: Run rabbitmq_deploy_cleanup
  retries: "{{ make_rabbitmq_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_rabbitmq_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_rabbitmq_deploy_cleanup_until | default(true) }}"
  register: "make_rabbitmq_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make rabbitmq_deploy_cleanup"
    dry_run: "{{ make_rabbitmq_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_rabbitmq_deploy_cleanup_env|default({})), **(make_rabbitmq_deploy_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_prep.yml
---
- name: Debug make_ironic_prep_env
  when: make_ironic_prep_env is defined
  ansible.builtin.debug:
    var: make_ironic_prep_env
- name: Debug make_ironic_prep_params
  when: make_ironic_prep_params is defined
  ansible.builtin.debug:
    var: make_ironic_prep_params
- name: Run ironic_prep
  retries: "{{ make_ironic_prep_retries | default(omit) }}"
  delay: "{{ make_ironic_prep_delay | default(omit) }}"
  until: "{{ make_ironic_prep_until | default(true) }}"
  register: "make_ironic_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ironic_prep"
    dry_run: "{{ make_ironic_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ironic_prep_env|default({})), **(make_ironic_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic.yml
---
- name: Debug make_ironic_env
  when: make_ironic_env is defined
  ansible.builtin.debug:
    var: make_ironic_env
- name: Debug make_ironic_params
  when: make_ironic_params is defined
  ansible.builtin.debug:
    var: make_ironic_params
- name: Run ironic
  retries: "{{ make_ironic_retries | default(omit) }}"
  delay: "{{ make_ironic_delay | default(omit) }}"
  until: "{{ make_ironic_until | default(true) }}"
  register: "make_ironic_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ironic"
    dry_run: "{{ make_ironic_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ironic_env|default({})), **(make_ironic_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_cleanup.yml
---
- name: Debug make_ironic_cleanup_env
  when: make_ironic_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ironic_cleanup_env
- name: Debug make_ironic_cleanup_params
  when: make_ironic_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ironic_cleanup_params
- name: Run ironic_cleanup
  retries: "{{ make_ironic_cleanup_retries | default(omit) }}"
  delay: "{{ make_ironic_cleanup_delay | default(omit) }}"
  until: "{{ make_ironic_cleanup_until | default(true) }}"
  register: "make_ironic_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ironic_cleanup"
    dry_run: "{{ make_ironic_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ironic_cleanup_env|default({})), **(make_ironic_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_deploy_prep.yml
---
- name: Debug make_ironic_deploy_prep_env
  when: make_ironic_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_ironic_deploy_prep_env
- name: Debug make_ironic_deploy_prep_params
  when: make_ironic_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_ironic_deploy_prep_params
- name: Run ironic_deploy_prep
  retries: "{{ make_ironic_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_ironic_deploy_prep_delay | default(omit) }}"
  until: "{{ make_ironic_deploy_prep_until | default(true) }}"
  register: "make_ironic_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ironic_deploy_prep"
    dry_run: "{{ make_ironic_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ironic_deploy_prep_env|default({})), **(make_ironic_deploy_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_deploy.yml
---
- name: Debug make_ironic_deploy_env
  when: make_ironic_deploy_env is defined
  ansible.builtin.debug:
    var: make_ironic_deploy_env
- name: Debug make_ironic_deploy_params
  when: make_ironic_deploy_params is defined
  ansible.builtin.debug:
    var: make_ironic_deploy_params
- name: Run ironic_deploy
  retries: "{{ make_ironic_deploy_retries | default(omit) }}"
  delay: "{{ make_ironic_deploy_delay | default(omit) }}"
  until: "{{ make_ironic_deploy_until | default(true) }}"
  register: "make_ironic_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ironic_deploy"
    dry_run: "{{ make_ironic_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ironic_deploy_env|default({})), **(make_ironic_deploy_params|default({}))) }}"
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_deploy_cleanup.yml
---
- name: Debug make_ironic_deploy_cleanup_env
  when: make_ironic_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ironic_deploy_cleanup_env
- name: Debug make_ironic_deploy_cleanup_params
  when: make_ironic_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ironic_deploy_cleanup_params
- name: Run ironic_deploy_cleanup
  retries: "{{ make_ironic_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_ironic_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_ironic_deploy_cleanup_until | default(true) }}"
  register: "make_ironic_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ironic_deploy_cleanup"
    dry_run: "{{ make_ironic_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ironic_deploy_cleanup_env|default({})), **(make_ironic_deploy_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_prep.yml
---
- name: Debug make_octavia_prep_env
  when: make_octavia_prep_env is defined
  ansible.builtin.debug:
    var: make_octavia_prep_env
- name: Debug make_octavia_prep_params
  when: make_octavia_prep_params is defined
  ansible.builtin.debug:
    var: make_octavia_prep_params
- name: Run octavia_prep
  retries: "{{ make_octavia_prep_retries | default(omit) }}"
  delay: "{{ make_octavia_prep_delay | default(omit) }}"
  until: "{{ make_octavia_prep_until | default(true) }}"
  register: "make_octavia_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make octavia_prep"
    dry_run: "{{ make_octavia_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_octavia_prep_env|default({})), **(make_octavia_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia.yml
---
- name: Debug make_octavia_env
  when: make_octavia_env is defined
  ansible.builtin.debug:
    var: make_octavia_env
- name: Debug make_octavia_params
  when: make_octavia_params is defined
  ansible.builtin.debug:
    var: make_octavia_params
- name: Run octavia
  retries: "{{ make_octavia_retries | default(omit) }}"
  delay: "{{ make_octavia_delay | default(omit) }}"
  until: "{{ make_octavia_until | default(true) }}"
  register: "make_octavia_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make octavia"
    dry_run: "{{ make_octavia_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_octavia_env|default({})), **(make_octavia_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_cleanup.yml
---
- name: Debug make_octavia_cleanup_env
  when: make_octavia_cleanup_env is defined
  ansible.builtin.debug:
    var: make_octavia_cleanup_env
- name: Debug make_octavia_cleanup_params
  when: make_octavia_cleanup_params is defined
  ansible.builtin.debug:
    var: make_octavia_cleanup_params
- name: Run octavia_cleanup
  retries: "{{ make_octavia_cleanup_retries | default(omit) }}"
  delay: "{{ make_octavia_cleanup_delay | default(omit) }}"
  until: "{{ make_octavia_cleanup_until | default(true) }}"
  register: "make_octavia_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make octavia_cleanup"
    dry_run: "{{ make_octavia_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_octavia_cleanup_env|default({})), **(make_octavia_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_deploy_prep.yml
---
- name: Debug make_octavia_deploy_prep_env
  when: make_octavia_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_octavia_deploy_prep_env
- name: Debug make_octavia_deploy_prep_params
  when: make_octavia_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_octavia_deploy_prep_params
- name: Run octavia_deploy_prep
  retries: "{{ make_octavia_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_octavia_deploy_prep_delay | default(omit) }}"
  until: "{{ make_octavia_deploy_prep_until | default(true) }}"
  register: "make_octavia_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make octavia_deploy_prep"
    dry_run: "{{ make_octavia_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_octavia_deploy_prep_env|default({})), **(make_octavia_deploy_prep_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_deploy.yml
---
- name: Debug make_octavia_deploy_env
  when: make_octavia_deploy_env is defined
  ansible.builtin.debug:
    var: make_octavia_deploy_env
- name: Debug make_octavia_deploy_params
  when: make_octavia_deploy_params is defined
  ansible.builtin.debug:
    var: make_octavia_deploy_params
- name: Run octavia_deploy
  retries: "{{ make_octavia_deploy_retries | default(omit) }}"
  delay: "{{ make_octavia_deploy_delay | default(omit) }}"
  until: "{{ make_octavia_deploy_until | default(true) }}"
  register: "make_octavia_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make octavia_deploy"
    dry_run: "{{ make_octavia_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_octavia_deploy_env|default({})), **(make_octavia_deploy_params|default({}))) }}"
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_0000644000175000017500000000206315117130422033351 0ustar zuulzuul--- - name: Debug make_octavia_deploy_cleanup_env when: make_octavia_deploy_cleanup_env is defined ansible.builtin.debug: var: make_octavia_deploy_cleanup_env - name: Debug make_octavia_deploy_cleanup_params when: make_octavia_deploy_cleanup_params is defined ansible.builtin.debug: var: make_octavia_deploy_cleanup_params - name: Run octavia_deploy_cleanup retries: "{{ make_octavia_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_octavia_deploy_cleanup_delay | default(omit) }}" until: "{{ make_octavia_deploy_cleanup_until | default(true) }}" register: "make_octavia_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make octavia_deploy_cleanup" dry_run: "{{ make_octavia_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_octavia_deploy_cleanup_env|default({})), **(make_octavia_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000167315117130422033370 0ustar zuulzuul--- - name: Debug make_designate_prep_env when: make_designate_prep_env is defined ansible.builtin.debug: var: make_designate_prep_env - name: Debug make_designate_prep_params when: make_designate_prep_params is defined ansible.builtin.debug: var: make_designate_prep_params - name: Run designate_prep retries: "{{ 
make_designate_prep_retries | default(omit) }}" delay: "{{ make_designate_prep_delay | default(omit) }}" until: "{{ make_designate_prep_until | default(true) }}" register: "make_designate_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate_prep" dry_run: "{{ make_designate_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_prep_env|default({})), **(make_designate_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000156015117130422033363 0ustar zuulzuul--- - name: Debug make_designate_env when: make_designate_env is defined ansible.builtin.debug: var: make_designate_env - name: Debug make_designate_params when: make_designate_params is defined ansible.builtin.debug: var: make_designate_params - name: Run designate retries: "{{ make_designate_retries | default(omit) }}" delay: "{{ make_designate_delay | default(omit) }}" until: "{{ make_designate_until | default(true) }}" register: "make_designate_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate" dry_run: "{{ make_designate_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_env|default({})), **(make_designate_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000175015117130422033364 0ustar zuulzuul--- - name: Debug make_designate_cleanup_env when: make_designate_cleanup_env is defined ansible.builtin.debug: var: make_designate_cleanup_env - name: Debug make_designate_cleanup_params when: make_designate_cleanup_params is defined ansible.builtin.debug: var: make_designate_cleanup_params - name: Run designate_cleanup retries: "{{ make_designate_cleanup_retries | default(omit) }}" delay: "{{ make_designate_cleanup_delay | default(omit) }}" until: "{{ make_designate_cleanup_until | default(true) }}" register: "make_designate_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate_cleanup" dry_run: "{{ make_designate_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_cleanup_env|default({})), **(make_designate_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000204415117130422033361 0ustar zuulzuul--- - name: Debug make_designate_deploy_prep_env when: make_designate_deploy_prep_env is defined ansible.builtin.debug: var: make_designate_deploy_prep_env - name: Debug make_designate_deploy_prep_params when: make_designate_deploy_prep_params is defined ansible.builtin.debug: var: make_designate_deploy_prep_params - name: Run designate_deploy_prep retries: "{{ make_designate_deploy_prep_retries 
| default(omit) }}" delay: "{{ make_designate_deploy_prep_delay | default(omit) }}" until: "{{ make_designate_deploy_prep_until | default(true) }}" register: "make_designate_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate_deploy_prep" dry_run: "{{ make_designate_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_deploy_prep_env|default({})), **(make_designate_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000173115117130422033363 0ustar zuulzuul--- - name: Debug make_designate_deploy_env when: make_designate_deploy_env is defined ansible.builtin.debug: var: make_designate_deploy_env - name: Debug make_designate_deploy_params when: make_designate_deploy_params is defined ansible.builtin.debug: var: make_designate_deploy_params - name: Run designate_deploy retries: "{{ make_designate_deploy_retries | default(omit) }}" delay: "{{ make_designate_deploy_delay | default(omit) }}" until: "{{ make_designate_deploy_until | default(true) }}" register: "make_designate_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate_deploy" dry_run: "{{ make_designate_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_deploy_env|default({})), **(make_designate_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designat0000644000175000017500000000212115117130422033355 0ustar zuulzuul--- - name: Debug make_designate_deploy_cleanup_env when: make_designate_deploy_cleanup_env is defined ansible.builtin.debug: var: make_designate_deploy_cleanup_env - name: Debug make_designate_deploy_cleanup_params when: make_designate_deploy_cleanup_params is defined ansible.builtin.debug: var: make_designate_deploy_cleanup_params - name: Run designate_deploy_cleanup retries: "{{ make_designate_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_designate_deploy_cleanup_delay | default(omit) }}" until: "{{ make_designate_deploy_cleanup_until | default(true) }}" register: "make_designate_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make designate_deploy_cleanup" dry_run: "{{ make_designate_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_designate_deploy_cleanup_env|default({})), **(make_designate_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_pre0000644000175000017500000000156015117130422033376 0ustar zuulzuul--- - name: Debug make_nova_prep_env when: make_nova_prep_env is defined ansible.builtin.debug: var: make_nova_prep_env - name: Debug make_nova_prep_params when: make_nova_prep_params is defined ansible.builtin.debug: var: make_nova_prep_params - name: Run nova_prep retries: "{{ 
make_nova_prep_retries | default(omit) }}" delay: "{{ make_nova_prep_delay | default(omit) }}" until: "{{ make_nova_prep_until | default(true) }}" register: "make_nova_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova_prep" dry_run: "{{ make_nova_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_prep_env|default({})), **(make_nova_prep_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova.yml0000644000175000017500000000144515117130422033332 0ustar zuulzuul--- - name: Debug make_nova_env when: make_nova_env is defined ansible.builtin.debug: var: make_nova_env - name: Debug make_nova_params when: make_nova_params is defined ansible.builtin.debug: var: make_nova_params - name: Run nova retries: "{{ make_nova_retries | default(omit) }}" delay: "{{ make_nova_delay | default(omit) }}" until: "{{ make_nova_until | default(true) }}" register: "make_nova_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova" dry_run: "{{ make_nova_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_env|default({})), **(make_nova_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_cle0000644000175000017500000000163515117130422033356 0ustar zuulzuul--- - name: Debug make_nova_cleanup_env when: make_nova_cleanup_env is defined ansible.builtin.debug: var: make_nova_cleanup_env - name: Debug make_nova_cleanup_params 
when: make_nova_cleanup_params is defined ansible.builtin.debug: var: make_nova_cleanup_params - name: Run nova_cleanup retries: "{{ make_nova_cleanup_retries | default(omit) }}" delay: "{{ make_nova_cleanup_delay | default(omit) }}" until: "{{ make_nova_cleanup_until | default(true) }}" register: "make_nova_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova_cleanup" dry_run: "{{ make_nova_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_cleanup_env|default({})), **(make_nova_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_dep0000644000175000017500000000173115117130422033360 0ustar zuulzuul--- - name: Debug make_nova_deploy_prep_env when: make_nova_deploy_prep_env is defined ansible.builtin.debug: var: make_nova_deploy_prep_env - name: Debug make_nova_deploy_prep_params when: make_nova_deploy_prep_params is defined ansible.builtin.debug: var: make_nova_deploy_prep_params - name: Run nova_deploy_prep retries: "{{ make_nova_deploy_prep_retries | default(omit) }}" delay: "{{ make_nova_deploy_prep_delay | default(omit) }}" until: "{{ make_nova_deploy_prep_until | default(true) }}" register: "make_nova_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova_deploy_prep" dry_run: "{{ make_nova_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_deploy_prep_env|default({})), 
**(make_nova_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_dep0000644000175000017500000000161615117130422033362 0ustar zuulzuul--- - name: Debug make_nova_deploy_env when: make_nova_deploy_env is defined ansible.builtin.debug: var: make_nova_deploy_env - name: Debug make_nova_deploy_params when: make_nova_deploy_params is defined ansible.builtin.debug: var: make_nova_deploy_params - name: Run nova_deploy retries: "{{ make_nova_deploy_retries | default(omit) }}" delay: "{{ make_nova_deploy_delay | default(omit) }}" until: "{{ make_nova_deploy_until | default(true) }}" register: "make_nova_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova_deploy" dry_run: "{{ make_nova_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_deploy_env|default({})), **(make_nova_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nova_dep0000644000175000017500000000200615117130422033354 0ustar zuulzuul--- - name: Debug make_nova_deploy_cleanup_env when: make_nova_deploy_cleanup_env is defined ansible.builtin.debug: var: make_nova_deploy_cleanup_env - name: Debug make_nova_deploy_cleanup_params when: make_nova_deploy_cleanup_params is defined ansible.builtin.debug: var: make_nova_deploy_cleanup_params - name: Run nova_deploy_cleanup retries: "{{ 
make_nova_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_nova_deploy_cleanup_delay | default(omit) }}" until: "{{ make_nova_deploy_cleanup_until | default(true) }}" register: "make_nova_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nova_deploy_cleanup" dry_run: "{{ make_nova_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nova_deploy_cleanup_env|default({})), **(make_nova_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000175015117130422033324 0ustar zuulzuul--- - name: Debug make_mariadb_kuttl_run_env when: make_mariadb_kuttl_run_env is defined ansible.builtin.debug: var: make_mariadb_kuttl_run_env - name: Debug make_mariadb_kuttl_run_params when: make_mariadb_kuttl_run_params is defined ansible.builtin.debug: var: make_mariadb_kuttl_run_params - name: Run mariadb_kuttl_run retries: "{{ make_mariadb_kuttl_run_retries | default(omit) }}" delay: "{{ make_mariadb_kuttl_run_delay | default(omit) }}" until: "{{ make_mariadb_kuttl_run_until | default(true) }}" register: "make_mariadb_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_kuttl_run" dry_run: "{{ make_mariadb_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_kuttl_run_env|default({})), **(make_mariadb_kuttl_run_params|default({}))) }}" 
././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000165415117130422033327 0ustar zuulzuul--- - name: Debug make_mariadb_kuttl_env when: make_mariadb_kuttl_env is defined ansible.builtin.debug: var: make_mariadb_kuttl_env - name: Debug make_mariadb_kuttl_params when: make_mariadb_kuttl_params is defined ansible.builtin.debug: var: make_mariadb_kuttl_params - name: Run mariadb_kuttl retries: "{{ make_mariadb_kuttl_retries | default(omit) }}" delay: "{{ make_mariadb_kuttl_delay | default(omit) }}" until: "{{ make_mariadb_kuttl_until | default(true) }}" register: "make_mariadb_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_kuttl" dry_run: "{{ make_mariadb_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_kuttl_env|default({})), **(make_mariadb_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db0000644000175000017500000000165415117130422033401 0ustar zuulzuul--- - name: Debug make_kuttl_db_prep_env when: make_kuttl_db_prep_env is defined ansible.builtin.debug: var: make_kuttl_db_prep_env - name: Debug make_kuttl_db_prep_params when: make_kuttl_db_prep_params is defined ansible.builtin.debug: var: make_kuttl_db_prep_params - name: Run kuttl_db_prep retries: "{{ make_kuttl_db_prep_retries | default(omit) }}" delay: "{{ make_kuttl_db_prep_delay | 
default(omit) }}" until: "{{ make_kuttl_db_prep_until | default(true) }}" register: "make_kuttl_db_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make kuttl_db_prep" dry_run: "{{ make_kuttl_db_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_kuttl_db_prep_env|default({})), **(make_kuttl_db_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_db0000644000175000017500000000173115117130422033375 0ustar zuulzuul--- - name: Debug make_kuttl_db_cleanup_env when: make_kuttl_db_cleanup_env is defined ansible.builtin.debug: var: make_kuttl_db_cleanup_env - name: Debug make_kuttl_db_cleanup_params when: make_kuttl_db_cleanup_params is defined ansible.builtin.debug: var: make_kuttl_db_cleanup_params - name: Run kuttl_db_cleanup retries: "{{ make_kuttl_db_cleanup_retries | default(omit) }}" delay: "{{ make_kuttl_db_cleanup_delay | default(omit) }}" until: "{{ make_kuttl_db_cleanup_until | default(true) }}" register: "make_kuttl_db_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make kuttl_db_cleanup" dry_run: "{{ make_kuttl_db_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_kuttl_db_cleanup_env|default({})), **(make_kuttl_db_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_common_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_co0000644000175000017500000000175015117130422033412 0ustar zuulzuul--- - name: Debug make_kuttl_common_prep_env when: make_kuttl_common_prep_env is defined ansible.builtin.debug: var: make_kuttl_common_prep_env - name: Debug make_kuttl_common_prep_params when: make_kuttl_common_prep_params is defined ansible.builtin.debug: var: make_kuttl_common_prep_params - name: Run kuttl_common_prep retries: "{{ make_kuttl_common_prep_retries | default(omit) }}" delay: "{{ make_kuttl_common_prep_delay | default(omit) }}" until: "{{ make_kuttl_common_prep_until | default(true) }}" register: "make_kuttl_common_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make kuttl_common_prep" dry_run: "{{ make_kuttl_common_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_kuttl_common_prep_env|default({})), **(make_kuttl_common_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_common_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_kuttl_co0000644000175000017500000000202515117130422033406 0ustar zuulzuul--- - name: Debug make_kuttl_common_cleanup_env when: make_kuttl_common_cleanup_env is defined ansible.builtin.debug: var: make_kuttl_common_cleanup_env - name: Debug make_kuttl_common_cleanup_params when: make_kuttl_common_cleanup_params is defined ansible.builtin.debug: var: make_kuttl_common_cleanup_params - name: Run kuttl_common_cleanup retries: "{{ make_kuttl_common_cleanup_retries | 
default(omit) }}" delay: "{{ make_kuttl_common_cleanup_delay | default(omit) }}" until: "{{ make_kuttl_common_cleanup_until | default(true) }}" register: "make_kuttl_common_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make kuttl_common_cleanup" dry_run: "{{ make_kuttl_common_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_kuttl_common_cleanup_env|default({})), **(make_kuttl_common_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000176715117130422033437 0ustar zuulzuul--- - name: Debug make_keystone_kuttl_run_env when: make_keystone_kuttl_run_env is defined ansible.builtin.debug: var: make_keystone_kuttl_run_env - name: Debug make_keystone_kuttl_run_params when: make_keystone_kuttl_run_params is defined ansible.builtin.debug: var: make_keystone_kuttl_run_params - name: Run keystone_kuttl_run retries: "{{ make_keystone_kuttl_run_retries | default(omit) }}" delay: "{{ make_keystone_kuttl_run_delay | default(omit) }}" until: "{{ make_keystone_kuttl_run_until | default(true) }}" register: "make_keystone_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_kuttl_run" dry_run: "{{ make_keystone_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_kuttl_run_env|default({})), **(make_keystone_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 
Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_keystone0000644000175000017500000000167315117130422033433 0ustar zuulzuul--- - name: Debug make_keystone_kuttl_env when: make_keystone_kuttl_env is defined ansible.builtin.debug: var: make_keystone_kuttl_env - name: Debug make_keystone_kuttl_params when: make_keystone_kuttl_params is defined ansible.builtin.debug: var: make_keystone_kuttl_params - name: Run keystone_kuttl retries: "{{ make_keystone_kuttl_retries | default(omit) }}" delay: "{{ make_keystone_kuttl_delay | default(omit) }}" until: "{{ make_keystone_kuttl_until | default(true) }}" register: "make_keystone_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make keystone_kuttl" dry_run: "{{ make_keystone_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_keystone_kuttl_env|default({})), **(make_keystone_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican0000644000175000017500000000176715117130422033337 0ustar zuulzuul--- - name: Debug make_barbican_kuttl_run_env when: make_barbican_kuttl_run_env is defined ansible.builtin.debug: var: make_barbican_kuttl_run_env - name: Debug make_barbican_kuttl_run_params when: make_barbican_kuttl_run_params is defined ansible.builtin.debug: var: make_barbican_kuttl_run_params - name: Run barbican_kuttl_run retries: "{{ make_barbican_kuttl_run_retries | default(omit) }}" delay: "{{ make_barbican_kuttl_run_delay | 
default(omit) }}"
  until: "{{ make_barbican_kuttl_run_until | default(true) }}"
  register: "make_barbican_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make barbican_kuttl_run"
    dry_run: "{{ make_barbican_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_barbican_kuttl_run_env|default({})), **(make_barbican_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_barbican_kuttl.yml
---
- name: Debug make_barbican_kuttl_env
  when: make_barbican_kuttl_env is defined
  ansible.builtin.debug:
    var: make_barbican_kuttl_env

- name: Debug make_barbican_kuttl_params
  when: make_barbican_kuttl_params is defined
  ansible.builtin.debug:
    var: make_barbican_kuttl_params

- name: Run barbican_kuttl
  retries: "{{ make_barbican_kuttl_retries | default(omit) }}"
  delay: "{{ make_barbican_kuttl_delay | default(omit) }}"
  until: "{{ make_barbican_kuttl_until | default(true) }}"
  register: "make_barbican_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make barbican_kuttl"
    dry_run: "{{ make_barbican_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_barbican_kuttl_env|default({})), **(make_barbican_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_kuttl_run.yml
---
- name: Debug make_placement_kuttl_run_env
  when: make_placement_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_placement_kuttl_run_env

- name: Debug make_placement_kuttl_run_params
  when: make_placement_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_placement_kuttl_run_params

- name: Run placement_kuttl_run
  retries: "{{ make_placement_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_placement_kuttl_run_delay | default(omit) }}"
  until: "{{ make_placement_kuttl_run_until | default(true) }}"
  register: "make_placement_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make placement_kuttl_run"
    dry_run: "{{ make_placement_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_placement_kuttl_run_env|default({})), **(make_placement_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_placement_kuttl.yml
---
- name: Debug make_placement_kuttl_env
  when: make_placement_kuttl_env is defined
  ansible.builtin.debug:
    var: make_placement_kuttl_env

- name: Debug make_placement_kuttl_params
  when: make_placement_kuttl_params is defined
  ansible.builtin.debug:
    var: make_placement_kuttl_params

- name: Run placement_kuttl
  retries: "{{ make_placement_kuttl_retries | default(omit) }}"
  delay: "{{ make_placement_kuttl_delay | default(omit) }}"
  until: "{{ make_placement_kuttl_until | default(true) }}"
  register: "make_placement_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make placement_kuttl"
    dry_run: "{{ make_placement_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_placement_kuttl_env|default({})), **(make_placement_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_kuttl_run.yml
---
- name: Debug make_cinder_kuttl_run_env
  when: make_cinder_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_cinder_kuttl_run_env

- name: Debug make_cinder_kuttl_run_params
  when: make_cinder_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_cinder_kuttl_run_params

- name: Run cinder_kuttl_run
  retries: "{{ make_cinder_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_cinder_kuttl_run_delay | default(omit) }}"
  until: "{{ make_cinder_kuttl_run_until | default(true) }}"
  register: "make_cinder_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make cinder_kuttl_run"
    dry_run: "{{ make_cinder_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_cinder_kuttl_run_env|default({})), **(make_cinder_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cinder_kuttl.yml
---
- name: Debug make_cinder_kuttl_env
  when: make_cinder_kuttl_env is defined
  ansible.builtin.debug:
    var: make_cinder_kuttl_env

- name: Debug make_cinder_kuttl_params
  when: make_cinder_kuttl_params is defined
  ansible.builtin.debug:
    var: make_cinder_kuttl_params

- name: Run cinder_kuttl
  retries: "{{ make_cinder_kuttl_retries | default(omit) }}"
  delay: "{{ make_cinder_kuttl_delay | default(omit) }}"
  until: "{{ make_cinder_kuttl_until | default(true) }}"
  register: "make_cinder_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make cinder_kuttl"
    dry_run: "{{ make_cinder_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_cinder_kuttl_env|default({})), **(make_cinder_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_kuttl_run.yml
---
- name: Debug make_neutron_kuttl_run_env
  when: make_neutron_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_neutron_kuttl_run_env

- name: Debug make_neutron_kuttl_run_params
  when: make_neutron_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_neutron_kuttl_run_params

- name: Run neutron_kuttl_run
  retries: "{{ make_neutron_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_neutron_kuttl_run_delay | default(omit) }}"
  until: "{{ make_neutron_kuttl_run_until | default(true) }}"
  register: "make_neutron_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make neutron_kuttl_run"
    dry_run: "{{ make_neutron_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_neutron_kuttl_run_env|default({})), **(make_neutron_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_neutron_kuttl.yml
---
- name: Debug make_neutron_kuttl_env
  when: make_neutron_kuttl_env is defined
  ansible.builtin.debug:
    var: make_neutron_kuttl_env

- name: Debug make_neutron_kuttl_params
  when: make_neutron_kuttl_params is defined
  ansible.builtin.debug:
    var: make_neutron_kuttl_params

- name: Run neutron_kuttl
  retries: "{{ make_neutron_kuttl_retries | default(omit) }}"
  delay: "{{ make_neutron_kuttl_delay | default(omit) }}"
  until: "{{ make_neutron_kuttl_until | default(true) }}"
  register: "make_neutron_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make neutron_kuttl"
    dry_run: "{{ make_neutron_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_neutron_kuttl_env|default({})), **(make_neutron_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_kuttl_run.yml
---
- name: Debug make_octavia_kuttl_run_env
  when: make_octavia_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_octavia_kuttl_run_env

- name: Debug make_octavia_kuttl_run_params
  when: make_octavia_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_octavia_kuttl_run_params

- name: Run octavia_kuttl_run
  retries: "{{ make_octavia_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_octavia_kuttl_run_delay | default(omit) }}"
  until: "{{ make_octavia_kuttl_run_until | default(true) }}"
  register: "make_octavia_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make octavia_kuttl_run"
    dry_run: "{{ make_octavia_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_octavia_kuttl_run_env|default({})), **(make_octavia_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_octavia_kuttl.yml
---
- name: Debug make_octavia_kuttl_env
  when: make_octavia_kuttl_env is defined
  ansible.builtin.debug:
    var: make_octavia_kuttl_env

- name: Debug make_octavia_kuttl_params
  when: make_octavia_kuttl_params is defined
  ansible.builtin.debug:
    var: make_octavia_kuttl_params

- name: Run octavia_kuttl
  retries: "{{ make_octavia_kuttl_retries | default(omit) }}"
  delay: "{{ make_octavia_kuttl_delay | default(omit) }}"
  until: "{{ make_octavia_kuttl_until | default(true) }}"
  register: "make_octavia_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make octavia_kuttl"
    dry_run: "{{ make_octavia_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_octavia_kuttl_env|default({})), **(make_octavia_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_kuttl.yml
---
- name: Debug make_designate_kuttl_env
  when: make_designate_kuttl_env is defined
  ansible.builtin.debug:
    var: make_designate_kuttl_env

- name: Debug make_designate_kuttl_params
  when: make_designate_kuttl_params is defined
  ansible.builtin.debug:
    var: make_designate_kuttl_params

- name: Run designate_kuttl
  retries: "{{ make_designate_kuttl_retries | default(omit) }}"
  delay: "{{ make_designate_kuttl_delay | default(omit) }}"
  until: "{{ make_designate_kuttl_until | default(true) }}"
  register: "make_designate_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make designate_kuttl"
    dry_run: "{{ make_designate_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_designate_kuttl_env|default({})), **(make_designate_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_designate_kuttl_run.yml
---
- name: Debug make_designate_kuttl_run_env
  when: make_designate_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_designate_kuttl_run_env

- name: Debug make_designate_kuttl_run_params
  when: make_designate_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_designate_kuttl_run_params

- name: Run designate_kuttl_run
  retries: "{{ make_designate_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_designate_kuttl_run_delay | default(omit) }}"
  until: "{{ make_designate_kuttl_run_until | default(true) }}"
  register: "make_designate_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make designate_kuttl_run"
    dry_run: "{{ make_designate_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_designate_kuttl_run_env|default({})), **(make_designate_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_kuttl_run.yml
---
- name: Debug make_ovn_kuttl_run_env
  when: make_ovn_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_ovn_kuttl_run_env

- name: Debug make_ovn_kuttl_run_params
  when: make_ovn_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_ovn_kuttl_run_params

- name: Run ovn_kuttl_run
  retries: "{{ make_ovn_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_ovn_kuttl_run_delay | default(omit) }}"
  until: "{{ make_ovn_kuttl_run_until | default(true) }}"
  register: "make_ovn_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ovn_kuttl_run"
    dry_run: "{{ make_ovn_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ovn_kuttl_run_env|default({})), **(make_ovn_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ovn_kuttl.yml
---
- name: Debug make_ovn_kuttl_env
  when: make_ovn_kuttl_env is defined
  ansible.builtin.debug:
    var: make_ovn_kuttl_env

- name: Debug make_ovn_kuttl_params
  when: make_ovn_kuttl_params is defined
  ansible.builtin.debug:
    var: make_ovn_kuttl_params

- name: Run ovn_kuttl
  retries: "{{ make_ovn_kuttl_retries | default(omit) }}"
  delay: "{{ make_ovn_kuttl_delay | default(omit) }}"
  until: "{{ make_ovn_kuttl_until | default(true) }}"
  register: "make_ovn_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ovn_kuttl"
    dry_run: "{{ make_ovn_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ovn_kuttl_env|default({})), **(make_ovn_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_kuttl_run.yml
---
- name: Debug make_infra_kuttl_run_env
  when: make_infra_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_infra_kuttl_run_env

- name: Debug make_infra_kuttl_run_params
  when: make_infra_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_infra_kuttl_run_params

- name: Run infra_kuttl_run
  retries: "{{ make_infra_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_infra_kuttl_run_delay | default(omit) }}"
  until: "{{ make_infra_kuttl_run_until | default(true) }}"
  register: "make_infra_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make infra_kuttl_run"
    dry_run: "{{ make_infra_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_infra_kuttl_run_env|default({})), **(make_infra_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_infra_kuttl.yml
---
- name: Debug make_infra_kuttl_env
  when: make_infra_kuttl_env is defined
  ansible.builtin.debug:
    var: make_infra_kuttl_env

- name: Debug make_infra_kuttl_params
  when: make_infra_kuttl_params is defined
  ansible.builtin.debug:
    var: make_infra_kuttl_params

- name: Run infra_kuttl
  retries: "{{ make_infra_kuttl_retries | default(omit) }}"
  delay: "{{ make_infra_kuttl_delay | default(omit) }}"
  until: "{{ make_infra_kuttl_until | default(true) }}"
  register: "make_infra_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make infra_kuttl"
    dry_run: "{{ make_infra_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_infra_kuttl_env|default({})), **(make_infra_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_kuttl_run.yml
---
- name: Debug make_ironic_kuttl_run_env
  when: make_ironic_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_ironic_kuttl_run_env

- name: Debug make_ironic_kuttl_run_params
  when: make_ironic_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_ironic_kuttl_run_params

- name: Run ironic_kuttl_run
  retries: "{{ make_ironic_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_ironic_kuttl_run_delay | default(omit) }}"
  until: "{{ make_ironic_kuttl_run_until | default(true) }}"
  register: "make_ironic_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ironic_kuttl_run"
    dry_run: "{{ make_ironic_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ironic_kuttl_run_env|default({})), **(make_ironic_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_kuttl.yml
---
- name: Debug make_ironic_kuttl_env
  when: make_ironic_kuttl_env is defined
  ansible.builtin.debug:
    var: make_ironic_kuttl_env

- name: Debug make_ironic_kuttl_params
  when: make_ironic_kuttl_params is defined
  ansible.builtin.debug:
    var: make_ironic_kuttl_params

- name: Run ironic_kuttl
  retries: "{{ make_ironic_kuttl_retries | default(omit) }}"
  delay: "{{ make_ironic_kuttl_delay | default(omit) }}"
  until: "{{ make_ironic_kuttl_until | default(true) }}"
  register: "make_ironic_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ironic_kuttl"
    dry_run: "{{ make_ironic_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ironic_kuttl_env|default({})), **(make_ironic_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ironic_kuttl_crc.yml
---
- name: Debug make_ironic_kuttl_crc_env
  when: make_ironic_kuttl_crc_env is defined
  ansible.builtin.debug:
    var: make_ironic_kuttl_crc_env

- name: Debug make_ironic_kuttl_crc_params
  when: make_ironic_kuttl_crc_params is defined
  ansible.builtin.debug:
    var: make_ironic_kuttl_crc_params

- name: Run ironic_kuttl_crc
  retries: "{{ make_ironic_kuttl_crc_retries | default(omit) }}"
  delay: "{{ make_ironic_kuttl_crc_delay | default(omit) }}"
  until: "{{ make_ironic_kuttl_crc_until | default(true) }}"
  register: "make_ironic_kuttl_crc_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ironic_kuttl_crc"
    dry_run: "{{ make_ironic_kuttl_crc_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ironic_kuttl_crc_env|default({})), **(make_ironic_kuttl_crc_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_kuttl_run.yml
---
- name: Debug make_heat_kuttl_run_env
  when: make_heat_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_heat_kuttl_run_env

- name: Debug make_heat_kuttl_run_params
  when: make_heat_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_heat_kuttl_run_params

- name: Run heat_kuttl_run
  retries: "{{ make_heat_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_heat_kuttl_run_delay | default(omit) }}"
  until: "{{ make_heat_kuttl_run_until | default(true) }}"
  register: "make_heat_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_kuttl_run"
    dry_run: "{{ make_heat_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_kuttl_run_env|default({})), **(make_heat_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_kuttl.yml
---
- name: Debug make_heat_kuttl_env
  when: make_heat_kuttl_env is defined
  ansible.builtin.debug:
    var: make_heat_kuttl_env

- name: Debug make_heat_kuttl_params
  when: make_heat_kuttl_params is defined
  ansible.builtin.debug:
    var: make_heat_kuttl_params

- name: Run heat_kuttl
  retries: "{{ make_heat_kuttl_retries | default(omit) }}"
  delay: "{{ make_heat_kuttl_delay | default(omit) }}"
  until: "{{ make_heat_kuttl_until | default(true) }}"
  register: "make_heat_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_kuttl"
    dry_run: "{{ make_heat_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_kuttl_env|default({})), **(make_heat_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_kuttl_crc.yml
---
- name: Debug make_heat_kuttl_crc_env
  when: make_heat_kuttl_crc_env is defined
  ansible.builtin.debug:
    var: make_heat_kuttl_crc_env

- name: Debug make_heat_kuttl_crc_params
  when: make_heat_kuttl_crc_params is defined
  ansible.builtin.debug:
    var: make_heat_kuttl_crc_params

- name: Run heat_kuttl_crc
  retries: "{{ make_heat_kuttl_crc_retries | default(omit) }}"
  delay: "{{ make_heat_kuttl_crc_delay | default(omit) }}"
  until: "{{ make_heat_kuttl_crc_until | default(true) }}"
  register: "make_heat_kuttl_crc_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_kuttl_crc"
    dry_run: "{{ make_heat_kuttl_crc_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_kuttl_crc_env|default({})), **(make_heat_kuttl_crc_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_kuttl_run.yml
---
- name: Debug make_ansibleee_kuttl_run_env
  when: make_ansibleee_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_run_env

- name: Debug make_ansibleee_kuttl_run_params
  when: make_ansibleee_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_run_params

- name: Run ansibleee_kuttl_run
  retries: "{{ make_ansibleee_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_ansibleee_kuttl_run_delay | default(omit) }}"
  until: "{{ make_ansibleee_kuttl_run_until | default(true) }}"
  register: "make_ansibleee_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee_kuttl_run"
    dry_run: "{{ make_ansibleee_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ansibleee_kuttl_run_env|default({})), **(make_ansibleee_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_kuttl_cleanup.yml
---
- name: Debug make_ansibleee_kuttl_cleanup_env
  when: make_ansibleee_kuttl_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_cleanup_env

- name: Debug make_ansibleee_kuttl_cleanup_params
  when: make_ansibleee_kuttl_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_cleanup_params

- name: Run ansibleee_kuttl_cleanup
  retries: "{{ make_ansibleee_kuttl_cleanup_retries | default(omit) }}"
  delay: "{{ make_ansibleee_kuttl_cleanup_delay | default(omit) }}"
  until: "{{ make_ansibleee_kuttl_cleanup_until | default(true) }}"
  register: "make_ansibleee_kuttl_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee_kuttl_cleanup"
    dry_run: "{{ make_ansibleee_kuttl_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ansibleee_kuttl_cleanup_env|default({})), **(make_ansibleee_kuttl_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_kuttl_prep.yml
---
- name: Debug make_ansibleee_kuttl_prep_env
  when: make_ansibleee_kuttl_prep_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_prep_env

- name: Debug make_ansibleee_kuttl_prep_params
  when: make_ansibleee_kuttl_prep_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_prep_params

- name: Run ansibleee_kuttl_prep
  retries: "{{ make_ansibleee_kuttl_prep_retries | default(omit) }}"
  delay: "{{ make_ansibleee_kuttl_prep_delay | default(omit) }}"
  until: "{{ make_ansibleee_kuttl_prep_until | default(true) }}"
  register: "make_ansibleee_kuttl_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee_kuttl_prep"
    dry_run: "{{ make_ansibleee_kuttl_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ansibleee_kuttl_prep_env|default({})), **(make_ansibleee_kuttl_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_kuttl.yml
---
- name: Debug make_ansibleee_kuttl_env
  when: make_ansibleee_kuttl_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_env

- name: Debug make_ansibleee_kuttl_params
  when: make_ansibleee_kuttl_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_kuttl_params

- name: Run ansibleee_kuttl
  retries: "{{ make_ansibleee_kuttl_retries | default(omit) }}"
  delay: "{{ make_ansibleee_kuttl_delay | default(omit) }}"
  until: "{{ make_ansibleee_kuttl_until | default(true) }}"
  register: "make_ansibleee_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee_kuttl"
    dry_run: "{{ make_ansibleee_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ansibleee_kuttl_env|default({})), **(make_ansibleee_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_kuttl_run.yml
---
- name: Debug make_glance_kuttl_run_env
  when: make_glance_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_glance_kuttl_run_env

- name: Debug make_glance_kuttl_run_params
  when: make_glance_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_glance_kuttl_run_params

- name: Run glance_kuttl_run
  retries: "{{ make_glance_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_glance_kuttl_run_delay | default(omit) }}"
  until: "{{ make_glance_kuttl_run_until | default(true) }}"
  register: "make_glance_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make glance_kuttl_run"
    dry_run: "{{ make_glance_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_glance_kuttl_run_env|default({})), **(make_glance_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_glance_kuttl.yml
---
- name: Debug make_glance_kuttl_env
  when: make_glance_kuttl_env is defined
  ansible.builtin.debug:
    var: make_glance_kuttl_env

- name: Debug make_glance_kuttl_params
  when: make_glance_kuttl_params is defined
  ansible.builtin.debug:
    var: make_glance_kuttl_params

- name: Run glance_kuttl
  retries: "{{ make_glance_kuttl_retries | default(omit) }}"
  delay: "{{ make_glance_kuttl_delay | default(omit) }}"
  until: "{{ make_glance_kuttl_until | default(true) }}"
  register: "make_glance_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make glance_kuttl"
    dry_run: "{{ make_glance_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_glance_kuttl_env|default({})), **(make_glance_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_kuttl_run.yml
---
- name: Debug make_manila_kuttl_run_env
  when: make_manila_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_manila_kuttl_run_env

- name: Debug make_manila_kuttl_run_params
  when: make_manila_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_manila_kuttl_run_params

- name: Run manila_kuttl_run
  retries: "{{ make_manila_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_manila_kuttl_run_delay | default(omit) }}"
  until: "{{ make_manila_kuttl_run_until | default(true) }}"
  register: "make_manila_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_kuttl_run"
    dry_run: "{{ make_manila_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_kuttl_run_env|default({})), **(make_manila_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_kuttl.yml
---
- name: Debug make_manila_kuttl_env
  when: make_manila_kuttl_env is defined
  ansible.builtin.debug:
    var: make_manila_kuttl_env

- name: Debug make_manila_kuttl_params
  when: make_manila_kuttl_params is defined
  ansible.builtin.debug:
    var: make_manila_kuttl_params

- name: Run manila_kuttl
  retries: "{{ make_manila_kuttl_retries | default(omit) }}"
  delay: "{{ make_manila_kuttl_delay | default(omit) }}"
  until: "{{ make_manila_kuttl_until | default(true) }}"
  register: "make_manila_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_kuttl"
    dry_run: "{{ make_manila_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_kuttl_env|default({})), **(make_manila_kuttl_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_kuttl_run.yml
---
- name: Debug make_swift_kuttl_run_env
  when: make_swift_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_swift_kuttl_run_env

- name: Debug make_swift_kuttl_run_params
  when: make_swift_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_swift_kuttl_run_params

- name: Run swift_kuttl_run
  retries: "{{ make_swift_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_swift_kuttl_run_delay | default(omit) }}"
  until: "{{ make_swift_kuttl_run_until | default(true) }}"
  register: "make_swift_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_kuttl_run"
    dry_run: "{{ make_swift_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_kuttl_run_env|default({})), **(make_swift_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_kuttl.yml
---
- name: Debug make_swift_kuttl_env
  when: make_swift_kuttl_env is defined
  ansible.builtin.debug:
    var: make_swift_kuttl_env

- name: Debug make_swift_kuttl_params
  when: make_swift_kuttl_params is defined
  ansible.builtin.debug:
    var: make_swift_kuttl_params

- name: Run swift_kuttl
  retries: "{{ make_swift_kuttl_retries | default(omit) }}"
  delay: "{{ make_swift_kuttl_delay | default(omit) }}"
  until: "{{ make_swift_kuttl_until | default(true) }}"
  register: "make_swift_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_kuttl"
    dry_run: "{{ make_swift_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_kuttl_env|default({})), **(make_swift_kuttl_params|default({}))) }}"

././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_0000644000175000017500000000175015117130422033415 
0ustar zuulzuul--- - name: Debug make_horizon_kuttl_run_env when: make_horizon_kuttl_run_env is defined ansible.builtin.debug: var: make_horizon_kuttl_run_env - name: Debug make_horizon_kuttl_run_params when: make_horizon_kuttl_run_params is defined ansible.builtin.debug: var: make_horizon_kuttl_run_params - name: Run horizon_kuttl_run retries: "{{ make_horizon_kuttl_run_retries | default(omit) }}" delay: "{{ make_horizon_kuttl_run_delay | default(omit) }}" until: "{{ make_horizon_kuttl_run_until | default(true) }}" register: "make_horizon_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make horizon_kuttl_run" dry_run: "{{ make_horizon_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_horizon_kuttl_run_env|default({})), **(make_horizon_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_0000644000175000017500000000165415117130422033420 0ustar zuulzuul--- - name: Debug make_horizon_kuttl_env when: make_horizon_kuttl_env is defined ansible.builtin.debug: var: make_horizon_kuttl_env - name: Debug make_horizon_kuttl_params when: make_horizon_kuttl_params is defined ansible.builtin.debug: var: make_horizon_kuttl_params - name: Run horizon_kuttl retries: "{{ make_horizon_kuttl_retries | default(omit) }}" delay: "{{ make_horizon_kuttl_delay | default(omit) }}" until: "{{ make_horizon_kuttl_until | default(true) }}" register: "make_horizon_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: 
"/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make horizon_kuttl" dry_run: "{{ make_horizon_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_horizon_kuttl_env|default({})), **(make_horizon_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_kuttl_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000200615117130422033375 0ustar zuulzuul--- - name: Debug make_openstack_kuttl_run_env when: make_openstack_kuttl_run_env is defined ansible.builtin.debug: var: make_openstack_kuttl_run_env - name: Debug make_openstack_kuttl_run_params when: make_openstack_kuttl_run_params is defined ansible.builtin.debug: var: make_openstack_kuttl_run_params - name: Run openstack_kuttl_run retries: "{{ make_openstack_kuttl_run_retries | default(omit) }}" delay: "{{ make_openstack_kuttl_run_delay | default(omit) }}" until: "{{ make_openstack_kuttl_run_until | default(true) }}" register: "make_openstack_kuttl_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_kuttl_run" dry_run: "{{ make_openstack_kuttl_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_kuttl_run_env|default({})), **(make_openstack_kuttl_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstack_kuttl.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_openstac0000644000175000017500000000171215117130422033400 0ustar zuulzuul--- - name: Debug 
make_openstack_kuttl_env when: make_openstack_kuttl_env is defined ansible.builtin.debug: var: make_openstack_kuttl_env - name: Debug make_openstack_kuttl_params when: make_openstack_kuttl_params is defined ansible.builtin.debug: var: make_openstack_kuttl_params - name: Run openstack_kuttl retries: "{{ make_openstack_kuttl_retries | default(omit) }}" delay: "{{ make_openstack_kuttl_delay | default(omit) }}" until: "{{ make_openstack_kuttl_until | default(true) }}" register: "make_openstack_kuttl_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make openstack_kuttl" dry_run: "{{ make_openstack_kuttl_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_openstack_kuttl_env|default({})), **(make_openstack_kuttl_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_chainsaw_run.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000202515117130422033320 0ustar zuulzuul--- - name: Debug make_mariadb_chainsaw_run_env when: make_mariadb_chainsaw_run_env is defined ansible.builtin.debug: var: make_mariadb_chainsaw_run_env - name: Debug make_mariadb_chainsaw_run_params when: make_mariadb_chainsaw_run_params is defined ansible.builtin.debug: var: make_mariadb_chainsaw_run_params - name: Run mariadb_chainsaw_run retries: "{{ make_mariadb_chainsaw_run_retries | default(omit) }}" delay: "{{ make_mariadb_chainsaw_run_delay | default(omit) }}" until: "{{ make_mariadb_chainsaw_run_until | default(true) }}" register: "make_mariadb_chainsaw_run_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: 
"/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_chainsaw_run" dry_run: "{{ make_mariadb_chainsaw_run_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_chainsaw_run_env|default({})), **(make_mariadb_chainsaw_run_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_chainsaw.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_mariadb_0000644000175000017500000000173115117130422033323 0ustar zuulzuul--- - name: Debug make_mariadb_chainsaw_env when: make_mariadb_chainsaw_env is defined ansible.builtin.debug: var: make_mariadb_chainsaw_env - name: Debug make_mariadb_chainsaw_params when: make_mariadb_chainsaw_params is defined ansible.builtin.debug: var: make_mariadb_chainsaw_params - name: Run mariadb_chainsaw retries: "{{ make_mariadb_chainsaw_retries | default(omit) }}" delay: "{{ make_mariadb_chainsaw_delay | default(omit) }}" until: "{{ make_mariadb_chainsaw_until | default(true) }}" register: "make_mariadb_chainsaw_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make mariadb_chainsaw" dry_run: "{{ make_mariadb_chainsaw_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_mariadb_chainsaw_env|default({})), **(make_mariadb_chainsaw_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_0000644000175000017500000000163515117130422033417 0ustar zuulzuul--- - name: Debug make_horizon_prep_env when: 
make_horizon_prep_env is defined ansible.builtin.debug: var: make_horizon_prep_env - name: Debug make_horizon_prep_params when: make_horizon_prep_params is defined ansible.builtin.debug: var: make_horizon_prep_params - name: Run horizon_prep retries: "{{ make_horizon_prep_retries | default(omit) }}" delay: "{{ make_horizon_prep_delay | default(omit) }}" until: "{{ make_horizon_prep_until | default(true) }}" register: "make_horizon_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make horizon_prep" dry_run: "{{ make_horizon_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_horizon_prep_env|default({})), **(make_horizon_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon.0000644000175000017500000000152215117130422033331 0ustar zuulzuul--- - name: Debug make_horizon_env when: make_horizon_env is defined ansible.builtin.debug: var: make_horizon_env - name: Debug make_horizon_params when: make_horizon_params is defined ansible.builtin.debug: var: make_horizon_params - name: Run horizon retries: "{{ make_horizon_retries | default(omit) }}" delay: "{{ make_horizon_delay | default(omit) }}" until: "{{ make_horizon_until | default(true) }}" register: "make_horizon_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make horizon" dry_run: "{{ make_horizon_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_horizon_env|default({})), **(make_horizon_params|default({}))) }}" 
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_cleanup.yml
---
- name: Debug make_horizon_cleanup_env
  when: make_horizon_cleanup_env is defined
  ansible.builtin.debug:
    var: make_horizon_cleanup_env

- name: Debug make_horizon_cleanup_params
  when: make_horizon_cleanup_params is defined
  ansible.builtin.debug:
    var: make_horizon_cleanup_params

- name: Run horizon_cleanup
  retries: "{{ make_horizon_cleanup_retries | default(omit) }}"
  delay: "{{ make_horizon_cleanup_delay | default(omit) }}"
  until: "{{ make_horizon_cleanup_until | default(true) }}"
  register: "make_horizon_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make horizon_cleanup"
    dry_run: "{{ make_horizon_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_horizon_cleanup_env|default({})), **(make_horizon_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_deploy_prep.yml
---
- name: Debug make_horizon_deploy_prep_env
  when: make_horizon_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_horizon_deploy_prep_env

- name: Debug make_horizon_deploy_prep_params
  when: make_horizon_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_horizon_deploy_prep_params

- name: Run horizon_deploy_prep
  retries: "{{ make_horizon_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_horizon_deploy_prep_delay | default(omit) }}"
  until: "{{ make_horizon_deploy_prep_until | default(true) }}"
  register: "make_horizon_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make horizon_deploy_prep"
    dry_run: "{{ make_horizon_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_horizon_deploy_prep_env|default({})), **(make_horizon_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_deploy.yml
---
- name: Debug make_horizon_deploy_env
  when: make_horizon_deploy_env is defined
  ansible.builtin.debug:
    var: make_horizon_deploy_env

- name: Debug make_horizon_deploy_params
  when: make_horizon_deploy_params is defined
  ansible.builtin.debug:
    var: make_horizon_deploy_params

- name: Run horizon_deploy
  retries: "{{ make_horizon_deploy_retries | default(omit) }}"
  delay: "{{ make_horizon_deploy_delay | default(omit) }}"
  until: "{{ make_horizon_deploy_until | default(true) }}"
  register: "make_horizon_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make horizon_deploy"
    dry_run: "{{ make_horizon_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_horizon_deploy_env|default({})), **(make_horizon_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_horizon_deploy_cleanup.yml
---
- name: Debug make_horizon_deploy_cleanup_env
  when: make_horizon_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_horizon_deploy_cleanup_env

- name: Debug make_horizon_deploy_cleanup_params
  when: make_horizon_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_horizon_deploy_cleanup_params

- name: Run horizon_deploy_cleanup
  retries: "{{ make_horizon_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_horizon_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_horizon_deploy_cleanup_until | default(true) }}"
  register: "make_horizon_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make horizon_deploy_cleanup"
    dry_run: "{{ make_horizon_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_horizon_deploy_cleanup_env|default({})), **(make_horizon_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_prep.yml
---
- name: Debug make_heat_prep_env
  when: make_heat_prep_env is defined
  ansible.builtin.debug:
    var: make_heat_prep_env

- name: Debug make_heat_prep_params
  when: make_heat_prep_params is defined
  ansible.builtin.debug:
    var: make_heat_prep_params

- name: Run heat_prep
  retries: "{{ make_heat_prep_retries | default(omit) }}"
  delay: "{{ make_heat_prep_delay | default(omit) }}"
  until: "{{ make_heat_prep_until | default(true) }}"
  register: "make_heat_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_prep"
    dry_run: "{{ make_heat_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_prep_env|default({})), **(make_heat_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat.yml
---
- name: Debug make_heat_env
  when: make_heat_env is defined
  ansible.builtin.debug:
    var: make_heat_env

- name: Debug make_heat_params
  when: make_heat_params is defined
  ansible.builtin.debug:
    var: make_heat_params

- name: Run heat
  retries: "{{ make_heat_retries | default(omit) }}"
  delay: "{{ make_heat_delay | default(omit) }}"
  until: "{{ make_heat_until | default(true) }}"
  register: "make_heat_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat"
    dry_run: "{{ make_heat_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_env|default({})), **(make_heat_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_cleanup.yml
---
- name: Debug make_heat_cleanup_env
  when: make_heat_cleanup_env is defined
  ansible.builtin.debug:
    var: make_heat_cleanup_env

- name: Debug make_heat_cleanup_params
  when: make_heat_cleanup_params is defined
  ansible.builtin.debug:
    var: make_heat_cleanup_params

- name: Run heat_cleanup
  retries: "{{ make_heat_cleanup_retries | default(omit) }}"
  delay: "{{ make_heat_cleanup_delay | default(omit) }}"
  until: "{{ make_heat_cleanup_until | default(true) }}"
  register: "make_heat_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_cleanup"
    dry_run: "{{ make_heat_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_cleanup_env|default({})), **(make_heat_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_deploy_prep.yml
---
- name: Debug make_heat_deploy_prep_env
  when: make_heat_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_heat_deploy_prep_env

- name: Debug make_heat_deploy_prep_params
  when: make_heat_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_heat_deploy_prep_params

- name: Run heat_deploy_prep
  retries: "{{ make_heat_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_heat_deploy_prep_delay | default(omit) }}"
  until: "{{ make_heat_deploy_prep_until | default(true) }}"
  register: "make_heat_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_deploy_prep"
    dry_run: "{{ make_heat_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_deploy_prep_env|default({})), **(make_heat_deploy_prep_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_deploy.yml
---
- name: Debug make_heat_deploy_env
  when: make_heat_deploy_env is defined
  ansible.builtin.debug:
    var: make_heat_deploy_env

- name: Debug make_heat_deploy_params
  when: make_heat_deploy_params is defined
  ansible.builtin.debug:
    var: make_heat_deploy_params

- name: Run heat_deploy
  retries: "{{ make_heat_deploy_retries | default(omit) }}"
  delay: "{{ make_heat_deploy_delay | default(omit) }}"
  until: "{{ make_heat_deploy_until | default(true) }}"
  register: "make_heat_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_deploy"
    dry_run: "{{ make_heat_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_deploy_env|default({})), **(make_heat_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_heat_deploy_cleanup.yml
---
- name: Debug make_heat_deploy_cleanup_env
  when: make_heat_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_heat_deploy_cleanup_env

- name: Debug make_heat_deploy_cleanup_params
  when: make_heat_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_heat_deploy_cleanup_params

- name: Run heat_deploy_cleanup
  retries: "{{ make_heat_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_heat_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_heat_deploy_cleanup_until | default(true) }}"
  register: "make_heat_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make heat_deploy_cleanup"
    dry_run: "{{ make_heat_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_heat_deploy_cleanup_env|default({})), **(make_heat_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_prep.yml
---
- name: Debug make_ansibleee_prep_env
  when: make_ansibleee_prep_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_prep_env

- name: Debug make_ansibleee_prep_params
  when: make_ansibleee_prep_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_prep_params

- name: Run ansibleee_prep
  retries: "{{ make_ansibleee_prep_retries | default(omit) }}"
  delay: "{{ make_ansibleee_prep_delay | default(omit) }}"
  until: "{{ make_ansibleee_prep_until | default(true) }}"
  register: "make_ansibleee_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee_prep"
    dry_run: "{{ make_ansibleee_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ansibleee_prep_env|default({})), **(make_ansibleee_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee.yml
---
- name: Debug make_ansibleee_env
  when: make_ansibleee_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_env

- name: Debug make_ansibleee_params
  when: make_ansibleee_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_params

- name: Run ansibleee
  retries: "{{ make_ansibleee_retries | default(omit) }}"
  delay: "{{ make_ansibleee_delay | default(omit) }}"
  until: "{{ make_ansibleee_until | default(true) }}"
  register: "make_ansibleee_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee"
    dry_run: "{{ make_ansibleee_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ansibleee_env|default({})), **(make_ansibleee_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ansibleee_cleanup.yml
---
- name: Debug make_ansibleee_cleanup_env
  when: make_ansibleee_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ansibleee_cleanup_env

- name: Debug make_ansibleee_cleanup_params
  when: make_ansibleee_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ansibleee_cleanup_params

- name: Run ansibleee_cleanup
  retries: "{{ make_ansibleee_cleanup_retries | default(omit) }}"
  delay: "{{ make_ansibleee_cleanup_delay | default(omit) }}"
  until: "{{ make_ansibleee_cleanup_until | default(true) }}"
  register: "make_ansibleee_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ansibleee_cleanup"
    dry_run: "{{ make_ansibleee_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ansibleee_cleanup_env|default({})), **(make_ansibleee_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_baremetal_prep.yml
---
- name: Debug make_baremetal_prep_env
  when: make_baremetal_prep_env is defined
  ansible.builtin.debug:
    var: make_baremetal_prep_env

- name: Debug make_baremetal_prep_params
  when: make_baremetal_prep_params is defined
  ansible.builtin.debug:
    var: make_baremetal_prep_params

- name: Run baremetal_prep
  retries: "{{ make_baremetal_prep_retries | default(omit) }}"
  delay: "{{ make_baremetal_prep_delay | default(omit) }}"
  until: "{{ make_baremetal_prep_until | default(true) }}"
  register: "make_baremetal_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make baremetal_prep"
    dry_run: "{{ make_baremetal_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_baremetal_prep_env|default({})), **(make_baremetal_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_baremetal.yml
---
- name: Debug make_baremetal_env
  when: make_baremetal_env is defined
  ansible.builtin.debug:
    var: make_baremetal_env

- name: Debug make_baremetal_params
  when: make_baremetal_params is defined
  ansible.builtin.debug:
    var: make_baremetal_params

- name: Run baremetal
  retries: "{{ make_baremetal_retries | default(omit) }}"
  delay: "{{ make_baremetal_delay | default(omit) }}"
  until: "{{ make_baremetal_until | default(true) }}"
  register: "make_baremetal_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make baremetal"
    dry_run: "{{ make_baremetal_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_baremetal_env|default({})), **(make_baremetal_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_baremetal_cleanup.yml
---
- name: Debug make_baremetal_cleanup_env
  when: make_baremetal_cleanup_env is defined
  ansible.builtin.debug:
    var: make_baremetal_cleanup_env

- name: Debug make_baremetal_cleanup_params
  when: make_baremetal_cleanup_params is defined
  ansible.builtin.debug:
    var: make_baremetal_cleanup_params

- name: Run baremetal_cleanup
  retries: "{{ make_baremetal_cleanup_retries | default(omit) }}"
  delay: "{{ make_baremetal_cleanup_delay | default(omit) }}"
  until: "{{ make_baremetal_cleanup_until | default(true) }}"
  register: "make_baremetal_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make baremetal_cleanup"
    dry_run: "{{ make_baremetal_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_baremetal_cleanup_env|default({})), **(make_baremetal_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ceph_help.yml
---
- name: Debug make_ceph_help_env
  when: make_ceph_help_env is defined
  ansible.builtin.debug:
    var: make_ceph_help_env

- name: Debug make_ceph_help_params
  when: make_ceph_help_params is defined
  ansible.builtin.debug:
    var: make_ceph_help_params

- name: Run ceph_help
  retries: "{{ make_ceph_help_retries | default(omit) }}"
  delay: "{{ make_ceph_help_delay | default(omit) }}"
  until: "{{ make_ceph_help_until | default(true) }}"
  register: "make_ceph_help_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make ceph_help"
    dry_run: "{{ make_ceph_help_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ceph_help_env|default({})), **(make_ceph_help_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ceph.yml
---
- name: Debug make_ceph_env
  when: make_ceph_env is defined
  ansible.builtin.debug:
    var: make_ceph_env

- name: Debug make_ceph_params
  when: make_ceph_params is defined
  ansible.builtin.debug:
    var: make_ceph_params
- name: Run ceph retries: "{{ make_ceph_retries | default(omit) }}" delay: "{{ make_ceph_delay | default(omit) }}" until: "{{ make_ceph_until | default(true) }}" register: "make_ceph_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ceph" dry_run: "{{ make_ceph_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ceph_env|default({})), **(make_ceph_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ceph_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ceph_cle0000644000175000017500000000163515117130422033332 0ustar zuulzuul--- - name: Debug make_ceph_cleanup_env when: make_ceph_cleanup_env is defined ansible.builtin.debug: var: make_ceph_cleanup_env - name: Debug make_ceph_cleanup_params when: make_ceph_cleanup_params is defined ansible.builtin.debug: var: make_ceph_cleanup_params - name: Run ceph_cleanup retries: "{{ make_ceph_cleanup_retries | default(omit) }}" delay: "{{ make_ceph_cleanup_delay | default(omit) }}" until: "{{ make_ceph_cleanup_until | default(true) }}" register: "make_ceph_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make ceph_cleanup" dry_run: "{{ make_ceph_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_ceph_cleanup_env|default({})), **(make_ceph_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_pre0000644000175000017500000000156015117130422033405 0ustar zuulzuul--- - name: Debug make_rook_prep_env when: make_rook_prep_env is defined ansible.builtin.debug: var: make_rook_prep_env - name: Debug make_rook_prep_params when: make_rook_prep_params is defined ansible.builtin.debug: var: make_rook_prep_params - name: Run rook_prep retries: "{{ make_rook_prep_retries | default(omit) }}" delay: "{{ make_rook_prep_delay | default(omit) }}" until: "{{ make_rook_prep_until | default(true) }}" register: "make_rook_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rook_prep" dry_run: "{{ make_rook_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rook_prep_env|default({})), **(make_rook_prep_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook.yml0000644000175000017500000000144515117130422033341 0ustar zuulzuul--- - name: Debug make_rook_env when: make_rook_env is defined ansible.builtin.debug: var: make_rook_env - name: Debug make_rook_params when: make_rook_params is defined ansible.builtin.debug: var: make_rook_params - name: Run rook retries: "{{ make_rook_retries | default(omit) }}" delay: "{{ make_rook_delay | default(omit) }}" until: "{{ make_rook_until | default(true) }}" register: "make_rook_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rook" dry_run: "{{ make_rook_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rook_env|default({})), 
**(make_rook_params|default({}))) }}" ././@LongLink0000644000000000000000000000016100000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_deploy_prep.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_dep0000644000175000017500000000173115117130422033367 0ustar zuulzuul--- - name: Debug make_rook_deploy_prep_env when: make_rook_deploy_prep_env is defined ansible.builtin.debug: var: make_rook_deploy_prep_env - name: Debug make_rook_deploy_prep_params when: make_rook_deploy_prep_params is defined ansible.builtin.debug: var: make_rook_deploy_prep_params - name: Run rook_deploy_prep retries: "{{ make_rook_deploy_prep_retries | default(omit) }}" delay: "{{ make_rook_deploy_prep_delay | default(omit) }}" until: "{{ make_rook_deploy_prep_until | default(true) }}" register: "make_rook_deploy_prep_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rook_deploy_prep" dry_run: "{{ make_rook_deploy_prep_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rook_deploy_prep_env|default({})), **(make_rook_deploy_prep_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_dep0000644000175000017500000000161615117130422033371 0ustar zuulzuul--- - name: Debug make_rook_deploy_env when: make_rook_deploy_env is defined ansible.builtin.debug: var: make_rook_deploy_env - name: Debug make_rook_deploy_params when: make_rook_deploy_params is defined ansible.builtin.debug: var: make_rook_deploy_params - name: Run rook_deploy retries: "{{ 
make_rook_deploy_retries | default(omit) }}" delay: "{{ make_rook_deploy_delay | default(omit) }}" until: "{{ make_rook_deploy_until | default(true) }}" register: "make_rook_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rook_deploy" dry_run: "{{ make_rook_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rook_deploy_env|default({})), **(make_rook_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_crc_disk.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_crc0000644000175000017500000000165415117130422033372 0ustar zuulzuul--- - name: Debug make_rook_crc_disk_env when: make_rook_crc_disk_env is defined ansible.builtin.debug: var: make_rook_crc_disk_env - name: Debug make_rook_crc_disk_params when: make_rook_crc_disk_params is defined ansible.builtin.debug: var: make_rook_crc_disk_params - name: Run rook_crc_disk retries: "{{ make_rook_crc_disk_retries | default(omit) }}" delay: "{{ make_rook_crc_disk_delay | default(omit) }}" until: "{{ make_rook_crc_disk_until | default(true) }}" register: "make_rook_crc_disk_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rook_crc_disk" dry_run: "{{ make_rook_crc_disk_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rook_crc_disk_env|default({})), **(make_rook_crc_disk_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_rook_cle0000644000175000017500000000163515117130422033365 0ustar zuulzuul--- - name: Debug make_rook_cleanup_env when: make_rook_cleanup_env is defined ansible.builtin.debug: var: make_rook_cleanup_env - name: Debug make_rook_cleanup_params when: make_rook_cleanup_params is defined ansible.builtin.debug: var: make_rook_cleanup_params - name: Run rook_cleanup retries: "{{ make_rook_cleanup_retries | default(omit) }}" delay: "{{ make_rook_cleanup_delay | default(omit) }}" until: "{{ make_rook_cleanup_until | default(true) }}" register: "make_rook_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make rook_cleanup" dry_run: "{{ make_rook_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_rook_cleanup_env|default({})), **(make_rook_cleanup_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_lvms.yml0000644000175000017500000000144515117130422033350 0ustar zuulzuul--- - name: Debug make_lvms_env when: make_lvms_env is defined ansible.builtin.debug: var: make_lvms_env - name: Debug make_lvms_params when: make_lvms_params is defined ansible.builtin.debug: var: make_lvms_params - name: Run lvms retries: "{{ make_lvms_retries | default(omit) }}" delay: "{{ make_lvms_delay | default(omit) }}" until: "{{ make_lvms_until | default(true) }}" register: "make_lvms_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make lvms" dry_run: "{{ make_lvms_dryrun|default(false)|bool }}" 
extra_args: "{{ dict((make_lvms_env|default({})), **(make_lvms_params|default({}))) }}" ././@LongLink0000644000000000000000000000015000000000000011577 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nmstate.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nmstate.0000644000175000017500000000152215117130422033314 0ustar zuulzuul--- - name: Debug make_nmstate_env when: make_nmstate_env is defined ansible.builtin.debug: var: make_nmstate_env - name: Debug make_nmstate_params when: make_nmstate_params is defined ansible.builtin.debug: var: make_nmstate_params - name: Run nmstate retries: "{{ make_nmstate_retries | default(omit) }}" delay: "{{ make_nmstate_delay | default(omit) }}" until: "{{ make_nmstate_until | default(true) }}" register: "make_nmstate_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nmstate" dry_run: "{{ make_nmstate_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nmstate_env|default({})), **(make_nmstate_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nncp.yml0000644000175000017500000000144515117130422033325 0ustar zuulzuul--- - name: Debug make_nncp_env when: make_nncp_env is defined ansible.builtin.debug: var: make_nncp_env - name: Debug make_nncp_params when: make_nncp_params is defined ansible.builtin.debug: var: make_nncp_params - name: Run nncp retries: "{{ make_nncp_retries | default(omit) }}" delay: "{{ make_nncp_delay | default(omit) }}" until: "{{ make_nncp_until | default(true) }}" register: "make_nncp_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" 
script: "make nncp" dry_run: "{{ make_nncp_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nncp_env|default({})), **(make_nncp_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nncp_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nncp_cle0000644000175000017500000000163515117130422033351 0ustar zuulzuul--- - name: Debug make_nncp_cleanup_env when: make_nncp_cleanup_env is defined ansible.builtin.debug: var: make_nncp_cleanup_env - name: Debug make_nncp_cleanup_params when: make_nncp_cleanup_params is defined ansible.builtin.debug: var: make_nncp_cleanup_params - name: Run nncp_cleanup retries: "{{ make_nncp_cleanup_retries | default(omit) }}" delay: "{{ make_nncp_cleanup_delay | default(omit) }}" until: "{{ make_nncp_cleanup_until | default(true) }}" register: "make_nncp_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make nncp_cleanup" dry_run: "{{ make_nncp_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_nncp_cleanup_env|default({})), **(make_nncp_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netattach.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netattac0000644000175000017500000000156015117130422033370 0ustar zuulzuul--- - name: Debug make_netattach_env when: make_netattach_env is defined ansible.builtin.debug: var: make_netattach_env - name: Debug make_netattach_params when: make_netattach_params is defined ansible.builtin.debug: var: make_netattach_params - name: Run 
netattach retries: "{{ make_netattach_retries | default(omit) }}" delay: "{{ make_netattach_delay | default(omit) }}" until: "{{ make_netattach_until | default(true) }}" register: "make_netattach_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make netattach" dry_run: "{{ make_netattach_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_netattach_env|default({})), **(make_netattach_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netattach_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netattac0000644000175000017500000000175015117130422033371 0ustar zuulzuul--- - name: Debug make_netattach_cleanup_env when: make_netattach_cleanup_env is defined ansible.builtin.debug: var: make_netattach_cleanup_env - name: Debug make_netattach_cleanup_params when: make_netattach_cleanup_params is defined ansible.builtin.debug: var: make_netattach_cleanup_params - name: Run netattach_cleanup retries: "{{ make_netattach_cleanup_retries | default(omit) }}" delay: "{{ make_netattach_cleanup_delay | default(omit) }}" until: "{{ make_netattach_cleanup_until | default(true) }}" register: "make_netattach_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make netattach_cleanup" dry_run: "{{ make_netattach_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_netattach_cleanup_env|default({})), **(make_netattach_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015000000000000011577 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb.0000644000175000017500000000152215117130422033261 0ustar zuulzuul--- - name: Debug make_metallb_env when: make_metallb_env is defined ansible.builtin.debug: var: make_metallb_env - name: Debug make_metallb_params when: make_metallb_params is defined ansible.builtin.debug: var: make_metallb_params - name: Run metallb retries: "{{ make_metallb_retries | default(omit) }}" delay: "{{ make_metallb_delay | default(omit) }}" until: "{{ make_metallb_until | default(true) }}" register: "make_metallb_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make metallb" dry_run: "{{ make_metallb_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_metallb_env|default({})), **(make_metallb_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb_config.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb_0000644000175000017500000000167315117130422033351 0ustar zuulzuul--- - name: Debug make_metallb_config_env when: make_metallb_config_env is defined ansible.builtin.debug: var: make_metallb_config_env - name: Debug make_metallb_config_params when: make_metallb_config_params is defined ansible.builtin.debug: var: make_metallb_config_params - name: Run metallb_config retries: "{{ make_metallb_config_retries | default(omit) }}" delay: "{{ make_metallb_config_delay | default(omit) }}" until: "{{ make_metallb_config_until | default(true) }}" register: "make_metallb_config_status" cifmw.general.ci_script: output_dir: "{{ 
cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make metallb_config" dry_run: "{{ make_metallb_config_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_metallb_config_env|default({})), **(make_metallb_config_params|default({}))) }}" ././@LongLink0000644000000000000000000000016700000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb_config_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb_0000644000175000017500000000206315117130422033343 0ustar zuulzuul--- - name: Debug make_metallb_config_cleanup_env when: make_metallb_config_cleanup_env is defined ansible.builtin.debug: var: make_metallb_config_cleanup_env - name: Debug make_metallb_config_cleanup_params when: make_metallb_config_cleanup_params is defined ansible.builtin.debug: var: make_metallb_config_cleanup_params - name: Run metallb_config_cleanup retries: "{{ make_metallb_config_cleanup_retries | default(omit) }}" delay: "{{ make_metallb_config_cleanup_delay | default(omit) }}" until: "{{ make_metallb_config_cleanup_until | default(true) }}" register: "make_metallb_config_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make metallb_config_cleanup" dry_run: "{{ make_metallb_config_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_metallb_config_cleanup_env|default({})), **(make_metallb_config_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_metallb_0000644000175000017500000000171215117130422033343 0ustar zuulzuul--- - name: Debug make_metallb_cleanup_env when: make_metallb_cleanup_env is defined ansible.builtin.debug: var: make_metallb_cleanup_env - name: Debug make_metallb_cleanup_params when: make_metallb_cleanup_params is defined ansible.builtin.debug: var: make_metallb_cleanup_params - name: Run metallb_cleanup retries: "{{ make_metallb_cleanup_retries | default(omit) }}" delay: "{{ make_metallb_cleanup_delay | default(omit) }}" until: "{{ make_metallb_cleanup_until | default(true) }}" register: "make_metallb_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make metallb_cleanup" dry_run: "{{ make_metallb_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_metallb_cleanup_env|default({})), **(make_metallb_cleanup_params|default({}))) }}" home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki.yml0000644000175000017500000000144515117130422033325 0ustar zuulzuul--- - name: Debug make_loki_env when: make_loki_env is defined ansible.builtin.debug: var: make_loki_env - name: Debug make_loki_params when: make_loki_params is defined ansible.builtin.debug: var: make_loki_params - name: Run loki retries: "{{ make_loki_retries | default(omit) }}" delay: "{{ make_loki_delay | default(omit) }}" until: "{{ make_loki_until | default(true) }}" register: "make_loki_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make loki" dry_run: 
"{{ make_loki_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_loki_env|default({})), **(make_loki_params|default({}))) }}" ././@LongLink0000644000000000000000000000015500000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki_cle0000644000175000017500000000163515117130422033351 0ustar zuulzuul--- - name: Debug make_loki_cleanup_env when: make_loki_cleanup_env is defined ansible.builtin.debug: var: make_loki_cleanup_env - name: Debug make_loki_cleanup_params when: make_loki_cleanup_params is defined ansible.builtin.debug: var: make_loki_cleanup_params - name: Run loki_cleanup retries: "{{ make_loki_cleanup_retries | default(omit) }}" delay: "{{ make_loki_cleanup_delay | default(omit) }}" until: "{{ make_loki_cleanup_until | default(true) }}" register: "make_loki_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make loki_cleanup" dry_run: "{{ make_loki_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_loki_cleanup_env|default({})), **(make_loki_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki_dep0000644000175000017500000000161615117130422033355 0ustar zuulzuul--- - name: Debug make_loki_deploy_env when: make_loki_deploy_env is defined ansible.builtin.debug: var: make_loki_deploy_env - name: Debug make_loki_deploy_params when: make_loki_deploy_params is defined ansible.builtin.debug: var: make_loki_deploy_params - name: Run loki_deploy 
retries: "{{ make_loki_deploy_retries | default(omit) }}" delay: "{{ make_loki_deploy_delay | default(omit) }}" until: "{{ make_loki_deploy_until | default(true) }}" register: "make_loki_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make loki_deploy" dry_run: "{{ make_loki_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_loki_deploy_env|default({})), **(make_loki_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki_deploy_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_loki_dep0000644000175000017500000000200615117130422033347 0ustar zuulzuul--- - name: Debug make_loki_deploy_cleanup_env when: make_loki_deploy_cleanup_env is defined ansible.builtin.debug: var: make_loki_deploy_cleanup_env - name: Debug make_loki_deploy_cleanup_params when: make_loki_deploy_cleanup_params is defined ansible.builtin.debug: var: make_loki_deploy_cleanup_params - name: Run loki_deploy_cleanup retries: "{{ make_loki_deploy_cleanup_retries | default(omit) }}" delay: "{{ make_loki_deploy_cleanup_delay | default(omit) }}" until: "{{ make_loki_deploy_cleanup_until | default(true) }}" register: "make_loki_deploy_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make loki_deploy_cleanup" dry_run: "{{ make_loki_deploy_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_loki_deploy_cleanup_env|default({})), **(make_loki_deploy_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015200000000000011601 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netobserv.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netobser0000644000175000017500000000156015117130422033406 0ustar zuulzuul--- - name: Debug make_netobserv_env when: make_netobserv_env is defined ansible.builtin.debug: var: make_netobserv_env - name: Debug make_netobserv_params when: make_netobserv_params is defined ansible.builtin.debug: var: make_netobserv_params - name: Run netobserv retries: "{{ make_netobserv_retries | default(omit) }}" delay: "{{ make_netobserv_delay | default(omit) }}" until: "{{ make_netobserv_until | default(true) }}" register: "make_netobserv_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls" script: "make netobserv" dry_run: "{{ make_netobserv_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_netobserv_env|default({})), **(make_netobserv_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netobserv_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netobser0000644000175000017500000000175015117130422033407 0ustar zuulzuul--- - name: Debug make_netobserv_cleanup_env when: make_netobserv_cleanup_env is defined ansible.builtin.debug: var: make_netobserv_cleanup_env - name: Debug make_netobserv_cleanup_params when: make_netobserv_cleanup_params is defined ansible.builtin.debug: var: make_netobserv_cleanup_params - name: Run netobserv_cleanup retries: "{{ make_netobserv_cleanup_retries | default(omit) }}" delay: "{{ make_netobserv_cleanup_delay | default(omit) }}" until: "{{ make_netobserv_cleanup_until | default(true) }}" register: 
"make_netobserv_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make netobserv_cleanup"
    dry_run: "{{ make_netobserv_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_netobserv_cleanup_env|default({})), **(make_netobserv_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netobserv_deploy.yml
---
- name: Debug make_netobserv_deploy_env
  when: make_netobserv_deploy_env is defined
  ansible.builtin.debug:
    var: make_netobserv_deploy_env
- name: Debug make_netobserv_deploy_params
  when: make_netobserv_deploy_params is defined
  ansible.builtin.debug:
    var: make_netobserv_deploy_params
- name: Run netobserv_deploy
  retries: "{{ make_netobserv_deploy_retries | default(omit) }}"
  delay: "{{ make_netobserv_deploy_delay | default(omit) }}"
  until: "{{ make_netobserv_deploy_until | default(true) }}"
  register: "make_netobserv_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make netobserv_deploy"
    dry_run: "{{ make_netobserv_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_netobserv_deploy_env|default({})), **(make_netobserv_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_netobserv_deploy_cleanup.yml
---
- name: Debug make_netobserv_deploy_cleanup_env
  when: make_netobserv_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_netobserv_deploy_cleanup_env
- name: Debug make_netobserv_deploy_cleanup_params
  when: make_netobserv_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_netobserv_deploy_cleanup_params
- name: Run netobserv_deploy_cleanup
  retries: "{{ make_netobserv_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_netobserv_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_netobserv_deploy_cleanup_until | default(true) }}"
  register: "make_netobserv_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make netobserv_deploy_cleanup"
    dry_run: "{{ make_netobserv_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_netobserv_deploy_cleanup_env|default({})), **(make_netobserv_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_prep.yml
---
- name: Debug make_manila_prep_env
  when: make_manila_prep_env is defined
  ansible.builtin.debug:
    var: make_manila_prep_env
- name: Debug make_manila_prep_params
  when: make_manila_prep_params is defined
  ansible.builtin.debug:
    var: make_manila_prep_params
- name: Run manila_prep
  retries: "{{ make_manila_prep_retries | default(omit) }}"
  delay: "{{ make_manila_prep_delay | default(omit) }}"
  until: "{{ make_manila_prep_until | default(true) }}"
  register: "make_manila_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_prep"
    dry_run: "{{ make_manila_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_prep_env|default({})), **(make_manila_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila.yml
---
- name: Debug make_manila_env
  when: make_manila_env is defined
  ansible.builtin.debug:
    var: make_manila_env
- name: Debug make_manila_params
  when: make_manila_params is defined
  ansible.builtin.debug:
    var: make_manila_params
- name: Run manila
  retries: "{{ make_manila_retries | default(omit) }}"
  delay: "{{ make_manila_delay | default(omit) }}"
  until: "{{ make_manila_until | default(true) }}"
  register: "make_manila_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila"
    dry_run: "{{ make_manila_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_env|default({})), **(make_manila_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_cleanup.yml
---
- name: Debug make_manila_cleanup_env
  when: make_manila_cleanup_env is defined
  ansible.builtin.debug:
    var: make_manila_cleanup_env
- name: Debug make_manila_cleanup_params
  when: make_manila_cleanup_params is defined
  ansible.builtin.debug:
    var: make_manila_cleanup_params
- name: Run manila_cleanup
  retries: "{{ make_manila_cleanup_retries | default(omit) }}"
  delay: "{{ make_manila_cleanup_delay | default(omit) }}"
  until: "{{ make_manila_cleanup_until | default(true) }}"
  register: "make_manila_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_cleanup"
    dry_run: "{{ make_manila_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_cleanup_env|default({})), **(make_manila_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_deploy_prep.yml
---
- name: Debug make_manila_deploy_prep_env
  when: make_manila_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_manila_deploy_prep_env
- name: Debug make_manila_deploy_prep_params
  when: make_manila_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_manila_deploy_prep_params
- name: Run manila_deploy_prep
  retries: "{{ make_manila_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_manila_deploy_prep_delay | default(omit) }}"
  until: "{{ make_manila_deploy_prep_until | default(true) }}"
  register: "make_manila_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_deploy_prep"
    dry_run: "{{ make_manila_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_deploy_prep_env|default({})), **(make_manila_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_deploy.yml
---
- name: Debug make_manila_deploy_env
  when: make_manila_deploy_env is defined
  ansible.builtin.debug:
    var: make_manila_deploy_env
- name: Debug make_manila_deploy_params
  when: make_manila_deploy_params is defined
  ansible.builtin.debug:
    var: make_manila_deploy_params
- name: Run manila_deploy
  retries: "{{ make_manila_deploy_retries | default(omit) }}"
  delay: "{{ make_manila_deploy_delay | default(omit) }}"
  until: "{{ make_manila_deploy_until | default(true) }}"
  register: "make_manila_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_deploy"
    dry_run: "{{ make_manila_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_deploy_env|default({})), **(make_manila_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_manila_deploy_cleanup.yml
---
- name: Debug make_manila_deploy_cleanup_env
  when: make_manila_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_manila_deploy_cleanup_env
- name: Debug make_manila_deploy_cleanup_params
  when: make_manila_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_manila_deploy_cleanup_params
- name: Run manila_deploy_cleanup
  retries: "{{ make_manila_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_manila_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_manila_deploy_cleanup_until | default(true) }}"
  register: "make_manila_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make manila_deploy_cleanup"
    dry_run: "{{ make_manila_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_manila_deploy_cleanup_env|default({})), **(make_manila_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_prep.yml
---
- name: Debug make_telemetry_prep_env
  when: make_telemetry_prep_env is defined
  ansible.builtin.debug:
    var: make_telemetry_prep_env
- name: Debug make_telemetry_prep_params
  when: make_telemetry_prep_params is defined
  ansible.builtin.debug:
    var: make_telemetry_prep_params
- name: Run telemetry_prep
  retries: "{{ make_telemetry_prep_retries | default(omit) }}"
  delay: "{{ make_telemetry_prep_delay | default(omit) }}"
  until: "{{ make_telemetry_prep_until | default(true) }}"
  register: "make_telemetry_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_prep"
    dry_run: "{{ make_telemetry_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_prep_env|default({})), **(make_telemetry_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry.yml
---
- name: Debug make_telemetry_env
  when: make_telemetry_env is defined
  ansible.builtin.debug:
    var: make_telemetry_env
- name: Debug make_telemetry_params
  when: make_telemetry_params is defined
  ansible.builtin.debug:
    var: make_telemetry_params
- name: Run telemetry
  retries: "{{ make_telemetry_retries | default(omit) }}"
  delay: "{{ make_telemetry_delay | default(omit) }}"
  until: "{{ make_telemetry_until | default(true) }}"
  register: "make_telemetry_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry"
    dry_run: "{{ make_telemetry_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_env|default({})), **(make_telemetry_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_cleanup.yml
---
- name: Debug make_telemetry_cleanup_env
  when: make_telemetry_cleanup_env is defined
  ansible.builtin.debug:
    var: make_telemetry_cleanup_env
- name: Debug make_telemetry_cleanup_params
  when: make_telemetry_cleanup_params is defined
  ansible.builtin.debug:
    var: make_telemetry_cleanup_params
- name: Run telemetry_cleanup
  retries: "{{ make_telemetry_cleanup_retries | default(omit) }}"
  delay: "{{ make_telemetry_cleanup_delay | default(omit) }}"
  until: "{{ make_telemetry_cleanup_until | default(true) }}"
  register: "make_telemetry_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_cleanup"
    dry_run: "{{ make_telemetry_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_cleanup_env|default({})), **(make_telemetry_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_deploy_prep.yml
---
- name: Debug make_telemetry_deploy_prep_env
  when: make_telemetry_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_prep_env
- name: Debug make_telemetry_deploy_prep_params
  when: make_telemetry_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_prep_params
- name: Run telemetry_deploy_prep
  retries: "{{ make_telemetry_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_telemetry_deploy_prep_delay | default(omit) }}"
  until: "{{ make_telemetry_deploy_prep_until | default(true) }}"
  register: "make_telemetry_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_deploy_prep"
    dry_run: "{{ make_telemetry_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_deploy_prep_env|default({})), **(make_telemetry_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_deploy.yml
---
- name: Debug make_telemetry_deploy_env
  when: make_telemetry_deploy_env is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_env
- name: Debug make_telemetry_deploy_params
  when: make_telemetry_deploy_params is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_params
- name: Run telemetry_deploy
  retries: "{{ make_telemetry_deploy_retries | default(omit) }}"
  delay: "{{ make_telemetry_deploy_delay | default(omit) }}"
  until: "{{ make_telemetry_deploy_until | default(true) }}"
  register: "make_telemetry_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_deploy"
    dry_run: "{{ make_telemetry_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_deploy_env|default({})), **(make_telemetry_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_deploy_cleanup.yml
---
- name: Debug make_telemetry_deploy_cleanup_env
  when: make_telemetry_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_cleanup_env
- name: Debug make_telemetry_deploy_cleanup_params
  when: make_telemetry_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_telemetry_deploy_cleanup_params
- name: Run telemetry_deploy_cleanup
  retries: "{{ make_telemetry_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_telemetry_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_telemetry_deploy_cleanup_until | default(true) }}"
  register: "make_telemetry_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_deploy_cleanup"
    dry_run: "{{ make_telemetry_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_deploy_cleanup_env|default({})), **(make_telemetry_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_kuttl_run.yml
---
- name: Debug make_telemetry_kuttl_run_env
  when: make_telemetry_kuttl_run_env is defined
  ansible.builtin.debug:
    var: make_telemetry_kuttl_run_env
- name: Debug make_telemetry_kuttl_run_params
  when: make_telemetry_kuttl_run_params is defined
  ansible.builtin.debug:
    var: make_telemetry_kuttl_run_params
- name: Run telemetry_kuttl_run
  retries: "{{ make_telemetry_kuttl_run_retries | default(omit) }}"
  delay: "{{ make_telemetry_kuttl_run_delay | default(omit) }}"
  until: "{{ make_telemetry_kuttl_run_until | default(true) }}"
  register: "make_telemetry_kuttl_run_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_kuttl_run"
    dry_run: "{{ make_telemetry_kuttl_run_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_kuttl_run_env|default({})), **(make_telemetry_kuttl_run_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_telemetry_kuttl.yml
---
- name: Debug make_telemetry_kuttl_env
  when: make_telemetry_kuttl_env is defined
  ansible.builtin.debug:
    var: make_telemetry_kuttl_env
- name: Debug make_telemetry_kuttl_params
  when: make_telemetry_kuttl_params is defined
  ansible.builtin.debug:
    var: make_telemetry_kuttl_params
- name: Run telemetry_kuttl
  retries: "{{ make_telemetry_kuttl_retries | default(omit) }}"
  delay: "{{ make_telemetry_kuttl_delay | default(omit) }}"
  until: "{{ make_telemetry_kuttl_until | default(true) }}"
  register: "make_telemetry_kuttl_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make telemetry_kuttl"
    dry_run: "{{ make_telemetry_kuttl_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_telemetry_kuttl_env|default({})), **(make_telemetry_kuttl_params|default({}))) }}"
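Every task in these files builds its `extra_args` with the Jinja2 expression `dict((make_X_env|default({})), **(make_X_params|default({})))`. Jinja2's `dict` is the Python builtin, so a minimal Python sketch of the merge semantics (illustrative only, not part of the role) is:

```python
def merge_extra_args(env=None, params=None):
    # Mimic the Jinja2 expression dict((env|default({})), **(params|default({}))):
    # start from the *_env mapping, then let *_params entries win on key collision.
    return dict(env or {}, **(params or {}))

print(merge_extra_args({"NAMESPACE": "openstack", "TIMEOUT": "300"},
                       {"TIMEOUT": "600"}))
# → {'NAMESPACE': 'openstack', 'TIMEOUT': '600'}
```

The `|default({})` guards mean an undefined `*_env` or `*_params` variable simply contributes nothing, so `extra_args` is always a valid mapping.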
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_prep.yml
---
- name: Debug make_swift_prep_env
  when: make_swift_prep_env is defined
  ansible.builtin.debug:
    var: make_swift_prep_env
- name: Debug make_swift_prep_params
  when: make_swift_prep_params is defined
  ansible.builtin.debug:
    var: make_swift_prep_params
- name: Run swift_prep
  retries: "{{ make_swift_prep_retries | default(omit) }}"
  delay: "{{ make_swift_prep_delay | default(omit) }}"
  until: "{{ make_swift_prep_until | default(true) }}"
  register: "make_swift_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_prep"
    dry_run: "{{ make_swift_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_prep_env|default({})), **(make_swift_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift.yml
---
- name: Debug make_swift_env
  when: make_swift_env is defined
  ansible.builtin.debug:
    var: make_swift_env
- name: Debug make_swift_params
  when: make_swift_params is defined
  ansible.builtin.debug:
    var: make_swift_params
- name: Run swift
  retries: "{{ make_swift_retries | default(omit) }}"
  delay: "{{ make_swift_delay | default(omit) }}"
  until: "{{ make_swift_until | default(true) }}"
  register: "make_swift_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift"
    dry_run: "{{ make_swift_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_env|default({})), **(make_swift_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_cleanup.yml
---
- name: Debug make_swift_cleanup_env
  when: make_swift_cleanup_env is defined
  ansible.builtin.debug:
    var: make_swift_cleanup_env
- name: Debug make_swift_cleanup_params
  when: make_swift_cleanup_params is defined
  ansible.builtin.debug:
    var: make_swift_cleanup_params
- name: Run swift_cleanup
  retries: "{{ make_swift_cleanup_retries | default(omit) }}"
  delay: "{{ make_swift_cleanup_delay | default(omit) }}"
  until: "{{ make_swift_cleanup_until | default(true) }}"
  register: "make_swift_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_cleanup"
    dry_run: "{{ make_swift_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_cleanup_env|default({})), **(make_swift_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_deploy_prep.yml
---
- name: Debug make_swift_deploy_prep_env
  when: make_swift_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_swift_deploy_prep_env
- name: Debug make_swift_deploy_prep_params
  when: make_swift_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_swift_deploy_prep_params
- name: Run swift_deploy_prep
  retries: "{{ make_swift_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_swift_deploy_prep_delay | default(omit) }}"
  until: "{{ make_swift_deploy_prep_until | default(true) }}"
  register: "make_swift_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_deploy_prep"
    dry_run: "{{ make_swift_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_deploy_prep_env|default({})), **(make_swift_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_deploy.yml
---
- name: Debug make_swift_deploy_env
  when: make_swift_deploy_env is defined
  ansible.builtin.debug:
    var: make_swift_deploy_env
- name: Debug make_swift_deploy_params
  when: make_swift_deploy_params is defined
  ansible.builtin.debug:
    var: make_swift_deploy_params
- name: Run swift_deploy
  retries: "{{ make_swift_deploy_retries | default(omit) }}"
  delay: "{{ make_swift_deploy_delay | default(omit) }}"
  until: "{{ make_swift_deploy_until | default(true) }}"
  register: "make_swift_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_deploy"
    dry_run: "{{ make_swift_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_deploy_env|default({})), **(make_swift_deploy_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_swift_deploy_cleanup.yml
---
- name: Debug make_swift_deploy_cleanup_env
  when: make_swift_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_swift_deploy_cleanup_env
- name: Debug make_swift_deploy_cleanup_params
  when: make_swift_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_swift_deploy_cleanup_params
- name: Run swift_deploy_cleanup
  retries: "{{ make_swift_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_swift_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_swift_deploy_cleanup_until | default(true) }}"
  register: "make_swift_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make swift_deploy_cleanup"
    dry_run: "{{ make_swift_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_swift_deploy_cleanup_env|default({})), **(make_swift_deploy_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_certmanager.yml
---
- name: Debug make_certmanager_env
  when: make_certmanager_env is defined
  ansible.builtin.debug:
    var: make_certmanager_env
- name: Debug make_certmanager_params
  when: make_certmanager_params is defined
  ansible.builtin.debug:
    var: make_certmanager_params
- name: Run certmanager
  retries: "{{ make_certmanager_retries | default(omit) }}"
  delay: "{{ make_certmanager_delay | default(omit) }}"
  until: "{{ make_certmanager_until | default(true) }}"
  register: "make_certmanager_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make certmanager"
    dry_run: "{{ make_certmanager_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_certmanager_env|default({})), **(make_certmanager_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_certmanager_cleanup.yml
---
- name: Debug make_certmanager_cleanup_env
  when: make_certmanager_cleanup_env is defined
  ansible.builtin.debug:
    var: make_certmanager_cleanup_env
- name: Debug make_certmanager_cleanup_params
  when: make_certmanager_cleanup_params is defined
  ansible.builtin.debug:
    var: make_certmanager_cleanup_params
- name: Run certmanager_cleanup
  retries: "{{ make_certmanager_cleanup_retries | default(omit) }}"
  delay: "{{ make_certmanager_cleanup_delay | default(omit) }}"
  until: "{{ make_certmanager_cleanup_until | default(true) }}"
  register: "make_certmanager_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make certmanager_cleanup"
    dry_run: "{{ make_certmanager_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_certmanager_cleanup_env|default({})), **(make_certmanager_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_validate_marketplace.yml
---
- name: Debug make_validate_marketplace_env
  when: make_validate_marketplace_env is defined
  ansible.builtin.debug:
    var: make_validate_marketplace_env
- name: Debug make_validate_marketplace_params
  when: make_validate_marketplace_params is defined
  ansible.builtin.debug:
    var: make_validate_marketplace_params
- name: Run validate_marketplace
  retries: "{{ make_validate_marketplace_retries | default(omit) }}"
  delay: "{{ make_validate_marketplace_delay | default(omit) }}"
  until: "{{ make_validate_marketplace_until | default(true) }}"
  register: "make_validate_marketplace_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make validate_marketplace"
    dry_run: "{{ make_validate_marketplace_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_validate_marketplace_env|default({})), **(make_validate_marketplace_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_redis_deploy_prep.yml
---
- name: Debug make_redis_deploy_prep_env
  when: make_redis_deploy_prep_env is defined
  ansible.builtin.debug:
    var: make_redis_deploy_prep_env
- name: Debug make_redis_deploy_prep_params
  when: make_redis_deploy_prep_params is defined
  ansible.builtin.debug:
    var: make_redis_deploy_prep_params
- name: Run redis_deploy_prep
  retries: "{{ make_redis_deploy_prep_retries | default(omit) }}"
  delay: "{{ make_redis_deploy_prep_delay | default(omit) }}"
  until: "{{ make_redis_deploy_prep_until | default(true) }}"
  register: "make_redis_deploy_prep_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make redis_deploy_prep"
    dry_run: "{{ make_redis_deploy_prep_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_redis_deploy_prep_env|default({})), **(make_redis_deploy_prep_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_redis_deploy.yml
---
- name: Debug make_redis_deploy_env
  when: make_redis_deploy_env is defined
  ansible.builtin.debug:
    var: make_redis_deploy_env
- name: Debug make_redis_deploy_params
  when: make_redis_deploy_params is defined
  ansible.builtin.debug:
    var: make_redis_deploy_params
- name: Run redis_deploy
  retries: "{{ make_redis_deploy_retries | default(omit) }}"
  delay: "{{ make_redis_deploy_delay | default(omit) }}"
  until: "{{ make_redis_deploy_until | default(true) }}"
  register: "make_redis_deploy_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make redis_deploy"
    dry_run: "{{ make_redis_deploy_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_redis_deploy_env|default({})), **(make_redis_deploy_params|default({}))) }}"
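Each "Run" task also carries a `retries`/`delay`/`until` triplet: Ansible re-evaluates `until` after each attempt and, while it is false, re-runs the task up to `retries` times with `delay` seconds between attempts. Because `until` defaults to `true` in these files, a single attempt suffices unless `make_X_until` is overridden. A rough Python sketch of that loop (an approximation for illustration, not Ansible's actual implementation):

```python
import time

def run_with_retries(task, until=lambda result: True, retries=3, delay=1):
    # Approximate Ansible's task retry loop: run the task once, then while
    # the `until` condition is false, sleep `delay` seconds and retry,
    # up to `retries` additional attempts.
    result = task()
    for _ in range(retries):
        if until(result):
            break
        time.sleep(delay)
        result = task()
    return result
```

With the default `until`, the loop exits after the first attempt, which matches the behavior these tasks get when `make_X_until` is left unset.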
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_redis_deploy_cleanup.yml
---
- name: Debug make_redis_deploy_cleanup_env
  when: make_redis_deploy_cleanup_env is defined
  ansible.builtin.debug:
    var: make_redis_deploy_cleanup_env
- name: Debug make_redis_deploy_cleanup_params
  when: make_redis_deploy_cleanup_params is defined
  ansible.builtin.debug:
    var: make_redis_deploy_cleanup_params
- name: Run redis_deploy_cleanup
  retries: "{{ make_redis_deploy_cleanup_retries | default(omit) }}"
  delay: "{{ make_redis_deploy_cleanup_delay | default(omit) }}"
  until: "{{ make_redis_deploy_cleanup_until | default(true) }}"
  register: "make_redis_deploy_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make redis_deploy_cleanup"
    dry_run: "{{ make_redis_deploy_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_redis_deploy_cleanup_env|default({})), **(make_redis_deploy_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_set_slower_etcd_profile.yml
---
- name: Debug make_set_slower_etcd_profile_env
  when: make_set_slower_etcd_profile_env is defined
  ansible.builtin.debug:
    var: make_set_slower_etcd_profile_env
- name: Debug make_set_slower_etcd_profile_params
  when: make_set_slower_etcd_profile_params is defined
  ansible.builtin.debug:
    var: make_set_slower_etcd_profile_params
- name: Run set_slower_etcd_profile
  retries: "{{ make_set_slower_etcd_profile_retries | default(omit) }}"
  delay: "{{ make_set_slower_etcd_profile_delay | default(omit) }}"
  until: "{{ make_set_slower_etcd_profile_until | default(true) }}"
  register: "make_set_slower_etcd_profile_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls"
    script: "make set_slower_etcd_profile"
    dry_run: "{{ make_set_slower_etcd_profile_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_set_slower_etcd_profile_env|default({})), **(make_set_slower_etcd_profile_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_download_tools.yml
---
- name: Debug make_download_tools_env
  when: make_download_tools_env is defined
  ansible.builtin.debug:
    var: make_download_tools_env
- name: Debug make_download_tools_params
  when: make_download_tools_params is defined
  ansible.builtin.debug:
    var: make_download_tools_params
- name: Run download_tools
  retries: "{{ make_download_tools_retries | default(omit) }}"
  delay: "{{ make_download_tools_delay | default(omit) }}"
  until: "{{ make_download_tools_until | default(true) }}"
  register: "make_download_tools_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make download_tools"
    dry_run: "{{ make_download_tools_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_download_tools_env|default({})), **(make_download_tools_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nfs.yml
---
- name: Debug make_nfs_env
  when: make_nfs_env is defined
  ansible.builtin.debug:
    var: make_nfs_env
- name: Debug make_nfs_params
  when: make_nfs_params is defined
  ansible.builtin.debug:
    var: make_nfs_params
- name: Run nfs
  retries: "{{ make_nfs_retries | default(omit) }}"
  delay: "{{ make_nfs_delay | default(omit) }}"
  until: "{{ make_nfs_until | default(true) }}"
  register: "make_nfs_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make nfs"
    dry_run: "{{ make_nfs_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_nfs_env|default({})), **(make_nfs_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_nfs_cleanup.yml
---
- name: Debug make_nfs_cleanup_env
  when: make_nfs_cleanup_env is defined
  ansible.builtin.debug:
    var: make_nfs_cleanup_env
- name: Debug make_nfs_cleanup_params
  when: make_nfs_cleanup_params is defined
  ansible.builtin.debug:
    var: make_nfs_cleanup_params
- name: Run nfs_cleanup
  retries: "{{ make_nfs_cleanup_retries | default(omit) }}"
  delay: "{{ make_nfs_cleanup_delay | default(omit) }}"
  until: "{{ make_nfs_cleanup_until | default(true) }}"
  register: "make_nfs_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make nfs_cleanup"
    dry_run: "{{ make_nfs_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_nfs_cleanup_env|default({})), **(make_nfs_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc.yml
---
- name: Debug make_crc_env
  when: make_crc_env is defined
  ansible.builtin.debug:
    var: make_crc_env
- name: Debug make_crc_params
  when: make_crc_params is defined
  ansible.builtin.debug:
    var: make_crc_params
- name: Run crc
  retries: "{{ make_crc_retries | default(omit) }}"
  delay: "{{ make_crc_delay | default(omit) }}"
  until: "{{ make_crc_until | default(true) }}"
  register: "make_crc_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make crc"
    dry_run: "{{ make_crc_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_env|default({})), **(make_crc_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_cleanup.yml
---
- name: Debug make_crc_cleanup_env
  when: make_crc_cleanup_env is defined
  ansible.builtin.debug:
    var: make_crc_cleanup_env
- name: Debug make_crc_cleanup_params
  when: make_crc_cleanup_params is defined
  ansible.builtin.debug:
    var: make_crc_cleanup_params
- name: Run crc_cleanup
  retries: "{{ make_crc_cleanup_retries | default(omit) }}"
  delay: "{{ make_crc_cleanup_delay | default(omit) }}"
  until: "{{ make_crc_cleanup_until | default(true) }}"
  register: "make_crc_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make crc_cleanup"
    dry_run: "{{ make_crc_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_cleanup_env|default({})), **(make_crc_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_scrub.yml
---
- name: Debug make_crc_scrub_env
  when: make_crc_scrub_env is defined
  ansible.builtin.debug:
    var: make_crc_scrub_env
- name: Debug make_crc_scrub_params
  when: make_crc_scrub_params is defined
  ansible.builtin.debug:
    var: make_crc_scrub_params
- name: Run crc_scrub
  retries: "{{ make_crc_scrub_retries | default(omit) }}"
  delay: "{{ make_crc_scrub_delay | default(omit) }}"
  until: "{{ make_crc_scrub_until | default(true) }}"
  register: "make_crc_scrub_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make crc_scrub"
    dry_run: "{{ make_crc_scrub_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_scrub_env|default({})), **(make_crc_scrub_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_attach_default_interface.yml
---
- name: Debug make_crc_attach_default_interface_env
  when: make_crc_attach_default_interface_env is defined
  ansible.builtin.debug:
    var: make_crc_attach_default_interface_env
- name: Debug make_crc_attach_default_interface_params
  when: make_crc_attach_default_interface_params is defined
  ansible.builtin.debug:
    var: make_crc_attach_default_interface_params
- name: Run crc_attach_default_interface
  retries: "{{ make_crc_attach_default_interface_retries | default(omit) }}"
  delay: "{{ make_crc_attach_default_interface_delay | default(omit) }}"
  until: "{{ make_crc_attach_default_interface_until | default(true) }}"
  register: "make_crc_attach_default_interface_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make crc_attach_default_interface"
    dry_run: "{{ make_crc_attach_default_interface_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_attach_default_interface_env|default({})), **(make_crc_attach_default_interface_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_crc_attach_default_interface_cleanup.yml
---
- name: Debug make_crc_attach_default_interface_cleanup_env
  when: make_crc_attach_default_interface_cleanup_env is defined
  ansible.builtin.debug:
    var: make_crc_attach_default_interface_cleanup_env
- name: Debug make_crc_attach_default_interface_cleanup_params
  when: make_crc_attach_default_interface_cleanup_params is defined
  ansible.builtin.debug:
    var: make_crc_attach_default_interface_cleanup_params
- name: Run crc_attach_default_interface_cleanup
  retries: "{{ make_crc_attach_default_interface_cleanup_retries | default(omit) }}"
  delay: "{{ make_crc_attach_default_interface_cleanup_delay | default(omit) }}"
  until: "{{ make_crc_attach_default_interface_cleanup_until | default(true) }}"
  register: "make_crc_attach_default_interface_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make crc_attach_default_interface_cleanup"
    dry_run: "{{ make_crc_attach_default_interface_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_crc_attach_default_interface_cleanup_env|default({})), **(make_crc_attach_default_interface_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_network.yml
---
- name: Debug make_ipv6_lab_network_env
  when: make_ipv6_lab_network_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_network_env
- name: Debug make_ipv6_lab_network_params
  when: make_ipv6_lab_network_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_network_params
- name: Run ipv6_lab_network
  retries: "{{ make_ipv6_lab_network_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_network_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_network_until | default(true) }}"
  register: "make_ipv6_lab_network_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_network"
    dry_run: "{{ make_ipv6_lab_network_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_network_env|default({})), **(make_ipv6_lab_network_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_network_cleanup.yml
---
- name: Debug make_ipv6_lab_network_cleanup_env
  when: make_ipv6_lab_network_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_network_cleanup_env
- name: Debug make_ipv6_lab_network_cleanup_params
  when: make_ipv6_lab_network_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_network_cleanup_params
- name: Run ipv6_lab_network_cleanup
  retries: "{{ make_ipv6_lab_network_cleanup_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_network_cleanup_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_network_cleanup_until | default(true) }}"
  register: "make_ipv6_lab_network_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_network_cleanup"
    dry_run: "{{ make_ipv6_lab_network_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_network_cleanup_env|default({})), **(make_ipv6_lab_network_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_nat64_router.yml
---
- name: Debug make_ipv6_lab_nat64_router_env
  when: make_ipv6_lab_nat64_router_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_nat64_router_env
- name: Debug make_ipv6_lab_nat64_router_params
  when: make_ipv6_lab_nat64_router_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_nat64_router_params
- name: Run ipv6_lab_nat64_router
  retries: "{{ make_ipv6_lab_nat64_router_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_nat64_router_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_nat64_router_until | default(true) }}"
  register: "make_ipv6_lab_nat64_router_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_nat64_router"
    dry_run: "{{ make_ipv6_lab_nat64_router_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_nat64_router_env|default({})), **(make_ipv6_lab_nat64_router_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_nat64_router_cleanup.yml
---
- name: Debug make_ipv6_lab_nat64_router_cleanup_env
  when: make_ipv6_lab_nat64_router_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_nat64_router_cleanup_env
- name: Debug make_ipv6_lab_nat64_router_cleanup_params
  when: make_ipv6_lab_nat64_router_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_nat64_router_cleanup_params
- name: Run ipv6_lab_nat64_router_cleanup
  retries: "{{ make_ipv6_lab_nat64_router_cleanup_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_nat64_router_cleanup_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_nat64_router_cleanup_until | default(true) }}"
  register: "make_ipv6_lab_nat64_router_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_nat64_router_cleanup"
    dry_run: "{{ make_ipv6_lab_nat64_router_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_nat64_router_cleanup_env|default({})), **(make_ipv6_lab_nat64_router_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_sno.yml
---
- name: Debug make_ipv6_lab_sno_env
  when: make_ipv6_lab_sno_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_sno_env
- name: Debug make_ipv6_lab_sno_params
  when: make_ipv6_lab_sno_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_sno_params
- name: Run ipv6_lab_sno
  retries: "{{ make_ipv6_lab_sno_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_sno_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_sno_until | default(true) }}"
  register: "make_ipv6_lab_sno_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_sno"
    dry_run: "{{ make_ipv6_lab_sno_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_sno_env|default({})), **(make_ipv6_lab_sno_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_sno_cleanup.yml
---
- name: Debug make_ipv6_lab_sno_cleanup_env
  when: make_ipv6_lab_sno_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_sno_cleanup_env
- name: Debug make_ipv6_lab_sno_cleanup_params
  when: make_ipv6_lab_sno_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_sno_cleanup_params
- name: Run ipv6_lab_sno_cleanup
  retries: "{{ make_ipv6_lab_sno_cleanup_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_sno_cleanup_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_sno_cleanup_until | default(true) }}"
  register: "make_ipv6_lab_sno_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_sno_cleanup"
    dry_run: "{{ make_ipv6_lab_sno_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_sno_cleanup_env|default({})), **(make_ipv6_lab_sno_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab.yml
---
- name: Debug make_ipv6_lab_env
  when: make_ipv6_lab_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_env
- name: Debug make_ipv6_lab_params
  when: make_ipv6_lab_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_params
- name: Run ipv6_lab
  retries: "{{ make_ipv6_lab_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_until | default(true) }}"
  register: "make_ipv6_lab_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab"
    dry_run: "{{ make_ipv6_lab_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_env|default({})), **(make_ipv6_lab_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_ipv6_lab_cleanup.yml
---
- name: Debug make_ipv6_lab_cleanup_env
  when: make_ipv6_lab_cleanup_env is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_cleanup_env
- name: Debug make_ipv6_lab_cleanup_params
  when: make_ipv6_lab_cleanup_params is defined
  ansible.builtin.debug:
    var: make_ipv6_lab_cleanup_params
- name: Run ipv6_lab_cleanup
  retries: "{{ make_ipv6_lab_cleanup_retries | default(omit) }}"
  delay: "{{ make_ipv6_lab_cleanup_delay | default(omit) }}"
  until: "{{ make_ipv6_lab_cleanup_until | default(true) }}"
  register: "make_ipv6_lab_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make ipv6_lab_cleanup"
    dry_run: "{{ make_ipv6_lab_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_ipv6_lab_cleanup_env|default({})), **(make_ipv6_lab_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_attach_default_interface.yml
---
- name: Debug make_attach_default_interface_env
  when: make_attach_default_interface_env is defined
  ansible.builtin.debug:
    var: make_attach_default_interface_env
- name: Debug make_attach_default_interface_params
  when: make_attach_default_interface_params is defined
  ansible.builtin.debug:
    var: make_attach_default_interface_params
- name: Run attach_default_interface
  retries: "{{ make_attach_default_interface_retries | default(omit) }}"
  delay: "{{ make_attach_default_interface_delay | default(omit) }}"
  until: "{{ make_attach_default_interface_until | default(true) }}"
  register: "make_attach_default_interface_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make attach_default_interface"
    dry_run: "{{ make_attach_default_interface_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_attach_default_interface_env|default({})), **(make_attach_default_interface_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_attach_default_interface_cleanup.yml
---
- name: Debug make_attach_default_interface_cleanup_env
  when: make_attach_default_interface_cleanup_env is defined
  ansible.builtin.debug:
    var: make_attach_default_interface_cleanup_env
- name: Debug make_attach_default_interface_cleanup_params
  when: make_attach_default_interface_cleanup_params is defined
  ansible.builtin.debug:
    var: make_attach_default_interface_cleanup_params
- name: Run attach_default_interface_cleanup
  retries: "{{ make_attach_default_interface_cleanup_retries | default(omit) }}"
  delay: "{{ make_attach_default_interface_cleanup_delay | default(omit) }}"
  until: "{{ make_attach_default_interface_cleanup_until | default(true) }}"
  register: "make_attach_default_interface_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make attach_default_interface_cleanup"
    dry_run: "{{ make_attach_default_interface_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_attach_default_interface_cleanup_env|default({})), **(make_attach_default_interface_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_network_isolation_bridge.yml
---
- name: Debug make_network_isolation_bridge_env
  when: make_network_isolation_bridge_env is defined
  ansible.builtin.debug:
    var: make_network_isolation_bridge_env
- name: Debug make_network_isolation_bridge_params
  when: make_network_isolation_bridge_params is defined
  ansible.builtin.debug:
    var: make_network_isolation_bridge_params
- name: Run network_isolation_bridge
  retries: "{{ make_network_isolation_bridge_retries | default(omit) }}"
  delay: "{{ make_network_isolation_bridge_delay | default(omit) }}"
  until: "{{ make_network_isolation_bridge_until | default(true) }}"
  register: "make_network_isolation_bridge_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make network_isolation_bridge"
    dry_run: "{{ make_network_isolation_bridge_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_network_isolation_bridge_env|default({})), **(make_network_isolation_bridge_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_network_isolation_bridge_cleanup.yml
---
- name: Debug make_network_isolation_bridge_cleanup_env
  when: make_network_isolation_bridge_cleanup_env is defined
  ansible.builtin.debug:
    var: make_network_isolation_bridge_cleanup_env
- name: Debug make_network_isolation_bridge_cleanup_params
  when: make_network_isolation_bridge_cleanup_params is defined
  ansible.builtin.debug:
    var: make_network_isolation_bridge_cleanup_params
- name: Run network_isolation_bridge_cleanup
  retries: "{{ make_network_isolation_bridge_cleanup_retries | default(omit) }}"
  delay: "{{ make_network_isolation_bridge_cleanup_delay | default(omit) }}"
  until: "{{ make_network_isolation_bridge_cleanup_until | default(true) }}"
  register: "make_network_isolation_bridge_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make network_isolation_bridge_cleanup"
    dry_run: "{{ make_network_isolation_bridge_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_network_isolation_bridge_cleanup_env|default({})), **(make_network_isolation_bridge_cleanup_params|default({}))) }}"
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_baremetal_compute.yml
---
- name: Debug make_edpm_baremetal_compute_env
  when: make_edpm_baremetal_compute_env is defined
  ansible.builtin.debug:
    var: make_edpm_baremetal_compute_env
- name: Debug make_edpm_baremetal_compute_params
  when: make_edpm_baremetal_compute_params is defined
  ansible.builtin.debug:
    var: make_edpm_baremetal_compute_params
- name: Run edpm_baremetal_compute
  retries: "{{ make_edpm_baremetal_compute_retries | default(omit) }}"
  delay: "{{ make_edpm_baremetal_compute_delay | default(omit) }}"
  until: "{{ make_edpm_baremetal_compute_until | default(true) }}"
  register: "make_edpm_baremetal_compute_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_baremetal_compute"
    dry_run: "{{ make_edpm_baremetal_compute_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_baremetal_compute_env|default({})), **(make_edpm_baremetal_compute_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_compute.yml
---
- name: Debug make_edpm_compute_env
  when: make_edpm_compute_env is defined
  ansible.builtin.debug:
    var: make_edpm_compute_env
- name: Debug make_edpm_compute_params
  when: make_edpm_compute_params is defined
  ansible.builtin.debug:
    var: make_edpm_compute_params
- name: Run edpm_compute
  retries: "{{ make_edpm_compute_retries | default(omit) }}"
  delay: "{{ make_edpm_compute_delay | default(omit) }}"
  until: "{{ make_edpm_compute_until | default(true) }}"
  register: "make_edpm_compute_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_compute"
    dry_run: "{{ make_edpm_compute_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_compute_env|default({})), **(make_edpm_compute_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_compute_bootc.yml
---
- name: Debug make_edpm_compute_bootc_env
  when: make_edpm_compute_bootc_env is defined
  ansible.builtin.debug:
    var: make_edpm_compute_bootc_env
- name: Debug make_edpm_compute_bootc_params
  when: make_edpm_compute_bootc_params is defined
  ansible.builtin.debug:
    var: make_edpm_compute_bootc_params
- name: Run edpm_compute_bootc
  retries: "{{ make_edpm_compute_bootc_retries | default(omit) }}"
  delay: "{{ make_edpm_compute_bootc_delay | default(omit) }}"
  until: "{{ make_edpm_compute_bootc_until | default(true) }}"
  register: "make_edpm_compute_bootc_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_compute_bootc"
    dry_run: "{{ make_edpm_compute_bootc_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_compute_bootc_env|default({})), **(make_edpm_compute_bootc_params|default({}))) }}"
# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_ansible_runner.yml
---
- name: Debug make_edpm_ansible_runner_env
  when: make_edpm_ansible_runner_env is defined
  ansible.builtin.debug:
    var: make_edpm_ansible_runner_env
- name: Debug make_edpm_ansible_runner_params
  when: make_edpm_ansible_runner_params is defined
  ansible.builtin.debug:
    var: make_edpm_ansible_runner_params
- name: Run edpm_ansible_runner
  retries: "{{ make_edpm_ansible_runner_retries | default(omit) }}"
  delay: "{{ make_edpm_ansible_runner_delay | default(omit) }}"
  until: "{{ make_edpm_ansible_runner_until | default(true) }}"
  register: "make_edpm_ansible_runner_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_ansible_runner"
    dry_run: "{{ make_edpm_ansible_runner_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_ansible_runner_env|default({})), **(make_edpm_ansible_runner_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_computes_bgp.yml
---
- name: Debug make_edpm_computes_bgp_env
  when: make_edpm_computes_bgp_env is defined
  ansible.builtin.debug:
    var: make_edpm_computes_bgp_env
- name: Debug make_edpm_computes_bgp_params
  when: make_edpm_computes_bgp_params is defined
  ansible.builtin.debug:
    var: make_edpm_computes_bgp_params
- name: Run edpm_computes_bgp
  retries: "{{ make_edpm_computes_bgp_retries | default(omit) }}"
  delay: "{{ make_edpm_computes_bgp_delay | default(omit) }}"
  until: "{{ make_edpm_computes_bgp_until | default(true) }}"
  register: "make_edpm_computes_bgp_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_computes_bgp"
    dry_run: "{{ make_edpm_computes_bgp_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_computes_bgp_env|default({})), **(make_edpm_computes_bgp_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_compute_repos.yml
---
- name: Debug make_edpm_compute_repos_env
  when: make_edpm_compute_repos_env is defined
  ansible.builtin.debug:
    var: make_edpm_compute_repos_env
- name: Debug make_edpm_compute_repos_params
  when: make_edpm_compute_repos_params is defined
  ansible.builtin.debug:
    var: make_edpm_compute_repos_params
- name: Run edpm_compute_repos
  retries: "{{ make_edpm_compute_repos_retries | default(omit) }}"
  delay: "{{ make_edpm_compute_repos_delay | default(omit) }}"
  until: "{{ make_edpm_compute_repos_until | default(true) }}"
  register: "make_edpm_compute_repos_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_compute_repos"
    dry_run: "{{ make_edpm_compute_repos_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_compute_repos_env|default({})), **(make_edpm_compute_repos_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_compute_cleanup.yml
---
- name: Debug make_edpm_compute_cleanup_env
  when: make_edpm_compute_cleanup_env is defined
  ansible.builtin.debug:
    var: make_edpm_compute_cleanup_env
- name: Debug make_edpm_compute_cleanup_params
  when: make_edpm_compute_cleanup_params is defined
  ansible.builtin.debug:
    var: make_edpm_compute_cleanup_params
- name: Run edpm_compute_cleanup
  retries: "{{ make_edpm_compute_cleanup_retries | default(omit) }}"
  delay: "{{ make_edpm_compute_cleanup_delay | default(omit) }}"
  until: "{{ make_edpm_compute_cleanup_until | default(true) }}"
  register: "make_edpm_compute_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make edpm_compute_cleanup"
    dry_run: "{{ make_edpm_compute_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_edpm_compute_cleanup_env|default({})), **(make_edpm_compute_cleanup_params|default({}))) }}"

# home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_networker.yml
---
- name: Debug make_edpm_networker_env
  when: make_edpm_networker_env is defined
  ansible.builtin.debug:
    var: make_edpm_networker_env
- name: Debug make_edpm_networker_params
  when: make_edpm_networker_params is defined
ansible.builtin.debug: var: make_edpm_networker_params - name: Run edpm_networker retries: "{{ make_edpm_networker_retries | default(omit) }}" delay: "{{ make_edpm_networker_delay | default(omit) }}" until: "{{ make_edpm_networker_until | default(true) }}" register: "make_edpm_networker_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_networker" dry_run: "{{ make_edpm_networker_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_networker_env|default({})), **(make_edpm_networker_params|default({}))) }}" ././@LongLink0000644000000000000000000000016700000000000011607 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_networker_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_net0000644000175000017500000000207415117130422033361 0ustar zuulzuul--- - name: Debug make_edpm_networker_cleanup_env when: make_edpm_networker_cleanup_env is defined ansible.builtin.debug: var: make_edpm_networker_cleanup_env - name: Debug make_edpm_networker_cleanup_params when: make_edpm_networker_cleanup_params is defined ansible.builtin.debug: var: make_edpm_networker_cleanup_params - name: Run edpm_networker_cleanup retries: "{{ make_edpm_networker_cleanup_retries | default(omit) }}" delay: "{{ make_edpm_networker_cleanup_delay | default(omit) }}" until: "{{ make_edpm_networker_cleanup_until | default(true) }}" register: "make_edpm_networker_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_networker_cleanup" dry_run: "{{ make_edpm_networker_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ 
dict((make_edpm_networker_cleanup_env|default({})), **(make_edpm_networker_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016500000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_deploy_instance.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_edpm_dep0000644000175000017500000000203615117130422033341 0ustar zuulzuul--- - name: Debug make_edpm_deploy_instance_env when: make_edpm_deploy_instance_env is defined ansible.builtin.debug: var: make_edpm_deploy_instance_env - name: Debug make_edpm_deploy_instance_params when: make_edpm_deploy_instance_params is defined ansible.builtin.debug: var: make_edpm_deploy_instance_params - name: Run edpm_deploy_instance retries: "{{ make_edpm_deploy_instance_retries | default(omit) }}" delay: "{{ make_edpm_deploy_instance_delay | default(omit) }}" until: "{{ make_edpm_deploy_instance_until | default(true) }}" register: "make_edpm_deploy_instance_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make edpm_deploy_instance" dry_run: "{{ make_edpm_deploy_instance_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_edpm_deploy_instance_env|default({})), **(make_edpm_deploy_instance_params|default({}))) }}" ././@LongLink0000644000000000000000000000015700000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_tripleo_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_tripleo_0000644000175000017500000000170415117130422033402 0ustar zuulzuul--- - name: Debug make_tripleo_deploy_env when: make_tripleo_deploy_env is defined ansible.builtin.debug: var: make_tripleo_deploy_env - name: Debug 
make_tripleo_deploy_params when: make_tripleo_deploy_params is defined ansible.builtin.debug: var: make_tripleo_deploy_params - name: Run tripleo_deploy retries: "{{ make_tripleo_deploy_retries | default(omit) }}" delay: "{{ make_tripleo_deploy_delay | default(omit) }}" until: "{{ make_tripleo_deploy_until | default(true) }}" register: "make_tripleo_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make tripleo_deploy" dry_run: "{{ make_tripleo_deploy_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_tripleo_deploy_env|default({})), **(make_tripleo_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_deploy.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalo0000644000175000017500000000176115117130422033375 0ustar zuulzuul--- - name: Debug make_standalone_deploy_env when: make_standalone_deploy_env is defined ansible.builtin.debug: var: make_standalone_deploy_env - name: Debug make_standalone_deploy_params when: make_standalone_deploy_params is defined ansible.builtin.debug: var: make_standalone_deploy_params - name: Run standalone_deploy retries: "{{ make_standalone_deploy_retries | default(omit) }}" delay: "{{ make_standalone_deploy_delay | default(omit) }}" until: "{{ make_standalone_deploy_until | default(true) }}" register: "make_standalone_deploy_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make standalone_deploy" dry_run: "{{ make_standalone_deploy_dryrun|default(false)|bool }}" extra_args: "{{ 
dict((make_standalone_deploy_env|default({})), **(make_standalone_deploy_params|default({}))) }}" ././@LongLink0000644000000000000000000000016000000000000011600 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_sync.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalo0000644000175000017500000000172315117130422033373 0ustar zuulzuul--- - name: Debug make_standalone_sync_env when: make_standalone_sync_env is defined ansible.builtin.debug: var: make_standalone_sync_env - name: Debug make_standalone_sync_params when: make_standalone_sync_params is defined ansible.builtin.debug: var: make_standalone_sync_params - name: Run standalone_sync retries: "{{ make_standalone_sync_retries | default(omit) }}" delay: "{{ make_standalone_sync_delay | default(omit) }}" until: "{{ make_standalone_sync_until | default(true) }}" register: "make_standalone_sync_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make standalone_sync" dry_run: "{{ make_standalone_sync_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_standalone_sync_env|default({})), **(make_standalone_sync_params|default({}))) }}" ././@LongLink0000644000000000000000000000015300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalo0000644000175000017500000000161015117130422033366 0ustar zuulzuul--- - name: Debug make_standalone_env when: make_standalone_env is defined ansible.builtin.debug: var: make_standalone_env - name: Debug make_standalone_params when: make_standalone_params is defined ansible.builtin.debug: var: make_standalone_params - name: Run 
standalone retries: "{{ make_standalone_retries | default(omit) }}" delay: "{{ make_standalone_delay | default(omit) }}" until: "{{ make_standalone_until | default(true) }}" register: "make_standalone_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make standalone" dry_run: "{{ make_standalone_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_standalone_env|default({})), **(make_standalone_params|default({}))) }}" ././@LongLink0000644000000000000000000000016300000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalo0000644000175000017500000000200015117130422033360 0ustar zuulzuul--- - name: Debug make_standalone_cleanup_env when: make_standalone_cleanup_env is defined ansible.builtin.debug: var: make_standalone_cleanup_env - name: Debug make_standalone_cleanup_params when: make_standalone_cleanup_params is defined ansible.builtin.debug: var: make_standalone_cleanup_params - name: Run standalone_cleanup retries: "{{ make_standalone_cleanup_retries | default(omit) }}" delay: "{{ make_standalone_cleanup_delay | default(omit) }}" until: "{{ make_standalone_cleanup_until | default(true) }}" register: "make_standalone_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make standalone_cleanup" dry_run: "{{ make_standalone_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_standalone_cleanup_env|default({})), **(make_standalone_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000016400000000000011604 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_snapshot.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalo0000644000175000017500000000201715117130422033370 0ustar zuulzuul--- - name: Debug make_standalone_snapshot_env when: make_standalone_snapshot_env is defined ansible.builtin.debug: var: make_standalone_snapshot_env - name: Debug make_standalone_snapshot_params when: make_standalone_snapshot_params is defined ansible.builtin.debug: var: make_standalone_snapshot_params - name: Run standalone_snapshot retries: "{{ make_standalone_snapshot_retries | default(omit) }}" delay: "{{ make_standalone_snapshot_delay | default(omit) }}" until: "{{ make_standalone_snapshot_until | default(true) }}" register: "make_standalone_snapshot_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make standalone_snapshot" dry_run: "{{ make_standalone_snapshot_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_standalone_snapshot_env|default({})), **(make_standalone_snapshot_params|default({}))) }}" ././@LongLink0000644000000000000000000000016200000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalone_revert.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_standalo0000644000175000017500000000176115117130422033375 0ustar zuulzuul--- - name: Debug make_standalone_revert_env when: make_standalone_revert_env is defined ansible.builtin.debug: var: make_standalone_revert_env - name: Debug make_standalone_revert_params when: make_standalone_revert_params is defined ansible.builtin.debug: var: make_standalone_revert_params - name: Run standalone_revert retries: "{{ 
make_standalone_revert_retries | default(omit) }}" delay: "{{ make_standalone_revert_delay | default(omit) }}" until: "{{ make_standalone_revert_until | default(true) }}" register: "make_standalone_revert_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make standalone_revert" dry_run: "{{ make_standalone_revert_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_standalone_revert_env|default({})), **(make_standalone_revert_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cifmw_prepare.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cifmw_pr0000644000175000017500000000166515117130422033401 0ustar zuulzuul--- - name: Debug make_cifmw_prepare_env when: make_cifmw_prepare_env is defined ansible.builtin.debug: var: make_cifmw_prepare_env - name: Debug make_cifmw_prepare_params when: make_cifmw_prepare_params is defined ansible.builtin.debug: var: make_cifmw_prepare_params - name: Run cifmw_prepare retries: "{{ make_cifmw_prepare_retries | default(omit) }}" delay: "{{ make_cifmw_prepare_delay | default(omit) }}" until: "{{ make_cifmw_prepare_until | default(true) }}" register: "make_cifmw_prepare_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make cifmw_prepare" dry_run: "{{ make_cifmw_prepare_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_cifmw_prepare_env|default({})), **(make_cifmw_prepare_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cifmw_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_cifmw_cl0000644000175000017500000000166515117130422033356 0ustar zuulzuul--- - name: Debug make_cifmw_cleanup_env when: make_cifmw_cleanup_env is defined ansible.builtin.debug: var: make_cifmw_cleanup_env - name: Debug make_cifmw_cleanup_params when: make_cifmw_cleanup_params is defined ansible.builtin.debug: var: make_cifmw_cleanup_params - name: Run cifmw_cleanup retries: "{{ make_cifmw_cleanup_retries | default(omit) }}" delay: "{{ make_cifmw_cleanup_delay | default(omit) }}" until: "{{ make_cifmw_cleanup_until | default(true) }}" register: "make_cifmw_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make cifmw_cleanup" dry_run: "{{ make_cifmw_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_cifmw_cleanup_env|default({})), **(make_cifmw_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_network.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_ne0000644000175000017500000000166515117130422033340 0ustar zuulzuul--- - name: Debug make_bmaas_network_env when: make_bmaas_network_env is defined ansible.builtin.debug: var: make_bmaas_network_env - name: Debug make_bmaas_network_params when: make_bmaas_network_params is defined ansible.builtin.debug: var: make_bmaas_network_params - name: Run bmaas_network retries: "{{ make_bmaas_network_retries | default(omit) }}" delay: "{{ make_bmaas_network_delay | default(omit) }}" until: "{{ make_bmaas_network_until | default(true) 
}}" register: "make_bmaas_network_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_network" dry_run: "{{ make_bmaas_network_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_network_env|default({})), **(make_bmaas_network_params|default({}))) }}" ././@LongLink0000644000000000000000000000016600000000000011606 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_network_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_ne0000644000175000017500000000205515117130422033332 0ustar zuulzuul--- - name: Debug make_bmaas_network_cleanup_env when: make_bmaas_network_cleanup_env is defined ansible.builtin.debug: var: make_bmaas_network_cleanup_env - name: Debug make_bmaas_network_cleanup_params when: make_bmaas_network_cleanup_params is defined ansible.builtin.debug: var: make_bmaas_network_cleanup_params - name: Run bmaas_network_cleanup retries: "{{ make_bmaas_network_cleanup_retries | default(omit) }}" delay: "{{ make_bmaas_network_cleanup_delay | default(omit) }}" until: "{{ make_bmaas_network_cleanup_until | default(true) }}" register: "make_bmaas_network_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_network_cleanup" dry_run: "{{ make_bmaas_network_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_network_cleanup_env|default({})), **(make_bmaas_network_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000020700000000000011602 Lustar 
rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_route_crc_and_crc_bmaas_networks.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_ro0000644000175000017500000000245415117130422033353 0ustar zuulzuul--- - name: Debug make_bmaas_route_crc_and_crc_bmaas_networks_env when: make_bmaas_route_crc_and_crc_bmaas_networks_env is defined ansible.builtin.debug: var: make_bmaas_route_crc_and_crc_bmaas_networks_env - name: Debug make_bmaas_route_crc_and_crc_bmaas_networks_params when: make_bmaas_route_crc_and_crc_bmaas_networks_params is defined ansible.builtin.debug: var: make_bmaas_route_crc_and_crc_bmaas_networks_params - name: Run bmaas_route_crc_and_crc_bmaas_networks retries: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_retries | default(omit) }}" delay: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_delay | default(omit) }}" until: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_until | default(true) }}" register: "make_bmaas_route_crc_and_crc_bmaas_networks_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_route_crc_and_crc_bmaas_networks" dry_run: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_route_crc_and_crc_bmaas_networks_env|default({})), **(make_bmaas_route_crc_and_crc_bmaas_networks_params|default({}))) }}" ././@LongLink0000644000000000000000000000021700000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_route_crc_and_crc_bmaas_networks_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_ro0000644000175000017500000000264415117130422033354 0ustar zuulzuul--- - name: Debug 
make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_env when: make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_env is defined ansible.builtin.debug: var: make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_env - name: Debug make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_params when: make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_params is defined ansible.builtin.debug: var: make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_params - name: Run bmaas_route_crc_and_crc_bmaas_networks_cleanup retries: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_retries | default(omit) }}" delay: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_delay | default(omit) }}" until: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_until | default(true) }}" register: "make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_route_crc_and_crc_bmaas_networks_cleanup" dry_run: "{{ make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_env|default({})), **(make_bmaas_route_crc_and_crc_bmaas_networks_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000015600000000000011605 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_metallb.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_me0000644000175000017500000000166515117130422033337 0ustar zuulzuul--- - name: Debug make_bmaas_metallb_env when: make_bmaas_metallb_env is defined ansible.builtin.debug: var: make_bmaas_metallb_env - name: Debug make_bmaas_metallb_params when: make_bmaas_metallb_params is defined ansible.builtin.debug: var: 
make_bmaas_metallb_params - name: Run bmaas_metallb retries: "{{ make_bmaas_metallb_retries | default(omit) }}" delay: "{{ make_bmaas_metallb_delay | default(omit) }}" until: "{{ make_bmaas_metallb_until | default(true) }}" register: "make_bmaas_metallb_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_metallb" dry_run: "{{ make_bmaas_metallb_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_metallb_env|default({})), **(make_bmaas_metallb_params|default({}))) }}" ././@LongLink0000644000000000000000000000017100000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_crc_attach_network.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_cr0000644000175000017500000000213215117130422033330 0ustar zuulzuul--- - name: Debug make_bmaas_crc_attach_network_env when: make_bmaas_crc_attach_network_env is defined ansible.builtin.debug: var: make_bmaas_crc_attach_network_env - name: Debug make_bmaas_crc_attach_network_params when: make_bmaas_crc_attach_network_params is defined ansible.builtin.debug: var: make_bmaas_crc_attach_network_params - name: Run bmaas_crc_attach_network retries: "{{ make_bmaas_crc_attach_network_retries | default(omit) }}" delay: "{{ make_bmaas_crc_attach_network_delay | default(omit) }}" until: "{{ make_bmaas_crc_attach_network_until | default(true) }}" register: "make_bmaas_crc_attach_network_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_crc_attach_network" dry_run: "{{ make_bmaas_crc_attach_network_dryrun|default(false)|bool }}" extra_args: "{{ 
dict((make_bmaas_crc_attach_network_env|default({})), **(make_bmaas_crc_attach_network_params|default({}))) }}" ././@LongLink0000644000000000000000000000020100000000000011574 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_crc_attach_network_cleanup.ymlhome/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_cr0000644000175000017500000000232215117130422033331 0ustar zuulzuul--- - name: Debug make_bmaas_crc_attach_network_cleanup_env when: make_bmaas_crc_attach_network_cleanup_env is defined ansible.builtin.debug: var: make_bmaas_crc_attach_network_cleanup_env - name: Debug make_bmaas_crc_attach_network_cleanup_params when: make_bmaas_crc_attach_network_cleanup_params is defined ansible.builtin.debug: var: make_bmaas_crc_attach_network_cleanup_params - name: Run bmaas_crc_attach_network_cleanup retries: "{{ make_bmaas_crc_attach_network_cleanup_retries | default(omit) }}" delay: "{{ make_bmaas_crc_attach_network_cleanup_delay | default(omit) }}" until: "{{ make_bmaas_crc_attach_network_cleanup_until | default(true) }}" register: "make_bmaas_crc_attach_network_cleanup_status" cifmw.general.ci_script: output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts" chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup" script: "make bmaas_crc_attach_network_cleanup" dry_run: "{{ make_bmaas_crc_attach_network_cleanup_dryrun|default(false)|bool }}" extra_args: "{{ dict((make_bmaas_crc_attach_network_cleanup_env|default({})), **(make_bmaas_crc_attach_network_cleanup_params|default({}))) }}" ././@LongLink0000644000000000000000000000017300000000000011604 Lustar 
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_crc_baremetal_bridge.yml

---
- name: Debug make_bmaas_crc_baremetal_bridge_env
  when: make_bmaas_crc_baremetal_bridge_env is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_baremetal_bridge_env

- name: Debug make_bmaas_crc_baremetal_bridge_params
  when: make_bmaas_crc_baremetal_bridge_params is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_baremetal_bridge_params

- name: Run bmaas_crc_baremetal_bridge
  retries: "{{ make_bmaas_crc_baremetal_bridge_retries | default(omit) }}"
  delay: "{{ make_bmaas_crc_baremetal_bridge_delay | default(omit) }}"
  until: "{{ make_bmaas_crc_baremetal_bridge_until | default(true) }}"
  register: "make_bmaas_crc_baremetal_bridge_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_crc_baremetal_bridge"
    dry_run: "{{ make_bmaas_crc_baremetal_bridge_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_crc_baremetal_bridge_env|default({})), **(make_bmaas_crc_baremetal_bridge_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_crc_baremetal_bridge_cleanup.yml

---
- name: Debug make_bmaas_crc_baremetal_bridge_cleanup_env
  when: make_bmaas_crc_baremetal_bridge_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_baremetal_bridge_cleanup_env

- name: Debug make_bmaas_crc_baremetal_bridge_cleanup_params
  when: make_bmaas_crc_baremetal_bridge_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_crc_baremetal_bridge_cleanup_params

- name: Run bmaas_crc_baremetal_bridge_cleanup
  retries: "{{ make_bmaas_crc_baremetal_bridge_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_crc_baremetal_bridge_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_crc_baremetal_bridge_cleanup_until | default(true) }}"
  register: "make_bmaas_crc_baremetal_bridge_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_crc_baremetal_bridge_cleanup"
    dry_run: "{{ make_bmaas_crc_baremetal_bridge_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_crc_baremetal_bridge_cleanup_env|default({})), **(make_bmaas_crc_baremetal_bridge_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_baremetal_net_nad.yml

---
- name: Debug make_bmaas_baremetal_net_nad_env
  when: make_bmaas_baremetal_net_nad_env is defined
  ansible.builtin.debug:
    var: make_bmaas_baremetal_net_nad_env

- name: Debug make_bmaas_baremetal_net_nad_params
  when: make_bmaas_baremetal_net_nad_params is defined
  ansible.builtin.debug:
    var: make_bmaas_baremetal_net_nad_params

- name: Run bmaas_baremetal_net_nad
  retries: "{{ make_bmaas_baremetal_net_nad_retries | default(omit) }}"
  delay: "{{ make_bmaas_baremetal_net_nad_delay | default(omit) }}"
  until: "{{ make_bmaas_baremetal_net_nad_until | default(true) }}"
  register: "make_bmaas_baremetal_net_nad_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_baremetal_net_nad"
    dry_run: "{{ make_bmaas_baremetal_net_nad_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_baremetal_net_nad_env|default({})), **(make_bmaas_baremetal_net_nad_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_baremetal_net_nad_cleanup.yml

---
- name: Debug make_bmaas_baremetal_net_nad_cleanup_env
  when: make_bmaas_baremetal_net_nad_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_baremetal_net_nad_cleanup_env

- name: Debug make_bmaas_baremetal_net_nad_cleanup_params
  when: make_bmaas_baremetal_net_nad_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_baremetal_net_nad_cleanup_params

- name: Run bmaas_baremetal_net_nad_cleanup
  retries: "{{ make_bmaas_baremetal_net_nad_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_baremetal_net_nad_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_baremetal_net_nad_cleanup_until | default(true) }}"
  register: "make_bmaas_baremetal_net_nad_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_baremetal_net_nad_cleanup"
    dry_run: "{{ make_bmaas_baremetal_net_nad_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_baremetal_net_nad_cleanup_env|default({})), **(make_bmaas_baremetal_net_nad_cleanup_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_metallb_cleanup.yml

---
- name: Debug make_bmaas_metallb_cleanup_env
  when: make_bmaas_metallb_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_metallb_cleanup_env

- name: Debug make_bmaas_metallb_cleanup_params
  when: make_bmaas_metallb_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_metallb_cleanup_params

- name: Run bmaas_metallb_cleanup
  retries: "{{ make_bmaas_metallb_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_metallb_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_metallb_cleanup_until | default(true) }}"
  register: "make_bmaas_metallb_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_metallb_cleanup"
    dry_run: "{{ make_bmaas_metallb_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_metallb_cleanup_env|default({})), **(make_bmaas_metallb_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_virtual_bms.yml

---
- name: Debug make_bmaas_virtual_bms_env
  when: make_bmaas_virtual_bms_env is defined
  ansible.builtin.debug:
    var: make_bmaas_virtual_bms_env

- name: Debug make_bmaas_virtual_bms_params
  when: make_bmaas_virtual_bms_params is defined
  ansible.builtin.debug:
    var: make_bmaas_virtual_bms_params

- name: Run bmaas_virtual_bms
  retries: "{{ make_bmaas_virtual_bms_retries | default(omit) }}"
  delay: "{{ make_bmaas_virtual_bms_delay | default(omit) }}"
  until: "{{ make_bmaas_virtual_bms_until | default(true) }}"
  register: "make_bmaas_virtual_bms_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_virtual_bms"
    dry_run: "{{ make_bmaas_virtual_bms_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_virtual_bms_env|default({})), **(make_bmaas_virtual_bms_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_virtual_bms_cleanup.yml

---
- name: Debug make_bmaas_virtual_bms_cleanup_env
  when: make_bmaas_virtual_bms_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_virtual_bms_cleanup_env

- name: Debug make_bmaas_virtual_bms_cleanup_params
  when: make_bmaas_virtual_bms_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_virtual_bms_cleanup_params

- name: Run bmaas_virtual_bms_cleanup
  retries: "{{ make_bmaas_virtual_bms_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_virtual_bms_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_virtual_bms_cleanup_until | default(true) }}"
  register: "make_bmaas_virtual_bms_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_virtual_bms_cleanup"
    dry_run: "{{ make_bmaas_virtual_bms_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_virtual_bms_cleanup_env|default({})), **(make_bmaas_virtual_bms_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_sushy_emulator.yml

---
- name: Debug make_bmaas_sushy_emulator_env
  when: make_bmaas_sushy_emulator_env is defined
  ansible.builtin.debug:
    var: make_bmaas_sushy_emulator_env

- name: Debug make_bmaas_sushy_emulator_params
  when: make_bmaas_sushy_emulator_params is defined
  ansible.builtin.debug:
    var: make_bmaas_sushy_emulator_params

- name: Run bmaas_sushy_emulator
  retries: "{{ make_bmaas_sushy_emulator_retries | default(omit) }}"
  delay: "{{ make_bmaas_sushy_emulator_delay | default(omit) }}"
  until: "{{ make_bmaas_sushy_emulator_until | default(true) }}"
  register: "make_bmaas_sushy_emulator_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_sushy_emulator"
    dry_run: "{{ make_bmaas_sushy_emulator_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_sushy_emulator_env|default({})), **(make_bmaas_sushy_emulator_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_sushy_emulator_cleanup.yml

---
- name: Debug make_bmaas_sushy_emulator_cleanup_env
  when: make_bmaas_sushy_emulator_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_sushy_emulator_cleanup_env

- name: Debug make_bmaas_sushy_emulator_cleanup_params
  when: make_bmaas_sushy_emulator_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_sushy_emulator_cleanup_params

- name: Run bmaas_sushy_emulator_cleanup
  retries: "{{ make_bmaas_sushy_emulator_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_sushy_emulator_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_sushy_emulator_cleanup_until | default(true) }}"
  register: "make_bmaas_sushy_emulator_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_sushy_emulator_cleanup"
    dry_run: "{{ make_bmaas_sushy_emulator_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_sushy_emulator_cleanup_env|default({})), **(make_bmaas_sushy_emulator_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_sushy_emulator_wait.yml

---
- name: Debug make_bmaas_sushy_emulator_wait_env
  when: make_bmaas_sushy_emulator_wait_env is defined
  ansible.builtin.debug:
    var: make_bmaas_sushy_emulator_wait_env

- name: Debug make_bmaas_sushy_emulator_wait_params
  when: make_bmaas_sushy_emulator_wait_params is defined
  ansible.builtin.debug:
    var: make_bmaas_sushy_emulator_wait_params

- name: Run bmaas_sushy_emulator_wait
  retries: "{{ make_bmaas_sushy_emulator_wait_retries | default(omit) }}"
  delay: "{{ make_bmaas_sushy_emulator_wait_delay | default(omit) }}"
  until: "{{ make_bmaas_sushy_emulator_wait_until | default(true) }}"
  register: "make_bmaas_sushy_emulator_wait_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_sushy_emulator_wait"
    dry_run: "{{ make_bmaas_sushy_emulator_wait_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_sushy_emulator_wait_env|default({})), **(make_bmaas_sushy_emulator_wait_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_generate_nodes_yaml.yml

---
- name: Debug make_bmaas_generate_nodes_yaml_env
  when: make_bmaas_generate_nodes_yaml_env is defined
  ansible.builtin.debug:
    var: make_bmaas_generate_nodes_yaml_env

- name: Debug make_bmaas_generate_nodes_yaml_params
  when: make_bmaas_generate_nodes_yaml_params is defined
  ansible.builtin.debug:
    var: make_bmaas_generate_nodes_yaml_params

- name: Run bmaas_generate_nodes_yaml
  retries: "{{ make_bmaas_generate_nodes_yaml_retries | default(omit) }}"
  delay: "{{ make_bmaas_generate_nodes_yaml_delay | default(omit) }}"
  until: "{{ make_bmaas_generate_nodes_yaml_until | default(true) }}"
  register: "make_bmaas_generate_nodes_yaml_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_generate_nodes_yaml"
    dry_run: "{{ make_bmaas_generate_nodes_yaml_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_generate_nodes_yaml_env|default({})), **(make_bmaas_generate_nodes_yaml_params|default({}))) }}"
home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas.yml

---
- name: Debug make_bmaas_env
  when: make_bmaas_env is defined
  ansible.builtin.debug:
    var: make_bmaas_env

- name: Debug make_bmaas_params
  when: make_bmaas_params is defined
  ansible.builtin.debug:
    var: make_bmaas_params

- name: Run bmaas
  retries: "{{ make_bmaas_retries | default(omit) }}"
  delay: "{{ make_bmaas_delay | default(omit) }}"
  until: "{{ make_bmaas_until | default(true) }}"
  register: "make_bmaas_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas"
    dry_run: "{{ make_bmaas_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_env|default({})), **(make_bmaas_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/roles/install_yamls_makes/tasks/make_bmaas_cleanup.yml

---
- name: Debug make_bmaas_cleanup_env
  when: make_bmaas_cleanup_env is defined
  ansible.builtin.debug:
    var: make_bmaas_cleanup_env

- name: Debug make_bmaas_cleanup_params
  when: make_bmaas_cleanup_params is defined
  ansible.builtin.debug:
    var: make_bmaas_cleanup_params

- name: Run bmaas_cleanup
  retries: "{{ make_bmaas_cleanup_retries | default(omit) }}"
  delay: "{{ make_bmaas_cleanup_delay | default(omit) }}"
  until: "{{ make_bmaas_cleanup_until | default(true) }}"
  register: "make_bmaas_cleanup_status"
  cifmw.general.ci_script:
    output_dir: "{{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts"
    chdir: "/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup"
    script: "make bmaas_cleanup"
    dry_run: "{{ make_bmaas_cleanup_dryrun|default(false)|bool }}"
    extra_args: "{{ dict((make_bmaas_cleanup_env|default({})), **(make_bmaas_cleanup_params|default({}))) }}"

home/zuul/zuul-output/logs/ci-framework-data/artifacts/installed-packages.yml

NetworkManager: [{arch: x86_64, epoch: 1, name: NetworkManager, release: 1.el9, source: rpm, version: 1.54.2}]
NetworkManager-libnm: [{arch: x86_64, epoch: 1, name: NetworkManager-libnm, release: 1.el9, source: rpm, version: 1.54.2}]
NetworkManager-team: [{arch: x86_64, epoch: 1, name: NetworkManager-team, release: 1.el9, source: rpm, version: 1.54.2}]
NetworkManager-tui: [{arch: x86_64, epoch: 1, name: NetworkManager-tui, release: 1.el9, source: rpm, version: 1.54.2}]
aardvark-dns: [{arch: x86_64, epoch: 2, name: aardvark-dns, release: 1.el9, source: rpm, version: 1.17.0}]
abattis-cantarell-fonts: [{arch: noarch, epoch: null, name: abattis-cantarell-fonts, release: 4.el9, source: rpm, version: '0.301'}]
acl: [{arch: x86_64, epoch: null, name: acl, release: 4.el9, source: rpm, version: 2.3.1}]
adobe-source-code-pro-fonts: [{arch: noarch, epoch: null, name: adobe-source-code-pro-fonts, release: 12.el9.1, source: rpm, version: 2.030.1.050}]
alternatives: [{arch: x86_64, epoch: null, name: alternatives, release: 2.el9, source: rpm, version: '1.24'}]
annobin: [{arch: x86_64, epoch: null, name: annobin, release: 1.el9, source: rpm, version: '12.98'}]
ansible-core: [{arch: x86_64, epoch: 1, name: ansible-core, release: 2.el9, source: rpm, version: 2.14.18}]
attr: [{arch: x86_64, epoch: null, name: attr, release: 3.el9, source: rpm, version: 2.5.1}]
audit: [{arch: x86_64, epoch: null, name: audit, release: 7.el9, source: rpm, version: 3.1.5}]
audit-libs: [{arch: x86_64, epoch: null, name: audit-libs, release: 7.el9, source: rpm, version: 3.1.5}]
authselect: [{arch: x86_64, epoch: null, name: authselect, release: 3.el9, source: rpm, version: 1.2.6}]
authselect-compat: [{arch: x86_64, epoch: null, name: authselect-compat, release: 3.el9, source: rpm, version: 1.2.6}]
authselect-libs: [{arch: x86_64, epoch: null, name: authselect-libs, release: 3.el9, source: rpm, version: 1.2.6}]
avahi-libs: [{arch: x86_64, epoch: null, name: avahi-libs, release: 23.el9, source: rpm, version: '0.8'}]
basesystem: [{arch: noarch, epoch: null, name: basesystem, release: 13.el9, source: rpm, version: '11'}]
bash: [{arch: x86_64, epoch: null, name: bash, release: 9.el9, source: rpm, version: 5.1.8}]
bash-completion: [{arch: noarch, epoch: 1, name: bash-completion, release: 5.el9, source: rpm, version: '2.11'}]
binutils: [{arch: x86_64, epoch: null, name: binutils, release: 69.el9, source: rpm, version: 2.35.2}]
binutils-gold: [{arch: x86_64, epoch: null, name: binutils-gold, release: 69.el9, source: rpm, version: 2.35.2}]
buildah: [{arch: x86_64, epoch: 2, name: buildah, release: 1.el9, source: rpm, version: 1.41.3}]
bzip2: [{arch: x86_64, epoch: null, name: bzip2, release: 10.el9, source: rpm, version: 1.0.8}]
bzip2-libs: [{arch: x86_64, epoch: null, name: bzip2-libs, release: 10.el9, source: rpm, version: 1.0.8}]
c-ares: [{arch: x86_64, epoch: null, name: c-ares, release: 2.el9, source: rpm, version: 1.19.1}]
ca-certificates: [{arch: noarch, epoch: null, name: ca-certificates, release: 91.el9, source: rpm, version: 2025.2.80_v9.0.305}]
centos-gpg-keys: [{arch: noarch, epoch: null, name: centos-gpg-keys, release: 30.el9, source: rpm, version: '9.0'}]
centos-logos: [{arch: x86_64, epoch: null, name: centos-logos, release: 3.el9, source: rpm, version: '90.8'}]
centos-stream-release: [{arch: noarch, epoch: null, name: centos-stream-release, release: 30.el9, source: rpm, version: '9.0'}]
centos-stream-repos: [{arch: noarch, epoch: null, name: centos-stream-repos, release: 30.el9, source: rpm, version: '9.0'}]
checkpolicy: [{arch: x86_64, epoch: null, name: checkpolicy, release: 1.el9, source: rpm, version: '3.6'}]
chrony: [{arch: x86_64, epoch: null, name: chrony, release: 1.el9, source: rpm, version: '4.8'}]
cloud-init: [{arch: noarch, epoch: null, name: cloud-init, release: 7.el9, source: rpm, version: '24.4'}]
cloud-utils-growpart: [{arch: x86_64, epoch: null, name: cloud-utils-growpart, release: 1.el9, source: rpm, version: '0.33'}]
cmake-filesystem: [{arch: x86_64, epoch: null, name: cmake-filesystem, release: 3.el9, source: rpm, version: 3.31.8}]
cockpit-bridge: [{arch: noarch, epoch: null, name: cockpit-bridge, release: 1.el9, source: rpm, version: '348'}]
cockpit-system: [{arch: noarch, epoch: null, name: cockpit-system, release: 1.el9, source: rpm, version: '348'}]
cockpit-ws: [{arch: x86_64, epoch: null, name: cockpit-ws, release: 1.el9, source: rpm, version: '348'}]
cockpit-ws-selinux: [{arch: x86_64, epoch: null, name: cockpit-ws-selinux, release: 1.el9, source: rpm, version: '348'}]
conmon: [{arch: x86_64, epoch: 3, name: conmon, release: 1.el9, source: rpm, version: 2.1.13}]
container-selinux: [{arch: noarch, epoch: 4, name: container-selinux, release: 1.el9, source: rpm, version: 2.242.0}]
containers-common: [{arch: x86_64, epoch: 4, name: containers-common, release: 134.el9, source: rpm, version: '1'}]
containers-common-extra: [{arch: x86_64, epoch: 4, name: containers-common-extra, release: 134.el9, source: rpm, version: '1'}]
coreutils: [{arch: x86_64, epoch: null, name: coreutils, release: 39.el9, source: rpm, version: '8.32'}]
coreutils-common: [{arch: x86_64, epoch: null, name: coreutils-common, release: 39.el9, source: rpm, version: '8.32'}]
cpio: [{arch: x86_64, epoch: null, name: cpio, release: 16.el9, source: rpm, version: '2.13'}]
cpp: [{arch: x86_64, epoch: null, name: cpp, release: 14.el9, source: rpm, version: 11.5.0}]
cracklib: [{arch: x86_64, epoch: null, name: cracklib, release: 27.el9, source: rpm, version: 2.9.6}]
cracklib-dicts: [{arch: x86_64, epoch: null, name: cracklib-dicts, release: 27.el9, source: rpm, version: 2.9.6}]
createrepo_c: [{arch: x86_64, epoch: null, name: createrepo_c, release: 4.el9, source: rpm, version: 0.20.1}]
createrepo_c-libs: [{arch: x86_64, epoch: null, name: createrepo_c-libs, release: 4.el9, source: rpm, version: 0.20.1}]
criu: [{arch: x86_64, epoch: null, name: criu, release: 3.el9, source: rpm, version: '3.19'}]
criu-libs: [{arch: x86_64, epoch: null, name: criu-libs, release: 3.el9, source: rpm, version: '3.19'}]
cronie: [{arch: x86_64, epoch: null, name: cronie, release: 14.el9, source: rpm, version: 1.5.7}]
cronie-anacron: [{arch: x86_64, epoch: null, name: cronie-anacron, release: 14.el9, source: rpm, version: 1.5.7}]
crontabs: [{arch: noarch, epoch: null, name: crontabs, release: 26.20190603git.el9, source: rpm, version: '1.11'}]
crun: [{arch: x86_64, epoch: null, name: crun, release: 1.el9, source: rpm, version: '1.24'}]
crypto-policies: [{arch: noarch, epoch: null, name: crypto-policies, release: 1.gite9c4db2.el9, source: rpm, version: '20251126'}]
crypto-policies-scripts: [{arch: noarch, epoch: null, name: crypto-policies-scripts, release: 1.gite9c4db2.el9, source: rpm, version: '20251126'}]
cryptsetup-libs: [{arch: x86_64, epoch: null, name: cryptsetup-libs, release: 2.el9, source: rpm, version: 2.8.1}]
curl: [{arch: x86_64, epoch: null, name: curl, release: 38.el9, source: rpm, version: 7.76.1}]
cyrus-sasl: [{arch: x86_64, epoch: null, name: cyrus-sasl, release: 21.el9, source: rpm, version: 2.1.27}]
cyrus-sasl-devel: [{arch: x86_64, epoch: null, name: cyrus-sasl-devel, release: 21.el9, source: rpm, version: 2.1.27}]
cyrus-sasl-gssapi: [{arch: x86_64, epoch: null, name: cyrus-sasl-gssapi, release: 21.el9, source: rpm, version: 2.1.27}]
cyrus-sasl-lib: [{arch: x86_64, epoch: null, name: cyrus-sasl-lib, release: 21.el9, source: rpm, version: 2.1.27}]
dbus: [{arch: x86_64, epoch: 1, name: dbus, release: 8.el9, source: rpm, version: 1.12.20}]
dbus-broker: [{arch: x86_64, epoch: null, name: dbus-broker, release: 7.el9, source: rpm, version: '28'}]
dbus-common: [{arch: noarch, epoch: 1, name: dbus-common, release: 8.el9, source: rpm, version: 1.12.20}]
dbus-libs: [{arch: x86_64, epoch: 1, name: dbus-libs, release: 8.el9, source: rpm, version: 1.12.20}]
dbus-tools: [{arch: x86_64, epoch: 1, name: dbus-tools, release: 8.el9, source: rpm, version: 1.12.20}]
debugedit: [{arch: x86_64, epoch: null, name: debugedit, release: 11.el9, source: rpm, version: '5.0'}]
dejavu-sans-fonts: [{arch: noarch, epoch: null, name: dejavu-sans-fonts, release: 18.el9, source: rpm, version: '2.37'}]
desktop-file-utils: [{arch: x86_64, epoch: null, name: desktop-file-utils, release: 6.el9, source: rpm, version: '0.26'}]
device-mapper: [{arch: x86_64, epoch: 9, name: device-mapper, release: 2.el9, source: rpm, version: 1.02.206}]
device-mapper-libs: [{arch: x86_64, epoch: 9, name: device-mapper-libs, release: 2.el9, source: rpm, version: 1.02.206}]
dhcp-client: [{arch: x86_64, epoch: 12, name: dhcp-client, release: 19.b1.el9, source: rpm, version: 4.4.2}]
dhcp-common: [{arch: noarch, epoch: 12, name: dhcp-common, release: 19.b1.el9, source: rpm, version: 4.4.2}]
diffutils: [{arch: x86_64, epoch: null, name: diffutils, release: 12.el9, source: rpm, version: '3.7'}]
dnf: [{arch: noarch, epoch: null, name: dnf, release: 31.el9, source: rpm, version: 4.14.0}]
dnf-data: [{arch: noarch, epoch: null, name: dnf-data, release: 31.el9, source: rpm, version: 4.14.0}]
dnf-plugins-core: [{arch: noarch, epoch: null, name: dnf-plugins-core, release: 24.el9, source: rpm, version: 4.3.0}]
dracut: [{arch: x86_64, epoch: null, name: dracut, release: 102.git20250818.el9, source: rpm, version: '057'}]
dracut-config-generic: [{arch: x86_64, epoch: null, name: dracut-config-generic, release: 102.git20250818.el9, source: rpm, version: '057'}]
dracut-network: [{arch: x86_64, epoch: null, name: dracut-network, release: 102.git20250818.el9, source: rpm, version: '057'}]
dracut-squash: [{arch: x86_64, epoch: null, name: dracut-squash, release: 102.git20250818.el9, source: rpm, version: '057'}]
dwz: [{arch: x86_64, epoch: null, name: dwz, release: 1.el9, source: rpm, version: '0.16'}]
e2fsprogs: [{arch: x86_64, epoch: null, name: e2fsprogs, release: 8.el9, source: rpm, version: 1.46.5}]
e2fsprogs-libs: [{arch: x86_64, epoch: null, name: e2fsprogs-libs, release: 8.el9, source: rpm, version: 1.46.5}]
ed: [{arch: x86_64, epoch: null, name: ed, release: 12.el9, source: rpm, version: 1.14.2}]
efi-srpm-macros: [{arch: noarch, epoch: null, name: efi-srpm-macros, release: 4.el9, source: rpm, version: '6'}]
elfutils: [{arch: x86_64, epoch: null, name: elfutils, release: 1.el9, source: rpm, version: '0.194'}]
elfutils-debuginfod-client: [{arch: x86_64, epoch: null, name: elfutils-debuginfod-client, release: 1.el9, source: rpm, version: '0.194'}]
elfutils-default-yama-scope: [{arch: noarch, epoch: null, name: elfutils-default-yama-scope, release: 1.el9, source: rpm, version: '0.194'}]
elfutils-libelf: [{arch: x86_64, epoch: null, name: elfutils-libelf, release: 1.el9, source: rpm, version: '0.194'}]
elfutils-libs: [{arch: x86_64, epoch: null, name: elfutils-libs, release: 1.el9, source: rpm, version: '0.194'}]
emacs-filesystem: [{arch: noarch, epoch: 1, name: emacs-filesystem, release: 18.el9, source: rpm, version: '27.2'}]
enchant: [{arch: x86_64, epoch: 1, name: enchant, release: 30.el9, source: rpm, version: 1.6.0}]
ethtool: [{arch: x86_64, epoch: 2, name: ethtool, release: 2.el9, source: rpm, version: '6.15'}]
expat: [{arch: x86_64, epoch: null, name: expat, release: 5.el9, source: rpm, version: 2.5.0}]
expect: [{arch: x86_64, epoch: null, name: expect, release: 16.el9, source: rpm, version: 5.45.4}]
file: [{arch: x86_64, epoch: null, name: file, release: 16.el9, source: rpm, version: '5.39'}]
file-libs: [{arch: x86_64, epoch: null, name: file-libs, release: 16.el9, source: rpm, version: '5.39'}]
filesystem: [{arch: x86_64, epoch: null, name: filesystem, release: 5.el9, source: rpm, version: '3.16'}]
findutils: [{arch: x86_64, epoch: 1, name: findutils, release: 7.el9, source: rpm, version: 4.8.0}]
fonts-filesystem: [{arch: noarch, epoch: 1, name: fonts-filesystem, release: 7.el9.1, source: rpm, version: 2.0.5}]
fonts-srpm-macros: [{arch: noarch, epoch: 1, name: fonts-srpm-macros, release: 7.el9.1, source: rpm, version: 2.0.5}]
fuse-common: [{arch: x86_64, epoch: null, name: fuse-common, release: 9.el9, source: rpm, version: 3.10.2}]
fuse-libs: [{arch: x86_64, epoch: null, name: fuse-libs, release: 17.el9, source: rpm, version: 2.9.9}]
fuse-overlayfs: [{arch: x86_64, epoch: null, name: fuse-overlayfs, release: 1.el9, source: rpm, version: '1.16'}]
fuse3: [{arch: x86_64, epoch: null, name: fuse3, release: 9.el9, source: rpm, version: 3.10.2}]
fuse3-libs: [{arch: x86_64, epoch: null, name: fuse3-libs, release: 9.el9, source: rpm, version: 3.10.2}]
gawk: [{arch: x86_64, epoch: null, name: gawk, release: 6.el9, source: rpm, version: 5.1.0}]
gawk-all-langpacks: [{arch: x86_64, epoch: null, name: gawk-all-langpacks, release: 6.el9, source: rpm, version: 5.1.0}]
gcc: [{arch: x86_64, epoch: null, name: gcc, release: 14.el9, source: rpm, version: 11.5.0}]
gcc-c++: [{arch: x86_64, epoch: null, name: gcc-c++, release: 14.el9, source: rpm, version: 11.5.0}]
gcc-plugin-annobin: [{arch: x86_64, epoch: null, name: gcc-plugin-annobin, release: 14.el9, source: rpm, version: 11.5.0}]
gdb-minimal: [{arch: x86_64, epoch: null, name: gdb-minimal, release: 2.el9, source: rpm, version: '16.3'}]
gdbm-libs: [{arch: x86_64, epoch: 1, name: gdbm-libs, release: 1.el9, source: rpm, version: '1.23'}]
gdisk: [{arch: x86_64, epoch: null, name: gdisk, release: 5.el9, source: rpm, version: 1.0.7}]
gdk-pixbuf2: [{arch: x86_64, epoch: null, name: gdk-pixbuf2, release: 6.el9, source: rpm, version: 2.42.6}]
geolite2-city: [{arch: noarch, epoch: null, name: geolite2-city, release: 6.el9, source: rpm, version: '20191217'}]
geolite2-country: [{arch: noarch, epoch: null, name: geolite2-country, release: 6.el9, source: rpm, version: '20191217'}]
gettext: [{arch: x86_64, epoch: null, name: gettext, release: 8.el9, source: rpm, version: '0.21'}]
gettext-libs: [{arch: x86_64, epoch: null, name: gettext-libs, release: 8.el9, source: rpm, version: '0.21'}]
ghc-srpm-macros: [{arch: noarch, epoch: null, name: ghc-srpm-macros, release: 6.el9, source: rpm, version: 1.5.0}]
git: [{arch: x86_64, epoch: null, name: git, release: 1.el9, source: rpm, version: 2.47.3}]
git-core: [{arch: x86_64, epoch: null, name: git-core, release: 1.el9, source: rpm, version: 2.47.3}]
git-core-doc: [{arch: noarch, epoch: null, name: git-core-doc, release: 1.el9, source: rpm, version: 2.47.3}]
glib-networking: [{arch: x86_64, epoch: null, name: glib-networking, release: 3.el9, source: rpm, version: 2.68.3}]
glib2: [{arch: x86_64, epoch: null, name: glib2, release: 18.el9, source: rpm, version: 2.68.4}]
glibc: [{arch: x86_64, epoch: null, name: glibc, release: 244.el9, source: rpm, version: '2.34'}]
glibc-common: [{arch: x86_64, epoch: null, name: glibc-common, release: 244.el9, source: rpm, version: '2.34'}]
glibc-devel: [{arch: x86_64, epoch: null, name: glibc-devel, release: 244.el9, source: rpm, version: '2.34'}]
glibc-gconv-extra: [{arch: x86_64, epoch: null, name: glibc-gconv-extra, release: 244.el9, source: rpm, version: '2.34'}]
glibc-headers: [{arch: x86_64, epoch: null, name: glibc-headers, release: 244.el9, source: rpm, version: '2.34'}]
glibc-langpack-en: [{arch: x86_64, epoch: null, name: glibc-langpack-en, release: 244.el9, source: rpm, version: '2.34'}]
gmp: [{arch: x86_64, epoch: 1, name: gmp, release: 13.el9, source: rpm, version: 6.2.0}]
gnupg2: [{arch: x86_64, epoch: null, name: gnupg2, release: 4.el9, source: rpm, version: 2.3.3}]
gnutls: [{arch: x86_64, epoch: null, name: gnutls, release: 1.el9, source: rpm, version: 3.8.10}]
go-srpm-macros: [{arch: noarch, epoch: null, name: go-srpm-macros, release: 1.el9, source: rpm, version: 3.8.1}]
gobject-introspection: [{arch: x86_64, epoch: null, name: gobject-introspection, release: 11.el9, source: rpm, version: 1.68.0}]
gpg-pubkey: [{arch: null, epoch: null, name: gpg-pubkey, release: 5ccc5b19, source: rpm, version: 8483c65d}]
gpgme: [{arch: x86_64, epoch: null, name: gpgme, release: 6.el9, source: rpm, version: 1.15.1}]
grep: [{arch: x86_64, epoch: null, name: grep, release: 5.el9, source: rpm, version: '3.6'}]
groff-base: [{arch: x86_64, epoch: null, name: groff-base, release: 10.el9, source: rpm, version: 1.22.4}]
grub2-common: [{arch: noarch, epoch: 1, name: grub2-common, release: 120.el9, source: rpm, version: '2.06'}]
grub2-pc: [{arch: x86_64, epoch: 1, name: grub2-pc, release: 120.el9, source: rpm, version: '2.06'}]
grub2-pc-modules: [{arch: noarch, epoch: 1, name: grub2-pc-modules, release: 120.el9, source: rpm, version: '2.06'}]
grub2-tools: [{arch: x86_64, epoch: 1, name: grub2-tools, release: 120.el9, source: rpm, version: '2.06'}]
grub2-tools-minimal: [{arch: x86_64, epoch: 1, name: grub2-tools-minimal, release: 120.el9, source: rpm, version: '2.06'}]
grubby: [{arch: x86_64, epoch: null, name: grubby, release: 69.el9, source: rpm, version: '8.40'}]
gsettings-desktop-schemas: [{arch: x86_64, epoch: null, name: gsettings-desktop-schemas, release: 8.el9, source: rpm, version: '40.0'}]
gssproxy: [{arch: x86_64, epoch: null, name: gssproxy, release: 7.el9, source: rpm, version: 0.8.4}]
gzip: [{arch: x86_64, epoch: null, name: gzip, release: 1.el9, source: rpm, version: '1.12'}]
hostname: [{arch: x86_64, epoch: null, name: hostname, release: 6.el9, source: rpm, version: '3.23'}]
hunspell: [{arch: x86_64, epoch: null, name: hunspell, release: 11.el9, source: rpm, version: 1.7.0}]
hunspell-en-GB: [{arch: noarch, epoch: null, name: hunspell-en-GB, release: 20.el9, source: rpm, version: 0.20140811.1}]
hunspell-en-US: [{arch: noarch, epoch: null, name: hunspell-en-US, release: 20.el9, source: rpm, version: 0.20140811.1}]
hunspell-filesystem: [{arch: x86_64, epoch: null, name: hunspell-filesystem, release: 11.el9, source: rpm, version: 1.7.0}]
hwdata: [{arch: noarch, epoch: null, name: hwdata, release: 9.20.el9, source: rpm, version: '0.348'}]
ima-evm-utils: [{arch: x86_64, epoch: null, name: ima-evm-utils, release: 2.el9, source: rpm, version: 1.6.2}]
info: [{arch: x86_64, epoch: null, name: info, release: 15.el9, source: rpm, version: '6.7'}]
inih: [{arch: x86_64, epoch: null, name: inih, release: 6.el9, source: rpm, version: '49'}]
initscripts-rename-device: [{arch: x86_64, epoch: null, name: initscripts-rename-device, release: 4.el9, source: rpm, version: 10.11.8}]
initscripts-service: [{arch: noarch, epoch: null, name: initscripts-service, release: 4.el9, source: rpm, version: 10.11.8}]
ipcalc: [{arch: x86_64, epoch: null, name: ipcalc, release: 5.el9, source: rpm, version: 1.0.0}]
iproute: [{arch: x86_64, epoch: null, name: iproute, release: 1.el9, source: rpm, version: 6.17.0}]
iproute-tc: [{arch: x86_64, epoch: null, name: iproute-tc, release: 1.el9, source: rpm, version: 6.17.0}]
iptables-libs: [{arch: x86_64, epoch: null, name: iptables-libs, release: 11.el9, source: rpm, version: 1.8.10}]
iptables-nft: [{arch: x86_64, epoch: null, name: iptables-nft, release: 11.el9, source: rpm, version: 1.8.10}]
iptables-nft-services: [{arch: noarch, epoch: null, name: iptables-nft-services, release: 11.el9, source: rpm, version: 1.8.10}]
iputils: [{arch: x86_64, epoch: null, name: iputils, release: 15.el9, source: rpm, version: '20210202'}]
irqbalance: [{arch: x86_64, epoch: 2, name: irqbalance, release: 5.el9, source: rpm, version: 1.9.4}]
jansson: [{arch: x86_64, epoch: null, name: jansson, release: 1.el9, source: rpm, version: '2.14'}]
jq: [{arch: x86_64, epoch: null, name: jq, release: 19.el9, source: rpm, version: '1.6'}]
json-c: [{arch: x86_64, epoch: null, name: json-c, release: 11.el9, source: rpm, version: '0.14'}]
json-glib: [{arch: x86_64, epoch: null, name: json-glib, release: 1.el9, source: rpm, version: 1.6.6}]
kbd: [{arch: x86_64, epoch: null, name: kbd, release: 11.el9, source: rpm, version: 2.4.0}]
kbd-legacy: [{arch: noarch, epoch: null, name: kbd-legacy, release: 11.el9, source: rpm, version: 2.4.0}]
kbd-misc: [{arch: noarch, epoch: null, name: kbd-misc, release: 11.el9, source: rpm, version: 2.4.0}]
kernel: [{arch: x86_64, epoch: null, name: kernel, release: 648.el9, source: rpm, version: 5.14.0}]
kernel-core: [{arch: x86_64, epoch: null, name: kernel-core, release: 648.el9, source: rpm, version: 5.14.0}]
kernel-headers: [{arch: x86_64, epoch: null, name: kernel-headers, release: 648.el9, source: rpm, version: 5.14.0}]
kernel-modules: [{arch: x86_64, epoch: null, name: kernel-modules, release: 648.el9, source: rpm, version: 5.14.0}]
kernel-modules-core: [{arch: x86_64, epoch: null, name: kernel-modules-core, release: 648.el9, source: rpm, version: 5.14.0}]
kernel-srpm-macros: [{arch: noarch, epoch: null, name: kernel-srpm-macros, release: 14.el9, source: rpm, version: '1.0'}]
kernel-tools: [{arch: x86_64, epoch: null, name: kernel-tools, release: 648.el9, source: rpm, version: 5.14.0}]
kernel-tools-libs: [{arch: x86_64, epoch: null, name: kernel-tools-libs, release: 648.el9, source: rpm, version: 5.14.0}]
kexec-tools: [{arch: x86_64, epoch: null, name: kexec-tools, release: 12.el9, source: rpm, version: 2.0.29}]
keyutils: [{arch: x86_64, epoch: null, name: keyutils, release: 1.el9, source: rpm, version: 1.6.3}]
keyutils-libs: [{arch: x86_64, epoch: null, name: keyutils-libs, release: 1.el9, source: rpm, version: 1.6.3}]
kmod: [{arch: x86_64, epoch: null, name: kmod, release: 11.el9, source: rpm, version: '28'}]
kmod-libs: [{arch: x86_64, epoch: null, name: kmod-libs, release: 11.el9, source: rpm, version: '28'}]
kpartx: [{arch: x86_64, epoch: null, name: kpartx, release: 39.el9, source: rpm, version: 0.8.7}]
krb5-libs: [{arch: x86_64, epoch: null, name: krb5-libs, release: 8.el9, source: rpm, version: 1.21.1}]
langpacks-core-en_GB: [{arch: noarch, epoch: null, name: langpacks-core-en_GB, release: 16.el9, source: rpm, version: '3.0'}]
langpacks-core-font-en: [{arch: noarch, epoch: null, name: langpacks-core-font-en, release: 16.el9, source: rpm, version: '3.0'}]
langpacks-en_GB: [{arch: noarch, epoch: null, name: langpacks-en_GB, release: 16.el9, source: rpm, version: '3.0'}]
less: [{arch: x86_64, epoch: null, name: less, release: 6.el9, source: rpm, version: '590'}]
libacl: [{arch: x86_64, epoch: null, name: libacl, release: 4.el9, source: rpm, version: 2.3.1}]
libappstream-glib: [{arch: x86_64, epoch: null, name: libappstream-glib, release: 5.el9, source: rpm, version: 0.7.18}]
libarchive: [{arch: x86_64, epoch: null, name: libarchive, release: 6.el9, source: rpm, version: 3.5.3}]
libassuan: [{arch: x86_64, epoch: null, name: libassuan, release: 3.el9, source: rpm, version: 2.5.5}]
libattr: [{arch: x86_64, epoch: null, name: libattr, release: 3.el9, source: rpm, version: 2.5.1}]
libbasicobjects: [{arch: x86_64, epoch: null, name: libbasicobjects, release: 53.el9, source: rpm, version: 0.1.1}]
libblkid: [{arch: x86_64, epoch: null, name: libblkid, release: 21.el9, source: rpm, version: 2.37.4}]
libbpf: [{arch: x86_64, epoch: 2, name: libbpf, release: 2.el9, source: rpm, version: 1.5.0}]
libbrotli: [{arch: x86_64, epoch: null, name: libbrotli, release: 7.el9, source: rpm, version: 1.0.9}]
libcap: [{arch: x86_64, epoch: null, name: libcap, release: 10.el9, source: rpm, version: '2.48'}]
libcap-ng: [{arch: x86_64, epoch: null, name: libcap-ng, release: 7.el9, source: rpm, version: 0.8.2}]
libcbor: [{arch: x86_64, epoch: null, name: libcbor, release: 5.el9, source: rpm, version: 0.7.0}]
libcollection: [{arch: x86_64, epoch: null, name: libcollection, release: 53.el9, source: rpm, version: 0.7.0}]
libcom_err: [{arch: x86_64, epoch: null, name: libcom_err, release: 8.el9, source: rpm, version: 1.46.5}]
libcomps: [{arch: x86_64, epoch: null, name: libcomps, release: 1.el9, source: rpm, version: 0.1.18}]
libcurl: [{arch: x86_64, epoch: null, name: libcurl, release: 38.el9, source: rpm, version: 7.76.1}]
libdaemon: [{arch: x86_64, epoch: null, name: libdaemon, release: 23.el9, source: rpm, version: '0.14'}]
libdb: [{arch: x86_64, epoch: null, name: libdb, release: 57.el9, source: rpm, version: 5.3.28}]
libdhash: [{arch: x86_64, epoch: null, name: libdhash, release: 53.el9, source: rpm, version: 0.5.0}]
libdnf: [{arch: x86_64, epoch: null, name: libdnf, release: 16.el9, source: rpm, version: 0.69.0}]
libeconf: [{arch: x86_64, epoch: null, name: libeconf, release: 4.el9, source: rpm, version: 0.4.1}]
libedit: [{arch: x86_64, epoch: null, name: libedit, release: 38.20210216cvs.el9, source: rpm, version: '3.1'}]
libestr: [{arch: x86_64, epoch: null, name: libestr, release: 4.el9, source: rpm, version: 0.1.11}]
libev: [{arch: x86_64, epoch: null, name: libev, release: 6.el9, source: rpm, version: '4.33'}]
libevent: [{arch: x86_64, epoch: null, name: libevent, release: 8.el9, source: rpm, version: 2.1.12}]
libfastjson: [{arch: x86_64, epoch: null, name: libfastjson, release: 5.el9, source: rpm, version: 0.99.9}]
libfdisk: [{arch: x86_64, epoch: null, name: libfdisk, release: 21.el9, source: rpm, version: 2.37.4}]
libffi: [{arch: x86_64, epoch: null, name: libffi, release: 8.el9, source: rpm, version: 3.4.2}]
libffi-devel: [{arch: x86_64, epoch: null, name: libffi-devel, release: 8.el9, source: rpm, version: 3.4.2}]
libfido2: [{arch: x86_64, epoch: null, name: libfido2
release: 2.el9 source: rpm version: 1.13.0 libgcc: - arch: x86_64 epoch: null name: libgcc release: 14.el9 source: rpm version: 11.5.0 libgcrypt: - arch: x86_64 epoch: null name: libgcrypt release: 11.el9 source: rpm version: 1.10.0 libgomp: - arch: x86_64 epoch: null name: libgomp release: 14.el9 source: rpm version: 11.5.0 libgpg-error: - arch: x86_64 epoch: null name: libgpg-error release: 5.el9 source: rpm version: '1.42' libgpg-error-devel: - arch: x86_64 epoch: null name: libgpg-error-devel release: 5.el9 source: rpm version: '1.42' libibverbs: - arch: x86_64 epoch: null name: libibverbs release: 2.el9 source: rpm version: '57.0' libicu: - arch: x86_64 epoch: null name: libicu release: 10.el9 source: rpm version: '67.1' libidn2: - arch: x86_64 epoch: null name: libidn2 release: 7.el9 source: rpm version: 2.3.0 libini_config: - arch: x86_64 epoch: null name: libini_config release: 53.el9 source: rpm version: 1.3.1 libjpeg-turbo: - arch: x86_64 epoch: null name: libjpeg-turbo release: 7.el9 source: rpm version: 2.0.90 libkcapi: - arch: x86_64 epoch: null name: libkcapi release: 2.el9 source: rpm version: 1.4.0 libkcapi-hmaccalc: - arch: x86_64 epoch: null name: libkcapi-hmaccalc release: 2.el9 source: rpm version: 1.4.0 libksba: - arch: x86_64 epoch: null name: libksba release: 7.el9 source: rpm version: 1.5.1 libldb: - arch: x86_64 epoch: 0 name: libldb release: 1.el9 source: rpm version: 4.23.3 libmaxminddb: - arch: x86_64 epoch: null name: libmaxminddb release: 4.el9 source: rpm version: 1.5.2 libmnl: - arch: x86_64 epoch: null name: libmnl release: 16.el9 source: rpm version: 1.0.4 libmodulemd: - arch: x86_64 epoch: null name: libmodulemd release: 2.el9 source: rpm version: 2.13.0 libmount: - arch: x86_64 epoch: null name: libmount release: 21.el9 source: rpm version: 2.37.4 libmpc: - arch: x86_64 epoch: null name: libmpc release: 4.el9 source: rpm version: 1.2.1 libndp: - arch: x86_64 epoch: null name: libndp release: 1.el9 source: rpm version: '1.9' 
libnet: - arch: x86_64 epoch: null name: libnet release: 7.el9 source: rpm version: '1.2' libnetfilter_conntrack: - arch: x86_64 epoch: null name: libnetfilter_conntrack release: 1.el9 source: rpm version: 1.0.9 libnfnetlink: - arch: x86_64 epoch: null name: libnfnetlink release: 23.el9 source: rpm version: 1.0.1 libnfsidmap: - arch: x86_64 epoch: 1 name: libnfsidmap release: 39.el9 source: rpm version: 2.5.4 libnftnl: - arch: x86_64 epoch: null name: libnftnl release: 4.el9 source: rpm version: 1.2.6 libnghttp2: - arch: x86_64 epoch: null name: libnghttp2 release: 6.el9 source: rpm version: 1.43.0 libnl3: - arch: x86_64 epoch: null name: libnl3 release: 1.el9 source: rpm version: 3.11.0 libnl3-cli: - arch: x86_64 epoch: null name: libnl3-cli release: 1.el9 source: rpm version: 3.11.0 libnsl2: - arch: x86_64 epoch: null name: libnsl2 release: 1.el9 source: rpm version: 2.0.0 libpath_utils: - arch: x86_64 epoch: null name: libpath_utils release: 53.el9 source: rpm version: 0.2.1 libpcap: - arch: x86_64 epoch: 14 name: libpcap release: 4.el9 source: rpm version: 1.10.0 libpipeline: - arch: x86_64 epoch: null name: libpipeline release: 4.el9 source: rpm version: 1.5.3 libpkgconf: - arch: x86_64 epoch: null name: libpkgconf release: 10.el9 source: rpm version: 1.7.3 libpng: - arch: x86_64 epoch: 2 name: libpng release: 12.el9 source: rpm version: 1.6.37 libproxy: - arch: x86_64 epoch: null name: libproxy release: 35.el9 source: rpm version: 0.4.15 libproxy-webkitgtk4: - arch: x86_64 epoch: null name: libproxy-webkitgtk4 release: 35.el9 source: rpm version: 0.4.15 libpsl: - arch: x86_64 epoch: null name: libpsl release: 5.el9 source: rpm version: 0.21.1 libpwquality: - arch: x86_64 epoch: null name: libpwquality release: 8.el9 source: rpm version: 1.4.4 libref_array: - arch: x86_64 epoch: null name: libref_array release: 53.el9 source: rpm version: 0.1.5 librepo: - arch: x86_64 epoch: null name: librepo release: 3.el9 source: rpm version: 1.14.5 libreport-filesystem: - 
arch: noarch epoch: null name: libreport-filesystem release: 6.el9 source: rpm version: 2.15.2 libseccomp: - arch: x86_64 epoch: null name: libseccomp release: 2.el9 source: rpm version: 2.5.2 libselinux: - arch: x86_64 epoch: null name: libselinux release: 3.el9 source: rpm version: '3.6' libselinux-utils: - arch: x86_64 epoch: null name: libselinux-utils release: 3.el9 source: rpm version: '3.6' libsemanage: - arch: x86_64 epoch: null name: libsemanage release: 5.el9 source: rpm version: '3.6' libsepol: - arch: x86_64 epoch: null name: libsepol release: 3.el9 source: rpm version: '3.6' libsigsegv: - arch: x86_64 epoch: null name: libsigsegv release: 4.el9 source: rpm version: '2.13' libslirp: - arch: x86_64 epoch: null name: libslirp release: 8.el9 source: rpm version: 4.4.0 libsmartcols: - arch: x86_64 epoch: null name: libsmartcols release: 21.el9 source: rpm version: 2.37.4 libsolv: - arch: x86_64 epoch: null name: libsolv release: 3.el9 source: rpm version: 0.7.24 libsoup: - arch: x86_64 epoch: null name: libsoup release: 10.el9 source: rpm version: 2.72.0 libss: - arch: x86_64 epoch: null name: libss release: 8.el9 source: rpm version: 1.46.5 libssh: - arch: x86_64 epoch: null name: libssh release: 15.el9 source: rpm version: 0.10.4 libssh-config: - arch: noarch epoch: null name: libssh-config release: 15.el9 source: rpm version: 0.10.4 libsss_certmap: - arch: x86_64 epoch: null name: libsss_certmap release: 5.el9 source: rpm version: 2.9.7 libsss_idmap: - arch: x86_64 epoch: null name: libsss_idmap release: 5.el9 source: rpm version: 2.9.7 libsss_nss_idmap: - arch: x86_64 epoch: null name: libsss_nss_idmap release: 5.el9 source: rpm version: 2.9.7 libsss_sudo: - arch: x86_64 epoch: null name: libsss_sudo release: 5.el9 source: rpm version: 2.9.7 libstdc++: - arch: x86_64 epoch: null name: libstdc++ release: 14.el9 source: rpm version: 11.5.0 libstdc++-devel: - arch: x86_64 epoch: null name: libstdc++-devel release: 14.el9 source: rpm version: 11.5.0 
libstemmer: - arch: x86_64 epoch: null name: libstemmer release: 18.585svn.el9 source: rpm version: '0' libsysfs: - arch: x86_64 epoch: null name: libsysfs release: 11.el9 source: rpm version: 2.1.1 libtalloc: - arch: x86_64 epoch: null name: libtalloc release: 1.el9 source: rpm version: 2.4.3 libtasn1: - arch: x86_64 epoch: null name: libtasn1 release: 9.el9 source: rpm version: 4.16.0 libtdb: - arch: x86_64 epoch: null name: libtdb release: 1.el9 source: rpm version: 1.4.14 libteam: - arch: x86_64 epoch: null name: libteam release: 16.el9 source: rpm version: '1.31' libtevent: - arch: x86_64 epoch: null name: libtevent release: 1.el9 source: rpm version: 0.17.1 libtirpc: - arch: x86_64 epoch: null name: libtirpc release: 9.el9 source: rpm version: 1.3.3 libtool-ltdl: - arch: x86_64 epoch: null name: libtool-ltdl release: 46.el9 source: rpm version: 2.4.6 libunistring: - arch: x86_64 epoch: null name: libunistring release: 15.el9 source: rpm version: 0.9.10 liburing: - arch: x86_64 epoch: null name: liburing release: 1.el9 source: rpm version: '2.12' libuser: - arch: x86_64 epoch: null name: libuser release: 17.el9 source: rpm version: '0.63' libutempter: - arch: x86_64 epoch: null name: libutempter release: 6.el9 source: rpm version: 1.2.1 libuuid: - arch: x86_64 epoch: null name: libuuid release: 21.el9 source: rpm version: 2.37.4 libverto: - arch: x86_64 epoch: null name: libverto release: 3.el9 source: rpm version: 0.3.2 libverto-libev: - arch: x86_64 epoch: null name: libverto-libev release: 3.el9 source: rpm version: 0.3.2 libvirt-libs: - arch: x86_64 epoch: null name: libvirt-libs release: 1.el9 source: rpm version: 11.9.0 libwbclient: - arch: x86_64 epoch: 0 name: libwbclient release: 1.el9 source: rpm version: 4.23.3 libxcrypt: - arch: x86_64 epoch: null name: libxcrypt release: 3.el9 source: rpm version: 4.4.18 libxcrypt-compat: - arch: x86_64 epoch: null name: libxcrypt-compat release: 3.el9 source: rpm version: 4.4.18 libxcrypt-devel: - arch: x86_64 
epoch: null name: libxcrypt-devel release: 3.el9 source: rpm version: 4.4.18 libxml2: - arch: x86_64 epoch: null name: libxml2 release: 14.el9 source: rpm version: 2.9.13 libxml2-devel: - arch: x86_64 epoch: null name: libxml2-devel release: 14.el9 source: rpm version: 2.9.13 libxslt: - arch: x86_64 epoch: null name: libxslt release: 12.el9 source: rpm version: 1.1.34 libxslt-devel: - arch: x86_64 epoch: null name: libxslt-devel release: 12.el9 source: rpm version: 1.1.34 libyaml: - arch: x86_64 epoch: null name: libyaml release: 7.el9 source: rpm version: 0.2.5 libzstd: - arch: x86_64 epoch: null name: libzstd release: 1.el9 source: rpm version: 1.5.5 llvm-filesystem: - arch: x86_64 epoch: null name: llvm-filesystem release: 1.el9 source: rpm version: 21.1.3 llvm-libs: - arch: x86_64 epoch: null name: llvm-libs release: 1.el9 source: rpm version: 21.1.3 lmdb-libs: - arch: x86_64 epoch: null name: lmdb-libs release: 3.el9 source: rpm version: 0.9.29 logrotate: - arch: x86_64 epoch: null name: logrotate release: 12.el9 source: rpm version: 3.18.0 lshw: - arch: x86_64 epoch: null name: lshw release: 3.el9 source: rpm version: B.02.20 lsscsi: - arch: x86_64 epoch: null name: lsscsi release: 6.el9 source: rpm version: '0.32' lua-libs: - arch: x86_64 epoch: null name: lua-libs release: 4.el9 source: rpm version: 5.4.4 lua-srpm-macros: - arch: noarch epoch: null name: lua-srpm-macros release: 6.el9 source: rpm version: '1' lz4-libs: - arch: x86_64 epoch: null name: lz4-libs release: 5.el9 source: rpm version: 1.9.3 lzo: - arch: x86_64 epoch: null name: lzo release: 7.el9 source: rpm version: '2.10' make: - arch: x86_64 epoch: 1 name: make release: 8.el9 source: rpm version: '4.3' man-db: - arch: x86_64 epoch: null name: man-db release: 9.el9 source: rpm version: 2.9.3 microcode_ctl: - arch: noarch epoch: 4 name: microcode_ctl release: 1.el9 source: rpm version: '20251111' mpdecimal: - arch: x86_64 epoch: null name: mpdecimal release: 3.el9 source: rpm version: 2.5.1 
mpfr: - arch: x86_64 epoch: null name: mpfr release: 7.el9 source: rpm version: 4.1.0 ncurses: - arch: x86_64 epoch: null name: ncurses release: 12.20210508.el9 source: rpm version: '6.2' ncurses-base: - arch: noarch epoch: null name: ncurses-base release: 12.20210508.el9 source: rpm version: '6.2' ncurses-c++-libs: - arch: x86_64 epoch: null name: ncurses-c++-libs release: 12.20210508.el9 source: rpm version: '6.2' ncurses-devel: - arch: x86_64 epoch: null name: ncurses-devel release: 12.20210508.el9 source: rpm version: '6.2' ncurses-libs: - arch: x86_64 epoch: null name: ncurses-libs release: 12.20210508.el9 source: rpm version: '6.2' netavark: - arch: x86_64 epoch: 2 name: netavark release: 1.el9 source: rpm version: 1.16.0 nettle: - arch: x86_64 epoch: null name: nettle release: 1.el9 source: rpm version: 3.10.1 newt: - arch: x86_64 epoch: null name: newt release: 11.el9 source: rpm version: 0.52.21 nfs-utils: - arch: x86_64 epoch: 1 name: nfs-utils release: 39.el9 source: rpm version: 2.5.4 nftables: - arch: x86_64 epoch: 1 name: nftables release: 5.el9 source: rpm version: 1.0.9 npth: - arch: x86_64 epoch: null name: npth release: 8.el9 source: rpm version: '1.6' numactl-libs: - arch: x86_64 epoch: null name: numactl-libs release: 3.el9 source: rpm version: 2.0.19 ocaml-srpm-macros: - arch: noarch epoch: null name: ocaml-srpm-macros release: 6.el9 source: rpm version: '6' oddjob: - arch: x86_64 epoch: null name: oddjob release: 7.el9 source: rpm version: 0.34.7 oddjob-mkhomedir: - arch: x86_64 epoch: null name: oddjob-mkhomedir release: 7.el9 source: rpm version: 0.34.7 oniguruma: - arch: x86_64 epoch: null name: oniguruma release: 1.el9.6 source: rpm version: 6.9.6 openblas-srpm-macros: - arch: noarch epoch: null name: openblas-srpm-macros release: 11.el9 source: rpm version: '2' openldap: - arch: x86_64 epoch: null name: openldap release: 4.el9 source: rpm version: 2.6.8 openldap-devel: - arch: x86_64 epoch: null name: openldap-devel release: 4.el9 source: 
rpm version: 2.6.8 openssh: - arch: x86_64 epoch: null name: openssh release: 2.el9 source: rpm version: 9.9p1 openssh-clients: - arch: x86_64 epoch: null name: openssh-clients release: 2.el9 source: rpm version: 9.9p1 openssh-server: - arch: x86_64 epoch: null name: openssh-server release: 2.el9 source: rpm version: 9.9p1 openssl: - arch: x86_64 epoch: 1 name: openssl release: 6.el9 source: rpm version: 3.5.1 openssl-devel: - arch: x86_64 epoch: 1 name: openssl-devel release: 6.el9 source: rpm version: 3.5.1 openssl-fips-provider: - arch: x86_64 epoch: 1 name: openssl-fips-provider release: 6.el9 source: rpm version: 3.5.1 openssl-libs: - arch: x86_64 epoch: 1 name: openssl-libs release: 6.el9 source: rpm version: 3.5.1 os-prober: - arch: x86_64 epoch: null name: os-prober release: 12.el9 source: rpm version: '1.77' p11-kit: - arch: x86_64 epoch: null name: p11-kit release: 1.el9 source: rpm version: 0.25.10 p11-kit-trust: - arch: x86_64 epoch: null name: p11-kit-trust release: 1.el9 source: rpm version: 0.25.10 pam: - arch: x86_64 epoch: null name: pam release: 26.el9 source: rpm version: 1.5.1 parted: - arch: x86_64 epoch: null name: parted release: 3.el9 source: rpm version: '3.5' passt: - arch: x86_64 epoch: null name: passt release: 2.el9 source: rpm version: 0^20250512.g8ec1341 passt-selinux: - arch: noarch epoch: null name: passt-selinux release: 2.el9 source: rpm version: 0^20250512.g8ec1341 passwd: - arch: x86_64 epoch: null name: passwd release: 12.el9 source: rpm version: '0.80' patch: - arch: x86_64 epoch: null name: patch release: 16.el9 source: rpm version: 2.7.6 pciutils-libs: - arch: x86_64 epoch: null name: pciutils-libs release: 7.el9 source: rpm version: 3.7.0 pcre: - arch: x86_64 epoch: null name: pcre release: 4.el9 source: rpm version: '8.44' pcre2: - arch: x86_64 epoch: null name: pcre2 release: 6.el9 source: rpm version: '10.40' pcre2-syntax: - arch: noarch epoch: null name: pcre2-syntax release: 6.el9 source: rpm version: '10.40' 
perl-AutoLoader: - arch: noarch epoch: 0 name: perl-AutoLoader release: 483.el9 source: rpm version: '5.74' perl-B: - arch: x86_64 epoch: 0 name: perl-B release: 483.el9 source: rpm version: '1.80' perl-Carp: - arch: noarch epoch: null name: perl-Carp release: 460.el9 source: rpm version: '1.50' perl-Class-Struct: - arch: noarch epoch: 0 name: perl-Class-Struct release: 483.el9 source: rpm version: '0.66' perl-Data-Dumper: - arch: x86_64 epoch: null name: perl-Data-Dumper release: 462.el9 source: rpm version: '2.174' perl-Digest: - arch: noarch epoch: null name: perl-Digest release: 4.el9 source: rpm version: '1.19' perl-Digest-MD5: - arch: x86_64 epoch: null name: perl-Digest-MD5 release: 4.el9 source: rpm version: '2.58' perl-DynaLoader: - arch: x86_64 epoch: 0 name: perl-DynaLoader release: 483.el9 source: rpm version: '1.47' perl-Encode: - arch: x86_64 epoch: 4 name: perl-Encode release: 462.el9 source: rpm version: '3.08' perl-Errno: - arch: x86_64 epoch: 0 name: perl-Errno release: 483.el9 source: rpm version: '1.30' perl-Error: - arch: noarch epoch: 1 name: perl-Error release: 7.el9 source: rpm version: '0.17029' perl-Exporter: - arch: noarch epoch: null name: perl-Exporter release: 461.el9 source: rpm version: '5.74' perl-Fcntl: - arch: x86_64 epoch: 0 name: perl-Fcntl release: 483.el9 source: rpm version: '1.13' perl-File-Basename: - arch: noarch epoch: 0 name: perl-File-Basename release: 483.el9 source: rpm version: '2.85' perl-File-Find: - arch: noarch epoch: 0 name: perl-File-Find release: 483.el9 source: rpm version: '1.37' perl-File-Path: - arch: noarch epoch: null name: perl-File-Path release: 4.el9 source: rpm version: '2.18' perl-File-Temp: - arch: noarch epoch: 1 name: perl-File-Temp release: 4.el9 source: rpm version: 0.231.100 perl-File-stat: - arch: noarch epoch: 0 name: perl-File-stat release: 483.el9 source: rpm version: '1.09' perl-FileHandle: - arch: noarch epoch: 0 name: perl-FileHandle release: 483.el9 source: rpm version: '2.03' 
perl-Getopt-Long: - arch: noarch epoch: 1 name: perl-Getopt-Long release: 4.el9 source: rpm version: '2.52' perl-Getopt-Std: - arch: noarch epoch: 0 name: perl-Getopt-Std release: 483.el9 source: rpm version: '1.12' perl-Git: - arch: noarch epoch: null name: perl-Git release: 1.el9 source: rpm version: 2.47.3 perl-HTTP-Tiny: - arch: noarch epoch: null name: perl-HTTP-Tiny release: 462.el9 source: rpm version: '0.076' perl-IO: - arch: x86_64 epoch: 0 name: perl-IO release: 483.el9 source: rpm version: '1.43' perl-IO-Socket-IP: - arch: noarch epoch: null name: perl-IO-Socket-IP release: 5.el9 source: rpm version: '0.41' perl-IO-Socket-SSL: - arch: noarch epoch: null name: perl-IO-Socket-SSL release: 2.el9 source: rpm version: '2.073' perl-IPC-Open3: - arch: noarch epoch: 0 name: perl-IPC-Open3 release: 483.el9 source: rpm version: '1.21' perl-MIME-Base64: - arch: x86_64 epoch: null name: perl-MIME-Base64 release: 4.el9 source: rpm version: '3.16' perl-Mozilla-CA: - arch: noarch epoch: null name: perl-Mozilla-CA release: 6.el9 source: rpm version: '20200520' perl-NDBM_File: - arch: x86_64 epoch: 0 name: perl-NDBM_File release: 483.el9 source: rpm version: '1.15' perl-Net-SSLeay: - arch: x86_64 epoch: null name: perl-Net-SSLeay release: 3.el9 source: rpm version: '1.94' perl-POSIX: - arch: x86_64 epoch: 0 name: perl-POSIX release: 483.el9 source: rpm version: '1.94' perl-PathTools: - arch: x86_64 epoch: null name: perl-PathTools release: 461.el9 source: rpm version: '3.78' perl-Pod-Escapes: - arch: noarch epoch: 1 name: perl-Pod-Escapes release: 460.el9 source: rpm version: '1.07' perl-Pod-Perldoc: - arch: noarch epoch: null name: perl-Pod-Perldoc release: 461.el9 source: rpm version: 3.28.01 perl-Pod-Simple: - arch: noarch epoch: 1 name: perl-Pod-Simple release: 4.el9 source: rpm version: '3.42' perl-Pod-Usage: - arch: noarch epoch: 4 name: perl-Pod-Usage release: 4.el9 source: rpm version: '2.01' perl-Scalar-List-Utils: - arch: x86_64 epoch: 4 name: 
perl-Scalar-List-Utils release: 462.el9 source: rpm version: '1.56' perl-SelectSaver: - arch: noarch epoch: 0 name: perl-SelectSaver release: 483.el9 source: rpm version: '1.02' perl-Socket: - arch: x86_64 epoch: 4 name: perl-Socket release: 4.el9 source: rpm version: '2.031' perl-Storable: - arch: x86_64 epoch: 1 name: perl-Storable release: 460.el9 source: rpm version: '3.21' perl-Symbol: - arch: noarch epoch: 0 name: perl-Symbol release: 483.el9 source: rpm version: '1.08' perl-Term-ANSIColor: - arch: noarch epoch: null name: perl-Term-ANSIColor release: 461.el9 source: rpm version: '5.01' perl-Term-Cap: - arch: noarch epoch: null name: perl-Term-Cap release: 460.el9 source: rpm version: '1.17' perl-TermReadKey: - arch: x86_64 epoch: null name: perl-TermReadKey release: 11.el9 source: rpm version: '2.38' perl-Text-ParseWords: - arch: noarch epoch: null name: perl-Text-ParseWords release: 460.el9 source: rpm version: '3.30' perl-Text-Tabs+Wrap: - arch: noarch epoch: null name: perl-Text-Tabs+Wrap release: 460.el9 source: rpm version: '2013.0523' perl-Time-Local: - arch: noarch epoch: 2 name: perl-Time-Local release: 7.el9 source: rpm version: '1.300' perl-URI: - arch: noarch epoch: null name: perl-URI release: 3.el9 source: rpm version: '5.09' perl-base: - arch: noarch epoch: 0 name: perl-base release: 483.el9 source: rpm version: '2.27' perl-constant: - arch: noarch epoch: null name: perl-constant release: 461.el9 source: rpm version: '1.33' perl-if: - arch: noarch epoch: 0 name: perl-if release: 483.el9 source: rpm version: 0.60.800 perl-interpreter: - arch: x86_64 epoch: 4 name: perl-interpreter release: 483.el9 source: rpm version: 5.32.1 perl-lib: - arch: x86_64 epoch: 0 name: perl-lib release: 483.el9 source: rpm version: '0.65' perl-libnet: - arch: noarch epoch: null name: perl-libnet release: 4.el9 source: rpm version: '3.13' perl-libs: - arch: x86_64 epoch: 4 name: perl-libs release: 483.el9 source: rpm version: 5.32.1 perl-mro: - arch: x86_64 epoch: 0 
name: perl-mro release: 483.el9 source: rpm version: '1.23' perl-overload: - arch: noarch epoch: 0 name: perl-overload release: 483.el9 source: rpm version: '1.31' perl-overloading: - arch: noarch epoch: 0 name: perl-overloading release: 483.el9 source: rpm version: '0.02' perl-parent: - arch: noarch epoch: 1 name: perl-parent release: 460.el9 source: rpm version: '0.238' perl-podlators: - arch: noarch epoch: 1 name: perl-podlators release: 460.el9 source: rpm version: '4.14' perl-srpm-macros: - arch: noarch epoch: null name: perl-srpm-macros release: 41.el9 source: rpm version: '1' perl-subs: - arch: noarch epoch: 0 name: perl-subs release: 483.el9 source: rpm version: '1.03' perl-vars: - arch: noarch epoch: 0 name: perl-vars release: 483.el9 source: rpm version: '1.05' pigz: - arch: x86_64 epoch: null name: pigz release: 4.el9 source: rpm version: '2.5' pkgconf: - arch: x86_64 epoch: null name: pkgconf release: 10.el9 source: rpm version: 1.7.3 pkgconf-m4: - arch: noarch epoch: null name: pkgconf-m4 release: 10.el9 source: rpm version: 1.7.3 pkgconf-pkg-config: - arch: x86_64 epoch: null name: pkgconf-pkg-config release: 10.el9 source: rpm version: 1.7.3 podman: - arch: x86_64 epoch: 6 name: podman release: 2.el9 source: rpm version: 5.6.0 policycoreutils: - arch: x86_64 epoch: null name: policycoreutils release: 3.el9 source: rpm version: '3.6' policycoreutils-python-utils: - arch: noarch epoch: null name: policycoreutils-python-utils release: 3.el9 source: rpm version: '3.6' polkit: - arch: x86_64 epoch: null name: polkit release: 14.el9 source: rpm version: '0.117' polkit-libs: - arch: x86_64 epoch: null name: polkit-libs release: 14.el9 source: rpm version: '0.117' polkit-pkla-compat: - arch: x86_64 epoch: null name: polkit-pkla-compat release: 21.el9 source: rpm version: '0.1' popt: - arch: x86_64 epoch: null name: popt release: 8.el9 source: rpm version: '1.18' prefixdevname: - arch: x86_64 epoch: null name: prefixdevname release: 8.el9 source: rpm version: 
0.1.0 procps-ng: - arch: x86_64 epoch: null name: procps-ng release: 14.el9 source: rpm version: 3.3.17 protobuf-c: - arch: x86_64 epoch: null name: protobuf-c release: 13.el9 source: rpm version: 1.3.3 psmisc: - arch: x86_64 epoch: null name: psmisc release: 3.el9 source: rpm version: '23.4' publicsuffix-list-dafsa: - arch: noarch epoch: null name: publicsuffix-list-dafsa release: 3.el9 source: rpm version: '20210518' pyproject-srpm-macros: - arch: noarch epoch: null name: pyproject-srpm-macros release: 1.el9 source: rpm version: 1.16.2 python-rpm-macros: - arch: noarch epoch: null name: python-rpm-macros release: 54.el9 source: rpm version: '3.9' python-srpm-macros: - arch: noarch epoch: null name: python-srpm-macros release: 54.el9 source: rpm version: '3.9' python-unversioned-command: - arch: noarch epoch: null name: python-unversioned-command release: 2.el9 source: rpm version: 3.9.25 python3: - arch: x86_64 epoch: null name: python3 release: 2.el9 source: rpm version: 3.9.25 python3-attrs: - arch: noarch epoch: null name: python3-attrs release: 7.el9 source: rpm version: 20.3.0 python3-audit: - arch: x86_64 epoch: null name: python3-audit release: 7.el9 source: rpm version: 3.1.5 python3-babel: - arch: noarch epoch: null name: python3-babel release: 2.el9 source: rpm version: 2.9.1 python3-cffi: - arch: x86_64 epoch: null name: python3-cffi release: 5.el9 source: rpm version: 1.14.5 python3-chardet: - arch: noarch epoch: null name: python3-chardet release: 5.el9 source: rpm version: 4.0.0 python3-configobj: - arch: noarch epoch: null name: python3-configobj release: 25.el9 source: rpm version: 5.0.6 python3-cryptography: - arch: x86_64 epoch: null name: python3-cryptography release: 5.el9 source: rpm version: 36.0.1 python3-dasbus: - arch: noarch epoch: null name: python3-dasbus release: 1.el9 source: rpm version: '1.7' python3-dateutil: - arch: noarch epoch: 1 name: python3-dateutil release: 1.el9 source: rpm version: 2.9.0.post0 python3-dbus: - arch: x86_64 
epoch: null name: python3-dbus release: 2.el9 source: rpm version: 1.2.18 python3-devel: - arch: x86_64 epoch: null name: python3-devel release: 2.el9 source: rpm version: 3.9.25 python3-distro: - arch: noarch epoch: null name: python3-distro release: 7.el9 source: rpm version: 1.5.0 python3-dnf: - arch: noarch epoch: null name: python3-dnf release: 31.el9 source: rpm version: 4.14.0 python3-dnf-plugins-core: - arch: noarch epoch: null name: python3-dnf-plugins-core release: 24.el9 source: rpm version: 4.3.0 python3-enchant: - arch: noarch epoch: null name: python3-enchant release: 5.el9 source: rpm version: 3.2.0 python3-file-magic: - arch: noarch epoch: null name: python3-file-magic release: 16.el9 source: rpm version: '5.39' python3-gobject-base: - arch: x86_64 epoch: null name: python3-gobject-base release: 6.el9 source: rpm version: 3.40.1 python3-gobject-base-noarch: - arch: noarch epoch: null name: python3-gobject-base-noarch release: 6.el9 source: rpm version: 3.40.1 python3-gpg: - arch: x86_64 epoch: null name: python3-gpg release: 6.el9 source: rpm version: 1.15.1 python3-hawkey: - arch: x86_64 epoch: null name: python3-hawkey release: 16.el9 source: rpm version: 0.69.0 python3-idna: - arch: noarch epoch: null name: python3-idna release: 7.el9.1 source: rpm version: '2.10' python3-jinja2: - arch: noarch epoch: null name: python3-jinja2 release: 8.el9 source: rpm version: 2.11.3 python3-jmespath: - arch: noarch epoch: null name: python3-jmespath release: 1.el9 source: rpm version: 1.0.1 python3-jsonpatch: - arch: noarch epoch: null name: python3-jsonpatch release: 16.el9 source: rpm version: '1.21' python3-jsonpointer: - arch: noarch epoch: null name: python3-jsonpointer release: 4.el9 source: rpm version: '2.0' python3-jsonschema: - arch: noarch epoch: null name: python3-jsonschema release: 13.el9 source: rpm version: 3.2.0 python3-libcomps: - arch: x86_64 epoch: null name: python3-libcomps release: 1.el9 source: rpm version: 0.1.18 python3-libdnf: - 
arch: x86_64 epoch: null name: python3-libdnf release: 16.el9 source: rpm version: 0.69.0 python3-libs: - arch: x86_64 epoch: null name: python3-libs release: 2.el9 source: rpm version: 3.9.25 python3-libselinux: - arch: x86_64 epoch: null name: python3-libselinux release: 3.el9 source: rpm version: '3.6' python3-libsemanage: - arch: x86_64 epoch: null name: python3-libsemanage release: 5.el9 source: rpm version: '3.6' python3-libvirt: - arch: x86_64 epoch: null name: python3-libvirt release: 1.el9 source: rpm version: 11.9.0 python3-libxml2: - arch: x86_64 epoch: null name: python3-libxml2 release: 14.el9 source: rpm version: 2.9.13 python3-lxml: - arch: x86_64 epoch: null name: python3-lxml release: 3.el9 source: rpm version: 4.6.5 python3-markupsafe: - arch: x86_64 epoch: null name: python3-markupsafe release: 12.el9 source: rpm version: 1.1.1 python3-netaddr: - arch: noarch epoch: null name: python3-netaddr release: 3.el9 source: rpm version: 0.10.1 python3-netifaces: - arch: x86_64 epoch: null name: python3-netifaces release: 15.el9 source: rpm version: 0.10.6 python3-oauthlib: - arch: noarch epoch: null name: python3-oauthlib release: 5.el9 source: rpm version: 3.1.1 python3-packaging: - arch: noarch epoch: null name: python3-packaging release: 5.el9 source: rpm version: '20.9' python3-pexpect: - arch: noarch epoch: null name: python3-pexpect release: 7.el9 source: rpm version: 4.8.0 python3-pip: - arch: noarch epoch: null name: python3-pip release: 1.el9 source: rpm version: 21.3.1 python3-pip-wheel: - arch: noarch epoch: null name: python3-pip-wheel release: 1.el9 source: rpm version: 21.3.1 python3-ply: - arch: noarch epoch: null name: python3-ply release: 14.el9 source: rpm version: '3.11' python3-policycoreutils: - arch: noarch epoch: null name: python3-policycoreutils release: 3.el9 source: rpm version: '3.6' python3-prettytable: - arch: noarch epoch: null name: python3-prettytable release: 27.el9 source: rpm version: 0.7.2 python3-ptyprocess: - arch: 
noarch epoch: null name: python3-ptyprocess release: 12.el9 source: rpm version: 0.6.0 python3-pycparser: - arch: noarch epoch: null name: python3-pycparser release: 6.el9 source: rpm version: '2.20' python3-pyparsing: - arch: noarch epoch: null name: python3-pyparsing release: 9.el9 source: rpm version: 2.4.7 python3-pyrsistent: - arch: x86_64 epoch: null name: python3-pyrsistent release: 8.el9 source: rpm version: 0.17.3 python3-pyserial: - arch: noarch epoch: null name: python3-pyserial release: 12.el9 source: rpm version: '3.4' python3-pysocks: - arch: noarch epoch: null name: python3-pysocks release: 12.el9 source: rpm version: 1.7.1 python3-pytz: - arch: noarch epoch: null name: python3-pytz release: 5.el9 source: rpm version: '2021.1' python3-pyyaml: - arch: x86_64 epoch: null name: python3-pyyaml release: 6.el9 source: rpm version: 5.4.1 python3-requests: - arch: noarch epoch: null name: python3-requests release: 10.el9 source: rpm version: 2.25.1 python3-resolvelib: - arch: noarch epoch: null name: python3-resolvelib release: 5.el9 source: rpm version: 0.5.4 python3-rpm: - arch: x86_64 epoch: null name: python3-rpm release: 40.el9 source: rpm version: 4.16.1.3 python3-rpm-generators: - arch: noarch epoch: null name: python3-rpm-generators release: 9.el9 source: rpm version: '12' python3-rpm-macros: - arch: noarch epoch: null name: python3-rpm-macros release: 54.el9 source: rpm version: '3.9' python3-setools: - arch: x86_64 epoch: null name: python3-setools release: 1.el9 source: rpm version: 4.4.4 python3-setuptools: - arch: noarch epoch: null name: python3-setuptools release: 15.el9 source: rpm version: 53.0.0 python3-setuptools-wheel: - arch: noarch epoch: null name: python3-setuptools-wheel release: 15.el9 source: rpm version: 53.0.0 python3-six: - arch: noarch epoch: null name: python3-six release: 9.el9 source: rpm version: 1.15.0 python3-systemd: - arch: x86_64 epoch: null name: python3-systemd release: 19.el9 source: rpm version: '234' 
python3-urllib3: - arch: noarch epoch: null name: python3-urllib3 release: 6.el9 source: rpm version: 1.26.5 python3.12: - arch: x86_64 epoch: null name: python3.12 release: 1.el9 source: rpm version: 3.12.12 python3.12-libs: - arch: x86_64 epoch: null name: python3.12-libs release: 1.el9 source: rpm version: 3.12.12 python3.12-pip: - arch: noarch epoch: null name: python3.12-pip release: 5.el9 source: rpm version: 23.2.1 python3.12-pip-wheel: - arch: noarch epoch: null name: python3.12-pip-wheel release: 5.el9 source: rpm version: 23.2.1 python3.12-setuptools: - arch: noarch epoch: null name: python3.12-setuptools release: 5.el9 source: rpm version: 68.2.2 qemu-guest-agent: - arch: x86_64 epoch: 17 name: qemu-guest-agent release: 7.el9 source: rpm version: 10.1.0 qt5-srpm-macros: - arch: noarch epoch: null name: qt5-srpm-macros release: 1.el9 source: rpm version: 5.15.9 quota: - arch: x86_64 epoch: 1 name: quota release: 4.el9 source: rpm version: '4.09' quota-nls: - arch: noarch epoch: 1 name: quota-nls release: 4.el9 source: rpm version: '4.09' readline: - arch: x86_64 epoch: null name: readline release: 4.el9 source: rpm version: '8.1' readline-devel: - arch: x86_64 epoch: null name: readline-devel release: 4.el9 source: rpm version: '8.1' redhat-rpm-config: - arch: noarch epoch: null name: redhat-rpm-config release: 1.el9 source: rpm version: '210' rootfiles: - arch: noarch epoch: null name: rootfiles release: 35.el9 source: rpm version: '8.1' rpcbind: - arch: x86_64 epoch: null name: rpcbind release: 7.el9 source: rpm version: 1.2.6 rpm: - arch: x86_64 epoch: null name: rpm release: 40.el9 source: rpm version: 4.16.1.3 rpm-build: - arch: x86_64 epoch: null name: rpm-build release: 40.el9 source: rpm version: 4.16.1.3 rpm-build-libs: - arch: x86_64 epoch: null name: rpm-build-libs release: 40.el9 source: rpm version: 4.16.1.3 rpm-libs: - arch: x86_64 epoch: null name: rpm-libs release: 40.el9 source: rpm version: 4.16.1.3 rpm-plugin-audit: - arch: x86_64 
epoch: null name: rpm-plugin-audit release: 40.el9 source: rpm version: 4.16.1.3 rpm-plugin-selinux: - arch: x86_64 epoch: null name: rpm-plugin-selinux release: 40.el9 source: rpm version: 4.16.1.3 rpm-plugin-systemd-inhibit: - arch: x86_64 epoch: null name: rpm-plugin-systemd-inhibit release: 40.el9 source: rpm version: 4.16.1.3 rpm-sign: - arch: x86_64 epoch: null name: rpm-sign release: 40.el9 source: rpm version: 4.16.1.3 rpm-sign-libs: - arch: x86_64 epoch: null name: rpm-sign-libs release: 40.el9 source: rpm version: 4.16.1.3 rpmlint: - arch: noarch epoch: null name: rpmlint release: 19.el9 source: rpm version: '1.11' rsync: - arch: x86_64 epoch: null name: rsync release: 4.el9 source: rpm version: 3.2.5 rsyslog: - arch: x86_64 epoch: null name: rsyslog release: 2.el9 source: rpm version: 8.2510.0 rsyslog-logrotate: - arch: x86_64 epoch: null name: rsyslog-logrotate release: 2.el9 source: rpm version: 8.2510.0 ruby: - arch: x86_64 epoch: null name: ruby release: 165.el9 source: rpm version: 3.0.7 ruby-default-gems: - arch: noarch epoch: null name: ruby-default-gems release: 165.el9 source: rpm version: 3.0.7 ruby-devel: - arch: x86_64 epoch: null name: ruby-devel release: 165.el9 source: rpm version: 3.0.7 ruby-libs: - arch: x86_64 epoch: null name: ruby-libs release: 165.el9 source: rpm version: 3.0.7 rubygem-bigdecimal: - arch: x86_64 epoch: null name: rubygem-bigdecimal release: 165.el9 source: rpm version: 3.0.0 rubygem-bundler: - arch: noarch epoch: null name: rubygem-bundler release: 165.el9 source: rpm version: 2.2.33 rubygem-io-console: - arch: x86_64 epoch: null name: rubygem-io-console release: 165.el9 source: rpm version: 0.5.7 rubygem-json: - arch: x86_64 epoch: null name: rubygem-json release: 165.el9 source: rpm version: 2.5.1 rubygem-psych: - arch: x86_64 epoch: null name: rubygem-psych release: 165.el9 source: rpm version: 3.3.2 rubygem-rdoc: - arch: noarch epoch: null name: rubygem-rdoc release: 165.el9 source: rpm version: 6.3.4.1 rubygems: 
- arch: noarch epoch: null name: rubygems release: 165.el9 source: rpm version: 3.2.33 rust-srpm-macros: - arch: noarch epoch: null name: rust-srpm-macros release: 4.el9 source: rpm version: '17' samba-client-libs: - arch: x86_64 epoch: 0 name: samba-client-libs release: 1.el9 source: rpm version: 4.23.3 samba-common: - arch: noarch epoch: 0 name: samba-common release: 1.el9 source: rpm version: 4.23.3 samba-common-libs: - arch: x86_64 epoch: 0 name: samba-common-libs release: 1.el9 source: rpm version: 4.23.3 sed: - arch: x86_64 epoch: null name: sed release: 9.el9 source: rpm version: '4.8' selinux-policy: - arch: noarch epoch: null name: selinux-policy release: 1.el9 source: rpm version: 38.1.69 selinux-policy-targeted: - arch: noarch epoch: null name: selinux-policy-targeted release: 1.el9 source: rpm version: 38.1.69 setroubleshoot-plugins: - arch: noarch epoch: null name: setroubleshoot-plugins release: 4.el9 source: rpm version: 3.3.14 setroubleshoot-server: - arch: x86_64 epoch: null name: setroubleshoot-server release: 2.el9 source: rpm version: 3.3.35 setup: - arch: noarch epoch: null name: setup release: 10.el9 source: rpm version: 2.13.7 sg3_utils: - arch: x86_64 epoch: null name: sg3_utils release: 10.el9 source: rpm version: '1.47' sg3_utils-libs: - arch: x86_64 epoch: null name: sg3_utils-libs release: 10.el9 source: rpm version: '1.47' shadow-utils: - arch: x86_64 epoch: 2 name: shadow-utils release: 15.el9 source: rpm version: '4.9' shadow-utils-subid: - arch: x86_64 epoch: 2 name: shadow-utils-subid release: 15.el9 source: rpm version: '4.9' shared-mime-info: - arch: x86_64 epoch: null name: shared-mime-info release: 5.el9 source: rpm version: '2.1' slang: - arch: x86_64 epoch: null name: slang release: 11.el9 source: rpm version: 2.3.2 slirp4netns: - arch: x86_64 epoch: null name: slirp4netns release: 1.el9 source: rpm version: 1.3.3 snappy: - arch: x86_64 epoch: null name: snappy release: 8.el9 source: rpm version: 1.1.8 sos: - arch: noarch 
epoch: null name: sos release: 1.el9 source: rpm version: 4.10.1 sqlite-libs: - arch: x86_64 epoch: null name: sqlite-libs release: 9.el9 source: rpm version: 3.34.1 squashfs-tools: - arch: x86_64 epoch: null name: squashfs-tools release: 10.git1.el9 source: rpm version: '4.4' sscg: - arch: x86_64 epoch: null name: sscg release: 2.el9 source: rpm version: 4.0.3 sshpass: - arch: x86_64 epoch: null name: sshpass release: 4.el9 source: rpm version: '1.09' sssd-client: - arch: x86_64 epoch: null name: sssd-client release: 5.el9 source: rpm version: 2.9.7 sssd-common: - arch: x86_64 epoch: null name: sssd-common release: 5.el9 source: rpm version: 2.9.7 sssd-kcm: - arch: x86_64 epoch: null name: sssd-kcm release: 5.el9 source: rpm version: 2.9.7 sssd-nfs-idmap: - arch: x86_64 epoch: null name: sssd-nfs-idmap release: 5.el9 source: rpm version: 2.9.7 sudo: - arch: x86_64 epoch: null name: sudo release: 13.el9 source: rpm version: 1.9.5p2 systemd: - arch: x86_64 epoch: null name: systemd release: 59.el9 source: rpm version: '252' systemd-devel: - arch: x86_64 epoch: null name: systemd-devel release: 59.el9 source: rpm version: '252' systemd-libs: - arch: x86_64 epoch: null name: systemd-libs release: 59.el9 source: rpm version: '252' systemd-pam: - arch: x86_64 epoch: null name: systemd-pam release: 59.el9 source: rpm version: '252' systemd-rpm-macros: - arch: noarch epoch: null name: systemd-rpm-macros release: 59.el9 source: rpm version: '252' systemd-udev: - arch: x86_64 epoch: null name: systemd-udev release: 59.el9 source: rpm version: '252' tar: - arch: x86_64 epoch: 2 name: tar release: 7.el9 source: rpm version: '1.34' tcl: - arch: x86_64 epoch: 1 name: tcl release: 7.el9 source: rpm version: 8.6.10 tcpdump: - arch: x86_64 epoch: 14 name: tcpdump release: 9.el9 source: rpm version: 4.99.0 teamd: - arch: x86_64 epoch: null name: teamd release: 16.el9 source: rpm version: '1.31' time: - arch: x86_64 epoch: null name: time release: 18.el9 source: rpm version: '1.9' 
tmux: - arch: x86_64 epoch: null name: tmux release: 5.el9 source: rpm version: 3.2a tpm2-tss: - arch: x86_64 epoch: null name: tpm2-tss release: 1.el9 source: rpm version: 3.2.3 traceroute: - arch: x86_64 epoch: 3 name: traceroute release: 1.el9 source: rpm version: 2.1.1 tzdata: - arch: noarch epoch: null name: tzdata release: 2.el9 source: rpm version: 2025b unzip: - arch: x86_64 epoch: null name: unzip release: 59.el9 source: rpm version: '6.0' userspace-rcu: - arch: x86_64 epoch: null name: userspace-rcu release: 6.el9 source: rpm version: 0.12.1 util-linux: - arch: x86_64 epoch: null name: util-linux release: 21.el9 source: rpm version: 2.37.4 util-linux-core: - arch: x86_64 epoch: null name: util-linux-core release: 21.el9 source: rpm version: 2.37.4 vim-minimal: - arch: x86_64 epoch: 2 name: vim-minimal release: 23.el9 source: rpm version: 8.2.2637 webkit2gtk3-jsc: - arch: x86_64 epoch: null name: webkit2gtk3-jsc release: 1.el9 source: rpm version: 2.50.3 wget: - arch: x86_64 epoch: null name: wget release: 8.el9 source: rpm version: 1.21.1 which: - arch: x86_64 epoch: null name: which release: 30.el9 source: rpm version: '2.21' xfsprogs: - arch: x86_64 epoch: null name: xfsprogs release: 7.el9 source: rpm version: 6.4.0 xz: - arch: x86_64 epoch: null name: xz release: 8.el9 source: rpm version: 5.2.5 xz-devel: - arch: x86_64 epoch: null name: xz-devel release: 8.el9 source: rpm version: 5.2.5 xz-libs: - arch: x86_64 epoch: null name: xz-libs release: 8.el9 source: rpm version: 5.2.5 yajl: - arch: x86_64 epoch: null name: yajl release: 25.el9 source: rpm version: 2.1.0 yum: - arch: noarch epoch: null name: yum release: 31.el9 source: rpm version: 4.14.0 yum-utils: - arch: noarch epoch: null name: yum-utils release: 24.el9 source: rpm version: 4.3.0 zip: - arch: x86_64 epoch: null name: zip release: 35.el9 source: rpm version: '3.0' zlib: - arch: x86_64 epoch: null name: zlib release: 41.el9 source: rpm version: 1.2.11 zlib-devel: - arch: x86_64 epoch: null 
name: zlib-devel release: 41.el9 source: rpm version: 1.2.11 zstd: - arch: x86_64 epoch: null name: zstd release: 1.el9 source: rpm version: 1.5.5 home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/0000755000175000017500000000000015117130656025077 5ustar zuulzuulhome/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-appstream.repo0000644000175000017500000000031615117130351033344 0ustar zuulzuul [repo-setup-centos-appstream] name=repo-setup-centos-appstream baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/AppStream/$basearch/os/ gpgcheck=0 enabled=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-baseos.repo0000644000175000017500000000030415117130351032621 0ustar zuulzuul [repo-setup-centos-baseos] name=repo-setup-centos-baseos baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/BaseOS/$basearch/os/ gpgcheck=0 enabled=1 ././@LongLink0000644000000000000000000000015400000000000011603 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-highavailability.repohome/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-highavailabili0000644000175000017500000000034215117130351033340 0ustar zuulzuul [repo-setup-centos-highavailability] name=repo-setup-centos-highavailability baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/HighAvailability/$basearch/os/ gpgcheck=0 enabled=1 ././@LongLink0000644000000000000000000000014600000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-powertools.repohome/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/repo-setup-centos-powertools.rep0000644000175000017500000000031115117130351033401 0ustar zuulzuul [repo-setup-centos-powertools] name=repo-setup-centos-powertools 
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/9-stream/CRB/$basearch/os/ gpgcheck=0 enabled=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/delorean-antelope-testing.repo0000644000175000017500000000317215117130351033032 0ustar zuulzuul[delorean-antelope-testing] name=dlrn-antelope-testing baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/deps/latest/ enabled=1 gpgcheck=0 module_hotfixes=1 [delorean-antelope-build-deps] name=dlrn-antelope-build-deps baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/build-deps/latest/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-rabbitmq] name=centos9-rabbitmq baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/messaging/$basearch/rabbitmq-38/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-storage] name=centos9-storage baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/storage/$basearch/ceph-reef/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-opstools] name=centos9-opstools baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/opstools/$basearch/collectd-5/ enabled=1 gpgcheck=0 module_hotfixes=1 [centos9-nfv-ovs] name=NFV SIG OpenvSwitch baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org/centos-stream/SIGs/9-stream/nfv/$basearch/openvswitch-2/ gpgcheck=0 enabled=1 module_hotfixes=1 # epel is required for Ceph Reef [epel-low-priority] name=Extra Packages for Enterprise Linux $releasever - $basearch metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-$releasever&arch=$basearch&infra=$infra&content=$contentdir enabled=1 gpgcheck=0 countme=1 priority=100 includepkgs=libarrow*,parquet*,python3-asyncssh,re2,python3-grpcio,grpc*,abseil*,thrift*,blake3 
home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/delorean.repo0000644000175000017500000001341515117130351027553 0ustar zuulzuul[delorean-component-barbican] name=delorean-openstack-barbican-42b4c41831408a8e323fec3c8983b5c793b64874 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/barbican/42/b4/42b4c41831408a8e323fec3c8983b5c793b64874_08052e9d enabled=1 gpgcheck=0 priority=1 [delorean-component-baremetal] name=delorean-python-glean-10df0bd91b9bc5c9fd9cc02d75c0084cd4da29a7 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/baremetal/10/df/10df0bd91b9bc5c9fd9cc02d75c0084cd4da29a7_36137eb3 enabled=1 gpgcheck=0 priority=1 [delorean-component-cinder] name=delorean-openstack-cinder-1c00d6490d88e436f26efb71f2ac96e75252e97c baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/cinder/1c/00/1c00d6490d88e436f26efb71f2ac96e75252e97c_f716f000 enabled=1 gpgcheck=0 priority=1 [delorean-component-clients] name=delorean-python-stevedore-c4acc5639fd2329372142e39464fcca0209b0018 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/clients/c4/ac/c4acc5639fd2329372142e39464fcca0209b0018_d3ef8337 enabled=1 gpgcheck=0 priority=1 [delorean-component-cloudops] name=delorean-python-cloudkitty-tests-tempest-2c80f80e02c5accd099187ea762c8f8389bd7905 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/cloudops/2c/80/2c80f80e02c5accd099187ea762c8f8389bd7905_33e4dd93 enabled=1 gpgcheck=0 priority=1 [delorean-component-common] name=delorean-os-refresh-config-9bfc52b5049be2d8de6134d662fdde9dfa48960f baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/common/9b/fc/9bfc52b5049be2d8de6134d662fdde9dfa48960f_b85780e6 enabled=1 gpgcheck=0 
priority=1 [delorean-component-compute] name=delorean-openstack-nova-6f8decf0b4f1aa2e96292b6a2ffc28249fe4af5e baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/compute/6f/8d/6f8decf0b4f1aa2e96292b6a2ffc28249fe4af5e_dc05b899 enabled=1 gpgcheck=0 priority=1 [delorean-component-designate] name=delorean-python-designate-tests-tempest-347fdbc9b4595a10b726526b3c0b5928e5b7fcf2 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/designate/34/7f/347fdbc9b4595a10b726526b3c0b5928e5b7fcf2_3fd39337 enabled=1 gpgcheck=0 priority=1 [delorean-component-glance] name=delorean-openstack-glance-1fd12c29b339f30fe823e2b5beba14b5f241e52a baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/glance/1f/d1/1fd12c29b339f30fe823e2b5beba14b5f241e52a_0d693729 enabled=1 gpgcheck=0 priority=1 [delorean-component-keystone] name=delorean-openstack-keystone-e4b40af0ae3698fbbbbfb8c22468b33aae80e6d7 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/keystone/e4/b4/e4b40af0ae3698fbbbbfb8c22468b33aae80e6d7_264c03cc enabled=1 gpgcheck=0 priority=1 [delorean-component-manila] name=delorean-openstack-manila-3c01b7181572c95dac462eb19c3121e36cb0fe95 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/manila/3c/01/3c01b7181572c95dac462eb19c3121e36cb0fe95_912dfd18 enabled=1 gpgcheck=0 priority=1 [delorean-component-network] name=delorean-python-whitebox-neutron-tests-tempest-12cf06ce36a79a584fc757f4c25ff96845573c93 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/network/12/cf/12cf06ce36a79a584fc757f4c25ff96845573c93_3ed3aba3 enabled=1 gpgcheck=0 priority=1 [delorean-component-octavia] name=delorean-openstack-octavia-ba397f07a7331190208c93368ee23826ac4e2707 
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/octavia/ba/39/ba397f07a7331190208c93368ee23826ac4e2707_9d6e596a enabled=1 gpgcheck=0 priority=1 [delorean-component-optimize] name=delorean-openstack-watcher-c014f81a8647287f6dcc339321c1256f5a2e82d5 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/optimize/c0/14/c014f81a8647287f6dcc339321c1256f5a2e82d5_bcbfdccc enabled=1 gpgcheck=0 priority=1 [delorean-component-podified] name=delorean-ansible-config_template-5ccaa22121a7ff05620975540d81f6efb077d8db baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/podified/5c/ca/5ccaa22121a7ff05620975540d81f6efb077d8db_83eb7cc2 enabled=1 gpgcheck=0 priority=1 [delorean-component-puppet] name=delorean-puppet-ceph-7352068d7b8c84ded636ab3158dafa6f3851951e baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/puppet/73/52/7352068d7b8c84ded636ab3158dafa6f3851951e_7cde1ad1 enabled=1 gpgcheck=0 priority=1 [delorean-component-swift] name=delorean-openstack-swift-dc98a8463506ac520c469adb0ef47d0f7753905a baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/swift/dc/98/dc98a8463506ac520c469adb0ef47d0f7753905a_9d02f069 enabled=1 gpgcheck=0 priority=1 [delorean-component-tempest] name=delorean-python-tempestconf-8515371b7cceebd4282e09f1d8f0cc842df82855 baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/tempest/85/15/8515371b7cceebd4282e09f1d8f0cc842df82855_a1e336c7 enabled=1 gpgcheck=0 priority=1 [delorean-component-ui] name=delorean-openstack-heat-ui-013accbfd179753bc3f0d1f4e5bed07a4fd9f771 
baseurl=http://mirror.regionone.vexxhost-nodepool-tripleo.rdoproject.org:8080/rdo//centos9-antelope/component/ui/01/3a/013accbfd179753bc3f0d1f4e5bed07a4fd9f771_0c88e467 enabled=1 gpgcheck=0 priority=1 home/zuul/zuul-output/logs/ci-framework-data/artifacts/repositories/delorean.repo.md50000644000175000017500000000004115117130350030225 0ustar zuulzuulc3923531bcda0b0811b2d5053f189beb home/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2025-12-13_00-22/0000777000175000017500000000000015117130664026720 5ustar zuulzuul././@LongLink0000644000000000000000000000015300000000000011602 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2025-12-13_00-22/ansible_facts_cache/home/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2025-12-13_00-22/ansible_facts_0000755000175000017500000000000015117130664031571 5ustar zuulzuul././@LongLink0000644000000000000000000000016400000000000011604 Lustar rootroothome/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2025-12-13_00-22/ansible_facts_cache/localhosthome/zuul/zuul-output/logs/ci-framework-data/artifacts/ansible_facts.2025-12-13_00-22/ansible_facts_0000644000175000017500000016060215117130664031600 0ustar zuulzuul{ "_ansible_facts_gathered": true, "ansible_all_ipv4_addresses": [ "38.102.83.182", "192.168.122.11" ], "ansible_all_ipv6_addresses": [ "fe80::f816:3eff:fe96:e4" ], "ansible_apparmor": { "status": "disabled" }, "ansible_architecture": "x86_64", "ansible_bios_date": "04/01/2014", "ansible_bios_vendor": "SeaBIOS", "ansible_bios_version": "1.15.0-1", "ansible_board_asset_tag": "NA", "ansible_board_name": "NA", "ansible_board_serial": "NA", "ansible_board_vendor": "NA", "ansible_board_version": "NA", "ansible_chassis_asset_tag": "NA", "ansible_chassis_serial": "NA", "ansible_chassis_vendor": "QEMU", "ansible_chassis_version": "pc-i440fx-6.2", "ansible_cmdline": { "BOOT_IMAGE": "(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", "console": 
"ttyS0,115200n8", "crashkernel": "1G-2G:192M,2G-64G:256M,64G-:512M", "net.ifnames": "0", "no_timer_check": true, "ro": true, "root": "UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266" }, "ansible_date_time": { "date": "2025-12-13", "day": "13", "epoch": "1765585111", "epoch_int": "1765585111", "hour": "00", "iso8601": "2025-12-13T00:18:31Z", "iso8601_basic": "20251213T001831840548", "iso8601_basic_short": "20251213T001831", "iso8601_micro": "2025-12-13T00:18:31.840548Z", "minute": "18", "month": "12", "second": "31", "time": "00:18:31", "tz": "UTC", "tz_dst": "UTC", "tz_offset": "+0000", "weekday": "Saturday", "weekday_number": "6", "weeknumber": "49", "year": "2025" }, "ansible_default_ipv4": { "address": "38.102.83.182", "alias": "eth0", "broadcast": "38.102.83.255", "gateway": "38.102.83.1", "interface": "eth0", "macaddress": "fa:16:3e:96:00:e4", "mtu": 1500, "netmask": "255.255.255.0", "network": "38.102.83.0", "prefix": "24", "type": "ether" }, "ansible_default_ipv6": {}, "ansible_device_links": { "ids": { "sr0": [ "ata-QEMU_DVD-ROM_QM00001" ] }, "labels": { "sr0": [ "config-2" ] }, "masters": {}, "uuids": { "sr0": [ "2025-12-13-00-01-20-00" ], "vda1": [ "cbdedf45-ed1d-4952-82a8-33a12c0ba266" ] } }, "ansible_devices": { "sr0": { "holders": [], "host": "", "links": { "ids": [ "ata-QEMU_DVD-ROM_QM00001" ], "labels": [ "config-2" ], "masters": [], "uuids": [ "2025-12-13-00-01-20-00" ] }, "model": "QEMU DVD-ROM", "partitions": {}, "removable": "1", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "964", "sectorsize": "2048", "size": "482.00 KB", "support_discard": "2048", "vendor": "QEMU", "virtual": 1 }, "vda": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": { "vda1": { "holders": [], "links": { "ids": [], "labels": [], "masters": [], "uuids": [ "cbdedf45-ed1d-4952-82a8-33a12c0ba266" ] }, "sectors": "167770079", "sectorsize": 512, 
"size": "80.00 GB", "start": "2048", "uuid": "cbdedf45-ed1d-4952-82a8-33a12c0ba266" } }, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "none", "sectors": "167772160", "sectorsize": "512", "size": "80.00 GB", "support_discard": "512", "vendor": "0x1af4", "virtual": 1 } }, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/centos-release", "ansible_distribution_file_variety": "CentOS", "ansible_distribution_major_version": "9", "ansible_distribution_release": "Stream", "ansible_distribution_version": "9", "ansible_dns": { "nameservers": [ "192.168.122.10", "199.204.44.24", "199.204.47.54" ] }, "ansible_domain": "", "ansible_effective_group_id": 1000, "ansible_effective_user_id": 1000, "ansible_env": { "BASH_FUNC_which%%": "() { ( alias;\n eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@\n}", "DBUS_SESSION_BUS_ADDRESS": "unix:path=/run/user/1000/bus", "DEBUGINFOD_IMA_CERT_PATH": "/etc/keys/ima:", "DEBUGINFOD_URLS": "https://debuginfod.centos.org/ ", "HOME": "/home/zuul", "LANG": "en_US.UTF-8", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "LOGNAME": "zuul", "MOTD_SHOWN": "pam", "PATH": "~/.crc/bin:~/.crc/bin/oc:~/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin", "PWD": "/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks", "SELINUX_LEVEL_REQUESTED": "", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "SHELL": "/bin/bash", "SHLVL": "2", "SSH_CLIENT": "38.102.83.114 34192 22", "SSH_CONNECTION": "38.102.83.114 34192 38.102.83.182 22", "USER": "zuul", "XDG_RUNTIME_DIR": "/run/user/1000", "XDG_SESSION_CLASS": "user", "XDG_SESSION_ID": "9", "XDG_SESSION_TYPE": "tty", "_": "/usr/bin/python3", "which_declare": "declare -f" }, "ansible_eth0": { "active": true, "device": "eth0", "features": { "esp_hw_offload": "off [fixed]", 
"esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "on", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off 
[fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "off [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "38.102.83.182", "broadcast": "38.102.83.255", "netmask": "255.255.255.0", "network": "38.102.83.0", "prefix": "24" }, "ipv6": [ { "address": "fe80::f816:3eff:fe96:e4", "prefix": "64", "scope": "link" } ], "macaddress": "fa:16:3e:96:00:e4", "module": "virtio_net", "mtu": 1500, "pciid": "virtio1", "promisc": false, "speed": -1, "timestamping": [], "type": "ether" }, "ansible_eth1": { "active": true, "device": "eth1", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "on", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", 
"tx_gre_segmentation": "off [fixed]", "tx_gso_list": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "off [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "192.168.122.11", "broadcast": "192.168.122.255", "netmask": "255.255.255.0", "network": "192.168.122.0", "prefix": "24" }, "macaddress": "fa:16:3e:b1:85:fe", "module": "virtio_net", "mtu": 1500, "pciid": "virtio5", "promisc": false, "speed": -1, "timestamping": [], "type": "ether" }, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "controller", "ansible_hostname": "controller", "ansible_hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4", "ansible_interfaces": [ "eth0", "lo", "eth1" ], "ansible_is_chroot": false, "ansible_iscsi_iqn": "", "ansible_kernel": "5.14.0-648.el9.x86_64", "ansible_kernel_version": "#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025", "ansible_lo": { "active": true, "device": "lo", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hsr_dup_offload": "off [fixed]", "hsr_fwd_offload": "off [fixed]", "hsr_tag_ins_offload": "off [fixed]", "hsr_tag_rm_offload": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": 
"off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "macsec_hw_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_gro_list": "off", "rx_udp_gro_forwarding": "off", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_list": "on", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_tunnel_remcsum_segmentation": "off [fixed]", "tx_udp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "on [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "127.0.0.1", "broadcast": "", "netmask": "255.0.0.0", "network": "127.0.0.0", "prefix": "8" }, "ipv6": [ { "address": "::1", "prefix": "128", "scope": "host" } ], 
"mtu": 65536, "promisc": false, "timestamping": [], "type": "loopback" }, "ansible_loadavg": { "15m": 0.47, "1m": 1.12, "5m": 0.87 }, "ansible_local": {}, "ansible_locally_reachable_ips": { "ipv4": [ "38.102.83.182", "127.0.0.0/8", "127.0.0.1", "192.168.122.11" ], "ipv6": [ "::1", "fe80::f816:3eff:fe96:e4" ] }, "ansible_lsb": {}, "ansible_lvm": "N/A", "ansible_machine": "x86_64", "ansible_machine_id": "64f1d6692049d8be5e8b216cc203502c", "ansible_memfree_mb": 5270, "ansible_memory_mb": { "nocache": { "free": 6764, "used": 915 }, "real": { "free": 5270, "total": 7679, "used": 2409 }, "swap": { "cached": 0, "free": 0, "total": 0, "used": 0 } }, "ansible_memtotal_mb": 7679, "ansible_mounts": [ { "block_available": 19987355, "block_size": 4096, "block_total": 20954875, "block_used": 967520, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 41790391, "inode_total": 41942512, "inode_used": 152121, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota", "size_available": 81868206080, "size_total": 85831168000, "uuid": "cbdedf45-ed1d-4952-82a8-33a12c0ba266" } ], "ansible_nodename": "controller", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "dnf", "ansible_proc_cmdline": { "BOOT_IMAGE": "(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", "console": "ttyS0,115200n8", "crashkernel": "1G-2G:192M,2G-64G:256M,64G-:512M", "net.ifnames": "0", "no_timer_check": true, "ro": true, "root": "UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266" }, "ansible_processor": [ "0", "AuthenticAMD", "AMD EPYC-Rome Processor", "1", "AuthenticAMD", "AMD EPYC-Rome Processor", "2", "AuthenticAMD", "AMD EPYC-Rome Processor", "3", "AuthenticAMD", "AMD EPYC-Rome Processor", "4", "AuthenticAMD", "AMD EPYC-Rome Processor", "5", "AuthenticAMD", "AMD EPYC-Rome Processor", "6", "AuthenticAMD", "AMD EPYC-Rome Processor", "7", "AuthenticAMD", "AMD EPYC-Rome Processor" ], "ansible_processor_cores": 1, "ansible_processor_count": 8, "ansible_processor_nproc": 8, 
"ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 8, "ansible_product_name": "OpenStack Nova", "ansible_product_serial": "NA", "ansible_product_uuid": "NA", "ansible_product_version": "26.3.1", "ansible_python": { "executable": "/usr/bin/python3", "has_sslcontext": true, "type": "cpython", "version": { "major": 3, "micro": 25, "minor": 9, "releaselevel": "final", "serial": 0 }, "version_info": [ 3, 9, 25, "final", 0 ] }, "ansible_python_version": "3.9.25", "ansible_real_group_id": 1000, "ansible_real_user_id": 1000, "ansible_selinux": { "config_mode": "enforcing", "mode": "enforcing", "policyvers": 33, "status": "enabled", "type": "targeted" }, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBfIWnOzoaFl6L11147qWcwowK6Ci0Rz8t1WjAVB/zcYVQE7pudrJ717ZfSW85tw14Xjf9dwVFE9kociqbG0zJc=", "ansible_ssh_host_key_ecdsa_public_keytype": "ecdsa-sha2-nistp256", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAINkLqbGxYdqx81uU7hEPuFtk8VGcR7wMa2mI4eVESIvr", "ansible_ssh_host_key_ed25519_public_keytype": "ssh-ed25519", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABgQDFMl9iNNfpkBemj+80eKwpd7sRDpIn97HaSAZ/63vqGAbPhgMiH6WngR/zO4oyGyB8lf6fsD+w4LKZgiuAhBwp++i2IcfE5Kfy6quV1X1wL5NwHAombTIg8qtOBzyJxFksJBIHLn8mcWWkttFKy18Ou9KhVGzrDOe8XFSy+jDSiZpPmx7DYjwRg7irJ8dfyyG0bjzrw/C5eBQvyGVsr9RNSDlOv5XmLkybsyqg8nCjNrNEBaKrpRf51w6wWHrTzl/U492b0rnW+3xzYRAnuOhrbIP0OoK+92VqKKeAld7BUW4ZL3PPogxoRhuieoWCGzznwQBUdar6WNRcaUSK3mzSjHikjwkUAl9SR7srM4T9Tc4Yf13/dZ3kOQFQNkH9Xl2+vEHfr9xPC5dknmfsFD4wIdvGCo7MW7+D6tD55fkNhgwMl9YAT7IJVbxrwKBQtHspcwjxuQqyuNN48RMbfflqtKlYaiSqT6TmhmJHfFpToEiZ5O0I11Jw+S6nVSf65MU=", "ansible_ssh_host_key_rsa_public_keytype": "ssh-rsa", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": [ "" ], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": 
"OpenStack Foundation", "ansible_uptime_seconds": 1019, "ansible_user_dir": "/home/zuul", "ansible_user_gecos": "", "ansible_user_gid": 1000, "ansible_user_id": "zuul", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 1000, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_tech_guest": [ "openstack" ], "ansible_virtualization_tech_host": [ "kvm" ], "ansible_virtualization_type": "openstack", "cifmw_discovered_hash": "74bbc8589b27428ecda9125f78c03c8944b07a8b2fd431216ed273af7f01a4bd", "cifmw_discovered_hash_algorithm": "sha256", "cifmw_discovered_image_name": "CentOS-Stream-GenericCloud-x86_64-9-latest.x86_64.qcow2", "cifmw_discovered_image_url": "https://cloud.centos.org/centos/9-stream/x86_64/images//CentOS-Stream-GenericCloud-x86_64-9-latest.x86_64.qcow2", "cifmw_install_yamls_defaults": { "ADOPTED_EXTERNAL_NETWORK": "172.21.1.0/24", "ADOPTED_INTERNALAPI_NETWORK": "172.17.1.0/24", "ADOPTED_STORAGEMGMT_NETWORK": "172.20.1.0/24", "ADOPTED_STORAGE_NETWORK": "172.18.1.0/24", "ADOPTED_TENANT_NETWORK": "172.9.1.0/24", "ANSIBLEEE": "config/samples/_v1beta1_ansibleee.yaml", "ANSIBLEEE_BRANCH": "main", "ANSIBLEEE_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml", "ANSIBLEEE_IMG": "quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest", "ANSIBLEEE_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml", "ANSIBLEEE_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests", "ANSIBLEEE_KUTTL_NAMESPACE": "ansibleee-kuttl-tests", "ANSIBLEEE_REPO": "https://github.com/openstack-k8s-operators/openstack-ansibleee-operator", "ANSIBLEE_COMMIT_HASH": "", "BARBICAN": "config/samples/barbican_v1beta1_barbican.yaml", "BARBICAN_BRANCH": "main", "BARBICAN_COMMIT_HASH": "", 
"BARBICAN_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml", "BARBICAN_DEPL_IMG": "unused", "BARBICAN_IMG": "quay.io/openstack-k8s-operators/barbican-operator-index:latest", "BARBICAN_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml", "BARBICAN_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests", "BARBICAN_KUTTL_NAMESPACE": "barbican-kuttl-tests", "BARBICAN_REPO": "https://github.com/openstack-k8s-operators/barbican-operator.git", "BARBICAN_SERVICE_ENABLED": "true", "BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY": "sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU=", "BAREMETAL_BRANCH": "main", "BAREMETAL_COMMIT_HASH": "", "BAREMETAL_IMG": "quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest", "BAREMETAL_OS_CONTAINER_IMG": "", "BAREMETAL_OS_IMG": "", "BAREMETAL_REPO": "https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git", "BAREMETAL_TIMEOUT": "20m", "BASH_IMG": "quay.io/openstack-k8s-operators/bash:latest", "BGP_ASN": "64999", "BGP_LEAF_1": "100.65.4.1", "BGP_LEAF_2": "100.64.4.1", "BGP_OVN_ROUTING": "false", "BGP_PEER_ASN": "64999", "BGP_SOURCE_IP": "172.30.4.2", "BGP_SOURCE_IP6": "f00d:f00d:f00d:f00d:f00d:f00d:f00d:42", "BMAAS_BRIDGE_IPV4_PREFIX": "172.20.1.2/24", "BMAAS_BRIDGE_IPV6_PREFIX": "fd00:bbbb::2/64", "BMAAS_INSTANCE_DISK_SIZE": "20", "BMAAS_INSTANCE_MEMORY": "4096", "BMAAS_INSTANCE_NAME_PREFIX": "crc-bmaas", "BMAAS_INSTANCE_NET_MODEL": "virtio", "BMAAS_INSTANCE_OS_VARIANT": "centos-stream9", "BMAAS_INSTANCE_VCPUS": "2", "BMAAS_INSTANCE_VIRT_TYPE": "kvm", "BMAAS_IPV4": "true", "BMAAS_IPV6": "false", "BMAAS_LIBVIRT_USER": "sushyemu", "BMAAS_METALLB_ADDRESS_POOL": "172.20.1.64/26", "BMAAS_METALLB_POOL_NAME": "baremetal", "BMAAS_NETWORK_IPV4_PREFIX": "172.20.1.1/24", "BMAAS_NETWORK_IPV6_PREFIX": "fd00:bbbb::1/64", "BMAAS_NETWORK_NAME": 
"crc-bmaas", "BMAAS_NODE_COUNT": "1", "BMAAS_OCP_INSTANCE_NAME": "crc", "BMAAS_REDFISH_PASSWORD": "password", "BMAAS_REDFISH_USERNAME": "admin", "BMAAS_ROUTE_LIBVIRT_NETWORKS": "crc-bmaas,crc,default", "BMAAS_SUSHY_EMULATOR_DRIVER": "libvirt", "BMAAS_SUSHY_EMULATOR_IMAGE": "quay.io/metal3-io/sushy-tools:latest", "BMAAS_SUSHY_EMULATOR_NAMESPACE": "sushy-emulator", "BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE": "/etc/openstack/clouds.yaml", "BMAAS_SUSHY_EMULATOR_OS_CLOUD": "openstack", "BMH_NAMESPACE": "openstack", "BMO_BRANCH": "release-0.9", "BMO_CLEANUP": "true", "BMO_COMMIT_HASH": "", "BMO_IPA_BRANCH": "stable/2024.1", "BMO_IRONIC_HOST": "192.168.122.10", "BMO_PROVISIONING_INTERFACE": "", "BMO_REPO": "https://github.com/metal3-io/baremetal-operator", "BMO_SETUP": "", "BMO_SETUP_ROUTE_REPLACE": "true", "BM_CTLPLANE_INTERFACE": "enp1s0", "BM_INSTANCE_MEMORY": "8192", "BM_INSTANCE_NAME_PREFIX": "edpm-compute-baremetal", "BM_INSTANCE_NAME_SUFFIX": "0", "BM_NETWORK_NAME": "default", "BM_NODE_COUNT": "1", "BM_ROOT_PASSWORD": "", "BM_ROOT_PASSWORD_SECRET": "", "CEILOMETER_CENTRAL_DEPL_IMG": "unused", "CEILOMETER_NOTIFICATION_DEPL_IMG": "unused", "CEPH_BRANCH": "release-1.15", "CEPH_CLIENT": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml", "CEPH_COMMON": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml", "CEPH_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml", "CEPH_CRDS": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml", "CEPH_IMG": "quay.io/ceph/demo:latest-squid", "CEPH_OP": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml", "CEPH_REPO": "https://github.com/rook/rook.git", "CERTMANAGER_TIMEOUT": "300s", "CHECKOUT_FROM_OPENSTACK_REF": "true", "CINDER": "config/samples/cinder_v1beta1_cinder.yaml", "CINDERAPI_DEPL_IMG": "unused", 
"CINDERBKP_DEPL_IMG": "unused", "CINDERSCH_DEPL_IMG": "unused", "CINDERVOL_DEPL_IMG": "unused", "CINDER_BRANCH": "main", "CINDER_COMMIT_HASH": "", "CINDER_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml", "CINDER_IMG": "quay.io/openstack-k8s-operators/cinder-operator-index:latest", "CINDER_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml", "CINDER_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests", "CINDER_KUTTL_NAMESPACE": "cinder-kuttl-tests", "CINDER_REPO": "https://github.com/openstack-k8s-operators/cinder-operator.git", "CLEANUP_DIR_CMD": "rm -Rf", "CRC_BGP_NIC_1_MAC": "52:54:00:11:11:11", "CRC_BGP_NIC_2_MAC": "52:54:00:11:11:12", "CRC_HTTPS_PROXY": "", "CRC_HTTP_PROXY": "", "CRC_STORAGE_NAMESPACE": "crc-storage", "CRC_STORAGE_RETRIES": "3", "CRC_URL": "'https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz'", "CRC_VERSION": "latest", "DATAPLANE_ANSIBLE_SECRET": "dataplane-ansible-ssh-private-key-secret", "DATAPLANE_ANSIBLE_USER": "", "DATAPLANE_COMPUTE_IP": "192.168.122.100", "DATAPLANE_CONTAINER_PREFIX": "openstack", "DATAPLANE_CONTAINER_TAG": "current-podified", "DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG": "quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest", "DATAPLANE_DEFAULT_GW": "192.168.122.1", "DATAPLANE_EXTRA_NOVA_CONFIG_FILE": "/dev/null", "DATAPLANE_GROWVOLS_ARGS": "/=8GB /tmp=1GB /home=1GB /var=100%", "DATAPLANE_KUSTOMIZE_SCENARIO": "preprovisioned", "DATAPLANE_NETWORKER_IP": "192.168.122.200", "DATAPLANE_NETWORK_INTERFACE_NAME": "eth0", "DATAPLANE_NOVA_NFS_PATH": "", "DATAPLANE_NTP_SERVER": "pool.ntp.org", "DATAPLANE_PLAYBOOK": "osp.edpm.download_cache", "DATAPLANE_REGISTRY_URL": "quay.io/podified-antelope-centos9", "DATAPLANE_RUNNER_IMG": "", "DATAPLANE_SERVER_ROLE": "compute", 
"DATAPLANE_SSHD_ALLOWED_RANGES": "['192.168.122.0/24']", "DATAPLANE_TIMEOUT": "30m", "DATAPLANE_TLS_ENABLED": "true", "DATAPLANE_TOTAL_NETWORKER_NODES": "1", "DATAPLANE_TOTAL_NODES": "1", "DBSERVICE": "galera", "DESIGNATE": "config/samples/designate_v1beta1_designate.yaml", "DESIGNATE_BRANCH": "main", "DESIGNATE_COMMIT_HASH": "", "DESIGNATE_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml", "DESIGNATE_IMG": "quay.io/openstack-k8s-operators/designate-operator-index:latest", "DESIGNATE_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml", "DESIGNATE_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests", "DESIGNATE_KUTTL_NAMESPACE": "designate-kuttl-tests", "DESIGNATE_REPO": "https://github.com/openstack-k8s-operators/designate-operator.git", "DNSDATA": "config/samples/network_v1beta1_dnsdata.yaml", "DNSDATA_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml", "DNSMASQ": "config/samples/network_v1beta1_dnsmasq.yaml", "DNSMASQ_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml", "DNS_DEPL_IMG": "unused", "DNS_DOMAIN": "localdomain", "DOWNLOAD_TOOLS_SELECTION": "all", "EDPM_ATTACH_EXTNET": "true", "EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES": "'[]'", "EDPM_COMPUTE_ADDITIONAL_NETWORKS": "'[]'", "EDPM_COMPUTE_CELLS": "1", "EDPM_COMPUTE_CEPH_ENABLED": "true", "EDPM_COMPUTE_CEPH_NOVA": "true", "EDPM_COMPUTE_DHCP_AGENT_ENABLED": "true", "EDPM_COMPUTE_SRIOV_ENABLED": "true", "EDPM_COMPUTE_SUFFIX": "0", "EDPM_CONFIGURE_DEFAULT_ROUTE": "true", "EDPM_CONFIGURE_HUGEPAGES": "false", "EDPM_CONFIGURE_NETWORKING": "true", "EDPM_FIRSTBOOT_EXTRA": "/tmp/edpm-firstboot-extra", "EDPM_NETWORKER_SUFFIX": "0", "EDPM_TOTAL_NETWORKERS": "1", "EDPM_TOTAL_NODES": "1", 
"GALERA_REPLICAS": "", "GENERATE_SSH_KEYS": "true", "GIT_CLONE_OPTS": "", "GLANCE": "config/samples/glance_v1beta1_glance.yaml", "GLANCEAPI_DEPL_IMG": "unused", "GLANCE_BRANCH": "main", "GLANCE_COMMIT_HASH": "", "GLANCE_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml", "GLANCE_IMG": "quay.io/openstack-k8s-operators/glance-operator-index:latest", "GLANCE_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml", "GLANCE_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests", "GLANCE_KUTTL_NAMESPACE": "glance-kuttl-tests", "GLANCE_REPO": "https://github.com/openstack-k8s-operators/glance-operator.git", "HEAT": "config/samples/heat_v1beta1_heat.yaml", "HEATAPI_DEPL_IMG": "unused", "HEATCFNAPI_DEPL_IMG": "unused", "HEATENGINE_DEPL_IMG": "unused", "HEAT_AUTH_ENCRYPTION_KEY": "767c3ed056cbaa3b9dfedb8c6f825bf0", "HEAT_BRANCH": "main", "HEAT_COMMIT_HASH": "", "HEAT_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml", "HEAT_IMG": "quay.io/openstack-k8s-operators/heat-operator-index:latest", "HEAT_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml", "HEAT_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests", "HEAT_KUTTL_NAMESPACE": "heat-kuttl-tests", "HEAT_REPO": "https://github.com/openstack-k8s-operators/heat-operator.git", "HEAT_SERVICE_ENABLED": "true", "HORIZON": "config/samples/horizon_v1beta1_horizon.yaml", "HORIZON_BRANCH": "main", "HORIZON_COMMIT_HASH": "", "HORIZON_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml", "HORIZON_DEPL_IMG": "unused", "HORIZON_IMG": "quay.io/openstack-k8s-operators/horizon-operator-index:latest", "HORIZON_KUTTL_CONF": 
"/home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml", "HORIZON_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests", "HORIZON_KUTTL_NAMESPACE": "horizon-kuttl-tests", "HORIZON_REPO": "https://github.com/openstack-k8s-operators/horizon-operator.git", "INFRA_BRANCH": "main", "INFRA_COMMIT_HASH": "", "INFRA_IMG": "quay.io/openstack-k8s-operators/infra-operator-index:latest", "INFRA_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml", "INFRA_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests", "INFRA_KUTTL_NAMESPACE": "infra-kuttl-tests", "INFRA_REPO": "https://github.com/openstack-k8s-operators/infra-operator.git", "INSTALL_CERT_MANAGER": "true", "INSTALL_NMSTATE": "true || false", "INSTALL_NNCP": "true || false", "INTERNALAPI_HOST_ROUTES": "", "IPV6_LAB_IPV4_NETWORK_IPADDRESS": "172.30.0.1/24", "IPV6_LAB_IPV6_NETWORK_IPADDRESS": "fd00:abcd:abcd:fc00::1/64", "IPV6_LAB_LIBVIRT_STORAGE_POOL": "default", "IPV6_LAB_MANAGE_FIREWALLD": "true", "IPV6_LAB_NAT64_HOST_IPV4": "172.30.0.2/24", "IPV6_LAB_NAT64_HOST_IPV6": "fd00:abcd:abcd:fc00::2/64", "IPV6_LAB_NAT64_INSTANCE_NAME": "nat64-router", "IPV6_LAB_NAT64_IPV6_NETWORK": "fd00:abcd:abcd:fc00::/64", "IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL": "192.168.255.0/24", "IPV6_LAB_NAT64_TAYGA_IPV4": "192.168.255.1", "IPV6_LAB_NAT64_TAYGA_IPV6": "fd00:abcd:abcd:fc00::3", "IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX": "fd00:abcd:abcd:fcff::/96", "IPV6_LAB_NAT64_UPDATE_PACKAGES": "false", "IPV6_LAB_NETWORK_NAME": "nat64", "IPV6_LAB_SNO_CLUSTER_NETWORK": "fd00:abcd:0::/48", "IPV6_LAB_SNO_HOST_IP": "fd00:abcd:abcd:fc00::11", "IPV6_LAB_SNO_HOST_PREFIX": "64", "IPV6_LAB_SNO_INSTANCE_NAME": "sno", "IPV6_LAB_SNO_MACHINE_NETWORK": "fd00:abcd:abcd:fc00::/64", "IPV6_LAB_SNO_OCP_MIRROR_URL": "https://mirror.openshift.com/pub/openshift-v4/clients/ocp", 
"IPV6_LAB_SNO_OCP_VERSION": "latest-4.14", "IPV6_LAB_SNO_SERVICE_NETWORK": "fd00:abcd:abcd:fc03::/112", "IPV6_LAB_SSH_PUB_KEY": "/home/zuul/.ssh/id_rsa.pub", "IPV6_LAB_WORK_DIR": "/home/zuul/.ipv6lab", "IRONIC": "config/samples/ironic_v1beta1_ironic.yaml", "IRONICAPI_DEPL_IMG": "unused", "IRONICCON_DEPL_IMG": "unused", "IRONICINS_DEPL_IMG": "unused", "IRONICNAG_DEPL_IMG": "unused", "IRONICPXE_DEPL_IMG": "unused", "IRONIC_BRANCH": "main", "IRONIC_COMMIT_HASH": "", "IRONIC_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml", "IRONIC_IMAGE": "quay.io/metal3-io/ironic", "IRONIC_IMAGE_TAG": "release-24.1", "IRONIC_IMG": "quay.io/openstack-k8s-operators/ironic-operator-index:latest", "IRONIC_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml", "IRONIC_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests", "IRONIC_KUTTL_NAMESPACE": "ironic-kuttl-tests", "IRONIC_REPO": "https://github.com/openstack-k8s-operators/ironic-operator.git", "KEYSTONEAPI": "config/samples/keystone_v1beta1_keystoneapi.yaml", "KEYSTONEAPI_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml", "KEYSTONEAPI_DEPL_IMG": "unused", "KEYSTONE_BRANCH": "main", "KEYSTONE_COMMIT_HASH": "", "KEYSTONE_FEDERATION_CLIENT_SECRET": "COX8bmlKAWn56XCGMrKQJj7dgHNAOl6f", "KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE": "openstack", "KEYSTONE_IMG": "quay.io/openstack-k8s-operators/keystone-operator-index:latest", "KEYSTONE_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml", "KEYSTONE_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests", "KEYSTONE_KUTTL_NAMESPACE": "keystone-kuttl-tests", "KEYSTONE_REPO": "https://github.com/openstack-k8s-operators/keystone-operator.git", 
"KUBEADMIN_PWD": "12345678", "LIBVIRT_SECRET": "libvirt-secret", "LOKI_DEPLOY_MODE": "openshift-network", "LOKI_DEPLOY_NAMESPACE": "netobserv", "LOKI_DEPLOY_SIZE": "1x.demo", "LOKI_NAMESPACE": "openshift-operators-redhat", "LOKI_OPERATOR_GROUP": "openshift-operators-redhat-loki", "LOKI_SUBSCRIPTION": "loki-operator", "LVMS_CR": "1", "MANILA": "config/samples/manila_v1beta1_manila.yaml", "MANILAAPI_DEPL_IMG": "unused", "MANILASCH_DEPL_IMG": "unused", "MANILASHARE_DEPL_IMG": "unused", "MANILA_BRANCH": "main", "MANILA_COMMIT_HASH": "", "MANILA_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml", "MANILA_IMG": "quay.io/openstack-k8s-operators/manila-operator-index:latest", "MANILA_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml", "MANILA_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests", "MANILA_KUTTL_NAMESPACE": "manila-kuttl-tests", "MANILA_REPO": "https://github.com/openstack-k8s-operators/manila-operator.git", "MANILA_SERVICE_ENABLED": "true", "MARIADB": "config/samples/mariadb_v1beta1_galera.yaml", "MARIADB_BRANCH": "main", "MARIADB_CHAINSAW_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml", "MARIADB_CHAINSAW_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests", "MARIADB_CHAINSAW_NAMESPACE": "mariadb-chainsaw-tests", "MARIADB_COMMIT_HASH": "", "MARIADB_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml", "MARIADB_DEPL_IMG": "unused", "MARIADB_IMG": "quay.io/openstack-k8s-operators/mariadb-operator-index:latest", "MARIADB_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml", "MARIADB_KUTTL_DIR": 
"/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests", "MARIADB_KUTTL_NAMESPACE": "mariadb-kuttl-tests", "MARIADB_REPO": "https://github.com/openstack-k8s-operators/mariadb-operator.git", "MEMCACHED": "config/samples/memcached_v1beta1_memcached.yaml", "MEMCACHED_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml", "MEMCACHED_DEPL_IMG": "unused", "METADATA_SHARED_SECRET": "1234567842", "METALLB_IPV6_POOL": "fd00:aaaa::80-fd00:aaaa::90", "METALLB_POOL": "192.168.122.80-192.168.122.90", "MICROSHIFT": "0", "NAMESPACE": "openstack", "NETCONFIG": "config/samples/network_v1beta1_netconfig.yaml", "NETCONFIG_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml", "NETCONFIG_DEPL_IMG": "unused", "NETOBSERV_DEPLOY_NAMESPACE": "netobserv", "NETOBSERV_NAMESPACE": "openshift-netobserv-operator", "NETOBSERV_OPERATOR_GROUP": "openshift-netobserv-operator-net", "NETOBSERV_SUBSCRIPTION": "netobserv-operator", "NETWORK_BGP": "false", "NETWORK_DESIGNATE_ADDRESS_PREFIX": "172.28.0", "NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX": "172.50.0", "NETWORK_INTERNALAPI_ADDRESS_PREFIX": "172.17.0", "NETWORK_ISOLATION": "true", "NETWORK_ISOLATION_INSTANCE_NAME": "crc", "NETWORK_ISOLATION_IPV4": "true", "NETWORK_ISOLATION_IPV4_ADDRESS": "172.16.1.1/24", "NETWORK_ISOLATION_IPV4_NAT": "true", "NETWORK_ISOLATION_IPV6": "false", "NETWORK_ISOLATION_IPV6_ADDRESS": "fd00:aaaa::1/64", "NETWORK_ISOLATION_IP_ADDRESS": "192.168.122.10", "NETWORK_ISOLATION_MAC": "52:54:00:11:11:10", "NETWORK_ISOLATION_NETWORK_NAME": "net-iso", "NETWORK_ISOLATION_NET_NAME": "default", "NETWORK_ISOLATION_USE_DEFAULT_NETWORK": "true", "NETWORK_MTU": "1500", "NETWORK_STORAGEMGMT_ADDRESS_PREFIX": "172.20.0", "NETWORK_STORAGE_ADDRESS_PREFIX": "172.18.0", "NETWORK_STORAGE_MACVLAN": "", "NETWORK_TENANT_ADDRESS_PREFIX": "172.19.0", "NETWORK_VLAN_START": 
"20", "NETWORK_VLAN_STEP": "1", "NEUTRONAPI": "config/samples/neutron_v1beta1_neutronapi.yaml", "NEUTRONAPI_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml", "NEUTRONAPI_DEPL_IMG": "unused", "NEUTRON_BRANCH": "main", "NEUTRON_COMMIT_HASH": "", "NEUTRON_IMG": "quay.io/openstack-k8s-operators/neutron-operator-index:latest", "NEUTRON_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml", "NEUTRON_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests", "NEUTRON_KUTTL_NAMESPACE": "neutron-kuttl-tests", "NEUTRON_REPO": "https://github.com/openstack-k8s-operators/neutron-operator.git", "NFS_HOME": "/home/nfs", "NMSTATE_NAMESPACE": "openshift-nmstate", "NMSTATE_OPERATOR_GROUP": "openshift-nmstate-tn6k8", "NMSTATE_SUBSCRIPTION": "kubernetes-nmstate-operator", "NNCP_ADDITIONAL_HOST_ROUTES": "", "NNCP_BGP_1_INTERFACE": "enp7s0", "NNCP_BGP_1_IP_ADDRESS": "100.65.4.2", "NNCP_BGP_2_INTERFACE": "enp8s0", "NNCP_BGP_2_IP_ADDRESS": "100.64.4.2", "NNCP_BRIDGE": "ospbr", "NNCP_CLEANUP_TIMEOUT": "120s", "NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX": "fd00:aaaa::", "NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX": "10", "NNCP_CTLPLANE_IP_ADDRESS_PREFIX": "192.168.122", "NNCP_CTLPLANE_IP_ADDRESS_SUFFIX": "10", "NNCP_DNS_SERVER": "192.168.122.1", "NNCP_DNS_SERVER_IPV6": "fd00:aaaa::1", "NNCP_GATEWAY": "192.168.122.1", "NNCP_GATEWAY_IPV6": "fd00:aaaa::1", "NNCP_INTERFACE": "enp6s0", "NNCP_NODES": "", "NNCP_TIMEOUT": "240s", "NOVA": "config/samples/nova_v1beta1_nova_collapsed_cell.yaml", "NOVA_BRANCH": "main", "NOVA_COMMIT_HASH": "", "NOVA_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml", "NOVA_IMG": "quay.io/openstack-k8s-operators/nova-operator-index:latest", "NOVA_REPO": "https://github.com/openstack-k8s-operators/nova-operator.git", 
"NUMBER_OF_INSTANCES": "1", "OCP_NETWORK_NAME": "crc", "OCTAVIA": "config/samples/octavia_v1beta1_octavia.yaml", "OCTAVIA_BRANCH": "main", "OCTAVIA_COMMIT_HASH": "", "OCTAVIA_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml", "OCTAVIA_IMG": "quay.io/openstack-k8s-operators/octavia-operator-index:latest", "OCTAVIA_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml", "OCTAVIA_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests", "OCTAVIA_KUTTL_NAMESPACE": "octavia-kuttl-tests", "OCTAVIA_REPO": "https://github.com/openstack-k8s-operators/octavia-operator.git", "OKD": "false", "OPENSTACK_BRANCH": "main", "OPENSTACK_BUNDLE_IMG": "quay.io/openstack-k8s-operators/openstack-operator-bundle:latest", "OPENSTACK_COMMIT_HASH": "", "OPENSTACK_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml", "OPENSTACK_CRDS_DIR": "openstack_crds", "OPENSTACK_CTLPLANE": "config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml", "OPENSTACK_IMG": "quay.io/openstack-k8s-operators/openstack-operator-index:latest", "OPENSTACK_K8S_BRANCH": "main", "OPENSTACK_K8S_TAG": "latest", "OPENSTACK_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml", "OPENSTACK_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests", "OPENSTACK_KUTTL_NAMESPACE": "openstack-kuttl-tests", "OPENSTACK_NEUTRON_CUSTOM_CONF": "", "OPENSTACK_REPO": "https://github.com/openstack-k8s-operators/openstack-operator.git", "OPENSTACK_STORAGE_BUNDLE_IMG": "quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest", "OPERATOR_BASE_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator", 
"OPERATOR_CHANNEL": "", "OPERATOR_NAMESPACE": "openstack-operators", "OPERATOR_SOURCE": "", "OPERATOR_SOURCE_NAMESPACE": "", "OUT": "/home/zuul/ci-framework-data/artifacts/manifests", "OUTPUT_DIR": "/home/zuul/ci-framework-data/artifacts/edpm", "OVNCONTROLLER": "config/samples/ovn_v1beta1_ovncontroller.yaml", "OVNCONTROLLER_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml", "OVNCONTROLLER_NMAP": "true", "OVNDBS": "config/samples/ovn_v1beta1_ovndbcluster.yaml", "OVNDBS_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml", "OVNNORTHD": "config/samples/ovn_v1beta1_ovnnorthd.yaml", "OVNNORTHD_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml", "OVN_BRANCH": "main", "OVN_COMMIT_HASH": "", "OVN_IMG": "quay.io/openstack-k8s-operators/ovn-operator-index:latest", "OVN_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml", "OVN_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests", "OVN_KUTTL_NAMESPACE": "ovn-kuttl-tests", "OVN_REPO": "https://github.com/openstack-k8s-operators/ovn-operator.git", "PASSWORD": "12345678", "PLACEMENTAPI": "config/samples/placement_v1beta1_placementapi.yaml", "PLACEMENTAPI_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml", "PLACEMENTAPI_DEPL_IMG": "unused", "PLACEMENT_BRANCH": "main", "PLACEMENT_COMMIT_HASH": "", "PLACEMENT_IMG": "quay.io/openstack-k8s-operators/placement-operator-index:latest", "PLACEMENT_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml", "PLACEMENT_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests", "PLACEMENT_KUTTL_NAMESPACE": 
"placement-kuttl-tests", "PLACEMENT_REPO": "https://github.com/openstack-k8s-operators/placement-operator.git", "PULL_SECRET": "/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/playbooks/pull-secret.txt", "RABBITMQ": "docs/examples/default-security-context/rabbitmq.yaml", "RABBITMQ_BRANCH": "patches", "RABBITMQ_COMMIT_HASH": "", "RABBITMQ_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml", "RABBITMQ_DEPL_IMG": "unused", "RABBITMQ_IMG": "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest", "RABBITMQ_REPO": "https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git", "REDHAT_OPERATORS": "false", "REDIS": "config/samples/redis_v1beta1_redis.yaml", "REDIS_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml", "REDIS_DEPL_IMG": "unused", "RH_REGISTRY_PWD": "", "RH_REGISTRY_USER": "", "SECRET": "osp-secret", "SG_CORE_DEPL_IMG": "unused", "STANDALONE_COMPUTE_DRIVER": "libvirt", "STANDALONE_EXTERNAL_NET_PREFFIX": "172.21.0", "STANDALONE_INTERNALAPI_NET_PREFIX": "172.17.0", "STANDALONE_STORAGEMGMT_NET_PREFIX": "172.20.0", "STANDALONE_STORAGE_NET_PREFIX": "172.18.0", "STANDALONE_TENANT_NET_PREFIX": "172.19.0", "STORAGEMGMT_HOST_ROUTES": "", "STORAGE_CLASS": "local-storage", "STORAGE_HOST_ROUTES": "", "SWIFT": "config/samples/swift_v1beta1_swift.yaml", "SWIFT_BRANCH": "main", "SWIFT_COMMIT_HASH": "", "SWIFT_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml", "SWIFT_IMG": "quay.io/openstack-k8s-operators/swift-operator-index:latest", "SWIFT_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml", "SWIFT_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests", "SWIFT_KUTTL_NAMESPACE": "swift-kuttl-tests", 
"SWIFT_REPO": "https://github.com/openstack-k8s-operators/swift-operator.git", "TELEMETRY": "config/samples/telemetry_v1beta1_telemetry.yaml", "TELEMETRY_BRANCH": "main", "TELEMETRY_COMMIT_HASH": "", "TELEMETRY_CR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml", "TELEMETRY_IMG": "quay.io/openstack-k8s-operators/telemetry-operator-index:latest", "TELEMETRY_KUTTL_BASEDIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator", "TELEMETRY_KUTTL_CONF": "/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml", "TELEMETRY_KUTTL_DIR": "/home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites", "TELEMETRY_KUTTL_NAMESPACE": "telemetry-kuttl-tests", "TELEMETRY_KUTTL_RELPATH": "test/kuttl/suites", "TELEMETRY_REPO": "https://github.com/openstack-k8s-operators/telemetry-operator.git", "TENANT_HOST_ROUTES": "", "TIMEOUT": "300s", "TLS_ENABLED": "false", "tripleo_deploy": "export REGISTRY_USER:" }, "cifmw_install_yamls_environment": { "CHECKOUT_FROM_OPENSTACK_REF": "true", "KUBECONFIG": "/home/zuul/.crc/machines/crc/kubeconfig", "OPENSTACK_K8S_BRANCH": "main", "OUT": "/home/zuul/ci-framework-data/artifacts/manifests", "OUTPUT_DIR": "/home/zuul/ci-framework-data/artifacts/edpm" }, "cifmw_openshift_api": "https://api.crc.testing:6443", "cifmw_openshift_context": "default/api-crc-testing:6443/kubeadmin", "cifmw_openshift_kubeconfig": "/home/zuul/.crc/machines/crc/kubeconfig", "cifmw_openshift_login_api": "https://api.crc.testing:6443", "cifmw_openshift_login_cert_login": false, "cifmw_openshift_login_context": "default/api-crc-testing:6443/kubeadmin", "cifmw_openshift_login_kubeconfig": "/home/zuul/.crc/machines/crc/kubeconfig", "cifmw_openshift_login_password": 123456789, "cifmw_openshift_login_token": "sha256~CRDzRDEwMlIhbeVJLCrSChdC674z2UmCrkBejT7UEP4", "cifmw_openshift_login_user": "kubeadmin", 
"cifmw_openshift_token": "sha256~CRDzRDEwMlIhbeVJLCrSChdC674z2UmCrkBejT7UEP4", "cifmw_openshift_user": "kubeadmin", "cifmw_path": "/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:~/.crc/bin:~/.crc/bin/oc:~/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin", "cifmw_repo_setup_commit_hash": null, "cifmw_repo_setup_distro_hash": null, "cifmw_repo_setup_dlrn_api_url": "https://trunk.rdoproject.org/api-centos9-antelope", "cifmw_repo_setup_dlrn_url": "https://trunk.rdoproject.org/centos9-antelope/current-podified/delorean.repo.md5", "cifmw_repo_setup_extended_hash": null, "cifmw_repo_setup_full_hash": "c3923531bcda0b0811b2d5053f189beb", "cifmw_repo_setup_release": "antelope", "discovered_interpreter_python": "/usr/bin/python3", "gather_subset": [ "all" ], "module_setup": true }
home/zuul/zuul-output/logs/selinux-listing.log0000644000175000017500000040424515117130707020742 0ustar zuulzuul
/home/zuul/ci-framework-data:
total 8
drwxr-xr-x. 10 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Dec 13 00:22 artifacts
drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Dec 13 00:22 logs
drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 24 Dec 13 00:18 tmp
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 6 Dec 13 00:19 volumes

/home/zuul/ci-framework-data/artifacts:
total 616
drwxrwxrwx. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 33 Dec 13 00:22 ansible_facts.2025-12-13_00-22
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 19832 Dec 13 00:21 ansible-facts.yml
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 457935 Dec 13 00:22 ansible-vars.yml
drwxr-xr-x. 2 root root unconfined_u:object_r:user_home_t:s0 33 Dec 13 00:22 ci-env
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 135 Dec 13 00:21 ci_script_000_check_for_oc.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 239 Dec 13 00:21 ci_script_000_copy_logs_from_crc.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 213 Dec 13 00:19 ci_script_000_fetch_openshift.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 659 Dec 13 00:21 ci_script_000_prepare_root_ssh.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 722 Dec 13 00:21 ci_script_000_run_openstack_must_gather.sh
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 293 Dec 13 00:19 ci_script_001_login_into_openshift_internal.sh
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 159 Dec 13 00:21 hosts
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 77119 Dec 13 00:21 installed-packages.yml
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 1647 Dec 13 00:21 ip-network.txt
drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Dec 13 00:19 manifests
drwxr-xr-x. 2 root root unconfined_u:object_r:user_home_t:s0 70 Dec 13 00:22 NetworkManager
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 120 Dec 13 00:22 parameters
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Dec 13 00:22 repositories
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 106 Dec 13 00:21 resolv.conf
drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 33 Dec 13 00:19 roles
drwxr-xr-x. 2 root root unconfined_u:object_r:user_home_t:s0 4096 Dec 13 00:22 yum_repos
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 28668 Dec 13 00:22 zuul_inventory.yml

/home/zuul/ci-framework-data/artifacts/ansible_facts.2025-12-13_00-22:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Dec 13 00:22 ansible_facts_cache

/home/zuul/ci-framework-data/artifacts/ansible_facts.2025-12-13_00-22/ansible_facts_cache:
total 60
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 57730 Dec 13 00:22 localhost

/home/zuul/ci-framework-data/artifacts/ci-env:
total 4
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 1226 Dec 13 00:21 networking-info.yml

/home/zuul/ci-framework-data/artifacts/manifests:
total 0
drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 16 Dec 13 00:19 openstack

/home/zuul/ci-framework-data/artifacts/manifests/openstack:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 6 Dec 13 00:19 cr

/home/zuul/ci-framework-data/artifacts/manifests/openstack/cr:
total 0

/home/zuul/ci-framework-data/artifacts/NetworkManager:
total 8
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 331 Dec 13 00:21 ci-private-network.nmconnection
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 178 Dec 13 00:21 ens3.nmconnection

/home/zuul/ci-framework-data/artifacts/parameters:
total 56
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 1131 Dec 13 00:22 custom-params.yml
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 28013 Dec 13 00:22 install-yamls-params.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 288 Dec 13 00:19 openshift-login-params.yml
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 19391 Dec 13 00:22 zuul-params.yml

/home/zuul/ci-framework-data/artifacts/repositories:
total 32
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1658 Dec 13 00:18 delorean-antelope-testing.repo
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5901 Dec 13 00:18 delorean.repo
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 33 Dec 13 00:18 delorean.repo.md5
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 206 Dec 13 00:18 repo-setup-centos-appstream.repo
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 196 Dec 13 00:18 repo-setup-centos-baseos.repo
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 226 Dec 13 00:18 repo-setup-centos-highavailability.repo
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 201 Dec 13 00:18 repo-setup-centos-powertools.repo

/home/zuul/ci-framework-data/artifacts/roles:
total 0
drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:19 install_yamls_makes

/home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes:
total 20
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 16384 Dec 13 00:22 tasks

/home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks:
total 1256
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 790 Dec 13 00:19 make_all.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_ansibleee_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Dec 13 00:19 make_ansibleee_kuttl_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Dec 13 00:19 make_ansibleee_kuttl_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_ansibleee_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_ansibleee_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_ansibleee_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_ansibleee.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1234 Dec 13 00:19 make_attach_default_interface_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1114 Dec 13 00:19 make_attach_default_interface.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_barbican_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Dec 13 00:19 make_barbican_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Dec 13 00:19 make_barbican_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Dec 13 00:19 make_barbican_deploy_validate.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_barbican_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Dec 13 00:19 make_barbican_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_barbican_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_barbican_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 865 Dec 13 00:19 make_barbican.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_baremetal_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_baremetal_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_baremetal.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1219 Dec 13 00:19 make_bmaas_baremetal_net_nad_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1099 Dec 13 00:19 make_bmaas_baremetal_net_nad.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Dec 13 00:19 make_bmaas_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1234 Dec 13 00:19 make_bmaas_crc_attach_network_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1114 Dec 13 00:19 make_bmaas_crc_attach_network.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1264 Dec 13 00:19 make_bmaas_crc_baremetal_bridge_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1144 Dec 13 00:19 make_bmaas_crc_baremetal_bridge.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1129 Dec 13 00:19 make_bmaas_generate_nodes_yaml.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1069 Dec 13 00:19 make_bmaas_metallb_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Dec 13 00:19 make_bmaas_metallb.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1069 Dec 13 00:19 make_bmaas_network_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Dec 13 00:19 make_bmaas_network.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1444 Dec 13 00:19 make_bmaas_route_crc_and_crc_bmaas_networks_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1324 Dec 13 00:19 make_bmaas_route_crc_and_crc_bmaas_networks.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1174 Dec 13 00:19 make_bmaas_sushy_emulator_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1129 Dec 13 00:19 make_bmaas_sushy_emulator_wait.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Dec 13 00:19 make_bmaas_sushy_emulator.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1129 Dec 13 00:19 make_bmaas_virtual_bms_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1009 Dec 13 00:19 make_bmaas_virtual_bms.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 829 Dec 13 00:19 make_bmaas.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_ceph_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_ceph_help.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Dec 13 00:19 make_ceph.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_certmanager_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_certmanager.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Dec 13 00:19 make_cifmw_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 949 Dec 13 00:19 make_cifmw_prepare.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_cinder_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_cinder_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Dec 13 00:19 make_cinder_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_cinder_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_cinder_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_cinder_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_cinder_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 835 Dec 13 00:19 make_cinder.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Dec 13 00:19 make_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1294 Dec 13 00:19 make_crc_attach_default_interface_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1174 Dec 13 00:19 make_crc_attach_default_interface.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_crc_bmo_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_crc_bmo_setup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 919 Dec 13 00:19 make_crc_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 889 Dec 13 00:19 make_crc_scrub.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1225 Dec 13 00:19 make_crc_storage_cleanup_with_retries.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_crc_storage_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_crc_storage_release.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Dec 13 00:19 make_crc_storage_with_retries.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_crc_storage.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 799 Dec 13 00:19 make_crc.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_designate_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Dec 13 00:19 make_designate_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_designate_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_designate_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_designate_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_designate_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_designate_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_designate.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Dec 13 00:19 make_dns_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_dns_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Dec 13 00:19 make_dns_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 964 Dec 13 00:19 make_download_tools.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1039 Dec 13 00:19 make_edpm_ansible_runner.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1084 Dec 13 00:19 make_edpm_baremetal_compute.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1024 Dec 13 00:19 make_edpm_compute_bootc.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Dec 13 00:19 make_edpm_compute_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1024 Dec 13 00:19 make_edpm_compute_repos.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1009 Dec 13 00:19 make_edpm_computes_bgp.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 934 Dec 13 00:19 make_edpm_compute.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1135 Dec 13 00:19 make_edpm_deploy_baremetal_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_edpm_deploy_baremetal.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_edpm_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1120 Dec 13 00:19 make_edpm_deploy_generate_keys.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Dec 13 00:19 make_edpm_deploy_instance.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1180 Dec 13 00:19 make_edpm_deploy_networker_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1135 Dec 13 00:19 make_edpm_deploy_networker_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_edpm_deploy_networker.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_edpm_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_edpm_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1084 Dec 13 00:19 make_edpm_networker_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 964 Dec 13 00:19 make_edpm_networker.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Dec 13 00:19 make_edpm_nova_discover_hosts.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1210 Dec 13 00:19 make_edpm_patch_ansible_runner_image.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_edpm_register_dns.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1135 Dec 13 00:19 make_edpm_wait_deploy_baremetal.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_edpm_wait_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_glance_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_glance_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Dec 13 00:19 make_glance_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_glance_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_glance_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_glance_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_glance_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 835 Dec 13 00:19 make_glance.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_heat_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_heat_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_heat_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_heat_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_heat_kuttl_crc.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_heat_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Dec 13 00:19 make_heat_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_heat_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Dec 13 00:19 make_heat.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 814 Dec 13 00:19 make_help.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_horizon_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Dec 13 00:19 make_horizon_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_horizon_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_horizon_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_horizon_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_horizon_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_horizon_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Dec 13 00:19 make_horizon.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_infra_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_infra_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_infra_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Dec 13 00:19 make_infra_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 820 Dec 13 00:19 make_infra.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_input_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 820 Dec 13 00:19 make_input.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 994 Dec 13 00:19 make_ipv6_lab_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1189 Dec 13 00:19 make_ipv6_lab_nat64_router_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1069 Dec 13 00:19 make_ipv6_lab_nat64_router.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1114 Dec 13 00:19 make_ipv6_lab_network_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 994 Dec 13 00:19 make_ipv6_lab_network.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Dec 13 00:19 make_ipv6_lab_sno_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 934 Dec 13 00:19 make_ipv6_lab_sno.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 874 Dec 13 00:19 make_ipv6_lab.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_ironic_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_ironic_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Dec 13 00:19 make_ironic_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_ironic_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_ironic_kuttl_crc.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_ironic_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_ironic_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_ironic_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 835 Dec 13 00:19 make_ironic.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_keystone_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Dec 13 00:19 make_keystone_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Dec 13 00:19 make_keystone_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_keystone_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Dec 13 00:19 make_keystone_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_keystone_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_keystone_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 865 Dec 13 00:19 make_keystone.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Dec 13 00:19 make_kuttl_common_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_kuttl_common_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_kuttl_db_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_kuttl_db_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_loki_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_loki_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_loki_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Dec 13 00:19 make_loki.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Dec 13 00:19 make_lvms.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_manila_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_manila_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Dec 13 00:19 make_manila_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_manila_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_manila_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_manila_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_manila_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 835 Dec 13 00:19 make_manila.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Dec 13 00:19 make_mariadb_chainsaw_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_mariadb_chainsaw.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_mariadb_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Dec 13 00:19 make_mariadb_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_mariadb_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_mariadb_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_mariadb_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_mariadb_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Dec 13 00:19 make_mariadb.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Dec 13 00:19 make_memcached_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_memcached_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_memcached_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_metallb_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Dec 13 00:19 make_metallb_config_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_metallb_config.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Dec 13 00:19 make_metallb.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_namespace_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_namespace.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_netattach_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_netattach.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Dec 13 00:19 make_netconfig_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_netconfig_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_netconfig_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_netobserv_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Dec 13 00:19 make_netobserv_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_netobserv_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_netobserv.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1234 Dec 13 00:19 make_network_isolation_bridge_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1114 Dec 13 00:19 make_network_isolation_bridge.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_neutron_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Dec 13 00:19 make_neutron_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_neutron_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_neutron_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_neutron_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_neutron_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_neutron_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Dec 13 00:19 make_neutron.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 919 Dec 13 00:19 make_nfs_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 799 Dec 13 00:19 make_nfs.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Dec 13 00:19 make_nmstate.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_nncp_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Dec 13 00:19 make_nncp.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_nova_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_nova_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_nova_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_nova_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_nova_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Dec 13 00:19 make_nova.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_octavia_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Dec 13 00:19 make_octavia_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_octavia_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_octavia_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_octavia_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_octavia_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_octavia_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 850 Dec 13 00:19 make_octavia.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_openstack_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1075 Dec 13 00:19 make_openstack_crds_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_openstack_crds.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Dec 13 00:19 make_openstack_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_openstack_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_openstack_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_openstack_init.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_openstack_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_openstack_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Dec 13 00:19 make_openstack_patch_version.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_openstack_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_openstack_repo.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Dec 13 00:19 make_openstack_update_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_openstack_wait_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_openstack_wait.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_openstack.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Dec 13 00:19 make_operator_namespace.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_ovn_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1015 Dec 13 00:19 make_ovn_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_ovn_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Dec 13 00:19 make_ovn_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_ovn_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_ovn_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 865 Dec 13 00:19 make_ovn_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 790 Dec 13 00:19 make_ovn.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_placement_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Dec 13 00:19 make_placement_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_placement_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_placement_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_placement_kuttl_run.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_placement_kuttl.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_placement_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_placement.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_rabbitmq_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Dec 13 00:19 make_rabbitmq_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Dec 13 00:19 make_rabbitmq_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_rabbitmq_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_rabbitmq_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 865 Dec 13 00:19 make_rabbitmq.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Dec 13 00:19 make_redis_deploy_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_redis_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_redis_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_rook_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_rook_crc_disk.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_rook_deploy_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_rook_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_rook_prep.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Dec 13 00:19 make_rook.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1090 Dec 13 00:19 make_set_slower_etcd_profile.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1024 Dec 13 00:19 make_standalone_cleanup.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1009 Dec 13 00:19 make_standalone_deploy.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1009 Dec 13 00:19 make_standalone_revert.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1039 Dec 13 00:19 make_standalone_snapshot.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 979 Dec 13 00:19 make_standalone_sync.yml
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 904 Dec 13 00:19 make_standalone.yml
-rw-r--r--.
1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_swift_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Dec 13 00:19 make_swift_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_swift_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 925 Dec 13 00:19 make_swift_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_swift_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 910 Dec 13 00:19 make_swift_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 895 Dec 13 00:19 make_swift_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 820 Dec 13 00:19 make_swift.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1000 Dec 13 00:19 make_telemetry_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1105 Dec 13 00:19 make_telemetry_deploy_cleanup.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1060 Dec 13 00:19 make_telemetry_deploy_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 985 Dec 13 00:19 make_telemetry_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1030 Dec 13 00:19 make_telemetry_kuttl_run.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_telemetry_kuttl.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 955 Dec 13 00:19 make_telemetry_prep.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 880 Dec 13 00:19 make_telemetry.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 964 Dec 13 00:19 make_tripleo_deploy.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 970 Dec 13 00:19 make_update_services.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 940 Dec 13 00:19 make_update_system.yml -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1045 Dec 13 00:19 make_validate_marketplace.yml -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 805 Dec 13 00:19 make_wait.yml /home/zuul/ci-framework-data/artifacts/yum_repos: total 32 -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 1658 Dec 13 00:21 delorean-antelope-testing.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 5901 Dec 13 00:21 delorean.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 33 Dec 13 00:21 delorean.repo.md5 -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 206 Dec 13 00:21 repo-setup-centos-appstream.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 196 Dec 13 00:21 repo-setup-centos-baseos.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 226 Dec 13 00:21 repo-setup-centos-highavailability.repo -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 201 Dec 13 00:21 repo-setup-centos-powertools.repo /home/zuul/ci-framework-data/logs: total 344 drwxrwxr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 25 Dec 13 00:22 2025-12-13_00-21 -rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 155873 Dec 13 00:22 ansible.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 18 Dec 13 00:21 ci_script_000_check_for_oc.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 14827 Dec 13 00:21 ci_script_000_copy_logs_from_crc.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 234 Dec 13 00:19 ci_script_000_fetch_openshift.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 151793 Dec 13 00:21 ci_script_000_prepare_root_ssh.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4485 Dec 13 00:21 ci_script_000_run_openstack_must_gather.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 17 Dec 13 00:19 ci_script_001_login_into_openshift_internal.log drwxr-xr-x. 
3 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:21 crc drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 72 Dec 13 00:22 openstack-must-gather /home/zuul/ci-framework-data/logs/2025-12-13_00-21: total 156 -rw-rw-rw-. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 155915 Dec 13 00:20 ansible.log /home/zuul/ci-framework-data/logs/crc: total 0 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 18 Dec 13 00:21 crc-logs-artifacts /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts: total 16 drwxr-xr-x. 83 zuul zuul unconfined_u:object_r:user_home_t:s0 12288 Dec 13 00:21 pods /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods: total 16 drwxr-xr-x. 6 zuul zuul unconfined_u:object_r:user_home_t:s0 108 Dec 13 00:21 hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787 drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 105 Dec 13 00:21 openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 42 Dec 13 00:21 openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 29 Dec 13 00:21 openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 37 Dec 13 00:21 openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 64 Dec 13 00:21 openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Dec 13 00:22 openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91 drwxr-xr-x. 
3 zuul zuul unconfined_u:object_r:user_home_t:s0 38 Dec 13 00:21 openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 60 Dec 13 00:21 openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 21 Dec 13 00:21 openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 29 Dec 13 00:21 openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 39 Dec 13 00:21 openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Dec 13 00:21 openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:21 openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 51 Dec 13 00:21 openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 40 Dec 13 00:21 openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 31 Dec 13 00:21 openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 49 Dec 13 00:21 openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb drwxr-xr-x. 
9 zuul zuul unconfined_u:object_r:user_home_t:s0 140 Dec 13 00:21 openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 27 Dec 13 00:21 openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:21 openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 26 Dec 13 00:21 openshift-image-registry_image-pruner-29426400-tlv26_e98bd1f0-fe98-48b9-82cb-86e924ade65d drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 22 Dec 13 00:21 openshift-image-registry_image-registry-75b7bb6564-rnjvj_d73c4e63-30ef-4915-925d-f44201c612ec drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 21 Dec 13 00:21 openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 38 Dec 13 00:21 openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 53 Dec 13 00:21 openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Dec 13 00:21 openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Dec 13 00:21 openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Dec 13 00:21 openshift-kube-apiserver_installer-13-crc_7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Dec 13 00:21 openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90 drwxr-xr-x. 
8 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Dec 13 00:21 openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 37 Dec 13 00:21 openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Dec 13 00:21 openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Dec 13 00:21 openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Dec 13 00:21 openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124 drwxr-xr-x. 6 zuul zuul unconfined_u:object_r:user_home_t:s0 164 Dec 13 00:21 openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 46 Dec 13 00:21 openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Dec 13 00:21 openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Dec 13 00:21 openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Dec 13 00:21 openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 20 Dec 13 00:21 openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200 drwxr-xr-x. 
3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Dec 13 00:21 openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 23 Dec 13 00:21 openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754 drwxr-xr-x. 6 zuul zuul unconfined_u:object_r:user_home_t:s0 130 Dec 13 00:21 openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 47 Dec 13 00:21 openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 22 Dec 13 00:21 openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 52 Dec 13 00:21 openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 48 Dec 13 00:21 openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 57 Dec 13 00:21 openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 47 Dec 13 00:21 openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 62 Dec 13 00:21 openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Dec 13 00:21 openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589 drwxr-xr-x. 
4 zuul zuul unconfined_u:object_r:user_home_t:s0 60 Dec 13 00:21 openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 35 Dec 13 00:21 openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 77 Dec 13 00:21 openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20 drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 77 Dec 13 00:21 openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 34 Dec 13 00:21 openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 77 Dec 13 00:21 openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940 drwxr-xr-x. 5 zuul zuul unconfined_u:object_r:user_home_t:s0 77 Dec 13 00:21 openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f drwxr-xr-x. 9 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Dec 13 00:21 openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 64 Dec 13 00:21 openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 25 Dec 13 00:21 openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 59 Dec 13 00:21 openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a drwxr-xr-x. 
3 zuul zuul unconfined_u:object_r:user_home_t:s0 29 Dec 13 00:21 openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 44 Dec 13 00:21 openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 37 Dec 13 00:21 openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Dec 13 00:21 openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Dec 13 00:21 openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2 drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Dec 13 00:21 openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Dec 13 00:21 openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Dec 13 00:21 openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Dec 13 00:21 openshift-operator-lifecycle-manager_collect-profiles-29426400-gwxp5_51a3de13-562b-4bfd-aacb-e8832c894074 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 30 Dec 13 00:21 openshift-operator-lifecycle-manager_collect-profiles-29426415-vhdrh_7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 26 Dec 13 00:21 openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091 drwxr-xr-x. 
3 zuul zuul unconfined_u:object_r:user_home_t:s0 27 Dec 13 00:21 openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 59 Dec 13 00:21 openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be drwxr-xr-x. 4 zuul zuul unconfined_u:object_r:user_home_t:s0 60 Dec 13 00:21 openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3 drwxr-xr-x. 11 zuul zuul unconfined_u:object_r:user_home_t:s0 4096 Dec 13 00:21 openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 38 Dec 13 00:21 openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 33 Dec 13 00:21 openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17 drwxr-xr-x. 3 zuul zuul unconfined_u:object_r:user_home_t:s0 35 Dec 13 00:21 openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501 /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 csi-provisioner drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 hostpath-provisioner drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 liveness-probe drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 node-driver-registrar /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/csi-provisioner: total 228 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 180933 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 45289 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/hostpath-provisioner: total 80 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 65035 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 16179 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/liveness-probe: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 396 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 396 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/hostpath-provisioner_csi-hostpathplugin-hvm8g_12e733dd-0939-4f1b-9cbb-13897e093787/node-driver-registrar: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1533 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1533 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 fix-audit-permissions drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 openshift-apiserver drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 openshift-apiserver-check-endpoints /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/fix-audit-permissions: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver: total 192 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 93408 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 102215 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver_apiserver-7fc54b8dd7-d2bhp_41e8708a-e40d-4d28-846b-c52eda4d1755/openshift-apiserver-check-endpoints: total 52 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23827 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 27959 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 openshift-apiserver-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-apiserver-operator_openshift-apiserver-operator-7c88c4c865-kn67m_43ae1c37-047b-4ee2-9fee-41e337dd4ac8/openshift-apiserver-operator: total 356 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 84973 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 214174 Dec 13 00:21 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 59262 Dec 13 00:21 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 oauth-openshift /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication_oauth-openshift-74fc7c67cc-xqf8b_01feb2e0-a0f4-4573-8335-34e364e0ef40/oauth-openshift: total 56 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 23257 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 30155 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 authentication-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-authentication-operator_authentication-operator-7cc7ff75d5-g9qv8_ebf09b15-4bb1-44bf-9d54-e76fad5cf76e/authentication-operator: total 932 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 217484 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 491271 Dec 13 00:21 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 241466 Dec 13 00:21 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-rbac-proxy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 machine-approver-controller /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/kube-rbac-proxy: total 16 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7732 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7599 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller: total 28 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 10792 Dec 13 00:21 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 7598 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5648 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 83 Dec 13 00:21 2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 31c30f7e9c0640f9acb17a855004a8cb4f32333c72b230eb85e614bdf2c07efe.log
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 cluster-samples-operator
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 cluster-samples-operator-watch

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator:
total 252
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 30933 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 112225 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 110570 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-samples-operator_cluster-samples-operator-bc474d5d6-wshwg_f728c15e-d8de-4a9a-a3ea-fdcead95cb91/cluster-samples-operator-watch:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4037 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 664 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 cluster-version-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-cluster-version_cluster-version-operator-6d5d9649f6-x6d46_9fb762d1-812f-43f1-9eac-68034c1ecec7/cluster-version-operator:
total 13252
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11627525 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1939205 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 openshift-api
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 openshift-config-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-api:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator:
total 88
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 33371 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 31684 Dec 13 00:21 2.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 17548 Dec 13 00:21 3.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 console

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_console-644bb77b49-5x5xk_9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1/console:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1042 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1042 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 download-server

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console_downloads-65476884b9-9wcvx_6268b7fe-8910-4505-b404-6f1df638105c/download-server:
total 68
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 75 Dec 13 00:21 5.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 51603 Dec 13 00:21 6.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11526 Dec 13 00:21 7.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 conversion-webhook-server

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-conversion-webhook-595f9969b-l6z49_59748b9b-c309-4712-aa85-bb38d71c4915/conversion-webhook-server:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 909 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 571 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 console-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator:
total 1088
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 227835 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 769174 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 112444 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 controller-manager

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager_controller-manager-778975cc4f-x5vcf_1a3e81c3-c292-4130-9436-f94062c91efd/controller-manager:
total 276
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 28909 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 248443 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 openshift-controller-manager-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator:
total 376
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 172802 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 38969 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 164870 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 dns
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-rbac-proxy

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/dns:
total 608
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 614882 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3012 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_dns-default-gbw49_13045510-8717-4a71-ade4-be95a76440a7/kube-rbac-proxy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 dns-node-resolver

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns_node-resolver-dn27q_6a23c0ee-5648-448c-b772-83dced2891ce/dns-node-resolver:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 96 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 dns-operator
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-rbac-proxy

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/dns-operator:
total 28
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 13994 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11549 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-dns-operator_dns-operator-75f687757b-nz2xb_10603adc-d495-423c-9459-4caa405960bb/kube-rbac-proxy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 etcd
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 etcdctl
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 etcd-ensure-env-vars
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 etcd-metrics
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 etcd-readyz
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 etcd-resources-copy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 setup

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd:
total 8768
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 8727966 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 224902 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23806 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcdctl:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-ensure-env-vars:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-metrics:
total 64
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 20593 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 17961 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 17961 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-readyz:
total 12
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 518 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1184 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 240 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/etcd-resources-copy:
total 0
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd_etcd-crc_b2a6a3b2ca08062d24afa4c01aaf9e4f/setup:
total 12
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 74 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 74 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 74 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 etcd-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-etcd-operator_etcd-operator-768d5b5d86-722mg_0b5c38ff-1fa8-4219-994d-15776acd4a4d/etcd-operator:
total 552
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 100273 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 296671 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 162054 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 cluster-image-registry-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_cluster-image-registry-operator-7769bd8d7d-q5cvv_b54e8941-2fc4-432a-9e51-39684df9089e/cluster-image-registry-operator:
total 96
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 46647 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23357 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 20580 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29426400-tlv26_e98bd1f0-fe98-48b9-82cb-86e924ade65d:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 image-pruner

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-pruner-29426400-tlv26_e98bd1f0-fe98-48b9-82cb-86e924ade65d/image-pruner:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1437 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-rnjvj_d73c4e63-30ef-4915-925d-f44201c612ec:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 registry

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_image-registry-75b7bb6564-rnjvj_d73c4e63-30ef-4915-925d-f44201c612ec/registry:
total 24
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 24056 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 node-ca

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-image-registry_node-ca-l92hr_f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e/node-ca:
total 40
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 31683 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 6688 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 serve-healthcheck-canary

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-canary_ingress-canary-2vhcn_0b5d722a-1123-4935-9740-52a08d018bc9/serve-healthcheck-canary:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2922 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 602 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Dec 13 00:22 ingress-operator
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-rbac-proxy

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator:
total 152
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 48554 Dec 13 00:21 2.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 29309 Dec 13 00:21 3.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 43180 Dec 13 00:21 4.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 26290 Dec 13 00:21 5.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/kube-rbac-proxy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Dec 13 00:22 router

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ingress_router-default-5c9bf7bc58-6jctv_aa90b3c2-febd-4588-a063-7fbbe82f00c1/router:
total 152
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 52224 Dec 13 00:21 2.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 37860 Dec 13 00:21 3.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 53338 Dec 13 00:21 4.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3574 Dec 13 00:21 5.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 installer

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-12-crc_3557248c-8f70-4165-aa66-8df983e7e01a/installer:
total 60
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 59795 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 installer

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-13-crc_7151b8a0-1c89-40c8-8c45-4cfdb3a2fdb9/installer:
total 60
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 59637 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 installer

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_installer-9-crc_2ad657a4-8b02-4373-8d0d-b0e25345dc90/installer:
total 60
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 59909 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 kube-apiserver
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 kube-apiserver-cert-regeneration-controller
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 kube-apiserver-cert-syncer
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 kube-apiserver-check-endpoints
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 kube-apiserver-insecure-readyz
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 setup

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver:
total 308
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 312254 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-regeneration-controller:
total 12
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11371 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-cert-syncer:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1648 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-check-endpoints:
total 24
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 22010 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/kube-apiserver-insecure-readyz:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 116 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver_kube-apiserver-crc_7f3419c3ca30b18b78e8dd2488b00489/setup:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 265 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 kube-apiserver-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-78d54458c4-sc8h7_ed024e5d-8fc2-4c22-803d-73f3c9795f19/kube-apiserver-operator:
total 876
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 222883 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 475401 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 189141 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 installer

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer:
total 20
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 19721 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 installer

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-10-retry-1-crc_dc02677d-deed-4cc9-bb8c-0dd300f83655/installer:
total 44
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 43448 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 installer

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_installer-11-crc_a45bfab9-f78b-4d72-b5b7-903e60401124/installer:
total 44
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 43448 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 cluster-policy-controller
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Dec 13 00:22 kube-controller-manager
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 kube-controller-manager-cert-syncer
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 kube-controller-manager-recovery-controller

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller:
total 360
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 132818 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2076 Dec 13 00:21 5.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 229071 Dec 13 00:21 6.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager:
total 888
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 737559 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 60383 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 58535 Dec 13 00:21 3.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 44793 Dec 13 00:21 4.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-cert-syncer:
total 28
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 6726 Dec 13 00:21 0.log
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 5020 Dec 13 00:22 1.log
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 10487 Dec 13 00:22 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager-recovery-controller:
total 32
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11354 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2590 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 13872 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 kube-controller-manager-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-6f6cb54958-rbddb_c1620f19-8aa3-45cf-931b-7ae0e5cd14cf/kube-controller-manager-operator:
total 592
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 193188 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 324520 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 77945 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 pruner

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-10-crc_2f155735-a9be-4621-a5f2-5ab4b6957acd/pruner:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2031 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 pruner

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-11-crc_1784282a-268d-4e44-a766-43281414e2dc/pruner:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1976 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 pruner

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-8-crc_72854c1e-5ae2-4ed6-9e50-ff3bccde2635/pruner:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1917 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 pruner

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-controller-manager_revision-pruner-9-crc_a0453d24-e872-43af-9e7a-86227c26d200/pruner:
total 4
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1973 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 installer

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-7-crc_b57cce81-8ea0-4c4d-aae1-ee024d201c15/installer:
total 28
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 27628 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 installer

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_installer-8-crc_aca1f9ff-a685-4a78-b461-3931b757f754/installer:
total 28
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 27628 Dec 13 00:21 0.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 kube-scheduler
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 kube-scheduler-cert-syncer
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 kube-scheduler-recovery-controller
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 wait-for-host-port

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler:
total 244
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 59236 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 132206 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 50053 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-cert-syncer:
total 24
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5142 Dec 13 00:21 0.log
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 3402 Dec 13 00:22 1.log
-rw-r--r--. 1 root root unconfined_u:object_r:user_home_t:s0 9120 Dec 13 00:22 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/kube-scheduler-recovery-controller:
total 20
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 6351 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2494 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5215 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_6a57a7fb1944b43a6bd11a349520d301/wait-for-host-port:
total 12
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 85 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 85 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 85 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 kube-scheduler-operator-container

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_71af81a9-7d43-49b2-9287-c375900aa905/kube-scheduler-operator-container:
total 264
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 78087 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 122585 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 64331 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 migrator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator_migrator-f7c6d88df-q2fnv_cf1a8966-f594-490a-9fbb-eec5bafd13d3/migrator:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2038 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1875 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 kube-storage-version-migrator-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator:
total 112
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 39921 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 34528 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 36391 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 control-plane-machine-set-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator:
total 52
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 9654 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 22065 Dec 13 00:21 2.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 13139 Dec 13 00:21 3.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-rbac-proxy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 machine-api-operator

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy:
total 16
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7732 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 7599 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator:
total 44
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 12595 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 12999 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 10649 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 kube-rbac-proxy-crio
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 setup

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/kube-rbac-proxy-crio:
total 16
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5045 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1509 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2192 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-crc_d3ae206906481b4831fd849b559269c8/setup:
total 12
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 101 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 101 Dec 13 00:21 1.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 101 Dec 13 00:21 2.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-rbac-proxy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 machine-config-controller

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/kube-rbac-proxy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1345 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1212 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-controller-6df6df6b6b-58shh_297ab9b6-2186-4d5b-a952-2bfd59af63c4/machine-config-controller:
total 184
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 136260 Dec 13 00:21 0.log
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 46476 Dec 13 00:21 1.log

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589:
total 0
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-rbac-proxy
drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Dec 13 00:22 machine-config-daemon

/home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/kube-rbac-proxy:
total 8
-rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1345 Dec 13 00:21 0.log
-rw-r--r--.
1 zuul zuul unconfined_u:object_r:user_home_t:s0 1212 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-daemon-zpnhg_9d0dcce3-d96e-48cb-9b9f-362105911589/machine-config-daemon: total 140 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 15339 Dec 13 00:21 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 79965 Dec 13 00:21 2.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 21411 Dec 13 00:21 5.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 16511 Dec 13 00:21 6.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-rbac-proxy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 machine-config-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1345 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1212 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-operator-76788bff89-wkjgm_120b38dc-8236-4fa6-a452-642b8ad738ee/machine-config-operator: total 56 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 16333 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 36951 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb: total 0 drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 machine-config-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-machine-config-operator_machine-config-server-v65wr_bf1a8b70-3856-486f-9912-a2de1d57c3fb/machine-config-server: total 68 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 46063 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 17132 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 extract-content drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 extract-utilities drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 registry-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/extract-content: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/extract-utilities: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_certified-operators-lcrg8_7ea0b8b3-b3c3-4dd9-81a9-acbb22380e20/registry-server: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 440 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 extract-content drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 extract-utilities drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 registry-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-content: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/extract-utilities: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_community-operators-s2hxn_220875e2-503f-46b5-aaa6-bb8fc45743cc/registry-server: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 440 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 marketplace-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_marketplace-operator-8b455464d-kghgr_9025b167-d0fc-419f-92c1-add28909ab7c/marketplace-operator: total 20 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 10108 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5135 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 extract-content drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 extract-utilities drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 registry-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/extract-content: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/extract-utilities: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-marketplace-nv4pl_85ca9974-9a5f-4cbf-a126-71bf61c49940/registry-server: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 440 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 extract-content drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 extract-utilities drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 registry-server /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-content: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/extract-utilities: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-marketplace_redhat-operators-zg7cl_b9a848b0-1438-4ada-b7da-2fe53dbf235f/registry-server: total 4 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 440 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 bond-cni-plugin drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 cni-plugins drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 egress-router-binary-copy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-multus-additional-cni-plugins drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 routeoverride-cni drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 whereabouts-cni drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 whereabouts-cni-bincopy /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/bond-cni-plugin: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 392 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 392 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/cni-plugins: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 404 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 404 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/egress-router-binary-copy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 414 Dec 13 00:21 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 414 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/kube-multus-additional-cni-plugins: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/routeoverride-cni: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 411 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 411 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 80 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 80 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-additional-cni-plugins-bzj2p_7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8/whereabouts-cni-bincopy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 408 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 408 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-rbac-proxy drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 multus-admission-controller /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-admission-controller-6c7c885997-4hbbc_d5025cb4-ddb0-4107-88c1-bcbcdb779ac0/multus-admission-controller: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1386 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1276 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 58 Dec 13 00:22 kube-multus /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus: total 556 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 890 Dec 13 00:21 4.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 391605 Dec 13 00:21 5.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 163920 Dec 13 00:21 8.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3764 Dec 13 00:21 9.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-rbac-proxy drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 network-metrics-daemon /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1173 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1040 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-multus_network-metrics-daemon-qdfr4_a702c6d2-4dde-4077-ab8c-0f8df804bf7a/network-metrics-daemon: total 68 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 40893 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 27335 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 check-endpoints /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-source-5c5478f8c-vqvt7_d0f40333-c860-4c04-8058-a0bf572dcf12/check-endpoints: total 20 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 9955 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4976 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 network-check-target-container /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-diagnostics_network-check-target-v54bt_34a48baf-1bee-4921-8bb2-9b7320e76f79/network-check-target-container: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 61 Dec 13 00:21 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 61 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 approver drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 webhook /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver: total 40 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 12256 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 14943 Dec 13 00:21 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 11449 Dec 13 00:21 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/webhook: total 748 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 760212 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3886 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 iptables-alerter /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_iptables-alerter-wwpnd_2b6d14a5-ca00-40c7-af7a-051a98a24eed/iptables-alerter: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 381 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 120 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2: total 0 drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 network-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-network-operator_network-operator-767c585db5-zd56b_cc291782-27d2-4a74-af79-c7dcb31535d2/network-operator: total 1276 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 411501 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 574827 Dec 13 00:21 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 313250 Dec 13 00:21 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 fix-audit-permissions drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 oauth-apiserver /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/fix-audit-permissions: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-oauth-apiserver_apiserver-69c565c9b6-vbdpd_5bacb25d-97b6-4491-8fb4-99feae1d802a/oauth-apiserver: total 172 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 127371 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 43870 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f: total 0 drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 catalog-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_catalog-operator-857456c46-7f5wf_8a5ae51d-d173-4531-8975-f164c975ce1f/catalog-operator: total 1880 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1508074 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 409724 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 collect-profiles /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29251950-x8jjd_ad171c4b-8408-4370-8e86-502999788ddb/collect-profiles: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 736 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29426400-gwxp5_51a3de13-562b-4bfd-aacb-e8832c894074: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 collect-profiles /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29426400-gwxp5_51a3de13-562b-4bfd-aacb-e8832c894074/collect-profiles: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 739 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29426415-vhdrh_7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2: total 0 drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 collect-profiles /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_collect-profiles-29426415-vhdrh_7c3634c0-4a13-4d3a-9aa7-a0cddb1d08f2/collect-profiles: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 736 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 olm-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_olm-operator-6d8474f75f-x54mh_c085412c-b875-46c9-ae3e-e6b0d8067091/olm-operator: total 68 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 39824 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 26246 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 packageserver /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_packageserver-8464bcc55b-sjnqz_bd556935-a077-45df-ba3f-d42c39326ccd/packageserver: total 132 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 91140 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 39993 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-rbac-proxy drwxr-xr-x. 
2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 package-server-manager /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1187 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1054 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-operator-lifecycle-manager_package-server-manager-84d578d794-jw7r2_63eb7413-02c3-4d6e-bb48-e5ffe5ce15be/package-server-manager: total 20 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 14404 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4051 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 kube-rbac-proxy drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 ovnkube-cluster-manager /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/kube-rbac-proxy: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1316 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1183 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-control-plane-77c846df58-6l97b_410cf605-1970-4691-9c95-53fdc123b1f3/ovnkube-cluster-manager: total 212 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 157417 Dec 13 00:21 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 56640 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 kubecfg-setup drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 kube-rbac-proxy-node drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 kube-rbac-proxy-ovn-metrics drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 nbdb drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 northd drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 ovn-acl-logging drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 ovn-controller drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 ovnkube-controller drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 19 Dec 13 00:22 sbdb /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kubecfg-setup: total 0 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 0 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kube-rbac-proxy-node: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4680 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/kube-rbac-proxy-ovn-metrics: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4640 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/nbdb: total 4 -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 2415 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/northd: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4829 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovn-acl-logging: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 5158 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovn-controller: total 8 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 8190 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/ovnkube-controller: total 1772 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 1812039 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-ovn-kubernetes_ovnkube-node-brz8k_27175a75-710d-402c-9aeb-0ec047bfd27f/sbdb: total 4 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 2347 Dec 13 00:21 0.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 route-controller-manager /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-route-controller-manager_route-controller-manager-776b8b7477-sfpvs_21d29937-debd-4407-b2b1-d1053cb0f342/route-controller-manager: total 56 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 25403 Dec 13 00:21 0.log -rw-r--r--. 
1 zuul zuul unconfined_u:object_r:user_home_t:s0 27324 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 45 Dec 13 00:22 service-ca-operator /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca-operator_service-ca-operator-546b4f8984-pwccz_6d67253e-2acd-4bc1-8185-793587da4f17/service-ca-operator: total 112 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 58729 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 26036 Dec 13 00:21 1.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 23768 Dec 13 00:21 2.log /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501: total 0 drwxr-xr-x. 2 zuul zuul unconfined_u:object_r:user_home_t:s0 32 Dec 13 00:22 service-ca-controller /home/zuul/ci-framework-data/logs/crc/crc-logs-artifacts/pods/openshift-service-ca_service-ca-666f99b6f-kk8kg_e4a7de23-6134-4044-902a-0900dc04a501/service-ca-controller: total 100 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 61484 Dec 13 00:21 0.log -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 36136 Dec 13 00:21 1.log /home/zuul/ci-framework-data/logs/openstack-must-gather: total 16 -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 3336 Dec 13 00:21 event-filter.html -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 4315 Dec 13 00:21 must-gather.logs -rw-r--r--. 1 zuul zuul unconfined_u:object_r:user_home_t:s0 110 Dec 13 00:21 timestamp /home/zuul/ci-framework-data/tmp: total 0 /home/zuul/ci-framework-data/volumes: total 0 home/zuul/zuul-output/logs/README.html0000644000175000017500000000306615117130722016715 0ustar zuulzuul README for CIFMW Logs

Logs of interest

Generated content of interest

home/zuul/zuul-output/logs/installed-pkgs.log0000644000175000017500000004723615117130723020526 0ustar zuulzuulaardvark-dns-1.17.0-1.el9.x86_64 abattis-cantarell-fonts-0.301-4.el9.noarch acl-2.3.1-4.el9.x86_64 adobe-source-code-pro-fonts-2.030.1.050-12.el9.1.noarch alternatives-1.24-2.el9.x86_64 annobin-12.98-1.el9.x86_64 ansible-core-2.14.18-2.el9.x86_64 attr-2.5.1-3.el9.x86_64 audit-3.1.5-7.el9.x86_64 audit-libs-3.1.5-7.el9.x86_64 authselect-1.2.6-3.el9.x86_64 authselect-compat-1.2.6-3.el9.x86_64 authselect-libs-1.2.6-3.el9.x86_64 avahi-libs-0.8-23.el9.x86_64 basesystem-11-13.el9.noarch bash-5.1.8-9.el9.x86_64 bash-completion-2.11-5.el9.noarch binutils-2.35.2-69.el9.x86_64 binutils-gold-2.35.2-69.el9.x86_64 buildah-1.41.3-1.el9.x86_64 bzip2-1.0.8-10.el9.x86_64 bzip2-libs-1.0.8-10.el9.x86_64 ca-certificates-2025.2.80_v9.0.305-91.el9.noarch c-ares-1.19.1-2.el9.x86_64 centos-gpg-keys-9.0-30.el9.noarch centos-logos-90.8-3.el9.x86_64 centos-stream-release-9.0-30.el9.noarch centos-stream-repos-9.0-30.el9.noarch checkpolicy-3.6-1.el9.x86_64 chrony-4.8-1.el9.x86_64 cloud-init-24.4-7.el9.noarch cloud-utils-growpart-0.33-1.el9.x86_64 cmake-filesystem-3.31.8-3.el9.x86_64 cockpit-bridge-348-1.el9.noarch cockpit-system-348-1.el9.noarch cockpit-ws-348-1.el9.x86_64 cockpit-ws-selinux-348-1.el9.x86_64 conmon-2.1.13-1.el9.x86_64 containers-common-1-134.el9.x86_64 containers-common-extra-1-134.el9.x86_64 container-selinux-2.242.0-1.el9.noarch coreutils-8.32-39.el9.x86_64 coreutils-common-8.32-39.el9.x86_64 cpio-2.13-16.el9.x86_64 cpp-11.5.0-14.el9.x86_64 cracklib-2.9.6-27.el9.x86_64 cracklib-dicts-2.9.6-27.el9.x86_64 createrepo_c-0.20.1-4.el9.x86_64 createrepo_c-libs-0.20.1-4.el9.x86_64 criu-3.19-3.el9.x86_64 criu-libs-3.19-3.el9.x86_64 cronie-1.5.7-14.el9.x86_64 cronie-anacron-1.5.7-14.el9.x86_64 crontabs-1.11-26.20190603git.el9.noarch crun-1.24-1.el9.x86_64 crypto-policies-20251126-1.gite9c4db2.el9.noarch crypto-policies-scripts-20251126-1.gite9c4db2.el9.noarch 
cryptsetup-libs-2.8.1-2.el9.x86_64 curl-7.76.1-38.el9.x86_64 cyrus-sasl-2.1.27-21.el9.x86_64 cyrus-sasl-devel-2.1.27-21.el9.x86_64 cyrus-sasl-gssapi-2.1.27-21.el9.x86_64 cyrus-sasl-lib-2.1.27-21.el9.x86_64 dbus-1.12.20-8.el9.x86_64 dbus-broker-28-7.el9.x86_64 dbus-common-1.12.20-8.el9.noarch dbus-libs-1.12.20-8.el9.x86_64 dbus-tools-1.12.20-8.el9.x86_64 debugedit-5.0-11.el9.x86_64 dejavu-sans-fonts-2.37-18.el9.noarch desktop-file-utils-0.26-6.el9.x86_64 device-mapper-1.02.206-2.el9.x86_64 device-mapper-libs-1.02.206-2.el9.x86_64 dhcp-client-4.4.2-19.b1.el9.x86_64 dhcp-common-4.4.2-19.b1.el9.noarch diffutils-3.7-12.el9.x86_64 dnf-4.14.0-31.el9.noarch dnf-data-4.14.0-31.el9.noarch dnf-plugins-core-4.3.0-24.el9.noarch dracut-057-102.git20250818.el9.x86_64 dracut-config-generic-057-102.git20250818.el9.x86_64 dracut-network-057-102.git20250818.el9.x86_64 dracut-squash-057-102.git20250818.el9.x86_64 dwz-0.16-1.el9.x86_64 e2fsprogs-1.46.5-8.el9.x86_64 e2fsprogs-libs-1.46.5-8.el9.x86_64 ed-1.14.2-12.el9.x86_64 efi-srpm-macros-6-4.el9.noarch elfutils-0.194-1.el9.x86_64 elfutils-debuginfod-client-0.194-1.el9.x86_64 elfutils-default-yama-scope-0.194-1.el9.noarch elfutils-libelf-0.194-1.el9.x86_64 elfutils-libs-0.194-1.el9.x86_64 emacs-filesystem-27.2-18.el9.noarch enchant-1.6.0-30.el9.x86_64 ethtool-6.15-2.el9.x86_64 expat-2.5.0-5.el9.x86_64 expect-5.45.4-16.el9.x86_64 file-5.39-16.el9.x86_64 file-libs-5.39-16.el9.x86_64 filesystem-3.16-5.el9.x86_64 findutils-4.8.0-7.el9.x86_64 fonts-filesystem-2.0.5-7.el9.1.noarch fonts-srpm-macros-2.0.5-7.el9.1.noarch fuse3-3.10.2-9.el9.x86_64 fuse3-libs-3.10.2-9.el9.x86_64 fuse-common-3.10.2-9.el9.x86_64 fuse-libs-2.9.9-17.el9.x86_64 fuse-overlayfs-1.16-1.el9.x86_64 gawk-5.1.0-6.el9.x86_64 gawk-all-langpacks-5.1.0-6.el9.x86_64 gcc-11.5.0-14.el9.x86_64 gcc-c++-11.5.0-14.el9.x86_64 gcc-plugin-annobin-11.5.0-14.el9.x86_64 gdb-minimal-16.3-2.el9.x86_64 gdbm-libs-1.23-1.el9.x86_64 gdisk-1.0.7-5.el9.x86_64 gdk-pixbuf2-2.42.6-6.el9.x86_64 
geolite2-city-20191217-6.el9.noarch geolite2-country-20191217-6.el9.noarch gettext-0.21-8.el9.x86_64 gettext-libs-0.21-8.el9.x86_64 ghc-srpm-macros-1.5.0-6.el9.noarch git-2.47.3-1.el9.x86_64 git-core-2.47.3-1.el9.x86_64 git-core-doc-2.47.3-1.el9.noarch glib2-2.68.4-18.el9.x86_64 glibc-2.34-244.el9.x86_64 glibc-common-2.34-244.el9.x86_64 glibc-devel-2.34-244.el9.x86_64 glibc-gconv-extra-2.34-244.el9.x86_64 glibc-headers-2.34-244.el9.x86_64 glibc-langpack-en-2.34-244.el9.x86_64 glib-networking-2.68.3-3.el9.x86_64 gmp-6.2.0-13.el9.x86_64 gnupg2-2.3.3-4.el9.x86_64 gnutls-3.8.10-1.el9.x86_64 gobject-introspection-1.68.0-11.el9.x86_64 go-srpm-macros-3.8.1-1.el9.noarch gpgme-1.15.1-6.el9.x86_64 gpg-pubkey-8483c65d-5ccc5b19 grep-3.6-5.el9.x86_64 groff-base-1.22.4-10.el9.x86_64 grub2-common-2.06-120.el9.noarch grub2-pc-2.06-120.el9.x86_64 grub2-pc-modules-2.06-120.el9.noarch grub2-tools-2.06-120.el9.x86_64 grub2-tools-minimal-2.06-120.el9.x86_64 grubby-8.40-69.el9.x86_64 gsettings-desktop-schemas-40.0-8.el9.x86_64 gssproxy-0.8.4-7.el9.x86_64 gzip-1.12-1.el9.x86_64 hostname-3.23-6.el9.x86_64 hunspell-1.7.0-11.el9.x86_64 hunspell-en-GB-0.20140811.1-20.el9.noarch hunspell-en-US-0.20140811.1-20.el9.noarch hunspell-filesystem-1.7.0-11.el9.x86_64 hwdata-0.348-9.20.el9.noarch ima-evm-utils-1.6.2-2.el9.x86_64 info-6.7-15.el9.x86_64 inih-49-6.el9.x86_64 initscripts-rename-device-10.11.8-4.el9.x86_64 initscripts-service-10.11.8-4.el9.noarch ipcalc-1.0.0-5.el9.x86_64 iproute-6.17.0-1.el9.x86_64 iproute-tc-6.17.0-1.el9.x86_64 iptables-libs-1.8.10-11.el9.x86_64 iptables-nft-1.8.10-11.el9.x86_64 iptables-nft-services-1.8.10-11.el9.noarch iputils-20210202-15.el9.x86_64 irqbalance-1.9.4-5.el9.x86_64 jansson-2.14-1.el9.x86_64 jq-1.6-19.el9.x86_64 json-c-0.14-11.el9.x86_64 json-glib-1.6.6-1.el9.x86_64 kbd-2.4.0-11.el9.x86_64 kbd-legacy-2.4.0-11.el9.noarch kbd-misc-2.4.0-11.el9.noarch kernel-5.14.0-648.el9.x86_64 kernel-core-5.14.0-648.el9.x86_64 kernel-headers-5.14.0-648.el9.x86_64 
kernel-modules-5.14.0-648.el9.x86_64 kernel-modules-core-5.14.0-648.el9.x86_64 kernel-srpm-macros-1.0-14.el9.noarch kernel-tools-5.14.0-648.el9.x86_64 kernel-tools-libs-5.14.0-648.el9.x86_64 kexec-tools-2.0.29-12.el9.x86_64 keyutils-1.6.3-1.el9.x86_64 keyutils-libs-1.6.3-1.el9.x86_64 kmod-28-11.el9.x86_64 kmod-libs-28-11.el9.x86_64 kpartx-0.8.7-39.el9.x86_64 krb5-libs-1.21.1-8.el9.x86_64 langpacks-core-en_GB-3.0-16.el9.noarch langpacks-core-font-en-3.0-16.el9.noarch langpacks-en_GB-3.0-16.el9.noarch less-590-6.el9.x86_64 libacl-2.3.1-4.el9.x86_64 libappstream-glib-0.7.18-5.el9.x86_64 libarchive-3.5.3-6.el9.x86_64 libassuan-2.5.5-3.el9.x86_64 libattr-2.5.1-3.el9.x86_64 libbasicobjects-0.1.1-53.el9.x86_64 libblkid-2.37.4-21.el9.x86_64 libbpf-1.5.0-2.el9.x86_64 libbrotli-1.0.9-7.el9.x86_64 libcap-2.48-10.el9.x86_64 libcap-ng-0.8.2-7.el9.x86_64 libcbor-0.7.0-5.el9.x86_64 libcollection-0.7.0-53.el9.x86_64 libcom_err-1.46.5-8.el9.x86_64 libcomps-0.1.18-1.el9.x86_64 libcurl-7.76.1-38.el9.x86_64 libdaemon-0.14-23.el9.x86_64 libdb-5.3.28-57.el9.x86_64 libdhash-0.5.0-53.el9.x86_64 libdnf-0.69.0-16.el9.x86_64 libeconf-0.4.1-4.el9.x86_64 libedit-3.1-38.20210216cvs.el9.x86_64 libestr-0.1.11-4.el9.x86_64 libev-4.33-6.el9.x86_64 libevent-2.1.12-8.el9.x86_64 libfastjson-0.99.9-5.el9.x86_64 libfdisk-2.37.4-21.el9.x86_64 libffi-3.4.2-8.el9.x86_64 libffi-devel-3.4.2-8.el9.x86_64 libfido2-1.13.0-2.el9.x86_64 libgcc-11.5.0-14.el9.x86_64 libgcrypt-1.10.0-11.el9.x86_64 libgomp-11.5.0-14.el9.x86_64 libgpg-error-1.42-5.el9.x86_64 libgpg-error-devel-1.42-5.el9.x86_64 libibverbs-57.0-2.el9.x86_64 libicu-67.1-10.el9.x86_64 libidn2-2.3.0-7.el9.x86_64 libini_config-1.3.1-53.el9.x86_64 libjpeg-turbo-2.0.90-7.el9.x86_64 libkcapi-1.4.0-2.el9.x86_64 libkcapi-hmaccalc-1.4.0-2.el9.x86_64 libksba-1.5.1-7.el9.x86_64 libldb-4.23.3-1.el9.x86_64 libmaxminddb-1.5.2-4.el9.x86_64 libmnl-1.0.4-16.el9.x86_64 libmodulemd-2.13.0-2.el9.x86_64 libmount-2.37.4-21.el9.x86_64 libmpc-1.2.1-4.el9.x86_64 
libndp-1.9-1.el9.x86_64 libnet-1.2-7.el9.x86_64 libnetfilter_conntrack-1.0.9-1.el9.x86_64 libnfnetlink-1.0.1-23.el9.x86_64 libnfsidmap-2.5.4-39.el9.x86_64 libnftnl-1.2.6-4.el9.x86_64 libnghttp2-1.43.0-6.el9.x86_64 libnl3-3.11.0-1.el9.x86_64 libnl3-cli-3.11.0-1.el9.x86_64 libnsl2-2.0.0-1.el9.x86_64 libpath_utils-0.2.1-53.el9.x86_64 libpcap-1.10.0-4.el9.x86_64 libpipeline-1.5.3-4.el9.x86_64 libpkgconf-1.7.3-10.el9.x86_64 libpng-1.6.37-12.el9.x86_64 libproxy-0.4.15-35.el9.x86_64 libproxy-webkitgtk4-0.4.15-35.el9.x86_64 libpsl-0.21.1-5.el9.x86_64 libpwquality-1.4.4-8.el9.x86_64 libref_array-0.1.5-53.el9.x86_64 librepo-1.14.5-3.el9.x86_64 libreport-filesystem-2.15.2-6.el9.noarch libseccomp-2.5.2-2.el9.x86_64 libselinux-3.6-3.el9.x86_64 libselinux-utils-3.6-3.el9.x86_64 libsemanage-3.6-5.el9.x86_64 libsepol-3.6-3.el9.x86_64 libsigsegv-2.13-4.el9.x86_64 libslirp-4.4.0-8.el9.x86_64 libsmartcols-2.37.4-21.el9.x86_64 libsolv-0.7.24-3.el9.x86_64 libsoup-2.72.0-10.el9.x86_64 libss-1.46.5-8.el9.x86_64 libssh-0.10.4-15.el9.x86_64 libssh-config-0.10.4-15.el9.noarch libsss_certmap-2.9.7-5.el9.x86_64 libsss_idmap-2.9.7-5.el9.x86_64 libsss_nss_idmap-2.9.7-5.el9.x86_64 libsss_sudo-2.9.7-5.el9.x86_64 libstdc++-11.5.0-14.el9.x86_64 libstdc++-devel-11.5.0-14.el9.x86_64 libstemmer-0-18.585svn.el9.x86_64 libsysfs-2.1.1-11.el9.x86_64 libtalloc-2.4.3-1.el9.x86_64 libtasn1-4.16.0-9.el9.x86_64 libtdb-1.4.14-1.el9.x86_64 libteam-1.31-16.el9.x86_64 libtevent-0.17.1-1.el9.x86_64 libtirpc-1.3.3-9.el9.x86_64 libtool-ltdl-2.4.6-46.el9.x86_64 libunistring-0.9.10-15.el9.x86_64 liburing-2.12-1.el9.x86_64 libuser-0.63-17.el9.x86_64 libutempter-1.2.1-6.el9.x86_64 libuuid-2.37.4-21.el9.x86_64 libverto-0.3.2-3.el9.x86_64 libverto-libev-0.3.2-3.el9.x86_64 libvirt-libs-11.9.0-1.el9.x86_64 libwbclient-4.23.3-1.el9.x86_64 libxcrypt-4.4.18-3.el9.x86_64 libxcrypt-compat-4.4.18-3.el9.x86_64 libxcrypt-devel-4.4.18-3.el9.x86_64 libxml2-2.9.13-14.el9.x86_64 libxml2-devel-2.9.13-14.el9.x86_64 
libxslt-1.1.34-12.el9.x86_64 libxslt-devel-1.1.34-12.el9.x86_64 libyaml-0.2.5-7.el9.x86_64 libzstd-1.5.5-1.el9.x86_64 llvm-filesystem-21.1.3-1.el9.x86_64 llvm-libs-21.1.3-1.el9.x86_64 lmdb-libs-0.9.29-3.el9.x86_64 logrotate-3.18.0-12.el9.x86_64 lshw-B.02.20-3.el9.x86_64 lsscsi-0.32-6.el9.x86_64 lua-libs-5.4.4-4.el9.x86_64 lua-srpm-macros-1-6.el9.noarch lz4-libs-1.9.3-5.el9.x86_64 lzo-2.10-7.el9.x86_64 make-4.3-8.el9.x86_64 man-db-2.9.3-9.el9.x86_64 microcode_ctl-20251111-1.el9.noarch mpdecimal-2.5.1-3.el9.x86_64 mpfr-4.1.0-7.el9.x86_64 ncurses-6.2-12.20210508.el9.x86_64 ncurses-base-6.2-12.20210508.el9.noarch ncurses-c++-libs-6.2-12.20210508.el9.x86_64 ncurses-devel-6.2-12.20210508.el9.x86_64 ncurses-libs-6.2-12.20210508.el9.x86_64 netavark-1.16.0-1.el9.x86_64 nettle-3.10.1-1.el9.x86_64 NetworkManager-1.54.2-1.el9.x86_64 NetworkManager-libnm-1.54.2-1.el9.x86_64 NetworkManager-team-1.54.2-1.el9.x86_64 NetworkManager-tui-1.54.2-1.el9.x86_64 newt-0.52.21-11.el9.x86_64 nfs-utils-2.5.4-39.el9.x86_64 nftables-1.0.9-5.el9.x86_64 npth-1.6-8.el9.x86_64 numactl-libs-2.0.19-3.el9.x86_64 ocaml-srpm-macros-6-6.el9.noarch oddjob-0.34.7-7.el9.x86_64 oddjob-mkhomedir-0.34.7-7.el9.x86_64 oniguruma-6.9.6-1.el9.6.x86_64 openblas-srpm-macros-2-11.el9.noarch openldap-2.6.8-4.el9.x86_64 openldap-devel-2.6.8-4.el9.x86_64 openssh-9.9p1-2.el9.x86_64 openssh-clients-9.9p1-2.el9.x86_64 openssh-server-9.9p1-2.el9.x86_64 openssl-3.5.1-6.el9.x86_64 openssl-devel-3.5.1-6.el9.x86_64 openssl-fips-provider-3.5.1-6.el9.x86_64 openssl-libs-3.5.1-6.el9.x86_64 os-prober-1.77-12.el9.x86_64 p11-kit-0.25.10-1.el9.x86_64 p11-kit-trust-0.25.10-1.el9.x86_64 pam-1.5.1-26.el9.x86_64 parted-3.5-3.el9.x86_64 passt-0^20250512.g8ec1341-2.el9.x86_64 passt-selinux-0^20250512.g8ec1341-2.el9.noarch passwd-0.80-12.el9.x86_64 patch-2.7.6-16.el9.x86_64 pciutils-libs-3.7.0-7.el9.x86_64 pcre2-10.40-6.el9.x86_64 pcre2-syntax-10.40-6.el9.noarch pcre-8.44-4.el9.x86_64 perl-AutoLoader-5.74-483.el9.noarch 
perl-B-1.80-483.el9.x86_64 perl-base-2.27-483.el9.noarch perl-Carp-1.50-460.el9.noarch perl-Class-Struct-0.66-483.el9.noarch perl-constant-1.33-461.el9.noarch perl-Data-Dumper-2.174-462.el9.x86_64 perl-Digest-1.19-4.el9.noarch perl-Digest-MD5-2.58-4.el9.x86_64 perl-DynaLoader-1.47-483.el9.x86_64 perl-Encode-3.08-462.el9.x86_64 perl-Errno-1.30-483.el9.x86_64 perl-Error-0.17029-7.el9.noarch perl-Exporter-5.74-461.el9.noarch perl-Fcntl-1.13-483.el9.x86_64 perl-File-Basename-2.85-483.el9.noarch perl-File-Find-1.37-483.el9.noarch perl-FileHandle-2.03-483.el9.noarch perl-File-Path-2.18-4.el9.noarch perl-File-stat-1.09-483.el9.noarch perl-File-Temp-0.231.100-4.el9.noarch perl-Getopt-Long-2.52-4.el9.noarch perl-Getopt-Std-1.12-483.el9.noarch perl-Git-2.47.3-1.el9.noarch perl-HTTP-Tiny-0.076-462.el9.noarch perl-if-0.60.800-483.el9.noarch perl-interpreter-5.32.1-483.el9.x86_64 perl-IO-1.43-483.el9.x86_64 perl-IO-Socket-IP-0.41-5.el9.noarch perl-IO-Socket-SSL-2.073-2.el9.noarch perl-IPC-Open3-1.21-483.el9.noarch perl-lib-0.65-483.el9.x86_64 perl-libnet-3.13-4.el9.noarch perl-libs-5.32.1-483.el9.x86_64 perl-MIME-Base64-3.16-4.el9.x86_64 perl-Mozilla-CA-20200520-6.el9.noarch perl-mro-1.23-483.el9.x86_64 perl-NDBM_File-1.15-483.el9.x86_64 perl-Net-SSLeay-1.94-3.el9.x86_64 perl-overload-1.31-483.el9.noarch perl-overloading-0.02-483.el9.noarch perl-parent-0.238-460.el9.noarch perl-PathTools-3.78-461.el9.x86_64 perl-Pod-Escapes-1.07-460.el9.noarch perl-podlators-4.14-460.el9.noarch perl-Pod-Perldoc-3.28.01-461.el9.noarch perl-Pod-Simple-3.42-4.el9.noarch perl-Pod-Usage-2.01-4.el9.noarch perl-POSIX-1.94-483.el9.x86_64 perl-Scalar-List-Utils-1.56-462.el9.x86_64 perl-SelectSaver-1.02-483.el9.noarch perl-Socket-2.031-4.el9.x86_64 perl-srpm-macros-1-41.el9.noarch perl-Storable-3.21-460.el9.x86_64 perl-subs-1.03-483.el9.noarch perl-Symbol-1.08-483.el9.noarch perl-Term-ANSIColor-5.01-461.el9.noarch perl-Term-Cap-1.17-460.el9.noarch perl-TermReadKey-2.38-11.el9.x86_64 
perl-Text-ParseWords-3.30-460.el9.noarch perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch perl-Time-Local-1.300-7.el9.noarch perl-URI-5.09-3.el9.noarch perl-vars-1.05-483.el9.noarch pigz-2.5-4.el9.x86_64 pkgconf-1.7.3-10.el9.x86_64 pkgconf-m4-1.7.3-10.el9.noarch pkgconf-pkg-config-1.7.3-10.el9.x86_64 podman-5.6.0-2.el9.x86_64 policycoreutils-3.6-3.el9.x86_64 policycoreutils-python-utils-3.6-3.el9.noarch polkit-0.117-14.el9.x86_64 polkit-libs-0.117-14.el9.x86_64 polkit-pkla-compat-0.1-21.el9.x86_64 popt-1.18-8.el9.x86_64 prefixdevname-0.1.0-8.el9.x86_64 procps-ng-3.3.17-14.el9.x86_64 protobuf-c-1.3.3-13.el9.x86_64 psmisc-23.4-3.el9.x86_64 publicsuffix-list-dafsa-20210518-3.el9.noarch pyproject-srpm-macros-1.16.2-1.el9.noarch python3.12-3.12.12-1.el9.x86_64 python3.12-libs-3.12.12-1.el9.x86_64 python3.12-pip-23.2.1-5.el9.noarch python3.12-pip-wheel-23.2.1-5.el9.noarch python3.12-setuptools-68.2.2-5.el9.noarch python3-3.9.25-2.el9.x86_64 python3-attrs-20.3.0-7.el9.noarch python3-audit-3.1.5-7.el9.x86_64 python3-babel-2.9.1-2.el9.noarch python3-cffi-1.14.5-5.el9.x86_64 python3-chardet-4.0.0-5.el9.noarch python3-configobj-5.0.6-25.el9.noarch python3-cryptography-36.0.1-5.el9.x86_64 python3-dasbus-1.7-1.el9.noarch python3-dateutil-2.9.0.post0-1.el9.noarch python3-dbus-1.2.18-2.el9.x86_64 python3-devel-3.9.25-2.el9.x86_64 python3-distro-1.5.0-7.el9.noarch python3-dnf-4.14.0-31.el9.noarch python3-dnf-plugins-core-4.3.0-24.el9.noarch python3-enchant-3.2.0-5.el9.noarch python3-file-magic-5.39-16.el9.noarch python3-gobject-base-3.40.1-6.el9.x86_64 python3-gobject-base-noarch-3.40.1-6.el9.noarch python3-gpg-1.15.1-6.el9.x86_64 python3-hawkey-0.69.0-16.el9.x86_64 python3-idna-2.10-7.el9.1.noarch python3-jinja2-2.11.3-8.el9.noarch python3-jmespath-1.0.1-1.el9.noarch python3-jsonpatch-1.21-16.el9.noarch python3-jsonpointer-2.0-4.el9.noarch python3-jsonschema-3.2.0-13.el9.noarch python3-libcomps-0.1.18-1.el9.x86_64 python3-libdnf-0.69.0-16.el9.x86_64 python3-libs-3.9.25-2.el9.x86_64 
python3-libselinux-3.6-3.el9.x86_64 python3-libsemanage-3.6-5.el9.x86_64 python3-libvirt-11.9.0-1.el9.x86_64 python3-libxml2-2.9.13-14.el9.x86_64 python3-lxml-4.6.5-3.el9.x86_64 python3-markupsafe-1.1.1-12.el9.x86_64 python3-netaddr-0.10.1-3.el9.noarch python3-netifaces-0.10.6-15.el9.x86_64 python3-oauthlib-3.1.1-5.el9.noarch python3-packaging-20.9-5.el9.noarch python3-pexpect-4.8.0-7.el9.noarch python3-pip-21.3.1-1.el9.noarch python3-pip-wheel-21.3.1-1.el9.noarch python3-ply-3.11-14.el9.noarch python3-policycoreutils-3.6-3.el9.noarch python3-prettytable-0.7.2-27.el9.noarch python3-ptyprocess-0.6.0-12.el9.noarch python3-pycparser-2.20-6.el9.noarch python3-pyparsing-2.4.7-9.el9.noarch python3-pyrsistent-0.17.3-8.el9.x86_64 python3-pyserial-3.4-12.el9.noarch python3-pysocks-1.7.1-12.el9.noarch python3-pytz-2021.1-5.el9.noarch python3-pyyaml-5.4.1-6.el9.x86_64 python3-requests-2.25.1-10.el9.noarch python3-resolvelib-0.5.4-5.el9.noarch python3-rpm-4.16.1.3-40.el9.x86_64 python3-rpm-generators-12-9.el9.noarch python3-rpm-macros-3.9-54.el9.noarch python3-setools-4.4.4-1.el9.x86_64 python3-setuptools-53.0.0-15.el9.noarch python3-setuptools-wheel-53.0.0-15.el9.noarch python3-six-1.15.0-9.el9.noarch python3-systemd-234-19.el9.x86_64 python3-urllib3-1.26.5-6.el9.noarch python-rpm-macros-3.9-54.el9.noarch python-srpm-macros-3.9-54.el9.noarch python-unversioned-command-3.9.25-2.el9.noarch qemu-guest-agent-10.1.0-7.el9.x86_64 qt5-srpm-macros-5.15.9-1.el9.noarch quota-4.09-4.el9.x86_64 quota-nls-4.09-4.el9.noarch readline-8.1-4.el9.x86_64 readline-devel-8.1-4.el9.x86_64 redhat-rpm-config-210-1.el9.noarch rootfiles-8.1-35.el9.noarch rpcbind-1.2.6-7.el9.x86_64 rpm-4.16.1.3-40.el9.x86_64 rpm-build-4.16.1.3-40.el9.x86_64 rpm-build-libs-4.16.1.3-40.el9.x86_64 rpm-libs-4.16.1.3-40.el9.x86_64 rpmlint-1.11-19.el9.noarch rpm-plugin-audit-4.16.1.3-40.el9.x86_64 rpm-plugin-selinux-4.16.1.3-40.el9.x86_64 rpm-plugin-systemd-inhibit-4.16.1.3-40.el9.x86_64 rpm-sign-4.16.1.3-40.el9.x86_64 
rpm-sign-libs-4.16.1.3-40.el9.x86_64 rsync-3.2.5-4.el9.x86_64 rsyslog-8.2510.0-2.el9.x86_64 rsyslog-logrotate-8.2510.0-2.el9.x86_64 ruby-3.0.7-165.el9.x86_64 ruby-default-gems-3.0.7-165.el9.noarch ruby-devel-3.0.7-165.el9.x86_64 rubygem-bigdecimal-3.0.0-165.el9.x86_64 rubygem-bundler-2.2.33-165.el9.noarch rubygem-io-console-0.5.7-165.el9.x86_64 rubygem-json-2.5.1-165.el9.x86_64 rubygem-psych-3.3.2-165.el9.x86_64 rubygem-rdoc-6.3.4.1-165.el9.noarch rubygems-3.2.33-165.el9.noarch ruby-libs-3.0.7-165.el9.x86_64 rust-srpm-macros-17-4.el9.noarch samba-client-libs-4.23.3-1.el9.x86_64 samba-common-4.23.3-1.el9.noarch samba-common-libs-4.23.3-1.el9.x86_64 sed-4.8-9.el9.x86_64 selinux-policy-38.1.69-1.el9.noarch selinux-policy-targeted-38.1.69-1.el9.noarch setroubleshoot-plugins-3.3.14-4.el9.noarch setroubleshoot-server-3.3.35-2.el9.x86_64 setup-2.13.7-10.el9.noarch sg3_utils-1.47-10.el9.x86_64 sg3_utils-libs-1.47-10.el9.x86_64 shadow-utils-4.9-15.el9.x86_64 shadow-utils-subid-4.9-15.el9.x86_64 shared-mime-info-2.1-5.el9.x86_64 slang-2.3.2-11.el9.x86_64 slirp4netns-1.3.3-1.el9.x86_64 snappy-1.1.8-8.el9.x86_64 sos-4.10.1-1.el9.noarch sqlite-libs-3.34.1-9.el9.x86_64 squashfs-tools-4.4-10.git1.el9.x86_64 sscg-4.0.3-2.el9.x86_64 sshpass-1.09-4.el9.x86_64 sssd-client-2.9.7-5.el9.x86_64 sssd-common-2.9.7-5.el9.x86_64 sssd-kcm-2.9.7-5.el9.x86_64 sssd-nfs-idmap-2.9.7-5.el9.x86_64 sudo-1.9.5p2-13.el9.x86_64 systemd-252-59.el9.x86_64 systemd-devel-252-59.el9.x86_64 systemd-libs-252-59.el9.x86_64 systemd-pam-252-59.el9.x86_64 systemd-rpm-macros-252-59.el9.noarch systemd-udev-252-59.el9.x86_64 tar-1.34-7.el9.x86_64 tcl-8.6.10-7.el9.x86_64 tcpdump-4.99.0-9.el9.x86_64 teamd-1.31-16.el9.x86_64 time-1.9-18.el9.x86_64 tmux-3.2a-5.el9.x86_64 tpm2-tss-3.2.3-1.el9.x86_64 traceroute-2.1.1-1.el9.x86_64 tzdata-2025b-2.el9.noarch unzip-6.0-59.el9.x86_64 userspace-rcu-0.12.1-6.el9.x86_64 util-linux-2.37.4-21.el9.x86_64 util-linux-core-2.37.4-21.el9.x86_64 vim-minimal-8.2.2637-23.el9.x86_64 
webkit2gtk3-jsc-2.50.3-1.el9.x86_64 wget-1.21.1-8.el9.x86_64 which-2.21-30.el9.x86_64 xfsprogs-6.4.0-7.el9.x86_64 xz-5.2.5-8.el9.x86_64 xz-devel-5.2.5-8.el9.x86_64 xz-libs-5.2.5-8.el9.x86_64 yajl-2.1.0-25.el9.x86_64 yum-4.14.0-31.el9.noarch yum-utils-4.3.0-24.el9.noarch zip-3.0-35.el9.x86_64 zlib-1.2.11-41.el9.x86_64 zlib-devel-1.2.11-41.el9.x86_64 zstd-1.5.5-1.el9.x86_64 home/zuul/zuul-output/logs/python.log0000644000175000017500000000223615117130723017115 0ustar zuulzuulPython 3.9.25 pip 25.3 from /home/zuul/.local/lib/python3.12/site-packages/pip (python 3.12) ansible [core 2.17.8] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/zuul/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/zuul/.local/lib/python3.12/site-packages/ansible ansible collection location = /home/zuul/.ansible/collections:/usr/share/ansible/collections executable location = /home/zuul/.local/bin/ansible python version = 3.12.12 (main, Nov 14 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-14)] (/usr/bin/python3.12) jinja version = 3.1.6 libyaml = True ansible-core==2.17.8 cachetools==6.2.3 certifi==2025.11.12 cffi==2.0.0 charset-normalizer==3.4.4 cryptography==46.0.3 google-auth==2.43.0 idna==3.11 Jinja2==3.1.6 kubernetes==24.2.0 MarkupSafe==3.0.3 oauthlib==3.2.2 openshift==0.13.1 packaging==25.0 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycparser==2.23 python-dateutil==2.9.0.post0 python-string-utils==1.0.0 PyYAML==6.0.3 requests==2.32.4 requests-oauthlib==1.3.0 resolvelib==1.0.1 rsa==4.9.1 setuptools==68.2.2 six==1.17.0 urllib3==2.6.2 websocket-client==1.9.0 home/zuul/zuul-output/logs/dmesg.log0000644000175000017500000015204615117130723016700 0ustar zuulzuul[Sat Dec 13 00:01:32 2025] Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025 [Sat Dec 13 00:01:32 
2025] The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com. [Sat Dec 13 00:01:32 2025] Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M [Sat Dec 13 00:01:32 2025] BIOS-provided physical RAM map: [Sat Dec 13 00:01:32 2025] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable [Sat Dec 13 00:01:32 2025] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved [Sat Dec 13 00:01:32 2025] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [Sat Dec 13 00:01:32 2025] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable [Sat Dec 13 00:01:32 2025] BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved [Sat Dec 13 00:01:32 2025] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved [Sat Dec 13 00:01:32 2025] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved [Sat Dec 13 00:01:32 2025] BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable [Sat Dec 13 00:01:32 2025] NX (Execute Disable) protection: active [Sat Dec 13 00:01:32 2025] APIC: Static calls initialized [Sat Dec 13 00:01:32 2025] SMBIOS 2.8 present. 
[Sat Dec 13 00:01:32 2025] DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 [Sat Dec 13 00:01:32 2025] Hypervisor detected: KVM [Sat Dec 13 00:01:32 2025] kvm-clock: Using msrs 4b564d01 and 4b564d00 [Sat Dec 13 00:01:32 2025] kvm-clock: using sched offset of 3148515771 cycles [Sat Dec 13 00:01:32 2025] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns [Sat Dec 13 00:01:32 2025] tsc: Detected 2800.000 MHz processor [Sat Dec 13 00:01:32 2025] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved [Sat Dec 13 00:01:32 2025] e820: remove [mem 0x000a0000-0x000fffff] usable [Sat Dec 13 00:01:32 2025] last_pfn = 0x240000 max_arch_pfn = 0x400000000 [Sat Dec 13 00:01:32 2025] MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs [Sat Dec 13 00:01:32 2025] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [Sat Dec 13 00:01:32 2025] last_pfn = 0xbffdb max_arch_pfn = 0x400000000 [Sat Dec 13 00:01:32 2025] found SMP MP-table at [mem 0x000f5ae0-0x000f5aef] [Sat Dec 13 00:01:32 2025] Using GB pages for direct mapping [Sat Dec 13 00:01:32 2025] RAMDISK: [mem 0x2d46a000-0x32a2cfff] [Sat Dec 13 00:01:32 2025] ACPI: Early table checksum verification disabled [Sat Dec 13 00:01:32 2025] ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) [Sat Dec 13 00:01:32 2025] ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Sat Dec 13 00:01:32 2025] ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Sat Dec 13 00:01:32 2025] ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Sat Dec 13 00:01:32 2025] ACPI: FACS 0x00000000BFFDFC40 000040 [Sat Dec 13 00:01:32 2025] ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Sat Dec 13 00:01:32 2025] ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) [Sat Dec 13 00:01:32 2025] ACPI: Reserving FACP table memory at [mem 
0xbffe1571-0xbffe15e4] [Sat Dec 13 00:01:32 2025] ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570] [Sat Dec 13 00:01:32 2025] ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f] [Sat Dec 13 00:01:32 2025] ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694] [Sat Dec 13 00:01:32 2025] ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc] [Sat Dec 13 00:01:32 2025] No NUMA configuration found [Sat Dec 13 00:01:32 2025] Faking a node at [mem 0x0000000000000000-0x000000023fffffff] [Sat Dec 13 00:01:32 2025] NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff] [Sat Dec 13 00:01:32 2025] crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB) [Sat Dec 13 00:01:32 2025] Zone ranges: [Sat Dec 13 00:01:32 2025] DMA [mem 0x0000000000001000-0x0000000000ffffff] [Sat Dec 13 00:01:32 2025] DMA32 [mem 0x0000000001000000-0x00000000ffffffff] [Sat Dec 13 00:01:32 2025] Normal [mem 0x0000000100000000-0x000000023fffffff] [Sat Dec 13 00:01:32 2025] Device empty [Sat Dec 13 00:01:32 2025] Movable zone start for each node [Sat Dec 13 00:01:32 2025] Early memory node ranges [Sat Dec 13 00:01:32 2025] node 0: [mem 0x0000000000001000-0x000000000009efff] [Sat Dec 13 00:01:32 2025] node 0: [mem 0x0000000000100000-0x00000000bffdafff] [Sat Dec 13 00:01:32 2025] node 0: [mem 0x0000000100000000-0x000000023fffffff] [Sat Dec 13 00:01:32 2025] Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff] [Sat Dec 13 00:01:32 2025] On node 0, zone DMA: 1 pages in unavailable ranges [Sat Dec 13 00:01:32 2025] On node 0, zone DMA: 97 pages in unavailable ranges [Sat Dec 13 00:01:32 2025] On node 0, zone Normal: 37 pages in unavailable ranges [Sat Dec 13 00:01:32 2025] ACPI: PM-Timer IO Port: 0x608 [Sat Dec 13 00:01:32 2025] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [Sat Dec 13 00:01:32 2025] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 [Sat Dec 13 00:01:32 2025] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 
2 dfl dfl) [Sat Dec 13 00:01:32 2025] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) [Sat Dec 13 00:01:32 2025] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [Sat Dec 13 00:01:32 2025] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) [Sat Dec 13 00:01:32 2025] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) [Sat Dec 13 00:01:32 2025] ACPI: Using ACPI (MADT) for SMP configuration information [Sat Dec 13 00:01:32 2025] TSC deadline timer available [Sat Dec 13 00:01:32 2025] CPU topo: Max. logical packages: 8 [Sat Dec 13 00:01:32 2025] CPU topo: Max. logical dies: 8 [Sat Dec 13 00:01:32 2025] CPU topo: Max. dies per package: 1 [Sat Dec 13 00:01:32 2025] CPU topo: Max. threads per core: 1 [Sat Dec 13 00:01:32 2025] CPU topo: Num. cores per package: 1 [Sat Dec 13 00:01:32 2025] CPU topo: Num. threads per package: 1 [Sat Dec 13 00:01:32 2025] CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs [Sat Dec 13 00:01:32 2025] kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() [Sat Dec 13 00:01:32 2025] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] [Sat Dec 13 00:01:32 2025] PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff] [Sat Dec 13 00:01:32 2025] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff] [Sat Dec 13 00:01:32 2025] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff] [Sat Dec 13 00:01:32 2025] PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff] [Sat Dec 13 00:01:32 2025] PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff] [Sat Dec 13 00:01:32 2025] PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff] [Sat Dec 13 00:01:32 2025] PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff] [Sat Dec 13 00:01:32 2025] PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff] [Sat Dec 13 00:01:32 2025] [mem 0xc0000000-0xfeffbfff] available for 
PCI devices
[Sat Dec 13 00:01:32 2025] Booting paravirtualized kernel on KVM
[Sat Dec 13 00:01:32 2025] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
[Sat Dec 13 00:01:32 2025] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
[Sat Dec 13 00:01:32 2025] percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
[Sat Dec 13 00:01:32 2025] pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
[Sat Dec 13 00:01:32 2025] pcpu-alloc: [0] 0 1 2 3 4 5 6 7
[Sat Dec 13 00:01:32 2025] kvm-guest: PV spinlocks disabled, no host support
[Sat Dec 13 00:01:32 2025] Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
[Sat Dec 13 00:01:32 2025] Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
[Sat Dec 13 00:01:32 2025] random: crng init done
[Sat Dec 13 00:01:32 2025] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
[Sat Dec 13 00:01:32 2025] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
[Sat Dec 13 00:01:32 2025] Fallback order for Node 0: 0
[Sat Dec 13 00:01:32 2025] Built 1 zonelists, mobility grouping on. Total pages: 2064091
[Sat Dec 13 00:01:32 2025] Policy zone: Normal
[Sat Dec 13 00:01:32 2025] mem auto-init: stack:off, heap alloc:off, heap free:off
[Sat Dec 13 00:01:32 2025] software IO TLB: area num 8.
[Sat Dec 13 00:01:32 2025] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
[Sat Dec 13 00:01:32 2025] ftrace: allocating 49357 entries in 193 pages
[Sat Dec 13 00:01:32 2025] ftrace: allocated 193 pages with 3 groups
[Sat Dec 13 00:01:32 2025] Dynamic Preempt: voluntary
[Sat Dec 13 00:01:32 2025] rcu: Preemptible hierarchical RCU implementation.
[Sat Dec 13 00:01:32 2025] rcu: RCU event tracing is enabled.
[Sat Dec 13 00:01:32 2025] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
[Sat Dec 13 00:01:32 2025] Trampoline variant of Tasks RCU enabled.
[Sat Dec 13 00:01:32 2025] Rude variant of Tasks RCU enabled.
[Sat Dec 13 00:01:32 2025] Tracing variant of Tasks RCU enabled.
[Sat Dec 13 00:01:32 2025] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
[Sat Dec 13 00:01:32 2025] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
[Sat Dec 13 00:01:32 2025] RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
[Sat Dec 13 00:01:32 2025] RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
[Sat Dec 13 00:01:32 2025] RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
[Sat Dec 13 00:01:32 2025] NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
[Sat Dec 13 00:01:32 2025] rcu: srcu_init: Setting srcu_struct sizes based on contention.
[Sat Dec 13 00:01:32 2025] kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
[Sat Dec 13 00:01:32 2025] Console: colour VGA+ 80x25
[Sat Dec 13 00:01:32 2025] printk: console [ttyS0] enabled
[Sat Dec 13 00:01:32 2025] ACPI: Core revision 20230331
[Sat Dec 13 00:01:32 2025] APIC: Switch to symmetric I/O mode setup
[Sat Dec 13 00:01:32 2025] x2apic enabled
[Sat Dec 13 00:01:32 2025] APIC: Switched APIC routing to: physical x2apic
[Sat Dec 13 00:01:32 2025] tsc: Marking TSC unstable due to TSCs unsynchronized
[Sat Dec 13 00:01:32 2025] Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
[Sat Dec 13 00:01:32 2025] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[Sat Dec 13 00:01:32 2025] Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
[Sat Dec 13 00:01:32 2025] Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
[Sat Dec 13 00:01:32 2025] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[Sat Dec 13 00:01:32 2025] Spectre V2 : Mitigation: Retpolines
[Sat Dec 13 00:01:32 2025] Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
[Sat Dec 13 00:01:32 2025] Spectre V2 : Enabling Speculation Barrier for firmware calls
[Sat Dec 13 00:01:32 2025] RETBleed: Mitigation: untrained return thunk
[Sat Dec 13 00:01:32 2025] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[Sat Dec 13 00:01:32 2025] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
[Sat Dec 13 00:01:32 2025] Speculative Return Stack Overflow: IBPB-extending microcode not applied!
[Sat Dec 13 00:01:32 2025] Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
[Sat Dec 13 00:01:32 2025] x86/bugs: return thunk changed
[Sat Dec 13 00:01:32 2025] Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
[Sat Dec 13 00:01:32 2025] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[Sat Dec 13 00:01:32 2025] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[Sat Dec 13 00:01:32 2025] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[Sat Dec 13 00:01:32 2025] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
[Sat Dec 13 00:01:32 2025] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
[Sat Dec 13 00:01:32 2025] Freeing SMP alternatives memory: 40K
[Sat Dec 13 00:01:32 2025] pid_max: default: 32768 minimum: 301
[Sat Dec 13 00:01:32 2025] LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
[Sat Dec 13 00:01:32 2025] landlock: Up and running.
[Sat Dec 13 00:01:32 2025] Yama: becoming mindful.
[Sat Dec 13 00:01:32 2025] SELinux: Initializing.
[Sat Dec 13 00:01:32 2025] LSM support for eBPF active
[Sat Dec 13 00:01:32 2025] Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[Sat Dec 13 00:01:32 2025] Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[Sat Dec 13 00:01:32 2025] smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
[Sat Dec 13 00:01:32 2025] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[Sat Dec 13 00:01:32 2025] ... version:                0
[Sat Dec 13 00:01:32 2025] ... bit width:              48
[Sat Dec 13 00:01:32 2025] ... generic registers:      6
[Sat Dec 13 00:01:32 2025] ... value mask:             0000ffffffffffff
[Sat Dec 13 00:01:32 2025] ... max period:             00007fffffffffff
[Sat Dec 13 00:01:32 2025] ... fixed-purpose events:   0
[Sat Dec 13 00:01:32 2025] ... event mask:             000000000000003f
[Sat Dec 13 00:01:32 2025] signal: max sigframe size: 1776
[Sat Dec 13 00:01:32 2025] rcu: Hierarchical SRCU implementation.
[Sat Dec 13 00:01:32 2025] rcu: Max phase no-delay instances is 400.
[Sat Dec 13 00:01:32 2025] smp: Bringing up secondary CPUs ...
[Sat Dec 13 00:01:32 2025] smpboot: x86: Booting SMP configuration:
[Sat Dec 13 00:01:32 2025] .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
[Sat Dec 13 00:01:32 2025] smp: Brought up 1 node, 8 CPUs
[Sat Dec 13 00:01:32 2025] smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
[Sat Dec 13 00:01:32 2025] node 0 deferred pages initialised in 11ms
[Sat Dec 13 00:01:32 2025] Memory: 7763972K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 618220K reserved, 0K cma-reserved)
[Sat Dec 13 00:01:32 2025] devtmpfs: initialized
[Sat Dec 13 00:01:32 2025] x86/mm: Memory block size: 128MB
[Sat Dec 13 00:01:32 2025] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[Sat Dec 13 00:01:32 2025] futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
[Sat Dec 13 00:01:32 2025] pinctrl core: initialized pinctrl subsystem
[Sat Dec 13 00:01:32 2025] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[Sat Dec 13 00:01:32 2025] DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
[Sat Dec 13 00:01:32 2025] DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
[Sat Dec 13 00:01:32 2025] DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[Sat Dec 13 00:01:32 2025] audit: initializing netlink subsys (disabled)
[Sat Dec 13 00:01:32 2025] audit: type=2000 audit(1765584092.383:1): state=initialized audit_enabled=0 res=1
[Sat Dec 13 00:01:32 2025] thermal_sys: Registered thermal governor 'fair_share'
[Sat Dec 13 00:01:32 2025] thermal_sys: Registered thermal governor 'step_wise'
[Sat Dec 13 00:01:32 2025] thermal_sys: Registered thermal governor 'user_space'
[Sat Dec 13 00:01:32 2025] cpuidle: using governor menu
[Sat Dec 13 00:01:32 2025] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[Sat Dec 13 00:01:32 2025] PCI: Using configuration type 1 for base access
[Sat Dec 13 00:01:32 2025] PCI: Using configuration type 1 for extended access
[Sat Dec 13 00:01:32 2025] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
[Sat Dec 13 00:01:32 2025] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
[Sat Dec 13 00:01:32 2025] HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
[Sat Dec 13 00:01:32 2025] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
[Sat Dec 13 00:01:32 2025] HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
[Sat Dec 13 00:01:32 2025] Demotion targets for Node 0: null
[Sat Dec 13 00:01:32 2025] cryptd: max_cpu_qlen set to 1000
[Sat Dec 13 00:01:32 2025] ACPI: Added _OSI(Module Device)
[Sat Dec 13 00:01:32 2025] ACPI: Added _OSI(Processor Device)
[Sat Dec 13 00:01:32 2025] ACPI: Added _OSI(3.0 _SCP Extensions)
[Sat Dec 13 00:01:32 2025] ACPI: Added _OSI(Processor Aggregator Device)
[Sat Dec 13 00:01:32 2025] ACPI: 1 ACPI AML tables successfully acquired and loaded
[Sat Dec 13 00:01:32 2025] ACPI: _OSC evaluation for CPUs failed, trying _PDC
[Sat Dec 13 00:01:32 2025] ACPI: Interpreter enabled
[Sat Dec 13 00:01:32 2025] ACPI: PM: (supports S0 S3 S4 S5)
[Sat Dec 13 00:01:32 2025] ACPI: Using IOAPIC for interrupt routing
[Sat Dec 13 00:01:32 2025] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[Sat Dec 13 00:01:32 2025] PCI: Using E820 reservations for host bridge windows
[Sat Dec 13 00:01:32 2025] ACPI: Enabled 2 GPEs in block 00 to 0F
[Sat Dec 13 00:01:32 2025] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[Sat Dec 13 00:01:32 2025] acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [3] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [4] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [5] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [6] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [7] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [8] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [9] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [10] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [11] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [12] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [13] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [14] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [15] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [16] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [17] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [18] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [19] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [20] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [21] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [22] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [23] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [24] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [25] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [26] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [27] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [28] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [29] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [30] registered
[Sat Dec 13 00:01:32 2025] acpiphp: Slot [31] registered
[Sat Dec 13 00:01:32 2025] PCI host bridge to bus 0000:00
[Sat Dec 13 00:01:32 2025] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[Sat Dec 13 00:01:32 2025] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[Sat Dec 13 00:01:32 2025] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[Sat Dec 13 00:01:32 2025] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
[Sat Dec 13 00:01:32 2025] pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
[Sat Dec 13 00:01:32 2025] pci_bus 0000:00: root bus resource [bus 00-ff]
[Sat Dec 13 00:01:32 2025] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.1: BAR 4 [io 0xc140-0xc14f]
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.2: BAR 4 [io 0xc100-0xc11f]
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
[Sat Dec 13 00:01:32 2025] pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
[Sat Dec 13 00:01:32 2025] pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
[Sat Dec 13 00:01:32 2025] pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
[Sat Dec 13 00:01:32 2025] pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
[Sat Dec 13 00:01:32 2025] pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
[Sat Dec 13 00:01:32 2025] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[Sat Dec 13 00:01:32 2025] pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
[Sat Dec 13 00:01:32 2025] pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf]
[Sat Dec 13 00:01:32 2025] pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
[Sat Dec 13 00:01:32 2025] pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
[Sat Dec 13 00:01:32 2025] pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
[Sat Dec 13 00:01:32 2025] pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
[Sat Dec 13 00:01:32 2025] pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
[Sat Dec 13 00:01:32 2025] pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
[Sat Dec 13 00:01:32 2025] pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
[Sat Dec 13 00:01:32 2025] pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
[Sat Dec 13 00:01:32 2025] pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff]
[Sat Dec 13 00:01:32 2025] pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
[Sat Dec 13 00:01:32 2025] pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
[Sat Dec 13 00:01:32 2025] pci 0000:00:06.0: BAR 0 [io 0xc120-0xc13f]
[Sat Dec 13 00:01:32 2025] pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
[Sat Dec 13 00:01:32 2025] ACPI: PCI: Interrupt link LNKA configured for IRQ 10
[Sat Dec 13 00:01:32 2025] ACPI: PCI: Interrupt link LNKB configured for IRQ 10
[Sat Dec 13 00:01:32 2025] ACPI: PCI: Interrupt link LNKC configured for IRQ 11
[Sat Dec 13 00:01:32 2025] ACPI: PCI: Interrupt link LNKD configured for IRQ 11
[Sat Dec 13 00:01:32 2025] ACPI: PCI: Interrupt link LNKS configured for IRQ 9
[Sat Dec 13 00:01:32 2025] iommu: Default domain type: Translated
[Sat Dec 13 00:01:32 2025] iommu: DMA domain TLB invalidation policy: lazy mode
[Sat Dec 13 00:01:32 2025] SCSI subsystem initialized
[Sat Dec 13 00:01:32 2025] ACPI: bus type USB registered
[Sat Dec 13 00:01:32 2025] usbcore: registered new interface driver usbfs
[Sat Dec 13 00:01:32 2025] usbcore: registered new interface driver hub
[Sat Dec 13 00:01:32 2025] usbcore: registered new device driver usb
[Sat Dec 13 00:01:32 2025] pps_core: LinuxPPS API ver. 1 registered
[Sat Dec 13 00:01:32 2025] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
[Sat Dec 13 00:01:32 2025] PTP clock support registered
[Sat Dec 13 00:01:32 2025] EDAC MC: Ver: 3.0.0
[Sat Dec 13 00:01:32 2025] NetLabel: Initializing
[Sat Dec 13 00:01:32 2025] NetLabel: domain hash size = 128
[Sat Dec 13 00:01:32 2025] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
[Sat Dec 13 00:01:32 2025] NetLabel: unlabeled traffic allowed by default
[Sat Dec 13 00:01:32 2025] PCI: Using ACPI for IRQ routing
[Sat Dec 13 00:01:32 2025] PCI: pci_cache_line_size set to 64 bytes
[Sat Dec 13 00:01:32 2025] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
[Sat Dec 13 00:01:32 2025] e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
[Sat Dec 13 00:01:32 2025] pci 0000:00:02.0: vgaarb: setting as boot VGA device
[Sat Dec 13 00:01:32 2025] pci 0000:00:02.0: vgaarb: bridge control possible
[Sat Dec 13 00:01:32 2025] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[Sat Dec 13 00:01:32 2025] vgaarb: loaded
[Sat Dec 13 00:01:32 2025] clocksource: Switched to clocksource kvm-clock
[Sat Dec 13 00:01:32 2025] VFS: Disk quotas dquot_6.6.0
[Sat Dec 13 00:01:32 2025] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[Sat Dec 13 00:01:32 2025] pnp: PnP ACPI init
[Sat Dec 13 00:01:32 2025] pnp 00:03: [dma 2]
[Sat Dec 13 00:01:32 2025] pnp: PnP ACPI: found 5 devices
[Sat Dec 13 00:01:32 2025] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[Sat Dec 13 00:01:32 2025] NET: Registered PF_INET protocol family
[Sat Dec 13 00:01:32 2025] IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[Sat Dec 13 00:01:32 2025] tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
[Sat Dec 13 00:01:32 2025] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
[Sat Dec 13 00:01:32 2025] TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
[Sat Dec 13 00:01:32 2025] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
[Sat Dec 13 00:01:32 2025] TCP: Hash tables configured (established 65536 bind 65536)
[Sat Dec 13 00:01:32 2025] MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
[Sat Dec 13 00:01:32 2025] UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
[Sat Dec 13 00:01:32 2025] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
[Sat Dec 13 00:01:32 2025] NET: Registered PF_UNIX/PF_LOCAL protocol family
[Sat Dec 13 00:01:32 2025] NET: Registered PF_XDP protocol family
[Sat Dec 13 00:01:32 2025] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
[Sat Dec 13 00:01:32 2025] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
[Sat Dec 13 00:01:32 2025] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[Sat Dec 13 00:01:32 2025] pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
[Sat Dec 13 00:01:32 2025] pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[Sat Dec 13 00:01:32 2025] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[Sat Dec 13 00:01:32 2025] ACPI: \_SB_.LNKD: Enabled at IRQ 11
[Sat Dec 13 00:01:32 2025] pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 88154 usecs
[Sat Dec 13 00:01:32 2025] PCI: CLS 0 bytes, default 64
[Sat Dec 13 00:01:32 2025] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[Sat Dec 13 00:01:32 2025] software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
[Sat Dec 13 00:01:32 2025] ACPI: bus type thunderbolt registered
[Sat Dec 13 00:01:32 2025] Trying to unpack rootfs image as initramfs...
[Sat Dec 13 00:01:32 2025] Initialise system trusted keyrings
[Sat Dec 13 00:01:32 2025] Key type blacklist registered
[Sat Dec 13 00:01:32 2025] workingset: timestamp_bits=36 max_order=21 bucket_order=0
[Sat Dec 13 00:01:32 2025] zbud: loaded
[Sat Dec 13 00:01:32 2025] integrity: Platform Keyring initialized
[Sat Dec 13 00:01:32 2025] integrity: Machine keyring initialized
[Sat Dec 13 00:01:32 2025] Freeing initrd memory: 87820K
[Sat Dec 13 00:01:32 2025] NET: Registered PF_ALG protocol family
[Sat Dec 13 00:01:32 2025] xor: automatically using best checksumming function   avx
[Sat Dec 13 00:01:32 2025] Key type asymmetric registered
[Sat Dec 13 00:01:32 2025] Asymmetric key parser 'x509' registered
[Sat Dec 13 00:01:32 2025] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
[Sat Dec 13 00:01:32 2025] io scheduler mq-deadline registered
[Sat Dec 13 00:01:32 2025] io scheduler kyber registered
[Sat Dec 13 00:01:32 2025] io scheduler bfq registered
[Sat Dec 13 00:01:33 2025] atomic64_test: passed for x86-64 platform with CX8 and with SSE
[Sat Dec 13 00:01:33 2025] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
[Sat Dec 13 00:01:33 2025] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[Sat Dec 13 00:01:33 2025] ACPI: button: Power Button [PWRF]
[Sat Dec 13 00:01:33 2025] ACPI: \_SB_.LNKB: Enabled at IRQ 10
[Sat Dec 13 00:01:33 2025] ACPI: \_SB_.LNKC: Enabled at IRQ 11
[Sat Dec 13 00:01:33 2025] ACPI: \_SB_.LNKA: Enabled at IRQ 10
[Sat Dec 13 00:01:33 2025] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[Sat Dec 13 00:01:33 2025] 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[Sat Dec 13 00:01:33 2025] Non-volatile memory driver v1.3
[Sat Dec 13 00:01:33 2025] rdac: device handler registered
[Sat Dec 13 00:01:33 2025] hp_sw: device handler registered
[Sat Dec 13 00:01:33 2025] emc: device handler registered
[Sat Dec 13 00:01:33 2025] alua: device handler registered
[Sat Dec 13 00:01:33 2025] uhci_hcd 0000:00:01.2: UHCI Host Controller
[Sat Dec 13 00:01:33 2025] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
[Sat Dec 13 00:01:33 2025] uhci_hcd 0000:00:01.2: detected 2 ports
[Sat Dec 13 00:01:33 2025] uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
[Sat Dec 13 00:01:33 2025] usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
[Sat Dec 13 00:01:33 2025] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[Sat Dec 13 00:01:33 2025] usb usb1: Product: UHCI Host Controller
[Sat Dec 13 00:01:33 2025] usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
[Sat Dec 13 00:01:33 2025] usb usb1: SerialNumber: 0000:00:01.2
[Sat Dec 13 00:01:33 2025] hub 1-0:1.0: USB hub found
[Sat Dec 13 00:01:33 2025] hub 1-0:1.0: 2 ports detected
[Sat Dec 13 00:01:33 2025] usbcore: registered new interface driver usbserial_generic
[Sat Dec 13 00:01:33 2025] usbserial: USB Serial support registered for generic
[Sat Dec 13 00:01:33 2025] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[Sat Dec 13 00:01:33 2025] serio: i8042 KBD port at 0x60,0x64 irq 1
[Sat Dec 13 00:01:33 2025] serio: i8042 AUX port at 0x60,0x64 irq 12
[Sat Dec 13 00:01:33 2025] mousedev: PS/2 mouse device common for all mice
[Sat Dec 13 00:01:33 2025] rtc_cmos 00:04: RTC can wake from S4
[Sat Dec 13 00:01:33 2025] rtc_cmos 00:04: registered as rtc0
[Sat Dec 13 00:01:33 2025] rtc_cmos 00:04: setting system clock to 2025-12-13T00:01:33 UTC (1765584093)
[Sat Dec 13 00:01:33 2025] rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
[Sat Dec 13 00:01:33 2025] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[Sat Dec 13 00:01:33 2025] amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
[Sat Dec 13 00:01:33 2025] hid: raw HID events driver (C) Jiri Kosina
[Sat Dec 13 00:01:33 2025] usbcore: registered new interface driver usbhid
[Sat Dec 13 00:01:33 2025] usbhid: USB HID core driver
[Sat Dec 13 00:01:33 2025] drop_monitor: Initializing network drop monitor service
[Sat Dec 13 00:01:33 2025] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
[Sat Dec 13 00:01:33 2025] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
[Sat Dec 13 00:01:33 2025] Initializing XFRM netlink socket
[Sat Dec 13 00:01:33 2025] NET: Registered PF_INET6 protocol family
[Sat Dec 13 00:01:33 2025] Segment Routing with IPv6
[Sat Dec 13 00:01:33 2025] NET: Registered PF_PACKET protocol family
[Sat Dec 13 00:01:33 2025] mpls_gso: MPLS GSO support
[Sat Dec 13 00:01:33 2025] IPI shorthand broadcast: enabled
[Sat Dec 13 00:01:33 2025] AVX2 version of gcm_enc/dec engaged.
[Sat Dec 13 00:01:33 2025] AES CTR mode by8 optimization enabled
[Sat Dec 13 00:01:33 2025] sched_clock: Marking stable (1189003049, 143566190)->(1452354749, -119785510)
[Sat Dec 13 00:01:33 2025] registered taskstats version 1
[Sat Dec 13 00:01:33 2025] Loading compiled-in X.509 certificates
[Sat Dec 13 00:01:33 2025] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
[Sat Dec 13 00:01:33 2025] Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
[Sat Dec 13 00:01:33 2025] Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
[Sat Dec 13 00:01:33 2025] Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
[Sat Dec 13 00:01:33 2025] Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
[Sat Dec 13 00:01:33 2025] Demotion targets for Node 0: null
[Sat Dec 13 00:01:33 2025] page_owner is disabled
[Sat Dec 13 00:01:33 2025] Key type .fscrypt registered
[Sat Dec 13 00:01:33 2025] Key type fscrypt-provisioning registered
[Sat Dec 13 00:01:33 2025] Key type big_key registered
[Sat Dec 13 00:01:33 2025] Key type encrypted registered
[Sat Dec 13 00:01:33 2025] ima: No TPM chip found, activating TPM-bypass!
[Sat Dec 13 00:01:33 2025] Loading compiled-in module X.509 certificates
[Sat Dec 13 00:01:33 2025] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
[Sat Dec 13 00:01:33 2025] ima: Allocated hash algorithm: sha256
[Sat Dec 13 00:01:33 2025] ima: No architecture policies found
[Sat Dec 13 00:01:33 2025] evm: Initialising EVM extended attributes:
[Sat Dec 13 00:01:33 2025] evm: security.selinux
[Sat Dec 13 00:01:33 2025] evm: security.SMACK64 (disabled)
[Sat Dec 13 00:01:33 2025] evm: security.SMACK64EXEC (disabled)
[Sat Dec 13 00:01:33 2025] evm: security.SMACK64TRANSMUTE (disabled)
[Sat Dec 13 00:01:33 2025] evm: security.SMACK64MMAP (disabled)
[Sat Dec 13 00:01:33 2025] evm: security.apparmor (disabled)
[Sat Dec 13 00:01:33 2025] evm: security.ima
[Sat Dec 13 00:01:33 2025] evm: security.capability
[Sat Dec 13 00:01:33 2025] evm: HMAC attrs: 0x1
[Sat Dec 13 00:01:33 2025] usb 1-1: new full-speed USB device number 2 using uhci_hcd
[Sat Dec 13 00:01:33 2025] Running certificate verification RSA selftest
[Sat Dec 13 00:01:33 2025] Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
[Sat Dec 13 00:01:33 2025] Running certificate verification ECDSA selftest
[Sat Dec 13 00:01:33 2025] Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
[Sat Dec 13 00:01:33 2025] clk: Disabling unused clocks
[Sat Dec 13 00:01:33 2025] Freeing unused decrypted memory: 2028K
[Sat Dec 13 00:01:33 2025] Freeing unused kernel image (initmem) memory: 4192K
[Sat Dec 13 00:01:33 2025] Write protecting the kernel read-only data: 30720k
[Sat Dec 13 00:01:33 2025] Freeing unused kernel image (rodata/data gap) memory: 420K
[Sat Dec 13 00:01:33 2025] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[Sat Dec 13 00:01:33 2025] Run /init as init process
[Sat Dec 13 00:01:33 2025]   with arguments:
[Sat Dec 13 00:01:33 2025]     /init
[Sat Dec 13 00:01:33 2025]   with environment:
[Sat Dec 13 00:01:33 2025]     HOME=/
[Sat Dec 13 00:01:33 2025]     TERM=linux
[Sat Dec 13 00:01:33 2025]     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
[Sat Dec 13 00:01:33 2025] systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
[Sat Dec 13 00:01:33 2025] systemd[1]: Detected virtualization kvm.
[Sat Dec 13 00:01:33 2025] systemd[1]: Detected architecture x86-64.
[Sat Dec 13 00:01:33 2025] systemd[1]: Running in initrd.
[Sat Dec 13 00:01:33 2025] systemd[1]: No hostname configured, using default hostname.
[Sat Dec 13 00:01:33 2025] systemd[1]: Hostname set to .
[Sat Dec 13 00:01:33 2025] systemd[1]: Initializing machine ID from VM UUID.
[Sat Dec 13 00:01:33 2025] usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
[Sat Dec 13 00:01:33 2025] usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
[Sat Dec 13 00:01:33 2025] usb 1-1: Product: QEMU USB Tablet
[Sat Dec 13 00:01:33 2025] usb 1-1: Manufacturer: QEMU
[Sat Dec 13 00:01:33 2025] usb 1-1: SerialNumber: 28754-0000:00:01.2-1
[Sat Dec 13 00:01:33 2025] input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
[Sat Dec 13 00:01:33 2025] hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
[Sat Dec 13 00:01:33 2025] systemd[1]: Queued start job for default target Initrd Default Target.
[Sat Dec 13 00:01:33 2025] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[Sat Dec 13 00:01:33 2025] systemd[1]: Reached target Local Encrypted Volumes.
[Sat Dec 13 00:01:33 2025] systemd[1]: Reached target Initrd /usr File System.
[Sat Dec 13 00:01:33 2025] systemd[1]: Reached target Local File Systems.
[Sat Dec 13 00:01:33 2025] systemd[1]: Reached target Path Units.
[Sat Dec 13 00:01:33 2025] systemd[1]: Reached target Slice Units.
[Sat Dec 13 00:01:33 2025] systemd[1]: Reached target Swaps.
[Sat Dec 13 00:01:33 2025] systemd[1]: Reached target Timer Units.
[Sat Dec 13 00:01:33 2025] systemd[1]: Listening on D-Bus System Message Bus Socket.
[Sat Dec 13 00:01:33 2025] systemd[1]: Listening on Journal Socket (/dev/log).
[Sat Dec 13 00:01:33 2025] systemd[1]: Listening on Journal Socket.
[Sat Dec 13 00:01:33 2025] systemd[1]: Listening on udev Control Socket.
[Sat Dec 13 00:01:33 2025] systemd[1]: Listening on udev Kernel Socket.
[Sat Dec 13 00:01:33 2025] systemd[1]: Reached target Socket Units.
[Sat Dec 13 00:01:33 2025] systemd[1]: Starting Create List of Static Device Nodes...
[Sat Dec 13 00:01:33 2025] systemd[1]: Starting Journal Service...
[Sat Dec 13 00:01:33 2025] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
[Sat Dec 13 00:01:33 2025] systemd[1]: Starting Apply Kernel Variables...
[Sat Dec 13 00:01:33 2025] systemd[1]: Starting Create System Users...
[Sat Dec 13 00:01:33 2025] systemd[1]: Starting Setup Virtual Console...
[Sat Dec 13 00:01:33 2025] systemd[1]: Finished Create List of Static Device Nodes.
[Sat Dec 13 00:01:33 2025] systemd[1]: Finished Apply Kernel Variables.
[Sat Dec 13 00:01:33 2025] systemd[1]: Finished Create System Users.
[Sat Dec 13 00:01:33 2025] systemd[1]: Started Journal Service.
[Sat Dec 13 00:01:34 2025] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[Sat Dec 13 00:01:34 2025] device-mapper: uevent: version 1.0.3
[Sat Dec 13 00:01:34 2025] device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
[Sat Dec 13 00:01:34 2025] RPC: Registered named UNIX socket transport module.
[Sat Dec 13 00:01:34 2025] RPC: Registered udp transport module.
[Sat Dec 13 00:01:34 2025] RPC: Registered tcp transport module.
[Sat Dec 13 00:01:34 2025] RPC: Registered tcp-with-tls transport module.
[Sat Dec 13 00:01:34 2025] RPC: Registered tcp NFSv4.1 backchannel transport module.
[Sat Dec 13 00:01:34 2025] virtio_blk virtio2: 8/0/0 default/read/poll queues
[Sat Dec 13 00:01:34 2025] virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
[Sat Dec 13 00:01:34 2025] vda: vda1
[Sat Dec 13 00:01:34 2025] libata version 3.00 loaded.
[Sat Dec 13 00:01:34 2025] ata_piix 0000:00:01.1: version 2.13
[Sat Dec 13 00:01:34 2025] scsi host0: ata_piix
[Sat Dec 13 00:01:34 2025] scsi host1: ata_piix
[Sat Dec 13 00:01:34 2025] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
[Sat Dec 13 00:01:34 2025] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
[Sat Dec 13 00:01:35 2025] ata1: found unknown device (class 0)
[Sat Dec 13 00:01:35 2025] ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
[Sat Dec 13 00:01:35 2025] scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
[Sat Dec 13 00:01:35 2025] scsi 0:0:0:0: Attached scsi generic sg0 type 5
[Sat Dec 13 00:01:35 2025] sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
[Sat Dec 13 00:01:35 2025] cdrom: Uniform CD-ROM driver Revision: 3.20
[Sat Dec 13 00:01:35 2025] sr 0:0:0:0: Attached scsi CD-ROM sr0
[Sat Dec 13 00:01:35 2025] SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
[Sat Dec 13 00:01:35 2025] XFS (vda1): Mounting V5 Filesystem cbdedf45-ed1d-4952-82a8-33a12c0ba266
[Sat Dec 13 00:01:35 2025] XFS (vda1): Ending clean mount
[Sat Dec 13 00:01:36 2025] systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
[Sat Dec 13 00:01:36 2025] audit: type=1404 audit(1765584096.567:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
[Sat Dec 13 00:01:36 2025] SELinux: policy capability network_peer_controls=1
[Sat Dec 13 00:01:36 2025] SELinux: policy capability open_perms=1
[Sat Dec 13 00:01:36 2025] SELinux: policy capability extended_socket_class=1
[Sat Dec 13 00:01:36 2025] SELinux: policy capability always_check_network=0
[Sat Dec 13 00:01:36 2025] SELinux: policy capability cgroup_seclabel=1
[Sat Dec 13 00:01:36 2025] SELinux: policy capability nnp_nosuid_transition=1
[Sat Dec 13 00:01:36 2025] SELinux: policy capability genfs_seclabel_symlinks=1
[Sat Dec 13 00:01:36 2025] audit: type=1403 audit(1765584096.720:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
[Sat Dec 13 00:01:36 2025] systemd[1]: Successfully loaded SELinux policy in 158.424ms.
[Sat Dec 13 00:01:36 2025] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.547ms.
[Sat Dec 13 00:01:36 2025] systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
[Sat Dec 13 00:01:36 2025] systemd[1]: Detected virtualization kvm.
[Sat Dec 13 00:01:36 2025] systemd[1]: Detected architecture x86-64.
[Sat Dec 13 00:01:36 2025] systemd-rc-local-generator[638]: /etc/rc.d/rc.local is not marked executable, skipping.
[Sat Dec 13 00:01:36 2025] systemd[1]: initrd-switch-root.service: Deactivated successfully.
[Sat Dec 13 00:01:36 2025] systemd[1]: Stopped Switch Root.
[Sat Dec 13 00:01:36 2025] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
[Sat Dec 13 00:01:36 2025] systemd[1]: Created slice Slice /system/getty.
[Sat Dec 13 00:01:36 2025] systemd[1]: Created slice Slice /system/serial-getty.
[Sat Dec 13 00:01:36 2025] systemd[1]: Created slice Slice /system/sshd-keygen.
[Sat Dec 13 00:01:36 2025] systemd[1]: Created slice User and Session Slice.
[Sat Dec 13 00:01:36 2025] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[Sat Dec 13 00:01:36 2025] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[Sat Dec 13 00:01:36 2025] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
[Sat Dec 13 00:01:36 2025] systemd[1]: Reached target Local Encrypted Volumes.
[Sat Dec 13 00:01:36 2025] systemd[1]: Stopped target Switch Root.
[Sat Dec 13 00:01:36 2025] systemd[1]: Stopped target Initrd File Systems.
[Sat Dec 13 00:01:36 2025] systemd[1]: Stopped target Initrd Root File System.
[Sat Dec 13 00:01:36 2025] systemd[1]: Reached target Local Integrity Protected Volumes.
[Sat Dec 13 00:01:36 2025] systemd[1]: Reached target Path Units.
[Sat Dec 13 00:01:36 2025] systemd[1]: Reached target rpc_pipefs.target.
[Sat Dec 13 00:01:36 2025] systemd[1]: Reached target Slice Units.
[Sat Dec 13 00:01:36 2025] systemd[1]: Reached target Swaps.
[Sat Dec 13 00:01:36 2025] systemd[1]: Reached target Local Verity Protected Volumes.
[Sat Dec 13 00:01:36 2025] systemd[1]: Listening on RPCbind Server Activation Socket.
[Sat Dec 13 00:01:36 2025] systemd[1]: Reached target RPC Port Mapper.
[Sat Dec 13 00:01:36 2025] systemd[1]: Listening on Process Core Dump Socket.
[Sat Dec 13 00:01:36 2025] systemd[1]: Listening on initctl Compatibility Named Pipe.
[Sat Dec 13 00:01:36 2025] systemd[1]: Listening on udev Control Socket.
[Sat Dec 13 00:01:36 2025] systemd[1]: Listening on udev Kernel Socket.
[Sat Dec 13 00:01:36 2025] systemd[1]: Mounting Huge Pages File System...
[Sat Dec 13 00:01:36 2025] systemd[1]: Mounting POSIX Message Queue File System...
[Sat Dec 13 00:01:36 2025] systemd[1]: Mounting Kernel Debug File System...
[Sat Dec 13 00:01:36 2025] systemd[1]: Mounting Kernel Trace File System...
[Sat Dec 13 00:01:36 2025] systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
[Sat Dec 13 00:01:36 2025] systemd[1]: Starting Create List of Static Device Nodes...
[Sat Dec 13 00:01:36 2025] systemd[1]: Starting Load Kernel Module configfs...
[Sat Dec 13 00:01:36 2025] systemd[1]: Starting Load Kernel Module drm...
[Sat Dec 13 00:01:36 2025] systemd[1]: Starting Load Kernel Module efi_pstore...
[Sat Dec 13 00:01:36 2025] systemd[1]: Starting Load Kernel Module fuse...
[Sat Dec 13 00:01:36 2025] systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
[Sat Dec 13 00:01:36 2025] systemd[1]: systemd-fsck-root.service: Deactivated successfully.
[Sat Dec 13 00:01:36 2025] systemd[1]: Stopped File System Check on Root Device.
[Sat Dec 13 00:01:36 2025] systemd[1]: Stopped Journal Service.
[Sat Dec 13 00:01:36 2025] systemd[1]: Starting Journal Service...
[Sat Dec 13 00:01:36 2025] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
[Sat Dec 13 00:01:36 2025] systemd[1]: Starting Generate network units from Kernel command line...
[Sat Dec 13 00:01:36 2025] systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
[Sat Dec 13 00:01:36 2025] systemd[1]: Starting Remount Root and Kernel File Systems...
[Sat Dec 13 00:01:36 2025] systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
[Sat Dec 13 00:01:36 2025] systemd[1]: Starting Apply Kernel Variables...
[Sat Dec 13 00:01:36 2025] fuse: init (API version 7.37)
[Sat Dec 13 00:01:36 2025] systemd[1]: Starting Coldplug All udev Devices...
[Sat Dec 13 00:01:36 2025] systemd[1]: Mounted Huge Pages File System.
[Sat Dec 13 00:01:36 2025] xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
[Sat Dec 13 00:01:36 2025] systemd[1]: Mounted POSIX Message Queue File System.
[Sat Dec 13 00:01:37 2025] systemd[1]: Started Journal Service.
[Sat Dec 13 00:01:37 2025] ACPI: bus type drm_connector registered
[Sat Dec 13 00:01:37 2025] systemd-journald[681]: Received client request to flush runtime journal.
[Sat Dec 13 00:01:37 2025] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
[Sat Dec 13 00:01:37 2025] i2c i2c-0: 1/1 memory slots populated (from DMI)
[Sat Dec 13 00:01:37 2025] i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
[Sat Dec 13 00:01:37 2025] input: PC Speaker as /devices/platform/pcspkr/input/input6
[Sat Dec 13 00:01:38 2025] [drm] pci: virtio-vga detected at 0000:00:02.0
[Sat Dec 13 00:01:38 2025] virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
[Sat Dec 13 00:01:38 2025] Console: switching to colour dummy device 80x25
[Sat Dec 13 00:01:38 2025] [drm] features: -virgl +edid -resource_blob -host_visible
[Sat Dec 13 00:01:38 2025] [drm] features: -context_init
[Sat Dec 13 00:01:38 2025] [drm] number of scanouts: 1
[Sat Dec 13 00:01:38 2025] [drm] number of cap sets: 0
[Sat Dec 13 00:01:38 2025] [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
[Sat Dec 13 00:01:38 2025] fbcon: virtio_gpudrmfb (fb0) is primary device
[Sat Dec 13 00:01:38 2025] Console: switching to colour frame buffer device 128x48
[Sat Dec 13 00:01:38 2025] virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
[Sat Dec 13 00:01:38 2025] kvm_amd: TSC scaling supported
[Sat Dec 13 00:01:38 2025] kvm_amd: Nested Virtualization enabled
[Sat Dec 13 00:01:38 2025] kvm_amd: Nested Paging enabled
[Sat Dec 13 00:01:38 2025] kvm_amd: LBR virtualization supported
[Sat Dec 13 00:01:38 2025] Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
[Sat Dec 13 00:01:38 2025] Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
[Sat Dec 13 00:01:38 2025] ISO 9660 Extensions: Microsoft Joliet Level 3
[Sat Dec 13 00:01:38 2025] ISO 9660 Extensions: RRIP_1991A
[Sat Dec 13 00:01:45 2025] block vda: the capability attribute has been deprecated.
[Sat Dec 13 00:08:29 2025] pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
[Sat Dec 13 00:08:29 2025] pci 0000:00:07.0: BAR 0 [io 0x0000-0x003f]
[Sat Dec 13 00:08:29 2025] pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
[Sat Dec 13 00:08:29 2025] pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
[Sat Dec 13 00:08:29 2025] pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
[Sat Dec 13 00:08:29 2025] pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
[Sat Dec 13 00:08:29 2025] pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
[Sat Dec 13 00:08:29 2025] pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
[Sat Dec 13 00:08:29 2025] pci 0000:00:07.0: BAR 0 [io 0x1000-0x103f]: assigned
[Sat Dec 13 00:08:29 2025] virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
[Sat Dec 13 00:14:51 2025] systemd-rc-local-generator[8637]: /etc/rc.d/rc.local is not marked executable, skipping.
[Sat Dec 13 00:15:14 2025] SELinux: Converting 387 SID table entries...
[Sat Dec 13 00:15:14 2025] SELinux: policy capability network_peer_controls=1
[Sat Dec 13 00:15:14 2025] SELinux: policy capability open_perms=1
[Sat Dec 13 00:15:14 2025] SELinux: policy capability extended_socket_class=1
[Sat Dec 13 00:15:14 2025] SELinux: policy capability always_check_network=0
[Sat Dec 13 00:15:14 2025] SELinux: policy capability cgroup_seclabel=1
[Sat Dec 13 00:15:14 2025] SELinux: policy capability nnp_nosuid_transition=1
[Sat Dec 13 00:15:14 2025] SELinux: policy capability genfs_seclabel_symlinks=1
[Sat Dec 13 00:15:23 2025] SELinux: Converting 387 SID table entries...
[Sat Dec 13 00:15:23 2025] SELinux: policy capability network_peer_controls=1
[Sat Dec 13 00:15:23 2025] SELinux: policy capability open_perms=1
[Sat Dec 13 00:15:23 2025] SELinux: policy capability extended_socket_class=1
[Sat Dec 13 00:15:23 2025] SELinux: policy capability always_check_network=0
[Sat Dec 13 00:15:23 2025] SELinux: policy capability cgroup_seclabel=1
[Sat Dec 13 00:15:23 2025] SELinux: policy capability nnp_nosuid_transition=1
[Sat Dec 13 00:15:23 2025] SELinux: policy capability genfs_seclabel_symlinks=1
[Sat Dec 13 00:15:32 2025] SELinux: Converting 387 SID table entries...
[Sat Dec 13 00:15:32 2025] SELinux: policy capability network_peer_controls=1
[Sat Dec 13 00:15:32 2025] SELinux: policy capability open_perms=1
[Sat Dec 13 00:15:32 2025] SELinux: policy capability extended_socket_class=1
[Sat Dec 13 00:15:32 2025] SELinux: policy capability always_check_network=0
[Sat Dec 13 00:15:32 2025] SELinux: policy capability cgroup_seclabel=1
[Sat Dec 13 00:15:32 2025] SELinux: policy capability nnp_nosuid_transition=1
[Sat Dec 13 00:15:32 2025] SELinux: policy capability genfs_seclabel_symlinks=1
[Sat Dec 13 00:15:48 2025] SELinux: Converting 391 SID table entries...
[Sat Dec 13 00:15:49 2025] SELinux: policy capability network_peer_controls=1
[Sat Dec 13 00:15:49 2025] SELinux: policy capability open_perms=1
[Sat Dec 13 00:15:49 2025] SELinux: policy capability extended_socket_class=1
[Sat Dec 13 00:15:49 2025] SELinux: policy capability always_check_network=0
[Sat Dec 13 00:15:49 2025] SELinux: policy capability cgroup_seclabel=1
[Sat Dec 13 00:15:49 2025] SELinux: policy capability nnp_nosuid_transition=1
[Sat Dec 13 00:15:49 2025] SELinux: policy capability genfs_seclabel_symlinks=1
[Sat Dec 13 00:16:12 2025] systemd-rc-local-generator[9701]: /etc/rc.d/rc.local is not marked executable, skipping.
[Sat Dec 13 00:16:17 2025] evm: overlay not supported
home/zuul/zuul-output/logs/selinux-denials.log0000644000000000000000000000000015117130737020606 0ustar rootroot
home/zuul/zuul-output/logs/system-config/0000755000175000017500000000000015117130752017657 5ustar zuulzuul
home/zuul/zuul-output/logs/system-config/libvirt/0000755000175000017500000000000015117130752021332 5ustar zuulzuul
home/zuul/zuul-output/logs/system-config/libvirt/libvirt-admin.conf0000644000175000000000000000070215117130752024710 0ustar zuulroot
#
# This can be used to setup URI aliases for frequently
# used connection URIs. Aliases may contain only the
# characters a-Z, 0-9, _, -.
#
# Following the '=' may be any valid libvirt admin connection
# URI, including arbitrary parameters
#uri_aliases = [
#  "admin=libvirtd:///system",
#]

# This specifies the default location the client tries to connect to if no other
# URI is provided by the application
#uri_default = "libvirtd:///system"
home/zuul/zuul-output/logs/system-config/libvirt/libvirt.conf0000644000175000000000000000104315117130752023621 0ustar zuulroot
#
# This can be used to setup URI aliases for frequently
# used connection URIs. Aliases may contain only the
# characters a-Z, 0-9, _, -.
#
# Following the '=' may be any valid libvirt connection
# URI, including arbitrary parameters
#uri_aliases = [
#  "hail=qemu+ssh://root@hail.cloud.example.com/system",
#  "sleet=qemu+ssh://root@sleet.cloud.example.com/system",
#]
#
# These can be used in cases when no URI is supplied by the application
# (@uri_default also prevents probing of the hypervisor driver).
#
#uri_default = "qemu:///system"
home/zuul/zuul-output/logs/registries.conf0000644000000000000000000000744715117130752020051 0ustar rootroot
# For more information on this configuration file, see containers-registries.conf(5).
#
# NOTE: RISK OF USING UNQUALIFIED IMAGE NAMES
# We recommend always using fully qualified image names including the registry
# server (full dns name), namespace, image name, and tag
# (e.g., registry.redhat.io/ubi8/ubi:latest). Pulling by digest (i.e.,
# quay.io/repository/name@digest) further eliminates the ambiguity of tags.
# When using short names, there is always an inherent risk that the image being
# pulled could be spoofed. For example, a user wants to pull an image named
# `foobar` from a registry and expects it to come from myregistry.com. If
# myregistry.com is not first in the search list, an attacker could place a
# different `foobar` image at a registry earlier in the search list. The user
# would accidentally pull and run the attacker's image and code rather than the
# intended content. We recommend only adding registries which are completely
# trusted (i.e., registries which don't allow unknown or anonymous users to
# create accounts with arbitrary names). This will prevent an image from being
# spoofed, squatted or otherwise made insecure. If it is necessary to use one
# of these registries, it should be added at the end of the list.
#
#
# An array of host[:port] registries to try when pulling an unqualified image, in order.
unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io", "docker.io"]

# [[registry]]
# # The "prefix" field is used to choose the relevant [[registry]] TOML table;
# # (only) the TOML table with the longest match for the input image name
# # (taking into account namespace/repo/tag/digest separators) is used.
# #
# # The prefix can also be of the form: *.example.com for wildcard subdomain
# # matching.
# #
# # If the prefix field is missing, it defaults to be the same as the "location" field.
# prefix = "example.com/foo"
#
# # If true, unencrypted HTTP as well as TLS connections with untrusted
# # certificates are allowed.
# insecure = false
#
# # If true, pulling images with matching names is forbidden.
# blocked = false
#
# # The physical location of the "prefix"-rooted namespace.
# #
# # By default, this is equal to "prefix" (in which case "prefix" can be omitted
# # and the [[registry]] TOML table can only specify "location").
# #
# # Example: Given
# #   prefix = "example.com/foo"
# #   location = "internal-registry-for-example.net/bar"
# # requests for the image example.com/foo/myimage:latest will actually work with the
# # internal-registry-for-example.net/bar/myimage:latest image.
#
# # The location can be empty iff prefix is in a
# # wildcarded format: "*.example.com". In this case, the input reference will
# # be used as-is without any rewrite.
# location = "internal-registry-for-example.net/bar"
#
# # (Possibly-partial) mirrors for the "prefix"-rooted namespace.
# #
# # The mirrors are attempted in the specified order; the first one that can be
# # contacted and contains the image will be used (and if none of the mirrors contains the image,
# # the primary location specified by the "registry.location" field, or using the unmodified
# # user-specified reference, is tried last).
# #
# # Each TOML table in the "mirror" array can contain the following fields, with the same semantics
# # as if specified in the [[registry]] TOML table directly:
# # - location
# # - insecure
# [[registry.mirror]]
# location = "example-mirror-0.local/mirror-for-foo"
# [[registry.mirror]]
# location = "example-mirror-1.local/mirrors/foo"
# insecure = true
# # Given the above, a pull of example.com/foo/image:latest will try:
# # 1. example-mirror-0.local/mirror-for-foo/image:latest
# # 2. example-mirror-1.local/mirrors/foo/image:latest
# # 3. internal-registry-for-example.net/bar/image:latest
# # in order, and use the first one that exists.

short-name-mode = "enforcing"
home/zuul/zuul-output/logs/registries.conf.d/0000755000175000000000000000000015117130752020365 5ustar zuulroot
home/zuul/zuul-output/logs/registries.conf.d/000-shortnames.conf0000644000175000000000000001735515117130752023727 0ustar zuulroot
[aliases]
# almalinux
"almalinux" = "docker.io/library/almalinux"
"almalinux-minimal" = "docker.io/library/almalinux-minimal"
# Amazon Linux
"amazonlinux" = "public.ecr.aws/amazonlinux/amazonlinux"
# Arch Linux
"archlinux" = "docker.io/library/archlinux"
# centos
"centos" = "quay.io/centos/centos"
# containers
"skopeo" = "quay.io/skopeo/stable"
"buildah" = "quay.io/buildah/stable"
"podman" = "quay.io/podman/stable"
"hello" = "quay.io/podman/hello"
"hello-world" = "quay.io/podman/hello"
# docker
"alpine" = "docker.io/library/alpine"
"docker" = "docker.io/library/docker"
"registry" = "docker.io/library/registry"
"swarm" = "docker.io/library/swarm"
# Fedora
"fedora-bootc" = "registry.fedoraproject.org/fedora-bootc"
"fedora-minimal" = "registry.fedoraproject.org/fedora-minimal"
"fedora" = "registry.fedoraproject.org/fedora"
# Gentoo
"gentoo" = "docker.io/gentoo/stage3"
# openSUSE
"opensuse/tumbleweed" = "registry.opensuse.org/opensuse/tumbleweed"
"opensuse/tumbleweed-dnf" = "registry.opensuse.org/opensuse/tumbleweed-dnf"
"opensuse/tumbleweed-microdnf" = "registry.opensuse.org/opensuse/tumbleweed-microdnf"
"opensuse/leap" = "registry.opensuse.org/opensuse/leap"
"opensuse/busybox" = "registry.opensuse.org/opensuse/busybox"
"tumbleweed" = "registry.opensuse.org/opensuse/tumbleweed"
"tumbleweed-dnf" = "registry.opensuse.org/opensuse/tumbleweed-dnf"
"tumbleweed-microdnf" = "registry.opensuse.org/opensuse/tumbleweed-microdnf"
"leap" = "registry.opensuse.org/opensuse/leap"
"leap-dnf" = "registry.opensuse.org/opensuse/leap-dnf"
"leap-microdnf" = "registry.opensuse.org/opensuse/leap-microdnf"
"tw-busybox" = "registry.opensuse.org/opensuse/busybox"
# OTel (Open Telemetry) - opentelemetry.io
"otel/autoinstrumentation-go" = "docker.io/otel/autoinstrumentation-go"
"otel/autoinstrumentation-nodejs" = "docker.io/otel/autoinstrumentation-nodejs"
"otel/autoinstrumentation-python" = "docker.io/otel/autoinstrumentation-python"
"otel/autoinstrumentation-java" = "docker.io/otel/autoinstrumentation-java"
"otel/autoinstrumentation-dotnet" = "docker.io/otel/autoinstrumentation-dotnet"
"otel/opentelemetry-collector" = "docker.io/otel/opentelemetry-collector"
"otel/opentelemetry-collector-contrib" = "docker.io/otel/opentelemetry-collector-contrib"
"otel/opentelemetry-collector-contrib-dev" = "docker.io/otel/opentelemetry-collector-contrib-dev"
"otel/opentelemetry-collector-k8s" = "docker.io/otel/opentelemetry-collector-k8s"
"otel/opentelemetry-operator" = "docker.io/otel/opentelemetry-operator"
"otel/opentelemetry-operator-bundle" = "docker.io/otel/opentelemetry-operator-bundle"
"otel/operator-opamp-bridge" = "docker.io/otel/operator-opamp-bridge"
"otel/semconvgen" = "docker.io/otel/semconvgen"
"otel/weaver" = "docker.io/otel/weaver"
# SUSE
"suse/sle15" = "registry.suse.com/suse/sle15"
"suse/sles12sp5" = "registry.suse.com/suse/sles12sp5"
"suse/sles12sp4" = "registry.suse.com/suse/sles12sp4"
"suse/sles12sp3" = "registry.suse.com/suse/sles12sp3"
"sle15" = "registry.suse.com/suse/sle15"
"sles12sp5" = "registry.suse.com/suse/sles12sp5"
"sles12sp4" = "registry.suse.com/suse/sles12sp4"
"sles12sp3" = "registry.suse.com/suse/sles12sp3"
"bci-base" = "registry.suse.com/bci/bci-base"
"bci/bci-base" = "registry.suse.com/bci/bci-base"
"bci-micro" = "registry.suse.com/bci/bci-micro"
"bci/bci-micro" = "registry.suse.com/bci/bci-micro"
"bci-minimal" = "registry.suse.com/bci/bci-minimal"
"bci/bci-minimal" = "registry.suse.com/bci/bci-minimal"
"bci-busybox" = "registry.suse.com/bci/bci-busybox"
"bci/bci-busybox" = "registry.suse.com/bci/bci-busybox"
# Red Hat Enterprise Linux
"rhel" = "registry.access.redhat.com/rhel"
"rhel6" = "registry.access.redhat.com/rhel6"
"rhel7" = "registry.access.redhat.com/rhel7"
"rhel7.9" = "registry.access.redhat.com/rhel7.9"
"rhel-atomic" = "registry.access.redhat.com/rhel-atomic"
"rhel9-bootc" = "registry.redhat.io/rhel9/rhel-bootc"
"rhel-minimal" = "registry.access.redhat.com/rhel-minimal"
"rhel-init" = "registry.access.redhat.com/rhel-init"
"rhel7-atomic" = "registry.access.redhat.com/rhel7-atomic"
"rhel7-minimal" = "registry.access.redhat.com/rhel7-minimal"
"rhel7-init" = "registry.access.redhat.com/rhel7-init"
"rhel7/rhel" = "registry.access.redhat.com/rhel7/rhel"
"rhel7/rhel-atomic" = "registry.access.redhat.com/rhel7/rhel7/rhel-atomic"
"ubi7/ubi" = "registry.access.redhat.com/ubi7/ubi"
"ubi7/ubi-minimal" = "registry.access.redhat.com/ubi7-minimal"
"ubi7/ubi-init" = "registry.access.redhat.com/ubi7-init"
"ubi7" = "registry.access.redhat.com/ubi7"
"ubi7-init" = "registry.access.redhat.com/ubi7-init"
"ubi7-minimal" = "registry.access.redhat.com/ubi7-minimal"
"rhel8" = "registry.access.redhat.com/ubi8"
"rhel8-init" = "registry.access.redhat.com/ubi8-init"
"rhel8-minimal" = "registry.access.redhat.com/ubi8-minimal"
"rhel8-micro" = "registry.access.redhat.com/ubi8-micro"
"ubi8" = "registry.access.redhat.com/ubi8"
"ubi8-minimal" = "registry.access.redhat.com/ubi8-minimal"
"ubi8-init" = "registry.access.redhat.com/ubi8-init"
"ubi8-micro" = "registry.access.redhat.com/ubi8-micro"
"ubi8/ubi" = "registry.access.redhat.com/ubi8/ubi"
"ubi8/ubi-minimal" = "registry.access.redhat.com/ubi8-minimal"
"ubi8/ubi-init" = "registry.access.redhat.com/ubi8-init"
"ubi8/ubi-micro" = "registry.access.redhat.com/ubi8-micro"
"ubi8/podman" = "registry.access.redhat.com/ubi8/podman"
"ubi8/buildah" = "registry.access.redhat.com/ubi8/buildah"
"ubi8/skopeo" = "registry.access.redhat.com/ubi8/skopeo"
"rhel9" = "registry.access.redhat.com/ubi9"
"rhel9-init" = "registry.access.redhat.com/ubi9-init"
"rhel9-minimal" = "registry.access.redhat.com/ubi9-minimal"
"rhel9-micro" = "registry.access.redhat.com/ubi9-micro"
"ubi9" = "registry.access.redhat.com/ubi9"
"ubi9-minimal" = "registry.access.redhat.com/ubi9-minimal"
"ubi9-init" = "registry.access.redhat.com/ubi9-init"
"ubi9-micro" = "registry.access.redhat.com/ubi9-micro"
"ubi9/ubi" = "registry.access.redhat.com/ubi9/ubi"
"ubi9/ubi-minimal" = "registry.access.redhat.com/ubi9-minimal"
"ubi9/ubi-init" = "registry.access.redhat.com/ubi9-init"
"ubi9/ubi-micro" = "registry.access.redhat.com/ubi9-micro"
"ubi9/podman" = "registry.access.redhat.com/ubi9/podman"
"ubi9/buildah" = "registry.access.redhat.com/ubi9/buildah"
"ubi9/skopeo" = "registry.access.redhat.com/ubi9/skopeo"
# Rocky Linux
"rockylinux" = "quay.io/rockylinux/rockylinux"
# Debian
"debian" = "docker.io/library/debian"
# Kali Linux
"kali-bleeding-edge" = "docker.io/kalilinux/kali-bleeding-edge"
"kali-dev" = "docker.io/kalilinux/kali-dev"
"kali-experimental" = "docker.io/kalilinux/kali-experimental"
"kali-last-release" = "docker.io/kalilinux/kali-last-release"
"kali-rolling" = "docker.io/kalilinux/kali-rolling"
# Ubuntu
"ubuntu" = "docker.io/library/ubuntu"
# Oracle Linux
"oraclelinux" = "container-registry.oracle.com/os/oraclelinux"
# busybox
"busybox" = "docker.io/library/busybox"
# golang
"golang" = "docker.io/library/golang"
# php
"php" = "docker.io/library/php"
# python
"python" = "docker.io/library/python"
# rust
"rust" = "docker.io/library/rust"
# node
"node" = "docker.io/library/node"
# Grafana Labs
"grafana/agent" = "docker.io/grafana/agent"
"grafana/grafana" = "docker.io/grafana/grafana"
"grafana/k6" = "docker.io/grafana/k6"
"grafana/loki" = "docker.io/grafana/loki"
"grafana/mimir" = "docker.io/grafana/mimir"
"grafana/oncall" = "docker.io/grafana/oncall"
"grafana/pyroscope" = "docker.io/grafana/pyroscope"
"grafana/tempo" = "docker.io/grafana/tempo"
# curl
"curl" = "quay.io/curl/curl"
# nginx
"nginx" = "docker.io/library/nginx"
# QUBIP
"qubip/pq-container" = "quay.io/qubip/pq-container"
home/zuul/zuul-output/artifacts/0000755000175000017500000000000015117127061016103 5ustar zuulzuul
home/zuul/zuul-output/docs/0000755000175000017500000000000015117127062015054 5ustar zuulzuul
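The registries.conf comments captured above describe how a qualified pull is resolved: the [[registry]] table with the longest matching prefix is chosen, mirrors are tried in the order listed, and the primary location is tried last. The sketch below is not the containers/image implementation — `resolve_pull_order` and its simplified prefix matching are illustrative assumptions — but it reproduces the 1-2-3 candidate order from the file's own example values:

```python
def resolve_pull_order(image, registries):
    """Return candidate pull locations in the order sketched by
    containers-registries.conf(5): longest matching prefix selects the
    [[registry]] table, mirrors are tried in order, primary location last."""
    # Longest-prefix match picks the relevant [[registry]] table.
    match = None
    for reg in registries:
        prefix = reg["prefix"]
        if image == prefix or image.startswith(prefix + "/"):
            if match is None or len(prefix) > len(match["prefix"]):
                match = reg
    if match is None:
        # No table matches: the reference is used as-is.
        return [image]
    suffix = image[len(match["prefix"]):]  # keeps the leading "/"
    # Mirrors in listed order, then the primary "location" rewrite.
    candidates = [m["location"] + suffix for m in match.get("mirror", [])]
    candidates.append(match["location"] + suffix)
    return candidates


# Values taken from the commented-out example in registries.conf above.
registries = [{
    "prefix": "example.com/foo",
    "location": "internal-registry-for-example.net/bar",
    "mirror": [
        {"location": "example-mirror-0.local/mirror-for-foo"},
        {"location": "example-mirror-1.local/mirrors/foo"},
    ],
}]

for candidate in resolve_pull_order("example.com/foo/image:latest", registries):
    print(candidate)
```

A real pull contacts each candidate in turn and uses the first registry that serves the image; this sketch only computes the ordering.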